Using AI Generated Code Will Make You a Bad Programmer

83 points, posted 7 hours ago
by speckx

173 Comments

NitpickLawyer

6 hours ago

I am old enough to have heard this before.

C makes you a bad programmer. Real men code in assembler.

IDEs make you a bad programmer. Real men code in a text editor.

Intellisense / autocomplete makes you a bad programmer. Real men RTFM.

Round and round we go...

mywittyname

6 hours ago

My opinion is that these are not analogous.

Programming takes practice. And if all of your code is generated via LLM, then you're not getting any practice.

It's the same as saying using genAI will make you a bad artist. In the sense that putting hands to the medium is what makes you a good artist, that is true. Unless you take deliberate steps to learn, your skills will atrophy.

However, being a good programmer|artist is different from being a successful programmer|artist. GenAI can help you churn out tons of content, and if you can turn that content into income, you'll be successful.

Even before LLMs, successful and capable were orthogonal traits for most programmers. We had people who made millions churning out a CRUD website over a few months, and others who can build game engines but are stuck in underpaid contracting roles.

johnfn

6 hours ago

Are you not getting practice working with an LLM? Why would that not also be a skill you can practice?

FloorEgg

5 hours ago

25+ year programmer who's been using an agentic IDE for 9 months, chiming in.

Yes, I absolutely am. Yes, it's a skill. Some programmers I've discussed this with made up their minds before they tried it. If the programmer's goal is to produce valuable software that works and is secure and easy to maintain, then they will gravitate to LLM-assisted programming. If their goal is to program software, then they won't like the idea of the LLM doing it for them.

To make the distinction more clear: if the carpenter's goal is to produce chairs, they may be inclined to use power tools; if their goal is to work with wood, then they might not want to use power tools.

So where people fall on this debate mostly depends on their values and priorities (wants).

The thing about wants is they are almost never changed with logical arguments. Some people just want to write the code themselves. Some people also want other people to write the code themselves. I don't know why, but I know that logical arguments are unlikely to change these people's minds because our wants are so entangled in our sense of self that they exist outside of the context of pure logic, and probably for valid evolutionary beneficial reasons.

To be clear, programmers working on very novel niche use cases where LLMs genuinely aren't useful have a valid case of "it's not helpful to me yet", and these people are distinct from what I'm mostly referring to. If someone is open-minded, tried their best to make it work, and decided it's not good enough yet, that's totally fair and unrelated to my main point.

johnpdoe1234

4 hours ago

I kinda like your analogy but I find it a bit misguided. I'll give another one that fits more my experience.

Consider a math/physics student taking a course. Using an LLM is like having all the solutions to the course's math/physics exercises and reading them. If the goal is to finish all the problems quickly, then an LLM is great. If the goal is to properly learn math/physics, then doing the thinking yourself and using the LLM as a last resort, or to double-check your work, is the way to go.

Back to the carpenter: I think there is a lot of value in not using power tools to learn more about making chairs and become better at it.

I am using many LLMs for coding every day. I think they are great. I am more productive. I finish features quickly and make progress quickly, and the dopamine release is fantastic. I started playing with agents and I marvel at what they can do. I can also tell that I am learning less and becoming a lot more complacent when working like this.

So I ask myself what the goal should be (for me). Should my goal be producing more raw output, or producing less output while enriching my knowledge and expertise?

FloorEgg

4 hours ago

Ah yes there is a distinction for students or someone learning principles.

If the goal is learning programming, then some of that should be done with LLMs and some without. I think we are still figuring out how to use LLMs to optimize the rate of learning, but my guess is that the way they should be used is very different from how an expert should use them to be productive.

Again it comes back to the want though (learning vs doing vs getting done), so I think my main point stands.

mywittyname

4 hours ago

> To make the distinction more clear; if the carpenters goal is to produce chairs they may be inclined to use power tools, if their goal is to work with wood then they might not want to use power tools.

I'd say it's closer to carpenters using CNC machines.

You can be a "successful" carpenter that sells tons of wood projects built entirely using a CNC and not even know how to hold a chisel. But nobody is going to call you a good woodworker. You're just a successful businessman.

For sure, there are gradients, where people use it for the hard parts and manually do what they can, e.g., cutting CNC templates and using those as router guides on their work. People will be impressed by your builds, but Paul Sellers is still going to be considered more talented.

FloorEgg

4 hours ago

From my thousands of hours working with LLMs since GPT-3, I strongly disagree.

From the media AI-hype perspective, your CNC analogy sounds right. From my perspective, grounded in real experience using it, the power tool analogy is far more apt.

If you treat an agentic IDE like a CNC machine, that's how you get problems.

Consider the population of opinions. One other reply to my comment is about how the LLM introduced a security flaw and repeated line after line of the same code, implying it's useless and can't be trusted. Now you're replying that LLMs are so capable and autonomous that they can be trusted with full automation, to the extent of a CNC machine.

My point is that the truth lies somewhere in between.

Maybe in the future your CNC analogy will be valid, but right now, with Windsurf/Cursor and Opus 4.5, we aren't there yet.

Lastly, even with your analogy, setting up and using a CNC machine is a skill. It's an engineering and design skill. So maybe the person doing that would be more of an engineer than a woodworker, but going as far as calling them a businessperson isn't accurate to me.

throwaway2016a

4 hours ago

> If the programmers goal is to produce valuable software that works and is secure and easy to maintain then they will gravitate to LLM assisted programming.

Just this week alone I had the LLMs:

- Introduce a serious security flaw.

- Decide it was better to duplicate the same 5 lines of code 20 times instead of making a function and calling that (roughly the pattern sketched below).
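
To give a toy flavor of that second one (hypothetical names, not the actual code), it's the difference between pasting a validation block once per field and extracting it:

    # Hypothetical sketch: the kind of 5-line block the LLM pasted 20 times,
    # factored into one function instead.
    def validate_field(row: dict, field: str, errors: list) -> None:
        """Append an error if the field is absent or not a string."""
        if row.get(field) is None:
            errors.append(f"{field} is missing")
        elif not isinstance(row[field], str):
            errors.append(f"{field} has wrong type")

    errors: list = []
    row = {"email": "a@b.com", "name": 42}  # made-up record
    for field in ("email", "name", "phone"):
        validate_field(row, field, errors)
    print(errors)  # ['name has wrong type', 'phone is missing']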

And that was just this week. To be clear, I am not making that up to prove a point; I use AI day in and day out, and it happens consistently. Which is fine, humans can do that too. The issue is when a whole new generation of "programmers" has absolutely zero clue how to spot those issues when (not if) they come up.

And as AI gets better (which it will) it actually makes it more dangerous because people start blindly trusting the code it produces.

FloorEgg

4 hours ago

If that's happening then you're most likely not using the best tools (best model and IDE) for agentic coding and/or not using them right.

How an experienced developer uses LLMs to program is different than how a new developer should use LLMs to learn programming principles.

I don't have a CS degree. I never programmed in assembly. Before LLMs I could pump out functional secure LAMP stack and JS web apps productively after years of practice. Some curmudgeon CS expert might scrutinize my code for being not optimally efficient or engineered. Maybe I reinvented some algorithm instead of using a standard function or library. Yet my code worked and the users got what they wanted.

If you're not using the best tools and you're not using them properly and then they produce a result you don't like, while thousands of developers are using the tools productively, does that say something about you or the tools?

Also, if you use an LLM haphazardly and it introduces a security flaw, you as the user are responsible. The LLM is a power tool, not a person.

Whether an inexperienced dev uses an LLM or not doesn't change the fact that they might produce bad code with security flaws.

I'm not arguing that people that don't know how to program can use LLMs to replace competent programmers. I'm arguing that competent programmers can be 3-4x more productive with the current best agentic coding tools.

I have extremely compelling evidence of this, and if you're going to try to debate me with examples of how you're unable to get these results, then all it proves is that you're ideologically opposed to it or not capable.

throwaway2016a

3 hours ago

First, I'm using frontier models with Cursor's agentic mode.

> Also, if you use an LLM haphazardly and it introduces a security flaw, you as the user are responsible. The LLM is a power tool, not a person.

I 100% agree. That was my point. A lot of people (not saying you, I don't know you) are not qualified to take on that level of responsibility yet they do it anyway and ship it to the user.

And on the human side, that is precisely why procedures like code review have been standard for a while.

But my main objection to the parent post was not that LLMs can't be powerful tools; it's that the specific examples used, maintainability and security, are (IMO) possibly the worst examples you can use, since 70k-line un-reviewable pull requests are not maintainable and probably not secure either (how would you know?).

FloorEgg

27 minutes ago

Okay, I'm pretty sure we would heavily agree on a lot of this if we pulled it all apart.

It really boils down to who is using the LLM tool and how they are using it and what they want.

When I prompt the LLM to do something, I scout out what I want it to do, potential security and maintenance considerations, etc. I then prompt it precisely, sometimes with the equivalent of a multi-page essay, sometimes with a list of instructions; the point is I'm not vague. I then review what it did and look for potential issues. I also ask it to review what it did and whether it sees potential issues (sometimes with more specific questioning).

So we are mashing together a few dimensions. My GP comment was pointing out:

- A: competent developer wants software functionality produced that is secure and maintainable

- B: competent developer wants to produce software functionality that is secure and maintainable

The distinction between these is subtle but has a huge impact on senior developer attitudes toward LLMs, from what I've seen. Dev A is more likely to figure out how to get the most out of LLMs; Dev B will resist and use flaws as an excuse to do it themselves. It reminds me a bit of the early AWS days and engineers hung up on self-hosting. Or devs wanting to build everything from scratch instead of using a framework.

What you're pointing out is that if careless or inexperienced developers use LLMs, they will produce unmaintainable and insecure code. Yeah, I agree. They would probably produce insecure and unmaintainable code without LLMs too. Experienced devs using LLMs well can produce secure and maintainable code. So the distinction isn't LLMs; it's who is using them and how.

What just occurred to me, though, and I suspect you will appreciate, is that I'm only working with other very experienced devs. Experienced devs working with junior or careless devs who can now produce unmaintainable and insecure code much faster is a novel problem and would probably be really frustrating to deal with. Reviewing a 70k-line PR produced by an LLM without thoughtful prompting and oversight sounds awful. I'm not advocating that as a good thing. Though surely there is some way to manage it, and figuring out how to manage it probably has some huge benefits. I've only been thinking about it for 5 minutes, so I definitely don't have an answer.

One last thought that just occurred to me: the whole narrative of AI replacing junior devs seemed bonkers to me because there's still so much demand for new software, and LLMs don't remotely compare to developers. That said, as an industry I guess we haven't figured out how to mix LLMs and junior developers in a way that's net constructive? If junior + LLM = 10x more garbage for seniors to review, maybe that's the real reason junior roles are harder to find?

mywittyname

4 hours ago

It's the difference between looking at someone else's work and doing it yourself.

How much can you level up by watching other people do their homework?

johnfn

4 hours ago

Would you say that management is just a dead-end skill then, with no ability to level up your managerial experience whatsoever? Or is there a distinction I am missing?

dogma1138

6 hours ago

Those are objectively different skills tho.

jerf

6 hours ago

The problem I have with a lot of these "oh, I've heard it all before"-type posts is that some of what you heard is true. Yes, IDEs did make for some bad programmers. Yes, scripting languages have made for some bad programmers. Yes, some other shortcuts have made for bad programmers.

They haven't destroyed everyone, but there definitely are sets of people who used the crutches and never got past them. And not just in a "well, they never needed anything more" way; they became worse programmers than they should or could have been.

wrs

6 hours ago

AI makes you not be a programmer (at least, that seems to be the goal). So it’s a little different from those.

It’s like a carpenter talking about IKEA saying “I remember when I got an electric sander, it’s the same thing”.

jswelker

5 hours ago

"Paying a guy from the Philippines to write your code and submit it under your name is just another tool no different than using an IDE!"

Surely we agree that some boundary exists where it becomes absurd right? We are just quibbling over where to draw the line. I personally draw it at AI.

AndyKelley

6 hours ago

C doesn't make you dependent on constant Internet connectivity, charge a monthly subscription, or expose you to lawsuits from powerful companies claiming copyright over your work.

IDEs don't make you dependent on constant Internet connectivity, charge a monthly subscription, or expose you to lawsuits from powerful companies claiming copyright over your work.

Intellisense/autocomplete doesn't make you dependent on constant Internet connectivity, charge a monthly subscription, or expose you to lawsuits from powerful companies claiming copyright over your work.

dllthomas

an hour ago

> IDEs don't make you dependent on constant Internet connectivity, charge a monthly subscription,

Sometimes they do! But not in general, yes.

mjr00

5 hours ago

I get what you're saying but let's be real: 99.99999% of modern software development is done with constant internet connectivity and is effectively impossible without it. Whether that's pulling external packages or just looking up the name of an API in the standard library. Yeah, you could grep docs, or have a shelf full of "The C++ Programming Language Reference" books like we did in the 90s, but c'mon.

I have some friends in the defense industry who have to develop on machines without public internet access. You know what they all do? Have a second machine set up next to them which does have internet access.

ThrowawayR2

5 hours ago

> "C makes you a bad programmer. Real men code in assembler."

So, who is it that supposedly said that? Not K&R (obviously). Not Niklaus Wirth. Not Stroustrup. Not even Dijkstra (Algol 60) and he loved writing acerbic remarks about how much the average developer sucked. I don't recall Ken Thompson, Fred Brooks (OS/360), Cutler, or any other OS architect having said anything like that either. Who in that era that has any kind of credibility said that?

The "Real Men Don't Use Pascal" essay was a humorous shitpost that didn't reflect any kind of prevailing opinion.

HPsquared

6 hours ago

I'll always remember a lab we had in university where we hand-wrote machine code to do blinkenlights, and used an array of toggle switches to enter each byte into memory by hand.

nunez

6 hours ago

lol all of the latter points are true though (except for it being only men, though i get how things were back in the day)

utopman

6 hours ago

This.

I am mid career now.

High-level languages like JS or Python carry a lot of bad design / suboptimal code... as does some Java code in many places.

Some bad Java code (it only takes a SQL select in a loop) can easily perform a thousand times worse than a clean Python implementation of the same thing.

As said above, C was once a high-level programming language, and still is in some places.

I do not code in Python / Go / JS that much these days, but what made me a not-so-bad developer is my understanding of computing mechanisms (why and how to use memory instead of disk, how to arrange code so the CPU can use its cache efficiently...).

As said in many posts, code quality, even for vibe-coded stuff, depends more on what was prompted, and on how much effort goes into keeping the PR diff human-readable, if you want maintainable and efficient software at the end of the day.

Yet senior devs often spend more time reviewing code than actually writing it. Vibe coding ultimately feels the same to me at the moment.

I still love to write some code by hand, but I'm starting to feel less and less productive with that approach, while at the same time feeling I haven't really lost my skills to do so.

I think I really feel, and effectively am, more efficient at delivering things with an appropriate quality level for my customers now that I have agentic coding skills under my belt.

saubeidl

6 hours ago

All of this is true, but all of the examples that came before were deterministic, so once you understood the abstraction, you still understood the whole thing.

AI is different.

monkaiju

6 hours ago

Those are all syntactic changes; AI attempts to be semantic. Totally different.

CharlesW

6 hours ago

The examples are semantic shifts. Assembler → C wasn't just a syntax swap (functions are semantic units, types are meaning, compilation is optimization reasoning, etc.). "Rename this symbol safely across a project" is a semantic transformation. And of course, autocomplete is semantic. AI represents a difference in degree, but not kind. Like the examples cited by the parent, AI further moves us from lower-level semantics to higher-level semantics.

mohsen1

6 hours ago

I'm just enjoying the last few years of this career. Let me have fun!

Joking aside, we have to understand that this is the way software is being created, and this is the tool with which most trivial software (which is what most of us make) will be created.

I feel like the industry is telling me: Adopt or become irrelevant.

jf22

6 hours ago

I already miss the fun heads down days of unraveling complex bugs.

Now I'm just telling AI what to do.

wrs

6 hours ago

Actually kind of worse: adopt and become irrelevant.

xtracto

5 hours ago

Meh, I am also old enough to have experienced what the GP post mentioned, and I remember that when Visual Basic 6 was released, a similar sentiment appeared:

Suddenly, everyone's 13-year-old cousin could implement apps for their uncle's dental office, laboratory, parts-shop billing, tourism office management, etc. Some people also believed that software developers would become irrelevant in a couple of years.

For me as an old programmer, I am having A BLAST using these tools. I have used enough tools (TurboBasic, Rational Rose (model-based development, ha!), NetBeans, Eclipse, VB6, Borland C++ Builder) to be able to identify their limits and work with them.

wrs

5 hours ago

That's great! I am also having a blast, and trying hard to take advantage of the new capability while not turning into the programmer equivalent of the passengers of the BNL Axiom. We aren't the intended audience for this post.

true2octave

7 hours ago

> It's probably fine--unless you care about self-improvement or taking pride in your work.

I’m hired to solve business problems with technology, not to self-improve or get on my high horse because I hand-wrote a silly abstraction layer for the n-th time

darkwater

7 hours ago

And you/we will be replaced by an AI that will solve the business problem (the day they get good enough to actually do that, which may or may not happen, but... who knows?)

sallveburrpi

6 hours ago

I really really hope an AI will do this work and solve all the “business problems” so I can go and be a goat herder

lelanthran

6 hours ago

I'm skeptical of claims like this.

After all, you can go and be a goat herder right now, and yet you are presumably not doing this.

Nothing is stopping you from being a goat herder; the place that is paying you for solving business problems will continue just fine if you leave, after all. Your presence there is not required.

sallveburrpi

5 hours ago

See my replies below; as often happens on HN, you assume too much and construct a made-up person to get mad at:

- first, the place that is paying me to solve problems will NOT be just fine when I suddenly leave

- second, I need UBI or some AI-enabled utopia to be ushered in to live comfortably as a goat herder

- third, I do have a concrete plan to get out, but it will take me a couple of years to realise it

lelanthran

4 hours ago

> - first the place that is paying me to solve problems will NOT be just fine when I suddenly leave

Yes they will!

No one, not even the chief officers or the shareholders of a company, is irreplaceable. Well, unless maybe you're the only tech guy in a 5-person outfit?

They'll replace you just fine.

> - second I need UBI or some AI-enabled utopia to be ushered in to live comfortably as goat herder

Well, that's not really relevant to your assertion, is it?

>>> I really really hope an AI will do this work and solve all the “business problems” so I can go and be a goat herder

After all, UBI as an outcome is not dependent on AI solving all the business problems you currently solve.

IOW, whether UBI comes to pass or not is completely independent of "AI takes our jobs".

sallveburrpi

4 hours ago

The chief officers and/or shareholders are probably the ones who are the most replaceable….

An AI utopia is not needed for UBI, that is true, but UBI will become reality much more easily if "all the jobs" are taken.

Aside from all the snark: I think the fundamental societal problem is that there will always be some shitty jobs that no one wants to do, and there needs to be some system to force some people to do those jobs; call it capitalism, communism, or marriage. There is no way around this basic fact of the human condition.

noman-land

6 hours ago

Go herd goats. You don't need to wait for AI to destroy your livelihood.

defterGoose

6 hours ago

Yeah, but there's nothing like some sweet, sweet justification.

sallveburrpi

6 hours ago

I need that sweet AI-enabled UBI first to do it comfortably

shadowgovt

6 hours ago

Herding goats doesn't solve the interesting technical problem I'm trying to solve.

Point is: if that problem is solvable without me, that's the win condition for everyone. Then I go herd goats (and have this nifty tool that helps me spec out an optimal goat fence while I'm at it).

lelanthran

6 hours ago

> Point is: if that problem is solvable without me, that's the win condition for everyone.

The problem is solvable without you. I don't even need to know what the problem actually is, because the odds of you being one of the handful of people in the world so critical that the world notices their passing are so low that I have a better chance of winning a lottery jackpot than you have of being some critical piece of some solution.

sallveburrpi

4 hours ago

I completely disagree - I think it’s the other way around.

Solving the problem, no matter what problem it is, is extremely dependent on you, and every single human being (or animal, for that matter) is a critical piece of their environment and circumstances.

Tade0

6 hours ago

I was thinking of buying land and planting beetroot, which I would pick by hand, cut into thin slices, and freeze-dry for sale.

I have buy-in from a former co-worker with whom I remained in touch over the years, so there will be at least two of us working the fields.

sallveburrpi

6 hours ago

I unironically have a 5 year plan to get out of tech and into something more “real”. I want to work on something that helps actual humans not these “business problems”

rybosworld

6 hours ago

And the person that hand-writes the code won't be replaced?

darkwater

6 hours ago

Yes, as well.

There are probably two ways to see the future of LLMs / AI: they are either going to have the capabilities to replace all white-collar work, or they are not.

If you think they are going to replace us, then you can either surrender or fight back, and personally I read all these anti-AI posts as fighting back, as helping people realize we might be digging our own grave.

If, OTOH, you see AI as a force-multiplier tool that's never going to completely replace a human developer then yes, probably the smartest thing to do is to learn how to master this new tool, but at the same time keep in mind the side effects it might bring, like atrophy.

rybosworld

6 hours ago

Realistically I think the only way to fight back is unions.

shadowgovt

6 hours ago

My personal goal has been to dig that grave ever since I could hold a shovel.

We've always been in the business of replacing humans in the three-Ds space (dirty, dangerous, dull; and to be clear, data manipulation for its own sake is dull). If we make AI that replaces 90% of what I do at my desk every day... we did it. We realized the dream from the old Tom Swift novels, where he comes up with an idea for an invention and hands it off to his computer to extrapolate, or the ship's computer in Star Trek acting like a perfect engineering and analytical assistant, taking fuzzy asks from humans and turning them into useful output.

rybosworld

6 hours ago

The problem is that this time, we're creating a competing intelligence that in theory could replace all work, AND, that competing intelligence is ultimately owned/controlled by a few dozen very rich guys.

They aren't going to willingly spread the wealth.

avgDev

6 hours ago

I love to code, fun code at least; solving a relatively small, concrete problem with code feels rewarding to me. Writing business code, on the other hand? Not really.

I do, however, love solving business problems. This is what I am hired for. I speak to VPs/managers to improve their day-to-day. I come up with feasible solutions and translate them into code.

If AI could actually code, like really code (not "here is some code, it may or may not work, go read the documentation to figure out why it doesn't"), I would just go and focus on creating affordable software solutions for medium/small businesses.

This is kind of like gardening/farming: before the industrial revolution most crops required a huge workforce; these days, with all the equipment and advancements, a single farmer can do a lot on their own with a small staff. People still garden by hand for pleasure, but without using the new tech they wouldn't be able to compete on a big scale.

I know many fear AI, but it is progress and it will never stop. I do think many devs are intelligent and will be able to evolve in the workplace.

user

6 hours ago

[deleted]

shams93

6 hours ago

I agree. I was always annoyed on projects where these kids thought they were still in school, spinning up incredible levels of over-abstraction that led to some really horrible security problems.

lelanthran

6 hours ago

> I’m hired to solve business problems with technology, not to self-improve or get on my high horse because I hand-wrote a silly abstraction layer for the n-th time

So, this "solve business problems" is some temporary[1] gig for you?[2]

------------------------------

[1] I'm reminded of the anti-union people who are merely temporarily embarrassed millionaires.

[2] Skills atrophy. Maybe you won't need the atrophied skill in the future, but how sure are you that this is the case? The eventual outcome?

wrs

6 hours ago

Are you a consultant? Because otherwise there’s a thing called a “career ladder”, and you are very much being paid to self-improve. And if you don’t, that’s going to feature prominently in your next promotion review.

hudon

6 hours ago

and a teacher is hired to teach, but some self-improve so they may become headmaster

tomjen3

6 hours ago

For me AI is really powerful autocomplete. Like you said, I wrote the abstraction years ago. Writing the abstraction again now is not required.

A time and place may come where the AI are so powerful I’m not needed. That time is not right now.

I have used Rider for years at this point, and it automatically handles most imports. It's not AI, but it's one of those things I just don't need to think about anymore.

saubeidl

6 hours ago

Maybe you become worse at solving business problems with technology once you let that muscle atrophy?

jonas21

6 hours ago

You could make the same argument that "Using Open-Source Code Will Make You a Bad Programmer" -- and in fact, a generation ago, many people did.

ThrowawayR2

6 hours ago

> "...many people did..."

I'm trying to think of any examples of someone who said that "a generation ago" at all. I assume they were some sort of fringe crackpot.

shadowgovt

6 hours ago

I've also heard similar arguments about "Using stackoverflow instead of RTFM makes you a bad programmer."

These things are all tradeoffs. A junior engineer who goes to the manual every time is something I encourage, but if they go exclusively to the manual every time, they are going to be slower and produce code that is more disjointed and harder to maintain than peers who have taken advantage of other people's insights into the things the manuals don't say.

billy99k

6 hours ago

It doesn't make you a bad developer, it just stops novel and innovative ways of doing something, because the cheaper way is to just use what's free.

shadowgovt

3 hours ago

At some point, "novel and innovative" becomes Rube Goldberg, not I.M. Pei.

I think for software engineering the far more common issue is that there's already a best practice the individual engineer just hasn't had a chance to hear about yet, rather than the problem on the desk needing a brand-new mousetrap.

xp84

6 hours ago

Maybe everyone else is using these agentic tools super heavily and it's way different for them, but I just use AI to do all the boring stuff, then I read it and tweak it. It accelerates my process by 2-5x, since I don't have to implement boring and tedious things like reading or writing a CSV file (sketched below), so I can spend all my coding time on the actually important parts, the novel parts.
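
By "boring stuff" I mean glue like this (a made-up example with hypothetical file and column names; the sort of thing an LLM drafts in seconds and I just review):

    # Read a CSV, keep rows where a column matches a value, write them out.
    import csv

    def filter_csv(src: str, dst: str, column: str, value: str) -> None:
        with open(src, newline="") as f:
            reader = csv.DictReader(f)
            rows = [r for r in reader if r[column] == value]
            fieldnames = reader.fieldnames or []
        with open(dst, "w", newline="") as f:
            writer = csv.DictWriter(f, fieldnames=fieldnames)
            writer.writeheader()
            writer.writerows(rows)

    filter_csv("orders.csv", "shipped.csv", "status", "shipped")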

I don’t commit 1,000 lines that i don’t know how it works.

If people just aren't coding anymore and are trusting AI to do everything, I agree: they're going to hit a wall hard once the complexity of their non-architected Frankenstein project reaches a certain level. And they'll be paying for a ton of tokens to spin the AI's wheels trying to fix it.

bloppe

6 hours ago

It has long been understood that programming is more about reading code than writing code. I don't see any issue with having LLMs write code. The real issue arises when you stop bothering to read all the code that the LLM writes.

lelanthran

6 hours ago

> The real issue arises when you stop bothering to read all the code that the LLM writes.

Fluency in reading will disappear if you aren't writing enough. And for the pipeline from junior to senior, if the juniors don't write as much as we wrote when young, they are never going to develop the fluency to read.

bloppe

4 hours ago

I think you have this backward. Someone who never reads might forget how to write. Someone who never writes, but reads all the time, will not forget how to read.

lelanthran

4 hours ago

> I think you have this backward. Someone who never reads might forget how to write. Someone who never writes, but reads all the time, will not forget how to read.

You are saying something completely different to what I am saying; are you really saying that someone who writes all the time will forget how to read?

You understand that the act of writing is in fact partially reading?

user

6 hours ago

[deleted]

antfarm

6 hours ago

I have always found it way easier to write code than to understand code written by someone else. I use Claude for research and architectural discussions, but only allow it to present code snippets, not to change any files. I treat those the same way I treat code from Stack Overflow and manually adapt them to the present coding guidelines and my aesthetics. Not a recipe for 10x, but it gets road blocks out of the way quickly.

QuadrupleA

6 hours ago

As a 25+ year veteran programmer who's been mostly unimpressed with the quality of AI-generated code:

I've still learned from it. Just read each line it generates carefully. Read the API references of unfamiliar functions or language features it uses. You'll learn things.

You'll also see a lot of stupidity, overcomplication, outdated or incorrect API calls, etc.

kevin42

7 hours ago

Is it just me, or does anyone else use AI not just to write code, but to learn? Since I've been using Claude, I've learned a lot about Rust by having it build things for me, then working with that code. I've never been a front-end guy, but I had it write a Chrome plugin for me, then used that code to learn how it works. It's not a black box to me, but I don't need to look up some CSS feature I've never used. I can prompt Claude to write it, then look at it and go, "Huh, that's how it works." Better than researching it myself: I can see an example of exactly how it's done, then learn from that.

I'm doing a lot of new things I never would have done before. Yes, I could have googled APIs and read tutorials, but I learn best by doing, and AI helps me learn a lot faster.

pdntspa

6 hours ago

I second this. It's like having a second brain with domain expertise in pretty much anything I could want to ask questions of. And while factual assertions may still be problematic (hallucinations), I can very quickly run code and see if it does what I want or not. I don't care if it hallucinates if it solves my problem with code that is half decent. Which it does.

fpauser

6 hours ago

> I don't care if it hallucinates if it solves my problem with code that is half decent. Which it does.

Sometimes.

xp84

6 hours ago

A competent developer should be able to read the code, spot any defects in “decency”, and fix them (or indeed, explain as you would to a junior dev how you want it fixed and let AI fix it). And of course they should have tests that should be able to categorically prove that the code does what it is supposed to do.

pdntspa

6 hours ago

Then you don't know how to work with it. Just like with a real programmer, first-pass code is meh. But then you circle back and have it refine.

chankstein38

6 hours ago

Me too! I got into ESP32s and sensors thanks to AI. I wouldn't have had time or energy after stressful work all day but thanks to them I can get firmware written for my projects. Along the way I'm also learning how the firmware has to be written and finding issues with what the AI wrote and correcting them.

If people aren't learning from AI it's their fault. Yeah AI makes stuff up and hallucinates and can be wrong but how is that different than a distracted senior dev? AI is available to me 24/7 to answer my questions in minutes or seconds where half the time when I message people I have to wait 30-60min for a response.

People just need to approach things intelligently and actually learn along the way. You can get to the point where you're thinking more clearly about a problem than the AI writing your code pretty quickly, if you just pay attention and do the research you need to understand what's happening. They're not as factual as a textbook, but they don't need to be to give you the space to ask the right questions, and they'll frequently provide sources (though I'd heavily recommend checking them; sometimes the sources are a joke).

striking

6 hours ago

I do agree this is where AI shines. If you need a quick rehash of something that's been done a zillion times before or a quick integration between two known good components, AI's great.

But the skills you describe are still skills, reading and researching and doing your own fact finding are still important to practice and be good at. Those things only get more important in situations off the beaten path, where AI doesn't always give you trustworthy answers or do trustworthy work.

I'm still going to nurture some of these skills. If I'm trying to learn, I'll stick to using AI only when I'm truly stuck or no longer having fun.

outside2344

6 hours ago

I am using AI to learn EVERYTHING. Spanish, code, everything. Honestly, the largest acceleration I am getting is in research towards design docs (which then get used for implementation).

chankstein38

6 hours ago

I'm curious how the Spanish is going! Have you used any interesting methods, or are you just kind of talking to it and asking it questions about Spanish?

RationPhantoms

6 hours ago

Absolutely. It's a tireless Rubik's cube, one you can rotate endlessly to digest new material. It doesn't sigh heavily or lack the mental bandwidth to answer. Yes, it should not be trusted with high-precision information, but the world can get by quite well on vibes.

shadowgovt

6 hours ago

I have definitely had Claude make recommendations that gave me structural insight into the code that I didn't have on my own, and I integrated that insight.

People who claim "It's not synthesized, it's just other people's work run through a woodchipper" aren't precisely right, but they also aren't precisely wrong... And in this space, having the whole ecosystem of programmers who published code looking over my shoulder as I try to solve problems is a huge boon.

okokwhatever

6 hours ago

This is the smartest answer in this polarized thread

mmoll

6 hours ago

That may be dangerous. The more obscure the topic, the more likely it is that the AI will come up with a working but needlessly convoluted solution.

kevin42

6 hours ago

Compared to what though? I have ended up with needlessly convoluted solutions when learning something the old-fashioned way before. Then over time, as I learn more, I improve my approach.

Not everyone has access to an expert that will guide them to the most efficient way to do something.

With either form of learning though, critical thinking is required.

jf22

6 hours ago

The same way the loom made bad weavers.

Anybody know any weavers making > 100k a year?

ralferoo

6 hours ago

I guess yours might have been intended as a facetious comment, but a quick google for designer weaving turns up, as the first hit for me, a UK company that sells its work for approximately $1500 per square foot.

If the demand for this work is high, maybe the individual workers aren't earning $100k per year, but the owner of the company, who presumably was/is a weaver, might well be earning that much.

What the loom did was make the repeatable mass production of items cheap and convenient. What used to be a very expensive purchase is now available to more people at a significantly cheaper price, so the profits of the companies making them are probably about the same or higher, just on higher volume.

It hasn't entirely removed the market for high end quality weaving, although it probably has reduced the number of people buying high-end bespoke items if they can buy a "good enough" item much cheaper.

But having said that, I don't think weavers were on the inflation-adjusted equivalent of 100k before the loom either. They may have been skilled artisans, but that doesn't mean the majority were paid multiples above an average wage.

The current bubble in programming salaries rests on high salaries being worth paying for a company that can leverage that person to produce software earning the company significantly more than the salary, coupled with the historic demand for good programmers exceeding supply.

I'm sure that even if the bulk of programming jobs disappear because people can produce "good enough" software for their purposes using AI, there will always be a requirement for highly skilled specialists to do what AI can't, or from companies that want a higher confidence that the code is correct/maintainable than AI can provide.

jf22

6 hours ago

I think this comment supports the point I was trying to make...

user

6 hours ago

[deleted]

user

6 hours ago

[deleted]

visarga

6 hours ago

Coding agents are going to get better and be used everywhere, so why train for the artisanal coding style of 2010 when you are closer to 2030? What you need to know is how to break complex projects into small parts, improve testing, organize work, and understand typical agent problems and capabilities. In the future, no employer is going to have the patience for you to code manually.

rbbydotdev

6 hours ago

While I agree with much of the sentiment, I believe we will approach a point where the amount of code, and likely its complexity (due to having been written by AI), will require AI to work with and maintain it.

monkaiju

6 hours ago

But they're worse at navigating nuance and complexity than humans...

user

7 hours ago

[deleted]

jgbuddy

7 hours ago

A bad programmer maybe, but a better / faster developer.

PunchyHamster

6 hours ago

well, faster, till you have to touch old code that now neither you nor AI understands

jf22

6 hours ago

AIs are amazing at understanding even utterly horrific bad code.

I've refactored the sloppiest slop with AI in days, with zero regressions. If I had done it manually, it could have taken months.

monkaiju

6 hours ago

Not in my experience... I am more productive and produce better output than the devs I know who rely on AI tooling.

frizlab

7 hours ago

> I love to write code. I very much do not love to read, review, and generate feedback on other people's code. I understand it's a good skill to develop and can be instrumental in helping to shape less experienced colleagues into more productive collaborators, but I still hate it.

Same. Writing code is easy. Reading code is very very hard.

dahateb

6 hours ago

I find that actually a disturbing assumption. I've learned a lot from reading other people's code, seeing how they were thinking and spotting errors, both the good and the bad. I believe that in order to write good code, it's important to understand the context of the task, which basically requires a lot of code reading, and which is also sometimes quite enjoyable when you have competent authors. Reading code is an essential part of the game. If you cannot do that, you'll just create huge balls of mud, with or without AI usage. Though using AI will speedrun the mud, so yeah, there is an argument for not using it.

frizlab

2 hours ago

To each his own, I guess. I learn by doing.

To be clear, I’m not saying there is nothing interesting in the code of others, obviously. However, reading code is, in my opinion, twice as hard as writing it. Especially understanding the structure is very hard.

eduction

6 hours ago

What an odd thing for them to put in the article. This is an example of AI generated code making someone a better programmer (by improving their ability to read code and give feedback). So it contradicts the title.

They could rename it "Using AI Generated Code Makes Programming Less Fun, for Me", that would be more honest.

The problem for programmers, as a group, is that they tend to dislike the parts of their job that are hardest to replace with AI and love the stuff that is easiest for machines to copy. It turns out meetings, customer support, documentation, tests, and QA are core parts of being a good engineer.

frizlab

2 hours ago

I don’t know about other people, but I tend to love architecting and code. Coding time, after architecting, is the “reward” for the brain. Just write what you’ve painstakingly engineered, let your brain rest.

This is how I work. Honestly the writing time (the one I’m promised I'll gain on by using AI), is something like 10% of my coding time. And it’s the only time I’m “resting” so yeah. I don’t want to get rid of it. Nor do I need it. And I especially do not want to check on the “intern” to verify it did what I imagined. Nor do I want to spend time explaining what I imagined. I just do it.

outside2344

6 hours ago

Anyone want to wade in and claim CodeSense made us worse developers too?

ramesh31

6 hours ago

>It's probably fine--unless you care about self-improvement or taking pride in your work.

I did, for a very long time. Then I realized that it's just work, and I'd like to spend my life minimizing the amount of that I do and maximizing the things I do want to do. Code gen has completely changed the equation for workaday folks. Maybe that will make us obsolete, and fall out of practice. But I tend to think the best software engineers are the laziest ones who don't try to be clever. Maybe not the best programmers per se, but I know whose codebase I'd rather inherit.

billy99k

6 hours ago

This is the future of code.

I know plenty of 50-something developers out of work because they stuck to their old ways and the tech world left them behind.

d--b

6 hours ago

FWIW, AI writes React code much better than I ever could (or would want to know)

alunchbox

6 hours ago

I get your points here; I've had a similar discussion with my VP of Engineering. His argument is that I'm not hired to write `if` statements, I'm hired to solve problems. If AI can solve them faster, that's what he cares about at the end of the day.

However, I agree there's a different category here under the idea of 'craft'. I don't have a good way to express this. It's not that I'm writing these 'if' statements in a particular way; it's that the whole system is structured so that I understand every single line, and the code is an expression of my clarity about the system.

I believe there's a split between these two, and both are focused on different problems. Again, I don't want to label them, but if I *had to*, I would say one side is business-focused. Here's the thing, though: your end customers don't give a fuck if it's built with AI or crafted by hand.

The other side is the craftsmanship, and I don't know how to express this to make sense.

I'm looking for a good way to express this - feeling? Reality? Practice?

IDK, but I do understand your side of it. However, I don't think many companies will give a shit.

If they can go to market in 2 weeks vs. 2 months, you know what they'll choose.

kerkeslager

6 hours ago

I think, for those of us who have been in this industry for 20 years, AI isn't going to magically make me lose everything I learned.

However, for those in the first few years of their career, I'm definitely seeing the problem where junior devs are reaching for AI on everything, and aren't developing any skills that would allow them to do anything more than the AI can do or catch any of the mistakes that AI makes. I don't see them on a path that leads them from where they are to where I am.

A lot of my generation of developers is moving into management, switching fields, or simply retiring in their 40s. In theory there should be some of us left who can do what AI can't for another 20 years until we reach actual retirement age, but programming isn't a field that retains its older developers well. So this problem is going to catch up with us quickly.

Then again, I don't feel like I ever really lived up to any of the programmers I looked up to from the 80s and 90s, and I can't really point to many modern programmers I look up to in the same way. Moxie and Rob Nystrom, maybe? And the field hasn't collapsed, so maybe the next generation will figure out how to make it work.

macinjosh

6 hours ago

Wow, folks really aren't getting that your perfectly idiomatic, well-formatted, agonized-over code isn't needed anymore.

People care if their software works. They don’t care how beautiful the code is.

AI can churn out 25 drafts faster than 99% of devs can get their boilerplate setup for the first time.

The new skill is fitting all that output into deployable code, which if you are experienced in shipping software is not hard to get the model to do.

ErroneousBosh

6 hours ago

Yeah. Come and talk to me when AI can actually write code, though.

fellowniusmonk

6 hours ago

So this author loves the easy part (writing code), hates the hard part (reading and reviewing), and has so little self-awareness that he's going to lecture people on skill atrophy?

If you want to be an artist, be an artist; that's fine. Just don't confuse artistry with engineering.

I write art code for myself, I engineer code professionally.

The author wraps with a false dichotomy that uses emotionally loaded language at the end: "You Believe We have Entered a New Post-Work Era, and Trust the Corporations to Shepherd Us Into It". I mean, what? Why can't I think it's quickly becoming a new era _and_ not trust corporations? Why does the author take that idea off the table? Is this logic or rhetoric? Who is this author trying to convince?

SWE life has always had smatterings of weird gatekeeping, self-identities wrapped up in external tooling or paradigms, fragile egos, general misanthropy, post-hoc rationalization, etc., but... man, watching the progression of the crash-outs these last few years has been wild.

shadowgovt

6 hours ago

This is a key insight.

In my day job, I use best practices. If I'm changing a SQL database, I write database migrations.

In my hobby coding? I will never write a database migration. You couldn't force me to at gunpoint. I just hate them, aesthetically. I will come up with the most elaborate and fragile solutions to avoid writing them. It's part of the fun.

user

6 hours ago

[deleted]

monkaiju

6 hours ago

It's honestly a phenomenal time to be a developer who doesn't use AI tooling. It's easier now than ever to differentiate yourself from increasingly knowledge-less devs who can only recite buzzwords but can't actually create, maintain, and improve remotely complex systems.

crimsoneer

7 hours ago

I think this is a slightly silly take.

Yes, taking the bus to work will make me a worse runner than jogging there. Sometimes, I just want to get to a place.

Secondly, I'm not convinced the best way to learn to be a good programmer is just to do a whole project from 0 to 100. Intentional practice is a thing.

PunchyHamster

6 hours ago

The AI bus will sometimes just decide to take you to a different city on a whim, though.

29ebJCyy

7 hours ago

Having someone else write the code is about as far from intentional practice as can be.

I do think the “becoming dependent on your replacement” point is somewhat weak. Once AI is as good as the best human at programming (which I think could still be many years away), the conversation is moot.

mrkeen

7 hours ago

Maybe it's more like being a taxi driver using a self-driving car.

darkwater

6 hours ago

Yep, this is the only analogy that makes sense. And if, as in the taxi situation, you own the taxi license, then you win, because you keep making money but no longer have to drive. But if, OTOH, you are just driving for a salary: bad news, you need to find another job now. Maybe if you are a very good driver, good-looking, and with good manners, some rich guy will hire you as his personal driver, but otherwise...

takira

7 hours ago

Agreed, mostly, especially in terms of efficiency. I have, however, recently been seeing more people with a built-in dependency on their IDEs to solve their problems.

OptionOfT

7 hours ago

No, it's more akin to running around the neighborhood for 3 miles vs. driving to the gym and running on a treadmill there for 3 miles.

oofbey

7 hours ago

Using a compiler will also make you much worse at writing assembly code. Doesn’t bother me at all. Haven’t written any assembly since the 20th century.

threethirtytwo

6 hours ago

Exactly 2 years ago, I remember people calling AI a stochastic parrot with no actual intellectual capability, and people on HN weren't remotely worried that AI would take over their jobs.

I mean, in 2 years the entire mentality shifted. Most people on HN were just completely and utterly wrong (also quite embarrassing if you read how self-assured these people were; this was like 70 percent of HN at the time).

First AI is clearly not a stochastic parrot and second it hasn’t taken our jobs yet but we can all see that potential up ahead.

Now we get articles like this saying your skills will atrophy with AI because the entire industry is using it now.

I think it's clear: everyone's skills will atrophy. This is the future. I fully expect that in the coming decades the generation after zoomers will never have coded without the assistance of AI, and they will have an even harder time finding jobs in software.

Also: because the change happened so fast, you see tons of pockets of people who aren't caught up yet, people who don't realize that the above is the overarching reality. You'll know you're one of them if AI hasn't basically taken over your workplace and you and your coworkers aren't going all in on Claude or Codex. Give it another 2 years and everyone will flip here too.

rybosworld

6 hours ago

About a year ago, another commenter said this in response to the question "Ask HN: SWEs how do you future-proof your career in light of LLMs?":

> "I’m a senior and LLM’s never provide code that pass my sniff test, and it remains a waste of time."

Even a year ago that seemed like a ridiculous thing to say. LLMs have made one thing very clear to me: a massive percentage of developers derive their sense of self-worth from how smart coding makes them feel.

threethirtytwo

6 hours ago

Yes. If one thing is universal among people, it's that they can't fully accept reality at face value when that reality violates their identity.

What has to happen first is that people need to rebuild their identity before they can accept what is happening, and that rebuilding process will take longer than the rate at which AI is outrunning all of us.

What is my role in tech if for the past 20 years I was a code ninja but now AI can do better than me? I can become a delegator or manager of AI, a prompt wizard, or take some leadership role... but even that is a target for replacement by AI.

jmathai

6 hours ago

AI doesn't need or care about "high quality" code in the same ways we define it. It needs to understand the system so that it can evolve it to meet evolving requirements. It's not bound by tech debt in the same way humans are.

That being said, what will be critical is understanding business needs and being able to articulate them in a manner that computers (not humans) can translate into software.

tomku

6 hours ago

Two years ago there were also hundreds of people constantly panic-posting here about how our jobs would be gone in a month, that learning anything about programming was now a waste of time and the entire profession was already dead, with all other knowledge work guaranteed to follow. People were posting about how they were considering giving up on CS degrees because AI would make them pointless. The people who used language like "stochastic parrots" were regularly mocked by AI enthusiasts, and the AI enthusiasts were then mocked in return for their absurd claims about fast take-off and imminent AGI. It was a cesspool of bad takes coming from basically every angle, strengthening in certainty as they bounced off each other's idiocy.

Your memory of the discourse of that era has apparently been filtered by your brain in order to support the point you want to make. Nobody who thoughtlessly adopted an extreme position at a hinge point where the future was genuinely uncertain came out of that looking particularly good.

threethirtytwo

3 hours ago

Bro. You’re gonna have a hard time finding people panic posting about how they’re going to lose their jobs in a month. Literally find me one. Then show me that the majority of people posting were panicking.

That is literally not what happened. You’re hallucinating. The majority of people on HN were so confident in their coding abilities that they weren’t worried at all. Just a cursory glance at the conversations back then and that is what you will see OVERALL.

data-ottawa

6 hours ago

How is AI not a stochastic parrot? That’s exactly what it is. That never precluded it from being useful.

throwway120385

6 hours ago

Yeah -- stochastic just implies a probabilistic method. It's just that when you include enough parameters, your probabilities start to match the actual space of acceptable results really, really well. In other words, we started to throw memory at the problem and the results got better. But it doesn't change the fundamentals of the approach.
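
For anyone who wants the "stochastic" part made concrete, here is a minimal sketch of temperature-based sampling over next-token scores. Everything in it is invented for illustration (the token strings and scores are toy values, and a real LLM scores every entry in a large vocabulary), but the sampling step looks essentially like this:

    import math
    import random

    def sample_next_token(logits, temperature=1.0):
        # `logits` is a hypothetical dict of candidate tokens -> raw scores.
        # Softmax with temperature: low temperature sharpens the distribution
        # toward the top-scoring token, high temperature flattens it.
        scaled = {tok: score / temperature for tok, score in logits.items()}
        max_score = max(scaled.values())  # subtract the max for numerical stability
        weights = {tok: math.exp(s - max_score) for tok, s in scaled.items()}
        total = sum(weights.values())
        # Draw one token with probability proportional to its weight;
        # this single random draw is the "stochastic" part.
        return random.choices(
            population=list(weights.keys()),
            weights=[w / total for w in weights.values()],
        )[0]

    # Toy scores for the word that follows "The parrot said ..."
    print(sample_next_token({"hello": 2.1, "goodbye": 0.3, "squawk": 1.7}))

More parameters don't change this loop; they only make the scores feeding it better.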

RationPhantoms

6 hours ago

In my experience, it's not that the term itself is incorrect; it's more that people use it as a bludgeon to end conversations about the technology. What should happen instead is to invite nuance about how the technology can be utilized and what its pitfalls are.

threethirtytwo

5 hours ago

Colloquially, it just means there’s no thinking or logic going on. LLMs are just pattern matching an answer.

From what we do know about LLMs, it is not trivial pattern matching: by the very definition of machine learning, the output it formulates is original information, not copied from the training data.

pegasus

6 hours ago

And a parrot (or human) is not stochastic? The truth is we don't actually know. So the usually included "just" is unjustified.

mmoll

6 hours ago

Exactly. After all, how can WE confidently claim that we’re more than stochastic parrots?

bigstrat2003

6 hours ago

> First AI is clearly not a stochastic parrot

No, it very clearly is. Even today, it is obvious that it has zero understanding of anything and is just parroting training data arranged in different ways.

threethirtytwo

5 hours ago

No. Many of the answers it produces can only be attributed to intelligence. Not all, but many can be. We can prove that these answers are not parroted.

As for “understanding”, we can only infer it from input and output. We can’t actually know whether it “understands”, because we don’t actually know how these things work, and on top of that we don’t have a formal definition of what “understanding” is.

bena

6 hours ago

So what you're saying is that two years ago, people were saying that AI won't take our jobs. And that it hasn't taken our jobs.

Fascinating.

threethirtytwo

6 hours ago

It will, bro.

It also has already taken junior jobs. The market is hard for them.

mjr00

6 hours ago

> It also has already taken junior jobs.

Correction: it has been a convenient excuse for large tech companies to cut junior jobs after ridiculous hiring sprees during COVID/ZIRP.

threethirtytwo

6 hours ago

That’s part of it. You’d be lying to yourself if you think AI didn’t take junior jobs as well.

dragonwriter

6 hours ago

> It also has already taken junior jobs.

Well, it's taken the blame for the job cutting caused by the broad growth slowdown since COVID fiscal and monetary stimulus was stopped and replaced with monetary tightening, and, most recently, by the additional hammers of the Trump tariff and immigration policies. Lots of people want to obscure, deny, and distract from the general economic malaise, and because many of the companies involved (and even more of their big investors) are in incestuous investment relationships with AI companies, "blaming" AI for the cuts is also a form of self-serving promotion.

mjr00

6 hours ago

But AI is still a stochastic parrot with no actual intellectual capability... who actually believes otherwise? I figured most people had played with local models enough by now to understand that it's just math underneath. It's extremely useful, but laughably far from intelligence, as anyone who has attempted to use Claude et al for anything nontrivial knows.

threethirtytwo

6 hours ago

“It’s just math underneath”

This quote is so telling. I’m going to be straight with you, and this is my opinion, so you’re free to disagree.

From my POV you are out of touch with the ground-truth reality of AI, and that’s OK because it has all changed so fast. Everything in the universe is math-based, and in theory even your brain could be fully modelled by mathematics… it’s a pointless quote.

The ground-truth reality is that nobody, and I mean nobody, understands how LLMs work. This isn’t me making shit up: if you know transformers, if you know the industry, and if you listen to the people behind the technology who make these things… they all say we don’t know how AI works.

But we do know some things. We know it’s not a stochastic parrot because, in addition to the failures, we’ve seen plenty of successes on extremely complicated problems that are too nontrivial for anything other than an actual intelligence to solve.

In the coming years reality will change so much that your opinion will flip. You might be so stubborn as to continue calling it a stochastic parrot but by then it will just be lip service. Your current reaction is normal given the paradigm shift happened so fast and so recently.

mjr00

6 hours ago

> The ground truth reality is that nobody and I mean nobody understands how LLMs work.

This is a really insane and untrue quote. I would, ironically, ask an LLM to explain how LLMs work. It's really not as complicated as it seems.

rybosworld

6 hours ago

It's not an insane thing to say.

You can boil LLMs down to "next token predictor". But that's like boiling down the human brain to "synapses firing".

The point OP is making, I think, is that we don't understand how "next token prediction" leads to such emergent complexity.
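
To make the mechanism concrete, here is a deliberately tiny caricature of the generation loop, with a hand-written bigram table standing in for a trained network. The table and prompt are invented for illustration, and nothing here explains the emergent behavior; it only shows how simple the outer loop that "next token predictor" refers to really is:

    import random

    # Invented toy "model": each token maps to continuations seen in some
    # imaginary training text. A real transformer replaces this lookup with
    # billions of learned parameters, but the outer loop is the same.
    BIGRAMS = {
        "the": ["cat", "dog"],
        "cat": ["sat", "ran"],
        "dog": ["sat"],
        "sat": ["down"],
    }

    def generate(prompt, max_tokens=5):
        tokens = prompt.split()
        for _ in range(max_tokens):
            candidates = BIGRAMS.get(tokens[-1])
            if not candidates:  # no known continuation, so stop
                break
            # Predict one next token, append it, and feed the longer
            # sequence back in; that loop is the whole "prediction" story.
            tokens.append(random.choice(candidates))
        return " ".join(tokens)

    print(generate("the cat"))  # e.g. "the cat sat down"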

mjr00

6 hours ago

The only thing we don't fully understand is how the ELIZA effect[0] has been known for 60 years yet people keep falling for it.

[0] https://en.wikipedia.org/wiki/ELIZA_effect

rybosworld

6 hours ago

> The only thing we don't fully understand is

It seems clear you don't want to have a good faith discussion.

It's you claiming that we understand how LLMs work, while the researchers who built them say that we ultimately don't.

threethirtytwo

6 hours ago

https://futurism.com/anthropic-ceo-admits-ai-ignorance

There’s tons more where that came from. Like I said, lots of people are out of touch because the landscape is changing so fast.

What is baffling to me is that not only are you unaware of what I’m saying, but you also think what I’m saying is batshit insane, despite the fact that the people at the center of it all, who are creating these things, SAY the same thing. Maybe it’s just terminology: understanding how to build an LLM is not the same as understanding why it works or how it works.

Either way, I can literally provide tons and tons more evidence for this if you’re still not getting it: we do not understand how LLMs work.

Also, you can prompt an LLM about whether or not we understand LLMs; it should tell you the same thing I’m saying, along with explaining transformers to you.

mjr00

6 hours ago

That's a CEO of an AI company saying his product is really superintelligent and dangerous and nobody knows how it works and if you don't invest you're going to be left behind. That's a marketing piece, if you weren't aware.

Just because the restaurant says "World's Best Burgers" on its logo doesn't make it true.

threethirtytwo

6 hours ago

Didn’t I say I have tons of evidence?

Here’s another: https://youtube.com/shorts/zKM-msksXq0?si=bVethH1vAneCq28v

Geoffrey Hinton, the “father of AI”, who quit his job at Google to warn people about AI. What’s his motivation? Altruism.

Man, it’s not even about people saying things. If you knew how transformers and LLMs work, you would know that even for the most basic model we do not understand how they work.

mjr00

6 hours ago

I mean, at a minimum, I understand how they work, even if you don't. So the claim that "nobody, and I mean nobody, understands how LLMs work" is verifiably false.

threethirtytwo

5 hours ago

Did you not look at the evidence I posted? It’s not about you or me, it’s about humanity. I have two on-the-ground people who are central to AI saying humanity doesn’t understand AI.

If you say you understand LLMs, then my claim is that you are lying. Nobody understands these things, and the people core to building them are in absolute agreement with me.

I build LLMs for a living, btw. So it’s not just other experts saying these things; I know what I’m talking about on a fundamental level.

fingerlocks

6 hours ago

Try to use an LLM to solve a novel problem, or one in a domain that can’t easily be googled.

It will just spew over-confident, sycophantic vomit. There is no attempt to reason. It’s all worthless nonsense.

It’s a fancy regurgitation machine that will go completely off the rails when it steps outside of its training area. That’s it.

threethirtytwo

6 hours ago

I’ve seen it solve a complex, domain-specific problem and build in 10 minutes a base of code that took a human a year to produce. And it did it better.

I’ve also seen it fuck up in the same way you describe. So do I weigh and balance these two pieces of contrasting evidence to form a logical conclusion? Or do I pick whichever piece of evidence is convenient to my worldview? What should I do? Actually, why don’t you tell me what you ended up doing?

rybosworld

6 hours ago

Why does it even matter if it is a stochastic parrot? And who's to say that humans aren't also?

Imagine the Empire State Building had just been completed, and a man was yelling at the construction workers: "PFFT, that's just a bunch of steel and bricks."

pegasus

6 hours ago

Are you serious? Sam Altman and a legion of Silicon Valley movers and shakers believe otherwise. How do you think they gather the billions to build those data centers? Are they right? Are you right? We don't really know, do we...

bigstrat2003

6 hours ago

Sam Altman is the modern day PT Barnum. He doesn't believe a damn thing except "make more money for Sam Altman", and he's real good at convincing people to go along with his schemes. His actions have zero evidential value for whether or not AI is intelligent, or even whether it's useful.

pegasus

4 hours ago

Maybe not, but I was responding to "nobody believes", not to whether AI is intelligent or not (which might just be semantics anyway). Plenty believe, especially the insiders working on the tech, who know it much better than us. Take Ilya Sutskever, of "do you feel the AGI" fame. Labelling them all as cynical manipulators is delusional. Now, they might be delusional as well, at least to some degree (my bet is that they are), but there are plenty of true believers out there and here on HN. I've debated them in the past. There are cogent arguments on either side.

xgulfie

6 hours ago

"They convinced the investors so they must be right"

mjr00

6 hours ago

> Are you serious? Sam Altman and a legion of Silicon Valley movers and shakers believe otherwise. How do you think they gather the billions to build those data centers? Are they right? Are you right? We don't really know, do we...

The money is never wrong! That's why the $100 billion invested in blockchain companies from 2020 to 2023 worked out so well. Or why Mark Zuckerberg's $50 billion investment in the Metaverse resulted in a world-changing paradigm shift.

bena

6 hours ago

It's not that the money can predict what is correct, it's that it can tell us where people's values lie.

Those people who invested cash in blockchain believed that they could develop something worthwhile on the blockchain.

Zuckerberg believed the Metaverse could change things. It's why he hired all of those people to work on it.

However, what you have here are people claiming LLMs are going to be writing 90% of code in the next 18 months, then turning around and hiring a bunch of people to write code.

There's another article posted here, "Believe the Checkbook" or something like that. And they point out that Anthropic had no reason to purchase Bun except to get the people working on it. And if you believe we're about to turn a corner on vibe coding, you don't do that.

threethirtytwo

3 hours ago

> However, what you have here are people claiming LLMs are going to be writing 90% of code in the next 18 months, then turning around and hiring a bunch of people to write code.

Very few people say this. But it’s realistic to say that, at the least, within the next decade our jobs are going out the window.

bena

an hour ago

The CEO of Nvidia is saying this.

So yeah, he's just "one guy", but in terms of "one guys", he's a notable one.

threethirtytwo

5 hours ago

Someone also believed the internet would take over the world. They were right.

So we could be right or we could be wrong. What we do know is that a lot of what people were saying or “believed” about LLMs two years ago is now categorically wrong.

bena

4 hours ago

Someone also believed the moon was made of green cheese. They were wrong.

And some of what they were wrong about was when and how it would change things.

And my post is not about who is correct. It's about discerning what people truly believe despite what they might tell you up front.

People invested money into the internet. They hired people to develop it. That told you they believed it was useful to them.