NitpickLawyer
6 hours ago
I am old enough to have heard this before.
C makes you a bad programmer. Real men code in assembler.
IDEs make you a bad programmer. Real men code in a text editor.
Intellisense / autocomplete makes you a bad programmer. Real men RTFM.
Round and round we go...
mywittyname
6 hours ago
My opinion is that these are not analogous.
Programming takes practice. And if all of your code is generated via LLM, then you're not getting any practice.
It's the same as saying genAI will make you a bad artist. In the sense that putting hands to medium is what makes you a good artist, that is true. Unless you take deliberate steps to learn, your skills will atrophy.
However, being a good programmer|artist is different from being a successful programmer|artist. GenAI can help you churn out tons of content, and if you can turn that content into income, you'll be successful.
Even before LLMs, successful and capable were orthogonal features for most programmers. We had people who made millions churning out a CRUD website over a few months, and others who can build game engines but are stuck in underpaid contracting roles.
johnfn
6 hours ago
Are you not getting practice working with an LLM? Why would that not also be a skill you can practice?
FloorEgg
5 hours ago
25+ year programmer that's been using an agentic IDE for 9 months chiming in.
Yes, I absolutely am. Yes, it's a skill. Some programmers I've discussed this with made up their minds before they tried it. If the programmer's goal is to produce valuable software that works and is secure and easy to maintain, then they will gravitate to LLM-assisted programming. If their goal is to program software, then they won't like the idea of the LLM doing it for them.
To make the distinction more clear: if the carpenter's goal is to produce chairs, they may be inclined to use power tools; if their goal is to work with wood, then they might not want to use power tools.
So where people fall on this debate mostly depends on their values and priorities (wants).
The thing about wants is that they are almost never changed by logical arguments. Some people just want to write the code themselves. Some people also want other people to write the code themselves. I don't know why, but I know that logical arguments are unlikely to change these people's minds, because our wants are so entangled in our sense of self that they exist outside the context of pure logic, and probably for valid, evolutionarily beneficial reasons.
To be clear, programmers working on very novel, niche use cases where LLMs genuinely aren't useful have a valid case of "it's not helpful to me yet", and these people are distinct from what I'm mostly referring to. If someone is open-minded, tried their best to make it work, and decided it's not good enough yet, that's totally fair and unrelated to my main point.
johnpdoe1234
4 hours ago
I kinda like your analogy but I find it a bit misguided. I'll give another one that fits my experience better.
Consider a math/physics student taking a course. Using an LLM is like having all the solutions to the course's exercises and reading them. If the goal is to finish all the problems quickly, then an LLM is great. If the goal is to properly learn math/physics, then doing the thinking yourself, and using the LLM as a last resort or to double-check your work, is the way to go.
Back to the carpenter: I think there is a lot of value in not using power tools in order to learn more about making chairs and become better at it.
I am using many LLMs for coding every day. I think they are great. I am more productive. I finish features quickly and make progress quickly, and the dopamine release is fantastic. I started playing with agents and I marvel at what they can do. I can also tell that I am learning less and becoming a lot more complacent when working like this.
So I ask myself what the goal should be (for me). Should my goal be producing more raw output, or producing less output while enriching my knowledge and expertise?
FloorEgg
4 hours ago
Ah yes, there is a distinction for students or anyone learning principles.
If the goal is learning programming, then some of that should be done with LLMs and some without. I think we are still figuring out how to use LLMs to optimize the rate of learning, but my guess is that the way they should be used is very different from how an expert should use them to be productive.
Again, it comes back to the want (learning vs doing vs getting it done), so I think my main point stands.
mywittyname
4 hours ago
> To make the distinction more clear: if the carpenter's goal is to produce chairs, they may be inclined to use power tools; if their goal is to work with wood, then they might not want to use power tools.
I'd say it's closer to carpenters using CNC machines.
You can be a "successful" carpenter that sells tons of wood projects built entirely using a CNC and not even know how to hold a chisel. But nobody is going to call you a good woodworker. You're just a successful businessman.
For sure, there are gradients, where people use it for the hard parts and do manually what they can, e.g., cutting CNC templates and using those as router guides on their work. People will be impressed by your builds, but Paul Sellers is still going to be considered more talented.
FloorEgg
4 hours ago
From my thousands of hours working with LLMs since GPT-3, I strongly disagree.
From the media AI-hype perspective, your CNC analogy sounds right. From my perspective, grounded in real experience using these tools, the power tool analogy is far more apt.
If you treat an agentic IDE like a CNC machine, that's how you get problems.
Consider the population of opinions. One other reply to my comment is about how the LLM introduced a security flaw and repeated line after line of the same code, implying it's useless and can't be trusted. Now you're replying that LLMs are so capable and autonomous that they can be trusted with full automation, to the extent of a CNC machine.
My point is that the truth lies somewhere in between.
Maybe in the future your CNC analogy will be valid, but right now, with Windsurf/Cursor and Opus 4.5, we aren't there yet.
Lastly, even within your analogy, setting up and using a CNC machine is a skill. It's an engineering and design skill. So maybe the person doing that would be more of an engineer than a woodworker, but going as far as calling them a business person isn't accurate to me.
throwaway2016a
4 hours ago
> If the programmer's goal is to produce valuable software that works and is secure and easy to maintain, then they will gravitate to LLM-assisted programming.
Just this week alone I had the LLMs:
- Introduce a serious security flaw.
- Decide it was better to duplicate the same 5 lines of code 20 times instead of making a function and calling that.
And that is actually just this week. And to be clear, I am not making that up to prove a point; I use AI day in and day out, and it happens consistently. Which is fine, humans can do that too; the issue is when there is a whole new generation of "programmers" who have absolutely zero clue how to spot those issues when (not if) they come up.
And as AI gets better (which it will), it actually becomes more dangerous, because people start blindly trusting the code it produces.
FloorEgg
4 hours ago
If that's happening, then you're most likely not using the best tools (best model and IDE) for agentic coding and/or not using them right.
How an experienced developer uses LLMs to program is different from how a new developer should use LLMs to learn programming principles.
I don't have a CS degree. I never programmed in assembly. Before LLMs, I could productively pump out functional, secure LAMP-stack and JS web apps after years of practice. Some curmudgeonly CS expert might scrutinize my code for being not optimally efficient or engineered. Maybe I reinvented some algorithm instead of using a standard function or library. Yet my code worked, and the users got what they wanted.
If you're not using the best tools, and you're not using them properly, and they then produce a result you don't like, while thousands of developers are using the tools productively, does that say something about you or about the tools?
Also, if you use an LLM haphazardly and it introduces a security flaw, you as the user are responsible. The LLM is a power tool, not a person.
Whether the inexperienced dev uses an LLM or not doesn't change the fact that they might produce bad code with security flaws.
I'm not arguing that people that don't know how to program can use LLMs to replace competent programmers. I'm arguing that competent programmers can be 3-4x more productive with the current best agentic coding tools.
I have extremely compelling evidence of this, and if you're going to try to debate me with examples of how you're unable to get these results, then all it proves is that you're ideologically opposed to it or not capable.
throwaway2016a
3 hours ago
First, I'm using frontier models with Cursor's agentic mode.
> Also, if you use an LLM haphazardly and it introduces a security flaw, you as the user are responsible. The LLM is a power tool, not a person.
I 100% agree. That was my point. A lot of people (not saying you, I don't know you) are not qualified to take on that level of responsibility, yet they do it anyway and ship it to the user.
And on the human side, that is precisely why procedures like code review have been standard for a while.
But my main objection to the parent post was not that LLMs can't be powerful tools, but that the specific examples used, maintainability and security, are (IMO) possibly the worst examples you could use, since 70k-line un-reviewable pull requests are not maintainable and probably also not secure (how would you know?).
FloorEgg
27 minutes ago
Okay, I'm pretty sure we would heavily agree on a lot of this if we pulled it all apart.
It really boils down to who is using the LLM tool and how they are using it and what they want.
When I prompt the LLM to do something, I scout out what I want it to do, potential security and maintenance considerations, etc. I then prompt it precisely, sometimes with the equivalent of a multi-page essay, sometimes with a list of instructions; the point is I'm not vague. I then review what it did and look for potential issues. I also ask it to review what it did and whether it sees potential issues (sometimes with more specific questioning).
So we are mashing together a few dimensions. My GP comment was pointing out:
- A: competent developer wants software functionality produced that is secure and maintainable
- B: competent developer wants to produce software functionality that is secure and maintainable
The distinction between these is subtle but has a huge impact on senior developer attitudes toward LLMs, from what I've seen. Dev A is more likely to figure out how to get the most out of LLMs; Dev B will resist and use flaws as an excuse to do it themselves. Reminds me a bit of the early AWS days and engineers hung up on self-hosting. Or devs wanting to build everything from scratch instead of using a framework.
What you're pointing out is that if careless or inexperienced developers use LLMs, they will produce unmaintainable and insecure code. Yeah, I agree. They would probably produce insecure and unmaintainable code without LLMs too. Experienced devs using LLMs well can produce secure and maintainable code. So the distinction isn't LLMs; it's who is using them and how.
What just occurred to me, though, and I suspect you will appreciate, is that I'm only working with other very experienced devs. Experienced devs working with junior or careless devs who can now produce unmaintainable and insecure code much faster is a novel problem, and would probably be really frustrating to deal with. Reviewing a 70k-line PR produced by an LLM without thoughtful prompting and oversight sounds awful. I'm not advocating that this is a good thing. Though surely there is some way to manage it, and figuring out how to manage it probably has some huge benefits. I've only been thinking about it for 5 minutes, so I definitely don't have an answer.
One last thought that just occurred to me: the whole narrative of AI replacing junior devs seemed bonkers to me, because there's still so much demand for new software and LLMs don't remotely compare to developers. That said, as an industry, I guess we haven't figured out how to mix LLMs and junior developers in a way that's net constructive? If junior + LLM = 10x more garbage for seniors to review, maybe that's the real reason junior roles are harder to find?
mywittyname
4 hours ago
It's the difference between looking at someone else's work and doing it yourself.
How much can you level up by watching other people do their homework?
johnfn
4 hours ago
Would you say that management is just a dead-end skill then, with no ability to level up your managerial experience whatsoever? Or is there a distinction I am missing?
dogma1138
6 hours ago
Those are objectively different skills tho.
jerf
6 hours ago
The problem I have with a lot of these "oh, I've heard it all before"-type posts is that some of what you heard is true. Yes, IDEs did make for some bad programmers. Yes, scripting languages have made for some bad programmers. Yes, some other shortcuts have made for bad programmers.
They haven't destroyed everyone, but there are definitely sets of people who used the crutches and never got past them. And not just in a "well, they never needed anything more" way; they ended up worse programmers than they should or could have been.
wrs
6 hours ago
AI makes you not be a programmer (at least, that seems to be the goal). So it’s a little different from those.
It’s like a carpenter talking about IKEA saying “I remember when I got an electric sander, it’s the same thing”.
jswelker
5 hours ago
"Paying a guy from the Philippines to write your code and submit it under your name is just another tool no different than using an IDE!"
Surely we agree that some boundary exists where it becomes absurd right? We are just quibbling over where to draw the line. I personally draw it at AI.
AndyKelley
6 hours ago
C doesn't make you dependent on constant Internet connectivity, charge a monthly subscription, or expose you to lawsuits from powerful companies claiming copyright over your work.
IDEs don't make you dependent on constant Internet connectivity, charge a monthly subscription, or expose you to lawsuits from powerful companies claiming copyright over your work.
Intellisense/autocomplete doesn't make you dependent on constant Internet connectivity, charge a monthly subscription, or expose you to lawsuits from powerful companies claiming copyright over your work.
dllthomas
an hour ago
> IDEs don't make you dependent on constant Internet connectivity, charge a monthly subscription,
Sometimes they do! But not in general, yes.
mjr00
5 hours ago
I get what you're saying, but let's be real: 99.99999% of modern software development is done with constant internet connectivity and is effectively impossible without it, whether that's pulling external packages or just looking up the name of an API in the standard library. Yeah, you could grep docs, or have a shelf full of "The C++ Programming Language" reference books like we did in the 90s, but c'mon.
I have some friends in the defense industry who have to develop on machines without public internet access. You know what they all do? Have a second machine set up next to them which does have internet access.
ThrowawayR2
5 hours ago
> "C makes you a bad programmer. Real men code in assembler."
So, who is it that supposedly said that? Not K&R (obviously). Not Niklaus Wirth. Not Stroustrup. Not even Dijkstra (Algol 60) and he loved writing acerbic remarks about how much the average developer sucked. I don't recall Ken Thompson, Fred Brooks (OS/360), Cutler, or any other OS architect having said anything like that either. Who in that era that has any kind of credibility said that?
The "Real Men Don't Use Pascal" essay was a humorous shitpost that didn't reflect any kind of prevailing opinion.
HPsquared
6 hours ago
I'll always remember a lab we had in university where we hand-wrote machine code to do blinkenlights, and used an array of toggle switches to enter each byte into memory by hand.
nunez
6 hours ago
lol, all of the latter points are true though (except for it being only men, though I get how things were back in the day)
utopman
6 hours ago
This.
I am mid career now.
High-level languages like JS or Python have a lot of bad design / suboptimal code in many places, as does some Java code.
Some bad Java code (it just takes a SQL select in a loop) can easily perform a thousand times worse than a clean Python implementation of the same thing. A sketch of what I mean is below.
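To make that concrete, a minimal sketch in plain JDBC (the users table and method names here are hypothetical): the first version issues one query per id, the second fetches the same rows in a single round trip.

    import java.sql.*;
    import java.util.*;

    // N+1 anti-pattern: one SELECT (and one network round trip) per id.
    static Map<Long, String> namesSlow(Connection conn, List<Long> ids) throws SQLException {
        Map<Long, String> out = new HashMap<>();
        for (long id : ids) {
            try (PreparedStatement ps =
                     conn.prepareStatement("SELECT name FROM users WHERE id = ?")) {
                ps.setLong(1, id);
                try (ResultSet rs = ps.executeQuery()) {
                    if (rs.next()) out.put(id, rs.getString(1));
                }
            }
        }
        return out;
    }

    // Same data in one round trip: SELECT ... WHERE id IN (?, ?, ...).
    // (Assumes ids is non-empty; an empty IN () is invalid SQL.)
    static Map<Long, String> namesFast(Connection conn, List<Long> ids) throws SQLException {
        String in = String.join(",", Collections.nCopies(ids.size(), "?"));
        Map<Long, String> out = new HashMap<>();
        try (PreparedStatement ps = conn.prepareStatement(
                "SELECT id, name FROM users WHERE id IN (" + in + ")")) {
            for (int i = 0; i < ids.size(); i++) ps.setLong(i + 1, ids.get(i));
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) out.put(rs.getLong(1), rs.getString(2));
            }
        }
        return out;
    }

Both return the same map; the difference is purely in how many trips to the database they make, which is exactly the kind of thing no language choice will save you from.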
As said above, C was once a high-level programming language, and still is in some places.
I do not code in Python / Go / JS that much these days, but what made me a not-so-bad developer is my understanding of computing mechanisms (why and how to use memory instead of disk, how to arrange code so the CPU can use its cache efficiently...).
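To illustrate the cache point, a minimal sketch: both loops below compute the same sum over the same matrix, but the second one strides across rows on every step, so on a large matrix it typically runs much slower purely because of the memory access pattern.

    // Row-major traversal: walks each row's memory sequentially (cache-friendly).
    static long sumRowMajor(int[][] m) {
        long s = 0;
        for (int i = 0; i < m.length; i++)
            for (int j = 0; j < m[i].length; j++)
                s += m[i][j];
        return s;
    }

    // Column-major traversal: same result, but hops between row arrays
    // scattered on the heap, so each access is far more likely a cache miss.
    // (Assumes a rectangular, non-empty matrix.)
    static long sumColMajor(int[][] m) {
        long s = 0;
        for (int j = 0; j < m[0].length; j++)
            for (int i = 0; i < m.length; i++)
                s += m[i][j];
        return s;
    }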
As said in many posts, code quality, even for vibe-coded stuff, depends more on what was prompted, and on how much effort goes into keeping the PR diff human-readable, if you want maintainable and efficient software at the end of the day.
Yet senior devs often spend more time reviewing code than actually writing it. Vibe coding ultimately feels the same to me at the moment.
I still love to write some code by hand, but I'm starting to feel less and less productive with this approach, while at the same time feeling I haven't really lost my skills to do so.
I think I really feel, and effectively am, more efficient at delivering things with an appropriate quality level for my customers now that I have agentic coding skills under my belt.
saubeidl
6 hours ago
All of this is true, but all of the examples that came before were deterministic, so once you understood the abstraction, you still understood the whole thing.
AI is different.
monkaiju
6 hours ago
Those are all syntactic changes; AI attempts to be semantic. Totally different.
CharlesW
6 hours ago
The examples are semantic shifts. Assembler → C wasn't just a syntax swap (functions are semantic units, types are meaning, compilation is optimization reasoning, etc.). "Rename this symbol safely across a project" is a semantic transformation. And of course, autocomplete is semantic. AI represents a difference in degree, but not kind. Like the examples cited by the parent, AI further moves us from lower-level semantics to higher-level semantics.