faizshah
9 hours ago
The copy-paste programmer will always be worse than the programmer who builds a mental model of the system.
LLMs are just a faster and more wrong version of the copy-paste Stack Overflow workflow, except now you don’t even need to ask the right question to find the answer.
You have to teach students and new engineers never to commit a piece of code they don’t understand. If you stop at “I don’t know why this works,” you will never be able to get out of the famous multi-hour debug loop you get into with LLMs, or the similar multi-day build-debugging loop that everyone has been through.
The real harm LLMs do to learning is that you don’t need to ask the right question to find your answer. This is good if you already know the subject, but if you don’t, you’re not getting that reinforcement in your short-term memory, and you will find that things learned through LLMs are not retained as long as things you worked out more yourself.
nyrikki
8 hours ago
It is a bit more complicated: it can be harmful for experts too, and the more reliable it gets, the more problematic it becomes.
Humans suffer from automation bias and other cognitive biases.
Anything that causes disengagement from a process can be challenging, especially for long-term maintainability and architectural erosion, which is mostly what I actively watch for to avoid complacency with these tools.
But it takes active effort to avoid for all humans.
IMHO, writing the actual code has always been less of an issue than focusing on domain needs, details, and maintainability.
As distrusting automation is unfortunately one of the best methods of fighting automation bias, I try to balance encouraging junior engineers to use tools that boost productivity with making sure they still maintain ownership of the delivered product.
If you use the red-green refactor method, avoiding generative tools for the test and refactor steps seems to work.
But selling TDD in general can be challenging, especially given Holmström's theorem and the tendency of people to write implementation tests instead of focusing on domain needs.
It is a bit of a paradox that the better the tools become, the higher the risk is, but I would encourage people to try the above. Just don't make the mistake of reusing the prompts required to get from red to green as the domain tests; there is a serious risk of coupling to the prompts.
We will see if this works for me long term, but I do think beginners manually refactoring with thought could be an accelerator.
But only with intentionally focusing on learning why over time.
faizshah
7 hours ago
I completely reject this way of thinking. I remember when I was starting out it was popular to say you learn less by using an IDE, and that you should just use a text editor, because you never learn how the system works if you rely on a run button, a debug button, or a WYSIWYG editor.
Well, in modern software we stand on the shoulders of many, many giants; you have to start somewhere. Some things you may never need to learn (say, git at a deep level, when the concepts of add, commit, rebase, pull, push, cherry-pick, and reset are enough even if you use a GUI to do it), and some things you might invest in over time (like learning about your OS so you can optimize performance).
The way you use automation effectively is to automate the things you don’t want to learn about and work on the things you do want to learn about. If you’re a backend dev who wants to learn how to write an API in Actix, go ahead and copy-paste some ChatGPT code; you just need to learn the shape of the API and the language first. If you’re a Rust dev who wants to learn how Actix works, don’t just copy and paste the code: get ChatGPT to give you a tutorial, then write your API yourself and use the docs.
yoyohello13
6 hours ago
Using an IDE does stunt learning, though. Whether that’s a problem is up for debate. But relying on the run button or auto-completion does offload the need to remember how the CLI fits together or to learn library APIs.
faizshah
6 hours ago
Do you have any evidence of that?
From personal experience, and from the popularity of RStudio, Jupyter, etc., the evidence points in the other direction.
It’s because when you start out you just need to learn what code looks like, how it runs (do all the lines execute at once or sequentially?), that you have these boxes that can hold values, and so on.
FridgeSeal
6 hours ago
Given that for most of my projects the run button is a convenient wrapper over "cargo run", "cargo test -- test-name", or "python file.py", I'm not super convinced by the argument.
Maybe in C/C++ where build systems are some kind of lovecraftian nightmare?
jwrallie
4 hours ago
Some Makefile knowledge does not hurt much, but beyond that it starts to become a nightmare.
Another big difference is the size of the standard library. One can hold in one's head all the information needed to program in C, but I would argue that for C++ or Java it would be too taxing, and an IDE is almost a requirement, the alternative being consulting the documentation often.
chipotle_coyote
6 hours ago
> If you’re a rust dev who wants to learn how Actix works don’t just copy and paste the code, get ChatGPT to give you a tutorial
But if you don't know how Actix works, how can you be sure that the ChatGPT-generated tutorial is going to be particularly good? It might spit out non-idiomatic, unnecessarily arcane, or even flat-out wrong information, asserted confidently, and you may not have any good way of determining that. Wouldn't you be better off "using the docs yourself" in the first place, assuming they have a halfway decent "Getting Started" section?
I know it's easy to draw an analogy between "AI-assisted coding" and autocompleting IDEs, but under the hood they're really not the same thing. Simpler autocompletion systems offer completions based on the text you've already typed in your project and/or a list of keywords for the current language; smarter ones, like LSP-driven ones, perform some level of introspection of the code. Neither of those pretends to be a replacement for understanding "how the system works." Just because my editor limits its autocomplete suggestions to things that make sense at a given cursor position doesn't mean I don't have to learn what those methods actually do.

An LLM offering to write a function for you based on, say, the function signature and a docstring does let you skip the whole "learn what the methods actually do" part, and certainly lets you skip things like "what's idiomatic and elegant code for this language." And I think that's what the OP is actually getting at here: you can shortcut yourself out of understanding what you're writing far more easily with an LLM than with even the best non-LLM-based IDE.
nyrikki
5 hours ago
Strawman on the WYSIWYG editor vs. text editor question. That is not an "automated decision-making system."
> Automation bias is the propensity for humans to favor suggestions from automated decision-making systems and to ignore contradictory information made without automation, even if it is correct.
Note this 1998 NASA paper, which further refutes the IDE/WYSIWYG editor claims.
https://ntrs.nasa.gov/citations/19980048379
> This study clarified that automation bias is something unique to automated decision making contexts, and is not the result of a general tendency toward complacency.
The problems with automation bias have been known for decades, and the studies in the human-factors field are quite robust.
While we are still way too early in the code-assistant world to have much data, IMHO, even studies that edge toward positive results for coding assistants call out issues with complacency and automation bias.
https://arxiv.org/abs/2208.14613
> On the other hand, our eye tracking results of RQ2 suggest that programmers make fewer fixations and spend less time reading code during the Copilot trial. This might be an indicator of less inspection or over-reliance on AI (automation bias), as we have observed some participants accept Copilot suggestions with little to no inspection. This has been reported by another paper that studied Copilot [24].
faizshah
5 hours ago
Some decisions, like “how do I mock a default export in Jest again?”, are low stakes. Other decisions, like “how should I modify our legacy codebase to use the new grant type?”, are high stakes.
Deciding which parts of your workflow to automate is what’s important.
nonrandomstring
7 hours ago
> It is a bit of a paradox that the better the tools become, the higher the risk is
"C makes it easy to shoot yourself in the foot; C++ makes it harder,
but when you do it blows your whole leg off". -- Bjarne Stroustrup
Buttons840
8 hours ago
You suggest learning the mental model behind the system, but is there a mental model behind web technologies?
I'm reminded of the Wat talk: https://www.destroyallsoftware.com/talks/wat
Is it worth learning the mental model behind this system? Or am I better off just shoveling LLM slop around until it mostly works?
faizshah
7 hours ago
The modern software space is too complex for any one person to know everything. There’s no one mental model. Your expertise over time comes from learning multiple mental models.
For example, if you are a frontend developer doing TypeScript in React, you could learn how React’s renderer works, how TypeScript’s type system works, or how the browser’s event listeners work. Over time you accumulate this knowledge through the projects you work on and the things you debug in prod. Or you can purposefully learn it through projects and studying. We also build up mental models of the way our product and its dependencies work.
The reason a coworker might appear to be 10x or 100x more productive than you is that they are able to predict things about the system and arrive at solutions faster. Why are they able to do that? It’s not because they use vim or type at 200 wpm. It’s because they have a mental model of the way the system works that might be more refined than your own.
tambourine_man
7 hours ago
Of course there is. The DOM stands for Document Object Model. CSS uses the box model. A lot of thought went into all these standards.
JavaScript is weird, but show me a language that doesn’t have its warts.
hansvm
7 hours ago
> JavaScript is weird, but show me a language that doesn’t have its warts.
False equivalence much? Languages have warts. JS is a wart with just enough tumorous growth factors to have gained sentience and started its plans toward world domination.
senko
7 hours ago
All languages have their warts. In JavaScript, the warts have their language.
joshuakcockrell
2 minutes ago
JavaScript needs a Python 2->3 moment, but like 100x the damage
qwery
6 hours ago
> Is it worth learning the mental model behind this system?
If you want to learn javascript, then yes, obviously. You also need to learn the model to be able to criticise it (effectively) -- or to make the next wat.
> am I better off just shoveling LLM slop around until it mostly works?
Probably not, but this depends on context. If you want a script to do a thing once in a relatively safe environment, and you find the method effective, go for it. If you're being paid as a professional programmer I think there is generally an expectation that you do programming.
umpalumpaaa
6 hours ago
Uhhh. I hated JS for years and years until I started to actually look at it.
If you just follow a few relatively simple rules, JS is actually very nice and "reliable." Those rules are also relatively straightforward: prefer let/const over var, use === unless you know better, make sure you know about Number.isInteger, Number.isSafeInteger, and so on (there were a few more rules like this; I fail to recall all of them, as it has been a few years since I touched JS). Hope you get the idea.
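A couple of those rules, sketched as a minimal standalone illustration (not an exhaustive list):

```javascript
// var is function-scoped and hoisted; let/const are block-scoped.
var a = 1; { var a = 2; }  // same binding redeclared: a is now 2
let b = 1; { let b = 9; }  // inner b shadows; outer b is still 1

// Number.isInteger / Number.isSafeInteger guard numeric edge cases.
console.log(Number.isInteger(1.0));          // true (1.0 is the integer 1)
console.log(Number.isSafeInteger(2 ** 53));  // false (beyond 2 ** 53 - 1)
```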
Also when I looked at JS I was just blown away by all the things people built on top of it (babel, typescript, flowtype, vue, webpack, etc etc).
Buttons840
5 hours ago
That's a pile of tricks, not a mental model though.
A mental model might be something like “JavaScript has strict and non-strict comparisons,” but there is no strict less-than comparison, for example, so remembering to use === instead of == is a one-off neat tip rather than the application of some more general rule.
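The asymmetry being pointed at here can be shown in a few lines (generic coercion examples, nothing project-specific):

```javascript
// == coerces before comparing; === does not.
console.log(1 == "1");   // true  (the string is coerced to a number)
console.log(1 === "1");  // false

// But there is no "strict less-than": < always applies its own rules.
console.log(2 < "10");    // true  ("10" is coerced to the number 10)
console.log("2" < "10");  // false (two strings compare lexicographically)
```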
rsynnott
7 hours ago
I always wonder how much damage Stackoverflow did to programmer education, actually. There’s a certain type of programmer who will search for what they want to do, paste the first Stackoverflow answer which looks vaguely related, then the second, and so forth. This is particularly visible when interviewing people.
It is… not a terribly effective approach to programming.
noufalibrahim
7 hours ago
I'd qualify that (and the LLM situation) with the level of abstraction involved.
It's one thing to have the LLM generate a function call for you when you don't remember all the parameters. That's a low enough abstraction that it serves as a turbocharged doc lookup. It's also probably okay to have it produce a basic setup (toolchain etc.) for an ecosystem you're unfamiliar with. But to have it solve entire problems for you, especially when you're learning, is a disaster.
exe34
7 hours ago
My workflow with Stack Overflow is to try to get working code that does the minimum of what I'm trying to do, and only try to understand it after it works as I want it to. Otherwise, there's an infinite amount of code out there that doesn't work (because of version incompatibility, plain wrong code, etc.), and I ran out of patience long ago. If it doesn't run, I don't want to understand it.
faizshah
7 hours ago
This is, in my opinion, the right way to use it. You can use Stack Overflow or ChatGPT to get to “It works!” But don’t stop there; stop at “It works, and I know why it works, and I think this is the best way to do it.” If you just stop at “It works!” you didn’t learn anything and might be unknowingly creating new problems.
username135
6 hours ago
My general philosophy as well.
underlipton
7 hours ago
Leaning on SO was always the inevitable conclusion, though. "Write once" (however misinterpreted that may be), plus age-discrimination fearmongering hindering the transfer of knowledge from skilled seniors to juniors, plus the increasingly brutal competition to secure one's position by producing, producing, producing. With the benefit of the doubt and the willingness to cut or build in slack all dead, of course "learning how to do it right" is a casualty. Something has to give, and if no one's willing to volunteer a sacrifice, the break will happen wherever it is physically or mechanically convenient.
sgustard
7 hours ago
Quite often I'm incorporating a new library into my source. Every new library involves a choice: do I just spend 15 minutes on the Quick Start guide (i.e. "copy-paste"), or a day reading detailed docs, or a week investigating the complete source code? All of those are tradeoffs between understanding and time to market. LLMs are another tool to help navigate that tradeoff, and for me they continue to improve as I get better at asking the right questions.
travisgriggs
6 hours ago
Or "do I even need a library, really?" These libraries do what I need AND so many other things that I don't need. Am I just bandwagoning? For my very simple purposes, maybe my own "prefix(n)" method is better than a big ol' library.
Or not.
All hail the mashup.
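For scale, the entire homegrown "library" in a case like this can be a one-liner. This assumes the hypothetical prefix(n) above means the first n characters of a string; the name is the commenter's, not a real package:

```javascript
// A homegrown prefix(n): first n characters of a string, zero dependencies.
const prefix = (s, n) => s.slice(0, n);

console.log(prefix("hello world", 5)); // "hello"
```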
LtWorf
7 hours ago
If you spend less than 15 minutes deciding which library to include, and whether to include one at all, you're probably doing it wrong.
smikhanov
7 hours ago
No, that person is doing it right. That’s 15 minutes of your life you’ll never get back; no library is worth it.
faizshah
7 hours ago
If your goal is “ship it” then you might be right. If your goal is “ship it, and don’t break anything else, and don’t cause any security issues in the future and don’t rot the codebase, and be able to explain why you did it that way and why you didn’t use X” then you’re probably wrong.
stonethrowaway
8 hours ago
If engineers are still taught engineering as a discipline then it doesn’t matter what tools they use to achieve their goals.
If we are calling software developers who don’t understand how things work, and who can get away with not knowing how things work, engineers, then that’s a separate discussion of profession and professionalism we should be having.
As it stands there’s nothing fundamentally rooted in software developers having to understand why or how things work, which is why people can and do use the tools to get whatever output they’re after.
I don’t see anything wrong with this. If anyone does, then feel free to change the curriculum so students are graded and tested on knowing how and why things work the way they do.
The pearl clutching is boring and tiresome. Where required, we have people who must be licensed to perform certain work, and if they fail to perform at that level, their license is taken away. If anyone does unlicensed work, they are held accountable and will not receive any insurance coverage due to the lack of a license; meaning, they can be held criminally liable. This is why some countries go to the extent of requiring a license to call yourself an engineer at all.
So where engineering, actual engineering, is required, we already have protocols in place that ensure things aren’t done on a “trust me bro” level.
But for everyone else, they’re not held accountable whatsoever, and there’s nothing wrong with using whatever tools you need or want to use, right or wrong. If I want to butt-splice a connector, I’m probably fine. But if I want to wire in a three-phase breaker on a commercial property, I’m either getting it done by someone licensed or looking at jail time if things go south. And engineering is no different.
RodgerTheGreat
8 hours ago
In many parts of the world, it is illegal to call yourself an "engineer" without both appropriate certification/training and legal accountability for the work one signs off upon, as with lawyers, medical doctors, and so on. It's frankly ridiculous that software "engineers" are permitted the title without the responsibility in the US.
djeastm
an hour ago
>as with lawyers, medical doctors, and so on. It's frankly ridiculous that software "engineers" are permitted the title without the responsibility in the US.
It's because 1) most of us don't work on things that can get people jailed or killed and 2) the US leans towards not regulating language so much.
But if it makes you more comfortable, just think of the term "software engineer" as tongue-in-cheek, like some people call janitors "sanitation engineers"
stonethrowaway
6 hours ago
Yet my comment keeps getting upvoted and downvoted. I guess I’m either saying something controversial, which I don’t think I am since I am stating the obvious, or potentially the anti-AI crowd doesn’t like my tone. I’m not pro or against AI (I don’t have a dog in this race). Everything at your disposal is potentially a tool to use how you see fit, whether it be AI or a screwdriver.
faizshah
6 hours ago
If your goal is just to get something working, then go right ahead. But if your goal is to learn, to improve your process, and to avoid introducing new issues or threats, then you’re better off not stopping at “it works” and instead figuring out why it works and whether this is the right way to make it work.
The idea that wanting to become better at using something is pearl clutching is frankly why everything has become so mediocre.
stonethrowaway
6 hours ago
We are saying the same thing I think.
baxtr
8 hours ago
I wonder if this is an elitist argument.
AI empowers normal people to start building stuff. Of course it won’t be as elegant, and it will be bad for learning. However, these people would never have learned anything about coding in the first place.
Are we senior dev people a bit like carriage riders that complain about anyone being allowed to drive a car?
UncleMeat
8 hours ago
My spouse is a university professor. A lot of her students cheat using AI. I am sure that they could be using AI as a learning mechanism, but they observably aren't. Now, the motivations for using AI to pass a class are different, but I think it is important to recognize that there is using AI to build something and learn, and there is using AI to just build something.
Engineering is also the process of development and maintenance over time. While an AI tool might help you build something that functions, that's just the first step.
I am sure that there are people who leverage AI in such a way that they build a thing and also ask it a lot of questions about why it is built a certain way, seeking to internalize that. I'd wager that this is a small minority.
KoolKat23
6 hours ago
Back in school, it was occasionally considered cheating to use a calculator, the purpose being to encourage learning. It would be absurd to ban calculators in the work environment; it's your responsibility as an employee to use them correctly. As you say, that's the first step.
UncleMeat
5 hours ago
I'm sure at some point universities will figure out how to integrate AI into pedagogy in a way that works better than a blanket ban. It also doesn't surprise me that, until people figure out effective strategies, they say "no ChatGPT on your homework."
faizshah
7 hours ago
It has nothing to do with your level of knowledge or experience as a programmer. It has to do with how you learn: https://www.hup.harvard.edu/books/9780674729018
To learn effectively you need to challenge your knowledge regularly, elaborate on that knowledge and regularly practice retrieval.
Building things solely relying on AI is not effective for learning (if that is your goal) because you aren’t challenging your own knowledge/mental model, retrieving prior knowledge or elaborating on your existing knowledge.
senko
7 hours ago
The problem with using the current crop of LLMs for coding, if you're not a developer, is that they're leaky abstractions. If something goes wrong (as it usually will in software development), you'll need to understand the underlying tech.
In contrast, if you're a frontend developer, you don't need to know C++ even though browsers are implemented in it. If you're a C++ developer, you don't need to know assembly (unless you're working on JIT).
I am convinced AI tools for software development will improve to the point that non-devs will be able to build many apps now requiring professional developers[0]. It's just not there yet.
[0] We already had that. I've seen a lot of in-house apps for small businesses built using VBA/Excel/Access in Windows (and HyperCard etc on Mac). They've lost that power with the web, but it's clearly possible.
lovethevoid
7 hours ago
I'm a huge fan of drivers with no experience or knowledge of a car getting on the highway. After all, look at how empowered they are!
lawn
7 hours ago
Maybe the senior developers are just jaded from having to maintain code that nobody, not even its authors, knows how it's supposed to work?
jcgrillo
7 hours ago
I've gotten a lot of mileage in my career by following this procedure:
1. Talk to people, read docs, skim the code. The first objective is to find out what we want the system to do.
2. Once we've reverse-engineered a sufficiently detailed specification, deeply analyze the code and find out how well (or more often poorly) it actually meets our goals.
3. Do whatever it takes to make the code line up with the specification. As simply as possible, but no simpler.
This recipe gets you to a place where the codebase is clean, there are fewer lines of code (and therefore fewer bugs, better development velocity, often better runtime performance). It's hard work but there is a ton of value to be gained from understanding the problem from first principles.
EDIT: You may think the engineering culture of your organization doesn't support this kind of work. That may be true, in which case it's incumbent upon you to change the culture. You can attempt this by using the above procedure to find a really nasty bug and kill it loudly and publicly. If this results in a bunch of pushback then your org is beyond repair and you should go work somewhere else.