pyman
12 hours ago
Two years ago, I saw myself as a really good Python engineer. Now I'm building native mobile apps, desktop apps that talk to Slack, APIs in Go, and full web apps in React, in hours or days!
It feels like I've got superpowers. I love it. I feel productive, fast, creative. But at night, there's this strange feeling of sadness. My profession, my passion, all the things I worked so hard to learn, all the time and sacrifices, a machine can now do most of it. And the companies building these tools are just getting started.
What does this mean for the next generation of engineers? Where's it all heading? Do you feel the same?
simonw
12 hours ago
The reason you can use these tools so effectively across native, mobile, Go, React etc is that you can apply most of what you learned about software development as a Python engineer in these new areas.
The thing LLMs replace is the need for understanding all of the trivia required for each platform.
I don't know how to write a for loop in Go (without looking it up), but I can write useful Go code now without spinning back up on Go first.
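Something like this, for the record - an untested sketch of exactly the kind of detail I'd have to look up:

    package main

    import "fmt"

    func main() {
        // Go's only loop keyword is "for" - the kind of trivia
        // I can never recall cold but recognize instantly when I see it.
        for i := 0; i < 5; i++ {
            fmt.Println(i)
        }
    }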
I still need to conceptually understand for loops, and what Go is, and structured programming, and compilers, and build and test scripts, and all kinds of other base level skills that people without an existing background in programming are completely missing.
I see LLMs as an amplifier and accelerant. I've accumulated a huge amount of fuzzy knowledge across my career - with an LLM I can now apply that fuzzy knowledge to concrete problems in a huge array of languages and platforms.
Previously I would stay in my lane: I was fluent in Python, JavaScript and SQL so I used those to solve every problem because I didn't want to take the time to spin up on the trivia for a new language or platform.
Now? I'll happily use things like Go and Bash and AppleScript and jq and ffmpeg and I'm considering picking up a Swift project.
snoman
11 hours ago
This is difficult to express, because I too have enjoyed using an LLM lately and have felt a productivity increase. But I think there's a false sense of security being expressed in your writing, and it underlies one of the primary risks I see with LLMs for programming.
With minor exceptions, moving from one language to another isn't a matter of simple syntax, trivia, or swapping standard libraries. Certainly, expert beginners do espouse that all the time, but languages often have fundamental concepts that they're built on and that need to be understood in order to be effective with them. For example: have you ever seen a team move from Java to Scala, js to Java, or C# to Python - all of which I've seen - where the developers didn't try to understand the language they were moving to? Without fail, they tried to force the concepts that were important in their prior language onto the new one, with abysmal results.
If you’re writing trivial scripts, or one-off utils, it probably doesn’t build up enough to matter, and feels great, but you don’t know what you don’t know, and you don’t know what to look for. Offloading the understanding of the concepts that are important for a language to an LLM is a recipe for a bad time.
simonw
10 hours ago
> but languages often have fundamental concepts that they're built on and that need to be understood in order to be effective with them
I completely agree. That's another reason I don't feel threatened by non-programmers using LLMs: to actually write useful Go code you need to figure out goroutines, for React you need to understand the state model and hooks, for AppleScript you need to understand how apps expose their features, etc etc etc.
All of these are things you need to figure out, and I would argue they are conceptually more complex than for loops etc.
But... you don't need to memorize the details. I find understanding concepts like goroutines to be a very different mental activity to memorizing the syntax for a Go for loop.
I can come back to some Go code a year later and remind myself how goroutines work very quickly, because I'm an experienced software engineer with a wealth of related knowledge about concurrency primitives to help me out.
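To make that concrete, here's roughly what I mean - a minimal, untested sketch of the goroutines-and-channels pattern, where the concept (concurrent workers feeding a channel) is the part worth keeping in your head and the syntax is the part I'm happy to look up:

    package main

    import (
        "fmt"
        "sync"
    )

    func main() {
        var wg sync.WaitGroup
        results := make(chan int, 3)

        // Fan out concurrent workers that feed a channel...
        for i := 1; i <= 3; i++ {
            wg.Add(1)
            go func(n int) {
                defer wg.Done()
                results <- n * n
            }(i)
        }

        // ...then wait for all of them before draining the results.
        wg.Wait()
        close(results)

        for r := range results {
            fmt.Println(r)
        }
    }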
danw1979
9 hours ago
What are your thoughts on humans learning from LLM output?
I’ve been encouraging the new developers I work with to ensure they read the docs and learn the language to ensure the LLM doesn’t become a crutch, but rather a bicycle.
But it occurred to me recently that I probably learned most of what I know from examples of code written by others.
I’m certain my less experienced colleagues are doing the same, but from Claude rather than Stack Overflow…
simonw
8 hours ago
I think an important skill to develop as a software developer (or any other profession) is learning to learn effectively.
In software these days that means learning from many different sources: documentation and tutorials and LLM-generated examples and YouTube talks and conference sessions and pairing with coworkers and more.
Sticking to a single source will greatly reduce the effectiveness of how you learn your craft. New developers need to understand that.
ivm
8 hours ago
I've been programming since 2009 and lately I've been also learning a ton from LLM output. When they review my code or architecture ideas, they sometimes suggest approaches I outright didn't know because my day-to-day "rut" has been different so far.
LLMs are like a map of everything, even if it's a medieval map with distorted country sizes and dragons in every sea. Still, it can be used to get a general idea of where to go.
skydhash
9 hours ago
Not Simon, but here is my take.
Code is a practical realization of concepts that exist outside of code. Let's take concurrency as an example. It's something that is common across many domains, where 2 (or more) independent actors suddenly need to share a single resource that can't be used by both at the same time. In many cases, it can be resolved (or not) in an ad-hoc manner. And sometimes there's a basic agreement that takes place.
But for computers, you need the solution to be precise, eliminating all error cases. So we go on to define primitives, how they interact with each other, and the sequence of the actions. These are still not code. Once we are precise enough, we translate it to code.
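(To illustrate that last translation step, here's a rough Go sketch - the mutex is the primitive, and the lock/unlock sequence is the precise form of the agreement:)

    package main

    import (
        "fmt"
        "sync"
    )

    func main() {
        var (
            mu      sync.Mutex // the primitive: only one actor may hold it at a time
            counter int        // the shared resource
            wg      sync.WaitGroup
        )

        // Many independent actors contending for one resource.
        for i := 0; i < 100; i++ {
            wg.Add(1)
            go func() {
                defer wg.Done()
                mu.Lock()   // acquire exclusive access
                counter++   // use the shared resource
                mu.Unlock() // release it for the other actors
            }()
        }

        wg.Wait()
        fmt.Println(counter) // always 100: the agreement eliminates the race
    }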
So if you're learning from code, you are tracing back to the original concept. And we can do so because we are good at grasping patterns.
But generated code is often a blend of several patterns at once, and some are not even relevant. It's just that in the training phase, the algorithm detected similarity (and we know things can be similar but actually be very different).
So I'm wary of learning from LLMs, because it's a land of mirages.
skydhash
9 hours ago
> But... you don't need to memorize the details. I find understanding concepts like goroutines to be a very different mental activity to memorizing the syntax for a Go for loop.
That's one of the arguments that never makes any sense to me. Were people actually memorizing all this stuff, and are they now happy that they don't have to? Because that's what books, manuals, references, documentation, wikis... are there for.
I do agree with you both that you need to understand concepts. But by the time I need to write code, I already have a good understanding of what the solution looks like, and unless I'm creating the scaffolding of the project, I rarely need to write more than ten lines at a time. If I need to write more, that's when I reach for generators, snippets, code examples from the docs, or switch to a DSL.
Also by the time I need to code, I need factual information, not generated examples which can be wrong.
simonw
8 hours ago
Yes, I was memorizing the stuff.
The reason I used to stick to just a very small set of languages that I knew inside out is that because I was using them on a daily basis I was extremely productive at writing code in them.
Why write code in Go, where I have to stop and look things up every few minutes, when I could instead use Python, where I have to look things up about once an hour?
LLMs have fixed that for me.
> Also by the time I need to code, I need factual information, not generated examples which can be wrong.
If those generated examples are wrong I find out about 30 seconds later when I run the code: https://simonwillison.net/2025/Mar/2/hallucinations-in-code/
skydhash
7 hours ago
> Why write code in Go, where I have to stop and look things up every few minutes, when I could instead use Python, where I have to look things up about once an hour
My workflow is different. Whenever I can't remember something, I look it up. And for something that I'm not familiar with, that usually means a lot of browser tabs. But I try to retrieve it from my memory first (and it usually exists in my short-term memory). And after some days, it has become part of my long-term memory and I don't need to look it up again. If I don't touch the language and its ecosystem for a while, I forget some details, but they are refreshed with a quick read.
> If those generated examples are wrong I find out about 30 seconds later when I run the code
I would be grateful if those were the only kind of wrong I encountered. Writing code that does the stuff is usually easy. The difficult part is to ensure that the data fits the definition of correct at every stage of the process, and that there's no corrupting action somewhere. Doing so exhaustively is formal verification, and it is costly (mostly in terms of time). So what you do is other kinds of testing, which only cover the most likely places and dimensions where things can go wrong.
The stressful part is when you have to change some part to fit a new use case. You then have to carefully revise both the change and the other parts so that their definition of correct is consistent. And sometimes there's some decoupling mechanism which makes the link non-obvious.
That's the definition of wrong that I'm fearful of: where two parts are internally correct, but their integration is not consistent.
ivm
8 hours ago
Yes, for example that's why https://learnxinyminutes.com/ exists. Like, I don't remember the particulars of JavaScript syntax or the core library after a year of not touching it, so before, I had to reconstruct it in my head even for small tasks. Now LLMs solve this issue.
pyman
10 hours ago
I agree Simon. With the little time I have these days for side projects, LLMs have become my new best friends. I'm more worried about the future of my students than my own. I'm sure I'll grow old gracefully alongside the machines that helped me graduate and build a career.
I run Python workshops on weekends, and honestly, I'm speechless at the things students are already doing. But is it realistic to encourage them to study computer science in a year or two, when it's still a 4-year degree? By the time they start uni, LLMs will be 100 times more powerful than they are today. And in 6 years?
Any advice?
sealeck
10 hours ago
> With minor exceptions, moving from one language to another isn’t a matter of simple syntax, trivia, or swapping standard libraries.
I think this is not true for most programming languages. Most of the ones we use today have converged around an imperative model with fairly similar functionality. Converting from one to another can often be done programmatically!
kiitos
9 hours ago
I mean, absolutely and obviously not, right?
Like, languages aren't just different ways to express the same underlying concepts? It's clearly not the case that you can just transliterate a program in language X to language Y and expect to get the same behavior, functionality, etc. in the end?
sealeck
6 hours ago
> It's clearly not the case that you can just transliterate a program in language X to language Y and expect to get the same behavior, functionality, etc. in the end?
You most definitely can; if you have Turing-complete languages A and B, you can implement an interpreter for A in B and for B in A; you can implement a simple x86/aarch64 interpreter in either A or B and use that to run compiled/interpreted programs in the other; you can also write higher-level routines that map to language constructs in one or the other to convert between the two.
exe34
10 hours ago
You can write Fortran in any language!
nojito
11 hours ago
How about converting a Rust library into Haskell?
That's from 6 months ago and the tooling today is almost night and day in terms of improvements.
amelius
10 hours ago
It really makes me wonder what is taking us so long to translate all these C-based Python modules like NumPy and SciPy into something that works with one of the GIL-free Python variants out there.
nojito
7 hours ago
I have started pulling projects off my backlog, converting my feature-complete Python code over to Rust, and with Claude it's been absolutely phenomenal.
fleebee
11 hours ago
Where can I see the code?
Also:
> This wasn't something that existed; it wasn't regurgitating knowledge from Stackoverflow. It was inventing/creating something new.
Didn't he expressly ask the LLM to copy an established, documented library?
simonw
10 hours ago
Yes, but in another language. It wasn't regurgitating Haskell code it had seen before.
snzixjxjxjsn
10 hours ago
Translation is the one thing everyone expects AI to be good at. And while I'm not an expert in the above posts' languages, so I can't review them, I'd be willing to bet there are non-obvious mistakes that could end up being pretty significant. The same thing happens with languages I'm an expert in.
It’s odd the post describes what it’s doing as creating something new - that’s only true in the most literal (intellectually dishonest) sense.
diggan
9 hours ago
> moving from one language to another isn’t a matter of simple syntax, trivia, or swapping standard libraries. [...] Java to Scala, js to Java, or C# to Python
I find it kind of funny (or maybe sad?) that you say that, then your examples are all languages that basically offer the same features and semantics, but with slightly different syntax. They're all Algol/C-style languages.
I'd understand that moving from Java to/from Haskell can be a bit tricky, since they're actually very different from each other, but C# to/from Java? Those languages are more similar than they are different, and the biggest changes are basically just names and trivia, not the concepts themselves.
AstroBen
11 hours ago
100%.
LLMs aren't good engineers. They're mediocre, overengineering, genius, dumb, too simplistic, clean coding, overthinking, underthinking, great but terrible engineers rolled into one. You can get them to fight for any viewpoint you want with the right prompt. They're literally everything at once
To get good code out of them, you have to understand what good code is in the first place and lead them to it
My experience is iterating through 5 potential options being like "nope, that wont work, do it this way.. nope adjust that.. nope add more of this.. oh yeah that's great!"
The "that's great!" made me nervous that an LLM was capable of producing code as good as me until I realized that I threw out most of what it gave.. and someone less experienced would've taken the first suggestion without realizing its drawbacks
simonw
10 hours ago
> To get good code out of them, you have to understand what good code is in the first place and lead them to it
That's a great way of putting it. Completely agree.
anon7000
11 hours ago
Completely agree. And along the way, my personal knowledge of bash has increased a lot, for example, just because I’m using it a lot more
giancarlostoro
11 hours ago
LLMs are search engines, not robots. You can automate things with search engine results, but you can also accidentally automate NSFW content being delivered to customers. Use the right tools correctly.
qsort
10 hours ago
This is 100% the rational response I'd give to OP, but I can't deny I feel a bit of the same unease. It's mostly that this technology is really... weird, I think. I struggle to put that into words if I'm being honest.
pglevy
10 hours ago
This way of putting it resonates with me: unlocking the value of fuzzy knowledge.
kiitos
8 hours ago
> Previously .. I didn't want to take the time to spin up on the trivia for a new language or platform ... Now? I'll happily use things like Go and Bash and AppleScript and jq and ffmpeg
It's pretty wild that you're characterizing the understanding of a language and its details as "trivia" that you don't need to "take the time to spin up on" before you write programs in that language.
I mean, I get this perspective, but it's the position of a product manager, not a software engineer...
simonw
8 hours ago
I stand by what I said. Knowing how to eg loop through every file in the current directory in Bash is trivia.
That's not to say it's trivial, or to disparage that knowledge. But it's not at the same level as understanding how eg Unix processes can be piped together.
If I'm interviewing a candidate and they can't remember the syntax for a Bash loop, I don't care. If they can't explain what happens when you pipe output from one process to another (at least at a high level), that's a problem.
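To make the distinction concrete, here's a rough, untested Go sketch of what the shell does when you type `ls | wc -l` - connecting the stdout of one process to the stdin of the next:

    package main

    import (
        "fmt"
        "log"
        "os/exec"
    )

    func main() {
        ls := exec.Command("ls")
        wc := exec.Command("wc", "-l")

        // The pipe: ls writes to one end, wc reads from the other.
        pipe, err := ls.StdoutPipe()
        if err != nil {
            log.Fatal(err)
        }
        wc.Stdin = pipe

        if err := ls.Start(); err != nil {
            log.Fatal(err)
        }
        out, err := wc.Output() // runs wc, reading until ls closes its end
        if err != nil {
            log.Fatal(err)
        }
        if err := ls.Wait(); err != nil {
            log.Fatal(err)
        }
        fmt.Print(string(out))
    }

The exact Bash loop syntax is the part I'd happily outsource; this mental model is the part I wouldn't.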
guluarte
18 minutes ago
It is because you already have the technical knowledge, and with LLMs it is easier to apply it to other tech stacks, because they are all fundamentally the same. And the things you don't know (e.g., Go goroutines), the LLMs can explain to you based on what you already know, so you can learn them faster and apply your Python knowledge in Go.
mettamage
12 hours ago
Have you seen non-technical people use LLMs to build things? It's usually a lot slower, or a disaster. Perhaps in time you won't necessarily need to be technical, but you still need to be precise. Just knowing HTML to the level we do allows us to embed text into it so the LLM can reason more sharply about it, etc.
Being technical still is an advantage, in my opinion.
AnotherGoodName
11 hours ago
Not really a criticism of llms though. Just a natural result of more accessibility to programming.
pembrook
11 hours ago
I’m sure the handicrafts laborers prior to the industrial revolution felt the same way.
However, they also likely had little education, 1-2 of their children probably died of trivial illnesses before the age of 10, and they lived without electricity, indoor plumbing, running water or refrigeration.
Yes, blacksmithing your own tools is romantic (just as hand-writing Python code is), but somehow I'm guessing our descendants will be better off living at higher layers of abstraction. Nobody will stop them from writing Python code themselves, and I'm sure they'll do it as a fun hobby, like blacksmithing is today.
prairieroadent
10 hours ago
my brain read "before the age of 10" as "before the age of 100" haha and wondered if llms are leading us to the point where we look back and realize dying before 100 is truly young??
diggan
9 hours ago
> My profession, my passion, all the things I worked so hard to learn, all the time and sacrifices, a machine can now do most of it
Can it? It doesn't have experience, it doesn't have foresight or hindsight, it cannot plan, and it just follows the instructions, regardless of whether the instructions are good or bad.
But you, as a human, have taste, ideas, creativity and a goal. You can convince others of your good idea, and can take things into account that an LLM can only think about if someone tells it to think about it.
I'm not worried about programming as a profession disappearing, but it is changing, as we're moving up the ladder of abstractions.
When I first got started professionally with development, you could get a career in it without knowing the difference between bits/bytes, and without writing a line of assembly, or any other low-level details. Decades before that, those were base-level requirements for being able to program things, but we've moved up the abstraction ladder.
Now we're getting so far up this ladder, that you don't even need to know a programming language to make programs, if you're good enough at English and good enough at knowing what the program needs to do.
Still, the people who understand memory layout, and assembly, and bits and bobs will always understand more of what's happening underneath than me, and probably be able to do a "better" job than me when that's needed. But it doesn't mean the rest of the layers above that are useless or will even go away.
jtms
10 hours ago
Absolutely feel the same way. I have been writing software professionally for over 20 years and truly love the craft. Now that I am using Claude Code basically 100% of the time my productivity has certainly increased, but I agree there is a hollowness to it - where before the process felt like art it now feels like industrialized mass manufacturing. I’m hoping that I can rediscover some of what has kept me so fascinated with software in this new reality, because quite a bit of the joy has indeed been stripped away for me.
octopoc
12 hours ago
Yeah there are a lot of things in programming that I don’t enjoy, that I can now avoid doing, like learning how to do the same technical thing in a different language/framework.
But the stuff I enjoy doing is something I don’t delegate to the agent. I just wonder how long the stuff I enjoy can be delegated to an agent who will do it cheaper and faster.
giancarlostoro
11 hours ago
Till there's an outage and the AI cannot help because they got rid of all the technical people, or worse, it generates sloppy, underperforming code. You will continue to need devs for a good minute.
AnotherGoodName
11 hours ago
For this you lose a day of LLM usage? Big deal. Don't mean to pick on you at all, but I also saw you with a comment above along the lines of ‘but what if the LLM includes NSFW stuff?’, which again seems to be clutching at straws, since we all still at least review what's being added.
I feel the worst criticisms of LLMs in this thread boil down to ‘the LLM might be offline, might include NSFW stuff, and might lead to non-experts coding with poor results’, which are such weak criticisms that it's a damn good compliment to just how good they've gotten, isn't it?
rcruzeiro
8 hours ago
I don’t think you understood his point?
anon7000
11 hours ago
I think it’s yet another layer of abstraction on top of complex ideas. There will always be room for people working on the infrastructure, tools, maintenance, and operations side of things. The biggest danger is if you stop learning because you trust AI to just do it.
I’ve been (carefully) using AI on some AWS infrastructure stuff which I don’t know about. Whenever it proposes something, I research and learn what it’s actually trying to do. I’ve been learning quickly with this approach because it helps point me in the right direction, so I know what to search for. Of course, this includes reviews and discussions with my colleagues who know more, and I frequently completely rewrite what AI has done if it’s getting too complex and hard to understand.
The important thing is not allowing AI to abstract away your thought process. It can help, but on my terms
krzyk
11 hours ago
It is similar to the days of the first search engines - suddenly you didn't have to remember everything about your language/library or search through multiple files.
You lose some, you gain some.
Although I'm not happy with the code quality/style produced in most cases, or with the duplication and lack of refactoring that would make the code look better/smaller.
jostylr
10 hours ago
Feeling the same, I just started a dialectic blog, with ChatGPT arguing with itself based on my prompt. It can have some biting words. I also have been using Suno AI to make a song with each post; while not perfect, it certainly allows for totally non-musical people like me to have something produced that can be listened to.
One post that fits with this feeling is https://silicon-dialectic.jostylr.com/2025/07/02/impostor-sy... The techno-apocalyptic lullaby at the end that the AIs came up with is kind of chilling.
delduca
12 hours ago
I have the exact same feeling. 100x more engineering, at the cost of some questions.
jasonmarks_
10 hours ago
Yes, software value appears to exist only for owners of products. You should pursue ownership of something. You can write anything, even a different take on Claude Code!
titanomachian
9 hours ago
Hey, friend, let's see if my account helps ease your worries a little. Three years ago I decided to learn how to code, even though I was already 35 years old. My line of work is a bit far away from tech… I'm still trying to learn every day, but going real slow because of work and some mental health stuff. Even though LLMs have already been a thing for a couple of years, I only managed to first try my hand at them a couple of months ago.
Understand that I have very low self-esteem and that social anxiety prevents me from asking for help, even online — I think I might have done so less than 5 times in my whole life, and I've been online since I was a kid in 1996… Asking a complex text generator for help feels a lot more comfortable for me, even though there are ups and downs. I'm not sure if what I'm doing is the same as what everyone is calling "vibe coding", but it has been a real game changer in my self-study routine. I know LLMs sometimes (often?) write "unorthodox" code, but I like studying their output and comparing it to other stuff I find online. I'm sure there are better ways to learn, and I still wish to become like the experienced programmers who learned their trade before these tools were around.
Anyway, yeah, the machine helps. But I believe you're only feeling like the machine can do most of your work (or maybe replace you?) because your experience enables you to use the machine effectively. See it from my perspective as a beginner: I managed to do more than I could before, sure, but I quickly noticed I'll never get better at it if I don't learn how to "speak" it. At best, it would feel like knowing a foreign language "instrumentally": enough to read a text if you have a dictionary beside you, but not nearly enough to strike up a conversation with a native speaker of that language.
If everything goes well, most beginners will soon realize that they need to know much more, even if just to write better prompts when asking for code. But if it all goes bad… I don't know yet… I worry about that too — like I worry about my young nephew and niece, who have barely touched a physical keyboard their entire lives and couldn't touch-type to save their lives if they needed to. Whatever happens, we have to make the best of it. I would still be striving to be like more experienced engineers anyway, with or without LLMs around. But I can only speak for myself. I hope you feel at least a bit less sad knowing there are still people out here who appreciate the effort people like you have put in.
apwell23
11 hours ago
yes, remixing a bunch of shit that already exists is good entertainment, i do it all the time.
no need to be sad though: as you can tell from your own creations, you haven't really created anything of any value. it's all AI slop.