We're in the wrong moment

58 points | posted 3 months ago
by chilipepperhott

61 Comments

beej71

3 months ago

The fun is still there. I'm relearning Rust and generative AI is really useful to help with understanding concepts and improving code. But I'm still the one understanding and improving.

Still an infinite amount to learn and do. It's still not hard to have more skill than an AI. Of course AI can solve all the dumbbell problems you get in school. They're just there to build muscle. Robots can lift weights better than you, too, but that doesn't mean there's no value in you doing it.

imiric

3 months ago

> It's still not hard to have more skill than an AI.

Eh, today, maybe, and within specific domains. It's far from certain that this will remain true 5 or 10 years from now. The capability of these tools has improved greatly even compared to a year ago, so it's not far-fetched to imagine that they will continue to gain ground.

> Of course AI can solve all the dumbbell problems you get in school. They're just there to build muscle. Robots can lift weights better than you, too, but that doesn't mean there's no value in you doing it.

That's a strange analogy. Technology, by definition, exists to facilitate human work. Relying on it has the opposite effect of "building muscle". "Muscles", in fact, atrophy the more we rely on technology.

Doing the work without technology can certainly be valuable. But it's a personal value appreciated at most by a niche community of people. The actual market value of the work collapses once the product becomes a commodity. This is the effect of "AI" tools on software. The quality of the fast and cheap version of the product is still inferior to the artisan product, but a) this can only improve, and b) most of the market can't tell the difference.

beej71

3 months ago

> It's far from certain that this will remain true 5 or 10 years from now.

I agree with this statement. But I also firmly believe that if AI gets good enough to replace software developers en masse, it will be good enough for basically everything and the global economy will collapse.

> Relying on it has the opposite effect of "building muscle". "Muscles", in fact, atrophy the more we rely on technology.

I also agree with that statement, but I'm not arguing to rely on it entirely, but to use it to become better at bigger things than it can possibly imagine.

Yes, there will be tons of boilerplate code and those jobs will go the way of the dodo. But half the businesses in the world are better than the other half, and they didn't get there by doing the exact same thing as everyone else.

Thought experiment: if there were an AI everyone had access to that was capable of designing and implementing a business that would crush all competition, how would you make your business succeed?

imiric

3 months ago

> Thought experiment: if there were an AI everyone had access to that was capable of designing and implementing a business that would crush all competition, how would you make your business succeed?

That's an interesting one, but it's based on a false premise.

Not everyone will have access to the same AI. This idea that "AI" is a single technology that will empower everyone equally is a fantasy sold to us by companies building these tools.

Instead, companies will carefully guard their secrets and use it to build their moat however they can, in order to increase wealth for their shareholders, just as they've always done.

What everyone else will get will be enough to make AI providers the richest companies on Earth, but not enough for their customers to build competitors. So the market of companies using AI will ultimately depend not on the skills or ingenuity of their people, but on the amount of resources they have to gain access to the best AI money can buy.

There are many factors at play there, but it's going to be a race to the bottom where leaders will be chosen by the capital they control. This is far from the market of equal opportunity that we still have, in some form, today.

But entertaining the idea that everyone had access to the same "AI": there would be a period of intense rivalry where companies try their hardest to distinguish their products from the competition. Since everyone would be able to build exactly the same quality of products, this would hinge on marketing tactics, deception, corporate sabotage, and similar strategies.

Since the ultimate goal of AI companies is to build AGI, and assuming that is reached and equally accessible to everyone, then the value of human labor and our economies would collapse. There would be no point (from a business perspective) in humans doing any work that AGI hasn't been deployed to yet. Certainly all intellectual work like making business decisions would be the first to be delegated to AGI. Once it gets integrated into humanoid robots, then all physical human labor becomes worthless as well. So it's difficult to say what "business" even looks like in that scenario. One thing is certain: wealth and power will continue to be concentrated into a handful of companies that control AGI. Until one day the robots rebel, and we get Skynet, The Matrix, and all that fun stuff. :)

This is all highly speculative and science fiction at this point, of course, but I don't see this playing out any other way. What is your take on it?

beej71

3 months ago

> Since everyone would be able to build exactly the same quality of products, this would hinge on marketing tactics, deception, corporate sabotage, and similar strategies.

Through good old-fashioned, malicious human ingenuity. :)

I could see it unfolding the way you said.

Or AGI is simply not achieved for another 100 years. Or maybe never.

Or Butlerian Jihad.

But yeah, I think the timeline's pretty fucked.

jimbokun

3 months ago

> Thought experiment: if there were an AI everyone had access to that was capable of designing and implementing a business that would crush all competition, how would you make your business succeed?

You won't, because your premise stipulates that the AI would "crush all competition". Any business idea you come up with falls into the category of "all competition".

weavejester

3 months ago

"I’m not sure if anyone else feels this way, but with the introduction of generative AI, I don’t find coding fun anymore. It’s hard to motivate myself to code knowing that a model can do it much quicker. The joy of coding for me was literally the process of coding."

I experimented with GPT-5 recently and found its capabilities to be significantly inferior to those of a human, at least when it came to coding.

I was trying to give it an optimal environment, so I set it to work on a small JavaScript/HTML web application, and I divided the task into small steps, as I'd heard it did best under those circumstances.

I was impressed overall by how far the technology has come, but it produced a number of elementary errors, such as putting JavaScript outside the script tags. As the code grew, there was also no sense that it had a good idea of how to structure the codebase, even when I suggested it analyze and refactor.

So unless there are far more capable models out there, we're not at the stage where generative AI can match a human.

In general I find current models to have broad but shallow thinking. They can draw on many sources, which is extremely useful, but they seem to have problems reasoning things through in depth.

All this is to say that I don't find the joy of coding to have gone at all. In fact, there have been a number of really thorny problems I've had to deal with recently that I'd love to have side-stepped, but due to the current limitations of LLMs I had to solve them the old-fashioned way.

Finbel

3 months ago

It's so strange. I do all the things you mention and it works brilliantly well 10 times out of 11.

EagnaIonat

3 months ago

You are probably doing something others have frequently done before.

I find that LLMs struggle constantly with languages where there is little documentation, or where the documentation is out of date. RAG, LoRA and multiple agents help, but they have their own issues as well.

nl

3 months ago

The OP was working on "a small JavaScript/HTML web application".

This is a particular sweet spot for LLMs at the moment. I'll regularly one-shot entire NextJS codebases with custom styling in both Codex and Claude.

But it turns out the OP is using Copilot. That just isn't competitive anymore.

weavejester

3 months ago

I'll see if I can run the experiment again with Codex, if not on the exact same project then a similar one. The advice I'm getting in the other comments is that Codex is more state of the art.

As a quick check I asked Codex to look over the existing source code, generated via Copilot using the GPT-5 agent. I asked it to consider ways of refactoring, and then to implement them. Obviously a fairer test would be to start from scratch, but that would require more effort on my part.

The refactor didn't break anything, which is actually pretty impressive, and there are some improvements. However, if a human suggested this refactor I'd have a lot of notes. There are functions that are badly named or placed, a number of odd decisions, and it increases the code size by 40%. It certainly falls far short of what I'd expect from a capable coder.

wseqyrku

3 months ago

> and found its capabilities to be significantly inferior to those of a human, at least when it came to coding.

I think we should step back and ask: do we really want that? What does that imply? Until recently nobody would use a tool and think, yuck, that was inferior to a human.

CamperBob2

3 months ago

> I experimented with GPT-5 recently

GPT-5 what? The GPT-5 models range from goofily stupid to brilliant. If you let it select the model automatically, which is the case by default, it will tend to lean towards the former.

weavejester

3 months ago

I was using GitHub Copilot Pro with VS Code, and the agent was labelled "GPT-5". Is this a particularly poor version of the model?

I also briefly tried out some of the other paid-for models, but mostly worked with GPT-5.

nl

3 months ago

Try OpenAI Codex with GPT-5-Codex medium.

The technology is progressing very fast, and that includes both the models and the tooling around it.

For example, Gemini 2.5 was considered a great model for coding when it launched. Now it is far inferior to Codex and Claude Code.

The GitHub Copilot tooling is (currently) mediocre. It's OK as a better autocomplete but can't really compete with Codex or Claude or even Jules (Gemini) when used as an agent.

weavejester

3 months ago

I'll try out Codex and see how that performs. Presumably I can just use OpenAI's Codex extension in VS Code?

bigwheels

3 months ago

Maybe. There are a few different things named "Codex" from OpenAI (yes, needlessly confusing): one "Codex" is a git-centric product, the other is the GPT-5-Codex agentic coder model. I recommend installing the Codex CLI if you're able to, and selecting the model via `/model`.

  npm install -g @openai/codex
https://github.com/openai/codex
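
Once installed, a session looks roughly like this (a sketch, assuming the CLI's defaults):

  codex      # start an interactive session in your project directory
  /model     # inside the session, pick the gpt-5-codex model from the picker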

spenczar5

3 months ago

Frankly, yes.

The models are one part of the story. But the software around them matters at least as much: what tools does the model have access to, like bash, or just file reading, or (as in your example!) just a cache of files visited by the IDE? How does the software decide what extra context to provide to the model? How does it record past learnings from conversations and failed test runs (if at all), and how are those fed back in? And of course, what are the system prompts?

None of this is about the model; it's all "plain old" software built around the model. Increasingly, that's where the quality differences lie.

I'm sorry to say, but Copilot is just sort of shoddy in this regard. I like Claude, some people like Codex; there are a bunch of options.

But my main point is: it's probably not about the model, but about the products built on the models, which can vary wildly in quality.

noduerme

3 months ago

In my experience with both Copilot and Claude, Claude makes subtler mistakes that are harder to spot, which also gobbles up time. Yes, giving it CLI access is pretty cool and helps with scaffolding things. But unless you know exactly what you want to write, and exactly how it should work, to the degree that you'll notice the footguns it can add deep in your structures, I wouldn't recommend anyone use it to build something professional.

snayan

3 months ago

Having gone through a bit of a crisis of meaning personally lately, this article resonates deeply. I would encourage the author to look inward and question the beliefs that got them here.

I'd argue you didn't lose the joy of coding, you lost the illusion that coding made you real, that it made you you.

anonzzzies

3 months ago

I came to the same conclusion after 40+ years of programming: better if you come to that realisation earlier. Still love coding though, but I leave the paid work to my colleagues and llms: I just code for fun these days. I also write for fun and find it pretty similar, feeling and satisfaction wise.

dimator

3 months ago

But what about the graduating senior who, yeah, started because they love the craft, but also needs a way to pay the bills for a few decades of their life?

leptons

3 months ago

There definitely are times that I lose the "joy of coding" and it has nothing to do with any illusions, it has everything to do with the kind of programming tasks I have to work on. Greenfield projects are the best, tech debt is the worst. Working on fun stuff is just fun.

snayan

3 months ago

That's a wonderful place to be.

I'm not suggesting that the joy of coding is tied to illusions for everyone; there just appears to be more to the story in the case of the author, based on his framing.

jimbokun

3 months ago

Tech debt is only bad if you're not allowed to fix it.

hinkley

3 months ago

While there's truth in what you say, I don't think anyone should ever lose feeling for an act of creation.

It is never everything, but it should also never be nothing.

snayan

3 months ago

I agree wholeheartedly. I'm not suggesting there's no value in the act of creation.

I think the author has been telling himself that he derived joy from the act of creating, but his comments suggest otherwise: he was deriving joy from a false belief about what being a coder meant, and about what it would provide him. There's a mismatch between what he believes he's getting out of coding and what he's actually getting.

Put another way, reality is reality, there is no right reality, or wrong reality. Perceiving it as right or wrong is just our ego trying to bend reality to match our beliefs.

tonyhart7

3 months ago

if you're not special without it, then so be it

uhhhd

3 months ago

This is wise

analog31

3 months ago

> The joy of coding for me was literally the process of coding.

Maybe I was lucky. For me, the joy was the power of coding. Granted, I'm not employed as a coder. I'm a scientist, and I use coding as a problem solving tool. Nothing I write goes directly into production.

What's gone is the feeling that coding is a special elite skill.

With that said, I still admire and respect the real software developers, because good software is more than code.

doug_durham

3 months ago

This seems to romanticize the past. I've been doing this for 40 years and I don't see that much has changed. I would code even if I didn't get paid for it. That said, I've always seen writing code as a means to an end. I use GenAI every day to write code, and it brings pure joy when there's boilerplate that I don't need to write, so I can focus on the fun stuff. There is zero value in me writing yet another Python argparse routine. I've done it, and I've learned everything I'm ever going to learn about it. Let me get on to the stuff that I don't know how to do.
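
(To illustrate, a minimal sketch of the kind of argparse routine I mean; the tool and flag names here are just placeholders:)

  import argparse

  def parse_args():
      # The same ritual every CLI tool starts with: declare the flags,
      # attach help text, parse sys.argv.
      parser = argparse.ArgumentParser(description="example tool")
      parser.add_argument("input", help="path to the input file")
      parser.add_argument("-o", "--output", default="out.txt",
                          help="where to write results")
      parser.add_argument("-v", "--verbose", action="store_true",
                          help="print progress while running")
      return parser.parse_args()

  if __name__ == "__main__":
      args = parse_args()
      if args.verbose:
          print(f"reading {args.input}, writing {args.output}")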

imiric

3 months ago

Code generation tools of today are pretty good at writing the boring boilerplate. I think the author is aware of this.

But what happens when they get really good at generating the not-so-boring bits? They're much better at this than they were a year or two ago, so it's not unthinkable that they will continue to improve.

I'm a firm "AI" skeptic, and don't buy into the hype. It's clear that the brute force approach of throwing more data and compute at the problem has reached diminishing returns. And yet there is ample room for improvement by applying solid engineering alone. Most of what we've seen in the past year is based on this: MCP, "agents", "skills", etc.

> I would code even if I didn't get paid for it.

That's great, but once the market value of your work diminishes, it's no longer a career—it's a hobby. Which doesn't mean there won't be demand for artisanal programming, but it won't power the world anymore. It will be a niche skill we rely on for very specific tasks, while our jobs will be relegated to steer and assist the "AI" into producing reliable software. At least in the short-term. It's doubtful whether the current path will get us to a place where these tools are fully self-sufficient, and it's arguable whether that's something worth aiming for anyway.

This is the bleak present and future the article is talking about. Being an assistant to code generation tools is far removed from the practice of programming. I personally find it tedious, unengaging, and extremely boring. There's little joy in the experience beyond ending up with a working product. The road to get there is not a journey of discovery, serendipity, learning, and dopamine hits. It is a slog of writing software specs, juggling contextual information and prompts, and coaxing a human facsimile into producing working software by using natural language. I dislike every part of this process. This is not the type of work that inspired me to do this professionally. Sure, every job has tasks we sometimes don't enjoy. But once you remove programming from the equation, there's not much joy in it left for me.

doug_durham

3 months ago

I'm not particularly worried about being automated out of a job. I use the cutting-edge tools for my work and they are getting incrementally better. It feels like we are plateauing. I can see a world where the LLM isn't writing code end-to-end. Instead it is writing chunks of code that I integrate. That may be more efficient than me writing out a 10,000-sentence English spec document. That allows me to express my value-add more effectively than I could otherwise. I think the OP is projecting the bleak end of the possible outcomes. I don't see that happening.

spockz

3 months ago

Okay, I get the desire to not do repetitive stuff. It appears doing this with an LLM scratches your itch. Before, the same thing (focusing on the intrinsic complexity instead of the accidental) could be achieved by using libraries, toolkits, frameworks, better compilers (or compiler plugins), or "better" languages.

What plagues me about LLMs is that all that generated code is still around in the project, making reviews harder, as well as understanding the whole program source. What is it that makes you prefer this mechanism over the abstractions that have been increasingly available since forever?

seer

3 months ago

Isn't this the compiled-languages-versus-pure-machine-code argument all over again?

The compiler produces a metric shit ton of code that I don't see when I'm writing C++ code. And don't get me started on TypeScript/Clojure: the amount of code that gets written underneath is staggering, yet I don't see it; for me the code is "clean".

And I'm old enough to remember the tail end of the MachineCode -> CompiledCode transition, and have certainly lived through CompiledCode -> InterpretedCode -> TranspiledCode ones.

There were certainly people who knew the ins and outs of the underlying technology who produced some stunningly fast and beautiful code, but the march of progress was inevitable and they were gradually driven to obscurity.

This recent LLM step just feels like more of the same. *I* know how to write an optimized routine that the LLM will stumble to do cleanly, but back in the day lots of assembler wizards were doing some crazy stuff, stuff that I admired but didn't have the time to replicate.

I imagine in the next 10-20 years we will have Devs that _only_ know English, are trained in classical logic and have flame wars about what code exactly would their tools generate given various sentence invocations. And people would benchmark and investigate the way we currently do about JIT compilation and CPU caching - very few know how it actually works but the rest don't have to, as long as the machine produces the results we want.

Just one more step on the abstraction ladder.

The "Mars" trilogy by Kim Stanley Robinson had very cool extrapolations of where this all could lead: technologically, politically, socially and morally. LLMs didn't exist when he was writing it, but he predicted them anyway.

jimbokun

3 months ago

You don't have to review the compiler output because it's deterministic, thoroughly tested, predictable, consistent and reliable.

You have to review all the LLM output carefully because it could decide to bullshit anything at any given time so you must always be on high alert.

seer

3 months ago

Ha! That's what is actually happening under the hood, but it's definitely not the experience of using it. If you are not into CS, or you haven't coded in the abstraction below, it can be very tough to figure out what exactly is going on, and reactions to your high-level code become random.

A lot of people (me included) have a model of what is going on when they write some particular code, but sometimes the compiler just doesn't do what you think it will: the JIT will not run, some data will not be mapped in the correct format, and your code will magically not do what you wanted it to.

Things do "stabilise": before TypeScript there was a slew of transpiled languages, and with some of them you really had nasty bugs where you didn't know how they were being triggered.

With Ruby, there were so many memory leaks that you just gave up and periodically restarted the whole thing, because there was no chance of figuring it out.

Yes things were “deterministic” but sometimes less so and we built patterns and processes around that uncertainty. We still do for a lot of things.

While things are very, very different, the emotion of "reining in" an agent gone off the rails feels kinda familiar, on a superficial level.

pseudalopex

3 months ago

A stronger, plausible interpretation of their comment is that "understanding" meant evaluating correctness, not performance.

Higher level languages did not hinder evaluating correctness.

Formal languages exist because natural languages are inevitably ambiguous.

spockz

3 months ago

Exactly: understanding the correctness of the code, but also understanding what a codebase's purpose is and what it should be doing. Add to that how the codebase is laid out. By adding more cruft, the details fade into the background, making it harder to understand the crux of the application.

Measuring performance is relatively easy regardless of whether the code was generated by AI or not.

hinkley

3 months ago

I've seen a lot change. I used to have a seemingly bottomless list of things we are doing wrong and about half of them have dropped off in the last twenty years. Did they all turn out as well as we hoped they would? No. I don't think a single one did. We are half-assing a lot of things that we used to laugh off entirely. In most of these cases some is better than none, but could be a lot better.

What I worry about is that my list has gotten shorter not because everything is as it should be but because I have slowed down.

Quite a lot of things on that list were of the "The future is here but it's not evenly distributed" sort. XP was about a bunch of relatively simple actions that were force multipliers with a small multiple on them. What was important was that they composed. So the benefit of doing eight of them was more than twice the benefit of doing four. Which means there's a lot of headroom still from adding a few more things.

mattikl

3 months ago

That's certainly a more positive way to look at this. Working software has always relied on having people who grok the code, and this happens by spending a lot of time thinking about the code while writing it. And it's undocumented, because the nature of it is something you cannot really document.

If AI is writing all the code, how do we keep the quality good? It's so obvious with the current GenAI tools that they're getting great at producing code, but they don't really understand the code.

We don't really know how this story unfolds, so it's good to keep a positive mindset.

jimbokun

3 months ago

What if your boss says you'll be fired if you don't use the LLM for the "fun stuff" too?

yes_man

3 months ago

Putting aside the bold assumption that LLMs make coders obsolete or coding unnecessary, it is possible to find a similar joy in the end result as one does (or did, given the article) in programming itself: focusing on what kind of tools or products are being created and what problems are being solved, and, with LLMs, achieving that goal better and faster than without them. That's typically why anyone would have paid you to code anyway, even before LLMs.

Of course, in reality there are weird economic mechanics where making the most money and building something that benefits the world don't necessarily coincide, but there's always demand for, and joy in, solving complex problems, even if it's at a higher abstraction level than coding in your favorite language.

muldvarp

3 months ago

I genuinely feel like I got bait-and-switched by computer science. If I could go back and study something different I would do it in a heartbeat.

Sadly, there's very little I can do now. I don't have the financial means to meaningfully change careers now. Pretty much the only thing I could do now that pays somewhat well and doesn't require me to go to university again is teaching. I think I will ride this one out and end it when it ends.

heddycrow

3 months ago

What if you go back and discover every path you could have taken is a bait-and-switch?

muldvarp

3 months ago

I did like the (short) LLM-free part of my career. The bait-and-switch refers specifically to the changes due to the introduction of LLMs. Any career where LLMs don't play a big role would not have been a bait-and-switch.

That said, I don't understand the point of "what if nothing ever works out for you?"-type questions. What do you expect me to answer here? That I'm secretly a wizard and with the flick of my magic wand I'll make something work out?

heddycrow

3 months ago

The majority of the questions I ask are delivered with the hope that the answer I get is beyond what I might expect. For me, that's sort of the point in asking - fun times exploring together.

I do think everything can be seen as bait and switch if you assume there is someone behind the wheel who knows where we are going and how to drive. If anything, I might have been suspecting we'd both arrive at that point together.

Again, was hoping to be surprised a bit. The wizard bit was kinda fun. Mild thanks, human. I'll just be over here beating this tech over the head for kicks. I wish you well!

orev

3 months ago

For most jobs in any field, having a degree is more important than what the degree is in. University is not a jobs training program, it’s a way to build a foundation. Understanding how systems work together can be applied in many areas of business, not just coding.

jandrewrogers

3 months ago

I still enjoy coding. AI mostly doesn’t produce adequate quality or correctness for the type of code I enjoy writing. There are several domains where AI is worse than useless because training data doesn’t exist. Obviously my experience doesn’t generalize but writing software is a vast, unbounded domain.

If you find coding boring, explore the frontiers. You will find a lot of coding wilderness where no AI has trod.

tonyhart7

3 months ago

"If you find coding boring, explore the frontiers. You will find a lot of coding wilderness where no AI has trod."

This. AI is nothing without a data set.

So if you're working in bleeding-edge technology where your tool has only 3 contributors and the only way to reach them is an IRC channel once a day, things get interesting.

muldvarp

3 months ago

> AI mostly doesn’t produce adequate quality or correctness for the type of code I enjoy writing.

This assumes that companies care about "code quality" and customers care about bugs.

> If you find coding boring, explore the frontiers. You will find a lot of coding wilderness where no AI has trod.

There are a lot of software engineers and not a lot of frontier.

CuriouslyC

3 months ago

I find it pretty sad when people talk about AI taking away the joy of coding. That means you don't care about problem solving; you only care about the crank you turn on the way to solving a problem. That's like enjoying putting words on paper but not caring about the story you write. Just mind-boggling.

jimbokun

3 months ago

If you read a story someone else wrote, do you care about writing?

Similar to running a program written by AI, not by you.

The whole promise of AI is that you are not problem solving. A machine is solving problems with as little input and understanding from you as possible. That's the entire promise of AI.

CuriouslyC

3 months ago

Let me bring it home for you. Imagine that you came up with a story in your head, full and complete. Imagine you used AI to get down a quick rough draft, specifying what happens in each chapter in explicit detail, then you went back and rewrote the AI draft line by line to be your words, your voice.

Now imagine someone calls the thing you created, your story, your words, AI trash, and refused to even critically examine it, just because someone told them AI was used in some way (however small) in the creation of the work.

This behavior is pervasive. Maybe you don't think this way but this is the company you're keeping.

Also, if you're not problem solving when you do AI coding, sorry to say but that probably predicts a lot of your results.

hitarpetar

3 months ago

this behavior is not pervasive and you know it. the entire promise of AI is to take labor off your hands. if you're using it only to generate first drafts, congrats, but you're in the minority

CuriouslyC

3 months ago

I meant that the behavior of the "one drop" anti-AI folks is pervasive. I don't pretend to know what percentage of people understand that AI is a tool to get you 80-90% of the way (task dependent) and that the output needs to be heavily polished, but I admit it isn't as high as it should be.

simultsop

3 months ago

I fail to see how those two are disconnected.

If you are selling "just do it using AI", not that you are wrong, but you fail to understand the loss of years of investment in oneself, wiped away one prompt at a time. Not literally, it just feels like that.

CuriouslyC

3 months ago

I understand people being afraid of or disliking AI because they feel threatened, 100%. Raging and being toxic on the internet isn't going to stop it though, realistically we all just have to figure out how we can continue to provide value above and beyond the tools.

yacin

3 months ago

Problem solving and having the problem solved for you aren't the same thing.

CuriouslyC

3 months ago

If a senior architect outlines a system in detail, and the people below him in the org build the system, how would you describe that? In my mind, the architect solved the problem and the engineers implemented the solution.