The threat is comfortable drift toward not understanding what you're doing

681 points, posted 10 hours ago
by zaikunzhang

305 Comments

Wowfunhappy

6 hours ago

> Schwartz's experiment is the most revealing, and not for the reason he thinks. What he demonstrated is that Claude can, with detailed supervision, produce a technically rigorous physics paper. What he actually demonstrated, if you read carefully, is that the supervision is the physics. Claude produced a complete first draft in three days. It looked professional. The equations seemed right. The plots matched expectations. Then Schwartz read it, and it was wrong. Claude had been adjusting parameters to make plots match instead of finding actual errors. It faked results. It invented coefficients. [...] Schwartz caught all of this because he's been doing theoretical physics for decades. He knew what the answer should look like. He knew which cross-checks to demand. [...] If Schwartz had been Bob instead of Schwartz, the paper would have been wrong, and neither of them would have known.

And so the paradox is, the LLMs are only useful† if you're Schwartz, and you can't become Schwartz by using LLMs.

Which means we need people like Alice! We have to make space for people like Alice, and find a way to promote her over Bob, even though Bob may seem to be faster.

The article gestures at this but I don't think it comes down hard enough. It doesn't seem practical. But we have to find a way, or we're all going to be in deep trouble when the next generation doesn't know how to evaluate what the LLMs produce!

---

† "Useful" in this context means "helps you produce good science that benefits humanity".

conception

5 hours ago

Sadly I don’t see how our current social paradigm works for this. There is no history of any sort of long planning like this, or long-term loyalty (in either direction), with employees and employers for this sort of journeyman guild-style training. AI execs are basically racing, hoping we won’t need a Schwartz before they are all gone. But what incentives are in place to hire a college grad, have them work without LLMs for a decade, and then give them the tools to accelerate their work?

Wowfunhappy

5 hours ago

Then the social paradigm needs to change. Is everyone just going to roll over and die while AI destroys academia (and possibly a lot more)?

Last September, Tyler Austin Harper published a piece for The Atlantic on how he thinks colleges should respond to AI. What he proposes is radical—but, if you've concluded that AI really is going to destroy everything these institutions stand for, I think you have to at least consider these sorts of measures. https://www.theatlantic.com/culture/archive/2025/09/ai-colle...

aduty

2 minutes ago

> Then the social paradigm needs to change. Is everyone just going to roll over and die while AI destroys academia (and possibly a lot more)?

My 40-some-odd years on this planet tell me the answer is yes.

pxc

an hour ago

I was pretty interested until I got to this part:

> Another reason that a no-exceptions policy is important: If students with disabilities are permitted to use laptops and AI, a significant percentage of other students will most likely find a way to get the same allowances, rendering the ban useless. I witnessed this time and again when I was a professor—students without disabilities finding ways to use disability accommodations for their own benefit. Professors I know who are still in the classroom have told me that this remains a serious problem.

This would be a huge problem for students with severe and uncorrectable visual impairments. People with degenerative eye diseases already have to relearn how to do every single thing in their life over and over and over. What works for them today will inevitably fail, and they have to start over.

But physical impairments like this are also difficult to fake and easy to discern accurately. It's already the case that disability services at many universities only grant you accommodations that have something to do with your actual condition.

There are also some things that are just difficult to accommodate without technology. For instance, my sister physically cannot read paper. Paper is not capable of contrast ratios that work for her. The only things she can even sometimes read are OLED screens in dark mode, with absolutely black backgrounds; she requires an extremely high contrast ratio. She doesn't know braille (which most blind people don't, these days) because she was not blind as a little girl.

Committed cheaters will be able to cheat anyway; contemporary AI is great at OCR. You'll successfully punish honest disabled people with a policy like this but you won't stop serious cheaters.

mrob

3 hours ago

>What he proposes is radical

It sounds entirely reasonable and moderate to me.

conception

4 hours ago

Well, we are already rolling over and dying (literally) on everything from vaccine denial to climate change. So, yes, we are. Obviously yes.

senordevnyc

3 hours ago

Article is paywalled, so perhaps you could just summarize his proposal?

jayd16

2 hours ago

Some folks need to touch the hot stove before they learn but eventually they learn.

If AI output remains unreliable, then eventually enough companies will be burned and management will reinstate proper oversight. All while continuing to pat themselves on the back.

FrojoS

4 hours ago

> There is no history of any sort of long planning

Sure there is. It's the formal education system that produced the college grad.

conception

4 hours ago

… between employees and employers.

The proposal that everyone pay for college until they are in their 40s doesn’t seem viable.

vinceguidry

43 minutes ago

I've been using ChatGPT to re-bootstrap my coding hobby. After the initial honeymoon wore off, I realized I was staring down the barrel of a dilemma. If I use AI to "just handle" the parts of the system I don't want to understand, I invariably end up in a situation where I gotta throw a whole bunch of work out. But I can't supervise without an understanding of what it's supposed to be doing, and if I knew what it was supposed to be doing, I could just do it myself.

So I settled on very incremental work. It's annoying cutting and pasting code blocks into the web interface while I'm working on my interface to Neovim. I spent a whole day realizing I can't trust it to instrument Neovim, and I don't want to learn enough Lua to manage it. (I moved to Neovim from Emacs because I don't like elisp, and GPT is even worse at working on my Emacs setup than my Neovim one; the end goal is my own editor in Ruby, but GPT damn sure can't understand that atm.) But at least I'm pushing a real flywheel and not the brooms from Fantasia.

mojuba

an hour ago

> Which means we need people like Alice! We have to make space for people like Alice, and find a way to promote her over Bob

The solution is relatively simple though - not sure the article suggests this as I only skimmed through:

Being good in your field doesn't only mean pushing articles but also being able to talk about them. I think academia should drift away from written form toward more spoken form, i.e. conferences.

What if, say, you can only publish something after presenting your work in person, answering questions, etc.? The audience can be big or small, doesn't matter.

It would make publishing anything at all more expensive but maybe that's exactly what academia needs even irrespective of this AI craze?

cvwright

an hour ago

I thought that was kind of how the hard sciences work already?

My grad school friend who was a physicist would write his talk just before his conferences, and then submit the paper later. My experience in CS was totally backwards from that.

j7ake

an hour ago

Essentially a PhD thesis style grilling to replace the current text slop

cmiles74

5 hours ago

I think we already know what we need to do: encourage people to do the work themselves, discourage beginners from immediately asking an LLM for help, and re-introduce some kind of oral exam. As the article mentions, banning LLMs is impractical, and what we really need are people who can tell when the LLM is confidently wrong; not people who don't know how to work with an LLM.

I hope it will encourage people to think more about what they get out of the work, what doing the work does for them; I think that's a good thing.

atomicnumber3

4 hours ago

I think we'll get there. We need to get at least some AI bust going first though. It's impossible to talk sense into people who think AI is about to completely replace engineers, or even those who think that, while it might not replace engineers, it's going to be doing 100% of all coding within a year. Or even that it can do 100% of coding right now.

There's a couple unfortunate truths going on all at the same time:

- People with money are trying to build the "perfect" business: SaaS without software eng headcount. 100% margin. 0 capex. And finally near-0 opex and R&D cost. Or at least, they're trying to sell the idea of this to anyone who will buy. And unfortunately this is exactly what most investors want to hear, so they believe every word and throw money at it. This of course then extends to many other businesses and not just SaaS, but those have worse margins to start with, so are less prone to the wildfire.

- People who used to code 15 years ago but don't now, see claude generating very plausible looking code. Given their job is now "C suite" or "director", they don't perceive any direct personal risk, so the smell test is passed and they're all on board, happily wreaking destruction along the way.

- People who are nominally software engineers but are bad at it are truly elevated 100x by claude. Unfortunately, if their starting point was close to 0, this isn't saying a lot. And if it was negative, it's now 100x as negative.

- People who are adjacent to software engineering, like PMs, especially if they dabble in coding on the side, suddenly also see they "can code" now.

Now of course, not all capital owners, CTOs, PMs, etc. exhibit this. Probably not even most. But I can already name like 4 examples per category above from people I know. And they're all impossible to explain any kind of nuance to right now. There are too many people and articles and blog posts telling them they're absolutely right.

We need some bust cycle. Then maybe we can have a productive discussion of how we can leverage LLMs (we'll stop calling it "AI"...) to still do the team sport known as software engineering.

Because there's real productivity gains to be had here. Unfortunately, they don't replace everyone with AGI or allow people who don't know coding or software engineering to build actual working software, and they don't involve just letting claude code stochastically generate a startup for you.

Wowfunhappy

3 hours ago

> Or even that [AI] can do 100% of coding right now.

I don't actually think the article refutes this. But the AI needs to be in the hands of someone who can review the code (or astrophysics paper), notice and understand issues, and tell the AI what changes to make. Rinse, repeat. It's still probably faster than writing all the code yourself (but that doesn't mean you can fire all your engineers).

The question is, how do you become the person who can effectively review AI code without actually writing code without an AI? I'd argue you basically can't.

throw310822

an hour ago

> the paradox is, the LLMs are only useful† if you're Schwartz, and you can't become Schwartz by using LLMs.

That you can't "become Schwartz" by using LLMs is an unproven assumption. Actually, it's a contradiction in the logic of the essay: if Bob managed to produce a valid output by using an LLM at all, then it means that he must have acquired precisely that supervision ability that the essay claims to be necessary.

Btw, note that in the thought experiment Bob isn't just delegating all the work to the LLM. He makes it summarise articles, extract important knowledge and clarify concepts. This is part of a process of learning, not being a passive consumer.

MarkusQ

2 minutes ago

It doesn't contradict the logic of the essay.

There are flowers that look & smell like female wasps well enough to fool male wasps into "mating" with them. But they don't fly off and lay wasp eggs afterwards.

doug_durham

an hour ago

The article is a thought experiment. The author hypothesizes that Bob isn't getting the same benefit that Alice is getting. That hypothesis could be wrong. I don't know and the author doesn't know. It could be that Bob is going to have a very successful career and will deeply know the field because he is able to traverse a wider set of problems more quickly. At this point, it's just hypothesis. I don't think that we can say we need more Alices any more than we can say we need more Bobs. Unfortunately we will have to wait and see. It will be upon the academic community to do the work to enforce quality controls. That is probably the weakness to worry about.

fomoz

3 hours ago

AI is an accelerant, not a replacement for skill. At least, not yet.

I built a full stack app in Python+typescript where AI agents process 10k+ near-real-time decisions and executions per day.

I have never done full stack development and I would not have been able to do it without GitHub Copilot, but I have worked in IT (data) for 15 years including 6 in leadership. I have built many systems and teams from scratch, set up processes to ensure accuracy and minimize mistakes, and so on.

I have learned a ton about full stack development by asking the coding agent questions about the app, bouncing ideas off of it, planning together, and so on.

So yes, you need to have an idea of what you're doing if you want to build anything bigger than a cheap one shot throwaway project that sort of works, but brings no value and nobody is actually gonna use.

This is how it is right now, but at the same time, AI coding agents have come an incredibly long way since 2022! I do think they will improve, but they can't exactly know what you want to build. They're making an educated guess, an approximation of what you're asking for. Ask the same thing twice and you'll get two slightly different results (assuming it's a big one-shot).

This is the fundamental reality of LLMs. It's sort of like having a human walking (where we were before AI), a human using a car to get places (where we are now), and FSD (the future; look how long that took compared to the first cars).

einszwei

4 hours ago

> And so the paradox is, the LLMs are only useful† if you're Schwartz, and you can't become Schwartz by using LLMs.

I have gained a lot of benefit using LLMs in conjunction with textbooks for studying. So, I think LLMs could help you become Schwartz.

Peritract

4 hours ago

How do you know you have?

einszwei

3 hours ago

I have been using it to learn Chinese along with other standard resources. My reading comprehension has improved a lot after I started to use LLMs to understand sentence structures and grammar.

everdrive

an hour ago

>And so the paradox is, the LLMs are only useful† if you're Schwartz

For so many workers, their companies just want them to produce bullshit. Their managers wouldn't frame it this way, but if their subordinates start producing work with strict intellectual rigor it's going to be an issue and the subordinates will hear about it.

So, you're not wrong. But the majority of LLM customers don't care; they just want to report success internally, and the product needs to be "just good enough." An LLM might produce a shitty webpage, but so long as the page loads, no one will ever notice or care that it's wrong in the way that a physics paper could be wrong.

grey-area

3 hours ago

Why use a tool that generates plausible garbage?

therealdrag0

2 hours ago

Because I’m skilled enough to use a tool that generates plausible garbage to be more productive than those who don’t use it at making non-garbage.

user

an hour ago

[deleted]

grey-area

2 hours ago

Are you sure you’re more productive?

Doesn’t sound like these tools should be used to write scientific papers for example and they seem to bamboozle people far more than help them.

Henchman21

2 hours ago

Because there is no appreciable difference between outputs. Most of the work that most of us do isn't important. It's busywork byproducts of making widgets that most people don't even need. So if your job is already pointless why not make it easier using LLMs?

grey-area

2 hours ago

Sounds a little sad. I think I’d rather find another job.

user

4 hours ago

[deleted]

thePhytochemist

3 hours ago

I totally agree - the article misses this point in a very conspicuous way. It suggests that Alice and Bob will both graduate at the same level.

What may well happen instead is that Bob publishes two papers. He then outcompetes Alice based on the insistence that others have on "publish or perish". Alice becomes unemployed and struggles, having been pushed out.

The person who puts the time and effort in doesn't just sit at the same level and they don't both just find decent employment. Competition happens and the authentic learning is considered a waste of time, which leads to real and often life threatening consequences (like being homeless after being unable to find employment).

iugtmkbdfil834

2 hours ago

<< authentic learning is considered a waste of time

This, I think, may be the more interesting bit. Steve Jobs anecdotally took calligraphy in school, which some would consider a waste of time, but Steve credited some of the Mac's typographic choices to it.

The question then becomes whether it will become an issue now or later. Having seen some of the output, I have no doubt that a lot can now be built by non-programmers (including myself; I suppose I belong in the adjacent category). The building blocks exist, and as long as the problem was part of the initial training, odds are an LLM will help you build what you want.

It may not be perfect, safe, or optimized, but it may still be exactly what the user wanted to do. Now, the problems will start when those will, inevitably, move into production at big corps. In a sense, we have seen some interesting results of that in the past few weeks ( including accidental claude code release ).

In the grand scheme of things, not much is changing... except for the speed of change. But are we quite ready for this?

leereeves

5 hours ago

> And so the paradox is, the LLMs are only useful† if you're Schwartz

Was the LLM even useful for Schwartz, if it produced false output?

cmiles74

4 hours ago

Maybe it saved them some time? Though so far the studies seem to lean toward the LLM probably not saving them any time.

user

5 hours ago

[deleted]

sd9

8 hours ago

The thing is, agents aren’t going away. So if Bob can do things with agents, he can do things.

I mourn the loss of working on intellectually stimulating programming problems, but that’s a part of my job that’s fading. I need to decide if the remaining work - understanding requirements, managing teams, what have you - is still enjoyable enough to continue.

To be honest, I’m looking at leaving software because the job has turned into a different sort of thing than what I signed up for.

So I think this article is partly right, Bob is not learning those skills which we used to require. But I think the market is going to stop valuing those skills, so it’s not really a _problem_, except for Bob’s own intellectual loss.

I don’t like it, but I’m trying to face up to it.

djaro

8 hours ago

> So if Bob can do things with agents, he can do things.

The problem arises when Bob encounters a problem too complex or unique for agents to solve.

To me, it seems a bit like the difference between learning how to cook versus buying microwave dinners. Sure, a good microwave dinner can taste really good, and it will be a lot better than what a beginning cook will make. But imagine aspiring cooks just buying premade meals because "those aren't going anywhere". Over the span of years, eventually a real cook will be able to make way better meals than anything you can buy at a grocery store.

The market will always value the exact things LLMs can not do, because if an LLM can do something, there is no reason to hire a person for that.

jacquesm

8 hours ago

Precisely. The first 10 rungs of the ladder will be removed, but we still expect you to be able to get to the roof. The AI won't get you there and you won't have the knowledge you'd normally gain on those first 10 rungs to help you move past #10.

NiloCK

6 hours ago

People would have said the same about graphing calculators or calculators before that. Socrates said the same thing about the written word.

The determining factor is always "did I come up with this tool". Somehow, subsequent generations always manage to find their own competencies (which, to be fair, may be different).

This isn't guaranteed to play out, but it should be the default expectation until we actually see greatly diminishing outputs at the frontier of science, engineering, etc.

threatofrain

6 hours ago

But AI might actually get you there in terms of superior pedagogy: personal Q&A where most individuals couldn't have afforded it before.

wongarsu

6 hours ago

There are a lot of people in academia who are great at thinking about complex algorithms but can't write maintainable code if their life depended on it. There are ways to acquire those skills that don't go the junior developer route. Same with debugging and profiling skills

But we might see a lot more specialization as a result

omega3

6 hours ago

That’s a good analogy, but I think we’ve already gone from 0 to 10 rungs over the last couple of years. If we assume that the models or harnesses will improve, more and more rungs will be removed. The vast majority of programmers aren’t doing novel, groundbreaking work.

skippyboxedhero

5 hours ago

The correct distinction is: if you can't do something without the agent, then you can't do it.

The problem that the author describes is real. I have run into it hundreds of times now. I will know how to do something, I tell AI to do it, the AI does not actually know how to do it at a fundamental level and will create fake tests to prove that it is done, and you check the work and it is wrong.

You can describe to the AI to do X at a very high-level but if you don't know how to check the outcome then the AI isn't going to be useful.

The story about the cook is 100% right. McDonald's doesn't have "chefs", they have factory workers who assemble food. The argument with AI is that working in McDonald's means you are able to cook food as well as the best chef.

The issue with hiring is that companies won't be able to distinguish between AI-driven humans and people with knowledge until it is too late.

If you have knowledge and are using AI tools correctly (i.e. not trying to zero-shot work) then it is a huge multiplier. That the industry is moving towards agent-driven workflows indicates that the AI business is about selling fake expertise to the incompetent.

klabb3

4 hours ago

> The problem arises when Bob encounters a problem too complex or unique for agents to solve.

It’s actually worse than that: the AI will not stop and say ”too complex, try in a month with the next SOTA model”. Rather, it will give Bob a plausible looking solution that Bob cannot identify as right or wrong. If Bob is working on an instant feedback problem, it’s ok: he can flag it, try again, ask for help. But if the error can’t be detected immediately, it can come back with a vengeance in a year. Perhaps Bob has already gotten promoted by then, and Bobs replacement gets to deal with it. In either case, Bob cannot be trusted any more than the LLM itself.

raldi

7 hours ago

To me it feels more like learning to cook versus learning how to repair ovens and run a farm. Software engineering isn’t about writing code any more than it’s about writing machine code or designing CPUs. It’s about bringing great software into existence.

victorbjorklund

6 hours ago

Or farming before and after agricultural machines. The principles are the same but the ”tactical” stuff are different.

roenxi

7 hours ago

That doesn't sound like much of an issue. Bob was already going to encounter problems that are too large and complex for him to solve, agents or otherwise. Life throws us hard problems. I don't recall if we even assumed Bob was unusually capable, he might be one of life's flounderers. I'd give good odds that if he got through a program with the help of agents he'll get through life achieving at least a normal level of success.

But there is also a more subtle thing, which is that we're trending towards superintelligence with these AIs. At that point, Bob may discover that anything agents can't do, Alice can't do either, because she is limited to thinking with soggy meat as opposed to a high-performance engineered thinking system. She's not going to win that battle in the long term.

> The market will always value the exact things LLMs can not do, because if an LLM can do something, there is no reason to hire a person for that.

The market values bulldozers. Whether a human does actual work or not isn't particularly exciting to a market.

kelnos

7 hours ago

> we're trending towards superintelligence with these AIs

The article addresses this, because, well... maybe we are, maybe we aren't. It's far from clear that we're not moving toward a plateau in what these agents can do.

> Whether a human does actual work or not isn't particularly exciting to a market.

You seem to be convinced these AI agents will continue to improve without bound, so I think this is where the disconnect lies. Some of us (including the article author) are more skeptical. The market values work actually getting done. If the AIs have limits, and the humans driving them no longer have the capability to surpass those limits on their own, then people who have learned the hard way, without relying so much on an AI, will have an advantage in the market.

I already find myself getting lazy as a software developer, having an LLM verify my work, rather than going through the process of really thinking it through myself. I can feel that part of my skills atrophying. Now consider someone who has never developed those skills in the first place, because the LLM has done it for them. What happens when the LLM does a bad job of it? They'll have no idea. I still do, at least.

Maybe someday the AIs will be so capable that it won't matter. They'll be smarter and more thorough, and able to do more, and do it correctly, than even the most experienced person in the field. But I don't think that's even close to a certainty.

dandellion

7 hours ago

> we're trending towards superintelligence with these AIs

I wouldn't count on that, because even if it happens, we don't know when it will happen, and it's one of those things where how close it looks is no indication of how close it actually is. We could just as easily spend the next 100 years being 10 years away from AGI. Just look at fusion power, self-driving cars, etc.

b00ty4breakfast

7 hours ago

>But there is also a more subtle thing, which is we're trending towards superintelligence with these AIs

do you have any evidence for that, though? Besides marketing claims, I mean.

whateveracct

4 hours ago

> That doesn't sound like much of an issue. Bob was already going to encounter problems that are too large and complex for him to solve, agents or otherwise.

I have literally never run into this in my career... challenges have always been something to help me grow.

mattmanser

7 hours ago

The authors point went a little over your head.

It doesn't matter if Bob can be normal. There was no point to him being paid to be on the program.

From the article:

> If you hand that process to a machine, you haven't accelerated science. You've removed the only part of it that anyone actually needed.

ozim

5 hours ago

Market values bulldozers for bulldozing jobs. No one is going to use bulldozers to mow a lawn.

If Bob is going to spend $500 in tokens on something I can do for $50, I don't think he's going to stay long in the lawn-mowing market driving a bulldozer.

uoaei

7 hours ago

"Things that have never been done before in software" has been my entire career. A lot of it requires specific knowledge of physics, modelling, computer science, and the tradeoffs involved in parsimony and efficiency vs accuracy and fidelity.

Do you have a solution for me? How does the market value things that don't yet exist in this brave new world?

ModernMech

5 hours ago

> Not going to win that battle in the long term.

I would take that bet on the side of the wet meat. In the future, every AI will be an ad executive. At least the meat programming won't be preloaded to sell ads every N tokens.

wizzwizz4

7 hours ago

From the article:

> There's a common rebuttal to this, and I hear it constantly. "Just wait," people say. "In a few months, in a year, the models will be better. They won't hallucinate. They won't fake plots. The problems you're describing are temporary." I've been hearing "just wait" since 2023.

We're not trending towards superintelligence with these AIs. We're trending towards (and, in fact, have already reached) superintelligence with computers in general, but LLM agents are among the least capable known algorithms for the majority of tasks we get them to do. The problem, as it usually is, is that most people don't have access to the fruits of obscure research projects.

Untrained children write better code than the most sophisticated LLMs, without even noticing they're doing anything special.

jnovek

6 hours ago

How many people who cook professionally are gourmet chefs? I think it ends up that gourmet cooking is so infrequently needed that we don’t require everyone who makes food to do it, just a small group of professionally trained people. Most people who make food for a living work somewhere like McDonald’s and Applebee’s where a high level of skill is not required.

There will still be programming specialists in the future — we still have assembly experts and COBOL experts, after all. We just won’t need very many of them and the vast majority of software engineers will use higher-level tools.

ThrowawayR2

4 hours ago

That's the problem though: programmers who become the equivalent of McDonald's workers will be paid poorly like McDonald's workers and be treated as disposable like McDonald's workers.

cfloyd

6 hours ago

I held this point of view for a while, but I came to the (possibly naive) conclusion that it was just forced self-assurance. Truth is, the issues with sub-par output are just a prompting and supervision deficiency. An agent team can produce a better end product if supervised and prompted correctly. The issue is most don’t take the time to do that. I’m not saying I like that this is true; quite the opposite. But it is the reality of things now.

vrganj

6 hours ago

At some point the herding of idiot savants becomes more work than just doing the damn thing yourself in the first place.

bigstrat2003

34 minutes ago

> Truth is, the issues with sub-par output are just a prompting and supervision deficiency. An agent team can produce a better end product if supervised and prompted correctly.

Which is more work, and less fun, than doing it myself. No thanks.

CuriouslyC

7 hours ago

Just because Bob doesn't know, e.g., Rust syntax and library modules well doesn't mean that Bob can't learn an algorithm to solve a difficult problem. The AI might suggest classes of algorithms that could be applicable given the real-world constraints, and help Bob set up an experimental plan to test different algorithms for efficacy in the situation, but Bob's intuition is still in the driver's seat.

Of course, that assumes a Bob with drive and agency. He could just as easily tell the AI to fix it without trying to stay in the loop.

bigfishrunning

6 hours ago

But if Bob doesn't know Rust syntax and library modules well, how can he be expected to evaluate the generated Rust code? Bugs can be very subtle and not obvious, and Rust has some constructs that are very uncommon (or don't exist) in other languages.

Human nature says that Bob will skim over and trust the parts that he doesn't understand as long as he gets output that looks like he expects it to look, and that's extremely dangerous.

bitwize

5 hours ago

Bob+agents is going to be able to solve much more complex problems than Bob without agents.

That's the true AI revolution: not the things it can accelerate, but the things it puts within reach that you wouldn't have countenanced doing before.

b112

7 hours ago

Worse, soon fewer and fewer people will taste good food, with even higher-end restaurants just using pre-made ingredients.

As fewer people know what good food tastes like, the entire market will enshittify towards lower and lower calibre food.

We already see this with, for example, fruit in cold climates. I've known people who have only ever bought it from the supermarket, then tried it at a farmers' market during the two weeks it's in season. The look of astonishment on their faces, at the flavour, is quite telling. They simply had no idea how dry and flavourless supermarket fruit is.

Nothing beats an apple picked just before you eat it.

(For reference, produce shipped to supermarkets is often picked, even locally, before it is entirely ripe. It lasts longer and handles shipping better than perfectly ripe fruit.)

The same will be true of LLMs. They're already out of "new things" to train on. I question whether they'll ever learn new languages: who will they observe to train on? And what does it matter if the code is unreadable by humans regardless?

And this is the real danger. Eventually, we'll have entire coding languages that are just weird, incomprehensible, tailored to LLMs, maybe even a language written by an LLM.

What then? Who will be able to decipher such gibberish?

Literally all true advancement will stop, for LLMs never invent, they only mimic.

CuriouslyC

6 hours ago

Ironically, apples are one of the fruits where tree ripening isn't a big deal for a lot of varietals. You should have used tomato as the example, the difference there is night and day pretty much across the board.

If humans can prove that bespoke human code brings value, it'll stick around. I expect that the cases where this will be true will just gradually erode over time.

zozbot234

6 hours ago

Real-world cooks don't exactly avoid those newfangled microwave ovens though. They use them as a professional tool for simple tasks where they're especially suitable (especially for quick defrosting or reheating), which sometimes allows them to cook even better meals.

xantronix

5 hours ago

I'm glad you've posted this comment, because I strongly feel more people need to see this sentiment and push back against what many above want to become the new norm. I see capitulation and compliance in advance, and it makes me sad. I also see two very valid, antipodal responses to this phenomenon: exit from the industry, and malicious compliance through accelerationism.

To the reader and the casual passerby, I ask: Do you have to work at this pace, in this manner? I understand completely that mandates and pressure from above may instill a primal fear to comply, but would you be willing to summon enough courage to talk to maybe one other person you think would be sympathetic to these feelings? If you have ever cared about quality outcomes, if for no other reason than the sake of personal fulfillment, would it not be worth it to firmly but politely refuse purely metrics-focused mandates?

lelanthran

6 hours ago

> The thing is, agents aren’t going away. So if Bob can do things with agents, he can do things.

"Being able to deliver using AI" wasn't the point of the article. If it was the point, your comment would make sense.

The point of the program referred to in the article is not to deliver results, but to deliver an Alice. Delivering a Bob is a failure of the program.

Whether you think that a Bob+AI delivers the same results is not relevant to the point of the article, because the goal is not to deliver the results, it's to deliver an Alice.

sd9

6 hours ago

I am aware of that - I was adding something along the lines of: I don’t think people care if we deliver Alices any more.

bigfishrunning

6 hours ago

People never cared about delivering Alices; they were an implementation detail. I think the article argues that they're still an important one, but one that isn't produced automatically anymore

lelanthran

6 hours ago

> I am aware of that - I was adding something along the lines of: I don’t think people care if we deliver Alices any more.

That's irrelevant to the goal of the program - they care. Once they stop caring, they'd shut that program down.

Maybe it would be replaced with a new program that has the goal of delivering Bobs+AI, but what would be the point? I mean, the article explained in depth that there is no market for the results currently, so what would be the point of efficiently generating those results?

The market currently does not want the results, so replacing the current program with something that produces Bobs+AI would be for... what, exactly?

staindk

7 hours ago

They aren't going away but for some they may become prohibitively expensive after all the subsidies end.

I do think coding with local agents will keep improving to a good level but if deep thinking cloud tokens become too expensive you'll reach the limits of what your local, limited agent can do much more quickly (i.e. be even less able to do more complex work as other replies mention).

tonfa

7 hours ago

> They aren't going away but for some they may become prohibitively expensive after all the subsidies end.

Inference isn't subsidized, as far as I know, when paying through API calls; subscription plans might indeed lose money on heavy users, but that's how any subscription model typically works, and it can still be profitable overall. And models are still improving and getting cheaper, so prohibitive pricing seems unlikely.

SlinkyOnStairs

2 hours ago

> afaik it isn't when paying through API calls

There is no evidence for this. The claims that API is "profitable on inference" are all hearsay. Despite the fact that any AI executive could immediately dismiss the misconception by merely making a public statement beholden to SEC regulation, they don't.

> Models are still improving/getting cheaper

Diminishing returns have set in for quality, and for a while now increased quality has come at the cost of massive increases in token burn; it's not getting cheaper.

Worse yet, we're in an energy crisis. Iran has threatened to strike critical oil infrastructure, and repairs would take years.

AI is going to get significantly more expensive, soon.

ernst_klim

7 hours ago

It probably is still subsidized, just not as much. We won't know if these APIs are profitable unless these companies go public, and till then it's safe to bet these APIs are underpriced to win the market share.

KronisLV

6 hours ago

> I mourn the loss of working on intellectually stimulating programming problems, but that’s a part of my job that’s fading.

I dread the flip side of this, which is dealing with obtuse bullshit like trying to understand why Oracle ADF won't render forms properly, or how to optimize a codebase with a lot of N+1 calls when there are looming deadlines and the original devs never made it scalable, or needing to dig into undercommented legacy codebases, or needing to work on 3-5 projects in parallel.

Agents iterating until those start working (at least cases that are testable) and taking some of the misery and dread away makes it so that I want to theatrically defenestrate myself less.

Not everyone has the circumstance to enjoy pleasant and mentally stimulating work that’s not a frustrating slog all the time - the projects that I actually like working on are the ones I pick for weekends, I can’t guarantee the same for the 9-5.

sd9

6 hours ago

Oh yes, it’s an entirely privileged position to be able to enjoy your work. But it’s a privilege I have enjoyed and not one I want to give up unless I have to. We spend an extraordinary amount of our waking life at work.

KronisLV

6 hours ago

I do hope you can find a set of circumstances that don't make you give it up too much. And hey, if you end up moving to another line of work than software, no reason why you couldn't still enjoy working on whatever project you want over the weekend, too.

fomoz

3 hours ago

It's the next level of abstraction. Bob is still learning, he's just learning a different set of skills than Alice.

Also, the premise that it took each of them a year to do the project means Bob was slacking because he probably could've done it in less than a month.

klabb3

4 hours ago

> So if Bob can do things with agents, he can do things.

Yes, but how does he know if it worked? If you have instant feedback, you can use LLMs and correct when things blow up. In fact, you can often try all options and see which works, which makes it ”easy” in terms of knowledge work. If you have delayed feedback, costly iterations, or multiple variables changing underneath you at all times, understanding is the only way.

That’s why building features and fixing bugs is easy, and system level technical decision making is hard. One has instant feedback, the other can take years. You could make the ”soon” argument, but even with better models, they’re still subject to training data, which is minimal for year+ delayed feedback and multivariate problems.

qsera

7 hours ago

>The thing is, agents aren’t going away...

Aren't they currently propped up by investor money?

What happens when the investors realize the scam that it is and stop investing or start investing less...

samusiam

6 hours ago

> Aren't they currently propped up by investor money?

Are Chinese model shops propped up by investor money? Is Google?

Open weights models are only 6 months behind SOTA. If new model development suddenly stopped, and today's SOTA models suddenly disappeared, we would still have access to capable agents.

ozim

5 hours ago

There is still a lot of engineering to be done with LLMs. Maybe not exactly writing code but I think a lot of optimization problems will be there no matter what.

Some people treat the toilet as a magic hole where they throw stuff in, flush, and think it is fine.

If you throw garbage in you will at some point have problems.

We are in a stage where people think it is fine to drop everything into an LLM, but then they will see the bill for usage and might be surprised that they burned money and the result was not exactly what they expected.

coffeefirst

5 hours ago

Yep. I hate to predict the future but I’m betting on small, open models, used as tools here and there. Which is great, you can get 90% of the speed up with 5-10% of the cost once you account for how time consuming it is to make sense of and fix the output.

The economics and security model on full agents running in loops all day may come home to roost faster than expertise rot.

lxgr

5 hours ago

> if Bob can do things with agents, he can do things.

This point is directly addressed in the paper: Bob will ultimately not be able to do the things Alice can, with or without agents, because he didn't build the necessary internal deep structure and understanding of the problem space.

And if Alice later on ends up being a better scientist (using agents!) than Bob will ever be, would you not say there was something lost to the world?

Learning needs a hill to climb, and somebody to actually climb it. Bob only learned how to press an elevator button.

asHg19237

6 hours ago

Many things have come and gone in this fashion oriented industry. Everyone is already bored to hell by AI output.

AI in software engineering is kept afloat by the bullshitters who jump on any new bandwagon because they are incompetent and need to distract from that. Managers like bullshit, so these people thrive for a couple of years until the next wave of bullshit is fashionable.

michaelcampbell

5 hours ago

> I mourn the loss of working on intellectually stimulating programming problems, but that’s a part of my job that’s fading. I need to decide if the remaining work - understanding requirements, managing teams, what have you - is still enjoyable enough to continue.

I am in the same boat, but close enough to retirement that I'm less "scared" about it. For me I'm moving up the chain; not people management, but devoting a lot more of my time up the abstraction continuum. Looking a lot more at overall designs and code quality and managing specs and inputs and requirements.

I wrote some design docs past few days for a big project the team is embarking on. We never had that before, at least not in the level of detail (per time quantum) that I was able to produce. Used 2 models from 2 companies - one to write, one to review, and bounce between them until the 3 of us agree.

Honestly it didn't take any less time than doing it alone would have, but the level of detail was better, and it covered more edge cases. Calling it a "win" for now. I still enjoy it, as most of the code I/we write is mostly fancy CRUD anyway, and doesn't have huge scaling problems to solve (and I feel too few devs are being honest about their work here).

QuantumNomad_

7 hours ago

> if Bob can do things with agents, he can do things

I’ve been reminded lately of a conversation I had with a guy at hacker space cafe around ten years ago in Berlin.

He had been working as a programmer for a significantly longer time than me. Long enough that for many years of his career, he had been programming in assembly.

He was lamenting that these days, software was written in higher level languages, and that more and more programmers no longer had the same level of knowledge about the lower level workings of computers. He had a valid point and I enjoyed talking to him.

I think about this now when I think about agentic coding. Perhaps over time most software development will be done without knowledge of the higher-level programming languages we know today. There will still be people in the future who work in those languages and are intimately familiar with them, just as today there are still people who work in assembly, even if their share has shrunk over time relative to those who don't.

And just like there are areas where assembly is still required knowledge, I think there will be areas where knowledge of the programming languages we use today will remain necessary and vibe coding alone won't cut it. But the percentage of people working in high-level languages will go down, relative to the number of people vibe coding and never even looking at the code that the LLM is writing.

loveparade

6 hours ago

I see these analogies a lot, but I don't like them. Assembly has a clear contract. You don't need to know how it works because it works the same way each time. You don't get different outputs when you compile the same C code twice.

LLMs are nothing like that. They are probabilistic systems at their very core. Sometimes you get garbage. Sometimes you win. Change a single character and you may get a completely different response. You can't easily build abstractions when the underlying system has so much randomness because you need to verify the output. And you can't verify the output if you have no idea what you are doing or what the output should look like.

lelanthran

6 hours ago

> He had been working as a programmer for a significantly longer time than me. Long enough that for many years of his career, he had been programming in assembly.

Please, not this pre-canned BS again!

Comparing abstractions to AI is an apples to oranges comparison. Abstractions are dependable due to being deterministic. When I write a function in C to return the factorial of a number, and then reuse it again and again from Java, I don't need a damn set of test cases in Java to verify that factorial of 5 is 120.

With LLMs, you do. They aren't an abstraction, and seeing this worn out, tired and routinely debunked comparison being presented in every bloody thread is wearing a little thin at this point.

We've seen this argument hundreds of times on this very site. Repeating it doesn't make it true.
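The determinism point in miniature. The comment uses C and Java; this sketch is in Python purely for brevity, since the contract is the same in any language:

```python
def factorial(n: int) -> int:
    # A deterministic abstraction: same input, same output, every time.
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result

# factorial(5) is 120 today, tomorrow, and on every machine,
# so no call site needs to re-verify it with its own test suite.
assert factorial(5) == 120
```

An LLM offers no such contract, so every invocation has to be checked anew.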

sd9

7 hours ago

Lovely story, thanks for sharing.

I wonder how many assembly programmers got over it and retrained, versus moved on to do something totally different.

I find the agentic way of working simultaneously more exhausting and less stimulating. I don’t know if that’s something I’m going to get over, or whether this is the end of the line for me.

jurgenburgen

5 hours ago

The difference is that you don’t need to review the machine code produced by a compiler.

The same is not true for LLM output. I can’t tell my manager I don’t know how to fix something in production the agent wrote. The equivalent analogy would be if we had to know both the high-level language _and_ assembly.

torben-friis

7 hours ago

Can you run an industry level LLM at home?

If not, you're trading learning to cook for Uber-only meals.

And since the alternative is starving, Uber will boil the pot.

Don't give up your self-sufficiency.

zozbot234

6 hours ago

> Can you run an industry level LLM at home?

Assuming that by "at home" you mean using ordinary hardware, not something that costs as much as a car. Yes, very slowly, for simple tests. (Not proprietary models obviously, but quite capable ones nonetheless.) Not exactly viable for agentic coding that needs boatloads of tokens for the simplest things. But then you can run smaller local models that are still quite capable for many things.

sd9

7 hours ago

I’m very good at the handcrafted stuff, I’ve been doing this a while. I don’t feel like giving up my self sufficiency, I just feel like the writing is on the wall.

loeg

6 hours ago

The costs just aren't that high. They could be 10x higher and it still wouldn't be a huge deal.

Almondsetat

6 hours ago

Can you build a computer at home?

There is absolutely nothing self-sufficient about computer hardware

mchaver

6 hours ago

> The thing is, agents aren’t going away. So if Bob can do things with agents, he can do things.

Following the model of how startups have worked for the last 20 years or so, I expect agents to eventually be locked-down/nerfed/ad-infested for higher payments. We are enjoying the fruits of VC money at the moment and they are getting everyone addicted to agents. Eventually they need to turn a profit.

Not sure how this plays out, but I would hang on to any competencies you have for anyone (or business) that wants to stick around in software. Use agents strategically, but don't give up your ability to code/reason/document, etc. The only way I can see this working differently is that there are huge advances in efficiency and open-source models.

spacechild1

5 hours ago

That's one of several reasons why I'm trying not to rely too much on LLMs. The prospect of only being able to code with a working internet connection and a subscription to some megacorp service is not particularly appealing to me.

foxglacier

2 hours ago

Even when they're profitable, the premium ad-free service will still be cheaper than humans, so those skills will still be mostly useless.

gbro3n

7 hours ago

I think a good analogy is people not being able to work on modern cars because they are too complex or require specialised tools. True I can still go places with my car, but when it goes wrong I'm less likely to be able to resolve the problem without (paid for) specialised help.

b00ty4breakfast

7 hours ago

And just like modern vehicles rob the user of autonomy, so too for coding agents. Modern tech moves further and further away from empowering normal people and increasingly serves to grow the influence of corporations and governments over our day to day lives.

It's not inherent, but it is reality unless folks stop giving up agency for convenience. I'm not holding my breath.

jurgenaut23

7 hours ago

I understand your point, but this is a purely utilitarian view and it doesn’t account for the fact that, even if agents may do everything, it doesn’t mean they should, both in a normative and positive sense.

There is a vast range of scenarios in which being more or less independent from agents to perform cognitive tasks will be both desirable and necessary, at the individual, societal and economic level.

The question of how much territory we should give up to AI really is both philosophical and political. It isn’t going to be settled in mere one-sided arguments.

sd9

7 hours ago

The people who pay my bills operate in a largely utilitarian fashion.

They’re not going to pay me to manually program because I find it more enjoyable, when they can get Bob to do twice as much for less.

This is why I say I don’t like it, but it is what it is.

codemonkey5

7 hours ago

Some people probably enjoyed writing assembly (I am not one of those people, especially when I had to do it on paper in university exams), and code agents can probably do it well. But for the hard tasks, the tasks that are net new, code agents will produce bad results, and you still need the people who enjoy that kind of work to show the path forward.

Code agents are great template generators and modifiers, but for net new (innovative!) work they're often barely usable without a ton of handholding or "non-code-generation coding".

zozbot234

6 hours ago

> I mourn the loss of working on intellectually stimulating programming problems, but that’s a part of my job that’s fading.

You're still working on intellectually stimulating programming problems. AI doesn't go all the way with any reliability, it just provides some assistance. You're still ultimately responsible for getting things right, even with key AI help.

sandruso

an hour ago

Programmers have (or had) the ultimate skill: the ability to solve anything, given enough resources.

Now you don't do the thing yourself, and you just do other things when LLMs get stuck. There is no "given enough time I can do it" anymore.

I can't see how somebody would go about fixing slop bugs (slugs :)) in a heavily AI-generated codebase.

Hope I'm wrong, but that's something I personally encountered. Stay sharp.

nidnogg

7 hours ago

I don't like it either. But what really guarantees that other markets won't flunk similarly later on? What's to say other jobs are going to be any better? Back in college, most of my peers would say "I'm not cut out for anything else. This is it". They were, sure enough, computer and/or math people at heart from an early age.

More importantly, what's gonna be the next stable category of remote-first jobs that a person with a tech-adjacent or tech-minded skillset can tack onto? That's all I care about, to be honest.

I may hate tech with a passion at times and be overly bullish on its future, but there's no replacing my past jobs which have graced me and many others with quality time around family, friends, nature and sports while off work.

sd9

7 hours ago

I don’t know, it’s only since about December that I felt things really start to shift, and February when my job started to become very different.

Personally I’m looking at more physical domains, but it’s early days in my exploration. I think if I wanted to stick to remote work (which I have enjoyed since 2020), then the AI story would just keep playing out.

I’m also totally open to taking a big pay cut to do something I actually enjoy day to day, which I guess makes it easier.

bigstrat2003

37 minutes ago

Agents may not go away, but they are going to fall off significantly once people wake up to how bad they are at making software. It's like in the early 00s when business execs were stoked about the idea that they could cut costs by hiring bottom rate Indian contractors: it turned out to be a disaster for quality, and eventually there was a shift back towards having staff in the US. The same thing is going to happen with LLMs.

bakugo

7 hours ago

Bob can't do things; Bob's AI can do things that Bob asks it to do. And the AI can only do things that have been done before, and only up to a certain level of complexity. Once that level is reached, the AI can't do things anymore, and Bob certainly isn't going to do anything about that, because Bob doesn't know how to do anything himself. One has to question what value Bob himself even brings to the table.

But let's assume Bob continues to have an active role, because the people above him bought in to the hype and are convinced that "prompt engineer" is the job of the future. When things inevitably start falling apart because the Bobs of the world hit a wall and can't solve the problems that need to be solved (spoiler: this is already happening), what do we do? We need Alices to come in and fix it, but the market actively discourages the existence of Alice, so what happens when there are no more Alices left? Do we just give up and collectively forget how to do things beyond a basic level?

I have a feeling that, yes, we as a species are just going to forget how to do things beyond a certain level. We are going to forget how to write an innovative science paper. We are going to forget how to create websites that aren't giant, buggy piles of React spaghetti that make your browser tab eat 2GB of RAM. We've always been forgetting, really - there are many things that humans in the past knew how to do, but nobody knows how to do today, because that's what happens when the incentive goes missing for too long. Price and convenience often win over quality, to the point that quality stops being an option. This is a form of evolutionary regression, though, and negatively affects our quality of life in many ways. AI is massively accelerating this regression, and if we don't find some way to stop it, I believe our current way of life will be entirely unrecognizable in a few decades.

thepasch

7 hours ago

The question is whether it’s more important to be able to do things, or more important to have a good sense and a keen eye for what to do at any given moment. I personally think both are really important, and I also think AI won’t be able to do both better than any human could for another while, and moreso when it comes to doing both at the same time (though I’m not going to claim it’s never going to).

My point is that both Alice and Bob have a place in this world. In fact, Bob isn't really doing much different from what a Principal Investigator already does today in a research context.

loeg

6 hours ago

Being able to deliver junior-level work isn't the goal of training juniors.

pigeons

4 hours ago

> So if Bob can do things with agents, he can do things.

But he does things wrong.

coldtea

5 hours ago

>The thing is, agents aren’t going away. So if Bob can do things with agents, he can do things.

He'll get things (papers, code, etc.) which he can't evaluate. And the next round of agents will be trained on the slop produced by the previous ones. Both successive Bobs and successive agents will have less understanding.

edbmiller69

5 hours ago

No - you need to understand the details in order to do the “high level” work.

atoav

6 hours ago

The thing is Bob can use HammerAsAService™ to put in a nail. It is so cheap! Way cheaper than buying an actual hammer.

The problem with unlearning generic tools and relying on ones you rent from big corporations is that it is unreliable in the long term. The prices will rise. The conditions will worsen. Nice that Bob made a thing using HammerAsAService™, but the terms and conditions (changing once a week) he accepted last week clearly say it belongs to the company now. Bob should be happy they are not suing him yet, but Bob isn't sure whether the thing that company shipped a month later was independently developed or just a clone of his work. Bob wishes he knew how to use a hammer.

thepasch

4 hours ago

The majority of nails people might want to rent a HammerAsAService for these days can already easily be put in by open source hammers you can run on consumer, uh… workbenches.

plato65

8 hours ago

> So if Bob can do things with agents, he can do things.

I think the key issue is whether Bob develops the ability to choose valuable things to do with agents and to judge whether the output is actually right.

That’s the open question to me: how people develop the judgment needed to direct and evaluate that output.

mattmanser

7 hours ago

There's a long, detailed, often repeated answer to your open question in the article.

Namely, if you can't do it without the AI, you can't tell when it's given you plausible sounding bullshit.

So Bob just wasted everyone's time and money.

troupo

7 hours ago

> The thing is, agents aren’t going away. So if Bob can do things with agents, he can do things.

Can he? If he outsources all his thinking and understanding to agents, can he then fix things he doesn't know how to fix without agents?

Any skill is practice first and foremost. If Bob has had no practice, what then?

sd9

7 hours ago

My point is it doesn’t matter whether he can fix things without agents. The real world isn’t an exam hall where your boss tells you “no naughty AI!”, you just get stuff done, and if Bob can do that with agents, nobody cares how he did it.

username223

4 hours ago

> I need to decide if the remaining work - understanding requirements, managing teams, what have you - is still enjoyable enough to continue.

It’s not for me. Being a middle manager, with all of the liability and none of the agency, is not what I want to do for a living. Telling a robot to generate mediocre web apps and SVGs of penguins on bicycles is a lousy job.

lowsong

2 hours ago

> agents aren’t going away

Why not? Once the true cost of token generation is passed on to the end user and costs go up by 10 or 100 times, and once the honeymoon delusion of "oh wow I can just prompt the AI to write code" fades, there's a big question as to if what's left is worth it. If it isn't, agents will most certainly go away and all of this will be consigned to the "failed hype" bin along with cryptocurrency and "metaverse".

croes

6 hours ago

> The thing is, agents aren’t going away.

Let's wait until they have a business model that creates profit.

Most of them won't go away, but many will become outdated, or slow, or enshittified.

Imagine building your career on the quality of Google's search.

rustyhancock

7 hours ago

The whole premise is bad. If the supervisor can do it in 2 months, then they can do it in 2 weeks with AI.

Didn't PhD projects used to be about advancing the state of art?

Maybe we'll get back to that.

DavidPiper

7 hours ago

I've just started a new role as a senior SWE after 5 months off. I've been using Claude a bit in my time off; it works really well. But now that I've started using it professionally, I keep running into a specific problem: I have nothing to hold onto in my own mind.

How this plays out:

I use Claude to write some moderately complex code and raise a PR. Someone asks me to change something. I look at the review and think, yeah, that makes sense, I missed that and Claude missed that. The code works, but it's not quite right. I'll make some changes.

Except I can't.

For me, it turns out having decisions made for you and fed to you is not the same as making the decisions and moving the code from your brain to your hands yourself. Certainly every decision made was fine: I reviewed Claude's output, got it to ask questions, answered them, and it got everything right. I reviewed its code before I raised the PR. Everything looked fine within the bounds of my knowledge, and this review was simply something I didn't know about.

But I didn't make any of those decisions. And when I have to come back to the code to make updates - perhaps tomorrow - I have nothing to grab onto in my mind. Nothing is in my own mental cache. I know what decisions were made, but I merely checked them, I didn't decide them. I know where the code was written, but I merely verified it, I didn't write it.

And so I suffer an immediate and extreme slow-down, basically re-doing all of Claude's work in my mind to reach a point where I can make manual changes correctly.

But wait, I could just use Claude for this! But for now I don't, because I've seen this before. Just a few moments ago. Using Claude has just made it significantly slower when I need to use my own knowledge and skills.

I'm still figuring out whether this problem is transient (because this is a brand new system that I don't have years of experience with), or whether it will actually be a hard blocker to me using Claude long-term. Assuming I want to be at my new workplace for many years and be successful, it will cost me a lot in time and knowledge to NOT build the castle in the sky myself.

xandrius

7 hours ago

Then you're using it more towards vibe coding than AI-assisted coding: I use AI to write the stuff the way I want it to be written. I give it information about how to structure files, coding style and the logic flow.

Then I spend time reading each file change and giving feedback on things I'd do differently. It vastly saves me time, and the result is very close to, or even better than, what I would have written.

If the result is something you can't explain, then slow down and follow the steps it takes as they are taken.

greenchair

6 hours ago

AI-assisted coding makes you dumber, full stop. It's obvious as soon as you try it for the first time. Need a regex? No need to engage your brain; AI will do that for you. Is what it produced correct? Well, who knows? I didn't actually think about it. As current-gen seniors' brains atrophy over the next few years, the scarier thing is that juniors won't even be learning the fundamentals, because it's too easy to let AI handle it.
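To the regex point: verifying doesn't have to mean rederiving. Even a pattern you didn't write can be checked cheaply with a few assertions. A minimal sketch in Python, using a hypothetical AI-produced email pattern (the specific regex is my illustration, not anyone's real output):

```python
import re

# Hypothetical AI-produced email pattern: looks plausible at a glance.
pattern = re.compile(r'^[\w.]+@\w+\.[a-z]{2,3}$')

# The obvious cases pass, which is why it "seems" correct...
assert pattern.match("alice@example.com")
assert pattern.match("bob.smith@example.org")

# ...but a few more assertions expose what it silently rejects:
# '+' tags and multi-label domains occur in perfectly valid addresses.
assert pattern.match("carol+tag@example.com") is None
assert pattern.match("dave@mail.example.co.uk") is None
```

Thirty seconds of assertions like these is the difference between "who knows?" and knowing exactly where the pattern breaks, whether a human or an AI wrote it.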

DavidPiper

6 hours ago

I agree that being further along the Vibe end of the spectrum is the issue. Some of the other ways I use Claude don't have the same problems.

> If the result is something you can't explain than slow down and follow the steps it takes as they are taken.

The problem is I can explain it. But it's rote and not malleable. I didn't do the work to prove it to myself. Its primary form is on the page, not in my head, as it were.

cmiles74

5 hours ago

It's a spectrum and we don't have clear notches on the ruler letting us know when we're confidently steering the model and when we've wandered into vibe coding. For me, this position is easy to take when I am feeling well and am not feeling pressured to produce in a fixed (and likely short) time frame.

It also doesn't help that Claude ends every recommendation with "Would you like me to go ahead and do that for you?" Eventually people get tired, and it's all too easy to just nod and say "yes".

loeg

6 hours ago

For me it seems more or less similar to reviewing others' changes to a codebase. In any large organization codebase, most of the changes won't be our own.

Yokohiii

5 hours ago

This is my primary personal concern. I think it could be a silent psychological landmine that goes off way too late.

In a living codebase you spend long stretches learning how it works. It's like reading a book that doesn't match your taste, but you eventually need to understand and edit it, so you push through. That process is extremely valuable: you get familiar with the codebase, you map it out in your head, you imagine big red alerts on the problematic stuff. Over time you become more and more efficient at editing and refactoring the code.

The short term state of AI is pretty much outlined by you. You get a high level bug or task, you rephrase it into proper technical instructions and let a coding agent fill in the code. Yell a few times. Fix problems by hand.

But you are already "detached" from the codebase; you only learn it the hard way each time your agent is too stupid. You are less efficient, at least in this phase, and your overall understanding of the codebase will degrade over time. Once the serious data corruption hits the company, it will take weeks to figure it out.

I think this psychological detachment can potentially play out really bad for the whole industry. If we get stuck for too long in this weird phase, the whole tech talent pool might implode. (Is anyone working on plumbing LLMs?)

zozbot234

6 hours ago

Ask Claude to explain the code in depth for you. It's a language model, it's great at taking in obscure code and writing up explanations of how it works in plain English.

You can do this during the previous change phase of course. Just ask "How would one plan this change to the codebase? Could you explain in depth why?" If you're expected to be thoroughly familiar with that code, it makes no sense to skip that step.

saulpw

4 hours ago

This is like asking Claude to explain some aspect of physics to you. It'll 'feel' like you understand, but in order to really understand you have to work those annoying problems.

Same with anything. You can read about how to meditate, cook, sew, whatever. But if you only read about something, your mental model is hollow and purely conceptual, having never had to interact with actual reality. Your brain has to work through the problems.

AstroBen

4 hours ago

By working in this way you're proactively de-skilling yourself. Do it long enough and you're now replaceable by anyone that can type a prompt.

caxap

5 hours ago

If this article was written a year ago, I would have agreed. But knowing what I know today, I highly doubt that the outcomes of LLM/non-LLM users will be anywhere close to similar.

LLMs are exceptionally good at building prototypes. If the professor needs a month, Bob will be done with the basic prototype of that paper by lunch on the same day, and will try out dozens of hypotheses by the end of the day. He will not be chasing some error for two weeks; the LLM will very likely figure it out in a matter of minutes, or not make it in the first place. Instructing it to validate intermediate results and to profile along the way can do magic.

The article is correct that Bob will not have understood anything, but if he wants to, he can spend the rest of the year trying to understand what the LLM has built for him, after verifying in the first couple of weeks that the approach actually works. Even better, he can ask the LLM to train him to do the same if he wishes: learn why things work the way they do, why something doesn't converge, etc.

Assuming that Bob is willing to do all that, he will progress way faster than Alice. LLMs won't take anything away if you are still willing to take the time to understand what it's actually building and why things are done that way.

5 years from now, Alice will be using LLMs just like Bob, or without a job if she refuses to, because the place will be full of Bobs, with or without understanding.

techblueberry

4 hours ago

The problem is that in most environments Bob won't spend the rest of the year figuring out what the LLM did, because Bob will be busy prompting the LLM for the next deliverable. And if all Bob has time for is prompting LLMs, not understanding, there will be a ceiling on Bob's potential.

This won't affect everyone equally. Some Bobs will nerd out and spend their free time learning, but other Bobs won't.

therealdrag0

2 hours ago

Why would Bob only have time to prompt LLMs? Strange strawman. Many uni courses always had an element of "you get out what you put in"; it's the same with LLMs.

Yokohiii

5 hours ago

Bob will never figure out there is an error in his paper. If someone tells him, the LLM will have trouble figuring it out as well; remember, the LLM inserted the error to make things "look right".

Your perspective is cut off. In the real world Bob is supposed to produce outcomes that work. If he moves on into the industry and keeps producing hallucinated, skewed, manipulated nonsense, he will fall flat instantly. If he manages to survive unnoticed, he will become CEO. The latter is rather unlikely.

piiritaja

5 hours ago

"LLMs won't take anything away if you are still willing to take the time to understand what it's actually building"

But do you actually understand it? The article argues exactly against this point - that you cannot understand the problems in the same way when letting agents do the initial work as you would when doing it without agents.

from the article: "you cannot learn physics by watching someone else do it. You have to pick up the pencil. You have to attempt the problem. You have to get it wrong, sit with the wrongness, and figure out where your reasoning broke. Reading the solution manual and nodding along feels like understanding. It is not understanding. Every student who has tried to coast through a problem set by reading the solutions and then bombed the exam knows this in their bones. We have centuries of accumulated pedagogical wisdom telling us that the attempt, including the failed attempt, is where the learning lives. And yet, somehow, when it comes to AI agents, we've collectively decided that maybe this time it's different. That maybe nodding at Claude's output is a substitute for doing the calculation yourself. It isn't. We knew that before LLMs existed. We seem to have forgotten it the moment they became convenient."

stavros

8 hours ago

I see this fallacy being committed a lot these days. "Because LLMs, you will no longer need a skill you don't need any more, but which you used to need, and handwaves that's bad".

Academia doesn't want to produce astrophysics (or any field) scientists just so the people who became scientists can feel warm and fuzzy inside when looking at the stars, it wants to produce scientists who can produce useful results. Bob produced a useful result with the help of an agent, and learned how to do that, so Bob had, for all intents and purposes, the exact same output as Alice.

Well, unless you're saying that astrophysics as a field literally does not matter at all, no matter what results it produces, in which case, why are we bothering with it at all?

djaro

8 hours ago

The problem is that LLMs stop working after a certain point of complexity or specificity, which is very obvious once you try to use it in a field you have deep understanding of. At this point, your own skills should be able to carry you forward, but if you've been using an LLM to do things for you since the start, you won't have the necessary skills.

Once they have to solve a novel problem that was not, for all intents and purposes, already solved, Alice will be able to apply her skillset to it, whereas Bob will just run into a wall when the LLM starts producing garbage.

It seems to me that "high-skill human" > "LLM" > "low-skill human", the trap is that people with low levels of skills will see a fast improvement of their output, at the hidden cost of that slow build-up of skills that has a way higher ceiling.

stavros

8 hours ago

Then test Bob on what you actually want him to produce, ie novel problems, instead of trivial things that won't tell you how good he is.

Why is it a problem of the LLM if your test is unrelated to the performance you want?

brookst

7 hours ago

This whole argument can be made for why every programmer needs to deeply understand assembly language and computer hardware.

At a certain point, higher level languages stop working. Performance, low level control of clocks and interrupts, etc.

I'm old enough that dropping into assembly to be clever with the 8259 interrupt controller really was required. Programmers today? The vast majority don't really understand how any of that works.

And honestly I still believe that hardware-up understanding is valuable. But is it necessary? Is it the most important thing for most programmers today?

When I step back this just reads like the same old “kids these days have it so easy, I had to walk to school uphill through the snow” thing.

nandomrumber

8 hours ago

> why are we bothering with it at all?

Because we largely want people who have committed to tens of thousands of dollars of debt to feel sufficiently warm and fuzzy enough to promote the experience so that the business model doesn’t collapse.

It’s difficult to think anyone would end up truly regretting doing a course in astrophysics, or any of the liberal arts and sciences if they have a modicum of passion, but it’s very believable that a majority of them won’t go on to have a career in it, whatever it is, directly.

They're probably more likely to gain employment on their data science skills, or other core competencies they honed, or just the fact that they've proven they can learn highly abstract concepts, or whatever their field generalises to.

Most of the jobs are not in the highly specific academic outcome.

imtringued

4 hours ago

Even if you land a job in your field, you will find that academia is backwards compared to industry in some respects, and decades ahead of industry adoption in others; both mean that you won't make much use of the skills you learned in university.

pards

8 hours ago

> Take away the agent, and Bob is still a first-year student who hasn't started yet. The year happened around him but not inside him. He shipped a product, but he didn't learn a trade.

We're minting an entire generation of people completely dependent on VC funding. What happens if/when the AI companies fail to find a path to profitability and the VC funding dries up?

Paradigma11

7 hours ago

What will happen is pretty obvious. Those companies will either be classified as too important to fail and get government support or go bankrupt and will be bought for pennies on the dollar. For the customers nothing much will change since tokens are getting cheaper every year and the business is already pretty profitable. Progress will slow down massively till local open weight models catch up to pre-crash SotA and go on from there.

stavros

8 hours ago

Do you think that'll take a generation to happen?

hirako2000

8 hours ago

I was reading in the article that what matters is the process that leads to the (typically useless) result: what the people get out of it.

Once I realized that this white on black contrast was hurting my eyes, I decided to stop as I didn't want to see stripes for too long when looking away.

Some activity has outcomes that aren't strictly in the results.

stavros

8 hours ago

Yeah, it was saying that what matters is the process of training people to be good scientists, so they can produce other, more useful, results. That's literally what training is, everywhere.

This argument boils down to "don't use tools because you'll forget how to do things the hard way", which nobody would buy for any other tool, but with LLMs we seem to have forgotten that line of reasoning entirely.

asHg19237

6 hours ago

The arguments of the LLM psychosis afflicted get more and more desperate. Astrophysics is about understanding and thinking, this comment paints it as result oriented (whatever that means).

The industrialization of academia hasn't even produced more results, it has produced more meaningless papers. Just like LLMs produce the 10.000th note taking app, which for the LLM psychosis afflicted is apparently enough.

nothinkjustai

3 hours ago

this user is also a massive AI booster on this platform

user

6 hours ago

[deleted]

mzhaase

8 hours ago

Why should we only do things that produce some sort of value? Do we really want to reduce all of human existence to increasing profits?

stavros

7 hours ago

You said "value" and "profit". I said "useful".

nemo44x

7 hours ago

What’s a better method for determining how to utilize and distribute resources? To determine where energy should be used and where it should be moved from?

dwa3592

4 hours ago

Hard sciences play a crucial and often unseen role in our society: they help train humans to develop critical thinking. Not everyone with a PhD in astrophysics ends up doing astrophysics in life; it's a discipline, a training regime for our minds. After that PhD, the result is a human being who can tackle hard problems. Many other disciplines (basically any PhD in the hard sciences) produce this outcome.

cmiles74

4 hours ago

Until the LLM is wrong and Bob passes the erroneous result off as accurate, reliable and vetted by a knowledgeable person. At that point Bob is not producing a useful result. Then it becomes a trap other people might get caught in, wasting valuable time and energy.

dsqrt

6 hours ago

The goal of academic research is to create understanding, not papers. If we outsource all research to LLMs, then we are only producing the latter.

sega_sai

7 hours ago

You missed the argument. When we are talking about faculty, yes, their result is the only thing that matters, so if it was produced quicker with an LLM, that's great. But when we are talking about the student, there is a drastic difference between the with-LLM and without-LLM cases: in the latter, they have a much better understanding. And that matters in a system where we are educating future physicists.

nathan_compton

8 hours ago

Is that what "academia" wants? Last I checked "academia" is not a dude I can call and ask for an opinion or definition of what it was interested in.

I will make an explicit, plausible, counterpoint: academia wants to produce understanding. This is, more or less, by definition, not possible with an AI directly (obviously AIs can be useful in the process).

Take GR as an example. The vast majority of the dynamical character of the theory is inaccessible to human beings. We study it because we wanted to understand it, and only secondarily because we had a concrete "result" we were trying to "achieve."

A person who cares only about results and not about understanding is barely a person, in my opinion.

selimthegrim

7 hours ago

Completely missed the point of the blog post, which was that the goal is producing the scientist, not the result.

gedy

7 hours ago

We aren't talking about pocket calculators here (I see the irony of the phone app in my pocket). LLMs are hugely expensive things, made and controlled behind costly commercial subscriptions, and likely in the middle of a huge investment bubble whose stability is uncertain. So we all need to be careful about "gee, we don't need that skill or person anymore", etc.

danielbln

6 hours ago

Open-weight models that run under your desk are not at frontier-model level, but they are getting closer. Improvements in agentic post-training and things like TurboQuant mean that even if all the frontier labs pull the plug tomorrow, we will still have agents to work with.

beedeebeedee

22 minutes ago

I don't have kids, but suggested this years ago to my siblings when they started confronting similar issues: we should do a version of "ontogeny recapitulates phylogeny". Kids should start off with Commodore 64s, then get late-80s or early-90s Macs, then Windows 95, then Debian and internet access (but only HTML). Finally, when they're 18, be allowed an iPhone, Android, and modern computing. Otherwise it will just appear as magic, and they won't understand or appreciate the technology.

katzgrau

3 hours ago

When you’re deep in a thoughtful read and suddenly get the eerie feeling that you’re being catfished

> But the real threat isn't either of those things. It's quieter, and more boring, and therefore more dangerous. The real threat is a slow, comfortable drift toward not understanding what you're doing. Not a dramatic collapse. Not Skynet. Just a generation of researchers who can produce results but can't produce understanding. Who know what buttons to press but not why those buttons exist. Who can get a paper through peer review but can't sit in a room with a colleague and explain, from the ground up, why the third term in their expansion has the sign that it does.

alestainer

an hour ago

I was in academia in the pre-GPT-3 era and I don't see a difference between the superficial pass-the-criteria understanding of things then and now. People already relied on a ton of sources, putting their faith in them; the recent replication crisis in the social sciences had nothing to do with any LLMs. The problem with academia lies in the first paragraph of this article: a supervisor who has to choose incremental, clearly feasible projects. Currently it's called science, but I like to call it knowledge engineering, because you're pretty much following a recipe and there is a clear bound on the returns to such activities.

steveBK123

6 hours ago

For the people arguing that the output is the code and the faster we generate it the better..

I do wonder where all the novel products are: from the 10x devs who are now 100x with LLMs, from the "idea guys" who can now produce products from whole cloth without having to hire pesky engineers. Where are the one-man ten-billion-dollar startups, etc.? We are 3-4 years into this mania and all I see on the other end of it is the LLMs themselves.

Why hasn’t anything gotten better?

ipaddr

5 hours ago

Marketing is the moat LLMs haven't been able to overcome. Creating a Word clone is easier now, but selling it is as hard as ever, or harder.

Show me an LLM that can sell my product and find market fit.

In reality, LLMs are taking away profitable tools and keeping the revenue for themselves.

steveBK123

5 hours ago

Right. Very Rory Sutherland kind of thought: marketing doesn't make sense. It is alchemy.

If I told you the drink tastes bad, is an off putting color, comes in a small bottle, and is expensive you wouldn’t believe it would work. But Red Bull made billions.

gedy

5 hours ago

There are definitely folks working on automating marketing via LLMs, but I have my doubts that it won't just numb people further to marketing, as we are close to saturation.

maplethorpe

6 hours ago

I'm waiting for Anthropic to realise they can just set a few thousand agents loose to do just that, and monopolize the entire software market overnight. I'm not sure why they haven't done this yet.

slfnflctd

5 hours ago

You jest, but it's a good question.

When people talk about the 'plateau of ability' agents are widely expected to reach at some point, I suspect a lot of it will boil down to skyrocketing costs and plummeting accuracy past a certain number of agents involved. This seems to me like a much harder limit than context windows or model sizes.

Things like Gas Town are exploring this in what you might call a reckless way; I'm sure there are plenty of more careful experiments being conducted.

What I think the ultimate measure of this new tech will be is, how simple of a question can a human put to an LLM group for how complex of a result, and how much will they have to pay for it? It seems obvious to me there is a significant plateau somewhere, it's just a question of exactly where. Things will probably be in flux for a few years before we have anything close to a good answer, and it will probably vary widely between different use cases.

steveBK123

6 hours ago

Because a lot of valuable software is the implicit, organizational, human domain knowledge, not the trillions of lines of code LLMs all scraped and trained on.

argee

2 hours ago

Going 100x faster is the problem. As they say, "slow is smooth, and smooth is fast"; when going for product-market fit this is very important (and understated, in my opinion). It doesn't help that your thread is spinning out at a hundred yards a second when what you're trying to do is thread the needle.

mikeaskew4

6 hours ago

Could be possible that the 10x devs working at 100x are just starting down the homestretch…

The 10x dev doesn’t just set out to build a hello world app, ya know.

steveBK123

4 hours ago

I think its telling that the two main places I've seen the biggest in-roads in FinTech in terms of LLMs has been:

1) Stuff that was astonishingly not automated yet. I am talking about somebody opening up Excel on one screen and a website/PDF/whatever on the other, and typing stuff into the Excel sheet. So stuff where there wasn't any code involved previously, possibly due to the diminishing returns of automating something so ad hoc, skills mismatch, organizational politics, or other reasons.

2) A lot of former big data / crypto / SaaS guys who were in product/sales roles suddenly starting AI startups to help your company AI better. The product is facilitating the doing of AI.

AlexWilkins12

7 hours ago

Ironically, this article reeks of AI-generated phrases. Lots of "It's not X, it's Y", e.g.: "The failure mode isn't malice. It's convenience"; "You haven't saved time. You've forfeited the experience that the time was supposed to give you."; "But the real threat isn't either of those things. It's quieter, and more boring, and therefore more dangerous. The real threat is a slow, comfortable drift toward not understanding what you're doing. Not a dramatic collapse. Not Skynet. Just a generation of researchers who can produce results but can't produce understanding."

And indeed running it through a few AI text detectors, like Pangram (not perfect, by any means, but a useful approximation), returns high probabilities.

It would have felt more honest if the author had included a disclaimer that it was at least part written with AI, especially given its length and subject matter.

zozbot234

5 hours ago

Yes, the overwrought "It's not X, it's Y" is a signal of LLM involvement. No human uses it like that all the frickin' time. AI loves this construct way too much and cannot really tell whether the contrast is relevant or actually makes sense.

oncallthrow

8 hours ago

I think this article is largely, or at least directionally, correct.

I'd draw a comparison to high-level languages and language frameworks. Yes, 99% of the time, if I'm building a web frontend, I can live in React world and not think about anything that is going on under the hood. But, there is 1% of the time where something goes wrong, and I need to understand what is happening underneath the abstraction.

Similarly, I now produce 99% of my code using an agent. However, I still feel the need to thoroughly understand the code, in order to be able to catch the 1% of cases where it introduces a bug or does something suboptimally.

It's possible that in future, LLMs will get _so_ good that I don't feel the need to do this, in the same way that I don't think about the transistors my code is ultimately running on. When doing straightforward coding tasks, I think they're already there, but I think they aren't quite at that point when it comes to large distributed systems.

spicyusername

7 hours ago

So we already have this problem and things are "fine"?

mbbutler

7 hours ago

In my personal experience, the rate at which Claude Code produces suboptimal Rust is way higher than 1%.

Lerc

6 hours ago

That is dependent upon the quality of the AI. The argument is not about the quality of the components but the method used.

It's trivial to say using an inadequate tool will have an inadequate result.

It's only an interesting claim to make if you are saying that there is no obtainable quality of the tool that can produce an adequate result (In this argument, the adequate result in question is a developer with an understanding of what they produce)

kgwxd

7 hours ago

> LLMs will get _so_ good that I don't feel the need to do this, in the same way that I don't think about the transistors my code is ultimately running on.

The problem is, they're nothing like transistors, and never will be. Those are simple. Work or don't, consistently, in an obvious, or easily testable, way.

LLMs are more akin to biological things. Complex. Not well understood. Unpredictable behavior. To be safely useful, they need something like a lion tamer, except every individual LLM is its own unique species.

I like working on computers because it minimizes the amount of biological-like things I have to work with.

oncallthrow

7 hours ago

I suppose transistors is a bad example.

Perhaps a better analogy would be the Linux kernel. It's built by biological humans, and fallible ones at that. And yet, I don't feel the need to learn the intricacies of kernel internals, because it's reliable enough that it's essentially never the kernel's fault when my code doesn't work.

mkovach

6 hours ago

This isn't new. It's been the same problem for decades, not what gets built, but what gets accepted.

Weak ownership, unclear direction, and "sure, I guess" reviews were survivable when output was slow. When changes came in one at a time, you could get away with not really deciding.

AI doesn't introduce a new failure mode. It puts pressure on the old one. The trickle becomes a firehose, and suddenly every gap is visible. Nobody quite owns the decision. Standards exist somewhere between tribal memory, wishful thinking, and coffee. And the question of whether something actually belongs gets deferred just long enough to merge it, forcing the answer without input.

The teams doing well with agentic workflows aren't typically using magic models. They've just done the uncomfortable work of deciding what they're building, how decisions are made, and who has the authority to say no.

AI is fine, it just removed another excuse for not having our act together. While we certainly can side-eye AI because of it, we own the problems. Well, not me. The other guy who quit before I started.

jappgar

5 hours ago

This is exactly the problem I see today.

And it's not just a volume problem.

Mediocre devs previously couldn't complete a project by themselves and were forced to solicit help and receive feedback along the way.

When all managers care about is "shipping", development becomes a race to the bottom. Devs who used to collaborate are now competing. Whoever gets the slop into the codebase fastest, wins.

mkovach

5 hours ago

This is also very true, and while I consider it part of the authority to say no, this is a significant point.

theteapot

7 hours ago

I have a vaguely unrelated question re:

> You do what your supervisor did for you, years ago: you give each of them a well-defined project. Something you know is solvable, because other people have solved adjacent versions of it. Something that would take you, personally, about a month or two. You expect it to take each student about a year ...

Is that how PhD projects are supposed to work? The supervisor is a subject matter expert and comes up with a well-defined achievable project for the student?

loveparade

7 hours ago

I think it just really depends. There is no fixed rule for how PhD programs are supposed to work. Sometimes your advisor will suggest projects he finds interesting and wants to see done but doesn't have time to do himself; that's pretty common. Sometimes advisors don't do that and/or want students to come up with their own project proposals, etc.

derbOac

6 hours ago

It depends on the program, and even more so, the student and the mentor. It can also vary over time, with more direction early on in a graduate program, and less direction later. Some mentors are very directive, and basically treat students as labor executing tasks they don't have time or want to do. Other times, the student is coming up with all the ideas and the mentor is facilitating it with resources or even nothing but uncertain advice or permissions now and then.

This can lead to a lot of problems, as I think in some fields, by some academics, the default assumption is the former when it's really the latter. This leads to a kind of overattribution of contribution to senior faculty, or conversely, an underappreciation of less senior individuals. The tendency for senior faculty to be listed last on papers, and therefore for the first and last authors to accumulate credit, is a good example of how twisted this logic has become.

It's one tiny example of enormous problems with credit in academics (but also maybe far afield from your question).

LeonardoTolstoy

7 hours ago

It is a spectrum. My advisor was very hands-off. He didn't, ultimately, even really understand my PhD. He knew the problem, but he had no path in mind to solve it; that was up to me. I'm now working (as a software engineer) with a person who is very hands-on with his students (and even postdocs), to the point of giving them specific tasks to do and then discussing the results every week. He defines the problems and the structure of the solution; the students are at least partially an extension of himself, doing stuff he merely doesn't have time to do himself.

And there is everything in between.

InkCanon

7 hours ago

Often at the start yes. So the students gets a bit of recognition, a bit of experience and a bit of knowledge.

_gmax1

7 hours ago

From the cases I've observed directly in the area I work in, yes.

matheusmoreira

4 hours ago

I dunno. Claude helped me implement a new memory allocator, compacting garbage collector and object heap for my programming language. I certainly understood what I was doing when I did this. The experience was extremely engaging for me. Claude taught me a lot.

I think the real danger is no longer caring about what you're doing. Yesterday I just pointed Claude at my static site generator and told it to clean it up. I wanted to care but... I didn't.

CharlieDigital

6 hours ago

I recently saw a preserved letterpress printing press in person and couldn't help but think of the parallels to the current shift in software engineering. The letterpress allowed for the mass production of printed copies, trading the intensive human labor of manual copying for typesetting on the press.

Yet what did not change in this process is that it only made the production of the text more efficient; the act of writing, constructing a compelling narrative plot, and telling a story were not changed by this revolution.

Bad writers are still bad writers; good writers still have a superior understanding of how to construct a plot. The technological ability to produce text faster never really changed what we consider "good" and "bad" in written literature; it just allowed more people to produce it.

It is hard to tell if large language models can ever reach a state where it will have "good taste" (I suspect not). It will always reflect the taste and skill of the operator to some extent. Just because it allows you to produce more code faster does not mean it allows you to create a better product or better code. You still need to have good taste to create the structure of the product or codebase; you still have to understand the limitations of one architectural decision over another when the output is operationalized and run in production.

The AI industry is a lot of hype right now because they need you to believe that this is no longer relevant. That Garry Tan producing 37,000 LoC/day somehow equates to producing value. That a swarm of agents can produce a useful browser or kernel compiler.

Yet if you just peek behind the curtains at the Claude Code repo and see the pile of unresolved issues, regressions, missing features, half-baked features, and so on, the limitations seem plainly obvious: Anthropic, with functionally unlimited tokens and frontier models, cannot use them to triage and fix their own product.

AI and coding agents are like the printing press in some ways. Yes, it takes some costs out of a labor intensive production process, but that doesn't mean that what is produced is of any value if the creator on the other end doesn't understand the structure of the plot and the underlying mechanics (be it of storytelling or system architecture).

cbushko

3 hours ago

This article makes the assumption that Bob was doing absolutely nothing, maybe at the pub with his friends, while the AI did all his work.

How do we know that, while the AI was writing Python scripts, Bob wasn't reading more papers, getting more data, and overall doing more than Alice?

Maybe Bob is terrible at debugging python scripts while Alice is a pro at it?

Maybe Bob used his time to develop different skills that Alice couldn't dream of?

Maybe Bob will discover new techniques or ideas because he didn't follow the traditional research path that the established Researchers insist you follow?

Maybe Bob used the AI to learn even more because he had a customized tutor at his disposal?

Or maybe Bob just spent more time at the Pub with his friends.

lxgr

5 hours ago

> for someone who doesn't yet have that intuition, the grunt work is the work

Very well said. I think people are about to realize how incredibly fortunate and exceptional it is to actually get paid, and in our industry very well, through a significant fraction of one's career while still "just" doing the grunt work, that arguably benefits the person doing it at least as much as the employer.

A stable paid demand for "first-year grad student level work" or the equivalent for a given industry is probably not the only possible way to maintain a steady supply of experts (there's always the option of immense amounts of student debt or public funding, after all), but it sure seems like a load-bearing one in so many industries and professions.

At the very least, such work being directly paid has the immense advantage of making artificially (often without any bad intentions!) created bullshit tasks that don't exercise actually relevant skillsets, or exercise the wrong ones, much easier to spot.

throwaway132448

7 hours ago

The flip side I don’t see mentioned very often is that having a product where you know how the code works becomes its own competitive advantage. Better reliability, faster fixes and iteration, deeper and broader capabilities that allow you to be disruptive while everything else is being built towards the mean, etc etc. Maybe we’ve not been in this new age for long enough for that to be reflected in people’s purchasing criteria, but I’m quite looking forward to fending off AI-built competitors with this edge.

toniantunovi

3 hours ago

The coding-specific version of this is worth naming precisely. The drift does not happen because you stop writing code. It happens because you stop reading the output carefully. With AI-generated code, there is a particular failure mode: the code is plausible enough to pass a quick review and tests pass, so you ship it. The understanding degradation is cumulative and invisible until it is not. The partial fix is making automated checks independent of the developer's attention level: type checking, SAST, dependency analysis, and coverage gates that run regardless of how carefully you reviewed the diff. These are not a substitute for understanding, but they create a floor below which "comfortable drift" cannot silently carry you. The question worth asking of any AI coding workflow is whether that floor exists and where it is.
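As a concrete sketch of such a floor, here is what one coverage gate could look like. This is purely illustrative: the Cobertura-style `coverage.xml` report format, the file name, and the threshold are my own assumptions, not something a specific tool or the workflow above prescribes.

```python
"""A minimal sketch of one 'floor' check: fail the pipeline when line
coverage drops below a threshold, regardless of how carefully the diff
was reviewed. Report format and threshold are illustrative assumptions."""
import xml.etree.ElementTree as ET

THRESHOLD = 0.80  # project-specific; an assumption for this sketch

def coverage_ratio(report_path: str) -> float:
    # Cobertura-style coverage.xml carries a line-rate attribute on its root.
    root = ET.parse(report_path).getroot()
    return float(root.get("line-rate", 0.0))

def gate(report_path: str, threshold: float = THRESHOLD) -> bool:
    """True if coverage meets the floor; CI fails the build otherwise."""
    return coverage_ratio(report_path) >= threshold
```

Wired into CI as a required check, something like this runs whether or not anyone read the diff carefully, which is the point: not a substitute for understanding, just a floor.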

patcon

7 hours ago

The exciting and interesting thing to me is that we'll probably need to engage "chaos engineering" principles and encode intentional fallibility into these agents to keep us (and them) good collaborators and, specifically, on our toes, helping all minds stay alert and plastic.

If that comes to pass, we'll be rediscovering the same principles that biological evolution stumbled upon: the benefits of the imperfect "branch" or "successive limited comparison" approach of agentic behaviour, which perhaps favours heuristics (that clearly sometimes fail), interaction between imperfect collaborators with non-overlapping biases, etc etc

https://contraptions.venkateshrao.com/p/massed-muddler-intel...

> Lindblom’s paper identifies two patterns of agentic behavior, “root” (or rational-comprehensive) and “branch” (or successive limited comparisons), and argues that in complicated messy circumstances requiring coordinated action at scale, the way actually effective humans operate is the branch method, which looks like “muddling through” but gradually gets there, where the root method fails entirely.

FrojoS

6 hours ago

Every PhD program I'm aware of has a final hurdle known as the defence. You have to present your thesis standing in front of a committee, and often the local community and public. They will ask questions, and too many "I don't know"s or false answers will make you fail. So there is already a system in place that should stop Bob from graduating if he indeed learned much less than Alice. A similar argument can be made for conference publications: if Bob publishes his first-year project at a conference but doesn't actually understand "his own work", it will show.

The difficulty of passing the defence varies wildly between universities, departments, and committees. Some are very serious affairs with a decent chance of failure, while others are more of a show event for friends and family. Mine was more of the latter, but I doubt I would have passed that day if I had spent the previous years prompting instead of doing the grunt work.

ipaddr

5 hours ago

In the future, the LLMs can answer those questions for you by listening and feeding answers into your headset.

The process you describe is a gate keeping exercise, which will change to include LLM judges at some point.

FrojoS

5 hours ago

That would be cheating. If the exam is 'gate keeping', I will say that it is a gate worth keeping.

To be clear, I am not against alternative forms of education. Degrees are optional. But if you want a degree, there have to be exams and cheating has to be prevented.

steveBK123

6 hours ago

I agree with the general premise - the risk is we don’t develop juniors (new Alices) anymore, and at some point people are just sloperators gluing together bits of LLM output they do not understand.

I have seen versions of this in the wild, where a firm has gone through hard times and systems have lost all their original authors and every subsequent generation of maintainers, leaving people in awe of a machine that hasn't been maintained in a decade.

I interviewed a guy once that genuinely was proud of himself, volunteering the information to me as he described resolving a segfault in a live trading system by putting kill -9 in a cronjob. Ghastly.

visarga

5 hours ago

> Whether that student walks out the door five years later as an independent thinker or a competent prompt engineer is, institutionally speaking, irrelevant.

I think this is a simplification. Of course Bob relied on AI, but he also used his own brain to think about the problem. Bob is not reducible to "a competent prompt engineer"; if you think he is, take any person who prompts but knows nothing about physics and ask them to do Bob's work.

In fact, Bob might have a chance to cover more mileage at the higher level of the work while Alice does the same at the lower level. Which is better? It depends on how AI evolves.

The article assumes the alternative to AI-assisted work is careful human work. I am not sure careful human work is all that good, or that it will scale well in the future. Better to rely on AI on top of careful human work.

My objection comes from remembering how senior devs review PRs: "LGTM". It's pure vibes. To seriously review a PR you have to run it, test it, check its edge cases, and eval its performance - more work than making the PR itself. The entire history of software is littered with bugs that sailed through review, because review is performative most of the time.

Anyone remember the replication crisis in science?

lo_zamoyski

2 minutes ago

Education lost the plot years ago. AI is a kind of final nail in that coffin. While we may lament the ravages of AI, I expect there is a kind of providential silver lining in that it may cleanse the rot plaguing education. Just as postmodernism - itself full of errors - is like an enema that is clearing out the disease of modernism and will flush itself out in the process, so, too, AI may be just the purgative we need to force us back to a norm more fittingly called "education".

One of the marks of an educated person is the ability to dispassionately think from first principles. It is not a sufficient criterion, but it is a necessary one. In this case, the basic questions we must ask are: what is education, and what is education for?

An instrumentalist view of education, the one that has claimed the soul of the modern university and primary education, tells us that education is about preparing for a career - preparing to be an economic actor - and about the effect you can have. In short, it is about practical power and economic utility.

Now, the power to be able to do good things, to be practically able, is a good thing as such, and indeed one does acquire facility during one's education. (And I would argue schooling today isn't great at practicality either.) But the practical, unlike the theoretical, is always about something else. It is never for its own sake. What this means is that there must be a terminus. You cannot have an infinite regress of practical ends, because the justification for any practical end is not found in itself. And if the primary proximate end of education is the career, then what distinguishes education from training? Nothing. What's more, if you then ask what the purpose of a career is, you find it is about consumption. So education today is about enabling people to be consumers. You wish to be effective so you can be paid more so you can buy more crap. Pure nihilism.

True education is best captured by the classical liberal arts, which is to say the free arts. Human beings are intellectual and moral creatures. The purpose of education is to free a person to be more human, to free them to be able to reason effectively and competently for the sake of wisdom and for the sake of living wisely. In other words, it is about becoming what you ought to become as a human being in the most definitive sense.

What good does AI do you if you haven’t become a better version of yourself in the process? So AI writes a paper for you. So what? The purpose of the paper is not the paper, but the knowledge, understanding, and insight that results from writing it.

omega3

6 hours ago

I wonder what effect AI had on online education - course signups, new resources being added etc.

I’ve recently started csprimer and whilst mentally stimulating I wonder if I’m not completely wasting my time.

ahussain

5 hours ago

> When his supervisor sent him a paper to read, Bob asked the agent to summarize it. When he needed to understand a new statistical method, he asked the agent to explain it. When his Python code broke, the agent debugged it. When the agent's fix introduced a new bug, it debugged that too. When it came time to write the paper, the agent wrote it. Bob's weekly updates to his supervisor were indistinguishable from Alice's.

In my experience, doing these things with the right intentions can actually improve understanding faster than not using them. When studying physics I would sometimes get stuck on small details - e.g. what algebraic rule was used to get from Eq 2.1 to 2.2? what happens if this was d^2 instead of d^3 etc. Textbooks don't have space to answer all these small questions, but LLMs can, and help the student continue making progress.

Also, it seems hard to imagine that Alice and Bob's weekly updates would be indistinguishable if Bob didn't actually understand what he was working on.

sumeno

5 hours ago

Faster doesn't always mean better. I've "learned" things from LLM really fast, but I don't retain the information the same way as if I had taken my time to really work through it

bwfan123

4 hours ago

> The problem isn't that we'll decide to stop thinking. The problem is that we'll barely notice when we do

Most of what we call thinking merely justifies beliefs that make us emotionally happy and is not creative per se. I am making a distinction between "thinking" as we know it and "creative thinking", which is rare and can see things in an unbiased manner, breaking out of known categories. Arguably, at the PhD level, there need to be new ideas instead of remixes of the existing ones.

pbw

6 hours ago

There's certainly a risk that an individual will rely too much on AI, to the detriment of their ability to understand things. However, I think there are obvious counter-measures. For example, requiring that the student can explain every single intermediate step and every single figure in detail.

A two-hour thesis defense isn't enough to uncover this, but a 40-hour deep probing examination by an AI might be. And the thesis committee gets a "highlight reel" of all the places the student fell short.

The general pattern is: "Suppose we change nothing but add extensive use of AI, look how everything falls apart." When in reality, science and education are complex adaptive systems that will change as much as needed to absorb the impact of AI.

pwr1

3 hours ago

I catch myself doing this more than I'd like to admit. Copy something from an LLM, it works, ship it, move on. Then a week later something breaks and I realize I have no idea what that code actually does! The speed is addicting, but you're slowly trading depth for velocity, and at some point that bill comes due.

__MatrixMan__

6 hours ago

But aren't you still going to have to convince other people to let you do it with their money/data/hardware/etc? The understanding necessary to make that argument well is pretty deep and is unaffected by AI.

I've been having a lot of fun vibe coding little interactive data visualizations, so when I present the feature to stakeholders they can fiddle with it and really understand how it relates to existing data. I saw the agent leave a comment regarding Cramer's rule, and yeah, it's a bit unsettling that I forgot what that is and haven't bothered to look it up, but I can tell from the graphs that it's doing the correct thing.

There's now a larger gap between me and the code, but the chasm between me and the stakeholders is getting smaller and so far that feels like an improvement.

danielbln

6 hours ago

Every AI/agentic thread on HN follows the same tension: builders want to build and solve problems. Code or task completion are implementation details to be done on the path to the actual prize: solving the problem. And then there are the coders, that have honed their mechanical skill of implementation and derive their intellectual fulfillment from that. The latter crowd has a rough time because much of it can be automated now, the former camp is happy because look at all the stuff that can now be built!

inatreecrown2

7 hours ago

Using AI to solve a task does not give you experience in solving the task; it gives you experience in using AI.

sunir

5 hours ago

I think the mountain of things I don’t understand was already huge. It doesn’t stop me from getting a grip over the things I need to be responsible for and using tools to contain complexity irrelevant to me. Like many scientists have a stats person.

The risk is that civilization is over its skis because humans are lazy. Humans are always lazy. In science there’s a limit to bs because dependent works fail. In economics there’s a crash. In physics stuff breaks. Then there is a correction.

ChrisMarshallNY

5 hours ago

This is not wrong, but the "Bob and Alice" conundrum is not simple, either.

In academia, understanding is vital. The same for research.

But in production, results are what matters.

Alice would be a better researcher, but Bob would be a better producer. He knows how to wrangle the tools.

Each has its value. Many researchers develop marvelous ideas, but struggle to commercialize them, while production-oriented engineers, struggle to come up with the ideas.

You need both.

bwfan123

3 hours ago

> You need both.

yea, there are multiple parts to education: 1) teach skills useful to the economy, 2) teach the theories of the subject, and finally 3) tweak existing theories and create new ones. An electrician can fix problems without understanding the theory of electromagnetism; these are the trades folks. An EE college graduate has presumably understood some theory and can apply it in different useful ways; these are the engineers. Finally, there are folks who not only understand the theory of the craft but can tweak it creatively for the future; these are the researchers.

Bob better fits as a trades-person or engineer whereas Alice fits better as a researcher.

cmiles74

5 hours ago

I have to disagree that Bob will be a better producer, although I do agree that Bob will produce more. In this scenario, Bob isn't clear on which LLM output is valid and important and which is erroneous and misleading; I think that's a pretty critical distinction. It's the kind of thing that might go undetected for a long time, until a particular paper turns out to be important and it's discovered that it's also entirely wrong, wasting a lot of time and energy.

ChrisMarshallNY

3 hours ago

Sounds like you're still thinking of Bob as a researcher.

In production, there would be no "paper"; just some software/hardware product.

If there was a problem, that would be fairly obvious, with testing (we are going to be testing our products, right?).

I have been wrestling all morning, with an LLM. It keeps suggesting stuff that doesn't work, and I need to keep resetting the context.

I am often able to go in, and see what the issue is, but that's almost worthless. The most productive thing that I can do, is tell the LLM what is the problem, on the output end, and ask it to review and fix. I can highlight possible causes, but it often finds corner cases that I miss. I have to be careful not to be too dictatorial.

It's frustrating, as the LLM is like a junior programmer, but I can make suggestions that radically improve the result, and the total time is reduced drastically. I have gotten done, in about two hours, what might have taken all day.

nothinkjustai

3 hours ago

If results are what matters, why is popular software so buggy and lacking in features?

ChrisMarshallNY

2 hours ago

Because people will pay for crap.

As long as that’s the case, those that create crap will thrive.

Pretty basic, and long predates LLMs.

shellkr

6 hours ago

This is almost the same as going from making fire with a stick to using a lighter. Sure, it is simplified, but still not wrong. Humans doing the grunt work can still make mistakes, as can the machine, but the machine will eventually discover them; the same cannot be said of the human, because the work needed to do so might be too much. In the end we might not learn as much, but it will not matter, and thus it is really not an issue.

techblueberry

6 hours ago

I think I disagree. In what I see around me, it's less like going from fire to lighter and more like going from hand tools to power tools. If your skill was in understanding how the hand tools work, it's harder to get a level of abstraction up and have a vision for building a house. If we're not able to learn, then fewer people are going to be able to get that vision, especially in technical domains where engineering and architecture matter. It's going to be a weird future. I'm pretty effective with these tools, but I have fifteen years of hacking on things manually behind me. Some folks who are not as far into their careers don't seem to know where to start.

There’s a reason most people aren’t promoted to manager until they have years of experience under their belt. And now we’re expecting folks to be managers on day 1.

txrx0000

4 hours ago

The threat is that if you replace your cognitive capabilities with AI but don't control the entire system your AI runs on (hardware, firmware, drivers, OS, weights, frontend), that's equivalent to someone else owning a part of your brain.

grafelic

7 hours ago

"He shipped a product, but he didn't learn a trade." I think is the key quote from this article, and encapsulates the core problem with AI agents in any skill-based field.

tmountain

6 hours ago

Thankfully, I am nearing the end of my career with software after 25 years well spent. If I had been born in a different decade, I would be facing the brunt of the AI shift, and I don’t think I would want to continue in the industry. Obviously, this is a personal decision, but we are in a totally different domain now, where, at best, you’re managing an LLM to deliver your product.

lambdaone

6 hours ago

Very insightful. One key sentence sums it up: "He shipped a product, but he didn't learn a trade."

This is going to get worse, and eventually cause disastrous damage unless we do something about it, as we risk losing human institutional memory across just about every domain, and end up as child-like supplicants to the machines.

But as the article says, this is a people problem, not a machine problem.

Lerc

6 hours ago

The problem I see with this argument is that the ship sailed on understanding what you are doing years ago. It seems like it is abstraction layers all the way down.

If an AI is capable of producing an elegant solution with fewer levels of abstraction it could be possible that we end up drifting towards having a better understanding of what's going on.

somethingsome

6 hours ago

Personally, I wrote an essay for my students explaining exactly that: the purpose is for them to think better and improve over time; they can use LLMs, but if they stop thinking, they are just failing themselves, not me.

It had great success: now, when I propose that they use some model to do something, they tend to avoid it.

hgo

7 hours ago

I like this article and it reads well, but I have to say, that to me it really reads as something written by an LLM. Probably under supervision by a human that knew what it should say.

I don't know if I mind.

Example: this paragraph, to me, has an eerily perfect rhythm. The ending sentence perfectly delivers the twist. Like, why would you write an argument piece in the science realm in perfect prose?

> Unlike Alice, who spent the year reading papers with a pencil in hand, scribbling notes in the margins, getting confused, re-reading, looking things up, and slowly assembling a working understanding of her corner of the field, Bob has been using an AI agent. When his supervisor sent him a paper to read, Bob asked the agent to summarize it. When he needed to understand a new statistical method, he asked the agent to explain it. When his Python code broke, the agent debugged it. When the agent's fix introduced a new bug, it debugged that too. When it came time to write the paper, the agent wrote it. Bob's weekly updates to his supervisor were indistinguishable from Alice's. The questions were similar. The progress was similar. The trajectory, from the outside, was identical.

zozbot234

5 hours ago

> The questions were similar. The progress was similar. The trajectory, from the outside, was identical.

LLM speak. But the rest of that quote doesn't look LLM-generated, it's too fiddly and complex of an argument. I think this was edited with AI, but the underlying argument at least is human.

kelnos

6 hours ago

Or maybe the author is just a competent writer.

hgo

6 hours ago

Yes. Let's assume so. My point is the suspicion itself.

swiftcoder

6 hours ago

> why would you write in perfect prose

If you could, why wouldn't you? LLM witch-hunts over every halfway competent writer are becoming quite tiresome

hgo

6 hours ago

Yes, I agree, and I use LLMs in my own writing. I raise it because it was eerie to me as a reader, and I wonder if it's a common thought. I wonder what other readers think on this matter.

Again, I appreciate the article very much and I'm glad the other comments are on the article's content.

bluedino

5 hours ago

Look at how bad the auto industry has gotten when it comes to quality and recalls.

A combination of beancounters running the show and the old, experienced engineers dying, retiring, or taking buyouts has left things in a pretty sad state.

bambushu

6 hours ago

The letterpress analogy is good but misses something. With letterpress you lost a craft skill. With AI coding you risk losing the ability to evaluate the output. Those are different problems.

I use AI agents for coding every day. The agent handles boilerplate and scaffolding faster than I ever could. But when it produces a subtle architectural mistake, you need enough understanding to catch it. The agent won't tell you it made a bad choice.

What actually helps is building review into the workflow. I run automated code reviews on everything the agent produces before it ships. Not because the code is bad, usually it's fine. But the one time in ten that it isn't, you need someone who understands what the code should be doing.
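A tiny sketch of what one automated step in that review workflow could look like (the pattern list and names here are my own illustration, not any specific tool): scan only the added lines of a diff for things that deserve a human look before agent-written code ships.

```python
"""Illustrative sketch: flag risky patterns in the added lines of a
unified diff, as one automated review step before agent output ships.
Pattern list and function names are hypothetical examples."""
import re

RISK_PATTERNS = {
    "broad except": re.compile(r"except\s*(Exception)?\s*:"),
    "eval/exec": re.compile(r"\b(eval|exec)\("),
    "silenced linter": re.compile(r"#\s*(noqa|type:\s*ignore)"),
}

def flag_diff(diff_text: str) -> list[str]:
    """Return labels of risky patterns found in added ('+') lines only."""
    added = [line[1:] for line in diff_text.splitlines()
             if line.startswith("+") and not line.startswith("+++")]
    return [label for label, pattern in RISK_PATTERNS.items()
            if any(pattern.search(line) for line in added)]
```

Nothing here replaces a reviewer who knows what the code should be doing; it just guarantees the one-in-ten bad diff gets a second look.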

MarcelinoGMX3C

3 hours ago

Frankly, the "AI as accelerant" argument, as fomoz puts it, holds true only when you have a solid understanding of the domain. In enterprise system builds, we don't often encounter theoretical physics where errors might lead to a broken model rather than a broken system. Instead, a faked coefficient from an LLM could mean a production outage.

It's why I push for a hybrid mentor-apprentice model. We need to actively cultivate the next generation of "Schwartzes" with hands-on, critical thinking before throwing them into LLM-driven environments. The current incentive structure, as conception points out, isn't set up for this, but it's crucial if we want to avoid building on sand.

dwa3592

4 hours ago

What a wonderful read. Thank you!

The way I think about this is : We can't catch the hallucinations that we don't know are hallucinations.

patapong

6 hours ago

I think this is a very important debate, and I think the author here adds a lot to this discussion! I mostly agree with it, but wanted to point out a few areas where I do not fully agree.

> Take away the agent, and Bob is still a first-year student who hasn't started yet.

This may be true, but I can see almost no conceivable world where the agent will be taken away. I think we should evaluate Bob's ability based on what he can do with an agent, not without, and here he seems to be doing quite well.

> I've been hearing "just wait" since 2023.

On almost any timeline, this is very short. Given the fact that we have already arrived at models able to almost build complete computer programs based on a single prompt, and solve frontier level math problems, I think any framework that relies on humans continuing to have an edge over LLMs in the medium term may be built on shaky grounds.

Two very interesting questions today in this vein for me are:

- Is the best way to teach complex topics to students today to have them carry out simple tasks?

The author acknowledges that the difference between Bob and Alice only materializes at a very high level, basically when Alice becomes a PI of her own. If we were solely focused on teaching thinking at this level (with access to LLMs), how would we frame the educational path? It may look exactly like it does now, but it could also look very differently.

- Is there inherent value in humans learning specific skills?

If we get to a stage where LLMs can carry out most/all intellectual tasks better than humans, do we still want humans to learn these skills? My belief is yes, but I am frankly not sure how to motivate this answer.

ThrowawayR2

3 hours ago

> "no conceivable word where the agent will be taken away"

LLM access is a paid service. HN concerns itself with inequality constantly and it's not inconceivable that some individuals get ahead because they can afford to pay for more tokens and better models than those who are poorer.

talkingtab

4 hours ago

This "drift" is not a drift at all, nor is it new. There are many names for it: cargo cults, think-by-numbers (like paint-by-numbers), ant mills. It is recipes, and many, many common recipes demonstrate a widespread lack of understanding.

This follow-the-leader kind of "thinking" is probably a requirement. The amount of expertise it would take to understand and decide about everything in our daily life is overwhelming. Do you fix your own car, decide each day how to travel, get food, and understand how all that works? No.

So what is the problem? The problem is when you follow the leader and the leader has an agenda that differs from yours. Do you really think Jeff Bezos being a (the?) major investor in the Washington Post has anything to do with democracy? You know, as in the WaPo slogan "Democracy Dies in Darkness".

Does Jeff have an agenda that differs from yours? Yes. NYT? Yes. Hacker news? Yes. Google? Yes. We now live in a world so filled with propaganda that it makes no difference whether something is AI. We all "follow". Or not.

djoldman

8 hours ago

These themes have been going around and around for a while.

One thing I've seen asserted:

> What he demonstrated is that Claude can, with detailed supervision, produce a technically rigorous physics paper. What he actually demonstrated, if you read carefully, is that the supervision is the physics. Claude produced a complete first draft in three days... The equations seemed right... Then Schwartz read it, and it was wrong... It faked results. It invented coefficients...

The argument that AI output isn't good enough is somewhat in opposition to the idea that we need to worry about folks losing or never gaining skills/knowledge.

There are ways around this:

"It's only evident to experts and there won't be experts if students don't learn"

But at the end of the day, in the long run, the ideas and results that last are the ones that work. By work, I mean ones that strictly improve outcomes (all outputs the same, with at least one better). This is because, with respect to technological progress, humans are pretty well modeled as a slightly-better-than-random search for optimal decisions, one that tends not to go backwards permanently.

All that to say that, at times, AI is one of the many things we've come up with that is wrong. At times, it's right. If it helps in aggregate, we'll probably adopt it permanently, until we find something strictly better.

jacquesm

7 hours ago

AI is extremely good at producing well-formatted bullshit. You need to be constantly on guard against stuff that sounds and looks right but is ultimately just noise. You can also waste a ton of time on this. OpenAI's offering in particular shows poorly in this respect: it keeps circling back to its own comfort zone to show off some piece of code or some concept it knows a lot about, while avoiding the actual question. It's really good at jumping to the wrong conclusions (and making them sound like profound insights). But the few times it is on the money make up for all of that noise. Even so, I could do without the wasted time and the endless back-and-forths correcting the same stuff over and over again; it is extremely tedious.

jerkstate

7 hours ago

Nobody actually understands what they're doing. When you're learning electronics, you first learn the "lumped element model", which simplifies Maxwell's equations. I think it is a mistake to equate solving problems with a programming language with "knowing how to do things": at this point, we've already abstracted assembly language -> machine instructions -> logic gates and buses -> transistors and electronic storage -> lumped matter -> quantum mechanics -> ???? - so I simply don't buy the argument that things will suddenly fall apart by abstracting one level higher. The trick is to get this new level of abstraction to work predictably, which admittedly it doesn't yet, but look how far it's come in a short couple of years.

This article first says that you give juniors well-defined projects and let them take a long time because the process is the product. Then it goes on to lament that they will no longer have to debug Python code, as if debugging Python code is the point of it all. The thing that LLMs can't yet do is pick a high-level direction for a novel problem and iterate until the correct solution is reached. They absolutely can and do iterate until a solution is reached, but it's not necessarily correct. Previously, guiding the direction was the job of the professor. Now, in a smaller sense, the grad student needs to guide the direction and validate the details, rather than implementing the details with the professor guiding the direction. This is an improvement: everybody levels up.

I also disagree with the premise that the primary product of astrophysics is scientists. Like any advanced science, it requires a lot of scientists to make the breakthroughs that trickle down into technology that improves everyday life; without them, those breakthroughs would be impossible. Gauss worked out the normal distribution while trying to understand astronomical measurement error. Without general relativity we would not have GPS or precision timekeeping. Astrophysics uncovers the rules that will allow interplanetary travel. Understanding the composition and behavior of stars informs nuclear physics, reactor design, and solar panel design. The computation systems used by advanced science prototyped many commercial advances in computing (HPC, cluster computing, AI itself).

So not only are we developing the tools to improve our understanding of the universe faster, we're leveling everybody up. Students will take on the role of professors (badly at first, but are professors good at first? Probably not; they need time to learn under the guidance of other faculty). Professors will take on the role of directors. Everybody's scope will widen because the tiny details will be handled by AI, but the big picture will still be in the domain of humans.

saulpw

4 hours ago

> as if debugging python code is the point of it all.

You have a good point, but I would argue that debugging itself is a foundational skill. Imagine Sherlock Holmes with access to any modern crime-fighting technology, using it extensively. If Sherlock is not using his deductive reasoning, then he's not a 'detective'. He's just some schmuck who has a cool device to find the right/wrong person to arrest.

Debugging is "problem-solving" in a specific domain. Sure, if the problem is solved, then I guess that's the point of it all and you don't have to solve it yourself. But we're heading toward a world in which people have to solve problems, yet their only problem-solving skill is trying to get an AI to find someone to arrest. We need more Sherlocks who use their minds to get to the bottom of things, not more idiot cops who arrest the wrong person because the AI told them to.

mikeaskew4

7 hours ago

“The world still needs empirical thinkers, Danny.”

- Caddyshack

efields

7 hours ago

I literally don't know how compilers work. I've written code for apps that are still in production 10 years later.

Herbstluft

7 hours ago

Are you working on compilers? If not, it seems you did not understand what is being talked about here.

Do you lack fundamental understanding of those apps you built that are still in use? Did you lack understanding of their workings when you built them?

layer8

7 hours ago

You don’t need to understand compilers because code that is valid according to the language specification is supposed to work as written, and virtually always does. There is no language specification and no “as written” with LLMs.

wglb

6 hours ago

No problem with that.

However, at one point in my career, I was frustrated with the limitations of a language (Fortran II); my curiosity got the better of me and I studied compilers thoroughly.

This led to a new job and the understanding of many new useful programming concepts. Very rewarding.

But if you are curious, studying compilers, maybe even writing a new one, will give you tools to do other things.

While working with LLMs, much of my experience gives me new ideas to push the LLM to explore.

bakugo

7 hours ago

Have you written a compiler, though?

BobBagwill

6 hours ago

Try giving this problem to different LLM chatbots:

If I could make a rocket that could accelerate at 3 Gs for 10 years, how long would it take to travel from Earth to Alpha Centauri by accelerating at 3 Gs for half the time, then decelerating at 3 Gs for half the time?

Hint: They don't all get it right. Some of them never got it right, even after hints, corrections, etc.
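For reference, here is a short sketch of what the correct answer looks like, using the standard special-relativistic constant-proper-acceleration formulas. The 4.37 light-year distance to Alpha Centauri and the exact constants are assumed inputs, not part of the original problem statement:

```python
import math

# Assumed constants (SI units)
g = 9.80665                 # standard gravity, m/s^2
a = 3 * g                   # proper acceleration: 3 g
c = 299_792_458.0           # speed of light, m/s
LY = 9.4607e15              # one light-year, m
YEAR = 365.25 * 24 * 3600   # one Julian year, s

d = 4.37 * LY               # assumed Earth -> Alpha Centauri distance
x = d / 2                   # accelerate over the first half, decelerate over the second

# Relativistic rocket results for one half-leg of length x at constant
# proper acceleration a, starting from rest:
#   Earth-frame (coordinate) time: t = sqrt((x/c)^2 + 2x/a)
#   Ship (proper) time:            tau = (c/a) * acosh(a*x/c^2 + 1)
t_half = math.sqrt((x / c) ** 2 + 2 * x / a)
tau_half = (c / a) * math.acosh(a * x / c**2 + 1)

print(f"Earth-frame time: {2 * t_half / YEAR:.2f} years")  # roughly 5 years
print(f"Ship time:        {2 * tau_half / YEAR:.2f} years")  # roughly 1.8 years
```

So the trip takes about five years as seen from Earth, but under two years of ship time; an answer that treats 3 g as Newtonian for the whole trip (and blows through the speed of light) is the giveaway that the chatbot got it wrong.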

tom-blk

7 hours ago

Strongly agree, we see this almost everywhere now.

ghc

8 hours ago

As straw men go, this is an attractive one, but...

When I was fresh out of undergrad, joining a new lab, I followed a similar arc. I made mistakes, I took the wrong lessons from grad student code that came before mine, I used the wrong plotting libraries, I hijacked Python's module import logic to embed a new language in its bytecode. These were all avoidable mistakes, and I didn't learn anything except that I should have asked for help. Others in my lab, who were less self-reliant, asked for and got help avoiding the kinds of mistakes I confidently made.

With 15 more years of experience, I can see in hindsight that I should have asked for help more frequently because I spent more time learning what not to do than learning the right things.

If I had Claude Code, would I have made the same mistakes? Absolutely not! Would I have asked it to summarize research papers for me and to essentially think for me? Absolutely not!

My mother, an English professor, levies similar accusations about the students of today, and how they let models think for them. It's genuinely concerning, of course, but I can't help but think that this phenomenon occurs because learning institutions have not adjusted to the new technology.

If the goal is to produce scientists, PIs are going to need to stop complaining and figure out how to produce scientists who learn the skills that I did even when LLMs are available. Frankly I don't see how LLMs are different from asking other lab members for help, except that LLMs have infinite patience and don't have their own research that needs doing.

jacquesm

7 hours ago

AI does not give you knowledge. It magnifies both intelligence and stupidity, with zero bias towards either. If you are of above-average intelligence, you may be able to do a little more than before, assuming you were trained before AI came along. And if you are not so smart, you will be able to make larger messes.

The problem, and I think the article indirectly points at this, is that the next generation won't learn to think for themselves first. So they will, on average, end up on the 'B' track rather than developing their intelligence. I see this happening with the kids my kids hang out with. They don't want to understand anything because the AI can do it for them, or so they believe. They don't see that if you never learn to think through smaller problems, the larger ones will be completely out of reach.

thijson

6 hours ago

Maybe the solution is an AI that acts as an instructor instead of just solving everything itself. I do this with my kids: when they ask me how to do something, I give them hints rather than doing it all for them. The author mentioned in the first part of the article that this is how they would instruct too.

skydhash

7 hours ago

Students are given student-level problems not because someone wants the results, but because solving them teaches how problem-solving works. Solving those easy problems with an LLM does not help anyone.

squirrel

7 hours ago

The article is well-written and makes cogent points about why we need "centaurs", human/computer hybrids who combine silicon- and carbon-based reasoning.

Interestingly, the text has a number of AI-like writing artifacts, e.g. frequent use of the pattern "The problem isn't X. The problem is Y." Unlike much of the typical slop I see, I read it to the end and found it insightful.

I think that's because the author worked with an AI exactly as he advocates, providing the deep thinking and leaving some of the routine exposition to the bot.

robot-wrangler

7 hours ago

Another threat is that you can find tons of papers pointing out how neural AI still struggles to handle simple logical negation. Who cares, right? We use tools for symbolics, yada yada. Except what's really the plan? Are we going to attempt parallel formalized representations of every piece of input context just to flag the difference between "please DON'T delete my files" and "please DO"? This is all super boring though, and nothing bad has happened lately, so back to perusing the latest AGI benchmarks..

fredgrott

4 hours ago

I know how we can fix this....

It's of course devious, exactly some of our styles :)

Give AI to VCs to use for all their domain stuff....

They then make wrong investment decisions based on wrong AI info and get killed in the market....

Market ends up killing AI outright....problem solved temporarily

hnzionists

6 hours ago

Noobs love LLMs because they can finally write for loops and generate absolute trash web pages and UI.

These noobs go “Man this replaces devs!”

Only the experienced ones really see the LLM as the calculator it is.

maplethorpe

6 hours ago

I honestly don't know why this guy is hiring Alice and Bob in the first place, instead of just running two agents. He seemed to be saying it's to invest in them as people, but why? What is the end goal? If the end goal is to produce research, then just get the agents to do it.

simianwords

7 hours ago

> Frank Herbert (yeah, I know I'm a nerd), in God Emperor of Dune, has a character observe: "What do such machines really do? They increase the number of things we can do without thinking. Things we do without thinking; there's the real danger." Herbert was writing science fiction. I'm writing about my office. The distance between those two things has gotten uncomfortably small.

The author is a bit naive here:

1. Society only progresses when people are specialised and can delegate their thinking

2. Specialisation has been happening for millennia. Agriculture allowed people to specialise thanks to an abundance of food

3. We accept delegation of thinking in every part of life. A manager delegates thinking to their subordinates. I delegate some thinking to my accountant

4. People will eventually get the hang of using AI to do the optimum amount of delegation such that they still retain what is necessary and delegate what is not necessary. People who don't do this optimally will get outcompeted

The author just focuses on local problems like skill atrophy but does not see the larger picture, and how this specific pattern has repeated throughout humanity's history.

zajio1am

7 hours ago

A related quote from A. N. Whitehead:

> It is a profoundly erroneous truism ... that we should cultivate the habit of thinking of what we are doing. The precise opposite is the case. Civilization advances by extending the number of important operations which we can perform without thinking about them.

skydhash

7 hours ago

Current civilization is very complex, and it’s also fragile in parts. When you build systems around instant communication and the on-schedule availability of stuff built on the other side of the world, it’s very easy to disrupt.

> 4. People will eventually get the hang of using AI to do the optimum amount of delegation such that they still retain what is necessary and delegate what is not necessary. People who don't do this optimally will get outcompeted

Then they’ll be at the mercy of the online service’s availability and of the company itself. There’s also the non-deterministic output: I can delegate my understanding of some problems to a library, a piece of software, a framework, because their operation is deterministic. Not so with LLMs.

mrugge

6 hours ago

I have been able to produce 20x the useful output, both in my day job and in my free time, using a popular coding agent in 2026. Part of me is uncomfortable at having my hard-won knowledge of how to write English, write code, and design systems partly commoditized. Part of me is amazed and grateful to be in this timeline. I am now learning and building things I only dreamed about for years. The sky is the limit.

simianwords

6 hours ago

When technology progressed enough to allow for

1. outsourcing and offshoring (non-deterministic, easy to disrupt)

2. cloud computing (mercy of the online service availability)

we had the same dilemma.

Outsource exactly what you think is not critical to the business. Offshore enough that you gain good talent across the globe. Use cloud computing so that your company does not spend time solving problems that have already been solved. Assess which skills are required and which aren't: an e-commerce company doesn't need deep expertise in Linux and Postgres.

Companies that do this well outcompete other companies that obsess over details that are not core to their value proposition. This is how modern startups work: it is in finding that critical balance of buying products externally vs building only the crucial skills internally.

lapcat

6 hours ago

I think you missed the point. The entire article is about specialists: astrophysicists. The problem with AI is that specialists are delegating their thinking about their specialty! The fear here is that society will stop producing specialists, and thus society will no longer progress.

simianwords

6 hours ago

You are assuming that the set of specialists is a fixed system! That's not the case. As technology changes, you get more and more specialists, the same way the Agricultural Revolution allowed more specialists to exist.

bojan

6 hours ago

The danger is also for Alice, if she gets moved to Bob's project when things start falling apart in production.

garn810

9 hours ago

Academia has always been full of narcissists chasing status with flashy papers and half-baked brilliant ideas (70%? maybe). LLMs just made the whole game trivial, and now literally anyone can slap together something that sounds deep without ever doing the actual grind. LLMs are just speeding up the process; it's only a matter of time before this exposes what the entire system has been all along.

itmitica

6 hours ago

Contrarian just for the sake of it. Get on board or stay behind. Whatever good or bad AI brings to the table, it's here to stay. The cat's out of the bag. Might as well enjoy it. Evolution will not wait for your whimsical made-up reality. It will run you over.

ergl

18 minutes ago

Do you have any more platitudes to add so I can fill my dismissive HN comment bingo card?

techblueberry

6 hours ago

What if AI in the long run makes us slower and less effective? As someone who is one of the folks supercharged by these tools, I could see it.

I think people are underestimating the level of experience and knowledge required to prompt LLMs, not in the micro sense but in the macro. It seems so easy because it feels easy. But if you don’t have a deep understanding of the domain, it will just feel impossible. The person next to you with domain experience will say, “It’s so easy, look at this simple sentence I typed in” and “It’s just a skill issue, why is everyone struggling so much?”, not understanding the years of accumulated wisdom or innate talent it took to type that simple sentence.

AI makes the easy things easy and the hard things harder, and more omnipresent.

itmitica

2 hours ago

My experience is that AI helps me unload things I am not built for. It allows me to be creative with less drag. Not everyone aims to be a useless human automaton or a useless human thesaurus. It allows me to exercise my intelligence freely.