Karpathy on Programming

37 points, posted 13 hours ago
by rishabhaiover

22 Comments

rishabhaiover

13 hours ago

For the longest time, the joy of creation in programming came from solving hard problems. The pursuit of a challenge meant something. Now, that pursuit seems to be short-circuited by an animated being racing ahead under a different set of incentives. I see a tsunami at the beach, and I’m not sure whether I can run fast enough.

condensedcrab

12 hours ago

Not to mention many companies speedrunning systems of strange and/or perverse incentives with AI adoption.

That being said, Welch’s grape juice hasn’t put Napa valley out of business. Human taste is still the subjective filter that LLMs can only imitate, not replace.

I view LLM-assisted coding (on the sliding scale from vibe coding to fancy autocomplete) as similar to how Ableton and other DAW software have empowered good musicians who might not have made it otherwise due to lack of connections or money, but the music industry hasn’t collapsed completely.

tjr

12 hours ago

In the music world, I would say that, rather than DAWs, LLM-assisted coding is more like LLM-assisted music creation.

m463

3 hours ago

> I can run fast enough.

Can you do some code reviews while you're running?

nextworddev

8 hours ago

(Inception scene) here a minute is seven hours

tjr

12 hours ago

Being a nondeterministic tool, the output for a given input can vary. Rather than having a solid plan of, "if I provide this input, then that will happen", it's more like, "if I do something like this, I can expect something like that, probably, and if not, then try again until it works, I suppose".

What are the productivity gains? Obviously, they must vary. The quality of the tool's output depends on numerous criteria, including what programming language is being used and what problem is being solved. The fact that person A gets a 10x productivity increase on their project does not mean that person B will also get a 10x productivity increase on theirs, no matter how well they use the tool.

But again, tool usage itself is variable. Person A themselves might get a 10x boost one time, and 8x another time, and 4x another time, and 2x another time.

stack_framer

2 hours ago

Also interesting is the possibility that a 10x boost for person A might still be slower than person B not using AI.

general1465

9 hours ago

Nondeterminism in AI feels like a compiler that, given the same input code, spits out a different executable on every run. Fixing bugs will become more like a ritual to satisfy the whims of the machine spirit.

fragmede

5 hours ago

But how different? Compilers do, in fact, spit out different binaries with each run. There are timestamps and other subtle details embedded in them (especially compiler version and linking) that make the same source result in a different binary. "That's different"; "that's not the same thing!" I see you thinking. But as long as the AI prompt "make me a login screen" results in a login screen appropriate for the rest of the code, and not "rm -rf ~/", does it matter if the nondeterminism puts a Google login button before the email login button or after?

grim_io

5 hours ago

Nondeterminism does not imply non-correctness. You can have the LLM produce 10 different outputs, but maybe all 10 are valid solutions. Some might be better suited to certain situations, and some might appeal to different people aesthetically.

tjr

5 hours ago

Nondeterminism indeed does not imply non-correctness.

All ten outputs might be valid. All ten will almost certainly be different -- though even that is not guaranteed.

The OP referred to the notion of there being no manual; we have to figure out how to use the tool ourselves.

A traditional programming tool manual would explain that you can provide input X and expect output Y. Do this, and that will happen. It is not so clear-cut with AI tools, because they are -- by default, in popular configurations -- nondeterministic.
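
The "by default, in popular configurations" point comes down to sampling: most LLM APIs draw the next token at random from the model's probability distribution, while greedy decoding (temperature 0) just takes the most likely token every time. A toy sketch of that distinction, with a made-up `sample_token` helper and a fixed three-token distribution rather than any real model or API:

```python
import random

def sample_token(probs, temperature=1.0):
    """Pick a token index from a probability list (toy version)."""
    if temperature == 0:
        # Greedy decoding: always return the most likely token.
        return max(range(len(probs)), key=lambda i: probs[i])
    # Reweight by temperature, then draw one index at random.
    weights = [p ** (1.0 / temperature) for p in probs]
    total = sum(weights)
    r = random.random() * total
    for i, w in enumerate(weights):
        r -= w
        if r <= 0:
            return i
    return len(probs) - 1

probs = [0.5, 0.3, 0.2]
print(sample_token(probs, temperature=0))    # always index 0
print(sample_token(probs, temperature=1.0))  # varies run to run
```

Same input, different output: the variability the thread is discussing is a configuration default, not something inherent to running a neural network forward pass.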

grim_io

4 hours ago

We are one functional output guarantee away from them being optimizing compilers.

Of course, maybe we never get there :)

tjr

4 hours ago

Why would one opt to use an LLM-based AI tool as a compiler? It seems that would be extraordinarily complex compared to traditional compilers, but for what benefit?

grim_io

3 hours ago

In its ideal state, it would be a compiler from a vague problem to a concrete and robust implementation.

A Star Trek replicator for software.

Obviously we are nowhere near that, and we may never arrive. But this is the big bet.

oakpond

5 hours ago

> There's a new programmable layer of abstraction to master (in addition to the usual layers below) involving agents, subagents, their prompts, contexts, memory, modes, permissions, tools, plugins, skills, hooks, MCP, LSP, slash commands, workflows, IDE integrations, and a need to build an all-encompassing mental model for strengths and pitfalls of fundamentally stochastic, fallible, unintelligible and changing entities suddenly intermingled with what used to be good old fashioned engineering.

Slop-oriented programming

breve

10 hours ago

I'd be more interested in hearing his thoughts on the Full Self-Driving lie he participated in for many years. Does he feel any responsibility at all for lying to Tesla's customers? Did he advocate for them to be refunded, since Tesla had not and still has not delivered what was promised?

fooblaster

10 hours ago

It's clear from listening to podcasts/interviews that he does not want to say anything to get on Elon's bad side. Interviewers also appear not to be eager to broach the subject.

breve

an hour ago

If indeed he doesn't have the heart for basic honesty then why should anyone listen to him about anything?

This is not a high bar. This is not some impossible moral standard to be held to.

This really is an easy one.

dude250711

8 hours ago

Man, this is giving me cognitive dissonance compared to my experiences.

Actually, even the post itself reads like cognitive dissonance, with a dash of the usual "if it's not working for you then you are using it wrong" defence.

credit_guy

7 hours ago

I feel exactly like Karpathy here. I have some work to do, I know exactly what I need to do, and I'm able to explain it to AI, and the AI seems to understand me (I'm lately using Opus 4.5). I wrote down a roadmap; it should take me a few weeks of coding. It feels like, with a proper workflow with AI agents, this work should be doable in one or two days. Yet I know by now that it's not going to be nearly that fast. I'll be lucky if I finish 30% faster than if I just coded the entire damn thing myself.

The thing is, I am a huge AI optimist; I'm not one of the AI skeptics, not even close. Karpathy is not an AI skeptic either. We both feel this sense of possibility, and the fact that we can't make AI help us more is frustrating. That's all. It's not about telling anyone else "it's on you if you can't make it work for you."

I think Karpathy has figured out by now, and I certainly have, that AI skeptics far outnumber AI optimists, and the question has become something akin to a political conviction. It's quite futile to try to change someone's mind about whether AI is good, bad, overhyped, underused, etc. People have picked their side, and that's that.

TeodorDyakov

8 hours ago

I think of it this way: if you dropped Einstein with a time machine two thousand years ago, people would think he was some crazy guy doing scribbles in the sand. No one would ever know how smart he was. The same goes for people and advanced AGI like Gemini 3 Pro or ChatGPT 5.2 Pro. We are just dumber than them.

csto12

7 hours ago

You think they have “advanced AGI” and are worried about keeping up with the software industry? There would be nothing to keep up with at that point.

To use an analogy, it would be like spending all your time before a battle making sure your knife is sharp when your opponent has a tank.