LLMs learn what programmers create, not how programmers work

25 points | posted 9 hours ago
by noemit

Item id: 47494696

3 Comments

shomp

5 hours ago

Great observation. The brain of a programmer is still a "black box" to the feed-forward network of nodes. But in theory, if you pumped a lot of live-coding videos from somewhere like YouTube into the process, you could get a bit of that "what's your approach"-erism to bleed into the model. There might not be enough material there to truly "train it to think," but it would be interesting to try to fill the gaps of black-box-ness in the LLM with supplemental "here was the process that got us there" video feeds.

The next natural move might be recording thousands of hours of footage of developers working with LLMs directly, in Cursor or another IDE with live LLM assistance (calling it "pair programming" may be generous), as a reasonable foray into teaching the next generation of LLMs the "thought process" behind things. In reality you'd be teaching it which files to inspect, which windows to open and close, which tools to switch to and focus on. And while that might be imperfect, it might just be enough.
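One way such process traces might be captured is as a stream of timestamped editor events rather than video. This is a minimal sketch under assumptions: the `EditorEvent` schema and action names are hypothetical, not any real IDE's API — Cursor and VSCode do not expose this exact interface.

```python
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class EditorEvent:
    """One step in a developer's working session (hypothetical schema)."""
    timestamp: float
    action: str   # e.g. "open_file", "switch_tool", "run_tests"
    target: str   # file path or tool name the action applies to

def record_session(events):
    """Serialize a session as JSON Lines, one event per line,
    so it could later serve as a process-level training trace."""
    return "\n".join(json.dumps(asdict(e)) for e in events)

# A toy session: inspect a file, switch to the terminal, run the tests.
session = [
    EditorEvent(time.time(), "open_file", "src/parser.py"),
    EditorEvent(time.time(), "switch_tool", "terminal"),
    EditorEvent(time.time(), "run_tests", "tests/test_parser.py"),
]

print(record_session(session))
```

Structured traces like this would be far cheaper to collect and tokenize than raw screen recordings, at the cost of losing whatever signal lives in the pixels themselves.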

mpalmer

2 hours ago

The novice came to the master. "I have figured it out, the rules for how LLMs understand CLIs. It gives the right commands, but adds colons. It was trained on the visual shape of terminals, not keystrokes."

"Clear the session," the master said. "Run the same prompt again."

The novice pressed return. The model output: `ls -R /tmp`

"The colons are gone," the novice said. "But my theory explained them perfectly."

"You built a cage for a cloud," the master said. "Do not mistake a single roll of the dice for the rulebook."

Art9681

2 hours ago

Is "how programmers work" a useful and provable metric? No? Then it belongs in philosophy discussions. How you work and how I work are different. Your work may have ended up in the LLM training data and mine did not. Or vice versa.

Can you objectively analyze how VSCode adapts to your way of working without your own interference?

Did you test your theory with actual frontier LLMs (which Kimi K2.5 is not, BTW)?