Zigurd
a day ago
I don't want to be that contrarian guy, but I find it energizing to go faster. For example, being able to blast through a list of niggling defects that need to be fixed is no longer a stultifying drag.
I recently used a coding agent on a project where I was using an unfamiliar language, framework, API, and protocol. It was a non-trivial project, and I had to be paying attention to what the agent was doing because it definitely would go off into the weeds fairly often. But not having to spend hours here and there getting up to speed on some mundane but unfamiliar aspect of the implementation really made everything about the experience better.
I even explored some aspects of LLM performance: I could tell that new and fast-changing APIs easily flummox a coding agent, confirming how strongly LLM performance depends on up-to-date and accurate training material. I've also seen this aspect of agent-assisted coding improve over time and vary across AIs.
observationist
a day ago
There's something exhilarating about pushing through to some "everything works like I think it should" point, and you can often get there without doing the conscientious, diligent, methodical "right" way of doing things, and it's only getting easier. At the point where everything works, if it's not just a toy or experiment, you definitely have to go back and understand everything. There will be a ton to fix, and it might take longer to do it like that than just by doing it right the first time.
I'm not a professional SWE, I just know enough to understand what the right processes look like, and vibe coding is awesome but chaotic and messy.
lukan
a day ago
"It was a non-trivial project, and I had to be paying attention to what the agent was doing"
There is a big difference between vibe coding and llm assisted coding and the poster above seems to be aware of it.
closewith
a day ago
Vibe coding is the name for LLM assisted coding, whether you like it or not.
observationist
a day ago
Not all LLM assisted coding is vibe coding. Vibe coding goes something like: throw it a prompt, then repeat "it works" or "it doesn't work" or "it works, but I want Feature XYZ, also" or "ok, but make it better."
Vibe implies missing knowledge or experience - LLMs are a great equalizer, on the level of handguns or industrialization. Everyone can use them, some can use them well, and the value of using them well is enormous.
A real SWE is going to say "ok, now build a new function doing XYZ" and in their agents.md they'll have their entire project specs and baseline prompt framework, with things like "document in the specified standard, create the unit test, document how the test integrates with the existing features, create a followup note on any potential interference with existing functionality, and update the todo.md" and so forth, with all the specific instructions and structure and subtasks and boring project management most of us wouldn't even know to think of. Doing it right takes a lot of setup and understanding how software projects should be run, and using the LLMs to do all the work they excel at doing, and not having them do the work they suck at.
I only know the binary "that works" or not, for most things I've vibe coded. It'd be nice to have the breadth of knowledge and skills to do things right, but I also feel like it'd take a lot of the magic out of it too, lol.
closewith
a day ago
While that was the original intent when Andrej Karpathy coined the term, it's now simply synonymous with LLM assisted coding. Like many previously pejorative terms, it's now become ubiquitous and lost its original meaning.
Nashooo
a day ago
One means (used to mean?) actually checking the LLM's output; the other means just retrying until its output does what you want.
closewith
a day ago
That's the original context of the Andrej Karpathy comment, but it's just synonymous with LLM assisted coding now.
lukan
17 hours ago
Not yet, but the more you insist, the more it will be. But what is your proposal for differentiating between just prompting without looking at the code vs. using an LLM to generate code?
closewith
16 hours ago
I'm not in favour of the definition, but like _hacker_, the battle is already lost.
iwontberude
a day ago
Given that the models will attempt to check their own work with almost the identical verification that a human engineer would, it's hard to say that humans aren't implicitly checking by relying on the shared verification methods (e.g. let me run the tests, let me run the application with specific arguments to test whether the behavior works).
ahtihn
a day ago
> Given that the models will attempt to check their own work with almost the identical verification that a human engineer would
That's not the case at all though. The LLM doesn't have a mental model of what the expected final result is, so how could it possibly verify that?
It has a description in text format of what the engineer thinks he wants. The text format is inherently limited and lossy and the engineer is unlikely to be perfect at expressing his expectations in any case.
ModernMech
10 hours ago
I disagree, "vibe" characterizes how the LLM is being used, not that AI is being used at all. The vibe part means it's not rigorous. Using an LLM to autocomplete a line in an otherwise traditionally coded project would not be considered "vibe" coding, despite being AI assisted, because the programmer can easily read and verify the line is as correct as if they had typed it character by character.
closewith
9 hours ago
That's the etymology, but it's now entered the general lexicon and that distinction has been lost.
bcrosby95
a day ago
If you're hanging your features off a well trodden framework or engine this seems fine.
If frameworks don't make sense for what you're doing though and you're now relying on your LLM to write the core design of your codebase... it will fall apart long before you reach "its basically working".
The more nuanced interactions in your code the worse it'll do.
hansmayer
a day ago
> I'm not a professional SWE
It was already obvious from your first paragraph - in that context even the sentence "everything works like I think it should" makes absolute sense, because it fits perfectly with the limited understanding of a non-engineer - from your POV, it indeed all works perfectly, API secrets in the frontend and 5 levels of JSON transformation on the backend side be damned, right ;) Yay, vibe-coding for everyone - even if it takes longer than programming the conventional way, who cares, right?
westoncb
a day ago
It sounds more like you just made an overly simplistic interpretation of their statement, "everything works like I think it should," since it's clear from their post that they recognize the difference between some basic level of "working" and a well-engineered system.
Hopefully you aren't discouraged by this, observationist, pretty clear hansmayer is just taking potshots. Your first paragraph could very well have been written by a professional SWE who understood what level of robustness was required given the constraints of the specific scenario in which the software was being developed.
brailsafe
a day ago
By your response, it really seems like you read their first sentence as advocating for vibe coding, but I think they were saying something more to the effect of "While it's exciting to reach those milestones more quickly and frequently, as it becomes easier to reach a point where everything seems to be working on the surface, the easier it then is to bypass elegant, properly designed, intimately internalized detail—unavoidable if written manually—and thus when it comes time to troubleshoot, the same people may have to confront those rapidly constructed systems with less confidence, and hence the maintenance burden later may be much greater than it otherwise would be"
Which to me, as a professional SWE, seems like a very engineer thing to think about, if I've read both of your comments correctly.
observationist
a day ago
Exactly - I know enough to know what I don't know, since I've been able to interact with professionals, and went down the path of programming far enough to know I didn't want to do it. I've also gotten good at enough things to know the pattern of "be really shitty at doing things until you're not bad, and eventually be ok, and if you work your ass off, someday you'll actually be good at it".
The neat thing about vibe coding is knowing that I'm shitty at actual coding and achieving things in hours that would likely have taken me months to learn to do the right way, or weeks to hack together with other people's bubblegum and duct tape. I'd have to put in a couple years to get to the "OK" level of professional programming, and I feel glad I didn't. Lucky, even.
burningChrome
a day ago
>> even if it takes longer than programming the conventional way, who cares, right?
Longer than writing code from scratch, with no templates or frameworks? Longer than testing and deploying manually?
Even eight years ago when I left full-stack development, nobody was building anything from scratch, without any templates.
Serious questions - are there still people who work at large companies who still build things the conventional way? Or even startups? I was berated a decade ago for building just a static site from scratch so curious to know if people are still out there doing this.
tjr
a day ago
What do you mean by "the conventional way"?
burningChrome
a day ago
I was referencing OP's statement.
"conventional programming"
Key Characteristics of Conventional Programming:
Manual Code Writing
- Developers write detailed instructions in a programming language (e.g., Java, C++, Python) to tell the computer exactly what to do.
- Every logic, condition, and flow is explicitly coded.
Imperative Approach
- Focuses on how to achieve a result step by step. Example: Writing loops and conditionals to process data rather than using built-in abstractions or declarative statements.
High Technical Skill Requirement
- Requires understanding of syntax, algorithms, data structures, and debugging. No visual drag-and-drop or automation tools—everything is coded manually.
Longer Development Cycles
- Building applications from scratch without pre-built templates or AI assistance. Testing and deployment are also manual and time-intensive.
Traditional Tools
- IDEs (Integrated Development Environments) like Eclipse or Visual Studio. Version control systems like Git for collaboration.
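The "Imperative Approach" bullet above can be made concrete with a small Python sketch (invented example, not from the comment): the same computation written step by step with an explicit loop, and then declaratively with a built-in.

```python
# The same computation written both ways: an explicit loop that
# spells out *how* (imperative), and a built-in that states *what*
# (declarative). The data is invented for illustration.

data = [3, 1, 4, 1, 5]

# Imperative: loop, condition, and accumulator written by hand.
total = 0
for value in data:
    if value % 2 == 1:
        total += value

# Declarative: delegate the iteration to sum() and a generator.
total_declarative = sum(v for v in data if v % 2 == 1)

assert total == total_declarative == 10
```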
lelanthran
20 hours ago
>> I'm not a professional SWE, I just know enough to understand what the right processes look like, and vibe coding is awesome but chaotic and messy.
> It was already obvious from your first paragraph - in that context even the sentence "everything works like I think it should" makes absolute sense, because it fits perfectly with the limited understanding of a non-engineer - from your POV, it indeed all works perfectly, API secrets in the frontend and 5 levels of JSON transformation on the backend side be damned, right ;)
I mean, he qualified it, right? Sounds like he knew exactly what he was getting :-/
damiangriggs
a day ago
I've noticed that as well. I don't memorize every single syntax error, but when I use agents to help code I learn why they fail and how to correct them. The same way I would imagine a teacher learns the best way to teach their students.
ModernMech
10 hours ago
AI is allowing a lot of "non SWEs" to speedrun the failed project lifecycle.
The exuberance of rapid early-stage development is mirrored by the despair of late-stage realizations that you've painted yourself into a corner, you don't understand enough about the code or the problem domain to move forward at all, and your AI coding assistant can't help either because the program is too large for it to reason about fully.
AI lets you make all the classic engineering project mistakes faster.
lelanthran
17 hours ago
> I don't want to be that contrarian guy, but I find it energizing to go faster. For example, being able to blast through a list of niggling defects that need to be fixed is no longer a stultifying drag.
It depends. No one is running their brain at full-throttle for more than a few hours on end.
If your "niggling" defects are mostly changes that don't require deep thought (refactor this variable name, function parameter/return type changes, classes, filename changes, etc), then I can see how it is energising - you're getting repeated dopamine hits for very little effort.
If, OTOH, you are doing deep review of the patterns and structures the LLM is producing, you aren't going to be doing that for more than a few hours without getting exhausted.
I find, myself, that repeatedly correcting stuff makes me tired faster than simply "LGTM, lets yolo it!" on a filename change, or class refactor, etc.
When the code I get is not what I wanted even though it passes the tests, it's more mental energy to correct the LLM than if I had simply done it myself from the first.
A good example of the exhausting tasks from today - my input has preprocessing directives embedded in it; there's only three now (new project), so the code generated by Claude did a number of `if-then-else-if` statements to process this input.
My expectation was that it would use a jump table of some type (possibly a dictionary holding function pointers, or a match/switch/case statement).
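The dictionary-of-handlers shape described above can be sketched in Python (directive names invented for illustration; this is not the project's actual code):

```python
# A jump table: each preprocessing directive maps to a handler
# function, so adding a fourth directive is one new dict entry
# rather than another elif branch.

def handle_include(arg: str) -> str:
    return f"include:{arg}"

def handle_define(arg: str) -> str:
    return f"define:{arg}"

def handle_ifdef(arg: str) -> str:
    return f"ifdef:{arg}"

DIRECTIVES = {
    "include": handle_include,
    "define": handle_define,
    "ifdef": handle_ifdef,
}

def process(directive: str, arg: str) -> str:
    handler = DIRECTIVES.get(directive)
    if handler is None:
        raise ValueError(f"unknown directive: {directive}")
    return handler(arg)
```

With only three directives an `if-elif` chain works too; the table pays off as the directive set grows.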
I think a good analogy is self-driving cars: if the SDC requires no human intervention, then sure it's safe. If the SDC requires the human to keep their hand on the wheel at all time because it might disengage with sub-second warnings, then I'm going to be more tired after a long drive than if I simply turned it off.
vidarh
a day ago
Same here. I've picked up projects that have languished for years because the boring tasks no longer make me put them aside.
rightbyte
20 hours ago
> I don't want to be that contrarian guy, but I find it energizing to go faster.
Is that contrarian though? Seems like pretty normal corporate-setting bragging to me. (Note: I am not accusing you of it, since your boss or colleagues do not read this).
On the variant of "I am bad at not working too hard".
skdhshdd
a day ago
> But not having to spend hours here and there getting up to speed on some mundane but unfamiliar aspect of the implementation
At some point you realize that if you want people to trust you, you have to do this. Otherwise you're just gambling, which isn't very trustworthy.
It’s also got the cumulative effect of making you a good developer if done consistently over the course of your career. But yes, it’s annoying and slow in the short term.
Animats
a day ago
> I don't want to be that contrarian guy, but I find it energizing to go faster.
You, too, can be awarded the Order of Labor Glory, Third Class.[1]
Zigurd
a day ago
Had I been doing other than interesting exploratory coding, I would agree with you. I can readily imagine standups where the "scrum master" asks where our AI productivity boost numbers are. Big dystopia potential.
p_v_doom
19 hours ago
People in some places already are doing that. The Dystopia is now
pyrophane
a day ago
> I recently used a coding agent on a project where I was using an unfamiliar language, framework, API, and protocol.
You didn’t find that to be a little too much unfamiliarity? With the couple of projects that I’ve worked on that were developed using an “agent first” approach I found that if I added too many new things at once it would put me in a difficult space where I didn’t feel confident enough to evaluate what the agent was doing, and when it seemed to go off the rails I would have to do a bunch of research to figure out how to steer it.
Now, none of that was bad, because I learned a lot, and I think it is a great way to familiarize oneself with a new stack, but if I want to move really fast, I still pick mostly familiar stuff.
Zigurd
a day ago
SwiftKotlinDartGo blur together by now. That's too many languages but what are you gonna do?
I was ready to find that it was a bit much. The conjunction of ATProto and Dart was almost too much for the coding agent to handle and stay useful. But in the end it was OK.
I went from "wow that flutter code looks weird" to enjoying it pretty quickly.
QuercusMax
a day ago
I'm assuming this is the case where they are working in an existing codebase written by other humans. I've been in this situation a lot recently, and Copilot is a pretty big help in figuring out particularly fiddly bits of syntax - but it's also really stupid sometimes and suggests a lot of stuff that doesn't work at all.
stonemetal12
a day ago
> I didn’t feel confident enough to evaluate what the agent was doing
So don't. It is vibe coding, not math class. As long as it looks like it works then all good.
tjr
a day ago
Is there any software you think should not be developed with this approach?
Rperry2174
a day ago
I think both experiences are true.
AI removes boredom AND removes the natural pauses where understanding used to form.
Energy goes up, but so does the kind of "compression" of cognitive things.
I think it's less a question of "faster" or "slower" and more a question of who controls the tempo.
visarga
a day ago
After 4 hours of vibe coding I feel as tired as a full day of manual coding. The speed can be too much. If I only use it for a few minutes or an hour, it feels energising.
agumonkey
a day ago
> the kind of "compression" of cognitive things
compression is exactly what is missing for me when using agents, reading their approach doesn't let me compress the model in my head to evaluate it, and that was why i did programming in the first place.
Avicebron
a day ago
Can you share why it was non-trivial? I'm curious about how folks are evaluating the quality of their solutions when the project space is non trivial and unfamiliar
Zigurd
a day ago
It's a real, complete, social media client app. Not a huge project. But the default app was clearly written by multiple devs, each with their own ideas. One goal was to be cleaner and more orthogonal, among other goals.
ares623
a day ago
A little bit of Dunning-Kruger maybe?
joseda-hg
a day ago
Non-triviality is relative anyway; if anything, admitting complexity beyond your skills in your own field of expertise reads like the inverse.
marginalia_nu
a day ago
Dunning-Kruger isn't what you think it is[1]
[1] https://skepchick.org/2020/10/the-dunning-kruger-effect-misu...
blitz_skull
a day ago
I think it's less "going fast" and more "going fast forever."
To your point, you can blow through damn-near anything pretty quickly now. Now I actually find myself problem-solving for nearly 8 hours every day. My brain feels fried at the end of the day way more than it used to.
wiether
a day ago
Same feeling here!
I used to be like: "well, this thing will take me at least half a day, and it's already 16:00, so I'd better do something quiet to cool down until the end of the day and tackle this issue tomorrow". I'd leave the office in a regular mood and take the night to get ready for tomorrow.
Now I'm like: "17:30? 30 minutes? I have time to tackle another issue today!" I'll leave the office exhausted and take the night to try and recover from the day I had.
solumunus
19 hours ago
This. I’m able to be more productive for long hours more consistently than before. The occasions where I’m engineering for 8 solid hours are much more frequent now, and I’m certainly more tired. Almost all of my time is now dedicated to problem solving and planning, the LLM executes and I sit there thinking. Not everyone’s brain or project is well suited for this, but for me as a personality combined with the way my product is structured, it’s a total game changer.
WhyOhWhyQ
a day ago
Everyone in this conversation talks about different activities. One version of vibe coding happens with Netflix open and without ever opening a text editor, and another happens with thoroughly reviewing every change.
etothet
a day ago
I 100% agree. It's been incredible for velocity, and the capabilities and accuracy of the models I've been using (mostly from Anthropic) have improved immensely over the last few months.
louthy
a day ago
> But not having to spend hours here and there getting up to speed on some mundane but unfamiliar aspect of the implementation
Red flag. In other words you don’t understand the implementation well enough to know if the AI has done a good job. So the work you have committed may work or it may have subtle artefacts/bugs that you’re not aware of, because doing the job properly isn’t of interest to you.
This is ‘phoning it in’, not professional software engineering.
jmalicki
a day ago
Learning an unfamiliar aspect and doing it by hand will have the same issues. If you're new to Terraform, you are new to Terraform, and are probably going to even insert more footguns than the AI.
At least when the AI does it you can review it.
pferde
a day ago
No, you can not. Without understanding the technology, at best you can "vibe-review" it, and determine that it "kinda sorta looks like it's doing what it's supposed to do, maybe?".
louthy
a day ago
> Learning an unfamiliar aspect and doing it by hand will have the same issues. If you're new to Terraform, you are new to Terraform
Which is why you spend time upfront becoming familiar with whatever it is you need to implement. Otherwise it’s just programming by coincidence [1], which is how amateurs write code.
> and are probably going to even insert more footguns than the AI.
Very unlikely. If I spend time understanding a domain then I tend to make fewer errors when working within that domain.
> At least when the AI does it you can review it.
You can’t review something you don’t understand.
[1] https://dev.to/decoeur_/programming-by-coincidence-dont-do-i...
lelanthran
16 hours ago
> Learning an unfamiliar aspect and doing it by hand will have the same issues.
I don't think so. We gain proficiency by doing, not by reading.
If all you are doing is reading, you are not gaining much.
bongodongobob
a day ago
It sounds like you've never worked a job where you aren't just supporting 1 product that you built yourself. Fix the bug and move on. I do not have the time or resources to understand it fully. It's a 20-year-old app full of business logic and MS changed something in their API. I do not need to understand the full stack. I need to understand the bug and how to fix it. My boss wants it fixed yesterday. So I fix it and move onto the next task. Some of us have to wear many hats.
louthy
a day ago
> It sounds like you've never worked a job where you aren't just supporting 1 product that you built yourself
In my 40 years of writing code, I’ve worked on many different code bases and in many different organisations. And I never changed a line of code, deleted code, or added more code unless I could run it in my head and ‘know’ (to the extent that it’s possible) what it will do and how it will interact with the rest of the project. That’s the job.
I’m not against using AI. I use it myself, but if you don’t understand the scope fully, then you can’t possibly validate what the AI is spitting out, you can only hope that it has not fucked up.
Even using AI to write tests will fall short if you can’t tell if the tests are good enough.
For now we still need to be experts. The day we don’t need experts the LLMs should start writing in machine code, not human readable languages
> I do not need to understand the full stack.
Nobody said that. It’s important to understand the scope of the change. Knowing more may well improve decision making, but pragmatism is of course important.
Not understanding the thing you’re changing isn’t pragmatism.
theshrike79
14 hours ago
Either you're a true 100x coder who can get a full understanding of every single project and every effect it will have through the full end to end stack.
Or you were never under time pressure and always had enough time to do it.
Either way, I'm jealous of you. For me it's "here's code that Bob wrote 10 years ago, it's not working. Customers are complaining and this needs to be fixed yesterday".
"Sorry I need to understand what it will do and how it will interact with the rest of the project, that'll take a few days and I can't fix it before that" wasn't an option. You fix the immediate issue, run whatever tests it may have and throw it to QA for release approval.
Most likely the fix will work and nobody has to touch that bit in a few years. Should we spend time to understand it fully and document it, add proper and comprehensive tests? Yep. But the bosses will never approve the expense.
If I had an AI agent at that point, it could "understand" the codebase in minutes and give me clues as to the blast radius of the possible fix.
louthy
8 hours ago
> Either you're a true 100x coder who can get a full understanding of every single project and every effect it will have through the full end to end stack.
It's hard to state how good I am without sounding like an arsehole, so here goes... I am certainly a very experienced engineer, I've coded from the age of 10 and now at 50 I'm 'retired' after selling the company that I founded. I started in the 8bit era doing low level to-the-metal coding and ended it building an internationally used healthcare SaaS app (with a smattering of games engineering in-between). I've been a technical proof-reader for two Manning books, have at least one popular open-source project, and I still write code for fun and am working on my next idea around data-sovereignty in my now infinite free time... so yeah, I'm decent, and I feel like I've gained enough experience to have an opinion on this.
But also you're not reading what I wrote. I never said "a full understanding of every single project and every effect it will have through the full end to end stack", which I explicitly dealt with in my last reply, when I said: "It’s important to understand the scope of the change. Knowing more may well improve decision making, but pragmatism is of course important."
If the scope is small, you don't need "a full understanding of every single project and every effect it will have through the full end to end stack". But in terms of what it does touch, yeah, you should know it, especially if you want to become a better software engineer, and not just an engineer with the same 1 year's worth of experience x 30.
It should also not take "a few days" to investigate the scope. If it's taking you that long then you're not exercising the capability that allows you to navigate around unfamiliar code and understand what it's doing. That knowledge accumulates too, so unless you're working on a completely different project every single day, you're going to get quicker and quicker.
I have seen pathological cases where a dev that worked for me went so far down the rabbit hole that he got nothing done, so it has to be a pragmatic process of discovery. It should entirely depend on the extent to which your change could leak out into other areas of the project. For example, if you had a reusable library that had some core functionality that is used throughout the project and you wanted to change some of its core behaviour, then I'd want to find all of the usages of that library to understand how that change will affect the behaviour (if at all). But equally, if I was updating a UI page or control that has limited tentacles throughout the app, then I'd be quite comfortable not doing a deep dive.
> "here's code that Bob wrote 10 years ago, it's not working. Customers are complaining and this needs to be fixed yesterday".
I've been in that exact situation. You need to make a decision about your career. Are you just going to half-arse the job, or are you going to get better? If you think continuing as you are is good for your career, because you've made your idiot boss happy for 5 minutes before they give you the next unreasonable deadline, then you're wrong.
The fact is the approach you're taking is slower. It's slower because you and the team of engineers you're in (assuming everyone takes the same approach) are accumulating bugs, technical debt, and are not building institutional knowledge. When those bugs need dealing with in the future, or that technical debt causes the application to slow to a crawl, or have some customer-affecting side-effects, then you're going to waste time solving those issues and you're sure as hell gonna want the institutional knowledge to resolve those problems. AI doesn't "understand" in the way you're implying. If it did understand then we wouldn't be needed at all.
> Most likely the fix will work and nobody has to touch that bit in a few years. Should we spend time to understand it fully and document it, add proper and comprehensive tests? Yep. But the bosses will never approve the expense.
So you work for a terrible boss. That doesn't make my argument wrong, that makes your boss wrong. You can obviously see the problem, but instead of doing something about it, you're arguing against good software development methodology. That's odd. You should take that up with your boss.
The best engineers I have worked with in my career were the ones that fully understood the code base they were working on.
visarga
a day ago
>Red flag. In other words you don’t understand the implementation well enough to know if the AI has done a good job.
Red flag again! If your protection is to "understand the implementation" it means buggy code. What makes code worthy of trust is passing tests - well-designed tests that cover the angles. LGTM is vibe testing.
I'd go as far as saying it does not matter whether the code was written by a human who understands it or not; what matters is how well it is tested. Vibe testing is the problem, not vibe coding.
nosianu
a day ago
> What makes a code worthy of trust is passing tests
(Sorry, but you set yourself up for this one, my apologies.)
Oh, so this post describes "worthy code", okay then.
https://news.ycombinator.com/item?id=18442941
Tests are not a panacea. They don't care about anything other than what you test. If you don't have code testing maintainability and readability, only that it "works", you end up like the product in that post.
Ultimate example: Biology (and everything related, like physiology, anatomy), where the test is similarly limited to "does it produce children that can survive". It is a huuuuuge mess, and trying to change any one thing always messes up things elsewhere in unexpected and hard or impossible to solve ways. It's genius, it works, it sells - and trying to deliberately change anything is a huge PITA because everything is interconnected and there is no clean design anywhere. You manage to change some single gene to change some very minor behavior, suddenly the ear shape changes and fur color and eye sight and digestion and disease resistance, stuff like that.
agumonkey
a day ago
I wonder if, for a large class of jobs, simple unit tests will be enough to act as a negative that the LLM output will try to match - test-driven delegation, in a way. That said, I share the same worries as you. The fact that the LLM can wire multiple files / classes / libs in a few seconds to pass your tests doesn't guarantee a good design. And the people who love vibe coding the most are the ones who never valued design in the first place, just quick results.
louthy
7 hours ago
> If your protection is to "understand the implementation" it means buggy code.
Hilarious. Understanding the code is literally the most important thing. If you don't understand the code then you can't understand any unit tests you write either. How could you possibly claim test coverage for something you don't understand?
I suspect you primarily develop code with dynamic languages where you're reinventing type-systems day-in day-out to test your code. Personally, I try to minimise the need for unit-tests by using well-defined types and constraints. The type-system is a much better unit-tester than any human with a poor understanding of the code.
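A minimal Python sketch of the "types over unit tests" point (Status and can_bill are invented names): with a plain string status, a typo must be caught by a test; with an Enum, the invalid value cannot be constructed at all, and a checker like mypy verifies call sites statically.

```python
# A misspelled string status like "actve" flows through string-typed
# code silently; as an Enum member it simply cannot exist.
from enum import Enum

class Status(Enum):
    PENDING = "pending"
    ACTIVE = "active"
    CLOSED = "closed"

def can_bill(status: Status) -> bool:
    # No test needed for misspellings: they aren't valid Status values.
    return status is Status.ACTIVE
```

`Status("actve")` raises `ValueError` at the boundary instead of quietly producing a non-billable account deep inside the system.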
OptionOfT
a day ago
If you do this on your personal stuff, eh, I wouldn't do it, but you do you.
But we're seeing that this becomes OK in the workplace, and I don't believe it is.
If you propose these changes that would've normally taken you 2 weeks as your own in a PR, then I, as the reviewer, don't know where your knowledge ends and the AI's hallucinations begin.
Do you need to do all of these things? Or is it because the most commonly forked template of this piece of code has this in its boilerplate? I don't know. Do you?
How can you make sure the code works in all situations if you aren't even familiar with the language, let alone the framework / API and protocol?
* Do you know that in Java you have to use string.equals() instead of == for equality?
* Do you know that in Python, mutating a function's default argument persists across calls?
* And in JavaScript it does not?
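For the Python point above, a minimal sketch (function names are made up for illustration): the default list is evaluated once, when the function is defined, so mutations to it survive across calls. JavaScript re-evaluates default parameter values on every call, so the equivalent pattern there does not persist.

```python
def append_item(item, bucket=[]):  # the default list is created once, at def time
    bucket.append(item)
    return bucket

print(append_item("a"))  # ['a']
print(append_item("b"))  # ['a', 'b']  <- the default persisted across calls

# The usual fix: use None as a sentinel and create a fresh list per call.
def append_item_fixed(item, bucket=None):
    if bucket is None:
        bucket = []
    bucket.append(item)
    return bucket

print(append_item_fixed("a"))  # ['a']
print(append_item_fixed("b"))  # ['b']
```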
* Do you know that the C# && does not translate to VB.NET's And?
frio
a day ago
This is far and away the biggest problem I have atm. Engineers blowing through an incredible amount of work in a short time, but when an inevitable bug bubbles up (which would happen without AI!), there's no one to question. "Hey, you changed the way transactionality was handled here, and that's made a really weird race condition happen. Why did you change it? What edge case were you trying to handle?" -- "I don't know, the AI did it". This makes chasing things down exponentially harder.
This has always been a problem in software engineering, of course -- sometimes staff have left, so you have to dig through tickets, related commits and documentation to intuit intent. But I think it's going to make for very weird drags on productivity in _new_ code; those drags may not outweigh the acceleration LLMs provide, but they will certainly exist.
QuercusMax
a day ago
A lot of that stuff can be handled by linters and static analysis tools.
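As a sketch of what that looks like mechanically (a toy checker, not a real linter; real tools cover this, e.g. pylint's dangerous-default-value warning): the mutable-default gotcha from the list above can be caught with a few lines of ast walking.

```python
import ast

def find_mutable_defaults(source: str) -> list[str]:
    """Flag function parameters whose default value is a mutable literal."""
    tree = ast.parse(source)
    hits = []
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            for default in node.args.defaults:
                # Lists, dicts, and sets as defaults are shared across calls.
                if isinstance(default, (ast.List, ast.Dict, ast.Set)):
                    hits.append(f"{node.name}:{default.lineno}")
    return hits

code = "def f(x, bucket=[]):\n    bucket.append(x)\n    return bucket\n"
print(find_mutable_defaults(code))  # ['f:1']
```

The point is that this class of gotcha is mechanically detectable, so it's exactly the kind of thing you can gate LLM output on.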
seattle_spring
a day ago
What are some examples of linters / static analysis catching hallucinated solutions to problems?
I feel like it would just yield a well-formatted, type-safe incorrect solution, which is no better than a tangled mess.
steveklabnik
a day ago
A specific example: for some reason, when working on Playwright scripts, Claude really likes to inject
await page.waitForLoadState('networkidle');
But this isn't good, and is not encouraged. So much so that there's an eslint rule that suggests removing it. This means that if Claude does decide to inject this, it gets taken out, because the linter runs and then tells it so.
quotemstr
a day ago
> I don't want to be that contrarian guy, but I find it energizing to go faster. For example, being able to blast through a list of niggling defects that need to be fixed is no longer a stultifying drag.
It's often that just getting started at all on a task is the hardest part. That's why writers often produce a "vomit draft" (https://thewritepractice.com/vomit-first-draft/) just to get into the right frame of mind to do real writing.
Using a coding agent to fix something trivial serves the same purpose.
sixothree
a day ago
I am currently only vibe-coding my hobby projects. So if that changes, my view could very well change.
But I 100% agree. It's liberating to focus on the design of my project, and my mental model can be of how I want things to work.
It feels like that switch to test driven development where you start from the expected result and worry about the details later.
stuffn
a day ago
I think the counter-point to that is what I experience.
I agree it can be energizing because you can offload the bullshit work to a robot. For example, build me a CRUD app with a bootstrap frontend. Highly useful stuff especially if this isn't your professional forte.
The problems come afterwards:
1. The bigger the generated base codebase, the less likely you are to find the time or energy to refactor LLM slop into something maintainable. I've spent a lot of time tailoring prompts for this type of generation and still can't get the code to be as precise as something an engineer would write.
2. Using an unfamiliar language means you're relying entirely on the LLM to determine what is safe. Suppose you wish to generate a project in C++. An LLM will happily do it. But will it be up to a standard that is maintainable and safe? Probably not. The devil is in the mundane details you don't understand.
In the case of (2) it's likely more instructive to have the LLM make you do the legwork, and then it can suggest simple verifiable changes. In the case of (1) I think it's just an extension of the complexity of any project, professional or not. It's often better to write it correctly the first time than to write it fast and loose and then find the time to fix it later.
OptionOfT
a day ago
Ergo instant tech debt.