jfaganel99
7 hours ago
Author here. The finding that surprised me most while writing this wasn’t the breach numbers. It was the Stanford result: developers with AI assistance introduced more flaws than those without, and felt more confident about their code. The confidence gap is the problem, not just the code quality.
The LLM secret predictability angle is something I’m still digging into and will be a separate article. There’s a lot more to it than I could cover here.
Genuinely curious: for anyone shipping vibe-coded projects, are you actually running any kind of security check before it goes live? Prompting the AI for a review, using a scanner, doing it manually, or just crossing your fingers? And if you are using an agent workflow for it, what does that look like? Any specific agent skills or tools you’ve found useful versus just adding noise?
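For a sense of the floor I'm talking about: even a toy pre-deploy check that greps for hardcoded credentials catches a surprising amount of vibe-coded output. A minimal sketch (the patterns here are illustrative, not a substitute for a real scanner like Bandit or Semgrep):

```python
import re

# Illustrative patterns only -- real scanners ship hundreds of tuned rules.
SECRET_PATTERNS = [
    # key = "long-ish literal" style assignments
    re.compile(r"""(?i)(api[_-]?key|secret|token|password)\s*[:=]\s*['"][^'"]{8,}['"]"""),
    # AWS access key ID shape
    re.compile(r"AKIA[0-9A-Z]{16}"),
]

def find_hardcoded_secrets(source: str) -> list[str]:
    """Return the lines of `source` that look like hardcoded credentials."""
    return [
        line.strip()
        for line in source.splitlines()
        if any(p.search(line) for p in SECRET_PATTERNS)
    ]

code = 'API_KEY = "sk-live-abcdef123456"\ntimeout = 30\n'
print(find_hardcoded_secrets(code))  # flags only the API_KEY line
```

Not asking because I think that's sufficient, just trying to gauge whether people are doing even this much.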
fhouser
6 hours ago
I recently shipped a "vibe-coded" project. You raise a good point: I hadn't considered the confidence gap. If it's true that LLM-generated code contains more vulnerabilities, on top of there simply being more code overall, while the developer feels better about the results, then that is concerning.
This is how I go about ensuring there is little to no chaos (your mileage may vary based on project size and characteristics):

- Plan your project manually; do not outsource thinking to the LLM. This includes being intentional about architecture, tech stack, dependencies, etc.
- I have planning, orchestrating, coding, and reviewing agents. These should be self-explanatory, but there's a catch: the workflow is automated. OpenCode allows you to define "subagents" which can be called by "primary" agents. I write a detailed GitLab issue that my planning agent can fetch and read. It creates a detailed resolution plan that I can point the orchestration agent to. The orchestrator then delegates implementation to one or more coding agents simultaneously. Their results are in turn delegated to reviewer agents. If the reviewer agents don't complain, the results are ready for human review in an MR.
- Changes that pass all review are documented in the project spec. E.g., if new modules are added that require an auth guard pattern already documented in the spec, they are listed as relevant sites for that pattern.
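Stripped of the LLM parts, the shape of that loop is roughly the following. The agent calls are stubbed out here; the real versions are OpenCode subagents, and the function names are mine, not anything OpenCode defines:

```python
from dataclasses import dataclass, field

@dataclass
class ReviewResult:
    approved: bool
    comments: list[str] = field(default_factory=list)

# Stubs standing in for LLM-backed subagents.
def plan_agent(issue: str) -> list[str]:
    """Turn an issue description into concrete implementation steps."""
    return [f"step for: {issue}"]

def coding_agent(step: str) -> str:
    """Produce a diff implementing one step of the plan."""
    return f"diff implementing {step!r}"

def reviewer_agent(diff: str) -> ReviewResult:
    """Check a diff against the spec and flag problems."""
    return ReviewResult(approved=True)

def orchestrate(issue: str) -> list[str]:
    """Plan -> code -> review; only approved diffs reach the human MR."""
    approved_diffs = []
    for step in plan_agent(issue):
        diff = coding_agent(step)
        review = reviewer_agent(diff)
        if review.approved:
            approved_diffs.append(diff)
    return approved_diffs  # human review in the MR still happens after this

print(orchestrate("Add auth guard to billing module"))
```

The important property is that nothing skips the reviewer stage, and the human MR review sits outside the loop entirely.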
I feel like the LLM agents have been more thorough and consistent than I could have been without them. This goes for refactors too: since the entire project is essentially mapped out in the spec.md file(s), it's hard for the agent to miss a relevant site in the code. Human review is key. Don't merge code you don't understand.
jfaganel99
6 hours ago
This is one of the most practical breakdowns I've seen in a while. The spec.md as a living architecture map is smart, and documenting auth guard pattern sites as new modules get added is exactly the kind of thing that prevents issues from creeping in.
The bit I'd push on: do your reviewer agents catch logic errors? Things like a double-negative auth check or a race condition in a payment flow usually pass review because the code looks intentional and clean. Curious whether your reviewers are prompted specifically for security logic or more for spec conformance?
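To be concrete about the double-negative case, here's the kind of check I mean (contrived sketch, names invented):

```python
class Session:
    def __init__(self, authenticated: bool):
        self.authenticated = authenticated

def access_denied_buggy(session: Session) -> bool:
    # Intended: deny access when the session is NOT authenticated.
    # The double negative cancels out, so authenticated users are denied
    # and anonymous ones sail through -- yet the line "looks" intentional.
    return not (not session.authenticated)

def access_denied_fixed(session: Session) -> bool:
    return not session.authenticated

anon = Session(authenticated=False)
print(access_denied_buggy(anon), access_denied_fixed(anon))  # False True
```

A linter won't flag it, the types check out, and a reviewer skimming for style will sail past it. That's the category I'm asking about.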
"Don't merge code you don't understand" is the right closer. Most setups don't force that discipline because people don't have the knowledge :)
fhouser
5 hours ago
Opus 4.6 usually doesn't disappoint. No double-negative auth checks or race conditions to report, but I can say that introducing new functionality and patterns usually takes a few cycles before the "repeatable pattern" is cleanly documented in the spec. When bugs do come up, the agent is quite good at finding the root cause and implementing a fix.
jfaganel99
3 hours ago
I'm working on a benchmark of which models are good at these tasks. I'll keep you posted.
fhouser
3 hours ago
Thanks, that would be great.