rubenhellman
6 hours ago
I have been playing with Vibecodeprompts for a bit, and what stood out to me is not the prompts themselves but the framing.
Most “prompt libraries” assume the problem is wording, as if better adjectives or clever roleplay magically produced reliable systems. That has never matched my experience. The real failure modes are drift, inconsistency, and a lack of shared structure once things scale beyond a single chat window.
Vibecodeprompts seems to implicitly accept that prompting is closer to infra than copywriting.
The prompts are opinionated. They encode assumptions about roles, constraints, iteration loops, and failure handling. You can disagree with those assumptions, but at least they are explicit. That alone is refreshing in a space where most tools pretend neutrality while smuggling in defaults.
What I found useful was not copying prompts verbatim, but studying how they are composed. You can see patterns emerge. Clear system boundaries. Explicit reasoning budgets. Separation between intent, process, and output. Guardrails that are boring but effective.
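To make that concrete, here is a rough sketch of what that kind of separation can look like in plain Python. The PromptSpec name, the section headings, and the example prompt are all mine, not lifted from Vibecodeprompts; the point is just that each concern lives in its own slot you can review and change independently.

```python
# Illustrative sketch only: intent, process, output contract, and guardrails
# are kept as separate named pieces instead of one blob of prose.
from dataclasses import dataclass


@dataclass(frozen=True)
class PromptSpec:
    intent: str      # what we actually want, stated once
    process: str     # how the model should work through it
    output: str      # the contract for the response format
    guardrails: str  # boring but effective constraints

    def render(self) -> str:
        # Assemble the final prompt from the named sections.
        return "\n\n".join([
            f"## Intent\n{self.intent}",
            f"## Process\n{self.process}",
            f"## Output\n{self.output}",
            f"## Guardrails\n{self.guardrails}",
        ])


summarize_ticket = PromptSpec(
    intent="Summarize the support ticket for an on-call engineer.",
    process=(
        "Read the ticket, list the reported symptoms, then infer the "
        "likely subsystem. Spend no more than three short reasoning steps."
    ),
    output=(
        "Return JSON with keys: symptoms (list of strings), "
        "subsystem (string), confidence (low|medium|high)."
    ),
    guardrails=(
        "If the ticket lacks enough detail, say so explicitly instead of "
        "guessing. Never invent log lines."
    ),
)

print(summarize_ticket.render())
```

Nothing clever happens in render(); the value is that the structure is explicit enough to diff, review, and reuse.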
In other words, this is less “here is a magic prompt” and more “here is a way to think about working with models as unreliable collaborators”.
That also explains why this probably will not appeal to everyone. If you want instant magic, this is not it. You still have to think. You still have to adapt things to your domain. But if you are building anything persistent, reusable, or shared with other people, that effort feels unavoidable anyway.
Curious how others here think about this. Do you treat prompts as disposable glue, or as something closer to code that deserves structure, review, and iteration over time?