It’s iterative, not a strict waterfall, but the constraint is “text first, code second”.
A typical loop looks like this:
1. I talk with Perplexity/ChatGPT to clarify the requirement and trade‑offs.
2. The “architect” Cursor window writes a short design note: intent, invariants, interfaces, and a checklist of concrete tasks.
3. The “programmer” Cursor window implements those tasks, one by one, and I run tests / small experiments.
4. If something feels off, I paste the diff and the behavior back to the architect, we adjust the design note, and we iterate.
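To make the design-note step concrete, here is a hypothetical sketch of what one might contain. All names and contents are invented for illustration; the actual format is whatever the architect window produces, as long as intent, invariants, interfaces, and tasks are written down:

```markdown
# Design note: session handling (hypothetical example)

Intent: one canonical way to answer "who is making this request".

Invariants:
- every request resolves to exactly one User, or fails loudly
- no module caches users across requests

Interfaces:
- auth.current_user(session_id) -> User

Tasks:
- [ ] merge resolve_user into auth.current_user
- [ ] deprecate the dict-returning path
- [ ] update tests that asserted on dicts
```

The point is that the checklist is concrete enough for the programmer window to work through item by item, while the intent and invariants give it something to push back against.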
There is a feedback channel from “programmer” to “architect”, but it goes through me. When the programmer model runs into something that doesn’t fit (“this API doesn’t exist”, “these two modules define similar concepts in different ways”, etc.), I capture that as:
- comments in the code (“this conflicts with X”), and
- updates to the architecture doc (“rename Y to Z, merge these two concepts”, “deprecate this path”, etc.).
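A hypothetical sketch of what that capture looks like in code (module and function names are invented): the conflict is flagged in a comment at the point where the programmer model hit it, and the same finding becomes a line in the architecture doc.

```python
# Hypothetical example: the "programmer" model found that this function
# duplicates a concept defined elsewhere. The conflict is recorded here
# as a comment before the architecture doc gets updated. All names are
# invented for illustration.

def resolve_user(session_id: str) -> dict:
    # CONFLICT(architecture): overlaps with auth.current_user, which also
    # maps a session to a user but returns a User object, not a dict.
    # Proposed doc update: merge both under auth.current_user and
    # deprecate this path.
    return {"session_id": session_id, "role": "guest"}

print(resolve_user("abc123")["role"])
```

Nothing about the comment is special; it just has to be findable later, so the next pass over the design note can resolve it in prose before any code changes.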
So the architect is not infallible. It gets corrected by reality: tests failing, code being awkward to write, or new edge cases showing up. The main thing the process enforces is that those corrections are written down in prose first, so future changes don’t silently drift away from the documented design.
In that sense the “programmer” complains the same way human ones do: by making the spec look obviously wrong when you try to implement it.