100%, this is all true and something you have to tackle eventually. Companies like this one (Kasava) can get away with it because, well, they likely don't have very many customers and it doesn't really matter. But when you're operating at a scale where you have international customers relying on your SaaS product 24/7, suddenly a deploy that causes a few minutes of downtime matters.
This isn't to say monorepos are bad, but they're clearly naive about some things:
> No sync issues. No "wait, which repo has the current pricing?" No deploy coordination across three teams. Just one change, everywhere, instantly.
It's literally impossible to deploy "one change" simultaneously, even with the simplest n-tier architecture. As you mention, a DB schema is a great example. You physically cannot change a database schema and application code at the exact same time. You either have to ensure backwards compatibility or accept that there will be an outage while old application code runs against a new database, or vice-versa. And the latter works exactly up until an incident where your automated DB migration fails due to unexpected data in production, breaking the deployed code and causing a panic as on-call engineers try to determine whether to fix the migration or roll back the application code to fix the site.
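To make the backwards-compatible option concrete, this is roughly what the expand/contract pattern looks like for a column rename. A sketch only; the table and column names are made up:

```python
# Hypothetical expand/contract migration: renaming users.fullname to
# users.display_name without breaking whichever of old/new app code is
# running mid-deploy. Table and column names are invented for illustration.

EXPAND = """
-- Phase 1 (runs before any app change): add the new column, keep the old one.
ALTER TABLE users ADD COLUMN display_name TEXT;
UPDATE users SET display_name = fullname WHERE display_name IS NULL;
"""

# Phase 2: deploy app code that writes to both columns but reads display_name.
# Old instances still reading/writing fullname keep working during the rollout.

CONTRACT = """
-- Phase 3 (only after every instance runs the new code): drop the old column.
ALTER TABLE users DROP COLUMN fullname;
"""
```

That's three separate deploys instead of one, which is exactly the coordination the quoted post claims goes away.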
To be a lot more cynical; this is clearly an AI-generated blog post by a fly-by-night OpenAI-wrapper company and I suspect they have few paying customers, if any, and they probably won't exist in 12 months. And when you have few paying customers, any engineering paradigm works, because it simply does not matter.
I’m not sure why you made the logical leap from having all code stored in a single repo to updating/deploying code in lockstep. Where you put your code (the repo) can and should be decoupled from how you deploy changes.
> you will have the old system using the old schema and the new system using the new schema unless you design for forwards-backwards compatible changes
Of course you design changes to be backwards compatible. Even if you have a single node and have no networked APIs. Because what if you need to rollback?
> Maybe a regression causes something only they use to break. Now the entire org is held hostage by the version needs of one team.
This is an organizational issue not a tech issue. Who gives that one team the power to hold back large changes that benefit the entire org? You need a competent director or lead to say no to this kind of hostage situation. You need defined policies that balance the needs of any individual team versus the entire org. You need to talk and find a mutually accepted middle ground between teams that want new features and teams that want stability and no regressions.
The point is that the realities of not being able to deploy in lockstep erode a lot of the monorepo's claimed benefit of letting you make a change everywhere at once.
If my code has to be backwards compatible to survive the deployment, then having the code in two different repos isn’t such a big deal, because it’ll all keep working while I update the consumer code.
The point is atomic code changes, not atomic deployments. If I want to rename some common library function, it's just a single search and replace operation in a monorepo. How do you do this with multiple repos?
> If I want to rename some common library function, it's just a single search and replace operation in a monorepo. How do you do this with multiple repos?
Multiple repos shouldn't depend on a single shared library that needs to be updated in lockstep. If they do, something has gone horribly wrong.
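And when a rename genuinely has to cross repo boundaries, the answer is the same as for any public API: add the new name, deprecate the old one, and let consumers migrate on their own schedule. A minimal sketch, with hypothetical function names:

```python
import warnings

def load_config(path: str) -> dict:
    """New name; the real parsing logic would live here."""
    return {"path": path}

def parse_config(path: str) -> dict:
    """Deprecated alias kept so consumer repos can migrate on their own schedule."""
    warnings.warn(
        "parse_config is deprecated, use load_config instead",
        DeprecationWarning,
        stacklevel=2,
    )
    return load_config(path)
```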
> This is an organizational issue not a tech issue.
It’s both. Furthermore, you _can_ solve organizational problems with tech. (Personally, I prefer solutions to problems that do not rely strictly on human competence)
I think I disagree.
We have a monorepo, and we use automated code generation (openapi-generator) to build API clients for each service from the OpenAPI.json that the server framework generates. Service client changes cascade instantly. We have a custom CI job that trawls git and figures out which projects changed (including dependencies) to compute which services need to be rebuilt/redeployed. We may just not be at scale—thank God. We're a small team.
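The change detection doesn't need to be exotic, either; it boils down to diffing against main and mapping changed paths to services. A rough sketch with invented paths and a hypothetical dependency map, not our actual config:

```python
import subprocess

# Hypothetical layout: each service lives under services/, and DEPENDENCIES
# maps a service to the internal libraries it builds against.
DEPENDENCIES = {
    "billing": ["libs/auth", "libs/api-client"],
    "frontend": ["libs/api-client"],
}

def changed_paths(base: str = "origin/main") -> list[str]:
    # Ask git which files differ between the base branch and HEAD.
    out = subprocess.run(
        ["git", "diff", "--name-only", base, "HEAD"],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.splitlines()

def services_to_deploy() -> set[str]:
    changed = changed_paths()
    affected = set()
    for service, deps in DEPENDENCIES.items():
        roots = [f"services/{service}/"] + [d + "/" for d in deps]
        if any(path.startswith(tuple(roots)) for path in changed):
            affected.add(service)
    return affected

if __name__ == "__main__":
    print(sorted(services_to_deploy()))
```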
Monorepo vs multiple repos isn't really relevant here, though. It's all about how many independently deployed artifacts you have. e.g. a very simple modern SaaS app has a database, backend servers and some kind of frontend that calls the backend servers via API. These three things are all deployed independently in different physical places, which means when you deploy version N, there will be some amount of time they are interacting with version N-1 of the other components. So you either have to have a way of managing compatibility, or you accept potential downtime. It's just a physical reality of distributed systems.
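A toy illustration of that overlap window, with invented field names: if version N of the backend only adds fields and the version N-1 frontend ignores fields it doesn't recognize, the mixed state is harmless.

```python
# Backend at version N adds a field; frontend at version N-1 only reads the
# fields it already knows about. Field names are invented for the example.

def backend_v_n(order_id: str) -> dict:
    # Version N adds "currency" but keeps "total" with its old meaning.
    return {"id": order_id, "total": 4200, "currency": "USD"}

def frontend_v_n_minus_1(payload: dict) -> str:
    # Written against version N-1: tolerant reader, ignores unknown fields.
    return f"Order {payload['id']}: {payload['total']} cents"

print(frontend_v_n_minus_1(backend_v_n("ord_123")))
```

Remove or repurpose a field instead of adding one, and that same window becomes the outage described upthread.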
> We may just not be at scale—thank God. We're a small team.
It's perfectly acceptable for newer companies and small teams to not solve these problems. If you don't have customers who care that your website might go down for a few minutes during a deploy, take advantage of that while you can. I'm not saying that out of arrogance or belittlement or anything; zero-downtime deployments and maintaining backwards compatibility have an engineering cost, and if you don't have to pay that cost, then don't! But you should at least be cognizant that it's an engineering decision you're explicitly making.
> Having all the context for AI in one place is hard to beat though.
Seems like a weird workaround; you could just clone multiple repos into a workspace. Agree with all your other points though.
Exactly. Monorepo-enjoyers like to pretend that workspaces don't a) exist, and b) provide >90% of the benefits of a monorepo, with none of the drawbacks.
> At some point, you will have many teams. And one of them _will not_ be able to validate and accept some upgrade. Maybe a regression causes something only they use to break. Now the entire org is held hostage by the version needs of one team.

Yes, this happens at slightly larger orgs. I've seen it many times.
The alternative of every service being on their own version of libraries and never updating is worse.
atomic updates in particular is one of those things that sounds good to the C-suite, but falls apart extremely badly in the lower levels.
months-long delays on important updates due to some large project doing extremely bad things and pushing off a minor refactor endlessly has been the norm for me. but they're big so they wield a lot of political power so they get away with it every time.
or worse, as a library owner: spending INCREDIBLE amounts of time making sure a very minor change is safe, because you can't gradually roll it out to low-risk early adopter teams unless it's feature-flagged to hell and back. and if you missed something, roll back, write a report and say "oops" with far too many words in several meetings, spend a couple weeks triple checking feature flagging actually works like everyone thought (it does not, for at least 27 teams using your project), and then try again. while everyone else working on it is also stuck behind that queue.
monorepos suck imo. they're mostly company lock-in, because they teach most people absolutely no skills they'd need in another job (or for contributing to open source - it's a brain drain on the ecosystem), and all external skill is useless because every monorepo is a fractal snowflake of garbage.
You always have this problem; that's why you have a release process for APIs.
And monorepo or not, bad software developers will always run into this issue. Most software will not have 'many teams'. Most software is written by a lot of small companies doing niche things. Big software companies with more than one team normally have release managers.
My tip: use architecture unit tests for external-facing APIs. If you are a smaller company, 24/7 doesn't have to be the thing; just communicate this to your customers. But overall, if you run SaaS software and still don't know how to do zero-downtime deployment in 2025/2026, just keep doing whatever you are doing, because man, come on...
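One possible reading of "architecture unit tests for external-facing APIs" is a contract-style check that fails the build if a field the public API promised ever disappears. A minimal sketch with a made-up schema:

```python
# Contract test: fail CI if a field we have already promised to external
# consumers is removed from the current schema. Everything here is hypothetical.

PUBLIC_ORDER_FIELDS = {"id", "total", "currency"}  # the contract we shipped

def current_order_schema() -> set[str]:
    # In a real test this would come from the serializer or the OpenAPI spec.
    return {"id", "total", "currency", "discount"}

def test_order_contract_is_backwards_compatible():
    missing = PUBLIC_ORDER_FIELDS - current_order_schema()
    assert not missing, f"Breaking change: removed public fields {missing}"
```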
I really have never been able to grasp how people who believe that forward-compatible data schema changes are daunting can ever survive contact with the industry at scale. It's extremely simple to not have this problem. "design for forwards-backwards compatible changes" is what every grown-up adult programmer does.