I think this is ignoring a lot of prior art. Our deploys at Yelp in roughly 2010 worked this way: you flagged a branch as ready to land, a system (`pushmaster`, aka `pushhamster`) verified that it passed tests, did an octopus merge of a bunch of branches, verified that the merged result passed tests, deployed it, and then landed the whole thing to master after it was happy on staging. And this wasn't novel at Yelp; we inherited the practice from PayPal, so my guess is that most companies that care at all about release engineering have been doing it this way for decades, and it was just a big regression when people stopped having professional release management teams and started cowboy-pushing to `master` / `main` on GitHub some time in the mid-2010s.
That's super interesting, thanks for sharing the Yelp/PayPal lineage. You're right: there's probably a lot of prior art in internal release engineering systems that never got much written up publicly.
The angle we took in the blog post focused on what was widely documented and accessible to the community (open-source tools like Bors, Homu, Bulldozer, Zuul, etc.), because those left a public footprint that other teams could adopt or build on.
It's a great reminder that many companies were solving the "keep main green" problem in parallel (some with pretty sophisticated tooling), even if it didn't make it into OSS or blog posts at the time.
Can you clarify what you mean by "deploy"?
You're not talking about pushing to an environment with live traffic, right?
I don't know what it's like now, but GitHub's internal merge queue circa 2017 was a nightmare. Every PR required you to set aside a full day of babysitting to get it merged and deployed; there were too many nondeterministic steps.
You'd join the queue, and then you'd have to wait for the dozen or so people ahead of you, each of whom might spend up to a couple of hours trying to get their merge branch to go green so it could go out. You couldn't really look away, because your turn could come out of nowhere, and you had to react like being on call: the whole deployment process was frozen until your turn ended. Often that just meant clicking "retry" on parts of the CI process, but it was complicated; there were dependencies between sections of tests.
Despite the article, I'm not quite sure I understand exactly what this entails.
Mainly I'm confused what this check is gating. Based on the article it's hard to tell what they mean.
1. Code changes that may conflict with each other in the repo, in the sense of a merge conflict.
2. Regressions (test failure, build breakage) caused by recently checked-in code.
3. Preparing and verifying a new release prior to deployment.
4. Monitoring/canarying a release candidate with real users.
In my mind, these are all very different things, but the article seems to mix them up.
No mention of Graphite.dev? Oh, it's written by Mergify, got it.
(Co-founder of Graphite here) Even better - they didn't mention that Shopify deprecated Shipit in favor of Graphite's merge queue for their new monorepo.
> The motivation was to avoid "merge skew," where changes appear compatible when reviewed in isolation but break once merged into an updated main.
In my opinion, merge skew happens rarely enough that it isn't a major problem. Instead of the merge queues described in the article, I think it would be more beneficial overall to invest in tooling that automatically reverts broken commits on your main branch. Merging the PR into a temporary branch and running tests is a good thing, but requiring your main branch to be fast-forwarded is overly strict. You can generally set a time limit of a day or so: as long as tests pass when merging the PR onto a main branch less than one day old, you can just merge it.
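The policy above can be sketched in a few lines. This is a hypothetical model of the commenter's proposal, not any real tool: `may_merge` allows a merge when tests passed against a recent-enough snapshot of main, and `auto_revert` stands in for the tooling that drops a commit later found to be broken (a real system would push a revert commit rather than rewrite history).

```python
MAX_AGE_HOURS = 24  # the "one day or so" freshness window from the comment

def may_merge(tests_passed: bool, tested_base_age_hours: float,
              max_age_hours: float = MAX_AGE_HOURS) -> bool:
    """Allow the merge when tests are green against a recent-enough main."""
    return tests_passed and tested_base_age_hours < max_age_hours

def auto_revert(main_history: list[str], broken: str) -> list[str]:
    """Model auto-reverting a commit that later turned out to break main."""
    return [c for c in main_history if c != broken]
```

So a PR tested three hours ago merges immediately, one tested thirty hours ago has to re-run tests first, and anything that slips through gets reverted after the fact.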
> it is overly strict to require your main branch to be fast forwarded
But merge queues (talking in general, IDK about the mergify.com product specifically) don't require fast-forwarding as far as the developer is aware. In the simplest case it looks like merging (non-fast-forward) to a temporary branch, then only updating the main branch after tests pass. This is very similar to your auto-revert except the main branch is never broken, so no wasted developer time and confusion when they pull a bad commit to start their PR.
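That simplest case can be simulated in a few lines. This is an illustrative model, not any particular product's implementation: each PR is merged into a temporary candidate, CI (here a caller-supplied predicate) runs on the candidate, and main only advances on green, so a failing PR bounces without ever breaking main.

```python
def process_queue(main: list[str], queue: list[str],
                  passes) -> tuple[list[str], list[str]]:
    """Return (new_main, rejected). `passes(candidate)` stands in for CI."""
    rejected = []
    for pr in queue:
        candidate = main + [pr]   # merge into a temporary branch
        if passes(candidate):
            main = candidate      # advance main to the tested, green result
        else:
            rejected.append(pr)   # PR is bounced; main never broke
    return main, rejected
```

The key property is visible in the loop: `main` is only ever assigned a candidate that already passed tests.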
IMHO it is a real shame that all CI doesn't work like this. It should be the default. Just this basic delay and auto-revert is already a nice boost to developer productivity. Not to mention that blocking a merge in the original PR is much less confusing than reverting and requiring a fresh PR to make the change. It adds basically no complexity, other than the fact that our tools aren't set up to work this way by default, which ends up requiring extra tools that are not as well integrated.
On top of this you can add batching which can be incredibly useful when your CI is slow (including things like deploying to a staging environment and letting it soak for a few hours) which isn't feasible to do per-PR even for fairly small teams.
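Batching can be sketched as an extension of the same idea. This is a simplified model (it assumes each PR passes or fails independently of the others, which real culprit-finding can't always assume): test the whole batch in one CI run, and on failure split it in half to isolate the offender, similar to a bisection.

```python
def land_batch(main: list[str], batch: list[str], passes) -> list[str]:
    """Try the whole batch in one CI run; on failure, recurse into halves."""
    if not batch:
        return main
    if passes(main + batch):
        return main + batch       # one CI run lands the whole batch
    if len(batch) == 1:
        return main               # lone culprit isolated; reject it
    mid = len(batch) // 2
    main = land_batch(main, batch[:mid], passes)
    return land_batch(main, batch[mid:], passes)
```

In the happy path a batch of N PRs costs one CI run instead of N, which is what makes slow CI (or a multi-hour staging soak) tolerable.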
This seems to skip the idea of stacked commits plus automatic rebasing, which have been around in Gerrit and other tools for quite a while.
If you read between the lines, the underlying problem in most of the discussion is GitHub's dominance of the code hosting space coupled with its less-than-ideal CI integration, which, while getting better, is stuck with baggage from all their past missteps and general API frailty.
That's a good point. To clarify, Gerrit itself didn't actually do merge queuing or CI gating. Its model was stacked commits: every change was rebased on top of the current tip of main before landing. That ensured a linear history but didn't solve the "Is the whole pipeline still green when we merge this?" problem.
That's why the OpenStack community built Zuul on top of Gerrit: it added a real gating system that could speculatively test multiple commits in a queue and only merge them if CI passed together. In other words, Zuul was Gerrit's version of a merge queue.
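A rough (and deliberately sequential) model of that speculative gating, with hypothetical names: each queued change is tested on top of the changes ahead of it, on the assumption that they will land; if one fails, it is evicted and everything behind it is effectively retested without it. Real Zuul runs these speculative jobs in parallel and restarts them on eviction, which this sketch doesn't capture.

```python
def gate(main: list[str], queue: list[str], passes) -> list[str]:
    """Speculatively test each change on top of the changes assumed to land ahead."""
    accepted: list[str] = []
    for change in queue:
        # Speculative base: main plus every change still expected to land ahead.
        if passes(main + accepted + [change]):
            accepted.append(change)
        # else: evicted; later changes are tested without it, which this
        # sequential loop models by simply never adding the failure.
    return main + accepted
```

The point of the speculation is that when everything is green (the common case), the queue lands at full pipeline speed instead of one serialized CI run per change.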
This covers part of the problem, the part where your tests are enough to indicate whether the changes are good enough to keep. In that scenario, relying only on fast-forward merges is good enough.
One trickier problem is when you don't know until later that a past change was bad: perhaps slow-running performance tests show a regression, or flaky tests turn out to have been showing a real problem, or you just want the additional velocity of pipelined landings that don't wait for all the tests to finish. Or perhaps you don't want to test every change, but then when things break you need to go back and figure out which change(s) caused the issue. (My experience is at Mozilla where all these things are true, and often.) Then you have to deal with backouts: do you keep an always-green chain of commits by backing up and re-landing good commits to splice out the bad, and only fast-forwarding when everything is green? Or do you keep the backouts in the tree, which is more "accurate" in a way but unfortunate for bisecting and code archaeology?
The Chromium commit queue slightly predates this -- they started using it in 2010.
While the article does not provide specific dates for the "prehistory," the Not Rocket Science piece refers to an automated integration system working circa 2000-2001:
> February 2 2014, 22:25 […] Thirteen years ago I worked at Cygnus/RedHat […] Ben, Frank, and possibly a few other folks on the team cooked up a system [with a] simple job: automatically maintain a repository of code that always passes all the tests.