Splitting engineering teams into defense and offense

212 points, posted a year ago
by dakshgupta

84 Comments

solatic

a year ago

This pattern has a smell. If you're shipping continuously then your on-call engineer is going to be fixing the issues the other engineers are shipping, instead of those engineers following up on their deployments and fixing issues caused by those changes. If you're not shipping continuously, then customer issues can't be fixed continuously anyway, and your list of bugs can be prioritized by management with the rest of the work to be done. The author quotes maker vs. manager schedules, but one of the conclusions of following that is that engineers don't talk directly to customers, because "talking to customers" is another kind of meeting, which is a "manager schedule" kind of thing rather than a "maker schedule" kind of thing.

There's simply no substitute for Kanban processes and for proactive communication from engineers. In a small team without dedicated customer support, a manager takes the customer call, decides whether it's legitimately a bug, creates a ticket to track it and prioritizes it in the Kanban queue. An engineer takes the ticket, fixes it, ships it, communicates that they shipped something to the rest of their team, is responsible for monitoring it in production afterwards, and only takes a new ticket from the queue when they're satisfied that the change is working. But the proactive communication is key: other engineers on the team are also shipping, and everyone needs to understand what production looks like. Management is responsible for balancing support and feature tasks by balancing the priority of tasks in the Kanban queue.
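Not gospel, just a minimal sketch of the "single prioritized queue" idea (the ticket names and priority scheme are invented for illustration):

    import heapq
    from dataclasses import dataclass, field

    @dataclass(order=True)
    class Ticket:
        priority: int                     # lower = more urgent; set by management
        title: str = field(compare=False)
        kind: str = field(compare=False)  # "bug" or "feature"

    # One queue holds both support work and feature work.
    queue = []
    heapq.heappush(queue, Ticket(2, "Refactor billing module", "feature"))
    heapq.heappush(queue, Ticket(1, "Customer-reported crash on login", "bug"))
    heapq.heappush(queue, Ticket(3, "Add CSV export", "feature"))

    # An engineer pulls the top ticket, ships it, monitors it in production,
    # and only then comes back for the next one.
    next_ticket = heapq.heappop(queue)
    print(next_ticket.title)  # -> Customer-reported crash on login

Management rebalances support vs. feature work simply by adjusting priorities in that one queue.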

thih9

a year ago

> on-call engineer is going to be fixing the issues the other engineers are shipping, instead of those engineers following up on their deployments and fixing issues caused by those changes

Solution: don’t. If a bug has been introduced by the currently running long process, forward it back. This is not distracting, this is very much on topic.

And if a bug is discovered after the cycle ends - then the teams swap anyway and the person who introduced the issue can still work on the fix.

dakshgupta

a year ago

This is a real shortcoming: the engineers who ship feature X will not be responsible for the immediate aftermath. So far we haven't seen this hurt in practice, probably because we are very small and in-person, but you might be correct, and it would then likely be the first thing about this that breaks as our team grows.

pnut

a year ago

We came to this conclusion from a different direction - feature implementation teams are focused on subdomains, but defensive teams are spread across the entire ecosystem.

Additionally, defensive devs have brutal SLAs, and are frequently touching code with no prior exposure to the domain.

They became known as "platform vandals" to the feature teams, and we eventually put an end to the separation.

spease

a year ago

It really depends on the context. Some types of troubleshooting just involve a lot of time-consuming trial and error that doesn't teach anything; it just rules out possibilities to diagnose the issue. Some products have a long deployment cycle or feedback loop. Some people are just more suited to, or greatly prefer, either low or high context-switched work.

Good management means finding the right balance for the team, product, and business context that you have, rather than inflexibly trying to force one strategy to work because it’s supposedly the best.

WorldWideWebb

a year ago

What do you do if the manager has no technical skill/knowledge (basically generic middle manager that happens to lead a technical team)?

dakiol

a year ago

I once worked for a company that required each engineer on the team to do what they called "firefighting" during working hours (so not exactly on-call). So for one week, I was triaging bug tickets and trying to resolve them. These bugs belonged to the area my team was part of, so they affected the same product but a vast number of microservices, most of which I didn't know much about (besides how to use their APIs). It didn't make much sense to me. So you have Joe punching code like there's no tomorrow and introducing bugs because features must go live asap. And then I'm the one fixing stuff. So unproductive. I always advocated for a slower pace of feature delivery (so more testing and fewer bugs in production) but everyone was like "are you from the 80s or something? We gotta move fast man!"

onion2k

a year ago

This sort of thing is introduced when the number of bugs in production, especially bugs that aren't user-facing or a danger to data (eg 'just' an unhandled exception or a weird log entry), gets to a peak and someone decides it's important enough to actually do something about it. Those things are always such a low priority that they're rarely dealt with any other way.

In my experience whenever that happens someone always finds an "oh @#$&" case where a bug is actually far more serious than everyone thought.

It is an approach that's less productive than slowing down and delivering quality, but it's also completely inevitable once a team/company grows to a sufficient size.

dakshgupta

a year ago

This is interesting because it’s what I imagine would happen if we scaled this system to a larger team - offense engineers would get sloppy, defensive engineers would get overwhelmed, even with the rotation cycles.

Small, in-person, high-trust teams have the advantage of not falling into bad offense habits.

Additionally, a slower shipping pace simply isn’t an option, seeing as the only advantage we have over our giant competitors is speed.

DJBunnies

a year ago

I think we’ve worked for the same org

user

a year ago

[deleted]

resonious

a year ago

I honestly don't really like the "let's slow down" approach. It's hard for me to buy into the idea that simply slowing down will increase product quality. But I think your comment already contains the key to better quality: close the feedback loop so that engineers are responsible for their own bugs. If I have the option of throwing crap over the wall, I will gravitate towards it. If I have to face all of the consequences of my code, I might behave otherwise.

glenjamin

a year ago

Having a proportion of the team act as triage for issues / alerts / questions / requests is a generally good pattern that I think is pretty common - especially when aligned with an on-call rotation. I've done it a few times by having a single person in a team of 6 or 7 do it. If you're having to devote 50% of your 4-person team to this sort of work, that suggests your ratios are a bit off imo.

The thing I found most surprising about this article was this phrasing:

> We instruct half the team (2 engineers) at a given point to work on long-running tasks in 2-4 week blocks. This could be refactors, big features, etc. During this time, they don’t have to deal with any support tickets or bugs. Their only job is to focus on getting their big PR out.

This suggests that this pair of people only release 1 big PR for that whole cycle - if that's the case this is an extremely late integration and I think you'd benefit from adopting a much more continuous integration and deployment process.

wavemode

a year ago

> This suggests that this pair of people only release 1 big PR for that whole cycle

I think that's a too-literal reading of the text.

The way I took it, it was meant to be more of a generalization.

Yes, sometimes it really does take weeks before one can get an initial PR out on a feature, especially when working on something that is new and complex, and especially if it requires some upfront system design and/or requirements gathering.

But other times, surely, one also has the ability to pump out small PRs on a more continuous basis, when the work is more straightforward. I don't think the two possibilities are mutually exclusive.

The_Colonel

a year ago

> This suggests that this pair of people only release 1 big PR for that whole cycle - if that's the case this is an extremely late integration

I don't think it suggests how the time block translates into PRs. It could very well be a series of PRs.

In any case, the nature of the product / features / refactorings usually dictates the minimum size of a PR.

codemac

a year ago

> extremely late integration

That's only late if there are other big changes going in at the same time. The vast majority of operational/ticketing issues have few code changes.

I'm glad I had the experience of working on a literal waterfall software project in my life (e.g. plan out the next 2 years first, then we "execute" according to a very detailed plan that entire time). Huge patches were common in this workflow, and only caused chaos when many people were working in the same directory/area. Otherwise it was usually easier on testing/integration - only 1 patch to test.

yayitswei

a year ago

A PR that moves the needle is worth 2-4 weeks or more. Small improvements or fixes can be handled by the team on the defense rotation.

marcinzm

a year ago

That's also been my experience. It's part time work for a single on call engineer on a team of 6-8. If it's their full time work for a given sprint then we have an urgent retro item to discuss around bug rates, code quality and so on.

stopachka

a year ago

> While this is flattering, the truth is that our product is covered in warts, and our “lean” team is more a product of our inability to identify and hire great engineers, rather than an insistence on superhuman efficiency.

> The result is that our product breaks more often than we’d like. The core functionality may remain largely intact but the periphery is often buggy, something we expect will improve only as our engineering headcount catches up to our product scope.

I really resonate with this problem. It was fun to read. We've tried different methods to balance customer work and long-term projects too.

Some more ideas that can be useful:

* Make quality projects an explicit monthly goal.

For example, when we noticed the edges of our surface area had gotten too buggy, we started a 'Make X great' goal for the month. This way you don't only react to users reporting bugs, but can be proactive.

* Reduce Scope

Sometimes it can help to reduce scope; for example, before adding a new 'nice to have' feature, focus on making the core experience really great. We also considered pausing larger enterprise contracts, mainly because they would take away from the core experience.

---

All this to say, I like your approach; I would also consider a few others (make quality projects a goal, and cut scope)

cutemonster

a year ago

> Make quality projects .. can be proactive

What are some proactive ways? Ideally that cannot easily be gamed?

I suppose test coverage and such things, and an internal QA team. What I thought the article was about (before having read it) was having half of the developers do red-team penetration testing, or look for UX bugs, in things the other half had written.

Any more ideas? Do you have any internal definitions of "a quality project"?

Attummm

a year ago

When you get to that stage, software engineering has failed fundamentally.

This is akin to having a boat that isn't seaworthy, so the suggestion is to have a rowing team and a bucket team. One rows, and the other scoops the water out, while the actual issue at hand goes unaddressed. Instead, focus on creating a better boat. In this case, that would mean investing in testing: unit tests, integration tests, and QA tests.

Have staff engineers guide the teams and make reducing incidents their KPI. Increase quality and reduce bugs, and there will be fewer outages and issues.

lucasyvas

a year ago

> When you get to that stage, software engineering has failed fundamentally.

Agreed - this is a survival-mode tactic in every company I've been at when it's happened. If you're permanently in the described mode and you're small, you might as well be dead.

If mid to large and temporary, this might be acceptable to right the ship.

intelVISA

a year ago

Yep, software is about cohesion. Having one side beloved by product and blessed with 'the offense' racing ahead to create extra work for the other is not the play.

Even when they rotate - who wants to clock in to wade through a fresh swamp they've never seen? Don't make the swamp: if you can't move fast without sinking half the ship with each PR, then raise your budget for better engineers - they exist.

This premise is like advocating for tech debt loan sharks; I really hope TFA was ironic. Sure, it makes sense from a business perspective as a last gasp to sneakily sell off your failed company but you would never blog "hey here at LLM-4-YOU, Inc. we're sinking".

ericmcer

a year ago

You are viewing it like an engineer. From a business perspective, if you can keep a product stable while growing your user base until you become an attractive acquisition target, then this is a great strategy.

Sometimes as an engineer I like frantically scooping water while we try to scale rapidly, because it means leadership's vision is to get an exit for everyone as fast as possible. If leadership said "let's take a step back and spend 3 months stabilizing everything and creating a testing/QA framework" I would know they want to ride it out till the end.

kqr

a year ago

Wait, are you saying well-managed software development has no interrupt-driven work, and still quickly and efficiently delivers value to end users?

How does one get to that state?

fryz

a year ago

Neat article - I know the author mentioned this in the post, but I only see this working as long as a few assumptions hold:

* avg tenure / skill level of team is relatively uniform

* team is small with high-touch comms (eg: same/near timezone)

* most importantly - everyone feels accountable and has agency for work others do (eg: codebase is small, relatively simple, etc)

Where I would expect to see this fall apart is when these assumptions drift and holding accountability becomes harder. When folks start to specialize, something becomes complex, or work quality is sacrificed for short-term deliverables, the folks that feel the pain are the defense folks, and they don't have the agency to drive the improvements.

The incentives for folks on defense are completely different than folks on offense, which can make conversations about what to prioritize difficult in the long term.

dakshgupta

a year ago

These assumptions are most likely important, and they hold true in our case: we work out of the same room (in fact we all live together) and 3/4 of us are equally skilled (I am not as technical).

eschneider

a year ago

If the event-driven 'fixing problems' part of development gets separated from the long-term 'feature development', you're building a disaster for yourself. Nothing more soul-sucking than fixing other people's bugs while they happily go along and make more of them.

dakshgupta

a year ago

There is certainly some razor applied to whether a request is unique to one user or is widely requested / likely to improve the experience for many users.

jedberg

a year ago

> this is also a very specific and usually ephemeral situation - a small team running a disproportionately fast growing product in a hyper-competitive and fast-evolving space.

This is basically how we ran things for the reliability team at Netflix. One person was on call for a week at a time. They had to deal with tickets and issues. Everyone else was on backup and only called for a big issue.

The week after you were on call was spent following up on incidents and remediation. But the remaining weeks were for deep work, building new reliability tools.

The tools that allowed us to be resilient enough that being on call for one week straight didn't kill you. :)

dakshgupta

a year ago

I am surprised and impressed that a company at that scale functions like this. We often discuss internally whether we can still do this when we're 7-8 engineers.

cgearhart

a year ago

This is often harder at large companies because you very rarely make career progress playing defense, so it becomes very tricky to do it fairly. It can work wonders if you have the right teammates, but it's almost a prisoner's dilemma game that falls apart as soon as one person opts out.

dakshgupta

a year ago

Good point. We usually only rotate when the long-running task is done, but eventually we'll arrive at some feature that takes more than a few weeks to build, so we will need to restructure our methods then.

shalmanese

a year ago

To the people pooh poohing this, do y’all really work with such terrible coworkers that you can’t imagine an effective version of this?

You need trust in your team to make this work but you also need trust in your team to make any high velocity system work. Personally, I find the ideas here extremely compelling and optimizing for distraction minimization sounds like a really interesting framework to view engineering from.

johnnyanmac

a year ago

work with terrible management that can't imagine an effective version of this.

jph

a year ago

Small teams shouldn't split like this IMHO. It's better/smarter/faster IMHO to do "all hands on deck" to get things done.

For prioritization, use a triage queue because it aims the whole team at the most valuable work. This needs to be the mission-critical MVP & PMF work, rather than what the article describes as "event driven" customer requests i.e. interruptions.

dakshgupta

a year ago

A triage queue makes a lot of sense, the only downside being the challenge of getting a lot done without interruption.

Kinrany

a year ago

You're not addressing the issue of triage also being an interruption.

d4nt

a year ago

I think they’re on to something, but the solution needs more work. Sometimes it’s not just individual engineers who are playing defence, it’s whole departments or whole companies that are set up around “don’t change anything, you might break it”. Then the company creates special “labs” teams to innovate.

To borrow a football term, sometimes company structure seems like it’s playing the “long ball” game. Everyone sitting back in defence, then the occasional hail mary long pass up to the opposite end. I would love to see a more well developed understanding within companies that certain teams, and the processes that they have are defensive, others are attacking, and others are “mid field”, i.e. they’re responsible for developing the foundations on which an attacking team can operate (e.g. longer term refactors, API design, filling in gaps in features that were built to a deadline). To win a game you need a good proportion of defence, mid field and attack, and a good interface between those three groups.

svilen_dobrev

a year ago

IMO the split, although good (the pattern is "sacrifice one person" as per Coplien/Harrison's Organizational Patterns book [0]), is too drastic. It should not be defense vs. offense 100% with a wall in between; rather, for each and every issue (defense) and/or feature (offense), someone has to pick it up and become the responsible one (which may or may not mean completely doing it by hirself). Fixing a bug for an hour or two has sometimes been exactly the break I needed in order to continue digging into some big feature when I felt stuck.

And the team should check the balances once in a while, and maybe rethink the strategy, to avoid overworking someone and underworking someone else, thus creating bottlenecks and vacuums.

At least this is the way I have worked and organised such teams - 2-5 ppl covering everything. Frankly, we never had many customers :/ but even one is enough to generate plenty of "noise" - which sometimes is just noise, but from a good customer it will mostly be real defects and generally under-tended parts. Also, good customers accept a NO as an answer. So do say more NOs... there is a psychological phenomenon in software engineering of saying yes and promising moonshots when one knows it cannot happen NOW, but it looks good.

have fun!

[0] https://svilendobrev.com/rabota/orgpat/OrgPatterns-patlets.h...

Kinrany

a year ago

Thanks for the name!

chiefalchemist

a year ago

Interesting concept. Certainly worth trying, but in the name of offense (read: being proactive):

- "and our “lean” team is more a product of our inability to identify and hire great engineers, rather than an insistence on superhuman efficiency."

Can we all at some point have a serious discussion on hiring and training? It seems that many teams are understaffed or at least not satisfied with the quality and quantity of their team. Why is that? Why does it seem to be the norm?

- What about mitigating bugs in the first place? Shouldn't someone be assigned to that? Yeah, sure, bugs are a given. They are going to happen. But bugs in production are something that real, paying customers shouldn't experience. At the very least, what about feature flags? That is, something new is introduced to a limited number of users. If there's a bug and it's significant enough, the flag is flipped and the new feature withdrawn. Then the bug can be sorted out when someone is available.
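For what it's worth, the kill-switch flavor of this can be tiny. A rough sketch (the flag name, config file, and percentage-rollout scheme here are made up for illustration, not taken from the article):

    import json
    import zlib

    def load_flags(path="flags.json"):
        # hypothetical config, e.g. {"new_checkout_flow": {"enabled": true, "rollout_pct": 10}}
        with open(path) as f:
            return json.load(f)

    def is_enabled(flags, name, user_id):
        flag = flags.get(name, {})
        if not flag.get("enabled", False):
            return False  # flag flipped off: feature withdrawn for everyone
        # stable bucketing so a given user always gets the same answer
        bucket = zlib.crc32(user_id.encode()) % 100
        return bucket < flag.get("rollout_pct", 0)

    flags = load_flags()
    if is_enabled(flags, "new_checkout_flow", user_id="u123"):
        ...  # new code path, exposed to a limited slice of users
    else:
        ...  # existing behaviour

The point being that withdrawing a broken feature becomes a config change rather than an emergency deploy.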

Perhaps the profession just is what it is? Some teams are almost miraculously better than others? Maybe that's luck, individuals, product, and/or the stack? Maybe, like plumbers and shit, there are just things that engineering teams can't avoid? I'm not suggesting we surrender, but that we become more realistic about expectations.

philipwhiuk

a year ago

We have a person who is 'Batman' to triage production issues. Generally they'll pick up smaller sprint tasks. It rotates every week. It's still stuff from the team so they aren't doing stuff unknown (or if they are, it's likely they'll work on it soon).

The aim is generally not to provide a perfect fix but an MVP fix and raise tickets in the queue for regular planning.

It rotates round every week or so.

My company's not very devops so it's not on-call, but it's 'point of contact'.

ryukoposting

a year ago

I can't be the only one who finds the graphics at the top of this article off-putting. I find it hard to take someone seriously when they plaster GenAI slop across the top of their blog.

That said, there's some credence to what the author is describing. Although I haven't personally worked under the exact system described, I have worked in environments where engineers take turns being the first point of contact for support. In my experience, it worked pretty well. People know your bandwidth is going to be a bit shorter when you're on support, and so your tasks get dialed back a bit during that period.

I think the author, and several people in the comments, make the mistake of assuming that an "engineer on support" necessarily can fix any given problem they are approached with. Larger firms could allocate a complete cross-functional team of support engineers, but this is very costly for small outfits. If you have mobile apps, in-house hardware products and/or integrations with third-party hardware, it's basically guaranteed that your support engineer(s) will eventually be given a problem that they don't have the expertise to solve.

In that situation, the support engineer still has the competencies to figure out who does know how to fix the problem. So, the support engineer often acts more as a dispatcher than a singular fixer of bugs. Their impact is still positive, but more subtle than "they fix the bugs." The support engineer's deep system knowledge allows them to suss out important details before the bug is dispatched to the appropriate dev(s), thereby minimizing downtime for the folks who will actually implement the fix.

jwrallie

a year ago

I think interruptions damage productivity overall, not only for engineers. Maybe some are unaware of it, and others simply don't care. They don't want to sacrifice their own productivity by waiting on someone busy, so they interrupt, and after getting the information they want, they feel good. From their perspective, productivity increased, not decreased.

Some engineers are more likely to avoid interrupting others because they can sympathize.

smugglerFlynn

a year ago

Constantly working in what OP describes as defence might also be negatively affecting the perception of cause and effect of own actions:

> Specifically, we show that individuals following clock-time [where tasks are organized based on a clock**] rather than event-time [where tasks are organized based on their order of completion] discriminate less between causally related and causally unrelated events, which in turn increases their belief that the world is controlled by chance or fate. In contrast, individuals following event-time (vs. clock-time) appear to believe that things happen more as a result of their own actions. [0]

** - in my experience, clock-based organisation seems to be very characteristic of what OP describes as defensive, when you become driven by incoming priorities and meetings

Broader article about impact of schedules at [1] is also highly relevant and worth the read.

[0] - https://psycnet.apa.org/record/2014-44347-001

[1] - https://hbr.org/2021/06/my-fixation-on-time-management-almost-broke-me

Kinrany

a year ago

By "constantly", do you mean for 2-4 weeks in a row?

october8140

a year ago

My first job had a huge QA team. It was my job to work quickly and it was their job to find the issues. This actually set me up really poorly, because I got in the habit of not doing proper QA. There were at least 10 people doing it for me. When I left, it took a while for me to learn what properly QAing my own work looked like.

ntarora

a year ago

Our team ended up having the oncall engineer for the week also work primarily on bug squashing and anything that makes support easier. Over time the support and monitoring becomes better. Basically dedicated tech debt capacity, which has worked well for us.

marcinzm

a year ago

It feels like having 50% of your team's time spent on urgent support, triage, and bugs is a lot. That seems like a much better thing to solve than trying to work around the issue by splitting the team. Having those people fix bugs while a 4-week refactor in a secluded branch is constantly in progress probably doesn't help with efficiency or bug rate either.

Kinrany

a year ago

It's a team of 4, so the only options are 25% and 50%.

But the fact that this explicit split makes the choice visible is clearly an upside.

JohnMakin

a year ago

This is a common "pattern" on well-run ops teams. The work of a typical ops team consists of a lot of new work, but tons of interruptions come in as new issues arise and must be dealt with. So we would typically assign 1 engineer (who was also typically on call) a lighter workload, and they would be responsible for triaging most issues that came in.

toolslive

a year ago

The proposed strategy will work, as will plenty of others, because it's a small team. That is the fundamental reason. Small teams are more efficient. So if you're managing a team of 10+ individuals: split them into 2 teams and keep them out of each other's way/harm.

ozim

a year ago

I like the approach: it is easy to explain and it has catchy names.

But it sounds like there has to be a lot of micromanagement involved. When you have a team of 4 it is easy to keep up, but as soon as you go to 20 (and that increase also means many more customer requests) it will fall apart.

ndndjdjdn

a year ago

This is probably devops. A single team taking full responsibility and swapping on-call-type shifts. These guys know their dogfood.

You want the defensive team to work on automating away stuff that pays off for itself in the 1-4 week timeframe. If they get any slack to do so!

stronglikedan

a year ago

Everyone on every team should have something to "own" and feel proud of. You don't "own" anything if you're always on team defense. Following this advice is a sure fire way to have a high churn rate.

FireBeyond

a year ago

Yup, last place I was at I had engineers begging me (PM) to advocate against this, because leadership was all "We're going to form a SEAL team to blaze out [exciting, interesting, new, fun idea/s]. Another team will be on bug fixes."

My team had a bunch of stability work, and bug fixes (and there were a lot of bugs and a lot of tech debt, and very little organizational enthusiasm to fix the latter).

Guess where their morale was, compared to some of the other teams?

LatticeAnimal

a year ago

From the post:

> At the end of the cycle, we swap.

They swap teams every 2-4 weeks so nobody will always be on team defense.

ninininino

a year ago

You didn't read the article, did you? They swap every 2 weeks between being on offense and defense.

bsimpson

a year ago

Ha - I think greptile was my first email address!

Reptile was my favorite Mortal Kombat character, and our ISP added a G before all the sub accounts. They put a P in front of my dad's.

eiathom

a year ago

And, what else?

Putting a couple of buzzwords on a practice being performed for at least 15 years now doesn't make you clever. Quite the opposite in fact.

Kinrany

a year ago

Do you have a name for this practice?

bradarner

a year ago

Don't do this to yourself.

There are 2 fundamental aspects of software engineering:

Get it right

Keep it right

You have only 4 engineers on your team. That is a tiny team. The entire team SHOULD be playing "offense" and "defense" because you are all responsible for getting it right and keeping it right. Part of the challenge sounds like poor engineering practices and shipping junk into production. That is NOT fixed by splitting your small team's cognitive load. If you have warts in your product, then all 4 of you should be aware of it, bothered by it and working to fix it.

Or, if it isn't slowing growth and core metrics, just ignore it.

You've got to be comfortable with painful imperfections early in a product's life.

Product scope is a prioritization activity, not a team organization question. In fact, splitting up your efforts will negatively impact your product scope, because you are dividing your time and creating more slack than you would by moving as a small unit in sync.

You've got to get comfortable telling users: "that thing that annoys you, isn't valuable right now for the broader user base. We've got 3 other things that will create WAY MORE value for you and everyone else. So we're going to work on that first."

MattPalmer1086

a year ago

I have worked in a small team that did exactly this, and it works well.

It's just a support rota at the end of the day. Everyone does it, but not all the time, freeing you up to focus on more challenging things for a period without interruption.

This was an established business (although small), with some big customers, and responsive support was necessary. There was no way we could just say "that thing that annoys you, tough, we are working on something way more exciting." Maybe that works for startups.

dakshgupta

a year ago

All of these are great points. I do want to add that we rotate offense and defense every 2-3 weeks, and the act of playing defense, which is usually customer facing, gives that half of the team a ton of data to base the next move on.

rkangel

a year ago

> You've got to get comfortable telling users: "that thing that annoys you, isn't valuable right now for the broader user base. We've got 3 other things that will create WAY MORE value for you and everyone else. So we're going to work on that first."

Yes, but you've got to spend time talking to users to say that. Many engineering teams have incoming "stuff". Depending on your context that might be bug reports from your customer base, feature requests from clients, etc. You don't want these queries (which take half an hour each and are spread out over the week) to repeatedly interrupt your engineering team; it's not great for getting stuff done and isn't great for getting timely, helpful answers back to the people who asked.

There's a few approaches. This post describes one ("take it in turns"). In some organisations, QA is the first line of defence. In my team, I (as the lead) do as much of it as I can because that's valuable to keep the team productive.

ramesh31

a year ago

To add to this, ego is always a thing among developers. Your defensive players will inevitably end up resenting the offense for 1. leaving so many loose ends to pick up and 2. not getting the opportunity for greenfield work themselves. You could try to "fix" that by rotating, but then you're losing context and heading down the road toward man-monthing.

user

a year ago

[deleted]

Roelven

a year ago

Getting so tired of the war metaphors in attempts to describe software development. We solve business problems using code, we don't make a living by role-playing military tactics. Chill out my dudes

madeofpalk

a year ago

Somewhat random side note - I find it so fascinating that developers invented this myth that they're the only people who have 'concentration', when this is so obviously wrong. Ask any 'knowledge worker', or hell, even a physical labourer, and I'm sure they'll tell you about the productivity of being "in the zone" and a lack of interruptions. Back in the early 2010s they called it 'flow'.

dakshgupta

a year ago

My theory is that to outsiders software development looks closer to other generic computer based desk jobs than to the job of a writer or physical builder, so to them it’s less obvious that programming needs “flow” too.

000ooo000

a year ago

The article doesn't say or suggest that. It says it applies to engineers.

Towaway69

a year ago

What's wrong with collaboratively working together? Why is there a need to create an artificial competition between an "offence" and a "defence" team?

And why would team members suddenly be collaborative within their team? E.g. why should the "offence" team members suddenly help each other if that's not happening generally?

This sounds a lot like JDD - Jock Driven Development.

Perhaps the underlying problems of "don't touch it because we don't understand it" should be solved before engaging in fake competition to increase the stress levels.

megunderstood

a year ago

Sounds like you didn't read the article.

The idea has nothing to do with creating artificial competition and it is actually designed as a form of collaboration.

Some work requires concentration and the defensive team is there to maintain the conditions for this concentration, i.e. prevent the offensive team from getting interrupted.

namenotrequired

a year ago

Many are complaining that this way the engineers are incentivised to carelessly create bugs because they have to ship fast and won’t be responsible for fixing them.

That’s easy to fix with an exception: you won’t have to worry about support for X time unless you’re the one who recently made the bug.

It turns out that once they’re responsible for their bugs, there won’t actually be that many bugs and so interruptions to a focused engineer will be rare.

That's how we do it in my startup. We have six engineers; most are even pretty junior. Only one is responsible for support in any given sprint, and often he'll have time left over to work on other things, e.g. updating dependencies.