GitPrime elevates engineering leadership with objective data. In this interview series, Engineering Leaders talk about how to build high performing teams.
We often fail to look beyond our intuitive notions of scaling.
We assume that adding a management layer will create leverage. Instead, it adds more noise to the signal. We assume that by hiring more people, we will “go faster.” Instead, we slow down.
So one of the first scaling traps is assuming that your organization needs to scale in the first place. Think about it: What, exactly, do you need more of?
Arlo Belshee once said something along the lines of “scaling is fundamentally a question of what must remain consistent,” to which I would add, “and what you are willing to spend, lose, and sacrifice for that consistency.”
So the question for your organization is this: What must remain consistent?
The answer to that question will most likely revolve around the unique value of your product. Beyond that, it could be something you and your team have defined as important: maybe it’s how you approach prioritization, or how you onboard new employees.
We ignore unseen costs
Now, consistency comes at a cost. So one of the more common scaling traps is that we underestimate the costs of various approaches — whether it’s invisible (or simply longer-term) costs that result from making a decision, or the opportunity cost of not choosing another.
Take an organization that is struggling with limiting work in progress. A new layer of project management is introduced into the mix — essentially human load balancers meant to “protect” the teams, and keep them “topped up.” Sounds reasonable. But what happens next? You have a non-linear increase in communication challenges.
Suddenly everything has to “go through” the project manager. The PM attends meetings without the team, and has to relay any important information back to the team. This decision to keep prioritization and project control consistent will come at a cost. It’s neither good nor bad — it’s simply something to be aware of and take into account.
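That “non-linear increase” can be made concrete with the classic pairwise-channels count, often associated with Brooks: a group of n people has n(n−1)/2 possible communication paths, so the coordination surface grows quadratically as the group grows. A minimal sketch (the function name is just illustrative):

```python
def communication_paths(n: int) -> int:
    """Number of pairwise communication paths in a group of n people: n(n-1)/2."""
    return n * (n - 1) // 2

# Doubling the group size roughly quadruples the coordination surface.
for size in (5, 10, 20, 40):
    print(f"{size:>2} people -> {communication_paths(size):>3} paths")
```

This is why inserting a “human load balancer” rarely removes coordination cost; it mostly relocates it onto one node.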
Another scaling situation is that one part of the organization starts to dwarf another. For example, you might have hired out a large sales and marketing team. You bring in customers. There’s lots of feedback. But your development team didn’t grow, and now it’s turning into a feature factory. They’re getting hit from all angles, and they’re trying to do their best to keep up.
But the core issue is that the organization is capable of coming up with ideas WAY FASTER than it can possibly implement them. When faced with an imbalance, people will try to work around the problem. They start professional services teams to do customizations — essentially working around the product. They become more and more prescriptive with their requests for Engineering and don’t leave time to truly iterate towards a solution. Is this a scaling problem? Perhaps. But it could also be described as an imbalance.
“Scaling” is a tactic, not a foregone conclusion. Neither is entropy (the classic “well, this is just what happens when you grow”). Some organizations can “scale” while retaining much of what made them successful as a small startup. Others lose it along the way.
We favor the immediate
In many cases, we fall back on the classic tension between short-term and long-term thinking. Scaling is often a decision made to fix pressing problems, and it often overlooks the team’s long-term resilience, adaptability, and ability to deliver real value.
A startup raises money with a promise to invest that capital in “growth.” And in many cases, more people is precisely what these startups need. But they often fall into the trap of not thinking about where these new people will fit into their current team and how they will all work (and communicate) with each other. And how can we spread something good from those who have it, to those who don’t (yet)?
Along that note, our intuition around scaling often breaks down when we make assumptions about economies of scale. We imagine that by centralizing a function, we will 1) gain consistency, and 2) gain scale. Specialized functions like Data Science, UX Research, and Operations often get caught in this trap. There is nothing inherently wrong about centralizing a function — it might be the perfect approach — but we tend to underestimate the costs of centralization. In the case of a shared team, this might include slower throughput, lack of novelty (all problems are solved the same way), lack of flexibility, and communication breakdowns. The assumed benefits — “hey, it totally makes sense to only solve this once” — blind us to the delayed and distant problems that will emerge as a result.
We grow dependencies
Are your units of scaling truly independent (where they need to be independent)? Many organizations emphasize autonomous teams but don’t realize the web of dependencies that are actually in place. The challenge here is that these dependencies are often invisible, complex, and/or informal.
Here’s a common question: “How can our team get this done, when other organizations need multiple teams to do the same thing?” For many, this is the scaling question. How do we get “bigger” things done? How do we, basically, hire more people and target them at solving a “bigger” problem?
A couple of observations in this case. First, a small group of developers (7–14) can move mountains when the right conditions are in place. When I inquire about these “big goals,” I’m often surprised by what constitutes “big,” as I’ve seen much smaller teams tackle similar challenges. When I dig deeper, I learn about a host of environmental factors that balloon the work: functional silos, shared resources, compliance steps, etc. In short, the problems aren’t necessarily “too big” for a small team in an optimal setting. But in that context, with heavy process or misplaced people, they can’t be driven forward by a single team.
Second, I tend to hear a lot of this “if we could only execute” type of language. In other words, the organization has persuaded itself that if only it could harness EVERYONE working, everything would work out great. A group of 150 developers can be reasonably self-managing with minimal layers of command and control. 150 developers have the potential to move mountains! But when you peel back the layers, you often find that these groups aren’t actually independent. They aren’t being tasked with specific problems. Individuals — groups, even — don’t actually own anything. And the organization has predetermined what that “big silver bullet solution” looks like, and it is trying to jam it into the feature factory.
I find that we often under-invest in what will allow teams true independence and autonomy (while remaining aligned with global missions). Amazon is often touted for its two-pizza teams, but those two-pizza teams leverage a vast array of services from across the organization. Two-pizza teams require LOTS of other two-pizza teams. This is the challenge of copying organizational structures in a boilerplate fashion, without deeply considering what is going on behind the scenes.
We model others
Scaling for a startup is not the same as scaling for a large, lumbering organization trying to figure out how to operate “big.” These are two different problems. In the case of the lumbering organization, the challenge can be better described as “de-scaling,” or trying to limit the overhead that is baked into its current size. It’s already BIG. It already has the large teams that underperform much smaller teams.
For a startup, you are “building out” the system. You have a completely different set of problems. Old structures are constantly crumbling and need love. The meeting that used to work is no longer effective. The temptation is to seek “more defined roles,” when you might need to redesign the meeting. The temptation is for the executive to feel overwhelmed by the level of detail, and to hire a manager underneath them to “deal with it.” Is this the right thing to do? Maybe. But my guess is that the existing form could have been effective with some subtle design tweaks.
Questions to consider (before you scale)
Here are a few questions we should consider when contemplating the need to scale:
- What is inhibiting us from achieving these goals right now?
- Is it really “more people” that we need?
- What’s most important? What must remain consistent?
- Do we really need all of this by a certain deadline?
- If we were to lay out our initiatives end-to-end and measure their success, would we find that we’re making reasonable decisions at the moment?
- And if we don’t know whether these initiatives are actually paying off — if we don’t know which people were involved and what kind of work went into projects that were and weren’t successful — how can we reasonably scale?
Are we overlooking a bottleneck? Often our blocker is not “too few engineers,” but rather a tooling problem, an under-resourced operations team, an inability to measure impact, or too much added complexity (relative to the value created), to name a few. We see scaling as the solution, but what we actually need is to face the real problem; once we do, we realize that piling more people on top of it won’t fix it.
For example, doubling the size of the engineering team will not solve the problem of an over-committed, over-worked “Ops” team. It will make the problem worse.
This is a major challenge when organizations are injected with capital and start rapidly hiring, without considering their constraints, and without first considering what must remain consistent.
Our ability to effectively scale hinges on our ability to make sense of the situation. When we don’t fully understand the situation, we fall back on our intuitive notions about it.
If we view scaling as a plan to be executed, where we must wait a long time to learn whether we’re on the right track, we’ll likely fail. Scaling requires a lot of “sense and respond.”
So the alternative is to view scaling as a set of experiments. We have to intervene in safe ways and sense the impact of those experiments. I’m always amazed when I hear — 16 months after a hiring increase — that “perhaps we hired too quickly.” How can that feedback loop be improved? Why are we waiting 16 months to figure out if our scaling efforts have succeeded?
What tends to happen is that there are a good 12 months of finger pointing — did we hire correctly? Why aren’t these folks getting up to speed? And there’s no “early warning” that things are strained, or have gone off the rails. A great example here: creating a new layer of management (hiring outside people), and not realizing for months — years, even — how that new layer was struggling because of an almost impossible need to manage up and down. Maybe they were never set up to do their jobs well. Maybe that extra layer of management was never needed in the first place.
Finally, if we scale without a sense of the system’s true health, and without engaging with and understanding the people doing the work, then we should be prepared to be surprised.