3 misconceptions about code churn
Code Churn is a seemingly straightforward concept that’s also commonly misunderstood. Since it can be seen as “throwing away code,” it’s no surprise the term can carry a fair amount of dread and baggage.
It may be useful to unpack the term here, since understanding the nuances of Code Churn gives engineering leaders unprecedented insight into how their teams operate.
After working with hundreds of customers and analyzing over 40 million commits from professional software developers, here are the top 3 misconceptions that engineering leaders have about Code Churn.
Myth #1: Code Churn is “bad”
Reality: Code Churn is neither good nor bad; it's a useful signal.
Testing and rework are natural parts of the software development process. However, Code Churn levels that deviate significantly above or below expected norms can be smoke pointing to a potential fire.
For example, elevated Code Churn could be an indicator that your team was given unclear requirements, or that a product owner is sending your engineers running in circles (see 6 Causes of Code Churn for additional reasons). Similarly, you can likely expect a significant increase in Churn at the beginning of an ambitious project—a signal that your team is experimenting with different implementation paths.
Code Churn can be interpreted as just that: another data point. It's neither good nor bad; it is signal data that helps paint a complete picture of engineering performance. When you understand what's happening in the codebase, you better understand your team.
Myth #2: There’s no such thing as “normal” Churn
Reality: In benchmarking the code contribution patterns of over 85,000 software engineers, the GitPrime data science team identified that Code Churn levels most frequently run between 13% and 30% of all code committed (i.e., 70-87% Efficiency), and that a typical team can expect to operate in the neighborhood of 75% Efficiency.
Of course, Code Churn levels will vary between individuals, types of projects, and over the course of the software lifecycle. The idea here is to become familiar with baseline Churn levels across your typical projects so you can a) identify when your team may be hung up and need assistance, or b) stay out of your team's way when they're in a healthy flow.
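The arithmetic behind these figures can be sketched in a few lines. Note that the precise definitions here are assumptions for illustration: "churned" lines are lines a developer rewrote or deleted shortly after committing them, and Efficiency is treated simply as the complement of the churn rate.

```python
# Illustrative sketch of the churn/efficiency arithmetic (not GitPrime's
# exact methodology). Churned lines are assumed to be lines rewritten or
# deleted shortly after being committed.

def churn_rate(total_lines_committed: int, churned_lines: int) -> float:
    """Fraction of committed code that was churned (0.0 to 1.0)."""
    if total_lines_committed == 0:
        return 0.0
    return churned_lines / total_lines_committed

def efficiency(total_lines_committed: int, churned_lines: int) -> float:
    """Complement of churn: the share of committed code that survived."""
    return 1.0 - churn_rate(total_lines_committed, churned_lines)

# Example: 1,000 lines committed this sprint, 250 of them reworked soon after.
rate = churn_rate(1000, 250)   # 0.25 -> within the typical 13-30% band
eff = efficiency(1000, 250)    # 0.75 -> the "neighborhood of 75%"
print(f"Churn: {rate:.0%}, Efficiency: {eff:.0%}")
```

A team tracking these two numbers over several sprints can establish the baseline described above, then watch for meaningful departures from it rather than judging any single sprint in isolation.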
The human brain is a fantastic pattern-matching device. We’re naturally inclined to fill the gaps in the information we have. And while it’s an incredible ability, it can cause harmful biases (and blaming) to emerge when we have limited perspective — or when we use data without context.
The intention isn’t to find The Perfect Way™ that everyone should operate. The purpose is to instrument software systems of record and give managers rich data about their teams, so they can understand varying ‘normals’ and know when things change.
Myth #3: Churn should be consistent across a project’s lifecycle
Reality: Code Churn can be lumpy. A healthy project may have elevated Churn at the beginning of a project and lower Churn towards the end. Measuring Code Churn across various teams and projects will help you identify which fluctuations are natural, and which are indicators that something’s amiss.
As mentioned in Myth #1, you may expect to see elevated “exploratory churn” toward the beginning of a project, notably when a team is pioneering a new feature. An unusually high (or low) Churn level could be an early indicator that a project is not progressing as planned.
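One simple way to separate natural fluctuation from a genuine warning sign is to compare each period's churn against the project's own history. The sketch below is a hypothetical example, assuming weekly churn rates are already available; the z-score threshold and the sample numbers are illustrative, not prescriptive.

```python
# Hypothetical sketch: flag weeks whose churn deviates sharply from the
# team's own baseline. Threshold and data are illustrative assumptions.
from statistics import mean, stdev

def flag_anomalies(weekly_churn: list[float], z_threshold: float = 2.0) -> list[int]:
    """Return indices of weeks whose churn sits more than z_threshold
    standard deviations from the mean churn rate."""
    mu = mean(weekly_churn)
    sigma = stdev(weekly_churn)
    if sigma == 0:
        return []
    return [i for i, c in enumerate(weekly_churn)
            if abs(c - mu) / sigma > z_threshold]

# Exploratory churn is high early on, settles mid-project, then spikes
# in week 9 -- the flurry of work right before a release.
weekly = [0.45, 0.38, 0.22, 0.20, 0.18, 0.19, 0.21, 0.17, 0.18, 0.55]
print(flag_anomalies(weekly))
```

Here the early elevated churn blends into the baseline (it is expected exploratory work), while the late spike stands out against the settled mid-project pattern, which is exactly the distinction the text above draws.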
If you notice high Churn in the flurry of work leading up to a release, you have justified reason for concern: either the release is coming in hot, or you have advance notice that it's going to slip.
But with data, you're able to detect the problem early, communicate with stakeholders, and make an informed decision about shipping. When people know what's coming, they can generally get comfortable with it. It's when the release is due today at 4:00 pm, and at 3:59 pm you announce that it's going to be late, that Engineering starts to lose trust and respect.
Having the right data to forecast work reliably gives Engineering more weight. This kind of predictive power helps Engineering build respect with other organizations in the enterprise.
The goal of adopting metrics
Engineering leaders have been running on limited signals for too long.
Imagine you’re driving a car, but you can’t look outside. Now and then, you get a ping that says “turn left.” That’s an impoverished information world. And that’s exactly the world in which Engineering has been working.
The goal of monitoring Code Churn and other Engineering KPIs is to supplement your intuition with concrete data. Gone are the days of manual collection and long reporting chains. Data-driven feedback loops help you find opportunities for process improvement and tune Engineering workflows in real time.
With the right data in the mix, engineering managers can get a real read on their team and notice when things are abnormal. The next wave of effective Engineering leadership is being defined by the use of concrete data to determine what’s working, identify risk, and celebrate victories. Data is helping engineering leaders level-up their team with better decisions, made more quickly—to effectively support, guide, and challenge their teams to do the best work of their careers.
Travis Kimmel is the CEO and co-founder of GitPrime, the leader in data-driven productivity reporting for software engineering teams. He is experienced in building high-performing teams and empowering people to do the best work of their careers. Follow @traviskimmel on Twitter.