We just released a series of industry benchmarks across the Review and Submit Fundamental metrics in GitPrime. They’re designed to help inform software teams about how other organizations are leveraging the review process.
When customers first start getting a feel for their team’s data, it’s natural to ask: Is this normal? How do we compare to other teams? Before digging into discussions with the team around the data, most managers will do some exploration on where their team is today, relative to its historical trends and relative to the industry.
That’s why the researchers at GitPrime previously studied over 7 million anonymized commits across nearly 88,000 developers to understand the contribution patterns of software developers. It was in that research that we first began measuring industry benchmarks across the Code Fundamental metrics (Active Days per Week, Commits per Active Day, Impact, and Efficiency). With that information, customers were able to begin using industry benchmarks to get a more informed understanding of their team’s baseline.
This year, our researchers studied over a half-million pull requests to identify industry benchmarks in the code review process across the Review and Submit Fundamental metrics. For reference, these metrics look at how individuals are collaborating with peers in the review process from both the “submitter” and “reviewer” sides of the discussion. By breaking the review process down into a set of core activities, engineering leaders can promote healthy collaboration habits amongst the team—and teams can easily visualize how different activities contribute to the overall time it takes to resolve pull requests.
How to leverage the new industry benchmarks
There are two primary use cases for leveraging the new benchmarks.
First, you can use the industry benchmarks as a starting point for incorporating analytics about the development process into a team’s regular meetings.
You can start incorporating quantitative review into your team’s weekly or bi-weekly rituals by bringing the Review and Submit Fundamentals to one of your team’s regular meetings, like a sprint retrospective. Have a discussion about which metrics matter most to the team in the review process, and then gain buy-in on healthy ranges for those metrics. For example, if the team wants to make sure that someone picks work up quickly once a PR is submitted for review, they can set expectations around Time to First Comment and then review that metric on a regular cadence to reinforce that value.
After you have established healthy ranges, use that information to facilitate a conversation on a regular cadence about any deltas seen between target ranges and actuals. (Essentially you’re looking at the what and facilitating a conversation about the why.) Then, have a productive discussion about what learnings could be applied to the next sprint or iteration.
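The targets-vs-actuals conversation above can be sketched in a few lines of code. This is a minimal, hypothetical illustration, not GitPrime’s API: Time to First Comment is a real Review and Submit metric named in this post, while the other metric names, the target ranges, and the actual values are made-up examples a team might agree on.

```python
# Hypothetical targets-vs-actuals check for a sprint retrospective.
# Metric names (other than Time to First Comment), ranges, and values
# are illustrative assumptions, not GitPrime data.

target_ranges = {
    "Time to First Comment (hours)": (0, 6),
    "Review Cycles per PR": (1, 3),
}

actuals = {
    "Time to First Comment (hours)": 9.5,
    "Review Cycles per PR": 2,
}

def review_deltas(targets, actuals):
    """Return metrics outside their agreed healthy range, with the delta."""
    out_of_range = {}
    for metric, (low, high) in targets.items():
        value = actuals[metric]
        if value < low:
            out_of_range[metric] = value - low   # below the healthy range
        elif value > high:
            out_of_range[metric] = value - high  # above the healthy range
    return out_of_range

for metric, delta in review_deltas(target_ranges, actuals).items():
    print(f"{metric}: {delta:+g} outside the healthy range")
```

The output gives the team the “what” (which metrics drifted, and by how much); the retrospective discussion supplies the “why.”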
The industry benchmarks are also useful for keeping a pulse on team trends and reporting on progress over time.
You can point to industry benchmarks when communicating to senior leaders about progress as they serve as an easy-to-understand baseline. Couple this information with the team’s historical trends to regularly report on progress, or to support your narrative when making a specific ask.
The recently released industry benchmarks provide visibility into how other organizations are leveraging the code review process. For a deep dive on the benchmarks and their definitions, read our help docs, or reach out to us at firstname.lastname@example.org.