When customers ask about leading indicators or “windows” into what’s going on in the code review process, our team often points them towards the Review & Collaboration report—it has a particularly interesting set of health indicators that provide visibility into how the team works together.
Specifically, there are two narratives that we can visualize with this report that are both insightful and actionable to most managers:
- Speed of feedback, which asks, “how quickly are we handling our own work and our teammates’ work in the review process?”, and
- Thoroughness of review, which asks, “are we successfully using the review process to improve the quality of the code and collaborate on improved solutions to problems?”
Together, these two narratives provide a broader window into the collaboration and culture within the team. Today, we’ll focus on the metrics you can use within the Review & Collaboration report to help you understand a team’s speed of feedback and thoroughness of review.
Speed of Feedback
When it takes team members hours or even days to provide feedback or answer questions in the review process, that can mean the team spends a good amount of time in wait states. It can also mean that communication isn’t timely enough to be as effective as it could be. Often, when it takes a long time for someone to respond to us, we’ve already forgotten what the discussion was about and have to get back into that mindset before we can reengage in the conversation.
So as a manager, we generally want to see that the team isn’t spending a lot of time in wait states and that team members are helping each other move their work forward. We can measure this by looking primarily at Responsiveness in the Submitter metrics and Reaction Time in the Reviewer metrics. Some quick definitions:
- Responsiveness is the time in hours that it takes the Submitter to respond to feedback (with either a comment or a code revision) that has been provided on their pull request.
- Reaction Time is the mirror image of Responsiveness: it’s the time it takes for the pull request Reviewer to respond to a comment addressed to them.
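Both metrics boil down to the elapsed time between a comment and the reply to it. As a rough illustration of how that could be computed from event timestamps, here is a minimal sketch; the timestamps and function names are hypothetical, not the report’s actual implementation:

```python
from datetime import datetime

def hours_between(earlier: str, later: str) -> float:
    """Elapsed hours between two ISO-8601 timestamps."""
    t0 = datetime.fromisoformat(earlier)
    t1 = datetime.fromisoformat(later)
    return (t1 - t0).total_seconds() / 3600

def responsiveness(feedback_at: str, submitter_reply_at: str) -> float:
    """Hours the Submitter took to answer feedback (comment or code revision)."""
    return hours_between(feedback_at, submitter_reply_at)

def reaction_time(comment_at: str, reviewer_reply_at: str) -> float:
    """Hours the Reviewer took to answer a comment addressed to them."""
    return hours_between(comment_at, reviewer_reply_at)

# Feedback left Friday at 16:00, revision pushed Monday at 09:30:
print(round(responsiveness("2024-05-03T16:00", "2024-05-06T09:30"), 1))  # 65.5
```

The weekend example above is a reminder that raw wall-clock hours can overstate delays; some teams prefer to measure only working hours.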
There are a number of reasons why it might take someone a while to respond to a teammate—perhaps the original comment or question was submitted at the end of the day on a Friday, or maybe the discussion led to confusion or disagreement that was followed by a period of silence. In any case, it can be helpful for the manager to set expectations around timely responses in the review process so the team knows that way of working is valuable and important.
Just communicating this value and bringing it to the team’s awareness can have a substantial impact on Responsiveness and Reaction Time, which results in a shorter Time to Resolve.
Thoroughness of Review
In addition to timely feedback, we generally want to see the team using the review process to provide substantial feedback that leads to higher quality work and improved solutions to problems. Managers can visualize this narrative with the help of two metrics: Unreviewed PRs and Influence.
Unreviewed PRs shows just that—the number of pull requests that were opened and then merged without ever receiving a comment or an approval. Most customers tend to agree that this number should be close to zero, unless there’s a very specific workflow or edge case where it makes sense. Unreviewed PRs is especially helpful for risk mitigation: we want to get a second pair of eyes on the code that our customers end up interacting with.
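In spirit, the metric is a simple filter over merged pull requests. Here’s a small sketch of that idea, assuming a hypothetical record shape (`merged`, `comments`, `approvals`)—your PR data will look different depending on the tool:

```python
def unreviewed_prs(pull_requests: list[dict]) -> int:
    """Count PRs that were merged with neither a comment nor an approval."""
    return sum(
        1 for pr in pull_requests
        if pr["merged"] and pr["comments"] == 0 and pr["approvals"] == 0
    )

prs = [
    {"merged": True,  "comments": 2, "approvals": 1},  # reviewed normally
    {"merged": True,  "comments": 0, "approvals": 0},  # slipped through unreviewed
    {"merged": False, "comments": 0, "approvals": 0},  # still open; not counted
]
print(unreviewed_prs(prs))  # 1
```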
Influence is the ratio of follow-on commits made after the Reviewer commented. This metric helps us understand how common it is for a Reviewer’s feedback to drive change and improvements in the code.
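One way to picture “ratio of follow-on commits” is to compare commit timestamps against the Reviewer’s first comment. This is only an illustrative sketch of that definition—the timestamps and inputs are made up, and the real report may compute it differently:

```python
def influence(commit_times: list[str], first_review_comment_at: str) -> float:
    """Fraction of a PR's commits pushed after the Reviewer's first comment.

    Timestamps are ISO-8601 strings, which compare correctly as plain strings
    when they share the same format.
    """
    if not commit_times:
        return 0.0
    follow_on = [t for t in commit_times if t > first_review_comment_at]
    return len(follow_on) / len(commit_times)

commits = [
    "2024-05-01T10:00",  # initial work
    "2024-05-02T11:00",
    "2024-05-02T15:00",  # after the review comment below
    "2024-05-03T09:00",
]
print(influence(commits, "2024-05-02T12:00"))  # 0.5
```

A high ratio suggests the Reviewer’s feedback routinely drives revisions; a near-zero ratio may mean reviews are rubber stamps.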
While it’s a powerful metric at the high level, we often see customers using it to spot silent influencers. You might have a subset of people who are more vocal in meetings, and you might see those people as some of the more influential reviewers in the pull request process as well. But sometimes you start to see team members who are unexpectedly influential in the review process—maybe they’re just typically not as vocal—yet their peers clearly respect and act on their feedback. So it can be powerful as a manager to identify silent influencers you were previously unaware of.
We’ve found that when it comes to creating positive change or adjusting processes in a way that benefits the team as a whole, it’s best to just focus on one or two concepts at a time. By continuing to invest in speed of feedback and thoroughness of review, you will find yourself solidly on the path to an organization that promotes healthy collaboration and shares values that make for an engaged, productive workplace.
If you’ve already been tracking these two narratives with your team and would like to learn about some additional ways to support your team with data, reach out in the chat below or email us at firstname.lastname@example.org.