Measuring performance of self-organizing product teams

At my workplace we have several product teams. Management expressed a clear wish to make team performance visible during Sprint Reviews. The natural response was to present Scrum-related metrics, like velocity (story points committed versus delivered).
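To make this concrete, here is a minimal sketch (in Python, with made-up sprint data) of how velocity is typically calculated as committed versus delivered story points per sprint:

```python
from dataclasses import dataclass

@dataclass
class Sprint:
    name: str
    committed_points: int   # story points planned at sprint start
    delivered_points: int   # story points accepted at the Sprint Review

# Hypothetical sprint data, for illustration only
sprints = [
    Sprint("Sprint 41", committed_points=30, delivered_points=28),
    Sprint("Sprint 42", committed_points=32, delivered_points=31),
    Sprint("Sprint 43", committed_points=30, delivered_points=30),
]

for s in sprints:
    ratio = s.delivered_points / s.committed_points
    print(f"{s.name}: {s.delivered_points}/{s.committed_points} points delivered ({ratio:.0%})")

average_velocity = sum(s.delivered_points for s in sprints) / len(sprints)
print(f"Average velocity: {average_velocity:.1f} points per sprint")
```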

These general Scrum metrics feel a bit off to me. Take velocity, for example. What does it tell us? Say we have a product which receives 10 new bug reports every week, and every sprint we fix these bugs. Our velocity would look pretty good. But does this mean our team and product are doing well?

Let's take another example. John Cutler recently published an excellent post about signs of working in a Feature Factory. Say our team is focused solely on shipping new features and neglects technical debt. The velocity metric would still indicate everything is fine, as story points are being delivered. But if we keep accumulating more and more technical debt, is our team really performing well?

If velocity doesn't actually tell us how we are performing, what does it tell us? The one thing I can think of is that it tells us the team is busy. And being busy doesn't say anything by itself. We want to know whether we are actually performing and bringing value. This brings us to our main question.

What are good measurements of team and product performance?


We can do a bit of reverse engineering to arrive at these measurements. Let's start with our main goal: we want to build a product that solves our customer's problem(s) in an efficient way, resulting in happy end-users.

So what makes customers (un)happy?

  • When a product is unavailable, users won't be happy. So a high uptime would make them, or at least keep them, happy.
  • When things don't work as expected, users won't be happy. So a low bug report count indicates that things work as expected.
  • When a bug is reported, users will be happy if it's quickly fixed. So the time from initial bug report to fix in production should be as low as possible (see the sketch after this list).
  • Getting exceptions is quite frustrating for users. So the number of exceptions should also be as low as possible.
  • When the application feels slow, users will be unhappy. So response times of web requests should be as low as possible.
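As a minimal sketch, assuming we can export bug reports with a reported and fixed timestamp, and response times from our monitoring tool (the data below is made up), a couple of these metrics could be computed like this:

```python
from datetime import datetime
from statistics import mean, quantiles

# Hypothetical bug report export: when reported, when the fix reached production
bug_reports = [
    {"reported": datetime(2019, 3, 1, 9, 0),  "fixed": datetime(2019, 3, 2, 14, 0)},
    {"reported": datetime(2019, 3, 4, 11, 0), "fixed": datetime(2019, 3, 4, 16, 30)},
    {"reported": datetime(2019, 3, 6, 8, 15), "fixed": datetime(2019, 3, 8, 10, 0)},
]

# Time from initial report to fix in production, in hours
fix_times = [(b["fixed"] - b["reported"]).total_seconds() / 3600 for b in bug_reports]
print(f"Bugs reported: {len(bug_reports)}")
print(f"Mean time to fix: {mean(fix_times):.1f} hours")

# Hypothetical response time samples (milliseconds) from a monitoring export
response_times_ms = [120, 95, 180, 210, 150, 890, 130, 160, 140, 175]
p95 = quantiles(response_times_ms, n=20)[-1]  # 95th percentile
print(f"p95 response time: {p95:.0f} ms")
```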

We could also simply ask our end-users what they think of the product. One way would be to send out a survey every once in a while, asking them to rate (1 to 10) certain aspects of our application, similar to an NPS survey. We can think of questions like:

  • How do you rate your productivity within the product?
  • How do you rate the usefulness of our product?
  • How do you rate our customer support?

By tracking the average of these scores over time and plotting them on a graph, we can easily see whether we are improving as a team.
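As a rough sketch (the survey rounds and scores below are made up), tracking those averages per survey round could look like this:

```python
from statistics import mean

# Hypothetical survey results: one list of 1-10 scores per question, per survey round
survey_rounds = {
    "2019-Q1": {"productivity": [7, 8, 6, 7], "usefulness": [8, 8, 7, 9], "support": [6, 7, 7, 6]},
    "2019-Q2": {"productivity": [7, 8, 8, 8], "usefulness": [8, 9, 8, 9], "support": [7, 7, 8, 7]},
}

# Average score per question per round; plot these points to see the trend over time
for round_name, answers in survey_rounds.items():
    averages = {question: round(mean(scores), 1) for question, scores in answers.items()}
    print(round_name, averages)
```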

Technical indicators

Considering we are talking about tech teams, we can also ask ourselves which technical indicators tell us how we are performing.

  • Tests prevent regressions, so test coverage should be sufficient.
  • Slow tests kill productivity, so the time to run the full test suite should be kept within an acceptable range.
  • The same goes for waiting on peer reviews, so we could measure the average time between when a PR is opened and when it's merged (see the sketch after this list).
  • Or what about measuring deployment duration? Just as with tests, we want to keep waiting times for developers as low as possible.
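For example, the average time between opening and merging a PR could be tracked with a sketch like this (the pull request data is hypothetical; in practice you would export it from your Git hosting tool):

```python
from datetime import datetime
from statistics import mean

# Hypothetical pull request export: when opened and when merged
pull_requests = [
    {"opened": datetime(2019, 3, 4, 10, 0), "merged": datetime(2019, 3, 4, 15, 30)},
    {"opened": datetime(2019, 3, 5, 9, 0),  "merged": datetime(2019, 3, 6, 11, 0)},
    {"opened": datetime(2019, 3, 7, 14, 0), "merged": datetime(2019, 3, 7, 16, 45)},
]

# Time from PR opened to merged, in hours
review_times_hours = [
    (pr["merged"] - pr["opened"]).total_seconds() / 3600 for pr in pull_requests
]
print(f"Average time from PR opened to merged: {mean(review_times_hours):.1f} hours")
```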

And finally, just like we ask our end-users, we can also survey our own team and ask them to rate various aspects:

  • How do you rate your productivity?
  • How do you rate the productivity of your team?
  • How do you rate your morale?
  • How do you rate the morale of your team?

Conclusion

In this post we explored various ways to measure the performance of product teams. I hope this challenges you and your team to think beyond the standard Scrum metrics. Collecting the right metrics gives you valuable insight into your team's performance and triggers healthy discussions on how and where to improve.

Do you have any thoughts on this subject? Feel free to reply to this tweet.

Big thanks to Sander Lissenburg and Ferenc Szeli for proofreading this article.