Understanding KPIs: Tools for Improvement, Not Punishment

Chapter 1: The Purpose of KPIs

In the world of business management, the adage often attributed to Peter Drucker, "What gets measured gets managed," carries significant weight, particularly when it comes to key performance indicators (KPIs). These metrics are essential across diverse sectors, including software development.

Nevertheless, KPIs have faced pushback from some developers who view them negatively, often resorting to manipulation to create a facade of success. This undermines the true purpose of KPIs and leads to the formulation of intricate metrics in a misguided effort to prevent such manipulation. This, in turn, fuels ongoing debates about what constitutes a valuable KPI.

What drives this transition from straightforward to convoluted KPIs? Is it a fundamental issue with the metrics themselves, or is it the result of their misapplication? Below, we explore three real-world scenarios in software development that highlight these challenges.

Section 1.1: Case Study 1 - Server Uptime

The Backend for Frontend (BFF) service sits directly behind our frontend as the interface users ultimately depend on, acting as a proxy and aggregator for various content services. To evaluate the BFF's performance in terms of user experience, we focus on its uptime, aiming for 100% availability.

However, the BFF's reported uptime sometimes dips below this ideal, raising alarms within the BFF development team. Upon investigation, it becomes clear that the BFF is being marked as down even though the root cause lies upstream: the services it depends on are experiencing outages due to maintenance or other issues.

The BFF team fears that this could unjustly impact their performance metrics. They propose a change to how we measure KPIs, suggesting that BFF downtime caused by upstream services—despite affecting users—should not reflect on the BFF itself.

While this adjustment might provide a clearer picture of the BFF service's performance from the team's standpoint, it complicates the user experience assessment and deviates from the KPI's original intent: to measure end-user satisfaction.

By helping the BFF team understand that uptime metrics primarily aim to assess user experience rather than team performance, we alleviate some of the tension. They ultimately agree to retain the current uptime measurement, which takes into account the influence of upstream service outages.
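The difference between the two ways of measuring uptime can be made concrete with a small sketch. The record type, field names, and probe model below are illustrative assumptions, not the team's actual monitoring setup; the point is only that the user-facing number counts every failure, while the team's proposed adjustment would quietly exclude upstream-caused ones.

```python
from dataclasses import dataclass

@dataclass
class HealthCheck:
    """One probe of the BFF (hypothetical model of a monitoring sample)."""
    ok: bool              # did the BFF answer the probe successfully?
    upstream_fault: bool  # if it failed, was an upstream outage the cause?

def user_facing_uptime(checks: list[HealthCheck]) -> float:
    """Uptime as users experience it: every failed probe is downtime,
    regardless of whose fault it was. This is the KPI's original intent."""
    if not checks:
        return 100.0
    up = sum(1 for c in checks if c.ok)
    return 100.0 * up / len(checks)

def team_adjusted_uptime(checks: list[HealthCheck]) -> float:
    """The BFF team's proposed adjustment: probes that failed only
    because an upstream dependency was down are excluded entirely."""
    relevant = [c for c in checks if c.ok or not c.upstream_fault]
    if not relevant:
        return 100.0
    up = sum(1 for c in relevant if c.ok)
    return 100.0 * up / len(relevant)
```

The adjusted figure is always at least as high as the user-facing one, which is exactly why it is tempting for the team and misleading for the organization.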

Section 1.2: Case Study 2 - System Health Status

In the maintenance of software systems, various metrics can be adopted to assess their overall health. Examples include:

  • What is the test coverage of the system?
  • Are all system dependencies updated?
  • Is the monthly uptime meeting the required criteria?
  • Is the system reliant on outdated frameworks?

When a system fails to meet certain criteria, it is labeled as "UNWELL." Consequently, there are KPIs aimed at reducing the number of systems with this designation. The transition from "WELL" to "UNWELL" is viewed unfavorably, prompting teams to strive to avoid it.
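A WELL/UNWELL classification like this typically boils down to a conjunction of objective checks. The sketch below is a minimal illustration; the thresholds (80% coverage, 99.9% monthly uptime) and field names are assumptions for the example, not the organization's real criteria.

```python
def health_status(system: dict) -> str:
    """Label a system WELL or UNWELL from a few objective checks.
    Thresholds here are assumed values for illustration only."""
    checks = [
        system["test_coverage"] >= 0.80,       # assumed coverage bar
        system["dependencies_up_to_date"],     # all dependencies current?
        system["monthly_uptime"] >= 0.999,     # assumed uptime bar
        not system["uses_deprecated_framework"],
    ]
    # One failed check is enough to flip the label.
    return "WELL" if all(checks) else "UNWELL"
```

Note how the last check makes the label hostage to how the framework itself is classified: keeping the framework off the deprecated list keeps every dependent system "WELL," which is precisely the incentive problem described below.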

While the intention behind these metrics is commendable—promoting excellence—the fear of being labeled as "UNWELL" can lead to unintended manipulation of KPIs. For instance, a system dependent on a now-neglected proprietary framework may still be rated "WELL" simply to avoid negative perceptions.

Here's the dilemma: if the framework's status remains "HOLD," systems depending on it retain a "WELL" rating, but teams may never allocate time to migrate away from it. Conversely, changing the status to "STOP" signals the urgency to migrate, yet risks labeling those systems as "UNWELL," an outcome teams dread.

In such contexts, the temptation to keep the framework categorized as "HOLD" is strong, as it preserves a positive KPI for the systems, potentially harming the organization in the long run.

If management views the change in KPI status not as a reflection of poor team performance but as a necessary step for improving system health, teams may be more inclined to classify the framework as "STOP," even if it negatively impacts KPIs.

Section 1.3: Case Study 3 - Lines of Code

What about measuring programming through Lines of Code (LOC)? Once a popular metric, it has become outdated and criticized for its simplicity. Bill Gates famously remarked that measuring programming progress by LOC is akin to gauging aircraft manufacturing by weight, underscoring its limitations.

While LOC can be an ineffective measure of productivity, there are scenarios where it can still offer insights. For example, during a migration from a monolithic application to a modular architecture, it becomes essential to assess progress incrementally.

Initially, we tracked the number of Pull Requests (PRs) related to the monolithic code base, hoping to see a decline over time. However, we discovered that PR counts could be misleading, especially early in the process, as teams might still be working within the monolith while also developing new features externally.

To better gauge progress, we switched to measuring LOC. While it doesn't accurately reflect developer productivity or the app's significance, it serves as a reliable indicator of the monolith's gradual reduction over time.

Teams are not pressured to cut LOC but are encouraged to decompose the monolith based on business priorities, making LOC a natural metric to track our strategic shift toward modularization.
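Tracking the monolith's shrinkage this way needs nothing more than a periodic line count over its source tree. The sketch below is one simple way to do it; the file suffixes and the choice to skip blank lines are assumptions, and a real setup would record the number on a schedule (e.g. in CI) to plot the trend.

```python
from pathlib import Path

def monolith_loc(root: str, suffixes: tuple[str, ...] = (".py",)) -> int:
    """Count non-blank lines of source under the monolith's tree.
    A crude measure of productivity, but a steadily falling total is a
    reliable signal that the decomposition is actually happening."""
    total = 0
    for path in Path(root).rglob("*"):
        if path.is_file() and path.suffix in suffixes:
            with open(path, errors="ignore") as f:
                total += sum(1 for line in f if line.strip())
    return total
```

Because no one's performance is judged on the number, there is little incentive to game it; it simply records how much of the monolith remains.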

KPIs Should Guide Progress, Not Punish Performance

The examples above illustrate that KPIs are designed with good intentions. They offer insights into performance, highlight areas for enhancement, and showcase overall progress. However, they should not be used to evaluate individual or team performance.

KPIs must serve to measure system performance, not to judge personal achievements. Unfortunately, management can sometimes misuse these metrics to assess performance, leading teams to manipulate KPIs to maintain their standing.

Evaluating a developer's performance is complex and cannot be reduced to mere numbers. Effective people management requires a nuanced understanding that goes beyond metrics and processes.
