Measuring app stability to reduce technical debt

The impact of stability scores and sharing responsibility
November 30, 2020

In the fast-paced world of software and application development, one reality holds true: errors are inevitable.

Even when you try to minimize errors as much as possible, you’ll eventually let some bugs slide in order to get your app, or new features, to market faster.

This concept is called technical debt. Everyone has it. It’s a fact of life for developers and they understand how their daily efforts contribute to the problem. Any time a product team rushes new features to market or wants an incremental code change to make a customer happy, technical debt increases. The same holds true when software frameworks and languages aren’t upgraded in a timely fashion because executives don’t want to slow down development.

In some ways, technical debt is like financial debt. You may need to carry debt in the short-term, but it causes major problems (i.e. weaker software) if you keep letting it rack up over the long-term. As technical debt builds up, it slows down the creation and maintenance of new product work within a codebase.

When this happens, it causes a significant emotional ‘drag’ on developers. Why? Because developers often feel ill-equipped to explain the impact of technical debt and find it difficult to get the broader organizational support to address it. This creates a vicious cycle, leading to increasing levels of frustration, lost productivity, and disengagement from their projects.

Oftentimes, technical debt is viewed as an ‘engineering problem’ that blocks money-making activities, such as building new features and pleasing customers. Developers rarely succeed at winning over advocates outside their department because they lack the right tools to demonstrate the problems that technical debt causes.

Putting the focus on measuring stability

By bringing the concept of stability into the conversation, developers can raise awareness and expand the dialogue about technical debt. Organizations must aim for a balance between delivering a robust product roadmap and maintaining a healthy and evolving codebase. That equilibrium is impossible to achieve if engineering, application, and product teams don’t have a method to openly and regularly discuss and agree upon the impact of technical debt.

That impact becomes easy to see once technical debt is framed as the measurable drag in your codebase: by measuring that drag, you can determine how stable your software is.

Assessing stability is much like how infrastructure and operations teams rely on the ‘five nines’ to track availability, measure uptime, and conform to Service-Level Agreements (SLAs). Software stability can be calculated in two ways using real-time error, session, and daily active user data: first, as the percentage of successful application sessions, and second, as the percentage of daily active users who do not experience an error. These percentages act as stability scores that demonstrate how stable each software release is.

Simply put, when customers enjoy error-free interactions with an application, stability scores are high. If bugs cause disruptions or crashes, stability scores are low.
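As a rough sketch of the two calculations described above (the function names and example counts are illustrative, not any particular product’s API), both scores boil down to simple ratios:

def session_stability(total_sessions: int, errored_sessions: int) -> float:
    # Percentage of application sessions that completed without an error.
    if total_sessions == 0:
        return 100.0
    return 100.0 * (total_sessions - errored_sessions) / total_sessions

def user_stability(daily_active_users: int, users_with_errors: int) -> float:
    # Percentage of daily active users who did not experience an error.
    if daily_active_users == 0:
        return 100.0
    return 100.0 * (daily_active_users - users_with_errors) / daily_active_users

# Example: 120 errored sessions out of 50,000, and 80 affected users out of 9,500.
print(f"Session stability: {session_stability(50_000, 120):.2f}%")  # 99.76%
print(f"User stability: {user_stability(9_500, 80):.2f}%")  # 99.16%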

From technical debt to business value

Customers don’t have the patience for software that doesn’t work properly. In fact, 80% will only retry an app once or twice before moving on. Because stability scores provide direct insight into the actual impact of bugs on the user experience, organizations can better understand how reducing technical debt translates into stronger business value.

More importantly, measurable results remove the burden of technical debt from solely being on the engineering team. Now it becomes a metric that the entire organization can view and address collectively on a regular basis.

When cross-functional teams measure and communicate in the same language about technical debt, they can determine when and how to address it. Here are some questions to ask when establishing your organization’s approach:

● What is our target stability?

● Are our stability scores for each release above our target?

● If any stability scores are below our target, which bugs make the most sense to fix first? Do we fix bugs that impact a key customer, or do we focus on bugs that are impacting many customers?

● What target stability scores can we realistically set for future releases?

● How many bugs are too many bugs?

When stability scores are used, these questions become discussion points rather than sources of frustration. By reframing the conversation, teams move away from lamenting about annoyed customers (downside protection) to a joint focus on developing features faster and removing the drag of technical debt (upside generation). Teams that adopt stability as a KPI enable technical debt to be rolled into the engineering team’s goals through stability scores, which creates accountability from top to bottom.
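To make the target comparison concrete, here is a minimal sketch of how a team might flag releases that fall below an agreed stability target. The release names, scores, and the 99.5% target are invented for illustration:

# Hypothetical per-release stability scores (percent of error-free sessions).
release_stability = {
    "v2.3.0": 99.82,
    "v2.4.0": 99.41,
    "v2.5.0": 99.90,
}

TARGET_STABILITY = 99.5  # the cross-functional target agreed upon for this example

for release, score in release_stability.items():
    if score >= TARGET_STABILITY:
        print(f"{release}: {score:.2f}% (above target, keep shipping features)")
    else:
        print(f"{release}: {score:.2f}% (below target, prioritize bug fixes)")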

One of the oldest software development questions is ‘Should we fix bugs or build new features?’ With tools that aid in stability management and error monitoring by delivering metrics and analytics, organizations can more easily determine what their focus should be in order to meet business goals while advancing their software.

Bugs are the consequence of innovation. To move forward, you need to create bugs (and lots of them). But you also need methodologies in place that address the existence of bugs in a timely fashion. After all, the question is never if technical debt will impact your software, but when and how badly. Stability scores provide a quick and easy answer.

Everyone in an organization has a hand in building technical debt with business requests, product requirements, and customer needs. Stability scores enable the entire organization to share the responsibility of deciding when to address that technical debt.
