Application quality

You can divide software quality into code quality and application quality. Code analysis tools like SonarQube and Microsoft Code Analysis ensure that the written code keeps a high standard, while application quality tools measure how your users experience your application.

You can compare code analysis tools to the spell checker you use when writing documents: it makes sure that you have no spelling errors and that your grammar and sentence structure are correct.

Your story may have correct grammar and no spelling errors, and still fall flat in the reader's eyes. Application quality tools detect and analyze how your readers experience your book.

This page explains how you, as a manager, can make sure that not too many bugs go into production.

Bug reports as a quality indicator

Traditionally, bugs in production are found in two ways: through bug reports filed by users, and by repeatedly scanning the log files for errors.

The problem is that very few users, only around 1%, report the errors they find. When was the last time you reported an error to a software vendor?

Therefore, counting the bugs users have reported is not an accurate way to measure application quality. Too many errors go unreported because they are not important enough to the user, even though they might be vital for your team.
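The log-scanning approach mentioned above can be sketched as follows. This is a minimal illustration, assuming one log entry per line with an ERROR severity marker; real log formats and monitoring tools vary.

```typescript
// Minimal sketch: count error entries in application log text,
// assuming one entry per line with a severity marker like "ERROR".
function countErrors(logText: string): number {
  return logText
    .split("\n")
    .filter((line) => line.includes("ERROR"))
    .length;
}

// Illustrative sample log (fabricated entries for the example only):
const log = [
  "2024-01-05 10:01 INFO  User logged in",
  "2024-01-05 10:02 ERROR Failed to load account 42",
  "2024-01-05 10:03 ERROR Unhandled exception in saveDiscount",
].join("\n");

console.log(countErrors(log)); // 2
```

The catch, of course, is that this only finds errors that were logged; bugs that silently produce wrong results never show up in such a scan.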

Cyclomatic complexity

The simple function below requires four tests to ensure that it works according to the requirements, whether the tests are automated or performed manually by a tester.

It's quite common that only one test case is written for a function, covering only the successful case. When the failure paths are untested, the code may continue running with invalid data, and those errors can be tough to debug.

The important aspect of cyclomatic complexity isn't as a test indicator, but as a comprehension indicator. The higher the complexity, the harder it is for a developer to keep track of all the different paths the code can take. Complex code is much more prone to errors.

Your team should strive to keep the cyclomatic complexity of each function at 20 or below, as the risk of bugs increases sharply above that.

void saveDiscount(User accountOwner, string id, decimal discount)
{
    if (id == null)
        throw new ArgumentNullException(nameof(id), "You must specify an account id.");
    if (accountOwner == null)
        throw new ArgumentNullException(nameof(accountOwner), "You must specify an account owner.");

    var account = LoadAccount(id);
    if (account == null)
        throw new InvalidDataException("Failed to load account " + id);

    account.Discount = discount;
}
Test coverage

Test coverage measures how large a part of your code is covered by unit tests (code tests).

Measuring test coverage is vital to get an indication of how well tested your application is. The higher the coverage, the less likely it is that previously built features stop working.

A common goal is to cover 80% of the code base with tests, which means that 20% of the code is untested.

What's important is not the goal itself (like the 80% above), but that the coverage isn't decreasing over time. Therefore, measure it at the end of every sprint to safeguard your application quality.

Test coverage is a metric supported by most development tools.
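As a minimal illustration of what such tools measure, the sketch below instruments a function by hand and reports which branches the tests exercised. The function and counters are illustrative assumptions; real coverage tools (for example Istanbul for JavaScript or coverlet for .NET) instrument the code automatically.

```typescript
// Hand-instrumented sketch of branch coverage measurement.
// Real coverage tools insert these counters automatically.
const hit = { memberBranch: false, regularBranch: false };

function applyDiscount(price: number, isMember: boolean): number {
  if (isMember) {
    hit.memberBranch = true;
    return price * 0.9; // 10% member discount
  }
  hit.regularBranch = true;
  return price;
}

// A single "happy path" test exercises only one of the two branches:
applyDiscount(100, true);

const covered = [hit.memberBranch, hit.regularBranch].filter(Boolean).length;
console.log(`branch coverage: ${(covered / 2) * 100}%`); // 50% with one test
```

With only the happy-path test, half the function is unverified, which is exactly the situation the 80% goal above is meant to prevent.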


There are many ways to measure your application quality. In our experience, the metrics above are the ones with the most considerable effect on application quality (as opposed to code quality). It is even possible to have low code quality and high application quality if you manage complexity and test coverage.

High complexity combined with low (or unmeasured) test coverage is the worst combination. In that case, you likely have errors in your production environment that you are not aware of.

Having low cyclomatic complexity combined with high test coverage is the best way to maintain high application quality over time.

Try our bug estimator to learn what you are missing out on.
