
Normally, testing is just about finding bugs—“defect removal”—which is worth doing. But testing can also be about increasing quality in the eyes of the only people whose opinion matters: your users.

There are at least three criteria for measuring software quality: fitness for purpose, reliable delivery and maintainability. This post is about the first one.

The bad news: The only opinion that matters on “fitness for purpose” is the opinion of the people who have to use the software every day as part of their jobs. If you disagree, remember that when you (as a user) interact with any piece of software, that’s your definition of “fitness for purpose,” too.

And, while bugs aren’t the only measure of fitness for purpose, they’re the part that annoys users the most.

The Cost of Bugs

For example, the top reason users delete an app is that it crashed, froze or just displayed an error message (that’s 61% of all users). While deleting is pretty aggressive, Global App Testing reports that almost half of all users are less likely to even use an app again if it performs poorly.

It gets worse: The user’s poor opinion isn’t restricted to the app. Almost 90% of Americans will have a negative opinion of the brand associated with a poorly performing app; almost 80% of users are less likely to purchase from a site they think performs poorly. We shouldn’t be surprised: If you find a cockroach in your hotel room, you don’t say, “Look, there’s a cockroach.” Instead, you say, “This place is infested!” and move to another hotel.

If you think you’re immune because you create “internal apps” used only by employees of your organization … well, you’re wrong. According to a G2 survey, half of all employees are unhappy at work precisely because of the software they have to use. It’s so bad that 25% of employees consider leaving their jobs because of bad software.

Even if you don’t care what users think, bugs are expensive, and probably more expensive than you think: the cost of a bug in production is generally underestimated.

At one petrochemical company, a “regular bug” in an application (usually triggered by a summer intern using the application for the first time) was assumed to affect one department and cost an hour of time to correct. When an enterprising developer actually investigated the effort involved to clean up after the bug, it turned out that recovering from the bug rippled through three departments and took 8-10 hours to fix—an 800% error in estimating the cost of a bug that everyone knew about and that occurred regularly.

So, we also shouldn’t be surprised that an IDC Survey found that an hour of downtime costs small businesses a minimum of $8,000 (for Fortune 500 companies, the cost is over $100,000 an hour).

And the costs aren’t limited to out-of-pocket expenses: Parasoft found that companies that report software glitches lose an average of $2.3 billion in shareholder value (when Provident Software reported a bug that caused it to collect less than half of its loans on time, its stock price tumbled to about a quarter of the shares’ pre-announcement value and the CEO resigned).

And, while bugs are more expensive and their impact more far-reaching than you think, the cost of fixing a bug is usually relatively small. That “regular bug,” for example, was eliminated by adding an edit (an input validation check) to the program, which took less than two hours to code, test and release.
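As a sketch of what that kind of edit might look like—together with the automated regression test that keeps the bug from quietly returning—here’s a minimal example. All of the names, the data involved and the bad input format are invented for illustration; the original article doesn’t say what the actual bug was.

```python
# Hypothetical "edit" of the kind that eliminated the regular bug:
# reject malformed input at the door, before it ripples into
# downstream departments.
from datetime import date

def parse_reading_date(text: str) -> date:
    """Accept only ISO dates (YYYY-MM-DD); raise ValueError otherwise."""
    try:
        return date.fromisoformat(text)
    except ValueError:
        raise ValueError(
            f"Invalid reading date: {text!r} (expected YYYY-MM-DD)"
        )

# A regression test that runs on every build pins the fix in place.
def test_bad_date_is_rejected_at_the_door():
    try:
        parse_reading_date("31/07/2025")  # the format a new user might type
    except ValueError:
        pass  # rejected up front: the cheap outcome
    else:
        raise AssertionError("bad date should have been rejected")
```

The point isn’t the date check itself—it’s that a two-hour fix plus a permanent automated test is far cheaper than an 8-10 hour, three-department cleanup every time the bug recurs.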

Testing To Increase Quality

This is, of course, why we do testing: to find bugs. But, presumably, “fitness for purpose” means more than “not blowing up” and something more like “meeting the user’s needs.” Unfortunately, the standard definition of testing only gets you to “removing defects.” That’s usually because QA is involved too late in the process.

If your QA people are involved as early as possible—if the “definition of done” is developed with QA at the table with users and developers—then QA can actually add quality. For example, early involvement enables QA to develop the tests that prove the “definition of done,” so that those tests reflect what matters to users, the only group whose opinion matters. Furthermore, as QA starts doing exploratory testing, they’re positioned to do more than “bug hunt” and can actively investigate users’ issues.
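To make that concrete, a “definition of done” test is just a user-stated rule expressed as an executable check. The sketch below assumes a hypothetical order-entry rule (“orders of $50 or more ship free”); the function name, the threshold and the rule itself are all invented—your versions come from your own users.

```python
# Hypothetical business rule stated by users during "definition of done"
# discussions: orders of $50 or more ship free; smaller orders pay $7.99.
def shipping_cost(order_total: float) -> float:
    """Return the shipping charge for an order (assumed rule)."""
    return 0.0 if order_total >= 50.0 else 7.99

# Each test states one rule in the users' own terms, so users can read
# (and challenge) exactly what "done" means.
def test_orders_of_fifty_or_more_ship_free():
    assert shipping_cost(50.0) == 0.0

def test_smaller_orders_pay_the_flat_rate():
    assert shipping_cost(49.99) == 7.99
```

Because each test names a rule the users articulated, a failing test is a conversation with users, not just a defect report.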

But the real payoff from having QA at the table with users from the beginning comes from enabling QA to involve users in the testing process:

  • If users can participate in developing tests (using test recorders, for example), then you get tests whose results users actually care about.
  • As the application changes because of users’ feedback from testing, users start to assume ownership of the application: This is something they helped create, not something inflicted on them.
  • If the users involved in testing are well regarded in the user community, those users function as champions for the application and help drive acceptance.
  • By seeding the environment with people who’ve been testing the application and, as a result, know how the application works, training costs are reduced.

Finally, as a side effect, the perception of bugs in production changes. When users see the bug count diminish over the development period, a bug that makes it into production is seen as a “residual bug” from a much larger number that those users helped eliminate.

Of course, all of this is only possible if QA has time to work with users and, as we all know, QA already doesn’t have enough time or budget to do all the testing we want. Automated testing helps here by freeing up QA resources from regression testing (and positions QA to, potentially, regression test everything). If those resources are then used to actually “increase quality” in the eyes of your users, then testing becomes a value-added activity.

In other words: Testing moves from “something we have to do” to “something we want to do.”

Next up, you may want to read about other criteria for measuring software quality: reliable delivery and maintainability.

About the Author

Peter Vogel

Peter Vogel is a system architect and principal in PH&V Information Services. PH&V provides full-stack consulting from UX design through object modeling to database design. Peter also writes courses and teaches for Learning Tree International.
