There are 10 laws that drive everything Peter knows about testing. They're not always pretty (and they're not always kind), but they are always true.
I have opinions on testing, driven by these 10 laws. I guess I don't know that these laws are truly “immutable,” but they’ve been true for the last 80 years or so, and they’ll continue to be true as long as users keep wanting more functionality and developers keep delivering it using code. So: Close enough for me.
But there is a way to change many of these laws.
Bugs aren’t a measure of quality (that’s measured by things like fitness for purpose, reliable delivery, cost and other stuff). But bugs are what annoy our users most. If you don’t believe me, consider this: over 60% of users delete an app if it freezes, crashes or displays an error message. Cue P!nk.
We all know where bugs come from: Developers writing code (enabled by users who want new functionality). Bugs are the visible evidence that our code is sufficiently complicated that we don’t fully understand it. We don’t like creating bugs, we wish we didn’t do it, and we’ve developed some coping skills to address the problem … but we still write bugs into our code.
Everyone has an Aunt Edna: whenever she goes out, she inevitably brings home some new thing to put on a shelf. The inevitable result of creating software is more bugs (and, yes, more/better functionality). Like Aunt Edna’s tchotchkes, without some positive action (stopping Aunt Edna at the cash register leaps to mind), tchotchkes and bugs both accumulate over time. When bugs accumulate, the application becomes unusable—or, at the very least, unused (see Law 1).
And that’s why we have testing. While other ways of eliminating bugs have been suggested (<cough>provably correct software</cough>, <cough>contract-based software</cough>), they’ve never caught on.
This law is, obviously, just the result of the previous three laws. I’ve always argued against the idea of zero-defect software as not only historically unfounded (it’s never happened) but physically impossible (it can’t be done). The bugs in the things you don’t test are still bugs—they’re just the bugs you don’t know about.
If “testing” means “reducing bugs,” then that puts testing into the “necessary work” category: stuff we have to do so that we can deliver value (i.e., new functionality). That’s because the only people who get to decide what’s valuable are the people who use our software, and what they want is new functionality. If we could create bug-free software without testing, our users wouldn’t complain.
The only time testing starts to move into the value-added category is when we test a user transaction end-to-end and, even then, probably only when the user is involved in crafting the test.
Since the goal with a necessary task is to reduce time and cost so that more time/money can be spent on value-added tasks, there will never be enough time/money to produce bug-free software. Automating regression tests, for example, is attractive because it lets us get closer to testing everything while reducing costs. But, even so, we never have enough time/money and are always prioritizing. The basis for our priorities is risk: What would it cost to remediate a bug in this area if it got into production?
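That risk-based prioritization can be sketched as a simple expected-cost calculation. This is a minimal illustration, not anything from the article: the function name, the application areas and all the numbers are invented assumptions; real teams would plug in their own remediation-cost and likelihood estimates.

```python
# Hypothetical sketch of risk-based test prioritization.
# All names, costs and probabilities below are illustrative assumptions.

def risk_score(remediation_cost: float, failure_likelihood: float) -> float:
    """Expected cost if a bug in this area reaches production."""
    return remediation_cost * failure_likelihood

areas = [
    # (area, cost to fix a production bug, estimated chance of a bug)
    ("checkout/payments", 50_000, 0.30),
    ("report formatting", 2_000, 0.50),
    ("user preferences", 1_000, 0.20),
]

# Spend scarce testing time on the highest expected-cost areas first.
prioritized = sorted(areas, key=lambda a: risk_score(a[1], a[2]), reverse=True)
for name, cost, likelihood in prioritized:
    print(f"{name}: expected cost {risk_score(cost, likelihood):,.0f}")
```

Under these made-up numbers, checkout/payments tops the list even though report formatting is more likely to break, because a payments bug in production costs far more to remediate.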
To misquote Jane Austen, “It is a truth universally acknowledged, that a bug found early is cheaper to fix than a bug found later.” Not that fixing a bug is ever free: We can’t fix a bug unless we understand why it exists, and we have bugs precisely because we don’t fully understand our code (see Law 2: Bugs exist because we write them).
But fixing a bug late in the process is both hard and expensive because other code now depends on the buggy software. Not only does that automatically increase the cost, it throws our schedules out because it turns out that, after all this time, we’re still astonishingly bad at estimating how far bug fixes ripple through our software.
So we start testing earlier so that our integration test costs—the only tests that possibly have value for our users (see Law 6)—are manageable.
How early can you start testing? You can start by validating your requirements.
Like any good law, “starting early” is true everywhere. If you’ve unit tested some code, for example, there’s no reason why you can’t start load testing it—you have all the necessary resources. And if you’re not going to start testing, remember Law 5: Anything you don’t test will have at least one bug—probably more.
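The "you already have the resources" point can be shown in a few lines: the same assertion that drives a unit test can seed a crude load test. This is a sketch under assumptions; `normalize_name`, the iteration count and the time budget are all invented stand-ins for whatever code and performance target a real team has.

```python
# Hypothetical sketch: reusing a unit test as the core of a crude load test.
# `normalize_name` stands in for any function you've already unit tested.
import time

def normalize_name(raw: str) -> str:
    # Collapse whitespace and title-case the result.
    return " ".join(raw.split()).title()

def test_unit():
    # The ordinary unit test.
    assert normalize_name("  ada   lovelace ") == "Ada Lovelace"

def test_load(iterations: int = 100_000, budget_seconds: float = 2.0):
    # Run the same check many times and fail if it blows the time budget.
    start = time.perf_counter()
    for _ in range(iterations):
        test_unit()
    elapsed = time.perf_counter() - start
    assert elapsed < budget_seconds, f"too slow: {elapsed:.2f}s"

test_unit()
test_load()
```

The load test is deliberately naive (single-threaded, no ramp-up), but it reuses the unit test's setup and assertions, which is exactly the resource-sharing the law points at.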
We even have tools that let us go back in time and make old software compatible with automated testing.
The usual definition of testing means that it’s not about adding quality: At best, it’s about defect removal and, given constraints on time and resources, really about managing risk in production and staving off chaos in the development process. Since bugs are inevitable and inevitably pile up (Laws 2 and 3), testers defend against that.
But it doesn’t have to be that way. Testers can truly become the custodians of quality and be seen as adding value to delivering functionality (and even making adding new functionality more efficient). And it wouldn’t be fair to say all that without mentioning Lisa Crispin and Janet Gregory’s wonderful book Agile Testing, which has influenced my thinking about testing more than any other single source.
As I’ve said elsewhere, testing can become Quality Assurance and can move from “something we have to do” to “something we want to do.” And that would be very valuable.
Peter Vogel is a system architect and principal in PH&V Information Services. PH&V provides full-stack consulting from UX design through object modeling to database design. Peter also writes courses and teaches for Learning Tree International.