
If you get your testers involved early enough, they can be the primary tool for ensuring that your users get what they want. In fact, testers can be the part of your process that users regard as “value added.”

No one cares if the application’s developers like the application (and of course they do—no one has ugly children). The only opinion that matters about the application’s quality is the users’. This is why the first step in developing an application is getting the users’ requirements: It establishes the criteria that will be used to judge the application. Except, as we all know, developers can create a product that meets the requirements but that the users reject.

You can solve that problem by bringing your testers into the requirements process and, simultaneously, get your users to regard testing as a value-added activity.

Delivering What’s Wanted

Bringing testers into the requirements process lets you leverage what makes testers unique: By converting business requirements into tests that the technology must pass, testers are positioned on the border between the business and the technology.

That unique position shows up in virtually everything testers do. In pulling together test data, for example, testers have to define equivalence partitions: sets of data where only one item in the set needs to be tested because the other members of the set are functionally identical. Testers build those sets by understanding both what the data means to the business and how the technology responds to variations in the data (Can the customer specify a negative amount in some field? Does that negative number count as a “special number” to the code?).
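
To make that concrete, here is a minimal sketch (not from the original article) of how equivalence partitions might be encoded as a parameterized test. The validate_payment_amount() function, its accepted range and the partition boundaries are all illustrative assumptions:

```python
# A minimal sketch of equivalence partitioning for a hypothetical
# validate_payment_amount() rule: amounts must be greater than zero
# and no larger than 10,000. The function and the limits are assumptions
# made for illustration, not something defined in the article.
import pytest


def validate_payment_amount(amount: float) -> bool:
    """Hypothetical business rule: accept amounts in (0, 10000]."""
    return 0 < amount <= 10_000


# One representative value per partition -- the other members of each
# partition are functionally identical, so they don't need their own tests.
@pytest.mark.parametrize(
    "amount, expected",
    [
        (-50.00, False),     # partition: negative amounts (rejected)
        (0.00, False),       # partition: zero (rejected)
        (250.00, True),      # partition: ordinary valid amounts (accepted)
        (10_000.00, True),   # partition: the upper boundary (accepted)
        (10_000.01, False),  # partition: over the limit (rejected)
    ],
)
def test_payment_amount_partitions(amount, expected):
    assert validate_payment_amount(amount) == expected
```

Each partition gets one representative value; if the business rule changes (say, the upper limit moves), only the partition list needs to be revisited.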

That unique point of view means that testers are the best team members to help users and developers refine what the requirements “really mean.” By defining the tests that prove the application meets the “definition of done” in the eyes of both developers and users, testers express the requirements in terms that make sense to both sides. Effectively, tests are an example of what “done” means (and if you don’t think examples matter, see my discussion of equivalence partitions, above).
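
A requirement-level test can read almost like the requirement itself. The sketch below is a hypothetical example (the discount rule and the calculate_order_total() function are assumptions for illustration, not from the article), written so that a user and a developer could both read it and agree on what “done” means:

```python
# A hypothetical "definition of done" expressed as a test. The business
# rule (orders of $1,000 or more get a 10% discount) and the
# calculate_order_total() function are illustrative assumptions.
import pytest


def calculate_order_total(subtotal: float) -> float:
    """Hypothetical implementation of the discount rule under test."""
    return subtotal * 0.9 if subtotal >= 1_000 else subtotal


def test_large_orders_receive_ten_percent_discount():
    # Given a customer order with a subtotal of $1,200
    subtotal = 1_200.00
    # When the order total is calculated
    total = calculate_order_total(subtotal)
    # Then a 10% discount is applied
    assert total == pytest.approx(1_080.00)
```

The Given/When/Then comments mirror how a user would phrase the requirement, while the assertion is something a developer can run on every build.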

Of course, bringing in testers early also allows testers to start building tests driven by requirements early enough that those tests can be used through the whole development process. It also ensures that all the tests are supported by the team’s testing infrastructure. But those benefits are really just icing on the cake.

Delivering When It’s Wanted

Testers also help refine the scope of the project, allowing users to get the functionality they want as early as possible. Defining tests lets users prioritize use cases and functionality: What needs to work now and what can be put off until later? Often users will be perfectly happy, for example, to get a version of the application earlier, even if it doesn’t reliably handle edge and corner cases.

Testing, after all, isn’t about eliminating risk (that’s impossible) but about managing risk. The application’s users (and only its users) get to decide how much risk is acceptable. I once helped deliver a version of an application that was only guaranteed to execute along the “happy path” under a very light load. It would, however, reliably execute that path over and over and over again, all day long if you wanted it to. My client was thrilled because that was exactly what they wanted that version to do: Demonstrate the product as a proof of concept at a trade show, ten hours a day for seven straight days (I think it would also have done well under heavier loads and in other scenarios, but we hadn’t run the tests to prove that yet).

Getting testers involved early also avoids recreating waterfall-style development processes where testing is left until the end and never given enough time. Instead, because testers are present from the start, testing can happen throughout the project, providing a more predictable delivery schedule and software with far fewer defects at delivery (again, something users value).

Delivering What Works

Early testing includes exploratory testing: exercising the application to discover potential problems and “risky” areas. But random explorations don’t pay off. To quote Niall Lynch at QA Lead, “Anyone can find bugs. Customers do it for free all the time.” Exploratory testing has the biggest payoffs when driven by the tester’s understanding of the customers’ requirements and trade-offs around risk (there’s no point in exploring the corner cases if the user is, currently, uninterested in them). Again, this is only possible if testers participated in defining those requirements and trade-offs.

Early involvement also supports creating automated tests early (where appropriate), allowing the team to do regression testing without consuming team resources. A growing body of regression tests both demonstrates and maintains progress toward meeting requirements. Users may not care about those progress reports … but management does.

Last benefit: When testers have a deep understanding of what users want, developed during the requirements phase and maintained throughout the project, they can also help refine the criteria and focus of User Acceptance Testing (UAT). This ensures that UAT demonstrates what users need to see to sign off (again, something that users value).

The Real Job of Testers

Getting testers in early lets them work on their real job (to quote Lisa Crispin and Janet Gregory in Agile Testing): “… to work with the customer or product owner in order to help them express their requirements adequately so that they can get the features they need, and to provide feedback on project progress to everyone.” Which is something that your users/customers will value, even if they regard testing itself as a “necessary evil.”

The tester’s job is neither trivial nor easy. Testing matters: No requirement is “done” until it’s passed its test. Getting testers involved early supports testing during development and lets the team demonstrate that the customers’ needs are going to be met (especially with automated testing). That’s going to require the kinds of tools that support all the testing I’ve described here: exploratory and automated testing, white box/black box testing, unit and API testing, end-to-end testing and User Acceptance Testing. Tools like Telerik’s Test Studio, for example.


About the Author

Peter Vogel

Peter Vogel is a system architect and principal in PH&V Information Services. PH&V provides full-stack consulting from UX design through object modeling to database design. Peter also writes courses and teaches for Learning Tree International.
