It’s true that not all testing solutions look alike. But one criterion drives not only your choice here but automated testing in general: It’s the quantified version of “Can you live with it?”
Obviously, functionality in a testing solution matters: Are you just testing web apps? Do you need to support mobile testing? How about desktop applications? What resources do you have to allocate to implement automated testing? What tools will fit into your toolchain? As testing tools evolve, it becomes increasingly difficult to determine what criteria you should use to pick between the available packages. There is one way to look at the issue that clarifies the problem, though.
First, we all agree that we have to do testing because we have no other way to ensure that our applications work. It’s not that alternatives haven’t been proposed—provably-correct software leaps to mind, for example. So far, however, nothing has replaced testing.
But, while testing is a necessary task, it isn’t, from our users’ point of view (the people paying the bills), a value-added task. Testing doesn’t add what users want: more functionality, better UX, and so on. At best, testing removes dissatisfiers. In fact, if testing ties up developers, it actually reduces your ability to deliver what your users value. Tasks that are necessary but not value-added are common enough that they have a name: They’re called “overhead.”
Don’t misunderstand: Classifying testing as overhead doesn’t diminish its importance. If you run a factory, your water bill is classified as overhead, but that doesn’t mean you don’t value, and aren’t willing to pay for, a consistent, high-quality supply of water. That classification does, however, clarify what you’re looking for in a testing solution. And, as a quick review of testing strategies/technologies shows, it even explains why we’ve adopted automated testing: It reduces overhead.
For example, automated testing allows us to reduce the time people must spend on repetitive testing. While a person (an expensive resource) still has to develop the initial test, automated testing allows us to rerun those tests at little or no cost. It’s automated testing that has made thorough and complete regression testing viable.
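The economics described above are easiest to see in code. Here’s a minimal sketch of an automated regression test in Python: the function under test (`discount_price`) and the test names are invented for illustration, but the point holds generally. A person writes the tests once; after that, every build can rerun them at essentially no cost.

```python
def discount_price(price, percent):
    """Hypothetical function under test: apply a percentage discount."""
    return round(price * (1 - percent / 100), 2)

def test_discount_applies_percentage():
    # Written once by a developer; rerun for free on every build.
    assert discount_price(100.0, 20) == 80.0

def test_zero_discount_leaves_price_unchanged():
    assert discount_price(59.99, 0) == 59.99

if __name__ == "__main__":
    # A test runner (pytest, unittest) would normally collect these;
    # calling them directly keeps the sketch self-contained.
    test_discount_applies_percentage()
    test_zero_discount_leaves_price_unchanged()
    print("all regression tests passed")
```

Once a suite like this exists, “complete regression testing” is just a matter of running it again, which is exactly why automation changes the cost calculation.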
Automated testing can also be integrated with our build chains in a way that manual testing cannot, which reduces the supervision costs required to ensure that all (and only) the “right” tests are executed. Automated testing also supports automated reporting, ensuring that all (and only) the right people are notified of problems, and only notified about the problems they care about.
Keyword testing is another example of how automated testing reduces costs. Keyword testing allows users to create new test plans without (much) developer intervention, using tools that users are familiar with (Excel, for example). While some end-user training is required, keyword testing reduces costs by, first, ensuring that all (and only) the tests users feel are valuable are run and, second, streamlining the process of approving test results by allowing end users to create, execute and review tests.
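The mechanics of keyword testing can be sketched in a few lines. Everything here is invented for illustration (the keywords, the toy “application,” the spreadsheet rows): the idea is that a developer implements each keyword once, and users then author test plans as rows of keywords and arguments in a tool like Excel.

```python
app = {}  # toy application state, standing in for a real UI

def do_type(field, value):
    """Keyword: type a value into a named field."""
    app[field] = value

def do_clear(field):
    """Keyword: clear a named field."""
    app.pop(field, None)

def do_assert_equals(field, expected):
    """Keyword: verify a field holds the expected value."""
    assert app.get(field) == expected, f"{field}: {app.get(field)!r} != {expected!r}"

# Developers write these handlers once...
KEYWORDS = {"type": do_type, "clear": do_clear, "assert_equals": do_assert_equals}

# ...and users author rows like these, typically exported from a spreadsheet.
test_plan = [
    ("type", "username", "pat"),
    ("assert_equals", "username", "pat"),
    ("clear", "username"),
]

def run(plan):
    for keyword, *args in plan:
        KEYWORDS[keyword](*args)  # dispatch each row to its handler

run(test_plan)
```

The division of labor is the point: the expensive resource (a developer) builds the keyword vocabulary once, and the people who know which tests matter (end users) create, run and review the test plans.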
One more example: Testing that includes exercising an application’s UI (end-to-end or E2E testing) has, traditionally, been a bridge too far for most development shops. The issue, again, has been almost purely overhead-related: Typically, even trivial changes to an application’s UI caused multiple E2E tests to fail, requiring those tests to be rewritten. Given those ongoing maintenance costs, most shops felt the costs of automated UI/E2E testing were too high. Telerik Test Studio’s mixed element detection, for example, was specifically designed to eliminate those costs and make E2E testing viable.
While E2E testing is valuable in itself, lowering the costs of UI testing also enables codeless testing by enabling users to generate tests just by interacting with the application. Like keyword testing, codeless testing empowers users to create the tests they need, run them and review the results. Codeless testing also reduces the need for developers (a relatively high-cost and limited resource) in creating many of the tests. Codeless testing doesn’t eliminate the need for developers, but they can now focus on those scenarios that codeless testing can’t address.
The reality is that living with a testing solution has ongoing costs associated with it. And, when you think about managing ongoing/overhead costs, then you’re thinking about your Total Cost of Ownership (TCO) because purchase price is just a small part of owning your solution.
You can see that viewpoint reflected in modern testing solutions. Most modern testing tools no longer require you to compile a testing agent into the application to support automated testing. One reason for that change is philosophical: Testing an application with a special agent in it means that you’re not testing the version of the application you’ll be releasing. But the more critical reason is overhead-related: Requiring an application to carry a testing agent makes both the test build and the release build more complicated processes—an ongoing cost that drives up your overhead. Eliminating test agents eliminates that cost.
Getting started with automated testing includes one-time costs like setting up your test environment, integrating testing into your development/delivery processes, and training people to create tests. But the real costs you want to avoid are the ongoing ones. Your development/delivery toolchain is going to be evolving over time. For example: You don’t want to have to repeat those setup costs every time you tweak your development process.
It’s easy to miss some of the ongoing costs associated with parts of testing—reporting, for example. A good test-reporting tool first makes sure that everyone who needs to know about a testing problem is informed (that may even involve compliance issues). But reporting must also make it easy to find the problem that needs to be addressed and to handle the follow-up reporting that tells the appropriate people the problem has been fixed.
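The “all and only the right people” routing described earlier can be sketched like this. The team names, areas and `notify` callback are all invented; a real reporting tool would drive the same logic from its own configuration.

```python
# Who subscribes to failures in which functional area (illustrative names).
SUBSCRIPTIONS = {
    "checkout": ["payments-team"],
    "search":   ["search-team"],
    "ui":       ["frontend-team", "qa-lead"],  # the QA lead also sees UI failures
}

def route_failures(failures, notify):
    """For each failed test, notify only the subscribers for its area."""
    for test_name, area in failures:
        for recipient in SUBSCRIPTIONS.get(area, ["qa-lead"]):  # default owner
            notify(recipient, f"FAILED: {test_name} ({area})")

# Capture notifications in a list instead of sending email, to keep this runnable.
sent = []
route_failures(
    [("test_cart_total", "checkout"), ("test_login_button", "ui")],
    notify=lambda who, msg: sent.append((who, msg)),
)
```

The payoff is the inverse as well: the search team hears nothing about a checkout failure, so nobody learns to ignore the reports.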
Selenium is a good example of the challenges in picking a good testing tool. As an open-source tool, Selenium’s initial investment couldn’t be lower: It’s free. Selenium also has a rich ecosystem that provides a variety of third-party packages (some free, some not) for addressing testing needs.
That’s a good thing because Selenium, for example, doesn’t support testing mobile apps. However, a third-party package, Appium (also free), does support mobile apps by leveraging the Selenium interfaces. As a result, if you know Selenium, you already know most of what you need to start using Appium.
However, because Appium and Selenium are separate tools, creating an integrated reporting environment from their output can be… challenging. For that, you’ll be looking at a build tool like Maven, which can collect output from a variety of tools, and a reporting tool like Allure, which can generate reports from a variety of sources.
But, all of a sudden, your TCO is going up. While open-source components are free, integrating them into a solution can be expensive.
Furthermore, other costs start to accrue as you assemble a best-of-breed infrastructure: You’re now managing patches and upgrades for multiple tools, all on different release schedules. Integration issues become important and you can find yourself having to spend time tracking down problems with your testing toolchain, in addition to fixing problems with your application.
On top of that, fixing testing-infrastructure problems can be time-consuming in best-of-breed solutions: You’re Googling Stack Overflow rather than calling tech support. Acquiring expertise in a variety of packages isn’t free, either. If the upshot is that you have someone whose job includes maintaining your test infrastructure, then you have to feel that something has gone wrong in meeting your testing goals.
Obviously, all of this is both doable and manageable. What it’s not is free.
Modern applications integrate microservices and clients while spreading complicated business logic across all of these system components. Proving that these applications “work as intended” requires integrating both technical and business domain knowledge while supporting a variety of testing approaches (UI, regression, E2E, API, load, etc.). Ensuring quality is only possible with tooling that supports all of those requirements and does it without increasing teams’ overhead.
Peter Vogel is a system architect and principal in PH&V Information Services. PH&V provides full-stack consulting from UX design through object modeling to database design. Peter also writes courses and teaches for Learning Tree International.