End-to-end testing is the only way to ensure that modern, distributed, loosely coupled applications actually work. And it does so by taking a positive approach to application quality.
Now that, thanks to the internet, we’re all end users, we have come to a new realization about testing: If we, the end users, have to fight with the UI, if we don’t get the results we expect, then we don’t care if the unit, functional or load tests succeeded. If the end users say the application doesn’t work then the application doesn’t work.
Which is the primary reason why we’re now starting to talk about end-to-end (E2E) testing.
If that seems self-centered, we can put that another way: If your business depends on users interacting with your software (and it does), then any flaw in the UI or the results is bad. Only E2E testing proves that.
Fundamentally, E2E reflects a growing maturity in testing. Initially, we just worried about system stability, which was, essentially, a negative approach: “Does my application not blow up?”
Now that software is critical to our organization’s success, we’re adopting a more genuinely positive approach: “Does my application help our users achieve their goals?” This is where E2E comes in: E2E testing takes a typical set of interactions that a user follows in order to achieve the user’s goals (the “user’s journey”) and checks to see if the user’s goals are met.
E2E asks the only question that matters: “Does the system—as a whole—meet the user’s (and owner’s) goals?” And the only answer that matters comes from the system’s stakeholders: The end user and the organization that owns the system. If the UI doesn’t work as they expect and produce the results they expect, then the system is broken. Period.
However, E2E testing isn’t just about getting stakeholder approval. Interest in E2E testing is also driven by modern application design. In a world that increasingly consists of loosely connected clients and services, all of which may be built by different teams, proving that the system’s individual components work is, at best, just a good first step. You can see that change in the explanations that are triggered when an application fails. In the bad old days, when an application failed, the default response was “Well, it worked in test.” Now, it’s “Well, the request wasn’t written to the right queue” or “The client didn’t call the API correctly.”
Because E2E changes the testing question, it also changes the way that testing is done. Most obviously, E2E requires tests to interact with an application’s UI, both to start the test and to check the results. To be useful, these tests have to be robust enough to survive “typical” UI changes.
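To make “robust enough” concrete, here’s a minimal sketch (standard library only, with hypothetical page markup and a `data-testid` convention chosen for illustration) of the selector strategy that survives UI changes: prefer a stable test hook, fall back to visible text, and never locate controls by position. Dedicated E2E tools like Selenium, Playwright or Test Studio give you this kind of lookup; the point here is the ordering.

```python
from html.parser import HTMLParser

class _PageScanner(HTMLParser):
    """Collects (tag, attributes, text) for every element on the page."""
    def __init__(self):
        super().__init__()
        self.elements = []
        self._stack = []  # currently open elements: (tag, attrs, text chunks)

    def handle_starttag(self, tag, attrs):
        self._stack.append((tag, dict(attrs), []))

    def handle_data(self, data):
        if self._stack:
            self._stack[-1][2].append(data)

    def handle_endtag(self, tag):
        while self._stack:
            t, a, chunks = self._stack.pop()
            self.elements.append((t, a, "".join(chunks).strip()))
            if t == tag:
                break

def find_control(html, test_id, label):
    """Prefer the stable data-testid hook; fall back to visible text.
    Positional selectors (e.g. 'div > div:nth-child(3) > button') are
    deliberately not an option: they break on every layout change."""
    scanner = _PageScanner()
    scanner.feed(html)
    for tag, attrs, text in scanner.elements:
        if attrs.get("data-testid") == test_id:
            return tag
    for tag, attrs, text in scanner.elements:
        if text == label:
            return tag
    return None

# The same lookup survives a redesign that moved the button and dropped
# its CSS class, because neither the test id nor the label is positional.
page_v1 = '<div><button class="btn" data-testid="checkout">Buy now</button></div>'
page_v2 = '<section><span><button class="cta-primary">Buy now</button></span></section>'
assert find_control(page_v1, "checkout", "Buy now") == "button"
assert find_control(page_v2, "checkout", "Buy now") == "button"
```

The design choice is the important part: tests that anchor on what the user sees (or on hooks the team promises to keep stable) don’t need to be rewritten every time a designer reshuffles the page.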
But E2E also changes one of the basic principles of unit testing: Unit testing depends on isolating the module or component being tested. This concept of isolating components is so fundamental to most automated testing that there’s even a three-letter acronym for it: Isolated modules are the “Component Under Test” or CUT.
E2E takes the opposite approach: Testing depends on not isolating the “component under test” but, instead, finding out whether everything triggered by the user’s interactions actually delivers the results the user expects.
This means that E2E requires tools that work differently from your unit test tools. There are five criteria that you should be looking at in your E2E support system.
First and foremost: Speed. This one doesn’t change from unit testing, but it’s harder to achieve with E2E testing. In production, it may take hours (or even days) for an interaction to work through the whole system. However, without rapid feedback, testing stops being useful. The ideal testing scenario is that, as developers finish their code, all the relevant tests are triggered, and the developer gets feedback on the code they just finished, before they have to move on to the next task (or get bored).
Second: A flexible test environment. You probably can’t have a whole copy of the system set aside for every team that contributes to it. Teams need to be able to test the user interactions they’re interested in, even while some other team is testing another set of interactions.
Third: Support for all the ways that users can interact with the system. In addition to whatever clients you create, your system may also have an API interface that business partners and customers can use from their own clients. The good news here is that you’re not obligated to test your business partners’ clients… but you are required to create the E2E tests that guarantee that your API works in the way you’ve promised your partners.
Here’s the bad news: You can’t use your API tests as a substitute for any clients you create. With E2E testing you can’t just test whether the APIs called from mobile platforms do the right thing. If you want to prove that the system works the way mobile users expect it to… well, then you have to be able to initiate E2E tests from the mobile clients (or something very much like them).
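Here’s a sketch of the kind of contract check an API-level E2E test performs. Everything specific in it is invented for illustration: the `/orders/{id}` endpoint, its fields, and the local stub that stands in for the deployed system (a real run would point the test at a test environment’s URL instead of launching its own server).

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Stand-in for the deployed system: in a real E2E run the test would
# target a test environment's URL instead of launching a local server.
class OrdersHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/orders/42":
            body = json.dumps({"id": 42, "status": "shipped"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):  # keep test output quiet
        pass

def check_orders_contract(base_url):
    """The (hypothetical) promise made to partners: GET /orders/{id}
    returns JSON with an integer 'id' and a string 'status'."""
    with urllib.request.urlopen(base_url + "/orders/42") as resp:
        assert resp.status == 200
        order = json.load(resp)
    assert isinstance(order["id"], int)
    assert isinstance(order["status"], str)
    return order

# Bind to port 0 so the OS picks a free port for the stub.
server = HTTPServer(("127.0.0.1", 0), OrdersHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
order = check_orders_contract("http://127.0.0.1:%d" % server.server_port)
server.shutdown()
```

Notice that the check asserts on the shape of the response, not just the status code: that shape is the promise your partners are building against.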
Fourth: Integrated reporting. E2E testing doesn’t mean that you get to skip other tests. You’ll still, for example, need to do load testing to ensure that your system stays responsive as demand increases. And there’s not much point doing E2E testing if the system’s components aren’t passing their unit tests. You’ll need to integrate the results of all of your tests into a UI that reports on system quality in a way that’s useful both to developers and management.
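The aggregation step itself can be sketched in a few lines. The suite names and the result-record format below are invented for illustration; real test runners emit JUnit XML or similar, which a reporting tool would parse into the same kind of shape.

```python
def summarize(suites):
    """suites: mapping of suite name -> list of (test name, passed?) pairs."""
    summary = {
        name: {"passed": sum(1 for _, ok in results if ok),
               "failed": sum(1 for _, ok in results if not ok)}
        for name, results in suites.items()
    }
    # One headline status for management; the per-suite detail is for
    # the teams that have to fix things.
    overall = "green" if all(s["failed"] == 0 for s in summary.values()) else "red"
    return summary, overall

results = {
    "unit": [("parse_order", True), ("price_total", True)],
    "load": [("p95_under_200ms", True)],
    "e2e":  [("checkout_journey", False)],
}
summary, overall = summarize(results)
assert overall == "red"  # a failing user journey turns the whole board red
assert summary["unit"]["failed"] == 0
```

The point of rolling everything up: a passing unit suite and a failing E2E journey still mean a red system, which is exactly the stakeholder’s view of quality.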
If all that sounds like an intimidating set of demands for any toolkit… well, it is. You’re going to need more than just a “testing tool.” As a result, vendors in the testing arena have focused on creating test suites that provide the support you need. Telerik Test Studio demonstrates that approach: for example, it not only supports quick and stable E2E tests (including scheduled test runs and integration with DevOps toolchains) and a robust framework for exercising UIs, but also adds an “executive dashboard” that brings together the results of all of your tests (unit, load, E2E, etc.).
The fifth (and final) criterion: Whatever testing infrastructure you assemble will need to be a good fit with your organization’s culture, toolsets, skillsets and processes. Testing isn’t a “thing you do” at the end of the development cycle: It’s the part of your process that ensures you deliver software that your users actually value. And why would you want to do anything else?
Peter Vogel is a system architect and principal in PH&V Information Services. PH&V provides full-stack consulting from UX design through object modeling to database design. Peter also writes courses and teaches for Learning Tree International.