Sluggish automated test runs can significantly slow down your entire team. This article teaches you five ways you can speed up your test automation.
Any team working on software development these days knows it’s a fast-paced environment. Organizations and customers alike expect a steady stream of new features and improvements to their favorite applications. Trying to ship fast and often without an automated test suite in place to verify that new modifications to the codebase don’t break what’s already there is an uphill battle.
While automated testing can speed up development, it can also create bottlenecks that slow the entire process. A poorly optimized test suite delays the feedback loop developers rely on to confirm an application still runs as intended after each change, making it harder to identify and fix issues quickly.
In this article, we’ll cover how slow tests impact developer productivity and five ways teams can help make their automated tests fast.
Test automation’s primary purpose is to help quickly detect potential application bugs. Instead of the time-consuming process of waiting for someone to verify code changes manually, an automated test suite can validate them automatically after every modification. This rapid feedback loop saves time and money by reducing the effort needed to test an application, eliminating human error with consistent results and allowing developers to fix bugs sooner. All of these benefits add up to shorter release cycles without sacrificing quality.
Most of those benefits evaporate when running an application’s automated tests takes a non-trivial amount of time. Every time a new change gets introduced to the codebase, developers won’t know whether their updates break existing functionality until the suite finishes. Waiting for tests to run, only to see one fail, frustrates and demotivates developers. Eventually, they will likely begin ignoring the test suite to continue working on the next thing, and quality will slowly erode over time.
Slow automated tests don’t affect only developers; they impact the entire organization. Slow tests lead to slow coding iterations, so developers can’t work as fast as they’d like. The team begins making trade-offs to bypass some or all automated testing, creating more technical debt and defects in the long run. Developers then need to deal with buggy deployments and release cycles that slow to a crawl, putting the organization at risk of being outpaced by a competitor.
The ripple effect caused by a slow automated test suite shows why it’s vital to keep tests running as quickly as possible. Here are a few strategies teams can use to build and maintain fast test suites without sacrificing the application’s long-term quality.
Most automated test suites run each scenario one at a time by default. Running tests individually will take a lot of time to complete—imagine a grocery store with one hundred customers in line to pay but only one cashier. One way to improve the testing process is by running tests in parallel. Instead of running test scenarios individually, parallel test execution runs multiple tests simultaneously. Returning to the grocery store analogy, 10 cashiers will get through the line of customers much more quickly. Similarly, 10 test runner processes will wrap up execution sooner. Parallel testing can slash automated testing times by more than half and usually only requires a simple configuration change to the test runner. For example, Progress Telerik Test Studio can distribute tests across multiple browsers and execution servers in parallel.
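To make the idea concrete, here’s a minimal sketch in Python, assuming a pytest suite with independent test files under a tests/ directory; in practice, a plugin like pytest-xdist does the same job with a single command-line flag.

```python
# Minimal sketch: run each test file in its own process, like adding
# more cashiers at the register. Assumes independent files named
# tests/test_*.py and that pytest is installed.
import glob
import subprocess
from concurrent.futures import ProcessPoolExecutor

def run_test_file(path):
    # Each worker executes one test file in a separate pytest process.
    result = subprocess.run(["python", "-m", "pytest", "-q", path])
    return path, result.returncode

if __name__ == "__main__":
    test_files = glob.glob("tests/test_*.py")
    # Ten workers process the files concurrently instead of one by one.
    with ProcessPoolExecutor(max_workers=10) as pool:
        for path, code in pool.map(run_test_file, test_files):
            print(f"{'PASSED' if code == 0 else 'FAILED'}: {path}")
```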
A side benefit of running tests in parallel is that it exposes tightly coupled test scenarios that rely on other tests to work properly. An example of a tightly coupled test is when one scenario writes data to a file and another must read from that file to pass. Testers should avoid these kinds of tests because they’re difficult to debug and maintain, and they won’t work well in parallel due to their dependency on one another. Rewriting or removing these tests will also improve testing times.
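Here’s a hypothetical pytest sketch of that file-based coupling, along with a decoupled version that uses pytest’s built-in tmp_path fixture so the test owns its own data:

```python
# Hypothetical example of the file-based coupling described above.
import json
from pathlib import Path

SHARED_REPORT = Path("report.json")

def test_export_writes_report():
    # Coupled: leaves shared state behind for another test to consume.
    SHARED_REPORT.write_text(json.dumps({"total": 3}))
    assert SHARED_REPORT.exists()

def test_import_reads_report():
    # Coupled: fails if it runs first, alone, or on a parallel worker
    # that never executed test_export_writes_report.
    data = json.loads(SHARED_REPORT.read_text())
    assert data["total"] == 3

def test_import_reads_report_independent(tmp_path):
    # Decoupled: the test creates its own input, so it can run in any
    # order or in parallel with the rest of the suite.
    report = tmp_path / "report.json"
    report.write_text(json.dumps({"total": 3}))
    assert json.loads(report.read_text())["total"] == 3
```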
Many teams set up continuous integration systems to run the entire automated test suite after every change. This method results in a well-tested application, but it also slows down the feedback loop that tells developers their application is still in a good state. For larger projects, running all the tests on every update to the codebase is unnecessary. A more balanced approach is to strategically run subsets of test scenarios at different stages of the software development lifecycle.
Most modern software tooling allows testers to label or tag their scenarios and set up their CI service to execute only the identified test cases. The purpose of doing this is to cut down on the time it takes to validate modifications to the application. For instance:

- Run a small set of fast, critical smoke tests on every commit to catch obvious breakage immediately.
- Run the full regression suite on a schedule, such as nightly or before a release, when no one is blocked waiting on the results.
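With pytest, for example, tags take the form of markers. The marker names below are illustrative; after registering them in pytest.ini, a CI job can select a subset with a flag like `pytest -m smoke`.

```python
# Illustrative markers (names are assumptions); register them in
# pytest.ini under `markers = ...` to avoid warnings.
import pytest

@pytest.mark.smoke
def test_login_page_loads():
    # Fast, critical-path check suitable for every commit.
    assert True  # placeholder assertion

@pytest.mark.regression
def test_full_checkout_flow():
    # Slower end-to-end scenario reserved for scheduled runs.
    assert True  # placeholder assertion
```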
Large automated testing suites that take hours to execute can benefit from running on a schedule, such as through Telerik Test Studio’s scheduling services. Running a segment of tests earlier in the development process takes only a fraction of the time while still giving enough confidence that the application works as it should.
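Most CI services and tools like Test Studio provide scheduling out of the box, but as a self-contained illustration, here’s a minimal sketch using the third-party schedule package; the 2:00 a.m. time and the regression marker are assumptions.

```python
# Minimal nightly-run sketch using the `schedule` package
# (pip install schedule); the time and marker are illustrative.
import subprocess
import time

import schedule

def run_full_suite():
    # Kick off the slow regression subset while no one is waiting on it.
    subprocess.run(["python", "-m", "pytest", "-m", "regression"])

schedule.every().day.at("02:00").do(run_full_suite)

while True:
    schedule.run_pending()
    time.sleep(60)  # check once a minute for pending jobs
```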
One of the most common mistakes teams make when building an automated test suite is focusing on volume. The prevalent thought is that the more automated test scenarios an application has, the better off it is. Unfortunately, more isn’t always better. Making quantity the focal point of writing automated tests steers teams to create redundant or low-quality test scenarios, and every new automated test introduced slows down the test suite more and more.
When working on test automation, the focus should be quality over quantity. Aiming for 100% test coverage in an application is not feasible. Teams will make the most of their efforts by automating high-risk sections or scenarios that are too time-consuming to test manually on a regular basis. Concentrating on these critical areas focuses effort where it matters without the overhead of testing low-risk or rarely used parts of an application.
Developers can run automated test suites on their local development machines to validate changes before committing them to the codebase. However, continuous integration systems do most of the automated test execution work. One of the most overlooked areas in test automation is the hardware powering these CI systems, and it’s one of the places that causes the most headaches for testers and developers.
CI systems often run on underpowered servers, with the obvious consequence of slow test runs; these low-powered systems also cause frequent test failures due to a lack of resources. Many continuous integration services provide different tiers with more powerful hardware that can scale as needed. Teams that struggle with their continuous integration systems should look at bumping up their hardware, which can resolve most of these issues. Although it doesn’t come for free, the expense is often much lower than the opportunity cost to the team.
Most software applications are constantly evolving, whether to add new functionality or fix defects. Ideally, these modifications will include automated tests to maintain a high level of quality throughout the project’s lifetime. However, it’s a given that all software builds up code that becomes obsolete. No matter how careful developers and testers are when committing new code and adding tests, a common oversight among development teams is never taking the time to review these areas that become obsolete or, worse yet, make the codebase more difficult to work with.
Even when developers and testers are careful to write only the tests they need, each new change potentially accumulates more testing scenarios over time. Those tests often stop serving a purpose yet remain in the test suite, taking time and effort to maintain. Teams should perform regular code audits on their existing test suite to spot these nonessential scenarios and determine whether they’re still worth keeping. Potential candidates for removal are:

- Tests covering functionality that no longer exists or has changed significantly.
- Tests that duplicate coverage already handled by other scenarios.
- Flaky tests that fail intermittently and get rerun or skipped instead of fixed.
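As a starting point for an audit, here’s a rough helper that reads a JUnit-style XML report, a format most test runners can emit, and surfaces the slowest tests; the report file name is an assumption.

```python
# Rough audit helper: list the slowest tests from a JUnit-style XML
# report (the file name "test-results.xml" is an assumption).
import xml.etree.ElementTree as ET

def slowest_tests(report_path, top_n=10):
    tree = ET.parse(report_path)
    timings = [
        (float(case.get("time", 0)),
         f"{case.get('classname')}::{case.get('name')}")
        for case in tree.iter("testcase")
    ]
    return sorted(timings, reverse=True)[:top_n]

if __name__ == "__main__":
    for seconds, name in slowest_tests("test-results.xml"):
        print(f"{seconds:8.2f}s  {name}")
```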
Testing tools like Telerik Test Studio’s test results reporting can help identify these scenarios, and regular pruning will keep test suites fast and maintainable.
Working on software applications with slow automated test suites isn’t a pleasant experience. Developers have to wait long periods to find out whether their changes broke the application, which leads to longer release cycles. However, teams can correct these issues by adopting a few strategies in their test automation. Thanks to modern tooling like Telerik Test Studio, developers and testers can run multiple tests simultaneously, plan when to run specific tests, and make frequent audits of their test suites.
Optimizing existing tests can be challenging, especially for long-lived test suites that have accumulated hundreds or thousands of automated scenarios. An excellent approach is to start small with one of the strategies mentioned in this article and eventually add more as test execution times improve. Even using just one of these strategies will pay off in the form of faster development and deployments. These actions are just a few ways to keep automated tests in a project running smoothly for months and years to come.
Dennis Martinez is a freelance automation tester and DevOps engineer living in Osaka, Japan. He has over 19 years of professional experience working at startups in New York City, San Francisco, and Tokyo. Dennis also maintains Dev Tester, writing about automated testing and test automation to help you become a better tester. You can also find him on LinkedIn and his website.