
I often joke that one half of the grey in my beard is from being the at-home parent for our two children from birth through my oldest’s teen years, one half is from my age, and one half is from all the hard lessons I’ve learned over a few decades’ worth of being in the software industry.

Yes, that adds up to three halves. It’s supposed to. I’m a tester, so errors in math are an ironic juxtaposition of… Oh, never mind.

It’s taken me a long time, but I finally learned to be very careful about how I design my automated test cases/scripts. In my early years of test automation, I’d often have test cases rely on how other test cases left the system: test 43 might expect that test 12 had passed and had properly created a user that test 43 would use. That led to a lot of painful experiences when trying to keep test suites running smoothly as a system evolved.

Remember that test automation is a software engineering activity, even with a great tool like Test Studio. One of the most troublesome issues in software design is the notion of tight coupling. A well-designed system, or test case, takes on as few tightly coupled dependencies as possible.

In the context of a test case, having one test case rely on other separate test cases is a perfect (bad!) example of tight coupling. Following the example above, if test 12 failed and didn’t create the user test 43 needed, then test 43 would fail, and likely for the wrong reasons. Relying on other tests to set system state or configuration is another example of tightly coupling your tests.
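
To make the coupled pattern concrete, here’s a minimal NUnit-style sketch in C#. UserService and LoginService are hypothetical stand-ins for an application’s backing API, and the test numbers mirror the example above:

[Test]
public void Test_12_AdminCanCreateUser()
{
    // Creates the user as a side effect in the shared test environment.
    UserService.Create("testuser", "password");
    Assert.IsTrue(UserService.Exists("testuser"));
}

[Test]
public void Test_43_UserCanLogIn()
{
    // BAD: silently assumes test 12 has already run and passed.
    Assert.IsTrue(LoginService.LogIn("testuser", "password"));
}

If test 12 fails, or the runner simply executes test 43 first, test 43 fails for reasons that have nothing to do with login.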

This problem leads to lots of frustration and wasted time cleaning up your test suites as your system grows. You end up having to fix many tests when one is impacted by something elsewhere in your system. The problem can grow large enough that your entire project, not just the testing effort, can be jeopardized.

I try to drive home this concept in every training session and workshop I deliver because it’s crucial to having a test automation strategy that helps you continue delivering great value to your customers. (Also, note that not once have I mentioned “UI testing.” This concept of avoiding tight coupling applies to all forms of automated testing: database, unit, integration, performance, etc.)

What Does a Good Test Look Like?

Good test cases are specific, as simple as possible, and stand-alone.

  • Specific: A good test case doesn’t conflate different scenarios. It focuses on validating/confirming/exploring one narrow issue. Good test cases don’t try to do things like “If you’re in Alabama, purchase tires and ship via UPS. If you’re in Ohio, purchase gummy bears and ship via the Post Office.”
  • Simple: Automated testing is hard, especially if you’re working at the UI level. Keeping the test as simple as possible means cutting out extraneous paths that aren’t directly related to the test’s core focus. Turn off things like CAPTCHA, and avoid automating checks that log in to Gmail for e-mail validation, for example. Simple also means you don’t create huge tests that are hundreds of steps long!
  • Stand-alone: This is the key theme that relates back to the coupling/dependency issue. Each test needs to handle its own setup for prerequisites and specific system configuration. No test should rely on another test to do this. (The sketch after this list shows the earlier coupled test rewritten this way.)
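
Here’s a rough sketch of test 43 rewritten as a stand-alone test, again assuming UserService and LoginService are hypothetical stand-ins for your own backing API. The test creates its own user, checks one narrow behavior, and cleans up after itself:

[Test]
public void UserCanLogIn()
{
    // Arrange: create a unique user just for this test.
    string userName = "login-test-" + Guid.NewGuid().ToString("N");
    UserService.Create(userName, "password");

    try
    {
        // Act/Assert: the single narrow behavior under test.
        Assert.IsTrue(LoginService.LogIn(userName, "password"));
    }
    finally
    {
        // Teardown: leave the system as we found it.
        UserService.Delete(userName);
    }
}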

Here’s a test from my demo project that highlights these concepts:


The test is short, uses other tests to handle pieces of system navigation and preparation, and uses setup steps to prepare the system for this test to succeed. I don’t have to worry about other tests interfering with this test’s execution, and the impact of other system changes is minimized.

Why This Matters So Much

Tight coupling in automated tests means groups of tests likely have to execute in a specific order. If you’re using Test Studio’s static test lists, then you can control the order of how your tests run when creating or editing a static list.


Dynamic lists, however, are completely different: you have no ability to control the order in which your tests execute. On a similar note, few other test frameworks (JUnit, NUnit, etc.) let you control execution order. In these cases your order-related dependencies will end up hurting you, badly.

Why This Matters Even More With Test Studio 2013 R1

The problem is magnified even for static lists with the new release of Test Studio 2013 R1. Why? Because one of the strongest points for R1 is the ability to distribute tests in a list across multiple execution engines.


Now it’s even more crucial that your tests are completely stand-alone! There’s no way you can expect a particular order of execution for a test list of 3,642 tests spread across 50 different execution engines, or even for a list of 10 tests split across two.

What About Setup/Configuration Tests?

Some organizations use static test lists so they can ensure a set of tests runs before any others as part of environmental configuration. Using tests to contain these actions seems like a perfect approach when you’re loading baseline datasets or turning off things like CAPTCHA.


Unfortunately, this isn’t the best way to handle it: you’re still relying on the test list executing in a certain order, and that definitely won’t work in 2013 R1 when you want to run your tests in distributed mode.

What’s the Right Approach?

First off, ensure you’re not creating any dependencies in your tests. Tests must be stand-alone!

Second, if you need setup or teardown actions for a test list, have a look at the wonderful execution extensions we have available to you. These let you write small blocks of code to handle your setup and teardown actions as needed. By combining the extensions with your own custom backing API, you can easily handle setup actions like this:

public void OnBeforeTestListStarted(TestList list)
{
    // Runs once before any test in the list executes. UserFactory is part
    // of our own custom backing API, not Test Studio itself; it clears out
    // leftover test users so every run starts from a known state.
    UserFactory.DeleteAllTestUsers();
}
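
If you also need cleanup when the list finishes, the execution extensions expose a matching after-list hook as well; the method name and signature below are from memory, so check the extension interface documentation for the exact form:

public void OnAfterTestListCompleted(RunResult result)
{
    // Undo the setup so the next test list starts from the same baseline.
    UserFactory.DeleteAllTestUsers();
}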

But We Don’t Have Time to Rewrite Our Tests to Eliminate Dependencies!

Fear not: we don’t expect you to rework your entire automation suite to support distributed execution. If you do have test lists and tests with existing dependencies that cause failures in distributed execution mode, you can deselect that option and have your tests run in series on a single execution machine, just like you do now.


I’d encourage you to write your new tests in a non-dependent fashion. You can then put those tests into separate test lists, which you’ll be able to distribute successfully across your test execution infrastructure.

Moving Forward

Developers learned hard lessons from badly written systems with tightly coupled dependencies: those systems are hard to extend, and maintaining them is a nightmare. We have to take those same lessons into our automation suites.

Save yourselves some extra grey hair, whether it’s in your beard or on your head!

About the author

Jim Holmes

Jim Holmes has around 25 years of IT experience. He’s a blogger and the Director of Engineering for Telerik’s Test Studio, an awesome set of tools to help teams deliver better software. He is co-author of "Windows Developer Power Tools" and Chief Cat Herder of the CodeMash Conference. Find him as @aJimHolmes on Twitter.

Interested in chatting with Jim at a conference? Check out his speaking schedule!

