
You might not think you have much to learn about automated testing from watching Top Gun, but fighter pilots have some important lessons about what you need in a testing tool.

You know the current mantra for agile development in a DevOps environment: Deploy often (to dev, test or production), fail fast (preferably in dev or test) and redeploy sooner. Though no one mentions it, presumably somewhere between “fail fast” and “redeploy,” you fix the failure.

The usual reason we start thinking about automated testing is to be able to perform all our regression tests—we recognize that we’ll never have the manpower or time to regression test everything if we have to keep running test scripts manually. We also start thinking about automated testing because it’s easier to integrate automated tests in our DevOps pipelines.

But we’ll only achieve the agile mantra of “fail fast” if we’re using testing software that supports how fighter pilots think.

Testing Like a Fighter Pilot

That’s because we’ve actually been here before or, more accurately, Lieutenant John Boyd of the U.S. Air Force has been here before. As an F-86 Sabre fighter pilot in the Korean War, Boyd wanted to account for why American fighter pilots were outperforming North Korean fighter pilots, despite the Koreans having, in the MiG-15, a technically better aircraft.

The explanation, Boyd decided, was the OODA loop (observe–orient–decide–act) which he felt described the process that fighter pilots went through repeatedly during combat. Pilots began by observing the situation, orienting (using that information to support making a decision), deciding on an attack posture, and then acting to implement that decision. Thanks to a combination of better training and cockpit design, the U.S. pilots were going through that loop faster than the North Korean pilots.

That faster processing through the OODA loop meant that, while the North Korean pilots were still integrating information about what was going on, the situation was already changing: the American pilots were acting, modifying or abandoning their attack posture. The North Korean pilots never got to the Act phase because they had to start over by observing the new situation.

Which should also be the goal of testing:

  1. Observe the problem fast (fail early)
  2. Orient by tracing that problem to its likely source
  3. Decide how to address it
  4. Act to fix and redeploy
  5. Do it again

Your goal, in an agile environment, is to do this fast enough that problems either don’t make it to production or, if they do, are resolved while their impact is minimal: By the time someone notices the failure, the situation has already changed and the failure is gone.

To put it another way: Everything follows from observing failures early.

What’s Required

That means that, in looking at a testing solution, you first want a tool that reduces the time it takes to build a test. With a good testing tool, you can create a test script just by turning on a test recorder, exercising your application and then revising the recorded test. The easier it is to create an effective test (including the mocks required to isolate your code), the more likely it is that you will a) have a test at all and b) be able to shift your testing to earlier in the process.
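
To make that isolation concrete, here's a minimal sketch in Python using the standard library's unittest.mock (the OrderProcessor class and its payment gateway are hypothetical stand-ins of mine, not features of any particular testing tool):

```python
from unittest.mock import Mock

# Hypothetical code under test: OrderProcessor depends on an external
# payment gateway that a unit test shouldn't actually call.
class OrderProcessor:
    def __init__(self, gateway):
        self.gateway = gateway

    def checkout(self, amount):
        if amount <= 0:
            raise ValueError("amount must be positive")
        return self.gateway.charge(amount)

def test_checkout_charges_gateway():
    gateway = Mock()                    # stand-in for the real service
    gateway.charge.return_value = "ok"  # canned response, no network call

    result = OrderProcessor(gateway).checkout(100)

    assert result == "ok"
    gateway.charge.assert_called_once_with(100)
```

Because the gateway is mocked, the test runs in milliseconds and a failure points at OrderProcessor itself rather than at the network or a third-party service.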

Of course, creating tests isn't much help if running them takes effort: The more work required to run a test, the less often that test will be run and the later you'll find out about your failures. An effective test manager is essential here. And, as the number of your tests grows, getting results fast means being able to easily run tests in parallel on multiple computers.
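
As a sketch of what makes that parallelism safe, here's a self-contained Python test file; the pytest-xdist plugin mentioned in the comments is one common way to fan tests out across workers and is an assumption of mine, not a tool this article prescribes:

```python
# test_pricing.py
# Each test is self-contained: no shared state, files or ports, so a
# test runner is free to execute them simultaneously. With pytest and
# the pytest-xdist plugin installed, for example, they can be spread
# across all available CPUs with:  pytest -n auto

def apply_discount(price, percent):
    return round(price * (1 - percent / 100), 2)

def test_ten_percent_off():
    assert apply_discount(100.0, 10) == 90.0

def test_zero_discount():
    assert apply_discount(50.0, 0) == 50.0

def test_full_discount():
    assert apply_discount(80.0, 100) == 0.0
```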

These tools give you a chance to observe the failure so that you can, in fact, “fail fast.”

Next Steps

The next step (orienting, by using the information your testing process produces) means you need a good reporting system that tells you, as soon as possible, what has failed. Tracking the problem down requires following best practices when building tests: isolating your tests from one another, for example, reduces the scope of the search when you go looking for the cause.
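
One well-known way to get that isolation (a sketch, not the only pattern) is to rebuild the test fixture for every test, so a red result identifies exactly one behavior:

```python
import unittest

class CartTests(unittest.TestCase):
    def setUp(self):
        # A fresh cart per test: no test depends on another's leftovers,
        # so a failure narrows the search to a single behavior.
        self.cart = []

    def test_cart_starts_empty(self):
        self.assertEqual(self.cart, [])

    def test_add_item(self):
        self.cart.append("widget")
        self.assertEqual(len(self.cart), 1)

if __name__ == "__main__":
    unittest.main()
```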

Obviously, the last steps (deciding on a solution and acting to fix the problem) get into the realm of good programming practices and well-designed application architectures. But, as I've said elsewhere, good programming practices and effective testing go together: loosely coupled applications built on the SOLID principles are certainly easier to test (and, especially, to test automatically). In fact, if it's hard to test your application, it's going to be expensive to extend or enhance it.
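
To illustrate that connection, here's a small Python sketch of the "D" in SOLID (dependency inversion); the OrderService and its stores are hypothetical names of mine, not code from any particular application:

```python
from typing import Protocol

class OrderStore(Protocol):
    """Abstraction the service depends on, instead of a concrete database."""
    def save(self, order: dict) -> None: ...

class InMemoryStore:
    """Test double: records orders in memory instead of hitting a database."""
    def __init__(self) -> None:
        self.saved = []

    def save(self, order: dict) -> None:
        self.saved.append(order)

class OrderService:
    def __init__(self, store: OrderStore) -> None:
        self.store = store

    def place(self, order: dict) -> None:
        self.store.save(order)

def test_place_saves_order():
    store = InMemoryStore()
    OrderService(store).place({"id": 1})
    assert store.saved == [{"id": 1}]
```

Because OrderService never names a concrete database, the same loose coupling that makes it easy to test also makes it cheap to extend.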

And, at the end of your loop, you’re ready to redeploy. Effective automated testing is essential, not just because it enables thorough regression testing, but because it’s fundamental to the entire philosophy of the agile methodology.


About the Author

Peter Vogel

Peter Vogel is a system architect and principal in PH&V Information Services. PH&V provides full-stack consulting from UX design through object modeling to database design. Peter also writes courses and teaches for Learning Tree International.
