
In a word, yes. It won’t be easy. You’ll need to learn new terminology. You’ll need to be patient. You’ll likely end up with a bunch of fingerprints on your monitor.


Why Collaborate?

Why should testers and developers collaborate?

It’s a perfectly legitimate question, particularly for those who’ve been in the software industry for a number of years and have watched any number of buzzword fads come and go.

Collaboration among members of a team producing software isn’t just a fad. The IT industry is finally moving away from stovepiped, separated groups toward a much healthier, more productive whole-team environment. More and more case studies and experience reports are backing up the value of this transformation.

This long article focuses on just one aspect of whole-team interaction: collaboration between developers and testers. Both roles bring tremendous skills and experience to a team. Having the two work together often results in a marked improvement in the quality of work and a noticeable decrease in waste and rework.

We’ll look at how each role can help the other look at work in new ways.

Help From the Developer

Good developers bring solid design, engineering, and craftsmanship disciplines to a team. Good testers should view this as an extraordinary chance to expand their own skills by pairing up with developers whenever possible. Testers can adopt many concepts from developers to make their test suites more valuable, maintainable, and powerful.

Backing APIs

Backing APIs, sometimes called test support infrastructure, are critical to a flexible, powerful, maintainable automation suite. Backing APIs let you leverage your system’s internal functionality to handle things like configuration, data creation and cleanup, or test oracles. These sorts of actions can sometimes be performed by UI automation; however, they’re better left to faster, more flexible methods such as web service endpoints, internal APIs, or stored procedures.

Many testers are hesitant to try this approach themselves, since few are comfortable writing database accessors, web service calls, or system call invocations. In these cases, reaching out to developers for help makes perfect sense.

For instance, let’s look at a test that creates a user in a system.

[Figure: the CreateUser test’s steps; step 2 clears out existing test users]

The purpose of this test is to validate that a properly created user is persisted in the system’s database. We want to avoid having to deal with any error handling around duplicate user creation; tests shouldn’t deal with error handling, they should focus on checking the validity of the tested slice. We can avoid this problem a couple of different ways: we could ensure we create a unique user each time we run the test, or we could ensure all test users are deleted before the test runs.
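
A minimal sketch of the first option might look like the snippet below. The TestUsers helper and the testuser_ naming convention are purely hypothetical; any scheme that guarantees uniqueness per run works.

using System;
 
// Hypothetical helper for the first option: generate a unique username for every
// run so the CreateUser test never collides with leftover data.
public static class TestUsers
{
    public static string UniqueUserName()
    {
        // A GUID fragment (or a timestamp) keeps names unique across runs.
        return "testuser_" + Guid.NewGuid().ToString("N").Substring(0, 8);
    }
}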

Testers could write UI automation scripts to handle the cleanup (start a browser, log on to the system, navigate to the system’s administration section, delete any existing test users, etc.), but that approach is slow and brittle. Teams are much better off leveraging code-level APIs within the system itself. Step 2 in the figure above does just that, via this bit of code:

[Figure: the ContactFactory backing API call used to clear out test users]
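
That figure isn’t reproduced here; as a rough sketch of what the helper behind it might look like, here’s a hypothetical implementation. The ContactFactory class and Delete_all_Foo_contacts_from_database method names are the ones discussed below; the direct SQL delete, connection string, and table names are assumptions made purely for illustration.

using System.Data.SqlClient;
 
// A sketch of a backing API helper. The class and method names match the ones
// discussed below; the direct SQL delete behind them is an assumed implementation,
// and the connection string, table, and naming convention are placeholders.
public static class ContactFactory
{
    public static void Delete_all_Foo_contacts_from_database()
    {
        using (var connection = new SqlConnection(
            "Server=.;Database=Foo;Integrated Security=true"))
        {
            connection.Open();
            using (var command = new SqlCommand(
                "DELETE FROM Contacts WHERE UserName LIKE 'testuser_%'", connection))
            {
                command.ExecuteNonQuery();
            }
        }
    }
}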

Here a developer has created a simple method (Delete_all_Foo_contacts_from_database) on a helper class (ContactFactory) to clear out test users. This lets less-technical testers use just enough code to get their current job done without having to understand the deep internals of the system or the technical details of invoking a web service.

Note that exactly how this method does its work is hidden from the user of the backing API. This concept, abstraction, is critical in good software design. Abstraction separates what something does from how it does it. The tester doesn’t know, or care, whether the ContactFactory is calling web services, internal APIs, or a command line utility. This frees the team members maintaining the backing API to switch to the most appropriate approach for a particular operation, and the testers never have to touch their tests!

Configuration / Switches

Automation professionals are often asked “How do we automate CAPTCHA?” or similar difficult third-party features and tools. The correct answer is nearly always “Don’t.”

CAPTCHA is a perfect example of something that should be bypassed or turned off rather than struggled with. The point of an automated registration test shouldn’t be checking a third-party bot filter (CAPTCHA); the point of that test should be ensuring a newly registered user actually shows up in your database. Futzing around trying to detect CAPTCHA graphics and work through them is simply a waste of time and doesn’t bring value to your automation suite.

Testing sent email is another area fraught with frustration. The last thing testers should ever be doing is writing tests that log on to Gmail to validate the formatting and content of system-generated email. Both of these topics are perfect examples of collaborating with developers to control system configuration during automated test passes.

There’s no reason we shouldn’t have separate system configurations for testing and production as long as we carefully control (and test!) the deployment process to ensure we’re not leaving critical functionality shut off in our production environments. This mindset allows testers to work with developers and IT team members to change the system to make it more testable within certain constraints. Developers will have to do additional work allowing features like CAPTCHA or mail providers to be swapped out or shut off; however, careful discussion should enable the team to figure out if it makes sense to undertake that effort.
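
To pick up the email example from earlier: if the test environment’s configuration swaps the real mail provider for a local pickup directory (.NET SMTP’s SpecifiedPickupDirectory delivery mode is one built-in way to do this), validating a sent message becomes a few lines of file inspection instead of a Gmail-driving UI script. The helper below is a hypothetical sketch under that assumption; the class name and path are placeholders.

using System.IO;
using System.Linq;
 
// Assumes the test environment's configuration routes outgoing mail to a local pickup
// directory instead of a real mail server. The class name and path are hypothetical.
public static class SentMailChecker
{
    private const string PickupDirectory = @"c:\test_mail_drop";
 
    public static bool LastMailContains(string expectedText)
    {
        // Find the most recently written .eml file and scan its raw content.
        var newestMail = new DirectoryInfo(PickupDirectory)
            .GetFiles("*.eml")
            .OrderByDescending(f => f.LastWriteTimeUtc)
            .FirstOrDefault();
 
        return newestMail != null &&
               File.ReadAllText(newestMail.FullName).Contains(expectedText);
    }
}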

Exact implementation of the configuration switches will be extremely specific to each system under development; however, here’s how one implementation might look for a .NET application hosted under IIS using a web.config file:

using System.Configuration;
using System.Web.Configuration;
 
public static class Web_config_switches
{
    // Directory containing the system under test's web.config (c:\some_dir\web.config here).
    private const string path_to_config_dir = @"c:\some_dir";
 
    public static void shut_off_captcha()
    {
        change_appSettings_key_value("captchaActive", "false");
    }
 
    public static void turn_on_captcha()
    {
        change_appSettings_key_value("captchaActive", "true");
    }
 
    private static void change_appSettings_key_value(string key, string value)
    {
        // Map the physical web.config location so it can be opened from the test process.
        var fileMap = new WebConfigurationFileMap();
        fileMap.VirtualDirectories.Add("/",
            new VirtualDirectoryMapping(path_to_config_dir, true));
 
        Configuration webConfig =
            WebConfigurationManager.OpenMappedWebConfiguration(fileMap, "/");
 
        webConfig.AppSettings.Settings[key].Value = value;
        webConfig.Save(ConfigurationSaveMode.Modified);
        // IIS notices the changed web.config and reloads the application's settings.
    }
}

This snippet of code assumes the system under test’s web.config file has an appSettings section that includes a captchaActive flag. The system under test would obviously need to support altering CAPTCHA behavior based on that flag; the details of that implementation are far beyond the scope of this article.

Automated tests could simply call Web_config_switches.shut_off_captcha() directly from a setup step or a test in their test lists, as appropriate.
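
For example, an NUnit-style fixture (NUnit is an assumption here; any framework with setup and teardown hooks works the same way) could flip the switch once for a batch of registration tests:

using NUnit.Framework;
 
[TestFixture]
public class Registration_tests
{
    [OneTimeSetUp]
    public void Shut_off_captcha_for_this_fixture()
    {
        // Flip the switch before any registration tests run.
        Web_config_switches.shut_off_captcha();
    }
 
    [OneTimeTearDown]
    public void Restore_captcha()
    {
        Web_config_switches.turn_on_captcha();
    }
 
    // ...registration tests go here...
}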

One significant caveat when working with system switches or configuration changes: you absolutely must have a set of automated tests that verify the system is correctly configured when deploying to non-test environments. These automated checks must be part of your regular deployment process; otherwise you risk rolling out your system to production with critical features deactivated. You do not want to be on the receiving end of that call at 2:42 AM!
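
Such a check can be as small as the hedged sketch below, which assumes the deployed production web.config is reachable from the machine running the verification pass; the UNC path and fixture name are placeholders.

using System.Linq;
using System.Xml.Linq;
using NUnit.Framework;
 
[TestFixture]
public class Production_configuration_checks
{
    [Test]
    public void Captcha_must_be_active_in_production()
    {
        // Placeholder path to the deployed production config.
        var config = XDocument.Load(@"\\prod-server\site\web.config");
 
        var captchaSetting = config.Descendants("add")
            .First(e => (string)e.Attribute("key") == "captchaActive");
 
        Assert.AreEqual("true", (string)captchaSetting.Attribute("value"),
            "CAPTCHA must never be disabled in production!");
    }
}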

Craftsmanship and Code Smells

Software craftsmanship and software engineering disciplines have a direct correlation to good testing. The software craftsmanship movement brings a sense of pride in one’s work and frames it in the mindset of carefully learning good practices over an entire career. Software engineering brings concrete metrics and practices to the table, a great complement to the craftsmanship movement.

Good testers can take several principles to heart from both of these domains. Good developers know to look for “code smells”: clear indications that a section of code is too complex, a potential maintenance nightmare, or flat-out wrong. (The term code smell was apparently coined by Kent Beck while helping Martin Fowler with Fowler’s seminal Refactoring: Improving the Design of Existing Code.)

Code Smell: Complexity

Code smells show up in several areas. First off is overly complex code. Nested IF statements have long been recognized as a direct contributor to overly complex, hard-to-understand, bad code. (See Wikipedia’s article on cyclomatic complexity as a starting point.) The same concept applies to tests as well, as you can see from the following figure.

[Figure: an automated test built around deeply nested IF statements]
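
That image isn’t reproduced here, but translated into code the idea looks roughly like the sketch below. The product flows mirror the ones discussed in the next section, and the Site stub is a hypothetical stand-in for whatever UI or API driver the real tests would use; it exists purely so the sketch compiles.

using NUnit.Framework;
 
// Illustrative only: the Site stub stands in for whatever driver the real tests use.
public static class Site
{
    public static bool IsLoggedOn { get; private set; }
    public static void LogOn(string user, string password) { IsLoggedOn = true; }
    public static bool CategoryHasItems(string category) { return true; }
}
 
[TestFixture]
public class One_big_smelly_test
{
    [Test]
    public void Check_tires_videos_and_computer_supplies()
    {
        if (!Site.IsLoggedOn)                        // concern #1: logging on
        {
            Site.LogOn("testuser", "password");
        }
 
        if (Site.CategoryHasItems("tires"))          // concern #2: tire flow
        {
            // ...a dozen steps checking the tire flow...
        }
        else
        {
            if (Site.CategoryHasItems("videos"))     // concern #3: video flow
            {
                // ...steps checking the video flow...
            }
            else                                     // concern #4: computer supplies
            {
                // ...steps checking the computer supplies flow...
            }
        }
    }
}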

Code Smell: Mixed Concerns

IF statements are also bad practice because they often mix several concerns. The test above checks at least three different test flows (tires, videos, computer supplies), plus logging on if needed. Mixed concerns indicate the test case isn’t well focused; it’s working on too many things at once. A failure in one section will likely mask potential failures in other areas.

Finally, mixing numerous concerns in one test case makes the case harder to maintain. How do you remember where to find the section of your test suite that focuses on checking videos when each test case (file) has five, ten, or more scenarios mixed into it?

In the software engineering/craftsmanship domains, mixed concerns are often referred to as violations of the Single Responsibility Principle. SRP means that one class or method should focus on doing one thing and leave other concerns to different classes or methods.
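
Respecting SRP, the sprawling test sketched earlier might instead become a handful of focused, branch-free tests, one per flow. This sketch again leans on the hypothetical Site stub from the previous example:

using NUnit.Framework;
 
// One focused, branch-free test per flow. (Reuses the hypothetical Site stub above.)
[TestFixture]
public class Focused_catalog_tests
{
    [Test]
    public void Tires_category_lists_items()
    {
        Assert.IsTrue(Site.CategoryHasItems("tires"));
    }
 
    [Test]
    public void Videos_category_lists_items()
    {
        Assert.IsTrue(Site.CategoryHasItems("videos"));
    }
 
    [Test]
    public void Computer_supplies_category_lists_items()
    {
        Assert.IsTrue(Site.CategoryHasItems("computer supplies"));
    }
}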

Code Smell: Duplication

This same test also provides a great example of violating the Don’t Repeat Yourself (DRY) principle. DRY helps you avoid the maintenance nightmares incurred when functionality is duplicated numerous times throughout a codebase. If one piece of that functionality changes, you’ll find yourself having to update it everywhere it occurs, and the odds of missing an instance escalate the more places it’s duplicated.

The logon workflow in steps three to nine is a common feature and will likely be duplicated in every test requiring a logon. The impact of this can’t be overstated: imagine having to update hundreds or thousands of your tests when (not if!) your logon process changes.

The logon-related steps should be immediately moved to a separate test which can be used as a component in other tests. This way no other tests need to be updated if the logon workflow ever changes.
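
Here’s a hedged sketch of that extraction, once more using the hypothetical Site stub: the logon steps live in a single helper, and every test that needs them simply calls it.

using NUnit.Framework;
 
// The logon steps pulled into one reusable component. If the logon workflow ever
// changes, only this helper needs updating. (Site is the hypothetical stub above.)
public static class Logon_steps
{
    public static void Ensure_logged_on()
    {
        if (!Site.IsLoggedOn)
        {
            Site.LogOn("testuser", "password");
        }
    }
}
 
[TestFixture]
public class Video_catalog_tests
{
    [SetUp]
    public void Log_on_before_each_test()
    {
        Logon_steps.Ensure_logged_on();
    }
 
    [Test]
    public void Videos_category_lists_items()
    {
        Assert.IsTrue(Site.CategoryHasItems("videos"));
    }
}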

Avoiding Smelly Code and Tests

Developers can help testers avoid these situations by sharing their experience and patterns they’ve picked up through their work. Testers can learn to head off smelly code via good design principles and practices. Developers can also teach testers about refactoring, the process of changing the structure or implementation of software without changing its behavior.

Collaboration: It Pays Off

Collaboration with developers may not always be easy, but in the long run you’ll be happy you made the effort. Your tests will be more maintainable and easier to write, and you’ll be delivering better software to your customers.

About the author

Jim Holmes is the Director of Engineering for Telerik’s Test Studio, an awesome set of tools to help teams deliver better software. He is co-author of "Windows Developer Power Tools" and Chief Cat Herder of the CodeMash Conference. Find him as @aJimHolmes on Twitter.

Interested in chatting with Jim at a conference? Check out his speaking schedule!

