Real Life Guidelines that Deliver Results

You’re reading the second to last post in a series that’s intended to get you and your teams started on the path to success with your UI test automation projects:

1. Introduction
2. Before You Start
3. People
4. Resources
5. Look before You Jump
6. Automation in the Real World (you are here)
7. Improving Your Success

Important note: After its completion, this series will be gathered up, updated/polished and published as an eBook. We’ll also have a follow-on webinar to continue the discussion. Interested?

Chapter Six: Automation in the Real World

So here you are: ready and raring to get real work done. Hopefully, at this point, you're feeling excited about what you've accomplished so far. Your team has set itself up for success through the right amount of planning, learning and prototyping.

Now it's time to execute on what you've laid out. Remember: your best chance for success is focusing on early conversations to eliminate rework or waste, and being passionate about true collaboration. Break down the walls wherever possible to make the mechanics of automation all that much easier.

In my last post, I tried hard to really hit the importance of collaboration and early communication as fundamental pieces of a successful project. Now it's time to focus on the actual mechanics of the work.

The next few sections dive into areas I've found to be critical for pain-free, successful automation.

Developer Tester Pairing

Although discussion around pair programming has always focused on developers, pairing is a wonderful tool that can be used by all members of a team.1

Developers and testers should work together on UI automation for a number of reasons, including:

  • Developers know the ins and outs of the system. They can head off wasted efforts by the testers.
  • Developers know what asynchronous or disconnected operations are taking place behind the UI.
  • Testers can help identify valuable edge cases, environmental concerns or data combinations a developer might miss.
  • Developers can handle the more complex coding issues while testers focus on high-value test problems.

Far too many people think pairing means working side-by-side for eight hours a day. NO! Pairing doesn't have to be a high-stress, full-time effort. Spend as little time together as needed to solve a particular problem, or as much time as the members feel comfortable with.

Also, pairing shouldn't be dismissed because teams aren't colocated. Technologies such as Skype, GoToMeeting and others have allowed me to pair with team members in different cities, states and continents. It's a matter of the team members committing to working hard toward great communication.

Pairing at its root is knowledge sharing. Investing to encourage and expect the team to pair up reaps rewards over the long run.

Maintainable Tests

Perhaps the most crucial concept to get straight at the start is the need to treat your test code like production code—because test code IS production code!

Delivery teams absolutely have to invest time in creating maintainable system code through well-established conventions like low complexity, readability, carefully thought-out dependencies, etc.

UI test automation code needs to be treated exactly the same way. You need to take the same approaches to simplicity, clarity and so on, regardless of whether you're writing browser tests in WebDriver or creating them in Telerik® Test Studio®.

Teams that don't approach their test suites (or system code) this way are doomed to lose time to brittle suites, and to pay a heavy price fixing those brittle tests because of their complexity. It's easier to be successful when you start out right by giving your test suites some love and maintainability.

Three principles have served me well over the years in keeping my UI tests as maintainable as possible.

  1. DRY: Don't repeat yourself. Copy/Paste development means you've got duplication of effort scattered all over the place. One thing changes, and you're forced to fix that change in multiple places. Pay attention to these areas in particular:
    • Locator definition: Ensure your find logic, or element locators, are defined in one place and one place only. Either use the Page Object Pattern2 or a tool such as Telerik Test Studio that centralizes those definitions in some form of repository.
    • Actions: Don't repeat actions or workflows. Move those into a reusable test or method. Think about a logon example: you want it defined one place and called from many.
  2. SRP: Single responsibility principle: tests need to be granular and concise. They should test one thing. Scripts shouldn't conflate multiple test cases—something that's often abused in data-driven scenarios. For example, don't mix checking that products can successfully be added to a shopping cart with checking that those products also get correct recommendations. Those are two separate tests.
  3. Abstraction: Abstraction3 is a programming idiom that enables you to push common or complex actions to another unit of code. A logon operation is the classic example for this. Many tests will need to log on to your system; however, you should write this action only once in a separate test or method, then call that test/method from all other tests to perform the action.

Abstraction is also helpful because it hides the implementation details from the calling test or method. With the logon example, each test doesn't know how the logon occurs, only that it's successful or not. If the system changes the logon workflow, say from username/password to username/password/PIN, no other test would need to be updated—only the logon method.
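To make this concrete, here's a minimal sketch of DRY locators plus an abstracted logon action in JavaScript. Every name here (LoginPage, logOn, the locators, the fake driver) is a hypothetical stand-in for illustration, not any particular framework's API:

```javascript
// Page Object sketch: locator definitions live in ONE place, and the
// logon workflow is written once and called from every test.
var LoginPage = {
  // Centralized locators -- the DRY principle in action.
  locators: {
    username: "#username",
    password: "#password",
    submit: "#login_btn"
  },
  // Reusable logon action; tests never repeat these steps themselves.
  logOn: function (driver, user, pass) {
    driver.type(this.locators.username, user);
    driver.type(this.locators.password, pass);
    driver.click(this.locators.submit);
  }
};

// Minimal fake driver so the sketch runs standalone; a real test
// would pass in a WebDriver (or Test Studio) instance instead.
function makeFakeDriver() {
  var actions = [];
  return {
    type: function (locator, text) { actions.push(["type", locator, text]); },
    click: function (locator) { actions.push(["click", locator]); },
    actions: actions
  };
}

var driver = makeFakeDriver();
LoginPage.logOn(driver, "jane", "s3cret");
console.log(driver.actions.length); // 3 recorded UI actions
```

If the logon workflow gains a PIN field, only LoginPage changes; every calling test stays untouched.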

You'll save your team tremendous amounts of time, frustration and grief if you focus on maintainability at the start of your UI test automation projects.

Backing APIs

Backing APIs, sometimes referred to as test infrastructure, are abstraction tools. They're perfect examples of how testers and developers can collaborate to leverage each other's best skills.

I've found not many testers have deep programming backgrounds, which is absolutely fine. Few testers understand how to create authentication headers to successfully call web service endpoints. Nor do many testers understand how to create secure, performant, reusable connection pools to a database.

That's all fine, because testers' skills lie in testing. Developers, on the other hand, generally do those tasks on a frequent basis.

Working together, teams can build an abstraction layer of a backing API that enables testers to very easily call methods for setting up prerequisite data or turning off features to make the system more testable.

The great thing about abstraction is you don’t have to know (or care) how the API does its work. Are new users created via a web service, or direct insertion to the database? Don't know, don't care. Backing APIs let you abstract all that away so you don't have to worry about it.

Moreover, if the developers create better methods for accessing the system, say moving from a stored procedure call to a web service, the tests won't be affected at all.

That's all fine and dandy, but what practical things should you look to do with a backing API? Here are a few things I use in every project I've worked on:

  • Data and Prerequisite Setup: Don't use the UI to create data you need for tests. Hand that off to a backing API. (Starting out using the UI, then transitioning to a backing API is just fine.)
  • System Configuration: How do you test CAPTCHA or other complex third-party controls? Don't. Work with the developers to create system-wide configuration switches that will let you shut these things off, or swap them out for simpler components.
  • Test Oracles/Heuristics: It isn't enough to check the UI. You also need to check the layer where things are really happening: the database, file system and so on. Backing APIs are a great way to abstract out calls to the database for verifying records were created, updated or deleted.

Backing APIs don't have to be complex, and you should only build them out as you need them. Be very lean as you create them.
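As a rough sketch of what such a layer can look like, here's a tiny, hypothetical backing API in JavaScript. The in-memory "db" stands in for whatever really backs the system (web service, stored procedure, direct SQL), and every name is invented for illustration:

```javascript
// Stand-in data store; tests never reach past the backingApi methods.
var db = { users: [], features: { captcha: true } };

var backingApi = {
  // Data/prerequisite setup: create a user without driving the UI.
  // Today it's a direct insert; tomorrow it could call a web service.
  createUser: function (name) {
    var user = { id: db.users.length + 1, name: name };
    db.users.push(user);
    return user;
  },
  // System configuration: switch off hard-to-test pieces like CAPTCHA.
  setFeature: function (flag, enabled) {
    db.features[flag] = enabled;
  },
  // Test oracle: verify the record really landed behind the UI.
  userExists: function (name) {
    return db.users.some(function (u) { return u.name === name; });
  }
};

backingApi.createUser("pat");
backingApi.setFeature("captcha", false);
console.log(backingApi.userExists("pat")); // true
console.log(db.features.captcha);          // false
```

Swapping createUser's implementation later changes nothing for the tests that call it.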

Testable UI

Far too many systems are built without testability in mind. Architecture and coding design decisions impact testing at both the system and UI layers. System-level testability requires specific architecture and design decisions. Testability at the UI layer can be a much simpler matter of adding in good element IDs where possible.

Some web technologies such as Ruby on Rails add ID attributes to elements by convention. Nearly every web stack from Rails to ASP.NET WebForms makes adding IDs to regular elements a snap.

Additionally, developers and testers working closely together can easily solve problems such as frameworks that generate dynamic IDs that hinder testability, or that don't render IDs at all.

For example, Telerik® Kendo UI® has a Grid control that doesn't include IDs by default:

Kendo UI Grid

It's easy to add a bit of JavaScript to the Grid's definition to create useful IDs that include data unique to each record:

dataBound: function(dataBoundEvent) {
    var gridWidget = dataBoundEvent.sender;
    var dataSource = gridWidget.dataSource;
    $.each(gridWidget.items(), function(index, item) {
        // Use the next three lines for an HTML ID attribute
        // built from the database ID + lname
        var uid = $(item).data("uid");
        var dataItem = dataSource.getByUid(uid);
        $(item).attr("id", dataItem.Id + "-" + dataItem.LName);
        // Or use this line to set the HTML ID attribute to the row #
        //$(item).attr("id", index);
    });
    // Give the Grid's "Add new record" button a stable ID as well
    $(".k-grid-add").attr("id", "create_btn");
}


Now the Grid's records each have a unique ID composed of the identifier from the database, plus the last name of the person on the row.

Master the Essentials Chapter 6 Grid

This approach is obviously specific to this example; however, that's the beauty of this approach. Use your tools at hand to solve the specifics of the situation you're encountering. Maybe your IDs need part numbers, zip codes or something else. That's fine! Construct them as needed to get testable pieces in place.
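Seen on its own, the ID-building rule from the Grid example is just one small, pure function. Here it is as a standalone sketch (the field names Id and LName match the grid's data items; the function itself is illustrative, not part of any framework):

```javascript
// Build a unique, human-readable row ID from a record's data.
function buildRowId(dataItem) {
  return dataItem.Id + "-" + dataItem.LName;
}

// A test can then target a record directly, e.g. via By.id("17-Holmes")
// or document.getElementById("17-Holmes").
var rowId = buildRowId({ Id: 17, LName: "Holmes" });
console.log(rowId); // "17-Holmes"
```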

A final piece about testable UI: You're not limited to just ID or other attributes. There are all sorts of things you can add to the UI to help testing. Think of flags you can add to handle complex asynchronous or queuing actions.

For example, the image below shows a new element being added to the page after a Create action completes. This gives you something additional to "latch" onto when an action is complete.

Kendo UI Grid testing

This particular example is again in Kendo UI as a new method in the control's DataSource definition:

requestEnd: function (e) {
    // Clear out any flag elements left from previous requests
    var node = document.getElementById('flags');
    while (node.firstChild) {
        node.removeChild(node.firstChild);
    }
    // Append a flag element noting the request type (e.g. "create")
    var type = e.type;
    $('#flags').append('<div responseType=\'' + type + '\'/>');
}

While this example is specific to the Kendo UI suite, the underlying concept is again the same regardless of the technology stack you're using.
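On the test side, a flag like this gives you something deterministic to poll for. The sketch below shows the idea with a stand-in predicate in place of a real DOM query such as document.querySelector("#flags div[responseType='create']"); most frameworks offer built-in explicit waits that do the same job:

```javascript
// Poll until checkFlag() reports the flag element exists, up to
// maxAttempts tries. checkFlag stands in for a real DOM query.
function waitForFlag(checkFlag, maxAttempts) {
  for (var attempt = 0; attempt < maxAttempts; attempt++) {
    if (checkFlag()) {
      return true; // flag appeared: the async action has completed
    }
    // a real implementation would sleep briefly between polls
  }
  return false; // timed out: fail the test rather than hang forever
}

// Simulate a flag that appears on the third poll
var polls = 0;
var flagAppears = function () { polls++; return polls >= 3; };
var found = waitForFlag(flagAppears, 10);
console.log(found); // true

// And a flag that never appears
var timedOut = waitForFlag(function () { return false; }, 5);
console.log(timedOut); // false
```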

Surviving Legacy UIs

All this discussion about modifying the UI to be more testable is wonderful, but what about when you're stuck with a UI that can't be changed? Maybe it's a legacy system that's got limited maintenance. Perhaps it's something built on top of a third-party system or platform, say SharePoint, Sitecore, or Orchard.

In those cases, you'll need to work hard to learn flexible approaches for building locators that work with the system/stack/platform you're using. In many cases, you'll find yourself having to fall back to locators based on convoluted XPath or jQuery selectors. Evaluate those locators carefully to ensure you're using the best locator possible.

Please note I specifically said "...the best locator possible." In many situations you won't be able to get a perfect selector. In those cases, you'll need to become adept at using combinations of things such as IDs, CSS classes, name and other attributes, plus some XPath to scope down to what you need.

InnerText remains one of my favorite locator strategies, because it enables you to find things such as table rows using data that should be in that row. It's also very handy when you're simply not able to find other usable locator strategies.
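As a toy illustration of the InnerText idea (the data and function here are invented for the sketch; in a real suite the equivalent would be an XPath such as //tr[contains(., 'Gears')] or your tool's "contains text" locator):

```javascript
// Find a "row" by the data it should contain, rather than by position
// or a brittle generated ID. rows stands in for a table's <tr> elements.
var rows = [
  { innerText: "Ann Smith   Widgets   $40" },
  { innerText: "Bob Jones   Gears     $75" }
];

function findRowByText(rows, text) {
  return rows.filter(function (r) {
    return r.innerText.indexOf(text) !== -1;
  })[0]; // first matching row, or undefined if none match
}

var match = findRowByText(rows, "Gears");
console.log(match.innerText); // the "Bob Jones" row
```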

Remember: Take the Long View!

Success in software, regardless of whether you're writing multithreaded database transactions or user interface functional automated tests, is all about the long view. Of course you have to write tests that are solid, high-value and correct, but you absolutely have to keep your eye on how useful those tests will be over time, and how costly they'll be to maintain.

Work hard to keep your tests simple, concise and flexible. Make use of the suggestions we've laid out here.

Did we miss something you've found useful in your own "real world" life of UI test automation? Did you find these topics helpful? Let us know in the comments!

1Read more about pair programming on its page at the Extreme Programming site.

2Martin Fowler has a nice write up on the Page Object Pattern.

3See Wikipedia's definition of abstraction.

About the Author

Jim Holmes

Jim is an Executive Consultant at Pillar Technology. He is also the owner of Guidepost Systems. He has been in various corners of the IT world since joining the US Air Force in 1982. He’s spent time in LAN/WAN and server management roles in addition to many years helping teams and customers deliver great systems. Jim has worked with organizations ranging from startups to Fortune 100 companies to improve their delivery processes and ship better value to their customers. When not at work you might find Jim in the kitchen with a glass of wine, playing Xbox, hiking with his family, or banished to the garage while trying to practice his guitar.

