This post is a response to the great questions Jim Evans and I received during our webinar The One Critical Factor for UI Test Automation Success. My apologies for taking so long to get this posted; there were a LOT of questions to answer, so it took longer than expected! I’ve organized the questions and responses into several different sections.
We had several great questions around starting and maintaining effectiveness in test automation. Preparing to take on automation is a critical part of success.
Start small, focus on high-value tests (you’ll hear this from me many times elsewhere in this post).
Build time in the schedule for learning and skills development. Make sure you have time in the schedule for creating, running and maintaining your automation scripts.
Commit to directed, practical learning. (Read Andy Hunt’s Pragmatic Thinking and Learning for ideas.)
One great tool I’ve used repeatedly is regularly scheduled brown-bag lunches, during which team members can share failures and successes.
Automation should focus on the high-value, risky business cases you want to be able to regression test easily. Manual testing should NOT be thousands of detailed test scripts; rather, it should be thoughtfully selected explorations through the system during which a tester uses his or her skills to gather information on how the system is operating.
As I’ve said elsewhere, start small. Focus on the stakeholder’s highest priorities. Next, focus on areas where your team has had lots of pain, regressions, tedious manual tests and so on.
You may find parts of the UI that are hard to test (no good locators, bad control hierarchies and other issues). Deal with those as best as you can, but don’t let them become time-sinks.
Automation in a fast-moving environment, regardless of whether it’s “Agile” or not, requires the team to be focused on a few critical aspects:
Test code should be treated just like production code, because it is production code! You need to take the same love and care with designing and crafting your tests that your developers (hopefully!) take with writing their system code.
Neither. Unless you’re suffering from an extraordinary number of regressions, UI tests should focus on business workflows. Do some exploratory validation of control correctness as development proceeds, but focus your automated tests on business use cases.
Look at it this way: instead of trying to write a test that validates every field on a shopping cart checkout, write a test that flows through the checkout and validates that an order was properly created and the customer was properly billed. Note the emphasis on the business case: making money. Validation of the field inputs (disallowing bad characters, preventing SQL injection, blocking overflows and so on) should likely be a one-off testing session.
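To make that concrete, here’s a minimal sketch of what such a workflow-level test might look like in Python with Selenium WebDriver. The store URL, element IDs and confirmation text are all assumptions for illustration; substitute your own application’s values.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

# Hypothetical sketch of a business-workflow test: flow through checkout
# and assert the business outcome, not every field on the form.
def test_checkout_creates_an_order():
    driver = webdriver.Chrome()
    try:
        driver.get("https://example.com/store")          # illustrative URL
        driver.find_element(By.ID, "add-widget-to-cart").click()
        driver.find_element(By.ID, "checkout").click()
        driver.find_element(By.ID, "card-number").send_keys("4111111111111111")
        driver.find_element(By.ID, "place-order").click()

        confirmation = driver.find_element(By.ID, "order-confirmation").text
        assert "Order number" in confirmation
    finally:
        driver.quit()
```

In a real suite you’d also confirm the order record and the charge against the back end (via an API or database check) rather than trusting the UI alone.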
We had a number of questions around specific technologies ranging from UI platforms to BDD tooling. I was honestly surprised at how many folks wanted to know about BDD and related integrations. I don’t normally see so many questions on this very valuable approach to functional testing.
The fundamentals of UI automation are much the same, regardless of the approach you’re using or the mechanics of your code. (This is geared towards code-based automation tools.)
SpecFlow, a Behavior Driven Development tool by nature, should help you get communication going early in the process. Early communication is critical for good locators and for understanding what asynchronous behavior the developers may be adding to the page.
Focus on clear, concise tests. Make sure to use the page object pattern: your tests should only have test logic in them, nothing about page layout, async behavior or navigation. Leave all that to the page objects, and keep your business grammar in the spec.
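SpecFlow itself is a .NET tool, but the page object idea carries across bindings. Here’s a minimal sketch in Python with Selenium WebDriver; the URL and element IDs are illustrative assumptions.

```python
from selenium.webdriver.common.by import By

# Minimal page object sketch: the pages own locators and navigation,
# so the test (or step definition) holds only test logic.
class LoginPage:
    def __init__(self, driver):
        self.driver = driver

    def load(self):
        self.driver.get("https://example.com/login")     # illustrative URL
        return self

    def log_in_as(self, username, password):
        self.driver.find_element(By.ID, "username").send_keys(username)
        self.driver.find_element(By.ID, "password").send_keys(password)
        self.driver.find_element(By.ID, "log-in").click()
        return DashboardPage(self.driver)


class DashboardPage:
    def __init__(self, driver):
        self.driver = driver

    @property
    def welcome_text(self):
        return self.driver.find_element(By.ID, "welcome-message").text
```

A step definition then reads like the business grammar, e.g. assert "Welcome" appears in `LoginPage(driver).load().log_in_as("jane", "secret").welcome_text`, with no locators or waits in the step itself.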
Asynchronous actions are one of the hardest things to work with. You’ll need to find a concrete, stable set of conditions to “latch” or “synchronize” on. I don’t know which toolset you’re working with, but look to whatever you have for conditional waits. In the Telerik Test Studio™ solution, it’s the Wait For step; in WebDriver, it’s the WebDriverWait class. Other tools use similar approaches.
You’ll use that mechanism and create a condition that waits until the controls with which you’re going to interact are present and active on the page.
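In WebDriver’s Python bindings, for example, that looks roughly like this; the page URL and the "order-search" locator are assumptions for illustration.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
driver.get("https://example.com/orders")                 # illustrative URL

# Block (up to 10 seconds) until the control is present and clickable,
# then interact with it.
wait = WebDriverWait(driver, 10)
search_box = wait.until(EC.element_to_be_clickable((By.ID, "order-search")))
search_box.send_keys("12345")
```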
Extjs creates dynamic IDs that change every time the page loads. This creates an incredible challenge when trying to build stable tests.
There are a couple great resources that cover this in far more detail than I could here:
Andrew Dzynia’s slide deck on Extjs and WebDriver
Guogang Hu’s short post on Extjs and WebDriver
The following attributes are where you should start:
Like many technology stacks, LightSwitch doesn’t add ID attributes to elements automatically. Your developers should be able to add attributes, but don’t ask them to create IDs for every element; instead, ask for help only with the elements critical for high-value tests.
You may find yourself having to use other locator strategies such as CSS selectors or XPath. Do the best you can in those situations.
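As a rough illustration in WebDriver’s Python bindings, falling back to CSS and XPath might look like the sketch below. The markup, attribute names and values are assumptions; anchor on whatever is genuinely stable in your pages (classes, titles, label text).

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://example.com/customers")              # illustrative URL

# Fall back to structure and attributes that don't change between loads.
save_button = driver.find_element(
    By.CSS_SELECTOR, "div.toolbar button[title='Save']")

# XPath can key off visible text when nothing else is stable.
customer_cell = driver.find_element(
    By.XPATH, "//table[@summary='Customers']//td[text()='Jane Smith']")
```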
Each automation toolset has a slightly different way of accomplishing this, but the general concept is the same: create a dynamic wait that pauses until the control isn’t on the DOM anymore.
In the Test Studio solution, you would implement this by adding a “Wait for Element Exists” step, then toggling the checkbox for “NotExists.” Other tools vary in their exact implementation, but the concept’s the same.
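In WebDriver’s Python bindings, a rough equivalent is an expected condition that passes once the element is either no longer in the DOM or no longer visible; the spinner ID here is an assumption.

```python
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

# driver is an existing WebDriver session; the ID is illustrative.
# Passes once the element is gone from the DOM or hidden.
WebDriverWait(driver, 10).until(
    EC.invisibility_of_element_located((By.ID, "loading-spinner")))
```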
I have one suggestion: evaluate whether it makes sense to automate this step. I haven’t used this in years, because I focus on the next task I need to do–like waiting on another field to appear so I can fill it out. Waiting on a control to vanish from the DOM generally isn’t the direct focus of a business-related workflow.
As with everything, use your brain, and make sure your test focuses on the high-value business case.
Note that “WinForms” doesn’t mean every desktop app that runs on the Windows platform. There are lots of technologies that run desktop apps on the Windows platform: Powerbuilder, MFC, Oracle, SAP, Java Swing and so on. This response is focused specifically on WinForms technology.
There are still plenty of WinForms applications in active maintenance and development. Teams needing to create automation around those applications have a narrow set of choices.
If you’re open to open source software, there are various alternatives. Mohawk, a Ruby gem, enables teams to write Ruby-based spec tests against WinForms applications. Sikuli, another open source product, enables image-based automation.
There are a number of commercial tools that offer WinForms support, starting at the top end with HP Unified Functional Testing and moving through a number of other toolsets.
As often as you need to. While that may seem flippant, frankly, it’s true.
Keep in mind that UI automation should be focused on high-value, critical business features. If you’re doing lots of work in those areas, you may want your test suite (or parts of it) running several times a day.
If you’re not moving so quickly in the system codebase, perhaps one long-running “smoke check” per evening may be good enough.
One thing you should keep in mind: UI tests won’t ever be suitable for normal Continuous Integration/Delivery (CI or CD) builds. UI tests generally take much longer than unit tests, so you normally can’t have them running every 10-15 minutes as check-ins stack up. Instead, UI test suites are generally scheduled to run at regular times throughout the day.
Several thought leaders in the testing space have referred to a similar concept: a well-tested system should have a mix of the various automated test types. Martin Fowler has a great article describing this on his blog.
The base of the pyramid is lots and lots of fast unit tests. They’re quick to write and speedy to run. The middle section of the pyramid is a smaller number of system tests, also called integration tests. These are slower to write and much slower to run, since they cross service layers to reach the database, web services and so on. At the very top of the pyramid are UI tests. They are the slowest to write, the most expensive to maintain and very slow to run; therefore you should have only a few.
Dependencies or shared state between tests are a recipe for pain. Believe me, I know this from lots of hard knocks.
Imagine a typical Create, Retrieve, Update, Delete (CRUD) set of actions around a test user in your system. Why not just use one user for that entire flow of four actions?
Great! Four tests all from one piece of data.
Now what happens if your create flow breaks? The Retrieve, Update and Delete tests will all fail, too.
Did they fail because there were bugs in those components? We don’t know, because the Create test failed and the three other tests were dependent on it.
What happens if the Update function was broken? You likely wouldn’t find it until the bug you filed around Create was fixed.
Oops.
It’s harder, but you should create unique data for every test. Don’t share state or prerequisites between tests, if at all possible.
A better flow would look like this:
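Each test stands on its own and creates its own unique data. Here’s a rough sketch in Python; the UsersPage page object, the create_user_via_api helper and the driver fixture are hypothetical stand-ins for your own page objects and data-seeding code.

```python
import uuid

# Sketch only: UsersPage and create_user_via_api are hypothetical stand-ins.
def unique_username():
    return f"test-user-{uuid.uuid4().hex[:8]}"

def test_create_user(driver):
    name = unique_username()
    UsersPage(driver).create(name)
    assert UsersPage(driver).exists(name)

def test_update_user(driver):
    name = unique_username()
    create_user_via_api(name)        # seed data directly, not through the UI
    UsersPage(driver).rename(name, name + "-renamed")
    assert UsersPage(driver).exists(name + "-renamed")

def test_delete_user(driver):
    name = unique_username()
    create_user_via_api(name)
    UsersPage(driver).delete(name)
    assert not UsersPage(driver).exists(name)
```

If Create breaks, the Update and Delete tests still run against their own seeded users, so each failure points at its own feature.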
Many people get confused by this.
Unit tests are those that have no external dependencies. They don’t cross service boundaries, and (if you’re a purist) they shouldn’t even cross behavioral flows between classes. Unit tests may involve mocking or faking behavior as part of their design. Because of this tight focus, unit tests are quick to write, fairly easy to maintain and blisteringly fast to execute. Think 30,000 tests in 10-30 seconds. Seriously.
Integration tests step up a level and cross service boundaries. Think of a test hitting a web service endpoint then checking the database to validate a CRUD action was correctly taken. Integration tests are slower to write, harder to maintain and much slower to run. Think 30-300 tests in 10-30 seconds.
Functional or User Interface tests start at the application’s UI and focus on testing a functional slice: creating a blog post, checking out your shopping cart and so on. Those tests are the hardest to write, most costly to maintain and slowest to run. Think one test every 30 seconds.
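To make the “no external dependencies” distinction concrete, here’s an illustrative unit test in Python. OrderService is a hypothetical class, and its repository is faked so nothing crosses a service boundary.

```python
from unittest import mock

# OrderService and its repository are hypothetical; the repository is faked
# so the test never touches a real database or web service.
def test_cancelling_an_order_marks_it_cancelled():
    repository = mock.Mock()
    repository.get.return_value = {"id": 42, "status": "Created"}

    service = OrderService(repository)
    service.cancel(42)

    repository.save.assert_called_once_with({"id": 42, "status": "Cancelled"})
```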
Single Page Applications (SPAs) are the epitome of hard-to-test apps. The entire application is hosted on one page, with everything about the application being pulled in dynamically via one of any number of technology stacks.
Regardless of the tech stack, two things will help you succeed testing SPAs:
1. Work with your UI team to get solid, stable ID values on the elements you’ll need to interact with.
2. Get very, very familiar with how asynchronous operations work in your app’s tech stack. You will need to become very adept at using appropriate conditional waits in your test scripts to handle the app changing views. Work very closely with your developers to understand the conditions you’ll need to latch on to, and find out where they may be able to help you with changes to the UI.
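As an illustration in WebDriver’s Python bindings (the element IDs are assumptions): trigger the view change, then latch on a condition that proves the new view has rendered before you interact with it.

```python
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

# driver is an existing WebDriver session on the SPA; the IDs are illustrative.
driver.find_element(By.ID, "nav-reports").click()   # triggers a client-side view change
WebDriverWait(driver, 10).until(
    EC.visibility_of_element_located((By.ID, "reports-grid")))
driver.find_element(By.ID, "run-report").click()
```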
First off, there’s no such thing as a “best practice.” Well, other than using your brain. Sorry, old rant of mine.
One size doesn’t fit all for every team. Some teams dive straight to code for everything because they’ve got testers paired up with developers, or their testers have programming backgrounds. Those teams find code-based tests faster to write and easier to maintain.
Other teams are highly skilled testers with no development background and no easy way to quickly come up to speed on code. Those teams find recorded tests faster to create and easier to maintain.
Badly written coded tests are horrible to maintain. Well-written coded tests are a snap to maintain.
Badly built recorded tests are horrible to maintain. Well-built recorded tests are a snap to maintain–but even well-built recorded test suites will need some amount of code.
Don’t focus on how other teams succeed; focus on what your team needs to succeed.
How you execute your tests is nearly as important as how you write them. Here are some great Q&As around that topic.
Some form of client virtualization is critical for running large test suites. You want your execution to happen as fast as possible, and you may need some matrix of tested browsers and operating systems.
This makes virtualization critical for success. Virtualization lets you split out execution across multiple systems, enabling you to run your suites in parallel and across those browser/OS matrices.
Every modern build system (Hudson/Jenkins, TFS, TeamCity and so on) lets you schedule test jobs and “farm them out” to “agents” running on other systems. Those other systems are normally virtual machines but could also be unused hardware.
See my earlier answer about using virtualization.
You’ll want to split your test lists/groups across a number of different execution machines. Those execution machines could be either virtual machines or unused physical systems.
The exact mechanics of distributing/splitting a group of tests differ for each environment. The Test Studio solution lets you easily distribute test lists across machines via a checkbox on the Schedule Test Lists dialog.
Jim is an Executive Consultant at Pillar Technology. He is also the owner of Guidepost Systems. He has been in various corners of the IT world since joining the US Air Force in 1982. He’s spent time in LAN/WAN and server management roles in addition to many years helping teams and customers deliver great systems. Jim has worked with organizations ranging from startups to Fortune 100 companies to improve their delivery processes and ship better value to their customers. When not at work you might find Jim in the kitchen with a glass of wine, playing Xbox, hiking with his family, or banished to the garage while trying to practice his guitar.