Real Life Guidelines that Deliver Results

You’re reading the fourth post in a series that’s intended to get you and your teams started on the path to success with your UI test automation projects:

1. Introduction
2. Before You Start
3. People
4. Resources (you are here)
5. Look before You Jump
6. Automation in the Real World
7. Improving Your Success

Important note: After its completion, this series will be gathered up, updated/polished and published as an eBook. We’ll also have a follow-on webinar to continue the discussion. Interested?


Chapter Four: Resources

No, "people" are not "resources". Let's get that straight right now!

We've walked through getting a plan in place for having your team build their automation skills. Now you've got to consider another aspect of the team's success: proper tools and infrastructure to help them get their work done. Your team will need a number of pieces to create, execute and maintain your UI automation suites effectively.

Here's what a typical infrastructure looks like. Obviously, some environments have many more moving parts.




Team members writing automation scripts will generally build their scripts locally and check them into a source control system such as Team Foundation Server, Git or SVN (see flow #1 on the diagram).

A build server will pull the latest version of the tests as a suite from source control (flow #2). The suite is compiled (if necessary), then the build server hands actual test execution off to one or more agents (flow #3). This test pass, or job, is generally scheduled rather than run constantly in a continuous integration model: UI automation tests are simply too slow and long-running to have them blocking other CI builds and tasks.

Test agents for UI testing can be very lightweight systems, often virtualized, that handle executing tests against the system under test (SUT) (flow #4). The agents can be a mix of different operating systems with different browsers. This helps to ensure you’re getting proper OS and browser coverage. Agents can also run on mobile devices if the organization needs mobile coverage too.
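That OS-and-browser mix is essentially a coverage matrix. As a minimal sketch (the platform and browser names here are example values only, not a recommendation), you can enumerate the combinations your agents need to cover:

```python
from itertools import product

# Example values only; substitute your organization's supported platforms.
operating_systems = ["Windows 10", "Windows 7", "macOS"]
browsers = ["Chrome", "Firefox", "Internet Explorer"]

# Each (OS, browser) pair is a candidate agent configuration.
coverage_matrix = list(product(operating_systems, browsers))
```

Even a simple enumeration like this makes it obvious how quickly the matrix grows, and why offloading execution to a pool of agents matters.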

Agents report test results back to the build/CI server, which then makes reports and notifications available to the team.

Let's dive further into each component of this diagram.


Development/Test Systems

Your team needs adequate systems to create the automation suites. There is plenty of evidence that productivity improves when team members have access to solid, well-powered systems, which is why most developers end up with very high-performance machines.




Testers, or those mainly responsible for your automation scripts, don't need quite that much power. Test automation projects shouldn't take much horsepower to build, but you do need to ensure slow systems won't leave your team sitting idle while tests are compiling or running.




Your team will be using these systems to write, troubleshoot, execute and maintain your test suites. These systems will need access to the source control repository, build/CI server, SUT host(s) and agent systems.

(These systems are where Telerik Test Studio, either the standalone or Visual Studio version, lives.)


Build/CI Server

Build servers come in many shapes and flavors. Team Foundation Server, TeamCity and Jenkins are just three of the most popular systems; there are many others.

Before going further, let's get the terminology clear. A build server is an application or set of tools responsible for getting the latest version of the source code and building/compiling it. Build servers generally allow custom tasks or jobs to be defined, such as packaging the built output for distribution, executing automated tests, reporting and so on.

Often, build servers have scheduled jobs such as building the system, deploying to a test environment and executing slow-running tests that hit the database or fire up the user interface. (Telerik Test Studio's scheduling server fits this role in the diagram.)

Continuous Integration is a methodology in which a set of tasks runs continuously as teams check in updates to source control. CI tools are generally part of a build server, which monitors the source control system for check-ins. When a configured number of check-ins occur, the CI server will pull the latest version of source, build the system and execute any other defined tasks.
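To make that polling model concrete, here is a minimal sketch in Python. The `get_latest_revision` and `run_build` callables are hypothetical stand-ins for a real source control client and build runner, and a real CI server would loop indefinitely with a sleep between polls rather than running a fixed number of cycles:

```python
def poll_and_build(get_latest_revision, run_build, cycles=3):
    """Watch source control for new check-ins; trigger a build on each one."""
    last_built = None
    build_results = []
    for _ in range(cycles):
        revision = get_latest_revision()
        if revision != last_built:           # a new check-in has arrived
            build_results.append(run_build(revision))
            last_built = revision
        # a real CI server would sleep here before polling again
    return build_results

# Simulated source control: no new check-in on the second poll.
revisions = iter(["r1", "r1", "r2"])
results = poll_and_build(lambda: next(revisions), lambda r: f"built {r}")
# results == ["built r1", "built r2"]: only new check-ins trigger builds
```

The same loop shape underlies most polling-based CI tools; the real work hides behind those two callables.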

The basic advantage of CI is that, at a bare minimum, team members know when commits from multiple members have somehow broken the system: the infamous "builds on my system, but fails on someone else's" scenario. CI servers are generally configured to run additional tasks after the basic build, such as executing unit tests. (Remember from above that build servers can, and often do, handle a lot more than just unit tests.)


Execution Agents

Execution agents allow the build/CI server to offload tasks, normally related to test execution. Agents are usually a mix of operating systems and browsers, as well as mobile devices, configured to meet the organization's testing matrix requirements. Often, agents are virtualized guests on a larger virtualization platform, such as Hyper-V or VMware.

Agents are small(ish) executable components hosted on one or more systems, separate from the build/CI and SUT servers. The build/CI server creates a job and dispatches it to the agent.

In the case of UI tests, the agents will launch the system's application, either a desktop app or a web browser, and step through the automated scripts. The agent then reports results back to the build server.

Agents give organizations the ability to scale out the coverage matrix. Agents can also run tasks in parallel, enabling large, long-running test suites to execute much faster.
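As a sketch of that fan-out, here is a minimal Python example that distributes test jobs round-robin across a pool of agents and runs them in parallel. The `execute` callable is a hypothetical stand-in for whatever actually drives a remote agent:

```python
from concurrent.futures import ThreadPoolExecutor

def run_suite_on_agents(test_jobs, agents, execute):
    """Fan test jobs out across agents, one worker thread per agent."""
    with ThreadPoolExecutor(max_workers=len(agents)) as pool:
        futures = [
            pool.submit(execute, agents[i % len(agents)], job)  # round-robin
            for i, job in enumerate(test_jobs)
        ]
        # Collect in submission order so reports stay deterministic.
        return [future.result() for future in futures]
```

With two agents and a long suite, wall-clock time roughly halves; the scheduling policy (round-robin here) is the simplest possible choice, and real tools usually balance by agent load or capability instead.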

(Telerik Test Studio Runtime Edition fills the "agent" role.)


System Under Test

The final piece in the diagram is the system under test (SUT). This very simplified diagram shows one unit; often, however, there are many components involved: web front ends, application servers, middleware such as TIBCO or BizTalk, database servers and so on.

The SUT may be updated as part of the CI or scheduled build/execution process, or it may be a simpler model in which other team members update the SUT, as needed. Moreover, sometimes SUTs are in a shared environment, which causes additional complications for test data.


Leveraging Resources to Their Utmost

Getting to success in your test automation projects means getting all these components working well together, the earlier the better. Your team should view the automated deployment and execution of the tests, and everything around that process, as the highest-value feature for the organization. Being able to build, deploy, test and release your software with a metaphorical push of a button is an incredibly powerful concept!

You don't need to piece everything together at once. Start small and work from there. Here's one route you might take to get from zero to awesome:

  1. Get tester/developer systems up and running: You can write and run test suites locally as you build out the rest of your system.
  2. Get your source control running: This isn't optional. Period.
  3. Get your SUT running: Get a separate environment where you can totally control your SUT, even if it's a small VM to start with.
  4. Get your build/CI running: Use the tools your organization is already competent with; don't try to reinvent the wheel. If the organization isn't using a build/CI server, pick one that meets your needs. Start small with a simple build script, then wire up deployment of your SUT.
  5. Create agents for execution: Now find small VMs or old unused desktop systems and bring them into your environment as agents.
  6. Create test jobs as necessary: Configure scheduled jobs that pull the latest SUT and test suites, deploy as necessary and execute tests.
  7. Rinse, lather, repeat: Continue to evolve your tests and smooth out your automated build, deploy and execute processes.
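The scheduled job in step 6 boils down to a pull/deploy/execute/report pipeline. Here is a minimal sketch of that flow; each callable is a placeholder for your real tooling (source control client, deployment script, test runner and notifier):

```python
def scheduled_test_job(pull_latest, deploy_sut, run_tests, notify):
    """One scheduled test pass: pull the latest SUT and test suites,
    deploy, execute the tests and send results to the team."""
    build = pull_latest()        # fetch the latest build/source
    deploy_sut(build)            # stand up the SUT in the test environment
    results = run_tests()        # hand execution off to the agents
    notify(results)              # reports/notifications back to the team
    return results
```

The value of writing it down this plainly is that every step becomes a seam where you can swap tooling later without disturbing the rest of the pipeline.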

Moving Forward

Infrastructure and the build/deploy/execute pipeline is critical to get in place as early as possible. Having the right environment in place lets you focus on the harder testing and domain problems.

Now that all the tools and people are in place, we'll next look at the practical aspects of getting rolling with your automation efforts.


About the Author

Jim Holmes

Jim is an Executive Consultant at Pillar Technology. He is also the owner of Guidepost Systems. He has been in various corners of the IT world since joining the US Air Force in 1982. He’s spent time in LAN/WAN and server management roles in addition to many years helping teams and customers deliver great systems. Jim has worked with organizations ranging from startups to Fortune 100 companies to improve their delivery processes and ship better value to their customers. When not at work you might find Jim in the kitchen with a glass of wine, playing Xbox, hiking with his family, or banished to the garage while trying to practice his guitar.
