
Many organizations are seeing tremendous benefits in moving to one of the various test-first methodologies such as Test Driven Development (TDD), Behavior Driven Development (BDD) or Acceptance Test Driven Development (ATDD). Test-first approaches improve team communication effectiveness, dramatically shorten feedback cycles and get testing activities working in parallel with development versus happening after development is complete.

Sometimes organizations aren’t able to fully dive into these methodologies for a number of reasons. Regardless, teams can still see tremendous improvements even without wholesale adoption of “formalized” test-first methodologies.

This blog post will help you formulate ideas to improve your testing activities and push them earlier, even without adopting a full-up methodology. All testing activities benefit from earlier collaboration with the team; however, functional user interface automation has some specific benefits that come out of early, effective collaboration.





Earlier Testing’s Value Proposition

Organizations want to deliver better value more quickly. That’s not new wisdom! What is new is the emphasis on how more effective collaboration with testing can help.

Traditional delivery models often have testing as a late activity performed after development is complete. This gap increases delivery friction because bugs are found later in the cycle, which means those bugs are fixed and re-tested even later in the cycle.

It’s easy to see these sorts of disconnects: work items such as cards, user stories, tasks or whatever your team uses get carried from one iteration to another. It’s also common to see separate “dev” and “test” iterations. Worse, often the definition of “done” doesn’t include testing.

Sometimes organizations aren’t thinking about the benefits they can realize through earlier collaboration when it comes to testing. Here are a few specifics to consider:


Early Test Collaboration Raises Awareness of the Value of Features

One welcome change over the last few years is that teams are learning to understand more about how features are used: what problems they’re solving for end users and what value they’re bringing to those same users.


Team Writes Better Acceptance Criteria

This evolution in better understanding value means the team tends to write better acceptance criteria. This allows the team to focus on getting the right tests in place.


Devs Have a Better Understanding of What They’re Building

How many times have you heard “I wasn’t sure about this part, so I built it like this”? Frankly, once is too many times. Clarity around feature value and better acceptance criteria helps developers better understand what value they’re really trying to build.

That means a significant decrease in the amount of rework due to miscommunication around vague intents.


Benefits of Specification-Based Methodologies

Some test-first approaches revolve around writing specification-style acceptance criteria. These are particularly useful for spreading value awareness across the team. Readers interested in this concept should chase down a copy of Gojko Adzic’s Specification by Example or Markus Gaertner’s ATDD by Example.



Early Test Collaboration Encourages Better Design

Designing systems for better testability is a crucial part of any test-first approach. Regardless of whether you’re working in a specific methodology or just having testing conversations earlier, you’ll find your systems will become much more testable over time.


Testable User Interfaces

Functional tests on poorly designed UIs can be a nightmare. Automation scripters often have to fall back to brittle XPath for locating objects. Complex asynchronous actions drive confusing timing hacks and delays in scripts.

Contrast that with a team that knows ahead of time what major functional tests will be written and you’ll see more usable locators, hidden flags/latches on the UI and helper methods. Automation testers can then easily write flexible, simple tests to cover the feature.
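
As a minimal sketch of the difference (the page, element IDs and timing values here are all hypothetical), compare a brittle XPath locator with an agreed-upon ID paired with a conditional wait:

using System;
using OpenQA.Selenium;
using OpenQA.Selenium.Support.UI;

public static class CartLocators
{
    // Brittle: breaks as soon as the layout or row order changes.
    public static readonly By AddToCartBrittle =
        By.XPath("//div[3]/table/tbody/tr[2]/td[5]/a");

    // Stable: relies on an ID the team agreed to expose during design.
    public static readonly By AddToCart = By.Id("add-to-cart");
}

public static class CartSteps
{
    public static void AddItemToCart(IWebDriver browser)
    {
        // Conditional wait for the asynchronous render instead of a
        // hard-coded sleep. FindElements never throws, so the lambda
        // simply returns null until the element shows up.
        var wait = new WebDriverWait(browser, TimeSpan.FromSeconds(10));
        var button = wait.Until(d =>
        {
            var found = d.FindElements(CartLocators.AddToCart);
            return found.Count > 0 && found[0].Displayed ? found[0] : null;
        });
        button.Click();
    }
}

Hypothetical sketch of a stable locator and a conditional wait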


Hooks for Backing APIs

Teams creating great test automation suites have learned that backing APIs, or test infrastructure, are a vital part of a smooth, powerful suite. Backing APIs are abstractions over complex calls to web services, internal APIs or databases. These APIs provide testers with a quick way to set up test data, configure the system for testing and validate expected conditions.

Early collaboration can quickly drive out the right amount of work on these APIs, which, in turn, dramatically speeds up the time to write automation scripts. Backing APIs also pay off for manual testers doing exploratory work—they can leverage those same APIs for the same setup, configuration and validation tasks.
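
As a rough sketch of what such a backing API can look like (the endpoints, class name and method names here are hypothetical), a thin wrapper over the system’s services keeps setup and validation calls out of the test scripts themselves:

using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

// Hypothetical backing API: wraps the service calls testers need for data
// setup and validation so individual test scripts stay short and readable.
public class RecommendationTestApi
{
    private readonly HttpClient client;

    public RecommendationTestApi(Uri baseAddress)
    {
        client = new HttpClient { BaseAddress = baseAddress };
    }

    // Seed a product the recommendation engine should know about.
    public async Task AddProductAsync(string sku, string category)
    {
        var body = new StringContent(
            "{ \"sku\": \"" + sku + "\", \"category\": \"" + category + "\" }",
            Encoding.UTF8, "application/json");
        var response = await client.PostAsync("test-api/products", body);
        response.EnsureSuccessStatusCode();
    }

    // Quick check that the engine returns recommendations for an item, so
    // UI tests only need to verify that results are rendered.
    public async Task<bool> HasRecommendationsAsync(string sku)
    {
        var response = await client.GetAsync("test-api/recommendations/" + sku);
        return response.IsSuccessStatusCode;
    }
}

Sketch of a hypothetical backing API for test setup and validation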



Early Test Collaboration Garners Faster Feedback

Fast delivery has to revolve around fast feedback cycles. Your organization is not going to make any progress around improving release times if it’s taking weeks for developers to get feedback on the quality of the systems they’re creating. Stakeholders suffer when their best information on value/risk ratios is weeks or months out of date.


Disconnected Flows Get Repaired

Disconnected flows are a sad hallmark of traditional (or just “old school”) delivery models. Large requirements gathering efforts are done up front, then designers and developers build things and pitch them over a metaphorical fence to testers. Testers have to invest significant amounts of time catching up on the details of the work that’s been done, after which they’re finally able to start their testing work.

In these environments it can be weeks or months before developers get actionable information back. In turn, that requires them to spend significant amounts of time just getting back to the mental state required to fix the problem: What was I doing at the time? What were my intentions with this area of the code? Can I remember other areas this interacts with that I might need to consider as well?

Earlier collaboration helps mitigate many of these disconnected flows. Teams are able to bridge those gaps and work more effectively without the major context changes required to drop back to an earlier state.


Tests Ready When Feature Work Is Done

Collaborating from the start results in tests getting completed alongside features they’re intended to test, regardless of whether those tests are test charters, formal cases, notes for exploration or Selenium WebDriver scripts. Parallel test development or envisioning means little or no delay between feature completion and test execution. In many environments, parts of those tests can be executed as the feature work is being accomplished. This results in immediate feedback on areas where developers might be going astray.

Early collaboration also eliminates the gap between “dev complete” and “ready to test,” wherein testers are scrambling to write scripts, test charters and so on. That’s a significant time saver that helps drive better value out the door and also drives up team morale since overall throughput/velocity is improved.



Early Test Collaboration Results in Faster Delivery of Better Value

If the roughly 800 words in the previous section were too much for you, the TL;DR (too long; didn’t read) version is: talk earlier, ship value faster.

Early collaboration eliminates gaps, pauses and delays. The “handoff phase,” always a process waste in which work items go idle for extended periods, turns into “collaboration flow,” in which work passes smoothly through the delivery pipeline.



Creating an Early Testing Environment

You can begin taking advantage of the concept of early test collaboration by looking into one of the methodologies mentioned at the start of this post, or you can simply work on improving communication in your current environment. Before you do, though, I’d encourage you to address some specific issues.



Understand the Problem You’re Trying to Solve

Changing any process, workflow or culture can be difficult, even if you’re not trying to implement a recognized methodology. Ensuring you’ve clearly identified areas you’re trying to improve will greatly help you make your case for changing how you do things. Telerik has some great content to help you work through this phase. Check out the Before You Start post in the Mastering the Essentials of UI Automation series. Telerik also wrapped up the series into an eBook, which you can register for here.



Early Collaboration in Practice

Disclaimer: Of course this example will have lots of holes in it. It's not fleshed out; it's simplistic. Focus on the conversations around testing.

Envisioning

Conversations at this stage revolve around the costs of testing, both manual and automated. Testers should be giving a high-level, rough idea of the impacts of testing so that stakeholders can make an informed decision about the total cost of the feature.

"We'll need additional datasets, we'll have to modify the build flow to handle those, we'll need a modest number of new automated scripts and we'll need to do quite a bit of exploratory testing. I'd guess we'll need two weeks’ time, plus support from the DevOps folks for the data work."

This is the point at which the initial hack at acceptance criteria should happen. It gives you a starting point for getting a feel for what tests you might be writing when it’s time to do the work.
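
One lightweight way to capture that first pass is to record the acceptance criteria as ignored test stubs, so they’re visible in the test project without implying the feature is ready to test. The criteria below are hypothetical placeholders:

using NUnit.Framework;

// First cut at acceptance criteria, captured as ignored stubs during
// envisioning. Nothing runs yet; the names record what we expect to test.
[TestFixture]
public class RecommendationAcceptanceCriteria
{
    [Test, Ignore("Captured during envisioning; feature not started")]
    public void related_products_appear_when_item_added_to_cart() { }

    [Test, Ignore("Captured during envisioning; feature not started")]
    public void no_recommendations_shown_for_empty_cart() { }
}

Hypothetical acceptance criteria captured as ignored test stubs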


Early UX Design

Testers should work with UI/UX members to ensure everyone knows how to keep the UI as testable as possible.

"I'll need good IDs on these four areas of this screen so I can write good locators. What asynchronous actions will be happening on this page so that I can figure out the right conditional waits?"

Depending on how well your team works together, you might be able to start listing out locator definitions at this point—ID, name or other attribute values. You shouldn’t start building your tests yet, because the feature might change or not be selected, but at least you’ll have a start with locators defined.
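
If the team does agree on IDs at this stage, a first cut of those locator definitions might look like the sketch below; the element names and ID values are hypothetical and will almost certainly shift before the work starts:

using OpenQA.Selenium;

// Locator definitions agreed on during UX design. No tests use these yet;
// they simply record the IDs the team has committed to exposing.
public static class RecommendationPanelLocators
{
    public static readonly By Panel = By.Id("recommendations");
    public static readonly By ProductName = By.Id("recommended-product-name");
    public static readonly By AddToCart = By.Id("recommended-add-to-cart");
    public static readonly By LoadingSpinner = By.ClassName("recommendations-loading");
}

Hypothetical locator definitions agreed on during early UX design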


Early Architecture Design

Testers need to understand the high-level architecture in order to test effectively. This is also a great time to start discussions with developers about what testing will happen at the unit or integration level, and what tests will be at the UI level. Discussions are generally still high-level and conceptual at this point.

"The recommendation engine is a separate component that determines recommendations. It's sending results back to the cart via web services, right? Will you be testing the recommendation logic via those services? If so, then I can just write functional tests to ensure we're getting recommendations pulled back and rendered on the UI. I won't have to write tests to check that all the various combinations are creating the proper recommendations. I can also help you with those web service tests!"

Again, this isn’t the right time for you to start building tests. A critical fundamental concept of Lean and Agile practices is to do the work as late as possible. You want to make sure you don’t waste valuable time if things are still changing.


Starting Work on the Feature

Now is the time for the team to get into the details. Discuss the specifics of the data you'll need. Talk about any backing APIs you will require. Determine what edge cases and combinations get tested at which levels. Also, this is the time that you're able to start writing the scaffold of your tests, even before the UI is built. You can do that because you talked with the UX/UI people early in the game, remember?

"Can we work together to build a method to help me set parameters for recommendations? I could then use that as a setup for UI tests. I'll need some way to load these specific products into the recommendation database. Can I help you build up combinatorial/pairwise tests to run through the web service tests you're building? That way we could cut the number of iterations you'd need to write and I'd be able to focus on the main flows with the UI tests."

NOW is the time when testers should start creating scaffolding for the automated tests. You’ve got acceptance criteria and basic flows and locators defined at this point. Build out as much of the test as you can, even if the system or backing APIs aren’t built yet.
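
A scaffold at this stage might look something like the sketch below, reusing the hypothetical locator definitions and backing API from earlier; the URL and the assertion are the only pieces waiting on the real UI:

using System;
using NUnit.Framework;
using OpenQA.Selenium;
using OpenQA.Selenium.Firefox;

[TestFixture]
public class When_viewing_recommendations
{
    IWebDriver browser;

    [TestFixtureSetUp]
    public void run_before_any_tests()
    {
        // Hypothetical backing API call to seed the data the tests rely on.
        new RecommendationTestApi(new Uri("http://test-env/"))
            .AddProductAsync("stir-fry-kit", "grocery").Wait();
        browser = new FirefoxDriver();
    }

    [TestFixtureTearDown]
    public void run_after_all_tests_are_complete()
    {
        browser.Quit();
    }

    [Test]
    public void related_products_appear_when_item_added_to_cart()
    {
        // Hypothetical URL; the flow and locators are already defined, and
        // the assertions fill out once the page actually exists.
        browser.Navigate().GoToUrl("http://test-env/cart");
        Assert.IsTrue(
            browser.FindElement(RecommendationPanelLocators.Panel).Displayed);
    }
}

Sketch of a test scaffold built before the feature’s UI exists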


Working the Feature, Iterating on Feedback

Hopefully you're at a point at which development and testing are happening nearly concurrently. This means testers can get feedback to the developers very, very quickly. This enables teams to quickly fix issues EARLY in the game. Feedback is best when teams are communicating directly with each other, not just relying on bug reports to percolate through the process.

"Hey, I realized we missed a couple of edge cases with our test data. When I added them in, I found we're recommending motor oil instead of cooking oil when someone's buying a stir fry kit. I don't think that's what we meant! We need to modify the recommendation logic to pay better attention to an item's category."

Run as many of your tests as possible, as early as possible. You’ll be validating the basic flow and page structure, even if the entire feature is not complete. You’ll find things like misidentified locators, errors in backing APIs and so on. That’s all good! You want to discover those things now, rather than weeks down the road.


Rolling Test Suites into the Build Process

You should be adding all your automated tests (unit, integration, UI) into your build process. This means configuring your CI builds and scheduled jobs to run your tests as appropriate. Ensure your team has access to the reports they need.

"OK, so we're ready to add our integration and UI tests to the regularly scheduled jobs. All the UI and most of the integration tests are too slow to add to the CI build, but I think we should add these two relatively fast integration tests to the CI build to ensure we've got this part locked down. These five UI tests should go in the hourly UI job, and the remainder need to get added to the nightly job. Jane the stakeholder will see them showing up in her overall trend report, and we team members will see them in our detailed reports."

Here you’ll be working closely with whoever manages your build pipeline. Generally, the majority of the work is up front when you first get your build process in place. Once that work is done, it’s usually a matter of adding your tests to the existing jobs.
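
One common way to make that split easy for the build jobs is to tag tests with categories the test runner can filter on; the category names below are just examples:

using NUnit.Framework;

public class RecommendationSuiteExamples
{
    // Fast enough to run in the CI build on every commit.
    [Test, Category("CI")]
    public void recommendation_service_responds_for_known_product() { /* ... */ }

    // Slower UI flow; picked up by the hourly UI job instead.
    [Test, Category("HourlyUI")]
    public void recommendations_render_on_cart_page() { /* ... */ }

    // Full regression coverage; runs only in the nightly job.
    [Test, Category("Nightly")]
    public void recommendation_edge_cases_across_all_categories() { /* ... */ }
}

Hypothetical test categories used to route tests to CI, hourly and nightly jobs

Each scheduled job can then include or exclude categories using whatever filtering your test runner and build server support.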


Maintaining the Suites

Your automated test suites are a metaphorical living, breathing creation. You're going to have to spend time on their care and feeding. You'll need to fix them when tests break, update them as the systems change and refactor or outright rearchitect them on occasion.

"We've had two tests break last night due to changes in the helper APIs. That's roughly four hours to fix. We also had another four tests break due to changes in the checkout workflow. We think that's a day's work. Finally, we think we've got some duplication in a number of tests around the cart and recommendation engine. We want to take a half-day to pour over the tests and weed out any that are unnecessary."

Systems change. You’ll need to continually reassess the effectiveness and value of your automation. You’ll also need to regularly update your locators and workflows, if changes to the system merit it.



Working With Your Tools

Exactly how you integrate this into your own environment depends completely on the tools, platforms and systems you’re working with. We’ll walk through two common UI automation tools: Selenium WebDriver, using its C# bindings, and the Telerik Test Studio solution, using some exciting features in a soon-to-be-released version.


Selenium WebDriver

Since WebDriver is 100-percent code based, getting your tests built out ahead of your system is a matter of creating the appropriate classes. Whatever method you use for building your tests in WebDriver, you should always follow the Page Object Pattern to centralize locator definitions and page behaviors.


using OpenQA.Selenium;
using OpenQA.Selenium.Support.PageObjects;

public class LogonPage
{
    private IWebDriver browser;

    // With no Using value, PageFactory falls back to the member name, so
    // these map to the elements with the IDs "username" and "password".
    [FindsBy(How = How.Id)]
    private IWebElement username { get; set; }

    [FindsBy(How = How.Id)]
    private IWebElement password { get; set; }

    [FindsBy(How = How.ClassName, Using = "radius")]
    public IWebElement LoginButton { get; private set; }

    public LogonPage(IWebDriver browser)
    {
        this.browser = browser;
        PageFactory.InitElements(browser, this);
    }

    public HomePage LogonAs(string username, string password)
    {
        this.username.SendKeys(username);
        this.password.SendKeys(password);
        LoginButton.Click();
        return new HomePage(browser);
    }
}

Example of a Page Object for a System Logon


Here you can see how you would define the class for a logon page. Building this ahead of time is easy because the locators are defined by the class properties (for example, IWebElement username). Moreover, if your team settles on solid conventions, you can use WebDriver’s PageFactory to quickly handle element location based on common naming conventions. For example, the username field is located when InitElements searches the browser’s Document Object Model for an element whose ID matches the property name “username.”

Tests in WebDriver are very clean, because all behavior and locator logic is in the Page Object. The test proper_creds_logs_user_on below is very clear because it doesn’t handle initialization or location, just the test flow.


using NUnit.Framework;
using OpenQA.Selenium;
using OpenQA.Selenium.Firefox;

[TestFixture]
public class When_logging_into_system_pageobjects
{
    IWebDriver browser;

    [TestFixtureSetUp]
    public void run_before_any_tests()
    {
        browser = new FirefoxDriver();
        browser.Navigate().GoToUrl("http://the-internet.herokuapp.com/login");
    }

    [TestFixtureTearDown]
    public void run_after_all_tests_are_complete()
    {
        browser.Quit();
    }

    [Test]
    public void proper_creds_logs_user_on()
    {
        // All locator and flow logic lives in the Page Objects; the test
        // only expresses the scenario and the expected outcome.
        LogonPage login = new LogonPage(browser);
        HomePage home = login.LogonAs("tomsmith", "SuperSecretPassword!");

        Assert.AreEqual("Logout", home.LogoutButton.Text);

        string targetUrl = home.LogoutButton.GetAttribute("href");
        Assert.IsTrue(targetUrl.EndsWith("/logout"));
    }
}

Testing using the logon Page Object


Telerik Test Studio Solution

The Test Studio solution has always supported great collaboration between testers and developers, enabling both roles to play to their strengths. Testers can focus on writing high-value functional tests and extending them with features like data driving. Developers come into play when testers need help with situations where more complex code might be needed, such as particularly challenging find logic or asynchronous behavior. Additionally, testers and developers can work together to create helper APIs for system configuration, data setup and so on.

Working ahead with the Test Studio suite just got dramatically easier, thanks to some exciting features in the latest release. Test Studio has always enabled you to record flexible tests for playback. You’ve also been able to write as much or as little code as you need to build highly maintainable test suites. Now you can define your elements in the Test Studio repository ahead of time and build test steps without ever firing up the recorder.

The Test Studio element repository is based on Page Objects. The element repository is a central definition for element locators, organized on a per-page basis. As of Test Studio 2015.1, you can define locators in the repository manually, then build test steps for validations, extractions, conditional waits and all other actions supported in Test Studio suite.

This new scaffolding functionality enables teams to tie right back to the early collaboration we’ve talked about throughout this post: start defining your pages, elements and tests at the appropriate phase of your workflow. You can run the tests as the systems are being developed, then move your tests into your regular build/test/push workflow as needed.

Get a free evaluation copy of the latest Telerik Test Studio suite

Sign up for the release webinar “Early Test Collaboration for Successful Automation”



Close the Gaps in Your Delivery: Get Testing Collaboration Earlier

Test-first methodologies are wonderful ways to get work done. You don’t need to wait for an entire process and culture change to see the benefits. Regardless of your specific toolset, you can move conversations earlier in your process and realize the benefits to your overall delivery of value.


About the Author

Jim Holmes

Jim is an Executive Consultant at Pillar Technology. He is also the owner of Guidepost Systems. He has been in various corners of the IT world since joining the US Air Force in 1982. He’s spent time in LAN/WAN and server management roles in addition to many years helping teams and customers deliver great systems. Jim has worked with organizations ranging from startups to Fortune 100 companies to improve their delivery processes and ship better value to their customers. When not at work you might find Jim in the kitchen with a glass of wine, playing Xbox, hiking with his family, or banished to the garage while trying to practice his guitar.
