

User Acceptance Testing (UAT) is a type of software application testing that validates whether the software meets a contractual agreement expressed through requirements or fully satisfies the needs and expectations of the end user or customer. Traditionally, acceptance testing is performed by customers as part of a formal or semi-formal review process for each code release.

However, when code is continuously released, it’s next to impossible for customers to keep up with acceptance testing without a significant investment in testing resources. When they cannot keep up, they may opt to spend some time doing exploratory testing or skip it altogether.

Experienced QA testers may also execute user acceptance tests alongside business analysts or product management. The purpose is not necessarily to repeat functional regression testing, but rather to check that the application meets the contractual requirements and serves the customer’s intended use. Frequently, when the customer receives a software application release, they are surprised to find it doesn’t meet their intended needs.

This guide describes what constitutes user acceptance testing, along with its purpose and how to make sure you deliver a satisfactory customer experience.

What Is User Acceptance Testing?

User acceptance testing is traditionally executed by customers when receiving a new application release. It is exactly what it says—either the customer accepts the functionality in a release or they return issues or feature requests.

UAT is meant to ensure the application release meets the expectations of end users or customers. It involves setting up UAT environments for testing, sharing test scenarios or test scripts, and providing a demo or general overview of the release’s content.
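
To illustrate what a shared test scenario might look like, here is a minimal Python sketch of a scenario written so both the vendor and the customer can read and walk through it. The structure, the step wording and the invoice-search workflow are all hypothetical examples, not a prescribed format; real scenarios would come from the customer’s own requirements and workflows.

```python
from dataclasses import dataclass, field

@dataclass
class UATScenario:
    """A shareable UAT scenario: plain-language steps plus the expected outcome."""
    scenario_id: str
    title: str
    steps: list[str] = field(default_factory=list)
    expected_result: str = ""

# Hypothetical scenario a vendor might share alongside a release demo.
invoice_search = UATScenario(
    scenario_id="UAT-012",
    title="Accounts receivable clerk searches for an overdue invoice",
    steps=[
        "Log in as an accounts receivable clerk",
        "Open the Invoices module and filter by status 'Overdue'",
        "Search for invoice number 1043",
    ],
    expected_result="Invoice 1043 appears with an 'Overdue' badge and the correct balance",
)

if __name__ == "__main__":
    # Print the scenario in the form it would be handed to a customer tester.
    print(f"{invoice_search.scenario_id}: {invoice_search.title}")
    for number, step in enumerate(invoice_search.steps, start=1):
        print(f"  {number}. {step}")
    print(f"  Expected: {invoice_search.expected_result}")
```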

Don’t wait until an application is complete to provide customers with testable builds, interactive demos or user acceptance testing opportunities. Provide customers the opportunity to perform acceptance testing throughout the development cycle to prevent the need to redesign features when coding is completed.

UAT is also performed by QA testers, business analysts and product managers within a software development team. The intent is the same: to determine whether the released software application works as intended and meets the business expectations of the customer. A vendor’s internal UAT needs to be based on known customer expectations of how the software will be used.

The Purpose of User Acceptance Testing

The purpose of UAT is simple: make sure the customer is satisfied with the product. With any luck, the customer experience is high and customers love the released product.

With Agile or other rapid development methodologies, UAT is done continuously throughout or after unit, system, integration and functional regression testing, or as a distinct part of a regression testing cycle. UAT proves that the software development team has created the functionality per the customer’s requirements. UAT also ensures no requirements were missed or misunderstood.

Many software development teams perform UAT through interactive demos of functionality during the development cycle. When UAT is done throughout development, there’s far less chance of a nasty surprise at the end when the customer rejects the application release. It’s far less expensive for vendors to confirm customers are satisfied with the product as it’s being developed than to discover problems just before a planned production release.

The best way to build and protect an application’s brand is to produce applications that customers want to use. Practice UAT frequently so customers and development are on the same page during the entire development cycle. Otherwise, you’ll end up with an application the customer does not want to pay for or use.

User Acceptance Testing—Formal vs. Informal

Formal methods of UAT include several steps:

  • Preparing a UAT test plan or test strategy document
  • Obtaining customer approval on specific business requirements
  • Having customers prepare real-world test case scenarios
  • Creating a team of resources dedicated to performing UAT
  • Scheduling expected application releases throughout the year
  • Executing testing and sharing results, including defects and feature requests (a minimal results-tracking sketch follows this list)
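
To make the last step concrete, here is a minimal Python sketch of how UAT results might be tracked against the requirements the customer approved. The requirement IDs, test case names and defect references are hypothetical, and a real team would typically record this in a test management or tracking tool rather than a script.

```python
from dataclasses import dataclass

@dataclass
class UATResult:
    """Links one UAT test case to the contractual requirement it validates."""
    requirement_id: str   # the requirement or user story the customer approved
    test_case: str
    passed: bool
    notes: str = ""       # defect reference or feature request raised during testing

# Hypothetical results collected during a formal UAT cycle.
results = [
    UATResult("REQ-101", "Export monthly statement to PDF", True),
    UATResult("REQ-102", "Apply partial payment to an invoice", False,
              notes="DEF-884: remaining balance rounds incorrectly"),
    UATResult("REQ-103", "Email statement to customer contact", True,
              notes="FR-217: customer requested CC to account manager"),
]

# Summary the vendor and customer can review together at sign-off.
failed = [r for r in results if not r.passed]
print(f"{len(results) - len(failed)}/{len(results)} requirements passed UAT")
for r in failed:
    print(f"  {r.requirement_id} failed: {r.notes}")
```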

If changes to the application are required, UAT kicks off another round of development, testing and additional UAT testing. Development continues until the customer signs off and accepts the release.

Informal UAT methods include:

  • Building a UAT environment to simulate the customer’s IT infrastructure
  • Identifying a set of user personas to represent expected end users
  • Developing UAT test cases or test scenarios based on customer workflows (see the persona-based sketch after this list)
  • Creating a team of dedicated resources to execute UAT validation testing
  • Entering defects or feature requests
  • Fixing defects or redesigning features to meet the customer’s requirements
  • Deploying the completed release to production
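
As a sketch of how personas and customer workflows can drive informal UAT, the Python example below parameterizes one workflow check over a set of personas. The personas, roles, permissions and the `run_export_workflow` stand-in are all assumptions for illustration; in practice that function would drive the UAT environment through UI automation or an API.

```python
import pytest

# Hypothetical personas representing the expected end users of a release.
PERSONAS = [
    {"name": "scheduler", "role": "front-desk scheduler", "can_export_reports": False},
    {"name": "billing_admin", "role": "billing administrator", "can_export_reports": True},
]

def run_export_workflow(persona: dict) -> str:
    """Stand-in for driving the UAT environment as this persona (log in,
    open the reports module, attempt the export) and reporting the outcome."""
    return "exported" if persona["can_export_reports"] else "permission_denied"

@pytest.mark.parametrize("persona", PERSONAS, ids=lambda p: p["name"])
def test_monthly_report_export(persona):
    """The same customer workflow is exercised once per persona; the expected
    outcome depends on the persona's role."""
    outcome = run_export_workflow(persona)
    expected = "exported" if persona["can_export_reports"] else "permission_denied"
    assert outcome == expected, f"{persona['role']} saw unexpected outcome: {outcome}"
```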

As with the formal UAT methods, informal UAT may kick off another development cycle for items that don’t work for anticipated customer workflows.

Informal UAT testing can be executed by the vendor with internal team members or by customers, depending on the need. Many customers cannot dedicate the time or resources to test every application release. In those cases, it’s in the application vendor’s best interest to conduct an informal UAT testing cycle.

The success of informal UAT depends on how well the business understands the customer’s needs. If that understanding is accurate, the tests, personas and workflows will match how customers actually use the application. Remember, the goal is a customer who loves the application and is fully satisfied with the release because they can use it effectively and productively.

User Acceptance Testing in the Real World

UAT, in theory, is executed during the development cycle or before every production release.

In a real application development business, UAT is often skipped or skimped on. Why? One reason is that with Agile or other rapid application development methodologies, UAT is built into regular QA testing and the application is demoed regularly for clients.

The demo is supposed to be seen and reviewed by customers and actual end users of the application. When that happens and the product meets the customer’s expectations, UAT’s goal is realized: customers love using the release. However, both sides are often busy, and few invest in the resources to perform UAT. Many businesses instead rely on the quality of the requirements or user stories to ensure customer needs are met.

If you’ve been in QA testing for long, you know that the quality of requirements is often questionable at best. Sometimes customers aren’t sure what they want or need, so ensuring they test a prototype and have users attend demos is important. Other times, the requirements are missing critical foundational infrastructure development. Use cases may be incomplete, and critical requirements may be missed.

If an application development company skips UAT, detailed, specific and accurate requirements become imperative for a high-quality customer experience. As a customer, executing some form of UAT is in your best interest. Consider performing small UAT cycles throughout the development cycle so you catch missed or misunderstood requirements before they are fully coded into the product.

Customer experience is king and queen. Demand the quality you want by participating in or supporting effective UAT testing for each application release. Invest in quality and you’ll reap the positive rewards of productivity and a fully functional application employees love to use.


About the Author

Amy Reichert

A QA test professional with 23+ years of QA testing experience within a variety of software development teams, Amy Reichert has extensive experience in QA process development & planning, team leadership/management, and QA project management.  She has worked on multiple types of software development methodologies including waterfall, agile, scrum, kanban and customized combinations. Amy enjoys continuing to improve her software testing craft by researching and writing on a variety of related topics. In her spare time, she enjoys gardening, cat management and the outdoors.

 
