We collect a great deal of data during the testing process, but whether we put it to good use is a different story. We execute individual test cases and determine whether they pass or fail. If they fail, we pinpoint where they fail and provide the steps necessary to reproduce the failure.
We measure the performance of individual transactions early in the testing process. Then, as our goal turns toward deployment, we add virtual users to determine how the application and servers behave under stress, and whether they can successfully handle the required number of simultaneous users.
In the aggregate and over time, we track statistics such as the bug find-and-fix rate, expecting to find more bugs at the beginning of the process and fix more as we get farther along. We also look at a bug's severity to help determine its priority and when it must be fixed.
However, testers tend to look at specific pieces of data in isolation: to determine the impact of specific bugs, or whether we're on track for the scheduled release. There is far more we could learn from our testing and other practices, if only we brought the data together and had specific questions we wanted answered.
What kinds of information could we glean if we looked at our data more comprehensively? Let's start with performance. As soon as an application is built for the first time, we should be testing performance for a single user through a set of transactions. This first set of measures serves as the baseline. Performance may be good or poor relative to requirements, but it should subsequently be tested daily to look for significant changes while the code is under development.
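The daily check against a baseline can be as simple as comparing each transaction's current timing to its first recorded measure and flagging large deviations. Here is a minimal sketch in Python; the transaction names, timings, and the 20% threshold are all illustrative assumptions, not measurements from any real project.

```python
# Sketch: flag significant day-to-day performance drift against a baseline.
# Transaction names, timings, and the 20% threshold are illustrative only.

def flag_regressions(baseline, daily, threshold=0.20):
    """Return (name, baseline_ms, today_ms) for transactions whose daily
    time exceeds the baseline by more than the given fractional threshold."""
    flagged = []
    for name, base_ms in baseline.items():
        today_ms = daily.get(name)
        if today_ms is None:
            continue  # transaction not exercised today
        if (today_ms - base_ms) / base_ms > threshold:
            flagged.append((name, base_ms, today_ms))
    return flagged

baseline = {"login": 250.0, "search": 900.0, "checkout": 1200.0}
daily    = {"login": 260.0, "search": 1150.0, "checkout": 1210.0}

for name, base_ms, today_ms in flag_regressions(baseline, daily):
    print(f"{name}: {base_ms:.0f} ms -> {today_ms:.0f} ms")
```

Run daily, a report like this turns a pile of timing numbers into a single question: what changed in yesterday's check-ins?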
As we go through the development and testing lifecycle, we collect large amounts of data on unit tests, code check-ins, smoke tests, functional tests, defects, fixes, changes, and schedule deviations.
How can you tell which data to aggregate, and how to manipulate it to find out more about your status and processes? First, have a plan to collect all of the data you're generating as part of your development and testing practices. This may involve using a single integrated tool or set of tools, or writing all of the data to a SQL database.
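The shared-database approach can start very small. The sketch below uses SQLite to show the idea; the table and column names are illustrative assumptions, not a prescribed schema.

```python
# Sketch: a minimal shared store for build and defect data, using SQLite.
# Table and column names are illustrative assumptions, not a prescribed schema.
import sqlite3

conn = sqlite3.connect(":memory:")  # a file path would persist across runs
conn.executescript("""
CREATE TABLE builds  (id INTEGER PRIMARY KEY, built_on TEXT, succeeded INTEGER);
CREATE TABLE defects (id INTEGER PRIMARY KEY, requirement TEXT,
                      severity TEXT, opened_on TEXT, closed_on TEXT);
""")

# Each tool in the pipeline appends its own rows as events happen.
conn.execute("INSERT INTO builds (built_on, succeeded) VALUES ('2016-03-01', 1)")
conn.execute("INSERT INTO defects (requirement, severity, opened_on) "
             "VALUES ('REQ-42', 'high', '2016-03-01')")
conn.commit()

(count,) = conn.execute("SELECT COUNT(*) FROM defects").fetchone()
print(count)  # prints 1: one defect recorded so far
```

Once every tool writes to the same place, cross-cutting questions become ordinary queries instead of manual spreadsheet work.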
Second, determine ahead of time what you want to learn about your application development and testing practices, and formulate it as a series of questions. These will be largely unique to each organization and project, but here are some samples:
- How does our build success compare with other projects of similar scope? Is there a trend over time in our build success, within the project and compared to other projects?
- Are defects clustering around specific requirements? How about specific areas of code? If so, those requirements may be ambiguous, or the code may be highly complex.
- Are some defects taking longer to diagnose and fix than others? Defect priority and test blocking may be causes, but there may be other reasons.
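With the data in one store, a question like the second one above reduces to a grouping query. The following sketch runs against an illustrative defects table; the requirement IDs and counts are made up for the example.

```python
# Sketch: asking "are defects clustering around specific requirements?"
# The defects table and requirement IDs below are illustrative assumptions.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE defects (id INTEGER PRIMARY KEY, requirement TEXT)")
rows = [("REQ-7",), ("REQ-7",), ("REQ-7",), ("REQ-12",), ("REQ-3",)]
conn.executemany("INSERT INTO defects (requirement) VALUES (?)", rows)

# Requirements sorted by defect count; a heavy cluster may signal an
# ambiguous requirement or overly complex code behind it.
for req, n in conn.execute(
        "SELECT requirement, COUNT(*) AS n FROM defects "
        "GROUP BY requirement ORDER BY n DESC"):
    print(req, n)
```

The same pattern works for the other questions: group by code area, by build, or by time-to-fix, and look for outliers.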
We don’t ask questions like these today, but answers to them can be enormously helpful in understanding and improving our processes. In my next post, I’ll look at how Test Studio can help generate the data to provide answers to these and similar questions.
Peter Varhol is an Evangelist for Telerik's Test Studio. He's been a software developer, software product manager, technology journalist, and university professor, among the many roles in his past, and believes that his best talent is explaining concepts and practices to others. He's on Twitter at @pvarhol.
Copyright © 2016, Progress Software Corporation and/or its subsidiaries or affiliates. All Rights Reserved.
Progress, Telerik, and certain product names used herein are trademarks or registered trademarks of Progress Software Corporation and/or one of its subsidiaries or affiliates in the U.S. and/or other countries. See Trademarks or appropriate markings.