
Software development and testing processes have the potential to create a lot of useful data. With fast computers and new software tools, we can use that data in innovative ways to dramatically improve how we look at problems and make decisions. Gartner has named data and analytics as a disruptive force in the testing industry in its latest automated testing Magic Quadrant.

Today, when we make decisions about characteristics like quality, “doneness,” the impact of defects, and test coverage, we all too often rely on our experience, instincts, and “gut feel.” While those decisions are sometimes correct, much of the time our biases get in the way, resulting in flawed test designs, data interpretations, and decisions.

Why do we work like this? One reason is inherent in our nature. Manipulating and analyzing data to obtain information is hard work. Once we think we have a rough understanding of data trends, the effort to mine that data for more valuable information falls by the wayside as we make guesses about its meaning.

But it’s also the case that our useful data is spread out across multiple applications and systems, and we simply don’t know how to begin pulling it together in one place to create useful information. Between multiple testing tools, build systems, IDEs, requirements and agile tracking systems, and defect management solutions, it can take tremendous effort just to get our data sources talking to one another. We don’t analyze our data because we can’t get to it.

For example, testers record defects and track them as they are fixed and the fixes verified, but they rarely look beyond that for defect trends or underlying root causes. Are there more defects that are associated with certain requirements or user stories? Is that because the requirement is ambiguous, or because it is technically difficult to implement? Knowing the answers to these questions will help us know what corrective actions to take.
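To make that concrete, here is a minimal sketch of the kind of query that surfaces those clusters, assuming defect records have already been exported to a SQLite database. The team_metrics.db file and the requirements and defects tables (and their columns) are hypothetical stand-ins for whatever your defect tracker actually holds.

```python
import sqlite3

# Hypothetical schema, standing in for a real defect tracker export:
#   requirements(id, title)
#   defects(id, requirement_id, severity, status, created_at)
conn = sqlite3.connect("team_metrics.db")

# Count defects per requirement, worst offenders first. A requirement
# that attracts far more defects than its peers is a prompt to ask:
# is it ambiguous, or just technically difficult?
query = """
    SELECT r.id, r.title, COUNT(d.id) AS defect_count
    FROM requirements AS r
    LEFT JOIN defects AS d ON d.requirement_id = r.id
    GROUP BY r.id, r.title
    ORDER BY defect_count DESC
"""
for req_id, title, defect_count in conn.execute(query):
    print(f"{req_id}\t{defect_count}\t{title}")
conn.close()
```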

Data and Analytics

There are three answers to organizing and analyzing our data. The first is to centralize team data as much as possible. This can be done either by integrating team functions into a single tool or integrated family of tools, or by retaining multiple tool chains while ensuring that each tool can export its data to a central repository, such as a SQL database.
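As a sketch of that second option, the following shows one way a nightly job might load a CSV export from a defect tracker into a shared SQLite repository. The defects.csv file, the table, and its columns are assumptions; real exports will differ from tool to tool.

```python
import csv
import sqlite3

# A minimal central repository: one table per upstream tool. The
# column names are assumptions; substitute whatever your defect
# tracker actually emits.
conn = sqlite3.connect("team_metrics.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS defects (
        id INTEGER PRIMARY KEY,
        requirement_id INTEGER,
        severity TEXT,
        status TEXT,
        created_at TEXT
    )
""")

# Load the tracker's CSV export, replacing rows we have seen before
# so the same job can run nightly without duplicating data.
with open("defects.csv", newline="") as f:
    rows = [(r["id"], r["requirement_id"], r["severity"],
             r["status"], r["created_at"])
            for r in csv.DictReader(f)]
conn.executemany("INSERT OR REPLACE INTO defects VALUES (?, ?, ?, ?, ?)", rows)
conn.commit()
conn.close()
```

The same pattern extends to build results, test runs, and story status: one export per tool, one table per export, and suddenly the data can be joined.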

The second is the ability to mine that data more easily, to obtain information that we’re currently missing. Reporting plays a big role here, both within integrated tool chains and directly against a SQL database. But we also need intelligent mining tools and ways to create ad hoc queries that can answer “what if” questions quickly and accurately.
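For instance, a readiness question like “is the flow of high-severity defects slowing down enough to ship?” reduces to a short ad hoc query against the same hypothetical defects table sketched above, bucketing defects by the week they were opened.

```python
import sqlite3

conn = sqlite3.connect("team_metrics.db")

# Ad hoc question: week by week, how many high-severity defects were
# opened, and how many of those are closed by now? created_at is
# assumed to be an ISO-8601 date string such as "2013-05-17".
query = """
    SELECT strftime('%Y-%W', created_at) AS week,
           COUNT(*) AS opened,
           SUM(CASE WHEN status = 'closed' THEN 1 ELSE 0 END) AS now_closed
    FROM defects
    WHERE severity = 'high'
    GROUP BY week
    ORDER BY week
"""
for week, opened, now_closed in conn.execute(query):
    print(f"{week}: {opened} opened, {now_closed} of them closed since")
conn.close()
```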

Last, we need a consistent way of turning that data into decisions that are quantifiably better than instinctive ones. This is the analysis part. To effectively analyze data, we have to know the questions concerning application quality and readiness that we want to answer, and how to use the data to get those answers.

We also have to make sure that the analysis is actually improving our decisions. This requires a feedback loop: the results of our decisions, which often emerge only over the long term, must flow back into the data we collect and evaluate, and into the processes by which we use that data to decide. We can improve our decisions only by knowing how well we did to begin with.
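One illustration of such a feedback metric is a release’s defect escape rate: the share of its defects found only after shipping. The found_in and release_id columns below are hypothetical additions to the defects table sketched earlier, and the release number is made up.

```python
import sqlite3

conn = sqlite3.connect("team_metrics.db")

# Defect escape rate for one release: defects found in production
# divided by all defects attributed to that release. A rate that stays
# high across releases says our "ready to ship" calls need rework.
query = """
    SELECT SUM(CASE WHEN found_in = 'production' THEN 1 ELSE 0 END) * 1.0
           / COUNT(*)
    FROM defects
    WHERE release_id = ?
"""
row = conn.execute(query, ("4.2",)).fetchone()  # "4.2" is a hypothetical release
if row and row[0] is not None:
    print(f"Escape rate for release 4.2: {row[0]:.0%}")
conn.close()
```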

What’s at stake here? We can’t continue to make our team and quality decisions by gut instinct, especially if we can create the information we need to do better. Software development and testing are increasingly expensive practices, and the results of poor decisions can cost millions of dollars or even result in loss of life. Analyzing our existing data better, and using it to make better decisions, is one of the overarching ways of improving how we design, build, and test software.

For more information on the technologies that are disrupting testing, and in particular better data and analytics, check out what Gartner has to say on the subject. Also, find out how to make these key disruptors work for you by reading the whitepaper “Four trends reshaping the software quality testing market.”

 


About the author

Peter Varhol is an Evangelist for Telerik’s TestStudio. He has been a software developer, software product manager, technology journalist, and university professor, among many other roles, and believes that his best talent is explaining concepts and practices to others. He’s on Twitter at @pvarhol.

