Verifying code through automated testing ensures things are working right. Validating requirements, on the other hand, ensures you’re working on the right things. Since there’s no point in verifying the wrong requirements, here’s how to ensure that your requirements are valid.
Somehow, we manage to deliver applications that, while they pass all of our tests, our users don’t want. Here’s why that happens (and how to stop doing it).
When my code has passed all my automated tests, I can say that I’ve proved that my code works. In the testing biz, what I’ve done is called “verification” and it answers the question, “Does my code do what the requirements say it was supposed to do?” That’s because my code and my tests are both driven by the same source: The application’s requirements.
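To make that concrete, here’s a minimal sketch of verification in Python. The requirement, function name, and dollar amounts are all hypothetical, invented for illustration: the point is only that the same requirement drives both the code and the test that verifies it.

```python
# Hypothetical requirement: "Orders of $100 or more ship free;
# everything else pays a flat $7.99 rate."

def shipping_cost(order_total: float) -> float:
    """Return the shipping charge for an order, per the requirement."""
    FLAT_RATE = 7.99
    return 0.0 if order_total >= 100.0 else FLAT_RATE

# Verification: does the code do what the requirement says?
assert shipping_cost(100.00) == 0.0   # at the threshold: ships free
assert shipping_cost(99.99) == 7.99   # below the threshold: flat rate
```

These assertions can pass forever and still tell you nothing about whether $100 was the right threshold, or whether free shipping was what stakeholders wanted at all. That question is validation, which is what the rest of this article is about.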
Which, of course, assumes that those requirements are “right.” If the requirements aren’t right … well, plainly, verifying code against requirements that aren’t right is a waste of your time. And it’s when the requirements aren’t right that we deliver applications that, while they work “perfectly” in some technical sense of the word, don’t make our stakeholders happy. They were great requirements but they weren’t, as it turns out, the ones that mattered. Ah, verification vs. validation.
If you’ve been creating applications for any period of time, then you’ve been through this experience and, I assume, have zero desire to go through it again.
In the testing biz, making sure you have the “right” requirements is called “validation.” Paying attention to validation—making sure the requirements are “right”—prevents you from delivering code that no one wants. And you should (obviously) validate your requirements, not least because it’s easy: Just ask your users what they want.
But, you say, that’s what our whole requirements process is supposed to do: Find out what the user wants. And I agree: Sometimes, your requirements process does deliver valid requirements. But sometimes it doesn’t.
For example, you may verify that your code complies with some regulatory requirement. That’s great, but there are a bunch of questions that should have been answered before you started writing either your code or the test that verified it: Did someone confirm that this particular set of code needed to comply with that regulation? Does the code demonstrate to the necessary authorities that you are, in fact, complying with the regulation? Do you know that there are no other regulatory requirements that you must comply with? These are all validation, not verification, questions. Some of those questions may have been answered. Others … maybe not. How would you know?
At this point, you’re probably expecting a very woolly philosophical discussion with a lot of handwaving, along with some directions that boil down to “do good and don’t do wrong” … but no practical direction. So let me be specific: The problem with requirements is that your stakeholders are imagining, based on their needs, what the system you will deliver is going to do.
That’s still too philosophical, so let me put it more concretely: Your stakeholders can only imperfectly describe what they’re imagining, and you can only imperfectly interpret those descriptions. The result is that the requirements given to you only approximate what your stakeholders are imagining, and the system you’re building only approximates what they’re expecting. Validation is the process of reconciling your stakeholders’ dreams with the ugly reality you’re going to deliver.
The way you accomplish this is by continually reviewing both proposed and work-in-progress code with your stakeholders, in terms that make sense to your stakeholders.
That means, even if you have no interest in user experience design, providing stakeholders with prototypes of your user interfaces along with mockups of any output that stakeholders will interact with … and then working through typical scenarios with your stakeholders using those tools. Effectively, you’re modeling your stakeholders’ world—or, rather, the new world that you’re moving your stakeholders into—in the context of your application.
These simulations must involve both the stakeholders who will live with the final product and the developers who will deliver the code. Having developers work with stakeholders on these simulations does two things: It deepens the developers’ understanding of what the requirements mean, and it grounds stakeholders’ expectations about what will actually be delivered.
You shouldn’t be surprised if, in this process, stakeholders say things like, “That’s great! Now, where do I find x?” or “That was easy. So, how will <insert name here> know when this is <insert status here>?” Or, even, “Oh, dear. That works but we have to do <insert unit of measure> of these every <time period>. We’ll never have enough time to do this/we’ll be racking up hours of overtime every day.” Or even, a day or two later, a stakeholder calling you up to say, “You know, I’ve been thinking about <system component> and … it’s not really going to work.”
We won’t ask how I know this.
But this is the reality of validating requirements: Modeling and simulating your stakeholders’ world surfaces validated requirements that you’ll need to create code to implement and tests to verify. Don’t think of these as “new” requirements: These requirements were always there … you just didn’t know about them. Since you were going to have these requirements inflicted on you anyway, by getting them early, you’re being “proactive” (supposedly a good thing).
Listen: I’m putting the best spin on this I can.
And you don’t do this just once, by the way. As you start building out your application, you must get your stakeholders back in so that you both can participate in more detailed and complete simulation exercises with your work-in-progress code.
You need to take the time to model your stakeholders’ new world: It’s the essential step in turning your stakeholders’ dreams into valid requirements. Only these requirements deserve your time because only these requirements are the ones that matter.
Peter Vogel is a system architect and principal in PH&V Information Services. PH&V provides full-stack consulting from UX design through object modeling to database design. Peter also writes courses and teaches for Learning Tree International.