
You spend more time modifying, extending, enhancing and (occasionally) fixing applications than building them. A reliable test suite confirms that you have an application that will support those activities.

There are at least three criteria for measuring software quality:

  • Low maintenance costs: Can our organization live with those applications?
  • Fitness to purpose: Do the applications we deliver do their job?
  • Reliable delivery process: Can we deliver those applications when we said we would?

There’s a case to be made that “low maintenance costs” is the only criterion that matters: software developers spend 35% of their time managing existing code, and that doesn’t include “improving existing code.” For comparison’s sake, developers spend less than 12% of their time on “greenfield” development.

Plainly, improving your ability to create applications that can be easily extended and modified will have a far higher payoff than improving your ability to create new applications.

Maintaining Compatibility

Part of the problem is that modifying existing applications has at least one problem that “greenfield” coding does not: maintaining “backward compatibility.” Modifying existing code is like working on a car’s engine while ensuring that the driver can still steer and stop the car … and doing it while the car is going down the highway with passengers inside it, surrounded by other cars, all with passengers inside them.

Testing has a major role to play here. If you have regression tests for the code that you’re modifying, then, after you make a change, you can prove that the part of your application that’s supposed to be unaffected still works “as it did before.” Since you never have enough time to regression test everything, automated tests are your best choice here.
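To make that concrete, here is a minimal sketch of an automated regression test. It assumes the Jest test runner and a hypothetical calculateInvoiceTotal function—neither is from any particular application; the point is that the test pins down the behavior that is supposed to survive your change:

```typescript
// A hypothetical function whose existing behavior we want to lock in
// before modifying the code around it.
export function calculateInvoiceTotal(subtotal: number, taxRate: number): number {
  return Math.round(subtotal * (1 + taxRate) * 100) / 100;
}

// Regression test: if a later change alters this behavior, the test fails
// and flags the (possibly unintended) break.
test("invoice total still includes tax, rounded to cents", () => {
  expect(calculateInvoiceTotal(100, 0.13)).toBe(113);
  expect(calculateInvoiceTotal(19.99, 0.05)).toBe(20.99);
});
```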

By the way, if your existing code doesn’t have automated tests, it’s not a problem to add them: You have multiple strategies for doing that.

Automated Tests and Maintainable Applications

If you’re trying to assess how hard/expensive it will be to modify an application, you can look for several key features: applications that are easy to modify are built from loosely coupled components and follow the SOLID principles.

The same is true of code that supports automated testing. If it’s hard/expensive/time-consuming to create tests, that’s a sign that your application is going to be hard/expensive/time-consuming to maintain.

For example, the Single Responsibility principle (the “S” in SOLID) says that an object should do “one thing well.” The Interface Segregation principle (the “I” in SOLID) ensures that functionality is segmented by client. The Dependency Inversion principle (the “D” in SOLID) ensures that the interface is designed based on the client’s needs.

Applications built following those three principles need fewer, simpler tests to prove they work: you only need to prove the object’s “single responsibility” is met, only need to create tests for a single client at a time, and your tests don’t have to deal with extraneous interface members. The tests are smaller, easier to write and easier to understand, and can be created faster than tests for objects that don’t follow these principles.
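Here’s a small sketch of what that looks like in practice. The OrderReader interface, OrderSummarizer class and Jest-style test are all hypothetical, but they show how a single responsibility and a narrow, client-owned interface keep the test short:

```typescript
// Narrow, client-owned abstraction (Interface Segregation + Dependency Inversion):
// the summarizer declares only what it needs from the rest of the system.
interface OrderReader {
  getOrderTotals(customerId: string): number[];
}

// Single responsibility: this class only summarizes orders; it doesn't fetch,
// format or persist anything.
class OrderSummarizer {
  constructor(private readonly orders: OrderReader) {}

  averageTotal(customerId: string): number {
    const totals = this.orders.getOrderTotals(customerId);
    if (totals.length === 0) return 0;
    return totals.reduce((sum, t) => sum + t, 0) / totals.length;
  }
}

// The test only has to fake one small interface and check one responsibility.
test("averages the customer's order totals", () => {
  const fakeReader: OrderReader = { getOrderTotals: () => [10, 20, 30] };
  expect(new OrderSummarizer(fakeReader).averageTotal("c-1")).toBe(20);
});
```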

The Open/Closed principle (objects are open to extension and closed to change—the “O” in SOLID) means that you can create a test confident that the test is good for the foreseeable future of the object because the object is closed to change. Under the Open/Closed principle, modifications to an object are made by creating extensions that are separate from the object. You can then create separate tests for those extensions, confident those tests are independent of the tests for the original object.
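As an illustration (the PricingEngine and PricingRule names are invented for this sketch), an object that’s closed to change can be extended with new rules, and each extension carries its own independent test:

```typescript
// The pricing engine is closed to change: new behavior is added as extensions,
// not by editing this class.
interface PricingRule {
  apply(price: number): number;
}

class PricingEngine {
  constructor(private readonly rules: PricingRule[]) {}

  finalPrice(basePrice: number): number {
    return this.rules.reduce((price, rule) => rule.apply(price), basePrice);
  }
}

// An extension, separate from the engine, with its own test.
class TenPercentDiscount implements PricingRule {
  apply(price: number): number {
    return price * 0.9;
  }
}

test("engine applies its rules to the base price", () => {
  const engine = new PricingEngine([new TenPercentDiscount()]);
  expect(engine.finalPrice(200)).toBe(180);
});

test("ten percent discount can be tested on its own", () => {
  expect(new TenPercentDiscount().apply(100)).toBe(90);
});
```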

The Liskov Substitution principle (the “L” in SOLID) restates the Open/Closed principle for inheritance. For objects that follow the Liskov principle, any test applied to a base object can be applied to all the children of the base object. For any child object, you only need to add tests for the functionality that the child object adds to the base object.
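One common way to exploit that in test code is a shared “contract test” that is run against the base class and every child. The Queue and LoggingQueue classes below are hypothetical, but they show the pattern with Jest:

```typescript
// Base class whose contract every child must honor.
class Queue<T> {
  protected items: T[] = [];

  enqueue(item: T): void {
    this.items.push(item);
  }

  dequeue(): T | undefined {
    return this.items.shift();
  }
}

// A child that adds behavior without weakening the base contract.
class LoggingQueue<T> extends Queue<T> {
  log: string[] = [];

  enqueue(item: T): void {
    this.log.push(`enqueued ${String(item)}`);
    super.enqueue(item);
  }
}

// One contract test, applied to the base class and to every substitutable child.
function behavesLikeAQueue(name: string, makeQueue: () => Queue<number>): void {
  test(`${name} dequeues items in the order they were enqueued`, () => {
    const q = makeQueue();
    q.enqueue(1);
    q.enqueue(2);
    expect(q.dequeue()).toBe(1);
    expect(q.dequeue()).toBe(2);
  });
}

behavesLikeAQueue("Queue", () => new Queue<number>());
behavesLikeAQueue("LoggingQueue", () => new LoggingQueue<number>());

// The child only needs one extra test, for the behavior it adds.
test("LoggingQueue records what was enqueued", () => {
  const q = new LoggingQueue<number>();
  q.enqueue(7);
  expect(q.log).toEqual(["enqueued 7"]);
});
```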

The features of loosely coupled components—that they can be upgraded, modified or replaced with minimal impact on other components—also make those components easy both to unit test and to combine with other components to create integration tests.
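As a small, hypothetical example of that double payoff, the same Greeter component can be unit tested behind a one-line fake and then combined with a real implementation for an integration-style test (the class names and Jest usage are assumptions for this sketch):

```typescript
// A loosely coupled component depends on an abstraction, not a concrete store.
interface GreetingStore {
  getGreeting(locale: string): string;
}

class Greeter {
  constructor(private readonly store: GreetingStore) {}

  greet(name: string, locale: string): string {
    return `${this.store.getGreeting(locale)}, ${name}!`;
  }
}

// A concrete implementation that another team might own or later replace.
class InMemoryGreetingStore implements GreetingStore {
  getGreeting(locale: string): string {
    return locale === "fr" ? "Bonjour" : "Hello";
  }
}

// Unit test: the Greeter is isolated behind a one-line fake.
test("greets using whatever the store returns", () => {
  const greeter = new Greeter({ getGreeting: () => "Howdy" });
  expect(greeter.greet("Ada", "en")).toBe("Howdy, Ada!");
});

// Integration test: the same Greeter, combined with a real store.
test("greets in French when wired to the real store", () => {
  const greeter = new Greeter(new InMemoryGreetingStore());
  expect(greeter.greet("Ada", "fr")).toBe("Bonjour, Ada!");
});
```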

Missing Tests

And if you’re thinking that there’s a part of your application that doesn’t need a test—that’s probably a mistake: Anything you don’t test will have at least one bug. Probably more.

In his classic book on software development, “The Mythical Man-Month: Essays on Software Engineering,” Fred Brooks talks about developing OS/360, the operating system for IBM’s System/360. Initially, the teams didn’t measure or test how memory was used by the operating system’s individual components. Not surprisingly, when they tried using the components together, they found multiple memory-related bugs.

One last note about the value of tests: Over the life of an application, many people will be involved in maintaining it. Automated tests are unambiguous documentation of what any component is supposed to do: The component is supposed to pass this test. Any part of an application that doesn’t have an associated test is a part of the application whose behavior is, essentially, unknown to the people who have to maintain the application.

So there’s a “meta-test” here: You can determine if you’ve created a “maintainable application” by creating an automated test suite for it. If you can’t, you probably have an application that’s going to be hard to maintain and extend. Plus, in a very real sense, you have a system that’s undocumented, no matter how much paper you have about it. Finally, when you do come to modify the application, you’ll have a very limited ability to do regression tests that determine the impact of your changes.

To put it another way: Without a test suite, your application will not be well understood, will be expensive to maintain, and you won’t know how much trouble you’re in when you go to modify it. Not a good place to be.

Next, you may want to read about other criteria for measuring software quality: reliable delivery and fitness to purpose.

About the Author

Peter Vogel

Peter Vogel is a system architect and principal in PH&V Information Services. PH&V provides full-stack consulting from UX design through object modeling to database design. Peter also writes courses and teaches for Learning Tree International.
