March 29, 2022 · Productivity, Testing

An essential guide to overcoming the complexity and burden of adding features to an existing app.

You’re never really done with writing any piece of software. You don’t, for example, stop adding new features to your application after you release it into production. Instead, you keep adding new features to that application over its life and, because that’s the reality you live in, you want to know how you can keep the costs of those changes (time and money) under control and add those new features reliably (i.e., without introducing new bugs or re-activating old ones).

There’s a world of tools and techniques out there to help you do that, but they generally fall into four categories:

  • Leave working code alone
  • Add new features by writing new code
  • Architect for extension
  • Test early and test often

Leave Working Code Alone

The two truths in the life of a programmer are that:

  1. Change is your enemy: Every change has the possibility of introducing a new bug.
  2. Your existing code is working as you expect it to.

Those two truths lead to the first rule of programming: Don’t change working code.

That doesn’t mean that you can’t add new features to your existing applications, though. It does mean that you need to build your application so that you can add new features without having to alter currently working code. The five principles wrapped up in the SOLID acronym give you direction on how to write code so that you can add new features without disturbing that code. Those principles often work together.

For example, the S in SOLID stands for the single responsibility principle (SRP), which goes hand in hand with the I (the interface segregation principle) to reduce the need to change existing, working code.

SRP is often described in two related ways: “Every component should do one thing well” and “There should be only one reason that drives changes to a module.” The interface segregation principle recommends creating objects with a small number of members in their public APIs: just enough members, in fact, to support the class’s single responsibility. (The D in SOLID, the dependency inversion principle, has something to say about what those members should be, as you’ll see.)

Following the single responsibility and interface segregation principles keeps objects small and focused. More importantly, it means that new functionality is more likely to be added by creating new components rather than by changing existing code. Those two principles work together to reduce the chance of introducing bugs into working code.
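As a minimal sketch of those two principles working together (in TypeScript, with hypothetical names), consider invoicing: formatting an invoice and sending it are two responsibilities with two different reasons to change, so each gets its own small interface:

```typescript
// Hypothetical invoicing example: two responsibilities, two narrow interfaces.
interface InvoiceFormatter {
  format(items: { description: string; amount: number }[]): string;
}

interface InvoiceSender {
  send(document: string, recipient: string): void;
}

// Each class has exactly one reason to change.
class PlainTextFormatter implements InvoiceFormatter {
  format(items: { description: string; amount: number }[]): string {
    return items.map((i) => `${i.description}: $${i.amount.toFixed(2)}`).join("\n");
  }
}

class ConsoleSender implements InvoiceSender {
  send(document: string, recipient: string): void {
    console.log(`To ${recipient}:\n${document}`);
  }
}
```

A new output format or delivery channel is a new class implementing one of these interfaces; nothing that already works has to change.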

The best way to enable adding new features without violating the single responsibility principle is to follow the open-closed principle (the O in SOLID). The open-closed principle says that it should be easy to extend objects with new features without changing existing code. (The principle is often expressed as keeping an object “open to extension but closed to modification.”)

One of the most powerful ways to extend objects in object-oriented programming is through inheritance, which allows you to extend an object by adding a subclass while leaving the original “base” class untouched. However, it’s not enough to avoid having to change existing objects—you also want to avoid changing the clients that use those objects.

Avoiding changes to existing clients is possible when object hierarchies follow the Liskov substitution principle (the L in SOLID). The Liskov substitution principle says that an existing client for any base class should be able to use any new subclass without having to be rewritten. Following this principle when implementing inheritance ensures that existing clients will continue to work, without change, even when interacting with new subclasses.
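As a minimal sketch of the open-closed and Liskov substitution principles working together (hypothetical names again), a new kind of discount arrives as a subclass, and the existing client keeps working unchanged:

```typescript
// Hypothetical example: the client is written once, against the base class.
class Discount {
  apply(total: number): number {
    return total; // base behavior: no discount
  }
}

// The new feature is a subclass; the base class and its clients are untouched.
class SeasonalDiscount extends Discount {
  apply(total: number): number {
    return total * 0.9; // honors the base contract: takes a total, returns a total
  }
}

// An existing client: works, unchanged, with any current or future Discount.
function checkout(total: number, discount: Discount): number {
  return discount.apply(total);
}

console.log(checkout(100, new Discount())); // 100
console.log(checkout(100, new SeasonalDiscount())); // 90
```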

The dependency inversion principle (the D in SOLID) also supports rolling out changes without affecting existing, working clients. The dependency inversion principle says that the members exposed by an object’s public API should match the demands of the client rather than exposing how the object does its work. Following the dependency inversion principle means that, because the class’s interface is controlled by the client, that interface stays stable even as you change how your object meets its new responsibilities.
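Here’s one hedged sketch of what that can look like (hypothetical names): the client declares the one member it needs, and the supplying class implements that client-owned interface:

```typescript
// The client declares what it needs...
interface CustomerLookup {
  findName(id: number): string;
}

// ...and the supplier adapts to the client's interface, not the other way around.
class CustomerRepository implements CustomerLookup {
  private customers = new Map<number, string>([[1, "Acme Corp."]]);

  findName(id: number): string {
    return this.customers.get(id) ?? "(unknown)";
  }
}

// The client depends only on the abstraction it defined, so the interface
// stays stable no matter how CustomerRepository changes internally.
function greet(lookup: CustomerLookup, id: number): string {
  return `Hello, ${lookup.findName(id)}`;
}

console.log(greet(new CustomerRepository(), 1)); // Hello, Acme Corp.
```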

Add New Features by Writing New Code

If you’re not going to change your existing code, how, then, do you add new features? The answer is that you add new features by writing new code (and, thanks to applying the SOLID principles, leaving old code alone).

This is where design patterns come in. Design patterns provide ways to solve common problems around adding new features, either without changing existing code or by localizing changes to a single module. Adopting most design patterns requires leveraging both inheritance and interfaces: both are techniques for making different objects (i.e., new and old ones) look alike so that an application can use either as required, without change.

In practice, design patterns tend to favor creating solutions by composition: assembling multiple single-responsibility classes to complete a task. That emphasizes using interfaces over inheritance but, more importantly, allows you to add new features by incorporating new classes into the composition.

An excellent example of how design patterns allow you to add new functionality without disturbing existing code is the decorator pattern. The decorator pattern describes how to wrap an existing object in order to give it new behavior without altering its existing code.
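A minimal decorator sketch (hypothetical names): the wrapper implements the same interface as the object it wraps, so existing clients can’t tell the difference:

```typescript
interface Notifier {
  notify(message: string): void;
}

// The existing, working class: it never changes.
class EmailNotifier implements Notifier {
  notify(message: string): void {
    console.log(`Emailing: ${message}`);
  }
}

// The decorator adds behavior, then forwards to the wrapped object.
class LoggingNotifier implements Notifier {
  constructor(private inner: Notifier) {}

  notify(message: string): void {
    console.log(`[log] notifying: ${message}`); // the new functionality
    this.inner.notify(message); // the original functionality, untouched
  }
}

// Clients still see a plain Notifier.
const notifier: Notifier = new LoggingNotifier(new EmailNotifier());
notifier.notify("Order shipped");
```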

The state pattern is an example of how changes can be localized. It’s not unusual for an application to have to consider the current state of its data when performing changes (for example, a customer who owes money on outstanding sales orders can’t be deleted). Rather than scatter those tests throughout the application, the state pattern centralizes them in one module and isolates the code for each state into a separate module.

With the state pattern, if a new feature requires a new state, you can add the new feature by adding a new module with the code required by the new state. You also have to change the existing module that tests for and loads the correct module but, while you can’t avoid changing existing code, you can minimize the changes required.
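Here’s a minimal sketch built on the customer example above (hypothetical names; a fuller implementation would also centralize the code that selects and loads states):

```typescript
// Each state's rules live in their own class.
interface CustomerState {
  canDelete(): boolean;
}

class ActiveState implements CustomerState {
  canDelete(): boolean {
    return true;
  }
}

class HasOutstandingOrdersState implements CustomerState {
  canDelete(): boolean {
    return false; // money is owing, so deletion is refused
  }
}

class Customer {
  constructor(public name: string, private state: CustomerState) {}

  delete(): void {
    if (!this.state.canDelete()) {
      throw new Error(`${this.name} can't be deleted in its current state.`);
    }
    console.log(`${this.name} deleted.`);
  }
}

// A new state is a new class; only the state-selection code changes.
new Customer("Acme Corp.", new ActiveState()).delete();
```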

A key part of design patterns is that they provide a place where the experience and knowledge of programmers can be assembled. Design patterns come with expert advice on how they can be implemented reliably.

Here is a list of 23 design patterns you can use: https://www.dofactory.com/net/design-patterns.


Architect for Extension

All of this is good advice to follow within an application. Where applications interact, a different set of principles applies. The most important of these is that systems should be “loosely coupled”: Two parts of a system that have to work together should have as little to do with each other as possible, as in, for example, the producer/consumer pattern. Loose coupling ensures that you can safely add new features to one part of your system without impacting the others.

Two popular methods for implementing the producer/consumer pattern are queues and events. With queues, an application writes a message to a queue that is read by other applications. With an event-driven system, an application raises an event, passing along some information, and other applications subscribe to that event.

Event- and message-based solutions aren’t all that different (in fact, under the hood, event-driven systems are often built on message queues). Message queues tend to be a better choice when resiliency matters, while event-driven architectures are preferred when faster turnaround is required. The key to both architectures is that the producer that generates the events/messages is almost completely independent of the consumers that process them: the only thing the two applications have to agree on is the format of the information passed in the message or event.
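As an illustrative, in-process sketch of that independence (hypothetical names; a production system would use a real queue or event broker), notice that the producer and its consumers share nothing but the shape of the event:

```typescript
// The only thing producer and consumers agree on: the event's format.
type OrderPlaced = { orderId: number; total: number };

class EventBus {
  private subscribers: ((event: OrderPlaced) => void)[] = [];

  subscribe(handler: (event: OrderPlaced) => void): void {
    this.subscribers.push(handler);
  }

  publish(event: OrderPlaced): void {
    for (const handler of this.subscribers) handler(event);
  }
}

const bus = new EventBus();

// A new feature is just a new subscriber; the producer never changes.
bus.subscribe((e) => console.log(`Billing order ${e.orderId} for $${e.total}`));
bus.subscribe((e) => console.log(`Scheduling shipment for order ${e.orderId}`));

// The producer's only job: raise the event.
bus.publish({ orderId: 42, total: 99.5 });
```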

Creating loosely coupled applications will lead you to adopting “eventual consistency.” Eventual consistency relaxes the requirement that all systems reflect the same state of the data at the same time. You’ve probably seen this in action: If you look at your credit card bill online, you may notice that subtracting your recorded charges from your credit limit doesn’t match your available credit. That’s because your available credit includes some charges that haven’t yet shown up in your list of bills (but will, “eventually”). Eventual consistency lets you add features to your application without having to keep them in lockstep with other components of your application.

Test Early and Test Often

And, while all of those tools and techniques will help you add new features reliably, none of them guarantee that the result will be free of bugs. You need, therefore, some way to ensure that your new feature hasn’t caused any of your existing functionality to start behaving … differently. That’s the purpose of regression testing: re-executing tests that prove old functionality still performs the same way as it did before the change.

Unfortunately, as your application evolves and you add more functionality, more and more tests are required to prove that your application continues to behave “as expected.” Eventually, the time required to perform all of these tests manually will exceed your available time and budget.

That is, unless, as you were adding functionality, you were also adding automated tests that demonstrate the new functionality works as specified. With automated testing, you can always run all of your regression tests; at most, you’ll just have to add another computer for running your tests so that they finish in time.
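As a minimal sketch of what one of those automated tests might look like (this uses Node’s built-in assert module, and the function under test is hypothetical; a dedicated test runner works the same way at scale):

```typescript
import assert from "node:assert";

// The existing, working function that the tests protect.
function applyDiscount(total: number, percent: number): number {
  return total - total * (percent / 100);
}

// Each assertion pins down behavior the application already relies on...
assert.strictEqual(applyDiscount(100, 10), 90);
assert.strictEqual(applyDiscount(100, 0), 100);
assert.strictEqual(applyDiscount(0, 50), 0);

// ...so the whole suite can be re-run, cheaply, after every change.
console.log("All regression tests passed.");
```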

By reducing the cost of running your tests, you enable more frequent testing. By developing your tests as you write your code, you can start testing earlier. By testing early and more frequently, if you do find that a new feature has created a problem, you’ll be able to address the problem when the costs are lowest.

Automated testing has another benefit: It enables you to pay down your technical debt. In every application, you will, over time, realize that some of your existing code isn’t living up to your current standard. You have, in other words, “technical debt” that will, eventually, make it difficult to add new features. Remembering that change is your enemy, you have to consider whether it’s worth improving that code.

However, the existence of automated regression tests removes that risk: You can enhance the code to bring it up to standard and re-run your automated regression tests to ensure that your change really is benign. If you’ve followed the SOLID principles, implemented known design patterns and architected your systems to support extension, then you’ll be able to both target your testing and track down your bugs quickly.

The reality is that IT shops spend more time extending, enhancing and modifying existing applications than building new ones. Applying all these principles, patterns, architectures and testing patterns will allow you to make those changes reliably, on time and within budget. And, in addition to adding new features, you’ll actually be able to improve your existing code.


About the Author

Peter Vogel

Peter Vogel is a system architect and principal in PH&V Information Services. PH&V provides full-stack consulting from UX design through object modeling to database design. Peter also writes courses and teaches for Learning Tree International.
