Getting your application to work is great … but then you have to live with it. Here’s what to aim for in creating a maintainable application.
Like most developers, I spend most of my time adding functionality to existing applications, modifying existing functionality to meet changes in the environment, and (occasionally) fixing bugs. It’s unusual for me to do “greenfield” development.
As a result, what matters to me is creating applications that are easy to maintain—applications where it’s easy to add or change functionality and where it’s easy to find and fix bugs when they happen.
Back when I was the head of an IT department, if you came to me and offered to take 10% off our development costs, I’d be mildly interested. On the other hand, if you offered to take 5% off my application maintenance costs … well, then you’d have my attention. “Maintainable” is only just behind “working” in the list of “features I like.”
For me, creating maintainable applications isn’t about implementing any particular pattern or SOLID principle and certainly not about using a particular tool. It’s all about keeping my eye on a specific set of three goals. If that doesn’t sound like the way that programming principles and patterns are usually discussed, let me give you an example.
I should warn you, though: I have no planning skills. When writing code, I tend to make much of it up as I go along (writing code is how I come to understand the problem and its solution). I’m sure that there are developers who can plan out their applications in advance. I am not that person. So, at the risk of looking foolish, this is a realistic description of the process I go through to create a maintainable application. It’s also why testing tools are important to me.
For example, I might be asked to develop an application with an “add a sales order” feature. That feature could be implemented as a single method that creates the following objects/data items:
- A sales order header
- The sales order detail rows
- A CustomerCreditCheck
- A ShippingReservation
- A CustomerInvoice
The signature for this method is simple, straightforward, easy to understand and easy to call:
public void CreateSalesOrder(List<Product> products, DateTime shipDate, Customer cust)
And that’s great … until it isn’t.
It turns out, for example, that I have to add another feature: The application must have a “prepay” feature to support customers who use an existing credit to pay for products. In those transactions, neither the CustomerCreditCheck nor CustomerInvoice is necessary, but UpdateCustomerCredit is.
It gets worse: The company also wants to start selling digital products and now needs a DigitalSalesOrder feature. With digital products the ShippingReservation functionality isn’t required, but a new EnableDownload function is.
I could handle this request by adding optional Boolean parameters to CreateSalesOrder that can be used to turn off or on some behavior in the method. The result is a method whose signature now looks something like this:
public void CreateSalesOrder(List<Product> products, DateTime shipDate, Customer cust, bool addDownload = false, bool addCustomerCredit = false, bool skipCreateShipping = false, bool skipCreditCheck = false, bool skipCreateInvoice = false)
Of course, this requires adding new code, including multiple if blocks that include/exclude functionality based on those Boolean parameters. As a result, my method is harder to understand, test and debug. And, of course, because it’s all one big method, a bug in any part of the code makes the whole process unreliable: If there’s a bug in this code, I have a lot of code to look through to find it.
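To make that concrete, here’s a minimal sketch of the flag-driven version. The real work is stubbed out so the branching is visible: each step just records its name in a list. The step names follow the article; everything else is hypothetical.

```csharp
using System;
using System.Collections.Generic;

public class Product { }
public class Customer { }

public class SalesOrderService
{
    // Stand-in for the real work: record which steps ran.
    public List<string> StepsRun { get; } = new List<string>();

    public void CreateSalesOrder(List<Product> products, DateTime shipDate,
        Customer cust, bool addDownload = false, bool addCustomerCredit = false,
        bool skipCreateShipping = false, bool skipCreditCheck = false,
        bool skipCreateInvoice = false)
    {
        StepsRun.Add("CreateHeader");
        StepsRun.Add("CreateDetail");
        if (!skipCreditCheck) StepsRun.Add("CustomerCreditCheck");
        if (!skipCreateShipping) StepsRun.Add("ShippingReservation");
        if (addCustomerCredit) StepsRun.Add("UpdateCustomerCredit");
        if (!skipCreateInvoice) StepsRun.Add("CustomerInvoice");
        if (addDownload) StepsRun.Add("EnableDownload");
    }
}
```

Every new kind of order means another flag and another if block threaded through the same method, and every caller has to know which combination of flags produces which behavior.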
If I had written the CreateSalesOrder function, I would have seen this coming. Seeing the size of this method, I’d apply the single-responsibility principle (SRP). Following that principle, I’d create each of those steps as a separate method (e.g., CreateHeader(), CreateDetail() and so on). When different processing is required, clients could mix and match these methods to create their solution.
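Sketched in code (the method bodies are stubbed to record what ran, and any names not in the article are my own guesses), the SRP version might look like this:

```csharp
using System;
using System.Collections.Generic;

public class Product { }
public class Customer { }

// Each step of the original method becomes a small, separately testable method.
public class SalesOrderSteps
{
    // Stand-in for the real work: record which steps ran.
    public List<string> StepsRun { get; } = new List<string>();

    public void CreateHeader(Customer cust) => StepsRun.Add("CreateHeader");
    public void CreateDetail(List<Product> products) => StepsRun.Add("CreateDetail");
    public void CheckCustomerCredit(Customer cust) => StepsRun.Add("CustomerCreditCheck");
    public void CreateShippingReservation(DateTime shipDate) => StepsRun.Add("ShippingReservation");
    public void CreateInvoice(Customer cust) => StepsRun.Add("CustomerInvoice");
    public void UpdateCustomerCredit(Customer cust) => StepsRun.Add("UpdateCustomerCredit");
    public void EnableDownload(List<Product> products) => StepsRun.Add("EnableDownload");
}
```

A prepay client, for example, calls CreateHeader, CreateDetail, UpdateCustomerCredit and CreateShippingReservation, skipping the credit check and the invoice.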
But, notice: realistically (and given my lack of planning skills), I would have written some code before I got this far. Since I like writing automated tests, I’d also have been leveraging Test Studio and JustMock to build tests that prove my code works. As I divided my initial code up into five or six SRP methods (and Visual Studio can help here), I’d use my existing tests to prove that I wasn’t introducing new bugs (I’d also create tests for each of my SRP methods). Quite frankly, I think anyone who’s modifying code without unit tests to prove the application still works “as expected” is a lunatic.
I’ve created a new problem, though: A client that just wants to create a standard sales order—the normal case that makes up 80% of my company’s business—must call all these individual methods. While I’d have the tests to prove that works, consider the odds of any other developer getting the sequence right the first time.
And when my company decides, after release, to add some new functionality (PredictFutureSales, for example) … well then, yes, I do only have to write one new method and can leave the other SRP methods unchanged. But, because this feature is getting added after the application was released, I also must track down every client that creates a sales order and rewrite its code to also call PredictFutureSales.
Sorry, folks: From a maintenance point of view, this is not better.
The problem is that I’ve only applied a single principle. In fact, I’m about halfway to a maintainable solution; I just need to apply the façade pattern.
In the façade pattern, I still write the original CreateSalesOrder method, but all the method does is call my SRP methods. Now, clients creating a standard sales order just call the façade’s CreateSalesOrder method. And the good news for me is that I’ve already written the tests for that façade method.
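Here’s the same sketch with the façade layered on top. The façade method keeps the original, easy-to-call signature while delegating to the SRP methods (bodies are still stubbed to record what ran; the digital-order variation comes from the earlier requirements):

```csharp
using System;
using System.Collections.Generic;

public class Product { }
public class Customer { }

public class SalesOrderFacade
{
    // Stand-in for the real work: record which steps ran.
    public List<string> StepsRun { get; } = new List<string>();

    // The SRP methods:
    public void CreateHeader(Customer cust) => StepsRun.Add("CreateHeader");
    public void CreateDetail(List<Product> products) => StepsRun.Add("CreateDetail");
    public void CheckCustomerCredit(Customer cust) => StepsRun.Add("CustomerCreditCheck");
    public void CreateShippingReservation(DateTime shipDate) => StepsRun.Add("ShippingReservation");
    public void CreateInvoice(Customer cust) => StepsRun.Add("CustomerInvoice");
    public void EnableDownload(List<Product> products) => StepsRun.Add("EnableDownload");

    // The façade: the original signature, now just orchestration.
    public void CreateSalesOrder(List<Product> products, DateTime shipDate, Customer cust)
    {
        CreateHeader(cust);
        CreateDetail(products);
        CheckCustomerCredit(cust);
        CreateShippingReservation(shipDate);
        CreateInvoice(cust);
    }

    // A second façade for digital orders: no shipping, download enabled.
    public void CreateDigitalSalesOrder(List<Product> products, Customer cust)
    {
        CreateHeader(cust);
        CreateDetail(products);
        CheckCustomerCredit(cust);
        CreateInvoice(cust);
        EnableDownload(products);
    }
}
```

Standard-order clients call one method, exactly as before; specialized clients either call a specialized façade method or mix and match the SRP methods themselves.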
More critically, by combining SRP with the façade pattern, the costs of extending the application drop:

- New features like prepay or digital orders are mostly assembled from the existing SRP methods, so there’s less new code to write (and to test).
- The new functionality goes into new methods, so the existing SRP methods and their tests stay unchanged.
- Clients that create a standard sales order keep calling the façade’s CreateSalesOrder method and never notice the change.
I may even create some new façade methods: A method to handle digital orders, for example, would probably be useful if there are a variety of clients that support digital orders.
For me, then, what matters is meeting the goals of creating maintainable code, not applying any specific tool or technique. The goals I’m aiming for are:

- Loosely coupled modules, so a change in one part of the application doesn’t ripple through the rest
- Focused components that each do one thing, so code is easier to understand, test and debug
- Extending the application by adding new code, rather than by modifying code that already works
I pick the pattern(s) that I’ll use because they move me closer to these goals. The façade pattern isn’t unusual in being part of achieving these goals—most design patterns are specifically designed to support maintainable applications. The strategy pattern, for example, lets me customize the processing of any method by passing in a specialized method to handle new demands. I can extend my application just by writing a new strategy method.
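As a sketch of the strategy pattern (all names here are hypothetical), the customizable step can be passed in as a delegate, so new demands mean a new strategy method rather than edits to the existing code:

```csharp
using System;
using System.Collections.Generic;

public class Order
{
    // Stand-in for the real work: record which steps ran.
    public List<string> Log { get; } = new List<string>();
}

public class FulfillmentService
{
    // The strategy is just a delegate the caller supplies.
    public void Fulfill(Order order, Action<Order> shippingStrategy)
    {
        order.Log.Add("CreateHeader");
        shippingStrategy(order);     // the customizable step
        order.Log.Add("CreateInvoice");
    }
}

// Extending the application means writing a new strategy, not editing Fulfill:
public static class ShippingStrategies
{
    public static void Physical(Order o) => o.Log.Add("ShippingReservation");
    public static void Digital(Order o) => o.Log.Add("EnableDownload");
}
```

A digital-order client just calls `new FulfillmentService().Fulfill(order, ShippingStrategies.Digital);` and the Fulfill method never changes.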
Not coincidentally, applications that meet these goals tend to be easier to test. Loosely coupled modules, for example, are easier to unit test and to combine into integration tests; focused components support simpler tests; if I’m adding new features with new code, I can create new tests for that new code and leave my existing tests alone.
While I’ve already mentioned testing tools, there are also any number of coding practices and tools that I use to create a maintainable application. For example, you can implement loose coupling by using a dependency injection tool that allows code to pick the object it needs out of a container. Now, if you need to change the behavior of your code, you just load the container with a different object.
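A real DI tool does the heavy lifting for you, but a toy sketch shows the idea (all names here are hypothetical): clients depend only on an interface, and changing behavior is just a different registration.

```csharp
using System;
using System.Collections.Generic;

public interface ICreditChecker { bool Approve(decimal amount); }

// Two interchangeable implementations of the same interface:
public class StandardCreditChecker : ICreditChecker
{
    public bool Approve(decimal amount) => amount <= 5000m;
}
public class PrepaidCreditChecker : ICreditChecker
{
    public bool Approve(decimal amount) => true; // credit is already on account
}

// A toy container standing in for a real DI tool:
public class Container
{
    private readonly Dictionary<Type, Func<object>> _registry =
        new Dictionary<Type, Func<object>>();

    public void Register<T>(Func<T> factory) where T : class =>
        _registry[typeof(T)] = factory;

    public T Resolve<T>() where T : class => (T)_registry[typeof(T)]();
}
```

Client code asks the container for an ICreditChecker and never names a concrete class; swapping in prepay behavior later is just a different registration, with no client code changes.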
But you need to leverage multiple tools when creating a solution. For example, interfaces support dependency injection by loosely coupling the API your code calls to any particular implementation of your feature. If you’re going to use the strategy pattern, you’ll probably find that it works even better in conjunction with the factory method pattern. Most design patterns assume you’ll be using interfaces, for example.
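For example (again, with hypothetical names), an interface is what lets the pieces cooperate: the strategy implementations share an interface, and a factory method decides which one a client gets, so the client is coupled to neither concrete class:

```csharp
using System;

public interface IDeliveryStrategy { string Deliver(); }

// Two strategies behind one interface:
public class ShipPhysical : IDeliveryStrategy
{
    public string Deliver() => "ShippingReservation";
}
public class ShipDigital : IDeliveryStrategy
{
    public string Deliver() => "EnableDownload";
}

// The factory method hides which strategy class the client gets:
public static class DeliveryStrategyFactory
{
    public static IDeliveryStrategy For(bool isDigital) =>
        isDigital ? (IDeliveryStrategy)new ShipDigital() : new ShipPhysical();
}
```

Adding a new delivery option means a new class and one new case in the factory; every client that works through IDeliveryStrategy is untouched.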
To put it another way, it’s not about the tools and techniques but how they’re used together. Keeping the three goals for maintainable applications in mind, along with a testing tool that lets me adapt my code as my solution evolves, makes it easier for me to make decisions about how to use those tools.
Peter Vogel is a system architect and principal in PH&V Information Services. PH&V provides full-stack consulting from UX design through object modeling to database design. Peter also writes courses and teaches for Learning Tree International.