If you’ve never done Test Driven Development or aren’t even sure what this “crazy TDD stuff” is all about, then this is the series for you. Over the next 30 days this series of posts will take you from “I can spell TDD” to being able to consider yourself a “functional” TDD developer. Of course, TDD is a very deep topic and truly mastering it will take quite a bit of time, but the rewards are well worth it. Along the way I’ll be showing you how tools like JustCode and JustMock can help you in your practice of TDD.
Previous Posts in this Series: 30 Days of TDD – Day 21 – A Tale of Two Defects
The focus of this series has been Test Driven Development. TDD relies on unit tests, and so the focus of this series has been on unit testing. And just to make sure I’m clear: your unit tests, and the practice of letting them drive your development, are VERY important! But the truth is that unit testing is not the only kind of testing you should be thinking about.
A large part of the effort we expend writing unit tests is spent making sure that our code under test is isolated so that we are only testing a specific unit of work. But there comes a time when you need to bind the components and classes of your application together to make a whole. What’s going to happen when you do that?
Integration testing is similar to unit testing, but instead of running our code in isolation we make a point of using the other components, classes and external resources in our tests. Where a unit test would mock a database component to ensure that it is not actually accessing the external database, an integration test will use the actual database component to ensure that it is talking to the external database. These tests are a bit easier to write than unit tests, as we don’t need to worry about mocking, but they can take quite a bit longer to run.
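To make the distinction concrete, here is a minimal, language-agnostic sketch (shown in Python here, though this series’ examples are in C#). The `UserRepository` class and its method are hypothetical, not from the series’ code; the point is that the unit test mocks the database connection while the integration test runs the same code against a real one.

```python
import sqlite3
from unittest import mock

# Hypothetical data-access class; names are illustrative only.
class UserRepository:
    def __init__(self, connection):
        self._conn = connection

    def count_users(self):
        return self._conn.execute("SELECT COUNT(*) FROM users").fetchone()[0]

def unit_test_count_users():
    # Unit test: the database connection is mocked, so no real I/O happens.
    conn = mock.Mock()
    conn.execute.return_value.fetchone.return_value = (3,)
    assert UserRepository(conn).count_users() == 3

def integration_test_count_users():
    # Integration test: the same code runs against a real (in-memory) database.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice')")
    assert UserRepository(conn).count_users() == 1

unit_test_count_users()
integration_test_count_users()
```

Note that the integration test needs a real schema and real data to exist, which is exactly why these tests take longer to set up and run.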
Integration tests are important to ensure that the individual units of work you are creating all tie together properly when it’s time to integrate them. I’m not just talking about the individual classes in your application; you also need to be sure that you are properly interacting with any external resources your application uses. The afternoon of a deployment is not the time to find out that you misinterpreted the documentation for that important third-party web service your application relies on. You want to start your integration testing early and make running the tests part of your nightly build. If your developers are able and willing to run them locally, that’s fine. But beware; on large systems I’ve seen full suites of integration tests take over nine hours to complete.
I find the most useful types of integration tests are the kind that replicate user behavior. For example, if I have a process for a user to create an account on my site, I’ll start my test at the UI level. This means that if I’m writing an MVC application, my integration test will invoke an action on the Controller that will execute all the way down to the back-end data store and back. This ensures that all the layers of my application are integrating properly and my process is correct. The last step of any integration test should be to “clean up.” This means reversing any change you made to your environment during the test. If you created an entity, delete it. If you changed something, change it back. If you deleted something, replace it. You get the idea. Like unit tests, you ideally want to be able to run your integration tests in any order or configuration. This is much easier if you are working from a static starting environment.
Most applications have some sort of user interface that is presented in either a web browser or a window on the user’s screen. These views often have their own logic for things like input validation and hiding or displaying form elements. These views are part of your application and should be tested as well. The benefit of creating and using these kinds of tests with a tool like Telerik’s Test Studio is that you can automatically verify much of the logic in your user interfaces, and these tests can be run on a regular basis as part of your build.
There are some considerations when creating and running these types of tests. Like integration tests, they can be quite slow, so it makes more sense to run them as part of an automated build process than to have developers run them locally. These tests can also be a bit brittle, and seemingly simple changes can require you to update your tests. Because of that, I don’t try to create a test for every workflow in my application. Instead, I choose several key workflows and write tests for them. I understand that this is not going to catch every possible defect in my user interface, but if it catches 80% of them it’s worth the time and effort to create and run these tests.
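One way to keep this kind of test less brittle is to exercise the view’s validation rules directly where you can, and save full browser automation for the few key workflows. Here is a hedged sketch in Python (a tool like Test Studio would drive the real UI instead); `validate_signup_form` and its rules are hypothetical examples, not the series’ code:

```python
# Hypothetical mirror of a view's client-side validation rules.
def validate_signup_form(fields):
    """Return a list of error messages; an empty list means the form is valid."""
    errors = []
    if "@" not in fields.get("email", ""):
        errors.append("email is invalid")
    if len(fields.get("password", "")) < 8:
        errors.append("password must be at least 8 characters")
    return errors

def test_signup_form_validation():
    # Sign-up is a key workflow, so it gets an automated check; obscure
    # workflows are left to human testers.
    assert validate_signup_form({"email": "a@b.com", "password": "longenough"}) == []
    errors = validate_signup_form({"email": "nope", "password": "short"})
    assert "email is invalid" in errors
    assert "password must be at least 8 characters" in errors

test_signup_form_validation()
```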
Most development teams don’t start performance testing until well into the project. In my opinion this is a huge mistake. If you are six months into a nine-month project before you start performance testing, you’ve lost six months of performance data about your application forever. If you don’t start looking at performance until after your application is slow, it’s much more difficult to determine why it’s slow.
On my teams we start performance testing and performance profiling with the first deployment to QA. Yes, when your application has limited features, limited users and limited data, the performance will be quite good. As time goes on and the application becomes larger and more complex, it’s normal to see your performance gradually degrade. But what happens if you come in one morning and your application’s performance has fallen off a cliff? By tracking performance daily from the start of the project, you’ll have a much easier time finding and correcting the offending code. Tools like Telerik’s Test Studio and JustTrace can help you make sure that your application’s performance remains high, alert you when it degrades too much, and help you find and fix the problem.
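The “track daily, alert on regression” idea can be sketched with a simple timing harness (Python here for illustration; a profiler like JustTrace does far more). The 0.5-second budget and the shape of the history records are assumptions, not values from the series:

```python
import time

# Sketch of a daily performance check: time a key operation, record the
# sample so trends are visible over the life of the project, and fail the
# build if the operation blows its budget. The 0.5s budget is an assumption.
def measure(operation, budget_seconds=0.5, history=None):
    start = time.perf_counter()
    operation()
    elapsed = time.perf_counter() - start
    if history is not None:
        history.append({"when": time.time(), "seconds": elapsed})
    assert elapsed <= budget_seconds, (
        f"operation took {elapsed:.3f}s, budget is {budget_seconds}s"
    )
    return elapsed

history = []  # in a real build this would be persisted between runs
measure(lambda: sum(range(100_000)), history=history)
print(f"samples recorded: {len(history)}")
```

Persisting `history` between nightly builds is what turns a pass/fail check into the day-by-day trend data that makes a sudden cliff easy to spot.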
If you are a developer of an application that is deployed on the Internet and think that your site is not at risk of being hacked, you are wrong. You may say, “My site doesn’t store any financial information or private information from my users; why would anyone want to hack me?” Most attacks on sites are not performed by a person; they are done by bots. And the goal is not necessarily to steal your data. Often the goal of these attacks is simply to gain control of your server. From there the attacker can try to tunnel deeper into your company’s network, set up a spam server, or enlist your machine in a Denial of Service attack.
EVERY application and site on the Internet is susceptible to attack. Because of that, it’s necessary to ensure that our applications are secure. At some point, as part of your normal testing, you should test the security of your application. Keeping your application secure keeps your users safe and your system administrators happy.
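One small, automatable piece of security testing is probing your own data-access code with known attack payloads. Here is a minimal sketch (Python for illustration; `find_user` and its schema are hypothetical): a classic SQL-injection string is fired at a lookup that uses parameterized queries, and the test asserts the payload is treated as data rather than as SQL.

```python
import sqlite3

# Hypothetical lookup; the username is bound as a parameter, never
# concatenated into the SQL string, so injection payloads stay inert.
def find_user(conn, username):
    row = conn.execute(
        "SELECT username FROM users WHERE username = ?", (username,)
    ).fetchone()
    return row[0] if row else None

def test_login_rejects_sql_injection():
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (username TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice')")
    # A payload that would match every row if it were concatenated into SQL.
    assert find_user(conn, "' OR '1'='1") is None
    assert find_user(conn, "alice") == "alice"

test_login_rejects_sql_injection()
```

Tests like this don’t replace a real security review or penetration test, but they do keep the most common mistakes from quietly creeping back in.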
Automated testing is great and can save a tremendous amount of time and effort in your software development. But at some point you need a human being to get their eyes on the application. Many things can and should be automated, but there will always be things that can only be tested by humans. I want to automate as much as I can so that my QA testers (and ideally my business users) don’t waste their time on the obvious and easily found defects. I want to free the humans up to find the things that only humans can find. As projects run long and over budget, it can be tempting to skip this step. That is a tremendous mistake. If no other type of testing happens, you MUST make sure that User Acceptance Testing does happen.
Unit tests, the focus of TDD, are very important, but they are not the only type of test. There are many other types of tests that exercise different aspects of our applications, and they should not be skipped. By automating these tests I am able to free up my QA testers to test the things that only they can test. These other types of tests all add value, and you should investigate implementing them in your software development practice.
Continue the TDD journey: