Testing Cost-Benefit Analysis
I’m probably one of the first people to advocate writing tests, even for seemingly obvious cases. (See, for example, Really Dumb Tests.) But there are cases where testing might best be skipped: cases where tests not only have little value but also add unnecessary cost to change. It’s important to honor the principles of agile development and not let the “rule” of a test for everything get in the way of the “goal” of effective testing for higher productivity and quality.
Writing a unit test can increase the cost of a change (since you’re writing both the code and the test), but the cost is relatively low because of good frameworks, and the benefits outweigh the costs:
- The unit test documents how to use the code you’re writing (see the sketch after this list),
- The test provides a quicker feedback cycle while developing functionality than, say, running the application, and
- The test ensures that changes that break the functionality will be found quickly during development, so that they can be addressed while everyone has the proper context.
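For example, here is a minimal sketch (in Python’s unittest; the function and its behavior are hypothetical) of how a unit test both documents usage and gives fast feedback:

```python
import unittest

# Hypothetical domain code under test.
def apply_discount(price, rate):
    """Return price reduced by rate (e.g., 0.10 for 10%)."""
    if not 0 <= rate <= 1:
        raise ValueError("rate must be between 0 and 1")
    return round(price * (1 - rate), 2)

class ApplyDiscountTest(unittest.TestCase):
    # The test doubles as documentation: it shows the calling
    # convention and the expected result.
    def test_ten_percent_discount(self):
        self.assertEqual(apply_discount(100.00, 0.10), 90.00)

    # It also pins down edge-case behavior, so a regression
    # surfaces immediately, while the context is still fresh.
    def test_invalid_rate_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.00, 1.5)

if __name__ == "__main__":
    unittest.main()
```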
Automated integration tests, especially those involving GUIs, are harder to write, and they cover code that was likely already tested with unit tests, so it’s easy to stumble onto cases where the tests add little enough value, and enough cost, that it’s worth reconsidering the need for an automated test in a particular case.
On one project I worked on, the team was extremely disciplined about doing Test Driven Development. Along with unit tests, there were integration tests that exercised the display aspects of a web application. For example, a requirement that a color be changed would start with a test that checked a CSS attribute, and a requirement that two columns in a grid be swapped would result in a test that made assertions about the rendered HTML.
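Tests like these might have looked something like the following sketch (Selenium in Python here purely for illustration; the URL, selectors, and expected values are all hypothetical):

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

# Hypothetical URL and selectors, for illustration only.
driver = webdriver.Chrome()
try:
    driver.get("http://localhost:8080/orders")

    # A "change the color" requirement becomes a test that pins a CSS attribute.
    header = driver.find_element(By.CSS_SELECTOR, ".grid-header")
    assert header.value_of_css_property("color") == "rgba(0, 51, 102, 1)"

    # A "swap two columns" requirement becomes assertions on rendered HTML;
    # any later reordering of the markup breaks these assertions.
    cells = driver.find_elements(By.CSS_SELECTOR, ".grid-row td")
    assert cells[0].text == "Order #"
    assert cells[1].text == "Customer"
finally:
    driver.quit()
```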
The test coverage sounded like a good idea, but from time to time a low-cost (5 minute), low-risk change would take much longer (an hour) because tests needed to be updated and run, and unrelated tests would break. And in many cases the tests weren’t comprehensive measures of the quality of the application: I remember one time when a colleague asserted that it wasn’t necessary to run the application after a change, since we had good test coverage, only to have the client inquire about some buttons that had gone missing from the interface. Also, integration-level GUI tests can be fragile, especially if they are based on textual diffs: a change to one component can cause an unrelated test to fail. (Which is why isolated unit tests are so valuable.)
I suspect the reasons for the high cost/value ratio for these UI-oriented tests had a lot to do with the tools available. It’s still a lot easier to visually verify display attributes than to automate testing for them. I’m confident that tools will improve. But it’s still important to consider cost in addition to benefit when writing tests.
Some thoughts:
- Integration tests (especially GUI tests) tend to be high cost relative to their value.
- When in doubt, try to write an automated test. If you find that maintaining the tests, or their execution time, adds cost out of proportion to the value of a functionality change, consider another approach.
- Since GUI tests can be high cost relative to value, keep the view layer as simple as possible.
- If you find yourself skipping GUI tests for these reasons, be especially diligent about writing unit tests at the business level; doing so may drive you toward cleaner, more testable interfaces.
- Focus automated integration test effort on key end-to-end business functionality rather than on the visual aspects of an application (see the sketch after this list).
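To illustrate the last point, an end-to-end test can assert the business outcome rather than presentation details (again Python/Selenium; all names and URLs are hypothetical):

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

# Hypothetical flow: the test verifies that an order can be
# placed, not fonts, colors, or layout.
driver = webdriver.Chrome()
try:
    driver.get("http://localhost:8080/orders/new")
    driver.find_element(By.NAME, "sku").send_keys("WIDGET-42")
    driver.find_element(By.NAME, "quantity").send_keys("3")
    driver.find_element(By.ID, "place-order").click()

    # Assert the functional result, not the presentation.
    confirmation = driver.find_element(By.ID, "confirmation").text
    assert "Order placed" in confirmation
finally:
    driver.quit()
```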
Applications need to be tested at all levels, and automated testing is valuable; it’s vital to have some sort of end-to-end automated smoke test. Sometimes there is no practical alternative to simply running and looking at the application. Like all agile principles, testing needs to be applied with a pragmatic perspective, with the goal of adding value, not just following rules blindly.