Friday, 21 May 2010

The Kitchen Sink - why not all tests need automating

I've recently been working on testing a Windows platform version of our server system. A major part of that work was porting the test harnesses to run in a Windows environment. I'd completed the majority of this work, and with most of the tests running, checked and passing, I began to tackle the few remaining tests that were not simple to resolve. After spending the best part of a day struggling to get only a handful of them working, I decided to take a step back and review what the remaining tests were actually trying to do. I very quickly decided that the best approach was not to get the tests working after all, but rather to remove them from the test suite.

For example, the first one that I looked at tested an error scenario in which a system function was called after read permission on its input file had been revoked from the process. Although a valid test, the likelihood of this issue occurring in a live environment was slim and the potential risk to the system low. The test itself, on the other hand, relied on bespoke scripts that carried a high maintenance cost when porting and a high risk of failing for reasons unrelated to the functionality under test.
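As a rough illustration only (not the actual harness), a test of that shape might look something like the sketch below. The helper name load_input_file stands in for the real server entry point, and the use of icacls on Windows is an assumption; the point is simply how quickly a "revoke permission and check the error" test accumulates platform-specific scripting that has to be maintained separately for each environment.

```python
import os
import subprocess
import sys
import tempfile

def revoke_read_permission(path):
    """Remove read access from the input file before the system function is called."""
    if sys.platform == "win32":
        # On Windows, os.chmod only toggles the read-only attribute, so denying
        # read access needs an ACL change instead (icacls used here as an example).
        subprocess.run(["icacls", path, "/deny", "Everyone:(R)"], check=True)
    else:
        os.chmod(path, 0)  # strip all permission bits on POSIX systems

def test_rejects_unreadable_input():
    """Expect the process to fail cleanly when its input file is unreadable."""
    with tempfile.NamedTemporaryFile(delete=False, suffix=".dat") as f:
        f.write(b"sample input")
        input_path = f.name
    revoke_read_permission(input_path)
    # 'load_input_file' is a hypothetical stand-in for the function under test.
    result = subprocess.run(["load_input_file", input_path])
    assert result.returncode != 0, "expected a permission error to be reported"
```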

I contacted the tester who created the test and put it to him that this type of test was probably better suited to an initial exploratory assessment of the functionality than to full automation and repeated execution. He accepted this and we agreed to remove the test.

I took this as an opportunity to review with the team which tests needed adding to the regression packs and when. Some of the key points to consider:


  • Once a test has passed, what is the risk of a regression occurring in that area?

  • How much time/effort is involved in developing the test in the first place compared to the benefit of having it repeatable?

  • Is the likelihood of the test itself failing, for example through environmental or maintenance problems, higher than the chance of it detecting a regression?

  • Will the test prove difficult to port/maintain across all of your test environments?



Just because we can automate a test doesn't mean that we always should. Aim to perform a cost/benefit analysis of having the test in your automation arsenal versus the cost of running and maintaining it. It may become apparent that the value of the test is less than the effort it takes to develop, execute and maintain. In this situation the best course of action may be to execute it manually as an exploratory test in the initial assessment phase, and focus your automation efforts on those tests that give a bit more bang for your buck.
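One way to make that comparison concrete is a back-of-the-envelope expected-value check. The figures below are entirely illustrative assumptions, not numbers from the project above, but they show the shape of the trade-off.

```python
# Hypothetical figures, chosen only to illustrate the comparison.
runs_per_year = 250            # nightly regression runs
p_regression = 0.01            # chance per run that this area actually regresses
cost_of_escaped_defect = 8.0   # hours to diagnose and fix a missed regression
dev_cost = 16.0                # hours to automate the test in the first place
maintenance_per_run = 0.05     # hours of porting/triage amortised over each run

benefit = runs_per_year * p_regression * cost_of_escaped_defect
cost = dev_cost + runs_per_year * maintenance_per_run

print(f"benefit ~ {benefit:.1f} hours saved, cost ~ {cost:.1f} hours spent")
# If the cost comfortably exceeds the benefit, run the check once as an
# exploratory test rather than keeping it in the automated suite.
```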

Copyright (c) Adam Knight 2009-2010
