For example, the first one that I looked at tested an error scenario in which a system function was called after permission on the input file had been revoked from that process. Although a valid test, the likelihood of this issue occurring in a live environment was slim and the potential risk to the system low. The test itself, on the other hand, relied on bespoke scripts that carried a high maintenance overhead when porting and a high risk of failure.
I contacted the tester who created the test and put it to him that this type of test was possibly better suited to an initial exploratory assessment of the functionality involved than to full automation and repeated execution. He accepted this and we agreed to remove the test.
I took this as an opportunity to review with the team what tests needed adding to the regression packs and when. Some of the key points that should be considered:-
- Once a test has passed, what is the risk of a regression occurring in that area?
- How much time/effort is involved in developing the test in the first place compared to the benefit of having it repeatable?
- Is the likelihood of the test failing through its own fragility higher than the chance of it catching a regression?
- Will the test prove difficult to port/maintain across all of your test environments?
Just because we can automate a test doesn't mean that we always should. Aim to perform a cost/benefit analysis, weighing the value of having the test in your automation arsenal against the cost of developing, running and maintaining it. It may become apparent that the value of the test is less than the effort it takes to develop, execute and maintain. In this situation the best course of action may be to execute it manually as an exploratory test in the initial assessment phase, and focus our automation efforts on those tests that give us a bit more bang for our buck.
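To make that trade-off a little more concrete, here is a minimal sketch of how the comparison might be framed. It is purely illustrative: the function, the cost figures and the crude expected-value model are all my own assumptions for the sake of the example, not anything prescriptive.

```python
# Illustrative break-even sketch: is automating this test worth it?
# All figures are hypothetical estimates you would supply yourself.

def automation_worthwhile(build_cost_hrs, maintain_cost_hrs_per_run,
                          manual_cost_hrs_per_run, expected_runs,
                          regression_likelihood, regression_impact_hrs):
    """Crude heuristic: compare the lifetime cost of automating a test
    against the value of repeating it (manual effort avoided plus the
    expected saving from catching a regression early)."""
    automation_cost = build_cost_hrs + maintain_cost_hrs_per_run * expected_runs
    manual_cost = manual_cost_hrs_per_run * expected_runs
    expected_benefit = regression_likelihood * regression_impact_hrs
    return (manual_cost + expected_benefit) > automation_cost

# A fragile, environment-specific test for an unlikely failure scenario:
print(automation_worthwhile(build_cost_hrs=16, maintain_cost_hrs_per_run=0.5,
                            manual_cost_hrs_per_run=0.25, expected_runs=50,
                            regression_likelihood=0.02, regression_impact_hrs=8))
# -> False: probably better run once as an exploratory check than automated forever.
```

Even a rough back-of-the-envelope calculation like this can make it obvious when a test's ongoing maintenance burden swamps any value it is likely to return.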
Copyright (c) Adam Knight 2009-2010