Giving in to temptation
I'm sure that we are not the only test team to feel pressure. When faced with issues under pressure, the temptation is to focus on whatever activity removes the issue and restores a state of "normality". The visible issue in a suite of automated tests (or manual checks) is the failing check, and resolving a bug just far enough that the check returns the expected result can seem like the appropriate action for a quick resolution. The danger with this approach, however, is that it amounts to "gaming" the automation: we ensure that the checks pass even though the underlying issue has not been fully resolved. We focus on resolving the visible problem without doing the work needed to give us confidence in the underlying cause of the visible behaviour. Some simple examples:
- Fixing one checked example of a general case
Sometimes a negative test case provides just one example of a more general behaviour, e.g. our error handling in a functional area. If that check later exposes unexpected behaviour, then a resolution targeted at that specific scenario could leave other similar failure modes untested. I've seen this situation where a check deleted some files to force a failure in a transactional copy. When our regression suite uncovered a change in the transactional copy behaviour, the initial fix was to check for the presence of all files prior to the copy, fixing the test case but leaving open other similar failures around file access and permissions (a sketch of such a check follows these examples).
- Updating the result without ensuring that the purpose of the test is maintained
There is a danger, in focussing on getting a set of tests "green", that we actually lose the purpose of the test. I've seen this situation where a check flags up new behaviour, a tester verifies that the change is the result of an intentional new feature, and so the new result is updated in the automation repository, but the original purpose of the check is lost in the transaction.
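To make the first example concrete, here is a minimal sketch in a pytest style. The transactional_copy function, the file names and the test itself are hypothetical stand-ins for illustration, not the actual system from the story above: the negative check deletes one source file to force a failure and asserts that the copy rolls back cleanly.

```python
import os
import shutil

import pytest


def transactional_copy(src_files, dest_dir):
    # Hypothetical system under test: copy every file or none,
    # rolling back any partial copy if a source cannot be read.
    copied = []
    try:
        for path in src_files:
            target = os.path.join(dest_dir, os.path.basename(path))
            shutil.copy2(path, target)
            copied.append(target)
    except OSError:
        for target in copied:
            os.remove(target)
        raise


def test_copy_rolls_back_when_a_source_file_is_missing(tmp_path):
    # Negative check: delete one source file to force the copy to fail,
    # then assert that nothing is left behind in the destination.
    src, dest = tmp_path / "src", tmp_path / "dest"
    src.mkdir()
    dest.mkdir()
    files = [src / name for name in ("a.txt", "b.txt", "c.txt")]
    for f in files:
        f.write_text("data")
    os.remove(files[1])  # force the failure

    with pytest.raises(OSError):
        transactional_copy([str(f) for f in files], str(dest))

    assert list(dest.iterdir()) == []
```

The point of the sketch is the failure mode, not the code: if the product later changed and this check failed, a fix that only pre-validated file existence would turn this one scenario green again while the rollback behaviour for unreadable files and permission errors remained unchecked.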
These are a couple of simple examples, but I'm sure that there are many cases where we can lose sight of the importance of an issue by mistakenly concentrating on getting the automation result back to an expected state. No matter how well designed our checks and scenarios are, this is an inherently risky activity. Michael Bolton refers to the false confidence of the "green bar".
Re-Testing
I always try to focus on the fact that retesting is still testing and, as with all testing, is a conscious and investigative process. Our checks exist to warn us of a change in behaviour which requires investigation. They are a tool to describe our desired behaviour, not a target to aim at. As well as getting the check to an expected state, our activity during retesting should, more importantly, be focussed on:
- Performing sufficient exploration to give us confidence that no adverse behaviour has been introduced across the feature area
- Examining the purpose and behaviour of the check to ensure that the original intention is still covered
- Adding any further checks that we think may be necessary, given that we now know there is a risk of regression in that area (a possible sketch follows below)
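Continuing the hypothetical transactional copy example, that last point might look something like the sketch below: rather than one check for the single failure we happened to hit, a small parametrised set covers the related failure modes we now know carry regression risk. The helper names are again assumptions for illustration, and transactional_copy is the stand-in function from the earlier sketch.

```python
import os

import pytest

# transactional_copy is the hypothetical function from the earlier sketch.


def break_by_deleting(path):
    # The failure mode the original check covered.
    os.remove(path)


def break_by_removing_read_access(path):
    # A related failure mode; note this will not fail when run as root
    # and behaves differently on Windows.
    path.chmod(0)


@pytest.mark.parametrize(
    "break_one_source", [break_by_deleting, break_by_removing_read_access]
)
def test_copy_rolls_back_for_each_failure_mode(tmp_path, break_one_source):
    # One rollback check per failure mode, so a narrow fix for the
    # "missing file" case cannot quietly leave the others untested.
    src, dest = tmp_path / "src", tmp_path / "dest"
    src.mkdir()
    dest.mkdir()
    files = [src / name for name in ("a.txt", "b.txt", "c.txt")]
    for f in files:
        f.write_text("data")
    break_one_source(files[1])

    with pytest.raises(OSError):
        transactional_copy([str(f) for f in files], str(dest))

    assert list(dest.iterdir()) == []
```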
If we fall into the trap of believing that automation equates to testing, even on the small scale of bug retests, we risk measuring, and therefore fixing, the wrong thing. I am a huge proponent of automation to aid the testing effort; however, we should maintain awareness that test automation can introduce false incentives into the team that can be just as damaging as any misguided management targets.