The problem is me. If I am honest with myself, I know that I want this test to run to completion. This is a terrible position from which to start performing any testing activity, automation or not.
A Conflict of Interest
When testers are automating tests they are subject to an interesting dichotomy. On one hand, the end goal of this activity is to execute some test in an attempt to expose information about the target software. Hence the activity is a testing one, aimed at increasing our ability to critically assess the system.
On the other hand, the process of automation is a creative activity in which we are developing new routines and programs, and so our natural tendency will be towards trying to get these to run successfully.
The danger that I certainly face when performing test automation is that I am subject to a positive bias in wanting the tests to run. When I am creating an automated test and a run encounters an issue, my natural inclination is not to investigate the underlying cause, but to take the software development approach of finding an alternative path that allows me to reach my end goal of automating my target scenario.
A programmer faced with a failure in their code might furrow their brow and utter the immortal line - "It should work" - often combined with a look of incredulity. In a similar fashion, when faced with an automated test that doesn't run, a tester's first reaction can be one of frustration and disappointment rather than a more appropriate stance of inquisitiveness. In his book "Perfect Software and Other Illusions About Testing", Jerry Weinberg relates the example of a tester who was so impressed with the automation tool testing her search page that she hadn't confirmed that the tool was actually checking that the search returned anything. This may be an extreme case, but it is very easy to fall into the mindset of placing value on something that appears to run successfully at the expense of making more important observations on the behaviour of the software.
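To make the distinction concrete, here is a minimal sketch in Python. The search_products function and the product names are hypothetical stand-ins for the system under test, not Weinberg's example; the point is the contrast between a check that merely runs and one that actually looks at what came back.

```python
# Hypothetical stand-in for driving the search page of the system under test.
def search_products(term: str) -> list:
    return ["widget", "widget pro"] if term == "widget" else []


def test_search_runs_without_error():
    # Weak: this "passes" even if the search silently returns nothing.
    search_products("widget")


def test_search_returns_results():
    # Stronger: the automation only has value if it checks the output.
    results = search_products("widget")
    assert results, "expected at least one result for a known product"
    assert all("widget" in item for item in results)
```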
Software is most exposed to the discovery of new issues when it is subjected to a usage or workflow for the first time. This holds whether in a new customer implementation, an unexpected usage or the execution of a new test. The process of automating tests is therefore a great opportunity to expose new issues in our software. The individual responsible for the automation needs to balance the natural problem-solving, creative approach of such an activity with a critical testing mindset in order to take advantage of this opportunity. Knowing that I can suffer from this problem, here are a few ideas that I think can help to avoid falling into this trap:
- Treat automation as a testing activity
- Question the reasons for failures
- Look for usability issues
- Resist the urge to rerun on failure
- Note down issues to follow up as Exploratory Tests
Treat automation as a testing activity

Many argue that automation should be done by programmers. This is not the case in my company. I certainly agree that the automation harnesses and tools are best developed by those with programming knowledge, but I do think that abstracting testers away from the automated test design activities is a risky approach. Just as a test will yield the greatest information the first time that it is run, so the activity of automating tests can expose new issues and should be treated as an exploratory activity. In creating the tests we have an opportunity to examine the system in new ways, which could be missed if our focus is on getting the test working. Treating the creation of the automation as a testing activity helps to ensure that the correct mindset is adopted when tackling the task and that issues are not glossed over.

My main concern with automation strategies in which the fixtures that drive the functionality are written by the same folks that engineer the software is that both software and fixtures are written from the same mindset. Many bugs do not stem from coding errors but are exposed simply by looking at the problem from a different perspective. If any bias towards a specific model of the solution is embedded into the automation as well as the software, this may prevent the tester from using the automation to drive the software in unexpected ways and expose issues.
Question the reasons for failures

Most initial attempts at automating a test will not run cleanly. Rather than immediately hacking around to get them working, spend some time questioning why the test did not run. Is this a scenario that a user could encounter? If so, was it clear from the system output what the problem was and how to resolve it? If the resolution required key developer knowledge that may not be in the hands of the customer, does this indicate a problem?
Look for usability issues

If you are struggling with the complexity of a piece of automation, it could well be that the functionality in question is unnecessarily complicated. Review the user-facing functionality, discuss it with the product owner to see if there is a problem, and examine the potential to simplify the feature.
Resist the urge to rerun on failure

Clearing down and hitting the run button again "to see if it works this time" could be hiding a multitude of state-based issues which would be lost by resubmitting. Even if the software turns out to be fine, it is likely that the tests themselves are not as isolated as they should be and you may need better setup and teardown routines. Resolving this will prevent multiple false negative results cropping up on future runs.
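As a rough illustration of the kind of setup and teardown I mean, here is a minimal sketch using Python's unittest. The create_archive helper is a hypothetical stand-in for the code under test; the point is that each test builds and removes its own state, so a failure is never masked by leftovers from a previous run.

```python
import shutil
import tempfile
import unittest
from pathlib import Path


def create_archive(target_dir: Path) -> Path:
    """Hypothetical stand-in for the code under test."""
    archive = target_dir / "archive"
    archive.mkdir(exist_ok=False)  # fails if the archive already exists
    return archive


class ArchiveTest(unittest.TestCase):
    def setUp(self):
        # A fresh working directory for every test, so no state leaks between runs.
        self.work_dir = Path(tempfile.mkdtemp(prefix="archive_test_"))

    def tearDown(self):
        # Always remove what we created, even if the test failed.
        shutil.rmtree(self.work_dir, ignore_errors=True)

    def test_archive_is_created(self):
        archive = create_archive(self.work_dir)
        self.assertTrue(archive.is_dir())


if __name__ == "__main__":
    unittest.main()
```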
Note down issues to follow up as Exploratory Tests

It may be that we don't have time to investigate something immediately; that's not ideal, but it is a fact of life. Try jotting down issues using a notepad or Rapid Reporter and follow them up later in exploratory testing charters once the automation task is completed.
Even as I write this, my test run has just reported "can't create the archive as it already exists" - oh well, I must have forgotten to clean the old run up, I'll just delete the directory and run it again...