Tuesday, 24 September 2013

Blaming the Tester


It has been my unfortunate experience more than once to have to defend my testing approach to customers. On each occasion this has been deemed necessary in the light of an issue that the customer has encountered in the production use of a piece of software that I have been responsible for testing. I'm open to admitting when an issue that should have been detected was not. What has been particularly frustrating for me in these situations is when the presence of the issue in question, or at least the risk of it, had already been detected and raised by the testing team...

The Dreaded Document

I have a document. I'm not proud of it. It describes details of the rigour of our testing approach in terms that our customers are comfortable with. It talks about the number of tests we have in our regression packs, how many test data sets we have and how often these are run. The reason that the document exists is that, on the rare occasion that a customer encounters a significant problem with our software, a stock reaction seems to be to question our testing. My team and I created this document in discussion with the product management team as a means to explain and justify our testing approach, should this situation arise.

The really interesting element in exchanges of this nature is that no customer has ever questioned any other aspects of our development approach, irrespective of how much impact they may have on the overall software quality.

  • They do not question our coding standards
  • They do not question our requirements gathering and validation techniques
  • They do not question our levels of accepted risk in the business
  • They do not question our list of known issues or backlog items
  • They do not question the understood testing limits of the system

Instead, they question the testing.

This response is not always limited to external customers. In previous roles I've even had people from other internal departments questioning how bugs had made their way into the released software. I've sat in review meetings listening to individuals say how they '... thought we had tested this software' and how they wanted to find out how an issue 'got through testing'. Luckily this has not been the case in my current company; however, I have had to face similar questions from external customers, hence the document.

A sacrificial anode

The behaviour of the business when faced with questions from customers over testing has typically been to defend the testing approach. Whilst this is reassuring and a good vote of confidence in our work, it is also very interesting. It seems that there is a preference for maintaining the focus on testing, rather than admitting that there could be other areas of the business at fault. Product owners would apparently rather admit a failure in testing than establish a root cause elsewhere. Whilst frustrating from a testing perspective, I think that on closer examination there are explainable, if not good, reasons for this reluctance.

    • The perception of testing in the industry - Whilst increasing numbers of testers are enjoying more integrated roles within development teams, the most commonly encountered perception of testing within large organisations is still of a separate testing function, one seen as less critical to software development than product management, analysis and programming. As a consequence I believe that it is deemed more acceptable to admit a failure in testing than in other functions of the development process which are seen as more fundamental. A reasonable conclusion, then, is that if we don't want testers to receive the blame for bugs in the products, we need to integrate more closely with the development process. See here for a series of posts I wrote last year on more reasons why this is a good idea.
    • Reluctance to admit own mistakes - Often the individuals asked to explain the presence of an issue that was identified by the testing were the ones responsible for making the decision not to follow up on that issue. In defending their own position it is easy to use a mistake in testing as a 'sacrificial anode' to draw attention away from risk decisions that they have made. This is not a purely selfish approach. Customer perception is likely to be more heavily impacted by an exposed problem in the decision making process than by one in the testing, as a result of the phenomenon described in the previous point. It therefore makes sense to sacrifice some confidence in the testing rather than admit that a problem arose through the conscious taking of a risk.
    • "Last one to see the victim" effect - A principle in murder investigation is that the last person to see the victim alive is the likeliest culprit. The same phenomenon applies to testing. We're typically the last people to work on the software prior to release, and therefore the first function to blame when things go wrong. This is understandable and something we're probably always going to have to live with, however again the more integrated the testing and programming functions are into a unified development team, the less likely we are to see testing as the ones who shut the door on the software on its way out.

Our own worst enemy

Given the number of testers that I interact with and follow through various channels, I get a very high level of exposure to any public problems with IT systems. It seems that testers love a good bug in other people's software. What I find rather disappointing is that, when testers choose to share news of a public IT failure, they will often bemoan the lack of appropriate testing that would have found the issue. I'm sure we all fall into this mindset; I know that I do. Whenever I perceive a problem with a software system, either first hand or via news reports, I convince myself that it would never have happened in a system that I tested. This is almost certainly not the case, and demonstrates a really unhealthy attitude. By adopting this stance all we are doing is reinforcing the idea that it is the responsibility of the tester to find all of the bugs in the software. How do we know that the organisation in question hasn't employed capable testers who fully apprised the managers of the risks of such issues, and the decision was made to ship anyway? Or that the testers recommended that the area be tested and a budgetary constraint prevented them from performing that testing? Or simply that the problem in question was very hard to find, and even excellent testing failed to uncover it? We are quick to contradict the managers who have unrealistic expectations of perfect software, citing the near infinite combinations of functions in even the simplest systems, yet we seem to have the lowest tolerance of failure in systems that are not our own.

Change begins at home, and if we're to change the 'blame the tester' game then we need to start within our own community. Next time you see news of a data loss or security breach, don't jump to blaming the thoroughness, or even the absence, of testing by that organisation. Instead question the development process as a whole, including all relevant functions and decision making processes. Maybe if we start to do this then others will follow, and the first response to issues won't be to blame the tester.


image: http://www.flickr.com/photos/cyberslayer/2535502341

Phil Kirkham said...

Great post. I now dread it when 'public' bugs are found, as there will be a flood of blogs and tweets asking 'who tested this?' and claiming that their tool/approach would have found it, from people with no background knowledge of what went on.

Anonymous said...

Great reading Adam. Testers sometimes are their own worst enemies. If we started looking at testing as a software development activity, just like analysis and coding, we would probably understand better that through testing we are accountable for some mistakes, but as a software development team we are accountable for everything: the good, the bad and the ugly.

Jesper L. Ottosen said...

If the paper version of tTP existed I would have loved to see it there, and carry it with me too.

Appreciate the two-sided point: lessons for testers and for business decision makers. It's a business decision to ship.

If a company blames the testers for the defects found, and at the same time apparently wants trustworthiness as a value, then there is indeed a mismatch.

Paul said...

Three paragraphs in and I was nodding in agreement. I think you've unpicked the reasoning behind blaming the testers rather well, as it chimes with my experiences.

I would say that in my professional experience, I have found it extremely difficult to point out, without appearing overly defensive, that the whole process leading up to product delivery (including but not limited to testing, of course) is responsible for errors remaining in a shipped product.

It's only natural for people to point fingers at the last set of people to have touched the product. It's also wrong.

James Christie said...

Very good article. Sometimes I've thought the more generous contract rates for test managers included a scapegoat premium!

Joe said...

I'm in complete agreement that often the first reaction to a public bug is "The testers must have screwed up!"

Perhaps I'm fortunate, but I have seldom worked for a company that didn't want to dig deeper to find out what REALLY happened (often performing a Root Cause Analysis), rather than jump to conclusions about the testing itself. When we dig in, sometimes we find that there was a flaw in testing. But more often, we find breakdowns elsewhere.

Still, there is indeed a public perception out there that has existed throughout all the years I've been involved in testing - that no matter how bad the requirements, how bad the development process, how bad the business process - testing can still save the day.

To combat that, I never let my team of Testers be gatekeepers. Our role is one of enlightenment - we assess the state of the system and report our findings. But the Business gets to make the decision if more testing is desirable, if bugs can be deferred until a future release, etc. Quality belongs to the entire business - not just QA.

When I say something like "perhaps they should have tested more" it isn't directed to the testers (since we testers ALWAYS want to test more), but it's directed to the people who make the decision regarding the sufficiency of the testing (not often us).

Adam Knight said...

Thanks Phil,

I'm glad it is not just me that has noticed this happening. Testers rarely have a public platform to discuss any failures associated with projects they've been involved in, so they're relying on the rest of the community to give them the benefit of the doubt and consider the many possible causes of any failure.

Adam.

Adam Knight said...

Thanks for reading and taking the time to comment. I've certainly found that the more integrated I am as a tester into the development team as a whole, the less likely my own company are to isolate testing as the source of any failure. Sadly external customers don't always have the same perception and tend to question the testing when mistakes occur.

Adam

Adam Knight said...

Thanks Paul,

I'm really glad that my post resonated with you. It is difficult to explain these things without coming across as whining. Whilst I hate the CYA approach of some testers, I do think that one of the keys is ensuring that risks are raised early in the process and throughout, rather than retrospectively when problems occur.

Thanks for taking the time to read and comment.

Adam

Adam Knight said...

Hah - yes, you're probably right (although I'm sure no-one would admit that).

Thanks for reading.

Adam Knight said...

Joe,

Thanks. You're very fortunate that the companies you've worked for could see the need for root cause analysis rather than taking the myopic 'blame the tester' approach.

As with the experience that I wrote about in this post, I try to focus my annoyance on the failure to consider a possible fault, rather than on the specific testing itself. Whether the omission occurred in specification, design, development or testing is something that I cannot know and therefore try not to presume.

Thanks for commenting.

Adam.
