Running both the testing and support operations in my organisation affords me an excellent insight into the issues affecting our customers and how these relate to the efforts and experiences of the testers who worked on the feature in question. I recently had reason to look back and examine the testing done on a specific feature after that feature exhibited an issue at a customer site. What became apparent in looking back was that, during that feature's development, I had failed to follow two important principles that my team have previously worked to maintain in our testing operations.
A dangerous temptation
When employing specialist testers within an agile process, one of the primary challenges is maintaining the discipline of testing features during the same sprint as the coding. Over the time that I've been involved in an agile process, maintaining this discipline has occasionally proved difficult in the face of external pressures, yet I feel it has been key to our continued successful operation as a unified team.
During the development of a tricky area last year we found that the testing of a new query performance feature was proving very time-consuming. The net result was that we didn't have as much testing time as originally thought to devote to another related story in the sprint backlog. Typically in this situation we would aim to defer the item to a subsequent sprint, or to shift roles and bring someone else in as a tester on that piece. For various reasons, in this case we decided to complete the programming and then test in the following sprint.
Some reading this will be surprised that an agile team would ever consider testing and developing in separate sprints. Believe me, it is not the case for all teams who describe themselves as agile. A few years ago, at the 2009 UKTMF Summit, I attended a talk by Stuart Reid, "Pragmatic Testing in Agile projects", in which Stuart suggested that testing in a sprint subsequent to the coding was a common shape for agile developments. Many testers that I have spoken to in interviews reinforce this notion, claiming to have worked in agile teams with scrums that were devoted solely to the testing process, with a single build delivery into the testing sprint. In fact, one individual I interviewed from a very large company described a three-stage 'agile' process to me in which coding, test script writing and test script execution were all done in separate sprints.
The purpose of this post is not to criticise these teams; however, I do personally believe that this is an approach that favours monitoring over responsibility, at the expense of the true benefits that agile can deliver. In my experience the benefit of testing in the same sprint is that we can provide fast feedback on new features and a focus on quality during development. Without this benefit, developments can quickly exhibit the characteristic problems of testing as an isolated activity after coding. Even at the level of an individual feature, delaying feedback from testing until after the programming has 'finished' results in a significant change in the tester-to-developer dynamic. The problems reported by the tester are distracting from, rather than contributing to, the programmer's active projects, something I explored more here. Some teams may achieve success through delayed testing in isolated sprints, but for our team it marks a retrograde step from our usual standards.
Unrepresentative confidence
The second lapse in principles arose in the completion of the story.
When the testing commenced it was clear that the functionality in question was potentially impacted by pretty much every administration operation that could be performed on the stored data. The tester in question worked diligently to explore a complicated state model and exposed a relatively high number of issues compared to our other developments. A lot of coding effort was required to address these issues; however, this was done under the extra pressure of having estimated scant programming work for that story in the sprint in which it was tested, in the belief that it was essentially complete.
As I discussed in this post, I use a confidence-based approach to reporting story completion, to allow for the many variables that can affect the delivery of even the simplest features. At the end of the story in question, under my guidance, the tester reported high confidence in all of the criteria on the basis that all of the bugs they had found had been retested successfully. I did not question this at the time; however, in hindsight I should have suggested a very different report given the nature and prevalence of the bugs that had been encountered. By the end of the sprint all of the bugs that had been found were fixed, but reporting high confidence on this basis belied the number of issues that had been discovered and the corresponding likelihood of there being more.
To hijack a famous testing analogy: if you are clearing a minefield and every new path exposes a mine, there is a good chance that there are still mines to be found in the paths you haven't tried yet.
This problem can arise just as easily through the arbitrary cut-off of the sprint timebox as through a finite set of prescribed test cases. If, after completing the prescribed period or test cases, there are no outstanding issues, it is hard to argue for further testing activity; however, it is my firm belief that test reporting should be sufficiently rich and flexible to convey such a situation. As I'd discussed with my team when introducing the idea, the reporting of confidence is intended to prompt a decision - namely, whether we want to take any action to increase our confidence in this feature. A high number of issues found during testing is sufficient to diminish confidence in a feature and merit such a decision, despite those issues being closed. In this case we should have decided to perform further exploratory testing, or possibly review the design. As it was, the feature was accepted as it stood and no further action was taken.
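To make this concrete, here is a minimal sketch (hypothetical, and not our actual reporting tooling) of how a suggested confidence level for an acceptance criterion might be tempered by the volume of bugs found, even once every one of them has been closed, so that the report prompts the decision it is designed to prompt:

```python
# Purely illustrative sketch: suggest a confidence level for an
# acceptance criterion. The threshold below is arbitrary and for
# illustration only.

def suggest_confidence(bugs_found: int, bugs_open: int) -> str:
    """Return 'high', 'medium' or 'low' confidence for a criterion.

    Open bugs always cap confidence at 'low'. A high count of bugs
    found still lowers it to 'medium' even when all are closed - the
    minefield principle: many mines found suggests more remain -
    prompting a decision on further exploratory testing or a design
    review.
    """
    if bugs_open > 0:
        return "low"
    if bugs_found >= 5:  # arbitrary illustrative threshold
        return "medium"
    return "high"

# The story described above: many bugs found, all of them closed.
print(suggest_confidence(bugs_found=12, bugs_open=0))  # -> "medium"
```

The threshold is not the point; the point is simply that 'all bugs closed' and 'high confidence' are not the same statement, and the report should distinguish them.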
Problems exposed
We recently encountered a problem with a customer attempting to use the feature in question. Whilst the impact was not severe, we did have to provide a fix for a problem which was frustratingly similar to the types of issues found during testing.
I'm aware of the dangers of confirmation bias here, and the fact that we encountered an issue does not necessarily indicate that it would have been prevented had we acted differently. We have seen other issues from features developed much more in line with our regular process; however, there are some factors which make me think we would have avoided or detected this one by sticking to the principles described:
- The issue encountered was very similar in nature to the issues found during testing; it was essentially a hybrid of previous failures, recreated by combining the reproduction steps for known problems.
- The solution to the problem, after a group review with the senior developers, was to take a slightly different approach which utilised existing and commonly tested functions. This type of review and rework is just the sort of activity that we would expect to do if testing were exposing a lot of issues while the coding focus was still on that area, when rework would have been considered more readily.
Slippery slope
While this is something of a 'warts and all' post, I think this example highlights the dangers of letting standards lapse even briefly. It is naive to think that mistakes won't be made, and with short development iterations there is scant time to recover when they are. For this reason I think that a key to maintaining a successful agile development process is to identify lapses and increase effort to regain standards quickly. As the name suggests, a sprint is a fast-paced development mechanism, and in the same way that sprints can provide fast-paced continuous improvement, degradations in the process can develop just as quickly. Any slip in standards can become entrenched in a team if not addressed, and I've seen a few instances where good principles have been lost permanently through letting them slip for a couple of sprints.