It took me a while to establish the agreement of acceptance criteria as a fundamental part of our sprint operations, but for the last couple of years we have agreed and published acceptance criteria for each story being worked on. Prior to this we had struggled through the use of mini requirements specifications that were fed into the team at the start of every iteration, with all of the negative implications for the testing effort that come with parallelising the testing and development efforts based on a written specification of behaviour.
The acceptance criteria became an essential aspect of our testing operation. The ongoing discussion and verification of these criteria helped to ensure that the testing effort stayed in line with the development while remaining focussed on testing the target customer value. As a story approached completion the tester would identify whether each criterion had been met or not, and whether there were any limitations on the manner in which it had been achieved. Limitations usually revolved around the scope of operation of the functionality in terms of scalability/performance, or a limit on the level of testing that had been achieved in the time available.
This approach was OK, but through various discussions and retrospectives we identified a common feeling among the team that reporting story status against criteria was too restrictive. Trying to represent the result of a complex piece of work through a series of done/not done statuses seemed insufficient to convey to product management the true status of the story. The acceptance criteria were intended as a tool to use as a basis for discussion of what had been delivered, but instead we found ourselves working to raise awareness of any issues encountered from behind a mask of 'Done' statuses.
Through a targeted team workshop we tackled the limitations of acceptance criteria and what we could do to improve the process of gathering them, both for feeding into the development process and for reporting feature status at the end of the iteration. We identified two significant areas that we wanted to change:-
Criteria, Assumptions and Risks
Instead of focussing the elaboration process for each feature on identifying acceptance criteria alone, the tester is now responsible for identifying three different aspects of the development story (a simple sketch of how these might be recorded follows the list):-
- Acceptance Criteria - Still an important aspect of the feature. Each story worked on has a distinct set of criteria which we aim to meet through the implementation of that feature. Where possible the criteria focus on value delivered through externally visible behaviour rather than internal structures and implementation details.
- Assumptions - Sometimes it is necessary to make assumptions in order to get started on a feature; however, hidden assumptions can be feature killers (I discuss this further here). If we are making any assumptions in order to make a start on the development of a feature then we identify these and publish them to the product management team along with the criteria. The tester's early work on the feature should involve the confirmation, modification or negation of these assumptions with the appropriate stakeholder representatives. If we reach the point at which development work is progressing and affected criteria are being tested on the basis of an unconfirmed assumption, then the confirmation of that assumption becomes a top priority.
- Risks - One of the key elements that we identify, both during elaboration and throughout the development of a story, is the set of key risks involved in the implementation of that story. The risks are identified through whole-team discussions and we work to mitigate them through the course of development. This may be through focussed testing, additional development work, or simply highlighting further research/testing activities which need to be prioritised to gain more knowledge of how likely that risk is to manifest itself in a problem for the customer. The risks identified often centre on the complexity of development, the likelihood of collateral issues, the time required for thorough testing, or any other factor that we feel poses a risk to the successful delivery or successful operation of that feature.
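To make the three aspects concrete, here is a minimal sketch of one way a story's elaboration output could be recorded. The structure, field names and example entries are my own illustration and are not taken from the team's actual artefacts.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class StoryElaboration:
    """Hypothetical record of the tester's elaboration output for one story."""
    title: str
    acceptance_criteria: List[str] = field(default_factory=list)  # externally visible behaviour we aim to deliver
    assumptions: List[str] = field(default_factory=list)          # to be confirmed, modified or negated with stakeholders
    risks: List[str] = field(default_factory=list)                # to be mitigated or escalated during the sprint

# Illustrative example only.
story = StoryElaboration(
    title="Example: bulk data import",
    acceptance_criteria=["Imports complete within the agreed batch window"],
    assumptions=["Input files arrive in the agreed format"],
    risks=["Collateral impact on existing query performance"],
)
```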
Done is a Sliding Scale of Confidence
Instead of a binary done/not done measure of completion status, we've opted to report a measure of confidence against each of the above items. Initially we've chosen a simple high/medium/low/none scale. For each type of item the confidence measure reflects something different (a minimal sketch of how this might look in practice follows the list):-
- Assumptions - Confidence that the assumption has been confirmed. In essence, have we removed this as an assumption and confirmed it as a constraint on the scope of the work being delivered?
- Risks - Confidence in the steps we have taken to mitigate the risk. If an identified risk is not going to be mitigated to a level that we are happy with during the sprint, then we report low confidence against that risk as early as possible and discuss potential actions that may be taken to address it in future iterations.
- Criteria - Confidence that the criterion has been met. This can be affected by the level of testing that we have been able to achieve against the story, the complexity of the item, and the number and severity of problems that we have encountered through testing the feature.
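The sketch below shows one way the sliding scale might be attached to each tracked item and summarised for product management. Again, the names and the example report are illustrative assumptions rather than a description of our actual tooling.

```python
from dataclasses import dataclass
from enum import Enum

class Confidence(Enum):
    """The simple high/medium/low/none scale described above."""
    NONE = 0
    LOW = 1
    MEDIUM = 2
    HIGH = 3

@dataclass
class TrackedItem:
    kind: str                                  # "criterion", "assumption" or "risk"
    description: str
    confidence: Confidence = Confidence.NONE   # replaces a binary done/not done flag

def report(items):
    """Print a simple end-of-sprint confidence summary."""
    for item in items:
        print(f"{item.kind:<10} {item.confidence.name:<6} {item.description}")

# Illustrative example only.
report([
    TrackedItem("criterion", "Imports complete within the agreed batch window", Confidence.HIGH),
    TrackedItem("assumption", "Input files arrive in the agreed format", Confidence.MEDIUM),
    TrackedItem("risk", "Collateral impact on existing query performance", Confidence.LOW),
])
```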
As with our original criteria, the elements that we identify here are intended as tools to aid our discussion of the development stories, rather than the sole means of communication. Having a slightly richer, confidence-based approach, however, does give us more flexibility in reporting the true status of the items that we are testing. It also affords the Product Management team a clearer high-level indication of the status of the items being delivered, and of the confidence that we have in them, than discussions focussed on criteria alone.
In addition to the benefits as the sprint progresses, this approach has also yielded unexpected benefits in that our elaboration discussions now expose far more relevant information from the entire team, helping the tester to identify areas to focus on, which in turn drives improvements in the testing being performed.
The thing I like most about this approach is that the confidence measure is a subjective one that is under the control of the tester. We are not counting test passes/fails, counting bugs, or using any other arbitrary measure of success, with the potential for gaming and false targets that is inherent in such practices. Instead we are utilising the judgement and expertise of the tester to summarise the confidence that we have in each item delivered, which is something that I believe yields far more relevant information and something on which I place significantly more value.
Copyright (c) Adam Knight 2009-2010