On joining a new company (see my previous post), one of the most interesting activities for me is learning about the differences between their approaches and your own. Working in one organisation, particularly for a long period, leaves you vulnerable to the institutionalisation of your existing testing practices and ways of thinking. No matter how much we interact with a wider community, our learning will inevitably be interpreted relative to our own thinking, and the presence of the Facebook Effect can inhibit our openness to learning in public forums. Merging with another company within the walls of your own office provides a unique opportunity to investigate the differences between how you and another organisation approach testing. As part of the acquired company, I'll be honest and say there is an inclination towards guardedness about our own processes and how these are perceived by the new, larger company. The reverse is not true: coming into an entirely new organisation I'm free, if not obliged, to learn as much as I can about its culture and processes.
In the early stages I've not yet had the opportunity to speak to many of the testers, though I have some exciting conversations pending on tools and resources that we can access. One of the things I have done is browse through a lot of the testing material available in the online training and documentation. Whilst reading through that large body of training material I found this old gem, or something looking very much like it:
Now, I'm under no illusion that those responsible for testing place any credence in this at all; the material in question was externally sourced and not prepared in-house. It is, however, interesting how these things can propagate and persist away from public scrutiny.
I remember seeing the curve many years ago in some promotional material. Working in a staged waterfall process at the time, I found the image appealing. My biggest bugbears at the time were that testers weren't given complete and unchanging requirements, and that we weren't involved earlier in the development process. The curve therefore fed my confirmation bias: it seemed convincing because I wanted to believe it. It is hardly surprising, therefore, that the curve enjoyed such popularity among software testers and is still in circulation today.
Some hidden value?
Since I first encountered it, the curve has been widely criticised as having limited and very specific applicability. Given that it originated in the 1970s, this is hardly surprising. It has been validly argued that XP and Agile practices have changed the relationships between Specification, Coding and Testing so as to significantly flatten the curve; Scott Ambler gives good coverage of this in this essay. In fact, the model is now sufficiently redundant that the global head of testing at a major investment bank received some criticism for using the curve as reference material in a talk at a testing conference.
I'm not going to dwell on the limitations of the curve here; that ground has been well covered. Suffice to say that there are many scenarios in which a defect originating in the design may be resolved quickly and cheaply, both through development and testing activities and in a production system. The increasing success of 'testing in live' approaches in SaaS implementations is testament to this. The closer, more concurrent working relationships between coding and testing also reduce the likelihood and impact of exponential cost increases between these two activities.
Whilst the curve is seriously flawed, there is an important message implicit in it, one which I think suffers from being undermined by the problems with the original model. The greatest flaw for me is that it targets defects. I believe that defects are a poor target for a model designed to highlight the increasing cost of change as software matures. Defects range widely in scope and are not necessarily so tightly coupled to the design that they can't be easily resolved later; Michael Bolton does an excellent job of providing counter-examples to the curve here. There are, however, other characteristics of software which are tied more tightly to the intrinsic architecture, such that changing them becomes more costly with increasing commitment to a specific software design.
If we consider not defects per se, but rather any property the changing of which necessitates a change to the core application design, then we would expect the cost of changing that design to rise as we progress through development and release activities. Commitments are made to the existing design in code, documentation, test harnesses and customer workflows, all of which carry a cost if the design later has to change. In some cases I've experienced, it has been necessary to significantly rework a flawed design whilst maintaining support for the old design, and additionally to create an upgrade path from one to the other. Agile environments, whilst less exposed, are not immune to this kind of problem. Any development process can suffer from missed requirements which render an existing design redundant. In this older post I referenced a couple of examples where a 'breaking the model' approach during elaboration avoided expensive rework later; however, this is not infallible, and as your customer base grows, so does the risk of missing an important use case.
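To make that dual burden concrete, here is a minimal sketch, using entirely hypothetical names and record formats, of the kind of code that tends to accumulate when a design is reworked after customers already hold data in the old shape: the new design has to carry an upgrade path for the old one alongside it.

```python
# A sketch only: hypothetical 'profile' records to illustrate supporting an
# old design and its upgrade path alongside a reworked design.
from dataclasses import dataclass


@dataclass
class ProfileV2:
    """The reworked design: the single name field is split and country is explicit."""
    first_name: str
    last_name: str
    country: str


def upgrade_v1(record: dict) -> ProfileV2:
    """Upgrade path: version 1 stored a single 'name' field and no country."""
    first, _, last = record["name"].partition(" ")
    return ProfileV2(first_name=first, last_name=last, country="unknown")


def load_profile(record: dict) -> ProfileV2:
    """Both designs now have to be understood, tested and documented."""
    if record.get("schema_version", 1) == 1:
        return upgrade_v1(record)  # the old design still has to be supported
    return ProfileV2(first_name=record["first_name"],
                     last_name=record["last_name"],
                     country=record["country"])


if __name__ == "__main__":
    print(load_profile({"name": "Ada Lovelace"}))                    # legacy record
    print(load_profile({"schema_version": 2, "first_name": "Alan",
                        "last_name": "Turing", "country": "UK"}))    # reworked record
```

Every shim like this, along with the tests and documentation that surround it, is part of the cost that a late design change incurs.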
Beyond raw design flaws, I have found myself thinking of some amusing alternatives that resonate more closely with me. Three scenarios in particular spring to mind from personal experience, where design changes were required, or at least considered, that would have been significantly more expensive later than if they'd been thought of up front.
The cost of intrinsic testability curve
Testability is one. As I wrote about in this post, forgetting to build intrinsic testability characteristics into a software design can be costly. In my experience, if testability is not built into the original software design then it is extremely difficult to prioritise a redesign purely on the basis of adding those characteristics retrospectively, and justifying such a redesign becomes increasingly difficult as development progresses. I describe in this post the challenge I faced in trying to justify getting testability features added to a product after the fact. So I'd suggest the cost of adding intrinsic testability curve probably looks something like this (a code sketch of the kind of seam I have in mind follows below):
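A minimal sketch, with hypothetical names, of the kind of intrinsic testability characteristic I mean: a seam that is cheap to design in up front and awkward to retrofit once callers depend on the hard-wired version.

```python
# Sketch only, hypothetical names: a time dependency with and without a
# testability seam.
from datetime import datetime, timedelta


class LicenceChecker:
    """Hard to test: the dependency on the system clock is baked in."""

    def is_expired(self, expiry: datetime) -> bool:
        return datetime.now() > expiry


class TestableLicenceChecker:
    """Testable: the clock is an injectable seam, so a test can control 'now'."""

    def __init__(self, clock=datetime.now):
        self._clock = clock

    def is_expired(self, expiry: datetime) -> bool:
        return self._clock() > expiry


if __name__ == "__main__":
    fixed_now = lambda: datetime(2030, 1, 1)  # a controllable fake clock
    checker = TestableLicenceChecker(clock=fixed_now)
    assert checker.is_expired(datetime(2029, 12, 31))
    assert not checker.is_expired(fixed_now() + timedelta(days=1))
    print("expiry behaviour exercised without waiting for real time to pass")
```

Retrofitting that seam later means changing every caller that constructs the hard-wired version, which is exactly the prioritisation argument that is so hard to win.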
The cost of changing something in your public API curve
API design is another classic. While fixing bugs that affect user interfaces can be relatively low impact, changing the design of APIs that someone has coded against can cause a lot of frustration. We, as a customer of other vendors' APIs, have suffered recently from having to repeatedly rework our own solutions due to breaking changes in minor versions of those APIs. Speaking from bitter experience, if you have made breaking changes to your API then you're probably not your customers' favourite vendor right now.
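As an illustration, and with entirely hypothetical function and parameter names, here is the same change made as a breaking rename versus a backward-compatible evolution that keeps existing callers working:

```python
# Sketch only, hypothetical API: evolving a public function without breaking
# the callers who coded against the old names.
import warnings

# Breaking approach: v1.3.0 renames the parameters that v1.2.x callers rely on.
#   v1.2.x: def fetch_orders(client_id, page_size=50): ...
#   v1.3.0: def fetch_orders(customer_id, limit=50): ...   # existing calls break


# Compatible approach: accept the old names, warn, and map them to the new ones.
def fetch_orders(customer_id=None, limit=50, *, client_id=None, page_size=None):
    """New-style call: fetch_orders(customer_id=..., limit=...)."""
    if client_id is not None:
        warnings.warn("client_id is deprecated; use customer_id",
                      DeprecationWarning, stacklevel=2)
        customer_id = client_id
    if page_size is not None:
        warnings.warn("page_size is deprecated; use limit",
                      DeprecationWarning, stacklevel=2)
        limit = page_size
    return f"first {limit} orders for {customer_id}"


if __name__ == "__main__":
    print(fetch_orders(customer_id="acme", limit=10))     # new callers
    print(fetch_orders(client_id="acme", page_size=10))   # old callers still work
```

Deprecating rather than removing, and saving genuine breaks for a major version, is the difference between a customer upgrading happily and a customer reworking their solution yet again.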
The cost of fixing bugs the customer doesn't care about curve
Of course, lest we forget, there is always the chance that the stuff you thought was important is less so to the customer. The cost curve for fixing bugs in a feature drops significantly if you realise that no-one is interested in the fix. Some features seem to be particular favourites of product owners or marketing teams but receive little production use. Testers are obliged to raise issues with them; in practice, however, once the software is released these issues are unlikely to be prioritised by the customer.
Please note that these curves are entirely frivolous and have no basis in empirical evidence. That said, I'd put money on them being more representative than the original in the organisations where I've worked. And given that they'll appeal to many testers' own opinions, perhaps through the power of confirmation bias they may just be showing up in a keynote near you a few years from now.
References
- Brief summary post on the history and original references for the Boehm Curve - Uri Nativ - Revisiting the Cost of Change Curve
- Essay revisiting the Cost of Change Curve in Agile projects - Scott Ambler - Examining the Agile Cost of Change Curve
- Interesting post on making up cost of change curve estimates to justify testability at Google - Paul Hammant - Testability and the Cost of Change
- Examples dismantling the Cost of Change Curve - Michael Bolton - Tyranny of Always