Not much of a confession, I admit, but assumptions are something of a dirty word in software testing. If not addressed head on they can become hidden problems, rocks just under the surface waiting to nobble your boat when the tide changes.
As a tester I am constantly making assumptions. This is an unfortunate but necessary part of my work. Where possible I try to avoid assumptions and drive to obtain specific parameters when testing. Sometimes, particularly early on in a piece of development, it is not possible to explicitly scope every aspect of the project. In order to avoid "scope paralysis" and put some boundaries in place so that testing can progress, it is sometimes necessary to make assumptions about the required functionality and the environment into which it will be implemented and used.
These assumptions could relate to the users, the implementation environment, application performance or the nature of the functionality. For example:
- It is assumed that all of the customer's servers in a cluster will be running the same operating system
- It is assumed that the user will be familiar with database applications and related terminology
- It is assumed that customers will have sufficient knowledge to set up a clustered file system, so our installation process can be documented from that point onwards
- Given a lack of explicit performance criteria, it is assumed that performance equivalent to that of similar functionality will be acceptable
- It is assumed that the function will behave consistently with other functions in this area in terms of validation and error reporting
I don't see anything wrong in making assumptions, as long as we identify that this is what we are doing. As part of our testing process I encourage testers in my organisation to identify where they are making assumptions and to highlight these to the other stakeholders when publishing the agreed acceptance criteria for each story. In this way we identify where assumptions have had to be made, allowing them to be reviewed and the risks involved in making them to be assessed. We identify implicit assumptions and expose them as explicit constraints, gaining confirmation from the product owner and/or customer to give ourselves confidence that the assumptions are safe.
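To make this concrete, here is a minimal sketch of how an assumption might be captured as an explicit, reviewable record alongside a story's acceptance criteria. The class and field names are hypothetical, invented for this illustration rather than taken from any format we actually use:

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class Assumption:
    """An explicit assumption published alongside a story's acceptance criteria."""
    statement: str                      # the assumption, stated as an explicit constraint
    story: str                          # the story whose acceptance criteria it supports
    confirmed_by: Optional[str] = None  # product owner/customer who confirmed it, if anyone
    confirmed_on: Optional[date] = None
    review_by: Optional[date] = None    # date by which it should be re-examined

    def is_confirmed(self) -> bool:
        return self.confirmed_by is not None

# Publishing an assumption with the acceptance criteria for a story
a = Assumption(
    statement="All servers in a cluster will run the same operating system",
    story="Clustered installation support",
    review_by=date(2011, 1, 1),
)
print(("CONFIRMED" if a.is_confirmed() else "UNCONFIRMED") + ": " + a.statement)
```

The point of a structure like this is simply that an unconfirmed assumption is visible as unconfirmed, rather than silently passing for an agreed constraint.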
Despite this process of identification and review, I recently encountered an issue with a previously made assumption. This highlighted the fact that simply identifying and reviewing assumptions during the development of a piece of functionality is not sufficient. Once you have made an assumption during the development of a function, in essence you remake that assumption every time you release that same functionality in the future until such time as:
- You cease to support the functionality/product
- You change the functionality and review the assumptions at that point
- You get bitten because the assumption stops holding true.
The first of these is straightforward: no more function, no more assumption - job done.
The second case is where I encourage my team to re-state any assumptions made about the existing functionality for re-examination. A recent example involved our import functionality. As part of an amendment to that functionality, the tester stated the assumption that an existing constraint on the import data format would still apply when using the amended software. We questioned this and, after conferring with the customer, established that it was no longer a safe assumption given the way that they wanted to implement the new feature. In this way the explicit publishing and examination of a long-held constraint helped to avoid a potential issue that would have affected the end customer.
This last alternative happened to me recently. As part of a functional development a couple of years ago, some assumptions were explicitly stated in the requirement regarding the nature of the data used in that function. Over the course of the next two years the customer base grew and the range of data sources for the functionality widened. As no extensions to the functionality appeared necessary to support the new use cases, no further development was done and the assumptions were not revisited. The environment in which the product was being used had changed, rendering the assumption invalid and resulting in an issue with a specific set of source data. The problem that manifested itself was very minor, actually resulting from a problem in the application that the data was sourced from, but it did highlight the dangers involved in making assumptions and not reviewing them. I've since altered the way in which assumptions are documented during our development process to allow for easier identification and review in future.
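One way to make that re-examination harder to forget would be to keep the assumptions in a machine-readable register with re-review dates, and to check it as part of preparing each release. What follows is a minimal sketch of that idea, assuming such a register exists; the format, entries and dates are all invented for illustration:

```python
from datetime import date

# Hypothetical assumption register; the entries and dates are invented
# purely for illustration.
register = [
    {"statement": "Import data will always arrive in the documented format",
     "last_reviewed": date(2008, 6, 1), "review_by": date(2009, 6, 1)},
    {"statement": "All servers in a cluster run the same operating system",
     "last_reviewed": date(2009, 11, 1), "review_by": date(2010, 11, 1)},
]

def assumptions_due_for_review(register, on=None):
    """Return the assumptions whose re-review date has passed."""
    today = on or date.today()
    return [a for a in register if a["review_by"] <= today]

# Run as part of preparing each release: anything reported here should be
# re-confirmed with the product owner/customer before the release goes out.
for a in assumptions_due_for_review(register, on=date(2010, 3, 1)):
    print("RE-REVIEW: " + a["statement"])
```

However it is implemented, the essential property is that every release forces the stale assumptions back in front of someone who can confirm or reject them.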
Assumptions are easy to make. They are even easier to remake, every time the feature in question is re-released. Identifying and confirming assumptions at the point of making them is a good step, but it is still a risky approach. Assumptions are static in nature and easy to forget. Customer environments, implementation models and usage patterns change much more quickly, and forgotten assumptions can become dangerously redundant if not regularly reviewed. I'll be improving my process of assumption documentation, examination and re-examination in the coming weeks. Is this a good time to review what assumptions you've made in the past that are still being made? It may do you some good to stand up and confess.
Copyright (c) Adam Knight 2009-2010