Thursday, 20 November 2014

In The Spirit of Windows

As I wrote in my post on The FaceBook effect, as you progress in your career you build up more examples of times when you got it wrong, or held opinions that changed over time to the extent that you later came to disagree with your former self. Perhaps the most fundamental example of this for me was my position on programmer/tester collaboration that I wrote about in this post. Another good example comes from an earlier role, in which I was testing a client/server system for marketing data analysis and my position on the need for explicit requirements was very different...

Demanding the Impossible

At the time the process that I was working under was a staged, waterfall-style process. We didn't follow any specific process model, but it was compared on occasion to RUP. There was a database which contained formal requirements written by the product managers, and a subset of these was chosen for each long-term (6-9 month) release project.

Whilst the focus of most of the development, and all of the testing, was on the main engine, a parallel development had been underway for some time to deliver a customer framework for housing the client components and managing the customer's data rules, objects and models. This had been done in a much more informal, interactive manner than the core features, with the result that requirements had been added to the database thick and fast as the product managers thought of them. These were much less rigidly specified up front than was usual in the company, with behaviour established instead through ongoing conversations between the programmers and the product manager.

Enter the testers

After a long period of development, it was decided to perform a first release of the new framework. At this point the test team were brought into the project. What we found was a significant accumulation of requirements in the database: some had been delivered as written, many had changed significantly in the implementation, and for many it was unclear whether they had been delivered at all. To clear up the situation the product owners, testers and architects sat down for a couple of lengthy meetings to work through this backlog and establish the current position.

One requirement in particular caused the most discussion, and the most consternation for the testers. I don't have the exact wording, but the requirement stated something like:

“The security behaviour within the system will be consistent with the security of the Windows operating system”

We pored over this one, questioning it repeatedly in the meetings. What did it mean specifically, and how could we test it? What were the exact characteristics of the security model that we were looking to replicate? How could we map the behaviour of an operating system to a client/server object management framework? In a somewhat exasperated response the Development Manager tried to sum up the requirement in his own words:

“It should be done in the spirit of Windows”

This caused even more consternation among the testers. Behind closed doors we ranted about the state of the requirements and the difficulties we faced. How could we test it when it was so open to interpretation? How could you write explicit test cases against a requirement “in the spirit of” something? How did you know whether a specific behaviour was right or wrong? We complained about, and somewhat ridiculed, the expectation that we were to test “in the spirit of” something.

A rose by any other name

Looking back on that time, I can see that most of the requirements were closer to user stories than to the formal requirements that we were accustomed to write our test cases against. They were not intended to define the exact behaviour explicitly, but to act more as placeholders for conversations between the product owners and the relevant development team members on how the value was to be delivered. These were small summaries of target value which were open to interpretation in their implementation and required discussion to establish the detailed acceptance criteria. The problem was that, within an agile context, the expectation would be that this conversation is held between the '3 Amigos' of Product Owner, Programmer and Tester. Unfortunately, in the process that we were working under, the 'Third Amigo', the tester, had been left out of the conversation, with the result that the testers only had the requirement as written to refer to, or so we felt.

The Spirit of an Oracle

Let us examine the requirement that so vexed me and the other testers at the time - that the user security model should work in the 'spirit of Windows'. As a formal requirement, yes, it was ambiguous; however, as a statement of user value there was a lot of information here for testers to make use of. The requirement instantly provides a clear test oracle, that of the Windows file system security model, from which we could establish some user expectations:

  • Windows security on files is hierarchically inherited, such that objects can inherit their permissions from higher-level containers
  • Inherited permissions can be overridden, such that an object within a container has distinct security settings from its parent container
  • Permissions are applied based on the credentials of the user that is logged into Windows
  • Permissions on objects may be applied either to individuals or at a group level
  • The ability to perform actions is based on roles assigned to the user
  • Allow permissions are additive, so a user will have the highest level of permissions applied to any group that they are a member of
  • Deny permissions for any object override Allow permissions

And so on. A rough sketch of these rules follows below.
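To make those expectations concrete, here is a minimal Python sketch of how such a permission model might resolve access. It is purely illustrative - the names (Acl, SecuredObject, has_access) and structure are my assumptions, not the actual design of Windows security or of our product.

    from dataclasses import dataclass, field
    from typing import Optional

    @dataclass
    class Acl:
        # Principals (users or groups) explicitly granted or denied access
        allow: set = field(default_factory=set)
        deny: set = field(default_factory=set)

    @dataclass
    class SecuredObject:
        name: str
        parent: Optional["SecuredObject"] = None
        acl: Optional[Acl] = None  # None means: inherit from the parent container

    def effective_acl(obj: SecuredObject) -> Acl:
        # Objects inherit permissions hierarchically from their containers,
        # unless an explicit ACL on the object overrides the inherited one
        if obj.acl is not None:
            return obj.acl
        if obj.parent is not None:
            return effective_acl(obj.parent)
        return Acl()

    def has_access(obj: SecuredObject, user: str, groups: set) -> bool:
        principals = {user} | groups
        acl = effective_acl(obj)
        # Deny permissions override Allow permissions for any matching principal
        if acl.deny & principals:
            return False
        # Allow permissions are additive across the user and all their groups
        return bool(acl.allow & principals)

    # A child object inherits the Allow granted to its container...
    container = SecuredObject("models", acl=Acl(allow={"analysts"}))
    child = SecuredObject("q3-model", parent=container)
    assert has_access(child, "alice", {"analysts"})

    # ...but an explicit Deny on the object wins over the inherited Allow
    locked = SecuredObject("draft", parent=container,
                           acl=Acl(allow={"analysts"}, deny={"alice"}))
    assert not has_access(locked, "alice", {"analysts"})

Note how the Deny-beats-Allow and additive-Allow rules fall out of the order of the two checks in has_access - exactly the sort of behaviour an oracle lets you predict without a written specification.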

The important concept that we failed to recognise here is that it didn't really matter what the exact behaviour was. There was no need in this scenario for explicitly defined behaviour up front. The value was in the software providing as familiar and consistent a user experience as possible relative to Windows, whilst also providing an appropriate security structure for the specific product elements.

It is true that the ambiguity of the requirement made it more difficult to design explicit test cases and expected results in advance of accessing the software. What we were able to do was examine another application which possessed the characteristics of our target system, to establish expectations and compare actual behaviour. As Kaner points out in this authoritative post on the subject, and Bolton explains eloquently through this fictitious conversation, oracles are heuristic in nature in that they help guide us in making decisions. Through the presence of such a clear testing oracle we were able to explore the system and question the behaviour of any one area through comparison with an external system. If there were behaviours that were inconsistent with our expectation, based on the Windows system, then we were able to discuss the behaviour directly with the product owners, who sat in close proximity to the testers. This required judgement, and sometimes compromise, given that the objects managed in our system and the relationships between them were inherently different to Windows files and directories. As with all heuristic devices, our oracle was fallible and required the judgement of the testers to decide whether inconsistencies corresponded to issues or acceptable deviations.

In many ways it was a very forward-thinking setup, it just wasn't what we were accustomed to, and the late introduction of the testers into this process resulted in our exclusion from important discussions over which of the above behaviours we needed to deliver, and therefore limited our judgement in relation to the oracle system. This, combined with our unfamiliarity with this way of working, resulted in our resistance to the approach taken.
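As a sketch of what this working style might look like, the hypothetical helper below compares the behaviour of the system under test with the oracle's prediction and collects the disagreements for discussion rather than failing them outright, reflecting the fact that the oracle itself is fallible. The scenario list and the two predicate functions are assumptions for illustration only.

    def inconsistencies_with_oracle(scenarios, system_allows, oracle_allows):
        # Collect cases where the system under test disagrees with the
        # oracle model - each one is a question for the product owner,
        # not automatically a bug, since the oracle is a heuristic
        findings = []
        for scenario in scenarios:
            actual = system_allows(scenario)
            expected = oracle_allows(scenario)
            if actual != expected:
                findings.append((scenario, actual, expected))
        return findings

    # Example with trivially stubbed predicates; in reality system_allows
    # would drive the application under test and oracle_allows would
    # consult the Windows-style model
    report = inconsistencies_with_oracle(
        scenarios=["read", "write", "delete"],
        system_allows=lambda action: action in {"read", "write"},
        oracle_allows=lambda action: action in {"read"},
    )
    print(report)  # [('write', True, False)] - a deviation to discuss, not necessarily a bug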

The spirit of testing

I find great personal value in examining situations from the past to see how my opinions have changed over time. Not least, this provides some perspective on my current thinking around any problem, and acts as a reminder that my position on an issue may not be constant as my experience grows. In the years since my 'spirit of Windows' incident I've grown more pragmatic about the need in testing for rigidly specified requirements. In particular, experience has demonstrated that specifications are themselves fallible, yet are treated as if they should be unambiguous and exhaustive, instead of being treated as simply another type of oracle, to be used with judgement in making decisions. This can have damaging consequences - I have seen a situation where incorrect behaviour was implemented on the basis of a specification, when there was an excellent test oracle available that was not referenced during testing because there was no perceived need to do so.

The availability of a clear testing oracle provides an excellent basis for exploring a system under active development, allowing documentation to be minimised in favour of asking questions and discussing design decisions throughout the development process, such as when working with agile user stories. What the example above clearly highlights is the importance of early tester engagement in this process if the testers are to understand the value in the feature, the decisions that go into the design and, crucially, the characteristics of the test oracle that we are looking to replicate.

