Sunday, 29 July 2012

Putting your Testability Socks On



One of the great benefits of working in a process where testers are involved from the very start of each development, testing and automating concurrently with the programming activities, is that the testability of the software becomes a primary concern. In my organisation testability issues are raised and resolved early as an implicit part of the process of driving features through targeted tests. The testers and programmers have built a great relationship over the last few years, such that the testers are comfortable raising testability concerns and know that everyone will work together to address them.

As is natural when you have benefited from something for a while, I confess that I'd started to take this great relationship for granted. A recent project has provided a timely reminder of just how important the issue of testability is...

Holes in our socks


We've recently introduced a new area of functionality into the system that has come via a slightly different route from most of our new features. The majority of our work is processed in the form of user stories, which are then elaborated by the team along with the product owner in a process of collaborative specification. This process allows us to identify risks, including testability issues, and build the mitigating actions into our development of the corresponding features.

In this recent case the functionality originated in a bespoke engineering exercise undertaken by our implementation team for a customer, which was then identified as having generic value for the organisation and so brought in-house to integrate. The functionality itself will certainly prove very useful to our customers, but as the initial development was undertaken in the field in a staged project, issues of testability were not identified or prioritised in the same way as they would have been in our internal process. We've already identified a number of additional work items required to support the testability of the feature long term through future enhancements. Overall it is likely that the testing effort on the feature will be higher than if the equivalent functionality had been started within the team with concurrent testing activities.

Given the nature of the originating development it is understandable why this happened, but the project has served as a reminder to me of the importance of testability in our work. It has also highlighted how much more effective iterative, test-driven approaches are at building testability into a product than approaches where testing is a post-development activity.

Why Testability?


In response to a request for improving testability, a senior programmer at a previous employer once said to me "Are you suggesting that I change the software just to make it easier for you to test?". In a word, yes.

Improving the testability of the software provides such a significant benefit from a tester's perspective that it is surprising how many software projects I'm aware of where testability was not considered at all. In the simplest sense, improving testability reduces the time taken to achieve testing goals by making it quicker and easier to execute tests and obtain results. Testability also engenders increased confidence in our test results, through better visibility of the states and mechanisms on which those results are founded, and consequently in the decisions that they inform.

The benefits of improved testability are not limited to testing, either. From working on support for our system I know that improved testability can drive a consequential improvement in supportability. The two share many characteristics, such as relying on the ability to obtain information on the state of the system and the actions that have been performed on it.

Adding testability can even yield an improved feature set. I read somewhere that Excel's VBA scripting component was originally implemented to improve testability, and has gone on to become one of its key features for many users (sadly I can't source a reliable reference for this - if anyone has one please let me know).

So what does this have to do with socks?


When researching testability for a UKTMF session a few years ago I came across this presentation by Dave Catlett of Microsoft, which included a great acronym for testability categories - SOCK (Simplicity, Observability, Control and Knowledge). I'm not normally a great fan of acronyms for categorisations, as they tend to imply exhaustiveness on a subject and fall down in the face of any extensions. As with all things testing, James Bach also has a set of excellent testability heuristics which include similar categories, with the addition of Stability. As luck would have it, in this case the additional category fits nicely onto the end of Catlett's acronym to give SOCKS (it would have been very different if the additional category was Quotability or Zestfulness). As it is, I think the result is a great mnemonic for testability qualities:-

  • Simplicity

  • Simplicity aids testability, primarily through striving to develop the simplest possible solutions to the problems at hand. Minimising the complexity of a feature to deliver only the required value helps testing by reducing the scope of functionality that needs to be covered. Feature creep or gold plating may appear to be over-delivering on the part of the programmer; however, the additional complexity can hinder attempts to test. Code re-use and coding consistency also fall into this category: re-using well-tested code and well-understood structures improves the simplicity of the system and reduces the need for re-testing.

    I feel that simplicity is as much about limiting scope as about avoiding functional complexity. I've grown accustomed to delivering incrementally in small stories where scope is negotiated on a per-story and per-sprint basis. Working on a larger fixed-scope delivery has certainly highlighted to me the value of restricting scope to target specific value within each story, and the testability benefits that ensue from this narrow focus.

  • Observability

  • Observability is the ability to monitor what the software is doing, what it has done, and the resulting states. Improving log files and tracing allows us to monitor system events and recreate problems. Being able to query component state allows us to understand the system when failures occur and prevents misdiagnosis of issues. When errors do occur, reporting a distinct message from which the point of failure can be easily identified dramatically speeds up bug investigations (a small logging sketch follows this list).

    This is one area where we have recently identified a need to review and refactor, in order to improve the visibility of state-changing events across the multiple server nodes of our system. In addition to being a great help to testers, this work will have the added benefit of improving ongoing supportability.

  • Control

  • Along with observability, control is probably the key area for testability, particularly if you want to implement any automation. Being able to cleanly control the functionality, to the extent of being able to manipulate the state changes that can occur within the system in a deterministic way, is hugely valuable to any testing effort and a cornerstone of successful automation.

    Control is probably the one area in my most recent example where we suffered most. Generally, when implementing asynchronous processes, we have become accustomed to asking for hooks to be integrated into the software that allow them to be executed in a synchronous way. The alternative is usually implementing sleeps in the tests to wait for processes to complete, which results in brittle, unreliable automation (a sketch of this follows the list).

    Exposing control in this way is achieved much more quickly and easily at the point of design than as a retrofitting activity afterwards. I remember working on a client-server data analysis system some years ago which, as part of its feature set, also included a VBA macro capability. This was testability gold, as it allowed me to create a rich set of automated tests which directly manipulated the data objects in the client layer. The replacement application was in development for over a year before being exposed to my testing team, by which time it was too late to build in a scripting component. We were essentially limited to manual testing, which for a data analysis system was a severe restriction.

  • Knowledge

  • Knowledge, or information, in the context of testability revolves around our understanding of the system and the behaviour that we expect to see. Do we have the requisite knowledge to critically assess the system under test? This can take the form of understanding the system requirements, but can also include factors such as domain knowledge of the processes into which the system must integrate, and an understanding of similar technologies with which to assess user expectations.

    In the team in which I work, knowledge issues in the form of missing information or lack of familiarity with technologies are identified early in the elaboration stages. The approach to addressing these can vary from simply raising questions with the product owner or customer to clarify requirements, to a targeted research spike into specific technologies or domains. As we are seeing, with longer-term developments the learning curve for the tester coming into the process becomes much steeper, and testability from each tester's perspective is diminished. Additionally, with less immediate communication the testers have less visibility of the early development stages, and consequently a weaker understanding of the design decisions taken and the rationale behind them. It has taken the testers some time to become as familiar with the decisions, designs, technologies and user expectations involved in our latest project as with those where they are actively involved in the requirement elaboration process.

  • Stability

  • The 'additional S' - I can see why it was not included in Catlett's acronym, as this is not an immediate testability characteristic; however, as James suggests, it is an important factor in testability. James defines stability specifically in terms of the frequency of, and control over, changes to the system. Working in an agile process where the testing occurs very much in parallel with active programming changes, functional changes are to be expected, so implementing these in a well-managed and well-communicated way is critical. I find that the daily stand-ups are a great help in this regard. Having had experience in the past of a code base that was under active development by individuals not involved in the story process, I know how much it can derail the testing effort when changes appear in the system that are not expected by the testers and have not been managed in a controlled fashion.

    I'd also be inclined to include the stability of individual features under this category. It is very difficult to test a system in the presence of functional instability in the form of high levels of functional faults. The primary reason for this is that issues mask other issues: the more faults that exist in the system, the greater the chance of other faults lying inaccessible and undetected. Additionally, investigating and retesting bugs takes significantly longer than testing healthy functionality. Nothing hinders testing, and therefore diminishes testability, like an unstable system.
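
To make the observability point above concrete, here is a minimal sketch of distinct failure reporting. It is written in Python purely for illustration; the parse step and the batch object are hypothetical stand-ins, not anything from the system described in this post.

    import logging

    log = logging.getLogger("ingest")

    def load_batch(batch):
        """Load one batch, reporting a distinct code for each outcome so
        the point of failure can be identified from the log alone."""
        try:
            rows = parse(batch)  # 'parse' is a hypothetical parsing step
        except ValueError as err:
            # A distinct, searchable code plus the state needed to reproduce it.
            log.error("INGEST-001 parse failed: batch=%s error=%s", batch.id, err)
            raise
        if not rows:
            log.warning("INGEST-002 empty batch: batch=%s", batch.id)
            return 0
        log.info("INGEST-003 batch loaded: batch=%s rows=%d", batch.id, len(rows))
        return len(rows)

A tester (or support engineer) seeing "INGEST-001" in the log knows immediately which step failed and with what input, rather than having to diagnose the downstream symptoms.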

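Similarly, for the control point, here is a minimal sketch of waiting on a status hook rather than sleeping blindly. The hooks named here (start_import, import_status, rows_loaded) are hypothetical examples rather than any real API; the point is the shape of the wait.

    import time

    def wait_for(condition, timeout=30.0, interval=0.5):
        """Poll a status hook until it reports True, failing loudly on
        timeout rather than passing or hanging unpredictably."""
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            if condition():
                return
            time.sleep(interval)
        raise TimeoutError("condition not met within %.1fs" % timeout)

    # Brittle: a fixed sleep that passes or fails depending on machine load.
    #   start_import()
    #   time.sleep(10)
    #   assert rows_loaded() == expected
    #
    # Deterministic: wait on a status hook exposed for testability.
    #   start_import()
    #   wait_for(lambda: import_status() == "complete")
    #   assert rows_loaded() == expected

Even this polling loop is a compromise; a genuinely synchronous hook in the product removes the wait entirely, which is why asking for one at design time is so much cheaper than retrofitting it later.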

In hindsight I think the big takeaway from this experience is that a lack of testability becomes more likely the later you leave exposing your software to testing. Following an agile development process has a natural side effect of building testability into the product as you go along. As George Dinwiddie points out in his post on the subject - if you drive your development through your tests then you will naturally build testability into each feature as it is developed. After enjoying this implicit benefit of our development approach for years, its value couldn't have been demonstrated to me more effectively than by working on a feature that had not been developed in this way.

References


I hope it is clear that I make no claim to have invented the categorisation of testability concepts; I just like the SOCKS acronym and find it a useful breakdown against which to discuss my own experiences on the subject. When presenting on this, as with most posts and presentations, my first step was to write down my own ideas on the topic before researching other references. In doing so I came up with a similar set of groupings, so I was naturally pleased to find a good correlation with the references I've mentioned. For these, and other good links on the subject, please see below:-

Heuristics of Software Testability – James Bach
Improving Testability – Dave Catlett, Microsoft (presentation)
Design for Testability – Bret Pettichord
Design for Testability – George Dinwiddie
Image: http://www.flickr.com/photos/splityarn/3132793374/
halperinko said...

Thanks Adam - this is clearly one of the best, if not THE best, posts I have read this year.
Clearly written and so true - this should be spread to System Engineers, Developers and Testers.
I think that, more than just making for easier testing, it makes for a quicker and better Product Release!

Of these, Observability is often lacking, though there are many forms of it that we already know but that are often not embedded within systems from the initial stages:
1. Indication of critical log entries, to bring these to the testers' attention even without taking the step of evaluating the logs (when we get the indication early we can more easily find the source of the problem, rather than investigating the resulting phenomena).
2. OS resource observability - a limitation in many embedded systems, both in specific OSes and in generic ones such as Linux and Android (being able to get a live graph of OS resources).
3. Locale files which allow central spelling checks, and even driving SW characteristics (like the ranges of fields) from XML - which allows visibility and ease of fixing / adaptation to specific users' needs.
4. Consistent presentation form - to allow easy parsing of logs and CLI by automation.
5. BITs - Built-In Testing abilities are often used in embedded products to report possible degradation of the application before actual failures occur.

@halperinko - Kobi Halperin

Adam Knight said...

Kobi,

Thanks for the kind words. To be honest I was in two minds about this one, as I didn't think I was adding that much to the discussion. My intention was to present my thoughts and experiences within the framework of others' great work on the subject; I'm glad that, in your eyes at least, this was worthwhile.

Thanks for the great inputs too. I like the fact that you raise OS resource observability. While not working with embedded systems, in my company we still capture and log critical information on the state of the OS environment each time we start the service. Again this helps both with testing and with support, in being able to recreate the environmental configuration to investigate issues.
Your point on consistent presentation form is key too. It is not a problem I face in my current role, although I once worked on a web GUI where the developer had simply implemented the HTML from the designer without refactoring. The result was that every key and id was randomly generated by the design app - a testability nightmare.

Thanks again for the fantastic feedback.

Adam.

Anonymous said...

Thanks for this Adam. It's all very good, with some very useful tips.

I've had similar discussions to the one you had with the senior programmer who was surprised at the suggestion that the software should be written to aid testing. I've heard programmers argue, quite seriously, that they were allowed only to write software in response to user requirements, and that it was unprofessional for them to add anything or to take requirements from testers. I didn't have much patience with that line of thinking; I thought it was a mixture of laziness and bloody-mindedness.

Any developer who's sceptical about the need to code and design an application to be testable should spend some time trying to test an application with lousy testability.

James Christie
