Still waters run ... slowly
As part of our last release, one of my team was testing the installers on Linux and found that it was taking an inordinately long time to install the server product. In one test it took him 15 minutes to install. The programmers investigated and found that a new random library used to generate keys was relying on machine activity to provide the randomisation. On a system with other software running, the random data generation was very fast. On a quiet machine with no activity other than the user running the installer, it could take minutes to generate enough random data to complete the process. By its very nature, our automated testing had not uncovered the problem, as the monitoring harnesses were generating enough background activity to feed the random data generation and keep the install time down. Through manual testing on an isolated system we uncovered an issue which could otherwise have seriously impacted customers' first impressions of our software.
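If you want to see this mechanism for yourself on a Linux machine, the rough sketch below (a generic illustration, not the library our installer actually used) reads the kernel's available entropy estimate and then times a small read from the blocking random device. On older kernels /dev/random stalls whenever the entropy pool is low, which is exactly what a quiet, freshly booted test machine provokes; on kernels from 5.6 onwards it only blocks until the pool is first initialised.

    # Linux-only sketch: inspect the kernel entropy pool and time a read from
    # the blocking random device. On older kernels /dev/random blocks whenever
    # the entropy estimate is low; newer kernels (5.6+) only block at boot.
    import os
    import time

    with open("/proc/sys/kernel/random/entropy_avail") as f:
        print("entropy available (bits):", f.read().strip())

    fd = os.open("/dev/random", os.O_RDONLY)
    start = time.time()
    data = os.read(fd, 16)   # ask for 16 bytes of "true" random data
    os.close(fd)
    print("read %d bytes in %.2f seconds" % (len(data), time.time() - start))

Generating some activity (keyboard, mouse or disk) while the read is stalled typically makes it return much sooner, which is essentially the effect our monitoring harnesses were having.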
This is a great example of the observer effect, a phenomenon most commonly associated with physics but applicable in many fields, notably psychology and Information Technology: the act of observing a process can affect the actual behaviour and outcome of that process. In another good example, earlier this year a customer using an old version of one of our drivers reported errors about missing library dependencies. It turned out that the tool that had been used to test the connectivity of the installation actually placed some runtime libraries on the library path which the drivers needed to function, but which were not included in the install package. The software used to perform the testing had changed the environment enough to mask the true status of the system. Without the tool and its associated libraries, the drivers did not work.
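One quick way to check whether your tooling is quietly propping up the environment like this is to resolve the product's shared-library dependencies with the environment stripped right back and compare the result with your normal shell. The sketch below does this with ldd on Linux; the driver path is made up purely for illustration.

    # Compare shared-library resolution with the inherited environment and with
    # a minimal one. Libraries that only resolve because a test tool has set
    # LD_LIBRARY_PATH show up as "not found" in the clean run.
    import subprocess

    binary = "/opt/vendor/drivers/libexampledriver.so"   # hypothetical path

    environments = (
        ("inherited env", None),                      # whatever the shell provides
        ("clean env", {"PATH": "/usr/bin:/bin"}),     # minimal PATH, no LD_LIBRARY_PATH
    )
    for label, env in environments:
        result = subprocess.run(["ldd", binary], env=env,
                                capture_output=True, text=True)
        missing = [line.strip() for line in result.stdout.splitlines()
                   if "not found" in line]
        print("%s: %d unresolved dependencies" % (label, len(missing)))
        for line in missing:
            print("   ", line)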
Such observer effects are a risk throughout software testing, where the presence of observing processes can mask problems such as deadlocks and race conditions by changing the execution profile of the software. The problem is particularly apparent with test automation, because another software application is accessing and monitoring exactly the same resources as the application under test. The reason I'm discussing observer effects specifically in a post on installers is that I've found this to be one of the areas where they are most apparent. Software installation testing is by its nature particularly susceptible to environmental problems, and the presence of automation products and processes can fundamentally change the installation environment. Relying on automation alone to perform this kind of testing seems particularly risky.
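As a concrete, if contrived, illustration of how timing-sensitive these bugs are, the sketch below sets up a classic check-then-act race on a shared cache. Nothing here is taken from our product; the point is simply that whether the redundant initialisation ever appears depends on how the threads interleave, and anything that observes the program closely enough to shift that timing (a profiler, a logging call, a monitoring agent) can change how often it shows up.

    # Contrived check-then-act race: several threads initialise the same cache
    # entry. The sleep stands in for real work inside the unprotected window;
    # instrumentation that alters this timing alters how often the race fires.
    import threading
    import time

    cache = {}
    redundant_inits = []   # records every initialisation beyond the first

    def get_handle(key):
        if key not in cache:                # check
            time.sleep(0.001)               # window where other threads can pass the check too
            if key in cache:
                redundant_inits.append(key) # another thread beat us to it
            cache[key] = object()           # act

    threads = [threading.Thread(target=get_handle, args=("db",)) for _ in range(5)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    print("redundant initialisations:", len(redundant_inits))   # above zero means the race fired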
Falling at the first
The install process is often the "shop window" of software quality, as it provides people with their first perception of working with your product. A bad first impression on an evaluation, proof of concept or sales engagement can be costly. Even when the severity of an issue is low, its impact on customer impressions at the install stage can be much higher. If your install process is full of holes then this can shatter confidence in what is otherwise a high-quality product. You can deliver the best software in the world, but if your customers can't install it then that gets you nowhere.
This week I was using a set of drivers from another organisation as part of my testing. The Unix-based installers worked fine; however, the Windows packaged installers failed to install, throwing an exception. It was clear that the software had come out of an automated build system and that no-one had actually tested that the installers worked. No matter how well the software drivers themselves had been tested, I wasn't in a position to find out, as I couldn't use them. My confidence in the software had also been shattered by the fact that the delivery had fallen at the first hurdle.
I can't claim that our install process has never had issues; however, I do know that we've identified a number of problems when manually testing installations that would otherwise have made it into the release software. I've also seen install issues from other software companies that I know wouldn't have happened for us. Reports from our field guys are that in most cases our install is one of the easier parts of any integrated delivery, which gives me confidence that the approach is warranted. Every hour spent on testing is an investment, and I believe that investing it in the install process is time very well spent.
Image: http://www.flickr.com/photos/alanstanton/3339356638/
Great post. I have seen shockingly bad installers. I once tested an installation 'wizard' with 27 screens or popups and it still had manual schema installation steps! (And it only needed to work with one database vendor!) While the testers could see what a terrible first impression it could cause, management argued that it was only used once!
Thanks for reading and taking the time to comment, Joe - sad to see such an attitude from management on something that has such an influence in defining the perception of quality in your product. I'd certainly only be using that installer once!
Adam