Over the last few months I have been concurrently involved in some of the most and least inspiring work of my career. Naturally, having a software tester's mindset, I decided to write about the negative stuff first. What can I say? My glass is usually half empty.
Engaging Through Alignment
I recently had the pleasure of inviting an inspiring lady named Susie Maguire to run a workshop at River. Susie has a wealth of experience in the field of engagement and motivation and was the perfect person to discuss, question and refine our own expertise in this area. In one discussion during the workshop Susie highlighted the importance of aligning the goals of the individual with the goals of the team, and those with the goals of the organisation, in achieving true employee engagement.
As with many of the most powerful ideas in successful work, this is a blazingly simple concept, yet surprisingly difficult to achieve and therefore depressingly rare. The divided and hierarchical nature of many organisational structures means that teams can aggressively optimise towards their own established goals, which, over time, can deviate drastically from those of the wider company. As we were talking I couldn't help thinking about a process that I was working through at the time, which was an extreme example of such a deviation of goals.
A Painful Process
As I mentioned at the start, I've also recently been involved in some of the least inspiring work of my career, in relation to implementing a software programme into a large organisation. This is not in itself inherently painful, and the relationships with the immediate client at the start of the programme were healthy and long standing. As part of the implementation we were required to work with an internal team from the wider global organisation to 'certify' that one element of our software met their standards. We were happy to do this on the basis that we'd successfully delivered software to the client before and felt confident in meeting these standards based on our programmes with other global clients. Our confidence proved to be misplaced, however, when we discovered the details of the process.
- It became apparent early on that the certification team were not going to engage with us in any kind of collaborative relationship at all but instead would operate primarily through the documented artefacts of the process
- The requirements that we had to meet were captured in lengthy and convoluted documentation from which we had to extract the relevant information and interpret for our situation. Much of the documentation was targeted at in-house development in different technologies to our stack.
- Some parts of the process involved different people submitting the same lengthy and detailed information into separate documents or systems, which were then all required to align exactly across the submission
- Many of the requirements documented were either impractical or actually not possible in the native operating systems we were supporting
- The process involved no guidance or iteration towards a successful outcome, instead certification involved booking a scheduled 'slot' which had to be reserved weeks in advance based on the predicted delivery date.
- Failures to meet the standards discovered during the certification slot were not fed back during the slot in time to be resolved towards a successful outcome, but were communicated via a lengthy PDF report once the process was complete
- Items as minor as a repeated entry in a copyright list or a slight difference in naming between a help page and guidance prompts were classified as major failures resulting in failing the certification
- Approaches were presented in the specification as reference examples, yet any deviation from the behaviour of the 'example' was treated as a major failure, even if the logical behaviour was equivalent.
- The inevitable failure in the certification slot required a second 'slot' to be booked for a retest
The final straw came when, as part of the second booked review slot, new requirements were identified which we hadn't been told about in the initial certification, yet our failure to meet them still constituted a failure of the overall certification. Software components not raised in the first review were newly identified as 'unacceptable' in the second, and the missing behaviours stated in the first review were frustratingly in themselves insufficient to pass when it came to the second.
Misaligned
What was clear to me in going through this process was that this was a team whose goals had diverged significantly from the goals of the company.
The goals of the team appeared to be:
- Ultimately protect the team budget by maintaining a healthy stream of failures and retests (the internal purchase only covered two test slots - a third retest resulted in an additional internal purchase and internal revenue for the team)
- Tightly document the requirements of software solutions irrespective of value or practical applicability
- Maximise failure by maintaining a position of zero tolerance for ambiguity or for delivering value in different ways
- Maintain an internal view - limiting communication outside the team and interfacing primarily through artefacts, such as requirements and failure reports
If this only affected us as a supplier then I would probably not be writing this. What was more frustrating was that I was working on behalf of a client company that was a component of the larger global organisation. The behaviours of this team were directly preventing the progression of an exciting and engaging programme. Instead of adding value to the programme, the team were consuming valuable budget on frustrating bureaucratic processes and inane adjustments that delivered very little value and ultimately placed the programme at risk.
It could be argued that the team were protecting the company by ensuring standards. I'd argue that a process of collaborative guidance and ongoing review would have been easier, cheaper in terms of both their costs and ours, and far more likely to achieve a successful outcome. The process as designed was not aligned with the needs of the wider company, including my client.
The other side of the fence
I get very frustrated in situations like the one above as they affect me on two fundamental levels.
Firstly, we only have a limited amount of time on this earth, and seeing so many talented people wasting their valuable time on such pointless activities is very frustrating. For me, work is about more than making money. If intelligent and capable people are spending their time on undertakings that add little value beyond meeting the specific idiosyncrasies of a self-propagating process, then they will start to question themselves and their work deeply. The nature of the process caused tension across all the people involved, and caused anxiety that simply wouldn't have arisen if the process had been structured differently. We wanted to deliver work that benefitted the programme and pleased the customer, yet we were unable to do so due to the effort required simply to adhere to the process imposed on us.
Secondly, the process that I described above was essentially a testing process. It's true that the process was so unrecognisable from what I would describe as testing that it took me a while to appreciate it, but testing it was. The process fitted exactly the pattern of:
- a strict requirements document written before the software was developed
- a predefined sequence of checks based on adherence to the documentation performed subsequent to and in isolation from the development process
- the absence of feedback loops that would allow issues to be resolved in a timely fashion
- communication via artefacts and failure reports rather than direct communication between the person performing the checks and the developing team
This pattern could describe testing processes in many organisations throughout the world of software development.
Not what I call Testing
Being on the other side of such a testing process was a new and enlightening experience. It gave me an insight into how frustrating it can be working with a testing unit that refuses to engage. I can understand some of the suspicion, and even hostility, that developers have historically felt towards isolated testing teams. When your best efforts to meet the documented expectations are fed back via reports covered in metaphorical red pen, it's hard to harbour positive feelings for the people involved.
It's heartening, then, that each year I read the results of the "State of Testing" survey and see a testing community that is in parts rejecting this kind of approach and embracing more communication and collaboration. In fact, the testing skill rated as most important in both the 2016 and 2017 survey reports was communication. Whilst this is encouraging, the level of importance placed on communication for testers did drop from '16 to '17 - which is not a trend that I'd want to see continue.
If you are a tester reading this, I recommend that you take the time to take part in this year's "State of Testing" survey here: http://qablog.practitest.com/state-of-testing/
Our ability to communicate risks and to guide and inform decisions is paramount in delivering a testing activity that prioritises the needs of the business over the delivery of the testing process. Going back to Susie's lessons on employee engagement - if alignment with the goals of the wider business is the key to successful engagement, then testers whose approach is focussed on continuous communication and guiding towards business goals will not only improve their own satisfaction at work but also that of the others whose lives they impact in the process.
It would be interesting to know why you hadn't come into contact with this certifying team before if you'd already had a long and healthy relationship with your client who was themselves a part of this wider organisation.
Had there been managerial changes that led to the imposition of this certification regime? Was the product subject to external regulatory certification? Or had someone found a management textbook from the 1960s in a flea market and thought this sounded like the best idea ever and why don't we do this sort of thing any more?
Was the top-level organisation private or public sector? In another life, I was a career bureaucrat in the UK public sector; but I saw my role as trying to use my knowledge and experience in smoothing out regulatory and bureaucratic hurdles to deliver the service to the end user as quickly, efficiently and seamlessly as possible whilst still meeting all the organisational aims, objectives and needs. Sometimes it was possible to find legal loopholes or short cuts to make that happen. Sometimes, unfortunately, it wasn't. My experience, however, suggested that sometimes large private sector organisations imposed worse bureaucratic restrictions on themselves than the public sector would ever dream of.
Thanks for taking the time to comment Robert. The reason we came into contact with the team in question was that the specific technology used in the project caused it to fall under the scope of this team, where previous programmes had not. That said, I think the team was a relatively new one given the technologies they focussed on. As I said in the main post, I wouldn't disagree with the need for centralising such a function per se, and the value they could potentially deliver for the business was clear. It was the nature of the process by which they executed their duties that deviated so far from the intended value.