Friday, 17 September 2010

A Set of Principles for Automated Testing

Introducing new members to the team can act as a focus for helping the existing members to clarify their approach to the job. One of the things that I developed to work through with new members is a presentation on the automation process at RainStor, and the principles behind our approach. This post explores these principles in detail and the reasoning behind them. Although these have grown out of our very specific context (we write our own test harnesses), I think that there are generally applicable elements here that merit sharing.

Separate test harness from test data


The software that drives the tests, and the data and metadata that define the tests themselves, are separate entities and should be maintained separately. In this way the harness can be maintained centrally, updated to reflect changes in the system under test, and even re-written, without having to sacrifice or risk the tests themselves.
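As a minimal sketch of what this separation can look like (the file layout, names and INI-style format are illustrative, not our actual harness): the harness knows how to find and interpret test definitions, while each test is just a data file that it reads.

import configparser
from pathlib import Path

def load_test_definitions(test_root):
    # The harness discovers tests by convention; no individual test is
    # named anywhere in the harness code itself. Each .test file is
    # assumed to be INI-style with a [test] section.
    for path in sorted(Path(test_root).glob("**/*.test")):
        definition = configparser.ConfigParser()
        definition.read(path)
        yield path, definition

def run_all(test_root="tests"):
    for path, definition in load_test_definitions(test_root):
        # Changing or adding a test means editing a .test file, never this module.
        purpose = definition.get("test", "purpose", fallback="(no purpose recorded)")
        print("Running", path, "-", purpose)

if __name__ == "__main__":
    run_all()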

Users should not need coding knowledge to add tests


Maintenance of test data/metadata should be achievable by testers with knowledge of the system under test, not necessarily knowledge of the harness technology.
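For illustration (the format and field names here are hypothetical, not our actual metadata), a test definition in this spirit is something a tester can write with knowledge of the product alone:

[test]
purpose  = A duplicate row import is rejected with a clear error
type     = sql_query
input    = import_duplicate_rows.sql
expected = duplicate_rejected.out

Nothing in the file says how the harness will run it; that knowledge stays in the harness modules.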

Tests and harnesses should be portable across platforms


Being able to use the same test packs to execute across all of our supported platforms gives us an instant automated acceptance suite to help drive platform ports, which then continues to provide an excellent confidence regression set for all supported platforms.

Tests are self-documenting


Attempting to maintain two distinct data sources in conjunction with each other is inherently difficult. Automated tests should not need to be supported by any documentation other than the metadata for the tests themselves, and should act as executable specifications that describe the behaviour of the system. Test metadata should be sufficient to explain the purpose and intention of the test, so that this purpose can be preserved if the test ever requires maintenance.
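One way to keep the test data as the only documentation is to carry its stated purpose straight through into the run report. A minimal sketch, with hypothetical names:

def report_line(test_name, metadata, outcome):
    # metadata is the parsed test definition, e.g. a dict of its tags.
    purpose = metadata.get("purpose", "(no purpose recorded)")
    return "%-8s %s -- %s" % (outcome, test_name, purpose)

print(report_line("imports/duplicate_rows",
                  {"purpose": "A duplicate row import is rejected"},
                  "PASS"))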

Test harnesses are developed as software


The test harnesses are themselves a software product that serves the team, and changes to them should be tested and implemented as such.
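By way of illustration (a hypothetical helper and test, not our actual code), treating the harness as software means its own helpers get unit tests of their own:

import unittest

def normalise_output(text):
    # Hypothetical harness helper: ignore trailing whitespace and blank
    # lines when comparing actual output with expected output.
    lines = [line.rstrip() for line in text.splitlines()]
    return "\n".join(line for line in lines if line)

class NormaliseOutputTest(unittest.TestCase):
    def test_trailing_whitespace_and_blank_lines_ignored(self):
        self.assertEqual(normalise_output("a  \nb\n\n"), "a\nb")

if __name__ == "__main__":
    unittest.main()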

Tests should be maintainable


Test harnesses should be designed to be easily extensible and maintainable. At RainStor the harnesses consist of a few central driving script/code modules and then individual modules for specific test types. We can add new test types to the system by dropping script modules with simple, common inputs/outputs into the harness structure.
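A minimal sketch of that plug-in structure (the module and package names are hypothetical): every test-type module exposes the same entry point, so adding a test type is just adding a file.

import importlib

def run_step(test_type, step_input):
    # Each test type lives in its own module, e.g. a hypothetical
    # testtypes/sql_query.py, and exposes a common run(step_input) -> result.
    module = importlib.import_module("testtypes." + test_type)
    return module.run(step_input)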

Tests should be resilient to changes in product functionality


We can update the central harness in response to changes in product interfaces without needing to amend the data content of thousands of individual tests.
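For example (the binary and flag names are invented for illustration), the harness wraps each product interface exactly once, so a renamed command-line option is a one-line change here rather than edits to thousands of test definitions:

import subprocess

def import_file(data_file):
    # The only place in the harness that knows how the (hypothetical)
    # importer is invoked. Test definitions just say "import this file".
    return subprocess.run(["./importer", "--file", str(data_file)],
                          capture_output=True, text=True)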

Tests allow for expected failures with bug numbers


This can be seen as a slightly contentious approach, and it is not without risk; however, I believe that the approach is sound. I view automated tests as indicators of change in the application. Their purpose is to indicate that a change has occurred in an area of functionality since the last time that function underwent rigorous assessment. Rather than having a binary PASS/FAIL status, we support the option of a result which may not be what we want but is what we expect, flagged with the related bug number. In this way we can still detect potentially more serious changes to that functionality, and we maintain the test's purpose as a change indicator without having to re-investigate every time the test runs, or turn the test off as a failing test.
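A minimal sketch of that three-way outcome (the test name, bug number and outputs are invented for illustration):

# Results we do not want but currently expect, keyed by test name and
# flagged with the bug that explains them.
KNOWN_FAILURES = {
    "imports/duplicate_rows": {"bug": "BUG-1234", "output": "error code 17"},
}

def assess(test_name, actual, expected):
    if actual == expected:
        return "PASS"
    known = KNOWN_FAILURES.get(test_name)
    if known and actual == known["output"]:
        # Still wrong, but the wrongness we already know about.
        return "EXPECTED FAIL (" + known["bug"] + ")"
    # Anything else is a new change in behaviour and needs investigating.
    return "FAIL"

print(assess("imports/duplicate_rows", "error code 17", "import rejected cleanly"))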

Tests may be timed or have max memory limits applied


As well as data results, the harnesses support recording, and testing against, limits on the time and system memory used in running a test. This helps in driving performance requirements and identifying changes in memory usage over time.
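A rough sketch of checking a step against recorded limits. Python's resource module is Unix-only and ru_maxrss is reported in kilobytes on Linux (bytes on macOS), so treat the memory check as illustrative of the idea rather than the mechanism we actually use:

import resource
import time

def run_with_limits(step, max_seconds, max_rss_kb):
    # step is a callable that runs one test step in this process.
    start = time.monotonic()
    result = step()
    elapsed = time.monotonic() - start
    rss_kb = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    problems = []
    if elapsed > max_seconds:
        problems.append("took %.1fs, limit %ss" % (elapsed, max_seconds))
    if rss_kb > max_rss_kb:
        problems.append("used %dKB, limit %dKB" % (rss_kb, max_rss_kb))
    return result, problems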

Tests and results stored in source control


The tests are an executable specification for the product. The specification changes with versions of the product, so the tests should be versioned and branched along with the code base. This allows tests to be designed for new functionality and performance expectations updated whilst maintaining branches of the tests relevant to existing release versions of the product.

Test results stored in RainStor


Storing results of automated test runs is a great idea. Automated tests can and should be used to gather far more information than simple pass/fail counts (see my further explanation on this here). Storing test results, timings and performance details in a database provides an excellent source of information for:-
* Reporting performance improvements/degradations
* Identifying patterns/changes in behaviour
* Identifying volatile tests

As we create a data archiving product, storing the results in it and using it for analysis provides the added benefit of "eating our own dog food". In my team we have the longest running implementation of our software anywhere.
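To make that concrete, here is a minimal sketch of recording runs and spotting volatile tests, using sqlite3 purely as a stand-in for whatever database (or archive product) you have to hand:

import sqlite3

conn = sqlite3.connect("test_results.db")
conn.execute("""CREATE TABLE IF NOT EXISTS results
                (run_id TEXT, test_name TEXT, outcome TEXT, seconds REAL)""")

def record(run_id, test_name, outcome, seconds):
    conn.execute("INSERT INTO results VALUES (?, ?, ?, ?)",
                 (run_id, test_name, outcome, seconds))
    conn.commit()

def volatile_tests(min_distinct_outcomes=2):
    # Tests whose outcome has varied across runs - prime candidates for a closer look.
    return conn.execute("""SELECT test_name, COUNT(DISTINCT outcome)
                           FROM results
                           GROUP BY test_name
                           HAVING COUNT(DISTINCT outcome) >= ?
                           ORDER BY COUNT(DISTINCT outcome) DESC""",
                        (min_distinct_outcomes,)).fetchall()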

These principles have evolved over time, and will continue to do so as we review and improve. In their current form they've been helping to drive our successful automation implementation for the last three years.

Copyright (c) Adam Knight 2009-2010
Glenn Halstead said...

Nice post Adam, I'm very much 'on page' with your thoughts.

Principles are vital and test developers need to understand the principles so they can implement the pertinent ones as their tests require. I often find test developers asking for rules that can always be applied, whereas principles need to be understood and intelligently applied; exceptions to the principles are often appropriate.

Some of my test automation principles that you have not mentioned are:

Design for debug: when the test does not behave as I expected it to how easy will it be for me to figure out why?

Reporting: Show me the data that was used to determine pass or fail. Typically, at best, I'll be shown that data if a test point fails. I want to see the data that the test used to determine its result, whether that's pass or fail.

Test script readability: I don't want to have to decipher code to understand WHAT the test is doing. I don't mind reading code to understand HOW the test does its thing; WHAT it's doing should be a level of understanding / simplicity above HOW it does it. For me this generally means really obviously named functions and variables more than commenting.

Glenn H

Adam Knight said...

Thanks for the feedback Glenn. Excellent additions to the principles too.

With regard to designing for debug, I wrote a post earlier this year on why automated testing is more than just checking, and the information to debug and diagnose faults was a key element of this.

I like your point on reporting; yes, we shouldn't try to abstract away the crux of what the test is using as its pass/fail criteria, as this can be a significant source of risk.

Agree totally on readability. The way that I approach this at RainStor is that each step in the test packs has an associated metadata file containing the purpose of that step. The test reports list this purpose along with the test results, so to see what the test is trying to do you can just follow the flow of comments. You make a good point about function and variable naming; this applies as much in the SUT itself as in the test harness. Having sensibly named functions can improve the testability of a product immensely.

Thanks again for the comments,

Adam

Albert Gareev said...

Hi Adam,

I totally agree, introducing and educating your stakeholders on the process you have established is critically important. Having answers ready for your reasoning helps build credibility and trust in automation solutions.

My variation of the principles, in a short version, is:

* Separate core automation framework, test object recognition, test logic and test data
* Scaling up a test suite does not require rewriting of the core framework
* Creating test logic does not require programming
* Test logic is intended to interact, observe, and log information; checking is not a purpose
* Test Plans can handle customizable Test Case dependencies
* Configuring / maintaining test logic and test data does not require programming
* Flexible and easily scalable support of data sources
* Fully detailed execution log, presentable in a variety of automatically generated reports (I use XML-XSL for that)
* SEED NATALI heuristic for GUI Automation
* Absolutely no hard-coding. Dynamic associative data structures instead.
* If doing "it" manually is better (easier, quicker, more trusted, etc.) for testers, automation is part of the problem, not a solution


Thank you,
Albert Gareev

Adam Knight said...

Albert. Thanks for the comments. I agree with most of them, particularly the last one. Automation only works if you are taking mindless effort away. If maintaining the automation requires more effort than manually testing, don't go there.

Not sure about "checking is not a purpose". I feel that checking is not the only purpose; however, for my automation a huge amount of checking is carried out that would be very time consuming and prone to error if done by a human (checking of output results from SQL queries is a job best done by machine!). I'd totally agree that there should be a lot more to it though; I wrote this post a while ago on other things that automation should achieve and why I would not refer to automated tests as checks.

Thanks again for the feedback,

Adam.

Adam Knight said...

Just found a comment from Adam Goucher here

A Set of Principles for Automated Testing is not a bad list. I completely disagree with the second one, and the last one is blatant employer promotion (but is contextually correct).

I thought I'd take the opportunity to respond.

i) I can understand there might be contexts where point 2 may not be applicable; however, in my last two roles the team's skills have been relevant to the product and its interfaces, and not necessarily to the technology being used to drive the automation. I've found that allowing the testers to define tests in a format that they understand, which then drives the test harnesses, can allow testers without programming knowledge to be highly productive in creating and maintaining automated tests.
ii) I'm sorry if the last point comes across as employer promotion. In hindsight I should not have named the system explicitly. In my last two roles I have worked on data storage and analysis systems, and in both roles I used those systems to store and analyse test results. I appreciate that my work context may present a unique opportunity to kill two birds with one stone here; however, if this were not the case I would still store my test results in a database, and I would still look to use my own application internally within my organisation if this were feasible.

Thanks Adam for taking the time to read and comment on the post.

Adam.

halperinko said...

Sorry for the late reply, but better late than never :-)
(Some readers might find the great post above and still make use of additional ideas.)
1. I fully agree with the notion that testers should be able to write their own tests (even if they don't have programming knowledge), and for that Keyword Driven Testing was "invented". We find it very easy for testers to define test cases in Excel sheets (easier to copy and so on for quick writing), while these functions are supported by a dedicated ATE programmer who writes the underlying code.
2. KDT also has the advantage that automation scripts can be written even before the SUT version exists, and even before the automation infrastructure exists - you just need to define the required function names and parameters.
3. KDT is platform agnostic - so it can run through different interfaces and use different test equipment models, just by defining which "driver" is used in each run.
4. Automation can and should be used to assist in semi-manual tests, scripted or exploratory; again, building a very simple GUI which takes a keyword and its parameters, plus the number of times to execute, can easily implement that.

Kobi @halperinko

halperinko said...

I forgot another issue in my earlier post:
Automation results must be easy to investigate!
Quite quickly one reaches the state where the automation runs quickly but investigating the results takes lots of time...
Special attention should be given to the ATE-produced logs and the means for viewing them.
I prefer drill-down, tree-like log viewing, which allows you to view the high-level results, quickly identify the problematic areas, then dive in to investigate just these points by expanding the relevant tree nodes.

In many cases the ATE logs rely on more detailed application logs, and therefore one must provide a means to synchronize between logs, to clearly identify the same investigated point in time in all logs.

Kobi @halperinko

Adam Knight said...

Thanks Kobi, any time is a good time to add useful tips to a blog post!

The principles that I follow grew very much out of Keyword Driven Testing, which I have used on previous projects, driving tests out of Excel in just the way you describe. Given that the main inputs to our system are file based, we've simply made the natural progression from an Excel structure to a file-driven one, with a set of file extensions and metadata tags for our "keywords". As you suggest, this allows tests to be developed before the software under test or the harness can support them, in the ATDD style.

Thanks for the great comments,

Adam
