February 2012 Monthly Meeting Summary
Test Automation Disasters and Successes: Lessons Learned - Roundtable discussion
This meeting was a roundtable discussion of test automation disaster and success stories -
those heard about or experienced by attendees. We discussed the characteristics of these projects,
what contributed to their outcomes, and what lessons can be learned to avoid automation 'disasters'
and foster 'successes'. We also discussed what 'success' means for software test automation and how
to measure it effectively.
Took place on: Wed., February 8, 2012, 6:30 PM
Points were raised that were considered helpful in fostering and evaluating test automation success:
- Disaster example: An automation project where the manager's direction was essentially 'just go automate everything you can', applied to a buggy, immature software product.
It was unclear to the automation engineers whether their tests should account for all the existing bugs, work around them in some way, etc. Essentially, the automation
project was not managed.
- Disaster example: Another project was mentioned where the automation itself worked well, but the results were of little use: much of what was automated covered
low-priority functionality and tests, which was recognized as a mistaken approach.
- Disaster example: There was a discussion of a major automation project where $100,000 was spent on automation tool licenses, without any initial analysis
to determine whether the tool to be purchased was effective or appropriate. It turned out - long after the purchase - that the automation
tools were incompatible with the UI components in the software to be tested: the developers were using UI code libraries that the
test tool could not work with. The $100,000 worth of automation software was never used, which proved to be one of many contributors to the ultimate
failure of a major software project.
- Many successful automation efforts were mentioned, although this became a more complex discussion once the group began examining what 'success' meant.
There was also much discussion of successful *aspects* of test automation projects, as opposed to wholly 'successful' automation projects.
- It is often necessary to educate management regarding test automation: upper management often does not know what 'test automation' really means,
what 'success' looks like, or what the risks are.
- It's helpful to determine who the automation stakeholders are and what automation 'success' means to them.
- The above point can feed into decisions about which testing to prioritize in an automation effort, and who should set those priorities.
- It was noted that perceived success can differ from actual success. For instance, some stakeholders may be pleased to see
attractive graphs and charts of automation results and deem things 'successful' on that basis alone, even if the results were of little technical value.
- It was suggested that a good measure of success is whether the automation provides effective visibility into the software development process and into the
quality of the software.
- It can be helpful to keep automated tests simple and focused, though this may require careful planning (see the first sketch after this list).
- When purchasing COTS automation tools (or even adopting open source tools), a differentiator can be the online/on-call support available for the tool.
- Questions to consider when planning for automation success: 'Who would know or care if the test automation is any good?' and 'How will they know?'
Comments were that testers are often the only ones who know whether a test automation effort is doing any good. Experienced stakeholders may ask, 'Who will be
testing your automation to verify its effectiveness?'
- It was mentioned that an indicator of automation success is 'if it's finding bugs'.
- It can be helpful if the bug tracking system records whether a bug was found via manual or automated testing (see the second sketch after this list).
- There were a variety of ideas regarding 'success' in automation projects; a primary goal of 'saving money' was mentioned as being
difficult to measure and, like many other metrics, subject to fudging.
- Enhancing a team's ability to develop software faster was mentioned as a valuable improvement; again, it could be hard to measure without
ending up comparing apples to oranges.
- Some goals for improvement that could indicate 'success' were: improved build monitoring, improved 'breadth testing' (there was discussion
that breadth is easier to improve via automation than depth), and improved test coverage.
- It was suggested that a useful metric is the time to execute a test run via automation versus the time to run the same tests manually (see the third sketch after this list).
- There was discussion about the usefulness of being proactive in letting those outside the test team know about automation
results and the added value provided by automation.
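To illustrate the 'simple and focused' point above, here is a minimal sketch in Python (pytest style); the login function and its behavior are hypothetical stand-ins, not from any project discussed at the meeting:

    # Minimal sketch of simple, focused automated tests (pytest style).
    # 'login' is a hypothetical stand-in for the system under test.
    def login(username, password):
        return username == "alice" and password == "secret"

    def test_login_succeeds_with_valid_credentials():
        # One focused behavior per test makes failures easy to diagnose.
        assert login("alice", "secret") is True

    def test_login_fails_with_wrong_password():
        assert login("alice", "wrong") is False

Keeping each test this narrow means a failure points directly at one behavior, rather than requiring investigation of a long multi-step script.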
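To illustrate the bug-tracking point, here is a minimal sketch assuming the tracker can export bugs with a field recording how each one was detected; the field name 'detected_by' and the data are hypothetical:

    # Minimal sketch: tally bugs by detection method, assuming a
    # hypothetical 'detected_by' field in the bug tracker export.
    from collections import Counter

    bugs = [
        {"id": 101, "detected_by": "automated"},
        {"id": 102, "detected_by": "manual"},
        {"id": 103, "detected_by": "automated"},
    ]

    counts = Counter(bug["detected_by"] for bug in bugs)
    print(counts)  # Counter({'automated': 2, 'manual': 1})

A tally like this gives one concrete data point for the 'is it finding bugs?' indicator mentioned above.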
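To make the run-time metric concrete, here is a minimal sketch with hypothetical numbers comparing automated and manual execution time and estimating when the automation effort pays for itself:

    # Minimal sketch of the run-time metric; all numbers are hypothetical.
    manual_hours_per_run = 40.0    # time to execute the suite manually
    automated_hours_per_run = 2.0  # time to execute the same suite automated
    development_hours = 300.0      # one-time cost to build the automation

    savings_per_run = manual_hours_per_run - automated_hours_per_run
    breakeven_runs = development_hours / savings_per_run
    print(f"Hours saved per run: {savings_per_run}")
    print(f"Runs to break even: {breakeven_runs:.1f}")  # about 7.9 runs

Note that the one-time development cost belongs in the comparison; leaving it out is one of the 'fudging' risks mentioned above.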