February 2012 Monthly Meeting Summary

Topic:
Test Automation Disasters and Successes: Lessons Learned - Roundtable discussion

This meeting was a roundtable discussion on stories of test automation disasters and successes - those heard about or experienced by attendees. We discussed the characteristics of these projects and considered what contributed to them being disasters or successes, and what lessons can be learned from them to avoid automation 'disasters' and ensure 'successes'. We also discussed what 'success' means for software test automation and how to effectively measure it.

Took place on: Wed., February 8, 2012, 6:30 PM

Attendance: 14

Meeting Notes:

Some points were brought up that were thought to be helpful in fostering and determining test automation success:
  • It is often necessary to educate management regarding test automation - upper management often does not know what 'test automation' really means, what 'success' looks like, or what the risks are.
  • It's helpful to determine who the automation stakeholders are and what automation 'success' means to them.
  • The above point can feed into decisions about which testing to prioritize in an automation effort, and who should decide those priorities.
  • It was noted that there can be differences between perceived success and actual success. For instance, some stakeholders may be pleased to see attractive graphs and charts of automation results and will deem the effort 'successful' on that basis alone, even if the results were not technically useful.
  • It was suggested that a good measure of success is whether the automation provides effective visibility into the software development process and into the quality of the software.
  • It can be helpful to keep automated tests simple and focused - but this may require careful planning.
  • When purchasing COTS automation tools (or even when adopting open source tools), a differentiator can be the online/on-call support available for the tool.
  • Points to consider in automation planning for success are: 'Who would know or care if the test automation is any good?' and 'How will they know?' Comments were that testers are often the only ones who know whether a test automation effort is doing any good. Experienced stakeholders may ask, 'Who will be testing your automation to verify its effectiveness?'
  • It was mentioned that an indicator of automation success is 'if it's finding bugs'.
  • It can be helpful to note in the bug tracking system whether a bug was found via manual or automated testing.
  • There were a variety of ideas regarding 'success' in automation projects; a primary goal of 'saving money' was mentioned as being difficult to measure and subject to fudging, like many other metrics.
  • Enhancing the ability of a team to develop software faster was mentioned as a valuable improvement; again, this could be hard to measure without ending up comparing apples to oranges.
  • Some goals for improvement that could indicate 'success' were: improvement in build monitoring, improved 'breadth testing' (there was discussion that it's easier to improve 'breadth testing' than 'depth testing' via automation), improved testing coverage.
  • It was suggested that a useful metric is the time needed for an automated test run versus the time to perform the same testing manually (see the sketch after this list).
  • There was discussion about the usefulness of being proactive in letting those outside the test team know about automation results and the added value provided by automation.
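As a rough illustration of the 'automated run time vs. manual run time' metric mentioned above, the Python sketch below computes the tester time saved per test cycle and how many automated runs it takes to recoup the up-front scripting effort. The function names and the sample figures are illustrative assumptions, not anything presented at the meeting, and a real comparison would also need to account for ongoing script maintenance.

    # Sketch of the "automation run time vs. manual run time" metric.
    # All names and numbers below are illustrative assumptions.

    def time_saved_per_cycle(manual_minutes, automated_minutes, triage_minutes=0):
        """Minutes of tester time saved each cycle the suite runs automated
        instead of manually (triaging automated results still costs time)."""
        return manual_minutes - (automated_minutes + triage_minutes)

    def payback_runs(setup_minutes, saved_per_cycle):
        """Number of automated runs needed before the up-front scripting
        effort pays for itself."""
        if saved_per_cycle <= 0:
            return float("inf")  # automation never breaks even at these costs
        return setup_minutes / saved_per_cycle

    # Made-up example: a regression pass that takes 8 hours manually,
    # 1 hour automated plus 30 minutes of result triage, and 40 hours
    # of initial scripting effort.
    saved = time_saved_per_cycle(manual_minutes=480, automated_minutes=60, triage_minutes=30)
    print(f"Time saved per cycle: {saved} minutes")                # 390 minutes
    print(f"Runs to break even: {payback_runs(2400, saved):.1f}")  # ~6.2 runs

Even a back-of-the-envelope calculation like this can help frame the 'saving money' discussion noted earlier, since it makes the assumptions behind the claimed savings explicit.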


 
 
 
