February 2011 Monthly Meeting Summary
Test Automation Design - Roundtable Discussion
This was a roundtable discussion following up on our March 2010 roundtable on test automation design. We reviewed some of the points
that came out of the March meeting and continued discussion of the areas where we had run out of time previously. We also
solicited a real automation project from among the attendees, discussed some of its requirements, and explored its design issues.
Took place on: Wednesday, February 9, 2011, 6:30 PM
One of the attendees discussed a new automation project and its challenges, and this provided a focus for early-stage automation design considerations.
Much of the design discussion surfaced the kinds of questions that need to be asked about such a project. The considerations discussed included:
- It was suggested that good documentation is needed if manual testers are to use the automation developed by others
- Code reviews were recommended for multi-person automation teams - 'rigorous' code reviews, not just check-the-box reviews
- One attendee mentioned an approach where one engineer did the automation design and documented it, another coded it, and a third person tested it;
they said the approach worked well.
- The use of 'common code' (code/classes/methods that others will utilize in their automation) was discussed, along with documentation for it
- It was mentioned that someone should verify that the automation that is developed really tests what is expected.
- There was much discussion of what automation typically covers: like much unit testing, automation often covers 'happy path'
or 'positive' testing and is often oriented toward integration-type testing. Only one person indicated that they had tried monitoring the code coverage of their
automated tests. The consensus was that budget and schedule constraints were usually a factor in all this.
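Since only one attendee had tried measuring coverage, it may help to see what such a check can reveal. Below is a minimal sketch using Python's standard-library `trace` module; the function under test and both test suites are hypothetical, not from the meeting. It compares the lines exercised by a happy-path-only suite against a suite that adds negative cases:

```python
import trace

def validate_age(age):
    """Hypothetical function under test."""
    if not isinstance(age, int):
        return False  # reached only by a negative (wrong-type) test
    if age < 0 or age > 150:
        return False  # reached only by a negative (out-of-range) test
    return True

def lines_covered(test_fn):
    """Run test_fn under the stdlib tracer and return the set of executed
    line numbers in this file (the test plus the code it exercises)."""
    tracer = trace.Trace(count=True, trace=False)
    tracer.runfunc(test_fn)
    target = validate_age.__code__.co_filename
    return {ln for (fname, ln) in tracer.results().counts if fname == target}

def happy_path_suite():
    assert validate_age(30) is True

def full_suite():
    assert validate_age(30) is True
    assert validate_age("thirty") is False  # negative: wrong type
    assert validate_age(-1) is False        # negative: out of range

# The happy-path suite never reaches the two failure branches, so it
# executes strictly fewer lines than the suite with negative cases.
print(len(lines_covered(happy_path_suite)), len(lines_covered(full_suite)))
```

A real project would more likely use a dedicated tool such as coverage.py, but even this crude line count makes the 'positive testing only' gap visible.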
There was further discussion of design questions for the project and of the common code developed for an automation suite/framework:
- Target audience of the automation results
- Stakeholders of the project
- What that audience expected
- What their priorities were, which feeds into the decision of what to try to automate
- How configurable should the automation be?
- Who will use it and run it?
- Expected outputs - reporting
- Custom development or use off-the-shelf open source or COTS?
- Learning curve
- Team skills
- Documentation - Wiki? MS Word? ....?
- Version control, standards, and enforcement thereof
- Alignment with development processes
- Tool/framework patching and updating processes
- Tests from manual testers? Or do the automation people come up with the test cases?
- If the tests to be automated are written up by manual testers, do they need guidelines as to how to write automatable test cases?
- Where there is a lot of common code to be developed, it may require significant time/resources - so tradeoff decisions and judgment are needed.
- Development of common code to support negative tests, one-offs, etc. can also require significant additional time/resources
- It was recommended to keep functions/classes/methods small and simple - general good coding practices
- It was suggested that writing good common code requires a good understanding of the software that is to be tested
- Common code is best written by those with good programming skills
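As a concrete illustration of those last recommendations - small, single-purpose, and documented - here is a sketch of the kind of shared helper a common-code library might contain. The name and behavior are hypothetical examples, not something presented at the meeting:

```python
import time

def wait_until(condition, timeout=5.0, interval=0.1):
    """Common-code helper (hypothetical): poll `condition` until it returns
    a truthy value, then return that value; raise TimeoutError if `timeout`
    seconds elapse first.

    Kept small and documented so testers reusing the shared library can
    apply it without reading its source.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = condition()
        if result:
            return result
        time.sleep(interval)
    raise TimeoutError("condition not met within %.1f seconds" % timeout)
```

A tester reusing the library could then write something like `wait_until(lambda: page.is_loaded())` against their own page object, which is exactly the kind of reuse that good documentation of common code enables.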
Copyright 2011 Northern Virginia Test Automation Interest Group