QW2001 Paper 4M1

Mr. David Fern
(Micros Systems Inc.)

How Testers Can And Should Drive Development Cycles

Key Points

Presentation Abstract

Having finished our last project, which ended as a fire drill as usual, the managers of the development and test teams concluded that we were never going to go through that again. All agreed the test team had been riding the development cycle bus long enough.

Our new paradigm has test driving the development cycle bus instead of engineering. Our first task was to organize and sell the idea to product and project management and a skeptical engineering team. Everyone, including engineering, management, and especially test, is now fully on board and wouldn't want it any other way. The following paragraphs outline the plan currently in use.

* Establish daily events/goals that will allow you to meet weekly build objectives. During the early phase of the software development cycle, engineering builds the software every week. Closer to the Beta release, the software can be, and often is, built semiweekly. Each weekly build is treated as a release candidate in the sense that the state of the software is always known. If the software has to ship, it is known which areas have been tested and which modules are or are not ready to ship. Engineering and test are each assigned daily deliverables for which they are accountable to the other team. Each team's commitment to delivery allows the build to succeed and improves the state of the software week over week.
* The bug triage process is a weekly meeting where the lead engineer and lead tester agree on the bugs that will be fixed in the upcoming week's build cycle. This is the chance for the lead tester and lead engineer to reach a consensus and address the most severe bugs. In most cases, addressing the most severe bugs enables testing to continue, and it deepens test's weekly reporting on the state of the software. Triage also provides a time for discussion and risk analysis between the test and engineering teams.
* Grading the software takes a two-tiered approach: weekly build cycle grades and module grades. The grades are calculated from the bug repository counts using a scale everyone is familiar with: A to F, just like school. Grades are distributed to each team member and posted where all can see. (A sketch of one way to compute these grades appears after this list.)
* The test team grades the weekly cycle build. Most bugs are found during sanity testing of the weekly build. If an engineer introduces showstoppers that prevent testing, the build receives a letter grade of F. If an engineer breaks the weekly build, it receives the same grade, and the engineer who broke it has to buy the entire team bagels.
* The software is also segmented into modules, and test grades each module weekly. Some modules' grades will remain static while test recommends that engineering focus on the modules with the most severe problems. The ultimate goal is to have every module at the letter grade agreed upon in the test plan. The module grades are calculated using the same counts and scale as the weekly builds.
* This type of grading system goes beyond the use of metrics, in that everyone on the team knows the status of the overall software as well as the individual modules. All are striving for an A and each team member can see the progress and the readiness of the software.
* Finally, once the process is in place, it is easy to accelerate or decelerate the entire process as needed. The hardest part is starting: getting organized, developing the grading criteria and the bug-tracking tools, and, of course, selling the idea to management and engineers. In the end, we all win. We are ready for show time and can produce a gold CD with a few finishing touches, just like we've done numerous times before in our weekly dress rehearsals.
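
To make the grading mechanics concrete, the short Python sketch below shows one way the two tiers could be computed from open-bug counts in the bug repository. The severity weights, grade thresholds, Bug structure, and module names are illustrative assumptions; the paper only specifies that grades come from bug repository counts on an A-to-F scale.

    from dataclasses import dataclass

    # Hypothetical severity weights: showstoppers dominate the score.
    SEVERITY_WEIGHTS = {"showstopper": 10, "major": 5, "minor": 1}

    # Hypothetical score-to-grade thresholds on the A-to-F scale.
    GRADE_SCALE = [(0, "A"), (5, "B"), (15, "C"), (30, "D")]

    @dataclass
    class Bug:
        module: str
        severity: str        # "showstopper", "major", or "minor"
        is_open: bool = True

    def grade(bugs):
        """Turn a set of bugs into a letter grade for a build or module."""
        open_bugs = [b for b in bugs if b.is_open]
        # Any open showstopper fails the build or module outright.
        if any(b.severity == "showstopper" for b in open_bugs):
            return "F"
        score = sum(SEVERITY_WEIGHTS[b.severity] for b in open_bugs)
        for threshold, letter in GRADE_SCALE:
            if score <= threshold:
                return letter
        return "F"

    def module_grades(bugs):
        """Second tier: grade each software module separately."""
        by_module = {}
        for b in bugs:
            by_module.setdefault(b.module, []).append(b)
        return {name: grade(module_bugs) for name, module_bugs in by_module.items()}

    if __name__ == "__main__":
        repository = [
            Bug("payments", "major"),
            Bug("payments", "minor"),
            Bug("reports", "minor", is_open=False),
            Bug("kitchen_display", "showstopper"),
        ]
        print("Weekly build grade:", grade(repository))     # F: an open showstopper
        print("Module grades:", module_grades(repository))  # per-module letters

Run weekly against the live bug repository, the output of a script like this is what gets distributed to each team member and posted where all can see the build and module grades.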

About the Author

David Fern has a background in Foodservice Management that spans more than two decades. During the past four years he has been involved in all aspects of Food Service Point of Sale applications and devices, specializing in the development of large restaurant and hotel computer systems. Currently he is a Test Specialist in Research and Development at Micros Systems, Inc. in Columbia, Maryland. MICROS is the premier developer of software for the hospitality industry worldwide.

Prior to his current professional endeavors, David was a Manager for the University of Maryland's Dining and Retail Services, responsible for Point of Sale operations on a campus of over 60,000 students.

David recently had an article entitled "Testing Point of Sale Software" published in the November 2000 Quality Techniques Newsletter. David is a graduate of the University of Maryland, College Park, with a Master's Degree in Industrial Technology and a Bachelor's Degree in Business Management.