10th International Software Quality Week 1997


Sheraton Palace Hotel
27-30 May 1997
San Francisco, California USA

A GUIDED TOUR OF QUALITY WEEK '97...

CONFERENCE THEME: "Quality in the Marketplace"


As the 90's draw to a close, we see more and more that software quality issues are among the essential driving forces behind software selection, use, and world-wide market expansion. Companies large and small, government agencies, in short, everyone, want to make sure that the software products they deliver provide good value and effective use.

QW'97 is a multi-threaded conference aimed at all levels of software quality people, from those just beginning new projects to those with many years of experience. People come to Quality Week to get started, to hone their skills, to share results with their colleagues ... a thousand reasons. QW'97 has something for everyone in the software quality field.

This short writeup, a companion to the printed brochure and the on-line conference description, gives an informal summary of the entire conference.

Note that the QW'97 session numbers are given with each paper; for example, "(TA)" means Tutorial A.

HALF-DAY TUTORIALS

The conference opens with 10 carefully selected speakers, with topics intended to address the issues of today as well as the issues of tomorrow.

Basics of testing are presented by the world-renowned test expert Boris Beizer (Independent Consultant) in An Overview of Testing (TA), likely to be, as it has been in the past, one of the best general introductions to the software quality area you will find.

In keeping with the growing interest in process-oriented approaches to quality, there are tutorials by Matthew Cooke (European Software Institute), Introduction to SPICE (ISO 15504 Software Process Assessment) (TH) and by William J. Deibler & Bob Bamford (Software Systems Quality Consulting), Software Engineering Models for Quality: Comparing the SEI Capability Maturity Model (CMM) to ISO 9001 (TC).

Along the way, software designers need to know the intrinsic value of inspection methods, so we have arranged to bring you: Tom Gilb (Independent Consultant), Optimizing Software Inspections (TE).

Technically oriented testing is the key to good quality software, as illustrated by Robert V. Binder (RBSC Corporation), Test Automation for Object-Oriented Systems (TB). And, continuing the theme of using the best available technology, there is a generalized method described by Michael Deck (Cleanroom Software Engineering, Inc.), Cleanroom Development: Formal Methods, Judiciously Applied (TJ).

Real-world results have to be taken into account, too. Good specifications beget good software, says Bob Poston (Aonix), 10X Testing: Automating Specification-Based Testing (TG) and optimized field testing using the concepts of execution profiles that he invented are described by John Musa (Consultant), Applying Operational Profiles in Testing (TF).

We all know that the Year 2000 problem may be a real watershed for software quality, and this topic is the focus of the tutorial by Nicholas Zvegintzov (Software Management Network), Testing for Year 2000 (TI).
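For readers new to the problem, the core difficulty is easy to demonstrate: legacy systems stored years as two digits, so "00" is ambiguous. The sketch below is a generic illustration of the common "windowing" remediation, not material from the tutorial; the pivot value of 50 is an assumed, site-specific choice.

```python
# A generic illustration of the "windowing" fix for two-digit year
# fields (not from the tutorial; PIVOT is an assumed cutoff).
PIVOT = 50  # two-digit years below the pivot are read as 20xx

def expand_year(yy: int) -> int:
    """Expand a two-digit year to four digits using a fixed window."""
    return 2000 + yy if yy < PIVOT else 1900 + yy

# The classic failure mode: naive 19xx expansion makes year "00"
# sort before "99", breaking date comparisons after 1999.
assert expand_year(99) == 1999
assert expand_year(0) == 2000
```

Windowing only postpones the ambiguity past the pivot year, which is exactly why the testing questions the tutorial raises remain so pressing.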

Finally, judging from the recent activity and concern, in particular the wave of concern brought about by the WWW and the Internet and all of its issues, the legal aspects have to be important to us all. Hence we have a tutorial by Cem Kaner, Brian Lawrence and Tom Arnold (Falk & Nguyen), Software Quality-Related Law (TD).

KEYNOTE TALKS

Special events like QW'97 give us an opportunity to address critical issues, controversial issues and issues that are likely to be important in the future.

In view of the interest in ISO-9000 you'll be pleased to know that not everyone believes ISO-9000 to be a panacea. This keynote by John Seddon (Consultant), In Pursuit of Quality: The Case Against ISO 9000 (1P1) aims to clarify the ISO-9000 effectiveness question.

Software *IS* software and we don't really know enough about it to manage it well. Two keynotes take different looks at the question of where and how software should be evolved: Lawrence Bernstein (National Software Council), Software Dynamics: Planning for the Next Century (5P1), and, Dick Hamlet (Portland State University), Keeping The "Engineering" In Software Engineering (5P2).

We all know that the stability and security of the Internet is a main concern. Two highly respected keynoters describe the problem and suggest ways to gain confidence: Dorothy Denning, Cyberspace Attacks and Countermeasures: How Secure IS the Internet? (1P2) and, Lori A. Clarke (University of Massachusetts), Gaining Confidence in Distributed Systems (10P1).

Finally, employing his well-known style of truth-telling, there will be another critical look at the Year 2000 question: Boris Beizer (Independent Consultant), Y2K Testing: The Truth of the Matter (10P2).

QUICK-START MINI-TUTORIAL TRACK

The QuickStart Mini-Tutorials are designed to give the attendee an in-depth look at an important area of software quality, and also to assist in passing on the advice of experienced software quality people to those new in the field.

The first two Mini-Tutorials in our QuickStart track focus on how best to organize your activities to get the most results for the least work. Tom Drake (NSA Software Engineering Center), Managing Software Quality -- How to Avoid Disaster and Achieve Success (2Q) explains how it's done in a large scale government laboratory, and, Craig Kaplan (I.Q. Company), Secrets of Software Quality (9Q) addresses the issues from the point of view of the smaller programming shop.

The key to success in testing is to be sharp, quick, detail-oriented ... all attributes that describe the speaker in this tutorial: James Bach (STL), How to be an Expert Tester (4Q).

The best testing teams are the best managed ones, and Brian Marick (Testing Foundations), The Test Manager at The Project Status Meeting (6Q) focuses attention on the role that the test manager plays.

As testing technology matures, there is increased emphasis on the use of technical methods of test completeness checking, the topic of Robert M. Schultz (Motorola), Test Coverage Analysis (7Q).
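For readers new to the idea, coverage analysis measures which parts of the code a test suite actually executes. The toy Python sketch below is our own illustration, not Motorola's tooling; it uses the standard sys.settrace hook to record executed lines and show how an untested branch is exposed.

```python
# A toy illustration of statement-coverage measurement (our own sketch,
# not Motorola's tooling): sys.settrace reports each executed line of
# the function under test, so untested branches show up as missing lines.
import sys

def traced_lines(func, *args):
    """Run func(*args) and return the set of line numbers it executed."""
    hits = set()
    def tracer(frame, event, arg):
        if event == "line" and frame.f_code is func.__code__:
            hits.add(frame.f_lineno)
        return tracer
    sys.settrace(tracer)
    try:
        func(*args)
    finally:
        sys.settrace(None)
    return hits

def classify(n):            # the function under test
    if n < 0:
        return "negative"
    return "non-negative"

partial = traced_lines(classify, 1)           # one test, one branch
full = partial | traced_lines(classify, -1)   # second test covers the rest
assert len(full) > len(partial)   # the extra test executed a new line
```

Real coverage tools report the same information per statement, branch, or path, rather than as raw line sets.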

Lastly, we have the issue of getting it right the first time and making sure that business issues are handled well. For this we have: Tom Gilb (Independent Consultant), Making Contracts Testable (3Q).

SPECIAL PANEL SESSION

Is Java really going to take over the world? Has anyone *NOT* heard that Java is our future common programming language? This panel session, How Does Java Address Software Quality: A Special Panel Session (8Q) brings industry experts together in a forum where you'll hear the good news and the bad news, and maybe even some news that is thought provoking as well!

TECHNOLOGY TRACK

The long term growth of software quality is founded in developing technologies that address current and even future problems. The Technology Track has some 15 papers from around the world that show off the latest thinking from labs and from the field and give you a good cross-section of what people are doing today that can lead to exciting results in the future.

Real time systems are extremely difficult to test, and these two papers take a good hard look at the issues. Victor Braberman, Martina Marre and Miguel Felder (Universidad de Buenos Aires), Testing Timing Behaviors of Real Time Software (4T1) and Joachim Wegener & Matthias Grochtmann (Daimler-Benz AG), Testing Temporal Correctness of Real-Time Systems by Means of Genetic Algorithms (4T2) give an additional international perspective on testing these critical types of software.

Detailed program analysis is a key technology, and Antonia Bertolino & Martina Marre (IEI-CNR), A General Path Generation Algorithm for Coverage Testing (2T1), James R. Lyle & Dolores R. Wallace (National Institute of Standards and Technology), Using the Unravel Program Slicing Tool to Evaluate High Integrity Software (3T1), William Howden (University of San Diego), Partial Statistical Test Coverage and Abstract Testing (7T2), and Hugh McGuire (University of California, Santa Barbara), Generating Trace Checkers for Test Oracles (3T2) look into four kinds of detailed in-the-source-code analysis.

Object oriented approaches have high payoffs, provided that you do things right. The paper by Daniel Jackson (Carnegie Mellon University), Automatic Analysis of Object Models (6T2) examines how best to automate object oriented testing, and the paper by Lee J. White & Khalil Abdullah (Case Western Reserve University), A Firewall Approach for the Regression Testing of Object-Oriented Software (6T1) tries out a new concept.

Test support systems are often key, as Huey Der-Chu & John E. Dobson (University of Newcastle upon Tyne), An Integrated Test Environment for Distributed Applications (8T1) and Dolores R. Wallace & Herbert Hecht (National Institute Of Standards & Technology), Error Fault and Failure Data Collection and Analysis (7T1) make clear.

A hot new area -- mutation testing is the older name, genetic algorithms is the newest name -- has generated a lot of excitement. Istvan Forgacs & Eva Takacs (Computer and Automation Institute), Mutation-based regression testing (9T2), and Marc Roper (Strathclyde University), Computer Aided Software Testing Using Genetic Algorithms (9T1) give two different views of what can be expected.
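For readers unfamiliar with the idea, mutation testing judges a test suite by whether it can distinguish the original program from "mutants" containing small injected faults. Below is a minimal sketch of our own, not the authors' techniques; all function names and test inputs are invented.

```python
# A toy mutation-testing sketch (an invented illustration, not the
# authors' tools): a mutant is the program with one small fault
# injected; a suite "kills" the mutant if some test sees a difference.
def max_of(a, b):           # original program under test
    return a if a > b else b

def max_of_mutant(a, b):    # mutant: operator ">" changed to "<"
    return a if a < b else b

def suite_kills(mutant, original, tests):
    """A suite kills a mutant if any test observes a different output."""
    return any(mutant(*t) != original(*t) for t in tests)

weak_suite = [(3, 3)]            # never distinguishes mutant from original
strong_suite = [(3, 3), (2, 1)]  # the input (2, 1) kills the mutant

assert not suite_kills(max_of_mutant, max_of, weak_suite)
assert suite_kills(max_of_mutant, max_of, strong_suite)
```

The proportion of mutants killed gives a measure of test-suite adequacy, which is what makes the technique attractive for judging regression suites.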

A novel approach to code inspection, using machine assistance (the prototype product is called ASSIST), is described in the paper by Fraser Macdonald & James Miller (University of Strathclyde Department of Computer Science), Automated Generic Support for Software Inspection (8T2).

Finally, the world's expert on operational profiles describes practical methods for influencing your testing in a reliability enhancing way: John Musa (Consultant), Applying Operational Profiles in Software-Reliability-Engineered Testing (2T2).
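The underlying idea is straightforward: operations are tested in proportion to how often they will occur in the field, so testing effort tracks reliability impact. The sketch below is an invented illustration, not Musa's actual method or data; the operation names and probabilities are assumptions.

```python
# An illustrative sketch of operational-profile-driven test selection
# (the operations and probabilities are invented, not Musa's data).
import random

# Assumed profile: operation -> estimated probability of field usage
profile = {"lookup": 0.70, "update": 0.25, "delete": 0.05}

def allocate_tests(profile, budget):
    """Split a fixed test budget across operations by usage probability."""
    return {op: round(p * budget) for op, p in profile.items()}

def next_operation(profile, rng=random):
    """Draw the next operation to exercise, weighted by the profile."""
    ops, weights = zip(*profile.items())
    return rng.choices(ops, weights=weights)[0]

alloc = allocate_tests(profile, 1000)
assert alloc == {"lookup": 700, "update": 250, "delete": 50}
```

Frequently used operations thus receive most of the testing, which is why profile-driven testing tends to maximize delivered reliability per test hour.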

APPLICATIONS TRACK

How modern methods are applied makes a big difference in how successful a software quality enhancement project is going to be.

To start with, your approach has to be systematic and thorough. These two papers speak to the question of how to be more systematic and "more rigorous": Larry Apfelbaum (Teradyne Software & Systems Test), Model Based Testing (8A2), and, Albrecht Zeh & Paul Bininda (Sekas GmbH), Quality Improvement by Automatic Generation and Management of Test Cases (9A2).

Quality on the WWW is critical these days, and the quality issues affecting, and affected by, all manner of WWW-based systems are becoming increasingly important. The papers by Peter Middleton & Colm Dougan (Queens University Belfast), Grey Box Testing C++ via the Internet (7A1), and, Shankar L. Chakrabarti & Harry Robinson (Hewlett Packard Company), Catching Bugs in the Web: Using the World Wide Web to Detect Software Localization Defects (7A2), deal specifically with WWW-based issues. Meanwhile, taking their cue from other aspects of systematic test automation, the papers by Deborah MacCallum (Bay Networks), Test Automation Solutions for Complex Internetworking Products (9A1), Ram Chillarege (IBM), Orthogonal Defect Classification (4A2), and, Taghi Khoshgoftaar (Florida Atlantic University), Identifying Fault-Prone Modules: A Case Study (4A1) look in detail into the question of how best to define, track, and contend with software trouble reports (i.e. defects).

Quality issues arise in the world's space programs as well, as the papers by Francesco Piazza (Alenia Spazio), A Requirement Traceability Application for Space Systems, and, Jennifer Davis & Daniel Ziskin & Bryan Zhou (NASA Goddard Space Flight Center), The Challenge of Testing Innovative Science Software on a Rapidly Evolving Production Platform (2A1) make clear.

Putting things into practice, sometimes an experience that differs a lot from the expectations, is addressed by two speakers: Otto Vinter (Bruel & Kjaer), How to Apply Static and Dynamic Analysis in Practice (6A1), and, Danny R. Faught (Hewlett Packard Company), Experiences with OS Reliability Testing on the Exemplar System (3A2).

Lastly, how tool environments affect delivering software quality is addressed by: Joe Maybee (Tektronix Inc.), Mother2 and MOSS: Automated Test Generation from Real-Time Requirements (6A2), Rob Oshana (Texas Instruments), Improving a System Regression Test With a Statistical Approach Implementing Usage Models Developed Using Field Collected Data (8A1), and, Michael E. Peters (Digital Equipment Corporation), Managing Test Automation: Reigning in the Chaos of a Multiple Platform Test Environment (3A1).

MANAGEMENT TRACK

Process issues are sometimes as important as the product itself -- after all, if the process is good then the product OUGHT to be good too! But process involves more than just complying with a standard, as these four papers show: Ana Andres (European Software Institute), ISO-9000 Certification as a Business Driver: The SPICE (4M2); Marilyn Bush (Xerox Corporation), What is an SEPG's Role in the CMM? (7M2); Paul Taylor (Fujitsu Software Corporation), Workware & ISO 9000 (4M1); and Johanna Rothman (Rothman Consulting Group), Is your Investment in Quality and Process Improvement Paying Off? (7M1).

Allied to this is the issue of process measurement, addressed well in the paper by Don O'Neill (Independent Consultant), National Software Quality Experiment, A Lesson in Measurement (3M2).

Getting the most out of your test group is a key area, too. Here are three papers that give you good tips on how to maximize the effectiveness of your test group: Nick Borelli (Microsoft), Tuning your Test Group: Some Tips & Tricks (9M1), Dave Duchesneau (The Boeing Company), Guerrilla SQA (9M2), and, Andrew A. Gerb (Space Telescope Science Institute), Delivering Quality Software in Twenty-Four Hours (6M1).

Many quality engineers have to deal with so-called legacy systems, and the concepts that apply in this case may be subtly different from those that apply to new code. The papers by John Hedstrom & Dennis J. Frailey (Texas Instruments), Mastering the Hidden Cost of Software Rework (2M2), and, Lech Krzanik & Jouni Simila (CCC Software Professionals Oy), Incremental Software Process Improvement Under "Smallest Useful Deliverable" (3M1) take a fresh, forward-looking peek at what really is involved in this critical area.

We learn best from experience, and the conference would not be complete without some case studies. The situations presented here give a cross-section of the issues: Brian G. Hermann & Amritt Goel (U.S. Air Force), Software Maturity Evaluation: Can We Predict When Software Will be Ready for Fielding? (6M2), G Thomas (Vienna University of Technology), On Quality Improvement in "Heroic" Projects (8M1), and, Ian Wells (Hewlett Packard), Hewlett Packard Fortran 90 compiler project case study (8M2).

EXHIBITS AND VENDOR PRESENTATIONS

The technical tradeshow will have over 25 different companies and a technically reviewed Vendor Technical Track will present carefully selected Vendor talks. For complete information or to obtain a copy of the Conference Brochure please send Email to qw@sr-corp.com.