Are Templates Dangerous? Teaching, Templates, and Testing
By David Gelperin
Summary: When we mindlessly use templates to produce easy answers, we're trying to play chess with a checkers mentality. That's Gelperin's warning in this article. Templates that address context-sensitive issues demand serious analysis, not fill-in-the-blank answers. As he aptly suggests, "Perhaps every test doc template should come with a warning label: 'Not a Substitute for Thinking.'"
If a methodology is a program for accomplishing some set of goals, then we can envision a spectrum of methodologies ranging from context-free to context-sensitive, with many points in between. We may also think of this spectrum in terms of the density of "ifs" in the methodology. Pure context-free methodologies have no ifs, while extreme context-driven methodologies are filled with ifs.
Except for pure context-free methods, the only correct answer to most questions about what to do next is "It depends." Process wisdom is knowing what the relevant parameters are and how specific parameters should be weighted in order to make an effective decision. In game playing, for instance, chess is more difficult than checkers. Chess has more parameters (kinds of pieces) and more complex relationships (positions on the board). We must learn to recognize patterns and respond appropriately. Success follows from knowing the factors relevant at any point in the game and how to interpret them.
Testing expertise entails knowing the limitations of testing. Sometimes the answer is not found through testing, but rather through technical reviews of design or code, through technical analysis methods targeted at specific types of bugs, or through a combination of these. Sometimes there is no effective answer, and that must be recognized and communicated as well (e.g., testing the reliability of missile defense systems: at the end of the film "WarGames," the computer determined that the winning strategy in thermonuclear war was NOT TO PLAY).
Various issues in software testing fall at different points on this context spectrum. The hardest decisions involve many parameters and competing weighting schemes. Many people in software don't see testing as a difficult problem--to them, it ranks with checkers. To teach testing, we must first communicate this challenge effectively, but that is just the beginning. Teaching extreme context-sensitive wisdom is very difficult--as in the game of chess. Teaching rules is easier than teaching patterns and appropriate responses. Telling the learner to determine the impact and likelihood of software failure is easier than teaching effective ways to accomplish it. Still more difficult is knowing what to do with the results of this analysis when they are traded off against the many other factors influencing a test strategy.
On any specific project, templates reveal neither the critical issues, nor how to acquire the critical information, nor what to do with it once acquired. Those who understand testing as a game of checkers see test documentation as an exercise in filling in the blanks of a template. Those who understand the chess-like complexities of testing see doc templates as a guide to recording the results of a difficult decision process. Just as a neat mathematical proof does not reveal the often messy way in which the proof was developed, so too the neat compartments of a template do not reveal the difficult process that developed and selected the information. Creating a test plan from a template fails if we treat the sections of the template as variables in an algebraic formula. We cannot simply work our way through each section of the plan and have "the answer" at the end. A template can reveal and organize the important elements; an effective solution, however, requires experienced analysis of those elements.
Perhaps every test doc template should come with a warning label: "Not a Substitute for Thinking."
About the Author David Gelperin ([email protected]) is chairman of Software Quality Engineering (www.sqe.com) in Orange Park, Florida. David has more than thirty years' experience in software engineering with an emphasis on quality control. He has been a programmer, project lead, test lead, quality support manager, test consultant, and instructor.
He chaired the development of both ANSI/IEEE standards on software testing and catalyzed the launch of Software Testing & Quality Engineering magazine. He is chief architect of the Systematic Test & Evaluation Process (STEP™) test methodology, the High-Impact™ technical review methodology, the Unique Cause and Pre-emptive Debugging test strategies, the Testability Support Model, and the Ultra-Understandable Usage (U3) Modeling framework.