
Computing Practices

Design Patterns in Object-Oriented Frameworks

Object-oriented frameworks provide an important enabling technology for reusing software components. In the context of speech-recognition applications, the author describes the benefits of an object-oriented framework rich with design patterns that provide a natural way to model complex concepts and capture system relationships.

Savitha Srinivasan
IBM

Developing interactive software systems with complex user interfaces has become increasingly common, with prototypes often used for demonstrating innovations. Given this trend, it is important that new technology be based on flexible architectures that do not require developers to understand all the complexities inherent in a system. Object-oriented frameworks provide an important enabling technology for reusing both the architecture and the functionality of software components. But frameworks typically have a steep learning curve since the user must understand the abstract design of the underlying framework as well as the object collaboration rules or contracts—which are often not apparent in the framework—prior to using the framework.

In this article, I describe our experience with developing an object-oriented framework for speech recognition applications that use IBM's ViaVoice speech recognition technology. I also describe the benefits of an object-oriented paradigm rich with design patterns that provide a natural way to model complex concepts and capture system relationships.

OBJECT-ORIENTED FRAMEWORKS

Frameworks are particularly important for developing open systems, where both functionality and architecture must be reused across a family of related applications. An object-oriented framework is a set of collaborating object classes that embody an abstract design to provide solutions for a family of related problems. The framework typically consists of a mixture of abstract and concrete classes. The abstract classes usually reside in the framework, while the concrete classes reside in the application. A framework, then, is a semicomplete application that contains certain fixed aspects common to all applications in the problem domain, along with certain variable aspects unique to each application generated from it.

The variable aspects, called hot spots, define those aspects of an application that must be kept flexible for different adaptations of the framework.1 What differentiates one framework application from another in a common problem domain is the manner in which these hot spots are defined.

Despite the problem domain expertise and reuse offered by framework-based development, application design based on frameworks continues to be a difficult endeavor. The framework user must understand the complex class hierarchies and object collaborations embodied in the framework to use the framework effectively. Moreover, frameworks are particularly hard to document since they represent a reusable design at a high level of abstraction implemented by the framework classes.
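To make the notion of hot spots concrete, the following minimal C++ sketch illustrates the usual mechanics: an abstract framework class fixes the common control flow and exposes a pure virtual method as the variable aspect that each application overrides. The class and method names here are hypothetical illustrations only, not part of the speech framework described later in this article.

// Framework side: the fixed aspects live in an abstract base class.
class ReportTask {
public:
    virtual ~ReportTask() {}

    // Fixed, common control flow shared by every application.
    void run() {
        openDocument();
        fillBody();        // hot spot: varies with each adaptation
        closeDocument();
    }

protected:
    // The hot spot: kept flexible for different framework adaptations.
    virtual void fillBody() = 0;

private:
    void openDocument()  { /* common setup */ }
    void closeDocument() { /* common teardown */ }
};

// Application side: a concrete class supplies only the variable aspect.
class RadiologyReportTask : public ReportTask {
protected:
    virtual void fillBody() { /* application-specific report content */ }
};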


But design patterns—recurring solutions to known problems—can be very helpful in alleviating software complexity in the domain analysis, design, and maintenance phases of development. Framework designs can be discussed in terms of design pattern concepts, such as participants, applicability, consequences, and trade-offs, before examining specific classes, objects, and methods.

Documenting a framework—on paper or in the code itself—as a set of design patterns is an effective means of achieving a high level of communication between the framework designer and the framework user.2 Using design patterns helped us establish a common terminology with which we could discuss the design and use of the framework.

DESIGN PATTERNS

Design patterns are descriptions of communicating objects and classes that are customized to solve a general design problem in a particular context.2 By their very definition, design patterns result in reusable object-oriented design because they name, abstract, and identify key aspects of a common design structure. Design patterns fall into two groups. The first group focuses on object-oriented design and programming or object-oriented modeling, while the second group—a more recent trend—focuses on patterns that address problems in efficient, reliable, scalable, concurrent, parallel, and distributed programming.3 In this article, I focus primarily on the first group, although my colleagues and I used patterns from both categories to address our design problems.

In designing speech-recognition applications, we used patterns to guide the creation of abstractions necessary to accommodate future changes and yet maintain architectural integrity. These abstractions help decouple the major components of the system so that each may vary independently, thereby making the framework more resilient.

In the implementation stages, the patterns helped us achieve reuse by favoring object composition or delegation over class inheritance, decoupling the user interface from the computational component of an application, and programming to an interface as opposed to an implementation.

In the maintenance phases, patterns helped us document strategic properties of the software at a higher level than the source code.

Contracts

Design patterns describe frameworks at a very high level of abstraction. Contracts, on the other hand, can be introduced as explicit notation to specify the rules that govern how objects can be combined to achieve certain behaviors.4 However, frameworks don't enforce contracts; if an application does not obey the contracts, it does not comply with the intended framework design and will very likely behave incorrectly. To make these relationships more clear, a number of researchers have introduced the concept of motifs to document the purpose and use of the framework in light of the role of design patterns and contracts.5,6

A FRAMEWORK FOR SPEECH RECOGNITION

We designed a framework for speech recognition applications intended to enable rapid integration of speech recognition technology in applications using IBM's ViaVoice speech recognition technology (http://www.software.ibm.com/speech/). Our primary objective was to simplify the development of speech applications by hiding complexities associated with speech recognition technology and exposing the necessary variability in the problem domain so the framework user may customize the framework as needed.

We focused on the abstractions necessary to identify reusable components and on the variations necessary to provide the customizations required by a user to create a specific application. While this method adheres to the classical definition of what a framework must provide, we consciously incorporated feedback into the framework evolution process.

Speech concepts and complexities

At a simplistic level, the difference between two speech recognition applications boils down to what specific words or phrases you say to the application and how the application interprets what you say. What an application understands is of course determined by what it is listening for—or its active vocabulary. Constraining the size of the active vocabulary leads to higher recognition accuracy, so applications typically change their active vocabulary with a change in context.

The result that the speech recognition engine returns in response to a user's utterance is a recognized word. In the context of a GUI application, the active vocabulary and the recognized words may be different for each window and may vary within a window, depending on the state of the application.

Several factors contribute to complexity in developing speech recognition applications. Recognition technology is inherently asynchronous and requires adhering to a well-defined handshaking protocol. Furthermore, the asynchronous nature of speech recognition makes it possible for the user to initiate an action during an application-level task, so that the application must either disallow or defer any action until all previous speech engine processing can be completed.


Also, speech-programming interfaces typically require a series of calls to accomplish a single application-level task. And achieving high accuracy requires that the application constantly monitor the engine's active vocabulary. The uncertainty associated with high-accuracy recognition forces the application to deal with recognition errors. The recognition engine communicates with applications at a process level and carries no understanding of GUI windows or window-specific vocabularies. The application must therefore build its own infrastructure to direct and dispatch messages at the window level.

Framework abstractions and hot spots

Common application functions include establishing a recognition session, defining and enabling a pool of vocabularies, ensuring a legal engine state for each call, and directing recognized word messages and engine status messages to different windows in the application. The application needs explicit hot-spot identification to handle variations such as the active window, the active vocabulary in a specific active window, the recognized words received by the active window based on the active vocabulary, and the action that an active window must take based on the recognized word.

Figure 1 shows a high-level view of the layered framework architecture that we adopted in order to achieve maximum flexibility and reuse. The speech recognition engine and speech application are separate processes. The first layer, the core framework, encapsulates the speech engine functionality in abstract terms, independent of any specific GUI class library. A GUI speech recognition application usually involves the use of a GUI class library; therefore, the second layer extends the core framework for different GUI environments to provide tightly integrated, seamless speech recognition functionality.

Figure 1. The speech recognition engine and speech application are separate processes. The first layer, the core framework, encapsulates the speech engine functionality in abstract terms, independent of any specific GUI class library. The second layer extends the core framework for different GUI environments in order to provide tightly integrated, seamless speech recognition functionality.

Each GUI speech application requires customizing the GUI extension for the core framework. We modeled the common aspects of the problem domain by abstract classes in the core framework; the application implements concrete classes derived from the abstract classes. The class diagrams in the following sections describe the extensions to the core framework for IBM's VisualAge GUI class library. We use the prefix "I" as a naming convention for the GUI extension classes. For example, the class ISpeechClient refers to the GUI framework extension class that provides the core SpeechClient class functionality. We use the Unified Modeling Language (UML)7 notation generated by Rational Rose to represent the class diagrams.

Framework collaborations

The class diagram in Figure 2 shows the eventual interface classes for the VisualAge extension to the core framework classes. (The framework evolved over time as we developed various applications.) The IVocabularyManager, ISpeechSession, ISpeechClient, ISpeechText, and ISpeechObserver classes provide the necessary abstractions to direct, dispatch, and receive speech recognition engine events at a level that is meaningful to the application. The ISpeechSession class must establish a recognition session through ISpeechManager prior to any speech functions starting.

ISpeechClient, ISpeechText, and ISpeechObserver provide the necessary abstractions for implementing a GUI application, but do not contain a GUI component. An application creates a speech-enabled GUI window using multiple inheritance from the ISpeechClient class and a VisualAge window class such as IFrameWindow. This inheritance provides speech functionality and speech engine addressability at a GUI window level so that recognized words can be directed to a particular GUI window.

Similarly, an application creates a speech-enabled text widget using multiple inheritance from the ISpeechText class and a VisualAge text-widget class such as IEditControl. This process provides speech functionality and speech engine addressability at a text-widget level. Derived ISpeechObserver classes provide a publish-subscribe protocol necessary to maintain a consistent speech recognition engine state across all windows in the application. The IVocabularyManager class supports the dynamic definition and manipulation of vocabularies. The collaborations of these classes are internally supported by the ISpeechManager class, which behaves in the manner of a Singleton pattern2 (discussed later) that makes the class itself responsible for keeping track of its sole instance.

Figure 3 shows one of the primary hot spots in the system, the processing of words recognized by the active ISpeechClient. DictationSpeechClient is a concrete class derived from the abstract ISpeechClient class.
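The multiple-inheritance approach just described can be sketched roughly as follows. The names ISpeechClient, IFrameWindow, DictationSpeechClient, and the pure virtual recognizedWord(IText&) method come from the framework description and Figure 3; the constructor, the helper method, and the member details are hypothetical illustrations rather than the framework's actual code.

// Application class: a speech-enabled GUI window built with multiple
// inheritance. ISpeechClient supplies speech engine addressability;
// IFrameWindow supplies the VisualAge GUI window behavior.
class DictationSpeechClient : public ISpeechClient, public IFrameWindow {
public:
    DictationSpeechClient() : IFrameWindow() {}

    // Hot spot: the framework invokes this pure virtual method when the
    // engine recognizes a word while this client is the active ISpeechClient.
    virtual void recognizedWord(IText& word) {
        appendToReport(word);            // application-specific handling
    }

private:
    void appendToReport(const IText& word) {
        // e.g., add the dictated word to the report being composed
    }
};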


Figure 2. Interface classes for the VisualAge extension to the core framework classes. The IVocabularyManager, ISpeechSession, ISpeechClient, ISpeechText, and ISpeechObserver classes provide the necessary abstractions to direct, dispatch, and receive speech recognition engine events at a level that is meaningful to the application. The prefix "I" signals a GUI extension class.

The speech engine invokes the pure virtual method, recognizedWord(), when it recognizes a word; this gives DictationSpeechClient the ability to process the recognized word in an application-specific manner. This process works as designed only if the application obeys the accompanying contract, which states that before a user speaks into the microphone (which invokes the recognizedWord() method), the application must specify the active ISpeechClient and its corresponding vocabularies.

Figure 3. A hot spot that processes words recognized by the active ISpeechClient. DictationSpeechClient is a concrete class derived from the abstract ISpeechClient class. The speech engine invokes the pure virtual method, recognizedWord(), when it recognizes a word; this gives DictationSpeechClient the ability to process the recognized word in an application-specific manner.

A GUI application might contain several windows, each of which is a multiple-inheritance-derived instance of ISpeechClient and IFrameWindow. In a case like this, the framework user must at all times keep track of the currently active ISpeechClient and ensure that the correct one is designated as being active. Likewise, enabling the appropriate vocabulary based on the active ISpeechClient is the framework user's responsibility. One way to do this is to designate the foreground window as the active ISpeechClient. The application must track changes in the foreground window and designate the appropriate derived ISpeechClient as active. This means that the application must process the notification messages sent by the IFrameWindow class when the foreground window changes and must also update the active ISpeechClient and vocabulary. Only then will the implementation of the hot spot in Figure 3 ensure delivery of the recognized words to the correct window by calling the active ISpeechClient's recognizedWord() method. (A minimal sketch of this bookkeeping appears below.)

FRAMEWORK EVOLUTION THROUGH REUSE

We used a single framework to build four applications: MedSpeak/Radiology, MedSpeak/Pathology, the VRAD (visual rapid application development) environment, and a socket-based application providing speech recognition functionality to a video-cataloging tool. Each application we developed had a particular effect on the development and evolution of the framework itself. We used some well-known design patterns to facilitate reuse of design and code and to decouple the major components of the system. Table 1 summarizes some of the design patterns we used and the reasons we used them.

Initially, we used the Facade pattern to provide a unified, abstract interface to the set of interfaces supported by the speech subsystem. The core abstract speech classes shown in Figure 2 communicate with the speech subsystem by sending requests to the Facade object, which forwards the requests to the appropriate subsystem objects. Facade implements the Singleton pattern, which guarantees a sole instance of the speech subsystem for each client process. As the framework evolved, we found ourselves incorporating new design patterns to extend the framework's functionality and manageability.

Radiology dictation application

We designed the MedSpeak/Radiology dictation application—the first framework application we created—to build reports for radiologists using real-time speech recognition.8 We ourselves were the developers and users of the framework, which vastly contributed to our understanding of what the framework requirements were and helped us find a balance between encapsulating problem domain complexities and exposing the variability needed in an application.
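As a rough illustration of the bookkeeping contract described above, the sketch below shows one way an application might keep the active ISpeechClient and its vocabulary in step with the foreground window. Only ISpeechClient, IFrameWindow, ISpeechSession, and recognizedWord() are names from the framework; the handler name, setActiveSpeechClient(), enableVocabulary(), and the vocabulary string are hypothetical names invented for illustration.

// Hypothetical sketch: satisfying the contract that the active ISpeechClient
// and its vocabulary are designated before recognized words are delivered.
class ReportWindow : public ISpeechClient, public IFrameWindow {
public:
    // Assume the GUI library notifies the window when it comes to the
    // foreground; the handler name and the session methods are illustrative.
    void onForegroundGained(ISpeechSession& session) {
        session.setActiveSpeechClient(*this);         // designate active client
        session.enableVocabulary("report-commands");  // enable its vocabulary
    }

    // Recognized words arrive here only while this window is the
    // active ISpeechClient.
    virtual void recognizedWord(IText& word) { /* handle the word */ }
};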


Table 1. Design patterns used in building the speech-recognition framework.

Object-oriented patterns
- Adapter2 (or Wrapper): Enable the use of existing GUI classes whose interface did not match the speech class interface; create a reusable class that cooperates with unrelated GUI classes that don't necessarily have compatible interfaces.
- Facade2: Provide an abstract interface to a set of interfaces supported by the speech subsystem; abstract speech concepts to facilitate the use of a different speech recognition technology.
- Observer2: Notify and update all speech-enabled GUI windows in an application when speech engine changes occur.
- Singleton2: Create exactly one instance of the speech subsystem interface class per application since the recognition system operates at the process level.

Distributed or concurrent patterns
- Active Object9: Decouple method invocation from method execution in the application when a particular word is recognized.
- Asynchronous Completion Token12: Allow applications to efficiently associate state with the completion of asynchronous operations.
- Service Configurator10: Decouple the implementation of services from the time when they are configured.

Specifically, it indicated to us that we needed a clear separation between the GUI components and the speech components, so we defined the concept of a SpeechClient to abstract window-level speech behavior.

We used the class form of the Adapter (or Wrapper) pattern to enable the use of existing GUI classes whose interfaces did not match the application's speech class interfaces.2 For example, we used Adapter to create a speech-enabled GUI window that multiply inherits from both an abstract SpeechClient class and a specific GUI window class—in this first application, the XVT GUI class library. Thus, we were able to encapsulate speech-aware behavior in abstract framework classes so that GUI classes did not have to be modified to exhibit speech-aware behavior.

Finally, we used the Active Object9 and Service Configurator10 patterns to further decouple the GUI implementation from the speech framework. Using the principle embodied in the Active Object pattern, we separated method definition from method execution through the use of a speech profile for each application, which in essence is the equivalent of a lookup table. This delayed the binding of a speech command to an action from compile time to runtime and gave us tremendous flexibility during development.

Similarly, the Service Configurator pattern decouples the implementation and configuration of services, thereby increasing an application's flexibility and extensibility by allowing its constituent services to be configured at any point in time. We implemented this pattern again using the speech profile concept, which set up the configuration information for each application (initial active vocabularies, the ASCII string to be displayed when the null word or empty string is returned, and so forth).

Pathology dictation application

MedSpeak/Pathology was similar to the radiology dictation application, but customized for pathology workflow. We worked closely with the framework users in extending the core framework for a GUI environment different from that of the previous application—namely, the VisualAge GUI class library discussed earlier in the "Framework collaborations" section. The ease with which we accomplished this porting effort validated our Adapter approach to providing speech functionality in GUI-independent classes.

However, it brought out an additional requirement stemming from the specific GUI builder that was being used: the class form of the Adapter pattern, which uses multiple inheritance, was unacceptable. We therefore modified our usage of the Adapter pattern to the object form (using composition) to accomplish the same result. Instead of defining a class that inherits both MedSpeak/Pathology's SpeechClient interface and VisualAge's IFrameWindow implementation, for example, we added an ISpeechClient instance to IFrameWindow so that requests for speech functionality in IFrameWindow are converted to equivalent requests in ISpeechClient.

Furthermore, the need for an abstract SpeechText class to provide speech-aware edit control functionality became apparent when we realized the complexity associated with maintaining the list of dictated words. Likewise, the problems associated with maintaining the speech engine state across different windows led to the definition of the SpeechObserver class using the Observer pattern. The Observer pattern defines an updating interface for objects that need to be notified of changes in a subject.2
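The object form of the Adapter described above can be sketched as follows. IFrameWindow and ISpeechClient are the class names used in the article; the wrapper class, its member name, and the forwarding method are hypothetical illustrations of composition-based forwarding rather than the framework's actual interface.

// Object-form Adapter: instead of inheriting from ISpeechClient, the window
// holds an ISpeechClient instance and forwards speech requests to it.
class SpeechEnabledFrame : public IFrameWindow {
public:
    SpeechEnabledFrame(ISpeechClient* speechClient)
        : fSpeechClient(speechClient) {}

    // A speech request made of the window is converted into an
    // equivalent request on the contained ISpeechClient.
    void handleRecognizedWord(IText& word) {
        fSpeechClient->recognizedWord(word);
    }

private:
    ISpeechClient* fSpeechClient;   // adaptee supplied by the application
};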



The VRAD environment

The visual rapid application development (VRAD) class of applications that used our framework provided us with insight into our original objectives, which included hiding speech complexities and providing variability. We planned that the speech framework classes would be included in an all-encompassing object-oriented programming framework, the VisualAge class library. The previous pathology dictation application was one instance of an application that used the VisualAge class library together with the speech framework.

The greatest impediment to the acceptance of the framework by the VRAD users was that the public interface by itself didn't adequately provide the understanding necessary to create speech applications rapidly. For example, the variability provided by the hot spot in Figure 3 requires that the application enforce the designation of the correct ISpeechClient as well as the active vocabulary. Users had to turn to the framework documentation in order to understand the accompanying contracts. Furthermore, our use of the class form of the Adapter pattern, which uses multiple inheritance, forced the users to understand the behavior of the parent classes. This was additional motivation for us to favor the object form of the Adapter pattern.

We therefore modified the public interface in an effort to reflect the client's view, while still ensuring that the necessary contracts were enforced. To accomplish this, we created additional hot spots that helped framework users comply with the internal constraints not ordinarily visible in the external interface but apparent on reading the documentation. The actual implementation of the additional hot spots adheres to standard framework practice of maintaining invariants within the framework and forcing the clients to override abstract methods.

SpeechSession. Figure 4 shows an implementation of the first user-identified hot spot, in which the active ISpeechClient must be explicitly specified by the framework user. The new hot spot requires the framework user to create a concrete AppSpeechSession class derived from an abstract ISpeechSession class. SpeechSession hides from the user the speech subsystem's Facade object, which was previously a public framework class. To implement AppSpeechSession, the framework user must define the pure virtual method, getSpeechClient(), which the framework invokes to get the active ISpeechClient. This hot spot necessarily draws the framework user's attention to the fact that an active ISpeechClient must be specified prior to beginning the recognition process, which alleviates the problem of designating the active ISpeechClient when the application starts. However, the behavior of updating the active ISpeechClient when the foreground window changes may also be implemented by the same hot spot. The creation of a new abstract class that derives from ISpeechClient and IFrameWindow can encapsulate the change in foreground window notification from the IFrameWindow class and invoke the getSpeechClient() method.

Figure 4. User-identified hot spot: Which ISpeechClient is active? The new hot spot requires the framework user to create a concrete AppSpeechSession class derived from an abstract ISpeechSession class. This hot spot necessarily draws the framework user's attention to the fact that an active ISpeechClient must be specified prior to beginning the recognition process.

SpeechClient. Figure 5 shows an implementation of the second user-identified hot spot. When a particular ISpeechClient is active, the appropriate vocabulary must be activated. Enforcing this rule by invoking a pure virtual method, getVocabulary(), in the active ISpeechClient ensures that the active vocabulary gets updated by the framework user for each new active ISpeechClient. This method returns the right vocabulary based on the state of that particular ISpeechClient. Here again, this hot spot forces the framework user to specify the active vocabulary prior to beginning the recognition process, which also ensures that there are no surprises regarding active vocabularies and recognized words received by the active ISpeechClient.

Figure 5. User-identified hot spot: Which vocabulary is active? When a particular ISpeechClient is active, the appropriate vocabulary must be activated. Enforcing this rule by invoking a pure virtual method, getVocabulary(), in the active ISpeechClient ensures that the active vocabulary gets updated by the framework user for each new active ISpeechClient.

Socket interface applications

The socket interface class of applications was different from the others because we used a specific framework application to provide speech recognition services to numerous non-C++ client applications. We built a VRAD application and extended the framework to support a socket interface to read and write operations on a socket connection, thus making irrelevant the runtime environment of the actual speech application.
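Taken together, the two user-identified hot spots of Figures 4 and 5 might be implemented along the following lines. The class names ISpeechSession, AppSpeechSession, ISpeechClient, and DictationSpeechClient and the pure virtual methods getSpeechClient() and getVocabulary() come from the figures; the member variables, constructor, helper method, and return conventions are hypothetical.

// Hot spot 1 (Figure 4): the framework asks the application which
// ISpeechClient is currently active.
class AppSpeechSession : public ISpeechSession {
public:
    AppSpeechSession(ISpeechClient& initialClient)
        : fActiveClient(&initialClient) {}

    void setActiveClient(ISpeechClient& client) { fActiveClient = &client; }

    // Invoked by the framework to obtain the active ISpeechClient.
    virtual ISpeechClient& getSpeechClient() { return *fActiveClient; }

private:
    ISpeechClient* fActiveClient;
};

// Hot spot 2 (Figure 5): each concrete client reports the vocabulary that
// must be active whenever it becomes the active ISpeechClient.
// (Other members of DictationSpeechClient are omitted here.)
class DictationSpeechClient : public ISpeechClient {
public:
    virtual IText getVocabulary() {
        return IText("radiology-report");   // depends on this client's state
    }
};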


Table 2. Framework applications summary.

Radiology dictation
- External framework classes: SpeechClient
- Required extensions: Yes
- Employed patterns: Active Object, Adapter, Asynchronous Completion Token, Facade/Singleton, Service Configurator
- Speech functions: Dictation, Command
- Speech functionality: 75 percent in core framework, 25 percent in the GUI extensions

Pathology dictation
- External framework classes: SpeechClient, SpeechText, SpeechObserver
- Required extensions: Yes
- Employed patterns: Active Object, Adapter, Facade/Singleton, Observer, Service Configurator
- Speech functions: Dictation, Command
- Speech functionality: 80 percent in core framework, 20 percent in the GUI extensions

VRAD applications
- External framework classes: SpeechSession, SpeechClient, SpeechText, SpeechObserver, VocabularyManager
- Required extensions: Yes
- Employed patterns: Active Object, Adapter, Facade/Singleton, Service Configurator
- Speech functions: Dictation, Command, Grammars
- Speech functionality: 90 percent in core framework, 10 percent in the GUI extensions

Socket applications
- External framework classes: SpeechSession, SpeechClient, SpeechText, SpeechObserver, VocabularyManager
- Required extensions: No
- Employed patterns: Active Object, Adapter, Asynchronous Completion Token, Facade/Singleton, Observer, Service Configurator
- Speech functions: Command, Grammars
- Speech functionality: 97 percent in core framework, 3 percent in the GUI extensions

The relative ease with which we were able to add a socket-based string interface for speech recognition in a new SocketSpeechClient class only reinforced our Adapter approach in the framework.

BUILDING RESILIENCE

Table 2 summarizes different aspects of our experience with the various framework applications we developed. With each new application, the number of framework interface classes grew to encapsulate distinct functional areas, which exemplifies the principle of modularizing to conquer complexity. But we didn't capture all abstractions in the problem domain during the initial analysis phase. In fact, some framework applications led to the creation of new speech interface abstractions necessary in the analysis and design phases.

As a result, though, the framework realized more design patterns, allowing for greater resilience. We were able to abstract more domain knowledge into the core framework, which minimized the required GUI-specific extensions. The last rows in Table 2 summarize the breakdown of code split between the core framework and the GUI extensions to the framework. Figure 6 shows the movement of the classes from the application/GUI extension to the core framework, which results in the 97 to 3 percent split from the original 75 to 25 percent.

MOTIVATED AND UNMOTIVATED USERS

The unmotivated user's perspective helped further abstract problem domain concepts. Figure 6 shows an increasing abstraction of speech concepts into the core framework classes with each new application. It shows the change in distribution of functionality between the core framework classes, GUI-specific extensions, and the application as the framework evolved.

The first framework application, MedSpeak/Radiology, forced the framework user to understand and create SpeechObserver and VocabularyManager concepts. By the second application, we realized that these concepts were sufficiently abstract to be encapsulated within the framework. At the end of the third iteration, where we faced the most critical users, we had further abstracted SpeechObserver and VocabularyManager classes into the core framework. This was largely driven by an attempt to alleviate the user's confusion in understanding the collaborations between the VocabularyManager, SpeechObserver, and SpeechSession classes.

The unmotivated framework user's perspective can also contribute to the evolution to a black-box framework from a white-box one.11 Our original white-box framework required an understanding of how the GUI extensions to the core framework classes work, so that correct subclasses could be developed in the application. The VRAD user's reluctance to accept this forced us to favor composition over inheritance in the application. We continued to use inheritance within the framework to organize the classes in a hierarchy, but use composition in the application, which allows maximum flexibility in development.

REDUCING THE LEARNING CURVE WITH HOT SPOTS

Additional hot spots created as a result of the framework user's perspective play an important role in reducing the learning curve associated with a framework. This conclusion appears somewhat paradoxical because hot spots are typically implemented in frameworks using object inheritance or object composition,1,2 and both suffer from severe limitations in understandability.

A classic problem in comprehending the architecture of a framework from its inheritance hierarchy is that inheritance describes relationships between classes, not objects.


Figure 6 (caption below) consists of three layered class diagrams. In (a) MedSpeak/Radiology, SpeechObserver and VocabularyManager sit at the application level, SpeechText in the XVT extensions to the core framework, and SpeechClient in the core framework. In (b) MedSpeak/Pathology, VocabularyManager and SpeechSession sit at the application level, SpeechObserver in the VisualAge extensions, and SpeechClient and SpeechText in the core framework. In (c) the VRAD applications, only SpeechSession remains in the VisualAge extensions, while SpeechClient, SpeechText, SpeechObserver, and VocabularyManager all reside in the core framework.

Figure 6. Increasing abstraction of problem domain concepts with each new application. Each application requires a slightly different architecture: (a) MedSpeak/Radiology application, (b) MedSpeak/Pathology application, (c) VRAD applications. The Socket application (not shown) simply adds a socket interface class on top of the SpeechSession class as an extension to the VRAD framework classes.

Object composition achieves more flexibility because it delegates behavior to other objects that can be dynamically substituted. However, it does not contribute to the understanding of the architecture either, since the object structures may be built and modified at runtime. Despite this, we believe that the hot spots contribute to reducing the learning curve. The primary hot spots address high-level variability, but require an understanding of the underlying contracts and related framework architecture.

Designing for variability appears to be an important facet of the user perspective, too. Applications can be made to obey framework contracts, where previously contracts were merely structured documentation techniques to formalize object collaborations. And since variability is often implemented using design patterns, we were able to achieve an important goal: a common vocabulary for communicating with the framework users. The additional hot spots generated by the user's perspective complement the primary ones and result in a design that necessarily exposes the object collaborations in the framework.

Arguably, the new hot spots identified as the framework evolved could have been identified in the original design. We contend that as domain experts, even though we abstract the common aspects and separate them from the variable aspects, we do so at a sufficiently high level that we mandate a minimum understanding of built-in object collaborations in the framework. Originally, we had identified the hot spot addressing the flexibility required to process recognized words within an application. But this works only if the user understands the underlying collaborations between vocabulary objects and speech client objects documented in a contract. Some of the less glaring hot spots—such as the active vocabulary and active speech client—are apparent as hot spots only when we view the framework from a framework user's perspective.

The level of reuse and the productivity gains summarized in Table 2 were obtained through informal user testing and by evaluating the number of lines of code written and the amount of time spent to develop a new framework application as the framework evolved.

Ultimately, using a common set of abstractions across the analysis, design, and implementation phases and the use of design patterns to capture these abstractions led to better communication and a more efficient technique for handling changes as the framework evolved. Design patterns helped us more effectively communicate the internal framework design and made us less dependent on the documentation. ❖

Acknowledgment
This framework was originally developed as part of the MedSpeak/Radiology product with John Vergo at the IBM T.J. Watson Research Center.

References
1. W. Pree, Design Patterns for Object-Oriented Software Development, Addison-Wesley, Reading, Mass., 1994.
2. E. Gamma et al., Design Patterns: Elements of Reusable Object-Oriented Software, Addison-Wesley, Reading, Mass., 1995.


3. D.C. Schmidt, R.E. Johnson, and M. Fayad, "Software Patterns," Comm. ACM, Oct. 1996, pp. 36-39.
4. R. Helm, I.M. Holland, and D. Gangopadhyay, "Contracts: Specifying Behavioral Compositions in Object-Oriented Systems," Proc. OOPSLA 90, ACM Press, New York, 1990, pp. 169-180.
5. R.E. Johnson, "Documenting Frameworks Using Patterns," Proc. OOPSLA 92, ACM Press, New York, 1994, pp. 63-76.
6. R. Lajoie and R. Keller, "Design and Reuse in Object-Oriented Frameworks: Patterns, Contracts, and Motifs in Concert," Proc. Colloquium on Object Orientation in Databases and Software Engineering, World Scientific, River Edge, N.J., 1994, pp. 295-312.
7. G. Booch, Object Oriented Analysis and Design, Benjamin/Cummings, Los Angeles, 1994.
8. J. Lai and J. Vergo, "MedSpeak: Report Creation with Continuous Speech Recognition," Proc. CHI 97, ACM Press, New York, 1997, pp. 431-438.
9. D.C. Schmidt and G. Lavender, "Active Object: An Object for Concurrent Programming," Proc. Second Pattern Languages of Programs Conf., Addison-Wesley, Menlo Park, Calif., 1995.
10. P. Jain and D. Schmidt, "Service Configurator: A Pattern for Dynamic Configuration of Services," Proc. Third Usenix Conf. Object-Oriented Technology and Systems, Usenix, Berkeley, Calif., 1997.
11. D. Roberts and R.E. Johnson, "Evolving Frameworks: A Pattern Language for Developing Object-Oriented Frameworks," Pattern Languages of Program Design 3, R. Martin, D. Riehle, and F. Buschmann, eds., Addison Wesley Longman, Reading, Mass., 1998.
12. T. Harrison, D. Schmidt, and I. Pyarali, "Asynchronous Completion Token," Pattern Languages of Program Design 3, R. Martin, D. Riehle, and F. Buschmann, eds., Addison Wesley Longman, Reading, Mass., 1998.

Savitha Srinivasan is a software engineer in the Visual Media Group at the IBM Almaden Research Center in San Jose, California. Her current research interests include using speech recognition and other synergistic techniques, while practicing object-oriented technology. She received an MS degree in computer science from Pace University.

Contact Srinivasan at [email protected].
