
A CONCEPTUAL MODEL AND PROTOTYPE FOR

A CASE-BASED ADAPTIVE ANALYST SUPPORT SYSTEM

by

WILLIAM H. GWINN, B.A., M.S.

A DISSERTATION

IN

BUSINESS ADMINISTRATION

Submitted to the Graduate Faculty of Texas Tech University in Partial Fulfillment of the Requirements for the Degree of

DOCTOR OF PHILOSOPHY

Approved

Accepted

May, 1999

Copyright 1999, William H. Gwinn

ACKNOWLEDGMENTS

I would like to take this opportunity to thank my wife, Amie G. Gwinn, for her encouragement and support throughout this endeavor. I express my gratitude to Dr. Surya B. Yadav, my dissertation chair, for his guidance and extreme patience over the long haul. I also thank my dissertation committee, Dr. Ralph R. Bravoco, Dr. Glenn J. Browne, and Dr. Rich L. Sorenson, for their pertinent comments and recommendations, and Dr. Nirup Menon and Mr. Harold Webb, T.A., for their comments on improving the clarity of the screens which comprise the user interface for the prototype system. A special thanks to Ms. Barbi Dickensheet, Thesis Coordinator of the Texas Tech Graduate School, for her preliminary and final dissertation review and helpful tips along the way.

TABLE OF CONTENTS

ACKNOWLEDGMENTS

ABSTRACT

LIST OF TABLES

LIST OF FIGURES

CHAPTER

I. INTRODUCTION
   Background
      Requirement Definition
      Requirement Determination Process
      Importance of Fact Gathering and Analysis Activities
      The Nature of Fact Gathering Activities
      Why an Analyst Needs Support
      Nature of Fact Gathering and Analysis Support
   Problem Statement
   Research Questions
   Research Objectives
   Research Deliverables
   Significance of the Research
   Structure of the Dissertation

II. LITERATURE REVIEW
   Introduction
   Works on Requirements Determination Processes and Analyst Activities
      G. B. Davis
      Yadav
      Alan M. Davis
      Byrd, Cossick, and Zmud
   Works on Expert Analyst Assistants
      Systems Analysis Expert Aide (SYS-AIDE)
      Analyst Assistant
      Analyst Assist
      Expert Modeling Support System
      Dalal/Yadav EMSS Modification
   Works on Case-Based Learning
      CYRUS
      PLEXUS
      CABSYDD
   Summary

III. RESEARCH METHODOLOGY
   Introduction
   Formulating the Problem
   Constructing the Knowledge Level Principles
   Constructing the Symbol Level Principles
   Developing the Prototype System
   Evaluating and Validating the System
   A Validation Framework
   Summary

IV. CONCEPTUAL DEVELOPMENT OF A CASE-BASED ADAPTIVE ANALYST SUPPORT SYSTEM
   Introduction
   Fact Gathering Activities
   System Behavior
   Knowledge Level Concepts and Principles
      Knowledge Level Concepts
      Knowledge Level Principles
   Requirement Specification
      Requirement Set
   A Conceptual Model of a CAASS
      The Dialog Management Subsystem (DMS)
      The Fact Gathering Coordinator Subsystem (FGCS)
      The Organization Base Management Subsystem (OBMS)
      The Case Base Management Subsystem (CBMS)
      The Domain Knowledge Base Management Subsystem (DKBMS)
      The Basic Knowledge Base Management Subsystem (BKBMS)
      The Resource Base Management Subsystem (RBMS)
   Summary

V. SYMBOL LEVEL FOR A CASE-BASED ADAPTIVE ANALYST SUPPORT SYSTEM (CAASS)
   Introduction
   Symbol Level Concepts and Principles
      Symbol Level Concepts
      Symbol Level Principles
   Symbol Level Architecture of CAASS
      Case-Based Reasoning and Learning
      Case Knowledge Frame Base
      Knowledge Frame Base
   Structure
   Summary

VI. DESIGN AND IMPLEMENTATION OF THE PROTOTYPE
   Introduction
   Logical Flow Design
   Implementation Languages
      Visual BASIC 5.0
      The Haley Enterprise Products
   CAASS Prototype Components
      User Interface Module
      System Control Module
      Knowledge Frame Base
      Case Knowledge Frame Base
      Executables
   Summary

VII. VERIFICATION AND VALIDATION
   Introduction
   Conceptual Model Verification
   Prototype Verification
      Verifying the Input Capability and User Interface
      Verifying the Knowledge Base
      Verifying Case-Based Learning
      Verifying Case-Based Retrieval
      Verifying Template Modification
   Conceptual Model Validation
   Prototype Validation
   Summary

VIII. EXPERIMENTAL DESIGN
   Introduction
   Experiment Design
      Group Selection
      Testing Instrument
      Experiment Layout and Model
      Hypothesis Testing
   User Survey
   Summary

IX. RESULTS AND CONCLUSIONS
   Introduction
   Experimental Results
      Analysis of Variance (ANOVA)
      User Survey Results
   Conclusions
   Summary

X. RESEARCH CONTRIBUTIONS, LIMITATIONS, AND FUTURE RESEARCH
   Introduction
   Research Deliverables
   Research Contributions
   Research Limitations
   Future Work

REFERENCES

APPENDIX: ORGANIZATIONAL CASE FACT LISTS FOR MEDIA TECHNOLOGY SERVICES AND HOMEOWNERS OF AMERICA

ABSTRACT

Few researchers have addressed the question of how information system requirements should be derived. The rapidly changing needs of increasingly complex organizations are pressuring the analyst to rapidly produce information requirements. Thus, the analyst needs the capability to rapidly acquire, organize, and analyze organizational facts from which information requirements are derived.

This research concerns the development of an adaptive analyst support system to assist the analyst with the gathering and managing of organizational facts. A check-list for analyst fact gathering activities is suggested. The knowledge needs, conceptual model, and architecture for a case-based adaptive analyst support system are developed, and a prototype support system is implemented. This tool provides a means for an analyst to rapidly recall facts and information requirements from previously analyzed organizations and adapt the recalled information to current organizational needs.

The prototype demonstrates the feasibility of a case-based approach to an adaptive support system. The implemented prototype's adaptability is demonstrated by the growth of its case-base with repeated use.

The primary contribution of this research is to provide the MIS community with a new analysis tool. The research describes the tool's ability to gather, organize, store, recall, and adapt organizational facts to a current situation rapidly and efficiently. This enhances the analyst's ability to rapidly produce information requirements.

LIST OF TABLES

4.1 Fact Gathering Activities

4.2 System Functions

4.3 System Functions and Supported Activities

5.1 Mapping Conceptual Model to Symbol Level Components

9.1 Responses to CAASS End-User Survey

9.2 Question 12 Responses

LIST OF FIGURES

2.1 Dalal and Yadav EMSS

3.1 A Validation Framework for CAASS

4.1 Knowledge Level Concepts

4.2 CAASS Conceptual Model

5.1 Frame and Slots

5.2 Frame Network

5.3 Symbol Level Architecture for CAASS

5.4 Structure Chart Symbols

5.5 CAASS Structure Chart

5.6 Begin New Case

5.7 Adapt Check-List

5.8 Resume Existing Case

5.9 Learn Case, Perform Case Matching, and Analyze Case Facts

6.1 CAASS System Control Module Logical Diagram

6.2 System Control Module Case Comparison Process

6.3 System Control Module Inference Engine Processes

6.4 System Control Module Case-Based Reasoner and General Utility Processes

6.5 User Interface Module Check-List and Print Check-List Processes

6.6 User Interface Module Display Info and Begin New Case Processes

6.7 User Interface Module Build Current Case, Resume Case, and Display Current Case Processes

6.8 User Interface Module Modify Case Slots and Display Match Processes

6.9 Screen Hierarchy

6.10 Opening Screen

6.11 Information Screen

6.12 Begin New Case Screen

6.13 Check-List Screen

6.14 Parameters and Objectives Screen

6.15 Strategy and Properties Screen

6.16 Identify Entities Screen

6.17 Functions and Processes Screen

6.18 Case Comparison Screen

6.19 Case Comparison Screen with Case Suggested

7.1 A Validation Framework

8.1 Experiment Layout

8.2 CAASS User Survey Questionnaire

CHAPTER I

INTRODUCTION

Background

One of the most difficult tasks in developing information systems is the determination of an organization's information requirements. Requirement specifications should deal with three basic questions (Yadav and Chand, 1989):

1. What should requirements be?

2. How should requirements be derived?

3. How should requirements be stated?

The first and third questions address the actual content of each requirement and the form used to state the content of each requirement. The second question is concerned with the processes used to determine the contents of the requirements. Although a number of professionals have discussed the first and third questions, few researchers have addressed the second question (Yadav and Chand, 1989).

Requirement Definition

Webster's Dictionary defines requirement as "something required; something demanded or needed" (Webster, 1997, p. 1141). IEEE standard 729 (1983) defines requirement as: "(1) a condition or capability needed by a user to solve a problem or achieve an objective; (2) a condition or capability that must be met or possessed by a system ... to satisfy a contract, standard, specification, or other formally imposed document" (p. 29). This research concerns the process by which the organization's information requirements are derived.

Requirement Determination Process

The requirements determination phase of systems analysis begins with the recognition of an organizational problem that requires a solution and ends when there is a complete description of the external behavior of the system needed to solve the organization's problem. The process seeks to describe what the system should do to support the organization's goals, not how it will do it (Davis, 1990). The requirements determination process is complex because it spans multiple organizational domains. The organization's executive, middle-manager, and user/technician domains are intimately involved in the requirements determination process (Yadav and Chand, 1989).

The process used to complete the requirements determination phase is composed of three steps (Dalal and Yadav, 1989). The first step is to obtain the raw requirements from the organization. These requirements are derived by gathering facts about the organization, recognizing a current information system problem, or realizing the possible benefits to be gained from the application of a new concept or technology to the organization's information system. The second step is to analyze the gathered facts.

During this step, the analyst attempts to resolve conflicts between constraints, set priorities and verify that the identified information requirements support the organization's goals. Finally, the derived information requirements set should be formally documented and communicated. This research concerns the first two steps of the requirements determination process. Of specific interest to this research are the identification and structuring of the fact gathering and analysis activities that should be performed to elicit an organization's information requirements.

Importance of Fact Gathering and Analysis Activities

It is from the analysis of gathered facts about the organization that the analyst forms information requirements. The accuracy and completeness of the facts gathered about the organization's goals and the information needs of the organization's executives, managers, and users directly influence the quality of information system requirements.

The raw facts elicited from these domains often have competing or conflicting priorities.

The analyst should analyze the facts and form an information requirement set that meets all the critical constraints from each of the domains. The skill with which the analyst determines the information requirement set can determine the success or failure of the organization's information system. If the information requirement set is incomplete or lacking in quality, the organization will incur additional costs to modify or implement its information system.

The Nature of Fact Gathering Activities

Currently, the analyst decides what fact gathering methodology to use to gather organizational facts. The success the analyst may achieve in applying the chosen methodology is more the result of an art than of a documented step-by-step process. The analyst must deal with a relatively unstructured and hard-to-manage series of activities. Activities suitable for one organization or one facet of an organization may not be suitable for use in another organization. By performing the fact gathering activities, the analyst should be able to gather information about the user needs in the organization, the organization's goals, its critical success factors, and the current information system's strong points and shortcomings. Unfortunately, user needs are among the most difficult to determine. A problem in information requirements determination is the "business person who does not know what he or she wants and is unable to accurately communicate what little they can figure out" (McClatchy, 1990, p. 34). The rapid changes in today's business environment generate uncertainties that require fact gathering activities that may be applied in situations varying in the degree of uncertainty from little to a great deal.

According to Davis (1982), these fact gathering activities, drawn from the areas of asking the users, deriving facts from an existing information system, performing a synthesis from the characteristics of a subsystem which uses the outputs from the organization's information system, and discovering facts by prototyping a new information system, should be adaptable to fit a wide variety of organizational uncertainty and problem situations. A particular target organization may not require an activity from each of the above areas. The analyst should, from prior experiences with similar organizations, be able to recall those activities which may suit the needs of the target organization.

Why an Analyst Needs Support

A significant portion of an analyst's time is involved in gathering and analyzing organizational facts (Davis, 1982). A clear delineation of fact gathering and analysis activities can help reduce the overall information requirements determination process time and improve analyst productivity (Davis, 1982). The organization's information requirements are extracted from a dynamic internal and external environment. Pressures on the organization from increasing competition in a worldwide marketplace, as well as internal restructuring, have accelerated the changes in the organization's information needs. The increased rate of change in information needs has compounded the problem for an analyst seeking to produce accurate and complete information requirements. The ability of an analyst to elicit facts rapidly and completely from which to generate information requirements has become crucial. Fact gathering and analysis support will help the analyst improve productivity and his or her ability to rapidly produce information requirements.

Nature of Fact Gathering and Analysis Support

Since the analyst is currently left to his/her own devices in gathering and analyzing organizational facts, the derived information requirements reflect the art and experience of the analyst. The fact gathering and analysis activities performed by the analyst provide the basis from which the analyst will determine information requirements. Determining the proper contents of the organization's information requirements is a dynamic problem.

Facts are gathered from multiple human participants in an organization and perhaps from a number of subordinate information systems that provide information to the organization or use information from an integrated organizational information system. As the organization's internal and external environments change, new information characteristics are discovered which should be reflected in the information requirements set (Vitalari and Dickson, 1983). Analysts apply human problem-solving techniques that rely on recalling a prior experience to adapt or match to the current situation (Kolodner, 1993; Riesbeck and Schank, 1989). A tool that could assist the analyst with these tasks is an adaptable analyst support system. An adaptable system "acquires knowledge over time" (Yadav, 1989, p. 2). The support system would learn or acquire facts about organizations as they are analyzed. The system could recall facts for similar organizations analyzed in the past and adapt the facts to the current situation.

Problem Statement

Every analyst progresses through a series of activities in the process of deriving information requirements. A review of the current literature reveals that a number of researchers have suggested methodologies an analyst can use to gather organizational facts. Yet in the actual gathering and analyzing of the facts, the analyst is left on his or her own. The demand for rapid information system development to meet the changing business environment is pressuring the analyst to rapidly produce information requirements. Because of the complexity of organizations and the rapidly changing organizational needs generated by their dynamic internal and external environments, the analyst cannot depend on personal memory alone to provide the fact gathering and analysis activities appropriate for the derivation of information system requirements. A support system could assist the analyst in recalling information requirement sets for similar previously analyzed organizations. A recalled information requirement set would not fit the target organization's needs exactly; the recalled set would need to be adapted to the target organization's needs. This modified set should be remembered, or learned, so that it may be recalled in the future if needed. If no similar organization has been analyzed, a new information requirement set should be created and remembered.

Research Questions

In support of the research objectives, the focus of this work is on the following research questions:

1. What are the fact gathering activities an analyst goes through in the determination of an organization's information requirements?

2. What knowledge should an analyst support system possess to assist during the fact gathering and analysis steps of the information requirements determination phase?

3. What should be the architecture for an adaptive analyst support system?

4. How can we demonstrate the feasibility of an adaptive analyst support system?

Research Objectives

This research addresses the issue of what type of support should be provided to the analyst during the fact gathering and analysis steps of the information requirements determination phase. The primary objective is to conceptually develop an analyst support system to enhance the analyst's abilities to rapidly develop valid information requirements for an organization. Specific objectives are:

1. To develop and detail a concept for an adaptive analyst support system. This will be achieved by specifying the support system requirements and structure.

2. To provide a means of reminding the analyst of various fact gathering activities and providing a way of selecting appropriate fact gathering activities for the target organization.

3. To provide a means by which the support system can adapt previous successful information requirement sets from similar organizations to fit current organization information needs and/or learn new information requirement sets.

4. To provide initial validation for the conceptual development of the analyst support system by illustrative and reiterative evidence, and final validation through the design, construction, and implementation of a prototype adaptive analyst support system.

Research Deliverables

This research investigates the steps an analyst performs in determining and analyzing facts about an organization and develops the architecture for a support system to assist the analyst in this task. The deliverables are:

1. Identification of the fact gathering activities an analyst employs in determining an organization's information requirements.

2. The determination of the knowledge needed in order for a support system to assist an analyst in the process of gathering facts and performing an analysis of the facts to derive information system requirements for an organization.

3. A conceptual model for an adaptive analyst support system.

4. A prototype adaptive analyst support system to test the concept.

5. An analysis of the results of running the prototype adaptive analyst support system.

Significance of the Research

A review of MIS literature did not find an adaptive analyst support system to help the analyst during the fact gathering and analysis activities of the information requirements determination phase of systems analysis. The analyst struggles unassisted with a variety of complex details in order to gather and analyze organizational facts that help determine an organization's information requirements. An adaptive analyst support system can provide a tool that is currently missing from the MIS inventory. A support system would help the analyst more efficiently gather and analyze organization facts. A support system would use templates to provide the analyst with an organized structure during the complex fact gathering and analysis steps of the information requirements determination phase. An analyst support system gives the analyst a tool that provides the means to rapidly and efficiently gather and analyze organizational facts.

The significance of this research comes from the effect the adaptive analyst support system has on the analyst's productivity. Currently, the analyst relies on his or her own ability to recall prior similar organization information requirements and adapt them to the present situation. An adaptive analyst support system can allow the recall of similar organizational situations from a library and also provide a template structure to guide the analyst's fact gathering and analysis activities. An adaptive analyst support system can help the analyst with modification of a past set of information requirements to fit the current organization's information needs or assist in the construction of an entirely new set of information requirements. Finally, a support system with the capacity to learn new information requirement sets can evolve with the changing needs of the organization's internal and external environments.

Structure of the Dissertation

Chapter II contains the results of the literature search. The work done by other researchers that is similar to or guides this research is presented.

The research methodology is presented in Chapter III. The Unified Research

Methodology of Baldwin and Yadav is used. A framework for conducting the validation of the conceptual model and evaluating the prototype adaptive analyst support system is developed.

Chapter IV describes the conceptual model for the analyst support system. This model is used to develop a prototype that demonstrates the feasibility of the support system.

Chapter V contains the development of the symbol level architecture and structure chart for a case-based adaptive analyst support system. Chapter VI discusses the design and implementation of the prototype system. The prototype was constructed based on the conceptual model. The primary objective of the prototype system is to demonstrate the feasibility of an adaptive analyst support system.

The verification and validation of the conceptual model and prototype system are described in Chapter VII, and the experimental design is discussed in Chapter VIII. The experimental results and the conclusions are stated in Chapter IX. Chapter X presents a description of the contributions of this research to the MIS community. The research limitations and the reasons for such limitations are also discussed in Chapter X.

CHAPTER II

LITERATURE REVIEW

Introduction

This chapter reviews some of the prior research which forms the theoretical basis for this research. An adaptive analyst support system is conceptualized with the capabilities to reason, adapt, and learn during the process of assisting the analyst in gathering and analyzing organizational facts in the information requirements determination phase of systems analysis. There are three primary areas where prior research provides the theoretical underpinnings for an adaptive analyst support system: requirements determination, expert artificial intelligence analyst assistants, and case-based learning techniques.

The requirements determination process provides a background for identifying and structuring the analyst activities needed to gather and analyze organizational facts. The second major area of focus is the prior work performed on expert artificial intelligence systems designed to assist the analyst, and the limitations of those works in assisting with the requirements determination phase. Finally, the adaptive analyst support system must learn and adapt during the fact gathering and analysis portion of the requirements determination phase. This background stems from research performed in the area of case-based learning and reasoning. Other areas, such as knowledge identification and validation techniques for conceptual models and prototypes, also contribute to this research.

Each of the relevant areas that provide the primary theoretical basis for a support system that assists the analyst during the fact gathering and analysis steps of the requirements determination process is examined. For each primary area, a background, a description of the work done, and the limitations of each work as they apply to this research are provided.

Works on Requirements Determination Processes and Analyst Activities

G. B. Davis

Davis (1982) states the need for complete and correct information in order for the resultant information system to meet the needs of the organization. Davis feels that three primary difficulties prohibit the obtaining of complete and correct requirements. The first difficulty is human nature. Humans possess limitations as problem solvers and information processors. These limitations are a consequence of how humans accumulate and store long-term and short-term memories. The second difficulty is the variety and complexity of information requirements that exist. Finally, complex interactions exist between users attempting to define their needs and the analyst who is attempting to understand the users' needs and state the needs as information requirements.

Because of these difficulties, Davis feels no one requirement definition process or strategy can exist. Rather, the analyst needs to apply a variety of strategies to meet the varying sets of conditions existing in different organizational settings. Davis identifies four strategies for determining requirements:

1. Asking the user (interviewing).

2. Using an existing information system. The system may be operating in the target organization, or a similar system may be operating in a similar organization within the target organization's industry.

3. A synthesis from a utilizing system's characteristics. What outputs of the primary system are used as inputs for subsystems? These can be determined by performing normative, strategy set transformation, critical factor, process, decision, socio-technological, and input-output analysis.

4. Performing experiments with a prototype of the new or revised system. A working model sometimes helps managers and users to formulate additional requirements or modify previously stated needs.

The strategy or strategy mix is selected based on the degree of uncertainty that exists about the organization's requirements. If a low amount of uncertainty exists, the asking strategy is chosen. If a high degree of uncertainty exists, then the prototyping strategy is selected. The degree of uncertainty depends on whether or not a set of usable requirements can be stated, the ability of the users to state the requirements, and the skill of the analyst in eliciting and evaluating requirements.
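To make the selection rule concrete, the sketch below treats the analyst's uncertainty estimate as a score mapped onto Davis's four strategies. This is an illustrative Python fragment only; the numeric scale, the cutoff points, and the select_strategy function are assumptions of this example, since Davis states the rule qualitatively.

# Illustrative sketch of Davis's (1982) uncertainty-based strategy selection.
# The 0.0-1.0 scale and the cutoffs are hypothetical, not part of the framework.

def select_strategy(uncertainty: float) -> str:
    """Map a requirements-uncertainty estimate to an elicitation strategy."""
    if uncertainty < 0.25:
        return "asking the user (interviewing)"
    if uncertainty < 0.50:
        return "deriving from an existing information system"
    if uncertainty < 0.75:
        return "synthesis from utilizing system characteristics"
    return "experimenting with a prototype"

print(select_strategy(0.8))  # -> experimenting with a prototype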

This framework is limited by the skill of the analyst in determining the degree of uncertainty. Additionally, the framework does not specify the activities needed to carry out a particular strategy or strategy mix.

Yadav

Yadav (1983, 1985) also points out that several researchers have published frameworks and guidelines for gaining an understanding of an organization in order to build an information system that meets that organization's needs. However, their works do not provide systematic procedures or steps for the analyst to follow in order to describe completely the managerial functions of an organization.

Yadav proposes an Organization Analysis and Requirement Specification Methodology (OARSM). This methodology is a set of guidelines in the form of a framework to study and characterize an organization. There are five steps to this guideline. The first step is to do an aggregate structural analysis of the organization. Step two is to use functional analysis to describe the organization. The third step has the analyst perform a detailed analysis of the organizational functions to be supported by the information system. Finally, step five is the determination of the information system characteristics that will be needed to support the organizational functions.

This work does provide general guidelines for looking at an organization and determining its requirements. It does not provide a checklist of activities an analyst may perform in order to complete each step in the framework.

Alan M. Davis

Davis (1988) contends that tool sets and techniques have been suggested by researchers to determine system/software requirements. Yet, when studied closely, Orr's Structured Requirement Definition (SRD), Ross's Structured Analysis and Design Technique (SADT), and Teichroew's Problem Statement Language and Problem Statement Analyzer (PSL/PSA) actually address different problems.

Davis proposes a four-level taxonomy for software development that he feels can be extended to systems development. The four levels are: (1) User Needs Analysis, (2) Definition of the Solution Space, (3) External Behavior Definition, and (4) Preliminary Design. Specifically, levels one and two apply to the system requirements determination process.

Because of the complex nature of systems, analysts need a structure to organize the concepts, attributes, and interrelationships. This structure can be built by using partitioning, abstraction, and projection approaches. The result of using these approaches is a completed level one needs analysis. By identifying the organization's constraints, a level two solution space of possible information system functions that will satisfy the organization's need is built.

Davis does not provide a means of determining requirements but rather discusses the extent to which other techniques proposed by Ross, Orr, and Teichroew assist in requirements determination.

Byrd, Cossick, and Zmud

Byrd, Cossick, and Zmud (1992) recognize the complexity involved in the requirements determination process. They contend that other authors only provide proposed techniques for fulfilling various aspects of requirements elicitation. As a result, the authors propose a synthesis of the various techniques as the means to solving the multifaceted problem of requirements determination. Their synthesis contains the following techniques: prototyping, open interviews, brainstorming, a goal-oriented approach, cognitive mapping, variance analysis, repertory grid, scenario technique, structured interviews, critical success factor determination, and future analysis.

Once again, this approach does not specify the actual analyst steps that should be carried out in applying the techniques. The authors do point out that proper identification of information needs reduces the need for costly corrections to the system design in the later stages of the system development life cycle. The relevance for our research is that a mix of techniques should be employed by an analyst to deal with the multifaceted problem of determining a target organization's information requirement set.

Works on Expert Analyst Assistants

Various forms of expert systems to aid an analyst have been discussed individually by Dalal and Yadav (1992), Loucopoulos and Champion (1989), Puncello, Torrigiani, Pietri, Burton, Cardile, and Conti (1988), and Shemer (1987). The expert systems discussed in the literature that provide a background and set the stage for our proposed research are the Systems Analysis Expert Aide, the Analyst Assistant, Analyst Assist, and the Expert Modeling Support System.

Systems Analysis Expert Aide (SYS-AIDE)

Work on an expert system as an aid to the analyst was done in the late 1980s. Shemer (1987) proposed a conceptual model for an intelligent aid for systems analysts. The framework is intended to work on four levels:

1. The system's processes and events.

2. The data structures and elements in the system.

3. The system's entities (people, components, resources, roles).

4. Decisions taken at all levels of the organization (operational control, management control, and planning).

This is described as a semantic network of frames. A prototype intelligent computerized tool to assist the analyst was developed.

This tool was called SYS-AIDE (Shemer, 1987). It was the subject of Shemer's doctoral dissertation, written in Hebrew at Tel Aviv University. This computerized assistant's primary focus was on overall systems analysis, not the specific process of requirements determination.

Analyst Assistant

A year later, Puncello et al. (1988) discussed a set of tools designed to address tasks in the early phases of the software life cycle: "Knowledge-based system can support the specification and design of a software system" (p. 58). The authors called this collection of tools the Application Software Prototype Implementation System (ASPIS).

One of the ASPIS tools is an analyst assistant. This assistant includes knowledge about the methods the analyst should follow in the analysis of a system. This knowledge lets the analyst ask the assistant "What do I do now?" or "How do I accomplish this analysis phase?" Specifically, the Assistant is capable of telling the analyst what should be described in each phase of the analysis, from which viewpoint, and in what detail. The prototype ASPIS system was developed in PROLOG and run on a Sun Microsystems workstation under the UNIX operating system (Puncello et al., 1988).

While the Analyst Assistant tool is aimed at the analysis phase, it does not perform an actual analysis of the system requirements. The Analyst Assistant does not provide checklists, guidelines, or recommended activities to help the analyst determine the system requirements.

Analyst Assist

Loucopoulos and Champion (1989) suggested an Analyst Assist prototype system to provide an environment for the capture and specification of system requirements. The primary objectives of this knowledge-based system are to capture informal requirements and improve the transition from informal requirements to formal representation, to specify and document these requirements, and to validate the specification by prototyping and animating the specification.

The Analyst Assist was prototyped using Texas Instruments' Explorer and the LISP programming language. It does provide for the elicitation of user requirements through checklists and user interviews. It does not provide for the use of other elicitation techniques, such as operating the current system or determining management's organizational goals. This tool also does not learn from previously analyzed systems.

Expert Modeling Support System

Yadav and Chand (1989) proposed an expert system to assist the analyst/project team. This expert modeling support system (EMSS) provided a model base containing formal notation to describe a model of organizational functions and a knowledge base to assist the analyst/project team in building a model to span the manager/user domain gap.

Their intelligent decision support system provided:

1. An organizational study framework.

2. A knowledge base containing extensive knowledge and meta-knowledge about organizations.

3. A model base to support the modeling of organizational functions by using the Structured Analysis and Design Technique (SADT) and the Integrated Computer-Aided Manufacturing Definition Zero (IDEF0) Technique to produce a function model.

Additionally, the analyst's formal model of the object system is stored in the model base as it is developed (Yadav and Chand, 1989). While the Yadav and Chand EMSS is an expert artificial intelligence modeling support system, it is limited by not providing for the recall and adaptation of similar organizational information requirement sets and the learning of new requirement sets.

Dalal/Yadav EMSS Modification

Dalal and Yadav (1992) proposed an EMSS that differed from the traditional decision support system (DSS) in two ways. First, the EMSS had a user base and user base management system in place of the DSS's database and relational database management system (RDBMS). Second, the EMSS incorporated a knowledge base and knowledge base management subsystem (KBMS). The user base stored initial information collected by the analyst about the current system and the developing model of the object system. The model base contained templates for various object systems, modeling knowledge, one or more formal models such as data flow diagrams (DFD) or IDEF0, and models to generate requirements specifications and analysis reports. The knowledge base stored knowledge and meta-knowledge about the organization as well as the inference rules (Dalal and Yadav, 1992) (see Figure 2.1 below).

[Figure 2.1 is a block diagram: the user interacts through a Dialog Management Subsystem (DMS) with the User Base Management Subsystem (UBMS), Model Base Management Subsystem (MBMS), and Knowledge Base Management Subsystem (KBMS), which manage the user base, model base, and knowledge base, respectively.]

Figure 2.1. Dalal and Yadav EMSS

The analyst/project team fills in some of the required elements of the template to create the primary user base. Over a number of sessions, the analyst or project team members collect data to fill the various template slots, thus building a clear description of the organization. Additionally, inference rules from the knowledge base propose template slot values for the analyst/project team to accept or reject. The template can be modified as necessary to collect additional information. To support the analyst/project team, the knowledge base accumulates knowledge about the history of the organization, the organizational goals and policies, governmental regulations and policies, external and internal environments, the structure of the organization, the measures of performance, the organization's operating core, structural configurations, subgoals of the organization's functional units, major managerial activities and decisions, expertise to infer the organization's structure, and a body of modeling knowledge (Dalal and Yadav, 1992).

This knowledge is acquired over time by a series of interactions in the form of inputs, queries, and outputs. The user base at the completion of the study will contain a complete organizational description, the existing IS model, and a proposed IS model.
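To make the template-filling interaction concrete, the sketch below shows one way a slot value could be proposed by a rule and accepted by the analyst. It is a hypothetical Python illustration, not the EMSS implementation; the slot names, the insurance rule, and the propose_operating_core function are invented for this example.

# Hypothetical sketch of EMSS-style template filling: a template is a set of
# named slots, and a simple inference rule proposes a slot value that the
# analyst may accept or reject. All slot names and rule content are invented.

template = {"industry": None, "primary_goal": None, "operating_core": None}

def propose_operating_core(t):
    """Toy inference rule: derive a proposed value from an already-filled slot."""
    if t["industry"] == "insurance":
        return "policy underwriting and claims processing"
    return None  # no rule fires; the analyst must supply the value directly

template["industry"] = "insurance"         # fact collected during a session
proposal = propose_operating_core(template)

analyst_accepts = True                     # stand-in for an interactive reply
if proposal is not None and analyst_accepts:
    template["operating_core"] = proposal  # accepted values extend the user base

print(template)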

"The dialog management subsystem (DMS) provides the user interface for the analyst by interacting with all other subsystems" (Dalai and Yadav, 1992, p. 1378)

Additional management subsystems control the access, removal, and entry of information and knowledge into and between the user, model, and knowledge bases. Dalal and Yadav's knowledge-based decision support system is limited in that it does not provide for the use of mixed organizational fact gathering methodologies, nor does it contain a learning component in the expert modeling support system. Lessons learned from previous information requirements determination sessions cannot be recalled for adaptation to the current organization's information needs.

Works on Case-Based Learning

An artificial intelligence general problem solver accepts verbal problems and gives verbal answers (Riesbeck and Schank, 1989). These general problem solvers reason from first principles and therefore pay a price in overhead each time a similar problem is run. Every problem, even if it is similar to a problem already solved, will be reasoned from first principles (Riesbeck and Schank, 1989). Artificial intelligence expert systems use rules for problem solving and domain knowledge gained from domain experts. The rule system should be periodically reevaluated and updated by the domain experts. Human understanding, on the other hand, is based on a process of explanation. Humans essentially reason by case matching and adapting. They try to fit a current situation to a case or explanation that has solved a problem in the past (Kolodner, 1993; Riesbeck and Schank, 1989). "In essence, then, case-based reasoning means no more than reasoning from experience" (Riesbeck and Schank, 1989, p. 11).

The major processes used by a case-based reasoner are case storage, retrieval of a case, adaptation of a retrieved case to the current situation, and criticism or evaluation of how well the adapted case solves the problem. Because no old case will exactly fit a current situation, adaptation is necessary. The differences between the current situation and the old case are accounted for by adaptation. Learning is dependent upon the criticism and evaluation of the adapted case solution to the problem and upon feedback (Kolodner, 1993).

The quality of a case-based reasoner's problem solution depends on the case-base of stored prior experiences, the ability of the reasoner to understand the new problem situation in terms of the old stored cases, the ability to make a workable adaptation to a stored case, its adeptness in carrying out evaluation and repair, and the ability to incorporate new experiences into stored memory. Repair is adaptation based on feedback from a prior failed solution (Kolodner, 1993).
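To make the cycle concrete, the following is a minimal sketch of the store-retrieve-adapt-learn loop in Python. It is offered only as an illustration; the feature-overlap similarity measure and the dictionary case representation are assumptions of this example, not the design of any reasoner discussed below.

# Minimal sketch of a case-based reasoning cycle: retrieve the most similar
# stored case, adapt its solution to the new problem, and store the result.
# The case representation and similarity measure are illustrative assumptions.

case_base = [
    {"features": {"industry": "retail", "size": "small"},
     "solution": ["interview store managers", "review sales reports"]},
    {"features": {"industry": "insurance", "size": "large"},
     "solution": ["interview underwriters", "model claims workflow"]},
]

def similarity(problem, features):
    """Count the feature values the new problem and a stored case share."""
    return sum(1 for k in problem if features.get(k) == problem[k])

def retrieve(problem):
    """Return the stored case most similar to the new problem."""
    return max(case_base, key=lambda c: similarity(problem, c["features"]))

def adapt(case, problem):
    """Copy the old solution and re-index it under the new problem's features."""
    return {"features": dict(problem), "solution": list(case["solution"])}

new_problem = {"industry": "retail", "size": "large"}
new_case = adapt(retrieve(new_problem), new_problem)
case_base.append(new_case)  # learning: the evaluated case is retained for reuse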

Three case-based reasoners each demonstrate some of the characteristics needed by the case-base in the adaptive analyst support system. Although these reasoners were developed to meet limited objectives, the techniques they use were needed in developing a prototype adaptive analyst support system.

CYRUS

In 1984, Kolodner designed a case-based reasoner to read input narratives about Cyrus Vance's diplomatic travels. Interpretations of key events were stored in the case-base. The reasoner was able to make generalizations and answer questions about Vance's travels. The domain of this reasoner is not of interest here, but its ability to make generalizations and ask questions appeared relevant to the adaptive analyst support system prototype (Riesbeck and Schank, 1989).

PLEXUS

Alterman (1986) designed a planning tool that used case-based reasoning. It adapted old plans to new situations. The program had a limited case-base but used an elegant adaptation technique to arrive at new solutions (Riesbeck and Schank, 1989). This is also a technique that the adaptive analyst support system prototype requires.

CABSYDD

The Lo and Choobineh (1995) case-based reasoner focused on the design of a database for a single department of an organization. It drew on a case-base of databases used in other departments to meet management needs, selecting a similar database and adapting it to solve the current department's database design needs. The reasoner used a dialog with the designer to ascertain the values needed by the case-base's primary and secondary indexing schema. These indexes were then used to retrieve a candidate solution from the case-base. The program then continued a dialog with the designer during the adaptation process until a satisfactory database design was reached. The dialog between the user and the reasoner during the adaptation phase is a technique of interest in the construction of the adaptive analyst support system prototype (Lo and Choobineh, 1995, 1996).
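As a concrete illustration of the two-level indexing idea, retrieval can be pictured as a lookup keyed by a primary and a secondary index value. This is a hedged sketch: the index vocabulary and stored designs below are hypothetical placeholders, not Lo and Choobineh's actual schema.

# Sketch of CABSYDD-style retrieval through primary and secondary indexes.
# Index values and stored designs are hypothetical placeholders.
from typing import Optional

design_case_base = {
    ("university", "registration"): "student-records database design",
    ("university", "payroll"): "staff-payroll database design",
    ("hospital", "admissions"): "patient-admissions database design",
}

def retrieve_design(primary: str, secondary: str) -> Optional[str]:
    """Look up a candidate design by its (primary, secondary) index pair."""
    return design_case_base.get((primary, secondary))

print(retrieve_design("university", "payroll"))  # -> staff-payroll database design

If no candidate is found, a reasoner of this kind falls back to a dialog with the designer to build a new case, the technique noted above as being of interest for the CAASS prototype.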

Summary

This chapter discussed the literature that is directly relevant to fact gathering activities for organizational information requirements determination, expert analyst support systems, and case-based reasoning systems. Previous research has been conducted to determine the feasibility of analyst support systems for assisting the analyst with the analysis, design, and implementation phases of the systems development life cycle.

The literature reviewed provides a great deal of support for this research project.

The sparsity of literature specifically addressing how an analyst gathers and analyzes facts to determine organizational information requirements supports the need for this research.

CHAPTER III

RESEARCH METHODOLOGY

Introduction

The principal objectives of this research are to suggest a conceptual model for an analyst support system and to propose a prototype system to test and validate the conceptual model. In order to fulfill these objectives, a rigorous and flexible methodology was needed. The Unified Research Methodology (URM) developed by Baldwin and Yadav (1995) contained the necessary elements that best supported the research objectives.

The URM is an artificial intelligence (AI) equivalent of Ackoff's general research methodology. The URM provides a methodology to meet the special needs of the AI research community. Specifically, it includes nine steps that support AI conceptual model and architecture development and the building of system prototypes.

Baldwin and Yadav's (1995) nine steps include:

1. Formulate the problem.

2. Construct the knowledge level principles and theories.

3. Construct the symbol level principles and theories.

4. Operationalize the knowledge level theories.

5. Identify or construct a symbol level design.

6. Identify or develop a prototype system based on the symbol level design.

7. Test the prototype system(s).

8. Evaluate and validate the results.

9. Refine the system by repeating steps 1-8 as necessary.

This research is conceptual in that it proposes an expert case-based decision support system designed to assist the analyst in determining organizational system requirements. Step one of the URM, problem formulation, corresponds to the first step in both the classical scientific method and Ackoff's general research methodology.

Steps two and three, developing the knowledge and symbol level theories, relate to the theory building of the classical scientific method and to Ackoff's developing a model for the proposed solution. Steps four and five of the URM are the rough equivalent of Ackoff's operational system development based on the proposed model and of the classical scientific method's hypothesis development step. The URM's prototype development, step six, relates to a continuation of Ackoff's operational system development and corresponds to creating an experiment in the classical scientific method. Step seven, the testing of the prototype, is comparable to the classical scientific method's data collection step and Ackoff's operational system testing. In step eight of the URM, evaluation and validation of the prototype systems are performed. This corresponds to the classical scientific method's data analysis step and Ackoff's general research methodology's evaluate-and-validate step.

Finally, in step nine, any modifications needed to refine the underlying concepts of the prototype system are made, and the above steps are repeated as necessary to evaluate the revised prototype.

Formulating the Problem

The first step of the URM consists of a problem statement that describes the characteristics of the current environment. It also identifies the shortcomings as they currently exist. Additionally, the characteristics of the proposed system are identified and related to the problem's solution. Assumptions and constraints are stated along with the research issues/questions and objectives (Baldwin and Yadav, 1995). This research addresses two problem areas. First, the literature review indicates that the design of a system to support the analyst in the requirements determination process has been neglected. A system is needed to assist the analyst with the gathering and organizing of facts about the organization's increasingly complex information system needs. Second, the specific analyst activities and the accompanying knowledge requirements should be identified and provided as the initial content of an adaptive expert analyst modeling support system.

Constructing the Knowledge Level Principles

The second step in the URM is to construct the knowledge level principles. The knowledge level describes "the beliefs, goals, reasoning capabilities, learning capabilities and potential actions that are attributed to a system so that its behavior can be understood" (Baldwin and Yadav, 1995, p. 853). Propositions that explain how certain knowledge types and attitudes interact with environmental characteristics to produce a behavior are called knowledge principles (Baldwin and Yadav, 1995). These propositions are the goals or requirements a system must meet in order to exhibit the desired behavior.

In order to construct the knowledge level principles, the research problem is analyzed and a set of system requirements determined. The system requirements define the fundamental properties of a system able to successfully provide the analyst with adaptable fact gathering support. A set of system behaviors is developed which, when performed, achieves the requirements. Additionally, system capabilities are derived which allow an adaptable support system to realize the desired set of system behaviors. Finally, a knowledge level model is proposed to satisfy and incorporate the needed system capabilities.

Constructing the Symbol Level Principles

The symbol level is the body of data structures and processes that realize a body of knowledge at the knowledge level (Newell, 1982). The symbol level principles are "propositions that describe how the characteristics of an AI system interact with the environment to produce a behavior" (Baldwin and Yadav, 1995, p. 853). The symbol level architecture embodies the data structures, control, and processes that provide the functionality described by the knowledge level conceptual model. A prototype can be designed using an AI shell or a high-level computer language. AI shells are development applications that contain pre-built functions and structures needed to develop AI systems. The prototype is implemented by installing and running it on computer hardware. The implemented prototype adaptive support system is evaluated, and the evaluation results are analyzed. If the prototype successfully solves the research problem, it is accepted; otherwise, the prototype is modified.

Developing the Prototype System

This development process spans the next three steps of the URM. The knowledge level and symbol level concepts are operationalized to complete a detailed design of a prototype system. Concentration is on the user's interaction with the system as well as the specific analyst support and template development features the prototype provides the analyst user. A detailed specification for the prototype system is developed and used to construct an operational prototype system. This prototype system serves to validate the conceptualization of the knowledge level by demonstrating the application of the knowledge level concepts. It provides the analyst user with a tool for gathering and organizing organizational facts. The organization's information system requirements are constructed from the gathered facts. The prototype stores and retrieves facts for previously analyzed organizations.

A key strength of prototyping is that the prototype need not be a fully developed production system but rather a limited construction to test the knowledge level concepts. A prototype helps establish a means of predicting the behavior characteristics of the system being modeled and allows controlled experimentation in situations where direct observation and experimentation are impractical or too expensive.

Prototyping also has some attendant limitations. Since the prototype is not a fully operational system, it embodies only the key concepts from the knowledge level. The determination of which concepts to include in the prototype is a challenge. If too many concepts are included, the cost of the prototype may become prohibitive. If too little detail is included in the prototype system, results may be inconclusive. Prototype validation offers a particular challenge. It should be a logical validation of the prototype against the design criteria derived from the knowledge and symbol level principles and concepts.

Construction of the prototype system required constant monitoring and analysis during development. Refinements to the prototype development steps were needed, and some design steps had to be repeated in order to ensure consistency with the derived design criteria.

Evaluating and Validating the System

The dictionary defines validation as the act, process, or instance of showing a proposition or model to be relevant, meaningful, logically correct, and appropriate to the end in view. Validation as it relates to research usually fits a portion of this definition. Validation is a crucial but controversial step in the process of presenting research results. Philosophers, scientists, social scientists, and researchers in computer and information sciences have argued over the best way to validate research and conceptualizations.

In their study of epistemology (the theory of knowledge), early philosophers such as Descartes, Locke, Hume, Kant, and Hegel were concerned with how knowledge was gained. The arguments continued into the twentieth century with Russell, Wittgenstein, Rorty, Kuhn, and Toulmin each presenting views that supported or differed from the early philosophers and each other. As these arguments developed around means of validating knowledge, the philosophers fell into two groups. One group expounds the Reductionist/Formalist/Foundationalist (RFF) view of knowledge, and the other group supports the Holistic/Social/Relativist (HSR) view (Barlas and Carpenter, 1990). The RFF view holds that models are absolute, objective descriptions of the real world and that the model validation process applies formal algorithms in a confrontational scenario to attempt to prove the model false. The HSR group views the model as only one of many possible descriptors of a real-world situation, and model validity is derived by a semiformal, social, conversational process.

The multifaceted view of validation presented in the dictionary is an attempt to combine both philosophical views of validation (Barlas and Carpenter, 1990). Since the discipline of management information systems has its roots in multiple disciplines, we see arguments for both views from researchers in information systems. Boloix and Robillard (1995) have stated that "evaluation involves judging the merits of a phenomenon (a system, a person, a thing, or an idea) against some explicit or implicit yardstick" (p. 19). Steps seven, eight, and nine of the URM describe the recursive testing, validation, and modification needed to prove the system conceptualization and prototype. Testing demonstrates that the conceptual model is both feasible and effective and solves the stated research problem.

A Validation Framework

The HSR view has been adopted for the validation of this research, and a framework for performing the validation is suggested. The framework requires that the validation be performed in a continuous, stepwise fashion. Each step fulfills the requirements of the previous URM step, and the final results satisfy the originally stated problem or goal. Many researchers, including Cohen and Howe (1988), have suggested a life cycle or sequence of steps for research validation. A more comprehensive sequence of steps is proposed here, with the validation emphasis added (Figure 3.1). Each step of the framework satisfies the requirements of the previous step. If the current step does not satisfy the preceding step's requirements, it is redesigned or modified until it does. If the results are consistent, then the next step in the validation framework is executed. The series of validation steps assumes that the conceptual research involves a demonstration of the concepts through the use of a prototype, simulation, or case study.

[Figure 3.1 is a flowchart of the validation framework. Conceptual validation: identify a need for fact gathering support; determine the fact gathering support requirements; propose the system's behavior; propose the system's capabilities; propose the knowledge level conceptual model. Prototype validation: propose the symbol level system architecture; create the prototype design; implement the prototype; test and evaluate the prototype; analyze the results. The final results solve the problem statement.]

Figure 3.1. A Vahdation Framework for CAASS

Conceptual Validation. The first step shows the need for fact gathering support for an analyst. The next step is the determination of a set of requirements that should be met in order to solve the stated problem. In this case, the analyst support system must be adaptable and able to learn new solutions as well as being able to recall and modify prior solutions to the target organization's needs. A set of validation criteria is developed to verify that the requirements are complete and will solve the problems included in the problem statement. From the requirements, a proposed system behavior is conceptualized. This system behavior or function set should be such that it will fulfill all the requirements specified in the prior step. Next, the system capabilities are developed. These capabilities allow the achievement of the desired system behaviors or functions. Finally, a system architecture is developed. The architecture provides the system capabilities specified in the previous step of the framework.

Prototype Validation. The last five steps of the validation framework constitute the prototype validation. The execution of these steps attempts to assess the validity of the conceptual model. The prototype validation demonstrates that the conceptual model of the system does embody the necessary elements to allow the production of meaningful results (Youngblood and Pace, 1995). Thus the validity of the prototype, and indirectly of the symbol level concepts and principles, asserts the correctness of the conceptual model.

Figure 3.1 indicates that prototype validation is achieved through design consistency, design implementation, prototype performance validation, and results verification. In this process of prototype validation, a prototype is designed to match the symbol level architecture. Here, the concern is to ensure that the prototype design is consistent and flows logically from the conceptualized model and symbol level architecture. Next, the prototype is actually built and the source code verified. The prototype implementation is carefully examined and the following questions are asked:

Were the pertinent aspects of the implementation evaluated as they relate to the stated problem? Does the implementation accurately reflect the design criteria? Have all aspects of the prototype design been implemented? If not, what limitations might this pose on the ability to solve the stated problem? The performance of the prototype is evaluated by running "canned" test data. Coding errors and problems revealed should be addressed and corrected.

Prototype Evaluation. Since a similar system is not available to compare objectively with the prototype, a subjective evaluation by user analysts will be used.

The leading method for systematic validation of expert systems and prototypes is the use of projects or test cases (O'Keefe and O'Leary, 1993). Undergraduate students in

systems analysis are provided with a project to expose them to fact gathering using manual, unassisted techniques. A second project from the same domain is given to allow the students to gather facts with the assistance of the prototype system. A questionnaire is provided for the students to complete, which uses a Likert five-point rating scale (Emory, 1985). The questions are designed to ask the analysts to rate the extent to which the prototype system provides assistance with their fact gathering activities.

The evaluation questionnaire contains twelve questions. By limiting the number of questions, the chance of respondent fatigue and loss of interest is reduced. The questionnaire is used to evaluate the extent to which the assisting prototype system satisfies the stated research problem. The validation framework asks the following questions:

1. Do the results logically flow from the observations made during the prototype evaluation?

2. Are the results meaningful with respect to our original problem statement in the context of the real world?

If the results are appropriate, then attempts should be made to generalize them further.

Summary

The Baldwin-Yadav URM provides a rigorous framework for conducting a

systematic approach to the conceptualization and prototyping of artificial intelligence

systems. This chapter has detailed the URM and provided a validation framework for

conceptual models and prototypes. Chapter VII discusses the verification and validation of the conceptual model and the prototype adaptive analyst support system.

The next chapter presents the conceptual development of a case-based adaptive

analyst support system.

CHAPTER IV

CONCEPTUAL DEVELOPMENT OF A CASE-BASED

ADAPTIVE ANALYST SUPPORT SYSTEM

(CAASS)

Introduction

The purpose of this chapter is to present the conceptual development of a case-based adaptive analyst support system. The purpose of this system is to assist the analyst with fact gathering activities during the information requirements determination phase of systems analysis. First, common fact gathering activities suggested by various analysis methodologies are identified. Fact gathering activities that appear supportable by CAASS are then described. Finally, the system behavior or functions CAASS needs to perform in order to support the system and fact gathering requirements, along with the component knowledge level concepts and principles, are presented. These theories and principles constitute steps two and three of the URM and provide the base upon which a conceptual architecture may be built (Baldwin and Yadav, 1995).

Fact Gathering Activities

A key first step in the requirements determination phase is to gather facts about the target organization. A number of methodologies for requirements analysis have suggested fact gathering activities. These methodologies include: Critical Success Factors (CSFs) (Rockart, 1979); End/Means Analysis (E/M) (Wetherbe, 1991); Business Systems Planning (BSP) (IBM, 1975); Business Information Analysis and Integration Technique (BIAIT) (Carlson, 1979); Business Process Reengineering (BPR) (Dewitz, 1996; Nakatani and Yadav, 1996); and the Business Information Characterization Study (BICS) (Kerner, 1979; Yadav, 1985). Additionally, Byrd, Cossick and Zmud (1992) provided a synthesis of other research conducted on requirements analysis and fact gathering. These methodologies contained a number of common fact gathering activities and some that were unique.

As methodologies were reviewed for possible fact gathering activities, a comprehensive list of activities was built. Duplicate activities were removed and unique fact gathering activities were combined with those addressed in the Byrd, Cossick and Zmud (1992) research. A careful review of these activities revealed that some activities were not applicable to the phase of requirements analysis addressed in this research. A list of fifty-three fact gathering activities an analyst could use to help determine a target organization's information requirements was synthesized. These fifty-three fact gathering activities are shown in Table 4.1.

There are four columns included in Table 4.1. The first column contains sequence numbers for the table listings. The second column describes the fact gathering activities that need to be completed in the process of determining the target organization's information requirement set. The third column lists the system functions needed to support the analyst during the fact gathering activity. Column four lists the reference source for the fact gathering activity listed in the table.

Table 4.1. Fact Gathering Activities.

ID | ACTIVITY | SYSTEM FUNCTION | SOURCE(S)
1 | Establish a Fact Gathering Plan | 1. Search for Analyst Fact Gathering Activities. 2. Recall Analyst Fact Gathering Activities. 3. Suggest Checklist of Fact Gathering Activities. 4. Build Fact Gathering Schedule. | BSP, CSF, BIAIT, E/M
2 | Identify Industry of Organization | 1. Search for Industry Types and Standard Industrial Classification Codes. 2. Recall Industry Type Classification Codes. 3. Classify by Standard Codes. | CSF
3 | Identify Organization Structure Type | 1. Search for Organizational Types and Characteristics. 2. Recall Organizational Types and Characteristics. 3. Classify according to Mintzberg's Types. | (Yadav, 1985)
4 | Identify Organizational Goals | 1. Search for Organizational Goals for this Industry Type. 2. Recall Organizational Goals. 3. Suggest Additional Organizational Goals. | BSP; (Yadav, 1985)
5 | Identify Process Management Functions (such as: Development, Manufacturing, Marketing, Sales, Service, Finance, Subprocesses, etc.) | 1. Associate with Typical Process Management Functions for This Organization Type. 2. Suggest Additional Possible Process Management Functions. | BSP
6 | Identify Resource Management Functions (such as: Cash, Personnel, Materials, Facilities, Output Products, Output Services, etc.) | 1. Associate with Typical Generic Resource Management Functions for this Organization Type. 2. Suggest Additional Resource Management Functions. | BSP
7 | Identify Business Process | 1. Associate with Typical Generic Business Processes for this Organization Type. 2. Hypothesize Additional Processes for this Business Type. | BSP; BPR (Nakatani and Yadav, 1996)
8 | Identify the Objective of the Business Process | 1. Associate with Organization's Goals. 2. Suggest a List of Additional Process Objectives. | BPR (Nakatani and Yadav, 1996)
9 | Establish the Importance of the Business Process to Organizations | 1. Hypothesize a Prioritization of Business Processes based on Strategic Goals and Customers' Relative Importance. 2. Suggest a Prioritized List of Processes. | BPR (Nakatani and Yadav, 1996)
10 | Determine the Outcome or Work Product of the Business Process | 1. Associate with Input for a Follow-on Process or Achievement of Organizational Goal. 2. Suggest a Classification of Work Products as Intermediate or Final. | BPR (Nakatani and Yadav, 1996)
11 | Establish the Relationships between the Work Products and the Business Process Objective | 1. Associate Business Process Work Products with Process Objectives. 2. Suggest List of Work Products Needed to Meet Business Process Objectives. | BPR (Nakatani and Yadav, 1996)
12 | Establish the Customer Satisfaction Level with the Business Process | 1. Associate Business Process with Customer's Needs. 2. Suggest a List of Customer Satisfaction Levels with Each Business Process. | BPR (Nakatani and Yadav, 1996)
13 | Determine the Resources Needed by this Business Process | 1. Associate Business Process Inputs with Resources. 2. Suggest a List of Needed Resources. | BPR (Nakatani and Yadav, 1996)
14 | Identify the Relationships between the Work Products and the Resources Used by this Business Process | 1. Associate Business Process Resources with Work Products. 2. Suggest a List of Resources Needed to Produce Each Work Product. | BPR (Nakatani and Yadav, 1996)
15 | Determine the Activities Carried Out on this Resource | 1. Associate Business Process Activities with Resources. 2. Suggest a List of Activities Needed to Transform Each Resource. | BPR (Nakatani and Yadav, 1996)
16 | Identify the Relationships between the Activities and the Resources Used by this Business Process | 1. Associate Specific Business Process Activities with Specific Resources. 2. Suggest a List of Direct and Indirect Relationships between Activities and Resources. | BPR (Nakatani and Yadav, 1996)
17 | Identify the Collection of Activities that Constitute a Process Step | 1. Associate Business Process Steps with Particular Activities. 2. Suggest a List of Activities Needed to Complete Each Business Process Step. | BPR (Nakatani and Yadav, 1996)
18 | Determine the Relationship between the Process Steps and the Resources (Process Step Inputs/Outputs) | 1. Associate Business Process Steps with Resources. 2. Suggest a List of Inputs/Outputs for Each Business Process Step. | BPR (Nakatani and Yadav, 1996)
19 | Establish the Relationships between the Business Process and Process Steps | 1. Associate Business Process Steps with a Business Process. 2. Suggest a List of Process Steps Composing Each Business Process. | BPR (Nakatani and Yadav, 1996)
20 | Identify the Sequential Relationship among the Process Steps | 1. Search for Process Step Sequences. 2. Recall Step Sequences. 3. Suggest Step Sequence Needed to Complete Business Process. | BPR (Nakatani and Yadav, 1996)
21 | Determine the Critical Events for the Business Process and its Process Steps | 1. Associate Business Process Steps with Critical Events. 2. Suggest a List of Critical Events for Each Business Process Step. | BPR (Nakatani and Yadav, 1996)
22 | Establish the Cycle Time for a Business Process and its Component Process Steps | 1. Associate Business Process Steps with Cycle Times. 2. Suggest a List Comparing Observed Cycle Times with Average Cycle Times for Similar Process Steps. | BPR (Nakatani and Yadav, 1996)
23 | Establish the Information Required to Measure the Quality of a Process and its Component Process Steps | 1. Associate Business Process Quality with Step Quality. 2. Suggest a List of Quality Measures for Business Processes and Their Component Steps. | BPR (Nakatani and Yadav, 1996)
24 | Determine the Costs of the Process and Included Process Steps | 1. Associate Business Process Costs with Step Costs. 2. Suggest a Table of Process Costs and Component Step Costs. | BPR (Nakatani and Yadav, 1996)
25 | Identify Organizational Entities that Perform or Operate Business Processes | 1. Associate with Typical Generic Entity/Process Associations for This Organization Type. 2. Suggest an Entity/Process Matrix. | BSP
26 | Understand Current Information System Problems, Goals and Objectives | 1. Organize and Store Identified Current Goals and Objectives. 2. Remember and Recall Goals and Objectives. 3. Suggest a Modified Requirements Set. | BSP
27 | Observe Current Information System from Manager Level | 1. Recall a Requirements Set. 2. Modify a Recalled Set for Similar Organization. 3. Suggest as a Tentative Requirements Set. | BSP
28 | Observe Current Information System from User Level | 1. Recall a Requirements Set. 2. Modify a Recalled Set for Similar Organization. 3. Suggest as a Tentative Requirements Set. | BSP
29 | Understand Planned or Follow-on Information System | 1. Recall Information Requirements Set. 2. Modify Potential Requirements Set. | BSP
30 | Interview Executives about Business Problems and Management Needs, CSFs, BIAIT, E/M, and Organizational Goals the Information Must Support | 1. Recall Information Requirements Set. 2. Modify Potential Requirements Set. | BSP, CSF, BIAIT, E/M
31 | Identify Key Owners (Managers) for Each Process | 1. Associate Managers with Processes. 2. Suggest Manager-Process Matrix. | BSP
32 | Interview Key Owners (Managers) for Each Process | 1. Search for Requirements Set. 2. Recall Requirements Set. 3. Suggest Adapted Requirements List. | BSP, CSF, BIAIT, E/M
33 | Identify Key Operators (Users) for Each Process | 1. Associate Operators with Processes. 2. Suggest Operator (User)-Process Matrix. | BSP
34 | Interview Key Operators (Users) for Each Process | 1. Search for Requirements Set. 2. Recall Requirements Set. 3. Suggest Adapted Requirements List. | BSP
35 | Identify Key Owners (Managers) for Each Resource | 1. Associate Managers with Resources. 2. Suggest Manager-Resource Matrix. | BSP
36 | Interview Key Owners (Managers) for Each Resource | 1. Search for Requirements Set. 2. Recall Requirements Set. 3. Suggest Adapted Requirements List. | BSP, CSF, BIAIT, E/M
37 | Identify Key Operators (Users) for Each Resource | 1. Associate Users with Resources. 2. Suggest User-Resource Matrix. | BSP
38 | Interview Key Operators (Users) for Each Resource | 1. Search for Requirements Set. 2. Recall Requirements Set. 3. Suggest Adapted Requirements List. | BSP, CSF, BIAIT, E/M
39 | Prepare a Questionnaire for Executives Not Reachable for Interviews | 1. Recall Typical Executive Interview Questions. 2. Suggest a List of Questions for Questionnaires. | BSP
40 | Issue Executive Questionnaire | | BSP
41 | Retrieve Executive Questionnaire | | BSP
42 | Prepare a Questionnaire for Key Owners (Managers) Not Reachable for Interviews | 1. Recall Typical Manager Interview Questions. 2. Suggest a List of Questions for Questionnaires. | BSP
43 | Issue Key Manager Questionnaire | | BSP
44 | Retrieve Key Manager Questionnaire | | BSP
45 | Prepare a Questionnaire for Key Operators (Users) Not Reachable for Interviews | 1. Recall Typical Operator Interview Questions. 2. Suggest a List of Questions for Questionnaires. | BSP
46 | Issue Key User Questionnaire | | BSP
47 | Retrieve Key User Questionnaire | | BSP
48 | Analyze Executive Interviews and Questionnaires | 1. Recall and Adapt Requirements Model. 2. Suggest a Revised Requirements List. | BSP
49 | Analyze Key Manager Interviews and Questionnaires | 1. Recall and Adapt Requirements Model. 2. Suggest a Revised Requirements List. | BSP
50 | Analyze Key User Interviews and Questionnaires | 1. Recall and Adapt Requirements Model. 2. Suggest a Revised Requirements List. | BSP
51 | Reach a Joint Manager/User Consensus over Identified Information Needs for Organizational Support | 1. Revise Requirements Model. 2. Suggest a New Requirements List. | Wetherbe, 1991
52 | Relate Manager and User Information Needs | 1. Associate Manager and User Needs with Identified Information Elements. 2. Suggest a Manager/User/Information Matrix. | Wetherbe, 1991
53 | Review Identified Information Needs with Organization's Executives | 1. Recall and Revise Requirements Model. 2. Suggest as a New Case. | Wetherbe, 1991

System Behavior

An analyst support system should learn new information requirements for particular organization types and be able to remember these information requirement sets in future organizational fact gathering situations. This means that the analyst support system should be an adaptive knowledge base system. According to Yadav (1989), an adaptive knowledge base system is a learning system that uses its knowledge base and environmental inputs to generate new knowledge over time. This is a desirable behavior for CAASS to exhibit. This behavior is referenced whenever the CAASS performs a function or action that requires it to modify, revise, or adapt its knowledge. For example, if the system possesses knowledge about the concept "red" and the concept "ball," then the system's associate function links or merges the concepts "red" and "ball" into a higher level of knowledge, "red ball" (Yadav, 1989). When the system performs an associate function, it is exhibiting its ability to learn and adapt.

Seven system functions have been identified that support fact gathering activities.

These functions are: associate, build, classify, modify, recall, search, and suggest. They are listed in Table 4.2. The associate function is used to link concepts together, forming a new concept. A new information requirement list is created by the build function. The classify function determines the organizational type based on Standard Industrial Classification codes promulgated by the Executive Office of the President, Office of Management and Budget (United States, 1987) and on Mintzberg's organizational types (Mintzberg, 1979). An existing list of requirements may be modified to conform to the needs of the organization under study by the modify function. The recall function is used to retrieve learned requirement sets. The locating of relevant fact gathering activities and organization types is performed by the search function. Finally, the suggest function proposes checklists and requirement sets from memory to assist the analyst in gathering facts about the organization under study.
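To make the division of labor among these functions concrete, the following minimal sketch shows how three of them (search, recall, and suggest) might operate over a simple keyed store of fact gathering knowledge. The prototype itself was implemented with Visual Basic 5.0 and the Haley expert system tools, so this Python rendering, with its hypothetical store and function names, is illustrative only.

# Minimal sketch of three CAASS-style system functions (search, recall,
# suggest) over a toy knowledge store. All names and data are hypothetical.

KNOWLEDGE = {
    "fact gathering activities": [
        "Establish a fact gathering plan",
        "Identify industry of organization",
        "Identify organizational goals",
    ],
}

def search(store, topic):
    """SEARCH: locate relevant knowledge entries for a topic."""
    return [key for key in store if topic in key]

def recall(store, key):
    """RECALL: retrieve previously learned knowledge by key."""
    return store.get(key, [])

def suggest(store, key):
    """SUGGEST: propose a checklist drawn from memory for the analyst."""
    return ["[ ] " + item for item in recall(store, key)]

for line in suggest(KNOWLEDGE, search(KNOWLEDGE, "fact gathering")[0]):
    print(line)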

The first column of Table 4.2 is an identifying sequence number for the table listings. The second column lists the system function/action needed to provide the analyst support for the listed fact gathering activity. The third column shows the support the system provides to the analyst for the listed fact gathering activity. Column four identifies the fact gathering activity requiring the system's support. Finally, the last column lists the knowledge required to complete the fact gathering activity.

Table 4.2. System Functions.

ID | SYSTEM FUNCTION | ANALYST SUPPORT | FACT GATHERING ACTIVITY | SYSTEM KNOWLEDGE
1 | ASSOCIATE | Associate Typical Generic Resource Management Functions with Organization Type. | Identify Resource Management Functions (such as: Cash, Personnel, Materials, Facilities, Output Products, Output Services, etc.) | Resource Management Functions
 | ASSOCIATE | Associate Business Process Inputs with Resources. | Determine the Resources Needed by this Business Process | 1. Business Process Resources. 2. Inputs.
 | ASSOCIATE | Associate Business Process Quality with Step Quality. | Establish the Information Required to Measure the Quality of a Process and its Component Process Steps | 1. Business Processes. 2. Business Process Steps. 3. Quality Measures.
 | ASSOCIATE | Associate Business Process Steps with Critical Events. | Determine the Critical Events for the Business Process and its Process Steps | 1. Business Processes. 2. Business Process Steps. 3. Critical Events.
 | ASSOCIATE | Associate Business Process Steps with Cycle Times. | Establish the Cycle Time for a Business Process and its Component Process Steps | 1. Business Processes. 2. Business Process Steps. 3. Process Step Cycle Times.
 | ASSOCIATE | Associate Manager and User Needs with Identified Information Elements. | Relate Manager and User Information Needs | Matrix Construction Techniques
 | ASSOCIATE | Associate Managers with Processes and Resources. | Identify Key Owners (Managers) for Each Process and Resource | 1. Organizational Structure. 2. Resources. 3. Processes. 4. Matrix Building Techniques.
 | ASSOCIATE | Associate Operators with Processes and Resources. | Identify Key Operators (Users) for Each Process and Resource | 1. Organizational Structure. 2. Resources. 3. Processes. 4. Matrix Building Techniques.
 | ASSOCIATE | Associate with Input for a Follow-on Process or Achievement of Organizational Goal. | Determine the Outcome or Work Product of the Business Process | 1. Business Process Inputs. 2. Business Process Outcomes (Work Products). 3. Organizational Goals.
 | ASSOCIATE | Associate with Organization's Goals. | Identify the Objective of the Business Process | 1. Organization's Goals. 2. Business Process Objectives.
 | ASSOCIATE | Associate with Typical Generic Business Processes for this Organization Type. | Identify Business Process | Business Processes
 | ASSOCIATE | Associate with Typical Generic Entity/Process Associations for This Organization Type. | Identify Organizational Entities that Perform or Operate Business Processes | 1. Organizational Structure. 2. Functional Responsibilities. 3. Business Processes. 4. Matrix Building Techniques.
 | ASSOCIATE | Associate with Typical Process Management Functions for This Organization Type. | Identify Process Management Functions (such as: Development, Manufacturing, Marketing, Sales, Service, Finance, Subprocesses, etc.) | Process Management Functions
 | ASSOCIATE | Associate Business Process Costs with Step Costs. | Determine the Costs of the Process and Included Process Steps | 1. Business Processes. 2. Business Process Steps. 3. Cost Measures.
2 | BUILD | Build Information Requirement Set. | Perform Analysis of Gathered Facts | 1. Gathered Facts. 2. Information Requirements.
3 | CLASSIFY | Classify by Standard Codes. | Identify Industry of Organization | Industry of the Organization
 | CLASSIFY | Classify According to Mintzberg's Types. | Identify Organization Structure Type | 1. Organizational Structure Types. 2. Organizational Characteristics.
4 | MODIFY | Modify Requirements Set. | Analyze Interviews and Questionnaires | 1. Analysis Techniques. 2. Requirements Set.
 | MODIFY | Modify Requirements Set. | Review Identified Information Needs with Organization's Executives | Consensus Building Techniques
 | MODIFY | Modify Potential Requirements Set. | Interview Executives about Business Problems and Management Needs, CSFs, BIAIT, End/Means, and Organizational Goals the Information Must Support | 1. BSP, CSF, and BIAIT Processes. 2. Organizational Goals.
 | MODIFY | Modify Requirements Set. | Reach a Joint Manager/User Consensus over Identified Information Needs for Organizational Support | Consensus Building Techniques
5 | RECALL | Recall Requirements Set. | 1. Analyze Interviews and Questionnaires. 2. Review Identified Information Needs with Organization's Executives. | 1. Analysis Techniques. 2. Requirements Set. 3. Consensus Building Techniques.
 | RECALL | Recall Typical Interview Questions. | Prepare a Questionnaire for Employees Not Reachable for Interviews | 1. Questionnaire Building Techniques. 2. Generic Question Types.
 | RECALL | Recall Analyst Fact Gathering Activities. | Establish a Fact Gathering Plan | Analyst Fact Gathering Activities
 | RECALL | Recall Industry Type Classification Codes. | Identify Industry of Organization | Industry of the Organization
 | RECALL | Recall Organizational Goals. | Identify Organizational Goals | Goals-Means Hierarchy
 | RECALL | Recall Organizational Types and Characteristics. | Identify Organization Structure Type | 1. Organizational Structure Types. 2. Organizational Characteristics.
 | RECALL | Recall Requirements Set. | Interview Key Employees for Each Process and Resource | 1. Organizational Structure. 2. Generic Question Types. 3. Business Processes. 4. Business Resources. 5. Interview Techniques.
6 | SEARCH | Search for Analyst Fact Gathering Activities. | Establish a Fact Gathering Plan | Analyst Fact Gathering Activities
 | SEARCH | Search for Industry Types and Standard Industrial Classification Codes. | Identify Industry of Organization | Industry of the Organization
 | SEARCH | Search for Organizational Goals for this Industry Type. | Identify Organizational Goals | Goals-Means Hierarchy
 | SEARCH | Search for Organizational Types and Characteristics. | Identify Organization Structure Type | 1. Organizational Structure Types. 2. Organizational Characteristics.
 | SEARCH | Search for Requirements Set. | Interview Key Employees for Each Process and Resource | 1. Organizational Structure. 2. Generic Question Types. 3. Business Processes. 4. Business Resources. 5. Interview Techniques.
7 | SUGGEST | Suggest Additional Resource Management Functions. | Identify Resource Management Functions (such as: Cash, Personnel, Materials, Facilities, Output Products, Output Services, etc.) | Resource Management Functions
 | SUGGEST | Suggest a List of Needed Resources. | Determine the Resources Needed by this Business Process | 1. Business Process Resources. 2. Inputs.
 | SUGGEST | Suggest a List of Quality Measures for Business Processes and Their Component Steps. | Establish the Information Required to Measure the Quality of a Process and its Component Process Steps | 1. Business Processes. 2. Business Process Steps. 3. Quality Measures.
 | SUGGEST | Suggest a List of Critical Events for Each Business Process Step. | Determine the Critical Events for the Business Process and its Process Steps | 1. Business Processes. 2. Business Process Steps. 3. Critical Events.
 | SUGGEST | Suggest a List Comparing Observed Cycle Times with Average Cycle Times for Similar Process Steps. | Establish the Cycle Time for a Business Process and its Component Process Steps | 1. Business Processes. 2. Business Process Steps. 3. Process Step Cycle Times.
 | SUGGEST | Suggest a Manager/User/Information Matrix. | Relate Manager and User Information Needs | Matrix Construction Techniques
 | SUGGEST | Suggest Manager-Process-Resource Matrix. | Identify Key Owners (Managers) for Each Process and Resource | 1. Organizational Structure. 2. Resources. 3. Processes. 4. Matrix Building Techniques.
 | SUGGEST | Suggest Operator (User)-Process-Resource Matrix. | Identify Key Operators (Users) for Each Process and Resource | 1. Organizational Structure. 2. Resources. 3. Processes. 4. Matrix Building Techniques.
 | SUGGEST | Suggest a Classification of Work Products as Intermediate or Final. | Determine the Outcome or Work Product of the Business Process | 1. Business Process Inputs. 2. Business Process Outcomes (Work Products). 3. Organizational Goals.
 | SUGGEST | Suggest a List of Additional Process Objectives. | Identify the Objective of the Business Process | 1. Organization's Goals. 2. Business Process Objectives.
 | SUGGEST | Suggest Additional Processes for this Business Type. | Identify Business Process | Business Processes
 | SUGGEST | Suggest a Revised Requirements Set. | Analyze Interviews and Questionnaires | 1. Analysis Techniques. 2. Requirements Set.
 | SUGGEST | Suggest an Entity/Process Matrix. | Identify Organizational Entities that Perform or Operate Business Processes | 1. Organizational Structure. 2. Functional Responsibilities. 3. Business Processes. 4. Matrix Building Techniques.
 | SUGGEST | Suggest Additional Possible Process Management Functions. | Identify Process Management Functions (such as: Development, Manufacturing, Marketing, Sales, Service, Finance, Subprocesses, etc.) | Process Management Functions
 | SUGGEST | Suggest as a New Case. | Review Identified Information Needs with Organization's Executives | Consensus Building Techniques
 | SUGGEST | Suggest a List of Questions for Questionnaires. | Prepare a Questionnaire for Employees Not Reachable for Interviews | 1. Questionnaire Building Techniques. 2. Generic Question Types.
 | SUGGEST | Suggest a New Requirements Set. | Reach a Joint Manager/User Consensus over Identified Information Needs for Organizational Support | Consensus Building Techniques
 | SUGGEST | Suggest Checklist of Fact Gathering Activities. | Establish a Fact Gathering Plan | Analyst Fact Gathering Activities
 | SUGGEST | Suggest Additional Organizational Goals. | Identify Organizational Goals | Goals-Means Hierarchy
 | SUGGEST | Suggest Revised Requirements Set. | Interview Key Employees for Each Process and Resource | 1. Organizational Structure. 2. Generic Question Types. 3. Business Processes. 4. Business Resources. 5. Interview Techniques.
 | SUGGEST | Suggest a Table of Process Costs and Component Step Costs. | Determine the Costs of the Process and Included Process Steps | 1. Business Processes. 2. Business Process Steps. 3. Cost Measures.

Table 4.3 depicts the seven system functions and the fact gathering activities supported. Column one is an index column. Column two lists the seven system functions and column three shows the supported fact gathering activities.

Knowledge Level Concepts and Principles

The knowledge level is characterized by "a set of propositions that describe and explain how certain types and attributes of knowledge interact with characteristics of the environment to produce some behavior" (Baldwin and Yadav, 1995, p. 853). In order for

CAASS to behave as desired, it should have the knowledge needed to solve the research problem.

Table 4.3. System Functions and Supported Activities.

ID | SYSTEM FUNCTION | SUPPORTED FACT GATHERING ACTIVITIES
1 | ASSOCIATE | 1. Identify Resource Management Functions. 2. Determine Resources Needed by Business Process. 3. Establish Information Required to Measure the Quality of a Process. 4. Determine Critical Events for a Business Process. 5. Establish a Cycle Time for a Business Process. 6. Show the Relationship between Manager and User Information Needs. 7. Identify Key Owners for Each Process and Resource. 8. Identify Key Operators for Each Process and Resource. 9. Determine the Outcome of the Business Process. 10. Identify the Objectives of the Business Process. 11. Identify the Business Process. 12. Identify the Organizational Entities that Perform or Operate the Business Process. 13. Identify the Process Management Functions. 14. Determine Process and Process Step Costs.
2 | BUILD | Analyze Gathered Facts to Construct an Information Requirements Set.
3 | CLASSIFY | 1. Identify Organization's Industry Type. 2. Identify Organization's Structure Type.
4 | MODIFY | 1. Analyze Interview and Questionnaire Results. 2. Review Identified Organization Information Needs with Executives. 3. Interview Executives. 4. Reach Joint Manager/User Consensus on Information Needs.
5 | RECALL | 1. Analyze Interview and Questionnaire Results. 2. Review Identified Information Needs with Executives. 3. Interview Key Employees. 4. Prepare Employee Questionnaires. 5. Establish a Fact Gathering Plan. 6. Identify Industry of Organization. 7. Identify Organizational Goals. 8. Identify Organizational Structure.
6 | SEARCH | 1. Establish a Fact Gathering Plan. 2. Identify Industry of the Organization. 3. Identify Organizational Goals. 4. Identify Organization's Structure Type. 5. Interview Key Employees.
7 | SUGGEST | 1. Identify Resource Management Functions. 2. Determine Resources Needed by Business Process. 3. Establish Information Required to Measure the Quality of a Process. 4. Determine Critical Events for a Business Process. 5. Establish a Cycle Time for a Business Process. 6. Show the Relationship between Manager and User Information Needs. 7. Identify Key Owners for Each Process and Resource. 8. Identify Key Operators for Each Process and Resource. 9. Determine the Outcome of the Business Process. 10. Identify the Objectives of the Business Process. 11. Identify the Business Process. 12. Identify the Organizational Entities that Perform or Operate the Business Process. 13. Identify the Process Management Functions. 14. Analyze Interview and Questionnaire Results. 15. Review Identified Information Needs with Organization's Executives. 16. Prepare an Employee Questionnaire. 17. Reach a Joint Manager/User Consensus on Information Needs. 18. Establish a Fact Gathering Plan. 19. Identify Organizational Goals. 20. Interview Key Employees. 21. Determine the Cost of the Process and Included Process Steps.

Knowledge Level Concepts

The types of objects and relations needed by an analyst support system were considered, along with the types of queries the analyst might make of the system and how the system would adapt or add new knowledge.

Consideration of the environment and domains of operation for an analyst support system led to the conclusion that seven knowledge level concepts are required to produce the desired analyst support behavior. These seven concepts are:

1. Meta-knowledge (Yadav, 1989).

2. Interface Knowledge.

3. Organization Knowledge.

4. Resource Knowledge.

5. Domain Knowledge.

6. Case-based Knowledge (Kolodner, 1993).

7. Basic Knowledge (Yadav, 1989).

The seven concepts are discussed in detail below and depicted in Figure 4.1.

Meta-knowledge. This is knowledge about knowledge. The system requires this

type of knowledge in order to know what it knows (Luger and Stubblefield, 1993). It

contains knowledge about the interrelationships between each individual form of

knowledge in the system. This is the controlling knowledge that allows the system to

acquire, store, remove and modify knowledge that supports the system's desired behavior.

Interface Knowledge. This is knowledge about how the system will interact with

the user. The display format may be a menu based interface or a graphical user interface.

Forms or templates may be used for user input and reports such as checklists, matrices,

questionnaires, and lists of requirements may provide system output.

Organization Knowledge. The descriptive characteristics of the target

organization make up this knowledge level concept. This is the knowledge that has been

input, recalled or adapted to fit the target organization. Knowledge about previously

analyzed organizations from the target organization's industry may be recalled and used to

suggest additional target organization characteristics.

[Figure 4.1. Knowledge Level Concepts of CAASS.]

Resource Knowledge. Fact gathering methodologies and system analysis techniques necessary to allow the proposed system to assist in fact gathering activities make up this knowledge level concept. Methodological knowledge for BICS, CSFs, BIAIT, BSP, and BPR is required. Additionally, knowledge of interviewing and questionnaire building techniques is needed to permit the system's desired behavior.

Domain Knowledge. Knowledge about industrial types, organizational types, and organizational structures is required to provide a context for the target organization. This knowledge concept provides the primitive knowledge needed to support the system's learning processes.

Case-based Knowledge. Here, knowledge of individual cases or examples of previously analyzed target organizations is learned and stored. This learning is achieved by accumulating new cases or examples or by assignment or reassignment of indexes.

Indexes provide a means of remembering and recalling past examples or cases at an appropriate time (Kolodner, 1993). Indexes are keywords or descriptors that can be used to recall a prior example that closely resembles the current target organization. A recalled case is then modified to fit the current target organization. If the target organization does not match a previous example organization then the CAASS should be able to employ basic knowledge.
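As a rough illustration of such indexing, the sketch below stores each case under a set of descriptor keywords and recalls the stored case sharing the most descriptors with the target organization. The descriptors and case contents are invented for illustration; a full case-based reasoner would use a much richer index vocabulary (Kolodner, 1993).

# Hypothetical keyword-index case recall; not the CAASS implementation.

cases = [
    {"indexes": {"manufacturing", "divisionalized", "large"},
     "requirements": ["order entry", "inventory control"]},
    {"indexes": {"service", "simple structure", "small"},
     "requirements": ["appointment scheduling"]},
]

def recall_best_case(target_descriptors):
    """Return the stored case whose index set best overlaps the target's."""
    return max(cases, key=lambda c: len(c["indexes"] & target_descriptors))

target = {"manufacturing", "large", "machine bureaucracy"}
best = recall_best_case(target)
# The recalled case would then be modified to fit the target organization.
print(best["requirements"])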

Basic Knowledge. This is the base knowledge used by first principle or primary reasoning. It is composed of the elemental concepts needed to use induction and generalization. It is employed when no case exists to adapt to the target organization's needs. Here, first principle reasoning is used to construct a new, original information

requirement set for a previously unanalyzed organization type (Yadav, 1989). Knowledge about generalized templates also resides here.

Knowledge Level Principles

The purpose of knowledge level principles in the URM is to relate desired behavior characteristics of a system (B) with the beliefs, goals, reasoning, learning capabilities and potential actions that comprise a knowledge level (K) interacting within an environment

(E). Sufficient knowledge is the result of using knowledge within an environment to produce a desired behavior (K × E → B). Knowledge level principles will be a series of statements contending that if knowledge is applied within an environment, then a system behavior will result (Baldwin and Yadav, 1995). A number of general knowledge propositions help shape and define the knowledge level concepts or theories. For the

CAASS, the following principles are stated:

1. If a system contains knowledge about fact gathering activities (K) within an organizational setting (E), then a fact gathering checklist can be suggested (B).

2. If a system possesses knowledge about interviewing and questionnaire building techniques (K) within a business domain (E), then it can suggest a set of questions for an analyst to use in fact gathering activities (B).

3. If a system possesses meta-knowledge, basic knowledge and/or case-based knowledge (K) within an organizational setting (E), then it can suggest an information requirements set (B).

4. If a system has knowledge about an information requirement set (K) from similar organizations (E), then the information requirement set can be modified to the target organization's needs (B).

Requirement Specification

Requirements for CAASS are stated below. The system requirements are developed from the knowledge level concepts and principles. The requirements must be satisfied in order for CAASS to achieve the research purpose and objectives stated in

Chapter I.

Requirement Set

The CAASS requirements set contains the following seven items:

1. Use knowledge about fact gathering activities to suggest a fact gathering checklist for analyst use.

2. Employ knowledge about organizations to provide the user with explanations about key organizational concepts.

3. Apply interface knowledge to provide a graphical user interface.

4. Use case-based reasoning to provide a best-fit order index of stored cases.

5. Retrieve cases from the case-base in best-fit order for analyst review.

6. Modify and adapt retrieved existing cases to meet the current fact gathering situation.

7. Apply case-based learning to add new cases to the case-base, expanding the case knowledge.

A Conceptual Model of a CAASS

The next step in the URM is to operationalize the knowledge level concepts and principles. This step includes the development of a conceptual model for the proposed system. The model includes the following components: a user system dialog manager, a meta-knowledge fact gathering coordinator, a set of target organization information requirements under construction, a library of information requirement sets for a variety of previously analyzed organizations, a resource repository of fact gathering methodologies and techniques, and an inference mechanism to apply induction and generalization when required to learn about the target organization.

In order for CAASS to exhibit the desired system behaviors, it needs a number of subsystems to perform various tasks. These subsystems are: a dialog management subsystem, a fact gathering coordinator subsystem, an organization base management subsystem, a case-base management subsystem, a domain knowledge base management subsystem, a resource base management subsystem, and a basic knowledge base management subsystem. These subsystems are depicted in Figure 4.2 below.

[Figure 4.2. CAASS Conceptual Model. The figure depicts the Dialog Management Subsystem (DMS) with its Interface Knowledge Base; the Fact Gathering Coordinator Subsystem (FGCS) with its Meta-Knowledge Base; and the Organization Base Management Subsystem (OBMS), Case Base Management Subsystem (CBMS), Resource Base Management Subsystem (RBMS), Basic Knowledge Base Management Subsystem (BKBMS), and Domain Knowledge Base Management Subsystem (DKBMS), each managing its respective knowledge base (organization, case, resource, basic, and domain).]

The Dialog Management Subsystem (DMS)

The DMS provides the interface between the user and the system. It is the means by which the system can be queried, information input, and reports displayed to the user.

Through the user/DMS interactions with the other subsystems, knowledge is accumulated.

The Fact Gathering Coordinator Subsystem (FGCS)

The FGCS is the meta-knowledge of the system that coordinates how information is stored or retrieved and whether primary or exemplar knowledge will be employed to build the target organization's information requirement set. It is via the FGCS that communications between subsystems flow.

Organization Base Management Subsystem (OBMS)

The OBMS manages the contents of the organization knowledge base. It also manages the construction of the information requirements set for the target organization.

Organizational descriptors fill template slots and are maintained as the user gathers additional characteristics of the target organization. It is in the organization base that the proposed formal information requirements set resides.

The Case Base Management Subsystem (CBMS)

The CBMS controls access, retrieval, storage, and modification of the library of exemplar cases. These cases are information requirement sets for previously analyzed organizations. The CBMS stores new examples in the case-base and searches for cases with indices similar to the target organization's descriptors. If a similar case is found, it is modified to match the known facts about the target organization and submitted to the

OBMS via the FGCS as a candidate information requirement set for the target organization. If no similar case can be located, the CBMS notifies the FGCS of that fact.

The Domain Knowledge Base Management Subsystem (DKBMS)

The DKBMS controls access, retrieval, storage, and modification of the domain knowledge base. The domain knowledge base contains knowledge about various industry types, organization types, and organizational structures. The BKBMS and OBMS communicate with the DKBMS and domain knowledge base via the FGCS.

The Basic Knowledge Base Management Subsystem (BKBMS)

The BKBMS controls access, retrieval, storage, and modification of the basic knowledge base. If the CBMS notifies the FGCS that no matching case for the target organization can be found in the case library, then the FGCS triggers the BKBMS to use the inference engine to provide information to the OBMS to construct an original information requirements set. The BKBMS retrieves generic templates for use in the OBMS. It manages the primary knowledge about industry and organization types and structures and stores and retrieves knowledge base data.

The Resource Base Management Subsystem (RBMS)

The RBMS controls access, retrieval, storage, and modification of fact gathering methodologies and of interviewing and questionnaire building techniques. It is responsible for recalling elements of methodologies such as CSFs, BSP, BPR, E/M, BICS, and

BIAIT. It suggests mixed methodological approaches to data gathering where appropriate. Additionally, interviewing and fact gathering checklists may be proposed to the user by the RBMS via the FGCS and DMS.
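Because all inter-subsystem communication flows through the FGCS, its coordination role can be pictured as a simple dispatcher. The sketch below is an interpretive illustration under assumed handler and message names; the dissertation does not prescribe a message protocol.

# Illustrative FGCS-style dispatcher: requests between subsystems are
# always routed through the coordinator. Handler names are assumed.

class FGCS:
    def __init__(self):
        self.subsystems = {}                 # subsystem name -> handler

    def register(self, name, handler):
        self.subsystems[name] = handler

    def route(self, target, request):
        """Forward a request to the named subsystem and return its reply."""
        return self.subsystems[target](request)

fgcs = FGCS()
fgcs.register("CBMS", lambda req: None)      # no matching case found
fgcs.register("BKBMS", lambda req: ["generic requirements template"])

descriptors = {"descriptors": {"retail", "small"}}
case = fgcs.route("CBMS", descriptors)
if case is None:
    # Fall back on basic (first principle) knowledge, as described above.
    case = fgcs.route("BKBMS", descriptors)
print(case)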

Summary

This chapter describes the use of the URM as a framework for developing a knowledge level conceptual model for CAASS. System behavior and the supporting knowledge level concepts and principles are discussed, and a case-based conceptual model is proposed for CAASS. The following chapter will discuss the symbol level architecture for CAASS.

CHAPTER V

SYMBOL LEVEL ARCHITECTURE FOR A CASE-BASED

ADAPTIVE ANALYST SUPPORT SYSTEM (CAASS)

Introduction

This chapter presents the symbol level architecture for CAASS. It completes steps three, four and five in Baldwin and Yadav's (1995) URM. The symbol level operationalizes the knowledge level conceptual model by describing how the knowledge will be represented in functions, databases and structures.

Symbol Level Concepts and Principles

Symbol level theories and concepts describe how characteristics of the system's architecture interact with the system's external environment to produce the intended system behaviors. Typically this behavior is reflected in the exhibition of knowledge and problem solving skills. The symbol level principles are "a noted correlation between certain types of architecture, environments, and system behavior" (Baldwin and Yadav,

1995, p. 854).

Symbol Level Concepts

In order to operationalize the knowledge level conceptual model, two knowledge representational schemes have been used. These schemes provide various ways to organize and portray knowledge level concepts and principles.

Knowledge Representational Schemes. According to Mylopoulos and Levesque

(1984), knowledge representational schemes may be divided into four categories:

1. logic representations,

2. procedural representations or rule-based formalisms (if-then propositions),

3. graph network representations, and

4. structured representations of objects, frames, and frame networks.

The knowledge representation schemes used by CAASS include procedural representations and structured representations. Within these schemes, the primary formalisms used to operationalize the conceptual model are frames, frame networks, and rules.

Frames. Groups of facts that share the same organizational structures are called frames. "A frame is analogous to a record structure in a high-level language" such as C++ or PASCAL (Giarratano and Riley, 1994, p. 82). Frames are similar to objects in that they support class inheritance (Luger and Stubblefield, 1993). A frame has a name or label and compartments or slots to hold characteristics (see Figure 5.1 below). Not all slots in a frame need to contain information. They may be empty and have information inserted or removed as necessary. CAASS stores information about industrial division types, industrial major groups, definitions, examples, analyst activities, check-list items, fact gathering suggestions, and facts gathered by the analyst in frames. The check-list frame in

Figure 5.1 contains a description of the check-list item. In this case, the item's purpose is to determine the principal business of the organization. The key concept of business appears in the concept slot.

70 Check-list

frame-type: Description

concept: Business

item-number: 1

description-1: Determine the principal business of the organization.

description-2:

description-n:

Figure 5.1. Frame and Slots.
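Since a frame is analogous to a record structure, the check-list frame of Figure 5.1 could be rendered as the record-style sketch below. The prototype represented frames with the Haley tools rather than in this form, so the rendering (in Python, for brevity) is illustrative only.

# Record-style rendering of the check-list frame shown in Figure 5.1.
# Unfilled slots (description-2 ... description-n) are simply left empty.

from dataclasses import dataclass, field

@dataclass
class ChecklistFrame:
    frame_type: str = "Description"
    concept: str = ""            # linking slot shared across a frame network
    item_number: int = 0
    descriptions: list = field(default_factory=list)

frame = ChecklistFrame(
    concept="Business",
    item_number=1,
    descriptions=["Determine the principal business of the organization."],
)
print(frame.concept, "->", frame.descriptions[0])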

Frame Network. A frame network is a collection of frames that are linked together via the information contained in slots. In CAASS the analyst check-list is composed of a number of items. A complete check-list item is represented by a network of five frames, as shown below in Figure 5.2. Each frame that makes up the frame network is linked to every other frame in the network by the common contents of the concept slots.

The check-list frame and the remaining frames for definitions, examples, analyst activities, and suggested questions are all linked by the concept business. The knowledge contained in the resource base and domain base is stored in frame networks.

[Figure 5.2. Frame Network. The figure depicts five frames (Check-list, Definitions, Examples, Analyst-activities, and Suggested-questions) linked into a single network through the common contents of their concept slots, here the concept "Business." For example, the Definitions frame defines a business as a profit-making system of man, machine, and procedures that has a mission and performs some economic activity, and the Suggested-questions frame asks whether the business primarily produces a product or delivers a service.]

•J") Rules. The rules are used by the expert system inference engine for pattern matching operations. If the frames that make up the condition or if side of a proposition are true then the frames that make up the action or then side of the proposition will be asserted. A rule for CAASS is stated as follows: If a new-case-frame has empty slot values and values are available then insert the values into the new-case-frame slots.

Symbol Level Principles

These principles relate symbol level concepts (S), knowledge level concepts (K), system behavior (B) and the environment (E). CAASS uses the following principles to relate knowledge representation, system architecture, system behavior and the environment:

1. If the system represents case knowledge in case frames, then it can recall cases about previously analyzed similar organizations.

2. If the system represents resource knowledge in a network of resource frames, then it can suggest a checklist of fact gathering activities.

3. If the system contains an expert inference engine, then meta-knowledge rules can be recalled and applied.

4. If the system represents basic knowledge in rules and basic knowledge frames, then it can suggest a generalized new case frame.

5. If the system represents interface knowledge in rules and objects, then it can provide a user interface for input and output.

6. If the system represents organizational knowledge in organizational frames, then it can recall characteristics of the target organization.

7. If the system represents domain knowledge in a network of domain frames, then it can recall types of organizations.

Symbol Level Architecture of CAASS

Based on the knowledge representation schemes and formalisms suggested in the symbol level concepts and principles, each element of the conceptual model can be mapped into a symbol level component (see Table 5.1 below). The symbol level components

(Figure 5.3) are used to construct the symbol level architecture.

Table 5.1. Mapping the Conceptual Model to Symbol Level Components.

Conceptual Model | Symbol Level Component
Dialog Management Subsystem and Interface Knowledge Base | User Interface Module and Interface Object Knowledge Base
Management Subsystems for: Fact Gathering Coordinator, Organization Base, Basic Knowledge Base, Domain Knowledge Base, and Resource Knowledge Base | System Control Module: Expert Inference Engine
Case-Base Management Subsystem | System Control Module: Case-based Reasoner and General Utilities
Meta Knowledge Base | Meta-Knowledge Rule Base
Organization Knowledge Base | Knowledge Frame Base: Organization Frame Base
Domain Knowledge Base | Knowledge Frame Base: Domain Frame Base
Resource Knowledge Base | Knowledge Frame Base: Resource Frame Base
Basic Knowledge Base | Knowledge Frame Base: Basic Frame Base
Case-base Knowledge Base | Case Knowledge Frame Base

[Figure 5.3. System Level Architecture for CAASS. User input enters through the User Interface Module, which draws on an Interface Object Knowledge Base and returns output to the user via the monitor and printer. The System Control Module comprises the Meta-Knowledge Rule Base, the Case-based Reasoner, General Utilities, and the Expert Inference Engine, and it accesses the Case Knowledge Frame Base (containing the Case Frame Base) and the Knowledge Frame Base (containing the Basic, Organization, Resource, and Domain Frame Bases).]

The components of the system level architecture provide the following functionality:

1. The user interface module provides a user interface for input and output.

2. The system control module contains general programming utilities, the case-based reasoner, and the expert inference engine. The case-based reasoner provides access to the case knowledge frame base. The case-based reasoner also provides case-based learning, classifies new information, and retrieves existing information. The inference engine provides access to the knowledge frame base.

3. The case frame base holds the existing frames for previously analyzed organizations.

4. The knowledge frame base contains the frame structure definitions and basic knowledge frames, organization knowledge frames, resource frames, and domain frames.

Case-based Reasoning and Learning

Case-based reasoning emphasizes the recalling of concrete cases or experiences over reasoning from first principles. Rather than performing composition, decomposition, and recomposition processes, the case-based reasoner manipulates cases to try to match or closely approximate a current situation (Kolodner, 1993). The case-based manipulation involves recalling stored cases, creating a subset of the cases that best fit the situation, adapting the subset to exactly fit the current situation, accepting the modified case as a solution for the current organizational situation, and storing the newly adapted case in the case-base. The case-based reasoner achieves learning by indexing and adding new cases (Kolodner, 1993). For each new situation the reasoner constructs an index that orders the cases in the case-base according to the best fit with the current situation.
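The manipulation cycle just described (recall, best-fit subsetting, adaptation, acceptance, and storage) can be summarized in the short sketch below. As with the earlier index example, the similarity measure and case structure are assumptions made for illustration, not the prototype's reasoner.

# Hypothetical outline of the case-based reasoning cycle described above.

def cbr_cycle(case_base, situation, adapt, top_n=3):
    # 1. Recall stored cases, ordered by best fit with the situation.
    ranked = sorted(case_base,
                    key=lambda c: len(c["indexes"] & situation["indexes"]),
                    reverse=True)
    best_fit = ranked[:top_n]                 # 2. Best-fitting subset.
    solution = adapt(best_fit[0], situation)  # 3. Adapt to fit exactly.
    case_base.append(solution)                # 4-5. Accept and store (learn).
    return solution

def adapt(case, situation):
    new_case = dict(case)
    new_case["indexes"] = set(situation["indexes"])   # re-index for recall
    return new_case

base = [{"indexes": {"manufacturing", "large"}, "requirements": ["inventory"]}]
result = cbr_cycle(base, {"indexes": {"manufacturing", "small"}}, adapt)
print(result["requirements"], len(base))      # adapted case stored: 2 cases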

Case Knowledge Frame Base

The case knowledge frame base contains case frames for previously analyzed organizations. The case frames all have the same structure. Each case frame has slots for:

1. frame name,

2. business name,

3. business type,

4. standard industrial classification code,

5. business development cycle stage,

6. business size,

7. business environment,

8. business objectives (four slots),

9. business strategy,

10. coordinating mechanism of the organization,

11. key part of the orgamzation,

12. focus of the organization,

13. degree of decentralization,

14. number of business units,

15. unit physical separation,

16. type of communication network,

17. where the organizational power is located,

18. type of job specialization,

19. amount of employee training provided,

20. amount of new employee indoctrination provided,

21. organizational entities (14 slots),

22. functions (six slots),

23. processes (18 slots),

24. process inputs (90 slots), and

25. process outputs (90 slots).
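As a rough rendering of this slot layout, the sketch below models the case frame as a Python data structure. The attribute names are paraphrases of the slots listed above, chosen here for illustration; the prototype actually stores these as fields of an Eclipse template.

    from dataclasses import dataclass, field

    # Hypothetical rendering of the case frame's slot layout; the prototype
    # stores these as Eclipse template fields, not Python attributes.
    @dataclass
    class CaseFrame:
        frame_name: str = ""
        business_name: str = ""
        business_type: str = ""
        sic_code: str = ""                  # standard industrial classification code
        development_stage: str = ""         # business development cycle stage
        business_size: str = ""
        business_environment: str = ""
        objectives: list = field(default_factory=lambda: [""] * 4)
        strategy: str = ""
        coordinating_mechanism: str = ""
        key_part: str = ""
        focus: str = ""
        decentralization: str = ""
        business_units: int = 0
        units_separated: bool = False
        communication_network: str = ""
        power_location: str = ""
        job_specialization: str = ""
        training: str = ""
        indoctrination: str = ""
        entities: list = field(default_factory=lambda: [""] * 14)
        functions: list = field(default_factory=lambda: [""] * 6)
        processes: list = field(default_factory=lambda: [""] * 18)
        process_inputs: list = field(default_factory=lambda: [""] * 90)
        process_outputs: list = field(default_factory=lambda: [""] * 90)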

Knowledge Frame Base

The knowledge frame base contains the structure definitions, basic frames, organizational frames, resource frames and domain frames.

Basic Frames. The knowledge needed to create new frames resides here. When facts are being gathered about a target organization, a new empty frame is created, named, and moved to the organization frames.

Organization Frames. The partially filled frames for each target organization undergoing fact gathering activities are located here. Once a frame is completed, it may be removed by the system control module and placed in the case knowledge frame base. The same frame structure is used as discussed above for the case frame.

Resource Frames. The frames in this base are organized into networks. Frames for concept definitions, examples, suggested fact gathering questions, and analyst activities are linked to form a check-list frame network (a small linking sketch follows the slot lists below). The structure for a concept definition frame contains the following six slots:

1. concept name,

2. index number,

3. and definition text (four slots).

The example frame contains five slots:

1. concept name,

2. and example text (four slots).

The analyst activities frame has two slots:

1. concept name

2. and activity name.

The suggested interview questions frame has two slots:

1. concept name

2. and question text.
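The linking just described can be pictured with a small sketch: frames that share a concept name are joined into one check-list entry. The frame contents below are invented examples, and the dictionary representation is only an analogy for the prototype's frame network.

    # Hypothetical check-list frame network keyed by concept name.
    definitions = {"mission": "The organization's fundamental purpose."}
    examples = {"mission": "'Provide community health care.'"}
    activities = {"mission": "Review organizational documents."}
    questions = {"mission": "What is the organization's stated mission?"}

    def checklist_entry(concept):
        """Follow the concept-name links to assemble one check-list item."""
        return {
            "concept": concept,
            "definition": definitions.get(concept),
            "example": examples.get(concept),
            "analyst activity": activities.get(concept),
            "interview question": questions.get(concept),
        }

    print(checklist_entry("mission"))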

Domain Frames. The domain knowledge frame base contains a frame network of industrial division and major industrial group frames. The industrial division frame has five slots:

1. division title,

2. division industrial type,

3. major group link,

4. definition link,

5. and concept name.

The industrial group frame also contains five slots for:

1. division title,

2. group number,

3. group description,

4. definition link,

5. and concept name.

Structure Design

The symbol level architecture and the system requirements express the general components and relationships for CAASS. Before CAASS can be coded and implemented, the architecture needs to be translated into a hierarchy of functions which are used to generate logic flow diagrams and guide the development of program code. A structure chart is one tool for expressing a hierarchy of functions. It provides a graphical representation of the functions and function calls (Figure 5.4). The structure chart for CAASS appears in Figure 5.5 through Figure 5.9. The CAASS structure chart reflects the functions needed to provide eleven system capabilities:

1. Suggest a generic analyst fact gathering check-list.

2. Organize gathered organizational facts into frames.

3. As details about the organization's structural configuration are gathered, customize the check-list to fit the organization's structural configuration.

4. Recall existing organizational fact frames (case-base records) for matching with the current organization frame (case).

5. Adapt all or part of the recalled case-base records to help complete the current organization frame slots.

6. Add (learn) the completed current case to the case-base as a new record.

7. Analyze the case facts to classify the organization.

8. Model the organization's functions and processes.

9. Generate the organization's information requirements.

10. Learn the current organization's requirements.

11. Generate new requirements specifications.

Figure 5.4. Structure Chart Symbols. [Legend of the structure chart notation: data elements and structures, function blocks, flags and status, sequences and iteration loops, hat (embedded) functions, dispatchers for control redirection, connectors for continuity across pages, and conditional and unconditional flows of control.]

Figure 5.5. CAASS Structure Chart. [Top level: start CAASS, load rules and initialize structures (meta-knowledge, basic knowledge, and interface knowledge bases), display the opening screen, provide an overview, and dispatch on the selected menu option.]

Figure 5.6. Begin New Case. [Assign a new case name, display the begin new case screen, get the case definition, and save the case to the organization knowledge base.]

Figure 5.7. Adapt Check-List. [Get and display the check-list, validate the organizational structural configuration against the case-base, suggest and update check-list items related to the structural configuration, and edit the case frames.]

Figure 5.8. Resume Existing Case. [Get the case name, display the case, and dispatch to adapt check-list, learn case, analyze case facts, maintain (update) case, or save and quit.]

Figure 5.9. Learn Case, Perform Case Matching, and Analyze Case Facts. [Learn a case into the case-base; find, display, validate, and adopt the best match; determine the organization's structure, model its functions, and generate, learn, display, and print the requirement specification.]

Summary

This chapter discusses the symbol-level representation of the knowledge level concepts, principles, and architecture. Symbol level concepts and principles as well as a symbol level architecture are developed and frame structures are discussed. A structure chart for CAASS is presented.

Chapter VI discusses the design and implementation of the CAASS prototype. A logical flow diagram is presented and implementation languages are discussed.

CHAPTER VI

DESIGN AND IMPLEMENTATION OF THE PROTOTYPE

Introduction

This chapter discusses the implementation of the CAASS conceptual model and the symbol level architecture. It corresponds to step six of the URM. Specifically described are the logical design and the programming languages used to construct the prototype.

Logical Flow Design

This step consists of constructing a logical flow diagram for the prototype system that accurately reflects the conceptual model, the symbol level architecture, and the CAASS structure chart. The revise check-list function (Figure 5.7) and the analyze case facts function (Figure 5.9) were not included in the CAASS prototype. The revise check-list function uses the same techniques demonstrated by the maintain case function (Figure 5.8), and the techniques for analyzing facts were demonstrated by Dalal and Yadav (1992).

The symbol level architecture contains two logical groupings of processes. The logical design is composed of the system control process group and the user interface process group. Figure 6.1 through Figure 6.4 show the logical design for the system control module and its processes. The user interface module is represented by the logical design and processes pictured in Figure 6.5 through Figure 6.8. The implementation of the logical design through programming languages is discussed next.

Figure 6.1. CAASS System Control Module Logical Diagram.

Figure 6.2. System Control Module Case Comparison Process.

Figure 6.3. System Control Module Inference Engine Processes.

Figure 6.4. System Control Module Case-Based Reasoner and General Utility Processes.

Figure 6.5. User Interface Module Check-List and Print Check-List Processes.

Figure 6.6. User Interface Module Display Info and Begin New Case Processes.

Figure 6.7. User Interface Module Build Current Case, Resume Case, and Display Current Case Processes.

Figure 6.8. User Interface Module Modify Case Slots and Display Match Processes.

Implementation Languages

The prototype's logical design is implemented in four programming languages. Acting together, these language components produce the behavior described in the CAASS conceptual model. The CAASS prototype system source code is a collection of Visual BASIC (Beginner's All-purpose Symbolic Instruction Code) code statements and Eclipse script files implementing facts, relations and template definitions, and rules. The user interface of the prototype was built using Visual BASIC 5.0, a rapid application development (RAD) tool from Microsoft Corporation.

Visual BASIC 5.0

This Microsoft product provides a quick and easy programming method for creating applications designed to run on computers using the Microsoft Windows 95 or Windows NT operating systems. Rather than writing every line of an application in original code, Visual BASIC allows the programmer to drag and drop pre-built screen objects while constructing application screens for a graphical user interface (GUI). The Visual BASIC objects are primarily forms and controls. The objects are knowledge structures containing properties or attributes, methods or actions to be performed, and a list of events or triggers which will cause particular methods to be performed. Once an object is selected, the tool provides the underlying BASIC programming code and compiles the code into an executable file. The programmer may accept the default properties, methods, and events for each object or customize them as necessary. In instances where a pre-built object does not exist or does not perform as desired, the programmer may supply the BASIC code to provide the needed functionality (Microsoft, 1997). Visual BASIC is event driven: its objects respond to triggering event code rather than executing code in sequence.
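The event-driven style described here can be illustrated outside Visual BASIC as well. The sketch below uses Python's tkinter purely as an analogy; the screen title and handler are invented, and in the real prototype such triggers are wired to Agent OCX and Eclipse rather than a print statement.

    import tkinter as tk

    # Analogy only: a control (the button) carries properties, and its
    # handler runs when the triggering event fires, not in program order.
    root = tk.Tk()
    root.title("Opening Screen")  # hypothetical screen name

    def on_begin_new_case():
        # In the prototype, this trigger-event would load the Begin New
        # Case screen and close the opening screen.
        print("Begin New Case selected")

    tk.Button(root, text="Begin New Case", command=on_begin_new_case).pack()
    root.mainloop()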

The Haley Enterprise Products

The remaining prototype components have been built using three Haley Enterprise products: Agent OCX, Eclipse, and the Easy Reasoner. Used together, these products provide an expert knowledge base system with a case-based reasoning capability. This combination has been specifically developed to support applications running in a Windows 95 or Windows NT environment.

Agent OCX. This is an Object Linking and Embedding (OLE) control extension used with popular programming tools to fully embed Artificial Intelligence (AI) capabilities in applications for the Microsoft windowing environment. Specifically, this OCX has been designed to work with Microsoft Visual BASIC, Powersoft PowerBuilder, Borland Delphi, and other RAD tools as well as conventional programming languages such as C++ (Haley Enterprise, 1996). "Agent OCX encapsulates the 32 bit version of the Eclipse inference engine" (Haley Enterprise, 1996, p. 1).

Eclipse Version 3.4C. This product is a knowledge-based expert system shell. This AI system provides the declarative rule-based capability not available in C++ or BASIC, which allows the prototype system to perform pattern matching and reasoning (Haley Enterprise, 1995). Propositional relations with their arguments are called facts. An ordered fact consists of a relation name followed by one or more arguments in a sequential list. Template facts use a frame structure: a relation name is followed by one or more named slots. Eclipse calls slots fields.
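To make the two fact styles concrete, the sketch below mimics them with ordinary Python values. This is an analogy only; real Eclipse facts are written in its own script syntax, and the relation and slot names here are invented.

    # Ordered fact: a relation name followed by arguments in a fixed sequence.
    ordered_fact = ("business-unit", "radiology", "main-campus")

    # Template fact: a relation name with named slots (Eclipse calls them fields).
    template_fact = {
        "relation": "organization",
        "business-name": "Community Hospital",  # named slot/field
        "business-type": "Health Services",     # named slot/field
        "business-size": "small",               # named slot/field
    }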

The Easy Reasoner Version 2.1C. This is an extension to the Eclipse AI shell. It provides the addition of a case-based reasoner and retrieval capability. It uses machine learning techniques such as nearest neighbor classification or decision trees to classify new information (a case) or retrieve existing information (the case-base). Working as an addition to the Eclipse shell, the Easy Reasoner provides "a complete case-based reasoning solution" (Haley Enterprise, 1995, p. CBR-1). The prototype system uses the nearest neighbor machine learning technique. When a query is made, the records are retrieved from the case fact base and ranked according to their degree of similarity with the case described in the query. An index is constructed which places the best match found in the case-base records as the first record listed in the index. The completed index is a best-to-worst fit retrieval list of the case-base records. Each record in the case-base is a template fact. The query is a partially completed template fact representing the case in progress. Each field of the retrieved record may be given a weight to distinguish more important fields from lesser fields. In this prototype, the default of equal weight for all fields was used. Next, the way these languages are used to map the conceptual model components to a prototype system will be discussed.
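The ranked retrieval with optional field weighting can be sketched as follows. The scoring function is an assumption for illustration, not the Easy Reasoner's actual algorithm; leaving every weight at 1.0 reproduces the equal-weight default the prototype used.

    # Hypothetical weighted nearest-neighbor ranking over template-fact fields.
    def weighted_similarity(query, record, weights):
        """Share of the query's (weighted) filled fields that the record matches."""
        filled = [f for f, v in query.items() if v]
        total = sum(weights.get(f, 1.0) for f in filled)
        if total == 0:
            return 0.0
        hit = sum(weights.get(f, 1.0) for f in filled if record.get(f) == query[f])
        return hit / total

    def build_retrieval_index(query, case_base, weights=None):
        """Best-to-worst fit list of case-base records for the query."""
        weights = weights or {}
        return sorted(case_base,
                      key=lambda r: weighted_similarity(query, r, weights),
                      reverse=True)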

CAASS Prototype Components

The programming languages provide code to implement the logical design. The code constructs the symbol level architecture components for CAASS:

1. an interface module,

2. a control module,

3. a case knowledge frame base, and

4. a knowledge frame base.

User Interface Module

The interface module is not a single block of code but rather the collection of forms, functions, and control codes of Visual BASIC 5.0. Visual BASIC code generates the prototype's graphical user interface and provides input and output facilities between the user, printing devices, and the prototype system. The user interface module provides event triggers and arguments to the system control module. It also receives and displays data passed from the system control module to the user's monitor and/or a printer. The user interface module consists of a hierarchy of nine Visual BASIC screens (Figure 6.9). The nine screens of the prototype system, together with their control objects, properties, trigger-events, and methods, make up the interface module. Each screen and its controls will be discussed. Common controls such as save and quit appear on most of the screens and will be discussed only once.

Figure 6.9. Screen Hierarchy. [Figure: from the opening screen, the user can reach the general introduction (information) screen or Begin New Case, which proceeds through the check-list, parameters and objectives, strategy and properties, identify entities, and functions and processes screens to case comparison; quit is available throughout.]

Opening Screen. This screen is displayed on program startup. It acts as a directory or switchboard for prototype operations. See Figure 6.10 below. There are four command button controls on this screen:

1. The Information command button causes a load-information-screen trigger-event. A method then loads the information screen and closes the opening screen.

2. The Begin New Case command button invokes a trigger-event to load the Begin New Case screen. A method loads the Begin New Case screen and closes the opening screen.

3. The Resume Build Case command button triggers a method to load the last screen visited by the user during the last run of the CAASS. Facts and rules retrieve the last screen visited from the saved new case template.

4. The Quit command button triggers a dialog box that asks the user to save or exit. If exit is indicated, the program ceases execution, all screens are closed, and the user is returned to the Windows desktop. If save is chosen, then the save template rule is activated in Eclipse via Agent OCX, and the current case template is updated and saved to disk.

Figure 6.10. Opening Screen.

Information Screen. The purpose of this screen is to provide the user with an overview of the prototype system (see Figure 6.11). The information screen is invoked by pressing the information screen command button on the opening screen. This screen contains two command buttons:

1. Build New Case command button.

2. Quit command button.

Both of these command buttons function as described above under the opening screen. The information screen is closed instead of the opening screen.

Figure 6.11. Information Screen.

Begin New Case Screen. This screen allows the user to assign a file name to the new case template (see Figure 6.12 below). All new case information is stored and retrieved in this named Eclipse template. Pressing the begin new case command button on either the opening or information screen causes the begin new case screen to open and the previous screen to close. This screen contains the following controls:

1. A Text Box control allows the user to insert a desired save file name.

2. A Check List command button triggers a method that opens the check-list screen and closes the begin new case screen. A method is also invoked to create the save file.

3. A Skip Check List command button invokes the opening of the parameters and objectives screen and the closing of the begin new case screen. A second method creates the save file.

4. Quit command button. See the opening screen description.

5. The Key Concept Definitions menu selection on the case progress panel causes a drop-down list to open and allows the user to select a key term for clarification. The selection of a key term causes Visual BASIC to pass a trigger-event to Agent OCX. Agent OCX provides values to Eclipse relations to specify rule conditions and retrieve values from the knowledge base template network.

The left panel of this screen is a hierarchical tree view progress panel. It reflects the user's progress as various screens and system activities are completed. As each step is completed, the oval in front of the step name is filled in. The user's current program step is highlighted. Clicking a tree node provides a rapid method for the user to move backward and forward through the program screens.

Figure 6.12. Begin New Case Screen.

Check-List Screen. This screen permits the user to inspect and print the suggested fact gathering check-list (see Figure 6.13). Pressing the check list command button on the begin new case screen invokes the check-list screen and closes the begin new case screen. As this screen opens, Visual BASIC passes a trigger-event to Agent OCX. A network of templates in the knowledge base provides the check-list item, a clarification statement, some examples, and suggested analyst activities and interview questions. This information is loaded into Visual BASIC text boxes. The screen contains five command buttons as follows:

1. The Print Check-List command button causes the information from the knowledge base templates to be passed via Agent OCX and Visual BASIC to the Windows operating system print function. The complete check-list is printed.

2. The Previous Item command button allows the user to scroll back toward the beginning of the check-list one item at a time.

3. The Next Item command button functions the same as the previous item command button but allows the user to move toward the end of the check-list items.

4. The Build Current Case command button causes the check-list screen to close and the parameters and objectives screen to open. Additionally, the new case template is opened for input. A trigger-event from Visual BASIC causes Agent OCX to open the Eclipse new case template.

5. Quit.

Figure 6.13. Check-List Screen.

Parameters and Objectives Screen. The parameters and objectives screen allows the user to enter organizational facts into the new case template (see Figure 6.14 below). It is invoked by pressing the build current case command button on the check-list screen. This screen and the remaining screens in the screen hierarchy (Figure 6.9) are the data entry screens. There are three command buttons, seven combination box controls, and 14 radio button controls located on this screen:

1. The Save Data command button triggers a save event causing values entered in the Visual BASIC combination boxes or selected by the radio buttons to be passed via Agent OCX to the Eclipse new case template.

2. Combination Box controls allow the user to select facts from pull-down lists or type in unique organizational facts. The contents of each box are maintained in variables until a save data event occurs.

3. Radio Button controls permit the user to choose from available selections. The selections are stored in variables until a save event occurs.

4. The Next Page command button provides the user with a way to close this screen and open the next screen. It is an interface module trigger-event and function. Initiation causes a save data yes/no dialog box to display. Once a choice is made, the parameters and objectives screen closes and the strategy and properties screen opens.

5. Quit.

Figure 6.14. Parameters and Objectives Screen.

Strategy and Properties Screen. The user can enter the organization's strategy and additional properties on this screen, the second of the data entry screens (see Figure 6.15). It provides access to the slots/fields of the new case template. Facts may be entered, deleted, and modified. It has four command buttons and 12 combination box controls:

1. The Save Data command button functions as previously described.

2. The Next Page command button functions as previously described.

3. Combination box controls function as previously described.

4. Quit command button.

5. Clicking the Previous Page command button triggers a Visual BASIC event that causes the save yes/no dialog box to display. After a choice is selected, the parameters and objectives screen opens and is populated with existing new case facts. The strategy and properties screen is closed.

Figure 6.15. Strategy and Properties Screen.

Identify Entities Screen. This screen provides 14 text boxes for the user to identify organizational entities. Additionally, the screen contains four command buttons (see Figure 6.16 below):

1. The Save Entities control initiates the same trigger-event and methods as described for the previous save data command buttons.

2. Previous Page command button.

3. Next Page command button.

4. Combination boxes.

5. Quit.

Figure 6.16. Identify Entities Screen.

Functions and Processes Screen. This screen allows the user to enter six functions with three processes each into the new case template. Each process is allowed five inputs and outputs (see Figure 6.17). The screen contains eight command buttons and 12 text boxes. Facts are typed directly into the text boxes:

1. Text box controls are for direct data entry.

2. The Previous Function command button allows the user to move back to the previous function. The text boxes will be reloaded with the previous contents.

3. The Next Function command button permits moving on to the next stored function or empty text boxes. If data was previously entered, then the text boxes will be populated with the template data.

4. The Previous Process command button works exactly the same as the previous function command button except it loads the previous process for the selected function.

5. The Next Process command button provides the same functionality as the next function command button except the next process for the selected function is loaded or blank text boxes are provided for data entry.

6. The Start Case Base command button triggers an event that causes Agent OCX, Eclipse, and the Easy Reasoner to start and open the case-base. The case comparison screen is opened and the functions and processes screen is closed. The data currently entered in the new case template forms the query for the case-base.

7. Save Data.

8. Previous Page.

9. Quit.

Figure 6.17. Functions and Processes Screen.

Case Comparison Screen. This screen is used to compare the contents of the new case template with the retrieved cases from the case-base (see Figure 6.18). It is invoked by pressing the start case-base command button on the functions and processes screen. It contains 24 text box controls, 12 check box controls, and nine command buttons:

1. Text box controls are primarily for the display of template field values. The text boxes will accept additions, modifications, or deletions of the displayed template field contents. The left side text boxes contain the field contents of the new case template. The right side text boxes contain the field contents of the retrieved case-base record. The right side will remain blank until the suggest case match command button is pressed.

2. The check boxes cause the prototype system to modify the contents of the corresponding new case template field with the corresponding contents of the displayed case-base record.

3. The Print Displayed New Case Values command button causes the field values of the new case template to be sent to the default printer. Only the displayed contents will print.

4. The More Fields command button allows additional template fields to be displayed for viewing, modification, or printing.

5. Previous Fields allows the user to return to previously viewed or printed new case template fields.

6. The Add New Case Values to Case-base & Exit command button permits the acceptance of the new case template values as representative of the organizational facts for the current situation. Once pressed, the interface module and control module cause the new case field values to be added as a new record to the case-base. The program then performs a quit operation. The next time the prototype system is started, the expanded case-base will be available for retrievals and comparisons.

7. Save New Case Values.

8. The Suggest Case Match command button loads the case-base best match. The control module and the case-based reasoner create an index of the cases in the case-base best matching the contents of the new case template. The first case in the index is then loaded for display and comparison. The interface module then hides the save new case values, add new case values, and suggest case command buttons and displays the show next case and cancel matching command buttons (see Figure 6.19).

9. The Show Next Case command button, via the interface and control modules, selects the next best fit existing case-base record from the case-base index and copies that record from the case-base in long-term memory to short-term memory for display.

10. The Cancel Matching command button causes the interface module to redisplay the save new case values button and destroy the case-base index.

11. Quit.

Figure 6.18. Case Comparison Screen.

Figure 6.19. Case Comparison Screen with Case Suggested.

System Control Module

The system control module is an amalgamation of the functions of the Windows NT operating system, Agent OCX, Eclipse, and the Easy Reasoner. Together, the programming languages form the code that provides access to the case knowledge frame base and the knowledge frame base. The system control module controls the flow of information between system components. The parts of the system control module are the case-base reasoner, general utilities, and the inference engine.

Case-base Reasoner. The Easy Reasoner embeds its commands and functions in the Eclipse shell and provides the case-base reasoner functionality. The construction of indexes, case access, case retrieval, and case storage is done through the case-base reasoner. The case knowledge frame base responds to input data and retrieval requests via the case-base reasoner.

General Utilities. This component handles routine reads from disk drives, data inputs, and writes to disk drives. It is the collection of routine programming input-output functions from the language components.

Inference Engine. The inference engine decides which rules to execute and in which order. The Eclipse engine supports both forward and backward rule chaining. It constructs new templates from structure definitions and stores data in the template fields (Haley Enterprise, 1995). It provides control over the knowledge frame base.
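Forward chaining, one of the two rule-firing strategies mentioned, can be sketched in a few lines. The rules and facts below are invented for illustration, and Eclipse's actual engine uses an optimized production-rule mechanism rather than this naive loop.

    # Naive forward chaining: fire any rule whose conditions hold, adding its
    # conclusion to working memory, until no rule adds anything new.
    rules = [
        ({"business-type: health services"}, "SIC division: services"),
        ({"SIC division: services"}, "load service-industry check-list"),
    ]
    facts = {"business-type: health services"}

    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True

    print(facts)  # both derived conclusions are now present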

Knowledge Frame Base

Eclipse is the language that provides the organization, access, storage, and retrieval functionality for the knowledge frame base. It responds to the control function calls and input data. All template structures and the values stored in the template fields are programmed in the Eclipse language and maintained in the knowledge frame base.

Case Knowledge Frame Base

This is the collection of stored frames that make up the case-base. Each frame is stored as a record in a database. The database is in the X-Base relational format.

Executables

The programs exist as a series of compiled executable binary files. This increases the performance of the prototype by 300 percent. It also reduces the program size and enhances portability across computing platforms. With slight modifications to the Windows registry, the prototype runs equally well on Windows 95 or Windows NT platforms.

Summary

In this chapter, the design and implementation of the prototype system have been discussed. The next chapter outlines the evaluation and testing of the prototype. The purpose of the testing is to show the prototype system's feasibility.

CHAPTER VII

VALIDATION AND VERIFICATION

Introduction

The purpose of this chapter is to show that the research purpose and objectives have been achieved. The validation and verification of the conceptual model and prototype are step eight of the Baldwin and Yadav URM. A validation framework was developed and discussed in Chapter III. The framework provided a methodology for performing the validation of the conceptual model and prototype required by the URM. The framework is shown again in Figure 7.1.

Conceptual Model Verification

The step-by-step activities shown on the left side of the framework constitute the verification steps for the conceptual model and the prototype. The major framework steps of proposing a system behavior, proposing the system capabilities needed to produce the behaviors, and developing the knowledge level architecture, together with the attendant verification steps, constitute the conceptual model verification. Each verification step was performed as a major step was completed. Each successive major step was verified to provide the behavior or capabilities needed to satisfy the requirements of the preceding step. The production of a prototype system is considered by many (Cohen and Howe, 1988; Baldwin and Yadav, 1995) to be the verification of the conceptual model. Prototype verification is addressed next.

Figure 7.1. A Validation Framework. [Figure: the framework flows from identifying a need for fact gathering support, through determining support requirements, proposing the system's behavior and capabilities, proposing the knowledge level conceptual model and the symbol level architecture, to creating, implementing, testing, and evaluating the prototype and analyzing the results, with a verification activity attached to each step.]

Prototype Verification

The objective of prototype verification is to ensure that the software used to construct the prototype functions correctly and produces expected results. Since commercial products are used to develop the prototype system, it is assumed that each software producer has verified its product (O'Keefe and O'Leary, 1993). The verification of the combined software products that compose the prototype and the way they work together was done by applying known test data and comparing the prototype system output with the expected results.

Specifically, verification concentrated in five areas (a minimal test sketch follows the list):

1. the user interface and the prototype's ability to accept user input,

2. the correct insertion of input values into template slots,

3. the prototype's ability to learn a new case and expand the contents of the case-base,

4. the ability of the prototype to correctly find the best matching case in the case-base for a query based on the contents of the new case template,

5. and finally, the ability to retrieve, display, and modify the new case by using data from the retrieved case-base records.
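A sketch of the known-data/expected-result style of check used throughout this verification might look like the following. The case structure and assertions are invented for illustration; the actual verification inspected Eclipse templates and the case-base with a text editor rather than running unit tests.

    import unittest

    class CaseBaseRoundTrip(unittest.TestCase):
        """Insert known test data, then confirm storage and retrieval match."""

        def test_learn_then_retrieve(self):
            case_base = []
            new_case = {"business-name": "Test Org", "business-type": "Retail"}
            case_base.append(new_case)           # the add-case trigger-event
            self.assertEqual(len(case_base), 1)  # case-base expanded by one record
            retrieved = case_base[0]             # best (and only) match
            self.assertEqual(retrieved, new_case)

    if __name__ == "__main__":
        unittest.main()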

Verifying the Input Capability and User Interface

Test data were inserted into the combination boxes and text boxes of the user interface screens. Marking of check boxes and radio button selections were also tested. Visual BASIC variables were inspected to ensure the above actions had correctly inserted values into the variables.

Verifying the Knowledge Base

Test data stored in Visual BASIC variables were used as relational arguments by Agent OCX and inserted into Eclipse templates. The contents of the various knowledge base templates were inspected using a text editor. The contents of each template slot were verified, and the ability of each template structure to send and receive information back and forth through the control module was confirmed. Rules were fired and the results passed to Agent OCX and correctly inserted in Visual BASIC text and combination boxes. The output from each rule was inspected to ensure accuracy and correctness.

Verifying Case-based Learning

The case-base contents were inspected using a text editor. Test data were then loaded into the new case template. An add-case trigger-event was initiated and the contents of the case-base again inspected. The additional record in the case-base was examined for complete and accurate correspondence to the test data contained in the new case template. A number of additional new cases were created and added to the case-base to provide a minimum of eight case records (cases) in the case-base.

Verifying Case-based Retrieval

A partially completed new case template was used to initiate case matching. Each existing case was retrieved from the case-base. Using a text editor, each retrieved case was manually matched against the contents of the new case template. Best-to-worst fit order of retrieval was verified.

Verifying Template Modification

A partial new case was used to initiate case matching. The retrieved case was compared with the new case in the prototype comparison screen, and modification check boxes were selected. Case-base values were copied into the new case template for each check box selected. The new case was saved. A text editor was used to verify the modifications to the new case template contents.

Conceptual Model Validation

According to the URM, construction of a working prototype system provides the validation for the conceptual model. The prototype, although limited in scope, is complete enough to provide validation of the conceptual model. It incorporates a working case-based reasoner and generates a check-list of fact gathering activities.

Prototype Validation

The detailed discussion of the validation and testing of the prototype system is presented in the next chapter. The prototype was evaluated both subjectively and objectively. Prototype validation comprises steps seven and eight of the URM and step nine of the CAASS validation framework (Figure 7.1).

Summary

A framework for validating CAASS was presented in Chapter III and shown again in this chapter as Figure 7.1. The CAASS validation framework was used to guide the conceptual and prototype validation and verification of the CAASS. A reiterative top-down verification of the framework activities was carried out. Fact gathering support requirements were verified to ensure they fulfilled the research purpose and objectives. Next, the system behaviors were matched with the requirements to guarantee all elements of the requirement set were satisfied. The system capabilities were examined to make sure they realized the needed system behaviors. The knowledge level conceptual model was inspected to verify that it incorporated and satisfied the required system capabilities. These steps completed the conceptual validation of the CAASS.

The prototype validation included performing the five remaining steps of the validation framework. The symbol level architecture was reviewed to ensure it correctly operationalized the knowledge level conceptual model. The prototype design was examined to verify that it was consistent with the symbol level architecture. Finally, the prototype code was implemented and verified to fulfill the design elements. The last two steps in the validation framework, prototype test and evaluation and the analysis of results, are discussed in Chapters VIII and IX.

CHAPTER VIII

EXPERIMENTAL DESIGN

Introduction

This chapter discusses the experiment and survey tool used to test and evaluate the prototype system. This is step nine of the validation framework and further satisfies step eight of the Baldwin and Yadav URM. The design of an objective experiment to test the prototype and the construction of a subjective end-user survey are discussed.

Experiment Design

The prototype was built with a proprietary commercial knowledge base shell from Haley Enterprise, Inc. "The use of an expert system shell can reduce the design and implementation time of a program considerably" (Luger and Stubblefield, 1993, p. 312). Expert system shells contain all the components of the expert system except the knowledge base, case-base, and case specific data (Luger and Stubblefield, 1993). The Eclipse expert knowledge base shell was composed of an inference engine, knowledge base editor, user-interface control extension, and a case-based reasoner (Haley Enterprise, 1996). Once the shell was installed on a computer, it would generate a computer identification number. In order to access the shell for further system development, an enabling authorization code had to be obtained from Haley Enterprise, Inc. The authorization code served to prevent software pirating by enabling the shell operation only on the specific computer with the correctly paired identification number and authorization code. The need to specifically install and activate the prototype on each individual machine precluded off-site distribution of the software.

A pilot study using students from the graduate systems analysis class was conducted. The pilot study group pre-tested the requirements determination projects. The projects constituted the pre-test and post-test for the experiment. Experience with this graduate group helped establish the ninety-minute period allowed for completion of each test, refine the questions in the CAASS user survey, and clarify the detailed instructions for the use of the prototype system.

Group Selection

In order to provide an adequate number of test subjects on site, Texas Tech University undergraduate systems analysis students were used as subjects for the experiment. Seventy-two students in the undergraduate systems analysis course participated in the experimental evaluation of the prototype system. All students had received instruction in requirements determination in their systems analysis course before participating in the prototype evaluation experiment. Thirty-six subjects were randomly selected from the population of 72 undergraduate systems analysis students to form a control group. The control group performed two requirements determination projects manually. The control group's performance established a baseline level of performance for the pre-test and post-test. A treatment group was formed from the remaining 36 students to use the prototype tool and provide a performance comparison with the baseline control group. The treatment group performed the first project manually and used the prototype system to perform the second project. Time demands from class projects, homework, and work schedules restricted the available prototype testing participation time for each student to three hours. Students were allowed ninety minutes to complete each test, as established in the pilot study. The time constraints on student availability prohibited giving hands-on instruction sessions for prototype tool usage. As an alternative to hands-on instruction sessions, a set of detailed instructions for prototype operation was provided to each prototype user.

The experimental design layout was a two factor crossed design with repeated measures over the pre-post test factor. This particular design was chosen because "it is the most frequently used design in social sciences research" (Cook and Campbell, 1979, p. 103). It can be used to evaluate equal and non-equal sized groups (Cook and Campbell, 1979; Keppel, 1991; Montgomery, 1991). A test factor with two levels and a treatment factor with two levels made up the design layout.

Testing Instrument

The test instrument contained a pre-test level and a post-test level. Each level or case was selected from a workbook by George and Annette Easton (1996). The case workbook is a companion book to the students' systems analysis textbook (Hoffer, George, and Valacich, 1999). The pre-test consisted of a case about Media Technology Services (MTA) at a community college. The post-test was a case about Homeowners of America, a management services firm. The two cases were selected so they would be similar and provide the students with an opportunity to improve their performance by repeating requirements gathering activities on similar cases. Before the experiment was conducted, each case was analyzed by the researcher, and an organizational fact list was prepared for it (see Appendix). The fact lists were validated by comparing the pilot study and experiment student responses to the fact lists. The percentage of reasonable student responses that matched the pre-test fact list was 81.42 percent. The post-test responses exhibited an 88.42 percent match with the fact list. The facts each student identified while performing the pre-test and post-test tasks were compared to the pre-experiment fact lists. The ratio of the number of facts identified by a student on a test to the number of facts in the appropriate pre-experiment fact list produced a performance percentage score for that student. The difference between the control group's pre-test and post-test scores reflected learning from repeating similar tasks. The difference between the tool group's pre-test and post-test scores reflected the learning effect plus the treatment or tool effect.
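Stated as a formula (the notation here is added for clarity and does not appear in the original instrument):

$$\text{score} = \frac{\text{number of facts the student identified on the test}}{\text{number of facts in the corresponding pre-experiment fact list}} \times 100\%$$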

Experiment Layout and Model

The two factor crossed experimental design with repeated measures over the test factor produced the layout below (Figure 8.1).

TREATMENT                        TEST
                        PRE-TEST (1)      POST-TEST (2)
CONTROL (1)   Subj1     score(1,1,1)      score(1,2,1)
              Subj2     score(1,1,2)      score(1,2,2)
              ...       ...               ...
              Subj36    score(1,1,36)     score(1,2,36)
TOOL (2)      Subj37    score(2,1,37)     score(2,2,37)
              Subj38    score(2,1,38)     score(2,2,38)
              ...       ...               ...
              Subj72    score(2,1,72)     score(2,2,72)

Figure 8.1. Experiment Layout.

The statistical model for this experimental design is

$$y_{ijk} = \mu + \tau_i + \beta_j + (\tau\beta)_{ij} + \varepsilon_{ijk}, \qquad i = 1, 2;\ j = 1, 2;\ k = 1, \ldots, 36,$$

where $y_{ijk}$ is the observed score for the $i$th treatment, $j$th test, and $k$th subject; $\mu$ is the overall mean; $\tau_i$ is the effect for the $i$th level of the treatment factor; $\beta_j$ is the effect for the $j$th level of the test factor; $(\tau\beta)_{ij}$ is an interaction term for interaction between the test and treatment factors; and $\varepsilon_{ijk}$ is a random error term (Montgomery, 1991). The Analysis of Variance (ANOVA) with interaction was chosen specifically because an interaction was expected.
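A sketch of this analysis in modern tooling follows. The data frame contents are placeholders, and, as a simplification, the fit below treats each subject's two scores as independent rather than modeling the repeated measure explicitly (a mixed model would capture the within-subject correlation).

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    from statsmodels.formula.api import ols

    rng = np.random.default_rng(0)

    # Placeholder scores: 36 subjects per treatment, each tested twice.
    df = pd.DataFrame({
        "treatment": np.repeat(["control", "tool"], 72),
        "test": np.tile(np.repeat(["pre", "post"], 36), 2),
        "score": np.concatenate([
            rng.normal(60, 8, 36), rng.normal(70, 8, 36),  # control: pre, post
            rng.normal(60, 8, 36), rng.normal(82, 8, 36),  # tool: pre, post
        ]),
    })

    # Two-factor ANOVA with the treatment-by-test interaction term.
    model = ols("score ~ C(treatment) * C(test)", data=df).fit()
    print(sm.stats.anova_lm(model, typ=2))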

Both the control group and the treatment group performed the pre-test manually. The results from both groups were expected to be very close, with no significant difference on the pre-test and a significant difference on the post-test.

Hypothesis Testing

The experiment was designed to allow the posing of hypotheses about the component treatments and tests. The post-test score for the control group was expected to be higher than the pre-test score, reflecting the learning from repeated similar tasks. The post-test score for the treatment (tool) group was expected to be the highest, reflecting the learning from repeated similar tasks plus the effect of the tool on performance. Some interaction was expected between the treatments, reflecting the tool effect and the learning effect between pre-test and post-test. In order to test these premises, three hypotheses were formed and expressed in the conventional null hypothesis and alternate hypothesis sets.

The first hypothesis set reflected the premise that an interaction exists between the mean treatment effects and the mean learning effects. These hypotheses are stated:

H1o: There will be no interactions between the treatments and the tests.

H1a: The post-test mean score will be higher than the pre-test mean score, and the tool treatment mean post-test score will be higher than the control treatment mean post-test score.

The rejection of the null hypothesis would indicate the presence of an interaction between the test and treatment factor effects.

The second hypothesis pair concerns the difference between tests. If a difference exists between the pre-test and post-test results, it will reflect the test effects due to the use of different cases for the pre-test and post-test. This hypothesis is based on the expectation of a positive learning effect. This pair of hypotheses is expressed as:

H2o: There will be no difference between the pre-test mean score and the post-test mean score.

H2a: The mean score on the post-test will be greater than the mean score on the pre-test.

Again, a significant difference between test effects would result in the rejection of the H2o hypothesis.

Finally, the third pair of hypotheses addressed a difference between treatments (control versus tool). The tool was expected to enhance the tool group's performance. The tool group's mean score for the post-test was expected to be greater than the post-test mean score for the control group.

H3o: The tool group's post-test mean score will be less than or equal to the control group's post-test mean score.

H3a: The tool group's post-test mean score will be greater than the control group's post-test mean score.

If the experimental results indicate a significant difference between the treatment effects, then the H3o hypothesis must be rejected.

User Survey

In addition to the quantitative experiment, a measure of the user analyst's perception of the value or utility of the prototype tool was sought. User feedback on the content, accuracy, ease of use, and format of the prototype system was desired. No comprehensive instrument for computing success currently exists; however, the Doll and Torkzadeh End-User Computing Satisfaction instrument presents accepted measures of user satisfaction (McHaney and Cronan, 1998). Other user surveys focus on all systems and services of an information systems department, while the Doll and Torkzadeh instrument focuses on the individual application (Goodhue, 1998). Doll and Torkzadeh (1988) contended that end-user satisfaction is a surrogate for utility and can be used to measure the satisfaction or utility that the prototype system provides.

Doll and Torkzadeh used twelve closed-ended and three open-ended questions in their questionnaire. The closed-ended questions were evaluated by using a five-point Likert-type response scale, which helped minimize the fatigue of the responder. The open-ended questions were used as global measures of overall user satisfaction.

The CAASS user survey is a modification of Doll and Torkzadeh's pre-tested and validated instrument. In order to reduce the time needed by the subjects to complete the survey, ten closed-ended questions and two open-ended questions were included in the CAASS instrument (Figure 8.2). The first closed-ended question was original. The remaining closed-ended questions were selected from the questions posed by Doll and Torkzadeh. The questions were designed to provide user feedback on four aspects of the prototype system: content, accuracy, format, and ease of use. Closed-ended questions one, two, and ten measure user satisfaction with the format of the prototype tool. Questions six and nine are focused on the user assessment of the tool's accuracy. The fourth and fifth questions address the users' impressions about prototype content, and questions three, seven, and eight focus on the tool's ease of use. The first open-ended question was included to verify that the users had completed the last three instruction steps while using the tool. The second open-ended question was from the Doll and Torkzadeh instrument and served as a global measure of satisfaction.

CAASS USER SURVEY

This questionnaire is an evaluation instrument for the Case-based Adaptive Analyst Support System (CAASS). In the questions below CAASS will be referred to as "the system" or "the application".

Please check one box for each of the ten questions below. Questions 1 through 9 offer the choices Almost never / Some of the time / About half of the time / Most of the time / Almost always; question 10 offers Nonexistent / Poor / Fair / Good / Excellent.

1. Do you prefer the system to the manual method?

2. Do you think the output is presented in a useable format?

3. Is the system difficult to operate?

4. Does the system provide sufficient information?

5. Do you find the output relevant?

6. Is the system successful?

7. Is the system easy to use?

8. Is the system user friendly?

9. Do you think the system is reliable?

10. Overall, how would you rate your satisfaction with this application?

11. What evidence did you see that would indicate the system adapted or learned with repeated use?

12. What aspects of the application, if any, were you most satisfied with and why?

Figure 8.2. CAASS User Survey Questionnaire.

Summary

The design of the experiment and the construction of the CAASS user survey have been discussed. The reasons for choosing each instrument were given, the testing procedures described, and the selection process for the control and treatment groups discussed. Chapter IX describes the experiment results and conclusions.

CHAPTER IX

RESULTS AND CONCLUSIONS

Introduction

The purpose of this chapter is to discuss the results of the experiment and user survey. The experiment used the control group and tool group as treatment levels and the pre-test and post-test as test levels in a two-factor experimental design. An analysis of the results is presented and conclusions are reached.

Experimental Results

Analysis of Variance (ANOVA)

The experimental results were analyzed initially by using an ANOVA with repeated measures over the test factor. There was a significant interaction that caused rejection of the no-interaction hypothesis H1o. This rejection was based on the treatment by test interaction F-test result: F(1,70) = 14.54; p = 0.0003.
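As an illustration only, the following sketch shows how such a two-factor design with repeated measures over the test factor could be analyzed with current software. Python and the pingouin package stand in for whatever statistical package was actually used, and the data frame, column names, and file name are hypothetical:

    import pandas as pd
    import pingouin as pg

    # Hypothetical long-format data: one row per subject per test occasion,
    # with columns 'subject' (id), 'group' ('control' or 'tool'),
    # 'test' ('pre' or 'post'), and 'score'.
    df = pd.read_csv("caass_scores.csv")  # hypothetical file name

    # Mixed ANOVA: 'group' is the between-subjects treatment factor and
    # 'test' is the within-subjects (repeated-measures) factor.
    aov = pg.mixed_anova(data=df, dv="score", within="test",
                         subject="subject", between="group")
    print(aov)  # the 'Interaction' row carries the test of H1o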

Since an interaction occurred, H2 and H3 each must be decomposed and each part evaluated in order to understand the interaction effect. Hypothesis H2 reflected the expectation that the control group's post-test scores would show a positive learning effect. Additionally, the post-test scores for the tool group were expected to show a learning effect plus a positive tool effect. Hypothesis H2 was separated into H2a and H2b to investigate these effects.

H2ao: There will be no difference between the control group's pre-test mean score and the control group's post-test mean score.

H2ai: The control group's post-test mean score will be greater than the control group's pre-test mean score.

The H2ao hypothesis was rejected based on a paired Student's t-test result of t(0.05, 70) = 4.159388; p = 0.0001.

As expected, the control group's post-test mean score of 48.35 was a significant improvement over the control group's pre-test mean score of 35.89. This increase in the control group's mean score reflected a positive learning effect. The control group's repetition of similar tasks in performing the pre-test and post-test had produced a rise in performance level.
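A minimal sketch of this one-sided paired comparison, using hypothetical placeholder scores rather than the study's data, might look like this in Python with SciPy:

    import numpy as np
    from scipy import stats

    # Hypothetical pre-test and post-test scores, paired by subject.
    pre = np.array([32.0, 38.5, 35.0, 41.0, 30.5, 36.0])
    post = np.array([45.0, 50.5, 47.0, 52.0, 44.5, 51.0])

    # One-sided paired t-test of H2a: post-test mean > pre-test mean.
    t_stat, p_value = stats.ttest_rel(post, pre, alternative="greater")
    print(f"t = {t_stat:.3f}, p = {p_value:.4f}")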

Hypothesis H2b was expressed as:

H2bo: The tool group's post-test mean score will be less than or equal to the tool group's pre-test mean score.

H2bi: The tool group's post-test mean score will be greater than the tool group's pre-test mean score.

The H2bo hypothesis could not be rejected, with t(0.05, 70) = -1.23382; p = 0.7786.

The failure to reject the H2bo premise indicated that there was no significant increase between the tool group's pre-test mean score of 36.52 and the post-test mean score of 32.82. In fact, a decrease in the mean test score was observed.

The expected post-test rise in the tool group's mean score, reflecting the positive learning effect from repeating similar tasks and the additional positive effect from tool use, was not realized. The slight decrease in the tool group's post-test mean score possibly reflects the negative impact of the lack of hands-on training in tool use. The use of the tool apparently nullified the positive learning effect from repeating similar tasks.

Hypothesis H3 became the hypothesis pair H3a and H3b. Hypothesis H3a serves to further investigate tool versus control on the post-test. H3b provides a consistency check on the pre-test performance of the tool and control groups.

H3ao: The tool group's mean score on the post-test will be less than or equal to the control group's mean score on the post-test.

H3ai: The tool group's post-test mean score will be greater than the control group's post-test mean score.

The H3ao hypothesis could not be rejected based on t(0.05, 70) = -5.18185; p = 0.9999.

Hypothesis H3a tested a comparison between the tool group's post-test performance and the control group's baseline performance. The failure to reject H3ao implies that the tool group's post-test performance was less than the baseline mean scores established by the control group. The tool group's post-test mean score of 32.82 was considerably below the control group's baseline mean score of 48.35. This result was consistent with the failure to reject the hypothesis of little or no increase in the tool group's post-test mean score.
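The corresponding one-sided comparison of the two independent groups could be sketched as follows; the score arrays are again hypothetical placeholders, not the study's data:

    import numpy as np
    from scipy import stats

    # Hypothetical post-test scores for the two independent groups.
    tool_post = np.array([30.0, 35.5, 31.0, 34.0, 32.5])
    control_post = np.array([47.0, 50.5, 46.0, 49.0, 51.5])

    # One-sided two-sample t-test of H3a: tool mean > control mean.
    t_stat, p_value = stats.ttest_ind(tool_post, control_post,
                                      alternative="greater")
    print(f"t = {t_stat:.3f}, p = {p_value:.4f}")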

The H3b hypothesis pair is stated:

H3bo: The tool group's pre-test mean score will be equal to the control group's pre-test mean score.

H3bi: The tool group's pre-test mean score will not equal the control group's pre-test mean score.

H3bo could not be rejected based on t(0.05, 70) = 0.211353; p = 0.8332.

The comparison of the tool group's and the control group's pre-test mean scores was expected to indicate any significant difference in skill level. The failure to reject H3bo indicates that no significant difference was detected between the control group's and tool group's pre-test mean scores. The tool group's pre-test mean score of 36.52 and the control group's pre-test mean score of 35.89 demonstrate a similar skill level on the pre-test manual task. The tool group's mean score was not substantially different from the control group's established baseline mean score.

User Survey Results

All 36 of the CAASS users in the tool group responded to the closed-ended questions. The CAASS survey questions and the modal response for each question are shown in Table 9.1 below. Each response was based on the Likert scale choices shown in Figure 8.2. The survey responses support the overall design of the tool with respect to format, content, accuracy, and ease of use. The survey results indicate an overall satisfaction with the tool in spite of the lower experimental performance.

Table 9.1. Responses to CAASS End-User Survey.

Tool Format

1. Do you prefer the system to the manual method?
   Modal response: Almost Always (18 of 36). Mean 4.12; Std. Dev. 1.004.

2. Do you think the output is presented in a useable format?
   Modal response: Most of the Time (22 of 36). Mean 3.56; Std. Dev. 0.890.

10. Overall, how would you rate your satisfaction with this application?
   Modal response: Good (26 of 36). Mean 3.78; Std. Dev. 0.790.

Tool Content

4. Does the system provide sufficient information?
   Modal response: Most of the Time (20 of 36). Mean 3.4?; Std. Dev. 0.840.

5. Do you find the output relevant?
   Modal response: Most of the Time (20 of 36). Mean 3.34; Std. Dev. 0.938.

Accuracy

6. Is the system successful?
   Modal response: Most of the Time (27 of 36). Mean 3.83; Std. Dev. 0.781.

9. Do you think the system is reliable?
   Modal response: Most of the Time (23 of 36). Mean 3.80; Std. Dev. 0.8?2.

Ease of Use

3. Is the system difficult to operate?
   Modal response: Some of the Time (22 of 36). Mean 1.927; Std. Dev. 0.905.

7. Is the system easy to use?
   Modal response: Most of the Time (20 of 36). Mean 3.88; Std. Dev. 0.899.

8. Is the system user friendly?
   Modal response: Most of the Time (20 of 36). Mean 3.83; Std. Dev. 1.070.
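The per-question summaries in Table 9.1 (modal response, count, mean, and standard deviation) can be reproduced mechanically from raw 1-to-5 coded responses. The sketch below is illustrative only; the response values and column names are hypothetical:

    import pandas as pd

    # Hypothetical raw responses: one row per subject, one column per
    # closed-ended question, coded 1 ("Almost never") to 5 ("Almost always").
    responses = pd.DataFrame({
        "Q1": [5, 4, 5, 3, 4, 5],
        "Q2": [4, 4, 3, 4, 5, 4],
    })

    labels = {1: "Almost never", 2: "Some of the time",
              3: "About half of the time", 4: "Most of the time",
              5: "Almost always"}

    for q in responses.columns:
        col = responses[q]
        mode = col.mode().iloc[0]        # modal (most frequent) response
        count = (col == mode).sum()      # how many subjects gave it
        print(q, labels[mode], f"{count} of {len(col)}",
              f"mean={col.mean():.2f}", f"sd={col.std():.3f}")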

Only 50% of the students completing the survey chose to answer open-ended question 11 concerning the ability of the tool to learn or adapt with repeated use. Ten subjects indicated that the system did learn and provided the learned case as the best match with repeated use. Eight students stated they observed no evidence of learning. Since all eighteen of the students responding to this question had complete templates recorded for the post-test, a possible explanation may be that the eight students failed to carry out the last three steps in the written instructions. These steps included allowing the system to learn the current organization parameters and conducting a second trial to see if the tool would find the learned results as the best match.

Twenty-six students, or 72%, chose to answer the second open-ended question concerning aspects of the tool with which they were most satisfied. Twenty-three of the students generated responses that could be grouped into the three general positive responses, and three students generated the dislike responses, shown in Table 9.2.

Table 9.2. Question 12 responses.

Likes:
- Screen sequences and format (visual prompts, pull-down menus, and combination boxes).
- Ease of use of the tool helped complete the fact gathering tasks.
- Best-fit case match helped complete current case fact gathering.

Dislikes:
- Needs more detailed help screens.
- "Didn't like the application at all. I had no idea what I was doing."
- No advantage over manual method.

The help function definitely needs expansion. Although this was a prototype system, an expanded help facility may have reduced the impact of the lack of hands-on training. Hands-on training may have helped the student who responded, "I had no idea what I was doing."

The response that the tool offered no advantage over the manual method was probably accurate for the small case studies that allowed completion within the ninety minutes allotted. Use in the fact gathering activities of a real-world organization would have provided a better assessment of the tool's possible advantage.

Conclusions

The CAASS prototype achieved the objective of demonstrating the feasibility of the conceptual model and the symbol level architecture. It provided assistance to the students by suggesting a checklist and suggesting elements to help complete the current organizational template. No prototype tool suggestion was automatically incorporated into the current template; the student analyst had to decide whether to use the suggested values and whether they were the best choice in the current situation. The test results may reflect the students' use of a tool suggestion when it was not warranted in the context under study.

Summary

This chapter discussed the results of the experiment and user survey responses.

The control group results established a performance baseline. The control group's post-test performance was an improvement over the pre-test performance, reflecting a learning effect from performing similar repeated tasks.

The tool group's performance was measured and compared to the control group baseline. Both groups performed as expected on the pre-test. There was no significant pre-test difference between the control group and the tool group. At this point the tool group had not been

provided with the prototype tool, so both groups used the manual method to perform the pre-test task. This established that both groups possessed similar skill sets and performed similarly on the same task. The tool group was given a set of detailed operating instructions and allowed to use the tool while completing the post-test task. It was expected that both the control and tool groups would benefit from the learning effect of repeating similar tasks. Additionally, the tool group was expected to realize an increase in performance above the control baseline due to the positive contribution of the tool. This did not occur. The tool group's performance apparently suffered from a lack of hands-on training in the use of the tool.

Familiarity with the Windows environment presented by the tool's Visual BASIC user interface may have caused a degree of automated task completion by the user. Users may have selected the first likely alternative suggested by the tool without performing a cognitive evaluation as to which suggested alternative best fit the task under study.

The experimental results did not demonstrate a tool performance advantage over the manual method for the pre-test and post-test tasks of limited scope and duration. More complex real-world tasks may better demonstrate the tool's capabilities.

The users' utility for the tool was relatively high. The results of the CAASS user survey provided some insight into the level of user utility for the prototype in the areas of tool format, content, accuracy, and ease of use. The users preferred the tool to the manual method. They felt the tool's output was in a useable format and were satisfied with the overall application format.

The tool content was believed to provide sufficient relevant information to complete the tasks assigned. The exception to this view was the tool help facility. Users would like to see a more extensive development of the help function. This is a valid observation. The prototype provided a "bare bones" help facility for major tool functions. A production tool will require a greatly expanded help facility.

The users felt the tool was reliable and produced successful results. The familiar Windows interface was adjudged user friendly and easy to use. As mentioned earlier, this familiar, easy-to-use interface, in the absence of hands-on training, may have contributed to the unexpected below-baseline performance of the tool group on the post-test task.

The users' overall evaluation of the prototype tool was good. The implementation of the working tool prototype proved the feasibility of the conceptual architecture. The next chapter discusses the research contributions and limitations of the CAASS.

CHAPTER X

RESEARCH CONTRIBUTIONS, LIMITATIONS,

AND FUTURE RESEARCH

Introduction

This chapter describes the research deliverables and the contributions this research makes to the discipline of management information systems in the areas of artificial intelligence, systems analysis, and decision support systems. Additionally, known limitations in the research are stated.

Research Deliverables

The deliverables of this research were specified in Chapter I and fulfilled in the following manner:

1. Fifty-three analyst fact gathering activities that may be used to assist in gathering organizational facts were identified and discussed in Chapter IV of this research.

2. The determination of the knowledge needed by CAASS in order to support the analyst during fact gathering activities was completed in Chapter IV as well.

3. Chapter IV also developed the conceptual model for CAASS. This model incorporated the required knowledge level concepts needed by the system to learn and adapt organizational fact sets to the current organizational situation. It provided the theoretical basis upon which CAASS was developed. Additionally, the system knowledge types, needed capabilities, behaviors, and component relationships were described and incorporated into the conceptual model's dialog management, fact gathering coordinator, application base management, case-base management, resource base management, and knowledge base management subsystems.

4. A validating prototype of CAASS was designed, implemented, and evaluated. The design and implementation were discussed in Chapter VI. An expert case-base prototype system was implemented using Visual BASIC, Agent OCX, Eclipse, and the Easy Reasoner software. The prototype CAASS was based on the structure chart and logical flow diagrams. The prototype reflected the knowledge level concepts of the conceptual model and the symbol level architecture. The prototype demonstrated the feasibility of an adaptive analyst assistant by organizing fact gathering activities, suggesting an analyst fact gathering checklist, and providing a new case template for storing organization facts. The analyst was further assisted by the CAASS case recall capability of the case-based reasoner and the case-base.

5. The analysis of the test of the prototype system was provided in Chapter IX.

Research Contributions

The major contribution of this research is a conceptual model for an adaptive analyst fact gathering support system. Earlier researchers have proposed analyst support systems that provide support during the later phases of systems analysis. These systems were proposed by Shemer (1987); Puncello, Torrigiani, Pietri, Burton, Cardile, and Conti (1988); Loucopoulos and Champion (1989); and Dalal and Yadav (1992). Each of these earlier systems assumed that information requirements had been determined and provided support for the structuring of formal requirement specifications and the generation and selection of alternative information systems. The CAASS system is designed to assist the analyst in gathering and analyzing organizational facts that may be used to generate information system requirements. CAASS provides assistance in the requirements determination, or first, phase of systems analysis. The earlier systems started with the assumption that this phase had already been completed.

The second contribution provided by this research is the synthesis of information system theories and concepts into a framework for a system for fact gathering and information requirements determination support. It combines rule-based modeling and case-based reasoning and recall with adaptive learning techniques to provide fact gathering support. The system suggests facts learned from the previous analysis of similar organizations. These facts may then be modified to suit the current organizational situation. This helps the analyst by providing recall of details from similar analyses to help ensure completeness of the current analysis without impinging on the creative process of requirements determination. The requirements are generated to fit the target organization's specific needs.
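To make the recall-and-adapt idea concrete, the sketch below illustrates a simple nearest-neighbor case recall over slot-value fact sets. It is not the prototype's actual Eclipse/Easy Reasoner mechanism; the class names, similarity measure, and example cases are hypothetical:

    from dataclasses import dataclass, field

    @dataclass
    class Case:
        name: str
        facts: dict = field(default_factory=dict)  # slot -> value

    def similarity(probe: dict, case: Case) -> float:
        """Fraction of probe slots whose values match the stored case."""
        if not probe:
            return 0.0
        hits = sum(1 for slot, value in probe.items()
                   if case.facts.get(slot) == value)
        return hits / len(probe)

    def best_match(probe: dict, case_base: list) -> Case:
        """Recall the stored case most similar to the current situation."""
        return max(case_base, key=lambda c: similarity(probe, c))

    case_base = [
        Case("Media Technology Services", {"business_type": "8222"}),
        Case("Homeowners of America", {"business_type": "8741"}),
    ]

    # Recall the most similar prior case; its facts can then be modified
    # to suit the current organization. Learning a new organization is
    # simply case_base.append(new_case).
    print(best_match({"business_type": "8741"}, case_base).name)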

In order to provide fact gathering support, CAASS requires knowledge of analyst fact gathering activities. A third contribution is the synthesis of the analyst fact gathering activities proposed by G. B. Davis (1982), Yadav (1983, 1985), A. M. Davis (1988), Wetherbe (1991), and Byrd, Cossick, and Zmud (1992) into a list of fifty-three key analyst fact gathering activities. Additionally, each activity is associated with a set of functions that are needed to support the analyst in completing the fact gathering activities. This list may be used to support further research into the process of information system requirements determination.

A fourth contribution of this research is the design and implementation of a prototype system that demonstrates the feasibility of the conceptual model. The prototype provides the user-analyst with suggested fact gathering activities, organizes gathered facts into a recallable structure, and suggests facts from prior analyzed organizations. The facts may be modified to fit the needs of the organization currently being analyzed. The prototype provides the further capability to adapt its case-base contents by learning facts about new organizations. The major portions of the conceptual model have been operationalized by the prototype and could be extended to a production support system.

Lastly, the research provides additional validation for the Baldwin and Yadav URM. The steps in the URM led directly to a working prototype that helps validate the conceptual model of the system. Additionally, a validation framework was proposed and used to validate system support requirements and proposed system behaviors, establish system capabilities, and test and evaluate the implemented prototype system. This framework was specifically created for use with artificial intelligence support systems and for use in conjunction with the URM. This contributes to the knowledge about validation of artificial intelligence systems.

Research Limitations

Several limitations must be discussed:

1. A major limitation was the time available to complete a full-featured CAASS. A prototyping approach was chosen specifically to demonstrate feasibility by capturing selected features of the conceptual model. The features incorporated in the prototype help fill the gap in information requirements determination left by previous research.

2. The depth of knowledge in the case-base was limited. Over time, the learning capabilities of the system expand the case-base, increasing the system knowledge; the prototype did reflect this ability. The Easy Reasoner had an undocumented limit of 256 template fields in a case-base record. The structure of the new case template had to be reduced in the number of slots to meet this limit.

3. This research concentrated on the construction of the CAASS prototype and reflected the ability to learn and support the analyst in fact gathering activities. Although included in the architecture and structure chart, some full-featured capabilities were omitted from the prototype, as they were not relevant to a feasibility demonstration. This included a portion of research objective three, which was to provide a means to recall and adapt previous information requirement sets from previously analyzed similar organizations. Since the ability of an expert system to analyze facts and create information requirement sets and specifications was demonstrated by Dalal and Yadav (1992), CAASS was limited to storing, recalling, and adapting the fact sets from which information requirements and specifications may be generated. A full-featured production model would need to add the fact analysis and information requirement specification ability. The case-base mechanism would remain the same for storing, recalling, and adapting the requirement sets as was demonstrated with the unanalyzed fact sets.

Future Work

The CAASS architecture should be implemented in another case-based reasoning system allowing up to 1,000 fields for the fact gathering templates. A 256-field limit was satisfactory for a feasibility demonstration, but a full-scale test during fact gathering and analysis in a real organization will require more than 256 fact fields. Additionally, the fact analysis and information requirement specification abilities of the Dalal and Yadav (1992) EMSS should be incorporated.

REFERENCES

Alterman, R. "An Adaptive Planner." Proc. AAAI-86 Fifth National Conference on Artificial Intelligence. 1986. 2.

Baldwin, D. and Yadav, S. B. "The Process of Research Investigations in Artificial Intelligence: A Unified View." IEEE Transactions on Systems, Man, and Cybernetics 25 (1995): 852-861.

Barlas, Y. and Carpenter, S. "Philosophical roots of model validation: two paradigms." System Dynamics Review (Summer 1990): 148-166.

Boloix, G. and Robillard, P. N. "A Software System Evaluation Framework." IEEE Computer Dec. 1995: 17-26.

Byrd, T. A., Cossick, K. L., and Zmud, R. W. "A synthesis of research on requirements analysis and knowledge acquisition techniques." MIS Quarterly 16.1 (1992): 117-138.

Carlson, W. M. "Business Information Analysis and Integration Technique (BIAIT) - The New Horizon." Data Base 10.4 (1979): 3-9.

Cohen, P. R. and Howe, A. E. "How Evaluation Guides AI Research." AI Magazine Winter 1988: 35-43.

— "Toward AI Research Methodology; Three Case Studies in Evaluation." IEEE Transactions on Systems, Man, and Cybernetics 19 (1989): 634-646.

Cook, Thomas D. and Campbell, Donald T. Quasi-Experimentation. Boston: Houghton Mifflin, 1979. 103-112.

Dalal, N. P. and Yadav, S. B. "The Design of a Knowledge-Based Decision Support System to Support the Information Analyst in Determining Requirements." Decision Sciences 23 (1992): 1373-1388.

Davis, Alan M. "A Taxonomy for the Early Stages of the Software Development Life Cycle." The Journal of Systems and Software 8.4 (1988): 297-311.

Davis, A. M. Software Requirements Analysis and Specification. Englewood Cliffs, NJ: Prentice Hall, 1990.

Davis, G. B. "Strategies for information requirements determination." jBM Systems Joumal. 27.1 (1982): 4-30.

Dewitz, S. D. Systems Analysis and Design and the Transition to Objects. New York: McGraw-Hill, 1996. 11-16.

Doll, William J., and Torkzadeh, Gholamreza. "The Measurement of End-User Computing Satisfaction." MIS Quarterly 12.2 (1988): 259-274.

Easton, Annette, and Easton, George. Cases for Modern Systems Analysis and Design. Menlo Park, CA: Benjamin/Cummings, 1996.

Emory, C. W. Business Research Methods. Homewood, IL: Richard D. Irwin, 1985.

Giarratano, Joseph and Riley, Gary. Expert Systems: Principles and Programming. Boston: PWS, 1994.

Goodhue, Dale L. "Development and Measurement Validity of a Task-Technology Fit Instrument for User Evaluations of Information Systems." Decision Sciences 29.1 (1998): 105-138.

Haley Enterprise. Agent OCX Programmer's Guide. Sewickley, PA: The Haley Enterprise, 1995.

—. Eclipse Reference Manual. Sewickley, PA: The Haley Enterprise, 1996.

Hoffer, Jeffery A., George, Joey F., and Valacich, Joseph S. Modern Systems Analysis and Design. 2nd ed. Reading, MA: Addison-Wesley, 1999.

IBM. "Business Systems Planning." Advanced Svstem Development/Feasibility Techniques. Ed. J. D. Couger, M. A. Colter, and R. W Knapp. New York: John Wiley, 1982. 236-314. IEEE. "IEEE Standard Glossary of Software Engineering Terms." ANSITEEE Standard 729-1983. New York: Institute of Electrical and Electronic Engineers, 1983 1-38. Rpt. in Software Engineering Standards. New York: Institute of Electncal and Electronic Engineers, 1984. 1-38.

Keppel, Geoffrey. Design and Analysis. Englewood Cliffs, NJ: Prentice Hall, 1991. 187-278.

Kerner, D. V. "Business Information Characterization Study." Data Base 10.4 (1979): 10-17.

Kolodner, J. Case-Based Reasoning. San Mateo, CA: Morgan Kaufmann, 1993.

Lo, W. Amber, and Choobineh, Joobin. "CABSYDD: Case-Based System for Database Design." Unpublished Working Paper, Texas A&M U, College Station, TX, 1995.

149 —, "Architecture of a Case-Based Conceptual Database Design Tool." Unpubhshed Working Paper, Texas A&M U, College Station, TX. 1996.

Loucopoulos, P. and Champion, R. E. M. "Knowledge Based Support for Requirements Engineering." Information and Software Technology 31.3 (1989): 123-135.

Luger, George F. and Stubblefield, William A. Artificial Intelligence. 2nd ed. New York: Benjamin/Cummings, 1993.

McClatchy, W. "Meadows the CEO." Information Week 5 Nov. 1990: 34.

McHaney, Roger, and Cronan, Timothy P. "Computer Simulation Success: On the Use of the End-User Computing Satisfaction Instrument: A Comment." Decision Sciences 29.2 (1998): 525-536.

Microsoft. Visual BASIC Getting Started. Redmond, WA: Microsoft, 1997.

Mintzberg, H. The Structuring of Organizations. Englewood Cliffs, NJ: Prentice-Hall, 1979.

Montgomery, Douglas C. Design and Analysis of Experiments. 3rd ed. New York: John Wiley & Sons, 1991. 201-222.

Mylopoulos, John, and Levesque, Hector J. "An Overview of Knowledge Representation." On Conceptual Modelling. Ed. Michael L. Brodie, John Mylopoulos, and Joachim W. Schmidt. New York: Springer-Verlag, 1984. 3-17.

Nakatani, Kazuo, and Yadav, Surya B. "An Extended Object-Oriented Modeling Method for Business Process Reengineering (BPR)." Proc. of the Americas Conference on Information Systems. Ed. Jane M. Carey. Phoenix, AZ, August 16-18, 1996. 167-169.

Newell, Allen. "The Knowledge Level." Artificial Intelligence 18 (1982): 87-127.

O'Keefe, R. M. and O'Leary, D. E. "Expert Systems Verification and Validation: A Survey and Tutorial." Artificial Intelligence Review 7.1 (1993): 3-42.

Puncello, P. O., Torrigiani, P., Pietri, F., Burton, R., Cardile, B., and Conti, M. "ASPIS: A knowledge-based CASE environment." IEEE Software 5.2 (1988): 58-65.

Rockart, John F. "Chief Executives define their own data needs." Harvard Business Review March-April 1979: 81-93.

Riesbeck, C. K. and Schank, R. C. Inside Case-Based Reasoning. Hillsdale, NJ: Lawrence Erlbaum, 1989.

Shemer, I. "Systems analysis: A systemic analysis of a conceptual model." Communications of the ACM 30 (1987): 506-512.

United States. Executive Office of the President. Office of Management and Budget. Standard Industrial Classification Manual. Washington: 1987. 7-9.

Vitalari, Nicholas P. and Dickson, Gary W. "Problem Solving for Effective Systems Analysis: An Experimental Exploration." Communications of the ACM 26 (1983): 948-956.

Webster's New World Collegiate Dictionary. Ed. Michael Agnes. 3rd ed. New York: Simon & Schuster-Macmillan, 1997.

Wetherbe, James C. "Executive Information Requirements: Getting It Right." MIS Quarterly (March 1991): 51-65.

Yadav, S. B. "Determining an organization's information requirements; A state of the art survey." Data base 14.3 (1983): 3-20.

— "Classiiying an Organization to Idenfify hs Information Requirements: A Comprehensive Framework." Joumal of Management Information Systems. 2.1 (Summer 1985): 39-60.

— "A Human Learning Based Approach to Stmcturing and Acquiring Knowledge in Adaptive Knowledge Based Systems". Proc. of Second International Symposium on Artificial Intehigence in Monterrey, N.L. Mexico October 25-27. 1989. Monterrey, Mexico: Instituto Tecnologicao y de Estudios Superiores de Monterrey Centro de Inteligencia Artificial, 1989 1-24.

Yadav, S. B., and Chand, D. R. "An expert modeling support system for modeling an object system to specify its information requirements." Decision Support Systems 5 (1989): 29-45.

Youngblood, Simone M. and Pace, Dale K. "An Overview of Model and Simulation Verification, Validation, and Accreditation." Johns Hopkins APL Technical Digest 16.2 (1995): 197-206.

APPENDIX

ORGANIZATIONAL CASE FACT LISTS FOR

MEDIA TECHNOLOGY SERVICES AND

HOMEOWNERS OF AMERICA

MEDIA TECHNOLOGY SERVICES FACT LIST

There are fifty-five graded items.

Business Type

1. 8222 - Educational Services

Objectives

2. Improve response times.

3. Improve consistency of results.

4. Provide documentation (system and user).

5. Normalize the database.

6. Remove data inconsistencies.

7. Replace the network interface.

Strategy

8. Develop a new information system.

Entities

9. User or Customer

10. Equipment

11. Course

12. CCTV

13. Activity

14. Media

15. Department

16. Transaction

Function

17. Provide help with instructional materials.

Process

18. Assist with selection of instructional materials.

Process

19. Provide instruction in the use of instructional materials.

Function

20. Manage audio-visual equipment.

Process

21. Distribute audio-visual equipment to classrooms.

Process

22. Provide maintenance for audio-visual equipment when requested.

Function

23. Counter Services

Process

24. Make reservations.

Input

25. Reservation request.

Outputs

26. Inventory book is checked.

27. Physical inventory is performed.

28. Order card is completed.

29. Inventory book is updated.

Process

30. Return Equipment.

Input

31. Equipment item is returned.

Output

32. Pull order card

33. Inventory book is updated

Process

34. Cancel an equipment/media reservation.

Input

35. Phone call request to cancel a reservation

Output

36. Inventory book is updated.

Process

37. Checkout Media and Equipment.

Input

38. Request to checkout media or equipment.

Output

39. Inventory book is checked.

40. Physical inventory is performed.

41. Order card is completed.

42. Inventory book is updated.

Function

43. CCTV Operations

Process

44. Schedule a media event

Input

45. Receive a CCTV playback request form.

46. Receive a Playback request over the telephone.

Output

47. Schedule the playback event.

Process

48. Cancel a scheduled playback event.

Input

49. Cancellation phone call.

Output

50. Schedule is updated

Function

51. Media/Equipment acquisitions.

Process

52. Purchase Media/Equipment

Input

53. Request for Media/Equipment.

Output

54. Order Media/Equipment.

Process

55. Maintain On-line Media Libraries.

HOMEOWNERS OF AMERICA FACT LIST

There are fifty-five graded items.

Business Type

1. 8741 - Management Services.

Objectives

2. Organizational growth.

3. Increase organization efficiency.

Strategy

4. Improve operating procedures.

Entities

5. Member

6. Association

7. Employee

8. Complaint

9. Computer

10. Account

Function

11. Administer financial information.

Process

12. Make assessments

Inputs

13. Monthly dues charge

14. Board of Directors special assessment.

Output

15. Member's monthly bill.

Process

16. Receive member payments.

Inputs

17. Check.

18. Payment coupon.

Output

19. Sorted payments by association.

20. Payments recorded in ledger.

21. Payments entered in spreadsheet program

Process

22. Handle delinquent payments.

Input

23. Delinquent member list.

Output

24. Letter of delinquency.

25. Member fine amount.

26. Lien on member's property.

Process

27. Pay association obligations.

Inputs

28. Vendor bills.

29. Utility fees.

Output

30. Send Check.

Function

31. Property maintenance and upkeep.

Process

32. Provide maintenance service to associations.

Inputs

33. Valid maintenance contract.

34. Request for maintenance.

Outputs

35. Provides maintenance service.

Process

36. Contract for new maintenance service.

Input

37. Bid request to provide maintenance service.

Output

38. Submit maintenance service bid.

Function

39. Rules violation communications.

Process

40. Resolution of alleged rules violations.

Input

41. Letter of complaint.

Output

42. Investigate alleged complaint.

43. Dismiss complaint.

44. Issue a notice of rules violation.

Function

45. Community newsletter.

Process

46. Publish a newsletter

Input

47. Association information

Output

48. Create newsletter

49. Print newsletter.

Process

50. Distribute newsletters

Input

51. Association

52. Member addresses

Output

53. Mail newsletters.

Function

54. Manage association membership records.

Process

55. Update membership member details.
