
FLORIDA STATE UNIVERSITY

COLLEGE OF EDUCATION

GRAPH AND PROPERTY SET ANALYSIS: A METHODOLOGY FOR COMPARING

MENTAL MODEL REPRESENTATIONS

By

LINDA JANE SMITH

A Dissertation submitted to the Department of Educational Psychology and Learning Systems in partial fulfillment of the requirements for the degree of Doctor of Philosophy

Degree Awarded: Spring Semester, 2009

The members of the Committee approve the Dissertation of Linda Jane Smith defended on November 26, 2008.

J. Michael Spector, Professor Directing Dissertation

Ian Douglas, Outside Committee Member

Tristan E. Johnson, Committee Member

Vanessa P. Dennen, Committee Member

Approved:

Akihito Kamata, Chair, Department of Educational Psychology and Learning Systems

The Office of Graduate Studies has verified and approved the above named committee members.

TABLE OF CONTENTS

List of Tables ...... v
List of Figures ...... vi
Abstract ...... viii

CHAPTER 1 INTRODUCTION ...... 1

CHAPTER 2 LITERATURE REVIEW ...... 8

Development of Mental Model Theory ...... 8
The Place of Models in Learning and Instruction ...... 13
Conceptual Change and Mental Model Assessment Points ...... 22
Importance of Mental Model Assessment for Instructional Design ...... 25
Status of Mental Model Assessment ...... 28
Mental Model Assessment Needs Not Addressed with Available Approaches ...... 40
How the Proposed Methodology Addresses Some of the Assessment Needs ...... 44

CHAPTER 3 METHODS ...... 46

Research Focus ...... 48
Description of the Mental Model Comparison Methodology ...... 49
Study Design ...... 52

CHAPTER 4 RESULTS ...... 60

Adequacy of Training ...... 60
The Mental Model Elicitation Process ...... 62
Mental Model Representations ...... 64
Quantitative Comparison of Models ...... 72
Qualitative Comparison of Student Models to Their Professor's Model ...... 73
Comparisons of Student Backgrounds and the Focus and Perspectives in Their Models ...... 75
Conceptions and Possible Misconceptions Identified in Student Models ...... 76
Case Review: Creating a Mental Model Representation ...... 77
Interviews with the Three Experienced Designers ...... 82

CHAPTER 5 DISCUSSION ...... 85

Research Question 1 ...... 85
The Role of Properties Data in Model Analysis ...... 87
Quantitative Analysis in Model Comparisons ...... 89
Qualitative Analysis in Model Comparisons ...... 91
Additional Comments about Mental Models and Their Representation ...... 95
Research Question 2 ...... 96
Considerations for Developing a Mental Model Assessment Tool ...... 98

Limitations ...... 98

CHAPTER 6 CONCLUSION ...... 100

Recommendations ...... 100
Development and Research ...... 101

APPENDICES ...... 107

A A Methodology for Comparing Mental Model Representations ...... 107
B Application of Graph Theory for the Comparison of Model Representations: A Prototype Study Using Descriptions of Instructional Design ...... 123
C Study Training Materials ...... 132
D Human Subjects Approval ...... 139
E Report for Professor ...... 148

REFERENCES ...... 181

BIOGRAPHICAL SKETCH ...... 187

LIST OF TABLES

Table 1. Assessment Points for Learner Mental Models...... 24

Table 2. Summary of Current Approaches to Mental Model Assessment ...... 38

Table 3. Milestones in the Development and Formative Evaluation of the Methodology ...... 47

Table 4. Student Responses to the Post-training Questionnaire ...... 61

Table 5. Professor Responses to the Post-training Questionnaire ...... 61

Table 6. Student Responses to the Post-elicitation Questionnaire ...... 62

Table 7. Professor Responses to the Post-elicitation Questionnaire ...... 63

Table 8. Quantitative Comparisons of Student and Other Experienced Designer Models to the Model of the Professor Teaching the Class ...... 73

Table 9. Summary of Differences between Professor and Student Models ...... 74

Table 10. Summary of Backgrounds and Student Graph Perspectives by Country ...... 75

Table 11. Summary of Backgrounds and Student Graph Perspectives by Experience ...... 76

Table 12. Basic Vocabulary Tree ...... 88

Table 13. Vertex Summary ...... 117

Table 14. Edge Summary ...... 118

Table 15. Model Comparison Summary ...... 120

Table 16. Summary of the Model Comparison ...... 128

LIST OF FIGURES

Figure 1. The effect of elicitation techniques on the level of similarity between internal and external models ...... 3

Figure 2. Internal and external models in learning and instruction ...... 14

Figure 3. Framework for model based learning for specific content goals (based on Clement, 2000) ...... 21

Figure 4. Outcomes in the expression of a mental model ...... 30

Figure 5. Examples of simple Pathfinder networks ...... 32

Figure 6. Example of a map derived from textual analysis ...... 34

Figure 7. Example of a concept map (based on Jonassen, 2004) ...... 35

Figure 8. Example of a causal diagram (based on Jonassen, 2004) ...... 37

Figure 9. Visual comparison of network, cognitive, and proposed methodology maps ...... 45

Figure 10. Basic elements of a language of graphs ...... 50, 109

Figure 11. Combining elements in a language of graphs ...... 51, 110

Figure 12. Comparison concept for two mental model representations ...... 52, 118

Figure 13. Study design overview ...... 53

Figure 14. Graph and vertex list for the model of professor 1 ...... 67

Figure 15. Example of a graph and vertex list for the model of a student ...... 68

Figure 16. Graph and vertex list for the model of professor 2 ...... 70

Figure 17. Step 1: notes made in planning the model representation ...... 78

Figure 18. Step 2: first attempt at the model representation ...... 79

Figure 19. Step 3: second attempt at the model representation ...... 80

Figure 20. Step 4: final graph for the model representation ...... 81

Figure 21. Focus and perspectives shown on graphs: US vs. international students ...... 87


Figure 22. Basis for a quantitative comparison of models for student P1 and Professor 1 ...... 90

Figure 23. Basis for a quantitative comparison of models for professor 1 and professor 2 ...... 91

Figure 24. Common elements for professor 1, professor 2, and some of the 19 students ...... 92

Figure 25. Model A: Reiser, Gustafson, and Branch (2002) ...... 126

Figure 26. Model B: Gagné, Wager, Golas, and Keller (2005) ...... 126

Figure 27. Model A: Models of ID segment ...... 129

Figure 28. Model B: Models of ID segment ...... 129

ABSTRACT

The purpose of this dissertation study was to conduct the next stage of research in the development of a new methodology (Smith, 2005) based on an analysis of Graphs and Property Sets (GAPS). The objective of the methodology is to measure the degree of similarity in structure and content of mental model representations. Such measures are useful in determining if and to what extent instructional interventions promote understanding and the acquisition of expertise with regard to complex phenomena and situations. This methodology builds on earlier research (e.g., Spector & Koszalka, 2004) and was tested in a prototype study (Smith, 2006). The research was developmental in nature and consisted of a formative evaluation of the methodology aimed at answering the following questions:

1. Does the methodology provide useful comparisons of student-constructed models based on relevant attributes of structure and content that are embedded in the model elicitation methodology?
2. What improvements in the methodology are needed prior to further research and development and eventual implementation in the form of a mental model assessment tool?
   a. What improvements are needed regarding the mental model elicitation methodology?
   b. What improvements are needed in the mental model representation analysis methodology?

The study revealed that the methodology can provide useful comparisons of student-constructed models. The determination of usefulness was based on the feedback received from two professors who are instructors for beginning students in the instructional design program which provided the subjects for this research. The study also identified specific improvements that are needed prior to further research and development of the methodology. For this study, a mental model is an internal cognitive structure created by an individual to explain external phenomena, to solve problems, and/or to predict outcomes of actions and decisions. Such internal structures cannot be observed directly, and methods for representing an individual's mental model vary according to the latitude of expression given the individual and the extent of assumptions that must be made concerning the degree of similarity between the internal model and the external representation. The methodology evaluated in this study represents a systematic attempt to combine freedom of expression on one hand with structured detail elicitation on the other. The goal is to reduce the number of inferences and assumptions an investigator must make in interpreting a mental model representation and to address finer levels of comparisons between and among models. The methodology uses an application of graph theory (Chartrand, 1977; Diestel, 2000) and can be distinguished from other graph-based methodologies by one or more of the following characteristics. Subjects create their own graphs to represent their mental models. Subjects provide detailed property sets for each graphic element. Property sets define both the concepts in a subject's mental model and the subject's understanding of how concepts are related. Finally, comparisons between models are based on analyses of properties of graphic elements rather than linked pairs of concept labels. Property set analysis may determine whether or not similar labels in different mental model representations refer to the same

concepts. It also may determine whether or not similar concepts are identified with different labels. Assumptions that subjects understand and use concept labels the same way can lead to inaccurate conclusions about the degree of correspondence of one model to another. The research context was a graduate program in instructional design at a large, Southern university. Individuals may enter the program as masters students or doctoral students. The focus of this study was limited to comparisons of mental model representations between novices and experts in the field of instructional design. The methodology was used to examine gaps between the knowledge and conceptions of beginning students and the knowledge and conceptions of their professor, who is an experienced practitioner in the field of instructional design. The initial state of student knowledge and conceptions can have significant implications for the design and delivery of instruction. First, understanding students' prior knowledge provides a starting point in bridging the gap between their beginning state and the learning objectives of the instruction. Second, learning of new material takes place with regard to a larger world view students may have. Integration of new knowledge within this larger context requires some understanding of the context's relevant attributes. Next, examination of students' initial conceptions and mental models may reveal misconceptions that must be overcome in order for the learning objectives to be achieved. Misconceptions can be firmly entrenched, and may require design and/or delivery approaches beyond those sufficient to instruct students without such handicaps. It is assumed that a comparison of mental model representations of beginning students with the mental model representation of an experienced practitioner will reveal both initial states of the learners and misconceptions they may have. Participants included three professors who are experienced instructional designers and 19 graduate students in an introductory design course in the Instructional Systems Program. Participants were trained to use the methodology to represent their mental models in responding to an instructional design problem. Mental model representations of students were compared with that of the professor teaching the introductory instructional design course. The comparisons addressed: (a) the degree of similarity in content and structure; and (b) specific areas in student models which might indicate misconceptions or knowledge gaps. The mental model representations of the other two experienced instructional designers were compared to that of the professor teaching the course. This analysis determined that the methodology has utility in comparing the models of persons with similar expertise (experienced designers) as well as those with different levels of expertise (professor/student). It also confirmed that the methodology identifies more similarities between persons with similar expertise than between persons of different levels of expertise. Answer to research question 1. A comparison of student-constructed models based on relevant attributes of structure and content is considered useful if it reveals misconceptions or gaps in knowledge that, if present, will affect the design and/or delivery of instruction for the purpose of improving the potential for learners to achieve the targeted learning goals.
The comparison analysis results were shared first with the professor teaching the class of student participants to determine the usefulness of the methodology in identifying misconceptions or knowledge gaps that can affect instructor decisions concerning the design and/or delivery of instruction. Next, the results were shared with the other two professors, one of whom also was an instructor for beginning students in instructional design. The third professor, who did not teach instructional design students, did not comment on the specific application of results; however, the two professors teaching in the instructional design program

responded that the information would aid them in making course design and delivery decisions. They indicated surprise regarding (a) the amount of information that the methodology could produce and (b) the extent of the gaps in knowledge that were revealed between entry level students and their professor. Answer to research question 2. Required improvements in the methodology were addressed using qualitative data obtained from analysis of mental model representations and participant responses to questionnaires and interviews. Questionnaires and interviews were used to obtain participant feedback on the representation process. Comparative analysis data and data from the questionnaires and interviews were examined to determine what improvements are needed prior to further study and implementation of the methodology. The initial analysis results and recommended list of changes were shared with the experienced practitioner group (i.e., the professors) to obtain their reactions to the proposed improvements. The list of recommendations includes: (a) an improved training plan with more examples and additional practice; (b) assessment of understanding of both the representation process and the problem statement prior to model elicitation; (c) better design of the model to be used for comparison with student models; and (d) a set of guidelines for constructing the database and performing qualitative analyses. Because this study was limited to a single application and participant group, results cannot be generalized. However, the mental model assessment methodology design is not limited to this specific application. The results of this study can set the stage for future research using other subject areas, different educational levels, and additional populations. The intended features of this methodology are that it will: (a) be generalizable across domains and populations; (b) be applicable for a variety of purposes in education and training including educational research and instructional design; (c) be scalable for practical use in secondary, tertiary and work settings; (d) be appropriate for complex problem solving domains; (e) produce metrics that identify the degree and basis of correspondence between mental models; and (f) provide greater insight into the structure and content of a person's mental model than what is now provided by current mental model assessment approaches. Further research may produce a validation for broader applications and eventual implementation in the form of a mental model elicitation and assessment tool.

CHAPTER 1

INTRODUCTION

New views in professional disciplines often call for new tools, whether those views are as dramatic as paradigm shifts or merely represent a progression of ideas within an existing trend. As in other fields, professionals in the field of instructional design are faced with the need to respond to new information, to conduct well-designed research, to contribute to the development of theory and practice, and to evaluate the application of new approaches (see IBSTPI Instructional Design Competencies, 2000). Each of these activities requires tools (e.g., methodologies and instruments) designed to provide the type and quality of information required by the design tasks in light of developments within the profession. As any craftsman knows, no one tool can perform all tasks. Like the builder, the engineer, and the scientist, an instructional designer can benefit from the development of new tools (see Snow, 1990). The purpose of this dissertation study was to conduct the next stage of research in the development of a methodology (Smith, 2005) intended to support research and practice in efforts to improve the design and development of instruction and the assessment of learning achievement of students. The methodology consists of an analysis of Graphs and Property Sets (GAPS). This methodology provides a measure of the degree of similarity in structure and content of mental model representations. Such measures are useful in determining if and to what extent instructional interventions serve to promote the development of understanding and expertise with regard to complex phenomena and problem solving situations. The methodology uses an application of graph theory (Chartrand, 1977; Diestel, 2000) and has been tested in a prototype study (Smith, 2006). The research in this study is developmental in nature and includes a formative evaluation of the methodology aimed at answering the following questions:

1. Does the methodology provide useful comparisons of student-constructed models based on relevant attributes of structure and content that are embedded in the model elicitation methodology?
2. What improvements in the methodology are needed prior to further research and development and eventual implementation in the form of a mental model assessment tool?
   a. What improvements are needed regarding the mental model elicitation methodology?
   b. What improvements are needed in the mental model representation analysis methodology?

The study revealed that the methodology can provide useful comparisons of student-constructed models. The determination of usefulness was based on the feedback received from two professors who are instructors for beginning students in the instructional design program which provided the subjects for this research. The study also identified specific improvements that are needed prior to further research and development of the methodology.

The focus of this study was to examine the methodology's utility in comparing mental model representations of beginning students with those of teachers and experienced practitioners. The results of such a comparison can provide several kinds of useful information. For the instructional designer, it can reveal the entry level status of beginning students and examine their preconceptions and specific gaps that must be bridged between students' prior knowledge and learning objectives. A researcher may use the comparison to identify common misconceptions of beginning students and explore the most effective means of overcoming them and the negative impact these misconceptions can have on learning. An instructor may use comparison results to establish a baseline for individual students and track learning progress throughout a course. The importance of mental model comparisons for this and other applications in learning and instruction is reflected in the evolution of views toward education (Seel, 2006; Spector, Dennen, & Koszalka, 2005). Today's instructional designers are faced with shifting paradigms and new technologies to support learning and instruction. For example, developments in psychology have had strong impacts on instructional design since its emergence as a discipline during the middle of the last century. The dominant view in psychology in the 1950s was still behaviorism, which emphasized objective measures based on observations of external behavior. In the 1960s in the USA, cognitive psychologists revived the field's interest in the workings of the mind, and researchers began to explore how learners acquire, organize, and apply knowledge. In the latter part of the twentieth century, theories of mental models were offered to explain how humans perceive and relate to their world. The role of mental models in learning and instruction has become a significant topic for researchers and instructional designers worldwide, and a number of techniques have been used to gain insight into the minds of learners and the experts they may want to emulate (Seel, 1999). However, ongoing progress requires methods and tools that offer better detail, consistency, and scalability than are currently available. For this study, a mental model is an internal cognitive structure created by an individual to explain external phenomena, to solve problems, and/or to predict outcomes of actions and decisions. Such internal structures cannot be observed directly, and methods for representing an individual's mental model vary according to the latitude of expression given the individual and the extent of assumptions that must be made concerning the degree of similarity between the internal model and the external representation. Figure 1 illustrates the relationships between an individual's mental model(s), the data elicitation techniques that serve as a filter in external representations, and the degree of similarity between the individual's model and the interpretation of an external observer. On the left side of the figure, the internal world of a person contains that individual's world view and one or more mental models relating to a subject. Educators, trainers, and researchers have used a variety of techniques in their attempts to determine what a person's mental model might be and how well it reflects a target level of understanding or an accepted view of a subject. Model data elicitation techniques range from whole model expression in some form to increasingly piecemeal collections of mental model elements.
Techniques that ask an individual for a direct representation of his or her entire model require relatively few inferences and assumptions on the part of the observer; therefore, these direct expressions are likely to be more similar to the individual's mental model than those obtained by techniques that consist of fragmented expression (e.g., test question answers, linked word pairs, or unstructured narratives). However, it has been a challenge to assess and evaluate holistic mental model representations elicited using relatively open-ended techniques. In fragmented expressions, observers are required to

make many inferences and assumptions to construct their own version of what they think the individual's mental model is. While fragmented mental model elicitations are often easier to assess and evaluate, the results may not be as indicative of an individual's understanding as the teacher or researcher or designer would like. The limitations of both approaches are exacerbated in situations where models refer to complex domains. The mental model elicitation and analysis method proposed herein has elements of both approaches and represents a systematic attempt to capture the best of both worlds, so to speak.

[Figure 1 diagram: an individual's world view and mental model(s) pass through elicitation techniques that filter expression; direct expression requires limited inferences and assumptions, while indirect expression requires many inferences and assumptions.]

Figure 1. The effect of elicitation techniques on the level of similarity between internal and external models.

Many of the techniques for mental model elicitation and examination have been adequate within their intended contexts, but there remains a need for a generalized methodology to compare one mental model representation with another and to: (a) produce metrics that express the degree of similarity in content and structure between models, and (b) identify specific elements of agreement and disagreement. In education, there are a number of applications for a methodology that compares mental model representations. This study focuses on a specific application area; however, the following examples identify more general applications. For example, such comparisons can provide instructors a means of tracking learning progress for individuals. Instructors may also compare a learner's mental model

representation with an expert model to assess whether learning objectives have been achieved. Instructional designers can use mental model comparisons to examine the effectiveness of course design and to identify areas where improvements are needed. Researchers concerned with the development of theory can use these same comparisons to improve our understanding of learning and the design factors that influence it. Among the techniques for investigating subjects' mental models are analyses of their responses to narrative or essay questions, multiple choice test items, think-aloud protocols, and concept maps. Techniques that are highly structured require the development of instruments (e.g., a set of test questions) that are specific to a particular context or topic. These instruments may limit the respondent's expression and require assumptions on the part of the investigator about what the person's complete mental model is. Techniques that allow more freedom of expression tend to be labor-intensive during analysis and may involve issues of inter-rater reliability because of the difficulties of interpreting the data. Attempts to overcome some of these limitations have used several forms of computer supported analysis to identify mental model structures and assess the degree of correspondence among mental model representations. However, in spite of progress in the development of analytical tools, there remains the need for a methodology that can capture more details along with a holistic representation, address finer levels of comparisons between and among models, and reduce the number of inferences and assumptions about specific mental model representations and their various elements. This research provided a formative evaluation of an application of a methodology I developed (Smith, 2005) for the comparison of mental model representations. The application is limited to comparisons of mental model representations between novices and experts in the field of instructional design; however, it is hoped that further research will produce a validation for broader applications and eventual implementation in the form of a mental model elicitation and assessment tool. The intended features of this methodology are that it will: (a) be generalizable across domains and populations; (b) be applicable for a variety of purposes in education and training including educational research and instructional design; (c) be scalable for practical use in secondary, tertiary and work settings; (d) be appropriate for complex problem solving domains; (e) produce metrics that identify the degree and basis of correspondence between mental models; and (f) provide greater insight into the structure and content of a person's mental model than what is now provided by current mental model assessment approaches. The methodology is based on an application of graph theory in the representation and comparison of mental models. It can be distinguished from other graph-based methodologies by one or more of the following characteristics. First, the methodology allows subjects to create their own graphs to represent their mental models. Next, subjects provide detailed property sets for each graphic element in their mental model representations. These property sets define not only the concepts in a subject's mental model but also the subject's understanding of how concepts are related. Finally, comparisons between mental models are based on analyses of properties of graphic elements rather than linked pairs of concept labels.
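To make the property-based comparison concrete, the sketch below illustrates the idea in code. It is a minimal illustration under stated assumptions, not the dissertation's procedure: the study itself lacked software support (a limitation noted later in this chapter), and its actual analysis steps are given in Appendix A. Every name here (Element, property_overlap, match_elements, PROPERTY_MATCH_THRESHOLD) is hypothetical, and a Jaccard overlap with an arbitrary 0.5 cut-off stands in for whatever property-matching rules the methodology prescribes.

```python
# Hypothetical sketch of property-set matching; not code from the study.
from dataclasses import dataclass


@dataclass
class Element:
    """One graphic element (a vertex or an edge) with its subject-supplied property set."""
    label: str
    properties: frozenset


def property_overlap(a: Element, b: Element) -> float:
    """Jaccard overlap of two property sets: 1.0 means identical, 0.0 disjoint."""
    union = a.properties | b.properties
    return len(a.properties & b.properties) / len(union) if union else 0.0


# Assumed cut-off for treating two elements as the same concept.
PROPERTY_MATCH_THRESHOLD = 0.5


def match_elements(model_a, model_b):
    """Pair elements across two model representations by property overlap,
    not label equality, and report agreement and disagreement."""
    matches, used = [], set()
    for ea in model_a:
        candidates = [eb for eb in model_b if id(eb) not in used]
        if not candidates:
            break
        best = max(candidates, key=lambda eb: property_overlap(ea, eb))
        if property_overlap(ea, best) >= PROPERTY_MATCH_THRESHOLD:
            matches.append((ea, best))
            used.add(id(best))
    unmatched_a = [ea for ea in model_a if all(ea is not m[0] for m in matches)]
    unmatched_b = [eb for eb in model_b if id(eb) not in used]
    similarity = len(matches) / max(len(model_a), len(model_b), 1)
    return similarity, matches, unmatched_a, unmatched_b


# Different labels but overlapping property sets: the elements can still be paired.
student = [Element("evaluation", frozenset({"judges quality", "uses criteria", "occurs last"}))]
expert = [Element("assessment", frozenset({"judges quality", "uses criteria", "informs revision"}))]
score, paired, only_student, only_expert = match_elements(student, expert)
print(score, [(a.label, b.label) for a, b in paired])  # 1.0 [('evaluation', 'assessment')]
```

The design point the sketch preserves is that pairing is driven by shared properties rather than shared labels, so "evaluation" and "assessment" can be recognized as the same concept, while identical labels backed by disjoint property sets would be kept apart.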
Property set analysis may determine whether or not similar labels in different mental model representations refer to the same concepts. It also may determine whether or not similar concepts are identified with different labels. Assumptions that subjects understand and use concept labels the same way can lead to inaccurate conclusions about the degree of correspondence of one model to another. Because the proposed methodology is intended to be generalizable and subject domain independent, it can be applied to many applications without modification. The focus of this

research was to use the methodology to examine gaps between the knowledge and conceptions of beginning students and the knowledge and conceptions of an experienced practitioner in the field of instructional design. These gaps may be the result both of a lack of knowledge on the students' part and of students' misconceptions about the subject. As the literature review in the next section will show, detailed information about learner models is needed for the effective design of instruction and to help students achieve the targeted learning goals. Early work in instructional design relied on measurements of observable behaviors of learners. Without rejecting the value of these assessments, researchers began to consider what might be occurring in the black box of the mind as people experience their world, store and process information, and apply that knowledge to new situations and problems. The development of mental model theory has benefited from work in cognitive psychology and research related to learning and instruction. The role of models in learning and instruction has been examined from two perspectives. Instructors have acknowledged the utility of using models to present information to learners. Similarly, model building as a learning exercise has been used to aid learners in integrating related knowledge and exploring outcomes related to different model inputs (Buckley, 2000; Clement, 2002). External models, however, are only part of the learning process. Understanding the mental models of persons with knowledge and the mental models of those who are seeking knowledge are seen as key elements in identifying the goals of a learning process and the stages of development leading to successful learning outcomes. Much instruction is aimed at aiding learners in progressing from an initial level of performance or knowledge with simplistic or inadequate mental models to that of a successful practitioner or advanced performer with more robust mental models in a particular problem solving domain or area of inquiry (see Jonassen, 2004). Assessing mental models at various points along this continuum enables evaluators to examine whether or not desired conceptual changes are occurring so that learners are gaining not only knowledge but also the ability to apply that knowledge effectively when challenged with new situations in complex domains. In other words, learners must be able to deal with problems for which prior solutions do not exist or no longer apply. This ability to respond effectively to new problem situations arises from the mental models experts can create and exercise rather than from problem-specific training; it is impossible to anticipate and train for all possible complex problem situations. In short, understanding and the development of robust mental models are highly desirable learning outcomes (Mayer, 1989; Taylor, Barker, & Jones, 2003) for which few, if any, assessment tools now exist (Glaser, 1990; Seel, 1999). Tracking conceptual change in learning progress is not the only application of mental model assessment during instruction. It is also important to determine whether beginning misconceptions persist or if new ones have been introduced during instruction. For both theory development and practical application, researchers can explore relationships between instructional design options and their impacts on learner conceptions. Instructors can determine whether or not the delivery of instruction has been effective and identify needs of individual learners for additional assistance or remediation.
In summary, assessment of learners' mental models can provide information regarding initial conceptions and misconceptions, conceptual change during a course of instruction, the similarity between learners' mental models and target models for learning achievement, and the persistence of effective mental models for post-instruction performance. Although current mental model assessment techniques may provide useful information within their intended

contexts, each has limitations related to how much the researcher is left to infer from the data obtained. Specific inferences that the methodology in this study seeks to reduce relate to the structure of the individual's mental model and the meanings the individual ascribes to labels of elements and the relationships among those elements. By asking individuals to provide their own representations of structure and meaning, it is assumed that the external representations of their mental models will have greater correspondence with their internal models than other techniques are likely to produce. The methodology investigated in this study builds on earlier research (e.g., Spector & Koszalka, 2004) and has been tested in a prototype study (Smith, 2006). The prototype study, which used archival data, demonstrated the viability of the methodology for providing useful information regarding the similarity of mental model representations in terms of structure and content. It also produced meaningful quantitative measures of the levels of correspondence. This research study is the next step in the development of the methodology. The study used students and experienced practitioners in the field of instructional design. All participants were trained in the use of the study materials for representing their mental models. Data analysis included comparisons of mental model representations as well as qualitative data obtained through questionnaires and interviews. The limitations of this study pertain to: (a) the length of time that was devoted to training participants in use of the methodology for representing mental models; (b) the length of time participants were allowed for representing their mental models; (c) the lack of software support for graphic representation and description of mental model elements; (d) the use of participants from one university; (e) the use of one age group and educational level for students; and (f) the focus on one subject area for both student and expert participants. Although the results of the formative evaluation cannot be generalized to other populations and subjects, the study data answered the research questions. First, it determined that the methodology provides useful comparisons on the attributes of structure and content for the selected application. Second, data from the analysis of the mental model comparisons and from the questionnaires and interviews revealed what improvements in the methodology are needed prior to further development. The results of this study can set the stage for future research using other subject areas, different educational levels, and additional populations. The foundation for this dissertation research is provided in the literature review presented in Chapter 2. The review covers: (a) the development of mental model theory; (b) the place of models in learning and instruction; (c) conceptual change and mental model assessment points; (d) the importance of mental model assessment for instructional systems; (e) the status of mental model assessment; and (f) a summary of mental model assessment needs that the proposed methodology is intended to address. Chapter 3 explains the methodology employed in conducting this research. This chapter identifies specific elements of the study design and presents a rationale for their selection. The overall data analysis approach is described, and a more extensive presentation of analysis procedures for mental model representation comparisons can be found in Appendix A.
Results of this research are reported in Chapter 4. Study results are interpreted and discussed in Chapter 5, along with limitations that affected these results and their generalizability. The concluding chapter presents recommendations for further research and development of the methodology. The prototype study conducted prior to this research suggested that the methodology for using graph theory and property set analysis has the potential for providing a valuable new addition to the toolset used

by researchers, designers, and educators in mental model assessment. This study confirmed that result.

CHAPTER 2

LITERATURE REVIEW

The development of mental model theory has been followed by increasing interest in the investigation of mental models in learning and instruction. The term mental model is often found in literature pertaining to education, but it is clear that there is not a common view of what a mental model is. In this literature review, the foundations of mental model theory are traced, and a short definition is offered to specify what is meant by the term mental model in this study. The next topic addressed is the role models serve in learning and instruction. Models have been recognized as both the means by which instruction may be delivered (external models) and a goal of the instructional process (internal or mental models). Investigation of mental models is an important topic for instructional systems, as seen in the review of literature concerning research for instructional design, theory development, and educational practice. A comparative analysis of approaches to mental model elicitation and assessment reveals that there are unfilled needs in the methodologies and tools available to researchers, designers, and educators, particularly those dealing with complex domains. The literature review concludes with an argument for providing a new methodology aimed at addressing some of these needs.

Development of Mental Model Theory

The historical context of mental model theory development includes decades of research in psychology, artificial intelligence (computer science), cognitive neuroscience, and educational technology (Winn, 1996). Although new insights and shifting perspectives have led some researchers to reject earlier views, ongoing progress in understanding mental models and their place in learning and instruction is likely to depend on a careful integration of the best that both earlier work and recent work have to offer.

Background

In the 1980s, mental models emerged "as a central theoretical construct to capture situated and qualitative reasoning" (Seel, Al-Diban, & Blumschein, 2000, p. 130). Two books published in 1983 were devoted to the subject of mental models. The book edited by Gentner and Stevens (1983) consisted of a collection of articles dealing with various phenomena without attempting to present a unified theory (Greca & Moreira, 2000). Johnson-Laird (1983) dedicated his book to presenting a theory to explain mental models and their use in reasoning. When Johnson-Laird (1983) introduced a theory of mental models, he described knowledge and understanding in terms of a process of translating external information into words, numbers, or other symbols that can be used in "thinking" about the subject and then "retranslating" the representational system into some application of its principles. Johnson-Laird used a simple analogy to help explain this cognitive artifact:

The psychological core of understanding, I shall assume, consists in your having a 'working model' of the phenomenon in your mind. If you understand inflation, a mathematical proof, the way a computer works, DNA or a divorce, then you have a mental representation that serves as a model of an entity in much the same way as, say, a clock functions as a model of the earth's rotation. (p. 2)

More recently, in a chapter of a book on cognitive neurosciences, Johnson-Laird (1994) described the central role of mental models as "the natural way in which the human mind constructs reality, conceives alternatives to it, and searches out the consequences of assumptions" (p. 999). In further description of mental model theory, Johnson-Laird says that "it assumes that a major component of reasoning is nonverbal; that is, the construction of models with a structure corresponding to the structure of situations" (p. 1005). There are two key points to note regarding Johnson-Laird's (1994) six main assumptions in his theory of mental models. His first assumption declares that models consist not only of entities and the relations between them but also of their properties. The implication of this assumption is that attempts to investigate a mental model without regard to both relationships and properties may miss significant aspects of the model. Another feature of this set of assumptions is the role that alternative models play. It is possible that more attention is due alternative models in the investigation of mental model development during learning and the use of mental models in problem solving. While Johnson-Laird focused on reasoning, other theorists and researchers have emphasized the role mental models play in cognition and problem-solving, such as Seel, who published a theory of mental models in Germany in 1991. Johnson-Laird and Seel are examples of two different perspectives on mental models. Winn (1996) aligns the work of Johnson-Laird with traditional views of cognitive theory. In the traditional view, the assumption is that the brain works much as a computer does, by representing information as symbols and then operating on them through learned procedures. Winn points to constructivists as being among the first to propose an alternative view of cognition. In this view, students do not merely encode information from the outside world. Knowledge is constructed as learners interact with their environment and one another. Although constructivism in its extreme forms is not universally accepted (Winn), the idea that knowledge is constructed can be found among Seel and other mental model theorists and researchers, as seen in the following statement: "From an instructional point of view, the suggestion can be made that mental models are constructed from the significant properties of external situations, such as learning environments in schools, and the subject's interactions with these well-designed situations" (Seel, Al-Diban, & Blumschein, 2000, p. 131).

Relationship to Cognitive Psychology

The first books on mental models came forth in 1983, but the impetus for this work can be found earlier as cognitive psychology began to establish its place in what had been a strongly behaviorist field. In the 1960s, cognitive psychology emerged as a "shift from the normative paradigm of behaviorism to the interpretative paradigm of symbolic interactionism" (Seel, 2003, p. 62). In some ways the shift away from behaviorism can be thought of as a pendulum swing, because the movement away from the constraints of behaviorism brought psychology back to the focus of its beginnings: the inner workings of the mind. Winn (1996)

traces the discipline of psychology to the end of the nineteenth century and the work of the German psychologist Wundt, whose interest was in the introspection of his subjects and whose data gathering technique was similar to the think-aloud protocols used today, in which subjects are asked to explain their actions and the reasoning behind them. Behaviorists rejected research methods that sought to explore mental states that are not directly observable and focused instead on objective measurement of behavior (Winn). Objective measurements have dominated the field of instructional design for many years; however, cognitive psychologists have begun to address the missing information in the learning equation: what is the relationship between what is happening in the learner's mind and the performance observed? In order to improve the design of learning environments, materials, and activities, educational technologists need to know more than the fact that behaviors occur in an instructional setting; they also need to know more about how learners think and what can be done to influence thought leading to performance. Without rejecting objective measurement, we need to complement it with reliable methods of gaining access to the inner workings of learners' minds (Spector, Dennen, & Koszalka, 2005).

Relationship to Learning Theories and Perspectives

Among the various learning theories and perspectives that could be discussed in relation to mental models, three are particularly important. Schema theory, situated cognition, and cognitive apprenticeship, in concert with mental model theories, are providing a sound theoretical foundation for both research and application in instructional design.

Schema Theory

Winn (1996) explains that there are many descriptions of what schemata are, but he lists four characteristics that are common to all of them. First, a schema is an organized structure in memory, and our schemata contain our knowledge of the world. Second, a schema exists as an abstraction of our experience of the world. Third, a schema is dynamic and subject to change as new information requires adaptation or reorganization. Finally, a schema provides a context for interpreting new knowledge and a structure to hold it (see also, Seel, 2003). Although schemata form the basis for the construction of models (Seel, 2003), schemata and mental models are not necessarily synonymous. Our schemata can be thought of as an internal library that we can scan to collect pertinent information during the construction of a mental model. Schemata may be considered generalized knowledge that can be repeatedly referenced once learned. In contrast, mental models are situation dependent and "are stored in memory as an effective means for having mastered a specific situation, but they will never function as mere patterns to be rotely reproduced in another situation" (Seel, Al-Diban, & Blumschein, 2000, p. 154).

Situated Cognition

Seel, Al-Diban, and Blumschein (2000) point to mental models as a central theoretical construct of the situated approach and provide a description of the relevance of mental models for advocates of situated cognition:

Theories of situated cognition aim to account for how individuals learn in conceptually organized environments which contain: (1) the external world to be understood; (2) the individual's knowledge and internal representations of the world; and, (3) the individual's symbolic interactions with it. In this context, learning is defined as the individual's ability to construct meaning by extracting and organizing information from environments. Thus, cognitive processes that occur as an individual interacts with a given environment are at the core of situated cognition, and, consequently, learning, thinking, and reasoning are made up of interactions between learners and situations in the world. According to the terminology of practical philosophy, situations include all environmental states and conditions to which subjects are exposed and which are not available as self-constructed models. However, this definition involves a dichotomy, since external situations are separated from mental constructions. Therefore, the question arises of how the suggested interactions between a learner and a physical or social situation may occur. The answer of those who advocate situated cognition is that the learner constructs a mental model in order to simulate relevant properties of the situation to be cognitively mastered. (p. 130)

Seel and colleagues (2000) go on to say that mental models are not assumed to be fixed structures in the mind but are constructed when needed. A new learning situation may require the construction, or reconstruction, of a mental model; however, no new mental model is needed if the learner can assimilate the learning material into the structures of prior knowledge. Mental models are flexible, evolve according to ongoing input and reflection, and may not be articulated either internally or externally until circumstances demand an organization of knowledge and information. Further, when thinking about a topic, an individual may employ several models in solving problems and reaching conclusions (Held, Knauff, & Vosgerau, 2006).

Cognitive Apprenticeship

The traditional model of a trade apprenticeship can be approximated in learning situations in which learner development is linked to the shared expertise of a given expert or project. This form of situated cognition is known as cognitive apprenticeship. However, a broader application in instruction can be seen in the work of Seel, Al-Diban, and Blumschein (2000), who find that the cognitive apprenticeship approach "corresponds best with the principles described for the construction of mental models through instruction, due to the fact that this approach implies modeling (as providing an expert's performance) as the central component of teaching" (p. 138). In other words, the use of models and modeling in instruction can be a powerful force in promoting learning and need not be limited to the knowledge and skill of a single expert.

Current Perspectives

Today's views of mental models build upon the idea that mental models are the means by which the mind projects order onto perceptions and experiences. However, the order provided by a mental model goes beyond psychological constructs such as schemata and scripts, which organize information and procedures. Mental models enable individuals to explain cause and

effect relationships, identify interdependencies, recognize temporal relations, and evaluate the plausibility of phenomena. They are the means by which a person can take a holistic view of the world and his or her experiences in it. It is this holistic view that underlies another function of mental models: prediction. A person's mental model generates expectations about the outcomes of possible actions and can simulate real actions in the mind (Seel, 2003). Mental models make thought experiments possible as an individual considers different combinations of conditions that can influence outcomes (Forbus & Gentner, 1997; Seel, 2003). This aspect of mental models is a key element in current investigations of human performance. Recent work regarding mental models and their place in learning and instruction addresses three important issues: complexity, dynamic systems, and the need for holistic approaches. The focus for some of this work is on the need to respond to society's demands for a highly skilled workforce that can deal with ever-changing conditions and complex problems in a variety of domains (Spector, Dennen, & Koszalka, 2005). Another focus, not unrelated, is seen in the work of science educators who are concerned with developing future professionals who can "do" science; that is, they consider mental model building and modeling as the core of science activity (Taylor, Barker, & Jones, 2003). Greca and Moreira (2000) cite the concepts of mental models, conceptual models and modeling as the theoretical axes for the new trends in science education. Kolkman, Kok, and van der Veen (2005) stress the importance of mental models in addressing complex, unstructured problems. Their research is based on the idea that mental models are at the core of the problem solving and knowledge production process, and that a person's mental model acts as the filter through which s/he observes the problem situation. They indicate that the externalization of mental models through concept mapping procedures provides a way to communicate and integrate knowledge, explore differences, and identify missing or incorrect information. The work of Kolkman and colleagues and the other researchers mentioned illustrates why mastery learning of pre-determined outcomes does not apply when "the problem is concerned not with the discovery of a particular fact, but with the comprehension or management of an inherently complex reality" (Kolkman et al., p. 328). The message is that we cannot master the solution to every problem that may arise in the future; our goal is to learn how to solve new problems when they occur. Addressing complex problems in dynamic environments requires a holistic perspective that can enable problem solvers to consider a variety of conditions that may affect outcomes for proposed solutions.

Mental Model Summary

The development of mental models theory has strong ties to cognitive psychology and theories of learning and cognition (e.g., schema theory, situated cognition, and cognitive apprenticeship). Current perspectives emphasize the importance of mental model development in dealing with problem-solving in complex domains. Based on the discussion so far, mental models:

- reflect the perceived structure of the external world;
- are temporary constructions created in response to a need to go beyond existing conceptual structures in explaining something or resolving a problematic situation;
- are constructed
  - from current knowledge or analogous models,

  - from everyday experience, and
  - from other people's explanations;
- are flexible and evolve according to input and reflection;
- provide a way to integrate and communicate information and understanding of complex material from multiple domains;
- might not be articulated until circumstances demand an organization of knowledge and information; and
- cannot be observed directly but can be inferred based on an appropriate form of external representation.

A key issue addressed in this dissertation is what counts as an appropriate form of external representation given the obvious relevance of mental models for learning, performance and instruction.

The Place of Models in Learning and Instruction

Models in learning and instruction occur both as internal conceptions and external expressions that communicate model elements to others. Mental models are being recognized as integral to the learning process (Clement, 2000; Seel & Dinter, 1995). Seel (2006) summarizes the relationship between mental models and learning in two assumptions: (a) an individual constructs a mental representation of reality, and (b) cognition and learning occur as individuals use mental representations that organize symbols of experience or thought to effect a systematic representation of the experience or thought as a means of understanding it or explaining it to others. Instruction can intentionally support the development of these mental models and may use various models and model building activities in the process (Justi & Gilbert, 2000; Seel, 2003). Seel and Dinter see mental models as "conceptual structures that underly (sic) learning-dependent transitions which are strongly interrelated with prior knowledge and specific preconceptions as well as with causal explanations" (p. 7). It is the goal of instruction to support learners' progress in moving from their initial mental models to acquisition of the target model identified in the subject learning objectives (Buckley, Gobert, & Christie, 2002; Seel, 1999). Duit and Treagust (2003) describe this process in the context of science education but refer to it as conceptual change. Clement and Steinberg (2002) note that the term conceptual change has been used in a variety of ways, ranging from changes relating to surface level details to radical shifts in core concepts. In their work, Clement and Steinberg refer to a type of learning where a significant new cognitive structure is created, whether that structure is entirely new or is a substantial modification of an old structure. Mental models can be assessed for several purposes, including evaluating conceptual change. Before examining the details of assessing mental models, the following discussion will clarify terminology relating to models in learning and instruction. At a general level, a model is a representation. Clancey (1988) adopts the view that a model is a purposeful representation of some object or process. A model "is constructed in accordance with specific intentions in order to simplify its original in several respects" (Seel, 2003, p. 61). Gobert and Buckley (2000) consider a model to be a simplified representation of a system. This representation focuses on specific aspects of the system and presents objects, events, or ideas in ways that make them visible or more readily comprehended. The representation may include a level of abstraction that is not inherent in the system being

described. In summary, a model serves to communicate, either implicitly or explicitly, an understanding of some aspect of human experience that cannot be easily apprehended by direct observation of the original subject.

Models in learning and instruction can be explored in terms of characteristics, function, or expression. A variety of models and modeling approaches can be used by both learners and instructors. Clancey (1988) stresses the utility of models in instruction: "Summarizing broadly, instruction requires being able to use models in multiple ways–in solving problems, in explaining, and in learning from experience" (p. 58). The impact of model use on the development of mental models in learners is an important topic in current instructional research (Clement, 2000).

To begin, models can be examined according to whether they are internal to the mind of an individual or external (expressed in some form). Figure 2 provides a high-level view of the relationships between internal (mental) models and external (expressed) models in learning and instruction. Internally, the learner begins with his or her current knowledge, potentially including a preliminary mental model of the subject. The learner may be asked for an external representation of his or her model, which the instructor, who has his or her own mental model, can use to evaluate learner needs and select the best teaching model(s) to assist the learner during instruction. Teaching models can take many forms and are intended to help the learner integrate new information with existing knowledge and form an accurate mental model. Learners may observe or even interact with teaching models and/or their own external expressions of their mental models during the instruction process. As a product of instruction, the learner forms a mental model, which s/he expresses for post-instruction assessment. Each of these elements is explained in more detail in the discussion that follows.

[Figure: flow diagram showing the learner's initial state (current knowledge and schemata; current mental model, if any) feeding a learning state (integration of current knowledge and schemata; mental model development) that yields the learner's mental model after instruction. The instructor's mental model informs the teaching model(s), and learner models are assessed pre-instruction, during development, and post-instruction, all within the learning environment.]

Figure 2. Internal and external models in learning and instruction.

Types of Models

Models, whatever they may represent, can be categorized in a variety of ways. For example, Harrison and Treagust (2000) have developed a typology of concept-building analogical models used to enhance scientific investigation, understanding, and communication. Their typology arranges models according to whether they may be considered concrete or abstract. For purposes of this literature review, three dimensions of model designations are particularly pertinent: simple vs. complex, quantitative vs. qualitative, and functional vs. behavioral.

Simple vs. Complex

Spector, Dennen, and Koszalka (2005) identify the following indicators of complexity: the presence of many components or factors; ill-defined or vague components; uncertain input variables and imprecise processes; and dynamic systems that behave differently depending on factors such as varying internal feedback mechanisms and nonlinear relationships among components. Complex models can include general concepts and background processes that are part of an individual's understanding and can be brought to bear in a specific problem-solving situation (see Clancey, 1988). Spector and colleagues explain that while much is known about how to improve knowledge of facts and simple procedures, educational researchers and learning scientists do not yet understand the best ways to improve knowledge and performance in complex task domains.

Quantitative vs. Qualitative

Quantitative models are expressed in numeric terms and may represent physical , economic projections, and other types of quantifiable entities. Qualitative models are descriptive of spatial, temporal, and causal relations of objects and processes. They may be expressed in a notation, held in a person's mind, or implemented in a computer program (see Clancey, 1988). Although both quantitative and qualitative internally held models may be "mental," the expression of a quantitative model might be assessed more easily than a qualitative one due to a closer relationship between the way the model is stored internally and the way it is expressed externally. That is, internally held numeric relations are likely to be highly similar to their external representation. The expression of qualitative models may take many forms, and an individual may have difficulty externalizing his or her model fully and accurately.

Functional vs. Behavioral

Clancey (1988) distinguishes between functional and behavioral models. Functional models consist of hierarchical compositions of functional operators and/or structural components. Behavioral models express the links between system states in a causal-associational network. The functional model is considered to be general, whereas the behavioral model is related to problem-solving in specific situations. This distinction has implications for both the development of mental models in learners as well as the assessment of

those models. Selecting the instructional remedy for an inaccurate mental model in a learner requires an understanding of both the nature of the model error and its source.

The Expertise Continuum

In the context of learning and instruction, models can be categorized by the status of the model owner along the continuum of naïve learner to expert. Although a learner may not be expected to achieve expert status in a given instructional sequence, learning-dependent mental model progression is intended to move the learner from his or her initial state to a more advanced state in which s/he can provide causal explanations that satisfy the learning objectives of the instruction (see Seel, Al-Diban, & Blumschein, 2000; Snow, 1990).

Naïve. Mental models are constructed from three sources: a) from a set of basic components of world knowledge or analogous models in current understanding, b) everyday observations of the outside world, and c) other people's explanations (Seel, 2003). When learners are introduced to a subject, initial mental models they may have constructed to explain their world experience are described as naïve (Brewer, 2001). Naïve learners may have little or no formal instruction in a subject; however, they may have naïve knowledge, or theories, that may support or hinder their learning, depending on the accuracy of their conceptions. Misconceptions may be described as flawed or incomplete mental models (Ogan-Bekiroglu, 2007), and such misconceptions may need to be overcome during learning (Mayer, 1989; Clement, 2000).

Novice. Mayer (1989) defines a novice as a student who lacks prerequisite knowledge and capacities for the subject domain. Mayer's definition might also apply to naïve learners, but Clement (2000) makes a distinction between naïve learners and novice learners. According to Clement, naïve learners have little or no knowledge of a subject, while novice learners have some knowledge based on instruction or experience.

Expert. An expert is a person who is acknowledged by peers (or others qualified to judge) to be a master in his or her field. For example, Ford and Sterman (1998) consider a system expert to be a person who participates in the process directly in an operational or managerial role. One contrast between novices and experts is that novices may not see some parts of a phenomenon while experts see them easily and know the behavior of those parts and its causes (Buckley, 2000).

Intermediate. As novices develop expertise, they may go through a number of intermediate stages (Winn, 1996). An intermediate level individual has more knowledge than a novice but less knowledge than an expert in the subject area.

There are qualitative differences between the mental model of a novice and that of an expert in a subject (see Glaser, 1990). For example, novices tend to focus on surface features of problems in contrast to experts, whose understanding is likely to be theory based (Clement, 2000; Forbus & Gentner, 1997). Experts tend to be more efficient in the expression of their models, a likely consequence of their reliance on theory and greater integration of, and swifter access to, knowledge (see Glaser, 1990). Instructional researchers are concerned with how

instruction can support the construction of mental models in learners (Buckley, 2000; Gobert, 2000; Ifenthaler & Seel, 2005; Mayer, Moreno, Boire, & Vagge, 1999) and the process by which novices progress to intermediate stages and on to expert status (Burkhardt, Détienne, & Wiedenbeck, 1997; Seel, 2003; Seel & Dinter, 1995; Spector, Dennen, & Koszalka, 2005).

Mayer (1989) noted that novice learners are more likely than more skilled students (i.e., intermediate learners) to benefit from direct instruction in how to construct a conceptual model, because those with greater skill are likely to already have and use conceptual models. Pre-existing conceptual models may conflict with models presented during instruction. Linn (2003) provides additional comments on the issue of pre-existing and conflicting or unrelated models. Because students may come to a learning situation with a repertoire of disconnected ideas about a phenomenon, the learning task involves adding new, more normative ideas and sorting out the repertoire. The instructional design challenge is to help students evaluate prior ideas and integrate new ideas with relevant existing knowledge. "Designing opportunities for explanation that help students sort out and prioritize their ideas requires some knowledge of the ideas that students typically hold, as well as some mechanism to support the process of comparing ideas" (Linn, p. 731).

In order to meet this challenge, there must be effective means of assessing learner mental models prior to and during instruction. Assessment techniques that do not ask a learner for a direct representation of his or her mental model may miss important components of the learner's view. Understanding these components may be the key to designing instruction that helps students discard misconceptions that can hinder learning progress.

Model Based Teaching and Learning

Seel (2006) lists four functions of mental model building. First, models can help simplify an investigation to specific and relevant phenomena in a closed domain. Second, they can aid in the visualization of a complex structure or system. Next, models can serve in relating features of an unknown domain to a known one via analogy. Finally, models may simulate the processes of a system, enabling individuals to manipulate objects mentally in a situation to explore the effects that may occur in real-life situations. "These simulation models operate as thought experiments which produce qualitative inferences with respect to the situation to be mastered" (Seel, p. 88). Clancey (1988) provides a similar description when he says a simulation model of reasoning consists of a general model and some inference procedure (also called a cognitive model).

Buckley (2000) defines model based learning as a "dynamic, recursive process of learning by building mental models" (p. 896). Mayer (1989) offers a summary of processes by which instructional models can support the building of mental models. First, models can help focus students' attention on the conceptual information in the lesson, which may include objects, states, and actions and the causal relations among them. Next, models can help students organize information into coherent explanations and build internal connections. Finally, models can assist students in integrating new information with existing relevant knowledge and in making connections with external phenomena. The following discussion provides more details concerning the relationship between instructional models and learners' mental models.

Models in the Instructional Process

In science education, the value of models and modeling is now recognized by reform movements, and models and modeling are considered essential parts of scientific literacy (Gobert & Buckley, 2000; Linn, 2003). Gobert and Buckley explain the need for research to support the development of a coherent theory of model-based learning and teaching. Such research needs to address the cognitive processes involved in learning and how model-based teaching should be approached. Although a well-developed theory of model-based learning and teaching is not available, considerable research and discussion exist concerning the use of models and the relationships between various types of expressed models and the mental models of learners, teachers, and experts. A summary of key model types in the context of model-based learning will illustrate the importance of mental model assessment and the need for new tools that provide both qualitative and quantitative data about the models of both learners and experts. This insight is required to support research to improve a theory of model-based learning and teaching.

Target Model. A target model represents the goal of the instructional process–what the learner is expected to understand and express during learning assessment (Gobert & Buckley, 2000).

Expressed Model. An expressed model is an external representation of the target generated from one's mental models and expressed through action, speech, written description, and other material depictions (Gobert & Buckley, 2000).

Consensus Model. A consensus model is an expressed model that has been developed, tested, and agreed upon among scientists or among groups of learners (Gobert & Buckley, 2000). A consensus model may serve as the target model for an instructional process.

Mental Model. A mental model is a personal internal representation of the target system being modeled (Gobert & Buckley, 2000).

Teaching Models. Teaching models "are developed and used by teachers and curriculum writers to promote the understanding of a target system" (Gobert & Buckley, 2000, p. 892). They may or may not be intended to represent an accurate depiction of a target system. During an instructional sequence, these models may present a progression of views of the target system. Early models may illustrate particular aspects of the target and support the learner's transition from naïve or novice mental models to intermediate and increasing levels of expertise.

The goal of an instructional process is not necessarily for learners to be able to understand and memorize a target model. The greater goal is for learners to engage in the activity of modeling, described by Greca and Moreira (2000) as the establishment of semantic relations between theory and phenomena or objects and considered to be the fundamental activity in the sciences. At a more detailed level, the models listed above may be described as (a) conceptual or explanatory models, (b) semantic models, or (c) causal models. These terms

are not all mutually exclusive; that is, more than one of these descriptions may be relevant for a single model.

Conceptual or explanatory models. Snyder (2000) defines a conceptual model as "a representation used by scientists and teachers to understand and to teach the target system" (p. 979). Mayer (1989) defines a conceptual model as "words and/or diagrams that are intended to help learners build mental models of the system being studied; a conceptual model highlights the major objects and actions in a system as well as the causal relations among them" (p. 43). Conceptual models are created in the mind to support thinking by explaining observations, sometimes by using an analogy to relate the observation to something already known (Clement, 2000). Clement emphasizes the importance of conceptual models:

These issues are of interest to science educators because of the suspicion that conceptual models can be important for the attainment of 'conceptual understanding' in science at a level that goes beyond memorized facts, equations, or procedures. The hope is that such understandings not only lead to a student's that science can 'make sense' via satisfying explanations, but also embody a form of flexible knowledge that can be applied to transfer problems. (p. 1042)

Semantic models. One way to represent knowledge mentally is through semantic models, also known as symbolic models (Seel, 2003). According to Seel, cognition and learning take place as learners use mental representations in which they organize symbols of their experience or thought to effect a systematic representation of this experience or thought. This representation then serves as a means of understanding it or of explaining it to others.

Causal, or cause and effect models. Causal models are representations that allow us to describe what we know about the world and how changes in the values of causes affect outcomes (Sloman, 2005). Sloman argues that causal models are important for understanding how people make judgments and decisions. Such models can allow us to represent how the world works and how it might have been if some cause or causes had different values. Sloman also points out that causal models are not necessarily helpful for understanding noncausal systems. Although he doesn't elaborate on the distinction, Sloman's discussion suggests that noncausal models relate more to description than to operational representation. Descriptions may be important aspects of a holistic mental model. For example, description may provide the means of recognizing the entities involved in a cause and effect relationship.

Model Based Learning

Gobert and Buckley (2000) define model-based learning as the construction of mental models of phenomena. Learners respond to a particular task by constructing mental models, which they evaluate and revise as needed. Gobert and Buckley go on to say:

While it is impossible to know precisely the nature and content of mental models, even our own, we can as researchers draw inferences about the nature of one's mental models based on the types of reasoning learners are able to do with the knowledge they possess.

Model formation, we assume, is the construction of a model of some phenomenon by integrating pieces of information about the structure, function/behavior, and causal mechanism of the phenomenon, mapping from analogous systems or through induction. (p. 892)

Correspondingly, Gobert and Buckley define model-based teaching as "any implementation that brings together information resources, learning activities, and instructional strategies intended to facilitate mental model-building both in individuals and among groups of learners" (p. 892).

Figure 3 illustrates the theoretical framework Clement (2000) offers for model based learning. Clement offers this framework as a means of organizing research problems relating to instruction with specific content goals. The goal of the learning process is the student's understanding of a target model, which may differ from an expert consensus model in several respects. For this reason, Clement places the expert model outside of the learning process. The target model may be less sophisticated than that of the expert and may represent an educator's view designed to meet the needs of learners who require simplified language and analogy appropriate for their age and experience.

The starting point for the student's progression toward the target model consists of his or her preconceptions and natural reasoning skills. Preconceptions may include useful conceptions (prior knowledge), which can serve as a foundation for learning the target model, but alternative conceptions that conflict with the target model may also be present. The learning processes move the learner from preconceptions to the target model. Such processes may include one or more intermediate models that can support the learner's acquisition of the target. On the instruction side, intermediate models are presented so that the information presented to the student builds toward the target model. On the learner side, internal mental model progression is assumed to correspond to the instructional progression toward the target model. The student model after instruction is compared with the target model to assess learning achievement. Clement notes that the target model may replace, dominate, or coexist with initial conceptions, depending on the domain (as illustrated by the arrow joining the learner's initial state with the target model).

[Figure: based on Clement (2000). Preconceptions (alternative conceptions and models, useful conceptions and models, and natural reasoning skills) feed learning processes that carry the learner through intermediate models M1, M2, ..., Mn toward the target model; the student model after instruction reflects this mental model progression, and the expert consensus model sits outside the framework, which is designed for educators with specific content goals.]

Figure 3. Framework for model based learning for specific content goals (based on Clement, 2000).

Clement (2000) indicates the need for substantial research within this framework. "The problem is that there are few topics for which we know enough about students' preconceptions and even fewer where we know enough about the best choices for other entities in the framework to guide curriculum development or instruction in a principled way" (p. 1043). To support such research, tools are needed to provide insight in areas where Clement notes some difficulties in model based learning. First, internal conceptual models cannot be observed directly. Second, there may be a conflict between a learner's new models and his or her pre-existing intuitive models; this conflict would require a conceptual change or reorganization. Finally, terminology used for models in instruction may conflict with meanings in natural language. Clement and Steinberg (2002) found that representational tools that enable a student to externalize his or her mental model can support mental model building as well as provide a shared reference point for detailed conversations with an instructor. Representational tools, then, can serve both learners and instructors. During model based teaching and learning, learners develop mental models in stages. In order to ensure that those models mirror the target learning model at the end of the process, learning progress must be assessed at key points. Clancey (1988) explains the importance of assessment for the purpose of diagnosis during the teaching process:

In instruction, we monitor the behaviour of the student (a cognitive system), look for discrepancies from the ideal specification (target problem-solving model), track discrepancies back to faults in the student's presumed world model or inference procedure, and 'repair' the student by instruction (p. 60).

Clancey likens this function to debugging the student learning system and says that researchers in the science of instruction want to understand the origin of bugs so that instructional sequences can be improved to prevent bugs from forming or at least to catch them before they become ingrained.

Summary of the Place of Models in Learning and Instruction

Models have been recognized as being integral to the learning process. On the instruction side, models serve to communicate information needed to support the learners' achievement of targeted learning goals. On the learner side, information received during instruction aids in building, evaluating, and revising mental models in a progression toward the desired learning achievement. The importance of understanding and addressing learners' preconceptions and misconceptions has been a common theme throughout the discussion of the place of models in learning and instruction. Assessing learners' initial states is of central concern for this dissertation research because of the relationship of initial mental models to conceptual change leading to learning achievement.

Conceptual Change and Mental Model Assessment Points

Although views of what conceptual change is vary somewhat, a number of authors point to the role of conceptual change in the learning process and the relationship of conceptual change to mental model development. Assessing learners' mental models at key points can identify what changes are needed and whether or not the desired conceptual change has occurred.

Views of Conceptual Change

Learners do not come to an instructional setting free of prior conceptions, and their learning may be inhibited if these prior conceptions are not handled properly (Ogan-Bekiroglu, 2007; Seel, Al-Diban, & Blumschein, 2000). In general, conceptual change refers to the process of revising or removing prior conceptions (Chi & Roscoe, 2002). Mayer (2002) describes conceptual change as the mechanism underlying meaningful learning; it occurs when a learner moves from not understanding something to understanding how it works. Mayer goes on to say that over time, conceptual change "has been represented as a process of achieving structural insight, accommodative learning, understanding of relations, deep learning, or–more recently–mental model building" (p. 101). In attempting to reconcile four views of conceptual change (including that of Chi and Roscoe), Mayer arrives at the following summary. First, conceptual change is a cognitive process in which the learner seeks to construct knowledge that is coherent and useful. What changes is the learner's knowledge, with the result being a mental representation that is organized and functional.

Chi and Roscoe's (2002) view of conceptual change in learners focuses on the problems posed by misconceptions. They define conceptual change as the process of removing misconceptions; they refer to the process of repairing preconceptions as conceptual reorganization. Misconceptions are considered more serious than preconceptions because they are the result of a miscategorization of concepts. Chi and Roscoe claim that it is easier to realign preconceptions within the same category of concepts than to get learners to reassign a concept to a different ontological category. The main challenge in removing misconceptions is that learners lack awareness that they have misconceptions and/or lack the alternative categories to which they should reassign their misconceptions.

Learners are not the only ones who may be unaware of misconceptions. Instructional designers may be unaware of common misconceptions that should be anticipated as a learning module is designed. Instructors may need to determine whether common or other misconceptions are present in specific individuals so that they can decide on appropriate instructional interventions. Seel and Dinter (1995) explain that the instructional strategy to be selected may depend on the learner's initial states concerning mental models, but the need to examine learners' mental models is not limited to the beginning of instruction (see Clement & Steinberg, 2002). Misconceptions can affect individuals throughout the expertise continuum, and misconceptions can be reciprocating, reinforcing each other (Buckley, 2000). Models can be evaluated and revised repeatedly during learning (Buckley; Clement, 2000). Instructors can introduce new teaching models and/or analogies to improve learners' understanding, and learners respond to new material by revising their internal models. Learners' interpretations of teaching models, analogies, and other material must be checked to see whether or not they have imported unintended implications into their working models (Clement).

Chi and Roscoe (2002) advocate examining learner knowledge at the mental model level rather than piecemeal because the mental model adds a structure in which learner beliefs are embedded. Assessment of the learner mental model can reveal whether it is coherent (i.e., model parts are connected in an organized manner) or incoherent (i.e., fragmented). If coherent, Chi and Roscoe indicate that the model can be examined to see whether it is correct or flawed, meaning that the model is organized incorrectly. As is the case with misconceptions, Chi and Roscoe explain that learners with incorrect coherent mental models may be at a greater disadvantage than learners who understand that their models are incomplete. Learners with incorrect models may be able to answer questions adequately and consistently based on such models, but they may be blind to their lack of deep understanding.

Deep understanding is often seen as the goal for student learning. Clement (2000) describes this goal as "an internal, integrated, and deeply understood model that the student can use to reason with and make inferences from" (p. 1045). Such learning cannot be assessed as a list of discrete, measurable objectives (see Snow, 1990; Winn, 1996). Mental model assessment requires a methodology that can reveal the key elements of a learner's mental model and present them in a manner that identifies the meanings of the learner's concepts and the relationships among them.
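As a purely illustrative aside, the coherent/fragmented distinction has a simple structural analogue once a model representation is held as a graph: a representation that falls apart into disconnected clusters is, at least structurally, fragmented. The following minimal sketch (in Python; the function name and the example model are hypothetical and are not drawn from Chi and Roscoe) counts the connected clusters of concepts in a represented model:

from collections import defaultdict

def connected_components(edges):
    """Group concepts into connected clusters; several clusters suggest
    a structurally fragmented representation. This says nothing about
    whether the model is correct, only whether its parts are linked."""
    graph = defaultdict(set)
    for a, b in edges:
        graph[a].add(b)
        graph[b].add(a)
    seen, components = set(), []
    for node in list(graph):
        if node in seen:
            continue
        stack, cluster = [node], set()
        while stack:
            n = stack.pop()
            if n in cluster:
                continue
            cluster.add(n)
            stack.extend(graph[n] - cluster)
        seen |= cluster
        components.append(cluster)
    return components

# Hypothetical learner model with two disconnected idea clusters.
model = [("rain", "clouds"), ("clouds", "evaporation"), ("CO2", "warming")]
print(len(connected_components(model)))  # 2 -> two disconnected clusters

Connectedness is only a crude structural proxy: as noted above, a coherent model can still be organized around an incorrect conception, so a check like this can flag fragmentation but cannot substitute for an examination of meaning.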

Assessment Points in the Instructional Process

There are a number of points at which the assessment of learner mental models can provide important information. These points include a pre-instruction evaluation, multiple

evaluations during instruction, and one or more evaluations after instruction. Table 1 identifies these points and the kind of information needed. Before a learner begins an instructional sequence, an evaluation of his or her initial mental model can assess the learner's current knowledge and readiness for instruction, including an identification of misconceptions and gaps in knowledge. During instruction, mental model assessment can evaluate whether or not the learner has been able to integrate data from diverse sources, correctly assimilate new information with prior knowledge, and comprehend the target learning model. Assessment during instruction can track learning progress, examine the differences between initial mental models and mental models at milestones in the learning process, and determine whether or not the learning objectives of those milestones have been achieved. Mental model assessment can also identify new or persisting misconceptions that must be overcome in order for deep understanding to be achieved. Finally, a learner's level of achievement along the continuum of novice to expert can be evaluated at the end of the instructional process. Post-instructional assessment of mental models can evaluate knowledge retention over time. It also can examine how well the learner's mental model can support problem-solving in novel situations where the knowledge acquired during learning is relevant.

Table 1
Assessment Points for Learner Mental Models

Pre-instruction: Assesses the learner's current knowledge and capabilities (including misconceptions, gaps in knowledge, and readiness for instruction)

Milestones in learning: Monitors learning progress by checking for grasp of new material, proper integration with existing knowledge, identification of new misconceptions, etc.

Levels of achievement (e.g., novice, intermediate such as apprentice, or expert): Evaluates achievement of learning objectives at the level of expertise targeted

Post-instruction (transfer): Determines the learner's ability to retain knowledge and apply it in relevant post-instructional activities, including new situations in which the knowledge may support problem-solving

Summary of Conceptual Change and Mental Model Assessment Points

Whether conceptual change is viewed as a general process of the acquisition and organization of knowledge (e.g., Mayer's definition) or a correction of misconceptions (e.g., Chi and Roscoe's view), the assessment of mental models provides a means of evaluating the level of understanding a learner has regarding the subject of instruction. It is at the model level, rather than through piecemeal investigation, that the depth of understanding is revealed. The goal of assessing mental models at key points is to determine what conceptual change is needed, to track the learner's progress, and to evaluate whether or not conceptual change has occurred.

Importance of Mental Model Assessment for Instructional Design

Mental model assessment is important for instructional design in three interrelated functions: research, theory development, and practical application. Research provides the basis for theory development. It also supports the evaluation of applied theory. Mental model assessment methods are needed for additional research aimed at understanding the role of models in learning and instruction. Instructional design practices intended to affect mental model development in learners require mental model assessment methods to evaluate (a) the effectiveness of the practice for groups of learners and (b) the effectiveness of the practice in supporting the learning progress of specific kinds of learners. Researchers, theorists, and educators have identified a number of areas in which mental model assessments can provide critical input to the search for greater understanding of how best to support learning and learner performance. The following summary provides examples of specific areas of investigation. While many of these research areas were proposed some years ago, it is clear from recent reviews of literature that the work described is far from being finished. The questions and research agendas remain relevant today.

Areas of Investigation

In 1990, Snow introduced a schematic classification of cognitive and conative constructs for educational assessment, calling for research to validate the constructs and support further development of the scheme. Among the items on his chart entitled "Components of Instructional Theory" (p. 457) are a number of terms we associate with mental models (e.g., naïve theories, mental model progression, causal explanation, modeling and internalization, etc.). Snow explains the need for assessment research and development that can provide "richer, thicker, deeper descriptions of student learning progress than is possible with conventional tests and questionnaires, and to make these descriptions more directly useful for one or another educational need" (p. 469). Ten years after Snow's article, we see some of the same issues being raised. Clement (2000), who cites individual cognition as a central determining feature of learning about which we have much to discover, lists a series of research questions whose answers could make a significant contribution to theories of instruction:

• What is the role of mental models in science learning?
• What is the nature of these models as knowledge structures?
• What learning processes are involved in constructing them?
• What teaching strategies can promote these learning processes?

Although Clement is speaking from the context of science education, these questions are fundamental for education in general (Forbus & Gentner, 1997; Seel, 2003; Seel & Dinter, 1995). Most discussions of mental model assessment focus on the need to understand the learner's mental model; however, some authors also discuss the importance of understanding the mental models of teachers. Ogan-Bekiroglu (2007) explains that teacher conceptions could be at variance with those of recognized experts (e.g., scientists) and might compound problems relating to misconceptions in learners. Clement and Oviedo (2003) note a lack of sufficient discussion of the cognitive processes of teacher and students working together in building mental models. Duit and Treagust (2003) discuss the impact of teachers' conceptions on the conceptual change observed in learners. One way to view mental models in instruction is as a triad consisting of:

• the "expert" or target mental model–the instructional goal;
• the learner mental model–the learning achievement; and
• the instructor mental model–the teacher's view of the instructional goal, which guides the teacher's response to the learning needs of the student.

Whatever the mental model and purpose of assessment, there are substantial challenges in determining the details of the model and comparing it with one or more companion models, such as novice to expert or learner to self (learning progression). Mayer (1989) found support for the idea that conceptual models for scientific text can affect student thinking about the material; however, he emphasized that traditional measures of the overall amount of recall or of correct answers on a comprehension test would not have revealed the strong differences between model and control groups that were found when measures designed to evaluate differences in systematic thinking were used. He calls for future research that includes measures addressing how models help students select, organize, and use scientific information and that provides a more "fine grained analysis" (p. 59) than the work described. Mayer encourages continued development of theory and practice and offers a research agenda for how conceptual models should be used in instruction. The agenda is summarized in the following questions:

• What is a good model? This question must be asked relative to specific learning goals, such as transfer performance. Research should address the following characteristics: good models are complete, concise, coherent, concrete, conceptual, correct, and considerate. "Systematic research is needed to identify the relative contributions of each characteristic and to establish better operational definitions of each" (p. 60).
• Where should models be used? Research is needed to identify the conditions under which models should be used.
• When should models be used? Research is needed to identify the most effective placement of models within a lesson.
• Who is a model good for? Research is needed to identify individual learner differences that determine the effectiveness of models.
• Why use models? Research is needed to relate model use and instructional goals.

Mayer (1989) calls for general research on model based teaching and learning and for assessment methodologies that can provide the information needed for analysis. Others have focused on the learner aspect of model use. Seel (2003) asks how we can influence the model-building activities of learners. More specifically, Buckley (2000) asks how different tasks and learning activities influence model-building and how different types of representations contribute to model-based learning. Clement (2000) reports that even young students (5th grade) can construct mental models of complex causal and dynamic systems which they can use to make inferences. He notes that not all students can do this equally well and calls for investigation of the factors that are responsible for individual differences. Clement calls for detailed studies of student learning progress, including quantitative pre- and post-tests as well as qualitative investigation of mechanisms for intervening learning. Research regarding learning progress and conceptual change is a common theme among a number of authors in this literature review (see Snow, 2000; Mayer, 2002).

Winn (1996) identifies another aspect to the research examples cited above. He recommends that research address ways in which technology can be used to help students construct meaning for themselves in accordance with constructivist approaches (e.g., experience and shared knowledge construction). Among the new technologies to be explored are virtual reality and enhanced simulation modeling tools. Regardless of the instructional approach, developments in technology will require ongoing investigation of their role in instruction and learning.

Seel, Al-Diban, and Blumschein (2000) observed, following a review of more than 240 publications (published from 1983 to 1999), that more than 20% were explicitly concerned with how instruction could support the construction and progression of mental models. However, they also observed that earlier research suffered from weaknesses in theoretical foundation, lack of formal precision, limitations in scope, and an assessment focus confined primarily to the diagnosis of malfunctions of mental models in limited learning situations. In speaking of developments in psychology theory, Winn (1996) emphasizes the need for fundamental changes in mainstream instructional design practice. He explains that "it is not sufficient simply to substitute cognitive objectives for behavioral objectives and to tweak our assessment techniques to gain access to knowledge schemata rather than just to observable behaviors" (p. 86). Science educators dealing with questions of how best to use models in instruction recognize the need for improved cognitive theories of conceptual change (see Clement, 2000; Mayer, 2002). Research to support theory development in any of these areas requires a good methodology for eliciting and analyzing mental model representations.

Summary of the Importance of Mental Model Assessment for Instructional Design

Regardless of the focus of proposed investigations, there has been a repeated acknowledgement of the need for assessment techniques that can provide better insight into mental models than traditional measures can. There is an ongoing quest for improved theories of learning and instruction, and there is a corresponding quest for methods and tools that can aid researchers in examining the effect of models and instruction on the development of mental models in learners. Another focus of investigation is the nature of mental models in experts, both in comparison to the mental models of less knowledgeable persons and to the

mental models of peers. Finally, there is a call for an examination of the relationships among a triad of mental models: the expert (or target) model, the learner model, and the model of the instructor. All of these investigations can contribute to improved theory and practice in instructional design, but they require mental model elicitation and analysis techniques that can support the efforts. A number of techniques for eliciting mental model representations have been developed. While many of these approaches provide useful information, additional methodologies are needed.

Status of Mental Model Assessment

The assessment of mental models involves a number of challenges. Some of these challenges are obvious on the surface, while others are less obvious but perhaps just as important. The assessment of mental models is likely to increase in difficulty in relation to the complexity of the subject domain. Spector and Koszalka (2004) define complex task domains "as those which typically involve many interrelated factors and which involve many problems without standard or single solutions" (p. 7). Clement (2000) complains of the lack of ways to represent complex knowledge structures. Techniques for mental model assessment use several types of elicitation methods to acquire data from those whose models they want to examine. A survey of these methods suggests that not all are equally adept at communicating complexity. In turn, the mental model assessment tools that rely on these methods may be limited to one degree or another by the type of data obtained and the approach taken in analyzing the data.

Assessment Challenges

The obvious challenge in comparing the mental model of one person with a target model (e.g., an expert's mental model) is that of finding effective means for the model owners to communicate what is in their minds so that an observer can assess the similarities, differences, and levels of complexity of their knowledge. Knowledge representation has become a subject of particular interest in computer science fields such as knowledge base design, expert systems, and artificial intelligence (see Brachman, Levesque, & Reiter, 1992). Much of this work deals with formal languages that can be used to translate information into computer-compatible statements of logical relationships and rules for processing. The expertise required to use the tools of this domain is probably beyond the capabilities of most persons involved in assessments for instruction or educational research; in addition, neither learners nor experts are likely to be able to express their mental models in such terms without either (a) extensive training or (b) the development of a very sophisticated computer program to make the translation for them.

Mental model assessment for both general research and specific instructional design problems is limited by the types of tools available (see Glaser, 1990; Snow, 1990; Spector, Dennen, & Koszalka, 2005). In examining the state of the art of mental model assessment, Seel (1999) discusses the difficulty of inferring an individual's mental model from data that are observable, and he outlines the limitations of techniques such as traditional tests, questionnaires, protocol analysis, and a variety of computer-based approaches to cognitive modeling. In each case, the degree of limitation is related to how much the researcher is left to infer from the data obtained. Held, Knauff, and Vosgerau (2006) provide additional insight into the challenges of mental model assessment:


The reasoner constructs and manipulates mental models not according to abstract logical rules but according to the world which she represents. After having integrated all the information of the premises in one (or more) consistent models, the conclusion can be directly "seen" in the model (and eventually compared with conclusions from other models). In this way, logically sound reasoning "emerges" from the format of representation. Failure of sound reasoning can be explained, as sketched above, from the fact that not all relevant models are constructed for many problems. (p. 13)

This summary helps to explain the challenges researchers and instructors face in assessing anyone's mental model. First, each person's mental model is unique, in spite of likely common elements with other models. Second, an individual may be reasoning or solving a problem based on a collection of mental models, and the path such reasoning takes can vary among individuals. Adding to the assessment challenge is the fact that all mental model elicitation methods require some level of inference and assumption by the researcher in determining what someone's mental model is (Buckley, 2000; Buckley, Gobert, & Christie, 2002). The key question is this: what is the most accurate way to determine the essential characteristics of a model that cannot be observed directly?

Figure 4 illustrates the transitions and problem areas in mental model expression. An individual creates a mental model in response to the demands of a situation (e.g., a complex problem presented during instruction). The mental model is created as the individual integrates information from external sources (e.g., world experience, instructional models, instructional materials) and internal sources (e.g., prior knowledge, schemata, pre-conceptions, misconceptions, prior models). The external representation of this mental model is constrained by the representation technique because no technique can mimic a mind's internal representation. An external observer, such as a researcher or instructor, makes inferences and assumptions in evaluating the individual's representation. This evaluator then performs another translation in explaining the model for assessment of learning or other investigation purposes. For those assessment tasks that seek to gain the best understanding of an individual's mental model, the challenge is to limit the constraints of representation and reduce the number of inferences and assumptions required of an evaluator.

[Figure: flow diagram in which external and internal information sources are integrated into a mental model (the cognitive response to situation demands); the external representation of that model is limited by the constraints of the representation technique, and an evaluator's interpretation yields the model as explained for assessment, comparison, and other purposes.]

Figure 4. Outcomes in the expression of a mental model.

Assessment Techniques

Mental model assessment techniques may be classified by the representation method used to obtain data from the model owners. These techniques include think-aloud protocols, matched word pairs, narrative text, concept maps, and causal diagrams. Buckley (2000) stresses the importance of having learners produce their own representations; however, there appears to be disagreement concerning what constitutes a learner-produced representation. In discussions of tools using various methods of model elicitation, some researchers count narrative text or simple diagrams as an adequate means of initial representation of both structure and meaning. In contrast, others expect a more coherent product. For example, Buckley, Gobert, and Christie (2002) state that representations "are considered models only when they represent structural, dynamic, and/or causal aspects of the target model; that is, they're not just visualizations or diagrams of phenomena" (p. 1). Kellog and Breen (1990) discuss the relationship between design and the effectiveness of a mental model assessment technique for specific purposes: "The kind of model a researcher chooses to derive depends both on the kind of knowledge being represented (e.g., declarative or procedural) and the use to which the model will be put, since different methods have different strengths and weaknesses" (p. 180). Many of the tools discussed below rely on some application of graph theory to perform analyses on mental model representations. Some convert data provided by model owners to other forms of representation that may be examined as graphs.
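As a concrete illustration of this graph-theoretic framing, the minimal sketch below (in Python; the function name, the concept labels, and the choice of a Jaccard index are illustrative assumptions, not the procedure of any specific tool reviewed here) treats two model representations as sets of undirected links and computes their overlap:

def jaccard_edge_similarity(edges_a, edges_b):
    """Share of links common to two model graphs (1.0 = identical link sets)."""
    # Treat links as undirected: (a, b) and (b, a) count as the same link.
    norm_a = {tuple(sorted(edge)) for edge in edges_a}
    norm_b = {tuple(sorted(edge)) for edge in edges_b}
    union = norm_a | norm_b
    if not union:
        return 1.0  # two empty graphs are trivially identical
    return len(norm_a & norm_b) / len(union)

# Hypothetical learner and target models expressed as concept links.
learner = [("analysis", "design"), ("design", "development")]
target = [("analysis", "design"), ("design", "development"),
          ("development", "evaluation"), ("evaluation", "analysis")]
print(jaccard_edge_similarity(learner, target))  # 0.5

A single overlap score of this kind registers structural agreement only; it says nothing about what the shared or missing links mean to the model owners, which is the recurring limitation discussed throughout this section.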


Think-aloud Protocols

Think-aloud protocols ask subjects to work on a problem and to think aloud as they complete the tasks involved in solving it (Davison, Vogel, & Coffman, 1997; van Someren, Barnard, & Sandberg, 1994; Willemain, 1995). One drawback of this method is that think-aloud protocol analysis was designed for basic research in psychology and is too time-consuming and labor-intensive to serve as an assessment method in classroom and work settings (Spector, Dennen, & Koszalka, 2005). van Someren and colleagues identify other issues which suggest that think-aloud methods do not provide sufficient details about subjects' mental models to satisfy the information requirements outlined in the research questions discussed earlier in this review. First, subjects are encouraged to report what they are doing during task performance without interpretation or explanation of what they are doing. Next, think-aloud protocols can be combined with another method of data collection, such as retrospection and prompting, but van Someren and colleagues report that time delays and prompting effects can lead to false results. In addition to the issues already identified, the task focus of think-aloud protocols may not reveal some of the information needed about a subject's mental model. For example, a person thinking aloud about how to drive a car may not address what his mental model of a car is, but it is the larger view that affects how he drives and the performance capabilities he expects. This larger view of a mental model is likely to contain the knowledge and perspectives that determine the decisions made in task performance.

Matched Word Pairs

In this model elicitation technique, individuals are asked to express how closely related they believe a set of concepts to be. It is assumed that this activity provides data from which mental model representations may be derived. In some applications, individuals are provided with two lists of words which they are supposed to link. This technique requires the definition of the word lists in advance, and therefore such lists are specific to particular subject areas. In other applications, matched word data are derived from texts provided by the mental model owners. The resulting data of both types of applications are mapped as networks and analyzed for the structural relations of concepts. One example of the matched pair technique is Pathfinder networks (see Schvaneveldt, 1990). Pathfinder provides a representation of knowledge structure by using pairwise similarity ratings among concepts to create a network. These networks are based on proximity data among entities (e.g., concepts) and "are determined by identifying the proximities that provide the most efficient connections between the entities by considering the indirect connections provided by paths through other entities" (Schvaneveldt, 1990, p. ix). Figure 5 shows two simple networks in the Pathfinder style.

[Figure: two simple Pathfinder-style networks. One represents a house as a basic structure (dwelling linked to garage, yard, porch, and rooms such as bedroom, bathroom, living room, kitchen, and dining room); the other represents buying shoes as a set of procedures (enter store, go to shoe department, view selections, ask clerk for shoe size, try on shoes, make purchase decision, buy shoes, leave store).]

Figure 5. Examples of simple Pathfinder-style networks.
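The pruning idea in the Schvaneveldt quotation above can be made concrete. The following sketch (in Python) implements the simplest common variant, PFNET(r = infinity, q = n - 1), under the assumptions that lower weights mean closer concepts and that the matrix values are invented for illustration; actual Pathfinder analyses also support other settings of the r and q parameters:

def pfnet_inf(weights):
    """Pathfinder pruning sketch for PFNET(r = infinity, q = n - 1).
    A path's cost is its single largest link weight; a direct link is kept
    only if no indirect path through other concepts offers a lower cost."""
    n = len(weights)
    dist = [row[:] for row in weights]
    # Floyd-Warshall variant: the best path through k costs the max of its legs.
    for k in range(n):
        for i in range(n):
            for j in range(n):
                dist[i][j] = min(dist[i][j], max(dist[i][k], dist[k][j]))
    # Keep only links that are at least as good as the best indirect path.
    return {(i, j) for i in range(n) for j in range(i + 1, n)
            if weights[i][j] <= dist[i][j]}

# Invented dissimilarity ratings for four concepts (0 = same concept).
W = [[0, 1, 4, 5],
     [1, 0, 2, 6],
     [4, 2, 0, 3],
     [5, 6, 3, 0]]
print(sorted(pfnet_inf(W)))  # [(0, 1), (1, 2), (2, 3)] -- pruned to a chain

This pruning is what makes the resulting networks compact enough to inspect as wholes, a property discussed further below.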

Although Pathfinder procedures have been useful in applications such as the assessment of learning, Pathfinder networks alone do not necessarily provide the best means of representing and assessing mental models for all purposes. Some researchers have combined Pathfinder techniques with other procedures (e.g., multidimensional scaling, or MDS) to expand the information available for an evaluation task (Goldsmith, Johnson, & Acton, 1991; Gonzalvo, Cañas, & Bajo, 1994). Both Goldsmith and colleagues and Gonzalvo and colleagues mention limitations in the use of Pathfinder and its combination with other techniques such as MDS. It may be difficult to correlate or compare Pathfinder with other techniques because of differences in structural perspectives (e.g., global vs. local) as well as differences in the knowledge required to perform knowledge representation tasks. Goldsmith and colleagues mention the need for more research regarding the psychological interpretation of the networks. Based on their discussion, it appears that participants must provide more details concerning the structural relationships in order for this kind of analysis to be performed. Dearholt and Schvaneveldt (1990) comment that from "the viewpoint of cognitive modeling, a disadvantage of PFNETs [Pathfinder output] in the present state of development is that we have no way of knowing the features upon which similarity judgments are made" (p. 3). This weakness is related to the underlying design of the Pathfinder technique, which is based on expressions of proximity evaluations for pairs of concept labels rather than on an expression of concept meanings and the nature of the relationships between them. Although the Pathfinder technique may not be an ideal approach for all mental model comparison tasks, some of its features and measurement metrics, such as ways of examining node and path similarities, have been found to be useful. According to Durso and Coggins

(1990), a chief asset of Pathfinder is that it provides a means of looking at a graph in its entirety. They note the advantage of looking both at the overall graph structure as well as at subgraphs that represent more detailed or clustered information. Comparing mental models for complex knowledge domains may require a more open analysis of the elements included in their representation than techniques like Pathfinder offer. Spector and Koszalka (2004) reported that they found some deep differences among experts; some used "specific nodes and complex relations not mentioned by others" (p. 34). Researchers using Pathfinder tend to assume that relationships among items not mentioned frequently among experts may indicate that the items are not necessary in the knowledge representation (Schvaneveldt, 1990). In complex domains, such differences may indicate creative approaches to problem solving or equally valuable, although less known, aspects of the subject matter, rather than extraneous material. In the case of knowledge representation by novices, misplaced elements (compared to experts) may indicate a misunderstanding of relationships, or they may reflect the integration of new knowledge with existing knowledge in other domains.

Text Analysis

Approximately ten years after the first books on mental models were published, Carley and Palmquist (1992) presented a methodology for representing mental models as maps, extracting the maps from texts, and analyzing and comparing those maps. They cited widespread interest in mental models but pointed to the lack of adequate methods for examining such models. Their methodology is based on the assumptions that (a) mental models can be represented as networks which can be derived from an individual's text; (b) the text is a representative sample of the contents of the individual's cognitive structure; and (c) the symbolic or verbal structure extracted from the text is a sample of the full symbolic representation of the individual's cognitive structure. In comparing various techniques for mental model representation, they found cognitive mapping to be the most promising. Their approach "allows the researcher to construct and compare representations of mental models" (p. 606). Although they claim their methodology overcomes the limitations of earlier techniques using cognitive mapping, they have not overcome the primary weaknesses of any technique which does not elicit the model representation directly from the individual. These weaknesses include the following:

• the model expression is constrained by the individual's ability to understand what information is desired and to articulate that information clearly;
• the model expression is limited by the language or techniques of the researcher; and
• the final model expression is that of the researcher and therefore is distorted to one degree or another by the combined filters of representation technique and the researcher's interpretation.

Figure 6 illustrates the type of cognitive map produced by Carley and Palmquist's methodology. This example is a partial replication of a simple map shown in their article. It consists of labels and lines, many of which link multiple points in a confusing cross-hatch. Although the authors claim that such maps with as many as 100 concepts can be analyzed visually, it is difficult to see how these artifacts can provide the instructional designer or researcher with much insight regarding a learner's mental model. The accompanying computer analysis includes the number of concept labels and the number of statements within the text. Comparisons of this type of output may correlate with differences in the mental models of learners at various stages or between a learner model and a target model; however, it does not appear that approaches like this can satisfy the information needs identified in the research questions outlined above. This method does not explain what learners mean by the concept labels. Neither does the method provide any information about the nature of the relationships between concepts.

Figure 6. Example of a cognitive map derived from textual analysis (partial example based on Carley and Palmquist, 1992). The map relates concepts such as writing, articles, books, encyclopedia, topic, fact, format, research, issue, opinion, and outline with unlabeled lines.
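One simple way a map like Figure 6 could be derived from text is sketched below: the analyst supplies a concept list, and two concepts are linked whenever they co-occur in a sentence. This is an illustration of the general idea only, not Carley and Palmquist's actual coding procedure, and the function and variable names are assumptions.

    import re
    from itertools import combinations

    def text_to_map(text, concepts):
        """Build an undirected co-occurrence map from raw text.

        concepts: terms the analyst has chosen to treat as nodes.
        Returns (nodes_found, edges), where an edge links two concepts
        that appear together in at least one sentence.
        """
        edges = set()
        found = set()
        for sentence in re.split(r"[.!?]+", text.lower()):
            present = sorted(c for c in concepts if c in sentence)
            found.update(present)
            edges.update(combinations(present, 2))
        return found, edges

    text = ("Research on a topic begins with an outline. "
            "An encyclopedia presents facts about a topic. "
            "Articles present opinion as well as facts.")
    concepts = ["research", "topic", "outline", "encyclopedia",
                "fact", "opinion", "articles"]
    print(text_to_map(text, concepts))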

Concept Maps

Jonassen, Beissner, and Yacci (1993) describe concept maps as two-dimensional diagrams that illustrate relationships between ideas in a content area. The relationships are hierarchical in nature, with the most inclusive concept at the top of the page. Subordinate concepts are linked to the highest-level concept or to one another by lines that are labeled according to the type of relationship involved. Jonassen and colleagues explain how concept maps can be used to represent structural knowledge and serve as the basis for examining differences between learners' knowledge and that of a more advanced person (e.g., an expert or an instructor). Concept maps also can be used as a means of tracking learners' progress in attaining knowledge by comparing their maps at different points in a curriculum.

Jonassen, Beissner, and Yacci (1993) list several advantages of concept maps. First, concept maps explicitly convey the relationships between ideas in a content area. Second, the nature of those relationships is indicated by labels on the links. Third, multiple interrelationships between concepts can be identified. Finally, concept mapping is relatively easy to learn. As disadvantages, they point out that concept mapping may be time-consuming, because a good map may require several drawing attempts, and that maps with many lines and labels can be difficult to interpret.

Concept mapping provides a convenient way for learners to express information relating to their mental models, but it does not elicit enough information to ensure that learners mean the same thing in their use of labels. For complex domains, it is unlikely to provide enough detail to assess learners' deep understanding of the subject. The concept map example in Figure 7 illustrates this problem. The map is a partial diagram representing the relationships among elements associated with global climate change. Concepts are labeled in oval figures, and the connections among the concepts are shown as labeled lines. The diagram indicates that the learner understands that fertilizer use and rice production lead to the production of certain greenhouse gases, but there is no indication that the learner has a deeper understanding of the processes by which this production takes place or the relative impact each might have on climate change.

Figure 7. Example of a concept map (partial example based on Jonassen, 2004). The map links global climate change to increased greenhouse gases (carbon dioxide, methane, nitrous oxide) produced by current and new agricultural practices such as fertilizer use and rice production.
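As an aside on representation, the labeled links in a map like Figure 7 can be recorded as subject-link-object triples, the general form in which several computer-based approaches reviewed later store such data. The encoding below is an assumed, illustrative format, not a published one.

    # Figure 7's visible propositions, encoded as labeled triples
    # (subject, link label, object) -- an assumed, illustrative format.
    concept_map = [
        ("global climate change", "caused by", "increased greenhouse gases"),
        ("increased greenhouse gases", "type", "carbon dioxide"),
        ("increased greenhouse gases", "type", "methane"),
        ("increased greenhouse gases", "type", "nitrous oxide"),
        ("fertilizer use", "produces", "nitrous oxide"),
        ("rice production", "produces", "methane"),
    ]

    # The triples show WHICH concepts the learner connects, but nothing
    # about HOW fertilizer use or rice production yields these gases --
    # the missing depth discussed above.
    for subj, link, obj in concept_map:
        print(f"{subj} --{link}--> {obj}")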

Causal Diagrams

In their simpler forms, causal diagrams are graphics that indicate causal and correlational relationships between both observed and unobserved variables (Jonassen, Beissner, & Yacci, 1993). The diagrams consist of concept words (which may be enclosed in a circle or a square) and directional arrows that indicate the relationships among concepts. Causal diagrams are often used in system dynamics, defined by Schwaninger (2004) as a "general-purpose methodology for modelling and simulation employed in order to deal with dynamic complexity" (p. 411). Seel, Al-Diban, and Blumschein (2000) found that causal diagrams are an adequate method of assessing mental models, but they acknowledge "the difficulty of accurately deducing an abstract concept like a mental model from a drawing" (p. 154). Seel and colleagues incorporated additional techniques, such as verbal protocols, in their attempts to enhance their assessment methodology. Spector, Dennen, and Koszalka (2005) employed annotated causal concept maps in a framework for assessing mental model development in learners. The annotations provide more insight into the mental models than the graphic map alone offers, but the lack of structure in such annotations makes interpretation for comparisons a challenging enterprise.

Jonassen, Beissner, and Yacci (1993) list several advantages and disadvantages of causal diagrams. Advantages pertinent to mental model assessment include: (a) these diagrams can depict the multiple variables that may interact to yield a given effect; and (b) causal diagrams are easy to develop and interpret. Disadvantages include: (a) only causal and correlational relationships are depicted, and therefore such diagrams limit the structural knowledge conveyed; and (b) there is no indication of how two or more variables are related. In the context of mental model assessment, an additional disadvantage can be stated: causal diagrams, even with narrative annotations and/or formal mathematical statements, do not provide a comprehensive description of model elements (see Richardson, 1996). According to Richardson, formal models can fail to communicate the underlying principles, mechanisms, and insights needed for others to provide deep explanations and replicate the work of experts.

Figure 8 illustrates a simple causal diagram. Model components are connected by lines and arrows that indicate the influence of one component on another. In a diagram representing a dynamic system, the model may include feedback loops that identify how internal conditions affect system behavior. Causal diagrams may not convey a complete representation of a learner's mental model, but they can provide a way to simplify the presentation of complex systems and serve as a focal point for both learners and instructors to use in evaluating a proposed model (see Spector, Dennen, & Koszalka, 2005). Richardson (1996) describes techniques like causal-loop diagrams and influence diagrams as qualitative systems thinking. Richardson notes that some system dynamicists see qualitative techniques as precursors of formal modeling, while others believe qualitative approaches can stand by themselves in leading to reliable insights about systems behavior (e.g., organizations). He calls for research "to address the relationships between qualitative mapping and quantitative modeling – in short, when to map and when to model" (p. 150).

Figure 8. Example of a causal diagram (partial example based on Jonassen, 2004). The diagram relates factors such as labor price, seed variety, fertilizer, technology level, crop price, and government support price policy to crop yield.
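Because causal diagrams of dynamic systems may contain feedback loops, it may help to see how a diagram stored as a directed graph can have its loops detected mechanically. The sketch below is illustrative only; the node names and the (cause, effect, sign) encoding are assumptions, not part of any reviewed methodology.

    def find_feedback_loops(edges):
        """Return each simple cycle (feedback loop) in a directed causal
        graph once, as a list of node names ending where it began.

        edges: iterable of (cause, effect, sign) triples, where sign is
        '+' or '-' for a positive or negative influence.
        """
        graph = {}
        for cause, effect, _ in edges:
            graph.setdefault(cause, []).append(effect)

        loops = []

        def walk(node, path):
            for nxt in graph.get(node, []):
                # Record a loop only from its alphabetically first node,
                # so each cycle is reported once rather than per rotation.
                if nxt == path[0] and path[0] == min(path):
                    loops.append(path + [nxt])
                elif nxt not in path:
                    walk(nxt, path + [nxt])

        for start in graph:
            walk(start, [start])
        return loops

    # Hypothetical crop-yield fragment: yield raises income, income funds
    # fertilizer purchases, and fertilizer use raises yield (a reinforcing loop).
    causal = [("crop yield", "farm income", "+"),
              ("farm income", "fertilizer use", "+"),
              ("fertilizer use", "crop yield", "+")]
    print(find_feedback_loops(causal))
    # [['crop yield', 'farm income', 'fertilizer use', 'crop yield']]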

Table 2 summarizes approaches to mental model representation and assessment along with their key strengths and weaknesses. Mental model representations obtained through think-aloud protocols, matched-word pair techniques, text analysis, concept maps, and causal diagrams are sometimes used in combination with other data-gathering techniques. Analytical techniques may involve translating data into graphic structures for analysis. The next discussion identifies recent methodologies and tools that apply these approaches.

Table 2 Summary of Current Approaches to Mental Model Assessment

Think-aloud protocols
Strengths: Captures the subject's identification of procedural steps during task performance in problem solving.
Weaknesses: Time-consuming; lacks explanatory detail; may not be appropriate for assessments that require detailed information about concepts and the relationships among them.

Matched-word pairs
Strengths: Easy for subjects to respond to the data request; easy to analyze with existing computer-based tools.
Weaknesses: Does not explain what subjects mean by the concept labels; does not provide any information about the nature of the relationships between concepts; may limit the range of responses permitted to subjects.

Text analysis
Strengths: Easy for subjects to respond to the data request; does not limit what the subject can express in words.
Weaknesses: May be difficult to interpret because of the lack of structure; may not provide the kind of detail needed by assessors.

Concept maps
Strengths: Relatively easy to learn; identifies relationships between ideas in a content area; provides some information about the nature of the relationships.
Weaknesses: Limited detail about the meanings of concepts and the relationships among them; may be difficult to interpret because of multiple lines and labels; may be time-consuming because good maps may require several redrawings.

Causal diagrams
Strengths: Relatively easy to learn; identifies causal relationships among entities leading to given effects.
Weaknesses: Not appropriate for all types of knowledge representation; may lack detail about what subjects mean by concept labels; may not provide enough detail about the nature of relationships among concepts.

Recent Assessment Methodologies

Three methodologies, each with a different approach to mental model elicitation, have been developed, tested, and included in a new computer-based toolset known as the Highly Integrated Model Assessment Technology and Tools (HIMATT) (Pirnay-Dummer, Ifenthaler, & Spector, 2008).

Surface, Matching, Deep Structure Technology (SMD)

The SMD Technology was designed to measure the relational, structural, and semantic levels of graphical representations and concept maps (Ifenthaler, 2007). It is based on mental model theory and graph theory. One major advantage of the SMD Technology is that it is computer-based. The initial mental model elicitation process may involve diagrams (e.g., concept maps) or natural language statements; either process may be computer-based or use paper and pencil. Raw data must be transformed into a standardized format for storage in a SQL (structured query language) database. Raw data are stored as pairwise propositions that identify two nodes (concepts) and a link expressed as a simple relation (e.g., consists of). The output from SMD Technology analysis provides metrics concerning: (a) the number of nodes and links in the model (surface structure); (b) the range and complexity of the model, computed from an analysis of the quantity of nodes and the distance between the most distant nodes (matching structure); and (c) a comparison of an individual model with a reference model (deep structure), described as the semantic similarity of the two models based on an analysis of the number of structural elements they have in common. Ifenthaler used the MITOCAR instrument to validate the SMD Technology.
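A rough sketch of these three levels of metrics is shown below, treating a model as a set of (node, relation, node) propositions as described above. The function names and the specific distance and similarity computations are illustrative assumptions, not Ifenthaler's published algorithms.

    from collections import deque

    def surface(props):
        """Surface structure: counts of nodes and links."""
        nodes = {n for a, _, b in props for n in (a, b)}
        return len(nodes), len(props)

    def matching(props):
        """Matching structure: longest shortest path between nodes."""
        adj = {}
        for a, _, b in props:
            adj.setdefault(a, set()).add(b)
            adj.setdefault(b, set()).add(a)
        diameter = 0
        for start in adj:                      # BFS from every node
            dist = {start: 0}
            queue = deque([start])
            while queue:
                cur = queue.popleft()
                for nxt in adj[cur]:
                    if nxt not in dist:
                        dist[nxt] = dist[cur] + 1
                        queue.append(nxt)
            diameter = max(diameter, max(dist.values()))
        return diameter

    def deep(props, reference):
        """Deep structure: share of propositions in common with a reference."""
        common = set(props) & set(reference)
        return len(common) / max(len(set(props) | set(reference)), 1)

    student = [("design", "consists of", "analysis"),
               ("design", "consists of", "evaluation")]
    expert = [("design", "consists of", "analysis"),
              ("design", "consists of", "development"),
              ("development", "precedes", "evaluation")]
    print(surface(student), matching(student), deep(student, expert))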

Model Inspection Trace of Concepts and Relations (MITOCAR)

MITOCAR uses natural language in eliciting mental model information (Pirnay-Dummer, 2007). Concepts are obtained from subjects' expressions via a process of computer-based concept parsing. The concept parser identifies nouns (with and without adjectives) and builds a list of the most frequent concepts. In a subsequent phase, subjects are asked to associate the concepts with meaningful groups. Data analysis is based on a pairwise comparison of concepts to identify: (a) how closely subjects rate concepts; (b) how different subjects believe the concepts are; and (c) how confident the subjects are in their ratings. Output from MITOCAR includes a re-representation of the data as an undirected graph, and metrics based on graph theory can be produced to provide information such as the number of nodes, distances between nodes, and similarity ratings. A cross-validation study was conducted with the SMD Technology.
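The concept-parsing phase can be pictured with a short sketch. The version below uses NLTK's part-of-speech tagger to pull out nouns (optionally with an immediately preceding adjective) and rank them by frequency; it approximates only the general idea described above, not Pirnay-Dummer's actual parser, and all names in it are my own.

    from collections import Counter
    import nltk  # requires: nltk.download('punkt'); nltk.download('averaged_perceptron_tagger')

    def frequent_concepts(text, top_n=10):
        """Approximate a concept parser: extract nouns (with any
        immediately preceding adjective) and rank them by frequency."""
        tagged = nltk.pos_tag(nltk.word_tokenize(text.lower()))
        concepts = Counter()
        prev_adj = None
        for word, tag in tagged:
            if tag.startswith("JJ"):            # adjective: remember it
                prev_adj = word
                continue
            if tag.startswith("NN"):            # noun: count adj + noun phrase
                concepts[f"{prev_adj} {word}" if prev_adj else word] += 1
            prev_adj = None
        return concepts.most_common(top_n)

    sample = ("Instructional design begins with a needs analysis. "
              "The analysis informs the design of effective instruction.")
    print(frequent_concepts(sample))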

Dynamic Evaluation of Enhanced Problem-solving (DEEP)

The DEEP methodology was developed to assess learning in a variety of educational contexts. It is intended to provide a measure of relative higher-order problem-solving ability in complex task domains and to offer a way to track and analyze learner progress (Spector & Koszalka, 2004). In developing the DEEP methodology, the project team focused on examining mental models in the context of problem scenarios and on comparing novice to expert conceptualizations. Responses to problem scenarios were elicited through a simple method in which respondents used causal diagrams to conceptualize the factors, and the relationships among them, involved in addressing the problem. Data analysis in the DEEP project included: (a) surface analysis, involving the numbers of nodes and links and an examination of the annotations associated with them; and (b) a combined structural and semantic analysis to assess the similarity of novice and expert representations. The DEEP Problem Conceptualization Tool, which was developed during the initial project, is computer-based and is used to collect demographic information on respondents, present problem scenarios, collect problem conceptualizations, and record assumptions and additional information needed to develop a complete problem solution.

Summary of the Status of Mental Model Assessment

Approaches to mental model assessment use a combination of one or more elicitation methods and a method of analysis, which is often based on graph theory. Most of the analyses based on graph theory provide metrics that relate to the number of concept labels and which labels are linked to one another. In comparing one model representation with another, the analysis examines the number of concept labels and linkages the two representations have in common. The limitations of each method of mental model elicitation (e.g., verbal protocols, matched-word pairs, text analysis, concept maps, and causal diagrams) have already been introduced. The discussion that follows examines the needs not being addressed by available methods for eliciting and analyzing mental model data.

Mental Model Assessment Needs Not Being Addressed with Available Approaches

Although approaches like SMD and MITOCAR can provide valuable information for researchers and educators, their focus is primarily on the presence of concept labels and the proximities among them. Neither methodology supports an in-depth exploration of what subjects mean by the labels they use, nor does either capture details of how subjects believe concepts are related. More detail regarding meanings and relationships is needed to provide the data required by the research agendas suggested by authors such as the following:

• Mayer (1989) calls for research that provides a more fine-grained analysis and addresses how models help students select, organize, and use scientific information.
• Snow (1990) asks for richer, thicker, deeper descriptions of student learning progress.
• Clement (2000) asks about the role of mental models and the nature of these models as knowledge structures.

The comments of Mayer, Snow, and Clement are in line with Johnson-Laird's (1994) first assumption in his theory of mental models, which declares that models consist not only of entities and the relations between them but also of their properties. This assumption implies that attempts to investigate a mental model without regard to both relationships and properties may miss significant aspects of the model.

A brief review of how mental models are used also helps to emphasize the need for more detail in order to gain a better understanding of the structure and content of an individual's mental model. Mental models do more than organize information as schemata and scripts. For example, a mental model can enable an individual to perform thought experiments in which possible outcomes are considered in relation to changes in conditions (Forbus & Gentner, 1997; Seel, 2003). Whether a mental model describes a causal system or another organized set of perceptions and experience, the goal of student learning is a deeply understood model that can support reasoning and inferences by the student (Clement, 2000). Diagrams with concept labels and proximity lines do not explain an individual's understanding of the concepts, nor how the concepts would be employed to evaluate possibilities, support decisions, or explain phenomena to others. It is possible that the structure of diagrams provided by mental model assessment approaches like SMD and MITOCAR relates more to the schemata and scripts present in an individual's mental model than to the fully functional model. Put another way, could the output of this kind of methodology be used to explain how and why an individual would use the concepts to respond to a problem or apply his or her knowledge in a different set of conditions?

The DEEP project aimed to gain more comprehensive external mental model representations by having subjects draw their own diagrams and provide annotations to explain model elements. The research described in this dissertation builds on the findings of the DEEP project. In developing the proposed methodology, consideration was given to the characteristics that would constitute an effective methodology for eliciting and assessing mental model representations. Like DEEP and the other methodologies reviewed, the proposed methodology uses graph theory in mental model representation and analysis. Graph theory offers a promising approach for the comparison of mental models; however, the use of graphs in communicating and evaluating mental models so far does not appear to take full advantage of graph theory's potential both to represent complexity in mental models and to do so via a relatively simple methodology that deals with the meanings of graph elements as well as their structural relations.

Issues and Considerations in Mental Model Assessment

There are a number of issues and considerations in mental model assessment; the following have been found to be particularly relevant for this dissertation research. The first is the types of comparisons to be made in mental model assessment. The second is whether representations can be compared if respondents do not have a common understanding of their representation task. The third is related to the second: claims of comparability are questionable if respondents do not have common understandings of the terminology they use to identify concepts and relationships.

Types of Comparisons

Within an assessment context, it may be important to perform different types of comparisons of mental model representations. For example, these comparisons may involve comparing one person's mental model with another's, or one person's mental model with his or her own at a different point in time. In general, assessment of mental models in a learning context involves comparing a learner's mental model representation with whatever corresponding representation is to serve as the basis of evaluation. The following list summarizes the key comparisons to be accommodated in a mental model assessment methodology:

• Peer to peer (e.g., expert to expert; learner to learner);
• Non-expert to expert (e.g., novice to accomplished practitioner);
• Non-expert to ideal or consensus model (e.g., learner to target model); and
• Individual to self (e.g., one learner's mental models over time).

The differences in scope of knowledge and level of detail may be substantial for individuals in such comparisons. Spector and Koszalka (2004) observed that "experts have a preference to think widely as well as deeply" (p. 23); however, the novice may approach the subject in a more limited manner. Graph theory may provide a way to compare the smaller, less complex world of the novice with that of someone who has more expertise. The difficulties of translation and interpretation may be minimized if mental model representations are expressed in a common form that provides a way to capture model elements without requiring the same level of detail from persons with different levels of expertise.

Common Understanding of the Task

One of the issues identified in the DEEP project was the need to ensure that respondents have a common understanding of their task in representing their mental models. Spector and Koszalka (2004) set up novice-to-expert comparisons in three different complex domains. In the engineering domain, they noted that the "general inclination of experts was to solve the problem rather than to think about how to solve it or provide a representation of the problem space" (p. 23). Kellog and Breen (1990) also addressed the challenge involved in obtaining model representations that can be compared fairly. First, Kellog and Breen point to the difficulty of adequately defining the target body of knowledge, whether it is an expert's model or a composite ideal model. Second, they describe the difficulty of capturing the user's mental model in a way that can be systematically compared with the target model. There are at least four aspects of task understanding that subjects need to have in common in order for there to be a good basis for comparison of their mental model representations:

• Focus: a clear specification of the subject matter;
• Perspective: the point of view or views to be expressed, such as physical characteristics, functionality, value, etc.;
• Level of detail: the detail or complexity required by the task (rather than in terms of the individual's model); and
• Communication method: the required or accepted means of expression of the mental model.

It is beyond the scope of a comparison methodology to ensure that participants in a mental model comparison exercise have a common understanding of their task (although the methodology could be used to make such a comparison); however, the methodology should support researchers' efforts to accomplish this critical goal. The methodology can provide support by offering a strong but flexible structure for describing a mental model and by defining a clear set of rules and procedures for expressing a mental model in this context. At the same time, the methodology should meet the standard of facilitating mental model representation without unnecessary influence on what that representation will be (Gammack, 1990; Spector & Koszalka, 2004).

Common Meanings for Terminology in Mental Model Representations

Common use of language is one of the issues present in most of the mental model assessment approaches reviewed. In general, researchers appear to assume that respondents associate the same meanings with concept labels and indicators of relationships. Researchers in the DEEP study (Spector & Koszalka, 2004) attempted to address this issue by asking respondents to annotate their mental model diagrams, but they reported having problems determining whether two nodes or links were comparable for different respondents. Spector and Koszalka associated one source of this problem with their "open-ended approach to space conceptualization and representation" (p. 26). This method of data collection was chosen because Spector and Koszalka believed it would approximate real-world situations. Comparing nodes and links with regard to properties may provide a more reliable method of assessing similarities of meaning than either concept names alone or open-ended descriptions that depend on the expertise of coders who translate the responses. Spector and Koszalka (2004) describe the method they used as interpretative in nature and in need of a process to ensure coding reliability. They also characterized the analysis of whether or not different responses involved the same nodes and links as particularly difficult in the context of their study and the DEEP tool.

The DEEP Project: Recommended Improvements

In building on the research reported for the DEEP project (Spector and Koszalka, 2004), this dissertation research attempted to incorporate the recommendations made by respondents who used the DEEP node-link representation tool. Respondents identified the need to:

• Indicate directionality and general type of links;
• Alter prior entries;
• Indicate an overall positive or negative effect;
• Indicate relative amount of influence of the link; and
• Select a property for each node that indicates type or category of node.

Requirements for a Mental Model Assessment Methodology

The review of literature concerning mental model assessment, combined with the findings from the prototype study I conducted (Smith, 2006), suggests a number of desirable features for a generalized methodology that uses graph theory in the comparison of mental models. These features might be considered requirements (either in part or as a collective whole), as described in the following list. An effective mental model assessment methodology:

• is flexible and generalized enough to be used in a variety of subject areas;
• is usable to describe mental models by persons at any point along the spectrum from novice to expert;
• is usable by persons with different information-processing preferences and skills;
• provides structure without unnecessary influence on mental model expression;
• forms a common language with enough scope to convey many elements and characteristics of a mental model;
• promotes consistency of assessment;
• makes clear the boundaries of the mental models;
• can locate common sub-structures that are not described within the same larger structures;
• can distinguish between models that have common large structures but vary at a more detailed level;
• can translate model elements into quantities or specific qualities, as in mathematical modeling, to support description of the degree of correspondence of two models;
• provides a way to assess level of complexity; and
• lends itself to automation for scalability.

Although current approaches to mental model assessment aim to satisfy some of these requirements, none of them encompasses enough of these needs in a way that provides the fine-grained analysis that Mayer (1989) requests. Nor do they offer the richer, thicker, deeper descriptions of student learning progress that Snow (1990) calls for.

How the Proposed Methodology Addresses Some of the Assessment Needs

The methodology involved in this dissertation research attempts to satisfy some of the needs in mental model assessment by: (a) allowing direct representation of mental models by respondents; (b) not limiting model expression to any particular subject, type of knowledge, or level of detail; (c) providing a communication strategy that offers form without attempting to influence the content or meaning of respondents' representations; (d) collecting details (i.e., property sets) to ascertain the meanings respondents associate with concept labels and the relationships between them; and (e) basing comparisons on analyses that emphasize common meanings for model elements.

Figure 9 illustrates the differences between the level of detail provided by the proposed methodology and the detail provided by methodologies that create concept maps by linked-pair analysis or other types of concept-linking techniques. Causal diagrams and concept maps that include labels or numbers on links may provide more information than the examples in the figure, but they do not necessarily provide more information regarding what respondents understand about the contents of the representations. Much more can be elicited from respondents to provide greater insight into a fuller representation of their mental models. The proposed methodology elicits property set data that support the creation and analysis of detailed databases that can be used to focus on specific aspects of knowledge which might be hidden in a label-only approach. Property set analysis satisfies an important tenet of Johnson-Laird's (1994) discussion of mental model analysis, and it may be the chief means by which the focus of this dissertation research can be addressed. It is unlikely that an elicitation of concept labels and links alone can sufficiently identify the misconceptions and/or inadequate conceptions beginning learners might have. It is also unlikely that labels can identify specific problems concerning how learners may believe concepts are related. If the proposed methodology is successful in assessing the differences between beginning graduate students and an experienced practitioner in instructional design, additional research employing the methodology may find it to be relevant for a variety of subject areas, education levels, and purposes for mental model assessment.

Figure 9 contrasts three representations: a network drawn as a basic concept map and a network drawn as a set of procedures (partial examples based on Carley and Palmquist, 1992), and the proposed methodology's representation, in which each named vertex and edge carries a property set and elements are linkable to higher or lower level structures. In the proposed representation, a vertex property set records the noun class (person, place, thing), the type within that class, a description (details within the class), and a value (importance, weight, etc.); an edge property set records the relationship (cause/effect, sequence, etc.), medium, importance, duration, positive/negative effect, direction, value (number), and iteration.

Figure 9. Visual comparison of network, cognitive, and proposed methodology maps.
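To make the property set idea concrete, the short sketch below encodes one vertex and one edge using the fields shown in Figure 9. The field names follow the figure; the specific values and the dictionary encoding are illustrative assumptions.

    # Illustrative encoding of Figure 9's property set forms (assumed layout).
    vertex_v1 = {
        "name": "learner analysis",
        "properties": {
            "noun_class": "thing",          # person, place, or thing
            "type": "design activity",      # type within the noun class
            "description": "identify entry skills and characteristics",
            "value": "high importance",     # importance, weight, etc.
        },
    }

    edge_e1 = {
        "name": "informs",
        "connects": ("learner analysis", "objective writing"),
        "properties": {
            "relationship": "sequence",     # cause/effect, sequence, etc.
            "medium": "design document",
            "importance": "high",
            "duration": "one iteration",
            "positive_negative": "positive",
            "direction": "v1 -> v2",
            "value": 1,
            "iteration": "repeat if objectives change",
        },
    }

    print(vertex_v1["name"], "-[", edge_e1["name"], "]->", edge_e1["connects"][1])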


CHAPTER 3

METHOD

The purpose of this study is to conduct a formative evaluation of a methodology for comparing mental model representations. This methodology is presented as a potential assessment technique for a variety of purposes in the context of the design and delivery of instruction. The methodology has reached the stage of development at which a formative evaluation is appropriate to determine what improvements should be made prior to further development and eventual implementation. It should be noted that this effort involved the development of a new methodology based on graph theory to assess mental models for use in educational contexts. In other words, this research is closely tied to a significant and substantial development effort that I have conducted over a period of several years.

The term formative evaluation was coined by Scriven in 1967 and initially was applied to the tryout of instructional materials to determine whether or not students could learn from them (Dessinger & Moseley, 2006; Seels & Glasgow, 1998). Since then, the term has been applied more broadly to refer to feedback that is used to improve products in terms of content accuracy, technical quality, user acceptability, and other issues in evaluation (Seels & Glasgow). Formative evaluation is recognized as an integral part of the instructional design process (Dick, Carey, & Carey, 2005; Gagné, Wager, Golas, & Keller, 2005; Seels & Glasgow). Gagné and colleagues indicate that a formative evaluation should provide more than an answer to the question of whether or not materials are well received and effective; it is necessary to determine why results have occurred in order to identify what improvements are needed. Formative evaluation can be viewed from more than one perspective in the field of instructional design. For example, Dessinger and Moseley (2006) discuss formative evaluation as one of a group of evaluation steps in the context of human performance technology; however, regardless of the perspective, the basic components of this process consist of trying out materials and designs and obtaining feedback from users and experts. Through a combination of data collection, questionnaires, interviews, and analyses, this study aims to conduct a comprehensive formative evaluation of the methodology described herein.

The broad objectives of this methodology are to:

• enable the comparison of mental models pertinent to teaching and learning (e.g., peer to peer, non-expert to expert, non-expert to consensus, and individual to self comparisons);
• support the representation of both simple and complex knowledge structures without unnecessarily influencing what those structures might be;
• enable participants to describe structures that may include multiple kinds of knowledge (e.g., factual, procedural, dynamic relationships, combinations, etc.);
• provide a way to describe concepts and the characteristics of the linkages between concepts; and

• facilitate the comparison of mental models by providing techniques for translating the mental model data into metrics that can express the degree of correspondence in quantitative as well as qualitative terms.

Although the methodology may have broader applications, the focus of this study is on a particular application involving an assessment of the differences between beginning students and an experienced practitioner in instructional design. The study is intended to answer two research questions:

1. Does the methodology provide useful comparisons of student-constructed models based on relevant attributes of structure and content that are embedded in the model elicitation methodology?
2. What improvements in the methodology are needed prior to further research and development and eventual implementation in the form of a mental model assessment tool?
   a. What improvements are needed regarding the mental model elicitation methodology?
   b. What improvements are needed in the mental model representation analysis methodology?

Table 3 identifies the milestones in this research. The work has spanned a period of more than three years beginning in the summer of 2005.

Table 3 Milestones in the Development and Formative Evaluation of the Methodology

Initial research: Review literature for mental model theory, graph theory, and applications of graph theory for mental model assessment.

Development of methodology: Design mental model elicitation strategy, data collection procedures, data analysis process, and formulas for computing metrics to describe model structures and similarity ratings.

First prototype study: Design study, select archival data sources, use methodology to represent models, record data, and perform analyses. Report findings, including recommendations for improvements to the methodology.

Review literature for dissertation study: Review literature to explore the theoretical basis and role of models in learning and instruction, mental model assessment approaches, and the status of mental model assessment research and application.

Design study: Select focus for research, identify participants, plan procedures, design study materials, plan analyses.

Develop study materials: Prepare instructions and data collection materials.

Develop training materials for participants: Create a PowerPoint presentation to train subjects.

Obtain IRB Human Subjects approval: Complete Human Subjects training, submit application for research approval.

Test training and study materials: Test materials with a small group of subjects and revise materials as needed.

Conduct study: Train participants, collect data.

Perform data analysis: Translate data as described in the assessment methodology, compute metrics, analyze similarities and differences among mental model representations, review data to answer the research questions.

Report findings: Complete dissertation.

Research Focus

The application selected addresses important issues for beginning students. According to the literature review, the initial state of student knowledge and conceptions can have significant implications for the design and delivery of instruction. First, understanding students' prior knowledge provides a starting point in bridging the gap between their beginning state and the learning objectives of the instruction. Second, learning of new material takes place with regard to the larger world view students may have; integration of new knowledge within this larger context requires some awareness of the context's relevant attributes. Third, examination of students' initial conceptions and mental models may reveal misconceptions that must be overcome in order for the learning objectives to be achieved. Misconceptions can be firmly entrenched and may require design and/or delivery approaches beyond those sufficient to instruct students without such handicaps.

It is assumed that a comparison of the mental model representations of beginning students with the mental model representation of an experienced practitioner will reveal both the initial states of the learners and misconceptions they may have. Knowledge of the initial states of learners (i.e., gaps in knowledge and misconceptions) can lead to decisions regarding the design and delivery of instruction for those learners. If the methodology reveals information about learner needs in terms of structure and content, the answer to research question one will be "yes." The second research question will be addressed using qualitative data obtained from analysis of mental model representations and participant responses to questionnaires and interviews.

Operational Definition

An operational definition of the term useful must be developed to provide the foundation for answering the research questions. For purposes of this research, a comparison of student-constructed models based on relevant attributes of structure and content is useful if it reveals misconceptions or gaps in knowledge that, if present, will affect the design and/or delivery of instruction for the purpose of improving the potential for learners to achieve the targeted learning goals.

Description of the Mental Model Comparison Methodology

A methodology for comparing mental model representations using an application of graph theory has been proposed (Smith, 2005) and tested in a prototype study (Smith, 2006). The methodology consists of the following elements: (a) a language for mental model representation based on graph theory; (b) a grammar that identifies the rules for representation; (c) a vocabulary of terms for graph components and properties; and (d) a set of instructions and formulas for computing the degree of correspondence between two mental model representations. The methodology is fully described in Appendix A. The prototype study is reported in Appendix B.

Figures 10 and 11 illustrate how the methodology maps language features to the domain of graphs. Relating natural language elements to graphs relies on thought processes and means of communication that are common among humans (see Sloman, 2005). The methodology does not limit the graphical representations to cause and effect relationships; the intent of going beyond causal models is to accommodate a broader range of mental model elements. Sloman explains that a causal modeling framework does not serve as the basic language of thought because not all thought is causal. As examples, Sloman points to the human ability to do arithmetic and geometry. He also notes that much of the grammatical structure of language has to do with matters other than causation. The implication is that a methodology for representing mental models should permit a close correspondence between the representational technique and natural language elements.

Figure 10 shows the correspondence of vertices and edges to language functions. A vertex functions as a noun. Depending on its position, a vertex may be a noun serving as the subject of a verb or as the object of a verb. An edge functions as a verb; it identifies a relationship between two vertices. Vertices and edges are defined by property sets that serve as modifiers; that is, the modifiers are the adjectives and adverbs of the graph language.

Figure 11 shows the next step in mapping. It presents a basic statement in graph language. The statement identifies a component of a mental model. At a minimum, a statement must contain one vertex. It will contain an edge only if a relationship can be defined between it and another entity of the mental model. Both vertices and edges must be described by property sets. A property set contains the formal expression of the characteristics a person associates with whatever name he or she has given a vertex or edge. The rules for combining vertices and edges are contained in a grammar of the graph language, and the terminology for identifying vertices, edges, and their properties is contained in the language's vocabulary. These terms are specific to the language of graphs, generalized for all types of mental model comparisons, and not limiting regarding the wording used to provide details about properties. In other words, the vocabulary is related to the graph structure of the mental model representation, not to the subject matter that the representation describes.

Figure 10. Basic elements of a language of graphs. In the figure, a vertex (v1, v2, v3) functions as a noun and, by position, as a statement subject or object; an edge or arc (e.g., e1: v1-v2) functions as a verb, an expression of relationship; and a property set (Properties 1-n) functions as a modifier, a description attached to a vertex or edge but not physically a part of the graph.

Figure 11 defines the grammar as the rules and principles for stating vertex and edge combinations, and the vocabulary as the set of terms used to express the structure of a mental model in the language of graphs. Each vertex (V) and edge (E) carries a property set (Properties 1-n). The basic graph "statement" is:

V1 (Property Set V1) + [E1 (Property Set E1) + V2 (Property Set V2)]

Interpretation: V1 (as defined by its property set) is a member of the mental model configuration. If V2 (as defined by its property set) is also a member of the mental model configuration and is related to V1, its relationship is E1 (as defined by its property set).

Figure 11. Combining elements in a language of graphs.
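Read as data, the statement form above lends itself to mechanical assembly. The following minimal sketch, using assumed names and a simple dictionary encoding, builds one such statement and prints the interpretation given for Figure 11.

    def statement(v1, edge, v2):
        """Compose the basic graph statement V1 + [E1 + V2] and render
        the interpretation given in Figure 11."""
        return (f"{v1['name']} (as defined by its property set) is a member "
                f"of the mental model configuration. If {v2['name']} is also "
                f"a member and is related to {v1['name']}, its relationship "
                f"is {edge['name']}.")

    # Hypothetical fragment of an instructional design model.
    v1 = {"name": "learner analysis", "properties": {"noun_class": "thing"}}
    v2 = {"name": "objectives", "properties": {"noun_class": "thing"}}
    e1 = {"name": "informs", "properties": {"relationship": "sequence"}}
    print(statement(v1, e1, v2))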

The graphs for two mental models can be compared on basic quantitative values such as order (i.e., the total number of vertices), size (i.e., the total number of edges/arcs), and the total number of graph components. However, the more important questions of comparison relate to how similar the graphs are in content: the meanings of nodes and the relationships between them. Similarity ratings for content and structure can be created using similarity values computed in the vertex and edge summary tables. Figure 12 illustrates the comparison concept.

Figure 12. Comparison concept for two mental model representations. The overlap between Model A and Model B contains the nodes and edges whose properties are the same; the remainder of each model's total nodes and edges has different or missing properties.
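The comparison concept in Figure 12 can also be sketched in code. Below, each model is a set of vertices and edges keyed by their property sets, and the overlap is expressed as a simple Jaccard-style ratio; this ratio is an assumption for illustration, since the methodology's actual formulas are specified in Appendix A.

    def jaccard(a, b):
        """Share of elements two sets have in common (0.0 to 1.0)."""
        return len(a & b) / max(len(a | b), 1)

    def compare(model_a, model_b):
        """Compare two models on order, size, and property-based overlap.

        A model is {'vertices': set of (name, properties) pairs,
                    'edges': set of (v1, v2, properties) triples},
        with properties given as hashable frozensets of field-value pairs.
        """
        return {
            "order": (len(model_a["vertices"]), len(model_b["vertices"])),
            "size": (len(model_a["edges"]), len(model_b["edges"])),
            "vertex_similarity": jaccard(model_a["vertices"], model_b["vertices"]),
            "edge_similarity": jaccard(model_a["edges"], model_b["edges"]),
        }

    p = frozenset({("noun_class", "thing"), ("type", "design activity")})
    student = {"vertices": {("learner analysis", p)}, "edges": set()}
    expert = {"vertices": {("learner analysis", p), ("objectives", p)},
              "edges": {("learner analysis", "objectives",
                         frozenset({("relationship", "sequence")}))}}
    print(compare(student, expert))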

The prototype study conducted as supervised research during the 2006 summer semester revealed that the methodology was viable but required several improvements:

• Expanded menus for properties. Both vertex and edge property lists need to be reexamined and expanded. Relationships such as process steps or class memberships are straightforward, but relationships between concepts or interdependent items require more exploration for appropriate terms.
• A dictionary of terms with clear definitions and criteria for the designation of properties. Clear examples should also be provided to illustrate application of the criteria.

The prototype study also confirmed the original vision that a fully effective implementation of the methodology requires software support for the expression and analysis of mental model comparisons. A review of existing software, such as that available for concept mapping, did not reveal a product that is capable of fulfilling all of the necessary components of the methodology: combined graph and property set visualization and modification, data collection, database creation, data analysis, and reporting. At this stage of development, a paper-based study is required to validate the concept and demonstrate its utility.

Study

The context for this research is a graduate program in instructional design at a large, Southern university. The program provides learners with a strong background in instructional design, including: theories of design, learning, and instruction; instructional design practice; and research methods. Individuals may enter the program as masters students or doctoral students.

Design

The mental model representations of two types of participants were compared: experienced instructional designers (professors) and inexperienced instructional designers (students). Participants in the study were given an instructional design problem and asked to represent their mental models of their approaches to addressing the problem. The mental model representations of beginning instructional design students were compared with the mental model representation of an experienced instructional designer (their professor) in order to examine the methodology's utility in comparing mental model representations among persons assumed to have different levels of expertise. Prior to data collection, the instructional design professor was asked if he anticipated that the mental model representations could prove useful in providing assessment data (e.g., entry-level knowledge and conceptions) that might identify learner needs and/or influence the course of instruction. The mental model representations of three experienced instructional designers were also compared with one another to examine the methodology's utility in comparing mental model representations among persons assumed to have similar knowledge; however, the primary use of these data was to provide another point of comparison, if needed, with the student data. The focus of this study was on the methodology's utility in making comparisons; it was not intended to determine the quality of an experienced person's model.

Figure 13 illustrates the basic study design aimed at answering the research question concerning whether or not the methodology provides useful comparisons of student-constructed models based on relevant attributes of structure and content that are embedded in the model elicitation methodology. The basis for comparison with student models is the mental model representation of an experienced instructional designer (their professor). Both the students and the designer were asked to respond to the same instructional design problem, and each student model representation was compared with the designer's model representation. Analysis of these comparisons was intended to identify learner needs: (a) gaps in the knowledge and conceptions of students at the beginning of instruction; and (b) student misconceptions that might interfere with learning if not overcome through instruction. The analysis was intended to reveal learning needs that are common to the student group as well as those that are specific to individuals.

Figure 13. Study design overview. Each student's mental model representation (a response to the problem) is compared with the experienced designer's representation, and the comparisons are analyzed to identify learner needs: students' entry-level gaps in knowledge and conceptions, and student/designer model differences that indicate student misconceptions.

Materials

The materials for this study included:

• a training package for representing mental models using the methodology;
• a problem statement and specific instructions for responding to the problem statement;
• paper and forms for representing mental model elements; and
• questionnaires to obtain qualitative feedback concerning the representation process and the utility of the information provided by the methodology.

Study materials were presented as a separate packet for each participant in the study. Each packet contained a full complement of the study materials, and each item in the packet was labeled with a number corresponding to the participant number identified on the packet cover. Study materials were color coded to facilitate easy identification during the mental model representation process. For example, the instructions sheet was printed on green paper, vertex properties forms were printed on buff paper, and edge properties forms were printed on blue paper.

Problem Statement

The problem statement was designed to achieve several objectives in the mental model representation exercise. Problem statement objectives included the following:

Learning Objectives in the Course. First, the problem statement needed to relate to the learning objectives for the course. The course used in the study was an introductory course providing an overview of instructional design and the role of instructional designers. Therefore, the problem statement needed to be broad enough to allow participants to present a wide range of the knowledge and perspectives they may have about the course subject matter.

Focus on Knowledge about Problem Solving rather than Solving the Problem. The problem statement was constructed to direct participants' attention to knowledge about instructional design and what instructional designers need to know and do, rather than to have participants attempt to design instruction.

Maximizing Openness of Responses. The problem statement did not specify a structure for the model representation. For example, it did not indicate that models should focus on representing a process, a problem context, or any other particular structure. The selection of what to include and how to present it was left to participants.

Guidance for Level of Detail and Time Management. The problem statement provided suggestions to help participants determine how much was expected of them in their model representations. Participants were encouraged to be selective and not attempt to include every possible element and detail of which they might have knowledge.

The concept for the problem statement emerged from my personal knowledge of the courses in the instructional design program where the study was conducted and from conversations with the professor teaching the course used in the study. The conversations with the professor confirmed that the problem statement related well to the learning objectives for the course. The other objectives in the problem statement design were based on experiences noted in other research. For example, the design of the problem statement attempted to address problems noted by Spector and Koszalka (2004), who found participants attempting to solve a problem rather than to present knowledge about how to find a solution. Participants were asked to respond to the following statement in representing their mental model of how to approach the problem:

You are an instructional designer who is competing for a position with an instructional design company. The company is asking each applicant to illustrate his or her knowledge of instructional design by representing his or her mental model of how to respond to an instructional design assignment. The assignment is to design a one-week educational unit on writing paragraphs for a 5th grade English class that includes students with disabilities (e.g., physical, learning, behavior) along with students who have not been diagnosed with problems related to the classroom. In this exercise, you are NOT to design the unit; you are to illustrate your knowledge of what an instructional designer needs to know and do in order to complete the design task. Using the mental model representation guidelines, provide a description of your approach to this problem. You may include whatever elements you consider significant (e.g., processes, procedures, theories, persons, tools, etc.). Your objective is to convey your overall understanding of instructional design and the basis for your approach. You probably won't have time to include all details of which you have knowledge, so choose what you consider the most important and relevant elements.

Procedures

The data collection for this study was conducted as a learning activity in the first class session of an introductory course in instructional design. Before beginning the study, participants were given copies of the informed consent form, which described their rights as human subjects and advised them of the nature of their participation. They were then given an opportunity to ask questions. Participants signed copies of the informed consent form and returned them to the researcher.

Participants were informed that their names would not be reported in study results; however, the professor teaching the class needed to be able to associate study data with individual students because the mental model representation exercise was used as a learning exercise in the class. In order to associate student names with their mental model representations for the professor's purposes, students were asked to write their names on the number labels attached to their study packets. These labels were removed from the study packets and collected before the packets were returned to the researcher. The names were not associated with numbers in the database until it was time to report analysis results to the professor teaching the class. The name/number list was presented to the professor as a separate document that was not part of the database containing study data. In order to facilitate analyses relating student backgrounds to their initial conceptions of instructional design, the professor provided access to information about country, educational background, and type of work experience. The study packet labels were used for a second purpose: immediately following the mental model representations, the labels were collected and used to draw a winning name for a scientific calculator given by the researcher as a reward for class participation.

Participants were given a 20-minute PowerPoint orientation to the methodology. The orientation included examples and definitions of graph elements. After the initial orientation, the next phase of training consisted of a brief practice period in which participants were asked to complete a portion of a small graph and describe the graph vertices and edges. In step one of the practice, participants were given guided practice in identifying graph elements and describing them on the data collection forms. In step two, participants were asked to identify and define several additional graph elements independently. Finally, participants were asked to complete a short questionnaire concerning the training.

After the practice, each participant worked independently to produce a mental model representation of his or her approach to the instructional design problem. The representation consisted of a drawing of a graph and the completion of forms defining the vertices and edges included in the graph. Participants were given time checks to let them know when 30 minutes, 45 minutes, and 55 minutes had elapsed. Then, participants were asked to complete a short questionnaire (10-15 minutes) to obtain participant feedback on the process of representation. At the end of the representation process, study materials were replaced in the packets and returned to the researcher.

There was one difference in the protocol used for the two types of participants. After their orientation, students completed their mental model representations during the first class session of the course used in the study. The professor teaching the course allocated approximately 60 minutes of class time for the mental model representation exercise. The experienced instructional designers were given time outside of this class session to complete their representations and determined for themselves the amount of time devoted to representing their mental models. They reported times ranging from about one and one-half hours to three hours or more. This arrangement corresponds to the circumstances that would exist if the methodology were implemented for general use in an instructional design program: it is expected that professors would take time to create and edit a model representation that would achieve their assessment objectives.

Each participant's packet was processed to create a database of graph elements and an electronic version of his or her graph. Labels and comments were recorded as written by participants (e.g., abbreviations were not interpreted, and unusual language constructions used by international students were not revised). A number of analyses were performed on this database. Appendix A contains the details concerning the analysis process, the formulas for computing comparison metrics, and the kinds of information produced.

Following the data analysis, participants were given feedback on study results. Student participants were given individual reports that included: (a) a summary of the differences between student models and their professor's model; (b) a graph and list of vertex titles for the professor's model; and (c) a graph and list of vertices for the student's model. The professor teaching the class was given a report with an analysis of student conceptions and possible misconceptions as well as a graph for his own model and each of the student models. In this report, students were referenced by their participant numbers. This report was also shared with the other two experienced instructional designers. All three experienced designers were interviewed to obtain feedback on the mental model representation process, the data produced, and a list of recommendations for improvements to the methodology.
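For concreteness, the database records can be pictured as follows. This is a minimal sketch in Python; the record and field names are illustrative assumptions only, not the actual schema, which is documented in Appendix A.

    from dataclasses import dataclass, field

    # Hypothetical record types for the study database (illustrative only).
    @dataclass
    class Vertex:
        number: int    # sequential number (1 to n) within one participant's graph
        label: str     # label exactly as written by the participant
        properties: dict = field(default_factory=dict)  # vertex property set from the form

    @dataclass
    class Edge:
        number: int        # sequential number within one participant's graph
        endpoints: tuple   # the pair of vertex numbers the edge connects
        properties: dict = field(default_factory=dict)  # edge property set (relationship type, etc.)

    # One participant's packet becomes one set of numbered vertex and edge records.
    packet = {
        "participant": "P1",
        "vertices": [Vertex(1, "Design a one-week educational unit")],
        "edges": [],
    }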

Participants

Participants were:

• a class of 19 graduate students taking an introductory design course in the Instructional Systems Program;
• an experienced instructional designer who was the professor teaching the introductory course in the Instructional Systems Program; and
• two additional professors who are experienced instructional designers.

The graduate students were a diverse group composed of nine students from the United States and ten international students representing six different countries (Australia, Indonesia, Singapore, China, Korea, and India). Twelve (63%) had experience in teaching or military training. Of these twelve, two had limited experience in course development but no formal training. Of those with no background in instruction, two had military experience, and the other five had a variety of experience in education, system administration, and laboratory research. The students had a variety of educational backgrounds, and only three students had degrees in instructional or educational technology. Six students were just beginning post-graduate work; ten had already completed master's degrees. All but one of the graduate students were taking the introductory course in their first semester of study in the Instructional Design Program. In addition, the class session was held mid-day on Monday of the first week of classes, making it their first course in the program overall. Under these circumstances, the students appeared eager to participate and perform well.

Two of the experienced instructional designers were professors in the Instructional Systems Program. The other professor was experienced in designing instruction for courses in another graduate program in the College of Education. This professor was also an expert in a field referenced in the problem statement: teaching students with disabilities.

Analysis

Each student's mental model representation was compared with that of the professor teaching the course. The comparisons were made by examining the vertex and edge definitions recorded in the study database. The mental model comparison methodology was applied to translate vertex and edge property data into quantitative measures of the degree of correspondence of each student's model to that of the professor. During this analysis, details provided in the property sets for vertices and edges were used to verify determinations of similarity between graph elements in the two models.

Next, the mental model representations of the other two experienced instructional designers were compared with that of the professor teaching the course. The same kinds of similarity measures were computed, and the three experienced designer models were examined to determine whether or not additional individual comparisons with student models would be valuable.

The models of the nineteen students and their professor were reviewed individually to look for indicators of common focus and perspectives. These results were then tabulated by participant. Student backgrounds were examined to explore possible relationships between the focus and perspectives observed and the students' prior education, experience, and country (i.e., US students vs. international students). The questionnaire data were examined to see if there were relationships between student performance and their responses regarding the training, understanding of the exercise, and the mental model representation process.

A report was prepared for the professor teaching the course (see Appendix E). The report contained a summary of the preliminary analyses including:

• a brief narrative explaining the analysis;
• a summary of the qualitative differences between the professor's model and the student models;
• a summary of the quantitative differences between the professor's model and the student models;
• a description of conceptions and possible misconceptions observed in the student models;
• a graphic representation, qualitative summary, and list of vertices for each model (professor and the nineteen students); and
• several tables and figures summarizing and illustrating various aspects of the data.

This report was shared first with the professor teaching the course and then with the other two participating professors during individual interviews to obtain feedback on the usefulness of the information. During the interviews, each of the three professors was also asked for feedback on the mental model elicitation process and on the list of recommendations that had been prepared concerning the improvements needed prior to further study and implementation of the methodology.

CHAPTER 4

RESULTS

The results for the study are presented in this chapter in an order that relates to the progression of data elicitation and analysis. In Chapter 5, the results are discussed in relation to the research questions.

The first data obtained from participants were their responses to a brief questionnaire that they completed following the orientation and training session. Additional data relating to the adequacy of training and participant feedback on the mental model elicitation process were obtained on a second questionnaire that participants completed following the mental model representation activity.

Data analysis of models began with a reproduction of participants' hand-drawn models using the drawing features of PowerPoint. The reproductions in electronic form were consistent with the structure and contents of the participant models and facilitated visual comparisons by presenting model elements with the same size and notation (e.g., sequential numbering of model vertices and edges for reference to the definitions of these elements in the database). The next step in analysis involved using the database to conduct the quantitative comparison of mental model representations.

Qualitative analysis began with a comparison of model focus and perspectives to determine differences between student models and the model of their professor. This analysis continued with an examination of student backgrounds relative to the focus and perspectives observed in their model representations. Student models were also analyzed to identify the conceptions and possible misconceptions that might affect the professor's design and/or delivery of instruction.

A review of the mental model representations of the three experienced instructional designers resulted in a different analysis than was originally anticipated. Some of the reasons for the difference are discussed in Chapter 5. The model of one of the three did not lend itself to qualitative comparison with the other two, but it did present an opportunity for a short case review to gain insight into the process of mental model representation.

The final stage of data collection involved interviews with the three experienced designers, each of whom is a professor. For convenience of reference, the experienced designer who taught the course used in the study is sometimes referred to as professor 1; the other two are referred to as professor 2 and professor 3. The feedback obtained from the three professors was used in compiling the final evaluation of the methodology and recommendations for improvements and next steps in research.

Adequacy of Training

Table 4 shows student responses on the questionnaire they completed immediately following the 20-minute mental model representation orientation and training. Two of the 19 students (11%) did not complete the questionnaire. The table presents the detailed responses of the students. When the responses are grouped according to agreement at some level or disagreement, the table shows that the majority of the students regarded the training as adequate. Eighty-four percent (84%) of the students indicated that they understood the training (i.e., the PowerPoint presentation). Seventy-nine percent (79%) of the students said that they understood what to do in the practice in which they represented a simple model and described its elements using the vertex and edge properties forms. Seventy-four percent (74%) of the students believed that the practice gave them confidence for the next activity of representing their mental models regarding instructional design.

Three students provided additional comments in the space provided on the questionnaire. The primary concern expressed was a limited understanding of the edge properties form used to explain the relationship between two vertices. One student indicated understanding of the exercise but still lacked confidence about his preparation. However, those students who expressed doubt over understanding and preparation did not appear to perform less well than those who had more confidence.
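The percentages reported here and in the tables are simple proportions of the 19 enrolled students, rounded to whole percents; a minimal sketch of the arithmetic in Python:

    # Agreement counts (strongly agree + agree) out of 19 students, from Table 4.
    n_students = 19
    agreement = {
        "understood the training": 5 + 11,
        "understood the practice": 5 + 10,
        "practice gave confidence": 4 + 10,
    }
    for item, count in agreement.items():
        print(f"{item}: {round(100 * count / n_students)}%")  # 84%, 79%, 74%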

Table 4
Student Responses to the Post-training Questionnaire

                                                 Strongly                         Strongly   No
Question                                         Agree      Agree     Disagree    Disagree   Response
1. I understood the training.                    5 (26%)    11 (58%)  1 (5%)      0 (0%)     2 (11%)
2. I understood what I was supposed to do
   in the practice.                              5 (26%)    10 (53%)  2 (11%)     0 (0%)     2 (11%)
3. The practice helped me have confidence
   for the next activity.                        4 (21%)    10 (53%)  3 (16%)     0 (0%)     2 (11%)

Table 5 reports the responses of the three professors following their orientation and training for mental model representation. All of the professors indicated that they understood the training and were prepared for the mental model representation activity. One professor made the comment that having the training material available during practice was helpful.

Table 5
Professor Responses to the Post-training Questionnaire

                                                 Strongly                         Strongly
Question                                         Agree      Agree     Disagree    Disagree
1. I understood the training.                    2          1         0           0
2. I understood what I was supposed to do
   in the practice.                              2          1         0           0
3. The practice helped me have confidence
   for the next activity.                        1          2         0           0

The Mental Model Elicitation Process

Student responses to the questionnaire relating to the mental model elicitation process are presented in Table 6. Just over half of the students (53%) reported that they were familiar with concept maps. The second question on the questionnaire repeats the inquiry contained on the post-training questionnaire, asking if the training session helped participants understand how to represent a mental model. Although the proportion of agreement was eighty-four percent (84%) in both instances, there was a difference in the persons expressing agreement. Two students changed their responses from agreement to disagreement, and the two students who did not complete the first questionnaire answered the question on the post-elicitation questionnaire, one expressing agreement and the other disagreement. Another student changed his response from disagreement to agreement. Questions three and four addressed clarity of instructions and understanding of the task; the students expressed agreement at seventy-nine percent (79%) and ninety percent (90%), respectively. Question five inquired whether or not the mental model representation exercise helped students to clarify their thinking about instructional design. Seventy-four percent (74%) responded that it did.

Questions six through eight addressed the time constraints that may have affected the students' ability to represent their mental models. Seventy-four percent (74%) reported that they had more information to include than time allowed. Eighty-four percent (84%) believed they could improve their models if they had more time to make revisions. Ninety-five percent (95%) indicated they could improve their performance if they had more practice. The two final questions dealt with student perceptions of their thought processes. Only twenty-one percent (21%) said that they found it difficult to express their thoughts as graph elements. Forty-eight percent (48%) said that they often think in "pictures."

Nine students included additional comments in the space provided on the questionnaires. Many of these comments reiterated the students' responses to questions. Most of the additional information related to a need for better understanding of the relationships between vertices as expressed through edge properties. One student included a comment that the edge definition "seemed to add a lot to the material." Another student expressed difficulty in responding to the problem statement because of having trouble distinguishing between a specific instance of the design task and a generic description of instructional design. As suggested by the response changes from not understanding to understanding the training, one student reported "warming up" to the idea and being able to add more information if given more time. The student added that it was easier to illustrate rather than describe what he was thinking.

Table 6
Student Responses to the Post-elicitation Questionnaire

                                                 Strongly                         Strongly   No
Question                                         Agree      Agree     Disagree    Disagree   Response
1. I am familiar with concept maps.              2 (11%)    8 (42%)   6 (32%)     3 (16%)    0 (0%)
2. The training/orientation presentation
   helped me understand how to represent
   a mental model.                               4 (21%)    12 (63%)  3 (16%)     0 (0%)     0 (0%)
3. The instructions for representing my
   mental model were clear.                      3 (16%)    12 (63%)  3 (16%)     0 (0%)     1 (5%)
4. I understood what I was supposed to do.       3 (16%)    14 (74%)  2 (11%)     0 (0%)     0 (0%)
5. The exercise of representing my mental
   model helped me to clarify my thinking
   about instructional design.                   2 (11%)    12 (63%)  3 (16%)     0 (0%)     2 (11%)
6. I had much more information that I did
   not have time to include in my model
   representation.                               6 (32%)    8 (42%)   3 (16%)     1 (5%)     1 (5%)
7. I could improve my model if I had more
   time to make revisions.                       8 (42%)    8 (42%)   2 (11%)     0 (0%)     1 (5%)
8. I could do a better job of representing
   a mental model if I had more practice.        12 (63%)   6 (32%)   1 (5%)      0 (0%)     0 (0%)
9. It is difficult for me to represent my
   thoughts as graph elements.                   3 (16%)    1 (5%)    14 (74%)    0 (0%)     1 (5%)
10. I often think in "pictures."                 2 (11%)    7 (37%)   8 (42%)     1 (5%)     1 (5%)

Table 7 reports the professors' responses to the post-elicitation questionnaire. The professors' response patterns were similar to those of the students. Although the small number of individuals makes it difficult to ascertain real differences, there were two questions where the responses appeared to be different. Two of the three professors disagreed that the exercise helped them clarify their thinking about instructional design. Two of the three professors agreed that it was difficult for them to represent their thoughts as graph elements.

Table 7
Professor Responses to the Post-elicitation Questionnaire

                                                 Strongly                         Strongly
Question                                         Agree      Agree     Disagree    Disagree
1. I am familiar with concept maps.              0          2         0           1
2. The training/orientation presentation
   helped me understand how to represent
   a mental model.                               1          2         0           0
3. The instructions for representing my
   mental model were clear.                      1          2         0           0
4. I understood what I was supposed to do.       1          2         0           0
5. The exercise of representing my mental
   model helped me to clarify my thinking
   about instructional design.                   1          0         2           0
6. I had much more information that I did
   not have time to include in my model
   representation.                               1          2         0           0
7. I could improve my model if I had more
   time to make revisions.                       2          1         0           0
8. I could do a better job of representing
   a mental model if I had more practice.        2          0         1           0
9. It is difficult for me to represent my
   thoughts as graph elements.                   1          1         0           1
10. I often think in "pictures."                 0          1         1           1

Mental Model Representations

The hand-drawn graph of each participant was reproduced using the drawing facilities available in PowerPoint software. Vertices are shown in circles, and edges are indicated by lines connecting vertices. Some participants did not use the recommended sequential numbering for graph elements on their hand-drawn graphs. For example, some participants used Roman numerals for some portions of their graphs. Others used a hierarchical system such as 1.1, 1.2, 1.2.1, and so on. On the reproduction graph, each vertex and edge is numbered from 1 to n, according to the number of vertices and edges in each graph. This number corresponds to the sequential numbering required by the database. The original numbering of each participant was used whenever possible. The structure presented on each reproduction graph duplicates that of the participant; however, the physical positions of some graph elements were shifted to make the graph representation easier to read. For example, in the original drawings, the addition of edge lines after initial placement of vertices resulted in unnecessary crossing of lines, which made the drawing difficult to follow. Such vertices were repositioned so that the connecting edges did not cross.

After a preliminary analysis of graph data, a draft report was submitted to the professor teaching the course. This report contained a summary of his model and each of the models of his students. Also included were a narrative discussing differences and several figures summarizing some of the data. The professor provided feedback concerning this draft report and requested modifications in the data presentation and additional analysis regarding how students might be grouped according to their levels of understanding and perspectives. For example, the professor wanted the narrative discussing differences to be converted to a table that summarized the differences between his model and student models on key elements, such as an approach to problem solution, perspective, and level of abstraction. For formatting the graphs, the professor suggested showing common vertices and lines in bold print to indicate that the same elements appeared in both the professor's graph and a student's graph. The professor also requested that student responses and demographics be examined to determine if there were any relevant factors that would suggest identifiable subgroups in the class. In particular, he wanted to know if there were groups of students who had common needs in instruction. The professor's suggestions and requests were incorporated into the final report. The investigation of subgroups within the class revealed that there were noticeable differences between US and international student models, but no differences were noted relative to prior education or work experience. It appeared that US students had less knowledge than international students about the role of an instructional designer and the instructional design process. In response to the revised report, the professor indicated that the additional data were informative and that the tables were very helpful.

The final report given to the professor teaching the course contained a model structure summary for him and each of the 19 students (see Appendix E). Each model summary included a reproduction of the graph, a numbered list of the vertex labels used by the participant, and an indication of the focus and perspectives ascertained from the labels and descriptions provided by the participant. On student models, the summary page also includes a brief review of the student's background. In each graph, the first vertex (V1) indicates the highest level within the graph. Color coding is added for ease of locating the first vertex and each vertex incident to it: V1 is shown in light yellow, and all vertices directly connected to V1 are shown in green. The color coding is used both on the graph and on the list of vertices included with the model representation. Any correspondence in vertex or edge numbering from one model to another is coincidental; the numbering from 1 to n refers only to the graph elements contained in an individual's model representation.

On student graphs, vertices and edges that correspond to similar elements on the professor's graph are shown with bold lines. Similarity of vertices was determined by an analysis of vertex labels, properties, and graph context. Vertex labels need not be identical, but the graph labeling and properties must indicate a clear correspondence for vertices to be considered similar. Edges were determined to be similar if (a) they connect the same vertices on both the professor's graph and the student's graph, and (b) their properties suggest the same type of relationship on both graphs. There were no edges common to the professor's graph and any other graph representation. The dotted line for edge 39 indicates that it did not link the two vertices in any of the student graphs (i.e., the vertices occurred in different structures).
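These matching rules can be summarized in code form. The following Python sketch is illustrative only: labels_correspond and relationship_type are hypothetical stand-ins for the judgments made from labels, property sets, and graph context, not functions from the actual analysis.

    def vertices_similar(v_prof, v_stud, labels_correspond):
        # Labels need not be identical, but labeling and property sets must
        # indicate a clear correspondence; that judgment is a human one here,
        # represented by the hypothetical callable labels_correspond.
        return labels_correspond(v_prof, v_stud)

    def edges_similar(e_prof, e_stud, vertex_map, relationship_type):
        # (a) The student edge must connect the two vertices matched to the
        # professor edge's endpoints (vertex_map maps student vertex numbers
        # to matched professor vertex numbers), and (b) the edge properties
        # must suggest the same type of relationship on both graphs.
        mapped = {vertex_map.get(v) for v in e_stud.endpoints}
        return (mapped == set(e_prof.endpoints)
                and relationship_type(e_prof) == relationship_type(e_stud))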
Figure 14 shows the model summary for the professor teaching the course (also indicated in the database as participant P20), and Figure 15 illustrates a student model summary. Vertices that appear in both professor and student graphs are shown in bold circles. On the professor graph, the numbers in parentheses indicate how many student graphs contained the same or a similar vertex (i.e., property sets indicate correspondence between the two vertices, but the descriptions may not be identical). On each vertex list, vertex labels in bold indicate that the vertex appears on the professor's model and on the student's model.

Nine vertices on the professor's model appeared on one or more of the model representations of the students; however, four was the highest number of common vertices that appeared on any single student model. Vertex 1, which was the overarching model label on the professor's graph, was the vertex found on the most student models (nine). Vertex 30 (labeled "client" but further described in properties as "teacher") was the second most common vertex; it appeared on eight student models. The next most common vertex was number 29 (labeled "target audience" but further described in properties as "students"), which appeared on six student models. Five student models contained vertices corresponding to vertex 5, which referred to design activity. Four student models addressed "identifying gaps," which was also included in the professor's vertex 19. Vertex 36, "class syllabus and other class documentation," was mentioned by two students. Vertices 2 ("system characteristics"), 22 ("prepare unit plans and lesson plans"), and 23 ("prepare specialized materials") each appeared once among the student models.

Some student models contain elements that are implied in the professor's model, but they are not considered comparable because they clearly are presented as incomplete expressions or are categorized differently. For example, student models may include "student needs" as something to be determined; however, in the professor's model, student needs are included within the overall analysis of the system characteristics and determined from several sources of information.

Professor (P20) Structure Summary (Vertices: 40; Edges: 42)

[Graph reproduction not rendered in this text version: 40 numbered vertices (V1-V40) connected by 42 numbered edges (E1-E42). Parenthesized counts beside some vertices indicate how many student graphs contained the same or a similar vertex; edge 39 is drawn as a dotted line.]

Focus/Perspectives: ID knowledge, Design process, System view, Specific classroom setting

1. Design a one-week instructional unit on writing paragraphs: Instructional designer "Know & Do" requirements
2. System characteristics
3. Project plans and strategies
4. Goals
5. Blueprint: Design the unit
6. Instructional materials: Develop and formatively evaluate the units
7. People
8. Supporting materials
9. Situational knowledge
10. Target system characteristics
11. Background resources
12. Enculturation
13. Suprasystem characteristics and interfaces
14. Coordinate systems & interfaces
15. Values, policies, and attitudes
16. Background (existing) material
17. Process model
18. Project plan
19. Information requirements (identify gaps)
20. Constraints & facilitators
21. Client
22. Prepare unit plan and lesson plans
23. Prepare specialized materials
24. Go over unit with teacher and get approval
25. Lesson plan & materials
26. "Walk through"
27. Pilot test
28. Closing the project
29. Target audience
30. Primary client
31. Relevant administrators
32. Learning disability specialists
33. Other teachers
34. Subject matter experts
35. Existing curricular material
36. Class syllabus and other class documentation
37. Related curricula (concurrent, previous, and subsequent)
38. Documents about specialized needs of the students
39. School policy and procedure documents
40. Materials on teaching and writing

Figure 14. Graph and vertex list for the model of professor 1

Student (P1) Structure Summary (Vertices: 20; Edges: 24)

[Graph reproduction not rendered in this text version: 20 numbered vertices (V1-V20) connected by 24 numbered edges (E1-E24).]

Focus/Perspectives: ID knowledge, Specific classroom setting, Delivery of instruction
Background: (not included here for anonymity)

1. Design a one-week educational unit
2. Participants
3. Normal students
4. Students with disabilities
5. English standards of participants
6. Level of standards
7. 5th grade
8. Syllabus of teaching English
9. Duration of the program
10. One week
11. Instructors
12. Instructors qualification
13. Training for instructors
14. Number of instructors available
15. Theories of learning
16. Instructional techniques
17. Classroom instruction
18. Experiential learning
19. Computer-based instruction
20. Self-paced learning

Figure 15. Example of a graph and vertex list for the model of a student

Figure 16 shows the model summary for professor 2 (also indicated in the database as participant P21). As with the student models, it was compared to the model for professor 1. Fourteen of the vertices in this model also appeared in the model of professor 1. A notable feature of the model representation for professor 2 is a set of three associated processes that are described as occurring iteratively throughout the primary design process. These processes are identified as vertex sequence 45-47 (which deals with budget management), vertex sequence 48-50 (which deals with personnel management), and vertex sequence 51-53 (which concerns timeline issues in project management). In the original drawing, these vertex sequences were shown with arrows pointing to a large circle that encompassed most of the vertices comprising the primary model. Edge 33 is shown as a dotted line to emphasize the fact that it is not common to the models of professors 1 and 2 even though the two vertices it connects are found in both models. A more detailed discussion of the comparison between the models for professors 1 and 2 is presented in Chapter 5, which also addresses the implications for model representation that the three associated (but disconnected) processes suggest.

Professor (P21) Structure Summary (Vertices: 53; Edges: 58)

[Graph reproduction not rendered in this text version: 53 numbered vertices connected by 58 numbered edges. Three associated processes (V45-V47, V48-V50, and V51-V53) are annotated as occurring iteratively throughout the primary design process; edge 33 is drawn as a dotted line.]

Focus/Perspectives: Design process, Specific classroom setting

1. Get clear view of desired project outcome
2. Conduct informal interview/discussion with client
3. Identify specific skills/knowledge learners need in order to attain desired outcome
4. Examine print & other resources
5. Draw upon personal knowledge
6. Conduct interviews with SMEs (teachers)
7. Identify subordinate skills

Figure 16. Graph and vertex list for the model of professor 2


Figure 16 - Continued

8. Identify skills & knowledge learners already possess
9. Interview learners
10. Interview teachers
11. Pretest students
12. Collect student records
13. Gain permission from administrators
14. Identify skills that the target audience needs to learn
15. Verify with stakeholders (teachers/clients)
16. Meet with teachers
17. Meet with clients
18. Revise
19. Examine existing materials
20. Identify how the existing materials might be employed
21. Identify the knowledge & skills for which instruction must be designed
22. Identify instructional context/environment
23. Identify media to be employed
24. Design instructional strategy
25. Design instructional activities
26. Identify & design information to be presented
27. Design examples
28. Design practice activities
29. Design feedback
30. Design assessment instruments
31. Review plan for materials with clients
32. Revise plans as needed
33. Script/write inst. materials
34. Produce materials
35. Formatively evaluate materials
36. Conduct 1-on-1 evaluations
37. Analyze 1-on-1 data
38. Revise materials as necessary
39. Conduct small group eval.
40. Analyze small group data
41. Revise as needed
42. Create teachers guide
43. Present materials & guide to teachers
44. Implement materials
45. Determine total project funds available
46. Create/adjust project budget
47. Monitor project budget
48. Identify project personnel
49. Assign/reassign personnel to project tasks
50. Monitor personnel's task performance
51. Identify project deadlines
52. Create/adjust project timeline
53. Monitor progress vis-à-vis timeline

Quantitative Comparison of Models

Table 8 reports the quantitative analysis resulting from applying the techniques and formulas of the mental model comparison methodology (see Appendix A). The models of the 19 students and the two other experienced designers (professors 2 and 3) are compared with that of professor 1, the professor teaching the course. The comparisons are made at two levels: full model comparison and comparison of common elements.

General graph characteristics are reported for the full model comparison. These characteristics include the order of the graph (total vertices) and the size of the graph (total edges). The graph for the model of professor 1 had 40 vertices and 42 edges. The order and size of each comparison graph are reported with a number in parentheses that indicates how many of the comparison graph vertices or edges were also contained in the graph of professor 1. The full model comparison also contains similarity values for three measures. Similarity values range from 0, indicating no similarity, to 1, indicating perfect agreement. The first measure is vertex degree, which is a measure of how similar the two graphs are in structure for the common elements; that is, how similar they are in the number of edges connected to the common vertices. The second measure is similarity of vertex properties. This measure indicates how similar the two graphs are in content overall. The third measure relates to similarity of edge properties overall; in other words, how similar the two graphs are regarding relationships among vertices. None of the student or other professor graphs had an edge in common with the graph of professor 1. The second level of comparison addresses similarity of those elements that are common to the graph of professor 1 and each of the comparison graphs. The same types of similarity values are computed as in the full model comparison; however, only the data from common elements are included in the computation.

The data for full model comparisons are as follows. The order of graphs for students ranges from six vertices (student 13) to 39 vertices (student 17). The order of the graph for professor 2 is 53 vertices, and that of professor 3 is six vertices. The size of graphs for students ranges from 11 edges (student 13) to 39 edges (student 17). The size of the graph for professor 2 is 58 edges, and that of professor 3 is six edges. The student who had the most vertices in common with professor 1 was student 1 (4 vertices). Professor 2 had the most vertices in common with professor 1 (14 vertices). Professor 3 had no vertices in common with either professor 1 or professor 2. Similarity values in the full model comparison range from 0 (students 5 and 12) to 0.04 (student 14) for student vertex degree; the vertex degree value for professor 2 was 0.09. Similarity values for vertex properties range from 0 (students 5 and 12) to 0.02 (multiple students). In comparing common elements, the highest similarity value for vertex degree for students was 0.78 (student 17); the same measure for professor 2 was 0.53. The highest similarity value for vertex properties among students was 0.75 (student 15), for whom the number of vertices in common was two. In contrast, professor 2 had a similarity value of 0.46 for 14 common vertices.
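The exact formulas appear in Appendix A. As a rough illustration of the kind of computation involved, the following Python sketch assumes a simple averaged min/max ratio over the degrees of common vertices; the actual metric may differ.

    def degree_similarity(common_degree_pairs):
        # Illustrative stand-in for the Appendix A metric: average agreement in
        # vertex degree over the vertices common to two graphs, scaled so that
        # 0 means no similarity and 1 means perfect agreement.
        if not common_degree_pairs:
            return 0.0  # no common vertices, as with students 5 and 12 and professor 3
        ratios = [1.0 if max(a, b) == 0 else min(a, b) / max(a, b)
                  for a, b in common_degree_pairs]
        return sum(ratios) / len(ratios)

    # Example: four common vertices whose degrees mostly, but not fully, agree.
    print(round(degree_similarity([(3, 3), (2, 1), (4, 4), (5, 2)]), 2))  # 0.72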

Table 8
Quantitative Comparisons of Student and Other Experienced Designer Models to the Model of the Professor Teaching the Class

              Order of G   Size of G   Full Model Comparison         Comparison of Common Elements
              (Total       (Total      (Similarity Values)           (Similarity Values)
              Vertices*)   Edges*)     Vertex   Vertex   Edge        Vertex   Vertex   Edge
                                       Degree   Props.   Props.      Degree   Props.   Props.
Professor 1   40           42
Student 1     20(4)        24(0)       0.03     0.02     0           0.46     0.22     0
Student 2     18(3)        21(0)       0.02     0.02     0           0.28     0.36     0
Student 3     15(3)        14(1)       0.02     0.02     0           0.42     0.30     0
Student 4     25(2)        25(0)       0.01     0.01     0           0.42     0.29     0
Student 5     15(0)        14(0)       0        0        0           0        0        0
Student 6     29(1)        34(0)       0.01     0.00     0           0.71     0.14     0
Student 7     29(1)        32(0)       0.01     0.00     0           0.50     0.33     0
Student 8     11(1)        12(0)       0.01     0.01     0           0.43     0.40     0
Student 9     23(3)        24(0)       0.03     0.02     0           0.58     0.41     0
Student 10    14(3)        19(0)       0.02     0.02     0           0.42     0.30     0
Student 11    21(2)        31(0)       0.01     0.02     0           0.18     0.60     0
Student 12    17(0)        22(0)       0        0        0           0        0        0
Student 13    6(1)         11(0)       0.01     0.01     0           0.50     0.33     0
Student 14    19(3)        18(0)       0.04     0.02     0           0.71     0.30     0
Student 15    24(2)        23(0)       0.01     0.02     0           0.41     0.75     0
Student 16    26(2)        28(0)       0.01     0.01     0           0.25     0.40     0
Student 17    39(3)        39(0)       0.03     0.01     0           0.78     0.24     0
Student 18    13(1)        12(0)       0.01     0.00     0           0.43     0.14     0
Student 19    23(2)        22(0)       0.02     0.02     0           0.58     0.46     0
Professor 2   53(14)       58(0)       0.09     0.08     0           0.53     0.46     0
Professor 3   6(0)         6(0)        0        0        0           0        0        0

*Numbers in parentheses represent the number of common elements between the comparison graph and professor 1's graph.

Qualitative Comparison of Student Models to Their Professor's Model

Table 9 presents a summary of the qualitative differences between the model of the professor teaching the course and the models of the nineteen students. The main difference appears to be that the professor's model reflects a holistic, systems view of the problem and an approach to a solution. In other words, the professor's model included the aspects of an instructional designer's role, the design process, the resources needed, persons involved, project management, and other pertinent elements, all related to the context identified in the problem statement. The professor's model may be described generally as conceptual and abstract; the model he presents may be applicable in other problem settings that have similar characteristics. In contrast, the student models may be described generally as much more specific and concrete. Many student models consisted largely of lists of physical aspects of a classroom setting.

Table 9
Summary of Differences between Professor and Student Models

Professor model: Is presented from a holistic, systems view of the problem and an approach to a solution.
Student models: Contain compilations of elements whose relationships are not defined in terms of an approach to a problem solution.

Professor model: Identifies project goals, plans, activities, information sources, and functions in relation to an instructional design process.
Student models: Tend to show groups of elements with little, if any, expression of how those elements are related to the problem solution.

Professor model: Is both comprehensive and concise.
Student models: No student model presents a comprehensive picture, and many student models contain elements and details that are not relevant (at least not in the way presented in their graphs).

Professor model: Presents a high-level view of an entire design process.
Student models: Student models that include elements of the design process are incomplete; some are inaccurate.

Professor model: Relates to creating an instructional unit for a specific class setting, but the model is expressed in general terms; that is, the model could apply to other settings with similar characteristics.
Student models: Many student models appear to focus on existing physical characteristics of a target setting and specific individuals.

Professor model: Does not repeat the elements contained in the problem statement; the model responds to the problem statement.
Student models: Many student models repeat the elements contained in the problem statement without responding to the problem.

Professor model: Reflects the perspective of an experienced instructional designer.
Student models: Some student models appear to reflect backgrounds as teachers and trainers. Approximately one-third of the students include the delivery of instruction in their models (perhaps as they might have done as teachers who are responsible for creating and delivering course units).

Professor model: Reflects a team approach, including client personnel and subject matter experts.
Student models: Do not reflect a team approach to instructional design; instead, it appears the students assume the instructional designer's role is to become a source of knowledge in the subject matter and instructional delivery requirements.

Professor model: Identifies a number of sources for information, content knowledge, and specialized expertise.
Student models: Some student models indicate the students are attempting to provide detailed knowledge of the problem area rather than identifying how and where to locate that knowledge within the problem context.

Professor model: May be characterized as conceptual and abstract.
Student models: Tend to be expressed in more concrete terms.

Comparisons of Student Backgrounds and the Focus and Perspectives in Their Models

Student backgrounds were reviewed in an attempt to gain insight into the focus and perspectives reflected in the model representations. Table 10 provides a summary of student background and perspectives information grouped according to whether students were from the United States or were international students. The nine students from the United States included two who were in the military. Among the ten international students were three who were military personnel. The countries represented were Australia, China, Singapore, Indonesia, Korea, and India. Student graphs could reflect multiple perspectives.

Table 10
Summary of Backgrounds and Student Graph Perspectives by Country

Graph Perspectives          US Students    International Students
ID Knowledge                1              1
Design process              1              7
Redesign of instruction     0              3
Subject matter knowledge    4              0
General classroom           1              2
Specific classroom          6              8
Delivery of instruction     5              1

A more detailed table was given to the professor in his final report, but the table is not reproduced here because of the potential to identify individual participants from the data. In the more detailed table, each student's background was presented in relation to the focus and perspectives included in his or her model representation. Students were identified by their participant number. Background information included the student's highest education level (bachelor's, post-graduate certificate, master's), the major subject areas, and work experience. The subject areas for prior education were highly varied; they included psychology, fine arts, business administration, science, engineering, and educational technology. Work experience was varied also, but most students had some background in teaching or training. Participant background information was grouped by the countries from which the students came to enter the instructional design program. Counts for focus and perspectives were shown according to whether students were from the United States or were international students.

A second detailed table was given to the professor. It presented most of the same information as Table 10, but it was arranged according to common experience. Table 11 summarizes the counts for graph perspectives for the twelve students who had experience in some aspect of teaching, training, or course development. Those with experience in course development did not have formal instructional design training. Counts are also shown for the six students who had some other kind of background. Some of these students worked in the field of education but in positions related to administration and support.

Table 11
Summary of Backgrounds and Student Graph Perspectives by Experience

                            Teaching, Training, or Course
Graph Perspectives          Development Experience           Other Experience
ID Knowledge                2                                0
Design process              6                                2
Redesign of instruction     2                                1
Subject matter knowledge    2                                2
General classroom           3                                0
Specific classroom          7                                7
Delivery of instruction     3                                3
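Counts such as those in Tables 10 and 11 are straightforward tallies over per-student records. A minimal Python sketch, with hypothetical records (field names and example values are illustrative only):

    from collections import Counter

    # Hypothetical per-student records; each model may reflect multiple perspectives.
    students = [
        {"group": "US", "perspectives": ["Subject matter knowledge", "Specific classroom"]},
        {"group": "International", "perspectives": ["Design process", "Specific classroom"]},
    ]

    tallies = {}
    for s in students:
        tallies.setdefault(s["group"], Counter()).update(s["perspectives"])
    for group, counter in tallies.items():
        print(group, dict(counter))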

Conceptions and Possible Misconceptions Identified in Student Models

Observations on Conceptions and Misconceptions

Many students have experience as teachers or trainers. Some have experience in course design but without formal preparation for this task. Although their backgrounds reflect experience in education, all students appear to be at the basic entry level for the instructional design program. The following elements and concepts were included in the professor's graph but were not represented in the student graphs, indicating areas where the students may need to receive instruction.

1. the role of the instructional designer,
2. project planning and execution,
3. a systems view of the context for an instructional design project,
4. the roles of other persons in a project,
5. sources of information and their relationship to the design process,
6. the nature of constraints and limitations, and
7. the instructional design process.

Some students included graph elements with labels referring to an ADDIE-type design process (Analysis, Design, Development, Implementation, Evaluation); however, these were usually presented at a high level and contained little, if any, appropriate detail.

Possible Misconceptions

A number of student graphs suggest that there may be misconceptions about: (a) the role of the instructional designer; (b) limitations that are imposed on the designer in the design process; and (c) the product of instructional design. Although individual circumstances may affect the role, activities, and limitations in a particular design task, no such limitations were implied in the problem statement. Four possible misconceptions in student responses were notable:

1. Instructional designers must know the specifics of a particular classroom rather than design instruction that can be used and reused by different persons and possibly in different locations (assuming they have comparable generic characteristics). Examples of things students said an instructional designer must know:
   a. individuals who will be delivering the instruction;
   b. individuals who will be receiving the instruction; and
   c. details about the specific classroom where instruction will be delivered.
2. Instructional designers are constrained by currently available resources.
3. Instructional design is essentially a redesign of existing material and procedures.
4. Instructional designers work alone in the design process.

Case Review: Creating a Mental Model Representation

Professor 3, who was not an instructor in the instructional design program, did not produce a final model representation that had any vertices in common with the models of either of the two professors who were instructors in instructional design. The model of professor 3 also did not provide a useful alternative for comparison with student model representations. However, this professor's package of materials contained a series of draft model representations that documented some of the thought process involved in creating her mental model representation. The professor was interviewed to discuss this process. The interview took place several weeks after the professor's model representation because the idea for the case review arose after a study of all the model representations, when it became apparent that this professor's contribution would best be used in examining the process of representation. The following figures and excerpts from the interview summarize how professor 3 created her mental model representation and her thoughts about the mental model elicitation process.

The first thing the professor did was to make notes about concepts she might want to include in her model representation. Figure 17 shows the notes she made during this planning stage. The numbers attached to the list of considerations indicate a first attempt at prioritizing these concepts after they were listed.

Step 1: Planning
Goal: Unit on writing paragraphs

Considerations:
- knowledge of subject matter (including student's prior knowledge)
- learning theories
- student characteristics
- curriculum
- disabilities
- making accommodations/modifications
- technology skills
- designing instructional materials
- designing instructional activities/strategies
- evaluating instructional activities

Prioritization (numbers attached after the considerations were listed):
1 - task analysis of skill: in order to write a paragraph, students must be able to write sentences, punctuate correctly, capitalize, etc.
2 - characteristics of students
3 - instructional objectives
4 - educational standards to be addressed
5 - instructional materials needed
6 - describe instructional activities, including accommodations for students requiring them
7 - describe evaluation procedures

Figure 17. Step 1: notes made in planning the model representation

Figure 18 shows the first attempt at putting the lists from the planning notes into some kind of structure. After review, the professor decided this organization did not depict what she wanted.

Step 2

[Graph sketch not rendered in this text version: a first-draft structure (V1-V7) including vertices labeled Instructional Unit, Student Characteristics, Prior Experience with Task, Disabilities, Analysis of each student's performance on task, Writing Instructional Objectives, and Instructional Materials/Activities, with marginal notes such as "Consider student characteristics (age, etc.)," "Varied," "Consider learning theories," and "Accommodations."]

Figure 18. Step 2: first attempt at the model representation

Figure 19 shows the third step in the process. It appears that the number of key elements has been reduced in order to focus on an appropriate structure. The professor reported that she was dissatisfied with this representation because it did not convey the relative importance of the elements. She wanted to illustrate that some elements were of equal weight. The professor felt so strongly about this issue that it raised a question concerning whether or not training should address weighting or some other expression of the importance of concepts.

Step 3

[Graph sketch not rendered in this text version: a reduced structure with vertices Instructional Unit (V1), Learning Characteristics (V2), Task analysis of skill (V3), Prior experience with task (V4), Disabilities? (V5), Writing instructional objectives (V6), and Accommodations? (V7).]

Figure 19. Step 3: second attempt at the model representation

Figure 20 shows the final model representation. The vertex names have been changed to reflect a high-level view of the model subject matter. Vertices 2, 3, and 4 have been drawn on the same level to illustrate that they are of equal importance. As suggested by the planning notes, the professor could have provided a more detailed model if she had been able to devote more time than was allocated for the representation activity. In reflecting on how she spent her time, the professor reported that most of her effort involved determining a strategy for illustrating the structure. She also recalled asking a question that led her to reread the instructions to get a better understanding of what she was to do. After rereading the instructions, she changed her approach. She limited her final visual graph representation to six vertices and seven edges to allow her enough time to complete the vertex and edge properties forms that defined these elements.

Step 4: Final Model Representation

[Graph not rendered in this text version: six vertices connected by seven edges (E1-E7), with Instructional Unit Considerations (V1) at the top; Student Characteristics (V2), Curriculum (V3), and Instructional Design (V4) drawn on the same level beneath it; and Learning Theories (V5) and Characteristics of Disabilities (V6) below.]

Figure 20. Step 4: final graph for the model representation

After discussing the steps in the model representation process, the professor provided additional comments about the experience. Some of these comments are paraphrased as follows:

• This kind of exercise requires more practice in order for one to communicate in depth what one wants to convey.
• The prompts on the forms for describing vertices did not provide much assistance in determining what should be included in a vertex definition because more practice is needed regarding the terms and their meaning.
• Defining edges (the relationships between vertices) seems more foreign than defining vertices "because I don't often think about the relationships and probably should. I think all of the relationships [included on the properties form] are important, but I don't often think about the relationships between nodes."

The professor was asked if anything could be done in training or practice to help her to focus on and articulate edge relationships. She responded that it would be helpful: (a) to see examples of the relationships; (b) then perform a matching exercise to relate different types of relationships to examples; and (c) finish by using the examples in practice. She suggested a similar activity would be helpful in understanding vertex definitions, but that more emphasis is needed on edge relationships because they are more difficult to grasp than vertices.

In speculating on why defining relationships between vertices might be more problematic than defining vertices, the conversation turned to a discussion of schema theory, which addresses how information structures are formed, updated, and restructured. The professor's observation was that we probably focus more on vertices than edges because vertices relate more to "facts" stored in schema.

The professor's final comments related to her attempts to apply the thought process she experienced in the exercise. Over the next day or two, she tried to think about her mental model of situations she encountered. She noticed that she again focused on vertices and that she organized her thoughts as a sequence of linear events.

Interviews with the Three Experienced Designers

Interviews with the three professors who were experienced instructional designers focused on five topics: (a) adequacy of training; (b) time requirements for creating models; (c) understanding the problem statement; (d) usefulness of the information provided by the methodology; and (e) the list of recommendations for improving the methodology.

There were several areas of agreement among the three professors. All three emphasized the need for additional training and practice in order for both students and experts to gain a better understanding of how to represent their mental models. The professors expended considerable effort in representing their mental models; each indicated that he or she could have invested much more time than was allocated. They also agreed that common understanding of the problem statement is critical for obtaining the most useful model representations for analysis. The two professors who were teachers of introductory instructional design courses indicated that the information about student models and conceptions would be helpful to them. The third professor responded that she could not comment on how useful it would be because she did not teach in the instructional design program. All three professors agreed with the list of recommendations and suggested several additions.

Professor 1

Prior to his interview, Professor 1 was given a preliminary report providing him with information about his students' mental model representations, comparisons with his own model, and student conceptions that might be relevant in his planning the design and delivery of instruction in the course. In providing feedback on the report, the professor stated that the report provided a very clear description of the contrasts between professor and students. He indicated that the "possible misconceptions" section was "okay," and suggested that the goal is to help students enlarge and enrich their perceptions rather than to correct them.

The professor requested a different presentation of the comparison of the student and professor models. Rather than presenting the information as a narrative, he suggested a table with point-by-point comparisons. He indicated that this would make it easier to scan and see specific characteristics; the table would also make it easier to make decisions about how to focus on enlarging student perceptions. The professor was asked if there were any surprises in the student data, and he replied that the gaps and differences between the student models and the professor model were greater than expected. In response to a question about what information would be most useful in course design and delivery, the professor answered that he would like to see (a) where the students are relative to the professor and where he wants them to be, and (b) definable subgroups of students.

In response to the comments concerning the type and format of information in the report, the professor/student comparison data were presented in table form, and two additional tables were created to relate student backgrounds to their focus and perspectives. From these tables, it was possible to identify specific subgroups of students. The professor received the revised report enthusiastically and indicated that the changes were very helpful.

The professor suggested that it would not be necessary to obtain mental model representations from students at the beginning of every semester unless the student population changed. He believed that performing the analysis for one or two classes would provide the information needed for making instructional design and delivery decisions. An exception to this approach would be if the mental model methodology were used in a before-and-after instruction comparison to assess student learning during or at the end of the course.

Professor 2

Professor 2 discussed his interpretation of the problem statement. Although the problem statement did not limit model responses to process descriptions, his model representation focused on action steps in performing a design task. He was unsure of how much detail to include and suggested that the problem statement define the level of specificity required. He also suggested picking a task for the problem statement that had less scope so that there would be more time to respond in detail; however, after discussing the purpose of the broader scope, he agreed that a limited task would not support investigation of initial student conceptions about instructional design. He then suggested that the limited scope investigation would be more applicable during a course of instruction to examine student conceptions concerning a particular step in the instructional design process. After reviewing the report for professor 1, professor 2 expressed surprise at the amount of information that could be derived from the mental model representations. He said that the range of possibilities was not apparent to him while he was going through the exercise. Professor 2, who also teaches an introductory course in instructional design, requested (and received) a copy of the report for his own use.

Professor 3

Comments by professor 3 have already been reported in the case review.

Interpretation of Results

The data for this study included graph drawings which were converted to electronic format, a database of participant definitions of their model elements, tables and figures providing quantitative and qualitative comparisons of models, a case review, and feedback from participants. Chapter 5 interprets these results in relation to the research questions regarding usefulness and further development of the methodology.

CHAPTER 5

DISCUSSION

This study was successful in answering the two research questions. Research question 1, concerning whether the methodology provides useful comparisons of student-constructed models, was answered affirmatively. In addressing research question 2, specific areas of improvement were identified for both the model elicitation and analysis aspects of the methodology. The discussion of study results includes comments concerning specific aspects of the model analyses and concludes with an overview of the limitations of the study and the application of its results.

Research Question 1

Does the methodology provide useful comparisons of student-constructed models based on relevant attributes of structure and content that are embedded in the model elicitation methodology?

In designing the study, the standard for an affirmative response to research question 1 was stated as follows. A comparison of student-constructed models based on relevant attributes of structure and content is considered useful if it reveals misconceptions or gaps in knowledge that, if present, will affect the design and/or delivery of instruction for the purpose of improving the potential for learners to achieve the targeted learning goals. The feedback from the instructor for the course used in the study indicated that this standard was met. The comparisons of student-constructed models to that of the professor yielded clear contrasts that the professor indicated could aid him in making decisions about how to provide instruction to his students. Although the professor was a regular instructor for entry level students in the instructional design program, he was surprised that the gaps and differences were as great as the analysis revealed. The methodology supported analyses at three levels: (a) student-specific comparisons of individual models to the professor's model; (b) identification of subgroups of students whose models had key features in common; and (c) general trends for the entire class of students compared with the professor's model.

Student-specific Comparisons

The nineteen students appeared to have very little knowledge of instructional design, and no student had a substantial number of vertices in common with the model of the professor. Therefore, the comparison of individual student models with the professor's model was more informative with regard to general focus and perspectives than to specific model elements. This analysis, combined with background information for the nineteen students, led to an identification of subgroups of students.

Subgroups of Students

In looking for a basis for common features of student models, several possibilities were investigated: educational backgrounds, teaching/training backgrounds, and US vs. international students. The students had a wide variety of educational backgrounds, and very few listed the same major areas of study. The focus on classroom elements and teaching suggested that those with teaching or training backgrounds would emphasize these features in their models more than students without this experience; however, this relationship was not observed in the model descriptions. The common background characteristic that did appear to be related to graph focus and perspectives was whether students were from the US or another country.

Figure 21 illustrates differences between US students and international students. Eight focus or perspective areas are listed. The first bar position (black) in each grouping indicates whether or not the professor's model included that area. The second bar position represents US students, and the third bar position represents international students. Only the professor's model included a holistic, systems view. The professor, one US student, and one international student included references to the knowledge an instructional designer should have. The professor's model included a description of the design process. Only one US student included the design process, but seven of the ten international students referred to the instructional design process in their models. Redesign of existing instruction was included only in the models of three of the international students. Nearly half of the US students (four of nine) indicated an expectation that the instructional designer required personal knowledge of the subject matter to be taught. No one else included this requirement.

Nearly all persons included a reference to a classroom in their models. For a few persons, the reference was general, while for others, the reference indicated the modeler was concerned with a particular environment. Further investigation of these references revealed another difference: some students focused on contents, size, and other concrete elements, while others were more concerned with the classroom from a functional perspective. The final area in Figure 21 revealed a surprise. More than half of the US students (five of nine) included the delivery of instruction in their models, not with regard to formative or summative evaluation but as a part of their view of instructional design. Only one international student included this area in his model.

[Figure 21 appears here: a bar chart showing, for each of eight focus or perspective areas (systems view, ID knowledge, design process, redesign of instruction, subject matter knowledge, general classroom, specific classroom, and delivery of instruction), whether the professor's model included the area and the number of US and international students whose models included it.]

Figure 21. Focus and perspectives shown on graphs: US vs. international students

General Trends for the Class

The analyses of individual models and their focus and perspectives revealed some general trends for the class of nineteen students. The summary of differences between the professor and student models has already been presented in Table 9 in Chapter 4. The professor teaching the course indicated that this table and the information concerning the subgroups of students were the most helpful material included in the report.

The Role of Properties Data in Model Analysis

In building the database and performing the model comparisons, the properties data for vertices and edges was a key element in determining similarities. For example, "teacher" was a vertex observed in both the professor's model and eight student models; however, in the professor's model, the vertex label was "primary client." Without the accompanying description included on the properties form, it would not have been clear that these labels

referred to the same basic concept (in part because the professor's model included a different vertex labeled "client"). Similarly, "students" was another vertex found in both the professor's model and six student models, but the alternative labels for this vertex included "target audience," "audience," "participants," and "learners." In addition to determining similarities that weren't immediately apparent, the properties forms also were helpful in determining whether or not seemingly similar vertex labels referred to the same concepts. An examination of other vertices in the model representations provided additional support for the determination.

The example in Table 12 illustrates this analysis. The labels and descriptions correspond to the ways in which they were expressed by participants on their vertex properties forms. The three vertex labels suggest similar concepts that relate to outcomes in the context of the problem statement. Professor 1 presented his concept from the viewpoint of describing an object related to the problem context; that is, there is a document produced in the instructional design process that reflects an understanding of the outcomes desired by a client. Professor 2 identified a similar concept, but he expressed it in terms of the action that results in an understanding of the client's goals. On first examination, student P18's vertex labeled "Expectations of stakeholders" suggested that he was expressing a similar idea, but there was no inclusion of the concept of a client in the context of an instructional design project. Instead, the student merely listed possible sources of outcome expectations in his vertex definition and then repeated those sources as individual vertices linked to this vertex in his model representation. The student's model suggests a gap between his understanding of instructional design and the understanding of two experienced instructional designers.

Table 12
Analysis of Similarities of Vertices Related to Goals

Participant: Professor 1
Vertex label: Goals: Plan, obtain approval for, and conduct a front end analysis
Vertex type: Object
Description: A document explaining the learning objectives with supporting documentation concerning rationale, entry level knowledge and skill, and related factors

Participant: Professor 2
Vertex label: Get clear view of desired project outcome
Vertex type: Action
Description: General sense of desired skills & knowledge client wants students to attain

Participant: Student (P18)
Vertex label: Expectations of stakeholders
Vertex type: (not specified)
Description: What is required based on state statute or policy; expectations of students skills based on other teachers need; expectations from parents; expet. from students

Although many students did not have time to complete all of their properties forms, the methodology is robust enough to support analysis with the information that was provided. Missing information reduced the level of detail that could be addressed in model comparisons but did not obscure overall similarities and differences.
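
To make this concrete, the sketch below shows one way a vertex record might pair a modeler's label with its properties-form data so that matches are judged on descriptions rather than labels alone. The field names and the crude word-overlap test are illustrative assumptions, not part of the methodology; in the study, equivalence judgments were made by the researcher.

    from dataclasses import dataclass

    @dataclass
    class Vertex:
        label: str        # the modeler's wording, e.g., "teacher" or "primary client"
        vtype: str        # class chosen on the properties form, e.g., "object" or "action"
        description: str  # narrative definition from the properties form

    def same_concept(a: Vertex, b: Vertex, threshold: float = 0.3) -> bool:
        # Crude stand-in for the human judgment used in this study: treat two
        # vertices as the same concept when their descriptions overlap enough,
        # regardless of whether their labels match.
        wa = set(a.description.lower().split())
        wb = set(b.description.lower().split())
        return len(wa & wb) / max(len(wa | wb), 1) >= threshold

    teacher = Vertex("teacher", "object", "the classroom teacher who will deliver the unit")
    client = Vertex("primary client", "object", "the classroom teacher the designer serves")
    print(same_concept(teacher, client))  # True: different labels, same underlying concept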

Quantitative Analysis in Model Comparisons

The quantitative analysis revealed that there was very little similarity between any of the student models and the model of their professor (professor 1). The main reason for the low similarity values was the lack of common vertices. The most vertices any student graph had in common with that of his or her professor was four. The lack of common vertices was linked to the observation that the graph representation of the model of professor 1 had no edge in common with any of the student models; common edges can only occur between common vertices, so the low incidence of common vertices reduced the likelihood of finding common edges. The model of professor 1 also had no edge in common with the model of professor 2. Because of the lack of common edges, the similarity values of edges did not contribute to the analysis of different levels of similarity among the models.

Figure 22 illustrates the comparison of a student's model that had four vertices in common with the model of his professor (professor 1). The similarity value for vertices was 0.02. The comparison methodology takes into consideration the total number of vertices in both models and how many of that total number are unique. The number of unique vertices represents the combined total of: (a) the number of vertices that are common to both models; (b) the number of vertices that appear only in the first model; and (c) the number of vertices that appear only in the second model. The number of unique vertices for the two models was 56. In addition, an analysis of the properties for the common vertices revealed that the descriptions were not identical in the two models. The similarity value for common vertices alone was 0.02 instead of the 1.0 which would have been computed if their properties had matched perfectly.

In Figure 22, the area on the left side represents the 16 vertices that appeared only in the model of student P1 (see Figure 15). The area on the right side represents the 36 vertices that appeared only in the model of his professor. In the middle area, the bars that are all black and all white indicate the portion of overlap for the two models (i.e., the four vertices that were common to both models). However, the descriptions for the common vertices were not identical. The portion of the bar that is all black (2%) represents the portion of the 56 unique vertices that was described similarly in both models.

[Figure 22 appears here: a stacked horizontal bar showing the composition of the 56 unique vertices in the two models: vertices only in the professor's model (64%), common vertices with different properties (6%), common vertices with common properties (2%), and vertices only in the student's model (29%).]

Figure 22. Basis for a quantitative comparison of models for student P1 and Professor 1

Similarity of the order (total vertices) and size (total edges) of the professor's graph with that of comparison graphs was no indicator of the overall similarity of their models. For example, the graph for professor 1 had an order of 40 and a size of 42. The corresponding values for the graph of student 17 were similar: the student's graph had an order of 39 and a size of 39. However, this student's model had only three vertices in common with that of the professor. The full model comparison for similarity on vertex properties was 0.01; the similarity value for properties of the three common vertices was 0.24. The low similarity values for vertex comparisons indicate that the models had little content in common. With so little similarity in content, additional structural analysis appears to have little value in assessing gaps between the knowledge of these students and the knowledge of their professor.

The quantitative values for model comparisons for all students were low, but the comparison of the model for professor 2 with that of professor 1 demonstrated that the methodology can distinguish among persons with different levels of expertise. In a full model comparison with the model of professor 1, the highest similarity value on vertex properties for any student model was 0.02; the models of nine students had this similarity value. In contrast, the full model comparison of the models for professor 1 and professor 2 had a vertex similarity of 0.08. The model of professor 1 had 14 vertices in common with that of professor 2. On the surface, it appears that the comparison value should be higher than 0.08; however, the comparison methodology takes into consideration the total number of vertices in both models and how many of that total number are unique. The number of unique vertices for the two professors' models was 79. In addition, an analysis of the properties for the common vertices revealed that the descriptions were not identical in the two models. The similarity value for common vertices alone was 0.46 instead of the 1.0 which would have been computed if their properties had matched perfectly. The fact that common vertices had differences in properties also

contributed to the rather low similarity value of 0.08 for full model comparisons for the two professors. Figure 23 illustrates the comparison of the two professors' models. The area on the left side represents the vertices that appeared only in the model of professor 1. The area on the right side represents the vertices that appeared only in the model of professor 2. In the middle area, the bars that are all black and all white indicate the portion of overlap for the two models (i.e., the fourteen vertices that were common to both models). However, the descriptions for the common vertices were not identical. The portion of the bar that is all black (8%) represents the portion of unique vertices that was described similarly in both models.

[Figure 23 appears here: a stacked horizontal bar showing the composition of the 79 unique vertices in the two professors' models: vertices only in Model 2 (49%), common vertices with different properties (10%), common vertices with common properties (8%), and vertices only in Model 1 (33%).]

Figure 23. Basis for a quantitative comparison of models for professor 1 and professor 2

The low similarity value in content, combined with the lack of common edges in structure, raises the question of whether the two professors have different expertise regarding the instructional design problem. The quantitative analysis alone does not reveal why the models appear to be different.
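
The exact similarity formulas appear in Appendix A. The sketch below implements one plausible reading of the computation described above (the property-match scores of the common vertices summed and divided by the number of unique vertices in both models) and reproduces the reported professor-to-professor value of 0.08; it is an illustration, not the authoritative formula, and the per-model vertex counts are inferred from the percentages in Figure 23.

    def vertex_similarity(common_scores, only_in_a, only_in_b):
        # One plausible reading of the computation described above: sum the
        # property-match scores of the common vertices (1.0 each for a perfect
        # property match) and divide by the number of unique vertices in both
        # models. See Appendix A for the authoritative formulas.
        n_unique = len(common_scores) + only_in_a + only_in_b
        return sum(common_scores) / n_unique

    # Professor 1 vs. professor 2: 14 common vertices with an average
    # property-match score of 0.46, and 79 unique vertices in total
    # (14 common + 26 only in model 1 + 39 only in model 2, counts
    # estimated from the 33% and 49% shares shown in Figure 23).
    scores = [0.46] * 14  # simplified: each common vertex given the reported average
    print(round(vertex_similarity(scores, only_in_a=26, only_in_b=39), 2))  # 0.08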

Qualitative Analysis in Model Comparisons

A comprehensive qualitative analysis of model features was not done for the student-to-professor comparisons because the models did not have enough in common to support it. However, a comparison of the models for professors 1 and 2 illustrates the kind of analysis that could be done with a different group of students. The differences observed between the models for professor 1 and professor 2 relate to: (a) the perspective from which the problem statement was viewed (holistic context vs. linear process); (b) the way related

elements were presented (clustering vs. linear linkages); and (c) the amount of detail included (detail in description vs. detail in concepts included). To a more limited degree, these same types of differences also appeared in the comparisons between the model of professor 1 and some of the student models. Figure 24 shows where the model of professor 1 contained vertices in common with the model of professor 2. Vertices shown with diagonal lines were found in both models. Vertices shown with bold lines were present in one or more student models. Vertices that appeared in the models of both professors and in more than one student model include: V30 (teacher), V5 (design the unit), and V19 (identify gaps in knowledge). Individual comparisons of student models to the model of professor 2 were not done because his model represented a detailed series of steps in the instructional design process, and student models contained very little of that kind of representation.

[Figure 24 appears here: the graph of the professor 1 model (vertices V1 through V40), with markings indicating which vertices also appear in the model of professor 2 and which also appear in one or more student models; the edges shown are those in the professor 1 model only.]

Figure 24. Common elements for professor 1, professor 2, and some of the 19 students

Perspective: Holistic Context vs. Linear Process

The graph for the final model of professor 1 shows a central vertex (V1) entitled "Design a one week instructional unit on writing paragraphs: Instructional designer 'know & do' requirements." Linked to this vertex are vertices which represent groups of concepts that are further detailed in the vertices linked to them. The longest path between the central vertex (V1) and the farthest vertex appears in the group V1 - V2 - V9 - V11 or V12. The model for

professor 1 includes concepts that identify project goals, plans, activities, information sources, and functions in relation to an instructional design process. The central vertex specifically refers to the overall objective of the problem statement: "Your objective is to convey your overall understanding of instructional design and the basis for your approach."

In contrast to the holistic context perspective, the model of professor 2 began with a vertex entitled "Get clear view of desired project outcome." His graph consisted of a series of process steps, all of which were classified as actions on the vertex properties forms. The number of vertices in the longest path on this graph was 20. Someone experienced in instructional design could see a relationship between the action steps in the model of professor 2 and the various concepts identified in the model of professor 1. For example, professor 1 included a vertex labeled "supporting materials." The descriptions of the vertices linked to this central concept referred to the use these materials would have in the instructional design process. The model for professor 2 suggested that he might use the information in the materials, but his model dealt with the action steps involved in acquiring, producing, or applying the information. The mental model representations for the two professors may indicate that each has an advanced knowledge of instructional design, but it is not clear that they conceptualize the subject in the same way.

The observation that comparable expertise may be represented quite differently leads to several questions. First, should the comparison model used to assess beginning students' conceptions of instructional design be designed in a way that accommodates more than one style of conceptualization? It may be better to create a model that represents a composite of several experienced designers' models so that differences in style do not suggest differences in understanding. On the other hand, could differences in the style of a mental model representation suggest important differences in conceptualization, assuming that the modeler has a good understanding of the problem statement?

During his interview, professor 2 explained that his focus was on the part of the problem statement that referenced what an instructional designer needs to do. His model represented his view of the process of instructional design. It was interesting that professor 1 interpreted the problem statement in the same way on his first attempt at representing his model. After rereading the problem statement, he developed a model that responded to the broader objective of illustrating an overall understanding of instructional design. The partial drawing in the first model attempt for professor 1 was much more similar to the graph of professor 2 than the final model professor 1 provided. This observation emphasizes the importance of a common understanding of the problem statement in model comparisons. Professor 2 indicated that he might not have represented his model as a series of process steps if he had read the problem statement differently. Before asking participants to represent their models, it may be advisable to validate that they have a common understanding of what the modelers are to do. However, thinking in process steps may be a basic model style that will persist even if the modeler demonstrates an understanding of the intent of the problem statement.
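
The path lengths contrasted above (four or five vertices in the holistic model vs. 20 in the linear one) can be computed with a simple breadth-first search when, as here, the graphs are trees or chains. The adjacency data below is a hypothetical fragment echoing the path described above, not the actual study data.

    from collections import deque

    def longest_path_from(start, adjacency):
        # Breadth-first search returning the number of vertices on the longest
        # path reachable from `start`; adequate for trees and chains, where
        # there is only one path between any two vertices.
        seen, frontier, depth = {start}, deque([(start, 1)]), 1
        while frontier:
            vertex, d = frontier.popleft()
            depth = max(depth, d)
            for nxt in adjacency.get(vertex, []):
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, d + 1))
        return depth

    # Hypothetical fragment of the path V1 - V2 - V9 - V11/V12.
    adj = {"V1": ["V2"], "V2": ["V9"], "V9": ["V11", "V12"]}
    print(longest_path_from("V1", adj))  # 4 vertices on the longest path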

Presentation: Clustering vs. Linear Linkages

One of the reasons that no common edges were found between the model of professor 1 and the model of professor 2 is that professor 1 represented the instructional design process as groups (or clusters) of process steps instead of a linear set of linked actions. Professor 1 linked

elements in the design process at the group level rather than at the level of an individual step as seen in the model of professor 2. Although student models contained far fewer elements in common with the model of professor 1, those student models that had more than one vertex in common with it either had no cluster concept or included the vertices in a different grouping.

Amount of Detail: Detail in Description vs. Detail in Concepts Included

Details were included in several ways among the models of the 19 students and the two professors. Sometimes detail was provided in a description attached to a vertex representing a complex concept. Alternatively, detail was provided by attaching vertices to a complex concept vertex; the attached vertices and their associated properties supplied additional information about the complex concept. On a few model drawings and properties forms, a notation was made that details should be obvious from the name given the vertex.

There were several reasons for the lack of detail, both in vertex inclusion and in vertex descriptions on properties forms. Time was a limiting factor. Many students did not finish completing their properties forms; some forms were missing, and others did not include a full complement of descriptive elements. Another limiting factor was a lack of understanding of the terminology involved in providing detail. The professors indicated they wanted to provide more detail but ran out of the time they had allocated for the model representation activity.

In addition to the need for more training on how to supply descriptive detail on properties forms, two suggestions regarding detail emerged from the model analyses and discussions with the professors. First, there should be a clear indication of the level of detail required to respond to the problem statement. Second, the comparison model of the experienced designer may need to contain both grouping vertices and detailed vertices for a complex concept.

Specifying the amount of detail required may be difficult. An attempt was made to provide guidelines concerning detail, both in the problem statement and in responses to questions from both student and professor participants. Those persons who had a great amount of information to represent seemed to feel obliged to include as much as possible. One way to address the level of detail requirement (assuming a participant has detail to provide) may be to present a model example during training that illustrates what may be expected. There are problems with such an example. First, it may have an undesired influence on the type of model produced by the participant. Second, it would need to represent a different subject matter so that it does not influence the content of the participant's model.

The second suggestion regarding detail in models is to design the comparison model of an experienced person so that it contains both complex, or grouping, vertices as well as related detail vertices. This arrangement would enable a comparison that would identify when a student's model contained matching broad level concepts and/or one or more of the more detailed concepts.


Additional Comments about Mental Models and Their Representation

Comparison with the Prototype Study

Observations in both the preceding prototype study (see Appendix B) and the current study reflected the experience of Spector and Koszalka (2004), who reported finding some deep differences among experts. After the comparison of the models for professor 1 and professor 2, the results were compared with the observations made in the prototype study, which applied the mental model comparison methodology to represent and compare the views of instructional design expressed in two textbooks written by experienced instructional designers. The prototype study revealed the same types of differences as the current study. One model presented instructional design as something that professionals do to achieve some goal; this model focused on the process steps in instructional design activity. The other model presented instructional design from a more holistic view that included the context in which instructional design takes place. This context contains elements that may lie outside of the process but provide the necessary information and support that enables those who design instruction to achieve their goal.

Another observation made in both studies is that two-dimensional drawings may not be sufficient for representations of complex mental models. As seen in notes on the hand-drawn models of professor 1 and professor 2, there were more linkages among vertices than they were able to represent without making their graphs unreadable. Although the database could have accommodated complex structures, modelers had difficulty communicating the full complexity of their models. Well-designed software might provide the assistance needed to describe complex models and give modelers the feedback they need to represent, review, and revise them.

Mental Models as Temporary Structures

Observations of the participants' work (both students and professors) reinforce the view of Seel and colleagues (2000) that mental models are temporary structures created in response to the demands of a situation. The professors reported that they found the exercise of representing their mental models to be quite challenging. Part of their challenge was in dealing with the storehouse of knowledge they had to draw on and deciding how much knowledge to include and the best way to present it. In the case of the students, their challenge appeared to be that they had little knowledge of instructional design. Their models appeared to reflect their own experience either as students or teachers in a classroom; as such, the model elements tended to list persons, materials, and resources that would be found in that environment. As discussed in the case review in Chapter 4, participants may not normally articulate possible relationships among concepts in the schema from which they create their mental models. If relationships and properties are as important as Johnson-Laird (1994) suggests, modelers may need more training and practice in understanding and expressing this kind of information in their mental model representations. The answer to research question 2 addresses the importance of additional training and practice to ensure that participants are better prepared to represent their mental models.

Research Question 2

What improvements in the methodology are needed prior to further research and development and eventual implementation in the form of a mental model assessment tool?

• What improvements are needed regarding the mental model elicitation methodology?
• What improvements are needed in the mental model representation analysis methodology?

Mental Model Elicitation

Several issues for mental model assessment were identified and addressed during the planning for this study: (a) differences in thought processes for experts compared to novices; (b) common understanding of the task; and (c) common meanings for terminology in mental model representations. In spite of attempts to minimize problems in these areas, this study encountered some of the same difficulties as those mentioned by Spector and Koszalka (2004) in the DEEP study. A review of the observations concerning these issues provides the basis for the discussion of improvements needed in the methodology.

Different Levels of Expertise

Spector and Koszalka (2004) made the observation that experts think widely as well as deeply, but novices tend to approach a subject in a more limited manner. I assumed that translation and interpretation of the different levels of model representations would be aided through the use of a common form that can capture model elements without requiring the same level of detail from persons with different levels of expertise. This assumption was supported during the study, but improvements could be made in the approach to creating the model used for comparison with student models.

Common Understanding of the Task

The problem statement is not directly a part of the mental model comparison methodology, but a well-constructed problem statement is necessary for eliciting useful model representations for comparison. In order for there to be a good basis of comparison for model representations, participants must have a common understanding of the task regarding focus, perspective, level of detail, and method of communication. Each of these areas was addressed in the problem statement for this study, but participants had some difficulty in understanding what was expected. As in the DEEP study, the professors (and some students) appeared to begin their approach to model representation by focusing on solving the problem rather than representing what an instructional designer must know and do to solve a design problem.

Common Meanings for Terminology

The properties forms for defining vertices and edges were designed with the expectation that confusion in terminology would be reduced by providing lists of common terms and brief definitions for properties. Although the properties forms were useful in interpreting the

meanings of labels on graph drawings, it is clear that participants need a better understanding of descriptive terminology in order to provide the level of detail that the methodology is capable of capturing.

Improvements Needed

A large majority of the participants indicated that they understood the training and what they were supposed to do in representing their mental models. Those who expressed concern about their understanding did not appear to have much, if any, more difficulty in representation than those who had confidence. A review of the materials that participants produced indicated that there were problem areas in understanding for nearly all participants. The problems observed related more to completing the properties forms used in defining vertices and edges than to the activity of drawing a graphic representation of a model. The difficulty observed for drawing graphs was that participants needed to use a hierarchical numbering scheme for vertices rather than sequential numbers of 1 to "n"; the hierarchical scheme seemed to help them keep track of their structures and the properties forms that related to them. Common problems in completing properties forms included:

• misunderstanding of the meaning of terms used in identifying properties for both vertices and edges (even though the properties forms included brief definitions of the terms);
• inconsistencies between the general class of a vertex or edge and the type of details provided to complete the definition (e.g., class relating to an object but details relating to an action); and
• a general lack of understanding of the edge concept and how to describe edge relationships.

As further evidence of misunderstanding, some participants were not able to relate one or more vertices to one of the vertex class terms provided and added their own term under the "other" classification; however, all but one of these vertices could have been categorized appropriately using one of the terms included on the form. More training is needed to ensure a better understanding of the terminology on the properties forms. More training might also result in more complete and structured definitions, thus reducing the need for interpretation of narrative descriptions.

Participants in general seemed to have much more difficulty in describing edges than vertices. Poor understanding of edges was the area mentioned most in the additional comments section of the questionnaires. Few participants included arrows to indicate directionality on their graph drawings. If there had been common edges among student and professor graphs, the lack of directionality would have limited the analyses that could have been done in comparing graph structures. In order to improve participants' understanding of the representation process, additional training needs to be provided, including better examples and more practice.

Analysis of Mental Model Representations

The original description of the comparison methodology (see Appendix A) does not mention the collection and use of demographic data for modelers. In this study, demographic data, combined with the results of model analysis, provided information about subgroups of students. The professor teaching the course requested information about subgroups. The analysis identified common features among model representations, but the basis for common features was not apparent until the features were examined relative to student backgrounds. Procedures for the collection and use of demographic data would make a valuable addition to the methodology.

The methodology includes formulas and guidelines for computing metrics concerning the similarities of model representations, but it does not address how to create the database and interpret differences in expressions in labels and narrative descriptions that provide additional detail about what the modeler intends. There also is no discussion of ways to represent results of analyses beyond the computed values for similarity. These processes should be presented in a set of guidelines for using the methodology so that other researchers can apply the methodology similarly in future research.
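
As a starting point for such guidelines, the sketch below shows one possible database structure for the graph and properties data, with demographic data included to support the subgroup analyses described above. The table and column names are illustrative assumptions, not those used in the study.

    import sqlite3

    # Illustrative schema (names are assumptions, not the study's database):
    # one row per participant, vertex, and edge, with properties-form data
    # stored alongside each graph element.
    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    CREATE TABLE participant (
        id TEXT PRIMARY KEY, role TEXT, background TEXT, country TEXT);
    CREATE TABLE vertex (
        participant_id TEXT, vertex_id TEXT, label TEXT,
        vertex_type TEXT, description TEXT,
        PRIMARY KEY (participant_id, vertex_id));
    CREATE TABLE edge (
        participant_id TEXT, source TEXT, target TEXT,
        relationship TEXT, directed INTEGER);
    """)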

Considerations for Developing a Mental Model Assessment Tool

The methodology has not yet been refined sufficiently to support the design and development of an automated tool, but observations of modeling behavior are relevant for future work in this area. Some modelers made lists of concepts before attempting to represent their models in graph drawings. They also revised drawings, either by redrawing entirely or by correcting and/or expanding their original models. It would be advantageous to have these process steps accommodated in the development of software to support the mental model elicitation methodology. Software would need to be able to convert lists of concepts to vertices on a graph representation. Software could also aid in the elicitation of vertex and edge properties by providing prompts to ensure logically consistent details in property sets. Feedback from participants identified areas where more training is needed prior to mental model representation. Software could provide an efficient training tool by presenting examples of models and model elements. The software could also administer tests of understanding and provide feedback to ensure that modelers are adequately prepared for the representation exercise.
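
One such prompt might flag a mismatch between a vertex's declared class and the kind of detail supplied, one of the common problems observed on the paper forms. The sketch below is a hypothetical illustration; the marker words and the check itself are assumptions, not a specification of the tool.

    def check_property_set(vertex_type: str, description: str) -> list:
        # Hypothetical consistency prompt: warn when a vertex classed as an
        # object is described as an action, and require a description before
        # the property set counts as complete.
        warnings = []
        action_markers = ("conduct", "perform", "create", "identify", "obtain")
        if vertex_type == "object" and description.lower().startswith(action_markers):
            warnings.append("Class is 'object' but the description reads like an action.")
        if not description.strip():
            warnings.append("A description is required to complete this property set.")
        return warnings

    print(check_property_set("object", "Conduct a front end analysis"))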

Limitations

As anticipated, the limited amount of time that could be allocated to train participants affected their understanding of how to represent their mental models and define model elements. Although better understanding might result in better and more consistent details in model representations, participants understood well enough for the study to demonstrate that the methodology can provide useful comparisons of student-constructed models. The second limitation was the length of time participants were allowed for representing their mental models. Students put forth a good effort in spite of adverse environmental conditions. The air conditioning was not working, and the classroom was extremely hot and humid. All students were fully engaged throughout the exercise. Participants were given time

checks to keep them advised of how much time remained for completing their task. They also were advised to balance their time between drawing graphs and using the forms to define the elements, but they appeared to have difficulty managing their time well enough to provide a complete set of materials. Many focused on drawing the picture and gave much less attention to defining the elements on their graphs. Given unlimited time, participants might make revisions and refinements that would result in additional detail in their representations; however, judging from the kinds of information included in their model representations, students had sufficient time to represent their entry level knowledge of instructional design. Time constraints are common in academic settings, and all methods of assessment using a time limit accept the best that participants can achieve during that period. Professors were allowed more time than students to represent their mental models; this is the approach that would be taken in an implementation of the methodology.

The model of the professor teaching the course was used for comparison with the nineteen students. Although this model provided the best alternative among the three professors' models, using a model that reflected one person's representation may not have provided the best comparison for a group of students with different styles of expression and approaches to describing concepts.

Next, this study was conducted without software support for representing mental models. Using paper and pencil for drawing and modifying graphic representations is expected to be more cumbersome than using well-designed representation software; however, a formative evaluation study such as this is required in order to provide input to the specifications for user-friendly software that can collect representation data, manage a database, and provide various kinds of representation and analytical output. A prototype test of the methodology and subsequent testing of the study materials revealed that data collection without software support is viable and that useful data can be obtained for analysis. The current study confirmed this result. It would have been advantageous to use data from this study to examine whether or not similar model comparisons would have been achieved using one or more of the tools available in the computer-based toolset known as the Highly Integrated Model Assessment Technology and Tools (HIMATT) (Pirnay-Dummer, Ifenthaler, & Spector, 2008). Unfortunately, the raw data obtained was incompatible and/or insufficient in volume to support this comparison.

Finally, this study was limited to a single application and participant group. Although results from the study cannot be generalized to all subject areas and persons representing mental models, the positive results observed can provide support and direction for further research and development of the methodology. The mental model assessment methodology is not limited to particular subject areas, individuals, or comparison purposes. Although the methodology provided useful data concerning conceptions of entry level students in instructional design, the qualitative data regarding the usability of the methodology related to the process of representing and comparing mental models more than to the specific problem posed for mental model construction.

CHAPTER 6

CONCLUSION

Even though there were limitations concerning time, training, and fullness of understanding, the GAPS methodology produced mental model representations and analyses which yielded information that was determined to be useful in assessing the knowledge of beginning students in instructional design. This determination was made by two professors who were instructors for the students who were subjects in the study. The analyses identified differences between the conceptions of beginning students and the conceptions of their professor, and the gaps in knowledge were greater than their professor had expected. The data provided by the methodology supported both quantitative and qualitative analyses in the comparisons.

There are several advantages of this methodology compared to some of the other methodologies reviewed in the literature. First, it removes the need for the researcher to make assumptions about the structure of someone's mental model because participants represent their own models. Second, it reduces the number of assumptions a researcher must make concerning the content of a model because the properties forms allow participants to provide defining details about what they mean by their labels for concepts and the relationships among them. Third, it does not restrict participants to predefined structures, terminology, and level of detail.

The study fulfilled its function as a formative evaluation by identifying a number of specific improvements that should be made for further development and research using the methodology. The following list of recommendations summarizes these improvements.

Recommendations

Mental Model Elicitation

1. Participants need more training and practice before representing their mental models for evaluation. Areas that appear to need the most attention are related to the completion of properties forms, including an understanding of terms used in defining vertices and edges:
   a. proper classification of vertex types for agreement with vertex label;
   b. ways to provide brief descriptions that clarify what is meant by the vertex label;
   c. types of relationships between vertices;
   d. directionality (arrows) in vertex linkages;
   e. a strategy for managing time so that a complete set of materials is produced for evaluation (i.e., a completed definition sheet for each vertex and edge in the graph).
2. The training material should include richer examples of models. The examples should include different types of models (e.g., process models, models that contain a variety of vertex types, and models that illustrate a variety of relationships between vertices).
3. The training should include examples and practice for each type of vertex and edge.

4. Participants' understanding of the mental model properties terminology should be assessed after training. Types of assessment might include:
   a. a multiple-choice quiz that can be assessed quickly and provide feedback to correct misunderstandings; and
   b. a matching operation to link vertex and edge examples to the appropriate definition of each.
5. Participants may need to use a hierarchical numbering scheme for vertices during elicitation in order to assist them in relating their properties forms to the visual graph.
6. Participants' understanding of the model task should be assessed prior to mental model representation (e.g., understanding of the problem, model perspective or viewpoint, and level of detail required).

Experienced Designer (Professor) Models

7. Professors need more time than student participants to represent their mental models.
8. The professor model may need to be constructed as a composite of two or more professors' mental model representations in order to accommodate fair comparisons with student representations that have similar underlying meanings but which are described from different categorizations or viewpoints (e.g., a vertex that represents an action that creates an outcome vs. a vertex that represents an outcome created by an action).
9. The professor model should be checked for consistency (e.g., labels corresponding to properties) and completeness (e.g., a completed properties form for each graph element).
10. The professor model may need to include vertices that group related vertices representing more detailed elements; this grouping would facilitate comparison with student models that have similar general concepts but lack the level of detail present in the model of an experienced person.

Refinement of Properties Forms for Vertices and Edges

11. Explore the addition of another category of "Thing" on vertex types: something that acts on another vertex.

Analysis

12. Pertinent demographic data should be collected to facilitate analysis that identifies the relationship between a student's background and his or her model contents and structure.
13. More detail should be provided about how to create the database and analyze data obtained from the graph drawings and properties forms.


Future Development and Research

Future development and research using the GAPS methodology can encompass a series of research efforts aimed at implementing and evaluating the recommendations for improvement and expanding the exploration of the methodology's application beyond the current subject area and population. Along with these efforts, there is a need to begin work on the development of an automated tool to support mental model elicitation and analysis.

Validation and Continuation of the Current Study

The information provided by the methodology in this study could be validated through follow-up studies. Several types of studies are indicated: evaluation of recommendations for improvement, application of information provided by the methodology, and comparison with other methodologies.

Evaluating Recommendations for Improvement

Recommendations for improvement in the methodology would be appropriate topics for further evaluation; however, not all recommendations for improvement need to be tested in subsequent studies because they are provided as guidelines for implementation. For example, the current study demonstrated that: (a) professors need more time than students to represent their models; (b) professor models should follow some basic guidelines to ensure that they accommodate comparison with both general and detail level concepts; and (c) professor models should be checked for consistency and completeness prior to use in comparisons. Similarly, the recommendation to collect demographic data is a guideline rather than a procedural change to be evaluated.

After creating and pre-testing improved training materials, follow-up studies using the same course in other semesters could evaluate implementation of the recommended improvements for the type of population used in this study. The focus for such studies would be to ensure that the training and assessment of understanding is efficient for use in single class sessions and effective for students from a variety of language and cultural backgrounds (i.e., it must be appropriate for classes with students from the United States and international students). Follow-up studies using the same course might also confirm or refute the professor's view that entry level assessment does not need to be done for each new cohort of students unless the characteristics of the entry level population change. It might be possible to combine an evaluation of improvements with an examination of whether or not similar cohorts of students reveal the same kinds of differences between student models and the professor's model. The reason for this possibility is that the improvements may not change the basic knowledge content of student models; the primary expectation is that the improvements will increase the precision and level of detail students can provide in communicating the meaning of their model elements. However, the best study to answer the question about the stability of students' entry level knowledge would be one that uses the same procedures and professor model for several semesters.

Another kind of follow-up study is needed to ensure that other researchers can use the methodology. Studies could be conducted in which several researchers use the same sets of

model data to create their own databases and analyze the data. Comparisons of the products of these analyses might reveal potential issues with conversion of raw data to the database. The main potential for differences in interpretation may occur in cases where participants do not make full use of the scaffolding for model elicitation. For example, if participants use vague labels and do not provide adequate detail on the properties forms, researchers may not interpret these model elements in the same way. A series of studies with multiple researchers could result in a set of guidelines that would help researchers achieve greater similarity in interpretations of difficult data.

Evaluating Application of Information Provided by the Methodology

The professor teaching the class used in this study indicated that he found the data to be useful in his planning for the design and delivery of instruction, but there is no indication concerning what modifications he might make and what the results would be if he made changes. Follow-up studies could explore the impacts of the data produced by the methodology. Because of the limited number of introductory courses and students in a given semester, such studies would likely be qualitative rather than experimental in design. Prior to the initial class session and mental model elicitation, a professor could be interviewed to determine his or her expectations concerning student conceptions and possible misconceptions. Each professor would also be questioned regarding his or her planned design and delivery of instruction related to these expectations. After elicitation of student models, the professor would be asked to explain how the information about student models would affect his or her design and delivery of instruction. If available, information regarding student performance in prior semesters for the same courses could provide baseline data for examining potential impacts in learning outcomes. The professor would also be interviewed at the end of the course to obtain feedback on his or her perceptions of how the information provided by the methodology affected his or her actions as well as the performance of students.

Comparison with Output from Other Methodologies

A study could be designed to obtain enough data for analysis using the HIMATT toolset as well as the GAPS methodology. Similar output would tend to confirm the results observed in this study. In the event that the results of the two methodologies differ, detailed analyses might reveal the source of the differences and whether they indicate flaws or underlying differences in the type of output produced. In order to obtain sufficient data for the HIMATT tools, the study may need to elicit mental model representations from the same students in more than one class session so that each participant produces the required volume of data (e.g., at least 350 words of description). Professors would also need to expand their models to provide the required volume.

Expansion of Methodology Use for Other Populations and Applications

The current research was limited to a specific application and participant group, but the mental model comparison methodology is not limited to this single application. Further research can include other subject areas, different educational levels, and additional populations. The following examples illustrate some of the expanded uses and areas of study

for application of the methodology. These uses are not limited to particular subject areas, educational levels, or populations, although further studies might reveal that the methodology is more useful in some situations than in others.

The second instructional design professor in this study indicated an interest in using the methodology to explore student understanding of individual steps in the instructional design process. The methodology could be used to assess student understanding of course modules and to evaluate student readiness for learning modules throughout a course (i.e., has the class grasped the concepts that serve as a foundation for the next unit of instruction?). However, the methodology could be applied more broadly in learning assessment and formative assessment.

In learning assessment, the methodology could be used to track individual learning progress throughout a course. The initial assessment could document student entry level conceptions and readiness for learning. Subsequent model representations could be used to determine student achievement of intermediate learning objectives and as a means of identifying potential misconceptions about how the student should integrate new knowledge with prior knowledge. Post-instruction models could be used to assess the extent of conceptual change between pre-instruction and post-instruction models, student achievement of course learning objectives, and how well students have grasped the interrelationships of course material.

The methodology could also be used as a means of formative assessment that promotes reflection and self-assessment by students. Students could use their own series of model representations to reflect on their learning progress. At key points during instruction, students could be provided with one or more expert models for comparison with their own model. Multiple expert models would provide different perspectives and might serve as inspiration for additional self-reflection. The property set data obtained in the GAPS methodology offers a means for students to examine their detailed understanding of concepts in addition to the presence and position of those concepts in a holistic picture.

So far, the discussion of applications of the methodology has dealt with its use in relation to individual students, but the graph and property set data may also support group model composition and a means of identifying individual contributions to a composite model. Group model construction using the methodology might promote discussions and collaborative learning in student groups. Further discussion and model development could be promoted when the group compares its model with one or more expert models.

The use of the methodology as a learning assessment strategy could be the subject of several kinds of studies. For example, studies could evaluate whether or not the GAPS methodology provides a better assessment of learning for problem-solving situations than traditional assessment methods do. In other words, would the assessment of mental model representations provide a better predictor of performance than assessments consisting of multiple choice questions, essays, or term projects? One way to approach this comparison would be to design studies that assess the same students with both traditional methods and the GAPS methodology. Then, the students could be given a complex problem to solve.
The proposed problem solutions could be evaluated by a team of experts and then compared with the performance predicted by each of the assessment methodologies. Another type of study might examine the question of whether or not the holistic representation and self-reflection that may be promoted by the GAPS methodology leads to (a) better transfer of learning to other problem situations; and/or (b) better retention of learning over time. Here, the focus would be on the contribution the methodology makes in the learning process rather than the methodology's effectiveness as a predictor of performance. This type of

104 study might employ treatment and control groups to compare performance of those who use GAPS to represent mental models and those who do not represent mental models during the learning process. Qualitative analysis could provide additional information concerning student perceptions of the benefits of the methodology in promoting learning. In the current study, approximately three-fourths of the students reported that representing their mental models helped them to clarify their thinking about the course subject. More detailed questionnaires and interviews could provide insight concerning how and to what extent mental model representation serves as a means of organizing students‘ thoughts and integrating knowledge.

Development of the Mental Model Assessment Tool

The development of an automated mental model assessment tool could improve the usability of the methodology and enable its inclusion in a toolset such as HIMATT. Software prompting during property set definition could improve scaffolding and yield better quality and level of detail in participant input. Controls could ensure that the person representing a model addresses all necessary data. A well-designed graphic interface could improve graph drawing and modification, which is now sometimes cumbersome with paper and pencil. Software controls could facilitate database creation by ensuring that labels and terminology are clear (e.g., correct spelling, no repetition of labels for different elements) and that each graph element is defined with a property set.

The tool might be developed and tested in stages. Stage one could include the mental model elicitation software that would support the drawing of graphs and the definition of property sets. Output during this stage would be a printout of the graph and printouts of model elements and their property sets. Stage two could focus on using the model elements and their property set definitions to create a database. The database structure would need to accommodate the representation and modification of models as well as analysis in model comparisons. The primary test for stage two is the determination of the accuracy and efficiency of data storage and retrieval. In stage three, development would add the model comparison and analysis components that would process mental model representation data in the database.

Testing for stages one and two is relatively straightforward: the output can be compared to the input to determine success. The third stage requires a different kind of testing and validation. Studies like those described in this chapter could provide baseline data for the third stage of tool development. The computer software must be able to replicate the human analytical capabilities; therefore, output from studies conducted without software support can provide the standard against which the mental model assessment tool can be validated. Once the software is sufficiently validated, it should provide a consistent product that requires neither the time investment nor the level of experience needed to use the paper-and-pencil version of the methodology.
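To make the staged design concrete, the sketch below outlines one way the stage-one elicitation output and the stage-two database records might be structured. It is a minimal illustration in Python that assumes the vocabulary tree of Appendix A; the class names and fields are hypothetical, not a specification of the proposed tool.

```python
# A hypothetical data model for GAPS model representations (sketch only).
from dataclasses import dataclass, field

@dataclass
class PropertySet:
    identification: str                       # noun or relationship class, e.g. "Action"
    details: dict = field(default_factory=dict)  # e.g. {"functional": "what it does"}

@dataclass
class Vertex:
    name: str                                 # unique name within a model
    properties: PropertySet

@dataclass
class Edge:
    name: str                                 # unique name within a model
    u: str                                    # name of the first vertex
    v: str                                    # name of the second vertex
    properties: PropertySet
    directed: bool = False                    # an arc if True

@dataclass
class ModelRepresentation:
    vertices: dict = field(default_factory=dict)  # vertex name -> Vertex
    edges: dict = field(default_factory=dict)     # edge name -> Edge
```

With records in this shape, stage-two storage and retrieval can be tested by round-tripping models through the database and comparing output to input, as described above.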

Use of the Methodology in Mental Model Research

Another area of research was suggested during the data analysis in this study. During the literature review, a number of graphic representations for mental models were reviewed. A few were constructed by participants and included labels to identify model elements. Most were constructed as groups of nodes with connecting lines and provided no sense of a "model" in terms of explaining how the data related to the application of knowledge in a problem-solving situation. In contrast, the methodology in this study seemed to provide insight regarding the internal models of the subjects, and a review of the drafts and notes revealed something about the process of externalizing a mental model. The differences observed between the model representations of the two instructional design professors suggested quite different model styles, yet it was apparent that both professors were highly knowledgeable in their field. Some of the questions I would be interested in exploring include:

• How might different model styles (e.g., linear vs. central groupings, specific action vs. abstract concepts) be related to perception, comprehension, and performance?
• Does an individual tend to construct the same kind of model in a variety of situations, or does s/he construct different kinds of models?
• Do model styles change as a result of instruction?

I believe these questions and others could be explored in research that applies the methodology investigated in this study. Whether used alone or in conjunction with other tools, the GAPS methodology's application of graph theory to the comparison of mental model representations has the potential to make a significant contribution to assessment, both in the field of instructional design and in other areas of education.

APPENDIX A

A METHODOLOGY FOR COMPARING MENTAL MODEL REPRESENTATIONS (This material was included in the original research reported by Smith in 2005.)

The methodology offered herein outlines a means by which graph theory can be applied to the comparison of mental models with fewer restrictions and limitations than have been noted in prior efforts. It is offered as a foundation for further research rather than a completed product.

Objectives

The objectives of this methodology are to:

• enable the comparison of mental models in any of the following four categories: peer to peer, non-expert to expert, non-expert to ideal, and individual to self (over time);
• support the representation of both simple and complex knowledge structures without unnecessarily influencing what those structures might be;
• enable participants to describe structures that may include any kind(s) of knowledge (e.g., factual, procedural, combinations, etc.);
• provide a way to describe both concepts and the characteristics of the linkages between them; and
• facilitate the comparison of mental models by providing techniques for translating the mental model data into metrics that can express the degree of correspondence in quantitative as well as qualitative terms.

Uses of Graphs

Graphs are useful for a variety of subjects, both within and outside of mathematics (Chartrand, 1977). Chartrand begins his introduction to graph theory by discussing the concept of models. He explains that a model represents something by illustrating key features of the real thing. A model may be mathematical or non-mathematical. If a real-life situation or problem can be described with numbers, it can be represented with a mathematical system. Bertalanffy, the author of general system theory, defines a system in its simplest terms as "a set of elements standing in interrelations" (p. 55). The linking of these three concepts (graphs, models, and systems) is important in the application of graph theory to the comparison of mental models.

A graph consists of a set of vertices whose relationships are represented by an edge set. These seemingly simple elements can be combined in many ways and used not only to describe situations but also to solve problems. Chartrand (1977) discusses the following to illustrate some of the varied uses of graphs: transportation and traffic problems; connection problems; project management; social and organizational problems; matching problems; puzzles; tournaments; decision making; transactional analysis; network problems; scheduling problems; and topology problems. Durso and Coggins (1990) relate a list of uses of graph theory along the lines presented by Chartrand (1977) and add specific examples of how graph theory has been used by social and psychological scientists. An early example of the application of graph theory in mental model assessment is the use of Pathfinder networks (see Schvaneveldt, 1990).

Language Structure

The starting point in this application of graph theory to the comparison of mental models is to examine the means by which the communication of any mental model can occur with respect to the domain of graphs. In other words, communication is accomplished by means of a language, and we need to examine the structure, vocabulary, and grammar of a language of graphs. The following discussion uses definitions of terms from the Merriam-Webster online dictionary (http://www.m-w.com) in order to ensure a clear understanding of how the terminology will be used in the methodology. A separate issue–the question of who will "speak" this language (e.g., persons conveying their mental models directly or interpreters who will convert expressions of mental models to a language of graphs)–will be addressed in a discussion of methodology implementation.

Two related definitions of "language" are pertinent here. A language is "a systematic means of communicating ideas or feelings by the use of conventionalized signs, sounds, gestures, or marks having understood meanings." A language may also be "a formal system of signs and symbols (as FORTRAN or a calculus in logic) including rules for the formation and transformation of admissible expressions." At a base level, a language consists of primary elements that function as nouns, verbs, and modifiers, which are combined to form statements. The grammar for a language defines the functions and relations of language elements. Our methodology may also require a transformational grammar, which is "a grammar that generates the deep structures of a language and converts these to the surface structures by means of transformations."

A language has a vocabulary, here defined as "a sum or stock of words employed by a language, group, individual, or work or in a field of knowledge" or "a supply of expressive techniques or devices." While the Merriam-Webster online dictionary defines a "word" as something that is spoken or a representation of something that is spoken, we will employ the term "word" here as a reference to our basic language elements: functionally, nouns, verbs, and modifiers. A noun is "any member of a class of words that typically can be combined with determiners to serve as the subject of a verb, can be interpreted as singular or plural, can be replaced with a pronoun, and refer to an entity, quality, state, action, or concept." A verb is "a word that characteristically is the grammatical center of a predicate and expresses an act, occurrence, or mode of being." A predicate is "a term designating a property or relation." A modifier is "a word or phrase that makes specific the meaning of another word or phrase."

Figures 10 and 11 illustrate how the methodology will map language features to the domain of graphs. Figure 10 shows the correspondence of vertices and edges to language functions. A vertex functions as a noun. Depending on the position of a vertex, it may be a noun serving as the subject of a verb or an object of a verb. An edge functions as a verb; it identifies a relationship between two vertices. Vertices and edges are defined by property sets that serve as modifiers; that is, the modifiers are the adjectives and adverbs of the graph language.

Figure 11 shows the next step in mapping. It presents a basic "statement" in graph language. The statement identifies a component of a mental model. At a minimum, a statement must contain one vertex. It will contain an edge only if a relationship can be defined between it and another entity of the mental model. Both vertices and edges must be described by property sets. A property set contains the formal expression of the characteristics a person associates with whatever name s/he has given a vertex or edge. The rules for combining vertices and edges are contained in a grammar of the graph language, and the terminology for identifying vertices, edges, and their properties is contained in the language's vocabulary. These terms are specific to the language of graphs, generalized for all types of mental model comparisons, and not limiting regarding the wording used to provide details about properties. In other words, the vocabulary is related to the graph structure of the mental model representation, not the subject matter that the representation describes.

[Figure 10 diagrams a path of three vertices (v1, v2, v3) connected by edges e1: v1-v2 and e2: v2-v3, with these annotations:

Vertex – functionally, a noun; by position, a statement subject (v1) or a statement object for one relationship and the subject for the next (v2).
Edge or arc – functionally, a verb; by position, an expression of relationship.
Property set (Property 1 through Property n, attached to each vertex and edge) – functionally, a modifier; by position, a description attached to a vertex or edge/arc (not physically a part of the graph).]

Figure 10. Basic elements of a language of graphs

[Figure 11 diagrams two vertices (v1, v2) joined by an edge (e: v1-v2), each element with its own property set (Properties 1-n), and defines:

Grammar – the rules and principles for stating vertex and edge combinations.
Vocabulary – the set of terms used to express the structure of a mental model in the language of graphs.

Basic graph "statement":

V1 (Property Set V1) + [E1 (Property Set E1) + V2 (Property Set V2)]

Interpretation: V1 (as defined by its property set) is a member of the mental model configuration. If V2 (as defined by its property set) is also a member of the mental model configuration and is related to V1, its relationship is E1 (as defined by its property set).]

Figure 11. Combining elements in a language of graphs

Vocabulary. Table 12 presents a basic vocabulary for defining and describing mental model elements in our language of graphs.

Table 12. Basic Vocabulary Tree

Vertex – noun function
  Vertex name – unique name associated with a vertex in the model.
  Vertex properties – description of the characteristics that define the vertex.
    o Identification – noun class to which the vertex belongs.
      - Person – type of individual.
        • Specific individual – name of person.
        • Individual role or function – title for role or function some person would fulfill.
      - Place – type of location.
        • Specific location – name of place (e.g., a geographic location).
        • Relative location – place identified relative to another place (e.g., the third house on a street).
        • Functional location – place identified in terms of its function (e.g., a processing plant, a public park).
      - Thing – type of thing.
        • Object – a physical item.
        • Action – a procedure, function, or activity.
        • Concept – an idea.
        • Assumption – a statement assumed to be true.
        • Hypothesis – a tentative assumption expressed for the purpose of examining whether or not it provides a plausible explanation for an observation or phenomenon.
        • Theory – an explanation of something based on some reasonable evidence.
        • Law – a statement of a relationship that observers have found to be consistent.
        • Condition – a stipulation, prerequisite, or state of being.
        • Consequence – an outcome.
    o Description – detailed characteristics, depending on noun class.
      - Physical description – observable characteristics, depending on the noun class (e.g., size, color, material composition, etc.).
      - Functional description – description of what the vertex item does.
      - Procedural description – description of how the activity is performed.
    o Value – the importance of the vertex item.

Edge or Arc – verb function
  Edge or arc name – unique name associated with an edge or arc in the model (hereafter, the term "edge" will include both edge and arc items).
  Edge properties – a description of the characteristics that define the edge.
    o Identification – relationship class to which the edge belongs.
      - Dependence/interdependence
      - Sequential
      - Transaction
      - Class member
      - Hierarchical
      - Cause/effect
      - Correlation
    o Medium – the means by which the relationship exists or occurs.
    o Duration – time aspect of the relationship.
    o Direction – link aspect to vertices (to/from), unidirectional or bidirectional.
    o Iteration – repetition aspect of link to vertices.
    o Importance – necessity of the relationship (e.g., required, critical, present but not necessary).
    o Positive/negative – character of the relationship.
    o Value – numerical value, if pertinent to context.

Grammar. The rules by which vertices and edges may be combined to describe a mental model are relatively simple (a minimal sketch of these rules as an automated validity check follows the list).

1. A person, place, or thing is represented by a vertex.
2. A mental model must have at least one vertex in its description.
3. A relationship between a person, place, or thing and some other person, place, or thing is represented by an edge (or arc, if the relationship is directional).
4. An edge may be present only if it begins and ends with one or two vertices (an edge beginning and ending with the same vertex indicates a loop).
5. A vertex must have both a unique name and a property set.
6. An edge must have both a unique name and a property set.
7. A mental model may consist of one vertex only and no edges. (Although this configuration meets the requirements of a simple graph, it suggests no detailed knowledge of the subject and offers almost nothing for comparison; however, the awareness of a concept, even without detailed knowledge, may be important information in a comparison.)
8. A vertex may be joined to any number of other vertices.
9. Two vertices may be joined to each other by more than one edge, with each edge representing a different aspect of the relationship.
10. The number of vertex-edge-vertex sequences is unlimited.
11. The representation of a mental model may contain any combination of graph configurations, with bridges connecting different forms of subgraphs (e.g., simple graphs, digraphs, multigraphs, trees, etc.).
12. The representation of a single mental model may consist of a set of graph configurations that are disconnected (i.e., there are no bridge edges defined that connect them).
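Because the grammar is, in effect, a set of well-formedness constraints, it can be expressed as a small validity check. The Python sketch below tests a few of the rules, reusing the hypothetical ModelRepresentation structures sketched earlier in this document; it is illustrative, not part of the methodology as published.

```python
def check_grammar(model) -> list:
    """Sketch of a validity check for grammar rules 2, 4, 5, and 6.
    `model` is assumed to follow the ModelRepresentation sketch above."""
    errors = []
    if not model.vertices:  # rule 2: at least one vertex
        errors.append("model has no vertices")
    for name, vertex in model.vertices.items():
        if vertex.properties is None:  # rule 5: property set required
            errors.append(f"vertex '{name}' lacks a property set")
    for name, edge in model.edges.items():
        # rule 4: an edge must begin and end at vertices defined in the model
        if edge.u not in model.vertices or edge.v not in model.vertices:
            errors.append(f"edge '{name}' does not connect defined vertices")
        if edge.properties is None:  # rule 6: property set required
            errors.append(f"edge '{name}' lacks a property set")
    return errors  # an empty list means the representation is well formed
```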

Assessment / Interpretation

Comparing mental models for complex knowledge domains may require a more open analysis of elements included in their representation than techniques like Pathfinder offer. Spector and Koszalka (2004) reported that they found some deep differences among experts; some used "specific nodes and complex relations not mentioned by others" (p. 34). Researchers using Pathfinder tend to assume that relationships among items not mentioned frequently among experts may indicate that the items are not necessary in the knowledge representation (Schvaneveldt, 1990). In complex domains, such differences may indicate creative approaches to problem solving or equally valuable, although less known, aspects of the subject matter, rather than extraneous material. In the case of knowledge representation by novices, misplaced elements (compared to experts) may indicate a misunderstanding of relationships, or it may be a function of the integration of new knowledge with existing knowledge in other domains.

The treatment of elements within representations of mental models will depend on the reason for the comparison. For example, the details of analysis needed for comparing mental models in order to identify weaknesses in instructional design may differ substantially from a comparison whose purpose is to assess whether or not trainees have achieved a competency level sufficient to rank them as employable "experts" in their field. The discussion which follows presents a broad description of the kinds of analyses that might be performed within a methodology that employs graph theory. There is no attempt to specify analyses for particular purposes.

There are several types of comparisons that might be made between mental models expressed in terms of graph elements.

Simple structural comparisons:

• Overall structure – a high-level view of the major nodes (vertices) and graph components (i.e., separable, connected subgraphs), probably best presented pictorially.
• Component structure – a view of each distinct subgraph with its nodes and linkages, also presented pictorially.
• Basic structural analysis – quantitative comparisons of numbers of nodes and linkages.

Content and deep structure comparisons:

• Concepts – as identified in vertex (node) descriptions.
• Relationships among concepts – as identified in edge descriptions. Edge relationships provide a "deep structure" view.

The inclusion of both structural and content comparisons provides a way to distinguish between structures that appear "physically" similar but have quite different meanings. It also provides a way to examine representations that have different structural relationships but are composed of similar content.

A starting point for comparing two mental models is to examine the similarities among graph elements in simple quantitative terms which might be combined in a formula to express an overall similarity rating. The list of such comparisons might include:

• number of common components (i.e., comparable subgraphs)
• number of common vertices
• number of common properties for each common vertex
• number of common edges
• number of common properties for each common edge
• number of common adjacent vertices
• length of path from one common vertex to another common vertex
• number of common cut-vertices (or bridges)

The vertex, or node, is the basic building block of a graph. Vertices with the most edges may be central elements in a knowledge structure. Dearholt & Schvaneveldt (1990) suggest that the degree of a node, which identifies the number of edges incident with it, is an appropriate measure to examine whether or not its category can be considered basic-level. However, vertices that serve as cut-vertices may also be key elements because of their importance in linking subgraphs to the larger graph structure. In order to determine how similar nodes with the same name are in two different mental model representations, it is necessary to examine what the participants mean by the name used. (Identifying nodes that are the same, although named differently, will be discussed later.) The property set associated with a node can provide the basis for this assessment. For example, a node entitled "reading" could represent an activity, a measurement, an event, a railroad, or a place (Reading, PA). While context should help clarify the node-namer's intention, it may not always guarantee clarity of meaning, and it would be difficult to automate context analysis in an interpretation process. A property set associated with the node can make the meaning clear. A match of relevant properties and keywords in the description property can provide the information needed for an accurate interpretation.

The following formula suggests a way to express the first step in comparing the similarity of two nodes.

NSP = CP(A1, A2) / TP

Where:
  NSP = node similarity for properties (range 0–1)
  A1 = node A for model 1
  A2 = node A for model 2
  CP = common properties for A1 and A2
  TP = total unique properties mentioned in the two property set definitions
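Because TP counts the unique properties mentioned across both definitions, this ratio of shared properties to total properties is a Jaccard-style index. A minimal sketch in Python, assuming each property set has already been reduced to a set of comparable property terms (the function name and example terms are illustrative):

```python
def node_similarity_properties(props_a: set, props_b: set) -> float:
    """NSP: common properties divided by the total unique properties
    mentioned in the two property set definitions (range 0-1)."""
    total_unique = props_a | props_b
    if not total_unique:
        return 0.0  # assumption: two empty property sets offer no evidence
    return len(props_a & props_b) / len(total_unique)

# Example: property sets sharing 2 of 4 unique terms score 0.5.
print(node_similarity_properties(
    {"action", "iterative", "team effort"},
    {"action", "iterative", "document-driven"}))  # 0.5
```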


A next step in node comparison is to examine the edges incident to the nodes, first in quantity (the degree of the node), then with regard to level of similarity. The first formula below shows the relationship between node degree and directional aspects of edges incident with the node. The second formula shows how a similarity rating can be computed for a comparison of two nodes.

ND = OD + ID

Where:
  ND = node degree
  OD = outdegree (number of edges adjacent from the node)
  ID = indegree (number of edges adjacent to the node)

NSTD = (NSD + NSOD + NSID) / 3

Where:
  NSTD = node similarity for total degree comparison (range 0–1)
  NSD = node similarity for degree (range 0–1)
  NSOD = node similarity for outdegree (range 0–1)
  NSID = node similarity for indegree (range 0–1)

The computations for NSD, NSOD, and NSID consist of determining the ratios that express the largest possible agreement for each degree measure for the corresponding nodes from the two models. For example, if node A1 has an outdegree of six and A2 has an outdegree of four, the agreement ratio is four possible matches out of six (or 0.67) for edges referenced in the outdegrees. The total node similarity comparison represents an average of the three node degree values.
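A small Python sketch of this computation follows; the handling of two zero-valued degrees is my assumption, since the text does not address that case.

```python
def degree_ratio(d1: int, d2: int) -> float:
    """Largest possible agreement for one degree measure; e.g.,
    outdegrees of 6 and 4 give 4/6, about 0.67."""
    if d1 == 0 and d2 == 0:
        return 1.0  # assumption: two nodes with no such edges agree fully
    return min(d1, d2) / max(d1, d2)

def node_similarity_total_degree(degrees, outdegrees, indegrees) -> float:
    """NSTD: average of the degree, outdegree, and indegree ratios.
    Each argument is a (model 1 value, model 2 value) pair."""
    return (degree_ratio(*degrees)
            + degree_ratio(*outdegrees)
            + degree_ratio(*indegrees)) / 3

print(node_similarity_total_degree((10, 10), (6, 4), (4, 6)))  # ~0.78
```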

While nodes may be compared regarding the number of edges each has, further investigation is needed to determine whether or not the edges represent the same thing in the two mental models, both in terms of the nodes they connect as well as the types of relationships involved. Edge analysis can begin with an examination of which nodes the edges connect. This analysis may proceed directly from edge descriptions, rather than sequentially node-by-node. Identification of common edges (and subsequent common paths) will emerge when there is a match of both the linkage the edges represent and the property sets associated with the edges in the two models.

Since there can be multiple edges joining the same two nodes, it is important to examine property sets for the edges in both models to determine: a) if an edge in each of the two models represents the same kind of relationship; and b) how well the relationship described for a common edge in one model corresponds to that in the other model.

When there are multiple edges for the same linkage, a first step should be to identify edges with the potential for common properties; that is, edges should be compared only when they have the same relationship class property. If they do not share this property, they are not common edges. The following formula suggests a way to express the similarity of edges with the same linked nodes and the same relationship class.

ESP = CP(A1, A2) / TP

Where:
  ESP = edge similarity for properties (range 0–1)
  A1 = edge A for model 1
  A2 = edge A for model 2
  CP = common properties for A1 and A2
  TP = total unique properties mentioned in the two property set definitions
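Since ESP uses the same common-over-total computation as NSP, a sketch need only add the relationship-class gate described above. This reuses the hypothetical node_similarity_properties function from the earlier sketch:

```python
def edge_similarity_properties(class_a: str, props_a: set,
                               class_b: str, props_b: set) -> float:
    """ESP: 0 when the relationship classes differ (the edges are not
    common); otherwise the same common/total ratio used for nodes."""
    if class_a != class_b:
        return 0.0
    return node_similarity_properties(props_a, props_b)

# Two "Sequential" edges sharing 2 of 4 unique property terms score 0.5.
print(edge_similarity_properties(
    "Sequential", {"to/from", "once", "required"},
    "Sequential", {"to/from", "once", "critical"}))  # 0.5
```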

The preceding discussion is based on comparing nodes (and edges that connect them) when the nodes have the same name in both models. However, if participants are permitted to select their own names for nodes, additional analysis may be required to identify nodes that are common in both models even though they have been named differently. One technique would be to develop a list of aliases for possible node names and to search for them to identify possible common nodes. The other possibility is to examine property sets for nodes in the same categories and then evaluate potential matches based on the degree of correspondence of their properties. If nodes with high correspondence are identified, a recoding of their names would be helpful for further analysis of the mental models.

The results of detailed node and edge analyses can be combined to create a statement of the level of commonality of the two mental models. This information can also be used to identify: a) differences in structure; b) differences in level of detail; c) elements that are contained in one model but not in the other; and d) patterns of node linkages. A visual representation of the graphs can be created from the vertex and edge linkage descriptions and can aid in communicating the results of the comparisons.

The following tables suggest how detailed analyses of nodes and edges can be developed and then combined to present an overall comparison of two mental models. The discussion begins at the base level and advances through summary levels:

• vertex table
• edge table
• graph table – summary of commonality

Table 13 illustrates a vertex summary table to be completed for each vertex contained in either or both of the two models to be compared. There will be information in both column A and column B only if the vertex can be found in both models. The degree of similarity is expressed by the terms NSP (node similarity for properties) and NSTD (node similarity for total degree). The value range for each of these terms is 0–1, with "0" indicating that the vertex can be found in only one of the two models. At the other end of the range, "1" indicates that the vertex is present with the same property descriptions in both models.

Table 13. Vertex Summary

Vertex (Name)                                        Model A   Model B   Common
  Degree – number of edges incident
  Outdegree – number of vertices adjacent from it
  Indegree – number of vertices adjacent to it
  Cut-vertex?
  Vertex class
  Properties (detail sheet to be completed)

Comparison Values                                          Similarity value
  Node similarity for properties (range 0–1)               NSP
  Node similarity for degree (range 0–1)                   NSTD

Table 14 illustrates an edge summary table to be completed for each edge contained in either or both of the two models to be compared. There will be information in both column A and column B only if the edge can be found in both models. The degree of similarity is expressed by the term ESP (edge similarity for properties). The value range for this term is 0–1, with "0" indicating that the edge can be found in only one of the two models. At the other end of the range, "1" indicates that the edge is present with the same property descriptions in both models. It is also possible for an edge to be incident to a vertex that is common to both models without connecting corresponding common vertices. For example, vertex A may be present in both models with a degree of "2" (meaning it has two edges that connect it with one or two other vertices), but the connected vertex (or vertices) may differ in the two models. The edge summary table therefore provides an indication of whether or not the vertices being connected are the same in both models. An edge is considered common only if it connects common vertices in both models, and the comparison value, edge similarity for properties, will be "0" if the edge is not common to both models. The condition of edges present but connecting different vertices from a vertex common to both models is accounted for in the degree comparison value of the vertex summary.

Table 14. Edge Summary

Edge/Arc (Name)                                      Model A   Model B   Common
  Vertex u (name)
  Vertex v (name)
  Bridge?
  Properties (detail sheet to be completed)

Comparison Values                                          Similarity value
  Edge similarity for properties                           ESP

A component of a graph is a connected subgraph that is joined to a larger graph by a cut-vertex. That is, the cut-vertex separates a graph into two or more separate components. Because a component can be examined regarding the same characteristics as the larger graph of which it is a part, that discussion will not be duplicated here. The number of components in the complete graph can be determined by examining the number of cut-vertices contained in the vertex set.

The graphs for two mental models can be compared on basic quantitative values such as the order (i.e., total number of vertices), size (i.e., total number of edges/arcs), and the total number of graph components. However, the more important questions of comparison relate to how similar the graphs are regarding content: the meanings of nodes and the relationships between them. Similarity ratings for content and structure can be created using the similarity values computed in the vertex and edge summary tables. Figure 12 illustrates the comparison concept.

[Figure: two overlapping regions, one for the total nodes and edges of Model A and one for those of Model B. The overlap is labeled as the common area for the two models (same nodes, edges, and properties); the non-overlapping areas represent different or missing properties.]

Figure 12. Comparison concept for two mental model representations


Content: Overall similarity regarding nodes included and how they are described can be computed with the formula:

ONSP = (NSPv1 + NSPv2 + ... + NSPvn) / TN

Where:
  ONSP = overall node similarity on properties
  NSPvn = node similarity on properties for vertex n
  TN = total nodes (i.e., the total unique nodes mentioned in either model; see the worked example below)

Another measure of potential interest is the level of similarity for nodes that are common to both models. For example, an expert's model may be more extensive than that of a novice, but how similar is the novice's understanding to that of the expert on those elements where the novice does have knowledge? Using only common nodes, the following formula provides this measure.

CNSP = (NSPv1 + NSPv2 + ... + NSPvn) / TCN

Where:
  CNSP = common node similarity on properties
  NSPvn = node similarity on properties for vertex n
  TCN = total common nodes
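A sketch of both measures in Python, assuming per-vertex NSP values have already been computed and that TN counts the unique vertices across both models (the interpretation used in the worked example later in this appendix); the function and argument names are illustrative:

```python
def overall_and_common_node_similarity(nsp_by_vertex: dict,
                                       common_vertices: set):
    """ONSP and CNSP from per-vertex NSP scores. `nsp_by_vertex` maps every
    unique vertex in either model to its NSP; vertices found in only one
    model carry a score of 0. `common_vertices` names those in both models."""
    tn = len(nsp_by_vertex)
    onsp = sum(nsp_by_vertex.values()) / tn if tn else 0.0
    tcn = len(common_vertices)
    cnsp = (sum(nsp_by_vertex[v] for v in common_vertices) / tcn
            if tcn else 0.0)
    return onsp, cnsp
```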

Structure. Structure can be examined both from the vertex (node) and edge perspectives. From the node view, structure begins with the presence of the node and then the number of edges that link it to other nodes. From the edge view, the properties of an edge determine whether or not it has the same relationship in linking two nodes in model A as the edge linking the same common nodes in model B. In other words, the relationship between two nodes defines an important aspect of the deep structure in the model and defines not only that they are linked but how they are linked.

ONSTD = (NSTDv1 + NSTDv2 + ... + NSTDvn) / TN

Where:
  ONSTD = overall node similarity on total degree values
  NSTDvn = node similarity on total degree values for vertex n
  TN = total nodes

The same type of computation can be made to see how similar node degrees are using only those nodes that are common to both models.

CNSTD = (NSTDv1 + NSTDv2 + ... + NSTDvn) / TCN

Where:
  CNSTD = common node similarity on total degree values
  NSTDvn = node similarity on total degree values for vertex n
  TCN = total common nodes

Overall similarity regarding edges included and how they are described can be computed with the formula:

OESP = (ESPe1 + ESPe2 + ... + ESPen) / TE

Where:
  OESP = overall edge similarity on properties
  ESPen = edge similarity on properties for edge n
  TE = total edges

The following formula examines edge similarity among common edges.

CESP = (ESPe1 + ESPe2 + ... + ESPen) / TCE

Where:
  CESP = common edge similarity on properties
  ESPen = edge similarity on properties for edge n
  TCE = total common edges
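All six summary measures follow the same averaging pattern, so they can be produced by one helper. A sketch in Python, with the same assumption as above that per-element scores are keyed by the union of elements across both models (names are illustrative):

```python
def model_comparison_summary(node_sims: dict, node_degree_sims: dict,
                             edge_sims: dict,
                             common_nodes: set, common_edges: set) -> dict:
    """Computes the six similarity values of the model comparison summary.
    Each *_sims dict maps an element found in either model to its score;
    elements present in only one model carry a score of 0."""
    def average(scores: dict, keys=None) -> float:
        keys = list(scores.keys() if keys is None else keys)
        if not keys:
            return 0.0
        return sum(scores[k] for k in keys) / len(keys)
    return {
        "ONSP": average(node_sims),
        "ONSTD": average(node_degree_sims),
        "OESP": average(edge_sims),
        "CNSP": average(node_sims, common_nodes),
        "CNSTD": average(node_degree_sims, common_nodes),
        "CESP": average(edge_sims, common_edges),
    }
```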

Table 15 suggests a way to begin summarizing the comparison of two models. Additional analyses (e.g., the number of paths and their individual characteristics; distances between key vertices) would be possible using the same vertex and edge tables described above.

Table 15. Model Comparison Summary

Overall (computed from detail tables)                Model A   Model B   Common
  Order of G – total number of vertices
  Size of G – total number of edges/arcs
  Total number of graph components (connected subgraphs)

Full model comparison                                      Similarity value
  Overall node similarity on properties*                   ONSP
  Overall node similarity on total degree values*          ONSTD
  Overall edge similarity on properties*                   OESP

Comparison of common elements                              Similarity value
  Common node similarity on properties                     CNSP
  Common node similarity on total degree values            CNSTD
  Common edge similarity on properties                     CESP

*Included in the concept of properties is a presence/absence indicator.


A short example examining the vertices in two hypothetical models will show how the similarity values are related to the graph elements as described in the vertex and edge tables. Model A has an order of 20 vertices; Model B has an order of 15. The largest number of vertices they could have in common is 15 (the order of the smaller model). They have 10 vertices in common, meaning that Model A has 10 vertices not contained in Model B, and Model B has 5 vertices not contained in Model A.

Total vertices = 20 + 15 = 35
Total common vertices = 10
Total unique vertices = (20 − 10) + (15 − 10) + 10 = 25
Portion of unique vertices that are common = 10 / 25 = 0.40

In this example, the highest possible similarity rating for Model A and Model B regarding the presence of the same vertices is 0.40. However, if the node (vertex) names do not mean the same thing in both models, then that similarity rating is reduced according to the degree of difference in the descriptions of corresponding vertices in the two models. If nodes are common and have the same meaning in both models, the value for that vertex comparison is 1. If the descriptions agree on three of four properties, then the comparison value is 0.75. If that were the case for all 10 common vertices, their combined similarity ratings would be 7.5, which divided by 25 gives an overall similarity rating of 0.30. Adding a property set to provide a structured way of evaluating meaning increases the information potential of a mental model comparison. The interpretation of the above analysis is that the two models have a similarity rating of 0.75 for those nodes that they have in common, but the overall similarity rating for the two models is only 0.30.
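The arithmetic of the example can be replayed in a few lines of Python (all values taken directly from the example above):

```python
# Worked example: two hypothetical models with 20 and 15 vertices.
order_a, order_b, common = 20, 15, 10
total_unique = (order_a - common) + (order_b - common) + common  # 25
best_possible = common / total_unique                            # 0.40
per_common_nsp = 3 / 4     # each common vertex agrees on 3 of 4 properties
onsp = common * per_common_nsp / total_unique                    # 0.30
print(total_unique, best_possible, per_common_nsp, onsp)  # 25 0.4 0.75 0.3
```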

The information included in the vertex and edge descriptions could be used to perform a number of different analyses to give insight regarding comparisons of two mental models. For example, researchers could: a) compare types of knowledge presented (e.g., factual vs. procedural); b) identify specific knowledge represented in one model but not the other; c) examine the "efficiency" of one model compared to the other (i.e., being both complete and concise); and so on.

Implementation

A primary question in implementing the kind of methodology outlined in this paper is "Who will interpret the mental models into a language of graphs?" It would be possible to implement the methodology with or without automated support, although in the ideal situation, persons representing their mental models would do so directly, using the language described to provide unlimited naming and descriptions in a structured form so that their model information could be placed in a database. Model descriptions could proceed based on a branching-tree program that would offer drop-down menus to collect the required information at each level, depending on preceding responses. For example, if the participant is defining a node/vertex, the program requires a name of her or his choice followed by a classification of its type (person, place, or thing). Subsequent information requirements would depend on the type and the responses within its category. With this type of automation, the participant would be able to represent a mental model in the graph language without having had to learn it in advance. In a more restricted version of this technique, participants could be offered a menu of node or edge names to choose from (as is done in the Pathfinder technique) so that model comparisons would be guaranteed to have no issues concerning name association or the scope of elements to be included.
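A minimal sketch of such a branching-tree elicitation follows, in Python. The menu contents come from the Basic Vocabulary Tree, but the prompts, function names, and flow are illustrative assumptions rather than a description of any existing tool.

```python
VERTEX_TYPES = {
    "Person": ["Specific individual", "Individual role or function"],
    "Place": ["Specific location", "Relative location", "Functional location"],
    "Thing": ["Object", "Action", "Concept", "Assumption", "Hypothesis",
              "Theory", "Law", "Condition", "Consequence"],
}

def choose(prompt: str, options: list) -> str:
    """Presents a numbered, drop-down-style menu and returns the choice."""
    for i, option in enumerate(options, start=1):
        print(f"  {i}. {option}")
    return options[int(input(prompt + " ")) - 1]

def define_vertex() -> dict:
    """Collects one vertex definition; later prompts branch on earlier answers."""
    name = input("Vertex name (your choice): ")
    noun_class = choose("Noun class?", list(VERTEX_TYPES))
    subtype = choose("Type?", VERTEX_TYPES[noun_class])  # depends on the class
    return {"name": name, "class": noun_class, "type": subtype}
```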

Automated support would ideally also include graphic display of the information entered. In the best-case scenario, a participant could start from either an entry of pictorial elements (i.e., vertices and edges) or language statements and then be presented with the alternate form of representation. Some means of summarizing mental model information, whether pictorial or summary lists of nodes and linkages, would be needed for participants to track their progress in representing their knowledge.

One of the advantages of the proposed methodology is that, depending on the amount of information available, a researcher could use its language structure to translate a variety of knowledge representations (e.g., drawings and notes, or data from other automated systems) into the node and edge descriptions for a graph version of knowledge representation. In addition, the methodology could be implemented for pilot tests using only paper forms (or a simple database management system such as Access) to collect data for analysis, refinement, and validation of the methodology.

APPENDIX B

APPLICATION OF GRAPH THEORY FOR THE COMPARISON OF MODEL REPRESENTATIONS: A PROTOTYPE STUDY USING DESCRIPTIONS OF INSTRUCTIONAL DESIGN

Introduction

This paper represents the second phase of research investigating the application of graph theory for the comparison of mental model representations. In the first phase (Smith, 2005), the potential use of graph theory was explored, and a methodology was proposed for expressing mental models in terms of graphs and for assessing the types and levels of correspondence of one model to another. The next step in the development of the methodology requires a test of its utility and completeness. As originally envisioned, the methodology would be implemented with automated support to facilitate users' expression of their mental models and to analyze the similarity of models on content and structure. However, in the absence of such support, it was necessary to find another means to perform the initial test. The purpose of this paper is to report the results of a prototype study in which the methodology was used to compare several descriptions of instructional design. The descriptions represent the authors' models of this subject.

Study Questions

Although this study revealed some interesting comparisons between two models, the primary interest is in addressing questions regarding the next steps in developing and refining the methodology. These questions include:

1. Can the methodology be applied to compare models on the attributes of structure and content?
2. What, if any, modifications are needed in the language specifications for describing vertices and edges in a graphic representation of a model?
3. What issues must be addressed in the next steps in testing the methodology?

Method

The initial plan for this prototype study was to use live subjects with a paper-and-pencil means of providing their model representations; however, preliminary discussions with a potential subject revealed that considerable training and assistance would be needed to obtain good data for a meaningful test of the methodology. As an alternative, four textbooks were evaluated to determine whether model descriptions could be derived from their narrative discussions of instructional design. Two of the texts (Dick, Carey, & Carey, 2005; Seels & Glasgow, 1998) focused on describing particular models for the instructional design process. Two others (Gagné, Wager, Golas, & Keller, 2005; Reiser & Dempsey, 2002) provided a more comprehensive view of the subject, not limiting their discussion to process steps. These last two were used for the prototype application of the methodology.

In the introductory portions of each of the texts used for this analysis, the authors offer some type of definition of instructional design. From the authors' discussions, two conceptual models were identified and charted using the methodology developed to apply graph theory to the comparison of model representations. For each model, the following steps were followed. First, a drawing was made to show the major vertices and their relationships. Next, the text was consulted to determine more detail for each vertex, revealing additional graph components and their relationships. Using the Basic Vocabulary Tree (see Appendix A), vertex and edge descriptions were identified and documented in a database. After each model description was completed, the models were analyzed to obtain relevant graph characteristics for use in comparisons. The data included: (a) number of vertices; (b) number of edges; (c) degree values; and (d) common elements. Using the tables and formulas defined in the methodology (for a review, please see http://agil-ed.com/GTandMentalModels.doc), a model comparison was developed.

The following comments address issues that arose during the interpretation of the models and which could be factors in subsequent uses of the methodology, particularly when the models are represented by someone other than the original authors.

Implied vs. stated connections. In order to follow the same approach to interpretation in both models, very few assumptions were made about what the authors might have intended. In other words, unless an element or connection was referred to directly (however minimal the reference), it was not included, even though common logic would suggest that the authors might have had it in mind.

Completeness of the models. The emphasis was on drawing from the introductory material in the first two chapters of each text. Additional correspondences are likely to exist in the remainders of the texts; however, the purpose in this effort was not to provide complete documentation of the authors' views of the topic. Instead, it was to gather a reasonable amount of data to examine the types of analyses that could be performed using the methodology.

Additional refinements could be made, adding connections among elements presented and additional levels of detail; however, that would only serve to increase the amount of data present, not the quality of the analysis.

Limitations. Data were drawn from existing publications that define and describe the field of instructional design; therefore, model authors were not responding to a common problem statement or set of instructions for identifying their models.

Models were translated to the graph methodology by the researcher. The text authors may have translated the data differently. It is also acknowledged that other model interpretations could be produced from the same texts.

The models were derived from texts produced by multiple authors; therefore, none of the models can be considered the "mental model" of an individual. The issue of whether or not mental models can be shared is left for other discussions.

The texts did not include the full set of properties as outlined in the methodology for vertex and edge descriptions. Although property values were very limited, there were enough in principle to permit inclusion of property comparisons in the analysis.

The Models

The models were developed as representative of the authors' views of instructional design (ID). Figures 25 and 26 illustrate the top levels for Model A (Gustafson & Branch, 2002; Reiser, 2002) and Model B (Gagné, Wager, Golas, & Keller, 2005). A quick review of the two graphs immediately reveals that there are differences in the way the two groups present their subject; however, further investigation is needed to determine if the differences relate to the way elements are grouped (i.e., structure) or if there are differences in the content of the models. For example, a look at the complete models (see below) shows that Model A and Model B both have vertices for the systematic process of ID and models of ID, but only Model B shows them at the top level of the graph.

[Figure: top level of Model A, a graph whose vertices include Instructional Design & Technology (IDT) (V1), professional organizations (V2), IDT professionals (V3), key defining elements (V4), concepts (V5), actions (V6), goals (V7), and performance technology (V8), connected by edges E1-E11.]

Figure 25. Model A – Reiser, Gustafson, and Branch (2002)

[Figure: top level of Model B, a graph whose vertices include an educational or training system (V1), Instructional Design (ID) (V2), instructional designers (V3), assumptions (V4), conditions of learning (V5), principles of learning (V6), instruction (V7), instructional materials (V8), the systematic process of ID (V9), models of ID (V10), activities that facilitate learning (V11), and the goal of facilitating intentional learning (V12), connected by edges E1-E16.]

Figure 26. Model B – Gagné, Wager, Golas, and Keller (2005)


Table 16 shows the summary of the model comparison that was derived from the database and formulas as specified in the methodology. Model B has considerably more elements than Model A: it contains sixty-four vertices compared to forty-five for Model A. Only twenty of these are common to both models, leaving eighty-nine vertices as unique between the two. Similarly, Model B contains eighty-three edges compared to fifty-four for Model A, with sixteen of these common to both models. For an edge to be common to both models, it must connect the same two vertices and be based on the same type of relationship in both models. Although there were twenty vertices common to both models, they were not all connected in the same way.

The range of values for similarity is 0-1, with 0 indicating no similarity and 1 indicating complete agreement. On the full model comparison, it appears the two models have very little similarity. The overall vertex similarity value is 0.19. This value relates to content and represents not only the similarity on whether or not vertices were present in both models but also on whether or not they were described in the same way. The overall similarity on degree values was lower: 0.13. This value relates to similarities in structure in terms of the number of edges associated with vertices and the direction of their connections within the model. The value for overall edge similarity on properties, 0.13, indicates the level at which relationships between vertex connections are similar. In this particular comparison, edges that were common had the same properties; therefore, this value does not add significant information to the analysis.

Although there was little similarity observed between the two models overall, there was considerably more similarity in the comparison of common elements in the two models. For those vertices that were common to both models, the similarity value was 0.85. There was somewhat less similarity on the comparison of degree values for vertices: 0.57. This suggests that there was more agreement on content than structure in the common elements.

Table 16. Summary of the Model Comparison

Overall*                                             Model A   Model B   Common
  Order of G – total number of vertices                 45        64       20
  Size of G – total number of edges/arcs                54        83       16

Full model comparison                                      Similarity value
  Overall vertex similarity on properties                  0.19
  Overall vertex similarity on total degree values         0.13
  Overall edge similarity on properties                    0.13

Comparison of common elements                              Similarity value
  Common vertex similarity on properties                   0.85
  Common vertex similarity on total degree values          0.57
  Common edge similarity on properties                     1

*Note: analysis of subgraphs is not included here.

Model similarities. Figures 27 and 28 show the model segments that are most similar for Model A and Model B. The primary differences between the two pictures of Models of ID are: (a) where they connect at the upper level; and (b) the addition of another vertex in Model B (V56: Simple lesson planning). The primary basis for the similarity is the reference to the ADDIE model.

[Figure: the Models of ID segment of Model A, with vertices for Models of ID (V37), the basic model, ADDIE (V38), its steps of Analysis (V39), Design (V40), Development (V41), Implementation (V42), and Evaluation (V43), other models (V44), and Dick & Carey (V45), connected by edges E40-E52.]

Figure 27. Model A: Models of ID segment

[Figure: the Models of ID segment of Model B, with vertices for Models of ID (V10), a simple model, e.g., lesson planning (V56), the prototypical model, ADDIE (V57), its steps of Analysis (V58), Design (V59), Development (V60), Implementation (V61), and Evaluation (V62), other models (V63), and Dick & Carey (V64), connected by edges E71-E83.]

Figure 28. Model B: Models of ID segment


Model differences. Although there are other striking differences in the way content in the two models is presented, Figures 25 and 26 illustrate that these differences appear at a fundamental level. Model A presents a straightforward view of ID as something that professionals do to achieve some goal. On the other hand, Model B presents a more holistic view, indicating first that ID exists in a context (an educational or training system) and second that it is related to other elements that may lie outside the process of ID but provide the necessary support to enable those who design instruction to achieve their goal.

Reflections on the Prototype Study

First, no judgment of the relative value of one model over another is implied. The methodology can only reveal information about similarities and differences between models. A separate rubric would be required to judge merit.

When writers create a narrative description of their models, they do not follow simple, two-dimensional representations. What emerges can be the sense of a multidimensional model, not confined to the limitations of two-dimensional representation that can appear in a concept map. In the graph world, the same element may be related to other elements in more than one way, perhaps repeated in different "levels." With the proper tool to implement the methodology, it may be possible to capture some of that multi-dimensional aspect of the way we think about our models. The methodology appears to be able to support the analysis of these representations.

After applying the methodology to narrative descriptions, it has become clear that persons representing their models would need considerable preparation to describe them without the support of well-designed software. Persons who are visual thinkers are likely to have an easier time than others, but the process is challenging even for the developer of the methodology. The data for even a simple model can be extensive when all of the vertex and edge descriptions are collected.

It is difficult to know how much detail to provide. When is a model "complete" for a given purpose? The answer to this question could be important for determining true differences between one person's model and another's; however, the methodology supports analysis that can distinguish between differences that are related to level of detail and those that are substantially different in content and structure.

Improvements Needed

When the original basic language tree (see Appendix A) was developed, the categories and menu items appeared to be straightforward and relatively complete. However, during the study, it was difficult to decide which menu item to use in some cases, and in others, none seemed to be completely appropriate. Two major improvements are needed:


• Expanded menus for properties. Both vertex and edge property lists need to be reexamined and expanded. For the vertex descriptions, the main difficulty was in deciding how to classify a "thing," particularly when the thing can be considered from several perspectives at the same time. For edge descriptions, the most difficulty arose in trying to define the relationship class of the connection between two vertices. Relationships such as process steps or class memberships are straightforward, but relationships between concepts or interdependent items require more exploration for appropriate terms.

• Dictionary of terms with clear definitions and criteria for designation of properties. Concurrent with the development of expanded menus for properties is the need to develop a dictionary that provides clear definitions of all terminology and criteria for determining how to select the appropriate classification from the menus. Clear examples should also be provided to illustrate application of the criteria.

Resources to be consulted for further development of menus and a dictionary include books on knowledge representation and modeling systems.

Conclusions

The purpose of this study was to test the methodology for comparing model representations using graph theory. Using existing texts that describe instructional design, the methodology was used to make a comparison with metrics for (a) overall similarity on content and structure and (b) common element similarity on content and structure. Although the methodology appears to be viable, improvements are needed in the language specifications for vertex and edge descriptions. A possible next step in developing the methodology is to refine the language specifications and then use the same texts to represent the models with better descriptions and additional detail.

Graphs and Data

Complete graphic representations for Model A and Model B are contained in a separate PowerPoint file.

APPENDIX C

STUDY TRAINING MATERIALS

PowerPoint Presentation

Slide 1 (title). [A small three-vertex graph: V1 joined to V2 and V3 by edges E1 and E2.] Linda J. Smith, Doctoral Candidate, Instructional Systems

Slide 2. Using Graphs to Represent Your Mental Model

Mental models are internal organizations of information and ideas that help you understand and function within your world.

In order for someone else to understand your mental model, it must be represented externally by some method that communicates key elements.

Slide 3. In this activity, you will use graphs to represent your mental model of how to approach an instructional design problem:

• To show how much knowledge you have about instructional design as you begin this course
• To compare to that of an expert to identify knowledge you may need
• To compare to your end-of-semester model to show how much you have learned in this course

133 USING GRAPHS TO REPRESENT YOUR MENTAL MODEL

A vertex (shown as a circle) Vertex 1 represents a person, place, (name) or thing

Edge (number) An edge (shown as a line) represents a connection or relationship between two Vertex 2 vertices (name)


USING GRAPHS TO REPRESENT YOUR MENTAL MODEL

[Slide graphic: an example graph for "Buying a Car" (V1), with vertices for Type of Car, Reasons for Purchase, Money, Parents, Children, Selection Process, and Purchase, connected by edges labeled Dependence, Transaction, Cause/effect, Hierarchical, and Class member. Each vertex in blue can be expanded on separate pages.]

USING GRAPHS TO REPRESENT YOUR MENTAL MODEL

Vertex Type (check one):

Person: __ Specific individual or group   __ Role someone would perform
Place: __ Specific location   __ Relative location   __ Functional location
Thing: __ Object   __ Action   __ Concept   __ Hypothesis   __ Theory   __ Law   __ Assumption   __ Condition   __ Consequence or outcome


USING GRAPHS TO REPRESENT YOUR MENTAL MODEL

Details (depending on the type of vertex)

Vertex Details:
• Physical – relevant characteristics (e.g., size, color, material composition, etc.)
• Functional – WHAT the vertex item does
• Procedural – HOW the activity is performed
• Value – the significance of the vertex item (i.e., comments on elements that are not physical objects or actions)


USING GRAPHS TO REPRESENT YOUR MENTAL MODEL

A vertex is identified by its type and by a name that indicates a specific instance of the type.

Person names: Robert Gagné; elementary school students; an instructional designer
Place names: Florida State University; a classroom
Thing names: textbook; design process; Graph Theory; improved grades (outcome); same treatment for group members (condition)

USING GRAPHS TO REPRESENT YOUR MENTAL MODEL

An edge is a line that links two vertices and identifies their relationship.

Type of Relationship (check one):
__ Dependence/interdependence
__ Sequential
__ Transaction
__ Class member (e.g., "one-of")
__ Hierarchical
__ Cause/effect
__ Correlation


USING GRAPHS TO REPRESENT YOUR MENTAL MODEL

Relationship Details (depending on the type of edge)

Edge Details:
• Medium – how the relationship occurs
• Duration – time aspect
• Direction – to/from
• Iteration – repetition
• Value – numerical value, if pertinent
• Character – positive/negative
• Importance – significance


USING GRAPHS TO REPRESENT YOUR MENTAL MODEL

Examples:
Vertex DOG – Edge 1 (Class Member) – Vertex POODLE
Vertex CAR – Edge 2 (Transaction) – Vertex PURCHASE
Vertex STEP 1 – Edge 3 (Sequential) – Vertex STEP 2


USING GRAPHS TO REPRESENT YOUR MENTAL MODEL

Each participant has a packet of materials that includes:

1. Instructions
2. Paper for drawing the mental model graphic representation
3. Blank forms for describing graph vertices
4. Blank forms for describing graph edges
5. A questionnaire


USING GRAPHS TO REPRESENT YOUR MENTAL MODEL

Using the practice blank sheet and forms, represent PART of a simple mental model for making ice cubes (3 vertices).

TIME: ABOUT 3 MINUTES

YOUR MODEL MIGHT INCLUDE SOME OF THE FOLLOWING:
• Materials (e.g., water, ice cube tray)
• Theory (e.g., how ice forms)
• Equipment needed (e.g., freezer)
• Temperature requirement
• Process (e.g., fill tray, put in freezer, etc.)
• Other elements?

[Slide graphic: a three-vertex example graph with V1 linked by edges E1 and E2 to V2 and V3]


APPENDIX D

HUMAN SUBJECTS APPROVAL

Office of the Vice President for Research
Human Subjects Committee
Tallahassee, Florida 32306-2742
(850) 644-8673 / FAX (850) 644-4392

APPROVAL MEMORANDUM

Date: 8/22/2008

To: Linda Smith

Address: 1842 Vineyard Way, Tallahassee, FL 32317
Dept.: EDUCATIONAL PSYCHOLOGY AND LEARNING SYSTEMS

From: Thomas L. Jacobson, Chair

Re: Use of Human Subjects in Research – Graph and Property Set Analysis: A Methodology for Comparing Mental Model Representations

The application that you submitted to this office in regard to the use of human subjects in the proposal referenced above has been reviewed by the Secretary, the Chair, and two members of the Human Subjects Committee. Your project is determined to be Expedited per 45 CFR § 46.110(7) and has been approved by an expedited review process.

The Human Subjects Committee has not evaluated your proposal for scientific merit, except to weigh the risk to the human participants and the aspects of the proposal related to potential risk and benefit. This approval does not replace any departmental or other approvals, which may be required.

If you submitted a proposed consent form with your application, the approved stamped consent form is attached to this approval notice. Only the stamped version of the consent form may be used in recruiting research subjects.

If the project has not been completed by 8/21/2009 you must request a renewal of approval for continuation of the project. As a courtesy, a renewal notice will be sent to you prior to your expiration date; however, it is your responsibility as the Principal Investigator to make a timely request for renewal of your approval from the Committee.

You are advised that any change in protocol for this project must be reviewed and approved by the Committee prior to implementation of the proposed change in the protocol. A protocol change/amendment form is required to be submitted for approval by the Committee. In addition, federal regulations require that the Principal Investigator promptly report, in writing, any unanticipated problems or adverse events involving risks to research subjects or others.

By copy of this memorandum, the Chair of your department and/or your major professor is reminded that he/she is responsible for being informed concerning research projects involving human subjects in the department, and should review protocols as often as needed to insure that the project is being conducted in compliance with our policies and with DHHS regulations.

This institution has an Assurance on file with the Office for Human Research Protection. The Assurance Number is IRB00000446.

Cc: J. Spector, Advisor
HSC No. 2008.1656



GRAPH AND PROPERTY SET ANALYSIS: A METHODOLOGY FOR COMPARING MENTAL MODEL REPRESENTATIONS
Formative Evaluation Study
Instructions

Introduction: As a participant in this study, you are asked to represent your mental model in response to an instructional design problem statement. There is no single correct answer, and this activity does not affect your grade in the course. Your name will not be included in the reporting for this study. Your mental model representation will illustrate your current understanding of instructional design.

What you are to do:
1. Read the problem statement below.
2. Using the blank paper and forms provided in this packet, represent your mental model of how to respond to the problem statement. (Time limit: 1 hour)
3. Complete the enclosed questionnaire.
4. Put all materials back in the folder and return to the researcher.

Problem statement: You are an instructional designer who is competing for a position with an instructional design company. The company is asking each applicant to illustrate his or her knowledge of instructional design by representing his or her mental model of how to respond to an instructional design assignment.

The assignment is to design a one week educational unit on writing paragraphs for a 5th grade English class that includes students with disabilities (e.g., physical, learning, behavior) along with students who have not been diagnosed with problems related to the classroom. In this exercise, you are NOT to design the unit–you are to illustrate your knowledge of what an instructional designer needs to know and do in order to complete the design task.

Using the mental model representation guidelines, provide a description of your approach to this problem. You may include whatever elements you consider significant (e.g., processes, procedures, theories, persons, tools, etc.). Your objective is to convey your overall understanding of instructional design and the basis for your approach. You probably won't have time to include all details of which you have knowledge, so choose what you consider the most important and relevant elements.

Suggestions and comments:
• You may want to sketch a concept map before you begin to complete the detailed descriptions of vertices (circles) and edges (lines) on the property set forms.
• You may want to draw your graphs in "layers"; that is, the first page may show the highest level elements, and separate pages may be used to show the details of those vertices that are to be broken out into further detail. Repeat the vertex identifier on the page that shows its details.
• There is no minimum or maximum number of vertices or edges.
• You may refer to the training material during this exercise.


Vertex No. ______   Vertex Name: ______ (Unique name that describes the element)

STEP 1 Identification Section – check ONE identifier from the list below

Person(s)
__ Specific – A particular individual or group
__ Title – Role or function someone would perform

Place(s)
__ Specific – Name of geographic location (e.g., Florida State University, New York, Europe)
__ Relative – Identified relative to another place (e.g., 3rd house on a street, 1st floor of a building)
__ Functional – Identified by function (e.g., factory, public park, classroom)

Thing(s)
__ Object – Physical item
__ Action – Procedure, function, or activity
__ Concept – Idea
__ Assumption – Statement assumed to be true
__ Hypothesis – Testable statement that may explain an observation or phenomenon
__ Theory – Explanation of something based on reasonable evidence
__ Law – Statement that is true by consistent observations
__ Condition – Stipulation, prerequisite, or state of being
__ Outcome – Result or consequence
__ Other – Specify: ______

STEP 2 Type of Description – check ONE type and write brief description of RELEVANT details

__ Physical – Observable characteristics (e.g., size, number, color, material, shape, weight, etc.)
__ Functional – WHAT the person, place, or thing does
__ Procedural – HOW the action is performed
__ Significance – Comments on a concept, assumption, hypothesis, theory, law, condition, outcome, etc.

Write BRIEF description:


Edge No. ______ CONNECTS Vertex No. ______ TO Vertex No. ______

STEP 1 Identification Section – select ONE relationship class that explains how the edge links two vertices

__ Dependence – One item depends on another (or they both depend on each other)
__ Sequence – One item follows another
__ Transaction – An exchange (e.g., a purchase, communication, etc.)
__ Membership – Class member or example (e.g., this item is "one-of" the larger element)
__ Hierarchical – Structure (e.g., corporation, division, branch, unit, etc.)
__ Cause/effect – One item is caused by the other
__ Correlation – One item is correlated with another (but doesn't cause it)
__ Basis
__ Other – Specify: ______

STEP 2 Description Section – fill in blanks with RELEVANT details (leave an item blank if not applicable)

Medium – the means by which the relationship exists or occurs:
Duration – how long the relationship lasts:
Iteration – repetition of link (e.g., occurs "n" times or repeats "until – specify condition"):
Value – numerical value, if pertinent to context (e.g., a purchase amount, a weight, etc.):
Direction – linking (to/from):
__ one direction
__ goes both directions
Character:
__ Positive
__ Negative
Importance – necessity of the relationship:
__ required or critically important
__ valuable but not essential
__ present but not necessarily valuable


GRAPH AND PROPERTY SET ANALYSIS: A METHODOLOGY FOR COMPARING MENTAL MODEL REPRESENTATIONS
Formative Evaluation Study
Post-Training Questionnaire

PARTICIPANT: (Study Code) ______

(Response scale: Strongly Agree / Agree / Disagree / Strongly Disagree)

1. I understood the training. O O O O
2. I understood what I was supposed to do in the practice. O O O O
3. The practice helped me have confidence for the next activity. O O O O

Additional comments or suggestions:


GRAPH AND PROPERTY SET ANALYSIS: A METHODOLOGY FOR COMPARING MENTAL MODEL REPRESENTATIONS
Formative Evaluation Study
Questionnaire

PARTICIPANT: (Study Code) ______

(Response scale: Strongly Agree / Agree / Disagree / Strongly Disagree)

1. I am familiar with concept maps. O O O O
2. The training/orientation presentation helped me understand how to represent a mental model. O O O O
3. The instructions for representing my mental model were clear. O O O O
4. I understood what I was supposed to do. O O O O
5. The exercise of representing my mental model helped me to clarify my thinking about instructional design. O O O O
6. I had much more information that I did not have time to include in my model representation. O O O O
7. I could improve my model if I had more time to make revisions. O O O O
8. I could do a better job of representing a mental model if I had more practice. O O O O
9. It is difficult for me to represent my thoughts as graph elements. O O O O
10. I often think in "pictures." O O O O

Additional comments or suggestions:

APPENDIX E

REPORT FOR PROFESSOR

Comparison of Mental Model Representations: Students to Professor
Introduction to Instructional Design Course

Final Report

Linda J. Smith October 19, 2008

Contents

Introduction .......................... 4
Comparison of Graphs Overview .......................... 4
Summary of Findings .......................... 4
Observations on Conceptions and Misconceptions .......................... 6
Professor and Student Graphs .......................... 7

Tables
Table 1. Summary of differences between professor and student models .......................... 5
Table 2. Focus and perspectives shown on graphs by students and professor .......................... 8
Table 3. Post-elicitation questionnaire responses .......................... 9
Table 4. Student model to professor model comparisons .......................... 30
Table 5. Common vertices in professor and student models .......................... 31
Table 6. Summary of backgrounds and student graph perspectives by country .......................... 32
Table 7. Summary of backgrounds and student graph perspectives by experience .......................... 33

Figures
Figure 1. Number of vertices and edges in graph .......................... 7
Figure 2. Focus and perspectives shown on graphs: US vs. International students .......................... 8

Graphs, professor and student 1-19
Professor (P20) .......................... 10
Student (P1) .......................... 11
Student (P2) .......................... 12
Student (P3) .......................... 13
Student (P4) .......................... 14
Student (P5) .......................... 15
Student (P6) .......................... 16
Student (P7) .......................... 17
Student (P8) .......................... 18

Student (P9) .......................... 19
Student (P10) .......................... 20
Student (P11) .......................... 21
Student (P12) .......................... 22
Student (P13) .......................... 23
Student (P14) .......................... 24
Student (P15) .......................... 25
Student (P16) .......................... 26
Student (P17) .......................... 27
Student (P18) .......................... 28
Student (P19) .......................... 29

Final Report

Introduction

This brief report summarizes some of the findings from an analysis of the mental model representations of the professor and nineteen students in an introductory course in instructional design. The purpose of the report is to provide the professor with information about student entry level conceptions and possible misconceptions. Such information may prove useful for the design and delivery of instruction in the course.

Comparison of Graphs Overview

Mental model representations were expressed through graphs and descriptions of the vertices and edges contained on the graphs. Vertices are shown in circles and edges are indicated by lines connecting vertices. Each vertex and edge is numbered from 1 to "n," according to the number of vertices and edges in each graph.

A structure summary is provided for each participant's model representation. It consists of a graph showing the total vertices and edges and a list of the vertex names provided by the participant. In each graph, the first vertex (V1) indicates the highest level within the graph. Color coding is added for ease of locating the first vertex and each vertex incident to it. V1 is shown in light yellow. All vertices directly connected to V1 are shown in green. The color coding is used both on the graph and the list of vertices included with the model representation.

Each student graph summary includes a list of the focus and perspectives shown on the graph. The summary also includes a brief review of the student's background.

On student graphs, vertices and edges that correspond to similar elements on the professor's graph are shown with bold lines. Similarity of vertices is determined by an analysis of vertex labels, properties, and graph context. Vertex labels need not be identical, but the graph labeling and properties must indicate a clear correspondence for vertices to be considered similar. Edges are determined to be similar if (a) they connect the same vertices on both the professor's graph and the student's graph, and (b) their properties suggest the same type of relationship on both graphs.
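These correspondence judgments were made by hand in this study. For readers who want a concrete picture of the two rules, the sketch below approximates them in code; the shared-key-word label test and the dictionary layout are assumptions introduced here for illustration, not the procedure actually used.

```python
# A minimal sketch of the two correspondence rules described above
# (labels/properties for vertices; endpoints plus relationship class for
# edges). The label test is a stand-in for the study's hand analysis.

def vertices_correspond(v1: dict, v2: dict) -> bool:
    """Vertices correspond when their labels clearly overlap and their
    property classifications agree."""
    shared = set(v1["label"].lower().split()) & set(v2["label"].lower().split())
    return bool(shared) and v1["class"] == v2["class"]

def edges_correspond(e1: dict, e2: dict, vertex_map: dict) -> bool:
    """Edges correspond when they join corresponding vertices on both graphs
    and their properties suggest the same type of relationship."""
    endpoints_match = (vertex_map.get(e1["from"]) == e2["from"]
                       and vertex_map.get(e1["to"]) == e2["to"])
    return endpoints_match and e1["relation"] == e2["relation"]

# Example: "Target audience" vs. "audience" share a word and a classification.
v_prof = {"label": "Target audience", "class": "Person"}
v_stud = {"label": "audience", "class": "Person"}
print(vertices_correspond(v_prof, v_stud))  # True
```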

Summary of Findings

On the surface, it appears that many student graphs included some of the same or similar elements when compared with the graph of their professor; however, an examination of element properties and the relationship among vertices revealed substantial differences between the mental model representation of the professor and the mental model representations of the students. Although few student model elements are identical to the professor's model representations, model correspondences have been listed where labeling and properties support an assumption that the student meant something similar to the professor's model element.

Table 1. Summary of differences between professor and student models

Professor model: Is presented from a holistic, systems view of the problem and an approach to a solution.
Student models: Contain compilations of elements whose relationships are not defined in terms of an approach to a problem solution.

Professor model: Identifies project goals, plans, activities, information sources, and functions in relation to an instructional design process.
Student models: Tend to show groups of elements with little, if any, expression of how those elements are related to the problem solution.

Professor model: Is both comprehensive and concise.
Student models: No student model presents a comprehensive picture, and many student models contain elements and details that are not relevant (at least not in the way presented in their graphs).

Professor model: Presents a high level view of an entire design process.
Student models: Student models that include elements of the design process are incomplete. Some are inaccurate.

Professor model: Relates to creating an instructional unit for a specific class setting, but the model is expressed in general terms. That is, the model could apply to other settings with similar characteristics.
Student models: Many student models appear to focus on existing physical characteristics of a target setting and specific individuals.

Professor model: Does not repeat the elements contained in the problem statement; the model responds to the problem statement.
Student models: Many student models repeat the elements contained in the problem statement without responding to the problem.

Professor model: Reflects the perspective of an experienced instructional designer.
Student models: Some student models appear to reflect backgrounds as teachers and trainers. Approximately one-third of the students include the delivery of instruction in their models (perhaps as they might have done as teachers who are responsible for creating and delivering course units).

Professor model: Reflects a team approach, including client personnel and subject matter experts.
Student models: Do not reflect a team approach to instructional design. Instead, it appears the students assume the instructional designer role is to become a source of knowledge in the subject matter and instructional delivery requirements.

Professor model: Identifies a number of sources for information, content knowledge, and specialized expertise.
Student models: Some student models indicate the students are attempting to provide detailed knowledge of the problem area rather than identifying how and where to locate that knowledge within the problem context.

Professor model: May be characterized as conceptual and abstract.
Student models: Tend to be related in more concrete terms.

Some student models contain elements that are implied in the professor's model, but they are not considered comparable because they clearly are presented as incomplete expressions or are categorized differently. For example, student models may include "student needs" as something to be determined; however, in the professor's model, student needs are included within the overall analysis of the system characteristics and determined from several sources of information.

Observations on Conceptions and Misconceptions

Many students have experience as teachers or trainers. Some have experience in course design but without formal preparation for this task. Although their backgrounds reflect experience in education, all students appear to be at the basic entry level for the instructional design program. The following elements and concepts were included in the professor's graph but were not represented in the student graphs, indicating areas where students may need to receive instruction.

1. the role of the instructional designer,
2. project planning and execution,
3. a systems view of the context for an instructional design project,
4. the roles of other persons in a project,
5. sources of information and their relationship to the design process,
6. the nature of constraints and limitations, and
7. the instructional design process.

Some students included graph elements with labels referring to an ADDIE-type design process; however, these were usually presented at a high level, with little if any appropriate detail.

Possible misconceptions. A number of student graphs suggest that there may be misconceptions about: (a) the role of the instructional designer; (b) limitations that are imposed on the designer in the design process; and (c) the product of instructional design. Although individual circumstances may affect the role, activities, and limitations in a particular design task, no such limitations were implied in the problem statement. Four possible misconceptions in student responses were notable:

1. Instructional designers must know specifics of a particular classroom rather than design instruction that can be used and reused by different persons and possibly in different locations (assuming they have comparable generic characteristics). Examples of things students said an instructional designer must know:
a. individuals who will be delivering the instruction;
b. individuals who will be receiving the instruction; and
c. details about the specific classroom where instruction will be delivered.

2. Instructional designers are constrained by currently available resources.

3. Instructional design is essentially a redesign of existing material and procedures.

4. Instructional designers work alone in the design process.

Professor and Student Graphs

Summaries of the data obtained in this study are shown in the figures and tables that follow. Figure 1 shows a comparison of the number of vertices and edges in each of the students' graphs and the professor's graph. Figure 2 reveals differences in the focus and perspectives of US and international students. Table 2 identifies the focus and perspectives of each student and the professor. Table 3 contains the post-elicitation questionnaire responses of the nineteen (19) students.

Following these summaries, there is a graph for the professor and then a graph for each of the nineteen (19) students. Table 4 presents the numeric values indicating the degree of correspondence between each student's graph and the professor's graph. Details include the size and order of the graphs, the correspondence based on a full model comparison, and the correspondence based on a comparison of only the common elements. Table 5 lists the vertices common in the professor's model and each of the student models.

[Bar chart: Number of Vertices and Edges in Graph. Series: Vertices, Edges. X-axis: Participants 1-19 (Students) and 20 (Professor); y-axis: count, 0-45.]

Figure 1. Number of vertices and edges in graph

Graph Focus and Perspectives: US vs. International Students

[Bar chart: number of students per focus/perspective, with series for Professor, US Students, and International Students. Y-axis: Number of Students, 0-9. Categories: System view, ID knowledge, Specific classroom, General classroom, Delivery of instruction, Redesign of existing instruction, Design process, Subject matter knowledge.]

Figure 2. Focus and perspectives shown on graphs: US vs. international students

Table 2. Focus and perspectives shown on graphs by students and professor

Focus and Perspectives Shown on Graphs

Focus or perspective (No. = number of students showing it; participant numbers in parentheses; P = also shown by professor):
System view – 0 students (P)
ID knowledge – 2 students (1, 6) (P)
Specific classroom setting – 14 students (1, 2, 4, 7, 8, 9, 10, 11, 12, 15, 16, 17, 18, 19) (P)
General classroom environment – 3 students (5, 13, 14)
Delivery of instruction – 6 students (1, 2, 4, 6, 7, 12)
Redesign of existing instruction – 3 students (9, 10, 19)
Design process – 8 students (3, 5, 9, 10, 11, 14, 17, 19) (P)
Subject matter knowledge – 4 students (3, 4, 6, 7)

Table 3. Post-elicitation questionnaire responses

Post-Elicitation Questionnaire – Student Responses (19)
(Columns: Strongly Agree / Agree / Disagree / Strongly Disagree / No Response)

1. I am familiar with concept maps. 11% / 42% / 32% / 16% / 0%
2. The training/orientation presentation helped me understand how to represent a mental model. 21% / 63% / 16% / 0% / 0%
3. The instructions for representing my mental model were clear. 16% / 63% / 16% / 0% / 5%
4. I understood what I was supposed to do. 16% / 74% / 11% / 0% / 0%
5. The exercise of representing my mental model helped me to clarify my thinking about instructional design. 11% / 63% / 16% / 0% / 11%
6. I had much more information that I did not have time to include in my model representation. 32% / 42% / 16% / 5% / 5%
7. I could improve my model if I had more time to make revisions. 42% / 42% / 11% / 0% / 5%
8. I could do a better job of representing a mental model if I had more practice. 63% / 32% / 5% / 0% / 0%
9. It is difficult for me to represent my thoughts as graph elements. 16% / 5% / 74% / 0% / 5%
10. I often think in "pictures." 11% / 37% / 42% / 5% / 5%

Explanation of comparison notation on graphs:

In the graphs that follow, vertices and edges that appear in both professor and student graphs are shown in BOLD circles and lines. On the professor graph, the numbers in parentheses indicate the number of student graphs containing the same or nearly the same vertex or edge. There was only one student graph that appeared to have an edge in common with the professor's graph; however, the relationship between the two common vertices appeared to be substantially different. This edge is indicated by a dotted line (see Professor, Edge E39* and Student P3, Edge E11*).

Professor (P20) Structure Summary
Vertices: 40; Edges: 42

[Graph figure: the professor's full model layout. Parenthetical numbers on the figure indicate how many student graphs contain the same or nearly the same element; edge E39* is shown dotted (see the notation explanation above). Vertex names are listed below.]

Focus/Perspectives: ID knowledge, Design process, System view, Specific classroom setting

1 Design a one week instructional unit on writing paragraphs: Instructional designer "Know & Do" Requirements
2 System characteristics
3 Project plans and strategies
4 Goals
5 Blueprint: Design the unit
6 Instructional materials: Develop and formatively evaluate the units
7 People
8 Supporting materials
9 Situational knowledge
10 Target system characteristics
11 Background resources
12 Enculturation
13 Suprasystem characteristics and interfaces
14 Coordinate systems & interfaces
15 Values, policies, and attitudes
16 Background (existing) material
17 Process model
18 Project plan
19 Information requirements (identify gaps)
20 Constraints & facilitators
21 Client
22 Prepare unit plan and lesson plans
23 Prepare specialized materials
24 Go over unit with teacher and get approval
25 Lesson plan & materials
26 "Walk through"
27 Pilot test
28 Closing the project
29 Target audience
30 Primary client
31 Relevant administrators
32 Learning disability specialists
33 Other teachers
34 Subject matter experts
35 Existing curricular material
36 Class syllabus and other class documentation
37 Related curricula (concurrent, previous, and subsequent)
38 Documents about specialized needs of the students
39 School policy and procedure documents
40 Materials on teaching and writing

Student (P1) Structure Summary
Vertices: 20; Edges: 24

[Graph figure: node-and-link layout not reproducible in text. Bold elements mark correspondence with the professor's graph; vertex names are listed below.]

Focus/Perspectives: ID knowledge, Specific classroom setting, Delivery of instruction Background: (not included here for anonymity)

1 Design a one-week educational unit 2 Participants 3 Normal students 4 Students with disabilities 5 English standards of participants 6 Level of standards 7 5th grade 8 Syllabus of teaching English 9 Duration of the program 10 One week 11 Instructors 12 Instructors qualification 13 Training for instructors 14 Number of instructors available 15 Theories of learning 16 Instructional techniques 17 Classroom instruction 18 Experiential learning 19 Computer-based instruction 20 Self-paced learning

Student (P2) Structure Summary
Vertices: 18; Edges: 21

[Graph figure: node-and-link layout not reproducible in text. Bold elements mark correspondence with the professor's graph; vertex names are listed below.]

Focus/Perspectives: Specific classroom setting, Delivery of instruction Background: (not included here for anonymity)

1 Designing a LA unit for 5th grade 2 Classroom instructor 3 Classroom 4 Students 5 Education levels/previous competencies and knowledge 6 Disabilities and the stipulations 7 Resources 8 Timing/time restraints 9 Evaluation 10 Class size (# of students) 11 Writing scores 12 IQ/Aptitude test scores 13 Attention span of students 14 Behavioral factors (previous experience of students) 15 Teaching style 16 Classroom deficits in resources 17 Students learning style 18 Daily schedule

Student (P3) Structure Summary
Vertices: 15; Edges: 14

[Graph figure: node-and-link layout not reproducible in text. Bold elements mark correspondence with the professor's graph; edge E11* is shown dotted (see the notation explanation above). Vertex names are listed below.]

Focus/Perspectives: Design process, Subject matter knowledge Background: (not included here for anonymity)

1 Complete a design task/5th grade English paragraphs 2 Understand what the task is (instructional unit) 3 Clarification 4 Identify specifics 5 Identify population 6 5th grade English class 7 Disabled students 8 "Norm" 9 Create/design instruction 10 Know subject matter 11 Writing paragraphs 12 Create/design lesson plans 13 Objectives 14 Time 15 How to teach

Student (P4) Structure Summary
Vertices: 25; Edges: 25

[Graph figure: node-and-link layout not reproducible in text. Bold elements mark correspondence with the professor's graph; vertex names are listed below.]

Focus/Perspectives: Specific classroom setting, Subject matter knowledge, Delivery of instruction Background: (not included here for anonymity)

1 Teach writing paragraphs
2 Teachers/staff
3 Classroom characteristics
4 # of students
5 Special needs
6 Normal students
7 Audience
8 Tools
9 Normal
10 Special needs
11 Curriculum
12 Exercises
13 Books
14 Games
15 Media
16 Words
17 Sentences
18 Paragraphs
19 Elements
20 Rules
21 Evaluator title
22 Evaluation process
23 Models
24 Pre/post test
25 Assessment

Student (P5) Structure Summary
Vertices: 15; Edges: 14

[Graph figure: node-and-link layout not reproducible in text. Bold elements mark correspondence with the professor's graph; vertex names are listed below.]

Focus/Perspectives: Design process, General classroom environment Background: (not included here for anonymity)

1 Needs assessment 2 Learner analysis 3 Objectives of the unit 4 Methods and materials for the unit 5 Instructional development 6 Small scale trials and evaluation 7 Revisions 8 Implementation 9 Students with disabilities 10 Student with no disabilities 11 Student with other problems 12 Existing knowledge level 13 Learning ability 14 Instruction methods 15 Materials

Student (P6) Structure Summary
Vertices: 29; Edges: 34

[Graph figure: node-and-link layout not reproducible in text. Bold elements mark correspondence with the professor's graph; vertex names are listed below.]

Focus/Perspectives: Problem context, ID knowledge, Subject matter knowledge, Delivery of instruction Background: (not included here for anonymity)

1 Develop one week educational unit
2 Examine company's background
3 Problem new
4 Problem old
5 Theory/model to design mid school educational unit
6 Assessment guidelines
7 Types of paragraphs for English writing
8 Tools and materials for students use (5th grade)
9 Process (V5)
10 Steps to succeed (V5)
11 Procedures (V6)
12 Feedback (V6)
13 Where to find/resource (V7)
14 Day 1 expository (V7)
15 Day 2 descriptive (V7)
16 Day 3 directions (V7)
17 Day 4 argument and (V7)
18 Day 5 creative (V7)
19 Day 1 rules/steps
20 Day 2 rules/steps
21 Day 3 rules/steps
22 Day 4 rules/steps
23 Day 5 rules/steps
24 Paper (V8)
25 Pencil (V8)
26 Pen (V8)
27 Computer (V8)
28 How to use (V8)
29 Functional steps (V8)

Student (P7) Structure Summary
Vertices: 29; Edges: 32

[Graph figure: node-and-link layout not reproducible in text. Bold elements mark correspondence with the professor's graph; vertex names are listed below.]

Focus/Perspectives: Specific classroom setting, Subject matter knowledge, Delivery of instruction Background: (not included here for anonymity)

1 Teaching 5th grade English paragraphs
2 Disability students
3 Normal students
4 Tools
5 Paragraph format
6 Topic
7 Teacher
8 Male (V2)
9 Female (V2)
10 Physical (V2)
11 Acquired (V2)
12 Birth (V2)
13 Learning (V2)
14 SLD (V2)
15 ADHD (V2)
16 Behavior (V2)
17 Male (V3)
18 Female (V3)
19 Blackboard (V4)
20 Paper/pencil (V4)
21 Computer (V4)
22 Organization (V5)
23 Grammar (V5)
24 Spelling (V5)
25 Punctuation (V5)
26 Report (V6)
27 Essay (V6)
28 Creative (V6)
29 Autobiography (V6)

Student (P8) Structure Summary
Vertices: 11; Edges: 12

[Graph figure: node-and-link layout not reproducible in text. Bold elements mark correspondence with the professor's graph; vertex names are listed below.]

Focus/Perspectives: Specific classroom setting Background: (not included here for anonymity)

1 Things the Instructional Designer needs to know in order to make design task 2 Condition of classroom 3 Equipment in the classroom 4 Classroom size 5 Students' condition 6 Number of students in the classroom 7 Number of students with disabilities 8 Number of students that have problems related to the classroom 9 What disabilities that students have 10 Students' problems related to the classroom 11 Writing paragraphs lesson

Student (P9) Structure Summary
Vertices: 23; Edges: 24

[Graph figure: node-and-link layout not reproducible in text. Bold elements mark correspondence with the professor's graph; vertex names are listed below.]

Focus/Perspectives: Design process, Redesign of existing instruction, Specific classroom setting Background: (not included here for anonymity)

1 Instructional program 2 Analysis 3 Current situation 4 Desired outcome 5 Gap (in the program) SKA 6 Variable 7 Students 8 Environment 9 Time 10 Needs of students 11 Performance of students 12 Age of students 13 Design 14 Design time (V9) 15 Program time (V9) 16 Competency outcomes (V13) 17 Tool (V13) 18 Lesson package (V13) 19 Conditions (V13) 20 Conduct (V1) 21 Evaluate (V1) 22 Validate (V1) 23 Previous evaluation (V2)

Student (P10) Structure Summary
Vertices: 14; Edges: 19

[Graph figure: node-and-link layout not reproducible in text. Bold elements mark correspondence with the professor's graph; vertex names are listed below.]

Focus/Perspectives: Training, Design process, Redesign of existing instruction, Specific classroom setting Background: (not included here for anonymity)

1 Steps/approach to training design 2 Learning support and resources available 3 Understand current trg approach 4 Examine AT alternative tools/method available 5 Understand student profiles 6 Level of student competency and proficiency 7 Student and learning factors 8 No. of teachers 9 Instructive and trg anals (correct reading of writing?) 10 Shortfall and gaps in existing trg 11 Develop trg options/methods 12 Select/review best options 13 Detailed design of course 14 Post implementation review

Student (P11) Structure Summary
Vertices: 21; Edges: 31

[Graph figure: node-and-link layout not reproducible in text. Bold elements mark correspondence with the professor's graph; vertex names are listed below.]

Focus/Perspectives: Specific classroom setting, Design process Background: (not included here for anonymity)

1 Instructional designer 2 Classroom 3 Available tools 4 Class schedule 5 Level of material used 6 Teachers 7 Regular tr 8 Remedial tr 9 Students 10 With L/D 11 Regular students 12 ICT 13 Texts 14 Handouts 15 Learning tasks 16 Lesson plans 17 Assignments 18 Tests 19 Inclusive 20 Special 21 Analyze needs

Student (P12) Structure Summary
Vertices: 17; Edges: 22

[Graph figure: node-and-link layout not reproducible in text. Bold elements mark correspondence with the professor's graph; vertex names are listed below.]

Focus/Perspectives: Specific classroom setting, Delivery of instruction Background: (not included here for anonymity)

1 Teaching disabled 5th graders to write paragraphs 2 ID their current knowledge level 3 Ask questions 4 Evaluate their responses 5 Get a "feel"/vibe from students 6 ID barriers to development 7 Overcome barriers 8 Work around barriers 9 Define attainable level of success 10 Begin teaching 11 Evaluate progress 12 Why failures? 13 Fix shortfalls 14 Continue where successful 15 ID preconceived desired/ideal outcomes 16 Modify desired/ideal outcomes 17 Establish curriculum

Student (P13) Structure Summary
Vertices: 6; Edges: 11

[Graph figure: node-and-link layout not reproducible in text. Bold elements mark correspondence with the professor's graph; vertex names are listed below.]

Focus/Perspectives: General classroom environment Background: (not included here for anonymity)

1 Writing paragraphs 2 Qualified teacher 3 Classroom environment 4 5th grade students 5 Paragraph topic 6 Student grade level

Student (P14) Structure Summary
Vertices: 19; Edges: 18

[Graph figure: node-and-link layout not reproducible in text. Bold elements mark correspondence with the professor's graph; vertex names are listed below.]

Focus/Perspectives: Design process, General classroom environment Background: (not included here for anonymity)

1 Designing the unit 2 Analyzing learners 3 Readiness 4 Diagnosis test 5 Characteristics 6 Interests, favor in writing 7 Motivation 8 Learning style preference 9 Work alone 10 Collaboration 11 Developing lesson plan 12 Set the goals 13 Analyze learning objectives 14 Develop learning procedure 15 Instructional strategies 16 Learning materials 17 Resources 18 Implementation 19 Evaluation

Student (P15) Structure Summary
Vertices: 24; Edges: 23

[Graph figure: node-and-link layout not reproducible in text. Bold elements mark correspondence with the professor's graph; vertex names are listed below.]

Focus/Perspectives: Specific classroom setting Background: (not included here for anonymity)

1 What an inst. Designer needs to know
2 Instructors themselves
3 Teaching materials
4 Classroom setting
5 Students taking the class
6 Size of classroom
7 Accommodating for instruction (classroom)
8 How classes are broken up
9 Environment (classroom)
10 What its like in class
11 Goals of the class
12 Quality provided to all students (teaching materials)
13 Access/ease of use (teaching materials)
14 Knowledge of materials (instructors themselves)
15 Knowledge of special ed students (instructors themselves)
16 Teaching skills (of instructors)
17 Fun factor (of instructor)
18 Communication skills (of instructor)
19 Appropriate level of teaching (by instuctors)
20 Knowledge base (of students taking the class)
21 # of students in a class
22 # and severity of handicap
23 Interest in increasing knowledge (of students)
24 Types of handicaps

Student (P16) Structure Summary
Vertices: 26; Edges: 28

[Graph figure: node-and-link layout not reproducible in text. Bold elements mark correspondence with the professor's graph; vertex names are listed below.]

Focus/Perspectives: Specific classroom setting Background: (not included here for anonymity)

1 One week course
2 Writing class
3 Required knowledge
4 Concept of paragraphs
5 Writing skills
6 Basic writing
7 Express the ideas
8 Classroom environment
9 Blackboard and chalk
10 Computer and projector
11 Instructor
12 Specialist
13 Number per students
14 Learner
15 Disabilities
16 Physical
17 Learning problem
18 Behavior problem
19 Grade
20 Needs
21 Pre-knowledge
22 Contents
23 Goal
24 Subject
25 Learning materials
26 Assessment

Student (P17) Structure Summary
Vertices: 39; Edges: 39

[Graph figure: node-and-link layout not reproducible in text. Bold elements mark correspondence with the professor's graph; vertex names are listed below.]

Focus/Perspectives: Design process, Specific classroom setting Background: (not included here for anonymity)

1 Design a course
2 Task analysis
3 Learner analysis
4 Environment analysis
5 Activity
6 Subject
7 Time
8 Age of learner
9 Writing paragraph
10 English
11 One week educational unit
12 5th grade
13 Curriculum of related subject
14 Curriculum of writing
15 Curriculum of reading
16 Area of the school
17 Number of students in one class
18 Parent's concern on education
19 Equipment which can be used
20 Teacher's computer
21 Personal laptop
22 VIM projector
23 Electronic blackboard
24 Goal statement
25 Need assessment
26 Develop a course
27 Assessment analysts
28 Characteristics of learner
29 Pre-studied level
30 Achievement goal orientation
31 Physical
32 Learning
33 Behavior
34 Age
35 Gender
36 Present level
37 Want to be level
38 Analysis the gap
39 Disability in study

Student (P18) Structure Summary
Vertices: 13; Edges: 12

[Graph figure: node-and-link layout not reproducible in text. Bold elements mark correspondence with the professor's graph; vertex names are listed below.]

Focus/Perspectives: Specific classroom setting Background: (not included here for anonymity)

1 Design 1 week class on writing paragraphs 2 Expectations of stakeholders 3 DOE 4 Parents 5 Teachers 6 Research of previous knowledge 7 Survey students ability level 8 Research previous learning material 9 Know student's special needs 10 Physical 11 Learning 12 Behavior 13 Students (5a)

Student (P19) Structure Summary
Vertices: 23; Edges: 22

[Graph figure: node-and-link layout not reproducible in text. Bold elements mark correspondence with the professor's graph; vertex names are listed below.]

Focus/Perspectives: Specific classroom setting, Design process, Redesign of existing instruction Background: (not included here for anonymity)

1 Write paragraph 2 5th grade 3 English class 4 Tools 5 Process 6 One week 7 Normal students 8 Disable students 9 Potential disable students 10 Physical disable students 11 Learning disable students 12 Behavior disable students 13 Listening 14 Reading 15 Speaking 16 Writing 17 Textbooks 18 Computer 19 References 20 Evaluate the need 21 Evaluate what we have 22 Assess the gap 23 Design new material and repair the old one

Table 4. Student model to professor model comparisons

Student Model to Professor Model Comparisons

Order of G = total vertices*; Size of G = total edges*. The three similarity values in each group are Vertex Degree Values, Vertex Properties, and Edge Properties.

Participant | Order of G | Size of G | Full Model Comparison | Comparison of Common Elements
Professor | 40 | 42 | – | –
Student 1 | 20(4) | 24(0) | 0.03, 0.02, 0 | 0.46, 0.22, 0
Student 2 | 18(3) | 21(0) | 0.02, 0.02, 0 | 0.28, 0.36, 0
Student 3 | 15(3) | 14(1) | 0.02, 0.02, 0 | 0.42, 0.30, 0
Student 4 | 25(2) | 25(0) | 0.01, 0.01, 0 | 0.42, 0.29, 0
Student 5 | 15(0) | 14(0) | 0, 0, 0 | 0, 0, 0
Student 6 | 29(1) | 34(0) | 0.01, 0.00, 0 | 0.71, 0.14, 0
Student 7 | 29(1) | 32(0) | 0.01, 0.00, 0 | 0.50, 0.33, 0
Student 8 | 11(1) | 12(0) | 0.01, 0.01, 0 | 0.43, 0.40, 0
Student 9 | 23(3) | 24(0) | 0.03, 0.02, 0 | 0.58, 0.41, 0
Student 10 | 14(3) | 19(0) | 0.02, 0.02, 0 | 0.42, 0.30, 0
Student 11 | 21(2) | 31(0) | 0.01, 0.02, 0 | 0.18, 0.60, 0
Student 12 | 17(0) | 22(0) | 0, 0, 0 | 0, 0, 0
Student 13 | 6(1) | 11(0) | 0.01, 0.01, 0 | 0.50, 0.33, 0
Student 14 | 19(3) | 18(0) | 0.04, 0.02, 0 | 0.71, 0.30, 0
Student 15 | 24(2) | 23(0) | 0.01, 0.02, 0 | 0.41, 0.75, 0
Student 16 | 26(2) | 28(0) | 0.01, 0.01, 0 | 0.25, 0.40, 0
Student 17 | 39(3) | 39(0) | 0.03, 0.01, 0 | 0.78, 0.24, 0
Student 18 | 13(1) | 12(0) | 0.01, 0.00, 0 | 0.43, 0.14, 0
Student 19 | 23(2) | 22(0) | 0.02, 0.02, 0 | 0.58, 0.46, 0

*Numbers in parentheses represent the number of common elements between the student's graph and the professor's graph.
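For anyone who wants to work with Table 4 programmatically, the sketch below shows one way a row could be held and queried. The 20(4) notation gives the order of a student's graph with the number of elements shared with the professor's graph in parentheses; the similarity formulas themselves are defined in the methodology (Appendix A) and are not reproduced here, so the derived ratio below is an added convenience, not one of the table's metrics.

```python
# A minimal sketch for holding one row of Table 4. The common_vertex_ratio
# helper is an illustrative convenience, not a metric from the table.

from dataclasses import dataclass

@dataclass
class ComparisonRow:
    student: int
    order: int            # total vertices in the student graph ("Order of G")
    common_vertices: int  # vertices shared with the professor's graph
    size: int             # total edges in the student graph ("Size of G")
    common_edges: int     # edges shared with the professor's graph

    def common_vertex_ratio(self) -> float:
        """Share of this student's vertices that match the professor's model."""
        return self.common_vertices / self.order

row = ComparisonRow(student=1, order=20, common_vertices=4, size=24, common_edges=0)
print(f"Student {row.student}: {row.common_vertex_ratio():.0%} vertices in common")  # 20%
```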

Table 5. Common vertices in professor and student models

Common vertices per student (Students 1-19, as given in the table): 4, 3, 3, 2, 0, 1, 1, 1, 3, 3, 2, 1, 0, 3, 2, 2, 3, 1, 2

Professor vertices matched in student models (number of students; matching student vertex numbers):
1. Design a one week instructional unit – 9 students (V1 in each)
2. System characteristics – 1 student (V3)
5. Blueprint: Design the unit – 5 students (V9, V13, V13, V14, V23)
19. Information requirements (identify gaps) – 4 students (V5, V10, V38, V22)
22. Prepare unit plan and lesson plans – 1 student (V12)
23. Prepare specialized materials – 1 student (V16)
29. Target audience – 6 students (V2, V4, V7, V7, V9, V14)
30. Primary client – 8 students (V11, V2, V2, V7, V6, V2, V2, V11)
36. Class syllabus and other class documentation – 2 students (V8, V13)

No student model contained matches for the professor's remaining vertices (3-4, 6-18, 20-21, 24-28, 31-35, 37-40).

Table 6. Summary of backgrounds and student graph perspectives by country

Table 6 has been removed from this dissertation to preserve anonymity of students.

Table 7. Summary of backgrounds and student graph perspectives by experience

Table 7 has been removed from this dissertation to preserve anonymity of students.

REFERENCES

Bertalanffy, L.v. (1968). General system theory: Foundations, development, applications (Rev. ed.). New York: George Braziller.

Brachman, R. J., Levesque, H. J., & Reiter, R. (Eds.). (1992). Knowledge representation. Cambridge, MA: MIT Press.

Brewer, W. F. (2001). Models in science and mental models in scientists and nonscientists. Mind & Society, 2(2), 33-48.

Buckley, B. C. (2000). Interactive multimedia and model-based learning in biology. International Journal of Science Education, 22(9), 895-935.

Buckley, B. C., Gobert, J. D., & Christie, M. T. (2002, April). Model-based teaching and learning with hypermodels: What do they learn? How do they learn? How do we know? Paper presented as part of the symposium Hypermodel Research in Theory and Practice, American Educational Research Association, New Orleans, LA.

Burkhardt, J., Détienne, F., & Wiedenbeck, S. (1997). Mental representations constructed by experts and novices in object-oriented program comprehension. Paper presented at Interact 1997, Sydney, Australia.

Carley, K., & Palmquist, M. (1992). Extracting, representing, and analyzing mental models. Social Forces, 70(3), 601-636.

Chartrand, G. (1977). Introductory graph theory. New York: Dover.

Chi, M. T. H., & Roscoe, R. D. (2002). The processes and challenges of conceptual change. In L. Mason & M. Limón (Eds.), Reconsidering conceptual change: Issues in theory and practice (pp. 3-27). Dordrecht, The Netherlands: Kluwer.

Clancey, W. J. (1988). The role of qualitative models in instruction. In J. Self (Ed.), Artificial Intelligence and Human Learning: Intelligent Computer-Aided Instruction (pp. 49-68). New York: Chapman and Hall.

Clement, J. (2000). Model based learning as a key research area for science education. International Journal of Science Education, 22(9), 1041-1053.

Clement, J., & Oviedo, M. C. N. (2003). Abduction and analogy in scientific model construction. Proceedings of NARST, Philadelphia, PA.

Clement, J. J., & Steinberg, M. S. (2002). Step-wise evolution of mental models of electric circuits: A —learning-aloud“ case study. The Journal of the Learning Sciences, 11(4), 389-452.

Davison, G. C., Vogel, R. S., & Cuffman, S. G. (1997). Think-aloud approaches to cognitive assessment and the articulated thoughts in simulated situations paradigm. Journal of Consulting and Clinical Psychology, 65(6), 950-958.

Dearholt, D. W., & Schvaneveldt, R. W. (1990). Properties of Pathfinder networks. In R. W. Schvaneveldt (Ed.), Pathfinder associative networks: Studies in knowledge organization (pp. 3-30). Norwood, NJ: Ablex Publishing Corporation.

Dessinger, J. C., & Moseley, J. L. (2006). The full scoop on full-scope evaluation. In J. A. Pershing (Ed.), Handbook of human performance technology (3rd ed.) (pp. 312-330). San Francisco, CA: Wiley.

Dick, W., Carey, L., & Carey, J. O. (2005). The systematic design of instruction (6th ed.). Boston, MA: Pearson.

Diestel, R. (2000). Graph theory. Electronic edition 2000. New York: Springer-Verlag. Retrieved June 28, 2005 from http://www.emis.ams.org/monographs/Diestel/en/GraphTheoryII.pdf

Duit, R., & Treagust, D. F. (2003). Conceptual change: A powerful framework for improving science teaching and learning. International Journal of Science Education, 25(6), 671-688.

Durso, F. T., & Coggins, K. A. (1990). Graphs in the social and psychological sciences: Empirical contributions of Pathfinder. In R. W. Schvaneveldt (Ed.), Pathfinder associative networks: Studies in knowledge organization (pp. 31-51). Norwood, NJ: Ablex Publishing Corporation.

Forbus, K. D., & Gentner, D. (1997). Qualitative mental models: Simulations or memories? Proceedings of the Eleventh International Workshop on Qualitative Reasoning, Cortona, Italy. Retrieved February 3, 2006 from http://www.qrg.northwestern.edu/papers/Files/QMM_QR97.pdf

Ford, D. N., & Sterman, J. D. (1998). Expert knowledge elicitation to improve formal and mental models. System Dynamics Review, 14(4), 309-340.

Gagné, R. M., Wager, W. W., Golas, K. C., & Keller, J. M. (2005). Principles of instructional design (5th ed.). Belmont, CA: Wadsworth/Thomson.

Gammack, J. G. (1990). Expert conceptual structure: The stability of Pathfinder representations. In R. W. Schvaneveldt (Ed.), Pathfinder associative networks: Studies in knowledge organization (pp. 213-226). Norwood, NJ: Ablex Publishing Corporation.

Gentner, D., & Stevens, A. (Eds.). (1983). Mental models. Hillsdale, NJ: Erlbaum.

Glaser, R. (1990). Toward new models for assessment. International Journal of Educational Research, 14(5), 475-483.

Gobert, J. D. (2000). A typology of causal models for plate tectonics: Inferential power and barriers to understanding. International Journal of Science Education, 22(9), 937-977.

Gobert, J. D., & Buckley, B. C. (2000). Introduction to model-based teaching and learning in science education. International Journal of Science Education, 22(9), 891-894.

Goldsmith, T. E., Johnson, P. J., & Acton, W. H. (1991). Assessing structural knowledge. Journal of Educational Psychology, 83(1), 88-96.

Gonzalvo, P., Cañas, J. J., & Bajo, M.T. (1994). Structural representations in knowledge acquisition. Journal of Educational Psychology, 86(4), 601-616.

Greca, I. M., & Moreira, M. A. (2000). Mental models, conceptual models, and modelling. International Journal of Science Education, 22(1), 1-11.

Gustafson, K. L., & Branch, R. M. (2002). What is instructional design? In R. A. Reiser & J. V. Dempsey (Eds.), Trends and issues in instructional design and technology (pp. 16-25). Columbus, OH: Merrill/Prentice-Hall.

Harrison, A. G., & Treagust, D. F. (2000). A typology of school science models. International Journal of Science Education, 22(9), 1011-1026.

Held, C., Knauff, M., & Vosgerau, G. (Eds.). (2006). Mental models and the mind: Current developments in cognitive psychology, neuroscience, and philosophy of mind. Amsterdam: Elsevier.

IBSTPI Instructional Design Competencies (2000). Available online at http://www.ibstpi.org/Competencies/instruct_design_competencies.htm

Ifenthaler, D. (2007). Relational, structural, and semantic analysis of graphical representations and concept maps. Paper presented at the Annual Convention of the AECT, Anaheim, CA.

Ifenthaler, D., & Seel, N. M. (2005). The measurement of change: Learning-dependent progression of mental models. Technology, Instruction, Cognition and Learning, 2(4), 317-336.

Johnson-Laird, P. N. (1983). Mental models: Towards a cognitive science of language, inference, and consciousness. Cambridge, MA: Harvard University Press.

Johnson-Laird, P. N. (1994). Mental models, deductive reasoning, and the brain. In M. S. Gazzaniga (Ed.), The cognitive neurosciences (pp. 999-1008). Cambridge, MA: MIT Press.


Jonassen, D. H. (2004). Learning to solve problems: An instructional design guide. San Francisco, CA: John Wiley & Sons.

Jonassen, D. H., Beissner, K., & Yacci, M. (1993). Structural knowledge: Techniques for representing, conveying, and acquiring structural knowledge. Hillsdale, NJ: Erlbaum.

Justi, R., & Gilbert, J. (2000). History and philosophy of science through models: Some challenges in the case of "the atom." International Journal of Science Education, 22(9), 993-1009.

Kellogg, W. A., & Breen, T. J. (1990). Mental models and user performance. In R. W. Schvaneveldt (Ed.), Pathfinder associative networks: Studies in knowledge organization (pp. 179-195). Norwood, NJ: Ablex Publishing Corporation.

Kolkman, M. J., Kok, M., & van der Veen, A. (2005). Mental model mapping as a new tool to analyse the use of information in decision-making in integrated water management. Physics and Chemistry of the Earth, 30(4-5), 317-332.

Linn, M. (2003). Technology and science education: Starting points, research programs, and trends. International Journal of Science Education, 25(6), 727-758.

Mayer, R. E. (1989). Models for understanding. Review of Educational Research, 59(1), 43-64.

Mayer, R. E. (2002). Understanding conceptual change: A commentary. In L. Mason & M. Limón (Eds.), Reconsidering conceptual change: Issues in theory and practice (pp. 101-111). Dordrecht, The Netherlands: Kluwer.

Mayer, R. E., Moreno, R., Boire, M., & Vagge, S. (1999). Maximizing constructivist learning from multimedia communications by minimizing cognitive load. Journal of Educational Psychology, 91(4), 638-643.

Ogan-Bekiroglu, F. (2007). Effects of model-based teaching on pre-service physics teachers‘ conceptions of the moon, moon phases, and other lunar phenomena. International Journal of Science Education, 29(5), 565-593.

Pirnay-Dummer, P. (2007). Model inspection trace of concepts and relations. A heuristic approach to language-oriented model assessment. Paper presented at the AERA 2007, Chicago, IL.

Pirnay-Dummer, P., Ifenthaler, D., & Spector, J. M. (2008). Highly integrated model assessment technology and tools.

Reinders, D., & Treagust, D. F. (2003). Conceptual change: A powerful framework for improving science teaching and learning. International Journal of Science Education, 25(6), 671-688.


Reiser, R. A. (2002). What field did you say you were in? Defining and naming our field. In R. A. Reiser & J. V. Dempsey (Eds.), Trends and issues in instructional design and technology (pp. 5-15). Columbus, OH: Merrill/Prentice-Hall.

Richardson, G. P. (1996). Problems for the future of system dynamics. System Dynamics Review, 12(2), 141-157. Retrieved August 2, 2008 from http://www3.interscience.wiley.com.proxy.lib.fsu.edu/cgi-bin/fulltext/20745/PDFSTART

Schvaneveldt, R. W. (Ed.). (1990). Pathfinder associative networks: Studies in knowledge organization. Norwood, NJ: Ablex Publishing Corporation.

Schwaninger, M. (2004). Methodologies in conflict: Achieving synergies between system dynamics and organizational cybernetics. Systems Research and Behavioral Science, 21, 411-431.

Seel, N. M. (1999). Educational diagnosis of mental models: Assessment problems and technology-based solutions. Journal of Structural Learning and Intelligent Systems, 14(2), 153-185.

Seel, N. M. (2003). Model-centered learning and instruction. Technology, Instruction, Cognition and Learning, 1(1), 59-85.

Seel, N. M. (2006). Mental models in learning situations. In C. Held, M. Knauff, & G. Vosgerau (Eds.), Mental models and the mind: Current developments in cognitive psychology, neuroscience, and philosophy of mind. Amsterdam: Elsevier.

Seel, N. M., Al-Diban, S., & Blumschein, P. (2000). Mental models & instructional planning. In J. M. Spector & T. M. Anderson (Eds.), Integrated and holistic perspectives on learning, instruction and technology (pp. 129-158). Dordrecht, The Netherlands: Kluwer Academic Publishers.

Seel, N. M., & Dinter, F. R. (1995). Instruction and mental model progression: Learner- dependent effects of teaching strategies on knowledge acquisition and analogical transfer. Educational Research and Evaluation, 1(1), 4-35.

Seels, B., & Glasgow, Z. (1998). Making instructional design decisions (2nd ed.). Upper Saddle River, NJ: Merrill.

Sloman, S. A. (2005). Causal models: How people think about the world and its alternatives. Oxford: Oxford University Press.

Smith, L. J. (2005). The application of graph theory for the comparison of mental models. Unpublished manuscript. Accessible at http://agil-ed.com/GTandMentalModels.doc

Smith, L. J. (2006). Application of graph theory for the comparison of model representations: A prototype study using descriptions of instructional design. Unpublished manuscript.

Snow, R. E. (1990). New approaches to cognitive and conative assessment in education. International Journal of Educational Research, 14(5), 455-473.

Snyder, J. L. (2000). An investigation of the knowledge structures of experts, intermediates and novices in physics. International Journal of Science Education, 22(9), 979-992.

Spector, J. M., Dennen, V. P., & Koszalka, T. A. (2005). Causal maps, mental models and assessing acquisition of expertise. Paper presented at the Technology, Instruction, Cognition & Learning Special Interest Group, AERA 2005, Montreal, Quebec, Canada.

Spector, J. M., & Koszalka, T. A. (2004). The DEEP methodology for assessing learning in complex domains (Final report to the National Science Foundation Evaluative Research and Evaluation Capacity Building). Syracuse, NY: Syracuse University.

Taylor, I., Barker, M., & Jones, A. (2003). Promoting mental model building in astronomy education. International Journal of Science Education, 25(10), 1205-1225.

van Someren, M. W., Barnard, Y. F., & Sandberg, J. A. C. (1994). The think aloud method: A practical guide to modelling cognitive processes. London: Academic Press.

Willemain, T. R. (1995). Model formulation: What experts think about and when. Operations Research, 43(6), 916-932.

Winn, W. (1996). Cognitive perspectives in psychology. In D. H. Jonassen (Ed.), Handbook of research for educational communications and technology (2nd ed., pp. 79-112). Mahwah, NJ: Erlbaum.

BIOGRAPHICAL SKETCH

Linda J. Smith
Curriculum Vitae

Education

Ph.D. Candidate, Florida State University
Major: Instructional Systems; Minor: Cognitive Science

Certificate: Program Evaluation

M.D.E., University of Maryland University College
Master of Distance Education, degree received May, 2004

Certificates: Foundations of Distance Education; Teaching at a Distance; Distance Education and Technology

M.A., Florida State University
Specialization: English; degree received June, 1971

B.A., Florida State University
Major: Humanities; Concentrations: English, American Studies
Degree received June, 1969

Additional post-graduate work: George Washington University, Information Systems Technology (9 units), 1974-1975

Training

Florida State University (2004-present)
Mentoring: Mentor training workshop (December 2005)

Grant writing: Grant writing workshop (June 2004)

U.S. Government (1971-2002)
Management: Management courses in leadership, team building, and general management skills
Systems: Courses in computer programming; system analysis, design and development; and project management.

Work Experience

Research Assistant, January – April 2008
College of Education, Florida State University, Tallahassee, Florida
Primary duties:
- Assisted instructor in delivery of course EDG2701 Teaching Diverse Populations, including teaching some class sessions
- Conducted a program evaluation for the exceptional student education teacher preparation program and prepared a report highlighting findings regarding course sequencing and relationships to teacher preparation standards.

Research Assistant, August – December 2007
College of Education, Florida State University, Tallahassee, Florida
Primary duties: Assisted instructor with design and delivery of a blended learning version of EEX5245 Introduction to Special Education Technology

Research Assistant, May – August 2007
College of Education, Florida State University, Tallahassee, Florida
Primary duties: Assisted instructor with design and delivery of an online version of EDE 5931 Supervision of Associate Teaching

Teaching Assistant, January – April 2007
College of Education, Florida State University, Tallahassee, Florida
Primary duties: Assisted instructor with design and delivery of an online version of EDE 5931 Supervision of Associate Teaching

Teaching Assistant, August – December 2006
College of Education, Florida State University, Tallahassee, Florida
Primary duties: Assisted instructor with design and delivery of an online version of EDE 5931 Supervision of Associate Teaching

Newsletter Editing, Fall Issue 2006
Co-editor of Instructional Systems Newsletter for 2006, Instructional Systems, Florida State University, Tallahassee, Florida
Primary duties: Designed newsletter format, wrote material, edited submitted material, handled administrative work for publication

Teaching Assistant, May – August 2006
Instructional Systems, Florida State University, Tallahassee, Florida
Primary duties: Assisted instructor with delivery of an online version of EME 6691 Performance Systems Analysis

Committee Work, January – April 2006
Instructional Systems, Florida State University, Tallahassee, Florida
Primary duties: Member of alumni relations committee; Instructional Systems newsletter coordination

Teaching Assistant, January – April 2006
Instructional Systems, Florida State University, Tallahassee, Florida
Primary duties: Assisted instructor with design and delivery of an online version of EME 5608 Trends and Issues in Instructional Design and Technology

Graduate Assistant, August – December 2005
College of Education, Florida State University, Tallahassee, Florida
Primary duties: Instructional design for an online version of course EME 5608 Trends and Issues in Instructional Design and Technology

Tutor, September – December 2005
Dean of Students, Disability Students Resource Center, Florida State University
Primary duties: Provided tutorial assistance for a deaf student in course EDF 5481 – Educational Research Methods

Research Assistant, May 2004 – April 2005
Learning Systems Institute, Florida State University, Tallahassee, Florida
Primary duties:
- Designed, conducted, and reported on a study to revalidate online learner-instructor interaction guidelines for the Naval Education and Training Command
- Performed research for literature reviews and project reports for the Naval Education and Training Command

Teaching Assistant, January 2004 – May 2004
Master of Distance Education Program, University of Maryland University College, Adelphi, Maryland
Primary duties:
- Assisted the program chairman in teaching two online graduate courses (OMDE602 - Distance Education Systems, OMDE622 - The Business of Distance Education)
- Facilitated online discussions, designed assessments, evaluated and provided feedback on student assignments

Career Highlights, July 1971 – January 2004
Social Security Administration (SSA), various positions, Baltimore, Maryland

Office Director, Office of Data Analysis, Office of Program Development and Research
Manager of a select group of senior analysts responsible for trend analysis, longitudinal database development, policy impact studies, and other special projects relating to program development for SSA. (2002-2004)

CDG Advisor
Special assignment for the U.S. Office of Personnel Management (2000-2002): Career Development Group Advisor for 18 Presidential Management Interns of the Class of 2000

Senior Staff Project Manager
Provided leadership and technical expertise to agency inter-component teams for business process redesign tests and evaluations of changes to SSA's disability claims process to determine impacts on administrative costs (measured in $Millions) and fund costs (measured in $Billions). Presented project data to SSA executives and to State administrators at national conferences. This work received awards at the highest levels in SSA: one Commissioner and three Deputy Commissioner level citations. (1995-2002)

Branch Chief
Directed a staff of 18 analysts and technicians. Provided ongoing quality assurance information for disability program managers and SSA reports to Congress, and designed and conducted special studies for the disability claims process to monitor quality and make recommendations for improvement. (1993-1995)

Systems Work (as a systems specialist and in other positions)
Designed and developed numerous management information systems using a variety of computer platforms and languages. Most major projects were national in scope (1971-2002). Examples:

- Designed and produced a national Web site providing process redesign project data to SSA and State partners.

- Designed and developed a generalized PC-based database management system and support utilities to enable personal computers to be used for mainframe-style data processing of large files.

- Designed and developed a national distributed case control and database management system for quality assurance reviews and case processing for State Disability Determination Services disability claims.

- Performed research and development work on using computers to aid writers in assessing and improving the readability of texts.

Instructional Design and Teaching (1981-2001)
Courses designed and taught:
- Classroom, satellite broadcast, interactive video training
- Effective Briefing Design for paper-based and PC-based briefings. Course materials include a textbook and PowerPoint slide show for classroom presentations.
- Analysis for Quality Assurance and Process Evaluations. Course materials include study text and exercises.
- Courses in database concepts and systems use.

Teaching Assistant, August 1969 – June 1971
Department of English, Florida State University, Tallahassee, Florida
Primary duties:
- Taught freshman composition courses (including honors sections)
- Faculty advisor for basic studies students

Academic Honors

Invited to join Golden Key International Honour Society, October, 2008.

Finalist, PacifiCorp Instructional Design Competition, AECT Conference, October, 2006.

Recipient of Scholarship, Instructional Systems, Florida State University, Tallahassee, Florida, May, 2006.

Recipient of Scholarship, Instructional Systems, Florida State University, Tallahassee, Florida, January, 2006.

Recipient of a grant provided by Volkswagen to collaborate on a case study for Volkswagen's new corporate university and participate in the Third European Distance Education Network (EDEN) Research Workshop in Oldenburg, Germany, March, 2004.

Phi Kappa Phi Honor Society, inducted 2004.

Offered Graduate Fellowship, Department of Humanities, Florida State University, Tallahassee, Florida, June, 1969.

Lambda Iota Tau, Literary Honor Society, inducted 1969.

Journal Article

Dennen, V. P., Darabi, A. A., & Smith, L. J. (2007). Instructor-learner interaction in online courses: The relative perceived importance of particular instructor actions on performance and satisfaction. Distance Education, 28(1), 65-79.

Book Chapters

Smith, L. J., & Drago, K. (2004). Learner support in workplace training. In J. Brindley, C. Walti, & O. Zawacki-Richter (Eds.), Learner Support in Open, Distance, and Online Learning Environments (pp. 193-201). Oldenburg, Germany: Bibliotheks- und Informationssystem der Universität Oldenburg.

Smith, L. J. (2003). Assessing student needs in an online graduate program. In U. Bernath & E. Rubin (Eds.), Reflections on Teaching and Learning in an Online Master Program: A Case Study (pp. 255-265). Oldenburg, Germany: Bibliotheks- und Informationssystem der Universität Oldenburg.

Note: The above book chapters are required reading in the Master of Distance Education course OMDE608, Student Support in Distance Education and Training, University of Maryland University College.

Presentations

Smith, L. (2006, October). Finding a new lens: What "New Science" theories can offer for distance education research. Paper presented at the Fourth European Distance Education Network (EDEN) Research Workshop, Barcelona, Spain.

Smith, L. (2005, November). The view from both sides of the modem: Applying learning and cognition theories. Presentation given at the Eleventh Sloan-C International Conference on Asynchronous Learning Networks (ALN), Orlando, Florida.

Smith, L. (2005, June). A typology of student support in distance learning. Paper presented at the Distance Learning Administration Conference 2005, Jekyll Island, Georgia.

Smith, L. (2005, June). A needs assessment strategy for distance education students. Paper presented at the Distance Learning Administration Conference 2005, Jekyll Island, Georgia.

Sievert, J., & Smith, L. (2005, June). Three levels of need: A framework for pre-assessing online learner readiness. Paper presented at the Distance Learning Administration Conference 2005, Jekyll Island, Georgia.

Smith, L., & Darabi, A. (2005, March). What is most important to students in e-Learning? Paper presented at the 1st Southeastern Conference in Instructional Design and Technology, Mobile, Alabama.

Darabi, A., Hassler, L., & Smith, L. (2005, March). Guidelines for learner-instructor interaction in online learning environment: Comparison of perspectives from either side of the line. Paper presented by A. Darabi at the International Conference on Methods and Technologies for Learning, Palermo, Italy.

Smith, L. (2005, March). The online student as a learning system: Implications of systems, chaos, and learning theories for instructional design. Paper presented at the 1st Southeastern Conference in Instructional Design and Technology, Mobile, Alabama.

Smith, L. J. (2004, March). Preparing students to be collaborative learners in distance learning and training at Volkswagen AutoUni. In U. Bernath & A. Szücs (Eds.), Supporting the learner in distance education and e-learning, Proceedings of the Third EDEN Research Workshop (pp. 416-422). Oldenburg, Germany: Bibliotheks- und Informationssystem der Universität Oldenburg.

Smith, L., Blaschke, L., Dudink, G., Fox, B., Schuster, C., & Templeton, C. (2004, March). In search of the ideal classroom in a blended learning environment: A case study for Volkswagen AutoUni. Case study presented by L. Smith at the Third European Distance Education Network (EDEN) Research Workshop, Oldenburg, Germany.

Smith, L. (2003, November). Recommendations for developing collaborative learning skills for online students. Presentation delivered at the Ninth Sloan-C International Conference on Asynchronous Learning Networks (ALN), Orlando, Florida.

Smith, L. (2003, April). Collaborative learning skills for online students. Presentation delivered at the Maryland Distance Learning Association Spring Conference, Rocky Gap, Maryland.

Technical Report

[Smith, L. J.]. (2004). Revalidation of IDS guidelines for learner-instructor interaction in the Navy Integrated Learning Environment (Naval Education and Training Command Contract Number: N00140-03-Q-2897). Tallahassee, FL: Florida State University, Learning Systems Institute.

Other Professional Activities

Case study for Volkswagen, Wolfsburg, Germany. Member of a research team consisting of six Master of Distance Education program students and alumni from University of Maryland University College and two Volkswagen staff. The team assignment was to design the ideal classroom for Volkswagen's new corporate university, AutoUni, which will offer courses in a combination of distance and face-to-face environments. The case study was presented at the Third EDEN Research Workshop, Oldenburg, Germany, March 2004.

Participation in Professional Associations

Association for Educational Communications and Technology

United States Distance Learning Association

Instructional Systems Student Association, Vice President for Distance Students (2005), Florida State University, Tallahassee, Florida.
