
UCLA Publications

Title: The user's mental model of an information retrieval system: An experiment on a prototype online catalog
Permalink: https://escholarship.org/uc/item/2k3386nz
Journal: International Journal of Human-Computer Studies, 51(2)
ISSN: 1071-5819
Author: Borgman, Christine L.
Publication Date: 1999
DOI: 10.1006/ijhc.1985.0318
Peer reviewed

eScholarship.org, powered by the California Digital Library, University of California

THE USER'S MENTAL MODEL OF AN INFORMATION RETRIEVAL SYSTEM

Christine L. Borgman
Graduate School of Library and Information Science
University of California, Los Angeles

Permission to copy without fee all or part of this material is granted provided that the copies are not made or distributed for direct commercial advantage, the ACM copyright notice and the title of the publication and its date appear, and notice is given that copying is by permission of the Association for Computing Machinery. To copy otherwise, or to republish, requires a fee and/or specific permission.

© 1985 ACM 0-89791-159-8/85/006/0268 $00.75

ABSTRACT

An empirical study was performed to train naive subjects in the use of a prototype Boolean logic-based information retrieval system on a bibliographic database. Subjects were undergraduates with little or no prior computing experience. Subjects trained with a conceptual model of the system performed better than subjects trained with procedural instructions, but only on complex, problem-solving tasks. Performance was equal on simple tasks. Differences in patterns of interaction with the system (based on a stochastic process model) showed parallel results. Most subjects were able to articulate some description of the system's operation, but few articulated a model similar to the card catalog analogy provided in training. Eleven of 43 subjects were unable to achieve minimal competency in system use. The failure rate was equal between training conditions and genders; the only differences found between those passing and failing the benchmark test were in academic major and in frequency of library use.

INTRODUCTION

In the search to understand how a naive user learns to comprehend, reason about, and utilize an interactive computer system, a number of researchers have begun to explore the nature of the user's mental model of a system. Among the claims are that a mental model is useful for determining methods of interaction [1,2], problem solving [2,3], and debugging errors [4]; that model-based training is superior to procedural training [2,5,6]; that users build models spontaneously, in spite of training [1,7]; that incorrect models lead to problems in interaction [4,7]; and that interface design should be based on a mental model [8,9].

Not surprisingly, these authors use a variety of definitions for "mental model," and the term "conceptual model" is often used with the same meaning. Young [10], for example, was able to identify eight different uses of the term "conceptual model" in the recent literature. This author prefers the distinction made by Norman [7]: a conceptual model is a model presented to the user, usually by a designer, researcher, or trainer, which is intended to convey the workings of the system in a manner that the user can understand. A mental model is a model of the system that the user builds in his or her mind. The user's mental model may be based on the conceptual model provided, but is probably not identical to it.

The first research comparing conceptual models to procedural instructions for training sought only to show that the conceptual training was superior [6,11]. Other recent research [1,2] has studied the interaction between training conditions and tasks, finding that model-based training is more beneficial for complex or problem-solving tasks.

The research on mental models and training has been concentrated in the domains of text editing [11,12] and calculators [1,2,4,10]; no such research has yet been done in information retrieval. Information retrieval is an interesting domain, as it is now undergoing a shift in user population. In the last ten years, a significant population of highly-trained searchers who act as intermediaries for end users on commercial systems has developed. Although end users have been reluctant to use the commercial systems, libraries are rapidly replacing their card catalogs with online catalogs intended for direct patron use. The online catalogs are typically simpler to use and have a more familiar record structure, but still have many of the difficulties associated with the use of a complex interactive system. The result is a population of naive, minimally-trained, and infrequent users of information retrieval systems [13]. The need for an efficient form of training for this population is very great, and we chose it as a domain to test the advantages of model-based training.

EXPERIMENTAL METHOD

The experiment was structured as a two-by-two design, with two training conditions (model and procedural) and two genders. All subjects were undergraduates at Stanford University with two or fewer programming courses and minimal, if any, additional computer experience.

We performed the experiment on a prototype Boolean logic-based online catalog mounted on a microcomputer with online monitoring capabilities. Two bibliographic databases were mounted: a training database consisting of 50 hand-selected records on the topic of "animals" and a larger database of about 6,000 records systematically sampled from the 10-million record database of the OCLC Online Computer Library Center.

Subjects in each training condition received three training documents: an introductory narrative, a set of annotated examples of system operation, and a table of searchable fields.

The introductory narrative provided to the model group described the system using an analogical model of the card catalog. The instructions first explained the structure of a divided (author/title/subject) card catalog and then explained the system structure in terms of the ways it was similar to a card catalog and the ways in which it was different. Boolean logic was described in terms of sets of catalog cards, showing sample sets and the resulting sets after specified Boolean combinations.

The narrative introduction for the procedural group consisted of background information on information retrieval that is commonly given in system manuals. The Boolean operators were defined only by single-sentence statements.

The examples provided were the same in each condition, but the annotations for each reflected the differences in the introductory materials. The list of searchable fields (16 of 25 fields were searchable) was also identical and gave examples of the search elements for each field.

The training tasks used for the benchmark test were all classified as simple tasks, requiring the use of only one index and no more than one Boolean operator. The experiment consisted of five simple and ten complex tasks, the latter requiring two or more indexes and one or more Boolean operators. All tasks were presented as narrative library reference questions and were designed to be within the scope of questions that might be asked by undergraduates in performing course assignments.

Subjects were given the instructional materials to read and then performed the benchmark test, which consisted of completing 14 simple tasks on the small database in less than 30 minutes. The test was based on pilot test findings that those who took longest to complete the training tasks were least able to learn to use the system (r=-0.83, p<.05). If the subject passed the benchmark test, he or she was interviewed briefly, given the experimental tasks to perform, and then asked to perform one additional search while talking aloud for the experimenter. Subjects were interviewed again after completing the experiment.

RESULTS

Due to a high failure rate on the benchmark test (11 of 43, or 26%), we were able to gather a valid dataset of only 28 cases. The difference in time required to complete the benchmark test was significant (p<0.0001), with those failing averaging 39.2 minutes and those passing averaging 18.2 minutes. Subjects failed equally in the two training conditions and by gender.

Subjects who passed the benchmark test tended to be from science and engineering majors rather than social science and humanities (p<0.0001), and were less frequent visitors to the library (average 8.0 visits per month vs. 18.4 visits for those who failed). Major and library use were not correlated.

In task performance, we found no difference between training conditions on number of simple tasks correct (p>0.05). The difference on number of complex tasks correct was

If the subjects were able to describe the system's operation at all, it was most likely in terms of an abstract model bearing little resemblance to a card catalog analogy. Of 28 subjects, 15 (5 model condition, 10 procedural) gave some form of abstract model; four (3 model, 1 procedural) articulated a card catalog-based model; only one subject (procedural condition) articulated a model based on another metaphor (robots retrieving sheets of paper from bins); and eight subjects (6 model, 2 procedural) were unable to describe the system in any model-based manner.

Only minor differences between genders were found. Men scored higher than women (p<0.05) on the index of describing the system, although gender explained only 14% of the variance in the model index on a linear regression. Men were found to make more errors on simple tasks than women (p<0.05), but the difference was not significant for errors on complex tasks.
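The card-catalog account of Boolean logic used in the model group's training, where each index term corresponds to a set of catalog cards and operators combine those sets, maps directly onto set operations. The following is a minimal sketch of that idea; all index names, terms, and record IDs are hypothetical illustrations, not data from the study's databases.

```python
# Hypothetical indexes: each term maps to the set of matching record IDs,
# like a drawer of catalog cards filed under that heading.
subject_index = {
    "whales": {1, 4, 7},
    "dolphins": {2, 4, 9},
}
author_index = {
    "melville": {1, 3},
}

# AND: records appearing under both terms (set intersection).
both = subject_index["whales"] & subject_index["dolphins"]

# OR: records appearing under either term (set union).
either = subject_index["whales"] | subject_index["dolphins"]

# NOT: records under the first term but not the second (set difference).
not_melville = subject_index["whales"] - author_index["melville"]

print(both)          # {4}
print(either)        # {1, 2, 4, 7, 9}
print(not_melville)  # {4, 7}
```

Under this analogy, a "simple" task in the experiment touches one index with at most one operator, while a "complex" task combines results from two or more indexes.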