INFORMATION TO USERS

This manuscript has been reproduced from the microfilm master. UMI films the text directly from the original or copy submitted. Thus, some thesis and dissertation copies are in typewriter face, while others may be from any type of computer printer.

The quality of this reproduction is dependent upon the quality of the copy submitted. Broken or indistinct print, colored or poor quality illustrations and photographs, print bleedthrough, substandard margins, and improper alignment can adversely affect reproduction.

In the unlikely event that the author did not send UMI a complete manuscript and there are missing pages, these will be noted. Also, if unauthorized copyright material had to be removed, a note will indicate the deletion.

Oversize materials (e.g., maps, drawings, charts) are reproduced by sectioning the original, beginning at the upper left-hand corner and continuing from left to right in equal sections with small overlaps. Each original is also photographed in one exposure and is included in reduced form at the back of the book.

Photographs included in the original manuscript have been reproduced xerographically in this copy. Higher quality 6" x 9" black and white photographic prints are available for any photographs or illustrations appearing in this copy for an additional charge. Contact UMI directly to order.

UMI
A Bell & Howell Information Company
300 North Zeeb Road, Ann Arbor MI 48106-1346 USA
313/761-4700  800/521-0600

A FRAMEWORK FOR FORMAL SPECIFICATION OF THE CARTOGRAPHIC USER INTERFACE

DISSERTATION

Presented in Partial Fulfillment of the Requirements for the Degree Doctor of Philosophy in the Graduate School of The Ohio State University

By

Alan Kirk Edmonds, B.G.S., M.A.

*****

The Ohio State University 1997

Dissertation Committee:

Professor Harold Moellering, Adviser
Professor Morton O'Kelly
Professor Alan Saalfeld

Approved by:

Adviser
Department of Geography

UMI Number: 9731616

UMI MicroForm 9731616 Copyright 1997, by UMI Company. All rights reserved.

This microform edition is protected against unauthorized copying under Title 17, United States Code.

UMI
300 North Zeeb Road
Ann Arbor, MI 48103

© Copyright by Alan Kirk Edmonds 1997

ABSTRACT

This research combines elements from similar models in cartographic communication theory and human-computer interaction to develop a model for cartographic interaction. The model developed includes components from Nielsen's virtual protocol model of human-computer interaction. The use of the concepts of surface and deep structure in this research is adopted from Nyerges' cartographic data structure research and the surface interaction paradigm of Took. These concepts are represented at various levels of interaction by BNF production rules modified according to Shneiderman's multi-party grammar model. Other levels of specification draw on the transformations of Moellering's real and virtual maps model.

The model developed using the above concepts is used to specify and compare five different cartographic computer systems that use three different styles of interaction. In addition, to test the use of this model for implementing systems, the specifications for one of the computer mapping systems are used to implement a graphical user interface. The results of this research show that:

• Modification of Backus-Naur Form production rules for graphical interaction and deep and surface structure can be successfully applied to the specification of spatial information system user interfaces;

• Such specification can allow the comparison of systems based on their use of graphical interaction, in terms of virtual devices;

• Examining such production rules can give insight into the system concepts, i.e. the surface interaction with the user in terms of text and graphics; and

• The complexity of interaction and ease of use of systems can be described by examining the length and number of graphical interactions in the production rules.

To my mother, Mary Jean Edmonds, my grandmother, Fae Rogers, and my great aunt, Irene Edmonds, for their dedication to education throughout the generations.

ACKNOWLEDGMENTS

I wish to thank my adviser, Harold Moellering, for his support and encouragement and for his patience in guiding me through this long process.

I also thank the members of my dissertation committee for their guidance in making this a better document.

I am very grateful for my wife Cheryl's support throughout these years.

I would also like to thank my many friends and colleagues throughout the university, specifically those at University Technology Services, the Department of Civil and Environmental Engineering and Geodetic Science, and the Department of Geography.

VITA

September 16, 1958 ...... Born - Lawrence, Kansas, USA.

May, 1980 ...... B.G.S. in Geography, The University of Kansas, Lawrence, KS, USA.

May, 1980 ...... Commissioned 2nd Lieutenant, United States Marine Corps.

May, 1982 ...... Promoted to 1st Lieutenant, United States Marine Corps.

September, 1985-August, 1987 ...... Graduate Teaching Associate, Department of Geography, The Ohio State University, Columbus, OH, USA.

September, 1987 ...... M.A. in Geography, The Ohio State University, Columbus, OH, USA.

September, 1987-August, 1990 ...... Graduate Teaching Associate, Full responsibility, Department of Geography, The Ohio State University, Columbus, OH, USA.

August, 1990 - June, 1991 ...... Graduate Research Associate to Dr. Moellering, Department of Geography, The Ohio State University, Columbus, OH, USA.

July, 1991 - October, 1994 ...... Graduate Research Associate, MVS Systems Programming Group, University Technology Services, The Ohio State University, Columbus, OH, USA.

October, 1994 - Present ...... Supervisor, Mapping Laboratory, Department of Civil and Environmental Engineering and Geodetic Science, The Ohio State University, Columbus, OH, USA.

PUBLICATIONS

1988 Edmonds, A.K. and H. Moellering. An Analytical Cartographic System for Modeling Geomorphic Data. Technical Papers: 1988 ACSM-ASPRS Annual Convention: Cartography. St. Louis, Missouri. Vol. 2, pp. 129-138.

1991 Edmonds, A.K. Directions for Research: User Interfaces for GIS. Position Paper: NCGIA Initiative 13 Specialist Meeting, June 23-26, 1991, Buffalo, New York.

1992 Edmonds, A.K. A Methodology for the Comparison and Specification of the Geographic Information System User Interface. Proceedings of the 5th International Symposium on Spatial Data Handling, August 3-7, 1992, Charleston, South Carolina.

FIELDS OF STUDY

Major Field: Analytical Cartography
Minor Field: Climatology

TABLE OF CONTENTS

ABSTRACT
ACKNOWLEDGMENTS
VITA
TABLE OF CONTENTS
LIST OF TABLES
LIST OF FIGURES

CHAPTERS

1. INTRODUCTION
   Structure of the Thesis

2. RELEVANT LITERATURE
   Cartographic Models of Communication
   User Interface Concepts
   Models of Human Computer Interaction
   Specification of Human Computer Interaction
   Summary

3. THE RESEARCH DESIGN
   Human-Computer Interaction in Cartographic Systems
   Methods of Specifying Cartographic Interaction
   Specifying Human-Computer Interaction in a Spatial Setting
   Research Tasks
   Summary

4. RESEARCH RESULTS
   Specification of 2-Dimensional Programs
      GIMMS
      ZSHADE
      2-D System Comparisons
   Specification of Three Dimensional Systems
      ARC/INFO VIEW
      Spatial Data Display System (SDDP)
      Three Dimensional System Comparisons
   Implementation from Specification
      Interface Implementation
      Evaluation of Implementation
      Summary of Implementation

5. SUMMARY AND CONCLUSIONS
   Summary of the Research
   Conclusions
   Future Work

APPENDIX A - PC-GIMMS SPECIFICATION
APPENDIX B - ARCPLOT BNF SPECIFICATION
APPENDIX C - ZSHADE SPECIFICATION
APPENDIX D - VIEW3D SPECIFICATIONS
APPENDIX E - SDDP 3D SPECIFICATION
APPENDIX F - IMPLEMENTATION SPECIFICATION
BIBLIOGRAPHY

LIST OF TABLES

Table
4.1 Number of PC-GIMMS productions
4.2 Types of ARC/PLOT productions
4.3 Types of ZSHADE productions
4.4 Summary of 2-D Systems
4.5 Types of View3D productions
4.6 Types of SDDP productions
4.7 Summary of 3-D Systems

LIST OF FIGURES

Figure
2.1 A General Communication System
2.2 A Communication View of Cartography
2.3 Kolacny's Communication Model
2.4 Muehrcke's Cartographic Processing Model
2.5 Cartographic Communication Model of Robinson & Petchenik
2.6 Real and Virtual Maps
2.7 Real and Virtual Map Transformations
2.8 Surface and Deep Structure
2.9 The Seeheim Model of the User Interface
2.10 Surface Interaction Paradigm
2.11 The Virtual Protocol Model of Interaction
3.1 Comparison of Nyerges and Nielsen Models
4.1 Typical map produced with PC-GIMMS
4.2 GIMMS Map Design Menu
4.3 Typical map produced with ARC/PLOT
4.4 ARC/PLOT Command Line
4.5 ZSHADE Main Menu and Graphics Displays
4.6 Typical graphic display of ARC/PLOT View
4.7 Typical graphics display of SDDP
4.8 SDDP Main and Display Menus
4.9 SDD3D Main Menu
4.10 SDD3D Data Structure Selection
4.11 SDD3D Database Selection
4.12 SDD3D Variable Read
4.13 SDD3D Variable Selection
4.14 SDD3D Origin and Size Entry
4.15 SDD3D Display Menu
4.16 SDD3D Display Method Selection
4.17 SDD3D Hue and Color Levels Input
4.18 SDD3D Image Display

CHAPTER 1

INTRODUCTION

Throughout the history of geography and related disciplines and sub-disciplines (i.e. GIS, transportation geography, economic geography, etc.) there has been a need to search for new theories for explaining phenomena, and new tools to apply these theories to data and situations. One of the disciplines that has provided these tools, as well as spatial theory, is cartography and its cousin Geographic Information Systems (GIS). The explosion of inexpensive computer technology has placed very sophisticated software tools and algorithms into the hands of users who are inexperienced in the underlying theory of the software they use. Consequently, there has arisen a need for new methods to evaluate this software. These methods can provide the cartographer or the subject domain expert with a means of evaluation to compare and recommend such programs to the novice.

This work consequently has developed a procedure (i.e. a framework) to be used in specifying, evaluating and comparing software programs for geographic analysis. The theory used in this work has been derived from the communications models of cartography and the human-computer interaction field of computer science. Cartographic communication models have been presented and debated for many years; however, they have remained mostly a descriptive discussion of the processes involved in map design and production. This research will examine those models and how they may apply to a description of the interaction between a user and a spatial information system. The communications models and their application in computer science can serve as a guide to the incorporation of the cartographic communication models in a description of a user's interaction with a cartographic system, both textually and spatially. Therefore traditional models of cartography will be presented and examined for the direction they give to this research. Although cartographic communication models have not been successfully implemented, the use of the surface interaction model presented might be one way in which these models can be advanced.

Computer science communication models as they apply to human computer interaction will also be examined and used to help develop and extend a framework for cartographic interaction with spatial information systems. The basic objectives of this research are to:

1) Develop a method of describing the interaction between the computer and the cartographer in the display and analysis process,

2) Use the method developed to implement a cartographic user interface.

Cartographic communication theory, like most of the human-computer interaction models in computer science, was developed from the communication theory of Shannon and Weaver (1949). The basic cartographic communication model involves the interpretation of data and the symbolization and design of a map by the cartographer to communicate information to the map user. This process is subject to error in the interpretation of the cartographer, the collection of data, the choice of symbolization and the interpretation of that symbolization by the map user, resulting in noise in the communication process and affecting the perception of the information by the map user. This model has been expanded by various cartographers to include the transformations necessary in this process (Muehrcke, 1970) and the adaptation of this model to computer technology (Moellering, 1977a) using the concepts of real and virtual maps. These concepts, i.e. real and virtual maps, and the related deep and surface structure concepts of both cartography (Nyerges, 1981) and computer science (Took, 1990), have been drawn upon to construct the framework presented in this work for the specification and comparison of cartographic communication in the user interface.

The framework developed in this thesis serves as a basis for the comparison of cartographic user interfaces. The framework allows the evaluator to examine the spatial device use in the interface by detailing the use of the "locate" and "pick" logical devices. The system concepts involved in the interface are detailed by the specification of the types of surfaces (i.e. dialogue text, graphics, etc.) present. The cartographic transformations occurring in the system are specified with Dr. Moellering's real and virtual map transformations. In addition, the deep and surface structure interactions occurring in the cartographic system can be examined to give a feel for the type of program control occurring outside the range of the interface. The production rules used to specify the user interfaces of these systems can be examined for the complexity of accomplishing specific tasks. These specifications can then be examined and used to compare the difficulty of accomplishing the same task in the same system with a different interface, or in a different system. The model used and the techniques devised for this framework for comparison will be presented in Chapter 3, following a discussion of the theory upon which this research is based.

Structure of the Thesis

Chapter 2 of this thesis will discuss the relevant literature, with the first section a discussion of the traditional and more recent models of cartographic communication. Following that discussion, the remainder of Chapter 2 will discuss the various computer science models of human-computer interaction and end with a description of the various techniques that have been developed for specifying and comparing human-computer interactions. The development of the cartographic interaction model will be presented in Chapter 3. Following the description of the model and the tasks developed to test and evaluate this approach, the actual application of the model to the descriptions and comparisons of five different spatial information systems will be presented in the first two sections of Chapter 4. The remaining section of Chapter 4 will present a proof-of-concept system that was produced from the specifications for one of the other systems. A summary of the results will be detailed in the first section of Chapter 5, followed by a discussion of the conclusions that may be derived from those results and a third section that presents future work. The appendices (A-F) contain the detailed specifications produced as a result of this research.

CHAPTER 2

RELEVANT LITERATURE

This chapter will briefly introduce the literature relevant to this research. Specifically, the first section will cover the foundations of cartographic theory, namely the cartographic communications models. The second section of the chapter will review the equivalent models for human-computer interaction and also cover the specification of the interfaces for human-computer interaction.

Cartographic Models of Communication

The origin of the communication view of cartography has as its foundation the communication model defined in 1949 by Shannon and Weaver (Figure 2.1). This model has as its components a message that is selected from an information source, changed into a signal by a transmitter, and modified en route by noise. The received signal is then transformed back into a message by the receiver and delivered to its destination. Cartographers such as Board (1967) have related this basic model of communication to cartography by defining a similar model, shown in Figure 2.2. This cartographic model defines the world as the source, the cartographer as the encoder, and the map as the encoded message. The receiver is the eyes of the map reader, with his mind being the decoder, while the destination is the map reader.


Figure 2.1 A general communication system (from Shannon and Weaver, 1963)


Figure 2.2 A Communication View of Cartography (from Robinson and Petchenik, 1976)

A more detailed model by Kolacny (1972), see Figure 2.3, relates the communication model to sets that define the perceptions of the cartographer and the map reader, and the relation of those perceptions to reality. Kolacny defines reality as the set U (the universe); U1 is the selective observation of the universe by the cartographer, and U2 is the universe as seen by the map user. This reality is transformed by the cartographer (S1), with specific goals, into an intellectual model that is represented by a map (M) using cartographic symbols (L). The map reader, by reading the map using his knowledge of cartographic symbols (L), transforms his previous understanding of the universe into a conception of the universe that incorporates some of the conceptions of the cartographer. Muehrcke (1970) developed from Tobler's original concepts a transformational model of cartographic processing, which is presented in Figure 2.4. Muehrcke's description of his model is as follows:

Data are selected from the real world (T1), the cartographer transforms these data into a map (T2), and information is retrieved from the map through an interpretive reading process (T3). A measure of the communication efficiency of the cartographic process is related to the amount of transmitted information, which is simply a measure of the correlation between input and output information. The cartographer's task is to devise better and better approximations to a transformation, T2, such that the output from T3 is equal to the input to T2; i.e. T3 = T2⁻¹ (Muehrcke, 1970).
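Muehrcke's criterion can be restated computationally: communication is lossless when the reading transformation undoes the mapping transformation. The following toy sketch (my own illustration, not from Muehrcke; the data values and encoding scheme are invented) models T2 and T3 as functions and checks that T3 acts as T2's inverse:

```python
# Illustrative sketch of Muehrcke's transformational view:
# T1 selects data from the world, T2 encodes it as a map, T3 reads the map.
# Communication is "efficient" here when T3(T2(x)) == x, i.e. T3 = T2 inverse.

raw_data = {"elev_a": 120.0, "elev_b": 340.0}  # output of T1 (invented values)

def t2(data):
    """T2: encode data values as map symbols (a made-up encoding)."""
    return {k: f"contour:{v}" for k, v in data.items()}

def t3(map_image):
    """T3: interpretive reading that inverts the encoding above."""
    return {k: float(v.split(":")[1]) for k, v in map_image.items()}

map_image = t2(raw_data)
recovered = t3(map_image)
assert recovered == raw_data  # the reading process recovers the mapped data
```

Any mismatch between `recovered` and `raw_data` would correspond to noise or interpretation error in the communication model.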

Robinson and Petchenik (1976) are more comfortable expanding on the set concepts from Kolacny's model and developing a more detailed Venn diagram (Figure 2.5), rather than using the cartographic transformations of the communication model of Muehrcke. The model developed by Robinson and Petchenik shows the conceptual relationships of the cartographic process, but relates nothing about the process itself.

As shown in Figure 2.5, S represents geographical space, with Sc being a correct conception of that space and Se being an erroneous conception. A is the conception of geographical space held by the cartographer, B is the conception of space held by the map reader, and each of these conceptions has erroneous and correct parts. The map (M) represents a subset of these conceptions and of the actual geographical space. The portion of the map M1 is that previously conceived by the map reader; M2 is the portion not previously conceived but that is now comprehended because of the communication of the map; M3 is the portion of the map that is still not conceived by the map reader; and U is the increase in conception by the map reader that occurs because of knowledge gained from the map but is not directly portrayed by it. Since this model only addresses the membership in these sets, it cannot be applied to the flow or transformation of information between them.

U1 - reality (the Universe) represented as seen by the cartographer; S1 - the subject representing reality, i.e. the cartographer; L - cartographic language as a system of map symbols and rules for their use; M - the product of cartography, i.e. the map; S2 - the subject consuming the map, i.e. the map user [percipient]; U2 - reality (the Universe) as seen by the map user; and I2 - cartographic information.


Figure 2.3 Kolacny's Communication Model (from Robinson and Petchenik, 1976)

[Figure 2.4 depicts the sequence: Real World → Raw Data → Map → Map Image.]

Figure 2.4 Muehrcke's Cartographic Processing Model (from Robinson and Petchenik, 1976)

Another model that can be used to describe the cartographic communication process is one that begins by redefining maps, taking into account the computer era. Moellering (1977a, 1984) developed this proposal for a redefinition of what a map is, in response to Morrison's (1974) call for a new definition. This concept, summarized in Figure 2.6, defines both real and virtual maps based on two-fold characteristics: whether the map is directly viewable and whether the map has a permanent tangible reality. In the case of computer generated maps displayed on a monitor, the map is directly viewable but does not have a tangible reality; therefore this type of map would be described as a virtual map of Type 1. A real map has both the above qualities, while a virtual map of Type 3 has neither of those qualities (e.g. a digital elevation model in the computer but not displayed on a screen). The remaining type of virtual map is a Type 2 map, which has a tangible reality but is not directly viewable; an example might be the gazetteer of an atlas, or a CD-ROM database.


S = Geographical space, the milieu
Sc = Correct conception of the milieu
Se = Erroneous conception of the milieu
A = Conception of the milieu held by Cr
B = Conception of the milieu held by Pt
M = Map prepared by Cr and viewed by Pt
M1 = Fraction of M previously conceived by Pt
M2 = Fraction of M not previously conceived by Pt and newly comprehended by him: an indirect increment
M3 = Fraction of M not comprehended by Pt
U = Increase in conception of S by Pt not directly portrayed by M but which occurs as a consequence of M: an indirect increment

Figure 2.5 Cartographic Communication Model of Robinson & Petchenik (from Robinson and Petchenik, 1976)

The interesting implication of this type of map definition is in the specification of transformations between each type of map (Figure 2.7). As Moellering (1977b) describes, the design of interactive computer systems can be facilitated by an examination and specification of which of the sixteen different real to virtual map transformations is occurring at a given step. Tobler (1979) has also explored a transformational view of cartography. Tobler's view of cartography is not, however, very useful for specification of human-computer interaction, since it applies entirely to the data domain and does not include transformations outside of computer processing, unlike the real to virtual map transformation model.

                              Directly viewable
                              YES           NO
  Permanent        YES        Sheet Map     Gazetteer
  Tangible                    Globe         Field Data
  Reality          NO         CRT Image     RAM
                                            Cognitive Map
                                            Digital Terrain Model

Figure 2.6 Real and Virtual Maps (after Moellering, 1983)
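The two-fold classification can be made concrete with a small sketch (my own illustration, not part of the dissertation): the map type follows mechanically from the two boolean attributes, and the four types imply the 4 × 4 = 16 map-to-map transformations underlying Figure 2.7.

```python
def map_type(directly_viewable: bool, permanent_tangible: bool) -> str:
    """Classify a map under Moellering's real/virtual scheme (sketch)."""
    if directly_viewable and permanent_tangible:
        return "real map"            # e.g. sheet map, globe
    if directly_viewable:
        return "virtual map type 1"  # e.g. CRT image
    if permanent_tangible:
        return "virtual map type 2"  # e.g. gazetteer, field data
    return "virtual map type 3"      # e.g. DEM in memory, cognitive map

# The four map types imply 4 x 4 = 16 possible map-to-map transformations.
types = [map_type(v, p) for v in (True, False) for p in (True, False)]
print(len(types) * len(types))  # 16
```

Enumerating the type pairs in this way is one route to checking, for a given interactive step, which of the sixteen transformations a system is performing.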


Figure 2.7 Real and Virtual Map Transformations (from Moellering, 1983)

The development of computer databases for cartography and spatial information led to a need for concepts relating to the structure of this information in digital databases. Nyerges (1981) addressed this by specifying six different levels of data structure abstraction in the development of spatial analysis systems. These levels are (Nyerges, 1981):

1) Information Reality - Observations that exist as ideas about geographical entities and their relationships which knowledgeable persons would communicate with each other using any medium for communication.

2) Information Structure - A formal model that specifies the information organization of phenomena in reality. This structure acts as an abstraction of reality and skeleton to the canonical structure. It includes entity sets plus the type of relationships that exist between those entity sets.

3) Canonical Structure - A model of data which represents the inherent structure of that data and hence is independent of individual applications of the data and also of the software or hardware mechanisms which are employed in representing and using the data.

4) Data Structure - A description elucidating the logical structure of data accessibility in the canonical structure. There are access paths that are dependent on explicit links, i.e. resolved through pointers, and others that are independent of links, i.e. resolved through other forms of reference. Those access paths dependent on links would be based on trees or plex structures, as in network models. Those access paths independent of links would be based on tables, as in relational models.

5) Storage Structure - An explicit statement of the nature of links, expressed in terms of diagrams that represent cells, linked and contiguous lists, levels of storage medium, etc. It includes indexing, how stored fields are represented, and in what physical sequences the stored records are stored.

6) Machine Encoding - A machine representation of data, including the specification of addressing (absolute, relative or symbolic), data compression and machine code.

This scheme for abstraction of spatial data structures was specified in conjunction with the use of an additional abstraction concept, that of deep and surface cartographic structure. Nyerges developed this additional abstraction by analogy to Chomsky's consideration of surface and deep structure in linguistics. As Moellering (1984) interprets this abstraction, deep structure represents linkages that are not present graphically in the surface structure representation of map information (Figure 2.8), with deep structure consisting of the information defined primarily at the data structure level (level 4) of the data abstraction hierarchy developed by Nyerges.
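The six-level hierarchy, and the association of deep structure with level 4, can be rendered in code as a simple ordered enumeration. This is an illustrative sketch of my own, not anything Nyerges or Moellering provide:

```python
from enum import IntEnum

class NyergesLevel(IntEnum):
    """Nyerges' (1981) six levels of spatial data abstraction (sketch)."""
    INFORMATION_REALITY = 1    # ideas about geographic entities
    INFORMATION_STRUCTURE = 2  # formal model of information organization
    CANONICAL_STRUCTURE = 3    # application/hardware-independent data model
    DATA_STRUCTURE = 4         # logical access paths (links, tables)
    STORAGE_STRUCTURE = 5      # cells, lists, physical sequences
    MACHINE_ENCODING = 6       # addressing, compression, machine code

def holds_deep_structure(level: NyergesLevel) -> bool:
    """Per Moellering's (1984) reading, cartographic deep structure is
    defined primarily at the data structure level (level 4)."""
    return level == NyergesLevel.DATA_STRUCTURE
```

Ordering the levels as integers captures the sense in which each level is a further abstraction of the one above it, from ideas about reality down to machine encoding.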

[Figure 2.8 contrasts the cartographic deep structure of a Type III virtual map (node, link, point, object, and attribute modules) with the cartographic surface structure of a real map or Type I virtual map.]

Figure 2.8 Surface and Deep Structure (from Moellering 1984)

This brief review of cartographic communication models serves as a basis for the discussion in the next section of the communications basis of human-computer interaction models. The use of these communications models in cartographic research has decreased substantially. In fact, the most recent edition of Elements of Cartography (Robinson et al, 1995) has totally dispensed with any discussion of the communications models that in previous editions had been a prominent feature. One of the problems with using the communications models is the representation of the various parts of the process. Dr. Moellering's real and virtual map concepts, and the transforms that can be expressed between the different map types, are one step towards a means of representing the processes involved in map production and cognition. However, like the more generic communications models, these concepts also do not have a means of specifying the parties to these processes.

In spite of their limitations, these different conceptual models of the cartographic communication process and cartographic transformations can serve as a basis for the development of a model for cartographic human-computer interaction. To further expand on this base, the next section will discuss research specific to human-computer interaction.

User Interface Concepts

The cycle of design, implementation and testing of user interfaces usually follows one of the patterns of software development advocated by software engineers. Although the specific procedures followed by the system designers may vary, the development of a user interface usually proceeds in an iterative fashion through the design, implementation and evaluation stages of the life cycle. Rouse (1984) presents the following general steps in the life cycle for interactive systems development that can be used for the implementation of any specific system:

1. Collect information.
2. Define requirements and semantics.
3. Design syntax and support facilities.
4. Specify physical devices.
5. Develop software.
6. Integrate system and disseminate to users.
7. Nurture the user community.
8. Prepare evolutionary plan.

These steps are not meant to be linear but often involve returning to a previous step to rework some aspects of the system. This requires that evaluations be conducted at each step in the process so that errors may be corrected and reworked.

Moellering (1977b, 1983) specifies five basic phases in the development of an interactive cartographic system:

1. Specifying the system goals.
2. Specifying system needs and feasibility.
3. Designing the system.
4. Implementing the system.
5. System testing, verification and documentation.

These two sequences for the implementation of a computer software system are closely related and show only two approaches to the many paths available and used in the steps of software production.

The specification of the requirements, semantics, syntax and physical devices used in an interface is often the more challenging part of the development of the user interface. Thus, this area of user interface development has been the subject of extensive research within the human factors discipline. Such issues as the selection of input devices, the screen layout of the system, and the type and frequency of interaction with the user must be considered in the design of a high quality interface. Key issues in human factors evaluations of systems are (Shneiderman 1992):

1. Time needed to learn the system,
2. Speed of performance,
3. The rate of errors by users,
4. Users' subjective satisfaction with the system, and
5. Users' knowledge retention over time.

As these issues point out, testing and evaluation of critical elements of the user interface are necessary to ensure that the system tested is effective and will be successful.

One of the seminal works related to the design of human-computer interfaces and consideration of human factors is the work of Foley and

Wallace (1974). In this work, and later work, such as Foley, Wallace and

Chan (1984), the authors consider such psychological blocks to interaction as boredom, panic, frustration, confusion, and discomfort. In addition, consideration is also given to the different types of virtual devices for input (picks, buttons, locators and valuators). These virtual devices are important in maintaining the transportabüity of interaction techniques between different physical devices. Human factors design of user interfaces has also recently been the subject of research in the GIS field.

Egenhofer and Frank (1988) present requirements of good human factors design for future geographical information systems, which specify that the user interface should be "easy to learn, appear natural to the user, and independent of any internal data structure." These views are in accordance with the general human factors design criteria presented above.

More recently, in the GIS field, there has been sufficient research interest that an edited collection of papers entitled Human Factors in Geographical Information Systems (Medyckyj-Scott and Hearnshaw, 1993) has been produced. However, none of the research presented in that volume has direct bearing on the areas of interest in this research.

A natural extension of the effort to design computer interfaces so that they are effectively transparent to the user and satisfy the human factors criteria presented above has been the use of direct manipulation interfaces (Shneiderman 1983). Such interfaces as the Xerox Star and Apple Macintosh desktops (Smith et al. 1982) seek to exploit the use of metaphors. Specifically, both the Star and Macintosh attempt to use the metaphor of the desktop, where the arrangement of the computer screen and the interaction with graphical components of the interface are meant to be an extension of the interaction of a worker with the files and items located on a typical office desk. This has resulted in easy-to-learn user interfaces and an increase in the popularity of the Apple Macintosh computers with less computer-literate individuals. The success of this type of interaction and these interfaces has resulted in the Graphical User Interface (GUI), with its associated windows, icons, and a mouse as a pointing device, being adopted for use in many different computing environments. SunView, Open Look, Motif, DECWindows, Microsoft Windows and NextStep are some of the currently available GUIs for microcomputers and workstations.

The success of the desktop metaphor, and of metaphors in general, is also being explored in the GIS area. Gould and McGranaghan (1990) discuss the use of different metaphors in Geographic Information Systems and find that some metaphors can be used successfully, but that the use of a map or a map library metaphor in a GIS user interface may not be appropriate. They suggest, however, that the use of nested metaphors may be valuable to provide appropriate organization to the system in a way that is relevant to the user's view of the task or application domain.

Wilson (1990) finds fault with the use of the desktop metaphor, as indicated in the title "Get Your Desktop Metaphor Off My Drafting Table...". As Wilson points out, a critical component in the design of the user interface, the underlying conceptual model for the interaction, is often overlooked in the design of spatial data handling systems.

Mark (1992) also examines metaphors and comes to the conclusion that since most non-spatial information systems rely on spatial metaphors for interaction, general-purpose computing is conceived of as spatial data handling. Therefore interface designers for spatial information systems must be careful that users do not confuse spatial data and its manipulation in the system with the graphical interaction methods in a direct manipulation interface. Burrough and Frank (1995) also address how users perceive interaction with the GIS. Their conclusion is: "...spatial data analysis tools need to be chosen and developed to match the way users perceive their domains: these tools should not impose alien thought modes on users just because they are impressively high tech (Burrough and Frank, 1995)." How then are user interfaces modeled and what is their conceptual basis? This is discussed in the next section of this chapter.

Models of Human Computer Interaction

The basic model of human-computer interaction that is most widely applied is one that was proposed at the Seeheim Conference on User Interface Management Systems (Pfaff 1985). This model is known as the Seeheim Model and is shown in Figure 2.9. The Seeheim Model is a very simple model of the interaction between the user and the application program, with the user interface existing as the dialogue between the components, and is very similar to the basic cartographic communication model. A more recent model of the communication between the user and the application is that presented by Took (1990).

Took proposes a different approach for separating the application and the user: the surface interaction paradigm. The Seeheim model specifies the user interface only as the dialogue occurring between the user and the application; Took takes this one step further by making the user interface a separate component of the overall system. Whereas the Seeheim model relies on the underlying application to manipulate the presentation of the dialogue, in Took's surface interaction paradigm the interface, or surface, is responsible for changing the presentation (Figure 2.10). Took requires that this surface domain (i.e. the graphics and text presented to the user) be independent of the application's semantics or the deep interaction (i.e. the interaction among applications or within the application program). Although this approach is related to that of a window manager, such as X, Took requires that the separation include the contents of the windows, not just the arrangement and appearance as is provided by the X Windows Toolkit.

Figure 2.9 The Seeheim Model of the User Interface (from Pfaff 1985)

Figure 2.10 Surface Interaction Paradigm (from Took, 1990)

The above models of the separation of the user interface and the application are representative of the literature in this area. There are many more variations on these concepts, but these illustrate the types of models that have been used. A comprehensive look at the separation of the interface and applications is done in The Separable User Interface (Edmonds, 1992). This is mostly a collection of previously published papers that address early communication models of human-computer interaction, specification for user interface separability, and the architecture and practical aspects of these concepts. Some of this work will be referenced later in the discussion of specification. It is appropriate to point out that this collection of work is based on the communication model of the user interface and does not include the research by Took (1990) in the same area.

Although the two models discussed above can both be used as general models of the user interface, there are many more levels of human-computer communication that must be considered in the development of the interface and that have been explored. Moran (1981) has developed the Command Language Grammar (CLG) model that takes into account many more levels of interaction than those specified above. The components of CLG are grouped into three major categories: conceptual, communication and physical. The conceptual category includes concepts related to the user task and abstraction of that task, the communication component is the command language or other means of interaction with the application, and the physical component includes the type of display and pointing devices used on the computer system. Moran groups these components into four distinct levels: the task level, the semantic level, the syntactic level and the interaction level. These levels of specification of the interface are useful for dividing the interests of user interface researchers into groups with three different purposes or views.

The linguistic view's primary goal is the study of the command language used to interact between the system and the human, the psychological view is concerned with describing the users of the system, and the design view concerns the representations used to specify the system design. All of these views and levels of interaction are accommodated by the Command Language Grammar.

An even more elaborate model of human-computer interaction is the virtual protocol model proposed by Nielsen (1986). This model expands on the Command Language Grammar model of Moran by specifying seven different levels of human-computer interaction based on the ISO Open Systems Interconnection (OSI) model of physical communication between computer networks. In this model, illustrated in Figure 2.11, messages at a specific level are exchanged between the two sides by virtual communication at the next lower level. The higher levels of this interaction specify the conceptual processes and components of the discourse, while the lower levels specify the form or appearance of the communication.

The top level of Nielsen's model is concerned with the goals of a system; these are real-world concepts that are external to the computer system. The task layer deals with the system concepts, in other words what kinds of objects are available in the system and how they can be manipulated. The semantics layer of Nielsen is not defined in the usual way as the meaning of an action but as an examination of the detailed functionality of the system. This includes what specific objects are in the system (rather than categories of objects) and what specific operations can be done. To summarize the difference between the task and semantics levels in this model, the task level deals with what types of things and actions are represented by the system, in other words how a task might be described generically using the system concepts. The semantics level deals with specific objects and actions on those objects, i.e. what specific operations are necessary on what specific system objects to accomplish the task.

The next layer of interaction, the syntax layer, deals with what specific sequence of commands must be issued to accomplish the semantic operation. The lexical level merely expresses the syntax in system tokens, the smallest units of information in the system: the individual words or actions in the command sequences. The final two levels of Nielsen's model are the alphabetic, where the primitive symbols that are used to construct the tokens are specified, and the physical level, which represents the actual physical actions such as the pressing of a key on the computer keyboard.
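As a compact illustration, the following sketch traces a single cartographic request down through all seven of Nielsen's layers. The command names and token stream are invented for illustration and are not taken from Nielsen (1986):

```python
# A hypothetical decomposition of one request ("see population density of
# Ohio") through Nielsen's seven layers, from real-world goal to hardware.
LAYERS = [
    ("goal",       "see the population density of Ohio"),      # real-world concept
    ("task",       "display a choropleth map of a layer"),     # system concepts
    ("semantics",  "classify layer pop_density, then shade"),  # specific operations
    ("syntax",     "classify pop_density 5 ; shade"),          # command sequence
    ("lexical",    ["classify", "pop_density", "5", ";", "shade"]),  # tokens
    ("alphabetic", "c, l, a, s, ... (letters, digits)"),       # primitive symbols
    ("physical",   "key presses and mouse clicks"),            # hardware events
]

for name, example in LAYERS:
    print(f"{name:10s} {example}")
```

Each layer's message is realized by virtual communication at the layer below it, which is what the printed table is meant to suggest.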

These different models of human-computer interaction include not only an examination of the goal and task levels involved in the discourse with the computer but also provide a means of specifying the form of the interface. This last component, the specification of the interface, has been the subject of extensive research in user interfaces and will be discussed next.

Figure 2.11 The Virtual Protocol Model of Interaction (from Nielsen 1986)

Specification of Human Computer Interaction

The specification of the user-computer interface using various techniques is valuable because it can be a formal method for evaluating the interface as well as a means to automate its construction. The most common techniques used to specify human-computer interaction have been the use of formal grammars (Shneiderman 1982) and graphical specifications using state transition diagrams or other types of transition networks (Jacob 1983, Wasserman 1985). The next portion of this review will concern itself with an examination of these methods as well as a few of the less common means of specification.

Formal grammar specifications of user interfaces generally use as their basis the Backus-Naur Form (BNF), also called Backus Normal Form. Backus (1960) first applied this grammar to computer language specification when specifying ALGOL (Naur 1960). Originally this formal grammar notation was used in linguistics by Chomsky (1964; Reisner 1981), and it has been used in modified form by computer scientists for compiler specification (Holub 1990) and in defining user interfaces (Shneiderman 1982). A BNF specification uses a set of terminal symbols and a set of definitions (non-terminals) by which every legal structure in the system being defined must be represented. Strict BNF rules allow only one operator, ::=, which means "is defined as" (Holub 1990). A system is specified by writing production rules that represent every component of the system, such that all non-terminals are ultimately defined by terminal symbols.
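The mechanics just described can be sketched concretely. The following is a minimal illustration, using an invented map-navigation command language (not a grammar from the literature reviewed here), of a BNF-style rule set in which every non-terminal is ultimately defined by terminal symbols, together with a naive recognizer:

```python
# A hypothetical BNF-style grammar: each non-terminal maps to a list of
# alternative productions, and every production bottoms out in terminals.
GRAMMAR = {
    "<command>":     [["<zoom>"], ["<pan>"]],
    "<zoom>":        [["zoom", "<zoom-dir>"]],
    "<pan>":         [["pan", "<pan-dir>"]],
    "<zoom-dir>":    [["in"], ["out"]],
    "<pan-dir>":     [["north"], ["south"], ["east"], ["west"]],
}

def derives(symbols, tokens):
    """Return True if the symbol sequence can derive exactly this token list."""
    if not symbols:
        return not tokens
    head, rest = symbols[0], symbols[1:]
    if head in GRAMMAR:                     # non-terminal: try each production
        return any(derives(alt + rest, tokens) for alt in GRAMMAR[head])
    # terminal: must match the next input token
    return bool(tokens) and tokens[0] == head and derives(rest, tokens[1:])

print(derives(["<command>"], ["zoom", "in"]))   # True: a legal structure
print(derives(["<command>"], ["pan", "up"]))    # False: "up" is not a terminal
```

The recognizer simply enumerates derivations, which is adequate for tiny grammars; a production tool such as yacc compiles the same kind of rule set into an efficient parser.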

Various problems have been noted with the use of BNF specifications for the human-computer interface; thus Shneiderman (1982) discusses and extends the BNF grammar to correct some of the deficiencies. One of the problems Shneiderman notes is that BNF notation is geared toward batch programming languages, therefore he extends it by allowing each non-terminal (i.e. each specification that can be defined by components) to be specified with a party identifier. The party identifier specifies which part of the interface or program is involved in this particular interaction. This allows the human and computer components of the discourse to be uniquely identified and allows for the addition of more than one party on either side of the dialogue. Additional enhancements include the assignment of values to non-terminals and a notation to specify the output of that value, as well as a wildcard non-terminal that matches any string if no other parse succeeds. Among the advantages of using a formal grammar such as BNF is that a complete description of a system can be built which can be implemented, debugged and constructed with the aid of compiler construction tools such as 'yacc' (yet another compiler compiler).

Reisner (1981) uses an extended BNF notation to evaluate the design of two interactive graphics systems. Although there are some problems with the use of this notation, she finds the formal analysis of great benefit. One benefit is the ability to analyze a model before implementation; this allows the designer to examine the notation to find inconsistencies. For example, the design might include areas where different steps are necessary to accomplish the same task. This also enforces precision in design, such as making sure that prerequisite actions are always accomplished. The formation of testable hypotheses can also be aided by allowing the designer to specify what actions or components are being tested. For example, if a designer wishes to compare different types of selection actions in a particular task, those actions can be specified in the notation and the specifications used to predict the length of the interaction necessary to accomplish that task.
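The idea of predicting interaction length from a specification can be sketched as follows; the two selection techniques, their action names and the per-action time are hypothetical placeholders, not figures from Reisner's study:

```python
# Two hypothetical derivations of the same "select an object" task, written
# out as the terminal user actions each technique requires.
MENU_SELECT = ["open-menu", "point-at-item", "click"]   # menu-based selection
KEY_SELECT  = ["press-ctrl", "press-letter"]            # keyboard shortcut

def predicted_length(terminal_actions, ms_per_action=500):
    """Crude prediction: each terminal action costs a fixed, assumed time."""
    return len(terminal_actions) * ms_per_action

print(predicted_length(MENU_SELECT))  # 1500
print(predicted_length(KEY_SELECT))   # 1000
```

Comparing the two counts suggests the shortcut should be faster, a hypothesis that can then be tested empirically; this is the sense in which the specification supports testable predictions before anything is implemented.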

Richards and others (1986) use Reisner's extensions to BNF, as well as Moran's Command Language Grammar, to analyze the MINICON user interface to UNIX. In applying both notations to the task, limitations of each grammar were noted, but both could be used to evaluate the user interface. Some of the limitations of BNF were corrected in Bleser and Foley's (1982) extension to the grammar. In this study, in addition to adding enhancements to BNF similar to some of Reisner's and Shneiderman's extensions, the researchers developed a means to specify environment variables. This is done in their notation by using attribute lists that define environment parameters (such as window size and background color) using an attribute declaration similar to the C language data structure definition. These attributes would have a local scope and could be referenced by a parameter list of the non-terminal.

Another type of specification that is frequently used is User Action Notation (Hix and Hartson, 1993). User Action Notation (UAN) is a notation that has been devised for representing the behavioral aspects of human-computer interaction. The notation includes symbols for representing mouse press and release events as well as the movement of the mouse. This makes the technique well suited to the definition of graphical user interfaces; however, the notation is also somewhat hard to learn and read. Although the specification of human-computer interaction using notations has been used successfully, many researchers find other means of specification more appropriate; one of these is the use of transition diagrams.

Parnas (1969) was one of the first researchers to advocate the use of state transition diagrams for the specification and construction of user interfaces. Jacob (1982, 1983, and 1985) built on Parnas' approach and details the construction of a user interface using a design specified with state transition diagrams. In Jacob's comparison of state transition diagrams and BNF notation (1983) he indicates that state transition diagrams are superior because sequence is explicit in the diagrams but only implicit in BNF specifications. However, Jacob does concede that BNF and state transition diagrams are functionally equivalent for specifying human-computer interfaces, and even uses a grammar representation of state transition diagrams to implement the interface discussed. In subsequent research Jacob (1985) and Wasserman (1985) develop transition diagram interpreters that use the graphical specification of the state transition diagrams directly.
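A transition-diagram interpreter of the general kind described here can be sketched in a few lines; the states and event tokens below are invented for illustration (a simple press-drag-release interaction) and do not reproduce Jacob's actual diagrams:

```python
# A table-driven transition-diagram interpreter: the diagram is stored as a
# dict mapping (state, event) -> next state.  A dialogue is legal if it
# drives the machine from the start state to an accepting state.
TRANSITIONS = {
    ("idle",  "press-button"): "armed",
    ("armed", "move-mouse"):   "armed",    # self-loop: dragging
    ("armed", "release"):      "done",
}

def run(events, start="idle", accepting=frozenset({"done"})):
    state = start
    for event in events:
        if (state, event) not in TRANSITIONS:
            return False                   # no arc for this event: illegal
        state = TRANSITIONS[(state, event)]
    return state in accepting

print(run(["press-button", "move-mouse", "move-mouse", "release"]))  # True
print(run(["release"]))                                              # False
```

Note that sequence is explicit in the transition table, whereas in a BNF specification the same ordering would be implicit in the nesting of productions, which is the contrast Jacob draws.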

Other researchers have used algebraic and set-theoretic notations to specify interaction dialogues (Chi 1985, Hill 1987); in addition, some researchers have constructed systems that specify the human-computer interaction component by having the designer demonstrate the interaction required (Myers and Buxton 1986, Myers 1987). In Peridot (Myers 1987, 1988), since the human-computer interaction is entirely specified by demonstration, no formal specification of the user interface is actually constructed and thus the advantages of using a formal specification for definition of the interface are lost. Although this particular system for implementing a user interface does not use a specification notation, many other systems for constructing, prototyping and evaluating user interfaces formally define the interface in some way.

A recent development by Carr (1995) is the Interaction Object Graph (IOG). This specification technique is "... a blending of statecharts for control, User Action Notation for event descriptions, and an abstract model of the user interface (Carr, 1995)". Although Carr successfully tested his technique for ease of understanding, his main use of these graphs is to specify widgets, such as sliders and buttons. This work has not yet been used to specify complete systems, and would appear to be rather unwieldy when the number of widgets specified becomes large.

The major application of techniques for specification of user interfaces is in the area of User Interface Management Systems (UIMS). UIMS are systems that have been created for managing human-computer dialogue in the prototyping and programming of applications. Many books (such as Olsen, 1992) are available which discuss UIMS and their construction and choice of representation techniques.

Summary

This chapter has reviewed some of the models of communication that have been used in both the cartographic literature and the human-computer interaction literature. Although the traditional cartographic models that were originally developed from the communication model have fallen out of favor, more recent models such as Moellering's real and virtual maps model have been developed to extend this research into the computer era. The promise of using real and virtual map transformations to specify cartographic systems, although explored by Dr. Moellering, has not been expanded to include representation of the various parties to the design and interpretation of a map.

The human-computer interaction models started from the same roots as the cartographic communication models. These models have served as the basis for the development of such cognitive and task models as Moran's Command Language Grammar. As an expansion of these models Nielsen developed a seven layer virtual protocol model that can be used to describe the interaction between the user and the computer. This model combined with some ideas from cartography and other areas of computer science will serve as the theoretic background for the model presented in the next chapter.

CHAPTER 3

THE RESEARCH DESIGN

The literature reviewed in the previous section outlines the pertinent areas of past research. Discussion will next concentrate on the development of a research agenda for specific components of user interfaces in analytical cartography and spatial analysis systems. This research will first elucidate the connections between the virtual protocol of Nielsen (1986), the surface interaction paradigm of Took and the real and virtual maps paradigm of Moellering, and show how these concepts apply to modeling human-computer interaction in analytical cartography and geographic information systems. The adoption of these concepts will be shown to be useful by an application of formal language specification to communication between the cartographer and the cartographic system.

Human-Computer Interaction in Cartographic Systems

Many of the communication models of cartography are not appropriate for use as the underlying model for cartographic systems, since they are static, graphic representations of the different components of cartographic communication. No research has successfully implemented these models, and they are not suitable for use in interaction between the user and the application. However, one of the cartographic models, Moellering's real and virtual maps model, is appropriate since it has the capability of expressing transformations between different forms of cartographic representation of reality. This model of cartographic processing will be used to help specify the human-computer interaction in cartographic systems for this research. In addition, the different levels of data structure developed by Nyerges (1981) and the various levels of interaction specified by Nielsen (1986) have the potential to be useful in examining graphical interaction.

Nyerges' (1981) six levels of data structure can be adapted and used in the specification of the cartographic interface. These levels are (Nyerges, 1981):

1) Information Reality
2) Information Structure
3) Canonical Structure
4) Data Structure
5) Storage Structure
6) Machine Encoding

The expression of these data structure levels in relationship to deep and surface structure ties quite nicely into the similar concept of surface interaction used by Took (1990) in describing interaction in computer systems. The concept of surface interaction specifies human-computer interaction as taking place between the user and the application, across the surface or medium of the user interface. In this interaction paradigm all surface presentation is controlled by the user interface in reaction to requests from either the application program or the user. Communication between application processes or different applications that does not require a change to the surface presentation is termed "deep interaction". The primary difference between the Seeheim model of user interfaces and that of Took is that the surface presentation (of the interface) in the Seeheim model is controlled by the application and is not abstracted and separated from the application as extensively as in Took's surface interaction paradigm. This difference, although it may appear slight, is very important because of the current state of the art in window management systems. Multi-tasking operating systems may have many different applications that are executing at the same time and affecting different parts of the interface presented to the user. Therefore the user interface, in Took's model, must contain some intelligence or some knowledge about the executing applications to handle the presentation components of the interface. In addition, the intelligence present in the interface can be used to specifically manage the surface presentation for individual users, as in an Intelligent User Interface (IUI) (Rissland 1984). In these interfaces the surface presentation and interaction methods can be tailored for the individual user, not only when initially configured, but as the user interacts with the interface. Thus the user interface must handle more than just updating the surface structure presented to the user; it must provide additional services. It is when the interface is extended to include these concepts that the advantages of the separation built into Took's surface interaction paradigm become essential.

The advantages present in the adoption of the surface interaction paradigm in dealing with human-computer interfaces make it the most reasonable model to use as the underlying general model for the specification of cartographic interaction. Therefore this research will represent the human-computer interface as a surface structure that interacts with both the user and the application programs. This adoption of surface interaction as the underlying model for this work extends Nyerges' deep and surface structure concepts to the user interaction area. The definition of real and virtual maps by Moellering (1984) is essential to the resulting model as it specifies the abstractions of the cartographic products in components that seem to match the surface interaction paradigm.
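The division of labor between surface and deep interaction can be sketched in code. In this hypothetical example (the class and method names are assumptions for illustration, not structures from Took 1990), only the Surface changes the presentation, while deep interaction bypasses it entirely:

```python
# Sketch of surface interaction: the Surface owns all presentation; the
# application can only change what is displayed by sending it requests.
class Surface:
    def __init__(self):
        self.display = {}                    # widget name -> displayed value

    def present(self, widget, value):        # only the Surface draws
        self.display[widget] = value

class MapApplication:
    def __init__(self, surface):
        self.surface = surface
        self.scale = 50000

    def handle_user_event(self, event):
        if event == "zoom-in":
            self.scale //= 2
            # surface interaction: request a presentation change
            self.surface.present("map", f"1:{self.scale}")

    def notify_database(self):
        # deep interaction: inter-process message, no presentation change
        return {"msg": "extent-changed", "scale": self.scale}

surface = Surface()
app = MapApplication(surface)
app.handle_user_event("zoom-in")
print(surface.display["map"])   # 1:25000
```

Because the application never writes to the display directly, the Surface could in principle tailor the presentation per user, which is where the intelligent-interface advantages discussed above come in.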

Methods of Specifying Cartographic Interaction

The specification of cartographic interaction in spatial analysis systems can occur at many different levels of abstraction. The levels of data structure presented by Nyerges (1981) can be used as a basis for these abstractions, as could the levels used in Nielsen's virtual protocol model discussed previously. In a comparison of these two schemes of abstraction, the Goal Layer of Nielsen can be related to the Information Reality or Data Reality level of Nyerges (Figure 3.1). In the Goal Layer the user deals with real-world concepts, as does the Information Reality level of data structure specification. The Task Layer deals with the conceptual tasks that are used to process the real-world concept and is similar to the Information Structure level, where the types of relationships and the information structure are devised. The Canonical Structure level is similar to the Semantics Layer, where the meaning of the interaction is specified, i.e. operations are related to each object, independent of the actual application or implementation. Unfortunately the semantics level as defined by Nielsen does not relate to the meaning of the interaction but to a more definable concept: operations on an object. This research therefore will deal with semantics at this level. The Data Structure level of Nyerges corresponds to the Syntax and Lexical levels of Nielsen's protocol model, where the specific sequencing and structure of the various commands are developed and specified so that specific relationships and structures exist independent of actual implementation. The Alphabetic Layer corresponds to the Storage Structure in the data abstraction; in this layer the letters and digits of the interactions are handled, and the Storage Structure represents the specification of structure in implementation. The final layer, the Physical Layer, handles the actual exchange of the computer signals in the interaction, while the corresponding data structure level, Machine Encoding, handles the specific addressing and machine representation of the data structure. These two models of abstraction were devised for specifying different components of computer systems, but are functionally similar in their considerations. Since Nielsen's model is specifically designed for human-computer interaction, it will be used in this research to abstract the different levels of specification of the human-computer cartographic interaction used in this project.

Nyerges Data Structure Levels    Nielsen Virtual Protocol Model

Information Reality              Goal
Information Structure            Task
Canonical Structure              Semantics
Data Structure                   Syntax
                                 Lexical
Storage Structure                Alphabetic
Machine Encoding                 Physical

Figure 3.1 Comparison of Nyerges and Nielsen Models

The Goal and Task Layers of the protocol model, although they can be easily described, are perhaps the most difficult to specify formally. However, specifications such as the Command Language Grammar developed by Moran (1981) do provide for specification of task entities, tasks, task procedures and task methods. Even Moran concedes that this portion of CLG needs further extension. The use of CLG at the Semantic Level does not provide any more formal a method of specification than at the Task Level, but provides specification of system entities, system operations, user operations, semantic procedures and semantic methods. One method of specifying interaction at these levels in a more precise, although still informal, fashion is to use the map transformations specified by Moellering (1983) to examine the semantic operation of the interface. The examination and specification of cartographic interfaces in this research will first be done descriptively at these levels and enhanced with the specification of the real and virtual map transformations.

The Syntax, Lexical and Alphabetic Layers of Nielsen are equivalent to the Syntactic and Interaction levels of CLG (Nielsen 1986) and are the primary areas of interest in this research. All of the specification of interfaces with the BNF notation or state transition diagrams occurs at these levels. It is important to note that no current research in analytical cartography or geographic information systems is concerned with this level of specification. Although some researchers have considered the use of task analysis and an examination of the requirements for the users of spatial analysis systems (Egenhofer and Frank 1988, Gould 1989), no one has attempted to evaluate and compare these systems at this level of interaction. Any specification of an interface in BNF or using state transition diagrams must deal with both the commands available in the system and the means of the human interaction with the system. Therefore, when specifying the components of the interface, the specification of the virtual devices of interaction (Foley and Wallace 1974; Foley, Wallace and Chan 1984) and the syntax of the commands must be dealt with for a complete system definition. The specification notation used for describing and evaluating the surface interaction of cartographic users will be a version of BNF notation. This BNF notation is extended to allow specification of the form of the output and input via virtual devices and uses a means similar to Bleser and Foley's for specifying the necessary environment parameters. The next section of the research design will specify each of the components discussed above in more detail.

Specifying Human-Computer Interaction in a Spatial Setting

As indicated above, the underlying conceptual model of the user interface in the cartographic interaction model will be the surface interaction paradigm of Took (1990). This model is used since it provides a better separation of the human-computer interaction from the application program. An additional advantage is the ability of the user interface to include intelligent components that allow the incorporation of beneficial enhancements in the human-computer dialogue. Although not all user interfaces may meet the standards set by using this conceptual model, it serves as an ideal for the evaluation of various interfaces.

In the process of evaluating cartographic interaction, the virtual protocol levels presented by Nielsen (1986) will be used to organize the specification. At the Goal Layer statements can be made about the use of the evaluated system, and although these statements cannot be specified formally, they prevent two systems that were designed for different purposes from being compared. The specification of the interface at the Task Layer is also informal and consists of stating the objects and operations available in each system. In this layer the emphasis is not on whether a specific function is available using one command, but whether a specific operation can be accomplished, either with one command or by stringing together some sequence of the available commands. This layer of the specification gives an indication of whether the same operations and objects are available in each system, even though they may be implemented in different fashions. An indication of the tasks available in the system can be clarified by expressing the operations using the real and virtual map transformations described by Moellering (1984).

The next three layers of the virtual protocol model are those that are incorporated in the specification by representing the system actions as a set of BNF rules. The Semantics Layer of interaction can be represented

by the higher order non-terminals in BNF, and the specific syntax

(Syntax Layer) of the operations is described by lower level productions involving both terminals and non-terminals of the language. The Lexical

Layer of the protocol model is described by the lowest order productions in

BNF, where these productions involve the terminal components present in the interface. The Alphabetic Layer of the model comprises the most primitive components of the interaction, the lexemes, which must be used to specify the tokens of the Lexical Layer. Since this layer consists of individual letters and digits as well as line color and other attributes of graphics, these serve as terminals in the productions of the Lexical Layer and will not be decomposed any further. The remaining layer that

Nielsen uses in the virtual protocol model is the Physical Layer. This layer is concerned with the actual specification of the recording of the individual keys and specific buttons pressed by the user. Since the specification of the interface relies on the definition of virtual devices as described in Foley and Wallace (1974), this layer will not be dealt with in the specification.
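To make the layer-to-rule mapping concrete, the following sketch (not from the dissertation; the grammar and symbol names are invented for illustration) encodes a toy command grammar in Python, with comments marking which of Nielsen's layers each tier of rules would occupy. Terminals follow the text's capital-letter convention.

```python
# Toy grammar: rule tiers mirror Nielsen's Semantics, Syntax, and
# Lexical layers.  All symbol names are hypothetical examples.
GRAMMAR = {
    # Semantics Layer: high-order non-terminals naming operations
    "zoom_command": [["zoom_verb", "map_extent"]],
    # Syntax Layer: the ordering of tokens within an operation
    "zoom_verb":    [["ZOOM_IN_BUTTON"], ["ZOOM_OUT_BUTTON"]],
    # Lexical Layer: tokens built from virtual-device terminals
    "map_extent":   [["PICK", "PICK"]],  # two locator picks
}

def expand(symbol, grammar):
    """Return every terminal string derivable from `symbol`.

    Symbols absent from the grammar are terminals (written in
    capitals, per the notation's convention).
    """
    if symbol not in grammar:
        return [[symbol]]
    strings = []
    for production in grammar[symbol]:
        # Expand each right-hand-side symbol, then concatenate
        # every combination of the resulting alternatives.
        combos = [[]]
        for rhs_symbol in production:
            combos = [c + option
                      for c in combos
                      for option in expand(rhs_symbol, grammar)]
        strings.extend(combos)
    return strings
```

Calling expand("zoom_command", GRAMMAR) would yield the two terminal strings ZOOM_IN_BUTTON PICK PICK and ZOOM_OUT_BUTTON PICK PICK, one for each alternative in the Syntax Layer rule.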

Since the purpose of this means of specifying the user interface is the comparison, evaluation, and automation of the design of the user interface, it is appropriate to enumerate some of the specific uses of the

notation at this point. Reisner (1981) points out three aspects of the notation that can be evaluated:

1. The number of different terminal symbols,
2. The length of the terminal strings for particular tasks, and
3. The number of rules necessary to describe the structure of some set of terminal strings.

As Reisner discusses, the purpose of the first aspect is to evaluate the total number of individual operations present in the interaction. The second aspect concerns the number of steps a user would need to perform some sub-task, and the third criterion represents the number of different steps required to accomplish similar operations. The benefits Reisner finds in these specifications are the ability to analyze a model before implementation, the enforcement of precision in design, the formation of testable hypotheses, and the ability to automatically detect and quantify the intrinsic properties of easy-to-use systems. Shneiderman (1982) also describes similar means for the use of a formal description of the interface as well as pointing out how the use of this formal notation can lead to fewer errors and problems in the implementation of an application interface.
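Reisner's three measures can be computed mechanically once a grammar is held in a data structure. The sketch below is an illustration under assumed conventions, not code from any system examined here: it encodes a grammar as a dictionary from non-terminals to lists of alternative right-hand sides, with invented symbol names, and counts each of the three aspects.

```python
# Hypothetical two-command grammar; the right-hand sides consist
# only of terminals (capitals), which keeps the measures simple.
sample = {
    "draw_line":  [["SELECT_LINE_TOOL", "PICK", "PICK"]],
    "draw_point": [["SELECT_POINT_TOOL", "PICK"]],
}

def terminal_symbols(grammar):
    """Aspect 1: the set of distinct terminal symbols."""
    return {sym for alts in grammar.values() for alt in alts
            for sym in alt if sym not in grammar}

def task_string_length(grammar, task):
    """Aspect 2: length of the shortest terminal string for a task
    (valid here because right-hand sides are already terminal)."""
    return min(len(alt) for alt in grammar[task])

def rule_count(grammar):
    """Aspect 3: number of production rules in the specification."""
    return sum(len(alts) for alts in grammar.values())
```

On the sample grammar these give three distinct terminals (PICK is shared by both commands), a terminal-string length of three for draw_line, and two rules in total.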

Although Jacob (1983) indicates reasons why BNF is inferior to the use of state transition diagrams for formal specification of interfaces, most of his objections appear to be personal bias. Although BNF in its pure form is not easily used for this task, neither are state transition diagrams.

State transition diagrams are extended by Jacob to include sub-diagrams so that the specifications can have easily understandable abstractions.

This is taken care of in BNF notation by only examining non-terminal rules at the appropriate level of abstraction. In addition, BNF notation is particularly suited for the specification of concurrent processes since the order of execution of the rules is implicit, not explicit as in state transition diagrams. Also, Jacob's initial efforts (1982, 1983) with state transition diagrams required the encoding of the diagrams into a formal notation before use in an automated setting, therefore counteracting the advantage of the graphic representation. Since both BNF notation and state transition diagrams have strengths and weaknesses, the advantage of one over the other is not clear, but because of the extensibility of BNF notation in particular areas of interest to this research, BNF notation will be used for formal specification of the interface.

The specification of the various parts of the interaction is defined in Backus-Naur Form with various extensions to allow a more concise notation. These extensions are derived from the work of Reisner (1981),

Shneiderman (1982) and Bleser and Foley (1982). The basic form of the

Backus-Naur production rule is:

sentence ::= subject predicate

In this example the left-hand side of the production (sentence) is defined

as (::=) a subject followed by a predicate. Each production in the BNF

notation defines a set of non-terminal symbols as consisting of one or

more non-terminal or terminal symbols. In this specification a terminal

symbol will be indicated by an expression in capital letters. The specification of a grammar continues until all non-terminal symbols are ultimately defined by reference to terminal symbols. Since the '::=' operator is the only legally allowed operator in the BNF notation, some extensions will be described to make the notation more compact. In general, the order of the BNF components will be the order in which those components appear in the interface. A metasymbol '|' indicates that the symbols on either side may define the production, e.g. c ::= ab | ba, indicating that 'a' may be followed by 'b' or that 'b' may be followed by 'a'.
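The alternation metasymbol can be mirrored directly in a data structure. The brief sketch below (illustrative only) holds the example c ::= ab | ba as a mapping from the non-terminal to its list of alternative right-hand sides and checks a candidate string against each alternative.

```python
# The production  c ::= ab | ba  as a dictionary: each alternative
# right-hand side is one sequence of symbols.
grammar = {"c": [["a", "b"], ["b", "a"]]}

def derives(grammar, non_terminal, symbols):
    """True if `symbols` matches one alternative of `non_terminal`."""
    return symbols in grammar[non_terminal]
```

Here derives(grammar, "c", ["a", "b"]) and derives(grammar, "c", ["b", "a"]) both hold, while ["a", "a"] matches neither alternative.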

To make it easy to distinguish between symbols composed of several words and individual symbols, brackets will be placed around each symbol, e.g. <text_obj>. In some notations the epsilon symbol is used to match a null string; to make the meaning more precise and understandable, the terminal symbol NULL will be used to represent an empty string. In addition, a terminal symbol ERROR is used to match any user action that is not valid in the interface. Such an entry would show the actions of the interface in giving the user feedback to correct his

mistake. Unfortunately, none of the systems that are targeted for this research handle errors other than to return an error message, so the action in the case of any of the specified systems would be the same. Identifiers will be used, as in Shneiderman (1982), to indicate which party in the interaction is referenced by the symbol. This can be used to specify output to different windows in a multi-window environment or to represent the actions of different sub-modules of the user interface. The specification of different program segments using this portion of the notation can easily show how completely the interface being specified conforms to the surface interaction paradigm. An example of how identifiers will be used is:

<report> ::= <DW:text_obj> <GW1:graphic_obj> | <DW:text_obj> NULL

This brief example is meant to illustrate that the user interface surface structure symbol 'report' is defined as the surface display of text in a dialogue window (DW). The symbol 'text_obj' is the text received from the application, followed by either nothing else (NULL) or a graphic object displayed in graphic window number one (GW1). In this example "text_obj" and

"graphic_obj" are variables that are separately defined to describe the text and graphics presented to the user. Other variables could be used as environmental variables to define such characteristics as text color or the graphics line color. The party identifiers used in this research will all belong to an appropriate level of interaction as defined by Took's

interaction model and Moellering's surface and deep structure cartographic interaction model. For instance, a user interface action is a surface structure action. A deep structure action would be an action apparent in the program that did not have any reflection on the display.

Finally, the user can be the initiator of actions that can result in surface structure or deep structure interaction. Environmental variables can be defined for each characteristic of the interface objects presented, as in the following example:

background_color = BLUE, linecolor = RED, transparency = OFF.

In this type of specification terminal symbols are also indicated in capital letters (e.g. BLUE).
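The party-identifier and environmental-variable conventions just described can be sketched together in code. In the following illustration the identifier-to-level mapping and the APP identifier are assumptions made for the example, not part of the notation itself; a symbol such as DW:text_obj is split into its party identifier and name, and the environment assignments from the example above travel with the symbol as a dictionary.

```python
# Assumed mapping from party identifiers to interaction levels;
# DW and GW1 follow the windows used in the text's example, and
# APP is a hypothetical deep-structure (application) party.
PARTY_LEVEL = {
    "DW":  "surface",   # dialogue window: surface structure
    "GW1": "surface",   # graphic window 1: surface structure
    "APP": "deep",      # application action with no display effect
}

def split_symbol(symbol):
    """Split 'DW:text_obj' into (party identifier, symbol name)."""
    party, _, name = symbol.partition(":")
    return party, name

def interaction_level(symbol):
    """Look up which level of interaction a tagged symbol belongs to."""
    party, _ = split_symbol(symbol)
    return PARTY_LEVEL[party]

# The environmental variables from the example above, with terminal
# values kept in capitals per the notation's convention.
environment = {
    "background_color": "BLUE",
    "linecolor": "RED",
    "transparency": "OFF",
}

def with_environment(symbol, env):
    """Pair a displayed symbol with the parameters governing its form."""
    return {"symbol": symbol, "env": dict(env)}
```

Under this sketch, interaction_level("GW1:graphic_obj") reports a surface structure action, and the environment dictionary supplies the line color and other display characteristics for the paired symbol.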

All of the extensions to be used with this style of BNF notation have now been presented, with one exception: the use of virtual devices.

The virtual devices used in this notation will be those specified in Foley and Wallace (1974) and will be non-terminals defined to return specific types of values, as presented below:

::=