A FRAMEWORK FOR FORMAL SPECIFICATION OF THE CARTOGRAPHIC USER INTERFACE
DISSERTATION
Presented in Partial Fulfillment of the Requirements for the Degree Doctor of Philosophy in the Graduate School of The Ohio State University
By
Alan Kirk Edmonds, B.G.S., M.A.
*****
The Ohio State University 1997
Dissertation Committee:
Professor Harold Moellering, Adviser
Professor Morton O'Kelly
Professor Alan Saalfeld

Approved by: Adviser, Department of Geography
© Copyright by Alan Kirk Edmonds 1997

ABSTRACT
This research combines elements from similar models in cartographic communication theory and human-computer interaction to develop a model for cartographic interaction. The model developed includes components from Nielsen's virtual protocol model of human-computer interaction. The use of the concepts of surface and deep structure in this research is adopted from Nyerges's cartographic data structure research and the surface interaction paradigm of Took. These concepts are represented at various levels of interaction by BNF production rules modified according to Shneiderman's multi-party grammar model. Other levels of specification draw on the transformations of Moellering's real and virtual maps model.
The model developed using the above concepts is used to specify and compare five different cartographic computer systems that use three different styles of interaction. In addition, to test the use of this model to implement systems, the specifications for one of the computer mapping systems are used to implement a graphical user interface. The results of this research show that:
• Modification of Backus-Naur form production rules for graphical interaction and deep and surface structure can be successfully applied to the specification of spatial information system user interfaces,
• Such specification can allow the comparison of systems based on their use of graphical interaction, in terms of virtual devices,
• Examining such production rules can give insight into the system concepts, i.e. the surface interaction with the user in terms of text and graphics,
• The complexity of interaction and ease of use of systems can be described by examining the length and number of graphical interactions in the production rules.
To my mother, Mary Jean Edmonds, my grandmother, Fae Rogers, and my great aunt, Irene Edmonds, for their dedication to education throughout the generations
ACKNOWLEDGMENTS
I wish to thank my adviser, Harold Moellering, for his support and encouragement and for his patience in guiding me through this long process.
I also thank the members of my dissertation committee for their guidance in making this a better document.
I am very grateful for my wife Cheryl's support throughout these years.
I would also like to thank my many friends and colleagues throughout the university, specifically those at University Technology Services, the Department of Civil and Environmental Engineering and Geodetic Science, and the Department of Geography.

VITA
September 16, 1958 ...... Born - Lawrence, Kansas, USA.
May, 1980 ...... B.G.S. in Geography, The University of Kansas, Lawrence, KS, USA.
May, 1980 ...... Commissioned 2nd Lieutenant, United States Marine Corps.
May, 1982 ...... Promoted to 1st Lieutenant, United States Marine Corps.
September, 1985-August, 1987 ...... Graduate Teaching Associate, Department of Geography, The Ohio State University, Columbus, OH, USA.
September, 1987 ...... M.A. in Geography, The Ohio State University, Columbus, OH, USA.
September, 1987-August, 1990 ...... Graduate Teaching Associate, full responsibility, Department of Geography, The Ohio State University, Columbus, OH, USA.
August, 1990 - June, 1991 ...... Graduate Research Associate to Dr. Moellering, Department of Geography, The Ohio State University, Columbus, OH, USA.
July, 1991 - October, 1994 ...... Graduate Research Associate, MVS Systems Programming Group, University Technology Services, The Ohio State University, Columbus, OH, USA.
October, 1994 - Present ...... Supervisor, Mapping Laboratory, Department of Civil and Environmental Engineering and Geodetic Science, The Ohio State University, Columbus, OH, USA.
PUBLICATIONS
1988 Edmonds, A.K. and H. Moellering. An Analytical Cartographic System for Modeling Geomorphic Data. Technical Papers: 1988 ACSM-ASPRS Annual Convention: Cartography. St. Louis, Missouri. Vol. 2, pp. 129-138.
1991 Edmonds, A.K. Directions for Research: User Interfaces for GIS. Position Paper: NCGIA Initiative 13 Specialist Meeting, June 23-26, 1991, Buffalo, New York.
1992 Edmonds, A.K. A Methodology for the Comparison and Specification of the Geographic Information System User Interface. Proceedings of the 5th International Symposium on Spatial Data Handling, August 3-7, 1992, Charleston, South Carolina.
FIELDS OF STUDY
Major Field: Analytical Cartography Minor Field: Climatology
TABLE OF CONTENTS
ABSTRACT
ACKNOWLEDGMENTS
VITA
TABLE OF CONTENTS
LIST OF TABLES
LIST OF FIGURES
CHAPTERS
1. INTRODUCTION
   Structure of the Thesis
2. RELEVANT LITERATURE
   Cartographic Models of Communication
   User Interface Concepts
   Models of Human Computer Interaction
   Specification of Human Computer Interaction
   Summary
3. THE RESEARCH DESIGN
   Human-Computer Interaction in Cartographic Systems
   Methods of Specifying Cartographic Interaction
   Specifying Human-Computer Interaction in a Spatial Setting
   Research Tasks
   Summary
4. RESEARCH RESULTS
   Specification of 2-Dimensional Programs
      GIMMS
      ZSHADE
      2-D System Comparisons
   Specification of Three Dimensional Systems
      ARC/INFO VIEW
      Spatial Data Display System (SDDP)
      Three Dimensional System Comparisons
   Implementation from Specification
      Interface Implementation
      Evaluation of Implementation
      Summary of Implementation
5. SUMMARY AND CONCLUSIONS
   Summary of the Research
   Conclusions
   Future Work
APPENDIX A - PC-GIMMS SPECIFICATION
APPENDIX B - ARCPLOT BNF SPECIFICATION
APPENDIX C - ZSHADE SPECIFICATION
APPENDIX D - VIEW3D SPECIFICATIONS
APPENDIX E - SDDP 3D SPECIFICATION
APPENDIX F - IMPLEMENTATION SPECIFICATION
BIBLIOGRAPHY

LIST OF TABLES
Table
4.1 Number of PC-GIMMS productions
4.2 Types of ARC/PLOT productions
4.3 Types of ZSHADE productions
4.4 Summary of 2-D Systems
4.5 Types of View3D productions
4.6 Types of SDDP productions
4.7 Summary of 3-D Systems
LIST OF FIGURES
Figure
2.1 A General Communication System
2.2 A Communication View of Cartography
2.3 Kolacny's Communication Model
2.4 Muehrcke's Cartographic Processing Model
2.5 Cartographic Communication Model of Robinson & Petchenik
2.6 Real and Virtual Maps
2.7 Real and Virtual Map Transformations
2.8 Surface and Deep Structure
2.9 The Seeheim Model of the User Interface
2.10 Surface Interaction Paradigm
2.11 The Virtual Protocol Model of Interaction
3.1 Comparison of Nyerges and Nielsen Models
4.1 Typical map produced with PC-GIMMS
4.2 GIMMS Map Design Menu
4.3 Typical map produced with ARC/PLOT
4.4 ARC/PLOT Command Line
4.5 ZSHADE Main Menu and Graphics Displays
4.6 Typical graphic display of ARC/PLOT View
4.7 Typical graphics display of SDDP
4.8 SDDP Main and Display Menus
4.9 SDD3D Main Menu
4.10 SDD3D Data Structure Selection
4.11 SDD3D Database Selection
4.12 SDD3D Variable Read
4.13 SDD3D Variable Selection
4.14 SDD3D Origin and Size Entry
4.15 SDD3D Display Menu
4.16 SDD3D Display Method Selection
4.17 SDD3D Hue and Color Levels Input
4.18 SDD3D Image Display
CHAPTER 1
INTRODUCTION
Throughout the history of geography and related disciplines and sub-disciplines (i.e. GIS, transportation geography, economic geography, etc.) there has been a need to search for new theories for explaining phenomena, and new tools to apply these theories to data and situations. One of the disciplines that has provided these tools, as well as spatial theory, is cartography and its cousin, Geographic Information Systems (GIS). The explosion of inexpensive computer technology has placed very sophisticated software tools and algorithms into the hands of users who are inexperienced in the underlying theory of the software they use. Consequently there has arisen a need for new methods to evaluate this software. These methods can provide the cartographer or the subject domain expert with a means of evaluation to compare and recommend such programs to the novice.
This work has consequently developed a procedure (i.e. a framework) to be used in specifying, evaluating and comparing software programs for geographic analysis. The theory used in this work has been derived from the communication models of cartography and the human-computer interaction field of computer science. Cartographic communication models have been presented and debated for many years; however, they have remained mostly a descriptive discussion of the processes involved in map design and production. This research will examine those models and how they may apply to a description of the interaction between a user and a spatial information system. The communication models and their application in computer science can serve as a guide to the incorporation of the cartographic communication models in a description of a user's interaction with a cartographic system, both textually and spatially. Therefore traditional models of cartography will be presented and examined for the direction they give to this research. Although cartographic communication models have not been successfully implemented, the use of the surface interaction model presented might be one way in which these models can be advanced.
Computer science communication models as they apply to human computer interaction will also be examined and used to help develop and extend a framework for cartographic interaction with spatial information systems. The basic objectives of this research are to:
1) Develop a method of describing the interaction between the computer and the cartographer in the display and analysis process,
2) Use the method developed to implement a cartographic user interface.
Cartographic communication theory, like most of the human-computer interaction models in computer science, was developed from the communication theory of Shannon and Weaver (1949). The basic cartographic communication model involves the interpretation of data, and the symbolization and design of a map, by the cartographer to communicate information to the map user. This process is subject to error in the interpretation of the cartographer, the collection of data, the choice of symbolization and the interpretation of that symbolization by the map user, resulting in noise in the communication process and affecting the perception of the information by the map user. This model has been expanded by various cartographers to include the transformations necessary in this process (Muehrcke, 1970) and the adaptation of this model to computer technology (Moellering, 1977a) using the concepts of real and virtual maps. These concepts, i.e. real and virtual maps, and the related deep and surface structure concepts of both cartography (Nyerges, 1981) and computer science (Took, 1990), have been drawn upon to construct the framework presented in this work for the specification and comparison of cartographic communication in the user interface.
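The BNF production rules used throughout this work extend ordinary BNF to multiple parties. As a rough illustration only (the grammar fragment, token names, and Python representation below are invented for this sketch, not taken from the specifications in the appendices), a production in the spirit of Shneiderman's multi-party grammar tags each nonterminal with the party that produces it, so that a user action such as a "locate" with a pointing device and the computer's graphical response appear in the same rule:

```python
# Illustrative sketch (not the dissertation's actual grammar): a few
# multi-party productions with nonterminals tagged by party, "U" for
# the user and "C" for the computer. Untagged symbols not found in the
# grammar are treated as terminals (e.g. the "locate" virtual device).

GRAMMAR = {
    # (party, nonterminal): list of alternative right-hand sides
    ("U", "select-feature"): [["U:point-at-map", "C:highlight"]],
    ("U", "point-at-map"):   [["locate"]],          # user positions pointer
    ("C", "highlight"):      [["redraw-symbol"]],   # computer echoes choice
}

def derive(party, symbol):
    """Expand a tagged nonterminal into a flat sequence of terminals
    (leftmost derivation; each rule here has a single alternative)."""
    rhs = GRAMMAR.get((party, symbol))
    if rhs is None:
        return [symbol]                      # terminal token
    out = []
    for sym in rhs[0]:
        if ":" in sym:                       # tagged nonterminal, e.g. "C:highlight"
            p, s = sym.split(":")
            out.extend(derive(p, s))
        else:
            out.extend(derive(party, sym))
    return out

print(derive("U", "select-feature"))   # ['locate', 'redraw-symbol']
```

Deriving the tagged start symbol flattens the dialogue into an alternating sequence of user and computer terminals, which is in the spirit of the comparisons made later by counting graphical interactions in the production rules.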
The framework developed in this thesis serves as a basis for the comparison of cartographic user interfaces. The framework allows the evaluator to examine the spatial device use in the interface by detailing the use of the "locate" and "pick" logical devices. The system concepts involved in the interface are detailed by the specification of the types of surfaces (i.e. dialogue text, graphics, etc.) present. The cartographic transformations occurring in the system are specified with Dr. Moellering's real and virtual map transformations. In addition, the deep and surface structure interactions occurring in the cartographic system can be examined to give a feel for the type of program control occurring outside the range of the interface. The production rules used to specify the user interfaces of these systems can be examined for the complexity of accomplishing specific tasks. These specifications can then be examined and used to compare the difficulty of accomplishing the same task in the same system with a different interface, or be compared to a different system. The model used and the techniques devised for this framework for comparison will be presented in Chapter 3, following a discussion of the theory upon which this research is based.

Structure of the Thesis
Chapter 2 of this thesis will discuss the relevant literature, with the first section a discussion of the traditional and more recent models of cartographic communication. Following that discussion, the remainder of Chapter 2 will discuss the various computer science models of human computer interaction and end with a description of the various techniques that have been developed for specifying and comparing human computer interactions. The development of the cartographic interaction model will be presented in Chapter 3. Following the description of the model and the tasks developed to test and evaluate this approach, the actual application of the model to the descriptions and comparisons of five different spatial information systems will be presented in the first two sections of Chapter 4. The remaining section of Chapter 4 will present a proof-of-concept system that was produced from the specifications for one of the other systems. A summary of the results will be detailed in the first section of Chapter 5, followed by a discussion of the conclusions that may be derived from those results, and a third section that presents future work. The appendices (A-F) contain the detailed specifications produced as a result of this research.

CHAPTER 2
RELEVANT LITERATURE
This chapter will briefly introduce the literature relevant to this research. Specifically, the first section will cover the foundations of cartographic theory, namely the cartographic communication models. The second section of the chapter will review the equivalent models for human computer interaction and also cover the specification of the interfaces for human computer interaction.
Cartographic Models of Communication
The origin of the communication view of cartography has as its foundation the communication model defined in 1949 by Shannon and Weaver (Figure 2.1). This model has as its components: a message that is selected from an information source, changed into a signal by a transmitter, and modified en route by noise. The received signal is then transformed back into a message by the receiver and delivered to its destination. Cartographers such as Board (1967) have related this basic model of communication to cartography by defining a similar model, shown in Figure 2.2. This cartographic model defines the world as the source, the cartographer as the encoder, and the map as the encoded message. The receiver is the eyes of the map reader, with his mind being the decoder, while the destination is the map-reader.
Figure 2.1 A general communication system (from Shannon and Weaver, 1963)

Figure 2.2 A Communication View of Cartography (from Robinson and Petchenik, 1976)
A more detailed model by Kolacny (1972), see Figure 2.3, relates the communication model to sets that define the perceptions of the cartographer and the map reader, and the relation of those perceptions to reality. Kolacny defines reality as the set U (the universe); U1 is the selective observation of the universe by the cartographer, and U2 is the universe as seen by the map user. This reality is transformed by the cartographer (S1), with specific goals, into an intellectual model that is represented by a map (M) using cartographic symbols (L). The map reader, by reading the map using his knowledge of cartographic symbols (L), transforms his previous understanding of the universe into a conception of the universe that incorporates some of the conceptions of the cartographer. Muehrcke (1970) developed, from Tobler's original concepts, a transformational model of cartographic processing, and this is presented in Figure 2.4. Muehrcke's description of his model is as follows:
Data are selected from the real world (T1), the cartographer transforms these data into a map (T2), and information is retrieved from the map through an interpretive reading process (T3). A measure of the communication efficiency of the cartographic process is related to the amount of transmitted information, which is simply a measure of the correlation between input and output information. The cartographer's task is to devise better and better approximations to a transformation, T2, such that the output from T3 is equal to the input to T2; i.e. T3 = T2⁻¹ (Muehrcke, 1970).
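Muehrcke's transformational view can be made concrete with a small sketch. The following fragment is purely illustrative (the data, threshold, and function names are invented here, not drawn from Muehrcke or this thesis); it shows T1 selecting data from a "world", T2 symbolizing those data as a map, and T3 reading the map, with communication succeeding when T3 acts as the inverse of T2:

```python
# Illustrative sketch only: toy stand-ins for Muehrcke's T1 (selection
# from the real world), T2 (symbolization into a map), and T3 (the
# interpretive reading process).
world = {"elev_a": 120, "elev_b": 340, "noise": "irrelevant detail"}

def t1(world):                      # select raw data from the real world
    return {k: v for k, v in world.items() if k.startswith("elev")}

def t2(data):                       # symbolize data as map symbols
    return {k: ("dark" if v > 200 else "light") for k, v in data.items()}

SYMBOL_MEANING = {"dark": "high", "light": "low"}

def t3(map_):                       # interpretive reading of the map
    return {k: SYMBOL_MEANING[s] for k, s in map_.items()}

# Communication is efficient when reading (T3) recovers what
# symbolization (T2) encoded, i.e. when T3 approximates T2's inverse.
recovered = t3(t2(t1(world)))
print(recovered)   # {'elev_a': 'low', 'elev_b': 'high'}
```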
Robinson and Petchenik (1976) are more comfortable expanding on the set concepts from Kolacny's model and developing a more detailed Venn diagram (Figure 2.5), rather than using the cartographic transformations of the communication model of Muehrcke. The model developed by Robinson and Petchenik shows the conceptual relationships of the cartographic process, but relates nothing about the process itself. As shown in Figure 2.5, S represents geographical space, with Sc being a correct conception of that space and Se being an erroneous conception. A is the conception of geographical space held by the cartographer, B is the conception of space held by the map reader, and each of these has erroneous and correct parts to their conceptions. The map (M) represents a subset of these conceptions and of the actual geographical space. The portion of the map M1 is that previously conceived by the map reader; M2 is the portion not previously conceived but that is now comprehended because of the communication of the map. M3 is the portion of the map that is still not conceived by the map reader, with U being the increase in conception by the map reader that occurs because of knowledge gained from the map, but is not directly portrayed by it. Since this model only addresses the membership in these sets, it cannot be applied to the flow or transformation of information between them.

U1 - reality (the Universe) represented as seen by the cartographer; S1 - the subject representing reality, i.e. the cartographer; L - cartographic language as a system of map symbols and rules for their use; M - the product of cartography, i.e. the map; S2 - the subject consuming the map, i.e. the map user [percipient]; U2 - reality (the Universe) as seen by the map user; and I2 - cartographic information.
Figure 2.3 Kolacny's Communication Model (from Robinson and Petchenik, 1976)
Figure 2.4 Muehrcke's Cartographic Processing Model (from Robinson and Petchenik, 1976)
Another model that can be used to describe the cartographic communication process is one that begins by redefining maps, taking into account the computer era. Moellering (1977a, 1984) developed this proposal for a redefinition of what a map is, in response to Morrison's (1974) call for a new definition. This concept, summarized in Figure 2.6, defines both real and virtual maps based on two-fold characteristics: whether the map is directly viewable and whether the map has a permanent tangible reality. In the case of computer generated maps displayed on a monitor, the map is directly viewable but does not have a tangible reality; therefore this type of map would be described as a virtual map of Type 1. A real map has both of the above qualities, while a virtual map of Type 3 has neither of those qualities (e.g. a digital elevation model in computer memory but not displayed on a screen). The remaining type of virtual map is a Type 2 map, which has a tangible reality but is not directly viewable; an example might be the gazetteer of an atlas, or a CD-ROM database.
S = Geographical space, the milieu
Sc = Correct conception of the milieu
Se = Erroneous conception of the milieu
A = Conception of the milieu held by Cr
B = Conception of the milieu held by Pt
M = Map prepared by Cr and viewed by Pt
M1 = Fraction of M previously conceived by Pt
M2 = Fraction of M not previously conceived by Pt and newly comprehended by him: an indirect increment
M3 = Fraction of M not comprehended by Pt
U = Increase in conception of S by Pt not directly portrayed by M but which occurs as a consequence of M: an indirect increment

Figure 2.5 Cartographic Communication Model of Robinson & Petchenik (from Robinson and Petchenik, 1976)
The interesting implication of this type of map definition is in the specification of transformations between each type of map (Figure 2.7). As Moellering (1977b) describes, the design of interactive computer systems can be facilitated by an examination and specification of which of the sixteen different real to virtual map transformations is occurring at a given step. Tobler (1979) has also explored a transformational view of cartography. Tobler's view of cartography is not, however, very useful for specification of human-computer interaction since it applies entirely to the data domain and does not include transformations outside of computer processing, unlike the real to virtual map transformation model.
                          Directly Viewable
                          YES                      NO
Permanent      YES        Sheet Map                Gazetteer
Tangible                  Globe                    Field Data
Reality        NO         CRT Image                RAM
                                                   Cognitive Map
                                                   Digital Terrain Model

Figure 2.6 Real and Virtual Maps (after Moellering, 1983)
Figure 2.7 Real and Virtual Map Transformations (from Moellering, 1983)
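The two-fold classification in Figure 2.6 can be expressed as a small decision function. The sketch below is my own illustration of Moellering's definitions (the function names and example transformation are mine, not code from the thesis):

```python
# A minimal sketch of Moellering's two-fold map definition: classify a
# map as real or virtual Type 1-3 from whether it is directly viewable
# and whether it has a permanent tangible reality.
def map_class(viewable: bool, tangible: bool) -> str:
    if viewable and tangible:
        return "real map"            # e.g. sheet map, globe
    if viewable:
        return "virtual map type 1"  # e.g. CRT map image
    if tangible:
        return "virtual map type 2"  # e.g. gazetteer, CD-ROM database
    return "virtual map type 3"      # e.g. DEM in memory, cognitive map

# One of the sixteen possible transformations, named by its endpoints:
def transformation(src, dst):
    return f"{map_class(*src)} -> {map_class(*dst)}"

# Displaying a digital elevation model held in memory on a monitor is
# a Type 3 to Type 1 transformation:
print(transformation((False, False), (True, False)))
```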
The development of computer databases for cartography and spatial information led to a need for concepts relating to the structure of this information in digital databases. Nyerges (1981) addressed this by specifying six different levels of data structure abstraction in the development of spatial analysis systems. These levels are (Nyerges, 1981):
1) Information Reality - Observations that exist as ideas about geographical entities and their relationships which knowledgeable persons would communicate with each other using any medium for communication.
2) Information Structure - A formal model that specifies the information organization of phenomena in reality. This structure acts as an abstraction of reality and a skeleton for the canonical structure. It includes entity sets plus the types of relationships that exist between those entity sets.
3) Canonical Structure - A model of data which represents the inherent structure of that data and hence is independent of individual applications of the data and also of the software or hardware mechanisms which are employed in representing and using the data.
4) Data Structure - A description elucidating the logical structure of data accessibility in the canonical structure. There are access paths that are dependent on explicit links, i.e. resolved through pointers, and others that are independent of links, i.e. resolved through other forms of reference. Those access paths dependent on links would be based on trees or plex structures, as in network models. Those access paths independent of links would be based on tables, as in relational models.
5) Storage Structure - An explicit statement of the nature of links, expressed in terms of diagrams that represent cells, linked and contiguous lists, levels of storage medium, etc. It includes indexing, how stored fields are represented, and in what physical sequences the stored records are stored.
6) Machine Encoding - A machine representation of data including the specification of addressing (absolute, relative or symbolic), data compression and machine code.
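The distinction Nyerges draws at the data structure level (level 4) between link-dependent and link-independent access paths can be illustrated with a toy example. The code below is a hypothetical sketch (the names and data are invented), answering the same adjacency question once through explicit pointers and once through a relational-style table:

```python
# Hedged illustration (not from Nyerges): the same access path resolved
# two ways, once dependent on explicit links (records holding pointers,
# as in network models) and once independent of links (a plain relation
# keyed by reference, as in relational models).

class Node:                          # link-dependent: records hold pointers
    def __init__(self, name):
        self.name, self.neighbors = name, []

a, b = Node("a"), Node("b")
a.neighbors.append(b)                # explicit link resolved via a pointer

links_table = [("a", "b")]           # link-independent: a simple relation

def neighbors_by_pointer(node):
    return [n.name for n in node.neighbors]

def neighbors_by_table(table, name):
    return [dst for src, dst in table if src == name]

print(neighbors_by_pointer(a))            # ['b']
print(neighbors_by_table(links_table, "a"))  # ['b']
```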
This scheme for abstraction of spatial data structures was specified in conjunction with the use of an additional abstraction concept, that of deep and surface cartographic structure. Nyerges developed this additional abstraction by analogy to Chomsky's consideration of surface and deep structure in linguistics. As Moellering (1984) interprets this abstraction, deep structure represents linkages that are not present graphically in the surface structure representation of map information (Figure 2.8), with deep structure consisting of the information defined primarily at the data structure level (level 4) of the data abstraction hierarchy developed by Nyerges.
Figure 2.8 Surface and Deep Structure (from Moellering, 1984): the cartographic deep structure (Virtual Map Type III), consisting of node, link, point, object and attribute modules, underlying the cartographic surface structure (Real Map, Virtual Map Type I)
This brief review of cartographic communication models serves as a basis for the discussion in the next section of the communications basis of human-computer interaction models. The use of these communications models in cartographic research has decreased substantially. In fact, the most recent edition of Elements of Cartography (Robinson et al., 1995) has totally dispensed with any discussion of the communications models that in previous editions had been a prominent feature. One of the problems with using the communications models is representation of the various parts of the process. Dr. Moellering's real and virtual map concepts, and the transforms that can be expressed between the different map types, are one step towards a means of representing the processes involved in map production and cognition. However, like the more generic communications models, these concepts also do not have a means of specifying the parties to these processes.

In spite of their limitations, these different conceptual models of the cartographic communication process and cartographic transformations can serve as a basis for the development of a model for cartographic human-computer interaction. To further expand on this base, the next section will discuss research specific to human computer interaction.
User Interface Concepts
The cycle of design, implementation and testing of user interfaces usually follows one of the patterns of software development advocated by software engineers. Although the specific procedures followed by the system designers may vary, the development of a user interface usually proceeds in an iterative fashion through the design, implementation and evaluation stages of the life cycle. Rouse (1984) presents the following general steps in the life cycle for interactive systems development that can be used for the implementation of any specific system:
1. Collect Information.
2. Define requirements and semantics.
3. Design syntax and support facilities.
4. Specify physical devices.
5. Develop software.
6. Integrate system and disseminate to users.
7. Nurture the user community.
8. Prepare evolutionary plan.
These steps are not meant to be linear but often involve returning to a previous step to rework some aspects of the system. This requires that evaluations be conducted at each step in the process so that errors may be corrected and reworked.
Moellering (1977b, 1983) specifies five basic phases in the development of an interactive cartographic system:
1. Specifying the system goals.
2. Specifying system needs and feasibility.
3. Designing the system.
4. Implementing the system.
5. System testing, verification and documentation.
These two sequences for the implementation of a computer software system are closely related and show only two approaches to the many paths available and used in the steps of software production.
The specification of the requirements, semantics, syntax and physical devices used in an interface is often the more challenging part of the development of the user interface. Thus this area of user interface development has been the subject of extensive research within the human factors discipline. Such issues as the selection of input devices, the screen layout of the system, and the type and frequency of interaction with the user must be considered in the design of a high quality interface. Key issues in human factors evaluations of systems are (Shneiderman, 1992):
1. Time needed to learn the system,
2. Speed of performance,
3. The rate of errors by users,
4. Users' subjective satisfaction with the system, and
5. Users' knowledge retention over time.
As these issues point out, testing and evaluation of critical elements of the user interface are necessary to ensure that the system tested is effective and will be successful.
One of the seminal works related to the design of human-computer interfaces and consideration of human factors is the work of Foley and Wallace (1974). In this work, and later work such as Foley, Wallace and Chan (1984), the authors consider such psychological blocks to interaction as boredom, panic, frustration, confusion, and discomfort. In addition, consideration is also given to the different types of virtual devices for input (picks, buttons, locators and valuators). These virtual devices are important in maintaining the transportability of interaction techniques between different physical devices. Human factors design of user interfaces has also recently been the subject of research in the GIS field. Egenhofer and Frank (1988) present requirements for good human factors design of future geographical information systems, which specify that the user interface should be "easy to learn, appear natural to the user, and independent of any internal data structure." These views are in accordance with the general human factors design criteria presented above.
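The virtual devices of Foley and Wallace can be thought of as a thin layer that classifies physical events into logical roles, so that an interaction technique written against "pick" or "locator" is independent of the mouse, tablet, or dial that produced the event. The sketch below is an invented illustration (the event format and function are mine; only the four device names come from the literature):

```python
# Illustrative only: mapping hypothetical physical events to the four
# logical input devices (pick, button, locator, valuator).
LOGICAL_DEVICES = {"pick", "button", "locator", "valuator"}

def classify_event(event):
    """Map a hypothetical physical event to a (logical device, payload) pair."""
    kind = event["kind"]
    if kind == "mouse-move":
        return ("locator", event["xy"])       # a position on the display
    if kind == "mouse-click" and event.get("target"):
        return ("pick", event["target"])      # selects a displayed object
    if kind == "mouse-click":
        return ("button", event["id"])        # a discrete choice
    if kind == "dial-turn":
        return ("valuator", event["value"])   # a continuous scalar input
    raise ValueError(f"unmapped physical event: {kind}")

print(classify_event({"kind": "mouse-move", "xy": (40, 25)}))
# ('locator', (40, 25))
```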
More recently, in the GIS field, there has been sufficient research interest that an edited collection of papers entitled Human Factors in Geographical Information Systems (Medyckyj-Scott and Hearnshaw, 1993) has been produced. However, none of the research presented in that volume has direct bearing on the areas of interest in this research.
A natural extension of the effort to design computer interfaces so
that they are effectively transparent to the user and satisfy the human
factors criteria presented above, has been the use of direct manipulation
interfaces (Shneiderman 1983). Such interfaces as the Xerox Star and
Apple Macintosh desktops (Smith et al 1982) seek to exploit the use of
metaphors. Specifically both the Star and Macintosh attempt to use the
metaphor of the desktop, where the arrangement of the computer screen
and the interaction with graphical components of the interface is meant
to be an extension of the interaction of a worker with the files and items located on a typical office desk. This has resulted in easy-to-learn user interfaces and an increase in popularity of the Apple Macintosh computers with less computer literate individuals. The success of this type of interaction and these interfaces has resulted in the Graphical
User Interface (GUI) with its associated windows, icons, and a mouse as a pointing device, being adopted for use in many different computing environments. SunView, Open Look, Motif, DECWindows, Microsoft
Windows and NextStep are some of the currently available GUIs for microcomputers and workstations.
The success of the desktop metaphor, and metaphors in general, is also being explored in the GIS area. Gould and McGranaghan (1990) discuss the use of different metaphors in Geographic Information Systems and find that some metaphors can be used successfully, but that the use of a map or a map library metaphor in a GIS user interface may not be appropriate. They suggest, however, that the use of nested metaphors may be valuable to provide appropriate organization to the system in a way that is relevant to the user's view of the task or application domain.
Wilson (1990) finds fault with the use of the desktop metaphor as indicated in the title "Get Your Desktop Metaphor Off My Drafting
Table...". As Wilson points out, a critical component in the design of the user interface, the underlying conceptual model for the interaction, is often overlooked in the design of spatial data handling systems.
Mark (1992) also examines metaphors and comes to the conclusion that since most non-spatial information systems rely on spatial metaphors for interaction, general-purpose computing is conceived of as spatial data handling. Therefore interface designers for spatial information systems must be careful that users do not confuse spatial data and its manipulation in the system, with the graphical interaction methods in a direct manipulation interface. Burrough and Frank (1995) also address how users perceive interaction with the GIS. Their
conclusion is: "...spatial data analysis tools need to be chosen and developed to match the way users perceive their domains: these tools should not impose alien thought modes on users just because they are impressively high tech (Burrough and Frank, 1995)." How then are user interfaces modeled and what is their conceptual basis? This is discussed in the next section of this chapter.
Models of Human Computer Interaction
The basic model of human-computer interaction that is most widely applied is one that was proposed at the Seeheim Conference on
User Interface Management Systems (Pfaff 1985). This model is known as the Seeheim Model and is shown in Figure 2.9. The Seeheim Model is a very simple model of the interaction between the user and the application program, with the user interface existing as the dialogue between the components; it is very similar to the basic cartographic communication model. A more recent model of the communication between the user and the application is that presented by Took (1990).
Took proposes a different approach for separating the application and the user, the surface interaction paradigm. The Seeheim model only specifies the user interface as the dialogue occurring between the user and the application; Took takes this one step further by making the user interface
a separate component of the overall system. Whereas the Seeheim model
relies on the underlying application to manipulate the presentation of the
dialogue, in Took's surface interaction paradigm the interface, or surface,
is responsible for changing the presentation (Figure 2.10). Took requires
that this surface domain (i.e. the graphics and text presented to the user) be independent of the application's semantics or the deep interaction (i.e.
the interaction among applications or within the application program).
Although this approach is related to that of a window manager, such as X,
Took requires that the separation include the contents of the windows, not just the arrangement and appearance as is provided by the X
Windows Toolkit.
Figure 2.9 The Seeheim Model of the User Interface (from Pfaff 1985)
Figure 2.10 Surface Interaction Paradigm (from Took, 1990)
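The separation Took proposes can be illustrated with a short sketch. The following Python fragment is purely illustrative (the class and method names are invented for this example): the surface object owns all presentation state, while the application holds only semantics and must request presentation changes rather than drawing directly.

```python
class Surface:
    """Owns everything the user sees (the 'surface domain')."""
    def __init__(self):
        self.presentation = {}            # e.g. window contents

    def render(self, name, content):
        # Only the surface may change what is presented.
        self.presentation[name] = content

    def user_event(self, name):
        # Translate a surface event into a deep-interaction message.
        return ("selected", name)


class Application:
    """Holds application semantics only ('deep interaction')."""
    def __init__(self, surface):
        self.surface = surface

    def open_document(self, title, text):
        # The application requests presentation; it never draws directly.
        self.surface.render(title, text)


surface = Surface()
app = Application(surface)
app.open_document("report", "Quarterly figures")
print(surface.presentation["report"])   # Quarterly figures
```

Under the Seeheim model, by contrast, the application itself would manipulate the presentation; here the surface mediates all of it.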
The above models of the separation of the user interface and the application are representative of the literature in this area. There are many more variations on these concepts, but these illustrate the types of models that have been used. A comprehensive look at the separation of the interface and applications is done in The Separable User Interface (Edmonds, 1992). This collection consists mostly of previously published papers that address early communication models of human-computer interaction, specification for user interface separability, and the architecture and practical aspects of these concepts. Some of this work will be referenced later in the discussion of specification. It is appropriate to point out that this collection of work is based on the communication model of the user interface and does not include the research by Took (1990) in the same area.
Although the two models discussed above can both be used as
general models of the user interface, there are many more levels of
human-computer communication that must be considered in the
development of the interface and that have been explored. Moran (1981)
has developed the Command Language Grammar (CLG) model that takes
into account many more different levels of interaction than those specified
above. The components of CLG are grouped into three major categories:
conceptual, communication and physical. The conceptual category includes concepts related to the user task and abstraction of that task; the communication component is the command language or other means of interaction with the application; and the physical component includes the type of display and pointing devices used on the computer system. Moran
groups these components into four distinct levels: the task level, the semantic level, the syntactic level and the interaction level. These levels of specification of the interface are useful for dividing the interests of user interface researchers into groups with three different purposes or views.
The linguistic view's primary goal is the study of the command language used to interact between the system and the human; the psychological view is concerned with describing the user's mental model of the system; and the design view concerns the representations used to specify the system design. All of these views and levels of interaction are accommodated by the Command Language Grammar.
An even more elaborate model of human-computer interaction is the virtual protocol model proposed by Nielsen (1986). This model expands on the command language grammar model of Moran by specifying seven different levels of human-computer interaction based on the ISO Open Systems Interconnection (OSI) model of physical communication between computer networks. In this model, illustrated in
Figure 2.11, messages at a specific level are exchanged between the two sides by virtual communication at the next lower level. The higher levels of this interaction specify the conceptual processes and components of the discourse, while the lower levels specify the form or appearance of the communication.
The top level of Nielsen's model is concerned with the goals of a system; these are real world concepts that are external to the computer system. The task layer deals with the system concepts, in other words what kind of objects are available in the system and how they can be manipulated. The semantics layer of Nielsen is not defined in the usual way as the meaning of an action but as an examination of the detailed functionality of the system. This includes what specific objects are in the system (rather than categories of objects) and what specific operations can be done. To summarize the difference between the task and semantics levels in this model, the task level deals with what types of things and actions are represented by the system, in other words how a task might be described generically using the system concepts. The semantics level deals with specific objects and actions on those objects, i.e. what specific operations are necessary on what specific system objects to accomplish the task.
The next layer of interaction, the syntax layer, deals with what specific sequence of commands must be issued to accomplish the semantic operation. The lexical level merely expresses the syntax in system tokens, or the smallest units of information in the system, the individual words or actions in the command sequences. The final two levels of
Nielsen's model are the alphabetic, where the primitive symbols that are used to construct the tokens are specified, and the physical level that represents the actual physical actions such as the pressing of a key on the computer keyboard.
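As an informal illustration of this layering, the fragment below traces one invented interaction through the seven layers; the example content is hypothetical and is not drawn from Nielsen's text.

```python
# Illustrative only: one invented interaction ("delete a file") traced
# through the seven layers of Nielsen's virtual protocol model, from
# the real-world goal down to physical action.
LAYERS = ["goal", "task", "semantics", "syntax", "lexical",
          "alphabetic", "physical"]

decomposition = {
    "goal":       "get rid of an obsolete report",
    "task":       "delete a file object",
    "semantics":  "apply the delete operation to the file 'report.txt'",
    "syntax":     "command name followed by file name",
    "lexical":    ["rm", "report.txt"],          # tokens
    "alphabetic": list("rm report.txt"),         # primitive symbols
    "physical":   "key presses on the keyboard",
}

# Messages at each layer are realized by virtual communication at the
# next lower layer; printing top-down makes the refinement visible.
for layer in LAYERS:
    print(f"{layer:>10}: {decomposition[layer]}")
```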
These different models of human-computer interaction include not only an examination of the goal and task levels involved in the discourse with the computer but also provide a means of specifying the form of the interface. This last component, that of specification of the interface, has
been the subject of extensive research in user interfaces and will be
discussed next.
Figure 2.11 The Virtual Protocol Model of Interaction (from Nielsen 1986)
Specification of Human Computer Interaction
The specification of the user-computer interface using various techniques is valuable because it can be a formal method for evaluating the interface as well as a means to automate its construction. The most common techniques used to specify human-computer interaction have been the use of formal grammars (Shneiderman 1982) and graphical specifications using state transition diagrams or other types of transition networks (Jacob 1983, Wasserman 1985). The next portion of this review will concern itself with an examination of these methods as well as a few of the less common means of specification.
Formal grammar specifications of user interfaces generally use as their basis the Backus-Naur Form (BNF), also called Backus Normal Form. Backus (1960) first applied this grammar to computer language specification when specifying ALGOL (Naur 1960).
Originally this formal grammar notation was used in linguistics by
Chomsky (1964, Reisner 1981), and has been used in modified form by computer scientists for compiler specification (Holub 1990), and in defining user interfaces (Shneiderman 1982). A BNF specification uses a set of terminal symbols and a set of definitions (non-terminals) by which every legal structure in the system being defined must be represented.
Strict BNF rules allow only one operator, which means "is defined as" (Holub 1990). A system is specified by writing production rules that represent every component of the system, such that all non-terminals are ultimately defined by terminal symbols.
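A minimal sketch of such a specification, using an invented two-word command language, may make this concrete. The recognizer below simply tests whether a sentence can be derived from the start symbol by expanding non-terminals until only terminal symbols remain; the grammar and commands are hypothetical.

```python
# A BNF-style grammar for an invented command language. Non-terminals
# are written <like-this>; everything else is a terminal symbol, and
# each list of alternatives plays the role of "is defined as".
grammar = {
    "<command>": [["<verb>", "<object>"]],
    "<verb>":    [["draw"], ["erase"]],
    "<object>":  [["point"], ["line"]],
}

def derives(symbol, tokens):
    """Return the leftover tokens for each way `symbol` can match."""
    if symbol not in grammar:                      # terminal symbol
        return [tokens[1:]] if tokens and tokens[0] == symbol else []
    results = []
    for production in grammar[symbol]:             # each alternative
        rests = [tokens]
        for part in production:
            rests = [r2 for r in rests for r2 in derives(part, r)]
        results.extend(rests)
    return results

def is_legal(sentence):
    # Legal if <command> derives the sentence with nothing left over.
    return [] in derives("<command>", sentence.split())

print(is_legal("draw line"))    # True
print(is_legal("line draw"))    # False
```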
Various problems have been noted with the use of BNF specifications for the human-computer interface; thus Shneiderman
(1982) discusses and extends the BNF grammar to correct some of the deficiencies. One of the problems Shneiderman notes is that BNF notation is geared toward batch programming languages; therefore he extends it by allowing each non-terminal (i.e. each specification that can be defined by components) to be specified with a party identifier. The party identifier specifies which part of the interface or program is involved in this particular interaction. This allows the human and computer components of the discourse to be uniquely identified and allows for the addition of more than one party on either side of the dialogue. Additional enhancements include the assignment of values to non-terminals and a notation to specify the output of that value, as well as a wildcard non-terminal that matches any string if no other parse succeeds. Among the advantages of using a formal grammar such as BNF is that a complete description of a system can be built which can be implemented, debugged and constructed with the aid of compiler construction tools such as 'yacc' (yet another compiler compiler).
Reisner (1981) uses an extended BNF notation to evaluate the design of two interactive graphics systems. Although there are some problems with the use of this notation, she finds the formal analysis accomplished of great benefit. One benefit Reisner finds is the ability to analyze a model before implementation; this allows the designer to examine the notation to find inconsistencies. For example, the design might include areas where different steps are necessary to accomplish the same task. This also enforces precision in design, such as making sure that prerequisite actions are always accomplished. The formation of testable hypotheses can also be aided by allowing the designer to specify what actions or components are being tested. For example, if a designer wishes to compare different types of selection actions in a particular task, those actions can be specified in the notation and the specifications used to predict the length of the interaction necessary to accomplish that task.
Richards and others (1986) use Reisner's extensions to BNF, as well as Moran's Command Language Grammar, to analyze the MINICON user interface to UNIX. In applying both notations to the task, limitations of each grammar were noted, but both could be used to evaluate the user interface. Some of the limitations of BNF were corrected in Bleser and Foley's (1982) extension to the grammar. In this study, in addition to adding enhancements to BNF similar to some of
Reisner and Shneiderman's extensions the researchers developed a means to specify environment variables. This is done in their notation by using attribute lists that define environment parameters (such as window size and background color) using an attribute declaration similar to the C
language data structure definition. These attributes would have a local scope and could be referenced by a parameter list of the non-terminal.
Another type of specification that is frequently used is User Action Notation (Hix and Hartson, 1993). User Action Notation (UAN) is a notation that has been devised for representation of the behavioral aspects of human-computer interaction. The notation includes symbols for representing mouse press and release events as well as the movement of the mouse. This makes this specification technique well suited to the definition of graphical user interfaces; however, the notation is also somewhat hard to learn and read. Although the specification of human-computer interaction using notations has been used successfully, many researchers find other means of specification more appropriate; one of these is the use of transition diagrams.
Parnas (1969) was one of the first researchers to advocate the use of state transition diagrams for the specification and construction of user interfaces. Jacob (1982, 1983, and 1985) built on Parnas' approach and details the construction of a user interface using a design specified with state transition diagrams. In Jacob's comparison of state transition diagrams and BNF notation (1983) he indicates that state transition diagrams are superior because sequence is explicit in the diagrams but only implicit in BNF specifications. However, Jacob does concede that BNF and state transition diagrams are functionally equivalent for specifying human-computer interfaces and even uses a grammar representation of state transition diagrams to implement the interface discussed. In subsequent research Jacob (1985) and Wasserman (1985) develop transition diagram interpreters that use the graphical specification of the state transition diagrams directly.
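A transition diagram interpreter of the kind Jacob and Wasserman describe can be sketched in a few lines; the dialogue below (a simple select-drag-release interaction) is invented for illustration. Note that the legal sequence of events is explicit in the transition table, which is the basis of Jacob's argument for diagrams over BNF.

```python
# A toy state transition diagram interpreter. Each state maps an
# input event to the next state; any event with no outgoing arc
# from the current state is an illegal sequence.
transitions = {
    "start":    {"select": "selected"},
    "selected": {"move": "selected", "release": "start"},
}

def run_dialogue(events, state="start"):
    """Drive the diagram over a sequence of events."""
    for event in events:
        arcs = transitions[state]
        if event not in arcs:
            return None                  # illegal event sequence
        state = arcs[event]
    return state

print(run_dialogue(["select", "move", "release"]))   # start
print(run_dialogue(["move"]))                        # None
```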
Other researchers have used algebraic and set theoretic notations to specify interaction dialogues (Chi 1985, Hill 1987); in addition, some researchers have constructed systems that specify the human-computer interaction component by having the designer demonstrate the interaction required (Myers and Buxton 1986, Myers 1987). In Peridot
(Myers 1987, 1988), since the human-computer interaction is entirely specified by demonstration, no formal specification of the user interface is actually constructed and thus the advantages of using a formal specification for definition of the interface are lost. Although this particular system of implementing a user interface does not use a specification notation, many other systems for constructing, prototyping and evaluating user interfaces formally define the interface in some way.
A recent development by Carr (1995) is the Interaction Object Graph (IOG). This specification technique is: "... a blending of statecharts for control, User Action Notation for event descriptions, and an abstract model of the user interface (Carr, 1995)". Although Carr successfully tested his technique for ease of understanding, his main use of these graphs is to specify widgets, such as sliders and buttons. This work has not yet been used to specify complete systems, and would appear to be rather unwieldy when the number of widgets specified became large.
The major application of techniques for specification of user interfaces is in the area of User Interface Management Systems (UIMS).
UIMS are systems that have been created for managing human-computer dialogue in the prototyping and programming of applications. Many books (such as Olsen, 1992) are available which discuss UIMS and their construction and choice of representation techniques.
Summary
This chapter has reviewed some of the models of communication that have been used in both the cartographic literature and the human-computer interaction literature. Although the traditional cartographic models that were originally developed from the communication model have fallen out of favor, more recent models such as Moellering's real and virtual maps model have been developed to extend this research into the computer era. The promise of using real and virtual map transformations to specify cartographic systems, although explored by Dr. Moellering, has not been expanded to include representation of the various parties to the design and interpretation of a map.
The human-computer interaction models started from the same roots as the cartographic communication models. These models have served as the basis for the development of such cognitive and task models as Moran's Command Language Grammar. As an expansion of these models Nielsen developed a seven layer virtual protocol model that can be used to describe the interaction between the user and the computer. This model combined with some ideas from cartography and other areas of computer science will serve as the theoretic background for the model presented in the next chapter.
CHAPTER 3
THE RESEARCH DESIGN
The literature reviewed in the previous section outlines the pertinent areas of past research in this area. Discussion will next concentrate on the development of a research agenda for specific components of user interfaces in analytical cartography and spatial analysis systems. This research will first elucidate the connections between the virtual protocol of Nielsen (1986), the surface interaction paradigm of Took and the real and virtual maps paradigm of Moellering, and how these concepts apply to modeling human-computer interaction in analytical cartography and geographic information systems. The adoption of these concepts will be shown to be useful by an application of formal language specification to communication between the cartographer and the cartographic system.
Human-Computer Interaction in Cartographic Systems
Many of the communication models of cartography are not appropriate for use as the underlying model for cartographic systems, since they are static, graphic representations of the different components of cartographic communication. There has been no research that has successfully implemented these models and they are not suitable for use in interaction between the user and the application. However, one of the cartographic models, Moellering's real and virtual maps model is appropriate since it has the capability of expressing transformations between different forms of cartographic representation of reality. This model of cartographic processing will be used to help specify the human- computer interaction in cartographic systems for this research. In addition, the different levels of data structure developed by Nyerges
(1981) and the various levels of interaction specified by Nielsen (1986) have the potential to be useful in examining graphical interaction.
Nyerges' (1981) six different levels of data structure can be adapted and used in the specification of the cartographic interface. These levels are (Nyerges, 1981):
1) Information Reality
2) Information Structure
3) Canonical Structure
4) Data Structure
5) Storage Structure
6) Machine Encoding
The expression of these data structure levels in relationship to deep and surface structure ties quite nicely into the similar concept of surface interaction used by Took (1990) in describing interaction in computer systems. The concept of surface interaction specifies human-computer interaction as taking place between the user and the application, across the surface or medium of the user interface. In this interaction paradigm all surface presentation is controlled by the user interface in reaction to requests from either the application program or the user. Communication between application processes or different applications that does not require a change to the surface presentation is termed "deep interaction". The primary difference between the Seeheim model of user interfaces and that of Took is that the surface presentation (of the interface) in the Seeheim model is controlled by the application and is not abstracted and separated from the application as extensively as in Took's surface interaction paradigm. This difference, although it may appear slight, is very important because of the current state of the art in window management systems. Multi-tasking operating systems may have many different applications that are executing at the same time and affecting different parts of the interface presented to the user. Therefore the user interface, in Took's model, must contain some intelligence or some knowledge about the applications executing to handle the
presentation components of the interface. In addition the intelligence
present in the interface can be used to specifically manage the surface
presentation for individual users, as in an Intelligent User Interface (IUI)
(Rissland 1984). In these interfaces the surface presentation and
interaction methods can be tailored for the individual user, not only when
initially configured, but as the user interacts with the interface. Thus the
user interface must handle more than just updating the surface structure presented to the user; it must provide additional services. It is when the interface is extended to include these concepts that the advantages of the separation built into Took's surface interaction paradigm become essential.
The advantages present in the adoption of the surface interaction paradigm in dealing with human-computer interfaces make it the most reasonable model to use as the underlying general model for the specification of cartographic interaction. Therefore this research will represent the human-computer interface as a surface structure that has interaction with both the user and the application programs. This adoption of surface interaction as the underlying model for this work extends Nyerges' deep and surface structure concepts to the user interaction area. The definition of real and virtual maps by Moellering
(1984) is essential to the resulting model as it specifies the abstractions of
the cartographic products in components that seem to match the surface interaction paradigm.
Methods of Specifying Cartographic Interaction
The specification of cartographic interaction in spatial analysis systems can occur at many different levels of abstraction. The levels of data structure presented by Nyerges (1981) can be used as a basis for these abstractions, as could the levels used in Nielsen's virtual protocol model discussed previously. In a comparison of these two schemes of abstraction, the Goal Layer of Nielsen can be related to the Information
Reality or Data Reality level of Nyerges (Figure 3.1). In the Goal Layer the user deals with real world concepts, as does the Information Reality level of data structure specification. The Task Layer deals with the conceptual tasks that are used to process the real world concepts, and is similar to the Information Structure level where the types of relationships and the information structure are devised. The Canonical Structure level is similar to the Semantics Layer where the meaning of the interaction is specified, i.e. the operations are related to each object, independent of the actual application or implementation. Unfortunately the semantics level as defined by Nielsen does not relate to the meaning of the interaction but to a more definable concept, operations on an object. This
research therefore will deal with the semantics on this level. The Data
Structure level of Nyerges corresponds to the Syntax and Lexical levels of
Nielsen's protocol model where the specific sequencing and structure of the various commands are developed and are specified so that specific relationships and structures exist independent of actual implementation.
The Alphabetic Layer corresponds to the Storage Structure in the data abstraction; in this layer the letters and digits of the interactions are handled, and the Storage Structure represents the specification of structure in implementation. The final layer, the Physical Layer, handles the actual exchange of the computer signals in the interaction, while the corresponding data structure level, Machine Encoding, handles the specific addressing and machine representation of the data structure.
These two models of abstraction were devised for specifying different components of computer systems, but are functionally similar in their considerations. Since Nielsen's model is specifically designed for human-computer interaction it will be used in this research to abstract the different levels of specification of the human-computer cartographic interaction used in this project.
Nyerges Data Structure Levels    Nielsen Virtual Protocol Model
Information Reality              Goal
Information Structure            Task
Canonical Structure              Semantics
Data Structure                   Syntax
                                 Lexical
Storage Structure                Alphabetic
Machine Encoding                 Physical

Figure 3.1 Comparison of Nyerges and Nielsen Models
The Goal and Task Layers of the protocol model, although they can be easily described, are perhaps the most difficult to specify formally.
However, various specifications, such as the Command Language Grammar developed by Moran (1981), do provide for specification of task entities, tasks, task procedures and task methods. Even Moran concedes that this portion of CLG needs further extension in this area. The use of
CLG at the Semantic Level does not provide any more formal method of specification than at the Task Level but provides specification of system
entities, system operations, user operations, semantic procedures and
semantic methods. One method of specifying interaction at these levels in
a more precise, although still informal fashion, is to use the map
transformations specified by Moellering (1983) to examine the semantic operation of the interface. The examination and specification of cartographic interfaces in this research will first be done descriptively at these levels and enhanced with the specification of the real and virtual map transformations.
The Syntax, Lexical and Alphabetic Layers of Nielsen are equivalent to the Syntactic and Interaction levels of CLG (Nielsen 1986) and are the primary areas of interest in this research. All of the specification of interfaces with the BNF notation or state transition diagrams occurs at these levels. It is important to note that no current research in analytical cartography or geographic information systems is concerned with this level of specification. Although some researchers have considered the use of task analysis and an examination of the requirements for the users of spatial analysis systems (Egenhofer and
Frank 1988, Gould 1989), no one has attempted to evaluate and compare these systems at this level of interaction. Any specification of an interface in BNF or using state transition diagrams must deal with both the commands available in the system and the means of the human
interaction with the system. Therefore, when specifying the components of the interface, the specification of the virtual devices of interaction (Foley and Wallace 1974; Foley, Wallace and Chan, 1984) and the syntax of the command must be dealt with for a complete system definition. The specification notation used for describing and evaluating the surface interaction of cartographic users will be a version of BNF notation. This BNF notation is extended to allow specification of the form of the output and input via virtual devices and uses a means similar to Bleser and
Foley's for specifying the environment parameters necessary. The next section of the research design will specify each of the components discussed above in more detail.
Specifying Human-Computer Interaction in a Spatial Setting
As indicated above, the underlying conceptual model of the user interface in the cartographic interaction model will be the surface interaction paradigm of Took (1990). This model is used since it provides a better separation of the human-computer interaction from the application program. An additional advantage is the ability of the user interface to include intelligent components that allow the incorporation of beneficial enhancements in the human-computer dialogue. Although not all user interfaces may meet the standards set by using this conceptual model, it serves as an ideal for the evaluation of various interfaces.
In the process of evaluation of cartographic interaction the virtual protocol levels presented by Nielsen (1986) will be used to organize the specification. At the Goal Layer statements can be made about the use of the evaluated system, and although these statements cannot be specified formally they prevent two systems from being compared which were designed for different purposes. The specification of the interface at the
Task Layer is also informal and consists of stating the objects and operations available in each system. In this layer the emphasis is not on whether a specific function is available using one command, but whether a specific operation can be accomplished, either with one command or by stringing together some sequence of the commands available. This layer of the specification gives an indication of whether the same operations and objects are available in each system, even though they may be implemented in different fashions. An indication of the tasks available in the system can be clarified by expressing the operations using the real and virtual map transformations described by Moellering (1984).
The next three layers of the virtual protocol model are those that are incorporated in the specification by representing the system actions as a set of BNF rules. The Semantics Layer of interaction can be represented by the higher-order non-terminals in BNF, and the specific syntax
(Syntax Layer) of the operations is described by lower level productions involving both terminals and non-terminals of the language. The Lexical
Layer of the protocol model is described by the lowest order productions in
BNF, where these productions involve the terminal components present in the interface. The Alphabetic Layer of the model comprises the most primitive components of the interaction, the lexemes, which must be used to specify the tokens of the Lexical Layer. Since this layer consists of individual letters and digits as well as line color and other attributes of graphics, these elements serve as terminals in the productions of the Lexical Layer and will not be decomposed any further. The remaining layer that
Nielsen uses in the virtual protocol model is the Physical Layer. This layer is concerned with the actual specification of the recording of the individual keys and specific buttons pressed by the user. Since the specification of the interface relies on the definition of virtual devices as described in Foley and Wallace (1974), this layer will not be dealt with in the specification.
Since the purpose of this means of specifying the user interface is the comparison, evaluation and automation of the design of the user interface, it is appropriate to enumerate some of the specific uses of the notation at this point. Reisner (1981) points out three aspects of the notation that can be evaluated:
1. The number of different terminal symbols,
2. The length of the terminal strings for particular tasks, and
3. The number of rules necessary to describe the structure of some set of terminal strings.
As Reisner discusses, the purpose of the first aspect is to evaluate the total number of individual operations present in the interaction. The second aspect concerns the number of steps a user would need to perform some sub-task, and the third criterion represents the number of different steps required to accomplish similar operations. The benefits Reisner finds in these specifications are the ability to analyze a model before implementation, the enforcement of precision in design, the formation of testable hypotheses, and the ability to automatically detect and quantify the intrinsic properties of easy-to-use systems. Shneiderman (1982) also describes similar uses for a formal description of the interface, as well as pointing out how the use of this formal notation can lead to fewer errors and problems in the implementation of an application interface.
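Reisner's three measures can be computed mechanically once a grammar is encoded in machine-readable form. The following Python sketch is illustrative only: the dictionary encoding of a grammar, the helper names, and the toy "legend" grammar are assumptions for this example, not part of any specification in this research.

```python
# Hedged sketch: computing Reisner's (1981) three grammar measures over a
# toy grammar encoded as {non_terminal: [alternative symbol sequences]}.
# All names here are illustrative assumptions.

def is_terminal(symbol):
    # Convention used in this document: terminal symbols are capitalized.
    return symbol.isupper()

def reisner_measures(grammar):
    # Measure 1: number of different terminal symbols.
    terminals = {s for alts in grammar.values() for alt in alts for s in alt
                 if is_terminal(s)}
    # Measure 3: number of rules (each alternative counts as one rule).
    rule_count = sum(len(alts) for alts in grammar.values())
    return {"terminal_symbols": len(terminals), "rule_count": rule_count}

def terminal_string_length(grammar, start):
    # Measure 2: length of one terminal string for a task, expanding each
    # non-terminal by its first alternative (a deliberate simplification).
    def expand(symbol):
        if is_terminal(symbol):
            return 1
        return sum(expand(s) for s in grammar[symbol][0])
    return expand(start)

# Toy grammar for a hypothetical "position the legend" task.
grammar = {
    "legend": [["SELECT_LEGEND", "position"]],
    "position": [["LOCATE", "CONFIRM"], ["TYPE_COORDS"]],
}
print(reisner_measures(grammar))            # {'terminal_symbols': 4, 'rule_count': 3}
print(terminal_string_length(grammar, "legend"))   # 3
```

Applied to the full specifications in the appendices, the same tallies support the complexity comparisons made later in this research.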
Although Jacob (1983) indicates reasons why BNF is inferior to the use of state transition diagrams for the formal specification of interfaces, most of his objections appear to be personal bias. Although BNF in its pure form is not easily used for this task, neither are state transition diagrams.
State transition diagrams are extended by Jacob to include sub-diagrams so that the specifications can have easily understandable abstractions.
This is taken care of in BNF notation by only examining non-terminal rules at the appropriate level of abstraction. In addition, BNF notation is particularly suited for the specification of concurrent processes, since the order of execution of the rules is implicit rather than explicit as in state transition diagrams. Also, Jacob's initial efforts (1982, 1983) with state transition diagrams required the encoding of the diagrams into a formal notation before use in an automated setting, thereby counteracting the advantage of the graphic representation. Since both BNF notation and state transition diagrams have strengths and weaknesses, the advantage of one over the other is not clear, but because of the extensibility of BNF notation in particular areas of interest to this research, BNF notation will be used for formal specification of the interface.
The specification of the various parts of the interaction is defined in Backus-Naur Form with various extensions to allow a more concise notation. These extensions are derived from the work of Reisner (1981),
Shneiderman (1982) and Bleser and Foley (1982). The basic form of the
Backus-Naur production rule is:
sentence ::= subject predicate
In this example the left-hand side of the production (sentence) is defined as (::=) a subject followed by a predicate. Each production in the BNF notation defines a non-terminal symbol as consisting of one or more non-terminal or terminal symbols. In this specification a terminal symbol will be indicated by an expression in capital letters. The specification of a grammar continues until all non-terminal symbols are ultimately defined by reference to terminal symbols. Since the '::=' operator is the only operator allowed in pure BNF notation, some extensions will be described to make the notation more compact. In general, the order of the BNF components will be the order in which those components appear in the interface. The metasymbol '|' indicates that the symbols on either side may define the production, e.g. c ::= ab | ba, indicating that 'a' may be followed by 'b' or that 'b' may be followed by 'a'.
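The behavior of the '|' metasymbol can be illustrated by a small generator. This Python sketch is an assumption made for illustration: the dictionary encoding of the production c ::= ab | ba and the function name are not part of the notation itself, which has no execution semantics.

```python
# Hedged sketch: enumerating the terminal strings generated by the
# production  c ::= ab | ba . The encoding is an illustrative assumption.

def generate(grammar, symbol):
    """Yield every terminal string derivable from `symbol`."""
    if symbol not in grammar:           # symbols with no rule are terminals
        yield symbol
        return
    for alternative in grammar[symbol]:     # '|' separates alternatives
        # Expand each symbol in the sequence and concatenate the results.
        strings = [""]
        for s in alternative:
            strings = [prefix + suffix
                       for prefix in strings
                       for suffix in generate(grammar, s)]
        yield from strings

grammar = {"c": [["a", "b"], ["b", "a"]]}
print(sorted(generate(grammar, "c")))   # ['ab', 'ba']
```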
To make it easy to distinguish between symbols composed of several words and individual symbols, brackets will be placed around each symbol. Unfortunately, none of the systems targeted for this research handles errors other than by returning an error message, so the action in the case of any of the specified systems would be the same. Identifiers will be used, as in Shneiderman (1982), to indicate which party in the interaction is referenced by the symbol. This can be used to specify output to different windows in a multi-window environment or to represent the actions of different sub-modules of the user interface. The specification of different program segments using this portion of the notation can easily show how completely the interface being specified conforms to the surface interaction paradigm. An example of how identifiers will be used is:

< Sfc:report > ::= < DW:text_obj > ( < NULL > | < GW1:graphic_obj > )
This brief example is meant to illustrate that the user interface symbol surface structure 'report' is defined as the surface display of text in a dialogue window (DW). The symbol 'text_obj' is the text received from the application, and either nothing else (NULL) or a graphic object displayed in graphic window number one (GW1). In this example "text_obj" and
"graphic_obj" are variables that are separately defined to describe the text and graphics presented to the user. Other variables could be used as environmental variables to define such characteristics as text color, or the graphics line color. The party identifiers used in this research will all belong to an appropriate level of interaction as defined by Took’s
interaction model and Moellering's surface and deep structure cartographic interaction model. For instance, a user interface action is a surface structure action. A deep structure action would be an action apparent in the program that did not have any reflection on the display.
Finally, the user can be the initiator of actions that can result in surface structure or deep structure interaction. Environmental variables can be defined for each characteristic of the interface objects presented, as in the following example:
background_color = BLUE, linecolor = RED, transparency = OFF.
In this type of specification, terminal symbols are also indicated in capital letters (e.g. BLUE).
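Environmental variables of this kind behave like object attributes that sub-objects can inherit, as the later task outline notes for line types and colors. The following Python sketch illustrates that idea; the class and attribute names are assumptions made for this example only.

```python
# Hedged sketch: environmental variables (background_color, linecolor, ...)
# as object attributes inherited by sub-objects. Names are illustrative
# assumptions, not part of the specification notation.

class GraphicObject:
    def __init__(self, parent=None, **env):
        self.parent = parent
        self.env = env          # e.g. background_color="BLUE", linecolor="RED"

    def lookup(self, name):
        # A sub-object inherits any attribute it does not define itself.
        if name in self.env:
            return self.env[name]
        if self.parent is not None:
            return self.parent.lookup(name)
        raise KeyError(name)

page = GraphicObject(background_color="BLUE", linecolor="RED")
legend = GraphicObject(parent=page, linecolor="BLACK")
print(legend.lookup("linecolor"))          # BLACK (overridden locally)
print(legend.lookup("background_color"))   # BLUE  (inherited from page)
```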
All of the extensions to be used with this style of BNF notation have now been presented with one exception, the use of virtual devices.
The virtual devices used in this notation will be those specified in Foley and Wallace (1974) and will be non-terminals defined to return specific types of values, as presented below:
< pick > ::= < U:move_cursor > < U:BUTTON > < App:object_id >
< button > ::= < U:KEY > < App:action >
< locator > ::= < U:move_cursor > < U:select > < Sfc:coordinate >
< valuator > ::= < U:move_cursor > < U:select > < Sfc:value >
The above examples of specification of four types of virtual devices are not the only formulations that can be used to specify these actions. An interpretation of the expression of the virtual device 'pick' would be that the user selects an object by moving a cursor over that object and pressing a button, and the application program returns an object id. The button virtual device consists of a key press on the part of the user which triggers some action on the part of the program. The locator device consists of the user moving a cursor through some unspecified action, and upon a selection action initiated by the user, the user interface returns a coordinate of some unspecified type (e.g. relative or absolute). The final virtual device specified above is the valuator; this device consists of the user moving a cursor, followed by the user completing a selection action that causes the user interface to return a value. The appeal of this means of specifying the virtual devices is the use of the many non-terminal symbols in the definition. This allows the individual implementations that depend on different physical device types to be defined in the same way conceptually, as is discussed in Foley and Wallace (1974).
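The device-independence point can be sketched in code: each virtual device is characterized only by the type of value it returns, never by the physical hardware that produced it. The Python class and field names below are assumptions for illustration, not Foley and Wallace's notation.

```python
# Hedged sketch of the four Foley and Wallace (1974) virtual devices as
# plain data types. The dialogue layer sees only the returned value type;
# which physical device produced it is invisible. Names are illustrative.

from dataclasses import dataclass

@dataclass
class Pick:        # user selects a displayed object -> object id
    object_id: str

@dataclass
class Button:      # user triggers one of a set of discrete actions
    choice: int

@dataclass
class Locator:     # user indicates a position -> coordinate
    x: float
    y: float

@dataclass
class Valuator:    # user supplies a scalar value
    value: float

def describe(event):
    # A mouse click and a tablet stylus can both yield a Locator event;
    # the specification treats them identically.
    return type(event).__name__

print(describe(Locator(10.0, 20.0)))   # Locator
print(describe(Pick("county_12")))     # Pick
```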
Research Tasks
The specifications and evaluations will primarily be of existing cartographic or spatial analysis systems. Systems chosen for these evaluations are particular modules of the ARC/INFO GIS, the GIMMS computer mapping package, and Professor Moellering's NASA Spatial
Data Display Project's analytical cartographic systems. It is useful as a test of the power and value of the specification language to compare the evaluations of two different systems that have similar Goal and Task components. With this in mind the ordering of the comparison is the
ARCPLOT module of ARC/INFO with GIMMS and the ARC/INFO View module with the TIN component of the Spatial Data Display system.
Since this compares systems with similar capabilities, the specifications generated are valuable for analyzing the similarities and differences in the pairs of systems. The systems can be evaluated on their complexity, as documented in the specification, as well as the ease of manipulation of the various capabilities as indicated in the complexity of the equivalent non-terminals in the appropriate productions.
Another task is the specification of an enhanced window interface for the Spatial Data Display system. This system currently runs under
UNIX on a Tektronix 4337 using the DI-3000 graphics package with a command line interface. A version of X is available that allows the use of
DI-3000 in a graphic window, and a graphical user interface could be
implemented with the same functionality as the command line interface
as a demonstration of the ability to create two different interfaces from
the same specification. However this option was untenable because of
problems with the graphics display memory of the machine. Instead of
this option an interface was produced on a Sun SPARCstation 10 by
modification of the user interface specification of the 3-D system. This
program will not attempt to port the algorithms and code from the
research system but will instead implement some 3D surface rendering
using OpenGL graphics libraries to simulate some of the capabilities of
the original system.
The following outline summarizes the specific components of this
research:
A. Task I - comparison of 2-D mapping interfaces
   1. Specify the ARCPLOT module of ARC/INFO
   2. Specify the GIMMS batch and interactive interfaces
   3. Specify the 2-D portion of the Spatial Data Display System
   4. Compare the three interfaces
B. Task II - comparison of 3-D display interfaces
   1. Specify the 3-D View module of ARC/INFO
   2. Specify the 3-D module of the Spatial Data Display System
   3. Compare the two 3-D display interfaces
C. Task III - implementation of interface from specification
   1. Specify interface of current Spatial Data Display System
   2. Implement formal specification under X windows
   3. Evaluate the process of implementing a second form of an already existing interface
Each task that requires specification includes the following steps with regard to Nielsen's virtual protocol model:
1. Specify the Goal of the interaction
   • real world goal
   • descriptive
2. Specify the Tasks under the Goal
   • what can be accomplished using operations and objects in the system
   • descriptive
3. Specify the Semantics
   • the concepts available in the system
   • specified as transformations
4. Specify the Syntax of the operations
   • sequencing of the input and output, including graphics and pointing input
   • done with enhanced BNF notation
5. Specify the Lexical components of the interaction
   • keywords
   • terminals of the BNF notation from the Syntax level
6. Alphabetical Specification
   • lexemes
   • legal letters and digits, line types and colors of the interaction
   • legal components of the BNF terminals
   • line types and colors declared as defaults of the objects in the BNF specification and inherited by sub-objects
7. Physical Specification
   • keyboard entry
   • observable interchange of information
The evaluation component of the task includes:
1. Evaluation for internal consistency of the interface, i.e. that all interactions are consistent in their form and style
2. Comparison of operations needed for interaction using two different styles, i.e. direct manipulation compared to command language, etc.
3. Comparison of the complexity of the interfaces by examination of the number of possible operations and the length of their composition
4. Comparison of complexity by examining the number of components of the specification needed for accomplishing specific tasks and the overall goal
5. Other complexity measures include:
   • The number of surface interactions
   • The number of deep interactions
   • The number of user interactions
   • The number of visible productions
   • The number of graphical interactions
Summary
To recap, a framework for specifying cartographic interaction in spatial information systems has been presented. This framework involves specifying the appropriate goals and tasks of the interface in relation to Nielsen's virtual protocol model, which nicely complements Nyerges' data structure levels. At the appropriate levels of this model, interactions are specified both in terms of Moellering's real and virtual map transformations and in a modified Backus-Naur form.
The expansions of the normal Backus-Naur form with Shneiderman's party identifier, modified with deep and surface structure components from Took and Moellering, and the incorporation of virtual devices from Foley and Wallace's work allow a detailed specification of the interactions in spatial information systems between the user, the system and the surface presentation (graphics and text). These specifications can then be examined and summarized to show the complexity, consistency and differences between interfaces. In the next chapter the results of the specification and comparison of the specific interfaces examined in this research will be presented.
CHAPTER 4
RESEARCH RESULTS
This chapter is divided into three sections, the first two covering the results of specification and comparison of various cartographic systems, and the last covering the implementation from specification. The first section covers the specification of three programs that mainly produce two-dimensional cartographic output. These three systems,
GIMMS, ESRI's ARC/INFO Arcplot module, and Dr. Moellering's NASA project 2-dimensional program ZSHADE, will be presented in that order. Specifications of each system will be presented using the cartographic interaction model outlined in the previous chapter. This specification will include descriptions of the system at each of the different Nielsen virtual protocol levels, specification of the systems in terms of transformations in the real and virtual map domain of Dr. Moellering (Moellering, 1984), and BNF specification of the syntax layer using multi-party grammar (Shneiderman, 1982), augmented with the ideas of virtual and deep structure from Took (1990) and Nyerges (1981). Following this specification, a comparison and evaluation of the systems will be shown using Reisner's (1981) measures, as well as measures developed in the cartographic interaction model. The second section of this chapter will follow the same format, except that it will compare two cartographic systems that involve three-dimensional cartographic output. These two systems are ESRI's ARC/INFO view3d module, and Dr. Moellering's NASA project 3-dimensional Spatial Data
Display Program. Each of the specifications of these systems will begin with a general summary and overview of the capabilities of the systems.
The final section of this chapter will present the details of implementing a different style of interface from an existing specification.
Specification of 2-Dimensional Programs
This section will discuss and specify the three two-dimensional cartographic output systems examined in this research.
GIMMS
The first system that will be examined is PC-GIMMS, a personal computer version of a cartographic package that was originally developed on mainframe computers, using a batch production environment. Figure 4.1 shows a typical map produced with the
GIMMS program. GIMMS on the PC can be used in either a batch type
of mode that allows reading commands from files, in a manner very similar to the mainframe version, or in a mode that uses an interactive menu system. The specification of the system examined here will be that of the menu system.
The menu interface in GIMMS allows the user to interactively select and specify the various design components of a map. The map design menu of PC-GIMMS is shown in Figure 4.2. Each menu in the system allows the user to select options that set values for display or composition of the map or allows the user to update the graphics screen or exit the particular menu. The menu interface is the only screen visible in GIMMS unless the user toggles the graphics display using the
HOME key.
Figure 4.1 Typical map produced with PC-GIMMS
***** GIMMS MAP PRODUCTION *****
Map Design Menu

Select Option

-> PAGESIZE   Select page size for map
   POSITION   Select to position map area
   TEXT       Select to specify text for map
   CLASSES    Select class intervals
   SYMBOLS    Select map symbolism
   LEGEND     Select legend design
   GRAPH      Select to add graph(s) to map
   DATA       Select new variable(s) to map

Select Action

   EDIT       Controls display of multiple maps
   HARDCOPY   Plot map sheet as hardcopy
   STORAGE    Store/Restore map layout
   PLOT       Update map sheet on-screen
   EXIT       Exit from Map Design Menu

<Return> selects : <Home> toggles graphics : <F1> help
Figure 4.2 GIMMS Map Design Menu
The first task in this framework is specifying the goals of the system, Nielsen's (1986) top specification level. For the GIMMS system the main goal is to interactively produce thematic maps. Tasks available to accomplish this goal are:
• selection of the data files,
• selection of the data variables,
• selection of map symbolism and shading,
• specification of map scale,
• creation and positioning of the map legend,
• positioning of the map on the page,
• making a hard copy of the map, and
• saving the map layout.
Another way of looking at the tasks available in the system is to examine Moellering's real and virtual map transformations. The tasks involving selection are V3->V3 transformations, since these parameters are those that affect the representation of the map but do not actually produce a display. There are no V1->V3 type transformations in the selection tasks, since no actual map data is being entered or modified from the keyboard. The specification of map scale, on the other hand, does involve the input of parameters that directly affect the map data; therefore it is a V1->V3 type transformation that results in V3->V3 transformations of the data. The positioning of the map legend, or of the map itself, involves three different transformations: V1->V3, the interactive entry of coordinates; V3->V3, the transformation of the map data to satisfy those coordinates; and V3->V1, the display of the map on the computer screen. The action of making a hard copy of the map involves a V3->R transformation, and that of saving the map layout is a V3->V3 action.
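Bookkeeping of this kind can be tabulated mechanically once each task is mapped to its transformation types. The Python sketch below does so for the GIMMS tasks described above; the task names and the tally function are illustrative assumptions, while the transformation assignments follow the prose.

```python
# Hedged sketch: tallying Moellering-style real/virtual map transformations
# per GIMMS task. The mapping follows the discussion above; the encoding
# and function names are illustrative assumptions.

from collections import Counter

TASK_TRANSFORMS = {
    "select symbolism":  ["V3->V3"],
    "specify map scale": ["V1->V3", "V3->V3"],
    "position legend":   ["V1->V3", "V3->V3", "V3->V1"],
    "hard copy":         ["V3->R"],
    "save layout":       ["V3->V3"],
}

def transform_totals(task_transforms):
    # Count how often each transformation type occurs across all tasks.
    totals = Counter()
    for transforms in task_transforms.values():
        totals.update(transforms)
    return totals

print(transform_totals(TASK_TRANSFORMS))
# Counter({'V3->V3': 4, 'V1->V3': 2, 'V3->V1': 1, 'V3->R': 1})
```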
To examine in more detail the interactions in GIMMS, a detailed extended BNF description of the system was completed using a multi-party grammar specification with Surface Structure, Deep Structure and User concepts. This complete specification is available in the first part of Appendix A. The interaction with the user can be described by quantitatively examining the number of productions involving user interaction, in contrast to the total number of productions. The second part of Appendix A contains a table that categorizes each BNF rule into one of the six types summarized in Table 4.1. A rule-by-rule examination of the real to virtual map transformations is included in the third part of Appendix A.
To understand the specification of this system, several examples of the specifications of the various menus will be discussed. The map design menu shown in Figure 4.2 is represented by the production rules:
<map_design_menu_options> ::= "PAGESIZE POSITION TEXT CLASSES SYMBOLS LEGEND GRAPH DATA"

<map_design_menu_actions> ::= "EDIT HARDCOPY STORAGE PLOT EXIT"
As can be seen by examining these rules, the main components of the
menu are a listing of the menu options and actions available. In terms
of this research, surface dialogue changes are made. Also apparent in
this example is the fact that the menu options and actions are specified
as variables. This serves two purposes: first, it allows the BNF notation to be more compact, and second, it allows flexibility in the design of the user interface. To change or add a menu item in this notation requires only a change in the value of the variable, not in the BNF rule itself. Since the only user interface activity is that involving the dialogue listing, that is the only category of interaction tabulated in Appendix A for this menu.
Also no map transformations are occurring at this step so no real or virtual map transformations are shown in that table.
A more interesting example of interaction is the positioning of the map legend. Selecting the LEGEND option causes the system to draw the GIMMS logo, the map border, and map text on the graphics surface.
The user is then required to use the graphics cursor, moved with the mouse or keyboard cursor keys, to specify the position of the legend on the screen. This is represented by a LOCATE virtual device interaction.
This interaction includes surface graphics changes, user input and deep transformations of the legend coordinates. Consequently, the real and virtual map transformations include V1->V3 (the LOCATE input), V3->V3 (the transformation of the legend coordinates), and V3->V1 (the drawing of the map, logo, text, border and legend on the computer screen).
Surface Graphics actions               19
Surface Dialogue actions               39
Productions w/o surface interaction     0
Deep interactions                      35
Virtual device interactions            17
User interactions                      23
Total productions                      58
Table 4.1 Number of PC-GIMMS productions
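The proportions discussed in the surrounding text follow directly from the Table 4.1 counts. The Python sketch below derives them; the dictionary keys and ratio names are illustrative assumptions, while the counts are transcribed from the table.

```python
# Hedged sketch: deriving interaction ratios from the Table 4.1 counts
# for PC-GIMMS. Key and function names are illustrative assumptions.

GIMMS = {
    "surface_graphics": 19,
    "surface_dialogue": 39,
    "no_surface": 0,
    "deep": 35,
    "virtual_device": 17,
    "user": 23,
    "total": 58,
}

def surface_feedback_share(counts):
    # Fraction of productions giving some form of surface feedback.
    return (counts["total"] - counts["no_surface"]) / counts["total"]

def user_involvement_share(counts):
    # Fraction of productions involving user input.
    return counts["user"] / counts["total"]

print(surface_feedback_share(GIMMS))              # 1.0
print(round(user_involvement_share(GIMMS), 2))    # 0.4
```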
As can be seen from an examination of the above table, there is a high surface interaction component to this interface. All of the productions (58) were involved in some form of feedback in terms of surface interaction. There are 23 productions where the user is involved in some type of input after selection of the menu choice (i.e. user interactions), and there are 17 productions where this type of input is a virtual graphics device style of input. However, if menu selection is removed from this total, only 4 of the interactions involve virtual devices. Many of the commands in this system use virtual device input in the form of picks from a menu.
Examination of these productions shows that GIMMS is a highly interactive system that either involves the user continually or at least gives the user a high level of feedback. However, very little of that interaction is with graphic picks or locates, other than for the choice of a menu selection. The hierarchical menu system results in navigation of several levels of menus involving complex interactions before a map can be produced. The next section will examine a system that has a very different style of interface, ESRI's ARC/PLOT module of the ARC/INFO
Geographical Information System.
ARC/PLOT
ESRI's ARC/PLOT is the cartographic output module of the
ARC/INFO Geographical Information System. This module has undergone many changes in revisions 6 and 7 of ARC/INFO, most significantly between versions 5 and 6. Since version 5 of ARC/INFO
provided a better delineation along the lines of the categories of this
research (i.e. 2-d versus 3-d cartographic output), the separation of
capabilities present in version 5 of ARC/INFO will be used as the basis of the higher levels of this study, while the syntactical descriptions will be presented for the current version of the system, version 7.04.
ARC/PLOT is a command-oriented system that provides many specific commands relating to the many different parameters of map design and production. Although a graphical interface is available for this system, it presents no real advantage, since to use it the same commands available in the command-line system must be picked from menus. A typical map produced with ARC/PLOT is shown in Figure 4.3, and the command prompt of the system is shown in Figure 4.4.
As with PC-GIMMS the goals of ARC/PLOT are to produce cartographic output, this time from a full function geographical information system: ARC/INFO. The tasks available to accomplish this goal are many and include:
• specification of the database,
• specification of the data variable,
• specification of shading,
• specification of colors for symbols, lines and area shading,
• specification of line widths of all symbolism,
• positioning and specification of the legend and map scale, and
• plotting of the map.
A large variety of these tasks, as in GIMMS, are Moellering's virtual Type 3 to virtual Type 3 map transformations, i.e. they involve the setting of display, symbol and other parameters. Specification of coordinates via keyboard entry of numbers or through the use of virtual coordinate entry (LOCATE) devices involves V1->V3 transformations that result in V3->V3 transformations and/or V3->V1 transformations.
As with GIMMS a table of these transformations for each command is listed in Appendix B.
Figure 4.3 Typical map produced with ARC/PLOT
Copyright (C) 1982-1996 Environmental Systems Research Institute, Inc.
All rights reserved.
ARC Version 7.0.4 (Sun Jan 21 22:27:17 PST 1996)

| OSU has a site license for ARC/INFO, you can have your own copy. |
| See the URL "http://www.cfm.ohio-state.edu/esri/" for details    |
| Use the station command "&station Xcfm" for X-terminals at the   |
| Center for Mapping, "&station X" for all other X-terminals.      |

Arc: arcplot
Copyright (C) 1982-1996 Environmental Systems Research Institute, Inc.
All rights reserved.
ARCPLOT Version 7.0.4 (Sun Jan 21 22:27:17 PST 1996)

Arcplot:
Figure 4.4 ARC/PLOT command line
The BNF descriptions for this system are presented in Appendix
B of this work. An examination of several typical BNF rules for
Arcplot will show the differences between GIMMS and Arcplot in their style of interaction. For example, since Arcplot is a command-based system that usually requires parameters to be specified with the entry of the command, the BNF rules reflect that:
<draw> ::= "draw" <xy> <xy>

<draw[*]> ::= "draw *" <U:LOCATE> <U:LOCATE>

<draw[NULL]> ::= "draw"
The rules shown above are three forms of the same command, draw.
The first rule describes the command as consisting of draw and the mandatory arguments of xy coordinates. This command will draw a line between the specified points. Although this command has only mandatory arguments, in the case of commands that have optional arguments only the mandatory ones are indicated in this specification.
The second form of the command (draw[*]) allows entry of the two points that specify the line using the LOCATE virtual device.
This example also illustrates some of the problems with the real and virtual map transformations in this domain. The first form of the command makes only surface graphics changes. In a separable user interface model this can be done by the interface without involving the application, so no deep interactions are involved in this rule. The real and virtual map model, however, requires that this interaction be represented as V1->V3 and V3->V1 transformations, so there is not a clean separation between the user interface and the application.
Table 4.2 summarizes the character of the productions for Arcplot. Of the 389 total productions in this system, many (161) involve deep interactions to set some type of parameter. There are also a significant number of production rules
(120) that do not involve any type of surface interaction. On the other hand, many of the interactions have alternate forms that involve interactive graphics actions, such as picks and locates, resulting in 36 productions with virtual interactions. However, only two of the commands involve user input after the entry of the command.
Surface Graphics actions              112
Surface Dialogue actions              186
Productions w/o surface interaction   120
Deep interactions                     161
Virtual device interactions            36
User interactions                      38
Total productions                     389
Table 4.2 Types of ARC/PLOT productions
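Because the two systems have very different numbers of productions, a side-by-side comparison is easiest on normalized counts. The sketch below derives them from Tables 4.1 and 4.2; the dictionary keys and the normalization function are illustrative assumptions, while the raw counts are transcribed from the tables.

```python
# Hedged sketch: normalizing the Table 4.1 (GIMMS) and Table 4.2 (ARC/PLOT)
# production counts by total productions, to compare interaction styles.
# Key and function names are illustrative assumptions.

GIMMS = {"surface_graphics": 19, "surface_dialogue": 39, "no_surface": 0,
         "deep": 35, "virtual_device": 17, "user": 23, "total": 58}

ARCPLOT = {"surface_graphics": 112, "surface_dialogue": 186, "no_surface": 120,
           "deep": 161, "virtual_device": 36, "user": 38, "total": 389}

def normalized(counts):
    # Express each category as a fraction of total productions.
    total = counts["total"]
    return {k: round(v / total, 2) for k, v in counts.items() if k != "total"}

print(normalized(GIMMS)["no_surface"])     # 0.0  (every GIMMS rule gives feedback)
print(normalized(ARCPLOT)["no_surface"])   # 0.31 (silent parameter-setting commands)
```

The contrast in the `no_surface` fraction alone captures the stylistic difference discussed in the text: the menu-driven system always responds visibly, while the command-line system often does not.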
Since this is a command line system, the number of productions defined in the system is directly related to the number of commands in the system. The ease of use of an interface is much debated in the literature, and this system illustrates that point nicely. Although the system is easy to use for someone who knows the syntax of the commands and the order in which they must be given, a beginner or inexperienced user would be lost in this system without extensive training. The large number of commands and their permutations can make it difficult for a user to determine the correct commands and their sequence to arrive at a result. Therefore, the large number of commands (120) that do not give any surface feedback to the user can make it difficult for an inexperienced user to navigate this system. The only saving grace here is that the user usually will have the last command visible on the screen to reference their previous actions. This system also has a very rich interaction in the virtual device domain, with 36 of the commands taking virtual device input or having alternate forms that take virtual device input. The two systems examined have been very different in their interaction style, i.e. command line versus menu driven. The next system has a different interaction style from either of those: a command line system with menu displays.
ZSHADE
This system is the product of Dr. Moellering’s NASA visualization project. Its primary goal is to produce analytical as well as cartographic output for a variety of image and topographic data sets.
ZSHADE (the name of the system) was designed to be interactive in nature and provides a command line type of environment. However, in contrast to ARC/PLOT, ZSHADE has a very specific set of commands and displays a menu of commands at every point in the interaction.
Commands in this system take no arguments, but request them from the user in an interactive dialogue. This design feature, as is shown later, has an impact on the BNF specification produced for this system.
The system goals of ZSHADE are to produce analytical output, including graphic display, of specific image variables; the system can also overlay these variables. System tasks available for accomplishing those goals are:
• selection of the data set,
• selection of data variables,
• selection of display parameters,
• selection of levels, and
• display of those data.
Again most of the interaction with this system involves V3->V3 transformations, with display and interactive selection of coordinates occurring using V1->V3 and V3->V1 transformations. A typical screen
display of this system is presented in Figure 4.5; the main menu of the system can be seen in the dialog area, the lower left frame of the screen.
Figure 4.5 ZSHADE main menu and graphics displays
ZSHADE involves the user heavily in its operation. The rule shown below describes the interaction with the user in setting the elevation parameter for calculating the lighting model for the surface display.
< elevation > ::= < Sfc:Dialogue list[elevation] > ...
This rule shows that the system lists the current setting of the elevation parameter, gives the user a prompt requesting the new value, sets some parameters reflecting the new value, lists that new value, and updates a portion of the graphics to indicate that the surface must be redisplayed to use the new parameters. Since this production rule involves the user in the dialogue after command entry, it is counted among the user interactions in the classification.
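Written out in full from the description above, the complete rule would have roughly the following shape (a reconstruction only: the names elevation_prompt and redisplay_flag are assumptions, and the exact production appears in Appendix C):

< elevation > ::= < Sfc:Dialogue list[elevation] >
                  < Sfc:Dialogue list[elevation_prompt] >
                  < read_response >
                  < Deep: set[parameters] >
                  < Sfc:Dialogue list[elevation] >
                  < Sfc:Graphics update[redisplay_flag] >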
Appendix C contains the BNF productions developed for this system, summarized in Table 4.3, along with the real and virtual transformation table and the interaction classification table. This system and GIMMS have the same number of total production rules (58). These two systems are also interesting in that, in both specifications, every production includes some sort of surface interaction feedback to the user. ZSHADE also has relatively few productions that involve virtual device input (5) but a very large number of productions that involve user input after command entry (40).
Surface Graphics actions                21
Surface Dialogue actions                56
Productions w/o surface interaction      0
Deep interactions                       29
Virtual device interactions              5
User interactions                       40
Total productions                       58
Table 4.3 Types of ZSHADE productions
2-D System Comparisons
The character of the three systems can be examined by comparing the summaries of the specifications presented above in Tables 4.1, 4.2 and 4.3; these are presented together in Table 4.4. Although both Arc/Plot and Zshade are command line systems, they are quite different in design, as can be seen by examining Table 4.4. Arc/Plot appears to be a significantly more complex system, as shown by its total number of productions. Of course Arc/Plot also has
                                     GIMMS   Arc/Plot   Zshade
Surface Graphics actions                19        112       21
Surface Dialogue actions                39        186       56
Productions w/o surface interaction      0        120        0
Deep interactions                       35        161       29
Virtual device interactions             17         36        5
User interactions                       23         38       40
Total productions                       58        389       58

Table 4.4 Summary of 2-D Systems
significantly more production rules because of the many commands that have alternative forms, and this must be taken into account in any comparison. The difference also reflects the fact that the Zshade system was designed for a relatively limited number of analytical operations and is a research system rather than a general-purpose commercial system. The
PC-GIMMS system has the same number of productions as Zshade and, although it has many of the capabilities of Arc/Plot, does not allow the flexibility in control of every detail that Arc/Plot does. Although Zshade
and GIMMS have many fewer productions than Arc/Plot, both systems have a much more complex interaction with the user throughout their
structures. Zshade, although command line oriented like Arc/Plot, controls many aspects of the system through one command, where
Arc/Plot may use many different commands to accomplish the same
task. GIMMS is somewhat intermediate in this respect, having greater complexity in its production structures than Arc/Plot but not as much as Zshade.
Arc/Plot, compared to the other two systems, has a much richer structure for interaction at the virtual device level. This is due to the large number of commands that have alternative modes of parameter specification involving picks and locates. Even though rich in this respect, Arc/Plot has relatively few commands that give feedback to the user in the way GIMMS and Zshade do. In the next section the two cartographic systems producing three-dimensional output will be examined.
Specification of Three Dimensional Systems
This section will examine the two three-dimensional systems analyzed in this research. These systems are Dr. Moellering's 3-dimensional Spatial Data Display System and ESRI's ARC/INFO view module. The view module of ARC/INFO, although separate in version 5, has been merged into ARC/PLOT in versions 6 and 7 of ARC/INFO. The commands are mostly the same, with only changes in the names.
Therefore this research will present the commands for ARC/PLOT
version 7, but examine only the commands that were available in version 5, not commands added since that time.
ARC/INFO VIEW
The ARC/INFO view module commands have now been placed in the ARC/PLOT module of ARC/INFO. To distinguish them from similarly named commands that were previously in ARC/PLOT, their names have usually been prefixed with the token ‘surface’ or totally renamed. Since these commands are part of ARC/PLOT they share the same architecture, i.e. a command line oriented system.
The goals of this system are to produce three-dimensional displays of a geophysical surface. These surfaces may or may not have other data draped over them. Tasks available to accomplish these goals are:
• selection of the data set,
• selection of display parameters,
• interactive selection of points, and
• display of the data set.
Figure 4.6 shows a typical graphical display of the system, while Figure
4.4 shows the system prompt, now the same as ARC/PLOT.
Figure 4.6 Typical graphic display of ARC/PLOT View
The surface profile command is one of the commands in this system that allows virtual device input. This command, presented below, involves the user in selecting two points from a surface interactively using the LOCATE device and drawing the surface profile between those two points. Therefore this command involves the use of V1->V3,
V3->V3, and V3->V1 transformations.
< surfaceprofile[*] > ::= < Sfc:Graphics draw_backcover >
                          < Sfc:Graphics draw_frame >
                          < Sfc:Graphics draw_cursor >
                          < Sfc:Dialogue list[select_msg] >
Table 4.5 presents a summary of the BNF specification that is contained in Appendix D. Since this system has a relatively limited goal compared to the systems examined in the first section of this chapter, the number of total productions (39) is relatively small. Most of the commands have some type of surface interaction, with only 8 having none, and many of the commands (16) interact with the deep structure of the system. Compared to the full Arc/Plot command set, relatively few (3) of the View commands use virtual device interactions to input parameters.
Surface Graphics actions                 7
Surface Dialogue actions                31
Productions w/o surface interaction      8
Deep interactions                       16
Virtual device interactions              3
User interactions                        6
Total productions                       39
Table 4.5 Types of View 3D productions
Spatial Data Display System (SDDP)
This system is similar to the ZSHADE system developed as part of the same NASA project by Dr. Moellering. The program runs on a
Tektronix workstation with special three-dimensional display hardware.
The goals of this system are to produce a three dimensional display of a surface with a thematic surface overlay. The tasks available to accomplish these goals include:
• selection of the data set,
• setting of display parameters,
• selection of overlay variables, and
• display of the surface.
The table below (4.6) presents a summary of the specifications for this system found in Appendix E. This system, like View, has relatively few total productions because of the specialized nature of the system. All of the productions involve some sort of feedback to the user, and 13 productions involve deep interactions with the system. However, none of the commands available use virtual device interactions. Like the Zshade system, this is a command line system that constrains the input of commands based on previous actions and presents a menu of available choices at every point in the interaction. The graphics
Figure 4.7 Typical graphics display of SDDP
produced by this system are shown in Figure 4.7, and the main and display menus of the system are shown in Figure 4.8.
READ (Z-VAR.)              DISPLAY                    SAVE
PASA (PARAMETER-SAVE)      EDIT                       PARE (PARAMETER-RESTORE)
IREAD (IMAGE VAR.)         SWITCH DATA STRUCTURE      END

TRIANGULAR DATA STRUCTURE ACTIVE

** MAIN MENU - SELECT OPTION
disp

TITLE FOR 3D DISPLAY       SQUARE BASED 3D SEGMENTS   VERTICAL EXAGGERATION
COLOR AND INTENSITY STEPS  MESH BASED 3D SEGMENTS     SHOW PARAMETERS
HUE VALUES                 TRIANGLE BASED 3D SEGS     SHADING CONTROL
LIGHT PARAMETERS           TRED - TRIANGULAR EDITING  VISIBILITY CONTROL
AMBI (AMBIENT, DIFFUSE)    PERSPECTIVE CONTROL        DONE
LOCATE-3D

** 3D DISPLAY MENU - SELECT OPTION
Figure 4.8 SDDP main and display menus
Surface Graphics actions                14
Surface Dialogue actions                23
Productions w/o surface interaction      0
Deep interactions                       13
Virtual device interactions              0
User interactions                       13
Total productions                       25
Table 4.6 Types of SDDP productions
A typical command of the system, hue, is presented here:
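Based on the description that follows, the rule has roughly this shape (a sketch only: the names hue_prompt and working_msg are assumptions, and the exact production appears in Appendix E):

< hue > ::= < Sfc:Dialogue list[hue_prompt] >
            < read_response >
            < Sfc:Dialogue list[working_msg] >
            < Sfc:Graphics update[display] >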
As can be seen by examining this production rule, it, like the commands in the ZSHADE system, is highly interactive. This command involves the user in a dialogue that includes prompting by surface dialogue communication, entry by the user of color table values, a message while the process is being carried out, and, at the end of the command, an update of the graphics to reflect the change made by the command. This command also illustrates the difference between an examination of the real and virtual map transformations and the separation achieved by considering surface and deep structure. The real and virtual map transformations involved are V1->V3 entry of color values, V3->V3 updating of the color table, and V3->V1 updating of the display. In the separation of the user interface into surface and deep structures, this rule involves only surface changes in presentation and no deep actions, because the operation could be handled entirely by the user interface without involving the application in any manner.
Three Dimensional System Comparisons
The two systems examined here are very similar in the number of
total productions available because of the specialized nature of the two
systems. The SDDP system has no interactions that involve virtual
devices while the View system has few compared to the Arc/Plot system.
Again the View system, with its command line orientation, presents less feedback to the user and has simpler productions than the SDDP
system. Although it has few virtual device interactions, View has
available those types of interaction while SDDP has no capabilities in
this area. The next section will present an implementation from the
specification of the SDDP system to a graphical user interface under
Motif and X.
                                    VIEW3D   SDDP
Surface Graphics actions                 7     14
Surface Dialogue actions                31     23
Productions w/o surface interaction      8      0
Deep interactions                       16     13
Virtual device interactions              3      0
User interactions                        6     13
Total productions                       39     25
Table 4.7 Summary of 3-D Systems
Implementation from Specification
This section will cover the implementation of a graphical user interface from the specification of the SDDP program in the previous section. The first topic to be covered is the matching of the BNF specifications with specific interface components, together with an examination of the design changes that were initiated. Following an examination of the implementation, the specification will be evaluated for any changes that resulted from the change of interface type.
Interface Implementation
The graphical user interface for this implementation was produced as a proof of concept and is not meant to be a totally functional system. Although it should have been possible to build this system on the Tektronix 4337 workstation and have all of the capabilities of the system available, problems with the hardware forced the transition to a
Sun SPARCstation 10. The problems with the Tektronix were due to the aging of the equipment and to memory problems associated with using a Tektronix-implemented native graphics window under the X11R3 windowing system.
The sdd3d system, the name of this implementation, was coded in FORTRAN and C, using the Motif X11 window libraries under the Sun OpenWindows implementation of X11R5. The graphics part of the program was coded using the Mesa graphics library, a public domain implementation of SGI's OpenGL graphics library. OpenGL is the commonly used successor to the Tektronix native graphics mode under X11 on the Tektronix 4337 and XD88 series of workstations (Shamansky, 1995).
The main menu of the SDDP system, shown previously in Figure 4.8, is represented by the following productions:
< main_menu > ::= < Sfc:Dialogue list[main_menu_keywords] >
                  < Sfc:Dialogue list[data_struct] >
                  < Sfc:Dialogue list[main_menu_prompt] >
                  < read_response >

with

main_menu_keywords ::= "READ (Z-VAR.) DISPLAY SAVE PASA (PARAMETER-SAVE) EDIT PARE (PARAMETER-RESTORE) IREAD (IMAGE VAR.) SWITCH DATA STRUCTURE END"
data_struct ::= " TRIANGLE | SQUARE CELL " "DATA STRUCTURE ACTIVE"
main_menu_prompt ::= "** MAIN MENU - SELECT OPTION"
These original productions have been mapped into the main menu tool bar of the sdd3d system shown in Figure 4.9. Among the design changes made to improve the system, the switch command for changing the data structure type has been implemented as a choice in the file
dialog, which also contains the read and image read commands that were previously on the main menu. This reorganization has resulted in a more hierarchical structure that simplifies the main menu. The resultant BNF for the main menu tool bar is:
< main_menu > ::= < Sfc:Graphics button[FILE] >
                  < Sfc:Graphics button[PARAMETERS] >
                  < Sfc:Graphics button[DISPLAY] >
                  < Sfc:Graphics button[HELP] >
                  < Sfc:Graphics button[QUIT] >
                  < read_response >
Figure 4.9 SDD3D main menu
The procedure for selecting data sets and their size is one of the longest productions in the original system. In the redesign this procedure was broken down into several production rules that, although they must be followed in sequence, can be terminated at any point. The user also does not have to restart the dialog from the main menu if he changes his mind about which data set he wishes to work with.
The original BNF productions for the read, iread and switch commands are:
< read > ::= < Sfc:Dialogue list[data_set_list] >
             < Sfc:Dialogue list[unit_prompt] >
             < read_response >
             < Sfc:Dialogue list[data_set_info] >
             < Sfc:Dialogue list[verify] >
             < read_response >
             < Sfc:Dialogue list[origin_prompt] >
             < read_response >
             < Deep: set[parameters] >
             < Sfc:Dialogue list[size_prompt] >
             < read_response >
             < Deep: set[parameters] >
             < Deep: read_data >
             < main_menu >

< iread > ::= < Sfc:Dialogue list[data_set_list] >
              < Sfc:Dialogue list[unit_prompt] >
              < read_response >
              < Sfc:Dialogue list[data_set_info] >
              < Sfc:Dialogue list[verify] >
              < read_response >
              < Deep: read_data >
              < main_menu >

< switch > ::= < Deep: set[data_struct] >
               < main_menu >
This has been replaced by the following production rules: