VIRTUAL REALITY VISUALIZATION FOR MAPS OF THE FUTURE
DISSERTATION
Presented in Partial Fulfillment of the Requirements for
the Degree Doctor of Philosophy in the Graduate
School of The Ohio State University
By
Kosta Bidoshi, M.S.
* * * * *
The Ohio State University 2003
Dissertation Committee:
Professor Alan Saalfeld, Adviser
Professor J. Raul Ramirez, Co-Adviser
Professor Joel Morrison

Approved by:
______
Adviser, Geodetic Science Graduate Program
ABSTRACT
In today's Geographic Information Systems (GIS) and Computer-Assisted Maps, the user's perceptual interface with a paper map is replaced, in many cases, by the analytical and logical queries of a spatial database that represents the map in computer form. The analytical results do not give a full account of the information that can be represented, since these results do not include the implicit information contained in the map. The visual display of the entire map is very important for the user to determine what kind of information is to be extracted and to understand the interrelations among elements of the map. Current visualization techniques (paper maps and their computer replicas) do not take full advantage of the many modalities of human perception in representing the complete possible range of spatial information.
This research investigates the use of virtual reality (VR) in map visualization and reconsiders some of the fundamental concepts of cartography in the context of VR advances. Our investigation of spatial cognition shows that VR techniques enhance the perception of spatial phenomena in maps. Three-dimensional object visualization and terrain representation are means to increase the level of perception of the real world in maps. Spatial sound techniques, used to enhance the perception of real-world phenomena and describe cartographic features, are considered important additions to the visual representation in a VR map. Dynamic visualization is used to display real-world phenomena (such as clouds, rain, and the movement of cars and rivers) and to attract the map user's attention. Important cartographic elements such as georeferencing, scale, and symbolization are reformulated in the context of VR map visualization. User
interaction with the VR map environment enhances the feeling of realistic presence within the surroundings. Intelligent VR map visualization improves the perception of spatial phenomena through visual representation of GIS queries and analysis.
In all, then, this project aims at setting the framework for building a “virtual perceptual reality” for mapping environments. This will allow us to immerse users in the mapped entities in realistic ways using somewhat conventional desktop computers.
Dedicated to my parents and my wife
ACKNOWLEDGMENTS
First, I wish to express my gratitude to my advisers Dr. Alan Saalfeld and Dr. Raul Ramirez for their comments and constructive criticism, and to Dr. Joel Morrison for his insights. Special thanks go to Dr. Ramirez for his guidance and support during this research; without his help and encouragement this dissertation would not have been possible. He gave me the opportunity to work and learn at the Center for Mapping, and provided direction for completing and refining this research.
My great appreciation goes to Dr. Burkhard Schaffrin for changing my life by providing me with the opportunity to come and study in the United States. I would also like to thank Dr. Ron Li who was instrumental in my obtaining financial support for the last three quarters of this research through a PEGS grant. Many thanks to Irene Tesfai who was always willing to provide assistance with department paperwork and who guided me through the complicated path of returning to the Ph.D. program after a long absence.
This research would not have been possible without the love and support of my parents,
Petrit and Ana-Ermoza, my sisters, Alma and Anita, and my wife, Kristin. Special thanks go to my father for his continued urging and encouragement until the completion of this degree. I am thankful to my wife, Kristin, for her patience, moral support and tremendous assistance in both content and language throughout the chapters of this dissertation.
VITA
January 17, 1969...... Born - Tirana, Albania
1991...... Diploma Engineer, Tirana University
1995...... M.S. Geodetic Science, The Ohio State University
1994-1998...... Graduate Research Associate, Center for Mapping, The Ohio State University, Columbus, Ohio
PUBLICATIONS
Bidoshi, K., Ramirez, J.R. (1997) “Quality Control of Road Datasets Using a Hybrid Data
Structure” UCGIS (University Consortium for Geographic Information Systems) Annual
Assembly and Summer Retreat.
Ramirez, J. R., Bidoshi, K., Douglass, T., Phuyal, B., and Szakas, J. (1996) “A Quality
Assessment of Spatial Data Acquired for the Rickenbacker Air National Guard Base
Conversion Task,” Center for Mapping Report 1996_RR_01.
Ramirez, J. R., Bidoshi, K., and Szakas, J. (1996) “Partial Automation of Flood Zone
Determination,” Center for Mapping Report 1996_RR_02.
Bidoshi, K., Ramirez, J. R. (1999) “Multimedia Visualization for Maps of the Future”,
International Cartographic Association General Assembly, Ottawa, Canada.
FIELDS OF STUDY
Major Field: Geodetic Science
TABLE OF CONTENTS
ABSTRACT ...... II
ACKNOWLEDGMENTS...... V
VITA ...... VI
LIST OF TABLES ...... XII
LIST OF FIGURES ...... XIII
CHAPTER 1 INTRODUCTION ...... 1
1.1. Purpose ...... 3
1.2. Scope...... 4
1.3. Organization...... 5
1.4. Hardware and Software...... 5
1.5. References...... 6
CHAPTER 2 LITERATURE REVIEW ...... 7
2.1. Virtual Reality Concept and Technology...... 8
2.2. Virtual Reality in Cartography...... 12
2.3. Applications ...... 15
2.4. Virtual Reality Modeling Language – VRML...... 17
2.5. XML and its Potential for Use in VR Visualization...... 18
2.6. Animation in VR...... 19
2.7. Sound in VR...... 20
2.8. References...... 23
CHAPTER 3 SPATIAL COGNITION ...... 27
3.1. Spatial Cognition in Cartography ...... 28
3.2. Spatial Cognition in a Virtual Environment ...... 32
3.2.1. Human Perception of Virtual Reality Maps...... 32
3.2.2. Spatial Cognition in Learning...... 39
3.2.3. Spatial Cognition in Dynamic Environments...... 41
3.2.4. Spatial Cognition and Sound...... 42
3.3. Summary ...... 44
3.4. References...... 46
CHAPTER 4 MAP REPRESENTATION IN VIRTUAL REALITY ...... 50
4.1. Three-dimensional Visualization ...... 51
4.1.1. Three-dimensional Object Viewing ...... 51
4.1.2. Terrain Modeling ...... 60
4.2. Sound - An Addition to the Visual Interface of a VR Map...... 64
4.2.1. Virtual Auditory Systems ...... 64
4.2.2. Use of Sound in VR Maps...... 68
4.3. Dynamic Visualization...... 73
4.3.1. Realistic Three-dimensional Computer Animation in VR...... 74
4.3.2. Dynamic Visualization in Virtual Reality Cartography ...... 77
4.4. Viewer Immersed in the Scene ...... 79
4.5. Map Navigation ...... 81
4.6. Summary ...... 82
4.7. References...... 84
CHAPTER 5 ELEMENTS OF CARTOGRAPHY AND VIRTUAL REALITY...... 87
5.1. From Data Sources to Cartographic VR Visualization ...... 92
5.2. Georeferencing in VR Visualization of Maps...... 98
5.2.1. What is Georeferencing?...... 98
5.2.2. The Importance of Georeferencing in VR Cartography...... 99
5.2.3. Georeferencing VR Maps - Classification, Problems and Solutions...... 100
5.3. Scale and Orientation in VR Maps ...... 105
5.4. Generalization – Reducing the Complexity ...... 115
5.4.1. Necessity of Generalization in Virtual Reality Maps ...... 117
5.4.2. Multimedia generalization ...... 121
5.5. Symbols and Labels – Making Sense of the Scene...... 125
5.6. Visualization of Uncertainty and Metadata ...... 133
5.7. Summary ...... 139
5.8. References...... 143
CHAPTER 6 USER INTERACTION, GIS ANALYSIS AND VISUALIZATION ON DEMAND ...... 147
6.1. User Interaction ...... 148
6.2. Visualizing Spatial and Proximity Analysis...... 157
6.3. Visualization on Demand ...... 162
6.4. Summary ...... 169
6.5. References...... 171
CHAPTER 7 CONCLUSIONS AND FUTURE CONSIDERATIONS ...... 172
7.1. Conclusions ...... 172
7.2. Limitations ...... 176
7.3. Considerations for Future Work...... 177
7.4. References...... 179
BIBLIOGRAPHY...... 180
LIST OF TABLES
Table 4.1: List of cartographic features and their potential audio representation...... 70
Table 5.1: The Structure of Cartographic Features...... 91
Table 5.2: Comparison of Data Capture in Conventional and VR Maps...... 96
Table 5.3: Visual Variables for VR Maps...... 129
Table 6.1: Sensors Applied to GIS functionality...... 154
LIST OF FIGURES
Figure 2-1: New Yorker's Idea of America ...... 10
Figure 2-2: Techniques Used to Reduce the Complexity...... 11
Figure 2-3: Vector of Icons ...... 11
Figure 2-4: Four Levels of GIS and SciVis Integration ...... 15
Figure 3-1: Recognition of a State ...... 31
Figure 3-2: From Simple Sketches to VR Representation ...... 33
Figure 3-3: Navigating while Maintaining a Sense of the Area...... 36
Figure 3-4: The Navigation Model...... 38
Figure 3-5: Location with regard to relative distances ...... 41
Figure 3-6: Localization of Sound Perception ...... 43
Figure 4-1: Example of 3D Representation for Urban Area...... 50
Figure 4-2: Perspective Projection Illustration ...... 53
Figure 4-3: Parallel Projection Illustration...... 54
Figure 4-4: Illustration of Vanishing Points...... 54
Figure 4-5: Orthographic Projection (Front, Top and Side views)...... 55
Figure 4-6: VR Snapshot in a Perspective Projection ...... 56
Figure 4-7: VR Snapshot in a Parallel Projection...... 56
Figure 4-8: Helvetiae Descriptio – Swiss Map of 1538 ...... 60
Figure 4-9: Shaded Relief Map of Northern Albania...... 60
Figure 4-10: TIN Representation of Northern Albania Relief...... 63
Figure 4-11: Rendered Representation of Northern Albania Relief...... 63
Figure 4-12: Rectangular Coordinate System...... 66
Figure 4-13: Polar Coordinate System...... 66
Figure 4-14: Diagram showing possible audio distribution in space ...... 68
Figure 4-15: Immersed View...... 80
Figure 4-16: Top View ...... 80
Figure 5-1: The Paper Map and its Elements...... 88
Figure 5-2: Digital Map and its Elements ...... 89
Figure 5-3: Cartographic Concept in VR...... 90
Figure 5-4: Spatial data capture in conventional cartography...... 94
Figure 5-5: Spatial data capture in VR cartography ...... 95
Figure 5-6: Undefined point in 3D environment ...... 102
Figure 5-7: Georeferenced buildings and terrain of Hong Kong Port...... 102
Figure 5-8: The Scale Grid...... 107
Figure 5-9: The Scale Grid overlaid to a VR map...... 107
Figure 5-10: Scale elements in a 1:10,000 scale simulation...... 109
Figure 5-11: Scale elements in a 1:1,000 scale simulation...... 110
Figure 5-12: The Average Scale Factors along the Axes...... 111
Figure 5-13: The Geometry of a VRML Sound Node ...... 123
Figure 5-14: Positional Accuracy Labels...... 135
Figure 5-15: Use of Value in Data Quality Representation ...... 136
Figure 5-16: Use of Transparency in Data Quality Representation...... 137
Figure 5-17: Fog Added to VR Map Representation ...... 138
Figure 6-1: Active Worlds ...... 148
Figure 6-2: The Sims ...... 149
Figure 6-3: Simcity...... 149
Figure 6-4: Illustration of the Point Sensor in Displaying Identification ...... 153
Figure 6-5: User Interaction with the Visual Display...... 156
Figure 6-6: Illustration of Buffers in a VR Map...... 159
Figure 6-7: Buffering the Trajectory of an Airplane ...... 160
Figure 6-8: Pattern Determination Visualization ...... 162
Figure 6-9: Visualization of Different Media Georeferenced to the Map ...... 164
Figure 6-10: Small Scale Map of the Area...... 166
Figure 6-11: Large Scale Map of the Area...... 166
Figure 6-12: Aerial View of Campus ...... 167
Figure 6-13: VR Map of the Campus Center...... 167
CHAPTER 1
INTRODUCTION
Cartography consists of techniques fundamentally concerned with reducing the spatial characteristics of a large area and putting them in map form to make them observable (Robinson et al 1988). Like spoken and written language, maps are used to communicate information to users in order to enhance their perception of the world and help them in the decision-making process. Conventional paper maps are carefully designed to convey information graphically. They give users tools to record, analyze, calculate, and, in general, understand the interrelation of things in their spatial relationships. And this is done generally through the visual perception of the map.
The enormous advances in computer science call for a change in the way we look at maps. Geographic Information Systems (GIS) and Automated Cartography are concepts that enhance the role of maps in society, but they also call for the redefinition of maps (Ramirez 1999, Ramirez 2001). Most developments in computer mapping have been directed at imitating and replicating conventional technology rather than at true innovation. A great deal of information is lost when a map is simply replicated in computer form (Ramirez 1999). Robinson et al (1988) identify the Artistic and Presentation Focus of cartography as an important element, which, together with the Geometry and Communication Focus, is essential to achieving the purpose of a map. The Presentation Focus emphasizes map design as an important element of the cartographic process. The Artistic Focus aims to create appropriate user sensations and impressions through an understanding of visual qualities such as color, balance, contrast, pattern, and other graphic characteristics.
Technological advances have diminished the Artistic and Presentation Focus of cartography. As a result, in many cases the implicit information and the interrelationships between elements are lost in a conventional Computer-Assisted Map or GIS.
In the computer world the visual perception of the user reading a paper map is successfully replaced, in many cases, by analytical and logical queries of the spatial database that represents the map in computer form. Nevertheless, this is not enough, because in many cases users do not know what they are searching for in a map until they can see the whole display or graphic information. The visual display of the whole map (or some of its elements) is very important for the user to determine what kind of information is to be extracted and to understand the interrelationships between elements. Current visualization techniques (paper maps and their computer replicas) do not take full advantage of the many modalities of human perception that can be used to represent a range of spatial information. Much of cartographic knowledge is based on intuition (Buttenfield and Mark 1991). That is why analytical and logical queries cannot substitute for visual display in the context of the general role of maps. Visualizing information is a great part of the whole map experience. On the other hand, cartographic visualization needs radical changes to keep pace with advances in computer hardware and software.
Although Virtual Reality (VR) has spread widely across many disciplines in the past few years, its conceptual beginnings date back to the 1960s, and applications such as flight simulators started in the 1970s. The 1980s saw a boom in virtual reality gaming technology. Even though in the past few years many disciplines such as Medical Science and Architecture have adopted virtual reality as an emerging technology, very little has been done in Cartography. Utilization of virtual reality and its multimedia components in mapping and GIS is both beneficial (as will be shown in this dissertation) and imperative. Perkins (1994, p. 96) contends that: "There may be a requirement to translate data to new media, in order to preserve its utility in the future".
1.1. Purpose
The main purpose of this research is to study the combination of spatial information with various kinds of media in a virtual reality (VR) environment for the purpose of intelligent cartographic visualization. A considerable amount of research has been done in the field of multimedia cartography in the past few years (Cartwright and Peterson 1999, Ramirez 2001, Ramirez and Zhu 2002). A review of research on virtual reality, including its multimedia components (dynamic visualization, sound, and interactive visualization), will be the focus of the following chapters. Although some work has been done in introducing these new technology elements to cartography, few results have been achieved in bringing these media components together in a spatial context in a virtual reality environment and in studying the interrelationships among these elements as applied to cartography. This dissertation intends to answer questions such as:
• How well suited is virtual reality visualization for cartography? Is it beneficial? What are the advantages and disadvantages of using this visualization method?
• How is this transition going to affect elements of traditional cartography (georeferencing, scale, generalization, symbolization, coordinate systems, metadata, accuracy) and Geographic Information Systems (spatial analysis, proximity analysis, logical operations)? How can we adapt current virtual reality methods to respond to cartographic needs?
The remainder of this chapter summarizes the organization of this study.
1.2. Scope
In this research we propose to explore map visualization in the context of virtual reality (VR) by looking at the following key issues:
• Map representation in the context of VR (including navigation and viewer immersion)
• Use of sound in VR maps: sound and localization of sound in a VR environment
• Dynamic 3-D visualization: animation in a VR scene as an important element of visual perception
• Visualization on demand: methods of creating 3-D VR scenes on the fly in response to queries (as will be explained in the following chapters)
• Spatial analysis: this GIS element will be studied in the context of a VR map
• Georeferencing, scale, and uncertainty in VR map visualization: these fundamental elements of cartography will be studied in the context of the VR environment.
Virtual Reality Modeling Language (VRML) (http://www.web3d.org/) (Web 3D
Consortium 2002) will be used as a proof of concept for this research.
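To give a flavor of what such a VRML proof of concept looks like in practice, the sketch below is an illustration only: it generates a minimal VRML 2.0 world containing one box-shaped "building". The coordinates, dimensions, and color are hypothetical values, not taken from the dissertation's prototypes.

```python
# Illustrative sketch: emit a minimal VRML 2.0 world containing a single
# box-shaped "building" at the given scene coordinates (meters).
def vrml_box(x, y, z, width, height, depth):
    return f"""#VRML V2.0 utf8
Transform {{
  translation {x} {y} {z}
  children [
    Shape {{
      appearance Appearance {{
        material Material {{ diffuseColor 0.8 0.7 0.6 }}
      }}
      geometry Box {{ size {width} {height} {depth} }}
    }}
  ]
}}
"""

# A 10 m cube, raised so it sits on the ground plane (y is up in VRML).
world = vrml_box(0, 5, 0, 10, 10, 10)
```

Saving the string to a `.wrl` file and opening it in a VRML browser such as Cortona displays the box; a real map prototype would, of course, assemble many such nodes from spatial data rather than from literal values.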
1.3. Organization
Chapter 2 provides a review and analysis of the current literature pertaining to VR and multimedia visualization in their cartographic contexts.
Chapter 3 provides a detailed outline of the research and a discussion of spatial cognition in support of the need for VR cartography.
Chapter 4 outlines the techniques for the use of VR in cartography.
Chapter 5 discusses specific elements of cartography, such as georeferencing, scale and generalization in the context of VR representation.
In Chapter 6 we investigate interactive map visualization accommodating GIS tasks such as proximity and optimal path analysis.
Chapter 7 includes conclusions and recommendations for future work in this area.
1.4. Hardware and Software
The examples in this dissertation were created mainly in a PC environment (Windows XP) with 256 MB of memory and a 1.1 GHz processor. Software used includes Cortona from Parallel Graphics for VRML viewing; 3DS Max, Dune, and VRML Pad for VRML coding; 3D Sound for spatial sound; CoolEdit for sound editing; and Macromedia Director for multimedia. The World Wide Web (through the Internet Explorer and Netscape browsers) is the principal environment for building the example prototypes. VRML, HTML, Java, JavaScript, and Visual C++ are the main languages used in this project.
1.5. References
Robinson, A. H., Sale, R. D., Morrison, J. L., Muehrcke, P. C., 1988, Elements of Cartography, John Wiley & Sons.
Ramirez, J. R., 1999, “Maps for the Future: A Discussion”, ICA Conference, Ottawa, Canada.
Ramirez, J. R., 2001, “New Geographic Visualization Tool: Multiple Source, Quality, and Media (MSQM) Maps”, Internal Paper, Center for Mapping at The Ohio State University.
Buttenfield, B. P., Mark, D. M., 1991, “Expert Systems in Cartographic Design”, in Geographic Information Systems: the Microcomputer and Modern Cartography, Ed. D. R. F. Taylor, Pergamon Press, Ltd., Oxford, pp. 129-150.
Perkins, C., 1994, “Quality in map librarianship and documentation in the GIS age”, The Cartographic Journal, vol. 31, no. 2, pp. 93-99.
Ramirez, J. R., Zhu, Y., 2002, “A Multi-Media Visualization System”, Internal Poster, Center for Mapping at The Ohio State University.
Cartwright, W., Peterson, M. P., 1999, “Multimedia Cartography”, in Multimedia Cartography, Ed. Cartwright, W., Peterson, M. P., Gartner, G., Springer-Verlag, Berlin, pp. 1-10.
Web 3D Consortium, 2002, http://www.web3d.org/: Home of the Web 3D Consortium.
CHAPTER 2
LITERATURE REVIEW
Visualization is one of the most effective methods of displaying data. Expressions like “I see what you mean” and “I have a different view” show that visual perception is crucial in communicating information. There are three ways in which visualization can use and interpret data: selective emphasis, transformation, and contextualization (Erickson 1993). Selective emphasis involves highlighting some features and suppressing others in order to facilitate the detection of patterns.
Transformation involves changing non-visual data to visual data to make them more perceptible. Contextualization provides a visual context or framework within which the data may be displayed (for example, an indexed map). Our research in the following chapters covers elements of all three aspects. In Chapter 6 (Visualizing Spatial and Proximity Analysis) we investigate visualization of query analysis and the emphasis of certain features in the VR map scene (selective emphasis). In Chapter 5 we examine methods of visualizing metadata and uncertainty, typically non-visual elements in a GIS (transformation). Visualization on demand (Chapter 6) emphasizes the use of an indexed two-dimensional map to reference VR and multimedia clips available at the user's request (contextualization).
In the next sections we will look at some issues of virtual reality visualization and how it applies to mapping and cartography.
2.1. Virtual Reality Concept and Technology
Virtual Reality (VR) is the use of computers and human-computer interfaces to create the effect of a three-dimensional world containing interactive objects with a strong sense of three-dimensional presence (Bryson 1996). According to Bryson, the most important aspect of this definition is the fact that VR is computer-generated, 3-D, and interactive. The National Research Council (1995) views VR as composed of three components:
• Virtual Environment System, where the machine is a programmed computer that generates or synthesizes virtual worlds with which the operator can interact
• Teleoperator System, where the machine is a robotic system that allows the operator to sense and manipulate the environment
• Augmented Reality System, where the operator's interaction with the real world is enhanced by overlaying information stored in the computer on the information of the real world.
In our context, augmented reality is comprised of virtual (non-existing) objects that are merged with natural real objects in the map (Ramirez 1999).
Computers have made it possible to manipulate larger and larger amounts of information, but humans are cognitively ill-suited to understanding the resulting complexity (Fairchild 1993). Advances in VR technology suggest that encoding subsets of the information using multimedia techniques and placing the resulting visualizations into a perceptual three-dimensional space increase the amount of information that people can meaningfully manage. Because of the complexity of the information (spatial or non-spatial), the space containing visual objects should be available for users to navigate, examining individual objects and clusters of objects in more detail. If one visualization is not appropriate for a particular user on a particular task, the user should be able to immediately create a more suitable visualization (Fairchild 1993). Three basic problems are addressed in Fairchild (1993):
• Visualization of individual pieces of information
• Assuming that reasonable visualizations exist for single pieces, how do these extend to large collections of such individual pieces?
• Since all the pieces of information in large databases cannot be shown at the same time, techniques must be available to allow user control over which subsets of the entire amount of information are presented.
Fairchild (1993) proposes a visualization model intended to suit different applications and user requirements. The visualization must be able to change dynamically which subsets of the information it uses. The model's purpose is efficient visualization with a minimum of complexity.
One general solution for allowing user control of the subsets is to develop models of degrees of interest (DOI), or fish-eye views. Fish-eye views contain a mixture of objects with high and low levels of detail. The DOI model associates two values with each object: the semantic distance and the a priori importance (API). The semantic distance is a measure of how far the viewpoint is from the object. The API is a measure of how important an object is to the user. The “New Yorker's View of the World” found on the New Yorker cover is one of the most famous fish-eye views.
The mailbox in front of a New Yorker's house is shown in high detail, as are some of the stores. The Hudson River and Brooklyn are shown in less detail. All the states between New York and California are skipped, and the visualization ends with just labels demarcating Japan and China (Figure 2-1 shows a similar concept). A perspective view is an example of a limited fish-eye view in which all objects have the same importance: objects at the same distance from the viewpoint are shown at the same size and level of detail.
Figure 2-1: New Yorker's Idea of America
Fairchild (1993) identifies three techniques that can be used to reduce the perceived complexity of a large collection of objects:
• Laws of Perspective Technique. This is the real-life technique: the laws of perspective cause objects closer to the observer to be shown at a larger size (Figure 2-2 a).
• Fish-eye View Technique. Objects that are considered to have more importance are shown larger and with more detail than objects of lesser importance (Figure 2-2 b).
• DOI Distortion Technique. Instead of just growing in size, the information content increases for objects closer to the observer. Additionally, objects that have greater importance distort their position, essentially trying to follow the user around in the virtual space, while less important objects avoid the user (Figure 2-2 c).
a. Laws of Perspective Technique
b. Fish-eye View Technique
c. DOI Distortion Technique
Figure 2-2: Techniques Used to Reduce the Complexity
The DOIindex value is used to index a vector of icons to be used in a visualization, according to importance and distance. The formula determining this index is:

DOIindex = Integer((SD + (0.5 - API) × 2) × VectorLength) (Fairchild 1993)

API and semantic distance (SD) are normalized to [0, 1]. The vector used in the previous example is shown in Figure 2-3.
Figure 2-3: Vector of Icons
If SD and API are both 0.5, the middle index of the vector is used (the hexagon). An improvement of fish-eye views can be achieved by distorting the positions of the objects in space. The amount of distortion is given by this simple formula:

Distortion = (DOI - 0.5) × DegreeOfDistortion (Fairchild 1993).
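The two formulas can be sketched in code. The version below is one reading of Fairchild (1993); in particular, clamping out-of-range results to valid icon indices is our assumption, since boundary handling is not spelled out in the excerpt above.

```python
def doi_index(sd, api, vector_length):
    """Pick an icon index from semantic distance (SD) and a priori
    importance (API), both normalized to [0, 1]."""
    raw = int((sd + (0.5 - api) * 2) * vector_length)
    # Clamp to a valid index (an assumption; the formula alone can
    # produce values outside the icon vector).
    return max(0, min(vector_length - 1, raw))

def distortion(doi, degree_of_distortion):
    """Positional distortion: objects with DOI above 0.5 are pulled
    toward the user, objects below 0.5 are pushed away."""
    return (doi - 0.5) * degree_of_distortion
```

With SD = API = 0.5 and, say, a vector of nine icons, `doi_index` returns 4, the middle icon, matching the hexagon example above.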
2.2. Virtual Reality in Cartography
As beings from a 3-D world, we have brains that understand 3-D better than most of us consciously realize. Yet many maps developed and used today are only 2-D representations of reality, and use conventions such as contour lines to indicate the third dimension. Thoen (1997) gives some ideas about the impact of new 3-D computer technology on mapping. With good computerized visualization tools, the range of information that can be packed into a map can be extended, and, because of the human ability to comprehend and interpret 3-D relationships, less work is needed to understand what is seen in such maps.1
Considering the advances of technology and the applications in other fields such as architecture and medicine, very little has been done to incorporate virtual reality techniques in mapping and cartography.
Fairbairn and Parsley (1997) explore the possibilities and implications of the use of Virtual Reality (VR) tools in cartography and GIS. Current developments in computer science enable us to move to a finer emulation of the complexities of the real world, which can be combined with our abilities as human beings to navigate through ambient environments. The VR techniques are intended to offer means by which participants can interact fully with spatial information of various types.

1 At this point, there is a clarification we should make about the distinction between 3D and 2.5D. 2.5D representations are not true 3-D, as they have only one height value associated with each pair of indexing (2D) coordinates (De Floriani et al 1999). In mapping, and for the purposes of this dissertation, when we say 3D we mean 2.5D.
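The 2.5D restriction described in the footnote can be made concrete with a small sketch (a hypothetical illustration, not code from the dissertation): terrain is stored as a single elevation per grid cell, and it is exactly this single-valuedness that makes the representation 2.5D rather than true 3D.

```python
# 2.5D terrain: exactly one elevation z per (row, col) cell, i.e. z = f(x, y).
# True 3D features (e.g. a tunnel passing under a bridge) cannot be stored
# this way, because one (x, y) location would need two z values.
elevations = [
    [120.0, 125.0, 131.0],
    [118.0, 122.0, 128.0],
    [115.0, 119.0, 124.0],
]  # meters; a hypothetical 3 x 3 sample grid

def elevation_at(row, col):
    # Single-valued by construction: the data structure itself
    # enforces the 2.5D constraint.
    return elevations[row][col]
```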
Hardware and human-computer interfaces are two important considerations in VR development. VR systems vary according to these considerations (Fairbairn and
Parsley 1997):
• Full or Immersive VR requires the participant to be subject to stimuli affecting many senses, including vision, hearing, balance, and touch. These systems require head-mounted equipment, tactile gloves, and moving platforms.
• Transparent VR uses the real world as a backdrop, seen through a device presenting the spatial information. Flight simulations for military pilots are examples of these systems.
• Projection VR can be a multi-participant experience involving the presentation of large-scale graphical displays. Planetarium-like situations are such examples.
• Desktop VR is a virtual reality system on a desktop computer. The user navigates as if he or she were within the scene on the computer screen. It is the most commonly used VR system, because it can be presented on standard computer monitors. For the purposes of this dissertation we work only with this kind of system.
In most current GIS and mapping applications the user has limited control (in some cases no control) over the graphic output. Most of the effort is focused on manipulating, analyzing, and querying data. The application of VR in mapping can add a dimension that is not currently used. VR techniques can be initiated by the data, but there is considerable user choice over the graphic display: the user can change the point of view, the scale of display, and the rendering techniques, depending on the specific application and user interest (Fairbairn and Parsley 1997).
Rhyne (1997) considers the evolution of GIS and where it stands compared to the developments of Scientific Visualization (SciVis) and VR. Problems identified with integrating SciVis and VR with GIS include separate efforts to build SciVis software without regard for GIS data, and the complexity of GIS data, which makes them very difficult to use in general-purpose SciVis software. One of the main problems is the lack of connection between the database and the visual environment that supports the display of that database. Rhyne (1997) identifies four levels of integration for GIS and SciVis (Figure 2-4):
• Rudimentary: using minimal data sharing between the two technologies
• Operational: attempting to remove redundancies between GIS and SciVis
• Functional: providing transparent communication between the respective software packages
• Merged: fusing the concepts together from the beginning, in order to build comprehensive toolkits.
The first two levels have been achieved in many applications; integration at the functional level is just beginning. Rhyne (1997) contends that “the discipline of cartography provides both the historical and the new bridge between GIS and SciVis”. Although the author uses the word cartography, we believe that a redefinition of cartography is needed to provide the leap between these two disciplines. Going back to the theory of traditional cartography might provide a clue for this integration. The World Wide Web (WWW) is probably also a suitable environment for this integration, as recent developments in the WWW provide a less expensive alternative for virtual reality in GIS.
Rudimentary: Minimal Data Sharing
Operational: Forcing the Consistency
Functional: Transparent Communication
Merged: Fused Technologies
Figure 2-4: Four Levels of GIS and SciVis Integration
2.3. Applications
Although not numerous, some VR and multimedia applications emphasize mapping. The following is a short description of the mapping applications found in the literature.
The Terrain Data Representation Branch of the United States Army has done a great deal of research in the field of terrain representation. It develops terrain visualization techniques for use in battlefield visualization and virtual reality applications, working with digital images from satellite data and aerial photographs and merging these data with 3-D models of augmented reality. The branch uses mainly VRML for its terrain applications and MPEG images for its animations. The Terrain Data Representation Branch web site (http://www.tec.army.mil/TD/tvd/tdrb.html) contains demonstrations of these projects. The MPEG files display fly-through animations over different areas of the world, and the VRML demo can generate a terrain representation of any place in the world (1 x 1 degrees).
Truflite is a software package that lets users fly through digital terrain built from elevation data files. The user traces a route, and the fly-through follows this route using computer animation; the user can specify the speed, flight height and angle of lighting. The software supports the most common elevation formats. The company’s web site (http://www.truflite.com) has some MPEG animations demonstrating the power of animated mapping.
The Center for Mapping at the Ohio State University has been working on some prototype systems applying multimedia visualization to cartography and GIS. The
Multi-Media Visualization System (Ramirez and Zhu 2002) combines vector datasets with raster images, panoramic views, sounds and special effects to create a multi-dimensional visualization system of the ground. This prototype project was conducted in Upper Arlington, Ohio. Some of the data included are:
• Large scale vector digital data from Franklin County Auditor (scale 1:2,400)
• US Geological Survey’s DLG (Digital Line Graph) files scaled at 1:24,000
• Digital images, digital orthophoto quarter-quadrangle (DOQQ)
• Digital images, from IKONOS, 4-meter resolution
• Digital images from LANDSAT 7, 30-meter resolution
• Vertical digital images generated by the GPSVan™
• 360° panoramic views (photographic) of selected locations.
The Smart Text in Mapping project (Morrison and Ramirez 2001, Ramirez et al. 2002), under development at the Center for Mapping, focuses on incorporating sound and text in maps and images as a substitute for conventional geographic labels. The prototype gives users the option to see or hear labels. Microsoft Text to Speech software is used to implement this idea, and a commercial speech synthesizer (ReadPlease Pro) is used to speak the label names.
2.4. Virtual Reality Modeling Language – VRML
The Virtual Reality Modeling Language (VRML), previously called the Virtual Reality Markup Language, is a scene description language used for the graphic representation of 3-D worlds, primarily in Internet browsing tools. The language was created by Silicon Graphics and is similar in concept to the HyperText Markup Language in the sense that both use hyperlinks. The basic unit of the language’s structure is the “virtual world”, composed of objects in the scenery; the objects can be hyperlinked to other virtual worlds (Fairbairn and Parsley 1997).
Since the birth of VRML in 1994, the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC), in partnership with the VRML Consortium (Web 3D Consortium 2002), have developed the VRML2 and VRML97 standards. These developments allow for two very important aspects of cartographic visualization: terrain generation and animation. It is also important to mention that VRML makes it possible to use different levels of detail depending on the distance of the viewpoint from the objects (map generalization); the levels are placed in different files specified by the author.
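The distance-dependent level-of-detail behavior described above can be sketched as a simple threshold test. The sketch below is illustrative only: the file names and distance thresholds are hypothetical, and the logic mirrors what a VRML LOD node does rather than reproducing any VRML implementation.

```python
def select_lod(distance, ranges, levels):
    """Pick the representation whose distance bracket contains the
    viewer-to-object distance, as a VRML LOD node would.

    ranges: ascending distance thresholds, e.g. [100.0, 1000.0]
    levels: one more entry than ranges, most detailed first
    """
    for threshold, level in zip(ranges, levels):
        if distance < threshold:
            return level
    return levels[-1]  # farther than all thresholds: coarsest level

# Hypothetical generalization ladder for a building symbol,
# each level stored in a separate file as VRML expects.
levels = ["church_full.wrl", "church_block.wrl", "church_point.wrl"]
ranges = [100.0, 1000.0]
print(select_lod(50.0, ranges, levels))    # church_full.wrl
print(select_lod(5000.0, ranges, levels))  # church_point.wrl
```

As the viewer flies away from the object, the browser swaps in coarser files, which is exactly the automatic map generalization mentioned above.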
The main advantage of VRML is that it gives users efficient ways to visualize numeric data using mainly free software. Since objects and worlds can be hyperlinked to other objects and worlds (including sound and animation), this is a very attractive language for the cartographic visualization that we are investigating.
Many browsers and software packages support VRML files (which have a .wrl extension), although some browsers do not support all the features of VRML. Another disadvantage is that VRML is designed to be an efficient language, which means it does not include very advanced photo-realistic rendering techniques. VRML was a very promising language in 1997. Since then the language has not grown as expected, but it is still a very efficient tool for multimedia applications, including cartographic applications; in many ways the language was ahead of its time. Many authors (Web 3D Consortium 2002) believe that the future of VRML in tandem with JavaScript and Java is still bright. For the purposes of this dissertation VRML is the ideal medium for bringing multimedia elements together in a proof of concept of the theories discussed in this research. The benefits of this language for cartographic applications will be explored in the next chapter.
Information about VRML can be found in these web sites: http://www.vrml.org/, http://www.vrmlsite.com/, http://www.landform.com/vrml.htm.
2.5. XML and its Potential for Use in VR Visualization
Extensible Markup Language (XML) is a recently introduced language on the web. It provides standards for extending the structure of data on the Internet according to the type of application. While HTML is about the styling and appearance of data, XML is about the content of data: XML structures the metadata (data describing the data). The major advantages of XML are the efficiency of web searches, reduction of web traffic, and more efficient web linking and data transfers (Houlding 2001). XML in conjunction with Java and JavaScript can be a powerful tool for use in cartography, where metadata and standards are very important.
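As a minimal illustration of XML describing the content of data rather than its appearance, the sketch below builds and queries a small metadata record with Python's standard library. The element names are invented for illustration; they are not from any published cartographic XML standard.

```python
import xml.etree.ElementTree as ET

# Hypothetical metadata record for a map layer; the element and
# attribute names are illustrative only.
record = """
<layer name="hydrography">
  <source>USGS DLG</source>
  <scale>1:24000</scale>
  <datum>NAD83</datum>
</layer>
"""

root = ET.fromstring(record)
# Because XML encodes content, a client can query the metadata
# directly instead of scraping a styled display.
print(root.get("name"))        # hydrography
print(root.findtext("scale"))  # 1:24000
```

A search engine or GIS client reading such records can filter layers by scale or datum without ever rendering them, which is the efficiency argument made above.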
2.6. Animation in VR
The word animation means “bringing to life”. Research in computer animation deals with animation systems and the way people interact with them (Encarnacao et al. 1993). Animation is an operation on objects over time using a discrete time model, and it has found applications in many fields, including cartography.
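The idea of animation as an operation on objects over a discrete time model can be sketched as keyframe interpolation: the author specifies an attribute at a few key times, and the system samples values in between. The keyframe values below are illustrative, not taken from any cited system.

```python
def interpolate(keys, values, t):
    """Linearly interpolate a scalar attribute between keyframes,
    the way a keyframe animation system samples discrete time.
    keys: ascending key times; values: attribute value at each key."""
    if t <= keys[0]:
        return values[0]
    if t >= keys[-1]:
        return values[-1]
    for i in range(len(keys) - 1):
        if keys[i] <= t <= keys[i + 1]:
            f = (t - keys[i]) / (keys[i + 1] - keys[i])
            return values[i] + f * (values[i + 1] - values[i])

# A car symbol moving east: x position keyed over 10 seconds
keys, xs = [0.0, 5.0, 10.0], [0.0, 50.0, 200.0]
print(interpolate(keys, xs, 2.5))  # 25.0
print(interpolate(keys, xs, 7.5))  # 125.0
```

VRML97 expresses the same idea declaratively (a clock node driving an interpolator node), but the sampling logic is the one shown here.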
The idea of animation in cartography is not new (Tobler 1970). The use of animation in GIS and cartography has focused mainly on representing time series and assisting spatial analysis (Openshaw et al. 1994). Monmonier (1992) uses two graphic scripts to compose narrative sequences of dynamic maps, statistical graphics, and text blocks: one script studies the correlation between two geographic variables for a single year, and the other examines the historical evolution of a single phenomenon over a 90-year period. A dynamic scene of changing elements gives a much better perceptual presentation of the data than a stationary complex map that can confuse the user. Blinking or flashing symbols is one strategy for calling the user’s attention to important elements in the display.
3-D animation is very important in the representation of a VR scene. The problem is that 3-D animation is a very complex process involving consecutive 3-D modeling, object generation, and rendering to make surfaces appear more lifelike (Peterson 1995). Few graphics programs support 3-D animation. One of the important features of VRML97 is that it does support 3-D animation, which is an additional reason that makes VRML a strong candidate for use in this research. The main application of 3-D animation has been in architecture, visualizing rooms by providing “walk-through” tours; software such as 3DS Max, Virtus, Form.Z and Homespace Designer shows that the architecture field is more advanced in this area. This dissertation will study cartographic animation within the framework of VR and 3-D representation, and will show how animation can be used in a VR scene to fulfill cartographic needs.
2.7. Sound in VR
Existing maps supported with sound are usually electronic atlases that give information about the country being pointed at, play the national anthem, or speak some general phrases in the local language (Bar and Sieber 1997, Ramirez et al. 2002). Experiments with sound have also been done in the context of map accuracy (Fisher 1994, Krygier 1994). One scheme employs sound to indicate the accuracy of the data: moving the cursor to a less accurate area of the map increases the loudness of a noise signal. The same idea is used in interactive atlases, where short sentences in the local dialect can be heard when moving the mouse to a particular location. The Center for Mapping at Ohio State University is also involved in research on data quality and sound (Ramirez 2001).
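The accuracy-to-noise scheme can be sketched as a simple mapping from a reliability estimate to loudness. This is a minimal sketch of such a scheme; the linear mapping and the 80 dB ceiling are our illustrative choices, not values from Fisher (1994).

```python
def noise_loudness(accuracy, max_db=80.0):
    """Map a data-accuracy estimate (1.0 = fully reliable,
    0.0 = unreliable) to the loudness of a noise signal, so that
    less accurate map regions under the cursor sound noisier.
    The linear mapping and 80 dB ceiling are illustrative."""
    accuracy = min(max(accuracy, 0.0), 1.0)  # clamp to [0, 1]
    return (1.0 - accuracy) * max_db

print(noise_loudness(1.0))   # 0.0  -> silent over accurate data
print(noise_loudness(0.25))  # 60.0 -> loud noise over poor data
```

The same shape of mapping (attribute in, sound parameter out) underlies the atlas examples above, with a recorded phrase substituted for the noise signal.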
Experiments by scientists such as Bly (1982) have shown that sound is a viable means of representing data, especially when used in association with graphic displays. Two-dimensional sound displays that locate sounds via stereo technology, and three-dimensional sound displays (adding depth), have been developed. A three-dimensional virtual sound environment has been developed at the NASA-Ames Research Center (Wenzel et al. 1990); this is very important for representing spatial relationships in mapping. Krygier (1994) describes what he calls “sound variables”, analogous to Bertin’s (1983) visual variables. These variables are:
• Location: The location of a sound in a two or three-dimensional space
• Loudness: The magnitude of a sound
• Pitch: The highness or lowness (frequency) of a sound
• Register: The relative location of a pitch in a given range of pitches
• Timbre: The general prevailing quality or characteristic of a sound
• Duration: The length of time a sound is (or isn’t) heard
• Rate of Change: The relation between the duration of sounds and silence over
time
• Order: The sequence of sounds over time
• Attack/Decay: The time it takes a sound to reach its maximum/minimum.
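As an illustration of how one of these variables might be driven by data, the sketch below maps a thematic value onto pitch by linear rescaling. The frequency range and the population-density example are our assumptions, not values from Krygier (1994).

```python
def value_to_pitch(value, vmin, vmax, fmin=220.0, fmax=880.0):
    """Map a data value onto Krygier's 'pitch' variable by linear
    rescaling into a frequency range (two octaves starting at A3
    here; the range is an illustrative choice)."""
    f = (value - vmin) / (vmax - vmin)  # normalize to [0, 1]
    return fmin + f * (fmax - fmin)

# Hypothetical example: population density driving pitch,
# so a denser tract sounds as a higher tone.
print(value_to_pitch(0, 0, 1000))     # 220.0 Hz
print(value_to_pitch(500, 0, 1000))   # 550.0 Hz
print(value_to_pitch(1000, 0, 1000))  # 880.0 Hz
```

Any of the other variables (loudness, duration, rate of change) could be driven by an analogous rescaling of a different attribute.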
Sound is particularly suited to map animation because it is an inherently temporal phenomenon. Work on cartographic animation has led to close examination of dynamic sound variables such as duration, rate of change and order, and sound can be closely linked to animated applications to enhance the comprehension of information. Three distinct kinds of change can be visualized by map animation. Spatial change, often called “fly-by”, is visualized by moving the observer’s location relative to a static object. Voice-over can be used with this kind of application to explain what is being seen; vocal narration is an important way of using sound to enhance visualization, and the sound of fire, wind or river flow can be incorporated into the animation to make the representation more powerful. Chronological change, or “time series”, is visualized by mapping chronologically ordered phenomena onto an animated series; sound can add further information to the chronological display, with loudness, pitch and other variables adding other dimensions to the representation. Attribute change, or “re-expression”, is visualized by mapping attribute-ordered phenomena onto an animated series. Graphic methods (a sliding display bar) have been used to alert the viewer that the animation is ordered in terms of attribute change (Monmonier 1992). The problem with graphic methods is that the user's attention cannot be focused on the map and the display bar at the same time; sound can obviate the need for the graphic bar.
Krygier (1994) warns about several issues of concern in the use of sound for visualization. The user has to be acclimatized to the idea that sound, and sound variables, are being used. From a perceptual point of view we must be aware of the problem of “sonic overload”: loading the user with too many different variables and dimensions of sound. The sequential nature of sound might create problems in knowledge acquisition and memory (Krygier 1994). On the other hand there is the argument (Ramirez 2001) that knowledge is sequential too, so sound could be very appropriate in many cases. From the cartographer’s point of view, a combined graphic and sound display might be the mutually reinforcing solution. One promising way to make the sonic display of complex information feasible is to adapt sound structures that we are accustomed to dealing with (primarily those from music) to display design (Weber and Yuan 1993).
Sound has been used only minimally for data display to date because of the limitations and costs of producing and using it, but such limitations are rapidly diminishing. Recent research at the Center for Mapping (Morrison and Ramirez 2001, Ramirez et al. 2002) has shown that simple methods can avoid the high cost of producing and using sound: the Center has used commercial sound-synthesizer software to create a structured, low-cost system for spoken labels. It is necessary to explore the ways we can take full advantage of human perceptual and cognitive capabilities in visualization designs. Sound provides us with more choices for representing data and phenomena of the physical world.
2.8. References
3DS Max, 2002, http://www.discreet.com: Discreet, Autodesk, Inc.
Bar H.R., Sieber R., 1997, "Atlas of Switzerland - Multimedia Version: Concepts
Functionality and Interactive Techniques". Proc: 18th ICC, ICA, Stockholm, 2, pp.
1141-1149
Bertin, J., 1983, Semiology of Graphics, University of Wisconsin Press, Madison.
Bly, S., 1982, “Presenting Information in Sound”, Human Factors in Computer
Systems, Proceedings, Gaithersburg, MD, pp.371-375
Bryson, S., 1996, “Virtual Reality in Scientific Visualization”, Communications of the ACM, Vol. 39, No. 5, pp. 62-71, May 1996
De Floriani, L., Puppo, E., Magillo, P., 1999, "Applications of Computational
Geometry to Geographic Information Systems", Chapter 7 in Handbook of
Computational Geometry, J.R. Sack, J. Urrutia (Editors), Elsevier Science, 1999, pp.333-388
Encarnacao, J.L., Kroemker, D., de Martino, J.M., Englert, G., Haas, S., Klement,
E., Loseries, F., Mueller W., Sakas, G., Rainer, Petermann, R.R.V., 1993,
“Advanced Research and Development Topics in Animation and Scientific
Visualization”, Animation and Scientific Visualization, Earnshaw R.A. and Watson
D., eds., Academic Press, London 1993
Erickson, T., 1993, “Artificial Realities as Data Visualization Environments:
Problems and Prospects”, in Virtual Reality Applications and Explorations, Alan
Wexelblat Ed., pp. 3-22, Academic Press Professional, 1993
Fairbairn, D., Parsley, S., 1997, “The Use of VRML for Cartographic Presentation”,
Computers & Geosciences, Vol. 23, No. 4, pp. 475-481
Fairchild, K., 1993, “Information Management Using Virtual Reality-Based
Visualizations”, in Virtual Reality Applications and Explorations, Alan Wexelblat
Ed., pp. 45-74, Academic Press Professional, 1993
Fisher, P., 1994, “Randomization and Sound for the Visualization of Uncertain
Spatial Information”, in Visualization in Geographic Information Systems, Unwin,
D., Hearnshaw, H., (eds), London: Wiley, pp. 181-185
Houlding, S., W., 2001, "XML - An Opportunity for
Krygier, J., 1994, “Sound and Geographic Visualization”, in Visualization in Modern
Cartography, Ed. A.M. MacEachren and D.R.F. Taylor, Pergammon Press, Ltd.,
Oxford, pp. 149-166
Monmonier, M., 1992, “Authoring Graphics Scripts: Experiences and Principles”,
Cartography and Geographic Information Systems, Vol 19, No.4, 1992, pp. 247-260
Morrison, J.L., 1994, “The Paradigm Shift in Cartography: the Use of Electronic
Technology, Digital Spatial Data, and Future Needs”, Advances in GIS Research 1,
Healey, R.G. and Waugh T.C., eds., Taylor and Francis, London, p. 1-15
Morrison, J.L., Ramirez, J.R., 2001, “Integrating Audio and User-Controlled Text to Query Digital Databases and to Present Geographic Names on Digital Maps and
Images”, ICA Conference, 2001, China
National Research Council, 1995, Virtual Reality: Scientific and Technological
Challenges, National Academy Press, 1995
Openshaw, S., Waugh, D., Cross, A., 1994, “Some Ideas About the Use of Map
Animation as a Spatial Analysis Tool”, in Visualization in Geographic Information
Systems, Unwin, D., Hearnshaw, H., (eds), London: Wiley, pp. 131-138
Peterson, M., 1995, Animated and Interactive Cartography, Prentice Hall, Inc., 1995
Ramirez, J.R., 1999, “Maps for the Future: A Discussion”, ICA Conference, Ottawa,
Canada, 1999
Ramirez, J.R., 2001, “New Geographic Visualization Tool: A Multiple Source,
Quality, and Media (MSQM) Maps”, Internal Paper, Center for Mapping at Ohio State
University, 2001
Ramirez, J.R., Morrison, J., Maddulapi, H., 2002, “Smart Text in Mapping”,
Internal Poster, Center for Mapping at Ohio State University, 2002
Ramirez, J.R., Zhu, Y., 2002, “A Multi-Media Visualization System”, Internal Poster,
Center for Mapping at Ohio State University, 2002
Rhyne, T.M., 1997, “Going Virtual with Geographic Information and Scientific
Visualization”, Computers & Geosciences, Vol. 23, No. 4, pp. 489-491
Terrain Data Representation, 2002, http://www.tec.army.mil/TD/tvd/tdrb.html:
Terrain Data Representation Branch Web Site
Thoen, B., 1997, “Holo-Deck GIS Provides New World View”, GIS World, August
1997, pp. 32-33
Tobler, W.R., 1970, “A Computer Movie Simulating Urban Growth in the Detroit
Region”, Economic Geography 46, pp. 234-240
Truflite, 2002, http://www.truflite.com: Truflite’s 3-D World
Web 3D Consortium 2002, http://www.web3d.org/: Home of the Web 3D
Consortium
Weber, C., Yuan, M., 1993, “A Statistical Analysis of Various Adjectives Predicting
Consonance/Dissonance and Intertonal Distance in Harmonic Intervals”, Technical
Papers: ACSM/ASPRS Annual Meeting, New Orleans, Vol. 1, pp. 391-400
Wenzel, E., Fisher, S., Stone, P., Foster, S., 1990 “A System for Three-dimensional
Acoustic “Visualization” in a Virtual Environment Workstation”, Visualization ’90:
First IEEE Conference on Visualization, IEEE Computer Society Press, Washington, pp.329-337.
CHAPTER 3
SPATIAL COGNITION
This study will investigate human understanding and perception in visualization and how it relates to VR in cartography. This research aims to build a foundation of support for the use of VR in cartography. It will do so by investigating cognition and human perception issues. Virtual reality (including 3D animation, virtual localization of sounds and user interaction) will be the main interest of this study of human cognition. Some of the questions to be asked in this section are:
• Can human cognitive ability be used to its fullest in the perception of cartographic
information in a virtual reality environment?
• How does the user cope with the complexity of information in a VR environment?
Should something be done to visually reduce this complexity, providing for the
different thresholds that people might have?
Spatial cognition is a very complex topic and could be the focus of dissertation research by itself. For the purpose of this dissertation we limit this study to answering aspects of the questions stated above.
3.1. Spatial Cognition in Cartography
We begin this discussion with a brief description of spatial cognition; this is important in order to show later the need for VR in GIS and mapping. Cognitive science studies the ways in which the human mind acquires, stores, processes and manipulates knowledge. According to the “Stanford Encyclopedia of Philosophy”, cognitive science is the interdisciplinary study of mind and intelligence, drawing on philosophy, psychology, linguistics, neuroscience, artificial intelligence and other disciplines. As this definition shows, cognition covers a very broad area of study: The Ohio State University Center for Cognitive Science alone is involved in several directions of research, such as computational vision (image understanding), music cognition, decision-making, human language processing and cognitive development. From studies by different authors and their understandings of cognitive science (Montello 1997) we can detect two main categories:
• Cognitive science as a study of perception, sensation, thinking. This can be
seen as more of a behavioral study, which is closer to psychology
• Cognitive science as a study of processes of the brain and nervous system,
emphasizing the physical process of perception.
From the point of view of this research we adopt the first category in order to stay on course with the purpose of this chapter. Within this category we study spatial cognition, which deals with the perception of the spatial environment: the perception of size, location, direction and topology of objects in the world. Spatial cognition is very important in cartography and GIS because their purpose is to convey information directly to users. One of the main questions of cognitive science in cartography, and the one of most interest to this research, is: how should a map be built in order to promote realistic and efficient communication of spatial information (such as scale, location, uncertainty)?
The most widespread theory of human information processing comes from Stephen Michael Kosslyn (Kosslyn 1980). Kosslyn contends that everything we see is stored in our brain in the form of “mental images”. These images are snapshot pictures of reality in our brain, analogous to displays on a graphic computer terminal; they are stored in our long-term memory (see below), ready for access when we desire. Suppose the question is asked: what shape is a German shepherd’s ear? Most people report that they mentally imagine the dog and then zoom in on the ears (Kosslyn 1980). An important aspect of the mental image is that it is not just a dumb bitmap like a digital scan: humans store these images as a collection of objects and their spatial interrelationships. This means that we remember an object, or part of an object, as being in a certain location in relation to another object (the ears of a German shepherd on its head). This is a very important fact from the point of view of cartography, because if spatial relations are a factor in increasing human processing and cognitive ability, we are limiting this ability by using 2-D maps. The importance of spatial relations and topology in the way people process and remember scenes is also supported by other authors and experiments (Blaser 2000, Waller et al. 2002). This will be elaborated further in the next section of this chapter.
Mental images, like maps, depict information about interval spatial extents. If this is true, the mental image is nothing but a generalized picture of reality (similar to a map). Kosslyn (1983) developed an experiment in which subjects were asked to look at a picture of a rabbit next to an elephant and a picture of a rabbit next to a fly. The results showed that people took more time to see parts of the relatively smaller animals when they were imaged next to the larger animals. Kosslyn concluded that a selection of features had occurred because, as he found, the mental image has a fixed size and resolution. Peterson (1994) emphasizes the importance of the construction of the “mental image” in the human mind; the mental image is a form of internal “visualization”. We can look at information processing as a series of memory stores, each characterized by a limited amount of information processing (Klatzky 1975). This theory identifies three memory stores: the iconic memory (or sensory register), the short-term visual memory – STVM (or short-term memory), and the long-term visual memory – LTVM (or long-term memory).
Figure 3-1 depicts the information processing stores in the user’s recognition of a state. Human information processing starts with the iconic memory. This memory holds information for a very brief time, but it has unlimited capacity and is not affected by pattern complexity, an important fact given that VR applications are intended to create complex scenes (those as close as possible to reality). STVM can hold information for much longer, but it has a limited capacity. Only the information of interest in STVM is matched with information previously recorded in the LTVM, and then recognition is made. Models of visual pattern recognition in these memory stores include template matching, feature detection (breaking a pattern down into sub-components) and symbolic description (Peterson 1994). Several theories exist concerning LTVM. One theory suggests that LTVM is a permanent storehouse; in this theory, forgetting something is a problem of retrieving information from the memory. Other studies show that the information in the LTVM might be highly locational (Mandler et al. 1979). This explains why people remember the location of a paragraph in a book without trying, even without recalling its content, and might be evidence of the brain’s ability to encode information spatially. Whichever theory is employed, there is evidence of an almost unlimited capacity for LTVM: Shepard (1976) conducted a series of experiments showing people many pictures, which were later recognized with 98% accuracy.
[Figure content: Stimulus → Eye → Iconic Memory → Short-Term Visual Memory (matched against Long-Term Visual Memory) → Recognition (“It’s a state! It’s OHIO!”)]
Figure 3-1: Recognition of a State
Since features acquired in STVM are matched with models stored in LTVM, information is more meaningful if the objects we are looking at are closer to reality (as objects in a VR scene are). We recognize Ohio as a state only because a certain shape associated with Ohio’s boundaries is stored in our LTVM; people from other states (e.g. New Mexico), or people from outside the United States, might not recognize it. Of course, certain abstract features like state boundaries still need symbolic representation in a VR map. Consider the example of symbols in a large-scale map: a church would be represented by a symbol that most users would have to look up in the legend and store in their LTVM. In a VR scene, a church is represented by its shape, which is familiar to almost everybody, because an image of a church (as mentioned above) most likely already exists in almost everybody’s LTVM.
3.2. Spatial Cognition in a Virtual Environment
Human perception of real and virtual environments has been the focus of experiments and studies conducted for other purposes. We adopt these studies here to support our claim that spatial cognition favors the use of VR in maps.
3.2.1. Human Perception of Virtual Reality Maps
Knowing all the facts described in the previous section, we propose to study
aspects of human perception and processing resulting from virtual
environments (close to reality) as compared to simple 2D mapping, or even
simple sketches (Figure 3-2). Is VR in mapping a necessity? One might argue
that the benefits of VR are obvious; a more realistic representation should
facilitate human processing and perception of the world. Nevertheless, it is
important to study this from the point of view of human cognitive abilities.
VR Map 2D Map (Paper or Digital) FreeHand Sketch
Figure 3-2: From Simple Sketches to VR Representation
Blaser (2000) conducted a series of experiments gathering information on the
GIS sketching behavior of people. The following facts resulted from the study:
• Sketches appeared to be simple and abstract; they highly generalized reality
• Topology was very important, but metrics and orientation did not matter
• Sketches are structured through objects and their connectivity
• Sketching style depends highly on the individual. Each individual has
his/her own signature in the sketch.
These results are significant in supporting our argument. Firstly, the spatial aspects of a sketch appear very important for increasing people’s cognitive ability. Topology and object connectivity play especially important roles in perception (a sketch is nothing but a simplified perception of the real world). This is further supported by the idea that the “mental image” is built in the mind using interrelationships among the objects that compose it (Kosslyn 1980), as described earlier in this chapter. Blaser contends that the subjects of these experiments did not pay special attention to distances and orientation in their sketches. This does not mean that these two factors are unimportant in people's spatial perception; in fact, the findings of Waller et al. (2002) show that relative distances are very important in user orientation (see 3.2.2 - Spatial Cognition in Learning). Secondly, sketches are highly subjective and depend on the individual. This is an expected outcome, since every person has his/her own style, background and knowledge (LTVM). If a sketch were for personal use this would not be a problem, because the “map-maker” would also be the “map-user”. On the other hand, subjectivity plays an important role if the sketch (or map) is intended for other people. Although traditional maps are much more sophisticated than sketches, they remain an abstraction of reality and as such also carry a dose of subjectivity (although not as much as sketches). The use of symbols, or even projections (depending on the projection and parameters used), injects the mapmaker’s signature and intention. In most cases this is not necessarily negative, because maps are created with a specific intention in mind, but subjectivity can create problems in the user’s perception of the map. A closer representation of reality (as in VR) would reduce the factor of subjectivity.
Navigation and orientation are two very important factors in human interaction with virtual maps. As mentioned in previous chapters, the importance of map visualization lies in the fact that in many cases users need the whole picture to help them not only with answers, but also with questions: often we do not know exactly what we are looking for until we are in front of a map. This kind of implicit information cannot be represented by analytical queries. Navigation is an additional tool in this direction.
Navigation is the cognitive process of acquiring knowledge about a space, strategies for moving through space, and changing one's metaknowledge about a space.
- Laura Leventhal
Navigation is a very important part of VR map visualization and it helps us to understand the advantages of a VR map as compared to a traditional map.
We can orient the traditional map, as we need, because it is in our hands or on a computer screen. In a VR map we place ourselves in the map.
Orientation during navigation (walk-through or fly-through) is very important for a quick adjustment to the VR environment. The most important action in this case is performing a mental rotation (Darken and Peterson 1999). This mental rotation transforms the egocentric perspective, an individual, human-based positioning system, into a geocentric perspective, the world-based positioning system. The ability of the user to process his/her surroundings and environment is directly correlated with the simplicity with which this mental rotation can be performed (Darken and Peterson 1999). Of course, this differs from person to person and depends highly on the cognitive adequacy of the user. On the other hand, elements like orientation aids in the VR map can be of great help. One example is to use a North arrow that follows the user wherever he/she goes, giving him/her a directional perspective. Borrowing elements from the video game world, we can use a map window showing the whole area (as in the video game Doom) so that the user knows where he/she is and what he/she is facing. The general question is how we can see the detail needed to navigate while still maintaining a sense of the overall area (Figure 3-3). We elaborate on this topic further in Chapter 5.
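As an illustration, the screen rotation of such a north-arrow overlay can be derived from the user's heading. This is a minimal sketch under the assumption that heading is measured clockwise from world north; the function name is ours:

```python
def north_arrow_angle(user_heading_deg: float) -> float:
    """Screen rotation (in degrees) for a north-arrow overlay.

    Assumes the user's heading is measured clockwise from world north;
    the arrow counter-rotates so that it keeps pointing to north.
    """
    return (-user_heading_deg) % 360.0

# Facing east (heading 90), the arrow is drawn rotated 270 degrees.
print(north_arrow_angle(90.0))
```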
Figure 3-3: Navigating while Maintaining a Sense of the Area
VR tries to simulate the real world environment and, although depending on the application it cannot fully replicate it, we can assume that human behavior in, and cognition of, a VR environment comes very close to that of the real world. Reviewing some of the studies of navigational abilities and spatial knowledge acquisition in the real world can help us understand how to build better virtual environments. Next, we consider such research in terms of VR map perception and spatial cognition.
In order to better understand human spatial cognitive ability and improve performance of navigation, it is important to study the structure of this process. There have been some attempts to decompose the navigation process and to build models of navigation. The model proposed by Jul and Furnas (Jul and Furnas 1997) is one of the most complete models and is shown in Figure 3-4. We adapt this model for tasks common to mapping and GIS.
As previously mentioned, in many cases users do not know what they need until they visualize the map. Suppose I am looking to build a new house and I am using a VR map and navigation to find a location. I still do not know what the factors for location selection are, so I cannot use GIS analytical queries to achieve what I would like. First, I have formulated a goal (Figure 3-4). I decide that I should find a river and navigate along it, because I like locations next to a river. With this, I have formulated a strategy. In order to implement this strategy, I need to gather information so that I do not start moving in a random direction. At this moment I might look at a smaller scale VR map or a conventional 2D map. I have just gathered information and acquired data about the environment. Any time in the loop that things become clear, I have a cognitive model. For example, I visualize in what direction I should go (or I realize that I am at a dead end). At this moment I review the information that I have (rivers in the area) and make an assessment about moving in a certain direction. At any time during my movement I can go back and gather more information by consulting the map (e.g. check the relative distances to the rivers). I can also decide that locating my future house near a river is not a good idea after all and look at locations near a lake instead (change the strategy). Or, I might give up the idea of building a house and look for an existing house
(change the goal).
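The house-hunting walk-through above follows the loop of the navigation model. A toy sketch (all names are ours; the stage comments follow the model's terminology):

```python
def navigate(goal, strategies, gather_information, assess):
    """Loop through the model stages: formulate a strategy, gather
    information to build a cognitive model, review the situation, and
    either act or change strategy; exhausting every strategy would
    correspond to changing the goal."""
    for strategy in strategies:               # (re)formulate a strategy
        model = gather_information(strategy)  # acquire the environment
        if assess(goal, model):               # review the situation
            return strategy                   # act on the assessment
    return None                               # change the goal instead

# The example from the text: river locations turn out infeasible,
# so the strategy changes to lake locations.
chosen = navigate(
    goal="site for a new house",
    strategies=["along a river", "near a lake"],
    gather_information=lambda s: {"feasible": s == "near a lake"},
    assess=lambda goal, model: model["feasible"],
)
print(chosen)  # near a lake
```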
[Figure 3-4 shows the navigation model as a loop: Goal (“What am I looking for?”) → Strategy (“How should I find it?”) → Gathering Information → Acquisition of Environment → Cognitive Model → Reviewing Situation → Acting, with feedback paths to gather more information, change the strategy, or change the goal.]
Figure 3-4: The Navigation Model
Structuring the real world information would be very beneficial for building
virtual environments. This is very difficult because of the irregularity of the
real world. Attempts to create such a structure have been made for parts of the environment, such as cities. Lynch (1960) attempted to structure urban environments from an urban planner's point of view. He delineated cities and urban areas into building blocks comprising these main categories:
• Landmarks: Buildings and other manmade structures
• Routes: They connect landmarks. They are not necessarily roads, but they represent the transition from one landmark to another
• Nodes: Nodes are intersections or junctions between the routes
• Districts: These are regions that are to some degree separated from the
rest of the urban area
• Edges: The city and its districts are bounded by edges.
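Lynch's breakdown lends itself to a simple data structure. The following is a hypothetical sketch (names are ours) in which routes connect landmarks, and nodes emerge as junctions shared by more than one route:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Landmark:
    name: str


@dataclass(frozen=True)
class Route:
    start: Landmark
    end: Landmark


def junction_nodes(routes):
    """Return names of landmarks where two or more routes meet;
    in Lynch's terms these are the nodes of the urban structure."""
    counts = {}
    for route in routes:
        for lm in (route.start, route.end):
            counts[lm.name] = counts.get(lm.name, 0) + 1
    return {name for name, n in counts.items() if n > 1}


tower, bridge, station = Landmark("tower"), Landmark("bridge"), Landmark("station")
print(junction_nodes([Route(tower, bridge), Route(bridge, station)]))  # {'bridge'}
```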
The classification of these elements is subjective with regard to the context. For example, to a pedestrian a road is an edge, while to a driver the road is a route. People generally dislike a lack of structure. Perceptual capability is enhanced when there is structure (Darken and Peterson 1999). This might be explained by the way we store things in our memory by coding the scenes (Mandler et al. 1979). In any case, the existence of a structure in a VR map is beneficial. On the other hand, this might be an impossible task because of the irregularity of the real world and its features. As documented in the case of Lynch (1960), some structure might be possible for manmade objects and environments. In the following chapters we discuss a potential structure for VR maps.
3.2.2. Spatial Cognition in Learning
VR and its spatial representation of the world play an important role in cognition studies and in learning about the real world. A great deal of research has been done using VR for educational and cognitive purposes. Experiments on how humans perceive and process the real world can be better conducted with a controlled replication of this world, that is, by creating a simulated virtual environment. Such studies and experiments are important in order to understand the effects of representing the real world by using VR (especially the cognitive importance). We use the results and conclusions of such experiments in this thesis to support the use of VR in mapping from a cognitive point of view.
A series of psychological experiments had people try to recognize similar objects, first in a sketch and then in a VR environment (USC 1999). These objects were slightly rotated with regard to the original. The results indicated that virtual reality techniques help people improve their rotational skills. Not only that, but VR technology shows great promise as a rehabilitation tool for people with cognitive impairments.
Waller et al. (2002) conducted a series of experiments to study human learning with regard to spatial properties of the world (specifically locational and directional properties). These experiments concluded that people could locate themselves very well in the midst of a VR scene by using relative distances to landmarks. Waller et al. (2002) concluded that this must also be true for real world perception. This method of orientation using landmarks is called “piloting” or “landmark-based navigation” (Waller et al. 2002). Waller’s findings support the claim for VR in mapping. The more complex a scene is (i.e. the more landmarks present), the easier user orientation becomes. That is because more landmarks are available to support “piloting” or location finding. Figure 3-5 shows the position (or viewpoint) of a human with regard to three buildings. The differences in distances, rather than angular differences, play the major role in the individual locating himself/herself.
Waller's experiments show that metrics are very important in the orientation of an individual (contradicting Blaser's claim that metrics are not important (Blaser 2000)). Studies have also shown that users who are trained in a virtual environment have better navigational skills than users who are trained by other means. Therefore, user perception is enhanced in a virtual environment as compared to the traditional environment (a map). The reason is the realistic nature of a virtual environment.
Figure 3-5: Location with regard to relative distances
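“Piloting” by relative distances can be made concrete: given exact distances to three known landmarks, a viewer's 2D position follows from linearizing the three circle equations. This is a sketch under idealized, noise-free assumptions (the function name is ours):

```python
def pilot(landmarks, distances):
    """Recover a 2D position from exact distances to three landmarks
    by subtracting the circle equations pairwise, which yields a
    2x2 linear system in the unknown coordinates."""
    (x1, y1), (x2, y2), (x3, y3) = landmarks
    d1, d2, d3 = distances
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x2), 2 * (y3 - y2)
    c2 = d2**2 - d3**2 + x3**2 - x2**2 + y3**2 - y2**2
    det = a1 * b2 - a2 * b1  # zero if the landmarks are collinear
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)


# A viewer at (1, 1) measuring distances to three buildings:
position = pilot([(0, 0), (4, 0), (0, 3)], [2**0.5, 10**0.5, 5**0.5])
print(position)  # approximately (1.0, 1.0)
```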
3.2.3. Spatial Cognition in Dynamic Environments
The majority of work in vision cognition is done with static images.
Nevertheless, the world is dynamic and the mind must be able to process moving images. Psychologists have taken separate approaches to understanding motion perception. One popular approach holds that the mind constructs the perception of motion from a series of static images (like film technology). Gibson (1979) argues against the view that visual perception involves the recognition of static retinal snapshots. He claims that it is the flow in the structure of the total optic array that provides information for perception, not the individual forms and shapes of the “retinal image”. It has long been theorized that the visual system has a special sensitivity to moving images. There are examples of patients unable to see moving objects while their vision of static objects is perfect. For the purposes of this dissertation we assume that the mind processes motion as a collection of static images, because the computer technology used to create motion mimics this process. It seems that motion processing is a natural activity of the eye-brain system. The viewing of static displays may be more difficult for the brain, requiring a greater degree of training (Peterson 1994). We are all aware that moving objects in the background attract attention more than static objects; therefore, they are cognitively more perceptible. It is beneficial, in any case, to use both types of representation (static and dynamic) in a VR map environment. Some users may respond better to static portrayal and others to motion.
3.2.4. Spatial Cognition and Sound
As described in the previous chapter, research about sound in cartography is limited mostly to the 2D world and used for attribute representation rather than locational representation. Sound is an important part of VR applications and as such we need to look at auditory perception in terms of VR visualization in maps. There is solid research in sound and auditory
perception in the fields of psychology, acoustics and music (Krygier 1994). This research is mostly directed at sound variables (as described in previous chapters) and human cognitive processing of these variables (loudness, pitch, timbre, duration, rate of change and so on). This dissertation will not detail such research, but will take into account the problem of “sonic overload”, which means overburdening the user with too many variables and dimensions of sound (Blattner et al. 1989). This is investigated further in 5.4.2 (Multimedia Generalization).
As an important part of sound perception in virtual environments, we need to explain what localization of sound is. Let us discuss how acoustical perception works (Figure 3-6). If the sound source is right in front of us
(Figure 3-6, Sound Source 1), the sound will reach both ears at the same time and with the same volume. If the sound source is on one side (Figure 3-6,
Sound Source 2), it will reach each ear with a different delay and volume.
Figure 3-6: Localization of Sound Perception
This delay is very small. For example, if the sound source is at a 90-degree angle on our left, the delay is approximately 0.7 milliseconds. This means that the sound reaches our right ear 0.7 ms later than the left ear. This delay is
transmitted to the brain, creating the perception of the approximate location
for the sound source. In VR environments we can simulate spatial hearing by
creating such delays from a stereo sound system. Although the sound does
not come from a specific location, delays can create this impression. Users
would be able to approximately determine the location of the sound source. Of
course, this approximation is also dependent on the limitation of the size of
viewing devices, such as the computer screen. Such applications have been
investigated (Blauert 1983), but not in terms of VR in cartography. In terms of
VR maps, localization of sound can be used to attract the attention of the user
to a certain location especially within the navigational mode where only a
160° horizontal and 120° vertical view (approximate field of view in humans)
is visible to us. We intend to address the topic of sound in VR maps in the
following chapters in more detail.
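The delay described above can be sketched with a simple far-field path-difference model, Δt = d·sin(θ)/c. The ear separation and speed of sound below are assumed round values, not measurements from this research:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at about 20 degrees C (assumed)
EAR_SEPARATION = 0.23   # m between the ears (assumed head width)


def interaural_delay_ms(azimuth_deg: float) -> float:
    """Interaural time difference for a distant source at the given
    azimuth (0 = straight ahead, 90 = fully to one side), using the
    simple path-difference model d * sin(theta) / c."""
    seconds = EAR_SEPARATION * math.sin(math.radians(azimuth_deg)) / SPEED_OF_SOUND
    return seconds * 1000.0


print(round(interaural_delay_ms(90.0), 2))  # about 0.67 ms, near the ~0.7 ms above
print(interaural_delay_ms(0.0))             # 0.0: no delay for a frontal source
```

A stereo VR system can reproduce this impression by delaying one channel by the computed amount.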
3.3. Summary
From the preceding discussion, we reach the following conclusions about the benefits of VR in mapping from the cognitive point of view:
• We store mental images with regard to the spatial relationships of objects and
connections between them (Kosslyn 1980), much like the quick map sketches
(Blaser 2000). This makes sense, since in these sketches we arrange things in
such a way that our brain can process them quickly with the least amount of
drawing. Metrics in addition to topology are very important for the perception
of reality (findings from Waller et al. 2002 challenge Blaser's experiments). If
spatial elements are so important in perception and human processing of
information, introducing a third dimension in VR maps should significantly
enhance the cognitive ability.
• Sketches and 2D maps are subjective. They have an amount of subjectivity
and depend on the individual that creates them (Blaser 2000). The closer to
reality map representation is, the less subjectivity is introduced. VR tries to
replicate reality as it is.
• There is generalization in the perception of reality by humans. Either “mental
images” are generalized images of the real world, or we generalize from mental
images when we recall them from the Long Term Memory (LTM). But this
generalization is also dependent on the individual (Kosslyn 1980). That is why
it is important to provide users with details giving them the option to
generalize.
• What about the complexity of a VR scene? As seen above, iconic memory is
not affected by complexity. It has unlimited capacity. On the other hand,
“mental image” is generalized accordingly when recalled from the memory
store.
• VR techniques significantly improve the cognitive ability of the user reading a
map. Experiments conducted in using VR for learning show this (USC 1999).
Also humans use “landmark navigation” for faster orientation in their
surroundings. Providing detailed scenes and accurate information is
important in order to increase visual perceptive performance.
• Navigation as an element of VR is very important in order to enhance the
user’s experience with virtual environments. On the other hand, we should be
aware that the user might become disoriented from being too close to the
details. Additional tools like a map of the whole area and navigational screen
information are useful.
• A structural breakdown of the objects in a VR map is helpful. This might not
be always possible because of the irregularity of the real world.
• Dynamic elements are an important part of the VR environment. A flowing
river and moving traffic are just a couple of examples of using animation in
VR maps. It has been documented that dynamic elements are more
perceptible to the human eye than the static ones.
• Spatial sound can be used to enhance human perception in a VR map. It can
tell us about the approximate location of objects without the need to look at
them. This information can be presented to the user who may choose to
ignore it, or process it further according to his/her needs. Caution should be taken not to barrage the user with too many sound variables.
The most important conclusions of this research are: Firstly, human cognitive ability is not used to its fullest in the perception of cartographic information in conventional maps. VR maps could make significantly better use of this ability.
Secondly, users can cope with the complexity of information as they generalize in their mind according to their needs. Caution should be used in the case of sound.
3.4. References
Blaser, A., 2000, “A Study of People’s Sketching Habits in GIS”, Spatial Cognition and Computation, Vol. 2, pp. 393-419
Blattner, M., Sumikawa, D., Greenberg, R., 1989, “Earcons and Icons: their structure and common design principles”, Human-Computer Interaction, Vol. 4, No.
4, pp. 11-44
Blauert, J., 1983, Spatial Hearing: the Psychophysics of Human Sound Localization,
MIT Press, Cambridge
Bly, S., 1982, “Presenting Information in Sound”, Human Factors in Computer
Systems, Proceedings, Gaithersburg, MD, pp.371-375
Buziek, G., 1999, "Dynamic Elements of Multimedia Cartography". In: Multimedia
Cartography, Ed. Cartwright, W., Peterson, M., Gartner, G., Springer-Verlag Berlin, pp. 231-244
Darken, R., Peterson, B., 1999, “Spatial Orientation, Wayfinding, and
Representation”, Naval Postgraduate School, Monterey, California
Dransch, D., 1999, "Theoretical Issues in Multimedia Cartography". In: Multimedia
Cartography, Ed. Cartwright, W., Peterson, M., Gartner, G., Springer-Verlag Berlin, pp. 41-50
Forer, P., 1993, "Envisioning environments: map metaphors and multimedia databases in education and consumer mapping. In: Proceedings of the 16th
International Cartographic Association Conference, Koln, Germany, ICA, pp. 959-
982
Gibson, J.J., 1979, The Ecological Approach to Visual Perception, Houghton-Mifflin,
Boston
Jul, S., Furnas, G.W., 1997, “Navigation in Electronic Worlds: A CHI 97
Workshop”, SIGCHI Bulletin, 29(4), 44-49
Klatzky, R.L., 1975, Human Memory: Structures and Processes, Freeman, San
Francisco
Kosslyn, S.M., 1980, Image and Mind, Harvard University Press, Cambridge.
Kosslyn, S.M., 1983, Ghosts in the Mind’s Machine, Norton Press, New York.
Krygier, J., 1994, “Sound and Geographic Visualization”, in Visualization in Modern
Cartography, Ed. A.M. MacEachren and D.R.F. Taylor, Pergamon Press, Ltd.,
Oxford, pp. 149-166.
Lynch, K., 1960, The Image of the City, Cambridge: MIT Press
Mandler, J. M., Seegmuller, D., Day, J., 1979, ”On the Coding of Spatial
Information”, Memory and Cognition, Vol. 5, pp. 10-16
Montello, D., 1997, NCGIA Core Curriculum in GIS, National Center for Geographic
Information and Analysis, University of California, Santa Barbara, November 1997
Neves, N., Silva, J.P., Goncalves, P., Muchaxo, J., Silva, J.M., Camara, A., 1997,
“Cognitive Spaces and Metaphors: A Solution for Interacting with Spatial Data”,
Computers & Geosciences, Vol. 23, No. 4, pp. 483-488
Peterson, M., 1994, “Cognitive Issues in Cartographic Visualization”, in
Visualization in Modern Cartography, Ed. A.M. MacEachren and D.R.F. Taylor,
Pergamon Press, Ltd., Oxford, pp. 27-43
Ramirez, J.R., 1999, “Maps for the Future: A Discussion”, ICA Conference, Ottawa,
Canada, 1999
Shepard, R. N., 1967, “Recognition Memory for Words, Sentences and Pictures”,
Journal of Verbal Learning and Behavior, Vol. 6, pp. 156-163
Slocum, T., Blok, C., Jiang, B., Koussoulakou, A., Montello, D., Fuhrman, S., Hedley, N.R., 2001, "Cognitive and Usability Issues in GeoVisualization". In:
Cartography and Geographic Information Science, Vol. 28, No. 1
Slocum, T.A., Egbert, S.L., 1991, “Cartographic Data Display”, in Geographic
Information System: the Microcomputer and Modern Cartography, Ed. D.R.F. Taylor,
Pergamon Press, Ltd., Oxford, pp. 167-199
USC, 1999, “Virtual Reality Finding Role In Psychological Evaluation”, Science
Daily, March 1999
Waller, D., Loomis, J., Golledge, R., Beall, A., 2002, “Place learning in humans: The role of distance and direction information”, Spatial Cognition and Computation, Vol. 2, pp. 333-354
Wenzel, E., Fisher, S., Stone, P., Foster, S., 1990 “A System for Three-dimensional
Acoustic “Visualization” in a Virtual Environment Workstation”, Visualization ’90:
First IEEE Conference on Visualization, IEEE Computer Society Press, Washington, pp.329-337.
CHAPTER 4
MAP REPRESENTATION IN VIRTUAL REALITY
In this chapter we discuss integrating the depiction of 3D terrain and man-made structures with other types of representation in order to increase the level of real world perception in maps. These other types of representation include 3D animation, spatial virtual sound, augmented reality and dynamic visualization. VRML will be the primary tool for the examples and pictures depicted in this chapter.
Figure 4-1: Example of 3D Representation for Urban Area
Our intention is to answer questions such as: “How can VR representation combine various data and information from different media sources and facilitate the understanding of the inherent complexities and relationships in a map?”
4.1. Three-dimensional Visualization
In this section we analyze elements of 3D visualization in the context of VR. The
representation of cartographic elements in 3D includes portrayal of features such
as buildings, hydrography, relief and roads. The cartographic community has
extensively researched some of these elements (like relief representation) in the
past. We will review some of the studies on volume (3D objects and terrain
surfaces) visualization and analyze them within the framework of VR in
cartography.
4.1.1. Three-dimensional Object Viewing
We start our discussion with a review of 3D computer graphics and 3D object modeling. Three-dimensional viewing and computer graphics are the basis of VR simulation. The complexity of the 3D viewing process is substantially
higher than that of the 2D viewing process. The increased complexity is not
only due to the fact that an additional dimension is introduced, but also
because display devices are only 2D. VR aims to create the effect of a strong
three-dimensional presence of objects of the real world (Bryson 1996). When
using a computer screen (as in desktop VR, but also other types of VR), 3D
reality is transformed into 2D for display on the computer screen. The
objective is to ultimately convey to the user the impression of a 3D world.
Elements such as clipping, rendering and projection are very complex and
important in 3D-object representation. Foley et al. (1994) introduces the
metaphor of a “synthetic camera”. We can imagine the 3D scene as a
snapshot with a camera of set parameters. Moving through the scene is going
to take us through a collection of such snapshots. The major steps taken to
accomplish the 3D scene are:
• Set Projection type: Projection will transform 3D objects onto a 2D-plane
representation for display on the computer screen. The most popular
projections in the computer graphics world are perspective and parallel
projections. This will be further discussed below.
• Set viewing parameters: This information includes location of the viewer’s
eye and viewing plane. We can use the scene coordinate system and the
eye coordinate system. By varying these parameters we can change the
way the viewer sees the scene including a view of the interior if that is
necessary. These parameters set the conditions for clipping and
rendering.2
• Clipping in 3D: We must eliminate portions of the 3D object or objects
that are not candidates for display. We may choose to ignore parts of the
scene that are outside the viewing area or even objects or parts that are
too far to be clearly visible. Clipping is a complex process determined by
mathematical equations that depend on the viewing parameters
mentioned above and the intricacy of the objects.
• Projection and display: This is the final stage of the process in which the
contents of the scene are projected in the viewing medium (in the case of
desktop VR, this is the computer screen).
2 Rendering is a very important element of 3D visualization and it will be discussed in detail later in this chapter
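The projection step listed above can be sketched for a single point: perspective projection divides by depth, while parallel (orthographic) projection simply drops it. This is a minimal illustration, not VRML's actual rendering pipeline:

```python
def perspective_project(point, d=1.0):
    """Project onto the plane z = d with the center of projection at
    the origin; dividing by depth produces foreshortening."""
    x, y, z = point
    return (d * x / z, d * y / z)


def parallel_project(point):
    """Orthographic projection: the center of projection is at
    infinity, so projectors are parallel and depth is discarded."""
    x, y, z = point
    return (x, y)


p = (2.0, 2.0, 4.0)
print(perspective_project(p))  # (0.5, 0.5): distant objects shrink
print(parallel_project(p))     # (2.0, 2.0): size independent of depth
```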
Based on the above list we believe it is important to further discuss some of
the steps that are essential to the performance of VR as it relates to
cartography, such as projections and clipping.
Projections: In general, projections transform points and objects from a
coordinate system of n dimension to a coordinate system of less than n
dimension (Foley et al. 1994). In cartography, projections aim to transform
the earth surface’s three-dimensional coordinates (based on an ellipsoid or
sphere) to a plane for two-dimensional representation. In computer graphics
three-dimensional representations of objects are transformed onto a two-
dimensional plane representation for display on a computer screen.
Projections used in computer graphics can be divided into two basic classes:
perspective and parallel. The distinction lies in the distance of the center of projection from the projection plane. If this distance is finite, the projection is of the perspective type. If the center moves away to infinity, the projection is
of the parallel type. Figure 4-2 and Figure 4-3 illustrate the difference
between these two projection types.
Figure 4-2: Perspective Projection Illustration
Figure 4-3: Parallel Projection Illustration
In perspective projections, parallel lines that are not parallel to the projection
plane converge to a point that is called the vanishing point. In Figure 4-2, the
Z-axis is parallel to the projection plane; therefore only lines along the X and
Y-axis converge to vanishing points (this is illustrated in Figure 4-4).
Figure 4-4: Illustration of Vanishing Points
Subtypes within parallel projections depend on the direction of the projection lines. If lines of a parallel projection are perpendicular to the projection plane, the projection is orthographic; otherwise, the projection is oblique. The most common orthographic projections are front, side and top projections, where the projection plane is perpendicular to a principal axis (Figure 4-5).
Figure 4-5: Orthographic Projection (Front, Top and Side views)
Perspective projection is the most realistic because of the way our eyes perceive the environment. It is also the most common projection used in computer graphics. We believe that the two-point perspective projection is the most appropriate for cartographic representation in VR, because in most cases the projection plane is nearly parallel to at least one of the axes. This is most apparent in the navigation mode in VR environments, where the projection plane is nearly parallel to the Z-axis. The three-point perspective projection has three vanishing points along the principal directions. It is very rarely used because it does not greatly enhance realistic representation; therefore, its use does not justify the added computational effort (Foley et al. 1994). Figure 4-6 and Figure 4-7 show an example of a scene representation (a road and tunnel) in both perspective and parallel projections. The realism of the perspective projection as compared to the parallel one is apparent.
Figure 4-6: VR Snapshot in a Perspective Projection
Figure 4-7: VR Snapshot in a Parallel Projection
Equations for the coordinate transformation within projections are derived from their special geometric characteristics. These equations are a very important factor in projections, especially in the VR map context. Measurements and perspective computations are also significant to cartography and GIS. Projection equations transfer these measurements from the medium we are using (the computer screen in a desktop VR GIS) to three-dimensional reality or virtual reality, or vice versa. These transformations should be done on the fly. Most of the software and languages that deal with VR (including VRML) make the transformations easier for developers and perform them automatically in the background. The problem is that in object-oriented environments (like those VRML creates) locations are object-based. This means that objects placed in an area are spatially referenced, while “the empty area” is not. In other words, if we click in an area with no objects in a desktop VR, we might not be able to receive the spatial location of the click or point. This is something that can be done easily on a paper map or in GIS, because of the simplicity of two-dimensional representation, where the whole area is georeferenced. Set parameters like camera focal distance and direction are important factors to include in a projection transformation. It has become almost universal to represent the computer screen as a pair of integers (X and Y) with a range 0 to (Xmax-1) horizontally and 0 to (Ymax-1) vertically (Ferguson 2001). The origin is in the top left corner. The distance of the projection plane from the viewpoint can be arbitrarily chosen to make things easier. If the plane of projection is located at (1,0,0) and is parallel to the yz plane, then the screen coordinates Xs and Ys for the projection point (x, y, z) are (Ferguson 2001):
Xs = Xmax/2 - (y/x)·sx    (Equation 4-1)

Ys = Ymax/2 - (z/x)·sy    (Equation 4-2)
The parameters sx and sy are scale values that allow for different aspect ratios (vertical and horizontal resolutions):
sx = (Xmax/2)·(ff/21.22)·Ax    (Equation 4-3)

sy = (Ymax/2)·(ff/21.22)·Ay    (Equation 4-4)
where ff is the focal length (measured in mm), Ax : Ay is the aspect ratio, and 21.22 is a constant that allows us to specify ff in standard mm units.
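Ferguson's transformation can be sketched directly. The resolution, focal length and aspect factors below are assumed example values:

```python
def screen_coords(point, xmax=640, ymax=480, ff=50.0, ax=1.0, ay=1.0):
    """Screen coordinates of a 3D point viewed along the x-axis with
    the projection plane parallel to the yz plane (Equations 4-1 to
    4-4); the screen origin is at its top-left corner."""
    x, y, z = point
    sx = (xmax / 2) * (ff / 21.22) * ax  # Equation 4-3
    sy = (ymax / 2) * (ff / 21.22) * ay  # Equation 4-4
    xs = xmax / 2 - (y / x) * sx         # Equation 4-1
    ys = ymax / 2 - (z / x) * sy         # Equation 4-2
    return (xs, ys)


# A point straight ahead of the viewer maps to the screen center.
print(screen_coords((5.0, 0.0, 0.0)))  # (320.0, 240.0)
```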
Clipping and culling: In order to reduce the number of calculations and eliminate unnecessary features, it is imperative to perform clipping and culling (Ferguson 2001). Clipping involves intersections of the screen boundaries with features that extend both inside and outside the viewing area. As such, mathematical and geometrical algorithms for intersections of lines with planes, planes with planes and planes with volumes are applied. Culling refers to the process of discarding polygons that are outside the viewing area (Ferguson 2001). Because clipping and culling are important components of 3D computer graphic display, there has been extensive research in these areas. Most 3D modeling languages and programs include these procedures intrinsically.
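A highly simplified culling test can be sketched: a polygon is trivially discarded when all of its vertices lie outside the same boundary of a rectangular viewing area. Real systems cull against the 3D view frustum; the names here are ours:

```python
def cull(polygons, xmin, ymin, xmax, ymax):
    """Keep polygons that may intersect the view rectangle; discard
    only those whose vertices all fall outside one and the same edge."""
    kept = []
    for poly in polygons:
        if (all(x < xmin for x, _ in poly) or all(x > xmax for x, _ in poly)
                or all(y < ymin for _, y in poly) or all(y > ymax for _, y in poly)):
            continue  # trivially outside the view: cull it
        kept.append(poly)  # a candidate for clipping and display
    return kept


inside = [(1, 1), (2, 1), (1, 2)]
outside = [(20, 20), (21, 20), (20, 21)]
print(cull([inside, outside], 0, 0, 10, 10))  # only the first triangle survives
```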
In today's 3D computer graphics world, generating a 3D model in computers is summarized in one word: rendering. In this context, rendering is the process of artificially generating a picture of a scene from a numerical data set or other specifications (O'Rourke 1998). Other important aspects of realistic 3D rendering, besides the basic steps listed above, include:
• Surface Representation: This basic rendering procedure creates the
object from a list of polygons and vertices. The simplest way to visualize
an object is to create a wireframe. Wireframe drawing involves the fewest
calculations possible to draw the object. Other techniques include
Hidden Surface Drawing which adds color to the drawing so that hidden
polygons do not show.
• Anti-aliasing: This procedure refers to the measures taken to avoid
aliasing which is an undesirable effect resulting from sampling (or
pixelation). These measures minimize the interference of errors due to
the fact that the display device has a finite resolution.
• Lighting: The purpose of lighting is to add a realistic touch to the
drawing. The color and direction of light should be specified. Light
sources can be classified as point, directional and spotlight.
• Shading: Algorithms (like the Phong and Gouraud approach) manage to
produce very realistic shading with the least amount of computational
effort (Ferguson 2001). Shading increases the realistic impression of the
scene.
• Object Texturing: It allows for realistic detail to be added to the image
without the need for a large number of small polygons. The most basic
technique of object texturing is surface texture mapping in which a two-
dimensional picture is applied to the surface of a three-dimensional
object.
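The diffuse term that underlies shading approaches such as Gouraud and Phong can be sketched as Lambert's cosine law. This is a hedged simplification, not those algorithms in full:

```python
import math


def diffuse_brightness(normal, light_dir, intensity=1.0):
    """Lambertian diffuse term: intensity * max(0, N . L) with both
    vectors normalized; faces turned away from the light are dark."""
    nlen = math.sqrt(sum(c * c for c in normal))
    llen = math.sqrt(sum(c * c for c in light_dir))
    cos_angle = sum(a * b for a, b in zip(normal, light_dir)) / (nlen * llen)
    return intensity * max(0.0, cos_angle)


print(diffuse_brightness((0, 0, 1), (0, 0, 1)))  # 1.0: light hits face-on
print(diffuse_brightness((0, 0, 1), (1, 0, 0)))  # 0.0: grazing light
```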
4.1.2. Terrain Modeling
In this section we proceed with a look at three-dimensional cartographic representations of terrain. Cartographers have employed many techniques to graphically portray relief. The oldest relief maps depict relief with simple symbols or sketching (Kraak 1988). One of the oldest documented relief maps of this type is the “Helvetiae Descriptio” by Tschudi in 1538. A portion of this map is shown in Figure 4-8. Notice that careful sketching portrays the mountains.
Figure 4-8: Helvetiae Descriptio – Swiss Map of 1538
Figure 4-9: Shaded Relief Map of Northern Albania
Relief mapping developed further with the use of hachuring and shading to enhance the impression of relief (a more artistic and, at the same time, more accurate way of showing the relief). Figure 4-9 shows a shaded relief map of northern Albania.
The following list outlines the most popular ways of relief representation:
• Antique Maps: As mentioned above this includes mountain ranges drawn
in aspect and with simple symbology
• Contour Representation: Contours are lines that connect points of the
same elevation. This is a popular method used on topographical maps
• Modified Contour Representation: These are contour lines that are
modified to apply illumination and lighting. They are thicker in the
shadow areas to give the impression of a more realistic relief. They are
also called shadowed or illuminated contours
• Hachure Representation: These are parallel lines perpendicular to the
contours, aligned in the direction of slopes. Thicker and denser hachures
depict steeper areas giving a more realistic representation
• Hypsometric Tint Representation: Represented by shaded areas between
contours in bands of color
• Continuous Tone Representation: Each point is shaded according to the
elevation value of the surface at that point. This method is also used in
the digital environment
• Shaded Relief Representation: Surface is illuminated with a light source
and everything is drawn in graduated black and white according to the
illumination
• Stereo Pairs or Anaglyph Representation: Two maps (surfaces) are
created from slightly different perspectives and then viewed through a
stereoscope or 3D glasses
• Fishnet Representation: A grid is draped over the surface and then viewed in perspective. Other data can be inserted into this skeleton. This method is used in digital environments
• Virtual Reality Representation: This representation allows the user to fly-
through, navigate and be immersed in the scene, as well as view it from a
distance. This method is only used in digital environments.
VR representation of maps is the focus of this research and we continue to discuss methods used to depict surfaces in this environment. In cartography, surfaces are represented with either regular or irregular tessellations. The most common are:
• Digital Elevation Models (DEM): A DEM is a regular network of points with an elevation interpolated for each. The term digital terrain model (DTM) is also used in relation to the surface of the earth to represent topographic elevations and other terrain features (Worboys 1995)
• Triangular Irregular Networks (TIN): A TIN is an irregular collection of
triangles representing the topography.
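As a concrete illustration, a DEM maps naturally onto VRML’s ElevationGrid node, which takes a regular grid of elevations and lets the browser triangulate and render the surface. The grid values below are invented for illustration only:

```vrml
#VRML V2.0 utf8
# A 4 x 4 regular grid of elevations (a miniature DEM).
Shape {
  appearance Appearance {
    material Material { diffuseColor 0.3 0.6 0.3 }
  }
  geometry ElevationGrid {
    xDimension 4
    zDimension 4
    xSpacing 10.0        # distance between grid points along x
    zSpacing 10.0        # distance between grid points along z
    height [ 0 1 2 1,
             1 3 4 2,
             2 4 5 3,
             1 2 3 1 ]   # one elevation per grid point, row by row
    creaseAngle 3.14     # smooth shading across adjacent facets
  }
}
```

A TIN, by contrast, would be expressed with an IndexedFaceSet node listing irregular vertices and explicit triangle indices.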
In most cases (computer graphics toolboxes, software languages), TINs are used in association with modern rendering techniques (including lighting, shading and anti-aliasing) for realistic surface generation. Figure 4-10 depicts a TIN representation of the Northern Albanian Mountains, generated using VRML. Figure 4-11 shows the same model after shaded rendering is applied.
How does the variety of methods for 3D representation of terrain and other objects affect VR map representation? Bryson (1996) warns about the computational complexity that comes with VR. VR systems make it easy for the user to understand complex spatial information, but creating a strong sense of three-dimensional presence can be an expensive task. There are two fundamental elements associated with the realism that VR creates:
• The realistic rendering of three-dimensional representation
• The interactive human-computer interface that creates a strong sense of
the three-dimensional presence.
Figure 4-10: TIN Representation of Northern Albania Relief
Figure 4-11: Rendered Representation of Northern Albania Relief
The second item is the most important in designing VR applications, because it is the least expensive computationally (Bryson 1996). The interactive human-computer interface is a very important factor in VR cartography and GIS. First, we aim to reduce the computational complexity that comes with intricate rendering; human-computer interaction becomes ineffective if the system slows down as a result of complex rendering (and the power of realistic representation is therefore lost). Second, maps are powerful tools of communication; interaction of users with the environment is very important in achieving the objective of a map. In this research we adopt Bryson’s principle for developing VR systems: when designing VR maps we keep graphic rendering as simple as possible, within the limits imposed by the processing power of the computer system. We emphasize, however, that we are not restricting our theory to simple rendering techniques. Advanced and realistic rendering techniques should be used when advances in computer systems allow it.
4.2. Sound - An Addition to the Visual Interface of a VR Map
As explained in sections 2.7 (Sound in VR) and 3.2.4 (Spatial Cognition and Sound), sound is a very powerful supplement to the visual representation of spatial information.
In this section we explore the use of sound as a complement to the visual interface of a VR map. We start with a description of the use of sound in VR systems and proceed with the investigation of its use in VR maps.
4.2.1. Virtual Auditory Systems
Sound is a very important aspect of daily life. As such, auditory systems play a very important role in a VR application. Nevertheless, audio processing is often given secondary attention when designing virtual environments. Some of the main factors that underline the importance of auditory systems in a VR environment are:
• Sound is important to achieve a full sense of presence in an immersive virtual environment. Experiments have shown elements of disorientation in people who suffered sudden deafness (Gilkey and Weisenberger 1995).
• Simulating the real world requires a high degree of realism. Sound is an
integral part of the human environment. As such, it would be inadequate
to have a VR environment without sound.
• Response to visual targets is enhanced when auditory cues are included in VR systems. Studies have shown that the time people need to react to a visual object decreases dramatically when localized sound is introduced (Perrott et al. 1990).
• Sound systems can be used to represent information that does not exist
in the real world. The advantage that we have when simulating the real
world is that we can add additional information and sensors according to
our needs. For example, if the user makes an error or is trying to do
something that is not permitted, sound can be used to alert them to the
problem. Auditory information and spatial hearing can also be used to
replace other senses in a VR environment. The sensation of walking into a closed door, for example, is impossible to emulate in Desktop VR; the sense of touch in this case can be replaced with sound.
In order to realize the impact of 3D sound in VR applications we need to understand some basic concepts of sound propagation and sound localization.
From the standpoint of physics, sound is an acoustic wave created by the vibration of a sound source (like the vocal cords) and transmitted through the air to the eardrum. Vibrations of the eardrum are then transmitted to the brain in the form of neural impulses, and the result is the sense of sound. The most interesting and useful aspect of human hearing is the ability to localize sound (explained briefly in Chapter 3). This ability enables humans (and other living organisms) to extract directional information from sound. This is important within the context of cartography, an option that has not previously been fully utilized. The coordinate system used to study the location of a sound source with regard to the listener is head-centered. This could be a rectangular (Figure 4-12) or spherical coordinate system (Figure 4-13). The rectangular coordinate system is based on an XYZ system with planes defining the up/down, front/back and left/right separations.
Figure 4-12: Rectangular Coordinate System
Figure 4-13: Polar Coordinate System
However, a polar or spherical coordinate system is more appropriate because the human head is approximately spherical in shape. This is the system most commonly used. The polar system is defined by azimuth (θ), elevation (φ), and range (r).
In Chapter 3 we described the idea behind the localization of sound. The time difference explained there is also known as the Interaural Time Difference (ITD) (Blauert 1983, Duda 1997). For a source in the azimuthal (horizontal) plane, ITD can be calculated easily using polar coordinates:

ITD = (a / c) · (θ + sin θ), where −90° ≤ θ ≤ 90°    (Equation 4-5)

where a is the radius of the head approximated by a sphere, and c is the speed of sound (343 m/s).
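To get a feel for the magnitudes involved, Equation 4-5 can be evaluated for a source directly to one side of the listener. The head radius used below is an assumed typical value, not one given in the text:

```latex
% ITD for a source at \theta = 90^{\circ} (directly to the listener's side),
% assuming a typical head radius a = 0.0875 m and c = 343 m/s:
\mathrm{ITD} = \frac{a}{c}\,(\theta + \sin\theta)
             = \frac{0.0875}{343}\left(\frac{\pi}{2} + 1\right)
             \approx 2.55\times10^{-4} \times 2.571
             \approx 6.6\times10^{-4}\ \mathrm{s}
```

That is, the largest interaural delay is roughly 0.66 milliseconds, which is the upper bound a VR auditory system needs to reproduce for azimuthal localization.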
When localizing sound, humans are best at estimating azimuth, next best at elevation, and worst at estimating range (Blauert 1983). This is a very important fact when it comes to designing an auditory system for a VR map. The combination of the spatial visual element and the sound element should be based on this fact, among others. Figure 4-14 shows a suggested audio distribution in space, taking into account the accuracy with which we can localize sound. Brighter areas allow for an increased use of sound. Our ability to recognize the location of sounds is reduced as the distance and elevation of the sound sources increase. Although the use of sound is important in locations above the head (where vision is impossible), we should be cautious about the accuracy with which the user can localize sound from these locations; the estimation of height is not as accurate as the estimation of azimuth.
Figure 4-14: Diagram showing possible audio distribution in space
In VR, auditory cues can be simulated using headphones or loudspeakers. Headphones are usually more appropriate because sound signals can be controlled independently and there is no outside noise (like echoes). When designing virtual auditory systems it is important to keep in mind that realism is not always the most essential factor. Including echoes will increase the perceived realism of the display and improve distance perception; however, echoes will also degrade the perception of sound direction (Shilling and Shinn-Cunningham 1999).
4.2.2. Use of Sound in VR Maps
In this section we identify two important approaches to using sound in VR applications: attribute representation and enhancing the perception of real world phenomena.
Attribute Representation: This approach involves the use of sound to explain the objects in a VR scene. The user walks through the scene and interacts with different objects; pointing at a specific object results in a narrative explaining that object. An example could include a description of a building or directions to that building (see Table 4.1 for more examples). The advantage of this method is that we can simplify the visual display (the user does not have to look at two different places at the same time). The Center for Mapping (Ramirez et al. 2002) has researched the use of sound for descriptive purposes in conventional 2D maps with multimedia elements. In the case of a VR map the sound description of an object also contains a spatial element: the audio attached to the object comes from a specific direction (azimuth and elevation) and with a specific volume based on the distance from the object. As mentioned in the previous section, range is probably the weakest aspect of sound-source localization (Blauert 1983), but the difference in volume can give us an idea of the relative range.
Enhancing the Perception of Real World Phenomena: This approach encompasses the realistic use of simulated audio for phenomena like rain and thunder or a busy traffic intersection. This kind of representation provides a map with additional information to convey to users. The use of sound for simulating reality has been the focus of research by Scharlach (2001) in “Maps and Sound”, where conventional 2D maps were outfitted with realistic sounds for enhanced user perception. This research focuses on the use of sound in VR maps. The use of simulated sound in a VR environment for representing real world phenomena is not aimed simply at conveying information, but rather at assisting the perception of visual information. The reaction time of the viewer to a phenomenon or its attributes is significantly faster if the user can hear its sound coming from a certain direction and a certain distance (Perrott et al. 1990).
How can we depict or enhance specific cartographic features by using sound?
We do not claim to cover all cartographic features in this research, as that would be a very complex task. As an example, we can use the cartographic data categories from the United States Geological Survey (USGS) Digital Line Graph (DLG) maps. Table 4.1 depicts DLG categories and their potential descriptive or realistic representation using sound.
Cartographic Data Category | Sub-Category | Realistic Sound | Descriptive Sound | Sound Description
Hypsography | – | – | Yes | “Climbing”, “Descending”
Hydrography | Flowing Waters | Yes | – | The sound of flowing water
Hydrography | Standing Waters | Yes | – | Sound of waves
Hydrography | Wetlands | Yes | Yes | Sound of birds or descriptive: “Wetland”
Vegetation | Woods | – | Yes | “Woods”
Vegetation | Scrub | – | Yes | “Scrub”
Vegetation | Orchards | – | Yes | “Orchards”
Vegetation | Vineyards | – | Yes | “Vineyards”
Non-Vegetative Features | Glacial moraine | – | Yes | “Glacial moraine”
Non-Vegetative Features | Lava | – | Yes | “Lava”
Non-Vegetative Features | Sand | – | Yes | “Sand”
Non-Vegetative Features | Gravel | Yes | Yes | Sound of walking in gravel or descriptive: “Gravel”
Boundaries | State | – | Yes | “Crossing state boundary”
Boundaries | County | – | Yes | “Crossing county boundary”
Boundaries | City | – | Yes | “Crossing city boundary”
Boundaries | Forests | – | Yes | “Forest”
Boundaries | Parks | – | Yes | “Park”
Survey Controls and Markers | Horizontal Monument | – | Yes | “Horizontal Monument”
Survey Controls and Markers | Vertical Monument | – | Yes | “Vertical Monument”
Transportation | Roads | Yes | – | The sound of traffic
Transportation | Trails | – | Yes | “Trail”
Transportation | Railroads | Yes | – | The sound of a train
Transportation | Pipelines | – | Yes | “Pipeline”
Transportation | Transmission Lines | – | Yes | “Transmission line”
Manmade Features | Schools | Yes | – | Sound of a busy school
Manmade Features | Churches | Yes | – | Sound of a church bell
Manmade Features | Hospitals | Yes | – | Sound of ambulance sirens
Public Land Survey System | Township | – | Yes | “Entering Township” / “Leaving Township”
Public Land Survey System | Range | – | Yes | “Entering Range” / “Leaving Range”
Public Land Survey System | Section | – | Yes | “Entering Section” / “Leaving Section”
Table 4.1: List of cartographic features and their potential audio representation
Sound variables (as discussed in 2.7 Literature Review – Sound in VR) are very important for accurate human perception of a sound source location. When it comes to VR simulated sound in maps, some of the sound variables (as described by Krygier 1994) take on significant importance:
• Loudness: Describes the magnitude of sound. The closer the user is to a sound source, the louder the tone becomes; beyond a certain distance from the source the user or navigator cannot hear it at all.
• Pitch or frequency of sound: The range of human hearing extends from roughly 20 to 20,000 vibrations per second.
• Location: This variable includes the direction and distance of the sound. The direction of the sound is determined by its localization as explained in 3.2.4 – Spatial Cognition and Sound. As mentioned above, we are best at estimating direction, especially at ear level; distance is the hardest to estimate (see Figure 4-14).
• Timbre: Timbre is the most complex of auditory attributes. It represents
the quality or special characteristic of sound. Two sounds with the same
pitch and loudness can be distinguished by their timbre.
VRML allows for the implementation of three-dimensional, spatialized audio.
The syntax of a sound node in VRML is:
Sound {
  exposedField SFVec3f direction   0 0 1
  exposedField SFFloat intensity   1
  exposedField SFVec3f location    0 0 0
  exposedField SFFloat maxBack     10
  exposedField SFFloat maxFront    10
  exposedField SFFloat minBack     1
  exposedField SFFloat minFront    1
  exposedField SFFloat priority    0
  exposedField SFNode  source      NULL
  field        SFBool  spatialize  TRUE
}

The power of spatialized sound in simulated reality is evident in this example
of a hornet buzzing around: http://www.geocities.com/bidoshi/Dissertation/.
In order to see this example you need Cortona VRML Client from Parallel
Graphics (or any other type of VRML client). This software can be downloaded
for free at http://www.parallelgraphics.com/products/cortona/download.
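Using the node above, a sound source can be anchored to a specific map feature. The sketch below is illustrative; the audio file name and coordinates are hypothetical placeholders:

```vrml
#VRML V2.0 utf8
# A spatialized river sound attached to a location in the scene.
Sound {
  source AudioClip {
    url  "river.wav"   # hypothetical audio file
    loop TRUE
  }
  location  20 0 -15         # position of the sound source in map coordinates
  direction 0 0 1
  minFront 10  minBack 10    # full-volume region around the source
  maxFront 100 maxBack 60    # hearing range; silent beyond these distances
  spatialize TRUE            # browser pans and attenuates by listener position
}
```

As the user navigates, the browser adjusts the perceived direction and volume of the clip, giving the feature the azimuth and relative-range cues discussed above.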
4.3. Dynamic Visualization
The purpose of this research is to explore the different ways that dynamic visualization can be used in VR mapping. We start with a general review of the basics of 3D animation in VR and continue with the use of VR animation, or motion, to represent cartographic features and information.
4.3.1. Realistic Three-dimensional Computer Animation in VR
Three-dimensional animation in a VR environment is fundamentally the same (as explained in 2.6 - Animation in VR) as two-dimensional animation. It is a series of still images that, when played back in sequence, appears continuous and simulates movement (O’Rourke 1998). The illusion of flowing movement (the same idea is used in film and video) is produced by a property of our eyes called persistence of vision: an afterimage is left on the retina after the subject of the vision is taken away. A very important notion in animation is the concept of keyframes. In the early days a master animator would draw the most important frames in an animation sequence, the keyframes. They are the most meaningful frames in the animation sequence; they are instances of essential changes in the motion flow (the turns of a car driving on a road in a VR map, for example, can be considered keyframes). A number of less experienced animators would then create the rest of the frames in between. Almost every 3D computer animation system uses the same idea (Ferguson 2001). In a 3D computer animation system the human operator is the master animator and creates the keyframes; the computer then calculates and generates the filler frames. This is also called “tweening” (derived from the word in-betweening).
Next we examine some of the basic and advanced animation techniques used in 3D computer animation systems (O’Rourke 1998). These are techniques that we believe pertain to 3D animation of cartographic features and phenomena.
• Motion path animation: The user draws a curve in space, referred to as the motion path, along which the object moves. The user selects the object to be animated and conveys to the system the number of frames and the path to be traced. The computer then interpolates frames between the main locations of the path. This method can be used for traffic representation (to enhance the perception of a road in a map), navigation systems, a user moving along a certain path, etc.
• Shape changes: Most 3D-computer animation systems provide means for
shape deformations or changes. The user defines the points that
determine the object shape in keyframes and the computer completes the
tweening. This method is appropriate for continuous temporal changes in
cartographic features such as relief or still waters.
• Camera animation: The point of view or camera moves instead of the
objects as described in motion path animation. This can be used in user
navigation along a certain pre-determined path.
• Expressions: In some situations it is best to think of animation as a
mathematical formula. Objects will move through a specific path defined
by this formula. In this case the user provides the formula and the object
to be animated. The computer calculates the positions of the keyframes
using this formula. This is a very useful technique for mapping and GIS.
It can be used when we need the navigator to move through a specific
path defined by a formula. Cloud movements can also be defined by a
formula (involving speed and direction). The implementation of common
GIS search and analysis through statistical functions can be an
extension of this technique. This includes operations like proximal
search, buffering and best path analysis.
• Motion dynamics: In certain circumstances we can predict the way the
object will move by using the laws of physics. An example of this would
be a ball bouncing on a specific surface. The animation technique based
on the motion of objects as determined by laws of physics is called
motion dynamics.
• Motion capture: Since VR intends to realistically portray the real world, it is important to make the participants in a virtual world behave as realistically as possible. Motion capture involves accurately mimicking real world motions (for example, a person walking). This is a popular technique in the realistic design of sports video games, where the athletes are wired with sensors in order to capture their motions.
• Particle and particle-like systems: This technique is appropriate for portraying phenomena like gas, smoke, clouds and fog, whose surfaces cannot be modelled easily otherwise. The particle system technique does not model individual surfaces, but rather creates them from a collection of pixel-sized elements that become visible when they come together.
• Procedural animation: This is a modelling and animation technique that
involves a different kind of approach. A text file usually called “world file”
contains the complete information needed to model and animate the
objects. The software then interprets this file and a VR scene is
generated. Animations in VRML are examples of procedural animation.
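Several of the techniques above can be expressed directly in VRML. The fragment below is a minimal sketch of motion path animation: the interpolator’s keyValue entries are the keyframe positions and the browser performs the tweening between them. The coordinates and timing are invented for illustration:

```vrml
#VRML V2.0 utf8
# The object to be animated (a simple stand-in for a car).
DEF CAR Transform {
  children Shape {
    appearance Appearance { material Material { diffuseColor 1 0 0 } }
    geometry Box { size 4 1.5 2 }
  }
}
# The clock driving the animation: one loop every 10 seconds.
DEF CLOCK TimeSensor { cycleInterval 10 loop TRUE }
# The motion path: keyframe positions at fractions of the cycle;
# the browser interpolates (tweens) between them.
DEF PATH PositionInterpolator {
  key      [ 0.0, 0.25, 0.5, 0.75, 1.0 ]
  keyValue [ 0 0 0, 20 0 5, 40 2 0, 60 2 -5, 0 0 0 ]
}
ROUTE CLOCK.fraction_changed TO PATH.set_fraction
ROUTE PATH.value_changed     TO CAR.set_translation
```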
4.3.2. Dynamic Visualization in Virtual Reality Cartography
In this research we identify four ways that three-dimensional dynamic
visualization can be used in cartography:
Displaying Real World Phenomena: Rivers, clouds, rain and the movement of traffic are all dynamic features that are important to the map user. Dynamic representation of these features makes them appear more real and eases user perception. An example of this visualization, a scene depicting a car driving along a mountain road, can be seen at: http://www.geocities.com/bidoshi/Dissertation. Some of the cartographic categories (DLG features) that can be enhanced by using 3D animation are:
• Hydrography
- Flowing waters
- Standing waters (waves)
• Vegetation
- Woods (wind motion on trees)
• Non-vegetative features
- Lava
• Transportation
- Roads
- Trails
- Railroads
- Pipelines (this classification is consistent with DLG map standards; typically the use of Pipelines differs from the use of other Transportation features)
Attracting Attention: Animation of objects can be used to attract the attention of the map user. For example, a map query results in a number of objects in the scene, and the user is interested in visualizing these objects in the context of their surroundings. Making the objects blink can draw the user’s attention to them even in a complex scene representation. DLG features that can be enhanced by attention-attracting animation include:
• Boundaries: Since boundaries are abstract features, an animated blinking barrier can be used to portray them.
• Survey Controls and Markers: Blinking monuments can be distinguished from the environment as important geodetic features.
Representing Time Changes: Animation can be used to show changes over time to a specific object or to the whole scene. This kind of animation can be applied to all DLG features (using the feature list from Table 4.1), showing their progression in time.
Representing GIS Spatial Analysis: Motion in a VR map can be used to enhance the representation of GIS queries and spatial analysis. Best path analysis and routing are good examples of such applications. This analysis addresses the problem of finding the shortest, or least-cost, path between two locations. The result of this query can be visualized in a VR map by animating the movement along this path. The path can be shown through the sequential drawing of a trail that points the user in the proper direction.
In addition to what we mentioned above, immersion is also a very important element of VR map dynamic visualization. Because of its significance, we discuss immersion in the next section. VRML allows for the implementation of dynamic visualization. This is made possible by the TimeSensor node with the following syntax:
TimeSensor {
  exposedField SFTime  cycleInterval 1
  exposedField SFBool  enabled       TRUE
  exposedField SFBool  loop          FALSE
  exposedField SFTime  startTime     0
  exposedField SFTime  stopTime      0
  eventOut     SFTime  cycleTime
  eventOut     SFFloat fraction_changed
  eventOut     SFBool  isActive
  eventOut     SFTime  time
}
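Wired to interpolators through ROUTE statements, the TimeSensor can drive any of the four uses identified in this section. As one illustrative sketch (with invented geometry and colors), a blinking marker for attracting attention might look like this:

```vrml
#VRML V2.0 utf8
# A boundary marker whose color pulses to attract attention.
DEF MARKER Shape {
  appearance Appearance {
    material DEF MAT Material { diffuseColor 1 0 0 }
  }
  geometry Cylinder { radius 0.3 height 5 }
}
DEF CLOCK TimeSensor { cycleInterval 1 loop TRUE }   # one blink per second
DEF BLINK ColorInterpolator {
  key      [ 0.0, 0.5, 1.0 ]
  keyValue [ 1 0 0, 1 1 0, 1 0 0 ]   # red -> yellow -> red
}
ROUTE CLOCK.fraction_changed TO BLINK.set_fraction
ROUTE BLINK.value_changed    TO MAT.set_diffuseColor
```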
4.4. Viewer Immersed in the Scene
It is believed (Robertson et al. 1998) that the power of VR is based on the fact that user attention is captivated by creating a sense of immersion. In general, when speaking about VR immersion, one thinks of the Head-Mounted Display (HMD). This display is updated by tracking head movements and changing the viewpoint in the display accordingly. However, this kind of involvement is not limited to HMD VR applications. Immersion is defined by Webster’s dictionary as “the state of being absorbed or deeply involved”. Although objects are displayed on a flat computer screen, the change of viewpoint and the way objects are comprehended from this viewpoint create the perception of a realistic immersion into the scene. Immersion in Desktop VR is not as effective as in HMD VR, but it is still a very powerful method to obtain a realistic view of the surroundings. Pausch et al. (1997) contend that the difference between HMD VR and Desktop VR is not significant in terms of user immersion in the scene. The realistic virtual presence in desktop environments is simulated by a high degree of interaction and the users’ freedom of movement, combined with the human capability of imagination. Users feel comfortable extracting information from the scene in both environments. The sense of immersion, a basic characteristic of VR, can be a powerful interpretative aid in cartography. MacEachren et al. (1999) warn about some concerns dealing with immersion in VR for mapping. The power of immersion comes from the fact that it is realistic. When designing a cartographic VR system we should aim to preserve this sense of realism by limiting the non-realistic options for the user.
Another concern is maintaining a sense of the overall area when immersed in a VR scene (We briefly discussed this in 3.2.1 – Human Cognition of Virtual Reality
Maps). There are two options to deal with this concern:
• The user controls his/her own actions by quickly panning, zooming and flying
through space in order to create a sense of the position in regard to his/her
environment. This is a very natural task in VR, but results in temporary
distraction from the original task
• The use of multiple views (like in the video game Doom) can help the user
maintain a general sense of the area while enjoying the benefits that come
with full immersion in a VR map. Figure 4-15 and Figure 4-16 depict an immersive VR view and its respective top view. The user is continuously aware of his/her location.
Figure 4-15: Immersed View Figure 4-16: Top View
4.5. Map Navigation
In the real world or in VR, navigation is the process by which people determine where they are, where the surroundings are and how to get to a particular location (Jul and Furnas 1997). This is a very complex process and often a source of people’s frustration with conventional maps. Computer maps and GIS have significantly improved the process of navigation through the introduction of complex spatial analysis. It is not by chance that mapping companies providing such services online (like MapQuest, MapBlast, Yahoo Maps) are very popular and have become part of our daily life. In 3.2.1 (Human Perception of Virtual Reality Maps) we studied some aspects of navigation in the VR world from the cognitive point of view. Here we examine the different ways navigation can affect information processing in VR maps. In a VR environment the user can interactively fly through, walk through, or be immersed in the map using these basic navigational tools:
• Panning - moving in all directions of the plane where the point of view is
located, coming closer or moving further away from points of interest
• Zooming in and out – moving towards or away from the area of interest
• Tilting – rotating in all directions to get a better perspective of the view.
Other navigational tools (in the form of on screen buttons) can be very helpful:
• Fit Objects - fitting the collection of simulated objects in our desktop scene, or
flying away from the scene until the whole scene fits in our view
• Straighten Scene – positioning the camera or the viewpoint parallel to the horizon. This is necessary for quick adjustments, especially to help inexperienced navigators.
Predetermined flight paths can also be available with one click of the mouse.
Navigation is an important factor in query and analysis visualization. The user can set up a VR system where the viewpoint automatically navigates to, and immerses the user in, all objects returned by a query. For example, we can ask the system to perform such a combination of analysis and visualization by requesting: “Find the households with an annual income of more than $50,000 and visit the results starting with the lowest and finishing with the highest value”. Another example is navigating along the path determined by the shortest-distance analysis of driving directions from one point to another. The viewpoint can be changed to the driver’s viewpoint, and the user can take a virtual tour of the drive in order to better understand the surroundings before driving to the destination.
We identify the following ways the user can navigate through the VR map.
• Free navigation through the scene: User moves freely through the scene. In
this case viewpoint is changed automatically creating the impression of
immersion
• Defining several viewpoints: Key viewpoints are predetermined to help the user move with one click of the mouse (e.g. seeing the whole map from different directions, or close-ups of busy areas)
• Predetermined paths of movement: navigational paths are defined in order to give the user a better general view or close-up of the area under study.
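The second of these options maps directly onto VRML’s Viewpoint node: each named viewpoint appears in the browser’s viewpoint list and can be reached with one click. The positions and descriptions below are invented for illustration:

```vrml
#VRML V2.0 utf8
# Predefined viewpoints the user can jump between.
Viewpoint {
  position    0 500 0
  orientation 1 0 0 -1.5708      # rotate the camera to look straight down
  description "Top view of the whole map"
}
Viewpoint {
  position    120 2 80
  description "Street-level close-up"
}
```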
4.6. Summary
In this chapter we investigate the various methods of representation of information in a VR map as summarized below:
• Graphic representation: We start creating the map by building the graphic
representation of the scene. The VR map will include two main elements:
− Three-dimensional object viewing: The two-point perspective projection is
used for the projection of three-dimensional objects
− Terrain visualization: DEM and TIN models are used for depiction of
terrain. We also attempt to determine the proper balance between
rendering and human-machine interaction
Methods of viewing a three-dimensional scene of a VR map include:
− Immersion and outside viewing can be used to place the user in the
position to perceive the spatial information. The sense of immersion can
be a powerful aid in cartography. Multiple views can be used to alternate
between immersion and outside viewing
− Navigation is used to enhance the perception of spatial information. It
includes free navigation, defined viewpoints and predetermined paths of
movement
• Sound representation: Sound is important to achieve the full sense of
presence in immersed environments. We attempt to determine the audio
distribution in space, taking into account the accuracy with which we can
estimate sound. Sound in a VR map is used in these ways:
− Attribute representation
− Enhance the perception of real world phenomena
• Dynamic representation: Computer three-dimensional animation is used to increase the realistic sense of the environment and convey additional information. Three-dimensional animation can be used to:
− Display real world phenomena
− Attract the user's attention
− Represent time changes
− Represent GIS spatial analysis
4.7. References
Blauert, J., 1983, Spatial Hearing - The Psychophysics of Human Sound
Localization, MIT Press, Cambridge, MA 1983
Blauert, J., Lehnert, H., 1994, Binaural technology & Virtual Reality, CIARM,
Special Meeting Japan. MITI Committee on Virtual Reality, J-Tokyo
Bryson, S., 1996, “Virtual Reality in Scientific Visualization”, in Communications of the ACM, Vol. 39, No. 5: 62–71, 1996
Burgess, D.A., 1992, “Techniques for Low Cost Spatial Audio”, In Proceedings of the ACM Symposium on User Interface Software and Technology (UIST), ACM
SIGGRAPH and ACM SIGCHI, ACM Press, Nov. 1992, pp. 53-59
Buziek, G., 1999, "Dynamic Elements of Multimedia Cartography". In: Multimedia
Cartography, Ed. Cartwright, W., Peterson, M., Gartner, G., Springer-Verlag Berlin, pp. 231-244
Central Intelligence Agency, 2001, World Fact Book 2001, http://www.odci.gov/cia/publications/factbook/index.html, Web Page accessed
03/07/02
Duda, R.O., 1997, 3-D Audio Guide, Online Source, 1997
Ferguson, R.S., 2001, Practical Algorithms for 3D Computer Graphics, 2001
Foley, J. D., Van Dam, A., Feiner, S. K., Hughes, J. F., Phillips, R.L., 1994,
Introduction to Computer Graphics, 1994
Gilkey, R.H., Weisenberger, J.M., 1995, The sense of presence for the suddenly deafened adult, Presence, Vol. 4, No. 4, 1995, pp. 357-363
Green, M., Halliday, S., 1996, “A Geometric Modeling and Animation System for
Virtual Reality”, Communications of the ACM, Vol. 39, No.5, pp. 46-53, May 1996
Jul, S., Furnas, G.W., 1997, “Navigation in Electronic Worlds”, A SIGCHI 1997
Workshop, Vol.29 No.4, October 1997
Kraak, M. J., 1988, Computer-assisted Cartographical Three-dimensional Imaging
Techniques, Delft University Press, 1988
Kraak, M. J., 1994, “Interactive Modelling Environment for Three-dimensional
Maps: Functionality and Interface Issues”, in Visualization in Modern Cartography,
Ed. A.M. MacEachren and D.R.F. Taylor, Pergamon Press, Ltd., Oxford, pp. 27-
43.
Kraak, M. J., Ormeling, F. J., 1996, Cartography – Visualization of Spatial Data,
Longman Limited 1996
Kraak, M., 1999, "Cartography and the Use of Animation". In: Multimedia
Cartography, Ed. Cartwright, W., Peterson, M., Gartner, G., Springer-Verlag Berlin, pp. 173-180
Krygier, J., 1994, “Sound and Geographic Visualization”, in Visualization in
Modern Cartography, Ed. A.M. MacEachren and D.R.F. Taylor, Pergamon Press,
Ltd., Oxford, pp. 149-166
MacEachren, A., Kraak, M.J., Verbree, E., 1999, “Cartographic issues in the design and application of geospatial virtual environments”, Proceedings of the 19th
International Cartographic Conference, August 14-21, 1999, Ottawa, Canada
Mathews, G., J., 1996, “Visualizing Space Science Data in 3D”, IEEE Computer
Graphics and Applications, Vol. 16 (November 1996), pp. 6-9
Mitas, L., Brown, W.M., Mitasova, H., 1997, “Role of Dynamic Cartography in
Simulations of Landscape Processes Based on Multivariate Fields”, Computers &
Geosciences, Vol. 23, No. 4, pp. 437-446
O’Rourke, M., 1998, Principles of Three-Dimensional Computer Animation, 1998
Pausch, R., Proffitt, D., Williams, G., 1997, “Quantifying Immersion in Virtual
Reality”, ACM SIGGRAPH Conference proceedings, August 1997
Perrott, D.R., Saberi, K., Brown, K., Strybel, T. Z., 1990, Auditory psychomotor coordination and visual search performance, Perception & Psychophysics, 48, 214-
226
Ramirez, J.R., Morrison, J., Maddulapi, H., 2002, “Smart Text in Mapping”,
Internal Poster, Center for Mapping at Ohio State University, 2002
Robertson, G., Czerwinski, M., van Dantzich, M., 1998, “Immersion in Desktop
Virtual Reality”, Microsoft Research, 1998
Scharlach, M., 2001, “Maps and Sound”, in ICA Conference, Beijing, China, 2001
Schwartz, R., J., 1995, “Virtual Facilities: Imaging the Flow”, Aerospace America,
Vol. 33 (July 1995), pp. 22-26
Shilling, R.D., Shinn-Cunningham, B., 1999, Virtual Auditory Displays, Virtual
Environments Handbook, 1999
University of Texas, 2001, PCL Map Collection, http://www.lib.utexas.edu/maps/index.html, Web page accessed 03/07/02
Worboys, M. F., 1995, GIS – A Computing Perspective, Taylor & Francis 1995
CHAPTER 5
ELEMENTS OF CARTOGRAPHY AND VIRTUAL REALITY
Studying the elements of a conventional map can help us decide on a course of action for the new representation techniques. In this chapter we examine these elements in order to make the transition to VR maps easier. Figure 5-1, Figure 5-2 and Figure 5-3 summarize this concept. We examine the important visual elements of a paper map (such as metadata, scale, legend, georeferencing information, etc.) (Figure 5-1) and the important elements of a digital map (such as spatial analysis) (Figure 5-2). We further analyze the performance of these elements in a VR map visualization system and the impact of the VR media (such as three-dimensional visualization, sound, dynamic visualization and interaction) on the depiction of these elements (Figure 5-3). Chapters 5 and 6 are the focus of this research. There have been several studies and attempts to create modules that accommodate cartographic data in VR environments [5] (Reddy et al. 2000). A part of this research will be explored in the upcoming sections. Nevertheless, none of these attempts pertain to the visual elements of cartographic data; rather, they accommodate the internal representation of the data. We intend to analyze conventional cartography tasks and translate them to VR map representation.
[5] GeoVRML is a library of nodes that facilitates the use of cartographic data in VRML.
Figure 5-1: The Paper Map and its Elements
Figure 5-2: Digital Map and its Elements
Figure 5-3: Cartographic Concept in VR
For the purposes of this research, we structure cartographic features in three main categories: natural, abstract and manmade features. This structure is important because the VR representation is different for each group; this will become more evident in the following chapters. The following classification (Table 5.1) lists the DLG features, grouped into these categories, together with their potential representation types:
Natural features (irregular features):
• Hypsography: represented by triangular or quad-patch terrain surfaces and NURBS [6]
• Hydrography (flowing waters, standing waters, wetlands): represented by sound, shape changes and motion dynamics animation
• Vegetation (woods, scrubs, orchards, vineyards): represented by a combination of geometric objects
• Non-vegetative features (glacial moraine, lava, sand and gravel): represented by particle collections, particle animation and shape changes

Abstract features (semi-regular features):
• Boundaries (state, county, city, forests, parks): represented by surfaces, planar transparent divisions and sound
• Public Land Survey System (township, range, section): represented by surfaces, planar transparent divisions and sound

Manmade features (regular features):
• Survey controls and markers (horizontal monuments, vertical monuments): represented by a combination of geometric objects (cube, cylinder, cone)
• Transportation (roads, trails, railroads, pipelines, transmission lines): represented by surfaces, combinations of geometric shapes (cubic, cylindrical), sound and motion path animation
• Buildings (churches, hospitals, schools): represented by a combination of geometric shapes and sound
Table 5.1: The Structure of Cartographic Features
[6] The term NURBS stands for Non-Uniform Rational B-Splines, a method used to approximate surfaces.
5.1. From Data Sources to Cartographic VR Visualization
The content and quality of data sources is a very important element in any successful cartographic visualization system. The transition from paper to digital cartography and GIS has introduced some concerns related to the quality and content of the raw data component of the latter. The power of analytical queries and analysis in GIS has revolutionized cartography and provided us with tools that were inconceivable in the past. The increasing speed of computer processors and the size of computer memory boost our confidence in providing solutions to spatial problems. This confidence should be tempered by the fact that the final product (a map) is only as good as its data sources: we can only count on the computer to generate results as accurate as the input data. Some of the problems that pose risks related to the content and quality of spatial data are:
• Degree of accuracy: The data may not be accurate enough to produce the cartographic visualization system required.
• Lack of metadata: A common problem in digital spatial data is the lack of metadata, or information about the data.
• Data from different sources: The availability of spatial data has increased with advances in communication techniques. Digital data and Internet communication make it possible to share data from different sources, creating an abundance of raw information. These data should be combined with caution, taking into account the content and accuracy of each source.
In this section we discuss the significance that source information has in VR cartography and some concerns that we face during the transition from traditional cartography and GIS to the new medium. Our contention in this research is that the successful application of VR methods to cartography depends, to a great degree, on a change in the way data are collected. The information collected for the generation of conventional maps is not sufficient for VR maps. Our intention in this section is to identify the additional information necessary to build VR maps.
For the purpose of this study we divide data sources for modern cartography and GIS visualization into two major groups:
1. Ground data sources: These data are acquired directly from the ground using conventional as well as modern methods. Methods include remote sensing, aerial photogrammetry, radar sensing, Global Positioning Systems (GPS) and other ground measurements.
2. Digitized data sources: These are produced by scanning or digitizing analog products.
Data capture and acquisition in VR cartography is more complex than in analog and other digital systems. The data collected from digitized data sources often lack the information necessary in VR cartography. This information includes:
• Shape: Features that are represented with symbols on a two-dimensional map do not contain the complete information needed to build a VR map. Examples include buildings at some scales and forests in general.
• Sound: Features that are associated with sounds in the real world. Examples include transportation and hydrography.
• Movement: Features that are associated with movement in the real world. Examples include hydrography and transportation.
• Third dimension and height: For many features, the third dimension or height is not included on the map (even as an attribute). Examples include buildings and forests.
The omitted information can be generated by approximate simulation of reality for elements where precision is not crucial (the sound of a river or of traffic). In other circumstances, the user should be made aware of the lack of precision in the approximate simulations. The distribution of trees and their heights can be simulated without any source information in order to create the perception of a forest, but the user should recognize that this simulation contains no precise spatial information except the general position of the vegetated area.
The process of spatial data capture for conventional cartography (this includes digital cartography) is depicted in Figure 5-4. Spatial and attribute elements are the basis of this data capture. They are transformed to visualization systems through coordinate transformation and symbolization respectively.
Figure 5-4: Spatial data capture in conventional cartography
Figure 5-5 summarizes the data capture process for VR maps. The addition of the third dimension and the media element is the major difference between Figure 5-5 and Figure 5-4. Data collection for a VR map should include the collection of media, such as sound and animation, and of the third dimension (even for features that would be shown symbolically on a conventional map). The diagrams in Figure 5-4 and Figure 5-5 portray a rather simplified view of the steps taken to achieve visualization, but they convey the idea that data capture for VR maps is a more complex process in which additional elements need to be collected.
Figure 5-5: Spatial data capture in VR cartography
Table 5.2 depicts a comparison of the information captured in conventional and VR cartography. The set of features is an excerpt from the DLG feature collection. In this table, we identify the additional VR elements to be collected in data sources for features as defined by the DLG standards.
• Relief
  Conventional: position XY, XY, ...; attributes: Z, name of geographical form
  VR: position XYZ, XYZ, ...; attributes: name of geographical form; media: sound
• Flowing waters
  Conventional: position XY, XY, ...; attributes: Z, name
  VR: position XYZ, XYZ, ...; attributes: river name; media: sound, dynamics
• Standing waters
  Conventional: position XY, XY, ...; attributes: Z, name
  VR: position XYZ, XYZ, ...; attributes: lake name; media: sound
• Woods
  Conventional: position XY, XY, ...; attributes: name
  VR: position XYZ, XYZ, ...; attributes: type of woods, name, number and height of trees; media: none
• Boundaries
  Conventional: position XY, XY, ...; attributes: name
  VR: position XYZ, XYZ, ...; attributes: boundary type; media: none
• Buildings
  Conventional: position XY, XY, ...; attributes: type of building, name
  VR: position XYZ, XYZ, ...; attributes: type of building, name; media: sound
Table 5.2: Comparison of Data Capture in Conventional and VR Maps
The notation "XY" denotes the collection of two-dimensional information and "XYZ" the collection of volumetric or three-dimensional information. From the above table and figures we draw two important points. Firstly, in most cases the third dimension is collected as an attribute in conventional cartography, but it is part of the spatial object in VR cartography. Secondly, there are additional elements to be collected for VR cartography. These elements are:
• Volumetric information (the third dimension)
• Sound:
− Sound recording: if measurements are made at the site
− Sound description: the actual sound can be collected later from a sound database
• Movement:
− Direction of movement
− Speed of movement
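The additional elements listed above suggest a richer feature record than conventional data capture uses. The sketch below is one hypothetical way such a record could be organized; the field names, the feature and all its values are illustrative, not part of any DLG standard.

```python
# Illustrative record of what Figure 5-5 asks the data collector to capture
# for a single VR map feature: 3D geometry plus media and dynamics elements.
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class VRFeature:
    name: str
    geometry: List[Tuple[float, float, float]]    # XYZ, XYZ, ... (volumetric)
    attributes: dict = field(default_factory=dict)
    sound_clip: Optional[str] = None              # recorded at the site
    sound_description: Optional[str] = None       # looked up in a sound database
    move_direction: Optional[Tuple[float, float, float]] = None
    move_speed: Optional[float] = None            # e.g. flow speed of a river

# Hypothetical flowing-water feature:
river = VRFeature(
    name="Sample River",
    geometry=[(0.0, 0.0, 220.0), (15.0, 4.0, 219.5)],
    attributes={"type": "flowing water"},
    sound_description="running water",
    move_direction=(1.0, 0.3, 0.0),
    move_speed=0.8,
)
```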
Some of the existing data sources that can be used in VR cartography are:
• USGS Landsat
• USGS DEM and DLG files
• Topologically Integrated Geographic Encoding and Referencing (TIGER) map
data from the Census Bureau
• National Imagery and Mapping Agency (NIMA) earth info imagery
• Commercial satellite imagery from companies such as Space Imaging and DigitalGlobe
• The Geographic Name Information System (GNIS) compiled by USGS
(http://geonames.usgs.gov/)
• 3D Cafe sound effects (http://www.3dcafe.com/asp/sounds.asp).
5.2. Georeferencing in VR Visualization of Maps
In this section we explore the georeferencing concept in VR cartography. We begin with a brief description of the georeferencing process and its importance in displaying spatial phenomena. We further propose a classification system for georeferencing in VR cartography and analyze some problems and their solutions.
We introduce georeferencing concepts that are unique to VR cartography.
5.2.1. What is Georeferencing?
Georeferencing is the process of establishing the relationship between the coordinates of a map or image and real world coordinates. The term has become popular with the modern developments in GIS and computer cartography. Georeferencing has different meanings for different people and communities; these differences result from the variety of GIS applications and their purposes. Nevertheless, most of these meanings are special cases of the general concept presented above. For example, georeferencing (or geocoding) can describe the process of assigning geographic coordinates to a feature on the basis of its address (Cowen 1997). Such applications have tremendous practical importance:
1. Mapping addresses and driving directions (MapQuest/MapBlast)
2. Emergency response
3. Flood zone determination
4. Crime analysis
5. Analysis of client distribution and company geographical expansion
6. Use of GPS in the field (companies such as UPS georeference their data)
Clarke (1990) identifies two major methods of georeferencing (or geocoding):
• Locational geocoding uses a map coordinate system and a specific map projection (UTM, State Plane, Geographic System). Features are referenced spatially by their coordinate or projection transformations.
• Topological geocoding is based on concepts from graph theory, which is concerned with connectivity rather than the shape or length of objects. A planar graph in two-dimensional space preserves the structure of the feature it represents (Worboys 1995). Topological geocoding is the basis of the TIGER system developed by the Census Bureau: streets, rivers, railroads and boundaries are represented as straight-line segments connected to each other.
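The connectivity-first character of topological geocoding can be illustrated with a small sketch. The street segments below are hypothetical; the point is that a TIGER-style structure can answer a connectivity query without storing any coordinates at all.

```python
# Topological geocoding sketch: street segments as edges of a planar graph,
# queried for connectivity (not shape or length) with a breadth-first search.
from collections import defaultdict, deque

def build_graph(segments):
    """Build an undirected adjacency structure from (node, node) segments."""
    graph = defaultdict(set)
    for a, b in segments:         # each segment joins two intersections
        graph[a].add(b)
        graph[b].add(a)
    return graph

def connected(graph, start, goal):
    """Return True if a path of segments joins start to goal."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        if node == goal:
            return True
        for nxt in graph[node] - seen:
            seen.add(nxt)
            queue.append(nxt)
    return False

# Hypothetical street network of three intersections:
streets = [("High&10th", "High&11th"), ("High&11th", "Neil&11th")]
g = build_graph(streets)
```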
In certain circumstances georeferencing can be used to describe the process of translating a raster image (a scanned map or an aerial photograph) to real world coordinates. Identifiable features with known coordinates (monuments in aerial photographs or points with known coordinates in scanned images) are used to georeference the entire image. Because of the sequential and geometrically simple structure of the raster system, a set of affine transformations, with parameters determined from least-squares regression models, is used to perform this rectification. Vector data, on the other hand, represent space as a continuous set of points (Cromley 1992); therefore vector maps are intrinsically georeferenced.
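The affine rectification step can be sketched as follows. The control points are hypothetical; with exactly three control points the fit is exact, and with more the normal equations give the least-squares solution described above.

```python
# Sketch of raster rectification: fit a six-parameter affine transform from
# image (col, row) to ground (X, Y) by least squares over control points.

def solve3(A, b):
    """Solve a 3x3 linear system by Gaussian elimination with partial pivoting."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(3):
        pivot = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, 3):
            factor = M[r][col] / M[col][col]
            for k in range(col, 4):
                M[r][k] -= factor * M[col][k]
    x = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):
        x[r] = (M[r][3] - sum(M[r][k] * x[k] for k in range(r + 1, 3))) / M[r][r]
    return x

def fit_affine(pixels, ground):
    """Least-squares affine fit via the normal equations (A^T A) p = A^T b."""
    rows = [[c, r, 1.0] for c, r in pixels]            # design matrix rows
    ata = [[sum(a[i] * a[j] for a in rows) for j in range(3)] for i in range(3)]
    params = []
    for dim in (0, 1):                                 # X, then Y
        atb = [sum(a[i] * g[dim] for a, g in zip(rows, ground)) for i in range(3)]
        params.append(solve3(ata, atb))
    return params                                      # [[a, b, tx], [d, e, ty]]

def to_ground(params, pixel):
    c, r = pixel
    return tuple(p[0] * c + p[1] * r + p[2] for p in params)

# Three identified monuments (pixel to ground), hypothetical values:
pixels = [(0.0, 0.0), (100.0, 0.0), (0.0, 100.0)]
ground = [(5000.0, 8000.0), (5500.0, 8000.0), (5000.0, 7500.0)]
params = fit_affine(pixels, ground)
```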
5.2.2. The Importance of Georeferencing in VR Cartography
In the light of multimedia and VR cartography the georeferencing concept becomes very important. MacEachren and Kraak (2001) list georeferencing as one of the major research problems in integrating spatial data in multimedia visualization systems. It is estimated that 80% of all digital data today include geospatial information (MacEachren and Kraak 2001). Georeferencing creates the means to fuse data from different sources. The complexity of geovisualization comes from the fact that georeferenced data differ fundamentally from any other kind of data (MacEachren and Kraak 2001): georeferenced data are inherently structured (by their coordinates), and therefore they are implicitly interrelated in space (important for geographical queries). This is a relatively unexplored topic, but crucial in applying VR to cartography and mapping. Multimedia georeferencing is currently being investigated at the Center for Mapping at The Ohio State University (Ramirez and Zhu 2002). The prototype system at the Center for Mapping combines horizontal digital vector datasets and raster images of the ground with vertical vector datasets and raster images, panoramic views, sounds, and special effects. Panoramic views and sound clips are georeferenced to the ground coordinate system.
5.2.3. Georeferencing VR Maps - Classification, Problems and Solutions
In VR maps georeferencing answers questions like "Where am I?" (in terms of spatial location) or "Where am I in relation to certain objects?". It is also very important to preserve the relationships across multiple scales of the same visual system (the VR scene as seen from far away and as seen from nearby) or across multiple visual systems (a flat index map, an aerial photo and the VR scene). In VR cartography we propose the classification of georeferencing into two important components:
Space and object georeferencing: Georeferencing of a three-dimensional desktop virtual environment differs considerably from the georeferencing of a two-dimensional map. Firstly, the introduction of the third dimension increases the complexity of the reference system. Secondly, the flat computer desktop screen is appropriate for displaying plane maps, but the two-dimensional raster representation of the screen complicates the georeferencing of three-dimensional environments. In a flat georeferenced map the user can obtain the coordinates of any point (by clicking on the screen) through trivial transformations. This process cannot be performed in a three-dimensional environment. As mentioned in 4.1.1 (Three-dimensional object viewing), the "empty area" may not be fully spatially referenced, but the objects in it are. While in a two-dimensional paper or computer map we have a continuous spatially referenced space [7], in a VR map we have a rather discrete spatially referenced space [8]. The depth (d) of a certain point (P1 or P2) on the screen remains undetermined (Figure 5-6).
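The depth ambiguity just described can be shown in a few lines of code. Under a simple pinhole projection (an assumption made for illustration; the focal distance and screen point below are arbitrary), un-projecting a screen point yields a ray rather than a point: every depth along the viewing ray maps to the same pixel.

```python
# Depth ambiguity of a clicked screen point under a pinhole projection.

def unproject(x, y, f, d):
    """World point at depth d that projects to screen point (x, y)
    under a pinhole projection with focal distance f."""
    return (x * d / f, y * d / f, d)

def project(point, f):
    """Perspective projection of a world point onto the screen plane z = f."""
    X, Y, Z = point
    return (f * X / Z, f * Y / Z)

f = 2.0
p_near = unproject(1.0, 0.5, f, 4.0)    # a candidate point near the viewer
p_far = unproject(1.0, 0.5, f, 40.0)    # a different point, ten times deeper
# Both points project back to the same screen coordinates (1.0, 0.5),
# so the click alone cannot determine which one the user meant.
```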
The GeoVRML system was the first attempt by geoscientists to reference three-dimensional and VR maps to geographic coordinates. GeoVRML is a collection of nodes that extends the syntax of VRML 97 to provide support for large-scale geographic applications (Reddy et al. 2000). It was formed in 1998 by a working group of the Web3D Consortium with the goal of developing tools for the use of geographical data in VRML.
[7] We can measure x, y at any point and interpolate between contours.
[8] Location is known only at certain discrete points, and interpolation is not possible from the visualization of the scene.
Figure 5-6: Undefined point in 3D environment
Figure 5-7 shows a VRML model of the Hong Kong Port built using GeoVRML nodes. The Remote Sensing Department at the Institute of Surveying and Mapping, Henan, China, created this model, which includes imagery, elevation, buildings and road data. These data are georeferenced to their actual locations on earth.
Figure 5-7: Georeferenced buildings and terrain of Hong Kong Port
The most important node in GeoVRML is the GeoLocation node:
EXTERNPROTO GeoLocation [
    exposedField SFString geoCoords    # ""
    field        MFNode   children     # []
    field        SFNode   geoOrigin    # NULL
    field        MFString geoSystem    # [ "GD", "WE" ]
]
This node has the ability to georeference standard VRML models. It transforms the coordinates of the system to a location on the earth's surface.
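Conceptually, such a transformation maps geodetic coordinates onto an earth-centered Cartesian frame. The sketch below shows one standard form of this conversion (geodetic latitude, longitude and height on the WGS 84 ellipsoid, matching the "GD"/"WE" geoSystem above, to earth-centered XYZ); it is an illustration of the idea, not GeoVRML's actual implementation.

```python
# Geodetic (lat, lon, height) to earth-centered Cartesian XYZ on WGS 84.
import math

WGS84_A = 6378137.0             # semi-major axis (m)
WGS84_E2 = 6.69437999014e-3     # first eccentricity squared

def geodetic_to_ecef(lat_deg, lon_deg, h):
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    # Prime vertical radius of curvature at this latitude:
    n = WGS84_A / math.sqrt(1.0 - WGS84_E2 * math.sin(lat) ** 2)
    x = (n + h) * math.cos(lat) * math.cos(lon)
    y = (n + h) * math.cos(lat) * math.sin(lon)
    z = (n * (1.0 - WGS84_E2) + h) * math.sin(lat)
    return x, y, z

# A point on the equator at the prime meridian lands on the semi-major axis:
x, y, z = geodetic_to_ecef(0.0, 0.0, 0.0)   # (6378137.0, 0.0, 0.0)
```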
Object and space georeferencing is very important to successful VR map visualization, because it provides the user with the ability to measure and estimate distances, accurately visualize geographical queries and understand interrelationships among objects in multiple scales. The most important elements that are to be georeferenced in a VR map in order to accomplish the above tasks are:
• Terrain: Relief visualization and georeferencing will, to a degree, fill the "empty space" created by the representation of a three-dimensional scene on a flat screen. Terrain is represented by a discrete set of points covering the area. The user can estimate the position of any point in space by referencing it to the closest elevation point. This estimation is only as accurate as the resolution of the elevation model. The VRML and GeoVRML nodes that deal with terrain models are, respectively:
− ElevationGrid: Specifies a rectangular grid and the height values at every point of the grid. The user can also set parameters such as the spacing of the grid, color and other graphic features.
− GeoElevationGrid: Specifies a grid of elevations within a spatial reference frame. It allows geoscientists to create a terrain model by specifying geographic coordinates (latitude, longitude and elevation).
• Visual objects: These include all cartographic objects, natural, manmade or abstract: buildings, rivers, lakes, boundaries, monuments, roads, railroads and vegetation areas.
• Viewpoints: The user's continuous spatial awareness is very important when navigating in a VR map. Georeferencing of the viewpoint answers questions like "Where am I?" and "Where am I in relation to other objects?" when the user is immersed in a VR map.
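The terrain-referenced position estimate described above can be sketched as follows: on a regular elevation grid (as in VRML's ElevationGrid node), the height at an arbitrary point is interpolated from the surrounding grid posts. The grid values and spacing below are made up for illustration.

```python
# Height estimate at an arbitrary (x, y) from a row-major regular elevation
# grid, by bilinear interpolation between the four surrounding grid posts.

def grid_height(heights, spacing, x, y):
    """Bilinear interpolation on a regular grid with the given post spacing."""
    col, row = x / spacing, y / spacing
    c0, r0 = int(col), int(row)           # lower-left grid post of the cell
    u, v = col - c0, row - r0             # local coordinates within the cell
    h00 = heights[r0][c0]
    h10 = heights[r0][c0 + 1]
    h01 = heights[r0 + 1][c0]
    h11 = heights[r0 + 1][c0 + 1]
    return (h00 * (1 - u) * (1 - v) + h10 * u * (1 - v)
            + h01 * (1 - u) * v + h11 * u * v)

# Hypothetical 3x3 grid of elevations (meters), posts 10 units apart:
heights = [
    [200.0, 210.0, 205.0],
    [220.0, 230.0, 215.0],
    [240.0, 245.0, 250.0],
]
z = grid_height(heights, 10.0, 5.0, 5.0)   # center of the first cell
```

The accuracy of such an estimate is bounded by the grid resolution, which matches the point made above about the elevation model.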
Multimedia georeferencing is another important component of georeferencing in VR maps. As stated in the previous chapters, incorporated multimedia enhances the perception of real world phenomena. The objects below can be placed in the VR scene and referenced to the coordinate system; the GeoCoordinate node in GeoVRML references any object, including a multimedia element, to the earth's surface:
• Audio clips
• Images
• Hyperlinks and hypertext
• Animation
In this section we emphasized some of the problems with georeferencing VR maps and proposed additional elements to be georeferenced (visual objects, viewpoints and multimedia) compared to conventional maps. We identified the problem of georeferencing the "empty space" in a VR environment and gave partial solutions through terrain and object georeferencing. This study recognizes the limitations of georeferencing in a VR environment and the insufficiency of current technology to solve this problem completely.
5.3. Scale and Orientation in VR Maps
The notion of scale is probably the most important concept in visualizing georeferenced data (MacEachren and Kraak 2001). Scale is also a very complex issue, particularly in complex visualization systems like VR environments. There can be fundamental differences in visualizing data at multiple scales. For example, features can be represented as objects at a large scale but only as attributes at a smaller scale [9]. In this section we study the representation and impact of scale in cartographic VR visualization. Firstly, we review the role, importance and representation of scale in conventional maps; these concepts are important for a successful transition to VR maps.
The map scale is a unitless ratio between a distance on the map and the corresponding distance on the earth's surface (Robinson et al. 1988). The distance on the map is usually expressed as a unit of one, therefore map scales are stated as a fraction (for example, 1:25,000). Robinson et al. (1988) identify four ways in which the map scale can be expressed:
1. Representative fraction: A fraction with the same unit of distance in the numerator (on the map) and the denominator (on the earth's surface).
2. Verbal statement: A statement of how map distance relates to earth distance. It is especially common on old maps (e.g. "1 in. to 1 mi.").
3. Graphic or bar scale: A subdivided line placed on the map, often near the legend. The subdivisions represent distances on the earth's surface.
4. Area scale: A fraction of area values on the map to those on the earth's surface.
[9] The representation of specific features can also be eliminated at a certain scale.
In conventional maps the notion of scale represented by a fraction is somewhat misleading because of the distortion involved in projecting the three-dimensional earth's surface onto a two-dimensional plane (Jones 1997). As a result of these distortions the scale is not uniform throughout the map. We can think of the average scale of a map as the ratio between the radius of a mini globe [10] and the earth's radius. In fact, the fractional scale represents this average scale, also known as the principal scale of a map (Jones 1997). Traditionally, maps have been classified as large, intermediate and small scale. These classifications differ considerably depending on the organization and the use of the maps. For example, Davis et al. (1981) classify 1:100 to 1:2,000 as large scale, 1:2,000 to 1:10,000 as intermediate scale and 1:10,000 to 1:100,000,000 as small scale, whereas the USGS (USGS 1995) considers DLG data (1:24,000) to be large-scale data. Based on the scale of a paper map, we can estimate distances just by looking at the map. A similar judgment should be possible for VR maps.
Representation of scale in VR maps is very complex because of the continuous change of the scene (zooming, panning and navigating through the scene). We propose these ways to represent scale graphically in VR maps:
• Three-dimensional grid: A volume grid can be overlaid on the VR scene and moved with the viewpoint. The user can estimate distances along the three axes by comparing them to the grid interval (just like the two-dimensional map grid on some large-scale topographic maps). Figure 5-8 depicts a scale grid created in a VRML environment. Figure 5-9 shows the same grid superimposed on a VRML map scene simulation. The scale grid is translated, rotated and scaled appropriately with the change of the viewpoint.
[10] We can imagine this globe by shrinking the earth's surface to a globe comparable in size to the map.
Figure 5-8: The Scale Grid
Figure 5-9: The Scale Grid overlaid to a VR map
• Graphic scale: A subdivided line that shows map units corresponding to real world units represents the graphic scale in conventional maps. Adapting this concept to VR maps is not a trivial matter for two reasons:
− The scale is not uniform in all directions. Although there are distortions in different directions, conventional maps are usually confined to a principal scale. The advantage of VR environments is that we can change the scale automatically by moving the viewpoint. The disadvantage is that the interactive movement of the viewpoint closer to, or further away from, the map complicates the matter of having a principal scale for the whole scene. One way to solve this problem is by displaying a graphic scale along the principal axes.
− The third dimension adds more complexity to the visualization of scale. The cartographer has to confront more computational complexity, and the user might be confused by a more convoluted visual representation. The graphic scale should be kept simple and perceptible.
Figure 5-10 depicts the simulation of a VR map at the approximate scale of 1:10,000 [11]. A rectangular cube that depicts the linear scale along the three principal axes represents the graphic scale. The rectangular cube provides the user with a visual tool for estimating distances in the map. In order not to obstruct the user's view, the "scale cube" is built as a semi-transparent three-dimensional object (50% transparency). Users must have the option to configure the system by setting the display of the "scale cube" on or off. The "scale cube" is updated automatically as the user navigates above or through the scene. The lengths of its edges on the screen represent real world distances along the principal axes. The labels along its axes represent the distances of the edges at the specific map scale (350' along X, 240' along Y and 45' along Z).
[11] Elements in the map are only approximately to scale. The purpose of this sample map is to serve as an example of the concept we are discussing.
Figure 5-10: Scale elements in a 1:10,000 scale simulation
Figure 5-11 depicts the same scene after the user has navigated closer to the map. The "scale cube" in the upper-right corner changes values along the axes appropriately.
Figure 5-11: Scale elements in a 1:1,000 scale simulation
The VR map snapshot in Figure 5-11 is at an approximate scale of 1:1,000 [12].
In Figure 5-12, f is the focal distance from the viewpoint to the projection plane, f_x and f_z are its projections along the axes, and D_x and D_z are the distances of the viewpoint from the central point of the scene projected onto the axes.
Figure 5-12: The Average Scale Factors along the Axes
From the figure, the average scale factors along these axes are:
s_x = f_x / D_x    Equation 5-1
s_y = f_y / D_y    Equation 5-2
s_z = f_z / D_z    Equation 5-3
where:
s x , s y , s z are scale factors along the axes,
f x , f y , f z are the focal length distance projections on the axes,
12 Elements in the map are only approximately to scale. The purpose of this sample map is to serve as an example of the concept we are discussing.
D_x, D_y, D_z are the distances of the viewpoint from the central point of the
scene as projected on the principal axes.
• Numerical scale: The numerical scale is a statement of the fractional scale
that changes automatically when users zoom in or out, navigate, or fly
through the scene. In paper maps the representative fraction (as described
above) of the principal scale is set in print and does not change with the
user's point of view. In a VR map the numerical scale changes with the virtual
user's point of view. The numerical scale is depicted in the bottom-left corners
of Figure 5-10 and Figure 5-11. The numerical scale is an approximate
average value, since the scale changes with the location and the angle from
which the user is looking at the scene.
From Figure 5-12, the simplest way to calculate this scale is:
s = f / D        Equation 5-4
where:
s is the average scale factor,
f is the focal length distance,
D is the distance of the viewpoint from the central point of the scene.
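The scale computation in Equations 5-1 through 5-4 can be sketched in code. The following Python illustration is our own (the function names and the sample numbers are assumptions, not part of any VR system described here); it computes per-axis and average scale factors from the focal distance and the viewpoint distances:

```python
# Illustrative sketch of Equations 5-1 through 5-4 (names are ours).

def axis_scale_factors(f_xyz, d_xyz):
    """Per-axis scale factors s_i = f_i / D_i (Equations 5-1 to 5-3)."""
    return tuple(f / d for f, d in zip(f_xyz, d_xyz))

def average_scale_factor(f, d):
    """Average scale factor s = f / D (Equation 5-4)."""
    return f / d

# Hypothetical example: a 0.5 m focal distance and a viewpoint 5,000 m
# from the scene center give an average scale of about 1:10,000.
s = average_scale_factor(0.5, 5000.0)
print(round(1 / s))  # denominator of the representative fraction
```

Such a function would be re-evaluated whenever the viewpoint moves, which is how the numerical scale display described below could stay current.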
A map (paper, digital or VR) represents real world information and as such must be properly oriented in order to be valuable to the user. The north arrow is the most common representation of orientation in conventional maps. For several centuries now it has become almost an unwritten rule in the Northern Hemisphere to have north face up and south face down on a map. Orienting maps to the north is important because most users are familiar with this orientation (Robinson et al.
1988), but in some cases it is more convenient to orient the map in the direction of
interest. This fact becomes more important in VR maps where the user navigates
and is immersed in the map. We propose a three-dimensional north arrow, which
changes direction according to the user's position and orientation. In the upper-
right corner of Figure 5-10 and Figure 5-11 the north arrow is represented by a
three-dimensional object that changes orientation depending on the user's
navigation direction. VR maps can also be oriented in the direction of the
movement. In this case, we propose an arrow showing the direction of the
movement and labels depict the approximate orientation (N, NE, NW, S, SE, SW, E
and W).
Scale in VR maps is not only confined to the geometry of objects. VR elements like
sound and animation must be scalable for a realistic and efficient representation.
This topic will be explored in the upcoming section (5.4.2 Multimedia
Generalization).
As explained in the previous chapters, navigation is an important part of VR maps.
Scalability of navigation velocity is a new but important concept for the exploration
of VR spatial systems. Users should be able to navigate at different speeds
depending on how close they are to the scene. The velocity of navigation depends
on the height above the scene or terrain. For example, when flying over terrain near the ground, a velocity of 100 m/s is relatively fast and users would not be able to explore the scene properly. The same speed at a very high altitude would be very slow. At a 10,000 m altitude¹³ a speed of 100 m/s would be extremely inadequate: the time it would take the user to reach the ground (in order to explore details) from that altitude would be 100 s (1 min 40 s), which is intolerable.
Even though in VR applications users have the option to change the speed of
¹³ The altitude at which most commercial jets fly.
navigation (slow, medium or fast), there is no study that explores automated scaling of the velocity depending on the altitude above the ground. We propose the following system for VR maps: the speed of navigation should change proportionally with the height above the ground.

To determine the approximate speed of movement for fly-through navigation, we start by defining a minimum speed appropriate for near-ground navigation. The user should have the choice to determine this speed. For example, assume the user configures the near-ground navigation velocity to 25 m/s (city highway speed of 55 mph). For heights up to 100 m (near ground), the velocity coefficient k that relates speed to height above ground is 25/100 = 0.25. Hence, the relation between speed and height in this example is:
velocity = height · 0.25        Equation 5-5

where the velocity has a minimum of 25 m/s; Equation 5-5 applies for heights above 100 m.
Reconsidering the above example, we would reach the ground from a 10,000 m altitude in about 4 seconds (at the initial velocity of 2,500 m/s). Also, we would navigate across a 1,000 m long scene in 40 seconds from a 100 m height and in 4 seconds from a 1,000 m height.
The above formula for fly-through navigation velocity provides only a general estimate of the speed at specific altitudes. Users should still be able to select the level of velocity for their navigation (very slow, slow, average, fast and very fast). A toggle switch (much like the volume on a computer desktop) can also be used to set the proper degree of velocity.
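The altitude-scaled velocity rule can be sketched as follows. This Python fragment is an illustration of the proposal above (the constant names and the 25 m/s default are the example's assumptions, and would be user-configurable in a real system):

```python
# Sketch of the proposed altitude-scaled fly-through velocity
# (Equation 5-5), clamped at the user-configured near-ground minimum.

NEAR_GROUND_HEIGHT = 100.0   # m; below this, the minimum speed applies
MIN_VELOCITY = 25.0          # m/s; example near-ground speed (~55 mph)

def fly_through_velocity(height_m, min_velocity=MIN_VELOCITY,
                         near_ground=NEAR_GROUND_HEIGHT):
    """Navigation speed proportional to height above ground."""
    k = min_velocity / near_ground      # 25/100 = 0.25 in the example
    return max(min_velocity, k * height_m)

print(fly_through_velocity(50))      # near ground: clamped to 25.0 m/s
print(fly_through_velocity(10_000))  # jet altitude: 2500.0 m/s
```

The user's "very slow" to "very fast" preference could then be applied as a multiplier on the returned value.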
5.4. Generalization – Reducing the Complexity
Generalization is one of the fundamental concepts of cartography, made necessary by the fact that simplification of reality according to scale is a very important part of mapmaking (Robinson et al. 1988). Generalization methods are necessary for maximizing visualization effectiveness and facilitating visualization use. In traditional digital cartography, most generalization research is limited to the simplification of lines in the vector representation of maps. The new medium of VR cartographic representation calls for rethinking the need for this cartographic concept. In this research we intend to provide answers to the following questions:
• Is generalization necessary for the new VR media in cartography?
• What is the implication of generalization on multimedia elements like sound,
animation and hyperlinks in a VR scene? What is the feasibility and
effectiveness of conversion from one media to the other during generalization?
(As an example, a feature shown graphically at a certain scale can be
represented by sound at a different scale).
In this research we do not intend to create or formulate specific generalization techniques for VR maps, but rather identify the necessity for generalization and possible ways to generalize for appropriate VR representation of maps at different scales. We start our discussion with a review of generalization in traditional cartography and how this knowledge influences the transition from conventional to
VR mapping.
The process of generalization aims to reduce reality, or large-scale maps, by keeping and grouping features that are appropriate to the map objective.
Cartographers have studied generalization for decades, and each transition (such as the one from printed maps to digital cartography) has added to the challenges of this process. Generalization is a very broad and complex problem and is defined differently by various authors. Some examples include:
• "Selection and simplified representation of detail appropriate to the scale
and/or purpose of the map" - ICA 1973
• "…selection of those features that are essential to the map's purpose and the
representation of them in a way which is clear and informative" - Jones 1997
• " The application of both spatial and attribute transformations in order to
maintain clarity, with appropriate content, at a given scale, for a chosen map
purpose and intended audience" - McMaster and Shea (1988)
Robinson et al. (1988) define generalization as a collection of ways to reduce complexity and detail, to induce general characteristics from particulars. In a very thorough analysis they detail four elements (simplification, classification, symbolization and induction) and four controls (objective, scale, graphic limits and quality of data) for generalization. Simplification is the process of elimination of unwanted detail. Classification is the ordering and grouping of data. Symbolization is the graphic coding of the grouped data. Induction is the process of inference, extending the information of the map beyond the selected data. The manner in which these elements are performed depends on the controls of map generalization
(objective, scale, graphic limits and quality of data). Objective encompasses the purpose of the map. Scale is the ratio of map to earth as explained above. Graphic limits include the capability of the system and users for graphic communication.
Quality of data is the reliability of the data being mapped. Jones (1997) subdivides the generalization concept into two main parts:
• Semantic Generalization: concerned with the selection of relevant information; involves abstraction with rules derived from geographic concepts
• Geometric Generalization: concerned with geometric transformations of
features.
Cartographers have always attempted to quantify generalization. The most widespread of such attempts is Topfer's Radical Law of Selection (Topfer and Pillewizer 1966). This rule-of-thumb law states a direct relationship between the number of features and the scale of a conventional map:
n_a = n_s · √(M_s / M_a)        Equation 5-6
where n_a is the number of features in the derived map and n_s is the number of features in the original map. M_a and M_s are the respective scale denominators.
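Topfer's law is easy to evaluate numerically. The following Python sketch (our own illustration; the scale values are hypothetical) shows how the retained feature count falls as the scale denominator grows:

```python
import math

# Sketch of Topfer's Radical Law (Equation 5-6):
# n_a = n_s * sqrt(M_s / M_a), where M_s and M_a are the source and
# derived scale denominators.

def topfer_feature_count(n_source, m_source, m_derived):
    """Number of features to retain on the derived map."""
    return n_source * math.sqrt(m_source / m_derived)

# Hypothetical example: 1,000 features at 1:25,000 generalized
# to 1:100,000 keeps about half of them.
print(round(topfer_feature_count(1000, 25_000, 100_000)))  # 500
```

In a multi-scale VR map, such a count could serve as a target when deciding how many features to keep at each level of detail.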
5.4.1. Necessity of Generalization in Virtual Reality Maps
Map generalization is important because of the necessity for reduction and
elimination of complexities in order to help users better comprehend the
spatial world (Robinson et al. 1988). Why do we generalize in cartography?
• Some real world phenomena are too complex to be grasped easily by the
user. The process of generalization becomes more indispensable with the
use of symbols for representing real world objects
• Real world phenomena are too large to be displayed in their entirety on a
large-scale limited size map
• Intellectual aspect of generalization is used to communicate a concept or
idea (e.g. crime occurs here or poverty exists there)
• Paper maps are limited to a static display, so there is no possibility for a
multi-scale representation.
What do we gain with VR maps?
• VR maps depict a realistic representation of reality. It is easier for users
to grasp the phenomena and objects around them because of their
realistic representation (see 3.2.1 Human Perception of Virtual Reality
Maps)
• The VR interface gives users the flexibility necessary for fitting the whole
world (or even the universe) on a computer screen. The process of
navigation, fly through and walk through provides access to even very
large-scale data visualizations. The multi-scale representation makes the
comprehension of both the general idea and the details in the same map
possible.
The above-mentioned points eliminate some of the requirements for
generalization in VR maps, but we believe that some degree of generalization
is important for the following reasons:
Eliminating the need for expensive rendering:¹⁴ From a theoretical point of view, we would like the best rendering possible for a very realistic representation.
Practically, eliminating the need for expensive rendering is very important
especially in the case of terrain models that are massive in size. USGS
produces Digital Elevation Models (DEMs) that contain over one million
points. Digital Orthophoto Quads (DOQs) contain over 12 million pixels.
Single scale visualizations of data of this size would not allow any interaction
with the user (as it would be too slow for computer memory and speed). There
are many simplification and smoothing techniques available for terrain
¹⁴ In this case we define rendering as the process of drawing on the computer screen. This includes all the stages from the projection of points on the screen to shading and other visual effects.
surfaces. This element of generalization has been effectively researched in the
past. In this research we are not studying the different methods to generalize
surfaces, but we need to point out that most of these techniques load data in
memory before smoothing. These methods are not appropriate for highly
interactive VR maps (e.g. scenes built with VRML) because of the cost of
loading a high-resolution dataset in memory. A multi-scale pyramid (with
different size and resolution images for different scales) would be a more
appropriate depiction. This method provides the user with different displays of
the same feature for different scales. The system automatically switches to a certain display when the scale crosses a certain value (as set by the user).
The multi-scale pyramid is appropriate for highly interactive environments
such as VR maps, because the high-resolution datasets are not loaded in
memory at small scales. Instead, a more appropriate, small resolution version
is loaded (configured by the designer).
Reduction of complexity for human perception. Even with the advantages of
the realistic VR visualization, we are still observing reality through a viewing
device (HMD or computer screen). In the case of Desktop VR this device is the
computer screen. Because of the limited size of these devices¹⁵, the display of
high resolution and detailed scenes in very small scales would be pointless.
The feasibility of graphic display is also limited by resolution of the human
eye. This resolution is at best one minute of arc (Bruce et al. 1996). At a
viewing distance of 500 mm¹⁶ the resolution acceptable to the human eye is:
resolution = (1′ · 2 · π · r) / (360° · 60′) = (1 · 2 · 3.14 · 500 mm) / (360 · 60) ≈ 0.14 mm        Equation 5-7

where r = 500 mm is the viewing distance.
¹⁵ A normal computer screen will have a maximum resolution of 1,600 × 1,200 pixels.
¹⁶ Average viewing distance for a 16" monitor.
We would not be able to distinguish a feature (or between two features) if it is
(they are) less than 0.14mm on the computer screen. This translates to 3.5m
(they are) less than 0.14 mm on the computer screen. This translates to 3.5 m (11.5') on the real world surface at a scale of 1:25,000. The display of such features today is also limited by the pixel size, which on an average computer screen (16" monitor, 1600 × 1200 resolution) would be at best 0.21 mm (5.25 m or 17.2' at a scale of 1:25,000). The latter is the practical limit for the graphic resolution of a display based on today's technology.
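The resolution arithmetic above can be reproduced in a few lines. This Python sketch uses our own helper names; the monitor figures are the text's assumptions, not measured values:

```python
import math

# Sketch of the perceptual-limit arithmetic (Equation 5-7 and after):
# the one-arc-minute acuity limit at a viewing distance, and the ground
# distance that a screen distance represents at a given map scale.

def eye_resolution_mm(viewing_distance_mm, arc_minutes=1.0):
    """Smallest separation resolvable at the given visual acuity."""
    return arc_minutes * 2 * math.pi * viewing_distance_mm / (360 * 60)

def ground_distance_m(screen_mm, scale_denominator):
    """Real-world size represented by a screen distance at a map scale."""
    return screen_mm * scale_denominator / 1000.0

print(round(eye_resolution_mm(500), 3))            # ≈ 0.145 mm
print(round(ground_distance_m(0.14, 25_000), 1))   # 3.5 m at 1:25,000
print(round(ground_distance_m(0.21, 25_000), 2))   # 5.25 m pixel limit
```

The same helpers could be used at design time to decide the smallest feature worth drawing at each scale.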
Limited generalization is supported in VRML by the Level of Detail (LOD) node:
LOD {
  exposedField MFNode  level  []
  field        SFVec3f center 0 0 0   # (-∞, ∞)
  field        MFFloat range  []      # (0, ∞)
}
LOD specifies various levels of detail or complexity for a given object and displays them at certain scales (or ranges from the viewpoint). Users can provide alternative versions of the feature. The browser automatically selects the appropriate version based on the distance from the user.
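The browser's switching behavior can be sketched outside VRML. The following Python function is an illustration of the range-based selection an LOD node performs (the function and variable names are ours, not VRML identifiers):

```python
import bisect

# Sketch of range-based LOD switching: level 0 is shown while the
# viewer is inside the first range, the last level beyond the last range.

def select_lod_level(distance, ranges):
    """Index of the level to display for a viewer at `distance`
    from the node's center, given a sorted `ranges` list."""
    return bisect.bisect_right(ranges, distance)

ranges = [100.0, 1000.0]               # metres; three levels of detail
print(select_lod_level(50, ranges))    # 0 -> full-detail model
print(select_lod_level(500, ranges))   # 1 -> simplified model
print(select_lod_level(5000, ranges))  # 2 -> coarse stand-in
```

Because only the selected level needs to be rendered (and, with the pyramid approach above, loaded), this is the mechanism that keeps high-resolution data out of memory at small scales.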
Generalization for multi-scale representation of terrain and surfaces is partly supported in GeoVRML with the GeoLOD node:
EXTERNPROTO GeoLOD [
  field    SFString center     # ""
  field    MFString child1Url  # []
  field    MFString child2Url  # []
  field    MFString child3Url  # []
  field    MFString child4Url  # []
  field    SFNode   geoOrigin  # NULL
  field    MFString geoSystem  # [ "GD", "WE" ]
  field    SFFloat  range      # 10
  field    MFNode   rootNode   # []
  field    MFString rootUrl    # []
  eventOut MFNode   children
]
GeoLOD provides a terrain-specialized form of LOD. It allows for the specification of different levels of terrain generalization.
5.4.2. Multimedia generalization
In this section we explore different scenarios that reduce the complexity of multimedia representations in VR maps. While studying the generalization of the visual element, we focused on representation and did not explore the methods to achieve it. In the case of multimedia we focus more on the potential ways to achieve generalization, since the generalization of multimedia in VR maps is a relatively unexplored topic. We define the generalization of multimedia as the process of simplification, classification and transformation from one medium to another, to serve the purpose of the map at a certain scale.

Firstly, the reduction of complexity in multimedia could be a consequence of the generalization of the feature that the multimedia represents. For example, if a building is removed at a small scale, sounds associated with that building could consequently be removed. Secondly, simplification of multimedia could be necessary to reduce the complexity of the scene in order not to overburden the user (see 3.2.1 Human Perception of Virtual Reality Maps). For example, reduction of animated scenes that appear on the map at a small scale might be necessary. We identify several media features to which generalization can be applied:
Sound: The attribute of sound that can be used for generalization effects and multi-scale representation is intensity or loudness (see 4.2.2 Use of Sound in
VR Maps). The intensity of sound decreases with the distance of the viewpoint from the sound source (or reduction of scale). The change of the intensity with distance supports a realistic representation of sound used as a tool for
enhancing the perception of real world phenomena (4.2.2 Use of Sound in VR
Maps). At a certain distance from the source, users would no longer be able to hear any sound. A cartographer should also have the flexibility to configure sounds beyond their realistic reach: exaggeration of sound might be necessary when we need to emphasize certain features. In other cases, sound generalization attempts to avoid sonic overload in a small-scale map (3.2.4 Spatial Cognition and Sound). The presence of several sound clips in the area a user is viewing could overburden the user and therefore diminish his/her cognitive ability. In the case of sound used for attribute representation, the setup should be similar, except that the generalization of the attribute would depend on the generalization of the feature it represents. For example, at a certain small scale we do not see the buildings (otherwise shown at a large scale); a sound attribute attached to a specific building would also be eliminated (a consequence of feature generalization).
In VRML a sound node specifies the spatial representation of sound in a scene
(as seen in 4.2.2 Use of Sound in VR Maps). In the Sound node, fields like direction, intensity, minFront, minBack, maxFront and maxBack define the scalability of sound in different views (VRML 1997). Figure 5-13 illustrates the use of these variables to set the proper sound intensity and its perception from a distance. If intensity is set to 1, it remains 1 up to a distance minFront from the object. Between P1 and P2 it decreases linearly until it becomes 0 at a distance maxFront from the object. Beyond this distance the user cannot hear the sound source.
Figure 5-13: The Geometry of a VRML Sound Node
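The falloff geometry just described can be sketched as a simple function. This Python illustration follows the linear attenuation described above for the front direction (the parameter values are hypothetical, and a real VRML browser applies the same idea per the min/max ellipsoids):

```python
# Sketch of the linear sound attenuation described for the VRML Sound
# node: full intensity out to minFront (P1), linear falloff to zero at
# maxFront (P2), silence beyond.

def sound_intensity(distance, intensity=1.0,
                    min_front=10.0, max_front=100.0):
    """Perceived intensity at a given distance from the source."""
    if distance <= min_front:
        return intensity
    if distance >= max_front:
        return 0.0
    return intensity * (max_front - distance) / (max_front - min_front)

print(sound_intensity(5))    # 1.0  (inside the inner ellipsoid)
print(sound_intensity(55))   # 0.5  (halfway between P1 and P2)
print(sound_intensity(200))  # 0.0  (beyond audible range)
```

Setting maxFront larger than a source's realistic reach is how the "sound exaggeration" above would be configured.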
Animation: Following the example of sound, it is also necessary to change the appearance of animation in a multi-scale representation. In section 4.3.2
(Dynamic Visualization in Virtual Reality Cartography) we identified "attracting user attention" as an important role of animation. The animation action for this purpose is directly correlated with the animated object. This action should be decreased depending on the object's visibility and in some cases even eliminated; in this case multimedia generalization is a consequence of feature generalization. In other instances animation could be decreased or even eliminated depending on the small scale, the feature we would like to emphasize and the amount of such animations. This simplification of dynamic representation is performed independently from the generalization of the feature itself.
The VisibilitySensor node in VRML detects visibility changes in a rectangular box as the user navigates the scene (VRML 1997). At the moment the user can see a specific object, the visibility sensor can activate or deactivate a behavior
(including animation) in this object:
VisibilitySensor {
  exposedField SFVec3f center  0 0 0   # (-∞, ∞)
  exposedField SFBool  enabled TRUE
  exposedField SFVec3f size    0 0 0   # (0, ∞)
  eventOut     SFTime  enterTime
  eventOut     SFTime  exitTime
  eventOut     SFBool  isActive
}
Text Labels: Generalization of text labels consists of elimination or replacement of text. As the scale decreases, large-scale features become
"invisible" and labels that represent these features disappear. Other features or groupings of large-scale features need to be updated with text labels. As an example, at a scale 1:1,000 we can show individual buildings and their labels
(school, hospital, etc.). At the scale 1:100,000 buildings are grouped into cities and labels are replaced with city names.
Hypertext: Links to other types of media are also omitted or replaced as the scale changes (following the same reasoning as in the "text label" case).
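The scale-dependent label behavior described above can be sketched with a simple data model. In this Python illustration (the data model, field names and place names are all hypothetical), each label carries the scale band in which it is shown, so building labels survive at 1:1,000 while only the city name survives at 1:100,000:

```python
# Sketch of scale-dependent label generalization: each label records
# the range of scale denominators at which it should be displayed.

def visible_labels(labels, scale_denominator):
    """labels: list of (text, min_denominator, max_denominator)."""
    return [text for text, lo, hi in labels if lo <= scale_denominator < hi]

labels = [
    ("Lincoln Elementary School", 0, 10_000),    # building-level label
    ("Mercy Hospital",            0, 10_000),
    ("Columbus",             10_000, 1_000_000), # grouping label
]

print(visible_labels(labels, 1_000))    # the two building labels
print(visible_labels(labels, 100_000))  # ['Columbus']
```

The same banding could drive the hypertext replacement noted above, since links follow the same reasoning as text labels.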
Generalization of graphics and multimedia in the VR maps can also include cases of enhancements of one type of media using the other. As an example, at a large scale (e.g. 1:500) we can clearly see the shape of a church and quickly arrive at a conclusion on the specific purpose of the building. At a smaller scale (e.g. 1:5,000) we can still see the building, but we are not able to distinguish its type. A sound description would fill the void of the graphic deficiency at the smaller scale. Hypertext can also be used to represent
information that cannot be seen or heard at a smaller scale. The access to
this information will be through a link in the VR environment.
5.5. Symbols and Labels – Making Sense of the Scene
In this section we argue for the necessity of symbolization in VR maps. We then examine graphic and multimedia variables as determined by several authors. Our contribution includes the proposal of new graphic variables and labeling methods for VR maps.

The fundamental purpose of symbolization in the traditional paper map is the assignment of marks (graphic coding) to represent phenomena. In a VR scene, most features are represented by their form and shape (or an approximation of them). This is a significant advantage of VR maps over traditional maps, where symbols and conventions are used to present information. Neves et al. (1997) contend that immersion and VR allow a non-symbolic interaction with the environment and that a communication language is not necessary for this representation. This statement does not hold for VR maps. Although in the depiction of general maps (including topographic maps) the objective is to portray real world geographical phenomena
(roads, buildings, etc.), the exaggeration of features and the depiction of labels to describe features are perfect examples of a communication language. Furthermore, the realistic type of representation does not apply to abstract objects, such as political boundaries. Thematic maps concentrate on spatial variations of the form of a single attribute or relationships among several (Robinson et al. 1988). An example of a thematic map would be the distribution of population in the United States. This is an abstract spatial phenomenon that needs symbolization, even in a VR environment. VR and VRML have been used for abstract cartography in which objects in the scene are not replicas of the physical environment and symbolization
is required. Mitas et al. (1997) used VR representation to show the predicted spatial distribution of erosion. Nevertheless, the visual display in the case of thematic maps is less complex, because in most cases only one attribute is involved. As mentioned above, abstract representation is also inevitable in parts of a large-scale topographic map. As an example, in the case of DLG maps, the categories that require abstract representation are:
• Boundaries: They include state, county, city, forests and parks. The first two
features are manmade divisions of land that do not have a physical separation
from the rest of the environment
• Public Land Survey System: It includes township, range and section. All the
features in this category represent abstract divisions of the geographical area.
Possible depictions of the above abstract features include different coloring or textures of the surface, or semi-transparent walls to symbolize divisions.
Cartographic symbolization in traditional maps associates geographic data with what Bertin (1983) called "visual variables". Bertin identified the elements that distinguish between visualizations of spatial data: seven variables in the Semiology of Graphics, first published in French in 1967. These variables are:
− Size: Dimension of the graphic marks (magnitude)
− Shape: Visual recognition based on the geometric characteristics of the
graphic marks (nominal)
− Value: Relative lightness or darkness (order or magnitude)
− Spacing: Distance of marks in symbols such as lines per inch or dots per inch
(nominal)
126 − Hue: Spectral variations perceived by the human eye; colors such as red,
green, blue, etc. (nominal)
− Orientation: Directional arrangement of the individual marks (nominal)
− Position: Location as referenced by two dimensions on the plane.
Bertin's "visual variables" have been the basis of cartographic symbolization for the past decades.
The advances in VR and inclusion of VR in cartography call for a revision of
Bertin's visual variables, which were meant specifically for two-dimensional maps and static cartography. Firstly, we need to update these variables for VR and three-dimensional representation. Secondly, the inclusion of other kinds of media makes it imperative to consider other types of variables (sound and dynamic visualization). In 2.7 (Sound in VR) we reviewed sound variables by Krygier (1994), analogous to Bertin's visual variables. These variables include location, loudness, pitch, register, timbre, duration, rate of change, order and attack/decay.
DiBiase et al. (1992) defined three dynamic variables for cartographic visualization:
• Duration: Duration is the number of units of time that a scene (map or map
feature) is displayed
• Rate of change: Rate of change is the proportion of the magnitude of the
change of position or attributes to the duration of the scene
• Order: Order is the sequence in which scenes are presented.
In Table 5.3 we have adapted Bertin's variables for VR three-dimensional visualization of maps. Some of the variables are extended (position, value and hue) and the others are modified (size, texture, orientation and shape). We are also introducing a very important variable appropriate for the VR map representation
(transparency). These variables are:
• Position: Location of three-dimensional elements, which is determined by the
coordinates of the vertices that form the object (XYZ, XYZ, XYZ, …)
• Size: The size of objects or symbols as determined by their diameter, edge, or
volume
• Value: Lightness or darkness of a three-dimensional object
• Texture: The pattern of one or all of the faces in a three-dimensional symbol.
These patterns can be attached to the object for a more realistic
representation (e.g. brick pattern for buildings regardless of shape)
• Hue: The color variations of the three-dimensional objects
• Orientation: The direction of feature in the three-dimensional space.
Orientation is defined by the rotation around the three principal axes (XYZ)
• Shape: The appearance of an object (sphere, cube, etc). Objects might be
represented by approximate shapes, such as rectangular cubes for buildings
or a combination of cylindrical and spherical shapes for trees
• Transparency: The degree of object's visibility. Transparency provides a view
of occluded objects.
Table 5.3: Visual Variables for VR Maps. [The table illustrates each variable graphically: Position, Size, Value, Texture, Hue, Orientation, Shape and Transparency.]
Transparency is a very important element added to the list of visual variables.
Compared to the two-dimensional maps there are numerous object occlusions in
VR maps. The nature of this visual representation, in which we do not have to look at all the objects in one view, makes this a common occurrence in virtual environments. Although through navigation we can get to a position to fully visualize the occluded features, transparency can help in increasing perception in many cases. Transparency ranges with values from 0, where the object is not visible, to 1, where the object is solid. In Table 5.3 we have depicted an object with transparency 0.5.
Symbolization of maps is closely associated with labeling. Labels could range from
simple text as in paper maps to sound (when user comes near to or clicks on the
feature). Cartographers have argued that names in a map are "a necessary evil",
because they crowd maps and complicate the image (Robinson et al. 1988). In this
research we are not describing the details of cartographic lettering, like placement,
size, coloring and style, but we are considering text labeling as one of the options
in a list of alternative attribute representations. The most important objectives in describing features represented in the map are not to crowd the map scene and to help the user's perception focus on the feature instead of on its description. Morrison and Ramirez (2001) studied the integration of audio and smart
text to present geographic names of the digital features. Users have the option to
use sound descriptions¹⁷ or automatically well-placed text labels that do not change
with map scale. Text labels are independent from the graphics of the map and
support panning or zooming by moving with the features during these operations.
In VR maps we identify several ways to describe features and symbols in the scene:
• Text: Text labeling can be used to describe features. The text feature is
another object placed in the scene and it can be configured by the user with a
click of a button (turn labels on or off).
• Animated text: Labels are represented by blinking text, which appears during
intervals of time. These descriptions avoid the static overload of the scene.
• Hyperlinks or hypertext: Links attached to objects or text direct users to the
proper information. In VRML this can be applied by using the Anchor node:
¹⁷ When users click on a feature, a sound clip is activated and the description is narrated.
Anchor {
  eventIn      MFNode   addChildren
  eventIn      MFNode   removeChildren
  exposedField MFNode   children    []
  exposedField SFString description ""
  exposedField MFString parameter   []
  exposedField MFString url         []
  field        SFVec3f  bboxCenter  0 0 0     # (-∞, ∞)
  field        SFVec3f  bboxSize    -1 -1 -1  # (0, ∞)
}

This node attaches an anchor to the object (text or features). The user can access the information through a Uniform Resource Locator (URL) in a new
the information through a Uniform Resource Locator (known as URL) in a new
view.
• Sound: Feature description is provided to the user by the means of sound.
Every feature is associated with a sound clip narrating its description. In
VRML, this feature can be applied by using the AudioClip node:
AudioClip {
  exposedField SFString description ""
  exposedField SFBool   loop        FALSE
  exposedField SFFloat  pitch       1.0   # (0, ∞)
  exposedField SFTime   startTime   0     # (-∞, ∞)
  exposedField SFTime   stopTime    0     # (-∞, ∞)
  exposedField MFString url         []
  eventOut     SFTime   duration_changed
  eventOut     SFBool   isActive
}

The AudioClip node, in association with the Sound node (as seen in 4.2.2 Use
of Sound in VR Maps) attaches a sound source to the object. The sound can
be played continuously or triggered by a mouse click.
• Proximity Labeling: Labeling information can be decreased and in some cases
eliminated automatically when the user moves to a certain distance from the
object. Text or sound can symbolize the descriptions. The user configures the
proximity distance. If more than one feature is within the proximity distance,
the narrative sound can be ordered starting with the object closest to the user
(text labels on the other hand, can describe several features within the
proximity distance). In VRML, the ProximitySensor node generates events
when the user enters or exits a region of space defined by a box:
    ProximitySensor {
      exposedField SFVec3f    center   0 0 0   # (-∞, ∞)
      exposedField SFVec3f    size     0 0 0   # [0, ∞)
      exposedField SFBool     enabled  TRUE
      eventOut     SFBool     isActive
      eventOut     SFVec3f    position_changed
      eventOut     SFRotation orientation_changed
      eventOut     SFTime     enterTime
      eventOut     SFTime     exitTime
    }
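The sensor's enterTime event can be routed to an AudioClip's startTime, so the narration plays when the user comes within the configured distance. A minimal sketch (coordinates and file name are hypothetical):

```vrml
#VRML V2.0 utf8
# The sensor box surrounds the building; entering it starts the narration
DEF NEAR_BLDG ProximitySensor {
  center 50 0 -30
  size   40 20 40
}
Sound {
  source DEF NARRATION AudioClip { url "building_description.wav" }
  location 50 2 -30
}
ROUTE NEAR_BLDG.enterTime TO NARRATION.set_startTime
```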
• Touch Labeling: A sound or text label can be triggered when the mouse arrow
moves over the object. This can be achieved in VRML by using the
TouchSensor node:
    TouchSensor {
      exposedField SFBool  enabled  TRUE
      eventOut     SFVec3f hitNormal_changed
      eventOut     SFVec3f hitPoint_changed
      eventOut     SFVec2f hitTexCoord_changed
      eventOut     SFBool  isActive
      eventOut     SFBool  isOver
      eventOut     SFTime  touchTime
    }

The numerous labeling options available for VR maps are the result of the variety of means for representation in virtual environments.
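As a sketch of touch labeling with sound (geometry and file name hypothetical), the sensor is placed as a sibling of the feature's Shape, and its touchTime event starts the label clip:

```vrml
#VRML V2.0 utf8
DEF FEATURE Transform {
  translation 20 0 -10
  children [
    DEF TOUCH TouchSensor { }   # senses pointing and clicks on sibling geometry
    Shape {
      appearance Appearance { material Material { diffuseColor 0.6 0.6 0.2 } }
      geometry Box { size 8 12 8 }
    }
    Sound {
      source DEF LABEL AudioClip { url "feature_label.wav" }
      location 0 2 0
    }
  ]
}
ROUTE TOUCH.touchTime TO LABEL.set_startTime
```

For the mouse-over variant described above, the isOver event could instead drive a Switch that reveals a text label while the pointer is over the feature.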
5.6. Visualization of Uncertainty and Metadata
In a paper map, metadata and quality information are visualized in the form of notes on the edges of the map (see the USGS 1:24,000 quadrangle in Figure 5-1) or as a card catalog description. Examples of metadata include information on projection, revision and source. Examples of quality information include consistency and accuracy. In today's computer maps, metadata is almost never visualized alongside the spatial information. As a consequence, it is very difficult for a user to judge the condition of a map. As mentioned in the previous chapters, because cartographic knowledge is based on the perception of visual representation, it is important to make this information available at first glance.

In the context of this dissertation, we assume that metadata exists for VR maps; the lack of metadata and quality information is a topic for another discussion. Rather, we intend to discuss the techniques necessary to make this information available for viewing at any time in VR maps. Due to the variety of media involved, VR provides an abundance of techniques for depicting information about data. In conventional maps, uncertainty and metadata are provided for the map as a whole. In this research, we advocate a data quality visualization system that supports the display of information for each object in the database. Because of the nature of VR, we have the flexibility of providing this information by linking it to specific data, which makes the combination of data from different sources more attainable. Most of the techniques for the depiction of data quality and metadata should make good use of the variables we studied in the previous section (5.5): the visual, dynamic and sound variables. Some of the methods that can be used to indicate information about data include:
Labels: A description of the information about data is printed on the VR scene.
Labels can describe data quality or metadata in several ways:
• Warning labels: Users are cautioned about the use of multi-source data by being warned which features are less accurate
• Positional accuracy labels: Labels describing positional accuracy are attached to the features in a VR map. Figure 5-14 describes a simulated VR map in which building data belong to different sources. Labels attached to these buildings convey their positional accuracy
• Animated labels: Labels appear periodically attached to the feature.
A major disadvantage of data quality visualization through labels is that we might overload the VR map scene, especially if feature names are also described by labels.
Figure 5-14: Positional Accuracy Labels
Value, coloring or texture of objects: If hue is used to distinguish between object representations, value and texture can be used to portray their accuracy and data quality. Figure 5-15 depicts the same scene as above. This time, the uncertainty in buildings' position is represented by a change in value of the color red. The brightness of the color increases with the uncertainty of the positional accuracy of the building. The same concept applies to the street representations in this figure.
A variation in the value of gray symbolizes the change in accuracy: the representation of Main St. is the most accurate and the representation of Erie St. is the least accurate. Likewise, uncertainty levels in relief can be displayed by variations of color values in patches of the surface. The variation in value can be combined with a legend for a more informed visualization of data uncertainty (to determine the level of magnitude between colors). This method of data quality visualization spares the cartographer the overload that would result from placing additional text in the scene. Moreover, in order to reduce the complexity that might result from introducing several colors and their variations, users should be able to create a single-color representation of all objects in which value alone depicts the quality of information. They should be able to switch between views with the click of a button or a toggle. All these options should be user controlled and configured.
Figure 5-15: Use of Value in Data Quality Representation
Transparency: As explained in the previous section, transparency is a very powerful visual variable for VR map representation. Figure 5-16 illustrates the use of transparency in the simulated scene representation we used above. Objects with lower positional accuracy are more transparent (or less visible) than their counterparts. The transparency method applies to both buildings and streets in the scene depicted in Figure 5-16.
Figure 5-16: Use of Transparency in Data Quality Representation
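In VRML, this effect is obtained through the transparency field of the Material node. A minimal sketch, with hypothetical geometry and accuracy values, renders the less accurate of two buildings at 60% transparency:

```vrml
#VRML V2.0 utf8
# Building from an accurate source: fully opaque
Shape {
  appearance Appearance {
    material Material { diffuseColor 0.8 0.1 0.1 transparency 0.0 }
  }
  geometry Box { size 10 12 10 }
}
# Building from a less accurate source: rendered semi-transparent
Transform {
  translation 25 0 0
  children Shape {
    appearance Appearance {
      material Material { diffuseColor 0.8 0.1 0.1 transparency 0.6 }
    }
    geometry Box { size 10 12 10 }
  }
}
```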
Animation: The motion of features or change of shape can be used to focus user's attention on the accuracy of the specific data. There are several ways animation can be used to portray variation of data uncertainty:
• Changing shape: Objects or phenomena change shape according to their
uncertainty. For example, a street is represented by a solid object or a
surface; the width of the street can change gradually from a minimum to a
maximum buffer that represents the uncertainty of the feature's edges
• Fading: Objects that result from less accurate data sources fade periodically
• Blinking: The less accurate objects blink. The frequency of the blink depends
on the accuracy or feature source quality.
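Periodic fading of this kind can be sketched in VRML by routing a TimeSensor through a ScalarInterpolator into a Material's transparency (the two-second cycle and fade depth are arbitrary choices):

```vrml
#VRML V2.0 utf8
Shape {
  appearance Appearance {
    material DEF UNCERTAIN_MAT Material { diffuseColor 0.8 0.1 0.1 }
  }
  geometry Box { size 10 12 10 }   # a feature from a less accurate source
}
DEF CLOCK TimeSensor { cycleInterval 2.0 loop TRUE }
DEF FADE ScalarInterpolator {
  key      [ 0.0 0.5 1.0 ]
  keyValue [ 0.0 0.8 0.0 ]   # opaque -> mostly faded -> opaque
}
ROUTE CLOCK.fraction_changed TO FADE.set_fraction
ROUTE FADE.value_changed     TO UNCERTAIN_MAT.set_transparency
```

A shorter cycleInterval would produce the blinking variant, with the frequency mapped to the accuracy of the feature source.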
Other visual effects: Visual effects of natural phenomena such as fog and clouds or abstract effects such as fuzzy representation can be used to portray uncertainty.
Figure 5-17 illustrates the use of fog in VR map representations. Fog or clouds can be used to convey the feeling that the scene's spatial quality does not meet the user's specific requirements (as in Figure 5-17); fog and clouds can also be confined to certain parts of the scene in order to depict the spatial quality of individual objects.
Figure 5-17: Fog Added to VR Map Representation
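In VRML the scene-wide effect can be sketched with the Fog node; the visibilityRange value below is an arbitrary choice that a map designer could tie to the overall spatial quality of the data:

```vrml
#VRML V2.0 utf8
Fog {
  color 0.8 0.8 0.8
  fogType "LINEAR"       # fog density increases linearly with distance
  visibilityRange 200    # a shorter range could signal lower data quality
}
```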
Interactive determination: This type of visualization is valuable when we are concerned about overloading the scene with labels, color variations and special effects. Users interact with objects in order to acquire the information about them.
We identify these types of interactive determination of uncertainty in VR maps:
• Hyperlinks: Anchors and links can be attached to features in order to access
the information with a user click (information can be listed in a new window,
flag, or in the scene)
• Touch sensors: When the mouse pointer moves over the feature, information
appears on the screen or in a new window
• User query: The user queries the system for information of certain quality, or
range of uncertainty. As an adaptation of Monmonier's animated graphics
scripts (Monmonier 1992), we can use a slide bar, or an animated sequence to
progress through a range of numerical uncertainties. Features in the scene
change their hue or color value accordingly.
Sound: This method can be applied by attaching audio clips to features. Sound for
presentation of data quality can be applied in two ways:
• Sound narrative: Information (metadata or data quality) is spoken when users
click on the feature or point to it
• Sound noise: The intensity of sound varies according to the uncertainty of the
feature. This is an adaptation of work by Muller and Scharlach (2001).18 The
noise can be played when a user comes within a certain distance of the
feature; it can also be triggered with a click of the mouse, or by pointing at
the feature.
Uncertainty and metadata information could overload the map image and confuse
the user in 2D digital maps. VR maps offer alternative ways to represent this
information, providing the cartographer with more flexibility.
5.7. Summary
In this chapter we reconsider several elements and aspects of conventional
cartographic representation in light of VR visualization.
Firstly, we attempt to classify cartographic features in the following categories:
− Natural features (irregular)
− Abstract features (semi-regular)
18 They used sound noise for representing pollution in digital two-dimensional maps.
− Manmade features (regular).
The type of representation (geometric representation) and the function of some of the cartographic aspects (such as symbolization and generalization) vary according to these categories.
Secondly, we analyze aspects of cartographic representation in the new maps.
These aspects are:
Source information for a VR map representation: Additional information is necessary in data collection for VR maps. This information includes:
• Volume and shape: Additional geometric information is needed in most of the
cases for the realistic display of features
• Sound: Description of realistic sound
• Motion: Direction and speed of movement for dynamic features.
Georeferencing VR maps: The two important components of georeferencing in VR maps are:
• Space and objects georeferencing: We identify the problem of georeferencing
the "empty space". This problem is partially solved by georeferencing the
objects and terrain in the VR space. Elements to be georeferenced are:
− Terrain
− Visual objects
− Viewpoints
• Multimedia georeferencing: elements to be georeferenced include audio clips,
images, hyperlinks and animation.
Scale and orientation: We propose these potential scale representations and orientation methods in VR maps:
• Three-dimensional grid scale: This is a volume grid overlaid on the VR map
scene, assisting the user in estimating distances along the three principal
axes
• Graphic scale: A rectangular cube depicts linear scales along the axes. Its
edges represent distances on the ground
• Numerical Scale: A statement of the fractional scale that changes with the
movement of the viewpoint
• Orientation using the North arrow displayed in the VR map scene
• Orientation using the directional arrow in the VR map scene
• Scaling of the velocity of navigation based on the altitude above the ground
Generalization: Generalization is a very important aspect of the VR map representation for two main reasons:
• Eliminating the need for expensive rendering
• Reduction of complexity for human perception
Multimedia generalization includes:
• Sound
• Animation
• Text labels
• Hypertext
• Enhancement of one type of media using the other
Symbols and Labels: We argue for the necessity of symbolization in VR maps. Symbols are needed for the representation of abstract features, and of other features at certain scales. We consider sound and dynamic variables as elements of representation for phenomena in VR maps. We have adapted, modified and added to Bertin's variables in order to make them suitable for VR maps. These variables are:
• Position
• Size
• Value
• Texture
• Hue
• Orientation
• Shape
• Transparency
We also identify several ways to describe and label features in the scene:
− Text
− Animated text
− Hyperlinks or hypertext
− Sound
− Proximity labeling
− Touch labeling
Visualization of uncertainty and metadata: We identify several potential methods to visualize the information about data (this information is assumed to exist for each feature in the database):
• Labels: description printed in the VR scene
• Value, coloring or texture of objects: Value changes according to the
uncertainty of information
• Transparency: Objects with lower accuracy are less visible than their
counterparts
• Animation: Motion of features or change in shape conveys uncertainty
information to the user
• Other visual effects: Natural phenomena such as fog or clouds, and fuzzy
representations, vary according to the uncertainty of features
• Interactive determination: Users interact with objects in order to acquire the
necessary information
• Sound: Narrative sound or noise can convey the information about data.
5.8. References
Bruce, V., Green, P.R., Georgeson, M.A., 1996, Visual Perception: Physiology,
Psychology, and Ecology; Hove, East Sussex, UK, Psychology Press, 1996
Clarke, K. C., 1990, Analytical and Computer Cartography, Prentice Hall Inc. 1990
Cowen, D. J., 1997, "Discrete Georeferencing", NCGIA Core Curriculum in
GIScience, http://www.ncgia.ucsb.edu/giscc/units/u016/u016.html, posted
February 11, 1997.
Cromley, R.G., 1992, Digital Cartography, Prentice Hall Inc. 1992
Davis, R. E., Foote, F. S., Anderson, J. M., Mikhail, E. M., 1981, Surveying Theory and Practice - Sixth Edition, McGraw-Hill, Inc., 1981
DiBiase, D., MacEachren, A.M., Krygier, J.B., Reeves, C., 1992, "Animation and the
Role of Map Design in Scientific Visualization", Cartography and Geographic
Information Systems, Vol. 19, No. 4, 1992, pp. 201-214
ICA (International Cartographic Association), 1973, Multilingual Dictionary of
Technical Terms in Cartography, Wiesbaden, Franz Steiner Verlag
Jones, C., 1997, Geographical Information Systems and Computer Cartography,
Longman Limited, 1997
Krygier, J., 1994, "Sound and Geographic Visualization", in Visualization in Modern
Cartography, Ed. A.M. MacEachren and D.R.F. Taylor, Pergamon Press, Ltd.,
Oxford, pp. 149-166
Lin, H., Gong, J., Wong, F., 1999, "Web-based Three Dimensional Geo-referenced
Visualization". In: Computer & Geosciences, Vol. 25, pp.1177-1185
MacEachren, A., Kraak, M., 2001, "Research Challenges in GeoVisualization". In:
Cartography and Geographic Information Science, Vol. 28, No. 1
McMaster, R.B., Shea, K.S., 1988, Cartographic Generalization in a Digital
Environment: A Framework for Implementation in a Geographic Information
System, Proceedings GIS/LIS '88, pp. 240-249.
Mitas, L., Brown, W., M., Mitasova, H., 1997, “Role of Dynamic Cartography in
Simulations of Landscape Processes Based on Multivariate Fields”, Computers &
Geosciences, Vol. 23, No. 4, pp. 437-446
Monmonier, M., 1992, “Authoring Graphics Scripts: Experiences and Principles”,
Cartography and Geographic Information Systems, Vol 19, No.4, 1992, pp. 247-260
Moore, K., 1997, "Interactive Virtual Environments for Fieldwork", British
Cartographic Society Annual Symposium, Leicester University, September 12th-14th,
1997
Morrison, J.L., Ramirez, J.R., 2001, “Integrating Audio and User-Controlled Text to Query Digital Databases and to Present Geographic Names on Digital Maps and
Images”, ICA Conference, 2001, China
Muller, J.C., Scharlach, H., 2001, "Noise Abatement Planning - Using Animated
Maps and Sound to Visualise Traffic Flows and Noise Pollution", in Cartography and the Environment, 2001
Neves, N., Silva, J.P., Goncalves, P., Muchaxo, J., Silva, J.M., Camara, A., 1997,
“Cognitive Spaces and Metaphors: A Solution for Interacting with Spatial Data”,
Computers & Geosciences, Vol. 23, No. 4, pp. 483-488
Ramirez, J. R., Zhu, Y., 2002, “A Multi-Media Visualization System”, Internal
Poster, Center for Mapping at Ohio State University, 2002
Reddy, M., Iverson, L., 2002, "GeoVRML 1.1 Specifications", http://www.geovrml.org/1.1/doc/, SRI International, July 2002
Reddy, M., Iverson, L., Leclerc, Y. G., 2000, "Under the hood of GeoVRML 1.0",
Proceedings of the Web3D-VRML 2000 fifth symposium on Virtual Reality Modeling
Language, Monterey, California, United States, 2000
Robinson, A. H., Sale, R. D., Morrison, J. L., Muehrcke, P. C., 1988, Elements of
Cartography, John Wiley & Sons 1988
Topfer, F., Pillewizer, W., 1966, "The Principles of Selection", The Cartographic
Journal, 3(1), 10-16
USGS, 1995, United States Geological Survey (USGS) Digital Line Graph (DLG) Data, http://nsdi.usgs.gov/products/dlg.html, 1995
VRML, 1997, International Standard ISO/IEC 14772-1:1997, http://www.web3d.org/technicalinfo/specifications/ISO_IEC_14772-All/index.html
Worboys, M. F., 1995, GIS – A Computing Perspective, Taylor & Francis 1995
CHAPTER 6
USER INTERACTION, GIS ANALYSIS AND VISUALIZATION ON DEMAND
In this chapter we investigate interactive map visualization, focusing on user interaction, query, analysis and visualization on demand. As noted in the previous chapters, researchers (Bryson 1996, Neves et al. 1997) emphasize the importance of interaction in virtual environments. Interaction is a crucial element that enhances the realistic representation in VR. Without the capability to interact with the surroundings, users would not feel part of the realistic environment. At the same time, the VR designer should take care to limit the non-realistic interaction options available to the user
(MacEachren et al. 1999) (see 6.1 below).
In the context of the interactive human-computer interface, we also study the potential contribution of VR techniques to important GIS elements such as query and analysis.
Our contention is that GIS tasks, such as proximity analysis and shortest path, are better conveyed through a VR environment in which users need less training to interpret and analyze the results. This will be explored in the upcoming sections.
We begin with an investigation of potential user interaction tools in VR maps, proceed with the role of VR visualization in common GIS tasks (such as proximity analysis and shortest path) and conclude with a study of visualization on demand.
6.1. User Interaction
The users' interaction with their environments is one of the most powerful tools of
VR; it enhances the feeling of realistic presence within the surroundings. The importance of interactivity in VR is demonstrated in such commercial applications as Active Worlds, Sims and Simcity. The Active Worlds VR application (Figure 6-1) portrays a series of highly interactive virtual worlds in which users can communicate with each other and manipulate the environment. This application provides users with tools to build their own virtual environments (Active Worlds
2003).
Figure 6-1: Active Worlds
The Sims (Figure 6-2) is a computer and video game from Electronic Arts (EA) in which users can create simulated lives, interact with the environment and other users, build homes, furnish homes, talk to friends and neighbors, get married and so on (Sims 2002).
Figure 6-2: The Sims
Figure 6-3: Simcity
Simcity (Figure 6-3) is a computer game that allows users to build and run virtual cities. The user has control over tools to perform both realistic and non-realistic tasks such as create terrains, sculpt mountains, create disasters, populate areas, build virtual cities, set laws, change policies and run the day-to-day life of a city
(Simcity 2003).
In this research we are not concerned with the specific details of user interaction techniques in a VR environment, because they have been studied extensively and put to use in applications such as the video games mentioned above. Instead, we focus on the use of these techniques to facilitate the visualization of GIS and mapping tasks. User interaction is a very important aspect of VR maps. As explained in Chapter 2.2 (Virtual Reality in Cartography), there has been a separation between the databases that are the foundation of a visualization system and the visualization itself (Rhyne 1997). In most of the commercial software used in mapping (Arc/Info, MapInfo and Intergraph), as soon as data are visualized there is not much we can do to change or manipulate them through the graphic interface. User interaction includes the reaction of virtual objects to mouse clicks, touch, proximity and other actions. Immersion in the virtual environment is also an important aspect of user interactivity (walking through, panning, zooming).
Users should also be able to change the environment around them
(add/remove/manipulate virtual objects).
Firstly, we review the role of GIS and its functions as a tool in the decision support process. Secondly, we use these functions as a basis for deciding what types of user interactions we need in a VR map and finally we analyze the potential of the role of GIS in an interactive VR environment. GIS combines spatial and non-spatial data to answer questions (Kraak and Ormeling 1996) such as:
• What is there? Identification of an object or phenomenon. It seeks to
determine what exists at a particular location
• Where is it? Location of an object or phenomenon. We would like to identify a
location where a certain feature is or where a certain condition is met
• What has changed since? Trends of changes through time
• What is the best route between two objects? Optimal Path between two
objects
• What relation exists between two objects? Patterns between objects or
phenomena. This function is the basis of important GIS applications such as
the ones involving crime and health analysis
• What if a certain condition is applied? Logical models and analysis are
applied for a well-informed determination. For example, we determine what
happens to the customers' spatial allocation if a new store is added to the
group (how does a new store affect the performance of other stores in the
network).
The investigation of GIS techniques that employ these functions is beyond the scope of the present research. Instead, in this section, we focus on human-computer interaction and the different ways to activate these functions from a visual VR environment. In this research, we identify several categories of human-computer interaction in VR maps:
Sensor interaction: Sensor interaction provides the direct interaction of users with the objects or phenomena in the scene. We propose using this method of interaction for several common GIS tasks. Sensors include:
• Activation (or click) Sensor: The user activates (or clicks, in a desktop VR) the
objects of interest and a proper response, as set by the map designer, is
triggered. Examples of the use of this sensor include:
− Clicking on the object to receive the location (a set of coordinates) on the
screen
− Clicking on the object to identify it by name (proper information about
the object)
− Activating two objects with the purpose of finding the optimal path
between them
− Activating several objects in order to study patterns and relations
between them
• Point Sensor: The system tracks the location of the pointing device. If the user
points (mouse over) to an object, a reaction or series of reactions is
triggered as needed. Examples of this technique include:
− Determining the location of objects by pointing to them (mouse over) so
that a set of coordinates appears on the screen
− Identifying the object or objects by pointing to them (mouse over) so that
labels appear on the screen (Figure 6-4)
• Proximity Sensor: The system reacts when the user approaches or comes
within a certain distance to an object or a group of objects. Examples include:
− Objects (such as buildings) are identified when a user reaches a certain
distance from them. This can also be used in cases where objects are
described by sound. Only when the user reaches a certain distance is the
description spoken (this feature should be configurable by the user)
− Analyzing patterns between objects that lie within a certain radius of the
viewpoint. Objects within a certain distance would be identified by a
changing color (useful in crime analysis and other spatial distribution
phenomena)
Figure 6-4: Illustration of the Point Sensor in Displaying Identification
• View Sensor: The system determines when a certain object or objects are
within the view angle and therefore are visible to the user. The main purpose
of the view sensor is to attract the user's attention:
− An object is identified by a sound when it is visible to the user (enters the
view angle). This feature is also helpful for partially sighted people
− An object or objects are introduced with their distances from the user (or
other characteristics) when they first appear in the scene. The distance
from the object or a transient route of the optimal path is created in the
scene as the object becomes visible
• Collision: The system's reaction to the collision of the virtual user (viewpoint)
with a certain object or group of objects. The purpose of this sensor is to
restrict such non-realistic user movements as:
− Moving outside an optimal path
− Moving outside barriers as set by logical models (e.g. use of highways
that have a lower speed than a certain value)
− Running into an object (such as a building)
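In VRML, the last restriction is directly supported by the Collision node together with a WALK navigation mode; a minimal sketch (geometry hypothetical):

```vrml
#VRML V2.0 utf8
NavigationInfo { type "WALK" }   # browser performs collision detection
Collision {
  collide TRUE                   # the viewer cannot walk through this building
  children Shape {
    appearance Appearance { material Material { diffuseColor 0.7 0.7 0.7 } }
    geometry Box { size 10 15 10 }
  }
}
```

The node's collideTime eventOut could additionally trigger a warning sound when the user runs into the object.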
Table 6.1 summarizes the above discussion on the use of sensors in common GIS tasks.
Sensor               GIS Functionality                            VRML Node
Activation or Click  Location, Identification, Optimal path,      Anchor
                     Pattern
Point                Location, Identification                     TouchSensor
Proximity            Identification, Pattern, Logical models      ProximitySensor
View                 Identification, Optimal path                 VisibilitySensor
Collision            Optimal path, Logical models                 Collision

Table 6.1: Sensors Applied to GIS Functionality
Sensor methods such as activation or point are commonly used in computer applications where users interact with environments. Nevertheless, these and the other methods examined above are new to VR map applications.
Immersion and navigation interaction: This encompasses such operations as walk-through, fly-through, zoom, pan, tilt and roll. A description of these operations and their role in a VR map was provided in sections 4.4 (Viewer Immersed in the Scene) and 4.5 (Navigation).
Environment change interaction: In a VR map, users should be able, to a certain extent, to change the environment around them. MacEachren et al. (1999) caution about the risk of unlimited user interaction in VR environments. For example, if a user were able to pick up a building and change it in a large-scale environment, the realistic feeling of the surroundings would diminish. In a small-scale environment (looking at the same picture from above and further away), buildings would be considered more like models that we can change.
In this research we identify the following concepts that a cartographer should keep in mind when designing environment change interaction for VR maps:
• Provide the user with the ability to add objects or phenomena
• Provide the user with the ability to delete objects or phenomena
• Provide the user with the ability to edit objects or phenomena
• Provide the user with the ability to change the location of objects or
phenomena
• The cartographer should restrict the non-realistic options available to the
user for editing, changing the location of, adding or deleting objects or phenomena.
These restrictions depend on:
− Scale
− Application and objective of the map
− Navigation method (walk-through or fly-through)
From a practical point of view, the nature of VRML can contribute to this concept. We propose the following structure: objects are connected through hyperlinks that point to server applications, which manipulate the data sources. The diagram in Figure 6-5 depicts this concept, connecting the user to the source data and the visual display.
The analysis of details and implementation of such a system is beyond the scope of this research. We provide the general framework of the structure as described in the context of VR visualizations of maps. A system that focuses on the source data manipulation from a VR map visual interface, based on this concept, is a topic for another discussion.
[Diagram: the user interacts with the visual display through a VRML user interface; hyperlink connections link the display to a server application that manipulates the data.]
Figure 6-5: User Interaction with the Visual Display
6.2. Visualizing Spatial and Proximity Analysis
Spatial analysis is the fundamental concept of a GIS. In this section we study the visualization of spatial search and analysis, such as containment search, conditional search, buffering, optimal path and pattern analysis, in a VR map environment. We identify these major differences between the visualization of GIS analysis in two-dimensional and VR environments:
• The presence of the third dimension in a VR map. The “Z” factor enhances the
visualization of spatial analysis results
• Multimedia elements (sound, dynamic visualization, hyperlinks and hypertext)
provide additional tools for helping the user understand the situation
• Navigation and immersion as powerful discovery tools. In a traditional GIS
users are able to zoom in to the selected objects. In addition to this, in a VR
map, users are able to fly-through or walk-through the selected objects or
even visit them sequentially through a predetermined path.
In this research, we propose the following techniques for the visualization of spatial analysis results. These techniques should be configurable by the users. Users can decide to use certain methods or a combination of methods:
Containment search: The user extracts features contained in a certain area defined by the query. The selected features can be visualized by:
• A change in the color of the surface in which objects stand (similar to the
display in Figure 6-6)
• A change of the object's color. For example, the selected features can be
shown in yellow (similar to the display in Figure 6-6)
• If color is used to display different features, values can be used to portray the
features selected from the query (e.g. different variations of red)
• A change in the visibility of features that are not selected. The selected
features are displayed normally, while the rest of the map is displayed as
semi-transparent
• A change in the size of the containment area. The area is exaggerated to the
degree that the user can distinguish between selected and non-selected
objects. Visualization of this exaggeration should be configurable by the user.
Neighboring features can be displaced in order to accommodate the
exaggeration.
Conditional search: Features are selected based on the value of an attribute or a combination of different attributes. The results of this type of query are not necessarily located in one region; they may be found in different locations on the map. This is an important fact in deciding on the methods for displaying these results:
• Change of object colors (same as above)
• Change in the visibility of the features that are not selected (same as above)
• Dynamic visualization of selected features. Blinking would attract the user's
attention to the features that are selected from the query
• Navigation of the user through the selected features. The user sets the
parameters of this navigation. For example, the user could select all the
buildings with household incomes greater than $50,000. The system can be
configured to visit each building (resulting from the query) starting with the
lowest income and ending with the highest. The time spent at each location is
also configurable.
Buffering: Features are selected based on their proximity (or distance) to a certain feature. In a 2D environment, buffers of a point, line and area are classified respectively as point, line and area proximity regions. Our contention is that in a
VR map, buffers should be classified as area and volume regions. Geometric points and lines are 0 and 1-dimensional elements and cannot be rendered in a VR environment. We identify these methods for displaying buffers in a VR map:
• Coloring of regions and features included within the region for area buffers.
Figure 6-6 displays a 30m buffer around Erie Street. The buffered area is
colored blue and features selected within this area are colored yellow
• Change in the visibility of features that are not selected from the area buffer
(same as above).
Figure 6-6: Illustration of Buffers in a VR Map
In certain cases we are interested in three-dimensional buffers and the features that lie within the three-dimensional region of space around an object. For example, air traffic controllers might be interested in airplanes that lie within a sphere centered on a certain aircraft at a given moment in time. In the same fashion, users might be interested in plotting an airplane's route, buffering it in three-dimensional space and locating other airplanes that fall within this buffer (in order to plan for and prevent mid-air collisions). Figure 6-7 shows the trajectory of an airplane plotted through space and a cylindrical buffer around this trajectory. We have made the buffer semi-transparent so that the aircraft model and its trajectory are not obstructed. Other flying objects or features, such as buildings and terrain surfaces that are in close proximity to the airplane when it is landing or taking off, are flagged and portrayed in a certain color.
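Flagging objects inside the trajectory buffer reduces to a distance test against the flight path's segments. A minimal sketch, assuming the trajectory is a 3-D polyline and the buffer a cylinder of a given radius (function names are illustrative):

```python
import math

def dist_point_segment(p, a, b):
    """Shortest distance from point p to the segment a-b in 3-D."""
    ax, ay, az = a
    bx, by, bz = b
    px, py, pz = p
    abx, aby, abz = bx - ax, by - ay, bz - az
    ab2 = abx * abx + aby * aby + abz * abz
    # Parameter of the closest point, clamped to the segment's extent.
    t = 0.0 if ab2 == 0.0 else max(0.0, min(1.0,
        ((px - ax) * abx + (py - ay) * aby + (pz - az) * abz) / ab2))
    closest = (ax + t * abx, ay + t * aby, az + t * abz)
    return math.dist(p, closest)

def within_buffer(obj, trajectory, radius):
    """True if obj lies inside the cylindrical buffer around the trajectory,
    i.e. the object should be flagged and specially colored."""
    return any(dist_point_segment(obj, trajectory[i], trajectory[i + 1]) <= radius
               for i in range(len(trajectory) - 1))
```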
Figure 6-7: Buffering the Trajectory of an Airplane
Optimal path: Optimal path analysis addresses the problem of finding the shortest or least-cost route between two features or locations. Once this route is determined, we identify the following ways in which it can be visualized:
• Automated fly-through of the route: Users set the parameters, including speed and possible stops, during the fly-through or walk-through from the origin to the destination
• Directing the user through the route: The user navigates along the path while being guided by the system. An arrow can show the direction of movement at any time (much like the guidance systems in new car technology)
• An animated arrow extending from the origin to the destination: This method is useful for making the user aware of the route, especially when observing the scene from a distance
• Highlighting the path: Changes in the color, value, or visibility of the background are all methods that accentuate the optimal path. If we are working with a road network, the optimal path search will return a series of connected road segments that can be highlighted using these methods.
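For a road network stored as an adjacency list, the connected segments of the least-cost route can be computed with Dijkstra's algorithm and then handed to any of the display methods above. A minimal sketch (the graph layout is an assumption, not a prescribed data model):

```python
import heapq

def shortest_path(graph, origin, destination):
    """Dijkstra's algorithm over {node: [(neighbour, cost), ...]};
    returns (total cost, list of nodes from origin to destination)."""
    queue = [(0.0, origin, [origin])]
    visited = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == destination:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for nxt, weight in graph.get(node, []):
            if nxt not in visited:
                heapq.heappush(queue, (cost + weight, nxt, path + [nxt]))
    return float("inf"), []  # destination unreachable
```

The returned node list identifies the road segments to highlight, animate with an arrow, or fly through.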
Pattern analysis: Pattern analysis examines the correlations and patterns that exist between objects based on their spatial location. Correlations or unusual groupings of phenomena can be visualized through techniques, or combinations of techniques, such as those discussed above: coloring, value, blinking, visibility and navigation. In Figure 6-8 we attempt to detect patterns or correlations between a chemical plant location and occurrences of a lung disease in the area.
Multiple buffers encircle the location of the chemical plant. These buffers are distinguished by color value. Vertical bars represent the level of disease at population units (such as cities). The size of the bars depends on the occurrence of the disease in that area. It is clear from the figure that these occurrences increase with proximity to the chemical plant.
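The bar heights in such a display can be derived by binning disease occurrences into the concentric buffers; a sketch, assuming point case locations and equal ring widths (the names are illustrative):

```python
import math

def ring_counts(plant, cases, ring_width, n_rings):
    """Count disease cases falling into each concentric buffer ring around
    the plant location; innermost ring first."""
    counts = [0] * n_rings
    for x, y in cases:
        ring = int(math.hypot(x - plant[0], y - plant[1]) // ring_width)
        if ring < n_rings:
            counts[ring] += 1
    return counts
```

Each count could then drive the height of the vertical bar (or the color value) drawn within its ring.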
Figure 6-8: Pattern Determination Visualization
6.3. Visualization on Demand
In this section, we investigate the feasibility of a highly interactive, on-the-fly, flexible visualization system utilizing VR techniques. In the previous chapters we investigated how VR visualization techniques can impact the way we think about maps. Generating a map as a realistic VR model is a very useful, but at the same time very bold, enterprise. Users are accustomed to conventional maps and have used them for hundreds of years. At this point, we cannot predict if or when VR maps will become part of everyday life. Nevertheless, combined methods of the old and the new technology might be more appropriate for present applications. A combination of two-dimensional static displays and VR three-dimensional dynamic displays can be used for depicting spatial phenomena. It is important that we take into account the computer speed and memory of the typical user when designing such applications. A VR map that works very nicely on powerful computers might be useless on an average computer (human-machine interaction and real-time rendering should be flawless in order for users to perceive the realistic effects of the scene). In this research we consider the combination of such techniques an intermediate step towards the full utilization of the new VR map technology, which is highly dependent on improvements in average computer systems. We propose the following potential application of VR technology as an alternative to conventional maps or fully self-functional VR maps.
Multimedia VR visualization including three-dimensional display, sound and animation can be indexed by traditional two-dimensional maps. A two-dimensional map or sketch of reality is used to reference the VR scene, and users can visualize it on demand. For example, the user points and clicks at a river feature on the two-dimensional map. The river scenery with sound and dynamic flow, as well as the surroundings, is visualized. The advantage of this method is that computer memory and space are used more efficiently because only sections of the VR map, which can be connected by hyperlinks to the index map, are displayed. Figure 6-9 illustrates such a concept. It depicts an indexed map containing streets, localities and multimedia icons. Users click on these icons to access VR scenes such as:
− Natural features: Relief, rivers, lakes, etc., including animation and sound to
enhance the visualization and to describe the features (as studied in the
previous chapters)
− Manmade features: Buildings, roads, etc., including animation and sound to
enhance the visualization of phenomena and describe the features.
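The hyperlinked index can be sketched as a simple icon-to-scene table with a cache, so a VR "map clip" is transferred only on the first request; the file names and the `load_scene` function are hypothetical:

```python
# Hypothetical index: each clickable icon on the 2-D map is hyperlinked
# to a small VR "map clip" that is fetched only when the user asks for it.
SCENES = {
    "river":  "scenes/river.wrl",   # natural feature: animated flow + sound
    "campus": "scenes/campus.wrl",  # manmade feature: buildings + narration
}

_loaded = {}  # cache: a clip is transferred and parsed at most once

def on_icon_click(icon_id, load_scene):
    """Return the VR scene for a clicked icon, loading it on demand.
    `load_scene` is whatever function fetches and parses a scene file."""
    if icon_id not in _loaded:
        _loaded[icon_id] = load_scene(SCENES[icon_id])
    return _loaded[icon_id]
```

The cache is what makes memory use proportional to the clips actually visited rather than to the whole VR map.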
Figure 6-9: Visualization of Different Media Georeferenced to the Map
This system allows the introduction of elements used in VR maps as extensions to a two-dimensional map. The size of such "VR map clips" should be dependent on the capability of the computer system and the availability of information.
As a proof of concept for this notion, we have built an Internet-compatible "Visualization on Demand System" for the Union College campus area. This map can be accessed at http://home.nycap.rr.com/bidoshi/dissertation. It is not a fully functional, accurate map system, but rather demonstrates the potential of combining the incorporated technologies.
The initial display (Figure 6-10) is a multimedia map of the Schenectady-Albany,
N.Y. area built with Shockwave technology. An initial narrative describes the map
and its scale. Users can perform several actions on this map:
• Clicking on the red labels obtains additional information about the feature.19 Highways are described by name and main direction (N, S, E, W and major cities)
• Users can also obtain a narrative of directions from the airport by clicking on
the airport label
• A larger scale map opens gradually from the center when the user clicks on
the red building symbolizing Union College.
The large-scale map shown in Figure 6-11 appears on the screen while a narrative
explains the nature of this map and its scale. The lower menu button in Figure
6-11 displays an aerial view of the region (Figure 6-12). The upper menu button
directs the user to a VR map of the central campus area (Figure 6-13). The VR map
is created using 3DS Max software and VRML. Users need to download a VRML browser such as Cortona from ParallelGraphics to view this scene (http://www.parallelgraphics.com).
19 Sound clips are used for the narrative. All sound clips are created using ReadPlease (http://www.readplease.com) text-to-speech technology.
Figure 6-10: Small Scale Map of the Area
Figure 6-11: Large Scale Map of the Area
Figure 6-12: Aerial View of Campus
Figure 6-13: VR Map of the Campus Center
The VRML scene includes these user interaction capabilities:
• Users click on the body of the models for multimedia elements such as pictures and 360° interior video
• Users click on the roof of the models for sound descriptions of the buildings
• Users click on the floor in order to navigate closer to the scene
• User controls on the side and bottom of the screen provide these types of movements and actions:
− Pan, Zoom, Tilt, Roll
− Switch between walk-through and fly-through operations
− Switch between previously set up viewpoints
− Fit the scene on the screen
− Align the scene to the horizon
− Restore to the original viewpoint.
The purpose of the system described above is to prove that the combination of various representations, including VR, provides effective map visualization. VR "map clips" can be used to depict aspects of reality and can be requested on demand by the user interacting with the map. Nevertheless, we believe that the ultimate target for cartographers should be stand-alone VR maps.
6.4. Summary
In this chapter we investigate user interaction in VR maps, visualization of GIS queries and visualization on demand. These topics are summarized below:
User interaction is an important element of VR maps, enhancing the feeling of realistic presence. We review the most important GIS functions and identify the potential human-computer interaction methods and their use for GIS functionality:
• Sensor interaction: It includes new sensors in addition to methods that have
been commonly used in 2D maps (such as click or touch sensors). Sensors
include:
− Activation or click: Location, identification, optimal path, pattern
− Point: Location, identification
− Proximity: Identification, pattern, logical models
− View: Identification, optimal path
− Collision: Optimal path, logical models
• Immersion and navigation interaction: Operations of walk-through, fly-
through, zoom, pan, tilt, and roll
• Environment change interaction: Interaction of users with objects in the
environment (actions such as delete, add and modify). We identify the risk of
unlimited user change interaction. Restrictions should be dependent on scale,
map objective and navigation method (walk or fly).
Visualization systems should also be structured to support changing the source data from the visual interface. Hyperlinks can be used to access server information and change data appropriately.
Visualization of GIS spatial analysis in virtual environments is enhanced by the following elements:
• The presence of the third dimension
• Multimedia elements
• Navigation and immersion
We propose several visualization techniques (which should be configurable by the user) for the following spatial analysis elements:
− Containment search
− Conditional search
− Buffering
− Optimal path
− Pattern analysis
Visualization on demand encompasses highly interactive, on-the-fly, flexible map visualization systems using VR techniques. A combination of two-dimensional static and VR dynamic displays is used as an alternative method and/or an intermediate step towards the full utilization of the new VR map technology. A two-dimensional index map is used to reference the VR scene and other multimedia clips.
A visualization on demand system for a college campus area is used as a proof of concept.
CHAPTER 7
CONCLUSIONS AND FUTURE CONSIDERATIONS
7.1. Conclusions
We believe that the ultimate goal of spatial information specialists should be the fullest utilization of VR technology in the portrayal of spatial phenomena in maps. In this research we attempt to lay the foundations for the change to such utilization.
As agreed by many authors (Ramirez 1999, Cartwright and Peterson 1999), the conventional map is no longer the best possible abstraction of reality. Advances in many areas of computer science have opened ways and possibilities for cartographers to improve this abstraction. Additions like sound and animation can contribute to making use of aspects of human perception that have not been used until now. The study of spatial cognition is very important to determine the impact of incorporating new media into map visualization. In Chapter 3 we identified some important observations about the way the human brain processes the perception of its surroundings (specifically applicable to virtual environments):
• Images in the brain are stored according to a structure that uses the
interrelationships between objects
• Realistic representation of maps reduces the amount of subjectivity in map creation
• Generalization of the scene is part of the user's perception of reality. Nevertheless, details are important to provide users with options. The capacity of the human brain and its ability to generalize support the higher complexity of the VR map scene
• VR techniques significantly improve the cognitive ability of users reading a
map (the use of techniques such as navigation further enhances this ability)
• Dynamic elements and spatial sound are important elements used to enhance
the human perception of a VR map
• Structural breakdown of features is important to support the application of
VR techniques in maps.
As mentioned in Thoen (1997), we are beings from a three-dimensional world. Consequently, our brains understand three-dimensional scenes better. That is why three-dimensional representation, virtual reality and navigation through the scenery are very important concepts in communicating with maps.
In Chapter 4 we investigate methods of representation of information in VR maps.
Graphic representation is the foundation of the VR map. Immersion and navigation are important methods of visualizing the content of the map. Sound and dynamic visualization within the VR context are other areas that have not been investigated considerably in cartography. These elements are very important because of the nature of cartographic data. Dynamic representation of road scenes and rivers gives us a better perception of these data. Animation can also be used to enhance representation by attracting the user's attention. Sound is very important for conveying the realistic feeling of the scenery and can also be used as an additional means to communicate information in maps.
In the most significant contribution of this research we reconsider several important concepts of conventional cartography in view of VR map visualization
(Chapter 5). For the purpose of structuring information (see above), we classify features into natural, abstract and manmade.
− Source information acquisition should be updated to collect additional
elements such as volume, shape, sound and motion
− Georeferencing of spatial information in VR maps is achieved through spatial
referencing of objects and terrain. The "empty area" is partially georeferenced
through interpolation of locations of these features. Georeferencing of
viewpoints and multimedia are important elements in enhancing the sense of
location within the surroundings
− Analyzing scalability issues in VR maps, we propose potential representations (adapted for virtual environments) such as grid scale, graphic scale and numerical scale. Scalability of navigation velocity is taken into account in this context
− Generalization of VR maps is necessary in order to eliminate the need for expensive rendering and to reduce the complexity of human perception. Multimedia generalization is also part of the reduction of unnecessary and detrimental complexity (such as "sonic overload")
− Symbols and labels are needed for representation of features at certain scales
as well as representation of abstract features. We adapt Bertin's visual
variables in light of the VR map representation
− Several potential methods are identified for visualization of uncertainty and
metadata in VR maps. These methods use visual, sound and dynamic
variables as analyzed in current and previous chapters
Traditionally, GIS queries display their results numerically. Aiming at more efficient decision-making support, we propose the potential display of these results in a close-to-real environment with surroundings in the background (Chapter 6).
Furthermore, interaction with the environment makes it easier for the user to collect information about features and decide on the kinds of queries that he/she wants to perform. Editing (changing, adding and removing) features in virtual reality maps is a powerful concept that allows users to predict the influence and contribution of a change to the environment.
The need for new map visualization is imminent. Two major premises underlie the transformation that we propose:
• Advances in computer science and visualization techniques in general prompt the need for new cartographic visualization, even for the mere survival of cartographic material
• In today's fast-paced world, cartographers need to find new and better ways to display spatial information so that users can perceive it more readily.
There are arguments that conventional cartography has been successful because the real world is too complex to take in at once; we need abstraction and separation between representations and ourselves in order to make sense of it (Slocum et al. 2001). As discussed in 5.4.1 (Necessity of Generalization in Virtual Reality Maps), simplification and abstract representation of reality are also part of spatial virtual environments. Nevertheless, VR techniques give us the option to portray reality very realistically and in dimensions our brain understands better.
7.2. Limitations
During the course of this research we identified several limitations in the VR representation of maps and provided, in some cases, practical and partial solutions. These limitations are caused mostly by restrictions of current technology, including the hardware and processing capability of computers. Below, we list some of these limitations as identified in the previous chapters:
− Graphic rendering: The trade-off between graphic rendering and human-machine interaction is important in the context of current technology (see 4.1.2 - Terrain Modeling). Technology has not yet reached the point where we do not have to take into consideration the computational complexity of a VR scene. Theoretically we strive to keep graphic rendering at the most realistic levels, but practically this would affect user interaction with the environment on today's computers and therefore reduce the realistic sense altogether
− Computer screen size limitation: Today's viewing devices set the practical limit for the graphic resolution of displays. As explored in 5.4.1 (Necessity of Generalization for Virtual Reality Maps), the human eye's resolution limitation is approximately 3.5 m at a scale of 1:25,000. The computer screen resolution would translate at best to 5.25 m at the same scale. Viewing devices also affect the approximation of the location of sound sources when they are positioned outside the user's viewing angle
− Two-dimensional display depicting three-dimensional features: The
discrepancy lies in the fact that the viewing devices are two-dimensional while
the world we are attempting to perceive is three-dimensional. Therefore,
georeferencing of space becomes problematic (see 5.2 - Georeferencing in
Virtual Reality Visualization of Maps)
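The screen-resolution figure quoted above can be checked by multiplying the map distance by the scale denominator; the dot pitch of roughly 0.21 mm (about 120 dpi) is our assumption for a typical monitor of the time:

```latex
% Ground distance covered by one screen pixel at scale 1:25,000:
d_{screen} = p \cdot S = 0.21\,\mathrm{mm} \times 25\,000 = 5.25\,\mathrm{m}
% compared with the eye's resolving limit of about 0.14 mm on the map:
d_{eye} = 0.14\,\mathrm{mm} \times 25\,000 = 3.5\,\mathrm{m}
```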
7.3. Considerations for Future Work
In this dissertation we adapted some of the VR concepts to elements of cartography and attempted to transform some of the visualization techniques in modern cartography and GIS based on the new VR media.
Advantages of the new techniques were mentioned throughout the dissertation.
Nevertheless, we have identified some limitations (as mentioned in the previous section); in addition to this, the reader should be aware of several areas in which the potential for more research exists (these areas exceed the scope of this study):
• Methods of georeferencing in virtual maps were explored in 5.2
(Georeferencing in Virtual Reality Visualization of Maps). We identified "the
empty space" as one of the problems in georeferencing desktop VR scenes.
This study proposed several options to confront this issue (grid, object-based
georeferencing and multimedia georeferencing). We believe that more research
is necessary in this area (not only confined to desktop VR), specifically in the
georeferencing of space in a VR scene
• One of the problems identified in a desktop VR (and other types of VR) is the
inconsistency of viewing a three-dimensional scene through a
two-dimensional device (as seen in 5.2 - Georeferencing in Virtual Reality
Visualization of Maps). The study of true three-dimensional viewing
techniques such as holography might be the answer to such problems
• In Chapter 6 we investigated various methods of human-machine interaction
in a VR GIS environment. We think that this study is very important and
several aspects that exceeded the scope of this research should be potential
topics for investigation in the future. The study of speech recognition
techniques for VR queries is one of these topics. Users should be able to use
sound not only to learn from the environment but also to communicate with
their surroundings through spoken language
• During the course of this dissertation we discussed the role of user interaction
and manipulation of surroundings (6.1 - User Interaction). As mentioned in
the previous chapter, a study that focuses on the source data manipulation
from a VR map visual interface is a topic for another discussion. We believe
that this is an important topic related to VR map visualization. Users should
be able to change (add, remove and modify) features from a visual
environment
• The computational complexity of a VR scene using today's technology might encourage further research on visualization on demand. Although the ultimate goal is to create a fully immersive environment for maps of the future, such a technique is a practical solution that can currently be implemented using existing resources
• In this study we focus primarily on general maps that represent real world phenomena (such as topographic maps). This study could easily be extended to pertain to abstract phenomena. Nevertheless, the use of VR in thematic maps could be the subject of another study.
In the list above we mention some of the topics that either were examined but not explored to their fullest, or were beyond the scope of this dissertation, but are still very important in the framework of realistic visualization of maps. We believe that this dissertation fulfilled its purpose of identifying the problems and providing some of the solutions in the use of virtual reality in cartography. We are confident that future research on the above topics and others like them will shed more light on this subject.
7.4. References
Cartwright, W., Peterson, M.P., 1999, "Multimedia Cartography". In: Multimedia Cartography, Ed. Cartwright, W., Peterson, M., Gartner, G., Springer-Verlag Berlin, pp. 1-10
Ramirez, J.R., 1999, "Maps for the Future: A Discussion", ICA Conference, Ottawa, Canada, 1999
Slocum, T., Blok, C., Jiang, B., Koussoulakou, A., Montello, D., Fuhrman, S., Hedley, N.R., 2001, "Cognitive and Usability Issues in GeoVisualization". In: Cartography and Geographic Information Science, Vol. 28, No. 1
Thoen, B., 1997, "Holo-Deck GIS Provides New World View", GIS World, August 1997, pp. 32-33.
BIBLIOGRAPHY
3DS Max, 2002, http://www.discreet.com: Discreet, Autodesk, Inc. 2002
Active Worlds, 2003, Active Worlds Computer Game, http://www.activeworlds.com, Activeworlds Inc., 1997-2003
Bar H.R., Sieber R., 1997, "Atlas of Switzerland - Multimedia Version: Concepts Functionality and Interactive Techniques". Proc: 18th ICC, ICA, Stockholm, 2, pp. 1141- 1149
Berry, J., 2000, "Video Mapping Brings Maps to Life". In: GeoWorld, October 2000
Bertin, J., 1983, Semiology of Graphics, University of Wisconsin Press, Madison.
Blauert, J., 1983, Spatial Hearing - The Psychophysics of Human Sound Localization, MIT Press, Cambridge, MA 1983
Blauert, J., Lehnert, H., 1994, Binaural technology & Virtual Reality, CIARM, Special Meeting Japan. MITI Committee on Virtual Reality, J-Tokyo
Bly, S., 1982, “Presenting Information in Sound”, Human Factors in Computer Systems, Proceedings, Gaithersburg, MD, pp.371-375
Bruce, V., Green, P.R., Georgeson, M.A., 1996, Visual Perception : Physiology, Psychology, and Ecology; Hove, East Sussex, UK, Psychology Press, 1996
Bryson, S., 1996, “Virtual Reality in Scientific Visualization”, Communications of the ACM, Vol. 39, No. 5, pp. 62-71, May 1996
Burgess, D.A., 1992, “Techniques for Low Cost Spatial Audio”, In Proceedings of the ACM Symposium on User Interface Software and Technology (UIST), ACM SIGGRAPH and ACM SIGCHI, ACM Press, Nov. 1992, pp. 53-59
Buttenfield, B.P., Mark D.M., 1991, “Expert Systems in Cartographic Design”, in Geographic Information System: the Microcomputer and Modern Cartography, Ed. D.R.F. Taylor, Pergammon Press, Ltd., Oxford, pp. 129-150
Buziek, G., 1999, "Dynamic Elements of Multimedia Cartography". In: Multimedia Cartography, Ed. Cartwright, W., Peterson, M., Gartner, G., Springer-Verlag Berlin, pp. 231-244
Cámara, A., 1997, "Interacting with Spatial Data: The Use of Multimedia and Virtual Reality Tools", Geographic Information Research at the Millenium: GISDATA Final Conference, Le Bischenberg, France, 1997
Cammack, R. G., 1999, "New Map Design Challenges: Interactive Map Products for the World Wide Web". In: Multimedia Cartography, Ed. Cartwright, W., Peterson, M., Gartner, G., Springer-Verlag Berlin, pp. 154-172
Cartwright, W., 1997, “New Media and their Application to the Production of Map products”, Computers & Geosciences, Vol. 23, No. 4, pp. 447-456, 1997
Cartwright, W., Peterson, M.P., 1999, "Multimedia Cartography". In: Multimedia Cartography, Ed. Cartwright, W., Peterson, M., Gartner, G., Springer-Verlag Berlin, pp. 1-10
Central Intelligence Agency, 2001, World Fact Book 2001, http://www.odci.gov/cia/publications/factbook/index.html, Web Page accessed 03/07/02
Clarke, K. C., 1990, Analytical and Computer Cartography, Prentice Hall Inc. 1990
Cohen, D., Gotsman, C., 1994, “Photorealistic Terrain Imaging and Flight Simulation”, IEEE Computer Graphics and Applications, Vol. 14 (March 1994), pp. 10-12
Corporel J., 1995, HyperGeo, Proc. JEC, The Hague, pp. 90-95
Cotton B. and Oliver R., 1994, The Cyberspace Lexicon - an illustrated dictionary of terms from multimedia to virtual reality, London: Phaidon Press Ltd
Cowen, D. J., 1997, "Discrete Georeferencing", NCGIA Core Curriculum in GIScience, http://www.ncgia.ucsb.edu/giscc/units/u016/u016.html, posted February 11, 1997.
Cromley, R.G., 1992, Digital Cartography, Prentice Hall Inc. 1992
Davis, R. E., Foote, F. S., Anderson, J. M., Mikhail, E. M., 1981, Surveying Theory and Practice - Sixth Edition, McGraw-Hill, Inc., 1981
De Floriani, L., Puppo, E., Magillo, P., 1999, "Applications of Computational Geometry to Geographic Information Systems", Chapter 7 in Handbook of Computational Geometry, J.R. Sack, J. Urrutia (Editors), Elsevier Science, 1999, pp.333-388
Deering, M., F., 1996, “The Holosketch – VR Sketching System”, Communications of the ACM, Vol. 39, No. 5, pp. 54-61, May 1996
DiBiase, D., MacEachren, A.M., Krygier J.B., Reeves, C., 1992, "Animation and the Role of Map Design in Scientific Visualization", Cartography and Geographic Information Systems, Vol. 19, No. 4, 1992, pp. 201-214
Duda, R.O., 1997, 3-D Audio Guide, Online Source, 1997
Encarnacao, J.L., Kroemker, D., de Martino, J.M., Englert, G., Haas, S., Klement, E., Loseries, F., Mueller W., Sakas, G., Rainer, Petermann, R.R.V., 1993, "Advanced Research and Development Topics in Animation and Scientific Visualization", Animation and Scientific Visualization, Earnshaw R.A. and Watson D., eds., Academic Press, London 1993
Erickson, T., 1993, “Artificial Realities as Data Visualization Environments: Problems and Prospects”, in Virtual Reality Applications and Explorations, Alan Wexelblat Ed., pp. 3-22, Academic Press Professional, 1993
Fairbairn, D., Parsley, S., 1997, “The Use of VRML for Cartographic Presentation”, Computers & Geosciences, Vol. 23, No. 4, pp. 475-481
Fairchild, K., 1993, “Information Management Using Virtual Reality-Based Visualizations”, in Virtual Reality Applications and Explorations, Alan Wexelblat Ed., pp. 45-74, Academic Press Professional, 1993
Ferguson, R.S., 2001, Practical Algorithms for 3D Computer Graphics, 2001
Fisher, P., 1994, “Randomization and Sound for the Visualization of Uncertain Spatial Information”, in Visualization in Geographic Information Systems, Unwin, D., Hearnshaw, H., (eds), London: Wiley, pp. 181-185
Foley, J. D., Van Dam, A., Feiner, S. K., Hughes, J. F., Phillips, R.L., 1994, Introduction to Computer Graphics, 1994
FormZ, 2002, http://www.formz.com/: Auto*Des*Sys, Inc. (form.Z Homepage)
Freitag, U., 1993, "Map Functions. Five selected main theoretical issues facing cartography". In: Report of ICA-Working-Group to define the main theoretical issues on cartography, Kanakubo T. (ed), pp. 9-19
Gilkey, R.H., Weisenberger, J.M., 1995, The sense of presence for the suddenly deafened adult, Presence, Vol. 4, No. 4, 1995, pp. 357-363
Green, M., Halliday, S., 1996, “A Geometric Modeling and Animation System for Virtual Reality”, Communications of the ACM, Vol. 39, No.5, pp. 46-53, May 1996
Houlding, S., W., 2001, "XML - An Opportunity for
Hurni, L., Bar H.R., Sieber R., 1999, "The Atlas of Switzerland as an Interactive Multimedia Atlas Information System". In: Multimedia Cartography, Ed. Cartwright, W., Peterson, M., Gartner, G., Springer-Verlag Berlin, pp. 99-112
ICA (International Cartographic Association), 1973, Multilingual Dictionary of Technical Terms in Cartography, Weisbaden, Franz Steiner Verlag
Jiang B., Kainz W. and Ormeling F., 1995, Hypermap Techniques in Fuzzy Data Exploration, Proceedings of the 17th ICC, Barcelona, Spain: ICA, pp. 1923-1927
Jones, C., 1997, Geographical Information Systems and Computer Cartography, Longman Limited, 1997
Jul, S., Furnas, G.W., 1997, "Navigation in Electronic Worlds", A SIGCHI 1997 Workshop, Vol. 29, No. 4, October 1997
Kraak, M. J., 1988, Computer-assisted Cartographical Three-dimensional Imaging Techniques, Delft University Press, 1988
Kraak, M. J., 1994, “Interactive Modelling Environment for Three-dimensional Maps: Funcionality and Interface Issues”, in Visualization in Modern Cartography, Ed. A.M. MacEachren and D.R.F. Taylor, Pergammon Press, Ltd., Oxford, pp. 27-43.
Kraak, M. J., Ormeling, F. J., 1996, Cartography – Visualization of Spatial Data, Longman Limited 1996
Kraak, M., 1999, "Cartography and the Use of Animation". In: Multimedia Cartography, Ed. Cartwright, W., Peterson, M., Gartner, G., Springer-Verlag Berlin, pp. 173-180
Kraak, M-J., van Driel, R., 1997, "Principles of Hypermaps". In: Computers and Geosciences - Special Issue Cartographic Visualization, Vol 23, no 4, pp. 457-464
Krygier, J., 1994, “Sound and Geographic Visualization”, in Visualization in Modern Cartography, Ed. A.M. MacEachren and D.R.F. Taylor, Pergamon Press, Ltd., Oxford, pp. 149-166
Lin, H., Gong, J., Wong, F., 1999, "Web-based Three Dimensional Geo-referenced Visualization". In: Computers & Geosciences, Vol. 25, pp. 1177-1185
MacEachren, A., Kraak, M.J., Verbree, E., 1999, “Cartographic issues in the design and application of geospatial virtual environments”, Proceedings of the 19th International Cartographic Conference, August 14-21, 1999, Ottawa, Canada
Mathews, G., J., 1996, “Visualizing Space Science Data in 3D”, IEEE Computer Graphics and Applications, Vol. 16 (November 1996), pp. 6-9
MacEachren, A., Kraak, M., 2001, "Research Challenges in GeoVisualization". In: Cartography and Geographic Information Science, Vol. 28, No. 1
McMaster, R.B., Shea, K.S., 1988, Cartographic Generalization in a Digital Environment: A Framework for Implementation in a Geographic Information System, Proceedings GIS/LIS '88, pp. 240-249.
Mitas, L., Brown, W., M., Mitasova, H., 1997, “Role of Dynamic Cartography in Simulations of Landscape Processes Based on Multivariate Fields”, Computers & Geosciences, Vol. 23, No. 4, pp. 437-446
Monmonier, M., 1992, “Authoring Graphics Scripts: Experiences and Principles”, Cartography and Geographic Information Systems, Vol 19, No.4, 1992, pp. 247-260
Moore, K., 1997, "Interactive Virtual Environments for Fieldwork", British Cartographic Society Annual Symposium, Leicester University, September 12th-14th, 1997
Morrison, J.L., 1994, “The Paradigm Shift in Cartography: the Use of Electronic Technology, Digital Spatial Data, and Future Needs”, Advances in GIS Research 1, Healey, R.G. and Waugh T.C., eds., Taylor and Francis, London, p. 1-15
Morrison, J.L., Ramirez, J.R., 2001, “Integrating Audio and User-Controlled Text to Query Digital Databases and to Present Geographic Names on Digital Maps and Images”, ICA Conference, 2001, China
Muller, J.C., Scharlach, H., 2001, "Noise Abatement Planning - Using Animated Maps and Sound to Visualise Traffic Flows and Noise Pollution", in Cartography and the Environment, 2001
National Research Council, 1995, Virtual Reality: Scientific and Technological Challenges, National Academy Press, 1995
Negroponte N., 1995, Affordable Computing, Wired, July, p. 192
Negroponte, N., 1996, “Object-Oriented Television”, Wired, July, p. 188
Neves N., Silva, J., P., Goncalves, P., Muchaxo, J., Silva, J., M., Camara, A., 1997, “Cognitive Spaces and Metaphors: A Solution for Interacting with Spatial Data”, Computers & Geosciences, Vol. 23, No. 4, pp. 483-488
O’Rourke, M., 1998, Principles of Three-Dimensional Computer Animation, 1998
Openshaw, S., Waugh, D., Cross, A., 1994, “Some Ideas About the Use of Map Animation as a Spatial Analysis Tool”, in Visualization in Geographic Information Systems, Unwin, D., Hearnshaw, H., (eds), London: Wiley, pp. 131-138
Pausch, R., Proffitt, D., Williams, G., 1997, “Quantifying Immersion in Virtual Reality”, ACM SIGGRAPH Conference proceedings, August 1997
Perkins, C., 1994, "Quality in the map librarianship and documentation in the GIS age". In: The Cartographic Journal, Vol. 31, No. 2, pp. 93-99
Perrott, D.R., Saberi, K., Brown, K., Strybel, T. Z., 1990, Auditory psychomotor coordination and visual search performance, Perception & Psychophysics, 48, 214-226
Peterson, M., 1995, Animated and Interactive Cartography, Prentice Hall, Inc., 1995
Peterson, M.P., 1999, "Elements of Multimedia Cartography". In: Multimedia Cartography, Ed. Cartwright, W., Peterson, M., Gartner, G., Springer-Verlag Berlin, pp. 31-40
Pfister, B., Burgess, K., Berry, J., 2001, "What's a Map? Media Mapping Technology is Redefining the Term". In: GeoWorld, May 2001
Ramirez, J. R., Zhu, Y., 2002, “A Multi-Media Visualization System”, Internal Poster, Center for Mapping at Ohio State University, 2002
Ramirez, J.R., 1999, “Maps for the Future: A Discussion”, ICA Conference, Ottawa, Canada, 1999
Ramirez, J.R., 2001, “New Geographic Visualization Tool: A Multiple Source, Quality, and Media (MSQM) Maps”, Internal Paper, Center for Mapping at Ohio State University, 2001
Ramirez, J.R., Morrison, J., Maddulapi, H., 2002, “Smart Text in Mapping”, Internal Poster, Center for Mapping at Ohio State University, 2002
Raper J., 1991, "Spatial Data Exploration Using Hypertext Techniques", in Proceedings of EGIS '91, Second European Conference on GIS, Brussels, Belgium, pp. 920-928
Rapid Imaging, 2002, http://www.landform.com/vrml.htm: Rapid Imaging Software’s VRML Page
Reddy, M., Iverson, L., 2002, "GeoVRML 1.1 Specifications", http://www.geovrml.org/1.1/doc/, SRI International, July 2002
Reddy, M., Iverson, L., Leclerc, Y. G., 2000, "Under the hood of GeoVRML 1.0", Proceedings of the Web3D-VRML 2000 fifth symposium on Virtual Reality Modeling Language, Monterey, California, United States, 2000
Rhyne, T., M., 1997, “Going Virtual with Geographic Information and Scientific Visualization”, Computers & Geosciences, Vol. 23, No. 4, pp. 489-491
Richard, D., 1999, "Web Atlases - Internet Atlas of Switzerland". In: Multimedia Cartography, Ed. Cartwright, W., Peterson, M., Gartner, G., Springer-Verlag Berlin, pp. 113-118
Robertson, G., Czerwinski, M., Dantzich, M., 1998, “Immersion in Desktop Virtual Reality”, Microsoft Research, 1998
Robinson, A. H., Sale, R. D., Morrison, J. L., Muehrcke, P. C., 1988, Elements of Cartography, John Wiley & Sons 1988
Rohrer, R., Swing, E., 1997, “Web-Based Information Visualization”, IEEE Computer Graphics and Applications, Vol. 17 (July/August 1997), pp. 52-59
Scharlach, M., 2001, “Maps and Sound”, in ICA Conference, Beijing, China, 2001
Schwartz, R., J., 1995, “Virtual Facilities: Imaging the Flow”, Aerospace America, Vol. 33 (July 1995), pp. 22-26
Shilling, R.D, Shinn-Cuningham, B., 1999, Virtual Auditory Displays, Virtual Environments Handbook, 1999
Simcity, 2003, Simcity, http://simcity.ea.com/, Electronic Arts, Inc, 2003
Sims, 2002, The Sims, http://thesims.ea.com, Electronic Arts, Inc, 2002
Slocum, T., Blok, C., Jiang, B., Koussoulakou, A., Montello, D., Fuhrman, S., Hedley, N.R., 2001, "Cognitive and Usability Issues in GeoVisualization". In: Cartography and Geographic Information Science, Vol. 28, No. 1
Swanson, J., 1999, "The Cartographic Possibilities of VRML". In: Multimedia Cartography, Ed. Cartwright, W., Peterson, M., Gartner, G., Springer-Verlag Berlin, pp. 181-194
Terrain Data Representation, 2002, http://www.tec.army.mil/TD/tvd/tdrb.html: Terrain Data Representation Branch Web Site
Thoen, B., 1997, “Holo-Deck GIS Provides New World View”, GIS World, August 1997, pp. 32-33.
Tobler, W.R., 1970, “A Computer Movie Simulating Urban Growth in the Detroit Region”, Economic Geography 46, pp. 234-240
Topfer, F., Pillewizer, W., 1966, "The Principles of Selection", The Cartographic Journal, 3(1), 10-16
Truflite, 2002, http://www.truflite.com: Truflite’s 3-D World
University of Texas, 2001, PCL Map Collection, http://www.lib.utexas.edu/maps/index.html, Web page accessed 03/07/02
USGS, 1995, United States Geological Survey (USGS) Digital Line Graph (DLG) Data, http://nsdi.usgs.gov/products/dlg.html, 1995
VRML (Virtual Reality Modeling Language), 1997, International Standard ISO/IEC 14772-1:1997, http://www.web3d.org/technicalinfo/specifications/ISO_IEC_14772-All/index.html
Web 3D Consortium 2002, http://www.web3d.org/: Home of the Web 3D Consortium
Weber, C., Yuan, M., 1993, “A Statistical Analysis of Various Adjectives Predicting Consonance/Dissonance and Intertonal Distance in Harmonic Intervals”, Technical Papers: ACSM/ASPRS Annual Meeting, New Orleans, Vol. 1, pp. 391-400
Wenzel, E., Fisher, S., Stone, P., Foster, S., 1990, “A System for Three-dimensional Acoustic ‘Visualization’ in a Virtual Environment Workstation”, Visualization ’90: First IEEE Conference on Visualization, IEEE Computer Society Press, Washington, pp. 329-337.
Worboys, M. F., 1995, GIS – A Computing Perspective, Taylor & Francis 1995