
Section 06

Design and Production / Dessin et production

Problems in screen map design Mette Arleth

Area-Normalized Thematic Views T. Alan Keahey

Color Perception Research on Electronic Maps Chen Yufen

Interface Design Issues For Interactive Animated Maps Sven Fuhrmann,

National Place Name Register Integrated with Cartographic Names Register for Multiple Scales and Products Teemu Leskinen

Label placement for dynamically generated screen maps Ingo Petzold, Lutz Plümer, Markus Heber

Towards an Evaluation of Quality for Label Placement Methods Steven van Dijk, Marc van Kreveld, Tycho Strijk, Alexander Wolff

Production flowcharting for mapping organisations: A guide for both lecturers and production managers Sjef J.F.M. van der Steen

Automatic Bilingual Name Placement of 1:1000 Map Sheets of Hong Kong Lilian S.C. Pun-Cheng and Geoffrey Y.K. Shea

A Model for Standardizing Cartographic Depiction of International Boundaries at Small to Medium Scales Leo Dillon

Conception de cartes pour l’étude comparative de l’urbanisation de trois pays du Maghreb Vanessa Rousseaux


Coping with Qualitative-Quantitative Data in Meteorological Cartography: Standardization, Ergonomics, and Facilitated Viewing Mark Monmonier

Noise in Urban Environment: Problems of Representation and Communication Jean-Claude Muller, Holger Scharlach, Matthias Jäger

The Visualization of Population Distribution in a Cartographic Information System - Aspects of Technical Realization of Dot Maps on Screen Robert Ditz


Session / Séance 01-D Problems in screen map design

Mette Arleth Aalborg University, Laboratory of Geoinformatics Fibigerstraede 11 DK-9220 Aalborg Ø Denmark ph.: +45 96 35 82 83 fax: +45 98 15 57 75 [email protected]

Abstract

Screen maps, with their physical characteristics and interactive potential, call for design strategies other than those used for traditional printed maps. This paper is based on part of a Ph.D. study concerning screen map design. It is found that the process of designing and developing screen maps is more manageable when it is separated into two phases, concerning the Map Interior and the Map Exterior respectively. Problems and methods of the two design phases are discussed, together with some suggested solutions.

Introduction

The topic of this paper is screen map design. Over the last decades the computer, and thereby its primary output device, the screen, has become not only a tool but a medium in its own right. Screen maps, on CD-ROM or the Internet, increasingly supplement and replace traditional paper maps. But suitable design of screen maps still constitutes a cartographic challenge, due to the low resolution and limited size of the most prevalent technology today, the 15" or 17" CRT screen. As [Peterson, 1995] notes, the influence of the traditional paper medium on the screen map is heavy. So heavy that many of the commonly available screen maps look like scanned versions of paper maps (and many of them are), covered with bulky symbols that signal underlying interactive elements. These screen maps simply fail to exploit the potential of the computer medium [Bär and Sieber, 1997]. Fortunately there are many fine exceptions to this description, one distinguished example being the Interactive Atlas of Switzerland, which with a combination of aesthetically pleasing graphics and extensive, smooth interactivity shows the way ahead for interactive atlas technology. But it is an exception. To overcome the physical limitations of the screen, the screen map has to take advantage of the enhanced interactive and dynamic potential offered by the computer medium. The starting point must be the medium as it is, not only as it should or could be. Inspiration and knowledge can be gained by turning attention towards other fields working with visual communication and audio-visual media. Cartographers have to realise that comprehensive knowledge of information technology and multimedia techniques is today a necessity. With this in mind, a Ph.D. project concerning appropriate screen map design is currently under way at Aalborg University. The project includes theoretical considerations as well as a case study that includes a user test. The case study concerns setting up guidelines for suitable screen map design for a specific purpose, physical planning, but the theoretical basis of the guidelines may prove to have more general applicability. This paper will present and argue for one of the basic ideas in the project: that screen map design, as regards both method and contents, must be divided into two phases, the Map Interior design (the map elements, symbolisation etc.) and the Map Exterior design (the tools and functions for using the map). Furthermore, some problems in the two design phases will be mentioned, as well as some possible solutions.

The screen map, an internal representation

The representational approach to maps and cartography has been presented and thoroughly investigated by [MacEachren, 1995]. Another and more general understanding of representations is presented by [Norman, 1993] (founding chair of the Department of Cognitive Science at the University of California, San Diego). [Norman, 1993] places representations in a larger group of what he calls cognitive artefacts: tools or methods dedicated to supplementing and expanding human memory and cognitive abilities. Or, in the words of [Bertin, 1983]: “The entire problem is one of augmenting this natural intelligence in the best possible way, of finding the artificial memory that best supports our natural means of perception.” Maps and other cartographic visualisations are representations of the physical (or imaginary) world, representations that make it possible for users to perform higher-order thinking, reflect upon the world represented and recognise new spatial patterns [Norman, 1993]. These higher-order thoughts are very hard for the unaided mind to perform, since it would have to dedicate mental capacity to maintaining an internal picture of the world. If the representation - the map - is well made, and the represented features are depicted in an easily perceivable way, the essential connections appear more obvious and lead the user to new experiences and ideas [Norman, 1993]. The critical trick is to get the abstractions right, to represent the important things and not the unimportant, and to choose the relevant cartographic variables to represent the features. Much cartographic research has been dedicated to the latter subject, especially as regards the paper map. However, there is a difference between paper maps and screen maps: the visibility of the contents and properties of the representation. In the paper map, the visibility of the contents and properties is an inherent characteristic. Everything that the paper map contains is shown on the paper, and though the use and meaning of the displayed information might require special knowledge, all information is readily visible. Contrary to this, the screen map is a graphic representation of underlying digital map data, basically numbers in rows and columns. To become a perceivable and interpretable representation, these numbers have to be translated, visualised in a way that enables the user to gain insight into the underlying spatial or abstract connections. A screen map is only one of several possible representations of the underlying data, one single surface representation out of an extensive internal representation. The surface can be changed and manipulated at any time, and there are even hidden representations: temporary calculations and internal states used only by the software that are never displayed, never visible. The screen map falls in the category of internal representations [Norman, 1993], and with the screen map, as with all other internal representations, there is much more than can be readily perceived. Internal representations need interfaces, visual aids that transform the information hidden within the internal representation into surface forms that can be used [Norman, 1993]. Consequently, designing a screen map is not only a question of designing the map. The map designer must supply the map user with an easily perceivable interface and usable, relevant tools for manipulating the internal representations.
Designing a map and developing interactive tools for it are two quite different tasks, using different methods, different technology and resting on different theoretical bases. Even though in the final product the screen map and the interface should appear as an integrated, natural whole, it seems reasonable to divide the design process into two equally important phases:

1. The Map Interior design: the selection and symbolisation of the map elements, the choice(s) of level of detail etc.
2. The Map Exterior design: the system development or multimedia design, production of interactive/multimedia elements etc.


The Map Interior design

Designing the screen map interior does not fundamentally differ from traditional paper map design. The same techniques and theories can be applied, only adjusted for the special physical properties of the screen map, which are, in short, limited size and resolution. In practice this essentially means putting a smaller amount of information per area unit [Spiess, 1994], choosing graphically simpler symbols and choosing text fonts that are fit for the screen [Birkvig, 1996]. A tricky point is colour. Firstly because the colours on the screen map arise from the transmission of light through the screen, contrary to the colours on the paper map, which arise from the reflection of light on the map surface. This means that the perceptual as well as the aesthetic properties of colours on a screen map may differ from colours on a paper map, since their brightness and mutual contrast appear different. However, this is more or less a reverse problem, since the screen map is almost always produced and designed digitally in a WYSIWYG environment. The problem is more pressing when the end result is a paper map, as it is hard to adjust the colours on the paper copy to look exactly like the ones on the screen.

Colour depth

Secondly there is the problem of colour depth. This is a hardware problem. Colour depth refers to the number of displayable colours, which depends on the number of bits available for the specification of colours. An 8 bit colour depth means 256 available colours, 16 bit gives approximately 32,000 or 65,000 colours, and 24 bit or more provides many millions of colours, denoted true colour. If the screen map is designed using 16 or 24 bit colour depth but afterwards displayed on a screen with 8 bit colour depth, the result may look very different from the intention. As an example, one of the screen maps from the author's case study could be mentioned, showing a minor town and its surroundings, the town area shown with a light, soft orange in 16 bit colour. Transferred to 8 bit colour it changed to a quite dominating and unappetising milky pink. Another problem with colour depth is that colour nuances that are separable at high colour depth look alike at lower colour depth. An example of this problem was discovered on one of the maps in the proposal for the digital municipal plan of Aalborg. A map showing forest areas used two different green colours to show existing forest areas and planned forest planting. With 16 bit colour the two greens were just separable; with 8 bit colour depth they were identical. The screen map designer, designing for the Internet or CD-ROM, is unable to tell at which colour depth the end users will view the screen map. This paper will not plead for a limitation to 8 bit colour depth, since this will not always supply the designer with the nuances needed to make an aesthetically pleasing and easily perceivable map. But at least the designer should test that all the colours used can be distinguished even in 8 bit colour. For some general advice on colour use in screen map design see [Artimo, 1994] and [Loihmulahti, 1995].
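Such a test can be approximated programmatically. The following is a minimal sketch, not from the paper: it assumes a simple 6x6x6 "web-safe" cube as the 8 bit palette and checks whether two 24-bit colours still map to different palette entries after reduction. The two green values are invented for illustration.

import math

def to_web_safe(rgb):
    # Snap a 24-bit (r, g, b) triple to the nearest entry of the 6x6x6 web-safe cube.
    return tuple(round(c / 51) * 51 for c in rgb)

def distinguishable_at_8bit(rgb_a, rgb_b):
    # True if the two colours still map to different 8 bit palette entries.
    return to_web_safe(rgb_a) != to_web_safe(rgb_b)

# Two greens that are just separable in 16/24 bit colour (hypothetical values):
existing_forest = (60, 140, 70)
planned_forest  = (65, 145, 75)
print(distinguishable_at_8bit(existing_forest, planned_forest))  # False: they collapse

A designer would run such a check against the actual target palette of the delivery platform rather than the web-safe cube assumed here.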

“Dead” areas

Regardless of colour depth, large coloured areas on a screen map tend to look “dead”. Here the screen map designer faces a difficult task: balancing between keeping the screen map simple and avoiding too-empty areas. One way of overcoming the “dead” areas is to apply a relief map. The Atlas of Switzerland uses a shaded relief map as a background in most of its maps, in the printed version as well as the interactive one. However, not all countries have a topography enabling this; a relief map of Denmark, for example, with its highest point 173 m above sea level, does not add much variation to a map image. It is not recommendable in these cases simply to add texture or hatching to the area signature, an option offered by most mapping software. A regular, uniform texture will not liven up the “dead” areas; it will only disturb the balance in the map image and attract too much attention. During a work study of the methods and design strategies of the graphic designers at the Danish National Television (DR1) news programme a different solution was observed. News programmes very often use maps to locate the events being mentioned or the persons reporting or being interviewed. Obviously, these maps have to be very simple, due to the usually very short time of exposure (20-30 seconds). Typically the maps show only the relevant part of the world, one or two major cities and, if necessary, the border between two adjacent countries. Consequently there will be large empty parts of the map looking “dead”. The designers at the Danish television avoid this by using transparent colours on top of an irregular, haphazard texture image; the same background image is used in all maps. The transparent colours make the map image look bright, and the irregular texture makes the surface “live”. This solution must be regarded as highly recommendable, but unfortunately hardly applicable, as most mapping software does not facilitate transparent colours. This is only one example of a general problem: most mapping software does not yet supply the cartographic designer with all the options and graphic tools needed for aesthetically satisfactory map production, and the final editing must be performed in professional graphics programmes. A third solution should be mentioned here: using digital orthophotos or satellite imagery as background will prevent the problem with “dead” areas. In combination with relevant themes, and provided at the right scales, orthophotos or satellite scenes can serve as a functional and decorative background, but the solution must be used with caution. Orthophotos and satellite imagery provide huge amounts of unclassified information and can just as well disturb as enhance the information in the map. Furthermore, these kinds of background put strong limitations on the choices of colours and size of symbols, especially if the background is panchromatic [Brande-Lavridsen, 1998].
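The transparent-colour technique can be reproduced in any environment that supports alpha blending. The sketch below uses the Pillow imaging library purely as an illustration of the idea described above; the file names and the 60% opacity are assumptions, not values taken from the DR1 workflow.

from PIL import Image

# Irregular texture used as the common background (placeholder file name).
background = Image.open("paper_texture.png").convert("RGBA")

# A flat area fill at roughly 60% opacity, so the texture shows through
# and keeps large coloured areas from looking "dead".
fill = Image.new("RGBA", background.size, (235, 225, 200, 150))

# Standard "over" alpha compositing: transparent colour on top of the texture.
composite = Image.alpha_composite(background, fill)
composite.save("map_background.png")

In a real map the fill would of course be masked to the relevant area (e.g. the land polygon) rather than covering the whole image.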

Limited size

Another problem to be overcome in screen map design is the limited size of the screen. In combination with the low resolution (compared to paper), the size of the screen gives the designer the choice between showing a simplified overview of a large area or a detailed insight into a small area. This is contrary to the paper map, which with a theoretically almost unlimited size and a very high graphic resolution provides both overview and details. In the screen map this is compensated for by the possibility of moving and changing the focus (pan and zoom). But pan and zoom functions do not solve the problem; if the map user has to zoom to a very high scale to be able to see the details of the map, the overview will be lost. Consequently the map has to give sufficiently detailed and meaningful information at any scale. This is a task not unfamiliar to cartographic designers (regardless of the medium), and it requires considerations of the contents and purpose of the map, and of the expected map users. In screen map design the problem can be solved in two ways: either by reducing the zoom possibilities to a limited, predefined number of scales and then elaborating an appropriate design for each of the levels, or by adopting an intelligent map concept, like the one used in "Atlas of Switzerland - multimedia version" [Bär and Sieber, 1997]. An intelligent map allows the display of any zoom level, automatically adjusting the degree of generalisation, and gives the user the possibility of combining map layers freely and perhaps of using alternate symbolisation. To assure that the resulting maps have a sufficient cartographic quality, each map layer is associated with look-up tables defining how the layer can be visualised, what symbolisation is allowed, at what scales the layer can be applied etc. [Bär and Sieber, 1997]. There is no doubt that the latter solution is superior to the first. But it is technically much more demanding and requires a lot of programming, unless special software such as ESRI's Internet Map Server (IMS) for ArcView is used. ArcView Internet Map Server is an extension (plug-in module) to the desktop GIS application ArcView 3.0. The extension allows the user to make ArcView projects available through a TCP/IP network. (An ArcView project is a collection of maps and related spatial data.) The extension is marketed as an easy-to-use product that will make live maps and GIS applications accessible on the web without any further demand than a standard Java-enabled web browser. The extension is customisable through ArcView's built-in macro language Avenue, and on the web site through standard Java programming [GISplan, 1997]. The ArcView maps are not intelligent in the sense described by [Bär and Sieber, 1997], but they do offer the option of automatically switching layers on and off according to the map scale. In this way it is almost possible to adjust the level of generalisation to be suitable at any map scale. The IMS for ArcView has disadvantages (one is the price...), some of which will be mentioned in the discussion of the Map Exterior design, but it provides a solution for smaller institutions like a municipal or regional administration that do not have resources at their disposal comparable to those allocated to e.g. national atlas projects.
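The scale-dependent layer switching described above can be sketched as a simple look-up table; the layer names and scale thresholds below are invented for illustration and are not taken from the Atlas of Switzerland or from ArcView.

# Hypothetical look-up table: for each layer, the range of map scales
# (given as scale denominators) at which it may be displayed.
LAYER_SCALE_RANGES = {
    "municipal_boundaries": (1000, 500000),
    "building_footprints":  (500, 10000),
    "road_centrelines":     (1000, 100000),
    "place_names":          (5000, 250000),
}

def visible_layers(scale_denominator):
    """Return the layers that should be switched on at the given map scale."""
    return [name for name, (smallest, largest) in LAYER_SCALE_RANGES.items()
            if smallest <= scale_denominator <= largest]

print(visible_layers(7500))    # all four layers: detailed view
print(visible_layers(200000))  # only boundaries and place names: overview

An "intelligent map" in the sense of [Bär and Sieber, 1997] would extend such a table with allowed symbolisations per layer and per scale, not just visibility.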

Point symbols and text

Most printed maps make use of point symbols or pictograms (icons in the Peirce terminology [Fiske, 1990]). Despite a very limited size these pictograms can be fairly detailed, due to the very thin lines obtainable with printing techniques. Converted to the screen map, the pictograms become either illegible or big and bulky. In general a pictogram on a screen map should not exceed 16 x 16 pixels, as larger symbols will be much too dominating. To design meaningful pictograms in 16 x 16 pixels is a task requiring special graphic talents. Hence the use of iconic symbols in screen maps ought to be limited or avoided. One way of reducing the need for iconic symbols could be to allow the simultaneous display of only one or two point themes requiring graphically complex symbols. A technique that can be used alone or in addition to this limitation is the application of metasymbols: symbols symbolising underlying (hyperlinked) symbols. When the metasymbol is activated (by pointing or clicking) a small window showing the iconic symbol at a fully legible size pops up. The application of lettering is in general a problem in map design [Robinson, 1952]. Readability, legibility, harmony and a distinct visual hierarchy are just some of the requirements of the lettering of a map. Applying nice-looking lettering to a screen map is at the same time harder and easier than lettering a printed map. The difficulties arise from the limitations of the screen, primarily the coarse resolution. Despite the application of anti-aliasing very few fonts are immediately legible on the screen at a point size of 7. Some sans serif fonts can be read at point size 6, but most antiquas require at least point size 8 to be legible. Of course not all lettering in a map need be this small, but with the limited size and resolution of the screen, lettering with larger type tends to be much too dominating. Moreover, to establish a suitable visual hierarchy between the lettering of different elements, the designer needs a certain range of sizes and styles, and consequently has to make use of small lettering as well. Few fonts available today are designed specifically for the screen. Empirical studies suggest that sans serif fonts (like Verdana or Tahoma) are more legible at small sizes on the screen than antiquas. The aesthetic qualities of the different fonts will not be discussed here, as they are more or less a matter of taste. But it must be noted that even though a certain font might look nice on the screen (even in small type) when applied horizontally, the letters can appear quite different when sloping and slanting to follow the features in the map. What makes lettering easier on a screen map than on printed maps is the fact that the features in the screen map need not be lettered once and for all. Lettering can be switched on and off according to shifts in theme or scale, it can be applied as “pop up” labels, spoken “labels” or whatever suits the intended users. In that case the problem of applying harmonic lettering with a clear visual hierarchy is replaced by the problem of how to inform the users about the interactive and flexible lettering options applied, and show them how to use them. The solution of this kind of problem is part of the Map Exterior design.

The Map Exterior design

The benefits of using cognitive artefacts, such as maps, are maximised if distractions and interruptions are minimised. All effort should be concentrated on the solution of the task involving the artefact, not on the artefact itself. Most persons in literate cultures use paper-based representations in this effortless, natural way, without giving a thought to the use of the paper. The very nature of a paper-based representation guarantees some understanding of the use of it [Norman, 1993]. Contrary to this, the understanding of the contents and possible uses of an internal representation, such as the screen map, depends on the interface. An internal representation with a poorly designed interface will be an artefact that blocks the cognitive process leading to the solution of the task. If the user needs to spend time and mental resources on figuring out how the screen map is used, less attention can be spent on performing the task [Lindholm and Sarjakoski, 1994]. The computer/screen as a medium has many advantageous qualities. But to be able to benefit from these qualities, the user needs an interface that provides meaningful and usable information about the content and structure of the internal representation. In other words, the screen map has to be user friendly. User friendliness is a key concept in all software development and interface design. However, the term "user friendly" is quite subjective and hard to define, and therefore more precise and operational terms like human-computer interaction, user-centred design or usability engineering are preferred. See [Keller and O'Connel, 1997] for a brief or [Nielsen, 1993] for a thorough exposition of the meaning and contents of the term "usability engineering". Having recognised the necessity and contents of a user friendly design, the remaining question is how to achieve it. [Norman, 1988] argues that the design should be based on the needs of the user: the user should be able to figure out what actions are possible and how they are performed, and to evaluate the effect of the action. He puts forward 7 "principles for transforming difficult tasks into simple ones". The principles are aimed at design in a very broad sense (hence the title, The Design of Everyday Things), but prove to be very applicable to system and multimedia design as well. The 7 principles are as follows:

1. Use both knowledge in the world and knowledge in the head.
2. Simplify the structure of tasks.
3. Make things visible: bridge the gulfs of Execution and Evaluation.
4. Get the mappings right.
5. Exploit the power of constraints, both natural and artificial.
6. Design for error.
7. When all else fails, standardise.

A full explanation of the meaning of these 7 principles exceeds the limits of this paper. But some explanation must be given, especially of principles 1 and 3.

Knowledge in the world and in the head

Knowledge in the head refers to the knowledge and experience stored in human memory. Storing it there and recalling it for use requires mental effort. Knowledge in the world, or external knowledge, refers to information readily perceivable from the surroundings, either explicitly as written information (like the letters and numbers on the keyboard) or immediately derived through constraints. Knowledge in the world need not be memorised and recalled; it is there, ready for use. [Norman, 1988] argues that people learn better and feel more comfortable when the knowledge required for a task is available externally. It should be noted that when a user is able to internalise - learn - the required knowledge, performance can be faster and more efficient. These users should have the opportunity to benefit from the internalisation of knowledge through shortcuts, hot keys etc. But for most users the best result will occur when knowledge in the world and in the head mutually support each other. One way of achieving this is by providing the user with a good conceptual model of the interface. The principles of operation must be observable, all actions should be consistent with the conceptual model, and the current state of the system should be reflected visually. In multimedia and other interactive applications, the conceptual model is often provided by a central metaphor. Metaphors help users to see things in different ways. Spatial metaphors often arise from spatial experiences and orientations in reality [Flensborg, 1997]. A good metaphor enables the user to make sense of the external knowledge provided explicitly and implicitly by the system, and furthermore, the conceptual model helps the user to internalise the required knowledge [Norman, 1988]. Interactive multimedia for preschool children makes extensive use of spatial metaphors for navigation between the different elements in the application: playroom, bookshelf, shop, garden, farm etc. With hardly any introduction 4-5 year old children can navigate and use these applications, using their spatial sensory experiences from the physical world. The critical trick is of course to find the right metaphor, the one that evokes immediate surprise and recognition. If the mental effort of interpreting the metaphor exceeds the recognition achieved, the benefit of using the metaphor is lost [Rasmussen, 1998]. One example of a metaphor is the "spectacles metaphor" used in the author's case study. A familiar phrase in Denmark goes something like "What you see depends on the spectacles you wear". The screen maps in the case study can be explored using different spectacles: 3D models, 360° panoramic pictures, orthophotos etc. lie "behind" the map surface and can be shown when the spectacle option is chosen. The pointer turns into a pair of dark spectacles, and as the pointer is moved around in the map it changes to a pair of spectacles with eyes when it hits active zones with underlying visualisation elements such as those mentioned above.
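The cursor feedback in the spectacles metaphor amounts to a hit test of the pointer position against the active zones of the map. The following sketch is illustrative only; the zone coordinates, cursor names and element names are invented, not taken from the case study application.

# Hypothetical active zones: rectangles (x_min, y_min, x_max, y_max) in screen
# coordinates, each linked to an underlying visualisation element.
ACTIVE_ZONES = [
    ((120, 340, 180, 400), "3d_model_of_town_hall"),
    ((420, 110, 470, 160), "panorama_harbour"),
]

def cursor_for_pointer(x, y):
    """Spectacles with eyes over an active zone, plain dark spectacles elsewhere."""
    for (x_min, y_min, x_max, y_max), element in ACTIVE_ZONES:
        if x_min <= x <= x_max and y_min <= y <= y_max:
            return "spectacles_with_eyes", element
    return "dark_spectacles", None

print(cursor_for_pointer(150, 360))  # ('spectacles_with_eyes', '3d_model_of_town_hall')
print(cursor_for_pointer(10, 10))    # ('dark_spectacles', None)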

The Gulfs of Execution and Evaluation

Gulfs of execution and evaluation in the use of an application occur when there are no visual clues to the possible (and desirable) actions and to the results of performed actions, respectively. To avoid such gulfs the application should strive to live up to the following four principles [Norman, 1988]:

Visibility: the user can tell the possible actions and the state of the application by looking.
A good conceptual model: the application is built up using and providing a good conceptual model for the user, with consistency in the presentation of possible actions and their results.
Spatial analogies: it is possible to determine the relationships between actions and results, and between controls and their effects, by their appearance and spatial analogies.
Feedback: full and continuous feedback is given about the results of the action.

Get the mappings right

In this connection mapping does not refer to the cartographic process, but to the use of spatial and physical analogies in designing and placing functions and controls. As spatial analogies are inherent in map products, this principle is followed in the vast majority of interactive map products. Also, the cartographic tradition of systematic symbol design using the graphic variables is in good accordance with this principle.

The power of constraints

Physical constraints and semantic and cultural conventions provide the designer and the user with a useful reduction of the set of reasonable actions. However, a considerable problem occurs in applying this principle to the design of interactive applications: in the physical world, the world of atoms, impossible things are impossible; in the world of bits, physical constraints do not exist. In a simulated fly-through of a mountain area the operator can safely fly through the mountains, an option of course not available in the physical world. Physical affordances and constraints must therefore be replaced by artificial, simulated ones.

When all else fails, standardise

Visibility and spatial analogies are goals that are not always achievable. In that case the least the designer can do is to follow well-known standards. The functionality of Microsoft Windows is probably the closest one can get to a common de facto standard for interactive applications. To "cut and paste" using Ctrl-X and Ctrl-V in no way follows the principles recommended here, but every Windows user knows how it works, and this knowledge is applicable to every application running in the Windows environment.

When the software provides the interface

Special software and employees with specific knowledge are needed for developing and programming an aesthetically pleasing and user friendly interface. For small organisations or institutions like a municipal or regional administration that wish to distribute geographic data in the form of interactive screen maps on CD-ROM or the Internet, these specialised functions might not lie within the resources at their disposal. In these cases, an easy way to handle the map exterior design is to use the interface provided by the software used for producing or publishing the map. Internet Map Server from ESRI is one possible solution, but as mentioned earlier it is not exclusively advantageous. The following problems are present in all the Danish applications of IMS that the author of this paper has visited on the web: the map area is too small, and both the legend and the "how to use" instructions are too dominating. The map area is obviously limited for the sake of reducing download time, but in most applications the area is further reduced to give room for disproportionately large legends and instructions for use. In some of the applications, the resulting map area is only about 6 by 8 cm. Obviously, with the coarse resolution of the screen, it is hard to provide an attractive map image on less than 50 square centimetres. So in this case the software offers an easy way to distribute interactive maps, but in its current state this technology is not flexible enough to provide an acceptable map exterior.

Cartographer or programmer?

User interface design is a research field in its own right, and need not be reinvented by cartographers. When designing new cartographic applications for interactive use there is much knowledge to be gained from these research fields, as well as from the entertainment industry as regards ideas on how to build in self-instructing functionality. It could be discussed whether cartographers in the future should choose to concentrate only on the Map Interior design and leave the Exterior design to programmers, graphic designers and user interface experts. The attitude behind the Ph.D. project mentioned in this paper is that this is what has been done until now, with a severe loss of map quality as a result. To design a suitable Map Exterior the designer must understand the map as much more than a graphic, and he needs to understand what kind of functionality is desired and needed for effective use of a screen map. As the commitment to maps and cartography should be the driving force in the development and design of screen maps, educating cartographers in usability engineering is more likely to produce better interactive map applications than teaching interface designers about maps and cartography.

Concluding remarks

This paper has argued for the appropriateness of separating the design and development of screen maps into two phases, the Map Interior and the Map Exterior. The separation makes the process more manageable and makes it easier to detect whether a certain problem in the screen map arises from an inappropriate map design or from insufficient or badly designed functionality in the interface. Problems and solutions concerning the Map Interior design will - as always in map design - depend on the purpose of the map, the conditions under which it is to be used, and the qualifications of the intended users. Solutions and choices in the Map Exterior design have to take these conditions into consideration as well, and will in addition depend on the technology - what kind of hardware and software - and on the knowledge and skills of the designer. All these kinds of considerations are faced in the case study part of the Ph.D. project. The resulting screen map application, a local council plan, will be tested by a range of different user groups, using the official digital version of the plan as a reference.


It is hoped - and expected - that the screen maps from the case study will prove to be more easily perceivable and more satisfactory in use than those in the official version of the local council plan. Conclusions on these matters, as well as on the experiences of the design process, will form the final part of the Ph.D. project. Regardless of the results of the user test, it seems inevitable that to become better screen map designers, cartographers have to extract and acquire the knowledge and theories present in fields such as usability engineering and interface design; they should also try to gain inspiration from computer games and the television industry. No doubt cartographers of the future will realise that screen maps need not look like a modernised version of Mercator's Atlas.

References

Artimo, Kirsi (1994). Screen map design. Proceedings, Cartographic Summer Course 1994.
Bär, Hansruedi and Sieber, René (1997). Atlas of Switzerland - multimedia version. Concepts, functionality and interactive techniques. In 18th ICA/ACI International Cartographic Conference, Stockholm, Proceedings vol. 2, 1141-1148.
Bertin, Jacques (1983). Semiology of Graphics: Diagrams, Networks, Maps. University of Wisconsin Press, Madison.
Birkvig, Henrik (1996). Birkvigs skriftatlas - 15 gode skriftfamilier til design og produktion af grafisk kommunikation [Birkvig's type atlas - 15 good typeface families for the design and production of graphic communication]. Grafisk Litteratur, København.
Brande-Lavridsen, Hanne (1998). Vector on Raster - colour ortho images as settings in GIS presentations. Aalborg University, Department of Development and Planning.
Fiske, John (1990). Introduction to Communication Studies. Routledge, London.
Flensborg, Ingelise (1997). Æstetisk erkendelse i den grafiske brugerflade [Aesthetic cognition in the graphical user interface]. In Bo Fibiger (ed.), Design af multimedier [Design of multimedia]. Aalborg Universitetsforlag, Aalborg.
GISplan, IRDSS team (1997). IRDSS - Integrated Regional Development Support System, Deliverable D.09.2: Interregional Development Node (design) - ArcView Internet Map Server, specification, build and test. http://130.225.61.24/irdss/d09.2.htm
Keller, Peter C. and O'Connel, Ian J. (1997). Methodologies for evaluating user attitudes towards and interactions with innovative digital atlas products. In 18th ICA/ACI International Cartographic Conference, Stockholm, Proceedings vol. 3, 1242-1249.
Lindholm, Mikko and Sarjakoski, Tapani (1994). Designing a visualization user interface. In Alan M. MacEachren and D.R. Fraser Taylor (eds.), Visualization in Modern Cartography. Pergamon.
Loihmulahti, Anne (1995). Visualizing geographical data in a computer-based environment. Proceedings, ScanGIS 1995.
MacEachren, Alan M. (1995). How Maps Work. The Guilford Press, New York.
Nielsen, Jakob (1993). Usability Engineering. Academic Press, London.
Norman, Donald A. (1988). The Design of Everyday Things. Currency/Doubleday, New York.
Norman, Donald A. (1993). Things That Make Us Smart: Defending Human Attributes in the Age of the Machine. Addison-Wesley Publishing Company.
Peterson, Michael P. (1995). Interactive and Animated Cartography. Prentice Hall, Englewood Cliffs, New Jersey.
Rasmussen, Anne (1998). Metaforisk ræsonneren [Metaphorical reasoning]. Ph.D. thesis, Aalborg University, Faculty of Humanities.
Robinson, Arthur H. (1952). The Look of Maps. The University of Wisconsin Press, Madison.
Spiess, E. (1994). Some problems with the use of electronic atlases. 9. Konferenz der LIBER-Gruppe der Kartenbibliothekare, Zürich.


Session / Séance 28-A Area-Normalized Thematic Views

T. Alan Keahey Los Alamos National Laboratory [email protected]

Abstract

Thematic variables are commonly used to encode additional information, such as population density, within the spatial layout of a map. Such "themes" are typically encoded using colour maps. We will explore techniques for using this thematic information to directly define spatial transformations so that the areas of map regions are proportional to their thematic variables, thus making the view more consistent with the thematic encodings. Our method emphasizes interactivity as a primary mechanism for allowing the user to better realize the distribution of the thematic variable, rather than relying on static views of the map.

Introduction

A common problem in cartography occurs when a thematic variable is used to control the shading of regions of the map; this can very often lead to a situation where the areas used to represent the various thematic values are not consistent with the values themselves, possibly leading the viewer to misinterpret the information that the map designer is trying to convey [Tufte, 1983]. As a simple example of this, imagine two countries having identical populations, but one has a geographic area one tenth the size of the other. On a thematic map showing population, the larger country will make a more significant visual impression on the viewer than the smaller one, despite their identical population values, possibly leading the viewer to attach greater weight to the larger country. Many efforts at reducing or eliminating this problem by distorting the map to make the areas of regions proportional to their thematic values, while still maintaining connectivity between regions, have been described in the cartography literature; a common term for such a method of solution is the continuous cartogram. A different but related problem occurs in computer visualization, where zooming in on regions of the display causes the viewer to lose his or her awareness of context, so that it is not possible to see both the big picture and the fine details at the same time (bringing to mind the saying "he couldn't see the forest for the trees"). The concept of a fisheye view is often applied in such cases, so that the viewer can focus in on magnified regions of interest while still maintaining a sense of the overall context at reduced resolutions (thus allowing the viewer to see both the trees and the forest simultaneously). The simplest example of such a view is familiar to anyone who has looked through the fisheye lens of a camera or a security peephole installed in a door. This paper will explore the regions of intersection between these two methods - continuous cartograms and fisheye views - and show how the tools and methods obtained from each of them can be applied to the other. We will begin with a generalized description of fisheye views (nonlinear magnification), followed by a more expressive abstraction for fisheye views (nonlinear magnification fields). This new abstraction can be applied to many of the same problems that continuous cartograms are attempting to address. We will emphasize the importance of interactivity for the user, and show that our system is efficient enough to produce results at near-interactive rates. We finish with a discussion of related work, conclusions, and plans for further work.


Nonlinear Magnification

Many approaches have been described in the literature for stretching and distorting spaces to produce effective visualizations. Examples include fisheye views, stretching a rubber sheet, focus+context, and distortion-oriented displays. The term nonlinear magnification was introduced in [Keahey and Robertson, 1996] to describe the effects common to all of these approaches. The basic properties of nonlinear magnification are non-occluding, in-place magnification which preserves a view of the global context. A brief discussion of some of the characteristics of more traditional systems for generating nonlinear magnification will provide some context and motivation for the section that follows. More details about these types of systems are available from the Nonlinear Magnification Home Page at www.cs.indiana.edu/hyplan/tkeahey/research/nlm/. Traditionally, most nonlinear magnification systems begin with defining a center of magnification, often referred to as a focus. This center of magnification can be either a point, a line, or a region, depending on the specific system under discussion. The idea is that areas near this center of magnification will be enlarged, while the surrounding context areas will be compressed. The simplest examples of this type of magnification are illustrated in Figure 1, which shows both a radial fisheye-type transformation and an orthogonal transformation where the horizontal and vertical axes are treated independently. These examples show the effect of a transformation applied to a regular grid of points, and the user is able to interactively change the location of the center of magnification as desired. From this simple starting point, many more complex transformations have been developed. Some examples of the kinds of transformations that are possible include: using regions of distortion-free linear magnification within the fisheye view, placing boundaries around the regions of magnification, and combining multiple centers of magnification in various ways. Examples of these are shown in Figure 2.

Figure 1. Radial and Orthogonal Transformations

Figure 2. Examples of More Complex Transformations
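As an illustration of the foci-based approach, the following sketch applies a simple radial fisheye transformation to a regular grid of points. This is our own illustrative code, not code from the paper; the magnification profile used is one common monotone choice, not necessarily the one used in the systems cited above.

import math

def radial_fisheye(x, y, cx, cy, radius, strength=3.0):
    """Move a point towards/away from the focus (cx, cy) so that points near
    the focus are spread apart (magnified) and the surrounding context is
    compressed towards the lens boundary."""
    dx, dy = x - cx, y - cy
    r = math.hypot(dx, dy)
    if r == 0 or r >= radius:
        return x, y                       # outside the lens: unchanged
    t = r / radius                        # normalised distance in [0, 1)
    # A common fisheye profile: monotone, and fixes t = 0 and t = 1.
    t_new = (strength + 1.0) * t / (strength * t + 1.0)
    scale = t_new / t
    return cx + dx * scale, cy + dy * scale

# Transform a regular grid of points around a focus at the grid centre.
grid = [(i / 10.0, j / 10.0) for i in range(11) for j in range(11)]
warped = [radial_fisheye(x, y, 0.5, 0.5, 0.5) for (x, y) in grid]

Because the profile fixes the value at the lens boundary, the transformation joins continuously with the unmagnified surroundings.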


The examples shown so far have just involved transformations on a regular grid of points; however, there are many examples of applying nonlinear magnification to more irregular data types such as trees [Munzner, 1997], graphs, arbitrary polygons and GIS data [Churcher et al., 1997]. When we use a regular grid of points, we can use texture mapping [Blinn and Newell, 1976] to map any image (texture) onto that grid, so that as transformations are applied to the grid they are also applied to the mapped image. This makes transformations on a regular grid a particularly useful technique, as we can now easily transform arbitrary images at interactive rates using the texture mapping hardware acceleration that is now increasingly common even on low-end PC machines. An example of applying transformations to a texture map is seen in Figure 3, where we magnify a region on a map of the Washington D.C. area.

Figure 3. Using Texture Mapping to Transform an Image

Most of the more recent research efforts into nonlinear magnification have stressed the need for providing a high degree of interactivity for the user. User studies have emphasized that smooth operation and feedback of the magnification mechanism is necessary if the user is not to become disoriented [Schaffer et al., 1993]. These requirements typically can be met if the system is able to transform and display the information at a rate of 10 frames per second or greater. Some of the nonlinear magnification research efforts that describe an emphasis on computational efficiency are able to sustain interactive rates while transforming tens of thousands of point coordinates for each frame [Keahey and Robertson, 1996; Munzner, 1997], thus illustrating the rapid transformation rates that are possible with these "foci-based" systems. We would like to maintain this high degree of interactivity for our thematic map transformations, as will be discussed in a later section. There is some limitation on the degree of transformational expressiveness that is possible when using the foci-based systems, however. Despite the many ways in which these centers of magnification can be constructed and combined, we are still conceptually limited to the idea of having discrete centers of magnification. Developing more complex transformations with such systems involves the addition of more foci, and it becomes a non-trivial matter to analytically predict the overall effect of these multiple interdependent foci [Keahey and Robertson, 1997]. An early effort at applying this type of foci-based magnification to the problem of continuous cartograms can be found in the polyfocal projection work of [Kadmon and Shlomi, 1978]. We will see in the next section how removing the restriction of discrete foci can allow for a much more expressive class of transformations, which is better suited to the complex transformations required of continuous cartograms.

Nonlinear Magnification Fields

Leung and Apperley [1994] first established the mathematical relationship between 1D magnification and transformation functions for nonlinear magnification systems, defining magnification as the derivative of the transformation function. This idea was extended to higher dimensions using an area-based derivative, resulting in the abstraction of a scalar field of magnification values called a nonlinear magnification field [Keahey and Robertson, 1997; Keahey, 1997]. As a result of this work, it is possible to compute the implicit magnification field of a given nonlinear transformation, which gives the magnification values inherent in the transformation. An example is shown in Figure 4, where a nonlinear transformation is shown beside its implicit magnification field.
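One plausible way to make these definitions concrete (the notation here is ours, not necessarily that of the cited papers) is the following. In 1D the magnification is the derivative of the transformation, and in 2D the area-based derivative is the absolute value of the Jacobian determinant of the transformation $T(x,y) = (T_u(x,y),\, T_v(x,y))$:

\[
m(x) = \frac{dT}{dx},
\qquad
m(x,y) = \bigl|\det J_T(x,y)\bigr|
       = \left|\frac{\partial T_u}{\partial x}\,\frac{\partial T_v}{\partial y}
             - \frac{\partial T_u}{\partial y}\,\frac{\partial T_v}{\partial x}\right|.
\]

A value of $m$ greater than 1 at a point means that a small neighbourhood of that point is enlarged by the transformation; a value less than 1 means it is compressed.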


Figure 4. Transformation and Implicit Magnification Field

In addition, an iterative method is described in [Keahey and Robertson, 1997] that computes suitable spatial transformations based on a specified scalar field of magnification values. The scalar magnification field is particularly amenable to user and program manipulation, and provides a much more expressive class of transformations than is possible with traditional foci-based approaches to nonlinear magnification such as [Kadmon and Shlomi, 1978; Keahey and Robertson, 1996]. Briefly, the iterative method begins with an untransformed "working" grid and a mesh of desired magnification values. For each iteration, the method computes the implicit magnification field of the working grid and then subtracts the obtained magnification values from the desired magnification values to obtain a mesh of error values. Then, for each node in the working grid, if the associated error is positive we push away the nearest neighbours, and if the error is negative we pull the neighbours closer to that node. Full details of the iterative method are available in [Keahey, 1997]. With these magnification fields it is now possible to directly define magnification values without having to deal with the side effects of multi-dimensional transformation functions, and without concern for the complex interactions of multiple foci. The additional expressiveness of nonlinear magnification fields is crucial to the methods we present here; it is now possible to create data-driven magnifications [Keahey and Robertson, 1997], where properties of the data are used to directly define the magnification best suited for viewing that data.
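A heavily simplified sketch of this push/pull iteration is given below. It is our own illustrative code, not the implementation from [Keahey, 1997]: the implicit magnification is approximated from local cell areas, boundary nodes are held fixed, and no "don't care" handling or convergence test is included.

import numpy as np

def iterate_magnification(desired, steps=200, rate=0.05):
    """Sketch of the iterative method: adjust node positions of a working grid
    so its implicit magnification approaches the desired magnification mesh."""
    n = desired.shape[0]
    xs, ys = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n))
    h = 1.0 / (n - 1)                       # original grid spacing

    for _ in range(steps):
        for i in range(1, n - 1):
            for j in range(1, n - 1):
                # Implicit magnification at node (i, j): local area spanned by
                # its four neighbours relative to the original h*h cell area.
                area = ((xs[i, j + 1] - xs[i, j - 1]) *
                        (ys[i + 1, j] - ys[i - 1, j])) / 4.0
                error = desired[i, j] - area / (h * h)
                # Positive error: push the neighbours away; negative: pull closer.
                for di, dj in ((0, 1), (0, -1), (1, 0), (-1, 0)):
                    ni, nj = i + di, j + dj
                    if 0 < ni < n - 1 and 0 < nj < n - 1:   # keep the boundary fixed
                        xs[ni, nj] += rate * error * (xs[ni, nj] - xs[i, j])
                        ys[ni, nj] += rate * error * (ys[ni, nj] - ys[i, j])
    return xs, ys

# Example: ask for twofold magnification in the centre of a 21x21 grid.
wish = np.ones((21, 21)); wish[8:13, 8:13] = 2.0
grid_x, grid_y = iterate_magnification(wish)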

Thematic Magnification

There is a natural match between data-driven magnification and the colour encoding of thematic variables in maps. We can easily define routines which place a regular grid over a raster image of RGB values, and use the sampled RGB values to derive suitable magnification levels at each point in the grid, producing a magnification mesh. Given that magnification mesh, we can then compute a suitable transformation having the desired cartogram-like properties. The accuracy of this method depends on two major factors: the resolution of the map image (the degree to which its pixels represent an accurate sampling of the original data), and the resolution of the magnification mesh (the degree to which it accurately samples the pixels in the map image). In each case, sampling theory such as the Nyquist theorem gives guidelines as to what resolution would be required to accurately sample a given minimal signal frequency. Since our end goal here is only to achieve a visual approximation to the correct results, it is usually appropriate to relax the sampling requirements so that they are tuned more to the average-sized frequency components than to the smallest ones. For some extreme cases, however, the sampling frequency requirements can be somewhat stringent, and may require the use of established multi-resolution methods to deal with them effectively. Once the magnification mesh has been computed, a number of preprocessing operations can be performed on its values before the transformation is computed. Smoothing of the values with a low-pass filter is sometimes useful for cases with extreme variation from node to node (high-frequency spikes). Nonlinear scaling of the values can be used to emphasize particular ranges of magnification values. Other possibilities include locking individual nodes in the mesh, and constraining them to lie along certain vectors. During the iterative process, we can also weight the method to emphasize correction of undersized regions, oversized regions, or those regions which are maximally different from their desired size. Details of these operations are provided in [Keahey, 1997]. Complex effects can also be achieved by encoding different information in each RGB channel; the examples in this paper use the R channel to define the magnification values, and the G channel to specify logical "don't care" values for those areas of the map where the R values are not well defined (e.g. in the bodies of water surrounding geographic regions). This new ability to define "don't cares" is particularly advantageous in terms of computational efficiency. Whereas previous cartogram systems typically assigned some constant (average) value to these regions, in our iterative method we can simply ignore them and allow neighbouring regions to push and pull them without constraint (other than preserving mesh topology). Conversely, the "don't care" regions will not exert any influence on their neighbours. This can greatly reduce the amount of computation required for convergence.
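The mesh construction described above might be sketched as follows. This is illustrative only: the file name, the mapping of the R channel onto a magnification range, and the G-channel threshold are our assumptions, not the paper's actual encoding.

import numpy as np
from PIL import Image

def magnification_mesh(image_path, mesh_size=64):
    """Sample an RGB map image on a regular mesh: R channel -> desired
    magnification, G channel -> "don't care" mask (e.g. surrounding water)."""
    img = np.asarray(Image.open(image_path).convert("RGB"), dtype=float)
    rows = np.linspace(0, img.shape[0] - 1, mesh_size).astype(int)
    cols = np.linspace(0, img.shape[1] - 1, mesh_size).astype(int)
    sample = img[np.ix_(rows, cols)]                 # mesh_size x mesh_size x 3
    desired = 0.5 + sample[:, :, 0] / 255.0 * 3.5    # map R in [0, 255] to [0.5, 4.0]
    dont_care = sample[:, :, 1] > 128                # G channel marks "don't care" nodes
    return desired, dont_care

desired, dont_care = magnification_mesh("electoral_votes_theme.png")

The low-pass smoothing and nonlinear scaling mentioned in the text would be applied to the `desired` array before it is passed to the iterative transformation step.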

Interaction Issues with Thematic Maps

Although there have been a number of methods for continuous cartograms which do a reasonably good job of producing a transformation of the map having the desired areas [Gusein-Zade and Tikunov, 1993], the problem of determining how well such cartograms convey the desired information to the viewer remains difficult. Recent work has effectively addressed part of this problem by including region recognizability as part of the iterative transformation process [House and Kocmoud, 1998]. Some fundamental issues concerning the effectiveness of cartograms are worthy of discussion at this point, however. A key difficulty with the conceptual notion of cartograms is that the human visual system is not very finely tuned for detecting slight differences in area between regions. Two examples of this are shown in Figure 5. Looking at the two images on the left, and trying to visually determine which of them has the larger area, you will find that without the use of measuring devices it is very difficult to say with certainty that the two objects have the same area. This example illustrates that the shape of a region can influence our estimate of its size; more complex shapes such as those found in cartograms will present even greater difficulties for the viewer. Next, examine the two images on the right in Figure 5, and try to visually determine whether the central circle is larger in the left or the right image. For most viewers, the circle on the left will initially appear to be larger, despite the fact that they are the same size. This example is the well-known Titchener illusion, which illustrates how, even when the size and shape of an object remain constant, the surrounding context for that object can influence our perception of its size. Because of the above difficulties, our method employs a shift of emphasis away from demanding absolutely exact area representations, and towards creating perceptually recognizable transformations that provide the user with a better understanding of the relative distribution of the thematic variable. A key ingredient of our method is the use of animation and interactivity. Animation provides a means to smoothly interpolate between the regular and transformed views of the space, allowing the viewer to realize the relationship between the normal, familiar view and the view which more accurately reflects the thematic content. Placing that animation under user control allows the viewer to rewind, play back, and pause, thus allowing the viewer to manipulate the independent thematic variable and obtain a “feel” for its distribution. These methods provide a direct binding between the complex independent thematic variable and the simple variable of time. Thus the time variable provides a stable frame of reference for the user as he or she manipulates the complex variable to understand its properties.

Figure 5. The Difficulty of Visually Comparing the Areas of Regions


When we observe a static cartogram only as the view at the end of a transformation process, we do not inherently have a description of the nature of the transformation that has taken place, except for those cases where the untransformed view of the data is so familiar to us that it serves as an internal source for comparison. For the less familiar cases, we can place a view of the untransformed map alongside the cartogram; this will provide some basis for comparing and realizing the distribution of the thematic variable, however subtle differences may still be lost or erroneously inferred as the viewer shifts her gaze between images. Through the use of animation and interactivity, however, the user is able to smoothly compare the normal and transformed views without having to shift gaze, thus allowing much more subtle effects of the transformation to be recognized. In addition, the user can now easily track the progress of the transformation on a region that is familiar in the untransformed view, even if the final transformation produces a significantly distorted representation of that region. In support of our requirements for animation and interactivity, the method that we use for computing the transformations performs at near-interactive rates. While other systems for continuous cartogram production can take many hours to compute [House and Kocmoud, 1998], ours can typically be computed in about the same amount of time it takes to read the map image from disk (from a fraction of a second to a few seconds, depending on the specific task). Animation between normal and transformed views can be achieved either through simple interpolation between the original and transformed regular grids, or through viewing time steps from the iterative process as it converges.
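The simple interpolation mentioned above can be written directly. The sketch below is our illustration, not the paper's code: it blends the regular and the fully transformed grid positions under user control of the time parameter t; the rendering call is a placeholder.

import numpy as np

def interpolated_grid(regular_xy, transformed_xy, t):
    """Linear blend between the regular grid (t = 0) and the cartogram-like
    transformed grid (t = 1); intermediate t values give the animation frames."""
    t = float(np.clip(t, 0.0, 1.0))
    return (1.0 - t) * regular_xy + t * transformed_xy

# e.g. render 25 frames of the animation; regular_xy and transformed_xy are
# (n, n, 2) node-position arrays such as those produced by the iterative method,
# and draw_textured_grid stands in for the application's rendering routine.
# for t in np.linspace(0.0, 1.0, 25):
#     draw_textured_grid(interpolated_grid(regular_xy, transformed_xy, t))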

Example I: Presidential Election Results
The presidential election in the United States is decided by the number of electoral votes each candidate receives. Each state has a given number of electoral votes (based on the state population), and all of the electoral votes for a single state must be given entirely to one of the candidates. It is common practice on election day for news organizations to show a map of the USA, shading a state in blue (or dark gray) if it voted for the Democratic candidate, and red (or light gray) if it voted for the Republican candidate. This gives rise to a classic problem in information visualization that occurs when the area used to visually represent each region is not consistent with the actual thematic variable of importance [Tufte, 1983]. Figure 6 shows a traditional view of the presidential election results from 1996. If this image were to accurately reflect the number of electoral votes each candidate received, we would expect the ratio of red (light) to blue (dark) pixels to be 0.42; what we actually get, however, is a ratio of 1.23, a global error of 193%, which could lead the viewer to mistakenly infer that the Republican candidate (Dole) won the election instead of the Democratic candidate (Clinton). The error occurs because large and sparsely populated states such as Alaska and Montana visually dominate the image even though they have very few electoral votes, while states with a large number of electoral votes such as New York, Texas and California are not represented with an area-emphasis proportional to their electoral contributions. Note that the error measure used here is a global one based on the sum total of pixels in the image. This is in contrast to the per-region error measures often used in other cartogram systems such as [Dougenik et al., 1985; Gusein-Zade and Tikunov, 1993; House and Kocmoud, 1998], where the average of the per-region errors is used rather than their sum.

Figure 6. Traditional View of Election Results
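The global pixel-ratio error measure can be reproduced with a few lines of image arithmetic. The sketch below is illustrative only: the dominance test for classifying pixels as red or blue and the tiny synthetic test image are assumptions, but the error figures for the ratios reported above follow directly.

```python
import numpy as np

def red_blue_ratio(image):
    """Ratio of red-dominant to blue-dominant pixels in an (H, W, 3) uint8 image."""
    r = image[..., 0].astype(int)
    g = image[..., 1].astype(int)
    b = image[..., 2].astype(int)
    red = np.count_nonzero((r > b) & (r > g))
    blue = np.count_nonzero((b > r) & (b > g))
    return red / blue

def global_error(observed_ratio, expected_ratio=0.42):
    """Relative error of an observed pixel ratio against the thematically correct 0.42."""
    return abs(observed_ratio - expected_ratio) / expected_ratio

# Tiny synthetic "map": three red pixels and two blue pixels -> ratio 1.5.
image = np.array([[[200, 0, 0], [180, 20, 10], [255, 0, 0],
                   [0, 0, 200], [10, 0, 180]]], dtype=np.uint8)
print(red_blue_ratio(image))          # -> 1.5

# The ratios reported in the text give the corresponding global errors:
print(f"{global_error(1.23):.0%}")    # traditional view -> 193%
print(f"{global_error(0.69):.0%}")    # normalized view  -> 64%
```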


To reduce this error we can construct a map of the USA where shading is used to represent the number of electoral votes in each state, as shown in the left image of Figure 7. The right image of that figure shows how the shading is used to define a magnification field for the theme.

Figure 7. Shading by Electoral Votes and Construction of Magnification Field
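One way to read Figure 7 is as a mapping from shading intensity to a per-pixel magnification value. The following sketch shows such a sampling step under the assumption that darker shading encodes more electoral votes and should therefore be magnified; the output range [0.5, 2.0] is an arbitrary illustrative choice, not a value taken from the paper.

```python
import numpy as np

def magnification_field(shaded, min_mag=0.5, max_mag=2.0):
    """Map a grayscale thematic image (0 = dark = many votes, 255 = light = few)
    to a magnification field in [min_mag, max_mag]; dark regions magnify most."""
    votes = 1.0 - shaded / 255.0            # dark pixels -> high thematic value
    return min_mag + (max_mag - min_mag) * votes

# Hypothetical 2x3 shaded image: 0 is darkest, 255 is lightest.
shaded = np.array([[0, 128, 255],
                   [64, 192, 255]], dtype=float)
print(magnification_field(shaded))
```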

We can then compute a magnification based on that thematic content to transform the normal view of the election into one that more accurately represents the actual proportion of electoral votes received by each candidate. The result is shown in Figure 8, where the ratio of red (light) to blue (dark) pixels is 0.69. Although this still represents a global error of 64%, it is less than one third of the global error found in the original image, and the ratio of pixels now correctly reflects the fact that Clinton won the election. Since our transformation applies only to images and there is no representation of discrete state boundaries in our data, comparisons of this error with the averages of per-state errors described in [Dougenik et al., 1985; Gusein-Zade and Tikunov, 1993; House and Kocmoud, 1998] are difficult. One possibility would be to divide the total error by the number of states (50) to obtain an average error of 1.28% per state, which would compare quite favourably with the previously mentioned systems. In reality, however, there is probably some cancellation of global error between states, so we can only say with certainty that the per-state error resulting from our method is approximately the same as with the other methods. Visual comparison of these results with the results shown in the above-mentioned references provides additional evidence for this statement.

Figure 8. Normalized Views of Election Results and Electoral Votes

For performance comparisons, we ran this computation on a 200 MHz SGI workstation using a 33x33 (1089 nodes) mesh. With this setup it took 0.404 seconds of wall clock time per 100 iterations. The visual trend of the convergence is readily apparent after only 50 iterations, and by 300 iterations (1.2 seconds) the algorithm has converged to within 10% of its final value, as measured by pixel ratio. By 800 iterations (3.2 seconds), the algorithm has fully converged to the above result.
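These timings suggest a simple stopping rule based on the change in the pixel ratio between checks. The loop below is only a schematic of such a criterion with a toy stand-in for the iteration step; it is not the algorithm or the convergence test actually used in the paper.

```python
def run_until_converged(step, measure_ratio, tol=0.10, check_every=50, max_iters=2000):
    """Iterate `step` until the measured pixel ratio changes by less than
    `tol` (relative) between successive checks, or `max_iters` is reached."""
    previous = measure_ratio()
    for i in range(1, max_iters + 1):
        step()
        if i % check_every == 0:
            current = measure_ratio()
            if abs(current - previous) <= tol * abs(previous):
                return i, current
            previous = current
    return max_iters, measure_ratio()

# Toy stand-ins: the "algorithm" just decays the ratio toward 0.69 each step.
state = {"ratio": 1.23}
iters, ratio = run_until_converged(
    step=lambda: state.update(ratio=0.69 + (state["ratio"] - 0.69) * 0.99),
    measure_ratio=lambda: state["ratio"])
print(iters, round(ratio, 3))
```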


Example II: Interstate Speed Limits
The interstate highway system in the United States covers every state in the union, and each state is able to define the maximum speed limit on those portions of the interstates that pass through it. There is considerable variation in the speed limits chosen, from 55 miles per hour in states such as Connecticut to effectively no speed limit in Montana. All speed limits were obtained from a USENET FAQ; the numerical speed limit for Montana was arbitrarily set to a "reasonable and proper" 140 miles per hour. For a driver planning to travel across the USA, the time required for a particular route will be a function of both the geographic distance involved and the speed limits enforced en route. By encoding the speed limit information for each state as a thematic variable in a map of the USA, we can then sample that map to obtain a suitable magnification field. Here we define magnification as the inverse of the speed limit, so that states with higher speed limits will shrink to reflect the increased rate of travel. Figure 9 shows the thematic encoding of speed limits by state, along with a transformed version of the map which reflects the thematic magnification. For some of the states the transformation is quite subtle, and may not be immediately apparent when comparing static images. By animating through the changes, however, the user can quite readily track these subtle changes.

Figure 9. State Speed Limits and Normalized Driving View
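Because magnification is defined as the inverse of the speed limit, per-state magnification values can be derived directly from the limits. The sketch below illustrates this; the short state list and the normalization to a mean magnification of 1.0 are illustrative assumptions rather than details taken from the paper.

```python
# Speed limits in miles per hour; 140 mph is the paper's numerical stand-in
# for Montana's "reasonable and proper" rule. Only a few states are listed.
speed_limits = {"Connecticut": 55, "Texas": 70, "Montana": 140}

# Magnification is the inverse of the speed limit: faster states shrink.
raw = {state: 1.0 / limit for state, limit in speed_limits.items()}

# Normalize so the average magnification is 1.0, roughly preserving total area
# (an illustrative choice, not necessarily the paper's normalization).
mean = sum(raw.values()) / len(raw)
magnification = {state: value / mean for state, value in raw.items()}

for state, m in magnification.items():
    print(f"{state}: {m:.2f}")
```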

Although this example illustrates the ability to use metrics other than quantity and density as thematic variables for this process, it also raises some questions about how the resulting views should be interpreted. When we look at the state of North Dakota (the state directly to the right of the darkest state, Montana) in the normal view, we see that it is approximately rectangular in shape, so that any vertical line across the state will be approximately the same length, no matter where it is located horizontally within the state. When we look at the transformed version of the state, however, we see that the left side of the state is more vertically compressed than the right side. This may give the viewer the false impression that it would be faster to drive north across the western part of the state than across the eastern part. The difficulty here occurs for two reasons. First, the distribution of the thematic variable is discontinuous, while the transformation method imposes continuity on the resulting transformation. Thus the high value for Montana causes a localized influence on the surrounding regions where it interfaces with its neighbouring states. This would seem to be a problem inherent to all systems for producing continuous cartograms (and hence an argument in favour of discontinuous cartograms, which do not enforce adjacency requirements), although [Gusein-Zade and Tikunov, 1993] claim to have had some success dealing with this type of discontinuity. The second problem is that we are overlaying a linear metric (driving distance) on a map that has been transformed using an area metric; the transformation method therefore makes no effort to ensure that linear distances between regions are preserved, although on average their values will be changed suitably by the area transformation. Here again we emphasize that the intent of our method is to provide information about the relative impact of the thematic variable rather than its absolute value. Through the use of interaction and animation the user will be able to see the states grow and shrink according to their values, and thus realize the general pattern of distribution for the thematic variable, even when specific discontinuities or issues of interpretation arise.


Related Work

Comparing the performance of different systems is difficult, as it has not always been common practice to report actual timing results for the convergence of the systems. Although empirical measures of performance are not always the ultimate means of determining algorithm efficiency, time to convergence of iterative methods can be difficult to predict, and simple counts of the number of iterations are often a poor indicator of the overall computation time required. We have followed the lead of [House and Kocmoud, 1998] in reporting actual time to achieve convergence, and we welcome comparison with other published figures in the future. The method for continuous cartograms in [Tobler, 1973] is somewhat similar in nature to our iterative method, in that both systems work on regular meshes rather than on the polygonal boundaries of the regions (although we note that our work was derived independently through a different set of motivating factors). Here we use texture mapping to automatically interpolate an image of the map onto the transformed space, whereas this required a distinct computational step in Tobler's system. The system described in [Dougenik et al., 1985] manages to reduce the number of iterations required for convergence by using a pre-processing step to ensure that certain constraints will not be violated. This pre-processing step shifts some computational costs away from the iterative method itself, but also introduces additional vertices to the geometry as the polygonal representation is refined. Their system was able to achieve fairly good accuracy for the US population transformation, with only 1.7% average error per state. In [Tobler, 1986], we see a method for providing a crude but computationally inexpensive initial estimate of the transformation. The transformation can then be refined via an iterative method such as [Tobler, 1973], with the idea that the initial estimate will result in less work for the iterative method, and thus faster convergence. Similar to this, although more complex in nature, is the idea of using a 2D histogram equalization technique as described in [Keahey, 1997] to provide a much more accurate initial estimate, which can then be refined using the iterative method found in that paper. The work described in [Gusein-Zade and Tikunov, 1993] is similar in motivation to [Dougenik et al., 1985] in that it makes an effort to reduce the number of iterations needed for convergence of their system. However, as the authors acknowledge, they reduce the number of iterations required at the expense of greatly increasing the amount of time required for each iteration. Thus no claim has been made by them for the system running significantly faster than its predecessors, although they do report a fairly accurate average error per state of 1% for the US population density map. Careful attention to maintaining the recognizability of regions is given in the work of [House and Kocmoud, 1998]. The authors describe an iterative method which carefully balances the constraints of area, topology and the rotation and distortion of individual regions. Their method makes use of a hierarchical polygonal representation and a simulated annealing-type process to cycle through the various constraint solvers on each iteration. They achieve fairly accurate results (1.5% average error per state) on the US population problem, for which they report a time of approximately 6 hours on a 300 MHz workstation to compute the final result.

Conclusions

Area-normalized thematic views provide a practicable method for reducing one of the most egregious "visual lies" encountered in visualization, particularly in the use of thematic maps. We have described a system which offers approximately the same accuracy as is found in other systems for continuous cartograms, while also providing convergence at near-interactive frame rates. To the best of our knowledge, no system that converges at a rate within even a few orders of magnitude of ours has been described in the literature. By illustrating some of the shortcomings of static representations of cartograms, we emphasized the importance of interactivity in helping the user to better realize the distribution of the thematic variable.


Further Work

As mentioned in a previous section, standard multi-resolution methods would greatly increase the value of this method by reducing the number of mesh nodes required to represent large uniform regions, while still allowing localized increases in resolution level to capture high-frequency details. Currently we provide for this with a basic multi-grid method which, based on preliminary results, can increase performance significantly; more advanced techniques such as wavelets can take this idea much further. Such multi-resolution methods should ultimately be driven by an automated signal analysis which describes the frequency components of the map image being transformed. Very little has been done in the way of user studies on the effectiveness of continuous cartograms from the user's perspective; a detailed study of both the static and interactive methods of presentation would likely prove beneficial for identifying shortcomings and determining whether or not further work in this area is fully warranted.

References

Blinn, J.F. and Newell, M.E. (1976). Texture and reflection in computer-generated images. Communications of the ACM, 19(10).
Churcher, N., Prachuabmoh, P. and Churcher, C. (1997). Visualization techniques for collaborative GIS browsers. In International Conference on GeoComputation.
Dougenik, J.A., Chrisman, N.R. and Niemeyer, D.R. (1985). An algorithm to construct continuous area cartograms. Professional Geographer, 37(1).
Gusein-Zade, S.M. and Tikunov, V.S. (1993). A new technique for constructing continuous cartograms. Cartography and Geographic Information Systems, 20(3).
House, D.H. and Kocmoud, C.J. (1998). Continuous cartogram construction. IEEE Visualization.
Kadmon, N. and Shlomi, E. (1978). A polyfocal projection for statistical surfaces. Cartographic Journal, 15(1):36-41.
Keahey, T.A. and Robertson, E.L. (1996). Techniques for non-linear magnification transformations. IEEE Symposium on Information Visualization.
Keahey, T.A. and Robertson, E.L. (1997). Nonlinear magnification fields. IEEE Symposium on Information Visualization.
Keahey, T.A. (1997). Nonlinear Magnification. PhD thesis, Department of Computer Science, Indiana University.
Leung, Y.K. and Apperley, M.D. (1994). A review and taxonomy of distortion-oriented presentation techniques. ACM Transactions on Computer-Human Interaction, 1(2):126-160.
Munzner, T. (1997). H3: laying out large directed graphs in 3D hyperbolic space. IEEE Symposium on Information Visualization.
Schaffer, D., Zuo, Z., Bartram, L., Dill, J., Dubs, S., Greenberg, S. and Roseman, M. (1993). Comparing fisheye and full-zoom techniques for navigation of hierarchically clustered networks. Graphics Interface.
Tobler, W.R. (1973). A continuous transformation useful for districting. Annals, New York Academy of Science, 219.
Tobler, W.R. (1986). Pseudo-cartograms. The American Cartographer, 13(1).
Tufte, E.R. (1983). The Visual Display of Quantitative Information. Graphics Press.


Session / Séance 01-B1 Color Perception Research on Electronic Maps

Chen Yufen Department of Cartography, Zhengzhou Institute of Surveying and Mapping Zhengzhou 450052, Henan, PR China

Abstract
This paper first analyses the features and reading environment of the electronic map and the role that color plays in electronic map design, and then introduces a color perception experiment on electronic maps. On the basis of analyzing and discussing the experimental results, the study provides cartographers with empirical guidelines on color matching in electronic map design.

Introduction

The appearance of electronic maps presents a new map reading and map perception environment that differs from that provided by traditional printed paper maps. Map perception theory was formed through visual perception research on traditional paper maps, and it is one of the most practical cartographic theories for guiding map design. But traditional map perception theory and paper map design principles are not wholly suitable for electronic maps. In the fields of information input, graphics display and map use, there are great differences between electronic maps and printed paper maps; the electronic map therefore has its own design characteristics. To make and use electronic maps effectively, it is necessary to study visual perception on electronic maps, and traditional map perception theory must be developed under the new conditions of technology. Computer color graphics displays offer such an abundance of colors that there is ample color selection space for displaying electronic maps, while among the graphic variables of the electronic map, the shape, size and pattern variables are restricted by the display device, so their application is limited. In addition, the human eye is highly sensitive to color. Therefore, in order to increase the expressive power and communication efficiency of cartographic information, we should make full use of the advantages of color in electronic map design and put the color variable to rational use. Up to now there is no relatively complete set of color design principles for electronic maps. As cartographers, we should take the responsibility of exploring and studying the color design rules of electronic maps, making electronic maps which accord best with human visual perception, and helping users who have no cartographic background to make and use electronic maps. Using a psychological test method, the author developed test software for electronic map visual perception in the Visual Basic language. This software can produce experiment maps, run tests on the computer, record experiment data, analyze results, and display them graphically. The experiment maps include raster maps scanned from paper maps and vector maps made with MapInfo.

The Features and Reading Environment of the Electronic Map
The Features
Compared with the paper map, the electronic map has the following features: (1) spatial information is displayed in real time and dynamically; (2) data storage and display are separated; (3) it offers a high degree of interaction and functions for analysis and query.


The Reading Environment
As there are differences in the information display medium and in color presentation, the reading environment of the electronic map differs considerably from that of the printed map. Reading an electronic map is affected by the resolution and format of the screen, and by the color representation of the screen. The resolution of computer graphic displays is at present limited, as is the format. Typical resolutions of normal microcomputer displays range from 640x480 to 4096x4096 pixels with a dot pitch of between 0.3 and 0.4 mm, and the display area of the most common color screen (14'') is approximately 260x180 mm. This second factor is particularly significant for topographic maps, as they are generally of large format. The visualization range of the electronic map is restricted by the format of the screen, and information is mostly displayed by segments and by layers. Map users read the electronic map through tools such as roam, zoom in and zoom out. In conventional printed map production, we are most familiar with the use of the subtractive primaries cyan, magenta and yellow. In contrast, computer monitors create color images by mixing light of the three additive primaries red, green and blue. This difference in color representation gives rise to visual perception effects on the electronic map that differ from those on the paper map, because the human eye is highly sensitive to color. Visual perception is closely related to the map reading environment; changes in the reading environment inevitably lead to changes in visual perception effects. Analyzing the map reading environment is therefore helpful for visual perception research on electronic maps, and ultimately for guiding electronic map design.
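The contrast between subtractive printing primaries and additive screen primaries can be illustrated with the idealized complement relation R = 1 - C, G = 1 - M, B = 1 - Y. The sketch below uses only this naive relation and ignores black ink, ink spectra and monitor calibration, so it is a first approximation rather than a faithful print-to-screen conversion.

```python
def cmy_to_rgb(c, m, y):
    """Naive conversion from subtractive CMY (0..1) to additive RGB (0..1).

    Real print-to-screen conversion also involves black ink (K), ink spectra
    and monitor calibration; this is only the idealized complement relation.
    """
    return 1.0 - c, 1.0 - m, 1.0 - y

# Pure cyan ink corresponds to full green + blue light, i.e. no red:
print(cmy_to_rgb(1.0, 0.0, 0.0))   # -> (0.0, 1.0, 1.0)
```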

The Role of Color in the Electronic Map Design

Color is an important part of the modern cartographic language. A suitable application of color on a map may enhance the expressive power of the map and effectively increase the communication of cartographic information; the same is true of color on the electronic map. As the electronic map is limited by the monitor display, visual variables such as shape, size and pattern can be used only within limits in designing symbols for the electronic map. The application of color in electronic map design is therefore becoming more important. It is impossible for the paper map to use many colors, because of the cost and the technology of map making. Furthermore, color is fixed on the paper map and cannot be erased or modified. By contrast, a screen color is very easy to set and change; realizing it needs nothing more than a single statement or function call. The ease of changing and adjusting color on the electronic map provides map users with great convenience in map use and immensely increases the representational power of the electronic map. The abundant and flexible color capabilities of the electronic map give it many features that differ from the paper map, and these should find expression in designing, making and using the electronic map. The visual perception of the electronic map is mainly influenced by the screen resolution, screen size and color space of the computer. The resolution and size of the screen depend on the hardware; what cartographers can do is to put color to rational use in electronic map design in order to enhance the visual perception effects.
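The point that a screen color can be changed with a single statement or function call can be demonstrated with any graphics toolkit. The snippet below uses Tkinter purely as an illustration; it is not the Visual Basic test software described in this paper.

```python
import tkinter as tk

root = tk.Tk()
canvas = tk.Canvas(root, width=200, height=150, background="#ffffe0")  # light yellow
canvas.pack()

# A stand-in "map symbol": one statement draws it, one statement recolors it.
symbol = canvas.create_oval(80, 55, 120, 95, fill="blue")
canvas.itemconfigure(symbol, fill="black")   # the whole color change is this one call

root.mainloop()
```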

Color Perception Experiment on Electronic Maps
There have been many experiments on color perception of the paper map, but few concerning the electronic map. In order to apply a psychological test method to color perception research on electronic maps, the author developed test software for electronic map visual perception in the Visual Basic language. This software can produce experiment maps, run tests on the computer, record experiment data in an answer database, analyze results on the computer and display them graphically. The experiment maps include raster maps scanned from paper maps and vector maps made with MapInfo. The author used the test software introduced above to conduct several visual perception experiments on electronic maps. Two of the experiments related to color perception. This paper introduces the color matching experiment on the electronic map.

Research Design
In map color design, cartographers should take into account not only the color of a single feature, but also the effect of color matching among all features on a map. This is particularly important for electronic map design, because the colors available are so abundant that it is difficult for cartographers, let alone non-cartographers, to choose and match colors. Since maps on computer monitors are probably produced more often in color than in black and white, there are many possible colors to choose as a map background. How do we choose and match colors to obtain a better visual perception effect? The following experiment was designed to determine the effects of color matching on the electronic map.

Subjects
The subjects in the study were 10 undergraduate students and 9 freshman students at the Department of Cartography of the Zhengzhou Institute of Surveying and Mapping. They were 9 males and 10 females, ranging in age from 18 to 22 years.

The Test Maps
The test maps in the experiment were generated with MapInfo and supported roam, zoom out and zoom in through MapBasic programming. All 18 test maps had the same base map, including settlements drawn by proportion (in orange) and not by proportion (in red), roads (in red or black), some point symbols (in orange or white), and lettering. The main difference among the 18 test maps was the color of the lettering and of the map background, which were set by MapBasic programming.

The Procedure
The subjects were tested individually. Instructions were presented on the monitor before the experiment so that the subject knew how to read and evaluate the test maps and how to answer the questions on the computer. The experiment was divided into Test One and Test Two. Four test maps in four display windows were presented side by side on the computer monitor at one time, and there were four such comparisons in each test. The subject could use the tools to examine any test map in any window in detail. The subject was asked to read the test maps carefully, to compare the overall visual perception effect of the four test maps, and then to enter the ranking from best to worst by pressing the number keys on the keyboard. The numbers 1, 2, 3 and 4 represented the best, the second, the third and the worst respectively. Eight records were stored in the answer database for every subject after completing the whole experiment. In Test One, the four maps in each comparison shared the same foreground color and differed in background color; Test Two was the reverse.

Results Analysis
Using the test software for electronic map visual perception, the experiment was carried out and the results were recorded in the software's answer database in real time. The results could then be extracted for analysis as needed.
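Tables 1 to 8 report, for each candidate color, the percentage of subjects who assigned it each rank. A small sketch of that tabulation is given below; the record format is hypothetical, since the actual layout of the answer database is not described here.

```python
from collections import Counter

# Hypothetical records: one ranking per subject for a single comparison,
# listing the four candidate colors from best (rank 1) to worst (rank 4).
rankings = [
    ["light yellow", "light gray", "light blue", "black"],
    ["light gray", "light yellow", "light blue", "black"],
    ["light yellow", "light blue", "light gray", "black"],
]

def rank_percentages(rankings):
    """Return {color: {rank: percent of subjects who gave that rank}}."""
    n = len(rankings)
    counts = {color: Counter() for color in rankings[0]}
    for ranking in rankings:
        for rank, color in enumerate(ranking, start=1):
            counts[color][rank] += 1
    return {color: {rank: 100.0 * counts[color][rank] / n for rank in range(1, 5)}
            for color in counts}

for color, percents in rank_percentages(rankings).items():
    print(color, {rank: round(p, 2) for rank, p in percents.items()})
```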


The independent variables in the study were the foreground color and the background color of the test maps. The dependent variables were the visual perception effects of the test maps. The results are shown in Tables 1 to 8.

Table 1. The fore-color was magenta; the colors on the table were the back-color.

Rank    5 (light yellow)    1 (light gray)    12 (light blue)    18 (black)
1       47.37               15.79             15.79              21.05
2       21.05               47.37             21.05              10.53
3       21.05               31.58             47.37               0.00
4       10.53                5.26             15.79              68.42

Table 2. The fore-color was black, the colors on the table were the back-color.

Rank    6 (light yellow)    2 (light gray)    10 (light blue)    15 (black)
1       31.58               21.05             36.84              10.53
2       47.37               36.84             10.53               5.26
3       10.53               36.84             47.37               5.26
4       10.53                5.26              5.26              78.95

Table 3. The fore-color was green; the colors on the table were the back-color.

Rank    7 (light yellow)    3 (light gray)    16 (light blue)    11 (black)
1       42.11               31.58             26.32               0.00
2       47.37               36.84              0.00              15.79
3       10.53               26.32             26.32              36.84
4        0.00                5.26             47.37              47.37

Table 4. The fore-color was blue; the colors on the table were the back-color.

Rank    8 (light yellow)    4 (light gray)    13 (light blue)    17 (black)
1       57.89               10.53             26.32               0.00
2       26.32               68.42              5.26               0.00
3       10.53               21.05             68.42               5.26
4        5.26                0.00              0.00              94.74

Tables 1 to 4 show the statistical results of Test One. The maps in Table 1 had the same magenta foreground, and the background colors were light yellow, light gray, light blue and black in turn. The result was light yellow (first), light gray (second), light blue (third) and black (fourth). The remaining results are presented in Tables 2, 3 and 4, all sorted from best to worst. Tables 5 to 8 show the statistical results of Test Two. The maps in Table 5 had the same light gray background, and the foreground colors were black, blue, green and magenta in turn. The order of visual effects showed that under a light gray background the best effect was a black foreground, with blue second. In Table 6 the foreground colors were blue, black, magenta and green, and the background was the same light yellow; the sorted result was blue (first), black (second), magenta (third) and green (fourth). From Tables 7 and 8 we conclude that a black foreground is best, and yellow second, under a light blue background, and that with a black background the result was white (first), green (second), yellow (third) and blue (fourth).

Table 5. The BackColor was light gray, the colors on the table were the ForeColor.

Rank    4 (blue)    3 (green)    1 (magenta)    2 (black)
1       36.84        5.26          0.00         57.89
2       47.37       26.32          5.26         21.05
3       15.79       15.79         52.63         15.79
4        0.00       52.63         42.11          5.26

Table 6. The BackColor was light yellow, the colors on the table were the ForeColor.

Rank    8 (blue)    6 (black)    7 (green)    5 (magenta)
1       52.63       47.37         0.00          0.00
2       36.84       26.32        15.79         21.05
3        5.26       15.79        42.11         36.84
4        5.26       10.53        42.11         42.11

Table 7. The BackColor was light blue, the colors on the table were the ForeColor.

Rank    10 (black)    12 (magenta)    9 (yellow)    11 (green)
1       78.95          5.26           10.53          5.26
2        5.26         36.84           52.63          5.26
3       10.53         47.37           26.32         15.79
4        5.26         10.53           10.53         73.68

Table 8. The BackColor was black, the colors on the table were the ForeColor.

Rank    16 (green)    15 (white)    14 (yellow)    17 (blue)
1       36.84         47.37         15.79           0.00
2       26.32         31.58         36.84           5.26
3       36.84         15.79         47.37           0.00
4        0.00          5.26          0.00          94.74

From the analysis above, we can conclude that light yellow as a background color has the best effect, light gray is second, and black is not good enough to be a background. Under a light yellow background, the order of visual effect is a foreground in blue first, then in black; under a light gray background, a foreground in black is first and blue second.

Discussion
The general findings of Test One were that the visual perception effect is better with foreground colors in blue or black under a light yellow background, with a foreground in black and secondarily in blue under a light gray background, and with a foreground in a light color (such as white) under a black background. As can be seen from the results of Test Two, a black foreground is better under all of the light yellow, light gray and light blue backgrounds.


People have long been familiar with the surface color of the paper when reading printed paper maps. The paper a map is printed on is generally white with a touch of light yellow. The visual perception effect is not good if the white paper surface is too pure, because it reflects too much light. As the white brightness produced on a screen is much greater than that on paper, white is not suitable as the background of the electronic map. This is consistent with the experimental result that light yellow as a background color has the best effect, light gray is second, and black is not good enough to be a background, although the changes of foreground color in the experiment were mainly changes of lettering color. In the practice of designing electronic maps, cartographers should choose a suitable symbol color as the foreground according to the cartographic purpose. Generally, light yellow and light gray can be considered as background colors.

Conclusion

So far, most of the electronic map design guidelines that cartographers currently depend upon were formulated for traditional maps printed on a white paper background. Computer color graphics displays offer such an abundance of colors that there is ample color selection space for displaying electronic maps, so the background color is no longer limited to white; various colors are possible. The problem cartographers are confronted with is how to use them in the service of map design. This study provides cartographers with some guidelines regarding color matching on the electronic map. Color perception is a complicated form of perception that depends on various factors. The color perception experiment on electronic maps in this paper was only a first attempt. There are many problems regarding the color design of electronic maps which need to be explored further through experiments, in order to present more complete and more practical color design principles.



Session / Séance 15-B Interface Design Issues For Interactive Animated Maps

Sven Fuhrmann GeoVISTA Center, Department of Geography, Penn State University, University Park, PA 16802, USA Institute for Geoinformatics, University of Münster, D-48149 Münster, Germany

Werner Kuhn Institute for Geoinformatics, University of Münster, D-48149 Münster, Germany E-mail: {fuhrman, kuhn}@ifgi.uni-muenster.de

Abstract
Information technology for spatial tasks is rapidly entering public spaces and private homes, providing a large variety of online maps to the general public. Standard techniques and functionalities of the WWW allow users to request and transmit the latest spatial information available. A single mouse click enables the user to surf from one mapping site to another, selecting and querying 'up to date' maps and related information. Furthermore, within the next few years interactive, animated maps will become widespread as sources of everyday spatial information. Currently many of these electronically published maps lack ease of use. Their levels of abstraction and complexity are rarely matched by appropriate information on how to use them. Cartographers need to define basic interactive map functions and metaphors for these interactive cartographic animations. In addition, they need to understand and test their designs, including the graphical user interface, in order to find out how the general public will later interact with those maps. This paper discusses some concepts that the authors consider fundamental to research on interactive animated map design: affordances and metaphors.

Introduction

Today an immense diversity of media plays a decisive role in human life. In order to complete mundane tasks we rely on information sources such as newspaper reports, magazine articles, books, radio broadcasts, television shows, neighbourhood chats, or traffic signs. When these tasks require spatial information, maps or cartographic representations are often the appropriate medium. For example, a common means of assisting wayfinding is to draw a sketch map. Maps provide information which enables us, for example, to navigate from one place to another, to estimate distances, or to choose a nice neighbourhood to live in. We have learned to search for and extract useful spatial information from paper maps. Most people own a sizeable collection of city maps, hiking and biking maps, road atlases, and the like in their homes, though many of the maps we use for navigation and orientation are more or less outdated. We keep and use them until we mistakenly drive on the wrong road to our destination or lose orientation. At that point we decide that it is time to buy a new map. During the last 10 years mapping technology has changed radically. Mapping is no longer restricted to traditional media, and information technology is applied to cartographic processes from data collection to map distribution. Countless maps and cartographic representations are published over the World Wide Web


(WWW). Standard techniques and functionalities of the WWW allow users to request and transmit the latest spatial information. A single mouse click enables the user to surf from one mapping site to another, selecting and querying 'up to date' maps and related information. Within the last two years the WWW has become a major infrastructure for map distribution and publishing. Internet-based mapping is no longer limited to a particular computing environment: with access to the internet, all the software a user needs is a web browser and some plug-ins. Taylor [1997] states that 'Cybercartography' will form the basis of maps in the coming years. In a very short period of time, it will become an everyday task to look up online maps and request up-to-date spatial information, e.g. on a local construction project or a nearby environmental hazard. In a few years traditional maps might play a much smaller role than today, because web-based two- or higher-dimensional maps will be instantly available for a variety of purposes and everyday uses. In fact, the first prototypes are being delivered to our homes right now, e.g. for shopping or travelling.

Interactive Animated Maps

Traditionally maps have been considered passive representations of geographic space, even after the introduction of digital mapping techniques in the sixties and seventies. This attitude resulted in the application and development of computer-based mapping software to generate the same type of static representations. While more flexible notions of maps have been around for some time [Moellering, 1980; Monmonier, 1992], computer cartography textbooks of the eighties and nineties still present mapping as the computer-supported design of static presentations. Most maps on the WWW are two-dimensional static images (JPEG or GIF files) presenting a single view of an environment. Those maps are easy to produce and distribute with PC-based mapping and GIS packages. In fact, most of them are nothing else but scanned paper maps. Internet Map Servers (IMS) additionally offer interactive functionalities that allow the user to request and select specific data or to change the scale or the area of interest [Plewe, 1997; Rinner, 1998(a)]. All currently available IMS feature only two-dimensional interactive maps. Almost all web-based maps exhibit a number of restrictions that become especially obvious when displaying spatio-temporal processes. The maps are [DiBiase et al., 1992; Dransch, 1995; Peterson, 1995]:
· static: changes of spatial conditions cannot be shown,
· isolating: only a pre-determined number of geo-objects and attributes can be queried and displayed,
· selecting: only a few representational choices are enabled (e.g. camera position, symbolisation, classification),
· passive: user interaction is minimal or not possible.
An approach to reducing or eliminating these restrictions of cartographic representations is offered by interactive animated maps. They can support such functions as navigation, orientation, identification, object manipulation, classification or metadata queries. In combination with 3D modelling languages, cartographers are able to create higher-dimensional animated maps and include the dimension of time and attribute data in them. One frequently used technique for creating such animated higher-dimensional maps is the Virtual Reality Modelling Language (VRML). VRML is a language for describing three-dimensional scenes and a standard for building three-dimensional objects on the WWW. These three-dimensional environments – in our case geospatial temporal and/or non-temporal information – can be viewed on any computer with a WWW browser that supports the VRML standard or uses a VRML interpreter. Most VRML browsers are program extensions working as plug-ins, ActiveX Controls or Java applets. Often the graphical user interfaces of these VRML browsers can be changed to a menu structure or a dashboard. In most cases short-cut keys can be used for navigation in three-dimensional space. In addition to three-dimensional geometric object descriptions, the current version of VRML (VRML97) is capable of handling animation and integrating media such as sounds,

movies and images. Textures can be pasted on surfaces for photorealistic results. In a limited way, the user is enabled to experience immersion in a spatial environment [Hase, 1997]. Each scene graph in VRML is built up from nodes. The viewpoint of such a scene graph is defined by the current position, viewing direction and viewing angle. For the presentation of a scene graph the entry viewpoint is usually predefined and can be moved continuously using the mouse cursor. The user 'flies' or 'walks' through the virtual environment, examines objects or initiates other events. One important feature of VRML is the possibility of animating a scene by changing
· the position, size and texture of objects,
· the position, brightness and colour of lights,
· the surrounding sounds.
These events can be sent and received by any object in the scene graph. Additional functions can be embedded through programs or scripts, e.g. in Java or JavaScript. All these features suggest using VRML in combination with Java or JavaScript as one technology for interactive animated maps, and various cartographic applications have become available over the Internet [e.g. see Buziek and Hatger, 1998; Rhyne, 1998; Rinner, 1998(b)]. However, the technology alone does not get us to the point of usable interactive animated maps. One major problem encountered when designing a cartographic application for custom VRML browsers is that the designer has only minor influence on the graphical user interface (GUI) of the browser application. Some interactive cartographic animations might need additional map functions, e.g. to offer thematic selections or a classification change. Currently those functions can only be implemented within VRML or through external Java applets. A second, and very important, problem encountered while designing interactive animated maps is that navigation in current browser software is not easy to learn, and users often lose their orientation in the virtual space. Some of the current VRML browsers use metaphors to a certain degree to facilitate navigation tasks, and many metaphors of VRML browser GUIs have changed through the releases of new versions. For that reason we will take a close look at metaphors in the following sections. Our focus in this context is on user-centered GUI design for interactive animated maps, in order to find a solution that facilitates basic navigational tasks in this environment. The following sections address the GUI issue for interactive animated maps from a cognitive rather than simply technological point of view.

An ecological approach to interactive map use

Map use in cartography has been broadly defined as 'reading, analysis, and interpretation' [Muehrcke, 1986: 8]. It represents a key research topic in cartography, focused on the ways in which people interact with maps, both perceptually and cognitively. Most work on this topic addresses the questions of how cartographic information is retrieved from maps and in what ways that information is used. With notable exceptions [primarily Castner, 1990 and MacEachren, 1995], research on perceptual and behavioural aspects has remained separate from cognitive approaches and quantitatively dominates them. To put it differently, there is a lack of cognitively based interpretations of behavioural and perceptual studies, as well as of empirical investigations of cognitive assumptions. An obvious reason for this is the sparseness and recent development of theories of visual cognition. We believe that the cognitive approaches of ecological perception and Gestalt theory [Gibson, 1986; Koffka, 1935] and the pragmatic and empirical procedures of designing and testing for usability [Nielsen, 1993] can usefully be combined to support the design of interactive animated maps that are easier to use. Gibson's ecological approach to perception and cognition [Gibson, 1986] deals with the visual system's reaction to the natural environment. This represents a different perspective on visual cognition from the information-processing models that have typically been used to describe map use. The most important information processing

theories for hypermedia and multimedia applications are the schema theory of Neisser (1979), the dual-coding theory of Paivio (1986) and the conceptual-proposition theory of Pylyshyn (1981) [Buziek, 1998]. Any map – analogue or digital – provides certain 'functionalities' to the map user. Maps are usually designed to support certain tasks, such as locating, wayfinding, analysing, or exploring. Traditional two-dimensional paper maps provide topographic and/or thematic information to the recipient. Besides their primary information content, paper maps also offer clues on how to handle them, e.g. how to
· interpret the symbology,
· fold and unfold the map,
· orient the map,
· measure distances, angles, and areas,
· superimpose additional information,
· identify adjacent maps and maps at other scales, etc.
Gibson [1986] calls such clues of an environment affordances. He considers perception an active process and uses the natural environment to explain his theory: 'The affordances of the environment are what it offers the animal, what it provides or furnishes, either for good or ill' [Gibson, 1986: 127]. Transferring the theory of affordances to human culture, an example of an affordance would be a chair that affords sitting on it. Depending on its functional layout, the same chair could afford standing on it. A thing that affords sitting or standing can be natural or artificial. Gibson transfers his theory of affordances to man-made structures and surfaces. He points out that, although man has altered the natural environment and built up artificial structures, he still uses substances of the environment, and the two should not be distinguished in the context of perception and cognition. 'This [built-up environment] is not a new environment – an artificial environment distinct from the natural environment – but the same old environment modified by man. It is a mistake to separate the natural from the artificial as if there were two environments [...]. It is also a mistake to separate the cultural environment from the natural environment, as if there were a world of mental products distinct from the world of material products' [Gibson, 1986: 130]. Gibson's theory of affordances is partly based on Gestalt ideas: 'things in our environment tell us what to do with them' [Koffka, 1935: 353], and it holds that properties of things (e.g. colour, texture, composition, size, shape, mass, mobility, etc.) are visually transmitted to the observer as reference. 'I [...] suggest that what we perceive when we look at objects are their affordances, not their qualities. [...] The special combination of qualities into which an object can be analysed is ordinarily not noticed. [...] The meaning is observed before the substance and surface, the colour and form, are seen as such. An affordance is an invariant combination of variables' [Gibson, 1986: 134]. Gibson points out that perceiving an affordance 'is not to classify an object'. One person could use a stone as a weapon while another uses the same stone as a paperweight. The concept of affordances implies that what counts for perception is not the classification of an object, but what the object affords. Perception being economical, it is not necessary for the user to distinguish all features of an object, but only what it can and cannot be used for.
Can Gibson's theory tell us anything about the perception and cognition of maps, those being mostly two-dimensional and a cartographer's interpretation of the geographic environment? The three-dimensional domain of Gibson does not necessarily apply to graphic elements in maps [Castner, 1990]. However, Gibson's approach has been applied to graphic user interfaces and usability in general [Norman, 1988; Kuhn and Blumenthal, 1996]. Norman distances himself from a literal adoption of Gibson's thoughts, but fundamental concepts such as that of an affordance clearly retain their value, mutatis mutandis. Also, digital maps are increasingly three- or four-dimensional simulations of an environment. Thus, it appears justified to explore how Gibson's ecological approach can support the design of graphical user interfaces for interactive cartographic animations and help to improve their usability for the general public.


Interactive cartographic maps are virtual products that need to offer affordances to the map user. However, information technology as such is mostly abstract and complex and lacks affordances. A bit-mapped screen image as such does not tell us how to use it. Each and every affordance has to be explicitly built into the product by the designers. There is much less physical indication of how a virtual information product can and should be used than in the case of paper maps. Some general examples of lacking affordances are software components operating only in command-line mode or invisible links in hypertext applications. In the specific case of interactive animated maps, examples of missing or insufficient affordances range from zooming and panning functionalities through legends to hyperlinks. Map designers need to create generic and specific affordances for interactive map use. The conceptual model behind these affordances must support the mental model the user brings to the task and develops through interaction. If the conceptual model lacks consistency or clashes with user expectations, the application will be hard to use and the user will end up with a distorted mental model [Norman, 1988]. Taking a look at the usability of three-dimensional cartographic animations reveals considerable problems with orientation and navigation in such virtual environments. Users are quickly 'lost in space' and fail to navigate using the metaphors of current VRML browsers because of poor conceptual models. Such failures show the need for 'easy-to-use' interactive map functions for orientation and navigation. When we transfer the theory of affordances to interactive cartographic animations, we find three primary generic affordances to be provided by interface controls:
· orientation – Where am I?
· navigation – In which direction can I go (walk, fly, etc.)?
· identification of objects – Who or what is this?
Secondary generic affordances for cartographic animations include:
· zooming and re-scaling,
· panning (scrolling),
· moving through space (walking, flying),
· moving through time,
· explaining and requesting help.
An ecological approach to the design of interactive animated maps has to
· identify such generic affordances in more detail,
· find specific affordances related to particular user tasks (such as determining travel time to a tourist sight or finding the nearest pharmacy),
· design visualisations implementing the affordances in each map.
Given the virtual nature of interactive maps, the only possible basis for creating such visualisations is the use of appropriate metaphors. The rest of this paper therefore discusses the use of metaphors in the design of interactive cartographic animations.

Map functions and Metaphors

Traditional paper maps contain and provide affordances to a user, because they are real-world objects. Currently available VRML browsers, as described above, in many cases lack affordances, because they are non-physical and abstract. This results in a complexity of use that needs to be counteracted through the explicit design of virtual affordances. Metaphors are proposed here as a basis for implementing affordances in interactive cartographic animations. They can be of great help in achieving ease of use, because they allow the user to

understand one thing in terms of another, familiar (physical) thing, without suggesting the two are the same. In addition, metaphors play a fundamental role in our ordinary everyday language and shape our everyday thinking, talking, and acting [Lakoff and Johnson, 1980]. An example of a metaphor is: 'He is over the hill'. Here, the words 'over' and 'hill' are not used in their literal everyday sense. 'He is over the hill' means that the person we are talking about has passed his highest point in life. Metaphors have the structure of mappings from source to target domains. In the example metaphor 'He is over the hill', 'He' forms the target, meaning one's current stage in life, and 'over the hill' the source domain, describing the activity of crossing over a mountain. Mostly the target domain is more abstract, and the metaphor provides a more concrete source domain to understand it. Lakoff [1992: 244] writes: 'Metaphor is the main mechanism through which we comprehend abstract concepts and perform abstract reasoning. Much subject matter, from the most mundane to the most abstruse scientific theories, can be only comprehended via metaphor. Metaphor is fundamentally conceptual, not linguistic, in nature.' Graphical user interface metaphors map familiar source concepts into abstract, computational target domains [Kuhn, 1995]. They have become a key idea in designing and assessing human-computer interaction [Kuhn, 1996]. The abstract nature of information technology creates a need for metaphors in GUIs, so that users can conceptualise and understand software without having to master its technical workings. The main role of metaphors is to afford ways of interacting and to help the user in mastering complex tasks. Interface metaphors are a conceptual, not only a presentational device. They act as 'sense makers' – an indispensable function for any user interface [Kuhn, 1995]. Many important operations in GUIs for interactive cartographic animations could be realised using metaphors, because relevant everyday abstract concepts like state, action, purpose, means, change, time and causation are mostly described metaphorically [Gersmehl, 1990; Lakoff, 1992]. The metaphors used for such everyday concepts are ideal candidates for design, because state, action, purpose, means, change, time and causation are essential functions in interactive cartographic animations. Current operating systems are still designed for two-dimensional screen tasks based on a desktop metaphor, e.g. to move documents, to start a program, to delete files, etc. [Rohrer, 1995]. We have to think about how graphical interface controls should be designed and organised for higher-dimensional interactive cartographic animations and what kinds of common metaphors could be used for creating affordances in interactive cartographic animations. Appropriate and usable metaphors do not pop up by chance. In order to find and use good metaphors for user interfaces, they need to be carefully designed and tested for their usability in a certain user community. In the case of interactive map design, we are looking for metaphors that are meaningful to the general public in their environment. Some relevant metaphor candidates can be found in everyday language, in existing commonly used technology and in human experience of the environment (space).
Finding metaphors, thus, means
· observing the general culture,
· listening to users when they talk about their tasks,
· observing user behaviour,
· looking at previous technology and its explanations [Kuhn, 1995].
The metaphors that are derived from such observations serve as a toolbox from which we can build affordances into interactive maps. The two basic common affordances to be provided by interactive cartographic animation controls are orientation and navigation. These functions are strongly related to the ‘real’ orientation and navigation in our surrounding environment. Designers can most likely apply metaphors based on daily spatial experiences for these interactive map functions. Some examples are given in Table 1.


Table 1. Metaphors based on daily spatial experiences.

Vertical: More is up
Horizontal: The ground is further down or at front (never r left)
Neighbourhood: Nearby is similar
Centre: Central is important
Closeness: Close is important
Movement: Motion is important

In addition, metaphors based on human physical experience support interactive map functions (Table 2).

Table 2. Metaphors based on human physical experience.

Light absorption: Dark is more
Heat: Hot is red, cold is blue
Touch: Touching is learning/experiencing (grasping)
Pitch: Low tone is big

These general metaphors mostly have the character of design guidelines and certainly do not strike us as particularly creative. But this is precisely their strength, since user interfaces become more powerful the fewer unexpected or unfamiliar properties they have. And the state of the art in interactive maps shows plenty of cases where such elementary principles derived from human everyday experience have been ignored or not fully exploited.

How do we find out which metaphors are usable?

User feedback on metaphor usability in interactive cartographic animations is a very important source for metaphor evaluation and the development of new metaphors. It is essential to involve a range of representative users in the metaphor development process because the map user community varies considerably. Thus, a number of usability assessment methods and tests are necessary to prove if the designed metaphors are usable for the general public or not. Such ‘feedback processes’ give important information to the designer on how the conceptual metaphor model matches the user’s abilities and characteristics. Those tests will also provide infor- mation on necessary adjustments or redesigns of metaphors. Metaphor design and user interface design for interactive cartographic animations need to be improved not only once but repeatedly – in order to provide good metaphors to the user and to shape the user interface according to the user needs [BEST-GIS, 1998]. This designing, testing and redesigning process is called usability engineering or user-centred design (UCD). The main idea of UCD is to design products, which can be used with a minimum of stress and a maximum of efficiency. Rubin [1994: 12] lists three principles for UCD: ·‘set an early focus on users and tasks, · measure the product usage empirically, · design the product, test it and modify it repeatedly’. Nielsen [1993] points out that besides the practical acceptability of a system the social acceptability needs to be investigated. Today, we cannot say much about the social acceptance of interactive cartographic maps but given the fact that IT will move into homes and, e.g. IMS are more and more applied as mapping tools for the general public, we consider them as becoming socially accepted. In the course of interactive map design the practical acceptability can be broken down into several categories, e.g. cost, reliability, compatibility, useful- ness, etc. [Nielsen, 1993]. Usefulness provides information to the designer on whether a conceptual model


facilitates functionalities to accomplish a goal or not. It is divided into the two classes ‘utility’ and ‘usability’, where ·‘utility is the question of whether the functionality of the system can do what is needed and · usability is the question of how well users can use that functionality’ [Nielsen, 1993: 25]. UCD provides various methods for efficient and effective design evaluation, e.g., focus group research, sur- veys, expert evaluations, etc. Usability testing is defined as ‘a process that employs participants who are representative of the target population to evaluate the degree to which a product meets specific usability crite- ria’ [Rubin, 1994: 25]. In order to find out if the designed metaphors and graphical user interfaces achieve usability, Nielsen’s [1993] usability parameters can be applied in interactive animated map interface design. The usability parameters provide some clues if the designed metaphors – and their map functions – are easy to learn, · efficient to use, · easy to remember, · preventing user errors, · pleasant to use [Nielsen, 1993]. In order to gain knowledge on the design of metaphors for interactive animated maps, we will apply a number of usability assessment methods and tests, i.e. · observation: the user is observed by an experimenter while performing a specific task, · questionnaires and interviews: the experimenter collects the users’ opinions about the user interface, · performance measurements: user performance is generally measured by having a group of test users perform a predefined set of tasks while collecting time and error data, · thinking aloud: the users verbalise their thoughts by thinking out loud while using the test environment, · eye movement recording: the sequence of eye movements is recorded and related to human visual cognitive processes [Nielsen, 1993; Heidmann and Johann, 1997].

Conclusions

Our approach to the design of interactive animated maps brought up several research questions on map inter- face design. Gibson’s [1986] theory of ecological perception, particularly affordances, and modern concepts of metaphors [Lakoff and Johnson, 1980] have been suggested here as support for designing GUIs for interactive cartographic animations. The goal of our research is to find out if and which metaphors are good concepts for interactive map interface design in modern cartography. In our opinion, the use of metaphors will simplify the structure of map tasks and will make their affordances visible to the inexperienced user. Map interface metaphors, especially for naviga- tion and orientation, need to supplement the user’s experience in the natural physical world in order to become intuitive to use. ‘Successful user interface metaphors tap into a reservoir of bodily feeling on the part of the user and successfully exploit out embodied knowledge’ [Rohrer, 1995]. Thus, an important step in the devel- opment and the application of metaphors in interactive animated maps is their usability testing and redesigning process. An example for a counterintuitive metaphor is the trash can of the Macintosh OS desktop metaphor where the user needs to drag the disk icon over the trash can in order to eject the disk [Rohrer, 1995]. Who would not fear deleting the files on the disk – ‘throwing them into the trash can’? This problem shows an important issue in the

course of usability testing and the redesign process: in some cases, metaphors for interactive animated maps might need to be standardised in order to become usable [Norman, 1988]. Standardisation of GUI functions is also done in general software development, e.g. for Windows 98, where the right mouse button always brings up a selection menu. Especially in cartography, where the purpose is to facilitate human thought and communication about space, it is important to avoid a technology-centric view and proceed towards user-centred map design. MacEachren [1998] writes ‘Harnessing [the human] power of vision [...] requires developing a more complete understanding of spatial cognition and perception of visual displays. While we have a solid base of knowledge about perception and cognition as it relates to static paper maps, we know much less about the cognitive and perceptual issues associated with 3D and dynamic displays’. Our paper has taken a first step in this direction, investigating the application of affordances and metaphors to designing interactive animated maps.

References

BEST-GIS (1998). Guidelines for best practice in user interface for GIS. The European Commission, DGIII – Industry, ESPRIT Programme, Geographical Information Systems International Group, Geneva.
Buziek, G. (1998). Wahrnehmungstheoretische Grundlagen, Gestaltungsprinzipien und Beispiele für die animierte kartographische Visualisierung eines Ueberflutungsprozesses. In W.-F. Riekert and K. Tochtermann (Eds.). Hypermedia im Umweltschutz. Metropolis Verlag, Marburg, 251–266.
Buziek, G., and Hatger, C. (1998). Interactive animation and digital cartometry by VRML 2.0 and JAVA within a temporal environmental model on the basis of a DTM of the Elbe estuary and a 12 hour tide period. http://visart.ifk.uni-hannover.de/~buziek/COMVIS/COMVIS98/buziek/comvis98.html.
Castner, H. W. (1990). Seeking new horizons – A perceptual approach to geographic education. McGill-Queen's University Press, Montreal.
DiBiase, D., MacEachren, A. M., Krygier, J. B., and Reeves, C. (1992). Animation and the role of map design in scientific visualization. Cartography and Geographic Information Systems, 19(4), 201–214.
Dransch, D. (1995). Temporale und nontemporale Computer-Animation in der Kartographie. Berliner Geowissenschaftliche Abhandlungen, Band 15, Freie Universität Berlin, Berlin.
Gersmehl, P. J. (1990). Choosing tools: Nine metaphors of four-dimensional cartography. Cartographic Perspectives, 5(spring), 3–17.
Gibson, J. J. (1986). The ecological approach to visual perception. Lawrence Erlbaum Associates, Hillsdale.
Hase, H.-L. (1997). Dynamische Virtuelle Welten mit VRML 2.0. dpunkt.verlag, Heidelberg.
Heidmann, F., and Johann, M. (1997). Modelling graphic presentation forms to support cognitive operations in screen maps. In L. Ottoson (Ed.). Proceedings of the 18th ICA/ACI International Cartographic Conference. Swedish Cartographic Society, Gävle, 1452–1461.
Lakoff, G. (1992). The contemporary theory of metaphor. In A. Ortony (Ed.). Metaphor and Thought. Cambridge University Press, Cambridge, 202–251.
Lakoff, G., and Johnson, M. (1980). Metaphors we live by. The University of Chicago Press, Chicago.
Koffka, K. (1935). Principles of Gestalt Psychology. Routledge & Kegan Paul Ltd, London.
Kuhn, W. (1995). 7 ± 2 questions and answers about metaphors for GIS user interfaces. In T. L. Nyerges et al. (Eds.). Cognitive Aspects of Human-Computer Interaction for Geographic Information Systems. Series D: Behavioral and Social Sciences, Vol. 83, Kluwer Academic Publishers, Dordrecht, 113–122.
Kuhn, W. (1996). Handling data spatially: spatializing user interfaces. In M.-J. Kraak and M. Molenaar (Eds.). Proceedings of the 7th International Symposium on Spatial Data Handling, SDH'96, Advances in GIS Research II. IGU, Delft, 13B.1–13B.23.


Kuhn, W., and Blumenthal, B. (1996). Spatialization: Spatial metaphors for user interfaces. Geoinfo Series, No. 8, Department of Geoinformation, Technical University Vienna, Vienna.
MacEachren, A. M. (1995). How maps work. The Guilford Press, New York.
MacEachren, A. M. (1998). Visualization – Cartography for the 21st century. http://www.geog.psu.edu/ica/icavis/poland1.html.
Moellering, H. (1980). The real-time animation of three-dimensional maps. The American Cartographer, 7(1), 67–75.
Monmonier, M. (1992). Summary graphics for integrated visualization in dynamic cartography. Cartography and Geographic Information Systems, 19(1), 23–36.
Muehrcke, P. C. (1986). Map use. JP Publications, Madison.
Nielsen, J. (1993). Usability engineering. AP Professional, Boston.
Norman, D. A. (1988). The design of everyday things. Currency Doubleday, New York.
Peterson, M. P. (1995). Interactive and animated cartography. Prentice-Hall, Inc., Englewood Cliffs.
Plewe, B. (1997). GIS-Online: Information retrieval, mapping and the Internet. High Mountain Press, Inc., Santa Fe.
Rinner, C. (1998a). Online maps in GEOMED. http://www-fit-ki.gmd.de/persons/clus.rinner/pages/gisplanet98.html.
Rinner, C. (1998b). ATKIS-Objekte in VRML. In J. Strobl and F. Dollinger (Eds.). Angewandte Geographische Informationsverarbeitung, Beiträge zum AGIT-Symposium in Salzburg. H. Wichmann Verlag, Heidelberg, http://www-fit-ki.gmd.de/persons/clus.rinner/pages/agit98.html.
Rohrer, T. (1995). Metaphors we compute by: bringing magic into interface design. http://darkwing.uoregon.edu/~rohrer/gui4web.htm.
Rhyne, T. M., and Fowler, T. (1998). Geo-VRML visualization: A tool for spatial data mining. http://www.geog.psu.edu/ica/icavis/rhyne98.htm.
Rubin, J. (1994). Handbook of usability testing. John Wiley & Sons, Inc., New York.
Taylor, D. R. F. (1997). Maps and mapping in the information era. In L. Ottoson (Ed.). Proceedings of the 18th International Cartographic Conference. Swedish Cartographic Society, Gävle, 1–10.


Session / Séance 41-C National Place Name Register Integrated with Cartographic Names Register for Multiple Scales and Products

Teemu Leskinen National Land Survey of Finland, Development Centre Opastinsilta 12 C, P.O. Box 84 FIN-00521 Helsinki, Finland E-mail [email protected]

Abstract Place names are often hard to model in a Geographic Information System (GIS). Sometimes they can be understood as well-defined attributes of well-defined geographic objects, but often these objects are difficult to structure, to point out, or even to delimit in the real world. Nevertheless, these names must be included in the database for, e.g., cartographic reasons. In most GIS implementations the place names seem to form a layer of their own, with no tighter connection to the (possible) actual geographic objects in the database than location and classification. This is the case in the Finnish National Topographic Database too. However, place names with object co-ordinates and a carefully selected set of attributes form a versatile source of information for separate or integrated service databases serving many different fields of application and products. This paper describes some of the possibilities found when developing the National Geographic Names Register at the National Land Survey of Finland.

Introduction

According to the United Nations Group of Experts on Geographical Names (UNGEGN meeting, June 1994) place names “identify our landscape, express our national identity and cultural heritage, create our framework of orientation and keys to the information age and promote our awareness of the world around us. Geographic names are elements of basic information needed to refer to places in the world. The spelling and application of geographic names must be clear, accurate, current and unambiguous. Expressed in their standardised forms the place names support effective national and international communication and are essential to our socio-eco- nomic development in such fields as trade and commerce, population census and national statistics, property rights and cadastre, urban and regional planning, environmental conservation, natural disaster and emergency preparedness, security strategy, search and rescue operations, automatic navigation and tourism, map and atlas production.” Keeping in mind these benefits, but primarily to support and rationalise the national map and map database production, the National Land Survey (NLS) decided in 1995 to create a National Geographic Names Register. The time was right, since the NLS had launched the Topographic Data System (TDS) in 1992, and the primary data for the names register would be gathered as part of the TDS data collection process with no extra costs. In Finland standardised geographic names databases or gazetteers have been lacking this far. The only proper nationwide collections of accepted place names have been the archives (mainly manual) of the Research Insti- tute for the Languages of Finland (RILF), and the Basic Maps 1:20,000 by the NLS, the geographic names of which have been accepted by the RILF.


National Topographic Database

The source of the National Geographic Names Register is the National Topographic Database (TDB). The TDB is a part of the National Topographic Data System and consists of the most detailed and up-to-date general topographic data for the whole country. TDB production is decentralised across our 13 regional offices around Finland. The TDB includes, among (or as attributes of) other geographic features, about one million names for physical features, populated places and administrative areas presented on the Finnish Basic Maps 1:20,000. The place names in the TDB are divided into 7 feature groups and further classified into 47 feature types. In addition to place names the TDB includes other text objects, such as explanatory texts and numeric values.

National Geographic Names Register

The National Geographic Names Register is a service database consisting of the National Place Name Register (PNR) and the National Map Name Register (MNR). The data source for the PNR and the MNR 1:20,000 is the TDB, from which the data are loaded in them. The three elementary objects in the database are place, place name and map name. Places and place names build up the PNR; the PNR and the map names form the MNR. In addition to these three basic tables there are some 30 background tables in the database. For the PNR they include tables for maintaining the object history, the area divisions hierarchy, the feature groups and the official statuses of different languages by municipality. For the MNR there are additional tables for maintaining the product information and introducing the codes used in the database. These background data enable queries with various spatial and attribute search criteria.

National Place Name Register
The PNR is scaleless and includes no cartographic information. The PNR objects place and place name consist of the following data:

Table place
- Unique place id as a possible link to external GIS data
- Geographic object co-ordinates (Gauss-Krüger metric X,Y;Z); the centre point (the mouth for a river)
- Information about the location of the object by municipality, General Map Index and National Rescue Grid
- Feature type
- In the background tables feature types can be aggregated as feature groups freely

Table place name
- Unique place name id
- Place id as a link to the place
- Proper spelling of the place name (accepted by the RILF, unabbreviated, upper and lower case)
- North European 8-bit character set ISO 8859-10, extended with the letters needed in Skolt Sami
- Language of the name (Finnish, Swedish, North Sami, Enare Sami, Skolt Sami)
- The official status of different languages in Finland's 450 municipalities is given in background tables. On Finnish maps, in bilingual areas, both names are presented, e.g., Helsinki, Helsingfors. The order (above, below) depends on which language group has the majority in the municipality


- Source of the place name
The object history for places and place names is stored in tables place history and place name history:

Tables place (place name) history
- Place id (place name id)
- Event code: addition, deletion, change of location or feature type (change of spelling or language)
- Event time
- User id
When ‘deleted’, the place or place name record is not actually removed, but moved to other tables called old place and old place name.

National Map Name Register
The MNR includes the unique product-dependent cartographic information for the place names. The PNR is integrated as a part of the MNR. The MNR-specific object map name consists of the following information:

Table map name
- Unique map name id
- Place name id as a link to the place name (and to the place)
- Position (X,Y) in the product co-ordinate system
- Text box ‘handle’ (1..9; which point the text position is referring to in the rectangle imagined around the text)
- Alternative spelling (product-dependent spelling which differs from the spelling in table place name; for example, a name divided in two lines on a map forms two entries in this table, both having the same place name id)
- Text font code
- Text size (graphic size, mm/100)
- Text colour code
- Letter tilt angle
- Capitals flag (whether the name is written upper case in this product)
- Text direction (expressed as relative co-ordinates (dx,dy))
- Spacing flag (whether the text direction parameters (dx,dy) are also used for indicating the length of the text box)
- Bending (up to 32 pairs of co-ordinates for curved texts, in product co-ordinate system)
The map names are arranged as products. Every map name instance is related to one and only one product. The map names are connected to a product in the table map name product. The products are introduced in table product and grouped in additional background tables.

Table map name product
- Map name id
- Product code


Table product
- Product code
- Product group code
- Co-ordinate system code
- Map scale class code
- Product name

Integration, Data Model The data model of the National Geographic Names Register is illustrated in Figure 1. The National Place Name Register and the National Map Name Register are integrated as one single consistent database where every piece of information is stored only once. A place may have one or more place names (e.g., in different lan- guages) and a place name may have 0-N cartographic map name appearances in one or several products. For example, the largest lake in Finland, Saimaa, has one place entry and one place name entry in the database. The map name ‘Saimaa’ (or often ‘SAIMAA’, see ‘capitals flag’ in the table map name) would probably appear in all cartographic products in all map scales, and possibly several times in one product. On the other hand a little pond named Patalampi might have a map name appearance just once, in the Basic Map 1:20,000.

Figure 1. A ‘mind map’ presentation of the National Geographic Names Register

Implementation of Database
The National Geographic Names Register is implemented as a relational database (Oracle) running on Unix.
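As an illustration only (not part of the NLS implementation), the integration of the three primary tables can be sketched in a few lines of Python, using SQLite as a stand-in for the Oracle database; the table names, the reduced column set and the co-ordinate values are simplified and hypothetical:

import sqlite3

# Minimal sketch of the three primary tables (subset of fields, simplified names).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE place (
    place_id     INTEGER PRIMARY KEY,
    x            REAL,     -- object co-ordinates (centre point), illustrative values
    y            REAL,
    feature_type TEXT
);
CREATE TABLE place_name (
    place_name_id INTEGER PRIMARY KEY,
    place_id      INTEGER REFERENCES place(place_id),
    spelling      TEXT,    -- proper spelling accepted by the RILF
    language      TEXT
);
CREATE TABLE map_name (
    map_name_id   INTEGER PRIMARY KEY,
    place_name_id INTEGER REFERENCES place_name(place_name_id),
    product_code  TEXT,    -- every map name belongs to exactly one product
    x             REAL,    -- position in the product co-ordinate system
    y             REAL,
    text_size     INTEGER, -- graphic size, mm/100
    capitals_flag INTEGER
);
""")

# One place, one place name, several product-dependent map name appearances (cf. Saimaa).
conn.execute("INSERT INTO place VALUES (1, 3562000.0, 6830000.0, 'lake')")
conn.execute("INSERT INTO place_name VALUES (10, 1, 'Saimaa', 'Finnish')")
conn.executemany(
    "INSERT INTO map_name VALUES (?, 10, ?, 3562500.0, 6830500.0, 300, 1)",
    [(100, "basic_map_20k"), (101, "gt_map_250k")],
)

# All cartographic appearances of the place name, across products.
for row in conn.execute(
        "SELECT n.spelling, m.product_code, m.text_size "
        "FROM place_name n JOIN map_name m ON m.place_name_id = n.place_name_id"):
    print(row)

The final query mirrors the cardinality described above: one place, one place name, and several map name appearances in different products.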


Maintaining National Geographic Names Register

The National Geographic Names Register is ‘self-sufficient’: all data for both the PNR and the MNR are stored in a single database and nowhere else. However, the nationwide register needs processes and tools to maintain the data. The processes can clearly be divided into two parts: maintaining the place names and the 1:20,000 map names, and creating and maintaining the small scale map names from 1:100,000 up to 1:4.5 million.

Maintaining Place Names and 1:20,000 Map Names
The Topographic Data System and the Topographic Database are the origins of the place names and the 1:20,000 map names, and the data are maintained as part of the normal TDB updating and Basic Map 1:20,000 compilation processes. The TDB is processed using GIS software developed at the NLS and, in the future, possibly using an object-oriented GIS that is currently being evaluated. The PNR and the MNR 1:20,000 are not connected on-line to the TDB. When needed, they can be updated with the changes made since the previous update by using an automatic process. If need be, the PNR and MNR 1:20,000 can also be edited directly using application software developed on Arc/Info.

Creating and Maintaining Small Scale Map Names
The processes and tools for maintaining the small scale map names are developed on Arc/Info running on Unix. Arc/Info is also the production and storage system for other small scale GIS data. The main issues in processing are generalisation, name placement and other cartographic editing. The process workflow is illustrated in Figure 2.

Generalisation Process
Besides the scale 1:20,000, the MNR includes nationwide small scale map name presentations in 1:100,000, 1:250,000, 1:500,000, 1:1 million and 1:4.5 million. They are cartographically compiled with other small scale vector data in the respective scales. The generalisation process is incremental: the map names in 1:100,000 are selected from the map names in 1:20,000, the source data for the 1:250,000 map names are the map names in 1:100,000, and so on. The larger scale map names are fetched from the Oracle database into an Arc/Info work coverage using appropriate spatial criteria. In Arc/Info, the names are handled on three levels: the upper, the middle and the lower level. The upper level consists of names that are selected for the target product. The lower level holds the names that have been rejected. The middle level is the workspace, i.e. it includes the names that still await a decision. The user can move names from any level to another interactively, one by one, or using group operations of different kinds. For each combination of a source product and a target product a parameterised ‘batch’ generalisation process has been implemented to give a starting point for interactive name selection. The process places each source product name on one of the three levels on the basis of the combination of the source product name's feature type and text size. Naturally, in the following interactive selection phase several other criteria must be taken into consideration. One of them is all the other vector data in the target area. The names’ generalisation cannot be started until all other data are compiled and presentable as background information on screen or paper.
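To illustrate the idea of the parameterised ‘batch’ pre-classification (and not the actual NLS/Arc/Info process), the following Python sketch assigns source product names to the three levels from a hypothetical rule table keyed by feature type and minimum text size:

# Sketch of the 'batch' pre-classification of source-product names into the three
# working levels (upper = selected, middle = undecided, lower = rejected).
# The rule table is purely illustrative; the real process is parameterised per
# source/target product combination.
UPPER, MIDDLE, LOWER = "upper", "middle", "lower"

# (feature type, minimum text size in mm/100) -> level in the target product
RULES_100K_TO_250K = {
    ("populated_place", 250): UPPER,
    ("populated_place", 180): MIDDLE,
    ("lake", 220): UPPER,
    ("lake", 160): MIDDLE,
}

def classify(feature_type: str, text_size: int, rules: dict) -> str:
    """Assign a source-product name to a level, based on feature type and text size."""
    best = LOWER
    for (ftype, min_size), level in rules.items():
        if ftype == feature_type and text_size >= min_size:
            if level == UPPER:
                return UPPER
            best = MIDDLE
    return best

names = [("Saimaa", "lake", 300), ("Patalampi", "lake", 140),
         ("Helsinki", "populated_place", 320)]
for name, ftype, size in names:
    print(name, "->", classify(ftype, size, RULES_100K_TO_250K))

The result is only a starting point: the interactive selection phase may still move any name between the levels.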

Name placement and editing
After the names have been selected in the upper level they need cartographic processing. To give a starting point for interactive and more enhanced editing, a parameterised ‘batch’ process sets the default typographic parameters for each name. As in generalisation, this process is directed by the combination of the source product name feature type and text size. This process doesn't affect name placement. In the interactive editing phase the user may edit the name placement and other cartographic parameters freely, yet within a product-dependent framework given for every feature type. During this phase the user is able to bring all the necessary and helpful vector and raster data on the screen as background information. After completing the interactive phase the user stores the selected and edited map names from Arc/Info to the Oracle database.

Figure 2. The small scale map name generalisation and editing process

Retrieval Interface

The graphic end-user retrieval application for the National Geographic Names Register’s Oracle database was developed on NT using Visual Basic and TCP/IP/SQL*Net/ODBC tools. When making a query the user de- fines the register, the names he/she wants, the data set he/she is interested in, and how to output the results. The flow of a query is presented in Figure 3.

Selecting Register
The user interface is very similar for both the PNR and MNR queries. In map name queries the product becomes a search key among others.


Selecting Names
The user may apply combinations of various criteria when searching for names. The spatial conditions are set by choosing municipalities, General Map Index sheets and National Rescue Grid squares. By nature, they all form hierarchical area divisions, and their aggregates, such as counties, can be used as well. A rectangle given by two co-ordinate pairs is one alternative. Feature type or a group of feature types is a common search key, as are the language and spelling of names.

Selecting Data Set
The data set in this context means the data model for the query results. Though the data in the database are divided into several tables, the data set model is a single table. Almost any data in the database can be included in one data set: every item of data corresponds either to some data field in the three primary tables, or to any background table data field that can be derived from the primary table data fields; the structure of the database does not restrict the data sets. Once defined, a data set can be stored for future use.

Selecting Output Media
The user can output the query results to the screen, to a text file, or as a database table (Access, Excel etc.).

Figure 3. A register query


Applications, Products

The Introduction presented the large variety of applications associated with proper place names proposed by the UNGEGN experts. Implemented and possible fields of application and products related to the National Geographic Names Register are, for example:
- Rationalisation of the NLS map (database) production. Organised data management, uniform principles and specialised tools for geographic names in full integration with other map database production.
- Place name and map name data products. Standard as well as tailored data sets for GIS developers and map makers.
- Gazetteers. Product related, regional, national and international gazetteers. On-line gazetteers on the Internet.
- Internet services. Karttapaikka (MapSite, http://www.kartta.nls.fi/) is a service that provides the national topographic map (Basic Map 1:20,000) for on-line public use. At present a database of about 30,000 names is implemented for place searches. Plans exist to integrate the incrementally generalised names data sets of the MNR into the service as a flexible and ‘intelligent’ data source for navigating and zooming.
- International projects and databases. Contribution of approved geographic names to international GIS and place name databases. Thanks to the object history records, updates are easily arranged.
- Research. A large, nationwide and homogeneous database of accepted place names with feature types and object co-ordinates forms a new source of information for researchers, such as onomasticians, historians and natural scientists. By combining different spatial, attribute and spelling criteria they can find answers to existing problems and derive new subjects of study.


Session / Séance 25-B Label placement for dynamically generated screen maps

Ingo Petzold Institute of Cartography and Topography, University of Bonn, Germany [email protected]

Lutz Plümer Institute of Cartography and Topography, University of Bonn, Germany [email protected]

Markus Heber [email protected]

Abstract The subject of this paper is a new and very efficient technique for automatic label placement, where the focus is on dynamically generated screen maps. Such screen maps are frequently created in the course of the interactive navigation in a geographic information system. Screen maps put specific requirements on the efficiency of the applied algorithms. In the last few years much effort was spent on the design and refinement of algorithms for automatic label placement in the context of static maps. Whereas a runtime of a couple of minutes may be appropriate for static maps, label placement for screen maps should be done in real time, i.e. less than a second. Rather than a new algorithm a reactive data structure is described which provides the required efficiency. The main technical tool is a reactive conflict graph representing potential conflicts augmented by a multidimensional index which supports both zooming and scrolling. There is a running prototype convincingly demonstrating that label placement for dynamically generated screen maps can be done in real time with an ordinary personal computer.

Introduction

This paper presents a new technique which allows to label dynamically generated screen maps in real time, i.e. “on the stroke of a key”. Screen maps are generated as answers of spatial queries to a geographical information system (GIS). For convenient interaction between the user and the GIS immediate reactions (“real real time”) are desirable, because otherwise the user gets impatient. In contrast to static (paper) maps the resolution of dynamic (screen) maps is much lower, and the size of screen maps is restricted by the screen size. Text is essential for orientation, and under these circumstances an efficient procedure for label placement plays a predominant role. The time consuming iteration of queries and screen maps can be substantially reduced if readable maps providing a good orientation enable the user to formulate specific queries. Since text is an essential carrier for information and orientation, it must be placed carefully. Imhof has formu- lated cartographic rules for map labeling [Imhof, 1962], which may well serve as a base for automatic proce- dures aiming at readable maps with good orientation and information density. In an interactive GIS the main activities of a user after a quick orientation and assimilation of the content of a map are scrolling and zooming.


Technically speaking scrolling amounts to shifting of the displayed clip of the map in any direction. In this case labeling must be recalculated at least at the borders of the displayed map. For ergonomic reasons these changes should be restricted to the labeling of features close to the new and the old margin of the displayed map since otherwise the user is confronted with the complex task of matching the old situation with the new situation. On the other hand, zooming a map not only requires changing the scale but also modification of the label size. A higher scale provides more details and larger labels. Thus zooming is not just a simple blow up or scale down of a map like a photo. Algorithms and methods of scaling maps without labels are well-known. This paper presents techniques how to combine the labeling of maps under constraints of cartographic rules and functions like scrolling and zooming. Label placement for ordinary static maps is already pretty complex. If over and above the normal constraints zooming and scrolling have to be supported and very short response times have to be achieved complexity is increased considerably. This paper provides a general approach to handle this addi- tional complexity. It does not give just another algorithm for label placement. Instead it provides a reactive data structure which may be combined with most of the existing algorithms such as greedy, gradient descent and simulated annealing. The main idea is to reduce dynamic runtime at the expense of a sophisticated preprocess- ing of potential conflicts. Technically, the main device is a reactive conflict graph augmented by a multidimen- sional index. A running prototype convincingly demonstrates the potential of the new technique.

Point Feature Labeling Problem

Before elaborating on dynamically generated screen maps, a closer look at static (paper) maps helps to classify the problem. Maps consist of basic objects which, for labeling purposes, can roughly be divided into groups of point, line and area objects (called features). Point features are, for example, towns or ruins; line features are rivers or contour lines; area features are lakes or mountain ranges. Some of these features carry the information themselves, for example a feature with the symbol of a ruin or a contour line. Additional information on the feature must be appended by labels, such as the name of the ruin or the altitude of the contour line. Other features are useless without a label because they carry only the information on their location. Sometimes the map reader can assign information to these features on the basis of prior knowledge. In contrast to labels for point and line features, labels for area features may carry extra information on spatial extent. From an abstract point of view a map is a medium for storing and visualizing spatial information. The aim is not to place as much information as possible on a map; otherwise it would be difficult to read the map and to find the information sought. Over the last few hundred years, rules for producing maps and especially for labeling were developed. Imhof was the first to gather these rules and publish them [Imhof, 1962]. His paper was an important step towards the objectivization of the craftsmanship of labeling maps. His colloquial rules are accompanied by paradigmatic examples comparing good and bad labeling. These rules are also the basis for the cartographic rules mentioned in this paper. This paper concentrates on the point feature labeling problem (pflp). The pflp stands in for the complexity of the general labeling problem. There are also algorithms for generating label positions for line features [Edmondson, Christensen, Marks and Shieber, 1996] and area features [Petzold, 1996; Petzold and Plümer, 1997]. For understanding the problem of label placement a discretisation of possible label positions is sufficient. Each possible label position obtains a score. This score describes the “quality” of the label position relative to the feature, based on Imhof's cartographic rules. The quality of a label position is determined by the shape of the “skyline” of the text (see Figure 1 b) and c)), the ease of association between label and feature, and the reading habits of the map reader. Underlying graphics must also be taken into account: if a label covers line features and/or point features, like ruin signatures, it is more difficult to read this label. Thus there must also be a scoring of the underlying graphics for each possible position [Petzold, 1996].


Figure 1. a) Overlapping of two labels; b) detailed modeling of label boxes, considering the skyline of the letters; c) bounding box of a label for easy handling; d) the conflict of a) with simple label boxes.
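The discretisation and scoring of label positions can be sketched as follows. This Python fragment is illustrative only: the six candidate positions, their preference weights and the background_penalty callback are assumptions standing in for the scoring of position quality and underlying graphics described above; it is not the authors' implementation.

from dataclasses import dataclass

@dataclass
class Candidate:
    x: float          # lower-left corner of the label box
    y: float
    w: float
    h: float
    score: float      # lower is better; combines position preference and background penalty

# Preference ranking of the discretised positions around a point feature,
# roughly following reading habits (upper right best); weights are illustrative only.
POSITIONS = [("upper_right", 0.0), ("right", 0.2), ("upper_left", 0.4),
             ("left", 0.5), ("lower_right", 0.6), ("lower_left", 0.8)]

def candidates(px: float, py: float, w: float, h: float,
               background_penalty) -> list[Candidate]:
    """Generate scored candidate label boxes for a point feature at (px, py).

    background_penalty(box) is assumed to return a penalty for the point and
    line features covered by the box (the scoring of underlying graphics)."""
    offsets = {"upper_right": (0, 0), "upper_left": (-w, 0), "lower_right": (0, -h),
               "lower_left": (-w, -h), "right": (0, -h / 2), "left": (-w, -h / 2)}
    result = []
    for name, pref in POSITIONS:
        dx, dy = offsets[name]
        box = Candidate(px + dx, py + dy, w, h, 0.0)
        box.score = pref + background_penalty((box.x, box.y, box.w, box.h))
        result.append(box)
    return sorted(result, key=lambda c: c.score)

# Example: no underlying graphics, hence zero penalty everywhere.
best = candidates(10.0, 20.0, 4.0, 1.2, lambda box: 0.0)[0]
print(best)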

Besides local aspects, such as feature and possible label positions, more global considerations have to be taken into account as well. Well-proportioned signatures and labels on the map support the easy orientation for the reader and assimilation of the map information [Imhof, 1962]. This characteristic is also called well-propor- tioned information density and will be discussed in the chapter about reactive conflict graphs. The worst situation is the overlapping of labels. These labels are unreadable (see Figure 1 a)) and simple procedures like scaling down the label size are clearly no solution because the size of the label must be appro- priate to the scale of the map and the importance of the feature. The size of a label carries in general additional information, for example the size of a town label carries information on the number of inhabitants. The over- lapping of labels are often called conflicts. These conflicts are normally not locally restricted. For solving or globally reducing the number of conflicts it could be necessary to change the labeling of other features (see Figure 2).

Figure 2. Reaching a global minimum (from 4 to 3 overlappings) needs leaving a local minimum (sequence a)-c)).

In the following we shall use the terms label and label position as synonyms. A point feature has a set of possible label positions. Once a label position has been selected out of this set it is called selected label posi- tion. With regard to conflicts there are again different cases which are illustrated in Figure 3. There is (a) the conflict between selected label positions, (b) the conflict between a selected and one or more possible label position and c) the “potential” conflict between possible label positions. These potential conflicts are the start- ing point for the conflict graph to be discussed in the next chapter.

Figure 3. Different kinds of conflicts: a) conflict between selected label positions; b) conflict between a se- lected label position for a feature and possible label positions for another feature; c) potential con- flict between possible label positions.

Conflict Graph

At first we discuss (static) conflict graphs for static maps with a fixed scale and clip. A conflict graph repre- sents all potential conflicts. More precisely, it is an undirected graph where the nodes represent point features. There is an edge between two nodes if there is a potential conflict between the associated point features.


Construction of this conflict graph amounts to shifting the burden of geometric computation into a first, preprocessing phase. We can thus differentiate between a first preprocessing phase, where the conflict graph is constructed, and a second, dynamic phase, where the conflict graph is applied. At first sight a conflict graph provides only negative advice on potential conflicts and no positive advice on promising label positions. The immediate advantage of this graph lies in the fact that at runtime, i.e. in the dynamic phase, all potential conflict partners of each point feature are immediately accessible. Note that without such a device the identification of potential conflict partners requires the identification of intersecting label boxes, which has a rather high computational complexity. An example of a conflict graph is given in Figure 4. The illustration is a screenshot from our prototype. Look for instance at the edge between Cologne (Köln) and Düsseldorf. This edge represents a potential conflict between the features of Cologne and Düsseldorf.

Figure 4. Screenshot of our prototype showing the conflict graph.
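A minimal sketch may make the definition concrete. The following Python fragment (not the authors' implementation) builds the conflict graph by testing, for every pair of features, whether any of their candidate label boxes intersect; the feature names and boxes are invented, and a production system would use a spatial index instead of the naive pairwise test:

from itertools import combinations

def boxes_overlap(a, b) -> bool:
    """Axis-aligned overlap test for two label boxes (x, y, w, h)."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def build_conflict_graph(features: dict) -> dict:
    """features maps a feature name to its list of candidate label boxes.

    Returns an adjacency dict: an edge joins two features if any pair of their
    candidate boxes intersects (a potential conflict)."""
    graph = {name: set() for name in features}
    for (n1, boxes1), (n2, boxes2) in combinations(features.items(), 2):
        if any(boxes_overlap(b1, b2) for b1 in boxes1 for b2 in boxes2):
            graph[n1].add(n2)
            graph[n2].add(n1)
    return graph

features = {
    "Cologne":     [(0, 0, 4, 1), (-4, 0, 4, 1)],
    "Duesseldorf": [(1, 0.5, 4, 1)],
    "Bremen":      [(50, 60, 4, 1)],
}
print(build_conflict_graph(features))
# {'Cologne': {'Duesseldorf'}, 'Duesseldorf': {'Cologne'}, 'Bremen': set()}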

The conflict graph in Figure 4 falls apart into several partial graphs, also called clusters, independent of each other: a northern one from Wilhelmshaven via Hamburg to Schwerin, a middle one between Osnabrück and Berlin, and in the south, on the one hand, the partial graph between Dortmund and Aachen and, on the other hand, the one between Erfurt and Dresden. The algorithmic advantage of this “divide and conquer” technique, the dismantling of a major problem into several mutually independent sub-problems, is enormous [Sedgewick, 1997]. The recognition of these clusters is very simple; it succeeds in time linear in the size of the graph. Weakly connected graphs can be dealt with in a similar, slightly more complex way. In a graph the degree of a node is the number of incident edges (the edges having the node as an endpoint) and therefore the number of its direct neighbors. In the conflict graph these direct neighbors are potential conflict partners. The degree of a node is therefore a measure of the hardness of the conflicts between this node and its neighbors, and thus of the difficulty of finding a suitable position for this node.


As a quantitative characterization of the difficulty of labeling a point feature, the degree of a node is a first estimate which may be refined further [Heber, 1998]. It supplies a measure of the local competition. For example, in Figure 4 the competition in the Ruhrgebiet and Rheinland (the region with the towns Cologne, Düsseldorf, Dortmund and Essen) is much higher than in the region around Bremen. The representation of this competition as a third dimension leads to the concept of a conflict relief. This relief illustrates the competition between the point features of the map. The a priori estimation of the relative difficulty of labeling enables the application of two powerful heuristics which play an essential role in the literature on combinatorial search: (I) “most constrained” and (II) “least commit” [Pearl, 1984]. The objective of these heuristics is to come close to the global optimum through a clever ordering of features and label positions. The first heuristic controls the selection of the features to be labeled, the second controls the selection of the label position. “Most constrained” means starting with the most difficult features, i.e. those subject to the most restrictions. “Least commit” means choosing, for a feature, label positions that introduce as few conflicts as possible. From the viewpoint of the conflict relief, “most constrained” means to start from the mountain peaks and move down in spiral serpentines; “least commit” leads to preferring label positions in the direction of the negative gradient. Up to now the conflict graph and its concept have been described. The next technical step is the application of an appropriate data structure. Two nodes (features) are adjacent if they are (directly) connected by an edge. This neighborhood is stored in a data structure called an adjacency list, which holds for each node a list of its neighbors. In practice it is a table where each node has its own row; the rows represent the list of nodes and each entry in a row represents a neighbor. Such an adjacency list, for the component of the conflict graph shown in Figure 5 a), is given in Table 1. The advantage of the adjacency list data structure is its efficient use of space (O(n) in this particular case) and the immediate access to adjacent nodes.

Table 1. Adjacency list for a component of the conflict graph in Figure 5 a).

node / feature: adjacency list
Cologne: Duisburg, Düsseldorf, Dortmund, Essen
Duisburg: Cologne, Düsseldorf, Dortmund, Essen
Düsseldorf: Cologne, Duisburg, Dortmund, Essen
Dortmund: Cologne, Duisburg, Düsseldorf, Essen
Essen: Cologne, Duisburg, Düsseldorf, Dortmund
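Both the recognition of clusters and the degree-based ordering can be sketched on top of such an adjacency structure. The following Python fragment is illustrative only: it finds the connected components by breadth-first search, in time linear in the size of the graph, and orders the features by conflict-degree as a simple form of the “most constrained” heuristic; the example data mirror Table 1, with an isolated Bremen node added.

from collections import deque

def clusters(graph: dict) -> list:
    """Connected components of the conflict graph, found by breadth-first search."""
    seen, components = set(), []
    for start in graph:
        if start in seen:
            continue
        comp, queue = set(), deque([start])
        while queue:
            node = queue.popleft()
            if node in comp:
                continue
            comp.add(node)
            queue.extend(graph[node] - comp)
        seen |= comp
        components.append(comp)
    return components

def labelling_order(graph: dict) -> list:
    """'Most constrained' ordering: features with the highest conflict-degree first."""
    return sorted(graph, key=lambda n: len(graph[n]), reverse=True)

graph = {
    "Cologne": {"Duisburg", "Duesseldorf", "Dortmund", "Essen"},
    "Duisburg": {"Cologne", "Duesseldorf", "Dortmund", "Essen"},
    "Duesseldorf": {"Cologne", "Duisburg", "Dortmund", "Essen"},
    "Dortmund": {"Cologne", "Duisburg", "Duesseldorf", "Essen"},
    "Essen": {"Cologne", "Duisburg", "Duesseldorf", "Dortmund"},
    "Bremen": set(),
}
print(clusters(graph))          # two independent clusters
print(labelling_order(graph))   # the Ruhrgebiet features come before Bremen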

Zooming and the reactive conflict graph

So far the conflict graph only supports label placement in static maps. A conceptual enlargement consists in adding edges to the conflict graph which represent potential conflicts at different scales. These edges obtain attributes describing the scale interval in which potential conflicts between the nodes (features) occur. In Figure 4, for example, the risk of a conflict between Berlin and Magdeburg, or between Hannover and Osnabrück, arises when the scale is reduced. Through this extension the conflict graph becomes scale independent. A conflict occurs at a particular scale, and when scaling down further this conflict will remain. Applied to the conflict graph this means that the edges receive the scale at which the conflict starts as an attribute. More precisely, the attribute is an interval on the scale axis with one fixed end, the attribute (“the scale”), and open ended towards smaller scales. This leads to a continuous conflict graph, called the reactive conflict graph, which is needed for zooming. As will be seen later, scrolling is supported as well. An example of a static conflict graph with a fixed scale of 1:500.000, and of a reactive conflict graph with the same features and a continuous scale, is shown in Figure 5 a) and b). Table 2 shows the accompanying adjacency list of the graph in Figure 5 b). Only the interval endpoints are given as attributes.


Figure 5. a) A static conflict graph at scale 1:500.000. b) The first version of the reactive conflict graph with only the first attribute, the scale up to which a conflict occurs. c) The reactive conflict graph with both attributes: the scale up to which a conflict occurs (“]”) and the scale from which one of the incident nodes is deselected (“[”). The brackets indicate the intervals illustrated in Figure 6. The dotted line between Cologne and Essen shows an “irrelevant” edge.

Table 2. Adjacency list with scale up to which the conflict occurs to the conflict graph in Figure 5 b).

node / feature: adjacency list (neighbor, scale up to which the conflict occurs)
Cologne: Duisburg 1:3.800.000, Düsseldorf 1:180.000, Dortmund 1:3.800.000, Essen 1:3.000.000
Duisburg: Cologne 1:3.800.000, Düsseldorf 1:700.000, Dortmund 1:340.000, Essen 1:180.000
Düsseldorf: Cologne 1:180.000, Duisburg 1:700.000, Dortmund 1:650.000, Essen 1:400.000
Dortmund: Cologne 1:3.800.000, Duisburg 1:340.000, Düsseldorf 1:650.000, Essen 1:220.000
Essen: Cologne 1:3.000.000, Duisburg 1:180.000, Düsseldorf 1:400.000, Dortmund 1:220.000

From a specific scale on, all nodes / features will have conflicts with each other (1:3.800.000 in Figure 5 b)). That is unacceptable with regard to space and running time, but especially to the appearance of the map. If the scale were, for example, reduced from 1:20.000 to 1:50.000 and all features of the well-labeled higher scale map (1:20.000) were still displayed, the number of features per square inch would increase significantly and the readability of the lower scale map would suffer. This leads to the question of which features (nodes) should be selected or deselected (displayed or not), and how this information can be embedded in the reactive conflict graph.

Table 3. Adjacency list with intervals of edges of the reactive conflict graph in Figure 5 c). The irrelevant edges are gray underlaid.

node / feature: adjacency list (neighbor [scale from which one or both end nodes are deselected (not displayed), scale up to which the conflict occurs])
Cologne: Duisburg [1:5.000.000, 1:3.800.000], Düsseldorf [1:8.000.000, 1:180.000], Dortmund [1:7.000.000, 1:3.800.000], Essen [1:1.000.000, 1:3.000.000] (irrelevant)
Duisburg: Cologne [1:5.000.000, 1:3.800.000], Düsseldorf [1:5.000.000, 1:700.000], Dortmund [1:5.000.000, 1:340.000], Essen [1:1.000.000, 1:180.000]
Düsseldorf: Cologne [1:8.000.000, 1:180.000], Duisburg [1:5.000.000, 1:700.000], Dortmund [1:7.000.000, 1:650.000], Essen [1:1.000.000, 1:400.000]
Dortmund: Cologne [1:7.000.000, 1:3.800.000], Duisburg [1:5.000.000, 1:340.000], Düsseldorf [1:7.000.000, 1:650.000], Essen [1:1.000.000, 1:220.000]
Essen: Cologne [1:1.000.000, 1:3.000.000] (irrelevant), Duisburg [1:1.000.000, 1:180.000], Düsseldorf [1:1.000.000, 1:400.000], Dortmund [1:1.000.000, 1:220.000]


The next paragraphs deal with the dependence between (de)selection and the reactive conflict graph. To retrieve information about the level of difficulty of labeling a feature, the reactive conflict graph must be examined. The number of edges incident to a node (the edges having the node as an endpoint) describes the conflict-degree (k) of the node. If only this single aspect were considered and features with a conflict-degree higher than a constant were deselected, a big city like Cologne would not be labeled, while towns with a fraction of its inhabitants and fewer conflicts, like Troisdorf, a town close to Cologne, would be labeled. Cologne is of course more important than Troisdorf. This informational content is held in an attribute of each node; in this example it would be the number of inhabitants. The informational content must be combined with the conflict-degree. Additionally, other aspects such as the quality of a label position relative to its feature, the topology between feature and label, the graphic background and other cartographic aspects/rules can be incorporated, for example with a ranking/scoring system (for details see [Heber, 1998; Petzold and Plümer, 1997; Petzold, 1996]).

Figure 6. A relevant edge (both intervals overlap) is shown in a). In b) an irrelevant edge is displayed.

A good decisive factor for (de)selecting a feature is the number of conflicts with features of higher or equal priority (informational content). Conflicts with features of lower priority are not taken into account. This ensures that a city like Cologne will definitely be labeled, even if a smaller town in direct conflict with it, like Troisdorf, is labeled (Figure 7). The maximum number of conflicts with features of higher or equal priority is restricted to a constant (k̄). From a certain scale on, the number of these conflicts will be lower than this constant. This specific scale becomes a further attribute of the nodes, and especially of the edges, of the reactive conflict graph. As described, this scale is the lower endpoint of an interval that is open at one end. The interval describes the selection (visibility) of the accompanying feature. Once a node is deselected, all its conflicts disappear, and thus also the incident edges (“end node visible” in Figure 6). The interval of the first described edge attribute, the scale from which a conflict occurs, runs exactly the other way (“conflict valid” in Figure 6). In the figures and tables the intervals are given by brackets. When both intervals of an edge overlap, the edge is relevant (Figure 6 a) and will be part of the reactive conflict graph (Figure 5 c). Otherwise the edge is irrelevant (Figure 6 b) and is not part of the conflict graph, like the edge between Cologne and Essen displayed as a dotted line in Figure 5 c. Figure 7 displays the effects of different values of k̄, especially the consequences for the runtimes. Table 3 shows the adjacency list of the reactive conflict graph in Figure 5 c). The first entry of each cell is the attribute mentioned above and describes the interval border from which both incident nodes are selected (“end node visible” in Figure 6); this value is displayed in the graph in Figure 5 c) below the edges / conflicts. The second entry of each cell represents the “conflict valid” interval border of Figure 6; these values can be found in Figure 5 b) and c) above the edges / conflicts. The irrelevant edge in Figure 5 c) between Cologne and Essen is marked in Table 3.
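The interval logic can be made concrete with a small sketch. The following Python fragment reflects our reading of Figure 6 and Table 3 (scales represented by their denominators; a conflict persists towards smaller scales, and a node is visible only at scales larger than its deselection scale); this representation is an assumption, and the values are taken from Table 3 purely for illustration:

from dataclasses import dataclass

@dataclass
class ReactiveEdge:
    a: str
    b: str
    conflict_from: int    # scale denominator at which the conflict starts (it persists
                          # for all smaller scales, i.e. larger denominators)
    deselect_from: int    # scale denominator from which one or both end nodes are deselected

    def relevant(self) -> bool:
        """An edge is relevant if the 'conflict valid' and 'end nodes visible'
        scale ranges overlap, i.e. the conflict already exists while both
        features are still displayed."""
        return self.conflict_from < self.deselect_from

    def active_at(self, scale_denom: int) -> bool:
        """Does this edge contribute a conflict in a map at the given scale?"""
        return self.conflict_from <= scale_denom < self.deselect_from

edges = [
    ReactiveEdge("Cologne", "Duisburg",    3_800_000, 5_000_000),
    ReactiveEdge("Cologne", "Essen",       3_000_000, 1_000_000),  # the dotted, irrelevant edge
    ReactiveEdge("Cologne", "Duesseldorf",   180_000, 8_000_000),
]
for e in edges:
    print(e.a, "-", e.b, "relevant:", e.relevant(),
          "active at 1:500.000:", e.active_at(500_000))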


Figure 7. Effects of different values for k̄ and the consequences for the running time.

Improvement of Label Selection
This chapter deals with a refinement of the edge attributes used for selecting features. The attribute is a scale describing the lower endpoint of the interval in which features are selected, depending on the number of conflicts with features of higher or equal priority (in the following called “end node visibility”). This value relates to the feature with all its possible label positions. The refinement is to examine the best possible label position of each feature as a representative with a high selection probability. Here the end node visibility criterion can be applied again, but with a different value for the constant k̄. The number of possible conflicts in the reactive conflict graph is reduced again, and the runtimes are lower. More details can be found in [Heber, 1998].

Scale dependent Label Size

The last problem to be discussed here is the dependency between label size and scale. A first observation shows that the size of labels depends on the scale; the size of a label, however, is not proportional to the scale. Table 4 shows the measured label values for different scales, the scaling factor (x) of the map relative to the base scale of 1:100.000, and the label scaling factor (ls) relative to the label width at scale 1:100.000. We developed a function based on empirical evidence that calculates the label width (label scaling factor ls) for a given scaling factor x: ls(x) = S^(log2(1/x)) = x^(-ln(S)/ln(2)), where S is the label scaling factor for a halving of the scale (x = 0.5) [Heber, 1998]. Calculated values can be found in the right-hand part of Table 4 for different label scalings S based on the scaling for 1:100.000. The calculated values for the lower scales show larger divergences but are acceptable.


Table 4. The left table shows the measured labels for different scales and their label scaling factor. In the right table the calculated values and original values are listed.

Measured labels:
scale          scaling factor (x), base 1:100.000   label width   label scaling (ls), base 1:100.000
1:100.000      1.0                                   2 cm          1.0
1:200.000      0.5                                   1.9 cm        0.95
1:500.000      0.2                                   1.6 cm        0.8
1:2.250.000    0.044                                 1.5 cm        0.75
1:4.500.000    0.022                                 1.5 cm        0.75
1:15.000.000   0.0066                                0.9 cm        0.45
1:30.000.000   0.0033                                0.8 cm        0.4

Calculated values:
scale          original value   ls=0.95   ls=0.92   ls=0.9
1:100.000      1.0              1.0       1.0       1.0
1:200.000      0.95             0.95      0.92      0.9
1:500.000      0.8              0.88      0.82      0.78
1:2.250.000    0.75             0.79      0.68      0.62
1:4.500.000    0.75             0.75      0.63      0.55
1:15.000.000   0.45             0.68      0.54      0.46
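The reconstructed relation can be checked against the measured values. The following small Python sketch evaluates ls(x) = x^(-ln(S)/ln(2)) with S = 0.95, corresponding to the first column of calculated values in Table 4; rounding differences of about 0.01 against the printed values are to be expected:

import math

def label_scaling(x: float, s: float = 0.95) -> float:
    """Label scaling factor ls(x) = x ** (-ln(s)/ln(2)) for a map scaling factor x
    (base scale 1:100.000); s is the label scaling for a halving of the scale."""
    return x ** (-math.log(s) / math.log(2))

measured = {1.0: 1.0, 0.5: 0.95, 0.2: 0.8, 0.044: 0.75,
            0.022: 0.75, 0.0066: 0.45, 0.0033: 0.4}
for x, ls_measured in measured.items():
    print(f"x={x:<7} measured={ls_measured:<5} calculated={label_scaling(x):.2f}")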

Queries to the Reactive Conflict Graph – Access to the Data Structure

This chapter deals with queries to the reactive conflict graph and discusses its embedding in a data structure. The reactive nodes hold as attributes the selection scale, the scale from which upwards the feature is no longer displayed, the geometric coordinates, the priority, the label text, its height and width, and additionally the reference scale to which all the aforementioned attributes apply. The attributes of a reactive edge describe the interval in which the conflict is valid. The typical query to the reactive conflict graph is: derive a static conflict graph from a given reactive conflict graph for a scale s and a clip described by a box with the corners (x1, y1) and (x2, y2).

First the nodes inside the box (clip) must be extracted, with the scale s as the third dimension. This is a three-dimensional query, and the 3D-tree supports such range queries efficiently [Preparata and Shamos, 1985]. The selection of nodes also leads to a first selection of edges, since only the incident edges can cause conflicts in the clip of the map. The remaining problem is to extract those edges whose interval attribute contains the given scale s. In contrast to the three-dimensional node search, this query is one-dimensional, and its nature is reversed: the node search is a range query that looks up which points fall into a given query range, whereas each edge carries an interval attribute and it must be tested whether the queried scale lies within that interval. Efficient data structures for answering this kind of stabbing query are segment trees and interval trees. Details about the implementation and estimates of time and space complexity can be found in [Heber, 1998]. With such an index a static conflict graph can be derived quickly from the reactive data structure for any desired scale and clip.

Generating the reactive data structure itself can take minutes, but it is generated once and for all in a preprocessing step; in practice it can, for example, be delivered together with the data on a CD-ROM. These static precalculations dramatically reduce the dynamic response time, which is the time the user actually experiences. Our prototype labels realistic scenarios in real time, with running times noticeably under one second (see [Heber, 1998]).
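The brute-force sketch below only illustrates the semantics of the two queries; the prototype answers them efficiently with a 3D-tree and an interval/segment tree as described above, and the node and edge field names here are our own assumptions, not those of [Heber, 1998].

    def derive_static_graph(nodes, edges, clip, scale):
        # nodes: dicts with keys 'id', 'x', 'y', 'select_from', 'select_to'
        # edges: dicts with keys 'a', 'b' (node ids), 'valid_from', 'valid_to'
        x1, y1, x2, y2 = clip
        # 1. Range query over (x, y, scale): which features lie inside the clip
        #    and are selected at this scale?
        visible = {n["id"] for n in nodes
                   if x1 <= n["x"] <= x2 and y1 <= n["y"] <= y2
                   and n["select_from"] <= scale <= n["select_to"]}
        # 2. Stabbing query: keep edges whose conflict interval contains the
        #    requested scale and whose end nodes are both visible.
        conflicts = [e for e in edges
                     if e["a"] in visible and e["b"] in visible
                     and e["valid_from"] <= scale <= e["valid_to"]]
        return visible, conflicts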


Conclusions

In this paper we have presented a new approach to automatic label placement that is especially suitable for screen maps which are generated dynamically in the course of navigating, interactive access to a geographical information system. Label placement is already a rather complex task, and interactive GIS adds further requirements: basic functions such as scrolling and zooming must be supported. The main challenge, however, is to achieve response times short enough that the system does not bore the user. This effect is well known from the World Wide Web, which is frequently called the "World Wide Wait" because the flow of information is too slow. Future-generation GIS should generate high-quality screen maps in such a way that the time the system needs to generate a map successfully competes with the time the user needs to access the relevant information.

The main technical contribution of this paper is the concept of a reactive conflict graph. A conflict graph represents all potential conflicts between pairs of point features and avoids time-consuming geometric computations at runtime. If point features are regarded as agents competing with each other for space, such a conflict graph allows one to estimate the degree of competition and the difficulty of achieving an appropriate labeling in any area, thus providing a basis for powerful heuristics such as "divide and conquer", "most constrained", and "least commitment". If the degree of competition is regarded as just another dimension, the problem structure may be represented by a conflict range.

At first the conflict graph is just a static concept and as such only supports label placement in static maps. If the edge representing a potential conflict between a pair of nodes is augmented by the range of scales at which the conflict occurs, it becomes a scale-independent data structure. Together with an appropriate multi-indexing scheme supporting both scrolling (3D-tree) and zooming (interval tree) it becomes a reactive data structure. The reactive conflict graph is independent of scale and clip; the static conflict graph for a given scenario is derived from it simply by fixing scale and clip.

The technique described above has been implemented and tested in a running prototype. A typical screen map with 60 to 100 labeled point features (automatically selected out of a set of 2500) is generated on a standard MS Windows 95 Pentium computer (133 MHz) in a fraction of a second; in fact, the precision of the system clock does not allow us to give exact figures different from zero. When the user scrolls the window or changes the scale by zooming in or out, the positions of the labels are recalculated "at the stroke of a key". The idea behind the reactive data structure is to minimize dynamic runtime by extensive preprocessing. Thus there is a difference between the static phase (specific to the whole database), which is non-critical (up to 5 minutes), and the dynamic phase (specific to a single query or a single map), which is critical (no more than about 0.1 seconds). Since all potential conflicts are already represented in the conflict graph, no geometric computation is needed in the dynamic phase. For the final label selection any efficient heuristic algorithm such as greedy, gradient descent or simulated annealing may be used.
Whereas the current prototype focuses on labeling point features, the concepts of a conflict graph and reactive data structure are more general and can be extended to line and area features. More sophisticated cartographic rules and constraints will be integrated in the future.

Acknowledgments

We thank Prof. Dr.-Ing. Dieter Morgenstern, Director of the Institute of Cartography and Topography, for substantial advice. Many discussions with Gerd Gröger and Thomas Kolbe helped to clarify the ideas presented here.


References

Christensen, J. (1995). Managing Designs Complexity: Using Stochastic Optimization in the Production of Computer Graphics. Ph.D. Thesis, Center for Research in Computing Technology, Harvard University, Technical Report tr-10-95, Cambridge, Massachusetts, USA.
Christensen, J., Marks, J. and Shieber, S. (1992). Labeling Point Features on Maps and Diagrams. Technical Report tr-25-92, Harvard University, Cambridge, Massachusetts, USA.
Christensen, J., Marks, J. and Shieber, S. (1995). An Empirical Study of Algorithms for Point-Feature Label Placement. ACM Transactions on Graphics.
Edmondson, S., Christensen, J., Marks, J. and Shieber, S. (1996). A General Cartographic Labeling Algorithm. Manuscript, Harvard University, Cambridge, Massachusetts.
Freeman, H. (1984). AUTONAP - An Expert System for Automatic Map Name Placement. Proceedings of the 1st International Symposium on Spatial Data Handling, Universität Zürich-Irchel, 544-569.
Heber, M. (1998). Vorausberechnung reaktiver Datenstrukturen zur schnellen Beschriftung von Landkarten. Diploma Thesis, Institute for Computer Science, University of Bonn.
Hirsch, S.A. (1982). An Algorithm for Automatic Name Placement around Point Data. The American Cartographer, 9(1).
Imhof, E. (1962). Die Anordnung der Namen in der Karte. Internationales Jahrbuch für Kartographie, Gütersloh, Bd. 2, 93-129.
Pearl, J. (1984). Heuristics - Intelligent Search Strategies for Computer Problem Solving. Springer.
Petzold, I. (1996). Textplazierung in dynamisch erzeugten Karten. Diploma Thesis, Institute for Computer Science, University of Bonn.
Petzold, I. and Plümer, L. (1997). Plazierung der Beschriftung in dynamisch erzeugten Bildschirmkarten. Nachrichten aus dem Karten- und Vermessungswesen, Reihe I, Heft Nr. 117, Verlag des Instituts für Angewandte Geodäsie, Frankfurt a. M.
Preparata, F.P. and Shamos, M.I. (1985). Computational Geometry: An Introduction. Springer.
Sedgewick, R. (1997). Algorithms in C: Fundamentals, Data Structures, Sorting, Searching. Addison-Wesley.
van Oosterom, P. (1993). Reactive Data Structures for Geographic Information Systems. Oxford University Press.


Session / Séance 36-B Towards an Evaluation of Quality for Label Placement Methods

Steven van Dijk Dept. Computer Science, Utrecht University [email protected]

Marc van Kreveld Dept. Computer Science, Utrecht University [email protected]

Tycho Strijk Dept. Computer Science, Utrecht University [email protected]

Alexander Wolff Institut für Informatik, Freie Universität Berlin [email protected]

Abstract The cartographic labeling problem is the problem of placing text on a map. It is composed of two phases. In the first phase, it is settled which map features should in principle receive a label, and the style (i.e. color and font) of these labels is determined. The second phase consists of the actual label placement: for each feature one has to decide whether there is in fact sufficient space and, if so, the best location and shape of the label must be determined. This paper proposes a quality measure for the result of the second phase, allowing the comparison of label placement programs.

Introduction

For a human cartographer label placement is a tedious and labor-intensive task that can take up to 50% of the total map design time. This explains why there have been many attempts to automate name placement, mostly in the last two decades. Automated label placement requires that the standard guidelines for label placement be formalized. The simplest of all rules to follow is that names shouldn't overlap. Most automated label placement methods described in the literature comply with this rule; however, only a few methods take into account other aspects such as aesthetics and avoiding ambiguity. In this paper we give a classification of most requirements relevant to the positioning of names on a map. We develop a quality function for label placement methods that measures how well a method places labels on a given map. Such a function is useful in many ways. Firstly, it helps to understand what contributes to good label placement in general. Secondly, it helps to develop label placement software that takes into account the various aspects of label placement. High-quality cartographic label placement involves many, often conflicting factors. This makes it difficult to develop software that takes into account all aspects simultaneously and is efficient at the same time. This brings us to the third use of the quality function: it provides a way to compare different label placement methods. From a computational point of view, it is much easier and faster to measure quality than to optimize it.

The remainder of this paper is organized as follows. In the next section we discuss, at a global level, criteria for high-quality map labeling as described in the cartographic literature, and list those that are considered by automated label placement methods. In Section 3 we develop a framework for a quality function in which nearly all criteria can be placed. In Section 4 we give an example of a fully operationalized quality function that fits into our framework. The full version of this paper gives a more complete description and several possible extensions as well.

Aspects of lettering maps

A map, on paper or on a screen, consists of a number of geographic objects together with annotation. Geographic objects are objects that have coordinates associated with them, and can be zero-, one-, or two-dimensional. They can also be symbols, or composite objects. Annotation consists of labels, i.e. text associated with geographic objects, and miscellaneous map objects like the title and legend. The main characteristic of annotation is that its position on the map is not determined directly by coordinates in the real world.

Cartographic criteria

The cartographers Imhof, Alinhac, and Yoeli have each listed a number of requirements for high-quality labeling (Imhof, 1975; Alinhac, 1962; Yoeli, 1972). These requirements have been summarized in various textbooks and surveys, e.g. (Dent, 1996; Robinson et al., 1995). Here we briefly list the major high-level rules again. Notice that not all aspects of these rules are relevant to the placement of text, for instance the choice of font.
· Legibility: influenced by font size, font color (contrast), overlap with other labels and features, and label position relative to its feature. In addition, labels of different features should not be placed close to each other on a horizontal line.
· Aesthetics: influenced by the choice of font, shape of text, clustering (clutter), and accidental regularity in text.
· Harmony: it is considered good practice to select one typeface, but allow several variants of a type family, e.g. allow Times roman and italic, variation in weight (light, medium, bold), and a small number of font sizes. Furthermore, a particular variant and color should be chosen for all map features of the same type; for example, all rivers should have blue labels.
· Unambiguity: involves avoiding text close to objects it does not correspond to, and avoiding objects between a label and its object. Harmony may help to resolve ambiguities: if river names are blue and city labels black, a river name won't be mistaken for a city name, even if it is the text closest to the city.
· Not disturbing the map contents: text is placed on top of other objects, but should not cover important information or relevant details of the map. (Note that here the emphasis is on the map background, while in the legibility criterion the emphasis is on the label.)
· Suggesting the position, orientation, shape, and extent: one example of this aspect is that larger cities should have their name in a larger font than smaller cities, and that coastal towns should have their name in the sea or ocean. Other examples are map features with indeterminate boundaries, features whose boundary is not explicitly shown, and composite features like groups of islands.

Criteria in automated name placement research and software

The rules used in methods for automated label placement are usually much more 'down to earth' than the cartographers' requirements. Most attention has been given to labeling point features. In many cases, the only requirement is that labels may not overlap and that the bounding box of the text touches the point to be labeled.


Yoeli (1972) was the first to incorporate position preferences in his algorithm; many others followed later. Hirsch (1982) used buffer circles around points: by forbidding labels to intersect these buffers, he prevented ambiguity. Ahn and Freeman (1984) developed a name placement system called AUTONAP that includes area labeling; AUTONAP adapts the label shape to the so-called skeleton of an area. Langran and Poiker (1986) combine label placement and point selection by removing the least important points in overcrowded map regions so that the remaining labels can be placed without overlap. Jones (1989) tried not to disturb the map contents by placing a grid of pixels on the map. Each feature class has an overlap priority; for each pixel the highest-priority feature overlapping that pixel is determined, and for each label position the amount of overlap is the sum of the priorities of the pixels covered by the label. Labels are preferably placed in positions with a low amount of overlap. Cook and Jones (1990) describe a Prolog rule that determines whether the label of a city is placed on the same side of a river or boundary as the city. Another Prolog rule they describe halves the distance of a label to its point if the point lies in a crowded area; this is done in order to prevent ambiguity. Doerschler and Freeman (1992) also describe a rule-based approach. One rule they use in order to avoid ambiguity is that route numbers may not be placed at the intersection of two roads. Edmondson et al. (1997) implemented a simulated-annealing algorithm that gives penalties for label-label and feature-label overlaps. They also introduced a metric to measure how well a line label is placed with respect to the line. Pinto and Freeman (1996) introduce a measure that evaluates how well an area label is placed in five respects. In the software package Maplex (1998), for each feature class one can set feature and label weights that determine which labels may hide which features. In addition, a user can set the size of label-to-feature and label-to-label buffers that determine the minimum separation between a label and other features or other labels. Preuß (1998) proposed a fitness function for point labeling as part of an evolution-strategies approach that incorporates the following elements. Firstly, the label of a point feature must keep a certain distance from any other point features and labels. Secondly, all labels that don't belong to a point feature must keep a certain distance from it. Thirdly, there is a minimum and a maximum for the distance between a point feature and its own label. The fitness function of Preuß also takes into account the preferred positions for a label around a point feature. In addition, he computes the local density around each point feature and uses these densities in a function that penalizes clustered labels and a large variance of local density.

Modeling aspects for the quality function

The quality function for label placement that we will develop is a special type of model. The concept 'model' is a very broad one, so we first specify what characteristics and what type of model we set out to define. Usually, a model is a simplified version of the real world, one that captures only the aspects essential to the use of the model. In our case, the 'real world' is the map including the labels, and the model is a function that represents the quality of the placement. The output of the quality function should be easy to interpret. The easiest type of output is a single value for the quality; the better the labels are placed, the higher this value should be. Such an output allows comparing two different labelings of the same base map. In our opinion, the value may at best be interpreted on an ordinal scale, since statements about exactly how much one labeling is better than another will generally be meaningless. Defining an ordering on labeling quality is already ambitious; the goal is to define an ordering at least for all cases where one labeling is clearly better than another. If two human cartographers disagree on which label placement is better, then the ordering of the labelings by the quality function can of course follow only one of the judgements.


A specification for name placement quality

Our model aims for the properties simplicity, relevance, tractability, and understandability. Simplicity is a general asset of a model. The type of relevance we expect of the model is that the quality function really gives an ordering by quality of different labelings of the same map. Tractability is necessary since the quality function should be a tool to be implemented that allows automated comparisons. The understandability of the model is important since it should be adjustable to give better results. To this end, the quality function should depend on only a few parameters whose tuning has predictable effects on the quality function output.

Elements of the map

In order to deal formally with map labeling, we first need to specify what constitutes a map. A map is a rectangular area with the following map objects that have a fixed position:
· Point features, which can be point symbols or other symbols.
· Linear features, which can be lines or curves, and may branch and join.
· Areal features.
Each of these features has a display style including aspects such as color, pattern, and width for lines. Note that there are many types of lines: they can represent roads or rivers, administrative boundaries, parallels of latitude, or contour lines. A map usually contains other objects as well, objects without a fixed position in the real world:
· Title, legend, and other insets of the map.
· Text labels associated with any of the three types of features listed above.
· Leader symbols such as arrows that convey the association between a feature and its label.
· Diagrams and charts located close to the features they are associated with.
All of these objects have an explicit position on the final map, and the map can be described purely by the geometry (coordinates) and display style of its objects.

A-priori restrictions of label placement

Label placement is considered to be done by some method or algorithm. The a-priori labeling restrictions specify beforehand what type and shape of labels can be generated by the method. One reason to impose such limitations beforehand is that the quality function to be developed need not deal with the most pathological situations; the advantage is that the quality function will give a better indication of the quality in the more normal cases. A second reason is that the quality function to be developed later can be simpler, and simplicity is a desirable property for any model. The first limitation we make is that any feature receives at most one label on the map. Secondly, we don't consider the map elements title and legend, we don't deal with diagrams and charts on the map or with visual effects like hill shading, and we won't treat marginal annotation. The remaining limitations concern the allowed shape of labels and differ for point, linear, and areal feature labels, so we treat these separately. Point labels are text blocks that must align with parallels of latitude on the map. On large-scale maps the parallels are (nearly) horizontal, and all point labels should be horizontally aligned text. The text itself and the type, typeface, font, and color of the text are fixed for each separate point feature. The text block is treated as a rectangular block, somewhat like a bounding box; if the parallels are curved, then the rectangle is also curved. The text block is completely specified by the coordinates of one corner at the start and bottom of the text, and the height and width of the text.


Unlike point labels, the text of line and area labels may follow curves. For each area label, an inter-character distance may additionally be specified.

Input and output of a label placement method

Before a label placement method can start, a number of choices about the map design must be made: for instance, what features to display, what colors to use, and which map projection to use. Intuitively, the map background has been drawn but no labels have been placed yet; however, it has been decided which features should receive labels and in which font and size. Let F = F+ ∪ F- be the set of features, with F+ (F-) the set of features that are (not) to be labeled, and let L be the set of labels. Note that the features in F- must also be part of the input, since they must be considered when placing the labels of other features. Thus we formalize the input to a label placement method by the set F- of features that are not to be labeled and the set P of pairs <f, l> where f is a feature in F+ and l the corresponding label in L. Each feature has the following attributes:

feature
· geometry (position, shape, orientation)
· display style (color, pattern, ...)
· priority

The priority is an attribute that captures how important a feature is for the intended map use. Each label on the map is modeled as follows:

label
· text
· display style (color, typeface, font, ...)
· priority
· geometry (position, shape, reading direction)

The priority of a label may differ from the priority of its feature, because sometimes the visibility of a feature is more important than its label. The fourth attribute of a label is the part computed by a label placement method; if a label isn't placed, the geometry attribute states 'unlabeled'. We assume that the output of the label placement method consists of F- and P, and that the geometry attribute of each <f, l> in P is specified if l could be placed. Note that the output contains more information than the labeled map itself, because from a map one cannot determine the priority of its features and labels, nor which labels were intended but could not be placed. This information is necessary to evaluate the quality of a label placement.
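A minimal data model for this input/output formalization might look as follows; the field names mirror the attribute lists above, while the class names and the use of None for 'unlabeled' are our own choices, not part of the paper.

    from dataclasses import dataclass
    from typing import Any, Optional

    @dataclass
    class Feature:
        geometry: Any          # position, shape, orientation
        style: str             # display style: color, pattern, ...
        priority: int          # importance for the intended map use

    @dataclass
    class Label:
        text: str
        style: str             # color, typeface, font, ...
        priority: int          # may differ from the feature's priority
        geometry: Optional[Any] = None   # set by the placement method; None = 'unlabeled'

    # Input:  F_minus (features not to be labeled) and P, a list of (Feature, Label) pairs.
    # Output: the same sets, with Label.geometry filled in wherever the label could be placed.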

Form of the quality function

The quality function evaluates how well a label placement method performed its task. It should reflect that the quality is low when labels are missing or poorly placed, especially if high-priority labels are involved. The input to our quality function consists of the output of the label placement method. The quality function for label placement is a combination of the qualities of the individual label positions; this makes it possible to trace low quality back to the labels that cause it. There are four aspects to the quality of label placement represented in our function. They form a breakdown of the overall label placement quality into natural categories with little overlap in criteria. Let L* ⊆ L be the set of all labels from L that were placed by the method. We outline the four categories and specify for each which parameters are needed to evaluate it. The first parameter is always the one whose quality is being assessed. Note that the partial quality functions also evaluate the corresponding feature and/or label priorities, since we consider these to be attributes of the features and labels.


Aesthetics. This aspect represents the quality of the shape of the label itself and is not influenced by the position on the map or by other map features. The aesthetic quality of a label l ∈ L is denoted:

quality_aesthetics(l)

Label visibility. This aspect represents how well a label is visible on the map. It is influenced by other features and labels. The visibility quality of a label l is given by:

quality_label-vis(l, F, L*)

Feature visibility. This aspect represents how well a feature is visible on the map. It is influenced by other features and labels. The visibility quality of a feature f is given by:

quality_feature-vis(f, F, L*)

Label-feature association. This aspect represents how clear it is that a particular feature and label are associated. It is partly influenced by other map features and labels. The association quality of a feature-label pair <f, l> is given by:

quality_association(<f, l>, F, L*)

Note that the four categories are not only distinct conceptual (or perceptive) notions; the quality functions indicate that the parameters needed to evaluate each of them are different too. This supports the claim that the four categories have only little overlap. The quality function for label placement of the whole map is a combination of the qualities of the individual features and labels. It has the form:

Quality(P ∪ F-) = g( ⊕_{l∈L} quality_aesthetics(l),
                     ⊕_{l∈L} quality_label-vis(l, F, L*),
                     ⊕_{f∈F} quality_feature-vis(f, F, L*),
                     ⊕_{<f,l>∈P} quality_association(<f, l>, F, L*) )

The ⊕-combinators (e.g. summations) iterate over all elements of the corresponding set, and g weighs the contribution of each quality category according to the given application. In technical maps, for instance, the visibility of labels and a good label-feature association are more important than aesthetics or the visibility of objects that constitute the map background.
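As an illustration of how the four partial qualities are combined, here is a hedged Python sketch of the framework. The arguments q_aesthetics, q_label_vis, q_feature_vis and q_association stand for whatever concrete partial functions are chosen (for instance those of the next section); the defaults for the combinator and for g correspond to the simple sums used there, and all names are ours.

    def overall_quality(P, F_minus,
                        q_aesthetics, q_label_vis, q_feature_vis, q_association,
                        combine=sum,
                        g=lambda a, b, c, d: a + b + c + d):
        F = [f for f, _ in P] + list(F_minus)              # all features on the map
        L = [l for _, l in P]                              # all labels that should be placed
        L_star = [l for l in L if l.geometry is not None]  # labels actually placed
        return g(combine(q_aesthetics(l) for l in L),
                 combine(q_label_vis(l, F, L_star) for l in L),
                 combine(q_feature_vis(f, F, L_star) for f in F),
                 combine(q_association(f, l, F, L_star) for f, l in P))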

An example function for name placement quality

This section describes a relatively simple quality function as an example. There are several ways to extend and improve upon the function outlined here. Firstly, we let the combinators ⊕ return the sum of their arguments. The g-function adds up the four individual quality function values, corresponding to a map purpose where the four qualities are considered equally important. The form of the quality function is:

Quality(P ∪ F-) = Σ_{l∈L} quality_aesthetics(l)
                + Σ_{l∈L} quality_label-vis(l, F, L*)
                + Σ_{f∈F} quality_feature-vis(f, F, L*)
                + Σ_{<f,l>∈P} quality_association(<f, l>, F, L*)

The summations are done over all labels that should be on the map, all features that are on the map, and all feature-label pairs that should be on the map, including those where the label had to be omitted. Next we describe how the four quality functions can be defined. For simplicity we choose to let each quality function take values from 0 (lowest quality) to 100 (highest quality) for each feature or label. The priorities of the labels and features could be used to weigh these quality values, to incorporate the fact that certain features and labels are more important than others; we omit this issue from further consideration here. In the four quality functions, the label is considered a text block that has some area.

The aesthetic quality function

With the labeling model and requirements in mind, the aesthetic quality of point labels is not interesting; for line and area labels it is determined by the shape. According to Imhof (1975) and Alinhac (1962) a label should not have more than one inflection point, and the curvature of a label should be small (Figure 5). Therefore, we define the aesthetic quality of a label l as follows:

quality_aesthetics(l) =
  100 if the baseline and topline have at most one inflection point and curvature at most that of a circle whose radius is the text height;
  0 if the baseline and topline have more than one inflection point and curvature greater than that of a circle whose radius is the text height;
  50 otherwise.

We define the aesthetic quality of a missing label to be 0, although another choice, such as 100, may actually be better in some cases.
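A direct transcription of this three-case definition as Python; the number of inflection points and the maximum curvature of the baseline/topline are assumed to come from a separate curve-analysis step that is outside the scope of this sketch.

    def quality_aesthetics(inflections: int, max_curvature: float, text_height: float) -> int:
        # A circle whose radius equals the text height has curvature 1 / text_height.
        limit = 1.0 / text_height
        if inflections <= 1 and max_curvature <= limit:
            return 100
        if inflections > 1 and max_curvature > limit:
            return 0
        return 50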

The label visibility quality function

Text is considered as a text block with a certain area. This area can be disturbed if it overlaps with other text or if line features or area boundaries run through it. A line or boundary is considered to disturb all parts of the text block within a certain small distance, say 1 millimeter, from the line or boundary (Figure 6). The region within some distance from an object is called the buffer or dilation of that object. For a label l we define quality_label-vis(l, F, L*) as the percentage of the area of l that is overlapped neither by other labels nor by the buffers of line features and boundaries. If a label is missing, its visibility quality is 0.

The feature visibility quality function

Since point features are generally small, they are visible only if no label intersects them. So we define the visibility quality of a point feature to be 100 if it is not intersected by a label, and 0 otherwise. For line features we are interested in how much of their length is covered by labels; we define the visibility quality of a line feature as the percentage of its length that is not covered by labels. Similarly, for area features the visibility quality is defined as the percentage of the area that is not covered by labels. Here we exclude the label of the area itself.

The association quality function

The association quality of a point feature and its label is good if the point p and label l are close, but no other point is close to the label l and no other label is close to the point p. We define:


quality_association(p, l, F, L*) =
  100 if l is within half the text height from p, no other point is within once the text height from l, and no other label is within once the text height from p;
  0 otherwise.

For line features the situation is more complicated because we prefer labels that are close to the line over the full label length. Therefore, we define the quality by thickening the line with a buffer of width 1½ times the text height, and take the percentage of the label area that lies within this buffer. For area features the situation is yet more involved because the label should be close to most of the area. Intuitively, the part of the area that is associated with the label lies within a certain distance from it. Again the idea is to use a buffer, but this time we take a buffer around the text; its width is chosen as twice the text height. The quality of association of an area feature a and its label l is defined as (Figure 7):

quality_association(<a, l>, F, L*) = 100 · Area(buffer(l) ∩ a) / Area(buffer(l)).

If the label is missing, then the association quality is defined to be 0.
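Two of these geometric quality values can be computed directly with standard polygon operations. The sketch below uses the Shapely library (our choice, not mentioned in the paper), whose buffer() and intersection() correspond to the dilation and overlap-area operations used in the definitions above.

    from shapely.geometry import Polygon
    from shapely.ops import unary_union

    def quality_label_vis(label: Polygon, other_labels, lines, buffer_mm: float = 1.0) -> float:
        # Percentage of the label area overlapped neither by other labels nor by
        # the 1 mm buffers of line features and boundaries.
        disturbing = unary_union(list(other_labels) + [ln.buffer(buffer_mm) for ln in lines])
        return 100.0 * (1.0 - label.intersection(disturbing).area / label.area)

    def quality_association_area(label: Polygon, area: Polygon, text_height: float) -> float:
        # 100 * Area(buffer(l) intersected with a) / Area(buffer(l)),
        # with a buffer of twice the text height, as in the definition above.
        buf = label.buffer(2.0 * text_height)
        return 100.0 * buf.intersection(area).area / buf.area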

Conclusions

This paper discussed the automated evaluation of how well a label placement method performed its task on a given input. This involved developing a general model, a formalization, of the shapes of types of labels, and of the input and output of a label placement method. The evaluation is given as a function that maps the input and output of a label placement method to a value representing the quality. The main use of such a function is for comparing how well different label placement algorithms perform their task on the same input. In the full paper we list roughly sixty criteria for good label placement and classify them as aesthetic quality, label visibility, feature visibility, and label-feature association. Here we only gave some examples of these criteria for the four classes. Then we gave a concrete example of a quality function, specifying fully the quality of a label placement method on an input. We kept this function very simple; our goal was to capture the most important criteria in a small number of geometric concepts that can be computed automatically. These concepts include inflection points of curves, curvature, distance, buffers (dilation), and area of overlap. Several types of extension to this research are possible. The evaluation function can be extended to incorporate more criteria, like taking color of features and text into account, and testing for regularity in label positions. The a-priori restrictions on labeling can be relaxed, for example by allowing more than one line per label. Furthermore, it is possible to generalize label placement evaluation to typography evaluation by including, for instance, the choice of font. Such extensions and the issues involved are discussed in the full paper. We believe that a formalization, a quantification, of all criteria for label placement by Imhof and Alinhac will lead to a better understanding of the final goals for automated, high-quality label placement. Eventually this should lead to efficient, high-quality label placement algorithms themselves. This paper is a first step towards such a formalization.

Acknowledgements.

Steven van Dijk, Tycho Strijk, and Marc van Kreveld are supported by the Dutch Organization for Scientific Research (NWO). Alexander Wolff is supported by the Deutsche Forschungsgemeinschaft (DFG) under grant Wa 1066/3-1. Furthermore, this research is supported in part by the ESPRIT IV LTR Project No. 28155 (GALIA).


References

Ahn, J., and H. Freeman (1984). AUTONAP - an expert system for automatic map name placement. In Proceedings International Symposium on Spatial Data Handling, pages 544-569.
Alinhac, G. (1962). Cartographie Théorique et Technique, chapter IV. Institut Géographique National, Paris.
Cook, A.C., and Jones, C.B. (1990). A Prolog interface to a cartographic database for name placement. In Proceedings Fourth International Symposium on Spatial Data Handling, pages 701-710.
Dent, B.D. (1996). Cartography, chapter 14. Wm. C. Brown Publishers.
Doerschler, J.S., and H. Freeman (1992). A rule-based system for dense-map name placement. Communications of the ACM, 35:68-79.
Edmondson, S., J. Christensen, J. Marks, and S. Shieber (1997). A general cartographic labeling algorithm. Cartographica, 33(4):13-23.
ESRI (1998). Maplex - Automatic Cartographic Name Placement Software.
Hirsch, S.A. (1982). An algorithm for automatic name placement around point data. The American Cartographer, 9(1):5-17.
Imhof, E. (1975). Positioning names on maps. The American Cartographer, 2(2):128-144.
Jones, C. (1989). Cartographic name placement with Prolog. IEEE Computer Graphics & Applications, 9(5):36-47.
Langran, G.E. and T.K. Poiker (1986). Integration of name selection and name placement. In Proc. Auto-Carto 8, pages 50-64.
Pinto, I. and H. Freeman (1996). The feedback approach to cartographic areal text placement. In P. Perner, P. Wang, A. Rosenfeld, editors, Advances in Structural and Syntactical Pattern Recognition, pages 341-350. Springer, New York.
Preuß, M. (1998). Solving map labeling problems by means of evolution strategies. Master's thesis, Fachbereich Informatik, Universität Dortmund.
Robinson, A.H., J.L. Morrison, P.C. Muehrcke, A.J. Kimerling, and S.C. Guptill (1995). Elements of Cartography, chapter 22. John Wiley & Sons, Inc.
Yoeli, P. (1972). The logic of automated map lettering. The Cartographic Journal, 9:99-108.


Session / Séance C3-D Production flowcharting for mapping organisations: A guide for both lecturers and production managers

Sjef J.F.M. van der Steen ITC, Cartography Division P.O.Box 6, NL-7500 AA Enschede, The Netherlands E-mail: [email protected]

Abstract In parts of the world the introduction of digital mapping processes is an established fact; in other regions the careful conversion from traditional to digital mapping is only just starting. Either way, the changes are taking place rapidly, and without an appropriate organisation of workflows it is impossible for mapping organisations to keep control of the new digital processes. 'Cartographic flow charting for production processes' has already been presented at the International Cartographic Conferences in Cologne and Barcelona. But since the application of flow chart symbols has since been evaluated and improved, and since digital mapping has been introduced, the time is now ripe to introduce flow charts for entire mapping processes, supporting both educational introduction and managerial functioning in production organisations. Investigations, studies and experience in the use of flow management, together with new developments in flow-charting software, enable a wider and easier use of flow diagrams in a digital mapping environment than ever before. The example given in the paper also deals with flow charting beyond the cartographic field. The basis for production flow charts (PFD) is determined within the cartographic process, but since we increasingly deal with processes beyond the cartographic field as well, a wider use of flow charts is foreseen. The main benefits of flow charting are therefore seen not only at the level of cartographic operations and management, but in the entire spatial data handling context, including the education component. The paper explains and displays the principles and symbolisation of flow chart items for digital mapping. A brief description of the subsequent tasks of flow chart construction indicates the activities needed to arrive at such a diagram. Further, the practical use and the relations between processes and products visualise the dependency of the various processes and their results, the products. As a main example the production of a satellite image map serves to clarify the process flowchart model.

The process of flow-chart design

Flow chart construction requires a thorough study and knowledge of the process concerned. The designer of the flow chart needs adequate and detailed information on all individual parts of a large process, and therefore has to make use of other people's knowledge and experience. Moreover, information is required from the agencies that deliver the software applied in the processes.


Further, it is essential to know which level of process flow chart we are speaking of. In our case we have adopted the 'top down' approach [Paresi, 1998]. In his lecture notes Paresi deals particularly with Data Flow Diagrams (DFD), while we concentrate mainly on Process Flow Diagrams (PFD). In setting up such an approach, the boundaries of the entire process have to be defined first. In the mapping environment the context process flow diagram (CPFD) displays only general processes from the perspective of existing fields or professions, such as cartography or remote sensing. How the boundaries of the CPFD are predetermined depends on the field of the process. As an example, one can define the cartographic process as a CPFD (see Figure 1); equally, when GIS is applied, the GIS process can be displayed in the context process chart. In our case we have taken the CPFD to comprise the entire mapping process. It should be clear that users cannot distil much detail out of the CPFD; managers, lecturers, trainees and operators require much more information for the execution of their tasks. However, as an overview of the main processes and as the starting point for further detailing, this type of diagram is very valuable.

Figure 1. A Context Process Flow Diagram of a mapping process project

Figure 2. Two samples of a Top Level PFD of a main process in a mapping project (level 1)


As mentioned, the CPFD is detailed into lower-level diagrams. The entire CPFD is broken down into main processes symbolised as separate PFDs. At this diagram level (the top-level diagram) one just distinguishes separate processes (see Figure 2). The top-level PFD does not display detailed activities and results of the processes, but it is a useful start for refining the diagram further into the level-two PFD, in which every small process is symbolised and the products resulting from processes are visualised. Due to the complexity of some main processes it may, however, be necessary to detail further to PFD level 3. For extensive detailing one might make use of the Work Breakdown Structure (WBS). Work breakdown is often applied to large projects; WBS provides managers with the details of complex projects, and mapping is such an extended and complex process. With WBS it is possible to find the smallest manageable unit that can be distilled from the analysis of a large process. PFD levels 2 and 3 visualise a process with all its important elements. The diagram displays all the important activities and the input and output of those activities. Further, details such as reference numbers, file formats, file names and software or software modules provide the user of the process flow diagram with the most essential elements for his or her tasks. The following page shows a PFD level 2. Activities and results of activities are symbolised, including the input-activity-output relation. A legend comprising the symbols explains the important items of the diagram.

The fundamentals and the symbolisation

When dealing with process flow charts, the fundamentals have to be established first. The designer of the diagram must make a thorough analysis of a process and of the smaller processes and activities into which it decomposes, because in a process flow diagram we wish to display the symbolisation of the many processes extracted from a large context process. Anyone who carries out such a process analysis discovers that every single process consists of similar, common elements. The basic elements of an individual process in PFD level 2 or level 3 can be classified as:
• the input element,
• the activity element and
• the output element.
In this case the diagram displays the symbology of CAD data files and the editing/correction activity. For consistency with earlier standardisation, the symbolisation could not follow examples like those displayed in Managing Geographic Information System Projects. These basic elements are closely related to each other; for clarification they are described below.
- Input items are the product elements that deliver information for the follow-up of the input: the process activity.
- The (process) activity element forms the dynamic part of the entire process; it is the dynamic centre of a small process.
- The result of a single process is called the output element. In fact each individual output element is the beginning of a successive activity: what is output of a certain activity will be the input element of the successive activity.
Input and output elements are products, elements that require a certain activity to become another product. As mentioned, a main process is decomposed into separate processes, which all depend on each other. As a result, and to clarify the relations, a link has to be made between the various elements of the single processes.
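Purely as an illustration of the input-activity-output relation just described, the elements of a level-2 or level-3 PFD could be represented along the following lines; the class and field names are our own, and the extra fields anticipate the reference numbers, file formats and software mentioned above.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Product:                      # an input or output element
        name: str
        file_format: str = ""
        reference: str = ""

    @dataclass
    class Activity:                     # the dynamic centre of a single process
        name: str
        software: str = ""
        inputs: List[Product] = field(default_factory=list)
        outputs: List[Product] = field(default_factory=list)

    # Chaining: the output Product of one Activity is reused as an input Product of
    # the successive Activity, which is exactly the input-activity-output relation
    # displayed in a level-2 diagram.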


Besides, since output elements form the start of a new process, they are chained to the next process activity. In the symbolisation, relational lines may differ with respect to their function: some relations imply permanent changes, some temporary changes. To display the difference, continuous lines and broken lines are used as the symbolisation of the two.

Figure 3. The design of a Process Flow Diagram. (level 2)


Figure 4. A drawing of the three elements of one individual process.

The process flowchart functionality: a perspective view for the potential users

The use of process flow diagrams varies with the structure of an institute or company. If an organisation does not work with an appropriate management structure it does not make sense to adopt PFDs. If it is well organised, however, production managers, operators and trainees all benefit from tools like process flow diagrams. In all cases, once applied, the diagrams are made available to everybody dealing with map production. In order to clarify the benefits for potential users, tables with classes of users and functionality have been listed. Four tables are given, concerning:
- management matters prior to a production process
- keeping track of the various processes
- evaluation after the process has been executed
- communication for data/file exchange
Each of the tables is explained below.

Flow diagrams for use prior to the actual production

Regardless of the production management, certain preparatory activities have to be organised in order to fulfil the actual task of the relevant production; map production is no exception. Below we give a small table and an explanation of its items, relating them to their function and their advantages for the preparation of map production.

Table 1. Functionality for process flow-chart users. The table lists each function item together with the users for whom it is particularly useful.

Function item:                   Particularly useful for:
a. time consumption              Operator and manager
b. equipment utilisation         Manager
c. skills of employees           Operator and manager
d. cost pre-calculation          Manager, client
e. contracting                   Manager, client

Explanation:
a. If the diagrams are used for time estimation, the operator, the manager and other responsible employees have to check the amount of time to be spent on each particular step in the process.
b. A potential source of conflict is the availability of devices. In the digital environment in particular it may be a problem that digital production hardware is only temporarily available.
c. In order to be qualified to use the hardware and software, the skills and knowledge of the operators should be as up to date as possible. Both manager and operator should be aware of the operator's actual level of knowledge and skill, and any shortfall must be remedied prior to the actual production process represented by the flow diagram symbolisation.
d. It is evident that the main goal of successful management is proper cost control. Cost calculation begins prior to production, but cost control is also essential during and after a major cartographic process. With the flow diagrams the manager possesses a magnificent tool: the details of the production and their related symbols in the diagrams comprise exactly the elements needed to know the steps to be followed.
e. If all elements of a PFD are well defined and established, the flowchart can be applied as a predetermined part of a contract document. Nowadays elements of a production are executed externally, so useful documents for contracting and subcontracting are required; PFDs can support such contracts.

Flow diagrams as a tool for keeping track of production

One of the tasks of the manager is to follow the state of affairs of projects, and a diagram is the ideal instrument for this. Each production step appears separately and, if related to a time period, can therefore serve as an indicator for keeping track of production.

Table 2. Functionality for process flow-chart users. This time the functions are valid during the process

Function item:                   Particularly useful for:
a. time consumption              Operator and manager
b. process progress              Operator and manager
c. quality check                 Operator and manager

Explanation:
a. Deals with keeping track of the time spent on each process step.
b. In principle this item relates to time control. Here, however, it is of great advantage to know the condition of a particular process at a certain time, e.g. how much time it will still take to finish a certain production process. This function might trigger decisions on matters such as overtime. It is an ongoing concern for both operator and manager and is one of the most frequent uses of a flow diagram.
c. Quality assurance becomes increasingly important. Decision steps have to be implemented in order to build in quality controls. The flow diagram symbols are an ideal means of indicating where a quality check should take place.

Flow diagrams as a tool for evaluation

A good habit for the continuity of production in general is the evaluation of a production project. Properly applied, evaluation gives very useful information on errors, shortcomings and difficulties, but also on corrections and on matters that were executed successfully. The table indicates these advantages. For statistical purposes one should make consistent and frequent use of evaluations, which result in figures and situations delivering historical data that can be applied in the planning of new processes.


Table 3. Functionality for process flow-chart users after production execution.

Function item:                              Particularly useful for:
a. time consumption                         Manager
b. difficulties/errors/shortcomings         Manager and operator
c. skills/knowledge                         Manager

Explanation:
a. For appropriate management, the time spent on certain processes should be measured and analysed for future processes. Whether this leads to changes in processes or to reconsideration of time estimates depends on various matters and is not discussed here.
b. Difficulties, errors or serious shortcomings of a certain product or in a certain process are to be reported. If well analysed and corrected, this definitely leads to fewer problems next time.
c. If employees seem to lack the basic skills and/or knowledge for the proper execution of tasks, one should consider improvement through study or staff development before this becomes a constraint on the continuation of production. A similar problem affects the manager if he fails to know important details of the process. Therefore close cooperation should be ensured for production planning, progress monitoring and evaluation. Process steps can be compared with employees' qualifications; incompatibility between a process and the required qualifications calls for upgrading or training of staff.

Flow diagrams as communication for data exchange

A survey of current production processes shows that production is not necessarily executed entirely within one institute or organisation; nowadays complete sub-processes are sometimes executed externally. For instance, due to the high investment costs of hardware and software and the rapid depreciation of digital equipment, it is often cheaper to have the films for cartographic products output by external companies. Another example is the now common purchase of GIS data for further processing. Exchanging data files requires a well-defined agreement on formats and standards, and to avoid confusion very detailed contracts are written. Since issues like file formats can be indicated within the diagram symbols, flow diagrams can be applied explicitly to support data exchange.

Figure 5. Symbols carry more detailed information; they might include a reference number, contents, file name and file format.

Practical experiences with the latest developments.

Present-day flow charting software enables elements to be drawn from libraries, but new symbols can also be designed. This has been done for map making: a special set of symbols has been designed and placed in a new library, which can easily be extended on request.


This paper deals with flow charting applications such as FlowCharter®. It offers hyperlinking functionality, which is extremely useful for the kind of flowcharts we require. It is possible to make a connection between the drawn symbols and a file, a directory or even a program; FlowCharter enables links to the referenced information. Consequently, the addition of this function has increased the effectiveness of process charts tremendously. Links to databases, spreadsheets or files comprising process results can be recalled or displayed immediately on request. If, for example, a database has been symbolised in a PFD, the chart element can be chained to the actual database saved in another folder or on another disk drive. Once the connection has been established, the symbol on the chart can be double-clicked and, after a little while, the user can view the database; if required the database can also be adjusted, because the database software has been activated as well. Another advantage of such a system is the connection from a hyperlinked diagram to a lower-level chart: clicking the single element displays the lower-level diagram. In addition it is possible to make a link to a spreadsheet. Each relevant chart element can be linked and then acts, in this case, as a cost dialogue table to be filled with information such as costs for material, labour or equipment. In this way many combinations of software provide fine-tuning tools for appropriate management at many levels of work. One might think of setting up contracts for budgeting, quality specification and so on.

Figure 6. Linking facilities increase functionality of modern flow charting software


Conclusions

Today we experience an increasingly complex world around us. Whether we are handling the bills coming in from companies, government and banks, or working with the software and data files coming from our clients, in all cases we seem to require an increasing degree of management in order to execute our activities. Too much time is lost in repeated actions as a result of bad management.


With the introduction of process flow diagrams and their hyperlinking opportunities we facilitate better production management. We can first study the analysis of processes and then separate the processes into small units in order to make them manageable. With respect to the existing spatial data handling environment, comprising complex GIS analyses and decision-making, we benefit particularly from the management of systems and sub-systems if we really know the processes in all their details. Moreover, if we want to know the consequences of processes and decisions, we should at least know all the process principles and elements.

Hyperlinked PFDs give access to the control and management of processes and products, and in particular the disaggregation of process flow diagrams gives users access to modern management. Not only the production manager, but also the operator, lecturer and trainee benefit from the flow charting methodology explained here. Disaggregating the diagram into both a process context and the details of processes and other elements equips the potential user with additional functional support. Process flow charting combined with linking functionality may result in substantially lower production costs through less repetitious work, more structured activities and more adequate decision-making. Further, additional filing via diagram symbol links enables easier recording of information and calculation of costs. With this type of flow charting software and access to a set of different programs, we definitely have more opportunity to manage the processes; users are provided with diagram tools that were formerly neglected.

With the presentation of this paper the author wants to stress that an appropriate organisation of processes at all levels is nowadays a prerequisite for the adequate execution of processes for anyone who deals with mapping processes. One of the most useful management tools for production processes is process flow charting. PFDs allow for multi-functionality of management tools in all kinds of sections, departments and organisations. Since mapping processes are becoming increasingly complex, we can foresee a well-defined role for process flow diagrams with full functionality. Concerning management education for all aspects of the mapping process, this implies full awareness of the need to educate people dealing with a certain degree of map production organisation. These days, in order to be aware of mapping, one does not need to be a professional manager, but one should at least be able to manage oneself.

References

ter Horst, Laurens (1994). ITC Study report: Design and filling forms for data-input and attribute-assigning for digital map production.
Huxhold, William E. and Levinsohn, Allan G. (1995). Managing Geographic Information System Projects. Oxford University Press.
van der Steen, Sjef J.F.M. (1995). Flow diagrams for cartographic processes. Proceedings ICC'95, Barcelona. ICA.
Paresi, C.M. (1998). Lecture notes on the Introduction to Information Systems Development and to Information Systems Development Methodologies. ITC lecture notes.
Wilholt, Jürgen (1998). ITC Study report: New flow chart functionality for cartographic production control.


Session / Séance 36-A Automatic Bilingual Name Placement of 1:1000 Map Sheets of Hong Kong

Lilian S.C. Pun-Cheng and Geoffrey Y.K. Shea Department of Land Surveying and Geoinformatics, The Hong Kong Polytechnic University, Hunghom, Kowloon, Hong Kong email: [email protected] [email protected]

Abstract A name or label is an essential and important component of a map. Yet its placement has always been considered one of the most labour-intensive processes in manual cartography. Even with the advent of digital mapping, automated letter placement is still difficult to handle in view of the complexity of the rules and map features involved. Several programs have been developed but most of their results are still far from satisfactory (Buttenfield & McMaster, 1991); some have been noted for overlapping text, upside-down text and text placed at awkward angles. In Hong Kong, the process is further complicated by the presence of two languages (English and Chinese) together. Conversion of 1:1000 analogue map sheets to digital data has been completed, but text placement is still performed in an interactive mode. This paper provides some thoughts on establishing effective and appropriate algorithms for the full automation of name or label placement in both languages. Test samples are drawn from the 1:1000 monochromatic digital topographic map sheets, with the Arc/Info coverages established by the Land Information Centre of the Lands Department. The project aims not only at increasing speed, but also at searching for optimum map locations and maintaining logical hierarchy and consistency, thereby facilitating the task at all scales. It is expected that at full automation the program will search for optimum locations for labelling point, linear and areal features, and determine the appropriate font, dimensions and spacing in relation to their hierarchical level and the overall map design.

Background

Names and labels are essential and important components of a map. They not only tell us the nominal characteristics of the geographical feature represented, but also give an implicit understanding of the linear or areal extent and orientation of a map feature as well as its relative size or importance. Labels or names associated with different map features can be treated as attributes, useful for data retrieval or spatial analytical operations if organised systematically. However, lettering may be considered one of the most difficult and labour-intensive processes in manual cartography. Even with the advent of computer technology in most cartographic processes, automatic letter placement is considered by far the most difficult task to handle in view of the complexity of the rules and topographic features involved. Several automated label placement programs have been developed but most of their results are still far from satisfactory (Buttenfield & McMaster, 1991). Some have been noted for overlapping text, upside-down text and text placed at awkward angles.

In Hong Kong, conversion of 1:1000 analogue map sheets to digital form has been completed, but text placement (including English words and numbers) is performed only in an interactive mode. That is, for instance, if building blocks are to be labelled, the central position of each block or polygon has to be determined or adjusted manually by the user and the name is placed accordingly. This is in fact a very time-consuming, inconsistent and inflexible method: it is estimated that at least 8 man-hours are needed to complete the labelling of a full A0-size 1:1000 map sheet. Also, at changing scales, repeated determination of text position, font and dimension for different features is required. The process will be further complicated by the need and future plan to incorporate Chinese place names (in the form of discrete Chinese characters) on these general-purpose maps (Figure 1). Methods of putting these two very different languages together have recently been a concern of many software vendors in the region. Notwithstanding this, in view of the voluminous task of managing, handling and editing land information, there arises a need to automate the name placement procedure, so that it can be coupled with other GIS functionality for more optimal use of the technology.

Figure 1. Textual information of 1:1000 maps, Hong Kong.

The Hong Kong Digital Data Set

As mentioned previously, the main objectives of the project are to investigate dual-language (English and Chinese) name placement on digital maps of Hong Kong and to establish effective and appropriate algorithms for the full automation of the task. Amongst the numerous items of textual information that appear on maps, this paper focuses on the placement of building and road names, because these two sets of information are relatively more important to most urban users in the region than the labelling of point features such as spot heights and lamp posts. Besides, special attention should be paid to their dual-language requirements in labelling. Throughout the paper, 'names' and 'texts' may be used interchangeably. Before that, it is first necessary to examine the existing map data structure, so that the algorithms developed will, as much as possible, not involve drastic restructuring of the data set.

Textual information on existing 1:1000 plans mainly falls into three categories (Figure 1): names of area features such as buildings and roads; house numbers of buildings along the side bordering the pavement; and labelling of point features like spot heights and lamp posts. By area, side or point, we refer to the geometry being considered when placing a text. If this conforms to the geometry of the feature being represented digitally, algorithms for text placement may be derived and implemented fairly easily within the GIS or any CAD annotation environment. However, the data structure now employed clearly deviates from this assumption. Buildings and all point features are represented in polygon and point geometry respectively and so pose the fewest problems. The greatest difficulty lies with naming roads. While a road name should refer to the whole stretch of road, at present a road is represented as discrete line segments (Figure 2), so it is impossible to extract it as one whole polygon. To resolve the problem, the discrete line segments bordering a road are joined together, with, if necessary, the addition of dark (invisible on graphics) line segments connecting adjacent road blocks (Figure 3). Then, from the derived road polygon, a dark road centreline is generated to guide name placement.

Figure 2. Spatial representation of roads and associated database structure.

Road-id  Length (m)  Name         Remark
1        3.8         Nathan Road  road margin
2        12          Nathan Road  road margin
3        12          Nathan Road  road margin
4        2.9         Nathan Road  road margin under structure
5        3.8         Nathan Road  road margin
6        12          Nathan Road  road margin
7        12          Nathan Road  road margin
8        2.9         Nathan Road  road margin under structure

Figure 3. Dark line segments and derived centerlines.
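The segment-joining step just described can be illustrated with a small, stand-alone sketch. The code below is not the authors' implementation (which is written in AML within Arc/Info); it is a hedged illustration in Python, with an assumed coordinate-tuple data model, of how discrete margin segments that share endpoints can be chained together before the road polygon and centreline are derived.

```python
Point = tuple[float, float]
Segment = tuple[Point, Point]


def chain_segments(segments: list[Segment], tol: float = 0.01) -> list[list[Point]]:
    """Greedily connect segments whose endpoints coincide (within `tol` map units)."""
    def key(p: Point) -> tuple[int, int]:
        return (round(p[0] / tol), round(p[1] / tol))

    unused = list(segments)
    chains: list[list[Point]] = []
    while unused:
        a, b = unused.pop()
        chain = [a, b]
        extended = True
        while extended:
            extended = False
            for i, (p, q) in enumerate(unused):
                if key(p) == key(chain[-1]):
                    chain.append(q); unused.pop(i); extended = True; break
                if key(q) == key(chain[-1]):
                    chain.append(p); unused.pop(i); extended = True; break
                if key(q) == key(chain[0]):
                    chain.insert(0, p); unused.pop(i); extended = True; break
                if key(p) == key(chain[0]):
                    chain.insert(0, q); unused.pop(i); extended = True; break
        chains.append(chain)
    return chains


# Three consecutive margin segments of one road (made-up coordinates); remaining
# gaps between chains would be closed with "dark" connecting segments before the
# road polygon and its centreline are derived.
nathan_road_margin = [((0.0, 0.0), (0.0, 3.8)), ((0.0, 3.8), (0.0, 15.8)), ((0.0, 15.8), (0.0, 27.8))]
print(chain_segments(nathan_road_margin))
```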

English and Chinese Characters

The characteristics of different English typographic styles and forms, with their selection criteria, have been described in many texts (Keates, 1989; Robinson et al., 1995). The following summarises the major differences from their Chinese counterparts in modern or conventional lettering only:
a) while English letters vary in size owing to ascenders and descenders, Chinese characters are all of the same size and each fits into a square of predetermined dimensions;
b) naming a feature in English has to take into account the spacing of both its words and letters (a name consists of one or more words, while a word is made up of letters), whereas the corresponding Chinese name only involves the spacing of a few characters (which are the equivalents of English words); and
c) the conventional way of reading English is from left to right, while both directions may be adopted for reading Chinese names.
For annotating features in both languages, it is advisable to keep the two languages separate, that is, not to alternate their words or characters. The custom is to place the Chinese version on top of the English one for roughly symmetrical polygons, and to put them side by side for elongated polygons such as roads. For the sake of consistency, especially for map readers who are literate in both languages, naming in Chinese also starts from the left, as in English. Lastly, if space is still insufficient for placing both languages after all the considerations discussed in later sections, a rather subjective compromise has to be made: the Chinese name gives way to the more widely used English one.
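These sizing rules translate directly into a simple extent calculation. The sketch below is illustrative only: the font size, average letter width and character spacing are assumed values, not figures from the paper.

```python
def label_extent(english: str, chinese: str,
                 pt: float = 8.0,            # font size in points (assumed)
                 letter_w: float = 0.55,     # average English letter width, in ems (assumed)
                 char_gap: float = 0.25,     # spacing between Chinese characters, in ems (assumed)
                 stacked: bool = True) -> tuple[float, float]:
    """Return (width, height) of a bilingual label, in points."""
    en_w = len(english) * letter_w * pt          # letters and word spaces
    zh_w = len(chinese) * (1 + char_gap) * pt    # each character fills one em square
    if stacked:                                  # Chinese above English (symmetrical polygons)
        return max(en_w, zh_w), 2.0 * pt
    return en_w + zh_w + pt, pt                  # side by side (elongated polygons), one em apart


print(label_extent("Nathan Road", "彌敦道"))
```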

Name Placement Algorithms

The procedures discussed below are all implemented in the Arc/Info system, with the algorithms for labelling building and road names written in AML (the Arc Macro Language).


Buildings

Preliminary Considerations
As 1:1000 is the scale used in labelling buildings, no classification of buildings is performed. It is therefore essential to label all buildings, regardless of their ground areas, with the same font size, indicating their equal order of importance. Ideally, the name should be placed wholly inside the building polygon. However, this may pose difficulty with long names and small polygons. Hence a database is prepared consisting of the original long names, possible short forms or abbreviations, and separations into a few lines; this is almost exclusively a requirement for English rather than Chinese names. It is also considered that, as most buildings are symmetrical in shape, a calculated central point is sufficient to guide placing the name in whatever direction is desirable. However, for buildings of asymmetrical shape, such a point may fall outside the polygon and has to be moved inside interactively.

The Approach
With all the preparatory information ready – a database of long and short names, the spatial information (geographic co-ordinates) that forms the building polygon, the polygon identifier and the label point – the program starts to search for and determine the best position for name placement. The full text length (in dual languages) is first compared with a horizontal distance centred at the label point. The horizontal direction is set parallel to, in order of preference, the northings (map horizontal grid lines) or the longest axis of the polygon. In addition, three tolerance circles – one at the centre and two at the ends of the full text – are used to test the text height and whether the whole name will cut the polygon edges or leave sufficient space from them (Figure 4). If these conditions are not satisfied, the spacing between Chinese characters and/or English words and letters has to be adjusted for repeated testing. If all attempts fail, the same procedure is tried for short names and, if necessary, for splitting the name into several lines. In the latter case, the dimension in the orthogonal direction (let us call it the polygon height), needed to accommodate several lines, and the number of words per line become matters of concern; this naturally requires more complicated and longer testing than before.

In summary, there are two main principles. The first is to place the name in as few lines as possible (preferably one) and horizontal to the map orientation, for faster and simpler processing. The second, with a view to being more informative, is that the full name is preferred to its short form.

Figure 4. Testing dimensions with tolerance circles.
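A hedged sketch of this testing sequence is given below. It is not the authors' AML code; it assumes the shapely geometry library and simplified text metrics purely to illustrate the tolerance-circle test and the fall-back from full to short names.

```python
from shapely.geometry import Point, Polygon   # assumed geometry library, not the authors' AML


def fits(poly: Polygon, centre: Point, text_w: float, text_h: float) -> bool:
    """Three tolerance circles: one at the centre and one at each end of the text."""
    r = text_h / 2.0
    probes = [Point(centre.x - text_w / 2.0, centre.y),
              centre,
              Point(centre.x + text_w / 2.0, centre.y)]
    return all(poly.contains(p.buffer(r)) for p in probes)


def place_building_name(poly: Polygon, centre: Point, candidates: dict):
    """`candidates` maps texts (full name, condensed spacing, short form...) to (width, height)."""
    for text, (w, h) in candidates.items():
        if fits(poly, centre, w, h):
            return text
    return None   # split into several lines, or edit interactively


# Illustrative building block and candidate names only.
block = Polygon([(0, 0), (30, 0), (30, 12), (0, 12)])
print(place_building_name(block, Point(15, 6), {"CHUNGKING MANSIONS": (26, 3),
                                                "CHUNGKING MANS": (20, 3)}))
```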

Sample Results
Name placement algorithms for both buildings and roads were tested on a 1:1000 map sheet covering the busiest part of Hong Kong, where a dense network of roads and constructions is found. On the part of buildings, the tested area contains a total of 106 buildings which require labelling. Satisfactory results account for about 60%, while another 20% may be improved by interactive editing; these generally arise from buildings of extremely irregular and awkward shapes that the program cannot handle (Figure 5). The remaining building polygons simply cannot wholly inscribe a name, because they are either too small or their names are too long. In fact, traditional manual lettering meets the same problems, and these buildings are also left unnamed on analogue map sheets.

Figure 5. An example of an awkwardly shaped building that requires further interactive editing.

Roads

Preliminary Considerations
Unlike buildings, roads even at this large scale may be classified into three categories: main roads with more than one lane, secondary roads with only one lane, and minor roads such as walkways. These already vary by nature in length and width, so a certain font and size may be applied consistently to each class. With some exceptions among minor roads, road names do not normally require short forms or abbreviations to satisfy placement constraints. However, name placement for roads does suffer from the problems of overlapping at junctions and deviation from the road alignment. In the latter case, it is important that the centreline created to guide the placement aligns with the original road curvature. Also, for long roads, it is necessary to repeat the label at appropriate intervals.

The Approach
The font size and spacing have already been predetermined for each class of road. A spatial database of road centreline information, such as the length and the number of segments and nodes that make up a road, is also available. A road segment is defined as a stretch of road (whole or part) from one junction to another. The algorithm differentiates between the different classes of roads as well as between single-segment and multi-segment roads. It handles name placement from minor roads and single-segment roads first to major roads with multiple segments, in order of increasing flexibility. For single-segment roads, which are mostly minor roads and some secondary roads, both the English and Chinese names are placed directly onto the segment with a predetermined offset distance from the two ends. Further adjustments to text spacing are made for very short segments and, in extreme cases, the Chinese version or both versions have to be removed. For multi-segment roads, the text length in dual languages first has to be compared with the total centreline length to determine whether repeated labelling is necessary. Repeated occurrences of a road name have to be separated by a reasonable length, also taking into account the offset distances from the two ends of the entire stretch of road. In both cases, labelling starts from the segment closest to the mid-length of the centreline. Besides, labels are kept away from junctions, either by putting the whole text in one segment where possible or by separating the words and characters across several consecutive segments. If, in some instances, a segment is too short for the name, it has to be forced into the junction area provided that it does not interfere with the names of intersecting roads.
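The repetition logic along a long centreline can be sketched as follows. The class-to-font mapping, the offsets and the minimum gap are assumed example values, not the parameters used by the authors.

```python
# Assumed example parameters; not the values used by the authors.
FONT_PT = {"main": 10, "secondary": 8, "minor": 6}       # font size per road class


def label_positions(centreline_len: float, text_len: float,
                    end_offset: float, min_gap: float) -> list[float]:
    """Distances along the centreline at which repeated name placements should start."""
    usable = centreline_len - 2.0 * end_offset
    if usable < text_len:
        return []                                         # too short: drop the Chinese version or skip
    # largest n such that n labels plus (n - 1) gaps of at least min_gap fit
    n = max(1, int((usable + min_gap) // (text_len + min_gap)))
    if n == 1:
        return [end_offset + (usable - text_len) / 2.0]   # centre a single label
    gap = (usable - n * text_len) / (n - 1)
    return [end_offset + i * (text_len + gap) for i in range(n)]


print(label_positions(centreline_len=420.0, text_len=45.0, end_offset=15.0, min_gap=120.0))
```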

Sample Results
Except for very short roads, which may not accommodate the length of a road name in two languages, most road name placements are satisfactory. The success rate is as high as 80% without further interactive editing. However, running time can be longer for very winding roads and for those with numerous segments and junctions. Though they occur in only very few cases, there are still problems of overlapping text at junctions as well as unsatisfactory results for L-shaped minor roads (Figure 6). It is recommended that, if found necessary in other test areas, road names can also be abbreviated, though of course at the expense of more time-consuming database construction and program execution.


Conclusion

This paper has presented some orthodox ideas on automatic name placement. For users of large-scale plans in the 1:1000 to 1:5000 scale range, accurate textual information is extremely important for orientation, location queries, everyday routine operations and so on. It is found that the addition of another language does not add too much difficulty or complexity to the task. One promising aspect of automation is speed: what takes a day's work in naming only the buildings and roads of one A0-sized map sheet can now be accomplished in about 15 minutes. There are certainly further benefits of automation, such as reliability and efficiency, that need not be elaborated here. Nevertheless, one key issue for the success of a name placement algorithm is not its hardware or software requirements, but the compatibility of the underlying spatial data structure. The way we perceive a cartographic feature (as a point, a line or an area) and its corresponding spatial representation as a whole object do facilitate the automation process, from the conceptual design of lettering rules, to their formal specification in the algorithm, and ultimately to the final implementation.

Figure 6. Examples of unsatisfactory results in naming roads.

References

Buttenfield, B.P. & McMaster, R.B. (1991). Map Generalization: Making Rules for Knowledge Representation. Longman Scientific & Technical, Harlow, Essex, England; Wiley, New York, N.Y.
Carlotto, M.J. (1995). Text attributes and processing techniques in geographical information systems. International Journal of Geographical Information Systems, 9(6), 621-635.
Keates, J.S. (1989). Cartographic Design and Production, 2nd ed. Longman Scientific and Technical, Essex, England.
Robinson, A.H. et al. (1995). Elements of Cartography, 6th ed. John Wiley & Sons, Canada.

Acknowledgement

The work described in this paper was substantially supported by a grant from the Hong Kong Polytechnic University (Project No. G-S118).


Session / Séance 27-B A Model for Standardizing Cartographic Depiction of International Boundaries at Small to Medium Scales

Leo Dillon* Office of The Geographer and Global Issues, U.S. Department of State [email protected] * The views and opinions expressed in this paper are the author’s and do not necessarily reflect those of the U.S. Government.

Abstract The new millennium heralds a revolution in cartography, with emerging technologies poised to increase the sophistication of data shown on maps. These technical advances, along with competition for dwindling resources and increasing nationalism, are fueling an increased sensitivity to the depiction of international boundaries. In the absence of any recognized international standard, cartographers and data managers currently employ a myriad of overlapping and at times contradictory categories to classify boundaries. Still, cartographers must be sensitive to the increasing diversification of international boundaries and the territorial expressions they represent. This paper seeks to present a model for classifying international boundaries for small to medium scale mapping. The model will introduce a manageable number of categories based on the type and quality of boundary. Following a review of the categories and the criteria behind them, the paper will elaborate on problems of defining, implementing, and standardizing international boundaries, particularly those involving changes in physical and political alignment, policy considerations and access to data. A digital international boundaries database, currently being developed by the U.S. government, will be discussed. The paper will conclude with arguments for the establishment of an international standard for the portrayal of international boundaries and the need to make cartographers of the 21st century aware of the sensitivity and importance of accurately depicting and classifying international boundaries.

Introduction

The need for accuracy in boundary depiction has become exponentially more important over the last decade. As little as 200 years ago most nation-states were separated from one another by sparsely settled frontier areas. But with the increase in world population and the resulting need to exploit all available resources, and with an increasing national consciousness of the spatial element of sovereignty, the existence of an underdeveloped zone between neighbors is no longer tenable. These factors have been magnified greatly over the last few years, fed in part by emerging technologies in the geospatial sciences that enable far greater accuracy in boundary delimitation than previously existed. But boundaries themselves, and the legal regimes that underlie them, have recently been the subject of greater scrutiny than only a generation ago. Governments are becoming increasingly aware that loosely defined boundaries can become a casus belli, as seen recently in the Horn of Africa. And one aftermath of recent conflicts has been the need to define boundaries more precisely; the boundary between Kuwait and Iraq, loosely and unilaterally delimited before the Gulf War, is now densely demarcated. Another legacy of that conflict has been an increased resolve among the neighboring states of the Arabian Peninsula to bring legal resolution to their loosely defined or undefined boundaries (Thomas, p. 92).


Cartographers working at small to medium scales may tend to disregard the practice of boundary classification, preferring to use one symbol for international boundaries and basing their representation on the best sources available. Others will employ a myriad of boundary types and categories. Among the common terms used for boundary classification are: administrative, disputed, demarcated, delimited, defined, indefinite, intercolonial, provisional, approximate, political, de jure, de facto, armistice line, line of control, cease-fire line, and zonal. These terms, some of which are obsolete, often contain contradictions and can cause confusion, especially when applied at various scales. On the other hand, oversimplification does not do justice to the complex nature of international boundaries. A need therefore exists in medium to small scale mapping for a reduced but comprehensive set of categories for boundary classification.

The classification system proposed below has evolved from the need to provide U.S. Government cartographers working at medium to small scales with a simplified system for portraying international boundaries in a manner consistent with policy. It is based on previous U.S. Government policies, which have changed as necessary to fit new contingencies. It is designed primarily for land boundaries, although its principles can be applied to maritime boundaries as well. In the context of this paper, however, the system can be viewed as a model for the depiction of international boundaries in any environment.

A cartographer producing a medium to small scale map of a politically diverse area would find it difficult to depict accurately the many types of boundaries encountered in such an area at such a scale, particularly if that cartographer were working under such constraints as small size or black-and-white output. Excessive information concerning the status and nature of boundaries does not aid the map viewer when the subject being portrayed is not the boundaries. For instance, a page-size map depicting transportation networks in the Arabian Peninsula might lose much of its intended emphasis if the thematic content had to compete with a legend explaining the several types of boundary depiction found there. The following classification for international boundaries seeks to bridge the divide between oversimplification and overcomplication.

The Classification Model

International boundary (definite)
This classification is intended to include all generally established and accepted international boundaries. Some examples of a well defined boundary are U.S. - Canada, with approximately 6,000 pillars demarcating less than 5,200 kilometers of land boundary, and Switzerland - Italy, where approximately 400 treaties and agreements define minute sections of the 740 kilometer boundary (IBS No. 12). Conversely, the 1,561 kilometer boundary between Mauritania and Western Sahara is marked by only 39 pillars, of which 14 are concentrated in a section of less than 80 kilometers around Cap Blanc (IBS No. 149).
Problems: Many boundaries that fall into this category have been delimited by treaty or agreement but have not been defined with precision. Many long-accepted boundaries owe their existence to treaty delimitations based upon imprecise criteria. One example of this is the boundary between Sudan and the Democratic Republic of the Congo, which is aligned to the Congo-Nile watershed. In other cases, such as the boundary between Cote d'Ivoire and Guinea, there is no relevant international agreement and the alignment depends upon French administrative practice during the colonial period (Brownlie, p. 301).

Indefinite boundary
This classification encompasses boundaries for which evidence of delimitation is unsuitable or unavailable, but some form of status quo exists between the two states. It can be broken into two broad categories:


1) Boundaries that are not actively disputed but are vaguely delimited, such as the riverine portion of the boundary between the Republic of the Congo and the Democratic Republic of the Congo, where no division has been made of the wide river and its islands (IBS No. 127); and
2) Boundaries shown by a conventional line where there is no treaty evidence but where no active dispute exists. This category encompasses most of the boundaries of the many states that were formerly administrative units of a larger state, such as the U.S.S.R. and Yugoslavia.
Problems: A precise definition of what distinguishes an indefinite from a definite boundary needs to be established; many boundaries that fall in the definite category have elements of the indefinite, such as the Guinea - Cote d'Ivoire example mentioned above.

Disputed boundary
This classification is for cases where an active dispute exists between two states concerning the location of their boundary. Since minor and often unportrayable disputes exist along many if not most international boundaries, this category is intended to be limited to disputes involving sufficient territory to depict on standard maps, and only to cases where the two states involved are seeking an alteration of the status quo.
Problems: Revanchist or belligerent states often declare a dispute unilaterally, making claims to territory despite an existing legal foundation for a boundary; an example is Libya's claim to the Aozou Strip. Or states will make claims on territory based on often complex historical criteria, such as China's claim to most of the Indian state of Arunachal Pradesh. Also, there are instances where neither state will acknowledge the existence of a dispute despite differing interpretations of the boundary alignment: Uruguayan and Brazilian official maps, for instance, show slight variations in their common boundary.

Other line of separation
This classification covers any known division between states that is not a legal international boundary. Categories under this classification include the following:
Military disengagement line. A division between two belligerent states, usually established during or after a period of hostilities. Examples include the demarcation line and demilitarized zone separating the two states on the Korean Peninsula, the line of control between India and Pakistan in Kashmir, and Israel's 1949 armistice lines with Lebanon and Syria.
Administrative boundary. A line defining administrative control of territory. This falls into two groups: one in which a political boundary has not been established, and one in which a political boundary exists but the administrative boundary portrays de facto control. An example of the former is the line dividing Omani and U.A.E. administration of the Musandam Peninsula along the Strait of Hormuz. An example of the latter is the "Ilemi Triangle" between northwest Kenya and Sudan, where ethnic Turkana pastoralists from Kenya administer a territory beyond the known treaty line (Brownlie, pp. 917-919).
Provisional boundary. A line dividing a disputed boundary or territory pending a final settlement, where an agreement of convenience has been reached without prejudice to future claims. The most notable example is the southern half of the Ethiopia - Somalia boundary.
Military base/leased area. A line defining a military base or leased area where sovereignty is exercised by another state. Examples are the U.S. Naval Base at Guantanamo Bay, Cuba, and the two U.K. Sovereign Base Areas in southern Cyprus.
Problems: This classification tends to be a catch-all for boundaries that do not fit neatly into the other categories, but it needs to be limited to cases where the line is not recognized as a legal international boundary. More than the other categories, this classification requires a notation on the map, where scale permits, specifying the type of line portrayed. This classification is also transitional in nature, and needs periodic review.


No defined boundary
This classification refers to any land division between states for which no known boundary exists. The largest example is the Saudi Arabia - Yemen boundary east of the Treaty of Taif line.
Problems: Policy is a large determinant of which boundaries fall into this category. For instance, U.S. Government maps depict the frontier between the United Arab Emirates and Oman south of the Musandam Peninsula as a case of no defined boundary, despite the existence of 1950 boundaries, potentially contested, that define the emirates of the then Trucial Coast. The policy on official maps from the U.K., however, is to depict these lines as de facto - possibly because they were drawn by British political agents (Schofield, pp. 585-595). Another problem with this category is how to portray the spatial extent of the two states involved; for instance, when using different colors for the two states, where should the colors meet? This problem is compounded in the digital environment, where enclosed polygons are often required.
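To make the scheme concrete, the five classifications (and the sub-types of the "other line of separation" class) can be encoded as data that a boundary record or map style sheet refers to. The sketch below is illustrative only; the symbology mapping paraphrases the informal practice described in the Symbology section further on, and none of it is an official specification.

```python
from enum import Enum


class BoundaryClass(Enum):
    DEFINITE = "international boundary (definite)"
    INDEFINITE = "indefinite boundary"
    DISPUTED = "disputed boundary"
    OTHER_LINE_OF_SEPARATION = "other line of separation"
    NO_DEFINED_BOUNDARY = "no defined boundary"


# Sub-types used only under "other line of separation".
OTHER_LINE_TYPES = (
    "military disengagement line",
    "administrative boundary",
    "provisional boundary",
    "military base/leased area",
)

# Rough paraphrase of the informal U.S. Government practice described in the
# Symbology section below: (line treatment, notation where scale permits).
SYMBOLOGY = {
    BoundaryClass.DEFINITE: ("solid", None),
    BoundaryClass.INDEFINITE: ("solid", "boundary representation not necessarily authoritative"),
    BoundaryClass.DISPUTED: ("distinct, subordinate", "in dispute"),
    BoundaryClass.OTHER_LINE_OF_SEPARATION: ("distinct, subordinate", "type of line"),
    BoundaryClass.NO_DEFINED_BOUNDARY: ("none", None),
}

print(SYMBOLOGY[BoundaryClass.DISPUTED])
```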

Problems and Concerns

This classification system seeks to provide a model for generalizing international boundaries at small and medium scales. It does not and cannot claim to be a universal classification, but rather provides a framework for international boundary classification in any milieu. However, the system is fraught with practical problems: boundaries are subject to political, physical, and social considerations, some of which are described below.

Political Policy
The most difficult and stubborn obstacle to universal agreement on boundary status is political policy. The political nature of boundaries themselves means that differences of opinion will exist, even among parties with a high degree of comity. The classification system proposed above, however, can be considered independent of such policy concerns; cartographers can fit their client's policy concerns into whichever category seems most appropriate.

Interpretation
Another problem with employing this or any other classification system lies in interpreting the data that define a boundary. When a boundary's legal basis has been defined through physical features, valid differences in interpretation can occur. For instance, when the International Court of Justice issued its 1992 decision on the boundary between El Salvador and Honduras, it ruled on a boundary for all but a small portion of the 420 square kilometers of disputed territory. This small portion was ruled beyond the jurisdiction of the Court because, although the two parties had agreed on a named physical feature as a boundary point, they did not agree on its location (Dillon, p. 1). How, then, to classify the gap in the boundary: disputed? Indefinite? No defined boundary? Another example of an ongoing dispute based upon the interpretation of physical features is the "icefields" sector of the Chile - Argentina boundary.

Physical Changes
The other problem with physical features defining boundaries is physical change. This applies mostly to boundaries set in rivers, which shift course and often require a continuing program of boundary maintenance and rectification. But if a river that defines a boundary changes course, and the two states do not come to a timely agreement on the new alignment, should the cartographer consider this boundary, once categorized as definite, now indefinite, or in dispute? Does the existence of a treaty defining a river thalweg as an international boundary imply a definite boundary, even if a precise delineation has not been made? Botswana and Namibia, whose Linyanti River boundary was generally considered settled, have taken a dispute over river islands to the International Court of Justice. Does this imply that the entire river boundary is indefinite?

Insufficient data
Changes in boundary status are often treated with discretion by the parties involved. Boundary agreements can be publicly announced without the attendant data necessary to portray the boundaries with accuracy. A recent example of this is the 1998 agreement between the Russian Federation and China respecting the eastern portion of their boundary. Although the federal authorities of both states have publicly stated that an agreement has been reached which precisely delimits all but a few segments of their 3,600 kilometer boundary, the details of the agreement - including the specific delineation - have not been made publicly available. The cartographer producing medium to small scale maps of the area may confidently classify the agreed portions of the boundary as definite, since the scale would not reveal any noticeable change. But when working at large scales, the cartographer does not have the data necessary to portray the boundary as definite.

Another example is the Zambia / (formerly) Zaire boundary segment between Lake Mweru and the tripoint with Tanzania. This segment was officially regarded as an indefinite boundary on U.S. Government maps until an agreement was publicly reached between the two countries in 1989. Details of the agreed alignment were never made public, however, leaving cartographers with little choice but to portray it as a straight line segment, as was the previous practice. In the absence of documentation, does this remain an indefinite or a definite boundary?

Symbology
The symbology used for the various classifications is up to the cartographer, but a hierarchy is clearly desirable, with definite boundaries displayed more dominantly than others. Although this paper does not seek to impose a standard for the symbolic treatment of boundaries, the following is offered for information. Currently, on small to medium scale U.S. Government maps, an informal portrayal policy exists. International boundaries are portrayed by solid lines. Indefinite boundaries are also usually portrayed by solid lines, but carry a disclaimer stating that the boundaries are not necessarily authoritative. Disputed boundaries are depicted with a line symbol distinct from and subordinate to definite boundaries, with the notation "in dispute" where scale permits. Boundaries falling under the "other line of separation" category are shown with a distinct and subordinate line symbol, with the type of boundary noted where scale permits. In the "no defined boundary" category, predictably, no boundary is shown.

Digital International Boundaries Database
The need to develop and implement the classification model is reinforced by efforts under way in the U.S. Government to create a digital international boundary database. The U.S. National Imagery and Mapping Agency (NIMA) is developing this database project with the cooperation of the U.S. Department of State. The Digital International Boundaries Database (DIBDB) collects and analyzes the best available source information on each international boundary segment, and uses a digital geographic information system to display the alignment of a boundary segment along with its attribute data. These data include geo-coordinates and the physical nature of boundary points, text from relevant treaties or other documentation, and the origin of the map sources and datums used. Also included would be the classification of each boundary segment into one of the aforementioned categories, along with directions for special labeling.

The initial purpose of this database is to give U.S. Government cartographers and map users access to the best available, highest resolution data for all international boundaries. The database would then become available to the public through the Internet, and would be updated as needed. Since boundary data are not uniform in precision, some data will be precise to a resolution of a meter, while others will necessarily be generalized representations derived from the best known sources. In this database, the widest variety of classifications can be employed for maximum precision.
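As an illustration of what such a per-segment record might look like, the sketch below uses assumed field names and example values; it is not NIMA's actual schema, which the paper does not specify.

```python
from dataclasses import dataclass, field


@dataclass
class BoundarySegment:
    states: tuple[str, str]                     # the two neighbouring states
    classification: str                         # one of the five classes described above
    points: list[tuple[float, float]] = field(default_factory=list)   # lon, lat of boundary points
    treaty_text: str = ""                       # text from relevant treaties or other documentation
    sources: list[str] = field(default_factory=list)                  # origin of map sources and datums
    label_note: str = ""                        # directions for special labeling


segment = BoundarySegment(
    states=("Zambia", "Democratic Republic of the Congo"),
    classification="indefinite boundary",
    points=[(28.9, -8.8), (30.2, -8.2)],        # illustrative coordinates only
    sources=["1989 agreement announced; alignment not published"],
    label_note="portray as a straight line segment",
)
print(segment.classification)
```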

Conclusion

In conclusion, regardless of the milieu a cartographer works in - commercial, governmental, institutional - it is in the best interests of cartography as a science and an art to portray boundaries with sensitivity. Cartographers working principally at scales exceeding 1:1,000,000 need a simplified approach to classifying boundaries that takes into account the differing nature and quality of boundaries. It should be able to channel the more specific requirements arising from large scale mapping while keeping the number of categories limited. If such a system is employed correctly, the map viewer will achieve a general understanding of the nature of the international boundaries shown without being distracted by too much data. With the approach of a new millennium - one in which the importance and notoriety of international boundaries is sure to increase - it seems timely for the international community to reassess the way in which it views and portrays the lines that divide states.

References

Brownlie, Ian (1979). African Boundaries: A Legal and Diplomatic Encyclopaedia. C. Hurst and Co., London; University of California Press, Berkeley, Los Angeles.
Dillon, Leo (1993). El Salvador - Honduras Boundary: After the ICJ Decision. Geographic and Global Issues Quarterly, Volume 3, Number 3, pp. 1-2.
IBS No. 12 (1961). International Boundary Study No. 12: Italy - Switzerland Boundary. Office of the Geographer, U.S. Department of State.
IBS No. 127 (1972). International Boundary Study No. 127: Congo - Zaire Boundary. Office of the Geographer, U.S. Department of State.
IBS No. 149 (1975). International Boundary Study No. 149: Mauritania - Spanish Sahara Boundary. Office of the Geographer, U.S. Department of State.
Schofield, Richard (Ed.) (1992). Arabian Boundary Disputes, Volume 19: United Arab Emirates - Oman and Saudi Arabia - Oman. Redwood Press Ltd. for Archive Editions, London.
Thomas, Bradford L. (1994). International Boundaries: Lines in the Sand (and the Sea). In George J. Demko and William B. Wood (Eds.), Reordering the World: Geopolitical Perspectives on the Twenty-first Century. Westview Press, Boulder, San Francisco, Oxford, pp. 87-100.


Session / Séance 15-D Conception de cartes pour l’étude comparative de l’urbanisation de trois pays du Maghreb

Vanessa Rousseaux Institut de géographie, université d’Aix en Provence- UMR CNRS C65680 Institut de recherches et d’études sur le monde arabe et musulman. [email protected]

Abstract This contribution is part of our research on the comparative study of urbanization in Algeria, Morocco and Tunisia over the last thirty years. Many statistical studies of population use administrative maps as their base, but these political subdivisions have their limits and are not always revealing for certain data. Statistics on urban areas illustrate this point perfectly, since urbanization does not stop at administrative limits, which are artificial, as it spreads through space. Moreover, the official subdivisions of these three Maghreb countries are built on different criteria. Since administrative maps do not offer us enough information, it was necessary, to fill this gap, to produce our own base maps, in which personal and official criteria may sometimes sit side by side. The map is an increasingly valued tool because its language is accessible to everyone. It allows us to grasp and to transmit information, which implies that its design must be complex and carefully thought out; hence the importance of clarifying the criteria used in its construction, since the information transmitted depends on them. Designing a map requires, first of all, knowing what one wishes to demonstrate. Our main objective is to bring out information that is still unknown and not visible on official maps, thanks to a new approach to the territory. We wish to understand how urbanization is articulated across space, and how it reacts to economic attraction and zones of influence, in order to be as close as possible to geographical reality.

Specific features of our base maps

When we speak of geography, the notion that comes instantly to mind is the map. This is a perfectly natural reaction, since geographical variables are generally transcribed onto a cartographic base. But the image we usually have of the map is that of a tool used only to locate places, names or numbers on a background. Through the base maps we have produced, we will see that it has other purposes.

Setting aside administrative base maps
For a spatial analysis, three types of partition can be used to build a map:
- type 1: administrative spaces;
- type 2: homogeneous spaces (physical and human);
- type 3: functional spaces.


Administrative spaces (wilayas, governorates, provinces, etc.) were not used for this study, because we consider that their limits (which have an origin external to geography) are merely artificial, that urbanization may be stopped by physical obstacles but rarely by administrative limits, and that they serve management purposes more than observation. Moreover, the fact that administrative limits fragment physical units would have made it impossible to study the influence of these environments on urbanization. Before describing the particular features of our base maps, we should explain further why we excluded administrative base maps from our study, by stating their objectives.

The Tunisian subdivision corresponds to a certain type of political and social organization and is adapted to the various aims the State has set itself: tighter police, political and administrative control over the population and territory. This explains the new subdivisions made since Independence, which have increased the number of governorates (and of delegations and lower levels) while reducing their areas, with the objective of ever closer control.

In Morocco, the redrawing of the administrative map by the territorial administration pursued several objectives:
- bringing the administration closer to the administered;
- reducing transport costs;
- establishing commercial and industrial centres expected to develop;
- regrouping tribal fractions.
This operation was initiated at two levels of administration:
- a deconcentrated level based on the province, the prefecture and the region;
- a decentralized level based on urban and rural communes.

The concerns behind the various Algerian administrative subdivisions have been to balance economic potential and population numbers and to control population growth within each wilaya, as well as to promote certain regions in difficulty and to help small towns emerge, with the aim of harmonizing and homogenizing the territory. The 1984 subdivision, which remains in force in 1998 (with a spatial modification of the wilaya of Algiers), combined with the new prerogatives of the A.P. and A.P.W. assemblies, was intended to allow genuine decentralization. This organization also responds to the country's problems: populations are to be better integrated into the development process, and administrative and economic initiatives better propagated.

We note that the three Maghreb states place their main emphasis, in organizing their administrative subdivisions, on the "control" of populations, followed by economic objectives. Their concerns are different from ours and, given our objectives, using these subdivisions would not have been of great geographical interest.

Criteria for our base maps
The subdivisions retained are therefore based on natural, human and economic limits, as these provide answers closer to geographical reality. Three base maps were created for each of the three countries studied, in order to carry out the necessary observations and comparisons. Here the choice fell on a combination of types 2 and 3, which were better suited to the study:
- level 1: functional spaces;
- level 2: functional and homogeneous spaces;
- level 3: homogeneous spaces.


The three base maps produced are the following:
- Level 1: this macro-regional subdivision is based on large geographical regions. It was drawn up using the economic attraction and zones of influence (wholesalers, doctors, etc.) of the major agglomerations, and mainly of the metropolises, as established by surveys carried out by the competent authorities, together with the physical and human environments. It represents functional spaces. This subdivision has an indicative value, as it shows the tendencies of the large spatial units of Algeria, Morocco and Tunisia.
- Level 2: this subdivision is at a meso-regional level; it corresponds to structured natural units forming a spatial entity that groups environments linked to one another by various relationships (transport axes, etc.). It is made up of different but complementary elements that give each spatial unit structure and viability. In its composition it is close to the French micro-regions, and close to a geosystem, since it is inscribed in a space whose environments are considered as functional wholes. In addition, cross-relationships of varying importance take place within each of these units, and the units may have relationships with one another. This subdivision combines functional and homogeneous spaces.
- Level 3: this last working base is a micro-regional subdivision representing homogeneous spaces at a fine level. It is based solely on physical and human factors, which allow the characteristics specific to each entity to be observed. Its characteristics are close to the notion of geosystem described above, which also applies to level 1.

We wish to make clear that we did not construct these level-3 entities in a determinist spirit. The principle of determinism, let us recall, consists in accepting that every phenomenon depends on a set of prior or simultaneous conditions ("the same causes produce the same effects"), knowledge of these conditions making it possible to predict the phenomenon rigorously and even to reproduce it. Producing this micro-regional subdivision under such assumptions would have been of no interest for our research. Our previous work led us to expect different reactions of the urban phenomenon in these homogeneous spatial entities, generated mainly by exogenous factors.

Note: these entities can often be likened to urban areas, since the agglomerations within them have more or less strong relationships and exchanges with the dominant city.

Reference maps used and difficulties encountered

We produced the level-3 spatial entities using the map of physical environments as a base and then taking into consideration the finest limits of each country's administrative subdivision (other factors are involved for levels 1 and 2). It would have been inconsistent to use the raw natural limits, because the statistical data came from an administrative subdivision. We therefore had to adjust and harmonize the limits of the spatial entities by referring to those of the two types of reference map.

Source maps used

Topographic maps:
Algeria: the Michelin road and tourist map, 1995, no. 958 "Algérie-Tunisie" at 1:1,000,000, and the map of the Laboratoire de cartographie des hautes études, Paris, 1960, "distribution de la population musulmane et non-musulmane d'après le R.G.P.H de 1954" at 1:1,000,000.
Morocco: the Michelin road and tourist map, 1995, no. 959 "Maroc" at 1:1,000,000.
Tunisia: the Michelin road and tourist map, 1995, no. 958 "Algérie-Tunisie" at 1:1,000,000, and the road and tourist map of Tunisia, 1992, at 1:500,000, by the Office de la topographie et de la cartographie, Tunis.


Administrative maps:
Algeria: administrative subdivision by commune, 1987; reference illegible.
Morocco: administrative subdivision by commune (subdivision fixed by order no. 2.92.651 of 17 Safar 1413 / 17 August 1992).
Tunisia: at the national scale the only map representing administrative subdivisions is that of the delegations, but it is not precise enough for our study. An administrative map by commune does not exist, because that term is used only to designate urban areas. However, for each governorate there is a map at a finer level of subdivision than the delegations: it represents the subdivision by sector for 1994. The scale used differs from one governorate to another. Our first task was to trace each governorate, then to choose a common scale and to make the necessary enlargements and reductions. Once the administrative units were at the same scale, we assembled them and thus obtained a national-scale map by sector. This is less precise than an administrative map showing the communes (urban spatial entities) and the rural centres (rural spatial entities), but given the political-administrative particularities and the Tunisian definitions, we use this assembled map as our base. It is useful for locating the communes and rural centres in order to carry out, subsequently, the spatial distribution.

Obstacles encountered
For Algeria, the commune limits of the Algiers region were difficult to read. The work of Brulé J-C., Benjelid A. and Fontaine J., 1990, Découpage de l'Algérie, Colloque aménageurs, aménagés, Oran, served as our reference. For Morocco the problem was the evolution of the administrative subdivision, which changed between 1992 and the 1994 R.G.P.H.; we were nevertheless able to identify the new limits and adapt our subdivision. As for Tunisia, the absence of an administrative map at such a fine scale, for the reasons given above, was not a real handicap: for the most part, the limits of the delegations corresponded to the subdivisions of the spatial entities we were producing for levels 1 and 2. Those of level 3 posed some difficulties, but we made adjustments taking account of the physical environment and the subdivision by sector.

Value of the base maps created

Respect for local usage

Table 1: Distribution of the spatial entities by level (Vanessa Rousseaux, 1998).

For the number of level-2 and level-3 entities, it is important to note that a certain balance was maintained between the three countries, as far as possible, according to their differences in area. The names given to the entities of the three spatial levels are taken from the appellations used by the inhabitants of each country concerned. We called on local researchers, who helped us choose and use the appropriate names, those that they and their compatriots use to designate a specific geographical zone, so that there would be no error of location and the space could be located without hesitation by everyone. However, some Algerian level-2 and level-3 spaces, situated in the high steppe zone, have the same name. We applied the names used for level 3 to level 2 in order to avoid any confusion (for example, the Hautes Steppes oranaises). This is explained by the fact that these units cover fairly vast areas and correspond at once to micro-regions and meso-regions. It seemed more correct to us not to subdivide them, so as not to break up their specificity.

Objectives of these base maps
These three base maps are important because they make it possible to approach the territory in different ways, to observe possible oppositions or groupings, and to provide both broad and refined explanations. Changing the spatial level modifies the representation of phenomena and changes the analysis of the relationships deciphered in the territory. The larger the space considered, the more the number, nature and type of relationships evolve. To each level correspond interactions and hierarchies between phenomena of fundamentally different natures. Moreover, by changing spatial level, new concepts appear and the reasoning thereby gains in complexity and relevance. The map starts out as an instrument of information, but it also becomes an instrument of discovery and of additional information. The three levels are linked to one another: a level-3 entity belongs to a single level-2 entity, which in turn belongs to a single level-1 entity, which keeps the interpretations logically consistent.

A map is not a simple artistic image of a space, for its design is complex and deliberate. The purpose of a map is the transmission of information, and the richer we want that information to be, the more carefully its design must be thought through. The map must be considered a special message, employing a language capable of effecting a transposition of meaning. The thematic map stages structures that are not directly perceptible in the landscape, but can only be visualized; it is a more abstract model, and harder to verify, than a topographic map. The choice of spatial level depends on whether we choose to represent detailed or general information. The map transmits a message, and at the moment of its production the information we display has already been perceived and defined beforehand. It is very important to clarify the criteria we use to produce the base maps at the different spatial levels, because the information depends on them. The three Maghreb countries are urbanized, and the three successive spatial levels shed new light on this phenomenon, since they bring out new information at each stage.
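The strict nesting of the three levels can be expressed as a tiny lookup structure. The sketch below is purely illustrative; the entity names other than the "Hautes Steppes oranaises" example cited above are hypothetical.

```python
# level 2 -> level 1 and level 3 -> level 2 membership (hypothetical names,
# except the "Hautes Steppes oranaises" example cited above)
LEVEL_1_OF_LEVEL_2 = {"Hautes Steppes oranaises": "Ouest algérien"}
LEVEL_2_OF_LEVEL_3 = {"Hautes Steppes oranaises": "Hautes Steppes oranaises"}


def macro_region(level_3_entity: str) -> str:
    """Walk up from a micro-regional (level 3) entity to its macro-region (level 1)."""
    return LEVEL_1_OF_LEVEL_2[LEVEL_2_OF_LEVEL_3[level_3_entity]]


print(macro_region("Hautes Steppes oranaises"))
```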



Session / Séance 12-C Coping with Qualitative-Quantitative Data in Meteorological Cartography: Standardization, Ergonomics, and Facilitated Viewing

Mark Monmonier Department of Geography, Maxwell School of Citizenship and Public Affairs, Syracuse University, Syracuse, New York, USA, 13244-1090 [email protected]

Abstract Meteorological cartography’s long history reflects a reluctant appreciation of cartographic standards. Although most of the world’s national weather services agreed on a set of common projections early in the twentieth century, countries eager for the benefits of familiar alphabets and measurement units have adapted, rather than adopted, a uniform set of map symbols. In addition to national variations of established international symbols, interactive workstations with flexible symbols allow individual weather service offices to accommodate regional weather conditions as well as color-impaired staff members. Especially noteworthy are multi-hue quantitative displays like Doppler radar reflectance maps, on which differences in hue not only allow more than a dozen distinct symbols but suggest meaningful qualitative differences among drizzle, heavy rain, snow, severe hail, and other forms of precipitation. Because of similarly complicated maps, meteorological cartography depends heavily on two supportive viewing environments: interactive workstations with which experienced analysts can juxtapose or toggle between diverse displays, and televised presentations in which a personable weathercaster interprets complex images.

Introduction

Meteorological cartography has evolved with few (if any) formal links to the professional and academic cartography of topographic and thematic mapping. Even so, meteorologists are intense users of maps, and no mapping enterprise rivals atmospheric science in creating vast numbers of individual maps, however ephemeral. And despite practices at variance with conventional cartographic principles, such as strictures against complex color scales that invoke hue variations for quantitative data, what weather scientists do they do well.

Maps hold a pervasive role in meteorology and climatology: a role that includes discovery, hypothesis construction and verification, prediction, monitoring, and public communication at several levels, including education, storm warnings, propaganda, and entertainment. This role evolved slowly, starting in 1816, when Heinrich Wilhelm Brandes, a physics professor at the University of Breslau, prepared the first synchronous weather maps from measurements collected over thirty years earlier by the Meteorological Society of the Palatinate. Meteorology's cartographic impulse matured rapidly during the 1830s, when awareness of geographic relationships among pressure, wind, moisture, and temperature precipitated a philosophical controversy about the origin of storms and led to empirical rules for forecasting weather [Fleming, 1990]. Following the formation of national weather forecasting agencies in the 1860s and 1870s, the map's role expanded further as scientists and bureaucrats applied new technology to collect data and transmit maps electronically, probe the upper atmosphere with balloons and kites, monitor moisture and wind speed with satellite and radar sensors, project atmospheric conditions forward in time with computer models, and fascinate television audiences with engaging animations [Monmonier, 1999].

An examination of the design conventions and use environments of meteorological mapping reveals challenges and opportunities distinctly different from those of other cartographic ventures. The atmosphere is a rapidly changing, highly complex multidimensional system with important horizontal and vertical variations; it requires multiple maps that are often dynamic or employ complex symbolic codes. Compensating for demands imposed by atmospheric phenomena are the skills of one principal group of users and the supportive viewing environment of another. Professional meteorologists benefit from standardized symbols and formats, readily learned through regular, highly repetitious use; in this sense weather maps are akin to the wrenches and test equipment of a trained auto mechanic. By contrast, television viewers benefit from familiar formats and knowledgeable weathercasters, who interpret the maps' patterns and compensate for their complexity and fleeting appearance [Carter, 1993; Henson, 1990].

Standardization is readily apparent in the maps' projections, point symbols, classifications, and formats. But few meteorological conventions are as prominent as the color scales of Doppler radar and satellite maps. At variance with Jacques Bertin's prescription against multiple hues for quantitative data, the 14- and 15-step color schemes for radar velocity and reflectivity images not only provide much needed contrast with appropriate background information but afford meaningful distinctions among qualitatively different weather situations. In addition to examining the limited standardization of color in the United States by the National Weather Service, this paper provides a concise history of international standardization in meteorological cartography.

Early Standards: Map Projections, Scales, and Formats

Meteorologists standardized their measurements long before they standardized their maps. Weather systems are huge, several hundred to more than a thousand miles in extent, and only a few large nations, like the United States, can construct useful weather maps from information collected solely within their borders. Forecasters as well as scientists saw the need for standardized measurements taken at fixed times of the day with calibrated instruments. Adjusted for elevation differences and converted to terse, compact codes for telegraphic transmission, these measurements allowed the construction of synchronous regional weather maps with which forecasters could identify storms, plot their paths, and predict their impacts. By the late 1870s national weather services in Europe were exchanging data regularly, at least once a day, and the United States had a similar cooperative arrangement with Canada.

Although weather services exchanged maps as well as data, their cartographic products were too stale for operational forecasting. Even so, sequences of maps helped forecasters understand the development and movement of weather systems. Because weather systems generally move from west to east, European scientists and forecasters were especially intrigued by maps extending westward across the Atlantic to North America. In the 1880s, data collected by mail allowed weather services in Britain, Denmark, Germany, and the United States to compile North Atlantic weather charts covering a large part of the northern hemisphere [Harrington, 1894]. In January 1914, the U.S. Weather Bureau inaugurated a daily weather map of the northern hemisphere, based on data received by radio and submarine cable, which cut the compilation time from several months to several hours [Abbe, 1914]. Among those praising the new map were Norwegian meteorologist Vilhelm Bjerknes and German climatologist Wladimir Köppen [Anon., 1914]. Bjerknes praised the Americans' use of metric units, including absolute centigrade temperatures, but Köppen suggested plotting isotherms for 268°, 273°, 278°, . . ., rather than integer multiples of 5, and using a thicker line for 273° to highlight areas with frost.

Bjerknes [1920], who pioneered conceptual development of weather fronts and cyclogenesis, was an early advocate of cartographic standardization. In 1919, he presented a detailed set of recommendations at the Fourth International Conference of Directors of Meteorological Institutes and Observatories, meeting in Paris. His proposal called for basing regional weather maps on one of three conformal projections: a Mercator cylindrical projection secant at 15° for equatorial regions, a Lambert conformal conic projection secant at 30° and 60° for middle latitudes, and a polar stereographic projection secant at 75° for polar regions. Conformality would assure correct portrayal of angles between meridians, parallels, isobars, isotherms, wind arrows, and weather fronts, and a regionally centered standard line would limit distortion of distance and area. Bjerknes also advocated four standard scales: 1:2,500,000, 1:5,000,000, 1:10,000,000, and 1:20,000,000. Uniform projections and scales, he contended, would promote the ready and accurate compilation of composite maps from the weather charts of various nations.

Despite wide recognition of the advantages of standardization, formal endorsement of Bjerknes's recommendations took nearly two decades. In 1937, the International Meteorological Committee, meeting in Salzburg, adopted a modified set of cartographic standards based on the Norwegian's guidelines [Gregg and Tannehill, 1937]. Two proposals sacrificed geometric accuracy for a wider geographic scope: moving the polar stereographic projection's standard parallel outward to 60° allowed polar maps to cover the full hemisphere, and shifting the Mercator projection's standard parallels poleward to 22.5° encouraged extension of tropical maps into the mid latitudes. In recommending equal-area projections for climatological maps, on which relative area is more important than angular relationships, the Salzburg guidelines reiterated the advantages of standard lines situated to minimize distortion within a region. Consistent with the standards for regional weather charts, the guidelines called for regional climatic maps based on a plane secant at 60°, a cone secant at 30° and 60°, or a cylinder secant at 22.5°.

The U.S. Weather Bureau adopted the Salzburg guidelines in 1938, and promptly replaced the polyconic projection on its national maps with the Lambert conformal conic [Griggs, 1955]. Even so, the bureau (now the National Weather Service) bases its current Daily Weather Map on a polar stereographic projection secant at 60°, which allows the meteorologists responsible for the national surface-weather map to integrate their analysis with a map of the northern hemisphere. Surface analysts at the National Centers for Environmental Prediction prepare a North American map every three hours, and copy every second surface-analysis map for use on a surface-analysis chart of the northern hemisphere [Kocin et al., 1991]. Publication scales, which range from 1:25,000,000 to 1:60,000,000, are noticeably smaller than Bjerknes's recommendations.
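The projection parameters in these early standards translate directly into modern map-projection definitions. The sketch below, which assumes the pyproj library and uses arbitrary central meridians, expresses the three recommended conformal projections as PROJ strings; it is added here purely for illustration and is not part of the historical standards.

# Illustrative only: the three conformal projections of the 1919/1937 standards,
# written as PROJ strings. pyproj and the chosen central meridians are assumptions.
from pyproj import CRS

equatorial  = CRS.from_proj4("+proj=merc +lat_ts=15 +lon_0=0")               # Mercator, secant at 15 deg
midlatitude = CRS.from_proj4("+proj=lcc +lat_1=30 +lat_2=60 +lon_0=-95")     # Lambert conformal conic, 30/60 deg
polar       = CRS.from_proj4("+proj=stere +lat_0=90 +lat_ts=60 +lon_0=-100") # polar stereographic, true at 60 deg

for name, crs in [("equatorial", equatorial), ("mid-latitude", midlatitude), ("polar", polar)]:
    print(name, crs.to_proj4())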

Point Symbols: International Coding, National Differences

Efficient international sharing of mapped data also required standardization of map symbols and content as well as common definitions or thresholds for weather phenomena. In some cases standard symbols evolved naturally from straightforward graphic metaphors like arrows for showing wind direction and a progression of open, split, and filled circles for portraying clear, partly cloudy, and cloudy skies at specific cities and weather stations. By contrast, symbols relying on abbreviations such as R for rain and S for snow were not readily portable to nations with different alphabets and vocabularies. Especially troublesome were the arbitrary and idiosyncratic graphic codes various nations devised to describe the details of clouds and precipitation.

Graphic symbols for weather phenomena predate weather maps by nearly half a century. In 1771, the mathematician J.H. Lambert, who developed the conformal conic projection recommended by Vilhelm Bjerknes, proposed a set of small geometric symbols representing degrees of cloudiness and various forms of precipitation as well as atmospheric electricity and assorted astronomical phenomena [Talman, 1916]. Developed as a means of compact communication and data compression, Lambert's codes (see Figure 1) were expanded by the Meteorological Society of the Palatinate and had an indirect, albeit minor, effect on international practice.


The value of a common symbology was apparent as early as 1873, when the International Meteorological Congress, meeting in Vienna, adopted numerical and pictorial codes for a broad range of phenomena. But because of bureaucratic conservatism as well as disagreement over what was worth measuring and mapping, the Vienna recommendations were neither rapidly nor widely adopted. In 1891, an international conference meeting in Munich expanded and modified the Vienna codes, largely by endorsing the practice of the German weather service. Unconvinced of an overriding need for international symbols—the exchange of maps was clearly secondary to the exchange of data—the British and French weather services continued to use their own codes. For some nations the international symbology was a code to adapt, rather than adopt. In a 1916 article on cartographic symbols, U.S. Weather Bureau official Fitzhugh Talman [1916, 265] described “the forms of these symbols as more or less flexible.” He included a table of 27 international symbols used by the bureau, and noted that all but six of them were “principal variants.”

Figure 1. Weather symbols suggested by J.H. Lambert in 1771 [Talman, 1916, 265].

At the dawn of the twenty-first century, meteorological symbology is markedly more complex and standardized yet still flexible. The World Meteorological Organization supports the needs of commercial aviation and international scientific collaboration with a standardized set of terms, definitions, symbols, and electronic formats. Even so, on weather charts intended for domestic use, national weather organizations continue to adjust the international codes to their own needs, situations, and traditions. In the United States, for instance, the National Weather Service employs a slightly different station model for its Daily Weather Map, and in Britain, the Meteorological Office uses its own pictograms for "wet fog," "patches of shallow fog over land/sea," and "zodiacal light" [Monmonier, 1999, 220-221].

Point symbols describing conditions at weather stations reflect an amalgam of graphic logic, pictorial shorthand, and numerical convenience. Figure 2, the "specimen station model" for the National Weather Service's Daily Weather Map, illustrates a fourth facet: the assigned positions of individual elements within a complex multi-element symbol. Relative position is especially important for the numbers arrayed around the symbol's center: a number above the center and to the right, for instance, always refers to barometric pressure, whereas a number below the center and to the left represents temperature. In practice, many symbols on the map lack some, perhaps most, of the eighteen symbol elements identified in the specimen model, either because measurements were not taken or reported, or because no clouds or precipitation occurred. Continued use of feet, miles, and Fahrenheit degrees promotes communication with American audiences by accommodating widespread public resistance to metric measurement. Moreover, as the diagram notes, the Daily Weather Map relies on an abridged version of the International Code.

Figure 2. Specimen station model used by the U.S. National Weather Service for its Daily Weather Map.

The specimen station model illustrates the roles of shape, value, orientation, and numerousness as retinal variables [Bertin, 1983, 60-97]. The star-shaped "snowflake" symbols signify snow, the large filled circular dot indicates full cloud cover, the wind pointer represents winds from the northwest, and the two wind-speed barbs denote a wind speed between 18 and 22 knots. Website weather maps typically use these four retinal variables together with hue, as shown in Figure 3, the symbol key for WXP (The Weather Processor), a weather graphics system used by many American universities. Although widespread use of WXP reinforces conventional symbol codes, the coarse resolution of television and computer monitors precludes more subtle distinctions found in the International Code. Weather websites typically compensate for poor resolution by offering a broad range of specialized maps with straightforward symbols or by allowing users to compose their own maps, consisting of features selected from a menu. Dynamic cartography has eroded the relevance and utility of internationally standardized graphic codes developed for black-and-white printed maps.
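As a minimal sketch of how such a composite point symbol can be handled in software, the layout below records a fixed offset for each element of the station model and simply omits elements that were not reported. The element names and offsets are illustrative, following the relative positions described above; they are not an official National Weather Service specification.

# Minimal sketch of a station-model layout table (illustrative names and offsets).
STATION_MODEL_OFFSETS = {
    "pressure":    ( 1,  1),   # number above the center and to the right
    "temperature": (-1, -1),   # number below the center and to the left
    "sky_cover":   ( 0,  0),   # filled fraction of the central circle
    "wind":        ( 0,  0),   # pointer and speed barbs drawn from the center
}

def station_elements(observation):
    """Return (element, offset, value) triples for the elements actually reported,
    so a station with missing measurements is still drawn without placeholders."""
    return [(name, STATION_MODEL_OFFSETS[name], observation[name])
            for name in STATION_MODEL_OFFSETS if name in observation]

# A station reporting only wind (from 315 degrees at 20 knots) and full cloud cover:
print(station_elements({"wind": (315, 20), "sky_cover": 1.0}))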

Color: Qualitative Differences within Quantitative Sequences

Although some assignments of color to meteorological symbols seem logical, others appear counterintuitive, at least to lay viewers troubled by green rain and yellow fog. Among the more logical, or at least emotive, assignments is the use of red and blue for warm and cold fronts, as in the color version of Figure 3. Indeed, colored fronts are a cartographic throwback to the symbols devised around 1919 by Vilhelm Bjerknes's colleagues, who originally (and paradoxically) colored warm fronts blue and cold fronts red [Jewell, 1981]. A somewhat different yet easily understood logic underlies the use of magenta or purple for occluded fronts and alternating red and blue dashes for stationary fronts. By contrast, the WXP code for radar intensity (a surrogate for precipitation) is a somewhat ambiguous sequence of additive and subtractive primary hues: blue, cyan, green, yellow, magenta, and red. To a reader of newspaper weather maps, the WXP sequence is similar to the spectral sequence of hues often used for temperature: blue, green, yellow, orange, and red. But to a professional meteorologist, the sequence also resembles the color code for conventional radar: light blue, green, yellow, orange, light red, and red.

As viewers of weather websites and televised weathercasts are well aware, the range of cartographic weather products is much broader in symbology and content than the USA Today weather map and its various local clones, which have made the blue-to-red scale of temperatures a standard part of Americans' graphic vocabulary. According to a study by the American Meteorological Society, atmospheric scientists, forecasters, pilots, and other 'power users' of weather graphics cope with at least five different color sequences, which portray intensity variations for no less than ten different kinds of surface features [Doore et al., 1993]. Most straightforward are the graytones used for snowfall accumulation and water vapor as well as for visible and infrared satellite imagery, and the now-familiar blue-green-yellow-orange-red sequences for air temperature, soil temperature, and sea-surface temperature. Three types of radar imagery have their own subtly distinct sequences of hues. In addition to the six-step light blue-through-red sequence for conventional radar, Doppler radar displays typically represent reflectivity with 15 colors bracketed by light blue and white, and wind velocity with 14 colors bounded by light green and red. What's more, weather advisories for pilots and emergency management officials use a yellow-orange-red sequence to warn of hazards such as atmospheric turbulence, icing, fire danger, high seas, high winds, and the "convective outlook" for severe weather such as tornadoes and thunderstorms. Based on red's cultural connotation of danger, the hazard sequence seems a pedagogically sound strategy, readily understood and instantly recalled.

In 1991, the American Meteorological Society responded to a "proliferation" of color maps by establishing a Subcommittee on Color Guidelines within its Committee on Interactive Information and Processing Systems. Aware of the difficulty of formulating, much less enforcing, broad standards, the subcommittee adopted the modestly pragmatic goal of "try[ing] to unify the variety of color schemes that are used to depict weather information for its end users—the general public—as well as for the default colors for workstation system" [Doore et al., 1993, 1709]. In addition to assessing the current practices of meteorologists and software vendors as well as polling the preferences of the news media, airlines, and other principal users, the subcommittee sought the advice of psychologist Robert Hoffman and other human-factors experts and developed a set of guidelines.


Based on the red-green-blue color model, the guidelines consist of definitions for 15 named colors, recommended assignments of these colors to 64 feature categories, and 12 human-factors principles [Doore et al., 1993]. The named colors, most of which rely on only one or two additive primary colors, recognize the need for contrast among multiple features in the same display, and their assignments reflect prevailing practice and graphic logic as well as the relative importance of the features and their likelihood of juxtaposition on the same map. Foremost among the human-factors principles were recommendations for choosing colors with familiar relationships (e.g., red to represent danger), using them consistently, following time-tested commonsense combinations (e.g., darker symbols on a lighter background), and limiting the number of colors in a single display. Although the guidelines reflect cautious optimism, a separate article published the same year by Hoffman and his colleagues [1993, 515] calls for continued evaluation and iterative prototyping lest troublesome color choices based on untested consensus be "cast in stone."

Figure 3. Symbol legend for WXP (The Weather Processor), a weather graphics package developed at Purdue University.

Perhaps the most complex and potentially controversial meteorological displays are the NEXRAD (Next Generation Radar) maps, on which differences in hue and value distinguish more than a dozen quantitative categories as well as essentially qualitative differences for hail and tornadic winds. A flexible and impressively reliable monitoring system, NEXRAD provides reflectance maps showing different levels of precipitation intensity as well as wind-velocity maps describing differences in the direction and speed of wind. (Klazura and Imy [1993] describe these and other NEXRAD analysis products.) That the two types of map look distinctly different is less a conscious effort to avoid confusion than an overt accommodation of important qualitative distinctions. A typical reflectance map portrays 15 different levels of precipitation intensity by embedding three intensity levels (light, medium, dark) in a sequence of five spectral hues (blue, green, yellow-orange, red, and purple). With this scheme, the broad area of rain associated with a warm front might appear as a wide zone of greens and blues, whereas a more locally intense storm might stand out as a smaller zone of orange and yellow punctuated perhaps by a dollop of red or a red halo with a purple center. By contrast, on a typical storm-relative-velocity map, useful for monitoring tornadoes and severe thunderstorms, greens represent winds moving toward the antenna site, reds show winds moving away, and value differences represent wind speed. On a blocky, relatively large-scale raster image of an area under a severe-storm watch, adjoining bright red and bright green cells signify the violent circular winds of a tornado. (Although the lightest colors also represent the more gentle inbound and outbound winds at the middle of the scale, among the map's other categories lighter colors represent stronger winds.)

Especially puzzling are the ends of these scales, which taper to lighter colors. In particular, the highest reflectance level (reflectivities greater than or equal to 75 decibel reflectance units) is represented by white, not a darker purple, whereas the upper and lower ends of the wind-velocity scale represent exceptionally severe winds toward or away from the antenna site with light green and light red, respectively. An interview with Jeff Waldstreicher, Science and Operations Officer at the National Weather Service Office in Binghamton, New York, explained both apparent paradoxes. According to Waldstreicher, reflectivities greater than 75 dBZ are rarely seen: "I don't remember ever seeing it on our radar, but I have in other parts of the country, [where it] would likely be associated with very large hail." A pattern of white within purple marks an unusual, highly significant occurrence. Unlike the discontinuous values on a choropleth map, NEXRAD reflectances typically form a continuous surface on which red or purple draws the eye to higher-than-average precipitation, purple within red suggests even more intense precipitation or larger particles, and white within purple signifies extraordinarily large hail. On wind-velocity displays, similar nestings of light colors within dark colors pinpoint the most severe winds while contrasting greens and reds differentiate inbound and outbound winds. A light-within-dark signature is efficient, Waldstreicher notes, because "when we are looking for potentially tornadic signatures, we are looking for areas that can be as small as a couple of pixels, so they need to stand out visually." Even so, NEXRAD wind-velocity displays are not locked into a single color scheme: "Most offices, including ours, utilize an alternate color scale that can be toggled via a mouse click that visually highlights wind speed maximum (both inbound and outbound) more than the directional difference."

Highly interactive workstations with a flexible symbology help individual weather service offices adjust to regional weather patterns as well as accommodate staff members with impaired color vision. As Jeff Waldstreicher points out, "the color scales are somewhat arbitrary [and] can be redefined at the workstation level." Although his office uses scales that have "evolved into somewhat of an accepted standard," other offices deviate from the accepted default to support an employee with red-green color blindness. Although standard colors promote communication among meteorologists monitoring severe weather or recapitulating their interpretation of yesterday's storm, efficient and effective use of NEXRAD and other advanced monitoring systems demands a flexible display system that affords rapid visual re-expression of data. Especially important are interactive, multi-screen displays with which forecasters can compare features and alternative displays as well as experiment with critical values [Hoffman, 1991] and draw on their well-informed, expert's mental representation of the atmosphere and its behavior [Lowe, 1994].
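As a rough illustration of the nested scheme just described, the sketch below builds a 15-class reflectivity palette by varying lightness within five spectral hues and reserves white for the rare bin at or above 75 dBZ. The hue values, lightness steps, and dBZ breakpoints are assumptions chosen for demonstration; they are not the NEXRAD product specification or the AMS guideline colors.

# Illustrative 15-class reflectivity palette: three lightness steps nested in five hues,
# with white reserved for extraordinarily high reflectivity. All numbers are assumptions.
import colorsys

HUES = {"blue": 0.61, "green": 0.33, "yellow-orange": 0.11, "red": 0.0, "purple": 0.83}
LIGHTNESS = [0.75, 0.55, 0.35]            # light, medium, dark within each hue

def reflectivity_palette():
    colors = []
    for hue in HUES.values():
        for lightness in LIGHTNESS:
            r, g, b = colorsys.hls_to_rgb(hue, lightness, 1.0)
            colors.append((round(r, 2), round(g, 2), round(b, 2)))
    return colors                          # 5 hues x 3 steps = 15 ordered classes

def color_for_dbz(dbz, palette, lo=5.0, hi=75.0):
    if dbz >= hi:
        return (1.0, 1.0, 1.0)             # white flags the rare >= 75 dBZ case
    index = min(len(palette) - 1, max(0, int((dbz - lo) / (hi - lo) * len(palette))))
    return palette[index]

print(color_for_dbz(42, reflectivity_palette()))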

Concluding Remarks

As Cynthia Brewer (1997) argues, broad strictures against spectral color schemes for quantitative data need to be reconsidered and carefully qualified. When explained by the map author and understood by the map viewer, multi-hue color schemes can be highly efficient in emphasizing critical values and highlighting direction as well as degree of divergence. And as Brewer observes, the naturally continuous distributions of coherent, high-resolution environmental data as well as the artificially generalized distributions of low-resolution data yield nested sequences of ordered colors that promote ready, unambiguous decoding, certainly more so than the discontinuous distributions typical of choropleth maps. At least equally important is the viewing environment [Carter, 1988]: meteorologists cope with the highly complex maps by manipulating the display [Hoffman, 1991], whereas television viewers rely on a professional weathercaster who points out storms and interprets noteworthy features [Carter, 1998]. Even so, trained forecasters and television viewers seem to benefit from familiarity with individual colors and color schemes consistently assigned to specific weather phenomena. For this reason, research on the effectiveness of spectral, diverging, and other multi-hue color schemes needs to consider familiar associations of specific color schemes with specific features or phenomena as well as interactively toggled pairings of color schemes.

But don't look for meteorological color standards more rigid than those now in use. Functional familiarity demands flexible de facto standards, with local, national, and international variants. Indeed, past attempts to standardize symbols on weather maps indicate that the advantages of full international uniformity can neither match the convenience of language-specific alphabetic symbols nor overcome the American public's obstinate resistance to metric units. Yet even though convenience and necessity argue against rigid standards, default color schemes and similar conventions are inevitable. And like any rules, these defaults are especially helpful to those who not only understand when and how to ignore them but can do so efficiently.

References

Abbe, C. (1914). The weather map on the polar projection. Monthly Weather Review, 42(1), 36-38.
Anon. (1914). New daily weather map. Monthly Weather Review, 42(1), 35-36.
Bertin, J. (1983). Semiology of Graphics: Diagrams, Networks, Maps. University of Wisconsin Press, Madison.
Bjerknes, V. (1920). Sur les projections et les échelles à choisir pour les cartes géophysiques. Geografiska Annaler, 2(1), 1-12.
Brewer, C.A. (1997). Spectral schemes: Controversial color use on maps. Cartography and Geographic Information Systems, 24(4), 203-220.
Carter, J.R. (1988). The map viewing environment: A significant factor in cartographic design. American Cartographer, 15(4), 379-385.
Carter, J.R. (1993). Weather maps on television in the USA. Proceedings of the Sixteenth International Cartographic Conference, Cologne, Germany, 244-254.
Carter, J.R. (1998). Uses, users, and use environments of television maps. Cartographic Perspectives, no. 30, 18-37.
Doore, G.S., et al. (1993). Guidelines for using color to depict meteorological information: IIPS Subcommittee for Color Guidelines. Bulletin of the American Meteorological Society, 74(9), 1709-1713.
Fleming, J.R. (1990). Meteorology in America, 1800-1870. Johns Hopkins University Press, Baltimore.
Gregg, W.R., and Tannehill, I.R. (1937). International standard projections for meteorological charts. Monthly Weather Review, 65(12), 411-415.
Griggs, A.L. (1955). The background and development of weather charts. Bulletin, Geography and Map Division, Special Libraries Association, no. 21, 10-13.
Harrington, M. (1894). History of the weather map. U.S. Weather Bureau Bulletin, no. 11, pt. 2, 327-335.
Henson, R. (1990). Television weathercasting: A history. McFarland, Jefferson, North Carolina.
Hoffman, R.R. (1991). Human factors psychology in the support of forecasting: The design of advanced meteorological workstations. Weather and Forecasting, 6(1), 98-110.
Hoffman, R.R., et al. (1993). Some considerations in using color in meteorological displays. Weather and Forecasting, 8(4), 505-518.
Jewell, R. (1981). The Bergen School of meteorology: The cradle of modern weather-forecasting. Bulletin of the American Meteorological Society, 62(6), 824-830.
Klazura, G.E., and Imy, D.A. (1993). A description of the initial set of analysis products available from the NEXRAD WSR-88D system. Bulletin of the American Meteorological Society, 74(7), 1293-1311.
Kocin, P.J., et al. (1991). Surface weather analysis at the National Meteorological Center: Current procedures and future plans. Weather and Forecasting, 6(2), 289-298.
Lowe, R.K. (1994). Selectivity in diagrams: Reading beyond the lines. Educational Psychology, 14(4), 467-491.
Monmonier, M. (1999). Air apparent: How meteorologists learned to map, predict, and dramatize weather. University of Chicago Press, Chicago.
Talman, C.F. (1916). Meteorological symbols. Monthly Weather Review, 44(5), 265-274.


Session / Séance 12-A Noise in Urban Environment: Problems of Representation and Communication

Jean-Claude Muller Ruhr-University Bochum, Germany [email protected]

Holger Scharlach Ruhr-University Bochum, Germany [email protected]

Matthias Jäger Ruhr-University Bochum, Germany [email protected]

Abstract Various ways of representing urban noise are discussed, involving techniques such as discrete and continuous mapping, two- and three-dimensional viewing, and the use of multimedia resources such as sound and interactivity. Some of these techniques require modelling the noise data from a discrete sampling to a continuous data field. The origin of urban noise production is identified and a distinction is made between physical noise sources and noise perception. It is argued that new, efficient multimedia and interactive techniques to communicate noise information, when integrated into a GIS, may be very useful for city planning and environmental applications.

Introduction

According to the World Health Organization, "noise must be recognized as a major threat to human well-being". A recent article from the German Press Agency [DPA, 1999] mentioned that over fifty percent of the German population suffers from noise disturbance and that 25 percent have moved away from cities in search of quieter environments. Noise is directly related to population density and is increasing everywhere, but it is increasing faster in urban areas, where it is particularly concentrated, than in non-urban areas. Noise may be produced by a variety of sources, including natural ones like wind, rainfall, and wildlife, but most importantly anthropogenic ones like road and rail traffic, air traffic, industrial plants, commercial activities, tourism and entertainment. Because of its increasingly threatening effects on quality of life, information about noise is becoming a necessary ingredient in planning new construction in cities or improving the welfare of residential and recreational areas. Measures against noise taken by local governments can already be observed in multiple forms, such as the creation of green buffer zones around hospitals and residences for the elderly or the imposition of speed limits along highways crossing residential areas.

Whereas it is clear that noise information, like other polluting factors such as hazardous chemical emissions and radioactivity, must be made available to the planners (and the citizens at large!), it is less obvious whether planners can access the information in simple and understandable ways. Numerical listings and written reports will only provide a partial view of noise reality. One needs tools to visualize the occurrence, the intensity and the nuisance factor of noise everywhere at once. In this paper we will discuss various ways of representing noise for the purpose of facilitating the communication of urban noise information. Various visualization techniques will be presented, including traditional ones (such as maps) as well as new ones involving the use of multimedia. First, the availability and form of noise data will be discussed, as well as the necessity of noise data processing and modeling for use in Geographical Information Systems (GIS). Then various visualization solutions will be introduced, both for discrete and continuous noise data. Finally, potential applications of multimedia noise representations for planning and political decision making will be reviewed, pointing out the communication problems which still remain to be solved.

Human Noise Perception - Physical Noise Production

Sound surrounds us every day, regardless of whether we live in a small country village or a big city. But there is a difference between the voice of a bird and the roaring engine of a car. Everyone agrees that it is more soothing to listen to a singing bird than to a loud car engine. On the other hand, the voice of the same bird can also be nerve-racking if it prevents you from sleeping in the early morning hours. Hence there are countless different sounds that are perceived very differently depending on the situation in which we hear them. Sounds can be desired, e.g. if you want to listen to music, or undesired, e.g. if there is a construction site next to your house. Whenever a sound is undesired and disturbing we speak of noise, i.e. a sound that physically, psychologically, socially or economically impairs an afflicted person [Guski, 1987]. A generally accepted definition of noise does not exist, since the perception is subjective and depends on the situation in which the sound is heard. It is possible to distinguish between sound and noise on a subjective basis, but for planning purposes objective data about noise pollution are essential. It is therefore worth taking a closer look at physical noise production.

Sound is caused by the movement of air particles (e.g. the vibration of a diaphragm in a loudspeaker disturbs the nearest air particles). The resulting propagation is not linear but produces alternating positive and negative air pressure. This sound wave can be represented as a sine curve in which the peaks indicate positive pressure and the troughs negative pressure. The more quickly the pressure changes occur within a given time period, the higher the frequency of the sound. Greater differences between the highest and the lowest pressure result in a higher amplitude. The frequency composition and the specific amplitude of every frequency involved change quickly over time. A fundamental characteristic of sound is that it loses energy as it travels away from its source. The amount of energy loss depends on the character of the barriers in its way; certain materials, for example, can absorb or reflect a large amount of energy. Because of this characteristic, a sound level can be measured either near its origin (sound emission) or at the ear of the listener (sound immission). The range of human sensitivity is extremely wide, and the decibel scale (after Alexander Graham Bell, inventor of the telephone) is used to represent sound levels. The threshold of audibility of a healthy young man is 0 dB, while the threshold of pain lies at about 140 dB. When using the decibel scale it is important to note that 20 dB compared to 10 dB does not mean a doubling of the sound energy but rather that the energy is ten times higher.

In conclusion, sound can be measured physically and represented using the decibel scale. Although the psychological aspects of noise are quite important and should not be omitted, the research carried out here concentrated on the physical characteristics of noise. The basic question is how the results of noise measurements and simulations can be visualised in a multimedia environment.
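Expressed as a formula (standard acoustics, added here for clarity), a level difference \Delta L in decibels corresponds to a ratio of sound energies (intensities):

\Delta L = 10 \log_{10}\!\left(\frac{I_2}{I_1}\right)\ \mathrm{dB}, \qquad \frac{I_2}{I_1} = 10^{\Delta L / 10}

so a difference of 10 dB means a tenfold increase in sound energy, while a doubling of the energy corresponds to only about 3 dB, since 10 \log_{10} 2 \approx 3.01.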


Recent Technical Developments in Visualization

It is commonly recognized that recent developments in GIS, Internet and Web-based technology have changed the nature of maps [MacEachren and Kraak, 1997]. For the purpose of this research we only need to mention the new visualization opportunities offered by the combined use of hypertext and hyperlink techniques, which allow interactive map queries and the integration of multimedia documents like photography, video or animation and sound as parts of the cartographic representation. These new opportunities open a range of new applications. Space and time can be truly integrated and dynamically represented (such as the spreading of population over space or the changes of atmospheric temperatures); the presentation of non-temporal variables can be enhanced through animation in order to facilitate map understanding (animated sequencing of taxonomic variables such as soil or vegetation); map content can be enriched through interactive hyperlinks to a database of texts, tables, pictures and sounds in order to afford queries and explorations [see the recent development of interactive multimedia atlases such as in Switzerland, Sieber and Bär, 1996]; dynamic three-dimensional modeling can be used to depict volume objects such as real or virtual landscapes, or non-tangible variables such as population densities.

Sound is also an important added opportunity to enhance visualization but has not yet been exploited to its full capacity. Sound applications are limited for the most part to the use of popular or classical music as a companion to make a visual presentation more attractive, or to the use of speech to add explanation to a picture without overloading the screen with written text. Music, when used appropriately and carefully (i.e. without overdoing it) can certainly be exploited to attract the viewer but is essentially an entertaining tool, with minimum cartographic substance. Verbal communication through speech, instead, can add substantial information to a graphic or a map, and research is currently under way to find out in which ways spoken communication can be combined with a map presentation in order to facilitate map use [Borchert, 1998]. But there are very few examples of sound being used as an extension of the visual variables introduced by Bertin more than thirty years ago, although the application of sound to "visualize" metadata such as data quality and uncertainty in GIS has often been suggested, e.g. prompting an alarm or a strident tone to notify the user that his/her cursor is entering a zone with unreliable information [Fisher, 1994; Krygier, 1994; Van der Wel et al., 1994]. One of the difficulties, of course, is that sound alone cannot communicate an overview of spatial relations in two or three dimensions. It is a one-dimensional medium of communication, and sound signals, like animations, require the temporal dimension for their expression and can only be perceived sequentially. Hence sound for cartographic communication can only be conceived as an addition to, not a substitute for, pictorial vision. In this paper the use of multimedia with sound will be investigated not as a symbolic artifact to communicate metadata, but as a realistic representation emulating the noise occurring in an urban environment. One obvious advantage of this approach is that we obtain a perfect match between the phenomenon to be represented and the tool used for its communication.

Urban Noise – History, Sources, Measurement

History

The problem of noise pollution is inextricably linked with high population density in urban areas. Today 43% of the earth's population lives in cities, and a forecast by the United Nations estimates that the urban population will double by 2025 to 4.6 billion people [Meurer, 1997]. Besides this, noise "is also closely related to industrialization and the modern products of such activity especially heavy machinery, aircraft and motor vehicles" [Harnapp and Noble, 1987]. Especially in the 1990s noise has been recognised as a major problem not only from a medical but also from an economic point of view. The social costs of the environmental factor "noise immission" have been evaluated during a research project in the Federal Republic of Germany. As a result of this research, noise-induced social costs of more than 30 billion D-Mark per year could be estimated [Weinberger, 1992].

Noise is not a very recent problem, however. Already in the first century BC Horace described the noise in the ancient city of Rome (ca. 700,000-900,000 inhabitants) as unbearable. In the late 19th century Oscar Wilde observed that "America is the noisiest country that ever existed. One is waked up in the morning not by the singing of the nightingale but by the steel worker. It is surprising that the sound practical sense of the Americans does not reduce this intolerable noise" [Harnapp and Noble, 1987]. Although noise was perceived as a negative phenomenon for a very long time, government regulations against noise only appeared recently. For example, in the United States the "Noise Control Act" was passed in 1972, whereas in Germany the latest instrument for municipal noise protection was passed in 1990. Summing up, it may be said that noise has been recognised as an environmental problem for more than a millennium, but actions against it were not taken before this century. Especially in this decade a growing interest in noise as one part of the urban ecosystem can be observed.

Sources

Noise in an urban environment can result from hundreds of different sources. The sound of a lawn-mower can be as nerve-racking as the horn of a car or people talking to each other on the street during the night. The different noise sources therefore have to be categorized. First of all, common urban sounds that occur every day and night at nearly every place in the city can be distinguished from sounds that are unusual. The latter attract the attention of the listener, possibly not because of their loudness but because of their different information content [Hofmann, 1997]. Unusual sounds change from time to time and cannot be expressed simply in decibels but have to be estimated with the help of qualitative methods in the particular situation. For this reason the most important sound category for this research refers to the predictable or usual sounds. These can be further divided into the following groups: street traffic, railway traffic, tramway traffic, marshalling yards, water traffic, air traffic, industrial plants, military facilities, sports facilities, and leisure facilities [Losert et al., 1994].

Obviously most of the urban noise results from traffic. Following Harnapp and Noble (1987), "a survey in London revealed 80% of the city's noise to be generated by automobiles. A similar study in Tokyo recorded 86%." Noise from aircraft is also important and often discussed in public. Whereas traffic noise affects large urban areas, noise from industrial plants and leisure/sports facilities normally affects a small area at certain times, e.g. during a sports event. For this reason it is difficult to calculate an equivalent loudness for a time period (day-night loudness). In conclusion, we can distinguish between the following three major noise sources in an urban environment:
- traffic noise,
- noise from industrial plants and
- noise from sports/leisure facilities.

Measurement

Urban noise data can be obtained in two different ways: 1) direct measurement or 2) estimation through simulation software. In the first case noise is measured directly by measuring instruments which are placed at a certain distance from the noise source, e.g. a major road. Measurements may last from 10 minutes near a much-frequented street to 1 hour near a quiet side street, depending on the traffic load. The results can be either data about emissions or immissions. Immissions refer to the loudness at a certain building front and include noise from all sources in the environment, whereas measurements of emissions characterize the noise from one source, e.g. the traffic noise on streets. Maps of noise emissions show the streets as lines of different colours or different widths depending on the noise level. In contrast, noise immissions are represented by isophone maps as a continuous surface. That means, theoretically, that a sound level can be derived for every point on the map. This method has been used by many major German cities since the 1960s to obtain data about noise pollution [Vogt, 1997]. The data have been used for environmental atlases and environmental information systems. But since noise changes remarkably over time and space, many measurements must be made at different places and at different times. Consequently this method is very expensive due to the high personnel costs.

Nowadays noise propagation is mostly computed with the help of simulation software. For example, traffic noise depends on several factors such as the traffic load, the portion of lorries, the average traffic speed, the street surface, the street gradient, etc. If data for all these parameters are available, the noise immission can be computed for a certain area. One advantage of using such a model is that changing one or two of the parameters leads to a different noise map. Therefore the noise levels for different planning alternatives can be simulated and the results can be used in a spatial decision support system. A disadvantage of this method is of course that the computed levels are not as precise as direct measurements, and the noise level obtained depends on the quality of the secondary data listed above.
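As a rough illustration of how such a simulation combines its input parameters, the toy model below lets the emission level grow logarithmically with traffic volume and heavy-vehicle share and attenuates the level with distance from the road. All coefficients are invented for demonstration; they are not those of any official calculation scheme such as the German RLS-90.

# Toy traffic-noise model (invented coefficients, for illustration only).
import math

def emission_level(vehicles_per_hour, heavy_share, speed_kmh):
    """Approximate emission level in dB(A) at a reference distance from the road."""
    base = 37.0 + 10.0 * math.log10(vehicles_per_hour * (1.0 + 3.0 * heavy_share))
    speed_term = 20.0 * math.log10(max(speed_kmh, 30.0) / 50.0)
    return base + speed_term

def immission_level(emission_db, distance_m, reference_m=25.0):
    """Simple distance decay: about 3 dB per doubling of distance from a line source."""
    return emission_db - 10.0 * math.log10(max(distance_m, reference_m) / reference_m)

level = immission_level(emission_level(1200, 0.15, 60), distance_m=100)
print(round(level, 1), "dB(A)")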

Visualization of Urban Noise: Discrete Solutions

As was mentioned in the previous section, the intensity of noise is measured at discrete locations. One can write those measured intensities directly on a map at their respective locations (some sort of noise chart), but this would provide a spatial table and would not help the visualization much. A representation of noise along critical lines (such as highways) would be interesting for planning purposes. Noise, however, propagates and can be considered an ubiquitous, continuous variable. In this case, the noise values measured and recorded at points may be extrapolated to create a field variable, just like any other field variable such as temperature or atmospheric pressure. A standard technique for mapping physically measured field variables is the isometric method [Robinson et al., 1996]. In this case the isometric lines, or isophones, show the locus of points of constant noise intensity. Since the data were already provided to us in the form of a dxf file, we could map them directly, adding colored gradients for better understanding, using ArcView 3.01 (Figure 1). This is a first, albeit traditional, mapping solution. One extension to this solution would be to add transparent overlapping layers such as air photography and land use to see where patterns are in phase, and where relationships between noise and land use can be discovered. Another extension would be to add interactivity to the isometric view by giving the user the possibility of clicking his/her mouse over particular landmarks (factory, crossing highways, etc.) to yield a noise value and a picture of the object at that particular spot (Figure 2). This would add both extra numerical information and realism to the presentation. Finally, one can add sound by mouse click at any spot on the map. The produced tone is of course the same within the area bounded by two subsequent isophones but changes in intensity when the mouse crosses an isophone to enter a new area. This simple multimedia extension including interactivity and sound was implemented on a Pentium II using the authoring software Director version 7.

Although the colored isometric representation, with or without multimedia artifacts, does provide a spatial overview which is relatively easy to understand, it is a symbolic abstraction which may be misleading. An area patch bounded by two subsequent isophones has been uniformly colored, which wrongly suggests that within this area noise does not change intensity. In fact, by adding colors for the classes bounded by isophone values (see the legend in Figure 1) we created a discrete representation of a field variable f(x,y) which is in reality continuously changing value. A colorless isometric representation would be less misleading (no classes would be shown in the legend) since it does not assume a stepwise classification, but it would not be as communicative. The problem then is to find a solution which combines both truth to the data and communication effectiveness.

Figure 1. Discrete isophon map (13 classes).
Figure 2. Discrete isophon map showing the use of hyperlinks to different media.

Visualization of Urban Noise: Continuous Solutions

Engineers who collect information about noise publish their results in tables, reports and maps. Their maps use the isometric representation, often combined with a class color scheme, with the advantages and inconveniences mentioned above. In this paper we are looking for an alternative solution to represent noise continuously over space in a form which is communicatively effective. Hence the idea of a mapping in which any (x,y) location (a pixel in the case of a computer map) takes on a scalar value defined by a noise function f(x,y). This function is single-valued and varies continuously from place to place. There are various ways of visualizing such a function, including a two-dimensional raster representation, a three-dimensional volume representation and other alternatives which would incorporate multimedia resources such as sound and animation. Before we can undertake this mapping, however, we need to calculate the values of the noise function. Since the noise data were originally made available to us in a discrete form (i.e. a listing of isolines along which noise values were obtained according to formulas which are proprietary and unpublished), we first need to transform the data into a continuous form. This is a typical modelling problem: extrapolate the information existing along discrete lines to create a data field where the information is available at any point.

Figure 3. Continuous isophon map (256 classes).
Figure 4. Continuous three-dimensional isophon map (100 classes).
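One straightforward way to perform this step is to sample points along the digitized isolines, attach to each point the noise value of its line, and interpolate the scattered points onto a regular grid. The sketch below assumes the numpy and scipy libraries and plain linear interpolation; the original study relied on its own (unpublished) formulas, so this is only an approximation of the modelling step, with made-up isoline coordinates.

# Sketch: turn values sampled along isolines into a continuous noise field f(x, y).
import numpy as np
from scipy.interpolate import griddata

isolines = {
    55.0: [(0, 0), (10, 2), (20, 5)],      # vertices digitized from a 55 dB isoline (made up)
    65.0: [(2, 8), (12, 10), (22, 14)],    # vertices from a 65 dB isoline (made up)
}

points = np.array([p for pts in isolines.values() for p in pts], dtype=float)
values = np.array([level for level, pts in isolines.items() for _ in pts])

grid_x, grid_y = np.mgrid[0:25:100j, 0:15:100j]                 # regular target grid
field = griddata(points, values, (grid_x, grid_y), method="linear")
nearest = griddata(points, values, (grid_x, grid_y), method="nearest")
field = np.where(np.isnan(field), nearest, field)               # fill cells outside the convex hull
print(field.shape)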

Data Visualization

The output of noise data as a data field may be visualized in various forms. One type of visualization is more conventional and involves only the use of graphic depiction, whereas the other requires the use of interaction and multimedia.

Graphic Depiction

A data field is computationally structured as a two-dimensional matrix where every matrix entry is a scalar. The portrayal of this matrix onto the computer screen can be readily achieved by assigning a color value ranging from 0 to 255 to each matrix entry. We obtain a bitmap where pixels are painted in color. We worked here with a standard screen resolution of 600 x 800 pixels, but higher resolutions are possible, depending on how much noise information has to be portrayed (a function of field resolution and area). The color scheme assigned to the pixels exploits only part of the color spectrum, ranging from green to red. Green depicts a low noise level and red a high noise level (Figure 3). Those colors correspond to the intuitive notion of noise disturbance, green suggesting the idea of a quiet environment and red conveying the notion of heat and excitement. Another traditional representation of field data is a volume representation where the surface heights correspond to the values of the function f(x,y) for every (x,y) position. In this case peaks depict high noise levels and pits depict low noise levels. One advantage of this representation is the possibility of draping another layer (such as land use) onto the surface in order to show pattern correlation between land feature (the independent variable) and noise (the dependent variable). One can also add the color scheme used in the two-dimensional representation to the volume surface. This redundancy emphasizes the morphology of the noise surface and may help the communication (Figure 4).
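As a sketch of the raster depiction just described, the snippet below scales a noise matrix to the 0-255 range and renders it with a green-to-red ramp. It assumes numpy and matplotlib and a random stand-in field; the original implementation used Director and ArcView rather than these libraries.

# Sketch: render a noise matrix with a green (quiet) to red (loud) color ramp.
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import LinearSegmentedColormap

noise = np.random.default_rng(0).uniform(40, 80, size=(600, 800))    # stand-in dB field
scaled = (noise - noise.min()) / (noise.max() - noise.min()) * 255   # map to 0-255

green_to_red = LinearSegmentedColormap.from_list("noise", ["green", "yellow", "red"])
plt.imshow(scaled, cmap=green_to_red)
plt.colorbar(label="relative noise level (0-255)")
plt.savefig("noise_field.png", dpi=100)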

Multimedia Presentation

The multimedia extensions which were implemented for the discrete visualization solutions above can also be introduced for the continuous solutions. In this case it is possible for the user to click with the mouse at any place on the screen and hear a sound which simulates noise and whose intensity is proportional to the value of the pixel as previously defined. For selected objects which are important sources of noise pollution, a photograph may also pop up simultaneously in a window to add realism to the presentation. Still more innovative from the point of view of communication is the possibility of pressing the mouse and "wandering" over the map area to produce a continuous output of noise information. This output may be in written form (a small window somewhere on the edge of the screen continuously displays the changing numerical value of the noise). Or it may be communicated in the form of a sound which is prompted by a mouse click but whose loudness changes continuously as long as the mouse button is held down and moves over the map area. Hence the user is able to experience interactively the intensity of noise over different parts of the map, which can be a land use map, an air photo or other kinds of documents relevant to noise. The advantage of the multimedia solution is thus the possibility of assigning to the map a function other than noise representation, thereby facilitating the task of visual exploration with several spatial layers relevant to noise. In this case the analytical tasks are emphasized, rather than overall presentation, since sound as a medium is poorly fitted for the communication of two-dimensional patterns. This multimedia representation with a continuous rollover function prompting the sound of noise was programmed and realized using Lingo, a programmatic extension of Director.
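The rollover logic itself is simple and can be sketched independently of the authoring environment (the original was written in Lingo for Director). In the hypothetical snippet below, 'noise' is a two-dimensional matrix of noise values and set_volume() stands for whatever audio call the chosen toolkit provides; both names are assumptions used only for illustration.

# Rough sketch of the rollover behaviour; names and the audio call are placeholders.
def rollover_volume(noise, x_pixel, y_pixel, min_db=40.0, max_db=80.0):
    """Map the noise value under the cursor to a playback volume between 0 and 1."""
    value = float(noise[y_pixel][x_pixel])
    return max(0.0, min(1.0, (value - min_db) / (max_db - min_db)))

def on_mouse_move(noise, x_pixel, y_pixel, sound_channel):
    """Called while the mouse button is held down; keeps the tone's loudness in step with the map."""
    sound_channel.set_volume(rollover_volume(noise, x_pixel, y_pixel))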


Potential Applications of Multimedia Noise Representations

The growing importance of information and knowledge management in our increasingly complex society has led to the development of many technical tools for data storage, analysis and display. As one of these tools, Geographic Information Systems have specialized in processing spatial and temporal data. Until the beginning of this decade GIS were used predominantly in scientific environments and local governments. Only now do decreasing prices for computer hardware and software and better performance make widespread use of GIS and the integration of multimedia techniques possible.

Since the protection of our environment has been recognized as a major challenge for the survival of humankind on an ever more densely populated earth, GIS have been used to integrate and overlay spatial data from different sources (e.g. temperature, rainfall, vegetation, ground water). The concept of layers is central to this application area. As we have seen, noise and silence are important for the quality of life, especially in an urban environment, but they cannot be interpreted without considering further data sources. First of all, noise has to be visualized in conjunction with topographic base data such as streets, buildings and cadastral information. Furthermore, aerial and terrestrial photographs may be useful to enhance the graphic quality, and information about noise regulations may help to interpret the noise map (Fig. 2). Additional information about the traffic load and sensitive facilities such as schools can help experts make decisions about future traffic policy.

At this point it becomes obvious that a central application field is the use of interactive noise maps for planning purposes in local government institutions. Since urban GIS are used by many people with different knowledge backgrounds, information about noise pollution has to be represented differently for different users. Where an auditory noise map overlaid with streets, buildings and an aerial photograph may be sufficient for the visitor of a planning meeting, the planning expert may require additional information about noise regulations and the number of inhabitants. According to MacEachren's (1994) cube of map use, we can distinguish between the visualization of urban noise data for the planning expert and the communication of this information to the layman, e.g. during a planning meeting.

To achieve a better visualization and communication of urban noise data, information about the needs of experts and laymen is essential. It is by far not enough to find a technical solution to this problem. Interviews with target groups must provide additional information that will influence the design of an urban noise information system and guarantee its applicability in the daily planning process.

Conclusions

In this paper we have shown a variety of solutions for the representation of noise, using discrete and continuous mapping in conventional and multimedia settings. The use of tone from a computer appears to be a logical tool to emulate real sound (in this case urban noise) and is available to everyone, since sound output is becoming standard equipment on PCs and workstations. Acoustic interactions have the advantage of adding one more degree of freedom to the map, since the two dimensions of the screen remain available and can be used for the portrayal of other spatial variables. We have limited ourselves to the problem of representation and communication of urban noise, and accordingly produced interactive acoustic maps. All of them, however, were snapshots of a particular physical noise state. Two important aspects of noise have not been considered, which relate to the psychological and temporal dimensions.

From a psychological point of view, sound is perceived very differently in different situations. The same sound can be annoying or stimulating, and there is no fixed border dividing sound from noise. Following Harnapp and Noble (1987), "the problem is that some sounds are noises only at certain times, in certain places, to certain people." One major problem for the representation and communication of urban psychological noise is the lack of data. Hence major efforts have to be made in 1) identifying the complex nature of perceived noise as opposed to simple physical noise emission or immission, and 2) collecting the data accordingly, in ways which are sufficiently reliable.

The second aspect, which refers to the temporal dimension, is particularly relevant for the monitoring of noise changes (either physical or perceived) over a period of time. Here again we lack data, which are even more complicated to capture, since the process involves continuous measurements over time or the simulation of temporal changes through new software models. These problems are presently being tackled in other research centres, particularly at the University of Lyon in France [Servigne et al., 1999]. Temporal noise data could readily be represented and communicated through multimedia animation software (temporal map sequences with blending in and out effects, video films of the temporal variation of noise, etc.) in discrete or continuous ways, as proposed in this project. This information could also be incorporated in a GIS in order for the user to make, besides the usual spatial queries, temporal queries such as: 1) at which time of the day and at which place does the highest noise disturbance occur, and 2) how large is the range of noise changes within a particular time period (day, week, etc.). Further research is also needed to obtain detailed information about the requirements of the experts and laymen who will work with an urban noise information system in the daily planning process. Finding an applicable solution to communicate and represent the multiple aspects of urban noise data, including the psychological dimension, is a major goal of this project.

References

Borchert, A. (1998). Die Kombination der Medien Karte und Verbalsprache zur nachhaltigen Akquisition räumlichen Wissens. Dissertation Research Proposal, University of Berlin.
DPA (1999). In Neue Westfälische Journal (25.2.99).
Fisher, P. (1994). Hearing the reliability in classified remotely sensed images. Cartography and Geographical Information Systems, 21(1), 31-36.
Guski, R. (1987). Lärm. Wirkungen unerwünschter Geräusche. Verlag Hans Huber, Bern, Stuttgart, Toronto.
Harnapp, V. R. and Noble, A. G. (1987). Noise Pollution. GeoJournal, 14.2, 217-226.
Hofmann, R. (1997). Können wir Lärm mit Schallmessungen und Grenzwerten bekämpfen. Lecture at SVG Zürich, 16.4.97.
Krygier, J. B. (1994). Sound and geographic visualization. In: Alan M. MacEachren and D. R. Fraser Taylor (Eds.), Visualization in Modern Cartography. Pergamon, New York.
Losert, R., Mazur, H., Theine, W. and Weisner, Ch. (1994). Handbuch Lärmminderungspläne: modellhafte Lärmvorsorge in ausgewählten Städten und Gemeinden. Erich Schmidt Verlag, Berlin.
MacEachren, A. M. (1994). Visualization in Modern Cartography: Setting the Agenda. In: Alan M. MacEachren and D. R. Fraser Taylor (Eds.), Visualization in Modern Cartography. Pergamon, New York.
MacEachren, A. M. and Kraak, M. J. (1997). Exploratory cartographic visualization: advancing the agenda. Computers and Geosciences, 23(4), 335-343.
Meurer, M. (1997). Stadtökologie. Eine historische, aktuelle und zukünftige Perspektive. Geographische Rundschau, 49(10), 548-555.
Robinson, A., Morrison, J., Muehrcke, P., Kimerling, A. and Guptill, S. (1995). Elements of Cartography, 6th edition. John Wiley, Chichester.
Servigne, S., Laurini, R., Kang, M.-A., Balay, O., Arlaud, B. and Li, K. J. (1999). A prototype of a system for Urban Soundscape. Proceedings of the 21st Urban Data Symposium, Venice, Italy, to be published.
Sieber, R. and Bär, H. R. (1996). Das Projekt Interaktiver Multimedia Atlas der Schweiz. In: Kartographie im Umbruch - neue Herausforderungen, neue Technologien. Proceedings, Cartography Congress Interlaken.
Van der Wel, F., Hootsmans, R. and Ormeling, F. (1994). Visualization of Data Quality. In: Alan M. MacEachren and D. R. Fraser Taylor (Eds.), Visualization in Modern Cartography. Pergamon, New York.
Vogt, J. (1997). Schallpegelanalysen in Städten. Geographische Rundschau, 49(10), 569-575.
Weinberger, M. (1992). Gesamtwirtschaftliche Kosten des Lärms in der Bundesrepublik Deutschland. Zeitschrift für Lärmbekämpfung, 39, 91-99.


Session / Séance 01-B The Visualization of Population Distribution in a Cartographic Information System - Aspects of Technical Realization of Dot Maps on Screen

Robert Ditz Institute of Cartography and Reproduction Techniques, University of Technology, Vienna, Austria [email protected]

Abstract This article describes a procedure to create dot maps in an Interactive Cartographic Information System (ICIS) and the corresponding functions for the analysis of such a presentation. The cartographic theory of dot maps and the automated placement of dots are examined. Functions of dot maps in an interactive system are analyzed. The initial consideration related to the visualization of population distribution with dot maps deals with the question of the size and the value of the dot unit. An inappropriate choice of the dot value can either cause too many dots that cannot be placed or areas that seem to have no population. In relation to the dot value, the size of the dot and the minimum distance between dots have to be chosen to obtain a readable display of the population distribution. It is obvious that a consistent placement of dots in conurbations cannot be obtained, so that special solutions similar to those developed for printed maps must be utilized. The second issue to be addressed is the structure of the database and the topographic base map required to visualize a dot map on the screen. To ensure a realistic representation, a detailed base map representing the settlements or houses is necessary. This requires a large scale, which in fact is not practical on the screen since the overall view is lost. Further, detailed population statistics related to houses - usually not available because of concerns of privacy - are needed at this scale to place the dots. Concluding observations about dot mapping will illustrate new interactive functions that are possible. One potential function might be the counting of dots and therefore the determination of the total population within a user-defined area by moving a pointer over the screen. This area could be delimited by map features, for example contour intervals, to determine the total population within a certain altitude range. In this case, the limits of this function are obvious because of scale, the accuracy of the data and the fidelity of the placement of dots. As this example demonstrates, the users of Cartographic Information Systems have to be informed about useful functions, how to use them, and the possibilities of interpretation.

Introduction

Many of the electronic atlases published so far, especially world atlases, focus on the representation of topographic information. Only regional atlases offer a variety of statistical information in the form of thematic maps, but most of them are 'view-only' and offer no possibility of interaction. The development of fast hardware components, the increasing distribution of Cartographic Information Systems and the availability of varied databases make it possible to offer even more thematic topics in such Information Systems. Of great interest are topics related to population statistics, especially the representation of population distribution, which has always been a challenge for cartographers. This article demonstrates a possible representation of population distribution with dot maps in an Interactive Cartographic Information System and discusses selected considerations concerning the methodological and technical realization. New possibilities of analysis by means of interactivity and the limits of a sensible use for interpretation are also shown.


Theoretical Background of Dot Maps

The advantage of dot maps is a more realistic presentation of socio-economic-geographical phenomena and the possibility to determine the number of dots inside a user-defined area [Dent, 1996; Gorki et al., 1987]. Thus the map can depict a distribution and the user can determine the actual quantity. The success of dot maps is based on three parameters: 1) the numerical value represented by each dot; 2) the dot size; and 3) the placement of the dots. A high dot value gives the impression of empty areas while a low dot value produces areas with a high concentration of population (see Figure 1). The dot value and the dot size are closely related to each other and depend on the scale, the distribution, the density and, above all, the purpose of the map [Geer, 1922]. In the opinion of Dent [1996], the dot value should be chosen in such a manner that at least 2-3 dots are placed in the area with the lowest density [cp. also Dickinson, 1964]. This will cause dots to coalesce in areas with high densities, as recommended among others by Raisz [1962] or Robinson et al. [1978]. But these coalesced areas impede the determination of the number of dots, as Dent admits further on.

Some attempts were made to develop a formula to determine an appropriate unique dot value. Töpfer [1967] started from the assumption of a 'tolerable graphical load' of the map, a quantitative attribute of the density of the features: he compares the area of all placed signatures in relation to the total map area. Assuming a mean graphical load of the map, the size of the signatures can be calculated. The weaknesses of this formula are the mean graphical load of the map and the fact that a minimum distance between signatures is not taken into consideration. Kelnhofer [1971] considered this minimum distance and developed a formula using a unit area of 100 mm² of the map, which allows the dot size to be calculated as a function of the number of dots to be placed. A regular distribution is therefore assumed, which in fact does not exist but is necessary to establish this relation. The calculation has to be done with 'critical areas' where a high density is expected. After examining several of these critical areas, a dot value for the whole map can be determined. If the number of dots with a unique dot value is too high, quantity-distinguished dots can be used instead. An unequivocal dot size can be obtained when presenting real areas as dot maps, for example the size of farmlands. The size of these 'area-equal dots' [Witt, 1970] depends only on the scale of the map.

The printed dot map of the population distribution should be constructed at a larger scale based on more detailed information such as settlements and houses, and then reduced to the final scale. The dots should be placed with positional accuracy for individual houses or at the center of gravity to represent a group of small isolated houses [Geer, 1922]. The placement of dots at the center of gravity is of minor significance at smaller scales because of the decreasing accuracy of placement caused by the smaller scale of representation [Kelnhofer, 1971]. The purpose of a dot map of population distribution is a realistic portrayal of the character of settlement patterns.

Figure 1. The graphical result of different dot sizes and values [Dent, 1996]
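As a rough illustration of Dent's rule of thumb, the following sketch (not part of the original system; the community populations and the helper name are invented) derives a candidate dot value so that the sparsest enumeration unit still receives two to three dots.

```python
def candidate_dot_value(populations, dots_in_sparsest=2):
    """Choose a dot value so that the least-populated unit still gets
    roughly `dots_in_sparsest` dots (after Dent's rule of thumb)."""
    sparsest = min(populations)
    return max(1, round(sparsest / dots_in_sparsest))

# Hypothetical community populations for a test area.
populations = [230, 410, 1250, 8900, 56000]
value = candidate_dot_value(populations)
print(f"dot value: 1 dot = {value} people")
print([f"{p} people -> {round(p / value)} dots" for p in populations])
```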


The Topographic Base Map and the Statistical Database for Dot Maps on Screen

The base map geometry in the Cartographic Information System used for the automated placement of dots consists of vectors and symbols originally generated for the production of maps on paper and for the interactive atlas of Austria [Kelnhofer et al., 1999]. The scale of this map geometry is 1:1,000,000, a scale that is still adequate for a dot map [Kelnhofer, 1971]. To ensure a realistic representation it is essential to have a detailed base map representing the houses or the blocks of houses. This detailed geometry requires a larger scale, which in fact is not practical on the screen because the overall view gets lost. Further, detailed population statistics related to the houses or blocks of houses - usually not available because of concerns of privacy - would be needed to locate the dots at a larger scale.

It is obvious that the process of automatically placing dots within the administrative areas of communities without any further graphical information will produce wrong results. Communities in high mountain areas illustrate this problem: because of the randomness on which this placement process is based, dots end up on ridges, or within lakes, woods or other uninhabited areas. In a first approach the settled areas of communities, villages and cities of the printed maps were used for the placement of dots. In addition, the water bodies, the lettering - available as vectors - and contour lines as well as hypsometric tints combined with hill shading are used to complete the representation of the population distribution (Figure 2). Further topographic elements like the road network and railways would be desirable. But, because of the random placement of dots, a sensible presentation has to be guaranteed; in particular, it must be prevented that dots are placed on roads or railways.

Figure 2. Part of the topographic base map for the automated placement of dots
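The paper does not detail how the settled areas and the uninhabited features constrain the placement; one possible way to prepare a 'placeable' region, sketched here with shapely (our choice of library, with invented geometries), is to subtract lakes, woods and similar exclusion features from the settled area before any dots are generated.

```python
from shapely.geometry import Polygon

# Hypothetical geometries in map units: the settled area of a community
# and a lake that must not receive any population dots.
settled_area = Polygon([(0, 0), (10, 0), (10, 6), (0, 6)])
lake = Polygon([(6, 1), (9, 1), (9, 4), (6, 4)])

# The placeable region is the settled area minus all uninhabited features;
# road and railway buffers could be subtracted in the same way.
placeable = settled_area.difference(lake)
print(f"settled area: {settled_area.area}, placeable area: {placeable.area}")
```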

The census data and the register of settlements from the Austrian Statistical Bureau are available in digital form as a database but have to be prepared for use in the Cartographic Information System. The structure of this database corresponds to that of the printed lists of the register of settlements and has to be adapted and aggregated for use in an Information System. This aggregation process could not be readily automated because of inconsistencies in the database. The preparation of the database required a good knowledge of the geographical reality.


Technical Realization of a Dot Map in an Interactive Cartographic Information System

The methodology of cartography has not progressed much since the introduction of computers, as Arnberger already noted in 1977 [Arnberger, 1977]. The break in the literature on cartographic methodology in the 1970s indicates this development. Although the use of computers makes the production of maps easier, no increase in the number of dot maps can be observed. Many programs have been developed for the production of printed dot maps, but mostly with a random or a regular placement of dots [Aschenbrenner, 1989; Wonka, 1989]. It is obvious that neither a random nor a regular placement will create a realistic distribution. The approach the author followed was also based on randomly placed dots, but was made more sophisticated by considering further topographic parameters for the placement.

A population distribution in a test area was chosen as an example for the technical realization of dot maps in an Interactive Cartographic Information System. This test area completely covers North and East Tyrol, which lie in the alpine region of Austria. The visualization of the dot map and the programming of the user interface were realized with the Microsoft Visual Basic programming language. MapInfo MapX serves as an interface between the vector geometry - stored in the internal MapInfo data format - and the programming language, and also offers some basic functions, for instance for navigation, identification and the marking of elements. The census data are stored in a Microsoft Access database, which is supported by Visual Basic as well as by MapInfo MapX.
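The actual system links the Access census table to the MapX vector geometry from Visual Basic; as a language-neutral illustration of that data flow, the sketch below uses an in-memory sqlite3 table (our stand-in) keyed by an invented community code that would also identify the community polygon in the base map.

```python
import sqlite3

# Stand-in for the census database: one row per community, keyed by a
# (hypothetical) community code shared with the base-map geometry.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE census (community_id TEXT PRIMARY KEY, population INTEGER)")
db.executemany("INSERT INTO census VALUES (?, ?)",
               [("70101", 12750), ("70102", 860), ("70103", 3400)])

def population_of(community_id):
    """Look up the population that has to be converted into dots for one
    community polygon of the topographic base map."""
    row = db.execute("SELECT population FROM census WHERE community_id = ?",
                     (community_id,)).fetchone()
    return row[0] if row else 0

print(population_of("70102"))   # -> 860
```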

Figure 3. User interface design for the dot map on screen

The user is offered a choropleth map of the population density as an entrance to this socio-geographical question, with the possibility to view the population distribution as a dot map in a particular window (see Figure 3), as a 'see-through' tool like a Magic Lens [Stone et al., 1994; cp. also Ditz, 1997]. The advantage of this Magic Lens is the simultaneous visualization of absolute and relative quantitative statistical data and the fact that the speed of the calculation of the dots can be enhanced, because the dot map has to be calculated only for this part of the screen map. Therefore all communities within the dot map area are processed and then marked in the database. When the user shifts the map, only those communities of the dot map not yet marked in the database have to be processed.

Quantity-distinguished dots with three different dot values and sizes are used for the presentation of the dot map to avoid coalesced dots in conurbations. These parameters were chosen after assessing the statistical data and the geographic characteristics of the population within the test area. The problems of either an insufficient number of dots or too many dots that are impossible to place are obviated by a proper dot value, which should be chosen by an expert. Squares rather than circles should be used to visualize the dots on the screen, as a result of the limited spatial resolution of computer displays [Spiess, 1996; Steinlechner, 1997]. The test has shown, however, that it makes no difference whether squares or circles are used, because of the smallness of the dots. The problems of using circles were therefore considered to be negligible.
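The text does not spell out how a population figure is split across the three dot values; a simple greedy decomposition, shown below with the 100/500/5000 set mentioned later in the paper (the function name and the treatment of the remainder are our assumptions), would be one way to do it.

```python
def quantity_distinguished_dots(population, dot_values=(5000, 500, 100)):
    """Greedily split a community's population into dots of decreasing
    value, so that conurbations need far fewer (larger) dots."""
    counts = {}
    remaining = population
    for value in dot_values:
        counts[value], remaining = divmod(remaining, value)
    return counts  # the remainder below the smallest dot value is dropped

print(quantity_distinguished_dots(23750))
# -> {5000: 4, 500: 7, 100: 2}  (remainder of 50 people not shown as a dot)
```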

The placement of dots

As mentioned above, the coordinates of the dots are determined by a random generator within the minimum-maximum bounding box of the populated area of a community (see Figure 4). After examining whether the dot lies completely inside the 'reduced' populated area - an area reduced in size by half of the minimum distance (grey area in Figure 4) to ensure a minimum distance between dots of different communities - all distances to dots already placed within this community have to be checked to guarantee a readable map. Using a regular grid with a constant column width and row height, as shown on the right side of Figure 4, increases the speed of this examination, especially in the case of long, narrow areas. It has to be determined which grid cells lie at least partially within the populated area. A randomly generated number then selects, from the list of 'placeable' cells, the grid cell in which the dot is to be placed. To avoid a regular placement, the dots are placed randomly within the grid cells.

Figure 4. The random placement of dots in populated areas of communities
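The original placement routine was implemented in Visual Basic with MapInfo MapX; the following Python/shapely sketch (geometry, parameters and library choice are ours, and the grid acceleration is omitted for brevity) illustrates the rejection-sampling step with the shrunken area and the minimum-distance check.

```python
import random
from shapely.geometry import Point, Polygon

def place_dots(populated_area, n_dots, min_dist, max_tries=10000):
    """Randomly place n_dots inside a populated area while keeping a
    minimum distance between dots (simple rejection sampling)."""
    # Shrink the area by half the minimum distance so that dots of
    # neighbouring communities also keep their distance.
    reduced = populated_area.buffer(-min_dist / 2.0)
    minx, miny, maxx, maxy = reduced.bounds
    placed, tries = [], 0
    while len(placed) < n_dots and tries < max_tries:
        tries += 1
        p = Point(random.uniform(minx, maxx), random.uniform(miny, maxy))
        if not reduced.contains(p):
            continue                                   # outside the reduced area
        if all(p.distance(q) >= min_dist for q in placed):
            placed.append(p)
    return placed

# Hypothetical populated area (map units) and parameters.
area = Polygon([(0, 0), (20, 0), (20, 8), (0, 8)])
dots = place_dots(area, n_dots=30, min_dist=1.0)
print(len(dots), "dots placed")
```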

Additionally, topographic map elements such as water bodies and contour lines can be used to improve the presentation of the population distribution. It can be assumed that in alpine regions settlement is concentrated near the bottom of a valley [Leunzinger, 1987]. Therefore dots randomly placed within a buffer zone beside a river could be dragged towards the course of the river. Furthermore, the altitude of the settlements is stored in the register of settlements of the Austrian Statistical Bureau, and this information can be combined with the contour lines to restrict the placement of dots to a certain altitude range. It is obvious that these two methods will only produce an illustrative population distribution in alpine regions; in low land areas, on the other hand, they will fail.

Another way to improve the dot map is to use the settlement symbols of the topographic base map, supplemented by further generalized centroids of settlements that are also available in the register of settlements. These centroids are the starting points for placing dots, as sketched below. The first dot is placed with the same coordinates as the centroid, and a defined number of further dots is then placed randomly within a circle (see Figure 5). After placing these dots, the circle in which the dots have to be placed is expanded and the process is continued until all dots for the population of a settlement are placed. This process starts with the settlement with the highest population of a community and continues until all settlements of the register are treated. The rest of the dots to be placed can then be spread randomly over the remaining populated area of this community.

Figure 5. Random placement of dots taking the settlement into consideration
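As an illustration of this expanding-circle strategy, here is a minimal Python sketch (the batch size and growth factor are invented, and the minimum-distance check of the previous section is left out).

```python
import math
import random

def dots_around_centroid(cx, cy, n_dots, start_radius=1.0, per_ring=5, growth=1.5):
    """Place dots around a settlement centroid: the first dot sits on the
    centroid itself, then batches of dots are scattered inside a circle
    whose radius grows until all dots are placed (cf. Figure 5)."""
    dots = [(cx, cy)]
    radius = start_radius
    while len(dots) < n_dots:
        for _ in range(min(per_ring, n_dots - len(dots))):
            r = radius * math.sqrt(random.random())    # uniform over the disc
            a = random.uniform(0, 2 * math.pi)
            dots.append((cx + r * math.cos(a), cy + r * math.sin(a)))
        radius *= growth                                # expand the circle
    return dots

print(len(dots_around_centroid(0.0, 0.0, n_dots=12)))   # -> 12
```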

As early as 1900, Hettner [1900] demanded that the actual shape of the settlement be considered in the construction of dot maps. Long and narrow villages should not be represented as circles or squares. This is only a problem with larger-scale maps, where the detailed geometry of the shapes of the settlements can be seen. A possible way could be the use of templates for different kinds of settlement structures. This could not be put into practice in the chosen test area because of the large number of different settlement structures in Austria and the various topographic circumstances, especially in the alpine regions.

Functions for Dot Maps in an Interactive Cartographic Information System

One of the advantages of a dot map over other types of symbolization is the possibility to count and add up the dots in a user-defined area to determine the value of the depicted distribution. Such a function suggests itself within an ICIS, where the user can either define an area or select a spatial element of the topographic base map, for example a certain area between two contour lines within a community. By doing so, the user runs the risk of exceeding the accuracy of the randomly placed dots with such a vector geometry at a scale of 1:1,000,000. Even larger-scale printed dot maps are constructed with a presumed accuracy that does not correspond to reality. Further, map elements are also influenced by generalization within a certain accuracy [Ditz, 1996], which is often forgotten when using maps as a basis for Geographic Information Systems.
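A counting function of this kind essentially sums the values of all dots falling inside the query geometry; the small sketch below (again using shapely as our stand-in, with invented dots and polygon) shows the idea.

```python
from shapely.geometry import Point, Polygon

def population_in_area(area, dots):
    """Estimate the population within a user-defined area by summing the
    dot values of all placed dots that fall inside it."""
    return sum(value for point, value in dots if area.contains(point))

# Hypothetical placed dots as (location, dot value) pairs and a query polygon.
dots = [(Point(1, 1), 100), (Point(2, 3), 500), (Point(8, 8), 100)]
query = Polygon([(0, 0), (5, 0), (5, 5), (0, 5)])
print(population_in_area(query, dots))   # -> 600
```

The caveats named above apply: such an estimate inherits the positional uncertainty of the random placement and of the generalized base geometry.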


Another possible function in an ICIS in connection with a dot map is the search for a specific settlement in order to view the distribution of its population and to display its total population. This requires the name of the settlement to be stored as an attribute of the placed dots. As already mentioned above, this function is not very useful because of the scale: the dots will generally be spread over a wider area than the extent of the settlement. Further, all settlements will show a kind of 'regular' spread of dots because of the circular placement. On the other hand, the presented dots can be assigned to a settlement by moving the mouse over them and having the name and the total population of the settlement displayed.

The automated placement of dots requires fixing the dot value and size in order to obtain an agreeable distribution in an acceptable time. The determination of a dot value supported by a program requires many decisions based on expert knowledge, level of experience, the statistical data, the topographic base map and, most of all, the purpose of the presentation. These aspects cannot be automated. Therefore, the user has no opportunity to change these parameters significantly without causing other effects. The ICIS gives the user the chance to switch (see Figure 3) between two basic dot values (100 and 50 people per dot) and the corresponding values (500 and 5000, or 200 and 2000, respectively). When changing the dot value, the scale of the screen map has to be adjusted to meet the requirements of the new conditions. To guarantee a reasonable area for placing the dots, the scale has to be enlarged. But a change of the scale has in that case no effect on the quality of the presentation, because the statistical data and the vector geometry remain the same; the analysis of the dot map is affected by the same uncertainties.

Conclusion

The purpose of this work is to revive dot maps as a form of cartographic representation. Further, it is an attempt to show the possibilities of creating dot maps automatically in an Interactive Cartographic Information System. Most of all, the purpose is to demonstrate the potential of possible interactive functions in connection with dot maps. These functions should stand in a sensible relation to the scale of a map and the associated cartometric analysis [Kelnhofer et al., 1997]. Beyond that, the restrictions of this kind of 'on-line' presentation and the accuracy of analysis that can be expected at such a scale have to be pointed out. The hardware used in cartography is getting faster and faster and programs are getting more 'intelligent', but there are still parameters of the dot placement process which can neither be automated nor help to speed up the calculation of a distribution map to nearly real time. Future investigations should concentrate on improving dot map presentations on the basis of the principles of cartographic methodology.

References

Arnberger, E. (1977). Die Probleme einer durch Computer und elektronischen Datenverarbeitung unterstützten thematischen Kartographie (Programm einer Arbeitstätigkeit). In Witt, W. (Ed.), Thematische Kartographie und elektronische Datenverarbeitung. Veröffentlichungen der Akademie für Raumforschung und Landesplanung, Hannover.
Aschenbrenner, J. (1989). Die EDV-unterstützte Herstellung von Punktstreuungskarten auf der Basis kleinster Bezugseinheiten. In Kelnhofer, F. (Ed.), Beiträge zur themakartographischen Methodenlehre und ihren Anwendungsbereichen. Berichte und Informationen (Sammelband der Hefte Nr. 10-20), Österreichische Akademie der Wissenschaften.
Cartwright, W., Peterson, M. P., and Gartner, G. (Eds.) (1999). Multimedia Cartography. Springer Verlag, Heidelberg. (In press)
Dent, B. D. (1996). Cartography - Thematic Map Design. Wm. C. Brown Publishers, Dubuque, Iowa.
Dickinson, G. C. (1964). Statistical Mapping and the Presentation of Statistics. Edward Arnold Ltd, London.
Ditz, R. (1996). Geometriedatengewinnung aus topographischen Karten - eine maßstabslose Annäherung an GIS? Österreichische Zeitschrift für Vermessung und Geoinformation (4), 329-332.
Ditz, R. (1997). An Interactive Cartographic Information System of Austria - Conceptual Design and Requirements for Visualization on Screen. In: Proceedings of the 18th International Cartographic Conference, Volume 1. Swedish Cartographic Society, Gävle, 571-578.
Geer, St. de (1922). A Map of the Distribution of Population in Sweden: Method of Preparation and General Results. Geographical Review, 72-83.
Gorki, H. F., and Pape, H. (1987). Stadtkartographie. Band III/1. Österreichische Akademie der Wissenschaften, Franz Deuticke, Wien.
Hettner, A. (1900). Über bevölkerungsstatistische Grundkarten. Geographische Zeitschrift, 185-192.
Kelnhofer, F. (1971). Beiträge zur Systematik und allgemeinen Strukturlehre der thematischen Kartographie. Österreichische Akademie der Wissenschaften, Wien.
Kelnhofer, F. (Ed.) (1989). Beiträge zur themakartographischen Methodenlehre und ihren Anwendungsbereichen. Berichte und Informationen (Sammelband der Hefte Nr. 10-20), Österreichische Akademie der Wissenschaften.
Kelnhofer, F., and Ditz, R. (1997). Interaktive Atlanten - Eine neue Dimension der kartographischen Informationsvermittlung. Mitteilungen der Österreichischen Geographischen Gesellschaft, Band 139, 277-312.
Kelnhofer, F., Pammer, A., and Schimon, G. (1999). Prototype of an Interactive Multimedia Atlas of Austria. In W. Cartwright, M. P. Peterson, and G. Gartner (Eds.), Multimedia Cartography. Springer Verlag, Heidelberg. (In press)
Leibrand, W. (Ed.) (1987). Kartengestaltung und Kartenentwurf. Ergebnisse des 16. Arbeitskurses Niederdollendorf 1986 des Arbeitskreises Praktische Kartographie. Kirschbaum Verlag, Bonn.
Leunzinger, H. (1987). Graphische Gestaltung thematischer Karten mit punktförmigen Elementen. In Leibrand, W. (Ed.), Kartengestaltung und Kartenentwurf. Ergebnisse des 16. Arbeitskurses Niederdollendorf 1986 des Arbeitskreises Praktische Kartographie. Kirschbaum Verlag, Bonn.
Raisz, E. (1962). Principles of Cartography. McGraw-Hill, New York.
Robinson, A., Sale, R., and Morrison, J. (1978). Elements of Cartography. John Wiley & Sons, New York.
Spiess, E. (1996). Digitale Technologie und graphische Qualität von Karten und Plänen. Vermessung, Photogrammetrie, Kulturtechnik (9), 467-472.
Steinlechner, G. (1997). Kartographische Informationsvisualisierung am Bildschirm mit Hilfe von Informationsebenen - Möglichkeiten und Grenzen einer neuen Informationsdarstellung. Unpublished Diploma Thesis, TU Wien.
Stone, M. C., Fishkin, K., and Bier, E. A. (1994). The Moveable Filter as a User Interface Tool. In Proceedings of CHI'94 (Boston, MA, April 24-28). ACM, New York, 306-312.
Töpfer, F. (1967). Gesetzmäßige Generalisierung und Kartengestaltung. Vermessungstechnik, 15(2), 65-71.
Witt, W. (1970). Thematische Kartographie - Methoden und Probleme, Tendenzen und Aufgaben. Veröffentlichungen der Akademie für Raumforschung und Landesplanung. Gebrüder Jänecke Verlag, Hannover.
Wonka, E. (1989). Das Gebäuderegister als Grundlage für die Aufbereitung statistischer Daten auf der Basis von kleinräumigen territorialen Einheiten. In Kelnhofer, F. (Ed.), Beiträge zur themakartographischen Methodenlehre und ihren Anwendungsbereichen. Berichte und Informationen (Sammelband der Hefte Nr. 10-20), Österreichische Akademie der Wissenschaften.
