The Intelligent Placement of Vegetation Objects in 3D Worlds

Li Jiang

SUBMITTED IN TOTAL FULFILMENT OF THE REQUIREMENTS OF THE DEGREE OF MASTER OF GEOMATIC ENGINEERING (BY RESEARCH).

April 2009

Supervisors: Prof. Ian Bishop, Dr Jean-Philippe Aurambout

Department of Geomatics

The University of Melbourne, Australia

Abstract

In complex environments, increasing demand for exploring natural resources by both decision makers and the public is driving the search for sustainable planning initiatives. Among these is the use of virtual environments to support effective communication and informed decision-making. Central to the use of virtual environments is their development at low cost and with high realism.

This thesis explores intelligent approaches to object placement, orientation and scaling in virtual environments such that the process is both accurate and cost-effective. The work involves: (1) determining the key rules to be applied for the classification of vegetation objects and the ways to build an object library according to ecological classes; (2) exploring rules for the placement of vegetation objects based on vegetation behaviours and the growth potential values collected for the research area; (3) developing GIS algorithms to implement these rules; and (4) integrating the GIS algorithms into the existing SIEVE Direct software in such a way that the rules find expression in the virtual environment.

This project is an extension of an integrated research project, SIEVE (Spatial Information Exploration and Visualisation Environment), which looks at converting 2D GIS data into 3D models used for visualization. The aims of my contribution to this research are to develop rules for the classification and intelligent placement of objects, to build a normative object database for rural objects, and to output these as 2D billboards or 3D models using the developed intelligent placement algorithms.

Built with Visual Basic and ArcObjects tools (ESRI ArcGIS) together with a game engine, the outcomes of the intelligent placement process for vegetation objects are shown in the SIEVE environment as 2D images and 3D models. These GIS algorithms were tested in the integrated research project. In the Victorian case study, rule-based intelligent placement rests on the idea that certain decision-making processes can be codified into rules which, if followed automatically, would yield results similar to those which occur in the natural environment. The final product produces Virtual Reality (VR) scenes similar to natural landscapes.


Considering the 2D images and 3D models represented in the SIEVE scenario, and the rules (for natural and plantation vegetation) developed in conjunction with scientists in the Victorian Department of Primary Industries (DPI) and other agencies, the outcomes will contribute to the development of policies for better land and resource management and link to wide-ranging vegetation assessment projects.

Keywords: landscape simulation, virtual environment, geographic information systems, intelligent placement, rule-based placement, vegetation classification


Declaration

This is to certify that:
(i) the thesis comprises only my original work towards the Masters except where indicated in the Preface;
(ii) due acknowledgement has been made in the text to all other material used;
(iii) the thesis is around 23,000 words in length, inclusive of footnotes, but exclusive of tables, maps, appendices and bibliography.

Acknowledgments

I would like to express my gratitude to my supervisor, Professor Ian D. Bishop, whose expertise, understanding, and patience added considerably to my graduate experience. I appreciate his vast knowledge, his guidance on research skills and his advice in writing the thesis. Thanks to the CRC and DPI for their research assistance. Finally, I wish to express special appreciation to my family for all the love and support they have offered me throughout my life.


Contents

Abstract ...... I
Acknowledgments ...... III
List of Tables ...... I
List of Figures ...... I
Acronyms ...... III
Definitions ...... III
1. Introduction ...... 1
1.1 Overview ...... 1
1.2 Problem Statement ...... 2
1.3 Outline of the Thesis ...... 3
2. Related Research ...... 5
2.1 The Development of Landscape Visualization ...... 5
2.2 Intelligent Placement of Objects in Virtual Worlds ...... 9
2.2.1 Urban objects placement ...... 10
2.2.2 Rural objects placement ...... 11
2.3 The SIEVE context ...... 12
2.3.1 Overview ...... 12
2.3.2 The Torque Game Engines ...... 15
2.3.3 Linkage to GIS ...... 16
2.4 Objectives of this Project ...... 18
2.5 Summary ...... 19
3. Intelligent placement of objects ...... 20
3.1 Introduction ...... 20
3.2 Vegetation Modelling ...... 20
3.3 Vegetation Classification and Representation ...... 24
3.4 Object library ...... 29
3.5 Vegetation Placement ...... 34
3.5.1 Intelligent Placement Constraints ...... 34
3.5.2 Vegetation Model Exhibition ...... 38
3.5.3 Vegetation Shape ...... 40
3.5.4 Vegetation Location ...... 42
3.6 Summary ...... 46
4. Plant Growth Simulation via SIEVE ...... 47
4.1 Introduction ...... 47
4.2 Integrating Algorithms into SIEVE ...... 47
4.2.1 SmartVeg Architecture ...... 47
4.2.2 Points Generation ...... 50
4.2.3 EVC-based Placement ...... 53
4.2.4 Plantation-based Placements ...... 62
4.3 Case Study ...... 65
4.3.1 Growth Potential Value and Vegetation Height ...... 66
4.3.2 EVC-based Simulation ...... 68
4.3.3 Plantation-based Simulation ...... 73
4.4 Summary ...... 74
5. Conclusions and Further Outlook ...... 76
5.1 Conclusions ...... 76
5.2 Evaluation of the SmartVeg ...... 77
5.3 Applications ...... 77
5.4 Limitations and Future Research ...... 78
6. Bibliography ...... 80


List of Tables

Table 3-1: Variables used to calculate growth potential value (Section 3.5.1)
Table 3-2: Resources used to determine vegetation placement (Section 3.5.1)

List of Figures

Figure 2-1: SIEVE system components (Section 2.3.1)
Figure 2-2: Overview of SIEVE Direct implementation (Section 2.3.1)
Figure 3-1: Vegetation texture images query interface (Section 3.2)
Figure 3-2: A plant (Mallee) displayed as 2D billboard (Section 3.2)
Figure 3-3: A plant (Maple) displayed as DTS model (Section 3.2)
Figure 3-4: Bioregion name (Section 3.3)
Figure 3-5: Ecological Vegetation Class benchmarks for the Central Victorian Uplands bioregion (Section 3.3)
Figure 3-6: Sample EVC documentation, EVC 47: Valley Grassy Forest (Section 3.3)
Figure 3-7a: Typical EVC polygon shapefile (Section 3.3)
Figure 3-7b: Related legend for Figure 3-7a (Section 3.3)
Figure 3-8: Species attribute table shown in ArcGIS for representing the EVC layer (Section 3.3)
Figure 3-9: A sample database with Bioregion Name (Section 3.4)
Figure 3-10: EVC type and related IndexCode in CVU Bioregion (Section 3.4)
Figure 3-11: Species details of each EVC (Section 3.4)
Figure 3-12: Commonly used species details and vegetation models (Section 3.4)
Figure 3-13: Sample species distance database (Section 3.4)
Figure 3-14: Sample growth potential layer and related EVC polygon layer for the same area (Section 3.5.1)
Figure 3-15: A sample forestry species description (Section 3.5.3)
Figure 3-16: Co-location representation (Section 3.5.4)
Figure 3-17: Example of multiple ring buffers (ArcGIS Desktop Help 9.2) and illustration of species spatial co-location patterns (Section 3.5.4)
Figure 4-1: SmartVeg main interface (Section 4.2.1)
Figure 4-2: Point Generation interface (Section 4.2.1)


Figure 4-3: Replicative Foliage Rendering interface (Section 4.2.1)
Figure 4-4: Individual Object Placement interface (Section 4.2.1)
Figure 4-5: Points generation result for a selected polygon (Section 4.2.2)
Figure 4-6: Implementation processes of EVC-based placement (Section 4.2.3)
Figure 4-7: An example of the input density (Section 4.2.3)
Figure 4-8: Processes of plantation-based placement (Section 4.2.4)
Figure 4-9: Case study area (Section 4.3)
Figure 4-10: Generated points based on the growth potential value layer (Section 4.3.1)
Figure 4-11: Vegetation models with related heights (Section 4.3.1)
Figure 4-12: Sample result of the selections (Section 4.3.2)
Figure 4-13: DTS models for the first species (palm tree) (Section 4.3.2)
Figure 4-14: Placements of the second species (under shrubs) (Section 4.3.2)
Figure 4-15: Placements of grass (Section 4.3.2)
Figure 4-16: More details for Figure 4-15 (Section 4.3.2)
Figure 4-17: Replicative billboards shown in rows (sunflowers) (Section 4.3.2)
Figure 4-18: Placement of the second species (Section 4.3.2)
Figure 4-19: Random placement of billboards (Section 4.3.2)
Figure 4-20: Placement of the third species (Section 4.3.2)
Figure 4-21: Sample selections based on plantation (Section 4.3.3)
Figure 4-22: Placement of young blue gums (billboards) (Section 4.3.3)
Figure 4-23: Placement of sunflowers and young apple trees (billboards in rows) (Section 4.3.3)


Acronyms

• CRC-SI – Cooperative Research Centre for Spatial Information
• DEM – Digital Elevation Model. A format of elevation data, representing continuous elevation over a topographic surface as a regular array of z-values.
• DPI – Department of Primary Industries
• DTS – A "three space" file format, supported by the Torque Game Engine.
• EVC – Ecological Vegetation Class
• GIS – Geographic Information System (here generally ArcGIS 9.2)
• SIEVE – Spatial Information Exploration and Visualisation Environment
• TGE – Torque Game Engine (developed by GarageGames; user documentation available via www.garagegames.com)
• VBA – Visual Basic for Applications

Definitions

• Terrain file: Torque Game Engine file (.ter). Contains 3D terrain information and mapped texture(s).
• Mission file: Torque Game Engine file (.mis). A configuration file containing virtual environment settings, the locations and links of surface objects, and the name of the terrain file to use.
• Surface texture: an image (generally .bmp) that is mapped to the terrain surface. A reference to this file is stored in the terrain file.
• Exporter: a VBA module, written for use with ArcGIS 9.2, which converts GIS layers into the above-mentioned terrain, mission and surface texture files.
• DTS file: Torque Game Engine file (.dts). A 3D object model. These objects can be created in 3D Studio Max, Maya or other modelling packages and exported to .dts format. This format is the mainstay of the TGE.


1. Introduction

1.1 Overview

Traditionally, users of spatial data rely on geographic information systems (GIS) maps, charts and technical reports to display spatial information and to make planning decisions. However, traditional static maps provide limited capabilities for exploration. To overcome the limitations of static maps, visualization technology has been widely used to help people understand the data. As more people become involved in decision making, the demand for visualization is increasing.

Many environmental planning decisions affect rural areas, and they can be controversial: loss of native vegetation cover, expanding plantations, conversion of traditional farms to wind energy farms, revegetation and even fire management. All require effective simulation before they are implemented, so as to assess whether they are likely to have the desired effects and unlikely to have unforeseen consequences. The more sophisticated simulations are done through the creation of virtual three-dimensional (3D) landscapes. Building on previous research (Stock et al. 2007), rural landscapes are simulated through the patterns of different vegetation units, corresponding to the areas where different types of vegetation are present. The importance of making environmental planning more accurate and transparent is driving research into the development of better methods for the creation of virtual landscape simulations. Among other things, better virtual landscape simulations rely on more realistic simulated environments for plant growth states and closer approximations of the realities of the vegetation cover.

The users of spatial information and modelling tools are growing in numbers and broadening in nature. The development of methods for the automated placement of objects in virtual environments (VEs, also referred to as virtual realities; Stillwell et al. 1999) is driven by both technology trends and user demands. Methods for the automatic representation of 3D objects in virtual worlds are needed that allow users with limited knowledge of object behaviours to explore how landscapes might look under different planning scenarios. This motivates the search for more powerful intelligent tools that allow the placement of objects in virtual environments in more complex patterns, following realistic object behaviours.

In the area of rural applications, the developed vegetation object placement tools and the resultant landscape models will facilitate the communication of possible management scenarios and enable the investigation of social indicators with respect to these scenarios. This will aid in the development of environmental management policies and the amendment of legislation with regard to important biophysical parameters. Additionally, the tools will serve as visual verification platforms through which analysts and modellers can test the quality of their data and assumptions.

This thesis discusses the development of an interface for the intelligent placement of vegetation objects in a virtual world, based on the GIS algorithms explored within the context of the wider research project (Stock et al. 2008) in which this thesis is embedded. Using the enhanced Direct Live Link tool developed in this project, users can automatically sketch placements for 2D or 3D objects onto maps, allowing them to assess the outcome of proposed landscape planning scenarios. It is expected that this research project will contribute to the development of better policies for land and resource management.

1.2 Problem Statement

The general public does not have the same level of understanding of rural ecosystems and of the potential consequences of landscape planning decisions as specialists do. As such, it is beneficial to be able to visualize landscape scenarios in a way that harnesses the knowledge of specialists yet can be easily understood by the public. The development of a visualization system that integrates virtual reality technology (VR) with geographical information systems (GIS), allowing users to directly manipulate objects in a 3D viewer, is considered particularly necessary in its ability to yield meaningful information that can be used to inform planning decisions.

A landscape planning tool called SIEVE (Spatial Information Exploration and Visualization Environment) was developed as a collaborative virtual environment for the generation of landscape models from GIS/SDI data, enabling exploration of the impacts of environmental and natural processes (O'Connor 2007). The module SIEVE Builder can convert selected polygons into point layers and map the point locations to appropriate vegetation objects. At this stage, the vegetation model distribution is done randomly, either by assigning random points within polygons or by using simple point files. However, the pattern of presence and size in the real world is not random (Scott 2000); there is a certain order to things. Houses, for instance, fit within cadastral boundaries with specified setbacks, and are typically aligned parallel to a road or towards a view. Similarly, the locations of plants are not random but relative to landscape elements and influenced by variables such as distance to a riverbed, distance to a road, or the location of paddocks. Therefore, ways to identify appropriate locations for objects have to be explored. Also, trees grow larger in wetter areas or if not crowded by other trees; the size and shape of trees at the local scale can be influenced by the plants' access to resources (insolation, soil moisture, fertility or depth, competition with other species) and by human management practices (Stock et al. 2008).
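The random baseline described above can be sketched briefly. The following is an illustrative Python example, not the SIEVE Builder code (which is implemented in VBA/ArcObjects): it generates random points inside a polygon by rejection sampling against the polygon's bounding box, the naive distribution that rule-based placement then refines with ecological constraints.

```python
import random

def point_in_polygon(x, y, poly):
    """Ray-casting test: is (x, y) strictly inside the polygon?

    poly is a list of (x, y) vertex tuples in order.
    """
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        # Does a horizontal ray from (x, y) cross this edge?
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def random_points_in_polygon(poly, n, seed=0):
    """Rejection sampling: draw points in the bounding box, keep those inside."""
    rng = random.Random(seed)
    xs = [p[0] for p in poly]
    ys = [p[1] for p in poly]
    points = []
    while len(points) < n:
        x = rng.uniform(min(xs), max(xs))
        y = rng.uniform(min(ys), max(ys))
        if point_in_polygon(x, y, poly):
            points.append((x, y))
    return points
```

A rule-based variant would add further predicates at the acceptance step, for example rejecting candidate points too close to a road or too far from a riverbed, which is the direction the placement rules in Chapter 3 take.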

SIEVE Direct Live Link supports live communication between ArcMap and SIEVE, enabling users to set the attributes of the mapped objects and place them into the SIEVE environment. To date, users have had to manually specify object attributes such as the size and general shape of plants. This can be quite difficult for non-expert users with little knowledge of plant behaviour. Therefore, there is a definite need to integrate 3D visualization with intelligent object placement and growth-state simulation methods, to provide a more comprehensive way to support public participation and planning in a collaborative system.
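The kind of attribute rule this implies can be illustrated with a small sketch. The function below is hypothetical (neither SIEVE's code nor a rule from this thesis): it derives a plant's display height from a normalised growth potential value, so that a non-expert user would not need to set the height by hand. The linear mapping and the parameter names are assumptions for illustration only.

```python
def plant_height(growth_potential, min_height, max_height):
    """Map a growth potential value in [0, 1] to a plant height in metres.

    Linear interpolation is an illustrative assumption; real rules would
    come from species data and local growth conditions.
    """
    gp = max(0.0, min(1.0, growth_potential))  # clamp out-of-range inputs
    return min_height + gp * (max_height - min_height)
```

For example, with a species whose mature height ranges from 2 m to 10 m, a growth potential of 0.5 yields a 6 m model, removing one manual step from the object placement workflow.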

Models cannot be generated cheaply or without expertise in GIS, 3D modelling and other fields. While a number of individual objects already exist for environmental visualization models, what is lacking is a normative, rule-based way to organize all those models. Taking advantage of existing plant models, this project tackles these issues through the design of a vegetation objects library organised according to the ecological classes. The models in this library can be shared among a variety of users.

1.3 Outline of the Thesis

Chapter 2 reviews the application of visualization in current landscape planning systems. More specifically, research about rule-based object placement in both urban and rural areas in virtual worlds is reviewed. The chapter also introduces the experimental SIEVE (Spatial Information Exploration and Visualization Environment) system, particularly focusing on the direct live link, which is used to carry out live communication between ArcMap and SIEVE, and a game engine, which is used to experience the virtual environment. Limitations and weaknesses of current placement methods using the direct live link are addressed.

Chapter 3 summarizes the knowledge of landscape modelling, vegetation behaviours and classification and the principles behind the vegetation placements. GIS algorithms are developed to implement these principles.

Chapter 4 describes the integration of these GIS algorithms into the SIEVE Direct software such that a smarter direct live link is created. Through the application of these rules, two approaches to intelligent vegetation placements have been developed. Following this is a case study for selected EVC (Ecological Vegetation Class) areas, in which the effectiveness of the smart direct live link is tested.

Chapter 5 presents a discussion of the findings of the research, the functionality of the developed interface and the extent to which the results have met the objectives. It concludes with a consideration of the potential application of the outcomes and an outline of the future directions in this field.


2. Related Research

2.1 The Development of Landscape Visualization

Landscape can be defined as a combination of the visible features of an area of land, comprising physical elements such as landforms, living elements of flora and fauna, abstract elements such as lighting and weather conditions, and human elements (Matthew 2007). The term visualization was first mentioned in the cartographic literature at least as early as 1953, in an article by University of Chicago geographer Allen K. Philbrick (1953). Computer-based visualization technologies allow for more interactive maps in which the user can change the visual appearance of the map, and support geospatial object analysis through the use of interactive operations. Landscape visualization should be used as a planning tool that facilitates understanding, conceptualization and implementation of planning targets (Herwig and Paar 2002). Landscape visualization has been identified as an integral part of the planning process (Klosterman and Pettit 2005), both in rural land use planning and in urban modelling. Pettit (2007) summarizes landscape visualization as a form of geographical visualization which describes the representation of geographical processes and related datasets, deploying spatial-science-related technologies such as geographical information systems (GIS). Techniques of landscape visualization are developing rapidly. Commonly, landscape visualization refers to the representation of the natural and built environment and is often developed from readily available two-dimensional digital data. Components range from 2D media such as abstracted static maps and digital photos to animation techniques, virtual reality and real-time virtual environment approaches.

Traditionally, landscape change outcomes are communicated via tables, graphs and 2D maps which may not be fully understood by affected non-experts (O'Connor et al. 2005). When outputs are two-dimensional and static, users need more time and specific knowledge to analyse and compare the results in order to understand the changes. There is a long history of using computer visualization in forestry monitoring, starting with early research in the 1970s (Kojima 1972). This work, along with most subsequent development in the field, was inspired by the need to simulate vegetation cover changes and land use changes to assist environmental planning and decision making. The first experiments in computer-aided landscape simulation appeared in the 1970s. Although digital photos can reach a relatively high level of geometric accuracy and generate more realistic outputs, the representations are still limited in that they form static and highly abstract outcomes. In order to represent changing landscapes, the photomontage has long been a common visualization technique for the creation of images (Lange 1990). However, all these methods and techniques are very limited in their capacity to indicate how the physical landscape elements relate to each other. Although nothing is as good as actually being there (Ludwig 1996), several varieties of methods to produce more realistic landscape visualizations have since emerged.

As the quality of rendering algorithms and computer graphics techniques improved throughout the 1980s and 1990s, much more realistic simulation techniques began to appear. For example, as the speed of computers advanced, the animation of simulated landscapes for planning purposes became popular (Zube et al. 1987). The pioneers of dynamic simulation came from the Berkeley Environmental Simulation Laboratory. Among analogue techniques, physical models are a well-established example. However, limitations remain, since even a precise model with detailed objects cannot portray the visual appearance of an environment completely. Moreover, all these early techniques required considerable skill and experience to produce realistic visualizations, and as such they were largely inaccessible to non-expert users. Developments in these techniques also laid the foundation for the further development of Geographic Information Systems (GIS), and ideas for linking visualization to GIS emerged at the end of the 1980s (Lange 1989). As a result, terrain representation and visualization of large areas of landscape was predominantly the purview of GIS-style software for the next twenty or so years (Ervin 2004).

With growing environmental awareness and rapid developments in technology (both software and hardware), general purpose GIS can no longer satisfy the multitude of visualization demands. All these drivers exert their own strong influences on the geographical visualization process. The “real world” in which we operate, as well as the imaginary worlds designed by computer game designers, are usually thought of as three-dimensional. As such, the need for 3D information is rapidly increasing. More and more research has been concerned with the validity of visual simulation techniques for representing environmental change and assessing public reactions.


People tend to bring with them into a virtual environment all their understandings of spatial relationships in a three-dimensional world (Siyka et al. 2002). To accommodate this variety of understandings, a wide range of landscape visualization concepts and applications are appearing, such as Virtual Reality, real-time environment techniques and so on. Using these and other 3D geographical visualization techniques, we can move beyond the realm of traditional static maps and create virtual landscapes through 3D geographical visualization (Lammeren et al. 2005).

Stillwell (1999) describes virtual environments as computer-generated environments which allow users to immerse themselves in, navigate and interact within a realistic representation of a virtual landscape. Virtual Reality technology is a relatively recent invention that allows users to interact with a computer-simulated environment, be it a real or an envisioned one. VR is an essential technology for the virtual representation of environments for urban and rural planning purposes, supporting a wide variety of applications commonly associated with its immersive 3D environments, such as VR construction sets, VR viewers and VR scenes. Users can use 3D visualization to represent alternative landscape scenarios in order to gain a better understanding of the planning outcomes, and explore design ideas by placing object models within virtual environments. Most VR tools use digital 3D models designed to simulate a reality (past, present or future) and render it in real time. As such, the viewer can operate the scene by changing the viewing locations (Winterbottom and Long 2006). However, VR typically lacks the analytical capabilities which underpin GIS, and has very limited or no functions for users to easily manipulate the multiple objects which define a landscape.

GIS is a particularly horizontal technology in the sense that it has wide-ranging applications across the industrial and intellectual landscape (Tomlinson 2007). 3D GIS would bring together the best of the analytical capacity of GIS with advanced landscape visualization capability. True 3D GIS, however, is still an elusive concept. The best approximations come from close couplings of GIS and 3D visualization technologies. 3D geographical visualization, as a tool for understanding places and spaces in both urban and rural landscapes, is becoming widely recognised as highly valuable for presenting the complexity of spatial information and for investigating key indicators used in the planning process. 3D geographical visualization techniques, including 3D terrain data processing, 3D object modelling, air photo or satellite image texture application and spatial application, are already mature and are central to current landscape visualization software (Jochen 2007). A number of software products are already capable of describing spatial objects and displaying them in a 3D world, such as World Construction Set (WCS) and Visual Nature Studio (3D Nature 2008). At the same time, a few such programs also support multi-user exploration of a common virtual world, such as Leica Titan (gi.leica-geosystems.com) and Skyline (www.skylinesoft.com). Emerging from a different base technology, game engines are increasingly being used to improve the simulation of real-time virtual environments. This has been especially notable in fields such as emergency training and psychotherapy (Bishop 2008).

Recently, landscape visualization has been employed to explore existing and future landscape scenarios in a range of fields, for understanding environmental process models (O'Connor 2007), exploring rural landscape change (Bishop et al. 2005), and simulating urban development (Cavens 2005). Meanwhile, interactivity and participatory ability are becoming the driving determinants in landscape visualization systems. Therefore a range of participatory and collaborative 3D geographical visualization software products have been developed to allow users’ participation in planning processes and to enhance the visual representation of landscapes in both urban and rural planning practice, so as to empower communities and stakeholders in decision-making processes (Pettit et al. 2006). The most important issue during the planning process is the capability of the software in handling a wide range of spatial objects to support complex exploration in 3D visualizations.

With the increasing demands put on applications, effective object-placement processes are increasingly required to be both flexible and accurate in both urban and rural planning contexts. For example, planners may gather information for use in a visualisation system, including data on urban land use (e.g. residential, commercial and industrial zoning), digital elevation models (DEM), socioeconomic data (e.g. tabular data for every land use category), residential data (e.g. household parameters), commercial/industrial data (e.g. employment vacancy rates), and data on transportation infrastructure (Asgary et al. 2007). Urban objects are mapped into 3D worlds based on attributes of building style and location, street furniture and other facilities. The geographical visualization of rural landscapes requires a digital description of all the features which form a scene (Muhar 2001). The most important landscape elements are usually the terrain and the variation in vegetation cover. Therefore, users should be able to use these systems to place vegetation models into 3D worlds in a manner that is consistent with the local ecological parameters and environmental conditions. By visualizing more realistic object models and producing optional scenarios, participants can gain a deeper insight into the planning processes.

As described, these systems typically rely on GIS maps and spatial data that are too difficult for many non-experts to manipulate. Methods are required to simplify object manipulation in a virtual world, so as to promote user participation in planning processes and outcomes. Ideally, both experts and public users should be able to place objects in virtual environments as appropriate, and build and test alternative scenarios. Participation and community engagement in planning therefore require intelligent landscape visualization functionalities that support users with less professional knowledge of either the technology or the relevant environmental and ecological processes. The challenge is to explore the rules behind intelligent object placement and create a smart tool that empowers professionals and laypersons alike to make better-informed decisions.

2.2 Intelligent Placement of Objects in Virtual Worlds

The conceptually simple task of placing 3D objects in a visual environment is fundamental to landscape visualization. With the increasing complexity of landscape visualization and the increasing sophistication of user demands, the issue of efficient and effective placement of spatial objects in three-dimensional scenes is becoming even more pressing. Object placement is an important and often time-consuming and tedious part of modelling (Schein and Elber 2003). Previous efforts to facilitate object placement have resulted in better input devices and more efficient object manipulation techniques, which still ask users to position each object by hand, one at a time. Most current systems have tools that allow individual objects to be placed quickly and accurately. For example, the Object Placement Tool (installed as part of the SDK Toolset within the Torque Game Engine, which is the base for the SIEVE Viewer) is multifunctional, and can be used for placing scenery as well as creating missions. However, during the process of object placement, much time and effort may have to be spent placing models inside the virtual world, switching between views and making small adjustments to the relative positions of objects. The current placement tools can be viewed as effective mainly for single-object positioning. Although some systems may perform this process better by positioning objects based on a preset point layer with location attributes, users still have to manually control the shapes of objects. Difficulties of scale and orientation can also arise. As a result, visual scenarios are often unrealistically simple or overly tidy (Xu et al. 2002).

2.2.1 Urban object placement

Within the context of urban planning theory, roads, buildings, road signs and road markings are the main urban elements. The visualization of these objects is typically polygon-based. The placement of urban elements is very important to urban simulation. Most previous studies have exploited procedural modelling and knowledge-based systems for object placement in urban simulation.

Grimsdale (1997) listed the main tasks in the generation of a model of an urban area: the zoning of the area, the selection and positioning of roads, buildings and other features, and the geometrical construction of the model. The placement and orientation of urban objects also require the development of specific rules. For example, a building object can be specified by a set of properties such as type, centre, orientation and position. In Grimsdale (1997), an expert system (i.e. a rule-based system) was used to generate a virtual urban world. The selection of buildings was controlled by a second knowledge-based system. The placement of road signs and other road markings was controlled by further production rules, and even the position and type of the buildings to be placed on each section of road were controlled by certain rules. Expert systems have also been used for modelling the distribution of land use and related floristic elements (Wang et al. 2008). When combined with procedural modelling techniques, the procedural processes use this rule-based information to place selected building objects with predefined sizes and shapes on a section of road.
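To make the production-rule idea concrete, a minimal sketch is given below. This is illustrative Python only (the cited systems were not implemented this way), and the zone names, setback distances and orientation rule are invented assumptions rather than Grimsdale's actual rules.

```python
import math

# Hypothetical rule table: each zone type maps to a building type and a
# setback distance. Real expert systems encode many more constraints.
RULES = {
    "commercial": {"building": "shop", "setback_m": 2.0},
    "residential": {"building": "house", "setback_m": 6.0},
    "industrial": {"building": "warehouse", "setback_m": 10.0},
}

def place_building(zone, road_x, road_y, road_bearing_deg):
    """Apply the production rules: pick a building type for the zone,
    offset it perpendicular to the road by the setback distance, and
    orient it to face the road."""
    rule = RULES[zone]
    perp = math.radians(road_bearing_deg + 90.0)
    x = road_x + rule["setback_m"] * math.cos(perp)
    y = road_y + rule["setback_m"] * math.sin(perp)
    return {"type": rule["building"],
            "x": round(x, 2), "y": round(y, 2),
            "orientation_deg": (road_bearing_deg + 180.0) % 360.0}
```

Each rule encapsulates one piece of planning knowledge: adding a new zone type or a local policy constraint means adding an entry to the rule table rather than rewriting the placement logic.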


Among the many elements that should be considered as part of urban object placement there are not only physical but also administrative ones. For instance, different policies with regard to urban planning may be used in different countries (e.g. Australia and America). The rule-based approach has the capacity to encapsulate knowledge about the constraints arising from specific local policy restrictions.

2.2.2 Rural object placement

Models called ‘‘virtual plants’’ have been popular in the field of 3D spatial modelling for more than two decades. The presence of virtual plant species in rural landscapes is mainly governed by the climatic and edaphic conditions of the research area (e.g. climate, rainfall, soil type and properties). These two sets of conditions are of prime importance for the growth of natural vegetation. Considerable work has been devoted to exploring object placement for rural landscapes. For example, a vegetation modelling tool called oik (developed within the Lenne3D project) handles the distribution of plant models on a given terrain, and is able to efficiently process the placement of millions of individual trees in a landscape (Wieland 2005). Another example is a semi-automated procedure, developed by Georgia Tech, which can correctly place 3D tree objects using overhead imagery (Wasilewski et al. 2002). These solutions are useful for users with enough knowledge to place vegetation objects in a meaningful way. Additionally, some applications that have appeared in recent years, such as the Nerve Garden Project (www.biota.org/nervegarden), allow users to build and allocate plant communities in virtual online worlds. However, these solutions would not meet the needs of most designers or planners, who commonly require more detailed visualizations that take account of aspects such as proper plant shape, location and neighbouring relations.

Vegetation placement is complicated because it is governed to a large degree by micro-environmental conditions and by species-specific characteristics. For example, to be useful for planning evaluations, a landscape visualisation of forest cover change may well need to include a clear presentation of the forest structure, including species types, locations and sizes. In addition, the neighbouring relations of the different species should be consistent with the ecology and satisfy the environmental conditions.
As such, simulation techniques for the generation of 3D representations of vegetation must be able to meet the modelling requirements set by the ecosystem and environmental conditions (Kumsap et al. 2005). However, environmental data may be quite complex, including variables such as local soil hydrology, the orientation of slopes, local soil depth and pH, and disturbance and management practices. In addition, the shapes of plants are species-specific as well as dependent on local growth conditions, and species characteristics may affect the neighbouring relations among species (Bishop et al. 2007). Robust modelling techniques for vegetation objects need to be able to handle these complex issues. Consequently, the two main elements that need to be developed for vegetation placement are a set of vegetation growth rules to determine the shape of plants, including height and growth phase (whether mature or immature), and a set of vegetation positioning rules to determine the exact location of single plants, as well as the neighbouring relations among plants, in a way that takes account of both physical and ecosystem constraints. It is envisioned that sufficiently robust but flexible sets of vegetation growth and positioning rules would provide the necessary intelligence for a virtual system that can represent complex plant communities in a relatively faithful manner. Once developed, the strength of such a system would be that it would not require expert knowledge for its operation, and would be suitable for use by the general public. An intelligent vegetation modelling and placement tool, integrating both vegetation growth rules and vegetation position rules, has been developed in this research project. This tool was developed on the basis of the existing SIEVE landscape simulation system (see Stock et al. 2008). It allows users to place vegetation objects with relatively detailed shapes at precise locations in a virtual world. Moreover, users do not need to be concerned with the details of placement, unless they wish to.
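A minimal sketch of what such growth rules might look like is given below. This is illustrative Python rather than the VBA/ArcObjects implementation developed in this thesis, and the linear height scaling and the maturity threshold are simplifying assumptions, not calibrated rules.

```python
def plant_height(gp, h_min, h_max):
    """Growth rule sketch: scale a plant's height between the species'
    minimum and maximum mature heights by the local growth potential
    value gp, assumed here to lie in [0, 1]."""
    gp = max(0.0, min(1.0, gp))          # clamp out-of-range values
    return h_min + gp * (h_max - h_min)

def growth_phase(gp, threshold=0.5):
    """Classify a plant as mature or immature from growth potential.
    The 0.5 threshold is an illustrative assumption."""
    return "mature" if gp >= threshold else "immature"
```

A positioning rule set would then consume these shapes together with location and neighbour constraints; the point-generation side is sketched in section 2.3.3.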

2.3 The SIEVE context

2.3.1 Overview

Currently, landscape visualization techniques are being employed in wide-ranging applications. However, most Geographic Information Systems (GIS) do not provide the sophisticated user interaction tools and graphics that characterise real-time 3D virtual environment software. The emergence of global viewing systems in the last few years has generated new interest in more sophisticated GIS systems, but these systems still have insufficient capacity for widespread practical application, due to insufficient terrain detail, the difficulty of introducing large numbers of objects easily, and the lack of support for collaborative use. SIEVE (Spatial Information Exploration and Visualization Environment) was designed to enhance the understanding of landscape scenario outputs in a way that promotes a better understanding of landscape structure and change. It allows users to explore existing spatial data and hypothetical future scenarios in a real-time 3D environment. It has the capacity to effectively represent the outcomes of environmental process models, and to provide insights into associated social impacts by allowing multiple users to explore and discuss data collaboratively (Stock et al. 2008). Planners can use SIEVE scenarios to explore existing and possible future landscape scenes so as to better understand environmental process models (O'Connor 2007), detect rural landscape change (Bishop et al. 2005) and investigate the consequences of different approaches to urban development. To achieve the desired functionality, SIEVE was initially developed with two main components: SIEVE Builder creates the 3D models, and SIEVE Viewer allows users to view, explore and collaborate in the virtual world. The GarageGames Torque Game Engine and ESRI ArcGIS were used to build this landscape visualization and planning application system. ArcGIS was selected because it has a broad global user group, it can handle a wide variety of spatial data types, and its existing functionality can be extended by programming with Visual Basic for Applications (VBA). It also permits the importing and exporting of different data types and conversion between them.
SIEVE Builder is written in Visual Basic and works in the ArcGIS or ArcServer environments; Viewer was developed on the basis of the Torque Game Engine (TGE).


Figure 2-1: SIEVE system components (O'Connor 2007)

SIEVE engages participants in decision-making processes and allows for exploration for evaluation and planning purposes (Stock et al. 2008). The original system components are displayed in Figure 2-1, and include GIS layers, a Texture and Object Library, the SIEVE Builder and the SIEVE Viewer. The terrain model and surface texture are converted from GIS layers. The Texture Library includes a database of vegetation textures that are modelled as billboards (i.e. flat 2D images mapped onto a vertical plane in the 3D scene) in the SIEVE environment. The Object Library is a database of vegetation (and other) models in the DTS format used by SIEVE. SIEVE Builder converts Digital Terrain Model (DTM) rasters, imagery, and point features into formats that SIEVE Viewer can read (Stock et al. 2008). SIEVE Viewer is installed on the user’s local system to run the 3D models, which are stored as terrain files and mission files. The terrain file specifies a grid of elevations, while the mission file specifies the objects that should be drawn on the terrain, and a wide range of other elements such as sun, sky, fog and water effects (O'Connor et al. 2005). All 3D objects that are referenced in the mission file, including the vegetation object variables (e.g. location and rotation), can be modified in the Viewer. However, the process is time-consuming and requires a certain level of skill. A logical next step in the development of the SIEVE system was to endow it with the ability to send vegetation objects to the Viewer in real time (see section 2.3.3).
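For orientation, the kind of per-object record a mission file holds can be sketched as follows. This is a simplified, illustrative rendering of a TGE static-shape block emitted as a Python string; real mission files contain additional fields and engine-specific formatting, and the field values here are invented.

```python
def mission_entry(shape_file, x, y, z, rotation_deg, scale=1.0):
    """Emit a simplified TGE-style mission-file block for one DTS object.
    Illustrative sketch only: the exact syntax and field set of a real
    mission file differ from this minimal form."""
    return (
        "new TSStatic() {\n"
        f"   shapeName = \"{shape_file}\";\n"
        f"   position = \"{x} {y} {z}\";\n"
        f"   rotation = \"0 0 1 {rotation_deg}\";\n"
        f"   scale = \"{scale} {scale} {scale}\";\n"
        "};"
    )
```

Note that nothing in such a record identifies the plant species; that attribute lives on the GIS side, which is why the linkage described in section 2.3.3 must carry it across.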


2.3.2 The Torque Game Engine

Due to the fast-growing computer games market, software is improving constantly and hardware capabilities are increasing markedly. Computer game technology now constitutes one of the most sophisticated technologies for simulating virtual environments. Many computer games support seamless and rapid integration of spatial data and functionality, producing fluid, highly realistic and less restrictive visualizations of virtual environments that can be viewed in real time on PCs or game consoles. Games software combines several highly integrated packages called engines, each with a specific functionality (Herwig et al. 2005). For instance, a terrain engine creates and renders the ground. Game engines are designed to provide multiple users, simultaneously, with a highly interactive gaming experience. With careful design, game engines can provide intuitive user interfaces. The virtual environment simulation package used in this research project (SIEVE) was developed from a commercial low-cost game engine supporting immersive exploration and built-in functionality for collaboration. This game engine is the Torque Game Engine (TGE, from www.garagegames.com), described by Stock et al. (2008). The TGE supports many industry-standard content creation tools for 3D modelling and animation. Exporters for 3D Studio Max at the high end and MilkShape at the low end produce objects in the DTS format used by TGE. These tools are affordable and commonly used. The TGE supports real-time terrain rendering, and provides an inbuilt world editor and out-of-the-box multi-user functionality. Moreover, the C++ source code is available for USD 100, allowing unlimited modification of the engine (Ervin 2004). Due to its low price and relatively powerful features, the TGE has gained popularity as a landscape visualization tool. It is a powerful terrain generation and object placement toolkit that allows users to easily build game scenes and structures. Using the geo-typical ground texturing supported by the TGE, SIEVE Viewer uses a set of textures appropriate to the local land use to texture the ground. An advantage of this method is that ground textures can be easily changed by simply substituting existing textures (Stock et al. 2008). However, the accuracy and detail of ground textures produced with this approach is relatively low: it applies general textures for terrain types (e.g. meadow, forest or wetland), but details such as roads and small streams are typically missing (GarageGames 2009). Additionally, the TGE allows editing of the surface terrain, including shape and textures, with TorqueEdit allowing users to control all elements to modify the scene and build hypothetical future scenarios. However, the TGE does not support the use of aerial photography as ground textures. This capability has been added to SIEVE, because aerial photography is used in many landscape simulation situations. The TGE also has an in-built object editor to add, modify and delete 3D objects. Users can pick any available 3D model (DTS format) and add it to the scenario, manually select any model and move it to change its position, or remove it from the scene by deleting it. The TGE editor helps users control the many elements in a game from a graphical viewpoint (Stock et al. 2008). However, it is time-consuming to edit 3D object properties in TGE, as users have to edit them individually. To overcome this, a new tool is needed that will allow users to place vegetation objects into SIEVE with automatically generated, precise attributes and proper locations.

2.3.3 Linkage to GIS

An extension called SIEVE Direct provides a real-time linkage between the game engine environment (SIEVE Viewer) and the GIS (ESRI Desktop ArcGIS). Figure 2-2 shows the implementation process of SIEVE Direct (Chen et al. 2008).

Figure 2-2: Overview of SIEVE Direct Implementation


The TGE supports a variety of complex object types. For example, it supports 2D billboards, which are usually used as replicants within polygons, and 3D models, which are typically assigned to specific locations. Properties such as object ID, model-file name, size, rotation and coordinates (x, y, z) are recorded in the mission file for a simple DTS-format 3D plant. This will typically not include attribute information about species type; it is up to the linkage (here the Direct Live Link) to assign correct species information to each plant location. The FoliageReplicator object, which is used to place replicated 2D billboards into TGE, has up to 30 more attributes, such as foliage count, minimum and maximum size, view distance, cull resolution, fading-in and -out distances, and light reflectance (Chen et al. 2008). It is more time-consuming for SIEVE Viewer to load a mission file for an environment containing a large number of individual DTS objects. Thus, it is up to the linkage to generate a realistic distribution of point locations using the classification data with all required properties. It is easier and faster for the user to edit and update the landscape scenario in the 3D view via the GIS with the Direct Live Link and corresponding messages. In the original Exporter, if an object was altered (e.g. appending or deleting 3D landscape objects), a new mission file had to be created by the Builder from scratch to capture the changes. With the Direct Live Link, users can keep the virtual environment up to date with changes made immediately, and can evaluate the resulting impacts on the environment at run-time instead of repeating the whole exporting process (Chen et al. 2006). Vegetation placement processed via the Direct Live Link method nevertheless remains time-consuming, because users have to manually input the attributes of a 3D object, such as object type or size.
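The point-generation task delegated to the linkage can be sketched as simple rejection sampling inside an EVC polygon. This is illustrative Python, not the SIEVE Direct code; the even-odd point-in-polygon test and uniform sampling are standard stand-ins for whatever distribution logic the linkage actually applies.

```python
import random

def point_in_polygon(x, y, poly):
    """Even-odd ray-casting test: poly is a list of (x, y) vertices."""
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            if x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
                inside = not inside
    return inside

def scatter_points(poly, count, seed=0):
    """Generate `count` random planting points inside the polygon by
    sampling its bounding box and rejecting points that fall outside."""
    rng = random.Random(seed)
    xs = [p[0] for p in poly]
    ys = [p[1] for p in poly]
    pts = []
    while len(pts) < count:
        x = rng.uniform(min(xs), max(xs))
        y = rng.uniform(min(ys), max(ys))
        if point_in_polygon(x, y, poly):
            pts.append((x, y))
    return pts
```

A seeded generator keeps the scatter reproducible, so the same mission can be rebuilt identically; a density attribute from the classification data would determine `count` per polygon.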
Previously, the Direct Live Link could place only one species of plant into SIEVE Viewer at a time, and users had to decide the neighbouring relations among the different species. The parameters describing the physical properties of vegetation objects were set manually: users had to set the different parameters of the vegetation objects and then send the data to SIEVE. These procedures are too complex for non-experts wishing to envision and easily generate realistic vegetation cover uses and changes. Nowadays, governments are shifting emphasis from planning for people to planning with people, and software needs to follow and accommodate this trend. When planning with people, special care needs to be taken so that non-experts can understand model outputs and manipulate object placements freely. This implies that the procedure of implementation and the form of representation should be as comfortable as possible, so that users do not need to be familiar with the principles behind vegetation placement, such as vegetation behaviour, soil condition effects or ecology. This research explores an intelligent placement method which incorporates plant growth rules and position rules to extend the capabilities of SIEVE Direct. A new direct link incorporating this method, called SmartVeg, has been developed for both experts and non-experts to overcome all these limitations.

2.4 Objectives of this Project

The overall objective of this research is to implement rules of potential growth effect and intelligent placement for vegetation objects in a virtual environment. This is supported by the development of an object database for rural objects based on particular rules. More intelligent incorporation of vegetation into virtual environments can then support the visualization of land cover predictions, for example under climate change, and of different future land use scenarios arising from specific plant planning proposals. Through the visual interface, users can interactively establish new virtual plants in the landscape.

The SmartVeg extension enhances the SIEVE Direct Link into a smart link by integrating an intelligent placement method that incorporates growth rules based on thematic layers and position rules based on vegetation associations. It allows the rapid, intelligent placement of vegetation objects in appropriate locations based on the specified vegetation rules and the pre-acquired growth potential values for a given area. The advantage of such a smart link is that people with no knowledge of GIS or plant behaviour can access sophisticated landscape environments and automatically change vegetation cover accordingly. To achieve the overall objective, a simple user interface allows users to select parameters for vegetation objects. SmartVeg:

• Allows two different approaches to be applied for the establishment of vegetation objects and the underlying object library according to ecological classes.

• Explores rules for vegetation objects’ shapes based on the growth potential value layer collected for the research area and on vegetation behaviours. The sizes and heights of the objects are then based on the growth potential value and other parameters of the research area.

• Incorporates plant position rules based on vegetation association, with the neighbouring relations of the different species relying on the Ecological Vegetation Classes (EVCs).

• Develops within-GIS algorithms for the implementation of these rules, to place vegetation objects in suitable positions intelligently.

• Integrates these GIS algorithms into the SIEVE Direct software such that the rules find expression in the virtual environment. The result is experienced visually by linking with preset 3D display options.

• Since SIEVE provides interactive and collaborative virtual environments, users with different backgrounds and varying levels of technical competency can compare and evaluate the outputs obtained from the enhanced smart direct link, SmartVeg. Using this application, users from different backgrounds can gain insight into landscape planning and refine their design concepts through their object-placing experiences. Subsequently, users can make more appropriate selections and gain a greater understanding of the vegetation cover changes that occur in the studied landscape.

2.5 Summary

An increasing number of plant visualization systems are now available on the market, such as Biosphere3D (www.biosphere3d.org/) and Visual Nature Studio (3dnature.com/). These systems typically rely on GIS maps, charts and technical reports to display outputs or consequences, and users have to input or edit the parameters of objects in the virtual environment manually. Consequently, many non-experts still find it difficult to fully understand spatial or scientific information (Lammeren et al. 2005). With the prior SIEVE platform, users likewise had to input or edit object parameters manually. By using SmartVeg, users from different backgrounds will be able to intelligently place virtual 3D representations of current landscape environments and hypothetical future (or historical) scenarios. Accordingly, both expert and non-expert users can explore virtual landscapes and collaborate on landscape management issues. Chapter 3 provides an inside view of the principles and methods behind intelligent vegetation placement.

3. Intelligent placement of objects

3.1 Introduction

In order to accurately portray rural landscapes, this chapter explores intelligent placement algorithms and develops specific rules, such as vegetation growth rules and vegetation position rules, based on ecological constraints and environmental conditions.

3.2 Vegetation Modelling

Risch et al. (1996) concluded that the challenge within any 3D display is to provide a sense of the overall distribution of elements while allowing specific features to be seen. Clearly, linking 3D models to visualizations makes landscape scenarios more accessible and easily understood. Modelling is a fundamental tool to support comprehension of complex systems like ecosystems (Bornhofen and Lattaud 2008). Most landscapes are covered by some type of vegetation, and 3D representations of data have been shown to aid understanding of landscape changes and the prediction of vegetation cover for a very wide range of users. Consequently, digital modelling of vegetation, and the appropriate location and suitable shape representation of models, are prerequisites for three-dimensional landscape visualization. Vegetation can be modelled either at the level of individual plants or as a terrain texture (Muhar 2001). Bornhofen and Lattaud (2008) carried out observations of real plant species at three different scales as part of model development for virtual vegetation. At the individual level, single virtual plants were grown to examine their responses to environmental constraints, so that characteristics of individual plant growth could be observed. At the population level, field observation, mathematical theory and computer simulation were unified: experiments were related to corresponding aggregate models of population dynamics to provide a more general understanding of long-term vegetation growth trends. The last observation scale is the evolutionary level, a more abstract expression aimed at morphogenesis and the influence of competition on plant morphology. Aiming to offer accurate vegetation simulations, this paper presents a model of virtual plants for studies at the individual level based on growth potential constraints.

There are many reasons to create 3D vegetation objects in a virtual world, and there are also multiple ways of doing so. Several basic approaches can be distinguished. Sievanen (1997) identified the coupling of process-based models with detailed 3D representations of plant architecture, called ‘‘functional–structural models’’; Prusinkiewicz (1998) suggested dividing the existing models into empirical-descriptive and causal-mechanistic ones. Godin, however, preferred a classification according to the architecture of the plant model (Hammes 2001). All of these kinds of models are computationally expensive because of their detailed description of the plant structure and the local environment of each plant organ (Bornhofen and Lattaud 2008). With regard to ecological modelling, procedures for generating individual plant models and for modelling the plant cover of a given terrain in a virtual world are becoming as attractive as computer graphics techniques. There are two types of file formats employed by TGE/SIEVE and therefore adopted in this research: .PNG files are shown as 2D billboards for replicated plants, and .DTS 3D models are used for individual plants. Neither incorporates explicit growth or environmental response models. There are several commonly used products to process these models. Vegetation objects were created in 3D Studio Max or MilkShape and exported to the DTS format using a free plug-in (MilkShape3dTutorials 2008). Any image processing software can be used to create the PNG files. To create a billboard and a DTS model for a plant species, an original vegetation texture image is required, from which the actual tree is extracted and the background cut away.
Fortunately, texture images (mostly in JPEG format) at different growth stages for most vegetation species across Australia have already been collected by the Australian Botanical Name Portal of the Australian National Botanic Gardens and the Australian National Herbarium.


Figure 3-1: Vegetation texture images query interface

Users can search for and download the required images for non-commercial purposes from this web-based query system (http://www.anbg.gov.au/ibis/speciesLinks.html). The tree body can then be extracted from the downloaded image in Photoshop and saved as a PNG file for further use.
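The background-removal step can be sketched at pixel level as follows. In practice it was done interactively in Photoshop, so the near-white-background threshold below is an illustrative assumption, expressed in Python over RGBA pixel tuples.

```python
def knock_out_background(pixels, threshold=240):
    """Make near-white background pixels fully transparent so the tree
    silhouette can be saved as a PNG billboard. `pixels` is a flat list
    of (r, g, b, a) tuples; the threshold of 240 is an assumption for
    roughly white studio/scan backgrounds."""
    out = []
    for r, g, b, a in pixels:
        if r >= threshold and g >= threshold and b >= threshold:
            out.append((r, g, b, 0))   # background -> transparent
        else:
            out.append((r, g, b, a))   # keep foreground tree pixels
    return out
```

Real images with cluttered backgrounds need interactive masking rather than a simple threshold, which is why manual extraction remained part of the workflow.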

The Torque Show Tool was then used to display and check the quality of the DTS models. DTS objects can be static or animated models textured with images; the texture information for 3D model surface patches is taken from photographs. Trees may be simple planar objects textured with photos of the correct species type, represented by crossed flat planes each textured with an image of the tree silhouette (Figure 3-2), or somewhat more complex 3D plants as displayed in Figure 3-3. Both are stored in DTS format. Tree images used as billboards in SIEVE act as replicants, which rotate so as to always face the camera.


Figure 3-2: A plant (Mallee) displayed as 2D billboard

Figure 3-3: A plant (Maple) displayed as DTS model

Because models are simplified images of reality, whose level of abstraction can differ considerably, whether to use 2D billboards or 3D DTS models depends on the objectives of the study. The more specific the objects are, the greater the realism of the landscape model. Every DTS model here represents a specified plant species. The object library contains a range of 3D models and images. Procedures for library construction are explored in the following section.


3.3 Vegetation Classification and Representation

This research has developed a series of rules to overcome the limitations of the prior version of SIEVE Direct. These developments have to take into account the existing systems of vegetation classification in the areas in which they may be used. In vegetation science, classification varies depending on standards and needs. Although the systems vary between the States, Australia typically has detailed vegetation classifications that specify the proportions of tree species, heights, distributions and other factors. This level of detail is required for some large-scale visualisations. When considering the visual effects of vegetation cover change or environmental processes at a broader scale, representative vegetation alone can be considered sufficient (O'Connor 2007). In the state of Victoria, the native vegetation has been classified according to Ecological Vegetation Classes (EVCs), which are typically considered in terms of the biogeographical region in which they occur. EVCs account for species’ climatic and edaphic requirements (DSE 2009). There are approximately 300 EVCs statewide, grouped into 20 simplified native vegetation groups with a total of 35 sub-groups within Victoria. Using a range of attributes such as climate, geomorphology, geology, soils and vegetation, EVCs have been arranged by bioregion at a landscape scale. EVCs are the basic units used for mapping biodiversity, derived from large-scale forest-type and plant community mapping. Each EVC describes distinct floristic (plant) communities across Victoria that occur in similar types of environment and respond to environmental events in similar ways (O'Connor et al. 2005). There are 28 bioregions identified within Victoria. Figure 3-4 below shows the EVC bioregional benchmarks, which represent the average characteristics of a mature and apparently long-undisturbed state of the same vegetation type (http://www.dse.vic.gov.au). Similar vegetation classifications exist for the other states and in other countries.


Figure 3-4: Bioregion name

Ecosystems rely on what could be referred to as “Rules of Nature”. The vegetation data used in preparing area statements have been compiled by the Biodiversity and Ecosystem Services Division, Department of Sustainability and Environment (DSE), Victoria. Figure 3-5 shows a list of published Ecological Vegetation Class benchmarks for the Central Victorian Uplands bioregion; MU stands for Mapping Unit, and BCS refers to Bioregional Conservation Status. For instance, EVC 22 Grassy Dry Forest is listed in this table, and Figure 3-6 shows an example of EVC documentation for the Central Victorian Uplands bioregion (the case study area described in section 4.3). This EVC documentation provides a wide range of associated attributes for vegetation, such as the variety of species that make up the class, the proportions in which they occur, and approximate sizes.


Figure 3-5: Ecological Vegetation Class benchmarks for the Central Victorian Uplands bioregion (DSE 2009)

Figure 3-6: Sample EVC documentation, EVC 47: Valley Grassy Forest (DSE 2009)

To place vegetation, vegetation models are rendered into “Ecological Vegetation Class” (EVC) polygons with appropriate attributes for representative trees described in related documents (more details will be discussed in section 3.5 and section 4.2).


Intelligent placement of a variety of tree species for vegetation classes should be based on the EVC documentation. Vegetation information such as density (which refers to ‘cover’), potential shape, and distribution rules that describe the relationships among species are significant parameters for visualizing representative vegetation. Vegetation shapes are constrained by the DBH (diameter at breast height) given in the EVC documentation, by height ranges from the Australian Vegetation Attribute Manual (Department of the Environment 2009), and by the growth potential value of the research area. More details about vegetation shapes are developed in the following section. To allow for intelligent placement of the vegetation objects, some of the descriptive attributes in the EVC documentation need to be made operable in the ArcGIS layer. Existing EVC data available in GIS formats define the spatial extent of each class in certain bioregion areas, and the corresponding vegetation description and condition. Figure 3-7a shows a typical EVC layer stored digitally as a polygon shapefile. For example, EVC 47: Valley Grassy Forest can be found in the map below, and its legend entry is the last one in Figure 3-7b.

Figure 3-7a: Typical EVC polygon shapefile


Figure 3-7b: Related Legend for Figure 3-7a

In the case of vegetation, the EVC can be used to define the attributes of individual plants and the spatial distribution among species. Ideally, with all of the EVC attributes added to the EVC polygons, a fully representative vegetation simulation could be built. However, for more flexible use of the object models in the library and faster implementation, just the dominant (or most characteristic) attributes are used to represent each EVC (Figure 3-8), for example the EVC names and default densities shown below. Once the EVC type for the research area is selected, a database that stores the name of each species, the plant sizes and the understorey densities is read. A database storing EVC species information, based on the related documentation, has therefore been built. Section 3.6 explores the structure of the species attributes for each EVC type.

Figure 3-8: Species attribute table shown in ArcGIS for representing the EVC layer

People differ in their views about how natural an area of vegetation should be. For some special purposes, certain plant species may occur in an area regardless of habitat, topography, soil, policy requests or other non-natural causes. In this situation, users prefer to place certain plants into the research area without strict ecological constraints. For example, farm forests, which are not natural forest areas, contain species that do not normally occur in the considered bioregion. These plantations are a special case with their own distribution and growth rules. Accordingly, in addition to the EVC representation of plants, commonly planted species across Victoria, such as Victorian natives and plantation vegetation, should be included in a standalone database (see section 3.6). Furthermore, the appropriate distribution of vegetation species under different conditions is not normally addressed in EVC or plantation definitions. Herrlich (2007) suggested that configurable rule sets could provide more options for other visual contexts, such as forest or field areas, to control vegetation placement. Methods for placing vegetation objects in the landscape intelligently, including reasonable plant shapes, proper individual locations and appropriate proximity relations among species for more realistic placement, are discussed in the following sections.

3.4 Object library

In most applications of 3D modelling and visualization, large and complex 3D model data are required (El-Hakim et al. 1998). With 3D data stored in a database, users can extract only a limited set of data (e.g. one single tree instead of a whole unit of trees) and thus critically reduce loading time. Additionally, locating, editing and examining a particular object becomes quick, simple and convenient (Siyka et al. 2002). Realistic vegetation placement requires a real-world landscape ecology model as part of a database. For example, ArcGIS, Community Viz and Visual Nature Studio (VNS) all have their own object libraries for users to create 3D scenes. VNS allows the user to define ecological associations, but few other products have this capacity. The organization of data within my research is defined according to the ecological vegetation classification and the character species of each plant. For this project, the object library is simply a database that can be accessed by SIEVE to create realistic landscape models. Storing a 3D model of every vegetation object in the world would, however, be very expensive in terms of plant image collection, object creation and storage. Fortunately, a variety of elaborate virtual plant models already exists (Stock et al. 2008). Plant texture images can be freely downloaded from the Australian Botanical Name Portal. Both DPI and DSE are currently working on vegetation and infrastructure libraries with appropriate management tools. Logical organization of these vegetation objects for the convenience of users from different fields is therefore essential. Moreover, widening the use of SIEVE to effectively encompass the whole of Australia will require a standardized object library covering a wide range of vegetation species from different parts of the country. This library will include all major Australian species and will be built based on the EVC, or similar, classes. Furthermore, spatial relationships among species will be explicitly stored to determine co-locations. In order to fulfil requirements from a wider range of visual proposals, there are generally two ways to organize the database. One is to follow the “EVC/Bioregion Benchmark for Vegetation Quality Assessment” documents from DSE: the major vegetation species that exist in each bioregion are built into the database together with their EVC type, EVC number, species name, density, height range, foliage/canopy coverage, etc. The other is to list all the commonly used species, together with their growth states, densities, height ranges and model folder paths, in a database. The database extracts in Figures 3-9 and 3-10 demonstrate one way to structure the database for the object library. Following the EVC/Bioregion standards, a hierarchical structure is adopted. Figure 3-9 gives a sample table that lists some bioregion names and the corresponding abbreviations of these names.

The database in Figure 3-10 shows an example for the bioregion abbreviated CVU: the EVC types in this bioregion are listed, and IndexCodes are allocated based on their growth states and EVC numbers. Another 27 bioregion databases like CVU are created to cover Victoria, each named by the corresponding abbreviated bioregion name.


Figure 3-9: A sample database with Bioregion Name

For most plant species, appearance changes as they grow, so their vegetation models differ according to growth stage. In order to obtain a more realistic simulation result and to simplify the object library, the growth states within an EVC region are classified into three stages: Young, Immature and Mature, abbreviated as Y, I and M. For example, in Figure 3-10, the EVC type ‘Grass Forest’ has EVC number 128, so its three growth states are coded 128Y, 128I and 128M. These models are referenced by the IndexCode stored in the database with the species details (Figure 3-11).

Figure 3-10: EVC type and related IndexCode in CVU Bioregion

Figure 3-11 shows the species details of each EVC, connected to the database in Figure 3-10 by the IndexCode. Density and height range information can also be read by referencing the IndexCode and species name. 2D billboards are stored in the path shown in the ‘PNG’ field, while 3D vegetation objects are stored in the ‘DTS’ field. Consequently, vegetation object manipulation (e.g. appending, editing or deletion) in the virtual environment becomes easier, since we just need to change the models in the corresponding folders, without modifying settings in the database or in the simulation tools. Specifically, the major species in each EVC is given a species ID of the form “EVC number” + “1” + growth-state abbreviation; the IDs of the other species are formed in the same way.
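As a sketch of this naming scheme (the actual implementation is in Visual Basic with ArcObjects; the helper names below are hypothetical), the IndexCode and species ID could be composed as:

```python
def make_index_code(evc_number: int, growth_state: str) -> str:
    """Build an IndexCode such as '128M' from an EVC number and a
    growth state ('Y' = Young, 'I' = Immature, 'M' = Mature)."""
    assert growth_state in ("Y", "I", "M")
    return f"{evc_number}{growth_state}"

def make_species_id(evc_number: int, rank: int, growth_state: str) -> str:
    """Species ID: EVC number + species rank within the EVC
    (1 = major species) + growth-state abbreviation."""
    assert growth_state in ("Y", "I", "M")
    return f"{evc_number}{rank}{growth_state}"

print(make_index_code(128, "M"))     # 128M
print(make_species_id(128, 1, "Y"))  # 1281Y
```

The codes are plain strings, so they can serve both as database keys and as folder names for the corresponding PNG billboards and DTS models.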

Figure 3-11: Species details of each EVC

Apart from the EVC/Bioregion standards, there is another way to build the vegetation object database. Some visual modellers may prefer to choose plant species themselves, since the species of plants under each EVC are preset according to the EVC documentation from DSE. Figure 3-12 shows records of commonly used plant species and vegetation models. All these species come from the “Australian Plant Name Index” created by the Australian Botanical Name Portal of the Australian National Botanic Gardens and Australian National Herbarium. Accordingly, the plantation database in Figure 3-12 was designed to allow models of common species to be selected. Particularly in planning practice, planners can freely construct conceptual landscape representations using a wide range of suitable vegetation objects.


Figure 3-12: Commonly used plant species details and vegetation models

Additionally, all the common species are given an object ID; these IDs come from the “SpeciesID” field in the database shown in Figure 3-11. Neighbourhood relations among species can then be recorded in the database logically, and the distance value between two plant species can be retrieved easily from this database (Figure 3-13). SmartVeg accesses the distance value to calculate the buffer area, around the first-placed species, in which the second species should be placed. The first-placed and second-placed species come from either the “species_1” or the “species_2” field in the database. Detailed usage of this database is discussed in section 3.4.4.
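A minimal Python sketch of the distance lookup (the real table lives in the database of Figure 3-13; the extract and helper below are hypothetical) shows how the same distance is returned regardless of which column a species appears in:

```python
# Hypothetical extract of the species-distance table (cf. Figure 3-13);
# keys are unordered pairs of species IDs, values are distances in metres.
SPECIES_DISTANCE = {
    frozenset((1, 2)): 1.5,
    frozenset((1, 3)): 0.8,
}

def buffer_radius(species_a: int, species_b: int) -> float:
    """Look up the co-location distance between two species, independent
    of whether each appears in 'species_1' or 'species_2'."""
    return SPECIES_DISTANCE[frozenset((species_a, species_b))]

print(buffer_radius(2, 1))  # 1.5
```

Storing the pair as an unordered key mirrors the fact that the constraint is symmetric: the buffer radius is the same whichever of the two species is placed first.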

Figure 3-13: Sample Species Distance Database

Since databases can be expensive, good planning is essential to make ongoing GIS efforts cost-effective in the long run. This also improves the re-usability and reliability of existing vegetation models. Ideally, large numbers of well-organized and well-documented vegetation objects will allow arbitrary Australian landscapes, faithful to existing landscapes, to be built (Stock et al. 2008).


Since the required texture information for vegetation models can be obtained easily, and models in the object library and the related documentation are updated over time, SIEVE can generate both current and historical vegetation scenarios. Moreover, to place plant models into virtual worlds intelligently, populate landscapes, and make updating the vegetation objects in scenarios easier and faster, existing vegetation object libraries need to be accessed automatically. Developing SmartVeg with access to the object library enables flexible and dynamic patterns of landscape representation.

3.5 Vegetation Placement

3.5.1 Intelligent Placement Constraints

Deussen (2003) concluded that a typical landscape modelling and visualization pipeline in computer graphics follows four steps: modelling/generation of 3D terrain, computation of 2D plant distribution, positioning/instantiation of 3D plant models, and rendering and visualization. Furthermore, most efforts have focused on technical improvements to object handling (Wieland 2005). Even for the SIEVE Direct link, there are no rules or constraints behind the placement processes. In this research, the focus is on software to support ecologically appropriate distribution and rendering of plant models with relatively realistic shapes. A method is proposed to implement the rules and constraints behind vegetation distribution and growth, and to support a real-time approach to building plant models into a virtual world. Conceptually, the placement of vegetation objects must satisfy ecological constraints such as variation in the physical environment, including aspect, elevation, geology and soils, landform, rain pattern, salinity and climatic zones (DPI 2008). The shapes of vegetation objects also need to be resolved during the placement process. Hence, the spatial placement of objects in a virtual world is a knowledge-based task. As discussed above, all these processes, if implemented manually, would require users to have a degree of expertise in a variety of scientific fields. Additionally, proper placement of vegetation objects needs to take into consideration not only the absolute locations of the objects in the scenario, but also their relative positions with regard to objects representing other species.


However, there are few landscape visualization systems on the market that attempt to provide a solution for the interactive, intelligent visualization of vegetation in a way that satisfies ecological constraints. Using the SIEVE software, landscape scenarios were previously populated with 3D elements through the use of preset point locations and EVC polygon coverages from GIS data layers (O'Connor 2007). However, in reality the sizes and heights of trees at the local scale are influenced by the plants' access to resources. To accommodate such local variation, SmartVeg uses a value layer representing the plant growth potential: a raster layer for a region, calculated on the basis of information about the region's climatological and soil resources and local physical environmental conditions. Drawing also on Hammes' (2001) suggestions with regard to vegetation placement variables, the environmental variables used to calculate growth potential values in this project are listed in Table 3-1. On the basis of all the variables listed in this table, the vegetation growth potential values can be calculated and stored as a raster layer in the GIS; when detailed data are unavailable, a simpler approach can be used. This thesis does not focus on the development of a growth potential layer, but rather on the ways in which such a layer can be used.

Soil: The soil contains minerals that are assimilated by the fine roots, and soil nutrients determine vegetation growth. This variable includes the levels of soil erosion, soil salinity, soil saturation, soil moisture, soil pH, fertility and depth. It is the most important variable affecting the growth of natural plants.

Elevation: The elevation is the height above sea level. With increases in elevation, the general conditions become harsher. All plants have an upper elevation limit at which they can survive, and plants of a single species also tend to get smaller with increases in altitude. Relative elevation refers to local changes in height, describing the terrain convexity/concavity. Relative altitude affects plant growth since valleys are generally wetter, as well as more sheltered, while ridges tend to be much more exposed to the elements.

Slope: The slope of the terrain has a direct bearing on the quality and depth of the soil, as well as on water retention due to runoff. Steep slopes tend to have small shrubs and grass cover. Very steep slopes tend to consist of exposed rock without vegetation.

Skew: The skew refers to the direction in which sloped areas face. The direction in which a slope faces has a direct bearing on how many sunlight hours it receives each day, as well as on the degree of exposure to the prevailing winds.

Multi-fractal effect: The multi-fractal effect expresses the fact that some plants and ecosystems exhibit local grouping behaviours that depend on other landscape elements, such as distance to a riverbed, distance to a road, or location within a paddock. Human management practices can also locally affect plant growth.

Climate: Naturally, growth potential must also relate to climate: temperatures, rainfall, humidity, wind and so on. Climate-related resources provide another source of significant effects on the growth potential values in a research region.

Table 3-1: Variables used to calculate growth potential value

Vegetation shapes are calculated on the basis of the local growth potential value and the environmental characteristics derived from the related EVC documentation. The absolute locations of vegetation objects depend on the point positions generated in the EVC polygon layer, while the interrelations among species rely on the co-location algorithm explored in section 3.5.4. Figure 3-14 displays a sample growth potential layer and its related EVC polygon layer. The area in Figure 3-14a labelled “NONE” has an extremely low growth potential value of 1, while the south-east area has growth potential values ranging from 1 up to 3 (as shown in the top map) and multiple EVC types (as shown in the bottom map).


Figure 3-14: Sample growth potential value layer and related EVC polygon layer for the same area

As the user moves around the environment, vegetation placement is performed for each newly selected EVC polygon that appears on screen, and the corresponding growth potential values are extracted for the individual polygon. This process ensures that the attributes of the vegetation models respond intelligently to the environmental variables listed in Table 3-1, and that vegetation cover and growth are automatically adjusted accordingly. The accuracy of 3D vegetation objects in virtual environments greatly depends on the application of rule sets and constraints. Table 3-2 shows the general rule set used to determine the placement of vegetation objects.


Growth potential value: The main parameter determining the sizes, heights and growth status of vegetation in a certain area. For example, plants growing in the area with the highest growth potential value (e.g. 3 in Figure 3-14) have the greatest opportunity to reach maximum height, compared with plants of the same species and age in other areas. This growth potential value is multiplied with the normal height ranges (see section 3.5.3).

Official documentations: Determine characteristics such as location, density, size and representative species within polygons representing an EVC. Australia's Native Vegetation booklet also records height and growth rate information for major vegetation classes within Australia.

Co-location algorithm: Algorithms written to use preset variables to calculate the suitable distributions of different species.

Human knowledge: The main parameters such as location, orientation and size of each object can be set by users according to their knowledge. This allows for the generation of a variety of landscapes populated with different 3D vegetation models for specific purposes.

Table 3-2: Resources used to determine vegetation placement

The algorithms for the intelligent placement of vegetation objects in a virtual landscape are discussed in the following two sections.

3.5.2 Vegetation Model Exhibition

Algorithms for the intelligent visualisation of plants can be developed to generate points with a set growth potential value, to calculate the appropriate size and growth rate for each point, and to use these variables to place individual models into a virtual environment in a realistic manner. As mentioned in section 3.2, there are two formats for storing vegetation models. Hence there are two approaches to representing vegetation models in the SIEVE Viewer: replicative foliage rendering (filling polygons) and individual plant placement (located at points).


(1). Replicative foliage rendering

In order to quickly visualize the simulation outcomes, low-detail plant models are rendered by placing 2D billboards. Replicative rendering is done by specifying a set of EVC polygons to define the areas in which the vegetation objects should be replicated following specified density parameters. Placing a large number of individual 3D vegetation objects into a scene leads to long loading times, despite the fact that many of the objects will be the same for a particular species, just placed at different locations. To deal with this, Stock et al. (2008) developed an enhanced function for SIEVE that allows replicators to fill polygons (not just ellipses as in the original TGE): an individual 3D object is loaded once and is then replicated into random positions within the polygon. Object loading times can be significantly reduced with this function. It can also considerably reduce the storage space required and speed up the implementation of modifications, which are critical considerations in maintaining and updating the system. However, this approach has certain limitations: the virtual plants have to be kept simple and are shown as billboards in PNG format; trees of selected species have random sizes (within a reasonable range) or fixed sizes; the models are not assigned precise locations; and the trees are not individually editable.
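The core of the replication step, drawing random positions inside a polygon at a given density, can be sketched in Python (a hedged illustration only; SIEVE's replicators run in the Torque Game Engine, and all function names here are hypothetical):

```python
import random

def point_in_polygon(x, y, poly):
    """Ray-casting point-in-polygon test; poly is a list of (x, y) vertices."""
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def replicate_positions(poly, density, area, seed=0):
    """Draw round(density * area) random positions inside the polygon
    by rejection sampling within its bounding box."""
    rng = random.Random(seed)
    xs = [p[0] for p in poly]
    ys = [p[1] for p in poly]
    n_points = round(density * area)
    points = []
    while len(points) < n_points:
        x = rng.uniform(min(xs), max(xs))
        y = rng.uniform(min(ys), max(ys))
        if point_in_polygon(x, y, poly):
            points.append((x, y))
    return points

square = [(0, 0), (10, 0), (10, 10), (0, 10)]
pts = replicate_positions(square, density=0.5, area=100.0)
print(len(pts))  # 50
```

Rejection sampling is the simplest approach; every accepted point is guaranteed to lie inside the polygon, which is the key constraint mentioned in section 3.5.4.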

(2). Individual plant placement

Another option is to use individual DTS models in the SIEVE Viewer and place each object at the specific location determined in the GIS (O'Connor 2007). The more detailed and complex an individual plant model is, the higher the computational cost per model. This means that rendering simulations of large plant communities through DTS models is more difficult than through replicated shapes, as it requires either much greater processing capacity or much more time. Moreover, there is a limit to the number of individually editable objects in the SIEVE Viewer. Thus, the choice between replicated billboards and individual models depends on the required realism of the simulation and the number of objects that need to be visualised, as well as the available processing capacity and/or time.


3.5.3 Vegetation Shape

Most characteristics of vegetation species are specified in the EVC documentation related to particular areas. However, for many applications the EVC polygon vegetation classes alone are not sufficient to depict the plants present with a satisfactory degree of accuracy. For example, a tree of a particular species that grows along a river bank will look quite different from a tree of the same species that grows in open grassland, due to the different local environmental conditions. There are three main sources containing specifications of the height ranges and growth rates of Australian plant species: Australia's Native Vegetation booklet from the Department of the Environment and Water Resources (which records the height classes and growth forms of Australia's major plant species), the Agriculture Notes published by the Department of Primary Industries (which record the characteristics of forestry species for different regions; see Figure 3-15), and the EVC documentation created by the Department of Sustainability and Environment.

Figure 3-15: Sample forestry species descriptions (DPI 2008)

The calculation of the vegetation size and growth rate for a particular area is based on the calculated growth potential value for that area and the species information from these official publications. In the virtual environment, all the height and growth rate information extracted from these publications is stored in the vegetation object database, together with the relevant species models. This choice is efficient because the developed tool can read values directly from the database once the plant species is known. This section outlines the key equations used to calculate the vegetation size ranges, as well as the way in which they were integrated into the Visual Basic implementation. The growth rate and vegetation height are species-specific and depend on the growth potential values in the research area. The equations are based on values extracted from the growth potential value layer. The algorithm first selects an EVC polygon, finds the species density for this polygon, and then generates a corresponding number of regular or random points within the polygon. Finally, it extracts a value from the growth potential value layer for each generated point by looping through the generated feature class and obtaining the raster value at that point.
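The raster-sampling step at the end of this loop can be sketched as follows (a simplified stand-in for the ArcObjects raster access used in the thesis; the grid, origin convention and function name are assumptions for illustration):

```python
def sample_raster(raster, origin, cell_size, x, y):
    """Return the raster value at point (x, y).
    raster is row-major (raster[row][col]) with row 0 at the bottom;
    origin is the (x, y) of the lower-left corner of the grid."""
    col = int((x - origin[0]) // cell_size)
    row = int((y - origin[1]) // cell_size)
    return raster[row][col]

# 2x2 growth potential raster with cell size 5; values 1..3 as in Figure 3-14
growth = [[1, 2],
          [2, 3]]
points = [(2.5, 2.5), (7.5, 7.5)]
values = [sample_raster(growth, (0, 0), 5.0, x, y) for x, y in points]
print(values)  # [1, 3]
```

Each generated point is thereby tagged with the growth potential value of the cell it falls in, ready for the size calculations below.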

(1). Replicative foliage rendering

For replicative foliage rendering, the sizes of billboards are fixed to the maximum or minimum value, or assigned within this range. The first step is to obtain the mean and standard deviation of the growth potential values within the selected polygon. M and Sd are the mean and standard deviation for the polygon; they can be obtained directly through the “pStatResults” function in ArcObjects (AO).

Denote the minimum and maximum size values as Min(i) and Max(i) for each species model i at the generated location. These values are defined on the basis of the calculated M and Sd for the research area, and on the referenced documentation, depending on the species and age. Dmin(i) and Dmax(i) stand for the realistic minimum and maximum values of species i in the vegetation documentation. A scaling value S(i) comes from the growth rate in the official vegetation documentation (Figure 3-15), used in conjunction with Dmin(i) and Dmax(i):

Min(i) = (M − Sd) × Dmin(i) × S(i)

Max(i) = (M + Sd) × Dmax(i) × S(i)

Following this, the size range of this species growing in the research area is (Min(i), Max(i)). Moreover, the arrangement of replicated plants can be either in rows or random, and their state can be slightly swaying (plants waving in the wind) or static.
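The size-range equations above translate directly into code; this Python sketch (the function name and example values are hypothetical; the real computation runs in Visual Basic) reproduces Min(i) and Max(i):

```python
def billboard_size_range(mean_gp, sd_gp, d_min, d_max, scaling):
    """Size range for replicated billboards of one species:
    Min(i) = (M - Sd) * Dmin(i) * S(i);  Max(i) = (M + Sd) * Dmax(i) * S(i)."""
    return ((mean_gp - sd_gp) * d_min * scaling,
            (mean_gp + sd_gp) * d_max * scaling)

# Example: mean growth potential 2.0, std dev 0.5,
# documented heights 5-12 m, growth-rate scaling 1.0
lo, hi = billboard_size_range(2.0, 0.5, 5.0, 12.0, 1.0)
print(lo, hi)  # 7.5 30.0
```

Each replicated billboard is then given a random size drawn from (Min(i), Max(i)).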

(2). Individual plant placement

Another method is to place vegetation as individual DTS models, for more realistic visualization.


The size allocated to each point is specified with greater accuracy and is not merely a randomly chosen value within a specified range. The growth potential value extracted at each generated point is denoted G(i), and the growth rate S(i) of species i, associated with the characteristics of the species itself and its age, provides the scaling factor. Smin(i) and Smax(i) stand for the minimum and maximum sizes of the individual species model i. The corresponding equations are:

Smin(i) = Dmin(i) × G(i) × S(i)

Smax(i) = Dmax(i) × G(i) × S(i)

The size of each individual plant model varies depending on the growth potential value at the allocated point and on the species and age of the plant. This approach gives a relatively more realistic simulation of the vegetation cover: a growth-potential-based approach integrates the physical conditions into the placement process. The procedure can also be used with the animation capability, allowing the user to grow trees with changing size properties by updating the growth potential layer.
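The per-point version of the size calculation can be sketched in the same way (function name and values are hypothetical illustrations of the Smin/Smax equations):

```python
def individual_size_range(g_point, d_min, d_max, scaling):
    """Per-point size bounds for an individual DTS model:
    Smin(i) = Dmin(i) * G(i) * S(i);  Smax(i) = Dmax(i) * G(i) * S(i)."""
    return d_min * g_point * scaling, d_max * g_point * scaling

# A point with growth potential 3 (the best areas in Figure 3-14),
# documented heights 5-12 m, growth-rate scaling 1.0
print(individual_size_range(3.0, 5.0, 12.0, 1.0))  # (15.0, 36.0)
```

Unlike the replicative case, which uses one polygon-wide mean and standard deviation, here every point carries its own G(i), so two trees of the same species and age can legitimately differ in size across the polygon.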

3.5.4 Vegetation Location

Intelligent placement involves both relatively realistic shapes and appropriate locations. The intelligent visual representation of both can be powerful in enhancing the ‘realism’ of the visualization outcome and providing a sounder basis upon which to make a preferred scenario choice. Currently, most research focuses solely or mostly on the spatial locality of vegetation objects; object correlations tend to be neglected or absent. Moreover, vegetation classes often include more than one plant species, yet very few approaches have so far been proposed for exploiting plant correlations in virtual environments to improve the realism of simulated vegetation cover (Bishop and Lange 2005). This section explores the algorithms needed to generate both exact locations for the major species in an EVC and co-locations among species.

Appropriate object placement is controlled by generated point layers and preset rules. Apart from the issue of vegetation shapes, a second problem is the determination of a suitable spatial location for each three-dimensional vegetation object. Object placement in this research is controlled by point layers and selected polygons. In order to place vegetation models into a virtual world intelligently, the following common rules are proposed:

• Large areas able to grow vegetation require a sufficient number of trees in the polygon to obtain a reasonable representation of the landscape.

• Considering the EVC type, typical species are the most important component and should be placed into the virtual world first. For the plantation type, the species are chosen by the users.

• Areas that are too small are not suitable for the placement of large plants; only short brushwood or grass should be placed there.

• There should be a reasonable distance among generated tree models in a polygon area.

• Even for tree models with random locations, the location area in the virtual world must be restricted so that it corresponds to the area of the selected polygon in the GIS.

Considering just one plant species: if one or more polygons representing one EVC type have been selected, points are generated randomly or in rows in these polygons and tagged with the value extracted from the growth potential layer. As each new vegetation object is placed, it is located at a feasible position that satisfies the common placement rules. Replicated plant models are rendered into the Viewer in rows or in random distributions, while individual models are positioned into the Viewer at the generated locations. One of the most important issues during this process is to make sure that all the points lie within the polygon; once this is ascertained, the location information can be added to the collection. Both the replicative placement of billboards and the placement of individual models can follow this algorithm. Obviously, it is simple to generate the locations for a single species, but much more difficult to represent the locations of more than one species suitably in the virtual world. Vegetation is classified into ecological vegetation classes, and these classes are represented by polygon layers in the GIS to determine the location of individual species and the spatial distribution of species; each polygon therefore typically describes an EVC with more than one species. The official vegetation documentation may specify co-location information for some of these species. For instance, it may specify that species A, B, and C are typically found in EVC 1, and that species A occurs in clusters and typically together with species B. A fundamental concept is that species B needs to be generated and placed in relation to the locations of species A and the EVC polygons. The placement algorithm used to appropriately map the locations of vegetation objects follows these steps:

• Place the models representing the major species into the virtual world; the process is the same as for the placement of models of just one species.

• Set ecological co-location constraints to get reasonable neighbourhood relations among different species in an EVC area

• Derive the ecological co-location constraints from the official vegetation documentation to determine what the distances between two objects should be for reasonably realistic object placement. The ecological co-location constraints indicate how close an object should be placed relative to another object when they are neighbours, and are specified by proximity analysis. Consider n species sets P1, P2, …, Pn, such that each Pi contains plant objects representing the same species. Given a distance threshold r, two objects on a GIS map (i.e. two plant points in a point layer) are neighbours if their distance is at most r. An example of point co-location analysis is shown below in the form of an undirected connected graph, in which each node is one species set and each edge corresponds to a neighbourhood relationship between the connected species sets.


Figure 3-16: Co-location representation
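The neighbourhood-graph construction behind Figure 3-16 can be sketched in Python (a hedged illustration with made-up coordinates; the function name is hypothetical): two species sets are connected if any pair of their points lies within the threshold r.

```python
from itertools import combinations
from math import hypot

def neighbour_graph(points_by_species, r):
    """Edges between species sets that contain at least one pair of
    points within distance r of each other (cf. Figure 3-16)."""
    edges = set()
    for (sa, pa), (sb, pb) in combinations(points_by_species.items(), 2):
        if any(hypot(x1 - x2, y1 - y2) <= r
               for x1, y1 in pa for x2, y2 in pb):
            edges.add(frozenset((sa, sb)))
    return edges

species_points = {
    "P1": [(0.0, 0.0)],
    "P2": [(1.0, 0.0)],
    "P3": [(0.0, 1.0)],
    "P4": [(5.0, 5.0)],  # too far away to be anyone's neighbour
}
g = neighbour_graph(species_points, r=1.5)
print(sorted(sorted(e) for e in g))
# [['P1', 'P2'], ['P1', 'P3'], ['P2', 'P3']]
```

With these sample coordinates, P4 has no edges, matching the intuition that species placed far apart impose no co-location constraint on each other.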

It is assumed that each edge represents an ideal constraint type (e.g. closeness to within a certain distance). This means that the neighbourhood relation between plants in species sets P1 and P2 is not affected by the fact that P3 is also a neighbour of P1. In general, however, if a vegetation community has plants of more than two species, the distances among plants can be affected by the presence of plants of other species. This sort of influence is a challenge for the future development of intelligent vegetation placement tools; the current research is based on the assumption of an ideal situation in which different species do not affect each other. To calculate suitable plant co-locations, buffers are used to create regions a specified distance around a selected species. Figure 3-13, shown in section 3.4, is part of the dataset, giving the distance between two species. This co-location database contains the total impedance between two species. The distances are used to calculate the corresponding buffer area for those neighbouring species that are within the search radius of the input species; the “distance” constraint causes placement within the buffer. For example, if the species with ID 1 is selected, and the vegetation object of species 1 is defined as the centre with distance 1.5 as the search radius for co-location of species 1 and species 2, then a vegetation object of species 2 should be randomly located within a circle of radius 1.5 (the pink area in the right image of Figure 3-17) around the object of species 1. Thus, for a species that has more than one different species as neighbours, multiple ring buffers are created. In Figure 3-17 the centre point represents species 1, while species 3 (represented as point 3) should be placed within the small circle (purple area) and species 4 (points 4 and 5 in the green area) within the larger circle.
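Random placement within such a buffer circle can be sketched as follows (the function name is hypothetical; the square-root draw is a standard trick, not taken from the thesis, that makes the points uniform over the circle's area rather than clustered at the centre):

```python
import random
from math import cos, sin, pi, hypot

def place_in_buffer(centre, radius, rng=None):
    """Random location for a companion species within the buffer circle
    of the given radius around an already-placed plant (cf. Figure 3-17)."""
    rng = rng or random.Random()
    r = radius * rng.random() ** 0.5   # sqrt draw -> uniform over the disc
    theta = 2 * pi * rng.random()
    return centre[0] + r * cos(theta), centre[1] + r * sin(theta)

rng = random.Random(42)
cx, cy = 10.0, 10.0
x, y = place_in_buffer((cx, cy), 1.5, rng)
print(hypot(x - cx, y - cy) <= 1.5)  # True
```

For multiple ring buffers, the same draw can be repeated per companion species with that species' own radius from the distance database.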


Figure 3-17: Example of multiple ring buffers (ArcGIS Desktop Help 9.2) and Illustration of species spatial co-location patterns


The right image in Figure 3-17 illustrates the spatial co-location patterns for a sample of species, following the outlined constraints. It corresponds to the co-location representation shown in Figure 3-16.

Co-location pattern definition and rule discovery are ongoing research efforts. A major challenge in rendering vegetation models is the relative placement of objects when inter-species relations and influences are taken into consideration.

3.6 Summary

For the appropriate representation of 3D vegetation objects in a virtual world, a proposed scenario is needed to explain what landscapes might look like under different planning settings. Consequently, a set of rules and constraints is required to help define the appearance and distribution of vegetation objects in 3D worlds. The existing SIEVE Direct link provides live communication between ArcMap and SIEVE; however, users have to input many types of parameters manually to control the shapes and placements of vegetation objects. Not all planners are experts across multiple disciplines, so a tool which facilitates the intelligent placement of vegetation objects is necessary.

This chapter put forward a proposal to overcome some limitations of SIEVE Direct by automatically calculating the exact positions, co-locations and parameters to better support the visualization of different species, based on an automated linkage between 3D object models and spatial data in GIS. Meanwhile, as the availability of 3D models increases, SIEVE Viewer landscape models will be populated with more typical vegetation objects, widening the availability and reliability of the vegetation object library. Because the database is well structured, other software can access these models easily by linking to the required tables. The principles developed in this chapter provide an opportunity for public users to build more realistic and conceptual representations of the landscape automatically. The following chapter develops a tool, SmartVeg, that integrates the algorithms and methods explored in this chapter to enhance SIEVE functionality.


4. Plant Growth Simulation via SIEVE

4.1 Introduction

The algorithms described in Chapter 3 were developed into a SmartVeg tool for integration into the SIEVE simulation platform. This chapter addresses the design and algorithms of the SmartVeg tool, which currently runs locally using ESRI Desktop ArcGIS. This GIS package was chosen for the development of SmartVeg because it provides a built-in Integrated Development Environment (IDE) that allows programming with the powerful ESRI ArcObjects toolkit. As an extension based on SIEVE Direct, it enhances the live communication between GIS and SIEVE Viewer to simulate a virtual environment in real time (Stock et al. 2008; Stock C. 2008). This enables data transfer between GIS software and SIEVE and thus supports the object placement process. SmartVeg was developed in ArcGIS using VBA and uses the vegetation library described in Chapter 3. It allows users to place vegetation models into SIEVE and provides the ability to visualize real-world environments as easily as fantasy landscapes. To test this tool, a case study was conducted for the Central Victorian Uplands (CVU) bioregion.

4.2 Integrating Algorithms into SIEVE

4.2.1 SmartVeg Architecture

As outlined, the intelligent placement of vegetation objects requires an interface that combines EVC presentation and common species (native or exotic) presentation with relevant GIS data layers. These layers provide growth potential values and EVC cover area information. Users are provided with an interface that allows the automatic selection of suitable vegetation models with associated parameters, on the basis of the object database or the attribute table of the spatial data layer in ArcGIS (Figure 4-1). The placement of vegetation objects by SmartVeg follows the rules and algorithms developed in Chapter 3, as well as the EVC polygon area information in the GIS. To facilitate the intelligent placement of appropriate vegetation objects, a number of drop-down box options were included in the user interface to enable users to easily perform placement tasks.


Figure 4-1: The main interface of SmartVeg

The process of developing SmartVeg consisted of the following steps:

Step 1: Preparation of spatial data in ArcGIS, including EVC polygon data and the associated parameters for the research area, and a raster layer with growth potential values for the research region. If the original raster layer covers a large area, a new layer covering only the required region should be clipped from the original to minimize processing time.

Step 2: Creation of an option-based interface. Figure 4-1 shows the main interface of SmartVeg. SmartVeg has three major components: a Points Generation interface, options for EVC-based placement and options for Plantation-based placement. To fulfil the different requirements of these components, two procedural processes were written that allow the automatic placement of replicated images and individual models into SIEVE Viewer based on the EVC or Plantation information.

Step 3: Development of a 'Generate Random Objects Layer' tool. As outlined in section 3.4, a point layer containing the growth potential values for each point stored in the GIS attribute table is a prerequisite for subsequent individual object placement. Once users click the long button 'Generate Random Objects Layer based on Growth Potential', a 'Point Generation Interface' (Figure 4-2) pops up to facilitate the generation of random points in the selected polygon on the basis of the information contained in the growth potential raster. The number of points is based on the area of the selected polygon and the density value from the attribute table of this layer.

Figure 4-2: Point Generation Interface

Step 4: Creation of a tool for 'Replicative Trees' rendering. If the 'Display method' in Figure 4-1 is set to 'Display with 2D images', the Replicative Trees interface (Figure 4-3) allows the user to set parameters for replicative foliage, which is then rendered into SIEVE Viewer as 2D billboards filling the selected polygon area.

Figure 4-3: Replicative Foliage Rendering Interface

Step 5: Creation of a tool for 'Individual Trees' allocation (Figure 4-4). The aim of this tool is to enhance the realism of the generated virtual landscape by allowing the user to place more detailed models at particular vegetation locations. The vegetation models are automatically distributed to the calculated locations with the specified parameters, to provide more detailed virtual landscapes for the research areas.


Figure 4-4: Individual Object Placement Interface

Typically, scientific object placement processes are designed for expert use and do not have the clear visual expression necessary to make them understandable to the general public. The SmartVeg architecture developed in this project provides many benefits over traditional methods of vegetation object placement. For example, it provides an 'EVC' approach that reduces the otherwise very lengthy operation time and avoids the requirement for expert knowledge by automatically looping through preset species for each EVC type. Conceptually, filling in the drop-down lists systematically from top to bottom gives users a clear understanding of the placement characteristics. The 'Plantation' approach offers users an opportunity to manage the vegetation scenarios in their own way. The expressions used in the simplified automatic (EVC-based) and manual (Plantation-based) vegetation placement options are one of the key aspects of the SmartVeg tool. As long as the user follows the steps contained in the SmartVeg interfaces, no additional operations are required to place either 2D vegetation billboards or 3D vegetation DTS models. Once the selections have been made from the drop-down lists, SmartVeg obtains the plant models and associated characteristics directly from the object library. This makes it easy to update and maintain SmartVeg and the database separately.

4.2.2 Points Generation

To visualise a required range of vegetation types with suitable distributions in a vegetation landscape, points of the appropriate density and distribution need to be generated. With the Point Generation Interface (Figure 4-2), SmartVeg generates a point shapefile by generating random points at a specified density within polygons representing an EVC. The attribute table of this point file contains the growth potential value of each point, taken from the raster layer.


The dispersion of points (vegetation locations) within the selected polygon is automatically computed (Figure 4-5). Once the random points have been generated, a table is created with columns that store the point ID and growth potential value of each point. For this research, it was assumed that each generated point layer contains the requisite information for a single species of plant.

Figure 4-5: Points Generation Result for a Selected Polygon
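The points generation step above can be approximated with a short Python sketch. This is illustrative only (the thesis implements it with ArcObjects); the polygon, raster grid and function names are hypothetical. Points are rejection-sampled inside the polygon's envelope, and each accepted point records the growth potential value of the raster cell it falls in.

```python
import random

def point_in_polygon(x, y, poly):
    """Ray-casting point-in-polygon test; poly is a list of (x, y) vertices."""
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y) and x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
            inside = not inside
    return inside

def generate_points_with_gp(poly, count, raster, cell, rng=random):
    """Rejection-sample `count` points inside the polygon and attach the
    growth potential value of the raster cell each point falls in."""
    xs = [p[0] for p in poly]
    ys = [p[1] for p in poly]
    rows = []
    while len(rows) < count:
        x = rng.uniform(min(xs), max(xs))
        y = rng.uniform(min(ys), max(ys))
        if not point_in_polygon(x, y, poly):
            continue  # outside the EVC polygon: reject and redraw
        gp = raster[int(y // cell)][int(x // cell)]
        rows.append({"point_id": len(rows), "x": x, "y": y, "GPValue": gp})
    return rows

# Example: a 20 x 20 m square polygon over a 2 x 2 raster of 10 m cells.
square = [(0, 0), (20, 0), (20, 20), (0, 20)]
raster = [[0.8, 1.0], [0.9, 1.2]]
table = generate_points_with_gp(square, 50, raster, cell=10)
```

The resulting list of rows plays the role of the point layer's attribute table, with one GPValue per generated point.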

The implementation steps of the Points Generation function are as follows:

Step 1: Add all the raster layers from the current active data frame in ArcGIS to the combo box in the Points Generation interface.

Step 2: Click the blank button to display the density attribute automatically. The implementation underlying Step 2 is as follows. Select the layer currently highlighted in the table of contents; generated points will be placed into the highlighted EVC polygons. The relevant code is:

Private Function GetHighlightedLayer() As ILayer
    Set pLayer = pDoc.SelectedLayer
    Set GetHighlightedLayer = pLayer
End Function

Set pHighlightedLayer = GetHighlightedLayer()

Then, obtain the "density" value for the selected polygon from the attribute table and display it automatically once the user clicks the blank button:

pFS.SelectionSet.Search Nothing, False, pFCursor  ' pFS stands for the highlighted layer
Set pF = pFCursor.NextFeature  ' pF is the currently selected feature


densityFieldInd = pF.Fields.FindField("density")
Density = pF.Value(densityFieldInd)

Step 3: Generate the point layer with the growth potential value for all the points using the attribute table, and display this layer in ArcGIS. First, the user selects a path to save the generated random points, and a name is ascribed to the layer automatically. Second, the system calculates the foliage count on the basis of the area of the selected polygons and the density value. This process includes:

- getting the envelope that contains all the specified elements. Selected polygons for this EVC type share the same ID; because these polygons may be distributed in a dispersed manner, the code below is used to obtain the extent of the shapes:

Set pEnv = New Envelope
pEnv.Union pFeature.Shape.Envelope
Set GetEnvelope = pEnv
Set pEnvelope = GetEnvelope(pFeatColl)  ' pFeatColl stands for the qualifying polygons in the highlighted layer

- generating random points:

Set pNewFeatCls = CreateRandPoints(m_sFilePath, m_sFileName, pSR, pEnvelope, nPointNum, pFeatColl)
' m_sFilePath and m_sFileName refer to the file name and path for the generated point layer;
' pSR stands for the spatial reference of the elements; nPointNum is the number of points to be generated

- extracting the value from the raster layer to the generated point features by looping through each point in the feature class and obtaining the value of the raster at that point:

Dim pProp As IRasterProps
Set pProp = pInRaster
If pProp.PixelType = PT_CHAR Or pProp.PixelType = PT_UCHAR Then
    pFeature.Value(lFieldIndex) = pRIDObj.Name  ' get the value of the raster identified object at the point feature and add it to the field
End If

- adding the extracted value to the "GpValue" field in the attribute table and adding the layer to ArcGIS. The result for a sample polygon is shown in Figure 4-5. This result provides the foundation for the calculation of vegetation shapes and growth states. It also forms the basis for the vegetation placement approaches discussed in the following sections.


Once the point layer has been created, the mean and the standard deviation of the extracted growth potential values are calculated for all the generated points and automatically saved to background memory. The key code implementing the calculation is shown below; dMean and dStdDev stand for the mean and standard deviation of the growth potential values of the generated points.

pData.Field = "GPValue"
Set pStatResults = pData.Statistics
GetMean = pStatResults.Mean
GetStdDev = pStatResults.StandardDeviation
dMean = GetMean()
dStdDev = GetStdDev()
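The same statistics can be sketched in plain Python. This is illustrative only; whether the ArcGIS statistics object reports the population or sample standard deviation is not stated in the text, so the population form is assumed here.

```python
import statistics

def gp_statistics(gp_values):
    """Mean and standard deviation of the GPValue column, mirroring
    dMean and dStdDev in the VBA fragment above.

    Population standard deviation (pstdev) is an assumption; use
    statistics.stdev for the sample form instead.
    """
    return statistics.mean(gp_values), statistics.pstdev(gp_values)

# Hypothetical GPValue column for a small set of generated points.
d_mean, d_std = gp_statistics([2, 4, 4, 4, 5, 5, 7, 9])
# d_mean = 5, d_std = 2 for this classic example
```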

These two values will be used for the calculation of vegetation sizes in the following section. As mentioned in section 4.2.1, in order to generate a virtual landscape for future vegetation growth scenarios, two procedural models were created to place vegetation objects (either formatted as 2D billboards or 3D models) into SIEVE Viewer. These two procedural approaches – ‘Plantation’ and ‘EVC’– are a practical way to place vegetation objects into a virtual world using intelligent placement methodology. The ‘Plantation’ approach offers convenience when users want to have a high degree of control over the landscape design, while the ‘EVC’ approach would be particularly useful in supporting the decision-making processes of planners with limited knowledge of vegetation. These two simulation processes are discussed and analyzed separately in the following two sections.

4.2.3 EVC-based Placement

Spatial object placement by relevant specialists has proven advantages, but implementing a method that supports the placement of vegetation objects by non-experts is more complicated. The EVC-based object placement process enables planners from a variety of backgrounds to easily achieve visualizations of their proposed vegetation landscapes because it is fully automated. The constraints of this process are based on the species information derived from the EVC documentation and on the growth potential values for the research area. The vegetation shapes follow the formulas and principles discussed in section 3.5, as do the vegetation model locations and the intelligent placement method.


To meet all the functional requirements, the logical steps for placing vegetation objects based on EVC information are as follows (Figure 4-6):

[Flowchart "Ecological Vegetation Class (EVC)": Bioregion → EVC → Young/Immature/Mature → Display (billboard/DTS) → Species → Density → Shape → Send to SIEVE]

Figure 4-6: Implementation Processes of EVC-based placement

The implementation processes and functional requirements of the EVC-based placement approach can be interpreted as follows:

PART 1: The blue rectangles (Figure 4-6) illustrate the processes in the main interface. This part of the flowchart is based on the vegetation classification described in section 3.3. Once a bioregion name has been selected, the EVC types within that bioregion, the growth state options and the display method options are automatically shown in the drop-down lists. During this step, SmartVeg establishes a linkage with the databases that store the bioregion names (see Figure 3-9) and the selected bioregion dataset (see Figure 3-10 for an example involving the CVU bioregion). The associated VB code is:

strSql = "Select DISTINCT BN From BioName Where BioName='" & sBioregion & "'"
m_strEVCTableName = recTemp("BN")
strSql = "Select DISTINCT EVC_Name From " & m_strEVCTableName
recTemp.Open strSql, m_pMdbConn, adOpenStatic, adLockReadOnly
cboEVCName.AddItem recTemp("EVC_Name")

Step 1: Select a bioregion name from the "Bioregion" drop-down list. The corresponding abbreviated name is read from the field "BN" in the bioregion names database (see Figure 3-9).

Step 2: On the basis of this abbreviation (e.g. CVU), the relevant information is derived from the related database that stores all the detailed EVC information (see Figure 3-10).

Step 3: All the EVCs for this bioregion are listed in the "EVC Name" combo box, along with the growth states and display methods.

Step 4: Choose a growth state and a display method. For each EVC that has more than one species recorded in its documentation, the major species are rendered into SIEVE one by one, using a programming loop, according to their cover percentages (see Figure 3-6).

PART 2: The lavender rectangles (Figure 4-6) show the parameters that need to be processed after all the options in the first step have been selected. The outputs of the vegetation model placement process are a collection of plant parameters for displaying plant models in a virtual world.

PART 2A: Replicative Vegetation Billboards Rendering

Once the display method has been chosen as 2D billboards, the Replicative Foliage Rendering Interface (Figure 4-3) pops up. The databases that hold the species details of each EVC (Figure 3-11) and the species distance database (Figure 3-13) are linked. For the replicative plants, it is assumed that the specified species are distributed randomly within the selected EVC polygons at the defined densities.

Since all the species in one EVC are named with the same method ("SpeciesID" in Figure 3-11), once the EVC type is selected these species are sorted in ascending order. The major species of each EVC, such as "561M", is ranked first during the initialization of the interface.

Step 1: Name the objects which will be rendered in SIEVE.

Step 2: Decide the density of the vegetation.
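The PART 1 database linkage can be sketched with Python's sqlite3 standing in for the ADO/Access connection used in the thesis. The table and column names follow Figures 3-9 and 3-10; the data rows here are invented purely for illustration.

```python
import sqlite3

# In-memory stand-in for the Access databases of Figures 3-9 and 3-10.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE BioName (BioName TEXT, BN TEXT);
INSERT INTO BioName VALUES ('Central Victorian Uplands', 'CVU');
CREATE TABLE CVU (EVC_Name TEXT);
INSERT INTO CVU VALUES ('Grassy Dry Forest'), ('Herb-rich Foothill Forest');
""")

def evc_names_for_bioregion(bioregion):
    """Resolve the bioregion abbreviation (field 'BN'), then list the
    distinct EVC names from the table of that name, mirroring the two
    SELECT DISTINCT queries in the VBA fragment above."""
    (bn,) = conn.execute(
        "SELECT DISTINCT BN FROM BioName WHERE BioName = ?", (bioregion,)
    ).fetchone()
    # The table name is interpolated, just as the VBA builds its SQL string.
    rows = conn.execute(f"SELECT DISTINCT EVC_Name FROM {bn}")
    return [r[0] for r in rows]
```

A combo box would then be populated from the returned list, one AddItem call per EVC name.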

The foliage count to be shown in the TGE is based on area and density, controlled by check-boxes: the density value from the EVC database (Figure 3-11), a fixed count of plants, or a density entered manually by the user.

Figure 4-7: An example of the input density
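The three count modes above can be sketched as follows. This is a hedged reading of the check-box logic; the per-100-square-metre density unit is taken from the manual-input option described later in Step 4-3, and the function and mode names are hypothetical.

```python
def foliage_count(area_m2, mode, evc_density=None, fixed_count=None,
                  user_density=None):
    """Foliage count for a polygon under the three check-box modes.

    Densities are assumed to be instances per 100 square metres,
    matching the manual-input option described in the text.
    """
    if mode == "evc":       # density value from the EVC database
        return int(area_m2 / 100.0 * evc_density)
    if mode == "fixed":     # fixed count of plants
        return fixed_count
    if mode == "manual":    # density entered manually by the user
        return int(area_m2 / 100.0 * user_density)
    raise ValueError("unknown mode")
```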

Step 3: Set parameters for vegetation shapes (including the state "sway" or "not sway"), the distribution pattern (in rows or random), and the size, calculated on the basis of the growth potential values of the plants. The parameters of vegetation object presentation are sway or static, size and density. The implementation steps are:

• Check whether “sway” is set on with a check box.

• Determine the locations of replicative billboards, depending on whether foliage instances are to be shown in rows or randomly. The most important part of this step is the determination of heights. In the specified polygon area, the heights of replicative billboards for the selected species should either fit to the minimum/maximum sizes or fall randomly within the calculated range. Due to the variable environmental conditions in different areas, the size range of the plants is affected by the growth potential values in the polygon. Size ranges of vegetation models for the research area are calculated once the 'Replicative Tree' display method has been chosen, implementing the functions and formulas explored in section 3.5.3. dMinSize and dMaxSize are the minimum and maximum sizes of the plant, taken from the database that stores the species details (Figure 3-11). The key code implementing the calculation is shown below. For replicative foliage rendering, based on the algorithms developed in section 3.5.3 (Vegetation Shape), the minimum and maximum size values for each species model in TGE are:

GetMinValue = (dMean - dStdDev) * dMinSize * scaling
GetMaxValue = (dMean + dStdDev) * dMaxSize * scaling

dMean and dStdDev are calculated once the point layer is created, based on the growth potential values. Thus, the size range of the replicative plants is (GetMinValue, GetMaxValue). If the size of a plant model is set randomly within this range, the size of each replicative billboard is defined as:

sngRandScale = Round((GetMaxValue - GetMinValue + r) * Rnd + sngMinScale, 2)

where the parameter r refers to a variable used for fine-tuning the scaling, and sngRandScale stands for the final size of the individual plant billboard.

Step 4: Render the replicative billboards of the specified species into SIEVE within the selected polygon areas. The key steps are listed below.
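Before the rendering steps, the Step 3 size formulas translate directly into a short sketch (illustrative Python; the function names are hypothetical, the formulas follow the VBA above):

```python
import random

def replicative_size_range(d_mean, d_std_dev, d_min_size, d_max_size, scaling=1.0):
    """GetMinValue / GetMaxValue: size bounds for replicative billboards,
    from the section 3.5.3 formulas."""
    lo = (d_mean - d_std_dev) * d_min_size * scaling
    hi = (d_mean + d_std_dev) * d_max_size * scaling
    return lo, hi

def random_billboard_scale(lo, hi, r=0.0, rng=random):
    """sngRandScale: a random size within the calculated range;
    r is the fine-tuning offset from the VBA fragment."""
    return round((hi - lo + r) * rng.random() + lo, 2)

# Example with hypothetical values: growth potential mean 1.0, std 0.2,
# species sizes 5-10 m, giving a billboard range of (4.0, 12.0).
lo, hi = replicative_size_range(1.0, 0.2, 5.0, 10.0)
size = random_billboard_scale(lo, hi)
```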

Step 4-1: Set the prerequisite parameters, and start timing the processing. Set the replicator and foliage paths; the location coordinates of replicative billboards are then saved.

ReplicatorFileFolder = strDrive & RootPath & "replicators"

Get the currently highlighted layer in the TOC; all later operations are based on this layer:

Set pHighlightedLayer = GetHighlightedLayer()

If a highlighted layer is found, create a folder with the same name as the highlighted layer to save text files. Operations for the highlighted layer are saved into this folder.

Set layerFolder = fso.CreateFolder(ReplicatorFileFolder & "\" & pHighlightedLayer.Name)

Step 4-2: Get the number of selected polygons and start looping through the selected polygon features in the layer. Because the same EVC within a bioregion is very likely to be scattered across different locations, there is usually more than one polygon sharing one EVC name in a polygon layer.

Function GetFeatureExtent()
selPolygonCount = selPolygonCount + 1
lngPolygonArea = CLng(pArea.Area)
Set pFeatExtent = pF.Shape.Envelope

Step 4-3: Core procedure for looping through the selected polygons. A critical step for the foliage rendering is the coordinate transformation from the UTM system to the Torque world system, since layers in ArcGIS and models in the Torque Game Engine use different coordinate systems. Thus, the coordinate system of the mission area is important in deciding an object's location when sending data from ArcGIS to SIEVE. SIEVE uses a rectangle definition (left top width height) to define the mission extent in 3D space, and the coordinate system uses the centre point as the origin. A position is defined by (X Y Z) in a right-handed coordinate system (GarageGames 2004). The procedure loops through the point collection for the polygon vertices and writes each coordinate pair into the text file. Accordingly, all the replicative billboards are confined to within this polygon. (1) The coordinate system used in GIS (ArcGIS) is different from the coordinate system in SIEVE (TGE). Therefore, it is essential to transform GIS coordinates into SIEVE coordinates and obtain the selected polygon area in SIEVE (see transformation details in Chen, Stock et al. (2008)).
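A minimal sketch of such a transformation, assuming a simple linear mapping from the layer's UTM extent onto a centre-origin mission rectangle. The actual transform is given in Chen, Stock et al. (2008); the extents, sizes and function name here are hypothetical.

```python
def gis_to_torque(easting, northing, gis_extent, mission_size):
    """Map a UTM coordinate into a Torque mission area whose origin
    sits at the mission centre.

    gis_extent:   (xmin, ymin, xmax, ymax) of the ArcGIS layer.
    mission_size: (width, height) of the Torque mission rectangle.
    """
    xmin, ymin, xmax, ymax = gis_extent
    width, height = mission_size
    # Offset from the GIS extent centre, rescaled to mission units.
    x = (easting - (xmin + xmax) / 2.0) * width / (xmax - xmin)
    y = (northing - (ymin + ymax) / 2.0) * height / (ymax - ymin)
    return x, y

# The top-right corner of a 100 x 100 m extent maps to (100, 100)
# in a 200 x 200 mission whose origin is the centre.
x, y = gis_to_torque(400.0, 800.0, (300, 700, 400, 800), (200, 200))
```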

(2) Check whether the vegetation billboards are to be shown as sway or static, in rows or randomly, and at a fixed size or within a range. Read the billboards' path from the "PNG" field in the database and send it to the path variable. (3) Calculate the foliage count based on area and default density (density from the species detail database), on a fixed count, or on area and a manually input density (instances per 100 square metres). (4) Send the whole set of vegetation billboard parameters to TGE. An ActiveX control, the Microsoft Winsock Control, is adopted; .udpPeerA.SendData is used to send the data.

Step 4-4: Go to the next selected polygon.

Step 5: Set the end time and work out the total processing time for rendering this plant species. The Replicative Foliage Rendering Interface (Figure 4-3) then pops up again for the next species in this EVC. The processes are similar from step 1 to step 4. Use the same operations as for the first rendered species to get the other parameters; the whole set of vegetation billboard parameters is then sent to TGE.


Step 6: When all the species recorded in the database for the selected EVC have been rendered into SIEVE, the loop ends automatically. Additionally, users can simply stop the rendering process if they do not want to place all of these species into SIEVE Viewer. For this placement method, the positions of all later rendered species models rely on the locations of the first rendered major species. Manual placement of millions of vegetation objects is prohibitively time-consuming for large-scale environments (William 2005). This approach, however, can considerably reduce storage space. It can also speed up the implementation of the replicative rendering and the looping processes. Following this process, the simulation of forest vegetation becomes easier.

PART 2B: Individual Vegetation Objects Placement

If the user chooses the 'Display as 3D models' option from the drop-down list in the SmartVeg main interface, the Individual Object Placement Interface (Figure 4-4) pops up. Users only need to select a point layer from the combo box for the individual models' placement. The population of tree location data with specific tree types is described in the following steps. As with replicative billboard rendering, the major species of the chosen EVC type is positioned first.

Step 1: Once the interface pops up, the form is initialized and the generated point layers are added to the drop-down list. Choose a generated point layer for the selected EVC type polygons.

Step 2: Get the density, minimum size, maximum size and model file path (DTS) from the species database (Figure 3-11), and save these values to associated variables. The sizes stored in the database are ideal values for each species grown in a common environment. Then set the start time to calculate the processing time.

Step 3: While reading the density attribute from the database, the value for the first rendered species ID is located and passed to SpeciesID1.

Step 4: Get the number of selected polygons and start looping through the selected polygon features in the layer.

Step 4-1: Get the ID value for the current polygon (similar operations to PART 2A), and then calculate the number of vegetation models based on the selected polygon area and the density value from the database. Write the calculated amount to a variable such as intIndCount. Conceptually, the density of the EVC polygon layer in ArcGIS is larger than the species density in the database, because the density value for each EVC polygon covers the distribution of more than one species. As such, the value of intIndCount is less than the number of points in the selected feature layer.

Step 4-2: Randomly pick intIndCount points from the selected point layer. The selected points must lie inside the current polygon.

Step 4-3: Loop through the picked points and send individual models to SIEVE one by one. Each individual model is placed at the location in SIEVE corresponding to the location of the looped point in GIS. (1) Similar to PART 2A, there are conversions from the GIS to the SIEVE coordinate system. Easting and northing values for each point feature are obtained from the locations of the picked points in the selected point shapefile layer. Plant models of the specified species are placed according to these point locations.

(2) Get the growth potential value for the selected points from the shapefile attribute table into a variable such as GpValue, and calculate a more reasonable model size range from it. As explained in section 3.5.3, for each point the size range of the species of interest should be multiplied by the growth potential value at that point; considering the relation between real-world size and virtual model size, a scaling factor is also applied to adjust the models:

sngMinScale = MinSize * GpValue * scaling
sngMaxScale = MaxSize * GpValue * scaling

Consequently, the sizes of individual models at different locations are likely to differ.
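The per-point sizing reduces to a one-line calculation (illustrative Python; the variable names follow the VBA above, and the example values are invented):

```python
def individual_size_range(min_size, max_size, gp_value, scaling=1.0):
    """sngMinScale / sngMaxScale: the species' ideal size range scaled
    by the growth potential value sampled at the point."""
    return min_size * gp_value * scaling, max_size * gp_value * scaling

# A point with higher growth potential yields a larger model:
low_site = individual_size_range(2.0, 6.0, gp_value=0.5)
rich_site = individual_size_range(2.0, 6.0, gp_value=1.5)
```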

(3) Use .udpPeerA.SendData to send the whole set of individual model parameters to TGE.

(4) Go to the next selected point in this polygon.

Step 4-4: Once all the picked points in this polygon have been sent, go to the next selected polygon.


Step 5: Go to the next species for this EVC type and send its plant models to SIEVE automatically. The processes are similar to steps 2 to 4; the slight difference is that the distribution of the remaining species should fall within a certain buffer area around the first rendered species. During step 3, record the species ID of this species as SpeciesID2 and calculate the number of models. Get the model IDs of the already allocated SpeciesID1 models. Then loop through every SpeciesID1 position and create a buffer for it; the specified number of SpeciesID2 plant models is then randomly located in the buffer area around SpeciesID1. In step 4, buffer analysis is implemented with the following code to obtain the additional parameter:

gp.pointdistance_analysis(SpeciesID1, SpeciesID2, "colocation.dbf", r)

SpeciesID1 stands for the species already located in the SIEVE Viewer; SpeciesID2 stands for the other species close to SpeciesID1. "colocation.dbf" is a standalone database that contains the species IDs and distance measurements between species, created as part of the whole database (e.g. Figure 3-13). r is a variable that records the distance value between two species from "colocation.dbf". All the r values are derived from the official vegetation documentation. The values of SpeciesID1 and SpeciesID2 can be derived from either the "species_1" field or the "species_2" field (Figure 3-13), because the co-location representation is undirected (Figure 3-16).
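The undirected lookup can be sketched with a small stand-in table. The pair keys and distance values here are hypothetical; the real distances live in colocation.dbf (Figure 3-13).

```python
# Hypothetical stand-in for the colocation.dbf table (cf. Figure 3-13):
# keys are (species_1, species_2) pairs, values the separation distance r.
colocation = {(1, 2): 1.5, (1, 3): 0.8}

def colocation_radius(a, b):
    """Return r for a species pair; because the co-location representation
    is undirected, either field ordering must resolve to the same value."""
    r = colocation.get((a, b))
    return colocation.get((b, a)) if r is None else r
```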

Because all the parameter setting processes run in the background, the Individual Object Placement Interface does not pop up again for the next species.

Step 6: Set the end time of processing and calculate the total time. A message box pops up to show the placement time.

The EVC-based placement method implements the placements automatically, with looping processes for the different species. The vegetation object types and other object parameters are controlled by the attributes of the specified database and the preset growth potential layer. The rendered species are preset by SmartVeg, and the locations for each species are also automatically calculated by the functions within the tool. However, it cannot give users insight into detailed vegetation cover design and planning alternatives. For this situation, a plantation-based placement method was developed, as described in the next section.

4.2.4 Plantation-based Placements

Due to the diverse nature of scenarios involved in the planning process, an automatic way to place vegetation objects does not always meet the various requirements of different planners. For example, shrubs can appear in farm regions, whereas crop plants are common in rural landscapes. Plantations such as these are not covered by the EVC types. Therefore, a framework was developed that enables users to manipulate species objects manually from the GIS into the 3D viewer. This placement method relies on the vegetation classification from the "Australian Plant Name Index" as described in section 3.4, the growth potential values of the selected polygon, and the rule-based location algorithms developed in section 3.5. Two databases are therefore linked during the placement: one is the common species information database (Figure 3-12); the other is the common species distance relationships database (similar to Figure 3-13, just with different species IDs). Figure 4-8 illustrates a schematic representation of the implementation of this method.


[Flowchart: Plantation → Species → Young/Immature/Mature → Display (billboard/DTS) → Density → Shape → Send to SIEVE]

Figure 4-8: Processes of Plantation-based Placement

This process is a simplified model based on the EVC model (Figure 4-6). Fewer controls are set by SmartVeg, and there are more options for users. The implementation can again be divided into two main parts.

PART 1: The blue rectangles describe the options in the main interface (Figure 4-1); SmartVeg is linked with the common species database.

Step 1: Commonly used species are added to the combo box once the main interface has been displayed.

Step 2: Choose a growth state.

Step 3: According to the species and growth parameters, the corresponding row in the common species database is located. Then read the OBJECTID, density, minimum size, maximum size, replicative billboard path and individual model path, followed by setting all these values to the specified variables.

Step 4: Choose a display method: either billboards or DTS models.

PART 2:


PART 2A: If the replicative billboard display method was chosen, the Replicative Foliage Rendering Interface (Figure 4-3) appears. Step 1: Set the parameters in the pop-up interface. The operations are similar to steps 1 to 3 of PART 1, except that the species ID need not be recorded manually; instead, the OBJECTID of the species is read from the database into the variable SpeciesID1. Step 2: Calculate the billboard population and send the billboard parameters to TGE, which renders the replicative billboards randomly across the landscape by looping through each selected polygon. Step 3: If the user chooses a second species to continue rendering, repeat PART 1 and step 1, recording the OBJECTID value in the variable SpeciesID2; then repeat step 2, except that each SpeciesID2 model must lie within a buffer centred on a SpeciesID1 model. The distance relationships database is linked to create these buffer regions. After rendering, the value of SpeciesID2 is copied to SpeciesID1. Step 4: To place further species models into TGE, the user simply repeats step 3. PART 2B: If the individual models option is selected, the Individual Object Placement Interface (Figure 4-4) is shown. The process is similar to EVC-based placement. Step 1: The generated point feature layers are added to the combo box while the interface initializes; choose a point layer for the selected polygon area. Step 2: Write the OBJECTID of the selected species from the common species database to the variable SpeciesID3. Step 3: Calculate the number of objects in the current polygon from its area and the species density obtained at step 3 of PART 1, save this count to the variable intIndCount, and randomly pick intIndCount points inside the polygon from the selected point layer.
Step 4: Loop through every point, transferring its GIS coordinates into the TGE coordinate system, calculating a reasonable model size from the ideal size, the growth potential value at that point and proper scaling, and sending all the related parameters to TGE via the udpPeerA.SendData function; this completes the processing for that plant model. Step 5: If the user chooses to send individual models of a second species to SIEVE, repeat steps 1 to 3 but write the OBJECTID to SpeciesID4. Repeat step 4, with one additional operation: during the loop, read the position of each SpeciesID3 model and create a buffer of radius r, where r is the required distance between SpeciesID3 and SpeciesID4. At the end of the placement, copy the value of SpeciesID4 to SpeciesID3. Step 6: To place further species models into SIEVE, repeat step 5. It has been shown that the tedious manual task of placing three-dimensional vegetation models into visual systems can be automated for both experts and non-experts. The rule-based tool SmartVeg allows user concepts, real-world facts and the growth potential value layer for the research area to be encoded within a virtual environment that supports vegetation decision-making and the envisioning of well-informed vegetation scenarios. To demonstrate the potential of SmartVeg, the following section presents a number of validating experiments.
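The quantitative pieces of PARTS 2A and 2B can be sketched as a few small helpers. This is an illustrative Python sketch, not the thesis's Visual Basic implementation: the GIS-to-TGE transform is assumed to be a simple translate-and-scale, the origin values are placeholders, and the size rule is a plain linear scaling by growth potential.

```python
import math
import random

def billboard_count(polygon_area_m2, density_per_m2):
    """PART 2A, step 2: billboard population = polygon area x species density."""
    return int(polygon_area_m2 * density_per_m2)

def gis_to_tge(x, y, origin=(250000.0, 5800000.0), scale=1.0):
    """PART 2B, step 4: map a GIS easting/northing into TGE scene
    coordinates (transform and origin assumed for illustration)."""
    return ((x - origin[0]) * scale, (y - origin[1]) * scale)

def model_size(ideal_size, growth_value, max_growth=3.0):
    """Scale the ideal model size by the local growth potential value."""
    return ideal_size * (growth_value / max_growth)

def point_in_buffer(centre, radius, rng=random.Random(1)):
    """Steps 3/5: a position for the second species inside a buffer of
    radius r centred on a first-species model, uniform over the disc."""
    ang = rng.uniform(0.0, 2.0 * math.pi)
    d = radius * math.sqrt(rng.random())  # sqrt gives uniform area density
    return (centre[0] + d * math.cos(ang), centre[1] + d * math.sin(ang))
```

For example, a 1 ha polygon at a density of 0.005 plants per square metre yields 50 billboards, and every second-species position generated by `point_in_buffer` is guaranteed to fall within the co-location radius of its first-species anchor.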

4.3 Case Study

In this section, the viability of this plant growth simulation method is demonstrated in a particular environmental context in regional Victoria. The case study area was extracted and cropped from the "Central Victorian Uplands" bioregion of Victoria, Australia (Figure 4-9). The sample maps in Figure 3-14a show the growth potential value data and polygon information for this area.


Figure 4-9: Case Study area

Users of SIEVE can access and set vegetation placement parameters via the SmartVeg interface. Acting as an exporter, SmartVeg allows users to make selections based on either the plantation classification or the EVC standards, and then automatically places 2D billboards or 3D models into the SIEVE Viewer within the area corresponding to the selection on the 2D digital map. Several experiments were designed to demonstrate SmartVeg's functions.

4.3.1 Growth Potential Value and Vegetation Height

A major purpose of the intelligent placement method is to display vegetation models with more realistic shapes. Users can see plant heights change according to the different growth potential values.

Clicking the random point generation button in the main interface (see Figure 4-1) opens the Point Generation Interface (Figure 4-2), which creates random points based on the selected growth potential value layer. Figure 4-10 and Figure 4-11 show the generated points in ArcGIS with their different growth potential values, and the corresponding vegetation models in SIEVE with the related heights.
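The point generation step can be sketched as follows. This is an assumption-laden Python illustration, not the ArcGIS implementation: the toy raster, its values and the cell size are placeholders for the real growth potential value layer.

```python
import random

# Toy growth-potential raster (values 1-3, one per cell); the real layer
# and cell size come from ArcGIS, so this grid is purely illustrative.
RASTER = [[1, 1, 2],
          [1, 3, 2],
          [2, 2, 1]]
CELL = 10.0  # assumed cell size in metres

def generate_points(n, rng=random.Random(42)):
    """Scatter n random points over the layer, tagging each with the
    growth potential value of the raster cell it falls in."""
    rows, cols = len(RASTER), len(RASTER[0])
    points = []
    for _ in range(n):
        x = rng.uniform(0.0, cols * CELL)
        y = rng.uniform(0.0, rows * CELL)
        col = min(int(x // CELL), cols - 1)  # clamp the boundary case
        row = min(int(y // CELL), rows - 1)
        points.append((x, y, RASTER[row][col]))
    return points
```

Each generated point carries the local growth potential value, which is exactly the attribute the later height calculation reads.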


Figure 4-10: Generated points based on growth potential value layer

Figure 4-11: Vegetation models with related heights

The darkest region in the centre of the growth potential value layer indicates the highest value, 3, so the points in this area (purple) have the greatest height, 5. The growth potential value in the grey area is 2, and the generated points there (red) have height 3, while the green points have height 1, corresponding to the lowest growth potential value, 1.

The heights of the plant models in Figure 4-11 reflect the data input from the map above (Figure 4-10). These outcomes demonstrate the principle of section 3.5.3 that vegetation shape at the local scale is directly influenced by the growth potential value of the resources a plant can access; a high supply of soil nutrients, for instance, leads to boosted plant growth. The equations and related code for the size of vegetation models, broken down into the uptake of each individual captor module in section 4.2, are thus shown to behave reasonably. This gives users a greater understanding of the height changes occurring in areas with different growth potentials.
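The value-to-height relationship observed in the figures can be written down directly. The rule below is the simplest linear fit to the reported numbers (growth potential 1, 2, 3 giving heights 1, 3, 5); the actual equations of section 4.2 decompose growth by captor module and may differ.

```python
def model_height(growth_potential):
    """Linear height rule consistent with Figures 4-10/4-11:
    h = 2g - 1, so g = 1, 2, 3 gives h = 1, 3, 5.
    This is an assumed simplification of the section 4.2 equations."""
    return 2 * growth_potential - 1
```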

Based on a normative database of vegetation objects and two procedural models for the different vegetation classifications, the principles and algorithms described in chapter 3 were adopted to place landscape objects intelligently and with more natural settings in the virtual environment. The following two sections evaluate the implementations of these two approaches.

4.3.2 EVC-based Simulation

Figure 4-12 shows a sample of EVC-based placement. The bioregion type is chosen from the combo box; 28 bioregions are identified within Victoria, and in this case the CVU (Central Victorian Uplands) is chosen as the test bioregion. The EVC types included in this bioregion are then shown automatically in the "EVC Name" combo box, and the Growth State and Display Method options appear as well. The selection process for the Plantation-based approach is similar.

Figure 4-12: Sample result of the selections

The selected EVC type and growth state are maintained in an internal data log to serve as input for the species selection and co-location algorithms. Once the 3D display method is chosen, the Individual Object Placement Interface (Figure 4-4) pops up. After the parameters in this interface are set, a rapid automated vegetation simulation is rendered in the virtual environment, one species at a time, using the EVC type from the object library.

(1). Display with DTS Models

At the individual level, the growth of a single virtual plant responds to different environmental constraints. Figure 4-13 displays the placement of the first species (palm tree models are employed here); the sizes of the models depend on the calculated value at each allocated point. Figure 4-14 shows the placement of the second species: the under-shrub models are located close to the palm trees. Figure 4-15 then displays the distribution of grass around the palm trees. In this scheme, the positions of later-placed species always depend on the models of the first placed species; in the real world, neighbourhood relations tend to be more complex.

Figure 4-13: DTS models for the first species (Palm tree)

The following result shows that the under shrubs are located in a buffer area around the palm trees owing to their co-location relationship.


Figure 4-14: Placements of the second species (under shrubs)

Figure 4-15: Placements of grass

Figure 4-16: More details for Figure 4-15


Typical plant species in the specified EVC area need to be placed appropriately, while other representative vegetation present at lower percentages, such as shrubs and grasses, can be added for greater realism; Figure 4-16, for example, shows a relatively realistic simulation. A variety of elaborate individual-based plant models also exists, and as rendering speed increases it becomes possible to add more detailed objects to the terrain to enhance realism. The use of individual plant simulation greatly increases the level of realism and editability that can be expected of the placement process. To allow a range of 3D vegetation to be placed appropriately in a landscape scene, however, additional parameters such as the growth value are required in the GIS.

(2). Display with Billboards

Once "Display with 2D images" is selected in the EVC main interface (Figure 4-12), the "Replicative Foliage Rendering" interface pops up (Figure 4-3). In the example of Figure 4-17, the sizes of the sunflowers vary randomly between the minimum and maximum sizes.

Figure 4-17: Replicative billboards shown in rows (Sunflowers)

Figure 4-18 and Figure 4-19 display the results of different distribution patterns: the upper one shows models allocated in rows, while the lower one shows a random distribution of objects.
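The two distribution patterns can be sketched as follows. This is an illustrative Python version, not the thesis's implementation; a real placement would clip positions to the selected polygon rather than a bounding box.

```python
import random

def positions_in_rows(xmin, ymin, xmax, ymax, row_spacing, plant_spacing):
    """Regular row layout, as in Figures 4-17/4-18."""
    points, y = [], ymin
    while y <= ymax:
        x = xmin
        while x <= xmax:
            points.append((x, y))
            x += plant_spacing
        y += row_spacing
    return points

def positions_random(n, xmin, ymin, xmax, ymax, rng=random.Random(0)):
    """Random layout, as in Figure 4-19."""
    return [(rng.uniform(xmin, xmax), rng.uniform(ymin, ymax))
            for _ in range(n)]
```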


Figure 4-18: Placement of the second species

Figure 4-19: Random placement of billboards

The following image shows that the size of each species falls within its calculated range.

Figure 4-20: Placement of the third species


When users are satisfied with the results of their selections, they can initiate a placement process from the ArcGIS view to the SIEVE Viewer. Users therefore do not need to master knowledge of vegetation behaviour or ecosystem restrictions; they can simply operate the automated options to obtain an intelligent representation of changing landscapes.

4.3.3 Plantation-based Simulation

The plantation-based database holds data on the typical economic species grown in Victoria. Using the plantation-based simulation approach, communities can simulate rural landscapes shaped by human activity. The plantation-based simulation is run by filling in the interface in Figure 4-21.

Figure 4-21: Sample selections based on Plantation

This approach is mostly employed for farm forestry or economic plantations. The experimental process is similar to the EVC-based simulation, but requires more manual input for the economic species. Typically, economic plantings on a farm consist of a single species, or two species grown at a set interval. For example, planners can use the replicative function to rapidly create a farm forest of blue gums (Figure 4-22), with proper sizes and shapes in the specified area.


Figure 4-22: Placement of young blue gums (billboards)

To improve the efficiency of land use, it can be beneficial to grow two species at a certain interval (see Figure 4-23). By using the "Replicative Tree" interface twice, users can place multiple species in a scenario (Figure 4-23), with appropriate shapes and locations.
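The two-species interval planting can be sketched as alternating rows. This is a hedged Python illustration (the species names are placeholders taken from the figure caption, and the real tool drives this through the "Replicative Tree" interface rather than code):

```python
def interleaved_rows(xmin, ymin, xmax, ymax, row_spacing, plant_spacing,
                     species=("sunflower", "apple")):
    """Two species planted in alternating rows at a fixed interval, in the
    spirit of Figure 4-23. Species names are illustrative placeholders."""
    placements, y, row = [], ymin, 0
    while y <= ymax:
        name = species[row % 2]  # alternate species per row
        x = xmin
        while x <= xmax:
            placements.append((x, y, name))
            x += plant_spacing
        y += row_spacing
        row += 1
    return placements
```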

Figure 4-23: Placement of sunflowers and young apple trees (billboards in rows)

4.4 Summary

As SIEVE is a community engagement tool, it aims to allow exploration of spatial models by both experts and citizens (O'Connor et al. 2005). Previously, the Direct Link needed specific skills to place objects into SIEVE, and the process was time-consuming. To solve this problem, an enhanced SIEVE Direct live link, named SmartVeg, has been developed. Integrating SmartVeg into SIEVE brings additional benefits. Experts and non-experts have different visualization preferences, and SmartVeg provides two ways to render objects to TGE intelligently, so users do not need to be familiar with the underlying vegetation parameters. It is also a good way to assess different scenarios created via the two structured approaches, by adopting various plant property options. For example, an expert may wish to view a proposed environment with more human interference, where the vegetation does not simply follow the EVC classification; they can use the 'Plantation' approach to set vegetation properties manually and send objects to SIEVE based on a selected point layer that determines the plant locations. Non-experts can use 'Plantation' as well, to envision a wide range of farm forestry issues on private land in support of rural property management. Planners more interested in seeing what the real world looks like can employ the 'EVC' approach to place vegetation into SIEVE. Future scenarios created via SmartVeg can be compared with the existing situation, and enhanced analysis functions could be developed to identify the differences automatically and analyse spatial changes.

The case study clearly showed the benefits of SmartVeg as well as revealing the improvements that are needed. Community planners, architects and land use planners increasingly use three-dimensional visualization tools, and both experts and citizens can successfully visualize changes in vegetation cover and growth state through SmartVeg. It facilitates public participation and the envisioning of predicted vegetation changes.


5. Conclusions and Further Outlook

5.1 Conclusions

The manipulation of real-world 3D data in virtual environments has become the focus of much research. All the traditional GIS vendors provide extended tools for 3D navigation, exploration and landscape simulation; however, many of these systems still lack the capabilities that would let non-professional users represent objects properly. A more participatory planning process accentuates the need for landscape visualization tools designed to suit broad, non-specialist audiences.

This research deals primarily with intelligent mechanisms for placing vegetation objects in virtual environments. The key factors addressed in this thesis are finding appropriate locations of different species and determining reasonable sizes and heights of plants. The primary input data are the EVC data layer produced by Department of Primary Industries (DPI) and growth potential value layer calculated from environmental variables, such as levels of soil erosion, salinity and moisture.

This research explored rules for vegetation object classification and intelligent placement, followed by the development of GIS algorithms to fulfil these rules, and finally built a smart direct link to implement the algorithms. Rule-based intelligent placement rests on the idea that certain decision-making processes can be codified into rules that, if followed automatically, will yield results similar to those in the natural environment. Not all aspects of vegetation location and growth were addressed, but enough to illustrate that a more intelligence-based approach to using the 2D data and 3D models is possible. This outcome will contribute to developing policies for better land and resource management.

This thesis concludes by discussing the lessons learnt in undertaking a case study in the "CVU" region of Victoria and by offering some future research directions arising from its technical specifications.


5.2 Evaluation of the SmartVeg

The aim of the intelligent envisioning system is to help stakeholders learn about their vegetation landscape, to examine the consequences of applying different vegetation covers, and then to support their decisions on vegetation planning issues. SmartVeg was tested within the SIEVE platform with several experiments in the “Central Victorian Uplands” of Victoria, Australia. The results indicate that SmartVeg is an effective, efficient, and demonstrably beneficial tool. Since SIEVE provides interactive and collaborative virtual environments, the outputs of SmartVeg can be compared and evaluated in a collaborative virtual environment by planners with different backgrounds and varying levels of technical competency (Stock et al. 2008).

Applications can be implemented by linking to a range of models. Storing models in a standalone object library considerably reduces the storage space of the system and speeds up operations, both critical considerations in maintaining and updating the system.

However, all landscape process models have limitations in terms of accuracy and predictive capability. The uncertainty in the locations of some special objects should be made visible to planners when using SmartVeg for vegetation placement, but this is a separate topic.

5.3 Applications

Vegetation growth simulation is about designing plant distributions to provide specific physical or financial benefits for communities and planners. Users have to be clear about what their plants will be, and base their decisions on envisioned scenarios. Through this approach they can use SmartVeg, integrated into SIEVE Direct, to develop a vegetation plan tailored to their situation, regardless of their knowledge of ecosystems or soil conditions. As it offers realistic vegetation models and abstract visualization of landscape models, this extension can improve decision-making processes by providing more realistic answers on plant shapes, locations, species and neighbourhood relationships among species.

Currently, the Department of Primary Industries (DPI) Farm Forestry Development Team provides a series of advisory and training services within the north central region to train land managers in the technical skills and confidence needed to operate farm forestry systems (DPI 2009). The "intelligent" function developed in this thesis can thus enhance the services offered by the DPI.

Landholders are inclined to adopt commercial plantings such as crops, pasture or some indigenous vegetation. By using the "Plantation" function in SmartVeg, they can develop a property vegetation plan that assists with the future management of native vegetation on a property or farm. This offers landholders an opportunity to improve the biodiversity of their property. Using the computed growth potential value for a given area, the approach also provides greater certainty and flexibility about the envisioned vegetation cover.

Moreover, scientific evidence indicates global climate change has been occurring for the past several decades and increasing greenhouse gas emissions will continue to cause climatic changes. A significant approach to tackling climate change is through the growth and sustainable management of trees and forests (DPI 2008). Because the shape and location of the plants can be determined by the potential growth value of the research area, the growth of trees and forests can be simulated precisely and efficiently.

5.4 Limitations and Future Research

This section points out several limitations with the intelligent placement method as proposed in the thesis, and overviews some potential solutions and open questions for future work.

The current object library covers only parts of Victoria; widening the SIEVE framework to encompass the whole of Australia effectively will require extending the object library to include a wide range of vegetation types and vernacular infrastructure elements from different parts of the country.

Although SmartVeg succeeded in visualizing the placement of vegetation models, limitations have been identified. For example, representing individual plant objects regularly runs up against the limit on the number of static objects supported in SIEVE: the number of individual objects that can be rendered in TGE is bounded.


The real environment is highly complex, so calculating a more accurate growth potential value is another challenge. Several factors not addressed in this thesis, such as chemical equilibria and organic matter transformations, also affect the calculated growth potential value of an area.

The final concern about this technique is its applicability and scalability. Co-relations among species are more complex in the real environment, whereas this research used only a simple co-location pattern algorithm among typical species. More rules are required to determine the co-location relations among larger numbers of species, and many mechanisms of cooperation or competition between individual plants were not taken into account, although these issues also affect the neighbourhood relations among species. Furthermore, with complex growing distribution patterns, maintaining a comprehensive object library becomes an important function of government.

The objective of natural and easy representation of vegetation scenarios for people with different backgrounds will drive further exploration to overcome the existing limitations.


6. Bibliography

3D Nature. (2008). "Brief History of Computer 3D Landscapes and 3D Nature." From http://3dnature.com/history.html.
Asgary, A., Klosterman, R. and Razani, A. (2007). Sustainable urban growth management using What-If? Int. J. Environ. Res. 1(3): 218-230.
Bishop, I. D. (2008). Understanding place and agreeing purpose: environmental visualisation and other tools. In C. Pettit, W. Cartwright, I. Bishop, K. Lowell, D. Pullar and D. Duncan (eds), Landscape Analysis and Visualisation: Spatial Models for Natural Resource Management and Planning. Springer-Verlag, Berlin, 457-468.
Bishop, I. D. and Lange, E. (eds) (2005). Visualization in Landscape and Environmental Planning. Abingdon, Oxon, Taylor & Francis.
Bishop, I. D., Stock, C. and O'Connor, A. (2005). Interfacing visualization with SDI for collaborative decision making. Proceedings of SSC 2005 Spatial Intelligence, Innovation and Praxis, Melbourne.
Bishop, I. D., Stock, C., Pettit, C. J. and Aurambout, J.-P. (2007). Prospects and Plans for a Fully Integrated Collaborative Virtual Environment: from SDI to AR and back. Department of Geomatics, University of Melbourne.
Bornhofen, S. and Lattaud, C. (2008). Competition and evolution in virtual plant communities: a new modeling approach. Springer Science+Business Media B.V.
Chen, T., Stock, C., Bishop, I. D. and O'Connor, A. (2006). Prototyping an in-field collaborative environment for landscape decision support by linking GIS with a game engine. Paper presented at the 14th International Conference on Geoinformatics, Wuhan, China.
Chen, T., Stock, C., Bishop, I. D. and Pettit, C. (2008). Automated Generation of Enhanced Virtual Environments for Collaborative Decision Making via a Live Link to GIS. Department of Geomatics, University of Melbourne.
Department of the Environment. (2009). "Australian Vegetation Attribute Manual - Section Two - The NVIS Framework - Concepts and Standard Procedures - Structural Information." From http://www.environment.gov.au/erin/nvis/avam/section-2-2.html#table2.
Deussen, O. (2003). Computergenerierte Pflanzen: Technik und Design digitaler Pflanzenwelten. Springer, Berlin, Heidelberg, New York.
DPI. (2008). "Plantations and Climate Change." From http://www.dpi.vic.gov.au/DPI/nrenfa.nsf/childdocs/-8C824F85D7393BC6CA256E5B00010183-B6BA960FD5234D56CA257393007C20D6?open.
DPI. (2008). "Vegetation." From http://www.dpi.vic.gov.au/dpi/vro/vrosite.nsf/pages/vegetation.
DPI. (2009). "Farm Forestry in the North Central Region." From http://www.dpi.vic.gov.au/DPI/nrenfa.nsf/LinkView/F3050FD701728BDECA256E7B007DAB6F46107699C2C444C7CA256E5B000D86E3#species.
DSE. (2009). "Ecological Vegetation Class (EVC) Benchmarks for each Bioregion." From http://www.dse.vic.gov.au/DSE/nrence.nsf/LinkView/43FE7DF24A1447D9CA256EE6007EA8788062D358172E420C4A256DEA0012F71C.
DSE. (2009). EVC Benchmarks - Central Victorian Uplands bioregion. Victorian Government Department of Sustainability and Environment.
El-Hakim, S., Brenner, C. and Roth, G. (1998). A multi-sensor approach to creating accurate virtual environments. ISPRS Journal of Photogrammetry & Remote Sensing 53(6): 379-391.
Ervin, S. M. (2004). Landscape Visualization: Progress and Prospects. Harvard Design School.
GarageGames. (2004). Torque Game Engine SDK Introduction. Hall Of Worlds, LLC.
GarageGames. (2009). "Garage Games Resources." From http://www.garagegames.com/community/resources.
Hammes, J. (2001). Modeling of ecosystems as a data source for real-time terrain rendering. Paper presented at the First International Symposium on Digital Earth Moving, London, UK.
Herwig, A., Kretzler, E. and Paar, P. (2005). Using games software for interactive landscape visualization. In I. D. Bishop and E. Lange (eds), Visualization in Landscape and Environmental Planning: technology and applications, 62-67. London, Taylor & Francis.
Herwig, A. and Paar, P. (2002). Game engines: tools for landscape visualization and planning? Trends in GIS and Virtualization in Environmental Planning and Design, Proceedings at Anhalt University of Applied Sciences. Heidelberg, Wichmann Verlag, 161-172.
Jochen, A. (2007). Key Concepts and Techniques in GIS. London, SAGE Publications Inc.
Klosterman, R. E. and Pettit, C. J. (2005). Guest editorial: an update on planning support systems. Environment and Planning B: Planning and Design 32(4): 477-484.
Kojima, M. and W. J. A. (1972). Computer generated drawings of groundform and vegetation. Journal of Forestry 70(5): 282-285.
Kumsap, C., Borne, F. and Moss, D. (2005). The technique of distance decayed visibility for forest landscape visualization. International Journal of Geographical Information Science 19(6): 723-744.
Lammeren, R. V., Momot, A., Olde Loohuis, R. and Hoogerwerf, T. (2005). 3D visualization of 2D scenarios. Trends in GIS and Virtualization in Environmental Planning and Design, Proceedings at Anhalt University of Applied Sciences. Heidelberg, Wichmann, 132-143.
Lange, E. (1990). Vista management in Acadia National Park. Landscape and Urban Planning 19: 353-376.
Lange, L. (1989). GIS goes 3D. Computer Graphics World 12: 38-46.
Ludwig, G. S. (1996). Virtual Reality: A New World for Geographic Exploration. EarthWorks.
Matthew, J. (2007). Ideas of Landscape: An Introduction. Blackwell Publishing.
MilkShape3dTutorials. (2008). From http://www.chumba.ch/chumbalumsoft/forum/forumdisplay.php?f=7.
Muhar, A. (2001). Three-dimensional modelling and visualization of vegetation for landscape simulation. Landscape and Urban Planning 54: 5-17.
O'Connor, A. (2007). Automatic Virtual Environments from Spatial Information and Models. PhD thesis.
O'Connor, A., Bishop, I. D. and Stock, C. (2005). 3D visualisation of spatial data and environmental process models for community engagement and collaborative data exploration. GeoVis 05, London, July 6-8, 758-763.
O'Connor, A., Stock, C. and Bishop, I. D. (2005). SIEVE: an online collaborative environment for visualising environmental model outputs. International Congress on Modelling and Simulation, Melbourne, December 12-15, 3078-3084.
Pettit, C. J., Cartwright, W. and Berry, M. (2006). Geographical visualization: a participatory planning support tool for imagining landscape futures. Applied GIS 2(3).
Prusinkiewicz, P. (1998). Modeling of spatial structure and development of plants: a review. Scientia Horticulturae 74: 113-149.
Risch, J., May, R., Thomas, J. and Dowson, S. (1996). Interactive information visualization for exploratory intelligence data analysis. Proceedings of VRAIS '96, 230-238.
Schein, S. and Elber, G. (2003). Placement of deformable objects. ACM Transactions on Graphics.
Scott, B. D. (2000). Landscapes as analogues of political phenomena. In Diana Richards (ed), Political Complexity: Nonlinear Models of Politics, 46-82. The University of Michigan Press.
Siyka, Z., Alias, A. R. and Pilouk, M. (2002). 3D GIS: Current Status and Perspectives.
Stillwell, J., Geertman, S. and Openshaw, S. (1999). Geographical Information and Planning: European Perspectives. Berlin-Heidelberg, Springer.
Stock, C., Bishop, I. D. and Green, R. (2007). Exploring landscape changes using an envisioning system in rural community workshops. Landscape and Urban Planning 79: 229.
Stock, C., Bishop, I. D., O'Connor, A., Chen, T. C., Pettit, C. J. and Aurambout, J.-P. (2008). SIEVE: collaborative decision-making in an immersive online environment. Cartography and Geographic Information Science 35: 133-144.
Tomlinson, R. (2007). Thinking About GIS: Geographic Information System Planning for Managers. Redlands, California, ESRI Press.
Wang, L., Hua, W. and Bo, H. (2008). Procedural modeling of urban zone by optimization. Computer Animation and Virtual Worlds 19: 569-578.
Wasilewski, T., Faust, N., Grimes, M. and Ribarsky, W. (2002). Semi-Automated Landscape Feature Extraction and Modeling. Technical Report GVU-02-15, Atlanta: Georgia Tech Graphics, Visualization & Usability Center.
Wieland, R. (2005). oik - nulla vita sine dispensatio. Vegetation modelling for landscape planning. Trends in Real-time Visualization and Participation, Proceedings at Anhalt University of Applied Sciences. Heidelberg, Wichmann, 256-262.
William, D. W. (2005). Generating Enhanced Natural Environments and Terrain for Interactive Combat Simulations (GENETICS). Naval Postgraduate School, Monterey, CA.
Winterbottom, S. J. and Long, D. (2006). From abstract digital models to rich virtual environments: landscape contexts in Kilmartin Glen, Scotland. Journal of Archaeological Science 33: 1356-1367.
Xu, K., Stewart, J. and Fiume, E. (2002). Constraint-based Automatic Placement for Scene Composition. University of Toronto.
Zube, E. H., Simcox, D. E. and Law, C. S. (1987). Perceptual landscape simulations: history and prospect. Landscape Journal 6: 62-80.


Minerva Access is the Institutional Repository of The University of Melbourne

Author/s: Jiang, Li

Title: The intelligent placement of vegetation objects in 3D worlds

Date: 2009

Citation: Jiang, L. (2009). The intelligent placement of vegetation objects in 3D worlds. Masters Research thesis , Engineering - Geomatics, The University of Melbourne.

Persistent Link: http://hdl.handle.net/11343/35191

File Description: The intelligent placement of vegetation objects in 3D worlds

Terms and Conditions: Copyright in works deposited in Minerva Access is retained by the copyright owner. The work may not be altered without permission from the copyright owner. Readers may only download, print and save electronic copies of whole works for their own personal non-commercial use. Any use that exceeds these limits requires permission from the copyright owner. Attribution is essential when quoting or paraphrasing from these works.