

International Journal of Geo-Information

Article

An Efficient, Platform-Independent Map Rendering Framework for Mobile Augmented Reality

Kejia Huang 1,†, Chenliang Wang 2,†, Shaohua Wang 3,*, Runying Liu 1, Guoxiong Chen 1 and Xianglong Li 1

1 SuperMap Software Co., Ltd., Beijing 100015, China; [email protected] (K.H.); [email protected] (R.L.); [email protected] (G.C.); [email protected] (X.L.)
2 Institute of Geographic Sciences and Natural Resources Research, Chinese Academy of Sciences, Beijing 100101, China; [email protected]
3 State Key Laboratory of Remote Sensing Science, Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing 100094, China
* Correspondence: [email protected]; Tel.: +86-101-5989-6730
† Co-first author; these authors contributed equally to this work.

Abstract: With the extensive application of big spatial data and the emergence of spatial computing, augmented reality (AR) map rendering has attracted significant attention. A common issue in existing solutions is that AR-GIS systems rely on different platform-specific graphics libraries on different operating systems, and rendering implementations can vary across platforms. This causes performance degradation and rendering styles that are not consistent across environments. However, high-performance rendering consistency across devices is critical in AR-GIS, especially for edge collaborative computing. In this paper, we present a high-performance, platform-independent AR-GIS rendering engine: the augmented reality universal graphics library (AUGL) engine. A unified cross-platform interface is proposed to preserve AR-GIS rendering style consistency across platforms. High-performance AR-GIS map symbol drawing models are defined and implemented based on a unified algorithm interface. We also develop a pre-caching strategy, optimized spatial-index querying, and a GPU-accelerated vector drawing algorithm that minimizes I/O latency throughout the rendering process. Comparisons to existing AR-GIS engines indicate that the performance of the AUGL engine is two times higher than that of the AR-GIS rendering engines on the Android, iOS, and Vuforia platforms. The drawing efficiency for vector polygons is improved significantly. The rendering performance is more than three times better than the average performances of existing Android and iOS systems.

Keywords: AR-GIS; spatial computing; geovisualization; mobile augmented reality; GPU; parallel technology

Citation: Huang, K.; Wang, C.; Wang, S.; Liu, R.; Chen, G.; Li, X. An Efficient, Platform-Independent Map Rendering Framework for Mobile Augmented Reality. ISPRS Int. J. Geo-Inf. 2021, 10, 593. https://doi.org/10.3390/ijgi10090593

Academic Editors: Wolfgang Kainz, Vit Vozenilek, Georg Gartner and Ian Muehlenhaus

Received: 22 June 2021; Accepted: 3 September 2021; Published: 8 September 2021

1. Introduction

Augmented reality (AR) is an interactive technology that overlays graphical digital information over the physical world. Its main purpose is to combine real-world and virtual information in a manner that provides real-time interaction. More precisely, users receive additional information such as images, sounds, and text that create illusions related to the real world [1,2].

Current AR technologies tend to provide end users and professionals with accessible and innovative applications such as entertainment, advertising, and a variety of other professional applications. Developers can create AR applications that place digital information about businesses into a user's field of view. For example, a digital description of a building is applied directly to a user's view. Some AR frameworks such as ARKit, ARCore, and Vuforia can be used to make AR experiences more easily available to a wider set of users and provide registration and tracking techniques. However, it is challenging to combine AR virtual information with our view of the physical world properly [3].


This challenge differs from issues associated with traditional visualization techniques, where the content to be displayed is well known. To achieve better AR visualization experiences, we must study the key issues in AR visualization implementations that are targeted towards a wider audience than the typical audience of researchers and AR experts.

In geographic information science (GIScience) and related domains, combining AR and GIS to form an augmented reality geospatial information system (AR-GIS) can provide unique opportunities to enhance spatial experiences. This helps to advance applications in planning, analysis, urban development, and other geosciences in a transparent and intuitive way [4,5]. Several studies have shown the high value of AR in earth science. In post-disaster assessments, AR can measure and interpret damaged structures [6]. In virtual travel, AR can be used to display a virtual model of a web interface and indoor and outdoor environments [7,8]. In geoheritage applications, images of four volcanic geological relics were converted into three-dimensional animations and an AR application was developed that attracted the interest of young viewers towards earth sciences [9]. AR can help non-experts to understand processes without first acquiring a heavy professional background in GIScience [10].

Immersive visualization of geographic data has been used widely in recent years. Game engines such as Unity3D support spatial data integration and enable the use of popular AR Software Development Kits (ARKit, ARCore, etc.) to develop spatial AR and virtual reality (VR) applications via extension plugins [11–15]. Traditional GIS vendors such as ESRI have recently started to integrate AR and VR capabilities into their GIS products [16–18]. It is important to note that research and development of such applications still focuses on hardware and software implementations, with less focus on visualization, especially when hardware-based computing resources are limited [19–21]. Another common deficiency in existing solutions is that AR-GIS systems often rely on various platform-specific graphics libraries or operating systems, and therefore rendering implementations can differ across platforms. This leads to performance degradation or rendering styles that are not consistent across environments. However, consistent high-performance rendering across devices is critical in AR-GIS, especially for edge collaborative computing.

The main contribution of this paper is the presentation of an efficient, platform-independent map-rendering framework that uses several visualization algorithms to solve the issues above. We develop a pre-caching strategy, optimized spatial-index querying, and a Graphics Processing Unit (GPU)-accelerated vector-drawing algorithm to improve rendering efficiency. Moreover, we design the cross-platform architecture, including storage, scheduling, and application modules, to provide a consistent AR-GIS rendering style across platforms.

2. Related Work

2.1. Augmented Reality Visualization

Visualization in AR is concentrated on the mapping from virtual data to visual representations. Additionally, it also focuses on spatial relationships and interactions between the physical world and raw (virtual) data [3]. It determines how to combine the physical world and virtual data into 3D images, which are then calculated using 3D–2D algorithms to produce final 2D images. One important aspect of AR visualization is the combination of real and virtual information. If a traditional visualization pipeline is used for AR visualization, it should be modified to reflect a combination of real and virtual information [11]. Based on continued AR development, Keil et al. [22] revised their initial survey to highlight how AR systems require a good registration process that can align virtual and physical objects accurately. Adding registration information to the original pipeline requires providing camera images that represent the environment in the video-transparent AR interface, enabling extraction of environmental information captured in the image, and performing specialized composition steps that reflect the characteristics of AR visualization. In particular, the original rendering step is replaced with a composition step.

This composition step addresses the need to combine different AR visualization-related source data. In summary, visualization in AR differs from the traditional visual definition because it uses a combination of the data to be displayed and information about the user's actual physical environment. In this regard, it should be noted that this work deliberately excludes the problems and challenges of specific AR displays such as spatial AR, video-transparent displays, and optically transparent displays. For example, using an optically transparent display to implement an AR interface presents its own challenges. Issues regarding these specific displays are discussed in other studies [23,24].

Virtual environment visualization and interaction techniques have been studied in depth [25–27]. There are several AR-related studies that have outlined the expression and interaction of virtual data on different platforms [1,20], but work on the classification and analysis of AR visualization technology in terms of cross-platform consistency is very limited. There is little work available on cross-platform AR-related visual topics. Willett et al. [28] defined general AR data representation concepts and studied the relationship between data, the physical environment, and data representation. However, the visualization method depends on the hardware environment of the embedded platform. Zollmann et al. [3] outlined AR visualization techniques and defined the categories and methods of common AR representations and their corresponding patterns. However, these methods lack further performance analysis and cross-platform research. Although AR visualization technology has been summarized in depth, there is still no systematic research on architectural systems, cross-platform consistency, or performance analysis in this domain.

2.2. Augmented Reality and GIS Visualization

As the world's leading provider of GIS products, Esri has been developing its Windows-based software product ArcGIS for decades. The map-rendering engine depends on the Windows graphics device interface (GDI) graphics library. This makes subsequent cross-platform development difficult [29]. With increasing demand for cross-platform products, the implementation of map rendering engines for ArcGIS Runtime SDK products varies between operating systems. Moreover, various operating systems optimize map-engine performance, resulting in various types of dependence on operating system-specific optimization algorithms. This is not conducive to producing consistent map-rendering effects or providing consistent map-rendering performance across platforms. Specifically, ArcGIS only supports the AR function of 3D scenes. It does not support 2D map visualization (including vectors, image maps, network maps, etc.) in AR. Additionally, it does not support real-time ground-proximity visualization and AR effects. It does not support 2D and 3D integrated visualization or real-scene-associated map visualization for AR. ArcGIS does not support high-performance AR feature visualization (minimization of I/O time consumption). Thus, it is difficult to achieve fast display performance without operational delays. It also does not support multi-cache and parallel graphics processing unit acceleration. Lag occurs when the ArcGIS map data volume is large [23]. Mapbox AR does not support 2D and 3D integrated AR map visualization, ground-proximity visualization, Web browser AR map symbol visualization, or iOS operating system AR map symbol visualization [11,24].

The Open Graphics Library (OpenGL) is designed for drawing 2D and 3D vector graphics. Its application programming interface (API) is designed to interact with graphics processing units (GPUs) for hardware-accelerated rendering [30]. The OpenGL specification describes an abstract API that draws 2D and 3D graphics. Although the API has many language bindings and can be implemented entirely using software, its design is mainly hardware-based and independent of language and platform. OpenGL supports AR 2D and 3D integrated rendering, consistent cross-platform rendering, cross-device and cross-browser hardware acceleration, and unified-style high-volume data batch rendering [29]. Because OpenGL is only a graphical image-rendering library, it focuses on computer graphics as the basic element of encapsulation.

It is not designed for AR-GIS map symbolization and mapping, as it does not address packaging of map layers, object avoidance, map management, or other issues. It also does not include AR map wrappers for real scene mapping, spatial modeling masking, or other scenarios [31,32]. Therefore, it is not possible to display large-scale map data directly. It cannot manage map scene data scheduling, cannot display complex map symbols using text- or annotation-type displays, does not support virtual building masking display in AR, and does not support accurate calibration of map symbol data in a real-scale AR display.

3. Materials and Methods

An efficient, platform-independent AR map rendering engine (the augmented-reality universal graphics library (AUGL) engine) is proposed in this paper. The AUGL map-rendering engine consists of a variety of GPU-based rendering strategies, including parallel GPU rendering strategies and cache pool reuse strategies, which are discussed in the subsequent sections of this chapter.

The AUGL engine integrates various rendering strategies based on the OpenGL graphics library. It can draw each map layer via the hierarchical rendering method and save the processed layer results. Finally, it overlays these results to complete the AR map rendering process.

As a method of connecting applications and operating systems, the AUGL engine can display spatial objects on a screen without any platform-specific functions.
Based on the OpenGL graphics libraries, the AUGL engine is detailed by a unified framework that formally describes the map symbol drawing approaches and map data exchange interfaces. The AUGL engine framework is implemented by the OpenGL for Embedded Systems (OpenGL ES) graphics library in the Android and iOS operating systems, the WebGL graphics library in Web browsers, and the OpenGL graphics library on Desktop GIS. The platform-independent specification ensures AUGL engine interoperability and consistency across different implementations so that users and developers need not worry about the different characteristics of the underlying hardware platform. All of the graphical mapping elements (points, lines, polygons, texts, and effects) are drawn in the same manner and constitute elements of a unified style on every platform. The platform-independent AR map symbol visualization architecture proposed in this paper is shown in Figure 1.

Figure 1. Platform-independent visualization architecture of AUGL.
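To make the unified-interface idea concrete, the minimal C++ sketch below shows how map-symbol drawing code can be written once against an abstract backend that each platform implements over OpenGL ES, WebGL, or desktop OpenGL. The interface and function names (IGraphicsBackend, drawPolygonSymbol) are illustrative assumptions, not the actual AUGL API.

```cpp
#include <cstddef>
#include <vector>

struct Vertex { float x, y, z; };

// One abstract interface; each platform supplies a backend that maps these
// calls onto OpenGL ES (Android/iOS), WebGL (browser), or desktop OpenGL.
class IGraphicsBackend {
public:
    virtual ~IGraphicsBackend() = default;
    virtual void uploadVertices(const std::vector<Vertex>& v) = 0;
    virtual void drawTriangles(std::size_t first, std::size_t count) = 0;
};

// Symbol-drawing code is written once against the interface, so point, line,
// polygon, and text symbols render with one style on every platform.
void drawPolygonSymbol(IGraphicsBackend& gl, const std::vector<Vertex>& tris) {
    gl.uploadVertices(tris);
    gl.drawTriangles(0, tris.size());
}
```

Because the interface only exposes drawing primitives common to all three OpenGL-family libraries, no map-symbol code path depends on a platform-specific feature.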




3.1. Architecture of the Augmented-Reality Universal Graphics Library Engine

AR map visualization requires the same data sets as conventional map visualization, with dynamic content and metadata information: vector data sets, terrain data sets, 3D models with or without texture, point clouds and so on. However, because it mainly processes the window-area data of the current position, it is relatively light at the data level. The display content is more complex and lifelike, and the color processing and collection details of virtual elements achieve the effect of augmented reality visualization. Current AR devices, especially mobile augmented reality devices, have limited real-time processing power. In practice, how AR-rendered map symbols can be "attached" efficiently and in real time to the corresponding real objects (ground, walls, tops of buildings, river surfaces, underground surfaces, etc.) remains a key issue.

In this paper, in order to consider the specific input information from MAR devices, compared with the traditional classical GIS architecture, our key requirements for AR-GIS include combined virtual-reality rendering with occlusion and visibility processing, cross-platform consistent map rendering, real-time positioning and camera sensing, user interaction based on the pinhole camera model, etc.

Several high-performance spatial computation methods are used to help the AUGL engine provide an efficient rendering process. First, retrieval is accelerated by using dynamically updated spatial database index hit technology (a scheduling technique to improve the retrieval efficiency of the spatial index) and dictionary texture caching technology. Second, map octree dynamic memory cache pool technology is implemented to update the cache in real time. The update operation is triggered when the geographic range or stylization of the map symbol data changes.

The modules of the AR-GIS rendering engine (AUGL) include a GIS data module, AR map rendering engine module, perception module, user interaction module, AR map interaction module, AR map application module and so on. The architecture of this method is shown in Figure 2.

Figure 2. Architecture of AUGL.

The sensing module includes two sub-modules: location processing and camera processing. Location processing is responsible for estimating the geographical position, velocity and direction of objects in 3D through GNSS. In order to estimate position, GNSS uses a set of satellites orbiting the earth and ground stations. At present, four main systems have been put into use, including GPS of the United States, GLONASS of Russia, Galileo of the European Union and BeiDou of China.


In this paper, AUGL is compatible with the above four positioning systems and takes advantage of them. Because the geographical location accuracy of GNSS is affected by many error sources, including the ionosphere, building reflection, etc., the AUGL positioning module supports access to high-precision differential systems, uses real-time kinematic (RTK) and precise point positioning (PPP) technology, and connects to a high-precision receiver through Bluetooth, which can achieve centimeter-level accuracy. At the same time, in order to achieve low power consumption, strong autonomy and convenient portability, it supports access to high-precision consumer mobile devices (e.g., Huawei P40 and Mi 11 Ultra); without connecting any receiver, the accuracy can reach centimeter level [33,34]. This is used for positioning and azimuth estimation (3D absolute coordinates, pitch angle, yaw angle, roll angle). Camera processing is responsible for acquiring 6-DOF poses through attitude estimation of device motion. During this period, centimeter-level relative attitude information is obtained in real time according to camera parameters (mainly external parameter changes). This accurate 6-DOF relative attitude information is fused with the absolute position obtained by location processing to ensure that a device in motion can acquire a high-precision coordinate reference in real time, which is a prerequisite for generating the augmented image that accurately depicts the real world through AR.

The GIS map data module is responsible for parsing geospatial data in an MAR (mobile augmented reality) scene into a data format that the AR map rendering engine module can organize hierarchically. It is mainly responsible for reading, converting, registering and other parsing operations on spatial data, and encodes the parsing results into a unified layer data unit of the AR map. The data of this module include vectors, rasters, 3D spatial data (3D vectors, 3D networks, voxel grids, etc.), RTSP services, text address data, and pictures and videos recorded by mobile phone or UAV cameras. These data types meet the conditions of AR map rendering after parsing.

The AR map rendering engine module is used to parse the output of the AR map data module into an OpenGL-recognizable format, and to render the visualization results of the corresponding symbol types according to the rules of map stylization. It includes two sub-modules: the cross-platform OpenGL graphics library and AR map symbol processing. The cross-platform OpenGL graphics library is responsible for unifying the rendering algorithms of multiple language platforms (Android, iOS, Web, Unity, etc.), triangulating and occlusion-culling the spatial geometric data, converting all geographic coordinate data into vertex data, and packaging them into batch cell objects to be drawn. The AR map stylization sub-module is responsible for the batch unit objects output from the OpenGL graphics library. According to the map rules [35,36], it creates stylized objects such as area, line, point, annotation, vector tile, grid tile and image symbols, and calls the graphics library API to render these stylized objects as visual data. Finally, the OpenGL state machine is used to control the data exchange API of the rendering pipeline, and the visualization results are displayed on the screen.
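As a concrete illustration of the sensing module's fusion step described above, the sketch below propagates the last absolute GNSS/RTK fix with the camera's relative 6-DOF motion so the moving device keeps a real-time coordinate reference between fixes. All names are hypothetical, and the naive additive composition of translations and Euler angles stands in for the proper filter-based fusion (e.g., an EKF with rotation composition) a production system would use.

```cpp
// Hypothetical types: an absolute pose from location processing and a
// per-frame relative motion from camera processing (attitude estimation).
struct GeoPose {
    double x, y, z;            // absolute 3D coordinates (metres)
    double pitch, yaw, roll;   // attitude in radians
};

struct RelativeMotion {        // camera-derived 6-DOF increment
    double dx, dy, dz;
    double dPitch, dYaw, dRoll;
};

// Propagate the last absolute fix with the camera's relative motion.
// NOTE: adding Euler angles is only a rough approximation; real code
// would compose rotations (quaternions/matrices) and filter both sources.
GeoPose propagate(const GeoPose& anchor, const RelativeMotion& m) {
    return { anchor.x + m.dx, anchor.y + m.dy, anchor.z + m.dz,
             anchor.pitch + m.dPitch,
             anchor.yaw   + m.dYaw,
             anchor.roll  + m.dRoll };
}
```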
The AR map interaction module is responsible for calculating and responding to the interactive operations generated by the user interaction module according to the location data generated by the positioning and camera modules. When combined with AR, the classic mouse or touchpad controller is replaced by the user movement tracked by AR devices. Additionally, this kind of map visualization usually needs absolute positioning relative to the global earth frame, so user interaction needs to accurately handle the absolute attitude of AR devices. These operations include screen gestures (pinch, zoom in, zoom out, rotate, pan, select, double-click, capture, magnifying glass, etc.) and customized button commands (local redraw, global redraw, add, delete, modify, query, geometry edit, etc.). The module supports the encapsulation of these operations into events, and provides active notification methods. The combination of these methods makes it easy for users to complete tasks.
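A minimal sketch of how such operations might be encapsulated as events with active notification (an observer pattern); the enum values mirror the operations listed above, while the class and method names are illustrative assumptions rather than the AUGL API.

```cpp
#include <functional>
#include <vector>

enum class MapGesture { Pinch, ZoomIn, ZoomOut, Rotate, Pan, Select, DoubleTap };
enum class MapCommand { LocalRedraw, GlobalRedraw, Add, Delete, Modify, Query };

class InteractionHub {
public:
    using GestureHandler = std::function<void(MapGesture)>;
    // Subscribers (e.g., the AR map interaction module) register once.
    void subscribe(GestureHandler h) { handlers_.push_back(std::move(h)); }
    // Called with gestures already resolved against the device's absolute pose.
    void notify(MapGesture g) { for (auto& h : handlers_) h(g); }
private:
    std::vector<GestureHandler> handlers_;
};
```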

3.2. High-Performance AR-GIS Rendering Engine Parallel Computation

The need to visualize large amounts of data in map windows is growing rapidly [37] as display technologies and devices such as high-definition (HD) mobile screens, large LCD/LED screens, HD projectors, display walls, and immersive virtual environments are increasingly used in geospatial applications to visualize large spatial areas with rich details. On the other hand, the age of big data has made vast amounts of geospatial data widely available for use in a variety of GIS applications that require real-time or near real-time responses to solve complicated geographic problems. Therefore, high-performance map visualization is essential not only for online map services, but also for offline geospatial computing.

In this paper, we use a variety of methods to achieve high-performance spatial computing in the rendering process. On the one hand, spatial database index hit technology based on dynamic updates and dictionary texture cache technology are used to speed up the retrieval process. On the other hand, in order to update the cache in real time, map octree dynamic memory cache pool technology is implemented, and the update operation is triggered when the geographic range or style of the map symbol data changes.

3.2.1. AR-GIS Map Symbol Parallel Processing Framework

Parallel computing is an efficient way to process tasks via multi-threaded or parallel processes. The computing nodes used in parallel computation are connected via a network and can achieve parallel data transmission acceleration and computational efficiency. Parallel GIS technology applies parallel computing technology to the parallel storage, query, retrieval, and processing of massive annotation data, and provides the ability to process massive spatial geographic data by establishing software systems with fast response speeds and high operational efficiency. A new generation of multi-core, parallel high-performance computing integrated with high-performance parallel rendering technology and algorithms has become an area of interest in GIS mapping [38].

In order to take full advantage of the mobile augmented reality (MAR) spatial adaptive decomposition method, we have developed a rendering parallelization framework for AR map symbols in multi-core environments (Figure 3). When the first drawing task is generated, the framework generates multiple data layers for different symbolization levels based on the original data. These layers include vector layers, raster layers, text layers and other layers. The data structure for rendering layers is the symbolic data. It contains the rendering behaviors and some useful information for drawing geometries and raster objects (e.g., categories of symbols for spatial data, the collection of coordinates, the drawing style). Spatial data are acquired for each data level within the current AR viewport location map, and multiple uniformly distributed grid symbol data are created. These data are distributed to different drawing sub-threads through the adaptive decomposition method; each drawing sub-thread launches the AR map symbol processing, and queries, retrieves, transforms and renders the corresponding characteristics of the data in its grid. The drawing results are sent to the shared resource thread for aggregation and stored in the buffer pool. The shared resource thread is responsible for sending the drawing results to the main thread when the main thread map is refreshed or when the customizer is triggered. The main thread loads the sub-image results of these AR maps and merges all the sub-image textures into the final view displayed in the AR viewport. With this, the entire process of parallel drawing of AR map symbols is completed.

Figure 3. AR map symbol rendering parallelization framework in multi-core environments.
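The decomposition-aggregation flow of Figure 3 can be illustrated with standard C++ threading: grid cells are dispatched to drawing sub-threads and their results are gathered for the main thread to merge. GridCell, Tile, and renderCell are placeholders; a real implementation must also manage per-thread GL contexts or render off-screen, which is elided here.

```cpp
#include <future>
#include <vector>

struct GridCell { int row, col; };
struct Tile { unsigned textureId = 0; };   // a rendered sub-image

// Placeholder for a drawing sub-thread's work: query, retrieve, transform
// and render the symbols falling inside one grid cell.
Tile renderCell(const GridCell& cell) {
    (void)cell;
    return Tile{};
}

std::vector<Tile> renderViewport(const std::vector<GridCell>& cells) {
    std::vector<std::future<Tile>> jobs;
    jobs.reserve(cells.size());
    for (const auto& c : cells)            // adaptive decomposition -> sub-threads
        jobs.push_back(std::async(std::launch::async, renderCell, c));

    std::vector<Tile> tiles;               // shared-resource aggregation buffer
    tiles.reserve(jobs.size());
    for (auto& j : jobs) tiles.push_back(j.get());
    return tiles;                          // main thread merges into the final view
}
```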

The generation and drawing of symbolized data in each grid is completed in the drawing sub-thread. Therefore, any operation of the main thread will not increase overhead to the visualization task of the drawing sub-thread. At the same time, since the display and update of the symbolized data in each grid are completed in the main thread, time-consuming operations such as query, retrieval, conversion and rendering in the drawing sub-threads will not increase the amount of calculation for the display of the main thread's visualization results.

Moreover, to accelerate application of the coordinate conversion process to large volumes of AR map symbols, the GPU is used instead of the CPU to perform vertex transformations of texts. This reduces the time overhead associated with transferring large amounts of geographic coordinates from the host memory to the GPU device. In this article, we use multiple subtasks processed by multiple computation units to draw map symbols in parallel. Examples include transformation of map symbols from geographic coordinates to OpenGL Render Objects [39] in a multi-threaded manner using multiple CPUs, and transformation from an OpenGL Render Object to screen coordinates by using multiple GPU cores to perform sub-processes and thus reduce computing time. Although parallel computing has been widely used in GIS software [40–42], this paper proposes a combined parallel computing method of multi-CPU and GPU multi-core to accelerate the conversion of geographic coordinate data.

The parallelization framework provides a general platform for accelerating the visualization of various map symbol data and for quick responses to real-time map visualization view interaction requests for large amounts of data.
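A simplified sketch of the combined CPU/GPU path follows: worker threads convert geographic coordinates to projected vertex data in parallel, and the resulting batch is then uploaded once to GPU memory (e.g., via glBufferData on the GL thread) so per-vertex screen-space transformation runs on the GPU. The Web Mercator conversion here is a stand-in for the engine's actual projection code.

```cpp
#include <cmath>
#include <cstddef>
#include <thread>
#include <vector>

struct LonLat { double lon, lat; };       // degrees
struct Vertex2D { float x, y; };

static Vertex2D toMercator(const LonLat& p) {
    const double kPi = 3.14159265358979323846;
    const double R = 6378137.0;           // WGS84 semi-major axis
    double x = R * p.lon * kPi / 180.0;
    double y = R * std::log(std::tan(kPi / 4.0 + p.lat * kPi / 360.0));
    return { static_cast<float>(x), static_cast<float>(y) };
}

// Multi-CPU stage: each thread converts a stripe of the input coordinates.
std::vector<Vertex2D> convertParallel(const std::vector<LonLat>& pts,
                                      unsigned workers) {
    std::vector<Vertex2D> out(pts.size());
    std::vector<std::thread> pool;
    for (unsigned w = 0; w < workers; ++w)
        pool.emplace_back([&, w] {
            for (std::size_t i = w; i < pts.size(); i += workers)
                out[i] = toMercator(pts[i]);
        });
    for (auto& t : pool) t.join();
    return out;   // a single bulk upload to GPU memory follows on the GL thread
}
```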


3.2.2. High-Speed AR-GIS Map Symbol Cache

Processing of the AR map symbol element visualization style requires extensive drawing pre-processing time. The AUGL engine provides a map symbol data cache scheme based on load-balancing memory pool technology. Cached data storage and reuse of cache loading are key to making full use of the cached map symbol data.

Image data caching strategies include horizontal and vertical methods. As shown in Figure 4A, the image (i, j) in the current view area of layer n is attached to the AR map windows for map symbol visualization and imported into the cache pool. The surrounding images (i − 1, j − 1), (i, j − 1), (i + 1, j − 1), (i − 1, j), (i + 1, j), (i − 1, j + 1), (i, j + 1), and (i + 1, j + 1) represent its extension in layer n. In the vertical, (i, j) represents the image of layer n, and the subsequent four graphs indicate that the image of layer n corresponds to the four images divided by scale parity rules at the (n + 1)th layer (Figure 4B). Cached images in layer n − 1 are divided into images according to the same scale rules and only occupy 1/4 of the spatial range of the nth layer (i, j) images.


Figure 4. AR map symbol caching mechanism: (a) data extending and (b) data stretching caching mechanisms.

During the caching process, it is essential to check for and remove duplicates between the extended image of the nth layer image (i, j) and other images in layer n, as well as duplicates between the stretched image of the nth layer image (i, j) and the (n − 1)th layer and other extended images. Images in memory are loaded from disk files first and then exported into the cache pool.

The cached data loading method is shown in Figure 5; the nth layer scale stores all of the extended images of that layer, and layer n − 1 stores all of the stretched images that are smaller than the current scale bar. The (n + 1)th layer stores all of the stretched images that are larger than the current scale bar. The images are loaded during the display process using the texture traversal order: layer n > layer n + 1 > layer n − 1. The exact loading process is as follows:

(1) The user browses the map to the nth layer and sends a display request to the system.
(2) The system automatically traverses all of the images of the nth layer in the cache pool. The images are transferred directly to graphics memory for display. Images in the blank area are overwritten to visualize the latest images in the current area intuitively.
(3) The images of the (n + 1)th layer in the cache pool are traversed. If an image intersects the current view area and has a scale bar greater than the current view area, the texture cache is stretched into the graphics memory for display via texture mapping technology, overwriting the blank area of the image.
(4) The images of the (n − 1)th layer in the cache pool are traversed. If an image intersects the current view area with a scale less than the current view area, the texture is stretched to the graphics memory for display via texture mapping technology, overwriting the blank area of the image.
(5) All images in the nth layer are traversed. All images that intersect the current view are copied to memory, and images of the (n + 1)th or (n − 1)th layer are replaced to ensure that the final displayed image is up-to-date with no remaining stretched images.

Figure 5. AR map symbol data pre-cache loading reuse principle.
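The loading order in steps (1)-(5) might be sketched as follows. CachePool, TileKey, and the blitting placeholders are illustrative, and real code would compute the (n + 1)/(n − 1) tile indices from the scale parity rules rather than reusing (i, j) directly.

```cpp
#include <optional>
#include <vector>

struct TileKey { int layer, i, j; };
struct Texture { unsigned id = 0; };

// Placeholder pool; a real one is filled by the caching mechanism of
// Figures 4 and 5 and checked for duplicates on insertion.
class CachePool {
public:
    std::optional<Texture> find(const TileKey&) const { return std::nullopt; }
};

// Display-time traversal: exact layer-n tile first, then stretched n+1,
// then stretched n-1; step (5) later replaces the stretched stand-ins.
void drawViewport(const CachePool& pool, const std::vector<TileKey>& visible) {
    for (const auto& k : visible) {
        if (pool.find(k)) { /* blit exact tile */ continue; }
        TileKey finer  { k.layer + 1, k.i, k.j };   // indices simplified here
        TileKey coarser{ k.layer - 1, k.i, k.j };
        if (pool.find(finer))        { /* downscale via texture mapping */ }
        else if (pool.find(coarser)) { /* upscale via texture mapping  */ }
    }
}
```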







This(1) The module user ofbrowses the AUGL the enginemap to isthe used nth to layer obtain and style sends information a displayfor request annotation to the initialization,system. including name, size, font, angle, type, italics, decoration, combination infor- mation, etc. It also serves to prepare these symbols as a collection of object information to be drawn for quick acquisition and coordinate transformation when drawing map symbols per frame. During the map symbol drawing process, the symbol style cache is constructed from the symbol and thematic layers in the order in which the symbol layers are acquired. The annotation style caches that are not included in the annotation style memory pool are attached to the annotation style memory pool via queuing operations. If the number of memory pools exceeds the upper limit of 256, the update algorithm is automatically performed according to the annotation of the current viewport and historical browsing record, filtering out the annotation style cache data to be removed. The number of memory pools online can be adapted dynamically according to the operating system memory. When a new map annotation drawing task is launched, the priority is to search for the annotation style cache in the memory pool. If the same annotation name or style, such as annotation collection decoration information, is present, it is obtained directly from the memory pool. This strategy enables a large number of annotation style reuses. For map zoom browsing with different scales, there are small changes in the generated map annotations. Most of the annotations can be queried in the memory pool without frequent updates. In this case, the reuse rates of the name and decoration annotation style are quite high. For map panning operations in different spatial ranges, the generated map annotation changes greatly due to fundamental changes in the current viewport display content. New annotations for drawing are updated frequently. Therefore, memory pools must update the cache frequently and remove items from the cache in the order that they are created when the upper limit is exceeded. In this case, the symbol style and font type reuse rates are high. According to the frequency of symbols appearing in the map browsing process and symbols in the current viewport to establish a comprehensive evaluation algorithm, the map symbol style is obtained via the memory pool via key-value pair data organization and matched to the dictionary texture cache to obtain the specified symbol drawing image style directly. The entire caching process is shown in Figure6. ISPRS Int. J. Geo-Inf. 2021, 10, x FOR PEER REVIEW 12 of 26 ISPRS Int. J. Geo-Inf. 2021, 10, 593 11 of 24

Figure 6. The principle of map symbol pre-caching in the AUGL engine.

3.2.3. GPU-Based High-Speed Pre-Processing for OpenGL Render Objects (RO)

Calculating the screen coordinates from the geographic coordinates is the most time-consuming part of the mobile augmented reality drawing process. First, the specified geometry of the GIS layer is set, and for different GIS spatial data we decide whether to perform this step in bulk, depending on the amount of data and its characteristics. Since OpenGL only supports triangle, point and line primitives, we need to represent GIS geometry as a combination of these three basic components before preparing it to be assigned to the OpenGL state machine. According to the principle of OpenGL rendering pipeline coordinate system transformation, shown in Figure 7, the geographic coordinates must be transformed into OpenGL render objects (RO) (i.e., model coordinates in the model coordinate system). These vertex data are then assembled into drawing elements according to the properties of the vertex objects, combined with GIS symbolization rules, and then clipped by comparing the elements against the user's view and clip planes and the model projection matrix, discarding the elements located outside the view and clip planes. By setting the calculation of the OpenGL rendering pipeline matrix transformation, GIS graphics data are converted into a pixel matrix that can be displayed by the device screen. The transformation pipeline operates in the following order: model coordinate system -> world coordinate system -> camera coordinate system -> projection. Finally, the blanking operation is performed; that is, the obscured objects are cropped to obtain the window pixel coordinates that the GIS geometry object finally displays. The principle of pre-processing AR-GIS geographic coordinates based on the OpenGL rendering pipeline is shown in Figure 7.




Figure 7. The principle of AUGL engine map symbol pre-caching.
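The pipeline order described above (model -> world -> camera -> projection) reduces to a chain of 4x4 matrix products applied to each render-object vertex. The minimal types below stand in for a real math library such as GLM; from the returned clip-space position, the perspective divide and viewport mapping then yield the window pixel coordinates.

```cpp
struct Vec4 { float x, y, z, w; };

struct Mat4 {
    float m[4][4];
    // Matrix-vector product, row-major layout.
    Vec4 operator*(const Vec4& v) const {
        return { m[0][0]*v.x + m[0][1]*v.y + m[0][2]*v.z + m[0][3]*v.w,
                 m[1][0]*v.x + m[1][1]*v.y + m[1][2]*v.z + m[1][3]*v.w,
                 m[2][0]*v.x + m[2][1]*v.y + m[2][2]*v.z + m[2][3]*v.w,
                 m[3][0]*v.x + m[3][1]*v.y + m[3][2]*v.z + m[3][3]*v.w };
    }
};

// Clip-space position of a render-object vertex; on the GPU this chain is
// what the vertex stage evaluates for every vertex in the batch.
Vec4 toClipSpace(const Mat4& projection, const Mat4& view,
                 const Mat4& model, const Vec4& vertex) {
    return projection * (view * (model * vertex));
}
```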

The transformation of the AR map from geographic coordinates to screen coordinates can be divided into two stages. In the first stage, the geographic coordinate system of the map is obtained and the ellipsoid of the geographic coordinates is determined. AUGL uses WGS84 by default, and the ellipsoid can also be modified according to the meta-information of the map. According to the projection coordinate system of the map, the geometric objects of the map are converted into geometric objects of the specified projection coordinate system, that is, from geographic coordinates to rectangular coordinates. The rectangular coordinates of these geometric objects are used as the input coordinates of the first stage. The data (description information of geometric objects on the map) are converted to OpenGL state machine parameters. The basic idea is to convert complex geometry into a combination of the three basic elements that OpenGL can render, and to accelerate the time-consuming process of converting map symbol data into vertex data by parallelizing a multi-threaded calculation. The second stage transforms OpenGL render objects (RO) into screen coordinates, which is also a time-consuming process [43]. The transformation of OpenGL render objects (RO) is calculated using the GPU, which is much faster than a CPU. The structured map symbol data characteristics are preserved so that the map symbol data can be converted into screen coordinates in bulk, and a large number of symbol data pre-processing mechanisms can be converted in batches.

The mobile augmented reality (MAR) studied in this paper supports the general mobile GPU hardware architecture.
At the same time, we use multithreading to transfer the rendering objects (RO) of these map symbol data from the host memory to the global memory on the GPU, which greatly reduces the conversion time of OpenGL rendering objects (RO) on various smartphones [44].

3.3. The AR-GIS Rendering Engine Map Symbolization Core

3.3.1. The AR-GIS Map Symbol Drawing Model

Correct visualization of the camera display for geographic data in AR-GIS requires transformation among four coordinate systems: the camera (View Coordinate), world (World Coordinate), model (Model Coordinate), and screen pixel (Screen Coordinate) coordinate systems. These are used to depict the geometries of the added virtual shapes. Because the world coordinate system and the camera coordinates are right-handed coordinate systems without deformation, rigid body transformation can be used to convert them from coordinates under the world coordinate system to coordinates under


the camera coordinate system. This addresses the rotation and translation of a geometric object in 3D space when the object does not deform. The camera coordinates obtained via rigid body transformation and through perspective projection are transformed into coordinates under the image coordinate system [45,46] and then converted to camera display pixels depending on screen coordinate system displacement.

In the actual AR map conventional visualization operation, the main scene includes two kinds of coordinate transformation: from the world coordinate system to the screen pixel coordinate system, and its inverse transformation. In special operations, such as occlusion elimination and other scenes, several of the four coordinate systems need to be transformed into each other. The transformation relationships between the four coordinate systems are shown in Figure 8.

Figure 8. Transformation relationships among the four coordinate systems.

Based on the principle of computer vision camera imaging [47], the transformation relationship equations between the camera, world, model, and screen pixel coordinate systems [48] are shown in Equation (1). Using these four coordinate systems, a point in the world coordinate system can be transformed into a point in the pixel coordinate system. This allows the map symbol data to be drawn accurately at any location and attitude in the MAR.

$$
Z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} =
\begin{bmatrix} \frac{1}{dx} & 0 & u_0 \\ 0 & \frac{1}{dy} & v_0 \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}
\begin{bmatrix} R & t \\ 0^{T} & 1 \end{bmatrix}
\begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix}
\tag{1}
$$

Based on the above description, we describe the geographic visualization methods used to create the MAR, as well as the final mapping results, which are integrated into a mobile app (Figure 9). To enhance 3D geographic visualization, we use map documents such as SMWU (the map document format of SuperMap software) and MXD (the map document format of ArcGIS software) to build a two- and three-dimensional integrated symbol model by adding layers, which are rearranged into an image-vector-3D grid-effect map. Unified coordinate references, interactive gestures, and display projection algorithms manage the scheduling and final rendering of large amounts of data.
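Read literally, Equation (1) moves a world point through the rigid-body transform [R|t], the perspective projection with focal length f, and the pixel mapping (dx, dy, u0, v0). The function below is a direct transcription under that reading; in practice the parameter values come from camera calibration, and the name worldToPixel is illustrative.

```cpp
#include <array>

struct Pixel { double u, v; };

Pixel worldToPixel(const std::array<std::array<double, 3>, 3>& R,
                   const std::array<double, 3>& t,
                   double f, double dx, double dy, double u0, double v0,
                   double X, double Y, double Z) {
    // Camera-frame coordinates: Pc = R * Pw + t (rigid-body transform).
    double Xc = R[0][0]*X + R[0][1]*Y + R[0][2]*Z + t[0];
    double Yc = R[1][0]*X + R[1][1]*Y + R[1][2]*Z + t[1];
    double Zc = R[2][0]*X + R[2][1]*Y + R[2][2]*Z + t[2];
    // Perspective projection onto the image plane (divide by Zc),
    // then the pixel mapping with scale 1/dx, 1/dy and offset (u0, v0).
    double x = f * Xc / Zc;
    double y = f * Yc / Zc;
    return { x / dx + u0, y / dy + v0 };
}
```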


Figure 9. Visualization prototype of AR map symbols based on the computer vision camera imaging principle: (a) Original display area outdoors; (b) Visualization of map symbols in outdoor augmented reality; (c) Original display area indoors; (d) Visualization of map symbols in indoor augmented reality.

3.3.2. AR Vector Feature Triangulation Rendering

Triangulation of point sets is an important computer graphics pre-processing technique. In particular, many geometric diagrams of point sets, such as Voronoi diagrams, EMST trees, and Gabriel diagrams, are related to Delaunay triangulations. Delaunay triangulations have several important features: maximization of the minimum angle, "closest to regularized" triangulation, and uniqueness (any four points cannot be co-circled). These features can be used to plot the geometry of any shape accurately and rigorously. The relevant mathematical equation [49] is shown in Equation (2):

$$
\begin{vmatrix}
A_x & A_y & A_x^2 + A_y^2 & 1 \\
B_x & B_y & B_x^2 + B_y^2 & 1 \\
C_x & C_y & C_x^2 + C_y^2 & 1 \\
D_x & D_y & D_x^2 + D_y^2 & 1
\end{vmatrix}
=
\begin{vmatrix}
A_x - D_x & A_y - D_y & (A_x - D_x)^2 + (A_y - D_y)^2 \\
B_x - D_x & B_y - D_y & (B_x - D_x)^2 + (B_y - D_y)^2 \\
C_x - D_x & C_y - D_y & (C_x - D_x)^2 + (C_y - D_y)^2
\end{vmatrix}
> 0
\tag{2}
$$

Popular mapping engine libraries with hardware acceleration, including OpenGL and Direct3D, support drawing arbitrary polygons based on Delaunay triangulation. Figure 10 shows the two types of OpenGL geometric primitives that are usually used: one is the triangle strip primitive and the other is the triangle fan primitive. We represent the vertex set to be drawn by OpenGL by V1–V6. Based on this principle, the application of the two methods to drawing map symbols in AR-GIS is shown in Figure 11. We represent a consecutive polyline by three points (Pn−1, Pn, and Pn+1).
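For illustration, Equation (2) reduces to a single 3×3 determinant test, sketched below in C++. The point type and function name are illustrative assumptions; the predicate assumes the triangle (A, B, C) is ordered counter-clockwise, and a production implementation would use exact or adaptive-precision arithmetic to guard against floating-point error.

```cpp
struct Point { double x, y; };

// Incircle test of Equation (2): returns true when D lies strictly inside
// the circumcircle of triangle (A, B, C), i.e., the configuration violates
// the Delaunay criterion and the shared edge should be flipped.
bool inCircumcircle(const Point& a, const Point& b,
                    const Point& c, const Point& d) {
    // Rows of the 3x3 determinant on the right-hand side of Equation (2).
    double ax = a.x - d.x, ay = a.y - d.y;
    double bx = b.x - d.x, by = b.y - d.y;
    double cx = c.x - d.x, cy = c.y - d.y;
    double a2 = ax * ax + ay * ay;
    double b2 = bx * bx + by * by;
    double c2 = cx * cx + cy * cy;
    // Cofactor expansion along the first row.
    double det = ax * (by * c2 - b2 * cy)
               - ay * (bx * c2 - b2 * cx)
               + a2 * (bx * cy - by * cx);
    return det > 0.0;
}
```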


Figure 10. Two types of OpenGL geometric primitives: (a) triangle strip; (b) triangle fan.


Figure 11. The application of two types of geometric primitives to GIS line symbol rendering: (a) wide line segment constructed by triangular strip; (b) smooth line node constructed by triangular fan.

When drawing map symbols, the system reads the current location and direction based on the built-in inertial augmented reality odometer. It then prepares to calculate the map symbol data that represent the geographical extent of the specified viewport, as provided by a series of geographic point sequences. A distinction is drawn regarding realistic camera pictures to indicate representation of cartographic information regarding the current reality space. This expression systematically calculates the virtual position of the map symbol in the current viewport relative to the fictitious origin within the camera virtual space. The corresponding graphic matrix rotates, transforms, shifts, and scales the coordinate collection and orientation of this map symbol relative to the current position, thus allowing the map symbol to be viewed from the camera angle eventually. Within each video frame in the MAR camera viewport (the calculation time requirement is limited to 35 ms), a 3D transformation of the symbolized map data must be completed so that the map symbol remains continuously within the observer's view for real-time responses. This supports permanent position changes. Therefore, the system requires that large data mass element units be drawn in quick batches that are split into a series of triangles via triangulation, as shown in Figure 12a,b. These are characterized by rapid transformation using parallelization algorithms on GPUs with thousands of computing cores.
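As a minimal sketch of the Figure 11a construction, the following expands a polyline into a triangle-strip vertex buffer by offsetting each vertex to both sides of the line. The triangle-fan joints of Figure 11b and mitre handling are omitted, and all names and the half-width parameter are illustrative assumptions rather than engine API.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

struct Pt { float x, y; };

// Expand a polyline (..., Pn-1, Pn, Pn+1, ...) into paired strip vertices;
// the result is intended for rendering with GL_TRIANGLE_STRIP.
std::vector<Pt> polylineToTriangleStrip(const std::vector<Pt>& line,
                                        float halfWidth) {
    std::vector<Pt> strip;
    if (line.size() < 2) return strip;
    strip.reserve(line.size() * 2);
    for (std::size_t i = 0; i < line.size(); ++i) {
        // Local direction around vertex i (one-sided at the endpoints).
        const Pt& a = line[i == 0 ? 0 : i - 1];
        const Pt& b = line[i + 1 < line.size() ? i + 1 : i];
        float dx = b.x - a.x, dy = b.y - a.y;
        float len = std::sqrt(dx * dx + dy * dy);
        if (len == 0.0f) continue;
        // Unit normal pushes one vertex to each side of the centerline.
        float nx = -dy / len, ny = dx / len;
        strip.push_back({line[i].x + nx * halfWidth,
                         line[i].y + ny * halfWidth});
        strip.push_back({line[i].x - nx * halfWidth,
                         line[i].y - ny * halfWidth});
    }
    return strip;
}
```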



Figure 12. Triangularized vector feature rendering: (a) a triangularized route and (b) a triangularized building area.

3.3.3. AR Stereoscopic Masking Culling

Displaying spatial objects in 3D space often causes rendering problems due to complex spatial locations and relationships [50]. In all MAR visualization components, masked display is an issue that must be solved in the AR application scene. Virtual objects are always placed in the foreground of incoming video images, but the user is able to view the observed area without constraints depending on the real-time spatial location relationship between the user's camera location and the AR spatial data. As a result, the real location of the virtual visual model is sometimes behind or in front of the real building. If this masking relationship is not analyzed correctly, rendering does not produce a reasonable image. The principle of OpenGL depth testing and masking [51] is shown in Figure 13.

Figure 13. A mask test performed using an OpenGL depth buffer (A indicates visible geometry; B indicates testing with OpenGL; and C indicates testing with other software methods).

In this paper, masking technology and virtual geographic data such as depth are used to mask real objects. This simplifies the 3D mask models of textures and textured buildings and mixes them into the visual model via alpha channels. If there is no occlusion between the virtual visual model and the 3D mask in the user's camera viewport, these 3D masks do not affect the visualization results. In contrast, if the 3D mask is placed in front of a visual model, the mask visually removes the obscured part and displays the backward video image through it. This process is shown in Figure 14. Take a user's on-site inspection in urban planning as an example: after accurate augmented reality positioning and orientation, the system determines the actual display area of the GIS 3D data in the video and accurately matches 3D facilities (such as the well cover in this example) to realistic objects through depth and masking technology. After matching, the user can view the underground part of the actual object by parameterizing the depth and masking, thereby obtaining the invisible properties of the original space entity.
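A minimal sketch of this two-pass occlusion technique with standard OpenGL ES state follows. The mask is drawn with color writes disabled so that it only populates the depth buffer; the two draw functions are hypothetical placeholders for the engine's mask geometry and symbol batches, not real AUGL calls.

```cpp
#include <GLES2/gl2.h>

// Hypothetical engine hooks, assumed for this sketch.
void drawMaskGeometry();        // simplified 3D mask of the real building
void drawVirtualMapSymbols();   // batched AR map symbols

void drawFrameWithOcclusionMask() {
    glEnable(GL_DEPTH_TEST);

    // Pass 1: write the mask into the depth buffer only; with color writes
    // disabled, the camera video stays visible where the mask lies.
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
    glDepthMask(GL_TRUE);
    drawMaskGeometry();

    // Pass 2: draw virtual symbols with depth testing on; fragments behind
    // the mask fail the test and are culled, so the real object in the
    // video appears to occlude the virtual model.
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    drawVirtualMapSymbols();
}
```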

Figure 14. Correct visualization of virtual geographic data based on depth and masking: (a) no occlusion; (b) minimal occlusion; (c) partial occlusion; (d) almost all occlusion.

4. Experiments and Results

4.1. Experiments Design

In this study, we selected the full feature vector map data for a performance test to simulate the way users typically interact with the AR-GIS application. The test assumes that the AR-GIS application has typical user behavior. In the experimental method, to verify the high performance, practicality, and effectiveness of our approach, we implemented the rendering scheme introduced in Section 2 using C++ and the OpenGL library with ARCore on Android and ARKit on iOS. The study used national map data (the scale of Chinese data is less than 1:1 million and the scale of Beijing data is more than 1:1 million) with a total of 268 layers. The data source was OpenStreetMap. The total data set size was 30.25 GB. GIS mapping uses various map symbols. The basic types include dot symbols (solid dot, symbol point), line symbols (solid single lines, solid color wide lines, symbol lines), area symbols (solid fill polygon, symbol polygon), annotations, text, grids, CAD symbols, 3D symbols, and animated symbols. Experiments were conducted on these 13 common types of map symbols.

4.2. Time Test of Single AR Map Visualization

To verify the adaptability and robustness of our rendering algorithm on various actual data types, we compare the time required by each algorithm's subprocess (Table 1) and test validation under the current AR-GIS scenario geographic range load. The average time spent on map queries in the AR camera viewport is less than 18 ms, the average time required for the map symbols to draw to the texture cache is less than 30 ms, and the average time for the texture cache to perform interactive drawing to the viewport screen is less than 24 ms. The total average time is approximately 60 ms. The map symbol drawing method based on the dynamic distributed rendering algorithm designed in this study consumes little time and can complete the drawing tasks efficiently.

Table 1. Subprocess time test of various rendering algorithms.

| Map Symbol Type | AR Camera Viewport Map Query | Map Symbols Drawn to the Texture Cache | Interactive Rendering of Result | Total Rendering Time | Lags |
|---|---|---|---|---|---|
| Points (500 features) | 15 ms | 25 ms | 14 ms | 54 ms | Running smoothly |
| Lines (500 features) | 16 ms | 21 ms | 13 ms | 50 ms | Running smoothly |
| Polygons (500 features) | 18 ms | 24 ms | 10 ms | 52 ms | Running smoothly |
| Annotations (200 features) | 14 ms | 29 ms | 15 ms | 58 ms | No obvious lag |

4.3. Comparison Test of AR Map Visualization

By applying the adaptive decomposition method to the camera viewport, AUGL can model the camera viewport drawing task dynamically based on the amount of rendering data in the geographic range of the AR map. It generates a list of drawing tasks that covers all feature types in the AR map and renders the symbols in real time based on spatial-index queries and spatial relationship judgment. For the four symbol types (points, lines, polygons, and annotations), we compare the operating efficiencies (time consumed in ms) of the three drawing algorithms (Table 2). For all four symbol types, the average drawing time needed by the dynamic distributed rendering algorithm proposed in this paper is less than 60 ms. The average drawing time needed by the ArcGIS Runtime AR drawing algorithm exceeds 80 ms, and the average drawing time needed by the Mapbox AR drawing algorithm exceeds 100 ms. The AR map symbol drawing system proposed in this paper is significantly faster than the other two algorithms.
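To illustrate how such a task list might be assembled, the sketch below groups the features returned by a viewport spatial-index query into batched drawing tasks. The SpatialIndex interface, DrawTask fields, and batch threshold are assumptions for illustration, not AUGL's published API; the grouping also assumes the query returns features ordered by symbol type.

```cpp
#include <vector>

struct Envelope { double minX, minY, maxX, maxY; };
struct Feature  { int id; int symbolType; };  // point/line/polygon/annotation

struct DrawTask {
    int symbolType;
    std::vector<Feature> batch;   // features rendered in one GPU submission
};

class SpatialIndex {
public:
    virtual std::vector<Feature> query(const Envelope& extent) const = 0;
    virtual ~SpatialIndex() = default;
};

std::vector<DrawTask> buildTaskList(const SpatialIndex& index,
                                    const Envelope& viewportExtent,
                                    std::size_t maxBatchSize) {
    std::vector<DrawTask> tasks;
    for (const Feature& f : index.query(viewportExtent)) {
        // Start a new task when the symbol type changes or the batch is
        // full, so each task maps onto one batched draw call.
        if (tasks.empty() || tasks.back().symbolType != f.symbolType ||
            tasks.back().batch.size() >= maxBatchSize) {
            tasks.push_back({f.symbolType, {}});
        }
        tasks.back().batch.push_back(f);
    }
    return tasks;
}
```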

Table 2. Comparison of the efficiencies of three rendering engine drawing algorithms.

| Map Symbol Type | AUGL Engine | ArcGIS Runtime AR Rendering | Mapbox AR Rendering | Battery Power Consumption Ratio (AUGL/ArcGIS/Mapbox) |
|---|---|---|---|---|
| Points (500 features) | 37 ms | 55 ms | 89 ms | 1:1.12:1.17 |
| Lines (500 features) | 45 ms | 53 ms | 117 ms | 1:1.15:1.13 |
| Polygons (500 features) | 41 ms | 62 ms | 106 ms | 1:1.1:1.16 |
| Annotations (200 features) | 48 ms | 67 ms | 109 ms | 1:1.13:1.15 |

To compare the performance characteristics of the AUGL engine and the other two algorithms with regard to battery power consumption, we shut down all other applications on the device and allowed the application to draw points, lines, polygons, and annotations. We recorded the progress of the three rendering algorithms from 100% battery to 90%. The AUGL engine algorithm reduces frequent CPU and GPU resource consumption and therefore consumes less battery power.

4.4. Consistency Test of AR Visualization Cross-Platform Algorithm

In addition, we compared the algorithmic consistency of the three drawing engines across platforms. OpenGL supports 2D and 3D integrated vector and raster graphics rendering, as well as cross-language and cross-platform graphics processing unit (GPU) interactive processing. In this paper, AR map symbol visualization is designed on the basis of a multi-language OpenGL graphics library. It supports map symbolization algorithms and runs on Android, iOS, web browsers, and other platforms through cross-language graphics rendering. It also supports cross-language hardware-accelerated rendering, which is clearly reflected in the comparison experiment on cross-platform algorithm consistency. In the experiment, we compared the consistency performance of the smoothed line connection, smoothed line endpoint, annotation auto-avoidance display, real-time ground display, and occlusion culling display [52,53] of the three AR rendering engine algorithms (Table 3). Our algorithm supports the various symbol types in the table across platforms and provides smooth display. However, Mapbox does not support line endpoint and line junction consistency, and the ArcGIS Runtime does not support flow annotation or annotation auto-avoidance consistency. As shown in Figure 15, the AUGL linear symbol algorithm is presented consistently in three cross-platform environments. Figure 16 shows the AUGL polygon symbol algorithm presented consistently in three cross-platform environments.
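This cross-platform consistency rests on each platform implementing the same symbol-drawing contract on top of OpenGL, OpenGL ES, or WebGL. A minimal sketch of what such a unified interface could look like is given below; all names here are illustrative assumptions rather than the engine's published API.

```cpp
#include <memory>
#include <vector>

struct LineSymbolStyle {
    float widthPx;          // line width in pixels
    bool  smoothJoins;      // triangle-fan joints, as in Figure 11b
    bool  smoothEndpoints;  // rounded line caps
};

// One rendering contract; each platform backend (OpenGL ES on Android/iOS,
// WebGL for WebAR) implements the same behavior behind it, which is what
// keeps the rendered style consistent across platforms.
class ISymbolRenderer {
public:
    virtual void drawLine(const std::vector<float>& xyPairs,
                          const LineSymbolStyle& style) = 0;
    virtual ~ISymbolRenderer() = default;
};

// Per-platform factory selecting the appropriate backend at build time.
std::unique_ptr<ISymbolRenderer> createRenderer();
```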

Table 3. Comparison of the consistency performance of three rendering engine drawing algorithms.

| AR-GIS Engine | Smoothed Line Connection | Smoothed Line Endpoint | Annotation Auto Avoidance Display | Real-Time Ground Display | Occlusion Culling Display |
|---|---|---|---|---|---|
| Android (ArcGIS) | Not supported | Not supported | Not supported | Not supported | Partially supported |
| iOS (ArcGIS) | Not supported | Not supported | Not supported | Not supported | Partially supported |
| WebAR (ArcGIS) | Not supported | Not supported | Not supported | Not supported | Not supported |
| Android (Mapbox) | Supported | Supported | Supported | Not supported | Partially supported |
| iOS (Mapbox) | Not supported | Not supported | Not supported | Not supported | Partially supported |
| WebAR (Mapbox) | Not supported | Not supported | Not supported | Not supported | Not supported |
| Android (AUGL) | Supported | Supported | Supported | Supported | Supported |
| iOS (AUGL) | Supported | Supported | Supported | Supported | Partially supported |
| WebAR (AUGL) | Supported | Supported | Supported | Supported | Partially supported |

Finally, we compared the visualization of route symbols produced using the AUGL, ArcGIS, and Mapbox AR rendering engines to support AR navigation. The method proposed in this paper is based on cross-platform OpenGL matrix conversion technology. It can identify ground depth information in real time, supports drawing of AR guide route parameters in real time, and draws a guiding route that remains consistently attached to the ground during AR navigation. This visualization approach suits the human visual system better and improves the real-time response capabilities of the AR map system and environment [31]. These results are shown in Figure 17, where we visually compare the visualization results of the three AR rendering engines. In addition, in order to verify the usability and visualization effect of the AUGL engine on all kinds of GIS data, we tested the AR visualization of common GIS data in the corresponding environments and verified it from the perspective of user experience. Figure 18 summarizes the AR visualization test results of four map data types.


Figure 15. The linear symbol algorithms are rendered consistently in three cross-platform environments.

Figure 16. The polygon symbol algorithms are rendered consistently in three cross-platform environments.



Figure 17. Comparison with the visualization of AR route navigation: (a) AR route navigation (Mapbox); (b) AR route navigation (ArcGIS); (c) AR route navigation (AUGL).

Figure 18. AR visualization test results of four map data types: (a) 2D Vector; (b) BIM; (c) Navigate route; (d) Terrain; (e) 3D models; (f) 3D Vector.

Comparisons show that the map symbol rendering framework, based on the dynamic distributed rendering algorithm and combined with the cross-platform OpenGL engine, consumes little time and can achieve efficient map symbol rendering in the field of mobile augmented reality.

5. Conclusions and Further Work

This paper proposes a cross-platform rendering framework, the AUGL engine, for rendering high-performance map symbols in mobile augmented reality implementations. To resolve difficulties with efficient, consistent rendering of mobile augmented reality map symbols across different platforms, the AUGL engine utilizes parallel graphics acceleration hardware and innovative AR rendering algorithms to achieve high-performance rendering. A stylized map symbol rendering correction algorithm is used to achieve a consistent map symbol rendering style across different platforms. Based on the AUGL engine, we developed AR map applications consisting of AR service components and mobile prototype components. The experiments in Section 4 demonstrated the following: (i) the consistent, cross-platform AR map symbol rendering method delivers consistent performance across different operating systems and

better results; (ii) the AUGL engine, with a variety of innovative rendering algorithms, improves performance by an average of more than two times compared to other AR map engines and improves vector polygon symbol performance by nearly three times (taking Mapbox AR as an example); (iii) the GPU pre-cache rendering method reduces battery power consumption; and (iv) asynchronous interactive rendering reduces the AR real-time response overload and overall wait time.

In the future, we plan to experiment with a more flexible, incremental, cross-platform adaptive decomposition approach for large, long-term AR map visualization tasks at the city level [3,54–56]. In addition, we plan to introduce a technology that supports cache fault tolerance in mobile environments and allows devices to connect intermittently and make better use of distributed cache resources. In MAR, beyond the two natural interaction methods of operating the user's screen and moving the mobile camera, we will study more ways in which users can actively choose constraints for map use and exploration, such as AR glasses. For example, the user could manipulate the map through AR glasses combined with physical touch constraints. With the development of AI technology and spatial computing technology, computer vision scene understanding is also an important topic for future work. As MAR software and hardware develop, the accuracy of visual-inertial navigation positioning continues to improve. The use of AR self-positioning technology combined with GIS semantic understanding to study indoor tools, such as indoor surveying and mapping or interior design, is expected to achieve better results. For example, a user carrying a mobile phone while walking around a museum could obtain a 3D digital map, not just point cloud data. In the future, we plan to conduct research in this area.

Author Contributions: Conceptualization, Kejia Huang and Chenliang Wang; methodology, Kejia Huang, Chenliang Wang and Shaohua Wang; software, Kejia Huang, Runying Liu, Guoxiong Chen and Xianglong Li; validation, Kejia Huang, Runying Liu and Xianglong Li; Writing—Original draft preparation, Kejia Huang, Chenliang Wang and Shaohua Wang; Writing—Review and Editing, Kejia Huang, Chenliang Wang and Shaohua Wang; visualization, Kejia Huang, Runying Liu and Xianglong Li; supervision, Kejia Huang; project administration, Kejia Huang. All authors have read and agreed to the published version of the manuscript. Funding: This work was supported by the National Key Research and Development Program of China (No. 2017YFA0604703), the National Natural Science Foundation of China under Grant (41571439), a grant from State Key Laboratory of Resources and Environmental Information System, the Key Project of Natural Science Research of Anhui Provincial Department of Education under Grant (KJ2020A0720). Institutional Review Board Statement: Not applicable. Informed Consent Statement: Not applicable. Data Availability Statement: Not applicable. Acknowledgments: The authors express thanks to anonymous reviewers for their constructive comments and advice. Conflicts of Interest: The authors declare no conflict of interest.

References
1. Azuma, R.T. A Survey of Augmented Reality. Presence Teleoperators Virtual Environ. 1997, 6, 355–385. [CrossRef]
2. Kim, W.; Kerle, N.; Gerke, M. Mobile augmented reality in support of building damage and safety assessment. Nat. Hazards Earth Syst. Sci. 2016, 16, 287–298. [CrossRef]
3. Zollmann, S.; Langlotz, T.; Grasset, R.; Lo, W.H.; Mori, S.; Regenbrecht, H. Visualization Techniques in Augmented Reality: A Taxonomy, Methods and Patterns. IEEE Trans. Vis. Comput. Graph. 2020, 27, 3808–3825. [CrossRef] [PubMed]
4. Kilimann, J.-E.; Heitkamp, D.; Lensing, P. An Augmented Reality Application for Mobile Visualization of GIS-Referenced Landscape Planning Projects. In Proceedings of the 17th International Conference on Virtual-Reality Continuum and Its Applications in Industry, Brisbane, Australia, 14–16 November 2019; ACM: New York, NY, USA, 2019; pp. 1–5.

5. Papadopoulou, E.-E.; Kasapakis, V.; Vasilakos, C.; Papakonstantinou, A.; Zouros, N.; Chroni, A.; Soulakellis, N. Geovisualization of the Excavation Process in the Lesvos Petrified Forest, Greece Using Augmented Reality. ISPRS Int. J. Geo-Inf. 2020, 9, 374. [CrossRef]
6. Kamat, V.; El-Tawil, S. Evaluation of Augmented Reality for Rapid Assessment of Earthquake-Induced Building Damage. J. Comput. Civ. Eng. 2007, 21, 303–310. [CrossRef]
7. Siekański, P.; Michoński, J.; Bunsch, E.; Sitnik, R. CATCHA: Real-Time Camera Tracking Method for Augmented Reality Applications in Cultural Heritage Interiors. ISPRS Int. J. Geo-Inf. 2018, 7, 479. [CrossRef]
8. Panou, C.; Ragia, L.; Dimelli, D.; Mania, K. An Architecture for Mobile Outdoors Augmented Reality for Cultural Heritage. ISPRS Int. J. Geo-Inf. 2018, 7, 463. [CrossRef]
9. Rapprich, V.; Lisec, M.; Fiferna, P.; Zavada, P. Application of Modern Technologies in Popularization of the Czech Volcanic Geoheritage. Geoheritage 2016, 9, 413–420. [CrossRef]
10. Werner, P.A. Review of Implementation of Augmented Reality into the Georeferenced Analogue and Digital Maps and Images. Information 2018, 10, 12. [CrossRef]
11. Ortega, S.; Wendel, J.; Santana, J.M.; Murshed, S.M.; Boates, I.; Trujillo, A.; Nichersu, A.; Suárez, J.P. Making the Invisible Visible—Strategies for Visualizing Underground Infrastructures in Immersive Environments. ISPRS Int. J. Geo-Inf. 2019, 8, 152. [CrossRef]
12. Haynes, P.; Hehl-Lange, S.; Lange, E. Mobile Augmented Reality for Flood Visualisation. Environ. Model. Softw. 2018, 109, 380–389. [CrossRef]
13. Schmid, F.; Frommberger, L.; Cai, C.; Freksa, C. What You See is What You Map: Geometry-Preserving Micro-Mapping for Smaller Geographic Objects with mapIT. In Lecture Notes in Geoinformation and Cartography; Springer: Cham, Switzerland, 2013; pp. 3–19. ISBN 9783319006147.
14. King, G.R.; Piekarski, W.; Thomas, B.H. ARVino—Outdoor augmented reality visualisation of viticulture GIS data. In Proceedings of the Fourth IEEE and ACM International Symposium on Mixed and Augmented Reality, Vienna, Austria, 5–8 October 2005; IEEE: New York, NY, USA, 2005; pp. 52–55.
15. Liarokapis, F.; Greatbatch, I.; Mountain, D.; Gunesh, A.; Brujic-Okretic, V.; Raper, J. Mobile Augmented Reality Techniques for GeoVisualisation. In Proceedings of the Ninth International Conference on Information Visualisation (IV'05), London, UK, 6–8 July 2005; IEEE: New York, NY, USA, 2005; pp. 745–751.
16. Zaher, M.; Greenwood, D.; Marzouk, M. Mobile augmented reality applications for construction projects. Constr. Innov. 2018, 18, 152–166. [CrossRef]
17. Behzadan, A.H.; Dong, S.; Kamat, V. Augmented reality visualization: A review of civil infrastructure system applications. Adv. Eng. Inform. 2015, 29, 252–267. [CrossRef]
18. Meriaux, A.; Wittner, E.; Hansen, R.; Waeny, T. VR and AR in ArcGIS: An Introduction. In Proceedings of the 2019 ESRI User Conference-Technical Workshops, San Diego, CA, USA, 8–12 July 2019.
19. Palmarini, R.; Erkoyuncu, J.A.; Roy, R.; Torabmostaedi, H. A systematic review of augmented reality applications in maintenance. Robot. Comput. Manuf. 2018, 49, 215–228. [CrossRef]
20. Kim, K.; Billinghurst, M.; Bruder, G.; Duh, H.B.-L.; Welch, G.F. Revisiting Trends in Augmented Reality Research: A Review of the 2nd Decade of ISMAR (2008–2017). IEEE Trans. Vis. Comput. Graph. 2018, 24, 2947–2962. [CrossRef]
21. Brum, M.R.; Rieder, R. Virtual Reality Applications for Smart Cities in Health: A Systematic Review. In Proceedings of the 2015 XVII Symposium on Virtual and Augmented Reality, Sao Paulo, Brazil, 25–28 May 2015; IEEE: New York, NY, USA, 2015; pp. 154–159.
22. Keil, J.; Korte, A.; Ratmer, A.; Edler, D.; Dickmann, F. Augmented Reality (AR) and Spatial Cognition: Effects of Holographic Grids on Distance Estimation and Location Memory in a 3D Indoor Scenario. PFG J. Photogramm. Remote Sens. Geoinf. Sci. 2020, 88, 165–172. [CrossRef]
23. Collins, J.; Regenbrecht, H.; Langlotz, T. Visual Coherence in Mixed Reality: A Systematic Enquiry. Presence Teleoperators Virtual Environ. 2017, 26, 16–41. [CrossRef]
24. Kruijff, E.; Swan, J.E.; Feiner, S. Perceptual issues in augmented reality revisited. In Proceedings of the 2010 IEEE International Symposium on Mixed and Augmented Reality, Seoul, Korea, 13–16 October 2010; IEEE: New York, NY, USA, 2010; pp. 3–12.
25. Bowman, D.A.; North, C.; Chen, J.; Polys, N.F.; Pyla, P.S.; Yilmaz, U. Information-rich virtual environments: Theory, tools, and research agenda. In Proceedings of the ACM Symposium on Virtual Reality Software and Technology, Osaka, Japan, 1–3 October 2003; ACM: New York, NY, USA, 2003; Volume F1290, pp. 81–90.
26. Bowman, D.; Hodges, L.F. Formalizing the Design, Evaluation, and Application of Interaction Techniques for Immersive Virtual Environments. J. Vis. Lang. Comput. 1999, 10, 37–53. [CrossRef]
27. Elmqvist, N.; Tsigas, P. A Taxonomy of 3D Occlusion Management for Visualization. IEEE Trans. Vis. Comput. Graph. 2008, 14, 1095–1109. [CrossRef]
28. Willett, W.; Jansen, Y.; Dragicevic, P. Embedded Data Representations. IEEE Trans. Vis. Comput. Graph. 2016, 23, 461–470. [CrossRef] [PubMed]
29. Rosser, J.; Morley, J.; Smith, G. Modelling of Building Interiors with Mobile Phone Sensor Data. ISPRS Int. J. Geo-Inf. 2015, 4, 989–1012. [CrossRef]

30. Azuma, R.; Baillot, Y.; Behringer, R.; Feiner, S.; Julier, S.; MacIntyre, B. Recent advances in augmented reality. IEEE Eng. Med. Boil. Mag. 2001, 21, 34–47. [CrossRef]
31. Narzt, W.; Pomberger, G.; Ferscha, A.; Kolb, D.; Müller, R.; Wieghardt, J.; Hörtner, H.; Lindinger, C. Augmented reality navigation systems. Univers. Access Inf. Soc. 2005, 4, 177–187. [CrossRef]
32. Devaux, A.; Hoarau, C.; Brédif, M.; Christophe, S. 3D Urban Geovisualization: In Situ Augmented and Mixed Reality Experiments. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2018, 4, 41–48. [CrossRef]
33. Banville, S.; Diggelen, F.V. Precise GNSS for Everyone: Precise Positioning Using Raw GPS Measurements from Android Smartphones. GPS World 2016, 27, 43–48.
34. Wanninger, L.; Heßelbarth, A. GNSS code and carrier phase observations of a Huawei P30 smartphone: Quality assessment and centimeter-accurate positioning. GPS Solut. 2020, 24, 1–9. [CrossRef]
35. Wu, C.; Liu, J.; Li, Z. Research on National 1:50,000 Topographic Cartography Data Organization. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2014, 4, 83–89. [CrossRef]
36. Netek, R.; Brus, J.; Tomecka, O. Performance Testing on Marker Clustering and Heatmap Visualization Techniques: A Comparative Study on JavaScript Mapping Libraries. ISPRS Int. J. Geo-Inf. 2019, 8, 348. [CrossRef]
37. Guo, M.; Guan, Q.; Xie, Z.; Wu, L.; Luo, X.; Huang, Y. A spatially adaptive decomposition approach for parallel vector data visualization of polylines and polygons. Int. J. Geogr. Inf. Sci. 2015, 29, 1419–1440. [CrossRef]
38. Hertel, S.; Hormann, K.; Westermann, R. A Hybrid GPU Rendering Pipeline for Alias-Free Hard Shadows. In Eurographics 2009—Areas Papers; Ebert, D., Krüger, J., Eds.; The Eurographics Association: Goslar, Germany, 2009.
39. Li, S.; Wang, S.; Guan, Y.; Xie, Z.; Huang, K.; Wen, M.; Zhou, L. A High-performance Cross-platform Map Rendering Engine for Mobile Geographic Information System (GIS). ISPRS Int. J. Geo-Inf. 2019, 8, 427. [CrossRef]
40. Hu, W.; Li, L.; Wu, C.; Zhang, H.; Zhu, H. A parallel method for accelerating visualization and interactivity for vector tiles. PLoS ONE 2019, 14, e0221075. [CrossRef]
41. Guo, M.; Han, C.; Guan, Q.; Huang, Y.; Xie, Z. A universal parallel scheduling approach to polyline and polygon vector data buffer analysis on conventional GIS platforms. Trans. GIS 2020, 24, 1630–1654. [CrossRef]
42. Zhang, J.; Ye, Z.; Zheng, K. A Parallel Computing Approach to Spatial Neighboring Analysis of Large Amounts of Terrain Data Using Spark. Sensors 2021, 21, 365. [CrossRef]
43. Guo, M.; Huang, Y.; Guan, Q.; Xie, Z.; Wu, L. An efficient data organization and scheduling strategy for accelerating large vector data rendering. Trans. GIS 2017, 21, 1217–1236. [CrossRef]
44. Gao, B.; Dellandréa, E.; Chen, L. Accelerated dictionary learning with GPU/Multi-core CPU and its application to music classification. In Proceedings of the 2012 IEEE 11th International Conference on Signal Processing, Beijing, China, 21–25 October 2012; IEEE: New York, NY, USA, 2012; pp. 1188–1193.
45. Milosavljević, A.; Dimitrijević, A.; Rančić, D. GIS-augmented video surveillance. Int. J. Geogr. Inf. Sci. 2010, 24, 1415–1433. [CrossRef]
46. Milosavljević, A.; Rančić, D.; Dimitrijević, A.; Predić, B.; Mihajlović, V. Integration of GIS and video surveillance. Int. J. Geogr. Inf. Sci. 2016, 1–19. [CrossRef]
47. Zhang, Z. Camera Calibration. In Computer Vision; Springer: Boston, MA, USA, 2014; pp. 76–77.
48. Zhang, Z. A flexible new technique for camera calibration. IEEE Trans. Pattern Anal. Mach. Intell. 2000, 22, 1330–1334. [CrossRef]
49. Chew, L.P. Constrained Delaunay triangulations. Algorithmica 1989, 4, 97–108. [CrossRef]
50. She, J.; Tan, X.; Guo, X.; Liu, J. Rendering 2D Lines on 3D Terrain Model with Optimization in Visual Quality and Running Performance. Trans. GIS 2016, 21, 169–185. [CrossRef]
51. Staneker, D. An Occlusion Culling Toolkit for OpenSG PLUS. In OpenSG Symposium; Reiners, D., Ed.; The Eurographics Association: Goslar, Germany, 2003.
52. Dalenoort, G.J. Towards a general theory of representation. Psychol. Res. 1990, 52, 229–237. [CrossRef]
53. Wu, M.; Zheng, P.; Lu, G.; Zhu, A.-X. Chain-based polyline tessellation algorithm for cartographic rendering. Cartogr. Geogr. Inf. Sci. 2016, 44, 491–506. [CrossRef]
54. Chatzopoulos, D.; Bermejo, C.; Huang, Z.; Hui, P. Mobile Augmented Reality Survey: From Where We Are to Where We Go. IEEE Access 2017, 5, 6917–6950. [CrossRef]
55. Wang, S.; Zhong, Y.; Wang, E. An integrated GIS platform architecture for spatiotemporal big data. Futur. Gener. Comput. Syst. 2018, 94, 160–172. [CrossRef]
56. Heitzler, M.; Lam, J.C.; Hackl, J.; Adey, B.T.; Hurni, L. GPU-Accelerated Rendering Methods to Visually Analyze Large-Scale Disaster Simulation Data. J. Geovisualization Spat. Anal. 2017, 1, 3. [CrossRef]