
International Journal of Geo-Information

Article

Generating Orthorectified Multi-Perspective 2.5D Maps to Facilitate Web GIS-Based Visualization and Exploitation of Massive 3D City Models

Jianming Liang 1,2, Jianhua Gong 1,2,*, Jin Liu 1,3,4, Yuling Zou 2, Jinming Zhang 1,5, Jun Sun 1,2 and Shuisen Chen 6

1 State Key Laboratory of Remote Sensing Science, Institute of Remote Sensing and Digital Earth, Chinese Academy of Sciences, Beijing 100101, China; [email protected] (J.L.); [email protected] (J.L.); [email protected] (J.Z.); [email protected] (J.S.)
2 Zhejiang-CAS Application Center for Geoinformatics, Zhejiang 314100, China; [email protected]
3 University of Chinese Academy of Sciences, Beijing 100049, China
4 National Marine Data and Information Service, Tianjin 300171, China
5 Institute of Geospatial Information, Information Engineering University, Zhengzhou 450052, China
6 Guangdong Open Laboratory of Geospatial Information Technology and Application, Guangzhou Institute of Geography, Guangzhou 510070, China; [email protected]
* Correspondence: [email protected]; Tel.: +86-10-6484-9299

Academic Editors: Bert Veenendaal, Maria Antonia Brovelli, Serena Coetzee, Peter Mooney and Wolfgang Kainz Received: 5 August 2016; Accepted: 7 November 2016; Published: 11 November 2016

Abstract: A 2.5D map is a convenient and efficient approach to exploiting a massive three-dimensional (3D) city model in web GIS. With the rapid development of oblique airborne photogrammetry and photo-based 3D reconstruction, 3D city models are becoming more and more accessible. 3D Geographic Information System (GIS) technology can support the interactive visualization of massive 3D city models on various platforms and devices. However, the value and accessibility of existing 3D city models can be augmented by integrating them into web-based two-dimensional (2D) GIS applications. In this paper, we present a step-by-step workflow for generating orthorectified oblique images (2.5D maps) from massive 3D city models. The proposed framework can produce 2.5D maps from an arbitrary perspective, defined by the elevation angle and azimuth angle of a virtual orthographic camera. We demonstrate how 2.5D maps can benefit web-based visualization and exploitation of massive 3D city models. We conclude that a 2.5D map is a compact data representation optimized for web data streaming of 3D city models and that geometric analysis of buildings can be effectively conducted on 2.5D maps.

Keywords: massive 3D city model; 2D GIS; web GIS; 2.5D map; oblique airborne photogrammetry

1. Introduction

Urban landscapes are highly diversified in both the horizontal and vertical dimensions [1], with the presence of water bodies, plants, buildings, and other artificial structures. The inadequacy of satellite imagery and digital elevation models (DEMs) for representing the vertical dimension led to the widespread use of 3D city models [2] in urban GIS, which typically takes the form of a virtual geographic environment [3]. Although 3D GIS is already widely used for interactive exploration of 3D city models, its use in a web context faces the following technical challenges: (1) Internet data transfer limits. For example, it takes 17 min to download 1 GB of 3D model data with an internet transfer rate of 1 MB/s. This means that users will likely experience data latency under limited bandwidth. (2) Cross-platform hardware and software compatibility. The devices and platforms that host a 3D GIS application need to meet

ISPRS Int. J. Geo-Inf. 2016, 5, 212; doi:10.3390/ijgi5110212

the minimum hardware and software requirements for graphics processing unit (GPU) support. Mobile devices do not always meet the computational requirements of a 3D GIS [4]. Deployment of a 3D GIS across various platforms, such as Linux, Windows, Android, iOS, and Mac, requires extra development cost and effort, since 3D graphics APIs rarely support direct cross-platform migration. Additionally, web-based 2D GIS remains the dominant paradigm in operational business applications. Many business GIS applications, such as the hotel booking website Agoda (http://www.agoda.com/), rely solely on web-based 2D GIS for operational functionality. 2.5D maps, also known as perspective maps or bird's-eye view maps, are represented in an oblique perspective. 2.5D cartography dates back to ancient times (Figure 1). Modern 2.5D cartography, known as pixel art in some contexts, integrates computer-aided design (CAD)-based photorealistic or artistic renderings into maps. 2.5D maps can facilitate visualization and exploitation of 3D city models in web-based 2D GIS applications. The 2.5D maps provider Edushi produces pixel art-style maps from CAD models and delivers these maps to a browser-based web GIS (Figure 2A), which can run on various devices and platforms. These CAD-based pixel art-style maps are also widely used in campus map GIS (Figure 2B) (http://maps.colostate.edu/, https://www.udayton.edu/map/, http://map.wfu.edu/, https://www.asu.edu/map/interactive/), which makes high-quality 3D scenes available to users under conditions of limited bandwidth and computational power. If large-scale 3D city models can be used in web GIS as shown in those examples (Figure 2A,B), they will attract a wider audience and benefit a wider scope of applications. Therefore, 2.5D maps have the potential to augment the value and extend the use of existing 3D city models.

Figure 1. Hand-drawn perspective map by Claes Janszoon Visscher (http://en.wikipedia.org/wiki/Pictorial_maps).


Figure 2. Application of 2.5D maps in (A) a web-based urban GIS (http://sz.edushi.com/); (B) an online campus GIS (http://maps.colostate.edu/) for efficient streaming of 3D city models.

3D city models can be derived from a variety of sources. One of the most simplified forms of a 3D city model is obtained by extruding building footprints into polyhedra [5]. A LiDAR point cloud can accurately capture architectural geometry, but the textural information needs to be retrieved separately [6]. Photorealistic building models can be constructed in a CAD environment, but the intensive labor required to create the geometric details and to acquire street-level façade photos is challenging. Recently, with the rapid development of unmanned aerial vehicle (UAV) technology and computer vision-based 3D reconstruction [7], oblique airborne photogrammetry-based 3D city models (OAP3Ds) are becoming increasingly mature and affordable [8]. OAP3Ds combine the advantages of LiDAR and close-range photography, and thus are enriched in both geometric and textural detail.

In the early stages of 3D GIS development, the use of 3D city models was limited primarily to visualization and urban planning [9,10], but recently it has expanded well beyond this scope. A bibliometric survey [11] identified 29 distinct use cases from different application domains and grouped these use cases into two broad categories, i.e., visualization-centered and non-visualization-centered. Out of the 29 use cases, 24 are visualization-centered and only five are non-visualization-centered (Table 1). This suggests that the core value of 3D city models is embodied primarily in visualization, although a variety of non-visualization purposes are also served.

Table 1. Overview of the use cases of 3D city models (adapted from Biljecki et al. [11]).

1. Estimation of solar irradiation [12–15]
2. Energy demand estimation [16,17]
3. Aiding positioning
4. Determination of floor space [18,19]
5. Classifying building types [20]
6. Geo-visualization and visualization enhancement [2,21]
7. Visibility analysis [22]
8. Estimation of shadows cast by urban features [23]
9. Estimation of the propagation of noise in an urban environment
10. 3D cadaster [24]
11. Visualization for navigation [25]
12. Urban planning [8–10]
13. Visualization for communication of urban information to citizenry [25,26]
14. Reconstruction of sunlight direction
15. Understanding synthetic aperture radar images
16. Facility management [27]
17. Automatic scaffold assembly in civil engineering
18. Emergency response [28]
19. Lighting simulations
20. Radio-wave propagation
21. Computational fluid dynamics
22. Estimating the population in an area [29]
23. Routing [30]
24. Forecasting seismic damage
25. Flooding [31,32]
26. Change detection [33]
27. Volumetric density studies
28. Forest management
29. Archaeology

Massive 3D city models are embedded in a level-of-detail (LOD) structure optimized for view-dependent out-of-core rendering, and thus cannot be easily brought into a CAD environment for making 2.5D maps. Typical formats for representing massive 3D city models include CityGML and OAP3D. Additionally, the methods and applications of 2.5D cartography have not been systematically addressed in the literature.

In this study, we present a streamlined framework (Figure 3) for transforming massive 3D city models into 2.5D maps. In the proposed framework, we aim to achieve the following functional objectives:

1. Support for various 3D model formats. 3D models come in a wide variety of formats, including .3ds, .obj, and .dae.
2. Support for LOD structure and spatial partitioning. Given limited computational power, a well-designed spatial-partitioning and out-of-core rendering scheme must be employed to generate 2.5D maps at a sufficiently high spatial resolution. By leveraging an LOD structure, a large 3D city model can be rendered into a grid of map tiles, but these map tiles need to be accurately georeferenced so they can be stitched back together to form a seamless 2.5D mosaic.
3. Support for custom map perspectives. An oblique perspective is defined by a camera's azimuth and elevation angle. A multi-perspective set of 2.5D maps can potentially afford a complete view of a 3D city model.
4. Support for orthorectification. The presence of terrain and the use of an oblique perspective may subject a set of 2.5D maps to distortion and misalignment. Orthorectification brings a multi-perspective set of 2.5D maps back into a common reference system.

Figure 3. A framework for producing orthorectified 2.5D maps from massive 3D city models.

To orthorectify a multi-perspective set of 2.5D maps, the normal workflow is to identify and collect a set of ground control points (GCPs) in each of them and then apply a polynomial transformation using the set of GCPs to warp each image into alignment with the common reference image. This traditional orthorectification procedure is not only labor-intensive but also disruptive to a streamlined workflow. Moreover, an efficient implementation of spatial partitioning and out-of-core rendering is also not a trivial task.

2. Methods

2.1. Constructing an Integrated Oblique Image Renderer

In a nutshell, the oblique image renderer is expected to accommodate various 3D model formats, various types of LOD structures, spatial partitioning, and custom camera configuration. To find out which 3D graphics API is most suitable for developing the oblique image renderer, we conducted a comparative review of the mainstream rendering APIs by category (Table 2):

1. Low-level graphics API

Low-level graphics APIs offer limited support for external data formats. For example, Direct3D has built-in support only for its native .X format. Additionally, they do not provide advanced capabilities related to out-of-core data management, image data exchange, and scene graph construction.

2. High-level 3D CAD studio

3D CAD studios such as Autodesk 3ds Max and Sketchup are intended for architectural design, model creation, and offline rendering of photorealistic images. Autodesk 3ds Max and Sketchup have built-in support for a wide variety of 3D model formats and can be used to create and render individual 3D building models. Massive 3D city models, also known as large-scale 3D city models, are usually present in the form of a hierarchy of progressively simplified individual 3D building models. These 3D CAD studios are not equipped with built-in capability to manage a massive 3D city model with an LOD structure [8].

Table 2. Comparative review of mainstream 3D rendering APIs.

Category | Intended For | Product | Model Format Support
--- | --- | --- | ---
Low-level graphics application programming interface (API) | Providing direct access to the GPU rendering pipeline | OpenGL | No built-in support for any model format
Low-level graphics application programming interface (API) | Providing direct access to the GPU rendering pipeline | DirectX | Built-in support only for its native model format
High-level 3D CAD studio | Creating 3D models and serving photo-realistic off-line rendering | Autodesk 3ds Max | Native support for various 3D model formats
High-level 3D CAD studio | Creating 3D models and serving photo-realistic off-line rendering | Sketchup | Native support for various 3D model formats
Intermediate-level integrated software development kit (SDK) | Accelerating development of integrated 3D applications | OGRE | Built-in support only for its native model format
Intermediate-level integrated software development kit (SDK) | Accelerating development of integrated 3D applications | OpenSceneGraph | Support for various 3D model formats and LOD structures

3. Intermediate-Level Integrated SDK

OGRE and OpenSceneGraph are open-source 3D graphics libraries developed on top of low-level graphics APIs. They are used mainly by application developers in fields such as visual simulation, computer games, and scientific visualization and modeling. OpenSceneGraph provides not only native support for a wide variety of 3D model formats, but also a well-organized scene graph that accommodates large-scale 3D city models with an LOD structure. Actually, OAP3D is usually represented in an OpenSceneGraph-based LOD format [8]. Moreover, OpenSceneGraph also affords access to many OpenGL functions such as rendering to texture and camera space transformation, which are crucial to generating oblique images.

Based on the advantages and disadvantages discussed above, OpenSceneGraph was chosen to develop the oblique image renderer, which serves the purpose of transforming a 3D city model into an oblique image for a given perspective. The oblique image renderer can be thought of as the rendering module of a CAD studio with extended support for massive 3D city models. There are typically two types of cameras in computer graphics: a perspective camera and an orthographic camera. A perspective camera acts like a real-world camera or the human visual system. However, an image generated by a perspective camera is subject to perspective distortion, which could adversely affect the geometric accuracy of map tile mosaicking and orthorectification in generating 2.5D maps. To avoid perspective distortion, an orthographic camera is used in the proposed oblique image rendering pipeline.

With an orthographic projection, an object always appears the same size no matter how close or far away it is. The orthographic camera (Figures 4 and 5) used for generating an oblique image is adjusted so it covers the entire spatial domain of a 3D city model and is mainly constrained by three factors, i.e., the elevation angle ϕ, the azimuth angle θ, and the bounding box of the 3D city model.

Figure 4. The view and projection transformations of an orthographic camera.

Figure 5. Constructing an oblique camera (θ = 0°, ϕ = 45°) for rendering 2.5D maps: (A) bounding box of the 3D city models; (B) camera configuration; (C) spatial partitioning; and (D) a map tile rendered from a sub-camera.

The bounding box (Figure 5A) is given by {xmin, ymin, zmin, xmax, ymax, zmax} and has a radius R. We define the azimuth angle as the number of degrees clockwise from the negative Y axis. We rotate the 3D city model clockwise around the vertical direction (Z axis) by an amount equal to θ, and then re-compute the world-space bounding box. If θ is 0°, in which case the camera is oriented in the south–north direction, the rotation affects neither the model orientation nor the world-space bounding box. By rotating the 3D city model instead of the camera itself, a fixed camera reference system can be maintained. The virtual camera is constructed by defining the two transformations [34]:

$$ViewMatrix = \begin{bmatrix} vRight_x & vUp_x & vForward_x & eye_x \\ vRight_y & vUp_y & vForward_y & eye_y \\ vRight_z & vUp_z & vForward_z & eye_z \\ 0 & 0 & 0 & 1 \end{bmatrix} \qquad (1)$$

where ViewMatrix defines the camera's local coordinate system, eye(x,y,z) is the position of the camera (Figure 5B), and vUp(x,y,z) is the cross product of vForward(x,y,z) and vRight(x,y,z). The positive X axis, Y axis, and Z axis of the camera coordinate system are defined by vRight(x,y,z), vUp(x,y,z), and vForward(x,y,z), respectively. The four vectors in the ViewMatrix are given in Equations (9)–(12).

$$ProjectionMatrix = \begin{bmatrix} \frac{2}{right - left} & 0 & 0 & -\frac{right + left}{right - left} \\ 0 & \frac{2}{top - bottom} & 0 & -\frac{top + bottom}{top - bottom} \\ 0 & 0 & \frac{-2}{far - near} & -\frac{far + near}{far - near} \\ 0 & 0 & 0 & 1 \end{bmatrix} \qquad (2)$$

The orthographic projection matrix is defined in Equation (2) by six parameters, which are given by Equations (13)–(18). The width, height, length, and center of the view frustum are given in Equations (4)–(7).

$$\begin{aligned}
shadowLen &= (zmax - zmin) \times \tan \varphi && (3)\\
width &= xmax - xmin && (4)\\
height &= (ymax - ymin) \times \frac{\sin \varphi}{2} + shadowLen && (5)\\
length &\in [R \times 4, \infty) && (6)\\
vCenter_{(x,y,z)} &= \left\{ \frac{xmin + xmax}{2},\; \frac{ymin + ymax}{2} + \frac{shadowLen}{2},\; zmin \right\} && (7)\\
dist &\in [length, \infty) && (8)\\
vRight_{(x,y,z)} &= \{1, 0, 0\} && (9)\\
vForward_{(x,y,z)} &= \{0, -\cos \varphi, \sin \varphi\} && (10)\\
vUp_{(x,y,z)} &= vForward_{(x,y,z)} \times vRight_{(x,y,z)} && (11)\\
eye_{(x,y,z)} &= vCenter_{(x,y,z)} + vForward_{(x,y,z)} \times dist && (12)\\
left &= -width/2 && (13)\\
right &= width/2 && (14)\\
bottom &= -height/2 && (15)\\
top &= height/2 && (16)\\
near &= dist - length/2 && (17)\\
far &= dist + length/2 && (18)
\end{aligned}$$

In Equations (3)–(7), ϕ is the elevation angle and vCenter(x,y,z) is translated from the center of the bounding box down to the ground plane and is the reference point that the camera looks at (Figure 5C). We set the ground plane at a minimum height, zmin. In Equation (3), shadowLen is the maximum possible projected length of a vertical line on the ground plane. We extend the camera frustum height by shadowLen and translate vCenter(x,y,z) toward the positive Y axis by shadowLen/2 so that tall buildings

(if there are any) on the northern edge remain within the field of view and the projected scene remains centered. The view frustum length in Equation (6) can be any value greater than the diameter of the bounding sphere. In Equation (8), dist denotes the distance from vCenter(x,y,z) to the camera; it can be any value greater than the view frustum length.
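For concreteness, the following is a minimal NumPy sketch of Equations (1)–(18) (ours, not the paper's OpenSceneGraph implementation; the function and variable names are assumptions). With the ±50 cube used in the worked example below (θ = 0°, ϕ = 45°), it reproduces the parameter values listed in Table 3.

```python
# Minimal sketch of Equations (1)-(18) in NumPy (names are ours, not the
# authors'). The bounding box is assumed to be already rotated by theta.
import numpy as np

def oblique_camera(bbox, phi_deg):
    xmin, ymin, zmin, xmax, ymax, zmax = bbox
    R = 0.5 * np.linalg.norm([xmax - xmin, ymax - ymin, zmax - zmin])
    phi = np.radians(phi_deg)

    shadow_len = (zmax - zmin) * np.tan(phi)                 # Eq. (3)
    width = xmax - xmin                                      # Eq. (4)
    height = (ymax - ymin) * np.sin(phi) / 2 + shadow_len    # Eq. (5)
    length = 4 * R                                           # Eq. (6): any value >= 4R
    v_center = np.array([(xmin + xmax) / 2,                  # Eq. (7): bbox center moved
                         (ymin + ymax) / 2 + shadow_len / 2, # +Y by shadowLen/2 and down
                         zmin])                              # to the ground plane
    dist = length                                            # Eq. (8): any value >= length

    v_right = np.array([1.0, 0.0, 0.0])                      # Eq. (9)
    v_forward = np.array([0.0, -np.cos(phi), np.sin(phi)])   # Eq. (10)
    v_up = np.cross(v_forward, v_right)                      # Eq. (11)
    eye = v_center + v_forward * dist                        # Eq. (12)

    view = np.identity(4)                                    # Eq. (1): columns hold the
    view[:3, 0] = v_right                                    # camera axes and position
    view[:3, 1] = v_up
    view[:3, 2] = v_forward
    view[:3, 3] = eye

    left, right = -width / 2, width / 2                      # Eqs. (13)-(14)
    bottom, top = -height / 2, height / 2                    # Eqs. (15)-(16)
    near, far = dist - length / 2, dist + length / 2         # Eqs. (17)-(18)
    proj = np.array([                                        # Eq. (2)
        [2 / (right - left), 0, 0, -(right + left) / (right - left)],
        [0, 2 / (top - bottom), 0, -(top + bottom) / (top - bottom)],
        [0, 0, -2 / (far - near), -(far + near) / (far - near)],
        [0, 0, 0, 1.0]])
    return view, proj

# The +/-50 cube of Table 3 (theta = 0 deg, phi = 45 deg)
view, proj = oblique_camera((-50, -50, -50, 50, 50, 50), 45.0)
```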

To overcome computational power limitations, a large scene needs to be spatially segmented into a grid of a given number of rows and columns, with each grid cell linked to a map tile. With the master orthographic camera constructed above, we need only to split the frustum into an array of smaller frustums so that each of these sub-frustums exactly covers the corresponding map tile (Figure 5C). In generating an oblique image of a 3D city model for a set of parameters, the group of sub-frustums is traversed, and each sub-frustum camera is activated to render the 3D scene into an OpenGL Render Target Texture (RTT), which is saved as a map tile image immediately after the render loop comes to an end. After all the sub-frustums are traversed, the set of map tiles generated is mosaicked into a seamless raster (Figure 5D). A code sketch of this sub-frustum splitting is given after Table 3.

We exemplify the use of the proposed set of Equations (1)–(18) by parameterizing an oblique image renderer (Table 3). In this example, the building model (Figure 6A) used has a bounding box {xmin = −50, ymin = −50, zmin = −50, xmax = 50, ymax = 50, zmax = 50} centered at the origin {x = 0, y = 0, z = 0}. The oblique camera looks in the south–north direction {x = 0, y = 0.707106781, z = −0.707106781} given θ = 0° and ϕ = 45° (Figure 6A,B). The set of parameters that define the ViewMatrix and ProjectionMatrix are summarized in Table 3.

Table 3. Parameterization of the view and projection matrix of an oblique image renderer.

Variable | Value
--- | ---
R | 86.60253906
shadowLen | 100
height | 135.3553391
width | 100
length | 346.4101563
left | −50
right | 50
bottom | −67.67766953
top | 67.67766953
dist | 346.4101563
near | 173.2050781
far | 519.6152344
vForward | {0, −0.707106781, 0.707106781}
vUp | {0, 0.707106781, 0.707106781}
vCenter | {0, 50, −50}
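The spatial partitioning described above amounts to subdividing the master camera's projection window into per-tile windows. A minimal sketch (function and variable names are ours):

```python
# Split the master orthographic window into rows x cols sub-frustum windows,
# one per map tile; near/far are inherited unchanged from the master camera.
def tile_windows(left, right, bottom, top, rows, cols):
    dx = (right - left) / cols
    dy = (top - bottom) / rows
    for r in range(rows):
        for c in range(cols):
            yield r, c, (left + c * dx, left + (c + 1) * dx,
                         bottom + r * dy, bottom + (r + 1) * dy)

# e.g., cut the Table 3 camera into a 4 x 4 grid of tile cameras
for r, c, (l, rt, b, t) in tile_windows(-50, 50, -67.67766953, 67.67766953, 4, 4):
    pass  # build a projection matrix from (l, rt, b, t); render one tile into an RTT
```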

Figure 6. Generating an oblique image from the south–north direction at an elevation angle of 45° using an instantiated oblique image renderer: (A) the 3D building model used; (B) the oblique image generated.


2.2. Automating GCP Coordinates Retrieval for Orthorectification

In orthorectification, an oblique image is warped so that all ground pixels are mapped back to the orthographic space of the 3D city model. A larger number of GCPs are usually needed in cases with rugged terrain. To orthorectify a multi-perspective set of 2.5D maps, the coordinates of each GCP need to be identified and collected separately in each map. Automatic retrieval of GCP coordinates can substantially reduce the labor required for the manual identification and collection of GCP coordinates. We present a rendering-based method for automatically retrieving the GCPs' coordinates in an oblique image. Before this method can be applied to an oblique image, a set of GCPs with orthographic coordinates needs to be collected from the 3D city model in a 3D GIS or from an orthoimage in a 2D GIS (Figure 7A). In collecting the GCPs, one must ensure that every GCP is sampled from the exposed ground (Figure 7A) instead of from other aboveground objects, e.g., buildings, trees, and vehicles. A four-byte (RGBA) RTT the same size as a map tile in the grid is allocated for rendering to texture. With the proposed method, the coordinates of each GCP in an oblique image can be correctly retrieved as follows (Figure 7):

1. Locate the row and column number of the map tile that contains the GCP by the orthographic coordinates of this GCP.
2. Create an OpenGL point primitive using the GCP coordinates and then render the point together with the 3D city model using the associated orthographic camera. In the GPU pipeline, the GCP primitive is shaded in red and the 3D city model in pure black with all textures and materials disabled (Figure 7B). In Figure 7B, the background pixels associated with the 3D city model are not discarded to show how the GCPs are displaced in an oblique view against the orthographic view.
3. Traverse the pixels in the RTT after the render loop is completed. The red pixels are retained (Figure 7C) while black ones are discarded. The center of the cluster of red pixels is assumed to be the coordinates of this GCP in the oblique space (see the sketch below).
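Step 3 can be sketched in a few lines of NumPy, under our assumptions that the RTT has been read back as an H × W × 4 byte array and that the threshold values are illustrative:

```python
import numpy as np

def gcp_oblique_coords(rtt):
    # rtt: (H, W, 4) uint8 RGBA array read back from the render target texture;
    # the GCP primitive was shaded red, the city model pure black
    red = (rtt[..., 0] > 128) & (rtt[..., 1] < 64) & (rtt[..., 2] < 64)
    rows, cols = np.nonzero(red)
    if rows.size == 0:
        return None  # the GCP is occluded by buildings in this perspective
    # centroid of the red pixel cluster = GCP position in oblique-image pixels
    return float(cols.mean()), float(rows.mean())
```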

Figure 7. Illustration of the automatic GCP coordinates retrieval approach: (A) GCPs in an orthographic view; (B) GCPs in an oblique view; (C) GCPs rendered in an oblique view without background for identifying coordinates; (D) GCP table obtained for use in orthorectification; (E) oblique image orthorectified using the set of GCPs.



The retrieved GCPs are stored in a table (Figure7D) for use in the final image warping (Figure7E). With these GCPs, a polynomial transformation can be applied to warp the oblique image into alignment with the orthographic reference system. ArcMap normally comes with an image warping tool that can make use of externally created GCPs in the polynomial transformation. The open-source raster library GDAL also provides a free image warping tool, the “gdalwarp utility”, which is used in the proposed streamlined framework because open-source programs can be more conveniently integrated into an automated workflow.
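As a sketch of this final step with GDAL's Python bindings (the file names, EPSG code, and GCP values below are placeholders, not the paper's data), the retrieved GCP table can be attached to the oblique image and a third-order polynomial warp applied, mirroring what the gdalwarp utility does:

```python
from osgeo import gdal

# each GCP pairs map-space (x, y, z) with the automatically retrieved
# oblique-image (pixel, line) coordinates; the values here are placeholders
gcps = [
    gdal.GCP(451234.0, 2501234.0, 0.0, 1024.5, 768.25),
    gdal.GCP(451534.0, 2501834.0, 0.0, 2210.0, 410.75),
    # ... the remaining GCPs from the table in Figure 7D
]

src = gdal.Translate('oblique_gcps.tif', 'oblique.tif',
                     GCPs=gcps, outputSRS='EPSG:32650')
gdal.Warp('oblique_ortho.tif', src,
          dstSRS='EPSG:32650', polynomialOrder=3)  # third-order transformation
```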

3. Application

Two OAP3D datasets were acquired using a combination of oblique airborne photogrammetry and photo-based 3D reconstruction (Figure 8). 2.5D maps are generated from the two 3D city models using the proposed 2.5D cartography framework. A comparison of the 2.5D maps and the 3D city model is made to show the potential of 2.5D maps for improving user experience in exploring 3D city models in a web environment. We assess the accuracy of building height measurement on the 2.5D maps against the 3D city model and then lay out the general framework for integrating 2.5D maps into web GIS. Two simple examples are presented to show how 2.5D maps can be integrated into real-world applications.

Figure 8. The distribution of GCPs in two OAP3D datasets used for generating 2.5D maps and assessing quality: (A) Dataset 1 in a rugged terrain; (B) Dataset 2 in a gentle terrain.

3.1. Generating 2.5D Maps from 3D City Models

2.5D maps at a spatial resolution of 25 cm were generated for eight perspectives, defined by the combinations of a 45° elevation angle and the eight azimuth directions, including north, south, east, west, northeast (NE), southeast (SE), southwest (SW), and northwest (NW). Collection of GCPs was conducted in a 3D interactive environment, in which the orthographic coordinates of a GCP were registered at the intersection point of a mouse click position and the ground plane. Dataset 2 (Figure 8B) covers an area of flat topography. By contrast, Dataset 1 is situated in rugged terrain (Figure 8A). In collecting the GCPs, areas surrounding buildings were not considered since they may be obstructed by the buildings when viewed from an oblique perspective. Consequently, a majority of the GCPs were placed on arterial road segments, which were clear of building obstructions.

In orthorectifying an oblique image, the oblique-space coordinates of each GCP were retrieved using the automatic GCP coordinates retrieval technique. Orthorectification with a third-order transformation was used to warp the oblique images into the orthographic reference system. To show the performance of the automatic GCP coordinates retrieval algorithm, all GCPs were used in the orthorectification process without being adjusted or screened. As expected, the 2.5D maps created from Dataset 1 (Figure 8A) have a mean RMSD of 4.5 m, which is significantly larger than the RMSD (0.8 m) with Dataset 2 (Figure 8B). This large difference in the geometric accuracy of the orthorectified 2.5D maps was likely caused by terrain relief. We conducted a visual inspection to assess the quality of the orthorectified 2.5D maps. An evenly divided grid was draped on the orthographic map and the 2.5D maps from different perspectives for comparison of reference locations. We utilized the locations of some visually prominent ground-level features to assess the relative displacement between an orthographic map and an oblique map. At the given map scale, noticeable displacement (Figure 9) can be observed in areas of rugged relief (Figure 8A), but little displacement (Figure 10) appears in areas of gentle relief (Figure 8B).



Figure 9. 2.5D maps of different perspectives created from Dataset 1 for assessing geometric accuracy (follow the yellow arrows to observe displacements): (A) orthographic view; (B) θ = 0°, ϕ = 45°; (C) θ = 45°, ϕ = 45°; (D) θ = 135°, ϕ = 45°.


Figure 10. 2.5D maps of different perspectives created from Dataset 2 for assessing geometric accuracy: (A) orthographic view; (B) θ = 0°, ϕ = 45°; (C) θ = 45°, ϕ = 45°; (D) θ = 135°, ϕ = 45°.

3.2. Comparison of 2.5D and 3D Representations in Web-Based Visualization of 3D City Models

Data transfer can become a challenging bottleneck for a web GIS if a large volume of data is involved while only limited Internet bandwidth is available. A compact data representation is conducive to a web GIS that focuses on interactive visualization and analysis. Datasets 1 and 2 in their original form take up 81.5 GB and 4.8 GB of disk space, respectively. By contrast, the eight-perspective sets of 2.5D maps in GeoTiff format with LZW compression were reduced to 1.6 GB and 0.14 GB, showing compression ratios of 51:1 and 34:1, respectively.

To find out how 2.5D maps can improve the user experience in web-based visualization of 3D city models, Dataset 2 and the associated 2.5D maps were hosted on a local area network with a maximum transfer speed of 10 MB/s. This dataset covers a 3195 m by 2292 m area, which was divided into a grid of 50 m by 50 m sub-regions with 45 rows and 63 columns. On the client side, to simulate a user interactively exploring a 3D city model in full detail, a robot user was programmed to zoom into the 2835 sub-regions one by one and would remain focused on a sub-region until the data download was completed and the area appeared in full detail. A 50 m by 50 m sub-region was used because the 3D city model has a spatial resolution higher than 25 cm and can show its content in full detail only when zoomed in on an area of approximately 50 m by 50 m or smaller. The robot user program was made to navigate through the 2835 sub-regions separately for the 3D and 2.5D visualization. Under the 3D visualization mode, a total of 2253 s was spent exploring the 2835 sub-regions, with data retrieval and 3D rendering consuming 1648 and 605 s, respectively. Under the 2.5D visualization mode with a constant oblique perspective, a mere 20 s in total was spent, with 5 s attributed to data retrieval and 15 s attributed to image rendering. This shows that 2.5D maps can potentially improve the user experience in web-based visualization of 3D city models by shortening the time required for data retrieval and rendering.

3.3. Geometric Measurement on 2.5D Maps and Accuracy Assessment

In a 2D GIS, a 2.5D map can be utilized to measure the geometric features of buildings. In theory, the actual length of a linear feature that is perpendicular to the ground plane can be found by dividing the projected length by the tangent of the camera elevation angle. When the camera elevation angle is 45°, the projected length is assumed to be equal to the actual length. Therefore, direct measurement can be conducted on a 2.5D map with an elevation angle of 45°. For a 2.5D map with a larger or smaller elevation angle, measurement is still applicable with simple trigonometric calculations. To assess the measurement accuracy of geometric features on 2.5D maps, we picked out 20 reference buildings from Dataset 2. We measured the height of each of the 20 buildings in a real 3D GIS and then on the 2.5D maps for comparison (Figure 11). As shown in Table 4, the two sets of building heights agree well with each other, with a root mean square deviation (RMSD) of 0.701 m. Besides height measurements, building façade area can also be approximated (Figure 12).
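The relation stated above reduces to a one-line function (a sketch; the name is ours):

```python
import math

def vertical_length(projected_len, phi_deg):
    # true length of a ground-perpendicular feature from its projected length
    # on a 2.5D map with camera elevation angle phi, per the relation above
    return projected_len / math.tan(math.radians(phi_deg))

# at phi = 45 deg, tan(phi) = 1, so the map measurement equals the true height
assert abs(vertical_length(13.7, 45.0) - 13.7) < 1e-9
```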

Figure 11. Measuring building height (A) in a real 3D environment; (B) on a 2.5D map with θ = 315°, ϕ = 45°; (C) on a 2.5D map with θ = 0°, ϕ = 45°.

Table 4. Comparison of the building heights measured on a 2.5D map and in a 3D GIS.

Building ID | 3D Measurement (m) | 2D Measurement (m) | Difference (m)
--- | --- | --- | ---
1 | 12.507777 | 13.125066 | 0.617289
2 | 13.713325 | 13.732419 | 0.019094
3 | 14.129196 | 14.155275 | 0.026079
4 | 16.866035 | 16.933496 | 0.067461
5 | 17.225675 | 17.833031 | 0.607356
6 | 18.953028 | 18.25633 | −0.696698
7 | 18.758588 | 18.653542 | −0.105046
8 | 19.039177 | 19.47366 | 0.434483
9 | 20.484453 | 19.157041 | −1.327412
10 | 20.829912 | 20.002697 | −0.827215
11 | 20.874955 | 21.008558 | 0.133603
12 | 20.955293 | 20.958611 | 0.003318
13 | 22.353933 | 22.861286 | 0.507353
14 | 22.754253 | 22.423482 | −0.330771
15 | 22.520097 | 22.066294 | −0.453803
16 | 23.235158 | 24.293483 | 1.058325
17 | 33.576494 | 31.089237 | −2.487257
18 | 37.405119 | 36.844787 | −0.560332
19 | 55.114582 | 54.690003 | −0.424579
20 | 56.291389 | 56.197712 | −0.093677

Figure 12. Measuring building façade area on a 2.5D map in a 2D GIS.

3.4. Workflow for Integrating 2.5D Maps into Web GIS

A multi-perspective 2.5D layer set is essentially a raster dataset, which can be published through standard map services such as WMS and TMS (Figure 13). Most open-source map servers, including Geoserver, Mapserver, and QGIS Server, support the WMS service and accommodate raster datasets that come in a standard georeferenced raster format, such as GeoTIFF. This means a georeferenced 2.5D map in a standard format can be hosted as a WMS layer on these servers, and thus multi-perspective 2.5D maps can be published as a set of WMS layers. Another option is to publish 2.5D maps through a TMS service, which generates maps as a pyramid of tiles at multiple zoom levels. Open-source tools, such as GDAL2Tiles, can generate TMS tiles from a georeferenced 2.5D map (see the sketch below). Viewing 2.5D maps in browsers on various devices and platforms has become an easy task with the availability of open-source mapping libraries such as OpenLayers, which can display map tiles, vector data, and markers loaded from a wide variety of sources.
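For example, the TMS route can be scripted end to end (a sketch; the paths and zoom range are placeholders): gdal2tiles.py, which ships with GDAL, cuts a georeferenced 2.5D GeoTIFF into a tile pyramid that OpenLayers can consume.

```python
import subprocess

# cut one perspective of the 2.5D map set into a TMS tile pyramid
# (file paths and zoom levels are placeholders)
subprocess.run(['gdal2tiles.py', '-z', '10-20', '-w', 'openlayers',
                'map25d_ne_45.tif', 'tiles/ne_45'], check=True)
```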


Figure 13. Integrating 2.5D maps into web GIS via map services.

3.5. Integrating Orthorectified 2.5D Images into a Street Map for Campus Navigation

Traditionally, 2.5D campus maps are usually rendered from 3D city models created in a CAD environment. The multi-perspective 2.5D maps generated using the proposed method can serve as an alternative data source for web-based 2.5D campus mapping (Figure 14). In contrast to manually created CAD models, OAP3D can be easily acquired and rapidly updated over a geographic domain much larger than a university campus. Thus, OAP3D-based 2.5D maps can not only reduce the cost of 2.5D campus mapping, but also extend the use of 2.5D campus mapping to a wider scope of geographic environments, including industrial parks, high schools, hospitals, and amusement parks.



Figure 14. Integrating 2.5D images into a street map for campus navigation.

3.6. The Fusion of Scientific Data and Art in 2.5D Cartography

Cartography is the discipline that combines science, technology, and art to prepare and study all types of maps and charts [35]. Modern cartography relies on specialized data models in GIS to store, manage, symbolize, and render digitized geographic information [36,37]. Since the first operational GIS was introduced in 1960, cartography has been gradually integrated into GIS as an essential component that provides capabilities for the mapping of cartographic information [38]. Goodchild [39] commented that "Cartography also deals with geographic information, but in a manner that combines the scientific with the artistic". Krygier [35] contended that "art" and "science" serve a functionally similar role in cartography.

Pixel art-based cartography (Figure 2) is arguably more artistic than scientific, as the data content is not physically related to the real world. In contrast to pixel art-based cartography (Figure 2), a 2.5D map from an OAP3D is a photogrammetric representation of an urban environment and is reasonably more scientific than artistic. Following the cartographic philosophy of "integrating art and science", we can improve the aesthetic quality and extend the use of OAP3D-based 2.5D maps by (1) employing computer graphics techniques to produce desired visual effects [40,41]; even simple shadowing effects can enhance the aesthetic quality of 2.5D maps (Figure 15); and (2) incorporating externally generated content, e.g., architectural and urban design [8], to achieve a wider scope of application.

Figure 15. Incorporating computer-generated virtual effect into a 2.5D map (a) without shadow effect; (b) with shadow effect.

17

Figure 14. Integrating 2.5D images into a street map for campus navigation.

3.6. The Fusion of Scientific Data and Art in 2.5D Cartography

Cartography is the discipline that combines science, technology, and art to prepare and study all types of maps and charts [35]. Modern cartography relies on specialized data models in GIS to store, manage, symbolize, and render digitized geographic information [36,37]. Since the first operational GIS was introduced in 1960, cartography has gradually been integrated into GIS as an essential component that provides capabilities for the mapping of cartographic information [38]. Goodchild [39] commented that “Cartography also deals with geographic information, but in a manner that combines the scientific with the artistic”. Krygier [35] contended that “art” and “science” serve a functionally similar role in cartography.

Pixel art-based cartography (Figure 2) is arguably more artistic than scientific, as the data content is not physically related to the real world. In contrast to pixel art-based cartography (Figure 2), a 2.5D map derived from an OAP3D is a photogrammetric representation of an urban environment and is reasonably more scientific than artistic. Following the cartographic philosophy of “integrating art and science”, we can improve the aesthetic quality and extend the use of OAP3D-based 2.5D maps by (1) employing computer graphics techniques to produce desired visual effects [40,41]; even simple shadowing effects can enhance the aesthetic quality of 2.5D maps (Figure 15); and (2) incorporating externally generated content, e.g., architectural and urban design [8], to achieve a wider scope of application.


Figure 15. Incorporating computer-generated virtual effects into a 2.5D map: (a) without shadow effect; (b) with shadow effect.
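For readers who wish to reproduce an effect like Figure 15b, the sketch below composites a precomputed shadow mask onto a 2.5D map using the HTML canvas "multiply" blend mode. This is an illustrative stand-in, not the renderer used in this study; the file names and the mask convention (white = lit, grey = shadowed) are assumptions.

```typescript
// Minimal sketch of the shadow-compositing idea behind Figure 15: darken a
// 2.5D map with a precomputed shadow mask. Not the paper's own pipeline.
async function compositeShadow(mapUrl: string, maskUrl: string): Promise<HTMLCanvasElement> {
  const [base, mask] = await Promise.all([loadImage(mapUrl), loadImage(maskUrl)]);
  const canvas = document.createElement("canvas");
  canvas.width = base.width;
  canvas.height = base.height;
  const ctx = canvas.getContext("2d")!;
  ctx.drawImage(base, 0, 0);
  // 'multiply' scales each map pixel by the mask value, leaving lit areas
  // unchanged and darkening shadowed ones.
  ctx.globalCompositeOperation = "multiply";
  ctx.drawImage(mask, 0, 0);
  return canvas;
}

function loadImage(url: string): Promise<HTMLImageElement> {
  return new Promise((resolve, reject) => {
    const img = new Image();
    img.onload = () => resolve(img);
    img.onerror = reject;
    img.src = url; // e.g., "campus_2p5d.png" / "shadow_mask.png" (hypothetical)
  });
}
```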

4. Conclusions

2.5D mapping is a convenient and efficient approach to delivering massive 3D city models to 2D web GIS. With 2.5D cartography, existing massive 3D city models can be used by a wider audience and in a wider variety of contexts. A systematic 2.5D cartography framework was presented in this study. We have shown how the technical challenges, including spatial partitioning, camera construction, and orthorectification, can be effectively resolved by combining GIS and computer graphics techniques, and we have laid out a streamlined workflow for producing high-quality 2.5D maps from massive 3D city models. The geometric accuracy of the orthorectification was assessed under different topographic conditions. Over gentle terrain relief, we achieved an RMSD of 0.8 m, which suggests that the presented automatic GCP coordinate retrieval method is effective in orthorectifying multi-perspective 2.5D maps. Although a mean RMSD of 4.5 m was found over rugged topography, the orthorectification quality can be improved if more GCPs are used. Effective utilization of 3D city models in 2D web GIS is mainly reflected in two functional aspects:

1. Interactive analysis. Geometric measurement is typical of interactive analysis in 3D city models. We have shown by example that geometric measurement of buildings can be effectively conducted on 2.5D maps (see the sketch after this list). The accuracy assessment revealed that measurement of building height on 2.5D maps is subject to minor errors; although the RMSD is as small as 0.701 m, it must be taken into account in engineering activities such as cadastral surveys. The uncertainty in geometric measurement on 2.5D maps may stem from inaccurate positioning of a point, inaccurate alignment of lines, or insufficient map resolution.
2. Interactive visualization. We conclude that 2.5D maps are a compact data representation optimized for web data streaming and mapping. Our case study showed that a compression ratio of 51:1 was achievable by transforming an 81.5 GB OAP3D into an eight-perspective set of 2.5D maps totaling 1.6 GB. Efficient streaming of high-resolution 2.5D maps to a client can ensure a high-quality visualization experience.
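The following sketch illustrates the kind of measurement described in item 1. It assumes, as one plausible reading of the geometry rather than a relation stated in this section, that on a rectified 2.5D map a point of height h is displaced by d = h / tan(α) along the view azimuth for a camera elevation angle α, so that h = d · tan(α). It also includes the RMSD used in the accuracy assessment. Function names and the sample numbers are hypothetical.

```typescript
// Minimal sketch of building-height measurement on a 2.5D map.
// Assumed displacement model: d = h / tan(alpha)  =>  h = d * tan(alpha).
const DEG = Math.PI / 180;

/** Height (m) from the measured roof-to-footprint offset d (m) of one corner. */
function heightFromOffset(offsetMeters: number, elevationDeg: number): number {
  return offsetMeters * Math.tan(elevationDeg * DEG);
}

/** Root-mean-square deviation between measured and reference values. */
function rmsd(measured: number[], reference: number[]): number {
  const sumSq = measured.reduce((s, m, i) => s + (m - reference[i]) ** 2, 0);
  return Math.sqrt(sumSq / measured.length);
}

// Hypothetical example: on a 45-degree map the offset equals the height.
const h = heightFromOffset(21.3, 45); // ≈ 21.3 m
console.log(h.toFixed(1), rmsd([21.3, 18.0, 33.2], [20.9, 18.6, 32.6]).toFixed(3));
```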

Acknowledgments: This research was supported and funded by the Foundation for Young Scientists of the State Key Laboratory of Remote Sensing Science (15RC-08), the Key Knowledge Innovative Project of the Chinese Academy of Sciences (KZCX2 EW 318), the National Key Technology R&D Program of China (2014ZX10003002), the National Natural Science Foundation of China (41371387), and the Science and Technology Program of Guangzhou (201604020117; high-resolution estimation method of UAV-based urban building carbon emissions, a key special subject in cooperative innovation combining learning, research, and production of Guangzhou Municipality). We thank the three anonymous reviewers for their constructive comments and Maya Hutchins for her proofreading.

Author Contributions: Jianming Liang and Jianhua Gong conceived and designed the method; Jianming Liang implemented the method; all the authors reviewed and edited the manuscript.

Conflicts of Interest: The authors have no conflicts of interest to declare.

References

1. Bruse, M.; Fleer, H. Simulating surface–plant–air interactions inside urban environments with a three dimensional numerical model. Environ. Model. Softw. 1998, 13, 373–384. [CrossRef]
2. Huang, B.; Jiang, B.; Li, H. An integration of GIS, virtual reality and the Internet for visualization, analysis and exploration of spatial data. Int. J. Geogr. Inf. Sci. 2001, 15, 439–456. [CrossRef]
3. Lin, H.; Chen, M.; Lu, G. Virtual geographic environment: A workspace for computer-aided geographic experiments. Ann. Assoc. Am. Geogr. 2013, 103, 465–482. [CrossRef]
4. Li, W.; Gong, J.; Yu, P.; Duan, Q.; Zou, Y. A stream-based Parasitic Model for implementing Mobile Digital Earth. Int. J. Dig. Earth 2014, 7, 38–52. [CrossRef]
5. Ledoux, H.; Meijers, M. Topologically consistent 3D city models obtained by extrusion. Int. J. Geogr. Inf. Sci. 2011, 25, 557–574. [CrossRef]
6. Cheng, L.; Gong, J.; Li, M.; Liu, Y. 3D building model reconstruction from multi-view aerial imagery and lidar data. Photogramm. Eng. Remote Sens. 2011, 77, 125–139. [CrossRef]
7. Remondino, F.; Barazzetti, L.; Nex, F.; Scaioni, M.; Sarazzi, D. UAV photogrammetry for mapping and 3D modeling–current status and future perspectives. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2011, 38, 22–37. [CrossRef]
8. Liang, J.; Shen, S.; Gong, J.; Liu, J.; Zhang, J. Embedding user-generated content into oblique airborne photogrammetry-based 3D city model. Int. J. Geogr. Inf. Sci. 2016. [CrossRef]
9. Ranzinger, M.; Gleixner, G. GIS datasets for 3D urban planning. Comput. Environ. Urban Syst. 1997, 21, 159–173. [CrossRef]
10. Zhang, X.; Zhu, Q.; Wang, J. 3D city models based spatial analysis to urban design. Geogr. Inf. Sci. 2004, 10, 82–86. [CrossRef]
11. Biljecki, F.; Stoter, J.; Ledoux, H.; Zlatanova, S.; Çöltekin, A. Applications of 3D city models: State of the art review. ISPRS Int. J. Geo-Inf. 2015, 4, 2842–2889. [CrossRef]
12. Biljecki, F.; Heuvelink, G.B.; Ledoux, H.; Stoter, J. Propagation of positional error in 3D GIS: Estimation of the solar irradiation of building roofs. Int. J. Geogr. Inf. Sci. 2015, 29, 2269–2294. [CrossRef]
13. Liang, J.; Gong, J.; Li, W.; Ibrahim, A.N. A visualization-oriented 3D method for efficient computation of urban solar radiation based on 3D–2D surface mapping. Int. J. Geogr. Inf. Sci. 2014, 28, 780–798. [CrossRef]
14. Liang, J.; Gong, J.; Zhou, J.; Ibrahim, A.N.; Li, M. An open-source 3D solar radiation model integrated with a 3D Geographic Information System. Environ. Model. Softw. 2015, 64, 94–101. [CrossRef]
15. Lukač, N.; Žalik, B. GPU-based roofs’ solar potential estimation using LiDAR data. Comput. Geosci. 2013, 52, 34–41. [CrossRef]
16. Carrión, D.; Lorenz, A.; Kolbe, T.H. Estimation of the energetic rehabilitation state of buildings for the city of Berlin using a 3D city model represented in CityGML. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2010, 38, 31–35.
17. Saran, S.; Wate, P.; Srivastav, S.K.; Krishna Murthy, Y.V.N. CityGML at semantic level for urban energy conservation strategies. Ann. GIS 2015, 21, 27–41. [CrossRef]
18. Boeters, R.; Arroyo Ohori, K.; Biljecki, F.; Zlatanova, S. Automatically enhancing CityGML LOD2 models with a corresponding indoor geometry. Int. J. Geogr. Inf. Sci. 2015, 29, 2248–2268. [CrossRef]
19. Shiravi, S.; Zhong, M.; Beykaei, S.A.; Hunt, J.D.; Abraham, J.E. An assessment of the utility of LiDAR data in extracting base-year floorspace and a comparison with the census-based approach. Environ. Plan. B Plan. Des. 2015, 42, 708–729.
20. Henn, A.; Römer, C.; Gröger, G.; Plümer, L. Automatic classification of building types in 3D city models. GeoInformatica 2012, 16, 281–306. [CrossRef]
21. Royan, J.; Gioia, P.; Cavagna, R.; Bouville, C. Network-based visualization of 3d landscapes and city models. IEEE Comput. Graph. Appl. 2007, 27, 70–79. [CrossRef] [PubMed]
22. Yang, P.P.J.; Putra, S.Y.; Li, W. Viewsphere: A GIS-based 3D visibility analysis for urban design evaluation. Environ. Plan. B Plan. Des. 2007, 34, 971–992. [CrossRef]
23. Hofierka, J.; Zlocha, M. A new 3-D solar radiation model for 3-D city models. Trans. GIS 2012, 16, 681–690. [CrossRef]
24. Guo, R.; Li, L.; Ying, S.; Luo, P.; He, B.; Jiang, R. Developing a 3D cadastre for the administration of urban land use: A case study of Shenzhen, China. Comput. Environ. Urban Syst. 2013, 40, 46–55. [CrossRef]

25. Duan, Q.; Gong, J.; Li, W.; Shen, S.; Li, R. Improved Cubemap model for 3D navigation in geo-virtual reality. Int. J. Dig. Earth 2015, 8, 877–900. [CrossRef]
26. Wu, H.; He, Z.; Gong, J. A virtual globe-based 3D visualization and interactive framework for public participation in urban planning processes. Comput. Environ. Urban Syst. 2010, 34, 291–298. [CrossRef]
27. Hijazi, I.H.; Ehlers, M.; Zlatanova, S. NIBU: A new approach to representing and analysing interior utility networks within 3D geo-information systems. Int. J. Dig. Earth 2012, 5, 22–42. [CrossRef]
28. Kwan, M.P.; Lee, J. Emergency response after 9/11: The potential of real-time 3D GIS for quick emergency response in micro-spatial environments. Comput. Environ. Urban Syst. 2005, 29, 93–113. [CrossRef]
29. Lu, Z.; Im, J.; Quackenbush, L. A volumetric approach to population estimation using LiDAR remote sensing. Photogramm. Eng. Remote Sens. 2011, 77, 1145–1156. [CrossRef]
30. Hildebrandt, D.; Timm, R. An assisting, constrained 3D navigation technique for multiscale virtual 3D city models. GeoInformatica 2014, 18, 537–567. [CrossRef]
31. Amirebrahimi, S.; Rajabifard, A.; Mendis, P.; Ngo, T. A framework for a microscale flood damage assessment and visualization for a building using BIM–GIS integration. Int. J. Dig. Earth 2016, 9, 363–386. [CrossRef]
32. Liu, J.; Gong, J.H.; Liang, J.M.; Li, Y.; Kang, L.C.; Song, L.L.; Shi, S.X. A quantitative method for storm surge vulnerability assessment—A case study of Weihai City. Int. J. Dig. Earth 2016. [CrossRef]
33. Qin, R. Change detection on LOD 2 building models with very high resolution spaceborne stereo imagery. ISPRS J. Photogramm. Remote Sens. 2014, 96, 179–192. [CrossRef]
34. Shirley, P.; Ashikhmin, M.; Marschner, S. Fundamentals of Computer Graphics; CRC Press: Boca Raton, FL, USA, 2009.
35. Krygier, J.B. Cartography as an art and a science? Cartogr. J. 1995, 32, 3–10. [CrossRef]
36. Peuquet, D.J. An examination of techniques for reformatting digital cartographic data/Part 1: The raster-to-vector process. Cartogr. Int. J. Geogr. Inf. Geovis. 1981, 18, 34–48.
37. Peuquet, D.J. An examination of techniques for reformatting digital cartographic data/Part 2: The vector-to-raster process. Cartogr. Int. J. Geogr. Inf. Geovis. 1981, 18, 21–33.
38. Berry, J.K. Fundamental operations in computer-assisted map analysis. Int. J. Geogr. Inf. Syst. 1987, 1, 119–136. [CrossRef]
39. Goodchild, M. Twenty years of progress: GIScience in 2010. J. Spat. Inf. Sci. 2015, 1, 3–20. [CrossRef]
40. Liang, J.; Gong, J.; Li, Y. Realistic rendering for physically based shallow water simulation in Virtual Geographic Environments (VGEs). Ann. GIS 2015, 21, 301–312. [CrossRef]
41. Liu, P.; Gong, J.; Yu, M. GPU-based dynamic volume rendering for typhoons on a virtual globe. Int. J. Dig. Earth 2015, 8, 431–450. [CrossRef]

© 2016 by the authors; licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC-BY) license (http://creativecommons.org/licenses/by/4.0/).