
International Journal of Geo-Information

Article

Semi-Automatic 3D City Model Generation from Large-Format Aerial Images

Mehmet Buyukdemircioglu 1, Sultan Kocaman 1,* and Umit Isikdag 2

1 Department of Geomatics Engineering, Hacettepe University, Ankara 06800, Turkey; [email protected]
2 Department of Informatics, Mimar Sinan Fine Arts University, Istanbul 34427, Turkey; [email protected]
* Correspondence: [email protected]; Tel.: +90-312-780-5792

 Received: 28 June 2018; Accepted: 20 August 2018; Published: 22 August 2018 

Abstract: 3D city models have become crucial for better city management, and can be used for various purposes such as disaster management, navigation, solar potential computation and planning simulations. 3D city models are not only visual models; they can also be used for thematic queries and analyses with the help of semantic data. The models can be produced using different data sources and methods. In this study, vector basemaps and large-format aerial images, which are regularly produced in accordance with the large scale map production regulations in Turkey, have been used to develop a workflow for semi-automatic 3D city model generation. The aim of this study is to propose a procedure for the production of 3D city models from existing aerial photogrammetric datasets without additional data acquisition efforts and/or costly manual editing. To demonstrate the methodology, a 3D city model has been generated with semi-automatic methods at LoD2 (Level of Detail 2) of CityGML (City Geography Markup Language) using the data of the study area over Cesme Town of Izmir Province, Turkey. The generated model is automatically textured and additional developments have been performed for 3D visualization of the model on the web. The problems encountered throughout the study and the approaches to solve them are presented here. Consequently, the approach introduced in this study yields promising results for low-cost 3D city model production with the data at hand.

Keywords: 3D city model; visualization; DSM; DTM; aerial imagery

1. Introduction

According to the United Nations World Urbanization Prospects [1], more than half of the world's population (54%) currently lives in cities, and this number will increase to 66% by 2050. For efficient management of large cities and urban populations, new technologies must be developed so that higher living standards can be achieved. Geographical Information Systems (GIS) are indispensable platforms for the efficient management of cities, and visualization has become a crucial component of GIS. A 3D city model is a digital representation of the buildings and objects in an urban area, as well as the terrain model [2]. 3D city models usually consist of digital terrain models (DTMs), building models, street-space models, and green space models [3]. They can also be used to perform simulations of different scenarios in virtual environments [4]. The focus of 3D GIS lies in the management of comprehensive and exhaustive 3D models for domains such as architecture, engineering, construction and facility management, as well as of wide-area, georeferenced 3D models. 3D city models are becoming important tools for urban decision-making processes and information systems, especially for planning, simulation, documentation, heritage planning, mobile network planning, and navigation.

3D city models are being produced and developed nationwide in many countries. The Berlin [5], Vienna [6], and Konya [7] city models can be given as examples of pioneering models. 3D city model reconstruction can be performed from different data sources, such as LiDAR point clouds [8], airborne images [9,10], satellite images [11,12], UAV (unmanned aerial vehicle) images, or a combination of DSM (digital surface model) data with cadastral maps [13]. Buildings are the base elements of such models, and building reconstruction can mainly be done in three processing steps: building detection, extraction, and reconstruction [14]. The reconstruction of the buildings is a process of generating a model using features obtained from building detection and extraction. With photogrammetric methods, building rooftops can be measured manually from stereo pairs, extracted by edge detection methods based on their radiometric characteristics, or obtained with classification methods. They can also be extracted from point clouds generated by photogrammetric image matching methods [15] or LiDAR sensors. Additionally, combining different data sources is a popular approach [16,17].

CityGML is an XML-based open data format for storing and exchanging 3D city models [18]. The format has been adopted by the Open Geospatial Consortium [18] as an international standard [19]. CityGML has a structure suitable for representing city models together with semantic data. These semantic features allow users to perform functions that are not possible without the metadata provided by CityGML. Modules in CityGML reflect the appearance, spatial, and thematic characteristics of an object [19]. In addition to the geometric information of the city model, CityGML contains semantic and thematic information. These data enable users to perform new simulations, queries, and analyses on 3D city models [20].

There are five different levels of detail (LoD) defined in the CityGML data schema. As the LoD increases, the model architecture becomes more detailed and the structures more complex, so different levels of detail can be used for different purposes. CityGML allows the same objects to be visualized at different levels of detail, and thus analyses can be performed at different LoDs [20]. A building in LoD0 is represented by 2.5D polygons either at the roof level height or at the ground level height [21]; LoD0 can also be called the building footprint. In LoD1, the building is represented as a solid model or a multi-faced block model without any roof structures. The building can be separated into different building surfaces called "BuildingParts". LoD2 adds generalized roof structures to LoD1. In addition, thematic details can be used to represent the boundary surfaces of a building. The main difference between LoD1 and LoD2 is that the outer walls and roof of a building can be represented by more than one face and the curved geometry of the building can be represented in the model structure [22]. LoD3 extends LoD2 with openings (windows, doors), detailed roof structures (roof windows, chimneys, rooftops) and detailed façade constructions. LoD4 is the highest level of detail and adds the interior architecture of the building. At this level of detail, all interior details, such as furniture and rooms, are represented with textures.

The building reconstruction methods can be divided into two categories: model-driven and data-driven [23].
In data-driven methods, the algorithm reconstructs buildings by using the DSM as the main data source and analyzes it as a whole without associating it with any set of parameters [24]. The aim of the model-driven method is to match the roof geometry derived from the DSM with the roof types in its library [8], and it uses the best-matched roof type with preliminary assumptions made about the roof [24]. This approach guarantees that the reconstructed roof model is topologically correct, though problems may occur if the roof shapes do not have any match in the roof library [23]. Other studies using model-driven building reconstruction can be found in [24–26]. The method proposed here uses the model-driven approach.

Manual mensuration of building geometries, especially in large cities, requires a great deal of time, money, and labor. Photogrammetric approaches are widely used for 3D extraction of surfaces and features, and photogrammetry is one of the most preferred technologies in data production for 3D city models [27]. With the help of photogrammetric methods, feature geometries and digital elevation models (DEMs) can be extracted automatically from stereo images [28]. Semi-automatic reconstruction of buildings can be done by combining DSMs with cadastral footprints so that manual production time and labor can be reduced. Such DSMs are mostly obtained by stereo and/or multi-image matching approaches and by using aerial LiDAR sensors. So far it is not possible to reconstruct building geometries fully automatically with high accuracy and level of detail (i.e., resolution). Fully-automatic 3D city model generation is still an active research topic which attracts the attention of many researchers, and there are studies on automation of the production and on reliability improvements [29].

One of the main criteria for 3D city model production is to create models that are close to reality both in geometry and in appearance (visual fidelity). Textures obtained from optical images are used for providing visually correct models quickly and economically. There are also cases where untextured building models are in use; however, 3D city models should be textured where visualization is in the foreground and the user needs to get an idea of the area at first sight. Regardless of the level of detail in their geometries, untextured 3D city models will always be visually incomplete. With the development of aerial photogrammetric systems equipped with nadir and oblique camera systems, automatic texturing of 3D city models has become quite feasible. Nadir cameras are the most suitable for texturing the roofs of the buildings, whereas oblique cameras are better at texturing the building façades. Accordingly, for automatic texturing of 3D city models, the best strategy is to select the rooftop textures from the nadir images and the façade textures from the oblique images. The view of the façades can be obtained clearly with this approach and the spatial resolution would also be higher. Similar to nadir image acquisition, the season and the time of the flight affect the radiometric quality of the oblique images. Früh et al. [30] used an approach which combines a 3D city model obtained from aerial and ground-based laser scans with oblique aerial imagery for texture mapping. Früh and Zakhor [31] presented a method that merges ground-based and airborne laser scans and images.
Additional terrestrial data used in the method naturally leads to more detailed models, but also increases the size of the datasets considerably. A more effective technique to model the façades is to extract textures from terrestrial images and map them onto the polygonal models. When this process is done in a manual fashion, however, texturing of a single building is a tedious task and can easily take up to several hours. For a large number of building façades, such an approach is not efficient and usually not applicable. Kada et al. [32] presented an approach that automatically extracts façade textures from terrestrial photographs and maps them to geo-referenced 3D building models. In contrast to the application of standard viewers, this technique allows modeling even complex geometric effects, like self-occlusion or lens distortion, and allows for a very fast and flexible on-the-fly generation of façade textures using real-world imagery.

Aerial photogrammetry for map production is a widely used approach throughout the world. Aerial LiDAR sensors are not yet commonly used for point cloud acquisition as they have high initial costs [33]. In addition, although LiDAR sensors can provide highly accurate and dense elevation data, the spectral and radiometric resolutions of the intensity images are quite low in comparison to large-format aerial cameras and are thus not suitable for feature extraction for basemap production purposes. Stereo data acquisition with large-format aerial cameras is still the most suitable approach for 3D city model generation, as point clouds and other types of information can be extracted from them with sufficient density, accuracy, and resolution (level of detail, feature types, etc.) at the city scale. It is possible to produce a point cloud with a density of 100 points per square meter from stereo aerial images with a resolution of 10 cm (one matched point per pixel, i.e., one point per 0.1 m × 0.1 m). Such a point cloud is denser than most LiDAR datasets in practical projects [29].

Nowadays large cities comprise hundreds of thousands of buildings, and manual extraction of their geometries for city model generation purposes is very costly in terms of time and money. Considering the amount and variety of the data obtained with aerial photogrammetric methods over decades, the possibility of using these datasets for automatic or semi-automatic 3D city model generation without additional data acquisition is discussed in this study. A new approach is introduced for this purpose and applied using the data of a photogrammetric flight mission carried out over Cesme Town near Izmir in 2016. The speed, accuracy, and capacity of existing technologies have been investigated within the study to analyze the challenges and open issues in the overall city model generation process. The generated model is then used to create a 3D GIS environment on a web platform. Queries, analyses, or different simulations can be performed on this model with the help of semantic data. The building reconstruction approach has been validated using reference data, and the problems encountered during the processing and the potential improvements to the process have been investigated.
The suitability of the basemaps and aerial imagery and of the outputs has also been analyzed and the results are presented here. In addition, using the proposed method, it is possible to reconstruct the cities of the past with little effort using the datasets obtained from aerial photogrammetric missions.

2. Materials and Methods

2.1. Study Area and Dataset

Izmir is the largest metropolitan city in Turkey's Aegean Region. Cesme is a coastal town located 85 km west of Izmir and also a very popular holiday resort with several historical sites. A large majority of the structures in the Cesme area are in the Aegean architectural style, and most of the buildings are located on the coastline, usually at low elevations. The highest building in Cesme has a height of approximately 45 m. A general view of the study area is given in Figure 1.

Figure 1. A general view of the study area in Cesme, Izmir on the Google Earth image.

The study area covers the whole Cesme region with a size of 260 km². Within the project, a total of 4468 aerial photos were taken in May 2016 with 80% forward overlap and 60% lateral overlap using the UltraCam Falcon large-format digital camera from Vexcel Imaging, Graz, Austria [34]. The photos have been taken from an average altitude of 1500 m above mean sea level and have an average GSD (ground sampling distance) of 8 cm and a footprint of 1400 m × 900 m. A total of 306 ground control points (GCPs) have been established prior to the flight with an interval of 1.5 km. Figure 2 shows the level of detail in the images and the ground signalization of one control point. The block configuration is provided in Figure 3. Orthometric heights have been measured by a leveling method and planimetric coordinates have been measured using Global Navigation Satellite System (GNSS) devices. The GCPs have been used in the photogrammetric bundle block adjustment process.

Manual feature extraction for basemap production from the stereo aerial images has been done by photogrammetry operators. Contour lines (1 m, 2 m, 5 m, 10 m), slopes, and breaklines have been measured along with the generated photogrammetric height points, which have later been converted to a TIN (triangular irregular network) dataset and used for grid DTM generation. Lastly, orthophotos with 10 cm pixel size have been generated for the whole area.
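For reference, the reported flying height and GSD are connected by the standard nadir-camera scale relation (this derivation is ours, not the paper's); the focal length f and the physical pixel size p of the camera are left symbolic, since they are not stated in this section:

\[ \mathrm{GSD} = \frac{H}{f}\, p , \qquad \text{image footprint} = (n_x \cdot \mathrm{GSD}) \times (n_y \cdot \mathrm{GSD}) \]

where H is the flying height above ground and n_x × n_y is the sensor format in pixels. Since the Cesme coastline lies close to sea level, H is approximately 1500 m above ground, and with GSD = 8 cm the relation fixes the focal length expressed in pixel units at f/p = H/GSD ≈ 18,750.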


Figure 2. Level of detail in one image (left) and the ground signalization of one GCP (right).

Figure 3. Aerial photogrammetric image block (left) and the GCP distribution (right) over the study area in Cesme, Izmir.

2.2. The Overall Workflow

The overall workflow of the study involves photogrammetric data production and building reconstruction together with texturing, as depicted in Figure 4. The photogrammetric processing mainly includes flight planning, GCP signalization on the ground, image acquisition and triangulation, and manual feature extraction for basemap production in CAD (computer-aided design) format. The buildings have been reconstructed automatically using the CAD data. A post-photogrammetric data collection process has been performed to generate a dense DSM of the area with a 30 cm grid interval using Agisoft Photoscan Pro, St. Petersburg, Russia [35]. All contour lines, breaklines, and photogrammetric height points measured in Microstation from Bentley Systems, Exton, PA, USA [36] and saved in CAD format (Microstation DGN) have been converted into TIN datasets and grid DTMs using FME Workbenches developed by Safe Software, Surrey, BC, Canada [37]. The buildings have been reconstructed using the BuildingReconstruction (BREC) software from VirtualCity Systems GmbH, Berlin, Germany [38] and the generated models have been converted into CityGML format as output. The texturing has been done automatically using the CityGRID Texturizer software from UVM Systems, Klosterneuburg, Austria [39]. As a last step, the textured models have been converted into the CityGML format to ensure data interoperability between different systems.
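Since CityGML is the final exchange format of the workflow, a minimal, hypothetical Python/lxml sketch of how such an exported LoD2 file could be inspected is given below. The file name "cesme_lod2.gml" is illustrative only; the namespace URIs follow the CityGML 2.0 convention.

```python
# Hypothetical sketch: counting buildings and reading basic semantic attributes
# from a CityGML 2.0 file with lxml. Not part of the published workflow.
from lxml import etree

NS = {
    "bldg": "http://www.opengis.net/citygml/building/2.0",
    "gml": "http://www.opengis.net/gml",
}

tree = etree.parse("cesme_lod2.gml")           # assumed output file of the workflow
buildings = tree.findall(".//bldg:Building", NS)
print(len(buildings), "buildings found")

for b in buildings[:5]:
    gml_id = b.get("{http://www.opengis.net/gml}id")
    height = b.findtext("bldg:measuredHeight", default="n/a", namespaces=NS)
    roofs = b.findall(".//bldg:RoofSurface", NS)
    print(gml_id, "measured height:", height, "roof surfaces:", len(roofs))
```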


Figure 4. Overall workflow of 3D city model implementation.

3. Implementation of the 3D City Model

3.1. Image Georeferencing with a Photogrammetric Bundle Block Adjustment

Exterior orientation parameters (EOPs) of the aerial photos have already been measured during the photogrammetric flight mission using a GNSS antenna and an INS (inertial navigation system). In order to achieve sub-pixel accuracy, a photogrammetric triangulation process with self-calibration has been applied using GCPs and tie points, and the existing GNSS and INS measurements have been used as weighted observations in the adjustment. The project area has been split into seven sub-blocks for optimization of the bundle block adjustment process (Figure 5). The image measurement of the GCPs and of the initial tie points in strip and cross-strip directions has been done manually by operators, and further tie points have been extracted automatically with the Intergraph Z/I automatic triangulation module [40]. The bundle adjustment has been performed using the same software with self-calibration and blunder removal by statistical analysis. The systematic error modeling with additional parameters includes atmospheric refraction, Earth curvature, and camera distortion (radial and de-centering). The number of images in each block, the numbers and properties of the GCPs, the size of the block area, and the accuracy measures (sigma naught of the adjustment and the standard deviations of the GCPs in image space) are shown in Table 1.

Figure 5. Photogrammetric blocks generated for the area.

Table 1. Photogrammetric block properties and triangulation results.

Sub-Block No | No. of Images | GCPs Full (XYZ) | GCPs Planimetric (XY) | GCPs Height (Z) | Sigma Naught (microns) | GCP Image σx (microns) | GCP Image σy (microns)
1 | 424 | 34 | - | - | 1.9 | 1.3 | 1.0
2 | 661 | 51 | - | 1 | 1.7 | 1.1 | 0.8
3 | 287 | 22 | - | - | 1.5 | 0.9 | 0.7
4 | 745 | 55 | 3 | - | 1.6 | 1.0 | 0.8
5 | 751 | 61 | - | - | 1.8 | 1.1 | 0.9
6 | 807 | 62 | - | 1 | 1.2 | 1.0 | 0.8
7 | 793 | 61 | - | 1 | 1.4 | 0.8 | 0.7
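To relate the image-space values of Table 1 to ground units (a conversion not made in the text), the image scale can be applied:

\[ \sigma_{\mathrm{ground}} \approx \sigma_{\mathrm{image}} \cdot \frac{H}{f} = \frac{\sigma_{\mathrm{image}}}{p} \cdot \mathrm{GSD} \]

If a pixel pitch on the order of p ≈ 6 µm is assumed (the sensor's pixel size is not given in this section), the sigma naught values of 1.2–1.9 µm correspond to roughly 0.2–0.3 pixels, i.e., about 1.6–2.5 cm at the 8 cm GSD.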

3.2. Building Footprint and Attribute Generation from Basemaps

Although aerial photographs contain almost all visible information related to the surface of the project area, they cannot be compared to traditional vector maps before they are interpreted by humans or computers. Even a good map user can hardly understand and interpret aerial photographs directly, and image interpretation for map production is still an expert task. A vector map is an abstraction of selected details of the scene/project area. Figure 6 shows an overlay of an aerial orthoimage and manually extracted features stored as vectors. In addition, maps contain symbols and text as further information. Interpretation, analysis, implementation, and use of vector map information are easier than with orthophotos. Details and attribute information in vector maps are gathered from the stereo models during analysis, so that they are clearer, more understandable, and interpretable. Feature points and lines showing the terrain characteristics (e.g., creeks, ridges, etc.), such as contour lines, DTM grid points, breaklines, and steep slopes, are also produced manually or semi-automatically during the processing of the aerial photos. In addition, the information stored in the vector maps has been extracted to create an attribute scheme for the buildings in this study.

Figure 6. Manual feature extraction from stereo aerial images.

BuildingReconstruction, which is used for 3D city model generation in this study, requires building footprints and attributes as input in ESRI Shapefile (.shp) format. The basemaps produced in aerial photogrammetric projects are often stored in CAD formats as standard, and every mapping project generally contains more than 200 layers. The data and information stored in these layers are usually adequate for extracting the building footprints and a few types of attributes. The building footprints used here can be considered as roof boundaries, since it is usually not possible to see the building footprints on the ground in aerial images. The buildings are collected and stored in different CAD layers, such as residential, commercial, public, school, etc., according to their use types. In addition to the buildings, other structures, such as sundries, lighting poles, or roads, are also measured by the photogrammetry operators.

In BuildingReconstruction, it is required to store the building geometries and attributes in a single ESRI shapefile. These geometries and attributes are later converted into CityGML format while exporting the building geometry after the reconstruction is completed. The attributes can be used to implement queries on the reconstructed city model. However, the CAD files contain just vector data and do not include the attributes as text assigned to geometries. While converting the CAD data into shapefile format, some implicit attributes of the buildings have been extracted using ETL (extract-transform-load) transformers of the FME software. As an example, the number of building floors is stored in a different CAD layer as text (also a vector unit), and its location falls inside the building polygon, as can be seen in Figure 6. A spatial transformer (i.e., "SpatialFilter" of FME) is used to transform these numbers into the "number of floors" attribute of every building. The transformer compares two sets of features to see if their spatial relationships meet the selected test conditions (i.e., filter is within candidate), and if a number (i.e., text) is located inside a polygon, it is assigned as an attribute to that polygon (i.e., building). In addition, the center coordinates of the building footprints have been extracted and saved as new attributes using another spatial operation (i.e., CenterPointExtractor). A total of eight attributes have been extracted from the CAD layers using the spatial ETLs and made ready to be input into BuildingReconstruction. A part of the project area with all CAD layers and the building footprints extracted thereof is given in Figure 7.
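The same point-in-polygon attribute assignment can be reproduced outside FME; the sketch below is a hypothetical GeoPandas re-implementation, with layer and column names ("buildings.shp", "floors", etc.) chosen for illustration rather than taken from the project's actual schema.

```python
# Hypothetical re-implementation of the attribute assignment described above
# using GeoPandas instead of FME's SpatialFilter/CenterPointExtractor.
import geopandas as gpd

footprints = gpd.read_file("buildings.shp")            # building (roof) polygons
floor_labels = gpd.read_file("floor_text_points.shp")  # text points holding the floor count

# Point-in-polygon join: every text point that falls inside a footprint
# contributes its value as the "floors" attribute of that footprint.
joined = gpd.sjoin(footprints, floor_labels[["floors", "geometry"]],
                   how="left", predicate="contains")

# Centroid coordinates as additional attributes (the CenterPointExtractor step).
joined["center_x"] = joined.geometry.centroid.x
joined["center_y"] = joined.geometry.centroid.y

# One record per footprint with a unique ID, as required by BuildingReconstruction.
joined = joined[~joined.index.duplicated(keep="first")]
joined["BLDG_ID"] = range(1, len(joined) + 1)
joined.drop(columns=["index_right"]).to_file("footprints_with_attributes.shp")
```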

Figure 7. A part of the basemap with all CAD layers (left) and the building footprints extracted from a set of them (right).

3.3. Digital Terrain Model (DTM) Extraction

DTMs are 3D representations of the terrain elevations found on the bare Earth's surface. In this study, DTMs are required to make sure that the buildings are located exactly on the terrain, not under or above it. The DTM is also used to calculate the base height of a building from the terrain. The resolution requirements for the DTMs are substantially lower than for the DSMs in the methodology proposed here. A grid resolution between 1–5 m is usually sufficient, and the general recommendation from BuildingReconstruction is to use a grid interval of 1 m. The software works best with tile-based raster data, since raw point cloud data can be very massive and difficult to process. Figure 8 shows an overview of the manually extracted terrain features and a zoomed view of it.


Figure 8. General view of the manually extracted terrain features (left) and close view (right).

Within the photogrammetric flight project, contour lines with different intervals (i.e., 1 m, 2 m, 5 m, 10 m), breaklines, and grid DTM points have been manually measured using Bentley MicroStation software by photogrammetry operators. More than 1.2 million height points and thousands of contour lines and breaklines have been merged and converted into a TIN dataset and a single DTM with 1-m grid interval (Figure 9).
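The TIN-to-grid step can be approximated with a few lines of Python; the sketch below linearly interpolates scattered height points on a Delaunay TIN and resamples them to a 1 m grid. File names and the simple text format are assumptions for illustration, and breakline constraints are not enforced here (a true constrained TIN, as produced by the FME workbench, would honor them).

```python
# Minimal sketch of gridding manually measured terrain points to a 1 m DTM.
import numpy as np
from scipy.interpolate import LinearNDInterpolator

# x, y, z per line, e.g. exported from the CAD height layers (illustrative file name)
pts = np.loadtxt("terrain_points.xyz")
interp = LinearNDInterpolator(pts[:, :2], pts[:, 2])   # Delaunay-based TIN interpolation

xmin, ymin = pts[:, 0].min(), pts[:, 1].min()
xmax, ymax = pts[:, 0].max(), pts[:, 1].max()
xi = np.arange(xmin, xmax, 1.0)                        # 1 m grid spacing
yi = np.arange(ymin, ymax, 1.0)
xx, yy = np.meshgrid(xi, yi)
dtm = interp(xx, yy)                                   # NaN outside the TIN's convex hull

np.save("dtm_1m.npy", dtm)                             # or write a GeoTIFF with rasterio
```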

Figure 9. Overview of the generated grid DTM from manually measured terrain data of the whole Cesme area (left) and a zoomed view (right).

3.4. Digital Surface Model (DSM) Generation

For the DSM generation, a number of different software packages have been investigated to obtain an optimum balance between the DSM quality (completeness, density, and accuracy) and the production speed. Finally, several high-density DSMs have been generated using the Agisoft Photoscan Pro software, since it allows parallel processing through GPU (graphics processing unit) usage along with the CPU (central processing unit) during the DSM generation. The DSM of the project area has been generated in seven sub-blocks, as these blocks have already been formed for the triangulation (Figure 5). Since BuildingReconstruction can reconstruct a maximum of a couple of thousand buildings in a single process, the larger blocks have again been divided into more sub-blocks to decrease the processing area and time. While selecting the aerial photographs in every sub-block for the building reconstruction step, it has been taken into consideration that buildings near the outer boundary of the project area should be visible in at least six or more photographs for better reconstruction. Adjusted EOPs and camera calibration parameters have been used as input in Photoscan to provide the georeferencing of the images. However, the images have been re-oriented by generating tie points and re-measuring the GCPs due to software requirements. Based on the established camera orientations, the software generates a dense point cloud (i.e., DSM). Figure 10 depicts the tie points and the dense point cloud generated in one part.
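For readers who want to script this step rather than run it interactively, a heavily hedged batch sketch with the Agisoft Python API is given below. The paper used PhotoScan Pro; the module and method names here follow the later "Metashape" naming (roughly the 1.5–1.8 API) and may differ in other versions (e.g., buildDenseCloud was renamed buildPointCloud in 2.x). Paths are illustrative only.

```python
# Hypothetical batch sketch of the dense-matching step (not the project's actual script).
import glob
import Metashape

doc = Metashape.Document()
chunk = doc.addChunk()
chunk.addPhotos(glob.glob("subblock_1/images/*.tif"))

# Tie-point generation and re-orientation of the images (the adjusted EOPs could
# alternatively be imported as reference values instead of re-aligning from scratch).
chunk.matchPhotos()
chunk.alignCameras()

# Depth maps and the dense point cloud (the DSM source), computed on GPU + CPU.
chunk.buildDepthMaps()
chunk.buildDenseCloud()

doc.save("subblock_1.psx")
```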

FigureFigure 10.10. Generated tie points ((leftleft)) andand densedense pointpoint cloudcloud ((rightright)) usingusing AgisoftAgisoft Photoscan.Photoscan. Figure 10. Generated tie points (left) and dense point cloud (right) using Agisoft Photoscan. Although a DSM with a resolution of 25 cm is already considered to be sufficient for the Although a DSM with a resolution of 25 cm is already considered to be sufficient for the BuildingReconstructionAlthough a DSM with software a resolution to generate of 25 building cm is already models considered with optimum to be accuracy, sufficient five for point the BuildingReconstruction software to generate building models with optimum accuracy, five point BuildingReconstructionclouds in Agisoft Photoscan software have to been generate generated building with modelsdifferent with densities optimum (at 1, accuracy,2, 4, 8, and five 16 pointpixels clouds in Agisoft Photoscan have been generated with different densities (at 1, 2, 4, 8, and 16 pixels cloudsintervals) in Ag forisoft test Photoscan purposes. have The beenoptimal generated results withwith differentBuildingReconstruction densities (at 1, 2, have 4, 8 , beenand 16achieved pixels intervals) for test purposes. The optimal results with BuildingReconstruction have been achieved intervals)using DSMs for withtest 25purposes.–50 cm resolutions The optimal for resultsLoD2 buildings. with BuildingReconstruction Considering this fact, have the DSMs been haveachieved been using DSMs with 25–50 cm resolutions for LoD2 buildings. Considering this fact, the DSMs have been usinggenerated DSMs at with a resolution 25–50 cm of resolutions 30 cm for thefor LoD2whole buildings. project area. Considering An overview this andfact, a the zoomed DSMs viewhave ofbeen the generated at a resolution of 30 cm for the whole project area. An overview and a zoomed view of the generatedgenerated at DSMs a resolution from one of sub30 cm-block for theis shown whole in project Figure area. 11. An overview and a zoomed view of the generated DSMs from one sub-block is shown in Figure 11. generated DSMs from one sub-block is shown in Figure 11.

Figure 11. An overview (left) and a close view (right) of the generated DSM.

3.5. Automated Building Reconstruction

In this study, the BuildingReconstruction 2018 software has been used for automatic reconstruction of the buildings and conversion into the CityGML format. The software enables automatic generation and manual editing of building models in LoD1 and LoD2 based on DSMs and building footprints. As described in [38], the DSM is analyzed for each building outline to detect the roof shape. The following algorithms are employed for this purpose:

• Rectangle intersection: Simple geometries are processed with rectangular intersections. Intersecting rectangles and squares derived from the footprint are created to construct the building in 3D.
• Cell decomposition: The cell decomposition algorithm is used for buildings with multiple roof forms and/or with different heights [8]. A built-in library of more than 32 main and connecting roof types is used to detect the best fitting roof type for each generated cell.
• Footprint extrusion: This algorithm is used to create LoD1 models of large areas and also flat-roofed LoD2 buildings.

The main advantage of this method is the automatic reconstruction of the 3D city models along with the rich semantic information and thematic attributes of the buildings. It works best with tile-based input files. If a large region is required to be reconstructed, the users need to generate tiles with 1 km² or smaller sizes containing the footprint data, laser data, and orthophotos (a minimal tiling sketch is given below). A model-driven approach is used in BuildingReconstruction to detect and reconstruct the general roof geometry and type using a library containing 32 types of roof geometry popularly used in urban areas around the world (Figure 12). For every given polygon in the input footprint shapefile, a discrete 3D building geometry is reconstructed. The reconstructed geometry is ensured to be fully closed, without any holes or gaps between surfaces, and to be fully compatible with the given building footprint. Along with the automated reconstruction, the software offers an interactive editor for manual editing of the building geometries.

Figure 12. BuildingReconstruction 2018 software roof library [38].

The input data requirements for the BuildingReconstruction software are as follows:

• A grid DSM with 25–50 cm grid interval, optimally
• A DTM with 1-m grid interval, optimally
• 2D building footprints with attributes (each building polygon must have a unique ID)
• An orthophoto for visual checks

Figure 13 shows screenshots of the DSM and the DTM from one sub-block. The DSMs are used to determine the roof geometry and characteristics, whereas the DTMs are used to calculate the height and the ground geometry (footprint) of the building. Footprints are also used to determine the geometric structures of the walls in the building model. The software also supports processing without a DTM; if no terrain model is available, the base height of a building is determined from a buffer in which the lowest point around the building is found. However, this approach is successful only in areas where the building density is low.
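The tiling requirement mentioned above can be met with a few lines of scripting; the sketch below splits the footprint shapefile into 1 km × 1 km tiles using GeoPandas/Shapely instead of the project's actual tooling, with illustrative file names.

```python
# Hypothetical tiling of building footprints into 1 km x 1 km tiles so that each
# BuildingReconstruction run stays below a few thousand buildings.
import os
import geopandas as gpd
from shapely.geometry import box

footprints = gpd.read_file("footprints_with_attributes.shp")
xmin, ymin, xmax, ymax = footprints.total_bounds
tile_size = 1000.0                     # metres, assuming a projected CRS
os.makedirs("tiles", exist_ok=True)

tile_id = 0
y = ymin
while y < ymax:
    x = xmin
    while x < xmax:
        tile = box(x, y, x + tile_size, y + tile_size)
        # assign each building to the tile containing its centroid to avoid duplicates
        subset = footprints[footprints.geometry.centroid.within(tile)]
        if len(subset) > 0:
            subset.to_file(f"tiles/footprints_tile_{tile_id}.shp")
        tile_id += 1
        x += tile_size
    y += tile_size
```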


Figure 13. 25 cm resolution DSM and 1 m resolution DTM input for the BuildingReconstruction software.

The buildings with simple roof types and roof heights can in general be reconstructed more accurately than the complex ones. This problem arises because the software cannot match the complex roof types with the roof models in its library. Additionally, small parts on the roofs are mostly ignored by the software. The main advantage of the model-driven method is reconstructing models without any gaps or holes between surfaces and with correct geometry. In addition, the production speed is slightly better than other methods. The main disadvantage is the requirement of a very comprehensive roof library for comparing the roof geometries generated from the given DSM. Figure 14 shows an example of the automatically generated buildings with the BuildingReconstruction 2018 software from the project area.

Figure 14. Automatically generated buildings with BuildingReconstruction 2018 software in Cesme Town.

Figure 15 shows a part of the automatically generated roof geometries overlaid on the orthophoto with 10 cm GSD. The reconstruction accuracy has been analyzed by visual checks on 1000 buildings and the results are provided in Table 2. During the visual assessments, the roof structures smaller than the DSM grid interval have been excluded, and if the main roof structure is modeled correctly, it is considered as true.



Figure 15. Automatically generated roof geometries overlaid on the orthophoto with 10 cm resolution.

Table 2. Automatic building reconstruction accuracy based on visual analysis of 1000 buildings.

DSM Resolution    Processing Time    LoD1 Reconstruction Success    LoD2 Reconstruction Success
10 cm             6 min 50 s         100%                           73.8%
25 cm             1 min 54 s         100%                           72.7%
50 cm             1 min 2 s          100%                           65.4%
100 cm            39 s               100%                           55.1%

Based on the investigations performed in this study, the advantages (+) and disadvantages (−) of the BuildingReconstruction software can be listed as follows:

+ Large-area reconstruction of LoD1 and LoD2 building models is possible.
+ Building geometries are suitable for texturing with nadir/oblique aerial images.
+ Direct export to CityGML format with automatic and flexible attribute mapping is possible (a minimal inspection sketch is given after this list).
+ Fast processing of 3D buildings with semantic information is possible.
+ High geometric accuracy can be obtained with a ground plan-based reconstruction.
− Reconstruction is limited to the existing roof library, and complex roof types cannot be reconstructed.
− Smaller roof parts are neglected during the reconstruction.
− The manual editing interface is not user-friendly.
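As a quick check of an exported file, the short sketch below parses the CityGML XML with lxml and lists the buildings and their measured heights. The file name is hypothetical; the namespaces follow the CityGML 2.0 encoding standard [18].

```python
# Sketch: inspect a CityGML 2.0 export - count the buildings and print their
# gml:id and measured height (assumed file name; CityGML 2.0 namespaces).
from lxml import etree

NS = {
    "bldg": "http://www.opengis.net/citygml/building/2.0",
    "gml": "http://www.opengis.net/gml",
}

tree = etree.parse("cesme_tile_0_0.gml")
buildings = tree.findall(".//bldg:Building", namespaces=NS)
print(len(buildings), "buildings in the file")

for b in buildings[:10]:
    gml_id = b.get("{http://www.opengis.net/gml}id")
    height = b.findtext("bldg:measuredHeight", namespaces=NS)
    print(gml_id, height)
```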
3.6. Automatic Building Texturing

Existing 3D city models can be textured from oriented images to increase the attractiveness of the digital city models. Building and terrain models can be textured automatically if images of the models with accurate orientation parameters exist. Automatic texturing of the models generated in this study has been carried out with the CityGRID Texturizer software by UVM Systems GmbH [39]. The automatic texturing workflow of CityGRID Texturizer is given in Figure 16. Oriented aerial photographs can be used for the texturing of roofs and façades, as well as mobile mapping data for the high-resolution texturing of street-side façades in CityGRID Texturizer. The façade texture is assigned interactively or fully automatically from oriented images. Terrain texture is always calculated dynamically, whereas roof and façade textures need to be pre-calculated and saved for each building unit. Orthophotos of the façades can be generated by a simple affine transformation in the texture tool. More details on the texturing process are elaborated in the following sub-sections.


Figure 16. CityGRID Texturizer automatic texturing workflow [39].

3.6.1. Image Acquisition Requirements and the Radiometric Pre-Processing for Texturing

There are several criteria for the acquisition of airborne images for automatic texturing. The optimal resolution for automatic texturing is usually 10 cm or better, and the flight altitude to obtain this resolution with large-format cameras is usually between 800 and 2500 m above the terrain. These values can vary depending on different factors, such as airport restrictions in the area, building heights, terrain elevation, camera type (e.g., camera format), camera parameters (in particular focal length), image acquisition method (vertical or oblique) and other factors affecting the image GSD requirements. In this project, the images have been acquired from an altitude of 1500 m with 8 cm resolution with the aim of producing basemaps for Cesme City. In this type of acquisition, the vividness of the colors is not considered as an important aspect in the "Basemap" generation process; however, in order to generate a fine texture, the sharpness of the images together with the vividness of colors is of great importance. In order to obtain a more vivid and impressive texture, the images have been pre-processed to improve the realistic look of the color tones and the image sharpness. Figure 17 shows an example of the original and the enhanced images.


Figure 17. Raw aerial image (left) and pre-processed (color-enhanced) image (right) of Cesme Town.
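The exact enhancement chain applied to the images is not detailed here beyond improving the color tones and sharpness; the sketch below shows one possible combination (contrast and saturation boost plus unsharp masking) with Pillow, under an assumed file name.

```python
# Sketch: simple radiometric enhancement of an aerial image - contrast, color
# vividness and sharpness. One possible chain, not the project's exact settings.
from PIL import Image, ImageEnhance, ImageFilter

img = Image.open("raw_aerial.jpg")
img = ImageEnhance.Contrast(img).enhance(1.15)   # mild contrast stretch
img = ImageEnhance.Color(img).enhance(1.25)      # more vivid colors
img = img.filter(ImageFilter.UnsharpMask(radius=2, percent=120, threshold=3))
img.save("enhanced_aerial.jpg", quality=95)
```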

3.6.2. Image and Model Orientation

A basic requirement of model texturing is to ensure that the model coordinates and the image EOPs are defined in the same reference coordinate system. In addition, the camera calibration data should be provided for the accurate orientation of the images. If no errors are detected in the orientation validation, the database setup and load is carried out to establish a project space, which contains the orthophotos, DTMs, building models, and aerial photographs of the project. The pre-defined project (texturing) area is enlarged with a buffer zone so that the buildings near the outer border can be included completely. After importing all data into a database, aerial photographs and adjusted exterior orientation parameters are matched with buildings, DTMs and orthophotos in the database. Consistency checks between the aerial photographs, the interior and exterior orientation parameters, and the buildings are performed, and the texturing process is started if there are no errors.
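A simple pre-flight check of this consistency requirement can be scripted. The sketch below verifies that all image projection centres fall inside the buffered project area once both datasets are expressed in the same CRS; the file names, column names and the EPSG code are assumptions for illustration only.

```python
# Sketch: consistency check before texturing - the building model and the image
# EOPs must share one reference system, and every projection centre should fall
# inside the buffered project area (names and EPSG code are assumed).
import geopandas as gpd
import pandas as pd
from shapely.geometry import Point

PROJECT_EPSG = 5254   # assumed projected CRS of the city model
BUFFER = 500.0        # metres; enlarges the area so border buildings get textures

buildings = gpd.read_file("buildings_lod2.shp").to_crs(epsg=PROJECT_EPSG)
project_area = buildings.unary_union.convex_hull.buffer(BUFFER)

eops = pd.read_csv("exterior_orientation.csv")   # columns: image, X0, Y0, Z0, omega, phi, kappa
centres = [Point(x, y) for x, y in zip(eops.X0, eops.Y0)]

outside = [img for img, c in zip(eops.image, centres) if not project_area.contains(c)]
if outside:
    print("Projection centres outside the project area:", outside)
else:
    print("All", len(centres), "projection centres are inside the buffered project area.")
```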

3.6.3. Texture Mapping Process

A pre-selection of the photos for the texturing area results in faster processing. To find the most suitable photos for a building, the algorithm splits the building into sub-parts and assigns each of them to different classes. The buildings are usually split into four different objects: roof, façade, base, and fringe. This object structure is similar to the object structure in the CityGML specification, and the software can directly use the CityGML object structure in the automatic texturing process. The parts of the buildings that will be used in the texturing process can also be selected by the user. For example, only the roofs can be textured in a project and the other parts of the building can be left un-textured. The nadir (vertical) stereo images in this study have been taken with 80% forward and 60% lateral overlap, which is sufficient for texturing both the roofs and the façades.

The most important factor in texturing roofs is the selection of the best image for each roof, as every object appears in multiple photos due to the high overlap ratio. The central part of the photos has the best radiometric appearance for buildings, as no relief displacement occurs in this part. The best texture for each roof exists basically in the photo where the building is located close to the image center. In other words, the principal axis of the image should intersect the roof top perpendicularly, if possible. For example, in the application dataset, the same roof can be found in nine or ten different photos due to the high overlap. The software finds the most suitable photograph for texturing the roof by comparing the center point coordinates of the roof and of the aerial photographs. After the comparison, the photo with the center point closest to the roof center is used for texturing.
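The selection rule described above (use the photo whose centre point is closest to the roof centre) can be illustrated with a few lines of code. The data layout below is an assumption for illustration, not the CityGRID implementation.

```python
# Sketch: choose the texture photo for each roof as the image whose ground
# centre point is nearest to the roof centroid (assumed file and column names).
import numpy as np
import pandas as pd
import geopandas as gpd

roofs = gpd.read_file("footprints.shp")             # roof outlines with unique IDs
photos = pd.read_csv("exterior_orientation.csv")    # columns: image, X0, Y0

px = photos.X0.to_numpy()
py = photos.Y0.to_numpy()

best = []
for centroid in roofs.geometry.centroid:
    d2 = (px - centroid.x) ** 2 + (py - centroid.y) ** 2
    best.append(photos.image.iloc[int(np.argmin(d2))])

roofs["texture_image"] = best
roofs.to_file("roofs_with_texture_image.gpkg", driver="GPKG")
```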


Façade texturing is done with a different approach from the roof texturing. This approach is flexible and affected mainly by the camera system and the city structure. Other parameters are whether the camera is large- or medium-format, how high the buildings in the city are, how dense the vegetation cover is, whether there are fringes on the buildings, and the topographical characteristics. The image orientation angles along with the center coordinates are of great importance for the façade texturing. The façades appear to be better suited for texturing in the regions closer to the outer border of the photograph. In terms of the viewing angle, the building has a clearer outward appearance there, but the camera distortions should also be taken into account in this case.

The software cuts out the chosen photos for all façades and saves them in separate image files. While cutting out the aerial photos, a predefined buffer zone is kept at the peripherals. Thus, if any editing needs to be done on the building geometry, gaps in the building envelope can be prevented. Each texture is then used on that part of the building according to the CityGML data structure. The extracted image is not directly textured onto the façade, but it is related one-to-one based on file names. In other words, the texture parts cut out from each aerial photograph are unique and can only be used for one façade belonging to one building. Every texture image is used only for one building or façade. A part of the textured city model area is presented in Figure 18.
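The cut-out with a safety buffer can be pictured with the following sketch. In practice the pixel bounding box of a façade would come from projecting the façade polygon into the chosen image with its orientation parameters; the file names and coordinates here are hypothetical.

```python
# Sketch: cut a facade texture out of the selected aerial photo, keeping a buffer
# around the projected facade so later geometry edits do not create gaps.
from PIL import Image

BUFFER_PX = 20  # safety margin kept at the peripherals, as described above

def cut_facade(photo_path, bbox, out_path):
    """bbox = (xmin, ymin, xmax, ymax) of the facade in image pixel coordinates."""
    img = Image.open(photo_path)
    xmin, ymin, xmax, ymax = bbox
    crop = img.crop((max(xmin - BUFFER_PX, 0),
                     max(ymin - BUFFER_PX, 0),
                     min(xmax + BUFFER_PX, img.width),
                     min(ymax + BUFFER_PX, img.height)))
    crop.save(out_path)  # one texture file per facade, linked one-to-one by file name

# hypothetical usage: facade 3 of building 49 cut from photo 0412
cut_facade("photo_0412.jpg", (10240, 5632, 10480, 5890), "building_49_facade_3.png")
```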

Figure 18. Exported CityGML buildings with textures.

4. Conclusions

In this study, a workflow has been introduced for the production of 3D city models, which have now become a necessity for better management of cities. Large-format aerial imagery and vector basemaps manually produced from these data have been used for this purpose. As shown here, 3D city models in LoD2 and DSMs can be produced using existing aerial photogrammetric datasets and their project outputs without additional cost and effort. With the proposed method, it is possible to produce the model of Cesme Town (260 km²) within one week by one trained person using a desktop PC. Figure 19 shows views of the resulting product of this study. The created 3D model is automatically textured from aerial images and published online on the web at www.cesme3d.com.


Figure 19. Different parts of the resulting 3D city model of Cesme.

Some of the problems encountered during the work and suggestions for their solutions are as follows:

● Excessive amount of preprocessing for file format conversion: Most of the data used in this study was not in the formats that the software requires and needed to be pre-processed. Most of the manual interaction has been performed for this purpose.
● Incorrect building roof models: The automatic reconstruction of buildings is limited to the roof types existing in the library. In addition, occluded buildings which are partially covered by trees or other objects have been reconstructed incorrectly from the DSMs. In order to fix these problems, visual checks and manual editing have been required. Examples of the false roof reconstructions are provided in Figure 20.
● Software limitations in processing the data of large regions: A significant example of this problem is the BuildingReconstruction software, which can only reconstruct buildings in areas smaller than 1 km². In this study, 43,158 buildings have been reconstructed automatically in an area of 270 km². In order to be able to perform an automatic reconstruction on such a large area, the working area was divided into small parts and then organized into folders by producing a DSM, a DTM, and building footprints for each part. The segments were produced and exported as CityGML separately, and these separate files have later been merged into a single CityGML file to cover the entire city (a minimal merging sketch is given after this list).
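The text does not prescribe how the per-tile CityGML files were merged; as one possible implementation, all cityObjectMember elements of the tile files can be re-parented into a single CityModel with lxml. The file names and the omission of an updated gml:boundedBy envelope are simplifications of this sketch, not the procedure used in the study.

```python
# Sketch: merge per-tile CityGML files into a single CityModel by moving all
# cityObjectMember elements into the first tile's root (assumed file names;
# the merged gml:boundedBy envelope is not recomputed in this simplified version).
import glob
from lxml import etree

CORE = "http://www.opengis.net/citygml/2.0"

tiles = sorted(glob.glob("tiles/*.gml"))
merged_tree = etree.parse(tiles[0])
merged_root = merged_tree.getroot()            # the core:CityModel of the first tile

for path in tiles[1:]:
    root = etree.parse(path).getroot()
    for member in root.findall(f"{{{CORE}}}cityObjectMember"):
        merged_root.append(member)             # lxml moves the element between trees

merged_tree.write("cesme_citymodel.gml", xml_declaration=True,
                  encoding="UTF-8", pretty_print=True)
```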


As a final recommendation, automatic quality control and quality assessment (QC and QA) procedures should be implemented for 3D city models. Such methods are especially crucial for the QC and QA of large datasets. More research on the production of 3D city models should be conducted and the results should be discussed in the literature, since more results would help to open new horizons for higher-quality city models.

Figure 20. Examples of the false reconstructions of the roofs.

Author Contributions: This article is produced from the Master of Science (M.Sc.) thesis of Mehmet Buyukdemircioglu under the supervision of Sultan Kocaman (principal supervisor) and Umit Isikdag (co-supervisor).

Funding: This research received no external funding.

Acknowledgments: The authors gratefully acknowledge the support of Alpaslan Tutuneken for the provision and training of the CityGRID Texturizer software.

Conflicts of Interest: The authors declare no conflicts of interest.

References

1. United Nations. World Urbanization Prospects Highlights; United Nations: New York, NY, USA, 2014.
2. Stadler, A.; Kolbe, T.H. Spatio-semantic coherence in the integration of 3D city models. In Proceedings of the 5th International Symposium on Spatial Data Quality, Enschede, The Netherlands, 13–15 June 2007.
3. Döllner, J.; Kolbe, T.H.; Liecke, F.; Sgouros, T.; Teichmann, K. The virtual 3D city model of Berlin-managing, integrating, and communicating complex urban information. In Proceedings of the 25th Urban Data Management Symposium (UDMS), Aalborg, Denmark, 15–17 May 2006.

4. Kolbe, T.; Gröger, G.; Plümer, L. CityGML: Interoperable access to 3D city models. In Geo-Information for Disaster Management; Springer: Berlin, Germany, 2005; pp. 883–899.
5. Kada, M. The 3D Berlin project. In Photogrammetric Week '09; Wichmann Verlag: Heidelberg, Germany, 2009.


6. Agugiaro, G. First steps towards an integrated CityGML-based 3D model of Vienna. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2016, III-4, 139–146. [CrossRef]
7. Ozerbil, T.; Gokten, E.; Onder, M.; Selcuk, O.; Sarilar, N.C.; Tekgul, A.; Yilmaz, E.; Tutuneken, A. Konya Buyuksehir Belediyesi egik (oblique) goruntu alimi, 3 boyutlu kent modeli ve 3 boyutlu kent rehberi projesi. In Proceedings of the V. Uzaktan Algılama ve Cografi Bilgi Sistemleri Sempozyumu (UZAL-CBS 2014), Istanbul, Turkey, 14–17 October 2014. Available online: http://uzalcbs.org/wp-content/uploads/2016/11/2014_013.pdf (accessed on 21 August 2018).
8. Kada, M.; McKinley, L. 3D building reconstruction from LiDAR based on a cell decomposition approach. Int. Arch. Photogramm. Remote Sens. 2009, 38, 47–52.
9. Haala, N.; Kada, M. An update on automatic 3D building reconstruction. ISPRS J. Photogramm. Remote Sens. 2010, 65, 570–580. [CrossRef]
10. Haala, N.; Rothermel, M.; Cavegn, S. Extracting 3D urban models from oblique aerial images. In Proceedings of the 2015 Joint Urban Remote Sensing Event (JURSE), Lausanne, Switzerland, 30 March–1 April 2015; pp. 1–4.
11. Kocaman, S.; Zhang, L.; Gruen, A.; Poli, D. 3D city modeling from high-resolution satellite images. In Proceedings of the ISPRS Workshop Topographic Mapping from Space (with Special Emphasis on Small Satellites), Ankara, Turkey, 14–16 February 2006.


12. Kraus, T.; Lehner, M.; Reinartz, P. Modeling of urban areas from high resolution stereo satellite images. In Proceedings of the ISPRS Hannover Workshop 2007 High-Resolution Earth Imaging for Geospatial Information, Hannover, Germany, 29 May–1 June 2007.
13. Flamanc, D.; Maillet, G.; Jibrini, H. 3D city models: An operational approach using aerial images and cadastral maps. In Proceedings of the Photogrammetric Image Analysis, Munich, Germany, 17–19 September 2003.
14. Kabolizade, M.; Ebadi, H.; Mohammadzadeh, A. Design and implementation of an algorithm for automatic 3D reconstruction of building models using genetic algorithm. Int. J. Appl. Earth Obs. Geoinf. 2012, 19, 104–114. [CrossRef]
15. El Garouani, A.; Alobeid, A.; El Garouani, S. Digital surface model based on aerial image stereo pairs for 3D building. Int. J. Sustain. Built Environ. 2014, 3, 119–126. [CrossRef]
16. Haala, N.; Brenner, C. Extraction of buildings and trees in urban environments. ISPRS J. Photogramm. Remote Sens. 1999, 54, 130–137. [CrossRef]
17. Suveg, I.; Vosselman, G. Reconstruction of 3D building models from aerial images and maps. ISPRS J. Photogramm. Remote Sens. 2004, 58, 202–224. [CrossRef]
18. Open Geospatial Consortium. City Geography Markup Language (CityGML) Encoding Standard Version 2.0.0. Available online: http://www.opengis.net/spec/citygml/2.0 (accessed on 21 August 2018).
19. Buyuksalih, I.; Isikdag, U.; Zlatanova, S. Exploring the processes of generating LoD (0-2) CityGML models in greater municipality of Istanbul. In Proceedings of the 8th 3DGeoInfo Conference & WG II/2 Workshop, Istanbul, Turkey, 27–29 November 2013.
20. Kolbe, T.H. Representing and exchanging 3D city models with CityGML. In 3D Geo-Information Sciences; Springer: Berlin, Germany, 2009; pp. 15–31.
21. Arefi, H.; Engels, J.; Hahn, M.; Mayer, H. Level of detail in 3D building reconstruction from lidar data. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2008, 37, 485–490.
22. Gröger, G.; Plümer, L. CityGML - Interoperable semantic 3D city models. ISPRS J. Photogramm. Remote Sens. 2012, 71, 12–33. [CrossRef]
23. Dorninger, P.; Pfeifer, N. A comprehensive automated 3D approach for building extraction, reconstruction, and regularization from airborne laser scanning point clouds. Sensors 2008, 8, 7323–7343. [CrossRef] [PubMed]
24. Tarsha-Kurdi, F.; Landes, T.; Grussenmeyer, P.; Koehl, M. Model-driven and data-driven approaches using lidar data: Analysis and comparison. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2007, 36, 87–92.
25. Huang, H.; Brenner, C.; Sester, M. A generative statistical approach to automatic 3D building roof reconstruction from laser scanning data. ISPRS J. Photogramm. Remote Sens. 2013, 79, 29–43. [CrossRef]
26. Taillandier, F. Automatic building reconstruction from cadastral maps and aerial images. Int. Arch. Photogramm. Remote Sens. 2005, 36, 105–110.
27. Şengül, A. Extracting semantic building models from aerial stereo images and conversion to CityGML. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2012, 41, 321–324. [CrossRef]
28. Kobayashi, Y. Photogrammetry and 3D city modeling. Proc. Digit. Archit. 2006, 90, 209.
29. Nex, F.; Remondino, F. Automatic roof outlines reconstruction from photogrammetric DSM. ISPRS Ann. Photogramm. Remote Sens. Spat. Inf. Sci. 2012, I-3, 257–262. [CrossRef]
30. Früh, C.; Sammon, R.; Zakhor, A. Automated texture mapping of 3D city models with oblique aerial imagery. In Proceedings of the 2nd International Symposium on 3D Data Processing, Visualization and Transmission (3DPVT 2004), Thessaloniki, Greece, 6–9 September 2004; pp. 396–403.
31. Früh, C.; Zakhor, A. Constructing 3D city models by merging aerial and ground views. IEEE Comput. Soc. 2003, 23, 52–61. [CrossRef]
32. Kada, M.; Klinec, D.; Haala, N. Façade texturing for rendering 3D city models. In Proceedings of the ASPRS 2005 Annual Conference, Baltimore, MD, USA, 7–11 March 2005.
33. Novel, C.; Keriven, R.; Poux, E.; Graindorge, P. Comparing aerial photogrammetry and 3D laser scanning methods for creating 3D models of complex objects. In Proceedings of the Capturing Reality Forum, Bentley Systems, Salzburg, Austria, 2015; p. 15.
34. Vexcel Imaging UltraCam Falcon. Available online: http://www.vexcel-imaging.com/ultracam-falcon/ (accessed on 31 July 2018).
35. Agisoft PhotoScan Pro. Available online: http://www.agisoft.com/features/professional-edition/ (accessed on 31 July 2018).

36. Bentley MicroStation. Available online: https://www.bentley.com/en/products/product-line/modeling-and-visualization-software/microstation (accessed on 31 July 2018).
37. Safe Software FME. Available online: https://www.safe.com/ (accessed on 31 July 2018).
38. Virtualcity Systems BuildingReconstruction. Available online: http://www.virtualcitysystems.de/en/products/buildingreconstruction (accessed on 31 July 2018).
39. UVM Systems GmbH. Available online: http://www.uvmsystems.com/index.php/en/software/soft-city (accessed on 31 July 2018).
40. Intergraph Corporation. Available online: http://www.intergraph.com/ (accessed on 31 July 2018).

© 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).