
Article

Extraction of Structural and Semantic Data from 2D Floor Plans for Interactive and Immersive VR Real Estate Exploration

Georg Gerstweiler *, Lukas Furlan, Mikhail Timofeev and Hannes Kaufmann Institute of Visual Computing and Human-Centered Technology, TU Wien, Favoritenstrasse 9-11- E193-06, 1040 Vienna, Austria; [email protected] (L.F.); [email protected] (M.T.); [email protected] (H.K.) * Correspondence: [email protected]; Tel.: +43-1-58801-18823

 Received: 26 September 2018; Accepted: 27 October 2018; Published: 4 November 2018 

Abstract: Three-dimensional reconstructions of indoor environments are useful in various augmented and virtual reality scenarios. Creating a realistic virtual apartment in 3D manually not only takes time, but also requires skilled people. Analyzing a floor plan is a complicated task: due to the lack of engineering standards in creating these drawings, the same building can be drawn in multiple different ways. This paper proposes multiple models and heuristics which enable fully automated 3D reconstruction from only a 2D floor plan. Our study focuses on floor plan analysis and the definition of special requirements for a 3D building model used in a Virtual Reality (VR) setup. The proposed method automatically analyzes floor plans with a pattern recognition approach, thereby extracting accurate metric information about important components of the building. An algorithm for mesh generation and for extracting semantic information such as apartment separation and room type estimation is presented. A novel method for VR interaction with interior design completes the framework. The result of the presented system is intended to be used for presenting a large number of apartments to customers. It can also serve as a base for purposes such as furnishing apartments, realistic occlusions for AR (Augmented Reality) applications such as indoor navigation, or analysis purposes. Finally, a technical evaluation and an interactive user study demonstrate the advantages of the presented system.

Keywords: 3D model generation; floor plan analysis; CAD (computer-aided design); pattern recognition; virtual reality; room labeling

1. Introduction

Content creation for virtual experiences is a critical but necessary task within various application areas. The recreation of real or planned environments is necessary especially for architectural and real estate virtual reality (VR) applications. Architects and companies usually present planned houses or apartments by creating renderings or by generating 3D visualizations together with 2D floor plans. Exploring an apartment from the inside in a 3D environment could be beneficial. By using a head-mounted display together with a state-of-the-art tracking system, people could freely explore and interact with architectural environments in VR which are still in the planning stage. The critical part of providing such an experience, however, is the creation of a proper 3D structure, a realistic visualization and appropriate navigation and interaction techniques within VR. In order to provide such a feature for the mass market, it is necessary to speed up or even fully automate the content generation process for VR, which is the key target of our approach. The work at hand has to solve various challenges, beginning with (1) the interpretation of the floor plan, (2) the generation of the 3D mesh for the model and (3) the provision of an appropriate VR experience.


The first part of the work concentrates on analyzing floor plan drawings to extract architectural elements with correct scale while dealing with content that is not relevant for the task. This so-called clutter (such as text, detailed wall structures, etc.) is automatically ignored due to the design of the presented detection process. Since we concentrate on visual correctness rather than architectural correctness, components which cannot be accessed or seen, such as pipes or technical units, are also ignored. This allows us to use a different approach than, for example, in [1], since not every inner detail has to be covered. Therefore, we first concentrate on openings and then detect wall segments. Furthermore, the large diversity of architectural object representations, such as doors or windows, represents another challenge in this work. One and the same object can be drawn differently in a floor plan. To solve this problem, we analyzed many floor plan drawings to construct a ruleset which allows these objects to be detected automatically. In the second part, responsible for the 3D model generation, the main challenge is the automated preparation of the surfaces for texturing, i.e., defining each wall individually and preparing a correct scale for surface texturing. To texture surfaces appropriately, the room type has to be estimated, since this information is not always available. The main challenge of the final part of the implementation, which prepares the VR experience, is an automated approach for positioning light sources, automated texturing and preparing the model for interaction. All steps presented in this work have a high degree of automation and almost no user interaction is required. The individual research areas covered by the work at hand have already been discussed by many researchers [2–4]. Nevertheless, to the best of our knowledge, no work has been released with a similarly complete workflow and a similar degree of automation. Existing approaches are also not flexible enough to deliver all information needed to create a realistic and interactive 3D representation and are usually not meant for end customers. The main research hypothesis of this work is that a virtual experience automatically generated by our methods significantly helps customers in the decision-making process of buying or renting a real estate object. To study this hypothesis, an elaborate user study was conducted comparing a traditional 2D representation including 3D renderings, a virtual walkthrough, and an advanced VR walkthrough with the possibility of interacting with the environment (furniture, textures, etc.) through a custom-designed VR interaction technique. This work starts with a description of the scientific contribution and the limitations of the presented approach. After comparing the presented concepts with similar approaches, the main core components are described. The Floor Plan Interpreter is responsible for reading and analyzing 2D vectorized floor plans (DWG or DXF) of one or multiple apartments. Section 3.2 (VR Apartment Generation) then describes the further processing of the found architectural components such as walls or openings. It contains the 3D model generation and the room type estimation. Finally, Section 3.3 describes the visual refurbishment and the interaction metaphor for the user.
The work closes with a technical evaluation of the model generation process and an extensive user study utilizing a head-mounted display with two controllers (Oculus Rift CV) to explore the VR environment. The studies showed that people looking for an apartment were satisfied with the realistic appearance in VR. Furthermore, participants stated that they would use such a virtual tour more often, since it helps in making a decision about buying or renting a property. The results also indicate that the interaction technique is flexible and easy to understand for exploring the environment and interacting with furniture.

Contribution and Limitations

The work at hand was inspired by the real-world problem of the shortage of available apartments. For that reason, many people have to decide about buying or renting an apartment before its construction has even begun. Our approach should help the provider to create an appropriate and cost-effective VR experience within a short amount of time. In that way, customers have another source of information in order to come to a decision whether to buy a specific apartment or not. Our method can also be applied to other application areas such as building construction, indoor navigation, automatic 3D model generation, interior design and much more. While others (see Section 2) mostly focus on individual problems, this work shows an implementation with a fully functional automated workflow, starting from an input file describing a floor plan and ending with an application for exploring a virtual apartment. For the present approach, a chain of modules was developed. The contribution of the presented modules can be described in different areas: The main contribution of the first module is the way structurally relevant objects in a 2D CAD (computer-aided design) floor plan drawing are recognized in an automated way. CAD files usually contain layers and meta-information describing the presented content. Within this work, many floor plans were examined, which showed that this information cannot be relied on. Because of that, and in order to be prepared for various import file structures, this source of information is ignored in the whole process. The work at hand also targets floor plans with different levels of detail, as architectural drawings can have different purposes. By using a recursive inside-out approach on connected components in the drawing, it was possible to ignore components that are irrelevant for constructing a 3D model. This enables the system to take floor plans as input which are cluttered with information such as furniture, building details or text elements, without much disturbing the algorithm. In order to open up a broader range of applications, we decided to make use of open standards as used in building information modeling (BIM). As output of the detection process, a well-defined file format based on the Industry Foundation Classes (IFC) is created, thereby increasing the reusability of the system. Since the target of the paper is an interactively explorable environment, the requirements for a VR application, such as a low polygon count or proper lighting conditions, were considered when constructing the 3D model. In favor of an automated system, 3D objects for doors and windows were also placed according to position and scale. A statistical approach was implemented to estimate a plausible room type configuration of an apartment. Beyond that, the automated approach was able to process a large number of vectorized floor plans. The resulting 3D model is automatically prepared for and equipped with textures, light sources and the possibility of interaction. A user study, which forms the final research contribution of this paper, shows how people react to the 3D environment and how they would accept such a system in real life when searching for a real estate property. For the approach at hand, certain limitations were defined in order to realize a fully automated workflow. As input, only vectorized 2D floor plans of one or multiple apartments in the DWG or DXF file formats are accepted.
We are currently not processing rasterized representations. At this point, it has to be mentioned that this work targets floor plans which do not already contain all necessary meta-information to construct a 3D model. We are targeting drawings which, in an extreme case, could for example be scanned and vectorized by non-professionals. Small drawing errors in the floor plan are corrected in the process. The plan may contain design elements and measuring details (to a certain extent). Objects representing pipes or similar technical details are handled as clutter and are not processed further. To detect openings, they have to be composed of one connected component. Openings fulfilling the structural constraints mentioned in Section 3.1.2 can be detected automatically. Other, more complex ones, such as foldable doors, can be added by the user with the “OneClick” approach. The algorithm can handle outer areas such as balconies and terraces, but vertical openings such as staircases or elevator shafts are not part of this work; the user can, however, define vertical openings manually. The 3D reconstruction currently does not support tilted walls due to roof slopes. Furthermore, the room type estimation was developed as an auxiliary tool, since many of the surveyed floor plans contained no textual description, used abbreviations, or were written in different languages.

2. Related Work

The 3D model generation algorithms in the area of buildings or apartments are often intended for architects and professionals to visualize the overall concept. Creating a model for a customer who should be able to explore and interact with a virtual apartment has different requirements. In the literature, there are not many approaches describing a whole automated workflow starting with a floor plan and ending with a visually realistic representation of it. This section discusses related approaches in the areas of analyzing 2D floor plans, using CAD and BIM data for realistic 3D visualization, and automatic floor plan generation. Technical drawings for apartments are usually the base material for the generation of 3D models since they contain all necessary structural information. Yin et al. [5] evaluated multiple algorithms by comparing aspects such as image parsing, 3D extrusion, and the overall performance. They showed that the pipeline for a complete solution is very similar over most approaches. It usually starts with a scanned or already vectorized image, which is cleaned up and forwarded to a detection algorithm for architectural components. The triangulation usually finalizes the concepts. According to Yin et al., the algorithms are partly automated but still rely on layer information, manual outline definition, manual cleanup or geometric constraints. Some of the first approaches to use 2D drawings for creating 3D structures were published by Clifford So et al. [3] and Dosch et al. [6]. Their common goal was mainly to reduce the time needed to create a 3D model of a building. Clifford So et al. reviewed the manual process and concentrated on creating a semi-automated procedure for wall extrusion, object mapping and ceiling and floor reconstruction. Dosch et al., on the other hand, mainly concentrated on matching multiple floors on top of each other in an automated way. An important step towards 3D model generation is the detection process for architectural symbols. Many publications concentrated on symbol detection in rasterized or vectorized floor plans, whereas the majority converts rasterized images into a vectorized representation in the first step, just like Luqman et al. [7]. They use a statistical classification method in order to detect well-known symbols within a drawing. A multidimensional feature vector is created for every new symbol. Thereby it is possible to detect symbols of similar size and shape. A training procedure is used to create a Bayesian network for the detection step. Others, such as Yan et al. [4], focus on creating a spanning tree representation of the available data structure. Every new symbol is described through geometric constraints and converted into a connected tree structure. With this approach, they reach good results in finding these well-known symbols. Guo et al. [1] go a step further and include text components into the definition of symbols, which allows a better and faster localization of these within the drawing. In contrast to these approaches, the work at hand presents a more general solution in which a general ruleset is defined for the most common symbols. In this way, it is possible to avoid the initialization step of defining each type of symbol for each floor plan. Another concept, used in publications such as [8,9], relies on having layers available and a correct layer assignment in order to segment specific architectural components (openings, walls).
This, on the one hand, reduces the complexity but, on the other hand, could lead to problems if some elements are wrongly assigned. Detecting wall segments in technical drawings is also treated either via layer assignment alone [9] or by taking advantage of the parallelism of wall segments [8,10]. We, on the other hand, do not rely on parallel line segments, since walls could diverge or consist of multiple parallel line segments representing wall and façade separately. Zhu et al. [10] use the symbol detection algorithm from Guo et al. [1] and subtract the found elements from the floor plan. In their example, the resulting plan only consists of line segments of the type wall. Like others [8,11,12], they first try to detect parallel line-segment pairs which are analyzed with respect to intersections of different shapes (L, T, X, I). This helps to eliminate false positives and to create a so-called Shape-Opening Graph [10], where in each case an edge set is added to the graph structure. This leads to an overhead in classifying intersections and could also lead to problems with more complex wall shapes. The method proposed in the paper at hand is based on a different strategy, where each wall segment (a single line) is detected based on the prior knowledge of the symbol recognition step, which is mostly independent of the wall structure.

Several publications in the area of 3D modeling do not analyze floor plans but integrate IFC as an interface for BIM in order to get access to the individual components. Some approaches also rely on VRML (Virtual Reality Modelling Language) for visualization [13]. Hakkarainen et al. export structural information from a commercial CAD drawing application and convert it to the VRML standard. For the visualization step, they used Open Scene Graph, a 3D rendering environment. Using data directly from applications supporting the BIM standard has the advantage of having access to various meta-information about each architectural component. This also allows some steps to be processed manually in order to achieve a better visual quality. Ahamed et al. [14], for example, describe a workflow starting with the Autodesk Revit BIM model, which is exported to a 3D DWG file describing the structure of the model. In the last step, textures and lighting are added with Autodesk 3ds Max before exporting to VRML. Using their 3D models is especially interesting in Virtual and Augmented Reality applications, not only for a static model but also for visualizing the construction process. In [15,16], for example, the authors make use of VRML which can be linked to a file describing the building process. Thereby, a 4D visualization can be produced showing the 3D model in different construction stages at different times. Not only in construction but also in the area of game development are realistic 3D models and their fast creation of importance. Yan et al. describe in [17] the integration of a BIM-game-based system for architectural design. The relation to our work lies especially in the realistic visualization and the possibility of interacting with the environment. In their work, they describe a high-level framework and rely solely on the BIM model. From the point of view of real estate customers, the visualization style from the BIM format is not that interesting, and since no mesh splitting is performed, many surfaces use textures with the same material and appearance. The process they describe is partly automated and partly manual, since augmenting information has to be done in a separate application. However, for integration into a BIM system, this tool seems to be very valuable. Interactivity in a virtual environment is a core component of various game engines. Addressing the problem of integrating architectural models into game engines, Uddin et al. [18] presented a short workflow to realize an integration of universal formats such as DXF and DWG. They faced two main problems which they were able to solve: compatibility and accuracy. Unfortunately, their models did not contain textured surfaces. Semantic detection beyond architectural components such as openings has been addressed only sparsely. Ahmed et al. have published another work in this area [19], presenting a semantic analysis for detecting room labels. In contrast to the work at hand, where room labels are assigned by a statistical approach, Ahmed et al. try to extract text information from inside the room boundaries. To complete the section on related work, the area of automatic floor plan generation has to be mentioned, since it approaches a similar problem of room labeling. As in [20,21], the aim is to generate living environments according to a specific set of requirements such as the number and type of rooms. With the algorithm of Marson et al.
[20], they were able to procedurally generate floor plans and also concentrated on the overall structure of the apartment. This is especially interesting for the work at hand, since the apartment segmentation and room type estimation presented here are based on similar dependencies (accessibility of rooms, room types, size, etc.). The high number of related works shows that many researchers have been working on one or the other aspect. Most approaches are designed for architectural applications or construction purposes. In contrast to these, we have to concentrate on visual correctness, but not architectural correctness. This means, for example, that not every wall element has to be in the 3D model, only those that can be seen. The inside of a ventilation shaft or a technical shaft should be ignored. This opens up other paths and challenges in the symbol detection step as well as in the 3D model generation step.

3. Proposed System

The proposed system “Floor Plan 2 Virtual Exploration” (FP2VE) can be subdivided into three main modules (see Figure 1) responsible for floor plan interpretation, 3D model generation and the creation of an interactive VR environment. Module one is written in C++ as a standalone application. As a first step, it is responsible for reading the lines of the vectorized input floor plan, thereby analyzing the content and correcting drawing errors. In a next step, the information of the input file is converted to a graph structure and passed forward to the recognition process, where a searching method is responsible for finding relevant elements. A graphical user interface makes it possible to monitor the process and to adjust certain parameters. For the object recognition process, two methods were implemented. An automated process for finding openings is designed to detect a high number of doors and windows with different appearances. As a fallback option, it is possible to add unusual representations with the “One-Click” approach (as described in Section 3.1.2) in a separate detection step. After converting all relevant objects into the standard IFC format, the result is handed to the 3D Model Generation process.

[Figure 1: block diagram of the three main modules and their processing steps — Floor Plan Interpreter (CAD Importer, Graph Structure, Detection/Recognition Process, BIM Export; native C++ workflow), 3D Model Generation (BIM Import/Reconstruction, 3D Mesh Optimization, Apartment Detection, Room Type Estimation; Unreal editor workflow) and VR Experience (Door/Window Models, Automated Texturing, Light Source Positions, Interaction/Manipulation; Unreal VR).]

Figure 1. System overview of the three main modules. A native C++ implementation is used mainly for symbol detection, the Unreal editor adjustments were utilized for 3D model generation and finally, the virtual reality (VR) experience module in Unreal Engine is employed for VR visualization.
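To make the data flow of Figure 1 concrete, the following C++ sketch outlines the hand-over points between the three modules. All type and function names are illustrative assumptions rather than the actual API of the implementation; the only interface it reflects from the text is that module one writes an IFC file which the Unreal-based modules consume.

```cpp
// Illustrative sketch of the hand-over points between the three FP2VE modules
// (assumed names, not the authors' actual API).
#include <string>
#include <vector>

struct IfcModel { std::string path; };                          // exported IFC (ISO 10303-21) file
struct ApartmentModel { std::vector<std::string> roomTypes; };  // generated meshes + room metadata

// Module 1: Floor Plan Interpreter (standalone C++): reads a vectorized
// DWG/DXF plan, detects openings, walls and rooms, writes an IFC file.
IfcModel InterpretFloorPlan(const std::string& dwgOrDxfPath) {
    return { dwgOrDxfPath + ".ifc" };                           // stub
}

// Module 2: 3D Model Generation (Unreal editor plug-in): imports the IFC
// file, builds optimized meshes, segments apartments, estimates room types.
ApartmentModel GenerateModel(const IfcModel& ifc) {
    (void)ifc;
    return {};                                                  // stub
}

// Module 3: VR Experience (Unreal Engine): places door/window models,
// textures surfaces, positions lights and enables interaction.
void PrepareVrExperience(const ApartmentModel& model) { (void)model; }

int main() {
    const ApartmentModel apartment = GenerateModel(InterpretFloorPlan("plan.dxf"));
    PrepareVrExperience(apartment);
}
```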

The 3D Model Generator is implemented and integrated into the Unreal Game Engine 4 (Unreal Engine: https://www.unrealengine.com/). A custom plug-in was created with the help of the IFC Engine library to be able to take the IFC file from the Floor Plan Interpreter module. All algorithms implemented in this module are running within the Unreal editor mode, which enables the possibility to use the editor for manual design changes such as adding furniture. The same Unreal project is used to build the environment as a VR executable application. The main task of this generation module is to create meshes considering openings and to optimize them for VR, including light leaking removal and individual surface texturing. Also, the surface subdivision is performed in this step. Since floor plans usually contain multiple living units, within this module the apartment segmentation algorithm searches for possible subdivisions. This is necessary since the room type estimation algorithm can only work on a closed apartment structure. Finally, the VR Experience module, also implemented in Unreal Game Engine 4, prepares the model for a VR experience. 3D models for openings are selected, adjusted and placed automatically. Furthermore, the texturing of walls, ceilings, floors, and facades is done according to the previously detected room type. Also, a reasonable position of lights within the rooms is estimated. Finally, a database with various 3D models of furniture was generated, adapted in size and prepared for interaction, such as adding bounding boxes for collision detection. For exploring the finally designed model, an interaction metaphor was implemented enabling the user to manipulate the surroundings by placing furniture, changing the appearance of surfaces such as walls and enabling a teleporting locomotion possibility through the environment. This module concludes the overview of the implemented system.
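The light-source placement step is only summarized above. As a minimal illustration, one plausible heuristic is to place a point light at the centroid of each room polygon, slightly below the predefined ceiling height; the sketch below assumes this centroid heuristic and is not the estimation method actually used in the system.

```cpp
// Minimal sketch: one light per room at the 2D polygon centroid, slightly
// below the ceiling. Assumed heuristic for illustration only; note that for
// strongly non-convex (e.g. L-shaped) rooms the centroid may fall outside.
#include <vector>

struct Vec2 { double x, y; };
struct Vec3 { double x, y, z; };

// Area-weighted centroid of a simple (non-self-intersecting) room polygon.
Vec2 PolygonCentroid(const std::vector<Vec2>& poly) {
    double areaSum = 0.0, cx = 0.0, cy = 0.0;
    const std::size_t n = poly.size();
    for (std::size_t i = 0; i < n; ++i) {
        const Vec2& p = poly[i];
        const Vec2& q = poly[(i + 1) % n];
        const double cross = p.x * q.y - q.x * p.y;
        areaSum += cross;
        cx += (p.x + q.x) * cross;
        cy += (p.y + q.y) * cross;
    }
    const double area = 0.5 * areaSum;
    return { cx / (6.0 * area), cy / (6.0 * area) };
}

// Place the light 20 cm below the (predefined) ceiling height (assumed offset).
Vec3 LightPosition(const std::vector<Vec2>& roomPolygon, double ceilingHeight) {
    const Vec2 c = PolygonCentroid(roomPolygon);
    return { c.x, c.y, ceilingHeight - 0.2 };
}
```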


3.1. Floor Plan Interpreter

As a first step, numerous floor plans from different sources and in different formats were studied in order to understand the complexity of the problem. During this pre-study, two main challenges were defined for the automated analysis tool. Floor plans exist in different styles depending on their purpose. Drawings for construction differ from those used for approval or just for presentation to a customer. Different stages of information result in more or less information included in the file. Figure 2 illustrates different styles which can be used as an input to the proposed approach.


Figure 2. Floor plans with a different purpose and different abstraction levels. The left image (a) shows a cluttered drawing with dimensions and multiple elements. On the right (b), a schematic simplified description of the structure of an apartment can be seen.

As already pointed out, we are interested in the elements which define the building shell. All other components described in a drawing are treated as visual clutter. In many drawings, especially in vectorized formats such as CAD drawings, layers are used to group the content of the same category (walls, windows, furniture, etc.). In our experience, especially in complex drawings, it is not possible to fully rely on the correct naming or the correct assignment to these layers. Changing the appearance or correcting the layer assignment manually is time-consuming. For that reason, the algorithm must be able to process different kinds of representations. Furthermore, it is important to design the approach in a way that this clutter is automatically ignored. In addition to that, the main modeling programs available for architects were analyzed concerning the graphical representation of different components. When extracting different models from these applications, we identified the same challenge as mentioned in several other publications: the variety of available symbols for one and the same object is structured differently. To the best of our knowledge, there is no database available with a collection of architectural components and their different representations, which makes it very hard to create a generalized detection algorithm. For constructing a realistic 3D representation, we defined the following components as important to be extracted (see Table 1). Some attributes of an object can be directly extracted from the 2D representation, others, such as the height of objects, have to be predefined by the system or the user.
Architectural drawings usually represent a cut through the represented object at a height of one meter. Therefore, not all parameters can be extracted from a floor plan, especially if text elements are not analyzed during the detection process. Elements which are located at another height are often entered in another way or in a separate drawing. Attributes such as the height or the lower edge of a window cannot be extracted from the line set alone. Within this work, these parameters are set to usual values but can be adapted manually in the interface. While parsing the data from the floor plan, line width and line color are ignored, since there is no universal standard for using these patterns.

Table 1. Components which have to be extracted and attributes which have to be manually predefined.

Object | Extracted Attributes | Predefined Attributes
Doors | Position, Orientation, Width | Height
Windows | Position, Orientation, Width | Height, Lower Edge
Inner Walls | Polygon | Height
Outer Walls | Polygon | Height
Rooms | Polygon | Height
Balconies | Polygon | -
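Table 1 translates naturally into simple data records in which extracted and predefined attributes are kept separate. The following C++ structs are an illustrative sketch of such a representation; the names and the default values (e.g., a 2.0 m door height) are assumed "usual values", not the ones hard-coded in the system.

```cpp
// Illustrative data records mirroring Table 1: attributes extracted from the
// 2D plan vs. attributes that have to be predefined (defaults are assumed
// "usual values", adjustable in the interface).
#include <vector>

struct Vec2 { double x, y; };
using Polygon = std::vector<Vec2>;

struct Opening {                 // common base for doors and windows
    Vec2   position{};           // extracted
    double orientationDeg = 0.0; // extracted
    double width = 0.0;          // extracted
    double height = 2.0;         // predefined (assumed default, meters)
};

struct Door : Opening {};

struct Window : Opening {
    double lowerEdge = 0.9;      // predefined (assumed default, meters)
};

struct Wall {
    Polygon outline;             // extracted
    double  height = 2.5;        // predefined (assumed default, meters)
    bool    isOuter = false;     // inner vs. outer wall
};

struct Room {
    Polygon contour;             // extracted
    double  height = 2.5;        // predefined
};

struct Balcony {
    Polygon contour;             // extracted (no predefined height needed)
};
```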


Object Extracted Attributes Predefined Attributes Doors Position, Orientation, Width Height Windows Position, Orientation, Width Height, Lower Edge Inner Walls Polygon Height Technologies 2018, Outer6, x Walls Polygon Height 8 of 26 Rooms Polygon Height Balconies Polygon - The proposed detection process is based on a structural approach, where a group of elements is generalized as much as possible in order to detect a high number of elements. Our work is not taking advantageThe proposed of neural detection networks process for this is based approach on a structural since for approach, training purposes where a group a high of number elements of examplesis generalized would as be much necessary. as possible On the in contrary, order to detectthe proposed a high numberapproach of can elements. also be Ourseen workas a first is not step totaking collect advantage various data of neural for creating networks a database, for this approach since every since found for training object is purposes kept within a high the number application of andexamples can be wouldused for be necessary.further research. On the contrary,In comparison the proposed to other approach structural can alsoapproaches be seen asin a this first area, step tothe herecollect described various detection data for creating process a is database, starting since from every the most found complex object is structures kept within such the applicationas windows and and can be used for further research. In comparison to other structural approaches in this area, the here doors. Figure 3 shows the overall concept of retrieving components from an input file. At the described detection process is starting from the most complex structures such as windows and doors. beginning a DWG or DFX floor plan is loaded into the system and only relevant components are Figure3 shows the overall concept of retrieving components from an input file. At the beginning extracted. As a next step a custom graph structure (see next Section 3.1.1) is created thereby correcting a DWG or DFX floor plan is loaded into the system and only relevant components are extracted. small drawing errors. The automated opening detection process (see Section 4) is searching for As a next step a custom graph structure (see next Section 3.1.1) is created thereby correcting small windows and doors. After that, wall segments are extracted. With the knowledge of wall elements, drawing errors. The automated opening detection process (see Section4) is searching for windows and the restrictions of the opening detection process are now softened up and the graph is again used to doors. After that, wall segments are extracted. With the knowledge of wall elements, the restrictions of find the remaining openings. At this step the user has the chance to influence the result the opening detection process are now softened up and the graph is again used to find the remaining (seeopenings. Section At 3. this1.5 Interfacestep the user) by has removing the chance false to positivinfluencees, the adding result walls (see Section or using 3.1.5 the Interface) “OneClick” by approachremoving to false add positives, remaining adding openings. walls or In using the final the “OneClick” steps room approach contours, to add the remaining outside contour openings. and outsideIn the finalareas steps such room as balconies contours, or the ter outsideraces are contour detected. and All outside relevant areas information such as balconies about or objects terraces and dimensionsare detected. 
are All structured relevant information in order to aboutcreate objects a standardized and dimensions object areoriented structured IFC file in order (Figure to create 4 shows a anstandardized excerpt of an object IFC orientedformatted IFC file), file (Figurewhich 4is shows the output an excerpt of this of step an IFC before formatted creating file), a which3D model is the for VR.output of this step before creating a 3D model for VR.
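The steps of Figure 3 can be summarized as a small driver routine. The C++ sketch below reflects the order described above; all function names and the two-pass flag are assumptions made for illustration, not the actual implementation.

```cpp
// Sketch of the detection steps inside the Floor Plan Interpreter, in the
// order described above and shown in Figure 3 (assumed names, stub bodies).
#include <string>

struct Graph {};                                       // custom graph (Section 3.1.1)

Graph BuildGraph(const std::string& path) { (void)path; return {}; } // import + error correction
void  DetectOpenings(Graph&, bool relaxed) { (void)relaxed; }        // doors and windows
void  DetectWalls(Graph&) {}                                         // walls adjacent to openings
void  UserReview(Graph&) {}                            // remove false positives, "OneClick" additions
void  DetectRoomsAndOutsideAreas(Graph&) {}            // room contours, outside contour, balconies
void  ExportIfc(const Graph&, const std::string& out) { (void)out; } // standardized IFC output

void RunFloorPlanInterpreter(const std::string& input, const std::string& output) {
    Graph g = BuildGraph(input);
    DetectOpenings(g, /*relaxed=*/false);   // strict rules first to keep precision high
    DetectWalls(g);                         // use detected openings as anchors for walls
    DetectOpenings(g, /*relaxed=*/true);    // relaxed second pass using the known walls
    UserReview(g);                          // optional manual corrections in the GUI
    DetectRoomsAndOutsideAreas(g);
    ExportIfc(g, output);
}
```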

Figure 3. General procedure of the detection process of architectural components. (a) The first part of the procedure performs the generation of the graph and the detection of openings and adjacent walls; (b) in the second phase the user can verify and adjust the result before rooms and outside areas are detected. Finally, the IFC objects are exported.

Figure 4. Excerpt of interesting entries of an Industry Foundation Classes (IFC) formatted file (ISO 10303-21).

3.1.1. Graph Structure

As a first step of the process, the libdxfrw Version 0.5.12 (https://github.com/rvt/libdxfrw) library is used within the Qt development framework in order to be able to read DXF and DWG files, which are widely used for exchanging architectural drawings. As mentioned before, not all information of these vectorized CAD files is used in this project. The following DWG/DXF object types are seen as relevant: line (defined by two points), arc (defined by a radius and two angles), polyline (polyline in 2D and 3D) and insert (reference block). This subset of the input file is processed and converted to a custom graph structure for further processing.
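As a minimal sketch of this filtering step, the snippet below keeps only the four relevant entity kinds and forwards them to the graph builder. It deliberately does not use libdxfrw's actual callback interface; the entity struct and graph methods are simplified stand-ins.

```cpp
// Simplified sketch of the import filter: only line, arc, polyline and insert
// (block reference) entities are forwarded to the graph builder; everything
// else in the DXF/DWG file is treated as clutter and ignored.
#include <cstddef>
#include <vector>

struct Vec2 { double x, y; };

enum class EntityKind { Line, Arc, Polyline, Insert, Other };

struct Entity {                      // stand-in, not libdxfrw's data model
    EntityKind kind = EntityKind::Other;
    std::vector<Vec2> points;        // endpoints / polyline vertices
    Vec2   center{};                 // arc only
    double radius = 0.0;             // arc only
    double startAngle = 0.0;         // arc only
    double endAngle = 0.0;           // arc only
};

struct Graph {                       // placeholder for the custom graph structure
    void addLine(const Vec2&, const Vec2&) {}
    void addArc(const Vec2&, double, double, double) {}
};

void ImportEntities(const std::vector<Entity>& entities, Graph& graph) {
    for (const Entity& e : entities) {
        switch (e.kind) {
            case EntityKind::Line:
                graph.addLine(e.points[0], e.points[1]);
                break;
            case EntityKind::Arc:
                graph.addArc(e.center, e.radius, e.startAngle, e.endAngle);
                break;
            case EntityKind::Polyline:
                for (std::size_t i = 0; i + 1 < e.points.size(); ++i)
                    graph.addLine(e.points[i], e.points[i + 1]);   // decompose into segments
                break;
            case EntityKind::Insert:
                break;   // block references would be resolved recursively (omitted)
            case EntityKind::Other:
                break;   // clutter such as text or dimensions is ignored
        }
    }
}
```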


The detection process is based on the graph structure of the architectural drawing and has to fulfill special requirements due to the search procedure. For that reason, it differs from representations usually used in other approaches [8,10]. Those representations concentrate on line segments within the nodes of a spanning tree, which allows for fast saving and finding of parallel and orthogonal lines. In addition, usually a set of parallel lines is treated as one segment. In contrast to that, the work at hand concentrates on a more complex representation where every line is treated individually, since walls and openings can vary in all dimensions. Furthermore, for the presented approach it is important to be able to easily find and follow a path of lines. This can be achieved by building a graph where every node holds one 2D vector, which represents the end of a line segment of the CAD drawing. When adding data to the graph, the local representation of a point of the input file is converted to a new global coordinate system. In addition to that, every node is connected to another point, thereby representing a valid line segment. In this stage, a field is also prepared for adding the type of object the line belongs to. In the case of an arc, properties such as center point, radius and start/end angle are saved. Another feature of this structure is that angles between two edges can be calculated without an additional search within the graph structure. This advantage allows the search function to define certain angle limits, especially when matching with a defined structure. The creation process of the graph is finalized with an optimization and error correction stage. Identical edges are reduced and nodes with very similar points (see Figure 5a) are merged in case of numerical imprecisions. Furthermore, edges are only split if another edge ends close to them (see Figure 5b,c). In this case, the split as well as the non-split edges are kept in the graph to be able to detect or ignore clutter such as furniture placed next to a wall. Crossings such as in Figure 5d do not separate lines, since every reference line would influence the process. For the detection process, every node is able to hold multiple membership information, such as the types wall, balcony, etc. In addition to that, edges of a certain type are able to hold a directional vector (red arrows in Figure 5e). This vector describes a normal vector of an edge pointing, for example, towards the inside of a wall segment, which is used when following edges. In special cases, when walls consist of multiple line segments, this vector also helps to identify the outermost lines.
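A possible shape of such a graph structure is sketched below: nodes store 2D points, edges store line or arc data, an optional set of element types and an inward-pointing normal. The field names and the merge tolerance are assumed values for illustration; the edge-splitting step of Figure 5b,c is omitted for brevity.

```cpp
// Sketch of the custom graph structure described above (assumed fields and
// tolerances, not the actual implementation).
#include <algorithm>
#include <cmath>
#include <set>
#include <vector>

struct Vec2 { double x = 0.0, y = 0.0; };

enum class ElementType { Unknown, Wall, Door, Window, Balcony };

struct Edge {
    int  a = -1, b = -1;                 // node indices (line endpoints)
    std::set<ElementType> membership;    // an edge may belong to several element types
    Vec2 inwardNormal{};                 // points to the inside of the element (Figure 5e)
    bool isArc = false;
    Vec2 center{};                       // arc data: center, radius, start/end angle
    double radius = 0.0, startAngle = 0.0, endAngle = 0.0;
};

struct Graph {
    std::vector<Vec2> nodes;
    std::vector<Edge> edges;

    // Merge tolerance for "very similar" points (assumed value, drawing units).
    static constexpr double kMergeEps = 1e-3;

    // Return an existing node within kMergeEps of p, or create a new one (Figure 5a).
    int addNode(const Vec2& p) {
        for (std::size_t i = 0; i < nodes.size(); ++i) {
            const double dx = nodes[i].x - p.x, dy = nodes[i].y - p.y;
            if (std::sqrt(dx * dx + dy * dy) < kMergeEps) return static_cast<int>(i);
        }
        nodes.push_back(p);
        return static_cast<int>(nodes.size()) - 1;
    }

    // Add a line segment; identical edges are not duplicated.
    void addLine(const Vec2& p, const Vec2& q) {
        const int a = addNode(p), b = addNode(q);
        for (const Edge& e : edges)
            if ((e.a == a && e.b == b) || (e.a == b && e.b == a)) return;
        Edge e; e.a = a; e.b = b;
        edges.push_back(e);
    }

    // Angle (radians) at a shared node between two edges, usable for the
    // angle limits of the structural search.
    double angleBetween(const Edge& e1, const Edge& e2, int sharedNode) const {
        const Vec2 v1 = direction(e1, sharedNode);
        const Vec2 v2 = direction(e2, sharedNode);
        const double dot = v1.x * v2.x + v1.y * v2.y;
        return std::acos(std::max(-1.0, std::min(1.0, dot)));
    }

private:
    Vec2 direction(const Edge& e, int fromNode) const {
        const Vec2& p = nodes[fromNode];
        const Vec2& q = nodes[e.a == fromNode ? e.b : e.a];
        const double dx = q.x - p.x, dy = q.y - p.y;
        const double len = std::sqrt(dx * dx + dy * dy);
        return { dx / len, dy / len };
    }
};
```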


Figure 5. (a) Two points are close to each other and are merged. (b,c) Line A is split since another endpoint is close to the line. (d) Crossings are left untouched. (e) All line segments are added with vectors pointing to the inside of the element.

3.1.2. Detection of Openings

The most important step in the detection process is the identification of openings, since the rest of the detection process is directly dependent on it. This is also the reason why this first step is designed to keep the precision high by avoiding false positive results. The detection of false windows would have a negative effect on all other parts of the detection pipeline. The presented approach distinguishes two types of openings: doors and windows. There is no finer categorization of these elements, whereas the set of rules differs between them. All wall openings can be located in the inner or the outer wall of the structure. Besides detecting an opening, also the connecting parts to the nearby wall segment have to be identified. These hints are then used in the subsequent steps to detect walls. The detection processes are designed to be invariant against rotation, realistic scaling, and position. In case some symbols cannot be found with the automated process because they do not fulfill the structural constraints of the automated approach, a “One Click” detection process of openings is proposed.

Automatic Detection of Openings

The detection process of doors differs slightly from the detection of windows. Especially doors have a large number of possible representations. The two-stage door detection starts with a rule set to detect candidates. In the second stage, these symbols are further analyzed concerning their surroundings. Door openings are detected depending on the following conditions. As a starting point, the algorithm tries to detect the door frame. Hence, a list of all possible edges representing one side of a door frame is extracted with the constraint of a limited length. This length is determined by the usual thickness of walls. Subsequent edges with the same direction are summarized, since a door frame could consist of multiple edges. In addition, these nodes need to have at least one other connected edge, which would usually describe the nearby wall continuation. As a second condition, these edges are further analyzed regarding the symmetry of openings. Every valid door needs a frame consisting of two similar, almost parallel edges in a specific distance range. This step creates a pairwise subset of edges (Figure 6a, red lines), building an almost perfect rectangle. In addition, every corner node needs to have an adjacent edge in a certain direction representing a wall.
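A condensed sketch of the pairwise frame search is given below. The thresholds (admissible frame-edge length, door width range, parallelism and length-similarity tolerances) are assumed values for illustration, not the ones used in the implementation.

```cpp
// Sketch of the pairwise door-frame search: short, almost parallel edges of
// similar length at a plausible door-width distance are paired as frame
// candidates (Figure 6a). Thresholds are assumed values.
#include <cmath>
#include <utility>
#include <vector>

struct Vec2 { double x, y; };
struct Segment { Vec2 a, b; };

static double Length(const Segment& s) { return std::hypot(s.b.x - s.a.x, s.b.y - s.a.y); }
static Vec2   Dir(const Segment& s) {
    const double len = Length(s);
    return { (s.b.x - s.a.x) / len, (s.b.y - s.a.y) / len };
}
static Vec2   Mid(const Segment& s) { return { (s.a.x + s.b.x) * 0.5, (s.a.y + s.b.y) * 0.5 }; }

// Assumed thresholds (meters): frame edge length ~ wall thickness, door width range.
constexpr double kMinFrame = 0.10, kMaxFrame = 0.50;
constexpr double kMinWidth = 0.60, kMaxWidth = 1.30;
constexpr double kParallelTol = 0.05;   // |cross product| of unit directions
constexpr double kLengthTol = 0.05;     // meters

std::vector<std::pair<int, int>> FindFrameCandidates(const std::vector<Segment>& edges) {
    std::vector<std::pair<int, int>> pairs;
    for (std::size_t i = 0; i < edges.size(); ++i) {
        const double li = Length(edges[i]);
        if (li < kMinFrame || li > kMaxFrame) continue;       // not a frame-sized edge
        for (std::size_t j = i + 1; j < edges.size(); ++j) {
            const double lj = Length(edges[j]);
            if (lj < kMinFrame || lj > kMaxFrame) continue;
            if (std::fabs(li - lj) > kLengthTol) continue;    // similar length
            const Vec2 di = Dir(edges[i]), dj = Dir(edges[j]);
            if (std::fabs(di.x * dj.y - di.y * dj.x) > kParallelTol) continue;  // almost parallel
            const Vec2 mi = Mid(edges[i]), mj = Mid(edges[j]);
            const double dist = std::hypot(mj.x - mi.x, mj.y - mi.y);
            if (dist < kMinWidth || dist > kMaxWidth) continue;  // plausible door width
            pairs.emplace_back(static_cast<int>(i), static_cast<int>(j));
            // Further conditions (adjacent wall edges at each corner, arc or
            // connection inside the search area) are checked in later stages.
        }
    }
    return pairs;
}
```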


Figure 6. (a) Red lines represent pairwise edges. (b) Search area around the potential door in both directions. (c) Lines (blue) are added as a wall continuation. Every segment gets a vector directed to the inside of the element.

The third condition mainly eliminates false positives in the area of interior objects. For that reason, the search algorithm spans a square area (see Figure 6b) with the width of the potential door opening in both directions and collects all elements within this area. If an arc exists in this area which is directly connected to the frame, a door is detected. In the case of a symbol that is missing an arc, it has to have a connection to the frame, which could be a sliding door. In that way, it is possible to see if another object such as a table is blocking the element in an area where the path should be free. If all conditions are met, a door is recognized and one more step has to be performed in order to continue to find wall segments. In this second stage, the detected symbol has to be prepared for the wall detection step. Not all door representations have a connected door frame on the long side (see Figure 6a, the direct connection between the red lines). These connections have to be added to the graph to be able to follow wall edges all around a potential room. In addition to that, a normal vector pointing to the inside of the door symbol is added to every edge belonging to the symbol (see Figure 6c). Not all shapes can be detected with this approach; especially more complex ones containing unconnected elements could be left undetected. Figure 7 shows a selection of door openings which can be detected with the described process.
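The following sketch illustrates the arc test inside the square search area. The axis-aligned containment test and the "connected to frame" flag are simplifying assumptions; in the actual system the search area follows the door orientation and the connectivity comes from the graph.

```cpp
// Sketch of the third condition: collect elements inside a square search area
// spanned on both sides of the potential door and accept the candidate if an
// arc directly connected to the frame is found. Simplified for illustration.
#include <vector>

struct Vec2 { double x, y; };

struct Arc {
    Vec2 center;
    double radius;
    bool connectedToFrame;   // assumed flag, set during graph construction
};

struct SearchBox {           // axis-aligned square around the opening (simplification)
    Vec2 min, max;
    bool contains(const Vec2& p) const {
        return p.x >= min.x && p.x <= max.x && p.y >= min.y && p.y <= max.y;
    }
};

// Build a square of side 'width' on one side of the door, extending from the
// door center along the (here fixed) door normal.
SearchBox BoxAroundDoor(const Vec2& doorCenter, double width, bool positiveSide) {
    const double half = width * 0.5;
    const double offset = positiveSide ? 0.0 : -width;
    return { { doorCenter.x - half, doorCenter.y + offset },
             { doorCenter.x + half, doorCenter.y + offset + width } };
}

// A candidate is confirmed as a door if at least one arc inside either search
// square is directly connected to the door frame.
bool HasConnectedArc(const std::vector<Arc>& arcs,
                     const SearchBox& front, const SearchBox& back) {
    for (const Arc& arc : arcs) {
        if (!arc.connectedToFrame) continue;
        if (front.contains(arc.center) || back.contains(arc.center)) return true;
    }
    return false;
}
```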

Figure 7. Selection of different door openings that can be detected with the automated approach.




For detecting windows, the same approach is used to find pairwise edges. When searching for windows, the long sides of the symbol have to be connected through one or multiple segments (see Figure 8). Window symbols are usually more complex within the boundaries of the wall. The main constraint for windows relies on the occurrence of one or more rectangular elements within these boundaries. For that reason, it is especially important to use split edges. The algorithm tries to detect these shapes by following nodes and calculating angles between edges. With this approach, multiple windows could be detected in one symbol. For that reason, a further analysis step tests whether symbols overlap. In this case, the bigger and more complex candidate is taken. Some windows have a more complex connection to walls. These types are ignored within the first run of the detection, since they would result in false positives. Windows having a step or multi-step connection to the wall are searched for after the wall segments have already been classified. In this second search cycle, fewer restrictions are used to find the remaining openings. The restrictions of parallelism of the pairwise line segments and of the straight connection to the wall elements are dropped. This cycle can be repeated multiple times if necessary. Figure 9 shows a selection of detectable window styles.

Figure 8. General structure of a window element.

Figure 9. Selection of detectable window styles.


The automated approach of detecting openings is designed to find a broad variety of different openings while keeping false positive results low. This is the reason why there are certain limitations in the automated detection process. Every door element, for example, needs to be connected to at least one side of the frame to be detected. Sliding doors which have no connection to the wall are currently not detected, due to the misinterpretation of the blocking element (door leaf). Also, certain configurations of design elements could be misinterpreted as an opening. Additional structures on the outside or inside of the opening do not influence the process. However, as the analysis of floor plans showed, certain exotic door or window structures, such as folding doors, are built up in a different configuration. For this rather rare occurrence, a template matching algorithm was implemented, together with a segmentation method, to be able to fully process a floor plan. This “One-Click” approach is described in the next section.

One-Click Detection of Openings

In contrast to the automated approach, an exact pattern of the occurring symbol has to be known. The “One-Click” segmentation and detection approach can be used on top of the fully automated process. Templates are defined by their line structure, the angular relations between the lines and the position of adjacent wall segments. The advantage of this approach is its flexibility and adaptability to new representations. On top of that, all symbols are collected in a database, thereby making it possible to build up a library of detected symbols over time. This can further be used to improve the detection algorithm. Since the ground truth is also available, it can be used in a learning process. To use symbols for detection, two components had to be developed: (1) the “One-Click” method to separate a symbol and (2) an effective and reliable graph search mechanism to find related objects in the drawing.

The only user interaction required is to select a single line belonging to the object to be extracted. From there on, the algorithm tries to spread towards the outer edges of a possible opening in four steps. Since the graph is structured in a way that makes it easy to find neighboring edges, the tree just has to be followed in every direction within a certain search range. These nodes are extracted into a separate sub-graph. This sub-graph is searched for symmetry axes to mirror the wall sockets of arbitrary structure, similar to the pairwise search for door frames. In a next step, a connection between these two segments is added if it is not already available in the graph. Then, all lines between the wall sockets are assigned to the opening. If these lines are not connected to each other, the symbol is composed of more than one connected component and the object is rejected.

The outcome of this approach is a list of candidates (see Figure 10). The number of candidates depends on the complexity and symmetry of the symbol. The last step of this approach analyzes the search result to deliver one final symbol. The candidates are sorted according to their size, searching for the biggest thickness but shortest length. Symbol 10 in the example is therefore selected, since more edges represent the thickness of the opening and it is one of the widest elements.

Figure 10. K represents the input line of the element. With the symmetry approach, 11 possible candidates are found. Symbol 10 is used as a symbol due to the thickness and shortest length.
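The final selection among the candidates reduces to a two-key ranking. The snippet below is a minimal sketch with illustrative candidate attributes; the field names and values are assumptions, not data from the system.

```python
# Minimal sketch of the candidate ranking: prefer the candidate whose frame is
# represented by the most edges across the wall thickness, and break ties with
# the shortest overall length (field names are illustrative).
candidates = [
    {"id": 3, "thickness_edges": 2, "length": 1.10},
    {"id": 7, "thickness_edges": 3, "length": 1.25},
    {"id": 10, "thickness_edges": 4, "length": 0.95},
]

best = max(candidates, key=lambda c: (c["thickness_edges"], -c["length"]))
print(best["id"])  # 10 in this toy example, mirroring the behavior in Figure 10
```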



Before saving the symbol, it is quickly analyzed with respect to its type. If one or multiple arcs or similar structures are included, the element is categorized as a door. Otherwise, the flag window is assigned. The graph structure of the current symbol is saved in the symbol library, including semantic information about the wall sockets (red). Every entry of the symbol library is added to a well-defined XML formatted database in order to collect many symbols over time.

Detecting objects of the symbol library within a floor plan is a complex and time demanding operation. The designed approach within this work is based on the concept of detecting symbols in technical drawings as presented in [1,4], which focuses on finding specific configurations in graphs. On top of that, the approach used in this paper takes the symbol to be found and selects a starting edge, which is either an arc or the line with the most neighbors. By doing this, the search area can be minimized. All candidates of that starting edge are then searched within the graph representation. Every possible edge candidate is then compared to the symbol by using a recursive depth search. The comparison considers the number and orientation of neighbors. In order to detect different variations of the symbol, the orientation may differ up to a certain predefined angle. In contrast to other approaches, the dimensions of the individual parts are not considered. This makes the matching more tolerant to non-uniform scaling, which can happen due to variable wall thickness. Architectural symbols can also appear mirrored within one and the same floor plan. For that reason, the search is started for all four possible variations of the symbol.
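The graph comparison sketched below condenses this idea: starting from a seed edge, the symbol graph and the plan graph are walked in parallel, comparing only the number of neighbors and their relative orientation within a tolerance while ignoring absolute lengths. The graph layout and the tolerance value are assumptions made for illustration, not the implementation of the paper.

```python
import math

# Assumed graph model: {edge_id: {"angle": direction_in_degrees,
#                                 "neighbors": [edge_id, ...]}}

def angle_diff(a, b):
    d = abs(a - b) % 360.0
    return min(d, 360.0 - d)

def matches(symbol, plan, sym_edge, plan_edge, tol_deg=10.0, visited=None):
    """Recursive depth search: every symbol edge must find a plan edge with a
    similar relative orientation and at least as many neighbors. Edge lengths
    are deliberately ignored to tolerate non-uniform scaling."""
    visited = visited or set()
    if sym_edge in visited:
        return True
    visited.add(sym_edge)
    s, p = symbol[sym_edge], plan[plan_edge]
    if len(p["neighbors"]) < len(s["neighbors"]):
        return False
    for sn in s["neighbors"]:
        # try to assign a plan neighbor with a compatible relative orientation
        ok = any(
            angle_diff(symbol[sn]["angle"] - s["angle"],
                       plan[pn]["angle"] - p["angle"]) <= tol_deg
            and matches(symbol, plan, sn, pn, tol_deg, visited)
            for pn in p["neighbors"]
        )
        if not ok:
            return False
    return True

# Tiny example: a two-edge symbol matched against a rotated plan fragment
symbol = {"a": {"angle": 0.0, "neighbors": ["b"]}, "b": {"angle": 90.0, "neighbors": ["a"]}}
plan   = {"p": {"angle": 30.0, "neighbors": ["q"]}, "q": {"angle": 121.0, "neighbors": ["p"]}}
print(matches(symbol, plan, "a", "p"))  # True: relative angles differ by only one degree
```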

3.1.3. Wall Detection

The main idea of the proposed approach is based on the hypothesis that every room has at least one opening. So, it should be possible to detect all wall components by following wall segments starting at an already detected opening. The main challenge when dealing with wall representations is that walls can hold information about the inner structure and can thereby be represented through multiple lines. For the application area at hand, creating a 3D surface model, the only information needed are the inside and the outside lines of wall elements.

At this stage, the algorithm is already aware of the components in the graph which belong to an opening. On top of that, every found opening also contains the wall socket elements and the normal vectors indicating the inside of the wall. The wall detecting approach can be compared to a fill algorithm for rasterized images (see Figure 11). For exporting wall segments, the area has to be defined as a polygon. Starting points for wall detection are the adjacent wall segments of the openings. By looking closer at the openings, the width of the wall can be defined. This allows ignoring all structural elements inside the wall. The neighbors of a current wall node are searched for the edge with the lowest angle difference compared to the normal vectors. This concept is repeated until a wall segment is found which has the same orientation as the starting point. The following rules were defined for completion and in order to extend the search area:

• Details inside the wall are only deleted if two opposing normal vectors can be assigned
• If no further neighbor is available, the last node will be searched for another edge

If the search meets another wall polygon, the information is combined. As Figure 11 shows, elements which are directly connected to the wall, such as the interior, are automatically ignored.


Figure 11. (a) Openings are used as starting points. (b) Neighbor with the lowest angle difference is selected. (c) This is repeated until a loop is closed. (d) Completed wall elements are marked in the interface.
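A compact sketch of this wall-following idea is given below. The graph access helpers (neighbors_of, direction_of) and the termination criteria are assumptions; the real implementation additionally handles the merging of wall polygons and the deletion of inner details.

```python
import math

def angle_between(d1, d2):
    """Angle in degrees between two unit direction vectors."""
    dot = max(-1.0, min(1.0, d1[0] * d2[0] + d1[1] * d2[1]))
    return math.degrees(math.acos(dot))

def follow_wall(start_edge, neighbors_of, direction_of, normal, max_steps=10_000):
    """Sketch of the fill-like wall tracing. Assumed helpers: neighbors_of(edge)
    returns adjacent edges, direction_of(edge) returns a unit vector, and
    `normal` is the inside normal of the starting opening."""
    polygon = [start_edge]
    current = start_edge
    start_dir = direction_of(start_edge)
    for _ in range(max_steps):
        candidates = [e for e in neighbors_of(current) if e not in polygon]
        if not candidates:
            # dead end: step back to the previous node and look for another edge
            if len(polygon) < 2:
                break
            current = polygon[-2]
            continue
        # pick the neighbor whose direction deviates least from the wall normal
        current = min(candidates, key=lambda e: angle_between(direction_of(e), normal))
        polygon.append(current)
        # stop when an edge with the same orientation as the start is reached
        if current is not start_edge and angle_between(direction_of(current), start_dir) < 1.0:
            break
    return polygon
```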


3.1.4. Room Detection

For creating a virtual experience in the area of real estate, it is necessary to have access to the individual room structures of the building. Later in the process, the room area detection allows better texturing by dividing mesh structures according to the room structure, as well as distinguishing the inside from the surface of the facade. Furthermore, it can be used for analyzing the room types or simply for having a colliding object when exploring the environment in VR. At this point, wall structures as well as openings in the walls are known in position and size. This information forms the basis for detecting rooms and defining room polygons.

Figure 12a illustrates the detection of room polygons and the contour of the building. Since every room has to be accessible, it also has to have a door or window. For that reason, the room detection process starts at a door element (K1 and K2). The inverted normal vector is used to follow the inside direction of the room. By following the edges clockwise until K1 is reached again, a polygon can be defined. The detected area can be on the inside or on the outside. In one case, a conventional room is found. The other case describes either the outer shell of the building or a situation where a room is located inside a room. To distinguish the two cases, the angular differences between the neighboring edges are observed. In the case of F1 in Figure 12b, the normal vectors pointing to the inside are taken for the calculation. Taking an arbitrary but close point in the direction of the normal vector, it is possible to test if the point is inside or outside the polygon. This is done by using the approach of Shimrat et al. [22]. If the point is inside the polygon, a room is detected; otherwise, the outer shell of the floor plan is found. All edges passed on the way are marked and assigned to the room area. In addition to that, the openings can now be assigned to the correct room.

Starting from the contour of the building, which consists only of walls, the detection of balconies or terraces directly connected to the outer wall can be done. Following the lines on the outside of the shell, it can be detected whether such a line sequence starts and ends at the outer shell. The spanning area is calculated. On top of that, the algorithm searches for a door in this area, which is necessary to access the balcony. By comparing the different resulting areas in size and position, the correct part, which is connected to a door, is selected.
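The inside/outside decision reduces to a standard point-in-polygon test. The following even-odd ray-casting routine, in the spirit of the approach cited above [22], illustrates the idea on an invented L-shaped room polygon.

```python
def point_in_polygon(point, polygon):
    """Even-odd ray casting test: count how often a horizontal ray starting at
    `point` crosses the polygon edges. An odd count means the point is inside."""
    x, y = point
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        crosses = (y1 > y) != (y2 > y)
        if crosses and x < x1 + (y - y1) * (x2 - x1) / (y2 - y1):
            inside = not inside
    return inside

# Example: an L-shaped room polygon and two probe points (coordinates in meters)
room = [(0, 0), (4, 0), (4, 2), (2, 2), (2, 4), (0, 4)]
print(point_in_polygon((1.0, 1.0), room))  # True  -> a room was found
print(point_in_polygon((3.0, 3.0), room))  # False -> outside the polygon
```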






Figure 12. (a) Detection process of the room polygon. (b) The blue area (F1) defines a room, whereas the red area (F2) marks the outside of the apartment.

3.1.5. Interface

A graphical user interface (see Figure 13) allows for importing floor plans, for evaluating the results and for user intervention. In addition to that, the user is able to supervise and correct mistakes if necessary before exporting everything into the Industry Foundation Classes (IFC) data model. IFC is a platform-neutral and open file format specification to describe building and construction industry data. In our approach, we only use a small subset of the available IFC types. Every detected component of the previously described elements is converted to the appropriate IFC standard format. The following elements are used for exporting: IfcWall, IfcWindow, IfcDoor, IfcSpace, and IfcSlab. This is the first step where 3D information is created and could already be interpreted as a 3D model with an appropriate application. However, to use this with interactive VR, it has to be further processed.
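The exported subset can be pictured as a small data model. The dataclasses below are a simplified stand-in for the listed IFC types and do not use an actual IFC toolkit; all class and field names are assumptions chosen for illustration.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

# Simplified stand-in for the exported IFC subset (IfcWall, IfcWindow, IfcDoor,
# IfcSpace, IfcSlab). A real export would go through an IFC library; here the
# elements are reduced to a base polygon plus an extrusion height.

Point2D = Tuple[float, float]

@dataclass
class ExtrudedElement:
    ifc_type: str               # e.g. "IfcWall", "IfcSpace" or "IfcSlab"
    profile: List[Point2D]      # base polygon in plan coordinates (meters)
    height: float               # extrusion height in meters

@dataclass
class OpeningElement:
    ifc_type: str               # "IfcDoor" or "IfcWindow"
    position: Point2D
    width: float
    height: float
    host_wall: int              # index of the wall the opening cuts into

@dataclass
class FloorPlanExport:
    walls: List[ExtrudedElement] = field(default_factory=list)
    slabs: List[ExtrudedElement] = field(default_factory=list)
    spaces: List[ExtrudedElement] = field(default_factory=list)
    openings: List[OpeningElement] = field(default_factory=list)
```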

Figure 13. Graphical user interface to supervise the individual steps in the detection process.

3.2. VR Apartment Generation

The IFC file provides the 3D structure of components and can be read with any application which is able to interpret IFC files. In contrast to these applications, the IFC interpreter developed here is based on the IFC Engine Library and extended by requirements for an automated construction of a virtual reality experience within the Unreal Engine. These VR requirements are very specific and are, to our best knowledge, not supported by other IFC viewers. For creating a successful VR experience, the following requirements have to be met; they are described in this Section 3.2. At first, a realistic representation of an apartment has to be produced in order to support potential customers in their buying decision. To achieve this goal, the application has to run on desktop environments as well as mobile devices. Furthermore, the 3D model has to be prepared for texturing and interaction. Because of this, the produced mesh has to be separated into individual wall segments, thereby defining [u,v] coordinates for a correct texture mapping. To assign appropriate textures, a room type estimation heuristic was developed.

3.2.1. 3D Model Generation The generation of the 3D model is implemented with the Unreal game engine and running in the editor mode. This means that the model can already be computed, displayed and manipulated before the VR application is built. As a representation of each mesh the procedural mesh component is used which was still an experimental plugin for Unreal at that time. At this point, the IFC file contains the high-level geometry of the floor plan. The result of the floor plan interpreter module can be read into the Unreal engine with the help of the IFC Engine library. A direct integration with the game engine is not possible since the IFC file only holds corner Technologies 2018, 6, x 15 of 26 first a realistic representation of an apartment has to be produced, in order to support potential customers in their buying decision. To achieve this goal, the application has to run on desktop environments as well as mobile devices. Furthermore, the 3D model has to be prepared for texturing and interaction. Because of this the produced mesh has to be separated into individual wall segments thereby defining [u,v] coordinates for a correct texture mapping. To assign appropriate textures a room type estimation heuristic was developed.

3.2.1. 3D Model Generation

The generation of the 3D model is implemented with the Unreal game engine and runs in the editor mode. This means that the model can already be computed, displayed and manipulated before the VR application is built. As a representation of each mesh, the procedural mesh component is used, which was still an experimental plugin for Unreal at that time.

At this point, the IFC file contains the high-level geometry of the floor plan. The result of the floor plan interpreter module can be read into the Unreal engine with the help of the IFC Engine library. A direct integration with the game engine is not possible, since the IFC file only holds corner points of elements, but not triangulated mesh data. The IFC representation, containing the base profile (polygon) and the height, has to be converted to triangle arrays. This is done for every IFC object containing a geometry definition. This conversion does not consider the further use for an immersive and realistic representation of the model. The first result of this step is a collection of mesh elements mostly clustered by rooms. The ceiling and the floor each consist of one global representation.

On top of the mesh generation, a semantic mesh segmentation algorithm was implemented to achieve a finer granularity of meshes. First, for realistic dynamic lighting, it is important to keep the meshes waterproof so that no light from the other side of the mesh leaks along the edges. Besides keeping the number of triangles low, the surfaces belonging to a room should be separated in order to allow different texture styles in one room. An example of a mesh belonging to multiple rooms can be seen in Figure 14a. Two walls connected to each other have to be subdivided according to the adjacent IFC space which describes the volume of a room. To achieve this, every space is clipped against the meshes of the walls. Thereby, overlapping wall segments are assigned to the current room (see Figure 14b). This is done for every room and every mesh belonging either to a wall, the ceiling or the floor. It also allows separating walls belonging to several rooms. At this point, every wall surface can be selected individually after adding a collider component.
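The conversion from an IFC-style base profile plus extrusion height into triangle arrays can be sketched as follows. This is a minimal fan-triangulation example that assumes a convex base polygon; it is not the engine-side implementation.

```python
from typing import List, Tuple

Point2D = Tuple[float, float]
Vertex = Tuple[float, float, float]

def extrude_profile(profile: List[Point2D], height: float):
    """Convert a base polygon plus extrusion height into vertex and triangle
    index arrays (fan triangulation of top/bottom, valid for convex profiles)."""
    n = len(profile)
    bottom = [(x, y, 0.0) for x, y in profile]
    top = [(x, y, height) for x, y in profile]
    vertices: List[Vertex] = bottom + top
    triangles: List[Tuple[int, int, int]] = []
    # caps
    for i in range(1, n - 1):
        triangles.append((0, i, i + 1))            # bottom cap
        triangles.append((n, n + i + 1, n + i))    # top cap (winding flipped)
    # side quads, two triangles each
    for i in range(n):
        j = (i + 1) % n
        triangles.append((i, j, n + j))
        triangles.append((i, n + j, n + i))
    return vertices, triangles

verts, tris = extrude_profile([(0, 0), (4, 0), (4, 0.3), (0, 0.3)], 2.8)
print(len(verts), len(tris))  # 8 vertices, 12 triangles for a box-shaped wall
```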


Figure 14. (a) Two meshes directly converted from the IFC input file; (b) Subdivided meshes according to IFC space objects for individual handling.

The IFC file does not contain any information about texture coordinates. This has to be considered in a next automated step. In order to achieve a high visual quality, it is necessary to realize automated surface unwrapping. Thereby, each mesh is divided into coplanar sub-surfaces. Each of the resulting surfaces is projected on a unit square and for every vertex, a pair of (u,v) texture coordinates between [0,1] is assigned. The longer side of the mesh is aligned with the u-vector. To achieve a realistic look, especially if tiles or a wooden floor are used, a global scaling factor is multiplied. One unit in texture space represents one meter in real scale.
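A reduced 2D sketch of this unwrapping step is shown below: the longer side of a coplanar face is mapped to the u direction and a global scale relates texture units to meters. The helper and its simplifications (axis-aligned rectangular faces only) are assumptions for illustration.

```python
def planar_uvs(points, scale=1.0):
    """Assign (u, v) coordinates to the vertices of a coplanar rectangle-like
    face: the longer side is aligned with u, and the global scale maps one
    texture unit to one meter (simplified 2D sketch of the unwrapping step)."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    w, h = max(xs) - min(xs), max(ys) - min(ys)
    swap = h > w  # align the longer side with the u direction
    uvs = []
    for x, y in points:
        u, v = (y - min(ys), x - min(xs)) if swap else (x - min(xs), y - min(ys))
        uvs.append((u * scale, v * scale))
    return uvs

# A 3 m x 2.5 m wall face: u spans the longer (3 m) side
print(planar_uvs([(0, 0), (3, 0), (3, 2.5), (0, 2.5)]))
```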

3.2.2. Room Type Estimation

In the process of preparing the 3D model for exploration, rooms should also be textured in an appropriate style. To achieve this, it is beneficial to know the type of the rooms of the floor plan. This information can also help the user when working with furniture pieces, since the system can present a manageable subset of furniture models according to the current room type. Extracting the room types directly from the floor plan can only be done if the appropriate text is available. Most floor plans, however, do not mention the room type or use non-standard abbreviations. For that reason, this auxiliary tool to estimate room types according to a heuristic was implemented.

It is necessary to analyze the floor plan in relation to the available apartments before the room types can be estimated. In order to subdivide the floor plan into individual apartments, the rooms and the connections between them are transferred to an unweighted graph. The rooms form the nodes and door openings are the edges. At the same time, the distance between rooms is calculated as the number of doors in between. This is stored in a distance matrix. In the first step, shared spaces not belonging to an apartment, such as a staircase, are searched for. So, a room is selected which has the lowest connection sum to all other rooms. In a next step, all combinations of apartments are clustered according to three rules. (1) An apartment has just one entrance to a shared space, but not leading to the outside. (2) At least two rooms are assigned to one apartment, whereas smaller rooms are also counted. (3) Finally, a minimum value of the overall living space is defined.

In a filtering step, all combinations of disjunct apartment structures are selected and are now possible solutions which have to be ranked. To achieve this, a grading system awards better combinations. If a shared space exists which can be accessed by all apartments, this combination gets a higher value, in contrast to other combinations having rooms that could not be assigned to an apartment. Due to the ranking, the algorithm takes the top result for further processing. Within the interface, the user is presented with multiple solutions and can select the most preferable one.

Since at this point an apartment configuration has been selected, it is now possible to estimate the type of rooms. For this detection process, the following room types are considered: toilet, bathroom, living room, bedroom, kitchen, dining room, and storage room. Since many buildings have rooms with multiple purposes, the approach presented here is able to assign multiple roles to one room. As a minimum configuration in every apartment, there should be space for a toilet, bathroom, kitchen, dining room, bedroom, and a living room, whereas the dining room, hallway, and kitchen are only assigned once. These six types are the first ones distributed over all available rooms. To assign the most plausible type for every room, a plausibility value for every available type is calculated according to well-defined factors. Figure 15 shows the concept of the room type estimation algorithm. Three basic estimators are used to classify the room types. The estimators are configured with different weights according to the currently tested room type.
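The room connectivity graph and the door-count distance matrix can be illustrated with a short breadth-first search. The toy layout below is invented for demonstration; in it, the staircase has the lowest connection sum to all other rooms and is therefore picked as the shared space.

```python
from collections import deque

# Rooms as nodes, doors as edges; distance = number of doors between two rooms.
connections = {
    "staircase": ["hallway_a", "hallway_b"],
    "hallway_a": ["staircase", "living_a", "bath_a"],
    "living_a":  ["hallway_a", "kitchen_a"],
    "kitchen_a": ["living_a"],
    "bath_a":    ["hallway_a"],
    "hallway_b": ["staircase", "living_b", "bath_b"],
    "living_b":  ["hallway_b", "kitchen_b"],
    "kitchen_b": ["living_b"],
    "bath_b":    ["hallway_b"],
}

def door_distances(start, graph):
    """Breadth-first search giving the number of doors between `start` and
    every other room (one row of the distance matrix)."""
    dist = {start: 0}
    queue = deque([start])
    while queue:
        room = queue.popleft()
        for neighbor in graph[room]:
            if neighbor not in dist:
                dist[neighbor] = dist[room] + 1
                queue.append(neighbor)
    return dist

# The shared space is the room with the lowest connection sum to all others.
sums = {room: sum(door_distances(room, connections).values()) for room in connections}
shared = min(sums, key=sums.get)
print(shared)  # "staircase" in this toy layout
```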

Figure 15. The concept of the room type estimation algorithm.

The first factor is related to the size of the room. On the one hand, a normalized value in relation to the size of the largest room and the apartment is used (areaInRelation). On the other hand, the real size is used to define a lower and an upper limit for specific room types (sizeRange). Another important factor is the connection possibilities to other rooms (num_doors). A toilet or a bathroom is restricted to one door only in the first assigning round. The number of doors is considered less when dealing with rooms with less privacy such as the kitchen, dining room or living room. Rooms with high privacy are also checked for whether a door leads to the outside or to a shared space with other apartments (entryDoor). The concavity factor usually helps in combination with the number of doors to define a potential hallway (form_factor); it is calculated as the relation between the size and the convex hull of the room. Hallways are likely to be more complex in shape than other rooms. Size is not a relevant value here, since hallways can be the largest rooms or might not even exist. Bathrooms or toilets are never assigned to the largest room, and the plausibility of it being a bedroom is also lowered. After every room is checked against every possible room type, a ranking is generated according to the plausibility value. Finally, the global apartment configuration is examined according to equal roles, necessary roles and distribution. The user gets a list of the most plausible configurations. If no interaction is done, the first result is used.
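The weighted combination of these estimators might be expressed as follows (the entryDoor check is omitted for brevity). All weights, size ranges and scoring rules in this sketch are assumptions for illustration; they are not the values used in the presented system.

```python
# Illustrative plausibility scoring for the room type estimation: each factor
# returns a value in [0, 1] and is weighted per room type.

SIZE_RANGES = {"bathroom": (2.0, 10.0), "hallway": (2.0, 20.0), "living room": (12.0, 60.0)}
WEIGHTS = {          # (areaInRelation, sizeRange, num_doors, form_factor)
    "bathroom":    (0.1, 0.4, 0.4, 0.1),
    "hallway":     (0.1, 0.1, 0.4, 0.4),
    "living room": (0.5, 0.4, 0.0, 0.1),
}

def plausibility(room, room_type, largest_area):
    lo, hi = SIZE_RANGES[room_type]
    area_in_relation = room["area"] / largest_area
    size_range = 1.0 if lo <= room["area"] <= hi else 0.0
    if room_type == "hallway":                     # hallways: many doors, concave shape
        num_doors = min(room["doors"] / 4.0, 1.0)
        form = 1.0 - room["area"] / room["hull_area"]
    else:                                          # private rooms: few doors, convex shape
        num_doors = 1.0 / room["doors"]
        form = room["area"] / room["hull_area"]
    w = WEIGHTS[room_type]
    return w[0] * area_in_relation + w[1] * size_range + w[2] * num_doors + w[3] * form

room = {"area": 5.0, "doors": 1, "hull_area": 5.0}      # small, compact, one door
scores = {t: round(plausibility(room, t, largest_area=30.0), 2) for t in WEIGHTS}
print(max(scores, key=scores.get), scores)  # "bathroom" ranks highest for this room
```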

3.3. VR Experience

To prepare a model for exploration, the next step is the automatic texturing according to the results of the room estimation heuristic. Correctly textured surfaces in particular are important for immersion, since they support the sense of scale and the sense of orientation. This Section 3.3 also describes the assignment of 3D models of openings and the placement of light sources. Finally, an interaction technique for locomotion in VR and for interaction with textures and furniture is also required in order to be able to evaluate the additional benefit when searching for apartments.

3.3.1. Finalizing the Model

At this point in the process, the algorithm can already provide a true to scale 3D model of the apartment with subdivided meshes prepared for texturing. Furthermore, every room is assigned to a specific set of room types. Presenting the current stage to a customer is, however, not recommended. Without lights in the environment and without realistic texturing, users would find it hard to rely on their sense of scale and orientation within the VR environment. Furthermore, openings should also be filled with appropriate models. In order to finalize the 3D model for VR presentation, meshes of windows and doors are added. The IFC file provides, therefore, the “IFCOpening” entities with the correct parameters. For placing these models, especially with door openings, it was necessary to extend the door openings to the side and to the top, since the door frame is usually thicker than the opening itself.

To increase the visual quality of the experience, lighting is an important factor. By putting light sources in every room according to the room structure, it is possible with the Unreal game engine to have dynamic shadows, which enhances the realism. The position of the light depends on the geometry of the room. Every room polygon is checked whether it is convex or concave. In rooms with a convex area, a single light source is added at the center of mass at the ceiling. Rooms which are bigger than a certain threshold (20 m²) get another light source by dividing the room into two equal areas. Non-convex rooms, on the other hand, are split into concave elements. Every element bigger than 10 m² is fitted with a light source, again at its center of mass. Light sources consist of a 3D mesh of a flat ceiling lamp and a dynamic point light source strong enough to illuminate the room.

Finally, textures are automatically assigned to wall elements according to the room type. Humid rooms such as bathrooms are usually fitted with tiles and all other rooms with a wooden floor and a single color for every wall. This concludes the automated phase for the visual creation of the apartment. At this point, a designer could still influence the appearance of the 3D model within the Unreal Editor and the additionally implemented interface. This allows for changing room types, textures, adding furniture or changing the light situation before exporting the environment as a VR application. Figure 16 shows the different stages of the reconstructed environment. As an optional step, the created 3D model can then be furnished automatically using a greedy cost optimization. This implementation by Kan et al. [23] works well on top of our automated model creation method.
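The light placement heuristic described above (convexity check, center of mass, 20 m² threshold) could be sketched like this. The polygon helpers are standard; the handling of non-convex rooms from the text is omitted, and the splitting strategy shown here is an assumption made for illustration.

```python
def polygon_area(poly):
    """Shoelace formula (vertices in order, coordinates in meters)."""
    n = len(poly)
    s = sum(poly[i][0] * poly[(i + 1) % n][1] - poly[(i + 1) % n][0] * poly[i][1] for i in range(n))
    return abs(s) / 2.0

def centroid(poly):
    xs = [p[0] for p in poly]
    ys = [p[1] for p in poly]
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def is_convex(poly):
    """All cross products of consecutive edges share the same sign."""
    n = len(poly)
    signs = set()
    for i in range(n):
        ax, ay = poly[i]
        bx, by = poly[(i + 1) % n]
        cx, cy = poly[(i + 2) % n]
        cross = (bx - ax) * (cy - ay) - (by - ay) * (cx - ax)
        if cross:
            signs.add(cross > 0)
    return len(signs) <= 1

def place_lights(room_poly, split_threshold=20.0):
    """One ceiling light at the center of the room; convex rooms larger than
    the threshold get two lights at the quarter points of the longer side."""
    area = polygon_area(room_poly)
    cx, cy = centroid(room_poly)
    if not is_convex(room_poly) or area <= split_threshold:
        return [(cx, cy)]
    xs = [p[0] for p in room_poly]
    ys = [p[1] for p in room_poly]
    w, h = max(xs) - min(xs), max(ys) - min(ys)
    if w >= h:   # split along the longer (x) direction
        return [(min(xs) + w / 4.0, cy), (min(xs) + 3 * w / 4.0, cy)]
    return [(cx, min(ys) + h / 4.0), (cx, min(ys) + 3 * h / 4.0)]

print(place_lights([(0, 0), (8, 0), (8, 4), (0, 4)]))  # 32 m² room -> two lights
```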



Figure 16. (a) The result of the symbol detection process. (b) The final environment for VR. (c,d) Manually furnished rooms for the user study.

3.3.3.2.3.2. Interaction Another main goal of the presented approach is the possibility of exploring the indoor environment with with the the help help of of a a VR VR h headset.eadset. To To be be able able to tonavigate navigate through through the the environment environment and and to provideto provide an aninteraction interaction possibility possibility an appropriate an appropriate interaction interaction metaphor metaphor needs needs to be todesigned be designed.. This Thisinteraction interaction has to has provide to provide the possibility the possibility to allow to allowthe selection the selection and manipulation and manipulation of furniture of furniture pieces piecesand surface and surface textures. textures. Furthermore Furthermore,, an appropriate an appropriate locomotion locomotion metaphor metaphor has to be has available. to be available. For the applicationFor the application at hand attwo hand tracked two trackedcontrollers controllers,, the Oculus the OculusTouch were Touch used were. used. The user user is is tracked tracked in in an an area area of ofabout about four four by byfour four meters meters with with the Oculus the Oculus Rift setup. Rift setup. This Thisrepresents represents the direct the direct interaction interaction room room for the for user. the user. In order In order to reach to reach points points beyond beyond this space this space in VR in, VR,the left the controller left controller is used is used for the for standard the standard teleportation teleportation method method provided provided by the by VR the template VR template of the of Unrealthe Unreal engine engine to teleport to teleport to an to anarb arbitraryitrary point point in inthe the line line of of sight sight of of the the user. user. The The right right controller represents the interaction device. It It is is virtually virtually surrounded surrounded by by small small but but recognizable 3D 3D items (Figure(Figure 1717a,b).a,b). TheseThese itemsitems are are hovering hovering in in a a circle circle following following the the position position of theof the controller. controll Theer. The user user can canselect select the desired the desired object object in the menuin the by menu rotating by rotating the controller the controller around the around lateral the axis. lateral The freedom axis. The of ◦ freedommotion of of the motion user’s of wrist, the user’s which wrist, is about which 180 is isabout subdivided 180° is subdivided by the number by the of currentlynumber of used currently items usedin the items menu. in Selectedthe menu. objects Selected in theobjects menu in arethe highlightedmenu are highlighted by a yellow by contour. a yellow The contour. user canThe enteruser cana submenu enter a submenu by clicking by aclicking button a on button the controller. on the controller. To go back To go the back most the left most item left in item the listin the has list to hasbe selected. to be selected. All submenus All submenus are grouped are grouped to individual to individual topics: topics: textures textures (tiles, (tiles, colors, colors, wood, wood,... …)) or furniture (chairs,(chairs, tables, beds, beds, ... …).). In In the the first first hierarchy hierarchy,, the the user user can can decide between changing textures and working with furniture. In In order order to to be able to handle a big database of furniture furniture,, the user gets only those items presented which fitfit the current current room room type. type. 
Placing furniture or selecting a wall for changing the texture is based on a ray-cast. The user can point with the controller, which shoots a ray into the scene, at an object, which gets highlighted if any interaction can be performed with it. When confirming with a button, the object (e.g., a chair) is bound to the collision point of the ray-cast and the floor, thereby fixating roll and elevation. For a better control of furniture, the same rotation as used in the menu is applied and amplified to the azimuth of the currently selected furniture.


Figure 17. Different interactions are shown above. The controller is surrounded by (a) surface materials or (b) furniture objects. (c) The selection and manipulation tool uses a raycast to select and place the object at the intersection point with the floor. The rotation of the controller rotates the chair by a factor of two.
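The placement behaviour of Figure 17c can be summarised in a short sketch: the object is bound to the intersection of the controller ray with the floor, roll and elevation stay fixed, and the controller rotation drives the azimuth amplified by a factor of two. The following Python snippet is an illustrative simplification under our own assumptions (flat floor at height zero, angles in degrees), not the authors' Unreal implementation.

```python
def intersect_floor(ray_origin, ray_direction, floor_height=0.0):
    """Return the point where a controller ray hits the floor plane, or None."""
    ox, oy, oz = ray_origin
    dx, dy, dz = ray_direction
    if abs(dz) < 1e-6 or (floor_height - oz) / dz < 0.0:
        return None                      # ray is parallel to or points away from the floor
    t = (floor_height - oz) / dz
    return (ox + t * dx, oy + t * dy, floor_height)


def place_furniture(ray_origin, ray_direction, controller_roll_deg, amplification=2.0):
    """Bind the selected object to the ray/floor intersection.

    Roll and pitch are fixed (the object stays upright on the floor); only
    the azimuth is driven by the controller rotation, amplified by a factor
    of two as in the described interaction.
    """
    hit = intersect_floor(ray_origin, ray_direction)
    if hit is None:
        return None
    yaw = (controller_roll_deg * amplification) % 360.0
    return {"position": hit, "yaw_deg": yaw, "pitch_deg": 0.0, "roll_deg": 0.0}


# Example: controller held at 1.5 m, pointing forward and downward.
pose = place_furniture((0.0, 0.0, 1.5), (0.0, 1.0, -0.5), controller_roll_deg=30.0)
print(pose)   # position (0.0, 3.0, 0.0), yaw 60 degrees
```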

4. Results

To assess the implemented features (see Table 2) and the generated 3D model of an interactive VR apartment, two evaluation methods were chosen. At first, a technical evaluation was performed in order to show the accuracy of the detection process and the degree of automation. Performing an appropriate evaluation is a non-trivial task due to the complexity of different floor plans. This can also be seen in other publications of this area, since they omit evaluation [2,6,10,24,25]. The finally constructed environment was tested with users, put into a situation of buying or renting the real estate object. The main goal was to evaluate if the system helps in their decision-making process.

Table 2. Feature description of the presented approach.

Feature                  Type         Feature                   Type
Vector plan input        Yes          Triangulation             Polygon-based
Plan cleaning            Auto         Window/Door placement     Automated
Using meta information   No           Lighting                  Automated
Symbol recognition       Automated    Texturing                 Automated
Layer information        Ignored      Surface unwrapping        Automated
Contour/Outline          Automated    Room type definition      Automated
Ceiling/Floor            Automated

4.1. Technical Evaluation

In order to evaluate the accuracy of the detection process of architectural components, the authors decided to ask professionals for architectural floor plans. Because of copyright issues and intellectual property, it is hard to get floor plans for evaluation and publication purposes. For that reason, the authors asked three professionals to draw floor plans of apartments. In order to have a variety of architectural elements, these people were asked to draw apartments of usual size. Windows and doors should have different appearances. In addition to that, every room had to contain design elements such as furniture. There were no other restrictions, not even the choice of the application was dictated. The reason why three floor plans per person were requested was, on the one hand, to limit the effort for drawing and, on the other hand, the fact that one floor plan contains multiple elements to analyze. The dataset given by the professionals contained over 180 openings and between 200 and 300 individual wall elements. No floor plan that was used for designing the algorithm was used to evaluate the method in order not to bias the results.


Figure 18. Three examples of floor plans for extracting architectural components that were used for the evaluation. The top row shows the input floor plan. The second row shows the color-coded objects.

Figure 18 shows three examples of the analyzed floor plans. Every IFC object is visualized in a separate color (doors – green, windows – blue, rooms – grey, walls – red, outside areas – purple). The technical evaluation should evaluate the whole procedure including the fully automated detection as well as the "One Click" detection, where a specific symbol is first added to a library and then used for detection. Every floor plan was first analyzed with the automated process in order to detect as many openings as possible. Only in case a symbol was missed or a false positive was detected, the "One Click" approach was used to finalize the detection process. In order to assess the effort, the number of user interactions needed to complete the detection was counted.

The floor plans used for the evaluation contain 91 doors and the same number of windows, whereas 20 different styles of doors and 10 different styles of windows were present in the drawings. Symbols are considered as identical if the number of edges and the number of neighbors of a node are the same.

The fully automated process reached a precision of 98.2% and a recall value of 92.30% when detecting window openings (see Table 3). There were only two floor plans with false positive matches. In contrast to windows, door elements performed slightly worse. A precision of 87.87% and a recall of 89.01% can be explained by the rather loose rules used to detect a broad diversity of doors. The restriction of finding a door leaf in the symbol would improve the result in this dataset.
Table 3. Results of the fully automated detection process for window and door symbols.

            Precision       Recall
Windows     0.98196248      0.923076923
Doors       0.87870711      0.89010989
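The reported figures follow the standard definitions of precision and recall over detected symbols. The short Python sketch below only illustrates how such values are derived from true-positive, false-positive and missed counts; it does not reproduce the exact per-plan aggregation behind Table 3.

```python
def precision_recall(true_positives, false_positives, missed):
    """Standard detection metrics.

    precision = TP / (TP + FP): how many reported symbols are correct.
    recall    = TP / (TP + FN): how many existing symbols were found.
    """
    precision = true_positives / (true_positives + false_positives)
    recall = true_positives / (true_positives + missed)
    return precision, recall


# Illustrative example with hypothetical counts (not the per-plan data of Table 3):
p, r = precision_recall(true_positives=84, false_positives=2, missed=7)
print(f"precision={p:.3f}, recall={r:.3f}")   # precision=0.977, recall=0.923
```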

Table 4 shows the overall detected symbols. At the end of the automated process, seven windows and ten doors out of 91 symbols could not be categorized, due to the complexity of the symbol. With the "One Click" method, one type after the other was added to a library until the detection was completed. In the end, only two door symbols could not be detected at all, since they were not at all connected to the wall segments and therefore categorized as furniture elements. In the 3D model, in the end, this would be represented as an open gate without a door.

The automatic detection of walls achieved an average of 99.26% as shown in Figure 19a. By manually adding wall segments, this value could be increased by 0.49%. Only 0.13% of all wall elements were not able to be added correctly. This happened because one drawing contained a design element which was placed inside a wall.

Table 4. Absolute detection values. With the automatic detection, 7 window and 10 door symbols could not be detected. The "One-Click" approach could reduce these to only two not detectable symbols.

            # Symbols    Automatic Detection    False-Positives    Not Detected    "One Click"    Not Detectable
Windows     91           84                     2                  7               ⇒ 7            0
Doors       91           81                     12                 10              ⇒ 8            2
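The symbol-equivalence rule used when filling the "One Click" library, i.e., two symbols are treated as identical if they have the same number of edges and the same neighbour counts per node, can be sketched as follows. The graph representation and the function name are hypothetical simplifications of that rule.

```python
def same_symbol(symbol_a, symbol_b):
    """Treat two symbol graphs as identical if they have the same number of
    edges and the same multiset of node degrees (neighbours per node).

    Each symbol is given as a list of edges, where an edge is a pair of node ids.
    """
    def degree_profile(edges):
        degrees = {}
        for u, v in edges:
            degrees[u] = degrees.get(u, 0) + 1
            degrees[v] = degrees.get(v, 0) + 1
        return len(edges), sorted(degrees.values())

    return degree_profile(symbol_a) == degree_profile(symbol_b)


# Two door symbols drawn with the same topology but different node ids:
door_1 = [(0, 1), (1, 2), (2, 3)]
door_2 = [("a", "b"), ("b", "c"), ("c", "d")]
print(same_symbol(door_1, door_2))   # True
```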

On average, 3.2 (σ = 2.04) interactions were necessary to complete a detection in a floor plan (see Figure 19b). Adding or removing a wall segment, a "One Click" definition of a symbol, or removing a false positive element are each counted as an individual interaction. Finally, Figure 19c shows the average processing time of the fully automated step of all analyzed floor plans with a median of 11.65 s. The duration is highly dependent on the complexity of the floor plan and the number of contained line segments.


Figure 19. (a) The detection rate of all wall elements over all analyzed floor plans. (b) Average necessary interactions in order to complete the floor plan. (c) Time needed to complete the automated detection process of a floor plan.

4.2. User Study

In addition to the technical evaluation of the presented system, the authors conducted a user study in order to research the applicability of the resulting VR environment in a real-world scenario. The main objective of the user study is the decision-making process of a person searching for an apartment. The research question evaluates if users prefer a more detailed environment with furniture and the possibility of interacting with it, or if they find an empty VR apartment more helpful in this situation. Within the user study, we compare three different representations of an apartment: 2D renderings, an empty VR apartment, and a furnished apartment. Furthermore, the question if people are willing to use such a tool while searching for an apartment was answered.

To research these questions, a setup was built with one and the same apartment reconstructed for each user. The participants were asked to put themselves in the position of searching for an apartment which is still in the construction phase, whereas the factors number of persons, income and age should not be considered. The user was confronted with three different scenarios representing the same apartment. After each of these three tasks, the user had to answer multiple questions based on the 5-point Likert scale [26]. All three scenarios are a direct result of the implemented system. At first, each user was confronted with multiple renderings of the apartment and a perspective top view printed on paper. For the rest of the study, the participant was equipped with an Oculus CV1 HMD and was tracked within an area of approximately 4 m by 4 m. With the two Oculus Touch controllers, the participant could navigate through the environment with a teleporting metaphor. In the first VR scenario, the user was put into an empty VR apartment and asked to visit every room. The final VR condition was an apartment equipped with furniture and the possibility of changing the appearance of all surfaces. Finally, the participant was asked to furnish the only empty room in the apartment.

Overall, six women and seven men between 25 and 55 years of age were part of the study. Nine participants have been searching for apartments more than three times in their life and the other four people two times. So, everyone was familiar with the situation they were put in. On the other hand, people did not feel experienced with VR hardware (mean value 2.38 with σ = 0.76). All except one participant confirmed to have no problems experiencing depth in 3D movies, which is an important factor in a VR application.

The decision-making process when searching for an appropriate apartment is complex and dependent on multiple factors. The user study at hand tries to analyze some factors which could lead to a conclusion. The following aspects are covered: object appearance in VR, the sense of presence, the importance of information available in VR, and the readiness of actually using such a system. This was achieved by an extensive questionnaire, where the most important results will be discussed here.

After experiencing the three different presentations of the apartment, the participants were asked about the visual appearance of the automatically generated apartment. The results can be seen in Figure 20a, which shows that 70% of the participants rated the general appearance of the apartment with "realistic" (4) or "very realistic" (5). The individual components of the VR setup were rated in a similar range; only doors and windows were rated lower. This can be caused by the usage of only one door and one window model for the whole apartment, which are scaled to the size of the opening and therefore look a bit distorted.


Figure 20. (a) Individual parts of the VR environment in relation to the rated visual appearance. (b) Results of the presence questionnaire for both VR scenarios.

After each VR session, the presence according to Usoh et al. [27] was measured, which should show how immersed the users felt while being in the VR environment. As Figure 20b shows, the overall rating is rather high in both scenarios. While the empty apartment was rated at an average of 5.71 (σ = 1.16), the fully furnished version reached an average of 6.18 (σ = 1.12) on the 7-point Likert scale.

After the first VR exposure without furniture, the participants were asked which information they are missing within the environment in order to get a better impression of the situation. Figure 21a shows the most mentioned keywords. The majority of participants mentioned furniture as a keyword, without knowing that the final VR setup would contain this feature. Furthermore, a visualization of the time of the day and the location of plug sockets was considered important.

In comparison to all kinds of representations, the floor plan itself is considered to be the most important source of information. An interesting finding is that the VR experience and the possibility of interacting with the environment were rated slightly better than or equal to the renderings (see Figure 21b). So, all provided information seemed to be helpful to the user in order to get a better understanding of the apartment.

Because visiting an apartment in VR is associated with high time expenditure, the authors also asked how plausible it is to actually use such a system when searching for a real estate. All participants were asked if they would take the time to use the setup and for which purpose. Figure 22 shows the results. All participants rated either "Strongly agree" or "Agree" to use the system when searching for a real estate (average 4.84 and σ = 0.36). Similar results can be seen when asking for the purpose. Participants would like to narrow down the potential offers (average 4.45 and σ = 0.84) with the help of the system. Making a final decision, on the other hand, still resulted in a positive direction but was rated lower (average 3.62 and σ = 0.83).


Figure 21. (a) Participants were missing these topics in the VR environment without furniture. The bigger the font size, the more people asked for this special topic. (b) The importance of representations of a real estate. The floor plan was rated uniformly with the highest value.

The implemented interaction technique was part of the second VR task, where people could interact with surfaces and place furniture in a room. In order to get an understanding of the usability, the System Usability Scale (SUS) [28] questionnaire was used. An average score of 80.19 was achieved. The individual results of the 10 questions can be seen in Figure 23. Participants also "Strongly agreed" to the statement that they would like to use the system for their home in order to place and try out new furniture (average 4.69 and σ = 0.46).
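For reference, a SUS score is obtained with the standard scoring scheme of Brooke [28]: odd-numbered (positive) items contribute the answer minus one, even-numbered (negative) items contribute five minus the answer, and the sum is scaled by 2.5 to a 0–100 range. The snippet below uses made-up answers for illustration, not the study data.

```python
def sus_score(answers):
    """Compute a System Usability Scale score from 10 answers on a 1-5 scale.

    Odd-numbered items (positive statements) contribute (answer - 1),
    even-numbered items (negative statements) contribute (5 - answer);
    the sum of contributions is multiplied by 2.5 to reach a 0-100 range.
    """
    assert len(answers) == 10
    total = 0
    for i, a in enumerate(answers, start=1):
        total += (a - 1) if i % 2 == 1 else (5 - a)
    return total * 2.5


# Made-up example answers for one participant:
print(sus_score([5, 2, 4, 1, 5, 2, 4, 2, 5, 1]))   # 87.5
```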

Figure 22. Answers of the participants regarding the personal attitude towards the system.

Finally, all participants were asked whether they would buy or rent the shown apartment either only from seeing renderings, experiencing an empty apartment, or a fully furnished one. Furthermore, they could differentiate between an already existing object and a not yet existing object. By far the best results were recorded in the fully furnished condition of a not yet existing apartment. The majority of the participants would agree to buy it without seeing the object. Considering a 5-point Likert scale, people agreed with an average of 4.15 (σ = 0.77), followed by the condition of a furnished, but already existing VR apartment with an average of 3.46 (σ = 1.15). The conventional way of providing a floor plan and renderings was rated far behind with an average score of 2.54 (σ = 1.01).

To prove that the participants were not influenced by cybersickness, the Simulator Sickness Questionnaire (SSQ) [29] was used before and after the VR experience. The results of the pre- and post-questionnaire showed no extreme results. According to the total score, nine participants felt better after the experiment. The total value of the four others was slightly worse, mainly due to sweating. This could also be caused by the high temperature of above 26 °C in the laboratory setup. Only one participant reported a moderate problem with nausea after the experiment.

Figure 23. Individual results of the System Usability Scale (SUS) questionnaire.

5. Conclusions

In this paper, we presented a completely automated 3D model generation workflow for apartment structures in order to be experienced in VR by potential buyers. As a starting point, a vectorized floor plan is analyzed regarding only structural elements. A game engine is used to further process this information by generating a 3D mesh for appropriate VR interaction, assigning suitable room types, textures, and lights. Finally, a user study and a technical evaluation were performed.

The main focus of the work at hand was to construct a workflow with a high degree of automation which allows providing a large number of VR experiences of apartments available on the market at a reasonable effort. With a floor plan analysis time of about one minute for an apartment of average complexity and about three to four interactions on average, the detection process is finished in a few minutes. The 3D model generation process, including the baking of the lights in the Unreal game engine, takes another two to three minutes. In general, it can be said that an apartment can be finished for exploration without furniture in about 10 to 15 min. We have shown that it is possible to process floor plans from different sources with different complexity. In contrast to other approaches, the concept at hand poses only few requirements on the floor plan. As such, we do not rely on parallel lines [8,12] or any other layer information [9]. The automated detection of openings also works without the need of an initialization step [7] or any user input. Of course, our approach could also benefit from layer information or text contained in the input file to speed up the process or to extend the functionality, but it is not necessary.

Creating the VR experience in this work allows professional interior designers to adapt and extend the appearance of the apartment before exporting the VR application. This user-oriented feature is also a benefit of our implementation in contrast to other approaches.

A meaningful numerical comparison between the concepts presented here and similar work is not possible since, to our best knowledge, there is no public database of floor plans available to calculate comparable performance measurements. However, the quality of the result was measured in multiple ways. Besides the already good precision and recall values for the automated detection process, the full detection process within the tested drawings only had problems with two door symbols. This reflects the technical quality. More important than a good 3D representation is the acceptance to use the system. The results of our evaluation suggest that the majority of people derive added value from the VR experience. On the one hand, they agreed to invest time to experience an apartment in VR. On the other hand, participants would even buy an apartment only by experiencing it in VR. Finally, people consider a VR visit similarly important to having 2D renderings.

The findings of the work at hand can be very important for many application areas, as an automated 3D model can be used in various scenarios such as maintenance, security, advertising and many more. Due to the lack of available living space in some areas, apartments are often sold before they have been built. Having a tool to create such a representation provides added value for these buyers. Application areas such as buying furniture or refurbishing a room could benefit from our system. Further research in this area can be based on a database that is created by the detection algorithm and will be used to further develop the detection process with the help of neural networks. A big dataset of ground truth data will be needed in order to train a working network.

Author Contributions: G.G. performed the manuscript preparation. The implemented methodology, the main algorithm and the conducted user studies were carried out by G.G., L.F. and M.T. H.K. supervised the work. All authors discussed the basic structure of the manuscript and approved the final version.

Funding: This research was funded by the Austrian Research Promotion Agency [FFG–850706].

Acknowledgments: We would like to thank all participants of the user studies for their time and interest. The authors also acknowledge the support provided by partners from the real estate area, Immobilien lifetime GmbH and the company 3motion Virtual, for their insights and valuable discussions.

Conflicts of Interest: The authors declare no conflict of interest.

References

1. Guo, T.; Zhang, H.; Wen, Y. An improved example-driven symbol recognition approach in engineering drawings. Comput. Graph. 2012, 36, 835–845. [CrossRef]
2. Dosch, P.; Tombre, K.; Ah-Soon, C.; Masini, G. A complete system for the analysis of architectural drawings. Int. J. Doc. Anal. Recognit. 2000, 3, 102–116. [CrossRef]
3. So, C.; Baciu, G.; Sun, H. Reconstruction of 3D virtual buildings from 2D architectural floor plans. In Proceedings of the ACM Symposium on Virtual Reality Software and Technology, Taipei, Taiwan, 2–5 November 1998; pp. 17–23. [CrossRef]
4. Yan, L.; Wenyin, L. Engineering drawings recognition using a case-based approach. In Proceedings of the Seventh International Conference on Document Analysis and Recognition, Edinburgh, UK, 6 August 2003; Volume 1, pp. 190–194. [CrossRef]
5. Yin, X.; Wonka, P.; Razdan, A. Generating 3D Building Models from Architectural Drawings: A Survey. IEEE Comput. Graph. Appl. 2009, 29, 20–30. [CrossRef] [PubMed]
6. Dosch, P.; Masini, G. Reconstruction of the 3D structure of a building from the 2D drawings of its floors. In Proceedings of the International Conference on Document Analysis and Recognition, ICDAR, Bangalore, India, 22 September 1999.
7. Luqman, M.M.; Brouard, T.; Ramel, J.-Y. Graphic Symbol Recognition Using Graph Based Signature and Bayesian Network Classifier. In Proceedings of the 10th International Conference on Document Analysis and Recognition, Barcelona, Spain, 26–29 July 2009. [CrossRef]
8. Domínguez, B.; García, Á.L.; Feito, F.R. Semiautomatic detection of floor topology from CAD architectural drawings. CAD Comput. Aided Des. 2012, 44, 367–378. [CrossRef]
9. Noack, R. Converting CAD Drawings to Product Models. Licentiate Thesis, KTH Royal Institute of Technology, Stockholm, Sweden, June 2001.

10. Lu, T.; Tai, C.L.; Su, F.; Cai, S. A new recognition model for electronic architectural drawings. CAD Comput. Aided Des. 2005, 37, 1053–1069. [CrossRef]
11. Lu, T.; Yang, H.; Yang, R.; Cai, S. Automatic analysis and integration of architectural drawings. Int. J. Doc. Anal. Recognit. 2007, 9, 31–47. [CrossRef]
12. Zhu, J.; Zhang, H.; Wen, Y. A new reconstruction method for 3D buildings from 2D vector floor plan. Comput. Aided Des. Appl. 2013, 11, 704–714. [CrossRef]
13. Hakkarainen, M.; Woodward, C.; Rainio, K. Software for Mobile Mixed Reality and 4D BIM Interaction. In Proceedings of the 25th CIB W78 Conference, Santiago, Chile, 15–17 July 2008.
14. Ahamed, S.; Murray, N.; Kruithof, S.; Withers, E. Advanced Portable Visualization System; RR-255; National Research Council Canada: Ottawa, Canada, 1 March 2008. [CrossRef]
15. Doulis, M. 4DIVE-A 4D Interface for the Visualization of Construction Processes in a Virtual Environment. In Proceedings of the 7th International Conference on Construction Applications of Virtual Reality, University Park, PA, USA, 22–23 October 2007.
16. Döllner, J.; Hagedorn, B. Integrating urban GIS, CAD, and BIM data by service-based virtual 3D city models. In Urban and Regional Data Management–UDMS Annual 2007; Rumor, M., Coors, V., Fendel, E., Zlatanova, S., Eds.; CRC Press: London, UK, 2008.
17. Yan, W.; Culp, C.; Graf, R. Integrating BIM and gaming for real-time interactive architectural visualization. Autom. Constr. 2011, 20, 446–458. [CrossRef]
18. Uddin, M.S.; Yoon, S.-Y.; House, X. Scheme G: From 3D Game Engine to Virtual Representation of Architecture. J. Des. Commun. Assoc. 2004.
19. Ahmed, S.; Liwicki, M.; Weber, M.; Dengel, A. Automatic room detection and room labeling from architectural floor plans. In Proceedings of the 10th IAPR International Workshop on Document Analysis Systems, DAS 2012, Gold Coast, Queensland, Australia, 27–29 March 2012.
20. Marson, F.; Musse, S.R. Automatic Real-Time Generation of Floor Plans Based on Squarified Treemaps Algorithm. Int. J. Comput. Games Technol. 2010, 2010, 7. [CrossRef]
21. Müller, P.; Wonka, P.; Haegler, S.; Ulmer, A.; Van Gool, L. Procedural modeling of buildings. In ACM SIGGRAPH 2006 Papers, SIGGRAPH '06; ACM: New York, NY, USA, 2006.
22. Shimrat, M. Algorithm 112: Position of point relative to polygon. Commun. ACM 1962, 5, 434. [CrossRef]
23. Kan, P.; Kaufmann, H. Automatic Furniture Arrangement Using Greedy Cost Minimization. In Proceedings of the 25th IEEE Conference on Virtual Reality and 3D User Interfaces, Reutlingen, Germany, 18–22 March 2018.
24. Or, S.; Wong, K.; Yu, Y.; Chang, M.; Kong, H. Highly Automatic Approach to Architectural Floorplan Image Understanding & Model Generation. In Pattern Recognition; IOS Press: Amsterdam, The Netherlands, 2005; pp. 25–32.
25. Lewis, R.; Séquin, C. Generation of 3D building models from 2D architectural plans. Comput. Des. 1998, 30, 765–779. [CrossRef]
26. Likert, R. A technique for the measurement of attitudes. Arch. Psychol. 1932, 22, 55.
27. Usoh, M.; Catena, E.; Arman, S.; Slater, M. Using presence questionnaires in reality. Presence Teleoperators Virtual Environ. 2000. [CrossRef]
28. Brooke, J. SUS: A quick and dirty usability scale. Usability Eval. Ind. 1996, 189, 4–7. [CrossRef]
29. Kennedy, R.S.; Lane, N.E.; Berbaum, K.S.; Lilienthal, M.G. Simulator Sickness Questionnaire: An Enhanced Method for Quantifying Simulator Sickness. Int. J. Aviat. Psychol. 1993. [CrossRef]

© 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).