https://doi.org/10.20965/ijat.2021.p0274

Paper: Research on Identification of Road Features from Point Cloud Data Using Deep Learning

Yoshimasa Umehara ∗1,†, Yoshinori Tsukada∗2, Kenji Nakamura∗3, Shigenori Tanaka∗4, and Koki Nakahata∗5

∗1 Organization for Research and Development of Innovative Science and Technology, Kansai University, 3-3-35 Yamate-cho, Suita-shi, Osaka 564-0073, Japan
† Corresponding author, E-mail: [email protected]
∗2 Faculty of Business Administration, Setsunan University, Neyagawa, Japan
∗3 Faculty of Information Technology and Social Sciences, Osaka University of Economics, Osaka, Japan
∗4 Faculty of Informatics, Kansai University, Takatsuki, Japan
∗5 Graduate School of Informatics, Kansai University, Takatsuki, Japan
[Received October 30, 2020; accepted February 10, 2021]

Laser measurement technology has progressed significantly in recent years, and diverse methods have been developed to measure three-dimensional (3D) objects within environmental spaces in the form of point cloud data. Although such point cloud data are expected to be used in a variety of applications, such data do not possess information on the specific features represented by the points, making it necessary to manually select the target features. Therefore, the identification of road features is essential for the efficient management of point cloud data. As a technology for identifying features from the point cloud data of road spaces, in this research, we propose a method for automatically dividing point cloud data into units of features and identifying features from projected images with added depth information. We experimentally verified that the proposed method accurately identifies and extracts such features.

Keywords: i-Construction, road feature, point cloud data, deep learning, feature identification

1. Introduction

Laser measurement technology has progressed drastically in recent years, and diverse methods have been developed to measure three-dimensional (3D) objects that exist within road spaces in the form of point cloud data. In addition, 3D measurement devices used to acquire point cloud data in many locations in Japan include mobile mapping systems [1], airborne lasers [2], terrestrial laser scanners [3], unmanned aerial vehicle lasers [4], and mobile lasers [5]. Because such point cloud data are an effective means to capture the current detailed configuration of urban spaces, their use is expected in a variety of applications. In particular, the declining workforce in the field of maintaining and administrating road spaces has raised concerns [6], where the use of point cloud data can be expected to achieve automation and higher efficiency. Previous studies in this area include those proposing the generation of dynamic maps from point cloud data [7], the efficient 3D data construction of deteriorated bridges [8], and the inspection of incidental road structures [9]. Furthermore, the Japanese Ministry of Land, Infrastructure, Transport and Tourism (MLIT) has set up a committee to promote the introduction of new technologies and systems for infrastructure maintenance [10], under which efforts are being made to research and develop technologies to efficiently administer facilities using point cloud data. In addition to such efforts to develop various technologies, there have been undertakings to make point cloud data available to the general public, including the Shizuoka Point Cloud DB (PCDB) [11] by the Shizuoka prefectural government, "My City Construction" by the Association for Promotion of Infrastructure Geospatial Information Distribution [12], and the MLIT Data Platform 1.0 [13].

As can be seen from the active undertakings to make use of point cloud data, as well as MLIT's agenda of promoting i-Construction [14], which aims to improve construction productivity by making use of 3D information, opportunities for employing point cloud data in road space management are steadily rising. However, point cloud data are merely sets of a vast number of points to which XYZ coordinates are attached that represent spatial positions or laser reflection intensities; they do not carry information about the specific features indicated by the points or their relations with other points, which makes it difficult to use them in a manner suitable for the intended application. For example, in a previous research on the use of point cloud data for the inspection of road incidental structures [9], it was necessary to manually select the point cloud data of road incidental structures such as signposts from among the vast point cloud data. As this case shows, the identification of road features is an essential item in the efficient treatment of point cloud data. The identification of road features makes it possible to manage point cloud data according to road feature units in a time series, which allows various advanced usages such as the detection of differences or changes.



The authors have been involved in research to identify road features from point cloud data, where they have developed technologies to identify physical features present in road spaces [15–20] as well as conceptual features without a concrete physical presence, such as the road centerline [21, 22]. The present research, which deals with real objects such as traffic signs, falls under the former category.

As a method to identify physically present road features, we proposed a method based on the completed plan drawings produced at the time of road construction [15]. Hatchings that represent road features on completed plans were used to extract road features with high accuracy. Some unresolved issues are the failure to identify grade separations of road surface features such as driveways and sidewalks or steep gradients, as well as a misalignment between the completed plan and point cloud data. To address these issues, we proposed a method to improve the extraction accuracy of road surface features by estimating their elevations from the completed plans [16], and another method to carry out high-precision registration between the completed plan and point cloud data [17]. As a result, we were able to establish a method to identify, using completed plans, road features in diverse sites that also include grade separations and inclines.

Although it is possible to extract road features with high accuracy using the methods employing completed plans [15–17], they can only be used for sections of national roads constructed since 2006, when a government manual [23] was put into effect, making it mandatory to prepare drawings for road works. Therefore, we studied technologies to identify road features from point cloud data without resorting to completed plans [18, 19]. In [18], we were able to robustly identify features using a random forest, but found it difficult to define the characteristic features when the targeted road space included features with diverse shapes. Meanwhile, in [19], we were able to achieve, by employing deep learning, a highly versatile identification without the need to pre-define the feature characteristics. Focusing on bridges in particular among road features, we then studied a method of identifying bridge parts based on deep learning [20] to assess the possibility of the advanced use of a deep learning approach. The results of these experiments confirmed that it is possible to identify road features using this method. However, in [19], the authors brought to the surface such issues as the need to manually divide the point cloud data of target features and the loss of depth information in the projected direction when the 3D point cloud data are image-projected. Thus, the aim of the present research is to resolve these issues and establish a method for identifying road features based on deep learning.

In this research, we resolve the issues remaining in [19], namely, the "need to manually divide the features" and "the loss of depth information due to image projection," to establish a technology to identify road features based on deep learning, which will enable feature identification in any kind of road space, in addition to those road sections for which drawings are available. This can be expected to promote the wider use of point cloud data and encourage advanced applications.

2. Proposal to Resolve Issues

2.1. Issues

As stated above, our objective is to resolve the issues remaining in [19], namely, the "need to manually divide the features" and "the loss of depth information owing to an image projection."

With respect to the former issue, we remove those features that come into contact with the target features, such as the ground, utility lines, and buildings, based on their geometric characteristics, and then apply 3D labeling to the point cloud data based on Euclidean distances for division into units of features (i.e., objects).

The latter issue arose because we applied a method that is intended for use with 2D images [24] to 3D point cloud data. Although a method [25] was devised to treat 3D point cloud data using deep learning for feature identification, it does not allow learning to take place within a realistic time when the point cloud data of road spaces include a variety of features. Therefore, in this research, when the point cloud data are projected onto an image, information on their 3D configuration is reflected in the RGB values of the generated image, which allows learning and identification without a loss of information.

2.2. Content of Proposal

The processing flow of the proposed method is shown in Fig. 1. With the proposed method, the input point cloud data are subjected to three functions: removing the connected features and then dividing and identifying them before being output as the point cloud data of individual features.

The connected-feature removal function removes the features that connect and merge individual features targeted for extraction from the point cloud data to increase the accuracy of the feature-division function. The feature-division function divides the point cloud data into data representing individual features by labeling based on Euclidean distances. The feature-identification function applies deep learning to the point cloud data, which are divided according to individual features, allowing the feature types to be identified.

The target features in this research consist of pole-shaped objects, including signposts, illumination posts, utility poles, traffic signals, fences, guardrails, low vegetation, and trees, which are target objects of maintenance and management services [26] commonly found in road spaces and specified in MLIT inspection manuals [27–29]. In addition, the point cloud data include vehicles, pedestrians, and other moving objects captured during the measurements, which must be removed. Therefore, vehicles and human beings are also included as identification targets, although they are not targets of road management.


Fig. 1. Process flow.

3. Connected-Feature Removal Function

With the proposed method, the point cloud data are divided into individual features by 3D labeling, in which adjacent clusters in the point cloud data are divided into groups based on point-to-point distances and labeled accordingly. Because the target features of this research come into contact with the ground, utility lines, buildings, and other elements, all point cloud data may be labeled under a single group, as shown in Fig. 2, when the proposed method is applied. Therefore, we achieve proper labeling by removing the ground, utility lines, and buildings, which cause different features to be grouped into a single label, from the point cloud data. This function consists of ground removal, utility-line removal, and building removal.

Fig. 2. Example of connected features. (a) Connected by ground points. (b) Connected by utility lines. Labels are indicated by random colors.

3.1. Ground Removal

This process removes the ground, such as streets and sidewalks, from the input point cloud data P = {pi | i = 1,2,3,...} (pi, arbitrary point). To this end, we employ a cloth simulation [30]. As shown in Fig. 3, the cloth simulation consists of inverting the point cloud data in the height direction, covering it with a cloth that has some rigidity, and extracting the points that come into contact with the cloth as the ground. By applying this method, the point cloud data P are divided into ground points, GP = {gpj | j = 1,2,3,...} (gpj, arbitrary point), and other points, NGP = {ngpk | k = 1,2,3,...} (ngpk, arbitrary point).

Fig. 3. Ground removal.
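The ground-removal step can be prototyped with the open-source CSF package, which implements the cloth simulation filter of [30]. The package name, its parameter values, and the call signatures below are assumptions about that implementation, not part of the paper.

```python
# Sketch: split P into ground points GP and non-ground points NGP with a
# cloth simulation filter (assumes the CSF package: pip install cloth-simulation-filter).
import numpy as np
import CSF

def split_ground(points_xyz):
    """points_xyz: (N, 3) array. Returns (GP, NGP)."""
    csf = CSF.CSF()
    csf.params.cloth_resolution = 0.5          # cloth grid size in metres (assumed value)
    csf.params.rigidness = 3                   # cloth rigidity (assumed value)
    csf.setPointCloud(points_xyz.tolist())
    ground_idx, non_ground_idx = CSF.VecInt(), CSF.VecInt()
    csf.do_filtering(ground_idx, non_ground_idx)   # cloth draped over the inverted cloud
    gp = points_xyz[np.array(list(ground_idx))]
    ngp = points_xyz[np.array(list(non_ground_idx))]
    return gp, ngp
```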

3.2. Utility Line Removal

This process removes points that represent utility lines from the non-ground point cloud data NGP by dividing the point cloud data into voxels and checking the number of points in each voxel. First, voxels of a given size, Pegsize, are used to divide the point cloud data NGP into point cloud data of individual voxels, EGPS = {egpsl | l = 1,2,3,...} (egpsl, point cloud data of arbitrary voxels). Next, for voxels lying at or above a given height Peoffset above the ground points GP, the bounding boxes (BBs) of the point cloud data egpsl of individual voxels are acquired. To obtain the bounding boxes, the point cloud data egpsl are rotated once about the Z-axis, and the bounding box with the rotation angle that gives the minimum area on the XY plane is chosen. The area is obtained by projecting the point cloud data egpsl onto the XY plane, and then computing the area of the rectangle whose diagonal connects the point with the maximum X and Y values to the point with the minimum values. Then, as shown in Fig. 4, if the bounding box has a height of Pemaxheight or less, the narrow side of its base is Pemaxwidth or less, and its wide side is Pegsize or greater, the point cloud data egpsl are removed as a utility line. This yields the point cloud data NEP = {nepm | m = 1,2,3,...} (nepm, arbitrary point), from which the utility lines have been removed.
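A minimal sketch of the utility-line test described above. The angular step of the rotation search is an assumption (the paper does not state one); the default parameter values follow Section 6.2.1.

```python
# Sketch: decide whether the points of one voxel (lying above P_eoffset) form a utility line.
import numpy as np

def is_utility_line(egps, p_egsize=0.5, p_emaxheight=0.3, p_emaxwidth=0.2):
    """egps: (N, 3) points of a single voxel."""
    height = egps[:, 2].max() - egps[:, 2].min()
    best = None
    for deg in range(0, 180, 5):                       # rotate once about the Z-axis
        t = np.radians(deg)
        rot = np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])
        xy = egps[:, :2] @ rot.T
        ext = xy.max(axis=0) - xy.min(axis=0)          # XY extent of the axis-aligned box
        if best is None or ext[0] * ext[1] < best[0] * best[1]:
            best = ext                                  # keep the minimum-area orientation
    narrow, wide = sorted(best)
    # thin, long, and not too tall -> remove the voxel's points as a utility line
    return height <= p_emaxheight and narrow <= p_emaxwidth and wide >= p_egsize
```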

Fig. 4. Utility line removal.

3.3. Building Removal

When the point cloud data represent a measurement of road spaces in which a building exists alongside a road, the building obstructs the laser and prevents a measurement of the ground points of the building interior, and therefore the ground points and the building area have a common boundary where they come into contact. Thus, by removing those points within the vicinity of the boundaries of the ground point GP sections, the buildings can be removed from the point cloud data NEP. First, as shown in Fig. 5, the ground points GP are divided using a grid GG = {ggn | n = 1,2,3,...} (ggn, arbitrary grid cell), consisting of cells with a given size Pggsize in the X- and Y-axis directions and an infinite height in the Z-axis direction. Next, the presence of point cloud data in each cell of the grid GG is checked, from which the grid cells GGOUT = {ggouto | o = 1,2,3,...} (ggouto, arbitrary grid cell) that lie along the boundaries are acquired. The point cloud data NEP are then also divided using the grid GG. Finally, the points of the point cloud data NEP that lie in the boundary grid cells GGOUT are removed to obtain the point cloud data NOP = {nopp | p = 1,2,3,...} (nopp, arbitrary point), from which buildings at the boundaries have been removed.

Fig. 5. Building removal.
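A sketch of the boundary-cell removal, under one possible reading of the boundary test (a ground-occupied cell with at least one empty neighbor counts as a boundary cell); the grid size Pggsize follows Section 6.2.1.

```python
# Sketch: remove NEP points that fall in grid cells on the boundary of the ground region.
import numpy as np
from scipy import ndimage

def remove_buildings(gp, nep, p_ggsize=0.5):
    origin = gp[:, :2].min(axis=0)
    occ_idx = np.floor((gp[:, :2] - origin) / p_ggsize).astype(int)
    shape = occ_idx.max(axis=0) + 1
    occupied = np.zeros(shape, dtype=bool)
    occupied[occ_idx[:, 0], occ_idx[:, 1]] = True          # cells containing ground points
    interior = ndimage.binary_erosion(occupied)             # cells whose neighbours are all occupied
    boundary = occupied & ~interior                          # ground cells touching empty cells
    nep_idx = np.floor((nep[:, :2] - origin) / p_ggsize).astype(int)
    inside = np.all((nep_idx >= 0) & (nep_idx < shape), axis=1)
    keep = np.ones(len(nep), dtype=bool)
    keep[inside] = ~boundary[nep_idx[inside, 0], nep_idx[inside, 1]]
    return nep[keep]                                         # NOP: boundary (building) points removed
```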

4. Feature-Division Function

This function extracts the point cloud data of individual features from the point cloud data NOP. Because the obtained point cloud data are projected as an image by the feature-identification function, which is described below, this function divides the point cloud data to prevent occlusions between features in the projected image, thereby improving the identification accuracy. This function consists of 3D labeling and noise removal.

4.1. 3D Labeling

This process subjects the point cloud data NOP to 3D labeling based on the Euclidean distances, as shown in Fig. 6, to extract candidates for the point cloud data of individual features. First, the point cloud data NOP are divided into voxels of a given size Plgsize. Next, adjacent voxels are assigned the same label. Finally, the points included in voxels with the same label are extracted as a single feature, thus obtaining the point cloud data OBPS = {obpsq | q = 1,2,3,...} (obpsq, point cloud data of arbitrary features) of individual features.

Fig. 6. 3D labeling.

4.2. Noise Removal

The results of the 3D labeling of the point cloud data include outlier labels owing to measurement errors and other factors. Therefore, this process removes such noise by filtering the point cloud data obpsq of individual features based on the number of points. Specifically, the number of points in each point cloud data obpsq, which represents an individual feature, is checked, and if it is equal to or less than a given number Pnmin, it is removed. As a result, we acquire the point cloud data NOBPS = {nobpsr | r = 1,2,3,...} (nobpsr, point cloud data of arbitrary features), from which small outliers have been removed.
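The feature-division function can be sketched as follows, using occupancy-voxel labeling with 26-connectivity as a stand-in for the Euclidean 3D labeling described above, followed by the Pnmin point-count filter; the parameter values follow Section 6.2.2.

```python
# Sketch: voxel-based connected-component labeling plus the point-count noise filter.
import numpy as np
from scipy import ndimage

def divide_features(nop, p_lgsize=0.15, p_nmin=100):
    origin = nop.min(axis=0)
    vox = np.floor((nop - origin) / p_lgsize).astype(int)
    grid = np.zeros(vox.max(axis=0) + 1, dtype=bool)
    grid[tuple(vox.T)] = True
    labels, _ = ndimage.label(grid, structure=np.ones((3, 3, 3)))  # adjacent voxels share a label
    point_labels = labels[tuple(vox.T)]
    features = []
    for lab in np.unique(point_labels):
        obps = nop[point_labels == lab]
        if len(obps) > p_nmin:            # noise removal: drop clusters with too few points
            features.append(obps)
    return features                        # NOBPS: one array per candidate feature
```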

5. Feature-Identification Function

To identify the features represented by the point cloud data, this function applies You Only Look Once v3 (YOLOv3) [24], which is an object identification technology, to the point cloud data nobpsr of individual features extracted through the feature-division function. Note that we used YOLOv3 [24] in this research, although YOLOv4 [31] has been released, because the former has an established record in existing studies. This function consists of the generation of a projected image from the point cloud, the construction of feature identification models for training, and the identification of features for evaluation.

5.1. Generation of Point Cloud Projected Image

This process generates images by projecting the point cloud data nobpsr onto a plane of projection, as shown in Fig. 7. To prevent the loss of 3D information through the projection of the point cloud data to an image, RGB values are assigned based on the 3D configuration. First, the plane of projection used to transform the point cloud data into a projected image is defined. To preserve the relative sizes of features when creating an image projection, the size of the projected image, Pimgsize, and the dimension of the space assigned to the projected image (hereafter referred to as the "actual distance of the projected target"), Pprjlen, are set. Next, the point cloud data nobpsr are rotated about the Z-axis, and the angle that maximizes the width of the projected image on the plane of projection (the projected width in Fig. 7) is chosen. The plane of projection is established along the X- and Z-axes, with the coordinate origin set at the lower-left corner (i.e., minimum values) of the bounding box of the point cloud data nobpsr (the origin in Fig. 7). Finally, the point cloud data are projected in parallel to generate the point cloud projection image. At this time, the reflection intensity, normalized to 255, is registered as the R value, and the projected distance to the plane of projection, normalized to 255, is registered as the G value to account for the 3D configuration in the depth direction. Because many of the road features targeted in this research have similar shapes, and there are not many features whose only difference is their dimensions in the depth direction, normalizing the distance in the depth direction is expected to have a minor effect. Finally, the height above the average of the ground points GP, which were removed by the ground removal process, normalized to 255, is registered as the B value. The point with the shortest projection distance is selected when multiple points are projected onto the same pixel.

Fig. 7. Generation of point cloud projection image.
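A simplified sketch of the projection-image generation. It assumes the feature has already been rotated to its widest pose, that the reflection intensity is normalized by its maximum, and that depth and height are normalized by Pprjlen; the paper does not state its normalization constants, so these are assumptions.

```python
# Sketch: parallel projection onto the XZ plane with R = intensity, G = depth, B = height.
import numpy as np

def project_to_image(points, intensity, ground_mean_z, p_imgsize=300, p_prjlen=15.0):
    img = np.zeros((p_imgsize, p_imgsize, 3), dtype=np.uint8)
    origin = points.min(axis=0)                              # lower-left corner of the bounding box
    scale = p_imgsize / p_prjlen
    u = np.clip((points[:, 0] - origin[0]) * scale, 0, p_imgsize - 1).astype(int)
    v = np.clip((points[:, 2] - origin[2]) * scale, 0, p_imgsize - 1).astype(int)
    depth = points[:, 1] - origin[1]                         # distance to the plane of projection
    r = np.clip(intensity / max(intensity.max(), 1e-9) * 255, 0, 255)
    g = np.clip(depth / p_prjlen * 255, 0, 255)
    b = np.clip((points[:, 2] - ground_mean_z) / p_prjlen * 255, 0, 255)
    for i in np.argsort(-depth):                             # far points first; nearest overwrites last
        img[p_imgsize - 1 - v[i], u[i]] = (r[i], g[i], b[i])
    return img
```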

5.2. Construction of Feature Identification Models

This process employs deep learning to generate feature identification models from the point cloud projection images. The number of training epochs was set to Pepoch. As mentioned earlier, we use YOLOv3 [24], which has an established reputation for deep learning in existing studies.

5.3. Feature Identification

This process identifies features by analyzing the point cloud projection image using the feature identification models, as shown in Fig. 8. First, the point cloud projection image is applied to the feature identification models to determine the features in the image. If this process yields more than one feature, the feature with the highest reliability is chosen. Then, the result of the feature identification is assigned to the point cloud data nobpsr extracted by the feature-division function, and the point cloud data of the identified features are output.

Fig. 8. Feature identification.
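The decision rule of Section 5.3 reduces to picking the most reliable detection per projected image. Detections are assumed here to arrive as (label, confidence) pairs from whichever YOLOv3 wrapper is used; the function name is illustrative only.

```python
# Sketch: assign the highest-confidence detection to a feature's point cloud data.
def identify_feature(detections):
    """detections: list of (label, confidence) for one point cloud projection image."""
    if not detections:
        return None                      # undetected by YOLO ("unidentified" in Tables 5 and 7)
    label, _ = max(detections, key=lambda d: d[1])
    return label
```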


6. Verification Experiment

6.1. Program of Experiment

To evaluate the validity of the proposed method, we evaluated the accuracy of feature division and identification in Experiments 1 and 2, respectively, and the accuracy using real sites in Experiment 3.

In Experiment 1, we evaluated the accuracy of the feature division of the proposed method by comparing it with an existing method. The proposed method consists of a connected-feature removal function and a feature-division function, whereas the existing method consists of ground removal and 3D labeling, as proposed in a previous research [30].

In Experiment 2, we evaluated the accuracy of feature identification in the proposed method by using the feature identification function to identify the point cloud data that have been divided into individual features, and checked whether the obtained results match the identification results obtained through a visual observation.

In Experiment 3, we evaluated the accuracy of the feature identification when the proposed method was used to analyze the point cloud data of actual road spaces.

6.2. Parameter Settings of Proposed Method

In the experiment, we set the parameters used in the proposed method. We need to set Pegsize, Pemaxheight, Pemaxwidth, Peoffset, and Pggsize for the connected-feature removal function. Plgsize and Pnmin must be set for the feature-division function. The training image size Pimgsize, the actual distance of the projected target Pprjlen, and the number of epochs Pepoch were set for the feature-identification function.

6.2.1. Parameters for Connected-Feature Removal Function

The parameters Pegsize and Pggsize are the voxel size used for utility-line removal and the grid size used for building removal, respectively. Although higher settings of these parameters result in the removal of larger connected features, this may cause the removal of unintended features. Meanwhile, one can avoid removing other unintended features by assigning low values to these parameters, but this may leave connected features unremoved. In this experiment, we set Pggsize to 0.5 m because the target features mostly exist in sections between the street and sidewalk, and the minimum sidewalk width is assumed to be 0.5 m [32].

The parameters Pemaxheight, Pemaxwidth, and Peoffset are those used in the utility-line removal process to determine whether the point cloud data in a voxel represent a utility line. Pemaxheight is the upper limit of the height of the bounding box of the point cloud data, Pemaxwidth is the upper limit of its base length, and Peoffset is the lower limit of its height above the ground points. During this experiment, we set Peoffset at 4.0 m, based on the standards on the height of aerial utility lines [33] and allowing for a certain slack. The parameters Pegsize, Pemaxheight, and Pemaxwidth were set at 0.5, 0.3, and 0.2 m, respectively, based on a parametric verification carried out in advance. For the parameters of the ground removal, see [15].

6.2.2. Parameters for Feature-Division Function

The parameter Plgsize is the voxel size used for 3D labeling. The greater Plgsize is, the greater the risk of placing multiple features under a single label, while the smaller Plgsize is, the greater the risk of splitting up a single feature because of the uneven distribution of the point cloud data. During this experiment, we set Plgsize at 0.15 m considering the error in the accuracy of the point cloud data.

The parameter Pnmin is used in the noise removal process to determine whether the labeled point cloud data represent noise. This is the minimum number of points required to determine a feature. In this experiment, we set Pnmin at 100, considering the point cloud density.

6.2.3. Parameters for Feature-Identification Function

As the parameters for the feature-identification function, the training image size Pimgsize was set at a pixel resolution of 300 × 300; the actual distance of the projected target Pprjlen was set at 15.0 m, which is sufficiently large to contain any of the target features; and the number of epochs for deep learning [24] Pepoch was set at 6,000. The initial settings given in [24] were used for the activation function for deep learning and other parameters.

6.3. Experiment 1: Evaluation of Feature-Division Accuracy

During this experiment, the results of dividing the point cloud data of the entire road space into individual road features using the proposed and existing methods are compared to confirm the accuracy of the feature division of the proposed method.

6.3.1. Experiment Procedure

The accuracy was evaluated using the following procedure. Because the proposed method divides the point cloud data into units of features, which are then identified by deep learning, composite features, in which multiple features are connected, cannot be properly identified. Therefore, individual and composite features were evaluated separately.

Step 1: The target features included in the point cloud data selected for the experiment were individually extracted manually and used to produce the experimental data for an accuracy evaluation.

Step 2: The proposed and existing methods are applied to the selected point cloud data for division into individual road features.

Step 3: The results of each method of Step 2 are compared with the correctly divided data of Step 1 to check whether the point cloud data of the features have been extracted. Following this, we check whether the point cloud data have been properly divided into unit features.


Step 4: The point cloud data divided into unit features are visually checked, and the numbers of individual and composite features are counted.

6.3.2. Experiment Condition

As the point cloud data of road spaces for this experiment, we selected those of three urban locations that contain many road features. Details of the selected point cloud data are presented in Table 1, those of the measurement instruments are listed in Table 2, and the visualization results are given in Fig. 9.

Table 1. Details of point cloud data (three urban locations: one in Numazu City, Shizuoka Prefecture, and two in Chuo-ku, Sapporo, Hokkaido).

Table 2. Details of measurement instruments (Road-Scanner [34] and StreetMapper [35]).

Fig. 9. Point cloud data used in experiment.

6.3.3. Results and Discussion

The experimental results are listed in Table 3. The following three observations can be made. Note that the correct number of features is given in shaded cells.

i. Both the proposed and existing methods were able to extract all road features.

When we compare the numbers of each feature in the proposed and existing methods with those of the correct data, we find that the point cloud data of all features have been extracted. Because none of the target features were omitted, we can expect to establish a highly accurate feature identification technology, in which no feature is omitted from extraction, as long as the feature identification function that follows the feature-division function can properly estimate the features. However, the point cloud data representing people are removed as noise by the noise removal process with the proposed method, which is why the extracted numbers are fewer than the correct numbers. This should present no problem because the point cloud data of vehicles and people are removed as noise in the maintenance and operation of road features.

ii. The proposed method can divide road features into single units with higher accuracy than the existing method.

From the numbers of signposts, illumination poles, utility poles, signal posts, low vegetation, and trees in Table 3, it can be seen that the proposed method was able to correctly extract single features at a higher rate than the existing method. From the visualization results shown in Fig. 10, the existing method assigns a single label to multiple features because of the utility lines. Meanwhile, the utility lines were removed using the proposed method, making it possible to correctly divide them into single features.

iii. The proposed method has a tendency to assign a single label to road features clustered close to low vegetation or trees.

From the numbers of composite features extracted by the proposed method shown in Table 3, we can see that there are point cloud data in which multiple features are connected, although the numbers are lower than in the existing approach. By checking the visualization results, as shown in Fig. 11, we can see that several features were given a single label when they overlapped with the leaves of low vegetation or trees. We believe that the problem caused by low vegetation can be treated by dividing the point cloud data into equal strips in the Z-axis direction, as shown in Fig. 12, and extracting the low vegetation by specifying the sections indicating its growth based on the distribution tendency of the point cloud data according to elevation. To deal with the issue of trees, we believe that a contraction/expansion process (morphological treatment) based on voxels, as shown in Fig. 13, should be able to isolate tree leaves that overlap with other features.


Table 3. Results of Experiment 1.


Fig. 10. Examples of division into single features. (a) Existing method. (b) Proposed method. Labels are indicated by random colors.

Fig. 11. Example of failed division.

Fig. 12. Treatment of issue of low vegetation.

Fig. 13. Treatment of trees.
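The contraction/expansion treatment sketched in Fig. 13 could be implemented as a morphological opening on the voxel occupancy grid before re-labeling; the voxel size below is an assumption, and this is only one way to realize the idea described in Section 6.3.3.

```python
# Sketch: erode then dilate the occupancy grid to strip thin structures (e.g., tree leaves),
# then re-label; points whose voxel is removed by the opening receive label 0.
import numpy as np
from scipy import ndimage

def morphological_relabel(points, voxel=0.15):
    origin = points.min(axis=0)
    vox = np.floor((points - origin) / voxel).astype(int)
    grid = np.zeros(vox.max(axis=0) + 1, dtype=bool)
    grid[tuple(vox.T)] = True
    opened = ndimage.binary_dilation(ndimage.binary_erosion(grid))   # contraction then expansion
    labels, n = ndimage.label(opened, structure=np.ones((3, 3, 3)))
    return labels[tuple(vox.T)], n
```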


6.4. Experiment 2: Evaluation of Accuracy of Feature Identification

Through this experiment, to evaluate the accuracy of the feature identification using the proposed method, we checked the results of the feature identification of point cloud data that were manually extracted in units of features.

6.4.1. Experiment Procedure

The accuracy is evaluated through the following procedure.

Step 1: All point cloud data of the road space are visually inspected, and the point cloud data of individual features are separated.

Step 2: The point cloud data of the individual features obtained in Step 1 are divided into training and evaluation data. The details of the training data are presented in Table 4. In this research, we divided the point cloud data into training and test data at a ratio of 7:3.

Step 3: The training data are input to the feature identification function to construct the feature identification models.

Step 4: The road features of the evaluation data are identified using the constructed feature identification models.

Step 5: The identification accuracy of individual features is evaluated using the precision, recall, and F-values.

Table 4. Data size used in Experiment 2.

Feature             Training data (training)   Training data (test)   Evaluation data
Vehicle             279                        119                    20
Person(s)           345                        148                    20
Signpost            349                        149                    20
Illumination pole   365                        157                    20
Utility pole        308                        132                    20
Signal post         55                         23                     20
Fence               41                         18                     20
Guardrail           323                        138                    20
Low vegetation      24                         10                     20
Tree                426                        182                    20

6.4.2. Experiment Conditions

For use in training and evaluation using the deep learning applied in this research, we manually collected point cloud data of road features from the point cloud data of urban road spaces (Kyoto and Hokkaido).
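Step 5 combines the per-class counts tabulated in Section 6.4.3 into recall, precision, and F-values. The paper does not write the formulas out; the sketch below assumes the standard definitions, with "matches," "misidentifications," and "evaluated number" as used in Tables 5–8.

```latex
\mathrm{Recall} = \frac{\text{matches}}{\text{evaluated number}}, \qquad
\mathrm{Precision} = \frac{\text{matches}}{\text{matches} + \text{misidentifications}}, \qquad
F = \frac{2\,\mathrm{Precision}\cdot\mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}}
```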
6.4.3. Results and Discussion

The experimental and tabulated results are shown in Tables 5 and 6, respectively. The number of matches refers to the number of correctly identified items, and the number of mismatches refers to the number of items that were incorrectly identified as another feature (summed down the columns in Table 5). The number of misidentifications refers to the number of other (incorrect) items that were identified as the item in question (summed across the rows). The number of unidentified items refers to the number of items that were undetected by YOLO. In Table 5, the correct matches are indicated by the cells framed by thick borders, whereas the wrong identifications (mismatches) are given in the shaded cells. In Table 6, the F-values are shown in the shaded cells. The experimental results yielded the following observations.

i. The feature identification function of the proposed method was able to identify road features with high accuracy.

The overall F-value of 0.82, given in Table 6, indicates that the road features were identified with high accuracy. This demonstrates that the feature identification function of the proposed method is able to identify features contained in the point cloud data obtained through the measurement of road spaces with high accuracy.

ii. Vehicles and people are identified with high accuracy, which enables their removal as noise.

As shown in Table 6, the F-values of the vehicles and people were found to be 0.75 and 0.95, respectively, showing that these features were accurately identified. This demonstrates that the proposed method identifies vehicles and people with high accuracy and thus they can be removed from the point cloud data as noise.

iii. Pole-shaped objects may be incorrectly identified as other types of pole-shaped objects.

From the F-values of the signposts, illumination poles, utility poles, and signal posts given in Table 6, we obtain an average value of 0.76, which indicates that pole-shaped objects can be identified. However, this is lower than the overall F-value of 0.82, and 3 items in this category were unidentified. When we checked the point cloud data that had been incorrectly identified or not identified, we found that there was a tendency to misidentify a pole-shaped object as another, incorrect pole-shaped object, as shown in Table 5 and Fig. 14. The reason for this is thought to be the range of diverse shapes within the same feature category, the most notable being illumination poles, and the failure of deep learning to sufficiently learn the shape characteristics of pole-shaped objects, which include signposts, illumination poles, utility poles, and signal posts. Because the pole-shaped objects as a group are identified with good accuracy when the separate results of signposts, illumination poles, utility poles, and signal posts are tabulated together, we believe that this issue can be resolved by first identifying pole-shaped objects, and further identifying the sign boards, illumination fixtures, utility lines, and traffic signals that are part of those objects.


Table 5. Results of Experiment 2.


Table 6. Tabulated results of Experiment 2.


iv. There is a tendency to misidentify low vegetation as vehicles.

Table 6 shows that low vegetation has an F-value of 0.40, which is considerably lower than that of the other features. From Table 5, we can see that 13 of the 20 items of low vegetation were misidentified as vehicles. This arose from the tendency to misidentify low vegetation with an extension of a few meters as vehicles, because they resemble each other in shape and size, as shown in Fig. 15. This problem arose because the training data did not include low vegetation of limited extent, and it can therefore be dealt with by adding the corresponding point cloud data to the training data.

6.5. Experiment 3: Accuracy Evaluation of Actual Sites

In this experiment, to evaluate the accuracy of the feature identification of the proposed method using all functions, we identified features in the point cloud data that were divided in Experiment 1, and compared them with the features obtained through a visual observation. Because the proposed method is intended to identify individual features, we employ the point cloud data that were extracted as single features in Experiment 1, although we also used the point cloud data of composite features to obtain an additional validation.

6.5.1. Experiment Procedure

The experiment is conducted according to the following procedure.

Step 1: The point cloud data of individual and composite features are acquired from the results obtained using the proposed method applied in Experiment 1.

Step 2: The feature identification function is used to identify the point cloud data obtained in Step 1.

Step 3: The correct data of features obtained through a visual inspection in Experiment 1 are checked to see whether they have been correctly identified in Step 2.

Step 4: The accuracy of identification is evaluated in terms of precision, recall, and F-values.

6.5.2. Experiment Conditions

We use the point cloud data divided in Experiment 1.


Fig. 14. Examples of pole-shaped objects that were misidentified.

Fig. 15. Examples of incorrectly-identified low vegetation.

6.5.3. Results and Discussion

The experimental and tabulated results are presented in Tables 7 and 8, respectively. The numbers of matches, mismatches (summed down the columns in Table 7), misidentifications (summed across the rows), and unidentified items were calculated in the same manner as in Experiment 2. In Table 7, the correct matches are indicated in the cells framed with thick borders, whereas incorrect identifications (mismatches) are given in the shaded cells. In Table 8, the F-values of the single features and the recall and precision of the composite features are given in the shaded cells. The experimental results yielded the following observations.

i. The proposed method is able to identify single features with high accuracy.

Table 8 shows that the overall F-values for single features using the proposed method have a maximum of 0.86 and an average of 0.78, indicating that single features are identified with high accuracy. From the visualization results, as shown in Fig. 16, we can see that the proposed method is able to correctly identify features.

ii. From Experiments 2 and 3, we can see that the proposed method can accurately identify features in the point cloud data of actual road spaces.

We observed in Experiment 2, which employed point cloud data manually divided into units of road features, that the proposed method achieved an overall F-value of 0.82, indicating a high identification accuracy. Meanwhile, maximum and average F-values of 0.86 and 0.78, respectively, were obtained in the present experiment, which employed point cloud data of single features obtained by the proposed method from the point cloud data of actual road spaces. This demonstrates that the proposed method is valid when used in real operational environments.

iii. The proposed method has a tendency to misidentify pole-shaped objects.

When we examine the F-values of individual (single) pole-shaped objects in Table 8, we can see that the identification of signal posts in location 2 and that of the illumination poles in location 3 have a low accuracy. Checking the cases of misidentification, as given in the results of the single features in Table 7 and shown in Fig. 17, pole-shaped objects tended to be misidentified as other types of pole-shaped objects because of their diverse shapes, as we found earlier in Experiment 2. As in the case of Experiment 2, we can expect to improve the accuracy by first identifying an item as a pole-shaped object, and then identifying the specific device it carries.

From the above findings, we can see that the proposed method is able to identify the point cloud data of single features with high accuracy. The present experiment also investigated whether the proposed method, which was designed to identify single features, is capable of treating composite features. The experimental results indicate the following.

iv. The proposed method can identify a single road feature constituting a part of a composite feature.

The overall recall and precision of the composite features in Table 8 have mean values of 0.34 and 0.87, respectively, and the recall is lower than the precision in all locations. Meanwhile, Table 7 shows that there are a large number of unidentified items that were undetected by YOLO. This is because, as shown in the lower part of Fig. 17, composite features are identified as single features; composite features were not included in the training used to produce the feature identification models because the proposed method was designed to identify single features. However, because the identification results match one of the composite features, they can be considered to possess a measure of validity.

This issue can be resolved by improving the feature division function, including the findings of Experiment 1, which should enable the division of composite features into single features and improve the accuracy.


Table 7. Results of Experiment 3.

Note: S: single feature, C: composite feature.


Table 8. Tabulated results of Experiment 3.



Fig. 16. Examples of successful identification.

Fig. 17. Examples of unsuccessful identification.

7. Conclusions

In this research, we proposed a method for identifying road features using deep learning to analyze point cloud data. The proposed method achieved a high identification accuracy by removing unnecessary points prior to deep learning, rather than analyzing the entire point cloud data, and by projecting the data as images with additional information on the 3D configurations. The experimental results indicated two findings and confirmed the validity of the proposed method.

i. The proposed method can divide point cloud data into individual road features with high accuracy.

ii. The proposed method can accurately identify road features if they are single features.

By making it possible to identify the point cloud data of various features from the point cloud data of road spaces, we believe that this research will improve the operational efficiency of point cloud data and contribute to spreading its usage, thereby leading to advanced applications.

However, the experimental results showed that when features are clustered close to low vegetation or trees, several features are assigned a single label, which results in erroneous identifications. Our future investigation will focus on improving the feature division function by developing a technology to divide composite features and improve the accuracy of feature identification. We plan to expand the application range by extending the targets to include the diverse features present in various road spaces.

Acknowledgements
In carrying out this research, Nippon Insiek Co., Ltd. provided us with the point cloud data used in the verification experiment. We would like to express our gratitude to them.

References:
[1] Geospatial Information Authority of Japan (GSI), "Survey Manual of Point Cloud Data using MMS," 2019 (in Japanese).
[2] Geospatial Information Authority of Japan (GSI), "Public Survey Manual using ALB," 2019 (in Japanese).
[3] Geospatial Information Authority of Japan (GSI), "Public Survey Manual using TLS," 2018 (in Japanese).
[4] Geospatial Information Authority of Japan (GSI), "Public Survey Manual using Laser Scanner Equipped UAV," 2018 (in Japanese).
[5] KAARTA, "Stencil 2." https://www.kaarta.com/ja/products/stencil-2-for-rapid-long-range-mobile-mapping/ [Accessed October 30, 2020]
[6] Ministry of Land, Infrastructure, Transport and Tourism (MLIT), "Current Status And Challenges of the Construction Industry," (in Japanese). https://www.mlit.go.jp/common/001187379.pdf [Accessed October 30, 2020]
[7] K. Hara and H. Saito, "Automatic Extraction of High Precision Map Based on Gradient Image Processing from 3D Point Cloud," Int. J. Automotive Engineering, Vol.47, No.1, pp. 183-188, 2016 (in Japanese).
[8] H. Date, T. Yokoyama, S. Kanai, Y. Hada, M. Nakao, and T. Sugawara, "Efficient Registration of Laser-Scanned Point Clouds of Bridges Using Linear Features," Int. J. Automation Technol., Vol.12, No.3, pp. 328-338, 2018.
[9] H. Niigaki, J. Shimamura, and Y. Taniguchi, "Estimation of The Extent of Pillar-shaped Object Bent from A Three-dimensional Point Clouds Using 3D Geometry Model Fitting," IEICE Technical Report, Vol.114, No.90, pp. 79-84, 2014 (in Japanese).
[10] Geospatial Information Authority of Japan (GSI), "Major Meetings," (in Japanese). https://www.mlit.go.jp/sogoseisaku/maintenance/03activity/03 01 04.html [Accessed October 30, 2020]
[11] Construction Technology Planning Division, Construction Support Bureau, Transportation Infrastructure Department, Shizuoka Prefecture, "Shizuoka Point Cloud DB," (in Japanese). https://pointcloud.pref.shizuoka.jp/ [Accessed October 30, 2020]
[12] Association for Promotion of Infrastructure Geospatial Information Distribution (AIGID), "My City Construction," (in Japanese). https://mycityconstruction.jp/ [Accessed October 30, 2020]
[13] Association for Promotion of Infrastructure Geospatial Information Distribution (AIGID), "Data Platform of Land, Infrastructure, Transport and Tourism," (in Japanese). https://www.mlit-data.jp/platform/ [Accessed October 30, 2020]
[14] Ministry of Land, Infrastructure, Transport and Tourism (MLIT), "i-Construction," (in Japanese). https://www.mlit.go.jp/tec/i-construction/index.html [Accessed October 30, 2020]
[15] K. Nakamura, T. Teraguchi, Y. Umehara, and S. Tanaka, "Research concerning Extraction Object From Point Cloud Data Based on Road Completion Drawing," J. of Japan Society of Civil Engineers, Ser. F3 (Civil Engineering Informatics), Vol.73, No.2, pp. I 424-I 432, 2017 (in Japanese).
[16] K. Nakamura, Y. Tsukada, S. Tanaka, Y. Umehara, and K. Nakahata, "Research for Extracting Point Cloud Data Related to Road Surface Features Using Plan of Completion Drawing," J. of Japan Society for Fuzzy Theory and Intelligent Informatics, Vol.32, No.1, pp. 616-626, 2020 (in Japanese).


[17] K. Nakamura, Y. Tsukada, S. Tanaka, Y. Umehara, and K. Nakahata, "Research concerning High Accuracy Method of Extracting Road Features From Point Cloud Data Using Plan of Completion Drawing," J. of Japan Society of Civil Engineers, Ser. F3 (Civil Engineering Informatics), Vol.75, No.2, pp. I 160-I 169, 2019 (in Japanese).
[18] Y. Umehara, Y. Tsukada, K. Nakamura, S. Tanaka, N. Hirano, S. Otsuki, and Y. Kawamura, "Basic Research for Recognizing Road Objects Using Point Cloud Data by Machine Learning," Fuzzy System Symp. 2019, Vol.35, pp. 493-494, 2019 (in Japanese).
[19] Y. Tsukada, K. Nakamura, S. Tanaka, Y. Umehara, N. Hirano, S. Otsuki, and Y. Kawamura, "Basic Research for Recognizing Road Objects using Point Cloud Data by Deep Learning," Fuzzy System Symp. 2019, Vol.35, pp. 491-492, 2019 (in Japanese).
[20] Y. Tsukada, S. Kubota, S. Tanaka, Y. Umehara, M. Nakahara, and K. Nakahata, "Research for Recognizing Individual Parts of Bridge using Point Cloud Data with Deep Learning," J. of Japan Society for Fuzzy Theory and Intelligent Informatics, Vol.32, No.1, pp. 627-631, 2020 (in Japanese).
[21] S. Tanaka, K. Nakamura, Y. Yamamoto, R. Imai, S. Kubota, and W. Jiang, "Research for Generating Road Alignment of Highway Bridges with Point Cloud Data using MMS," J. of Japan Society for Fuzzy Theory and Intelligent Informatics, Vol.28, No.5, pp. 826-845, 2016 (in Japanese).
[22] W. Jiang, Y. Yamamoto, K. Nakamura, and S. Tanaka, "Empirical Research on Technology for Automatic Generation of Road Alignment using MMS," J. of Japan Society of Civil Engineers, Ser. F3 (Civil Engineering Informatics), Vol.73, No.2, pp. I 327-I 337, 2017 (in Japanese).
[23] National Institute for Land and Infrastructure Management (NILIM), "Manual of Completion Drawing Production for Road Works," 2006 (in Japanese).
[24] J. Redmon, S. Divvala, R. Girshick, and A. Farhadi, "You Only Look Once: Unified, Real-Time Object Detection," Proc. of Conf. on Computer Vision and Pattern Recognition, Vol.29, No.2, pp. 779-788, 2016.
[25] C. Qi, H. Su, M. Kaichun, and L. Guibas, "PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation," Proc. of Conf. on Computer Vision and Pattern Recognition, pp. 77-85, 2017.
[26] Ministry of Land, Infrastructure, Transport and Tourism (MLIT), "Measures for aging roads," (in Japanese). https://www.mlit.go.jp/road/sisaku/yobohozen/yobohozen.html [Accessed October 30, 2020]
[27] Ministry of Land, Infrastructure, Transport and Tourism, Road Bureau, "Inspection Manual of Small Features," 2017 (in Japanese).
[28] Ministry of Land, Infrastructure, Transport and Tourism, Road Bureau, "National Road Inspection Manual of Appendages (Sign Pole and Light Pole etc.)," 2019 (in Japanese).
[29] Ministry of Land, Infrastructure, Transport and Tourism (MLIT), "Standards for road Structures," (in Japanese). https://www.mlit.go.jp/common/001063925.pdf [Accessed October 30, 2020]
[30] W. Zhang, J. Qi, W. Peng, H. Wang, D. Xie, X. Wang, and G. Yan, "An Easy-to-Use Airborne LiDAR Data Filtering Method Based on Cloth Simulation," Remote Sensing, Vol.8, No.6, 501, 2016.
[31] A. Bochkovskiy, C. Wang, and H. Liao, "YOLOv4: Optimal Speed and Accuracy of Object Detection." https://arxiv.org/abs/2004.10934 [Accessed October 30, 2020]
[32] Ministry of Land, Infrastructure, Transport and Tourism (MLIT), "Explanation of Each Provision of the Road Structure Ordinance," (in Japanese). https://www.mlit.go.jp/road/sign/kouzourei kaisetsu.html [Accessed October 30, 2020]
[33] Ministry of Economy, Trade and Industry (METI), "Interpretation of technical standards for electrical equipment," (in Japanese). https://www.meti.go.jp/policy/safety security/industrial safety/law/files/dengikaishaku.pdf [Accessed October 30, 2020]
[34] SITECO Informatica, "Road-Scanner4." https://www.sitecoinf.it/en/115-english/solutions/569-road-scanner [Accessed October 30, 2020]
[35] Mirukuru, "StreetMapper 360," (in Japanese). http://www.mirukuru.co.jp/products/pdf/IGI StreetMapper 360 2011 web j.pdf [Accessed October 30, 2020]

Name: Yoshimasa Umehara
Affiliation: Assistant Professor, Organization for Research and Development of Innovative Science and Technology, Kansai University
Address: 3-3-35 Yamate-cho, Suita-shi, Osaka 564-0073, Japan
Brief Biographical History:
2019- Assistant Professor, Organization for Research and Development of Innovative Science and Technology, Kansai University
Main Works:
• "Research Concerning Technology for Detecting Objects of River Space Using Laser Profiler Data," J. of Japan Society of Civil Engineers, Ser. F3 (Civil Engineering Informatics), Vol.73, No.2, pp. I 433-I 442, 2018 (in Japanese).
Membership in Academic Societies:
• Information Processing Society of Japan (IPSJ)
• Institute of Electronics, Information and Communication Engineers (IEICE)
• Japan Society of Civil Engineers (JSCE)

Name: Yoshinori Tsukada
Affiliation: Lecturer, Faculty of Business Administration, Setsunan University
Address: 17-8 Ikedanaka-machi, Neyagawa-shi, Osaka 572-8508, Japan
Brief Biographical History:
2015-2016 Assistant Professor, Organization for Research and Development of Innovative Science and Technology, Kansai University
2016-2019 Lecturer, Faculty of Software and Information Science, Iwate Prefectural University
2019- Lecturer, Faculty of Business Administration, Setsunan University
Main Works:
• "Research for Recognizing Individual Parts of Bridge Using Point Cloud Data with Deep Learning," J. of Japan Society for Fuzzy Theory and Intelligent Informatics, Vol.32, No.1, pp. 627-631, 2020.
Membership in Academic Societies:
• Information Processing Society of Japan (IPSJ)
• Institute of Electronics, Information and Communication Engineers (IEICE)
• Japan Society of Civil Engineers (JSCE)
• Japan Society for Fuzzy Theory and Intelligent Informatics (SOFT)
• Institute of Electrical and Electronics Engineers (IEEE)


Name: Kenji Nakamura
Affiliation: Professor, Faculty of Information Technology and Social Sciences, Osaka University of Economics
Address: 2-2-8 Osumi, Osaka-shi, Osaka 533-8533, Japan
Brief Biographical History:
2009-2010 Postdoctoral Fellow, Faculty of Informatics, Kansai University
2010-2012 Assistant Professor, Faculty of Information Science and Engineering
2012-2018 Associate Professor, Faculty of Information Technology and Social Sciences, Osaka University of Economics
2018- Professor, Faculty of Information Technology and Social Sciences, Osaka University of Economics
Main Works:
• "Research for Extracting Point Cloud Related to Road Surface Features Using Plan of Completion Drawing," J. of Japan Society for Fuzzy Theory and Intelligent Informatics, Vol.32, No.1, pp. 616-626, 2020.
Membership in Academic Societies:
• Information Processing Society of Japan (IPSJ)
• Institute of Electronics, Information and Communication Engineers (IEICE)
• Japan Society of Civil Engineers (JSCE)

Name: Koki Nakahata
Affiliation: Graduate Student, Graduate School of Informatics, Kansai University
Address: 2-1-1 Ryozenji-cho, Takatsuki-shi, Osaka 569-1095, Japan
Brief Biographical History:
2018- Graduate Student, Graduate School of Informatics, Kansai University
Main Works:
• "Research Concerning High Accuracy Method of Extracting Road Features from Point Cloud Data Using Plan of Completion Drawing," J. of Japan Society of Civil Engineers, Ser. F3 (Civil Engineering Informatics), Vol.75, No.2, pp. I 160-I 169, 2019 (in Japanese).
Membership in Academic Societies:
• Information Processing Society of Japan (IPSJ)
• Japan Society of Civil Engineers (JSCE)
• Japan Society of Photogrammetry and Remote Sensing (JSPRS)

Name: Shigenori Tanaka

Affiliation: Professor, Faculty of Informatics, Kansai University

Address: 2-1-1 Ryozenji-cho, Takatsuki-shi, Osaka 569-1095, Japan
Brief Biographical History:
1994-1997 Lecturer, Faculty of Informatics, Kansai University
1997-2004 Associate Professor, Faculty of Informatics, Kansai University
2004- Professor, Faculty of Informatics, Kansai University
Main Works:
• "Fundamental Research for Use Case of Point Cloud Data Considering Analysis and Processing Technology Using UAV-Based Laser Scanner," J. of Japan Society of Civil Engineers, Ser. F3 (Civil Engineering Informatics), Vol.72, No.2, pp. II 82-II 89, 2017 (in Japanese).
Membership in Academic Societies:
• Information Processing Society of Japan (IPSJ)
• Institute of Electronics, Information and Communication Engineers (IEICE)
• Japan Society of Civil Engineers (JSCE)

