applied sciences

Article

An ARCore Based User Centric Assistive Navigation System for Visually Impaired People

Xiaochen Zhang 1, Xiaoyu Yao 1, Yi Zhu 1,2 and Fei Hu 1,*

1 Department of Industrial Design, Guangdong University of Technology, Guangzhou 510006, China; [email protected] (X.Z.); [email protected] (X.Y.); [email protected] or [email protected] (Y.Z.)
2 School of Industrial Design, Georgia Institute of Technology, Atlanta, GA 30332, USA
* Correspondence: [email protected]

Received: 17 February 2019; Accepted: 6 March 2019; Published: 9 March 2019

Featured Application: The navigation system can be implemented in smartphones. With affordable haptic accessories, it helps visually impaired people navigate indoors without using GPS or wireless beacons. In the meantime, the advanced path planning in the system benefits visually impaired navigation since it minimizes the possibility of collision in application. Moreover, the haptic interaction allows a human-centric, real-time delivery of motion instructions, which overcomes the limitations of conventional turn-by-turn waypoint-finding instructions. Since the system prototype has been developed and tested, a commercial application that helps visually impaired people can be expected.

Abstract: In this work, we propose an assistive navigation system for visually impaired people (ANSVIP) that takes advantage of ARCore to acquire robust computer vision-based localization. To complete the system, we propose adaptive artificial potential field (AAPF) path planning that considers both efficiency and safety. We also propose a dual-channel human–machine interaction mechanism, which delivers accurate and continuous directional micro-instructions via a haptic interface and macro long-term planning and situational awareness via audio. Our system user-centrically incorporates haptic interfaces to provide fluent and continuous guidance superior to the conventional turn-by-turn audio-guiding method; moreover, the continuous guidance keeps the path fully under control in avoiding obstacles and risky places. The system prototype is implemented with full functionality. Unit tests and simulations are conducted to evaluate the localization, path planning, and human–machine interactions, and the results show that the proposed solutions are superior to the present state-of-the-art solutions. Finally, integrated tests are carried out with low-vision and blind subjects to verify the proposed system.

Keywords: assistive technology; ARCore; user centric design; navigation aids; haptic interaction; adaptive path planning; visual impairment; SLAM

1. Introduction

According to statistics presented by the World Health Organization in October 2017, there are more than 253 million visually impaired people worldwide. Compared to normally sighted people, they are unable to access sufficient visual clues about the surroundings due to weakness in visual perception. Consequently, visually impaired people face challenges in numerous aspects of daily life, including traveling, learning, entertainment, socializing, and working.

Visually impaired people have a strong dependency on travel aids. Self-driving vehicles achieved SAE (Society of Automotive Engineers) Level 3 long ago, which allows the vehicle to make decisions autonomously based on machine cognition.

Autonomous robots and drones have also been dispatched for unmanned tasks. Obviously, the advances in robotics, computer vision, GIS (Geographic Information System), and sensors allow integrated smart systems to perform mapping, positioning, and decision-making while operating in urban areas.

Human beings have the ability to interpret the surrounding environment using sensory organs. Over 90% of the information transmitted to the brain is visual, and the brain processes images tens of thousands of times faster than texts. This explains why human beings are called visual creatures; when traveling, visually impaired people have to face difficulties imposed by their visual impairment [1].

Traditional assistive solutions for visually impaired people include white canes, guide dogs, and volunteers. However, each of these solutions has its own restrictions: they either work only under certain situations, offer limited capability, or are expensive in terms of extra manpower.

Modern assistive solutions for visually impaired people borrow power from mobile computing, robotics, and autonomous technology. They are implemented in various forms such as mobile terminals, portable computers, wearable sensor stations, and indispensable accessories. Most of these devices use computer vision or GIS/GPS to understand the surroundings, acquire a real-time location, and use turn-by-turn commands to guide the user. However, turn-by-turn commands are difficult for users to follow.
In this work, we propose an assistive navigation system for visually impaired people (ANSVIP, see Figure 1) using ARCore area learning; we introduce an adaptive artificial potential field path-planning mechanism that generates smooth and safe paths; and we design a user-centric dual-channel interaction that uses haptic sensors to deliver real-time traction information to the user. To verify the design of the system, we implement the proposed system prototype with full functionality and test the prototype with blindfolded and blind subjects.

Figure 1. The components of the proposed ANSVIP system.

2. Related Works

2.1. Assistive Navigation System Frameworks

Recent advances in sensor technology support the design and integration of portable assistive navigation. Katz [2] designed an assistive device that aids in macro-navigation and micro-obstacle avoidance. The prototype was a backpacked laptop employing a stereo camera and audio sensors. Zhang [3] proposed a hybrid assistive system with a laptop, a head-mounted web camera, and a belt-mounted depth camera along with an IMU (Inertial Measurement Unit). They used a robotics framework to connect and manage the devices and ultimately help visually impaired people when roaming indoors. Ahmetovic [4] used a smartphone as the carrier of the system, but a considerable number of beacons had to be deployed beforehand to support the system. Furthermore, Bing [5] used a Project Tango tablet with no extra sensor to implement their proposed system. The system allowed the on-board depth sensor to support area learning, but the computational burden was heavy. Zhu [6] proposed and implemented the ASSIST system on a Project Tango smartphone. However, due to the presence of the more advanced ARCore, Google Project Tango, introduced in 2014, has been deprecated since 2017 [6], and smartphones with the required capability are no longer available. To the best of our knowledge, the proposed ANSVIP system is the first assistive human–machine system using an ARCore-supported commercial smartphone.

2.2. Positioning and Tracking

Most indoor positioning and tracking technologies were borrowed from autonomous robotics and computer vision. Methods using direct sensing and dead reckoning [7] are no longer qualified options. Yang [8] proposed a Bluetooth RSSI-based sensing framework to localize users in large public venues. The particle filter was applied to localize the subject. Jiao [9] used an RGB-D camera to reconstruct a semantic map to support indoor positioning. They used an artificial neural network on reflectivity to improve the accuracy of 3D localization. Moreover, Xiao [10,11] and Zhang [3,12] used hybrid sensors to carry out fast visual odometry and feature-based loop closure in localization, while Zhu [6] and Bing [5,13] used area learning (a pattern recognition method) to bond subjects with areas of interest.

2.3. Path Planning

As the most popular path planning in robotics, A* is also extensively used by assistive systems. Xiao [10,11] and Zhang [3] used A* to connect areas of interest. Bing [5] applied greedy path planning in the label-intensive semantic map. Meanwhile, Zhao [14] suggested the potential field as a candidate for local planning. Paths in References [15,16] were planned on well-labeled maps using globally optimal methods such as Dijkstra [17] and its variants. Most existing path-planning methods generate sharp turn-by-turn paths connecting corner or feature anchors. These paths are good for robots but bring an unbearable experience to humans.

2.4. Human–Machine Interaction

Most present systems use audio to deliver turn-by-turn directional instructions [5,11,18]. However, the human brain has its own understanding of positioning, direction, and velocity [19,20], which is not robot-styled. Some recent works proposed using haptic interfaces in obstacle avoidance [1,3,6,10,13,21–23]. However, due to the restriction of turn-by-turn path planning, the haptic interaction is unlikely to be used as a continuous path-following interface in assistive systems. Fernandes [7] used perceptual 3D audio as a solution; however, the learning was not easy, and the accuracy in real complex scenes needs to be improved. Ahmetovic [15] conducted a data-driven analysis, which pointed out that turn-by-turn audio instructions have considerable drawbacks due to latency in interaction and limited information per instruction. Guerreiro [24] stated that turn-by-turn instructions may confuse visually impaired people's navigation behaviors and result in, for example, deviation from the intended path. These behaviors lead to errors, confusion, and longer recovery times back to the right track. Such behaviors also emphasize that more effective real-time feedback interfaces are necessary. Ahmetovic [15] studied the factors that cause rotation errors in turn-by-turn navigation; rotation errors accompanying audio instructions significantly affect user experiences in navigation. Rector [22] compared the accuracy of three different human guidance interfaces and provided insights into the design of multimodal feedback mechanisms.

3. Design of ANSVIP

3.1. Information Flow in System Design

Most tasks in real life are difficult to accomplish using a single sensor or functional unit. Instead, they require collaboration (cooperation, competition, or coordination) among multiple functional units or sensing agents in intelligent systems to make the most favorable final synthesis. In such a collaborative context, each functional unit has its own duty and cooperates via the agreed channel, thereby maximizing the effectiveness of shared resources to achieve the goal.

Specifically, an assistive navigation system is composed of two parts: the cognitive system and the guidance system. The cognitive system aims to understand the world, including the micro-scale surroundings and the macro-scale scene; the guidance system aims to properly deliver the micro-scale guidance command, the macro-scale plan, as well as semantic scene understanding, to the user. The collaboration of the two allows the machine to understand the scene, and then the user acquires understanding from the machine, as shown in Figure 2.

Figure 2. The information flow in the assistive system: The assistive system core aims to understand the world and translate the essential understanding to the user.

The proposed ANSVIP uses an ARCore-supported smartphone as the major carrier and uses ARCore-based SLAM (Simultaneous Localization and Mapping) to track motion so as to create a scene understanding along with mapping. The human-scale understanding of motion and space is processed to produce a short and safe path towards the goal. The corresponding micro-motion guidance is delivered to the user using haptic interaction, while the macro-path clues are delivered using audio interaction. Based on the information flow in an assistive navigation system, we design the ANSVIP structure as follows:

Firstly, the system should be fully aware of the information related to the user's location during navigation. Unlike GPS-based solutions that are commonly used outdoors, our system has to use computer vision-based SLAM since indoor GPS signals are unreliable. SLAM is based on Google ARCore, which integrates the vision and inertial sensor hardware to support area learning.

Secondly, the system should be capable of conveying the abstracted systemic cognition to the user. Unlike the conventional exclusive audio interaction, we propose a haptic-based cooperative mechanism. This allows us to replace the popular turn-by-turn guidance with a more continuous motion guidance.

The working logic among the ANSVIP components is shown in Figure 3. Details of the major components are discussed in the following subsections.

Figure 3. The working logic among the ANSVIP components: The physical components and soft components are shown on the left hand side and right hand side, respectively.
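As a minimal illustration of the dual-channel split shown in Figure 3, the sketch below routes guidance messages to either the haptic or the audio interface. It is our own schematic, not the authors' implementation; the message fields and channel names are assumptions introduced for this example.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Channel(Enum):
    HAPTIC = auto()  # continuous micro-motion cues, low latency
    AUDIO = auto()   # macro-scale plans and semantic scene descriptions

@dataclass
class GuidanceMessage:
    payload: object
    micro: bool  # True for per-step motion cues, False for macro information

def route(msg: GuidanceMessage) -> Channel:
    # Micro-instructions need minimal latency, so they go to the haptic
    # interface; everything else is spoken via the audio interface.
    return Channel.HAPTIC if msg.micro else Channel.AUDIO

print(route(GuidanceMessage(12.5, micro=True)))            # Channel.HAPTIC
print(route(GuidanceMessage("Lobby ahead", micro=False)))  # Channel.AUDIO
```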

3.2. Real-Time Area Learning-Based Localization

The system relies on existing indoor scenario CAD maps, which are available as escape maps near elevators (as requested by fire departments). The map we use in this study is presented in Figure 4. We label the areas of interest on the map so as to allow the system to understand navigation requests and to plan the path accordingly.

Figure 4. The digital CAD map before (left) and after (right) being labeled.

Google ARCore is used to track the pose of the system in navigation. The sparse features are collected and stored in an area description dataset and subsequently used in re-localization. Specifically, a normal-sighted person has to build the sparse map of the indoor scenario by running the ARCore SLAM in advance. Then, the assistive system is capable of re-localizing itself on the pre-built map after entering the scenario. By observing and recognizing the labeled traceable objects, the system is able to re-localize itself after roaming in the system map. However, the mapping between points in the system feature-based map and those on the scenario CAD map has to be obtained.

We use a singular value decomposition method to find the transformation matrix $A$. Two groups of corresponding feature point sets are used for finding the homogeneous transformation matrix:

$$A = \begin{bmatrix} R_{2\times 2} & t_{2\times 1} \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} \cos(\theta) & -\sin(\theta) & t_x \\ \sin(\theta) & \cos(\theta) & t_y \\ 0 & 0 & 1 \end{bmatrix}. \quad (1)$$

Let $l_n = [x_n, y_n]^T$ denote the point set in the feature map, and let $p_n = [i_n, j_n]^T$ denote the corresponding point set on the scenario CAD map. We use the least squares method to find the rotation $R$ and translation $t$ as follows:

$$(R, t) \leftarrow \arg\min_{R,t} \sum_{i=1}^{N} \| R \times p_i + t - l_i \|^2. \quad (2)$$

Denoting $\bar{l} = \frac{1}{N}\sum_{i=1}^{N} l_i$ and $\bar{p} = \frac{1}{N}\sum_{i=1}^{N} p_i$, Equation (2) can be written as

$$R \leftarrow \arg\min_{R} \sum_{i=1}^{N} \| R \times (p_i - \bar{p}) - (l_i - \bar{l}) \|^2. \quad (3)$$

For $Mx = b$,

$$M = \begin{bmatrix} x_1 & -y_1 & 1 & 0 \\ y_1 & x_1 & 0 & 1 \\ \vdots & \vdots & \vdots & \vdots \\ x_n & -y_n & 1 & 0 \\ y_n & x_n & 0 & 1 \end{bmatrix}, \quad (4)$$

$$x = [\cos(\theta), \sin(\theta), t_x, t_y]^T, \quad (5)$$

$$b = [i_1, j_1, \ldots, i_n, j_n]^T. \quad (6)$$

Using SVD to decompose $M$, we find

$$M_{N\times 4} = U_{N\times N} S_{N\times 4} V^T_{4\times 4}, \quad (7)$$

T where U denotes the feature matrix of MM , S denotes the diagonal matrix with eigenvalue δi, V is the eigenvector matrix of MT M, and we have A in (1) calculated by

$$x = \left( V \, \mathrm{diag}(\delta_1^{-1}, \ldots, \delta_4^{-1}) \, U^T \right) b. \quad (8)$$
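The alignment step of Equations (1)–(8) can be sketched in a few lines of Python with NumPy. This is a minimal illustration under our own assumptions (matched, noise-free point pairs); the example coordinates are invented, not data from the paper.

```python
import numpy as np

def fit_rigid_2d(p: np.ndarray, l: np.ndarray) -> np.ndarray:
    """Recover the transform A (rotation theta plus translation t) mapping
    feature-map points p onto CAD-map points l, via the SVD pseudo-inverse
    of the stacked system M x = b from Equations (4)-(8)."""
    n = p.shape[0]
    M = np.zeros((2 * n, 4))
    b = np.zeros(2 * n)
    for k, ((x, y), (i, j)) in enumerate(zip(p, l)):
        M[2 * k] = [x, -y, 1.0, 0.0]      # row producing the i-coordinate
        M[2 * k + 1] = [y, x, 0.0, 1.0]   # row producing the j-coordinate
        b[2 * k], b[2 * k + 1] = i, j
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    x = Vt.T @ np.diag(1.0 / s) @ U.T @ b  # x = [cos(theta), sin(theta), tx, ty]
    c, sn, tx, ty = x
    return np.array([[c, -sn, tx],
                     [sn, c, ty],
                     [0.0, 0.0, 1.0]])     # homogeneous matrix A of Equation (1)

# Example: feature-map points rotated by 30 degrees and shifted by (2, 1).
theta = np.deg2rad(30.0)
R = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
p = np.random.rand(10, 2)
l = p @ R.T + np.array([2.0, 1.0])
print(fit_rigid_2d(p, l))
```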

3.3. Area Learning in ANSVIP

ARCore is an augmented reality framework for smartphones running the Android operating system. It is an advanced substitute for the deprecated Project Tango. Without an extra depth sensor, an ARCore-powered cell phone is able to track its pose and build a map of the surroundings in real time. Besides, ARCore also enhances area of interest detection by estimating the average illumination intensity, which helps area segmentation during semantic mapping.

The smartphone is a remarkable feat of engineering. It integrates a great number of sensors like a gyroscope, camera, and GPS into a small slab. Specifically, in our work, a HUAWEI P20 with a Kirin 970 CPU, gravity sensor, ambient light sensor, proximity sensor, gyroscope, and compass is used.

3.4. Adaptive Artificial Potential Field-Based Path Planning

In indoor navigation for the visually impaired, path planning has to consider both efficiency and safety. Specifically, our path planning considers the following issues:

1. The path should be planned to be away from obstacles and risks: Whereas conventional robot path planning prefers the shortest path, the assistive system has more in-depth requirements. For visually impaired users, the path should stay away from obstacles and risks such as walls, pillars, and uneven steps, which may cause falls [25].
2. The path and guidance shall be updated in real time: Unlike autonomous robot systems, the assistive system cannot expect visually impaired users to proceed along the planned path accordingly. When the user deviates from the planned path, there should be a corresponding real-time path evolution instead of asking the user to return to the planned track.
3. The mechanism shall be flexible to scale up with new elements: The path-planning algorithm should be able to easily expand with new elements, such as dynamic obstacle avoidance, functional unit integration, step warning, and extreme-case re-planning.
4. The path shall be planned in a human-friendly manner: Unlike robots, visually impaired users are unable to grasp precise turning angles, and thus it is difficult for them to follow conventional turn-by-turn paths [15]. Qualitative direction guidance is more suitable. Users prefer continuous guidance in navigation and a generally smooth plan.

The artificial potential field path planning is a suitable candidate for the above issues and challenges, since it has a simple structure, strong practicability, ease of implementation, and flexibility regarding expansion [14,17]. Therefore, we propose an adaptive artificial potential field path-planning mechanism for path generation. Specifically, the target (goal) is considered an attractive potential, while walls are repulsive. The potential field is the combination of first-order attractive and repulsive potentials:

$$U = U_{\mathrm{att}} + U_{\mathrm{rep}}, \quad (9)$$

$$U_{\mathrm{att}}(X_{\mathrm{current}}) = k \, \rho(X_{\mathrm{current}}, X_{\mathrm{target}}), \quad (10)$$

$$U_{\mathrm{rep}}(X_{\mathrm{current}}) = \begin{cases} \eta \left( \dfrac{1}{\rho(X_{\mathrm{current}}, X_{\mathrm{obs}})} - \dfrac{1}{\rho_0} \right) & \text{if } \rho(X_{\mathrm{current}}, X_{\mathrm{obs}}) \le \rho_0, \\ 0 & \text{if } \rho(X_{\mathrm{current}}, X_{\mathrm{obs}}) > \rho_0, \end{cases} \quad (11)$$

where $\eta$ denotes the repulsive factor, $\rho$ denotes a distance function, and $\rho_0$ denotes the effective radius. A path can be generated by following the potential gradients. However, local minimums may block the path or cause redundant travel costs. Thus, we use the path length of the local-minimum-immune A* algorithm to control $\rho_0$ and solve this problem: $\rho_0 \leftarrow \rho_0 - \Delta\rho$ while $C_{\mathrm{AAPF}} > \lambda C_{A*}$, where $\lambda$ denotes the control factor, and $C_{\mathrm{AAPF}}$ and $C_{A*}$ denote the path lengths of the adaptive artificial potential field (AAPF) and A* from the current position, respectively.

A sliding window is used to smooth the path to support and enhance the experience of motion guidance in human–machine interaction (Figure 5):

$$X(i) = \frac{1}{2N+1} \big( X(i+N) + X(i+N-1) + \ldots + X(i-N) \big). \quad (12)$$

A case of a smoothed path is shown in Figure 5. Since the plan is discrete, the path (red dotted) is planned in taxicab style before smoothing. The sliding window described in Equation (12) updates each point on the path by averaging its position with those of the nearest 2N points on the path. Consequently, the path (dark curve) is smoothed after the process.

Figure 5. A case of a smoothed path by sliding window: before smoothing (red dotted) versus after smoothing (dark curve).
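To make the mechanism concrete, below is a simplified grid-world sketch of Equations (9)–(12). It is an illustration under stated assumptions, not the authors' implementation: the occupancy grid, the parameter values (k, η, λ, Δρ), and the use of a plain A* path length `c_astar` standing in for the local-minimum-immune reference are all our own choices.

```python
import numpy as np

def potential(grid, goal, k=1.0, eta=50.0, rho0=4.0):
    """First-order attractive plus repulsive field, Equations (9)-(11).
    `grid` is a 2D array: 1 marks obstacle cells, 0 marks free cells."""
    obs = np.argwhere(grid == 1)          # assumes at least one obstacle cell
    U = np.full(grid.shape, np.inf)
    goal = np.asarray(goal)
    for cell in np.argwhere(grid == 0):
        rho_goal = np.linalg.norm(cell - goal)                  # Eq. (10)
        rho_obs = np.min(np.linalg.norm(obs - cell, axis=1))
        rep = eta * (1 / rho_obs - 1 / rho0) if rho_obs <= rho0 else 0.0
        U[tuple(cell)] = k * rho_goal + rep                     # Eqs. (9), (11)
    return U

def descend(U, start, goal, max_steps=500):
    """Follow the negative gradient cell by cell (may stall in local minima)."""
    path, cur = [tuple(start)], tuple(start)
    for _ in range(max_steps):
        if cur == tuple(goal):
            break
        nbrs = [(cur[0] + di, cur[1] + dj)
                for di in (-1, 0, 1) for dj in (-1, 0, 1)
                if (di, dj) != (0, 0)
                if 0 <= cur[0] + di < U.shape[0] and 0 <= cur[1] + dj < U.shape[1]]
        cur = min(nbrs, key=lambda c: U[c])
        path.append(cur)
    return path

def smooth(path, N=2):
    """Sliding-window average of Equation (12)."""
    P = np.array(path, dtype=float)
    out = P.copy()
    for i in range(N, len(P) - N):
        out[i] = P[i - N:i + N + 1].mean(axis=0)
    return out

def aapf_path(grid, start, goal, c_astar, lam=1.3, rho0=4.0, drho=0.5):
    """Shrink rho0 until the AAPF path cost drops to lam times the A* cost."""
    path = descend(potential(grid, goal, rho0=rho0), start, goal)
    while len(path) > lam * c_astar and rho0 - drho > 0:
        rho0 -= drho
        path = descend(potential(grid, goal, rho0=rho0), start, goal)
    return smooth(path)
```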

3.5. Dual-Channel Human–Machine Interaction

The information transfer between the user and the system relies on human–machine interactions (HMIs). The HMI in an assistive navigation system has certain unique characteristics. First, the HMI does not rely on visual cognition. Second, the HMI is highly task-oriented. Third, different pieces of information have distinct delivery requirements regarding urgency and accuracy. The most popular audio interaction for assistive navigation systems [16,26–28] suffers in the following aspects:

Instruction delay: The instruction delivery is not instantaneous, and the latency becomes a bottleneck when dealing with urgent interaction requests, which is vital in navigation.

Limited information: The amount of information per message per second is very limited and tends to cause ambiguity, which makes accomplishing tasks with multiple semantics difficult.

Vulnerability to interference: The user may not be able to access multiple instructions simultaneously, and environmental sounds may cause interference.

Result-oriented instructions: The conventional graphical interaction provides many individual small tasks to users, allowing them to choose among different combinations to achieve their goals. Audio instructions are usually goal-driven and result-oriented, and they are weak in procedure-oriented interaction tasks.
Thus, we design a hybrid haptic interaction mechanism as the major interface to deliver navigation instructions, especially micro-motion instructions. Audio is used to deliver less-sensitive macro-informative messages. After a path to the target is determined, motion guidance is generated as shown in Figure 6.

Figure 6. The motion guidance is generated by intersecting the planned path and the awareness circle.

To deliver the motion guidance in real time via haptic interaction, there is a numerical solution. In this work, we design haptic gloves, as shown in Figure 7.

Figure 7. The design of haptic gloves.
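One possible form of the numerical solution suggested by Figure 6 is sketched below, assuming the user's planar position and heading are available from the SLAM pose: walk along the smoothed path to the first point at or beyond the awareness radius, then express it as a bearing relative to the current heading. The radius value is an assumption.

```python
import numpy as np

def guidance_bearing(path, position, heading, r=1.5):
    """Relative bearing (radians; positive = counterclockwise/left) toward
    the point where the planned path crosses the awareness circle of radius r."""
    position = np.asarray(position, dtype=float)
    target = np.asarray(path[-1], dtype=float)   # fallback: the path's end
    for waypoint in path:
        waypoint = np.asarray(waypoint, dtype=float)
        if np.linalg.norm(waypoint - position) >= r:
            target = waypoint
            break
    dx, dy = target - position
    world = np.arctan2(dy, dx)                              # bearing in map frame
    return (world - heading + np.pi) % (2 * np.pi) - np.pi  # wrap to [-pi, pi)

# Example: user at the origin facing +x, path bending to the left.
path = [(0.5, 0.0), (1.0, 0.5), (1.5, 1.0), (2.0, 2.0)]
print(guidance_bearing(path, position=(0.0, 0.0), heading=0.0))  # about +0.588 rad
```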

The left glove guides the motion, and the right glove warns of obstacles. Using the middle finger as the head front of the subject, the motion directional guidance can be instantaneously delivered to the user as soon as the motion plan is made.
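How a bearing maps onto the glove is not specified numerically in the text, so the following quantization is purely illustrative: we assume one vibration motor per finger, with the middle finger marking straight ahead as stated above and the five motors covering the frontal 180-degree sector.

```python
import numpy as np

# Motor layout is an assumption: fingers ordered right ... left, so positive
# (counterclockwise) bearings fire motors on the left side of the array.
FINGERS = ["pinky", "ring", "middle", "index", "thumb"]

def bearing_to_finger(rel_bearing: float) -> str:
    """Quantize a relative bearing (radians) to the finger motor to fire."""
    sector = np.clip(rel_bearing, -np.pi / 2, np.pi / 2)   # frontal sector only
    idx = int(round((sector + np.pi / 2) / (np.pi / 4)))   # five 45-degree bins
    return FINGERS[idx]

print(bearing_to_finger(0.0))          # "middle": keep straight
print(bearing_to_finger(np.pi / 3))    # "index": bear left
print(bearing_to_finger(-np.pi / 2))   # "pinky": hard right
```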

4. System Prototyping and Evaluation

4.1. System Prototyping

We use the HUAWEI P20 as the ARCore-supported smartphone, Arduino sensors to implement the haptic interactive glove, and the BAIDU open API for audio recognition. The application is developed in Unity3D. Roberto Lopez Mendez's ARCore SLAM is applied as the base for visual odometry and area learning. Bluetooth is used to connect the smartphone and the accessory. The ready-to-work human–machine prototype is shown in Figure 8.
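The wire format between the phone and the glove is not given in the paper; the sketch below shows one plausible minimal frame the application could send over the Bluetooth serial link. The header value and field layout are invented for illustration.

```python
import struct

def encode_command(finger: int, intensity: int, obstacle: bool) -> bytes:
    """Pack one hypothetical guidance frame: header byte, finger index (0-4),
    vibration intensity (0-255), obstacle-warning flag for the right glove."""
    return struct.pack("<BBBB", 0xA5, finger, intensity, int(obstacle))

frame = encode_command(finger=2, intensity=180, obstacle=False)
print(frame.hex())  # "a502b400"
```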

Figure 8. The implemented ANSVIP prototype with full functionality.

4.2. Localization

To validate the localization accuracy and reliability, we compare the localization of area learning with visual odometry in an indoor test. Two subjects wearing the system are asked to walk five times along a path in the corridor, one subject with area learning and the other with visual odometry (VO). The results in Figure 9 are consistent with our expectations: the VO trials suffer from accumulative errors, which cause localization drifts; meanwhile, in the area learning method, there are certain drifts when passing corners, but the system is able to swiftly correct the drift by recognizing learned areas.


Figure 9. The ground truth and the trajectories of the test trials (VO trials 1–5 and area learning).

4.3. Path Planning

Simulation comparisons of four different path-planning mechanisms are conducted: the adaptive artificial potential field (AAPF), the adaptive artificial potential field without a sliding window (AAPF/S), the artificial potential field without a repulsive force and sliding window (AAPF/RS), and the A* path planning.

On the map, we set the elevator's location as the starting position. Then, 100 random destinations are generated outside a circle with a radius of 25 meters centered at the starting point, as shown in Figure 10. We use the four candidate path-planning mechanisms to generate the paths for the start–target pairs.

Figure 10. The 100 generated destinations (stars) and the starting position (pentagram) on the map.

In Figure 11, we compare the path lengths generated by the four mechanisms. Obviously, the path lengths of AAPF are always lower than those of AAPF/S and AAPF/RS because the sliding window curves the sharp turns on the path into filleted turns; therefore, the path length is shorter, as expected. The path lengths of A* are always the lowest and the best among the four. Because A* uses a greedy mechanism to generate the path, it is guaranteed to produce the shortest length when a global view is accessible. However, it is noted that the path length differences between A* and AAPF are very limited.
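For reference, the destination sampling described above can be reproduced with simple rejection sampling. The map bounds below are placeholders, since the real samples must also fall in free space on the labeled CAD map.

```python
import math
import random

def sample_destinations(start, n=100, min_dist=25.0, bounds=(100.0, 60.0)):
    """Rejection-sample n points at least min_dist meters from start."""
    points = []
    while len(points) < n:
        x = random.uniform(0.0, bounds[0])
        y = random.uniform(0.0, bounds[1])
        if math.hypot(x - start[0], y - start[1]) >= min_dist:
            points.append((x, y))
    return points

destinations = sample_destinations(start=(50.0, 30.0))
print(len(destinations), destinations[0])
```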


Figure 11. Simulation results on path-planning cost.

In Figure 12, we collect the discrete distances from the path to obstacles along the paths. It is shown that AAPF and AAPF/S properly deal with the distance to obstacles, which is consistent with our design: the repulsive forces of obstacles keep the path away. AAPF/RS and A* do not have such repulsive forces, and thus, a good portion of their paths is close to obstacles. This is not desired in assistive navigation [6,11].


Figure 12. Simulation results on distances toto obstacles.obstacles. Although the path lengths of A* are a bit shorter than those of AAPF, considering the fact that Although the path lengths of A* are a bit shorter than those of AAPF, considering the fact that subjects in navigation are prone to experience risks and panics if risky places are close to the paths, the subjects in navigation are prone to experience risks and panics if risky places are close to the paths, AAPF outperforms A* in keeping the path safer. thethe AAPFAAPF outperformsoutperforms A*A* inin keepingkeeping thethe pathpath safer.safer. 4.4. Haptic Guidance 4.4. Haptic Guidance To verify the directional guidance of the haptic device, we carry out unit tests of the haptic To verify the directional guidance of the haptic device, we carry out unit tests of the haptic guidance glove. The guidance glove on the left hand and the Arduino joystick to be controlled on the guidance glove. The guidance glove on the left hand andand thethe ArduinoArduino joystickjoystick toto bebe controlledcontrolled onon thethe right hand are shown in Figure 13. A series of programmed guidance commands are stored and sent right hand are shown in Figure 13. A series of programmed guidance commands are stored and sent to the glove so as to let the subject feel the guidance. A blind-folded subject is told to use the joystick toto thethe gloveglove soso asas toto letlet thethe subjectsubject feelfeel thethe guidanceguidance.. AA blind-foldedblind-folded subjectsubject isis toldtold toto useuse thethe joystickjoystick to depict the directional instruction received. The joystick behaviors are recorded every half second. toto depictdepict thethe directionaldirectional instructioninstruction received.received. TheThe joystickjoystick behaviorsbehaviors areare recordedrecorded everyevery halfhalf second.second.

Figure 13. (Left) Prototype of haptic glove. (Right) Joystick for test purpose.

In Figure 14, the input guidance commands are compared with the joystick records. Obviously, there is a latency between the input and the records. The latency is caused by three factors: the cognitive delay of human haptic sensibility, the delay from understanding the guidance to controlling the joystick, and the delay between joystick action and recording. The average delay is less than 0.4 s, which is acceptable in most cases. Note that the delays in later trials are much less than those in earlier trials. One of the reasons for this is that the subject is getting familiar with the haptic interaction. In other words, the subject is capable of efficiently and quickly converting the data perceived by the haptic interaction into their own perception after a few attempts. Thus, a cooperative cognition is built between the assistive system, haptic interaction, and human perceptions.


Figure 14. Haptic glove guidance versus joystick records.

4.5. Integration Test

To verify the prototype system, we conduct target-oriented navigation tests with three low-vision subjects and one blind subject. To evaluate the human–machine interaction that occurs in our systems, we administer navigation with two different interaction mechanisms: one with pure audio instructions [3] and the other with haptic instructions. Experience surveys are collected after the tests. A 5-minute tutorial on the navigation instructions is given prior to the tests, and all of the subjects are told that security personnel will intervene before any collision or risk is met. This gives the users peace of mind.

After the test, all four subjects believed they successfully followed the instructions to reach the target (5/5); most subjects agreed that the instructions were very easy to understand (4.5/5); and all subjects agreed that their cognition of haptic instructions was enhanced a short while after beginning the experiment (5/5). Furthermore, all subjects agreed that the haptic instructions were less likely to cause hesitation than audio instructions (5/5); some subjects believed that they felt safer than expected (3.75/5); most believed that they had a better experience with haptic instructions than audio instructions in micro-guidance (4.75/5); and all believed that audio instructions were indispensable as macro-instructions (5/5). Two subjects believed the haptic glove would affect holding objects in daily life and suggested migrating the haptic component to the arm or the back of the hand.

5. Conclusions

In this work, we propose a human-centric navigation system to assist people with visual impairment while travelling indoors. The system takes a commercial smartphone as the carrier and uses Google ARCore vision SLAM for positioning. Compared with conventional visual odometry-supported travel aids, the system achieves better mapping and tracking. An adaptive artificial potential field-based path planning has been proposed for the system; it keeps the path away from obstacles so as to avoid risk and collision while generating a real-time smooth path. Finally, a dual-channel human–machine interaction mechanism is introduced in the system. The system user-centrically incorporates haptic interfaces to provide fluent and continuous guidance superior to the conventional turn-by-turn audio-guiding method. The haptic interaction can be carried out via different candidate devices, but our proposed haptic gloves benefit from an affordable cost and plug-and-play convenience. Evaluation in field tests and simulations shows that the localization and path planning achieve the expected performance, and the proposed ANSVIP system is welcomed by visually impaired subjects.

Author Contributions: Conceptualization, X.Z.; Investigation, Y.Z.; Methodology, X.Z. and F.H.; Project administration, F.H.; Resources, Y.Z.; Software, X.Y.; Writing—original draft, X.Z.

Funding: This work was funded by the Humanity and Social Science Youth Foundation of the Ministry of Education of China, grant numbers 18YJCZH249 and 17YJCZH275.

Acknowledgments: The authors would like to thank Bing Li, Jizhong Xiao, and Wei Wang for their insightful suggestions regarding this research. We thank LetPub for its linguistic assistance during the preparation of this manuscript.

Conflicts of Interest: The authors declare no conflict of interest.

References

1. Horton, E.L.; Renganathan, R.; Toth, B.N.; Cohen, A.J.; Bajcsy, A.V.; Bateman, A.; Jennings, M.C.; Khattar, A.; Kuo, R.S.; Lee, F.A.; et al. A review of principles in design and usability testing of tactile technology for individuals with visual impairments. Assist. Technol. 2017, 29, 28–36. [CrossRef] [PubMed]
2. Katz, B.F.G.; Kammoun, S.; Parseihian, G.; Gutierrez, O.; Brilhault, A.; Auvray, M.; Truillet, P.; Denis, M.; Thorpe, S.; Jouffrais, C. NAVIG: Augmented reality guidance system for the visually impaired. Virtual Real. 2012, 16, 253–269. [CrossRef]
3. Zhang, X. A Wearable Indoor Navigation System with Context Based Decision Making for Visually Impaired. Int. J. Adv. Robot. Autom. 2016, 1, 1–11. [CrossRef]
4. Ahmetovic, D.; Gleason, C.; Kitani, K.M.; Takagi, H.; Asakawa, C. NavCog: Turn-by-turn smartphone navigation assistant for people with visual impairments or blindness. In Proceedings of the 13th Web for All Conference, Montreal, QC, Canada, 11–13 April 2016; pp. 90–99. [CrossRef]
5. Bing, L.; Munoz, J.P.; Rong, X.; Chen, Q.; Xiao, J.; Tian, Y.; Arditi, A.; Yousuf, M. Vision-based Mobile Indoor Assistive Navigation Aid for Blind People. IEEE Trans. Mobile Comput. 2019, 18, 702–714.
6. Nair, V.; Budhai, M.; Olmschenk, G.; Seiple, W.H.; Zhu, Z. ASSIST: Personalized Indoor Navigation via Multimodal Sensors and High-Level Semantic Information. In Proceedings of the 2018 European Conference on Computer Vision, Munich, Germany, 8–14 September 2018; Volume 11134, pp. 128–143. [CrossRef]
7. Fernandes, H.; Costa, P.; Filipe, V.; Paredes, H.; Barroso, J. A review of assistive spatial orientation and navigation technologies for the visually impaired. In Universal Access in the Information Society; Springer: Berlin/Heidelberg, Germany, 2017. [CrossRef]
8. Yang, Z.; Ganz, A. A Sensing Framework for Indoor Spatial Awareness for Blind and Visually Impaired Users. IEEE Access 2019, 7, 10343–10352. [CrossRef]
9. Jiao, J.C.; Yuan, L.B.; Deng, Z.L.; Zhang, C.; Tang, W.H.; Wu, Q.; Jiao, J. A Smart Post-Rectification Algorithm Based on an ANN Considering Reflectivity and Distance for Indoor Scenario Reconstruction. IEEE Access 2018, 6, 58574–58586. [CrossRef]

10. Joseph, S.L.; Xiao, J.Z.; Zhang, X.C.; Chawda, B.; Narang, K.; Rajput, N.; Mehta, S.; Subramaniam, L.V. Being Aware of the World: Toward Using Social Media to Support the Blind with Navigation. IEEE Trans. Hum.-Mach. Syst. 2015, 45, 399–405. [CrossRef]
11. Xiao, J.; Joseph, S.L.; Zhang, X.; Li, B.; Li, X.; Zhang, J. An Assistive Navigation Framework for the Visually Impaired. IEEE Trans. Hum.-Mach. Syst. 2017, 45, 635–640. [CrossRef]
12. Zhang, X.; Bing, L.; Joseph, S.L.; Xiao, J.; Yi, S.; Tian, Y.; Munoz, J.P.; Yi, C. A SLAM Based Semantic Indoor Navigation System for Visually Impaired Users. In Proceedings of the 2015 IEEE International Conference on Systems, Man, and Cybernetics, Kowloon, China, 9–12 October 2015.
13. Bing, L.; Muñoz, J.P.; Rong, X.; Xiao, J.; Tian, Y.; Arditi, A. ISANA: Wearable Context-Aware Indoor Assistive Navigation with Obstacle Avoidance for the Blind. In Proceedings of the 2016 European Conference on Computer Vision, Amsterdam, The Netherlands, 8–16 October 2016.
14. Zhao, Y.; Zheng, Z.; Liu, Y. Survey on computational-intelligence-based UAV path planning. Knowl.-Based Syst. 2018, 158, 54–64. [CrossRef]
15. Ahmetovic, D.; Oh, U.; Mascetti, S.; Asakawa, C. Turn Right: Analysis of Rotation Errors in Turn-by-Turn Navigation for Individuals with Visual Impairments. In Proceedings of the 20th International ACM SIGACCESS Conference on Computers and Accessibility, ASSETS'18, Galway, Ireland, 22–24 October 2018; pp. 333–339. [CrossRef]
16. Balata, J.; Mikovec, Z.; Slavik, P. Landmark-enhanced route itineraries for navigation of blind pedestrians in urban environment. J. Multimodal User Interfaces 2018, 12, 181–198. [CrossRef]
17. Soltani, A.R.; Tawfik, H.; Goulermas, J.Y.; Fernando, T. Path planning in construction sites: Performance evaluation of the Dijkstra, A*, and GA search algorithms. Adv. Eng. Inform. 2002, 16, 291–303. [CrossRef]
18. Sato, D.; Oh, U.; Naito, K.; Takagi, H.; Kitani, K.; Asakawa, C. NavCog3: An Evaluation of a Smartphone-Based Blind Indoor Navigation Assistant with Semantic Features in a Large-Scale Environment. In Proceedings of the 19th International ACM SIGACCESS Conference on Computers and Accessibility, Baltimore, MD, USA, 29 October–1 November 2017. [CrossRef]
19. Epstein, R.A.; Patai, E.Z.; Julian, J.B.; Spiers, H.J. The cognitive map in humans: Spatial navigation and beyond. Nat. Neurosci. 2017, 20, 1504–1513. [CrossRef] [PubMed]
20. Fyhn, M.; Molden, S.; Witter, M.P.; Moser, E.I.; Moser, M.-B. Spatial representation in the entorhinal cortex. Science 2004, 305, 1258–1264.
21. Papadopoulos, K.; Koustriava, E.; Koukourikos, P.; Kartasidou, L.; Barouti, M.; Varveris, A.; Misiou, M.; Zacharogeorga, T.; Anastasiadis, T. Comparison of three orientation and mobility aids for individuals with blindness: Verbal description, audio-tactile map and audio-haptic map. Assist. Technol. 2017, 29, 1–7. [CrossRef] [PubMed]
22. Rector, K.; Bartlett, R.; Mullan, S. Exploring Aural and Haptic Feedback for Visually Impaired People on a Track: A Wizard of Oz Study. In Proceedings of the 20th International ACM SIGACCESS Conference on Computers and Accessibility, ASSETS'18, Galway, Ireland, 22–24 October 2018. [CrossRef]
23. Papadopoulos, K.; Koustriava, E.; Koukourikos, P. Orientation and mobility aids for individuals with blindness: Verbal description vs. audio-tactile map. Assist. Technol. 2018, 30, 191–200. [CrossRef] [PubMed]
24. Guerreiro, J.; Ohn-Bar, E.; Ahmetovic, D.; Kitani, K.; Asakawa, C. How Context and User Behavior Affect Indoor Navigation Assistance for Blind People. In Proceedings of the 2018 Internet of Accessible Things, Lyon, France, 23–25 April 2018. [CrossRef]
25. Kacorri, H.; Ohn-Bar, E.; Kitani, K.M.; Asakawa, C. Environmental Factors in Indoor Navigation Based on Real-World Trajectories of Blind Users. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, Montreal, QC, Canada, 21–26 April 2018; pp. 1–12. [CrossRef]
26. Boerema, S.T.; van Velsen, L.; Vollenbroek-Hutten, M.M.R.; Hermens, H.J. Value-based design for the elderly: An application in the field of mobility aids. Assist. Technol. 2017, 29, 76–84. [CrossRef] [PubMed]
27. Mone, G. Feeling Sounds, Hearing Sights. Commun. ACM 2018, 61, 15–17. [CrossRef]
28. Martins, L.B.; Lima, F.J. Analysis of Wayfinding Strategies of Blind People Using Tactile Maps. Procedia Manuf. 2015, 3, 6020–6027. [CrossRef]

© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).