Autonomous Robotic Exploration and Mapping of Smart Indoor Environments With UWB-IoT Devices

Tianyi Wang1*, Ke Huo1*, Muzhi Han2, Daniel McArthur1, Ze An1, David Cappelleri1, and Karthik Ramani1,3
1School of Mechanical Engineering, Purdue University, US, {wang3259, khuo, dmcarth, an40, dcappell, [email protected]
2Department of Mechanical Engineering, Tsinghua University, China, [email protected]
3School of Electrical and Computer Engineering, Purdue University, US (by Courtesy)

*Tianyi Wang and Ke Huo contributed equally to this work.
Copyright © 2020, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

Abstract

The emerging simultaneous localization and mapping (SLAM) techniques enable robots with spatial awareness of the physical world. However, such awareness remains at a geometric level. We propose an approach for quickly constructing a smart environment with semantic labels to enhance the robot with spatial intelligence. Essentially, we embed UWB-based distance-sensing IoT devices into regular items and treat the robot as a dynamic node in the IoT network. By leveraging the self-localization from the robot node, we resolve the locations of IoT devices in the SLAM map. We then exploit the semantic knowledge from the IoT to enable the robot to navigate and interact within the smart environment.
With the IoT nodes deployed, the robot can adapt to environments that are unknown or that have time-varying configurations. Through our experiments, we demonstrated that our method supports an object level of localization accuracy (∼0.28 m), a shorter discovery and localization time (118.6 s) compared to an exhaustive search, and an effective navigation strategy for a global approach and local manipulation. Further, we demonstrated two use-case scenarios where a service robot (i) executes a sequence of user-assigned tasks in a smart home and (ii) explores multiple connected regions using IoT landmarks.

Introduction

Within our surrounding environment, the ad-hoc tasks which we take for granted are often complex for robots because of their limited perception capabilities and underdeveloped intelligence algorithms (Kemp, Edsinger, and Torres-Jara 2007).
Despite the commercial successes of mobile robots, particularly in warehouses, they are mostly specialized in handling simplified and pre-defined tasks within controlled environments, often with fixed navigation pathways. Further, many of the AI advances in navigation are in simple settings with many assumptions, and are not useful in realistic environments (Savinov, Dosovitskiy, and Koltun 2018). On the other hand, the rapidly emerging IoT ecologies bridge our physical world with digital intelligence. In contrast to ongoing advances in vision, we propose an integration of robots into the connected network, where they can leverage information collected from the IoT and thus gain stronger situational awareness (Simoens, Dragone, and Saffiotti 2018) and spatial intelligence, which is especially useful in exploration, planning, mapping, and interacting with the environment without relying on AI/vision-based navigation only.

Recent advanced computer vision technologies, such as SLAM algorithms and depth sensing, have empowered mobile robots with the ability to self-localize and build maps within indoor environments using on-board sensors only (Cadena et al. 2016), (Newcombe et al. 2011). Although these systems provide good maps under favorable conditions, they are very sensitive to calibration and imaging conditions, and are not suitable in changing, dynamic environments. Therefore, to fully support navigation and interaction with the environment, we need to extend robots' perception from a vision-based geometric level to a semantic level. Although researchers have made substantial progress in scene understanding, object detection, and pose estimation (Salas-Moreno et al. 2013), vision-based approaches largely rely on knowing the object representations a priori (Wu, Lim, and Yang 2013) and keeping the objects of interest in the camera's view. That said, vision-only approaches may be more suitable for local and specific tasks; mapping key parts of the environment, identifying the objects of interest, and especially finding means to interact with them using vision-based methods usually do not have well-developed solutions.

In contrast, within a smart environment, wireless techniques such as Bluetooth, Zigbee, and WiFi allow for instant discovery of the connected objects via the network. Further, the robots could naturally access the semantic information stored in the local IoT devices, which contributes towards understanding the environment and results in intelligent user interactions. Still, resolving the spatial distribution of the IoT-tagged devices or objects remains challenging. Using the wireless communication opportunistically, received signal strength indicator (RSSI)-based methods for localization of the sensor node have been studied extensively in the wireless sensor network (WSN) field (Heurtefeux and Valois 2012). Yet, the low accuracy of the results (a few meters) may prevent them from being employed for indoor mobile robots. Other researchers have developed UHF RFID-based object-finding systems (Deyle, Reynolds, and Kemp 2014). However, these systems introduce an extra bulky and expensive UHF antenna, suffer from a limited detection range (∼3 m), and, using their approach, a robot must perform a global search before navigating to and interacting with the IoT tags.
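The few-meter accuracy of RSSI ranging follows from the log-distance path-loss model: distance enters the received power logarithmically, so small dB fluctuations from fading map to large distance errors. A minimal sketch (the reference power at 1 m and the path-loss exponent are illustrative assumptions, not values from the paper):

```python
import math

def rssi_to_distance(rssi_dbm, tx_power_dbm=-59.0, path_loss_exp=2.0):
    """Invert the log-distance path-loss model:
    RSSI = tx_power - 10 * n * log10(d)  =>  d = 10 ** ((tx_power - RSSI) / (10 * n))
    tx_power_dbm is the received power at a 1 m reference distance.
    """
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exp))

# A 3 dB fade on a nominal 10 m reading shifts the estimate by ~4 m,
# which is why RSSI ranging is too coarse for indoor robot navigation.
d_nominal = rssi_to_distance(-79.0)  # 10 ** (20/20) = 10.0 m
d_faded = rssi_to_distance(-82.0)    # 10 ** (23/20) ≈ 14.1 m
```

With a free-space exponent of 2, every 6 dB of error doubles or halves the distance estimate, matching the "few meters" accuracy figure quoted above.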
Recently, researchers have been investigating distance-based localization methods using the ultra-wideband (UWB) wireless technique, which provides accurate time-of-flight distance measurements (Di Franco et al. 2017). Such techniques have been further applied to enable users to interact with smart environments (Huo et al. 2018). Inspired by these works, we propose a spatial mapping for IoT devices by integrating UWB with SLAM-capable robots. We build highly portable and self-contained UWB-IoT devices that can be labeled and attached to ordinary items. The SLAM-capable robot simply surveys a small local region and collects distance measurements to the IoT devices for a short time, and then our mapping method outputs the global locations of the devices relative to the SLAM map.
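The mapping step described above, collecting UWB ranges along the robot's self-localized trajectory and solving for each device's position, can be illustrated with a standard least-squares multilateration. This is a generic sketch, not the paper's actual estimator; the function name and test geometry are ours:

```python
import math

def locate_uwb_node(poses, ranges):
    """Least-squares multilateration in 2D.

    poses:  robot positions p_i from SLAM, as [(x, y), ...]
    ranges: UWB distance measurements r_i from each pose to one static node.
    Linearizes |x - p_i|^2 = r_i^2 by subtracting the first equation,
    then solves the 2x2 normal equations for the node position x.
    Requires at least 3 non-collinear poses.
    """
    (x0, y0), r0 = poses[0], ranges[0]
    rows, rhs = [], []
    for (xi, yi), ri in zip(poses[1:], ranges[1:]):
        rows.append((2.0 * (xi - x0), 2.0 * (yi - y0)))
        rhs.append(r0**2 - ri**2 + xi**2 - x0**2 + yi**2 - y0**2)
    # Normal equations A^T A x = A^T b, solved in closed form for 2 unknowns.
    a11 = sum(a * a for a, _ in rows)
    a12 = sum(a * b for a, b in rows)
    a22 = sum(b * b for _, b in rows)
    b1 = sum(a * v for (a, _), v in zip(rows, rhs))
    b2 = sum(b * v for (_, b), v in zip(rows, rhs))
    det = a11 * a22 - a12 * a12
    return ((a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det)

# Four poses along a short local survey; ranges simulated to a node at (3, 4).
poses = [(0.0, 0.0), (5.0, 0.0), (0.0, 5.0), (6.0, 6.0)]
ranges = [math.dist((3.0, 4.0), p) for p in poses]
est = locate_uwb_node(poses, ranges)  # recovers (3.0, 4.0)
```

In practice the ranges are noisy, so more poses (an overdetermined system) and outlier rejection would be needed, but the geometry is the same: the moving robot node turns one UWB transceiver into many virtual anchors.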
Our method supports navigation and planning in previously unseen environments. We leverage the discovered IoT devices as spatial landmarks which essentially work as beacons that help the robot familiarize itself with a complex environment quickly, without accessing any pre-stored and static databases. Centering upon this mapping method, our contributions are three-fold, as follows.

• A method to build a smart environment with spatial and semantic information by deploying UWB-IoT devices.

• A navigation pipeline that drives a robot to a target globally and then refines the object localization, for example with object handling and manipulations.

• Demonstration of our method with a prototype service robot (i) working with users through a task-oriented and spatially-aware user interface and (ii) exploring an unknown environment referring to IoT landmarks.

Background

The Internet of Robotic Things

The concept of IoT has gained recent importance because [...] in smart factories (Wan et al. 2017). Moreover, the assistive robots in human environments rely heavily on overseeing human-involved activities and/or medical conditions from the IoT platform (Simoens et al. 2016). Essentially, these extended perception and cognitive capabilities lead to a larger degree of autonomy and adaptability, and thus better facilitate human-robot interactions and collaborations (Nguyen et al. 2013). On the other hand, compared to stationary and simple actuators such as doors, coffee makers, manufacturing machines, and elevators in an IoT ecosystem, the mobility introduced by robots serves as an essential complementary element in IoRT. From this perspective, self-localization and mapping becomes fundamental to navigating the robot to complete any spatially-aware tasks in indoor environments such as factories, hospitals, and houses. While on-board vision-based sensors support SLAM well for navigation, semantic mapping remains challenging using only on-board sensors (Cadena et al. 2016). In our paper, we emphasize an IoT-facilitated mapping of the smart environment by employing an autonomous discovery and localization of the IoT devices and associating them spatially on the SLAM map.

Mapping of Smart Environments

To access locations of smart devices, researchers have developed environmental models which associate the metadata and spatial information of the IoT with the geometric map of the environment (Dietrich et al. 2014). However, these models largely remain static, which indicates low flexibility against environment changes, e.g., newly added or moved assets. By equipping robots with on-board active sensors, researchers have further investigated approaches towards autonomous and dynamic discovery and mapping of IoT. Utilizing UHF RFID technologies, a probabilistic model and a filtering-based localization method have been proposed to map the distributed passive tags (Hahnel et al. 2004), (Joho, Plagemann, and Burgard 2009). Although RFID tags carry some basic information, they are not connected to the network or Internet. Therefore, the discovery of the tags often requires the robots to traverse the entire environment (Deyle
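The navigation pipeline named in the contributions (drive globally toward a coarse UWB-resolved target, then refine locally for manipulation) can be sketched as a two-phase control loop. Everything below is an illustrative assumption on our part: the node record, the hand-off radius, and the toy straight-line planner stand in for the paper's actual planner and local sensing.

```python
import math
from dataclasses import dataclass

@dataclass
class IoTNode:
    label: str     # semantic label stored on the device, e.g. "coffee maker"
    est_xy: tuple  # coarse position resolved in the SLAM map via multilateration

APPROACH_RADIUS = 1.0  # hand-off distance (m) from global to local phase; assumed

def step_toward(p, goal, step=0.5):
    """Toy global planner: move `step` meters straight toward the goal."""
    d = math.dist(p, goal)
    if d <= step:
        return goal
    t = step / d
    return (p[0] + t * (goal[0] - p[0]), p[1] + t * (goal[1] - p[1]))

def approach_and_refine(robot_xy, node, navigate=step_toward):
    """Phase 1: global navigation toward the coarse UWB-resolved location.
    Phase 2 (stub): once within APPROACH_RADIUS, local sensing would take
    over to refine the object pose for handling and manipulation."""
    while math.dist(robot_xy, node.est_xy) > APPROACH_RADIUS:
        robot_xy = navigate(robot_xy, node.est_xy)
    return robot_xy  # close enough for the local refinement phase
```

The design point is the hand-off: the coarse beacon-level estimate only needs to bring the robot within the working range of its short-range sensors, after which vision or manipulation-grade localization does the rest.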