HarvestNet: Mining Valuable Training Data from High-Volume Robot Sensory Streams


Sandeep Chinchali, Evgenya Pergament, Manabu Nakanoya, Eyal Cidon, Edward Zhang, Dinesh Bharadia, Marco Pavone, Sachin Katti

Abstract—Today's robotic fleets are increasingly measuring high-volume video and LIDAR sensory streams, which can be mined for valuable training data, such as rare scenes of road construction sites, to steadily improve robotic perception models. However, re-training perception models on growing volumes of rich sensory data in central compute servers (or the "cloud") places an enormous time and cost burden on network transfer, cloud storage, human annotation, and cloud computing resources. Hence, we introduce HARVESTNET, an intelligent sampling algorithm that resides on-board a robot and reduces system bottlenecks by only storing rare, useful events to steadily improve perception models re-trained in the cloud. HARVESTNET significantly improves the accuracy of machine-learning models on our novel dataset of road construction sites, field testing of self-driving cars, and streaming face recognition, while reducing cloud storage, dataset annotation time, and cloud compute time by between 65.7-81.3%. Further, it is between 1.05-2.58x more accurate than baseline algorithms and scalably runs on embedded deep learning hardware.

I. INTRODUCTION

Learning to identify rare events, such as traffic disruptions due to construction or hazardous weather conditions, can significantly improve the safety of robotic perception and decision-making models. While performing their primary tasks, fleets of future robots, ranging from delivery drones to self-driving cars, can passively collect rare training data they happen to observe from their high-volume video and LIDAR sensory streams. As such, we envision that robotic fleets will act as distributed, passive sensors that collect valuable training data to steadily improve the robotic autonomy stack.

Despite the benefits of continually improving robotic models from field data, they come with severe systems costs that are largely under-appreciated in today's robotics literature. Specifically, these systems bottlenecks (shown in Fig. 1) stem from the time or cost of (1) network data transfer to compute servers, (2) limited robot and cloud storage, (3) dataset annotation with human supervision, and (4) cloud computing for training models on massive datasets. Indeed, it is estimated that a single self-driving car will produce upwards of 4 Terabytes (TB) of LIDAR and video data per day [1, 7], which we contrast with our own calculations in Section II.

In general, the problem of intelligently sampling useful or anomalous training data is of extremely wide scope, since field robotic data contains both known weakpoints of a model as well as unknown weakpoints that have yet to even be discovered. A concrete example of a known weakpoint is shown in Fig. 2, where a fleet operator identifies that construction sites are common errors of a perception model and wishes to collect more of such images, perhaps because they are insufficiently represented in a training dataset. In contrast, unknown weakpoints could be rare visual concepts that are not even anticipated by a roboticist.

To provide a focused contribution, this paper addresses the problem of how robotic fleets can source more training data for known weakpoints to maximize model accuracy, but with minimal end-to-end systems costs. Though of tractable scope, this problem is still extremely challenging since a robot must harvest rare, task-relevant training examples from growing volumes of robotic sensory data. A principal benefit of addressing known model weakpoints is that the cloud can directly use knowledge of task-relevant image classes, which require more training data, to guide and improve a robot's sampling process. Our key technical insight is to keep robot sampling logic simple, but to leverage the power of the cloud to continually re-train models and annotate uncertain images with a human-in-the-loop, thereby using cloud feedback to direct robotic sampling. Moreover, we discuss how insights from HARVESTNET can be extended to collecting training data for unknown weakpoints in future work.

In this paper, we introduce a novel Robot Sensory Sampling Problem (Fig. 1), which asks how a robot can maximally improve model accuracy via cloud re-training, but with minimal end-to-end systems costs. To address this problem, we introduce HARVESTNET, a compute-efficient sampler that scalably runs on even resource-constrained robots and reduces systems bottlenecks by "harvesting" only task-relevant training examples for cloud processing.

Fig. 1: The Robot Sensory Sampling Problem. What are the minimal set of "interesting" training samples robots can send to the cloud to periodically improve perception model accuracy while minimizing system costs of network transfer, cloud storage, human annotation, and cloud computing?

Literature Review: HARVESTNET is closely aligned with cloud robotics [6, 16, 15], where robots send video or LIDAR to the cloud to query compute-intensive models or databases when they are uncertain. Robots can face significant network congestion while transmitting high-datarate sensory streams to the cloud, leading to work that selectively offloads only uncertain samples [11] or uses low-latency "fog computing" nodes for object manipulation and tele-operation [25, 26]. As opposed to focusing solely on network latency for real-time control, HARVESTNET addresses overall system cost (from network transfer to human annotation time) and instead focuses on the open problem of continual learning from cleverly-sampled batches of field data.

Our work is inspired by traditional active learning [27, 22, 12, 14], novelty detection [18, 10], and few-shot learning [30, 28, 20], but differs significantly since we use novel compute-efficient algorithms, distributed between the robot and cloud, that are scalable to run on even resource-constrained robots. Many active learning algorithms [27, 22, 12] use compute-intensive acquisition functions to decide which limited samples a learner should label to maximize model accuracy, which often involve Monte Carlo sampling to estimate uncertainty over a new image. Indeed, the most similar work to ours [14] uses Bayesian Deep Neural Networks (DNNs), which have a distribution over model parameters, to take several stochastic forward passes of a neural network to empirically quantify uncertainty and decide whether an image should be labeled. Such methods are infeasible for resource-constrained robots, which might not have the time to take several forward passes of a DNN for a single image or the capability to efficiently run a distribution over model parameters. Indeed, we show in Section V how our sampler uses a hardware DNN accelerator, specifically the Google Edge Tensor Processing Unit (TPU), to evaluate an on-board vision DNN, with one fixed set of pre-compiled model parameters, in < 16 ms and decides to sample an image in only ≈ 0.3 ms! More broadly, our work differs since we (1) address sensory streams (as opposed to a pool of unlabeled data), (2) use a compute-efficient model to guide sampling, (3) direct sampling with task-relevant cloud feedback, and (4) also address storage, network, and computing costs.

Statement of Contributions: In light of prior work, our contributions are four-fold. First, we collect months of novel video footage to quantify the systems bottlenecks of processing growing sensory data. Second, we empirically show that large improvements in perception model accuracy can be realized by automatically sampling task-relevant training examples from vast sensory streams. Third, we design a sampler that intelligently delegates compute-intensive tasks to the cloud, such as adaptively tuning key parameters of robot sampling policies via cloud feedback. Finally, we show that our perception models run efficiently on the state-of-the-art Google Edge TPU [5] for resource-constrained robots. Next, Sections IV and V quantify HARVESTNET's accuracy improvements for perception models, key reductions in system bottlenecks, and performance on the Edge TPU [5]. Finally, we conclude with future applications in Section VI.
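To make this division of labor concrete, the following is a minimal sketch, not the paper's actual implementation, of an on-robot sampling loop in the spirit described above: a single forward pass of a compute-efficient vision DNN scores each frame for task-relevant classes, only high-scoring frames are kept in a bounded cache for later upload and annotation, and both the score threshold and the scoring model can be replaced by cloud feedback. The names TaskRelevantSampler and score_frame are illustrative assumptions, not an interface from the paper.

```python
# Hypothetical on-robot sampler in the spirit of HarvestNet: a cheap vision
# DNN scores each frame, and only the highest-scoring frames are cached for
# upload. Names and parameters are illustrative, not the paper's API.
import heapq

class TaskRelevantSampler:
    def __init__(self, score_frame, cache_size=64, threshold=0.5):
        self.score_frame = score_frame  # e.g., max confidence over target classes
        self.cache_size = cache_size    # limited on-robot storage budget
        self.threshold = threshold      # tuned by cloud feedback
        self._cache = []                # min-heap of (score, seq, frame)
        self._seq = 0

    def observe(self, frame):
        """Score one frame and keep it only if it looks task-relevant."""
        score = self.score_frame(frame)  # one forward pass of the on-board DNN
        if score < self.threshold:
            return                       # discard most frames immediately
        item = (score, self._seq, frame)
        self._seq += 1
        if len(self._cache) < self.cache_size:
            heapq.heappush(self._cache, item)
        else:
            heapq.heappushpop(self._cache, item)  # evict lowest-scoring cached frame

    def flush(self):
        """Return cached frames for upload and human annotation; reset cache."""
        batch = [frame for _, _, frame in sorted(self._cache, reverse=True)]
        self._cache = []
        return batch

    def apply_cloud_feedback(self, new_threshold, new_score_frame=None):
        """Adopt re-tuned sampling parameters and, optionally, a re-trained model."""
        self.threshold = new_threshold
        if new_score_frame is not None:
            self.score_frame = new_score_frame
```

In this sketch the robot performs only one forward pass per frame plus a constant-time threshold check, which is what makes such a loop plausible on accelerators like the Edge TPU; all heavier work (annotation, re-training, parameter tuning) is delegated to the cloud, matching the paper's stated insight.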
Fig. 2: HARVESTNET Re-Trained Vision Models (panels labeled "Error" and "Corrected"; domain-specific). From only a minimal set of training examples, HARVESTNET adapts default vision models trained on the standard COCO dataset [17] (top images) to better, domain-specific models (bottom images). We flexibly fine-tune mobile-optimized models, such as MobileNetV2, as well as compute-intensive, but more accurate, models such as Faster R-CNN [19].

II. MOTIVATION

We now describe the costs and benefits of continual learning in the cloud by collecting and processing three novel perception datasets.

A. Costs and Benefits of Continual Learning in the Cloud

Benefits: A prime benefit of continual learning in the cloud is to correct weak points of existing models or specialize them to new domains. As shown in Fig. 2 (right, top), standard, publicly-available vision deep neural networks (DNNs) pre-trained on benchmark datasets like MS-COCO [17] can misclassify traffic cones as humans but also miss the large excavator vehicle, which neither looks like a standard truck nor a car. However, as shown in Fig. 2 (right, bottom), a specialized model re-trained in the cloud using automatically sampled, task-relevant field data can correct such errors. More generally, field robots that "crowd-source" interesting or anomalous training data could automatically update high-definition (HD) road maps.
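As a rough illustration of this cloud-side specialization step, the sketch below fine-tunes a generic ImageNet-pre-trained MobileNetV2 backbone on harvested, human-annotated frames using Keras transfer learning. The directory layout, ImageNet initialization, class structure, and hyperparameters are assumptions for illustration only; the paper also fine-tunes detection models such as Faster R-CNN, which this classification-style sketch does not cover.

```python
# Illustrative cloud-side fine-tuning step: adapt a generic pre-trained
# backbone to harvested, task-relevant images (e.g., construction sites).
# Paths, class count, and hyperparameters are assumptions, not the paper's.
import tensorflow as tf

IMG_SIZE = (224, 224)
train_ds = tf.keras.utils.image_dataset_from_directory(
    "harvested_samples/",          # human-annotated frames uploaded by the fleet
    image_size=IMG_SIZE, batch_size=32)

base = tf.keras.applications.MobileNetV2(
    input_shape=IMG_SIZE + (3,), include_top=False,
    weights="imagenet", pooling="avg")
base.trainable = False             # freeze the backbone; train only a small head

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # MobileNetV2 preprocessing
    base,
    tf.keras.layers.Dense(len(train_ds.class_names), activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=5)      # re-trained weights are then pushed back to the fleet
```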
Costs: The above benefits of cloud re-training, however, often come with understated costs, which we now quantify. Table I estimates monthly systems costs for a small fleet of 10 self-driving cars, each of which is assumed to capture 4 hours of driving data per day. In the first scenario, called "Dashcam-ours", we assume a single car has only one dashcam capturing H.264 compressed video, as in our collected dataset.
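For intuition about the scale of these bottlenecks, the back-of-envelope script below estimates the fleet's monthly raw data volume under assumed average sensor bitrates. The bitrates are illustrative guesses, not the measured values behind Table I; a rate around 2 Gbit/s of video plus LIDAR over 4 hours is merely in the same ballpark as the roughly 4 TB/day per car cited above.

```python
# Back-of-envelope estimate of monthly fleet data volume, in the spirit of
# the Table I scenarios. Bitrates below are rough assumptions for
# illustration only, not the paper's measured numbers.
FLEET_SIZE = 10          # cars
HOURS_PER_DAY = 4        # driving data captured per car per day
DAYS_PER_MONTH = 30

def monthly_terabytes(bitrate_mbps: float) -> float:
    """Total TB/month for the fleet at a given average sensor bitrate (Mbit/s)."""
    seconds = HOURS_PER_DAY * 3600 * DAYS_PER_MONTH * FLEET_SIZE
    bits = bitrate_mbps * 1e6 * seconds
    return bits / 8 / 1e12   # bits -> bytes -> terabytes

# Assumed scenarios (illustrative bitrates):
print(f"Single H.264 dashcam (~5 Mbit/s):   {monthly_terabytes(5):7.1f} TB/month")
print(f"Full video + LIDAR rig (~2 Gbit/s): {monthly_terabytes(2000):7.1f} TB/month")
```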
Recommended publications
  • Synergy of IoT and AI in Modern Society: The Robotics and Automation Case
    Case Report, Robot Autom Eng J, Volume 3, Issue 5, September 2018. Copyright © Spyros G Tzafestas. DOI: 10.19080/RAEJ.2018.03.555621. Synergy of IoT and AI in Modern Society: The Robotics and Automation Case. Spyros G Tzafestas, National Technical University of Athens, Greece. Submission: August 15, 2018; Published: September 12, 2018. Abstract: The Internet of Things (IoT) is a recent revolution of the Internet which is increasingly adopted with great success in business, industry, enterprise, transportation, robotics, healthcare, economic, and other sectors of modern information society. In particular, the IoT supported by artificial intelligence enhances considerably the success in a large repertory of every-day applications, with dominant ones the industrial and automation systems applications. Our aim in this article is to provide a global discussion of the main issues concerning the synergy of IoT and AI, including what is meant by the concept of 'IoT-AI synergy', the factors that drive the development of 'IoT enabled by AI', and the concepts of 'Industrial IoT' (IIoT), 'Internet of Robotic Things' (IoRT), and 'Industrial Automation IoT' (IAIoT). Starting with an overview of the IoT and AI fields, the article describes currently running and potential applications of great value for the society. Then, a number of case studies are outlined, and, finally, some IoT/AI-aided robotics and industrial
  • The Robot Economy: Here It Comes
    The Robot Economy: Here It Comes. Miguel Arduengo, Luis Sentis. Abstract: Automation is not a new phenomenon, and questions about its effects have long followed its advances. More than a half-century ago, US President Lyndon B. Johnson established a national commission to examine the impact of technology on the economy, declaring that automation "can be the ally of our prosperity if we will just look ahead". In this paper, our premise is that we are at a technological inflection point in which robots are developing the capacity to greatly increase their cognitive and physical capabilities, and thus raising questions on labor dynamics. With increasing levels of autonomy and human-robot interaction, intelligent robots could soon accomplish new human-like capabilities such as engaging into social activities. Therefore, an increase in automation and autonomy brings the question of robots directly participating in some economic activities as autonomous agents. In this paper, a technological framework describing a robot economy is outlined and the challenges it might represent in the current socio-economic scenario are pondered. Fig. 1: Robots are rapidly developing capabilities that could one day allow them to participate as autonomous agents in economic activities with the potential to change the current socio-economic scenario. Some interesting examples of such activities could eventually involve engaging into agreements with human counterparts, the purchase of goods and services and the participation in highly unstructured production processes. Keywords: Intelligent robots · Robot economy · Cloud Robotics · IoRT · Blockchain
  • Consumer Cloud Robotics and the Fair Information Practice Principles
    Consumer Cloud Robotics and the Fair Information Practice Principles ANDREW PROIA* DREW SIMSHAW ** & DR. KRIS HAUSER*** INTRODUCTION I. CLOUD ROBOTICS & TOMORROW’S DOMESTIC ROBOTS II. THE BACKBONE OF CONSUMER PRIVACY REGULATIONS AND BEST PRACTICES: THE FAIR INFORMATION PRACTICE PRINCIPLES A. A Look at the Fair Information Practice Principles B. The Consumer Privacy Bill of Rights 1. Scope 2. Individual Control 3. Transparency 4. Respect for Context 5. Security 6. Access and Accuracy 7. Focused Collection 8. Accountability C. The 2012 Federal Trade Commission Privacy Framework 1. Scope 2. Privacy By Design 3. Simplified Consumer Choice 4. Transparency III. WHEN THE FAIR INFORMATION PRACTICE PRINCIPLES MEET CLOUD ROBOTICS: PRIVACY IN A HOME OR DOMESTIC ENVIRONMENT A. The “Sensitivity” of Data Collected by Cloud-Enabled Robots in a Domestic Environment B. The Context of a Cloud-Enabled Robot Transaction: Data Collection, Use, and Retention Limitations C. Adequate Disclosures and Meaningful Choices Between a Cloud- Enabled Robot and the User 1. When Meaningful Choices are to be Provided 2. How Meaningful Choices are to be Provided D. Transparency & Privacy Notices E. Security F. Access & Accuracy G. Accountability IV. APPROACHING THE PRIVACY CHALLENGES INHERENT IN CONSUMER CLOUD ROBOTICS CONCLUSION PROIA, SIMSHAW & HAUSER CONSUMER CLOUD ROBOTICS & THE FIPPS INTRODUCTION At the 2011 Google I/O Conference, Google’s Ryan Hickman and Damon Kohler, and Willow Garage’s Ken Conley and Brian Gerkey, took the stage to give a rather intriguing presentation:
  • Benchmarks for Cloud Robotics by Arjun Kumar Singh (Dissertation)
    Benchmarks for Cloud Robotics by Arjun Kumar Singh A dissertation submitted in partial satisfaction of the requirements for the degree of Doctor of Philosophy in Computer Science in the Graduate Division of the University of California, Berkeley Committee in charge: Professor Pieter Abbeel, Chair Professor Ken Goldberg Professor Bruno Olshausen Summer 2016 Benchmarks for Cloud Robotics Copyright 2016 by Arjun Kumar Singh 1 Abstract Benchmarks for Cloud Robotics by Arjun Kumar Singh Doctor of Philosophy in Computer Science University of California, Berkeley Professor Pieter Abbeel, Chair Several areas of computer science, including computer vision and natural language processing, have witnessed rapid advances in performance, due in part to shared datasets and benchmarks. In a robotics setting, benchmarking is challenging due to the amount of variation in common applications: researchers can use different robots, different objects, different algorithms, different tasks, and different environments. Cloud robotics, in which a robot accesses computation and data over a network, may help address the challenge of benchmarking in robotics. By standardizing the interfaces in which robotic systems access and store data, we can define a common set of tasks and compare the performance of various systems. In this dissertation, we examine two problem settings that are well served by cloud robotics. We also discuss two datasets that facilitate benchmarking of several problems in robotics. Finally, we discuss a framework for defining and using cloud-based robotic services. The first problem setting is object instance recognition. We present an instance recognition system which uses a library of high-fidelity object models of textured household objects. The system can handle occlusions, illumination changes, multiple objects, and multiple instances of the same object.
  • Cloud-Based Methods and Architectures for Robot Grasping
    Cloud-based Methods and Architectures for Robot Grasping by Benjamin Robert Kehoe. A dissertation submitted in partial satisfaction of the requirements for the degree of Doctor of Philosophy in Engineering - Mechanical Engineering in the Graduate Division of the University of California, Berkeley. Committee in charge: Professor Ken Goldberg, Co-chair; Professor J. Karl Hedrick, Co-chair; Associate Professor Pieter Abbeel; Associate Professor Francesco Borrelli. Fall 2014. Copyright 2014 by Benjamin Robert Kehoe. Abstract: The Cloud has the potential to enhance a broad range of robotics and automation systems. Cloud Robotics and Automation systems can be broadly defined as follows: Any robotic or automation system that relies on either data or code from a network to support its operation, i.e., where not all sensing, computation, and memory is integrated into a single standalone system. We identify four potential benefits of Cloud Robotics and Automation: 1) Big Data: access to remote libraries of images, maps, trajectories, and object data, 2) Cloud Computing: access to parallel grid computing on demand for statistical analysis, learning, and motion planning, 3) Collective Robot Learning: robots sharing trajectories, control policies, and outcomes, and 4) Human Computation: using crowdsourcing access to remote human expertise for analyzing images, classification, learning, and error recovery. We present four Cloud Robotics and Automation systems in this dissertation.
  • Networking for Cloud Robotics: The DewROS Platform and Its Application
    Journal of Sensor and Actuator Networks, Article. Networking for Cloud Robotics: The DewROS Platform and Its Application. Alessio Botta *, Jonathan Cacace, Riccardo De Vivo, Bruno Siciliano and Giorgio Ventre. Department of Electrical Engineering and Information Technology, University of Naples Federico II, Via Claudio 21, 80125 Naples, Italy; [email protected] (J.C.); [email protected] (R.D.V.); [email protected] (B.S.); [email protected] (G.V.); * Correspondence: [email protected]. Abstract: With the advances in networking technologies, robots can use the almost unlimited resources of large data centers, overcoming the severe limitations imposed by onboard resources: this is the vision of Cloud Robotics. In this context, we present DewROS, a framework based on the Robot Operating System (ROS) which embodies the three-layer Dew-Robotics architecture, where computation and storage can be distributed among the robot, the network devices close to it, and the Cloud. After presenting the design and implementation of DewROS, we show its application in a real use-case called SHERPA, which foresees a mixed ground and aerial robotic platform for search and rescue in an alpine environment. We used DewROS to analyze the video acquired by the drones in the Cloud and quickly spot signs of human beings in danger. We perform a wide experimental evaluation using different network technologies and Cloud services from Google and Amazon. We evaluated the impact of several variables on the performance of the system. Our results show that, for example, the video length has a minimal impact on the response time with respect to
  • Network Offloading Policies for Cloud Robotics
    Network Offloading Policies for Cloud Robotics: a Learning-based Approach. Sandeep Chinchali, Apoorva Sharma, James Harrison, Amine Elhafsi, Daniel Kang, Evgenya Pergament, Eyal Cidon, Sachin Katti, Marco Pavone. Departments of Computer Science, Electrical Engineering, Aeronautics and Astronautics, and Mechanical Engineering, Stanford University, Stanford, CA. {csandeep, apoorva, jharrison, amine, ddkang, evgenyap, ecidon, skatti, [email protected]. Abstract—Today's robotic systems are increasingly turning to computationally expensive models such as deep neural networks (DNNs) for tasks like localization, perception, planning, and object detection. However, resource-constrained robots, like low-power drones, often have insufficient on-board compute resources or power reserves to scalably run the most accurate, state-of-the-art neural network compute models. Cloud robotics allows mobile robots the benefit of offloading compute to centralized servers if they are uncertain locally or want to run more accurate, compute-intensive models. However, cloud robotics comes with a key, often understated cost: communicating with the cloud over congested wireless networks may result in latency or loss of data. In fact, sending high data-rate video or LIDAR from multiple robots over congested networks can lead to prohibitive delay for
  • Semantics for Robotic Mapping, Perception and Interaction: A Survey
    Full text available at: http://dx.doi.org/10.1561/2300000059. Semantics for Robotic Mapping, Perception and Interaction: A Survey. Sourav Garg, Niko Sünderhauf, Feras Dayoub, Douglas Morrison, Akansel Cosgun, Gustavo Carneiro, Qi Wu, Tat-Jun Chin, Ian Reid, Stephen Gould, Peter Corke and Michael Milford. Foundations and Trends® in Robotics, now Publishers Inc., Boston and Delft. The preferred citation for this publication is S. Garg, N. Sünderhauf, F. Dayoub, D. Morrison, A. Cosgun, G. Carneiro, Q. Wu, T.-J. Chin, I. Reid, S. Gould, P. Corke and M. Milford, "Semantics for Robotic Mapping, Perception and Interaction: A Survey".
  • RBR50 2018 Names the Leading Robotics Companies of the Year
    WHITEPAPER RBR50 2018 Names the Leading Robotics Companies of the Year TABLE OF CONTENTS THE PATH TO THE 2018 RBR50 EVALUATION CRITERIA IN THE WINNER’S CIRCLE AI + ROBOTS = GREATER UTILITY COMPONENTS DIFFERENTIATE ROBOTS FOR DEVELOPERS, USERS MANUFACTURING BUILDS ON ROBOT STRENGTHS SUPPLY CHAIN KEEPS ON TRUCKIN’ AUTONOMOUS VEHICLES GET READY TO HIT THE ROAD RETURNING FAVORITES AND NEWCOMERS REGIONAL ANALYSIS BIG DEALS FOR RBR50 COMPANIES THE 2018 RBR50 COMPANIES roboticsbusinessreview.com 3 RBR50 2018 NAMES THE LEADING ROBOTICS COMPANIES OF THE YEAR By Eugene Demaitre, Senior Editor, Robotics Business Review What does it take to be a robotics industry leader? Common ingredients include a novel technology, a strong understanding of customer needs, and an ecosystem of developers and integrators. Other factors for success include investor support, components that are improving in capability and price, and a growing market that has room for competition. End users expect systems that can perceive their surroundings; maneuver in dynamic environments; and interact with objects, one another, and humans for greater efficiency and productivity. From factories and warehouses to highways, hospitals, and the skies above, robots are becoming everyday tools to extend human capabilities. For seven years, the RBR50 list has been one of the most prestigious collections of industry leaders in robotics, artificial intelligence, and unmanned systems. We’ve researched multiple companies and their applications, reviewed numerous submissions, and identified this year’s top 50 companies worth following. THE PATH TO THE 2018 RBR50 This year, we created five categories: artificial intelligence, autonomous vehicles, components, manufacturing, and supply chain. They reflect the most active markets for automation.
  • Cloud Based Real-Time Multi-Robot Collision Avoidance for Swarm Robotics
    International Journal of Grid and Distributed Computing Vol. 9, No. 6 (2016), pp.339-358 http://dx.doi.org/10.14257/ijgdc.2016.9.6.30 Cloud based Real-time Multi-robot Collision Avoidance for Swarm Robotics Hengjing He1, Supun Kamburugamuve2, Geoffrey C. Fox2, Wei Zhao1 1. State Key Lab. of Power System, Dept. of Electrical Engineering, Tsinghua University, Beijing, China 2. School of Informatics and Computing and CGL, Indiana University, Bloomington, USA [email protected], [email protected], [email protected], [email protected] Abstract Nowadays, Cloud Computing has brought many new and efficient approaches for computation-intensive application areas. One typical area is Cloud based real-time device control system, such as the IoT Cloud Platform. This kind of platform shifts computation load from devices to the Cloud and provides powerful processing capabilities to a simple device. In Swarm robotics, robots are supposed to be small, energy efficient and low-cost, but still smart enough to carry out individual and swarm intelligence. These two goals are normally contradictory to each other. Besides, in real world robot control, real-time on-line data processing is required, but most of the current Cloud Robotic Systems are focusing on off-line batch processing. However, Cloud based real-time device control system may provide a way that leads this research area out of its dilemma. This paper explores the availability of Cloud based real-time control of massive complex robots by implementing a relatively complicated but better performed local collision avoidance algorithm. The Cloud based application and corresponding Cloud driver, which connects the robot and the Cloud, are developed and deployed in Cloud environment.
  • Survey on Cloud Based Robotics Architecture, Challenges and Applications
    Journal of Ubiquitous Computing and Communication Technologies (UCCT) (2020), Vol. 02, No. 01, Pages: 10-18. https://www.irojournals.com/jucct/ DOI: https://doi.org/10.36548/jucct.2020.1.002. Survey on Cloud Based Robotics Architecture, Challenges and Applications. Dr. Subarna Shakya, Professor, Department of Electronics and Computer Engineering, Central Campus, Institute of Engineering, Pulchowk, Tribhuvan University, Lalitpur, Nepal. Email: [email protected] Abstract: The emergence of cloud computing and other advanced technologies has made it possible to extend the computing and data-distribution capabilities of networked robots by developing a cloud-based robotic architecture that utilizes both a centralized and a decentralized cloud to manage machine-to-cloud and machine-to-machine communication, respectively. Incorporating the robotic system with the cloud makes it possible to design a cost-effective robotic architecture that enjoys enhanced efficiency and heightened real-time performance. This cloud-based robotics, designed by amalgamating robotics and cloud technologies, empowers web-enabled robots to access cloud services on the fly. The paper is a survey of cloud-based robotic architecture, explaining the forces that necessitate merging robotics with the cloud, its applications, and the major concerns and challenges encountered in cloud-integrated robotics. The paper aims to provide a detailed study of the changes influenced by cloud computing on industrial robots. Keywords: Robotics, Cloud Computing, Cloud Based Robotics, Big Data, Internet of Things, Open Source, Challenges and Applications
  • A Survey of Research on Cloud Robotics and Automation
    IEEE TRANSACTIONS ON AUTOMATION SCIENCE AND ENGINEERING. A Survey of Research on Cloud Robotics and Automation. Ben Kehoe, Student Member, IEEE; Sachin Patil, Member, IEEE; Pieter Abbeel, Senior Member, IEEE; Ken Goldberg, Fellow, IEEE. Abstract—The Cloud, an infrastructure and extensive set of Internet-accessible resources, has potential to provide significant benefits to robots and automation systems. This survey is organized around four potential benefits: 1) Big Data: access to remote libraries of images, maps, trajectories, and object data, 2) Cloud Computing: access to parallel grid computing on demand for statistical analysis, learning, and motion planning, 3) Collective Robot Learning: robots sharing trajectories, control policies, and outcomes, and 4) Human Computation: use of crowdsourcing to tap human skills for analyzing images and video, classification, learning, and error recovery. The Cloud can also improve robots and automation systems by providing access to a) datasets, publications, models, benchmarks, and simulation tools, b) open competitions for designs and systems, and c) open-source software. This survey includes over 150 references on results and open challenges. A website with new developments and updates is available at: http://goldberg.berkeley.edu/cloud-robotics/ Fig. 1: The Cloud has potential to enable a new generation of robots and automation systems to use wireless networking, big data, cloud computing, statistical machine learning, open-source, and other shared resources to improve performance in a wide variety of tasks such as assembly, caregiving, package delivery, driving, housekeeping, and surgery. Note to Practitioners—Most robots and automation systems still operate independently using onboard computation, memory, and programming.