A Survey of Behavior Learning Applications in Robotics - State of the Art and Perspectives


Alexander Fabisch (1), Christoph Petzoldt (2), Marc Otto (2), Frank Kirchner (1,2)

(1) German Research Center for Artificial Intelligence (DFKI GmbH), Robotics Innovation Center, Germany
(2) University of Bremen, Robotics Research Group, Germany

Journal Title XX(X):1-38, (c) The Author(s) 2019, DOI: 10.1177/ToBeAssigned
arXiv:1906.01868v1 [cs.RO] 5 Jun 2019

Corresponding author: Alexander Fabisch, German Research Center for Artificial Intelligence (DFKI GmbH), DFKI Bremen, Robotics Innovation Center, Robert-Hooke-Straße 1, D-28359 Bremen, Germany. Email: [email protected]

Abstract

Recent success of machine learning in many domains has been overwhelming, which often leads to false expectations regarding the capabilities of behavior learning in robotics. In this survey, we analyze the current state of machine learning for robotic behaviors. We give a broad overview of behaviors that have been learned and used on real robots. Our focus is on kinematically or sensorially complex robots. That includes humanoid robots or parts of humanoid robots, for example, legged robots or robotic arms. We classify the presented behaviors according to various categories, and we draw conclusions about what can be learned and what should be learned. Furthermore, we give an outlook on problems that are challenging today but might be solved by machine learning in the future, and we argue that classical robotics and other approaches from artificial intelligence should be integrated more with machine learning to form complete, autonomous systems.

Keywords

machine learning, reinforcement learning, imitation learning, behavior learning, robotic applications

Introduction

Machine learning and particularly deep learning (LeCun et al. 2015) made groundbreaking success possible in many domains, such as computer vision (Krizhevsky et al. 2012), speech recognition (Hinton et al. 2012), playing video games (Mnih et al. 2015), and playing Go (Silver et al. 2016). It is unquestionable that learning from data, experience, and observations is key to really adaptive and intelligent agents, virtual or physical. However, people are often susceptible to the fallacy that the state of the art in robotic control today heavily relies on machine learning. This is often not the case. An example for this is given by Irpan (2018): at the time of writing this paper, the humanoid robot Atlas from Boston Dynamics is one of the most impressive works in robot control. It is able to walk and run on irregular terrain, jump precisely with one or two legs, and even do a backflip (Boston Dynamics 2018). Irpan (2018) reports that people often assume that Atlas uses reinforcement learning. Publications from Boston Dynamics are sparse, but they do not include explanations of machine learning algorithms for control (Raibert et al. 2008; Nelson et al. 2012). Kuindersma et al. (2016) present their work with the robot Atlas, which includes state estimation and optimization methods for locomotion behavior. Robotic applications have demanding requirements on processing power, real-time computation, sample efficiency, and safety, which often makes the application of state-of-the-art machine learning for robot behavior learning difficult. Results in the area of machine learning are impressive, but they can lead to false expectations. This led us to the questions: what can and what should be learned?

Recent surveys of the field mostly focus on algorithmic aspects of machine learning (Billard et al. 2008; Argall et al. 2009; Kober et al. 2013; Kormushev et al. 2013; Tai and Liu 2016; Arulkumaran et al. 2017; Osa et al. 2018). In this survey, we take a broader perspective to analyze the state of the art in learning robotic behavior and explicitly do not focus on algorithms but on (mostly) real-world applications. We explicitly focus on applications with real robots, because it is much more demanding to integrate and learn behaviors in a complex robotic system operating in the real world. We give a very broad overview of considered behavior learning problems on real robotic systems. We categorize problems and solutions, analyze problem characteristics, and point out where and why machine learning is useful.

This article is structured as follows. We first present a detailed summary of selected highlights that advanced the state of the art in robotic behavior learning. We proceed with definitions of behavior and related terms. We present categories to distinguish and classify behaviors before we give a broad overview of the state of the art in robotic behavior learning problems. We conclude with a discussion of our findings and an outlook.

Selected Highlights

Among all the publications that we discuss here, we selected some highlights that we found to be relevant extensions of the repertoire of robotic behavior learning problems that can be solved. We briefly summarize these behavior learning problems and their solutions individually before we enter the discussion of the whole field from a broader perspective. We find it crucial to understand the algorithmic development and technical challenges in the field. It also gives a good impression of the current state of the art. Later in this article, we make a distinction whether the perception or the action part of these behaviors has been learned (see Figure 1).

An early work that combines behavior learning and robotics has been published by Kirchner (1997). A goal-directed walking behavior has been learned for the six-legged walking machine SIR ARTHUR, which has 16 degrees of freedom (DOF) and four light sensors. The behavior has been learned on three levels: (i) bottom: elementary swing and stance movements of individual legs are learned first; (ii) middle: these elementary actions are then used and activated in a temporal sequence to perform more complex behaviors like a forward movement of the whole robot; (iii) top: a goal-achieving behavior in a given environment with external stimuli. The top-level behavior was able to make use of the light sensors to find a source of maximum light intensity. Reinforcement learning, a hierarchical version of Q-learning (Watkins 1989), has been used to learn the behavior. On the lowest level, individual reward functions have been defined for lifting the leg up, moving the leg to the ground, moving the leg backward in stance, and swinging the leg forward.

Peters et al. (2005) presented an algorithmic milestone in reinforcement learning for robotic systems. They used a robot arm with seven degrees of freedom to play tee-ball, a simplified version of baseball in which the ball is placed on a flexible shaft. Their solution combines imitation learning through kinesthetic teaching with dynamical movement primitives (DMPs) and policy search, an approach that has been used in many following works. In their work, Peters et al. (2005) used the natural actor-critic (NAC) for policy search. The goal was to hit the ball so that it flies as far as possible. The reward for policy search included a term that penalizes squared accelerations and a term that rewards the distance. The distance is obtained from an estimated trajectory computed from trajectory samples that are measured with a vision system. A coupling is learned to mitigate the influence of minor perturbations of the end-effector that can have significant influence on the ball trajectory. A successful behavior is learned after 75 episodes.

The problem of flipping a pancake with a pan has been solved by Kormushev et al. (2010b) with the same methods: a controller that is very similar to a DMP is initialized from kinesthetic teaching and refined with PoWER. The behavior has been learned with a torque-controlled Barrett WAM arm with 7 DOF. The artificial pancake weighs only 26 grams, which makes its motion less predictable because it is susceptible to the influence of air flow. For refinement, a complex reward function has been designed that takes into account the trajectory of the pancake (flipping and catching), which is measured with a marker-based motion capture system. After 50 episodes, the first successful catch was recorded. A remarkable finding is that the learned behavior includes a useful aspect that has not directly been encoded in the reward function: it made a compliant vertical movement for catching the pancake, which decreases the chance of the pancake bouncing off the surface of the pan.

Table tennis with a Barrett WAM arm has been learned by Mülling et al. (2011, 2013). Particularly challenging is the advanced perception and state estimation problem. In comparison to previous work, behaviors have to take an estimate of the future ball trajectory into account when generating movements that determine where, when, and how the robot hits the ball. A vision system has been used to track the ball at 60 Hz. The ball position is tracked with an extended Kalman filter, and ball trajectories are predicted with a simplified model that neglects the spin of the ball. 25 striking movements have been learned from kinesthetic teaching to form a library of movement primitives. A modified DMP version that allows setting a final velocity as a meta-parameter has been used to represent the demonstrations. Desired position, velocity, and orientation of the racket are computed analytically for an estimated ball trajectory and a given target on the opponent's court, and they are given as meta-parameters to the modified DMP. In addition, based on these task parameters, a weighted average of known striking movements is computed by a gating network. This method is called mixture of movement primitives. The reward function encourages minimization of the distance between the desired goal on the opponent's court and the actual point where the ball hits the table. In the final
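Kirchner's hierarchical Q-learning is not spelled out in this summary, but its lowest level builds on the standard tabular update of Watkins (1989). The following is a minimal sketch on a hypothetical one-dimensional joint-positioning task; the states, actions, reward values, and hyperparameters are illustrative, not taken from the paper.

```python
import random

random.seed(0)

def q_learning(states, actions, reward, transition, goal,
               episodes=500, alpha=0.1, gamma=0.9, epsilon=0.1):
    """Tabular Q-learning (Watkins 1989): learn Q(s, a) from interaction."""
    Q = {(s, a): 0.0 for s in states for a in actions}
    for _ in range(episodes):
        s = states[0]
        for _ in range(50):  # step limit per episode
            # epsilon-greedy action selection
            if random.random() < epsilon:
                a = random.choice(actions)
            else:
                a = max(actions, key=lambda a_: Q[(s, a_)])
            s_next = transition(s, a)
            # temporal-difference update toward r + gamma * max_a' Q(s', a')
            best_next = max(Q[(s_next, a_)] for a_ in actions)
            Q[(s, a)] += alpha * (reward(s_next) + gamma * best_next - Q[(s, a)])
            s = s_next
            if s == goal:
                break
    return Q

# Hypothetical elementary behavior: drive a discretized joint from state 0 to 4.
states = list(range(5))
actions = [-1, +1]
Q = q_learning(states, actions,
               reward=lambda s2: 1.0 if s2 == 4 else -0.01,
               transition=lambda s, a: min(4, max(0, s + a)),
               goal=4)
policy = {s: max(actions, key=lambda a: Q[(s, a)]) for s in states}
```

On the real system, each of the four elementary leg movements got its own reward function of this kind; the hierarchy then learns when to activate the resulting policies.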
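Several of the highlights above (Peters et al. 2005; Mülling et al. 2011, 2013) represent movements as dynamical movement primitives. The sketch below integrates the spring-damper transformation system at the core of a discrete DMP; the demonstration-dependent forcing term is left at zero and the gains are common textbook defaults, so it only illustrates how the goal g acts as a meta-parameter that the rollout converges to.

```python
def dmp_rollout(y0, g, tau=1.0, dt=0.01, alpha=25.0, forcing=lambda x: 0.0):
    """Integrate a discrete DMP:
         tau * v' = alpha * (beta * (g - y) - v) + f(x)
         tau * y' = v
         tau * x' = -alpha_x * x   (canonical phase, decays from 1 toward 0)
    """
    beta = alpha / 4.0   # critically damped spring-damper
    alpha_x = 3.0        # canonical system decay rate
    y, v, x = y0, 0.0, 1.0
    trajectory = [y]
    for _ in range(int(2.0 * tau / dt)):  # simulate for 2 * tau seconds
        v += dt * (alpha * (beta * (g - y) - v) + forcing(x)) / tau
        y += dt * v / tau
        x += dt * (-alpha_x * x) / tau
        trajectory.append(y)
    return trajectory

# Changing the goal meta-parameter reshapes the whole movement.
traj = dmp_rollout(y0=0.0, g=1.0)
```

With a learned forcing term f(x), the rollout reproduces the demonstrated shape instead of a plain exponential approach; the table tennis work further modifies the DMP so that a final velocity can be set as an additional meta-parameter.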
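The ball trajectory prediction described for the table tennis system uses a simplified model that neglects spin. A toy version of such a ballistic prediction can be sketched as follows; the hitting-plane height, time step, and initial ball state are hypothetical, not values from the paper.

```python
def predict_ball(p0, v0, g=(0.0, 0.0, -9.81), dt=0.005, z_hit=0.8, t_max=2.0):
    """Forward-simulate a ballistic ball model to find where and when the ball
    crosses a hypothetical hitting plane z = z_hit on its way down. Spin and
    air drag are neglected, as in the simplified model described above."""
    p, v, t = list(p0), list(v0), 0.0
    while t < t_max:
        for i in range(3):       # semi-implicit Euler step
            v[i] += g[i] * dt
            p[i] += v[i] * dt
        t += dt
        if v[2] < 0.0 and p[2] <= z_hit:
            return p, t          # predicted hitting point and time
    return None                  # ball never reaches the plane

# Ball released 2 m high with 1 m/s horizontal velocity (a hypothetical state
# as it might come from the ball tracker).
hit = predict_ball(p0=(0.0, 0.0, 2.0), v0=(1.0, 0.0, 0.0))
```

On the real system, the initial state would come from the extended Kalman filter; the predicted hitting point and time then determine where, when, and how the racket meets the ball.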
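The mixture of movement primitives blends a library of demonstrated striking movements with weights produced by a gating network. The gating network of Mülling et al. is learned; in the sketch below, a hypothetical Gaussian kernel on a one-dimensional task parameter stands in for it, and the primitive "parameters" are plain weight vectors.

```python
import math

def mix_primitives(task_param, library, bandwidth=1.0):
    """Blend primitive parameter vectors with normalized gating weights.
    `library` holds (task_param_of_demo, parameter_vector) pairs."""
    scores = [math.exp(-(task_param - p) ** 2 / (2.0 * bandwidth ** 2))
              for p, _ in library]
    z = sum(scores)
    gating = [s / z for s in scores]  # responsibilities, sum to 1
    dim = len(library[0][1])
    return [sum(g * params[i] for g, (_, params) in zip(gating, library))
            for i in range(dim)]

# Three demonstrated striking movements, indexed by a 1-D task parameter
# (e.g., the predicted hitting height); the vectors are illustrative weights.
library = [(0.0, [1.0, 0.0]), (1.0, [0.0, 1.0]), (2.0, [-1.0, 0.5])]
blended = mix_primitives(0.5, library)
```

A query between two known task parameters yields an interpolated movement rather than a hard switch to the nearest demonstration, which is the point of the weighted average.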