Robonaut Mobile Autonomy: Initial Experiments
Total pages: 16
File type: PDF, size: 1020 KB
Recommended publications
- Artificial Intelligence: Soon to Be the World's Greatest Intelligence, or Just a Wild Dream?
  Edward R. Kollett, Johnson & Wales University - Providence. Academic Symposium of Undergraduate Scholarship, ScholarsArchive@JWU, 3-22-2010. https://scholarsarchive.jwu.edu/ac_symposium/3
  Artificial Intelligence is a term, coined by John McCarthy in 1956, that "in its broadest sense would indicate the ability of an artifact to perform the same kinds of functions that characterize human thought processes" ("Artificial Intelligence"). Typically, it is used today to refer to a computer program that is trying to simulate the human brain, or at least parts of it. Attempts to recreate the human brain have been a goal of mankind for centuries, but only recently, as computers have become as powerful as they are now, does the goal of a fully automated robot with human intelligence and emotional capabilities seem to be within reach.
- Remote and Autonomous Controlled Robotic Car Based on Arduino with Real Time Obstacle Detection and Avoidance
  Esra Yılmaz, Sibel T. Özyer, Department of Computer Engineering, Çankaya University, Turkey. Universal Journal of Engineering Science 7(1): 1-7, 2019. DOI: 10.13189/ujes.2019.070101. Open access under the Creative Commons Attribution License 4.0.
  Abstract: In robotic cars, real-time obstacle detection and obstacle avoidance are significant issues. In this study, the design and implementation of a robotic car are presented with regard to hardware, software and communication environments, with real-time obstacle detection and avoidance. The Arduino platform, an Android application and Bluetooth technology have been used to implement the system. The paper presents a robotic car design and application using sensor programming on the platform; the device has been developed to interact with an Android-based device.
  The environment can be any physical environment such as military areas, airports, factories, hospitals and shopping malls, and the electronic devices can be smartphones, robots, tablets and smart clocks. These devices have a wide range of applications for control, protection, imaging and identification in industrial processes. Today there are hundreds of sensor types, produced as technology has developed, such as heat, pressure, obstacle-recognition and human-detection sensors. Sensors were used for lighting purposes in the past, but now they are used to make life easier. Thanks to technology in the field of electronics, incredibly …
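The obstacle-avoidance behaviour described above largely reduces to thresholding a range reading into a drive command. A minimal sketch of that decision logic, with an assumed ultrasonic distance input and invented threshold values (not taken from the paper):

```python
# Sketch of real-time obstacle-avoidance decision logic for a small
# robotic car. The threshold and the `decide` interface are illustrative
# assumptions, not the paper's implementation.

SAFE_DISTANCE_CM = 30.0  # assumed stop/turn threshold

def decide(distance_cm: float) -> str:
    """Map an ultrasonic range reading to a drive command."""
    if distance_cm < SAFE_DISTANCE_CM / 2:
        return "reverse"   # dangerously close: back away
    if distance_cm < SAFE_DISTANCE_CM:
        return "turn"      # obstacle ahead: steer around it
    return "forward"       # path clear: keep driving

# A control loop would poll the sensor and act on each reading:
readings = [120.0, 45.0, 25.0, 10.0]
commands = [decide(r) for r in readings]
```

On real hardware the same logic would sit inside the Arduino loop, with the Bluetooth link overriding it when the Android app sends a manual command.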
- AUV Adaptive Sampling Methods: A Review
  Jimin Hwang (Australian Maritime College, University of Tasmania), Neil Bose (Memorial University of Newfoundland), Shuangshuang Fan (Sun Yat-sen University). Applied Sciences; received 16 July 2019, accepted 29 July 2019, published 2 August 2019.
  Abstract: Autonomous underwater vehicles (AUVs) are unmanned marine robots that have been used for a broad range of oceanographic missions. They are programmed to perform at various levels of autonomy, including autonomous behaviours and intelligent behaviours. Adaptive sampling is one class of intelligent behaviour that allows the vehicle to autonomously make decisions during a mission in response to environment changes and vehicle state changes. Having a closed-loop control architecture, an AUV can perceive the environment, interpret the data and take follow-up measures. Thus, the mission plan can be modified, sampling criteria can be adjusted, and target features can be traced. This paper presents an overview of existing adaptive sampling techniques, including adaptive mission uses and underlying methods for perception, interpretation and reaction to underwater phenomena in AUV operations, and discusses the potential for future research in adaptive missions.
  Keywords: autonomous underwater vehicle(s); maritime robotics; adaptive sampling; underwater feature tracking; in-situ sensors; sensor fusion
  Owing to their mobility and increased ability to accommodate sensors, AUVs have been used for missions such as surveying underwater plumes and other phenomena, collecting bathymetric data and tracking oceanographic dynamic features.
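The closed-loop perceive-interpret-react behaviour the review describes can be made concrete as a rule that adapts survey line spacing to a sensed field gradient. All names, thresholds and bounds below are illustrative assumptions, not a method from the review:

```python
# Toy adaptive-sampling rule: densify track spacing where the sensed
# gradient is strong (a feature is present), relax it where the field is
# flat. Thresholds and bounds are invented for illustration.

def next_spacing(spacing_m: float, gradient: float,
                 high: float = 0.1, low: float = 0.02,
                 factor: float = 2.0) -> float:
    """Return the survey line spacing for the next leg of the mission."""
    if abs(gradient) > high:
        return max(spacing_m / factor, 5.0)    # feature detected: sample finer
    if abs(gradient) < low:
        return min(spacing_m * factor, 200.0)  # flat field: save mission time
    return spacing_m                            # ambiguous: keep current plan
```

In a real mission planner this rule would run inside the vehicle's control loop, replanning waypoints each time the spacing changes.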
- Development of an Open Humanoid Robot Platform for Research and Autonomous Soccer Playing
  Karl J. Muecke and Dennis W. Hong, RoMeLa: Robotics and Mechanisms Lab, Virginia Tech, Blacksburg, VA 24061.
  Abstract: This paper describes the development of a fully autonomous humanoid robot for locomotion research and as the first US entry into RoboCup. DARwIn (Dynamic Anthropomorphic Robot with Intelligence) is a humanoid robot capable of bipedal walking and performing human-like motions. As the years have progressed, DARwIn has evolved from a concept to a sophisticated robot platform. DARwIn 0 was a feasibility study that investigated the possibility of making a humanoid robot walk. Its successor, DARwIn I, was a design study that investigated how to create a humanoid robot with human proportions, range of motion, and kinematic structure. DARwIn IIa built on the name "humanoid" by adding autonomy. DARwIn IIb improved on its predecessor by adding more powerful actuators and modular computing components. Finally, DARwIn III is designed to take the best of all the designs and incorporate the robot's most advanced motion control yet.
  Five versions of DARwIn have been developed, each an improvement on its predecessor. The first version, DARwIn 0 (Figure 1a), was used as a design study to determine the feasibility of creating a humanoid robot and for actuator evaluation. The second version, DARwIn I (Figure 1b), used improved gaits and software.
- Robonaut 2 Fact Sheet
  National Aeronautics and Space Administration.
  Almost 200 people from 15 countries have visited the International Space Station, but the orbiting complex has only had human crew members – until now. Robonaut 2, the latest generation of the Robonaut astronaut helpers, launched to the space station aboard space shuttle Discovery on the STS-133 mission in February 2011. It is the first humanoid robot in space, and although its primary job for now is demonstrating to engineers how dexterous robots behave in space, the hope is that, through upgrades and advancements, it could one day venture outside the station to help spacewalkers make repairs or additions to the station or perform scientific work. R2, as the robot is called, was unpacked in April and powered up for the first time in August. Though it is currently being tested inside the Destiny laboratory, over time both its territory and its applications could expand. Initial tasks identified for R2 include velocity air measurements and handrail cleaning, both of which are simple but necessary tasks that require a great deal of crew time. R2 also has a taskboard on which to practice flipping switches and pushing buttons. Over time, the robot should graduate to more complex tasks. There are no plans to return R2 to Earth.
  History: Work on the first Robonaut began in 1997. The idea was to build a humanoid robot that could assist astronauts on tasks in which another pair of hands would be helpful or to venture forth to perform jobs either too dangerous for crew members to risk or too mundane for them to spend time on.
- Talk with a Robot
  With its small, portable, travel size, kids and adults will love to bring it around to play with. Type a custom snippet or try one of the examples. Though it was an improvement over the depressing white/black, its charm wore out pretty quickly. © 2014 Steve Worswick. "When the robot was active, people tended to respond and give feedback to whatever the robot was doing, saying 'Wow!', 'Good job.'" Typically, a chat bot communicates with a real person, but applications are being developed in which two chat bots can communicate with each other. Human Robot Intelligent That Can Talk Dance Sing Watch Home Smart Humanoid Robot For Kids Education, from toy robot supplier/manufacturer Shenzhen Yuanhexuan Industrial Co. Another communication method is XML-RPC, which stands for XML-formatted Remote Procedure Call. The bots have hammers attached to micro servos that they use to hit targets on the other robot. Two human look-alike robots invented by Japanese engineers. The site Cleverbot. Choose a material for your robot. In this week's Tech Talk podcast, Brian Stelter discusses recent hacks on major Web sites, and the author of a new book on robots discusses what is to come.
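Since the excerpt mentions XML-RPC as a robot communication method, here is a self-contained sketch using Python's standard-library xmlrpc modules; the `say` procedure is an invented example, not part of any product mentioned above:

```python
# Minimal XML-RPC round trip: a server exposing one remote procedure,
# called from a client in the same process. The procedure name and
# greeting are illustrative assumptions.
from threading import Thread
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

# Bind to an ephemeral port so the example never collides with another service.
server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
server.register_function(lambda name: f"Hello, {name}!", "say")
port = server.server_address[1]
Thread(target=server.serve_forever, daemon=True).start()

# The client marshals the call and its arguments as XML over HTTP.
client = ServerProxy(f"http://127.0.0.1:{port}")
reply = client.say("robot")
server.shutdown()
```

The same pattern lets two processes (or two robots on one network) invoke each other's procedures without sharing any code beyond the procedure names.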
- Robonaut 2 – Operations on the International Space Station
  Ron Diftler, Robonaut Project Lead, Software, Robotics and Simulation Division, NASA/Johnson Space Center. 2/14/2013.
  Overview: Robonaut motivation; GM relationship; Robonaut evolution; Robonaut 2 (R2) capabilities; preparing for ISS; journey to space; on board ISS; future activities; spinoffs.
  Robonaut motivation: a capable tool for crew before, during and after activities; shares EVA tools and workspaces through a human-like design; increases IVA and EVA efficiency (worksite setup/tear-down, robotic assistant, contingency roles); surface operations (near-Earth objects, Moon/Mars); interplanetary vehicles; telescopes. Astronaut Nancy Currie works with two Robonauts to build a truss structure during an experiment.
  Robonaut development history:
  - 1998: subsystem development; testing of hand mechanism
  - 1999 (Fall 1998): single-arm integration; testing with teleoperator
  - 2000 (Fall 1999): dual-arm integration; testing with dual-arm control
  - 2001 (Fall 2000): waist and vision integration; testing under autonomous control
  - 2002 (Fall 2001): R1A testing of autonomous learning; R1B integration
  - 2003 (Fall 2002): R1A testing of a multi-agent EVA team; R1B Segwanaut integration
  - 2004 (Fall 2003): R1A autonomous manipulation; R1B 0g air-bearing development
  - 2005 (Fall 2004): DTO flight audit; begin development of R1C
  - 2006 (Fall 2006): Centaur base; coordinated field demonstration
  GM's motivation: Why did GM originally come to us? • World
- Ph.D. Thesis: Stable Locomotion of Humanoid Robots Based on Mass Concentrated Model
  Author: Mario Ricardo Arbulú Saavedra. Director: Carlos Balaguer Bernaldo de Quiros, Ph.D. Department of System and Automation Engineering, Leganés, October 2008.
  Contents:
  1 Introduction
  1.1 History of robots: 1.1.1 Industrial robots story; 1.1.2 Service robots; 1.1.3 Science fiction and robots currently
  1.2 Walking robots: 1.2.1 Outline; 1.2.2 Themes of legged robots; 1.2.3 Alternative mechanisms of locomotion: wheeled robots, tracked robots, active cords
  1.3 Why study legged machines?
  1.4 What control mechanisms do humans and animals use?
  1.5 What are problems of biped control?
  1.6 Features and applications of humanoid robots with biped locomotion
  1.7 Objectives
  1.8 Thesis contents
  2 Humanoid robots
  2.1 Human evolution to biped locomotion, intelligence and bipedalism
  2.2 Types of researches on humanoid robots
  2.3 Main humanoid robot research projects: 2.3.1 The Humanoid Robot at Waseda University; 2.3.2 Honda robots; 2.3.3 The HRP project
  2.4 Other humanoids: 2.4.1 The Johnnie project; 2.4.2 The Robonaut project; 2.4.3 The COG project …
- Learning Preference Models for Autonomous Mobile Robots in Complex Domains
  David Silver, CMU-RI-TR-10-41, December 2010. Robotics Institute, Carnegie Mellon University, Pittsburgh, Pennsylvania, 15213. Thesis committee: Tony Stentz (chair), Drew Bagnell, David Wettergreen, Larry Matthies. Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Robotics. Copyright © 2010. All rights reserved.
  Abstract: Achieving robust and reliable autonomous operation even in complex unstructured environments is a central goal of field robotics. As the environments and scenarios to which robots are applied have continued to grow in complexity, so has the challenge of properly defining preferences and tradeoffs between various actions and the terrains they result in traversing. These definitions and parameters encode the desired behavior of the robot; therefore their correctness is of the utmost importance. Current manual approaches to creating and adjusting these preference models and cost functions have proven to be incredibly tedious and time-consuming, while typically not producing optimal results except in the simplest of circumstances. This thesis presents the development and application of machine learning techniques that automate the construction and tuning of preference models within complex mobile robotic systems. Utilizing the framework of inverse optimal control, expert examples of robot behavior can be used to construct models that generalize demonstrated preferences and reproduce similar behavior. Novel learning from demonstration approaches are developed that offer the possibility of significantly reducing the amount of human interaction necessary to tune a system, while also improving its final performance. Techniques to account for the inevitability of noisy and imperfect demonstration are presented, along with additional methods for improving the efficiency of expert demonstration and feedback.
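The inverse-optimal-control idea in the abstract can be illustrated with a toy perceptron-style update that raises the cost of terrain features the planner over-uses relative to an expert demonstration, so the expert's path gradually becomes the minimum-cost one. The feature vectors and learning rate are invented; this sketches the general technique, not the thesis's algorithm:

```python
# Toy inverse-optimal-control weight update. Path cost is w . f, where f
# counts terrain features traversed. If the planner's favorite path uses
# different features than the expert's demonstration, shift the weights
# to penalize the planner's choice. All numbers are illustrative.

def update_weights(w, expert_feats, planner_feats, lr=0.1):
    """Raise cost on features the planner over-uses relative to the expert."""
    return [wi + lr * (p - e) for wi, e, p in zip(w, expert_feats, planner_feats)]

# Expert favors terrain 2 (e.g., road); planner currently favors terrain 1 (grass).
w = update_weights([1.0, 1.0], expert_feats=[0.2, 0.8], planner_feats=[0.9, 0.1])
```

Iterating this update with a real planner in the loop is the essence of maximum-margin-style learning from demonstration.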
- Temporal Range Registration for Unmanned Ground and Aerial Vehicles
  R. Madhavan, T. Hong and E. Messina, Intelligent Systems Division, National Institute of Standards and Technology, Gaithersburg, MD 20899-8230, USA. Journal of Intelligent and Robotic Systems, Springer, 2005. DOI: 10.1007/s10846-005-9025-1. (Received: 24 March 2004; in final form: 2 September 2005.)
  Abstract: An iterative temporal registration algorithm is presented in this article for registering 3D range images obtained from unmanned ground and aerial vehicles traversing unstructured environments. We are primarily motivated by the development of 3D registration algorithms to overcome both the unavailability and unreliability of Global Positioning System (GPS) within required accuracy bounds for Unmanned Ground Vehicle (UGV) navigation. After suitable modifications to the well-known Iterative Closest Point (ICP) algorithm, the modified algorithm is shown to be robust to outliers and false matches during the registration of successive range images obtained from a scanning LADAR rangefinder on the UGV. Towards registering LADAR images from the UGV with those from an Unmanned Aerial Vehicle (UAV) that flies over the terrain being traversed, we then propose a hybrid registration approach. In this approach to air-to-ground registration to estimate and update the position of the UGV, we register range data from two LADARs by combining a feature-based method with the aforementioned modified ICP algorithm. Registration of range data guarantees an estimate of the vehicle's position even when only one of the vehicles has GPS information.
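As a rough illustration of the iterate-correspond-align loop behind ICP, here is a translation-only 2D variant; it omits the rotation estimate and the outlier rejection that the paper's modified algorithm adds, and every name in it is an assumption for illustration:

```python
# Stripped-down ICP flavor: repeatedly (1) match each source point to its
# nearest target point and (2) shift the source by the mean residual.
# Real ICP also solves for rotation and rejects false matches.

def nearest(p, pts):
    """Return the point in `pts` closest to `p` (squared Euclidean)."""
    return min(pts, key=lambda q: (q[0] - p[0]) ** 2 + (q[1] - p[1]) ** 2)

def icp_translation(src, tgt, iters=10):
    """Estimate the 2D translation aligning `src` onto `tgt`."""
    tx = ty = 0.0
    for _ in range(iters):
        moved = [(x + tx, y + ty) for x, y in src]
        pairs = [(p, nearest(p, tgt)) for p in moved]   # correspondences
        tx += sum(q[0] - p[0] for p, q in pairs) / len(pairs)
        ty += sum(q[1] - p[1] for p, q in pairs) / len(pairs)
    return tx, ty

# Recover a known shift of (2, 3) between two tiny point sets.
square = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
shifted = [(x + 2.0, y + 3.0) for x, y in square]
tx, ty = icp_translation(square, shifted)
```

Even this toy version shows why initial misalignment matters: the first round of nearest-neighbour matches is wrong, and the loop only converges because each shift improves the next round of correspondences.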
- Dynamic Reconfiguration of Mission Parameters in Underwater Human-Robot Collaboration
  Md Jahidul Islam, Marc Ho, and Junaed Sattar.
  Abstract: This paper presents a real-time programming and parameter reconfiguration method for autonomous underwater robots in human-robot collaborative tasks. Using a set of intuitive and meaningful hand gestures, we develop a syntactically simple framework that is computationally more efficient than a complex, grammar-based approach. In the proposed framework, a convolutional neural network is trained to provide accurate hand gesture recognition; subsequently, a finite-state machine-based deterministic model performs efficient gesture-to-instruction mapping and further improves robustness of the interaction scheme. The key aspect of this framework is that it can be easily adopted by divers for communicating simple instructions to underwater robots without using artificial tags such as fiducial markers or requiring memorization of a potentially complex set of language rules. Extensive experiments are performed both on field-trial data and through simulation, which demonstrate the robustness, efficiency, and portability of …
  In the underwater domain, what would otherwise be straightforward deployments in terrestrial settings often become extremely complex undertakings for underwater robots, which require close human supervision. Since Wi-Fi or radio (i.e., electromagnetic) communication is not available or severely degraded underwater [7], such methods cannot be used to instruct an AUV to dynamically reconfigure command parameters. The current task thus needs to be interrupted, and the robot needs to be brought to the surface in order to reconfigure its parameters. This is inconvenient and often expensive in terms of time and physical resources. Therefore, triggering parameter changes based on human input while the robot is underwater, without requiring a trip to the surface, is a simpler and more efficient alternative approach.
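The gesture-to-instruction mapping described in the abstract can be pictured as a small deterministic finite-state machine over recognized gesture tokens. The gesture vocabulary, transition table and instruction below are invented for illustration; the paper's actual instruction language is not reproduced here:

```python
# Toy deterministic FSM for gesture-to-instruction mapping: each
# recognized gesture token advances the state; reaching a terminal state
# emits an instruction, and any unknown transition rejects the sequence.
# Vocabulary and instructions are illustrative assumptions.

TRANSITIONS = {
    ("start", "left_palm"):    "param_select",
    ("param_select", "two"):   "depth_selected",
    ("depth_selected", "ok"):  "execute",
}

INSTRUCTIONS = {"execute": "set depth parameter to option 2"}

def run_fsm(gestures):
    """Consume gesture tokens; return the mapped instruction or None."""
    state = "start"
    for g in gestures:
        state = TRANSITIONS.get((state, g))
        if state is None:
            return None  # unrecognized sequence: reject safely
    return INSTRUCTIONS.get(state)
```

Because every transition is explicit, a misrecognized gesture can only reject a command, never silently map to a different one, which is the robustness property the abstract attributes to the deterministic model.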
- Robots Are Becoming Autonomous
  Stefan Schaal, Max Planck Institute for Intelligent Systems, Tübingen. Forschungsausblick 02 / Research Outlook.
  Towards the end of the 1990s Japan launched a globally unique wave of funding of humanoid robots. The motivation was clearly formulated: demographic trends with rising life expectancy and stagnating or declining population sizes in highly developed countries will create problems in the foreseeable future, in particular with regard to the care of the elderly due to a relatively sharp fall in the number of younger people able to help the elderly cope with everyday activities. Autonomous robots are a possible solution, and automotive companies like Honda and Toyota have invested billions in a potential new industrial sector. Today, some 15 years down the line, the challenge of "ageing societies" has not changed, and Europe and the US have also identified and singled out this very problem. But new challenges have also emerged. Earthquakes and floods cause unimaginable damage in densely populated areas.
  … or carry out complex manipulations are still in the research stage. Research into humanoid robots and assistive robots is being pursued around the world. Problems such as the complexity of perception, effective control without endangering the environment and a lack of learning aptitude and adaptability continue to confront researchers with daunting challenges for the future. Thus, an understanding of autonomous systems remains essentially a topic of basic research.
  AUTONOMOUS ROBOTICS: PERCEPTION–ACTION–LEARNING. Autonomous systems can be generally characterised as perception-action-learning systems. Such systems should be able to perform useful tasks for extended periods autonomously, meaning without external assistance. In robotics, systems have to perform physical tasks and thus have to …
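The perception-action-learning characterisation above can be made concrete with a toy agent that senses a value, acts to reduce error, and adapts its gain from feedback. Every name and number here is an illustrative assumption, not a system from the article:

```python
# Toy perception-action-learning loop: the agent perceives a scalar from
# the world, acts to drive it toward a setpoint, and adapts its gain
# while error persists. Purely illustrative numbers throughout.

class PALAgent:
    def __init__(self, gain: float = 0.1):
        self.gain = gain

    def perceive(self, world: dict) -> float:
        return world["temperature"]

    def act(self, reading: float, setpoint: float = 20.0) -> float:
        return self.gain * (setpoint - reading)  # corrective action

    def learn(self, error: float) -> None:
        # crude adaptation: act more aggressively while error persists,
        # capped so the loop stays stable
        self.gain = min(self.gain * (1.0 + abs(error) * 0.01), 1.0)

world = {"temperature": 10.0}
agent = PALAgent()
for _ in range(50):
    reading = agent.perceive(world)
    world["temperature"] += agent.act(reading)
    agent.learn(20.0 - world["temperature"])
```

The point of the sketch is the closed loop itself: perception feeds action, and the outcome of action feeds learning, with no external operator in the cycle.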