Unmanned Ground Vehicle for Military Purpose

Total Pages: 16

File Type: pdf, Size: 1020 KB

Unmanned Ground Vehicle for Military Purpose
International Journal of Pure and Applied Mathematics, Volume 119, No. 12, 2018, 13189-13193
ISSN: 1314-3395 (on-line version)
URL: http://www.ijpam.eu (Special Issue)

UNMANNED GROUND VEHICLE FOR MILITARY PURPOSE

SAINADH JASTHI, Electronics & Communication Engineering, SRM Institute of Science and Technology, Kattankulathur. Email: [email protected]
Mrs. PONNAMMAL P, Assistant Professor, Department of ECE, SRM Institute of Science and Technology, Kattankulathur. Email: [email protected]
AKHIL CHERUKURI, Electronics & Communication Engineering, SRM Institute of Science and Technology, Kattankulathur. Email: [email protected]
ADITYA SHASHANK NETI, Electronics & Communication Engineering, SRM Institute of Science and Technology, Kattankulathur. Email: [email protected]

Abstract—An unmanned ground vehicle (UGV), also called a self-controlled robot, plays an important role in military operations, and the demand for such robots is on the rise. UGVs counter terrorism efficiently and can operate in distant fields. They support and augment soldiers on the battlefield. The capability of this robot lies mainly in protecting soldiers, or at least reducing the death toll in wars. Extensive research is carried out in major parts of the world to develop prototypes that work efficiently in military activities and counter-terrorism. The unmanned ground vehicle described here is effective for border patrol, surveillance and war combat. It works in two different modes: automatic (self) mode and manual mode, in coordination with humans. In self mode, the UGV is tasked with surveilling the border between two areas without human instructions. This is achieved by implementing GPS and a magnetic compass to measure turning angles and adjust the route while avoiding obstacles such as mountains, trees and rivers. This path-planning strategy is the key program that enables the robot to work in self-controlled mode. The complete working and setup are described in the paper.

Keywords—GPS (Global Positioning System), magnetic compass, self-controlled mode, unmanned ground vehicle.
INTRODUCTION

In the present scenario, the recent increase in terrorism has paved the way for the rapid development of self-controlled robots. This motivation has challenged research scholars to invent efficient robots for the battlefield. Self-controlled robots counter the emergency problems that present-day people and soldiers are facing. For example, they work hand in hand with soldiers in tracking terrorist activities and can also work with police to detect illegal activities occurring in the city and to find the culprits involved in thefts and other antisocial activities. Many countries are now aware of these serious threats and are investing in the development of self-controlled robots to safeguard citizens from terrorism.

To be more precise, an unmanned ground vehicle operates without human interference; artificial intelligence is therefore needed for the vehicle to take decisions on its own and navigate according to those decisions. This self-decision capability is achieved with the help of GPS coordinates and a magnetic compass, which are used to detect obstacles mid-journey and let the robot plan a new route by itself.

The unmanned ground vehicle is based on two existing robot technologies combined into one unit so that it works effectively. The first is the DRDO Daksh robot, which is remotely controlled to locate hazardous objects safely and destroy them; it uses an X-ray machine to locate metals and LEDs, and it can also climb staircases and scan vehicles for explosives. The second is the Foster-Miller TALON robot, which can navigate through sand and water and even climb trees. All these features are combined with self-controlled operation to create a multitasking robot that works effectively using path planning and obstacle detection. If it works as planned, the world can see a new era of robots, with a new defense system working for the betterment of society and making the earth a better place to live in.

SELF CONTROL MODE

An autonomous robot can work without human intervention. It gathers information about the surrounding conditions and operates accordingly using GPS coordinates. It frequently learns new knowledge about obstacle detection and how to avoid obstacles by planning a new path, using the magnetic compass to measure angles. In parallel, it adapts to the changing surroundings to accomplish new tasks. The block diagram for the self-controlled mode is shown in figure 1.

A). BLOCK DIAGRAM OF SELF CONTROL MODE

FIG 1: BLOCK DIAGRAM OF SELF CONTROL MODE

The specific function of each block in the figure is explained below.

1). COMMAND CONTROL STATION: The UGV is located away from the command control station and is controlled by keyboard and mouse for moving the vehicle, monitoring the live video feedback and switching between the autonomous and manual modes.

2). INTERNET: Acts as the medium for the flow of commands from the control station to the UGV, making it work wirelessly.

3). ON-BOARD SYSTEM: The computer in the UGV receives the commands from the control station and operates accordingly.

4). CAMERA: Records the video of the path travelled by the UGV and sends it to the control station for the users to access.

5). CONTROL UNIT: Usually the microcontroller that receives signals from the user and the other sensors and acts on them, for example UGV movement and turning angle, and obstacle detection.

6). GPS UNIT: Used in autonomous mode to obtain the source and destination positions between which the UGV has to travel and to plan the required path accordingly.

7). MAGNETIC COMPASS: Used to obtain the angle by which the UGV must turn towards the destination coordinates.

8). IR SENSORS: Infrared sensors used for detecting metal objects such as bombs and mines in the autonomous mode.

9). SERVO MOTOR: Controls the direction of the unmanned ground vehicle.

10). DC MOTOR: Mainly used to move the UGV from one place to another.

11). WIRELESS MODEM: A Raspberry Pi or Zigbee module provides wireless data transfer from the on-board system to the control station.

12). BATTERY: Powers the control unit and the Zigbee module.

HARDWARE COMPONENTS REQUIRED FOR UGV

Arduino microcontroller
Servo motor
DC motor
Zigbee radio modem
Magnetic compass
GPS system
Battery
Obstacle detector
Water detector
Human sensor
Camera
IR sensor

FIG 2: FLOW CHART OF UNMANNED CONTROL VEHICLE
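The blocks and components listed above map naturally onto a small Arduino control loop. The sketch below is only a minimal illustration of how such a control unit could be organised, not the authors' firmware: the pin assignments, the 9600-baud SoftwareSerial link to the Zigbee modem and the single-character command protocol ('F', 'S', 'L', 'R') are all assumptions made for the example.

```cpp
// Minimal Arduino control-unit sketch (illustrative; pins and protocol are assumed).
#include <Servo.h>
#include <SoftwareSerial.h>

const int IR_PIN = 2;           // IR obstacle detector output (assumed digital pin)
const int MOTOR_EN_PIN = 5;     // DC drive motor enable, PWM (assumed pin)
const int SERVO_PIN = 9;        // steering servo (assumed pin)

SoftwareSerial zigbee(10, 11);  // RX, TX wired to the Zigbee radio modem (assumed pins)
Servo steering;

void setup() {
  pinMode(IR_PIN, INPUT);
  pinMode(MOTOR_EN_PIN, OUTPUT);
  steering.attach(SERVO_PIN);
  zigbee.begin(9600);           // wireless link to the command control station
}

void loop() {
  // Manual mode: single-character commands relayed from the control station.
  if (zigbee.available()) {
    char cmd = zigbee.read();
    if (cmd == 'F') analogWrite(MOTOR_EN_PIN, 200);  // move forward
    if (cmd == 'S') analogWrite(MOTOR_EN_PIN, 0);    // stop
    if (cmd == 'L') steering.write(45);              // steer left
    if (cmd == 'R') steering.write(135);             // steer right
  }

  // Obstacle detection: halt the drive motor and report to the station.
  if (digitalRead(IR_PIN) == HIGH) {
    analogWrite(MOTOR_EN_PIN, 0);
    zigbee.println("OBSTACLE");
  }
}
```

In autonomous mode, the same loop would instead take its steering target from the path-planning calculation sketched in the next section, rather than from operator commands.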
B). ALGORITHM DESIGN FOR SELF CONTROL MODE

The algorithm design for the self-control mode rests on two components, the path-planning and obstacle-detection algorithms, which together let the UGV navigate. The user at the command station obtains the current GPS coordinates of the UGV and the destination coordinates to which the UGV has to navigate. The direction is obtained from the magnetic compass using trigonometric functions, by calculating the specific angle through which the UGV has to turn (a small worked example of this calculation is given after this section). The path-planning algorithm then decides the path taken by the UGV, taking all these measurements into account. Obstacle detection is performed by the IR sensors, which detect obstacles such as mountains and rivers and, in parallel, update the path-planning measurements accordingly. The sensors also detect metal objects such as bombs buried in the ground and send this information to the command control station. Thus, in self-control mode, the path-planning and obstacle-detection algorithms have to work in parallel as efficiently as possible, based on all the measurements. On the other side, the command control station receives all the measurements and signals from the UGV and coordinates the robot continuously. Based on the path-planning and obstacle-detection algorithms, the UGV navigates automatically to the destination given by the user. The obstacle-detection algorithm works according to the data shown in figure 3 and figure 4.

FIG 4: OBSTACLE DETECTION ALGORITHM CODES FOR LEFT SIDE '1'

RESULTS

We successfully built an unmanned ground vehicle that functions automatically using a magnetic compass, GPS coordinates, and path-planning and obstacle-detection algorithms. Technologies such as Zigbee, Arduino, a water sensor, a human sensor and a metal sensor have been used and implemented effectively.

CONCLUSION AND FUTURE WORKS

We successfully built a prototype that works automatically in self-control mode using a GPS antenna, a magnetic compass, and path-planning and obstacle-detection algorithms. In this self-control mode, the UGV is capable of travelling automatically from one place to another without human interference or navigation commands. It adjusts its strategies and measurements based on path planning, obstacle detection, the GPS antenna and the magnetic compass. We strongly feel that self-controlled robots are very helpful for military purposes in outdoor operations. This UGV can undertake operations such as surveillance, border patrol and active combat, both as a standalone unit in automatic mode and in coordination with human soldiers in manual mode. Our future work is on developing an arm-controlled or gesture-controlled mode along with the command control mode and
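As an illustration of the trigonometric bearing calculation described in the algorithm-design section above, the following standalone C++ sketch computes the compass bearing from the UGV's current GPS fix to the destination and the signed turn angle relative to the magnetic-compass heading. It is only a sketch of the underlying math, not the authors' implementation: the function names, the sample coordinates and the fixed 90-degree avoidance turn are assumptions.

```cpp
// Illustrative path-planning math: bearing to destination and turn angle
// relative to the compass heading (helper names and sample values are assumed).
#include <cmath>
#include <cstdio>

const double PI_D = 3.14159265358979323846;

// Initial great-circle bearing in degrees (0-360) from (lat1, lon1) to (lat2, lon2).
double bearingToDestination(double lat1, double lon1, double lat2, double lon2) {
  double phi1 = lat1 * PI_D / 180.0;
  double phi2 = lat2 * PI_D / 180.0;
  double dLambda = (lon2 - lon1) * PI_D / 180.0;
  double y = std::sin(dLambda) * std::cos(phi2);
  double x = std::cos(phi1) * std::sin(phi2) -
             std::sin(phi1) * std::cos(phi2) * std::cos(dLambda);
  double bearing = std::atan2(y, x) * 180.0 / PI_D;
  return std::fmod(bearing + 360.0, 360.0);
}

// Signed turn angle wrapped to [-180, 180): positive = turn right, negative = turn left.
double turnAngle(double bearingDeg, double compassHeadingDeg) {
  return std::fmod(bearingDeg - compassHeadingDeg + 540.0, 360.0) - 180.0;
}

int main() {
  // Sample values only: current GPS fix, destination, and compass heading.
  double lat = 12.8230, lon = 80.0444;          // current position
  double destLat = 12.8300, destLon = 80.0500;  // destination
  double heading = 0.0;                         // magnetic compass reading (degrees)

  double bearing = bearingToDestination(lat, lon, destLat, destLon);
  double turn = turnAngle(bearing, heading);

  bool irObstacle = false;  // set true when the IR sensor flags an obstacle
  if (irObstacle) {
    // Simple illustrative rule: override the planned turn until the obstacle
    // clears, then re-plan from the next GPS fix.
    turn = 90.0;
  }

  std::printf("bearing %.1f deg, turn %.1f deg\n", bearing, turn);
  return 0;
}
```

On the vehicle itself, this calculation would run on the on-board system or the Arduino, with the resulting turn angle driving the servo motor through the control unit.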
Recommended publications
  • Remote and Autonomous Controlled Robotic Car Based on Arduino with Real Time Obstacle Detection and Avoidance
    Universal Journal of Engineering Science 7(1): 1-7, 2019. http://www.hrpub.org DOI: 10.13189/ujes.2019.070101. Remote and Autonomous Controlled Robotic Car based on Arduino with Real Time Obstacle Detection and Avoidance. Esra Yılmaz, Sibel T. Özyer*, Department of Computer Engineering, Çankaya University, Turkey. Copyright © 2019 by authors, all rights reserved. Authors agree that this article remains permanently open access under the terms of the Creative Commons Attribution License 4.0 International License.

    Abstract. In robotic car, real time obstacle detection and obstacle avoidance are significant issues. In this study, design and implementation of a robotic car have been presented with regards to hardware, software and communication environments with real time obstacle detection and obstacle avoidance. Arduino platform, android application and Bluetooth technology have been used to implementation of the system. In this paper, robotic car design and application with using sensor programming on a platform has been presented. This robotic device has been developed with the interaction of Android-based device.

    electronic devices. The environment can be any physical environment such as military areas, airports, factories, hospitals, shopping malls, and electronic devices can be smartphones, robots, tablets, smart clocks. These devices have a wide range of applications to control, protect, image and identification in the industrial process. Today, there are hundreds of types of sensors produced by the development of technology such as heat, pressure, obstacle recognizer, human detecting. Sensors were used for lighting purposes in the past, but now they are used to make life easier. Thanks to technology in the field of electronics, incredibly
  • AI, Robots, and Swarms: Issues, Questions, and Recommended Studies
    AI, Robots, and Swarms Issues, Questions, and Recommended Studies Andrew Ilachinski January 2017 Approved for Public Release; Distribution Unlimited. This document contains the best opinion of CNA at the time of issue. It does not necessarily represent the opinion of the sponsor. Distribution Approved for Public Release; Distribution Unlimited. Specific authority: N00014-11-D-0323. Copies of this document can be obtained through the Defense Technical Information Center at www.dtic.mil or contact CNA Document Control and Distribution Section at 703-824-2123. Photography Credits: http://www.darpa.mil/DDM_Gallery/Small_Gremlins_Web.jpg; http://4810-presscdn-0-38.pagely.netdna-cdn.com/wp-content/uploads/2015/01/ Robotics.jpg; http://i.kinja-img.com/gawker-edia/image/upload/18kxb5jw3e01ujpg.jpg Approved by: January 2017 Dr. David A. Broyles Special Activities and Innovation Operations Evaluation Group Copyright © 2017 CNA Abstract The military is on the cusp of a major technological revolution, in which warfare is conducted by unmanned and increasingly autonomous weapon systems. However, unlike the last “sea change,” during the Cold War, when advanced technologies were developed primarily by the Department of Defense (DoD), the key technology enablers today are being developed mostly in the commercial world. This study looks at the state-of-the-art of AI, machine-learning, and robot technologies, and their potential future military implications for autonomous (and semi-autonomous) weapon systems. While no one can predict how AI will evolve or predict its impact on the development of military autonomous systems, it is possible to anticipate many of the conceptual, technical, and operational challenges that DoD will face as it increasingly turns to AI-based technologies.
  • AUV Adaptive Sampling Methods: a Review
    applied sciences Review AUV Adaptive Sampling Methods: A Review Jimin Hwang 1 , Neil Bose 2 and Shuangshuang Fan 3,* 1 Australian Maritime College, University of Tasmania, Launceston 7250, TAS, Australia; [email protected] 2 Department of Ocean and Naval Architectural Engineering, Memorial University of Newfoundland, St. John’s, NL A1C 5S7, Canada; [email protected] 3 School of Marine Sciences, Sun Yat-sen University, Zhuhai 519082, Guangdong, China * Correspondence: [email protected] Received: 16 July 2019; Accepted: 29 July 2019; Published: 2 August 2019 Abstract: Autonomous underwater vehicles (AUVs) are unmanned marine robots that have been used for a broad range of oceanographic missions. They are programmed to perform at various levels of autonomy, including autonomous behaviours and intelligent behaviours. Adaptive sampling is one class of intelligent behaviour that allows the vehicle to autonomously make decisions during a mission in response to environment changes and vehicle state changes. Having a closed-loop control architecture, an AUV can perceive the environment, interpret the data and take follow-up measures. Thus, the mission plan can be modified, sampling criteria can be adjusted, and target features can be traced. This paper presents an overview of existing adaptive sampling techniques. Included are adaptive mission uses and underlying methods for perception, interpretation and reaction to underwater phenomena in AUV operations. The potential for future research in adaptive missions is discussed. Keywords: autonomous underwater vehicle(s); maritime robotics; adaptive sampling; underwater feature tracking; in-situ sensors; sensor fusion 1. Introduction Autonomous underwater vehicles (AUVs) are unmanned marine robots. Owing to their mobility and increased ability to accommodate sensors, they have been used for a broad range of oceanographic missions, such as surveying underwater plumes and other phenomena, collecting bathymetric data and tracking oceanographic dynamic features.
  • Development of an Open Humanoid Robot Platform for Research and Autonomous Soccer Playing
    Development of an Open Humanoid Robot Platform for Research and Autonomous Soccer Playing. Karl J. Muecke and Dennis W. Hong, RoMeLa: Robotics and Mechanisms Lab, Virginia Tech, Blacksburg, VA 24061. [email protected], [email protected]

    Abstract. This paper describes the development of a fully autonomous humanoid robot for locomotion research and as the first US entry into RoboCup. DARwIn (Dynamic Anthropomorphic Robot with Intelligence) is a humanoid robot capable of bipedal walking and performing human-like motions. As the years have progressed, DARwIn has evolved from a concept to a sophisticated robot platform. DARwIn 0 was a feasibility study that investigated the possibility of making a humanoid robot walk. Its successor, DARwIn I, was a design study that investigated how to create a humanoid robot with human proportions, range of motion, and kinematic structure. DARwIn IIa built on the name "humanoid" by adding autonomy. DARwIn IIb improved on its predecessor by adding more powerful actuators and modular computing components. Finally, DARwIn III is designed to take the best of all the designs and incorporate the robot's most advanced motion control yet.

    Introduction. Dynamic Anthropomorphic Robot with Intelligence (DARwIn), a humanoid robot, is a sophisticated hardware platform used for studying bipedal gaits that has evolved over time. Five versions of DARwIn have been developed, each an improvement on its predecessor. The first version, DARwIn 0 (Figure 1a), was used as a design study to determine the feasibility of creating a humanoid robot and for actuator evaluation. The second version, DARwIn I (Figure 1b), used improved gaits and software.
  • Learning Preference Models for Autonomous Mobile Robots in Complex Domains
    Learning Preference Models for Autonomous Mobile Robots in Complex Domains David Silver CMU-RI-TR-10-41 December 2010 Robotics Institute Carnegie Mellon University Pittsburgh, Pennsylvania, 15213 Thesis Committee: Tony Stentz, chair Drew Bagnell David Wettergreen Larry Matthies Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Robotics Copyright c 2010. All rights reserved. Abstract Achieving robust and reliable autonomous operation even in complex unstruc- tured environments is a central goal of field robotics. As the environments and scenarios to which robots are applied have continued to grow in complexity, so has the challenge of properly defining preferences and tradeoffs between various actions and the terrains they result in traversing. These definitions and parame- ters encode the desired behavior of the robot; therefore their correctness is of the utmost importance. Current manual approaches to creating and adjusting these preference models and cost functions have proven to be incredibly tedious and time-consuming, while typically not producing optimal results except in the sim- plest of circumstances. This thesis presents the development and application of machine learning tech- niques that automate the construction and tuning of preference models within com- plex mobile robotic systems. Utilizing the framework of inverse optimal control, expert examples of robot behavior can be used to construct models that generalize demonstrated preferences and reproduce similar behavior. Novel learning from demonstration approaches are developed that offer the possibility of significantly reducing the amount of human interaction necessary to tune a system, while also improving its final performance. Techniques to account for the inevitability of noisy and imperfect demonstration are presented, along with additional methods for improving the efficiency of expert demonstration and feedback.
  • Temporal Range Registration for Unmanned Ground and Aerial Vehicles
    Journal of Intelligent and Robotic Systems (2005) © Springer 2005. DOI: 10.1007/s10846-005-9025-1. Temporal Range Registration for Unmanned Ground and Aerial Vehicles. R. MADHAVAN*, T. HONG and E. MESSINA, Intelligent Systems Division, National Institute of Standards and Technology, Gaithersburg, MD 20899-8230, USA; e-mail: [email protected], {tsai.hong, elena.messina}@nist.gov. (Received: 24 March 2004; in final form: 2 September 2005)

    Abstract. An iterative temporal registration algorithm is presented in this article for registering 3D range images obtained from unmanned ground and aerial vehicles traversing unstructured environments. We are primarily motivated by the development of 3D registration algorithms to overcome both the unavailability and unreliability of Global Positioning System (GPS) within required accuracy bounds for Unmanned Ground Vehicle (UGV) navigation. After suitable modifications to the well-known Iterative Closest Point (ICP) algorithm, the modified algorithm is shown to be robust to outliers and false matches during the registration of successive range images obtained from a scanning LADAR rangefinder on the UGV. Towards registering LADAR images from the UGV with those from an Unmanned Aerial Vehicle (UAV) that flies over the terrain being traversed, we then propose a hybrid registration approach. In this approach to air to ground registration to estimate and update the position of the UGV, we register range data from two LADARs by combining a feature-based method with the aforementioned modified ICP algorithm. Registration of range data guarantees an estimate of the vehicle's position even when only one of the vehicles has GPS information.
  • Dynamic Reconfiguration of Mission Parameters in Underwater Human-Robot Collaboration
    Dynamic Reconfiguration of Mission Parameters in Underwater Human-Robot Collaboration. Md Jahidul Islam, Marc Ho, and Junaed Sattar.

    Abstract— This paper presents a real-time programming and parameter reconfiguration method for autonomous underwater robots in human-robot collaborative tasks. Using a set of intuitive and meaningful hand gestures, we develop a syntactically simple framework that is computationally more efficient than a complex, grammar-based approach. In the proposed framework, a convolutional neural network is trained to provide accurate hand gesture recognition; subsequently, a finite-state machine-based deterministic model performs efficient gesture-to-instruction mapping and further improves robustness of the interaction scheme. The key aspect of this framework is that it can be easily adopted by divers for communicating simple instructions to underwater robots without using artificial tags such as fiducial markers or requiring memorization of a potentially complex set of language rules. Extensive experiments are performed both on field-trial data and through simulation, which demonstrate the robustness, efficiency, and portability of

    In the underwater domain, what would otherwise be straightforward deployments in terrestrial settings often become extremely complex undertakings for underwater robots, which require close human supervision. Since Wi-Fi or radio (i.e., electromagnetic) communication is not available or severely degraded underwater [7], such methods cannot be used to instruct an AUV to dynamically reconfigure command parameters. The current task thus needs to be interrupted, and the robot needs to be brought to the surface in order to reconfigure its parameters. This is inconvenient and often expensive in terms of time and physical resources. Therefore, triggering parameter changes based on human input while the robot is underwater, without requiring a trip to the surface, is a simpler and more efficient alternative approach.
  • Robots Are Becoming Autonomous
    RESEARCH OUTLOOK 02. STEFAN SCHAAL, MAX PLANCK INSTITUTE FOR INTELLIGENT SYSTEMS, TÜBINGEN. Robots are becoming autonomous.

    Towards the end of the 1990s Japan launched a globally unique wave of funding of humanoid robots. The motivation was clearly formulated: demographic trends with rising life expectancy and stagnating or declining population sizes in highly developed countries will create problems in the foreseeable future, in particular with regard to the care of the elderly due to a relatively sharp fall in the number of younger people able to help the elderly cope with everyday activities. Autonomous robots are a possible solution, and automotive companies like Honda and Toyota have invested billions in a potential new industrial sector. Today, some 15 years down the line, the challenge of "ageing societies" has not changed, and Europe and the US have also identified and singled out this very problem. But new challenges have also emerged. Earthquakes and floods cause unimaginable damage in densely populated areas.

    or carry out complex manipulations are still in the research stage. Research into humanoid robots and assistive robots is being pursued around the world. Problems such as the complexity of perception, effective control without endangering the environment and a lack of learning aptitude and adaptability continue to confront researchers with daunting challenges for the future. Thus, an understanding of autonomous systems remains essentially a topic of basic research.

    AUTONOMOUS ROBOTICS: PERCEPTION–ACTION–LEARNING. Autonomous systems can be generally characterised as perception-action-learning systems. Such systems should be able to perform useful tasks for extended periods autonomously, meaning without external assistance. In robotics, systems have to perform physical tasks and thus have to
  • Algorithmic Fog of War: When Lack of Transparency Violates the Law of Armed Conflict
    Journal of Future Robot Life -1 (2021) 1–24. DOI 10.3233/FRL-200019. IOS Press. Algorithmic fog of war: When lack of transparency violates the law of armed conflict. Jonathan Kwik* and Tom Van Engers, Faculty of Law, University of Amsterdam, Amsterdam 1001NA, Netherlands.

    Abstract. Under international law, weapon capabilities and their use are regulated by legal requirements set by International Humanitarian Law (IHL). Currently, there are strong military incentives to equip capabilities with increasingly advanced artificial intelligence (AI), which include opaque (less transparent) models. As opaque models sacrifice transparency for performance, it is necessary to examine whether their use remains in conformity with IHL obligations. First, we demonstrate that the incentives for automation drive AI toward complex task areas and dynamic and unstructured environments, which in turn necessitates resort to more opaque solutions. We subsequently discuss the ramifications of opaque models for foreseeability and explainability. Then, we analyse their impact on IHL requirements from a development, pre-deployment and post-deployment perspective. We find that while IHL does not regulate opaque AI directly, the lack of foreseeability and explainability frustrates the fulfilment of key IHL requirements to the extent that the use of fully opaque AI could violate international law. States are urged to implement interpretability during development and seriously consider the challenging complication of determining the appropriate balance between transparency and performance in their capabilities.

    Keywords: Transparency, interpretability, foreseeability, weapon,
  • Design of a Smart Unmanned Ground Vehicle for Hazardous Environments
    Design of a Smart Unmanned Ground Vehicle for Hazardous Environments. Saurav Chakraborty, Tyfone Communications Development (I) Pvt. Ltd., ITPL, White Field, Bangalore 560092, INDIA. Subhadip Basu, Computer Science & Engineering Dept., Jadavpur University, Kolkata – 700032, INDIA.

    Abstract. A smart Unmanned Ground Vehicle (UGV) is designed and developed for some application specific missions to operate predominantly in hazardous environments. In our work, we have developed a small and lightweight vehicle to operate in general cross-country terrains in or without daylight. The UGV can send visual feedbacks to the operator at a remote location. Onboard infrared sensors can detect the obstacles around the UGV and sends signals to the operator.

    Key Words. Unmanned Ground Vehicle, Navigation Control, Onboard Sensor.

    1. Introduction. Robotics is an important field of interest in modern age of automation. Unlike human being a computer controlled robot can work with speed and accuracy without feeling exhausted. A robot can also perform preassigned tasks in a hazardous environment, reducing the risks and

    needs some organizing principle, based on the characteristics of each system such as: the purpose of the development effort (often the performance of some application-specific mission); the specific reasons for choosing a UGV solution for the application (e.g., hazardous environment, strength or endurance requirements, size limitation etc.); the technological challenges, in terms of functionality, performance, or cost, posed by the application; the system's intended operating area (e.g., indoor environments, anywhere indoors, outdoors on roads, general cross-country terrain, the deep seafloor, etc.); the vehicle's mode of locomotion (e.g., wheels, tracks, or legs); and how the vehicle's path is determined (i.e., control and navigation techniques employed).
  • An Autonomous Grape-Harvester Robot: Integrated System Architecture
    electronics Article. An Autonomous Grape-Harvester Robot: Integrated System Architecture. Eleni Vrochidou 1, Konstantinos Tziridis 1, Alexandros Nikolaou 1, Theofanis Kalampokas 1, George A. Papakostas 1,*, Theodore P. Pachidis 1, Spyridon Mamalis 2, Stefanos Koundouras 3 and Vassilis G. Kaburlasos 1. 1 HUMAIN-Lab, Department of Computer Science, School of Sciences, International Hellenic University (IHU), 65404 Kavala, Greece; [email protected] (E.V.); [email protected] (K.T.); [email protected] (A.N.); [email protected] (T.K.); [email protected] (T.P.P.); [email protected] (V.G.K.). 2 Department of Agricultural Biotechnology and Oenology, School of Geosciences, International Hellenic University (IHU), 66100 Drama, Greece; [email protected]. 3 Laboratory of Viticulture, Faculty of Agriculture, Forestry and Natural Environment, School of Agriculture, Aristotle University of Thessaloniki (AUTh), 54124 Thessaloniki, Greece; [email protected]. * Correspondence: [email protected]; Tel.: +30-2510-462-321.

    Abstract: This work pursues the potential of extending "Industry 4.0" practices to farming toward achieving "Agriculture 4.0". Our interest is in fruit harvesting, motivated by the problem of addressing the shortage of seasonal labor. In particular, here we present an integrated system architecture of an Autonomous Robot for Grape harvesting (ARG). The overall system consists of three interdependent units: (1) an aerial unit, (2) a remote-control unit and (3) the ARG ground unit. Special attention is paid to the ARG; the latter is designed and built to carry out three viticultural operations, namely harvest, green harvest and defoliation.
  • Space Applications of Micro-Robotics: a Preliminary Investigation of Technological Challenges and Scenarios
    SPACE APPLICATIONS OF MICRO-ROBOTICS: A PRELIMINARY INVESTIGATION OF TECHNOLOGICAL CHALLENGES AND SCENARIOS P. Corradi, A. Menciassi, P. Dario Scuola Superiore Sant'Anna CRIM - Center for Research In Microengineering Viale R. Piaggio 34, 56025 Pontedera (Pisa), Italy Email: [email protected] ABSTRACT In the field of robotics, a microrobot is defined as a miniaturized robotic system, making use of micro- and, possibly, nanotechnologies. Generally speaking, many terms such as micromechanisms, micromachines and microrobots are used to indicate a wide range of devices whose function is related to the concept of “operating at a small scale”. In space terminology, however, the term “micro” (and “nano”) rover, probe or satellite currently addresses a category of relatively small systems, with a size ranging from a few to several tens of centimetres, not necessarily including micro- and nanotechnologies (except for microelectronics). The prefix “micro” will be used in this paper in (almost) its strict sense, in order to address components, modules and systems with sizes in the order of micrometers up to few millimetres. Therefore, with the term microrobot it will be considered a robot with a size up to few millimetres, where typical integrated components and modules have features in the order of micrometers and have been entirely produced through micromechanical and microelectronic mass- fabrication and mass-assembly processes. Hence, the entire robot is completely integrated in a stack of assembled chips. Due to its size, the capabilities of the single unit are limited and, consequently, microrobots need to work in very large groups, or swarms, to significantly sense or affect the environment.