
Future Directions 2007: Getting ready for what comes next

D. Erickson, J. Collier, S. Monckton, B. Broten, J. Giesbrecht, M. Trentini, D. Hanna, R. Chesney, D. MacKay, S. Verret, R. Anderson
Defence R&D Canada – Suffield

Defence R&D Canada Technical Memorandum, DRDC Suffield TM, December 2007

Author

D. Erickson Defence R & D Canada - Suffield

Approved by

D. Hanna Head/Tactical Vehicle Systems Section

Approved for release by

Dr. R. Clewley Chairman/Document Review Panel

© Her Majesty the Queen as represented by the Minister of National Defence, 2007
© Sa majesté la reine, représentée par le ministre de la Défense nationale, 2007

Abstract

This paper summarizes the outcomes of the Future Directions 2007 planning symposium held by DRDC Suffield staff on 17 and 18 September 2007. Participants proposed unconstrained future autonomy scenarios, outlining what they saw as the next step. Participant presentations were used to extract common timelines and themes about autonomous systems development and military robotics. The symposium also reviewed the state of the art, current programs, and critical research areas, and indicated promising future research avenues that fit the current trends. The symposium also reviewed the practicality of reaching full Unmanned Vehicle (UxV) autonomy and recommends a man-in-the-loop systems concept for the foreseeable future. Given this system concept reality, it recognized an important shift in focus to introduce more "automaticity" sooner as another way to impact the client and bring about autonomy in the longer term.

Résumé

This report summarizes the outcomes of the Future Directions planning symposium held by DRDC Suffield staff on 17 and 18 September 2007. Participants proposed unconstrained future autonomy scenarios based on their vision of the next step to take. The presentations were used to extract common timelines and themes in autonomous systems development and military robotics. The symposium covered the state of the art, current programs, and essential research areas, and identified promising research avenues corresponding to current trends. It also addressed the feasibility of achieving fully autonomous unmanned vehicles (UxV) and recommended a human-in-the-loop systems concept for the foreseeable future. The reality of this concept implied a considerable shift in objective: introducing more "automaticity" sooner as another way to influence the client and enable autonomy in the longer term.


Executive Summary

Future Directions 2007: Getting ready for what comes next

D. Erickson, J. Collier, S. Monckton, B. Broten, J. Giesbrecht, M. Trentini, D. Hanna, R. Chesney, D. MacKay, S. Verret, R. Anderson; DRDC Suffield TM; Defence R&D Canada – Suffield; December 2007

This paper describes the outcomes of the Future Directions 2007 symposium held by DRDC Suffield staff at the RedTec Inc. facility on 17 and 18 September 2007. The symposium was an opportunity to muse about the future based on many years' experience, frustration, and thought. Each participant was given the opportunity to propose a future vision for autonomous systems and from this identify the important factors for this research field in the future. From each individual's presentation, common timelines, themes, and research areas were extracted. What is important about these findings is how individuals came to similar conclusions about the real future of autonomous systems, without the benefit of pre-discussion. No one thought the goal was unattainable, but some felt the traditional forced adoption strategy may not be the only way to arrive at full autonomy.

The symposium's discussions produced many diverse opinions, and attendees arrived at a general, though not necessarily unanimous, consensus on the following recommendations:

1. DRDC, AISS in particular, should adopt NIST 1011 (ALFUS) autonomy scale conventions for future work. The advantages are a standardization of terminology, an apples-to-apples comparison with US projects, an alignment that enhances interoperability, and a potential management tool to consider autonomous system proposals.

2. Communication, perception, cognition, complexity, visual simultaneous localization and mapping, complex sensing, teamwork, learning, and intelligent mobility are the critical research areas that AISS should focus on in order to approach the full autonomy goal.

3. AISS should develop and nurture a more focused relationship with DLCD/DLSC for autonomous vehicle system concepts exploration/presentation. A focused partnership will lessen the AISS burden to convince departments of the importance while adjusting expectations to more practical solutions.

4. Based on the common themes identified, critical research areas' status, and timelines discussed, ten broad project applications that are possible and practical should be pursued:
(a) STRV Lite - improved hallway- and stairway-capable UGV;
(b) Logistics rotorcraft - semi-autonomous UAV for frontline convoy operations;
(c) Autonomous convoy - automating the current ground vehicles used in resupply;
(d) IEDD/EOD UGV - continued development of EOD augmentation to protect soldiers;
(e) Multi-spectral sensing - investigate novel hyper-spectral sensing for vegetation classification;
(f) Teleoperated Air-dropped Demolition Munition (TADM) - develop dynamic autonomous demolition capability;
(g) Static Remote Weapons (SRW) - investigate forward operating base (FOB) protection using automated weapons;
(h) Sonobuoy - investigate increasing autonomy in naval sonobuoys;
(i) Multi-Target System - spin off current SOA UGV teamwork into target systems;
(j) Arctic Sovereignty - investigate Arctic surveillance UGV/UAV systems.

5. It is recommended that the FFCV project include a small UGV for dismounted infantry operations in urban areas.

Sommaire

Future Directions 2007: Getting ready for what comes next

D. Erickson, J. Collier, S. Monckton, B. Broten, J. Giesbrecht, M. Trentini, D. Hanna, R. Chesney, D. MacKay, S. Verret, R. Anderson; DRDC Suffield TM; Defence Research and Development Canada – Suffield; December 2007

This report summarizes the results of the Future Directions symposium held by DRDC Suffield staff on 17 and 18 September 2007 at the RedTec Inc. facility. The symposium gave participants the occasion to discuss the future, drawing on their many years of experience, frustration, and reflection. Each participant was given the chance to propose a vision of the future of autonomous systems and, from it, to identify the factors they considered important for this field of research going forward. From each participant's predictions, common themes and research areas judged important were extracted. It is notable that the participants reached similar conclusions about the real future of autonomous systems without consulting one another before the symposium. No one thought the goal unattainable, but some felt the classic forced-adoption strategy might not be the only way to reach full autonomy.

The symposium aired a wide variety of opinions and produced a general, though not necessarily unanimous, consensus. The participants formulated the following recommendations:

1. DRDC, and AISS in particular, should adopt the autonomy scale conventions defined in NIST 1011 (ALFUS) for its future work. The advantages include standardized terminology, a like-for-like comparison with US projects, an alignment that enhances interoperability, and a potential management tool for evaluating autonomous system proposals.

2. To approach the goal of full autonomy, AISS should concentrate on the research areas of communication, perception, cognition, complexity, visual simultaneous localization and mapping, complex sensing, teamwork, learning, and intelligent mobility.

3. AISS should establish and maintain a closer relationship with DLCD/DLSC for the exploration and presentation of autonomous vehicle system concepts. A focused partnership will help AISS convince departments of the importance of its work while adjusting expectations toward more practical solutions.

4. Given the common themes, the status of the critical research areas, and the timelines defined, ten broad projects judged feasible and practical should be undertaken:
(a) STRV Lite - an improved unmanned ground vehicle capable of traversing hallways and stairways;
(b) Logistics rotorcraft - a semi-autonomous UAV for frontline convoy operations;
(c) Autonomous convoy - automation of the ground vehicles currently used for resupply;
(d) IEDD/EOD UGV - continued development of the EOD augmentation to protect soldiers;
(e) Multi-spectral sensing - study of novel hyper-spectral sensing for vegetation classification;
(f) Teleoperated Air-dropped Demolition Munition (TADM) - development of a dynamic autonomous demolition capability;
(g) Static Remote Weapons - study of forward operating base protection using automated weapons;
(h) Sonobuoy - research aimed at increasing the autonomy of naval sonobuoys;
(i) Multi-Target System - redirecting current SOA UGV teamwork research into target systems;
(j) Arctic Sovereignty - study of potential UGV/UAV systems for Arctic surveillance.

5. Finally, it is recommended that a small UGV for dismounted infantry operations in urban areas be added to the FFCV project.

Table of contents

Abstract ...... i
Résumé ...... i
Executive Summary ...... iii
Sommaire ...... v
Table of contents ...... vi
List of figures ...... viii
List of tables ...... ix
1. Introduction ...... 1
2. Common Timelines and Themes ...... 1
2.1 Common Timelines ...... 2
2.1.1 Autonomy is a long road ...... 2
2.2 Common Themes ...... 3
2.2.1 The real value of military UxV in adaptive dispersed operations? ...... 3
2.2.2 Autonomy Lite ...... 4
2.2.3 The next step ...... 5
2.2.4 Static or mobile, ground or air? ...... 6
2.2.5 Common attributes ...... 7
2.2.6 Autonomy is not intelligence; intelligence is not autonomy ...... 7
2.2.7 Drones ...... 8
2.2.8 Evolutionary acceptance strategy ...... 9
2.2.9 Legal Implications ...... 10
2.3 Critical Research Areas ...... 10
2.3.1 Communications ...... 11
2.3.2 Perception ...... 11
2.3.3 Cognition/Artificial Intelligence ...... 13
2.3.4 Complexity/Meta-algorithms ...... 13
2.3.5 Visual Simultaneous Localization and Mapping (VSlam) ...... 14
2.3.6 Complex Sensing/Video ...... 15
2.3.7 Teamwork/Teaming ...... 15
2.3.8 Learned Trafficability ...... 16
2.3.9 Learning and Versatility in Machines ...... 17
2.3.10 Intelligent Mobility ...... 21
2.4 Technology Readiness ...... 25
2.4.1 State of Unmanned Ground Vehicles ...... 25
2.4.2 Human team size versus complexity ...... 26
2.5 The Autonomy Scale ...... 28
3. Discussion ...... 29
3.1 Comments on Family of Future Combat Vehicles (FFCV) ...... 29
3.2 Comments on Crisis in Zefra ...... 30
3.2.1 A Gedanken experiment ...... 31
3.2.2 A thumbnail summary ...... 31
3.2.3 Key Technologies ...... 31
3.2.4 Aerostats ...... 31
3.2.5 Strikebots ...... 31
3.2.6 Scarabs ...... 32
3.2.7 Smart Dust ...... 32
3.2.8 Swarmbots ...... 33
3.3 Partnership with DLCD/DLSC ...... 34
3.4 Comments on DARPA Urban Challenge/Grand Challenge ...... 34
3.5 Comments on COHORT ...... 35
3.5.1 How will we do it? ...... 35
3.5.2 Urban Overwatch Scenario ...... 36
3.5.3 Layered battlespace ...... 38
3.6 Autonomy Application Discussion ...... 41
4. Recommendations ...... 42
References ...... 44
Annexes ...... 49
A List of abbreviations/acronyms/initialisms ...... 49
B Notation ...... 54
C Definitions (from Merriam-Webster) ...... 56

List of figures

Figure 1. An Urban Canyon (The Magnificent Mile, Chicago, Illinois) ...... 6
Figure 2. Little Dog Platform (Courtesy of Boston Dynamics) ...... 20
Figure 3. Little Dog Platform (Courtesy of Boston Dynamics) ...... 21
Figure 4. NASA Technology Readiness Levels ...... 26
Figure 5. Wright Brothers (left), Global Hawk (right) ...... 26
Figure 6. Stanford Research Institute, Shakey Robot [1][2] ...... 27
Figure 7. Stanford's Stanley UGV ...... 27
Figure 8. NIST ALFUS Detailed Model autonomy scale (Fig. 2 from [3]) ...... 28
Figure 9. Symposium's proposed 3-dimensional autonomy scale. Dimensions: Human Interaction (HI), Mission Complexity (MC), and Environment Complexity (EC) ...... 29
Figure 10. NIST ALFUS Summary Model (taken from Huang et al. [4], Figure 5) ...... 30
Figure 11. An example of the 'Layered Battlespace' in which UGV/UAV-RW/UAV-FW are restricted by capability and complexity into a layered structure ...... 40

List of tables

Table 1. Approximate Team Sizes for the DARPA Second Grand Challenge ...... 25

1. Introduction

This paper describes the outcomes of the Future Directions 2007 symposium held by DRDC Suffield staff at the RedTec Inc. facility in Redcliff, AB, on 17 and 18 September 2007. Researchers took a break from benchwork to muse about the future based on many years' experience, frustration, and thought. Each of the 12 participants was given the opportunity to propose a future vision for autonomous systems, not necessarily military robotics, and from this predict where things will be in the future. The attendees represent a wide cross-section of research areas, education, and project experience. From each presentation the common timelines, themes, and research areas were extracted. It is significant that the participants came to similar conclusions about the real future of autonomous systems, without the benefit of pre-discussion.

The first day was dedicated to future predictions and extracting the common timelines and themes from the various presentations. The second day presented current projects, DLSC concepts, and international programs: the Family of Future Combat Vehicles (FFCV), DARPA Urban Challenge, Future Combat Systems (FCS), DLSC Crisis in Zefra, and AISS COHORT.

The current state of the art for unmanned systems (UMS) [5-24] research and development is a large field, too large for a proper comprehensive survey in this paper. Needless to say, the scientific and engineering communities have advanced capability beyond early work [2].

This paper expands the ideas proposed at the symposium. It should be noted that some of the content is circumspection and is not intended to be authoritative for prediction, prognostication, or planning purposes. However, this paper sets out important factors and arguments that should be addressed at a strategic level in order to fulfill the ultimate goal of fully autonomous UxV systems.

This paper is organized as follows: Section 2 describes some common themes and timelines for the near term in autonomy research. Section 3 discusses current programs, projects, the partnership with DLSC, the proposed COHORT direction, and the DLSC science fiction story Crisis in Zefra. Section 4 concludes with a recommendation list for review by management. These recommendations are considered important for relevant and focused UxV system projects in the near future.

2. Common Timelines and Themes

This section outlines common timelines and themes extracted from the presentations made on 17 and 18 September 2007.

2.1 Common Timelines

2.1.1 Autonomy is a long road

The first teleoperated (wire-guided) ground vehicles were designed around 1940 [25] and used in WWII; teleoperated ground vehicles remain the primary control paradigm for similar Explosive Ordnance Disposal (EOD) machines today. The introduction of small, inexpensive robots for reconnaissance, security, search and rescue, infantry operations, and other applications is recent. Things haven't changed much for ground vehicle autonomy, except that the technology to support teleoperation has become a commodity in the past 10 years.

Research into automation has progressed since the 1940s. Von Neumann1 lectured on automata in 1948; autonomy research has been underway for about the same period that teleoperated robots have been available. Advancements in control theory and computational complexity have increased our capabilities but haven't delivered autonomy capable of replacing humans in complex environments with unconstrained situations. There are many reasons for this: traditional control paradigms do not work successfully in uncertain complex environments; perception research has not delivered a high-speed, high-confidence, high-bandwidth system that makes follow-on computation easier; artificial intelligence research has not solved uncertainty; sequential-instruction digital computers are not as capable as the average mammalian brain; communications at low altitudes are not reliable; and system complexity for autonomy is at least an order of magnitude greater than in current military systems, amongst other reasons. Perhaps the simplest reason is that at ground level, especially in urban canyons, there are simply more things to consider in real time in three dimensions, and that makes it hard to achieve autonomy.

Large purchases of teleoperated UGV systems for Iraq and Afghanistan and large budget overruns suggest that military logistics and support will need time and resources in any case to accommodate current systems. This fact alone may halt current programs like Future Combat Systems (FCS), the ambitious automated division program undertaken by the US Army, which has spent "US$8 billion already and [is] estimated to cost around US$300 billion"2. FCS has become the most expensive proposed DoD program in history. This seems to indicate the relative difficulty of autonomy in comparison to other research fields, or other projects like the Manhattan Project, which cost approximately US$27 billion in 2007 dollars.

1. The General and Logical Theory of Automata
2. http://www.govexec.com/features/0507-01/0507-01s3.htm - Fighting Folly 5/1/07

There was general agreement that autonomy, classified by the participants as full autonomy3, was an ambitious and uncertain goal. A definition of autonomy is given in Annex C. A majority expected that full autonomy for ground vehicles is probably at least 5 years away. Even in 5 years, full autonomy with no human intervention will only occur in controlled or known environments with minimal mission complexity. Barring a disruptive event, it is anticipated that teleoperation will be the primary UGV robot control schema for at least 5 years.
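The three-axis framing behind this classification (and the ALFUS conventions recommended later in this paper) can be made concrete with a small scorecard sketch. The class name, the 0-10 scales, and the simple mean used to combine the axes are illustrative assumptions for this sketch, not metrics prescribed by NIST 1011:

```python
from dataclasses import dataclass

@dataclass
class AutonomyScore:
    """ALFUS-style score along three axes, each on an illustrative 0-10 scale."""
    human_independence: float      # HI: 10 = no operator intervention at all
    mission_complexity: float      # MC: 10 = most complex mission handled
    environment_complexity: float  # EC: 10 = most complex environment handled

    def combined_level(self) -> float:
        """Collapse the three axes into one number (a simple mean; an assumption
        made for illustration, not a formula from NIST 1011)."""
        return (self.human_independence
                + self.mission_complexity
                + self.environment_complexity) / 3.0

# A teleoperated EOD UGV: operator fully in the loop, simple mission, cluttered streets.
eod = AutonomyScore(human_independence=1, mission_complexity=2, environment_complexity=7)
# A high-altitude surveillance UAV: long unattended legs in benign airspace.
uav = AutonomyScore(human_independence=8, mission_complexity=4, environment_complexity=2)

print(f"EOD UGV combined level ~ {eod.combined_level():.1f}")
print(f"UAV combined level ~ {uav.combined_level():.1f}")
```

The point of the exercise is that "full autonomy" requires all three axes to be high simultaneously; a system can score well on one axis (the high-altitude UAV's human independence) while the overall problem remains far easier than the ground case.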

2.2 Common Themes

2.2.1 The real value of military UxV in adaptive dispersed operations?

There has been debate for many years over the FCS program’s foundations. Sensor omniscience, overarching intelligence, dominance through sophisticated sensing, ubiquitous communication, and hierarchical information fusion are some of the foundation assumptions for FCS. All these assumptions must hold for the proposed FCS system to deliver the capability it is promising.

There is another proposed advantage to consider that does not hinge on the above assumptions. In the asymmetric warfare domain, the fundamental problem is enemy recognition - the bad hide amongst the good. UxV systems extend the reach of soldiers, thereby extending their presence and surveillance, and therefore could intimidate insurgents more than current systems alone. If insurgents fear being exposed or feel threatened by overwatch UxV systems, that may provoke a response or force them to delay action. If insurgents are provoked to act, then they are exposed. If insurgents delay action, then they have failed to act, which defeats their campaign. Reducing insurgents' opportunities to act decreases their success probability by denying them successful strikes. Ineffective insurgents, whether one catches them or not, may be suppressed or exposed through intimidation. An unsuccessful insurgent campaign gives the government more time to rebuild and move public opinion. UxV systems could be valuable to amplify this intimidation by their presence in addition to the current systems alone.

Some of the reasons why insurgents may be intimidated by UxV systems are psychological. The capabilities of the machines are unimportant and enemies cannot assume UxV are harmless. In fact, robots may not need to be menacing to upset human emotional responses. Humans understand they cannot bluff nor bribe a machine that can operate for hours without distraction. Machines cannot be surprised, although as stated above they can be fooled.

If UxV systems hamper insurgents from acting then they will attribute their failure to the UxV and its behaviour. Insurgents’ emotions could link UxV

3. This coincides with Huang et al. [3]

presence/actions to some emotional reason and not to programming - a psychological effect called attribution. Attribution is a psychology theory [26, 27] describing how humans perceive internal and external factors' effects on a personal event's outcome. Attribution can change behaviour, and it has been demonstrated that when people succeed they are more likely to attribute their success to their own abilities (internal attribution), but when people fail they are more likely to attribute that to other reasons (external attribution). Therefore, with every botched ambush or aborted bomb-making rendez-vous, insurgents could excuse their failure by pointing to the ever-present eye of UxV overwatch. This attribution could change their perception of the situation, and each failure may change their beliefs about their future attack success probability.

This intimidation attribution effect would be similar with human forces in overwatch, so it cannot be a UxV-only justification. Rather, it is the realization of the goal of wider ISTAR coverage, made possible by force-multiplied manpower from UxV, that amplifies the effect over larger areas than one soldier could influence. At the least, UxV presence will increase the probability of insurgent detection; in the most likely scenarios, it could impact insurgent operations on the moral plane, and that may have a greater value than any other conventional weapon system.

In the conventional warfare domain, UxV systems can provide a screen against a determined enemy in defense or an onslaught on offense. In extreme applications, UxV systems can attack in wave after wave to overwhelm human enemy defenses. The psychological impact of this scenario is obvious: no nation would willingly risk annihilation by machines. A nation that is capable of sustaining volley after volley of UxV systems into the breach will intimidate other nations into diplomatic solutions.

Neither of the above proposed conventional warfare applications depends on sensor omniscience, full autonomy, or flawless operations, and each accepts that any system can and will fail or be defeated. In both cases UxV systems, if designed with psychology in mind, can provide a tangible benefit to the adopter.

2.2.2 Autonomy Lite

As discussed in Section 2.1.1, full autonomy may be a distant goal, but there is a practical middle ground: developing "autonomy lite" for UxV systems. Autonomy lite for UxV is loosely defined as:

Reliable operation with a man in the loop.

This indicates the acceptance of a man in the loop for the foreseeable future, based on personal experience, as part of any system design. This implies a shift in design to include a stronger man-machine interface (MMI) and adjusted roles for UxV vis-à-vis their human operators. This agrees to some degree with Blackburn et al.'s [28] previous observations on the Mobile Detection Assessment and Response System (MDARS) and DEMO III. "Near autonomy" is a similar term used by TARDEC4 in much the same way, to describe a limited subset of full autonomy that fits inside what users are comfortable with.

This concurs with the current AISS mission of soldier augmentation. For non-scientific reasons, it is important to avoid the impression that new systems can and will replace human soldiers. As a practical matter, it is recognized that this is not possible at this time for state-of-the-art (SOA) reasons, and for reasons beyond technology such as positive control by soldiers, logistics, training, etc.

To propose an exclusively autonomous solution is impractical at this time and should be treated with skepticism. Unfortunately, some programs (see Section 3) have promised systems that can (allegedly) remove the man from the loop. Such ambitious expectations, if they fail, may impact our ability to meet the client's needs. Failure to meet expectations may impede what could have been achieved by more realistic proposals.

2.2.3 The next step

"The next step in robotics will be when machines are able to think the way they want to think, not the way we think they should think. For years, researchers around the globe have been programming robots to perform actions in ways that made sense to the researcher. For instance, the field of mapping for navigation has grown to the point where we can develop large intricate maps with several hundreds of thousands of features, and we can navigate in these areas without a problem when stationary. As soon as we move into a dynamic environment, many of our mapping techniques will not work. Now if we think of how we navigate a room in a crowd, we don't draw a map in the back of our head and remember every feature possible. Instead we draw a very rough sketch of where we need to go, and we adapt and learn along the way. We don't keep everything in memory; we actually keep very little in memory. Instead, we keep in memory the knowledge of how to get from A to B in the general sense and use it for different scenarios, e.g. going from a city to another city or from our home to the supermarket. The day that robots can develop a general knowledge of how to do things and then apply that knowledge to a large variety of situations and environments will be the day that we have significantly advanced autonomy and machine intelligence." - Sean Verret

4. TARDEC presentation, 10/10/2007

In general terms, the shift from researcher hand-crafted algorithms to self-evolving systems will mark the next step in autonomy research. The research themes where this will be of great benefit are expanded upon in Section 2.3.

2.2.4 Static or mobile, ground or air?

The symposium discussion touched on unmanned aerial and ground vehicles but not underwater or surface vehicles. There was debate as to whether or not aerial autonomy is as complex as ground autonomy. Judging by current systems, it can be argued that aerial autonomy has arrived for UAVs flying above the dreaded urban canyons. Aerial vehicles flying above the Earth's surface operate in a low-complexity environment; at higher altitudes the autonomy problem could be reduced to flight dynamics, weather, wind conditions, limited obstacle avoidance, target detection, and GPS position. At high altitudes model complexity can be reduced to 2-D/2.5-D representations or even no world model at all. Aerial sensing at higher altitudes does not suffer from destructive interference from the ground, which contributes to a less complex environment and autonomy problem.

Figure 1: An Urban Canyon (The Magnificent Mile, Chicago, Illinois)

However, for military operations inside urban canyons (see Figure 1), UAV autonomy must cope with nearly the entire spectrum of ground vehicle complexity. UAVs are limited by power and payload constraints in addition to control complexity; this makes them less flexible for sustained urban operation. This is important for the reach and capability of UxV systems: UGVs are currently the only platforms that have adequate payload and power for sustained military operations in complex environments like the urban

canyon. One opinion is that once UGV autonomy can perform in urban canyons, autonomous teams of UxV systems may be practical. Another opinion proposes that once UAV autonomy capable of flying above the urban canyon is reliable, full-autonomy UxV systems may be practical.

The static autonomy problem is a reduced form of the mobile UGV autonomy problem; all localization is relative to the current static position. It can be argued that static autonomy has arrived; for instance, an IR-initiated IED and the Hornet advanced munition already provide short-range static autonomous response. The robustness, adaptability, and reliability of these weapons as "autonomous" systems is debatable, and UGVs are not currently on par with these capability levels.

More importantly, since static autonomy could be less restricted in size, power, and weapon type, it will have an advantage over mobile versions. Given the inherent limitations of a mobile ground platform versus static ground platforms, it could be argued that static ground autonomy may appear before the advent of mobile ground autonomy. Static autonomous weapons could become a disruptive technology for future armed forces, and static autonomy could pose a larger threat to manned vehicles than the IR-initiated IED.

2.2.5 Common attributes

The general belief is that future UxV systems require the following attributes:

1. Reliable

2. Adaptive

3. Robust

4. Subservient to human peremptory “man in the loop” control

5. Consist of vehicle teams, chosen for their unique capabilities that complement team members.

These requirements stand in stark contrast to the existing state of the art (SOA) in autonomous research vehicles at most universities and labs. This requirements gap represents a disconnect between the open-ended research aimed at solving specific areas and currently fielded teleoperation systems.

2.2.6 Autonomy is not intelligence; intelligence is not autonomy

For most people, the two words autonomy and intelligence are considered synonyms. One truism that was extracted from the debate is the realization that autonomy, defined loosely as the ability to operate without human

intervention, is not necessarily intelligence. At some complexity layer, it is possible to bound the problem enough to automate an arbitrary platform. Since intelligence is not necessary, it must be possible to make autonomous systems out of the traditional assortment of control architectures.

Reactive control architectures (systems lacking deliberative strategies, ontologies, planning, "reason", etc.) can operate without supervision [29, 30]. This research dead-ended in part because of the local minima problem. Deliberative systems met the same fate, even as far back as Stanford/SRI's SHAKEY [24], because they lacked responsiveness to an unconstrained environment. Hybrid approaches, melding reactive and deliberative strategies, are making headway, but progress is slow because of the problem's enormity.

Given enough complexity, it is possible to develop enough control architecture to automate a UxV for niche applications. This strategy then rests on a complexity solution, perhaps by adopting a learning model, for example, to manage the adaptation to more complexity. Learning, in general, may hold the promise of simplifying the problem without constraining the solution. Again, a system that learns is not necessarily intelligent, yet it may be autonomous.
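The hybrid strategy described above, a deliberative planner with a reactive layer able to veto its commands, can be sketched in a few lines. The grid world, the naive straight-line planner, and the stop-and-sidestep reflex are toy assumptions chosen only to show the structure; no DRDC system or specific architecture from the literature is implied:

```python
def plan(start, goal):
    """Deliberative layer: naive straight-line waypoints (stands in for A*, etc.)."""
    (x, y), (gx, gy) = start, goal
    path = []
    while (x, y) != (gx, gy):
        x += (gx > x) - (gx < x)   # step one cell toward the goal on each axis
        y += (gy > y) - (gy < y)
        path.append((x, y))
    return path

def reactive_step(pos, waypoint, obstacles):
    """Reactive layer: veto the planned move if it would hit a sensed obstacle."""
    return pos if waypoint in obstacles else waypoint

def run(start, goal, obstacles, max_steps=50):
    """Hybrid loop: deliberate, let the reactive layer veto, replan when blocked."""
    pos, trace = start, [start]
    for _ in range(max_steps):
        if pos == goal:
            break
        path = plan(pos, goal)                         # deliberate
        nxt = reactive_step(pos, path[0], obstacles)   # react
        if nxt == pos:                                 # blocked: reflex sidestep
            nxt = (pos[0], pos[1] + 1)
        pos = nxt
        trace.append(pos)
    return trace

trace = run(start=(0, 0), goal=(4, 0), obstacles={(2, 0)})
print(trace[-1])  # the vehicle reaches the goal despite the blocked cell
```

The deliberative layer alone would drive through the obstacle; the reactive layer alone would stall beside it. The interplay, replanning from wherever the reflex leaves the vehicle, is what the hybrid approaches in the text rely on, and the "complexity solution" amounts to making both layers cope with far richer worlds than this toy grid.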

2.2.7 Drones

Some participants proposed the notion of drones over traditional robot system concepts. Before fully autonomous systems become fielded, it is proposed that smaller drone-like vehicles with a very specific purpose may become the future operational mainstay. In comparison to current robots, these vehicles would have very little external or proprioceptive sensing and minimal computing power, and would be controlled either by larger systems or by a nearby human operator in the loop. The missions of these drones will be similar to those of today's robots, and it is likely they would be disposable, single-function machines. They could be our eyes around a corner, or perhaps an explosive device on wheels. As time goes on we may see these drones becoming essentially mobile sensors for larger, bulky autonomous systems that don't have the mobility to get into certain areas. Or they could be a cost-effective vanguard screen for complicated semi-autonomous robots or manned systems. One idea advanced by Erickson [31] was the asymmetric robot team (ART), where disposable drones act as scouts or munitions commanded and controlled by complex UGVs behind the screen. This is arguably a more cost-effective way to maintain a large swarm of UGVs, as opposed to near-parity machines where each robot is expensive and a considerable loss.

While the robotics research community adopts a more laissez-faire attitude to vehicle control, it is a presumption in military robotics that all machines will have positive control over-arching all other control paradigms. The analogy was made to a cruise missile, arguably an original autonomous UAV: even if one cannot stop the missile from veering off target, it can be

self-destructed at any point. This is the kind of control expected by the military. Drones conform to the positive control requirements of military operations and are therefore a good fit to execute missions while we await autonomous robots that can take initiative on their own to execute a task. If drones get stuck or are disabled, they can be disposed of without losing focus or exposing humans to danger.

There are two drone examples in the works at DRDC. The miniature remote neutralization vehicle (mini-RNV) and the Teleoperated Air-dropped Demolition Munition (TADM) are projects that use drone-like miniature robotics as end-effectors for human operators. Each miniature UGV has minimal capability and therefore sets minimal operator expectations.

2.2.8 Evolutionary acceptance strategy

The prevalent process for ASD is, in general, the forced adoption strategy, where the procurement intention is to drop complete systems into Army units. This has some tangible benefits: withholding immature technology until it is ready allows the most time for development and logistics support, and benefits from other trends like miniaturization. On the other hand, forced adoption has some serious disadvantages. If the technology is not ready, and with current autonomous systems that is certain, then the soldier's performance expectations for so-called autonomous robots will be difficult to satisfy. In that case, there is a risk that those soldiers will not trust the technology, leading to reduced performance or robot abandonment. The impact of this might be a generation of robot mistrust amongst soldiers, who may believe that the extra support robots need is beyond their focus. That would gravely impact future system introduction.

An alternate strategy proposed, the evolutionary acceptance strategy, is less ambitious about impact. Rather than withholding technology until machines are purportedly autonomous, introduce machines that are not fully autonomous into dangerous, dull, and dirty roles early and let their evolution fit the soldier's needs. Fit the robot's capabilities to a limited role, with the soldier's understanding, and then introduce evolutionary improvements in upgraded or redesigned UMS versions. This thinking parallels the realizations of Blackburn et al. [28] at SPAWAR about automaticity (semiautomatic) and reinforces this conclusion. Blackburn et al. [28] also proposed a way for evolutionary expansion.

It is an observation from current missions that soldiers are employing teleoperated UGV in wider roles, with positive effect, beyond the EOD/IEDD application. These systems offer no real autonomy and limited self-governance, yet soldiers hold positive expectations and achieve positive outcomes because the roles and limitations are understood. Limited autonomy still

puts robots in harm's way ahead of soldiers. A tool at hand with restrictions is preferable for soldiers to an uncertain wait for autonomous tools. This seems to be the justification for the advancement of the small UGV (SUGV) in the FCS plan, to be ready by 2008 and deployed in 2010 (footnote 5).

The most important benefit of the evolutionary acceptance strategy is that it provides a communication channel between soldiers in the field and scientists at the bench. The complexity of the autonomy problem is almost without bound, a reality that hampers realistic simulation and modelling by scientists. An understanding of applications in the field allows scientists to limit the problem complexity for current implementations. Adapted solutions can then evolve with the changing needs of current and future missions.

This strategy is not, as it may appear, a capitulation; in fact, it is perhaps a more honest and practical way to meet the end state. There is consensus that the autonomy goal is coming; the question is which strategy will arrive there first.

2.2.9 Legal Implications

There were some concerns raised about the law and autonomy, and questions posed with no easy, straightforward answers. Despite this, it is important for anyone preparing to use autonomous systems to consider the consequences in all areas, including under the law. Can a company that purports to sell an "autonomous" UVS prove that the system is autonomous under the law? If an autonomous robot accidentally injures a soldier, can the company demonstrate that it designed in enough safety? In fact, if the robot exhibits self-preservation, a definite trait of autonomy, then can we accept accidental human casualties? Does the military risk litigation for not applying Asimov's laws of robotics [32][33][34]? How can one prove that an autonomous system meets liability requirements, including strict liability? It was proposed that autonomous systems would be held to a higher standard than soldiers, which is an onerous requirement considering that SOA UxV systems are still orders of magnitude less complex and capable than humans. If UxV systems are held to a higher standard than human error allows, do companies consider the risk of fielding them?

These are open questions, but they indicate the attractiveness of systems designs where a man in the loop compensates. Indeed, for legal reasons, it may be practical to always keep a man in the loop for the purposes of liability.

2.3 Critical Research Areas

The following research areas were singled out by participants as being important to the establishment of full autonomy for UxV systems. This is not an all-encompassing list,

5. http://www.armytimes.com/news/2007/04/defense_sugv_070424/

but it is believed to be a reasonable research subset. The important realization is that until these areas are mature, fully autonomous UxV systems will not exist. Each area is briefly summarized in the following subsections.

2.3.1 Communications

An unintended consequence of fielding large numbers of UxV is that the available bandwidth per machine decreases. For small numbers of UxV systems this is not a problem, but for larger systems like massive swarm networks the bandwidth per machine approaches zero (even local communication schemes become noise/interference for the surrounding swarm nodes). It is also apparent that the closer the machines are, the more interference from team communications will prevent local agents from communicating. As machines reach the limit of communications range, latency increases as each message is passed from member to member. This is a very real problem: how does one compensate for the communications bottleneck as numbers grow larger? Higher-bandwidth data, like unfused/raw sensor data, increases the demand per swarm member. To make matters worse, as machines use more abstracted object data, from middleware like MIRO and CORBA, payload bandwidth is further reduced to a fraction of the original. How can a large swarm collectively solve a problem when its ability to collaborate is reduced with every active member, or limited to one member speaking at a time? Methods such as delegated pre-processing/fusion, compression, ad-hoc networking, priority messaging, local routing, and gateways all reduce the burden to a degree, but not to the point where it is negligible.
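The scaling argument can be made concrete with a back-of-envelope model (an assumption for illustration, not a measured figure): N members fairly share a channel of capacity B, and each member pays a fixed protocol overhead h out of its share.

```python
# Toy model of per-member payload bandwidth on a shared channel.
# The channel capacity, member count, and overhead are illustrative numbers.

def usable_bandwidth_per_member(total_bps, n_members, overhead_bps):
    """Payload bandwidth left per member on a fairly shared channel."""
    share = total_bps / n_members       # fair share of the channel
    return max(share - overhead_bps, 0.0)

# A 1 Mb/s shared channel with 500 b/s of protocol overhead per member:
for n in (2, 10, 100, 1000, 2000):
    print(n, usable_bandwidth_per_member(1e6, n, 500.0))
```

Under this model the per-member payload collapses monotonically as the swarm grows and hits zero once the fair share no longer covers the overhead, which is the bottleneck the paragraph above describes.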

In the military context, systems must also survive jamming, triangulation, radio silence, and human positive control at every point. These constraints further reduce the reliability of communication systems.

Without improved communications, it is unlikely that large numbers of swarmbots will be practical. One foundation assumption of swarm theory is that emergent behaviour will appear from larger groups of simple members. If group communication, and hence cohesion, cannot be maintained, then swarm theory will fail.

2.3.2 Perception

Perception is the process of seeing what is where [35]. The proper interpretation of sensor data, sensor fusion, is a large field encompassing many varieties of sensed data. In all of those fields, perception is a critical and unsolved component. We know that our visual system can understand confusing and even contradictory information through many processes, and that it develops correct models of the world most of the time, in all weather and

visibility conditions. We know that there are quasi-independent modules within the visual framework that can be studied and perhaps replicated individually. Marr [35] elaborated on nine processes that could be used to interpret visual data. We know that throughout the visual process large amounts of data are vetted for small clues and sub-regions where our attention is important; in some ways our vision works because it is better at discarding and ignoring information than at collecting it. The most intriguing factor about human vision is that it transforms two-dimensional data almost entirely, if not completely, into 2.5D sketches and then into 3D representations. What we do not know is what a complete visual system design looks like, or the relevance of every module.

SOA perception systems for robotics consist of one or a few processes which make specific assumptions about the structure and content of the environment. When these assumptions are invalid, the systems become prone to failure. Unlike their human counterparts, these systems are unable to adapt their perception to a new environment. Research into context and scene recognition holds some promise for allowing these systems to better adapt, but it is as yet unproven. Furthermore, a large amount of the research focuses on attempting to understand how humans perform such tasks, and has not been adopted by the robotics field in general. A number of clever algorithms have been implemented with great success to perform simple, constrained tasks. It was suggested that an evolution in perception systems will occur whereby these algorithms are combined to allow for more robust and capable perception systems. This suggests some form of artificial intelligence which is capable of assessing which of these algorithms are appropriate to utilize given the task, environment, and robotic constraints.

The majority of UGV perception systems are geo-centric in that they attempt to construct an internal representation of the physical structure of the environment from which actions can be planned and executed. Other tangible elements such as texture, colour, objects, etc., are often ignored or underutilized. Bandwidth, computational power, and complexity are roadblocks to the development of perception systems capable of exploiting these features. Furthermore, perception systems are only as good as the sensory data they have to work with. Despite the availability of low-cost, high-accuracy video and range sensors, UGV perception systems are still unable to process sensory data at human-level rates, and often suffer from a lack of data, especially at far distances from the platform.

Vision is a cornerstone of our understanding of the world, and video is one of the highest-bandwidth mechanisms for sensing. Range and distance sensing using time-of-flight (TOF) sensors has not demonstrated enough performance to address the perception problem. TOF sensors are also active, as opposed to passive cameras, and therefore allow for detection by adversaries in

military operations. Without an artificial equivalent of perception, it will be difficult to advance autonomy.

2.3.3 Cognition/Artificial Intelligence

Despite over 60 years of artificial intelligence work, we still do not have intelligent computers. There are application-specific computer checkers and chess champions, protein folders, and massively parallel computer clusters for many purposes, but there is no computer that can learn these diverse skills and apply them to other fields. Today's supercomputers can perform teraFLOPS (trillions of floating-point operations per second), and yet we still lack the missing piece that transcends raw processing power. Despite their raw capability, these supercomputers cannot think like a human.

Machine learning, as described in Section 2.3.9, is a promising research field that has the potential to increase complexity without a priori programming. There are many computational paradigms that could serve as the basis for computer thought, but without intelligence the field will languish.

2.3.4 Complexity/ Meta algorithms

One estimate of the lines of code (LOC) needed for the FCS program is 63 million LOC (footnote 6). Considering that an early version of the NASA space shuttle had 1 million LOC, that represents almost two orders of magnitude increase in complexity. This by itself poses difficulty for eventual system operation. The basic problem can be couched thus: how can one person, or a small group of people, understand and predict the inner workings and side-effects of all those interacting components? If one considers that today's automobile has on average approximately 40,000 parts, not including software, yet many of these are identical instantiations (bolts, screws, wires, etc.), then the complexity span is reasonable for a small group. This is not necessarily the case in mobile robotics research. As explained in Section 2.4, the DARPA Grand Challenge teams average around 29 personnel, in general larger than university robotics research teams but similar to the SHAKEY [2, 24] project in earlier mobile robotics research. As the need for more modules increases, so too does the need for specialized skills and understanding. Once breakthroughs are made and encapsulated into subsystems, the complexity is lessened. However, once a capability is achieved, the goal posts are moved further, until the current solutions are no longer within capability.

Without mechanisms to handle complexity, there may be a plateau effect for research: systems designs reaching a critical complexity mark may cease progress until some of the problem is simplified. It would be important for future systems to incorporate complexity management into the system itself.

6. http://www.govexec.com/features/0507-01/0507-01s3.htm - "Fighting Folly", 5/1/07

A self-rationalizing system might mitigate this complexity barrier effect. Some potential solutions to complexity are learning and wise selection of software infrastructure (Architecture for Autonomy).

2.3.5 Visual Simultaneous Localization and Mapping (VSlam)

A problem common among UGV is how to accurately register sensor measurements to a local or global frame of reference. Typically, the homogeneous transformations which govern this registration rely on accurate measurements between coordinate frames. The problem is further aggravated by vehicle motion, timing issues, wheel slippage, inaccurate pose estimation, etc., and can often lead to incorrectly registered sensor data.

An area of research which has the potential to alleviate these problems is Vision-Based Simultaneous Localization and Mapping (VSlam). These algorithms rely on vision-based techniques to extract unique landmarks or features from visual imagery, which can be used to improve vehicle localization and mapping. Stereo vision or structure from motion is often used in conjunction with the feature detector to retrieve the 3D structure of the environment. VSlam has been successfully implemented on UGV using both the Harris detector [36] and the Scale-Invariant Feature Transform (SIFT) algorithm [37] developed by Lowe, though the SIFT algorithm has proved more robust to variations in scale, rotation, and lighting. While many VSlam implementations utilize odometry as an input to predict motion, successful implementations have been developed that estimate this motion by tracking features through several video frames, matching corresponding features, and estimating the motion necessary to bring these matched features into alignment [38, 39]. In these cases, visual data from the camera need not be registered to a common frame of reference, and thus sensor registration issues are alleviated.
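The alignment step can be sketched in a few lines (our illustration of the general least-squares principle, not the algorithms of [38, 39]): given matched 2-D feature locations from two frames, the rigid rotation and translation between them can be recovered in closed form via the singular value decomposition.

```python
import numpy as np

# Sketch of least-squares rigid alignment (Kabsch/Procrustes) of matched
# 2-D feature sets P -> Q; illustrative only, not the cited implementations.
def estimate_rigid_motion(P, Q):
    """Recover rotation R and translation t so that Q ~= R @ P + t."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                 # 2x2 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = cQ - R @ cP
    return R, t
```

In a real VSlam pipeline the matched pairs come from a feature detector and matcher, outliers must be rejected (e.g. with RANSAC), and the motion is three-dimensional, but the core alignment computation has this shape.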

Large advances have been made in the field in recent years; however, VSlam algorithms are still in their infancy. Many of the algorithms are too computationally expensive and have slow update rates, which limits their viability. In addition, slow camera movements are a necessity, as significant overlap between successive images must be present to provide a good match. This can severely limit the mobility of the UxV running the system. Wide-baseline VSlam or omni-directional VSlam could help alleviate these problems and is a current research gap. Finally, VSlam algorithms are particularly brittle in environments where few features are present (an absence of texture). While these scenarios will always exist, research into methods to augment VSlam techniques in such situations should be pursued to improve system robustness.

2.3.6 Complex Sensing / Video

Complex sensing refers to the evolution of today's sensors into more advanced, perhaps intelligent, versions that automatically compensate for changing conditions so the data do not suffer degradation. It is well known that sensor fusion cannot recreate information lost at the transducer, so such loss is never recovered in post-processing. Sensor systems that attempt to adjust to conditions, for example automatic gain compensation (AGC), are for our purposes complex sensors. The more complex the sensor system, the more capable the sensing.

For example, squinting cameras would allow autonomous systems to work in glare by restricting input to unsaturated levels. Eyelids on digital cameras, coupled to an autonomic response to light levels, would create a more complex sensor than a camera alone. The goal for visual sensing is, in essence, to mimic the complexity of our own visual system - a system that has other subcomponents like eyelids, eyelashes, six muscle groups for articulation, and even cooling systems for cones. Our sensory systems are constantly adapting to conditions - Weber's Law indicates how time and intensity vary the perceived stimulus, in general logarithmically - and it must be so with artificial faculties like machine vision.
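A minimal sketch of the idea (hypothetical numbers, not a real sensor driver): a gain-control loop at the transducer nudges its own gain so the output stays near an unsaturated target level, rather than leaving all compensation to downstream software.

```python
# Toy automatic gain compensation (AGC) loop; the target level, adaptation
# rate, and input values are illustrative assumptions.

def agc_step(gain, sample, target=0.5, rate=0.1):
    """Nudge the gain so that gain * sample approaches the target level."""
    level = min(gain * sample, 1.0)          # 1.0 models sensor saturation
    return max(gain + rate * (target - level), 1e-6)

gain = 1.0
for sample in [0.9] * 50:                    # sudden glare: bright input
    gain = agc_step(gain, sample)
# after adaptation, gain * 0.9 settles near the 0.5 target
```

The point of pushing this loop into the sensor itself is that the downstream perception software then sees a well-conditioned signal and can concentrate on higher-level interpretation.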

The lack of complex sensing in today's sensors has forced a great deal of automatic compensation into software. Without this downward delegation of compensation processing, the entire software code base inside the computer must be larger. The human visual system compensates autonomically, which allows concentration on higher-level tasks. Autonomous systems will need the same level of autonomic response.

2.3.7 Teamwork / Teaming

Before we can have a reliable autonomous multirobot system, we will need a very reliable single autonomous system. Some multirobot systems are still at a stage of explicit cooperation, or of erratic functioning with stigmergic communication yielding emergent results. With these results it is extremely difficult to prove convergence except in the most controlled of environments.

The near future of multirobot systems is teleoperation. The development of control software to allow a single operator to control multiple robots will be very useful. Initially these robots will have limited intelligence, but as the technology for reliable single autonomous robots is developed, we will be able to use it to make it easier for a single operator to command and control a team of robots from a single command station.

As technology continues to be developed we will then see the first

autonomous multirobot systems in the form of a large leader or controlling robot that carries the brains, commanding several smaller functional robots with limited intelligence. This leader robot will essentially use the smaller follower robots as its remote sensors, remote weapons, and remote actions. The leader robot will initially be a teleoperated vehicle, but as technology advances it too will take on more and more autonomous roles.

2.3.8 Learned Trafficability

UGV operating in outdoor environments must traverse unstructured terrain. This terrain is diverse in nature and contains natural obstacles such as rocks, brush, berms, and low-lying wet areas. Outdoor terrain is not static, as it varies on a seasonal basis due to the life cycles of natural vegetation. Additionally, outdoor terrain may change appearance due to variations in lighting that result from the Sun's relative position and from weather conditions such as cloud, fog, or rain.

The tremendous diversity associated with outdoor terrain has long caused researchers considerable grief, as developing classical terrain classification algorithms has proven to be a very difficult, if not impossible, task. Traditionally, researchers have avoided this problem by relying upon ranging sensors, which provide data to construct 2 1/2-D or 3-D world representations. Although geometrical representations have been used extensively, the low data rates associated with laser rangefinders and the inconsistencies associated with stereo vision have limited ground vehicle velocities. Additionally, geometrical representations do not encode trafficability information, such as soft, unnavigable soil or traversable grass.

Researchers have attempted to address the trafficability issue by segmenting the image and classifying the resulting regions using statistical measures such as confidence levels or transition probabilities. Image segmentation using hue information [40] and segmentation on texture [41] are examples of this approach. Jasiobedzki [42] took a different approach, where the intensity images from a laser rangefinder were used to detect driveable floor regions in an indoor environment. Other research [43, 44] has attempted, using hyperspectral or multispectral images, to classify the ground surface composition, with limited success.
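The hue-based approach can be illustrated with a deliberately minimal sketch (the hue band and toy values are hypothetical, not the thresholds used in [40]): pixels are labelled by whether their hue angle falls in a "vegetation" band.

```python
import numpy as np

# Minimal illustration of hue-based terrain labelling in the spirit of [40].
# The green band and the toy hue image are illustrative assumptions.
def segment_by_hue(hue_deg, green_band=(70, 170)):
    """Label each pixel: 1 = vegetation (greenish hue), 0 = other."""
    lo, hi = green_band
    return ((hue_deg >= lo) & (hue_deg <= hi)).astype(int)

hue = np.array([30, 35, 100, 110, 120, 40, 90])   # toy hue image (degrees)
labels = segment_by_hue(hue)
print(labels.tolist())                            # -> [0, 0, 1, 1, 1, 0, 1]
```

A real system would segment contiguous regions of like hue and attach a statistical confidence to each region's label; the fixed band here is exactly the kind of brittle, non-adaptive assumption that motivates the learned approach described next.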

The research at DRDC focuses on "Learned Trafficability" [45, 46], where a UGV learns from experience in a more human-like manner. The learning component is critical, as outdoor terrain varies regionally, seasonally, and even daily due to lighting conditions. Traditional, non-adaptive algorithms are not well suited to these conditions and tend to perform poorly. On the other hand, adaptive algorithms, such as the learning paradigm, allow a system to adjust to the current environmental conditions using a feedback mechanism. Although learning algorithms are attractive, they are difficult to implement successfully

under real-time conditions. DRDC's early research in this area well illustrated these difficulties [45]. This research unsuccessfully tried to "perceive" terrain types using eigenimages and to classify them using a neural network.

Dahlkamp and Thrun developed a self-supervised road detection technique for the Stanley UGV [47]. This technique can be viewed as a type of trafficability classification, as the algorithm differentiated between road and off-road terrain. It uses colour camera images and laser rangefinder data to create a sky-view drivability map, which significantly extends the lookahead distance. The lookahead distance is a key driving strategy parameter, as it limits a vehicle's maximum speed. The technique learns the relationship between camera data and laser rangefinder data, and then extends the drivability map using this learned relationship.
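The self-supervised idea can be sketched as follows (a hedged illustration of the principle; the details differ from Stanley's actual pipeline): fit a colour model to near-field pixels the rangefinder has verified as drivable, then score far-field pixels against that model to extend the drivability map beyond laser range.

```python
import numpy as np

# Sketch of self-supervised drivability classification in the spirit of [47].
# The Gaussian colour model and threshold are illustrative assumptions.
def fit_colour_model(drivable_rgb):
    """Mean and inverse covariance of RGB values lidar has verified drivable."""
    mu = drivable_rgb.mean(axis=0)
    cov = np.cov(drivable_rgb.T) + 1e-6 * np.eye(3)   # regularized
    return mu, np.linalg.inv(cov)

def drivable(pixels, mu, inv_cov, thresh=9.0):
    """Mahalanobis test: small distance -> colour consistent with road."""
    d = pixels - mu
    m2 = np.einsum('ij,jk,ik->i', d, inv_cov, d)      # squared distances
    return m2 < thresh
```

Because the near-field labels come from the rangefinder rather than a human, the colour model can be refit continuously as lighting and surface appearance change, which is the feedback mechanism that makes the approach adaptive.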

Although the real-time learning accomplished by Stanley was geared towards road detection, researchers at DRDC believe that learning, in general, is a critical component in creating successful autonomous UGV. With this principle in mind, DRDC researchers are creating a learned trafficability system that will have generalized learning capabilities. This means it will be able to learn numerous terrain types, such as asphalt and gravel roads, dirt trails, and traversable vegetation, and to differentiate between soft or wet ground and traversable ground.

2.3.9 Learning and Versatility in Machines

Autonomy depends upon the development of more versatile intelligence, rather than the isolated, single-purpose, and restrictively defined intelligence currently demonstrated. For unmanned vehicles this means a movement away from prescribed interpretation schemes and behaviours towards more flexible, machine-generated solutions.

Planning with learning and discovery

Mission planning systems for unmanned vehicles will need to understand not only the world, but conditions and situations that are unstructured and cannot be limited to what is describable beforehand. Planning will need to incorporate newly discovered information and techniques, and quickly refine plans in both failure (forced re-plan) and speculative (voluntary re-plan) situations.

Generalized discovery and learning

The ability to learn from experience and to represent that knowledge in a way that can be adapted and utilized in new situations is vital for machines to work in unstructured environments and situations. It would be desirable that

learning occur on un-formatted sensor streams and be capable of discovering relevant relationships (i.e., action and effect).

Learning of utility

Current learned trafficability systems learn terrain characteristics by associating the appearance of distant terrain with past experience of directly traversing that terrain, or with the sensed shape of similar terrain. These concepts need to be extended to allow machines to discover the utility of other features in their world, understand the relevant actions and their effects, and use that information in planning systems and other learned behaviours.

Unified learning from many sources

It would be desirable for a learning system to learn from many different techniques, sources, and formats. For instance, learning by trial and error, imitation, and direct instruction will all be required. Furthermore, the ability to quickly assimilate machine-based information (digital maps, rules) with more flexible learned knowledge would allow a machine to attain higher levels of capability faster.

Concurrent and multi-horizon learning

It is common that locomotion control systems will not only need to perform more than one task at a time, but will need to concurrently learn any number of tasks. For example, locomotion balance, leg phasing, footfall recognition, obstacle negotiation, and path planning will need to be learned concurrently.

Apprentice systems

Teaching learning machines through their observation of experienced humans will allow human handlers to influence what is learned and to be more comfortable with the outcome. This would result in machines that affect the environment in ways more familiar and acceptable to human teammates.

Loss of machine-based standardization

A major benefit of machine-based communications can be the inherent standardization of communications. Standardization reduces misinterpretation. As machines become more autonomous, and more of the interpretation is done using machine-derived interpretation routines, it would be natural for machines to detect different things and to label the same things differently, resulting in non-standard labels and language between machines.

Collective standardization will be needed to ensure a common language and understanding between cooperating individual machines.

Learning the reduced sophistication solution first

Humans possess far more degrees of freedom (DOF) than are necessary for simple walking (or any other specific task). This high DOF count allows for greater optimization or finesse, and gives great flexibility to deal with extreme terrain and joint failures. To reduce complexity, humans often learn the simple solution first and then add in additional degrees of freedom to learn the finesse solution. For instance, an infant learning to walk often starts with a stiff-legged gait; once that has been mastered, additional DOF, such as flexing the knees, are introduced and a better solution is found. Controlling the number of DOF available to artificial learning systems is very important in avoiding the curse of dimensionality. Learning by incrementally increasing the number of DOF and the sophistication of the solution has the potential to find good solutions in reasonable times on high-DOF systems.
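A toy illustration of this staging (the cost function, parameter names, and search settings are entirely hypothetical): random-search a one-DOF "stride" parameter first, then unlock a second "knee" DOF and refine from the simple solution rather than searching the full space from scratch.

```python
import random

# Illustrative staged-DOF learning; gait_cost is a made-up stand-in for a
# real locomotion objective, minimized at stride = 1.0, knee_flex = 0.3.
def gait_cost(params):
    """Hypothetical gait cost; a missing knee DOF defaults to stiff-legged 0."""
    stride = params[0]
    knee = params[1] if len(params) > 1 else 0.0
    return (stride - 1.0) ** 2 + (knee - 0.3) ** 2

def random_search(start, iters, step, rng):
    """Accept-if-better random perturbation search over the unlocked DOF."""
    best = list(start)
    for _ in range(iters):
        cand = [p + rng.uniform(-step, step) for p in best]
        if gait_cost(cand) < gait_cost(best):
            best = cand
    return best

rng = random.Random(0)
stage1 = random_search([0.0], 200, 0.2, rng)            # learn stride only
stage2 = random_search(stage1 + [0.0], 200, 0.2, rng)   # then unlock the knee
```

The second stage starts from the mastered stiff-legged solution, so it searches a small neighbourhood of a good point in the higher-dimensional space instead of the whole space, which is the dimensionality argument made above.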

Self-generating skill hierarchies and frameworks

Hierarchies are useful and commonly used for reducing the dimensionality of the problem to be learned. Handcrafted hierarchies represent prior information that may not always be available. They also constrain the learned solution to one that is allowed within the handcrafted and unchanging hierarchical framework. DRDC has performed some of the basic research on self-generating hierarchies for reinforcement learning and plans to extend it to learning locomotion.

The confluence of discrete and continuous representations

Both continuous and discrete (abstracted) representations need to co-exist and be usable by learning algorithms. Ultimately, at the low end everything is a continuous signal, and at the high end everything is discrete symbology. Between these extremes is a mixture of interdependent continuous and discrete representations. For instance, balancing a walking robot requires continuous signals to the leg actuators, while the phasing of leg movements is discrete (left-right-left...), and beneath that phasing representation are continuous representations for actuator positions and contact forces. Learning algorithms must learn in both representation spaces simultaneously; for example, the leg phasing must be learned at the same time as the continuous signals for balance. These operations are coupled and use some of the same degrees of freedom, which must be automatically deconflicted or fused in a mutually satisfactory (if not optimal) way.

Learning and the need for failure

Many advanced AI methods are a result of the need to move out of the regular, deterministic, niche-based world of conventional AI and into the irregular, stochastic nature of the real world, to create broadly applicable machine intelligence. These new approaches are often iterative solutions and involve learning. When these artifacts are combined with stochastic and irregular environments and situations, such autonomy systems will fail; but they will fail more like a human than a machine. That is, they will fail as a result of an insufficiently representative training regime, of extremely novel situations overtaxing the machine's ability to extrapolate from its niche (to apply its common sense), of an insufficient ability to adapt, learn, and keep up with rapidly changing situations, or of falling victim to perceptual illusions.

Figure 2: Little Dog Platform (Courtesy of Boston Dynamics).

Learning locomotion

Machine learning will serve two vital roles in the control of legged vehicles in complex terrain. The first is the acquisition of predictive models of terrain characteristics that cannot be known and supplied a priori. The second is the discovery and experience-based improvement of control strategies for commanding the actuators of the vehicle to move through irregular terrain and surmount obstacles while balancing many constraints. This will require discovering cause-and-effect relationships between the vehicle's drive train and the terrain, and representing them in ways that can be chained together and generalized to successfully control leg actuators over complex motions. The capabilities to acquire and represent knowledge in usable formats, and to discover strategies to utilize that knowledge towards a goal, are vital to attaining higher

Figure 3: Extreme Terrain Board (Courtesy of DARPA and CMU).

autonomy of legged vehicles in complex terrains. The key concept is that it is impossible to know ahead of time all world knowledge, situations and problems and script solutions ahead of time. Rather, machines must be able to acquire knowledge and use that knowledge to solve problems in a self sufficient and versatile manner. Boston Dynamics’s Little Dog robot, shown in Figure 2, and DARPA’s complex terrains surrogates with an extreme example, shown in Figure 3, are used at DRDC to research learning locomotion.

2.3.10 Intelligent Mobility

Despite the large body of research in robotic locomotion, UGV exhibit extremely simple behaviours, especially when compared to the human ability to move in the world or to the numerous examples of movement found in nature. The research supporting robotic locomotion has generally been focused on a particular set of assumptions or a specific morphology, and advances have largely been confined to positioning and navigating within structured environments. The mobility problem for UGV, defined in this context as autonomous maneuverability in unknown, highly complex environments, remains an open problem. Unlike traditional control problems, the practical tools necessary for intuitive and systematic controller synthesis are not readily available. This may be attributed to the fact that, although many control researchers were actively involved in robotics, the control community did not play a leading role in robotics throughout much of the 1980s and 90s. A 2002 panel report entitled "Control in an Information Rich World" [48] addresses the role that the control community may play, and the actions required to enable new breakthroughs in control research. In particular, with respect to the field of robotics, it notes the need to develop robots that can operate in highly unstructured environments. This necessitates advances in visual processing and scene understanding, complex reasoning and learning, and dynamic motion planning and control. It stresses that a reasoning and planning framework in these unstructured environments will likely require new mathematical concepts that combine dynamics, logic, and geometry in ways not currently available.

Assumptions of current control theory ill-suited to mobility

Automatic control systems play a critical role in many fields and are critical enablers in numerous locomotive examples, including automotive applications that provide improved vehicle handling using active suspensions, aerospace applications such as control systems for the rocket boosters that deliver orbiting satellites and safe space travel, and the flight controls of inherently unstable aircraft and UAVs. These accomplishments are made possible by R&D activities that have yielded theory and tools that handle many inputs, many outputs, complex uncertain dynamic behaviour, difficult disturbance environments, and ambitious performance goals. However, it is important to note that:

“... the control needs of some engineered systems today and those of many in the future outstrip the power of current tools and theories. This is so because our current tools and theories apply directly to problems whose dynamic behaviors are smooth and continuous, governed by underlying laws of physics and represented mathematically by (usually large) systems of differential equations. Most of the generality and the rigourously provable features of our methods can be traced to this nature of the underlying dynamics.[48]"

One might expect to apply existing control strategies directly to the mobility problem. However, UGV operating in highly complex unstructured environments must interact intimately with their surroundings if they are to successfully negotiate difficult terrain and obstacles. Much of nonlinear control theory relies in one way or another upon differentiation. Unfortunately, when contacts are made and/or broken by robotic vehicles that walk, gallop, jump, or change shape to interact with their world, the governing equations of motion become discontinuous. The assumption of differentiability of the equations of motion is then no longer valid, which explains the limited success in applying nonlinear control theory to the mobility problem.
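The discontinuity argument can be illustrated with a minimal hybrid model (all parameters are hypothetical): a one-dimensional hopping mass obeys ballistic flight dynamics until its leg touches down, after which a spring-leg stance model applies. The vector field switches abruptly at the contact guard, so the equations of motion are not differentiable there.

```python
# Sketch of contact-induced discontinuity: a 1-D mass on a springy leg.
# The acceleration law switches at the guard y <= L0 (touchdown/liftoff),
# so the combined dynamics are continuous but not differentiable there.
G, K, M, L0 = 9.81, 2000.0, 10.0, 0.5  # gravity, leg stiffness, mass, rest leg length

def acceleration(y, in_stance):
    if in_stance:                       # stance dynamics: spring leg + gravity
        return K * (L0 - y) / M - G
    return -G                           # flight dynamics: gravity only

def simulate(y=1.0, v=0.0, dt=1e-4, t_end=2.0):
    """Semi-implicit Euler integration of the hybrid hopping dynamics."""
    heights = []
    t = 0.0
    while t < t_end:
        in_stance = y <= L0             # guard condition: the discontinuity
        v += acceleration(y, in_stance) * dt
        y += v * dt
        heights.append(y)
        t += dt
    return heights
```

A smooth (differentiable) controller designed for either regime alone has no principled way to handle the switch; this is the structural obstacle the text describes.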

Intelligent mobility algorithms

Mobile robotic vehicles are often innovative and simple in their design, exploiting various modes of locomotion to address their environment and objective. In spite of these efforts, their practical application remains a challenge, primarily because it is difficult to plan for, coordinate, and control all of the required vehicle degrees of freedom. Human-scripted algorithms prove difficult and time-consuming to understand, design, and tune for diverse UGV that often possess multiple modes of locomotion. No unified framework exists for this problem, primarily because robotic vehicles are examples of underactuated nonlinear systems, for which few general solutions exist in the control community.

“While many of the theoretical elements needed for the systematic derivation of a theory for underactuated nonlinear control systems are known, they have not been widely and methodically studied from a practical perspective. Consequently, control of underactuated nonlinear systems remains an endeavor of the theoretician and not a practical tool of the engineer.[49]"

Application of intuitive and systematic controller synthesis for underactuated nonlinear systems may provide considerable insight and understanding of locomotive control systems. A unified framework for robotic control is needed to address the numerous mobility behaviours that a single UGV must be capable of in order to successfully negotiate unknown obstacles in their environment. Once these systematic methods are well understood, the strategies should be easily translated across a large number of UGV with diverse modes of locomotion.

Coordination of vehicle behaviours with the world

It is evident that the application of intuitive and systematic controller synthesis for underactuated nonlinear robotic vehicles has the potential to significantly improve UGV mobility characteristics. The resulting algorithms exploit the inherent dexterity of the platform, and while they represent a potentially significant contribution to the SOA, they do not fully solve the mobility problem. These behaviours must be mated with relevant world representation information to allow the UGV to interact intimately with its surroundings. Open-loop behaviours, which do not close the control loop with world representation information, are unable to meaningfully maneuver a UGV in the world. For a closed-loop system, world representation information must be made available to the controller. However, there exists a disconnect between the information provided by world representation generators and that required by the locomotion system for controller synthesis.

Mathematical models are essential in the analysis and design of traditional control systems and prove to be indispensable tools to the controls engineer. The next logical extension is to embed a mathematical modeller in an autonomous system. In this context, DRDC Suffield is utilizing Vortex by CMLabs Simulations Inc., a faster-than-real-time physics-based engine, as an on-board modelling tool. To fill the gap between the real world and the controller, relevant geometric features of the environment are extracted into a world representation, whose coordinates are passed to the mathematical modeller. A model of the UGV, including its dynamics, is then correctly positioned into this world model. The model now contains sufficient information, represented in a meaningful mathematical framework, to be used by the intelligent mobility algorithms. Controller synthesis may be performed faster than real time, allowing trials of candidate behaviours before implementation. The controller is able to formulate input/output relationships, and to calculate and make corrections to behaviour implementation for robust performance.
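The embedded-modeller loop can be sketched as follows. Vortex is a proprietary engine, so a trivial point-mass model stands in for the on-board dynamics model here; the function names and the cost function are illustrative assumptions, not part of any DRDC implementation.

```python
# Sketch of trying candidate behaviours in an internal model before acting:
# each candidate control sequence is rolled out through the model (faster
# than real time) and the one whose predicted end state best meets the goal
# is selected for execution.
def rollout(model, state, controls, dt=0.05):
    """Simulate a candidate control sequence through the internal model."""
    for u in controls:
        state = model(state, u, dt)
    return state

def point_mass(state, u, dt):
    """Toy stand-in dynamics: 1-D point mass, u is an acceleration command."""
    x, v = state
    return (x + v * dt, v + u * dt)

def choose_behaviour(candidates, state, goal_x):
    """Pick the candidate whose predicted final position is closest to the goal."""
    def cost(controls):
        x, _ = rollout(point_mass, state, controls)
        return abs(x - goal_x)
    return min(candidates, key=cost)
```

For example, offered accelerate, coast, and brake sequences from rest with a goal ahead of the vehicle, the selector picks the accelerating candidate; in the real system the rollout would run through the physics engine and the cost would encode terrain and stability constraints.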

Sensing for robotic locomotion in unknown highly-complex environments

In the practical application of control theory, the control engineer must determine what variables should be controlled and those variables that should be measured in an effort to make the system behave in a desired manner. The selected measurement variables should be those that have strong relationships with the controlled outputs and are dependent on the control objective, which may be the stabilization of an unstable plant, rejection of disturbances and/or to track reference changes. For traditional control problems such as temperature control, automobile cruise control or flight controls, these relationships are fairly intuitive. These relationships are also fairly intuitive in some of the inner control loops of a robotic platform that control motor torque for leg position or wheel speed.

These control concepts become more abstract at higher levels, where control is needed to produce desired robotic behaviours that yield improved mobility characteristics. Here, intelligent mobility algorithms must select what variables should be measured from the world in order to move the UGV successfully through its environment. Perception sensors will look at cluttered environments and produce, as an example, 3D scans of the world. Intelligent mobility algorithms must then extract useful information from this unrefined data. DRDC scientists are investigating a framework for perception strategies that will allow the UGV to move in and out of a wide variety of environments [50]. This information must be translated into a meaningful mathematical framework that can be used by the intelligent mobility algorithms. The variables measured from the environment will change depending on the specific UGV, and are complicated by vehicle speed, modes of locomotion, and mobility objectives. Specific examples of measured variables may include footfall distances for galloping, gap crossings for jumping, hill grades for energy management, or clearances for shape-shifting maneuvers.
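As a hedged illustration of extracting such measured variables (the thresholds and sampling assumptions are hypothetical), the following sketch derives a largest-gap estimate and a steepest-grade estimate from a one-dimensional terrain height profile:

```python
# Sketch: deriving mobility-relevant measured variables from unrefined
# perception data, here a 1-D terrain height profile sampled at fixed
# spacing (metres). Thresholds are illustrative assumptions.
def largest_gap(heights, spacing, depth_threshold=-0.5):
    """Longest run of samples below the gap threshold, in metres
    (a candidate input to a jump/gap-crossing decision)."""
    best = run = 0
    for h in heights:
        run = run + 1 if h < depth_threshold else 0
        best = max(best, run)
    return best * spacing

def steepest_grade(heights, spacing):
    """Maximum rise-over-run between adjacent samples
    (a candidate input to energy management)."""
    return max(abs(b - a) / spacing for a, b in zip(heights, heights[1:]))
```

In a full system these scalars would be computed from 3D scans rather than a 1-D profile, but the principle is the same: the perception stream is reduced to the specific variables the mobility controller needs.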

2.4 Technology Readiness

Vehicle(s)            Team Name        Institution               Estimated Team Size
Stanley               Stanford Racing  Stanford                  30
Highlander/SandStorm  Red Team         Carnegie Mellon           58
Rocky/Cliff           Virginia Tech    Virginia Tech             35
ION                   Desert Buckeyes  Ohio State                31
Alice                 Cal. Tech.       Cal. Tech.                50
KAT-5                 Team Gray        Gray Insurance            23
Golem 2               Golem Group      UCLA                      18
CajunBot              Team CajunBot    University of Louisiana   14
NaviGATOR             Team CIMAR       Florida University        17
Meteor                Mitre            Mitre                     18
Average Team Size                                                29.4
1965 SRI SHAKEY Team [2]                                         29

Table 1: Approximate Team Sizes for the DARPA Second Grand Challenge
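The average in Table 1 can be verified directly from the listed estimates:

```python
# Recomputing Table 1's average team size from the listed estimates
# (keyed by team name, as in the table).
team_sizes = {
    "Stanford Racing": 30, "Red Team": 58, "Virginia Tech": 35,
    "Desert Buckeyes": 31, "Cal. Tech.": 50, "Team Gray": 23,
    "Golem Group": 18, "Team CajunBot": 14, "Team CIMAR": 17, "Mitre": 18,
}
average = sum(team_sizes.values()) / len(team_sizes)  # 294 / 10 = 29.4
```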

This section describes NASA's Technology Readiness Level (TRL) metric and estimates the current level of UxV systems. The technology timeline starts with the insight of a visionary scientist(s) and matures through stages until the technology is used in mass-produced products. An excellent example of this process is the laser, which emerged from the research conducted by Theodore Maiman at Hughes Research Laboratories. Initially the laser was a scientific curiosity, but over time it has found its way into numerous applications and is now a multi-billion-dollar industry. This progression, from basic scientific concept through to operational usage, is neatly summed up by NASA's Technology Readiness Level Meter, as shown in Figure 4.

It can be argued that progress up the TRL scale is roughly correlated with team size, since the basic technology is integrated into ever more complex systems. The development of aerial vehicles provides a useful example (see Figure 5). The first powered flight was achieved by the Wright brothers, who, as a team of two, conceived, developed, and manufactured their first airplane. Flight's early development phase contrasts starkly with today's unmanned air vehicles such as the Global Hawk, which required a huge research and development team.

2.4.1 State of Unmanned Ground Vehicles

UGV research dates back over 50 years; an early example research platform is SHAKEY [1][2] from the Stanford Research Institute.

Figure 4: NASA Technology Readiness Levels

Figure 5: Wright Brothers (left) Global Hawk (right)

While the SHAKEY robot, shown in Figure 6, was approximately at TRL 1 to 2, Stanley, the UGV from Stanford that won DARPA's second Grand Challenge, seen in Figure 7, has a TRL of approximately 4 to 5.

2.4.2 Human team size versus complexity

The argument has been made that increasing TRL levels roughly correlate with more complex systems, and thus that larger team sizes are required. Although this line of thought seems reasonable, do the facts support this conclusion? An examination of the competitors in DARPA's Grand Challenge against an earlier system can indicate whether this is so. The Grand Challenge featured numerous teams, and each team's size was estimated by examination. Table 1 shows the team names and the estimated numbers of team members. The table includes a number of university teams, which often include numerous undergraduate students, and two private teams, Team Gray and Mitre, who had between 18 and 23 active participants. Assuming undergraduate assistance is part-time for the university teams, the average team size is approximately 29.4 members. In comparison, the original SHAKEY report from 1966 [2] lists a team count of 29 members. The original SHAKEY could navigate through office environments on the order of minutes; the DARPA Grand Challenge vehicles could navigate approximately 200 km of open terrain on the order of hundreds of minutes. The Grand Challenge vehicles themselves are more complex than the original SHAKEY platform, and their sensors and software are more complex and more capable than the SHAKEY system (STRIPS). It appears that system designs today, as they did back in 1965, compartmentalize complexity behind interfaces: one man's system is another man's subcomponent. Therefore, this single comparison does not demonstrate that increasing complexity requires increasing team sizes for robotics research. It should be noted, however, that the successful DARPA Grand Challenge research programs devoted approximately 30 members to the defined competitions and demonstrations.

Figure 6: Stanford Research Institute, Shakey Robot [1][2]

Figure 7: Stanford's Stanley UGV

2.5 The Autonomy Scale

There is a large body of work pertaining to autonomy in human psychology, and several autonomy scales have been proposed for measuring human behaviour [51], including Kurtine's Autonomy Scale [52], the Teaching Autonomy Scale [53], and the Rorschach Mutuality of Autonomy Scale (MOA) [54], among others. These are not yet suitable for the advancement level of modern autonomous mobile robotics; indeed, they are more applicable to the opaque nature of the human psyche. Given that we have the ability to reach into a robot's mind, a more suitable scale considers the role and scope of the machine as we make it.

One important agreement is that UxV autonomy must be compared and contrasted with the human systems conducting the same operation, not judged as behaviour unto itself. In the end, whether or not an unmanned system replaces or augments human systems is the litmus test for feasibility. There must be a gold standard upon which apples-to-apples comparisons are made.

Recent advances in UxV systems have resulted in proposed autonomy scales. Huang et al.'s ALFUS autonomy scale [55][3] describes in applicable dimensions the problem of quantifying UxV autonomy. The National Institute of Standards and Technology (NIST) ALFUS ad-hoc working group, composed of many DARPA labs and contractors, is aimed at refining the definition of autonomy scales and has produced a standard [3] that agrees with the three-dimensional model proposed during the symposium.

The ALFUS detailed model [3], adopted by NIST in NIST SP 1011 V1.1 (refer to Figure 8), has three primary dimensions: mission complexity, environmental difficulty, and human interface.

Figure 8: NIST ALFUS Detailed Model autonomy scale (Fig 2 from [3])

28 DRDCSuffield70 The ALFUS model agrees with the symposium’s proposed 3-vector for autonomy, as described in Figure 9 which also included mission complexity, human interaction, and environmental complexity. Since the symposium model is a premature co-alignment with the NIST standard, it makes sense to propose adoption of the pre-existing standard as opposed to creation of a complimentary one.

Figure 9: Symposium’s proposed 3-dimensional autonomy scale. Dimensions: Human Interaction (HI) Mission Complexity (MC), and Environment Complexity (EC)

If the ALFUS model is adopted, then there are other model refinements and definitions that may be included in the common references to autonomy. In addition, the ALFUS summary model, seen in Figure 10, reflects an aggregate viewpoint on the autonomy scale. The grading systems proposed to rank system autonomy are still flexible, based on current references, and open to wide interpretation. In the end the adoption of a common standard is the important agreement, the outcome numbers can always be re-evaluated in light of more applicable formulae.
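As an illustration only, the three-axis view can be captured in a small data structure. Note that ALFUS/NIST SP 1011 does not prescribe the equal-weighted aggregation used here; the weighting is an assumption for the sketch, consistent with the observation above that the grading formulae remain open to interpretation.

```python
# Sketch of the three-axis autonomy representation discussed above.
# The aggregation weighting is an illustrative assumption, not the
# method defined by the ALFUS/NIST SP 1011 standard.
from dataclasses import dataclass

@dataclass
class AutonomyVector:
    mission_complexity: float      # MC axis, assumed 0..10
    environment_complexity: float  # EC axis, assumed 0..10
    human_independence: float      # HI axis, assumed 0..10 (10 = no operator)

    def aggregate(self, weights=(1 / 3, 1 / 3, 1 / 3)):
        """Collapse the 3-vector to a single summary score."""
        axes = (self.mission_complexity, self.environment_complexity,
                self.human_independence)
        return sum(w * a for w, a in zip(weights, axes))
```

The value of such a structure is that the three axes remain separable for apples-to-apples comparison, while the aggregate number can be re-evaluated later under whatever formula the standard ultimately settles on.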

It would make sense for DRDC, and AISS in particular, to adopt the NIST SP 1011 autonomy scale conventions for future work. The advantages are standardized terminology, apples-to-apples comparison with US projects, an alignment that enhances interoperability, and a potential management tool for considering autonomous system proposals.

3. Discussion

3.1 Comments on Family of Future Combat Vehicles (FFCV)

In general it was considered that the FFCV UGV platform was the right size for UGV operation. The proposed UGV payload of 300 kg was deemed adequate. The modular chassis system, with plug-and-play equipment roles, was well conceived and allowed for more configurations than otherwise. The scope of the FFCV vehicles was ambitious, including the recce, logistics, countermine, and assault variants of the UGV chassis. There was debate about the proper level for the UAV components; some proposed moving the company-level fixed-wing UAV down to platoon level.

Figure 10: NIST ALFUS Summary Model (taken from Huang et al. [4] Figure 5)

The common autonomy level for the UGV vehicle variants would approximate the autonomy-lite definition provided above in section 2.2.2. The application-specific modular payloads, however, would contribute a complexity level equal to the overall vehicle autonomy in the case of the recce, assault, and countermine/IED variants. This suggests that the modular component systems will need as much development as the platform. While there is a significant amount of research and development to draw on for the platform, there is less available for the application-specific payloads.

Though there was no urban vehicle included to move with soldiers into buildings, this was an important element for helping the soldiers in clearance drills. It was recommended that the FFCV include a small UGV for dismounted infantry operations in urban areas. This small UGV should be able to climb stairs and lead/follow infantry into mouseholes and rubbled buildings.

3.2 Comments on Crisis in Zefra

Crisis in Zefra presents a future war scenario where autonomous robots, both aerial and ground, team with soldiers. For the purpose of discussion, the symposium primarily reviewed the unmanned systems in the science fiction story. The human-robot interface (HRI) presented can be described as unconscious/intuitive, so that soldiers did not fumble for information and the robots processed networked information so readily that control was neither lost nor conflicted. This is an idyllic system unlikely to be fielded by 2020.

3.2.1 A Gedankenexperiment

Principally, Crisis in Zefra is a Gedankenexperiment7, or thought experiment. The purpose of these exercises is essentially to challenge existing beliefs and techniques by posing hypothetical problems and strawman solutions.

3.2.2 A Thumbnail summary

In the story, Canada is performing a peace-making operation in the notional Zefra, a city state from a failed African nation. Agitators use a variety of passive and active aggressive technologies to destabilize the city and attack the international peacekeepers on the physical and moral planes. In response, Canadian Forces personnel use a variety of novel and unusual tools to restore order and defeat the insurgents.

3.2.3 Key Technologies

There are a number of UxV or UxV-related technologies employed in the Crisis in Zefra: Camels, Dragonflies/palm-sized helicopter drones, Aerostats, Strikebots, Scarabs, Smart Dust, and Swarmbots.

3.2.4 Aerostats

Aerostats appear fully capable of delivering on Zefra's proposed unmanned stationary communications and surveillance concept. Quoting one industrial proposal:

"Ultimately these UAVs will be 'parked' in the stratosphere at an altitude of approximately 20 km (65,000 ft). At that altitude, they will serve as a stable platform for telecommunications and remote sensing." – 21st Century Airships

3.2.5 Strikebots

Probably the most important device in the Crisis in Zefra is the strikebot. Unfortunately, it would require a difficult and unlikely marriage of capabilities, insomuch as the strikebot is more of a literary device than a realizable military machine.

7Also Gedankenversuch (Hans Christian Ørsted, 1812), Later Ernst Mach (1871)

The Strikebot is a neutral-buoyancy ground vehicle, meaning it has sufficient intrinsic lift to maintain a constant altitude and uses environmental contact (through legs) to maneuver. The machine comes with a variety of accessories, including cameras, manipulators and grenade launchers. The device appears to be electric, requiring recharging and long-range transport by a Scarab UGV.

Neutral buoyancy through aero (or aero-thermodynamic [56, 57]) thrust (no other mechanism is plausible at this scale) is power-expensive, making the Strikebot a distant technology if electrically powered. The story implies the manipulators are very powerful, again relying on large power density and lightweight electric actuators. High power-density schemes may be possible (such as H2O2 manipulators), but are distant. Current Medium Aerial Vehicle (MAV) technology is inefficient, awkward, and low-endurance when compared to the fictional Strikebot. Comparable helicopters exhibit considerable maneuver potential, but remain difficult to control, particularly in close quarters. This latter point makes the Strikebot even less probable: virtually all aero- or aerothermo-dynamic thrusting systems would find control difficult in such complex indoor surroundings.

3.2.6 Scarabs

The Scarab concept clearly mirrors the MULE concept from FCS and appears to be a networked, light freight carrying multi-wheeled vehicle capable of convoy, path planning, waypoint, and obstacle avoidance behaviours at minimum. There is no indication of the Scarab’s sensing capability, however if general purpose maneuver is considered, the Scarab must use some passive imaging to establish safe local routing.

Much of the Scarab is within the 10-20 year future, though general purpose maneuver in traffic and amongst pedestrians is unlikely in that time frame. Current systems rely heavily on active sensing (i.e. LIDAR) and passive sensing such as machine vision has been relegated to a supporting role 8.

3.2.7 Smart Dust

To date, the terminology and the reality of Smart Dust are widely divergent. Most "smart dust" applications are on the 2 to 3 cm scale, far from the sub-cm scale suggested in Zefra. However, it is likely that some form of low- or ambient-power sensing may become available. RFID and micromechanical technology strongly suggest that passive sensing with limited power will become routinely available within a decade. However, the "richness" or quality of the sensor stream is likely to remain rudimentary for the foreseeable future, limited principally to binary events (e.g. thresholded acceleration events such as footfalls) or, at most, scalar signal streams (e.g. acoustic or EM radiation time histories) over very short range (linked substantially to the intelligence-scale paradox described below). In theory, more complex correlated products (such as pseudo-imagery) will be available after post-processing.

8as in Stanford's STANLEY, where LIDAR gathered all geometry and video provided colour cues for velocity control

3.2.8 Swarmbots

“Swarmbots are a collection of mobile robots able to self-assemble and to self-organize in order to solve problems that cannot be solved by a single robot. These robots combine the power of swarm intelligence with the flexibility of self-reconfiguration as aggregate swarm-bots can dynamically change their structure to match environmental variations 9."

Similar concepts have been proposed for swarming air and seacraft. Zefra assumes a set of roving micro sensor platforms equipped with Video/IR/audio and fully networked. In general, the concept is plausible, but the intelligence-scale paradox may bar the way for the foreseeable future.

In this deliberative intelligence paradox, greater sensor resources, and therefore greater interpretive intelligence, are required to maneuver over increasingly complex terrain. Qualitatively, one may observe that as vehicle scales approach human size or smaller, environmental surface complexity rises 10, implying that smaller vehicles may require greater, not less, computing capability than larger vehicles. Hybridized reactive-deliberative systems may soften this paradox, however.

Swarmbots, as described throughout the novel, are golf-ball-sized six-legged crawling surveillance and (in some cases) explosive machines packed into boxes and deployed to reconnoiter ahead of the soldiers. The time for movement in the story seems implausible. A golf-ball swarmbot (estimated 35 mm gait stride for a 70 mm leg-to-leg distance) moving 35 mm per gait would require approximately 2857 gaits to cross 100 m on a flat surface. If that swarmbot could perform 100 gaits per second, it would take 28.57 seconds to cross a 100 m flat open area. Given that a human can run this in less than half that time, swarmbot employment becomes questionable. If the swarmbot cannot move fast enough ahead of an assault, it will not provide enough lead time while soldiers are exposed. If that is a reality, then employment for all scenarios would not be certain, prompting scenario assessment on a case-by-case basis.
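The gait arithmetic above can be reproduced directly:

```python
# Reproducing the swarmbot timing estimate from the text:
# a 35 mm stride, 100 gaits per second, crossing 100 m of flat ground.
stride_m = 0.035
distance_m = 100.0
gaits_per_second = 100.0

gaits_needed = distance_m / stride_m                 # ~2857.14 gaits
crossing_time_s = gaits_needed / gaits_per_second    # ~28.57 s
```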

9http://www.swarm-bots.org
10here defining complexity loosely as the total deviated distance per unit distance of progress along a linear trajectory.

3.3 Partnership with DLCD/DLSC

DLCD/DLSC provides a unique opportunity for partnership with DRDC/AISS for the purpose of crafting and presenting a unified UxV perspective. The DLSC role, which is analogous to the marketing department of large companies, has considerable overlap with DRDC’s mandate. While DRDC is tasked with science and technology, DLSC is tasked with changing the minds of the Forces, or at least presenting possible future operations, including new technology.

There is a danger, however, that DLSC presents an image of the future that is not realistic in terms of development expectations. This may cause an expectation gap between the science fiction and the reality when new systems arrive, as can be seen in section 3.2 above on the Crisis in Zefra novel: there are impracticalities and impossibilities in the Crisis in Zefra fiction. It is important that we work with project management to form a realistic appraisal of project success and payoff. Expectation management, tempering the idealized dreams of science fiction with the cold reality of advancement, is an important function that DRDC must embrace.

On the one hand, guiding DLSC can make it easier for DRDC to communicate with the Forces; on the other, it prevents wildly unrealistic project expectations that hamper advancement. For both reasons, partnership in areas of UxV representation is in DRDC's interest.

3.4 Comments on DARPA Urban Challenge/ Grand Challenge

It was agreed that the introduction of the DARPA grand challenge and subsequent Urban Challenge directed more attention and more effort to the autonomy problem than could have occurred by funding an in-house project.

The main advantages seen are a larger effort, by more teams, than could be obtained through a single contract, and the proven ability of competition to spur innovation. There are many examples: Lindbergh and the Spirit of St. Louis, Harrison and the Longitude Act, and the AK-47 rifle [58] (the winning competitor among Soviet assault rifle designs during WWII, a design that has impacted modern battlefields for 60 years). Competition draws participation from beyond the traditional mainstream research labs, and communicates the problem to a wider audience than a technical forum. Defeat at the hands of a challenge such as the Grand Challenge reinforces the difficulty of the problem for outsiders.

One main disadvantage is that competitions tend to force designs to limit complexity, or to exclude risky proposed designs, out of gamesmanship. The resultant systems, it can be argued, may or may not be generic enough to apply to other, less trivial implementations. This applies to both hardware and software, and can be seen in some sense in the CMU Tartan design for the DARPA Urban Challenge compared to SandStorm for the earlier Grand Challenge. The current design does not have the gimballed scanning system using a Riegl LIDAR [59] for long-range object detection that SandStorm carried for the Grand Challenge, which in the opinion of some is a step backwards in complexity. During the Grand Challenge race the SandStorm gimbal locked up, and abandonment of a more complex component could be interpreted as an unwillingness to try risky solutions. This is contrary to the goal of open-ended research and development.

3.5 Comments on COHORT

Quoting from the Deliverables section of the Cohort proposal:

"At the completion of Cohort DRDC will have a research hardened and demonstrable multi UxV and UGS system for performing coordinated reconnaissance, surveillance and operations support in complex/urban environments. Prototype UxV C2 systems that supports "delegation of authority" and "service on demand" concepts will be in place. The expertise and exposure gained will establish DRDC as a world class facility for vehicle and system intelligence. The use of UxV to clarify UGS responses, deploy and tend unattended ground sensors (UGS) will be demonstrated."

3.5.1 How will we do it?

From the proposal under Work Plan:

"Creating effective intelligent UxV for complex environments demands advances in world understanding and navigation to allow agile UxV to exploit cover and concealment. Cohort will develop vehicle intelligence and small UxV that navigate the 3D world of a ruined and fortified urban setting by climbing, walking, rolling, jumping, flying and changing shape, and, collaborate with low altitude UAV, flying through ’urban canyons’. A pending TIF harmonizes vehicle intelligence within a common operating architecture and proposes a flexible UxV C2 system for top down "delegation of authority" and bottom up "services on demand". Multi-vehicle task allocation and planning systems will allow teams of UxV to take actions to clarify the operating picture, maximize information gain, and perform tactical operations (tracking enemy, spoofing) by fully exploiting the synergies of UGV, UGS, and UAV."

The foregoing quotes sound ambitious. Careful reading distills the project into a few key points:

1. a system for performing coordinated reconnaissance and surveillance using a common operating architecture.

2. a prototype C2 system that supports "delegation of authority" (i.e. autonomy) and "service on demand" (i.e. unscripted, reactive task allocation).

3. vehicle intelligence to navigate the 3D world of a ruined and fortified urban setting by (a series of example modes of motion). With the addition of the STRV, all these modes will be available.

The Technology Investment Fund (TIF) (A Unified Approach to Control and Coordination of Unmanned Vehicle Teams in Complex Environment, 12pi01) is a more detailed document that discusses the UAV/UGV/MMI issues in greater depth. Many of the TIF milestones sound complex, but most will be achievable and, of course, some will not. That is the nature of any research project, particularly TIFs.

3.5.2 Urban Overwatch Scenario

Urban overwatch currently represents the most common use of unmanned systems, albeit only UAV, in today’s conflicts. Specifically, aircraft fly over and ahead of friendly forces to determine the safety of the route and to identify potential problems. This is an interesting role since it places friendly forces near the centre of unmanned system operation, even though the UAV is often controlled from the rear.

This is reminiscent of the scenario in the TIF:

“A group of UAV fly over a city with downward looking cameras, imaging large strips of the urban world. These images, combined with coarse UAV position, can be used to build a three-dimensional model of the world below. These algorithms are very computationally expensive. Both the processing and power required could not be carried by small UAV. Therefore, the UAV send the images to a UGV with the resources to build the models. A team of UGV then use this rough world model to make preliminary plans to explore the city. As they enter the city, they confirm routes identified by the UAV, fill in gaps in the model caused by obscured views, and add terrain characteristics and ground based imagery to the models."

Manned convoy overwatch can be performed by both ground and air units. Strictly speaking, any unit that observes or provides force protection to another advancing unit is said to perform overwatch. Units trading between advancement and overwatch on each other perform bounding overwatch.

Consider the following possible unmanned roles during convoy overwatch particularly in an urban/rural mixed environment:

1. Route reconnaissance - At operational speeds, friendly forces must determine viable routes.

2. Detailed route inspection- At operational speed, inspect selected routes in advance of friendly forces and provide advance warning against IED and VBIED devices.

3. Close approach/Force protection- Provide protection for friendly forces during enemy contact and/or EOD operations.

Route reconnaissance

Route reconnaissance establishes an overall tactical picture of the region, though detailed, high resolution inspection may also be sporadically required. Given the foregoing capabilities, fixed wing aircraft appear best suited to the long range role. High altitude capabilities minimize both environmental and countermeasure risks of long range operation. Rotorcraft assigned the same mission radius would be larger and slower than comparable fixed wing aircraft, while possibly operating at lower altitudes. UGV would face the largest risks and longest transit times of any class.

Detailed route inspection

Route inspection establishes a detailed picture of a planned route. Once again, fixed-wing, rotor, and ground vehicles could all act either fully or partially in this role.

Depending on the medium range radius and required picture detail, fixed wing aircraft may be sufficient for detailed imagery. However, near horizontal aspect makes low speed fixed wing or rotor wing craft attractive options. Even small rotorcraft (10 kg payloads) appear to have sufficient power and speed to carry modest stabilization and zoomed multispectral optics at operational speeds, but with abbreviated endurance.

UGV provide a viable solution in this role, but only with significant operator support or along low complexity routes. Reaching peak operational speeds autonomously other than in the pure convoy role [60] will pose a significant UGV challenge, particularly without high resolution terrain maps and with sensor horizons in the 100 m range. Offroad route clearing presents very high cumulative risk to the mission without human involvement. Inexpensive marsupial robots provide a limited, but interesting UGV solution variant [61]. Combined with UAV rebroadcast and local aerial imagery, this could provide a viable alternative in some specific cases.
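The speed constraint above can be made concrete with a back-of-envelope calculation. The sketch below shows how little time a roughly 100 m sensor horizon leaves an autonomous UGV to perceive, plan, and react at operational speeds; the speed values are illustrative assumptions, not measured platform figures.

```python
# Back-of-envelope sketch: time available to an autonomous UGV before it
# reaches its current sensor horizon. Illustrative numbers only.

def planning_time(horizon_m, speed_kmh):
    """Seconds available to perceive, plan, and react before the vehicle
    covers the distance to its sensor horizon."""
    speed_ms = speed_kmh / 3.6
    return horizon_m / speed_ms

for speed in (20, 40, 60, 80):
    print(f"{speed:>3} km/h -> {planning_time(100, speed):.1f} s to horizon")
```

At 60 km/h a 100 m horizon leaves only about six seconds, which is why operator support or low complexity routes remain necessary.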

Close approach/force protection

Force protection and close approach roles provide manned units with protection against enemy fire and the ability to perform close maneuver up to and including physical contact.

Unmanned rotorcraft (UAV-RW), untested in the field, appear to have capabilities suitable for close approach. However, both fixed and rotor wing aircraft may be limited to observation, designation, or light weaponized roles, given that approaching to within 5 metres of surface targets would challenge current localization and safety constraints. Though small micro-UAV offer potential indoor capabilities, endurance (measured in minutes) and payload (measured in grams) will severely limit their outdoor roles.

UGV (or UAV/UGV hybrids) represent the only plausible solution to close approach and manipulation missions. Not surprisingly, small electric UGV are ideally suited to the counter-IED and indoor urban reconnaissance roles, with endurance less than 4 hours and communications range typically under 2 km. However, these compact platforms have generally limited surface sensing capability. Proposed mid-range platforms (such as the BAE Gladiator, iRobot R-gator, or DRDC MATS) offer useful longer range, larger payload, human scale movement capabilities.

3.5.3 Layered battlespace

The above discussion reveals an essential observation of current and likely future constraints of autonomous systems. No single system is ideally suited to all overwatch roles. Clearly, an opportunity exists to provide deep reconnaissance and force protection through layering of fixed, rotor, and ground UxV.

Subject to difficult conditions, enormous complexity, and smart, agile opponents, UGV will be too easily defeated by the environment and countermeasures to form a long range solution for autonomous unmanned operations, though medium range teleoperated or telecommanded operations are conceivable. It is equally clear that UGV will play an essential role in close approach, force protection, and logistics – all close combat support roles.

Similarly, rotorcraft offer significant medium range capabilities at the cost of short endurance, noise, flight and maintenance complexity. Helicopter low/high speed capabilities offer low altitude maneuverability with limited surface avoidance. Like UGV, aircraft will be vulnerable to small arms fire though without the frequency or complexity of true surface operations.

Fixed wing UAV (UAV-FW) are established high altitude platforms capable of long range/high speed/long endurance operations. However, launch/recovery operations for larger airframes make these largely command assets. Smaller, man portable UAV provide forward units with local ISTAR services, but at the cost of short endurance and relatively poor imagery.

These observations strongly suggest that UxV will be assembled into battlespace regions around mounted, moving units, providing both reconnaissance and force protection in increasingly heavy, aggressive layers approaching the manned core. Soldiers at the battlespace core make the layered battlespace possible, by commanding, regulating, and periodically aiding autonomous elements in achieving mission objectives.

UGV size and speed capabilities suggest that only the largest class (R-gator scale and higher) will be able to keep pace in convoy. Rotorcraft and small fixed wing aircraft will be able to form with moving elements but, with the need for frequent refueling, may require: scheduled refueling stops, autonomous refueling capabilities, and/or rotating sorties. With higher dash speeds, but short endurance, these assets will provide overwatch from hundreds of metres to tens of kilometres from the manned core.

Once stationary, UGV of all scales will likely see deployment up to a few kilometres from the manned core. Longer range deployment will be rare given the difficulty of communications, complexity of navigation, and enemy countermeasures. Similarly, rotorcraft will remain near dismounted units, but able to venture further afield at less risk alongside smaller fixed wing aircraft. During enemy contact, an essentially low altitude/surface event, large payload UGV and rotorcraft may serve as a “shell” around manned units through active force protection and remote weapons platforms.

Fixed wing UAV already “own” the higher altitude/longer range battlespace above manned units and will continue to provide “big picture” coverage of the region in a radius measured in tens to hundreds of kilometres around the manned core. Significantly, however, longer range/higher payload airframes will likely remain strategic rearward assets. Until an integrated, automated airspace management system appears, airspace deconfliction alone ensures that only low altitude assets (e.g. less than 1000 ft AGL) will be under local forward control, severely limiting portable fixed wing options.
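The layering argument above can be summarized in a small sketch. The range bands below are illustrative assumptions loosely drawn from the discussion (UGV near the manned core, rotorcraft at medium range, fixed wing furthest out), not doctrinal figures.

```python
# Illustrative sketch of the layered battlespace discussed above.
# Asset classes and range bands are assumptions for illustration only.

LAYERS = [
    # (asset class, inner radius km, outer radius km, typical role)
    ("UGV",    0.0,    2.0, "close approach / force protection"),
    ("UAV-RW", 0.1,   20.0, "detailed route inspection / overwatch"),
    ("UAV-FW", 10.0, 200.0, "long range route reconnaissance"),
]

def assets_for_range(distance_km):
    """Return the asset classes whose layer covers a given distance
    from the manned core."""
    return [cls for cls, inner, outer, _ in LAYERS
            if inner <= distance_km <= outer]

print(assets_for_range(1.0))   # layers overlap near the manned core
print(assets_for_range(50.0))  # only fixed wing at long range
```

The overlap near the core reflects the observation that no single system suits all overwatch roles, so layers must hand off to one another.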

In the Cohort context this translates to a multivehicle, cooperative, urban survey/reconnaissance task specifically designed to demonstrate:

1. Multi-vehicle (UAV/UGV) control

2. Inter-vehicle data transfer

3. Urban 2.5D mapping

Figure 11: An example of the ’Layered Battlespace’ in which UGV/UAV-RW/UAV-FW are restricted by capability and complexity into a layered structure.

4. Leader/follower

5. Autonomous navigation

A proposed scenario would proceed along the following lines:

1. A UAV flies over a village from medium altitude (250-1000 ft).

2. An operator identifies a target structure from UAV imagery.

3. The operator prescribes a mission for a UxV team.

4. The team navigates in convoy to the target structure.

5. One (or more) team member(s) performs perimeter patrol, returning imagery and building a 3D map.

6. One team member enters the structure, returning imagery and building a 3D map.
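The scenario steps above can be sketched as an ordered mission plan. The `Step` structure, actor names, and product labels below are hypothetical illustrations, not part of any actual Cohort C2 interface.

```python
# Hypothetical sketch of the proposed scenario as an ordered mission plan.
from dataclasses import dataclass

@dataclass
class Step:
    actor: str       # which element performs the step
    action: str      # what it does
    products: tuple  # data returned to the operating picture

MISSION = [
    Step("UAV",      "overfly village at medium altitude", ("imagery",)),
    Step("operator", "identify target structure",          ()),
    Step("operator", "prescribe UxV team mission",         ()),
    Step("UxV team", "navigate in convoy to target",       ()),
    Step("UGV-1",    "perimeter patrol of structure",      ("imagery", "3D map")),
    Step("UGV-2",    "enter structure",                    ("imagery", "3D map")),
]

def data_products(mission):
    """Collect every product the mission feeds back to the operator."""
    return sorted({p for step in mission for p in step.products})

print(data_products(MISSION))
```

Even this toy representation makes the C2 requirement visible: each step changes actors, so "delegation of authority" and "service on demand" must bridge operator and vehicle steps.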

3.6 Autonomy Application Discussion

The following projects were proposed based on the symposium ideas presented. Most of these projects assign autonomy a limited role, serving primarily as tools for manned systems. In this way they can meet the current needs of the Forces and are practical implementations.

1. STRV Lyte - It has been identified that urban operations pose a high risk to dismounted soldiers whose tasks involve securing buildings occupied or fortified by enemy combatants. Specifically, hallways and stairs have been identified as undesirable areas to navigate as they present zones of reduced awareness and cover. In this context the Shape-shifting Tracked Robotic Vehicle (STRV) Lyte could be used to remotely investigate threat obstacles, structures and zones in a building, including hallways, stairways and elevator access points. The system would be highly mobile, building on the mobility characteristics of the current STRV prototype and delivering low-level autonomy including stair-climbing and door entry behaviours. The STRV Lyte would be tele-operated, providing a system that could address current needs that outstrip the capabilities of the existing prototype. STRV Lyte would allow reconnaissance, surveillance and application of effects, such as flash-bangs, removing the soldier from harm’s way. It is intended that the STRV Lyte would operate in urban terrain, capable of climbing stairs, passing through doorways, and traversing obstacles in its path. Semi-autonomous operation enables the operator to take over when the vehicle encounters obstacles it cannot overcome autonomously.

2. Logistics Rotorcraft - Current application conditions suggest that a semi-autonomous rotorcraft may assist in logistics for dispersed personnel stationed at forward bases. An autonomous logistics rotorcraft could lift supplies to the forward bases and then bring back injured personnel, mail, etc. Avoiding road travel reduces the ambush and IED risk for personnel.

3. Autonomous Convoy - An alternative for logistics is the integration of autonomy into the current convoy vehicles themselves. Insurgent ambushes may still destroy vehicles in an autonomous convoy, but casualties would be reduced, making an attack less of a propaganda victory.

4. IED/Landmine UGV - Continued application of advanced techniques to the EOD/IEDD/minewarfare application will pay off by removing human soldiers from harm. There is a significant gap between current commercial EOD robots and the SOA for UxV systems. Projects related to EOD/IEDD/minewarfare should be advanced.

5. Multi-spectral sensing - Investigate the DIS section’s novel hyper-spectral sensing system for vegetation / non-vegetation classification.

6. Teleoperated Air-dropped Demolition Munition (TADM) - One proposed extension of the miniature Remote Neutralization Vehicle (mini-RNV) demonstration is to combine the simple, capable mini-RNV platforms as expendable drones in an asymmetric robot team. One application is as dynamic landmines. These applications are described by Erickson et al. [31, 62].

7. Static Remote Weapons (SRW) - One way to reduce complexity is to consider static autonomy. For static machines, all pose estimates can be ego-centric and relative, and locomotion and proprioceptive sensing can be ignored. One static application would be remote weapons deployed in local security around bases. Static weapons would force-multiply the existing perimeter defence soldiers committed to 24/7 protection. In low-threat scenarios, the onboard sensors could augment Intelligence, Surveillance, Target Acquisition and Reconnaissance (ISTAR). In high-threat scenarios, the safety features could be removed and static weapons would engage all targets weapons free. This additional capability would make it less likely that insurgents would attempt to overrun detached forward bases.

8. Sonobuoy - Another underexposed field is automation of current naval sonobuoys. Integrating autonomy into deployed sonar stations would increase the anti-submarine warfare (ASW) capability of naval and air units.

9. Multi-robot Target System - It may be relevant to employ the current SOA in targetry systems. This is a reduced complexity problem compared to operational military robotics. It provides another system where the communication channels are open between the user and the scientist.

10. Arctic Sovereignty - One important political objective is to extend and enhance sovereignty over the Canadian Arctic. UxV systems hold a great deal of promise for Arctic application. Significant research would be needed to redesign current temperate climate machines for sustained Arctic operations. Once Arctic-compliant vehicles are ready, UxV systems could be deployed and tested in the North. This application holds unique advantages: systems can be on active duty while still undergoing development, thereby addressing current sovereignty demands, and operations can take advantage of national resources instead of facing the logistics problems associated with overseas missions.

4. Recommendations

The symposium discussions produced many diverse opinions, but attendees arrived at a general consensus, though not necessarily unanimous, on the following recommendations:

1. DRDC, AISS in particular, should adopt NIST 1011 (ALFUS) autonomy scale conventions for future work. The advantages are standardized terminology, apples-to-apples comparison with US projects, alignment that enhances interoperability, and a potential management tool for considering autonomous system proposals.

2. Communication, perception, cognition, complexity, visual simultaneous localization and mapping, complex sensing, teamwork, learning, and intelligent mobility are the critical research areas that AISS should focus on in order to approach the full autonomy goal.

3. AISS should develop and nurture a more focused relationship with DLCD/DLSC for autonomous vehicle system concepts exploration/presentation. A focused partnership will lessen the burden on AISS of convincing departments of the technology’s importance while adjusting expectations toward more practical solutions.

4. Based on the common themes identified, critical research areas’ status, and time lines discussed, ten broad project applications that are possible and practical should be pursued:

(a) STRV Lyte - improved hallway- and stairway-capable UGV;
(b) Logistics rotorcraft - semi-autonomous UAV for frontline convoy operations;
(c) Autonomous convoy - automating the current ground vehicles used in resupply;
(d) IEDD/EOD UGV - continued development of EOD augmentation to protect soldiers;
(e) Multi-spectral sensing - investigate novel hyper-spectral sensing for vegetation classification;
(f) Teleoperated Air-dropped Demolition Munition (TADM) - develop a dynamic autonomous demolition capability;
(g) Static Remote Weapons (SRW) - investigate forward operating base (FOB) protection using automated weapons;
(h) Sonobuoy - investigate increasing autonomy in naval sonobuoys;
(i) Multi-robot Target System - spin off current SOA UGV teamwork into target systems;
(j) Arctic Sovereignty - investigate Arctic surveillance UGV/UAV systems.

5. It is recommended that the FFCV project include a small UGV for dismounted infantry operations in urban areas.

References

1. Nilsson, Nils J. (1984). Shakey The Robot. (Technical Report 323). AI Center, SRI International. 333 Ravenswood Ave., Menlo Park, CA 94025.

2. Rosen, C., Nilsson, N., Adams, M., Green, M., Wahlstrom, S., Forsen, G., Bennion, D., Wensley, J., Crane, H., Nilsson, N., Keckler, W., Larson, R., Shapiro, E., and Forsen., G. (1966). APPLICATION OF INTELLIGENT AUTOMATA TO RECONNAISSANCE. (First Interim Report 5953). Stanford Research Institute. Menlo Park California.

3. Huang, Hui-Min (2004). Autonomy Levels for Unmanned Systems (ALFUS) Framework Volume I: Terminology Version 1.1 (NISTSP 1011), 1.1 ed. Vol. NISTSP of Special Publication. Gaithersburg MD: NIST.

4. Huang, Hui-Min, Pavek, Kerry, Albus, James, and Messina, Elena (2005). Autonomy Levels for Unmanned Systems (ALFUS) Framework: An Update. In SPIE Defense and Security Symposium, SPIE. Orlando FL: SPIE Press.

5. Thrun, S., Montemerlo, M., Dahlkamp, H., Stavens, D., Aron, A., Diebel, J., Fong, P., Gale, J., Halpenny, M., Hoffmann, G., Lau, K., Oakley, C., Palatucci, M., Pratt, V., Stang, P., Strohband, S., Dupont, C., Jendrossek, L.-E., Koelen, C., Markey, C., Rummel, C., van Niekerk, J., Jensen, E., Alessandrini, P., Bradski, G., Davies, B., Ettinger, S., Kaehler, A., Nefian, A., and Mahoney, P. (2006). Stanley: The Robot That Won the DARPA Grand Challenge. Journal of Field Robotics, 23(9), 661–692.

6. Thrun, Sebastian, Burgard, Wolfram, and Fox, Dieter (2005). Probabilistic Robotics, The MIT Press. Cambridge MA.

7. Thrun, S. (2003). Learning occupancy grids with forward sensor models. In Autonomous Robots, Vol. 15, pp. 111–127.

8. D.J.Bruemmer and M.O. Anderson (2003). Intelligent Autonomy for Remote Characterization of Hazardous Environments. In IEEE, (Ed.), Proceedings of the IEEE International Symposium on Intelligent Control, IEEE. Houston, TX: IEEE.

9. David Bruemmer and Donald Dudenhoffer and Mark McKay and Matthew Anderson (2002). Dynamic-Autonomy for Remote Robotic Sensor Deployment, Vol. Spectrum 2002 of 9th Biennial International Conference on Nuclear and Hazardous Waste Management, EERC. EERC.

10. David Bruemmer and Julie Marble and Donald Dudenhoffer and Matthew Anderson and Mark McKay (2002). Intelligent Robots for Use in Hazardous DOE Environments. Idaho NL.

11. Saffiotti, A. (1997). The Uses of Fuzzy Logic in Autonomous Robot Navigation. Soft Computing, 1(4), 180–197.

12. Konolige, Kurt (2003). Map Merging for Distributed Robot Navigation. In Proceedings of the 2003 IEEE International Conference on Intelligent Robots and Systems, pp. 212–217. Las Vegas, NV.

13. Konolige, K. and Myers, K. (1998). The Saphira Architecture for Autonomous Mobile Robots. Artificial Intelligence and Mobile Robots: Case Studies of Successful Robot Systems.

14. Didier Guzzioni and Kurt Konolige and Karen Myers and Adam Cheyer and Luc Julia (1998). Robots in a Distributed Agent System. In AAAI Proceedings.

15. Albus, J. (2001). Engineering of Mind: An Introduction to the Science of Intelligent Systems, John Wiley and Sons.

16. Albus, J. (1997). DRCS: A Reference Model Architecture for Demo III. NISTIR 5994, National Institute of Standards and Technology, Gaithersburg, MD.

17. Albus, J. (1991). Outline for a Theory of Intelligence. IEEE Transactions on Systems, Man and Cybernetics, 21(3), 473–509.

18. Albus, J.S. and Quintero, R. (1990). Toward a Reference Model Architecture for Real Time Intelligent Control Systems (ARCTICS). In ISRAM ’90, pp. 243–250.

19. Albus, J., McCain, H., and Lumia, R. (1989). NASA/NBS Standard Reference Model for Telerobot Control System Architecture. (Technical Report 1235). NIST.

20. Brooks, R.A. (1991). Intelligence without Representation. Artificial Intelligence, 47, 139–159.

21. Brooks, R.A. (1991). Artificial Intelligence Memo No. 1293: Intelligence without Reason, Massachusetts Institute of Technology.

22. Brooks, R.A. (1989). A Robot that Walks: Emergent Behaviours from a Carefully Evolved Network, Artificial Intelligence at MIT, Ch. 24, pp. 28–39. The MIT Press.

23. Brooks, Rodney A. (1986). A Robust Layered Control System for a Mobile Robot. IEEE Journal of Robotics and Automation, RA-2(1), 14–23.

24. Nilsson, Nils J. (1984). Shakey The Robot. (Technical Report 323). AI Center, SRI International. 333 Ravenswood Ave., Menlo Park, CA 94025.

25. Chamberlain, Peter and Doyle, Hilary (1999). Encyclopedia of German Tanks of World War Two, 2nd ed. Arms & Armour. London. ISBN 1-85409-214-6.

26. Malle, Bertram F. (2002). From Attributions to Folk Explanations: An Argument in 10 (or so) Steps. University of Oregon.

27. Heider, Fritz (1958). The Psychology of Interpersonal Relations, John Wiley and Sons.

28. Blackburn, M. R., Laird, R. T., and Everett, H. R. (2001). Unmanned Ground Vehicle (UGV) Lessons Learned. (Technical Report 1869). SPAWAR. SSC San Diego.

29. R.A. Brooks (1991). Intelligence without Representation. Artificial Intelligence, (47), 139–159.

30. R.A. Brooks (1991). AI Memo No. 1293: Intelligence without Reason, Massachusetts Institute of Technology.

31. Erickson, David, Ceh, Matt, Anderson, Dale, and Lanz, Edward (2007). mini-RNV: a response to IED threat. In Carapezza, Edward M., (Ed.), Proceedings of SPIE – Volume 6538 Sensors, and Command, Control, Communications, and Intelligence (C3I) Technologies for Homeland Security and Homeland Defense VI, Vol. 65380S, Orlando, FL.

32. Clarke, Roger (1993). Asimov’s Laws of Robotics: Implications for Information Technology-Part I. Computer, 26(12), 53–61.

33. Asimov, Isaac (1968). I, Robot, London: Grafton Books. (a collection of short stories originally published between 1940 and 1950).

34. Asimov, Isaac (1968). The Rest of the Robots, Grafton Books.

35. Marr, David (1982). Vision, 1st ed. W.H. Freeman and Company.

36. Hygounenc, E., Jung, I.-K., Soueres, P., and Lacroix, S. (2004). The autonomous blimp project at LAAS/CNRS: achievements in flight control and terrain mapping. In International Journal of Robotics Research, Vol. 23, pp. 473–512.

37. Se, Stephen, Lowe, David G., and Little, James J. (2005). Vision-Based Global Localization and Mapping for Mobile Robots. In IEEE Transactions on Robotics, Vol. 21.

38. Se, Stephen and Jasiobedzki, Piotr (2005). Instant Scene Modeller for Crime Scene Reconstruction. In IEEE Workshop on Advanced 3D Imaging for Safety and Security.

39. Davison, Andrew J., Cid, Yolanda Gonzalez, and Kita, Nobuyuki (2004). Real-Time 3D Slam With Wide-Angle Vision. In 5th IFAC/EURON Symposium on Intelligent Autonomous Vehicles.

40. Chaturvedi, P., Sung, E., Malcolm, A. A., and Guzman, J. Ibanez (2001). Real-time identification of driveable areas in a semi-structured terrain for an autonomous ground vehicle. In Proceedings of the SPIE, Vol. 4364, pp. 302–312.

41. Grunes, A. and Sherlock, J. F. (1990). Texture Segmentation for Defining Driveable Regions. In Proceedings of the British Machine Vision Conference, pp. 235–239. BMVC90.

42. Jasiobedzki, P. (1995). Detecting driveable floor regions. In IEEE/RSJ International Conference on Intelligent Robots and Systems. Human Robot Interaction and Cooperative Robots, Vol. 1, pp. 264–270.

43. Kruse, F. A., Boardman, J. W., and Lefkoff, A. B. (2000). Extraction of compositional information for trafficability mapping for hyperspectral data. In Algorithms for Multispectral, Hyperspectral, and Ultraspectral Imagery IV, Vol. 4049, pp. 262–273.

44. Johnson, A. J., Windesheim, E., and Brockhaus, J. (1998). Hyperspectral imagery for Trafficability Analysis. In IEEE Aerospace Conference, Vol. 2, pp. 21–35. Snowmass at Aspen, CO, USA.

45. Broten, G. S. and Digney, B. L. (2002). Perception for Learned Trafficability Models. In Gerhart, Grant R., Gage, Douglas W., and Shoemaker, Chuck M., (Eds.), Proceedings of SPIE, Unmanned Ground Vehicle Technology IV, Vol. 4715, pp. 149–160. Orlando, Florida, USA.

46. Digney, B.L. (2001). Learned Trafficability Models. In Gerhart, G.R., Shoemaker, C.M., and Gage, D.W., (Eds.), Unmanned Ground Vehicle Technology III, The International Society for Optical Engineering.

47. Dahlkamp, H., Kaehler, A., Stavens, D., Thrun, S., and Bradski, G. (2006). Self-supervised Monocular Road Detection in Desert Terrain. In Proceedings of Robotics: Science and Systems, Philadelphia, USA.

48. Murray, R., (Ed.) (2003). Control in an Information Rich World, SIAM.

49. Vela, Patricio Antonio (2003). Averaging and Control of Nonlinear Systems (with Application to Biomimetic Locomotion). Ph.D. thesis. California Institute of Technology.

50. Collier, J. A., Ricard, B., Digney, B. L., Cheng, D., Trentini, M., and Beckman, B. (2004). Adaptive representation for dynamic environment, vehicle, and mission complexity. In Gerhart, G. R., Shoemaker, C. M., and Gage, D. W., (Eds.), Proceedings of the SPIE, Vol. 5422, pp. 67–75.

51. Anderson, Ruth A., Worthington, Lowell, Anderson, William T., and Jennings, Glen (1994). The development of an autonomy scale. Contemporary Family Therapy, 16(4), 329–345.

52. Mustaine, Beverly and Wilson, Robert (1995). An Exploration of the Internal Consistency of the Kurtines Autonomy Scale. Measurement and Evaluation in Counseling and Development, 27(4), 211–226.

53. Pearson, Carolyn and Hall, Bruce (1993). Initial Construct Validation of the Teaching Autonomy Scale. Journal of Educational Research, 86(3), 172–178.

54. Urist, J. and Shill, M. (1982). Validity of the Rorschach Mutuality of Autonomy Scale: a replication using excerpted responses. Journal of Personality Assessment, 46(5), 450–454.

55. Huang, Hui-Min, Pavek, Kerry, Novak, Brian, Albus, James, and Messina, Elena (2005). A Framework For Autonomy Levels For Unmanned Systems (ALFUS). In Proceedings of the AUVSI Unmanned Systems North America 2005, p. 9. Baltimore MD: AUVSI.

56. MK-53 Nulka Decoy Launching System (DLS).

57. Kelly, B., Vance, L., and Baker, P. (1993). Ground Test Performance Validation of the Army LEAP Kill Vehicle. 2nd Annual AIAA SDIO Interceptor Technology Conference, June 6-9, 1993, Albuquerque, NM, p. 6.

58. Kalashnikov, Mikhail (1983). How and Why I Produced My Submachine Gun. Sputnik: A Digest of Soviet Press, pp. 70–75. Novosti Press Agency.

59. Urmson, C., Anhalt, J., Bartz, D., Clark, M., Galatali, T., Gutierrez, A., Harbaugh, S., Johnston, J., Kato, H., Koon, P.L., Messner, W., Miller, N., Mosher, A., Peterson, K., Ragusa, C., Ray, D., Smith, B.K., Snider, J.M., Spiker, S., Struble, J.C., Ziglar, J., and Whittaker, W.L. (2006). A Robust Approach to High-Speed Navigation for Unrehearsed Desert Terrain. Journal of Field Robotics, 23(8), 467–508.

60. Jaczkowski, Jeffrey J. (2003). Robotic Follower Experimentation Results. In 3rd Annual Intelligent Vehicle Systems Symposium, National Defence Industries Association.

61. Yamauchi, B. and Rudakevych, P. (2004). Griffon: A Man-Portable Hybrid UGV/UAV. Industrial Robot, 31(5), 443–450.

62. Erickson, David (2007). mini-RNV Dozer Demolition Analysis. (Technical Report TR-2007-160). Defence R&D Canada – Suffield.

Annex A List of abbreviations/acronyms/initialisms

ADC Analog to Digital Conversion

ABI Application Binary Interface

ADO Adaptive Dispersed Operations

ALFUS Autonomy Levels for Unmanned Systems

AM Ante Meridiem

AMR Autonomous Mobile Robotics

API Application Programmer Interface

AO Area of Operations

AOR Area of Responsibility

ANSI American National Standards Institute

ASCII American Standard Code for Information Interchange

ASD Autonomous Systems Development

ASW Anti-Submarine Warfare

BIT Built-In Test

BOM Bill of Materials

BSP Board Support Package

C4ISR Command, Control, Communications, Computers, Intelligence, Surveillance, and Reconnaissance

CAN Controller Area Network

C/A Coarse Acquisition GPS

COM Communication

CMAC Cerebellar Model Articulation Controller

CPU Central Processing Unit

CR Carriage Return

CVAP Computational Vision and Active Perception Laboratory

DAC Digital to Analog Conversion

DM Domain Model

DMU Dynamic Measurement Unit

DOF Degrees of Freedom

DGPS Differential GPS

DIS Detection Information Section

DRDC Defence Research and Development Canada

DRES Defence Research Establishment Suffield

ECEF Earth-Centred, Earth-Fixed

EEPROM Electrically Erasable Programmable Read Only Memory

EOD Explosive Ordnance Disposal

FOB Forward Operating Base

FCS Future Combat Systems

FFCV Family of Future Combat Vehicles

FLA Four Letter Acronym

FOG Fibre Optic Gyroscope

GCC GNU Compiler Collection

GPS Global Positioning System

GUI Graphical User Interface

HAAW Heavy Anti-Armour Weapon

HRI Human-Robot Interaction

IEC International Electrotechnical Commission

IED Improvised Explosive Device

IEEE Institute of Electrical and Electronics Engineers

IMU Inertial Measurement Unit

IP Intellectual Property

IP Internet Protocol

IP54 Ingress Protection or International Protection rating 54

ISO International Organization for Standardization

ISTAR Intelligence, Surveillance, Target Acquisition and Reconnaissance

JAUS Joint Architecture for Unmanned Systems

JDL Joint Directors of Laboratories

JTA Joint Technical Architecture

KTH Kungl Tekniska Hogskolan

LAAW Light Anti-Armour Weapon

LAN Local Area Network

LF Line Feed

LOC Lines of Code

MAV Medium Aerial Vehicle

MCU Micro Controller Unit

MDARS Mobile Detection Assessment and Response System

MGRS Military Grid Reference System

MMI Man Machine Interface

MMU Memory Management Unit

MPIO Multi-Purpose Input/Output

MSL Mean Sea Level

NBC Nuclear, Biological, Chemical

NEMA National Electrical Manufacturers Association

NIST National Institute of Standards and Technology

NSU Navigational Sensor Unit

NTP Network Time Protocol

OCU Operator Control Unit

OEM Original Equipment Manufacturer

OPI Office of Primary Interest

PC Personal Computer

PGR Point Grey Research

PID Proportional Integral Differential

PM Post Meridiem

PM Perception Module

PMA Yugoslavian anti-personnel mine

POST Power-On Self Test

PWM Pulse Width Modulation

QADC Queued Analog Digital Conversion

RA Reference Architecture

RC Radio Controlled

RF Radio Frequency

RFP Request For Proposal

RGA Rate Gyro Accelerometer

RMS Root Mean Square

RPG Rocket Propelled Grenade

RPY Roll, Pitch, Yaw

RSTA Reconnaissance Surveillance and Target Acquisition

RTEMS Real-Time Executive for Multiprocessor Systems (originally Real-Time Executive for Missile Systems and then Real-Time Executive for Military Systems)

RTK Real-Time Kinematic

SAE Society of Automotive Engineers

SI Système International (d'unités)

SLAM Simultaneous Localization and Mapping

SMA Senior Military Advisor

STA Sensing Target Acquisition

STRV Shape-shifting Tracked Robotic Vehicle

SUGV Small UGV

TADM Teleoperated Air-Dropped Munition

TCP Transmission Control Protocol

TEAM Technologies Enabling Adaptive Manoeuvre

TLA Three Letter Acronym

TPU Time Processor Unit

TNA Thermal Neutron Activation

UAV Unmanned Aerial Vehicle

UAV-FW Unmanned Aerial Vehicle-Fixed Wing

UAV-RW Unmanned Aerial Vehicle-Rotor Wing

UGS Unattended Ground Sensors

UGV Unmanned Ground Vehicle

UMS UnManned Systems (from NIST 1011 v1.1)

US United States

USA United States of America

USV Unmanned Space Vehicle

UTC Universal Time Coordinated

UTM Universal Transverse Mercator

UUV Unmanned Underwater Vehicle

UXO Unexploded Ordnance

UxV Unmanned (Aerial, Ground, Underwater, Space) Vehicle

VBIED Vehicle-Borne Improvised Explosive Device

VSLAM Visual Simultaneous Localization and Mapping

WG Working Group

WGS World Geodetic System

Annex B: Notation

α latitude angle

β longitude angle

μ Statistical mean of a variable population

μx Distance mean along x-axis

μy Distance mean along y-axis

σ² Variance of a variable population

σ²x Distance variance along x-axis

σ²y Distance variance along y-axis

σ Standard deviation of a variable population

σx Standard deviation distance along x-axis

σy Standard deviation distance along y-axis

θ rotation about the modified y-axis in radians for Euler RPY

φ rotation about the modified x-axis in radians for Euler RPY

ψ rotation about initial z-axis in radians for Euler RPY

∀ for all

∈ is an element of, in

Z Integers Set

Local Coordinate Frame of Reference (Robot ego-centric)

p Local Coordinate Frame pose (Robot ego-centric)

prpy JAUS-compliant

pW World Coordinate Frame pose

pW−UTM World Coordinate Frame pose with UTM

q quaternion vector

q̄ quaternion conjugate

qs quaternion scalar component

qx quaternion imaginary projection along the i axis

qy quaternion imaginary projection along the j axis

qz quaternion imaginary projection along the k axis

s² Variance of a variable sample

s Standard deviation of a variable sample

x x displacement in local pose

xW x displacement in global pose

y y displacement in local pose

yW y displacement in global pose

z z displacement in local pose

zW z displacement in global pose

D Displacement vector

G Units of gravity (9.81 m/s²) at sea level

R Rotation Matrix

T Transformation Matrix

W World Coordinate Frame of Reference
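To make the Euler RPY angles (φ, θ, ψ) and the quaternion components (qs, qx, qy, qz) defined above concrete, the following sketch converts an RPY triple to a unit quaternion. It assumes the common ZYX (yaw-pitch-roll) rotation order; the memo's own JAUS convention may differ, so treat this as an illustration of the notation rather than the project's implementation.

```python
import math

def rpy_to_quaternion(phi, theta, psi):
    """Convert Euler RPY angles (roll phi, pitch theta, yaw psi, in radians)
    to a unit quaternion (qs, qx, qy, qz), assuming ZYX rotation order."""
    cr, sr = math.cos(phi / 2), math.sin(phi / 2)      # roll half-angle
    cp, sp = math.cos(theta / 2), math.sin(theta / 2)  # pitch half-angle
    cy, sy = math.cos(psi / 2), math.sin(psi / 2)      # yaw half-angle
    qs = cr * cp * cy + sr * sp * sy  # scalar component
    qx = sr * cp * cy - cr * sp * sy  # imaginary projection along i
    qy = cr * sp * cy + sr * cp * sy  # imaginary projection along j
    qz = cr * cp * sy - sr * sp * cy  # imaginary projection along k
    return qs, qx, qy, qz

# A zero rotation yields the identity quaternion (1, 0, 0, 0).
```

Consistent with the ψ entry above (rotation about the initial z-axis), a pure yaw of ψ produces (cos ψ/2, 0, 0, sin ψ/2).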

Annex C: Definitions (from Merriam-Webster¹¹)

autonomous 1: a. existing or capable of existing independently b. responding, reacting, or developing independently of the whole

fovea a small rodless area of the retina that affords acute vision.

intensity 1: the quality or state of being intense; especially : extreme degree of strength, force, energy, or feeling 2 : the magnitude of a quantity (as force or energy) per unit (as of area, charge, mass, or time).

intelligence 1 a. (1) : the ability to learn or understand or to deal with new or trying situations : REASON; also : the skilled use of reason (2) : the ability to apply knowledge to manipulate one’s environment or to think abstractly as measured by objective criteria (as tests) b. Christian Science : the basic eternal quality of divine Mind c. : mental acuteness : SHREWDNESS 3 : the act of understanding : COMPREHENSION

luminance 1 : the quality or state of being luminous 2 : the luminous intensity of a surface in a given direction per unit of projected area.

proprioceptive 1: of, relating to, or being stimuli arising within the organism

reflectance the fraction of the total radiant flux incident upon a surface that is reflected and that varies according to the wavelength distribution of the incident radiation – called also reflectivity.

stochastic 1 : RANDOM; specifically : involving a random variable 2 : involving chance or probability : PROBABILISTIC

¹¹ By permission. From Merriam-Webster's Collegiate® Dictionary, Eleventh Edition ©2006 by Merriam-Webster, Incorporated (www.Merriam-Webster.com).

DOCUMENT CONTROL DATA
(Security markings for the title, abstract and indexing annotation must be entered when the document is Classified or Designated)

1. ORIGINATOR (The name and address of the organization preparing the document. Organizations for whom the document was prepared, e.g. Centre sponsoring a contractor's report, or tasking agency, are entered in section 8.)
Defence Research and Development Canada – Suffield
P.O. Box 4000, Station Main
Medicine Hat, Alberta T1A 8K6

2a. SECURITY MARKING (Overall security marking of the document including special supplemental markings if applicable.)
UNCLASSIFIED

2b. CONTROLLED GOODS
(NON-CONTROLLED GOODS) DMC A
REVIEW: GCEC DECEMBER

3. TITLE (The complete document title as indicated on the title page. Its classification should be indicated by the appropriate abbreviation (S, C or U) in parentheses after the title.) Future Directions 2007 : Getting ready for what comes next

4. AUTHORS (last name, followed by initials – ranks, titles, etc. not to be used)
Erickson, D.; Collier, J.; Monckton, S.; Broten, B.; Giesbrecht, J.; Trentini, M.; Hanna, D.; Chesney, R.; MacKay, D.; Verret, S.; Anderson, R.

5. DATE OF PUBLICATION (Month and year of publication of document.)
December 2007

6a. NO. OF PAGES (Total containing information, including Annexes, Appendices, etc.)

6b. NO. OF REFS (Total cited in document.)

7. DESCRIPTIVE NOTES (The category of the document, e.g. technical report, technical note or memorandum. If appropriate, enter the type of report, e.g. interim, progress, summary, annual or final. Give the inclusive dates when a specific reporting period is covered.) Technical Memorandum

8. SPONSORING ACTIVITY (The name of the department project office or laboratory sponsoring the research and development – include address.) Defence Research and Development Canada – Suffield P.O. Box 4000, Station Main Medicine Hat, Alberta T1A 8K6

9a. PROJECT OR GRANT NO. (If appropriate, the applicable research and development project or grant number under which the document was written. Please specify whether project or grant.)

9b. CONTRACT NO. (If appropriate, the applicable number under which the document was written.)

10a. ORIGINATOR'S DOCUMENT NUMBER (The official document number by which the document is identified by the originating activity. This number must be unique to this document.)
DRDC Suffield TM 2007-236

10b. OTHER DOCUMENT NO(s). (Any other numbers which may be assigned this document either by the originator or by the sponsor.)

11. DOCUMENT AVAILABILITY (Any limitations on further dissemination of the document, other than those imposed by security classification.) Unlimited

12. DOCUMENT ANNOUNCEMENT (Any limitation to the bibliographic announcement of this document. This will normally correspond to the Document Availability (11). However, where further distribution (beyond the audience specified in (11)) is possible, a wider announcement audience may be selected.)
Unlimited

13. ABSTRACT (a brief and factual summary of the document. It may also appear elsewhere in the body of the document itself. It is highly desirable that the abstract of classified documents be unclassified. Each paragraph of the abstract shall begin with an indication of the security classification of the information in the paragraph (unless the document itself is unclassified) represented as (S), (C), (R), or (U). It is not necessary to include here abstracts in both official languages unless the text is bilingual.)

This paper summarizes the future planning symposium's outcomes held by DRDC Suffield staff at the RedTec Inc facility on 17 and 18 September 2007. Participants proposed unconstrained future autonomy scenarios, outlining what they see as the next step in autonomous systems development (ASD), and from the ideas presented the common timelines and themes were exposed. This symposium also reviewed the state of the art of robotics, current programs, and indicated promising future avenues based on the discussion. The practicality of reaching full autonomy for UxVs was reviewed, recommending a man-in-the-loop systems concept for the foreseeable future. Given this reality, it recognizes an important shift in focus to introduce more "automaticity" sooner as another way to impact the client and bring about autonomy in the longer term. This paper proposes some project alternatives based on the discussion.

This report summarizes the outcomes of the Future Directions planning symposium held by DRDC Suffield staff on 17 and 18 September 2007. Participants proposed unconstrained future autonomy scenarios based on their vision of the next step to follow. The presentations were used to extract common timelines and themes in autonomous systems development and military robotics. The symposium covered the state of the art, current programs, and critical research areas, and identified promising research avenues corresponding to current trends. It also addressed the feasibility of achieving fully autonomous unmanned vehicles (UxV) and recommended a man-in-the-loop systems concept for the foreseeable future. The reality of this concept implied a considerable shift in objective: introducing more "automaticity" sooner as another way to impact the client and enable autonomy in the longer term.

14. KEYWORDS, DESCRIPTORS or IDENTIFIERS (technically meaningful terms or short phrases that characterize a document and could be helpful in cataloguing the document. They should be selected so that no security classification is required. Identifiers, such as equipment model designation, trade name, military project code name, geographic location may also be included. If possible, keywords should be selected from a published thesaurus, e.g. Thesaurus of Engineering and Scientific Terms (TEST), and that thesaurus identified. If it is not possible to select indexing terms which are Unclassified, the classification of each should be indicated as with the title.)

AISS, Autonomous Intelligent Systems, COHORT, future, FCS, strategic, UGV, UAV, UxV

Defence R&D Canada
Canada's Leader in Defence and National Security Science and Technology

R & D pour la défense Canada
Chef de file au Canada en matière de science et de technologie pour la défense et la sécurité nationale

www.drdc-rddc.gc.ca