AN INTELLIGENT FRAMEWORK FOR PROACTIVE HEALTH ANALYSIS AND MONITORING OF AUTONOMOUS MOBILE ROBOTS

By

WALTER JOHN WALTZ

A DISSERTATION PRESENTED TO THE GRADUATE SCHOOL OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY

UNIVERSITY OF FLORIDA

2019

© 2019 Walter John Waltz

To my wife and my parents

ACKNOWLEDGMENTS

Pursuing this Ph.D. has been an incredibly long journey, with many people to whom I am grateful. First and foremost, I thank my amazing and supportive wife, Zeryna Waltz, for being my constant companion through the late nights and sacrifices made, and an incredibly helpful sounding board for outlandish ideas. I am thankful to my parents, Walter and Alyson Waltz, for their guidance, lifelong support and encouragement, and blind faith when I had none.

My research has followed me through my professional career, picking up genius wherever possible, and I am unable to thank everyone here who aided me. My friends at the US Army Combat Readiness Center and US Army Aeromedical Research Laboratory at Fort Rucker supported me with their enthusiasm and helped me discover my passion for data science and databases. I also thank my friends in the Autonomy Incubator at NASA, who provided their expertise, insight, and patience. I will forever appreciate my lab mates in CIMAR who helped me start this endeavor.

I would also like to express my gratitude to Dr. Carl Crane, who enabled this opportunity and started me down this path. Finally, I thank my committee, Dr. John Schueller, Dr. Scott Banks, and Dr. Paul Gader, for their encouragement, leadership, and support.


TABLE OF CONTENTS

page

ACKNOWLEDGMENTS ...... 4

LIST OF TABLES ...... 8

LIST OF FIGURES ...... 9

LIST OF ABBREVIATIONS ...... 13

ABSTRACT ...... 16

CHAPTER

1 INTRODUCTION ...... 18

Background ...... 19
History ...... 19
CBM / PHM ...... 19
IVHM ...... 19
Project Lifecycle for Health Monitoring ...... 20
Models ...... 22
Diagnostics ...... 22
Prognostics ...... 24
Decision Making ...... 25
Problem Statement ...... 26
Proposed Health Strategy for Autonomous Mobile Robots ...... 27
Focus ...... 30
Motivation ...... 31
Research Solution ...... 32

2 LITERATURE REVIEW ...... 49

Robotic Information Systems ...... 49
Robot Data Management and Abstraction ...... 52
Onboard Database Design and Uses ...... 52
Cloud-Based Collection Methods and Knowledge Extraction ...... 56
Degradation and Health Monitoring ...... 61
Data-Driven Diagnostics ...... 61
Adaptive Control ...... 63
RUL ...... 65
Techniques for Continuous Improvement Strategies ...... 67
Requirements ...... 69
Database Design ...... 69
Robot Assumptions and Criteria ...... 71


3 ROBOTMD ...... 80

Data Structure and Initialization ...... 80
Dictionary ...... 81
Robot Registration ...... 82
Data Warehouse ...... 84
Configuration Management ...... 85
Events ...... 85
Baselines ...... 86
Data Retrieval and Processing ...... 86
Health Assessments ...... 87
Model file structure ...... 87
Parameter estimation guidelines ...... 88
Cross-validation ...... 89
Parameters and Degradation Ontology ...... 90
Ontology ...... 91
Workflows ...... 92

4 EXPERIMENTS ...... 105

LiZARD Initialization with RobotMD ...... 105
Workflows ...... 107
Experiment Setup ...... 108
Experiment 1: Motor Degradation ...... 108
System Assessment ...... 109
Parameter Assessment ...... 110
Experiment 1 Conclusion ...... 112
Experiment 2: Simulated Degradation ...... 113
Assessment and Analysis ...... 113
Experiment 2 Conclusion ...... 114

5 CONCLUSION ...... 174

Summary of Experiments ...... 174
Future Work ...... 174
Implementation ...... 174
Approach ...... 175
Data Analysis ...... 176
Final Remarks ...... 177

APPENDIX

A APPLICATION ...... 178

Robot Details ...... 178
Record Events ...... 179
Event Summary ...... 180


Review Warehouse ...... 181

B LiZARD ROBOT DESIGN ...... 190

Software Structure ...... 190
Safety ...... 191
Data Collection and Processing ...... 192
Computational Requirements ...... 192
Designed Behavior ...... 193

LIST OF REFERENCES ...... 206

BIOGRAPHICAL SKETCH ...... 213


LIST OF TABLES

Table page

3-1 The data dictionary that provides a list of tables and their definitions that are involved in the processes of RobotMD...... 94

3-2 List of event types and their categories and descriptions used in the configuration management health processes of RobotMD...... 102

B-1 Information documenting system information for LiZARD for registration with RobotMD...... 196

B-2 Information documenting assigned sensor information for LiZARD for registration with RobotMD...... 197

B-3 Information documenting system parameters for LiZARD for registration with RobotMD...... 198

B-4 Recorded data that outlines the input to the data mapping required in order to register LiZARD with RobotMD...... 200


LIST OF FIGURES

Figure page

1-1 Core processes of CBM that involves data acquisition, processing, and decision making...... 35

1-2 Core processes of IVHM and definitions...... 35

1-3 An example of relationships between phases of a project life cycle indicated by the PMI...... 36

1-4 Change management structure as indicated by PMI...... 37

1-5 Relationships of the outputs between diagnostics, prognostics, and decision making...... 38

1-6 Categories and breakdown of models and algorithms used for diagnostics and prognostics, modified from...... 38

1-7 Expected deterioration of performance over time...... 39

1-8 Process flow for failure modes, mechanisms, and effects analysis...... 39

1-9 Generalized remaining useful life evolution visualized with probabilistic estimates...... 40

1-10 Levels, process, and maturity of diagnostics and prognostics...... 41

1-11 Related subjects between diagnostics, prognostics, and decision making...... 42

1-12 Technology readiness levels for prognostics...... 42

1-13 Proposed health strategy for autonomous mobile robots...... 43

1-14 NaviGATOR...... 44

1-15 NaviGATOR Complex Architecture...... 45

1-16 ARTS performing range clearance operations...... 46

1-17 A destroyed ground robot...... 47

1-18 Interdisciplinary tools and technologies required for development of RobotMD...... 48

2-1 An overview of the steps that compose the KDD Process...... 73


2-2 Data collection and analysis process of onboard database collection experiment...... 73

2-3 DIKW structure showing relative content meaning and value provided...... 74

2-4 DIKW outlined with considered robotic analogy and applications...... 74

2-5 Data separation and knowledge representation of actions and knowledge...... 75

2-6 Knowledge map propagation of action and behaviors...... 75

2-7 Layers of RoboEarth...... 76

2-8 Expected MTBF of machinery performance related to maintenance and the probability of failure...... 77

2-9 Approach using CMAC for detection of degradation...... 77

2-10 Elements of CMAC structure to detect pattern anomalies for an experiment. .... 78

2-11 Outline of RUL types...... 78

2-12 CBM process algorithm with degradation index calculation...... 79

2-13 SVM continuous learning strategy with random forgetting algorithm...... 79

3-1 Configuration management process outline for health monitoring of autonomous mobile robots...... 93

3-2 Dictionary schema table outline with foreign key relationships...... 94

3-3 Robot schema table outline with foreign key relationships...... 97

3-4 Section to edit the basic robot information on Robot Details view...... 97

3-5 Section of Robot Details view for creating, deleting, or modifying systems...... 98

3-6 Section of Robot Details view for creating, deleting, or modifying sensors of the robot...... 98

3-7 Section of Robot Details view for creating, deleting, or modifying system parameters...... 99

3-8 Warehouse schema table outline with foreign key relationships...... 100

3-9 Section of Review Warehouse view for providing data mapping information that aligns with the structure of data retrieved from the robot...... 101

3-10 RobotConfig schema table outline with foreign key relationships...... 102


3-11 Section of Record Event view to record an event and upload data...... 103

3-12 Section of Record Event view to create a new workflow...... 104

4-1 Comparison of two workflows to indicate value of the tool and the effects of the history setting on resulting assessed performance of systems...... 116

4-2 The track, with an approximately one-inch-wide line, used for the line-following experiments...... 118

4-3 Scorch marks on the motor casings found during post-mortem analysis of the first experiment...... 120

4-4 System fitness performance plots for the first experiment...... 121

4-5 Cross validation plots for all systems of the first experiment...... 124

4-6 Histogram plots from the third event of the first experiment...... 127

4-7 Histogram plots from the fortieth event of the first experiment...... 130

4-8 Comparison histograms between the third and forty-first event...... 133

4-9 Statistical plots of the temporal mean and standard deviation for the first experiment...... 135

4-10 Temporal parameters of the primary kinematic system of the first experiment. 138

4-11 Individual parameter plots for right motor used in the first experiment...... 140

4-12 Individual parameter plots for left motor used in the first experiment...... 143

4-13 The statistical plots of parameters providing mean and standard deviation values...... 146

4-14 Normalized parameter plots for the systems of LiZARD used in the first experiment...... 153

4-15 Simulated inflating tire...... 156

4-16 Performance plots of the second experiment...... 156

4-17 Cross validation plots of the second experiment...... 159

4-18 Run-time data histograms showing the effects of the second experiment between the forty-eighth and sixth events...... 161

4-19 Parameter plots for primary kinematic system for second experiment...... 163


4-20 Parameter plots for right and left motors of the second experiment...... 165

4-21 Normalized parameter plots of the second experiment...... 171

A-1 The section of the application including title, ribbon, navigation, main content, and footer...... 182

A-2 Recent events in Record Details view with option for initiating a new workflow...... 182

A-3 Additional result details provided by the recent events section of the Event Summary view...... 183

A-4 Performance plot and options in Event Summary view...... 184

A-5 Parameter plot and options in the Event Summary View...... 185

A-6 Cross validation plot and options in the Event Summary view...... 186

A-7 Run-time data plots in the Review Warehouse View...... 187

A-8 Example run-time set-point data plots that can be produced...... 188

B-1 LiZARD...... 196

B-2 LiZARD software layout with corresponding rostopics...... 200

B-3 Data post processing performed after data retrieval producing several plots. .. 203

B-4 Line following control scheme...... 204

B-5 Motor control design...... 205


LIST OF ABBREVIATIONS

ACID Atomicity, Consistency, Isolation, and Durability

AFRL Air Force Research Laboratory

AI Artificial Intelligence

AMR Autonomous Mobile Robots

ARM Advanced Reduced instruction set computing Machine

ARMA Autoregressive Moving Average

ARTS All-purpose Remote Transport System

Atb Action tables

BI Business Intelligence

BLOB Binary Large Objects

CBM Condition Based Maintenance

CHM Continuous Health Monitoring

CIMAR Center for Intelligent Machines and Robots

CMAC Cerebellar Model Articulation Controller

COLD CoSy Localization Database

COTS Commercial Off The Shelf

CPU Central Processing Unit

CRMF Cognitive Resource Management Framework

DARPA Defense Advanced Research Projects Agency

DBMS Database Management System

DDR Double Data Rate

DIKW Data-Information-Knowledge-Wisdom

DW Data Warehouse

eMMC Embedded Multimedia Card


ETL Extract Transform Load

FMMEA Failure Modes, Mechanism, and Effects Analysis

HMI Human Machine Interface

IS Information System

IT Information Technology

IoT Internet of Things

IVHM Integrated Vehicle Health Management

JAUS Joint Architecture for Unmanned Systems

KDD Knowledge Discovery in Databases

KPI Key Performance Indicator

KS Knowledge Store

LCD Liquid Crystal Display

LiZARD Line following to analyze Autonomous Robot Degradation

Ltb Lookup table

MTBF Mean Time Between Failure

MVVM Model-View-ViewModel

NIST National Institute of Standards and Technology

OWL Web Ontology Language

PHM Prognostic Health Monitoring

PMI Project Management Institute

PRU Programmable Logic Unit

RAM Random Access Memory

ROS Robot Operating System

RUL Remaining Useful Life

SQL Structured Query Language


SVM Support Vector Machine

TRL Technology Readiness Level

UXO Unexploded Ordnance

Wtb Warehouse table


Abstract of Dissertation Presented to the Graduate School of the University of Florida in Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy

AN INTELLIGENT FRAMEWORK FOR PROACTIVE HEALTH ANALYSIS AND MONITORING OF AUTONOMOUS MOBILE ROBOTS

By

Walter John Waltz

August 2019

Chair: Carl D. Crane III
Major: Mechanical Engineering

Health monitoring techniques employed in industrial settings are not well adapted for application to autonomous mobile robotic systems. This research defines an intelligent, framework-based health monitoring strategy to address this void with a three-tiered approach, including guidelines for mission, multi-agent, and single-agent monitoring. RobotMD is an extensible, intelligent, first-generation framework developed for offline monitoring to augment resource-limited online algorithms.

The RobotMD framework consists of database structures and application interfaces governed by strict configuration-management-based processes. Initial health assessments utilize state-space models for system characterization and run-time data to identify events prompting re-estimation and evaluation of parameters. The temporal evolution of these parameters provides insight into system performance and is used to analyze and identify the characteristics of degradation.

RobotMD offers a new process approach for offline monitoring of autonomous mobile robots, initial health strategies for resource-limited platforms, and the use of parameters as features for diagnostics and prognostics. The experiments conducted and loaded into the framework successfully identified degradation for a differential-drive, line-following mobile robot. This research further contributes a rule-based fault detection ontology for sharing degradation-state knowledge of autonomous mobile robotic systems.


CHAPTER 1 INTRODUCTION

Health is a ubiquitous term traditionally leveraged to indicate a positive functional condition; in the context of engineering, it refers to operating within desired performance characteristics. Implementing infrastructure with managerial processes to govern health monitoring and maintenance promotes safety and reliability while reducing the risk of failure and the overall cost of operation. These are essential considerations given the rapidly expanding presence of autonomous mobile robotic systems, which must establish a level of trustworthiness for successful integration into everyday life.

This chapter provides a background review of the history and techniques of condition-based maintenance (CBM), prognostic health monitoring (PHM), and integrated vehicle health management (IVHM). It also addresses project and general lifecycles, health modeling approaches, and key factors for the implementation of health monitoring, with distinctions among diagnostics, prognostics, and decision making. CBM and IVHM systems are examples of health monitoring systems that serve industrial manufacturers and complex vehicles, respectively. The problem statement section details the difficulties surrounding health monitoring and the technological gaps in applying health monitoring processes to autonomous mobile robots (AMR).

Addressing the lack of structure, a generalized health strategy for autonomous mobile robots is proposed to aid ongoing research and the identification of scope. Research motivations and an outline for the remaining structure of this dissertation conclude the chapter.


Background

History

CBM / PHM

CBM was the first maintenance strategy to utilize observations to deduce required maintenance. The Rio Grande Railway Company implemented the earliest form of this method in 1940, minimizing engine failures by checking for engine oil contaminated with fuel or glycol [1]. With the advancement of computer, sensor, and Internet of Things (IoT) technology, CBM evolved to enable quantitative assessments that could measure the health of machinery or processes. This leveraged data collected from sensors, processed to extract desired features, which were then used to make decisions and suggest maintenance activities [2], as depicted in Figure 1-1.

This was one diagnostic procedure, which, for early CBM, focused on the detection of anomalies that might result in faults or conditions leading to failure. Prognostics, the making of estimations about the future, further enabled CBM capabilities with the introduction and advancement of machine learning. As these processes matured with the development of different models, applications, and tools, CBM came to be referred to more often as prognostic health monitoring.

IVHM

A report published by NASA in 1992 suggested that maintenance activities should be determined from features extracted from sensor data and identified IVHM, also called integrated systems health management (ISHM), as the highest-priority technology for present and future space transportation [3]. Key factors substantiating the development of IVHM were the prevention of failures, promotion of reliability, minimization of maintenance costs, and assistance of operational diagnostics for the entire life cycle of space vehicles. However, the report also expressed caution in using sensors, as sensor systems were single tier, had no correlation among sub-systems, and lacked calibration techniques. In subsequent years, the developed IVHM processes, outlined in Figure 1-2, benefitted from the simultaneous research and development of CBM techniques, which expanded to include aeronautic applications [4]. Both CBM and IVHM health monitoring began to leverage underlying managerial processes to govern diagnostics, prognostics, and decision making, which contributed to more holistic approaches during the life cycles of critical engineering assets.

Project Lifecycle for Health Monitoring

The life cycle of a project or operation, as defined by the Project Management Institute (PMI), consists of five non-linear phases: initiation, planning, executing, monitoring and controlling, and closing. These phases are not prescriptive or sequential; Figure 1-3 illustrates an example of how they might be interconnected [5]. Initiation describes the phase where a business need is determined, an estimation of feasibility is outlined, and initial stakeholders are identified. The planning phase defines the project by collecting requirements and guidelines to execute and control managerial processes, and outlines how the work will be accomplished and measured. Work is performed in the execution phase, followed by monitoring and controlling, which utilizes change management and the processes that validate and verify the deliverable or service. Change management utilizes procedures for updating project documents and employs configuration management, discussed below. Validation is the passing of potential use cases and acceptance by stakeholders, and verification determines whether the requirements identified during the planning phase were met. Lastly, closure involves the completion of the project, where work is finalized with the delivery of products and documentation.

Configuration management, among the several categories of management described in the Project Management Body of Knowledge [5] and shown in Figure 1-4, is a standard for establishing and maintaining the integrity of work products using identification, control, and audits [6]. This management technique is primarily used to track changes made to a work product, or to components of a product, from one or more defined baselines. A baseline serves as the reference value or static description that provides context and metrics for the item being tracked as changes occur.

Configuration management depends on the application and the defined work involved, which may comprise several layers or sub-divisions, each of which is tracked individually with separate baselines. For a robot, typical configuration management would separately track its hardware and software components. Software configurations include the software versions, system plant, parameters, and mathematical models. Hardware and physical elements include all assemblies, connections, electronics, sensors, and actuators. Attention is given to parts that are replaced often, easily damaged, or re-configurable.

Continuous health monitoring (CHM) is primarily engaged in the monitoring and controlling phase; however, proper implementation requires additional consideration during the planning and executing phases. Development of CHM can occur after a system is in production, which would then require the CHM system to undergo its own initial life cycle phases before later aligning with the production system. Planning for CHM, at a minimum, should detail what faults to detect, performance objectives, and acceptable levels of risk during the identification of requirements. The remaining phases involve the processes, strategies, and models used for diagnostics and prognostics, the outputs of which are used for decision making, as outlined in Figure 1-5.

Models

Models and algorithms used for diagnostics and prognostics are categorized as knowledge-based, data-driven, or hybrid, as depicted in Figure 1-6. Knowledge-based approaches infuse prior known information in the form of rules, state machines, or physics-based methods, such as differential equations described using transfer functions or state-space models. This category can produce reliable models that can be verified and validated, but it is the most difficult to develop: it requires expert knowledge, involves extensive testing and complex implementation, and tends to be application specific [7]. Data-driven models and algorithms involve the use of machine learning, where historical data is consumed to estimate functions or models including, but not limited to, linear regression, neural networks, support vector machines (SVM), and clustering. These methods are less complex to develop and are more generalized across applications but require large datasets. These datasets must include nominal and fault data; the approach does not require expert knowledge of the system but is difficult to verify and validate [8]. Lastly, the hybrid category fuses the strengths of the knowledge-based and data-driven categories. Examples of hybrid approaches include online state or parameter estimation of a known model, and the comparison of physics-based and data-driven models in parallel [9].
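To make the hybrid category concrete, the sketch below uses recursive least squares to estimate the parameters of a known first-order model structure from run-time data. This is an illustrative example under stated assumptions, not an implementation from this research; the model, noise levels, and function names are assumed.

```python
# Hypothetical sketch: recursive least squares (RLS) estimation of the
# parameters of a first-order discrete model y[k] = a*y[k-1] + b*u[k-1],
# one simple form of the hybrid approach (model structure known,
# parameters learned from data).
import numpy as np

def rls_estimate(u, y, lam=0.99):
    """Return the estimated parameter vector [a, b] after one pass."""
    theta = np.zeros(2)            # initial parameter guess
    P = np.eye(2) * 1000.0         # large initial covariance (low confidence)
    for k in range(1, len(y)):
        phi = np.array([y[k - 1], u[k - 1]])      # regressor
        K = P @ phi / (lam + phi @ P @ phi)       # gain
        theta = theta + K * (y[k] - phi @ theta)  # correct with prediction error
        P = (P - np.outer(K, phi @ P)) / lam      # covariance update
    return theta

# Synthetic data from a "true" system with a=0.9, b=0.5 plus sensor noise.
rng = np.random.default_rng(0)
u = rng.standard_normal(500)
y = np.zeros(500)
for k in range(1, 500):
    y[k] = 0.9 * y[k - 1] + 0.5 * u[k - 1] + 0.01 * rng.standard_normal()

print(rls_estimate(u, y))  # approaches [0.9, 0.5]
```

A forgetting factor below one discounts old samples, letting the estimate track slow parameter drift of the kind associated with degradation.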

Diagnostics

Diagnostics is leveraged in conducting health assessments, after the collection and processing of data [10], for fault detection and fault isolation [7]. Assessing the health of assets results in qualitative or quantitative representations of critical features that facilitate the diagnostic analysis and serve as inputs for prognostics and decision-making processes. Quantitative results can be complex or aggregated to a single value or index [11]. Under nominal conditions, an asset is expected to operate until a fault is detected, indicating degradation or a failed component, referred to as soft and hard faults, respectively, as illustrated in Figure 1-7. Degradation is characterized by age, wear, or lack of performance, resulting in decreased functionality, reliability, and trustworthiness before failure occurs [12].

Faults are conditions that may lead to failure and are characterized by the unpermitted deviation of at least one property or parameter from an identified norm [13]. Failure is the inability of a system, process, or mission to complete its intended operation. Detecting faults before failure, alternatively called anomaly or novelty detection, can be performed by applying residuals, thresholds, rules, or pattern-matching methods to sensor outputs, states, parameters, or specific identified features [14]. Anomalies are classified into the three categories described below [15].

1. Point – An individual data instance that is anomalous in comparison to the rest of the data.

2. Contextual – A data instance that is anomalous in a specific context; attributes can be used to describe the context and behavior.

3. Collective – A series of related data instances that are anomalous in comparison to the entire data set.

The method of comparing residuals evaluates differences between predicted and actual outputs, which should be near zero during normal operation without faults. Thresholding, the simplest fault detection method, identifies a range of nominal values; once the range is violated, a finding is flagged. Rules result from information learned through fault analysis and data-driven approaches, and are implemented similarly to thresholds as permissible sensor ranges. Pattern matching is the subject of data-driven approaches to collective anomalies, where models indicate the likelihood of either nominal or anomalous behavior, depending on the application and model. The results of fault detection and identification can be passed to fault isolation, a process that further refines the detected fault to provide additional information, including type, location, and the potential parameters or components involved [13].
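As a minimal illustration of the residual and threshold methods just described (not drawn from the cited works; the nominal model, limit, and injected anomaly are hypothetical), the following sketch predicts outputs with a nominal model, computes residuals, and flags samples that violate a threshold.

```python
# Illustrative residual- and threshold-based fault detection sketch.
import numpy as np

def detect_faults(u, y_meas, a=0.9, b=0.5, residual_limit=0.2):
    """Flag samples whose model residual exceeds a nominal threshold."""
    y_pred = np.zeros_like(y_meas)
    for k in range(1, len(y_meas)):
        # one-step prediction from the nominal model y[k] = a*y[k-1] + b*u[k-1]
        y_pred[k] = a * y_meas[k - 1] + b * u[k - 1]
    residuals = y_meas - y_pred          # near zero under nominal operation
    findings = np.flatnonzero(np.abs(residuals) > residual_limit)
    return residuals, findings

rng = np.random.default_rng(1)
u = rng.standard_normal(200)
y = np.zeros(200)
for k in range(1, 200):
    y[k] = 0.9 * y[k - 1] + 0.5 * u[k - 1]
y[150] += 1.0                            # inject a point anomaly

_, findings = detect_faults(u, y)
print("flagged sample indices:", findings)  # includes index 150
```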

The processes of diagnostics have requirements that are determined during planning, with models and algorithms implemented during execution. Outputs are monitored and controlled in concert with prognostics and decision making. Documentation and identification of fault requirements are conducted through the systematic process of failure modes, mechanisms, and effects analysis (FMMEA), outlined in Figure 1-8. Theorized, laboratory-tested, or actual faults are analyzed to determine the series of events that initially caused the fault, its propagation, and the components involved [16]. The results of this process are the vital features to monitor, how they might be isolated, and how the faults are to be prioritized [17]. Together, management of these processes improves the safety and reliability of assets, leading toward systems that can be documented and verified. Reliability, in this sense, is the ability of an asset to perform to expected performance requirements for a specified period under expected working conditions [18].

Prognostics

The processes of prognostics, a topic discussed in Chapter 2, focus on making estimations or predicting future outcomes using the information and features provided by diagnostics. This is accomplished either by directly using a developed model that produces estimations, or by simulation to predict the remaining useful life (RUL), depicted in Figure 1-9. An estimation of confidence, or a probabilistic assessment, should accompany predictions to indicate the likelihood of the RUL being accurate given the information available. Sikorska, Hodkiewicz, and Ma, in their work "Prognostic modelling options for remaining useful life estimation by industry" [8], guide the selection of appropriate models and outline the following three levels of maturity of prognostic implementations, also provided in Figure 1-10.

1. Providing life and confidence estimations of systems and components for each diagnosed fault

2. Evaluating other potential faults

3. Analyzing models with additional business process information, such as maintenance and logistics

Prognostics relies on comprehensive health assessments and broad coverage of fault diagnosis to be effective and produce confident results. These dependencies place limitations on research and on the systems that can be evaluated, and may block full implementation due to the maturity required of diagnostics.
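A minimal sketch of the RUL idea follows, under the strong assumption of a linearly trending health index: fit the trend, extrapolate to a failure threshold, and use bootstrap resampling as a crude stand-in for the probabilistic confidence estimate described above. All names and values are illustrative.

```python
# Hypothetical RUL sketch: linear extrapolation of a degrading health index
# to a failure threshold, with a bootstrap spread as a rough confidence band.
import numpy as np

def estimate_rul(t, health, threshold, n_boot=200, seed=0):
    rng = np.random.default_rng(seed)
    ruls = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(t), len(t))      # bootstrap resample
        slope, intercept = np.polyfit(t[idx], health[idx], 1)
        if slope < 0:                              # only degrading trends cross
            t_fail = (threshold - intercept) / slope
            ruls.append(t_fail - t[-1])
    ruls = np.array(ruls)
    return np.median(ruls), np.percentile(ruls, [5, 95])

t = np.arange(50.0)
health = 1.0 - 0.01 * t + 0.02 * np.random.default_rng(2).standard_normal(50)
rul, (lo, hi) = estimate_rul(t, health, threshold=0.2)
print(f"median RUL: {rul:.1f}, 90% interval: [{lo:.1f}, {hi:.1f}]")
```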

Decision Making

Decisions, made from either diagnostic or prognostic processes, are automated, autonomous, or manually performed; the relationships among these are illustrated in Figure 1-11. The resulting actions could include maintenance indicators, enabling fault-tolerant control, or activation of an alternate behavior such as modifying rates of motion, terminating an operation, or issuing a return-to-home command. The term behavior, subject to many interpretations, is defined for this research as a layered control system activated by some logic and capable of running in parallel with other behaviors [19]. Behaviors can consist of one or more feedback control loops but are conceptually scoped to a single action or objective. Any feature of interest that informs decision-making strategies requires more than sensor data and should include information from business processes such as schedules, maintenance data, and logistics.

Problem Statement

PHM lacks readiness for critical production systems, and there is a technological gap in applying health monitoring to AMR. Technology readiness levels (TRL) indicate the developmental stage of an asset and whether requirements have been validated and verified, using a scale from one to nine; the higher the number, the more ready the technology, as indicated by Figure 1-12 [20]. While CBM is a mature concept, aspects such as prognostics do not have a high TRL [9]. The challenges in increasing readiness levels for prognostics include dependence on other processes, prohibitive costs, issues validating and verifying models, and a lack of standardization [21]. Prognostic model and algorithm development is contingent on the maturity, readiness, and completeness of the diagnostic models, processes, and outputs used to diagnose faults of a system. In consideration of the three levels of prognostics, only entities with sufficient funding and resources are capable of establishing valid, reliable, and robust prognostic health monitoring systems [8]. Prognostic models and algorithms developed by experts must also exhibit high fidelity and meet established requirements. Verifying and validating prognostic models depends on the type of model used, further complicated by the lack of available data and the seldom-codified occurrence of faults. Physics-based models, although costly and complicated, tend to be application specific but can be mathematically verified and validated. Although data-driven models are popular and generalize across applications, they are not easily verified and validated, limiting the growth of TRL. Models developed for PHM rarely make use of other useful business information, such as scheduling, maintenance, and logistics, that may affect the health of an asset [8]. Finally, PHM lacks standardization across industries [9] [21].

Literature indicates voids in established health monitoring strategies for AMR, resulting in a technological shortfall in PHM application. Schaal and Atkeson [22] explain that this is primarily due to a lack of working production robots that use learning components and would benefit from health monitoring. By nature, mobile robots present new challenges to PHM in the form of distributed and disconnected networks, limited power, and finite computational resources. They pose an elevated level of risk due to the uncertainty introduced by potential hazards, dynamic environments, and the inability to completely model interaction types. This also creates a challenge in verifying algorithms, which demands more requirements to analyze, develop, and document. The approaches of CBM, PHM, and IVHM all provide insight into the structure of a unified problem in which parallel, offline methods are not considered. Online methods for AMR are limited in scope, capability, and information, and must be augmented with offline processes.

Proposed Health Strategy for Autonomous Mobile Robots

This section proposes a general health strategy for monitoring and analyzing one or more autonomous mobile robots, referred to as agents in the context of multiple robots, illustrated in Figure 1-13. Strategies must meet design specifications and requirements that also address the needs of the architectures and software frameworks. Robotic architectures describe the design and organization of the algorithms and computational elements that are executed, the mechanisms used for communicating or transferring information, and the mapping and relationships of the systems that compose the robot [23]. A software framework represents the infrastructure of a design that utilizes a collection of algorithms or libraries adapted to a specific problem domain [24].

This proposed health strategy outlines three hierarchical tiers, labeled mission, multi-agent, and single agent. The hierarchy reflects the level of governance, the encapsulation of scope, and the dependency, readiness, and maturity of lower tiers. Each tier has unique health monitoring guidelines and outlines general considerations, such as hardware and software, determined by the selected implementation. Strategies for autonomous missions may provide additional context and information regarding relationships between tiers.

Mission health monitoring refers to the performance, progress, or status of a mission undertaken by one or more autonomous agents. Clear objectives with associated success criteria should be identified to qualitatively or quantitatively assess an outcome. This tier is the most application-specific and depends on the healthy operation of the AMR associated with the missions, which may benefit from further decomposition.

The multi-agent tier considers the health of the methods and tools explicitly used for the control, communication, and behavior of multiple agents. Requirements detail the robustness and capability that communication buses and networks must achieve for successful operation and execution. Multi-agent systems also present new health considerations, as there are unique formations, including swarms, fleets, and collaborative actions, that must be documented to measure success and health. Additionally, health monitoring may take advantage of multi-agent systems by designing behaviors that provide new and alternative health assessments of agents, components, or assets.


The single-agent tier addresses the health of an individual agent and is the best supported by existing technology and methods like IVHM. Monitoring methods like IVHM are ideal for on-board diagnostics that could halt autonomous behaviors or provide mitigation through the use of fault-tolerant controls. Typical robotic systems, consisting of various configurations, involve the use of hardware, software, sensors, and actuators. Health monitoring should take these elements into consideration, addressing each with its own monitoring needs. Computational hardware, as with single-board computers, may include monitoring of resources such as memory use, temperatures, available disk space, and central processing unit (CPU) load; a minimal sketch follows this paragraph. Monitoring software must meet the specifications of the developed algorithms and the runtime characteristics of the implemented software framework used for execution. Sensors must also have established processes to verify calibration and function in accordance with specifications, as they are core to most processes [25]. Existing sensors for autonomous behaviors may assist in monitoring health, but additional sensors may be necessary to meet further health-related requirements. Techniques such as sensor fusion, state and parameter estimation, external sensing, and pose estimation may contribute valuable information for health monitoring [12]. Actuators may be monitored through the use of sensor systems, similar to asset CBM, or modeled as a separate system entity.
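The sketch below illustrates the computational-hardware monitoring just described using the psutil library; the alert thresholds are illustrative assumptions, and temperature readings depend on platform support.

```python
# Minimal single-board-computer resource monitoring sketch using psutil.
# Thresholds are illustrative; sensors_temperatures() is Linux-only.
import psutil

def sample_resources():
    sample = {
        "cpu_percent": psutil.cpu_percent(interval=1),
        "memory_percent": psutil.virtual_memory().percent,
        "disk_percent": psutil.disk_usage("/").percent,
    }
    # Temperature sensors are unavailable on some platforms; degrade gracefully.
    temps = getattr(psutil, "sensors_temperatures", lambda: {})() or {}
    for name, entries in temps.items():
        sample[f"temp_{name}"] = max(e.current for e in entries)
    return sample

LIMITS = {"cpu_percent": 90.0, "memory_percent": 85.0, "disk_percent": 95.0}

sample = sample_resources()
alerts = [k for k, limit in LIMITS.items() if sample[k] > limit]
print(sample, "alerts:", alerts)
```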

The proposed single-agent monitoring strategy also suggests hierarchical monitoring of systems, sub-systems, and components. The terms system and sub-system indicate entities where a mathematical model is used to describe a dynamic process or observation. Conversely, components are parts of a system that are not intended to correlate with a mathematical model.


Health strategies for AMR are limited by computational power, electrical power, data storage, and weight restrictions. Furthermore, agents are likely to perform their functions on distributed and disconnected networks with limited or no internet access. A comprehensive health strategy should address these concerns: develop or improve health assessment techniques, identify which decisions are made manually and which autonomously, design new maintenance strategies, generalize results to similar systems, and augment analysis and monitoring with offline processes for all tiers. This proposed health strategy is a first iteration and, like early PHM techniques, should focus on the maturation and readiness of diagnostics and maintenance strategies.

Focus

The focus of this research is the development of an intelligent and extensible framework, called RobotMD, to enable offline health monitoring and analysis of single-agent systems. Unlike exclusively software frameworks, this research leverages software-defined infrastructure for health monitoring and functions independently of hardware-specific dependencies; it is referred to as the framework or infrastructure throughout this research. Configuration management is employed to organize and track the health of a robot's systems, demarcated by discrete events. An event is defined as a significant observation that provides business information and signals potential deviations of health from previous observations. Examples of tracked events include maintenance performed on the robot, observed damage, or completion of a mission, called an experience. Developed processes, internal to the configuration management, facilitate automatic health assessments and deliver results to the user. The diagnostic elements of interest for RobotMD include data collection of both mission and agent information, data processing, and health assessments of systems and sub-systems. As this is an offline system, diagnostics will only address the detection of slow-trending degradation from observed losses in performance within the data collected from each mission.

This research also proposes an approach using generalized state-space models, capable of detecting degradation with physical interpretations, to provide insight, identification of fault characteristics, prioritization of features, and selection of suitable models and algorithms. By focusing on state-space models, the emphasis is placed on robotic systems rather than on trying to define the uncertainties surrounding AMR. Finally, this research abstracts degradation results from the health assessments that use parameter estimation on generalized state-space models. A system in a degraded state may correlate to a particular configuration of parameters. For generalized models, these degraded parameter configurations may provide additional benefit as an ontology of degradation, with potential for sharing information between similar or multi-agent systems. RobotMD provides the infrastructure, guidance, and analysis tools to combat the cost and complexity of developing health monitoring systems, simplifies mechanisms for wider applicability, and seeks to increase the overall readiness of this technology.
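The parameter-as-feature idea can be sketched simply: compare an event's estimated parameter vector against a baseline configuration and, when a parameter deviates beyond a tolerance, record a shareable degradation-state entry. The record layout below is a hypothetical illustration, not RobotMD's actual schema.

```python
# Hypothetical sketch: flag degraded parameters relative to a baseline and
# emit a minimal, shareable degradation-state record.
import numpy as np

def assess_parameters(baseline, estimate, tol=0.10):
    """Return relative deviation per parameter and those exceeding tol."""
    deviation = (estimate - baseline) / np.abs(baseline)
    degraded = {i: float(d) for i, d in enumerate(deviation) if abs(d) > tol}
    return deviation, degraded

baseline = np.array([0.90, 0.50])   # parameters estimated at the baseline event
estimate = np.array([0.90, 0.38])   # parameters re-estimated after a later event

_, degraded = assess_parameters(baseline, estimate)
if degraded:
    # A minimal ontology entry: which parameters deviated, and by how much.
    entry = {"model": "first_order_motor", "degraded_parameters": degraded}
    print(entry)  # e.g. {'model': 'first_order_motor', 'degraded_parameters': {1: -0.24}}
```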

Motivation

Mobile robots in operation must combat inevitable cycles of degradation through proper sustainment and maintenance; this is especially true for robots performing complicated tasks and high-risk behaviors, and for those that operate in hazardous environments. Autonomous driving robots, a popular category of AMR, mandate safe and reliable systems. The NaviGATOR, shown in Figure 1-14, is one such example, designed by the University of Florida's Center for Intelligent Machines and Robots (CIMAR) for the Defense Advanced Research Projects Agency (DARPA) Urban Challenge [26] [27]. The NaviGATOR was designed to autonomously navigate an urban environment utilizing techniques including localization, obstacle avoidance, complicated reactive path planning, and cognitive resource management. These complex systems, depicted in Figure 1-15, undergo redundant maintenance and inspection and are entrusted to function with exceptionally high-performance characteristics [28]. Even though systems like the NaviGATOR are well designed, public trust in autonomous systems is fragile, and the technology is perceived as high-risk.

Operation in hazardous environments presents complexity in sustainment and maintenance. The All-purpose Remote Transport System (ARTS), shown in Figure 1-16, was designed to perform autonomous or semi-autonomous unexploded ordnance (UXO) clearance, range maintenance, remediation, and force protection [29]. Operating in environments containing UXO presents additional concerns of contamination and unpredictable damage to robotic systems that may cause faults or degradation. Figure 1-17 demonstrates the hazards of operating in these environments through the extensive damage to a teleoperated robot. Mobile robots that work in these environments are ideal candidates for health monitoring and would benefit from improved safety, enhanced reliability, and a reduction of costs from cascading faults.

Research Solution

This dissertation documents the design and implementation of RobotMD, comprised of managerial processes, database strategies, and a software application, the use of which is documented in Appendix A. To support this research, an in-depth review of interdisciplinary tools, techniques, processes, configuration management, and database methodologies is provided in Chapter 2 and outlined in Figure 1-18. Following this review, methods for collecting run-time data, distributed sharing of learned information, and abstraction methods for ontological purposes are discussed for robotic applications. The next subject reviews PHM techniques for industrial robotics, highlighting uses, approaches, and adaptive methods. Finally, continuous improvement strategies are evaluated for developing robust processes and for identifying assets to monitor, analyze, and adapt that would benefit from an evolving infrastructure.

Chapter 3 details the architecture, database structure, and processes that form the framework of RobotMD. The chapter begins by outlining the procedures to implement RobotMD and the information required from users, including a complete description of the mobile robot, a specified hierarchy of systems, and the structure of recorded data. The information collected from the user facilitates the automatic, dynamic creation of the database objects required by RobotMD before monitoring. The procedures to use RobotMD are then described, with emphasis on the data collected after a mission, which is called an experience. This feeds the configuration management processes that govern offline health monitoring, which are reviewed next. The cycle described by configuration management provides a natural mechanism to label collected datasets with specific discrete events in time. The degradation analysis process first seeks to establish a baseline against which to measure the health or performance loss caused by degradation or external events. Since RobotMD is a generalized framework, guidelines are given to provide an initial starting point, centered around the use of parameter estimation on state-space models. The chapter concludes with a discussion of expected results and the formation of a degradation ontology.

The subject of Chapter 4 is two controlled experiments performed to highlight and promote the functionality of RobotMD. Both experiments were designed to utilize an autonomous line-following robot called LiZARD, which stands for Line following to analyze Autonomous Robot Degradation. The specifications, software structure, and health monitoring of LiZARD are documented in Appendix B. The first experiment establishes a basic understanding of the use of RobotMD and the tools provided to corroborate detected degradation, accomplished by repetitively executing the same mission until a fault occurs. The second experiment simulates degradation by increasing the radius of one of the wheels of LiZARD until a fault occurs; its purpose is to identify a fault condition that targets a parameter of the kinematic model of LiZARD. Finally, Chapter 5 summarizes the importance of health monitoring for mobile robots, highlights the contributions of RobotMD, and concludes with a review of future work.


Figure 1-1. Core processes of CBM, involving data acquisition, processing, and decision making [2].

Figure 1-2. Core processes of IVHM and definitions [4]. From lowest to highest level, the process outputs are: digitized data with timestamp and data quality; descriptor data with timestamp and data quality; current enumerated state indicators, threshold boundary alerts, and statistical analysis data with timestamp and data quality; health grade, diagnosed faults and failures, recommendations, evidence, and explanation; future health grade, future failures, recommendations, evidence, and explanation; and operations and maintenance advisories, capability forecast assessments, recommendations, evidence, and explanation.


Figure 1-3. An example of relationships between phases of a project life cycle indicated by the PMI [5]. Project Management Institute, A Guide to the Project Management Body of Knowledge Fifth Edition, (2013). Copyright and all rights reserved. Material from this publication has been reproduced with the permission of PMI.


Figure 1-4. Change management structure as indicated by PMI [5]. Project Management Institute, Practice Standard for Project Configuration Management, (2007). Copyright and all rights reserved. Material from this publication has been reproduced with the permission of PMI.



Figure 1-5. Relationships of the outputs between diagnostics, prognostics, and decision making.

Figure 1-6. Categories and breakdown of models and algorithms used for diagnostics and prognostics, modified from [9].


Figure 1-7. Expected deterioration of performance over time [8].

Figure 1-8. Process flow for failure modes, mechanisms, and effects analysis [10].


Figure 1-9. Generalized remaining useful life evolution visualized with probabilistic estimates [21].


Figure 1-10. Levels, process, and maturity of diagnostics and prognostics [8].


Figure 1-11. Related subjects between diagnostics, prognostics, and decision making.

Figure 1-12. Technology readiness levels for prognostics [9].



Figure 1-13. Proposed health strategy for autonomous mobile robots.


Figure 1-14. NaviGATOR [27]. Photo courtesy of Carl D. Crane.


Figure 1-15. NaviGATOR Complex Architecture [27].


Figure 1-16. ARTS performing range clearance operations. Photo courtesy of Walter M. Waltz.


Figure 1-17. A destroyed ground robot. Photo courtesy of author.



Figure 1-18. Interdisciplinary tools and technologies required for development of RobotMD.


CHAPTER 2 LITERATURE REVIEW

The collection, storage, and post-processing of data produced by mobile robotic systems is a complex design problem, and published literature regarding the degradation of mobile robots is not yet abundant. Further, readily available continuous performance data for robots in production is even more scarce. However, their predecessors, stationary industrial robots, have been well documented across a full spectrum of interdisciplinary technologies that are integrated for health monitoring. These branches of knowledge include mechanical and electrical engineering as well as the information technology of software and database architectures. This chapter begins with a thorough review of these interdisciplinary tools, followed by techniques for collecting run-time data and insights and lessons learned from active learning, and concludes with a set of requirements that have been adapted in the development of this dissertation's proposed intelligent framework for proactive health analysis and monitoring of autonomous mobile robots.

Robotic Information Systems

The complexity of robotics is molded by interrelated hardware and software components that collect, retrieve, process, store, and distribute information to support designed objectives [30]. By definition, robots of any type are information systems that can benefit from the tools, services, policies, and best practices prevalent within the information technology (IT) industry, leading to improved architectures, procedures, and data flows. Successful frameworks that incorporate these tools include the Robot Operating System (ROS) [31], the Joint Architecture for Unmanned Systems (JAUS) [32], and the Cognitive Resource Management Framework (CRMF) [28].


Database management systems (DBMS), used for the efficient management of networked, stored data, are useful for enhancing robotic capabilities. They are applied in robotic systems for development, fault analysis, knowledge repositories, and information sharing. The most common type of DBMS is the transactional database, structured to perform create, read, update, and delete (CRUD) operations on records stored in tables for routine, frequent actions. These tables are often organized, along with other objects, into logical groups called schemas.

Uses of on-board transactional databases, also known as knowledge stores (KS), in robotic operations include run-time parameters, localization and mapping, and pre-planned trajectories. The CoSy Localization Database (COLD), an example of a knowledge store, is used to store data for planning and mapping environments. COLD was utilized in the mapping of three research laboratories across Europe [33], consisting of data collected by cameras, laser scanners, and encoders. The COLD structure is shared and used for the development of other robots, emerging algorithms, and baselines in other repetitive studies, and for the determination of semantic knowledge based in symbology.

Another useful DBMS is the data warehouse (DW), formed by the aggregation of data from one or more sources. These structures are often used to extract meaningful business intelligence (BI) from raw data transformed out of transactional databases or structured datasets such as comma-separated value (CSV) files. One such example is the derivation of BI rules that define geospatial regions when mining data within an aggregated data warehouse [34]; the results can be used in building unique climate maps, flood-plain estimations, and traffic congestion predictions.

Data science has advanced significantly since the late 1980s, and the field continues to produce useful tools, algorithms, and processes [35]. One product of ongoing research is the integration of two well-established technologies, artificial intelligence (AI) and DBMS [36], to form intelligent databases. Intelligent databases have three components: user interfaces, database engines, and high-level tools. The user interface is the interactive mechanism for observation, reporting, or manipulation. The database engine, a generic term referring to a DBMS, is used to perform CRUD operations to manipulate data. High-level tools refer to the algorithms and processes used to ensure the quality of data and the operations for deriving knowledge from raw data. An example of a high-level tool that does not require knowledge of the data structure is the semantic query, which can receive non-structured sentences and return relevant information. This differs from approaches using the structured query language (SQL) employed by some database engines to retrieve data. These tools are a sub-set of a process framework known as knowledge discovery in databases (KDD).

In the late 1980s and early 1990s, KDD was developed to process large sets of data, extracting meaningful information that is aggregated to develop knowledge. The KDD framework outlines the sequential, non-trivial process of identifying valid, novel, potentially useful, and ultimately understandable patterns in data, as depicted in Figure 2-1 and described in the following list [37].

1. Selection of a data set - Information is selected to be processed by the system.


2. Pre-processing of data - Data cleansing by evaluating data against pre-determined values and eliminating noise. For example, values that fall outside a range of acceptable values could be rejected and flagged for review.

3. Transformation - Values or combinations of columns are transformed into appropriate units or mathematical mappings, or routed to designated columns of the final structure.

4. Application of data mining tools - Tools such as artificial intelligence, machine learning, and neural networks are applied.

5. Interpretation and evaluation of learned knowledge – The results of the applied algorithms and techniques are used to derive knowledge.

The first three steps outline the process known as extract, transform, load (ETL), where raw data is processed through well-defined procedures. The fourth step is critical: data mining algorithms are applied to yield non-trivial knowledge. Data mining is defined as a set of mechanisms and techniques, realized in software, to extract hidden information from data [35]. The techniques and methods utilized are typically algorithms from machine learning, neural networks, and artificial intelligence, and the algorithm(s) used must be selected and developed for the intended system. The application of machine learning to massive datasets using a structured approach makes this a viable procedure for extracting meaningful knowledge from modern autonomous robotic systems. In the final step, the results are stored, interpreted, or provided for consumption by another process. A condensed end-to-end sketch follows.
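The following sketch walks synthetic sensor data through the five KDD steps; the data, acceptance range, unit conversion, and the simple two-means clustering standing in for a data mining algorithm are all illustrative assumptions.

```python
# Hypothetical, condensed KDD walk-through on synthetic wheel-speed data.
import numpy as np

rng = np.random.default_rng(3)

# 1. Selection: a raw dataset of wheel-speed readings (rad/s) from two
#    operating regimes, with two corrupted samples mixed in.
raw = np.concatenate([rng.normal(10.0, 0.2, 150),
                      rng.normal(20.0, 0.3, 150),
                      [500.0, -400.0]])

# 2. Pre-processing: reject values outside a pre-determined acceptable range.
clean = raw[(raw > 0.0) & (raw < 50.0)]

# 3. Transformation: convert rad/s to linear velocity (m/s), wheel r = 0.05 m.
velocity = clean * 0.05

# 4. Data mining: crude 1-D two-means clustering to separate the regimes.
centers = np.array([velocity.min(), velocity.max()])
for _ in range(10):
    labels = np.abs(velocity[:, None] - centers).argmin(axis=1)
    centers = np.array([velocity[labels == k].mean() for k in (0, 1)])

# 5. Interpretation: report the learned operating regimes.
print("regime centers (m/s):", centers)  # near 0.5 and 1.0
```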

Robot Data Management and Abstraction

Onboard Database Design and Uses

In 2012, Niemueller, Lakemeyer, and Srinivasa [38] sought to tackle the challenges of designing a generic database with application to fault analysis and performance evaluation by observing and working with collected data. They found that collected data is typically volatile and often disposed of after consumption by an operation, function, or learning method. They then postulated that an on-board data store must have the following capabilities:

1. Ability to store any and all data produced on the robot in real-time
2. Powerful retrieval features to query specific data
3. Integration with typical robot middleware
4. No or minimal configuration
5. Natural adaptation to evolving data structures
6. Distributable among multiple robots and off-board machines
7. Independence toward robot platform and software context

These capabilities formed the basis of their selection of MongoDB, a schema-less database, as the datastore, with ROS as the middleware, while promoting the atomicity, consistency, isolation, and durability (ACID) qualities often used to describe a robust and reliable DBMS. The database was designed around the document-oriented concept, where key-value pairs are grouped into entities called documents [39]. Entities, in relation to data modeling, are objects that represent data structures defined with elements and data types for single low-level concepts. A document may also belong to a collection, in which similar information can be associated. To keep storage space bounded, these collections are capped to a fixed size, which is useful for logging events. Data retrieval, or querying, must also be considered for performance, especially with the quantities of data that robotic systems produce. This is addressed by indexing over specified fields in a document, allowing search algorithms to locate and return data quickly [40].
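The document-oriented pattern described above can be sketched with pymongo, assuming a locally running MongoDB instance; the database, collection, and field names are illustrative rather than those used by the cited system.

```python
# Sketch: capped collection, index, document insert, and query with pymongo.
from datetime import datetime, timezone
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
db = client["robot_datastore"]

# Capped collection: fixed-size storage, well suited to rolling event logs.
if "telemetry" not in db.list_collection_names():
    db.create_collection("telemetry", capped=True, size=64 * 1024 * 1024)
telemetry = db["telemetry"]

# Index a frequently queried field so retrieval stays fast at volume.
telemetry.create_index("stamp")

# A point measurement stored as a document of key-value pairs.
telemetry.insert_one({
    "stamp": datetime.now(timezone.utc),
    "topic": "/wheel_speed",
    "value": 9.87,
})

# Query: the most recent documents for one topic.
for doc in telemetry.find({"topic": "/wheel_speed"}).sort("stamp", -1).limit(5):
    print(doc)
```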

To interface with ROS, the team subscribed to the data produced by the system through the use of topics. Topics are mechanisms for transferring data between processes in ROS, characterized by their defined message type and their registration with a central broker. While ROS has recording capabilities specific to an operating system, it does not have the capability for advanced querying. Data provided through topics are collected and stored in a near real-time, peer-to-peer fashion. The following list describes the different types of data that can be captured and recorded; a minimal bridging sketch follows the list.

• Point – single element measurements individually recorded

• Waveform / Collection – a segment of data, typically in an array, collected as a single unit

• Complex – other types of specialized formats beyond point and collection-based formats, often stored as binary-large-objects (BLOBs), such as images or point fields

Using the data recorded in MongoDB, a systematic, guided, data-driven fault analysis is performed by inspecting the data outlined in Figure 2-2 to identify the components that produce errors in software as a result of anomalous, low-quality sensory signals or actuator feedback. To accomplish this task, the data flow is described as following the model of the Data-Information-Knowledge-Wisdom (DIKW) hierarchy to identify the appropriate decisions and tasks [41]. This hierarchy is a philosophical representation of the evolution of observed data as it is extracted and modeled into the stages of information, knowledge, and wisdom, depicted in Figure 2-3 and Figure 2-4. DIKW provides rough boundaries to logically separate tasks or effort into levels of sophisticated algorithms. This leads to a natural progression where knowledge is extracted by incrementally adding context to raw sensor signals for eventual consumption by some level of robotic understanding in the performance of enhanced actions.

The experiment performed by Niemueller, Lakemeyer, and Srinivasa evaluated a failed grasping event, which they originally assumed to be a data-related error. From a thorough manual review of the data, they determined that the expected result did not match the sensory data due to a camera mount that had shifted over time. Even in this simple identification, the process of following the trail of poor sensory data that propagated through the data flow was tedious.

The second application evaluated the performance of a bottle retrieval task by the ROS-enabled bimanual mobile service manipulator developed at Carnegie Mellon University. They observed that the Python-based deserialization of network streams was too computationally expensive and implemented a specialized C++-based data logger instead. The typical amount of data produced by the system was approximately 120 MB/min, with peak values of 500 MB/min, resulting in approximately 4300 inserts/min.

Their work was expanded upon in 2013 to explore the concept of life-long learning for service robots using data storage techniques and the applicability of cloud-based solutions [42]. The central idea of their research was to enable a robot to learn new objects using persisted training data collected from sensory data, with a focus on captured images. The following requirements were identified to address this new type of storage approach.

• Flexible data structures – varying and evolving data structures for different perception methods and input formats

• Data management – unified storage architecture; the ability to replicate data quickly, and backup and restore facilities

• Flexible and efficient retrieval – queries for specific data; fast and low-overhead retrieval of diverse and abundant data

The same database structure described in their earlier work was used with the addition of GridFS, a file system extension to manage the storage of BLOBs. This extension permits the database to serve as a perception database and enables new queries to be executed for performing object recognition learning both online and offline. MongoDB's replication feature allows data to be copied to any number of databases, known as sinks, which is beneficial when shared in a multi-host or cloud-based environment. In a similar work exploring cumulative learning techniques over the life of a robot [43], object recognition was performed using a neural network strategy. While their results were successful, it was suggested that a hierarchical organization of the data be used to avoid exhaustive network searches.

The ideas presented in these papers are generalized concepts that should be considered for any mobile robot intended to extract information from collected data. A robust storage device incorporates ACID characteristics, associative data rules, and indexing. Expanding on the uses of collected data, the philosophical concept of robotic understanding was evaluated, adding to the powerful notion that actions may be enriched by contextual cues.

An important note in this work is the platform's initial struggle with resource starvation, which highlights the limited resources available to mobile robots. The RobotMD framework separates logging from the immediate use of the information, relieving the need for additional onboard resources and for streaming data to a transactional database. While an onboard transactional database remains a viable solution, it is not required because streamed data is not immediately queried, further reducing the concern of overhead for the system.

Cloud-Based Collection Methods and Knowledge Extraction

Cloud technology is a newer capability adopted by disparate industries for provisioning software applications as a service, meeting the distribution demands of social media, and performing regression analysis for financial market predictions. Cloud computing is defined by the National Institute of Standards and Technology (NIST) as a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources that can be rapidly provisioned and released with minimal management effort or service provider interaction [44]. Services can also be provided in a cloud setting, such as platforms, software applications, and infrastructure [45]. In recent years, there has been significant research effort in the field of cloud services for autonomous robots, including architectures [46], infrastructures [47], and applications [48]. Kehoe et al. clarify this technological approach by stating that cloud robots and automation systems rely explicitly on either data or code from a network to support operations, where not all sensory computation, memory, and persistence is integrated into a standalone system. Significant areas of research for "Cloud Robotics" are identified as big data, cloud computing, collective robot learning, and human computation.

Big data describes high-volume, complex, and variable data often targeted for knowledge extraction, data science, and business intelligence [49]. In application to robotics, this can be considered the type of data that exceeds the processing capacity of conventional database systems or knowledge stores. Cloud computing often follows big data sequentially, processing the delivered or retrieved data. Another form of cloud computing is the generation of a task sequence for a robot given stimulus input from its perception sensors [50].

Collective learning on the collected "big data" is often the next logical step, especially in the context of knowledge extraction and the production of meaningful results. Lastly, human computation is the type of assisted learning produced by humans intended for use by robots. This type of effort is frequently a form of iterative or active learning control strategies. Iterative and active learning control are similar types of control laws in which machine learning techniques are used to develop a control law trained from data procedurally. These implementations are strongly suited for cyclic and repetitive tasks, and the specific algorithm used should be selected given the characteristics of the process [51]. All of the technological areas discussed are rarely considered independently and resemble the KDD structure in a widely accessible format.

Three interrelated, progressive services that use these strategies are KnowRob, Open-EASE, and RoboEarth. KnowRob is a knowledge processing tool and storage environment for autonomous robots, with an emphasis on personal service robots that implement ROS, as described by Tenorth and Beetz in their work Knowledge Processing for Autonomous Robots [52]. This service aims to provide a comprehensive set of learned knowledge in a format that can be processed and understood by robots using the Web Ontology Language (OWL). The authors identified essential characteristics to distinguish knowledge bases for robots, outlined in Figure 2-5.

• Robot-specific knowledge – providing the type of knowledge robots need

• Grounding and integration with a robot – mapping between the knowledge base to the related actions, percepts, and internal data structures

• Integration of knowledge sources – acquiring information from necessary sources and potentially different formats

• Special-purpose inference methods – inferences required for a robot under unique circumstances

Thus, the authors sought to augment the knowledge extracted or learned from environments with an ontology, considered common sense, for the robot to consume. An example they provided was to alter the behavior of transporting a container filled with water. KnowRob would provide the ontological relationship between the transport of the liquid and an inferred task trajectory such that the liquid is not spilled, in a logical map-like structure similar to Figure 2-6. This service, however, did require careful thought and integration. Only limited types of data or knowledge were provided to the system, in data structures identified as actions, perception objects, events, and computable classes [53].

Building and expanding on the concept of KnowRob was a series of cognitive-equipping technologies embracing the expansion of cloud-based technologies. Beetz, Tenorth, and Winkler introduced Open-EASE in 2015, a remote knowledge representation and processing service that aims at facilitating the use of artificial intelligence technology for equipping robots with knowledge and reasoning capabilities [54]. This service seeks to store and provide data on a massive scale, storing details about robots' hardware, capabilities, environments, operational tasks, memorized experiences of manipulation episodes, and knowledge obtained from training episodes in which skills are demonstrated. Central to this service is the comprehensive storage of execution data with provided metadata, including details of the task and system, in a universal, open, publicly accessible way. Also provided is a suite of software tools enabling researchers to work with their data and to utilize Open-EASE's representational infrastructure. This infrastructure allows researchers to attempt to make sense of the data semantically, defining concepts with a standardized, uniform vocabulary.


RoboEarth is an open-source knowledge-based system providing web and cloud services intended to transform a simple robot into an intelligent one [55]. The core design principle is to permit the collection, storing, and sharing of data, independent of specific hardware [56]. RoboEarth sports a three-layered architecture: a centralized database on a global server, a generic middle layer, and a specific action-based, hardware-dependent layer, shown in Figure 2-7. The generic and hardware-dependent layers are both implemented on the robot. The data and knowledge processing architecture incorporates a distributed environment involving multiple databases, each with a specific purpose. The last technical consideration is a series of tools providing interfaces between the layers to facilitate the traffic of data for efficient use.

Scaling RobotMD as a cloud solution is beyond the scope of this paper; however, the premise, format, and considerations are essential to the implementation of the proposed intelligent framework. Cloud-based services are designed to be as universal as possible, integrating other technologies' interfaces, methods, or providers to enable a variety of systems with advanced tools. These attributes and lessons learned form an ideal environment for robotic learning systems.

Another advantage of evaluating cloud-based systems is the potential to share new information that is applicable to similar types of robots. Generic observations can form the basis for an ontology of semantic information about tasks, environments, and actions. The ability to share knowledge, beyond data, advances more than a single robot and allows the benefits of these systems to be included in any design. While the work presented focuses on object recognition, serving, or calculating corresponding tasks and trajectories, the work presented in this paper could augment this knowledge with an ontology of degradation. For example, this could be used to identify a state of degradation, and a cloud-based service could return a modified behavior to limit further deterioration from the fault during operation. To provide this type of knowledge, the first iteration must develop an understanding using a generalized approach and simple definitions.

Degradation and Health Monitoring

Monitoring degradation and its potential impact on autonomous ground robots is a central component of this paper. Due to the lack of publications regarding mobile robots, the works presented here concern industrial robots and their extensive use in manufacturing processes. The manufacturing industry has well documented the impact of production quality on safety and finances in the pursuit of competitive advantage [57].

Data-Driven Diagnostics

Lee addresses extending the mean-time-between-failures (MTBF) by detecting degradation and faults using a neural network trained on sensor signal characteristics [58]. MTBF, represented in Figure 2-8, is a measurement described as a key-performance-indicator (KPI). KPIs are an industry-wide concept of quantified values that describe how well a service is performing [59].

His work documents an examination of manufacturing processes, the ever-increasing cost of reactive maintenance, and the observation that in high-performance systems, operations cannot tolerate significant degradation, much less failed components. The work presented is a cost-effective, proactive approach to improve KPIs. Upon investigating available methods, Lee identified the problem that newly designed machines do not have any historical maintenance data, and that adding sensors to monitor machine condition also increases machine complexity, requiring trained personnel. Simply measuring the machine's condition also does not reveal degradation patterns between dependent components. Lee's solution was two cerebellar model articulation controller (CMAC) neural networks trained on available system sensors to describe an estimated level of degradation, the process of which can be seen in Figure 2-9. The architecture of the CMAC is shown in Figure 2-10, relating the input vector $S$, association vector $A$, and output vector $P$. The following equations formulate the initial structure.

$$S = (S_1, S_2, \ldots, S_n) \qquad (2\text{-}1)$$
$$P = F(S) \qquad (2\text{-}2)$$
$$f: S \rightarrow A \qquad (2\text{-}3)$$
$$g: A \rightarrow P \qquad (2\text{-}4)$$

In his first experiment, he trained the neural network on the proper encoder signals of a simple motor setup at several recorded motor speeds. The training method consisted of relating normal behavior to a user-defined number of hidden nodes and a prescribed output vector. Weights were adjusted until the error between the desired and computed $P$ vectors was within a tolerable value defined by $\epsilon$, as shown in Equations 2-5 through 2-8.

$$w_{in} = w_{io} + \left(\frac{d - \sum_{j=1}^{n} w_j}{n}\right) m \qquad (2\text{-}5)$$
$$P = (P_1, P_2, \ldots, P_n) = \mathit{Calculated} \qquad (2\text{-}6)$$
$$P_D = \mathit{Desired} \qquad (2\text{-}7)$$
$$\left| P - P_D \right| < \epsilon \qquad (2\text{-}8)$$

The components that make up Equation 2-5 include the old weight value $w_{io}$, desired output value $d$, number of addressed locations $n$, value of the current weight $w_j$, percentage of correction $m$, and new value $w_{in}$. A pattern discrimination model was applied to the trained neural network, resolving weight values greater than zero to a value of one, and zero otherwise. Then, a confidence value was determined to represent the degree to which the system exhibits normal operating conditions, calculated by

$$\mathit{Confidence} = 100 \cdot \left(\frac{\mathit{Count}}{\mathit{Size\ of\ Array}}\right) \qquad (2\text{-}9)$$
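A minimal MATLAB sketch of Lee's weight update (Equation 2-5) and the confidence calculation (Equation 2-9) may clarify the mechanics; the number of addressed locations, the correction percentage, and the loop structure are illustrative assumptions rather than his implementation.

    n = 32;                                % number of addressed weight locations (assumed)
    w = zeros(1, n);                       % association-cell weights
    m = 0.5;                               % percentage of correction (assumed)
    d = 1.0;                               % desired output for the training sample
    epsilon = 1e-3;                        % tolerable error (Equation 2-8)
    for iter = 1:200
        err = d - sum(w);                  % desired minus computed output
        w = w + (err / n) * m;             % Equation 2-5 applied to addressed cells
        if abs(err) < epsilon, break; end
    end
    A = double(w > 0);                     % pattern discrimination: binarize weights
    confidence = 100 * (sum(A) / numel(A));   % Equation 2-9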

To test the trained system, Lee adjusted backlash screws to simulate degradation and observed the resulting confidence values. The output successfully estimated the simulated degradation.

Lee recognized that the performance or quality of the operating signals would be an acceptable indicator that a machine or component needs service, extending the life of the machine and reducing failures. A benefit of this system was that it could be developed without historical maintenance knowledge. However, the neural network will not adapt to any additional information. To further complicate the performance of this approach, the user-selected values for this system may be susceptible to, or may worsen, anomalous signals. Using model-based knowledge and awareness of the type of failure is one method of providing robustness to anomalous signals.

Adaptive Control

Detecting degradation and faults and controlling these states are described in work performed by Liu, Control of Robot Manipulators with Consideration of Actuator Performance Degradation and Failures [60]. Liu addresses problems of actuator performance degradation by means of an online, adaptive control scheme to compensate for parametric model uncertainties. He differentiated his work from previous robust and adaptive control laws by not modeling the joint actuators as precise torque sources, because that method had the potential to damage the controlled actuator. The formation of his adaptive control was centralized around a generalized model for robotic manipulators in the form of

$$D(q)\ddot{q} + C(q,\dot{q})\dot{q} + g(q) = Y(q,\dot{q},\ddot{q})\,p = \tau \qquad (2\text{-}10)$$

where $q$, $\dot{q}$, $\ddot{q}$ are the position, velocity, and acceleration vectors, $D(q)$ is the inertia matrix, $C(q,\dot{q})$ captures the Coriolis and centripetal forces, $g(q)$ is the gravity force, $\tau$ is the vector of joint torques, $p$ is a vector of physical parameters of the robot manipulator, and $Y(\cdot)$ is a matrix of known functions of joint positions and derivatives. The actual steady-state torque was defined as

$$\tau = K_t \tau_c \qquad (2\text{-}11)$$

where $K_t$ is a diagonal matrix of actuator torque coefficients determined after calibration, and the primary element of the adaptive control law.

Liu's adaptive control method proved successful, implying that the method is generalized for an entire class of robot manipulators through the use of a general model. Liu also suggested that, from experimentation, a threshold can be defined to logically halt manipulation in an attempt to prevent failure. Though the control may have performed better had it been developed with a more complex model, this raises computational, stability, and performance concerns. This approach focuses solely on actuator torque degradation and requires a detailed understanding of the particular type of degradation and failure.


RUL

Calculating remaining useful life (RUL), described by Si et al., is an essential factor in condition-based maintenance, prognostics, and health management that bears a significant impact on maintenance, logistics, profitability, and operational performance [61]. Early approaches to estimating when machinery needed maintenance included the determination of operational thresholds and predictions utilizing expert knowledge and judgment. Thresholds are useful initially, but they do not address the dependent relationships among system components. The methods described in their paper to estimate RUL carry many assumptions and selection criteria. Si, Wang, Hu, and Zhou outlined four challenges of estimating RUL:

• Challenging to determine RUL with little or no data available, and the difficulty deriving physics-based models

• Data fusion and high-dimensionality

• Influence of external environmental variables

• Consideration of multiple failure modes for a single component

These challenges are significant for industrial robots, but more so for mobile robots. The last two challenges are especially concerning when considering RUL for mobile robots, given their dynamic, damage-prone environments.

Jardine, Lin, and Banjevic covered an exhaustive list of CBM diagnostic techniques and approaches. A subset of interest involves the use of model-based degradation detection documented by Tran, Pham, Yang, and Nguyen [62]. The entire approach, shown in Figure 2-11, also considers a predicted value of RUL, but only the first stage of estimating degradation will be reviewed. They began by performing system identification using the autoregressive moving average (ARMA) model, shown in Equation 2-12, using only normal operating data to describe the system.

$$y_t = c + \sum_{i=1}^{p} \psi_i\, y_{t-i} + \sum_{j=1}^{q} \phi_j\, \epsilon_{t-j} + \epsilon_t \qquad (2\text{-}12)$$

This model's parameters include the autoregressive order $p$, moving average order $q$, autoregressive coefficients $\psi_i$, moving average coefficients $\phi_j$, the normal white noise process $\epsilon_t$, and constant $c$.

The degradation index, defined to be the root mean square of the residual errors, was used to compare the system outputs to the model. It was suggested that this value could have thresholds but would require operators or maintainers to select appropriate bounds.
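A simplified MATLAB sketch conveys the idea: an AR(p) model (the autoregressive half of Equation 2-12) is fit by least squares to normal operating data, and the root mean square of its one-step residuals on a later session serves as the degradation index. The pure-AR restriction, the chosen order, and the synthetic signals are assumptions for brevity.

    rng(0);
    yn = filter(1, [1 -0.7], randn(1200, 1));   % stand-in for normal operating data
    yt = filter(1, [1 -0.7], randn(1200, 1)) + 0.5*randn(1200, 1);  % later, noisier session

    p = 4; N = numel(yn);                       % autoregressive order (assumed)
    X = ones(N - p, p + 1);                     % regressors: constant plus p lags
    for i = 1:p, X(:, i + 1) = yn(p + 1 - i : N - i); end
    theta = X \ yn(p + 1 : N);                  % least-squares AR coefficients

    M = numel(yt);
    Xt = ones(M - p, p + 1);
    for i = 1:p, Xt(:, i + 1) = yt(p + 1 - i : M - i); end
    resid = yt(p + 1 : M) - Xt * theta;         % one-step prediction residuals
    degIndex = sqrt(mean(resid.^2));            % RMS residual = degradation index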

The team focused on a compressor equipped with only vibration sensors, a standard approach for industrial mechanical degradation assessments, to validate their work. Data were collected at intervals of six hours, consisting of approximately 1200 data points each session. During this testing, degradation was successfully detected, with significant deviations indicated at the 300th point. The compressor eventually failed at the 308th point and was subsequently repaired. Following the data collection effort, they were able to successfully generate an ARMA model to quantify the degradation index of the machine. Additionally, they observed a known fault mode that could be learned: the compressor had failed due to insufficient lubrication of the main journal bearings.

Monitoring health continues to be an active area of research for industrial robotics, where excellent tools and techniques have proven successful. A primary concern for these methodologies is how sharply industrial settings contrast with mobile robots. Many of the assumptions made for an industrial setting cannot be applied to the outdoor, dynamic environments in which most mobile robots operate. Even the quality of internal and external sensory signals may be affected by the environment, damaging external forces, and the dynamic trajectories involved. This requires a robust approach that takes into consideration the various maintenance and repair events that are well documented using configuration management processes. RobotMD has been designed to accommodate these concerns by focusing on overall system performance as the input to health assessments. The assessments are calculated by comparing previous and current dynamical models derived from logically selected portions of the collected data.

Techniques for Continuous Improvement Strategies

Because this work focuses on an offline monitoring system, online learning methods and control laws such as iterative, repetitive, and run-to-run control laws are excluded from consideration [51]. Such control laws operate on volatile historical input signals to improve performance for cyclic or repetitive trajectories or tasks [63]. A suitable alternative for improving control strategies is active learning.

Active learning is a subfield of artificial intelligence and machine learning in which computer systems improve with experience and interactive training [64]. This method proved very successful in the work of Dima, Hebert, and Stentz, Enabling Learning from Large Datasets: Applying Active Learning to Mobile Robots [65].

Their research focuses on using active training methods to reduce the vast amount of unlabeled training data collected from perception sensors for terrain classification and obstacle detection. Without assistance, sifting through collected data for scenes of interest becomes a tedious, time-consuming, and costly task. Active learning was applied to this process by processing the scenes and estimating patches of color features using kernel density estimations. These estimations are used by a scoring function to identify candidates of interest to be reviewed by a human expert.

This approach does not directly benefit health monitoring, but it does provide a direction for handling massive amounts of data and a process to prune or select desired subsets given appropriately selected algorithms. An iterative, proactive approach like this could also be used to gradually learn regions of degradation, facilitating the determination of appropriate thresholds.

Another attribute to consider when continuously learning is the temporal relevancy of the target being examined. The objective of the work of Ullah, Orabona, and Caputo [66] was to consider the evolution of a dynamic environment of objects over time for service robots. Typical indoor living environments routinely experience the addition, removal, and changing positions of objects, requiring robust obstacle avoidance and updated path trajectories [67]. The team addressed this challenge by developing an online support vector machine (SVM) algorithm that incrementally learns objects from experience. This algorithm is augmented with a forgetting strategy to promote the learned locations of recently recognized objects and reduce storage requirements onboard the robot. The described process is shown in Figure 2-12, where stored vectors are removed, the algorithm is tested, and retraining occurs at repeated intervals. Careful boundaries were determined for their testing, producing promising results in both memory management and object recognition.

It was noted that the primary intent of the work was proving the concept and that the SVM would eventually break down due to computational requirements. The importance, however, is both the method by which information is learned and the forgetting algorithm used to keep information current for the robotic system. While it may not be necessary to forget information, it may be prudent to either rank or promote recently learned information, or to weight current information higher than historical information.

Requirements

There is an enormous amount to take into consideration when implementing a sophisticated system to address the detection of performance degradation of autonomous mobile robots. The following list summarizes requirements describing a data warehouse with processes that apply to a wide variety of mobile robots.

• DBMS must promote ACID operations with minimal configuration
• Clear documentation outlining processes and defining entities
• Efficient CRUD operations utilizing indexes
• Relationships between entities identified wherever possible
• Hierarchical organization of data
• Ability to store massive amounts of data in a logical format
• Stored data must be independent of specific hardware
• Centralized storage environment for the collection of data
• Efficient, easy-to-use interfaces between layers of data

Database Design

Being cognizant of the previous requirements, the RobotMD intelligent framework's database will be designed and implemented in Microsoft's SQL Server. SQL is the primary language used to perform CRUD operations and is portable between different SQL solutions, including MySQL, Oracle, and PostgreSQL, with the exception of platform-specific functions. This environment has a proven history of providing a robust relational database management system, which is well suited here because the data structures require no change over time. This selection does not prohibit the use of other database solutions such as schema-less designs; however, an exhaustive review of different approaches is beyond the scope of this research.

RobotMD further benefits from following best practices regarding data warehouse structure and data quality indicated by Larson in his book Delivering Business Intelligence [68]. One suggested design is the snowflake schema, visualized in Figure 2-13, composed of hierarchical measures, dimensions, and attributes. Measures are the facts or values of interest that are often the target of aggregation. Dimensions describe measures and facilitate some understanding or intuition of extracted information. Lastly, attributes are any additional details that describe dimensions. These attributes lead to relatively wide tables that often store redundant information for pulling data for an intended purpose, such as generating reports.

In contrast, normalized transactional tables, optimized for writing data, incorporate more tables and are designed to have only the minimum number of columns required for storing data. Both designs make use of table relationships that involve key-value pairs, typically integers, relating a foreign key to a primary key. Foreign-key columns store the value of the primary key from another table, forming a unique relationship that can be utilized to join data records. This method helps enforce data integrity and encourages data quality. For this research, transactional and hybrid schemas will facilitate the transfer of robot information, results from degradation analysis, and configuration management. The hybrid schema, a combination of snowflake and transactional, outlines the structure of the data warehouse, maintaining all of the collected runtime data.


Ensuring the enforcement of data quality is another principal concept. This is conducted by maintaining comprehensive documentation, monitoring processes to safeguard the data being operated on, and planning for the data lifecycle [69]. The data lifecycle is described by the following non-linear steps: plan, obtain, store, share, maintain, apply, and dispose. This process is stressed to be implemented during the design of the system because the cost of correcting data later is exorbitant [70]. The entire process requires a core data model, generally represented as a map of entities and their relationships. The work presented in this research is accompanied by a comprehensive series of diagrams and processes to ensure that all models, processes, and thresholds support data quality.

Robot Assumptions and Criteria

The following list summarizes previous considerations, assumptions, and limiting factors which are addressed to ensure the effectiveness of the RobotMD framework. The robots described in this framework are assumed to be sufficiently well designed and to satisfy the following:

• Defined physical characteristics complete with knowledge of sensors and actuators

• Kinematic and dynamic mathematical models

• Dynamic models must be represented in state-space form

• Suggested that only a single behavior or action be evaluated by the framework

• Mobile robots in their environment are assumed to operate with little to no internet connectivity

• The environment is further described as dynamic, unpredictable, and not able to be sufficiently modeled

• The robots themselves have limited computational and power resources and are not capable of performing all of the operations necessary for this framework


• Mobile robots undergo continuous scheduled and unscheduled maintenance events

• Data collected in the form of CSV files

For the individual robot, the dynamical system must be presented in state-space equations to allow physical interpretations of the parameters. The control architecture should exhibit a hierarchical behavior-based control architecture, also called a subsumption architecture [71]. Finally, all events that impact the configuration management of the system, elaborated further in the next chapter, must be documented using the provided application, including maintenance, replaced parts, and damage to the vehicle. Replaced parts, either commercial off-the-shelf (COTS) or fabricated, must be explicitly documented through the application, as they impact an identified configuration item. These records will be used as identifiers and triggers to note such events, allowing algorithms to intelligently operate on the collected historical data and facilitating the learning of relationships between parameter configurations and degradation.


Figure 2-1. An overview of the steps that compose the KDD Process [37].

Figure 2-2. Data collection and analysis process of onboard database collection experiment [38].


Figure 2-3. DIKW structure showing relative content meaning and value provided [41].

Figure 2-4. DIKW outlined with considered robotic analogy and applications [38].


Figure 2-5. Data separation and knowledge representation of actions and knowledge [53].

Figure 2-6. Knowledge map propagation of action and behaviors [53].


Figure 2-7. Layers of RoboEarth [55].


Figure 2-8. Expected MTBF of machinery performance related to maintenance and the probability of failure [58].

Figure 2-9. Approach using CMAC for detection of degradation [58].


Figure 2-10. Elements of CMAC structure to detect pattern anomalies for an experiment [58].

Figure 2-11. Outline of RUL types [62].


Figure 2-12. CBM process algorithm with degradation index calculation [62].

Figure 2-13. SVM continuous learning strategy with random forgetting algorithm [66].


CHAPTER 3 ROBOTMD

This chapter describes the design, structure, and processes of the RobotMD offline health monitoring framework. Addressed are the identified technical void, guidelines to reduce cost and complexity, and a proposed universal knowledge base for the overall improvement and maturation of health monitoring for AMRs. The organization of the database objects used by the framework infrastructure aligns with the core processes. When initialized, the Robot Registration process collects information from the user to dynamically generate the remaining database objects required to perform health assessments. The configuration management of health monitoring, outlined in Figure 3-1, covers the categorization of event types, the types of events monitored, and the processes involved; these include the persistence of event data, the establishment of baselines, conducting health assessments, cross-validation, and results provided back to the user. Strategies and tools used to investigate and understand degradation for identification of ontological parameter patterns are discussed in the final section.

Data Structure and Initialization

RobotMD database objects are contained in logical structures called schemas, consisting of Dictionary, Robot, RobotDW, and RobotConfig. Tables in the Dictionary, Robot, and RobotConfig schemas are normalized transactional structures. RobotDW consists of the data-warehouse tables, which follow a snowflake pattern and utilize related transactional data. All tables have been designed to utilize unique primary keys, explicit data types, stated null usages, and table relationships defined using foreign keys. Foreign keys help enforce data integrity, assist data analysis, and simplify data operations when consumed by application logic.


The Dictionary schema contains the dictionary tables prefixed with "Ltb," an abbreviation for look-up table, whose contents rarely change over time. These records (a term indicating table rows) are used by the application for dropdown boxes, provide additional information to ETL processes, and support aggregation during analysis. The Robot and RobotConfig schemas have tables prefixed with "Atb," for action tables, and collectively encompass data that describes the robot and its configuration management, respectively. Action tables are the dynamic data structures that rapidly undergo CRUD operations, as found in transactional databases. RobotDW, with prefix "Wtb," for warehouse table, is the warehouse schema for tables directly involved in data mapping, collection, and storage. The schemas presented were designed to align with the primary processes and functions utilized by RobotMD: data dictionary management, robot registration, configuration management, and degradation analysis.

Dictionary

Nine tables, outlined in Figure 3-2, constitute the Dictionary schema and provide structure to the processes of RobotMD. These tables provide information for event types and categories, general parameter information, result types, selectable sensors, and system mathematical model types. Record information includes titles, abbreviations, descriptions of types, and static ranges of values. For example, the electric current sensor A000079, the second record of the LtbSensor table, provides a constant range of acceptable values from zero to four, which supports data processing by indicating the range of nominal readings.
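For illustration, a short MATLAB sketch of how such a dictionary-supplied range could be applied during processing; the sample readings are fabricated for the example.

    rangeLo = 0; rangeHi = 4;                     % nominal range from the LtbSensor record
    current = [0.2 1.4 3.9 4.6 2.2 -0.1];         % sample current readings (illustrative)
    bad = current < rangeLo | current > rangeHi;  % flag readings outside the nominal range
    fprintf('%d of %d readings outside nominal range\n', nnz(bad), numel(bad));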

Even though dictionary records rarely change, each table is equipped with start-use and end-use date fields defining when records are valid for use. The end-use date indicates when records should no longer be available and helps preserve the integrity of dictionary records by acting as a logical delete; deletion otherwise would violate referential integrity in the database engine. This schema provides a data dictionary through the use of two tables. Data dictionaries are a standard tool used to provide context to users of database tables and objects; Table 3-1 provides information detailing RobotMD table definitions.

Robot Registration

Robot registration is the first process that prepares RobotMD to monitor, analyze, and assess the health of the robot. The user is expected to provide complete descriptions of the robot, including basic information, systems, sensors, and parameters. This information is stored in the Robot schema tables, shown in Figure 3-3. Figure 3-4 designates the initial section to enter the robot name, description, and optional image.

The user hierarchically associates systems and sub-systems with an indicated order for conducting health assessments. Models associated with these systems are consumed by the application as external MATLAB files, provided via the file path to the entrance function. The design and use of these files and the related system attributes described below are discussed further in the Model File Structure section of this chapter. System information provided by the user should include a system name, model, model type, parent system if applicable, description, the order of system assessments, the threshold for degradation by percentage, and the MATLAB file path, shown in Figure 3-5. The model field is not required but is recommended to illustrate the structure of the mathematical model used by the user-provided MATLAB files. Model type categorizes the mathematical representation as kinematic or dynamical.


The sensors of the robot are indicated by the user in Figure 3-6. Sensors and their associated system are selected using dropdown menus, accompanied by manually entered descriptions for the specified mounting location and purpose of each sensor. The dropdown menu for sensor selection is populated using the dictionary table LtbSensor, whose contents are used by a later process, through this relationship, for processing data. The system selection dropdown is generated from the systems associated with the robot.

Parameters of each system are declared in the following section, shown in Figure 3-7. Each parameter must be labeled, provided a description, associated with a system, and given specifications of its numerical properties. The latter include defining minimum, maximum, and default values, the threshold of acceptable change from the prior value, and the change bias. Value ranges are used to indicate nominal values and boundaries for parameters which, if violated, produce warnings sent to the user.

Users have the option to disable boundary warnings for a specific parameter in this section. The change threshold also produces a warning if violated and is suggested to initially be set to three standard deviations of the expected value for the parameter, based on the premise of confidence intervals. Confidence intervals for Gaussian distributions state that 99.73% of values fall within three standard deviations of the expected value of the distribution [72]. The final setting, change bias, accepting values from zero to one thousand, refers to regularization: a strategy used to penalize large variances, indicating the likelihood of the initial value being accurate when fused with actual or expert knowledge of the system during analysis of collected data. Initial expert knowledge for low-TRL systems can aid the application of regularization to parameters that represent inertial or rigid physical dimensions; changes in these parameters are less likely, and the penalty has the potential to guide resulting estimations to appropriate solutions. Regularization is a tradeoff between variance and offset bias, producing smaller changes that may not correspond with actual measurements or represent true values. For initial analysis, this may be acceptable if the parameter is stable while not in a degraded state. Parameter warnings, if produced during checks, may indicate problems with analysis, uncertainty, or model fidelity, or may signal degradation. To complete the registration describing the initial configuration of the robot, the user must define the structure of the data collected by the robotic system, captured in the WtbDataMapping table and described in the following section.
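A minimal sketch of how the change bias could enter the estimation objective as a per-parameter regularization weight; the quadratic penalty form, the toy model, and all variable names are assumptions rather than the framework's prescribed implementation.

    u  = linspace(0, 1, 50).';                 % toy input signal
    y  = 2.4*u + 0.85 + 0.01*randn(50, 1);     % toy measured output
    p0 = [2.5; 0.8];                           % registered default parameter values
    lambda = [800; 5];                         % change bias per parameter (0-1000 scale)
    predict = @(p) p(1)*u + p(2);              % toy linear model
    objective = @(p) mean((y - predict(p)).^2) ...        % data-fit (MSE) term
                     + sum(lambda .* (p - p0).^2);        % penalize departures from p0
    pHat = fminsearch(objective, p0);          % held near p0 where lambda is large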

Data Warehouse

The data warehouse tables, located in the RobotDW schema and presented in Figure 3-8, are used to store the collected data and to describe its structure, organization, and statistical results. The collected data, described by the user, should at a minimum record when the behavior is active, sensor outputs related to control schemes, controlled input values, and setpoint values. It is encouraged to provide all available data for analysis and future improvement of current methods. Run-time data collected and produced as CSV files should align with the indicated mapping, where rows are sequenced by time and columns represent different data points. Each data column identified through mapping must include a name, description, column position, data type, and whether the column is used for degradation analysis, as seen in Figure 3-9. Users can further optionally select whether statistical properties, the mean and standard deviation, are to be automatically determined at each event. Data types, set by an integer value of one or two, declare an integer or double type, respectively. Once the mapping is complete, the user completes the setup by clicking the Create Warehouse Table button, triggering the automatic creation of custom user data types, a columnstore index, stored procedures, and the data warehouse table for storing run-time data. The custom data types, used in conjunction with the stored procedures, are used with the data warehouse table to query or insert run-time data. The columnstore index, referred to as a "wide index," is used to optimize multi-column, read-only structures for querying.

Configuration Management

Configuration management is the driving mechanism of RobotMD. Established through a semi-automatic, event-driven strategy utilizing the tables within the RobotConfig schema, presented in Figure 3-10, the framework works in concert with developed processes and user-supplied information. The primary subject of this configuration management approach is the monitoring of system health performance, evaluated according to specified categories and types of events. Degradation of health is measured against a baseline calculated from health assessment results, rules formed using event categories, and cross-validation. The processes that govern configuration management are data retrieval, performing health assessments, and submittal of findings to the user.

Events

Events are categorized as action, change, or passive. Actions represent events that result in data produced from executing some behavior for a mission. Change events are recorded to indicate possible configuration changes to the robot that may not have any associated data. The last category, passive, comprises observations made of the robot that are significant enough for detection but have little to no impact on system performance. Passive events are a subject of suggested future work, as they provide additional data labels that may improve or enhance strategies, processes, algorithms, or models. The user is expected to submit an event entry through the application, as seen in Figure 3-11, for each event as it occurs. The types of events and their descriptions employed by this framework are presented in Table 3-2. Both types and categories are designed for the configuration management algorithms to logically operate on, particularly when analyzing data and establishing baselines.

Baselines

Baselines are controlled by event categories and formed using the results of health assessments and prior established baselines. Early performance of a system is expected to resemble the stable zone shown in Figure 1-7 and can be modeled as a horizontal line, treated as the baseline. New baselines for a system are established from three consecutive completed health assessments with a standard deviation of less than five percent. Only action events result in health assessments being performed.

After a change in configuration, identified by the user as a change event, the affected system baselines must be re-established. Updated baselines are calculated as a weighted sum of the two data events following the change event and the prior established baseline. The use of prior information enables a baseline to be formed with less data, under the assumption that replaced components do not differ. Situations that relax this restriction are another subject of future work.
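A sketch of these baseline rules in MATLAB; interpreting the five-percent criterion as a coefficient of variation and the specific fusion weights are assumptions, since the text does not prescribe them.

    f = [91.2 90.4 92.0];                          % three consecutive fitness results
    if std(f) / mean(f) < 0.05                     % stability criterion (assumed form)
        baseline = mean(f);                        % establish the new baseline
    end
    % After a change event: fuse two post-change data events with the prior baseline.
    wNew = 0.35; wPrior = 0.30;                    % assumed weights summing to one
    fitA = 89.7; fitB = 90.9;                      % first two post-change assessments
    baseline = wNew*fitA + wNew*fitB + wPrior*baseline;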

Data Retrieval and Processing

Data retrieval refers to the extraction of collected data from an AMR, differing from CBM in that the flow from data collection to processing is a discrete, discontinuous process. Retrieval occurs before recording an action event with RobotMD, as the data file is required for uploading. It is optional to pre-process data before recording the event for complex processing needs such as filtering, smoothing, or the calculation of derived columns. Change and passive event types reported by the user do not require data retrieval. After an action event has been recorded in the application, the data file is uploaded through an ETL process. The data is extracted from the file, transformed per the conversions provided by the user in the data mapping, and loaded into the previously created data warehouse table. Events that successfully result in stored data are referred to as data events.

Health Assessments

The health assessment process is initiated automatically for all action event types after data has been uploaded to the data warehouse; it consists of parameter estimation using the provided MATLAB functions and cross-validation of the specific realized models. For clarification, models referenced to a specific event refer to the mathematical structure indicated by the provided MATLAB functions together with the estimated parameters of that event. Assessments for the systems of an AMR occur for every action event in the user-specified order. The focus of RobotMD health monitoring is on the experience data collected from an AMR after a mission has been completed. An alternative approach to assessing health would be collecting and analyzing data from a pre-determined trajectory after a mission. Trajectory assessments, beyond the scope of this research, could also be augmented with additional external sensors, improving the quality of health assessments.

Model file structure

During the robot registration process, file paths of MATLAB files that perform parameter estimation and describe mathematical system models are provided. The indicated file must be a MATLAB function that follows a pre-determined input parameter signature and output declaration, and it may optionally call other MATLAB files. The inputs within the function are suggested for use but are not required; these include the system order number, lower bound array, upper bound array, initial parameter guess array, regularization array, and data matrix. All arrays except the initial parameter guess are provided from the robot registration process. Initial parameter guesses come from either the defaults or the estimated parameters of the previous data event. Default values are only used during events involved with the determination of a baseline. The last input parameter, the data matrix, is the data queried from the data warehouse used by the MATLAB function for parameter estimation. In contrast to the input parameters, the output requires an array of two elements. The first is the fitness, an estimate of the resulting accuracy determined by comparing actual data to predictions made with the new model. The second element is an array of realized parameter values; both are subjects of the parameter estimation guidelines section below.
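A skeleton of a user-supplied model file following this signature and two-element output may be helpful; the function name, the clamping placeholder, and the assumption that the last data column holds the output are hypothetical, with the estimation body left to the chosen strategy.

    function out = estimateDriveSystem(order, lb, ub, p0, reg, data)
        % order: system order; lb/ub: parameter bounds; p0: initial guesses;
        % reg: regularization (change bias) array; data: warehouse query result.
        pHat = min(max(p0(:), lb(:)), ub(:));     % placeholder: clamp guess to bounds
        y    = data(:, end);                      % assumed: last column is the output
        yHat = mean(y) * ones(size(y));           % placeholder model prediction
        fitness = 100 * (1 - norm(y - yHat) / norm(y - mean(y)));
        out = {fitness, pHat};                    % required two-element output
    end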

Parameter estimation guidelines

RobotMD does not prescribe an estimation strategy for the MATLAB functions to incorporate; only the function's input parameter signature and returned object are fixed. The flexible design promotes the consideration of various approaches and is influenced by findings from health assessments or other model types that were previously discouraged. Least-squares estimation is the suggested initial approach to estimate the parameters of state-space models [73] and is implemented using MATLAB's fmincon function. This function is a constrained optimization algorithm used for minimization based on supplied criteria, including the objective function, lower and upper parameter bounds, initial parameter guesses, regularization, and time-series data. The objective function mathematically declares the relationship and purpose of the parameters being optimized. In this case, minimization of the calculated mean squared error (MSE)

$$\mathit{MSE} = \frac{1}{n}\sum_{i=1}^{n}\left(Y_i - \hat{Y}_i\right)^2 \qquad (3\text{-}1)$$

is a suitable choice for parameter estimation. MSE is the mean of the squared errors between the collected data $Y_i$ and model predictions $\hat{Y}_i$ at each point in time [73]. After parameters have been determined, the fitness level of the resulting model can be calculated as

$$\mathit{Fitness} = 100 \cdot \left(1 - \frac{\lVert x_i - \hat{x}_i \rVert}{\lVert x_i - \operatorname{avg}(x_i) \rVert}\right). \qquad (3\text{-}2)$$

The variable $x_i$ holds the actual data values and $\hat{x}_i$ the estimated values of the model. Fitness represents the percentage level of accuracy with which the model was able to predict the data used for estimation; if it falls below the degradation threshold, identified by the user as a percentage from the active baseline, it suggests that the system has deteriorated. For a final assessment of health, an additional evaluation, cross-validation, is suggested for increased assurance given the documented uncertainties.
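The following MATLAB sketch combines the suggested fmincon-based least-squares estimation with the fitness calculation; simStateSpace is a hypothetical user function that simulates the registered state-space model for candidate parameters, and y, u, t, p0, lb, and ub are assumed to come from the warehouse query and registration records.

    mseObj = @(p) mean((y - simStateSpace(p, u, t)).^2);       % Equation 3-1
    opts   = optimoptions('fmincon', 'Display', 'off');
    pHat   = fmincon(mseObj, p0, [], [], [], [], lb, ub, [], opts);

    yHat    = simStateSpace(pHat, u, t);                        % model prediction
    fitness = 100 * (1 - norm(y - yHat) / norm(y - mean(y)));   % Equation 3-2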

Cross-validation

Two sets of cross-validations seek to determine fitness values for realized models and their parameters. First, the recently determined model and parameters are used to calculate their fitness for all previous data events. The result is a collection of fitness values, stored in the database, indicating how well the model predicts those individual events. The second set of cross-validations involves all prior estimated models, iteratively determining their fitness for the recently collected data event.
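In sketch form, the two passes amount to a pair of loops; the cell arrays of prior event data and parameters, and the fitnessOf helper (Equation 3-2 evaluated for one parameter set on one event's data), are assumed stand-ins for values retrieved from the database.

    K = numel(priorEventData);                 % number of prior data events
    fitNewOnOld = zeros(1, K);                 % new model scored on old events
    fitOldOnNew = zeros(1, K);                 % old models scored on the new event
    for k = 1:K
        fitNewOnOld(k) = fitnessOf(pNew, priorEventData{k});
        fitOldOnNew(k) = fitnessOf(priorEventParams{k}, newEventData);
    end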


The purpose of cross-validation is to confirm the intuition of Figure 1-7 by evaluating the fitness values of models established in the same event as their baseline. The expectation is that these models predict well when health is in a stable zone, and poorly when in a degraded state. A final degraded state is declared when cross-validation of the baseline model yields a poor prediction in conjunction with a current fitness calculation below the degradation threshold. Upon completion of the health assessment, users are automatically navigated to the Event Summary view, where all results and calculations are presented.

Parameters and Degradation Ontology

Detection of degradation lies in the subtleties of the proposed health monitoring strategy through the abstraction of meaningful ontological information to describe the deterioration of health. The approach using generalized state-space models, however complex, may shed light on similarly described mobile robots. Even simplified or reduced models may exhibit low fitness yet remain stable until degradation occurs, and thus provide insight. Furthermore, intermediate cross-validation models may also contribute additional information to neighboring data events. However, there is value in pursuing more sophisticated models to determine their results and the relationships between parameters in light of declining performance.

Beyond health assessments, analysis and monitoring of the parameters of assessed and estimated systems provide additional features to assist in diagnosing faults and measuring severity. Determined parameters are evaluated for the following features, which are presented to the user after a health assessment (a sketch of these checks follows the list).

• Jump – a change exceeding a threshold when compared to the previous set of estimated parameters


• Threshold – when a parameter exceeds three standard deviations from the initially provided guess

• Boundary – when an estimated parameter takes a value at the pre-determined maximum or minimum allowed
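A sketch of the three checks in MATLAB; the thresholds and variable names are illustrative assumptions.

    jump     = abs(pNew - pPrev) > changeThreshold;   % Jump: large change from prior event
    drift    = abs(pNew - pInit) > 3 * sigmaInit;     % Threshold: beyond three sigma of the guess
    boundary = (pNew <= lb) | (pNew >= ub);           % Boundary: pinned at an allowed limit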

In addition to the monitored parameter characteristics, there is significant value in the relationships between parameters as they evolve through degradation, the central motivation for defining a generalized ontological representation of degradation using parameter configurations.

Ontology

Understanding the degradation of mobile robots is the central focus of this research. To accomplish this goal, users are equipped with a series of processes, tools, and plots to analyze the temporal estimation outcomes for the systems of mobile robotic platforms. Paramount to examining the coupled relationships of parameters is the representation of normalized parameters plotted per data event. The normalization is represented as the percent change of one or more system parameters. Correlation of contributing parameters can define or mark an identified state or region of degradation.
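A minimal MATLAB sketch of such a plot; paramHistory (events by parameters), baselineParams, and paramNames are assumed inputs drawn from the results tables.

    pct = 100 * (paramHistory - baselineParams) ./ baselineParams;  % percent change
    plot(1:size(pct, 1), pct, '-o');
    xlabel('Data event'); ylabel('Parameter change (%)');
    legend(paramNames, 'Location', 'best');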

Robotic systems that use generalized models can then take advantage of prior analysis associating parameter configurations with states of degradation. The temporal trajectories may also yield physical intuition, metrics quantifying severity, or live estimations. This forms the concept of a degradation ontology, where prior analysis can be used to supplement systems described by the generic model. The ontology suggested may determine fault characteristics for detection, isolate faults to a specific combination of parameters, and aid potential prognostic evaluations utilizing time-series techniques, like ARMA, from Chapter 2. Treated in this way, the degradation ontology can be used to implement or augment on-board, online diagnostics and prognostics, as with CBM or IVHM. This information can also be used to revise the estimation strategy or to determine a method for mitigation, as observed in the experiments performed in Chapter 4.

Workflows

To encourage continuous analysis of degradation and revision of employed methods, RobotMD offers the workflow process. Workflows, created on the Record Events view shown in Figure 3-12, provide the mechanism to re-evaluate changes to a current estimation strategy by executing all events in the same sequential order.

With the exception of collected data, all prior user-provided information can be altered without affecting results already obtained. Potential changes to consider include adjustment of mathematical models, parameter values, database stored procedures, estimation algorithms, and regularization. Values such as these, adjusted to improve health assessments, are called hyper-parameters, a standard concept in data science.

Different workflows can also be compared to one another on the Event Summary view by selecting workflows and plotting accordingly, which is helpful for determining superior strategies for health monitoring. A hyper-parameter of interest for workflows is the historical setting, which specifies the number of previous data events to include for parameter estimation (see the sketch below). Involving multiple data events keeps the estimation from being solely influenced by one particular event and has the effect of smoothing fitness results, like a moving-average convolution. Identification of noteworthy features helps progress the health monitoring of a system and should be analyzed or incorporated into future strategies.
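In sketch form, the historical setting amounts to stacking the most recent events before estimation; the event cell array and the value of H are assumptions.

    H = 3;                                        % number of prior data events to include
    recent  = eventData(max(1, end - H + 1):end); % most recent H data events
    stacked = vertcat(recent{:});                 % combined data matrix for estimation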


Figure 3-1. Configuration management process outline for health monitoring of autonomous mobile robots. The outlined steps are: 1) robot registration (user entry), 2) automatic creation of database objects, 3) data retrieval (user transfers and prepares data), 4) data upload and ETL (user registers the event and uploads data through the application), 5) health assessment (automatic), and 6) record and report results.


Figure 3-2. Dictionary schema table outline with foreign key relationships.

Table 3-1. The data dictionary providing the list of tables and their definitions involved in the processes of RobotMD. All entries have a Dictionary Type ID of 1.
ID | Item Name | Description
1 | LtbDataDictionary | Provides a centralized description of the data objects used in the analysis of degradation.
2 | LtbDataDictionaryType | Characterizes the types of data dictionary objects.
3 | LtbSensor | Provides a list of known sensors that can be associated to robots.
4 | LtbParameter | Describes all possible parameters of a system for tracking, analysis, and observation.
5 | LtbParameterType | Lists the different types of parameters.
6 | LtbResultType | Lists the different types of results.
7 | LtbSystemModelType | Describes the different possible types of models used in the design of a robot. All controls shall have an associated model.
8 | LtbEventTypeCategory | Describes the categories that event types can be grouped into.
9 | LtbEventType | Describes the types of events that are recorded that may include possible degradation.
10 | AtbRobot | Provides the robot a name and a description of its purpose. Highest level hierarchically.
11 | AtbModel | Associates models to a given system of a given robot. Only supports one model per system.
12 | AtbAssignedSensor | Describes all of the sensors for a system, indicating what they are sensing and where.
13 | AtbBehavior | Describes all of the behaviors the robot is capable of.
14 | AtbAssociatedParameter | Identifies and defines parameters of a particular system. These are also the parameters used during optimization.
15 | AtbParameterWorkflow | Tracks different parameter workflows to determine the best result.
16 | AtbEvent | Describes key events associated to anything impacting the robot.
17 | AtbEventType | Records all labels/types for a given event.
18 | AtbModelPerformance | Records the performance evaluated after each experience.
19 | AtbSystemBaseline | Tracks baselines for each system as they get established. Prior values are retained for historical analysis.
20 | AtbResult | Records all types of results from a data-related event.
21 | AtbModelValidation | Cross validates the resulting model with prior models after each experience and records the results.
22 | AtbParameterChangeLog | Record of all updated parameters.
23 | AtbParameterStats | Record of stats per uploaded experience/event.
24 | AtbParameterChangeEventsUsed | Associates parameter changes to the datasets generated and associated by events.
25 | WtbDataMapping | Record of data mapping and relevant associations.
26 | WtbDataStats | Record of stats per uploaded experience/event.
27 | WtbLiZARDData | Primary data warehouse table housing all uploaded experience/event data. Automatically generated from WtbDataMapping.
28 | WtbLiZARD_exp2Data | Primary data warehouse table housing all uploaded experience/event data. Automatically generated from WtbDataMapping.


Figure 3-3. Robot schema table outline with foreign key relationships.

Figure 3-4. Section to edit the basic robot information on Robot Details view. Photo courtesy of author.


Figure 3-5. Section of Robot Details view for creating, deleting, or modifying systems. Photo courtesy of author.

Figure 3-6. Section of Robot Details view for creating, deleting, or modifying sensors of the robot. Photo courtesy of author.


Figure 3-7. Section of Robot Details view for creating, deleting, or modifying system parameters. Photo courtesy of author.


Figure 3-8. Warehouse schema table outline with foreign key relationships.


Figure 3-9. Section of Review Warehouse view for providing data mapping information that aligns with the structure of data retrieved from the robot. Photo courtesy of author.


Figure 3-10. RobotConfig schema table outline with foreign key relationships.

Table 3-2. List of event types and their categories and descriptions used in the configuration management health processes of RobotMD.
Experience: (Action) Normal operation of a designed mission.
Maintenance - No Change: (Passive) Event to indicate maintenance has been performed without any modifications.
Maintenance - Change: (Change Event) Event to indicate maintenance has been performed with one or more parts being modified. A validation event should immediately follow the change.
Damage: (Change Event) Indicator of observed or visible damage.
Failure: (Change Event) Failure indicator for the resulting mission experience.
Testing: (Action) Label to follow a change event label type to re-evaluate parameter estimation.
Validation: (Action) Validation run performed after a testing label event.
Observation: (Passive) Used to indicate a non-critical observed issue with the robot.

Figure 3-11. Section of Record Event view to record an event and upload data. Photo courtesy of author.


Figure 3-12. Section of Record Event view to create a new workflow. Photo courtesy of author.


CHAPTER 4 EXPERIMENTS

LiZARD Initialization with RobotMD

Having a well-established plan and understanding of model parameters is central to the degradation strategy for analysis and ontology. The approach for LiZARD starts with the primary kinematic system, proceeds to the right motor, and concludes with the left motor. Subscripts r and l indicate the right and left sub-systems, respectively. When a sequential order is required, right-side elements are selected first.

MATLAB provides several tools and methods to support modelling and estimation. As the primary system model describes kinematics, many of the simplified estimation strategies are not applicable. Instead, the fmincon function was used for constrained optimization. The kinematic model and parameters discussed in Appendix B are provided below for convenience.

\begin{bmatrix} v_x \\ \omega \end{bmatrix} = \begin{bmatrix} \frac{r_r}{2} & \frac{r_l}{2} \\ \frac{r_r}{2L} & -\frac{r_l}{2L} \end{bmatrix} \begin{bmatrix} \psi_r \\ \psi_l \end{bmatrix}    (4-1)

Components of Equation 4-1 include the linear velocity v_x of the kinematic center of the robot, the angular velocity ω, the normal distance between the wheel planes L, the wheel velocities ψ_r and ψ_l, and the right and left wheel radii r_r and r_l. Only the wheel radii, r_r and r_l, are of interest, as it is assumed to be extremely unlikely that the normal distance between wheel planes will change without a user-described change event. If this parameter were to change, it would be due to a catastrophic failure outside the scope of offline health assessments. Therefore, regularization penalizes changes to this parameter, and warnings are disabled if it takes on a boundary value. This allows tighter bounds to be defined for the parameter while still permitting it to be estimated. The wheel radii parameters have a lower regularization penalty since the tires are not completely rigid.
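A minimal sketch of this constrained estimation with fmincon follows, using the bounds, defaults, and change-bias weights for the kinematic parameters from Table B-3; the logged data variables, residual, and options are illustrative, not the exact implementation:

    % Sketch of constrained kinematic parameter estimation with fmincon.
    % psi: N-by-2 logged wheel velocities [psi_r, psi_l]
    % y:   N-by-2 measured outputs [v_x, omega]; p = [r_r, r_l, L]
    p0 = [1.25, 1.25, 2.6];                 % defaults from Table B-3
    lb = [0.75, 0.75, 2.0];                 % lower bounds
    ub = [4.00, 4.00, 3.0];                 % upper bounds (tight on L)
    lambda = [1, 1, 1000];                  % change bias: heavy penalty on L
    predict = @(p) [ p(1)/2*psi(:,1) + p(2)/2*psi(:,2), ...
                     p(1)/(2*p(3))*psi(:,1) - p(2)/(2*p(3))*psi(:,2) ];
    cost = @(p) sum(sum((y - predict(p)).^2)) + sum(lambda.*(p - p0).^2);
    opts = optimoptions('fmincon', 'Display', 'off');
    pHat = fmincon(cost, p0, [], [], [], [], lb, ub, [], opts);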

The motors are described by the same dynamical model, but with separate sets of parameters uniquely identified using the prescribed convention, also provided for convenience without subscripts.

\dot{\boldsymbol{x}} = \frac{d}{dt}\begin{bmatrix} \dot{\theta} \\ i \end{bmatrix} = \begin{bmatrix} -\frac{b}{J} & \frac{K}{J} \\ -\frac{K}{I} & -\frac{R}{I} \end{bmatrix} \begin{bmatrix} \dot{\theta} \\ i \end{bmatrix} + \begin{bmatrix} 0 \\ \frac{1}{I} \end{bmatrix} u    (4-2)

The parameters for this model are the inductance I, motor resistance R, damping coefficient b, moment of inertia of the motor shaft J, and simplified gain K, which combines the back-emf constant and torque coefficient. The model also defines its state x, made up of the angular velocity θ̇ and electrical current i, and the input u. Dynamical models are able to take advantage of MATLAB's grey-box modelling tools, specifically the greyest function, which simplifies the provided MATLAB function. For each motor, the moment of inertia parameter is treated in a similar fashion as the L parameter of the primary kinematic model above: tightly bounded, with boundary-value warnings disabled, and penalized for changes. This parameter is not expected to change since the system should not be losing or gaining motor shaft mass. During robot development and health assessment analysis, it was found that a small regularization penalty on the inductance parameter resulted in better local optima for the remaining parameters.
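The following is a minimal grey-box sketch for one motor using idgrey and greyest from the System Identification Toolbox; the initial values are the right-motor defaults from Table B-3, while the data variables, option values, and function name are illustrative rather than the exact implementation:

    % Sketch of grey-box motor estimation; motorOde encodes Equation 4-2.
    initSys = idgrey(@motorOde, {0.0125, 0.0013, 0.15, 1.1515, 0.798}, 'c');
    opt = greyestOptions;
    opt.Regularization.Lambda  = 0.1;      % illustrative penalty strength
    opt.Regularization.Nominal = 'model';  % shrink toward the initial values
    data = iddata(y, u, Ts);               % logged angular velocity and input
    est  = greyest(data, initSys, opt);    % estimated motor model

    function [A, B, C, D] = motorOde(b, J, K, R, I, Ts)
        % State-space matrices of Equation 4-2 (continuous time).
        A = [-b/J,  K/J;
             -K/I, -R/I];
        B = [0; 1/I];
        C = [1, 0];    % angular velocity is the measured output
        D = 0;
    end

Per-parameter bounds can additionally be set through initSys.Structure.Parameters before calling greyest.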

Some parameters of the described models did not yield accurate physical values due to the lack of model complexity, unmodeled gears, poor quality motors, and the use of regularization. For low fidelity models, minimal variance is more desirable than an accurate offset bias, because the emphasis for ontology relies on the relative temporal observations of the parameters. The use of regularization also encourages the establishment of baselines by penalizing significant variances. Improvement of offset bias and model fidelity are both subjects of suggested future work. Finally, the boundaries for each parameter are set to the suggested three times the standard deviation, quantified through multiple estimations, tests, and workflows.

Workflows

Single event data sets were used to determine reasonable model structures, estimation algorithm selection, and fitness calculations. This approach does not indicate the amount of prior data sets needed, appropriate parameter ranges, regularization selections, or setting of thresholds. The workflow tool assisted in determining the following hyper-parameters for the experiments conducted for this research.

• Active – narrowing the dataset for when the primary behavior was active during the mission

• Regularization – establishing penalties on rigid parameters less likely to vary

• Model Complexity – evaluating different model orders and combinations of parameters

• Filtering – examining different types of convolutions to smooth data

• Boundaries – enabling, disabling, or changing the bounds for parameters

• History – selection of data events used for estimation

It was observed that results were more consistent when selecting active data instead of making use of idle data, which unnecessarily gives emphasis to lower motor commands. Regularization was extremely useful in establishing a level of belief in the initial estimation of a parameter and resulted in less variance in model fitness and parameters. Run-time data examined for the estimation algorithms revealed that certain fields would benefit from filtering or smoothing to improve results. The use of moving average filters preserves trending time-domain information [74] and reduces the impact of point anomalies. The collected data further provided insight that the suggested boundaries based on confidence intervals were appropriate. Through examination of multiple workflows, it was determined that five recent data events produced better qualitative results. A comparison of two workflows showing the observed smoothing effect is provided in Figure 4-1.
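A minimal sketch of this moving-average smoothing applied to a noisy run-time field before estimation; the field and window length are illustrative:

    % Sketch of moving-average smoothing of a run-time data field.
    w = 5;                                  % window length (illustrative)
    kernel = ones(w, 1) / w;                % uniform convolution kernel
    smoothed = conv(raw, kernel, 'same');   % preserves the trending signal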

Experiment Setup

The remainder of this chapter presents two experiments performed to assess the capabilities and effectiveness of RobotMD by monitoring the health and analyzing the degradation of LiZARD. LiZARD's mission was to complete forty circuits of a closed oblong track marked with a black one-inch line, without leaving the track. Figure 4-2 illustrates the track, the starting location, and the goal post to be detected by a sonar sensor. This experimental setup lends insight into the concept of prescribing trajectories or behaviors to assess health, presented in Chapter 3, in lieu of performing assessments on data collected in volatile environments. The first experiment illustrates the use and capabilities of RobotMD by monitoring LiZARD during consecutive missions without any external influences other than battery changes and lighting. The second experiment was designed to force an expected form of degradation on the primary kinematic system by manually adding layers of tape to the right wheel to force a fault.

Experiment 1: Motor Degradation

The objective of the first experiment was to execute LiZARD's line following mission until the occurrence of a fault, to highlight RobotMD's ability to indicate degradation and subsequently identify the contributing parameters. These parameters can be used for ontological definitions and aid the development of diagnostics and prognostics. Collected data for each successful mission underwent pre-processing prior to upload through the application into the database. Failure occurred on the forty-fifth event with a faulted left motor that caused a brown-out, resulting in the cascading failure of the right motor, which then lacked the minimum electrical supply to operate. A visual post-mortem evaluation of the motors indicated that sustained operation at high temperatures may have been a contributing factor in the degradation that led to the fault, as indicated by the motor casing scorch marks shown in Figure 4-3.

System Assessment

System fitness results are plotted in Figure 4-4 for each system, along with the baseline, degradation threshold, and event labels. RobotMD detected no degradation of the primary kinematic system, which remained near baseline values. The left and right motors both began to decline in fitness performance and eventually crossed the degradation threshold at different events.

It was expected that both motors would display similar profiles of degradation, with the left motor indicating consistently greater degradation as a result of the mission environment, since LiZARD follows the curved sections of the track to the right.

In Figure 4-5, the cross-validation fitness values are plotted for data events three and forty-one: when the baseline was established and the lowest-fitness data event prior to failure, respectively. The motor models near the baseline tend to align with future health assessment values, which agrees with the indication of a declining trend in estimation performance. Conversely, the motor models near the fault event performed poorly and did not assess earlier events well. This suggests that the dynamic models used to describe the motors were no longer suitable, supporting the observation that degradation occurred. The primary kinematic model, assessed as not degraded, had nearly identical cross-validation results, further indicating no significant deterioration.

Histograms of the setpoints, motor commands, wheel velocities, and filtered angular velocity in Figure 4-6 reinforced the expected outcome that the left motor required higher input commands. These fields were selected for analysis as they are the minimum suggested dataset for performing parameter estimation of each system.

Figure 4-7 displays the histograms of the fortieth event and indicates that both motor input commands were higher to accomplish the same executed mission. This is further evident in Figure 4-8, showing the comparison between the third and fortieth events for motor input. The figure also compares the filtered angular velocity, which shows no difference, agreeing with the prior results of the primary kinematic system. Finally, Figure 4-9 provides plots of the assessed average and standard deviation for all parameters. Each point is calculated using all prior data events to represent the trends of the statistical estimators.

Parameter Assessment

To determine parameter configurations that provide information about the detected degradation, the parameters of LiZARD are analyzed for characteristics suggesting deterioration. The parent system, described in Equation 4-1 using a kinematic model, has its temporal parameters plotted in Figure 4-10. The base width parameter was specified to be highly constrained, reducing the likelihood of changes. The unconstrained wheel radii parameters had similar profiles and did not significantly vary, supporting the notion that the system did not degrade. Both sub-systems, in contrast, were observed to pass the degradation threshold and had parameters that varied significantly.

Figure 4-11 illustrates the temporal parameters for the right motor, which exhibited less performance degradation than the left. Motor inductance showed a slight upward trend, appearing to stabilize in later data events. The damping coefficient did not initially align with its initial estimate and decayed to a stable value around event thirty-three. This parameter is the worst performing for both motor models, which is attributed to the unreconciled issues with model fidelity, friction, gears, and motor configuration. Prior analyses and workflows resulted in similar profiles for this parameter. The moment of inertia parameter was highly restricted and maintained a boundary value. Electrical resistance maintained an upward trend, suggesting that this motor would require more power to operate. Finally, the combined gain parameter seemed to correlate with the damping coefficient, but on a different scale.

The left motor was identified as the system of initial fault, and Figure 4-12 contains the parameter plots used for analysis. Inductance resulted in a profile similar to that of the right motor, but with a slightly higher offset bias. The damping coefficient stayed within its defined boundaries, in contrast to the findings for the right motor. The moment of inertia and the combined gain, the latter with a large offset bias, both strongly resembled the shape of the damping coefficient. The moment of inertia also stayed within bounds, and events where this parameter took on near-zero values also appeared to result in poor health assessments. Additional statistical information for these parameters can be found in Figure 4-13.


Normalized plots, Figure 4-14, are an additional tool for analysis and further aid the formation of ontological findings. The plots indicate the percent change from the initial default value with respect to each event. The kinematic model wheel radii are shown to deviate, but remain almost identical to one another. The right motor model reveals relative downward trends for the damping coefficient and combined gain parameters, but on different scales. The resistance parameter increases linearly, as noted above. The left motor displays parameters that vary erratically, including the damping coefficient, moment of inertia, and combined gain. Upon further analysis and review of the event logs, the saw-tooth-like patterns align with when the lithium-polymer battery was replaced with a fully charged battery. This can be improved in the future by modeling the effects of the battery. Lastly, the resistance parameter is observed to depart from its norm and is suggested as the parameter involved with this fault.

Experiment 1 Conclusion

Combining all of the presented results, the shorted left motor in this experiment entered a degraded region beginning at event twenty-four, as a result of the resistance parameter's departure from its nominal values. The parameter continued to decrease approximately linearly, increasing the severity of the fault. A rule can be formed regarding a threshold or trending negative slope of this parameter, to suggest maintenance and indicate the possibility of a future failure. Data event twenty-seven is the first assessment that labels this motor as degraded. Having a representation of how this parameter degrades describes a potential heuristic for all motors utilizing the model in this experiment. An initial approach would be to monitor this parameter for negative deviations and to use regression as a potential tool for prognostics. Another consideration is to examine higher fidelity models and the relationships between the damping coefficient and combined gain parameters, as they may have also declined in performance. This consideration is deferred as suggested future work to determine improved or optimal models for analyzing degradation using RobotMD.
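A minimal sketch of that initial approach, fitting a linear trend to the estimated resistance over the degraded events and projecting when it would cross a chosen threshold; the history container, event range, and threshold are illustrative:

    % Sketch of a regression-based prognostic on the resistance parameter.
    events = 24:44;                     % events in the degraded region
    R_l = paramHistory(events);         % estimated resistance per event
    c = polyfit(events, R_l, 1);        % c(1): slope, c(2): intercept
    if c(1) < 0                         % negative trend indicates the fault
        projectedEvent = (R_threshold - c(2)) / c(1);
        fprintf('Projected threshold crossing near event %.0f\n', projectedEvent);
    end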

Experiment 2: Simulated Degradation

The second experiment was designed to simulate degradation of the primary kinematic model rather than motor degradation, which is more likely to occur. To distinguish these results from the first experiment, the robot was re-entered as a second robot in RobotMD. Starting on the sixth event, identified as event fifty-one due to the sequential order used in the database, three layers of masking tape were applied to the right wheel to mimic an inflating tire, shown in Figure 4-15. This was continued until the robot deviated from the track, failing the mission on event sixty, the sixteenth event. The workflow for this experiment used a history value of one since failure occurred sooner than in the first experiment; higher settings obfuscate results when fewer data events are available for analysis.

Assessment and Analysis

Fitness performance plots, Figure 4-16, show the degraded primary system and stable, non-degraded motors. The primary model was first identified as degraded on event eight, confirmed by the cross-validation plots in Figure 4-17. Similar to the first experiment, the kinematic model using realized parameters from the baseline aligned well with the declining performance. The model using degraded parameters estimated prior data events poorly. The right and left motors, for both the baseline and the event just prior to failure, had nearly identical fitness values, a finding similar to that of the non-degraded kinematic system from the first experiment. Run-time data histograms, found in Figure 4-18, show that both right and left motors required more power but were of similar value, contrary to the fault detected in the first experiment. The filtered angular velocity, however, is observed to take on increasingly more positive values in the event prior to failure. As this relates to the first system, this finding agrees with the prior evidence that the primary kinematic model degraded.

The wheel radii of the kinematic model were given offsets to highlight the divergent trends in Figure 4-19, indicating a linear increase for the right wheel radius and a decrease for the left. The coupled observation is due to the roll induced by the inflating wheel and the compression of the left tire. The magnitude of change is greater for the right wheel radius compared to the changes of the left. Figure 4-20 portrays the parameters of the right and left motors for completeness; they are not observed to indicate new information. Normalized plots in Figure 4-21 re-illustrate the above findings and suggest the monitoring of divergent wheel radii as a new diagnostic rule.

Experiment 2 Conclusion

The second experiment explored degradation of the kinematic model, producing new, distinct results that augment the findings from the first experiment. Even though the wheel radii in this experiment did not change by more than fifty percent, as in the first experiment, the divergence of the normalized wheel radii parameters from one another represents the detected fault. These normalized parameters can be monitored and measured for divergent behavior instead of the actual realized values. This can also be generalized to other systems utilizing this model by setting a rule based on the divergent slopes of the wheel radii parameters. The non-degrading motor parameters had similar characteristics and no declining resistance values. The use of cross-validation proved adept at contributing to health assessments, with potential use as another feature for health monitoring by comparing the residuals between models of different events.
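A minimal sketch of the suggested divergence rule on the normalized wheel radii; the parameter histories and the slope limit are illustrative:

    % Sketch of a divergence rule on the normalized wheel radii.
    pct = @(p, p0) 100*(p - p0)/p0;               % percent change per event
    d = pct(r_r, r_r0) - pct(r_l, r_l0);          % divergence signal
    c = polyfit(1:numel(d), d(:).', 1);           % trend of the divergence
    divergenceFault = abs(c(1)) > slopeLimit;     % flag sustained divergence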

Figure 4-1. Comparison of two workflows, the seventh and ninth, to indicate the value of the tool and the effects of the history setting on the resulting assessed performance of systems: A) primary kinematic system, B) right motor system, and C) left motor system. The first plot includes the baseline and degradation calculations between the workflow systems, which are not provided in the others in order to highlight the effects of the data on assessments.

Figure 4-2. The track, with an approximately one-inch-wide line, used for the line-following experiments: A) the track used for the experiments and B) the start and goal locations, with marks to align the robot. Photo courtesy of author.

Figure 4-3. Scorch marks on the motor casings found during the post-mortem analysis of the first experiment. Photo courtesy of author.

Figure 4-4. System fitness performance plots for the first experiment: A) all systems together, B) primary kinematic system, C) right motor sub-system, and D) left motor sub-system. Individual system plots include baseline and degradation lines for reference.

Figure 4-5. Cross-validation plots for all systems of the first experiment: A) the models at the third event, B) the models at the forty-first event, and comparisons between the third and forty-first events for the C) primary kinematic system, D) right motor, and E) left motor.

Figure 4-6. Histogram plots from the third event of the first experiment. The data examined are the right and left A) control setpoint values, B) motor command, C) wheel velocities, and D) filtered angular velocity.

Figure 4-7. Histogram plots from the fortieth event of the first experiment. The data examined are the right and left A) control setpoint values, B) motor command, and C) wheel velocities.

Figure 4-8. Comparison histograms between the third and forty-first events. These highlight A) the changes in values of the right and left motor commands and B) the lack of difference between the filtered angular velocities.

Figure 4-9. Statistical plots of the temporal mean and standard deviation for the first experiment. The data examined are the right and left A) control setpoint values, B) motor command, C) wheel velocities, and D) filtered angular rate.

Figure 4-10. Temporal parameters of the primary kinematic system of the first experiment: A) right wheel radius r_r, B) left wheel radius r_l, and C) normal distance between wheel planes L.

Figure 4-11. Individual parameter plots for the right motor used in the first experiment: A) inductance I_r, B) damping coefficient b_r, C) moment of inertia J_r, D) motor electrical resistance R_r, and E) combined gain K_r.

Figure 4-12. Individual parameter plots for the left motor used in the first experiment: A) inductance I_l, B) damping coefficient b_l, C) moment of inertia J_l, D) motor electrical resistance R_l, and E) combined gain K_l.

Figure 4-13. Statistical plots of parameters providing mean and standard deviation values; each point is calculated using that event and prior data. This information is provided for A) right wheel radius r_r, B) left wheel radius r_l, C) normal distance between wheel planes L, D) inductance I_r, E) damping coefficient b_r, F) moment of inertia J_r, G) motor electrical resistance R_r, H) combined gain K_r, I) inductance I_l, J) damping coefficient b_l, K) moment of inertia J_l, L) motor electrical resistance R_l, and M) combined gain K_l.

Figure 4-14. Normalized parameter plots for the systems of LiZARD used in the first experiment: A) all systems, B) primary system, C) right motor without the damping coefficient parameter, D) all right motor parameters, and E) left motor.

Figure 4-15. Simulated inflating tire. Photo courtesy of author.

Figure 4-16. Performance plots of the second experiment: A) all systems, B) primary kinematic system, C) right motor, and D) left motor.

Figure 4-17. Cross-validation plots of the second experiment: A) the primary kinematic system plotted for the forty-eighth and sixth events and B) the forty-eighth and sixth events for the right and left motor systems.

Figure 4-18. Run-time data histograms observing the effects of the second experiment between the forty-eighth and sixth events. The simulated degradation is shown through A) right and left wheel commands, B) right and left wheel velocities, and C) filtered angular velocity.

Figure 4-19. Parameter plots for the primary kinematic system of the second experiment: A) right wheel radius r_r, B) left wheel radius r_l, and C) normal distance between wheel planes L.

Figure 4-20. Parameter plots for the right and left motors of the second experiment: A) inductance I_r, B) damping coefficient b_r, C) moment of inertia J_r, D) motor electrical resistance R_r, E) combined gain K_r, F) inductance I_l, G) damping coefficient b_l, H) moment of inertia J_l, I) motor electrical resistance R_l, and J) combined gain K_l.

Figure 4-21. Normalized parameter plots of the second experiment: A) primary kinematic system, B) right motor system with damping coefficient, C) all right motor system parameters, and D) left motor system.

CHAPTER 5 CONCLUSION

Summary of Experiments

This novel framework is paramount for collecting and storing large sets of data to proactively assess and analyze assets under the proposed health strategy for AMRs. The results can then be used to prevent, mitigate, or delay degradation, leading to the maturation of an ontology relating observations to temporal parameter configurations. From the experiments that were conducted, RobotMD successfully identified degradation and produced ontological results that can be used to improve diagnostics and prognostics. The proposed strategies and guidelines further give robotic platforms of lower readiness levels basic health monitoring capabilities aimed at improving the availability and cost effectiveness of this technology. While the scope of the research has limitations, there are opportunities for improvement, categorized as implementation, approach, and data analysis.

Future Work

Implementation

Evaluating the subjects excluded from this research's scope would benefit RobotMD. In particular, the use of higher fidelity models would help refine the observed variance and provide further insight on degrading coupled parameters. This framework is also capable of analyzing models such as neural networks and ARMA models, though these may not yield physical interpretations. Different strategies for parameter estimation may also result in better fitness and overall predictions, as the current strategy may encounter local minima, preventing optimal results. As this is an offline system, assessing strategies in parallel would both increase the capabilities of RobotMD and simultaneously permit comparisons of experimental methods. This is evident from the comparisons made of the workflows with different history settings in the two experiments conducted. This and other hyper-parameters that govern health monitoring can be optimized in this manner.

Another improvement would be to update parameters, such as those used for forward kinematics in control loops, as changes are detected or when entering a degraded region. Complementing this concept would be ensuring robustness for health assessments, which can take many forms such as verifying stability criteria and using predefined trajectories. Developing this further should include an evaluation of features such as data rate, length of the complete experience, storage size of required data, and the likelihood of a parameter changing. Sensor selection and coverage, internal and external, also contribute to robust health assessment and monitoring. This includes adding the capability of assessing the health of sensors and recalibrating them if necessary. High bandwidth sensors, such as lidar or cameras, would also require attention due to limited resources, and would require additional handling and decision-making algorithms.

Approach

The evolution of RobotMD should take a more systems-engineering approach to incorporate all components, such as updating controls and behaviors to address degradation based on its severity and serviceability. After degradation has been identified for a specific system, the control of that system should be optimized, its stability verified, and its performance criteria met. To further this idea, the behaviors that consume these controls should be adapted so as not to exacerbate the identified or related degradation. The outcome of the modified behavior can be explicit or implicit. Explicit modification adapts to the degradation that is described in the system model. An implicit method would be to utilize artificial intelligence to describe a particular instance of degradation and use this result to modify behaviors [75]. For continued use, the faults determined from degradation detected through maintenance cycles should be shared among similar systems using cloud-based technologies. Information regarding the nature of faults through parameters can also lead to generalized representations for prognostics and contribute toward mitigating strategies such as the suggested maintenance cycles or behavior modification described above. These approaches provide additional knowledge aids and would benefit from further integration into business processes, including risk, quality, and integration management.

Data Analysis

Statistical inference would significantly improve health assessment strategies. Prior data, parameter configurations, unused event data labels, and knowledge of documented forms of degradation would provide better probabilistic estimates for parameter properties and changes. The combination of learned knowledge also directly enhances the development of baselines and the selection of regularization values. Methods such as Kalman filters can also be used for sensor fusion to refine the data used and to support verification and validation of individual sensors.

The use of normalized parameters is suggested to mitigate discrepancies caused by disproportionately scaled parameters of a system, such as the resistance parameter compared to the moment of inertia of the motor sub-systems. Inference, workflows, and time series analysis can also be combined to perform prognostics, examining quantities such as remaining useful life by projecting the evolution of one or more parameters. Other data science techniques, such as clustering, have the potential to augment the extracted ontological parameter configurations of degradation with additional information indicating other characteristics, methods for mitigation, and features to consider for prediction.

Final Remarks

The research presented for the concepts, development, and processes of RobotMD addresses offline health monitoring and the challenges involved in making use of this technology. This is a step forward in pushing for the availability and progression of robotic systems of all technology readiness levels and, as indicated in Chapter 1, addresses only single agent monitoring. The future work discussed in the earlier sections identifies the intricacies of advancing offline monitoring of single agents. Integration of this research with online health monitoring is the next step in developing a complete single agent health monitoring strategy.

Mission health monitoring was implicitly examined by having a user indicate success or failure, a basis that configuration management was designed to utilize. Although missions have unique characteristics, business processes enable a generalized approach, giving structure to analyzing the experiences of robots. There are opportunities to greatly improve the context of health monitoring for both single and multi-agent systems through quantification of mission characteristics, progress, and outcome. The methods described here provide further structure for the continued development and optimization of verification and validation of the elements proposed by the health strategy, leading toward safer and more reliable robots.


APPENDIX A APPLICATION

Users interface with RobotMD through the application, which was developed using the Model-View-ViewModel (MVVM) pattern and Microsoft's Windows Presentation Foundation user interface tools. By following industry best practices and employing design patterns, the framework is scalable in an enterprise environment.

The application has five main sections for user interaction and information display: the title bar, footer, ribbon, main content region, and navigation pane, shown in Figure A-1. The title and footer bars both display only simple information. The ribbon provides buttons for common functions for modifying and selecting the desired robot. As a future work item, a section of the ribbon bar is dedicated to the manipulation of the dictionary records. The main content region dynamically serves up content selected by the user. The navigation pane allows navigation between the four views of the application, controlling what is loaded in the main content region. At startup, the application by default selects the most recently registered robot and loads the Robot Details view. Each time a different robot is selected in the drop-down menu, the Robot Details view is loaded for that specific robot. This view is also loaded if the user selects the option to register a new robot. The remaining views in the navigation pane are Record Event, Event Summary, and Review Warehouse.

Robot Details

The Robot Details view is where the user provides the basic description of the robot and its initial configuration. This includes information for all systems, sensors, and parameters required to analyze the run-time data for degradation. Each of the sections allows the user to create, modify, or delete entries with the appropriate buttons or selections in the grid displaying the section's information. In Figure 3-5 the user provides, at a minimum, the name, type, identification of parent systems, description, order for performing estimation, and the percentage from the baseline that indicates the threshold for undesirable performance for each system.

Next, the user assigns sensors to the robot, shown in Figure 3-6, including a specific sensor from the dictionary, a location description, the stated intended purpose, and the owning system. This information is primarily used during the ETL process to define the ranges in which a sensor may identify anomalous readings. Finally, the parameter section, Figure 3-7, is where the user associates parameters to the system models using a name, minimum and maximum boundary limits, default value, change threshold, change bias, and a user-selected parameter ignore option. The change threshold checks the difference between the prior estimated value and the current one, warning the user if violated. The change bias is the regularization value, from one to one thousand, that activates during normal operation after a baseline has been established. The user also has the option to have the framework indicate a warning if the parameter takes on a boundary value. It is suggested to disable warnings for parameters that are highly unlikely to change, such as inertial or rigid physical dimension parameters.

Record Events

The Record Events view, illustrated in Figure 3-11, is where the user uploads CSV data files, selects event types, and provides other high-level details for a specific event. Each event must be uploaded in sequential order. The workflow analysis tool is also available, shown in Figure A-2, along with a grid displaying the recently recorded events. A new workflow is created by clicking the "Initiate Workflow" button, for which the user is expected to provide a title, a description, and a positive integer value for the history setting, specifying the number of recent data events to include. The degradation analysis and parameter estimation are then performed for all events in the database. This allows the user to explore different hyper-parameters for better degradation analysis.

Event Summary

After an event has been recorded and the systems analyzed, the user is automatically navigated to the Event Summary view. The user can also manually navigate to this view using the navigation pane. In this view, the results of the analysis are presented in a grid labeled Recent Events, shown in Figure A-3. Additional details of parameter warnings, if any, are available in this same grid as an expanded row for the selected event record. The remainder of the view consists of plots for the estimation performance, three plots for examining temporal parameter evolution, a grid of associated parameters for convenience, a grid indicating which data events were used for the models generated, and a cross-validation plot.

The performance plot in Figure A-4 indicates how well the estimation strategy predicted the input data using the realized model and parameters. The user has the ability to select a specific system and workflow, and optionally to plot the system's baseline and degradation limits. Next, Figure A-5 shows a series of three plots for parameters, displayed according to the user's workflow, system, and parameter selections. The first plot reveals the results of one or more estimated parameters, optionally with average and standard deviation bars. The second plot shows the calculated average and standard deviation of the selected parameters for each data event. The final parameter plot depicts the percent change of parameters relative to their initial values. The final section of this view allows the user to plot cross validations, shown in Figure A-6, by model and data event selections. The user selects the model results of a particular system and event, and the resulting plot shows how well that model predicted the other data events uploaded to the data warehouse.

Review Warehouse

The Review Warehouse view is provided for the user to input the data mapping, Figure 3-9, and to analyze the collected data after it has been through the ETL process and stored in the data warehouse table. The user can select from a list of parameters and event data sets to plot, as shown in Figure A-7, producing run-time plots as indicated by Figure A-8. Units are not indicated on the Y axis as each parameter is likely to have different units; the X axis represents time. The user also has additional options to plot only active data and whether to plot the average and standard deviation together. For each parameter selected, a histogram with twenty bins is plotted to indicate the distribution of values. These plots provide insight into how the collected data evolves over the lifetime of the robot.


Figure A-1. The sections of the application, including the title, ribbon, navigation, main content, and footer.

Figure A-2. Recent events in the Record Details view with the option for initiating a new workflow: A) the events with the workflow section collapsed and B) with the workflow tool exposed. Photo courtesy of author.

Figure A-3. Additional result details provided by the recent events section of the Event Summary view: A) the base recent results, providing the system fitness results, and B) the expanded row details, shown when selected, providing more information about indicated warnings.

Figure A-4. Performance plot and options in Event Summary view.


Figure A-5. Parameter plot and options in the Event Summary View. Selections include parameters, statistical average and standard deviation of recorded events, and normalized plots.


Figure A-6. Cross validation plot and options in the Event Summary view. Photo courtesy of author.


Figure A-7. Run-time data plots in the Review Warehouse view. These include A) data columns plotted over time, B) histograms for selected columns, and C) average and/or standard deviation of the selected data columns.


Figure A-8. Example run-time set-point data plots that can be produced. These include A) setpoint data for an entire experience and B) a zoomed-in section at an arbitrary moment in time to depict qualitative waveform characteristics.

APPENDIX B LiZARD ROBOT DESIGN

The AMR designed and created for these experiments, shown in Figure B-1, is called LiZARD (Line following to analyZe Autonomous Robot Degradation). The physical design consists of three vertically stacked tiers. The bottom tier contains the motors and control hardware. The middle tier consists of the operational hardware and auxiliary sensors. The top tier distributes power to the other tiers and provides the human-machine interface (HMI) components, including a liquid crystal display (LCD), power switch, and warning buzzer. The user-provided registration information required to make use of the RobotMD health monitoring framework is provided in Table B-1 for system details, Table B-2 for sensor selection, and Table B-3 for parameter information.

Software Structure

Software for the LiZARD was developed targeting the Armbian operating system utilizing the Robot Operating System (ROS) framework. Armbian is a fast, optimized operating system built from Debian that targets advanced reduced instruction set computing machine (ARM) based architectures, and it was installed on an Asus Tinker Board S. This device connects to three Arduino Mega boards, each using ROS's rosserial package over a serial protocol with a baud rate of 115200. Communication is performed through UART and USB without requiring more complicated methods. Algorithms designed for LiZARD execute individually as nodes on either the Tinker Board or the Arduino Megas. Data is transferred as explicit messages through ROS topics. For additional information on ROS the reader is referred to [76].


Each Arduino Mega encapsulates a core task: control loop execution, line sensor publication, or acquisition of auxiliary sensors, identified respectively as the "executor node", "line-array node", and "aux sensor node". The physical distribution of tasks isolates the critical, time-sensitive loops and promotes data delivery. Both the executor and line-array nodes operate at 50 Hz (20 ms) with an allowable jitter of 1 ms. The aux sensor node services all the non-deterministic data from the IMU, sonar distance, and temperature sensors at a rate of 10 Hz (100 ms). The aux sensor node also publishes battery voltage at a slower rate of 1 Hz on a separate topic for the user display node to consume. These loop times are verified through inspection during the data pre-processing described in the "Data Collection and Processing" section.

The remaining nodes in use by LiZARD are the user display and commander nodes. The former subscribes to the reportBattery and reportCount topics to display battery voltage and circuit count progress. The commander node is responsible for the control commands for wheel velocities determined from the output topic of the line-array node. This node also monitors variables on the ROS parameter server, allowing run-time changes as needed. Figure B-2 shows all the relationships of the nodes and topics for LiZARD.

Safety

The experiments for LiZARD were designed to degrade the robot to the point of failure. These failures, without prior data, can take any form that results in the robot stopping or leaving the track. If the robot stops, no on-board action is required. For the latter, two safety algorithms were developed. The first, on the executor node, monitors the timing of the messages from the commander node; if messages are more than two seconds old, the behavior is turned off and the robot enters an error state. In the second algorithm, in a similar manner, the commander node monitors messages received from the line-array node and sets the same error condition if messages are older than 300 ms. Additionally, the received messages are checked for valid line densities, which must be greater than zero and less than eight, identifying that either no line is detected or an error has occurred. When the robot enters an error state, motor commands are forced to zero, stopping the robot.

Data Collection and Processing

Upon initialization, LiZARD enters a standby state where the line-following behavior is not active and the robot awaits a start command. Data collection begins during standby and is initialized with the creation of a unique file identified by the date and current time. A total of thirty-seven data points, outlined in Table B-4, are logged and form the data mapping entries. Each point was optimized to the smallest data type possible to reduce bandwidth and file size. The "scp" Linux command is used to transfer the completed files to a specified destination for further processing. It is suggested to retrieve data files only when the robot does not have ROS active, so that the system does not have to compete for computational resources.

After the files have been transferred to the processing computer, each is individually pre-processed and inspected using a Python script; the results are provided in Figure B-3. This extra step verifies the integrity of the data and analyzes the intended outcome of the robot experience, reducing the complexity of the ETL process that uploads the dataset to the warehouse data table.

Computational Requirements

The LiZARD used in the experiments underwent several build iterations due to a lack of available computational power. It originally used a Beaglebone Black, which featured an AM335x 1 GHz ARM Cortex processor, 512 MB of double data rate (DDR3) random access memory (RAM), and 4 GB of embedded multimedia card (eMMC) storage with two programmable real-time unit (PRU) 32-bit on-board microcontrollers. With the ROS framework and the identified architecture for the nodes and topics, the robot failed to log data for the full duration of the mission. After a period of time, the values would revert to defaults, often zero, due to buffer overflow that also resulted in lost serial connections. This brought up three concerns. First, the CPU capacity on a single core was unable to perform all the necessary computations for operation and was observed to consume eighty percent or more of available resources. Second, the amount of volatile memory was insufficient. Third, the on-board non-volatile memory was not able to write at a rate capable of handling the incoming data rate.

To address these concerns, the Beaglebone Black was replaced with the Asus Tinker Board S, which has a quad-core Cortex-A17 at 1.8 GHz, 16 GB of eMMC storage, and 2 GB of DDR3 RAM. This increased power requirements, reducing overall run-time, but proved able to handle the logging demands. Furthermore, the data logger node was improved by re-writing the Python node in C++.

Designed Behavior

The line-following behavior of the robot was to operate on a closed-loop track with a dark line approximately 1.5 inches in width on a white background. The track had a marked start region for consistency and a nearby rectangular object as the goal post.

The LiZARD's sonar distance sensor was used to detect the goal post and indicate completed circuits. The success criterion for this behavior was to complete forty circuits without leaving the track, moving at a constant linear velocity of five inches per second.

193

The track's line is detected by a series of infrared detectors equally spaced by 0.5 inches. The position of the line is determined by summing the position of each positive detector, counted from the center of the array, divided by the number of positive detectors. This represents the centroid of the line, where zero is the center of the robot and positive values are to the right. Since the array has eight detectors, bit logic can be used to simplify the calculations, with positions falling in an expected range of [-128, 127]. A non-zero value results in a change in the angular velocity of the robot.
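A minimal sketch of this centroid calculation, written here in MATLAB for illustration (the on-board version uses integer bit logic on the Arduino); the detector offsets and example reading are illustrative:

    % Sketch of the line-position centroid over eight infrared detectors.
    offsets = [-4 -3 -2 -1 1 2 3 4];            % detector positions from center
    active  = logical([0 0 1 1 0 0 0 0]);       % example detector reading
    if any(active)
        linePos = sum(offsets(active)) / nnz(active);   % centroid of the line
    else
        linePos = NaN;                          % no line detected: error state
    end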

The systems designed for LiZARD include the primary kinematic system and two dynamical motor systems. The kinematic feedback control for the primary system uses a proportional gain on the commanded yaw determined from the line array sensor, shown in Figure B-4. The yaw rotation is sensed by the on-board IMU located near the kinematic center of the robot. The kinematic model is defined as

\begin{bmatrix} v_x \\ \omega \end{bmatrix} = \begin{bmatrix} \frac{r_r}{2} & \frac{r_l}{2} \\ \frac{r_r}{2L} & -\frac{r_l}{2L} \end{bmatrix} \begin{bmatrix} \psi_r \\ \psi_l \end{bmatrix}    (B-1)

where v_x is the linear velocity, ω the angular velocity, L the parameter for the normal distance between the wheel planes, ψ_r and ψ_l the wheel velocities, and r_r and r_l the right and left wheel radii [77]. Wheel angular velocities are calculated from

\begin{bmatrix} \psi_r \\ \psi_l \end{bmatrix} = \begin{bmatrix} \frac{r_r}{2} & \frac{r_l}{2} \\ \frac{r_r}{2L} & -\frac{r_l}{2L} \end{bmatrix}^{-1} \begin{bmatrix} v_x \\ \omega \end{bmatrix}    (B-2)

by inverting the 2x2 matrix of the kinematic model and multiplying it by the velocity vector consisting of the constant linear velocity and the commanded yaw rate. The computed wheel rates are passed to the low-level control loops for each motor sub-system.
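A minimal sketch of this computation, using the default kinematic values from Table B-3 and the constant five inches per second linear velocity of the designed behavior; the commanded yaw rate variable is illustrative:

    % Sketch of the wheel-rate computation of Equation B-2.
    r_r = 1.25; r_l = 1.25; L = 2.6;        % defaults from Table B-3
    M = [ r_r/2,       r_l/2;
          r_r/(2*L),  -r_l/(2*L) ];         % kinematic model matrix
    v = [5; omegaCmd];                      % 5 in/s and commanded yaw rate
    psi = M \ v;                            % [psi_r; psi_l] wheel velocities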

A standard approach for modeling brushed DC motors relates input voltage to output angular velocity, as outlined in [78] and [79], which identifies the following simplified state space model.

\dot{\boldsymbol{x}} = \begin{bmatrix} -\frac{b}{J} & \frac{K}{J} \\ -\frac{K}{I} & -\frac{R}{I} \end{bmatrix} \begin{bmatrix} \dot{\theta} \\ i \end{bmatrix} + \begin{bmatrix} 0 \\ \frac{1}{I} \end{bmatrix} u    (B-3)

The parameters for this model are the inductance I, motor resistance R, damping coefficient b, moment of inertia of the motor shaft J, and combined gain K, representing the motor torque constant K_m and back-emf constant K_e. Simplifying the model further, the back-emf and motor torque constants are combined as the single parameter K. A PID control, Figure B-5, was implemented for feedback control of the angular velocity, having gains defined as

K_P = 10    (B-4)
K_I = 5    (B-5)
K_D = 0.05    (B-6)

These values were selected to keep the overshoot less than 8%, the rise time below 0.2 seconds, and the settling time less than 0.5 seconds. During design, it was noted that the motors performed differently due to operating in different configurations with unmodeled gears. The gears were not taken into consideration, suggesting that LiZARD would benefit from more complex models; however, the differences were not significant enough to cause complications, and these minimal models proved sufficient for identifying and analyzing degradation.
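A minimal sketch of the corresponding discrete PID update for one motor's angular-velocity loop, written in MATLAB for illustration (the on-board version runs in C++ on an Arduino); the loop period and variable names are illustrative:

    % Sketch of the discrete PID update for the angular-velocity loop.
    Kp = 10; Ki = 5; Kd = 0.05;             % gains from Equations B-4 to B-6
    dt = 0.02;                              % 50 Hz control loop period (s)
    errInt = 0; ePrev = 0;                  % initialize before the loop
    e = setpoint - measured;                % angular-velocity error
    errInt = errInt + e*dt;                 % integral term accumulator
    errDer = (e - ePrev)/dt;                % derivative term
    u = Kp*e + Ki*errInt + Kd*errDer;       % motor command
    ePrev = e;                              % store error for next iteration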


Figure B-1. LiZARD. Photo courtesy of author.

Table B-1. Information documenting system information for LiZARD for registration with RobotMD. Not included in this table are the actual file locations of the MATLAB files; the function names of the system models are representative, for illustration purposes only.
System 1
  Name: E1 Robot System
  Function Name: system_param_est
  Description: E1 Describe kinematics of the LiZARD robot.
  Physical Location: NULL
  Degradation Limit: 10
  System Order: 1
System 2
  Name: E1 Right Motor
  Function Name: motor_param_est
  Description: E1 Right motor.
  Physical Location: Attached to chassis on bottom tier, right side.
  Degradation Limit: 30
  System Order: 2
System 3
  Name: E1 Left Motor
  Function Name: motor_param_est
  Description: E1 Left motor.
  Physical Location: Attached to chassis on bottom tier, left side.
  Degradation Limit: 30
  System Order: 3

Table B-2. Assigned sensor information for LiZARD registration with RobotMD.

Sensor          Mounted Location                   Purpose
Encoder         Right Motor Output Shaft           Exp1 Right wheel angular velocity; calculate pulses per specific known time duration.
Encoder         Left Motor Output Shaft            Exp1 Left wheel angular velocity; calculate pulses per specific known time duration.
Current         Motor Shield                       Exp1 Current to right motor. New shield failing to measure current; not used.
Current         Motor Shield                       Exp1 Current to left motor. New shield failing to measure current; not used.
Voltage         Top Tier                           Exp1 2S LiPo battery voltage, first cell.
Voltage         Top Tier                           Exp1 2S LiPo battery voltage, second cell.
Line Array      Bottom Tier on Front-most Bracket  Exp1 Measure relative position of line to robot pose.
Sonar           Middle Tier                        Exp1 Sonar used to measure number of circuits completed by measuring distance to goal post.
IMU-9DOF-Accel  Middle Tier                        Exp1 IMU acceleration. Not used.
IMU-9DOF-LVel   Middle Tier                        Exp1 IMU linear velocity. Not used.
IMU-9DOF-Gyro   Middle Tier                        Exp1 IMU gyroscope to measure yaw angular rate of system.
IMU-9DOF-Orien  Middle Tier                        Exp1 IMU absolute orientation. Not used; uncalibrated.

Table B-3. System parameters for LiZARD registration with RobotMD. Additional information excluded from the table are parameters not used for the experiments and the disabled boundary warnings for E1_L, E1_J_r, and E1_J_l.

Param   Param   Param  Param    Change     Change                                          System
Name    Min     Max    Default  Threshold  Bias    Description                             ID
E1_r_r  0.75    4      1.25     0.02168    1       Exp1 Right wheel radius                 1
E1_r_l  0.75    4      1.25     0.01504    1       Exp1 Left wheel radius                  1
E1_L    2       3      2.6      2.374E-06  1000    Exp1 Distance between wheel planes      1
E1_I_r  0.001   2      0.798    0.01046    100     Exp1 Right motor inductance             2
E1_b_r  0.0001  1      0.0125   0.01883    5       Exp1 Right motor viscous damping        2
E1_J_r  1E-05   0.001  0.0013   0.0009713  1000    Exp1 Right motor moment of inertia      2
E1_R_r  0.01    5      1.1515   0.007433   1       Exp1 Right motor electrical resistance  2
E1_K_r  0.001   3      0.15     0.03875    1       Exp1 Right motor simple gain            2
E1_I_l  0.001   2      0.49     0.03875    100     Exp1 Left motor inductance              3
E1_b_l  0.0001  1      0.006    0.0148     1       Exp1 Left motor viscous damping         3
E1_J_l  1E-05   0.001  0.0005   0.0001238  1000    Exp1 Left motor moment of inertia       3
E1_R_l  0.01    5      0.994    0.02074    1       Exp1 Left motor electrical resistance   3
E1_K_l  0.001   3      0.04     0.018409   1       Exp1 Left motor simple gain             3
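As one plausible reading of the bounds and threshold columns above, the sketch below flags a parameter estimate that leaves its registered [min, max] range or drifts from its default by more than the change threshold; how RobotMD actually weighs the change bias is not reproduced here.

```python
# Illustrative parameter check against a Table B-3 row (assumed semantics).
def check_parameter(name, estimate, p_min, p_max, default, threshold):
    flags = []
    if not p_min <= estimate <= p_max:
        flags.append(f"{name}: estimate {estimate} outside [{p_min}, {p_max}]")
    if abs(estimate - default) > threshold:
        flags.append(f"{name}: change from default exceeds {threshold}")
    return flags

# Example with the right-wheel-radius row (E1_r_r).
print(check_parameter("E1_r_r", 1.30, 0.75, 4.0, 1.25, 0.02168))
```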


Figure B-2. LiZARD software layout with corresponding rostopics.

Table B-4. Recorded data outlining the input to the data mapping required to register LiZARD with RobotMD.

Name        Description                                                Data Type      Node             Topic
set_r       Right motor PID control setpoint.                         signed long    executor         robot_report
set_l       Left motor PID control setpoint.                          signed long    executor         robot_report
u_r         Right motor actual input.                                 unsigned int   executor         robot_report
u_l         Left motor actual input.                                  unsigned int   executor         robot_report
line_pos    Sensed line relative to center of line-array sensor.      signed int     executor         robot_report
line_den    Density of sensed line (number of sensors in array).      unsigned int   executor         robot_report
psi_r       Right motor output angular velocity with moving average.  unsigned int   executor         robot_report
psi_l       Left motor output angular velocity with moving average.   unsigned int   executor         robot_report
enc_r       Right motor output actual angular velocity.               unsigned int   executor         robot_report
enc_l       Left motor output actual angular velocity.                unsigned int   executor         robot_report
i_r         Right motor current (as input from command).              unsigned int   executor         robot_report
i_l         Left motor current (as input from command).               unsigned int   executor         robot_report
iter        Main loop iteration value.                                unsigned long  executor         robot_report
dt          Main loop time (ms).                                      signed long    executor         robot_report
execute     Indicates when behavior is active.                        signed int     executor         robot_report
ladt        Line array sensor loop time (ms).                         unsigned int   executor         robot_report
accel_x     X-axis accelerometer data from IMU.                       signed long    aux_sensor_node  reportIMU
accel_y     Y-axis accelerometer data from IMU.                       signed long    aux_sensor_node  reportIMU
accel_z     Z-axis accelerometer data from IMU.                       signed long    aux_sensor_node  reportIMU
linvel_x    X-axis linear velocity estimate data from IMU.            signed long    aux_sensor_node  reportIMU
linvel_y    Y-axis linear velocity estimate data from IMU.            signed long    aux_sensor_node  reportIMU
linvel_z    Z-axis linear velocity estimate data from IMU.            signed long    aux_sensor_node  reportIMU
gyro_x      X-axis gyroscope data from IMU.                           signed long    aux_sensor_node  reportIMU
gyro_y      Y-axis gyroscope data from IMU.                           signed long    aux_sensor_node  reportIMU
gyro_z      Z-axis gyroscope data from IMU.                           signed long    aux_sensor_node  reportIMU
orien_x     X-axis absolute orientation data from IMU.                signed long    aux_sensor_node  reportIMU
orien_y     Y-axis absolute orientation data from IMU.                signed long    aux_sensor_node  reportIMU
orien_z     Z-axis absolute orientation data from IMU.                signed long    aux_sensor_node  reportIMU
bar         Barometer sensor for altitude. (No longer in use.)        unsigned long  aux_sensor_node  reportIMU
alt         Estimated altitude calculated. (No longer in use.)        unsigned long  aux_sensor_node  reportIMU
temp        Ambient temperature data from IMU.                        unsigned int   aux_sensor_node  reportIMU
sonar_dist  Distance measurement used to sense goal post for          unsigned int   aux_sensor_node  reportIMU
            circuit count.
auxdt       Auxiliary sensor loop time (ms).                          unsigned long  aux_sensor_node  reportIMU
extra1      Extra sensor column.                                      signed int     aux_sensor_node  reportIMU
extra2      Extra sensor column.                                      signed int     aux_sensor_node  reportIMU
extra3      Extra sensor column.                                      signed int     aux_sensor_node  reportIMU
battery     LiPo battery voltage.                                     unsigned int   aux_sensor_node  reportBattery
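For illustration, a recorder could subscribe to the three topics in Table B-4 without their compiled message definitions by using rospy's AnyMsg type, which delivers the raw serialized bytes; the node name and logging below are assumptions, not the actual executor or aux_sensor_node code.

```python
# Hypothetical recorder sketch for the Table B-4 topics (not LiZARD's code).
import rospy

def record(topic):
    def callback(msg):
        # msg._buff holds the raw serialized bytes; a real recorder would
        # deserialize them into the named columns before loading RobotMD.
        rospy.loginfo("%s: %d bytes", topic, len(msg._buff))
    rospy.Subscriber(topic, rospy.AnyMsg, callback)

rospy.init_node('robotmd_recorder')  # hypothetical node name
for topic in ('robot_report', 'reportIMU', 'reportBattery'):
    record(topic)
rospy.spin()
```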


Figure B-3. Data post-processing performed after data retrieval, producing several plots. Manual verification of integrity examines A) system loop times and behavior execution, B) general run-time characteristics, C) line correlation to commands, and D) kinematic angular command verification.



Figure B-4. Line following control scheme.



Figure B-5. Motor control design. The design includes the A) control scheme and B) tuned response.


LIST OF REFERENCES

[1] A. Koons-Stapf, "Condition Based Maintenance: Theory, Methodology, and Application," Science Applications International Corporation, McLean, Virginia, 2015.

[2] A. Jardine, D. Lin and D. Banjevic, "A review on machinery diagnostics and prognostics implementing condition-based maintenance," Mechanical Systems and Signal Processing, vol. 20, pp. 1483-1510, 2006.

[3] "Research and Technology Goals and Objectives for Integrated Vehicle Health Management (IVHM)," OAST, 1992.

[4] F. Figueroa and K. Melcher, "Integrated Systems Health Management for Intelligent Systems," American Institute of Aeronautics and Astronautics, 2011.

[5] A Guide to the Project Management Body of Knowledge (PMBOK GUIDE) Fifth Edition, Newtown Square: Project Management Institute, Inc, 2013.

[6] E. Forrester, B. Buteau and S. Shrum, CMMI for Services Second Edition, Upper Saddle River, New Jersey, 2011.

[7] M. Schwabacher and K. Goebel, "A Survey of Artificial Intelligence for Prognostics," NASA Ames Research Center, Moffett Field, California, 2007.

[8] J. Sikorska, M. Hodkiewicz and L. Ma, "Prognostic modeling options for remaining useful life estimation by industry," Mechanical Systems and Signal Processing, vol. 25, pp. 1803-1836, 2011.

[9] K. Javed, R. Gouriveau and N. Zerhouni, "State of the art and taxonomy of prognostic approaches, trends of prognostics applications and open issues towards maturity at different technology readiness levels," Mechanical Systems and Signal Processing, pp. 214-236, 2017.

[10] D. Kwon, M. R. Hodkiewicz, J. Fan, T. Shibutani and M. Pecht, "IoT-Based Prognostics and Systems Health Management for Industrial Applications," IEEE Access, vol. 4, 2016.

[11] K. F. Martin, "A Review by Discussion of Condition Monitoring and Fault Diagnosis in Machine Tools," Intelligent Machine Tools, vol. 34, no. 4, pp. 527-551, 1994.

[12] G. Qiao and B. Weiss, "Monitoring, Diagnostic, and Prognostics for Robot Tool Center Accuracy Degradation," in Proceedings 2018 ASME International Manufacturing Science and Engineering Conference, College Station, 2018.


[13] J. Marzat, H. Piet-Lahanier, F. Damongeot and E. Walter, "Model-based fault diagnosis for aerospace systems: a survey," Journal of Aerospace Engineering, vol. 10, no. 226, pp. 1329-1360, 2012.

[14] L. Jack and A. Nandi, "Fault Detection Using Support Vector Machines and Artificial Neural Networks Augmented by Genetic Algorithms," in Mechanical Systems and Signal Processing, Liverpool, Elsevier Science Ltd., 2002, pp. 373-390.

[15] V. Chandola, A. Banerjee and V. Kumar, "Anomaly Detection: A Survey," in ACM Computing Surveys, vol. 41, 2009.

[16] S. Ganesan, D. Das and M. Pecht, "Identification and utilization of failure mechanisms to enhance FMEA and FMECA," IEEE Workshop Accelerated Stress Test and Reliability, 2005.

[17] C. Hendricks, N. Williard, S. Mathew and M. Pecht, "A failure modes, mechanisms, and effects analysis (FMMEA) of lithium-ion batteries," Journal of Power Sources, vol. 297, pp. 113-120, 2015.

[18] K. C. Kapur and M. Pecht, Reliability Engineering, Hoboken, NJ: Wiley, 2014.

[19] J. Jones, B. Seiger and A. Flynn, Mobile Robots Inspiration to Implementation Second Edition, Natick, MA: A K Peters, Ltd, 1999.

[20] J. C. Mankins, "Technology Readiness Levels," NASA, 1995.

[21] H. M. Elattar, H. K. Elminir and A. M. Riad, "Prognostics: a literature review," Complex Intelligent Systems, pp. 125-154, 2016.

[22] S. Schaal and C. Atkeson, "Learning Control in Robotics Trajectory-Based Optimal Control Techniques," IEEE Robotics & Automation Magazine, no. June 2010, pp. 20-29, 2010.

[23] D. Esposito and A. Saltarello, Microsoft .NET: Architecting Applications for the Enterprise Second Edition, Redmond, Washington: Microsoft Press, 2014.

[24] R. S. Pressman and B. R. Maxim, Software Engineering A Practitioner's Approach Eighth Edition, New York, New York: McGraw-Hill Education, 2015.

[25] S. Cheng, M. G. Pecht and M. H. Azarian, "Sensor Systems for Prognostics and Health Management," Sensors, vol. 10, pp. 5774-5797, 2010.

[26] C. Crane, D. Armstrong, A. Arroyo, A. B. Baker, D. Dankle, G. Garcia, N. Johnson, L. Jaesang, S. Ridgeway, E. Schwartz, E. Thorn, S. Velat, J. Yoon and W. J., "Team Gator Nation: Autonomous Navigation Technologies Developed to Address the Requirements of the 2007 DARPA Urban Challenge," Allen Institute for Artificial Intelligence, Gainesville, 2007.


[27] C. Crane, D. Armstrong, A. Arroyo, A. B. Baker, D. Dankle, G. Garcia, N. Johnson, L. Jaesang, S. Ridgeway, E. Schwartz, E. Thorn, S. Velat and J. Yoon, "Lessons Learned at the DARPA Urban Challenge," Florida Conference on Recent Advances in Robotics, FCRAR, Melbourne, 2008.

[28] G. A. Garcia, "A Cognitive Resource Management Framework for Autonomous Ground Vehicle Sensing," 2010.

[29] T. Moore, "All Purpose Remote Transport System (ARTS) For Active Range Clearance, Force Protection, and Remediation," Robotics 98, pp. 120-125, 1998.

[30] K. Laudon and J. Laudon, Essentials of Business Information Systems Seventh Edition, New Jersey: Pearson Education Inc., 2007.

[31] A. Martinez and E. Fernandez, Learning ROS for Robotics Programming, Birmingham: Packt Publishing, 2013.

[32] S. Rowe and C. Wagner, "An Introduction to the Joint Architecture for Unmanned Systems (JAUS)," Citeseer, Ann Arbor, 2008.

[33] A. Pronobis and B. Caputo, "COLD: The CoSy Localization Database," in Bioinspiration and Robotics Walking and Climbing Robots, Rijeka, I-Tech Education and Publishing, 2007, pp. 588-594.

[34] K. Koperski, J. Adhikary and J. Han, "Spatial Data Mining: Progress and Challenges Survey paper," Burnaby, 1997.

[35] F. Coenen, "Data mining: past, present and future," in The Knowledge Engineering Review, vol. 26, Liverpool, Cambridge University Press, 2011, pp. 25-29.

[36] N. Nihalani, S. Silakari and M. Motwani, "Integration of Artificial Intelligence and Database Management System: An Inventive Approach for Intelligent Databases," in First International Conference on Computational Intelligence, Communication Systems and Networks, 2009.

[37] W. Frawley, G. Piatetsky-Shapiro and C. Matheus, "Knowledge Discovery in Databases: An Overview," AI Magazine, vol. 13, no. 3, pp. 57-62, 1996.

[38] T. Niemuller, G. Lakemeyer and S. Srinivasa, "A Generic Robot Database and its Application in Fault Analysis and Performance Evaluation," IEEE, 2012.

[39] K. Chodorow and M. Dirolf, MongoDB: The Definitive Guide, O'Reilly, 2010.

[40] T. Cormen, C. Leiserson, R. Rivest and C. Stein, Introduction to Algorithms Third Edition, Cambridge: MIT Press, 2009.


[41] J. Rowley, "The wisdom hierarchy: representations of the DIKW hierarchy," Journal of Information Science, vol. 33, no. 2, pp. 163-180, 2007.

[42] T. Niemueller, S. Schiffer, G. Lakemeyer and S. Lakani, "Life-long Learning Perception using Cloud Database Technology," Citeseer, 2013.

[43] Y. Gatsoulis, C. Burbridge and T. McGinnity, "Online Unsupervised Cumulative Learning for Life-Long Robot Operation," in International Conference on Robotics and Biomimetics, Phuket, 2011.

[44] P. Mell and T. Grance, "The NIST definition of cloud computing," National Institute of Standards and Technology, 2009.

[45] B. Rimal, E. Choi and I. Lumb, "A Taxonomy and Survey of Cloud Computing Systems," in Fifth International Joint Conference on INC, IMS and IDC, 2009.

[46] G. Hu, W. Tay and Y. Wen, "Cloud Robotics: Architecture, Challenges and Applications," IEEE Network, vol. May/June, pp. 21-28, 2012.

[47] K. Kamei, S. Nishio and N. Hagita, "Cloud Networked Robotics," IEEE Network, vol. May/June, pp. 28-34, 2012.

[48] B. Kehoe, S. Patil, P. Abbeel and K. Goldberg, "A Survey of Research on Cloud Robotics and Automation," IEEE Transactions on Automation Science and Engineering, vol. 12, no. 2, pp. 398-409, 2015.

[49] A. Gandomi and M. Haider, "Beyond the hype: Big data concepts, methods, and analytics," International Journal of Information Management, vol. 35, pp. 137-144, 2015.

[50] R. Hickman, J. Kuffner, J. Bruce, C. Gharpure, D. Kohler, A. Poursohi, A. Francis and L. Thor, "Shared Robot Knowledge Base for Use with Cloud Computing System". United States of America Patent 8639644, 28 January 2014.

[51] Y. Wang, F. Gao and F. Doyle, "Survey on iterative learning control, repetitive control, and run-to-run control," Journal of Process Control, pp. 1589-1600, 2009.

[52] M. Tenorth and M. Beetz, "Knowledge Processing for Autonomous Robot Control," Association for the Advancement of Artificial Intelligence, 2011.

[53] M. Tenorth and M. Beetz, "KnowRob - Knowledge Processing for Autonomous Personal Robots," in IEEE/RSJ International Conference on Intelligent Robots and Systems, St. Louis, 2009.

[54] M. Beetz, M. Tenorth and J. Winkler, "Open-EASE - A Knowledge Processing Service for Robots and Robotics/AI Researchers," in 2015 IEEE International Conference on Robotics and Automation (ICRA), Seattle, 2015.


[55] L. Riazuelo, M. Tenorth, D. Di Marco, M. Salas, D. Lopez, L. Mosenlechner, L. Kunze, M. Beetz, J. Tardos, L. Montano and J. Montiel, "RoboEarth Semantic Mapping: A Cloud Enabled Knowledge-Based Approach," IEEE Transactions on Automation Science and Engineering, vol. 12, no. 2, pp. 432-443, 2015.

[56] M. Waibel, M. Beetz, J. Civera, R. D'Andrea, J. Elfring, D. Lopez, K. Haussermann, R. Janssen, J. Montiel, A. Perzylo, B. Schieble, M. Tenorth, O. Zweigle and R. van de Molengraft, "RoboEarth," IEEE Robotics and Automation Magazine, vol. June, no. 11, pp. 69-82, 2011.

[57] A. Choudhary, J. Harding and M. Tiwari, "Data mining in manufacturing: a review based on the kind of knowledge," Intelligent Manufacturing, vol. 20, pp. 501-521, 2009.

[58] J. Lee, "Measurement of machine performance degradation using a neural network model," Computers in Industry, vol. 30, pp. 193-209, 1996.

[59] S. Harris, All In One CISSP Exam Guide Fifth Edition, McGraw-Hill Companies, 2016.

[60] G. Liu, "Control of Robot Manipulators with Consideration of Actuator Performance Degradation and Failures," in International Conference on Robotics and Automation, Seoul, 2001.

[61] X.-S. Si, W. Wang, C.-H. Hu and D.-H. Zhou, "Remaining useful life estimation - A review on the statistical data driven approaches," European Journal of Operational Research, vol. 213, pp. 1-14, 2011.

[62] V. Tran, H. Pham, B.-S. Yang and T. Nguyen, "Machine performance degradation assessment and remaining useful life prediction using proportional hazard model and support vector machine," Mechanical Systems and Signal Processing, vol. 32, pp. 320-330, 2012.

[63] Y. Chen, K. Moore and H.-S. Ahn, "Iterative Learning Control," Iterated Learning, pp. 1648-1652, 2007.

[64] B. Settles, Active Learning, Morgan & Claypool Publishers, 2012.

[65] C. Dima, M. Hebert and A. Stentz, "Enabling Learning From Large Datasets: Applying Active Learning to Mobile Robotics," Carnegie Mellon University, Pittsburgh, PA, 2004.

[66] M. Ullah, F. Orabona and B. Caputo, "You Live, You Learn, You Forget: Continuous Learning of Visual Places with a Forgetting Mechanism," in International Conference on Intelligent Robots and Systems, St. Louis, Missouri, 2009.

[67] P. Corke, Robotics, Vision and Control, Berlin: Springer-Verlag, 2013.


[68] B. Larson, Delivering Business Intelligence with Microsoft SQL Server 2012 Third Edition, McGraw-Hill Companies, 2012.

[69] L. Sebastian-Coleman, Measuring Data Quality for Ongoing Improvement, Waltham, MA: Elsevier, 2013.

[70] D. McGilvray, Executing Data Quality Projects, Burlington: Elsevier, 2008.

[71] K. Berns and E. von Puttkamer, Autonomous Land Vehicles, Wiesbaden: Vieweg Teubner, 2009.

[72] F. M. Dekking, C. Kraaikamp, H. P. Lopuhaa and L. E. Meester, A Modern Introduction to Probability and Statistics, London: Springer, 2005.

[73] E. Morelli and V. Klein, Aircraft System Identification: Theory and Practice 2nd Edition, Sunflyte Enterprises, 2016.

[74] R. Kidd, "Genetic Multi-Model Fault-Tolerant Control of an Over-Actuated Autonomous Vehicle Under Known and Unknown Faults," University of Florida, Gainesville, 2015.

[75] M. Quigley, B. Gerkey and W. D. Smart, Programming Robots with ROS, Sebastopol, California: O'Reilly Media, Inc., 2015.

[76] R. Dhaouadi and A. A. Hatab, "Dynamic Modelling of Differential-Drive Mobile Robots using Lagrange and Newton-Euler Methodologies: A Unified Framework," Advances in Robotics & Automation, vol. 2, no. 2, pp. 1-7, 2013.

[77] W. Wei, "DC Motor Parameter Identification Using Speed Step Responses," Modelling and Simulation in Engineering, vol. 2012, pp. 1-6, 2012.

[78] N. S. Nise, Control Systems Engineering, Hoboken, New Jersey: John Wiley & Sons, Inc., 2011.

[79] L. Wang, M. Liu and M. Meng, "Real-Time Multisensor Data Retrieval for Cloud Robotic Systems," IEEE Transactions on Automation Science and Engineering, vol. 12, no. 2, pp. 507-518, 2015.

[80] M. Tenorth and M. Beetz, "KnowRob: A knowledge processing infrastructure for cognition-enabled robots," The International Journal of Robotics Research, vol. 32, no. 5, pp. 566-590, 2013.

[81] B. Siciliano and O. Khatib, Handbook of Robotics 2nd Edition, Berlin: Springer, 2016.

[82] D. McKay, T. Finin and A. O'Hare, "The Intelligent Database Interface: Integrating AI and Database Systems," AAAI, pp. 677-684, 1990.


[83] M. Luo, D. Wang, M. Pham, C. B. Low, J. B. Zhang, D. H. Zhang and Y. Z. Zhao, "Model-Based Fault Diagnosis/Prognosis for Wheeled Mobile Robots: A Review," IEEE, 2005.

[84] S. Lemaignan, R. Ros, L. Mosenlechner, R. Alami and M. Beetz, "ORO, a knowledge management platform for cognitive architectures in robotics," in IEEE/RSJ International Conference on Intelligent Robots and Systems, Taipei, 2010.

[85] Y. Gao, M. Sedef, A. Jog, P. Peng, M. Choti, G. Hager, J. Berkley and R. Kumar, "Towards validation of robotic surgery training assessment across training platforms," in IEEE/RSJ International Conference on Intelligent Robots and Systems, San Francisco, 2011.

[86] L. Gallacher and H. Morris, ITIL Foundation Exam Study Guide, West Sussex, United Kingdom: John Wiley & Sons, Ltd, 2012.

[87] M. Anware, M. Haque, Husain, Munawwar and J. Usmani, "Crash helmet - the harbinger of death: a case report," International Journal of Research in Medical Sciences, Aligarh, India, 2015.

[88] T. Yairi, Y. Kato and K. Hori, "Fault Detection by Mining Association Rules from House-keeping Data," University of Tokyo, Tokyo, 2001.


BIOGRAPHICAL SKETCH

Walter John Waltz graduated in 2008 from Florida State University with Bachelor of Science degrees in mechanical engineering and applied mathematics. He continued his studies at the University of Florida, where he obtained a Master of Science in mechanical engineering in 2014 while serving both as a graduate research assistant at the Center for Intelligent Machines and Robotics (CIMAR) within the Department of Mechanical and Aerospace Engineering and as the lead robotics engineer for a startup company that developed comprehensive software and hardware unmanned robotic packages for mobile robotic platforms operating in hazardous environments. After completing his master's degree and the doctoral qualifying exam, he continued his doctoral studies remotely with the same department while working first as a computer scientist civil servant for the United States Army Aeromedical Research Laboratory, then as a robotics engineer contractor for the National Aeronautics and Space Administration's Autonomous Systems Integrated Research Branch. His current research focus is the health monitoring, analysis, and intelligent optimization of autonomous behaviors for mobile robotic platforms.
