NEW METHOD FOR ROBOTIC SYSTEMS ARCHITECTURE
ANALYSIS, MODELING, AND DESIGN
By
LU LI
Submitted in partial fulfillment of the requirements
For the degree of Master of Science
Thesis Advisor: Dr. Roger Quinn
Department of Mechanical and Aerospace Engineering
CASE WESTERN RESERVE UNIVERSITY
August 2019

CASE WESTERN RESERVE UNIVERSITY
SCHOOL OF GRADUATE STUDIES
We hereby approve the thesis/dissertation of
Lu Li
candidate for the degree of Master of Science.
Committee Chair
Dr. Roger Quinn
Committee Member
Dr. Musa Audu
Committee Member
Dr. Richard Bachmann
Date of Defense
July 5, 2019
*We also certify that written approval has been obtained for any proprietary
material contained therein.
Table of Contents
Table of Contents ...... i
List of Tables ...... ii
List of Figures ...... iii
Copyright page ...... iv
Preface ...... v
Acknowledgements ...... vii
Abstract ...... viii
Introduction ...... 1
1.1. The challenges for robots in the real world ...... 2
1.2. Biological Inspirations ...... 8
1.3. Rethinking Cyber-Physical Systems and Robotics ...... 12
1.4. Outline of the thesis ...... 16
Methodology ...... 17
2.1. Revised Robotic Paradigm and Information Hierarchy ...... 18
2.2. New Method for Robotic Architecture Analysis and Planning ...... 25
Step one: Task analysis ...... 25
Step two: System type identification ...... 28
Step three: Functional hierarchy modeling ...... 32
Robot Design Case Studies and Results ...... 35
3.1. Design Case Study I: Smart Force Sensors ...... 36
3.2. Design Case Study II: Modular Mobile Manipulator ...... 42
3.3. Design Case Study III: Hybrid Neuroprosthesis (HNP) ...... 48
Discussions ...... 58
Conclusions ...... 61
Future Works ...... 62
Appendix A Design Note ...... 64
Bibliography ...... 68
List of Tables
Table 1. Task analysis table with sample tasks, functional allocation, and performance requirements metric scoring ...... 26
Table 2. Task analysis for smart force sensor design ...... 38
Table 3. Task analysis for the modular mobile manipulator system ...... 43
Table 4. Task analysis of some subprocess examples within the HNP system ...... 52
Table 5. Three generations of the modular combination of three layers of the HNP subsystem electronics and its migration path ...... 57
List of Figures
Figure 1. Cartoon of an inside joke that defines the territory and niche of this thesis ...... vi
Figure 2. Biologically inspired robots developed at Case Western Reserve University and Carnegie Mellon University ...... 2
Figure 3. Block diagram that compares the human nervous system to a robotic system architecture ...... 11
Figure 4. The four-quadrant diagram of a rough comparison of robotic algorithms and applications based on computational complexity and cyber-physical interactions ...... 13
Figure 5. The outline of the methodology chapter: a revised robotic paradigm with a three-step method for robotic system architecture analysis, modeling, and design ...... 17
Figure 6. The "Sense-Plan-Act" mental model and the extended hybrid deliberate/reactive paradigm ...... 19
Figure 7. DIKW pyramid model ...... 21
Figure 8. DIKW pyramid model (left) and the combined model with the "Sense-Plan-Act" paradigm (right) ...... 23
Figure 9. Decision tree diagram for selecting the optimal system architecture for a given robotic system problem ...... 31
Figure 10. An abstracted system architecture model for robotics ...... 32
Figure 11. Diagram of Information Pathways ...... 34
Figure 12. Smart Force Sensor assembly ...... 37
Figure 13. Abstracted system type model for the conventional force sensor (left) compared to the smart force sensor (right) ...... 39
Figure 14. System hierarchy and information pathways diagram of the smart force sensor ...... 40
Figure 15. Smart Force Sensor deployed on industrial (a) and medical (d) robotic systems, with the onboard stiffness estimation results (c & e) ...... 41
Figure 16. Modular Mobile Manipulator System changed to two operating configurations ...... 42
Figure 17. Abstracted system architecture model for the modular mobile manipulator system ...... 44
Figure 18. System hierarchy and information pathways diagram for ...... 46
Figure 19. Modular Mobile Manipulator System deployed in a confined space mockup for manufacturing and inspection demonstration ...... 47
Figure 20. The system core components for the HNP system (passive hydraulic configuration) ...... 49
Figure 21. Abstracted system architecture model of the Hybrid Neuroprosthesis (HNP) ...... 54
Figure 22. Diagram of Layered Modular Robotic Paradigm and System Architecture for HNP ...... 55
Figure 23. Function components and layer allocation of conventional vs. smart sensors and actuators ...... 59
Figure 24. Different types of computing hardware with appropriate operating system types and system architecture layer allocations ...... 60
Figure 25. The next generation of modular sensor and actuator units the author is currently working on [72] ...... 62
Copyright page
In reference to IEEE copyrighted material which is used with permission in this thesis, the IEEE does not endorse any of Case Western Reserve University's products or services. Internal or personal use of this material is permitted. If interested in reprinting/republishing IEEE copyrighted material for advertising or promotional purposes or for creating new collective works for resale or redistribution, please go to http://www.ieee.org/publications_standards/publications/rights/rights_link.html to learn how to obtain a License from RightsLink.
Preface
There are a few things I wish I had known before I started my robot design journey:
1. What are the basic rules for designing a robot?
2. How should we select the best components from so many candidates?
3. Where should we connect hardware and software components?
4. Why should we follow these rules, if there are such?
These questions haunt me to this day. However, after countless days and nights of searching for answers, studying and working with world-renowned research teams to explore robotics theory and practice, I began to realize that a few fundamental pieces are still missing from this big puzzle: three profound questions that our community has not yet answered:
1) What is the gold-standard robotic paradigm, or "best practice," for laying out a robotic system architecture build plan?
2) Is there a systematic method or analysis tool that allows us to describe and analyze different types of robotic system architecture?
3) Can we use this new method or tool to create complex robotic systems that are able to interact with the physical world?
I believe that if we can genuinely understand and answer these three questions, we can design robotic architectures that maximize overall performance, compactness, affordability, and extensibility. As a result, the robots we create may ultimately approximate their biological exemplars, or may even surpass natural solutions in the distant future.
This thesis is written toward that goal: it provides a new thought process for robotic architecture analysis, modeling, and design, based on my pondering, practice, and reflection over the last few years of building robots. The new method aims to transform the sorcery of robot design into a logical, scientific approach with an intuitive workflow. The goal of this research is to help us become better robot makers, and eventually to help our creations, the robots, leap out of research laboratories to serve society, change the world for the better, and maybe one day save lives.
Figure 1. Cartoon of an inside joke that defines the territory and niche of this thesis.
Acknowledgements
I would like to express my deepest gratitude to my advisor, Dr. Roger Quinn. His guidance, encouragement, and patience made it possible for me to pursue my unique research odyssey.
I would like to offer my special thanks to the PIs, students, staff, and friends at the Center for Biologically Inspired Robotics Research at Case Western Reserve University (CWRU); the Biorobotics Lab at the Robotics Institute (RI), Carnegie Mellon University (CMU); and the Advanced Platform Technology Center (APTC) and Cleveland FES Center at the Louis Stokes Cleveland VA Medical Center, for their contributions to the projects in this thesis.
Finally, I wish to thank my family for their support and love throughout my study.
This work is dedicated to the scientists, engineers, and students who put their hearts into designing and building robots. Let's save the world one robot at a time!
New Method for Robotic Systems Architecture
Analysis, Modeling, and Design
Abstract
by
LU LI
This thesis suggests a robotic system architecture analysis, modeling, and design workflow that is systematic and intuitive. Firstly, a revised hybrid robotic paradigm model is proposed. Secondly, a method for robotic system subtask analysis, system type identification, and functional hierarchy modeling is presented and examined.
Lastly, three robotic system design cases influenced by the proposed method are studied. The main contribution of this work is to serve as a system investigation and decision-making tool while providing an abstracted blueprint for designing a robotic system architecture that requires extensive interaction between the cyber and physical worlds.
Introduction
In this chapter, I start with a brief literature review discussing the advantages and challenges of current robotic systems, which leads to the scope statement of this research. Then, a brief description of animal nervous systems and robotic cyber-physical systems is presented to explain the inspiration and reasoning behind the logical flow of this thesis. Lastly, an outline of the proposed robotic system architecture analysis, modeling, and design method is presented.
1.1. The challenges for robots in the real world
Robotics promises to fundamentally transform the way human civilization operates across science and technology frontiers such as agriculture, industry, medicine, communication, and transportation. Success stories abound: biologically inspired robots equipped with powerful actuators and capable control algorithms can crawl and jump like animals[1]–[3] and can be deployed in disaster search and rescue missions, while collaborative robots play a significant role in human-robot shared workspaces and might lead to the next industrial revolution[4]–[7].
Additionally, the innovation and adoption of disruptive robotic technology such as robot-assisted Minimally Invasive Surgery (MIS) over the last decades has enabled millions of safer surgical operations that could not be performed using conventional techniques, and eventually created a brand-new industry worth billions of dollars per year, saving more lives and creating more jobs [8], [9].
Figure 2. Biologically inspired robots developed at Case Western Reserve University and Carnegie Mellon University: Micro-Cricket Robot ©CWRU; Snake Monster Robot ©CMU (photo by TechCrunch); Robot III ©CWRU; Unified Modular Snake Robot ©CMU.
However, as robotics shifts from proof-of-concept laboratory work to mass-produced products, challenges such as system performance, cost, reliability, maintainability, and upgradability arise. Several problems still have not been addressed in the robotics world:
Purpose-built: The success of personal computers and mobile devices is due to a universal standard for system architecture that is supported by research institutes, enabled by industrial manufacturers, and refined by educational programs.
In the robotics world, however, each time a robotic system is needed it is "tailor-made": built as a one-off solution with an ad-hoc system architecture and components for a specific problem. Consequently, there is no common robotic system architecture or set of modular design guidelines that is widely accepted in both academia and industry. Several attempts at a universal architecture have been proposed [26], [27]; some failed [28], while others, such as the Robot Operating System (ROS) [29], have had great success. However, ROS only offers a high-level software framework/middleware for rapid software prototyping, rather than a full-stack system architecture build plan containing both hardware and software elements. One possible solution is to create a uniform but refined system architecture that can be shared between different robotic systems to maximize design reusability while enabling extendable and customizable adjustments for specific task requirements. Much as the standard hardware and software architectures propelled the blooming of personal computers in the late 20th century, a universal yet modular robotic architecture
or design paradigm will promote creativity, reduce research and development costs, and allow robots to be integrated into society more quickly.
Narrow perception and situational awareness: Most robotic systems are incapable of perceiving and understanding their operational environment. A typical industrial robot arm will ignore and collide with foreign objects or humans in its preplanned path[10], [11], and commercially available surgical robots are not equipped with built-in tactile sensors to prevent tissue damage[12]. Moreover, robot design and robotics research are bounded by the limitations of sensing technologies. Only a handful of sensors covering a narrow spectrum of physical properties are well studied and implemented on mainstream robots and research platforms[13], including encoders, cameras, lidars, and inertial measurement units (IMUs). This sensing capacity is minimal when compared to the biological receptors found in animal and human bodies[14], which are capable
of detecting a wide diversity of information. However, miniaturized semiconductor-
based sensors [15] and Micro-Electro-Mechanical Systems (MEMS) sensors[16]
have significantly improved over the last few years, resulting in advanced sensors
such as active infrared based time-of-flight 3D cameras[17] which provide
innovative sensing modalities that are not present in the animal kingdom. However,
these novel sensors require extensive signal acquisition and processing to handle their increased data rates and bandwidth[18], which is possible only when the sensor is connected to a powerful, and thus likely stationary, computer. A highly integrated perception-and-processing system, also called a smart sensor, is vital for an intelligent autonomous robot that needs to perform onboard perception and computation in a timely and efficient fashion[19].
Limited actuation and dexterity: It is still challenging to design a robotic actuator that performs as well as animal muscular and power systems, given limitations in power source[20], energy conversion[21], materials[22] and control[23].
Additionally, methods that improve dexterity by increasing the number of degrees of freedom (DoF) consequently make the system harder to motion-plan and control[24]. Furthermore, the closed-loop control strategies of human-made actuators lack intelligence and adaptability because they rely on engineers to manually select suitable sensors and motor controllers and to tune control parameters. The entire actuator design process is often tedious and trial-and-error based, depending heavily on experience. The notion of a tightly coupled sensor-controller-actuator system, also called a smart actuator, has gained increased attention in the robotics community[25], [26], and might be the first but crucial step toward animal-like manipulation and locomotion capabilities.
Cost-effectiveness: Robots are incredibly costly to make and maintain: an application-specific industrial robot arm costs around $100,000[27], and a single-purpose robotic surgical system can cost more than a million US dollars[28]. The extremely high price tags result from three current states of the
robotics industry: Firstly, there is no cross-industry standardization or compatibility protocol between robot vendors, which prevents collaboration and information sharing between robotics companies and component suppliers.
Secondly, unlike the mature automotive [29] and consumer electronics [30] industries, the robotics industry lacks commercial off-the-shelf (COTS) solutions for sourcing robot components, parts, and subassemblies, making it impossible to establish a multi-tier supply chain in which original equipment manufacturers (OEMs) participate in production. The only successful example in robotics is the adoption of standardized low-cost servo motors with integrated closed-loop controllers for small robots, which originated from the radio-controlled aircraft OEM parts supplier industry. Lastly, many robotics companies are young startups from academic or research organizations that do not possess the comprehensive systems engineering knowledge developed and hardened by experienced manufacturing companies in the automotive, aerospace, and defense industries [31].
To solve this problem, the robotics community can learn from forerunners in other industries by creating a system integration standard and a modular components library to enable collaboration across the community and eliminate the need to reinvent the "wheel" in mass-production scenarios.
To summarize this section: robotics has a bright future, but many obstacles remain in the way forward. The new generation of robotics scientists and engineers needs to advance the field by:
• Generalizing a robotic architecture that is methodical and multipurpose.
• Merging tightly coupled function groups into subsystems, such as:
o Smart sensors: highly intelligent and responsive perception systems.
o Smart actuators: integrated actuators with advanced control electronics.
o Modular robot controllers: specialized for robotic computing applications.
• Reducing the cost of design and integration by defining a more modular robotic architecture.
1.2. Biological Inspirations
To solve the problems stated in the previous sections, pioneers in robotics often follow a basic tenet that has guided us since the days of Leonardo da Vinci: learn from nature[3], [32]–[34]. Inspiration from the animal kingdom can serve as a vital blueprint for us to create a cybernetic organism: a machine which can feel, think, and act based on physical stimuli and interact with the elements [35], [36]. Modern research tools such as functional Magnetic Resonance Imaging (fMRI) and Functional Brain Mapping (FBM) [37] allow scientists to better understand the structure and function of animal nervous systems. It is now possible to study the structures and patterns of biological intelligence and then apply that knowledge to artificial intelligence research and the robotics field. Among all the concepts and findings in biology, a few key insights provide great guidelines for robotic architecture building:
Hierarchical structure in the nervous system: The vertebrate nervous system consists of two main parts, the central nervous system (CNS) and the peripheral nervous system (PNS), which are organized as a hierarchical structure [38], [39].
Together the CNS and PNS form a "system architecture" which divides tasks into three hierarchical layers. At the low level, processes that require repetitive patterns, fast response times, and fixed logic, such as involuntary responses and reflexes, happen at the PNS and spinal cord level [40]. One layer up, the mid level consists of subcomponents of the brain, such as the cerebellum and brainstem, which regulate essential coordination functions, process sensory information, and modulate motor activities; this level tends to require both sensory input and motor output to interact with the physical world. Lastly, the cerebrum sits at the highest level of the CNS and is mainly responsible for high-level intelligence[41], such as perception, cognition, learning, emotion, and other intricate and abstract processes. Over billions of years of evolution, this hierarchical structure has become a highly capable system that matches the needs of the biological functions and behaviors of an extremely complex organism. We might argue that a well-designed robotic system architecture can borrow this hierarchical approach as a blueprint, by separating the demanding, responsive low-level routines, the congested mid-level processes, and the intricate high-level tasks into a three-layered architecture.
Neuropathway and reflex: Nature also provides a great example of systematically clustering complex functional elements into a logical order. Two insights are particularly relevant to the robotics world. First is the neuropathway between the PNS and CNS: sensory signals are transmitted from receptors and sensory neurons in the periphery up to the brain along the sensory or ascending tracts[42], while command signals are conveyed down to the motor systems along the motor or descending tracts[43]. Second are reflexes: the spinal cord, as a mid-level system, is capable of overriding and bypassing the high-level system in order to respond to stimuli and events in a timely, reactive fashion.
Autonomic reflexes work without the intervention of the conscious mind to regulate inner organs and provide life-supporting functions, while somatic reflexes help us interact with the physical world via neuropathways between the CNS and the affected muscles [44]. More interestingly, during the cognitive development of humans and other primates, reflexes are the first behaviors to develop in early life [45], [46]. The notions of the neuropathway and the reflex arc provide an excellent guideline for separating and affixing robotic functions and components into subsystems with dedicated information pathways. For example, sensorimotor process groups that require responsive interaction with the physical environment might need a lower-level pathway, similar to the reflex arc, to achieve higher system efficiency and offload burden from the central processing systems[47].
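As a rough sketch of such a reflex-arc-style pathway (again, purely illustrative: the threshold value, function names, and override rule are assumptions, not a design from this thesis), a low-level loop might pass planned commands through unchanged and override them only when a stimulus demands an immediate response:

```python
# Hypothetical sketch of a reflex arc: a fast local pathway that can
# override the descending (high-level) command when a stimulus spikes,
# analogous to a spinal withdrawal reflex. The 5.0 N threshold is an
# invented safety limit for illustration.

CONTACT_FORCE_LIMIT = 5.0  # N, assumed threshold

def descending_command(planner_output):
    """Slow 'motor tract': command produced by the high-level planner."""
    return planner_output

def reflex(force_reading, command):
    """Fast local loop: bypasses the planner when contact force spikes."""
    if force_reading > CONTACT_FORCE_LIMIT:
        return 0.0      # stop the actuator immediately, no planner round-trip
    return command      # otherwise pass the planned command through

# Normal operation: the planned command passes through unchanged.
print(reflex(1.2, descending_command(0.8)))   # 0.8
# Overload: the reflex overrides without waiting for the planner.
print(reflex(9.3, descending_command(0.8)))   # 0.0
```

The design point mirrors the biology: because the override lives below the planner, its latency is bounded by the local loop rate rather than by high-level processing time.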
Modularity in biological systems: Biological organisms are constructed from a series of discrete and interconnected modules. At the cellular level, groups of interacting molecules form distinct "modules" that serve as the building blocks of the cell and execute various biological processes, such as synthesizing proteins and assembling DNA[48], [49]. Additionally, in the nervous system, modular structures and functions in human brain functional networks can be defined topologically or observed experimentally [38], [39]. The fundamental components and the design language used to generate this biological complexity follow a modular approach.
Inspired by biology, modular design has always been a focus in robotics [50]. However, the shortage of professional-grade modular robotic components, inconsistent mechanical interfaces and communication protocols, and, most important of all, the lack of a modular system architecture leave most robotic systems non-modular and purpose-built. Undoubtedly, creating a single common modular standard is nearly impossible, since the hardware and software constraints on each robotic system are unique. However, an intuitive guideline and convention for modular system architecture and interfaces is still valuable and can serve as the first step toward ultimate modularity in robotics.
Figure 3. Block diagram that compares the human nervous system to a robotic system architecture.
1.3. Rethinking Cyber-Physical Systems and Robotics
Although robots share many similarities with biological systems, robot design is fundamentally constrained by the current level of technology and by engineering constraints. In recent years there has been a trend to consider a complex robotic system as a "cyber-physical system" (CPS), a term that emerged around 2006, attributed to leaders at the National Science Foundation in the United States[51].
The concept of CPS shares the same root as the idea of "cybernetics"[52], coined by one of the founding fathers of control system theory, the American mathematician Norbert Wiener, during World War II [53]. The notion of CPS, or cybernetics, describes a complex system that integrates computational and physical capabilities, and that can be modeled and studied using logical abstraction and system architectures. The model can then be realized through engineering design and implementation, creating the desired system from functional components in the categories of computation, communication, perception, and actuation. As a result, it is preferable to borrow methods and tools developed in systems science and control engineering and apply them in the robotics realm.
Among the principles of modern systems science and CPS, two well-studied limiting factors can help us determine and prioritize robot capabilities in order to form a logical system architecture [54]: first, computational complexity; second, the level of cyber-physical interaction. Figure 4 illustrates a rough comparison of robotic algorithms and applications based on these two factors. Several primary robotic functions, algorithms, and applications are listed, and I conceptually placed each example into one of the four quadrants of a two-dimensional coordinate system, representing four distinct directions in robotics that help define the topic and scope of this thesis:
Figure 4. The four-quadrant diagram of a rough comparison of robotic algorithms and applications based on computational complexity and cyber-physical interactions.
• Quadrant I. CPS-oriented Robotics: This type of robotic system operates directly in the real-world environment: perception and action information circulates frequently between the cyber and physical domains. Both computational performance and responsiveness are highly demanded; the robot must acquire data from a diversity of sensors and process the information intelligently on onboard computers while executing feedback control commands to multiple actuators. All of these procedures must happen within the short duty cycle of a fixed control period; any under- or over-control may result in catastrophic failure and can cause damage or even death. Thus, CPS-oriented robotics is the most challenging field compared to the other branches, and this thesis focuses primarily on solving the system-level questions and problems related to this quadrant.
• Quadrant II. Artificial Intelligence (AI)-oriented Robotics: AI has similar or even much higher computational requirements than Quadrant I. However, AI-related robotics tasks usually have loose requirements on timing and direct physical information exchange. High-level decisions such as learning, predicting, and planning rely on the accumulation of data over an extended period and rarely require direct interaction with physical elements without additional system support or interfacing. Consequently, AI-oriented processes play a lesser role in this research and are abstracted into components at the top of the hierarchical system, which can be further defined or extended in future work.
• Quadrant III. Robotic Materials: Robots and concepts in this region represent an extreme case of intelligence that requires little or no computational resources, leveraging mechanical or material computing instead. Conventional robotics rarely reached this region until recent years, when concepts such as robotic and programmable matter started to become a new trend of novel thinking and solutions. Although this thesis mostly ignores the problems and challenges in this quadrant, it is worth pointing out that the hierarchical architecture presented in a later chapter is also designed to include this extremely low-level intelligence, at Level 0 of the paradigm.
• Quadrant IV. Smart Mechatronics: Examples in this quadrant have the most direct access to the physical world, acting as the bare-bones layer of any modern robotic system. Mechatronics, as its name indicates, is a hybrid of mechanics and electronics. We may argue that every robot is smart mechatronics, or that a robotic system is built from permutations and combinations of smaller-scale smart mechatronics. The subsystem and modularity notions in this thesis are tightly related to this quadrant. Moreover, the majority of the engineering practices described in this work can be determined by rules and conventions derived from the mechatronics field.
1.4. Outline of the thesis
In the following chapters, the methodology chapter first addresses the three objectives of this research by presenting a revised robotic paradigm and the three steps of the proposed method. Then, a case study chapter introduces three robotic systems that follow the proposed method in their system architectures and technical implementations. Lastly, results and discussion lead to the conclusions of this thesis. The research objectives are presented below.
Three research objectives:
1) To create a robotic paradigm model optimized for cyber-physical interaction.
2) To define an analysis, modeling, and design workflow that produces a robotic system architecture allowing the robot to sense, act, plan, and perform reflexes.
3) To evaluate this new method of robot design practice on different types of robot design challenges.
Methodology
This chapter is divided into two sections, giving a detailed description of the thought process behind the revised robotic paradigm and the proposed new method, followed by detailed engineering implementation guidelines. Firstly, I reexamine the "Sense-Plan-Act" robotic paradigm and combine it with the data, information, knowledge, and wisdom (DIKW) pyramid model into a physically grounded variant of the architecture model. Secondly, a three-step process is proposed to help robotics researchers and engineers plan and analyze robotic system architectures; it provides an example thought process for defining task requirements, identifying system or subsystem types, and selecting a computing architecture to define the information flow in the robotic system.
Revised Robotic Paradigm
System Architecture Analysis, Modeling, and Design Method
Task analysis
System type identification
Functional Hierarchy Modeling
Figure 5. The outline of the methodology chapter: a revised robotic paradigm with a three-step
method for robotic system architecture analysis, modeling, and design.
The proposed robotic paradigm and analysis method can be utilized as a tool to
deconstruct any robotic system architecture and understand the relationship
between its logical and functional components, while also providing a guideline for
designing a robotic system that has a tight interaction requirement between the cyber and physical domains.
2.1. Revised Robotic Paradigm and Information Hierarchy
Almost 60 years ago, Herbert Simon wrote an essay for the Proceedings of the
American Philosophical Society entitled “The architecture of complexity” [55]. In
this prescient analysis, he argued that most complex systems, including social,
biological, physical, and symbolic systems, are organized in a logically
hierarchical manner [38].
Likewise, as a prime example of a complex system, a robotic system architecture
also needs to determine the causality and priority among its hardware
components and software stacks [55]. Together these elements react to signals,
behave accordingly, and ultimately gather knowledge to achieve intelligence. To
build this robot architecture, first, it must be equipped with the right types of
hardware to perceive (sense) and respond (act) to the operational environment.
Second, a control system (plan) needs to react based on the stimuli and provide
adequate action in a timely and repeatable fashion. Last, a robot may need to
synthesize information, predict patterns, and generate behaviors. As a result, a
robot intelligently anticipates situations or orchestrates its internal and external
variables to produce the desired results.
To help break down this thought process, two essential concepts can serve as
guidelines for determining an architecture hierarchy: the Sense-Plan-Act
robotic paradigm and the information hierarchy (the data, information, knowledge, and wisdom, or DIKW, pyramid).
[Figure 6 content: the Sense-Plan-Act loop between robot and physical world, extended so that a high-level planner/controller closes a deliberative loop and a low-level planner/controller closes a reactive loop through the sensors and actuators.]
Figure 6. The "Sense-Plan-Act" mental model and the extended hybrid deliberative/reactive paradigm.
19
The “Sense-Plan-Act” robotic paradigm is a highly abstracted model of how a
robot operates, as illustrated in Figure 6. Fundamentally, a robot engages in
bidirectional interaction with the physical world, which can be broken down into
three principal elements, called the primitives of robotics: Sense, Plan, and Act. The relationship between the three primitives is commonly described by
three paradigms: the deliberative, the reactive, and the hybrid. The deliberative model follows a hierarchical logic flow: the robot
senses the physical world, plans the next action, then executes it through actuation.
The reactive model emphasizes direct, timely, and efficient coupling between the
sense and act elements without complex planning.
The hybrid model combines the deliberative and reactive paradigms. It resembles
the hierarchical structure of the vertebrate nervous system, where the reflex arc represents the reactive loop and the CNS acts as the deliberative
planner.
Applying the hybrid deliberative/reactive paradigm to engineering practice,
I propose an extension of the hybrid model in which separate planners/controllers govern the deliberative and reactive pathways: the low-level planner provides highly efficient, real-time computation for speedy reactive behavior, while the high-level planner provides capable computing platforms for the computationally expensive tasks of deliberative processes.
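The split between the two pathways can be illustrated with a toy simulation. This is a minimal sketch, not the thesis's implementation: the class names, rates, and the proportional "reflex" are my own assumptions chosen only to show a fast reactive loop nested inside a slower deliberative one.

```python
# Sketch of the extended hybrid deliberative/reactive paradigm.
# The low-level controller closes a fast reactive loop (sense -> act),
# while the high-level planner runs a slower deliberative loop and
# updates the low-level setpoint.  All names and rates are
# illustrative assumptions, not the thesis's actual design.

class LowLevelController:
    """Reactive loop: cheap computation, tight timing."""
    def __init__(self):
        self.setpoint = 0.0

    def step(self, sensor_value):
        # Simple proportional reflex toward the current setpoint.
        error = self.setpoint - sensor_value
        return 0.5 * error  # actuator command

class HighLevelPlanner:
    """Deliberative loop: expensive computation, relaxed timing."""
    def plan(self, sensor_history):
        # Stand-in for an expensive planning step: choose a new
        # setpoint from accumulated observations.
        return sum(sensor_history) / len(sensor_history) + 1.0

def run(steps=100, replan_every=10):
    low, high = LowLevelController(), HighLevelPlanner()
    state, history = 0.0, []
    for t in range(steps):
        history.append(state)
        if t % replan_every == 0:        # slow deliberative rate
            low.setpoint = high.plan(history)
        command = low.step(state)        # fast reactive rate
        state += command                 # toy plant dynamics
    return state
```

Note how the reactive `step` runs every tick while the deliberative `plan` runs an order of magnitude less often; this rate separation is the essence of the extended hybrid model.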
The DIKW pyramid, also called the information hierarchy, is a concept for studying the structural and functional relationship between subjective and objective information. It deconstructs the intelligence cycle into four layers: data, information, knowledge, and wisdom. From bottom to top, each layer creates a subjective abstraction of the more objective layer below it, while from top to bottom, each lower layer requires additional objective observation of the physical environment.
Figure 7. DIKW pyramid model (layers, bottom to top: data, information, knowledge, wisdom).
Although the DIKW pyramid is most often used in philosophy and information science research, the same notion can also be applied to the artificial intelligence and robotics fields. To understand the environment, respond to sensory stimuli, and interact with the physical world, a robot must pass through the same information logic flow: collect sensory data, process it into useful information, analyze that information to build knowledge, incorporate feedback, and eventually achieve wisdom. This closed-loop intelligence cycle passes back and forth between the physical world and the robotic system and can be divided into hierarchical levels, from low to high:
• Layer 0 – Physical level (Data): The physical environment.
• Layer 1 – Low-level Information: Relationships inferred from data.
• Layer 2 – Mid-level Knowledge: Reasoning from the collection of information.
• Layer 3 – High-level Wisdom: Evaluated understanding of knowledge.
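As a concrete reading of these layers, the vacuum-robot tasks from Table 1 can be placed on the hierarchy. The mapping below is a sketch of my own framing; the layer names follow the list above, while the task examples come from Table 1.

```python
# Sketch: placing the vacuum-robot task examples of Table 1 onto
# the DIKW layers.  The assignment of tasks to layers is an
# illustrative assumption, not stated explicitly in the thesis.

dikw_layers = {
    0: ("Data (physical level)", "motor encoder and lidar readings"),
    1: ("Information", "wheel closed-loop control, lidar localization"),
    2: ("Knowledge", "unknown-area exploration behavior"),
    3: ("Wisdom", "predictive operation scheduling"),
}

def describe(layer):
    name, example = dikw_layers[layer]
    return f"Layer {layer} - {name}: e.g. {example}"
```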
Combining “Sense-Plan-Act” and the DIKW pyramid model
Neither the “Sense-Plan-Act” paradigm nor the DIKW pyramid model alone is adequate to
characterize the full spectrum of robotic system architecture. The “Sense-Plan-
Act” paradigm presents a simplified logical flow between the input and output of the
system but breaks down when system complexity grows beyond a single
layer. The DIKW pyramid model explains how raw data is processed
and abstracted in the information domain, yet it cannot capture the bidirectional
interaction between the physical world and the robot. This problem can be
addressed by combining the information pyramid with the “Sense-Plan-Act” robotic paradigm to form a two-dimensional revised model. This model includes both
the functional logic and the hierarchical structure, making it appropriate for representing most
robotic system architectures and their relationship with the physical world.
[Figure 8 content: on the left, the DIKW pyramid (L0 physical level/data through L3 wisdom); on the right, the combined model, whose vertical axis spans the same layers, with the real-time requirement decreasing and the computational complexity increasing toward the top, and whose horizontal axis carries the Sense-Plan-Act flow, with a reactive loop at L1 and a deliberative loop through the upper layers.]
Figure 8. DIKW pyramid model (left) and the combined model with the "Sense-Plan-Act" paradigm (right).
As illustrated in Figure 8, the combined model merges the
DIKW pyramid and the hybrid deliberative/reactive paradigm into a two-dimensional flow-chart diagram. The vertical axis embodies the information hierarchy, while the horizontal axis signifies the paradigm logic flow between the “Sense-Plan-Act” primitives.
The remainder of this chapter provides a workflow of how to deconstruct a robot design problem by a three-step process:
• Step 1. Analyze the task requirements and its relevance to the “Sense-Plan-
Act” paradigm and information hierarchy.
• Step 2. Identify the overall system type and each subsystem's abstracted
architecture, computational requirements, and allocation of functional
components.
• Step 3. Plan the information pathway for each subsystem to optimize
information exchange efficiency and minimize the inter-process delay.
2.2. New Method for Robotic Architecture Analysis and Planning
Step one: Task analysis
Task analysis is an established practice and discipline of study in social science and systems
science [56], [57]. It studies how an organization or system accomplishes
missions by detailing a list of tasks and examining each task’s function,
allocation, and conditions. Likewise, a sophisticated robotic system can be
decomposed into numerous subtasks, and task analysis can be applied to further understand its internal logic and individual requirements.
Applying the combined “Sense-Plan-Act” paradigm and DIKW pyramid model to robotic task analysis can help researchers and engineers form an idea of how each functional item should be allocated in the hierarchical system architecture, and of how the internal and external connections and constraints can be investigated. A sample task decomposition and analysis is presented in Table 1.
The robotic task analysis can be extended to two areas of focus: function allocation analysis and system requirements analysis. The former separates monolithic tasks from compound tasks, while the latter provides insight for the selection of computing methods, which is discussed further in the next section.
Table 1. Task analysis table with sample tasks with functional allocation and performance requirements metric scoring.

| Task Category | Task Example | Sense | Plan | Act | CC* | DPI† | RTL‡ |
|---|---|---|---|---|---|---|---|
| Basic sensing | Sensor data acquisition | X | | | Low | High | High |
| Sensing with processing | Sensor data filtering | X | | | Mid | High | High |
| Sensing with planning | Multi-sensor fusion | X | X | | Mid | High | High |
| Sensing with actuation | Active sensing | X | X | X | Mid | High | Mid |
| Basic actuation | Direct actuator output | | | X | Low | High | High |
| Actuation with processing | Output smoothing and limiting | | | X | Low | High | High |
| Actuation with planning | Multi-axis motor coordination | | X | X | Mid | High | Mid |
| Actuation with sensing | Sensor-based closed-loop control | X | X | X | Mid | High | Mid |
| Low-level planning | Joint motion and trajectory planning | | X | | Mid | Low | Mid |
| Mid-level planning | Model-based state estimation | | X | | High | Low | Low |
| High-level planning | Learning-based behavior generation | | X | | High | Low | Low |
| Vacuum robot perception | Motor encoder and lidar reading | X | | | Low | High | High |
| Vacuum robot actuation | Motor drive PWM output | | | X | Low | High | High |
| Vacuum robot control | Wheel motor closed-loop control | X | X | X | Mid | High | Mid |
| Vacuum robot state estimation | Lidar localization and mapping | X | X | | High | Mid | Mid |
| Vacuum robot behavior | Unknown area exploration | X | X | X | High | Mid | Low |
| Vacuum robot intelligence | Predictive operation scheduling | | X | | High | Low | Low |

CC*: Computational Complexity; DPI†: Direct Physical Interaction; RTL‡: Real-time Requirements
The function allocation analysis examines the role of each task under the "Sense-
Plan-Act" paradigm and determines which category it belongs to. This process is crucial for distinguishing monolithic tasks from compound tasks.
For example, a basic motor-encoder reading task should be defined as a
"Sense"-only task, with no need for the "Plan" and "Act" primitives, while the consecutive task of encoder-based closed-loop motor control should be defined as a
"Sense-Plan-Act" task. In most cases, a monolithic task follows a single direction of information flow, while a compound task may form a reactive or deliberative planning/control loop, which sometimes should be defined as a subsystem.
However, conventional function allocation analysis does not provide any system-variant information that might lead to a different selection of hardware or software components. To further improve this process, a system requirements analysis is proposed for each task, which scores individual conceptual task items against three benchmark criteria: 1) Computational Complexity (CC), which specifies the level of computational resources needed; 2) Direct Physical Interaction (DPI), which indicates whether a specific subtask involves interfacing with the physical world; and 3) Real-Time Requirements (RTR), which estimates the constraints on event-to-response timing. Although these three scores are not scientifically measured, they can be roughly estimated from a designer’s prior experience. This physical-world-relevant system requirements study is a major difference from conventional data or cyber system analysis.
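The scoring scheme above can be captured in a small data structure. The sketch below uses three sample entries from Table 1; the dataclass itself and the compound/monolithic rule (more than one primitive checked) are my own framing of the text.

```python
from dataclasses import dataclass

# Sketch of the function-allocation and requirement-scoring step.
# Scores follow the Low/Mid/High scale of Table 1; the Task class
# and the is_compound rule are illustrative assumptions.

@dataclass
class Task:
    name: str
    sense: bool
    plan: bool
    act: bool
    cc: str   # Computational Complexity: "L", "M", or "H"
    dpi: str  # Direct Physical Interaction
    rtr: str  # Real-Time Requirements

    @property
    def is_compound(self):
        # A task touching more than one primitive may form a
        # reactive or deliberative loop, i.e. a subsystem.
        return sum([self.sense, self.plan, self.act]) > 1

tasks = [
    Task("Sensor data acquisition", True, False, False, "L", "H", "H"),
    Task("Multi-sensor fusion", True, True, False, "M", "H", "H"),
    Task("Sensor-based closed-loop control", True, True, True, "M", "H", "M"),
]

compound = [t.name for t in tasks if t.is_compound]
```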
Step two: System type identification
The second step of the workflow is to identify the type of robotic system involved in the design, in order to better inform which system components are needed and to decide the functional allocation strategy for subprocesses. A decision tree diagram is illustrated in Figure 9. This step provides an intuitive and systematic method for decomposing a robot design concept into an abstracted hierarchical system architecture, paired with a concrete selection of the computing methods best suited to achieving the robot design objectives. The decision tree outputs two major conceptual categories:
Four types of computing for robotic applications:
• Hardware logic computing:
Processing must happen instantaneously with near-zero latency. Traditional
approaches include mechanical computing, analog circuitry, and high-performance
digital integrated circuits such as Field Programmable
Gate Arrays (FPGAs) and Complex Programmable Logic Devices (CPLDs).
• Hard real-time computing:
Computing tasks require a guaranteed response within specified temporal
constraints, meaning processes must execute within a fixed and short
time frame. Typically, hard real-time computing is event- or time-triggered
and runs on parallel computing platforms or high-performance
embedded systems equipped with a Real-Time Operating System
(RTOS); it is most suitable for control systems that directly interact
with the physical environment. This type of computing is the most desirable
for typical robotic applications and is commonly found in
commercially available products such as control systems for robotic arms
[58] and advanced autonomous vehicles [59].
• Soft real-time computing:
Sometimes called near-real-time computing, this shares many similarities
with hard real-time systems in terms of computing hardware and
software structure. However, a soft or near-real-time system is designed
to meet temporal deadlines, yet a missed deadline does not cause
catastrophic failure. Typical examples are embedded systems written loosely,
without time budgeting, in a bare-metal fashion (software runs directly on the
computing hardware rather than within a host operating system) and without an
RTOS. This type of computing is ideal for rapid
prototyping of non-critical robotic applications and for most sensing or planning
applications, but it should be avoided when actuation is involved.
• Non-real-time computing:
This is the most common type of software in general computing systems,
such as the Windows or Linux operating systems on personal computers. The
computational tasks have no strict time constraints; performance
and multitasking are the primary objectives. The development
of personal computing has skyrocketed in the last three decades; even mobile
devices can pack a significant amount of computational power into a tiny
package. However, non-real-time computers have been the most misused
computing platforms in the robotics community in recent years.
Seven types of robotic system or subsystems:
• Type 0 system: Material or Hardware logic system.
• Type I system: Non-real-time system, that is isolated from the physical world.
• Type II system: Non-real-time system, that interacts with the physical world.
• Type III system: Simple robotic system.
• Type IV system: Typical robotic system.
• Type V system: Advanced robotic system.
• Type VI system: Full stacked intelligent robotic system.
I applied the computational-complexity versus physical-interaction four-quadrant diagram model described in Figure 4 to examine the seven types of robotic systems listed above. The distribution covers the full spectrum of robotic system combinations, so that most robotic systems fall into these seven categories.
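One way to read the decision process is as a mapping from a task's requirement scores to one of the four computing types. The rules below are a rough paraphrase of the four computing-type descriptions above, not a literal transcription of the branches in Figure 9; the thresholds are my assumptions.

```python
# Sketch: selecting a computing type from task requirement scores.
# The thresholds paraphrase the four computing-type descriptions in
# the text; they are assumptions, not the literal tree of Figure 9.

def select_computing(dpi, rtr, cc):
    """dpi/rtr/cc are 'L', 'M', or 'H' (Low/Mid/High)."""
    if dpi == "H" and rtr == "H" and cc == "L":
        # Near-zero latency at the physical boundary:
        # analog circuits, FPGA/CPLD.
        return "hardware logic"
    if rtr in ("H", "M") and dpi in ("H", "M"):
        # Guaranteed deadlines for loops touching the physical
        # world: RTOS-based embedded computing.
        return "hard real-time"
    if rtr == "M":
        # Deadlines desirable, but a miss is not catastrophic.
        return "soft real-time"
    # Throughput-oriented planning with no strict deadline.
    return "non-real-time"
```

Applied to Table 1's rows, encoder reading (H/H/L) maps to hardware logic, closed-loop control (H/M/M) to hard real-time, and predictive scheduling (L/L/H) to non-real-time, which matches the intent of the four descriptions.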
[Figure 9 content: a decision tree whose questions (Does the robot behave like a plant or an animal? Does it need to interact with the physical world? Does it need to respond to sensors and events quickly? Does it need to be intelligent? Is it at the proof-of-concept or prototype/production phase? Does it need to coordinate multiple sensors and actuators? Can it be done without a computer?) lead to the seven system types, from the Type 0 smart-material/mechanical system through the Type VI full stacked intelligent robot, each annotated with its layer span (L0 to L3) and one of the four computing types: hardware logic, hard real-time, soft real-time, or non-real-time computing.]
Figure 9. Decision tree diagram for selecting the optimal system architecture for a given robotic system problem.
Step three: Functional hierarchy modeling
The last step of the system architecture analysis is to expand the combined “Sense-Plan-Act” and DIKW pyramid model described in Section 2.1 into hardware- and
software-relevant function block diagrams with well-defined information pathways.
This process combines the major compound subtasks or subsystems recognized in
the task analysis step (step one) with the system composition, computing
methods, and layer allocation resulting from the system type identification and function
allocation step (step two).
[Figure 10 content: a layered block diagram. Layer 3 holds the L3 controller node; Layer 2 holds the L2 controller node with sensor post-processes and actuator pre-processes; Layer 1 holds sensor nodes, L1 controller nodes, and actuator nodes; Layer 0 holds sensor pre-processes, actuator post-processes, and the physical system with noise. Delay symbols (Z3CC, Z2VC, Z2CU, Z1SV, Z1UA, Z1SC, Z1CA, Z1US, Z1AV, Z0PS, Z0AP, indexed [1..n] where nodes are replicated) annotate the links between blocks.]
Figure 10. An abstracted system architecture model for robotics.
The typical expanded block diagram model is illustrated in Figure 10, which introduces two
changes relative to the revised robotic paradigm model described in Section 2.1.
First, the updated model converts the DIKW hierarchy into four layers of planner/controller nodes that echo the four major types of robotic computing methods: in Layer 0, hardware logic computing provides the interaction between the robotic system and the physical environment, while Layers 1 to 3 are comprised of controllers designated to each layer. Based on task specifications and requirements, the appropriate computing type can be assigned to a particular layer using the system type identification method. Second, I introduce additional, optional subprocesses adjacent to the “Sense” and “Act” nodes, designated “pre-process” and “post-process”. The “sensor pre-process” and “actuator post-process” connect directly to the physical environment. In principle, these
processes can be placed at either Layer 0 or Layer 1, depending on the implementation. For example, sensor pre-filtering is a typical “sensor pre-process” that can be realized either by analog RC filter circuits (a Layer 0 process) or by digital filtering firmware inside the embedded controller (a Layer 1 process). The “sensor post-process” and “actuator pre-process” share similar layer-allocation considerations; however, in some instances these subprocesses interface with multiple source or sink function blocks. For example, a “sensor post-process” can be a multi-sensor fusion algorithm that integrates data from several sensor nodes.
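The layered node structure can be sketched as a small graph. This is a minimal illustration: the `Node` class and its `connect` API are my own, while the node names and layer numbers mirror the four-layer model described above.

```python
# Sketch of the layered node structure of the functional hierarchy
# model as a graph.  Node names and layers mirror the text; the
# graph API itself is an illustrative assumption.

class Node:
    def __init__(self, name, layer):
        self.name, self.layer = name, layer
        self.downstream = []

    def connect(self, other):
        self.downstream.append(other)
        return other

# Layer 0 touches the physical world; Layer 1 hosts the reactive
# loop; Layers 2-3 host deliberative planning.
pre    = Node("sensor pre-process (e.g. analog RC filter)", 0)
sensor = Node("sensor node", 1)
l1     = Node("L1 controller node", 1)
l2     = Node("L2 controller node", 2)
l3     = Node("L3 controller node", 3)
act    = Node("actuator node", 1)
post   = Node("actuator post-process", 0)

pre.connect(sensor)
sensor.connect(l1)   # reactive pathway stays within Layer 1
l1.connect(act)
sensor.connect(l2)   # deliberative pathway ascends the hierarchy
l2.connect(l3)
act.connect(post)

reactive_path     = [pre, sensor, l1, act, post]
deliberative_path = [pre, sensor, l2, l3]
```

The key property the model encodes is visible directly: the reactive path never rises above Layer 1, while the deliberative path ascends to Layer 3.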
Furthermore, this architecture model can also be used to analyze information pathways, such as the reactive and deliberative planning loops. Figure 11
illustrates four types of information pathways that frequently arise
during robot operation. Here, I also borrow terminology from biology to present
the logic flow: the sensory (ascending) and motor (descending) tracts
carry information directly between the physical world and the controllers/planners
located at each layer. The delay symbols between functional blocks
designate the communication and cascaded processing delays between them. For
instance, a reactive loop accumulates less delay and is therefore fundamentally more responsive than the deliberative loop, which involves additional processes and their subsequent delays.
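The claim that the reactive loop accumulates less delay can be made concrete with a toy calculation. The hop names echo the delay symbols of the architecture model, but the millisecond values are invented purely for illustration.

```python
# Toy comparison of accumulated delay along the two information
# pathways.  Hop names echo the model's delay symbols; the
# per-hop delays (milliseconds) are invented for illustration.

hop_delay_ms = {
    "Z0PS": 0.1,   # physical world -> sensor pre-process
    "Z1SC": 0.5,   # sensor node -> L1 controller
    "Z1CA": 0.5,   # L1 controller -> actuator node
    "Z1SV": 2.0,   # sensor node -> sensor post-process
    "Z2VC": 5.0,   # sensor post-process -> L2 controller
    "Z2CU": 5.0,   # L2 controller -> actuator pre-process
    "Z1UA": 2.0,   # actuator pre-process -> actuator node
    "Z0AP": 0.1,   # actuator post-process -> physical world
}

reactive     = ["Z0PS", "Z1SC", "Z1CA", "Z0AP"]
deliberative = ["Z0PS", "Z1SV", "Z2VC", "Z2CU", "Z1UA", "Z0AP"]

def total(path):
    """Accumulated delay along a pathway, in milliseconds."""
    return sum(hop_delay_ms[h] for h in path)
```

Under these illustrative numbers the reactive loop totals about 1.2 ms against 14.2 ms for the deliberative loop, matching the qualitative argument in the text.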
[Figure 11 content: the same layered architecture as Figure 10, annotated with the deliberative loop passing through the Layer 2 controller via the sensor post-processes and actuator pre-processes, the reactive loop closing through the Layer 1 controller nodes, and the sensory (ascending) and motor (descending) tracts running between the physical system and each layer.]
Figure 11. Diagram of information pathways.
Robot Design Case Studies and Results
In this chapter, three robot design projects that follow the proposed system
architecture analysis and planning method are presented as examples. These three
designs signify three distinct scenarios for robotics. The first example is a robotics-algorithm-enabled smart sensor design; it demonstrates that a certain level of sensing intelligence should be embedded locally in the lower-level system in order to improve tactile sensing capability and system integration for robotic surgical systems1. The second example represents the need for maximal hardware and computing interchangeability and upgradability across a family of robots comprising several heterogeneous platforms, a growing need in
robotic manufacturing and inspection2 applications. The third example is a
complex cyber-physical system that includes a wide variety of electromechanical sensors and biological actuators, which are unique and challenging in the rehabilitation and assistive robotic exoskeleton3 fields.
1 Some figures in Section 3.1 are based on [60]–[62], previously published at the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), the IEEE International Symposium on Medical Robotics (ISMR), and Robotics: Science and Systems (RSS). This project is funded by the NSF National Robotics Initiative program.
2 Some figures in Section 3.2 are based on research at The Biorobotics Lab, Robotics Institute, Carnegie Mellon University (CMU), Pittsburgh, Pennsylvania. This project is funded by the Boeing Research & Technology Strategic University Program.
3 Some figures in Section 3.3 are based on research at The Center for Biologically Inspired Robotics Research at Case Western Reserve University (CWRU) and The Advanced Platform Technology Center (APTC) in Cleveland, Ohio, and are partially adapted from [66], [67], [73], previously published at the IEEE International Symposium on Medical Robotics (ISMR), the IEEE Engineering in Medicine and Biology Society conference (EMBC), and the Journal of NeuroEngineering and Rehabilitation (JNER). This work is a collection of DOD, VA, and NSF projects related to the HNP.
3.1. Design Case Study I: Smart Force Sensors
Design Overview
Force sensing is crucial for robotics applications that require exceptional sensing
and motor skill, where the robot must perceive the environment and manipulate objects.
However, existing commercial robotic systems, such as industrial robot arms and
minimally invasive surgery (MIS) robotic manipulators, lack the sense of touch.
These robots perform preprogrammed, monotonous behaviors or are teleoperated by a human using visual feedback. Commercially available force/torque sensors are a mature technology but still suffer from two fundamental challenges when integrated with a robotic system:
Challenge 1 – Lack of intelligence: Conventional force sensors output only unprocessed, and sometimes even uncalibrated, force values. Most sensors also require separate signal-processing hardware and sometimes depend heavily on an external system, such as a full computer, to synthesize the high-level information needed for material property estimation and control strategy. As a result, force sensors have usually been treated as a separate front-end component without any intelligence of their own.
Challenge 2 – Cost of integration: 1) Hardware integration cost: commercial off-the-shelf force/torque sensors come in bulky shapes with fixed connector patterns,
requiring extra effort to design an adapter between the sensor and the parent
system. 2) Software interfacing cost: each time the sensor is integrated into an
existing system, additional software bridges such as sensor drivers and processing
tools are required, which generally demands extensive reprogramming and
interfacing with the existing robotic control software. 3) Communication cost: traditional force sensors continuously output raw sensor data without compression or event-based or physics-based information abstraction, which increases data bandwidth and imposes unnecessary communication load during inactive states.
In order to solve these two challenges, the sensor design needs to meet the following requirements:
Objective 1: Create a smart force sensor that is self-contained and able to perform on-sensor processing, calibration, and high-level tactile feature estimation.
Objective 2: Develop a low-cost, miniature force sensor that is easy to integrate into any robotic system.
[Figure 12 content: the sensor front and back sides, shown beside a US quarter for scale; signal-conditioning circuitry with a 2x2 FSR array and a digital communication bus. © 2017 IEEE]
Figure 12. Smart Force Sensor assembly.
System architecture analysis and design
Task Analysis
The four main subtasks involved in the force sensor operation are listed and
analyzed in Table 2. The first two tasks, sensor pre-filtering and analog-to-digital conversion, are basic sensing routines identical to those of a conventional sensor; they require frequent and extensive physical data capture but are computationally undemanding. Higher-level processes, such as stiffness estimation, self-calibration, and manipulator control, play a significant role in making the sensor “smart”. By performing these complex computational tasks onboard, the smart force sensor achieves tactile perception without an external computer system and can interface with any existing robotic system
without major software development. More importantly, the force sensor no longer
outputs only rudimentary data but is also capable of reporting high-level
processed information, i.e., material stiffness and manipulator motion adjustments.
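The on-sensor pipeline can be sketched as a chain from raw readings to a high-level stiffness report. The filter, calibration constants, and least-squares stiffness model below are simplistic stand-ins for the actual firmware, chosen only to show how each stage maps onto a layer of the hierarchy.

```python
# Sketch of the self-contained smart-sensor pipeline: raw readings
# are filtered, calibrated, and abstracted into a stiffness estimate
# on the sensor itself, so the host receives high-level information
# instead of raw data.  All models here are illustrative stand-ins.

def moving_average(samples, window=4):
    """Layer 1 digital pre-filtering (could also be a Layer 0 RC circuit)."""
    return [sum(samples[max(0, i - window + 1):i + 1]) /
            len(samples[max(0, i - window + 1):i + 1])
            for i in range(len(samples))]

def calibrate(counts, gain=0.01, offset=-0.5):
    """Layer 2 self-calibration: ADC counts -> force (hypothetical constants)."""
    return [gain * c + offset for c in counts]

def estimate_stiffness(forces, displacements):
    """Layer 2 abstraction: least-squares slope of force vs. displacement."""
    n = len(forces)
    mx = sum(displacements) / n
    my = sum(forces) / n
    num = sum((x - mx) * (y - my) for x, y in zip(displacements, forces))
    den = sum((x - mx) ** 2 for x in displacements)
    return num / den  # stiffness: force per unit displacement
```

Only the final stiffness value needs to cross the digital bus to the host, which is the communication-cost argument made above.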
Table 2. Task analysis for smart force sensor design.

| Task Example | Sense | Plan | Act | CC* | DPI† | RTL‡ | System Layer Allocation | Computing Method |
|---|---|---|---|---|---|---|---|---|
| Sensor array output voltage pre-filtering | X | | | L | H | H | Layer 0 | Analog circuits |
| Sensor array analog-to-digital conversion | X | | | L | H | M | Layer 1 | Hard RT |
| Sensor self-calibration | X | X | | M | M | M | Layer 2 | Hard RT |
| Stiffness estimation | X | X | | M | M | M | Layer 2 | Hard RT |
| Robotic manipulation control (optional) | | X | X | H | M | M | Layer 3 | Non-RT |

CC*: Computational Complexity; DPI†: Direct Physical Interaction; RTL‡: RT Requirements
System type identification and function allocation
Based on the task analysis and system identification results, the abstracted model of the proposed system architecture, compared to a conventional force sensor system, is presented in Figure 13. As the block diagram shows, compared to a traditional force sensor design, the revised architecture eliminates the need for sensor decoder hardware between the force sensor and the high-level application system. As a result, the force sensor's processing capability can be fully self-contained inside the sensor package using microcontrollers. The force sensor becomes agnostic to the host system and can be exceptionally low-cost, since it does not require expensive supporting devices. Additionally, this configuration enables a simplified, modular interface that requires only power and digital communication between sensor and host, which reduces the chance of analog signal interference and eases the sensor integration process.
[Figure 13 content: the conventional system stacks a Type I high-level application system (non-real-time with a real-time sensor-interface kernel) over a Type III sensor decoder with analog-to-digital converter (ADC) and a Type 0 basic analog-output force sensor; the self-contained design pairs a Type I host with a Type V smart force sensor whose onboard microcontroller (ARM Cortex-M at 20 MHz) runs a lightweight RTOS.]
Figure 13. Abstracted system type model for the conventional force sensor (left) compared to the smart force sensor (right).
System hierarchy and information pathways
The force sensor system architecture presented in Figure 14 captures the four main subtasks and their logical hierarchy in the context of the physical environment and the parent application system. In Layer 0, the sensor array pre-filtering task is realized
by analog circuits, which provide direct physical input from the environment while
using hardware logic computing to remove measurement noise and jitter. In
Layers 1 and 2, sensor data acquisition and material stiffness estimation are performed by
on-sensor hard real-time embedded systems, without external computing
hardware. Optional interfaces to external systems, such as a host computer or
a robotic arm controller, may attach at each corresponding level as the sensor
outputs information.
[Figure 14 content: Layer 3 hosts the application system; Layer 2 a hardware/software hybrid real-time system performing stiffness estimation, self-calibration, and active-search motion generation; Layer 1 sensor data acquisition, calibration, and robot arm motion control; Layer 0 analog sensor pre-filtering and joint motor controllers interfacing the physical system, with delay symbols on the links between blocks.]
Figure 14. System hierarchy and information pathways diagram of the smart force sensor.
Conclusions and Results
The result of this new sensor architecture is a modular force sensor that can be quickly integrated into any existing robotic manipulator system [60]. Moreover, its onboard microcontroller and tightly coupled data-processing firmware enable it to accurately estimate material properties, such as stiffness and surface normal vectors, without external computing hardware [61], [62]. Besides embedded intelligence, the removal of support equipment and the adoption of highly integrated electronics also significantly reduced sensor cost, without sacrificing sensing performance.
Figure 15. Smart Force Sensor deployed on industrial (a) and medical (d) robotic systems, with the onboard stiffness estimation results (c & e). © 2017, 2018 IEEE.
3.2. Design Case Study II: Modular Mobile Manipulator
Design Overview
This robot design aims to build a modular mobile platform that is intelligent and
can be rapidly reconfigured based on task-specific requirements. One common
problem with existing industrial robots for manufacturing is the lack of
customizability. Commercially available automated guided vehicles (AGVs) and
articulated robot arms tend to be built for specific use cases and operational
environments, and it is almost impossible to reconfigure the robot on the worksite.
The economic and time cost of industrial robot integration inhibits the industry
from swiftly adopting robots for new products and processes. To address these
problems, the design objective for this project is simple but clear:
A robot system able to rapidly change its hardware configuration in the field
without major software redesign, while also being capable of operating
autonomously using its onboard sensors, actuators, and computers.
Figure 16. Modular Mobile Manipulator System shown in two operating configurations.
System architecture analysis and design
Task Analysis
Table 3. Task analysis for the modular mobile manipulator system.
| Task Examples | Sense | Plan | Act | CC* | DPI† | RTL‡ | System Layer | Computing Method Allocation |
|---|---|---|---|---|---|---|---|---|
| Vision-based SLAM | X | X | X | H | M | L | Layer 3 | Non-RT |
| Robotic arm planning | X | X | X | M | M | M | Layer 2 | Non-RT |
| Joint-level motor PID control | X | X | X | M | H | M | Layer 1 | Hard RT |
| Joint encoder decoding | X | | | L | H | H | Layer 1 | Hard RT |
| Motor PWM output | | | X | L | H | H | Layer 1 | Hard RT |
| Encoder prefiltering | X | | | L | H | H | Layer 0 | Analog circuits |
| Motor current limiting | | | X | L | H | H | Layer 0 | Analog circuits |

CC*: Computational Complexity; DPI†: Direct Physical Interaction; RTL‡: Real-time Requirements
The subtasks for the modular mobile manipulation system can be divided into three
categories, listed in Table 3. First, high-level tasks, such as vision-based
Simultaneous Localization and Mapping (SLAM), state estimation, and robotic
arm motion planning, are normally computationally expensive and are most
appropriate to run at the top of the system hierarchy with the help of general
computing platforms, such as powerful single-board computers running non-real-
time operating systems. Second, mid-level tasks reside in each individual
joint-level module, such as motor closed-loop control, sensor acquisition, and
processing. These types of tasks require direct interaction with the robotic hardware in real time, so they are best executed on an embedded system that tightly couples the sensors and actuators. A real-time operating system or real-time processing software is also needed to guarantee precise timing and event responses. Lastly, there are two typical tasks that can be offloaded to analog circuits that bridge between the physical world and the digital system: sensor pre-filtering, and actuator safety protection.
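The three-way categorization above can be sketched as a simple allocation rule. This is a hypothetical illustration distilled from Table 3, not an algorithm from the thesis; borderline cases (e.g. the L/H/H tasks that Table 3 splits between Layer 1 and Layer 0) would still need engineering judgment:

```python
# Hypothetical sketch of the task-analysis step in code. The H/M/L ratings
# mirror Table 3; the allocation rule is an assumption distilled from the
# discussion, not an algorithm published in this thesis.
RATING = {"L": 0, "M": 1, "H": 2}

def allocate_layer(cc, dpi, rtl):
    """Map (computational complexity, direct physical interaction,
    real-time requirement) ratings to a system layer and computing method."""
    if RATING[cc] == 2:                       # heavy computation -> top layer
        return 3, "Non-RT software"
    if RATING[dpi] == 2 and RATING[cc] == 0:  # trivial math, very physical
        return 0, "Analog/digital circuits"
    if RATING[dpi] == 2:                      # tight hardware coupling
        return 1, "Hard real-time MCU"
    return 2, "Soft RT" if RATING[rtl] >= 1 else "Non-RT"

# Rows reproduced from Table 3:
print(allocate_layer("H", "M", "L"))  # vision-based SLAM
print(allocate_layer("M", "H", "M"))  # joint-level motor PID
print(allocate_layer("L", "H", "H"))  # encoder prefiltering
```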
System type identification and function allocation
The principal requirement for this system is hardware modularity and
reconfigurability while maintaining a high degree of autonomous operation
capability. Thus, the system type identification and subsystem decomposition are
straightforward: a bus-connected modular architecture, as shown in Figure 17.
[Diagram: a Type II system (robot chassis main module; non-real-time Linux on an Intel Core i5/i7, 15 W TDP, >50,000 MIPS at 1.6 GHz) handles perception and planning across Layers 3 to 0, connected over the bus to an arbitrary number of Type III systems (modular joint actuator subsystems; lightweight RTOS on an ARM Cortex-M3/M4, 100 DMIPS at 80 MHz) handling control and actuation in Layers 1-0.]
Figure 17. Abstracted system architecture model for the modular mobile manipulator system.
On top of the hierarchy, I placed a Type II system: a powerful
industrial-hardened Linux computer with ROS (Robot Operating System) [63]
onboard, capable of performing robotic perception, localization, mapping, and
planning using open-source software libraries. Below this main computer
layer, the system can connect an arbitrary number of modular actuator units
through a power and communication bus. Additionally, within each module an
embedded micro-controller is in charge of sensor reading, closed-loop control, motion regulation, and bidirectional communication, leveraging a real-time operating system with custom-written firmware. In such an architecture, the entire system remains intelligent at the high level, while the total number and topology of the joint-level actuator units can be rapidly changed through a simple mechanical and electrical reconfiguration process, enabling the robot to transform its mechanical structure in a few minutes based on the task requirements.
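The bus-connected modularity described above can be illustrated with a small sketch. All class and method names here are hypothetical, not the robot's actual software API; the point is that the high-level layer enumerates whatever modules are present rather than hard-coding a configuration:

```python
# Illustrative sketch (hypothetical names, not this robot's real API):
# the main computer treats joint modules as interchangeable bus nodes,
# so adding or removing a module changes data, not code.
from dataclasses import dataclass

@dataclass
class JointModule:
    node_id: int        # address on the shared power/communication bus
    joint_type: str     # e.g. "revolute" or "prismatic"

class ModuleBus:
    def __init__(self):
        self._nodes = {}

    def attach(self, module):      # physical plug-in event
        self._nodes[module.node_id] = module

    def detach(self, node_id):     # field reconfiguration
        self._nodes.pop(node_id, None)

    def enumerate(self):
        """The high-level planner rebuilds the kinematic chain from
        whatever modules answer on the bus."""
        return [self._nodes[i] for i in sorted(self._nodes)]

bus = ModuleBus()
for i in range(4):                 # four-joint arm configuration
    bus.attach(JointModule(node_id=i, joint_type="revolute"))
bus.detach(3)                      # swap to a three-joint configuration
print([m.node_id for m in bus.enumerate()])   # -> [0, 1, 2]
```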
System hierarchy and information pathways
The system hierarchy for this robot can be divided into two separate pieces: the high-level Type II system for the main computer placed in the robot chassis, and the modular Type III systems located in each joint-level module. From a layer analysis perspective, the top two layers of the architecture host the most demanding tasks, such as SLAM and motion planning, in the deliberative planning loop, while the lower two layers form the reactive control loop, which contains the hardware for joint-level "sense-plan-act" processes.
[Diagram: the non-real-time software system (Layer 3: SLAM, motion planner; Layer 2: sensor fusion, IK solver) sits above n hard-real-time joint-level systems (Layer 1: sensor processing and motor control nodes; Layer 0: sensor pre-filtering and motor safety circuits) that exchange data with the noisy physical system.]
Figure 18. System hierarchy and information pathways diagram for
the Modular Mobile Manipulator System.
Conclusions and Results
The modular mobile manipulator robot system has been developed based on the architecture described above. It has proven to be a developer-friendly research platform that balances computational performance and hardware modularity. The modular actuator units enabled our research group to work with an aerospace manufacturer team to quickly modify this robot for confined-space
manufacturing and inspection research projects and capability demonstrations. As
shown in Figure 19, the robot in the manufacturing configuration is deployed in an
airplane wing-box mockup environment (overlaid image on the upper-left), while
the robot state estimation and mapping are displayed in the screenshot of the robot operation user interface.
This architecture has allowed our robot to keep improving and evolving over
several years. It has served as the foundation for two research and development
projects, while enabling the research team to introduce newly designed modular
components that integrate seamlessly into the modular architecture to extend the
system.
Figure 19. Modular Mobile Manipulator System deployed in a confined space mockup for manufacturing and inspection demonstration.
3.3. Design Case Study III: Hybrid Neuroprosthesis (HNP)
Design Overview
The high-level goal of the Hybrid Neuroprosthesis (HNP) Project is to "develop and fabricate a novel, self-contained and multifunctional hybrid neuroprosthesis suitable for clinical testing out of the laboratory setting" [64]–[67]. The uniqueness of this project is that the HNP combines a variety of actuation modalities to help restore mobility to individuals with spinal cord injury (SCI). To be more specific, the HNP uses functional neuromuscular stimulation (FNS) as the primary source of actuation, paired with electric motors or passive hydraulic orthoses to provide torque assist or joint locking/coupling. Compared to commercially available powered lower-limb exoskeletons or orthoses that mostly achieve mobility restoration through non-muscle-driven approaches [68], the HNP has great potential to surpass other solutions: 1) FNS is ideal for exercising paralyzed muscles and helps prevent muscular atrophy after spinal cord injury [69]. 2) Muscle-driven exoskeletons can be more efficient than human-made actuators powered by batteries [70], [71].
However, this robotic rehabilitation and assistive system comes with significant challenges that need to be addressed during the system architecture design phase:
1) System Complexity: The entire HNP system consists of various types of
sensors and actuators across the digital, analog, and biological domains.
These include inertial measurement units (IMU), joint potentiometers, rotary
encoders, force sensing insoles, hydraulic valves, electromechanical motors,
and even stimulated human muscles. The HNP system also requires
interfacing with a mixture of commercially available and custom designed
support systems for both upstream and downstream data connections. These
include a VICON motion capture system (Vicon Motion Systems Ltd, Oxford
Metrics, UK) to provide reference ground truth information, and a custom-
built programmable FNS stimulator UECU (Universal External Control Unit)
developed by The Cleveland FES Center, at Case Western Reserve University.
Sensors, actuators, and support devices are listed in Figure 20.
Figure 20. The system core components for the HNP system (passive hydraulic configuration). © 2018 IEEE.
2) System Flexibility: The HNP system design is aimed at providing a
programmable and reconfigurable platform to accelerate research and
development iterations. This means the mechanical hardware, control
electronics, and software stacks all need to allow significant modification to
accommodate different research objectives and experimental requirements
over years of research. For example, the first two phases of the HNP project
were dedicated to the hybrid control schema of FNS and a passive hydraulic
mechanism, while the third phase of the project required substantial
redesign to incorporate electric motors that provide as-needed assistance to
the muscles. Thus, each component of the HNP needed to be designed in a
modular way to allow for upgradability, and the system architecture needed
to be multifunctional and future-proof.
3) System self-containment requirements: Conventional laboratory-oriented
robotic research platforms are designed around a centralized, powerful
computer system, which is generally bulky, immobile, and expensive. The
HNP needs to be self-contained in terms of power, actuation, sensing, and
computation. Its system architecture needs to include powerful but compact
electronics and computers that are able to operate efficiently on a small,
lightweight battery. The onboard computing platform must be able to
perform complex tasks such as state estimation, closed-loop control, and
muscle stimulation in a real-time manner, without additional support
equipment or computers.
System architecture analysis and design
Task Analysis
The HNP is the most complex system among the three case studies. Its subtasks
can also be divided into four layers that resonate with the system hierarchical
model proposed in previous chapters. Several key subtasks are listed in Table 4. Firstly, on the top layer (Layer 3), the graphical user interface and the parameter server act as the bridge between high-level robot behavior and the human operator/user. This is crucial but does not require a real-time response, and thus should be placed on the highest level of the information pyramid. Secondly, planning and coordination tasks should occupy the middle layer (Layer 2). Most tasks in this layer require moderate physical interaction, real-time event response, and medium computational load, which makes them suitable for a soft-real-time system, commonly built on a high-performance embedded micro-controller.
Thirdly, the lower layer (Layer 1) consists of highly responsive and hardware-relevant subprocesses, such as sensor fusion and motor closed-loop control. Such
processes require a fixed computing cycle and instant interrupt response to
incoming events, and are best implemented on a hard-real-time system equipped with highly efficient interrupt handlers and task schedulers. Lastly, the physical layer (Layer 0) has several subtasks that directly interact with the environment.
The system can leverage analog and digital circuits to compute these subtasks as a continuous system, including analog sensor data smoothing with a resistor-capacitor (RC) filter, and motor PWM output generation using the hardware logic timers in
micro-controllers. The benefit of placing low-computational-load but highly physical-demanding tasks at the lowest computing level is that it offloads the upper-level computers, reducing software complexity and improving system response.
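The two Layer-0 offloading examples can be made concrete with a quick worked calculation. The component and clock values below are illustrative assumptions, not the thesis hardware:

```python
import math

# Worked numbers for the two Layer-0 examples in the text. Component and
# clock values are illustrative assumptions, not the actual HNP hardware.

# 1) Analog RC pre-filter: cutoff frequency f_c = 1 / (2*pi*R*C).
R, C = 10_000, 100e-9                    # 10 kOhm, 100 nF
f_c = 1 / (2 * math.pi * R * C)
print(f"RC cutoff: {f_c:.0f} Hz")        # ~159 Hz

# 2) Hardware PWM via a logic timer: the MCU core never touches the
# waveform; the timer reloads at period = clock / (prescaler * f_pwm).
clock, prescaler, f_pwm = 80_000_000, 4, 20_000   # 80 MHz core, 20 kHz PWM
period = clock // (prescaler * f_pwm)
print(f"Timer period: {period} ticks")   # 1000 ticks per PWM cycle
```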
Table 4. Task analysis of some example subprocesses within the HNP system.
| Task Examples | Sense | Plan | Act | CC* | DPI† | RTL‡ | System Layer | Computing Method Allocation |
|---|---|---|---|---|---|---|---|---|
| Graphical user interface | X | X | | H | L | L | Layer 3 | Non-RT |
| Parameter webserver | | X | | H | L | L | Layer 3 | Non-RT |
| Gait Event Detector | X | | | M | M | M | Layer 2 | Soft RT |
| Bluetooth Communication | X | | X | M | M | M | Layer 2 | Soft RT |
| FNS controller | | X | X | M | M | M | Layer 2 | Soft RT |
| Gait Pattern Generator | | X | X | M | M | H | Layer 2 | Soft RT |
| Joint-level motor PID control | X | X | X | M | H | H | Layer 1 | Hard RT |
| IMU sensor fusion | X | X | | M | H | H | Layer 1 | Hard RT |
| IMU posture estimation | X | | | M | H | H | Layer 1 | Hard RT |
| Joint velocity computing | X | | | M | H | H | Layer 1 | Hard RT |
| Motor PWM output | | | X | L | H | H | Layer 0 | Digital circuits |
| Encoder prefiltering | X | | | L | H | H | Layer 0 | Analog circuits |
| Motor current limiting | | | X | L | H | H | Layer 0 | Analog circuits |

CC*: Computational Complexity; DPI†: Direct Physical Interaction; RTL‡: Real-time Requirements
System type identification and function allocation
The configuration of the HNP system architecture abstraction model consists of
three layers corresponding to three functional groups. The subsystems break the
entire robotic system down into high, middle, and low levels in a hierarchical yet modular architecture. On the highest level, a miniature Linux single-board computer (Raspberry Pi Zero W) provides the robot-human interface, wireless communication, and the parameter webserver. On the middle level, an Embedded-Controller-Board (ECB) acts as the backbone and information hub of the entire system, performing most of the soft-real-time computing tasks described in the last section. On the lowest level, a suite of subsystem hardware offers a modular and distributed computing architecture, which oversees sensor capture, data fusion, and actuator closed-loop control. Example hardware includes the Signal-Conditioning-Board (SCB), which provides analog sensor pre-filtering and digital bus interconnect; modular IMU daughter boards equipped with dedicated micro-controllers to perform 9-axis inertial data fusion; and joint-level propulsion modules that control an electric motor to provide torque assist locally.
Furthermore, it is worth noting that the Layer 1 and Layer 0 tasks are distributed among the ECB, SCB, and other modular subsystem hardware. This configuration breaks the complex robotic tasks down into smaller functional groups, not only reducing the computational requirement so that they can execute on miniature micro-controllers, but also parallelizing the hardware and software development workflow, which enables the research team to upgrade or modify single subsystem modules without overhauling or redesigning the entire system.
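As one sketch of the kind of Layer-1 inertial fusion a daughter board might run, the classic complementary filter blends the gyro integral with the accelerometer tilt estimate. This is a generic stand-in; the HNP boards' actual 9-axis fusion algorithm is not specified here:

```python
def complementary_filter(angle, gyro_rate, accel_angle, dt, alpha=0.98):
    """One update of the classic complementary filter widely used for
    embedded IMU attitude estimation: trust the integrated gyro rate at
    high frequency and the accelerometer tilt at low frequency.
    (Generic illustration of a Layer-1 fusion task, not HNP firmware.)"""
    return alpha * (angle + gyro_rate * dt) + (1 - alpha) * accel_angle

angle = 0.0
for _ in range(100):                    # 100 steps of a 1 kHz loop
    # stationary sensor: gyro drifts slightly, accelerometer reads 5 deg
    angle = complementary_filter(angle, gyro_rate=0.01,
                                 accel_angle=5.0, dt=0.001)
print(f"{angle:.2f} deg")               # converges toward 5 deg
```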
[Diagram: a Type I subsystem (high-level computer; non-real-time Linux on an ARM Cortex-A53, 2,300 DMIPS at 1 GHz) occupies Layer 3, above a Type IV subsystem (mid-level micro-controller; lightweight RTOS on an ARM Cortex-M3/M4, 100 DMIPS at 80 MHz) spanning the Layer 2 soft-real-time tasks and multiple Layer 1/Layer 0 hard-real-time and hardware-logic modules.]
Figure 21. Abstracted system architecture model of the Hybrid Neuroprosthesis (HNP).
System hierarchy and information pathways
To examine the HNP system architecture and information flow, we should follow a
bottom-up method. The hierarchy is projected onto the three levels of hardware
groups. From the bottom layer, sensory information and actuator efforts are
interchanged between the physical environment and the modular hard-real-time
subsystems, including the SCB, FNS stimulation boards, and IMU daughter boards.
The physically relevant information is processed and modulated within each
subsystem, and the processed data is sent upstream to the ECB in Layer 2, which collects information from different sources through digital communication buses and computes in a centralized fashion. After mid-level planning and control computation, such as in the Gait Pattern Generator, FNS pattern generator, and lower-leg gait generator subtasks, the actuator commands are sent back down to each modular hardware unit that controls the different actuating modalities.
[Diagram: the human user interfaces with the non-real-time Layer 3 controller nodes; the soft-real-time Layer 2 controller nodes handle sensor post-processes and actuator pre-processes; the hard-real-time Layer 1 sensor and actuator controller nodes sit above the Layer 0 sensor pre-processes and actuator post-processes, which couple to the noisy physical system.]
Figure 22. Diagram of Layered Modular Robotic Paradigm and System Architecture for HNP.
Conclusions and Results
Three generations of HNP control electronics systems have evolved in the last few
years. The hierarchical yet modular system architecture has proven valuable,
enabling our research team to upgrade and expand the system structure
incrementally, redesigning or modifying only subsystem modules without a major
redesign of the entire architecture. For example, the ECB subsystem was redesigned
from Gen 1 to Gen 2 to accommodate the research focus change from passive
hydraulics to electric motor assistance, by introducing a more capable micro-
controller. The two generations are swappable on the two HNP exoskeletons' hardware thanks to the modular hardware and software standardization.
The balance between system performance, hardware portability, and ease of development drove the decision making of the HNP system architecture designs. As a result, the HNP system accomplished the following goals that make it unique compared to other robotic control systems:
• A system architecture that is able to adapt to changes in hardware and
software configuration.
• A high-performance yet portable embedded system that excels at sensing,
controlling, and actuating a complex cyber-physical-biological system.
Table 5. Three generations of the modular combination of three layers of the HNP subsystem electronics and its migration path.
| Subsystem | Gen 1 | Gen 2 | Gen 3 |
|---|---|---|---|
| Embedded Linux (Layer 3) | NA | Intel Edison | Raspberry Pi Zero W |
| Main micro-controller (Layer 2) | ECBv1 (Atmel ATmega2560) | ECBv2 (Cypress PSOC5LP) | ECBv3 (Cypress PSOC5LP) |
| IMU subsystem board (Layer 1) | Custom ATmega328 + MPU9150 | onboard | Hillcrest Labs BNO085 |
| Bluetooth subsystem (Layer 1) | Custom Bluetooth EDR and BLE boards (all generations) | | |
| FNS subsystem | Surface Stimulation Board, Percutaneous Stimulation Board, Implant Control Board (all generations) | | |
| Motor control subsystem | NA | Custom design propulsion boards | |
| Signal Conditioning Board (Layer 0) | Custom designed SCB board (all generations) | | |
| HNP Exo hardware (Layer 0) | HNPv2 Hydraulic Configuration | HNPv2 Electric Motor Configuration | |
Discussion
Applying the proposed robotic system architecture analysis and modeling method,
and studying robot design cases, some critical design recommendations are
generated from this research:
o An intricate robot system needs to be analyzed using a systematic method.
o Subtask planning and allocation need to be based on computational complexity, physical interaction, and real-time requirements.
o Solving planning and control problems in the lower layers as much as possible reduces system delay and increases responsiveness.
o Modular design simplifies the overall system architecture.
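The third recommendation can be quantified with a back-of-envelope comparison of closing a control loop at Layer 1 versus routing it through Layer 3. All rates here are illustrative assumptions, not measured values from these systems:

```python
# Back-of-envelope latency comparison for the recommendation above.
# Loop rates and transport delays are illustrative assumptions only.

def worst_case_delay_ms(loop_hz, transport_ms=0.0):
    """One full sense->plan->act cycle: up to one loop period of sampling
    latency plus bus/network transport each way."""
    return 1000.0 / loop_hz + 2 * transport_ms

layer1 = worst_case_delay_ms(loop_hz=1000)                # on-MCU loop
layer3 = worst_case_delay_ms(loop_hz=30, transport_ms=2)  # via host PC
print(f"Layer 1 loop: {layer1:.1f} ms, Layer 3 loop: {layer3:.1f} ms")
```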
Furthermore, two design best practices have been observed during the last few years. First, there is a flourishing trend of novel smart sensor/actuator designs that directly couple onboard processing and control with the low-level hardware. This creates a tightly integrated "sense-plan-act" subsystem that reduces the risk of system integration and promotes a simplified system architecture. As shown in Figure 23, compared to a conventional sensor or actuator, which only contains the sense or act capability, the smart sensor and smart actuator include both the sense/act and plan elements. A self-contained, self-regulated, closed-loop modular subsystem is formed, allowing the system to connect multiple sensors or actuators on a shared bus network without independent routing or a powerful centralized computer. Also, the physical interaction response and real-time control performance can be vastly improved, since the planning and control loop happens in the highly responsive Layers 0-1 instead of Layers 2-3.
As a result, this new trend also allows higher layers to interface with sensors and actuators by communicating high-level abstracted information, rather than high-bandwidth, knowledge-sparse raw data streams. Future generations of robotic sensor and actuator designs will become "black box" subsystems to the upper layers, while performing more sophisticated computing, control, planning, and even embedded intelligence tasks at the low level.
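The black-box idea can be sketched as follows: the subsystem consumes its own raw force/displacement stream internally and publishes only a high-level quantity upward. The least-squares stiffness fit below is a generic stand-in for the onboard estimator, not the sensor's actual firmware:

```python
# Sketch of the "black box" smart-sensor idea (hypothetical class, not the
# thesis firmware): raw data stays inside the subsystem; the upper layers
# see only an abstracted quantity.

class SmartForceSensor:
    def __init__(self):
        self._samples = []              # (displacement_m, force_N) pairs

    def ingest(self, displacement, force):
        """Layers 0-1: raw samples never leave the subsystem."""
        self._samples.append((displacement, force))

    def stiffness(self):
        """Layers 2-3 interface: one number, not a raw stream.
        Least squares through the origin: k = sum(F*x) / sum(x^2)."""
        sxx = sum(x * x for x, _ in self._samples)
        sxf = sum(x * f for x, f in self._samples)
        return sxf / sxx

sensor = SmartForceSensor()
for i in range(1, 6):                   # probe a ~2000 N/m material
    x = i * 1e-3                        # 1..5 mm indentation
    sensor.ingest(x, 2000.0 * x)
print(f"{sensor.stiffness():.0f} N/m")  # -> 2000 N/m
```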
[Diagram: a basic sensor contains only "sense" and an open-loop actuator only "act"; a closed-loop actuator adds "plan"; smart sensors and smart actuators integrate sense/act and plan within Layers 0-1; a black-box subsystem encapsulates the full sense-plan-act loop below the upper layers.]
Figure 23. Function components and layer allocation of conventional vs. smart sensors and actuators.
• Computing platform for robotic applications.
The second trend is robotic computing platforms optimized for high-level and full-
stack intelligence. Commonly used computing hardware is not designed for robotic
applications: general computing devices such as CPUs and GPUs lack real-time
capability, while micro-controllers (MCUs) are designed for real-time applications
but fail at computationally heavy tasks. An optimized computing platform is much
needed as future generations of robots require a full stack of intelligence
while directly interacting with the physical world. Potential solutions are
heterogeneous computing architectures that combine an MCU/CPU with a Field-
Programmable Gate Array (FPGA) or even a custom-designed Application-Specific
Integrated Circuit (ASIC) to form a robotic System-on-Chip (SoC).
Based on different mission necessities, a robotics researcher or engineer could apply the proposed method to analyze and plan the task specification, functional layer allocation, and system requirements, and then select the best type of computing hardware and its operating system, as indicated in Figure 24.
[Diagram: non-real-time software spans Layer 3; real-time software spans Layer 2; hardware/software co-design (CPU/MCU + FPGA, SoC/ASIC for robotics) spans Layers 1-0 down to the physical layer.]
Figure 24. Different types of computing hardware with their appropriate operating system types and system architecture layer allocations.
Conclusions
The new method for robotic systems architecture analysis, modeling, and design
is a collection of work and philosophical thoughts gathered during my study,
research, and work at the Biologically Inspired Robotics Laboratory at CWRU and
the Biorobotics Lab at CMU during the last few years. The proposed method has
been used throughout more than three research and system development projects,
including but not limited to the three robot design cases described in Chapter 3.
Several unique and capable robotic systems have been developed following this
method, and the method has proven valuable, offering a systematic and
methodical approach to understanding and laying out a robot's system
architecture.
The main contribution of this work is to serve as a system investigation and
decision-making tool while providing an abstracted blueprint for designing a robotic
system architecture that requires extensive interaction between the cyber and
physical worlds. To be more specific, the revised robotic paradigm provides a
conceptual model for both the information hierarchy and the logic flow between the
three most important "sense-plan-act" robotic primitives, while also providing a
preliminary model for system analysis and an intuitive guideline for architecture
design planning. Furthermore, a three-step method is proposed that enables
robotics researchers and engineers to create a bird's-eye view of a complex robotic design challenge, which can be applied in the system design, research, and revision phases.
Future Works
• Modular robotic sensor, controller, actuator.
An immediate next step is to create a family of robotic modules that allows
researchers and engineers to quickly select functional components from a robot
catalog and assemble a complex robotic system with the help of the proposed
methods.
Figure 25. The next generation of modular sensor and actuator units the author is currently working on [72]. © 2015 IEEE.
There are two directions of future development the author is currently
working on. The first is the next generation of the modular computing unit: a self-
contained Type IV system that follows the HNP-ECB architecture but with a
single-chip solution, expanding the use case from rehabilitation robotics to
more general robotic applications. The second is a collection of modular sensors
and actuators that can be quickly assembled to form a mechatronic system, making
the robot hardware and software design process like assembling LEGO bricks.
• Robotic system architecture design automation.
The proposed workflow and methods still rely heavily on human knowledge and
experience, and do not automatically convert a robot design concept into an
engineering implementation plan. If we use software engineering as a
reference, the state of computer hardware and software architecture
development in the mid-20th century is similar to the state of robotic system
design today. However, after years of research and development, we can now
leverage software compilers and hardware synthesizers to translate a high-
level concept or descriptive language into machine-executable instruction code
or physically manufacturable circuits.
Thus, a robotic architecture description language is needed in order to achieve
the final dream of robotic system architecture analysis and design automation.
This design automation system would be a tool that allows even non-robotics
engineers to lay out system requirement constraints and other high-level
objectives, then perform feasibility analysis and virtual benchmarking
with simulation, and eventually generate the system architecture, and even the
robot hardware and software, automatically.
Appendix A Design Note
1. Smart Force Sensor schematic drawing.
© 2017 IEEE
2. Smart Force Sensor PCB assembly.
© 2017 IEEE
3. Modular Mobile Manipulator System project website. http://biorobotics.ri.cmu.edu/robots/ConfinedSpaceMobileBase.php
4. System electronics diagram for the Modular Mobile Manipulator System.
5. Software function block diagram for the main computer on the Modular Mobile Manipulator System.
6. Hybrid Neuroprosthesis (HNP) system diagram - Hydraulic Configuration.
7. HNP Embedded Controller Board with Intel Embedded Computer Assembly.
8. First Generation HNP ECB.
9. Second Generation HNP ECB.
Bibliography
[1] M. Raibert, K. Blankespoor, G. Nelson, and R. Playter, BigDog, the Rough-Terrain Quadruped Robot, vol. 41, no. 2. IFAC, 2008.
[2] R. E. Ritzmann, R. D. Quinn, and M. S. Fischer, "Convergent evolution and locomotion through complex terrain by insects, vertebrates and robots," Arthropod Struct. Dev., vol. 33, no. 3, pp. 361–379, Jul. 2004.
[3] C. Wright et al., "Design of a modular snake robot," in IEEE International Conference on Intelligent Robots and Systems, 2007.
[4] ABB, "ABB YuMi - How human-robot collaboration is driving a manufacturing revolution." [Online]. Available: https://new.abb.com/future/yumi. [Accessed: 31-Mar-2019].
[5] KUKA AG, "KUKA Cobots: Cobots in the industry." [Online]. Available: https://www.kuka.com/en-us/technologies/industrie-4-0/industrie-4-0-cobots-in-industry. [Accessed: 31-Mar-2019].
[6] Yaskawa, "Yaskawa: Advanced, Collaborative, Robotic Automation." [Online]. Available: https://www.motoman.com/collaborative. [Accessed: 31-Mar-2019].
[7] Universal Robots, "Case Stories: Automate almost anything with collaborative robots from UR." [Online]. Available: https://www.universal-robots.com/case-stories/. [Accessed: 31-Mar-2019].
[8] B. S. Peters, P. R. Armijo, C. Krause, S. A. Choudhury, and D. Oleynikov, "Review of emerging surgical robotic technology," Surg. Endosc., vol. 32, no. 4, pp. 1636–1655, Apr. 2018.
[9] G. P. Moustris, S. C. Hiridis, K. M. Deliparaschos, and K. M. Konstantinidis, "Evolution of autonomous and semi-autonomous robotic surgical systems: a review of the literature," Int. J. Med. Robot. Comput. Assist. Surg., vol. 7, no. 4, pp. 375–392, Dec. 2011.
[10] A. Bicchi, M. A. Peshkin, and J. E. Colgate, "Safety for Physical Human–Robot Interaction," in Springer Handbook of Robotics, Berlin, Heidelberg: Springer Berlin Heidelberg, 2008, pp. 1335–1348.
[11] Y. Chinniah, B. Aucourt, and R. Bourbonnière, "Safety of industrial machinery in reduced risk conditions," Saf. Sci., vol. 93, pp. 152–161, Mar. 2017.
[12] P. Puangmali, K. Althoefer, L. D. Seneviratne, D. Murphy, and P. Dasgupta, "State-of-the-Art in Force and Tactile Sensing for Minimally Invasive Surgery," IEEE Sens. J., vol. 8, no. 4, pp. 371–381, Apr. 2008.
[13] A. Geiger, P. Lenz, C. Stiller, and R. Urtasun, "Vision meets robotics: The KITTI dataset," Int. J. Rob. Res., vol. 32, no. 11, pp. 1231–1237, 2013.
[14] Wikipedia, "Sense." [Online]. Available: https://en.wikipedia.org/wiki/Sense.
[15] R. Kuroda, S. Sugawa, and M. Suzuki, "Over 100 million frames per second high speed global shutter CMOS image sensor," 2019.
[16] A. R. Jiménez, F. Seco, C. Prieto, and J. Guevara, "A comparison of pedestrian dead-reckoning algorithms using a low-cost MEMS IMU," in WISP 2009 - 6th IEEE International Symposium on Intelligent Signal Processing - Proceedings, 2009.
[17] L. Keselman, J. I. Woodfill, A. Grunnet-Jepsen, and A. Bhowmik, "Intel(R) RealSense(TM) Stereoscopic Depth Cameras," in 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 2017, pp. 1267–1276.
[18] S. Izadi et al., "KinectFusion: Real-time 3D Reconstruction and Interaction Using a Moving Depth Camera," in Proceedings of the 24th Annual ACM Symposium on User Interface Software and Technology - UIST '11, 2011.
[19] M. Satyanarayanan, "The emergence of edge computing," Computer (Long. Beach. Calif)., 2017.
[20] S. Seok et al., "Design principles for energy-efficient legged locomotion and implementation on the MIT Cheetah robot," IEEE/ASME Trans. Mechatronics, vol. 20, no. 3, pp. 1117–1129, 2015.
[21] Y. J. Kim, "Design of low inertia manipulator with high stiffness and strength using tension amplifying mechanisms," in IEEE International Conference on Intelligent Robots and Systems, 2015.
[22] L. Ricotti et al., "Biohybrid actuators for robotics: A review of devices actuated by living cells," Science Robotics, 2017.
[23] D. Rollinson et al., "Design and architecture of a series elastic snake robot," in 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2014, pp. 4630–4636.
[24] C. Gong et al., "Kinematic gait synthesis for snake robots," Int. J. Rob. Res., 2016.
[25] Robotis, "DYNAMIXEL All-in-one Smart Actuator." [Online]. Available: http://www.robotis.us/dynamixel/. [Accessed: 31-Mar-2019].
[26] HEBI Robotics, "X-Series Actuators — HEBI Robotics." [Online]. Available: https://www.hebirobotics.com/x-series-smart-actuators. [Accessed: 31-Mar-2019].
[27] RobotWorx, "RobotWorx - How Much Do Industrial Robots Cost?" [Online]. Available: https://www.robots.com/faq/how-much-do-industrial-robots-cost. [Accessed: 31-Mar-2019].
[28] G. I. Barbash and S. A. Glied, "New Technology and Health Care Costs — The Case of Robot-Assisted Surgery," N. Engl. J. Med., vol. 363, no. 8, pp. 701–704, Aug. 2010.
[29] J.-H. Thun and D. Hoenig, "An empirical analysis of supply chain risk management in the German automotive industry," Int. J. Prod. Econ., vol. 131, no. 1, pp. 242–249, May 2011.
[30] J. F. Christensen, M. H. Olesen, and J. S. Kjær, "The industrial dynamics of Open Innovation—Evidence from the transformation of consumer electronics," Res. Policy, vol. 34, no. 10, pp. 1533–1549, Dec. 2005.
[31] M. Jamshidi, "System of Systems - Innovations for 21st Century," in 2008 IEEE Region 10 and the Third International Conference on Industrial and Information Systems, 2008, pp. 6–7.
[32] R. D. Quinn, J. T. Offi, D. A. Kingsley, and R. E. Ritzmann, "Improved mobility through abstracted biological principles," in IEEE/RSJ International Conference on Intelligent Robots and Systems, vol. 3, pp. 2652–2657.
[33] R. Altendorfer et al., "RHex: A Biologically Inspired Hexapod Runner," Auton. Robots, vol. 11, no. 3, pp. 207–213, 2001.
[34] S. Kim, "Bio-inspired robot design with compliant underactuated systems," 2008.
[35] G. Baldassarre and M. Mirolli, "Computational and robotic models of the hierarchical organization of behavior: An overview," in Computational and Robotic Models of the Hierarchical Organization of Behavior, 2013.
[36] S. Wermter, G. Palm, and M. Elshaw, "Biomimetic Neural Learning for Intelligent Robots: Intelligent Systems, Cognitive Robotics, and Neuroscience," Lecture Notes in Artificial Intelligence, 2005.
[37] S. Sagar, J. Rick, A. Chandra, G. Yagnik, and M. K. Aghi, "Functional brain mapping: overview of techniques and their application to neurosurgery," Neurosurg. Rev., pp. 1–9, Jul. 2018.
[38] D. Meunier, R. Lambiotte, A. Fornito, K. Ersche, and E. T. Bullmore, "Hierarchical modularity in human brain functional networks," Front. Neuroinform., vol. 3, p. 37, Oct. 2009.
[39] D. Meunier, R. Lambiotte, and E. T. Bullmore, "Modular and Hierarchically Modular Organization of Brain Networks," Front. Neurosci., vol. 4, p. 200, Dec. 2010.
[40] ScienceDirect, "Spinal Reflex - an overview | ScienceDirect Topics." [Online]. Available: https://www.sciencedirect.com/topics/neuroscience/spinal-reflex. [Accessed: 01-Apr-2019].
[41] J. A. Kiernan and M. L. Barr, Barr's The Human Nervous System: An Anatomical Viewpoint. Lippincott Williams & Wilkins, 2005.
[42] L. M. Biga, S. Dawson, A. Harwell, R. Hopkins, J. Kaufmann, M. LeMaster, P. Matern, K. Morrison-Graham, and D. Quick, "Chapter 14.5 - Sensory and Motor Pathways," in Anatomy & Physiology, Pressbooks.com: Simple Book Production.
[43] R. N. Lemon, "Descending Pathways in Motor Control," Annu. Rev. Neurosci., vol. 31, no. 1, pp. 195–218, Jul. 2008.
[44] K. Saladin, Anatomy & Physiology: The Unity of Form and Function. 2014.
[45] pgpedia, "Sensorimotor Stage." [Online]. Available: https://www.pgpedia.com/s/sensorimotor-stage. [Accessed: 01-Apr-2019].
[46] S. Chevalier-Skolnikoff, "Sensorimotor development in orang-utans and other primates," J. Hum. Evol., 1983.
[47] K. S. Espenschied, R. D. Quinn, R. D. Beer, and H. J. Chiel, "Biologically based distributed control and local reflexes improve rough terrain locomotion in a hexapod robot," Rob. Auton. Syst., 1996.
[48] L. H. Hartwell, J. J. Hopfield, S. Leibler, and A. W. Murray, "From molecular to modular cell biology," Nature, 2002.
[49] K. Mitra, A.-R. Carvunis, S. K. Ramesh, and T. Ideker, "Integrative approaches for finding modular structure in biological networks," Nat. Rev. Genet., vol. 14, no. 10, pp. 719–732, Oct. 2013.
[50] J. Seo, J. Paik, and M. Yim, "Modular Reconfigurable Robotics," Annu. Rev. Control Robot. Auton. Syst., vol. 2, no. 1, May 2019.
[51] R. Baheti and H. Gill, Cyber-Physical Systems: From Theory to Practice. 2011.
[52] E. A. Robinson, "Cybernetics, or Control and Communication in the Animal and the Machine," Technometrics, 1963.
[53] E. A. Lee, "The past, present and future of cyber-physical systems: a focus on models," Sensors (Basel), vol. 15, no. 3, pp. 4837–4869, Feb. 2015.
[54] E. A. Lee, "Cyber-Physical Systems - Are Computing Foundations Adequate?," 2006.
[55] M. N. Nicolescu and M. J. Matarić, "A hierarchical architecture for behavior-based robots," 2004.
[56] J. Annett and N. A. Stanton, Task Analysis. CRC Press, 2000.
[57] P. Salmon, N. Stanton, A. Gibbon, D. Jenkins, and G. Walker, "Cognitive Task Analysis," in Human Factors Methods and Sports Science, 2010.
[58] T. Brogårdh, "Present and future robot control development - An industrial perspective," Annu. Rev. Control, 2007.
[59] J. Levinson et al., "Towards fully autonomous driving: Systems and algorithms," in IEEE Intelligent Vehicles Symposium, Proceedings, 2011.
[60] L. Li, B. Yu, C. Yang, P. Vagdargi, R. A. Srivatsan, and H. Choset, "Development of an inexpensive tri-axial force sensor for minimally invasive surgery," in 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2017, pp. 906–913.
[61] N. Zevallos et al., "A Real-time Augmented Reality Surgical System for Overlaying Stiffness Information," in Robotics: Science and Systems, 2018.
[62] N. Zevallos et al., "A surgical system for automatic registration, stiffness mapping and dynamic image overlay," in 2018 International Symposium on Medical Robotics (ISMR), 2018, pp. 1–6.
[63] M. Quigley et al., "ROS: an open-source Robot Operating System," ICRA Workshop on Open Source Software, 2009.
[64] C. S. To, R. Kobetic, J. R. Schnellenberger, M. L. Audu, and R. J. Triolo, "Design of a variable constraint hip mechanism for a hybrid neuroprosthesis to restore gait after spinal cord injury," IEEE/ASME Trans. Mechatronics, vol. 13, no. 2, pp. 197–205, 2008.
[65] M. J. Nandor, S. R. Chang, R. Kobetic, R. J. Triolo, and R. Quinn, "A hydraulic hybrid neuroprosthesis for gait restoration in people with spinal cord injuries," in Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 2016, vol. 9793, pp. 192–202.
[66] S. R. Chang et al., "A muscle-driven approach to restore stepping with an exoskeleton for individuals with paraplegia," J. Neuroeng. Rehabil., vol. 14, no. 1, p. 48, Dec. 2017.
[67] L. Li et al., "Embedded control system for stimulation-driven exoskeleton," in 2018 International Symposium on Medical Robotics (ISMR), 2018, pp. 1–6.
[68] A. M. Dollar and H. Herr, "Lower extremity exoskeletons and active orthoses: Challenges and state-of-the-art," IEEE Trans. Robot., 2008.
[69] T. Gordon and J. Mao, "Muscle atrophy and procedures for training after spinal cord injury," Physical Therapy, 1994.
[70] P. C. Eser, N. de N. Donaldson, H. Knecht, and E. Stüssi, "Influence of different stimulation frequencies on power output and fatigue during FES-cycling in recently injured SCI people," IEEE Trans. Neural Syst. Rehabil. Eng., 2003.
[71] K. J. Hunt et al., "Comparison of stimulation patterns for FES-cycling using measures of oxygen cost and stimulation cost," Med. Eng. Phys., 2006.
[72] S. Kalouche, D. Rollinson, and H. Choset, "Modularity for maximum mobility and manipulation: Control of a reconfigurable legged robot with series-elastic actuators," in SSRR 2015 - 2015 IEEE International Symposium on Safety, Security, and Rescue Robotics, 2016.
[73] S. R. Chang et al., "A Stimulation-Driven Exoskeleton for Walking after Paraplegia," in International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), 2016, pp. 6369–6372.