
Did this stuff get in? Between the Nanoscribe, Protolaser U3, and current 3D printing projects, there’s plenty we can do with rapid integration of polymers and microfluidics for soft complex systems. CGA TODO: Facilities

Summary: consists of an overview, a statement on the intellectual merit of the proposed activity, and a statement on the broader impacts of the proposed activity. Include between 2 and 6 keywords at the end of the overview in the Project Summary (format: Keywords: xxx; yyy at end of overview).

Science center stuff

Fix Dickey word stuff

Robot Skin. The introduction is almost done, so edit this seriously. Our goal is to develop skin that is actually used by real robots. Skin is an integrated system that forms the mechanical interface with the world. It should be rich in sensors and support actuation. We take inspiration from human skin, where many types of sensors are embedded in a mechanical structure that maximizes their performance. There are several new elements of our approach:

• We will create and deploy many types of sensors, including embedded accelerometers, gyros, temperature sensors, vibration sensors, sound sensors, optical sensors sensing nearby objects, and optical sensors tracking skin and object velocity and movement. Previous skin and tactile sensing projects typically focused on one or only a few types of sensors.

• We will optimize the skin mechanics for manipulation and tactile perception. When the needs of manipulation and tactile perception conflict or are unclear, we will focus on optimizing performance on a set of benchmark tasks. Previous tactile sensing projects often placed tactile sensors on bare metal fingers, with little consideration of skin and tissue mechanics.

• We will explore a relatively thick soft skin, and consider soft tissue surrounding internal structure (bones) with a relatively human-scale ratio of soft tissue to bone volume, or structures that are completely soft (no bones/rigid elements).

• We will explore a wide variety of surface textures including arbitrary ridge patterns (fingerprints), hairs, posts, pyramids, and cones. These patterns may vary across the skin to provide a variety of contact affordances.

• We will explore superhuman sensing. For example, we will create vision systems (eyes) that look outward from the skin for a whole body vision system. We will use optical tracking to estimate slipping and object velocity relative to the skin. We will explore embedding ultrasound transducers in the skin to use ultrasound to image into soft materials that are in contact, such as parts of the human body.

• We will explore deliberately creating air and liquid (sweat) flows (both inwards and outwards) for better sensing (measuring variables such as pressure, conductivity, and temperature) and controlling adhesion. We will explore humidifying the air for better airflow sensing, contact management, adhesion control, and ultrasound sensing.

• We will develop materials to make the skin rugged, and methods to easily repair or replace damaged skin.

• We will define a set of benchmark tasks to guide design and evaluation of our work and others’. The tasks include exploring and manipulating rigid and articulated (jointed) objects, and deformable objects such as wire bending, paper folding, screen (2D surface) bending, and working with clay (kneading, sculpting with fingers and tools, and using a potter’s wheel). The system we construct will recognize, select, and manipulate objects among a set of objects (find keys in your pocket, for example). Our most difficult set of benchmarks will be mockups of tasks often found in caring for humans: wiping, combing hair, dressing, moving in bed, lifting, transfer, and changing adult diapers.

• We will explore a range of perceptual approaches, including object tracking based on contact types, forces, and distances; feature-based object recognition using features such as texture, stiffness, damping, and plasticity; feature-based event recognition using spatial and temporal multimodal features such as the frequency content of vibration sensors; and multimodal signature-based event recognition.

• We will explore behavior and control based on explicit object trajectories and force control, discriminant or predicate based policies, and matching learned sensory templates.

1 Research Plan

This section is almost done except for the Generation 3 skin section, which needs to reflect the Proposed Research writeup. Our research plan has two thrusts: A: developing skin and skin sensors, and B: developing and evaluating perception, reasoning, and control algorithms that allow the skin to do desired tasks. We have a three-step plan for developing skin and skin sensors:

A.1 Generation 1: Use off-the-shelf sensors embedded in optical-grade (near-infrared, NIR) silicone. This skin includes optical sensing of marker movement to measure strain in all three directions. We will use range finding sensors to sense objects at up to a 0.5 m distance. We will use embedded IMUs, which include an accelerometer, gyro, magnetometer, and temperature sensor. We will embed piezoelectric material to capture high frequency vibration, and pressure sensors. We will embed induction loops to sense electric fields, and explore imposing local or global (possibly time varying) electric fields to resolve orientation. We will glue hairs or whiskers to piezoelectric crystals to provide mechanical sensing at a (short) distance.
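To make the marker-based strain measurement concrete, here is a minimal sketch (our own illustration with hypothetical function and variable names; the actual Generation 1 skin tracks markers in three dimensions, while this example is 2D for clarity). It fits a deformation gradient to tracked marker displacements by least squares, then extracts the Green strain:

```python
import numpy as np

def green_strain_from_markers(ref_pts, cur_pts):
    """Estimate the in-plane Green strain of a skin patch from tracked markers.

    ref_pts, cur_pts: (N, 2) arrays of marker positions in the undeformed
    and deformed configurations (N >= 3 markers, not collinear).
    """
    # Work relative to the centroids to remove rigid translation.
    X = ref_pts - ref_pts.mean(axis=0)
    x = cur_pts - cur_pts.mean(axis=0)
    # Least-squares fit of the deformation gradient F, with x_i = F X_i.
    F_T, *_ = np.linalg.lstsq(X, x, rcond=None)
    F = F_T.T
    # Green-Lagrange strain: E = 0.5 (F^T F - I).
    return 0.5 * (F.T @ F - np.eye(2))

# Example: a 10% uniaxial stretch along x.
ref = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.]])
cur = ref * np.array([1.10, 1.0])
print(green_strain_from_markers(ref, cur))  # E_xx ~ 0.105, other terms ~ 0
```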

A.2 Generation 2: Integrate current benchtop prototypes into Generation 1 skin. Hyperelastic sensing elements will be composed of soft silicone elastomer embedded with microfluidic channels of a nontoxic liquid metal alloy, e.g., eutectic gallium-indium (EGaIn). Strain, shear deformation, and/or applied surface pressure cause predictable changes in the resistance or capacitance of the embedded liquid metal sensing elements. EGaIn can also be used for capacitive touch measurements. Likewise, we can pattern soft, conductive laminate films to create arrays of capacitive touch pixels (for example, graphene pastes or films separated by elastomer). Conductive elastomers will be used to interface liquid EGaIn sensors and circuits with a flexible printed circuit board populated with rigid microelectronics, including a microcontroller, battery, and off-the-shelf (OTS) RF transmitter or transceiver.
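Why stretching a microchannel gives a predictable resistance change can be seen from a minimal model (our own sketch, standard for liquid-metal strain gauges, not taken from the proposal): treat the channel as an incompressible conductor of resistivity $\rho$, rest length $L_0$, and cross-sectional area $A_0$, so the volume $V = A_0 L_0$ is conserved under stretch:

```latex
R_0 = \frac{\rho L_0}{A_0} = \frac{\rho L_0^2}{V}, \qquad
R(\varepsilon) = \frac{\rho\,[(1+\varepsilon)L_0]^2}{V} = R_0\,(1+\varepsilon)^2, \qquad
\frac{\Delta R}{R_0} = (1+\varepsilon)^2 - 1 \approx 2\varepsilon .
```

Even this crude model predicts a monotonic, repeatable response with a gauge factor near 2, consistent with the predictable resistance changes described above.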

A.3 Generation 3: Develop completely new skin sensing technology using novel concepts, components, and materials. For example, we will explore artificial hair cells with hairs and whiskers attached to five or six axes of force and motion sensing. Instead of liquid metal, these integrated sensors will be composed of insulating and conductive elastomers that detect deformation through changes in electrical capacitance. They will be produced with UV laser micromachining or additive manufacturing through customized 3D printing or two-photon polymerization. As before, the robot skin will interface with a flexible printed circuit containing a microcontroller and power. For Generation 3, the antenna in the transmitter/transceiver will be replaced with a soft, elastically deformable antenna integrated into the skin itself.

We need a unified discussion of the use of imposed and inherent magnetic and electrical fields. At some abstract level the next two paragraphs are doing similar things. I can imagine imposing an AC electric field and using inductive sensors. Move the details of all of this to the proposed work section?

Co-PI Onal is studying the use of Hall effect sensing ICs to detect the deformation of soft materials using an embedded miniature magnet located at a precise location with respect to the Hall element. Our preliminary results have demonstrated accurate and high-bandwidth curvature measurements for bending elements. We propose to extend this work to develop distributed 6-D force/moment measurements using an array of multiple magnet-Hall pairs.

For touch and force sensing we will explore distributing nanoparticles in a gel. When pressed, the particles form a local percolated network that changes the local conductivity of the gel. In biology, touch generates an action potential that signals to the brain. We propose to use soft materials that generate a small potential when deformed. For example, droplets of EGaIn form an oxide skin that protects the underlying metal, but when pressed, this skin cracks and exposes the metal. This could, in principle, be harnessed to generate a potential. I may even have some prelim results to dig up. We can use hydrogels in place of EGaIn for touch sensing because the gels are transparent.

We plan to develop the “off the shelf” skin in Year 1, the EGaIn microchannel-based skin in Year 2, and the third-generation skin in Year 3. Skin sensing technology needs to remain functional under extreme deformation, and the skin with embedded sensors must remain mechanically suitable for desired tasks.

The development and evaluation of perception, reasoning, and control algorithms for our three skin systems will use a series of test setups:

B.1 Evaluate a patch of skin on the laboratory bench.

B.2 Evaluate the skin on a simple hand.

B.3 Evaluate the skin on our current lightweight arm and hand.

B.4 Evaluate the skin on our Sarcos Primus humanoid (whole body sensing).
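Returning to the Generation 3 magnet-Hall sensing idea above: as a hedged illustration (our own sketch, not co-PI Onal’s implementation; the dipole moment value is an assumption), an on-axis point-dipole model lets a single Hall reading be inverted to a magnet-sensor separation:

```python
import numpy as np

MU0 = 4e-7 * np.pi   # vacuum permeability (T*m/A)
M = 0.01             # assumed dipole moment of the embedded magnet (A*m^2)

def axial_field(d):
    """On-axis field of a point dipole at distance d (meters), in tesla."""
    return MU0 * M / (2.0 * np.pi * d**3)

def distance_from_field(B):
    """Invert the dipole model to estimate magnet-sensor separation."""
    return (MU0 * M / (2.0 * np.pi * B)) ** (1.0 / 3.0)

d_true = 0.005                 # 5 mm separation
B = axial_field(d_true)        # what an ideal Hall IC would read
print(distance_from_field(B))  # recovers ~0.005 m
```

An array of such magnet-Hall pairs, each constraining one distance (or, with 3-axis Hall ICs, a full field vector), is the kind of measurement that would be fused into the proposed distributed 6-D force/moment estimates.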

2 Why This Matters

This section needs to be edited down to be more concise and actually flow. We are motivated by the need to build better assistive robots and environments for people with disabilities, older adults trying to live independently, and people with stroke, ALS, spinal cord injury, and traumatic brain injury. These people need help. It is a tremendous burden on caregivers (typically a spouse) to provide 24/7 care, and many people would rather have a machine change their diapers than a stranger. Here are several ways better robot skin is on the critical path:

Figure 1. Testbeds. Left: lightweight arm and hand. Right: SARCOS Primus hydraulic humanoid.

• After decades of research, robot hands are nowhere close to human levels of performance. One huge problem is terrible sensing and perception. Better robot skin would make a huge difference.

• We also plan to apply our skin to the entire arm of our arm testbed, and the entire body of our Sarcos humanoid robot. High quality whole body tactile sensing and perception would make a huge difference here as well.

• We also want to use our robot skin to instrument everyday objects and environments, such as table tops, seats, cups, utensils, and kitchen tools, so we can completely sense what is happening in an environment. This information can be used to guide more effective robot behavior and provide better service to humans by recognizing their actions and activities and predicting their intentions.

Humans greatly depend on tactile sensing during physical interactions. It is well established that a temporary loss of this modality results in significant losses in functionality, especially during grasping tasks [41]. In robotics, however, reliable integration and use of a sense of touch is severely lacking compared to human levels or to other sensing modalities such as vision. For robotics to deliver on its promise to intimately interact with human users safely and adaptively in unstructured environments, developing the robotic counterpart of a skin covering the entire body is a critical need.

In contrast to existing industrial robots, assistive healthcare robots must engage in physical contact with humans. For this contact to be safe and comfortable, the robot must be padded with material that matches the mechanical properties of natural human tissue. Any mechanical “impedance mismatch” could lead to stress concentrations and kinematic constraints that could cause bodily immobilization or injury. To preserve the compliance of the underlying padding, the robot skin must be soft and elastically deformable. Stretchable functionality is also required to attach the skin to non-developable surfaces and moving joints. While rigid electronics can be buried beneath the padding, sensors and circuit wiring must be composed primarily of elastically deformable conductive materials that remain electrically functional when stretched with strains on the order of 10-100%. Flexibility is not enough: such electronics should be composed of insulating elastomer embedded with conductive elastomer traces, microchannels of conductive fluid, or meshed/wavy/coiled metal wiring. Moreover, these stretchable circuits require a reliable interface with rigid electronics for multiplexing, processing, and wireless communication.

We focus on a crucial component of robots working with people: physical human-robot interaction. As a driving application we will develop robots that can help physically manipulate people in hospital, nursing home, and home care contexts. We envisage cooperative relationships between robots and caregivers, and robots and patients, where each participant leverages their relative strengths in the planning and performance of a task. One goal is to reduce injuries to caregivers. A second goal is to help patients recover faster. A third goal is to provide better “customer service” and autonomy to patients, particularly in rehabilitation settings. We will explore learning complex policies by watching humans execute them or by programming them directly, and then refining those policies using optimization and learning.

We will test our ideas by implementing them on our Sarcos humanoid robot. The humanoid will work in concert with at least one human (simulating a human caregiver) as part of a human-robot team. Tasks to be explored include repositioning patients in bed, helping them get out of bed and into a wheelchair, moving them from a wheelchair to another chair, toilet, or car seat, helping them exercise, picking a fallen patient up off the floor, and handling accidents during any of these tasks. Our basic research will lay the foundation for several demonstration systems built on open platforms for transfer to industry and for use in clinical studies.

Our computer vision techniques will allow accurate tracking of patients and caregivers and their body parts at a distance, using real-time surface tracking based on structured light techniques. Our ultrasound-based sensing will allow tracking of a patient’s soft tissue and skeleton when the robot contacts the patient, and will provide volumetric perception of the inside of the patient as the manipulation progresses. EMG measurements and on-patient accelerometers and gyros will be part of an early warning system for patient falls and other failures, and will allow the robot to cooperate more effectively with instrumented caregivers by reacting more quickly to anticipated human movement. We will develop new inherently safe robot designs using soft (non-rigid) structures and mechanisms with smooth, pliable, and reactive surfaces, such as inflatable robots. We will develop new control techniques that refine existing human strategies and invent new ones, based on cognitive optimization: general-purpose systems that learn to maximize reward or utility, using new, more powerful model-based forms of adaptive/approximate dynamic programming (ADP) and reinforcement learning. We will develop efficient multiple-model methods for designing robust nonlinear and time-varying task controllers for physical human-robot interaction where the robot, the patient, and possibly other caregivers all participate in the interaction.

Physically manipulating people is an important area for health care robotics. Robots are useful for dangerous tasks. Physically manipulating people in a care facility leads to injuries and occupational disability, and is statistically one of the most harmful of human occupations [????? ]. Many care facilities are instituting “no lift” policies, in which caregivers are no longer allowed to do large-force physical manipulation of patients, but are required to use mechanical aids. There is currently a shortage of nursing staff for hospitals and nursing homes, compounded by injury and retirements due to disability.
We also want to help older couples stay in their own homes longer, when one member has limited movement and the spouse is typically not very strong but would be cognitively able to operate a transfer aid. Unfortunately, so far robotics has largely addressed this problem in a narrow and probably unacceptable way, using a fork-lift approach to manipulating people that leaves little room for either caregivers or the patient to help. We believe an approach that is more synergistic with current practice will be more useful, allow the use of weaker, cheaper, and safer robots, and be more acceptable to patients and caregivers.

A broader goal of the proposal is to enable robots to work with humans in a real world filled with deformable objects. Current robotics research focuses on rigid robots manipulating rigid objects. We see applications of our work in jobs that currently use teams of physically cooperating humans: construction, rescue, warehouse logistics, repair of large objects, and entertainment such as sports. We want to minimize tissue deformation during manipulation, and especially to minimize tissue shear strain. Consider a wet paper grocery bag full of soft items and an off-center heavy can (Figure 2). Supporting the bag at the center (dashed arrow) leads to shearing forces on the paper bag, and the bag will probably fail.

Figure 2. Left: Grocery bag example. Solid arrow is better force to apply; dashed arrow causes more shear. Right: Rolling person in bed example. What is shown is an axial section at the level of the shoulders (a cross section perpendicular to the spine that includes both shoulders). In this case the solid yellow arrow that pushes the scapula onto the rib cage rather than sideways is a better force to apply. The dashed arrows cause more shear strain in the soft tissue. We also need to consider support forces affected by robot actions (the green solid arrow).

Supporting the bag under the load concentration (the solid arrow) generates less shearing force on the bag. The right side of Figure 2 shows a schematic example of rolling a human in a bed. Different forces can be applied to a cross section of the human at the shoulder. The force application marked by the solid arrow pushes tissue onto the scapula and pushes the scapula onto the rib cage, distributing forces and leading to mostly compression of soft tissue, while the forces indicated by the dashed arrows lead to more tissue shear.

Future robots are expected to free human operators from difficult and dangerous tasks requiring dexterity in various environments. Prototypes of these robots already exist for applications such as extra-vehicular repair of manned spacecraft and robotic surgery, in which accurate manipulation is crucial. Ultimately, we envision robots operating tools with levels of sensitivity, precision, and responsiveness to unexpected contacts that exceed the capabilities of humans, making use of numerous force and contact sensors on their arms and fingers.

3 Background: Skin and Skin-based Sensing

This section is currently a collection of paragraphs, which need to be edited down to a concise review of what is already done by others. Say something about human sensing: sensor types, resolution, etc.

Compared to even the simplest of animals, today’s robots are impoverished in terms of their sensing abilities. For example, a spider can have as many as 325 mechanoreceptors on each leg [13, 51], in addition to hair sensors and chemical sensors [12, 131]. Mechanoreceptors such as the slit sensilla of spiders [13, 27] and campaniform sensilla of insects [90, 134] are especially concentrated near the joints, where they provide information about loads imposed on the limbs, whether due to regular activity or unexpected events such as collisions. By contrast, robots generally have a modest number of sensors, often associated with actuators or concentrated in devices such as a force sensing wrist. (For example, the humanoid robot has 42 sensors in its hand and wrist module [28].) As a result, robots often respond poorly to unexpected and arbitrarily located impacts. The work in this proposal is part of a broader effort aimed at creating lightweight, rugged appendages for robots that, like the exoskeleton of an insect, feature embedded sensors so that the robot can be more aware of both anticipated and unanticipated loads in real time.

Part of the reason for the sparseness of force and touch sensing in robotics is that traditional metal and semiconductor strain gages are tedious to install and wire. The wires are often a source of failure at joints and are receivers for electromagnetic noise. The limitations are particularly severe for force and tactile sensors on the fingers of a hand.

3.1 Robot and other types of artificial skin

Siegfried Bauer provides a useful review of robot skin [1][2]. The development of highly deformable artificial skin with contact force (or pressure) and strain sensing capabilities [109] is a critical technology for wearable computing [89], haptic interfaces, and tactile sensing in robotics. With tactile sensing, robots are expected to work more autonomously and be more responsive to unexpected contacts by detecting contact forces during activities such as manipulation and assembly. Application areas include haptics [104], humanoid robotics [142], and medical robotics [123].

Different approaches for sensitive skin [85] have been explored. One of the most widely used methods is to detect structural deformation with strain sensors embedded in an artificial skin. Highly sensitive fiber optic strain sensors have been embedded in a plastic robotic finger for force sensing and contact localization [108, 115] and in surgical and interventional tools for force and deflection sensing [61, 112]. Embedded strain gauges have been used in a ridged rubber structure for tactile sensing [156]. Detecting capacitance changes with embedded capacitive sensor arrays [68] is another approach for tactile sensing, as shown in a human-friendly robot for contact force sensing [148, 120]. Embedding conductive materials in a polymer structure is also a popular method for artificial skin, with examples including nanowire active-matrix circuit integrated artificial skins [144], conductive polymer-based sensors [95], solid-state organic FET circuits [65], and conductive fluid embedded silicone robot fingers [153]. In spite of their flexibility, the above sensing technologies are not truly stretchable and cannot remain functional at large strains. For example, fiber optic sensors have upper strain limits of approximately 1-3% for silica [78] and 10% for polymers [119], and typical strain gauges cannot tolerate strains higher than 5% [124].

Stretchable skin-like sensors have been proposed using different methods. Strain sensing fabric composites for hand posture and gesture detection have been developed using an electrically conductive elastomer [83]. A stretchable tactile sensor has also been proposed using polymeric composites [149]. A highly twistable tactile sensing array has been made with stretchable helical electrodes [34]. An ionic fluid has been used with an elastomer material for measuring large strains [36]. Electroconductive elastic rubber materials have been explored for measuring displacement and control of McKibben pneumatic artificial muscle actuators [152, 76, 77]. However, these sensors are not able to remain functional at strains over 100%.

3.2 Sensing based on conductive liquids in channels

A novel and specialized sensor technology for soft robots is the use of liquid metals embedded in soft substrates, a technology stemming from the mercury-in-rubber strain gauges of the 1960s [58]. Dimensional changes due to deformations in the substrate are reflected as resistance changes in the liquid metal. Recent work incorporates fluidic channels inside silicone rubber filled with eutectic gallium-indium (EGaIn) to measure joint angles of a finger [73] and for a tactile sensing array [74]. A short survey of sensors built with EGaIn is given in [150].

We focus on a particular type of conductive liquid material, eutectic gallium-indium (EGaIn) [45], which is finding increasing applications in soft wearable robots [111], flexible sensors [87], and stretchable electronics [105, 75]. EGaIn is an alloy of gallium and indium that maintains a liquid state at room temperature. Due to its high surface tension and high electrical conductance, EGaIn is an ideal conductor for a soft sensor.

We have demonstrated a capability of fabricating highly deformable artificial robotic skin with multimodal sensing, capable of detecting strain and contact pressure simultaneously, using embedded microchannels filled with EGaIn [114]. Our previous prototypes were able to decouple multi-axis strains [110], shear forces [151], and contact pressure [114, 117] at strains over 100%, as shown in Figure 3. Although there have been some efforts on developing robotic skins and structures that can detect multiple types of stimuli [49, 141], highly deformable and stretchable materials have not been fully explored for multi-modal sensing.

Figure 3. Soft artificial skin sensors. (a) Multi-axis strain and pressure sensing skin. (b) Soft multi-axis force sensor.

Mercury, francium, cesium, gallium, and rubidium are all elemental metals that are liquid below or near room temperature. Cesium and rubidium are both explosively reactive and francium is radioactive; these materials are therefore not practical. By process of elimination, Hg and Ga (and their alloys) remain. Hg has two principal disadvantages: (1) it is toxic (exposure can take place through inhalation of its vapor or absorption via the skin) [3, 4], and (2) it has a very large surface tension (>400 mN/m), which generally limits liquid Hg to spherical shapes that have limited utility.

Ga is an attractive alternative to Hg. Ga has low toxicity [5] and negligible vapor pressure (which means Ga can be handled without worry of inhalation). Ga has a melting point of 30◦C, but it can be supercooled significantly [6]. The surface of the metal reacts rapidly with oxygen to form a native oxide layer. This oxide “skin” is thin (0.5-3 nm) [7, 8, 9, 10], passivating [8, 11], and composed of gallium oxide [12]. Unlike Hg, the oxide skin allows the liquid gallium to be deformed into stable non-equilibrium shapes; Figure 4 shows photos of the metal in non-spherical shapes. The oxide skin is strong enough that the metal can be 3D printed (please consider watching “3D Printing of Liquid Metal at Room Temperature” on YouTube to understand the properties of the material and our excitement about it) [13]. The oxide skin can be removed easily using acid (pH<3), base (pH>10), or electrochemistry [14]. In each case, the metal beads up in the absence of the oxide due to the metal’s large surface tension, as shown in Figure 4.

Various metals (e.g., indium and tin) alloy with gallium and depress the melting point below room temperature [12, 15]. This proposal focuses primarily on EGaIn, the eutectic alloy of 75 wt% gallium and 25 wt% indium [16, 17, 18], which was chosen to ensure the metal stays in the liquid phase during the room temperature studies. Despite the presence of indium, the “skin” is composed primarily of oxides of gallium [19], and therefore the conclusions from our studies should apply to other alloys based on gallium. The low viscosity (about twice that of water) [20, 21, 22], metallic conductivity [23], and presence of the oxide layer make EGaIn a promising material for a variety of applications. A disadvantage of Ga is cost (roughly $0.5/g), although it is not prohibitively expensive for the small volumes needed here.

Recently, our group and others have harnessed these oxide-coated metals to enable electronic and optical devices composed of liquid metal, which offers some unique advantages over solid metals [24], including the ability to make flexible/stretchable devices [25], switches [26], sensors [27], antennas [28, 29, 30, 31], microfluidic components (e.g., electromagnets and heaters) [32, 33], energy harvesters [34], batteries [35], soft robotics, and soft electrodes [36].
Recently, researchers in the Soft Machines Lab (PI: Majidi) have developed a fabrication method to simultaneously pattern insulating elastomers, conductive elastomers, and thin films of EGaIn liquid (Figure 5) [84]. This versatile technique allows soft-matter circuits and sensor arrays to be produced in minutes without the need for labor-intensive casting and injection-filling.

Figure 4. Examples of applications from the Dickey group: 3D printed structures [13], stretchable wires [37] and antennas [27, 28], self-healing circuits [38], microfluidic electrodes [33], soft memory and diodes [39, 40], and microdroplets [41]. Consider visiting our YouTube channel to see some of these applications in action.

3.3 Magnetic sensing

Hall elements are compact, accessible, and inexpensive. The quick response and accuracy of Hall elements for traditional robotic applications have previously been verified for joint angle proprioception [31] as well as tactile exteroception [146]. Contact-free sensing capabilities are highly desirable for soft robotics research, and a unique advantage of our wireless magnetic field measurement approach is its negligible effect on material stiffness.

3.4 Resistive flex sensors

Resistive flex sensors offer a simple and compact solution for embedded sensing in soft robotics. Nevertheless, we concluded in a preliminary study (reported elsewhere) that they suffer from dynamic artifacts, such as delayed response and drift.

3.5 Embedded fiber optic strain sensing

Fiber Bragg grating sensing is a powerful solution for deformable bodies, used successfully for force measurements on a soft finger [107] and for shape reconstruction [158]. Although this technology facilitates highly accurate curvature measurements using a thin and flexible optical fiber, the required supporting hardware precludes embedded operation, especially for tetherless mobile robots with many degrees of freedom.

Figure 5. Soft-matter electronics produced with CO2 laser machining [84]. (a) Resistive tactile sensor composed of laser-patterned conductive poly(dimethylsiloxane) (cPDMS). (b) Sensor array composed of overlapping strips of cPDMS insulated by non-conductive elastomer. (c) Laser-patterned inclusions of PEDOT:PSS embedded in PDMS. (d) Laser-patterned PEDOT:PSS embedded in polyurethane. (e) Laser-patterned eutectic gallium-indium alloy (EGaIn) embedded in PDMS. (f) Integration of a serpentine EGaIn wire and cPDMS electrodes in a PDMS-sealed circuit. (g) LED-embedded circuit composed of laser-patterned cPDMS and EGaIn. (Scale bar: 1 cm.)

Various groups have explored optical fibers for tactile sensing, where the robustness of the optical fibers, their immunity to electromagnetic noise, and the ability to process information with a CCD or CMOS camera are advantageous [43, 62, 86]. Optical fibers have also been used for measuring bending in the fingers of a glove [59] or other flexible structures [42], where the light loss is a function of the curvature. In addition, a single fiber can provide a high-bandwidth pathway for carrying tactile and force information down the robot arm [9].

We focus on a particular class of optical sensors, fiber Bragg grating (FBG) sensors, which are finding increasing applications in structural health monitoring [4, 71, 80] and other specialized applications in biomechanics [33, 44] and robotics [108, 116]. FBG sensors have been attached to or embedded in metal parts [50, 81] and in composites [140] to monitor forces, strains, and temperature changes. FBG sensors are particularly attractive for applications where immunity to electromagnetic noise, small size, and resistance to harsh environments are important. Examples include space or underwater robots [47, 52, 143], medical devices (especially for use in MRI fields) [113, 159], and force sensing on industrial robots with large motors operating under pulse-width modulated control [50, 160].

FBG sensors reflect light with a peak wavelength that shifts in proportion to the strain they are subjected to. The sensitivity of regular FBGs to axial strain is approximately 1.2 pm/µε at a 1550 nm center wavelength [26, 66]. With an appropriate FBG interrogator, very small strains, on the order of 0.1 µε, can be measured. In comparison to conventional strain gauges, this sensitivity allows FBG sensors to be used in sturdy structures that experience modest stresses and strains under normal loading conditions. The strain response of FBGs is linear with no indication of hysteresis at temperatures up to 370°C [91] and, with appropriate processing, as high as 650°C [106]. Multiple FBG sensors can be placed along a single fiber and optically multiplexed at kHz rates.
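As a worked example using the numbers above (our own arithmetic): with sensitivity $k \approx 1.2$ pm/µε at a 1550 nm center wavelength, resolving strains on the order of 0.1 µε requires an interrogator that can resolve wavelength shifts of about 0.12 pm:

```latex
\Delta\lambda = k\,\varepsilon
\quad\Longrightarrow\quad
\varepsilon = \frac{\Delta\lambda}{k}
            = \frac{0.12~\mathrm{pm}}{1.2~\mathrm{pm}/\mu\varepsilon}
            = 0.1~\mu\varepsilon .
```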

3.6 Skin-based Perception, Reasoning about Contact and Interaction, and Control: Background

Scott: Can we focus this more closely on learning contact and physical interaction tasks? Need a paragraph explaining why these things are relevant. Ex: active perception is important to tactile sensing because sensors must be pressed against and moved across objects.

Learning from Demonstration. Learning from demonstration (LfD) [8, 25] is an approach to robot programming in which users demonstrate desired skills to a robot. Ideally, nothing is required of the user beyond the ability to demonstrate the task in a way that the robot can interpret. Example demonstration trajectories are typically represented as time-series sequences of state-action pairs that are recorded during the teacher’s demonstration. The set of examples collected from the teacher is then often used to learn a policy (a state to action mapping) or to infer other useful information about the task that allows for generalization beyond the given demonstrations. A variety of approaches have been proposed for LfD, including supervised learning [11, 32, 35, 54, 46, 3], reinforcement learning [133, 1, 161, 72], and behavior-based approaches [99]. Gienger et al. [53] segment skills based on co-movement between the demonstrator’s hand and objects in the world and automatically find appropriate task-space abstractions for each skill. Their method can generalize skills by identifying task frames of reference, but cannot describe skills like gestures or actions in which the relevant object does not move with the hand. Kjellstrom and Kragic [70] eschew traditional policy learning, and instead use visual data to watch the demonstrator’s hand to learn the affordances of objects in the environment, leading to a notion of object-centric skills. Ekvall and Kragic [48] use multiple examples of a task to learn task constraints and partial orderings of primitive actions so that a planner can be used to reproduce the task.
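To make the policy-learning formulation concrete, here is a minimal behavioral-cloning sketch (our own illustration, not a system from the cited work; the demonstration data here are simulated): demonstrations are recorded state-action pairs, and a supervised regressor serves as the learned policy.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)

# Simulated demonstrations: states (e.g., joint angles plus tactile
# readings) paired with the teacher's actions (e.g., commanded velocities).
states = rng.uniform(-1, 1, size=(200, 4))
actions = np.tanh(states @ rng.normal(size=(4, 2)))  # stand-in teacher policy

# "Learning a policy" is then supervised regression from state to action.
policy = KNeighborsRegressor(n_neighbors=5).fit(states, actions)

new_state = rng.uniform(-1, 1, size=(1, 4))
print(policy.predict(new_state))  # action the robot would execute
```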

Inverse Reinforcement Learning. A special case of learning from demonstration is inverse reinforcement learning (IRL) [97], in which the agent tries to infer an appropriate reward or cost function from demonstrations. Rather than infer a policy directly from the demonstrations, the inferred cost function allows the agent to learn and improve a policy from experience to complete the task implied by the cost function. IRL techniques typically model the problem as a Markov decision process (MDP) and require an accurate model of the environment [1, 125, 96], but some recent methods have been proposed to circumvent this requirement by creating local control models [145] or by using an approach based on KL-divergence [30]. Maximum entropy methods have also been suggested as a way to deal with ambiguity in a principled probabilistic manner [161, 69]. A central topic of this proposal has a very similar goal to that of IRL: classifying task success and failure is a simple case of inferring a reward function, in which only the goal state is rewarded. However, all the aforementioned IRL methods assume that features are available that allow a task-appropriate reward function to be learned. By contrast, this proposal focuses on tasks that have complex goals, for which informative features are not readily available. To discover these features, active manipulation and perception is required to reveal kinematic relationships between objects in the environment. Several researchers have also previously examined the problem of failure detection or task verification in various contexts. Pastor et al. [118] use Dynamic Movement Primitives (DMPs) [60] to acquire motor skills from demonstrations of a complex billiards shot. Statistics are then collected from the demonstrations to predict the outcome of new executions of the same skill, allowing early termination if failure seems likely. Plagemann et al. [122] use a particle filter with Gaussian process proposal distributions to model failures (collisions) in a robot navigation task with noisy observations. Niekum et al. [100] learn a finite-state representation of a task and a set of transition classifiers from demonstrations and interactive corrections.

These are then used to classify new observations during a task and to initiate appropriate recovery behaviors when necessary.

Active Perception. The use of active perception has been studied in robotics [63], computer vision [147], and computer graphics [121]. Early work by Triggs and Laugier [147] showed that a camera placement planner can be designed to allow for optimal visibility while also taking into account the physical constraints of the robot and the environment. The approach, however, requires that physical regions important to a task are known a priori. Important physical regions are not obvious in many tasks, making this approach difficult to scale up to more general demonstrated tasks. Estimating optimal placement of cameras for machine vision has been proposed for tasks such as inspection [57], object recognition [135], and surveillance [29]. In many cases, a significant amount of prior knowledge of objects (e.g., location of barcodes, position of visual markers, 3D models, and motion dynamics) is used to optimize the camera viewing angle. One drawback of declarative models, however, is that it is difficult to scale up to a larger number of objects, since discriminative features must be determined manually for every object. Declarative models are also limiting when a robot must be able to adapt to new tasks: the system must be able to learn with novel objects, configurations, and kinematics.

Interactive Scene Understanding. More recently, Katz et al. [63, 64] proposed an efficient method for recovering articulated objects (i.e., multiple connected rigid objects) through the use of structure from motion, where the motion is induced by a robot actively interacting with the object. Furthermore, they show that through a series of interactions, the kinematic state of the connected parts can be classified as prismatic, revolute, or disconnected. In work by Sturm et al. [139], more complex kinematic relationships between parts are inferred using Gaussian process models and local linear embedding. However, in both works the perception task was significantly simplified by making use of fiducial markers to identify parts. In general, the parts of an object along with their kinematics must also be inferred by an LfD system.

Discovering Discriminative Visual Features. Recently, data-driven approaches using large amounts of data have shown that it is possible to automatically search for discriminative visual features with little a priori knowledge about the perception task. Work by Le et al. [79] has shown that high-level visual features for image search can be discovered by using deep networks of autoencoders that build only on basic pixel intensities for low-level features. The work of Singh et al. [132] showed that large image datasets can be mined for mid-level visual representations by searching for image patches that occur frequently but are also distinct from the rest of the visual world. Our proposed work will extend this insight to LfD by using large amounts of image data collected from exploration to find discriminative mid-level visual representations that correspond to goal success or failure.

3.7 Results from Prior NSF Support Related to this Work

Chris, Michael, and Scott need to fill in missing sections. The NSF has been rejecting grants without review recently for not following the requested format.

If any PI or co-PI identified on the project has received NSF funding (including any current funding) in the past five years, information on the award(s) is required, irrespective of whether the support was directly related to the proposal or not. In cases where the PI or co-PI has received more than one award (excluding amendments), they need only report on the one award most closely related to the proposal. Funding includes not just salary support, but any funding awarded by NSF. The following information must be provided:

(a) the NSF award number, amount, and period of support;

(b) the title of the project;

(c) a summary of the results of the completed work, including accomplishments, supported by the award. The results must be separately described under two distinct headings, Intellectual Merit and Broader Impacts;

(d) the publications resulting from the NSF award;

(e) evidence of research products and their availability, including, but not limited to: data, publications, samples, physical collections, software, and models, as described in any Data Management Plan; and

(f) if the proposal is for renewed support, a description of the relation of the completed work to the proposed work.

Co-PIs Onal, Park, and Majidi have had no prior NSF support.

Atkeson: (a) NSF award number: EEC-0540865; amount: $29,560,917; period of support: 6/1/06 - 5/31/15. (b) Title: NSF Engineering Research Center on Quality of Life Technology (PI: Siewiorek). (c) Summary of Results: During the period 6/2010-5/2013, this NSF Engineering Research Center supported a research project on soft robotics (lead: C. Atkeson). We report only on this part of the award.

Intellectual Merit.

Broader Impacts.

Development of Human Resources. The project trained one graduate student, who presented his results at conferences (...) and in journals (...). The student is now doing a postdoc at XXX with YYY.

(d) Publications resulting from this NSF award: [????? ].

(e) Other research products: www.cs.cmu.edu/~cga/bighero6, www.cs.cmu.edu/~cga/soft

(f) Renewed support. This proposal is not for renewed support.

Michael: You only need to do one. Dickey: (a) NSF award number: ECCS-0925797; amount: $341k; period of support: 9/2009-8/2013. (b) Title: Stretchable, Tunable, Self-Healing Micro-Fluidic Antennas. (c) Summary of Results:

Intellectual Merit. This was a collaborative project whose objective was to study and develop new hybrid antenna systems consisting of highly stretchable, low-loss radiating/receiving antennas formed using microfluidic technology. See below for broader impact. Development of Human Resources.

(d) Publications resulting from this NSF award: (e) Other research products: None. (f) Renewed support. This proposal is not for renewed support.

Dickey: (a) NSF award number: CMMI-0954321; amount: $400k; period of support: 4/2010-3/2015. (b) Title: CAREER: Understanding and Controlling the Surface Properties of a Micromoldable Fluid Metal. (c) Summary of Results: Intellectual Merit. This project focuses on three aims that are distinct from this proposal: (1) quantifying how the oxide skin ruptures; (2) studying thermal evaporation onto the liquid metal (possible because of its low volatility) to form metallic skins; and (3) clarifying what happens at the interface between the metal and the substrate during injection into microfluidic channels in air.

Development of Human Resources.

Broader Impacts. NSF funding has resulted in 3 patent applications and 23 papers in 5 years in high impact journals such as PNAS, Advanced Materials, Advanced Functional Materials, Nature Communications, Applied Physics Letters, and Lab on a Chip. The work has produced three patents, and two companies have licensed inventions resulting from this work. The research has been highlighted internationally in 100 media outlets including Nature, MIT Technology Review, NY Times, BBC, The Economist, MSNBC, Forbes, Wired, Chemical and Engineering News, Chemical Engineering Progress, and US News & World Report. Our group has created YouTube video supplements to our papers that have received 2 million hits in the past two years (these videos cite NSF support). In the past five years, the PI and graduate students have presented relevant work at 30 seminars and 60 conferences/symposia. The PI also created a semi-permanent physical display for the NC State Joyner Visitor Center to describe and disseminate this research; this center is visited by thousands of students each year. The PI and graduate students associated with these projects have participated in outreach activities (engineering camps for middle and high school students, and other local outreach programs), created a blog (Common Fold), and have worked closely with NC State's Engineering Place to increase the impact and dissemination of the modules. Dickey has integrated graduate, undergraduate, and high school students into these research efforts to create a research team composed of diverse students. Although the outreach aims of the current proposal build on the outreach program initiated by the CAREER award, the scientific objectives are completely distinct and only share the common focus on liquid metal. (d) Publications resulting from this NSF award: (e) Other research products: None. (f) Renewed support. This proposal is not for renewed support.

Niekum: (a) NSF award number: IIS-1208497; amount: $499,199; period of support: 10/1/2012 - 9/30/2015. (b) Title: NRI-Small: Multiple Task Learning from Unstructured Demonstrations. (c) Summary of Results:

Intellectual Merit. This project addresses the problem of learning complex, multi-step tasks from natural, unstructured demonstrations. Three main capabilities are identified as necessary for skill learning and reuse: a parsing mechanism to break task demonstrations into simpler components, the ability to recognize repeated skills within and across demonstrations, and a mechanism that allows for skill policy improvement from practice. Some of these issues have been addressed individually in previous research efforts, but no system has jointly addressed these problems in an integrated, principled manner. This project builds on cutting-edge methods from Bayesian nonparametric statistics, reinforcement learning, and control theory, with the goal of creating a deployment-ready learning from demonstration system that transforms the way that experts and novices alike interact with robots.

Broader Impacts. This project offers a potential bridge to a future generation of cooperative robots that would transform the home and workplace. Additionally, a simple robot programming interface will expedite and extend the range of research robotics. This project also strengthens interdisciplinary ties between computer science and several other disciplines, including neuroscience and the study of human cognitive development, education, and intelligence. Several ROS packages have already been released as a result of this work that have impact well beyond the scope of this research alone: a package for learning and generating motions from Dynamic Movement Primitives (http://wiki.ros.org/dmp); a package for integrating Kinect data with AR-tag detection for object recognition and pose estimation (http://wiki.ros.org/ar_track_alvar); and a package for performing supervised classification using SVM and other common approaches (http://wiki.ros.org/ml_classifiers). In particular, we are aware of at least 15 separate groups in academia and industry that are using ar_track_alvar for robot perception. ROS-independent code has also been released for Bayesian nonparametric segmentation of demonstration data, and is currently being used by several other research groups that we are aware of.

Development of Human Resources:

(d) Publications resulting from this NSF award: [102, 100]. (e) Other research products: ROS stuff. (f) Renewed support. This proposal is not for renewed support.

4 Proposed Research

I am trying to get the order and content roughly right in this section. This section expands on Section 1. Unfortunately, we only have space to describe highlights of the proposed research.

4.1 Example Tasks [Years 1, 2, and 3]

[Year 1] The first set of benchmark tasks essentially applies sensory psychophysics to individual sensors embedded in a patch of skin. How sensitive is the sensor? What are its spatial and temporal properties? How does an array or grid of sensors respond? (A sketch of such a sensitivity measurement follows below.)

[Years 1 and 2] To evaluate integrated skin systems, we will use the hand, lightweight arm, and humanoid testbeds to evaluate a skin system’s ability to explore and manipulate rigid and articulated (jointed) objects, and deformable objects such as wire bending, paper folding, screen (2D surface) bending, and working with clay (kneading, sculpting with fingers and tools, and using a potter’s wheel).

[Years 1 and 2] We will also benchmark the ability of a skin system mounted on the hand, arm, and full humanoid to recognize, select, and manipulate objects among a set of objects (find keys in your pocket, for example).

[Years 1, 2, and 3] Our most difficult set of benchmarks will be mockups of tasks often found in caring for humans: wiping, combing hair, dressing, moving in bed, lifting, transfer, and changing adult diapers.
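As an illustration of the Year 1 single-sensor benchmarks (a hypothetical sketch; the linear sensor model, gain, and noise level are our own assumptions), one simple sensitivity measure is the smallest stimulus whose expected response exceeds the sensor’s measured noise floor:

```python
import numpy as np

rng = np.random.default_rng(1)

def detection_threshold(gain, noise_std, d_prime=1.0):
    """Smallest stimulus giving signal-to-noise ratio d_prime,
    assuming a linear sensor model: reading = gain * stimulus + noise."""
    return d_prime * noise_std / gain

# Characterize the noise floor from repeated zero-stimulus readings.
baseline = rng.normal(0.0, 0.02, size=1000)  # simulated idle readings (V)
noise_std = baseline.std()

gain = 0.5  # assumed sensor gain (V/N)
print(f"detection threshold ~ {detection_threshold(gain, noise_std):.3f} N")
```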

4.2 Optimizing Skin Mechanics [Years 2 and 3]

We need a discussion of how to find appropriate skin mechanics (what ridges or fingerprints should we have, etc.), and how we make skin work around joints (knuckles, palm joints, elbow, and knee are easy since they are essentially 1D; 2D joints are harder and require stretch?). We will explore shaping multiple types of soft materials into layers, flexures, creases, expansion joints, etc., as well as embedding multiple types of individual fibers and woven fabrics with different stiffnesses and damping. We will optimize the skin mechanics for manipulation and tactile perception. When the needs of manipulation and tactile perception conflict or are unclear, we will focus on optimizing performance on a set of benchmark tasks. We will explore a relatively thick soft skin, and consider soft tissue surrounding internal structure (bones) with a relatively human-scale ratio of soft tissue to bone volume, or structures that are completely soft (no bones/rigid elements). We will explore a wide variety of surface textures including arbitrary ridge patterns (fingerprints), hairs, posts, pyramids, and cones. These patterns may vary across the skin to provide a variety of contact affordances.

4.3 Perception [Years 1, 2, and 3]

A transformation we hope to lead in skin-based robot perception is the combination of sensing at a distance; contact-based position, movement, and force sensing; and volumetric perception of the interior of manipulated soft objects. We will explore a range of perceptual approaches, including object tracking based on contact types, forces, and distances; feature-based object recognition using features such as texture, stiffness, damping, and plasticity; feature-based event recognition using spatial and temporal multimodal features such as the frequency content of vibration sensors; and multimodal signature-based event recognition.

Figure 6. Inserting a table leg and screw into a pre-drilled hole.

As an example of a perceptual task, consider the screw insertion task shown in Figure 6. From the robot’s overhead perspective, the table leg occludes both the screw and the hole, making verification of the insertion difficult. We have conducted a preliminary exploration of using data mining techniques [157] to discover key signatures of success and failure from multimodal data from sources such as microphones, wrist accelerometers, and RGB-D cameras. For example, the co-occurrence of a clicking noise with an accelerometer spike may reliably indicate success, while either feature on its own might be insufficient (an accelerometer spike could be caused by sliding off the table, or a click by missing the hole and hitting the table). With these features alone, we have achieved a classification success rate of around 85%, but we suspect that this will greatly improve with more sensory modalities such as pressure and shear force. This same logic applies to more dynamic tasks as well, such as pancake flipping or robot walking, which rely on making control decisions based on friction, stickiness, and slippage.

For visualization purposes, we show our approach applied to a simpler task with only one sensory modality: discriminating between a human eating soup and eating meat from wrist accelerometer data in the X, Y, and Z directions. Figure 7 shows three examples of each condition. The highlighted sub-signal in red shows the most discriminative feature that allows us to separate the data sets and classify new examples, discovered using a simple data mining technique [157].

Figure 7. Accelerometer data (rows) collected from examples (columns) of eating soup and eating meat. The highlighted sub-signal shown in red is the automatically discovered most discriminative feature that separates the data sets.

In the future, we plan to investigate additional techniques from data mining and deep learning specifically designed for multimodal data (e.g., [98]) to discover informative features in multimodal time series data that can inform the robot about task-critical events. Additionally, we will draw upon the prior work of co-PI Niekum [103, 101] to investigate the use of nonparametric Bayesian techniques like the Beta Process Autoregressive Hidden Markov Model to discover repeated structure in time series data.

In other recent work (currently under review) [56], co-PI Niekum developed a particle filter based method to learn about and actively reduce uncertainty over articulated relationships between objects. An interactive perception algorithm was introduced to combine both perceptual observations and the outcomes of the robot’s actively chosen manipulation actions. However, one major limiting factor in this work was the sensory capabilities of the PR2 mobile manipulator that was used in the experiments. Rather than having access to rich force, pressure, and tactile data, we were forced to base the outcomes of manipulation actions solely on a measure of motor effort needed to complete the action. Again, we believe that the capabilities of algorithms like these will greatly improve with the additional sensory modalities provided by the proposed artificial skin.
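A minimal sketch of the discriminative sub-signal search illustrated in Figure 7 (our own simplified, brute-force version of a shapelet-style method in the spirit of [157], not the actual implementation):

```python
import numpy as np

def min_dist(series, shapelet):
    """Distance from a candidate sub-signal to its best-matching window."""
    L = len(shapelet)
    return min(np.linalg.norm(series[i:i + L] - shapelet)
               for i in range(len(series) - L + 1))

def best_subsignal(class_a, class_b, length):
    """Brute-force search for the class-A subsequence whose nearest-window
    distance best separates class A (small) from class B (large)."""
    best, best_gap = None, -np.inf
    for series in class_a:
        for i in range(len(series) - length + 1):
            cand = series[i:i + length]
            gap = (np.mean([min_dist(s, cand) for s in class_b]) -
                   np.mean([min_dist(s, cand) for s in class_a]))
            if gap > best_gap:
                best, best_gap = cand, gap
    return best

# Toy data: class A contains a bump (the "click"), class B does not.
rng = np.random.default_rng(2)
t = np.linspace(0, 1, 100)
bump = np.exp(-((t - 0.5) / 0.05) ** 2)
A = [bump + 0.1 * rng.normal(size=100) for _ in range(5)]
B = [0.1 * rng.normal(size=100) for _ in range(5)]
print(best_subsignal(A, B, length=15).round(2))  # recovers the bump shape
```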

4.4 Generating Behavior [Years 1, 2, and 3]

We will explore behavior and control based on explicit object trajectories and force control, policy optimization, discriminant or predicate based policies, and matching learned sensory templates.

Policy Optimization: We expect most manipulations of deformable objects with soft skin to be difficult to model. We propose using policy optimization and learning to control the robot and improve its performance over time, instead of standard model-based optimal control. We will explore learning complex policies for physically manipulating humans by watching humans execute them or by programming them directly based on our understanding of these procedures, and then refining those policies using optimization and learning. This section outlines our approach.

We will build on our previous work on learning parametric and non-parametric models [10, 92, 94, 93, 129, 127, 130, 128]. We will also build on our previous work on learning from demonstration. In our past work we used inverse models of the task to map task errors to command corrections [5, 2]. We have found that optimization is more effective than trying to track a learned reference movement, especially with non-minimum phase plants, and greatly speeds up this type of learning. We have also implemented direct policy learning to allow a robot to learn air hockey and a marble maze task from watching a human [21, 22, 24, 23, 14, 16, 15, 20, 17, 18]. Other prior work on policy optimization and learning includes [6, 138, 136, 137]. We expect the policy optimization approach described in this proposal to be more efficient and effective than our previous work, because we have developed very efficient first and second order gradient methods for policy optimization [? ]. Previous work on multiple model policy optimization includes [???????????? ]. Output feedback optimization is also closely related [????? ], as is reinforcement learning [?????????? ???????? ].

4.5 Generation 1 Skin [Year 1]

One important goal of perception is estimating contact forces. Average contact forces across a hand can be estimated using six-axis wrist force/torque sensing, which we will build into our hands. Local force sensing to detect local concentrations of force will be done by measuring skin strain in various directions. We will explore the use of existing tactile imaging devices, such as the Tekscan material we currently use, but it only senses compression forces in the normal direction (perpendicular to the skin/sensor) [?]. We think obtaining estimates of local shearing forces (parallel to the skin/sensor) will be important in reducing the risk of skin damage to the patient.

We will use a combination of structured light, markers, and multiple imagers to estimate deformation in skin and in soft (inflatable) robot limbs (Figure ??), along the lines of [?]. This system will provide inexpensive skin that is easy to replace as it wears or as the material hardens, softens, or deforms (sags) with age. The skin can easily be applied to curved surfaces and with various thicknesses and stiffnesses. Fiber optics, Fresnel lenses, and reflectors can be used to move the imagers and projectors to more convenient locations, and to optimize the number of imagers and projectors. One issue of concern is the possible weight of the skin material if we cover the entire robot with it. This concern may lead us to instrument only the hands and forearms, or to use thinner skin on other robot parts.

We will create and deploy many types of sensors, including embedded accelerometers, gyros, temperature sensors, vibration sensors, sound sensors, pressure sensors, optical sensors sensing nearby objects, and optical sensors tracking skin and object velocity and movement. Previous skin and tactile sensing projects typically focused on one or only a few types of sensors. We will explore adding electrical wiring, such as printing patterns with conductive ink on the surface or between layers of the skin, and using resistance, capacitance, and inductance (or their combination) to measure skin deformation. Printed antennas, similar to those used in wireless RFID anti-shoplifting devices, may also be useful placed on the skin surface or embedded in the skin. It may also be possible to embed mechanical elements in the skin that click or rasp when deformed, and use microphones to track skin deformation.

We will use off-the-shelf sensors embedded in optical (Near Infrared (NIR)) grade silicone. This skin includes optical sensing of marker movement to measure strain in all three directions (similar to www.optoforce.com). We will use embedded cameras or the same technology optical mice use (essentially very high frame rate cameras with low angle of incidence illumination; the Avago ADNS-9800, for example). We will also use range finding sensors to sense objects at up to a 0.5 m distance (Sharp GP2Y0E02B, for example). We will also use embedded IMUs (the Invensense MPU-9250, for example, which packs an accelerometer, gyro, magnetometer, and temperature sensor into a 3×3×2 mm package). We will explore using conductive paths printed on and in soft material to provide flexible wiring and the additional circuitry needed by these small surface mount chips. We will use measurement of the gravity vector to determine roll and pitch (rotations perpendicular to the gravity vector). We will explore imposing a local or global magnetic field (possibly time varying) to help track yaw orientation (rotation about the gravity vector) and/or position.

We will embed piezoelectric material (such as that from microphones and buzzers; RadioShack 273-066, for example) to capture high frequency vibration. We will also embed pressure sensors (the Freescale MPL115A2, for example). [HOWE AND DOLLAR Pressure stuff]. We will embed induction loops to sense electric fields, and explore imposing local or global (possibly time varying) electric fields to resolve orientation. We will glue hairs or whiskers to piezoelectric crystals to provide mechanical sensing at a (short) distance.
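As a sketch of the gravity-vector attitude estimate mentioned above, the standard accelerometer formulas are shown below; the body-axis convention is our assumption, and yaw is unobservable from gravity alone, which is exactly why we propose imposed magnetic fields.

```python
import numpy as np

def roll_pitch_from_gravity(ax, ay, az):
    """Roll and pitch (radians) from a static accelerometer reading of
    the gravity vector, assuming one common body-axis convention
    (x forward, y left, z up); any fixed convention works analogously."""
    roll = np.arctan2(ay, az)
    pitch = np.arctan2(-ax, np.hypot(ay, az))
    return roll, pitch
```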

4.6 Generation 2 Skin [Years 1 and 2]

“Soft-matter” sensors and circuits will be produced by embedding microfluidic channels of liquid metal alloy in a thin elastic film. This work will build on preliminary efforts by co-PIs Dickey, Majidi, and Park. The dependence of circuit resistance (R), capacitance (C), and inductance (L) on elastic deformation is well understood and will be examined using principles in solid mechanics and finite elasticity [87, ?]. In general, we will use the displacement or stress formulation of the field equations for an incompressible, homogeneous, isotropic solid to determine the change in length and cross-sectional geometry of the microfluidic channels as a function of external tractions (e.g. surface pressure, friction, tensile loading, ...). From the deformed geometry of the microchannels, we will then obtain new estimates for the circuit RCL properties. When possible, we will obtain a closed-form, algebraic approximation of this mapping using existing solutions for elastic membranes, plates, and shells. We will also make use of approximate energy methods, such as the Rayleigh-Ritz technique for comparing the potential energy associated with a parameterized set of kinematically-admissible deformations. These theories will inform the design of sensing and circuit architectures in which functionality is influenced by electromechanical coupling. Examples include (i) capacitive touch sensors that change capacitance in response to applied surface pressure but not tensile loading, (ii) curvature sensors that respond to bending strain but not uniform stretch, and (iii) circuit wiring that maintains constant resistance under any loading or elastic deformation.
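As a minimal worked example of this electromechanical coupling (a standard result that follows from the incompressibility assumption above, not a new derivation), consider a straight liquid-metal microchannel under uniaxial stretch:

```latex
% Resistance of a straight, incompressible liquid-metal microchannel
% under uniaxial stretch \lambda = L / L_0 (resistivity \rho constant):
R_0 = \frac{\rho L_0}{A_0}, \qquad
A(\lambda) = \frac{A_0}{\lambda}
  \quad (\text{volume conservation: } A L = A_0 L_0), \qquad
R(\lambda) = \frac{\rho\,\lambda L_0}{A_0/\lambda} = R_0\,\lambda^{2}.
```

The quadratic dependence of resistance on stretch is what makes such channels useful as strain sensors, and what the closed-form membrane, plate, and shell solutions generalize to more complex loadings.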
Recently, researchers in the Soft Machines Lab (PI: Majidi) have developed a fabrication method to simultaneously pattern insulating elastomers, conductive elastomers, and thin films of EGaIn liquid [84]. This versatile technique allows soft-matter circuits and sensor arrays to be produced in minutes without the need for labor-intensive casting and injection-filling. However, hardware integration remains a challenge, since the mechanical impedance mismatch between rigid pins and soft or fluidic conductors can lead to kinematic incompatibility and loss of electrical contact.

To address this, we have begun exploring reliable methods for forming mechanically robust electrical connections at fluid-solid interfaces. One approach is to use anisotropically conductive elastomers that function as vias between the terminals of embedded EGaIn circuits and the pins of surface-mounted electronics. Presently, this is accomplished with an anisotropic “Z-tape” conductive elastomer (Fig. 9) [126]. The Z-tape is a commercial acrylic-based elastomer (3M™ ECATT 9703) that exhibits the following features: (i) as soft as skin (modulus ∼1 MPa); (ii) elastic (10× stretchable); (iii) adhesive on both sides (pressure-sensitive bonding); (iv) 50 µm thick; (v) anisotropic conductivity (10⁷ S/m through the thickness). It is composed of conductive silver-coated iron microparticles that are ferromagnetically arranged to extend through the thickness of the tape [55].

Figure 8. Soft-matter circuits with an anisotropic “Z-tape” conductive elastomer that functions as an electrical via to a sealed EGaIn circuit [126]. (a) Arrowpad with EGaIn electrodes and conductive paper wires on the wrist. (b) A MOSFET transistor interfaced with Z-tape and a liquid GaIn circuit can be stretched and flexed without destroying the liquid metal circuits. (c) LED circuits manufactured with liquid GaIn and conductive paper wires connected to an Arduino. (d) An LED lights up when a finger presses down on the GaIn circuit.

Anisotropic conductive elastomers, i.e. “Z-tape”, will be used to form direct electrical contact between liquid metal alloy (e.g. EGaIn) and surface-mounted electronics. Z-tape can also be used to allow electrical contact between liquid metal and human skin for tactile sensing and data entry. As shown in Fig. 9a, the Z-tape is composed of aligned columns of metal microparticles embedded in a soft elastomer such as PDMS. Percolation between particles within each column allows for conductivity through the thickness but not in the plane. Fig. 9b shows an optical top-down image of vertically aligned columns of Ag-coated iron microparticles in an acrylic-based VHB™ tape (3M). This conductive elastomer (3M™ ECATT 9703) is produced by suspending the microparticles in uncured acrylic polymer and applying a strong magnetic field (0.1-1 Tesla) in order to align the ferromagnetic Ag-Fe particles into vertical columns. For Generation 2 skins, we will use ECATT 9703 and also explore producing our own with Fe nanopowders (Sigma-Aldrich) in PDMS (Sylgard 184; Dow Corning).

Applications of Z-tape to sensing and hardware integration are presented in Figs. 9c-e. Because of its anisotropic conductivity, Z-tape can function as electrical vias between embedded EGaIn circuit terminals and surface-mounted electronics (Fig. 9c). The latter include 8-bit microcontrollers, transceivers for wireless communication, batteries, and flat flexible cables (FFCs) for connecting to external hardware. Fig. 9d shows an example of a simple EGaIn circuit connected to a surface-mounted LED and MOSFET with ECATT 9703. A potential method for tactile sensing is presented in Fig. 9e, in which commercial Z-tape allows direct electrical contact between an EGaIn arrowpad and a fingertip for data entry.

One method for producing EGaIn circuits with Z-tape is presented in Fig. 9f. EGaIn or Galinstan is first patterned using a stencil lithography method that allows control of both the height and width of the microfluidic channels. The channels are patterned in a 254 µm thick film of soft acrylic-based elastomer adhesive (F-9473PC VHB™ Tape; 3M™). We begin with a thicker 0.5 mm film of VHB tape (4905 VHB™ Tape; 3M™) and use a CO2 laser (VLS 3.50; Universal Laser Systems) to produce a positive of the circuit geometry (steps i-iii) on the non-stick backing of the VHB tape (the mask). This defines the base of the microchannels. The rest of the mask is removed, leaving behind the positive of the circuit (step iii), and the 254 µm thick layer of VHB tape is applied on top of the circuit (step iv). The circuit is engraved onto the second layer of VHB with the CO2 laser engraver (step v). The laser-cut VHB is peeled off along with the mask to expose channels 254 µm deep for the liquid metal circuits (step vi). The mask on the thin layer of VHB tape acts as a stencil for the deposition of liquid GaIn. To ensure an even, thin channel of EGaIn, it is deposited into the exposed VHB channels with an Ecoflex-tipped pen (step vii). It is important to create a relatively deep channel for the deposition of the liquid GaIn, both to avoid smearing the circuit when the Z-tape is applied and to ensure sufficient volume of liquid GaIn to maintain good electrical conductivity. After the liquid GaIn is deposited (step viii) and the stencil mask is removed, the circuit is sealed in Z-tape (step ix). External wiring in the form of conductive fabric tape (3M™ CN-3490) may then be applied directly on top of the Z-tape.

Figure 9. (a) Z-tape is composed of aligned columns of metal microparticles in a soft elastomer. (b) Top-down view of Ag-Fe microparticle columns in 3M ECATT 9703. (c) Application of Z-tape as electrical vias between an EGaIn circuit and surface-mounted electronics. (d) Demonstration with an LED and MOSFET. (e) Tactile sensing demonstration. (f) Proposed fabrication technique based on stencil lithography with a CO2 laser.

4.6.1 Magnetic Field Sensing

An array of custom force sensors distributed within an elastomeric matrix (silicone rubber) will be developed utilizing a combination of custom flexible circuit fabrication technologies and soft material molding methods currently under investigation in the WPI Soft Robotics Laboratory (see Figure 10). Force sensors will comprise a magnet and Hall element pair, whereby strains due to normal loading will move a small magnet, and the corresponding magnetic flux densities will be measured by the Hall element on miniature flexible circuitry with adjustable sensitivity. The range of forces will be customized based on the initial distance between the magnet and the Hall element (currently 5 mm) and the material properties of the elastomeric substrate (currently Smooth-On Ecoflex 0030 or DragonSkin 10). We will perform static and dynamic characterization experiments to determine the sensitivity, range, and response time of individual sensors for optimized operation. Feasibility experiments on the preliminary force sensors depicted in Figure 10 reveal an approximately linear analog voltage response for static loading conditions.

We propose to study the adjustment of the range and sensitivity of these sensors based on the material properties of the silicone rubber formulation, the geometric parameters that define the magnet position and orientation, and the parameters of the signal conditioning electronics on the circuit layer, covering the entire design space of this sensing approach. We will employ an Ogden hyperelastic material model to describe the large deformations exhibited by the elastomeric material under the assumption of rubber incompressibility.

The geometric parameters will be studied in a finite element package and through iterative experimentation to verify our numerical findings. Electronic effects will be determined in a dynamic model that describes not only the range and precision, but also the response rates achievable for our measurements. The preliminary design shown in Figure 10 employs a 1-D Hall element (Analog Devices AD22151), which provides analog voltage outputs and offers convenient customization through external resistors and capacitors. After we formalize the fundamental science and methodology to develop soft modules that provide normal force feedback, we will consider the measurement of shear force distributions on the skin. We expect that interfacial frictional forces reflected by the magnitude of shear measurements will be crucial for physical interactions, and a lack of this measurement would reduce the effectiveness of our proposed robot skin. To enable the measurement of the full 3-D force vector in a distributed array, we will utilize Hall elements that measure the 3-D magnetic field (Melexis MLX90333) such that every translation of the magnet due to material strains can be captured. Since expected shear forces are typically lower than normal forces, we will utilize different levels of sensitivity for each measurement direction and characterize crosstalk. Force sensing modules will be compared with ground-truth load-cell measurements to verify the accuracy and dynamic response of the sensory system. An array of these sensors will be developed in a grid and connected over a serial communication network (SPI) such that an embedded microcontroller can receive feedback from a large number of sensors. Moments will be determined using the force vectors of neighboring sensors.

Figure 10. The proposed force sensors comprise a Hall effect IC with corresponding circuitry on a flexible substrate and a miniature magnet positioned at a precise location with respect to the Hall element, embedded within a molded silicone rubber substrate. Sensors are planned to be manufactured in a multi-layer mold (top). A preliminary prototype (left) and its corresponding static normal loading response (right) are displayed on the bottom row.
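Given the approximately linear static response reported above, converting a Hall-element reading to a force estimate reduces to a calibration fit against the load-cell ground truth. A minimal sketch (function names are ours) follows.

```python
import numpy as np

def fit_force_calibration(voltages, forces):
    """Least-squares linear fit F = a*V + b, consistent with the
    approximately linear static response observed in Figure 10.
    voltages: Hall-element analog readings; forces: ground-truth
    load-cell forces from bench characterization (both 1-D arrays)."""
    a, b = np.polyfit(voltages, forces, deg=1)
    return a, b

def force_from_voltage(v, a, b):
    """Map a Hall-element reading to an estimated normal force."""
    return a * v + b
```

For the 3-D (MLX90333) version, the same fit generalizes to a per-axis calibration matrix, with the crosstalk characterization above supplying the off-diagonal terms.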

4.6.2 Ionic Liquid Embedded Sensing Skin

As an alternative to liquid metals, the microchannels can also be filled with other conductive liquids, such as ionic liquids. There are two main advantages to using nonmetallic liquid conductors. First, an ionic liquid can be more biocompatible than liquid metals. Although the liquid metals we have been using, such as EGaIn and Galinstan, are considered nontoxic (unlike mercury), we do not want them to be absorbed by or in direct contact with human skin for an extended period of time. The robots we are interested in building with our robotic skin will mainly assist in daily activities and are expected to make many physical contacts with humans; using more biocompatible materials in the robot will increase the safety of the human users. The other advantage of ionic liquids is the rejection of unwanted sensor signal fluctuations. Since the skin we are designing will be stretchable and flexible, we also need to make our signal wires with soft materials.

If we use the same liquid metal for both wiring and sensing, it will not be easy to distinguish resistance changes of the microchannel sensors from those of the soft wires. Using two different liquids for wiring and sensing addresses this issue: since ionic liquids have much higher nominal resistance than liquid metals, soft wires filled with a liquid metal will show much smaller resistance changes than microchannel sensors filled with an ionic liquid. We have previously made this type of soft skin sensor for detecting axial strain [38, 40] and for normal pressure sensing based on electrical impedance tomography (EIT) [39], as shown in Figure 11.

Figure 11. Ionic liquid microchannel embedded soft artificial skin sensors. (a) Soft strain sensor. (b) Hand motion detection artificial skin. (c) Soft pressure sensor using an EIT microchannel network.
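The wire-rejection argument is simple series-circuit arithmetic; the numbers below are illustrative assumptions only, not measured values from our devices.

```python
# Illustrative (hypothetical) numbers: a liquid-metal soft wire has a
# nominal resistance orders of magnitude below an ionic-liquid sensing
# channel, so wire fluctuations barely perturb the series total.
R_wire, R_sensor = 0.5, 10_000.0            # ohms (assumed values)
total = R_wire + R_sensor

wire_fluctuation = 0.10 * R_wire            # a 10% change in the wire
sensor_signal = 0.10 * R_sensor             # a 10% change in the sensor

print(wire_fluctuation / total)  # ~5e-6 of the total: negligible
print(sensor_signal / total)     # ~1e-1 of the total: the measurement
```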

4.7 Generation 3 Skin [Years 1, 2, and 3]

We will explore superhuman sensing. For example, we will create vision systems (eyes) that look outward from the skin for a whole body vision system. We will use optical tracking to estimate slipping and object velocity relative to the skin. We will explore embedding ultrasound transducers in the skin to use ultrasound to image into soft materials that are in contact, such as parts of the human body. We will explore deliberately creating air and liquid (sweat) flows (both inwards and outwards) for better sensing (measuring variables such as pressure, conductivity, and temperature) and for controlling adhesion. We will explore humidifying the air for better airflow sensing, contact management, adhesion control, and ultrasound sensing. We will develop materials to make the skin rugged, and methods to either easily replace or repair damage.

For the third generation of robot skin, we will explore “solid-state” soft-matter circuit and sensing architectures that do not rely on liquid-phase metal alloys such as EGaIn. The three alternative methods we will focus on are (i) gels filled with a percolating network of conductive nano- and microparticles, (ii) patterned graphene monolayers, and (iii) thin films of exfoliated graphite. In contrast to EGaIn and cPDMS, the conductive gels and graphene will be transparent and can be used as a sensor layer over a flexible display. They will be patterned with the Protolaser U3 (LPKF) UV laser micromachining system or through microcontact printing (µCP) using stamps produced with the Nanoscribe microscale 3D printing system. The Protolaser U3 is capable of patterning sub-millimeter thin sheets of organic materials, ceramics, and metal. It has a sufficiently low wavelength (355 nm), high pulse rate (200 kHz), and high power (5 W) to drill and cut with a 15 µm beam diameter over a 9”×12” cutting area at 0.5 m/s cutting speed. This is accomplished through photochemical ablation and the ability to concentrate high power in a small area at a high pulse rate, which prevents burning and melting in the surrounding material. Sensors will be produced using the same rapid prototyping techniques currently used with a CO2 laser. However, because of its longer wavelength, larger beam diameter, and lower pulse rate, the CO2 laser cannot produce planar features below 100 µm.

Microcontact printing will be performed using stamps produced with either conventional photolithography (e.g. SU-8) or with the Nanoscribe 3D printing system (Nanoscribe GmbH). The Nanoscribe uses two-photon polymerization to selectively cure photoresists and photopolymers to produce three-dimensional structures with sub-micron features. We will use this machine to directly produce stamps in photoresist or UV-curable elastomer. Using µCP or imprint lithography, we will then pattern conductive gels, graphene, and exfoliated graphite into circuits and sensors with 0.1-10 µm planar features. Using the 3D printed structure as a master or mold, we will also explore replica molding with thermoset and thermoplastic elastomers in order to access a broader range of polymer chemistries and µCP wetting properties.

Figure 12. Fiber optically sensorized exoskeletal skin prototypes. (a) FBG embedded skin structure with honeycomb patterns. (b) Human fingertip sized robotic finger with embedded FBGs.

4.7.1 Fiber Optically Sensorized Tactile Skin

Instead of microfluidic channels, soft skin can be sensorized with embedded fiber optic strain sensors, such as fiber Bragg gratings (FBGs). Any contact or pressure on the skin will cause structural deformation of the skin and induce mechanical strains at the FBG regions of the optical fiber. FBG sensors are very accurate and provide high resolution. They are also flexible and compact, which makes them ideal for embedding in almost any type of structure. Another very useful advantage is immunity to electromagnetic interference (EMI): in robotic applications with many electromagnetic actuators, electronic sensors often show noisy readings, which requires various types of noise filtering, shielding, and signal amplification. Our previous robotic skin prototypes (Figure 12) have shown the feasibility of using FBGs for tactile and force sensing with high precision control [115, 108, 116].
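For reference, the standard FBG response relates the Bragg wavelength shift to strain and temperature; the photoelastic coefficient below is a typical textbook value for silica fiber, not a measured property of our prototypes.

```latex
% Standard FBG response: Bragg wavelength shift vs. strain and temperature,
% with effective photoelastic coefficient p_e (~0.22 for silica fiber)
% and thermal expansion / thermo-optic coefficients \alpha_\Lambda, \alpha_n:
\frac{\Delta\lambda_B}{\lambda_B} = (1 - p_e)\,\varepsilon
    + (\alpha_\Lambda + \alpha_n)\,\Delta T
% At \lambda_B = 1550\,\mathrm{nm} this gives roughly 1.2 pm of wavelength
% shift per microstrain, which sets the interrogator resolution required.
```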

4.7.2 Flexible and Stretchable Waveguides for Optical Pressure and Strain Sensing

Another optical soft sensing method is to embed micro waveguides in a soft material for pressure, strain, or curvature sensing. When coated with a reflective material, such as a thin metal layer, microchannels can be transformed into optical waveguides. Without any stress on the elastomer, the straight waveguides will transmit optical signals like fiber optics. However, if stress or strain is applied, the soft waveguides will mechanically deform and form small cracks in the metallic coating, which cause optical power losses in the transmission. By utilizing this light intensity modulation, we will be able to estimate how much pressure or strain is applied to the sensing structure based on the optical power losses. Figure 13 shows our preliminary prototype of an optical soft sensor and its results for detecting curvature. Since this method does not use any liquid materials, it has the potential to increase the safety and practicality of the device.
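In practice, estimating curvature from the measured power loss amounts to inverting a bench calibration curve. A minimal sketch follows; the calibration numbers are placeholders, not measurements from the prototype in Figure 13.

```python
import numpy as np

# Hypothetical bench calibration: optical power loss (dB) recorded at
# known curvatures of the coated waveguide (assumed, illustrative values).
cal_curvature = np.array([0.0, 5.0, 10.0, 15.0, 20.0])  # curvature, 1/m
cal_loss_db = np.array([0.0, 0.8, 2.1, 4.0, 6.5])       # power loss, dB

def curvature_from_loss(loss_db):
    """Invert the monotonic loss-vs-curvature calibration by linear
    interpolation to estimate applied curvature from measured loss."""
    return np.interp(loss_db, cal_loss_db, cal_curvature)
```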

Figure 13. Optical soft pressure sensor. (a) Preliminary prototype. (b) Prototype with light source and detector. (c) Stressed prototype showing cracked reflective coating. (d) Preliminary experimental result showing resistance increase in the photodetector with increased curvature.

4.8 Ultrasound Sensing

We will explore using ultrasound transducers mounted in the skin of the robot to see into the person being manipulated: to track bones and tissue movement, and to estimate tissue strain. We can use real time ultrasound measurements to locate bones to guide manipulation. We can also use ultrasound to monitor tissue deformation (displacement and strain) to guide forces and to detect when a manipulation is failing and should be stopped or adjusted. Using phase-sensitive 2D speckle tracking [?], tissue deformation can be accurately measured in real time at high spatial (< 1 mm) and temporal (< 10 ms) resolution with a large imaging depth of a few cm at typical clinical ultrasound imaging frequencies (2-10 MHz). Based on the tissue displacement information from speckle tracking, a complete set of locally developed strain tensors of the deformed tissue will be generated, indicating the degree of strain of the tissue in different directions to guide manipulation. The “globally applied” strain can also be estimated by measuring the change in distance of the tissue and skin from the bones. Non-linear analysis of “locally developed” strain and “globally applied” strain will also be applied to determine the criteria at which tissue deforms significantly and passes into the non-linear region of soft tissue mechanics [?], at which point the robot should change its behavior.

In year 1, we will do feasibility tests using a commercially available linear array transducer, connected to a commercial ultrasound research platform (Verasonics US Engine, Verasonics Inc., WA, USA), which will be mounted in the robot skin. In the following year, a set of off-the-shelf single element transducers will be mounted on the palm and fingers of the robot hand for optimal sensing, imaging, and feedback. Based on the design parameters obtained during Y2, a custom-built array transducer system incorporated in the skin will be developed and evaluated in Y3. We expect substantial revision and improvement of our volumetric sensing in response to how well it performs in experimental use, including development of better image processing algorithms as we go. In principle, the concept of mounting ultrasound transducers in robot skin is very similar to the detection of prostate cancers using an ultrasound probe mounted on a doctor's finger in the rectum to measure the elastic properties of nearby tissue [?]. The integration of an ultrasound linear array transducer onto the robot hand in Y1 will also be similar to the approach of a previous study of muscle fatigue in the forearms [?]. Building a custom ultrasound transducer opens the possibility of distributing the transducer across the skin, or of putting elements of the transducer in support surfaces such as a bed or chair, and doing transmission ultrasound imaging as well as reflective imaging. Transmission imaging may give us better images at greater depth.

Preliminary Results: In Figure ??, a commercial ultrasound linear transducer integrated onto a simulated robot hand (knee pad) measures local elasticity and contractility of the muscles in the forearm.

In the proposed study, the knee pad will be replaced with an actual robot hand, and the full set of strain tensors will be calculated to determine the criteria for pain and eventual tissue failure due to the applied shear force. It should be noted that ultrasound may couple poorly without a liquid coupling medium, and there is a potential challenge in imaging tissues through patient clothes. Our pilot tests confirmed that ultrasound imaging is possible with a minimal spray of water (only enough to moisten the clothes, Figure ??). A moisturizing device may be necessary as part of the robot hand next to the ultrasound transducers. We will also explore the use of other coupling materials such as very soft silicone or gels.

Initial tests also confirmed the feasibility of obtaining two-dimensional strain tensor fields from the ultrasound data obtained with clothes on the forearm. The strain tensor consists of the axial, lateral, and shear strains. The axial direction corresponds to the direction of propagation of the ultrasound, while the lateral direction is perpendicular to the axial direction. If the measured strain goes beyond a specified threshold, the robot will be programmed to alter its manipulation procedure. Such real time strain feedback also opens the possibility of reinforcement learning for the robot.
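Once speckle tracking has produced displacement fields, the 2-D strain components described above follow from spatial gradients. A minimal small-strain sketch (function and variable names are ours) is shown below.

```python
import numpy as np

def strain_tensor_2d(u_axial, u_lateral, dz, dx):
    """Small-strain 2-D tensor from speckle-tracked displacement fields.
    u_axial, u_lateral: 2-D displacement arrays on a regular grid with
    axial spacing dz (along the ultrasound beam) and lateral spacing dx."""
    duz_dz, duz_dx = np.gradient(u_axial, dz, dx)
    dux_dz, dux_dx = np.gradient(u_lateral, dz, dx)
    eps_axial = duz_dz                    # strain along the beam
    eps_lateral = dux_dx                  # strain perpendicular to the beam
    eps_shear = 0.5 * (duz_dx + dux_dz)   # engineering shear / 2
    return eps_axial, eps_lateral, eps_shear
```

Thresholding these fields, or the ratio of locally developed to globally applied strain, gives the trigger for altering the manipulation procedure.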

4.9 Skin Repair

An unavoidable problem with skin is continual wear and occasional damage. We will explore three methods to address wear and tear: 1) making the skin easy to replace, by having removable sections that can easily be attached and electrically connected; 2) making it easy to apply material and replace components from the outside, potentially using a repair robot and true 3D “Maker” technology (essentially an ink-jet print head on the end of a robot); and 3) making an outer layer of the skin continually slough off and be replaced from the inside, as in human skin. New material could be provided through channels in the skin, and could harden based on loss of a solvent, an active hardener such as light or heat, or an A/B epoxy-type mixing of components.

4.10 Systems Integration and Evaluation

The development and evaluation of perception, reasoning, and control algorithms for our three skin systems will use a series of test setups:

B.1 Evaluate a patch of skin on the laboratory bench.

B.2 Evaluate the skin on a simple hand.

B.3 Evaluate the skin on our current lightweight arm and hand.

B.4 Evaluate the skin on our Sarcos Primus humanoid (whole body sensing).

5 Broader Impact

[EVERYBODY: This is a place holder. Please send me stuff to include in this section.]

Unplanned Broader Impact. Often the broader impacts of our work are serendipitous, and not planned in advance. Examples of such ad hoc broader impacts from our recent work include: 1) our technologies being demonstrated on entries in the DARPA Robotics Challenge; 2) a graduate student participating in a Discovery Channel TV series, “The Big Brain Theory: Pure Genius”, one purpose of which was getting people excited about engineering; 3) a graduate student helping run a team of all-female high school students in the FIRST robotics competition; and 4) our work on soft robotics inspiring the soft inflatable robot Baymax in the Disney movie Big Hero 6 [?]. We have participated in extensive publicity as a result. An explicit goal of this movie was to support STEAM. We expect to be involved with sequels to Big Hero 6, and to support Disney's STEAM effort (which is quite large and well funded). We expect similar unplanned broader impacts to result from this work, especially based on dramatic videos of agile robots.

Development of Human Resources. Students working on this project will gain experience and expertise in teaching and mentoring by assisting the course students in class projects. Students working on this project will also have the opportunity to develop their communication and interpersonal skills, as they will actively participate in the dissemination of the research results at conferences and in related K-12 outreach programs of the Robotics Institute.

Participation of Underrepresented Groups. We will attract both undergraduate and graduate students, especially those from underrepresented groups. We will also make use of ongoing efforts in the Robotics Institute and CMU-wide efforts. These include supporting minority visits to CMU, recruiting at various conferences and educational institutions, and providing minority fellowships. CMU is fortunate in attracting an unusually high percentage of female undergraduates in Computer Science. Our collaboration with the Rehabilitation Science and Technology Department of the University of Pittsburgh in the area of assistive robotics is a magnet for students with disabilities and students who are attracted by the possibility of working on technology that directly helps people.

Outreach. One form of outreach we have pursued is an aggressive program of visiting students and postdocs. This has been most successful internationally, with visitors from the Delft University of Technology (4 students) [7, 88, 154] and the HUBO lab at KAIST (1 student and 1 postdoc) [19, 37, 67], and the Chinese Scholarship Council has supported 5 students [82, 155]. We welcome new visitors, who are typically paid by their home institutions during the visit. We are currently experimenting with the use of YouTube and lab notebooks on the web to make public preliminary results as well as final papers and videos. We have found this is a useful way to support internal communication as well as to potentially create outside interest in our work.

We will continue to give lectures to visiting K-12 classes. The Carnegie Mellon Robotics Institute already has an aggressive outreach program at the K-12 level, and we will participate in that program, as well as other CMU programs such as Andrew's Leap (a CMU summer enrichment program for high school students), the SAMS program (Summer Academy for Mathematics + Science: a summer diversity program aimed at high schoolers), Creative Tech Night for Girls, and other minority outreach programs.

Co-PI Dickey has a strong record of dedication to outreach activities, recruitment of diverse students, and mentorship of high school, exchange, and undergraduate students. In six years, he has mentored more than thirty undergraduate researchers, including six REU-supported undergraduate researchers, and has helped twelve of those students receive undergraduate research fellowships. Eight undergraduates have been authors (including four first authors) on papers from his group. One undergraduate received recognition for the best poster at the 2012 AIChE National Meeting, and another for the 3rd best poster at the 2011 Meeting.
Co-PI Dickey has also mentored twelve high school students (one of whom is now enrolled at NC State and pursuing a Chemical Engineering degree, and approximately half of whom are female) and six exchange students from China (all of whom are pursuing graduate degrees). We recruit students from local Apex High School through a program called the Academy of Information Technology. He is committed to continuing to support undergraduate research, and undergraduates will work on this project. In addition, he has mentored 12 female students in the past six years. Also, as a graduate student, he was a big brother to a 9-year-old African American boy; that experience has inspired him to participate in as many outreach opportunities as time permits.

The research will be integrated with an outreach module that co-PI Dickey developed during the past three years. The module discusses the liquid metal utilized in this proposal and has been presented at science camps for both middle school and high school students. Fig. 13 is a photograph of him presenting to middle-school-aged children about liquid metal. Hundreds of students have seen this material directly from him or his graduate students. He will continue to work with the Engineering Place to refine the module and present it at least twice annually through their camp. The camp, geared toward middle school children, reaches out to a diverse group of kids who become energized toward science through their participation.

The module will also be presented at NC State's NanoDays, an open house for all ages that focuses on emerging technologies being developed at NC State. This event attracts 2,000 people every year and offers an opportunity to get feedback on the module from a diverse audience (parents and kids). Co-PI Dickey participated in this activity in 2013 and 2014 and plans to continue participating every year. He also volunteers for panel discussions on science that are open to the public at the NC State library.

The context for the module is the popular movie Terminator 2, which features a material resembling a liquid metal that flows and morphs into human-like characters (Fig. 13). The liquid metal central to this proposal is similar in the sense that it can flow and be molded. We introduce the module by showing a clip from the movie that is appropriate for all ages (see YouTube: Shape Reconfigurable Liquid Metal). The module provides an opportunity to highlight the unique assets of the liquid metal and the more general concepts of surface tension, wetting, and rheology. We will include a demonstration of the liquid metal spreading. The appeal of the program is that it draws on popular culture in a highly visual manner to which all ages can relate. The module also introduces the field of flexible electronics and demonstrates the utility of the liquid for forming highly stretchable electronics through a prototype microfluidic antenna (Fig. 2). Exposing the students to the potential applications of flexible electronics (e.g., electronic paper, textiles, solar cells) should be inspirational and illustrate the role of creativity in engineering. The module also offers the opportunity to discuss other unusual fluids (e.g., ketchup, magnetorheological fluids, cosmetics). These familiar fluids will help the students connect what they learn with real-life applications. Co-PI Dickey will continue to enlist graduate and/or undergraduate students to assist with these outreach activities.

Dissemination Plan. For a more complete description of our dissemination plan, see our Data Management Plan. We will maintain a public website to freely share our simulations and control code, and to document research progress with video material. We will present our work at conferences and publish it in journals, and use these vehicles to advertise our work to potential collaborators in science and industry.

Technology Transfer. Our research results and algorithms are being used in Disney Research, and through this technology transfer path they will eventually be used in entertainment and education applications and will be available to and inspire the public. Three former students work at Boston Dynamics/Google transferring our work to industrial applications, several students have done internships at Disney Research, and two former postdocs work there full time.

Benefits to Society. The proposed research has the potential to lead to more useful robots. A successful outcome can enable practical controllers for robust locomotion of legged robots in uncertain environments, with applications in disaster response and elderly care. The outcomes of this project may also provide new insights into human sensorimotor control, in particular into how humans adapt locomotion behaviors.
Understanding how humans actually control their limbs has the potential to advance neural and robotic prostheses and exoskeletons that restore legged mobility to people who have damaged or lost sensorimotor function of their legs due to diabetic neuropathy, stroke, or spinal cord injury, as well as to improve fall risk management in older adults [?].

Enhancement of Infrastructure for Research and Education. The project will help to maintain CMU's ATRIAS and SARCOS robot testbeds. Both robots are powerful tools for research and education in dynamics and control of legged locomotion. These robots regularly attract the interest of students and inspire them to advanced education and training in robotics.

Relationship to DRC funding. We are currently part of one of the funded teams in the DARPA Robotics Challenge [?]. This support will end in the summer of 2015. The DRC focuses on reliability, implementation issues, and logistics. We are not able to develop the proposed ideas in the time frame of the DRC, so longer term NSF funding complements our soon-to-end DRC funding.

6 Curriculum Development Activities

Graduate Education: PI Onal designed and teaches Special Topics on Soft Robotics, a unique graduate-level course first offered last Spring. This project-based course is open to undergraduate seniors as well, and demand among students majoring in Mechanical Engineering and Robotics Engineering exceeded our expectations. The significant relevance of the proposed research to this course provides an opportunity both to introduce advanced methods for quantitatively studying safe and adaptive interactions with human users, and to integrate mini-projects into the curriculum, which we expect will draw interested students into further work with the research group. Two of the five projects in the first offering of this class are currently being expanded by the original groups, to be turned into conference and journal paper submissions. Consequently, three unpaid students joined our lab to continue research in soft robotics.

Undergraduate Education: The Major Qualifying Project (MQP) is the capstone design experience at WPI, which requires student research and development effort, typically spread over the whole senior year. MQPs establish qualification in the student's major, often with industrial sponsors, off-campus research organizations, or ongoing faculty research and interests. Students usually form teams to tackle multidisciplinary problems with faculty advisors. PI Onal is actively involved with the projects program at WPI. Last year, he advised four capstone project teams involving mechanical engineering, robotics engineering, electrical and computer engineering, and computer science majors in the general area of safe human-robot interaction utilizing compliance, to design, characterize, build, and validate bio-inspired robotic systems, including a soft hydraulic exo-musculature system and a semi-autonomous anthropomorphic hand prosthesis that requires minimal human supervision. These efforts led to multiple awards, including a first-place win at the national Cornell Cup, first- and second-place wins among Robotics Engineering MQPs, and the prestigious Edward C. Perry Award for Outstanding Projects in Mechanical Design. Building on this initial success, we propose to cultivate a tradition of undergraduate project involvement with the proposed research by incorporating MQP activities on modeling, characterization, and analysis of the proposed sensory systems.

Curriculum Development. We will pursue multiple directions for dissemination. First, we will develop course material on robot control and biologically inspired approaches to humanoid and rehabilitation robotics that will be directly influenced by the planned activities of this proposal. The PIs currently teach several courses that will benefit from this material. The first course, 16-642: Manipulation, Mobility & Control, is part of the Robotics Institute's recently established professional Master's degree program that aims at training future leaders in the workforce of robotics and intelligent automation enterprises and agencies. Two other courses, 16-868: Biomechanics and Motor Control and 16-745: Dynamic Optimization, directly address the research areas in which this proposal is embedded. We also teach a course designed to attract undergraduates into the field, 16-264: Humanoids. All of these courses emphasize learning from interaction with real robots. We will make these course materials freely available on the web.

7 Still Not Discussed

Forgot to mention - the goal of ASSIST is to make wearable devices (for humans) that sense biological cues/signals (EKG, biomarkers, etc.) in a self-powered mode. I suppose it is complementary to the efforts here, but the motivation is quite different.

The ASSIST initiative is a perfect fit. Let's try to incorporate that as part of the longer term agenda. In the meantime, we can propose creating anisotropic conductive elastomers as vias to connect the soft (microfluidic?) sensors and circuits with rigid microelectronics. Whatever we propose, it must be untethered, self-powered (with a battery or vibrational energy harvesting), and have wireless connectivity. I'll send a few paragraphs later tonight on what I have in mind.

FYI - I am a member of the NSF ERC center at NC State called "ASSIST", whose goal is to make wearable electronics (think Nike FitBit on steroids). The work you are proposing is different, but just something to keep in mind that might be leveraged (?) or at least mentioned. The ASSIST folks are focused on energy harvesting and sensing of biological health cues. http://assist.ncsu.edu/

8 Collaboration Plan

[From the solicitation: a 2-page Collaboration Plan supplementary document. Where appropriate, the Collaboration Plan might include: 1) the specific roles of the project participants in all organizations involved; 2) information on how the project will be managed across all the investigators, institutions, and/or disciplines; 3) identification of the specific coordination mechanisms that will enable cross-investigator, cross-institution, and/or cross-discipline scientific integration (e.g., yearly workshops, graduate student exchange, project meetings at conferences, use of the grid for videoconferences, software repositories, etc.); and 4) specific references to the budget line items that support collaboration and coordination mechanisms. If a Large proposal, or a Medium proposal with more than one investigator, does not include a Collaboration Plan of up to 2 pages, that proposal will be returned without review. The plan should make a convincing case that the collaborative contributions of the project team will be greater than the sum of each of their individual contributions. Large projects will typically integrate research from various areas, either within a cluster or across clusters. Rationale must be provided to explain why a budget of this size is required to carry out the proposed work. Since the success of collaborative research efforts is known to depend on thoughtful coordination mechanisms that regularly bring together the various participants of the project, a Collaboration Plan is required for all Large proposals. Up to 2 pages are allowed for Collaboration Plans. The length of and degree of detail provided in the Collaboration Plan should be commensurate with the complexity of the proposed project.]

PI: Chris Atkeson (CMU). Co-PIs: Scott Niekum (CMU), Carmel Majidi (CMU), Yong-Lae Park (CMU), Michael Dickey (NCSU), and Cagdas Onal (WPI). Lead Institution: Carnegie Mellon University.

1. Specific roles of the PIs: Our team brings together experts in soft materials and sensors with experts in robot perception, reasoning, control, and learning.

Atkeson (CMU), PI, is an expert in robot perception, reasoning, control, and learning. He has extensive experience with robot arms and full body humanoids, as well as robot learning. Atkeson will lead the work in software algorithms, including perception, reasoning, control, and learning. Atkeson also has extensive experience with soft robots, having recently developed a soft robot (the inspiration for Baymax in the movie Big Hero 6).

Niekum (CMU), co-PI and postdoctoral fellow, will lead efforts in building integrated manipulation and perception systems that leverage the multimodal sensing capabilities of the prototype skin. This work will include signal processing and sensor integration, probabilistic modeling of sensor data, and development of machine learning and control algorithms for perceptual classification, action selection, and plan execution. For the past five years, Niekum's research has focused on learning exploitable structure from robot percepts and human demonstration data. Examples include learning high-level task descriptions from demonstration data and using interactive perception to learn about and reduce uncertainty over articulated relationships between objects.

Majidi (CMU), co-PI, will lead efforts to design sensors and circuit elements with pre-determined electromechanical coupling between elastic deformation and changes in electrical resistance, capacitance, and inductance.
Such designs will be informed by theoretical models based on solutions to field equations in nonlinear (finite) elasticity. When possible, a Rayleigh-Ritz method will be used to obtain approximate algebraic expressions for the mapping between deformation (tensile strain, compressive strain, bending curvature, shear) and electrical response. Otherwise, Majidi and his students will make use of commercial FEA software. In addition to design and modeling, they will also assist in materials selection, fabrication, and integration of soft-matter electronics with rigid hardware. This includes work on mechanically robust interfaces between liquid and solid conductors.

Park (CMU), co-PI, will lead efforts in system integration of the skin and robot systems. He will also develop a sensor-network-embedded skin and work with the co-PIs on development of various soft robotic embodiments and specific elements, including hydraulic, cable-driven, granular-media-jamming, and pneumatic/inflatable elements.

During the last five years, Park's research has focused on exo-musculature actuation and sensing modalities. Among other projects, he has worked on pneumatically actuated (McKibben type) soft muscle tendons for the foot and lower leg for use in treating gait pathologies associated with neuromuscular disorders. He also worked on a thin cylindrical sleeve with pneumatically actuated muscles based on miniaturized arrays of chambers and embedded Kevlar fiber reinforcement. Finally, he developed thin soft artificial skin with embedded arrays of various sensing modalities (strain, pressure, shear forces, etc.) for use with the aforementioned actuation modalities.

Dickey (NC State), co-PI, is a chemical engineer with expertise in patterning and actuating soft materials, including polymers, gels, and liquid metals, for applications that include soft and stretchable electronics, sensors, and wearable devices. A student in his group is also studying sensors for the human skin through the NSF ERC ASSIST project. He has expertise in microfabrication, materials characterization, and soft matter. Dickey will work with the team to help develop novel soft material strategies to enable next generation robot skin.

Onal (WPI), co-PI, has a track record in theoretical modeling, design, fabrication, actuation, sensing, and control solutions for soft-bodied robotic systems. Onal will be responsible for the research thrust on next generation distributed 6-D force and moment measurement using an array of magnet and Hall element pairs embedded in silicone rubber. He will work closely with PI Atkeson on the integration of the proposed sensors within the robotic sensory skin. For the past five years, Onal has been an active contributor to the emergence and development of soft robotics through a series of research projects on every aspect of robots comprising pressure-operated elastomeric materials, including power generation, valving, dynamic modeling, sensing, control, and algorithmic studies.

The students and remaining postdoc will interact with all of the involved faculty. We have found that combining postdocs with several faculty and multiple students provides a very productive mix of expertise. One postdoc, Niekum, will coordinate development of algorithms, testbeds, and systems. The other postdoc will take the lead in coordinating work on skin mechanics and skin sensing. What students will work on will vary through the duration of the project. All students will develop new theory and algorithms as well as work with actual implementations. The PIs will jointly supervise, guide, and provide assistance as needed, as well as focus on theoretical and algorithm development themselves. All members of the team will work together to evaluate each other's results.

What makes this team more than the sum of the individuals is that it brings together experts in soft materials and sensors with experts in robot perception, reasoning, control, and learning. Atkeson and Niekum have extensive prior experience in robot control and learning. Atkeson has expertise in optimal control, and has prior work in human motor psychophysics and modeling human behavior. In terms of project management, Atkeson has prior experience as PI of an NSF IGERT program, and was a thrust leader for Mobility and Manipulation in our NSF Quality of Life Engineering Research Center. Atkeson is currently a co-PI on a DARPA project with colleagues at WPI, so he is experienced with long distance collaboration.
Management across institutions and disciplines. Specific coordination mechanisms: In addition to frequent ad hoc meetings and interactions, the personnel involved in this work will participate in weekly project meetings to review research results and discuss future directions. Participants at WPI and NCSU will join via videoconferencing software (Skype, Google Hangouts, Blue Jeans, ...). Budget line items that support these coordination mechanisms: Faculty, postdoc, and student support will provide coverage for the time necessary to interact. Multiple trips are budgeted for all participants to visit each other for extended periods. Governance: The co-PIs will make decisions by consensus. The PI is the final decision maker if consensus is not reached.

2. Management across institutions and disciplines: Copies of the experimental setups (except those involving robots costing more than $50,000) will be made available at all research sites. Robots that are at one site will be able to be teleoperated remotely to facilitate experiments and data collection by all participants.

Project data generated by each member of the team will be made accessible to the other team members. Team members will collaborate tightly. Their research activities will be coordinated by co-PI Park. The team will report on research findings via conference presentations and proceedings, journal papers, and direct communication with appropriate NSF IIS personnel. The team will generate a project website with a project description, a list of team members, a list of publications, and an acknowledgment of the sponsoring agency.

3. Specific coordination mechanisms: Specific features of the project to ensure coordination of the work will be:

• Regular meetings of the research team to plan experiments, analyze results, and write manuscripts will involve all members of the group, including the undergraduates, who will be mentored especially in aspects of the project that are not part of their major.

• The shared drive will be kept up to date and well documented. It will include research plans, data, detailed descriptions of that data, an electronic library of indexed relevant publications, and drafts of new manuscripts.

• Graduate and undergraduate students will keep the project web site up to date so that all members of the team can review progress.

• Writing of peer reviewed manuscripts and preparation for conference presentations will be reviewed by the entire group.

4. Budget line items that support these coordination mechanisms: The budget includes several items that specifically support collaboration and coordination among the team. The main use of the travel funds is to participate in collaboration visits and conferences in the US. The preparation for these meetings and conferences and the writing of manuscripts for publication will also serve to ensure collaboration and coordination. Participation in these activities will ensure that the team works together and is focused on the timely completion of the project.

The Research Experiences for Undergraduates (REU): Sites and Supplements solicitation (NSF 13-542) gives instructions for embedding a request for an REU Supplement in a proposal. Proposers are invited to embed a request for an REU Supplement in the typical amount for one year only, according to normal CISE guidelines (detailed below). REU stipend support is one way to retain talented students in undergraduate education, while providing meaningful research experiences. The participation of students from groups underrepresented in computing - underrepresented minorities, women, and persons with disabilities - is strongly encouraged. CISE REU supplemental funding requests must describe the results of any previous such support, including students supported, papers published, etc. Other factors influencing supplemental funding decisions include the number of REU requests submitted by any one principal investigator across all of her/his CISE grants. Investigators are encouraged to refer to the REU program solicitation (NSF 13-542).

All: list of project personnel as well as a list of collaborators.

Project personnel:

• Mary Smith; XYZ University; PI
• John Jones; University of PQR; Senior Personnel
• Jane Brown; XYZ University; Postdoc
• Bob Adams; ABC Community College; Paid Consultant
• Susan White; DEF Corporation; Unpaid Collaborator
• Tim Green; ZZZ University; Subawardee

A list of any past and present collaborators (related or not to this proposal):

• Collaborators for Mary Smith (XYZ University; PI): Helen Gupta, ABC University; John Jones, University of PQR; Fred Gonzales, DEF Corporation; Susan White, DEF Corporation
• Collaborators for John Jones (University of PQR; Senior Personnel): Tim Green, ZZZ University; Ping Chang, ZZZ University; Mary Smith, XYZ University
• Collaborators for Jane Brown (XYZ University; Postdoc): Fred Gonzales, DEF Corporation
• Collaborators for Bob Adams (ABC Community College; Paid Consultant): None
• Collaborators for Susan White (DEF Corporation; Unpaid Collaborator): Mary Smith, XYZ University; Harry Nguyen, Welldone Institution
• Collaborators for Tim Green (ZZZ University; Subawardee): John Jones, University of PQR
