
TABLE OF CONTENTS

Welcome from the Editor (Deborah S. Ray)

Foreword (Grant Imahara)

Introduction to Service Robots (Mouser Staff)

Sanbot Max: Anatomy of a Service Robot (Steven Keeping)

CIMON Says: Design Lessons from a Robot Assistant in Space (Traci Browne)

Revisiting the Uncanny Valley (Jon Gabay)

Robotic Hands Strive for Human Capabilities (Bill Schweber)

Robotic Gesturing Meets Sign Language (Bill Schweber)

Mouser and Mouser Electronics are registered trademarks of Mouser Electronics, Inc. Other products, logos, and company names mentioned herein may be trademarks of their respective owners. Reference designs, conceptual illustrations, and other graphics included herein are for informational purposes only.

Copyright © 2018 Mouser Electronics, Inc. – A TTI and Berkshire Hathaway company.

WELCOME FROM THE EDITOR

If you’re just now joining us for Mouser’s 2018 Empowering Innovation Together™ (EIT) program, welcome! This year’s EIT program—Generation Robot—explores robotics as a technology capable of impacting and changing our lives in the 21st century much like the automobile impacted the 20th century and changed life as we knew it.

In our last segment, we explored collaborative robots and featured a compelling video hosted by celebrity engineer and Mouser partner Grant Imahara, as well as the Collaborative Robots eBook, which features cobots in industrial environments, identifies design considerations and solutions, and discusses motors and control in complex robotics. The accompanying blogs featured our favorite robots from our favorite shows and movies: Star Wars, Star Trek, Lost in Space, and Dr. Who.

In this EIT segment, we explore service robots, which combine principles of automation with those of robotics to assist humans with tasks that are dirty, dangerous, heavy, repetitive, or just plain dull. Service robots differ from collaborative robots in that service bots require some degree of autonomy—that is, the ability to perform tasks based on current state and sensing, without human intervention. Today’s service bots work in close proximity to humans, are becoming more and more autonomous, and can interpret and respond to humans and conditions.

This eBook accompanies EIT Video #3, which features the Henn na Hotel, the world’s first hotel staffed by robots. Geared toward efficiency and customer comfort, these robots provide not only an extraordinary experience of efficiency and comfort, but also a fascinating and heart-warming experience for guests. Don’t miss the accompanying blogs, which feature “Flippy” the burger-flipping robot, autonomous suitcases, soft robotics, and chatbots.

We hope you’re enjoying Mouser’s 2018 EIT topics, videos, eBooks, and blogs! Stay tuned…there’s more to come!

Deborah S. Ray
Executive Editor

Empowering Innovation Together

“Today’s service bots work in close proximity to humans, are becoming more and more autonomous, and can interpret and respond to humans and conditions.”

Jack Johnston, Director
Marketing Communication

Executive Editor
Deborah S. Ray

Contributing Authors
Traci Browne
Jon Gabay
Grant Imahara
Steven Keeping
Bill Schweber

Technical Contributor
Paul Golata

Editorial Contributors
LaKayla Garrett
Ryan Stacy

Design & Production
Robert Harper

With Special Thanks
Kevin Hess, Sr. VP, Marketing
Russell Rasor, VP, Supplier Marketing
Jennifer Krajcirovic, Director, Creative Design
Raymond Yin, Director, Technical Content

FOREWORD
By Grant Imahara

This year, Mouser’s Empowering Innovation Together™ program has asked me to explore how robots and humans can collaborate and transform the concept of work. One specific type of robot that may make a significant contribution to that transformation is the service robot.

The International Federation of Robotics states that a service robot “performs useful tasks for humans or equipment excluding automation applications.” From Rosie on The Jetsons cartoon to Baymax in the film Big Hero 6, pop culture is awash with examples of service robots. These useful and endearing robotic characters perform a wide variety of tasks, from mundane jobs like cleaning to highly specialized tasks like being a “personal health care companion” (Baymax’s original purpose).

In this EIT segment, I’ll be traveling to Nagasaki, Japan to visit the Henn na Hotel. “Hen” in English translates as “weird, strange, or odd.” And this hotel certainly fits that bill. Service robots are front and center in this oddly unique travel destination. The concierge that greets you is a multilingual dinosaur robot that checks you into the hotel. The cloakroom features a robotic arm that will store your luggage for you. Most of the remaining support staff are robots as well.

Another meaning of the Japanese word “hen” is “to change,” and the hotel’s slogan expresses this idea: “A commitment to evolution.” The designers of this revolutionary hotel experience want to use the innovative technology of service robots not as a gimmick but as a way to fundamentally change the guest experience and learn more about human-robot interaction.

Service robots have the potential to alter every aspect of our daily lives. They can do not only the things we can’t do but also the things we don’t want to do. And that may be the one thing that drives this technology the most—the desire to gain more free time.

The day when the average household has a robotic maid may be closer than we think. I look forward to the day when I return home to everything clean and in perfect order, freeing me to relax, play some video games, or design more robots!

mouser.com/empowering-innovation

INTRODUCTION TO SERVICE ROBOTS
By Mouser Staff

When we think of service robots, we often think of toys and novelty devices; however, with roots in engineering and automation, today’s service bots are making life and work easier.

In 2018, we may not yet be jetting off to work in flying cars and have robot servants waiting at home with dinner on the table, but we don’t have to look too far to see where service robots are helping us with our daily lives. Early examples of service robots were rooted in automation, which we’ve been using in one form or another for centuries—from when windmills helped humans pump water and process grain, to the steam engine that turns heat into an abundant—and mobile—source of energy. Nowadays, we think of automation as asking Alexa what the temperature is outside, to turn off the lights, or to get a little help with navigation on the way to an appointment.

Service robots combine principles of automation with that of robotics to assist humans with tasks that are dirty, dangerous, heavy, repetitive, or just plain dull. We see these aspects in collaborative robots as well, which work side-by-side with humans, as we discussed in the 2018 EIT Collaborative Robots eBook. So what makes a service robot different?

The International Organization for Standardization (ISO) defines a service robot as a robot “that performs useful tasks for humans or equipment excluding industrial automation applications” (ISO 8373). According to this specification, service robots require “a degree of autonomy,” which is the “ability to perform intended tasks based on current state and sensing, without human intervention.” For service robots, this ranges from partial autonomy (including human-robot interaction) to full autonomy (without active human-robot intervention). Today’s service bots work in close proximity to humans, are becoming more and more autonomous, and can interpret and respond to humans and conditions.

Trending Applications

Service robots are popping up all around us, solving personal and business challenges in a wide variety of industries. Though some examples seem quirky, service bots solve real problems and are changing the way we work and live.

Personal Care Robots

Personal care robots are non-medical service robots that improve quality of life through interaction and help with tasks. The ISO identifies several types of personal care robots:

• Mobile servant robot: A robot that is capable of traveling to perform serving tasks in interaction with humans, such as handling objects or exchanging information

• Physical assistant robot: A robot that physically assists a user to perform required tasks by providing supplementation or augmentation of personal capabilities

• Person carrier robot: A robot designed to transport humans to an intended destination

Whereas other types of service robots work at safe distances from humans or have a physical barrier, personal care robots interact very closely with people, which poses unique design challenges. What’s more, people’s diverse and complex needs pose additional challenges as well (Figure 1).

Figure 1: Robots for children will need to be appealing to touch as well as sight. (Source: Mouser)

Telepresence Robots

Long-distance communications that allow both parties on a call to see and hear the other person have been imagined since the early 1900s, steadily advancing to dedicated videoconferencing systems, webcams, and smartphone-based solutions. Telepresence robots take the concept one step further, allowing users to move about a room as a physical robotic representation. Your face is displayed live on a screen for others to see, along with a camera to view an area remotely. Today, these bots normally take the form of a wheeled base for movement throughout an office, with a pedestal that affixes the camera and screen at a height high enough for interaction with humans (Figure 2).

Figure 2: Intel Telepresence robots allow cost-effective collaboration for a global workforce. (Source: Intel Free Press/CC-BY-SA-2.0)

While you might correctly refer to this setup as “a head on a pole,” it does offer interactivity superior to traditional phone and videoconferencing options. The user can control his or her camera perspective, view people’s body language, and give off visual cues that are lost in voice-only communication. Some devices include the ability to autonomously navigate to an area, showing up on time to the correct meeting. The user then simply logs on without needing to drive, fly, or otherwise commute to be in the room.

Ahead in This eBook

Ahead in this eBook, we meet a couple of very cool service robots, revisit the uncanny valley, and take deep dives into mimicking human gesturing and hand functions:

Sanbot: Anatomy of a Service Robot
Meet Sanbot Max. This service robot can walk, talk, carry, direct, and tell jokes. Max is 1.46m tall, has an omnidirectional four-wheel drive system, boasts arms and hands with ten degrees of freedom, can lift 400kg, and bristles with sensors. But what’s most impressive is what’s going on inside him.

CIMON Says: Design Lessons from a Robot Assistant in Space
Designing a service bot is challenging enough, but designing one to function in microgravity that’s also appealing to its fellow astronauts poses unique design challenges. Read on to see how engineers at Airbus developed the European Space Agency’s first free-flying, autonomous robot assistant on the International Space Station.

Delivery Robots

Delivery drones and rolling robots have potential to change the way products are delivered to consumers:

Drones

Delivery drones are being tested and used in various industries with some notable successes. UPS, for example, started testing delivery drones in rural areas in 2017. In this use, the driver deploys the drone to visit one delivery location, while the driver moves on to deliver a second package by hand. The drone then returns to the truck to be deployed again when needed. This works especially well in sparsely-populated areas where the distance to a single stop makes things prohibitively expensive.

Rolling Robots

Small aerial delivery drones have captured the public’s attention, but other delivery robots are being tested as well. An Estonian startup called Starship, for instance, manufactures delivery robots that roll on six wheels down sidewalks to deliver grocery bag-sized packages from a central hub. With capabilities to go about six kilometers, these bots use GPS and sensors, as well as cameras, for navigation and obstacle avoidance. They move along at pedestrian speeds, which makes them seemingly less threatening than drones that could drop from the sky if they somehow lose power or control.

Farming Robots

Today, commercial farming is highly automated, with combine operators taking manual control of their machines only as needed while sitting in an air-conditioned cabin that looks more like a spaceship inside than the beaten-up tractor that urban dwellers likely still imagine. But other advancements in automation and robotics are blooming at farms as well.

Indoor Farming

Indoor farming, often called vertical farming, allows for a smaller footprint land-wise than would normally be possible, uses sensors to monitor conditions, and brings fresh produce closer to consumers. Mouser Electronics featured vertical farming as part of its 2017 Empowering Innovation Together™ series on Shaping Smarter Cities.

Farm Automation

Farm automation isn’t just for gigantic combines and lettuce factories. With the FarmBot, consumers can have a robot that tends to their garden automatically (Figure 4). This system takes the form of a gantry-style robot that moves over a garden planned via a computer interface. The user selects what kinds of crops are placed where in the garden, and the FarmBot does the rest, planting seeds, watering them, and even detecting and destroying weeds automatically.

Figure 4: The FarmBot performs gardening autonomously, removing humans from day-to-day tasks. (Source: FarmBot/CC BY 4.0)

e-Palette Showroom

The e-Palette showroom concept unveiled at the Consumer Electronics Show in 2018 by Toyota puts a new twist on shopping by having the store, shop, showroom, or office come to you (Figure 3). Using driverless vehicles that resemble something like the fusion of a modern train car and a London Black Cab, mobile showrooms could function like an on-demand kiosk, allowing customers to view and even try on (or try out) available items. These vehicles could, of course, be outfitted as traditional delivery vehicles, either beckoning humans to come meet them on the street or deploying their own minion delivery bots to go the last few meters to a doorstep. While still in the concept phase, Toyota has announced that this on-demand service will be put to use during the 2020 Olympics in Tokyo.

Figure 3: The Toyota Motor Corporation e-Palette concept vehicle. (Source: Nobuyuki Hayashi/CC BY 2.0)

Ahead in This eBook (cont’d)

Revisiting the Uncanny Valley
The uncanny valley describes a phenomenon that occurs as machines take on human attributes. As robots begin to collaborate with humans, work side-by-side with us, provide services for us, and augment our capabilities, addressing the uncanny valley in robotic design is an important consideration.

Robotic Gesturing Meets Sign Language
Designing robots that can perform gesture-based sign language and interpret it would provide a potentially invaluable service to the hearing impaired and those who rely on this unique language. Researchers around the world are making headway.

Robotic Hands Strive for Human Capabilities
The human hand is unmatched in its performance capabilities, but robotic hands that use soft grips and advanced algorithms are making great progress toward comparable performance in some applications.

Robotic Cleaning

Automated vacuuming has long been imagined as something that robots could take over—think Rosie the Maid from the 1960s cartoon The Jetsons. Remarkably, the robotic vacuum cleaner is approaching 20 years old, with the Electrolux Trilobite making its debut in 2001 and the iRobot Roomba following in 2002. Since then, other competitors like Dyson and Samsung have joined the competition, pushing features and capabilities forward. Most floor cleaners in action today resemble gigantic hockey pucks rather than a more complicated humanoid form (Figure 5).

Figure 5: Robot vacuum cleaner. (Source: Mouser)

These robotic vacuums use essentially the same technologies that were used 100 years ago to suck dirt and small bits from a floor surface into a container. But unlike their human-powered predecessors, robotic vacuums use a series of sensors and physical bumpers to navigate rooms and obstacles. They also use sensors to detect large changes in elevation that they can’t navigate, namely stairs. Modern vacuuming bots can power themselves for nearly an hour and can even navigate back to a charging station to recharge. They also feature scheduling so you can come home to a nice, clean house, whether or not your dog or cat enjoys the interruption.
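The bump-and-react behavior described above fits in a few lines of control logic. The sketch below is purely illustrative: the Drive class and the sensor inputs are hypothetical stand-ins for vendor-specific hardware interfaces, not any shipping product’s API.

```python
import random

# Illustrative reactive loop for a robotic vacuum. The Drive class and
# the sensor inputs are hypothetical stand-ins, not a real vendor API.

class Drive:
    def forward(self, m): print(f"forward {m}m")
    def reverse(self, m): print(f"reverse {m}m")
    def rotate(self, deg): print(f"rotate {deg:.0f} degrees")
    def dock(self): print("returning to charging station")

def step(drive, battery_level, cliff_detected, bumper_pressed):
    """One pass of the sense-react cycle, highest priority first."""
    if battery_level < 0.15:
        drive.dock()                        # recharge before dying mid-room
    elif cliff_detected:
        drive.reverse(0.2)                  # elevation drop (stairs): back off
        drive.rotate(random.uniform(90, 180))
    elif bumper_pressed:
        drive.reverse(0.1)                  # obstacle contact: back off, turn
        drive.rotate(random.uniform(30, 120))
    else:
        drive.forward(0.05)                 # clear floor: keep cleaning

step(Drive(), battery_level=0.8, cliff_detected=False, bumper_pressed=True)
```

Real products layer mapping and scheduling on top, but the safety-critical behaviors (cliff and bump responses) remain simple, high-priority reflexes like these.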

SANBOT MAX: ANATOMY OF A SERVICE ROBOT
By Steven Keeping for Mouser Electronics

Meet Sanbot Max. This service robot can walk, talk, carry, direct, and tell jokes. Max is 1.46m tall, has an omnidirectional four-wheel drive system, boasts arms and hands with 10 degrees of freedom, can lift 400kg, and bristles with sensors. But what’s most impressive is what’s going on inside of him.

Over 60,000 Sanbot service robots are going about their business in China. The majority of these are Elf and Nano models, but they are now being joined by a larger, faster, and more powerful sibling called “Max.” Weighing 100kg and standing at 1.46m, Max is built for export by Qihan, a Shenzhen-based robotics company that has been in business for over a decade. The service robot made its United States debut earlier this year at the CES® exhibition in Las Vegas, Nevada.

The high degrees of flexibility and autonomy that Max demonstrates are achievable by leveraging the latest software, electronics, electromechanical, and mechanical components. Max is an impressive machine with an external design that incorporates mobility, strength, reliability, and visual appeal and an internal design that utilizes computing power, sensors, a power supply, and motors to safely and efficiently perform a wide range of tasks. Such design forms the anatomy of a contemporary service robot.

Designed for Service

Service robots join industrial and collaborative robots in performing tedious work for humans. Service robots are not designed to take over repetitive assembly tasks or boost productivity by assisting humans in a factory environment; rather, they’re designed to lighten the load on humans by performing routine tasks at work, at home, or in the hospital. Personal service robots handle non-commercial tasks such as domestic service or serve as personal mobility assistants, while the professional type performs commercial tasks such as receptionist duties, delivering goods, or even fighting fires and assisting in rescues. Sanbot Max is a professional service robot (Figure 1).

Figure 1: Sanbot Max is the latest in a new generation of professional service robots. (Source: Qihan)

Service robots are late to the party because they feature the complex design necessary to work and interact with humans. Industrial robots are relatively unsophisticated machines, segregated from humans and designed to work at the same task for years without a break. Collaborative robots face some design complications because they cooperate to an extent with humans, but their tasks are still specialized and limited in scope. In contrast, working directly with people requires service robots to be intelligent, interact safely, and exhibit versatility and adaptability. Such specifications demand employing the best of the existing industrial-robot technologies while developing new systems suited for the unique operating environment of service.

That’s what Qihan’s engineers have achieved with Max. The robot melds lightweight, durable electromechanical components with IBM®-powered artificial intelligence (AI), advanced sensors, cloud connectivity, and Li-ion power. And the robot carefully negotiates the uncanny valley by leaning on the side of cuteness and small stature. The robot measures 1.46m (H) × 0.77m (W) × 0.62m (D), and beneath its plastic skin lies the clever design that makes it tick. Built on a four-wheel drive base with a lightweight plastic and aluminum chassis, it supports a 29kWh lithium-ion (Li-ion) battery power supply and incorporates multiple sensors, arms and hands with 10 degrees of freedom, 3D-visual simultaneous localization and mapping (vSLAM) obstacle avoidance, a 5m/s walking speed, and a 7kg towing capacity. All this allows Max to supervise office receptions, speak to and guide humans, and distribute documents, among other tasks.

The Friendly Robot

Service robots like Sanbot Max spend all their time moving and working among humans. Engineers have equipped the robot with features that deliver the high mobility, dexterity, and strength essential to ensure safety and productivity in its daily tasks without compromising size, weight, and battery life.

External Structure

Sanbot Max comprises five main elements: A four-wheel drive base, a torso, two arms, and a head.

The four-wheel drive system uses an external motor that sits atop four Mecanum wheels. The 18cm-diameter Mecanum wheels roll in any direction, allowing Max to go forward, backward, or to the side—minimizing the space necessary to maneuver. The wheels also include a suspension to allow the robot to cope with uneven surfaces. The robot’s ground clearance is 4.5cm, and its maximum speed is 5m/s (18km/hr), which is faster than most humans can move, but not so fast that it compromises safety.
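The omnidirectional motion that Mecanum wheels permit follows from straightforward kinematics: each wheel’s angular speed is a linear combination of the desired forward, sideways, and rotational velocities. Below is a minimal sketch of the standard Mecanum inverse-kinematics equations; the geometry values are illustrative assumptions (only the 18cm wheel diameter comes from the text), not Qihan’s published figures.

```python
def mecanum_wheel_speeds(vx, vy, omega, lx=0.3, ly=0.25, r=0.09):
    """Standard inverse kinematics for a Mecanum drive.

    vx: forward velocity (m/s); vy: sideways velocity (m/s);
    omega: rotation rate (rad/s); lx/ly: half wheelbase/track (m, assumed);
    r: wheel radius (m); 0.09m matches the 18cm wheels in the text.
    Returns angular speeds (rad/s) for the FL, FR, RL, RR wheels.
    """
    k = lx + ly
    fl = (vx - vy - k * omega) / r
    fr = (vx + vy + k * omega) / r
    rl = (vx + vy - k * omega) / r
    rr = (vx - vy + k * omega) / r
    return fl, fr, rl, rr

# Pure sideways translation: the wheels counter-rotate in diagonal pairs.
print(mecanum_wheel_speeds(vx=0.0, vy=1.0, omega=0.0))
```

Because any (vx, vy, omega) combination maps to a valid set of wheel speeds, the robot can translate in any direction while rotating, which is what minimizes its maneuvering footprint.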
Sanbot Max’s small stature keeps its weight down by design to help extend battery life. Max is a little shorter than an average human—which enhances the robot’s friendly appeal—but is powerful enough to carry 400kg. Unlike an industrial or collaborative robot, a service robot isn’t required to perform repetitive high-precision movements with its arms or to support the objects it transports. Consequently, Max’s arms are primarily designed to give the robot a welcoming posture and are used to gesture and guide. These uncomplicated requirements permit the designers to specify simpler, less power-hungry motors, reducing the complexity of their control systems. Such cost savings are essential if service robots are to increase their presence in large volumes. However, that’s not to say that Max’s arms are weak or unsophisticated; each has five degrees of movement, with another five for each hand.

The robot’s head offers three degrees of movement to allow the robot to engage with people. Such design illustrates a common approach to service robot appearance; however, functional service robots like automated vacuum cleaners are designed with fewer degrees of movement and an efficient form factor rather than with any human characteristics because their direct interaction with humans is limited. In contrast, humans prefer professional service robots to feature an identifiable face. How much this face should resemble that of a real person is a subject for further research; however, eyes, the thing that humans look at first when approaching other people, are essential. Max’s eyes are purely cosmetic, made of cost-effective animated graphics that project via organic light-emitting diode (OLED) screens mounted as the face on its head. He uses other sensors for actual sight. Max also sports illuminated ears to enhance the effect of his human-like characteristics.

Freight Flexibility

Because his arms have limited carrying capacity, Max tows heavy weights on a trolley attached to a chassis on his backside. The front side of his torso incorporates a carrying frame that allows transport of items such as cool boxes, trays, or other loads up to 400kg. The frame includes a pressure transducer that estimates load weight and adjusts the robot’s waist position to maintain his center of gravity and stability. An attachment system can also allow Max to push a wheelchair.

The load-carrying peripherals can be detached when not needed to lower the cost, weight, and size of the base robot. Electronics peripherals are supported via a built-in 220V interface, a High-Definition Multimedia Interface (HDMI®) port, multiple Universal Serial Bus (USB) ports, power-supply ports, and switches, making it very easy to use the robot with commercial and third-party devices such as light-emitting diode (LED) screens, printers, and monitors.

Power Pack

Power for the wheels and the rest of the robot’s systems comes from a Li-ion power pack. Li-ion has the highest energy density of any secondary battery technology and has allowed the Sanbot’s designers to keep the power pack’s size and weight down while enabling the 36V, 1.6kW (28.8kWh) unit to offer up to 18 hours of runtime and 50 hours of standby power. When its power is low, the Sanbot trundles off and plugs itself in for a recharge.

Obstacle Avoidance

Service robots serve in cluttered environments such as offices, retail premises, and homes. These are areas full of obstacles that first need to be detected and then avoided. That’s quite a challenge for a robot that can move at up to 18km/hr. The Sanbot designers decided to overcome this problem by employing a 3D camera that acquires images at 30 frames/s (six times the industry-average rate).

The scanned images provide a supplement to the robot’s supplied map of its environment. If items have moved outside their position on the map, the robot uses vSLAM to learn the new positions and plan the shortest revised route to avoid obstacles.

vSLAM is a popular technique for solving the computational problem that robots and other autonomous machines face when attempting to construct or update a map of an unknown environment while simultaneously keeping track of their location within it. Several vSLAM algorithms exist, and each uses vision combined with distance and direction traveled from fixed reference points to enable navigation in cluttered and populated environments.
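To make that predict-and-correct idea concrete, here is a toy illustration: odometry (distance and direction traveled) advances the pose estimate, and a sighting of a fixed reference point nudges the estimate back toward agreement. Real vSLAM implementations use probabilistic filters or graph optimization over many visual features; this single-landmark correction step is only a sketch of the principle.

```python
import math

# Toy illustration of the SLAM idea (not a production vSLAM algorithm):
# fuse dead-reckoned motion with observations of a fixed landmark so the
# pose estimate does not drift without bound.

def predict(pose, distance, heading_change):
    """Dead reckoning: advance the (x, y, theta) pose by odometry alone."""
    x, y, theta = pose
    theta += heading_change
    return (x + distance * math.cos(theta),
            y + distance * math.sin(theta),
            theta)

def correct(pose, landmark, measured_range, measured_bearing, gain=0.3):
    """Nudge the pose toward agreement with one landmark sighting."""
    x, y, theta = pose
    lx, ly = landmark
    expected_range = math.hypot(lx - x, ly - y)
    expected_bearing = math.atan2(ly - y, lx - x) - theta
    range_err = measured_range - expected_range
    bearing_err = measured_bearing - expected_bearing
    # Move the estimate a fraction of the way toward the measurement.
    x -= gain * range_err * math.cos(theta + measured_bearing)
    y -= gain * range_err * math.sin(theta + measured_bearing)
    theta -= gain * bearing_err
    return (x, y, theta)

pose = (0.0, 0.0, 0.0)
pose = predict(pose, distance=1.0, heading_change=0.1)
pose = correct(pose, landmark=(5.0, 2.0), measured_range=4.4, measured_bearing=0.3)
print(pose)
```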

Connected

Microchip wireless technologies enable innovative, scalable, dedicated designs that fit small footprints, consume very little power, and operate in rugged environments—indoors or outdoors. Convenience, security, reliability, and power efficiency drive wireless application requirements. You can rely on Microchip to help you design cost-effective, highly-reliable wireless products.

ATWINC3400 Single Chip Network Controller

mouser.com/atmel-atwinc3400-controller

R21 SMART Arm®-Based Wireless Microcontrollers

mouser.com/microchip-r21-microcontrollers

vSLAM has a proven reputation for handling dynamic changes, such as varying light levels and moving objects and people, in an environment. While no initial map is required, Sanbot Max users can upload a map to the robot to accelerate the learning process. Because the vSLAM algorithms are customized to fit the available resources and operational circumstances rather than aiming at perfection, they represent a low-cost yet proven option for service robots.

In addition to the vSLAM technology, Sanbot Max has combined infrared and ultrasonic sensors that enable it to come to a safety-rated, monitored stop when an unexpected moving object or person crosses its intended path. When traveling at a maximum speed of 5m/s, the robot can come to a halt within 1m. At normal operating speeds, the stopping distance is 20cm or less.
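Those stopping figures imply a hefty deceleration, which follows from the constant-deceleration relation a = v²/(2d). The quick check below is simple kinematics rather than data from Qihan, and the 1m/s “normal operating” speed is an assumed example.

```python
def required_deceleration(speed_mps, stop_distance_m):
    """Constant-deceleration kinematics: a = v^2 / (2 * d)."""
    return speed_mps ** 2 / (2 * stop_distance_m)

# Top speed: 5 m/s halted within 1 m -> 12.5 m/s^2 (roughly 1.3 g)
print(required_deceleration(5.0, 1.0))

# An assumed 1 m/s walking pace stopped within 0.2 m -> 2.5 m/s^2
print(required_deceleration(1.0, 0.2))
```

The braking demand at top speed is severe, which helps explain why the design pairs the fast 3D camera with short-range infrared and ultrasonic sensors for last-moment detection.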
Interacting with People

Sensors are a key element for successful robot design. Some are there to mimic the human senses of sight, hearing, and touch (while many don’t consider taste and smell as essential yet, research labs around the world are investigating how to reproduce these senses artificially for future robots). Others extend the robot’s sensory capabilities beyond those of humans.

Sensing

Sanbot Max incorporates an array of vision, hearing, touch, gyroscopic, and magnetic sensors to inform it about its surroundings and aid communication with people.

Vision

In addition to the vSLAM 3D camera for navigation purposes, Sanbot Max has a high-definition (HD) color camera (for facial and object recognition and recording) and a reverse black-and-white camera.

Despite four decades of research, facial recognition is still a challenge for robotic systems. Today’s techniques are a major improvement compared to those of a few years ago because modern computing power allows rapid comparison of many more reference points on the face than was possible before. And these reference points are now imaged in three dimensions rather than on a flat surface to improve the differentiation between various faces even further.

Sanbot Max refines the facial recognition technique by framing the face using the HD color camera and then framing other objects in the picture that it doesn’t recognize as part of a human body. It then discounts the non-human elements when performing facial recognition. This process of elimination helps in situations, for example, where the parcel a courier is delivering obscures part of his or her face.
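One way to realize that elimination step is to pair each face candidate with the fraction of it left unobscured by foreign objects, so the recognizer can discount hidden features. The sketch below is hypothetical: detect_faces and detect_objects stand in for real vision models, and the box format and demo values are invented examples, not Qihan’s actual pipeline.

```python
def overlap(a, b):
    """Fraction of box a covered by box b; boxes are (x, y, w, h)."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0, min(ay + ah, by + bh) - max(ay, by))
    return (ix * iy) / float(aw * ah)

def visible_face_regions(frame, detect_faces, detect_objects):
    """Pair each face candidate with its unobscured fraction so the
    recognizer can discount features hidden behind foreign objects."""
    objects = detect_objects(frame)   # e.g., a parcel held by a courier
    results = []
    for face in detect_faces(frame):
        hidden = max((overlap(face, obj) for obj in objects), default=0.0)
        results.append((face, 1.0 - hidden))
    return results

# Demo with stub detectors: one face candidate, partly behind a parcel.
faces = lambda frame: [(40, 40, 60, 60)]
objects = lambda frame: [(30, 60, 80, 50)]
print(visible_face_regions(frame=None, detect_faces=faces, detect_objects=objects))
```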
Hearing and Speaking

Sanbot Max listens via no less than six microphones arrayed on its head, which register voice commands from up to 5m away from any direction. The robot converses in several languages through two speakers and features a projector to display videos when words aren’t enough to express an instruction or idea.

The head, torso, and arms feature touch sensors. Infrared and ultrasonic sensors supplement these sensors, primarily to allow the robot to be aware of its distance from moving and stationary objects. Because it also works so closely with humans, Sanbot Max contains software algorithms that help the robot learn from its interactions and modify movements and reactions to enhance safety.

Graphical User Interface

In addition to talking directly to the robot, a person can enter information and instructions on a 10.1in., 1080P touchscreen mounted on the robot’s upper torso.

Listening and Learning

A critical element for a service robot is learning from experience. This experiential learning not only allows the robot to constantly update its map of the local environment or schedule the best time for a battery recharge but also enables the optimization of its service by remembering the faces of regular visitors, so they are not required to self-identify each time they visit.

Machine learning is a complex challenge that Sanbot Max’s designers have addressed by combining the resources of the robot’s programmers, its sensors and onboard computing power, and remote cloud services (Figure 2).

Figure 2: Sanbot Max employs a three-way support system that leverages its own resources and those of a programmer and cloud-based server. (Source: Sanbot)

Open for Developers

Sanbot supplies a software development kit (SDK) and an application programming interface (API) to allow programmers to create their own code for the robot to customize it for a specific duty. For example, configuring a robot specifically for reception duties in a confined area allows a designer to limit the allocation of resources to navigation, thereby extending battery life. Instructions to the robot can upload from a mobile device or a PC.

Onboard Computer

Sanbot Max features a powerful onboard computer that runs a real-time Linux operating system to control movement, communication, and power-autonomous operation when necessary. This operating system can also support a range of common applications.

Wireless Connectivity

Connectivity is one of the key enablers for contemporary service robots; previously, the robots were “on their own,” reliant solely on their internal resources to operate. Such reliance either limited the tasks the robots could perform or added weight, complexity, power consumption, and costs.

Now, wireless connectivity to the Internet allows powerful cloud servers to supplement a robot’s resources. Not only does this permit remote communication and software updates, but it also enables a streamlining of the robot’s computing systems, because the most demanding processing tasks can offload to a remote computer. Sanbot Max employs both Wi-Fi and cellular connectivity (which a third-party 4G SIM card enables).

Artificial Intelligence

Sanbot Max’s wireless connectivity enables the robot to use Nuance® voice recognition software and IBM’s Watson artificial intelligence (AI). The Nuance voice recognition helps the robot understand a range of languages, dialects, and accents, while Watson, which is an extended development from its original concept as a question/answer system to one that uses computers working in parallel to support a range of machine-learning activities, helps the robot increase its overall intelligence.

The IBM Watson software, for example, demands the resources found in a $1 million server, like the IBM Power 750. Spreading that cost over the use of thousands of relatively inexpensive service robots empowers companies to build a sustainable business model and encourages consumer adoption (Figure 3).

Figure 3: Service robots will increasingly draw on external sources such as IBM’s Watson to power their AI. (Source: Clockready/CC BY-SA 3.0)

Conclusion

Sanbot Max from Qihan demonstrates how practical service robots are becoming a reality. The robot originates from earlier, smaller models that have sold in large numbers in China and are now an export beyond the country’s borders.

The robot is proving its success, as the designers have sourced proven technology from external suppliers while focusing their efforts on developing a solution that can perform in professions involving reception, delivery, or other service-related operations. The designers have also put considerable effort into ensuring that the robot’s appearance, characteristics, and movements encourage humans to interact with it. The result is a reasonably priced, practical product that performs well in commercial environments.

The designers of Sanbot Max have also demonstrated how wireless connectivity to the cloud points to the near future of service robots: These robots will increasingly draw on external resources to power their AI rather than carry expensive, complex, and power-hungry computers and software onboard.

Power

Linear µModule DC/DC power products from Analog Devices are complete system-in-package solutions with integrated DC/DC controller, transistors, input and output capacitors, compensation components, and inductors that address time and space constraints. These efficient, reliable, and (with select products) low-EMI solutions are compliant with EN55022 class B standards.

LTM8065 Silent Switcher® µModule® Regulators
mouser.com/adi-ltm8065-umodule-regulators

LTC3255 Inductorless Charge Pump Converters
mouser.com/adi-ltc3255-converters

CIMON SAYS: DESIGN LESSONS FROM A ROBOT ASSISTANT IN SPACE
By Traci Browne for Mouser Electronics

Designing a service bot is challenging enough, but designing one to function in microgravity that is also appealing to its fellow astronauts adds unique design challenges. Read on to see how engineers at Airbus developed the European Space Agency’s first free-flying, autonomous robot assistant for the International Space Station.

Of the more than 220 visitors to the International Space Station (ISS) in the past 18 years, June 2018 marked the first time one of those visitors was a free-flying, autonomous service robot. CIMON, or as it is more formally known, the Crew Interactive Mobile Companion, is a 32cm-diameter, 5kg spherical robot that can speak, hear, see, and understand. CIMON is a robot head with no body.

In fact, CIMON’s design was reportedly inspired by Professor Simon Wright, a flying-brain character in the 1978 cartoon series Captain Future. Much like the professor aided Captain Future in the fictional world, CIMON was responsible for assisting astronaut Alexander Gerst in the Columbus laboratory module on the ISS.

CIMON, developed and built by Airbus on behalf of the German Aerospace Center (DLR), was assigned to assist Gerst with three different tasks, making this service robot a collaborative robot as well. The three tasks involved solving a Rubik’s cube puzzle, conducting experiments with crystals, and carrying out a medical experiment which CIMON would film.

Figure 1: This is CIMON at the European Astronaut Center in Cologne, Germany. (Source: DLR/T. Bourry/ESA)

Optane SSD

Intel® Optane™ technology provides high throughput, low latency, high quality, and high endurance through a unique combination of 3D XPoint™ memory media, memory and storage controllers, IP interconnects, and software. Together, they deliver a revolutionary leap forward in decreasing latency for large-capacity workloads that need fast storage.

Optane™ 900P Series SSD
mouser.com/intel-optane-900p-ssd

Optane™ Memory
mouser.com/intel-optane

CIMON could also serve as a complex database of all necessary information about operation and repair procedures for experiments and equipment on the ISS, allowing astronauts to have their hands free while working. If the crew wanted to capture video for documentation purposes, CIMON could handle that as well.

Designing for Microgravity

While the design may have been inspired by Professor Simon Wright, the spherical shape has a more practical application in a microgravity environment (Figure 1). Till Eisenberg, project manager at Airbus Friedrichshafen, said that the primary challenge was to create a robot that would be accepted by the astronauts.

“Everything in the Space Station is rectangular, and it is a very technical environment. We wanted to have a calm point of focus,” said Eisenberg.

CIMON’s service as a calm point of focus, along with its ability to make the astronaut’s job more manageable, is important. Judith-Irina Buchheim, a researcher at the Ludwig-Maximilian University Hospital in Munich, thinks that the assistance CIMON provides astronauts will reduce their exposure to stress, which researchers believe has an impact on the human immune system. The simple, spherical shape contrasts with the boxy, technical environment of the ISS, but there are other advantages as well.

Because CIMON was the first free-flyer to operate on the ISS, safety was a significant concern. Typically, the rule for a microgravity environment is that everything must be strapped down to prevent objects from floating around and getting into things and places they should not. Eisenberg said that sharp edges or corners would be a hazard as CIMON floats around the ISS bumping against walls, equipment, or the astronauts themselves.

Even if CIMON’s shape meant it would not hurt anyone, colliding with astronauts trying to work in an already-cramped space could quickly become annoying. To avoid that situation, CIMON is equipped with 12 ultrasonic sensors that enable it to detect obstacles and be aware of incoming objects. These sensors measure the distance from the obstacle or astronaut to CIMON.

Fourteen internal fans allow CIMON to move and rotate in all spatial directions. While CIMON is on the move, a dual 3D camera sensor collects information about the depth and relation from one feature to another and builds a map based on Simultaneous Localization and Mapping (SLAM) algorithms. A frontal video camera and face-detection software would focus on the eyes of Gerst, allowing CIMON to orient itself to simulate eye contact.

If Gerst wanted to get CIMON’s attention when it was otherwise occupied, say looking out the window and enjoying the view, he could look in the robot’s direction and speak to CIMON. An array of microphones could detect the arrival direction of Gerst’s voice, and CIMON would orient himself until the speaker was in the field of view of the camera, at which point the eye gazing could happen.

For their final test, Eisenberg and CIMON boarded the Airbus A300 Zero-G for a parabolic flight test. Eisenberg described the flight as “a great experience” and “something everyone should do.” The plane flies up and down at 45° angles. The descent is where microgravity occurs, and this phase lasts about 20 seconds. A typical parabolic flight test will experience this about 30 times.
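The microphone-array trick rests on a simple principle: sound reaches each microphone at a slightly different time, and that delay reveals the direction of arrival. A minimal two-microphone sketch of the time-difference-of-arrival (TDOA) estimate follows; the spacing and delay values are illustrative assumptions, not CIMON’s actual specifications.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at room temperature

def bearing_from_tdoa(delay_s, mic_spacing_m):
    """Estimate a sound's arrival angle from the time-difference-of-
    arrival between two microphones (far-field approximation).

    delay_s: arrival-time difference between the two mics (s)
    mic_spacing_m: distance between the mics (m)
    Returns the angle from the array's broadside axis, in degrees.
    """
    # Path-length difference = c * dt; sin(angle) = difference / spacing
    s = max(-1.0, min(1.0, SPEED_OF_SOUND * delay_s / mic_spacing_m))
    return math.degrees(math.asin(s))

# A 100-microsecond lag across mics 10cm apart -> roughly 20 degrees
print(bearing_from_tdoa(100e-6, 0.10))
```

With several microphone pairs arranged around a sphere, the same computation resolves direction in all axes, which is what lets the robot turn until the speaker enters its camera’s field of view.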

Designing the Face and Voice

The ability to maintain eye contact and communicate are essential skills for an assistant, so the service bot needed a face. In CIMON’s case, the face is a simple line drawing displayed on a front-facing screen (Figure 2). As mentioned earlier, acceptance by the astronauts was one of the engineers’ primary challenges. The team invited Gerst to be a part of the design phase to ensure CIMON was an assistant with which the astronaut would work. Gerst was presented with several voices and faces to choose from, ensuring he would be happy with the result, while providing him with a sense of ownership and familiarity when he and the robot finally met on the ISS.

Figure 2: This is a composite image of CIMON on the ISS. (Source: DLR/T. Bourry/ESA)

An integral part of CIMON’s ability to assist astronauts is the IBM Watson artificial intelligence (AI) system, which provides the core speech comprehension element. When someone speaks to a robot like CIMON with an AI system, the robot’s job is to identify intent. When a message arrives via audio stream, it undergoes translation into written language so that the system can understand and interpret its meaning. Once the identification of intent is complete, the AI system generates an answer based on several keywords in one sentence.

Eisenberg said that the AI will always understand words that align with its training. That is why a system that is working fine with a small project group can react in an unexpected way when a new person tries to interact with it. Aware of this potential problem, the team had Gerst interact with the AI for two sessions. Not only did those sessions make the AI familiar with Gerst, but they acclimated him to working with AI. Much like aeronautic radio chatter, the crew on the ISS has created a specific way of talking, and they use keywords that have well-defined intentions, so CIMON should not have any trouble interacting with other crew members in the future.

Scientists are also hoping to use CIMON to observe the group effects that develop in small teams over extended periods of time, such as during extended missions to the moon and eventually Mars. They want to understand more about the social interaction between humans and machines as well. That interaction is significant because Eisenberg said that someday service robots, acting as flight attendants, could potentially play an essential role in those long missions.

All in all, CIMON is impressive for a robot created on a 3D printer and built with commercially available, off-the-shelf sensors. Moreover, all this came about in less than two years. That timeline is impressive for just about any application, but for the space industry, it is epic.

REVISITING THE UNCANNY VALLEY
By Jon Gabay for Mouser Electronics

“The uncanny valley” describes a phenomenon that occurs as machines take on human attributes. As robots begin to collaborate with humans, work side-by-side with us, provide services for us, and augment our capabilities, addressing the uncanny valley in robotic design is an important consideration.

The concept of robots integrating into our lives is nothing new, but the challenge of creating robots that do not intimidate or make people feel threatened is proving more difficult than imagined. The phenomenon called “the uncanny valley” describes the uneasiness humans experience when a robot closely mimics a human but not quite entirely (Figure 1). In this case, some subconscious mechanism kicks in to make us feel uneasy, fearful, and non-trusting. Yet robots that are clearly toy-like, cartoon-like, or considered “cute” do not elicit this response.

Figure 1: The uncanny valley describes the uneasiness humans feel when a robot has enough human characteristics to create dissonance between what looks near-human but not quite human. (Source: Wikimedia Commons)

As robots begin to collaborate with humans, work side-by-side with us, provide services for us, and augment our capabilities, addressing the uncanny valley in robotic design is an important endeavor.

A More Detailed Look

The initial discovery of the phenomenon in 1970 by Dr. Masahiro Mori clearly showed that as mechanized machines take on human attributes, they become more accepted. But as the machines approach a nearly human-looking state, a dramatic decrease in acceptance takes place, until a real human likeness manifests. He called this effect bukimi no tani, which has become known as the uncanny valley. He also went on to propose that motion as well as form affects realism and can deepen the valley if a motion is unnatural and mechanical in appearance. This is especially true when there is a coordinated effort to make several face muscles move in tandem to emulate a real face. Unnatural eye movement, in particular, can cause a flee response from observers.

Robots with Human-Like Features

Modern-day media has been able to exploit these evolved inherent reactions toward almost human-looking machines and even dolls. The earliest mass exposure to the robotic form came from the Forbidden Planet movie, where Robbie the Robot was introduced (Figure 2). This mechanical bi-pedal humanoid form is far from looking like a real human and, if deemed friendly, is not intimidating. The popular Lost in Space robot looks even less intimidating (Figure 3) and even less human. But the popular C-3PO begins the ascent to looking more human (Figure 4), though it is still not particularly threatening, especially with its comical, jerky, non-human-like movements and almost comical voice.

Figure 2: Forbidden Planet’s Robbie the Robot was pseudo-humanoid but not scary. (Source: Wikimedia Commons)

Figure 3: The Lost in Space robot is less human-looking and less intimidating. (Source: Wikimedia Commons)

Figure 4: C-3PO is humanoid in form, but because the facial features and movement are unlike those of humans, the robot doesn’t scare us. (Source: Wikimedia Commons)

However, even toys and dolls can be made to be intimidating, though their features are non-human. Examples here can include Chucky from the Child’s Play series of horror movies (Figure 5). Facial expressions designed into a doll, puppet, or robot can certainly convey a sense of evil or foreboding. Even when a doll or puppet is not trying to be scary, it can look scarier if it takes on human capabilities, like a ventriloquist puppet.

Figure 5: Chucky is a good example of a doll that can look intimidating by its facial expressions. (Source: Wikimedia Commons)

And, to further the point that the engineering of scary facial expressions can occur in a mechanical robot, consider the popular Terminator robot (Figure 6). The internal Terminator robot is clearly not human but still elicits a fear response. It could be the icy cold, lifeless stare and/or the use of the skull-style head that also touches a nerve.

Figure 6: The Terminator robot is not human, but it has a cold, lifeless stare. (Source: Wikimedia Commons)

Humans Portraying Robots

Examples of humans portraying robots can be found in popular culture and can illustrate how both emotionless and emotion-exaggerated robots can lead to acceptance because, underlying everything, their face is a real human’s. For example, the TV character Commander Data from Star Trek: The Next Generation (Figure 7) is very human-looking, even though his facial expressions are somewhat emotionless. Conversely, the robot Kryton from the BBC series Red Dwarf (Figure 8) has plates and a head shape that are non-human, but his comical facial expressions and jovial personality make him easily accepted.

Figure 7: Commander Data looks human but has an emotionless expression. (Source: Wikimedia Commons)

Figure 8: Kryton’s head does not look human, but his jovial personality makes him easily accepted. (Source: Wikimedia Commons)
Getting it Right uneasiness humans feel when a Like it or not, robots are replacing Designing Service Robots robot closely mimics a human in humans and will continue to replace appearance and movement but not humans in the workforce and For the time being, ignorance is bliss, quite fully. Pop culture has exploited service industries: Initially to handle and we are boldly engineering robots these inherent reactions to robots dangerous occupations where liability that will integrate with society. While in TV and film as well as in humans and insurance factor in and continuing most of our effort is going toward portraying robots. In designing robots to replace people at every juncture vacuum-cleaner robots, agricultural that will collaborate with humans, of society. Even the most abused robots, delivery drones and bots, and work side-by-side with us, provide countries with the most overworked even robotic assistants in space, we services, and extend our capabilities, and underpaid workforces will not be will soon take on a more human form designers must consider the uncanny able to compete with the relentlessly to better fit in and serve—especially valley and find the essential balance. working, non-complaining, will-less, with child care, babysitting, and and soul-less machines. health care.

This inevitable shift is particularly true It is important that our children with the aging populations and the are not afraid of robots that have diminishing workforces. There are charge over their safety, just as it is 23 24 ROBOTIC HANDS STRIVE FOR HUMAN CAPABILITIES By Bill Schweber for Mouser Electronics

The human hand is unmatched in its performance capabilities, but robotic hands that use soft grips and advanced algorithms are making great progress toward comparable performance in some applications.

The human hand is a marvel with multiple, tightly integrated sensors and actuators. It displays amazing dexterity and does so using only “linear contraction actuators” called muscles (Figure 1). It can lift, move, and place tiny, ultralight objects with precision and also manage relatively heavy, large ones. It can manipulate and coordinate actions, deal with the unexpected, and do all these things with apparent effortlessness.

Figure 1: The human hand, with its complex array of muscles, nerves, skin, and more, is unmatched in capability and sensitivity by even the best robotic hand available today. (Source: Mouser)

Yet the hand is much more than a grabber and lifter with skillfully controlled muscles. It has a huge array of dispersed yet coordinated sensors: Pressure, temperature, vibration, sliding, roughness, and even pain. Despite the number of sensor points and the constant flow of information they provide, it is somehow all handled by the body and the brain in real time without noticeable delay.

Many robotics-related efforts have strived to emulate the human hand, with good reason. A smart, electromechanical hand that could, at least in principle, perform the same tasks with the same skills, dexterity, and self-awareness would be more than an amazing technical accomplishment. It would be a major boon to the advancement and adoption of robots in consumer, industrial, professional, and other applications. There has been major progress, but there is also a very long way to go. Full replication of the human hand seems to be a long way off, despite the ones we see in futuristic movies.

Still, the lack of full human-hand emulation has not stopped strides which have made robotic hands into useful systems. At present, robotic hands that focus on actions such as lifting, turning, picking, and placing for a constrained and fairly well-defined task are already in widespread use in manufacturing, auto assembly, warehouses, research and development (R&D) drug testing, circuit-board assembly, and many more situations.

Getting Closer to “The Hand”

Despite the challenges, the quest for a universal hand continues, with many projects advancing to more sophisticated “fingers” and better tactile sensing. After all, the ability of the hand to sense its activity, and of the mind to discern intentions versus results and direct reaction via feedback, is a major factor in the versatility and success of the human hand. Robotic hands now sense pressure and strain to emulate sensing by nerves and may even sense position, orientation, and motion via gyroscopes and accelerometers. While such position and motion sensors were historically costly, large, and cumbersome, they are now available as low-cost, reasonably accurate, low-power devices.

One other thing that the human hand and mind do well is correlate multiple sensed inputs. For example, if you lightly touch the handle of a hot pan, the sensed temperature will be lower compared to grabbing that handle tightly. The human brain is good at integrating these disparate data types and extrapolating correct inferences. Working to add this capability is an area of intense research: using smart vision to assess situations and surroundings, provide data for a solution using adaptive learning and artificial intelligence (AI), and assist in implementing the solution while overcoming impediments.

One of the interesting things about hand-like robotic research is the diversity of the efforts and proposed solutions. If you look at designs, you’ll be struck not only by their general similarities but also by the major differences. The reason is that no one—except perhaps for some very leading-edge groups—is trying to recreate the human hand with all its attributes and capabilities. Instead, each team is picking a select subset of capabilities and striving to come close to that objective of focus.

One common factor in nearly all approaches to emulating the human hand is the use of “soft robotics.” Hard-surface, fixed-path grippers, which are essentially powered clamps, are well-suited to manipulating firm objects of known configuration, such as a car door. But in the real world of hands and activities, the skin and underlying tissues are relatively soft and therefore can conform to the diverse and unknown shapes of objects. They can also gently handle soft, irregular, and fragile objects, which is another key requirement (Figure 2).

However, soft robotics brings a new set of issues. In general, the versatile robotic controller needs to know a great deal about what it is grasping. In many cases, but not all, the human hand identifies what it is grasping using skin pressure sensors of various types and is often, but not always, aided by vision.

Effective Solutions Require Vision AI

The basic challenges of developing a viable hand-like design are formidable. Engineering considerations cut across multiple technologies and through many functional levels. The hardware stages that are the system’s foundation require new and unique sensing approaches. The acquired data must be useful in comparing a planned action to the model of what should happen, and then adjustments must transpire as needed (and adjustments are always needed, given the unknowns of different situations in the real world).

Making sense of the tactile and vision sensor data is an arduous task. A significant amount of computation and data-processing capability is necessary to support advanced algorithms, and they must operate in real time with the robot-hand action.

As more and better sensors are added (which is a good thing in many ways), the volume and type of data sets and what they represent also rapidly increase. Just “making sense” of it all requires significant programming skill and resources. Further, the multiple types of data (e.g., pressure, motion, position, image) must be integrated, correlated, and coordinated to calculate and create a set of specific, viable instructions for what to do next.
Universities Show the Way

Another interesting aspect of the robotics/hand technological development efforts is who is doing the research: In short, almost everyone, from universities to commercial start-ups (of which there are many) to established vendors who already support customers and installations. It's a huge potential market with truly countless applications, and a design that is superior for a large fraction of these can be both a major academic achievement and a commercial success.

Furthermore, from an academic perspective, the topic is interesting; it is a great focus of research for publishable papers, and there are so many feasible approaches to robotic hand development that each research center has the opportunity to pursue a different, non-overlapping solution. Also, any robotics effort requires a diverse set of technical talents encompassing mechanical design, electronic engineering, power and motor systems, sensors, and lots of software and algorithms, so there are many roles for many students. Finally, it is a tangible, demonstrable topic that easily attracts attention, and with that come the grant opportunities that academic institutions need and seek.

A look at a few recent and fascinating examples shows the creativity and the breadth of the solutions evolving to enhance the touch-and-grasp performance of robotic arms. They also show that widely different solutions are viable candidates to pursue.

Massachusetts Institute of Technology

A team at MIT is focusing on the common problem of packing and unpacking varied items, such as groceries, where objects of random sizes, shapes, and firmness must be picked and placed. Their solution uses suction cups on inflatable grippers controlled by a sophisticated, vision-driven grasping algorithm (Figure 3). This enables the robot to assess a bin of unknown objects and determine the best way to grip or suction onto an item amid the clutter without having to know anything about the object before picking it up.

Figure 3: This robotic arm uses a vision-driven grasping algorithm to navigate gripping items of various sizes, shapes, and firmness. (Source: MIT)

The system's algorithm employs multiple grasping tactics: Using suction to grab the object vertically or from the side; gripping the object vertically, like a claw in an arcade game; and, for objects that lie flush against a wall, first gripping vertically and then using a flexible spatula to slide between the object and the wall. These tactics combine with learning what works and what does not to improve the next performance, as well as to modify performance on a given pass if the attempt is unsuccessful.
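How a controller might dispatch among tactics like these can be sketched in a few lines. The BinItem fields and the thresholds below are invented for illustration; this is only a decision skeleton under those assumptions, not MIT's actual algorithm.

```python
from dataclasses import dataclass

@dataclass
class BinItem:
    top_surface_flat: bool      # flat enough for a downward suction seal?
    side_reachable: bool        # clear approach from the side?
    against_wall: bool          # lying flush against a bin wall?
    graspable_width_mm: float   # narrowest graspable dimension

def choose_tactic(item: BinItem, max_jaw_mm: float = 80.0) -> str:
    """Pick one of the grasping tactics described above."""
    if item.against_wall:
        return "spatula-then-grip"   # slide a spatula behind, then grip
    if item.top_surface_flat:
        return "suction-down"        # vertical suction grab
    if item.side_reachable:
        return "suction-side"        # suction from the side
    if item.graspable_width_mm <= max_jaw_mm:
        return "claw-grip"           # vertical, arcade-claw-style grip
    return "replan"                  # nothing suitable: nudge the pile and retry

print(choose_tactic(BinItem(False, False, True, 120.0)))  # spatula-then-grip
```

The learning element the article mentions would sit on top of this skeleton, re-ranking the tactics as success and failure statistics accumulate.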
University of Washington and UCLA

The human hand is adept at sensing shear—a sideways motion with pressure variations and subtle vibrations—to detect slippage, judge grasp integrity, and accomplish various tasks. Recognizing this, a team at the University of Washington (in conjunction with the University of California, Los Angeles, or UCLA) has developed shear-sensing fingers for robot hands as well as for prosthetics.

The design is based on a stretchable electronic skin, made from silicone rubber laced with tiny serpentine channels—each roughly half the width of a human hair—that are filled with an electrically conductive liquid metal that won't crack or fatigue when the skin is stretched (unlike solid wires). This skin is wrapped around the selected "end-effector" (finger), that is, on either side of where a human fingernail would be (Figure 4).

Figure 4: The "skin" in the fingertips of this hand uses a sophisticated network of fluid-filled, hair-thin channels to measure grip pressure. (Source: UCLA Engineering)

When a shearing action occurs, the microfluidic channels on one side compress while those on the other side stretch out. This, in turn, changes the electrical resistance of the fluid path in the various channels. The change is measured and then correlated with the shear forces and vibrations that caused it.
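A differential reading like this maps naturally to code. The sketch below shows why measuring both channel banks helps: subtracting one from the other cancels effects common to both, such as uniform pressure or temperature drift. All constants are invented, and a linear calibration is my simplifying assumption.

```python
def estimate_shear_n(r_stretch_ohm: float, r_compress_ohm: float,
                     n_per_ohm: float = 0.02) -> float:
    """Estimate shear force from the skin's two channel banks.

    Shear stretches one bank (resistance rises) and compresses the
    other (resistance falls). The differential rejects common-mode
    changes; n_per_ohm is a hypothetical calibration constant.
    """
    differential = r_stretch_ohm - r_compress_ohm
    return n_per_ohm * differential

# 112 ohms vs. 91 ohms across the two banks -> about 0.42 N of shear
print(estimate_shear_n(r_stretch_ohm=112.0, r_compress_ohm=91.0))
```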

University of California, San Diego

A team at the Jacobs School of Engineering at the University of California, San Diego is using a trio of inflatable silicone-rubber fingers which can sense pressure during operation. The gripper brings together three different capabilities: It can twist objects; it can sense objects; and it can build 3D simulations or models of the objects it is manipulating.

The pressure-sensing skin that covers the fingers measures pressure using embedded, conducting carbon nanotubes. The conductivity of the nanotubes changes with flexing and pressure, and this data is integrated to determine what the object is and how it is being held. This is done by using the sensor outputs from the finger-like grippers to create a 3D model of the object, a computationally intensive process.

This implementation, using inflatable fingers, gives the grippers multiple degrees of freedom, so they can pick up and manipulate objects. For example, a gripper can turn screwdrivers, pick up and screw in lightbulbs, and even pick up and hold pieces of paper, which are considerably challenging tasks for a robotic hand.

Summary

The present state of robotic hands is quite impressive, as is the rate and type of progress. While no such hand comes close to the incredible human hand, these hands are beginning to approach what was, until recently, only in the realm of science fiction.

Instead of hard, clamp-like grippers doing predefined tasks in well-controlled environments, today's designs often have softer, finger-like appendages supporting multiple data acquisition and control channels. Many of them do not use conventional rotary motors but prefer linear motion actuators or inflatable, finger-like structures with sensing built into their skin-like surfaces.

But as with the human hand, a capable and advanced robotic hand alone is not enough. Helping these robotic hands realize their potential of being as flexible in function and versatility as the human hand are the advanced algorithms, seamless integration, AI, and more that often come with the aid of vision systems and real-time image analysis.

ROBOTIC GESTURING MEETS SIGN LANGUAGE
By Bill Schweber for Mouser Electronics

The gestures that humans routinely and effortlessly make when speaking are important parts of conversation. They supplement the spoken words by conveying meaning, intent, attitude, and much more. That's why a face-to-face or even a video meeting has much more impact: Nuances and subtleties are added by the gesturing. Adding appropriate gestures may be a key factor in robots becoming accepted as companions and helpers.

But there's an area where gestures are not just a nice enhancement but essential to the robotic function: using sign language to communicate with the hearing impaired. If a robotic system were developed that could substitute for a human intermediary to both initiate and interpret hand-based sign language, it would be a major advance and a very useful tool for those—whether hearing impaired or not—who need to rely on this unique language. Though estimates vary widely, a credible 2005 study by Gallaudet University puts the number in the U.S. at roughly 500,000 individuals, albeit with a large potential error band.

Challenges of Robotic Sign Language

Using robotics and computing power for generating and interpreting sign language is really a dual problem, corresponding to the transmitter, channel, and receiver problem of a communications link:

• The transmitter—the signing hand, arm, and body—has the relatively easier role, because it's encoding a known message (letters, words, and phrases) using standard symbols in a fully defined format.

• The receiver—the interpreter of the signed symbols—must try to decode symbols that represent a message unknown in advance.

Between the two points is the noisy channel, which corresponds to shifts in position, lighting, individual "accents," and various non-ideal viewing angles. As a result, using robotics to either create or interpret sign language messaging has two independent research and development objectives:

• Creating the sign language gestures
• Interpreting the signs

As with communication transmitters and receivers, these are very different problems with different approaches, architectures, and technologies. According to Professor Han-Pang Huang, who heads the robotics laboratory at National Taiwan University (NTU), "Sign language has a high degree of difficulty, requiring the use of both arms, hands, and fingers as well as facial expressions."

Signing is deterministic and thus easier than interpreting, at least in principle, and much effort has been devoted to it. Still, signing presents huge challenges in its electromechanical aspects. It may seem that effective signing is just a matter of making a series of finger positions, each representing a letter, to spell out each word using what is known as fingerspelling (Figure 1), but fingerspelling is only a small subset of the total sign language environment.

Figure 1: American Sign Language provides a standard hand- and finger-based representation for each letter and also includes many common words and phrases. (Source: Mouser)

[CONT’D ON NEXT PAGE] 30 The full sign language repertoire The person types in the desired text, significant innovation. Antwerp’s includes many single signs for then letter parsing occurs, and the Sign Language Actuating Node common words and phrases, with corresponding instructions for finger (ASLAN), from the University of four manual parameters: and hand motions come up and take Antwerp, Netherlands, and sponsored action. The design challenge is in the by the European Institute for • Handshape: Arrangement of creation and control of a human-like Otorhinolaryngology, is a 3D-printed fingers to form a specific shape hand. robotic arm with the ability to convert • Movement: Characteristic speech into basic sign language movements of the hands Note that a major difference between (Figure 2). • Orientation: Direction of the palms robotic hands used for fingerspelling • Location: Place of articulation, and other robotic hands is that which can refer to the positions of the signing hand needs five fully the hands (relative to each other) or articulated fingers plus a wrist, positions of markers (i.e., fingertips) whereas robots for picking and relative to other places (e.g., chin, placing objects usually have some other wrist) version of a much simpler three- fingered actuator with fewer and A website called Signing Savvy different degrees of motion. That includes short video clips of American pick-and-place design is clearly Sign Language (ASL) fingerspelling, inadequate here. However, unlike word, and phrase gestures in action. pick-and-place robotic hands that Signing is more than hands, as there need some sort of force and pressure are also the non-manual components sensor to provide feedback from the Figure 2: ASLAN, developed by the of facial expressions. These consist of fingers or grippers for better control, University of Antwerp, is a 3D-printed two components: a signing hand does not need that robotic arm with the ability to convert additional feature. speech into basic sign language. • The upper part (eyes and eyebrows) (Source: University of Antwerp) • The lower part (mouth and cheeks) One example of a fingerspelling robotic arm is from NTU’s Robotics A three-person team built ASLAN Adding to the challenge is that manual Laboratory. This hand has six degrees using 25 3D-printed parts, 16 and non-manual actions must be of freedom in the robot arm alone, servo motors, and three motor done in a smooth and fluid manner, plus finger control. It uses six brushed controllers, plus other components with synchronized finger motions. DC motors with harmonic drive to and an Arduino® board as the system reduce the speed of the motors. The controller. The first prototype took 139 Present robotic efforts almost entirely resulting angular rate of motion is up hours to print. The rationale for the 3D focus on replicating the basic finger to 110 degrees per second, and the printing approach was to lower costs signing motions, in some cases weight of the arm is about 5kg. and also enable anyone with access with the supplementation of some to an appropriate 3D printer to make broader gestures, while minimizing A more ambitious project from the their own, considering the unique or avoiding the broader gestures and same organization is the Nino robot, mechanical parts were no longer non-manual effects. The objective which attempts to be more life-like. special, long-lead items. 
Note that a major difference between robotic hands used for fingerspelling and other robotic hands is that the signing hand needs five fully articulated fingers plus a wrist, whereas robots for picking and placing objects usually have some version of a much simpler three-fingered actuator with fewer and different degrees of motion. That pick-and-place design is clearly inadequate here. However, unlike pick-and-place robotic hands, which need some sort of force and pressure sensor to provide feedback from the fingers or grippers for better control, a signing hand does not need that additional feature.

One example of a fingerspelling robotic arm is from NTU's Robotics Laboratory. This hand has six degrees of freedom in the robot arm alone, plus finger control. It uses six brushed DC motors with harmonic drives to reduce the motor speed. The resulting angular rate of motion is up to 110 degrees per second, and the weight of the arm is about 5kg.

A more ambitious project from the same organization is the Nino robot, which attempts to be more life-like. This nearly full-sized humanoid, which is 1.45m tall and weighs 68kg, has demonstrated basic sign language functions via some of its 52 degrees of freedom, including individual finger joints. It's not an easy task, as noted by Professor Han-Pang Huang.

Not all robotic gesturing efforts are highly complex and backed by a large organization, yet some still show significant innovation. The Sign Language Actuating Node (ASLAN), from the University of Antwerp, Belgium, and sponsored by the European Institute for Otorhinolaryngology, is a 3D-printed robotic arm with the ability to convert speech into basic sign language (Figure 2).

Figure 2: ASLAN, developed by the University of Antwerp, is a 3D-printed robotic arm with the ability to convert speech into basic sign language. (Source: University of Antwerp)

A three-person team built ASLAN using 25 3D-printed parts, 16 servo motors, and three motor controllers, plus other components and an Arduino® board as the system controller. The first prototype took 139 hours to print. The rationale for the 3D-printing approach was to lower costs and to enable anyone with access to an appropriate 3D printer to make their own, since the unique mechanical parts would no longer be special, long-lead-time items.

Making Gestures Interpretable

For the more difficult problem of interpreting signed language, there are two basic, non-overlapping approaches: Gloved and video. In the gloved method, the person signing wears a special glove with embedded sensors, and the movements of the glove and fingers are tracked and interpreted by a processor. In the video method, an imaging system observes the person signing and then attempts to recognize the signs using feature-extraction algorithms.

Gloved Approaches

Even the glove approach has different implementations. One design, from a team led by Roozbeh Jafari, associate professor in the Department of Biomedical Engineering and researcher at the Center for Remote Health Technologies and Systems at Texas A&M, uses a glove-like unit instrumented with two distinct sensor types. First, there are inertial sensors (i.e., accelerometer and gyroscope sensors) that respond to motion and measure the accelerations and angular velocities of the hand and arm. This sensor pair captures the user's hand orientation along with hand and arm movements during a gesture.

However, this sensing by itself is not sufficient, as certain signs in ASL are similar in terms of the gestures required to convey the word. Although the overall movement of the hand may be the same for two different signs, the movement of the individual fingers may differ. To address this problem, the Texas A&M system adds an electromyographic (EMG) sensor to non-invasively measure the electrical potential of muscle activity. This enables the system to distinguish among various hand and finger movements based on their different muscle activities. The data it provides works in tandem with the motion sensors to provide a more accurate interpretation of the gesture being signed.

Despite its sophistication, the sensor arrangement is actually the simpler aspect of the system. The harder task is developing and executing complex algorithms to interpret the sign and display the correct letters and words for the gesture once the sensed data is sent (via a Bluetooth® link) to an external PC.

A very different glove approach is used in a setup at the University of California, San Diego. There, the glove is enhanced with commercially available, easy-to-assemble stretchable and printable electronics (Figure 3). It uses a leather athletic glove with nine stretchable sensors adhered to the back at the knuckles—two on each finger and one on the thumb. The sensors are made of thin strips of a silicon-based polymer coated with a conductive carbon paint and secured onto the glove with copper tape. Stainless steel thread connects each of the sensors to a low-power, custom-made printed circuit board attached to the back of the wrist.

Figure 3: "The Smart Glove," developed at the University of California, San Diego, translates the ASL alphabet into text and mimics sign language gestures. (Source: University of California, San Diego)

The sensors change their electrical resistance when stretched or bent, which in turn creates a different binary (1 or 0) code for each sensor point. This allows the sensors to directly create a unique code for the different letters of the ASL alphabet based on the positions of all nine knuckles, producing a nine-digit binary word that maps to a matching letter. Similar to the Texas A&M glove, this glove is also fitted with an accelerometer and a pressure sensor to help distinguish between letters whose gestures generate the same nine-digit code.

The data from the glove transmits to a smartphone or nearby PC over a Bluetooth link. The algorithms that attempt to map the data from the glove's gestures into actual letters must implement complex pattern matching, corrections for various known types of distortions and variations, and much more.
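The nine-knuckle encoding lends itself to a compact decoder. In the sketch below, the bit patterns are invented (the real letter codes come from the glove's calibration), and the tie-breaking with accelerometer and pressure data is only stubbed out.

```python
CODE_TO_LETTER = {
    0b111101101: "a",  # hypothetical knuckle pattern, thumb bit first
    0b000011111: "b",
    0b011011011: "c",
    # ...remaining letters omitted
}

def decode_letter(knuckle_bits: list) -> str:
    """Pack nine bent(1)/straight(0) knuckle readings into a nine-bit
    word and look up the matching letter. Letters that share a code
    would be disambiguated here using the glove's accelerometer and
    pressure-sensor data (omitted in this sketch)."""
    word = 0
    for bit in knuckle_bits:
        word = (word << 1) | bit
    return CODE_TO_LETTER.get(word, "?")

print(decode_letter([1, 1, 1, 1, 0, 1, 1, 0, 1]))  # -> "a"
```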
Video Approaches

While the glove approach is low-cost and direct, it requires that the signer wear the glove, which limits widespread use in more general real-world situations. The alternative is to use one or more cameras to capture the signing process, perform feature extraction and pattern recognition on the images, and then decide which character was signed.

Single-Camera Approaches

One such system was developed and refined by a multinational team from the Institute of Information Technology at the University of Dhaka, Bangladesh; Chalmers University of Technology, Gothenburg, Sweden; and Sookmyung Women's University, Seoul, South Korea. This system uses a single camera and extracts and classifies features in two phases: a feature-extraction phase and a classification phase. The image samples are resized and then converted from the camera's red, green, and blue (RGB) color map to the luminance, in-phase, and quadrature (YIQ) color space to improve image fidelity and clarity.
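The RGB-to-YIQ step is the standard NTSC transform, which can be expressed as a single matrix multiplication. The sketch below uses the standard coefficients; treating it as a stand-in for the team's exact preprocessing is an assumption on my part.

```python
import numpy as np

def rgb_to_yiq(rgb: np.ndarray) -> np.ndarray:
    """Convert an H x W x 3 RGB image (floats in [0, 1]) to YIQ."""
    m = np.array([[0.299,  0.587,  0.114],   # Y: luminance
                  [0.596, -0.274, -0.322],   # I: in-phase chrominance
                  [0.211, -0.523,  0.312]])  # Q: quadrature chrominance
    return rgb @ m.T

frame = np.random.rand(30, 40, 3)        # stand-in for a camera frame
yiq = rgb_to_yiq(frame)
print(yiq.shape)                         # (30, 40, 3); yiq[..., 0] is luminance
```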

The images are then segmented to detect and digitize the sign image.

In the classification stage, the algorithm constructs a three-layer, feed-forward, back-propagation neural network consisting of 40 x 30 neurons in the input layer, 768 neurons in the hidden layer, and 30 neurons (the total number of ASL images in the classification network) in the output layer. While this neural network may seem straightforward, the reality is that the actual network topology and construction involve many sub-arrays, multiple data-manipulation blocks, and dynamically calculated weightings.
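Those dimensions map directly onto a forward pass, sketched below with NumPy. The weights here are random and untrained (the team trained theirs with back-propagation), and the sigmoid activation is my assumption; only the 1,200-768-30 shape comes from the description above.

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(0.0, 0.05, (1200, 768))  # input (40 x 30 = 1,200) -> hidden
b1 = np.zeros(768)
W2 = rng.normal(0.0, 0.05, (768, 30))    # hidden -> 30 ASL output classes
b2 = np.zeros(30)

def sigmoid(x: np.ndarray) -> np.ndarray:
    return 1.0 / (1.0 + np.exp(-x))

def classify(image_40x30: np.ndarray) -> int:
    """Forward pass only; back-propagation training is omitted."""
    x = image_40x30.reshape(-1)      # flatten the 40 x 30 image
    h = sigmoid(x @ W1 + b1)         # hidden layer, 768 neurons
    scores = sigmoid(h @ W2 + b2)    # output layer, one score per class
    return int(np.argmax(scores))

print(classify(rng.random((40, 30))))  # index of the predicted ASL class
```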
Another team, based at the University of Surrey (Guildford, UK), uses a more detailed model of the three types of sub-unit extractions and multiple sign-level classifiers. The team's structural diagram elaborates on the multiple parallel paths of feature extraction and classification that are necessary.

Feature extraction begins with the captured image, followed by edge filtering and then generation of a token of the image. Next, the token is used in a matching process against existing test-reference images to determine a best-case match. This overall procedure is not a trivial process, of course, and has many areas of potential issues and errors in both feature extraction and matching.

There are tradeoffs when choosing between a single camera and multiple cameras. With a single camera, the volume of real-time data for analysis is smaller, and the feature-extraction task is more focused and defined. However, the images from a single camera lack depth and are subject to shadows, blocking, and other shortcomings. Thus, some aspects of the algorithms in a single-camera system are more complicated and prone to errors.

Multiple-Camera Approaches

For this reason, some groups use a multi-camera system, especially as the video equipment is now relatively low in cost and easy to interface. SignAll, based in Hungary, has developed what it claims is the first commercially available system for the automated translation of sign language. The system is based on computer vision and natural language processing (NLP) of data streams from three cameras and one depth sensor. The depth sensor is placed in front of the sign language user at chest height, and the cameras are placed around him or her to allow continuous tracking of the shape and path of the hands and gestures.

The system processes the images taken from the different angles in real time. Once the signs are identified from the images, an NLP module transforms the signs into grammatically correct, fully formed sentences. SignAll also maintains that its algorithms can decode prosody, the elusive aspect of language that subtly shapes the way we say what we say. Prosody incorporates rhythmic and intonational features, so the SignAll system can perceive the various combinations of the linguistic units. When signing in ASL, this involves head and body movements, eye squints, eyebrow and mouth movements, the speed and formation of signs, pacing, and pausing.

Conclusion

There is no doubt that signing systems that can replace human signers and interpreters would provide tangible benefits to a large group of people with hearing impairments as well as their companions. Many research projects are underway, primarily at universities, and there has been significant progress in overcoming the problems of robotic signing and interpreting. Nonetheless, a huge amount of work remains to be done in both hardware and data analysis.

Stepping back to view the amount of effort it takes to develop robotic sign language capabilities makes you truly appreciate the amazing capacity of a human being, who can (with some practice) learn to sign and interpret letters, words, phrases, and more using only eyes, body language, and an embedded processor (the brain) that weighs about 1.5kg and dissipates only about 50–75W. Will automated systems ever be that good? For the answer, you'll need to check back in 10 or 20 years.
