Copyright Notice

INTRODUCTION TO ROBOTICS - BY ONUORA AMOBI

©2019 Unirobotica Inc.

All rights reserved.

Any unauthorized use, sharing, reproduction or distribution of these materials by any means, electronic, mechanical, or otherwise is strictly prohibited.

No portion of these materials may be reproduced in any manner whatsoever, without the express written consent of the Publisher or Author.

Published under the Copyright Laws of The United States of America by: Unirobotica Inc.

3579 East Foothill Blvd, Suite #254

Pasadena, CA 91107 www.Unirobotica.com

Legal Notice

While all attempts have been made to verify information provided in this publication, neither the author nor the publisher assumes any responsibility for errors, omissions or contradictory interpretation of the subject matter herein.

This publication is not intended to be used as a source of binding technical, technological, legal or accounting advice.

Please remember that the information contained may be subject to varying state and/or local laws or regulations that may apply to the user’s particular practice.

The purchaser or reader of this publication assumes responsibility for the use of these materials and information.

Adherence to all applicable laws and regulations, both federal, state, and local, governing professional licensing, business practices, advertising and any other aspects of doing business in the US or any other jurisdiction is the sole responsibility of the purchaser or reader.

Unirobotica Inc. assumes no responsibility or liability whatsoever on behalf of any purchaser or reader of these materials.

All Rights Reserved.

All other trademarks are the property of their respective owners.

All trademarks and copyrights are freely acknowledged.

Table of Contents

Introduction

Part 1: The Background
What is Robotics?
More about Robotics
What is Automation?
History of Robotics
Advantages and Disadvantages of Robots
Who Invented The First Robot?
How Many Robots Are There and Where are They?
What Are The Different Types Of Robots?

Part 2: The Ins and Outs of Robots and Robotics
How Are Robots Programmed?
Software Used To Program Robots
Robot Languages
Robotic Joints and Degrees of Freedom
Robot Coordinates and Reference Frames
Robot Workspace
Robot Characteristics

Part 3: How Robots Will Affect Our Work and Personal Lives
Will Robots Take Our Jobs And Which Ones?
Which Industries Use Robots?
Robot Applications
Which Are the Most Sophisticated Robots?
What Are Some of the Home Robots?
What is Robotic Process Automation or RPA?
Can Robots Gain Consciousness?
What Are Some of the Ethical Questions of Life with Robots?
How Can We Successfully Adapt to Life with Robots?
Economic and Social Consequences of Using Robots
Conclusion

Introduction

Thank you for taking the time to read my Introduction to Robotics. Here you will learn what robots are, what they are used for and how they are going to change the world we live in (indeed, they already are).

The modern definition of the word “robot” is an electro-mechanical device that carries out tasks according to a set of programmed instructions, but its literal meaning, from the Czech word it derives from, is closer to “forced labor” or “slave”. Robots were once the stuff of science fiction movies and books, but not all robots look like the ones from Doctor Who – they take many forms and there are thousands of them in use the world over, right now.

For some people, the first thing they think of when they talk about robotics is automation. We know that robots are used for repetitive tasks without the need for human intervention, with the exception of programming the robot to do its task and providing it with a set of instructions.

Robots can be constructed quite simply or they can be incredibly complex. They can be small or absolutely massive, but they all share the same characteristics, joint types, coordinates, and degrees of freedom and, if you don’t understand all that yet, you will by the end of this guide.

One of the biggest fears we have is that robots are going to take over our jobs and our lives and while there is a small degree of truth in that, on the whole, they will actually bring about a whole heap of improvements. In fact, they already are.

Robots are employed in industries across the world, doing repetitive, mundane work and freeing up employees to focus on the real value in their jobs. They work in areas that are too dangerous for humans: environments of extreme temperature, radioactive areas, bomb disposal, space, underwater and more.

They are cost-effective because they don’t need breaks, they don’t need safety equipment and they can work in many different areas. They don’t get repetitive strain injury (RSI), they aren’t affected by breathing in chemicals, they can work under pressure and they can work 24/7 – all things that affect human beings. Yes, they are taking some jobs but, rather than seeing us all out of work eventually, robots will actually create more jobs than they take – jobs that are specialized and safer to do.

Robotics is the study and engineering of machines capable of doing the jobs that humans do with better precision, and of doing the complicated and repetitive tasks that we really don’t want to do. And it is a technology that is leaping forward. Robots are now being developed that can sense their environment and make decisions about what to do and, in the future, we expect to see more humanoid robots that act and look just like humans. All this and more is detailed in this short guide, so come on into a science-fiction world of robots that is set to become the norm, as they bring about life-changing developments for all of us.

Thanks for reading this book. I hope you have as much fun reading it as I had writing it.

Let’s get started.

Part 1: Background

What is a Robot?

The word, “robot” is not really all that well-defined right now and there is a lot of debate going on in the engineering, science and even hobbyist communities about the exact definition, what a robot is and what it isn’t.

If you think of a robot as being a device that looks somewhat like a human and that does what it’s told, then you have the same vision as many other people and, while there are robots like that being developed now, they aren’t the most common ones.

In fact, robots are far more common than most people realize, and we are likely to encounter them every single day. If you use an ATM to take money out, take your car through an automatic car wash, or grab a coffee from a vending machine, then you have likely had some interaction with a robot.

The Definition of a Robot

Most of us will agree on one thing – a robot is a machine that is programmed (usually) by a computer and automatically carries out actions. This is one definition and it is the one that allows for lots of different machines to be called robots and that includes those vending machines and ATMs. Your washing machine could also be classed as a robot because it is a machine that is programmed with a number of settings – you select the setting; the machine carries out the task programmed to that setting. There are other characteristics that allow us to differentiate between complex machines and robots. The main one is that a robot can respond to the environment it is in, allowing it to change its programming so its task can be completed; it will also recognize when that task has been completed.

The official definition of a robot, at least until it gets changed again, is “a machine capable of responding to its environment to automatically carry out complex or repetitive tasks with little to no direction from human beings”.

Robots Are Everywhere

If we use that definition of a robot, we can identify these as the robots commonly used:

Industrial – Robots are commonly used in industry, the first being Unimate, designed for General Motors in 1959 by George Devol. Thought to be the very first industrial robot, it was a robotic arm used for the manipulation of hot die-cast parts in the automobile manufacturing industry, a task that was considered far too dangerous for humans.

Medical - Medical robots are used for many different things like assisting with patient rehabilitation, performing surgery, disinfection of surgical suites and hospital rooms, and more.

Consumer - Most people have heard of Roomba, the very first robotic vacuum cleaner, and now there are many more models, along with robotic lawnmowers and more.

And the list of robots you probably didn’t realize were robots is extensive. These are things you come across daily and include:

• Automatic car washes
• ATMs
• Traffic lights
• Speed cameras
• Kitchen appliances
• Some children’s toys
• Automatic door openers
• Elevators

What is Robotics?

Robotics is a branch of engineering and science concerned with the modern design of robots. It draws on the electrical engineering, mechanical engineering, and computer science needed to design and build robots.

Robotic design is wide-ranging, and it covers everything from the design of a robotic arm commonly seen in factories, right up to the humanoid, autonomous robots, known as androids, that augment or replace human functions.

Many people won’t realize that Leonardo da Vinci dabbled in robotic design with a mechanical knight that could sit up, move its arms and head, and open and close its jaw. Back in 1928, a humanoid robot called Eric was unveiled at the Model Engineers Society exhibition held in London.

Eric wowed the crowds by delivering a speech while moving its head, arms and hands. And in 1939, another humanoid robot, called Elektro, was unveiled in New York at the World’s Fair. Elektro was able to speak, walk and respond to voice commands.

Popular Culture Robots

In 1942, Isaac Asimov, a Sci-Fi writer, produced a short story called Runaround, introducing us to the Three Laws of Robotics. These laws were supposedly taken from the fictional Handbook of Robotics, 56th Edition, 2058 and, according to some Sci-Fi novels, those three laws are the only safety features required to ensure robots operate safely:

1. Robots may not directly cause injury to human beings or indirectly, by way of inaction, allow harm to befall any human being;

2. Robots must obey any orders a human being gives them, except in cases where an order would contravene the first law;

3. Robots must protect their own existence, so long as doing so does not contravene the first or second laws.

Some of the other robots from popular culture include:

• Robby – a robot with a distinct personality, introduced in the 1956 Sci-Fi film Forbidden Planet
• BB-8, R2-D2 and C-3PO – introduced in the Star Wars movies
• Data – a Star Trek character who pushed the boundaries of technology and AI to the point where viewers were questioning when an android actually achieves sentience

Androids and robots have been and continue to be created to help the human race in some tasks and if you follow the latest news you have probably realized that, before long, we will all have one of these personal androids to help us with our daily lives.

More about Robotics

Robotics is a branch of science and engineering that also includes electronic engineering, mechanical engineering, computer science, information engineering and others. Robotics is all about the design, the construction and operation of robots along with their control systems, information processing and sensory feedback.

All those technologies are used for developing machines that can act as substitutes for human beings, replicating their actions. We see robots being used in many different scenarios and situations, but perhaps the most common uses today are in manufacturing and in dangerous situations such as the detection and deactivation of bombs, or in places where humans simply cannot survive: underwater, in space, around hazardous materials, and in extreme heat.

We can date the concept of creating autonomous machines back many centuries, but it wasn’t until the 20th century that we began to see proper research into how robots function and how they could potentially be used. Throughout history, inventors, scholars, technicians and engineers have assumed that, one day, robots would be capable of mimicking human behaviors and would be able to do some tasks in the same way that humans do them.

Today, robotics is one of the fastest-growing fields as technological advances into research, design and construction of robots continue in military, commercial and domestic areas.

Robotic Similarities

There are many different types of robot now, used in differing environments, for all different uses. However, despite the diversity of form and application, all share three basic construction similarities:

1. They all have a mechanical construction, which is a form or frame specifically designed for a certain task. For example, a robot designed for traveling over muddy or heavy terrain might be given caterpillar or tank tracks. The mechanical aspect of any robot is almost always the creator’s solution to the specific task the robot is constructed for, as well as a response to the physics of its environment. Remember, form always follows function.

2. All robots have electrical components of some kind, used for powering and controlling the robot. Take the robot with the caterpillar tracks, for example; it would need a power source or mechanism that controls the track treads, and that source is electricity, traveling from a battery through a wire – the most basic of electrical circuits. Even a robot powered by petrol requires some form of electrical current for the combustion process to begin, which is why you see batteries on most petrol-powered machines, like cars. The electrical aspect is used for controlling movement via motors, sensing via electrical signals (the measurement of sound, heat, energy, position, etc.), and operation via the electrical energy supplied to the sensors and motors.

3. Lastly, every robot has computer code of some kind to program it. This computer program is how a robot determines when to do something and how to do it. Back to the caterpillar track example: if a robot needs to move over muddy terrain, it may be constructed correctly and it may have the right amount of power coming from the battery, but unless it is programmed, it cannot move. Computer programs are at the core of any robot – the electrical and mechanical construction may be perfect, but poor programming will only result in poor or no performance. Robotic programs fall into three categories – remote control, AI, and hybrid (see the sketch after this list). Remote control robots are programmed with a set of commands that they will perform only when a control source sends a signal – normally a human being operating a remote control. These robots tend to fall under automation rather than robotics. A robot using artificial intelligence can interact with its environment without needing any type of control source, and it can decide how to react to problems or objects it comes across using its programming. A hybrid uses both RC and AI controls.
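
To make those three categories a little more concrete, here is a minimal Python sketch of a hybrid controller. All of the function names (read_remote_command, read_distance_sensor, drive) are hypothetical placeholders rather than any real robot’s API: the robot obeys a remote command when one is present and otherwise falls back to a simple autonomous rule based on its sensor.

    # Hypothetical sketch of a hybrid (remote-control + AI) robot controller.
    # None of these functions come from a real library; they stand in for
    # whatever hardware interface a specific robot would provide.

    def read_remote_command():
        """Return a command string from the operator, or None if nothing was sent."""
        return None  # placeholder

    def read_distance_sensor():
        """Return the distance to the nearest obstacle, in metres."""
        return 1.5  # placeholder

    def drive(direction):
        print(f"driving {direction}")

    def control_step():
        command = read_remote_command()
        if command is not None:
            # Remote-control mode: do exactly what the operator asked.
            drive(command)
        else:
            # Autonomous mode: a trivial rule based on sensor feedback.
            if read_distance_sensor() < 0.5:
                drive("turn_left")   # avoid the obstacle
            else:
                drive("forward")

    if __name__ == "__main__":
        control_step()

A purely remote-controlled robot would keep only the first branch; a purely AI-driven one would keep only the second.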

What is Automation?

Automation is the use of computer and electronic-controlled devices to take control of one or more processes and its goal is to increase reliability and efficiency.

Most of the time, however, automation actually replaces human labor and there are fears that, in time, more people than before will be out of work. In a lot of manufacturing plants around the world, automation already plays a big part, with robotic assembly lines doing the work that human beings used to do.

Automation covers a wide range of elements, functions and systems in just about every industry, particularly in the transportation, manufacturing, utilities and facility operations industries. In addition, more and more countries are using automation for their national defense systems.

Today, automation exists in just about every function in industry, such as procurement, installation, integration, maintenance and even creeps into sales and marketing.

Automation and the Office

Over the last four decades, information technology has revolutionized office environments, with communications, correspondence, documenting and even filing becoming automated functions. The level of noise, not to mention materials, has dropped with paper documents being transferred to digital storage and employees rarely using pens, calculators, phone books and more. There are no more map books, Filofaxes and diaries – just about everything has become automated, eliminating the need for most materials except for a cell phone and a computer.

These days, maps, phone numbers, and more are all available online, at the click of a button, and when we telephone a service or visit a website, more often than not we are talking to robots that have voice recognition programmed into them or chatbots that act as robotic customer services.

Automation and Flexible Working

Technology has also enabled the shift from traditional 9-to-5 office hours to flexible working. With the internet, cloud computing, smartphones, tablets, and laptops, we can work from wherever we are, whenever we want. This means that many employees can now better manage their work-home balance, increasing morale. However, that does lead to a new problem – people find it hard to actually switch off from their work life.

Banks used to be full of staff and customers; today there are fewer physical branches open as more people do their banking online or on the phone – talking mostly to robots. Even if you do venture inside a bank branch, you will find few humans and mostly machines, but the latest ATMs are for more than just taking money out. They also allow you to put money in and pay your bills, and most people can manage all their banking without the need to speak to a human employee.

According to Professor Henrik Christensen of the Contextual Robotics Institute at the University of California, San Diego, babies born today will never drive a car. By the time they are old enough, autonomous cars will be the norm, and we are also likely to see home companion and healthcare robots everywhere.

Automation and Manufacturing

Over the last three or four decades, manufacturing has undergone some serious changes and there has been a significant decline in employment within manufacturing in advanced economies. In 1996, around 14% of the US workforce was employed in manufacturing; today it is no more than 8% - a huge decline in just 20 years. However, it wouldn’t be fair to say that automation is responsible for all that decline – some jobs have been moved to countries where the labor costs are cheaper. According to experts, that rate of decline is not going to slow any time soon; in fact, it is entirely possible that it will just get worse.

While automation is seeing some jobs being lost, it is creating others and, over the next couple of decades, we can expect to see huge changes in the way we live and work, all thanks to automation, but it won't all be bad.

History of Robotics

For many people, the word “robot” fills their minds with science fiction novels and movies and while these tend to miss the mark more often than not, the evolution of robots does owe something to masters of sci-fi such as Isaac Asimov.

In broad terms, humans have been developing robotics and dabbling in automation for centuries and we can journey through a varied and colorful history.

Back to Egypt and Greece

One of the very first instances of automation or robotics is the Egyptian water clock and the oldest example, found in Amenhotep I’s tomb, can be dated back to around 1500 BC. It was constructed as a container with measurement markings on the inside, from which water dripped out over a period of time. To see what the time was, the owner just had to look at the water level against the markings.

What truly made later water clocks stand out was not the fact that they used water; it was that the force of the water was used to drive human figurines that struck bells or banged gongs every hour, on the hour.

These water clocks were used in Greece in 325 BC and the second example of robotics came about 25 years later. Archytas, a Greek Mathematician, designed and constructed a mechanical bird, named The Pigeon, that was propelled using steam. And we can't forget Leonardo da Vinci. In 1495, he built the Robot Knight, a robotic figure that could stand, sit and move its arms about using a system of cables and pulleys.

The Evolution of Automation

We only really started to see the evolution of automation in the Western world in the 17th and 18th centuries. In the 1730s, a French inventor named Jacques de Vaucanson developed no fewer than three automata. The first could play 12 different songs on a flute, the second could play a flute, drum and tambourine, and the third, the most famous, was a duck. The duck could flap its wings, quack, move and “eat”, and the sounds it made were quite lifelike.

The first real modern automaton, though, came about in 1810, built by a German called Friedrich Kaufmann. He developed a robot that looked very much like a soldier and, through the use of automatic bellows, it would blow a trumpet.

Mechanical Programming Developments

Ada Byron, Countess of Lovelace, was responsible for advancing the development of mechanical programming. An English mathematician, she is best known for writing the very first algorithm for the Analytical Engine. This was a general-purpose computer proposed by another mathematician, Charles Babbage.

It was Ada Lovelace who recognized what applications the machine could be used for and, between 1842 and 1843, she explained what the function of the machine was. She died aged 36 and Babbage was never able to finish the Analytical Engine, but it was the precursor of the digital computers we use today.

Further Advancements – 1800s and Onwards

In 1898, Nikola Tesla built a wireless torpedo, controllable by remote control. He called the process “tele-automation” and his torpedo was shown off at Madison Square Garden. However, we didn’t hear the term “robot” until it was used by Karel Capek in 1921 to describe the automata in his fiction. The term “robotics” was made famous in 1942 by Isaac Asimov and, after the end of WWII, his robots did two things – they captured the imagination of post-war America and ushered in a brand-new historical era.

In 1946, ENIAC, the Electronic Numerical Integrator and Computer, was constructed. It was one of the very first general-purpose electronic computers and several people were involved in its programming, including Frances Spence, Betty Jennings, Marlyn Wescoff, Betty Snyder, Kay McNulty, and Ruth Lichterman. The program manual for ENIAC was written by Adele Goldstine.

In 1950, the C10 program for UNIVAC I was co-designed by Ida Rhodes. This was the computer system that would later be used for determining the census in the USA. A few years later, in 1954, George Devol invented Unimate, the very first industrial robot in the world, used for transporting die castings and welding them onto automobile bodies. In much the same way as today's manufacturing automation, these robots were programmed to do specific tasks to replace unskilled labor. Unimate was, and still is, a very important milestone in robotics history.

In the 1960s and 1970s, we started to see automatons designed around the human arm – in 1966 we got Shakey (a mobile robot rather than an arm), in 1969 the Stanford Arm, and in 1974 the Silver Arm. These led to the PUMA 350 in 1985 and the CyberKnife in 1992, with both serving as innovative technology used in the medical field.

The arm-like robots are indicative of much of today's robotics and one, EMMA (Expert Manipulative Massage Automation) was designed by Albert Zhang and is a one-armed robot designed for providing massage therapy.

Modern Robotics

Automation is common in today’s modern world, even if we don’t realize it. Automated machines take care of a great deal of manual labor, providing humans with the time and ability to learn brand new skills in their field of expertise.

Since the year 2000, there have been many more advancements in automation and robotics, including artificial intelligence. Automated machines, with programming to do the same job over and again are commonly used in exploration (space and underwater), the military, manufacturing, and commercial agriculture.

Artificial intelligence is used to assess environments and take actions that ensure programmed goals are reached, and advancements in recent years have led to technology that can prevent identity theft, improve search engine results and crack ciphers on behalf of the FBI. As we continue to advance, AI is likely to be the core technology used in robotics development.

Netflix, Amazon, Hulu, and other on-demand streaming websites use predictive analysis to recommend shows and movies to viewers, thus improving customer satisfaction and businesses use software for sentiment analysis to get better insights on their service and products and what the customers think. This enables better marketing and the ability to deal with negative feedback much quicker.

Where Are We Headed in the Future?

It isn’t very easy to answer this question because innovation is moving at such a rate. However, there are predictions that robots are going to play a significant role, not just in business but in the home too. Home products like the Amazon Echo with Alexa, and assistants like Siri, are already proving popular, and smart homes are fast gaining ground because of the convenience they offer, not to mention the savings on energy bills, the security and the comfort they bring.

Three of the biggest corporations in the world – Amazon, Google, and Microsoft – are all developing new and better business technologies. For example, those who use Microsoft Office 365 can now use Microsoft Teams for making and receiving business calls without needing any other app. Google has come up with Pixel Buds that can translate up to 40 different languages in real-time.

We can also expect automated robots to become far more commonplace outside of the shipping and manufacturing industries. Around 35% of organizations in the logistics, health, and utilities fields are already starting to explore how they can use automated robots.

As far as self-drive technology goes, we don’t expect to see such huge leaps in innovation. Accidents involving self-driving cars already show that it may be difficult for self-drive cars and impulsive human behavior to coexist.

We do expect to see advancements in the use of robotics in space exploration. In 1971, the Soviet Union’s Mars 2 mission reached Mars and its lander became the first man-made object to reach the Martian surface and, since then, engineers have gone much further in the development of innovative technology.

Take the ISS Robotic External Leak Locator, for example. Developed by NASA, it is designed to check the space station for any signs of ammonia leaks so they can be fixed, thus reducing the risk to human crew members.

Rather than the terrific innovations predicted by science fiction, we expect to see breakthroughs in robotics and automation to improve on what we have been looking to advance for a long time – education, communication, and life.

Advantages and Disadvantages of Robots

Robotic automation is growing in popularity across many industries and sectors and across the world and this popularity looks set to continue as businesses take full advantage of the benefits.

More manufacturers are investing in robotic technology but not everyone is convinced of those benefits. Some people are cautious about how existing processes will be adapted and the consequences of doing it. And there are some very reasonable objections but mainly from those who have not yet tried the technology for themselves.

Advantages

• Cost-Effectiveness – robots need no breaks, no holidays, no sick leave and they don’t need to work shifts. They can work repetitive cycles and, so long as they are kept maintained, they can continue until their programming is changed. This benefits human workers – when they no longer have to do such repetitive work, they are no longer at risk of RSI or other work-related injuries. An increase in production at a much-reduced cost brings about some obvious benefits and the investment cost can easily be recouped quite quickly; from then on, the gains are fast.

• Better Quality Assurance – very few workers are happy doing repetitive work and, after a while, their levels of concentration start to wane. Not only does this lead to the potential for expensive errors, but it can also lead to the potential for injury. By using robotic automation, these risks are eliminated because items are accurately produced and checked to ensure the required standard is met without fail. More products are manufactured at a much higher standard and these create no end of new possibilities that companies can expand on.

• An Increase in Productivity – when repetitive tasks are automated, it leads to a definite increase in productivity. Robots are designed specifically for repetitive movement where humans are not. By taking repetitive work away from employees, we give them the opportunity to expand their skills into other areas of work and this creates a much better environment that everyone can benefit from. They will have higher levels of energy and will focus better on their work – not only do the robots increase productivity, so will happier employees.

• Hazardous Work – quite aside from the risk of potential injury, some employees need to work in hazardous or unstable environments. For example, underwater, in areas where there are high levels of chemicals, in space and so on. Using robots to do these jobs minimizes the risks because they can work without sustaining injury or harm. Industries where employees are required to work in extremely low or high temperatures tend to see a high turnover in human resources. Using automated robots minimizes the risk of material waste and takes away the risks to humans.

Disadvantages

• The Potential for Job Losses – this is one of the biggest issues and concerns surrounding robotic automation. If robots can work faster and more accurately, it is feared that humans may become redundant. However, while these fears are entirely understandable, they are not terribly accurate. The same thing was predicted for the industrial revolution and, as we can see from history, it never happened. Indeed, humans went on to play a very important role. We only have to look at Amazon to see one of the best examples – during a period in which it increased its robots from 1,000 to more than 45,000, its employee numbers also rose.

• Initial Investment – this is quite possibly one of the largest obstacles that will determine whether a company makes the investment in robotic automation or whether they will wait. Before any company can consider it, they must build a comprehensive and solid business case around the pros and cons of implementation. While the return on investment (ROI) can be significant and can be realized quite quickly, it does require a substantial amount of cash flow up front, and it must be considered whether it is worth risking company stability if the returns cannot be guaranteed. However, most of the time repayment schedules are available and this makes it much more affordable and easier for finances to be controlled. Something else to consider is the increase in output and the reduction in poor workmanship when deciding whether automation should be employed or not.

• Hiring Skilled Staff – over the last 10 years, manufacturers have found it much harder to find skilled people to take on specialized jobs. Automation just adds to that problem because robots need to be programmed and they need to be operated too. But this opens even more opportunities for employees to be trained up and to boost their own skills. Automation providers can easily help with installation and setup and, so long as the staff has the correct skills, they can learn to manage the robots.

Who Invented the First Robot?

Evidence suggests that the earliest human-like mechanized forms can be dated back to Ancient Greece, and concepts of artificial beings have appeared in fiction since at least the 19th century. However, despite this, the evolution of robots did not really begin properly until the 1950s, when George Devol invented a robot that was programmable and digitally operated. It was that robot that laid the foundations for the robotic industry we have today.

The Early Years

Ctesibius was an ancient Greek engineer and, in about 270 BC, he produced water clocks that contained loose figures, or automatons. Then we have Archytas of Tarentum, a Greek mathematician who developed a steam-propelled mechanical bird named “The Pigeon”. And around 10-70 AD, Hero of Alexandria came up with several innovations in the automaton field, including one that, it is alleged, could actually speak.

On to Ancient China, where we find an account, penned in the 3rd century BC, about an automaton – the account tells of King Mu of Zhou being presented with a life-sized mechanical figure by an artificer called Yan Shi.

Theory and Science Fiction

Visionaries and writers have long imagined a world where robots are a part of our lives. Mary Shelley penned “Frankenstein” in 1818, a story about a lifeform brought to life by Dr Frankenstein, a very mad but very brilliant scientist.

100 years on and a Czech writer called Karel Capek came up with the term, “robot” when he wrote a play called “Rossum’s Universal Robots” or R.U.R, in 1921. The plot was a simple one yet quite terrifying – the main character built a robot which then went on to kill.

In 1927, a film called “Metropolis” was released, directed by Fritz Lang and featuring the Maschinenmensch, or Machine Man, a humanoid robot – the very first time one had ever been depicted on film.

Isaac Asimov, a futurist and Sci-Fi writer, first coined the term, “Robotics” in 1941 when he was describing robot technology; he predicted that a very powerful industry of robots would rise in the future. He also penned a story called “Runaround”, about a robot and the three laws of robotics; the story was centered on the ethical questions surrounding artificial intelligence.

And in 1948, “Cybernetics” was published by Norbert Wiener, forming the basis of practical robotics: research into AI and control built around the principles of cybernetics.

The First Robots

William Grey Walter, a pioneer in British robotics, was responsible for inventing Elsie and Elmer. These two robots were developed in 1948 and used basic electronics to mimic human-like behavior. These were tortoise-like and were programmed to head to their charging stations when their power started running down.

In 1954 we got Unimate. Developed by George Devol, this was the first programmable and digitally operated robot. This was followed in 1956 by Devol forming the first robotics company in the world with his business partner, Joseph Engelberger, and, in 1961, Unimate was put to work in a General Motors automotive factory in New Jersey.

Modern Robotics

We now see both industrial and commercial robots in widespread use. They can perform some jobs with far more reliability and accuracy than humans and at a far cheaper cost. They tend to be used for repetitive, time-consuming jobs, and those that are far too dangerous or dirty and pose a risk to human life. Their most widespread uses are in manufacturing, packing, assembly, space and earth exploration, weaponry, surgery, laboratory research, and the mass production of industrial and consumer goods.

How Many Robots Are There and Where are They?

According to the latest World Robotics Report, annual sales stand, globally, at more than $16 billion, and the IFR forecasts that shipments may recede somewhat in 2019 but that, through 2022, we can expect annual growth of around 12% per year.

2018 was a record year even while the main robotics customers, the electrical/electronics and automotive industries, struggled through a tough year.

Certainly, the trade conflict between the US and China posed, and still does, no small amount of uncertainty and, in situations like this, many customers postpone investments such as automation. But, for the first time ever, robot installations have topped the 400,000 per year mark. By 2022, that number is expected to surpass 580,000.

Right now, Asia holds the largest industrial robot market share. 2018 saw the Republic of Korea and China decline while Japan saw a considerable increase. In total, though, Asia saw an overall growth of 1% while in Europe, the second-biggest market, that figure stood at 14%, reaching a six-year high. And in the Americas, growth jumped more than 20% from 2017, another six-year high.

Top Five World Robot Markets

Although almost every country is involved in the robot market, there are five that stand head and shoulders above the rest, taking a collective 47% of the global installations for 2018. Those markets are Japan, China, the Republic of Korea, Germany and the United States.

China

China is, and has long been, the largest market, with a 36% share of installations – in 2018, more than 150,000 units were installed, around 1% fewer than in 2017 but more than the total installed in the Americas and Europe combined. The total value of installations was $5.4 billion, 21% more than in 2017.

Robot suppliers in China increased their share of installations in the domestic market by around 5%, a result that is in line with the policy of promoting domestic suppliers in China. By the same token, installations by foreign suppliers dwindled by 7%, a decline caused in part by a weaker automobile industry.

Japan

In Japan, robot sales increased by 21%, with around 55,000 units installed – the country’s highest figure to date. Average growth of 17% per year over the last six years is good going for a market that already features high automation levels in industrial production. Japan is the world’s number one manufacturer of industrial robots, and 52% of the global supply came from Japan in 2018.

United States

The USA saw an increase in robot installations for the eighth year running, peaking at just over 40,000 units in 2018, up 22% from 2017. Since 2010, it is the ongoing production automation trend that is driving this growth as a way of strengthening US domestic and global market economies. As far as annual installations go, the country is third, pushing Korea into fourth.

Korea

In the Republic of Korea, installations dropped by 5%, with just 38,000 units sold in 2018. The robot market there depends very strongly on the electronics industry, which suffered a tough year, but installations have still increased by an average of 12% year on year.

Germany

The fifth-largest in the world, Germany is also the number one robot maker in Europe, followed closely by Italy and France. Most installations are driven by the automotive industry.

In terms of numbers of robots worldwide, production is accelerating fast, with an average global density now of 74 robots per 10,000 employees. That is up from 66 in 2015. By region, the average density in Europe is 99 per 10,000; in the Americas, it is 84, and in Asia, 63.

The top 15 countries in the world, right now, are:

• South Korea – 631 per 10,000
• Singapore – 488 per 10,000
• Germany – 309 per 10,000
• Japan – 303 per 10,000
• Denmark – 211 per 10,000
• United States – 189 per 10,000
• Italy – 185 per 10,000
• Spain – 160 per 10,000
• Canada – 145 per 10,000
• France – 132 per 10,000
• Switzerland – 128 per 10,000
• Australia – 83 per 10,000
• United Kingdom – 71 per 10,000
• China – 68 per 10,000
• India – 3 per 10,000

Of course, this list will change each year but the top countries are likely to remain in roughly the same positions as the number of robots rises year on year.

What Are the Different Types Of Robots?

With an incredibly fast growth in automation and robotics being witnessed over the last few years, all industries, especially manufacturing, are starting to adopt the technology into their processes. Industrial robots now do repetitive tasks, with a high degree of accuracy and precision and the products coming off the line are of far better quality.

One reason manufacturers use robots is that output is increased, due to the robots not needing to take breaks as human workers do. And they can work in environments that are dangerous or harmful to humans, and that results in better health and safety in the workplace.

Because of this advancement in technology and the benefits that robots bring to the workplace, we are starting to see more types of robots being developed.

Major Types of Robot

Looking at the mechanical configuration of robots, we can classify them into six main types – cartesian, articulated, SCARA, polar, delta and cylindrical.

Articulated Robots

This is the most common type of robot found in industrial scenarios. In configuration, it looks much like a human arm, connected to a base with a twisting joint. There can be anything from two to ten rotary joints connecting the arm links, and each of those joints provides one more degree of freedom. The joints may be orthogonal or parallel to each other. The most commonly used articulated robots have six degrees of freedom.
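
To make “degrees of freedom” concrete, here is a small Python sketch of the forward kinematics of a two-joint planar arm – two rotary joints, so two degrees of freedom. The link lengths and angles are made-up illustration values, not taken from any particular robot.

    import math

    def planar_arm_end_position(l1, l2, theta1, theta2):
        """End-effector (x, y) of a 2-link planar arm.

        l1, l2 -- link lengths
        theta1 -- first joint angle, measured from the x-axis (radians)
        theta2 -- second joint angle, measured from the first link (radians)
        """
        x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
        y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
        return x, y

    # Example: two 0.5 m links, both joints at 30 degrees.
    print(planar_arm_end_position(0.5, 0.5, math.radians(30), math.radians(30)))

A real six-axis articulated robot does the same kind of calculation, just with six joint angles and in three dimensions.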

Advantages

• They work fast

• They take up little room on the factory floor

• They produce a much higher output

• Easy to align several robots

Disadvantages

• They must have a dedicated controller

• The programming is quite complicated

• The kinematics are also complicated

Cartesian Robots

Also known as gantry or rectilinear robots, cartesian robots are rectangular in configuration. They have three prismatic joints that deliver linear motion by sliding along the three perpendicular axes, X, Y and Z.

Some also have a wrist attached, allowing for rotational movement. These types of robots are generally used in industrial applications because the flexibility of the configuration makes them ideal for specific applications.

Advantages

• Highly accurate in positional terms

• Simple to operate

• Can easily be programmed offline

• Customizable

• Handle heavy loads easily

• Don’t cost so much

Disadvantages

• They require a large area for installation and operation

• The assembly is quite complex

• Limited movement – only one direction at any one time

SCARA Robots

SCARA stands for Selective Compliance Assembly Robot Arm and these robots have work envelopes shaped like a donut. They have two parallel joints, each providing compliance in a specific plane.

There are vertical rotary shafts with an end effector on the arm that has horizontal movement. SCARA robots are specialized in lateral movement and tend to be used on assembly lines. They are faster in movement and integrate better than cartesian or cylindrical robots.

Advantages

• They work fast

• They are excellent at repetitive work

• They have a large workspace

Disadvantages

• A dedicated controller is required

• They are limited to planar surfaces

• They are not easy to program offline

Delta Robots

Also called parallel link robots, the delta robot has a series of parallel joint linkages, all connected to one common base. Because each joint is directly connected to the end effector, its position can be controlled easily through the robot arms, which leads to very fast operation. With a work envelope shaped like a dome, these robots tend to be used for product transfer or high-speed pick-and-place applications.

Advantages

• Very fast

• High level of operational accuracy

Disadvantages

• Operation is complicated

• A dedicated controller is required

Polar Robots

Polar robots are configured with one twisting joint that connects the arm and the base, and a combination of two rotary joints and one linear joint that connects the links. Sometimes called spherical robots, they have a spherical work envelope and their axes form a polar coordinate system. Polar robots have one central shaft that pivots and a rotating arm that can be extended. Formed much like a gun turret, polar robots can sweep through a large space, but the arm’s access is limited to within that envelope.
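
The “polar coordinate system” mentioned above simply describes a point by a reach plus two angles instead of X, Y, Z values. The short Python sketch below converts such spherical coordinates into Cartesian ones; the variable names and values are illustrative only.

    import math

    def spherical_to_cartesian(r, azimuth, elevation):
        """Convert a reach (r) plus two angles into X, Y, Z coordinates.

        azimuth   -- rotation of the base about the vertical axis (radians)
        elevation -- tilt of the arm above the horizontal plane (radians)
        """
        x = r * math.cos(elevation) * math.cos(azimuth)
        y = r * math.cos(elevation) * math.sin(azimuth)
        z = r * math.sin(elevation)
        return x, y, z

    # A 1.2 m reach, base rotated 45 degrees, arm tilted up 30 degrees.
    print(spherical_to_cartesian(1.2, math.radians(45), math.radians(30)))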

Advantages

• Has 360° reach

• Can reach below or above any obstacle

• Large work output volume

• Don’t need so much floor space

Disadvantages

• Reach does not extend above itself

• Has only a short reach vertically

• In the rotary motion direction, they have low repeatability and accuracy

• Not so common in new designs anymore

Cylindrical Robots

Cylindrical robots have a minimum of one rotary joint, placed at the base, and a minimum of one prismatic joint, used to connect the links together. Their workspace is cylindrical, and their configuration is of a pivoting shaft with an extending arm that slides to move vertically.

As such, a cylindrical robot has horizontal and vertical linear movement and, on the vertical axis, it also has rotary movement. The end of the arm is compact in design, allowing the robot to reach into a tight work envelope without losing any repeatability or speed. Cylindrical robots tend to be used in simpler applications, for picking up, rotating and placing materials.

Advantages

• Simple to install and operate

• Requires the minimum amount of assembly

• Has 360° reach all around itself

• Doesn’t need much floor space
• Capable of carrying large payloads

Disadvantages

• Cannot get around obstacles

• Rotary motion direction is low in accuracy

• Not commonly used in new designs

Future Use

When deciding to implement automation and robots into their working practices, any business must carefully consider what type of robot they need for specific applications.

Manufacturers must take several factors into consideration – orientation, load, precision, speed, travel, duty cycle, and the environment – before they choose the type of robots they require to provide effective and profitable results.

Leading companies in robotics are now providing specific robots and automation solutions to cater to individual needs and, as time goes by, we can expect to see more and more customized robots in the workplace, although all will be based on one of the above six types.

Part 2: The Ins and Outs of Robots and Robotics

Components of A Robot

Robots are automatically functioning machines that can adapt to environmental changes. The word “robot” was first used in 1921 in a Karel Capek play, but humans have been toying with automated machines for much longer than that.

Quite apart from being the subject of many movies and books, robots are fast becoming an important part of our lives simply because they can do jobs that are too dangerous for humans.

The one question asked by many people is, what is a robot made of? Every robot is different, but they all share the same basic components:

The Control System

At a basic level, all humans and animals survive using one principle – feedback. We sense what is happening around us and we react accordingly. We can date the use of feedback to control the way machines function back to 1745, when the principle was used by Edmund Lee, an English lumber mill owner who wanted to improve how his wind-powered mill functioned. Whenever the wind changed direction, his windmill needed to be moved to compensate. To achieve this, he added two more small windmills to the big one, both powering one axle that turned the large windmill to face the wind automatically.

The control system in a robot also uses feedback in the same way that human brains do. However, where the human brain is a mass of neurons, a robot brain is a silicon chip called a CPU, or central processing unit, very much like the chip your home computer runs on. Where our brains make a decision on what to do, on what reaction to make, based on the feedback we get from our five senses, the robot CPU uses data collected by sensors to make the same decisions.
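
The windmill story is really an early feedback loop: measure, compare with a goal, correct. Below is a minimal Python sketch of the same idea; read_heading and turn are hypothetical placeholders standing in for a real robot’s sensor and actuator, and the numbers are illustrative.

    # A bare-bones feedback loop: sense, compare with the goal, act.
    # read_heading() and turn() are placeholders for a real robot's
    # sensor and actuator interfaces.

    TARGET_HEADING = 90.0   # degrees, like the windmill facing into the wind

    def read_heading():
        return 82.0  # placeholder sensor reading

    def turn(amount):
        print(f"turning by {amount:.1f} degrees")

    def feedback_step(gain=0.5):
        error = TARGET_HEADING - read_heading()   # how far off are we?
        turn(gain * error)                        # correct by a fraction of the error

    feedback_step()

Run repeatedly, a loop like this keeps nudging the robot back towards its goal, which is exactly what the CPU is doing with the data its sensors collect.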

Sensors

These sensors mimic the human senses but are actually devices such as video cameras or light-dependent resistors acting as eyes, and microphones acting as ears. Some robots are even being developed with senses of taste, touch and smell, and the CPU takes the signals from the sensors and makes decisions accordingly.

Actuators

In order for a robot to be considered a robot, it must have a body that can be moved as a reaction to feedback it gets from its sensors. A robot body is made of plastic, metal and other materials, and inside it are actuators – small motors. These mimic human muscles so that the robot can move its body parts. The simplest robots are nothing more than an arm with a tool attached, built for a specific task. Robots that are more advanced can move on treads or wheels, and humanoid robots even have legs and arms like humans.

Power Supply

For a robot to function, it requires power. Human power comes from the energy provided by food; that food is broken down and our cells turn it into energy. A robot’s energy comes from electricity. The stationary arms often seen in automotive factories can be connected to a power supply in the same way as any appliance, while robots that move about use batteries for their power. More advanced robots, such as satellites and space probes, are generally designed to collect and use solar power.

End Effectors

For a robot to interact with its environment and carry out the tasks it is programmed to do, it is equipped with end effectors. These differ, depending on what tasks the robot is programmed to carry out. For example, robots designed to work in factories are given interchangeable tools, such as welding torches or paint sprayers, while mobile robots, like the probes we send to space or the robots used for detecting and disposing of bombs, tend to have universal grippers that act as a human hand.

While each individually designed robot will be different to the next, regardless of what it has been designed for, it will have these components as standard.

How Are Robots Programmed?

You might think that programming a robot is straightforward; after all, there is only one way to do it, isn’t there? In fact, there are three very popular methods of programming and they are about far more than just typing lines of code. Modern programming has come a long way since the early days, but the result is still the same – the instructions still wind up as a series of 1s and 0s. And there is more than one way to get those into a robot, some of which don’t even require any formal programming knowledge.

Top Three Methods

These days, robot programming is more about intuitive methods than low-level coding and much of this is down to a desire to make it much easier for programmers. Those that operate robots are not necessarily the same people as those that make robots and, in turn, robot makers are very often not the best at programming robots to do specific tasks. For example, if you have a robot that is going to paint, it is far better to have a painter program it than a programmer who cannot paint.

The following are three methods used for programming robots and each comes with its own advantages and disadvantages.

Teaching Pendant

This is, hands down, the most popular teaching method. More than 90% of robots are programmed in this way according to the British Automation and Robot Association. More often than not, although it has changed over time, the robot teaching pendant usually consists of something that looks much like a large calculator.

In the early days they were big gray boxes containing magnetic tape storage, but today they are more like touchscreen tablets. The operator uses buttons on the pendant to jog the robot from point to point; each position is saved individually and, once the entire program has been taught, the robot can play the points back at full speed.
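
In software terms, pendant teaching boils down to recording a list of positions and then replaying them. The Python sketch below shows that idea with a hypothetical move_to function; it is not a real pendant API, just the shape of the record-and-playback cycle.

    # Record-and-playback, the essence of teach-pendant programming.
    # move_to() is a stand-in for whatever motion command a real controller exposes.

    def move_to(position, speed=1.0):
        print(f"moving to {position} at speed {speed}")

    taught_points = []

    def teach(position):
        """Store one position, as the operator jogs the robot and presses 'record'."""
        taught_points.append(position)

    # The operator jogs the robot to three positions and records each one.
    teach((0.30, 0.10, 0.50))
    teach((0.30, 0.40, 0.50))
    teach((0.30, 0.40, 0.20))

    # Later, the whole program is played back at full speed.
    for point in taught_points:
        move_to(point, speed=1.0)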

Advantages

• Traditional industrial robots already have a teaching pendant, which makes the technicians’ work much easier
• They allow for very precise positioning because numerical coordinates can be used to program the robot – either robot coordinates, world coordinates, or some other coordinate system
• They are good for simpler movements like straight lines or large flat surface movements

Disadvantages

• Because of robot downtime, they can be disruptive to the entire system. The robot has to be placed into ‘teach’ mode and any operations being done by the robot stopped until programming is over
• Operators must be trained to use the system for learning and programming
• It can be difficult to get the hang of for those not familiar with programming

Offline/Simulation Programming

Offline or simulation programming tends to be used most in robotics research as a way of making sure that the algorithms designed for advanced control are working properly before they are used in real robots. However, the industry also uses them to improve efficiency and keep downtime to a minimum.

It is a useful method for SMEs (small to medium enterprises) because it is likely that robots will be reconfigured several times in these areas, far more than those in mass production areas. Offline programming means that production is not interfered with too much and the robot can be programmed using virtual mockups, both of the tasks and the robot.

Provided the software is easy to use, this is one of the quickest ways to test ideas before passing them to the robot.

Advantages

• Less downtime because the programs are developed offline, meaning the robot only needs to be stopped when the program is to be downloaded and tested.

• Intuitive method, particularly if it is possible to move the robot in a 3D CAD environment using drag-and-drop techniques.

• Multiple approaches to one problem can be tested, something that is not efficient with online methods.

Disadvantages

• It is highly unlikely that virtual models will ever be able to represent the real world 100% accurately.

• Programs will most likely still need to change even once applied to the robot

• It could take more time. Although there is less chance of downtime, somebody does need to spend more time on simulation development and on testing.

• Time can be wasted solving simulator issues rather than the actual production challenges.

Teaching By Demonstration

This provides an intuitive method that builds on the teaching pendant. It involves moving the robot by hand, either using a joystick attached above the end effector on the robot’s wrist or by guiding the arm through a force sensor.

In the same way as it is with the teaching pendant, each position is stored individually within the controller. This programming method has been incorporated into many collaborative robots as it is very easy for operators to start using their robots almost immediately.

Advantages

• It is faster than the teach pendant, eliminating the need for multiple buttons to be pressed. This means the operator can easily move the robot to the position they want it in.

• It is more intuitive than the other methods because programming is done in much the same way that a human operator would carry out a task. This makes it much easier for operators to learn and very little programming knowledge is required.

• A good method for complex tasks that would otherwise need many lines of code to achieve the same effect.

Disadvantages

• As with the teaching pendant, the actual robot is required for the programming. It will not reduce the risk of downtime in the same way that offline programming does.

• It isn’t as straightforward to move robots to precise coordinates as it is with the other methods, especially where a joystick system is used because there isn’t any way of inputting numerical values.

• Not great for algorithmic tasks as moving the robot manually would be inaccurate and arduous.

Which One Do You Choose?

As with anything related to robotics, the method that works best will depend on the task, the robot you are training and your requirements. Only you can determine which method works best and the advantages/disadvantages listed here should help you to determine which one you should use.

Software Used to Program Robots

Robots are becoming a popular choice for companies to solve complex issues and the robotics market is expected to experience growth of almost $50 million by 2025.

While we traditionally use robots in structured environments, using inputs and outputs that are both known and regulated, we are seeing a fast uptake of industrial robots. This has led to no small increase in interest from those with programming experience who want to get involved and by the end of this decade, we expect to see a huge boost in demand for robot programmers.

All robots are programmed to perform specific actions, and that programming is done either offline or by guiding. Most industrial robots are programmed by guiding them from one point to the next through each step of an operation and storing those steps or points in the control system.

Computer commands are used to give the robots their instructions in a process called “manipulator-level offline programming” which involves the use of high-level languages where actions are defined by objectives or tasks. Anyone who wants to get involved in robotics programming must be knowledgeable in many programming languages; switching between computers and robots is not the easiest of transitions.

Robot Programming Languages

While there are over 1500 programming languages, some are more popular than others and these are the ones you need to learn:

C/C++

Both are general purpose languages ideal for roboticists to learn, containing generic, object-oriented and imperative features. They allow for low-level hardware interaction and real-time performance.

Python

One of the high-level programming languages, Python is key in both building robots and testing them. It is one of the best platforms for automating and teaching robot programs and allows for scripts that calculate and record an entire program as well as simulation. It takes fewer lines of code than C or C++ and is essential for programming autonomous robots.
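
A quick illustration of why Python is so popular for this: below is a minimal, purely hypothetical sketch of a robot program written as a Python script. The move_to() function and the joint values are placeholders invented for illustration – they are not any particular manufacturer's API.

```python
# Hypothetical sketch of a robot program as a short Python script.
# move_to() and the joint values are illustrative placeholders only.

waypoints = [
    {"name": "home",  "joints": [0, 0, 0, 0, 0, 0]},
    {"name": "pick",  "joints": [30, -45, 60, 0, 45, 0]},
    {"name": "place", "joints": [-60, -30, 50, 0, 40, 90]},
]

def move_to(joints):
    """Stand-in for a vendor-specific motion command."""
    print(f"Moving to joint angles: {joints}")

def run_program():
    """Step through the taught points in order."""
    for point in waypoints:
        print(f"Step: {point['name']}")
        move_to(point["joints"])

if __name__ == "__main__":
    run_program()
```

Notice how few lines it takes to express the whole sequence – that brevity is a large part of Python's appeal for testing and teaching robot programs.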

Java

The Java language provides the functions needed for robotic systems to carry out human-like tasks. It does this through a series of APIs specifically for robotics: programmers can use the language to build dictation systems, command-and-control recognizers and speech synthesizers using the Speech API, and to receive and process visual images using the Media Framework API.

Of all the languages used, it provides the high-level features needed for artificial intelligence, allows for very efficient algorithms to be built and allows the same code to run on many different machines.

C#/.NET

The proprietary language from Microsoft, this is used for application development in Visual Studio. It gives programmers a decent foundation from which they can move onto other fields and tends to be most used in socket and port-level programming. It allows for the use of multiple languages, is horizontally scalable, and, with .NET, you have a unified environment in which languages such as VB and C++ can be used to create programs.

MATLAB

This is not technically a language, more of a tool used for finding engineering solutions based on math. It is an essential tool for robot programmers to learn if they want to implement control systems, produce highly advanced graphs, and analyze data.

It is useful for the design of an entire robotic system, has deep roots in the development of robots and is a simulation tool, allowing you to provide a design or algorithm which it then simulates for you.

Which One is Best?

Any one of these programming languages or tools will help you in robot development but your first port of call should be C or C++ followed by Python as the primary robotics languages.

In fact, while many believe that both C and C++ are becoming outdated, more and more robot developers are choosing them as they have more library functions and tools. No matter which one you learn though, all of them are useful in robotics programming.

Robot Languages

It could be said that there are as many robot languages as robot manufacturers simply because each separate manufacturer creates a robotic language of its own. So, because of that, each separate programming method must also be learned.

It is fair to say that most robot languages tend to be based on a common language, such as C, Basic, Fortran, or Cobol while others stand alone and have no relationship to any other language. And each language will fall into an individual level of sophistication and that will depend on how it was designed and what for. This can be anything from machine level sophistication right up to human intelligence.

The high-level robot languages may be compiler or interpreter-based with the latter executing the programming one line at a time. Each line has its own number and the interpreter will convert each line to machine language that can be understood and executed by the processor – in other words, it is interpreted. The lines are executed in sequence until one of two things happens – it gets to the last line or an error occurs, in which case the program will stop. Interpreter-based languages have the advantage of being able to carry on until an error, allowing users to debug as they go. However, it is a much slower and less efficient way of executing a program because it has to be done line by line.

Compiler languages, on the other hand, translate an entire program using a compiler. It is translated in its entirety into machine language, creating an object code, and then it is executed. Because the code is executed by the processor, the programs tend to be more efficient and faster. However, because the program has to be compiled first, no part of the program can run if there are any errors, even the smallest one. That makes it more difficult to debug the compiler-based programs.

Let’s look at the different levels of robotic language:

Micro-Computer Machine Language Level:

All programs at this level are written at machine level. It is basic but incredibly efficient although it isn’t the easiest to understand and follow. Eventually, all languages are interpreted or compiled to machine language level but where high-level programs are concerned, these are written in language that is easier to understand.

Point-to-Point Level:

This level requires point coordinates to be entered in a sequence, with the robot following the order of the points. This is somewhat primitive, easy to use but not the most powerful. It also doesn’t have sensory information, branching or conditional statements.

Primitive Motion Level:

This level of language provides more ability for the development of sophisticated programs, including conditional statements, branching and sensory information. Most languages at this level are interpreter-based.

Structured Programming Level:

These tend to be mostly compiler-based and are more powerful languages, allowing for sophisticated programming. Conversely, they are not easy to learn.

Task-Oriented Level:

As of yet, no robot languages fall into this level. Back in the 1980s, IBM proposed one, named Autopass but it never materialized. It was meant to be a task-oriented language which means that, rather than a robot being programmed to do something one step at a time, it was only necessary to mention the task and the controller created the sequence of steps. Sadly, it didn’t happen and isn’t likely to for the foreseeable future.

Robotic Joints and Degrees of Freedom

In much the same way as our human joints, a robot’s joints also create movement between a pair of links. Each joint provides a parameter used to define the configuration and the number of parameters is what classifies each type of robot.

Each joint has two links attached to it – one input and one output link, both of which are called rigid components. Because a stationary robot is affixed to a base, the nearest link to the base is the input, and movement between the two links is controlled at the joint.

We can classify mechanical robot joints into five types:

1. Linear Joint

Otherwise called the L-Joint, this provides sliding movement between the input and output links, with parallel axes. An example is a telescopic joint.

2. Orthogonal Joint

Otherwise called the O-Joint, this works much the same as the L-Joint but with perpendicular axes for the input and output.

3. Rotational Joint

Otherwise called the R-Joint, this provides rotary motion about an axis that is perpendicular to the axes of the input and output links.

4. Twisting Joint

Otherwise called the T-Joint, this also provides rotary motion, but the axis of rotation is parallel to the axes of the input and output links – the output link twists around the input link.

5. Revolving Joint

Otherwise called the V-Joint, this is similar to the T-Joint, with the output link revolving around the input link, but the axis of the output link is perpendicular to the axis of rotation, while the input link's axis remains parallel to it.

Unraveling Degrees of Freedom

How do axes relate to degrees of freedom? If you have, for example, a two or three-axis robot, even as much as a seven-axis robot, where do degrees of freedom enter the picture? To answer that, we need to break things down a bit.

Robot Axis and Location in Space

Robot degrees of freedom refers to how many movable joints the robot has. If it has three movable joints, it will have three degrees of freedom and three axes. By the same token, five movable joints equate to five degrees of freedom and five axes, and so on. For the location in space of any object to be fully defined, a minimum of six degrees of freedom is required – the Cartesian coordinates (the x, y, z location) and the orientation of the object.
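
To make that concrete, here is a minimal Python sketch of the six values that pin down an object in space – three Cartesian coordinates for position and three angles for orientation. The field names and numbers are purely illustrative:

```python
# A pose in space: three position values plus three orientation angles,
# i.e. the six degrees of freedom described above. Values are examples only.
from dataclasses import dataclass

@dataclass
class Pose:
    x: float       # position along x (e.g. millimetres)
    y: float       # position along y
    z: float       # position along z
    roll: float    # rotation about x (degrees)
    pitch: float   # rotation about y
    yaw: float     # rotation about z

target = Pose(x=250.0, y=100.0, z=75.0, roll=0.0, pitch=90.0, yaw=45.0)
print(target)
```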

While this is not written in stone, it is generally the case that pick-and-place objects and robots will have these capabilities:

• One-axis – pick an object up and move it in a straight line

• Two-axis – pick up an object, lift and move it both horizontally and vertically. It can present it or set it down on one x, y plane without altering the orientation

• Three-axis – as above but can present or set it down within the x, y, z space in its workspace without altering the orientation

• Four-axis – as above but can also change the orientation on one axis

• Five-axis – as above but can change the orientation on two axes

• Six-axis – as above but can change the orientation on three axes

• Seven-axis - all the capabilities of a six-axis robot plus the ability to move itself physically in a linear direction, usually along a rail horizontally.

So, a degree of freedom, in a mechanical context, is the specific and defined mode in which the system or device is able to move. How many degrees of freedom there are will depend on how many aspects of motion there are. Some machines operate in three dimensions but may have more than three degrees of freedom.

Consider the example of a robotic arm. It is built so it works just like a human arm, with shoulder motion up and down (pitch) or left and right (yaw). The elbow motion is pitch only and wrist motion can be both pitch and yaw. The wrist and the shoulder may also rotate (roll).

A robot like this will have between five and seven degrees of freedom but if you have a more complex robot with two arms, you get double the number of degrees of freedom. With an android, the end effectors, the head and the legs provide extra degrees of freedom.

With a multi-functional or a fully functional android, you may have more than 20 degrees of freedom. Project , for example, is an intelligent android that has been designed for consumers; it looks like a big space-age doll and it has 25 degrees of freedom. Most humanoid robots have more than 30 degrees of freedom – six per arm, five to six per leg, and more in the neck and torso.

We have come a long way since the simple limited movement robots and, in the next few years, more and more humanoid robots will be developed and we can expect to see the number of degrees of freedom increase exponentially with the complexity of the robot.

Robot Coordinates and Reference Frames

One of the key requirements for programming robots is to keep track of the position and velocity of any object in space and one very important thing is to understand how a robot relates to the physical world around it. To do that, you must first imagine that you are seated in a small room that represents the world coordinates.

The chair is a fixed location that establishes your left, right, back, front, down and up. When you place a robot into a work cell, its coordinates are immediately established. This coordinate system is often known as the world reference system and this can be utilized by the robot programmer. If we consider this in terms of the Cartesian coordinate system, the coordinates x, y, and z are fixed relative to the robot position.

User Reference Frame:

However, in many cases the user would find it much easier to establish a different frame. Most robots allow for different reference frames to be set, known as user frames, as a way of corresponding with the workspace. It would be better if the x, y, and z coordinates corresponded to the robot’s fixtures and doing so means that the user can program the robot using CAD data. The robot is also easier to move because movement in an x, y, or z plane will now automatically follow the axis for the fixture and not the world frame for the robot.

Users can create several user frames, giving them all individual names and then choosing the one that is appropriate to the specific application. User reference frames are made up of Cartesian coordinate sets that describe what the relationship is between the tool center point of the robot and its workspace. Positional relationships between the two are described using transformations, which are mathematical equations.
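
To give a feel for what such a transformation looks like, here is a small Python sketch (assuming NumPy is available) that converts a point taught in a user frame into the robot's world frame. The frame offset, rotation and point are made-up values, not taken from any real robot:

```python
# Sketch: expressing a point defined in a user frame in the world frame
# using a 4x4 homogeneous transformation. All numbers are illustrative.
import numpy as np

def frame(rotation_deg_z, translation):
    """Build a homogeneous transform: rotate about z, then translate."""
    a = np.radians(rotation_deg_z)
    T = np.eye(4)
    T[:3, :3] = [[np.cos(a), -np.sin(a), 0],
                 [np.sin(a),  np.cos(a), 0],
                 [0,          0,         1]]
    T[:3, 3] = translation
    return T

# Suppose the user frame sits 500 mm along x and 200 mm along y of the
# world frame, rotated 90 degrees about z.
world_T_user = frame(90, [500.0, 200.0, 0.0])

# A point taught at (100, 0, 50) in the user frame...
p_user = np.array([100.0, 0.0, 50.0, 1.0])

# ...works out to roughly (500, 300, 50) in world coordinates.
p_world = world_T_user @ p_user
print(p_world[:3])
```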

The robot controller works out the correct angles for the joints to make sure the axis is aligned so that the tool is oriented to the workspace. The equations used will depend entirely on the type of axis motions and the number. In simple terms, the controller ensures the workspace reference plane and robot reference frame are aligned.

User reference frames can also be used for moving equipment in a robot’s cell space. New user frames are created, and the program is transformed so it uses a new frame and then it is run.

Tool Frame:

There are a number of robots that can establish tool frames. Take a robot that is welding a circle. The welding torch must be kept at the correct orientation to the weld as the robot is moving it around. Tool frames tell the robot what the center point of the tool is and, if not done, the robot will simply use its gripper faceplate to make the calculation itself.

There are three methods generally used to teach tool frames to robots. The three-point and six-point methods are used to put the center point to a specified position and then either three or six points are recorded as different orientations are used to approach the specified point. The robot will then take all the orientations and combine them with its own known positions and then calculate the orientation data of the tool so the tool frame can be created. Those values can also be directly inputted into the controller which also creates the tool frame.

Robot Coordinates

Coordinates are another important system and there are four main ones:

WORLD-Coordinate System:

This is a Cartesian coordinate system that describes which locations the points are at in the workspace. The coordinates are P(x, y, z) and detailing the points in this coordinate system is quite simple, with linear movements also easy to program. However, there is a certain amount of ambiguity with it – a given position can often be reached by more than one combination of axis positions, especially where robots with jointed arms are concerned.

JOINT-Coordinate System:

The length and the angle position of the axes on articulated robots are used to describe the TCP – Tool Center Point – precisely. Using the JOINT system, each of the axes may be moved individually, in either a positive or negative sense of rotation. The coordinates are P(angle A1, angle A2, …, angle A6) and using this system makes it easy to avoid ambiguity; there is also no need to transform the coordinates.
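
To see how the WORLD and JOINT descriptions relate – and where the ambiguity mentioned above comes from – here is a small Python sketch of forward kinematics for a two-link planar arm. The link lengths are arbitrary example values; with equal links, two different joint configurations place the tool center point at exactly the same world position:

```python
# Forward kinematics for a two-link planar arm (illustrative values only).
import math

L1, L2 = 0.3, 0.3   # equal link lengths (metres) make the symmetry obvious

def forward(theta1_deg, theta2_deg):
    """World x, y of the tool centre point for the given joint angles."""
    t1 = math.radians(theta1_deg)
    t2 = math.radians(theta2_deg)
    x = L1 * math.cos(t1) + L2 * math.cos(t1 + t2)
    y = L1 * math.sin(t1) + L2 * math.sin(t1 + t2)
    return round(x, 4), round(y, 4)

print(forward(30, 40))    # "elbow down" configuration
print(forward(70, -40))   # "elbow up": different joint values, same point
```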

Tool Coordinate System:

The tool coordinates include tool data, such as the position of the TCP and the tool orientation or geometry. Take the gripper, for example. The coordinates are used to describe the position and the orientation of the effector in space. The coordinate system’s zero point is located at the effector’s TCP. Normally, the coordinates start out as Cartesian, meaning one axis must point in the gripper’s extended direction. This tool system makes it easier for the user to program applications, such as turning the tool about the TCP, maintaining the speed at the TCP even when there are complex paths, and pushing the tool to go in a different direction.

Workpiece Coordinate System:

Another Cartesian coordinate system, the workpiece system is based inside or at the corner of the workpiece. The advantage of measuring the base is that you can determine a certain point on the workpiece, pallet or clamping table, and if the pallet already contains some workpieces, it is enough to know the orientation of just one of them. That means only a single zero point has to be determined.

Robot Workspace

When we talk about robots, one thing that gets mentioned quite a lot is the robot workspace. This is defined as “the set of points that can be reached by its end effector”. In simple words, it is the space within which the robot works.

The main characteristics of the workspace are the volume (structure) and the shape (dimensions). Both are very important because they have a significant impact on both robot design and manipulability. Knowing the exact dimension, shape and structure of the robot workspace is important because:

• The shape has an impact on the definition of the environment the robot will be working in.

• The dimensions determine what the end effector reach is.

• The structure assures the robot's kinematic characteristics relative to its interactions with its working environment.

On top of that, all three are dependent on the robot’s properties:

• The biggest influence on the workspace dimensions is the dimensions of the robot’s links and the robot’s mechanical joint limitations – passive and active.

• The shape is going to differ depending on what the robot’s geometrical structure is, i.e. the interference between the links, plus the degrees of freedom properties – number of joints, joint limits, type of joints – active and passive.

• The structure is defined by the robot’s structure and the dimensions of its links.

Workspace Categorization

There are other points of view, besides the above, that workspace definition can be looked at from:

• Maximal Workspace – the locus reachable by the end effector with a minimum of one orientation.

• Inclusive-Orientation Workspace – all possible locations reachable by the end effector with a minimum of one orientation among a range of them.

• Constant-Orientation Workspace – the locus reachable by the end effector with a fixed orientation.

• Total-Orientation Workspace – the locus reachable by the end effector using any orientation.

• Dexterity Workspace – the locus reachable by the end effector using any orientation and with no singularities.

• Task Workspace – the locus reachable by the end effector to carry out the specific operation. This is also called Precision Workspace and is a very important one because it defines all the restraints needed to ensure optimal robot dimensions.

The shortest way to describe the workspace is as the collection of points that a robot can reach, which depends on the way it is configured and on the sizes of its links and wrist joints. Workspace shape is unique to each robot design and the way to find it is either to write equations or to virtually move each link to see what its range of motion is, combining every area that can be reached and subtracting any that cannot.
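
Here is a rough Python sketch of that "virtually move each link" approach for a simple two-link planar arm. The link lengths and joint limits are made-up values, purely to show the idea of sampling the joint ranges and collecting every point the end effector can reach:

```python
# Sample the joint ranges of a two-link planar arm to approximate its
# workspace. Link lengths and joint limits are illustrative only.
import math

L1, L2 = 0.5, 0.3                # link lengths (metres)
J1_RANGE = range(-90, 91, 5)     # joint 1 limits, degrees
J2_RANGE = range(-120, 121, 5)   # joint 2 limits, degrees

points = []
for t1 in J1_RANGE:
    for t2 in J2_RANGE:
        a1, a2 = math.radians(t1), math.radians(t1 + t2)
        x = L1 * math.cos(a1) + L2 * math.cos(a2)
        y = L1 * math.sin(a1) + L2 * math.sin(a2)
        points.append((x, y))

outer = max(math.hypot(x, y) for x, y in points)   # furthest reach
inner = min(math.hypot(x, y) for x, y in points)   # closest approach
print(f"{len(points)} sampled points, reach {outer:.2f} m, inner limit {inner:.2f} m")
```

Plotting the sampled points would show the familiar ring-shaped workspace of a two-link arm; adding more joints simply means more nested loops or a proper kinematic model.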

Robot Characteristics.

While all robots are very different, they all share some things in common – a set of characteristics. These are what define their specifications, and these are the most common characteristics you will find in any robot:

Payload – This defines the maximum load (weight) that a robot can carry while remaining within all the other specifications. For example, the maximum load capacity of a robot may be more than the payload specified but it may not be as accurate, may not follow the trajectory intended for it so accurately or it may deflect excessively.

Take the Fanuc Robotics LR Mate™ robot, for example. It has a payload of 6.6 lb. but a mechanical weight of 86 lb., while the M-16i™ robot has a payload of 35 lb. and a mechanical weight of 594 lb.

Reach – This defines the maximum distance that a robot can reach inside of its work envelope. Many of the points within the envelope can be reached using any orientation you want – these are known as dexterous points.

However, there are points that are close to the reach capability of a robot where orientation may not be specified as you want, and these are called non-dexterous points. Reach is the function of the joint’s lengths, and configuration of the robot and is a very important specification in industrial robots; it is something that must be considered before the selection and installation of any robot.

Precision – Also known as validity, this is the definition of the accuracy with which a specified point can be reached. This isn’t just a function of the feedback devices on the robot; it is also a function of the actuator’s resolution.

Most of the industrial robots we use today have a precision within the range of 0.001 inches or better. Precision defines the function of the number of orientations and positions used in testing the robot, the load used, and the speed. These issues are important to investigate when precision is one of the most important specifications.

Repeatability – Also called variability, this defines the accuracy with which one position can be reached if the motion is repeated over and over again. Let’s say that we drive one robot to the same point 150 times. There are a lot of factors that can affect how accurate the position is and, because of this, there is no guarantee that the robot will get to the same point every time.

However, it will always reach somewhere within a certain radius of that point. The radius of the circle formed by those repeated motions is called repeatability, and this is a more important specification than precision. If a robot does not have precision, there will be a consistent error, and this can be predicted and then corrected via its programming. Let’s say, for example, that a robot is consistently off to the right by 0.05 inches. To fix this, every one of the desired points could be specified as being 0.05 inches to the left and that would eliminate the error completely.

However, where an error is random, there is no way to predict it and that means there is no way of eliminating it. The extent of the random error is what repeatability defines, and it tends to be specified for a particular number of runs. The larger the number of tests, the larger the measured repeatability figure tends to be, and this is not a good thing for manufacturers.

However, the larger the results, the more realistic they are, and this is better for users. Manufacturers need to specify repeatability together with the number of tests, the payload that gets applied during the tests, and the robotic arm’s orientation. For example, if the robotic arm is oriented in a vertical direction, it will yield different results from when it is oriented in a horizontal direction.

Most of our industrial robots will have a repeatability in a range of 0.001 inches. It is important that details of repeatability are investigated if it is an important application specification.
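
As a simple illustration, here is a Python sketch of how repeatability might be estimated from a handful of repeated moves to the same target. The measured positions are invented numbers used only to show the calculation of a systematic offset (precision error) versus the repeatability radius:

```python
# Estimate precision (systematic offset) and repeatability (scatter radius)
# from repeated moves to one target. All readings are invented examples.
import math

target = (10.000, 5.000)            # commanded position, inches
measured = [
    (10.002, 4.999), (9.998, 5.001), (10.001, 5.003),
    (9.999, 4.997), (10.003, 5.000), (10.000, 5.002),
]

# Systematic offset: how far the average reached position is from the target.
avg_x = sum(x for x, _ in measured) / len(measured)
avg_y = sum(y for _, y in measured) / len(measured)
offset = (round(avg_x - target[0], 4), round(avg_y - target[1], 4))

# Repeatability: radius of the smallest circle, centred on the mean position,
# that contains every repeated point.
radius = max(math.hypot(x - avg_x, y - avg_y) for x, y in measured)

print(f"systematic offset: {offset}, repeatability radius: {radius:.4f} in")
```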

Those are the main characteristics of all robots, regardless of configuration, but there are other characteristics that some robots will have, usually the humanoid robots:

• Sensing – Robots must be able to sense their surroundings, and this is done in ways that are quite similar to the way that humans sense their surroundings. This is done by providing the robot with light sensors for eyes, pressure sensors for hands, chemical sensors for its nose, taste sensors for its tongue and both sonar and hearing sensors for its ears.

• Movement – Robots also need to be able to move around their environment. This could be on wheels, walking with properly jointed legs, or using thrusters to propel itself. Either the entire robot must move or just certain parts of it, like its arms or legs.

• Energy – Robots must be able to power themselves. It could be solar power, battery power, or electric; the method is determined depending on what the robot is designed to do.

• Intelligence – Robots need intelligence the same way that humans do, and this is where programming comes into it. Programmers write lines of code that provide the intelligence, telling the robot what it should do in certain situations.

Part 3: How Robots Will Affect Our Work and Personal Lives

Will Robots Take Our Jobs and Which Ones?

This is one question that comes up frequently when people start considering the future of artificial intelligence, machine learning automation and, more importantly, of our jobs. Ever since the time of early humans, when they determined that smashing two rocks in a specific way could create useful tools like a hand ax, the human race has been constantly searching for ways to increase business and personal efficiency by trying to find ways of making tasks quicker and easier.

Artificial intelligence is the latest technological advancement to come along and shake up the workforce, and when we hear people ask if robots are going to take our jobs, the vision most of us see is one of a dystopian future, something out of a Sci-Fi novel or movie, where Cyborgs replace humans at desks. But the future of work will look nothing like that.

The Changing Face of Work

Already, technology does 90% of the jobs that humans used to do. Long before the first industrial revolution, most people worked in agriculture; today, that number is down to just 2% and most of those rely very heavily on skills and techniques learned early in their careers and at university.

Things are changing fast though and there is a widening digital skills gap. That gap poses a threat to anyone who cannot adapt quickly, promising to leave them behind. We have long been using technology to automate some manual tasks but the result was never mass unemployment and there was, and still is, a very good reason for that.

Instead of people being continually laid off as robots took on their jobs, the nature of the jobs that were available underwent dramatic changes. People adapted to that by moving to where the work was instead of expecting it to come to them.

Another massive change was the digital revolution. We came upon a time when cell phones, computers and the internet were developed and a large proportion of the workforce moved to work in offices, behind desks and computers.

Right now, we are facing another, a fourth industrial revolution, as robotics, AI, autonomous vehicles, 3D printing and a whole host of other game-changers go mainstream. Once more, the global workforce will need to adapt to keep up with the way and the rate at which technology advances, but we can do it – we’ve been doing it for centuries and it really isn’t anything we cannot cope with.

How Robots Are Used in the Workplace?

Robots are machines, some of them resembling humans, that do mechanical, routine tasks as commanded. They began taking over our jobs back in the 1970s when robots appeared in automotive factories to replace manual labor. These days, robots can be found in almost every kind of workplace, doing physical repetitive tasks previously done by humans – take the 100,000+ bots used by Amazon to pack inventory, sort inventory, and more in their warehouses. Looking at it in this way, yes, it is fair to say that robots have taken a vast number of jobs from humans.

This is just one way in which the current workforce faces being shaped by automation and robotics but, as each day passes, automation continues to advance, as the machine learning and AI fields advance. And that is where the best opportunities for jobs lie.

How AI Is Replacing People in the Workplace?

AI and machine learning are being widely adopted across many different professions and businesses as a way of speeding up what were once manual processes, creating highly personalized customer experiences and providing data analysis and predictive analytics.

But there is no way for AI to be adopted without input from humans.

Where there are likely to be changes is in the jobs that require vast amounts of data analysis – here, we are likely to see software replace humans, software that can scan images, unstructured text and speech in vast amounts and much faster, coming to quicker conclusions.

In the same way, farmers and factory workers will be replaced as technology advances and adapts to a society that is increasingly becoming consumer-centric. This will affect the developing countries, such as some South American regions and India but it will also have an effect in the US, China, and Japan.

In the western countries, stock and inventory managers, bankers, financial analysts, fast-food workers, and construction workers are all at risk of machines taking their jobs because they can work faster, are far more efficient and have a much smaller error margin than the humans who do the jobs now.

Which Jobs Are Least Likely to be Replaced?

It might interest you to learn that the jobs least likely to be taken over by robots are those that rely to a large extent on interpersonal skills. Robots cannot ever hope to replace the specialists who counsel domestic abuse victims, help people through drug addiction and more, so counselors, therapists and social workers can all rest happy in the knowledge that their jobs are going nowhere.

In the same way, some professions are going to reap huge benefits from automation, including startup founders, marketers, digital consultants, business developers, HR managers and many more.

If you spend hours each day doing boring, simple, repetitive tasks before you can even get onto the important parts of your role, then automation will be the answer – robots can take those repetitive jobs and do them for you, leaving you with more time to do the real value jobs.

Will We See Some Jobs Disappear Altogether?

Pretty much every job will be affected in some way by automation and yes, there will be jobs that disappear altogether. Take taxis, for example – eventually they will all be autonomous but that is a long way off and will happen so gradually that nobody will even notice. In the vast majority of cases, jobs will not be eliminated by AI. Instead, it will help take the mundane part of your job out of the way and give you time to concentrate on high-value, high-impact parts of your job. But opportunities like this can only arise if we are prepared to embrace change and learn new ways of working, including working with emerging technologies.

Where Does UBI Fit In?

We couldn’t possibly talk about the future of work without, at least briefly, thinking about a future without work. Many of the wealthiest and most technologically inclined people, such as Mark Zuckerberg and Elon Musk, have already talked of something called UBI, or Universal Basic Income, as a future possibility.

Unlike other support systems in place, UBI is a proposal to give every citizen a fixed amount of money every year to allow for some standard of living. And, also unlike other systems, that money would be paid regardless of whether people worked or not. The hope is that, if you give people what they need to live, eat and provide for their families, they will still want to go out to work and earn more money.

This system was already proposed in 1516, in Thomas More’s Utopia and it has recently become a hot topic, subject of much debate. While robots are not likely to take over our work entirely any time soon, UBI provides for a future that may be far away, a future where we will have to significantly change our perceptions of money and work.

The Future Workforce

Tom Davenport once wrote in Forbes, “They say that AI and robots won't take our jobs, but rather augment them by doing the things humans don’t do very well”. That, right there, is the future we face. Humans working side by side with machines to create interesting, exciting roles that AI and machine learning helps with, rather than humans competing with machines for their jobs.

However, it cannot happen the way we want unless we move with the times, and we need to move fast. Everyone must be prepared to take on new jobs, for their current jobs to change, and to adapt their roles to embrace and incorporate technological advancements.

So, in answer to that question, “will robots take our jobs?”, that is entirely in our hands.

Which Industries Use Robots?

Automation is fast becoming a common part of many industries with the following being just six of those that have gained from using the technology:

1. Automotive Industry

According to analysts, if automation was not being used, the automotive industry would not have been able to pull back from the Great Recession. Automation is used to help automotive plants to deal with shortages in labor and, in many plants and factories, the robots work side-by-side with humans, helping to get more done in less time. One of the most common uses of robots in this sector is the huge robotic arms used for spraying paint on the vehicles.

2. Electronics Manufacturing

These days, there is a huge demand for large, flat-screen televisions, smartphones, tablets, earbuds and all sorts of other electronic gadgets. One country in particular, China, has capitalized on the use of automation in this sector and the number of robots now working in electronics in China has more than doubled. An audio accessories manufacturer in Germany called Beyerdynamic set themselves a tough goal of increasing factory floor productivity by 50% over a period of four years and, to achieve that goal, they implemented a Robotiq two-fingered gripper and wrist camera that picked speakers up and put them in dedicated areas to be sprayed with glue.

They also implemented the Universal Robots UR5 to do the actual glue spraying. Between them, these two automated solutions achieved that 50% productivity increase and, in spite of there being limited floor space in the factory, human workers say that the robots are very easy to work alongside. The company is now planning to train the machines to do other things, such as inserting the screws and assembling the product for sale.

3. Medical

The health sector has also seen huge benefits from automation. One hospital in the San Francisco Bay area is using several dozen robots, named TUGs, for the transportation of medications and medical supplies. Robots are also being utilized to assist in surgeries and one semi-autonomous flesh-cutting bot performed far better than its human counterparts in terms of precision and in reducing the amount of damage to the surrounding areas.

Just last year, a robot was used to assist in a delicate eye surgery that involved retinal growth. This is the first time this has happened but, although the operation had been performed previously without automation, it is a risky approach because even the tiniest movement, such as blood pumping through the hand of the surgeon, is sufficient to affect how accurate the incision is.

Automated bots are also being used to assist with answering questions, making appointments, checking that patients take their medications, helping them get their prescriptions, and even making diagnoses.

4. Welding

While most welders still prefer to fuse metal manually, a lot of welding companies have found that using robotic welding equipment can reduce waste and provide fast and accurate results without human intervention. One welding company, Onken, used automation to relieve labor shortages and, once the Motoman robot, developed in Japan, was installed, they found their output doubled and ended up transitioning some of their human welders onto other tasks.

5. Food Services

The concept of automation in the food services industry is one that is growing fast. Several factors, including minimum wage requirements and the increasing cost of materials, have led to more and more restaurants using robotics to make some tasks more efficient and uniform, be it flipping burgers or preparing salads. Little Caesars has patented a robotic arm that can spread pizza dough, add the toppings and put the pizza into the oven, while a fast-food chain called Caliburger uses Flippy, a robotic arm that can flip 150 burgers per hour.

At chain restaurants, customers expect to receive the same kind of experience, regardless of what country or region the restaurant is in and, to meet that expectation, automation is being used more and more, with robots that perform unfailingly and carry out orders that are preprogrammed into them.

6. Law Enforcement

Back in 2016, more than 200 robots were taken out of military service and placed into law enforcement agencies. While some people do have concerns about robotics being used for policing, the bots in use in the USA today do not tend to demonstrate the kind of deadly force you might expect them to. Instead, they are used for bomb deactivation, to gather and analyze details about potentially dangerous situations, and report their findings back to human officers, as well as being able to rescue people from situations that are too dangerous for humans.

There are also robots being developed to act as a kind of neighborhood watch, recording incidents of public drunkenness, noise violation, public disorder, etc., passing the details to human officers to take action.

What’s Next?

Robotic automation can be applied successfully to just about any industry and this list is just the tip of the iceberg. Today, humans and robots work side by side with most robots freeing up humans to concentrate on other aspects of their job. Progress is so rapid that, in a short time, we will see automated robotics as a common way of life.

Robot Applications

While many fear that robots will take our jobs, there really isn’t too much to worry about. Right now, robots are being used to do jobs and work in environments that are not good for humans, across many industries and sectors. And, to be fair, at the jobs they do they are much better than humans and a great deal cheaper. Take a welding robot, for example.

They move in a uniform and consistent manner, do not need any protective clothing, ventilation, breaks, or anything else that a human welder would need. This means they are far more productive so long as the job is not complicated and is set up specifically for the robot to complete.

There are loads of different applications for robots and these are just some of them:

Machine Loading

Robots are used to supply parts to other machines or to remove parts already processed. The robot may not actually do anything to the part; instead it is facilitating handling and loading of parts and materials so that other machines can be more productive.

Pick-and-Place

Pick-and-place robots are used to pick parts up and put them somewhere else. This could include placing onto pallets, placing parts where they are assembled with others, placing parts into treatment ovens, removing them and more.

Welding

Welding robots are set up with a welding end effector to weld uncomplicated parts together. This tends to be the most common use of robots because they produce accurate, uniform welds. These are very large, very powerful robots.

Painting

Another common use, these are seen more in the automotive industry where huge robotic arms are used for paint spraying. Their work is more consistent and ventilated rooms and special protective clothing are not required.

Parts Inspection

Robots are used to inspect circuit boards, parts, and other similar things and, generally, they are one part of a system that may have an x-ray device, a vision system, an ultrasonic detector, and other devices like them. For example, one robot had an ultrasonic crack detector and was provided with CAD data regarding an airplane’s wings and fuselage shapes. The robot was then used to follow the contours of the plane, checking each weld, joint and rivet. Another robot would use similar equipment to check each rivet on a plane, looking for those with fatigue cracks. These would be marked and then drilled out, leaving technicians to fit new ones.

Robots are also used for checking chips and circuit boards and in many applications such as this, the part’s characteristics, like a circuit board’s diagram, would be stored in a data library in the robot’s system. That information is then used to match stored data and parts and the result is the part either being rejected or accepted.

Sampling

This is used in quite a few industries, including the agricultural industry and is much like the inspection or pick-and-place applications with one exception – it is only performed on a specified number of the parts.

Assembly

These applications involve multiple operations. For example, parts need to be located, identified, taken in a specific order and there are usually several obstacles for the robots to avoid along the way. The parts then need to be put together and assembled correctly. Many of these last two types of tasks are complicated, potentially requiring turning, pushing, bending, pressing, snapping and so on. Other complications include slight differences in parts and their sizes because the robot needs to be able to identify and learn the difference between those parts that vary in size a little and those that are wrong.

Manufacturing

This can also include multiple operations such as removing materials, de-burring, drilling, gluing, cutting and more. It will also include part insertion, such as the delicate job of inserting components onto circuit boards, installing those boards inside electronics, and similar tasks. These are common robots and used a lot in manufacturing and electronics industries.

Medical

The use of robots for medical applications is also becoming more and more common. Take Robodoc, for example, from the Curexo Technology Corporation. It was designed for assisting human surgeons in operations involving total joint replacement. Because a lot of the functions involved in this kind of operation can be performed much more precisely by robots – cutting bone heads, reaming holes, installing implant joints, etc. – these were the jobs given to the robot. This is incredibly important – a CT scan identified the shape and the orientation of the bone and the information was downloaded to the robot controller, allowing it to be used to direct how the robot moved to ensure the implant was fitted correctly.

There are other surgical robots like the robot system from Mako Surgical Corporation and the da Vinci system from Intuitive Surgical, both used in a number of different procedures such as internal and orthopedic surgery. The da Vinci system, for example, has four arms – three are used for holding instruments, which is one more than a human surgeon has, and the fourth holds a 3D imaging scope used for showing the surgeon the surgical area. Seeing the area on a screen, the surgeon can then direct the robot’s movement using a haptic system.

Another medical area where robots are used is in providing assistance to the disabled. One study looked at a small tabletop robot, programmed for communicating with a disabled person and for carrying out small tasks – placing food into the microwave or in front of the person, for example. There is also a finger-spelling robot used for communicating with the deaf and blind. It has 17 servomotors and is used to spell out the letters of the alphabet using gestures.

Hazardous Environments

Work in hazardous environments is well-suited to robots because they are not in the danger that humans would be in. They can access these areas, traverse them, maintain them and explore them. For example, they can explore radioactive areas without the same concerns as a human. In 1993, a robot with eight legs called Dante was used for reaching the lava lake of Mount Erebus, a constantly erupting volcano in Antarctica, to study the gases in it.

Mine-detection robots have also been used, with one type using ultrasonic pods that vibrate to find the underground mines, while another runs on a pair of all-terrain spiral tubes, has a very basic design, and is designed to be expendable – that means it explodes the mines it finds. There is also a robot built in a snake design with serial sections that are articulated.

Each section is made up of two plates, fixed together with struts and with linear actuators to move the plates relative to each other – this robot is used in very tight spaces that are difficult to maneuver. Lastly, a robot designed like a lobster, named Talon, can run beside soldiers, clear mines and get across broken, rough terrain.

Inaccessible Locations

Lastly, robots are used in environments that are inaccessible or highly dangerous to humans, such as space and underwater. So far, it hasn’t proven practical to send humans to Mars or other planets, but robotic rovers have landed on Mars and explored it. In terms of underwater, until recently very few sunken boats and ships had been explored because they were in deep oceans that were not accessible. These days, sunken ships, crashed airplanes and submarines can easily be recovered using underwater robots.

The , developed by NASA, is a humanoid anthropomorphic robot designed to function as an astronaut. It is designed with a pair of five-fingered end effectors that can handle tools, telepresence capability and modular components, while another robot, a telerobot, was used in microsurgery.

Where the microrobot was located was not important because it was designed to repeat the hand movements of a surgeon during delicate operations, at a much smaller scale, so that hand tremors could be eliminated.

As you can see, robots are used already for many different things and, in the future, we can expect these uses to expand. What we have touched on here are just some of the applications; there are many more.

Which Are the Most Sophisticated Robots?

Some of the eeriest robots in the world are the sophisticated humanoid robots, looking so much like their human counterparts that they are often indistinguishable. The latest ones walk and talk like we do, and some can even express many different emotions; some can hold a normal conversation and others will remember the last time you spoke and what you talked about.

Because of all this, because they are so advanced, these humanoid robots could prove to be very useful in working with children, with the elderly, indeed with anyone that needs help doing daily tasks or other interactions. There are even reports of humanoid robots being used to play with autistic children, providing an effective interaction.

But just how lifelike do we want our robots to be? Elon Musk has recently expressed concern over the risks associated with AI and many people worry about the face of the future, when perfectly human-like robots are given human-like intelligence.

While there have been great leaps and bounds in the technology backing advanced robotics, particularly androids, there is still a long way to go before we can hold full-on natural conversations without realizing we’re not talking to a real human. But scientists have come pretty close, building no end of sophisticated, intelligent robots. We can break them down into three distinct types – humanoid, non-humanoid, and industrial.

Humanoid Robots

• ASIMO – created in 2000 by Honda, ASIMO has been continually developed into one of the most advanced robots, socially speaking, in the world. It can recognize moving objects, gestures and postures, interact with humans, understand its environment, walk, run and go up and down stairs.

• Pepper – Introduced in mobile phone stores in Japan in 2014, Pepper has also started working at Renault dealerships in France. She is the first social robot capable of recognizing human emotion and she can hold conversations, give people directions, and dance with them.

• Walker – Revealed at CES 2019, UBTECH’s Walker is due to be released late 2020-early 2021. A bipedal, agile humanoid, Walker is 1.45 m tall, can interact with humans, can walk quickly and smoothly, and can hold and manipulate objects. Walker may become the very first bipedal robot that is viable for commercial purchase.

• Samsung Bot Retail – CES 2019 saw three robots from Samsung – Bot Retail, Bot Air, and Bot Care, with the largest being Bot Retail. It has a large display at the front and a basic system of shelves at the back so it can deliver items to people. It can interact with humans, use NFC to make payments and its front camera lets it recognize objects.

• Sanbot – an intelligent, cloud-enabled robot, Sanbot comes from Qihan Technology. It can interact with humans, use its front screen to give presentations and use its built-in projector to show graphics on nearby walls.

• Nao – first released in 2008, Nao has been under constant development and is now the standard platform used for the RoboCup; it is one of the most agile and dynamic robots. It can interact with humans, work with autistic children and run exercise sessions in care homes.

• Romeo – launched in 2009, Romeo was originally designed to be a companion, helping to support the disabled and the elderly. Undergoing continuous development, Romeo can help with daily tasks, help people when they fall over, hold conversations and play some games.

Non-Humanoid Robots

• Paro – a therapeutic baby seal, Paro is designed to produce a calming effect in nursing homes and hospitals. It works like animal therapy, calming people, in particular those with diseases like dementia, but without the risks of using live animals. It can respond to interaction and petting, moving its tail and opening/closing its eyes. It will seek people out and cuddle with them too.

• Buddy – developed by Blue Frog Robotics, Buddy was designed to be an emotional companion for home use. It can connect and interact with anyone in the home, entertain children and acts as security too.

• Miro – Miro is based on the premise that animal qualities can be desirable in robots. A robust robot, Miro is adaptable and can communicate what it is feeling. It is the very first robot in the world to run a biomimetic operating system inspired by the brain, meaning that Miro acts more like a pet and less like a robot.

• Zenbo – Asus’s Zenbo is a home healthcare assistant but is not yet available to consumers. It will be able to control home connected devices, provide security, do different online tasks and interact with humans.

• Aibo – first developed in 1998, Aibo is Sony’s robot dog. Recent developments have given Aibo expressions that are incredibly lifelike, dynamic movement, loveable behavior and none of the fur that real dogs leave behind.

• Gita Cargo-Bot – From Piaggio, Gita was developed as a helping hand, carrying things for you so you can get on with other things. It matches human mobility levels, allowing it to go just about anywhere, can autonomously navigate a mapped area and can follow your own movements.

Industrial Robots

• Valkyrie Robot – developed as a collaboration between the University of Edinburgh and NASA, Valkyrie is probably the most advanced of all humanoid robots. Designed to work in hazardous environments, Valkyrie was developed to, one day, help to set up colonies and safe habitats on Mars.

• Atlas – developed by Boston Dynamics, Atlas is one of the most mobile robots, with the ability to balance while doing things like carrying items; even if pushed, it will maintain balance.

• Spot Mini – another robot from Boston Dynamics, Spot Mini is designed for multiple functions, such as manufacturing, security and delivery. It can easily manage uneven terrain, both indoors and outdoors.

• HRP-5P – an advanced humanoid robot, HRP-5P comes from AIST and is a research robot developed to help with both manufacturing and building processes. It can use power tools and it can even handle larger objects, such as sheets of drywall.

• Baxter – from Rethink Robotics, Baxter was first seen in 2011, one of the very first collaborative robots. It has a screen as a face and can display facial expressions that indicate its mood. Rather than being programmed, it can be physically taught to do something, with the computer then memorizing the task so it can be autonomously repeated.

The robotics field isn’t just useful; it is dynamic and varied too. Each of the robots mentioned above is representative of breakthroughs in science and all are an opportunity for all of us to achieve new things.

What Are Some of the Home Robots?

In 2002, the first-ever robot for home automation hit the shelves - the Roomba vacuum cleaner. Since then, we have come a long way and now there are plenty of home robots available for anyone to purchase and these are just 13 of them. In no particular order and all available from Amazon:

Roomba

Redeveloped, Roomba is now controllable on your mobile phone via Wi-Fi or you can link it to your Alexa and Google Voice Assistant device. As well as cleaning your home, it will identify and remember areas that are particularly dirty, like high-traffic areas. It can plug itself in for charging and will then resume once the battery is charged.

Roomba costs approximately $275.

Cue the CleverBot

Just 9.5 inches tall, Cue the CleverBot comes with emotional intelligence, allowing it to develop one of three programmable personalities. Interaction is via text message and it can tell you jokes, respond with witty comments and even a meme or two.

Cue the CleverBot costs approximately $180.

Alfawise Magnetic

Roomba isn’t the only housework robot. Alfawise Magnetic comes equipped with microfiber pads that allow it to clean your windows and it has suction features to stop it falling off when it hangs on your window vertically.

Alfawise Magnetic costs approximately $160.

Segway miniPro

The Segway miniPro is a hands-free scooter that has the balance of a hoverboard and the safety levels of an electric scooter. It is hands-free – all you do is press the middle bar with your knees to control it. It has precision sensors for balance and you can control it via your smartphone using an app that has an anti-theft feature.

The Segway miniPro costs approximately $600.

Worx Landroid

Worx Landroid is a robotic lawnmower, designed to keep your lawn trimmed up every day. It isn’t loud and, when it starts raining or the battery starts to run low, it will return to its charging station by itself.

Worx Landroid costs approximately $917.

Beam Smart Presence System

Beam is the ultimate in spying and security, the robot that is there when you aren’t. With the ability to navigate by itself on a physical body with a video screen attached, Beam can be used by employers to check up on employees, by parents to make sure their kids are doing their homework and behaving, by friends who are miles apart to get together more easily, and much more. In short, Beam is the personal assistant that brings everything together.

Beam Smart Presence costs approximately $1995.

Stormtrooper

Stormtrooper is a robot that you can control using voice commands. It can navigate your home without bumping into anything and it can recognize faces, so it is able to tell whether you deem a person a friend, an enemy or just a stranger.

Stormtrooper costs approximately $300.

Lynx

Lynx has the same ability as Alexa to continually evolve, which means you can get your weather forecasts from it, play music, and draw up shopping lists and to-do lists. Lynx can teach you a few dance moves and some yoga, and it also has some security features. It can record a 30-second video if movement is detected when you are not there and it will send that footage straight to your smartphone.

Lynx costs approximately $800.

Appbot Riley 2.0

Appbot Riley is a home security system, a solution for those cameras and alarms that don’t cover your entire home. Using a Wi-Fi connection, Riley will navigate your home with motion detectors and a night-vision camera, looking for anything suspicious. It can send alerts straight to your smartphone and, with its built-in microphone you can also listen in or check in with anyone who may be at home.

Riley costs approximately $170.

Cozmo

Cozmo is the perfect companion for your child. A tiny robot, it is AI-powered and has a unique personality that adapts the longer it is with its owner. Its eyes are video cameras so you can see what Cozmo sees from your smartphone. It is also an educational tool that can play games or help your child learn to code.

Cozmo costs approximately $150.

Dolphin Nautilus Plus

A small robot, Dolphin is your very own pool boy. Equipped with scrubbing and vacuuming elements that it can switch by itself, Dolphin will keep your pool sparkling clean.

Dolphin Nautilus costs approximately $750.

CHiP

Want a dog but don’t have the time for a real one? CHiP is your friendly automated robot dog, one that you can train to do tricks, sit, play fetch and more. It can nuzzle you, show you affection and be your best friend.

CHiP costs approximately $100.

MiP

MiP is your very own personal home robot. Responding to both smartphone commands and hand gestures, MiP can fetch and carry for you all day. It works over Bluetooth and has its very own built-in carry tray.

MiP costs approximately $50.

These are just some of the very best home robots available right now. As you can see, there isn’t much a robot can’t do for you and, as they continue to evolve and developers push the technology further, we can expect robots to become even more intelligent – perhaps to the point where you don’t have to lift a finger in your own home.

What is Robotic Process Automation or RPA?

These days, more CIOs are turning to something called RPA (Robotic Process Automation). This emerging technology allows enterprise operations to be streamlined, resulting in a reduction in costs.

By using RPA, businesses can automate the more mundane business processes, those that are rules-based, allowing business users to put their time to more important work, such as serving customers.

There are those who consider RPA to be a mere stopgap on the way to intelligent automation built on machine learning and artificial intelligence – technology that can be trained to make predictions and judgments about future outputs.

What, Exactly, is RPA?

It is a technological application that is governed by both structured inputs and business logic.

RPA is aimed at the automation of business processes, providing tools that businesses can use to configure software robots that capture and interpret existing applications for processing transactions, manipulating data, triggering responses and communicating with a range of other digital systems.
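
To make that a little more concrete, here is a minimal, purely illustrative sketch in Python of what a rules-based software “bot” might look like. It is not how any real RPA platform works; the file name, field names and thresholds are hypothetical assumptions. The point is simply that the bot applies fixed business rules to structured input, with no judgment involved.

# A minimal sketch (not a real RPA platform) of a rules-based software "bot":
# it reads structured input and applies fixed business rules, nothing more.
# All field names, thresholds and the file name are hypothetical, for illustration only.
import csv

def process_claim(claim):
    """Apply simple, rules-based logic to one claim record."""
    amount = float(claim["amount"])
    if claim["policy_status"] != "active":
        return "reject"            # rule 1: inactive policies are rejected
    if amount <= 500:
        return "auto-approve"      # rule 2: small claims are approved automatically
    return "route-to-human"        # rule 3: anything else goes to a person

def run_bot(path):
    """Read a structured CSV of claims and trigger a response for each row."""
    with open(path, newline="") as f:
        for claim in csv.DictReader(f):
            decision = process_claim(claim)
            print(f"Claim {claim['claim_id']}: {decision}")

if __name__ == "__main__":
    run_bot("claims.csv")  # hypothetical file with claim_id, amount, policy_status columns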

RPA scenarios range from something simple, like generating automatic email responses, right up to the deployment of thousands of bots, each programmed to automate specific jobs inside an ERP system.

What Are the Benefits?

Using RPA gives businesses a better ability to reduce both human error and staffing costs. One bank used RPA to redesign its claims process: 85 bots ran 13 different processes, handling more than 1.5 million claim requests per year.

By doing this, the bank increased its operating capacity by the equivalent of 200 employees working full-time hours at around 30% of the cost of actually recruiting and employing that many new staff.

Typically, bots are much cheaper and easier to implement as they do not require deep integration into the business systems, and they don’t require any custom software either. Characteristics like these are vital to organizations that want to expand and grow without creating friction amongst employees and without adding too much expenditure.

Business users can also inject cognitive technology, such as speech recognition, machine learning, and natural language processing, into RPA to provide a huge boost to their automation efforts. This would allow high-order tasks to be automated, tasks that would previously have needed capabilities that only humans had, such as judgment and perception.

RPA implementations like this, where as many as 15 or 20 steps could be automated, belong to a value chain called IA, or Intelligent Automation. By the year 2020, it is expected that AI and automation will result in a drop of 65% in employee requirements in shared-service centers and, at the same time, the RPA market will hit $1 billion. By then, around 40% of large businesses are likely to have adopted RPA in one form or another – today, that number stands at under 10%.

What About the Pitfalls?

RPA certainly will not suit all businesses. Like any automation technology, it has the potential to eliminate some jobs, and that presents a huge challenge to CIOs in managing workplace talent. While enterprises that embrace RPA try to keep as many employees as possible by transitioning them to new positions, it is estimated that as much as 9% of the global workforce – at least 230 million knowledge workers – will be threatened by RPA.

Even if CIOs can get around this, more RPA implementations fail than succeed. Multiple programs have already been placed on indefinite hold and, in some cases, CIOs have outright refused to take on any more bots.

It has proved more expensive and time-consuming to install thousands of bots than most businesses hoped; much of this is down to the fact that platforms change and bots are not always configured with the flexibility needed to adapt. On top of that, even a minor change to a regulation can throw off months of work on a bot that is almost complete.

According to a recently released Deloitte UK study, only about 3% of businesses have successfully scaled RPA to a level of at least 50 robots. Nor are there any assurances about the economic outcomes of an RPA implementation: while it is entirely possible that 30% of tasks could be automated for most occupations, that does not automatically equate to a 30% reduction in costs.

Which Companies Use RPA?

Some of the companies currently deploying RPA solutions include Anthem, Ernst & Young, AT&T, Deutsche Bank, Walmart, Vanguard, Walgreens and American Express Global Business Travel, among many others. The CIO of Walmart, Clay Johnson, says that the retailer has already deployed around 500 bots, automating jobs ranging from answering employees’ questions to analyzing audit documents and retrieving useful information from them.

The CIO of American Express Global Business Travel, David Thompson, uses RPA to automate the process of cancelling airline tickets and issuing refunds. The company is also planning to use the technology to automatically recommend rebooking options when an airport is closed and to automate certain expense-management tasks.

At the end of the day, there isn’t any magic formula for the implementation of RPA but it does require a business to have the long-term ethos of intelligent automation. And there are no guarantees that, right now, it will work or that it is the right solution for any individual business.

Can Robots Gain Consciousness?

Over the last few years we have made some pretty impressive advancements in the fields of robotics and computer science. One good example of the speed at which things can change is Moore’s Law.

Back in 1965, Gordon Moore observed that the number of transistors that could fit on a silicon chip doubled roughly every year. That is an exponential growth pattern; although the doubling period has since been revised to around every two years, it cannot be ignored that transistors have now shrunk to the nanoscale.
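
As a rough, back-of-the-envelope illustration of what that exponential pattern means, the short Python sketch below assumes a starting point of roughly 2,300 transistors (about the level of Intel’s first microprocessor in 1971) and a doubling period of two years; the figures it prints are illustrative estimates, not measured data.

# Back-of-the-envelope illustration of Moore's Law as exponential growth.
# The starting count and doubling period are illustrative assumptions, not measured data.
start_year, start_count = 1971, 2300     # roughly the transistor count of Intel's first microprocessor
doubling_period_years = 2                # the commonly cited revised doubling period

for year in range(start_year, 2022, 10):
    count = start_count * 2 ** ((year - start_year) / doubling_period_years)
    print(f"{year}: ~{count:,.0f} transistors")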

In robotics, engineers are creating new machines that have several articulation points, and some even have sensors that gather data about the environment they are in, which is what lets the robot navigate its way around obstacles. No matter what industry we talk about, there is no doubt that robots are having a huge impact.

Although robots and computers are more advanced now than ever before, they still cannot be seen as anything more than tools. They are useful, especially for tasks that would put human life at risk, that are just too difficult for humans, or that are simply too time-consuming. But neither a robot nor a computer is aware that it even exists, and each can perform only the tasks it is programmed for. What would happen, though, if a robot could think for itself? We see it often enough in movies and books, but can it actually happen?

So, Can They Gain Consciousness?

That is not an easy question to answer because we are still very much in the dark about human consciousness. Yes, scientists can now create algorithms that claim to simulate human thinking, but only on a superficial level; actually giving machines consciousness remains well beyond our grasp and, for now, beyond the realms of possibility.

Part of the issue is in actually defining consciousness. According to Eric Schwitzgebel, a professor of philosophy at the University of California, it is easiest to explain the concept by using examples of what consciousness is and isn’t.

He says that we can label vivid sensations as being part of consciousness and, yes, you could argue that using sensors, robots can experience some of the things we label as sensations – at the very least, they can detect them. But he also points out that there are various other elements of consciousness, such as visual imagery, inner speech (we all have that little voice in our heads), dreams and emotions, that robots simply cannot experience.

However, there is no small amount of disagreement among philosophers about how consciousness can and cannot be defined. At best, most of them agree that the brain is where consciousness resides, but none of us fully understands the mechanisms behind it. And without that understanding, it could prove impossible to give machines consciousness.

Yes, we can create robots that mimic thought and can, in some cases, detect emotion. Programming can give a robot the ability to recognize patterns and respond to them but, far from being aware of itself, it is merely responding to a series of commands.
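
To see why pattern-matching is not awareness, consider the deliberately simple, hypothetical sketch below: the “robot” just looks up a canned reply for each pattern its sensors report. Every pattern name and response here is invented purely for illustration.

# A deliberately simple, hypothetical sketch: the "robot" maps detected patterns
# to pre-programmed responses. Nothing here involves awareness or understanding.
responses = {
    "smile detected": "I'm glad you're happy!",
    "frown detected": "Is something wrong?",
    "unknown face":   "Hello, I don't think we've met.",
}

def react(detected_pattern):
    # The robot only looks up a canned reply; it has no idea what the words mean.
    return responses.get(detected_pattern, "...")

print(react("smile detected"))   # -> I'm glad you're happy!
print(react("loud noise"))       # -> ... (no programmed response)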

It is also possible that computer scientists and neurologists could one day develop an artificial model of a brain that produces consciousness. The problem is far from trivial, though: we don’t yet fully understand the way the brain works, so for now we could not possibly build an adequate model that creates consciousness.

However, in spite of the challenges, scientists and engineers the world over are working on creating artificial consciousness, though it remains to be seen whether it will ever come to pass. The sensible assumption is that it may happen in the future – and, if it does, what then?

Robots are People

Creating artificial consciousness and giving it to robots could raise many serious ethical questions. If a robot were self-aware, could it react negatively to the situation it is in? Could a robot then object to being used as nothing more than a tool? Would it have feelings of its own?

There is much debate on this subject and, because no artificially conscious machine has yet been created, we cannot say what such a machine will or won’t do, or how it may react.

But if we do give robots self-reflective abilities, we may have to seriously reconsider how we think of them. At what point does a robot have enough consciousness and intelligence that we have to consider giving it the same legal rights that humans have? Or will robots simply remain tools, albeit conscious ones, seeing themselves as slaves of a kind?

We’ve all seen the movies where robots take over the world, movies like “The Terminator” or “The Matrix”, and the scenarios in those movies rely entirely on one concept – recursive self-improvement. But what is this?

It refers to a robot’s ability to examine itself, find ways in which its own design could be improved, and then tweak itself or even build a new, improved version. Every new generation of robots would be that much smarter and better designed than the previous one. According to the futurist Ray Kurzweil, machines will eventually become so adept at improving themselves that technology will begin evolving far faster than we could possibly keep up with.
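
As a purely hypothetical toy illustration of the idea – no real system works this way today – the sketch below has each “generation” propose a small random tweak to its own design and keep that tweak only if its self-assessed score improves.

# A toy, purely hypothetical illustration of recursive self-improvement:
# each generation scores its own "design" and keeps any random tweak that improves it.
# This only illustrates the concept; it is not how any real robot is built.
import random

def design_score(design):
    # A stand-in for "how good this design is" (higher is better).
    return -(design - 42) ** 2

design = 0.0
for generation in range(1, 11):
    tweak = design + random.uniform(-5, 5)          # propose a small change to itself
    if design_score(tweak) > design_score(design):  # keep the change only if it is better
        design = tweak
    print(f"generation {generation}: design = {design:.2f}")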

In a world like this, what would happen to humans? Some scenarios have us merging with the robots while others would have the robots determine that humans are no longer useful nor necessary. At best, we would be ignored; at worst, humans would be wiped out.

Obviously, this is the stuff of movies and books. Even if we did manage to create artificial consciousness, it is highly unlikely that robots would act like us or share the same emotions and thoughts that humans have. It may well be that we simply cannot recreate them. But just in case scientists one day do, you might want to start treating your computer a little more nicely!

What Are Some of the Ethical Questions of Life with Robots?

Over the last few years there have been some terrific advances in both AI and robotics, and both have the potential to transform the way we live and work through the analysis of large datasets, the automation of repetitive tasks, enhanced logistics and fraud detection.

They also have the potential to make things a whole lot safer by outsourcing jobs that are too dangerous for humans to do. As we see these two technologies evolve, we must expect that our lives will change, become more efficient and, quite possibly, be a whole lot more meaningful simply because we will have the time to concentrate on the important tasks in our lives.

As we go through this evolutionary period, both academics and technology leaders have pressed upon us the importance of conducting very thorough risk and ethics assessments. All new technology comes with a certain amount of risk, but integrating AI and robots into society raises huge concerns that go way beyond privacy and data collection, along with a number of moral questions that need to be addressed now, before it is too late.

These are some of them.

1. If we are soon to see the end of many jobs, what is the potential for a rise in inequality? As more jobs become automated, it is assumed that humans will begin to take on more complex roles. This can be compared to the transition from labor-intensive physical work to cognitive work, but it really isn’t that simple.

Take Tesla, for example. Elon Musk has promised that self-driving trucks will be a reality and, if adoption is as wide as the company expects, there will need to be systems in place to ensure that truck drivers can seamlessly transition to different roles. Presently, most families survive by trading work for income, so this transition has the potential to create a post-work society. While it is hoped that individuals will still be able to find work, there is no clear indication of what that work might be.

While it is true to say that the use of technology can free us up to focus on other things, such as our families, more meaningful tasks, and so on, we still don’t really know whether those workers that the technology replaces will have the right skills to find another job.

Currently, the economy is based on a model of compensation for contribution and many workers are paid hourly. So if we don’t start thinking about and restructuring that economy to sustain a potential post-work society, AI and robotics could potentially be the cause of no small amount of inequality. If we ignore the implications, we are heading for trouble with the potential for violence, crime, and chaos as everyone attempts to find their place and survive in a new world.

2. What consequences can we expect from AI mistakes and biases?

The way we train AI systems has real potential for dangerous consequences. Take Sophia, a well-known humanoid robot, for example. Developed by Hanson Robotics, Sophia was given citizenship in Saudi Arabia. That same robot also experienced a bit of a technical glitch while giving a demonstration at the SXSW show; when she was asked, jokingly, if she would destroy humans, her answer was, “Ok I will destroy humans”.

Not only that. A study was done on interactions between the bots used to edit articles on Wikipedia between 2001 and 2010. Although the bots were supposed to support Wikipedia, more often than not they undid edits made by other bots and even engaged in arguments.

When these interactions were compared to human interactions on the same platform, it was noted that the bot interactions took place over long periods of time and were more likely to be reciprocated. And, in the same way that humans do, they also acted differently when cultural environments changed, with even basic bots capable of quite complex interaction. One thing we must take the time to discover is what affects interactions between bots and ensure that sufficient cybersecurity measures are in place for new innovations.

Looking beyond the potential for aggression and glitches, we mustn’t forget that these systems can make mistakes. No matter how much an AI can learn or how well it detects patterns in its input, training cannot cover every eventuality a system may face in the real world.

No matter how smart they are, they can still be fooled in ways that humans perhaps would not be. So if we do absorb AI and robotics into our workforces, we must ensure there are mechanisms to guard against bias and to prevent humans from exploiting and overpowering these systems for their own ends.

3. What would happen if AI and robotic systems were breached?

We live in a time when ransomware attacks are normal and we should expect there to be attacks on AI systems and robots. If we were to go ahead with plans to replace our soldiers with autonomous weapons and robots, we run a very real risk that they could be taken over by rogue governments and criminals.

And this alone makes it critical that we have cybersecurity in place to protect both AI and robots. Even if the technology wasn’t developed specifically for warfare, there is still the potential for attacks on autonomous vehicles and factory robots, for them to be used against us as weapons of some description.

Before society goes ahead with rapid adoption and incorporation of the technology, we must first come up with ways of building systems that are not open to attack or breach. The problem we have is that these systems will be faster and far more capable than we are and it may well be impossible to achieve cybersecurity that can’t be penetrated.

We have to face facts – AI and robotics are not going anywhere. They are here to stay and we need to find ways of moving forward with them, both ethically and securely. This might mean developing income models for a post-work society, developing resources that help us build other skills, learning about robot interactions, responding effectively to glitches and maintaining the very highest level of security.

How Can We Successfully Adapt to Life with Robots?

By now, most people should be thinking about a future with robots, thinking about how they can adapt to a new way of living and working.

History shows us that most of us are very slow to adapt to new technologies, taking time to make ourselves comfortable and familiar with it. The problem we face with AI and robotics is that both technologies are growing incredibly fast and now is the time to start thinking about life with them.

One of the most important questions to ask is this – “How do we adapt for a future with robots?”

There isn’t a definitive answer to that but it is important to keep in mind that robotics and AI are being designed to improve our lives, not take them over and, as we start to see more AI tools take on once-human tasks, we may actually forget what the real purpose of robotics was in the beginning. We must keep in mind that robots are here to serve us and not the other way around.

On your journey to adapting to life with robots you need to ask yourself these questions:

1. What can you do to help society adapt to a future of working and living with robots?
2. What information do world leaders and politicians need about AI and robotics in order to be successful?
3. How do we educate our children about the implications and uses of robotics?
4. How do we bring about an increase in the educators needed to communicate how important robotic tools are and to teach us how to work with robotic aides?
5. How can education institutions develop programs that will provide us with positive information regarding robots and their uses?
6. What are the ethical guidelines we should have in place and how do we make sure they are effectively implemented?
7. How do we ensure that all socioeconomic groups can make use of robotic tools?
8. How can we avoid the potential for harm or drawbacks that robots may cause?
9. How do we ensure that all of society is able to learn about robotics and benefit from them, not just the wealthy classes?
10. How do we ensure that we don’t lose our interpersonal skills in a world where many people will spend their waking hours only with robots?

More Conversation, More Education

What we do need, if we are to successfully adapt to a future with robots, is more public education and more conversation about how robots will integrate into society and what roles they will play.

We need to address the evolution of their roles as they become more human-like and more popular; what relationship they will have with humans and how our own roles are likely to change.

And all this needs to be done BEFORE we are inundated with humanoid robots. If we do not prepare ourselves for the changes ahead, we could find ourselves immersed in a period of social unrest, with lots of angry, confused and unhappy people.

At the end of the day, forewarned is forearmed; if we don’t prepare ourselves now, we have no way of successfully adapting to the inevitable future of a life with robots.

Economic and Social Consequences of Using Robots

Technology is progressing in leaps and bounds and robots are just one small step in that progress. Over the last few years, the number of robots in use by companies for increasing productivity has gone up significantly and we have absolutely no reason to think that it’s going to slow down anytime soon.

In fact, with the cost of robots falling as fast as their capabilities are increasing, and with robot density still quite low in most industries, the IFR (International Federation of Robotics) believes that annual robot installations will keep growing at double-digit rates for some time to come.

Two of the biggest economic challenges of the 21st century are slow productivity gains and rising inequality. Using more robots should have a positive effect on productivity, but it will also bring some inevitable negative effects, particularly for inequality.

We are starting to see literature released on the use of robots and the impact they will have on society, but much of it is based on small amounts of research and large amounts of supposition. That said, more studies are now being undertaken, and the results indicate that robots will increase productivity, raise wages and raise the total demand for labor – but those who benefit the most will be highly skilled workers. As more robots are used, along with computers and other machines, it will be the low- and middle-skilled, lower-waged workers who suffer the most.

These studies have determined that the impact of robots on productivity is comparable to the contribution made by the introduction of steam trains. And while robotics still lags behind ICT (Information and Communications Technology), it is worth bearing in mind that total ICT capital value far exceeds the current capital value of robots.

While some of the productivity gains from higher robot density are passed on in part to workers through higher wages, there is still an issue: different skill and income groups do not share the benefits equally, which means robots add even more to income inequality. To ensure that a broader reach of the population can benefit from roboticization, two things must be done.

Skills and Education

We need to have a serious rethink of the education system. As machines and robots are perfectly capable of taking over large numbers of tasks, humans need to focus on the advantages they still have, including non-cognitive skills. Not only that, many advanced countries, the USA in particular, must stop the trend of using family wealth and income to determine the level of education students receive – and reverse it, fast. This system causes a significant rise in inequality and, even if politicians adopt the changes needed, continuing technological progress will still lead to a certain amount of inequality because we all have different skills and financial circumstances.

Spreading Ownership

This inequality means there is a real need to shift income from the rich to the poor, as well as from owners to workers. There are, in theory, three possibilities that could mitigate or partially offset the decline in labor’s share of income:

1. Using minimum wages or collective bargaining to bring about higher wages
2. Using tax-and-spend policies to redistribute wealth
3. Ensuring that robot rents are distributed more equally by spreading capital ownership.

Traditionally, we have used the first two methods to redistribute income gains and profits, and there is no doubt that they will be used again. However, there are strict limits on what and how much they can achieve. Indeed, in a situation where robots compete with low- and medium-skilled workers, raising the minimum wage will only serve to intensify the substitution of labor with capital.

As it turns out, a promising solution to this challenge is for everyone to have a significant ownership stake in the machines.

If workers are unable to gain income from capital as well as from labor, there will be an increasing trend towards inequality in income distribution and, increasingly, we will see the world turning towards a new type of economic feudalism. Business capital ownership must be widened if we want to avoid this polarization. Employee ownership could be the answer because it means that workers earn from both labor and capital.

Conclusion

Thank you for taking the time to read my Quick Guide to Robots and Robotics. I hope that you found it useful, that you now have a better understanding of the future we face, and that some of your quite valid fears have been allayed.

Despite the fearmongers telling us that robots are going to take over the world, it is becoming clearer by the day just how useful these machines are.

They are already in use all over the world. We’ve all heard of automated transport – these are self-driving robots. They are used in the military, not just for finding and detonating bombs and mines but also to monitor enemy activity and report back and even, in some cases, to fire weapons.

They are used extensively in the medical field, particularly as surgical robots; they are used in education, home maintenance, security, surveillance, dangerous jobs and factory work, and there are plenty of home robots too – voice-based home assistants, vacuum cleaners, lawnmowers and more.

The applications for robots in today’s modern, digital world are vast – too vast to discuss here. The truth of the matter is this: they are here, they are already working and, yes, they are already taking some of our jobs. That can’t be helped, but one thing is clear – we must learn to live with robots and adapt our lives to co-exist with them. Those who can’t will be left behind.