
Integration of Robotic Platforms in a Communicating Environment with Application in the Aid of Elderly

Oana-Teodora IOVA supervised by Jean-Pierre MERLET

Table of Contents

1 Introduction
 1.1 A Short History of Robots
 1.2 The COPRIN Team
2 Constructing and Programming the Robots
 2.1 Lynxmotion Aluminium 4WD1 Rover
 2.2 Lynxmotion AL5A Robotic Arm
 2.3 PobBot Golden Pack
 2.4 SoccerBot
3 Integration of the Robots in a Communicating Environment
 3.1 Hardware
 3.2 Geometrical Model
 3.3 Software
  3.3.1 Robot Controllers
  3.3.2 Algorithm
4 Conclusions
References
Annex 1: Algorithm without PID
Annex 2: Algorithm with PID


1 Introduction

1.1 A Short History of Robots

Since the beginnings of civilization, people have wanted to make things that would assist them. After discovering mechanics and the means of creating complex mechanisms that could perform repetitive functions, they created objects such as waterwheels and pumps. Technological advances were slow, but a small number of more complex machines performed more grandiose functions, such as those invented by Hero of Alexandria (a steam-powered device, a wind wheel).

In 1495 Leonardo da Vinci designed a mechanical device that looked like an armoured knight. The mechanisms inside made the knight sit up, wave its arms, and move its head via a flexible neck while opening and closing its jaw.

The word robot comes from the Czech word robota, meaning drudgery or slave-like labour. It was first used to describe fabricated workers in a 1920 play by the Czech author Karel Capek called Rossum's Universal Robots. In the story, a scientist invents robots to help people by performing simple, repetitive tasks. However, once the robots are used to fight wars, they turn on their human owners and take over the world.

In 1941 the science fiction writer Isaac Asimov first used the word robotics to describe the technology of robots and predicted the rise of a powerful robot industry. The following year, Asimov wrote Runaround, a story about robots which contained the Three Laws of Robotics:

1. A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Real robots did not become possible until the 1950s and 1960s, with the invention of transistors and integrated circuits. Compact, reliable electronics and a growing industry added brains to the brawn of already existing machines. In 1961 the first industrial robot, Unimate (from "universal automation") (Fig. 1a), was installed in the General Motors automobile factory in New Jersey. In 1963 the first artificial robotic arm to be controlled by a computer was designed: the Rancho Arm (Fig. 1b) was intended as a tool for the handicapped, and its six joints gave it the flexibility of a human arm.

Fig. 1a Unimate Robot   Fig. 1b Rancho Arm Robot

Nowadays, a robot is a machine able to extract information from its environment and use knowledge about its world to move safely in a meaningful and purposive manner. Currently, there are many types of robots, classified by their use:

- industrial robots: these usually consist of a jointed arm (a multi-linked manipulator) and an end effector (frequently a gripper) attached to a fixed surface. Typical applications include welding, assembly, pick and place, packaging and palletizing, product inspection, and testing, all accomplished with high endurance, speed, and precision.
- military robots: autonomous robots or remote-controlled devices designed for military applications, such as taking surveillance photographs, launching missiles at ground targets without a pilot, patrolling around a military base, or even using small-arms weapons by remote control (Fig. 2a).
- medical robots: these can be used in surgery (Fig. 2b), lifting and moving patients, assisting patients in recovery, etc.
- Automated Guided Vehicles (AGVs): these are used for transporting material inside large or oversized buildings like hospitals, container ports, and warehouses, using wires or markers placed in the floor, or lasers, or vision, to sense the environment they operate in. An advanced form of the AGV is the SGV, or Self Guided Vehicle, which can be taught to navigate autonomously within a space.
- service robots: used in house cleaning, care for the elderly, or cleaning up hazardous waste. In this category we can also include humanoid robots, such as ASIMO (Fig. 2c), originally developed to assist people. It can walk, climb stairs, and run, but is currently not capable of operating autonomously in any real work environment.

Fig. 2a The SWORDS Robot   Fig. 2b A laparoscopic robotic surgery machine   Fig. 2c ASIMO

1.2 The COPRIN Team

The COPRIN team (Constraints solving, OPtimisation, Robust INterval analysis) is based at INRIA Sophia Antipolis - Méditerranée. The head of the team is Jean-Pierre Merlet, who was also my supervisor. Sophia Antipolis is a technology park northwest of Antibes and southwest of Nice, France, created between 1970 and 1984. Several institutions and companies in the fields of computer science, electronics, biotechnology and mathematics are located there, along with the European headquarters of the W3C.

The research topic of the COPRIN team is solving systems of constraints using both consistency methods and interval analysis. Furthermore, symbolic computation is systematically used to specialize the solving algorithms according to the structure of the problem, for better efficiency.

The second major research axis of the project is robotics, especially the design of new structures that must satisfy stringent performance requirements while taking into account the uncertainties that are unavoidable in robotized systems. The mathematical tools developed in the first research axis are especially useful for this kind of problem.

Two years ago the team started a strategic move towards assistance robots. The long-term goal is to provide robotized devices for assistance, including smart objects, that may help disabled, elderly and handicapped people in their personal life. These devices will be adapted to the end-users and to their everyday environment, so they should be affordable and controllable through a large variety of simple interfaces.

One of the projects that the COPRIN team is involved in right now is the Large Scale Initiative Action - Personally Assisted Living (LSIA Pal) project. The objective of this project is to create a research infrastructure that will enable experiments with technologies for improving the quality of life of persons who have suffered a loss of autonomy through age, illness or accident. In particular, the project seeks to enable the development of technologies that can provide services for elderly and fragile persons, as well as for their immediate family, caregivers and social groups.

One of the crucial problems addressed in this project is the prevention and detection of falls, and activity monitoring. Existing telehomecare systems cause many false alarms and therefore become unusable in the real world [16]. As a result, a great amount of experimental analysis and validation is needed to ensure robust data and video analysis that detects risky situations and reduces false alarms.

Other projects [5, 10] that addressed the problem of detecting falls used an omni-directional camera (map cam), which is easier to use than manipulating multiple traditional cameras. Still, this is not accurate enough: turning the lights on and off leaves static abandoned objects in the image, giving the impression of multiple targets in the environment. The MAIA team [9] is working on a new device based on intelligent tiles, which can detect a person falling on the ground; this is a non-intrusive approach that uses sensors placed on the floor. The Ivy Project [2] takes another approach to detecting falls, creating a sensor network in the environment that detects when a person has sustained a fall. Besides the sensors in the surroundings, only one more sensor is used: an accelerometer placed on the person's waist.

The COPRIN team chose to address this task in two ways. First, the elderly person will be using a motorized walking aid, which provides help with mobility, but also assistance in case of a fall. Second, a small mobile robot, equipped with a camera and a first aid kit, moves towards the place where the person has fallen and takes appropriate measures according to the information received.


2 Constructing and Programming the Robots

The first part of the internship consisted of constructing and programming a set of robots that would later be integrated into an environment dedicated to assisting the elderly. All these robots came as kits, more or less assembled, and needed to be mounted and programmed in C, under Linux, in order to be compatible with the other robots in the project.

2.1 Lynxmotion Aluminium 4WD1 Rover

The Lynxmotion Aluminium 4WD1 [7] is a robust, modifiable, and expandable chassis for experimentation. It has excellent traction due to its RC truck tires and wheels. The chassis is made from heavy-duty anodized aluminium structural brackets and laser-cut Lexan panels.

To mount the robot we had to solder the wires and the capacitors to the motors, place the motors into the chassis side brackets, attach the bottom Lexan panel, and fit the motor shafts and tires. To control the motors we used two SyRen 25A regenerative motor drivers [17], one for the left-side motors and one for the right-side motors, obtaining a vehicle with differential drive steering, just like a tank. The onboard switches allowed us to select one of four operating modes: analog input, RC input, simplified serial, or packetized serial. To make the SyRen easy to interface to a microcontroller, we chose the RC input, which takes one or two standard RC channels and uses them to set the speed and direction of the motor.

The last steps in mounting the robot were to attach the battery, the power switch and the top Lexan panel. The result can be seen in Fig. 3.

Fig. 3 The Lynxmotion 4WD1 Rover

After constructing the robot came the programming part. Using an SSC-32 servo controller from Lynxmotion and an RS-232 serial port, we connected the robot to the computer and created functions to control it.
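To give an idea of this interface, here is a minimal C sketch (not the team's actual code) that sends one position command to the SSC-32 under Linux. The SSC-32 accepts ASCII commands of the form "#<channel> P<pulse width in µs> T<time in ms>" terminated by a carriage return; the device path /dev/ttyS0 and the 115200 baud rate are assumptions to be adjusted to the actual setup.

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <termios.h>
    #include <unistd.h>

    int main(void)
    {
        /* Open the serial port (the path is an assumption). */
        int fd = open("/dev/ttyS0", O_RDWR | O_NOCTTY);
        if (fd < 0) { perror("open"); return 1; }

        /* 8 data bits, no parity, receiver enabled, assumed 115200 baud. */
        struct termios tio;
        memset(&tio, 0, sizeof(tio));
        tio.c_cflag = CS8 | CLOCAL | CREAD;
        cfsetispeed(&tio, B115200);
        cfsetospeed(&tio, B115200);
        tcsetattr(fd, TCSANOW, &tio);

        /* Move channel 0 to the centre position (1500 us) in 1000 ms. */
        const char *cmd = "#0 P1500 T1000\r";
        write(fd, cmd, strlen(cmd));

        close(fd);
        return 0;
    }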

2.2 Lynxmotion AL5A Robotic Arm

The AL5A [8] from Lynxmotion is a robotic arm with four degrees of freedom. It delivers fast, accurate and repeatable movement, and has base rotation, a single-plane shoulder, elbow and wrist motion, and a functional gripper.


The construction of the arm was the most complicated of all the robots, because it has a lot of components: black anodized aluminium brackets, aluminium tubing and hubs, custom injection-moulded components, precision laser-cut Lexan components, and five different servos.

After mounting all the components (Fig. 4a), we connected the arm to the SSC-32 servo controller and, using an RS-232 serial port for the PC connection, we were able to control it.

Fig. 4a The Lynxmotion AL5A Arm Fig. 4b The PobBot Golden Pack Fig. 4c The SoccerBot

2.3 PobBot Golden Pack

The PobBot Golden Pack is a mobile robot from Pob Technologies [14] equipped with a 2-axis motorized gripper that can pick up and move objects. It also has a mechanical base, an intelligent colour camera, an LCD graphical screen, an I/O module and a distance sensor (Fig. 4b). The camera (PobEye) is the eyes, the heart and the head of the robot, controlling all the other components.

The connection to the PC is done through a serial port located on the PobEye. Since the camera is the centre of the application, all commands for the servos, the mechanical base, or any other component of the robot must be sent to the PobEye.

The robot can be programmed in C/C++, Java or Basic, but only using the provided software, PobTools, because in the end the program is converted into a .hex file and uploaded to the camera. Although the documentation states that the software is compatible with Windows, MacOS and Linux, we did not manage to make it work under any of the Linux distributions available in the lab. After many discussions and emails exchanged with the support team from Pob Technology, they finally delivered working software and we were able to program the robot.

2.4 SoccerBot

The SoccerBot kit from QFix Robotics [15] is a good platform for robots that must move in arbitrary directions. Mounting the kit was really easy: we only had to put together the three omni wheels, the motor bearings, the gear motors and the stable aluminium base plate (Fig. 4c). The SoccerBot's controller board has 8 analog inputs, 8 digital inputs, 8 power outputs and 6 motor outputs, and can be used with any DC source from 7 V to 12 V.

The connection to the PC is done using a USB cable. The kit also contains a CD with a C++ class called SoccerBoard, which was very useful in creating the functions we needed to control the robot.


3 Integration of the Robots in a Communicating Environment

The second part of the internship consisted of integrating the constructed robots into an environment dedicated to assisting the elderly. As said before, the COPRIN team has an ongoing project for improving the quality of life of persons who have suffered a loss of autonomy through age, illness or accident. The task of the robot was therefore to follow a wall and take the first aid kit to a person who has fallen.

3.1 Hardware

To solve this task, we decided to build a mobile robot using the Lynxmotion Aluminium 4WD1 rover, the AL5A robotic arm, the PhidgetAdvancedServo [12], the PhidgetSBC Interface Kit [13], two infrared sensors and one ultrasonic sensor. To provide enough energy to the motors and the onboard electronics, we used two 12 V battery packs. The resulting robot can be seen in the figure below.

Fig. 5 The Robot

The PhidgetAdvancedServo 8-Motor (Fig. 6a) allows us to control the position, velocity, and acceleration of up to 8 RC servo motors. In our case, we used it to control the two SyRen motor drivers and the five servos of the arm. Instead of sending the desired position immediately, the PhidgetAdvancedServo sends a series of progressive positions according to acceleration and velocity parameters, which dramatically smooths the operation of the servo and allows reasonably precise control of position, velocity and acceleration.

Fig. 6a The PhidgetAdvancedServo 8-Motor   Fig. 6b The PhidgetSBC Interface Kit
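As an illustration of how such a controller is driven from C, the following sketch uses the phidget21 C library to ramp one channel towards a target position. It is a hedged example, not the code running on the robot: the channel index and the motion values are placeholders.

    #include <stdio.h>
    #include <phidget21.h>

    int main(void)
    {
        CPhidgetAdvancedServoHandle servo = 0;
        CPhidgetAdvancedServo_create(&servo);
        CPhidget_open((CPhidgetHandle)servo, -1);   /* -1: any serial number */

        if (CPhidget_waitForAttachment((CPhidgetHandle)servo, 10000)) {
            fprintf(stderr, "AdvancedServo not attached\n");
            return 1;
        }

        /* Limit acceleration and velocity so the board ramps the position
         * smoothly instead of jumping to it (values are placeholders). */
        CPhidgetAdvancedServo_setAcceleration(servo, 0, 5000.0);
        CPhidgetAdvancedServo_setVelocityLimit(servo, 0, 100.0);
        CPhidgetAdvancedServo_setEngaged(servo, 0, 1);
        CPhidgetAdvancedServo_setPosition(servo, 0, 120.0);

        getchar();                                  /* let the move finish */
        CPhidget_close((CPhidgetHandle)servo);
        CPhidget_delete((CPhidgetHandle)servo);
        return 0;
    }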

8

To make the robot autonomous, we used a PhidgetSBC (Fig. 6b), a fully functional single-board computer running Linux, with Java and C libraries. This allows the PhidgetSBC to operate autonomously, without the need for a graphical interface or a permanent remote connection.

We used two IR sensors, mounted on the right side of the robot, to measure the distance from the wall: they are simple, commonly employed, low-cost sensing modalities for the wall-following task, and were preferable to ultrasonic sensors due to their faster response time and narrower beam width. The IR sensors used were Sharp GP2D12 sensors with IR Distance Adapter Boards, which prevent overcurrent. They measure distances from 10 cm to 80 cm and produce values from 80 to approximately 500. The formula translating sensor values into distance is:

Distance (cm) = 4800 / (SensorValue - 20) (1)
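A direct C translation of formula (1), clamped to the 10-80 cm validity range stated above, might look as follows (a sketch, not the exact code used on the robot):

    /* Convert a GP2D12 reading to centimetres, per formula (1). */
    double ir_to_cm(int sensor_value)
    {
        if (sensor_value <= 20)            /* below the usable range */
            return -1.0;                   /* flag an invalid reading */
        double d = 4800.0 / (sensor_value - 20.0);
        if (d < 10.0) d = 10.0;            /* readings outside 10-80 cm */
        if (d > 80.0) d = 80.0;            /* are not reliable          */
        return d;
    }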

We also used an ultrasonic sensor, placed at the front of the robot, to detect obstacles and try to avoid them. This is an LV-MaxSonar-EZ0 sensor, capable of detecting objects up to 6.45 meters away.

We can observe the command structure of the robot in the figure below:

Fig. 7 The command system of the robot: the front (FS) and back (BS) IR sensors and the ultrasonic sensor (US) are connected to the inputs of the PhidgetSBC, whose output drives the left (LM) and right (RM) motors through the PhidgetAdvancedServo.

The signals given by the ultrasonic sensor and the two IR sensors (FS and BS) are transmitted to the PhidgetSBC Interface Kit and, after the computations, the results are used for the speed control of the motors LM and RM.

3.2 Geometrical Model

The robot can be seen as a nonholonomic system, since its motion is implicitly constrained. In mobile robotics, a car-like robot has three degrees of freedom in the plane: surging (moving forward and backwards), swaying (moving left and right) and yawing (turning left and right). However, although three coordinates are needed to describe its pose, at any instant the robot can only move by a forward/backwards motion and a steering angle. So, because it has three degrees of freedom that are subject to constraints, the robot is a nonholonomic system.

The constraint that allows us to control three degrees of freedom using only two commands is:

ẋ sin θ - ẏ cos θ = 0 (2)

where ẋ is the rate of change of the horizontal position, ẏ is the rate of change of the vertical position, and θ is the orientation of the robot with respect to the horizontal axis.
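To see why constraint (2) holds, one can substitute the usual differential-drive kinematics ẋ = v cos θ, ẏ = v sin θ: then ẋ sin θ - ẏ cos θ = v cos θ sin θ - v sin θ cos θ = 0 for any commands (v, ω). The short C sketch below (with arbitrary demo commands, not values from our experiments) integrates this pose model with Euler steps:

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        double x = 0, y = 0, th = 0;
        const double v = 0.3, w = 0.1, dt = 0.01;  /* arbitrary demo commands */

        /* Euler integration of x' = v cos th, y' = v sin th, th' = w,
         * which satisfies constraint (2) at every step. */
        for (int i = 0; i < 1000; i++) {
            x  += v * cos(th) * dt;
            y  += v * sin(th) * dt;
            th += w * dt;
        }
        printf("pose after 10 s: x=%.2f y=%.2f th=%.2f\n", x, y, th);
        return 0;
    }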


3.3 Software

In order to follow a wall, the robot may be equipped with a camera, and image processing can be used, as in [1, 3]; the downsides of this method are that cameras are expensive and they fail to work when there is insufficient light. In [6] the authors constructed a two-link sensorized antenna; the robot receives feedback about the environment from it and sends the proper commands to the motors. This is an interesting idea, bio-inspired from cockroaches, but during experiments the results for convex walls were not very satisfactory. A Fuzzy Logic Controller is used in [11] to drive a robot, equipped with ultrasonic sensors (two on one side and one in the front), parallel to the wall. In [4] the authors used two IR sensors cross-mounted in front of the robot, but during tests errors were registered because some of the signals emitted by one sensor were received by the other.

3.3.1 Robot Controllers

The control strategies for mobile robots can be divided into open loop and closed loop strategies. In open loop control, the inputs, such as distance or speed, are calculated beforehand from the knowledge of the start and end positions. This technique cannot handle disturbances (e.g. different traction from the wheels) or model errors, nor can it correct a parameter that goes wrong.

On the other hand, closed loop strategies can compensate for errors as they occur in real time, since the inputs are based on the actual conditions and not on predicted ones. Because of this, disturbances causing deviations from the desired state can be compensated through the input data.

One widely used closed loop controller is the PID (proportional-integral-derivative) controller. It calculates the difference between a measured variable and a predefined setpoint as the error of the process, and then tries to minimize it by adjusting the inputs. The proportional term determines the reaction to the current error, the integral term calculates the reaction based on the sum of the previous errors, and the derivative term determines the reaction based on the rate of change of the error.

3.3.2 Algorithm

The first version of the algorithm does not take the PID controller into consideration and reacts only to the current error (it can be seen as a simple proportional controller). As we can observe in Annex 1, the algorithm calculates the distance from the wall and the angle between the robot and the wall. The angle θ can be calculated thanks to the two IR sensors, using the following formulas:

θ (radians) = arctan ((front_value – back_value) / 14) (3)

θ (degrees) = 360 * arctan ((front_value – back_value) / 14) / (2 * π) (4)

where front_value and back_value are the values returned by the two IR sensors, and 14 is the distance between them, in centimetres.
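In C, the angle computation from formulas (3)-(4) is essentially a one-liner around atan (a sketch; the function name is ours):

    #include <math.h>

    /* 14 cm between the two IR sensors, as in formulas (3)-(4). */
    #define SENSOR_BASELINE_CM 14.0

    /* Angle between the robot and the wall, in degrees. */
    double wall_angle_deg(double front_value, double back_value)
    {
        return atan((front_value - back_value) / SENSOR_BASELINE_CM)
               * 180.0 / M_PI;
    }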

In this algorithm we calculate the mean of every three consecutive values returned by the sensors, and then the error, which is the difference between this mean and a predefined setpoint (20 cm in our case). To take into account the noise in the values returned by the sensors, we also set a margin of error (marge) of 2 cm. Once all these variables are set, we can distinguish three cases:


- The error is greater than marge: if the angle is smaller than -15°, it means that the robot has to move away from the wall, so it goes left; if the angle is greater than -5°, the robot is too far, so it has to go right. If the angle is between these two values, it can go forward.
- The error is smaller than -marge, which means that the robot is too close to the wall: if the angle is greater than 15°, it will move to the right, and if it is smaller than 5°, it will move to the left; otherwise it will go forward.
- The error is between -marge and marge: in this case the robot should go forward; still, if the angle is greater than 5°, it will tend to drift to the left, so to correct this the robot goes a little faster to the right. Likewise, if the angle is smaller than -5°, the robot will go to the left.

We have tested the algorithm in two cases:

- The robot is following a straight wall of 5.70 meters, at a distance of 20 cm.
- The robot is following a straight wall of 11.50 meters with three doors, also at a distance of 20 cm.

As we can observe in Fig. 8, the trajectory of the robot tends to be sinusoidal. This happens because each time it is too far from the wall and tries to get closer, it arrives too close, and then it moves away, but it goes too far, and so on. This behaviour is more evident in the second experiment (Fig. 9), when the wall is longer and has three doors. Even though the doors are closed, they disturb the robot's trajectory, being sensed by the sensors as farther away.

Fig. 8 Distance between the robot and the wall


Fig. 9 Distance between the robot and the wall with three doors

Even though the chassis was designed carefully to be balanced, after loading it with the battery packs and all the electronics it became unbalanced. This caused a veering of the robot: it tends to go farther away from the wall and has trouble returning closer. To compensate for this, we set a higher speed on the left wheels when the robot moves forward, but this was still not enough.

To correct all these errors, we decided to implement a PID controller. The PID controller gets two inputs: the error (the difference between the setpoint value and the actual position of the robot) and the angle θ between the robot and the wall. The output is a change in the speed of the wheels. To correct the position of the robot, the output is calculated with the formula:

output = Kp × error + Ko × θ + Kd × (error - previous_error) / Δt (5)

Then, the speed of the left wheels is modified in accordance with output, and the speed of the right wheels with -output. When the robot is moving forward, it keeps a constant speed of 30% of the maximum speed.
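A minimal C sketch of this controller, assuming our own names for the state and the 30% cruise speed given above (the gains are placeholders, not the tuned values):

    typedef struct {
        double kp, ko, kd;     /* gains from formula (5) */
        double prev_error;     /* error at the previous step */
    } wall_pid;

    /* One update of formula (5). */
    double pid_output(wall_pid *c, double error, double theta, double dt)
    {
        double derivative = (error - c->prev_error) / dt;
        c->prev_error = error;
        return c->kp * error + c->ko * theta + c->kd * derivative;
    }

    /* Differential steering around the 30% cruise speed: the correction
     * is added on the left wheels and subtracted on the right ones. */
    void wheel_speeds(double output, double *left, double *right)
    {
        *left  = 30.0 + output;
        *right = 30.0 - output;
    }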

The gains of a PID controller (in our case Kp, Ko and Kd) are very important for this kind of system. If they are chosen incorrectly, the system becomes unstable or the steady-state error is too large. To find suitable gains, we made step-by-step changes in the input while measuring the output.

The implemented PID controller would work very well if the information received from the sensors were trustworthy. We found that the IR sensors return strange values if the robot is too close to the wall or too far away, so additional tuning of the algorithm had to be done. We continue to calculate the mean of consecutive values returned by the sensors, but this time every five values. Moreover, we manually set the speed of the wheels if the robot is too close (less than 9 cm) or too far (more than 50 cm) from the wall.

Another tuning we had to do concerned the resulting speed: if it was too small, the motors would not have enough power to rotate the wheels and, on the contrary, if it was too big, the motors would run the wheels too fast. In these cases we also decided to set the speed manually.
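Both tunings are simple to express in C. The sketch below assumes a five-sample window and the 20-60% speed band implied by the stall/overspeed limits above; the helper names are ours:

    #define WINDOW 5

    /* Mean of the last WINDOW sensor readings, to smooth out noise. */
    double moving_average(const double *samples)
    {
        double sum = 0.0;
        for (int i = 0; i < WINDOW; i++)
            sum += samples[i];
        return sum / WINDOW;
    }

    /* Keep the commanded speed in the band where the motors behave well. */
    double clamp_speed(double speed)
    {
        if (speed < 20.0) return 20.0;   /* below this the wheels stall   */
        if (speed > 60.0) return 60.0;   /* above this they spin too fast */
        return speed;
    }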

The full algorithm, which includes the PID controller and all the other adjustments,adjustments, can be found in Annex 2.

After implementing the PID controller and tuning the algorithm, we repeated the same tests. We can observe in Fig. 10 that the task of following the wall is performed much better, the trajectory of the robot being almost a straight line. Also, in the case of the longer wall with three doors we can observe a big improvement (Fig. 11): not only does the trajectory no longer have a sinusoidal form, but after passing the doors it becomes more stable.

Fig. 10 Distance between the robot and the wall


Fig. 11 Distance between the robot and the wall with three doors

After succeeding in making the robot follow the wall, we concentrated on obstacle avoidance and cornering, making use of the ultrasonic sensor. So, if the robot is following the wall and the ultrasonic sensor detects an object in front of it, we have the following cases (a compact sketch of this decision logic is given after the list):

- The distance to the object in front is between 50 and 70 cm: in this case, the robot slows down, so that it has time to take proper measures.
- The distance is smaller than 50 cm but greater than 25 cm, which means there is an object just in front of it (an obstacle or another wall): the robot turns left until there is nothing in front of it and then starts going forward. This manoeuvre proved to be very useful in corners.
- The distance is smaller than 25 cm, which means the ultrasonic sensor did not detect earlier that there is something in front, so the robot goes backwards until it reaches one of the above distances.
- Because the robot is following a wall, the ultrasonic sensor can return misleading values. That is why we also verify the angle θ: if the distance is between 40 and 70 cm and |θ| > 45°, or the distance is greater than 70 cm and |θ| <= 45°, there is no obstacle in front (the object the ultrasonic sensor detected is, in fact, the wall the robot is following).
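Put together, these cases reduce to a small decision function. The following is a compact, illustrative C sketch under the thresholds above (see Annex 2 for the full algorithm; the type and function names are ours):

    #include <math.h>

    typedef enum { GO_FORWARD, SLOW_DOWN, TURN_LEFT, GO_BACKWARDS } action_t;

    action_t obstacle_action(double us_cm, double theta_deg)
    {
        double t = fabs(theta_deg);

        if (us_cm < 25.0)              return GO_BACKWARDS; /* detected too late  */
        if (us_cm < 50.0)              return TURN_LEFT;    /* obstacle or corner */
        if (us_cm < 70.0 && t <= 45.0) return SLOW_DOWN;    /* something ahead    */
        /* Otherwise the echo is the followed wall itself, or the way is clear. */
        return GO_FORWARD;
    }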

4 Conclusions

During this internship, we constructed a differential drive mobile robot that uses two IR sensors and one ultrasonic sensor to move in its environment, and a robotic arm for taking the first aid kit to a person who has fallen. The implemented algorithm uses the feedback from the sensors and transmits it to the PID controller, which uses it to maintain a constant distance from the wall.

As we can see from the experiments, the robot successfully follows a wall, even if it is not a straight one. Also, the use of a PID controller has proven very useful, the results being better than in the case of a simple proportional controller.

However, the results could be improved if the values returned by the sensors were more accurate, especially when the distance between the robot and the wall is less than 10 cm. This also depends on the surface reflectance properties, which can differ from one wall to another, or even within the same wall, if it has doors, for example.

Further research could be done with better sensors and by adding more features, such as knowing the map of the room and going to a given place. This would be very useful for taking the first aid kit to the person who has fallen.


References

[1] R. Carelli, C. Soria, O. Nasisi, E. Freire, Stable AGV corridor navigation with fused vision-based control signals, Proceedings of the IEEE Industrial Electronics Society, IECON, Sevilla, Spain, November 2002, p. 2433 – 2438

[2] J. Chen, K. Kwong, D. Chang, J. Luk, R. Bajcsy, Wearable Sensors for Reliable Fall Detection, Proceedings of the 2005 IEEE Engineering in Medicine and Biology 27th Annual Conference, China, September 2005, p. 3551 – 3554

[3] A. Dev, B. Krose, F. Groen, Navigation of a mobile robot on the temporal development of the optic flow , Proceedings of the IEEE/RSJ/GI Int. Conf. on Intelligent Robots and Systems IROS’97, Grenoble, September 1997, p. 558-563

[4] I. Gavrilut, V. Tiponut, A. Gacsadi, L. Tepelea, Wall-Following Method for an Autonomous Mobile Robot using Two IR Sensors, Proceedings of the 12th WSEAS International Conference on Systems, Greece, 2008, p. 205 – 209

[5] Y.C. Huang, S.G. Miaou, T.Y. Liao, A Human Fall Detection System Using an Omni-Directional Camera in Practical Environments for Health Care Applications, IAPR Conference on Machine Vision Applications, Japan, May 2009, p. 455 – 458

[6] A.G. Lamperski, O.Y. Loh, B.L. Kutscher, N.J. Cowan, Dynamical wall-following for a wheeled robot using a passive tactile sensor, Proceedings of the IEEE Int. Conf. on Robotics and Automation, April 2005, p. 3838 – 3843

[7] Lynx 4WD1: http://www.lynxmotion.com/c-119-auton-combo-kit.aspx

[8] Lynx AL5A: http://www.lynxmotion.com/c-124-al5a.aspx

[9] MAIA: http://maia.loria.fr

[10] S.G. Miaou, P.H. Sung, C.Y. Huang, A Customized Human Fall Detection System Using Omni-Camera Images and Personal Information, Proceedings of the 1st Distributed Diagnosis and Home HealthCare (D2H2) Conference, USA, April 2006, p. 39 – 42

[11] V.M. Peri, A. Simon, Fuzzy Logic Control for an Autonomous Robot , Fuzzy Information Processing Society, NAFIPS, June 2005, p. 337 – 342

[12] PhidgetAdvancedServo: http://www.phidgets.com/products.php?product_id=1061

[13] PhidgetSBC: http://www.phidgets.com/products.php?product_id=1070

[14] Pob Technology: http://www.pob-technology.com

[15] QFix: http://www.qfix-robotics.de

[16] K. Roback, A. Herzog, Home Informatics in Healthcare: Assessment Guidelines to Keep Up Quality of Care and Avoid Adverse Effects , Technology and Health Care, March 2003, p. 195 – 206

[17] SyRen 25A: http://www.dimensionengineering.com/SyRen25.htm


Annex 1: Algorithm without PID

BEGIN

SET error to real_distance - setpoint
SET marge to 2
SET θ to 360 * arctan((front_value - back_value) / 14) / (2 * π)

IF error > marge THEN
    IF error > 100 THEN
        there is nothing on the right side
    ELSE IF θ > -5° THEN
        go right with 50% speed
    ELSE IF θ < -15° THEN
        go left with 30% speed
    ELSE
        go forward with 30% speed
    END IF
ELSE IF error < -marge THEN
    IF θ > 15° THEN
        go right with 50% speed
    ELSE IF θ < 5° THEN
        go left with 30% speed
    ELSE
        go forward with 30% speed
    END IF
ELSE IF |error| <= marge THEN
    IF θ > 5° THEN
        go right with 50% speed
    ELSE IF θ < -5° THEN
        go left with 30% speed
    ELSE
        go forward with 30% speed
    END IF
END IF

END


Annex 2: Algorithm with PID

BEGIN

SET error to real_distance - setpoint
SET θ to 360 * arctan((front_value - back_value) / 14) / (2 * π)
SET output to Kp × error + Ko × θ + Kd × (error - previous_error) / Δt
SET Mean to the mean of the last five sensor readings

IF Mean <= 9 THEN
    SET Speed to 60%
ELSE IF Mean > 9 AND Mean < 50 THEN
    SET Speed to output%
ELSE IF Mean >= 50 THEN
    SET Speed to 20%
END IF

IF Speed < 20% THEN
    SET Speed to 20%
ELSE IF Speed > 60% THEN
    SET Speed to 60%
END IF

IF Mean >= 80 THEN
    IF US_value < 25 cm THEN
        go backwards with 20% speed
    ELSE IF US_value >= 25 AND US_value <= 40 THEN
        turn left with 20% speed
    ELSE IF US_value > 40 THEN
        go forward with 30% speed
    END IF
ELSE IF Mean > 6 AND Mean < 80 THEN
    IF US_value < 25 cm THEN
        go backwards with 20% speed
    ELSE IF US_value >= 25 AND US_value <= 50 THEN
        turn left with 20% speed
    ELSE IF US_value > 50 AND US_value < 70 AND |θ| <= 45° THEN
        there is an obstacle close, go slower
    ELSE IF US_value > 70 AND |θ| <= 45° THEN
        there is no obstacle, move with Speed% of the maximal speed
    ELSE IF US_value > 40 AND |θ| > 45° THEN
        IF θ > 0 THEN
            turn right with 20% speed
        ELSE
            turn left with 20% speed
        END IF
    END IF
END IF

END
