University of Florida
Department of Electrical and Computer Engineering
EEL 4665/5666 Intelligent Machines Design Laboratory
Written Report

Student Name: Lin Wang
Robot Name: Shake Shake Shake
E-Mail: [email protected]
Instructors: Dr. A. Antonio Arroyo, Dr. Eric M. Schwartz
TAs: Ryan Chilton, Josh Weaver

Table of Contents

Abstract
Executive Summary
Introduction
Integrated system
Mobile platform
Actuation
Sensors
Behaviors
Experimental layout and results
Conclusion
Documentation
Appendices

Abstract

Shake Shake Shake is a robot that shakes dice. Playing dice with a robot can be fun: a player can interact with the robot to make it move around, shake the dice, and indicate the color result of the shaken dice by turning on the corresponding LED. The robot moves around and performs collision avoidance. When its bumper is touched, it stops and shakes the dice, streams video from the iPhone's WebCamera app to my laptop, processes the video with OpenCV, transmits the color-recognition result back to the board over XBee, and turns on the matching LED.

Executive Summary

Shake Shake Shake is a dice-shaking robot. It can move around, shake dice, and report the resulting color of the dice. The platform is made of wood, and the robot is built on the Epiphany DIY board.

First of all, two motors with clamps are fixed at the front to drive the robot, and for stability there is one caster at the back. The robot can move forward, move backward, turn right, and turn left. When the robot turns, only one motor runs while the other stops.

Secondly, I mounted two sonars at the front of my robot, one on the left and one on the right, to detect whether there is something in front of the robot so it can avoid collisions. When there is an obstacle on the right, it turns left; when there is an obstacle on the left, it turns right. When something is directly ahead, that is, when both sonars detect an obstacle, the robot backs up and then turns randomly, either right or left.
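A minimal sketch of this decision logic, using the motor and ADC helpers from the Epiphany DIY library that appear in Appendix A. The threshold value and the left/right motor numbering are assumptions here, and rand() from <stdlib.h> stands in for the random turn choice (the program in Appendix A always turns the same way after backing up):

    #include <stdlib.h>                    /* rand(), standing in for the random turn */

    #define SONAR_THRESHOLD 238            /* hand-tuned; see the Sensors section */

    /* Sketch only: motor 1 = left wheel, motor 2 = right wheel (assumed). */
    void avoidObstacles(void)
    {
        int left  = analogRead(&ADCA, 0);  /* left sonar reading (assumed channel) */
        int right = analogRead(&ADCA, 1);  /* right sonar reading (assumed channel) */

        if (left < SONAR_THRESHOLD && right < SONAR_THRESHOLD) {
            /* Blocked straight ahead: back up, then turn a random way */
            setMotorEffort(1, 700, MOTOR_DIR_BACKWARD);
            setMotorEffort(2, 700, MOTOR_DIR_BACKWARD);
            _delay_ms(1500);
            if (rand() & 1) setMotorEffort(1, 0, MOTOR_DIR_FORWARD);
            else            setMotorEffort(2, 0, MOTOR_DIR_FORWARD);
        } else if (left < SONAR_THRESHOLD) {
            setMotorEffort(1, 700, MOTOR_DIR_FORWARD);  /* obstacle left: */
            setMotorEffort(2, 0,   MOTOR_DIR_FORWARD);  /* stop right wheel, turn right */
        } else if (right < SONAR_THRESHOLD) {
            setMotorEffort(1, 0,   MOTOR_DIR_FORWARD);  /* obstacle right: */
            setMotorEffort(2, 700, MOTOR_DIR_FORWARD);  /* stop left wheel, turn left */
        }
    }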

Thirdly, I use one servo to rotate the stick connected to the dice box. The servo's rotation range is about 0 to 180 degrees. It sweeps the full 180 degrees three times, true to the robot's name Shake Shake Shake, and then stops at the position under the camera's view.
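This is the shaking motion as implemented in Appendix A, on servo channel 7 (the channel number and delays are taken from that code):

    /* Shake the dice box three times by sweeping servo channel 7
       through its full range, then park it under the camera. */
    for (int i = 0; i < 3; i++) {
        setServoAngle(7, 180);          /* swing all the way up */
        _delay_ms(500);
        for (int j = 180; j >= 0; j -= 30) {
            setServoAngle(7, j);        /* step back down in 30-degree increments */
            _delay_ms(100);
        }
    }
    setServoAngle(7, 180);              /* rest position, under the camera view */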

Fourthly, there is a bumper at the back that triggers the robot to stop roaming and start shaking the dice. Once the bumper is touched, the robot stops in place, rotates the servo, and runs through everything that follows for the dice. I made the dice myself instead of using a real one: it has three colors, red, green, and yellow, with two faces of each color. The dice box is made of stiff paper, and its top cover is transparent plastic so the camera can see the dice without taking it out of the box. An iPhone sits on top of the box to capture live video and transmit it back to my computer for processing; an app called WebCamera makes my iPhone act as a web camera.

Next, for communication between my computer and my robot, I chose XBee. One XBee module is on the board; the other is connected to my computer through a USB port. After OpenCV has recognized the color of the dice, it transmits a character back to the board to turn on an LED. There are three LEDs, red, blue, and white, corresponding to red, green, and yellow on the dice, respectively.
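On the board side, the received character selects an LED pattern on PORTB, exactly as in Appendix A (the bit-to-LED wiring is specific to my robot):

    /* Map the character received over XBee to an LED pattern on PORTB.
       Bit assignments follow the wiring used in Appendix A. */
    if (dataInBufE1()) {
        char key = usartE1_getchar();
        if      (key == 'g') PORTB_OUT = 0b00000111;  /* green detected */
        else if (key == 'y') PORTB_OUT = 0b00000001;  /* yellow detected */
        else if (key == 'r') PORTB_OUT = 0b00000100;  /* red detected */
    }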

After the robot finishes one full cycle of the dice routine, it continues to move around until someone touches the bumper again.

Introduction

‘Shake Shake Shake’ is the name of my robot. The idea comes from a traditional Chinese game, Majiang (Mahjong). At the beginning of the game, one of the four players shakes dice to decide where to start, and I wanted my robot to play this role. On further consideration I changed the idea, because I did not want my robot simply standing on the table shaking dice, which would be rather boring. Instead, the robot goes and finds a player to play with.

Therefore, the general idea of my robot is this: first of all, it moves around and performs collision avoidance. Then, whenever a player wants to play and touches the bumper at the back of the robot, the robot stops and plays dice, with LEDs reporting the result. This is more interesting than the original idea.

The integrated system section gives a flow chart of how the robot works. The mobile platform section introduces the structure of my robot, the actuation section covers the motors, and the sensors section introduces all the sensors I used. The behaviors section describes everything my robot can do, the experimental results section shows how OpenCV performs the color recognition, the conclusion follows, and all the code is in the appendices.

Integrated system

Figure 1 Epiphany DIY board (front side)

Figure 2 Epiphany DIY board (back side)

The figures above show both sides of the Epiphany DIY board. There is a port for two motor drivers, a servo port with a voltage regulator, and an ADC port for the sonars. Other ports supply 5 V, which is used by the bumpers, LEDs, and servo. On the back side there is a port for the XBee. The whole system is powered at 12 V from eight AA batteries.

Figure 3 Flow Chart of This System

The figure above is the flow chart of how my robot works, as described in the Executive Summary section.

Mobile platform

Figure 4 General View of Shake Shake Shake

The picture above shows the structure of my robot: two motors with two small wheels at the front, one caster at the back, two sonars at the front, three LEDs (red, blue, and white), a white box containing the dice (colored red, green, and yellow), a servo that rotates the stick connected to the dice box, eight 1.5 V batteries under the Epiphany DIY board, bumpers at the back, and an iPhone on top. There is also a switch for the batteries.

It is hard to put the iPhone back in exactly the same place each time I take it off. Something with a color similar to the dice can then enter the camera's view, and this kind of noise sometimes keeps the camera from reporting the right color. So every time I put the iPhone on the robot, I have to adjust its position until nothing else in the view can disturb the color recognition. The other lesson learned is that the original positions of the box and the camera were not good enough: the box could hit the camera while shaking, so I raised the camera a little.

The wires on the board are rather a mess. I should have had a removable cover to make my robot look nicer.

Actuation

I use two motors for actuation. The motor I selected is a 12 V, 300 rpm, 25 mm DC gear motor, which needs a 12 V power supply. The problem I encountered was that the wheel bore and the motor shaft did not match well, so the two slipped relative to each other, which is not what I wanted. My solution was to fill the wheel's hole with glue and drill a new hole sized to the motor shaft, so the two are fixed together and work correctly. Now my robot can move forward and backward, and it can turn left or right by stopping one of the motors, as sketched below.
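A sketch of the drive primitives built on the setMotorEffort helper from Appendix A; which motor number sits on which side is an assumption for illustration (here motor 1 is taken as the left wheel):

    /* Differential-drive primitives using setMotorEffort from Appendix A.
       Motor numbering (1 = left, 2 = right) is assumed for this sketch. */
    void driveForward(void)  { setMotorEffort(1, 700, MOTOR_DIR_FORWARD);
                               setMotorEffort(2, 700, MOTOR_DIR_FORWARD); }
    void driveBackward(void) { setMotorEffort(1, 700, MOTOR_DIR_BACKWARD);
                               setMotorEffort(2, 700, MOTOR_DIR_BACKWARD); }
    void turnLeft(void)      { setMotorEffort(1, 0,   MOTOR_DIR_FORWARD);   /* left wheel stops... */
                               setMotorEffort(2, 700, MOTOR_DIR_FORWARD); } /* ...robot pivots left */
    void turnRight(void)     { setMotorEffort(1, 700, MOTOR_DIR_FORWARD);   /* right wheel stops... */
                               setMotorEffort(2, 0,   MOTOR_DIR_FORWARD); } /* ...robot pivots right */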

Sensors

Basically, I use three kinds of sensors: bumpers, sonars, and a camera. There are two bumpers at the back of the robot, two sonars at the front, and one IP camera used to identify the result of the shaken dice.

 IP Camera: I use an iPhone with an app called Mobiola WebCamera, which streams video from the phone's camera to my laptop as if from an IP camera. I capture the stream in an OpenCV program. This camera is used to identify the color on the top face of the shaken dice.

The lesson learned is that the camera's position really affects the recognition result: in the wrong position the dice box can hit it while shaking, and objects with colors similar to the dice can enter the view and interfere with recognition. As a result, I had to find the right height for the camera and keep anything else colored red, green, or yellow out of its view.

 Bumper: There is one at the back right and one at the back left. When a collision occurs, the robot moves backward, changes its heading, and then continues forward. I plan to attach a longer bar to the bumper so it can sense a larger area and the robot will not get stuck against a corner or a table leg where no bumper is present to react.

A 3.3 V supply is applied to one end of the bumper switch, with the other end connected to an interrupt pin through a 10 kΩ pull-down resistor. When pressed, the switch closes and the signal pin goes high. Once the pin reads high, the program jumps into the collision-avoidance mechanism.
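In software the pin is simply polled, as in Appendix A, where the bumpers are read from PORTC and bit 0 is one of the two switches:

    /* Poll the bumpers on PORTC (as in Appendix A). With the 10 kΩ pull-down,
       each pin stays low until its switch closes to the 3.3 V supply. */
    int bumper = PORTC_IN;            /* sample all PORTC inputs at once */
    if ((bumper & 0x01) == 0x01) {
        /* bit 0 is high: this bumper is pressed, so react */
    }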

 Sonar: There is one at the front right and one at the front left of the robot. When a sonar signals something ahead, the robot does something similar to the bumper response: it changes direction in place and then continues forward. I am also considering a redesign: I want the robot to sense someone in front, stop there, and see whether that person would like to play. If so, the game starts; if not, the robot goes somewhere else to find somebody else to play with. This part needs further detailed consideration.

The sonars are really the hardest part of the robot to control. I found that their readings vary widely and irregularly, so I had to run many tests to find a usable threshold.
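One way to tame the jitter would be to average several readings before comparing against the threshold. This is only a sketch of that idea, assuming the analogRead helper and the XMEGA ADC_t handle used by the Epiphany DIY adc.h; the program in Appendix A compares single readings:

    /* Average n successive sonar readings to smooth the jitter.
       analogRead and the ADC handle come from the Epiphany DIY library. */
    int sonarAverage(ADC_t *adc, uint8_t channel, uint8_t n)
    {
        long sum = 0;
        for (uint8_t i = 0; i < n; i++) {
            sum += analogRead(adc, channel);
            _delay_ms(10);            /* give the sonar time between readings */
        }
        return (int)(sum / n);
    }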

Figure 5 LV-MaxSonar-EZ1 Circuit [1]

Behaviors

 Collision avoidance: The robot moves around on its two motor-driven wheels and a caster. To avoid collisions with obstacles, it uses the two sonars at the front, one on the left and the other on the right. When there is something on the left, it turns right; when there is something on the right, it turns left.

 Color recognition: When a bumper is touched, the robot stops roaming and the servo starts to shake the dice. Then the camera streams live video of the shaken dice in the box back to my laptop. OpenCV reads the video stream and performs color recognition. Once a certain color is recognized, a character is transmitted back to the board through XBee, and finally the corresponding LED turns on in the correct color.

 Lessons learned: at the beginning I set the color detection ranges too wide, so the program could not detect the right color and the binary images were full of noise. Therefore I narrowed the ranges for the three colors, and then it worked well and detected only the three dice colors in the lab environment.
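In OpenCV terms, the fix was to tighten the HSV bounds passed to cvInRangeS; the values below are the ones that ended up in Appendix B, tuned for the lab lighting:

    /* Narrowed HSV ranges (values from Appendix B, tuned under lab lighting).
       Wide ranges let background objects leak into the binary images as noise. */
    cvInRangeS(imgHSVRed,    cvScalar(170, 170,  80), cvScalar(180, 255, 255), imgThreshRed);
    cvInRangeS(imgHSVGreen,  cvScalar( 70,  80,  85), cvScalar( 87, 160, 150), imgThreshGreen);
    cvInRangeS(imgHSVYellow, cvScalar( 20, 140, 125), cvScalar( 40, 250, 210), imgThreshYellow);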

Experimental layout and results

Figure 6 Experimental Results When OpenCV is Running

I made the dice myself with three colors: red, yellow, and green. OpenCV then identifies the color of the dice. In the figure above, five windows appear while the OpenCV program runs. One window reports which color has been detected. The second window in the first row shows the original video stream from the iPhone, with the dice box in view; a red box is drawn around the dice whenever a color is detected, marking where the dice is. The three windows in the second row are binary images for green, yellow, and red, respectively: I create three binary video streams, one per color, and whenever a color is detected a white region appears in the corresponding window. The figure shown is for yellow; red and green behave the same way.

Conclusion

Realistic summary of work accomplished:
 Move around
 Do collision avoidance
 Trigger the game by touching the bumper
 Rotate the box with the servo
 Take live video and transmit the stream back to my laptop via the iPhone
 Detect the color of the dice using OpenCV
 Transmit a character back to the board via XBee
 Turn on LEDs

Limitations of the work: The main limitation is that I should have used a real dice, which would make the robot more interesting. While working out how to detect the pips on a real dice, I found there was a lot of noise and it was not easy to set a threshold for the white pips, so I finally decided to use a homemade dice with three colors.

Areas that exceeded expectations and areas that can be improved: The way the camera is mounted above the dice really needs to be fixed. For now I can only tape it to a piece of wood. There should be a way to fix its position so that every time I put the iPhone on the robot, it sits in the same place and has the same view as before.

Technical caveats for students to follow: Think thoroughly before taking action. Once a board is cut smaller than expected, it has to be replaced with a new one, which is a waste. The different parts of the robot should be arranged so they cooperate with each other properly. Finally, although it is easy to make each part work well on its own, it is really hard to make them all work together; a lot of testing is needed.

Future work: First, I will get familiar with SolidWorks, which is a really powerful and useful design tool. I know so little about the design side that my robot does not look that lovely in spite of its functions. Second, I will work on using a real dice instead of the homemade one.

Documentation

[1] http://www.pololu.com/catalog/product/726
[2] http://ootbrobotics.com/wiki/
[3] https://www.sparkfun.com/products/8502
[4] http://www.technical-recipes.com/2011/tracking-coloured-objects-in-video-using-opencv/

Appendices

 Appendix A: Atmel Studio program

#include <avr/io.h>
#include <avr/interrupt.h>
#include <util/delay.h>
#include <stdio.h>

#include "clock.h" #include "ATtinyServo.h" #include "uart.h" #include "adc.h" #include "motor.h"

int main(void)
{
    clockInit();
    motorInit();
    adcInit(&ADCA);
    ATtinyServoInit();
    usartInit(&USARTC0, 115200);   // sonar
    usartInit(&USARTE1, 9600);     // XBee
    PORTE_DIRSET = 0x80;           // XBee
    int range1, range2;            // range readings from the two sonars
    char key;

    PORTB_DIRSET = 0b00000111;     // set PORTB pins 0-2 as outputs to drive the LEDs
    sei();                         // enable global interrupts

    _delay_ms(500);
    fprintf(&USB_str, "Program Starts Now\r\n");
    adcChannelMux(&ADCA, 0, 0);    // select the ADC input channel for the sonars

    while (1) {

        int bumper = PORTC_IN;                      // sample the bumper inputs
        setMotorEffort(1, 700, MOTOR_DIR_FORWARD);  // go forward by default
        setMotorEffort(2, 700, MOTOR_DIR_FORWARD);
        range1 = analogRead(&ADCA, 0);              // range values of the sonars
        range2 = analogRead(&ADCA, 1);

if(range1<238 && range2<238) { setMotorEffort(1, 700, MOTOR_DIR_BACKWARD); setMotorEffort(2, 700, MOTOR_DIR_BACKWARD); _delay_ms(1500); setMotorEffort(1, 0, MOTOR_DIR_FORWARD); setMotorEffort(2, 700, MOTOR_DIR_FORWARD); _delay_ms(1500); } else if(range1<240) { setMotorEffort(1, 0, MOTOR_DIR_FORWARD); setMotorEffort(2, 700, MOTOR_DIR_FORWARD); _delay_ms(1500); } else if (range2<238) { setMotorEffort(1, 700, MOTOR_DIR_FORWARD); setMotorEffort(2, 0, MOTOR_DIR_FORWARD); _delay_ms(1500); } if ((bumper & 0x01) == 0x01) //if bumper 1, stop then forward { fprintf(&Xbee_str,"b");

            setMotorEffort(1, 0, MOTOR_DIR_FORWARD);
            setMotorEffort(2, 0, MOTOR_DIR_BACKWARD);
            _delay_ms(500);

for(int i=1; i<4;i++){ setServoAngle(7,180); _delay_ms(500); for(int j=180; j>=0; j-=30){ setServoAngle(7,j); _delay_ms(100); } } setServoAngle(7,180);

for(int i=64; i>1;i-=1) { if (dataInBufE1()) key = usartE1_getchar(); else i=0; }

fprintf(&Xbee_str,"b");

            if (dataInBufE1()) {
                key = usartE1_getchar();            // read the color-result character

                PORTB_OUT = 0b00000001;
                if (key == 'g') {
                    PORTB_OUT = 0b00000111;         // LED pattern for green
                    _delay_ms(10000);
                } else if (key == 'y') {
                    PORTB_OUT = 0b00000001;         // LED pattern for yellow
                    _delay_ms(100);
                } else if (key == 'r') {
                    PORTB_OUT = 0b00000100;         // LED pattern for red
                    _delay_ms(100);
                }
                _delay_ms(100);
            }
            _delay_ms(2000);
            PORTB_OUTCLR = 0b00000111;              // turn the LEDs back off
        }
    }
    return 0;
}

 Appendix B: OpenCV program

#include "stdafx.h" #include "cv.h" #include "highgui.h" #include "BlobResult.h" #include #include "transmit.h" using namespace std;

int main()
{
    CBlobResult blobs1, blobs2, blobs3;    // blob sets for red, yellow, green
    CBlob *currentBlob;
    CvPoint pt1, pt2;                      // bounding-box corners
    CvPoint ptct;                          // bounding-box center
    CvRect cvRect;
    int key = 0;
    IplImage* frame = 0;
    transmit toXbee;                       // serial link to the XBee

    // Initialize capturing live feed from video file or camera
    CvCapture* capture = cvCaptureFromCAM(0);   // 1 for PC, 0 for camera

    // Can't get device? Complain and quit
    if (!capture) {
        printf("Could not initialize capturing...\n");
        return -1;
    }

    // Get the frames per second
    int fps = (int)cvGetCaptureProperty(capture, CV_CAP_PROP_FPS);

    // Windows used to display the input video with bounding rectangles and the thresholded videos
    cvNamedWindow("video");
    cvNamedWindow("red");
    cvNamedWindow("yellow");
    cvNamedWindow("green");
    //cvNamedWindow( "thresh" );

    // An infinite loop, exited when 'x' is pressed
    while (key != 'x') {
        // If we couldn't grab a frame... quit
        if (!(frame = cvQueryFrame(capture)))
            break;

        // Convert the frame to HSV and threshold it once per color
        IplImage* imgHSVRed    = cvCreateImage(cvGetSize(frame), 8, 3);
        IplImage* imgHSVYellow = cvCreateImage(cvGetSize(frame), 8, 3);
        IplImage* imgHSVGreen  = cvCreateImage(cvGetSize(frame), 8, 3);
        cvCvtColor(frame, imgHSVRed,    CV_BGR2HSV);
        cvCvtColor(frame, imgHSVYellow, CV_BGR2HSV);
        cvCvtColor(frame, imgHSVGreen,  CV_BGR2HSV);
        IplImage* imgThreshRed    = cvCreateImage(cvGetSize(frame), 8, 1);
        IplImage* imgThreshYellow = cvCreateImage(cvGetSize(frame), 8, 1);
        IplImage* imgThreshGreen  = cvCreateImage(cvGetSize(frame), 8, 1);
        cvInRangeS(imgHSVRed,    cvScalar(170, 170,  80), cvScalar(180, 255, 255), imgThreshRed);    // red
        cvInRangeS(imgHSVGreen,  cvScalar( 70,  80,  85), cvScalar( 87, 160, 150), imgThreshGreen);  // green
        cvInRangeS(imgHSVYellow, cvScalar( 20, 140, 125), cvScalar( 40, 250, 210), imgThreshYellow); // yellow
        // Tidy up the intermediate HSV images
        cvReleaseImage(&imgHSVRed);
        cvReleaseImage(&imgHSVYellow);
        cvReleaseImage(&imgHSVGreen);

        // Get object's thresholded image (blue = white, rest = black)
        //IplImage* imgThresh = GetThresholdedImageHSV( frame );

        // Detect the white blobs against the black background
        blobs1 = CBlobResult(imgThreshRed,    NULL, 0);
        blobs2 = CBlobResult(imgThreshYellow, NULL, 0);
        blobs3 = CBlobResult(imgThreshGreen,  NULL, 0);
        // Exclude white blobs smaller than the given area (80);
        // the bigger the last parameter, the bigger a blob must be to survive
        blobs1.Filter(blobs1, B_EXCLUDE, CBlobGetArea(), B_LESS, 80);
        blobs2.Filter(blobs2, B_EXCLUDE, CBlobGetArea(), B_LESS, 80);
        blobs3.Filter(blobs3, B_EXCLUDE, CBlobGetArea(), B_LESS, 80);
        // Count the surviving blobs per color
        int num_blobsred    = blobs1.GetNumBlobs();
        int num_blobsyellow = blobs2.GetNumBlobs();
        int num_blobsgreen  = blobs3.GetNumBlobs();

        //************ if num_blobs > 0, tell the LED to shine *****************//
        if (num_blobsgreen > 0)  { toXbee.transmitgreen();  cout << "green"  << endl; }
        if (num_blobsyellow > 0) { toXbee.transmityellow(); cout << "yellow" << endl; }
        if (num_blobsred > 0)    { toXbee.transmitred();    cout << "red"    << endl; }

for ( int i = 0; i < num_blobsred; i++ ) { currentBlob = blobs1.GetBlob( i ); cvRect = currentBlob->GetBoundingBox();

pt1.x = cvRect.x; pt1.y = cvRect.y; pt2.x = cvRect.x + cvRect.width; pt2.y = cvRect.y + cvRect.height;

// Attach bounding rect to blob in orginal video input cvRectangle( frame, pt1, pt2, cvScalar(0, 0, 150, 0), 2, 8, 0 );

//cout<<"pt1:"<

ptct.x = (pt1.x + pt2.x)/2; ptct.y = (pt1.y + pt2.y)/2;

//cout<<"center:"<GetBoundingBox();

pt1.x = cvRect.x; pt1.y = cvRect.y; pt2.x = cvRect.x + cvRect.width; pt2.y = cvRect.y + cvRect.height;

// Attach bounding rect to blob in orginal video input cvRectangle( frame, pt1, pt2, cvScalar(0, 0, 150, 0), 2, 8, 0 );

//cout<<"pt1:"<

ptct.x = (pt1.x + pt2.x)/2; ptct.y = (pt1.y + pt2.y)/2;

//cout<<"center:"<GetBoundingBox();

pt1.x = cvRect.x; pt1.y = cvRect.y; pt2.x = cvRect.x + cvRect.width; pt2.y = cvRect.y + cvRect.height;

// Attach bounding rect to blob in orginal video input cvRectangle( frame, pt1, pt2, cvScalar(0, 0, 150, 0), 2, 8, 0 );

//cout<<"pt1:"<

ptct.x = (pt1.x + pt2.x)/2; ptct.y = (pt1.y + pt2.y)/2;

//cout<<"center:"<

        // Show the black-and-white (thresholded) images and the original
        cvShowImage("red",    imgThreshRed);
        cvShowImage("yellow", imgThreshYellow);
        cvShowImage("green",  imgThreshGreen);
        cvShowImage("video",  frame);

        // Optional - used to slow down the display of frames
        //key = cvWaitKey( 2000 / fps );

        // Wait 100 ms for a key press
        key = cvWaitKey(100);

        // Prevent memory leaks by releasing the thresholded images
        cvReleaseImage(&imgThreshRed);
        cvReleaseImage(&imgThreshYellow);
        cvReleaseImage(&imgThreshGreen);
    }

    // We're through with using the camera
    cvReleaseCapture(&capture);

    return 0;
}

// transmit.h
class transmit {
public:
    void transmitred();
    void transmityellow();
    void transmitgreen();
};

#include "transmit.h" #using using namespace System; using namespace System::IO::Ports; using namespace System::Threading; void transmit::transmitred(){ SerialPort^ serialPort = gcnew SerialPort( L"COM8", 9600, Parity::None, 8, StopBits::One); serialPort -> Open(); serialPort -> WriteLine("r"); serialPort -> Close(); } void transmit::transmityellow(){ SerialPort^ serialPort = gcnew SerialPort( L"COM8", 9600, Parity::None, 8, StopBits::One); serialPort -> Open(); serialPort -> WriteLine("y"); serialPort -> Close(); } void transmit::transmitgreen(){ SerialPort^ serialPort = gcnew SerialPort( L"COM8", 9600, Parity::None, 8, StopBits::One); serialPort -> Open(); serialPort -> WriteLine("g"); serialPort -> Close(); }