
Using Closed Feedback Loops to Evaluate Autonomous Performance

Tomás Collado, Hamzah Farooqi
[email protected], [email protected]

Tashu Gupta, Ashwindev Pazhetam, Robert Taylor
[email protected], [email protected], [email protected]

Kevin Ge* [email protected]

New Jersey's Governor's School of Engineering and Technology
July 18, 2020

*Corresponding Author

Abstract—Juggling requires humans to intake a vast amount of visual information, analyze it, and act accordingly, all at a speed that seems to surpass normal human reaction time. To study these dexterous human movements, this project aims to recreate juggling through a simulated robot that inputs feedback similar to how a human would. In this project, a robot was developed using a game development engine called Unity that performed a three ball cascade based on knowledge of the positions of the balls at all times, and, when this knowledge was restricted, was able to juggle using the initial velocities of the balls and simulated visual feedback. Comparing the two feedback systems yielded a better understanding as to how humans juggle, as well as how robots can be improved to perform advanced, dexterous movements.

I. INTRODUCTION

In the age of automation, advancements in robotics are essential for higher production rates and increased productivity, along with countless other benefits [1]. However, the tasks needed in a wide range of industries are relatively complex for modern technology, and thus trained human beings are required to take up those positions. For instance, biomedical products are often produced with high levels of complexity to mimic complicated human actions. Dexterous robots are needed to fulfill roles that would otherwise be occupied by trained human beings. Juggling, which requires the training of one's motor skills and hand-eye coordination, is an incredibly complex act for a dexterous machine to perform. This project seeks to program a robot to juggle in Unity by modeling complex juggling movements and providing information that humans would normally input. By doing so, the project opens doors to robotic automation in numerous industries, from performing medical procedures to packaging commercial products.

This project aspires to study the complex movements of humans, particularly in juggling, and recreate them through a simulated robot. Once a working simulation is completed, the goal of the project is to determine the minimum amount of feedback necessary to complete a successful juggle. By incrementally limiting the amount of information the robot receives, the necessity of certain data will be determined, which will allow for the creation of more dexterous and efficient feedback-dependent machines.

II. BACKGROUND

A. Juggling Logistics

The complex act of juggling can be summarized through a series of mathematical algorithms and physics principles. The pattern in which balls are thrown and caught is repetitive and predictable, allowing it to be represented simply. However, this project aims to mimic the practical human ability to juggle, and while the mathematical algorithms that comprise juggling theory describe juggling incredibly well, they also assume an ideal environment. Therefore, while the algorithms would produce perfect juggling patterns, the simulation would be highly unrealistic. Juggling theory refuses to take into account external environmental factors such as dissimilar throws and poor catches. Ideal juggling theory also assumes that there are only "throws" and "catches" within a juggle and neglects the "dwell time," or holding period, in which the ball rests in a hand between throws and catches [2]. Realistically, humans are not capable of instantaneously throwing a ball after catching it, and compensate by adjusting the initial velocity of the throw to preserve the pattern [3]. Humans also attempt to juggle in just two dimensions, keeping the horizontal distance between their body and the balls constant, to reduce the complexity of the act.

The juggling robot will face a similar issue to humans, as it is impossible to completely forgo dwell time and other factors that may compromise an ideal environment. Therefore, while juggling theory is consistent enough for a robot to be constructed based on it, such an implementation would not provide insight into how humans juggle. Thus, this robot will use juggling theory to define the structure of the throws, but will not rely on theory to reproduce realistic behaviors.

B. Complexity of Juggling

Part of the reason why juggling is so difficult for robots is its implementation of multiple complex human behaviors. Humans possess a unique spatial awareness that allows them to track the position and predict the path of the balls without dedicating all their focus on any individual ball. One theory as to why humans can juggle so accurately is dynamical prediction, which suggests that humans compute parabolic trajectories through an innate form of the kinematic equations. Another theory is that the motor cortex stimulates multiple muscles to perform complex movements [4]. The simulated robot within this project seeks to recreate these complex human behaviors, tackling the first theory with the velocity-based juggle and the latter with the position-based juggle.

C. Siteswaps and Ladders

A simple way to visually depict the throws and catches of the vast varieties of juggles is through a ladder diagram, which uses different colored lines to represent the paths of different balls throughout the juggle. Every time a colored line is in contact with one of the black lines, the ball is "caught" in either the left hand (the upper black line) or the right hand (the lower black line). When a colored line is between the two black lines, the ball is in the air. It is possible to determine how long the ball is in the air by looking at the vertical dashed lines, each representing one tick, an arbitrary unit of time used by jugglers to represent the time it takes to pass a ball from one hand to another. Each number on the diagram represents the numbered tick, starting at 1 and ending at 16.

Fig. 1. Three ball cascade ladder diagram

Figure 1 displays the ladder diagram for a 333 juggle, or a three ball cascade, which is the primary focus of this project. This diagram shows that the first ball (represented by the red line) is thrown immediately from the right hand. One tick after this, the second ball (blue line) is thrown from the left, and, a tick after that, the third ball (yellow line) is thrown from the right hand. One tick after all three balls are thrown, the first ball is caught in the left hand, and then thrown immediately. Every three ticks, the pattern of throwing and catching is repeated, and the pattern is offset by one tick for each ball, causing one catch and throw to occur at every tick. With this repetitive, constant pattern, each ball passes through both hands equally and spends the same amount of time in the air.

Another way to represent juggles is the vanilla siteswap notation, designed by Paul Klimek of the University of California, Bruce Tiemann of the California Institute of Technology, and Michael Day of the University of Cambridge in 1985 [5]. This notation represents a sequence of throws, with each number describing the ticks between the throwing and catching of the same ball. As an example of siteswap, a 441 juggle means that one ball is in the air for four ticks after being thrown from one hand, the second ball in the other hand is thrown at the next tick and is also in the air for four ticks, and then a third ball is passed from one hand to the other in a single tick. The three ball cascade is represented by the siteswap notation 333, meaning that each ball is thrown for three ticks. It may be useful to note that the number of balls used in a pattern can be found by taking the average of the numbers in the notation. For example, the 333 juggle requires three balls since those numbers average to three, while a juggle like the 71 averages to four, meaning it requires four balls.
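To make this averaging rule concrete, the short C# sketch below (an illustration written for this paper, not one of the project's scripts) computes the ball count of a vanilla siteswap consisting of single-digit throws:

using System;
using System.Linq;

public static class SiteswapSketch
{
    // The number of balls in a vanilla siteswap is the average of its
    // throw values, e.g. "333" -> 3, "441" -> 3, "71" -> 4.
    public static int BallCount(string pattern)
    {
        int sum = pattern.Sum(c => c - '0');
        if (sum % pattern.Length != 0)
            throw new ArgumentException("Invalid siteswap: average is not a whole number.");
        return sum / pattern.Length;
    }

    public static void Main()
    {
        Console.WriteLine(BallCount("333")); // 3
        Console.WriteLine(BallCount("71"));  // 4
    }
}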
D. Feedback Loops

Humans are able to juggle using sensory feedback to predict the landing position of all three balls. By keeping their eyes at a fixed position above their hands, humans take in information about the balls' locations to correctly position their hands. For a robot to simulate this, it needs to implement a looping control system, of which there are two types: open-loop systems and closed-loop systems. An open-loop system is independent of feedback, meaning the output of the system has no effect on the input, causing every loop to be identical. In contrast, a closed-loop system implements a feedback signal: the output of the system depends on the input, and the input changes based on surrounding conditions. This allows the robot to react to sudden changes [6].

Based on theory alone, juggling can be performed with an open-loop system. Due to the theoretical predictability of the juggling motions, every throw and catch should be able to be timed perfectly by separating the throws and catches with the appropriate number of ticks for the juggle that is to be attempted. Based on this notion, an open-loop juggling robot was attempted at the New Jersey Governor's School of Engineering and Technology in 2019 in a project titled Autonomous Juggling Robot. The researchers constructed a robot that eventually attempted a three ball cascade and found that, despite theory, their throws were inconsistent.

This was partly due to a transistor heating up within their circuit, increasing its resistance [7]. Once a single error occurred, the robot could not correct it, and the error eventually compounded and resulted in a missed catch. Although a human would not face the problem of resistance in a circuit, it is also highly unlikely that every catch and throw a human performed would be perfectly identical. Instead, a human juggler would detect an error and adjust their movements to correct it. Therefore, to most accurately mimic human juggling and understand the complex movements humans are capable of, this robot will implement closed-loop systems to react to the balls' movements.
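The difference between the two control styles can be sketched in a few lines of C#. The snippet below is a hypothetical illustration (the names and the proportional-correction scheme are assumptions, not the project's implementation): the open-loop command depends only on the precomputed plan, while the closed-loop command folds the measured error back into the next cycle.

public static class ControlLoopSketch
{
    // Open loop: the command ignores what actually happened, so an
    // early error repeats on every subsequent cycle.
    public static float OpenLoopTarget(float plannedZ)
    {
        return plannedZ;
    }

    // Closed loop: the measured landing position feeds back into the
    // next command, so a single bad throw does not compound.
    public static float ClosedLoopTarget(float plannedZ, float measuredZ, float gain)
    {
        float error = measuredZ - plannedZ;
        return plannedZ + gain * error;
    }
}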

E. Unity

The robot constructed in this project is simulated in a game engine development platform called Unity. This is an ideal platform to simulate juggling for multiple reasons, including the ability to import models from other platforms and the customization of the juggling scene. However, the most crucial aspect of Unity in this project is its 3D physics engine, the Nvidia PhysX engine. Utilizing this engine, objects can accurately model various forces like collision forces, gravity, and drag [8].

A rigidbody can be defined as a property of an object that allows physics to be applied to it, and it is activated for all of the juggling balls. Forces can be added to the rigidbodies in a variety of ways, and velocity vectors can be accurately calculated. Due to its realistic physics simulation, the Nvidia PhysX engine allows for the implementation of kinematics equations within the robot's software.

F. Inverse Kinematics

When humans move their arms, they do not think about individually moving every joint. For example, when reaching out to catch an object, humans simply think about moving their hand to a desired location rather than calculating adjustments to their shoulder, elbow, and wrist separately. This simplicity means that complex juggling movements can be completed without explicitly scripting precise movements of each joint. The human brain can focus on predicting the path of the juggling balls rather than determining the exact angle each joint must take for the hand to be at the correct position to catch a ball.

To simulate this behavior in the robotic juggling arms, a concept called Inverse Kinematics (IK) was applied. Through IK, a target object can be moved, similar to a hand, and the rest of the arm will react and bend accordingly to move to the target. IK arms have a set length with multiple joints that actuate in a certain way. These simulated joints bend similarly to real joints in the arm, like the ball-and-socket joint of the "shoulder" that rotates in all directions, while the hinge joint of the "elbow" has a more limited range of motion [9]. The script for IK can be run separately from juggling, simplifying the juggling scripts to focus solely on moving the hand instead of each arm joint.

Fig. 2. Skeleton of untextured Inverse Kinematic Arms
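For intuition, a two-joint arm in a plane can be solved in closed form with the law of cosines. The sketch below is purely illustrative; the project's arms instead used an existing iterative IK script (FastIKFabric, described in Section III), so the names and the planar simplification here are assumptions.

using UnityEngine;

public static class TwoJointIKSketch
{
    // Given a target (x, y) relative to the shoulder and the two segment
    // lengths, returns the shoulder and elbow angles in radians.
    public static Vector2 Solve(float x, float y, float upperArm, float foreArm)
    {
        float d = Mathf.Sqrt(x * x + y * y);
        // Clamp so unreachable targets resolve to a fully extended arm.
        d = Mathf.Clamp(d, Mathf.Abs(upperArm - foreArm), upperArm + foreArm);

        // Law of cosines gives the elbow bend (0 = straight arm).
        float cosElbow = (d * d - upperArm * upperArm - foreArm * foreArm) / (2f * upperArm * foreArm);
        float elbow = Mathf.Acos(Mathf.Clamp(cosElbow, -1f, 1f));

        // Shoulder angle: direction to the target minus the offset
        // introduced by the bent elbow.
        float shoulder = Mathf.Atan2(y, x) - Mathf.Atan2(foreArm * Mathf.Sin(elbow), upperArm + foreArm * Mathf.Cos(elbow));

        return new Vector2(shoulder, elbow);
    }
}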
III. PROCEDURE

A. Pre-Project Preparations

Before building the robot, the researchers needed to become fully accustomed to working with Unity and C# by learning the basics of the software through the "Roll-A-Ball" tutorial provided by Unity Learn [10]. The tutorial covered the basic principles of operating Unity, including navigating the UI, creating objects and scripts, and manipulating their various properties. One crucial principle was the use of prefabs, which allow for the storage and reuse of objects. Another important tool was learning how to use the OnTriggerEnter and OnCollisionEnter functions, which allow the program to track when objects pass certain boundaries and collide with other objects, so that the ball can be caught correctly. Additionally, Unity employs coroutines to run functions outside the normal sequence of the code, making it possible to have both arms of the robot run behaviors at the same time (a minimal sketch appears below).

B. Structure of Unity Scene

A Unity scene, Siteswap-3.unity, was created, containing two arm skeletons, each represented with two joints and a hand; three balls colored red, green, and blue; and an invisible "Brain" object. To better simulate the look of an arm, this project designed and modeled human-like arms in Blender, which were exported to Unity as displayed in Figure 3. These modeled arms largely served aesthetic purposes and helped visually differentiate the IK shoulder, elbow, and hand.

Fig. 3. Juggling arm textures modeled in Blender

Each juggle contains a variety of movements, which warrants an organized structure for the scripts that control juggling. The inverse kinematics that controlled the arms' realistic movements were contained in a script called FastIKFabric, attached to the target object of each hand. When the target objects moved, the hands (and by extension the arms) would follow accordingly. Separate FastIKFabric scripts were assigned to the targets of the left and right arms to move them independently.

To position the hands to catch the balls, a separate script called JuggleController was created and placed in a Unity game object called the Brain. The Brain's only purpose was to hold scripts; it did not interact physically with the juggle. The Brain also contained scripts called BallController and CatchDetector, which fed balls to the hands and detected collisions, respectively.
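As a minimal illustration of the coroutine pattern mentioned in Section III-A (hypothetical names, not one of the project's scripts), the sketch below starts one routine per arm; each advances a small step every frame, so neither arm blocks the other.

using System.Collections;
using UnityEngine;

public class TwoArmCoroutineSketch : MonoBehaviour
{
    void Start()
    {
        // Both routines run "simultaneously": Unity resumes each one
        // once per frame at its yield point.
        StartCoroutine(ArmRoutine("left"));
        StartCoroutine(ArmRoutine("right"));
    }

    IEnumerator ArmRoutine(string armName)
    {
        while (true)
        {
            // ... move this arm a small step toward its current target ...
            yield return null; // hand control back until the next frame
        }
    }
}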

C. Single Ball Throw

When humans learn to juggle, they often begin by practicing throwing a single ball. Similarly, to gain a better sense of Unity's physics of throwing and catching, this project began juggling by tossing a ball from one hand to the other. This single toss was performed with the highest feedback level, as the robot knew the exact position of the ball at all times.

Fig. 4. Single ball throw simulation

The first step was to have the arm begin tracking the position of the ball. It is important to note that in this Unity project, the axes were defined as follows: the X axis was the depth axis, the Y axis was the vertical axis, and the Z axis was the horizontal axis. Utilizing the Vector3.MoveTowards function, the hand of the Inverse Kinematic arm moved along all three axes to catch the ball when it was within reach of the arm. However, this was unrealistic compared to human juggling, as humans do not move their hands up to catch a ball, instead waiting for the ball to land in their hand. Furthermore, as the hand reached up in the Y axis to catch the ball, the two would collide, knocking the ball off of its trajectory. To address this, the hand remained stationary on the Y axis, moving only on the XZ plane to stay under the ball and catch it.

The next hurdle occurred when attempting to apply a force to throw the ball. In Unity, a typical AddForce call continuously applies a force to the object. This type of force would have caused the ball to perpetually move upwards and prevented it from following the parabolic path of an actual throw. Instead, the ball required an instantaneous impulse, applied with ForceMode.Impulse, providing the immediate force necessary to simulate the one-time force that accompanies throwing a ball in the form of an instant addition of momentum [11].

The final hurdle was to develop a catching mechanism. For this single throw, the ball's gravity was turned off when it made contact with the hand, preventing it from falling continuously during a catch. However, as the juggles got more complex and the hands moved more, it became necessary to update the catching function later in the project.
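Condensed into a single script, the throw-and-catch logic described above might look as follows. This is a simplified sketch with illustrative field names and values; the full scripts appear in the appendices.

using UnityEngine;

public class SingleTossSketch : MonoBehaviour
{
    public Rigidbody ballRB;
    public Vector3 throwingPower = new Vector3(0f, 9f, 6f); // example values

    void Throw()
    {
        ballRB.useGravity = true;
        // A one-time impulse (instant change in momentum) rather than a
        // force applied continuously across frames.
        ballRB.AddForce(throwingPower, ForceMode.Impulse);
    }

    void OnCollisionEnter(Collision collision)
    {
        // "Catch": freeze the ball on contact instead of grasping it.
        ballRB.velocity = Vector3.zero;
        ballRB.useGravity = false;
    }
}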
D. Three Ball Cascade with Position

Although the creation of a single ball throw was a useful way to learn Unity's physics, juggling is much more complex than this. To juggle properly, a human must be able to successfully anticipate the landing location of multiple in-transit balls, while also timing the balls that they throw. Blindly throwing the balls as soon as they reach the hands will not mimic a proper juggle. Considering these factors, it was necessary to start with a simple three ball cascade, as only one type of throw is repeated. For this juggle, the robot would know the positions of the balls, similar to the single ball throw.

Each frame, the JuggleController script checked whether a catch had been made, and if so, the script updated an integer variable named totalCount to represent the number of catches that had been made. Using the updated number of catches, the robot was able to determine which ball would need to be caught next, and which arm would need to make the next catch. With these two pieces of information, the robot would move the arm to the next ball in the cascade. The robot was able to move its arms towards each ball because the JuggleController script had access to the exact location of the balls at all times. Both arms were constrained in the Y axis, causing each arm to move towards the ball only in the X and Z directions. Since the ball had a very steep trajectory, little drag, and a low speed relative to the arm, the arm was consistently able to move under the ball and catch it. Notably, when the velocity magnitude was decreased while maintaining the same launch and landing positions, or when the launch angle of the ball was decreased, the arm would have greater difficulty catching the ball. This was because these changes created more movement in the Z direction towards the end of the ball's trajectory relative to the normal path. The success rate of the juggle was therefore dependent on both the increase in movement in the Z direction and the movement speed of the arm.

When humans catch, they grasp a ball with their fingers to stop its movement. However, the IK arm in this project did not have functional fingers. Instead, the program detected the collision and temporarily made the ball a child object of the hand, binding the ball to the hand and preventing it from falling off during the throwing motion.
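The bookkeeping described above reduces to a few lines; the sketch below is a simplification of the FindTargetArm and catch-handling logic in Appendix A.

using UnityEngine;

public class CatchBookkeepingSketch : MonoBehaviour
{
    private int totalCount = 1; // which catch we are currently on

    // Odd-numbered catches belong to the right arm, even-numbered to the left.
    int TargetArm()
    {
        return (totalCount % 2 == 1) ? 1 : 0; // 1 = right, 0 = left
    }

    void OnCatch(Transform ball, Transform hand)
    {
        totalCount++;
        // Parenting the ball to the hand stands in for closing fingers:
        // the ball follows the hand until it is released for the throw.
        ball.parent = hand;
    }
}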

After a ball was caught, the hand moved inwards to a fixed location that was closer to the opposite hand before a throwing force was applied to the ball again. During this inward motion, the arms were programmed to move on a parabolic trajectory, downwards and then upwards, before arriving at the throwing location, serving two purposes. First, the movement looked more natural, as real jugglers tend to move their hands elliptically when performing a three ball cascade; the U-shaped movement of the arms after each catch resembled the motion that human jugglers use to catch and throw balls, resulting in fluid movement by the robot. Second, the movement reduced error buildup by ensuring that balls were always launched from a set position. Because the balls' trajectories often caused the arms to move outwards in the Z direction to catch them, it was possible for catches and throws to become inconsistent. By bringing the ball back inwards after each catch and throwing from a set position, there was a limit to how much error could build up; even if multiple throws had significant error, as long as the landing location each time was within the arm's reach, the arm would catch the ball and return it to the same throwing position. In this manner, the robot consistently recovered from throws that caused the opposite arm to move significantly to make a catch. However, since the arm's speed was limited to a maximum distance of 0.5 units per frame (in the interest of fluidity), the hand could only return to the throwing location if it was quick enough to travel there in the time between a catch and a throw. This caused issues with throws that significantly overshot the receiving arm, as the arm was not able to recover as effectively as usual before the next throw was made.

Fig. 5. Position-based three ball cascade

Although the juggling simulation produced a human-like three ball cascade, there were several shortcomings. The first issue with the position-based simulation was that it gave the robot the exact coordinates of the balls. This would not be the case in real life, as no human juggler has the precise location of all of the balls they juggle. The second issue was that the vertical movement of the hand before the throw did not correspond to the force that was applied to the ball. Although the hands moved to the throwing point in a parabolic shape after each catch, the force applied to the balls was independent of this motion. This method was chosen because the force generated from a hand's vertical motion would have varied with each throw, resulting in largely inconsistent throws. In order to consistently perform throws that moved a sufficient amount in the Y and Z directions, it was preferable to apply a constant throwing force after each catch, independent of the arm's motion.

E. Three Ball Cascade with Velocity

In the three ball cascade with position, the robot was aware of the position of the ball being thrown at all times in its trajectory. As a result, the arms were programmed to move to the Z and X positions of the ball being thrown. The next step was to limit the knowledge of the robot in order to simulate real-life juggling, as a human juggler does not know the exact position of the balls in the air at all times. The juggler judges the trajectory of a ball using the feedback from their throw, such as the initial force they applied to the ball, and from visual input by glancing at the ball in the air. These factors provide more information about the path that the ball takes in the air. This feedback was mimicked in Unity by adding a collision sensor above the arms to act as "eyes" for the robot and by recording the initial velocities of the balls. After removing the robot's knowledge of the balls' positions, these two methods enabled the robot to predict a ball's final X and Z positions, respectively. With these limitations, the robot was programmed to be more human-like, as human jugglers do not know the coordinates of the balls they juggle. Instead, they estimate the balls' landing locations based on a feel for the force applied to each ball.

During each throw, the JuggleController program records the initial velocity and initial position of the ball, adding these Vector3 objects to Lists. This information was used to calculate the final Z position of the ball after it was thrown. The calculation for ΔZ can be derived using the following equations:

ΔZ = v_iz × t   (1)

t = (2 × v_iy) / g   (2)

Substituting equation (2) into equation (1) yields:

ΔZ = (2 × v_iy × v_iz) / g   (3)

After calculating ΔZ with equation (3), adding this value to the initial Z position of the ball results in the final Z position of the ball, as shown in equation (4):

Z_final = Z_initial + ΔZ   (4)

The arms were thus able to calculate the Z coordinate to move to when catching the ball.
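In code, equations (1) through (4) reduce to a few lines. The sketch below is a simplified form of the CalculatePositionOfBall method in Appendix A; it assumes the ball is caught at the same height from which it was launched, with g = 9.81.

using UnityEngine;

public static class LandingPredictorSketch
{
    // Predicts the final Z coordinate of a thrown ball from its initial
    // state alone, assuming launch and landing at the same height.
    public static float FinalZ(Vector3 initialPosition, Vector3 initialVelocity)
    {
        const float g = 9.81f;
        float t = (2f * initialVelocity.y) / g;   // equation (2): time of flight
        float deltaZ = initialVelocity.z * t;     // equations (1) and (3)
        return initialPosition.z + deltaZ;        // equation (4)
    }
}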

A second method was used to predict the thrown ball's final X position, which fluctuates slightly during each throw. An invisible horizontal plane was added to the Unity scene about 5 units above the hands, and a collision sensor was applied to sense when a ball passed through the plane. This collision sensor mimicked human eyes, or a physical robot's ultrasonic or infrared sensors that detect objects in front of the robot's "head." When a ball passes through the collision sensor, the script attached to the plane records the ball's position in a Vector3 variable.

Fig. 6. Simulated visual input collision sensor plane outlined in green

The JuggleController script then has access to the X position of the ball that was registered on the ball's descent, and the arms move to this X coordinate. There is no significant change in the X position of the ball in the time between the ball falling through the collision sensor and the ball being caught by the robot. As a result, the X position recorded by the collision sensor as the ball fell through it was sufficient for predicting the ball's final X position. Together, the collision sensor plane and the initial velocity-based calculation facilitated the robot's prediction of a ball's final X and Z positions, respectively, enabling the robot to juggle successfully with limited knowledge of the positions of the balls.

IV. RESULTS

A. Variables for Determining Effectiveness

While comparing the success of juggles with different types of feedback, it was necessary to create a method by which juggles could be objectively compared. This project determined a list of factors on which to test the juggles. First, the number of successful consecutive juggling cycles was recorded. An obvious indicator of success for a juggling robot is the ability to consistently repeat a juggle. Dividing the count of successful throws by three yielded the number of juggle cycles. Once the number of cycles reached 100, it was determined that the juggle was highly unlikely to falter and was therefore deemed completely successful in that regard.

The second variable was "the nudge test," which involved suddenly jerking a ball midair and seeing if the robot could still successfully continue the juggle. Every time a ball was caught, the Z position of a random ball was varied by a small amount within the range of -0.25 to 0.25 units. This test determined the robot's reaction to changes and its ability to recover from them.

The third test was to introduce error into the throws and determine if the robot could continue to juggle. Since humans can make dissimilar throws and continue to juggle, it was necessary to test whether the juggling robot could do the same. By adding a random vector to the throw using the function Random.insideUnitSphere (which returns an (X, Y, Z) value inside a sphere with a radius of one) multiplied by 0.1, every throw would be slightly different in the juggle [12]. If the robot was able to juggle with this randomized vector, the magnitude of the Random.insideUnitSphere term would be doubled, then tripled, then quadrupled, and so on, until the robot was unable to juggle. The number of successful juggles that could be completed with randomized throws was recorded.

To ensure the results for each test were reliable, the tests were all run five times, and the average of the five tests was taken to compare the two types of juggles.
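For reference, the two error injections can be sketched as follows. This is a simplified form of the logic in Appendix A (the actual scripts interleave these steps with the catch handling), with the 0.1 scale as the starting magnitude described above.

using UnityEngine;

public static class ErrorInjectionSketch
{
    // Nudge test: shift one ball's Z position midair by up to maxNudge
    // units in either direction.
    public static void Nudge(Transform ball, float maxNudge)
    {
        Vector3 p = ball.position;
        ball.position = new Vector3(p.x, p.y, p.z + Random.Range(-maxNudge, maxNudge));
    }

    // Randomized throw: perturb the throw vector by a random point inside
    // the unit sphere, scaled by the current test magnitude (0.1, 0.2, ...).
    public static Vector3 PerturbedThrow(Vector3 throwingPower, float scale)
    {
        return throwingPower + scale * Random.insideUnitSphere;
    }
}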

B. Effectiveness of Position-Based Juggles

As expected from a simulation with perfect knowledge, the effectiveness of position-based juggling was unparalleled. When run normally, the simulation consistently exceeded 100 cycles with no sign of failure or error; it is likely that the only way the juggle could have failed would have been if Unity crashed. Table I displays the data from the five normal runs of the simulation, with the number of successful cycles displayed next to each run.

TABLE I
NUMBER OF JUGGLING CYCLES FOR ROBOT WITH POSITION KNOWLEDGE

Trial      Successful Cycles
1          100+
2          100+
3          100+
4          100+
5          100+
Average    100+

The results of the nudge test, where balls were randomly moved a short distance mid-trajectory, are available in Table II. The "nudge" amount refers to the maximum distance that a ball could be moved.

TABLE II
NUDGE TEST FOR ROBOT WITH POSITION KNOWLEDGE

Maximum Magnitude    Number of Successful Cycles
of Nudge             Trial 1   Trial 2   Trial 3   Trial 4   Trial 5   Average
0.25                 100+      100+      100+      100+      100+      100+
0.50                 45        87        32        57        31        50.4
0.75                 4         9         20        9         21        12.6
1.00                 5         5         29        9         6         10.8
1.25                 5         3         6         6         5         5

The results of the velocity test with different variations are shown in Table III.

TABLE III
RANDOM VECTOR TEST FOR ROBOT WITH POSITION KNOWLEDGE

Max Magnitude of     Number of Successful Cycles
Random Vector        Trial 1   Trial 2   Trial 3   Trial 4   Trial 5   Average
0.2                  100+      100+      100+      100+      100+      100+
0.4                  14        38        9         15        21        19.4
0.6                  7         5         15        4         14        9
0.8                  3         2         4         2         1         2.4
1.0                  2         1         1         2         1         1.4

By examining the collected data, several patterns emerged. Based on the data from the nudge test, it is clear that the more a ball is "nudged" off course, the fewer cycles on average the simulation will complete. Although the lowest setting on the nudge test still resulted in 100 complete cycles, subsequent tests with higher settings showed a clear decrease in the average cycles completed before failure. The clearest example of the drastic effects that high maximum nudges can have occurred when the maximum nudge amount was raised from 0.5, where the simulation averaged 50.4 cycles, to 0.75, where it averaged only 12.6. The reason for this is that with greater nudges, the balls were more likely to move off course and into one another than they were with lower nudges. This happens in part because a ball can be nudged midair and collide with another ball that has just been thrown. It can also happen because a ball that is about to be caught can be nudged so far away from the hand that it becomes impossible for the hand to move to the ball in time to catch it.

The results of the random variability in throwing power were very similar to those of the nudge test. At lower levels of variation, the changes were relatively minor and were often corrected by the hand throwing the balls from the same place, but at higher levels the balls tended to collide with each other. Much like the nudge test, there were significant dropoffs in the number of successful cycles as the maximum variability in throwing power increased. A very noticeable example of this dropoff was how increasing the maximum variability in throwing power from 0.2 to 0.4 caused the average number of successful cycles to decrease from the maximum of 100+ to 19.4.

C. Effectiveness of Velocity-Based Juggles

After examining the effectiveness of the position-based juggles, the robot's access to the positions of the balls was restricted. First, the robot's knowledge of the Z position of a ball at all times was limited by programming the robot to move its hands to a position that was calculated using the initial velocity and initial position of the ball, and the accuracy of these calculations in predicting the final Z position of the balls was measured. After limiting the Z position, the juggling simulations demonstrated that the calculations were fairly successful in predicting where the ball would be at the end of its trajectory; three out of the five trials exceeded one hundred successful cycles, for an average of 78.8 successful cycles. Subsequently, the robot's access to the X position of the balls in midair was also restricted, which meant that the robot then used the feedback from the collision sensor plane above the arms to judge the final X position of a ball. As the balls fell through the collision sensor plane, the X positions of those balls were recorded, and the arms moved to this X position along with the previously calculated final Z position. With the robot's knowledge of both the Z and X positions limited, the simulation performed worse than with just the Z position limited; the robot was able to exceed 100 successful cycles only once out of five trials, for an average of 41 successful cycles.

The number of successful cycles achieved when limiting the robot's knowledge of the Z position of the balls, and then of both the Z and X positions, are shown in Table IV.

TABLE IV
NUMBER OF JUGGLING CYCLES FOR ROBOT WITH LIMITED KNOWLEDGE

           Number of Cycles Before Failure
Trial      Limiting Z Position   Limiting Z and X Positions
1          19                    44
2          100+                  6
3          100+                  29
4          100+                  26
5          75                    100+
Average    78.8                  41

Keeping the robot's knowledge of the Z and X positions of the ball limited, the nudge test was then performed, followed by a set of simulations that included random variability in the throwing force in the Z direction. The results of the nudge test with different maximum nudge amounts are recorded in Table V, while the results of the trials with random variability in throwing power in the Z direction are recorded in Table VI.

TABLE V
NUDGE TEST FOR ROBOT WITH LIMITED KNOWLEDGE

Maximum Magnitude    Number of Successful Cycles
of Nudge             Trial 1   Trial 2   Trial 3   Trial 4   Trial 5   Average
0.25                 4         11        4         5         7         6.2
0.50                 5         1         3         2         1         2.4
0.75                 1         0         1         4         2         1.5
1.00                 1         1         2         2         1         1.4
1.25                 1         2         4         2         2         2.2

TABLE VI
RANDOM VECTOR TEST FOR ROBOT WITH LIMITED KNOWLEDGE

Max Magnitude of     Number of Successful Cycles
Random Vector        Trial 1   Trial 2   Trial 3   Trial 4   Trial 5   Average
0.2                  5         18        55        46        4         25.6
0.4                  3         1         1         2         6         2.6
0.6                  7         1         1         1         1         2.2
0.8                  1         3         1         2         2         1.8
1.0                  1         1         1         2         1         1.2

As the robot calculated a final catching position using the initial velocity of the ball, it was unable to account for random movements the balls experienced midair.

Thus, altering the trajectory midair with the nudge test made it difficult for the robot to complete a large number of successful cycles. The number of successful cycles completed by the simulation decreased, on average, as the maximum distance of the random nudge increased. However, when the maximum nudge amount was set to 1.25 units, the robot's average of 2.2 successful cycles was an improvement over the previous maximum nudge amount of 1.0 unit. After a close examination of the simulations, it was concluded that this improvement occurred because the former nudge test (maximum nudge of 1.0 unit) placed the balls on a collision course more often. With the maximum nudge distance set to 1.25, despite the greater amount of random ball movement midair, the balls collided less often, allowing the robot to complete more successful cycles.

The introduction of random variability in throwing power in the Z direction exhibited an overall improvement in performance by the robot when compared to the nudge test. Initially, with a maximum error in throwing power of 0.2 units, the simulation achieved an average of 25.6 successful cycles over the course of five trials. This level of success rapidly depreciated as the maximum amount of variability was increased, due to the greatly increasing chance of the balls colliding with each other. The robot was only able to achieve an average of 1.2 successful cycles when the maximum random variability in Z direction throwing power was set to 1.0 unit.

D. Comparison

Evidently, the randomized position test had a more significant effect than the randomized velocity test: at low levels of both tests, the robot reached far fewer cycles in the randomized position test, a trend which continued as the nudges and random velocities increased throughout the trials. The one exception is that, for the most extreme (1.25 units) random position trial, the average cycle count rebounded. However, this was likely an outlier caused by chance, and therefore the pattern remains consistent.

This trend is not surprising, for two reasons. First, in the real world, it is highly improbable for a human to throw a ball with the same exact velocity every time, and therefore natural variations in velocity have to be considered; accordingly, these changes do not have a major effect. Second, unless acted upon by some outside force, like a gust of wind or an interfering spectator, a ball's position would not change mid-flight as it does in the simulation. This sudden change would also cause a human juggler to mess up, meaning the nudge tests are not necessarily as accurate at judging effectiveness as the other two tests. Considering this, it makes sense that the robot, just like a human juggler, had greater difficulty with the randomized position than it did with the randomized velocity.

It is evident from the data that the juggler with the position knowledge outperformed the juggler with the velocity knowledge. The omniscience of the position-based robot made it superior, especially in the nudge test, as the velocity-based robot cannot take into account changes in position. However, considering the rarity of such sudden changes, the nudge test is not necessarily a fair one. The randomized vector throws are also unlikely, considering experienced jugglers are able to throw largely similar throws every time, with minute variations. Given the efficiency of human jugglers, it is fair to assume that the robot will be able to throw similar throws each cycle. Therefore, it is not fair to judge either feedback system too harshly on that test. The most important test was the number of cycles, and, disregarding the obvious outlier from Trial 1 of the limited Z position test, the position-based juggle and the limited Z position juggle seem to perform very similarly. From this, it can be assumed that, in terms of sheer cycle sustainability, both feedback systems perform equally well.

V. CONCLUSION

A. Accomplishments

This project was able to successfully recreate a single ball throw and two variations of a three ball cascade. It was also able to effectively complete juggles based on both the position and the velocity of the balls. Additionally, it implemented an objective data collection system based on the number of successful juggles and the robot's response to errors in position and throwing velocity. Finally, because the framework and structure were constructed in Unity, the juggling scene is highly modular: by defining other siteswap patterns and throws in the scripts, the juggling scene can be readily modified to accomplish more complex juggling patterns.

B. Improvements

The conclusion reached in this project was only tested with one of the simplest juggles: the three ball cascade. With innumerable variations on the three ball juggle alone, let alone the juggles that use up to eleven balls, it would have been wise to research how other juggles reacted to the same types of feedback. Time constraints prevented the addition of more complex juggles, but incorporating difficult juggles with numerous balls, such as the 441, 71, or 861, would help to corroborate the results reached with the three ball cascade. Considering that human jugglers require significantly more focus while completing the aforementioned juggles, it would be interesting to see if the robot also required more feedback. Furthermore, implementing a more realistic catching mechanism would help further the study of implementing human behaviors in robotics, as robotic fingers clasping the ball add another layer of complexity that would be vital to understand. Modulating the maximum speed of the arms would provide an additional handicap, as humans can only move their hands at a limited speed between throwing and catching.

More variables could also have been tested to determine the effectiveness of the juggles. One variable could be the distance the hand travels to catch the ball, as it would be a useful way of determining the success of a juggle. Considering that human jugglers move their hands very little, robot jugglers would also strive to move their hands as minimally as possible. Taking this distance into account would also help differentiate between a "good catch" and a "poor catch": when jugglers move their hands far to retrieve the ball, it is considered a poor catch.

This would then lead to further improvements, as poor catches could lead to poor throws with a high error vector, making it possible to study whether robots can recover from mistakes similarly to humans. Another variable to be studied could be the percentage of poor throws, as a worse juggling robot would have a higher rate of poor throws. Implementing these factors would produce more comprehensive data and allow a more precise conclusion to be reached on the effectiveness of various feedback loops on dexterous robot movements.

C. Application of Findings

In this research project, a robot was programmed to imitate human behavior when juggling. Such mechanics can be applied to robots designed to perform tasks in a diverse array of industries. For instance, repetitive tasks such as packaging boxes can be performed by a robot using the same concepts explored in the research. More advanced algorithms can allow robots to mimic human behavior in more complex operations, such as performing medical procedures. Accurate imitation of human behaviors also provides insight into how humans are able to perform certain tasks. For example, juggling up to eleven balls requires sequential throws and catches to be made faster than human reaction time [13]. By adjusting variables such as the speed of the robot arms or the sensory input, one can analyze how humans performing seemingly impossible challenges can be explained by muscle memory or spatial awareness. Through implementing human behaviors in robotics, dexterous robots can perform human tasks without the hindrance of human error. This lack of human error was exemplified in the position-based system as, unlike humans, this robot could juggle for innumerable cycles. Elucidating these human abilities advances knowledge in the fields of psychology and neurology, which can be applied to biomedical engineering.

D. Further Research

Considering the success of the simulated robot in both types of feedback-based juggles, a logical next step would be to recreate this robot physically. Despite Unity's Nvidia PhysX engine and this project's best attempts to recreate realistic scenarios, it is impossible to digitally reproduce a perfectly realistic environment. A physical robot would need to be made to corroborate these results and determine the effectiveness of a velocity-based juggle in the real world. It is recommended that many of the same methods taken in this project be implemented in a physical robot. Using Inverse Kinematic arms would give the robot a more human-like feel and would allow the Unity scripts to be adapted in a similar fashion. To imitate the Unity catch, an electromagnet can be activated to bind the ball to the hand. Using a camera, the robot can use computer vision to track the exact positions of the balls, similar to the position-based juggle. An IR or ultrasonic Arduino sensor would be useful to mimic the Unity collision sensor for a physical robot, and recording the force and direction with which the ball is thrown can be used to determine velocity for the velocity-based juggles. With these real-life equivalents to various Unity functions, it is possible to determine the applicability of the conclusion reached through the simulation.

Another potential project is to implement machine learning to teach a robot how to juggle. Humans, from birth, acquire input from their environment to develop behavior. In the same way that experiences help humans improve, robots can be engineered with the ability to take in values from their surroundings to improve their performance and productivity. By incorporating Unity Machine Learning Agents in a bot with juggling capabilities, the process of machine learning can also be applied to other robotics applications across an array of industries.

APPENDIX A - JUGGLECONTROLLER

using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.UI;

public class JuggleController : CatchDetector
{
    public Transform LeftTarget;
    public Transform RightTarget;
    public Text countText;

    public Transform Ball1;
    public Transform Ball2;
    public Transform Ball3;

    public Rigidbody Ball1RB;
    public Rigidbody Ball2RB;
    public Rigidbody Ball3RB;

    public Vector3 ThrowingPowerRightwards = new Vector3();
    public Vector3 ThrowingPowerLeftwards = new Vector3();
    public float time = 0.95f;
    public float dwellTime;
    public float startZPosition = 3f;

    public float positionThrowArmSpeed = 0.5f;
    public float velocityThrowArmSpeed = 0.2f;

    public bool limitZPosition = false;
    public bool limitXPosition = false;
    public bool nudgeTest = false;
    public bool varyThrowingPower = false;

    private float timeCounter;
    public float moveBackSpeed = 6;
    public float height = 2.5f;
    public float moveBackMaxDelta = 0.35f;

    public static int totalCount = 1; // which catch we are currently on
    private int targetArm;            // which arm we are throwing to (0 is left, 1 is right)
    private bool ballInLeftHand;
    private bool ballInRightHand;
    private bool firstCycleDone;

    private Transform BallBeingThrown;
    private Rigidbody BallBeingThrown_RB;
    private Vector3 LeftStartingPoint;
    private Vector3 RightStartingPoint;

    private int count = 0; // serves as an index for the position & velocity Lists
    private Vector3 CalculatedBallPosition;
    private List<Vector3> positionList = new List<Vector3>();
    private List<Vector3> velocityList = new List<Vector3>();

    // Start is called before the first frame update
    void Start()
    {
        LeftStartingPoint = new Vector3(6f, LeftTarget.position.y, startZPosition);
        RightStartingPoint = new Vector3(6f, RightTarget.position.y, -1 * startZPosition);

        LeftTarget.position = LeftStartingPoint;
        RightTarget.position = RightStartingPoint;

        BallBeingThrown = Ball1;
        BallBeingThrown_RB = Ball1RB;

        ballInLeftHand = true;
        ballInRightHand = true;
        Ball1RB.useGravity = false;
        Ball2RB.useGravity = false;
        Ball3RB.useGravity = false;
        firstCycleDone = false;

        SetCountText();
        StartCoroutine(ThrowOne());
    }

    // Update is called once per frame
    void Update()
    {
        FindTargetArm();
        FindIfBallCaught();

        if (targetArm == 1 && !ballInRightHand)
        {
            if (limitZPosition)
                CalculatePositionOfBall(RightTarget);
            else
                StartCoroutine(MoveTowardsBall(RightTarget, BallBeingThrown));
        }
        else if (targetArm == 0 && !ballInLeftHand)
        {
            if (limitZPosition)
                CalculatePositionOfBall(LeftTarget);
            else
                StartCoroutine(MoveTowardsBall(LeftTarget, BallBeingThrown));
        }
    }

    void FindTargetArm()
    {
        if (totalCount % 2 == 1) // odd-numbered throw
        {
            targetArm = 1; // target is right arm
        }
        else if (totalCount % 2 == 0) // even-numbered throw
        {
            targetArm = 0; // target is left arm
        }
    }

    void FindBallBeingThrown() // advances BallBeingThrown (Transform) and BallBeingThrown_RB (Rigidbody)
    {
        if (BallBeingThrown == Ball1)
        {
            BallBeingThrown = Ball2;
            BallBeingThrown_RB = Ball2RB;
        }
        else if (BallBeingThrown == Ball2)
        {
            BallBeingThrown = Ball3;
            BallBeingThrown_RB = Ball3RB;
        }
        else
        {
            BallBeingThrown = Ball1;
            BallBeingThrown_RB = Ball1RB;
        }
    }

    void FindIfBallCaught()
    {
        if (firstCycleDone && CatchDetector.BallIsInHand)
        {
            totalCount++;
            count++;
            SetCountText();

            BallBeingThrown_RB.velocity = new Vector3(0, 0, 0);
            BallBeingThrown_RB.useGravity = false;

            if (limitZPosition)
            {
                print("Calculated final pos: " + CalculatedBallPosition);
                print("Actual final pos: " + BallBeingThrown.position);
            }

            if (targetArm == 1) // sets ball as a child of the target to attach it to the hand when caught
            {
                BallBeingThrown.parent = RightTarget;
                ballInRightHand = true;
                StartCoroutine(MoveTowardsElliptical(RightTarget, RightStartingPoint));
            }
            else
            {
                BallBeingThrown.parent = LeftTarget;
                ballInLeftHand = true;
                StartCoroutine(MoveTowardsElliptical(LeftTarget, LeftStartingPoint));
            }

            StartCoroutine(ThrowWhenCaught(BallBeingThrown_RB));
            FindBallBeingThrown();

            if (nudgeTest)
            {
                int affectedBall = Random.Range(1, 4);        // picks ball 1, 2, or 3 (integer Range max is exclusive)
                print(affectedBall);
                float uporDown = Random.Range(-0.25f, 0.25f); // random Z offset in [-0.25, 0.25]
                if (affectedBall == 1)
                {
                    Ball1.position = new Vector3(Ball1.position.x, Ball1.position.y, Ball1.position.z + uporDown);
                }
                if (affectedBall == 2)
                {
                    Ball2.position = new Vector3(Ball2.position.x, Ball2.position.y, Ball2.position.z + uporDown);
                }
                if (affectedBall == 3)
                {
                    Ball3.position = new Vector3(Ball3.position.x, Ball3.position.y, Ball3.position.z + uporDown);
                }
            }

            CatchDetector.BallIsInHand = false;
        }
    }

    void CalculatePositionOfBall(Transform Target)
    {
        Vector3 initialVelocity = velocityList[count];
        Vector3 initialPosition = positionList[count];
        float zVelocity = initialVelocity.z;
        float yVelocity = initialVelocity.y;

        float trajectoryTime = (2.0f * yVelocity) / 9.81f;
        float deltaZ = trajectoryTime * zVelocity;
        float zPosition = deltaZ + initialPosition.z;

        if (limitXPosition)
            CalculatedBallPosition = new Vector3(TriggerSensor.BallPosition.x, Target.position.y, zPosition);
        else
            CalculatedBallPosition = new Vector3(BallBeingThrown.position.x, Target.position.y, zPosition);

        StartCoroutine(MoveTowardsPoint(Target, CalculatedBallPosition));
    }

    IEnumerator MoveTowardsBall(Transform Target, Transform Ball)
    {
        Vector3 BallLocation = new Vector3(Ball.position.x, -3.04f, Ball.position.z);

        while (Target.position != BallLocation)
        {
            Target.position = Vector3.MoveTowards(Target.position, BallLocation, positionThrowArmSpeed);
            yield return null;
        }

        yield return null;
    }

    // NOTE: a few lines of this listing were lost in the source document;
    // the signature and loop below are reconstructed to match the call in
    // CalculatePositionOfBall.
    IEnumerator MoveTowardsPoint(Transform Target, Vector3 GoalPoint)
    {
        while (Target.position != GoalPoint)
        {
            Target.position = Vector3.MoveTowards(Target.position, GoalPoint, velocityThrowArmSpeed);
            yield return null;
        }

        yield return null;
    }

    IEnumerator MoveTowardsElliptical(Transform Target, Vector3 GoalPoint)
    {
        timeCounter = 0;

        float initial = Target.position.z;

        float x = GoalPoint.x;
        float y;
        float z = GoalPoint.z;

        while (Target.position.z != GoalPoint.z)
        {
            timeCounter += Time.deltaTime * moveBackSpeed;

            if (timeCounter < 0.1 * moveBackSpeed)
                y = -1 * Mathf.Cos(timeCounter) * height - 3.04f;
            else if ((int)Target.position.y != -3)
                y = -1 * Mathf.Cos(timeCounter) * height - 3.04f;
            else
                y = Target.position.y;

            Target.position = Vector3.MoveTowards(Target.position, new Vector3(x, y, z), moveBackMaxDelta);
            yield return null;
        }

        while (Target.position.y != -3.04f)
        {
            Target.position = Vector3.MoveTowards(Target.position, new Vector3(Target.position.x, -3.04f, Target.position.z), moveBackMaxDelta);
            yield return null;
        }

        yield return null;
    }

    IEnumerator ThrowOne() // First cycle
    {
        yield return null;
        yield return null;

        // Records the balls' initial positions
        positionList.Add(Ball1.position);
        positionList.Add(Ball2.position);
        positionList.Add(Ball3.position);

        // Throws Ball1 towards the right hand and waits for the next throw.
        // (The next several lines were lost in the source document and are
        // reconstructed to mirror the Ball2 and Ball3 blocks below.)
        Ball1RB.useGravity = true;
        Ball1RB.AddForce(ThrowingPowerRightwards, ForceMode.Impulse);
        ballInLeftHand = false;
        yield return null;
        yield return null;
        velocityList.Add(Ball1RB.velocity); // Records ball's initial velocity
        yield return new WaitForSeconds(time);

        // Throws Ball2 towards the left hand
        Ball2RB.useGravity = true;
        Ball2RB.AddForce(ThrowingPowerLeftwards, ForceMode.Impulse);
        ballInRightHand = false;
        yield return null;
        yield return null;
        velocityList.Add(Ball2RB.velocity); // Records ball's initial velocity
        yield return new WaitForSeconds(time);

        // Throws Ball3 towards the right hand
        Ball3RB.useGravity = true;
        Ball3RB.AddForce(ThrowingPowerRightwards, ForceMode.Impulse);
        ballInLeftHand = false;
        yield return null;
        yield return null;
        velocityList.Add(Ball3RB.velocity); // Records ball's initial velocity

        // End of first cycle
        firstCycleDone = true;
    }

    IEnumerator ThrowWhenCaught(Rigidbody objectCaught)
    {
        yield return new WaitForSeconds(dwellTime);
        objectCaught.useGravity = true;
        positionList.Add(objectCaught.gameObject.transform.position); // Records ball's initial position

        if (!varyThrowingPower)
        {
            if (targetArm == 0)
            {
                objectCaught.AddForce(ThrowingPowerLeftwards, ForceMode.Impulse);
                ballInRightHand = false;
            }
            else
            {
                objectCaught.AddForce(ThrowingPowerRightwards, ForceMode.Impulse);
                ballInLeftHand = false;
            }
        }
        else
        {
            Vector3 randomVary = .4f * Random.onUnitSphere;
            randomVary.x = 0f;
            randomVary.y = 0f;

            if (targetArm == 0)
            {
                objectCaught.AddForce(ThrowingPowerLeftwards + randomVary, ForceMode.Impulse);
                ballInRightHand = false;
            }
            else
            {
                objectCaught.AddForce(ThrowingPowerRightwards + randomVary, ForceMode.Impulse);
                ballInLeftHand = false;
            }
        }
        yield return null;
        yield return null;

        velocityList.Add(objectCaught.velocity); // Records ball's initial velocity
        objectCaught.gameObject.transform.parent = null;
    }

    void SetCountText()
    {
        countText.text = "Number of Completed Catches: " + count.ToString();
    }
}

APPENDIX B - CATCHDETECTOR

using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class CatchDetector : MonoBehaviour
{
    public static bool BallIsInHand = false;

    void Start()
    {
        BallIsInHand = false;
    }

    void OnCollisionEnter(Collision collision)
    {
        BallIsInHand = true;
    }

    void OnCollisionExit(Collision collision)
    {
        BallIsInHand = false;
    }
}

APPENDIX C - TRIGGERSENSOR

using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class TriggerSensor : MonoBehaviour
{
    public static Vector3 BallPosition;
    private int localCount = 0;

    void OnTriggerEnter(Collider other)
    {
        // Updates the position of the ball passing through the TriggerSensor,
        // but only when the totalCount has been updated
        if (localCount != JuggleController.totalCount)
        {
            BallPosition = other.transform.position;
            localCount++;
        }
    }
}

using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class BallController : MonoBehaviour
{
    public GameObject InitialTarget;

    IEnumerator Start()
    {
        yield return null;
        transform.position = InitialTarget.transform.position + new Vector3(0, 1, 0);
    }
}

ACKNOWLEDGMENTS

The authors of this paper would like to gratefully thank the following: Project Mentor Kevin Ge for his invaluable guidance and involvement in the research and design process; Project Liaison and Residential Teaching Assistant Steven Chen for his counseling and aid in communication and logistics; Research Coordinator Benjamin Lee for his assistance; Head Counselor Rajas Karajgikar for his enthusiastic support throughout the program; Daniel Erdmann for developing the basis for the Inverse Kinematic arms used in the project; Jonah Botvinick-Greenhouse for consulting on juggling mechanics; Dr. Troy Shinbrot for introducing the juggling project; Mary Pat Reiter for extending last year's findings to make it possible to carry out this project; Dean Jean Patrick Antoine, the Director of the New Jersey Governor's School of Engineering and Technology, for his management and guidance; and Dean Ilene Rosen, the Director Emeritus of the New Jersey Governor's School of Engineering and Technology, all of whom were crucial to this project coming to fruition. Next, it is imperative to acknowledge the support of Rutgers University, Rutgers School of Engineering, and the State of New Jersey for the opportunity to study engineering and pursue new opportunities. The researchers would also like to thank Lockheed Martin, the NJ Space Grant Consortium, and the various other corporate sponsors who made this project possible. Lastly, we would like to recognize the New Jersey Governor's School of Engineering and Technology Alumni for their continued participation and support.

REFERENCES

[1] M. P. Groover, "Advantages and disadvantages of automation," Encyclopædia Britannica, 08-May-2019. [Accessed: 19-Jul-2020].
[2] B. Polster, "The Mathematical Theory of Juggling," Some Random Remarks, 18-Feb-2012. [Accessed: 19-Jul-2020].
[3] J. Voss, "The Physics of Ball Juggling," Some Random Remarks, 18-Feb-2012. [Accessed: 18-Jul-2020].
[4] J. Botvinick-Greenhouse and T. Shinbrot, "Juggling dynamics," Physics Today, 01-Feb-2020. [Accessed: 19-Jul-2020].
[5] P. J. Beek and A. Lewbel, "The Science of Juggling," Scientific American, 15-Oct-2009. [Accessed: 18-Jul-2020].
[6] "Difference Between Open Loop & Closed Loop System (with Comparison Chart)," Circuit Globe, 20-Feb-2018. [Accessed: 19-Jul-2020].
[7] P. Barsa, O. Cao, S. Chen, K. Desai, A. Gupta, M. VanDusen-Gross, M. P. Reiter, and R. Wu, "GSET Autonomous Juggling Robot Final Paper," Rutgers University, New Brunswick, 27-Jul-2019.
[8] "Physics," Unity Documentation, 13-Jul-2020. [Accessed: 19-Jul-2020].
[9] L. Bermudez, "Overview of Inverse Kinematics," Medium, 31-May-2020. [Accessed: 19-Jul-2020].
[10] "Roll-a-ball," Unity Learn. [Online]. [Accessed: 19-Jul-2020].
[11] "ForceMode.Impulse," Unity Documentation, 13-Jul-2020. [Accessed: 19-Jul-2020].
[12] "Random.insideUnitSphere," Unity Documentation, 13-Jul-2020. [Accessed: 19-Jul-2020].
[13] Sakaguchi, Y. Masutani, and F. Miyazaki, "A study on juggling tasks," Proceedings IROS '91: IEEE/RSJ International Workshop on Intelligent Robots and Systems, 1991.
