
Accepted by the International Journal of Robotics Research.

Pose and Motion from Contact

Yan-Bin Jia    Michael Erdmann
The Robotics Institute
School of Computer Science
Carnegie Mellon University
Pittsburgh, PA 15213-3890

June 26, 1998

Abstract

In the absence of vision, grasping an object often relies on tactile feedback from the fingertips. As the finger pushes the object, the fingertip can feel the contact point move. If the object is known in advance, the finger may infer from this motion the location of the contact point on the object and thereby the object pose. This paper primarily investigates the problem of determining the pose (orientation and position) and motion (velocity and angular velocity) of a planar object with known geometry from such contact motion generated by pushing. A dynamic analysis of pushing yields a nonlinear system that relates, through contact, the object pose and motion to the finger motion. The contact motion on the fingertip thus encodes certain information about the object pose. Nonlinear observability theory is employed to show that this information is sufficient for the finger to "observe" not only the pose but also the motion of the object. Therefore a sensing strategy can be realized as an observer of the nonlinear dynamical system. Two observers are subsequently introduced. The first observer, based on the result of [15], has its "gain" determined by the solution of a Lyapunov-like equation; it can be activated at any time instant during a push. The second observer, based on Newton's method, solves for the initial motionless object pose from three intermediate contact points during a push. Under the Coulomb friction model, the paper copes with support friction in the plane and/or contact friction between the finger and the object. Extensive simulations have been done to demonstrate the feasibility of the two observers. Preliminary experiments with an Adept robot have also been conducted. A contact sensor has been implemented using strain gauges.

Figure 1: Different motions of contact, drawn as dots, on an ellipse pushing a quadrilateral in two different initial poses.

1 Introduction

Part sensing and grasping are two fundamental operations in automated assembly. Traditionally, they are performed sequentially in an assembly task. Parts in many assembly applications are manufactured to high precisions based on their geometric models. This knowledge of part geometry can sometimes significantly facilitate sensing as well as grasping. It can sometimes also help integrate these two operations, reducing the assembly time and cost.

Consider the task of grasping something, say, a pen, on the table while keeping your eyes closed. Your fingers fumble on the table until one of them touches the pen and inevitably starts pushing it for a short distance. While feeling the contact move on the fingertip, you can almost tell which part of the pen is being touched. Assume the pushing finger is moving away from you. If the contact remains almost stable, then the middle of the pen is being touched; if the contact moves counterclockwise on the fingertip, then the right end of the pen is being touched; otherwise the left end is being touched. Immediately, a picture of the pen configuration has been formed in your head, so you coordinate other fingers to quickly close in for a grip.

The above example tells us that the pose of a known shape may be inferred from the contact motion on a finger pushing the shape.
To better illustrate this idea, Figure 1 shows two motions of a quadrilateral in different initial poses pushed by an ellipse under the same motion. Although the initial contacts on the ellipse were the same, the final contacts are quite far apart. Thinking in reverse leads to the main questions of this paper:

1. Can we determine the pose of an object with known geometry and mechanical properties from the contact motion on a single pushing finger, or simply, from a few intermediate contact positions during the pushing?

2. Can we determine any intermediate pose of the object during the pushing?

3. Furthermore, can we estimate the motion of the object during the pushing?

Figure 2: Anatomy of pushing.

In this paper, we will give affirmative answers to the above questions in the general case. To accomplish this, we will characterize pushing as a system of nonlinear differential equations based on its dynamics. As shown in Figure 2, the state of the system will include the configurations (positions, orientations, and velocities) of the finger and object during the push at any time instant. The system input will be the acceleration of the finger. The system output will be the contact location on the finger, subject to the kinematics of contact. This output will be fed to nonlinear observers, which serve as the sensing algorithms, to estimate the object's pose and motion (an illustrative sketch of such an estimation loop is given below).

Section 2 copes with the dynamics of pushing and the kinematics of contact, deriving a system of differential equations that govern the object and contact motions while resolving related issues such as support friction in the plane and the initial object motion; Section 3 applies nonlinear control theory to verify the soundness of our sensing approach to be proposed, establishing the local observability of this dynamical pushing system from the finger contact; Section 4 describes two nonlinear observers which estimate the object pose and motion at any instant and at the start of pushing, respectively, and which require different amounts of sensor data; Section 5 extends the results to incorporate contact friction between the finger and the object; Section 6 presents simulations on both observers and the implementation of a contact sensor, demonstrating that three intermediate contact points often suffice to determine the initial pose for the fingers and objects tested; finally, Section 7 summarizes the paper and outlines future work.
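To make the structure in Figure 2 concrete, the Python sketch below shows the generic shape of such an estimation loop: a copy of the system model is driven by the same finger input and continually corrected by the mismatch between the measured and the predicted contact location. This is only an illustration of the observer idea; the functions f (the pushing dynamics), h (the contact output map), and gain (the observer gain) are placeholders, not the specific equations derived later in the paper.

```python
import numpy as np

def simulate_observer(f, h, gain, x0_hat, u_of_t, y_of_t, dt, steps):
    """Generic estimation loop for a system  x' = f(x, u),  y = h(x).

    f       -- placeholder for the pushing dynamics (state derivative)
    h       -- placeholder for the contact-location output map
    gain    -- placeholder observer gain L(x_hat), an (n x m) matrix
    x0_hat  -- initial guess of the state (object pose and motion)
    u_of_t  -- finger acceleration as a function of time (system input)
    y_of_t  -- measured contact location on the finger (system output)
    """
    x_hat = np.asarray(x0_hat, dtype=float)
    estimates = [x_hat.copy()]
    for k in range(steps):
        t = k * dt
        u, y = u_of_t(t), y_of_t(t)
        # Drive a copy of the model with the same input and correct it by
        # the mismatch between measured and predicted contact location.
        innovation = np.atleast_1d(y - h(x_hat))
        x_hat = x_hat + dt * (f(x_hat, u) + gain(x_hat) @ innovation)
        estimates.append(x_hat.copy())
    return np.array(estimates)
```

Roughly speaking, the first observer of Section 4 fills in such a gain from a Lyapunov-like equation, while the second dispenses with the loop and instead solves for the initial pose from three intermediate contact points by Newton's method.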
1.1 Related Work

Our work is grounded in robotics, where an abundance of previous work exists. It also draws upon the part of nonlinear control theory that concerns nonlinear observability and observers.

1.1.1 Robotics

Dynamics of sliding rigid bodies was treated by MacMillan [36] for non-uniform pressure distributions, and by Goyal et al. [19] using geometric methods based on the limit surface description of friction. Howe and Cutkosky [26] experimentally showed that the limit surface only approximates the force-motion relationship for sliding bodies and discussed other simplified practical models for sliding manipulation.

Mason [37] pioneered the study of the mechanics of pushing using quasi-static analysis, predicting the direction in which an object being pushed rotates and plotting out its instantaneous rotation center. For unknown centers of friction, Alexander and Maddocks [2] reduced the problem of determining the motion of a slider under some applied force to the case of a bipod, obtaining analytical solutions for simple sliders. The problem of predicting the accelerations of multiple 3D objects in contact with Coulomb friction has a nonlinear complementarity formulation due to Pang and Trinkle [40]; the existence of solutions to models with sliding and rolling contacts has been established.

Montana [38] derived a set of differential equations describing the motion of a contact point between two rigid bodies in response to a relative motion of these bodies, and employed these equations to sense the local curvature of an unknown object and to follow its surface while steering the contact point to some desired location on the end effector. The kinematics of spatial motion with point contact was also studied by Cai and Roth [6], who assumed a tactile sensor capable of measuring the relative motion at the contact point. The special kinematics of two rigid bodies rolling on each other was considered by Li and Canny [33] in view of path planning in the contact configuration space. In our work, contact kinematics is derived directly from the absolute velocities of the finger and the object rather than from their relative velocity at the contact. Also, we are concerned with a finger and object only in the plane, not in 3D.

Part of our motivation came from the blind grasping task at the beginning of the paper. The caging work by Rimon and Blake [43] is concerned with constructing the space of all configurations of a two-fingered hand, controlled by one parameter, that confine a given 2D object; these configurations can lead to immobilizing grasps by following continuous paths in the same space. This work requires an initial image of the object taken by a camera. Work related to caging includes parts feeder design [42] and fixture design [5]. In this paper, we are concerned with how to "feel" a known object using only one finger and how to infer its pose and motion information, rather than how to constrain and grasp the object using multiple fingers.

A larger part of the motivation of our work was from parts orienting.