Assisting Versus Repelling Force-Feedback for Human Learning of a Line Following Task

The Fourth IEEE RAS/EMBS International Conference on Biomedical Robotics and Biomechatronics, Roma, Italy, June 24-27, 2012

Xi Chen and Sunil K. Agrawal

(This research is supported by grants from the National Science Foundation and the National Institutes of Health. Xi Chen is a PhD student in the Department of Mechanical Engineering, University of Delaware, Newark, DE 19716. Sunil K. Agrawal is a Professor of Mechanical Engineering at the University of Delaware, Newark, DE 19716, and is the corresponding author: [email protected].)

Abstract— Previous work has shown that training with an 'assist-as-needed' method using a force-feedback joystick can improve the driving performance of children and adults. This work is the first study to evaluate training with a repelling force versus an assisting force for learning of a line following task. We designed a robotic training wheelchair that can accurately localize itself in the training environment, and implemented assisting and repelling force fields on the force-feedback joystick. The training protocol included three groups. The control (CT) group received no force feedback. The assisting force (AF) group was trained using the 'assist-as-needed' paradigm. The repelling force (RF) group was trained with the repelling force field. We observed that both the AF and RF groups improved their driving skills; however, the RF group had the greatest trajectory error reduction. We believe that this pilot study could provide a promising foundation regarding the effects of robotic wheelchair training algorithms on adult learning.

Fig. 1. A healthy adult subject driving the robot to keep the laser point on the training path. (Labeled in the photograph: force-feedback joystick, laser range finder, training path, laser pointer.)

I. INTRODUCTION

Many patients with mobility impairment find it very challenging to use currently available power wheelchairs for activities of daily living. Around 40 percent of patients have reported finding steering and maneuvering tasks difficult or impossible [1]. The majority of industrial and research effort is focused on making a smart or intelligent wheelchair, which makes use of numerous sensors and control algorithms to assist such patients and tries to make driving easier and safer for them [2]–[4].

Recently, however, there has been a large increase in interest in using mobile robots with force-feedback devices to train such patients, in the hope of finding the best training algorithm. One way of training is to provide feedback cues that assist the driver through steering wheels or joysticks [5]–[8]. But the effectiveness of such haptic assistance as a motor-training strategy is still an open question, probably because the user may start relying on the assistance provided during the training and fail to learn the motor commands required to perform the actual task [9]–[11].

In contrast to the 'assisting' paradigm, researchers have developed the idea of enhancing error [11]–[13] or perturbing the motion [14]. It has been hypothesized that error enhancement or perturbations challenge the subjects more than haptic assistance does. Subjects are also more focused on the learning task in order to cancel the disturbance of the force field. After the force is turned off, subjects' movements overshoot in the direction opposite to the original force and finally reduce errors after the washout period. All of these studies have found a significant decrease in trajectory error or timing error, though the learning patterns differ. There is, however, no consistent conclusion on the comparison of haptic guidance versus error enhancement; which paradigm yields better training results still depends on the nature of the task and the force implementation.

The novelty of the current work is to implement a repelling force field on the force-feedback joystick and compare its effect with the 'assist-as-needed' paradigm as well as with training with no force feedback (Fig. 1). Improvement in the driving performance of healthy adults and of healthy or impaired children driving a robotic wheelchair using the 'assist-as-needed' paradigm has already been shown [6], [7].
In this work, the assisting force field reduces errors by bringing the joystick handle closer to the desired direction, while the repelling force field increases errors by pushing the joystick handle away from the desired direction. To the best of the authors' knowledge, no work has been reported that evaluates the effect of a repelling force field on adult learning of a line following task and compares it with the 'assist-as-needed' paradigm.

The rest of the paper is organized as follows. Section II describes the experiment setup, including the equipment, the training path, and the controller used to follow the path. The force field settings are given in Section III. Section IV provides the experiment protocol of a group study. Training results and discussions are presented in Section V, followed by conclusions.

Fig. 2. Schematic of the experiment setup. (Recoverable module labels: human driver; force-feedback joystick read and driven through C++/DirectX; on-board computer running the tracking algorithm and lookahead; robot position and velocity control commands; deviation area; log file.)

II. EXPERIMENT SETUP

A. Equipment

Figure 2 illustrates the schematic of the experimental setup, showing the various modules and their interactions. A force-feedback joystick (Immersion Impulse Stick) was used, which can provide a continuous force of 8.5 N and a peak force of 14.5 N. It was controlled through DirectX, which can read the joystick position and apply force to the driver's hand. A two-wheel Pioneer PowerBot mobile robot was used, equipped with encoders to record the trajectory and an onboard computer to run the control algorithm. A laser range finder was mounted on the back of the robot (Fig. 1) to localize the robot in the experiment environment. All programs were written in C++ to interface with DirectX and an onboard library which has access to the robot's current pose.

B. Training Path

The training path comprised 17 way-points (or 17 straight line segments, as shown in Fig. 3) and was laid on the floor of a large room. The first way-point (at the origin) was always the starting point of each trial, and a laser pointer pointing a little ahead of the robot was used as the reference mark. Subjects were asked to keep the laser point as close as possible to the path while driving counterclockwise at the maximum forward speed. We developed a line following controller that can track the line segments one by one. The controller switches tracking to the next line segment when the robot is closer to the next segment than to the current one. We then set the force field to train subjects based on the controller output.

Fig. 3. Training path and simulation, with a deviation area of 0.085 m².

We calculated the deviation from the training path as the area shown in Fig. 3. This area was obtained by numerical integration and used as the error measure for comparison.
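Since this deviation area is the central error measure, a small illustration may help. The paper states only that the area was obtained by numerical integration; the sketch below is a minimal C++ approximation, assuming the area is accumulated as |d| times the arc length travelled between logged samples, where d is the normal distance of the laser point from the segment being tracked. The logging format (laser-point samples paired with a tracked-segment index) and all function names are our assumptions, not the authors' implementation.

```cpp
// Minimal sketch (not the authors' code): approximate the deviation
// area of Fig. 3 by numerical integration, accumulating |d| * ds.
#include <cmath>
#include <cstddef>
#include <cstdio>
#include <vector>

struct Pt { double x, y; };

// Signed normal distance from point p to the infinite line through a and b.
double normalDist(const Pt& p, const Pt& a, const Pt& b) {
    double phi = std::atan2(b.y - a.y, b.x - a.x);  // line inclination
    return (p.y - a.y) * std::cos(phi) - (p.x - a.x) * std::sin(phi);
}

// laser[i]  : logged laser-point position at sample i (assumed format)
// segment[i]: index of the way-point segment tracked at sample i
double deviationArea(const std::vector<Pt>& laser,
                     const std::vector<int>& segment,
                     const std::vector<Pt>& waypoints) {
    double area = 0.0;
    for (std::size_t i = 1; i < laser.size(); ++i) {
        double d  = normalDist(laser[i], waypoints[segment[i]],
                               waypoints[segment[i] + 1]);
        double ds = std::hypot(laser[i].x - laser[i - 1].x,
                               laser[i].y - laser[i - 1].y);
        area += std::fabs(d) * ds;  // rectangle-rule strip of the area
    }
    return area;  // in m^2, comparable to the 0.085 m^2 of Fig. 3
}

int main() {
    // Toy check: a path along the x-axis, laser point held 0.1 m off the line.
    std::vector<Pt>  wp    = {{0.0, 0.0}, {4.0, 0.0}};
    std::vector<Pt>  laser = {{0.0, 0.1}, {1.0, 0.1}, {2.0, 0.1}};
    std::vector<int> seg   = {0, 0, 0};
    std::printf("area = %.3f m^2\n", deviationArea(laser, seg, wp));  // 0.200
    return 0;
}
```

A trapezoidal rule over successive |d| values would be a straightforward refinement of this rectangle-rule accumulation.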
C. Line Following Controller

Using the no-slip kinematics of the wheels, the states of the robot satisfy the following differential equations:

  ẋ_c = v cos θ,  ẏ_c = v sin θ,  θ̇ = ω    (1)

where (x_c, y_c) are the coordinates of the robot center and θ is its orientation (Fig. 4). The inputs to the robot are the translational speed v and the rotational speed ω. In Fig. 4, the current heading of the robot is shown at an angle Δθ from the line; l is the distance from the robot center to the laser point and d is the normal distance from the laser point to the inclined path. For our device, l = 0.495 m. With (x_0, y_0) a representative point on the line, it follows that:

  d = (y_c + l sin θ − y_0) cos ϕ − (x_c + l cos θ − x_0) sin ϕ,  Δθ = θ − ϕ    (2)

Fig. 4. Schematic of a robot intended to follow a straight line inclined at an angle ϕ.

A line following controller is an error-correcting control law that specifies the inputs v and ω such that d → 0 and Δθ → 0 as time increases. This control law is given by:

  v = v_max,  ω = −k₁d / (l cos Δθ) − (v_max / l) tan Δθ    (3)

where v_max is a constant; in the experiment, v_max = 0.36 m/s.

Consider the Lyapunov function V = (1/2)d²; then:

  V̇ = d(v sin Δθ + lω cos Δθ) = −k₁d² ≤ 0    (4)

The equilibrium point must satisfy the conditions:

  V̇ = 0,  ω = 0    (5)

which give d → 0 and tan Δθ → 0. When the robot is facing perpendicular to the line, cos Δθ = 0, and we instead set ω to drive the robot toward Δθ → 0 at the maximum rotational speed ω_max = 30 deg/sec.

[Figure labels recovered from a force-field figure whose caption falls outside this excerpt: joystick regions in the (X/ω, Y/v) plane; a tunnel direction with width W and angles α, β; Region 1; Regions 2a and 2b, virtual wall effect; Region 3, centering effect.]

D. Lookahead Distance and Parameter Optimization

The robot had a fixed forward velocity and a finite rotational velocity. When human subjects drive such a device to follow lines with discontinuous curvature, they tend to predict the line direction and act early, before they reach the point of discontinuity in the curvature.
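To make the controller of Section II-C concrete, below is a minimal C++ simulation sketch: forward-Euler integration of the kinematics (1) under the control law (3), with the paper's constants l = 0.495 m, v_max = 0.36 m/s, and ω_max = 30 deg/sec. The gain k₁ = 1.0, the 0.01 s time step, and the saturation of ω at ω_max away from the perpendicular case are our assumptions; the paper does not report those values in this excerpt.

```cpp
// Minimal sketch (not the authors' code): forward-Euler simulation of
// the line following controller, Eqs. (1)-(3), tracking the line y = 0
// (i.e. phi = 0 with representative point (x0, y0) = (0, 0)).
#include <cmath>
#include <cstdio>

int main() {
    const double pi   = std::acos(-1.0);
    const double l    = 0.495;              // robot center to laser point (m)
    const double vmax = 0.36;               // fixed forward speed (m/s)
    const double wmax = 30.0 * pi / 180.0;  // max rotational speed (rad/s)
    const double k1   = 1.0;                // gain, assumed (not given here)
    const double dt   = 0.01, phi = 0.0;    // time step and line inclination

    double xc = 0.0, yc = 1.0, theta = 0.5; // start 1 m above the line

    for (int i = 0; i <= 3000; ++i) {       // 30 s of simulated driving
        double dth = theta - phi;           // heading error, Eq. (2)
        double d = (yc + l * std::sin(theta)) * std::cos(phi)
                 - (xc + l * std::cos(theta)) * std::sin(phi); // Eq. (2)
        double w;
        if (std::fabs(std::cos(dth)) < 1e-3)
            w = (dth > 0.0) ? -wmax : wmax; // perpendicular: drive dth -> 0
        else
            w = -k1 * d / (l * std::cos(dth))
                - vmax / l * std::tan(dth); // Eq. (3)
        w = std::fmax(-wmax, std::fmin(wmax, w)); // assumed saturation

        xc    += vmax * std::cos(theta) * dt;     // integrate Eq. (1)
        yc    += vmax * std::sin(theta) * dt;
        theta += w * dt;

        if (i % 500 == 0)
            std::printf("t=%5.1f s  d=%+.3f m  dtheta=%+.3f rad\n",
                        i * dt, d, dth);
    }
    return 0;
}
```

Running this should show d and Δθ settling toward zero, consistent with the Lyapunov argument in (4)-(5); the printout gives a quick check of the convergence rate under the assumed gain.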
