A control scheme for 5 d.o.f. haptic feedback guided teleoperation

G. BOSCHETTI1, P. GALLINA2, G. ROSATI1, A. ROSSI1 AND V. ZANOTTO1
1 Department of Innovation in Mechanics and Management (DIMEG), University of Padua, Via Venezia 1 – 35131 Padova, ITALY

2 Department of Energetics (DDE), University of Trieste, Via Valerio 10 – 34127 Trieste, ITALY

Abstract: - This paper deals with the problem of guiding an operator during the execution of a 5 d.o.f. teleoperation task. This work stems from the RIME research project, funded by the Italian Ministry of University and Research (MIUR). The project is focused on the feasibility of a spine surgery telerobotic system, made up of a haptic master, a slave robot, an optical tracking system and a main control unit. In this system, the surgeon is guided through a drilling operation by means of haptic feedback: as soon as the vertebra moves, the tracking device measures the vertebra pose and a proper force is exerted on the operator's hand; moreover, the haptic master produces the force feedback related to the teleoperation. In this paper, the control scheme and some experimental tests are presented. In these first tests, vertebra movements are simulated by means of a SCARA robot, measured by the tracking system and displayed to the operator by means of a custom-built 5 d.o.f. haptic display, while no slave robot is used.

Key-Words: - haptic feedback, tracking system, guided teleoperation, spinal fusion

1 Introduction
Spinal fusion is a widely employed surgical technique in which the surgeon creates a solid "bridge" between adjacent vertebrae by means of pedicle screws, plates and cages (see figure 1).

Figure 1: Spinal fusion.

This surgical procedure requires a high degree of accuracy, due to the risk of damaging vertebrae, blood vessels, nerves and the spinal cord; for example, a misalignment between the screw and the pre-planned optimal insertion axis can lead to a more serious disability than the one the operation is supposed to remove [1].

Despite the fact that minimally-invasive procedures are widely practiced for biopsy, vertebroplasty and nerve surgery, spinal fusion remains a standard open procedure at present. The patient is turned prone over a curved bed in order to obtain the maximum flexion of the spine, and the surgeon needs good access to the vertebrae in order to perform screw insertion with the required precision (a few millimeters). The main reason why an open procedure is required in spinal fusion is that vertebra displacements occur due to the patient's breathing, patient or table slippage and surgeon-induced movements. As a consequence, the optimum drilling axis moves during the intervention, according to vertebra motion, and this can lead to significant screw positioning errors.

One way of improving pedicle screw insertion accuracy is the use of information enhancement, i.e. providing the surgeon with information about the surgical work site, which cannot be directly seen. This requires sensing technologies to perform in-vivo tracking of vertebra pose [2], in order to calculate the optimum drilling axis position. This information is usually presented to the surgeon by means of an imaging device [3,4], while the surgeon directly operates the drill (trying to follow the targets displayed on the screen). The drawback of this approach is that the surgeon has to share his attention between the patient and the display. It is clear that such an unnatural procedure is a primary cause of failures.

An alternative approach is to integrate the acquired information into a surgical action enhancement system, in order to create a surgical assistant capable of implementing semi-automated or haptically guided tasks. As a result, the surgeon is provided with the information in a more intuitive way and can keep his own attention directly focused on the patient.

To the best of our knowledge, the only spinal fusion surgical system based on this concept makes use of a robot for positioning a sort of linear guide. In this system, the surgeon cooperates with the robot by manually moving the drill along the guide. Thus, the surgeon controls only one d.o.f. of the surgical tool (translation along the drilling axis), while the robot controls the position of the insertion point (three d.o.f.) and the orientation of the tool axis (two d.o.f.) [5].

A more complex surgical action enhancement system for spinal fusion is currently being developed within the RIME research project, which involves several partners all over Italy [6]. This system is made up of a five d.o.f. haptic display (Piroga5), an optical tracking system for vertebra pose assessment (pose3d), a main control unit (haptic server) and a slave robot, provided with a drill for pedicle screw insertion and with a six-axis force sensor (figure 2).

Figure 2: Layout of the surgical action enhancement system.

The surgeon handles the end-effector of the haptic master (virtual tool), while the slave robot moves the drill (real tool) according to the virtual tool position. The force exerted by the slave robot on the patient is measured by the force sensor and then sent back to the master control, which displays it to the surgeon. Moreover, the master generates a set of virtual forces in order to help the surgeon keep the virtual tool (and hence the drill) overlapped with the pre-planned optimal insertion axis. When no contact force between the drill tip and the spine is measured by the force sensor, the haptic display produces no force along the virtual tool axis; this means that the surgeon can freely translate the tool along one d.o.f. (and rotate it about the pen axis), but is guided along the other 4 d.o.f. On the contrary, when the drill touches the patient, the contact force, once properly scaled, is virtually felt by the surgeon. Finally, in order to prevent the surgeon from exceeding a certain hole depth, a virtual wall is generated orthogonally to the tool axis, so as to bound virtual tool motion along the drilling axis. In this way, the surgeon is given the feeling of operating directly on the patient while being guided through the correct execution of the surgical task. The main advantage of this approach is that the surgeon can operate on the spine without checking the drill position on a display, since the required information is given haptically.

This paper presents the control scheme and the first experimental results obtained by using a simplified testbed, in which vertebra movements are simulated by handling the mock-up of a vertebra (figure 3). Moreover, slave robot behavior is simulated by the haptic server. The main goal of this system is to test the surgeon guidance strategy. The next chapter describes the system layout and its major components, whereas chapter 3 presents the experimental results.

Figure 3: Layout of the simplified testbed.

2 System description
The overall layout of the system is presented in figure 3. The information flow can be analysed starting from the surgeon, who handles the haptic display end-effector. The end-effector position is measured by a VME-based system (master controller), which sends this information to the main control unit. At the same time, a table-top robot is used to move the mock-up of a vertebra, whose position is sent to the haptic server by the tracking system. Hence, the server calculates the slave position (which is assumed to be equal to the master position), the slave contact force and the additional guidance forces to be exerted by the master display on the surgeon's hand. The sum of these forces, i.e. the real feedback force and the virtual guidance force, is finally sent back to the master controller and thus perceived by the surgeon. All communications are performed over a Fast Ethernet bus.
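To make this information flow concrete, the following C++ sketch outlines one cycle of a haptic-server loop under the architecture just described. All type and function names (readPenPose, readVertebraPose, computeGuidance, simulateContact, sendToMasterController) are hypothetical placeholders introduced for illustration only and do not refer to the actual RIME implementation.

```cpp
// Sketch of one haptic-server cycle; network I/O is replaced by stubs so the
// example is self-contained (the real system communicates over Fast Ethernet).
#include <array>

struct Pose   { std::array<double, 3> position{};  std::array<double, 4> orientation{}; };
struct Wrench { std::array<double, 3> force{};     std::array<double, 3> moment{};      };

// --- stubs standing in for the master controller, tracking system and slave model ---
Pose   readPenPose()                              { return {}; } // from master controller
Pose   readVertebraPose()                         { return {}; } // from pose3D tracking system
Wrench computeGuidance(const Pose&, const Pose&)  { return {}; } // guidance law of section 2.3
Wrench simulateContact(const Pose&)               { return {}; } // simulated slave/drill contact
void   sendToMasterController(const Wrench&)      {}

int main() {
    // One iteration per pen pose received from the master controller.
    for (int cycle = 0; cycle < 1000; ++cycle) {
        const Pose pen      = readPenPose();       // virtual tool pose
        const Pose vertebra = readVertebraPose();  // latest available vertebra pose

        const Wrench guidance = computeGuidance(pen, vertebra); // keeps pen on the optimal axis
        const Wrench contact  = simulateContact(pen);           // scaled feedback force

        // The sum of guidance and feedback forces is what the surgeon perceives.
        Wrench total;
        for (int i = 0; i < 3; ++i) {
            total.force[i]  = guidance.force[i]  + contact.force[i];
            total.moment[i] = guidance.moment[i] + contact.moment[i];
        }
        sendToMasterController(total);
    }
    return 0;
}
```

In the actual system the two inputs arrive at different rates: the guidance computation runs each time a pen pose is received (nearly 2 kHz), while the vertebra pose is updated at the 60 fps frame rate of the tracking camera.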

2.1 Five d.o.f. Haptic Master
The haptic master, called Piroga5, is a wire-driven five d.o.f. haptic device with a pen-shaped end-effector (several examples of pen-based haptic interfaces can be found in the literature [7,8,9], but none of them makes use of direct-drive wire actuation).

The haptic master is depicted in figure 4. As it shows, six wires are attached to the end-effector, three to each end of the pen. Each wire is tensioned by a motor-pulley direct-drive system, with an Escap 35NT2R82-426SP DC electric motor and a Lika I-58H 5000 ppr encoder.

Figure 4: Piroga5, the master haptic display.

As far as kinematics is concerned, the operator is able to move the end-effector along six d.o.f.; nevertheless, the control system is able to apply only five generalized forces to the pen (three forces and two torques), since the wire configuration prevents the system from generating any torque along the pen axis. As figure 4 shows, each wire is forced to pass through a fairlead. The fairleads appear to be located on the vertexes of a regular hexagon; however, the odd fairleads and the even ones belong to two different parallel planes (the distance between the planes is 35 mm, and the two planes are located symmetrically with respect to the x-y reference plane). On each plane, the three corresponding fairleads represent the vertexes of an equilateral triangle with a side length of 275 mm. The fairleads are made out of Teflon so as to reduce friction. Moreover, in order to prevent the wires from being strongly bent, the fairlead hole axes are appropriately tilted.

As can be inferred, the force generated by the wires connected to the upper extremity of the pen points downward, whereas the force exerted by the other three wires points upward. The total generalized force acting on the end-effector (force F plus moment M) is hence given by the composition of these two contributions (figure 5).

The end-effector is made up of several parts (two cones and three cylinders) to allow easy length adjustment, in a range of 140–200 mm. The wires pass through a tiny hole on the cone tip, in such a way that they can be clamped to the pen by simply screwing the two cones onto the body of the end-effector. Accurate machining allows one to consider the three wires as meeting at the pyramid vertex, so that forward kinematics and wire tension computation can be carried out in closed form.

The pulley radius is 15 mm; this value represents a trade-off between the maximum force each wire can exert and the transparency of the haptic display (haptic transparency is used to quantify the fidelity with which virtual object properties are presented to and perceived by the human operator [10]). In fact, the lower the pulley radius, the higher the undesired inertia and friction perceived by the human operator during free motion. On the other hand, a lower pulley radius means that the haptic device is able to display higher forces to the human operator.

The master control runs on a VME controller, in a real-time VxWorks RTOS based environment. The control cycle is performed at a frequency of 4 kHz.

Figure 5: Total force F and moment M acting on the end-effector of the Piroga5 haptic display.
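As an illustration of how the total force F and moment M of figure 5 result from the six wire tensions, the following C++ sketch composes the wrench acting on the pen. The fairlead layout follows the dimensions given above (equilateral triangles with 275 mm side on two planes 35 mm apart), whereas the wire attachment points on the pen, the pen pose and the tension values are purely illustrative assumptions, not the actual Piroga5 parameters.

```cpp
// Composition of the total wrench (F, M) exerted on the pen by the six wires.
// Fairlead layout follows section 2.1; attachment points and tensions are
// illustrative assumptions only.
#include <array>
#include <cmath>
#include <cstdio>

struct Vec3 { double x, y, z; };

Vec3 operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
Vec3 operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
Vec3 operator*(double s, Vec3 a) { return {s * a.x, s * a.y, s * a.z}; }
Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
Vec3 normalize(Vec3 a) {
    const double n = std::sqrt(a.x * a.x + a.y * a.y + a.z * a.z);
    return {a.x / n, a.y / n, a.z / n};
}

int main() {
    const double kPi = 3.14159265358979323846;
    const double R = 0.275 / std::sqrt(3.0);   // circumradius of a 275 mm equilateral triangle [m]
    const double h = 0.035 / 2.0;              // half distance between the two fairlead planes [m]

    // Even fairleads on the upper plane, odd fairleads on the lower one,
    // rotated by 60 degrees with respect to each other (hexagon-like layout).
    std::array<Vec3, 6> fairlead{};
    for (int i = 0; i < 6; ++i) {
        const double ang = i * kPi / 3.0;
        fairlead[i] = {R * std::cos(ang), R * std::sin(ang), (i % 2 == 0) ? +h : -h};
    }

    // Assumed pen pose: vertical, centred at the origin, tips 80 mm from the centre.
    const Vec3 tipUpper{0.0, 0.0, +0.08};
    const Vec3 tipLower{0.0, 0.0, -0.08};
    const Vec3 penCentre{0.0, 0.0, 0.0};

    std::array<double, 6> tension{};
    tension.fill(5.0);                         // 5 N in every wire (illustrative)

    Vec3 F{0, 0, 0}, M{0, 0, 0};
    for (int i = 0; i < 6; ++i) {
        // Assumed wiring: even wires attach to the upper tip, odd wires to the lower tip.
        const Vec3 attach = (i % 2 == 0) ? tipUpper : tipLower;
        const Vec3 u = normalize(fairlead[i] - attach);   // each wire pulls towards its fairlead
        const Vec3 f = tension[i] * u;
        F = F + f;
        M = M + cross(attach - penCentre, f);             // moment about the pen centre
    }
    std::printf("F = (%.3f, %.3f, %.3f) N\nM = (%.3f, %.3f, %.3f) Nm\n",
                F.x, F.y, F.z, M.x, M.y, M.z);
    return 0;
}
```

With the fairleads lying between the two pen tips, the wires attached to the upper extremity pull downward and those attached to the lower extremity pull upward, consistently with the description above.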

2.2 Optical tracking device
The optical tracking system is made of a four infra-red LED frame, which is to be linked to the vertebra, a single Basler A302b digital TV camera and a Matrox Meteor II/Digital frame grabber (figure 6).

Figure 6: Pose3D, the optical tracking system (with the LED frame connected to the end-effector of a table-top robot).

The choice of employing a single TV camera pose recovery system led to many benefits, mainly low cost, high frame rate and reduced LED occlusion problems. Nevertheless, LED position measurement accuracy, which is less than 0.2 mm, is satisfactory when compared to the system requirements.
Three-dimensional pose recovery of the LED frame is performed using a new iterative model-based single-frame pose estimation algorithm, which was introduced, optimized and tested in simulation [11]. The algorithm is based on simple geometrical considerations derived from stereoscopy theory. Basically, the four spots detected on the image are grouped into two triples; each triple is thought of as a different view of a single three-LED frame, the first one taken by the real TV camera, the second one taken from another camera (the virtual camera) pointing at the same three-LED frame. Frame pose is hence obtained by performing stereoscopy calculations. Since the position and orientation of the virtual camera are unknown, and vary according to frame movements, the calculations are iterated.

The algorithm was implemented in C++. Computation time (including feature detection) is less than one millisecond on a Pentium IV based PC, whereas the TV camera frame rate is 60 fps. Clearly, this rate coincides with the frequency at which vertebra pose is transmitted to the haptic server.

In the system presented here, the LED frame is mounted on a table-top robot (see figure 6), so that vertebra movements are simulated by the robot, detected by the tracking system and finally reproduced by the haptic display.

2.3 Guidance forces calculation
The main control unit, once the vertebra pose has been recovered, must calculate the forces and moments needed to make the surgeon overlap the pen with the ideal axis. All calculations are made in the master reference frame, so the optimal axis position, which is given in the LED reference frame, must be transformed into the master reference frame. Let us consider the matrices ${}^L P_{opt}$, ${}^M P_{opt}$ and ${}^M P_{pen}$, used to identify the optimal axis ($r_i$) in the LED and master reference frames and the pen axis ($r_p$) in the master reference frame, respectively:

$$ {}^L P_{opt} = \begin{bmatrix} x_o & u_{o,x} \\ y_o & u_{o,y} \\ z_o & u_{o,z} \\ 1 & 0 \end{bmatrix} \qquad {}^M P_{opt} = \begin{bmatrix} x_o & u_{o,x} \\ y_o & u_{o,y} \\ z_o & u_{o,z} \\ 1 & 0 \end{bmatrix} \qquad {}^M P_{pen} = \begin{bmatrix} x_p & u_{p,x} \\ y_p & u_{p,y} \\ z_p & u_{p,z} \\ 1 & 0 \end{bmatrix} $$

Each matrix holds a point belonging to the axis in its first column and the axis unit vector in its second column. Matrix ${}^M P_{pen}$ is calculated by the master controller, matrix ${}^L P_{opt}$ comes from calibration and matrix ${}^M P_{opt}$ is calculated by the haptic server as follows:

$$ {}^M P_{opt}(t) = T_S^M \, T_{TV}^S \, T_L^{TV}(t) \, {}^L P_{opt} $$

where $T_A^B$ is the 4x4 transformation matrix that maps points from reference frame A to reference frame B (L stands for the LED reference frame, M for the master, S for the slave and TV for the TV camera). Matrix $T_L^{TV}(t)$ is given by pose3D and is the only time-varying term, $T_{TV}^S$ is given by the TV camera extrinsic parameters calibration and $T_S^M$ is the matrix that correlates the slave robot and master display workspaces.

Once ${}^M P_{opt}(t)$ and ${}^M P_{pen}(t)$ have been obtained, the guidance force and moment can be easily calculated. Let $\pi$ be the plane which is orthogonal to the ideal axis and intersects the pen axis at point $P_p = (x_p, y_p, z_p)^T$, and let $P'_o$ be the intersection point between $\pi$ and the ideal axis (see figure 7). The guidance force and moment are given by

$$ F = k_F \, (P'_o - P_p) $$
$$ M = k_M \, (u_p \times u_o) $$

where $k_F$ and $k_M$ are two scalars, $u_p$ is the pen axis unit vector and $u_o$ is the ideal axis unit vector. As the figure shows, both F and M belong to $\pi$; moreover, F is orthogonal to $r_i$ and M is orthogonal to both $r_i$ and $r_p$. The first condition indicates that no force is produced along the ideal axis, so that the surgeon can always freely move the drill along that direction. The second condition fulfils the basic requirement that no moment can be exerted along the axis of the pen, due to the kinematic limitations of the haptic display. Notice that other algorithms have been tested for the calculation of F and M, but the one proposed here proved to be the most effective and stable.

Figure 7: Forces and moments generated by the haptic display.

The guidance force calculation is performed each time the master position is received from the master controller, i.e. at a frequency of nearly 2 kHz. The calculated values of F and M are then sent back to the master controller, which computes a proper set of wire tensions.
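A minimal C++ sketch of this guidance law is given below: it applies the transformation chain to obtain the optimal axis in the master frame and then evaluates F and M according to the formulas above. The matrix helpers, the placeholder (identity) transformations and the numerical values of $k_F$, $k_M$ and the pen pose are assumptions introduced for illustration only.

```cpp
// Sketch of the guidance force/moment computation of section 2.3.
// Transformation values, gains and the pen pose are illustrative assumptions.
#include <array>
#include <cstdio>

using Vec3 = std::array<double, 3>;
using Mat4 = std::array<std::array<double, 4>, 4>;   // homogeneous transform T_A^B

Mat4 operator*(const Mat4& a, const Mat4& b) {
    Mat4 c{};
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            for (int k = 0; k < 4; ++k) c[i][j] += a[i][k] * b[k][j];
    return c;
}

// Apply a homogeneous transform to a point (w = 1) or to a direction (w = 0).
Vec3 apply(const Mat4& T, const Vec3& v, double w) {
    Vec3 r{};
    for (int i = 0; i < 3; ++i)
        r[i] = T[i][0] * v[0] + T[i][1] * v[1] + T[i][2] * v[2] + T[i][3] * w;
    return r;
}

Vec3 sub(const Vec3& a, const Vec3& b) { return {a[0] - b[0], a[1] - b[1], a[2] - b[2]}; }
double dot(const Vec3& a, const Vec3& b) { return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]; }
Vec3 cross(const Vec3& a, const Vec3& b) {
    return {a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0]};
}
Vec3 scale(double s, const Vec3& a) { return {s*a[0], s*a[1], s*a[2]}; }

int main() {
    const Mat4 I = {{ {{1,0,0,0}}, {{0,1,0,0}}, {{0,0,1,0}}, {{0,0,0,1}} }};

    // Transformation chain: identity placeholders stand in for the calibrated
    // matrices T_S^M, T_TV^S and the time-varying T_L^TV(t) given by pose3D.
    const Mat4 T_S_M = I, T_TV_S = I, T_L_TV = I;
    const Mat4 chain = T_S_M * T_TV_S * T_L_TV;

    // Optimal axis in the LED frame: a point P_o and a unit vector u_o.
    const Vec3 Po_L{0.0, 0.0, 0.0}, uo_L{1.0, 0.0, 0.0};
    const Vec3 Po = apply(chain, Po_L, 1.0);   // point mapped to the master frame
    const Vec3 uo = apply(chain, uo_L, 0.0);   // direction mapped to the master frame

    // Pen axis in the master frame (from the master controller): point P_p, unit vector u_p.
    const Vec3 Pp{0.005, 0.010, 0.0}, up{0.995, 0.0, 0.0998};

    // P'_o: intersection of the ideal axis with the plane through P_p orthogonal to u_o.
    const double s = dot(sub(Pp, Po), uo);
    const Vec3 Po_prime = {Po[0] + s * uo[0], Po[1] + s * uo[1], Po[2] + s * uo[2]};

    const double kF = 500.0, kM = 2.0;            // guidance gains (assumed values)
    const Vec3 F = scale(kF, sub(Po_prime, Pp));  // F = kF * (P'_o - P_p)
    const Vec3 M = scale(kM, cross(up, uo));      // M = kM * (u_p x u_o)

    std::printf("F = (%.3f, %.3f, %.3f) N\n", F[0], F[1], F[2]);
    std::printf("M = (%.3f, %.3f, %.3f) Nm\n", M[0], M[1], M[2]);
    return 0;
}
```

By construction, F lies in the plane $\pi$ (hence has no component along the ideal axis) and M is orthogonal to the pen axis, in accordance with the two conditions discussed above.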

3 Experimental Results
Some experimental tests were carried out with the aim of validating the guidance technique presented in the previous chapter. The testbed, whose layout is depicted in figure 3, was used to simulate vertebra movements (by means of a table-top Adept 550 SCARA robot) and to display them to the operator. For these tests, the optimal insertion axis coincides with the x axis of the LED reference frame, as if the spinal cord were nearly parallel to the y axis (see figure 6). In this way, vertebra angular movements are reproduced more effectively, since the SCARA robot can rotate its end-effector about the vertical axis only.

In order to simplify system start-up, one unrestrictive assumption was made: when the control system is activated, the optimal axis and the pen axis coincide in the master reference frame. In this way, at the beginning of the simulation no guidance force is produced, since the ideal and the actual tool axes are overlapped. In other words, it must be

$$ {}^M P_{opt}(t_0) = T_S^M \, T_{TV}^S \, T_L^{TV}(t_0) \, {}^L P_{opt} = {}^M P_{pen}(t_0) $$

This condition is accomplished by simply setting

M S TV 1 TS  TTV TL t0 T  where T is the transformation matrix which yields

$$ T^{-1} \, {}^L P_{opt} = {}^M P_{pen}(t_0) $$
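As a quick check, added here for clarity, substituting this expression for $T_S^M$ into the start-up condition gives

$$ T_S^M \, T_{TV}^S \, T_L^{TV}(t_0) \, {}^L P_{opt} = T^{-1} \left[ T_{TV}^S \, T_L^{TV}(t_0) \right]^{-1} T_{TV}^S \, T_L^{TV}(t_0) \, {}^L P_{opt} = T^{-1} \, {}^L P_{opt} = {}^M P_{pen}(t_0) $$

so that no guidance force is produced at $t_0$.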

The robot was moved in such a way that the LED reference frame was translated along the y axis of the robot (parallel to $Z_{TV}$, not shown in figure 6), while rotating about the $Z_{led}$ axis. The translation and rotation amplitudes were set to 10 mm and 15° respectively, and the movement period was set to 10 s [12].

Figures 8 and 9 show the results of a first test, in which no operator was holding the haptic pen. In other words, a low-gain position control of the pen was performed, whose reference values are given by the measured vertebra position. LED frame (dashed line) and haptic pen (dark continuous line) angular positions are presented in figure 8, whereas figure 9 depicts the translation along the y axis. Continuous lines represent the difference between pen and LED frame movements.

Figure 8: LED frame (dashed line) and haptic pen (continuous line) angular positions during the first test.

Figure 9: LED frame (dashed line) and haptic pen (continuous line) translations during the first test.

Figures 10 and 11 show the results of the second test, in which an operator was holding the haptic pen. We can still think of this system as a position control of the pen, but this time the human operator impedance must be considered together with the pen impedance to understand the dynamic behaviour of the system. As the figures show, angular movements of the optimal axis are followed very well (figure 10); however, larger errors occur in translation (figure 11).

These errors could be reduced by raising the force gain $k_F$, but this leads to the unnatural feeling of being forced through the execution of the surgical task. Higher gains can be used for training purposes, of course, but are not desirable during interventions. Nevertheless, it must be taken into account that the operator was provided with no visual feedback during the tests. We are confident that better results will be obtained by using a complete teleoperation system, including a visual feedback device. Moreover, trained operators achieved much better results than people performing the test for the first time.

Figure 10: LED frame (dashed line) and haptic pen (continuous line) angular positions during the second test.

Figure 11: LED frame (dashed line) and haptic pen (continuous line) translations during the second test.

4 Conclusion
The control scheme of a new 5 d.o.f. surgical action enhancement system was presented in this paper, together with the first experimental results. This device is intended to help the surgeon in the drilling operation during spinal fusion interventions. The feedback signal, i.e. the movements of a vertebra detected by an optical tracking system, is displayed to the operator by means of a haptic device. In fact, the tactile sensation guides him through the correct execution of the surgical task.

The first tests gave good and very encouraging results, and also demonstrated that training time is very short. More tests will be performed in the very near future in order to optimize the control gains and to test the control strategy on a complete teleoperation system.

References:
[1] Cleary K., Clifford M., Freedman M., Zeng J., Mun S. K., Watson V. and Henderson F., Technology improvements for image-guided and minimally invasive spine procedures, accepted for publication in IEEE Transactions on Information Technology in Biomedicine, 2001.
[2] Simon D., Intra-operative position sensing and tracking devices, Proceedings of the First Joint CVRMed/MRCAS Conference, 1997, pp. 62–64.
[3] De Waal Malefijt J., Image-guided surgery of the spine, MedicaMundi, Vol. 42(1), 1998, pp. 38–43.
[4] Peters T., Image-guided surgery: From x-rays to virtual reality, accepted for publication in Computer Methods in Biomechanics and Biomedical Engineering, 2001.
[5] Santos-Munné J. J., Peshkin M. A., Mirkovic S., Stulberg S. D. and Kienzle III T. C., A stereotactic/robotic system for pedicle screw placement, Proceedings of the Medicine Meets Virtual Reality III Conference, 1995, San Diego, CA.
[6] Boschetti G., Gallina P., Rosati G., Rossi A. and Zanotto V., A novel approach to haptic/telerobotic spine surgery, Proceedings of the 12th International Workshop on Robotics in Alpe-Adria-Danube Region RAAD03, Cassino (FR), Italy, May 2003.
[7] Stocco L. J., Salcudean S. E. and Sassani F., Optimal kinematic design of a haptic pen, IEEE/ASME Transactions on Mechatronics, Vol. 6(3), 2001, pp. 210–220.
[8] Lee C. D., Lawrence D. A. and Pao L. Y., Dynamic modelling and parameter identification of a parallel haptic interface, Proceedings of the 10th Annual Symposium on Haptic Interfaces for Virtual Environment and Teleoperator Systems, IEEE Virtual Reality Conference, 2002, Orlando, FL.
[9] Massie T. H. and Salisbury J. K., The phantom haptic interface: a device for probing virtual objects, Proceedings of the ASME Winter Annual Meeting, Symposium on Haptic Interfaces for Virtual Environment and Teleoperator Systems, 1994, Chicago, IL.
[10] Lawrence D. A., Pao L. Y., Salada M. A. and Dougherty A. M., Quantitative experimental analysis of transparency and stability in haptic interfaces, Proceedings of the ASME International Mechanical Engineering Congress and Exhibition, Vol. 58, pp. 441–449.
[11] Rosati G., A new single frame pose estimation device for robotics: theory and simulation results, Proceedings of RoManSy 2002, 2002, pp. 279–288, Udine, Italy.
[12] Glossop N. and Hu R., Assessment of vertebral body motion during spine surgery, Spine, Vol. 22, 1997, pp. 903–909.