
Fingertip data fusion of Kinect v2 and Leap Motion in Unity

Bo Li 1, Chao Zhang 1,*, Cheng Han 1, Baoxing Bai 2

1. Changchun University of Science and Technology, No. 7186, Weixing Road, Changchun, 130022, China
2. College of Optical and Electronic Information, Changchun University of Science and Technology, No. 333, Xueli Road, Changchun, 130114, China

[email protected]

ABSTRACT. This paper describes how the data fusion and application of Kinect v2 and Leap Motion in Unity3D are implemented. First, it implements a method based on Kinect v2 to obtain fingertips. Then, it calibrates Kinect v2 and Leap Motion, placed in two different orientations, in two steps: the preliminary calibration uses a one-dimensional calibration rod algorithm, and the fine calibration iteratively approaches the true value, realizing the joint calibration of the two devices. Finally, the paper uses Unity3D to fuse the data of the two devices and to carry out human-computer interaction with virtual objects in the Unity3D virtual space. Experiments show that the proposed method extends the hand tracking range and improves the accuracy of collisions between the human hand and virtual objects.

KEYWORDS: fingertip recognition, joint calibration, data fusion, natural human-computer interaction, Leap Motion, Kinect v2.

DOI: 10.3166/ISI.23.6.143-159

1. Introduction

Natural human-computer interaction has long been a focus of research in the field of human-computer interaction. The human hand, with its many joints, high degrees of freedom and varied forms, is the most effective body part for human-computer interaction and provides the most direct form of interaction. In contrast to cumbersome equipment such as data gloves, inertial sensors and marker points, Kinect and Leap Motion can extract and track hands that are completely unmarked and carry no additional sensors. This kind of natural human-computer interaction has important research value. For example, in the film and television field, virtual human hands are used to complete dangerous actions; in the game field, users interact by hand with virtual objects in virtual space; in the industrial field, robots are controlled by human hands to perform operations.
There are many gesture recognition methods based on Kinect, most of which use the depth information obtained by Kinect for processing and recognition. For example, (Yang et al., 2012) proposed recognizing gestures based on depth information and a hidden Markov model classifier; (Li, 2012) extracted hand contours, calculated the sets of convex and concave points, acquired all finger areas, and then implemented gesture recognition; (He et al., 2011) used depth information to estimate fingertips and then recognized finger-level gestures; (Meng and Wang, 2013) first used the edge contour curvature feature method to locate fingertips and then obtained the motion vectors of the fingers to implement fingertip gesture recognition.

There has also been much research on gesture recognition based on Leap Motion. For example, (Mapari and Kharat, 2016) developed an Indian Sign Language recognition system that uses the Leap Motion sensor to recognize both hands. Reference (Staretu and Moldovan, 2016) used Leap Motion to control an anthropomorphic five-fingered gripper. Based on Leap Motion, reference (Chuan et al., 2015) classified the 26-letter finger alphabet of American Sign Language using the k-nearest-neighbour algorithm and a support vector machine. Reference (Erdoğan et al., 2016) recognized gestures using Leap Motion and artificial neural networks and controlled a robot through gestures. Reference (Tauchida et al., 2015) used Leap Motion and SVM algorithms to recognize gesture trajectories. Reference (Chan et al., 2015) captured geometric hand data to identify gestures and authenticate identities.

However, Kinect and Leap Motion still have their own shortcomings. Kinect does not offer high recognition accuracy for fingers: when a finger points towards Kinect, the details of the hand cannot be detected. Leap Motion has high recognition accuracy, but its recognition space is very limited, and when a finger is occluded by other fingers, the recognition results are poor. Little research has been conducted on combining Kinect and Leap Motion. Reference (Craig and Krishnan, 2016) only fused the velocity values for hand tracking; each device has an independent trigger, which mitigates the impact of occlusion. Reference (Marin et al., 2015) introduced a gesture recognition method based on Leap Motion and Kinect, which acquires data from the two devices and achieves gesture recognition in combination with an SVM classifier. Reference (Sreejith et al., 2015) used Kinect v2 and Leap Motion to implement an image navigation system, but this method is just a simple combination of the two, where Kinect recognizes slightly distant gestures while Leap Motion recognizes close-range gestures. Reference (Debeir, 2014) fused the hand position data acquired by the Leap Motion and Kinect sensors to improve hand tracking performance.

The research in this paper combines the advantages of Kinect v2 and Leap Motion: Kinect v2 has a large recognition space, whereas Leap Motion has high recognition precision. When the two are placed in different positions, the data observed from different angles are complementary and can be integrated in Unity3D to improve the accuracy of finger detection and make the interactions between human hands and virtual objects in virtual scenes more reliable.
The rest of the paper is organized as follows. Section 2 describes the fingertip detection method based on the Kinect v2 depth image, which extracts the hand area, obtains the hand contour, obtains the fingertip pixels according to the curve of the distance from the hand contour to the centre of the hand, and converts them into fingertip coordinates. Section 3 describes the calibration method for Leap Motion and Kinect v2: the first step is a preliminary calibration using a one-dimensional calibration object, followed by a fine calibration that obtains a more accurate rotation matrix and translation vector through iteration. Section 4 is about the data fusion, i.e. the fusion of the fingertip data of Leap Motion and Kinect v2 in Unity3D, including temporal registration and spatial fusion. Section 5 presents experiments on the proposed method and its application in Unity3D. Section 6 gives a summary of the research and an outlook on future work.

2. Acquisition of fingertip data from depth images

2.1. Hand segmentation

This paper first uses OpenCV and OpenNI to obtain the returned palm position, then quickly locates the hand region according to a threshold on the vertical coordinate of the palm, and finally uses a depth binary mask algorithm to separate the hand from the background. The algorithm uses a depth threshold to filter out background regions whose colour is similar to skin. For example, if the face overlaps with the hand, the face can easily be removed with the depth threshold. Note that this paper processes the original depth image of Kinect v2 (hereafter referred to as Kinect) instead of the depth images obtained through OpenNI.

After the hand is detected by Kinect and OpenNI, the system calls the NiTE library to return the coordinates of the palm. Let the y value of the returned palm coordinates be m; the upper and lower thresholds splitting the hand are then m + Δ and m − Δ, respectively. According to the actual tests, 7 cm < Δ < 14 cm gives the best results. The hand can be accurately extracted from the image area cut out by the upper and lower thresholds.

After the hand region is extracted, the hand is separated from the background using a depth binary mask method. The mask is a template for image filtering. In this paper, in order to extract the hand and remove the background, an n × n matrix is used to filter the pixels of the image so as to highlight the desired object. This matrix is called a mask, a binary image consisting only of 0s and 1s. The binary mask P_d constructed in this paper is a mask window with a given width and height, centred on the palm, and is defined by (1):

$$P_d(x, y) = \begin{cases} 1, & Z_h - d \le Z(x, y) \le Z_h + d \\ 0, & \text{otherwise} \end{cases} \qquad (1)$$

where the depth threshold d = 8 cm was obtained after many experiments, Z_h is the depth value of the palm coordinate, and Z(x, y) is the depth value at pixel (x, y) of the image.
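As a minimal sketch of the two-step segmentation described above, the following Python/NumPy fragment first keeps a vertical band around the palm and then applies the depth binary mask of Eq. (1). The palm row and palm depth are assumed to come from the OpenNI/NiTE palm tracking; the pixel half-width of the band (here 60 px, a stand-in for the 7-14 cm threshold Δ) and the function name segment_hand are illustrative assumptions, not part of the original implementation.

```python
import numpy as np


def segment_hand(depth_mm: np.ndarray, palm_row: int, palm_depth_mm: float,
                 delta_px: int = 60, d_mm: float = 80.0) -> np.ndarray:
    """Return the binary hand mask P_d of Eq. (1), restricted to the band around the palm.

    depth_mm      -- Kinect v2 depth frame, values in millimetres
    palm_row      -- vertical pixel coordinate m of the palm returned by NiTE
    palm_depth_mm -- palm depth Z_h in millimetres
    delta_px      -- assumed pixel equivalent of the threshold delta (7-14 cm)
    d_mm          -- depth threshold d = 8 cm from the paper
    """
    h, w = depth_mm.shape

    # Step 1: keep only the rows between m - delta and m + delta (clamped to the image).
    band = np.zeros((h, w), dtype=bool)
    top = max(palm_row - delta_px, 0)
    bottom = min(palm_row + delta_px, h - 1)
    band[top:bottom + 1, :] = True

    # Step 2: depth binary mask, Eq. (1) -- 1 where |Z(x, y) - Z_h| <= d, 0 otherwise.
    depth_ok = np.abs(depth_mm.astype(np.float32) - palm_depth_mm) <= d_mm

    return (band & depth_ok).astype(np.uint8)


if __name__ == "__main__":
    # Synthetic 424x512 Kinect v2 depth frame: background at 2 m, hand patch at ~0.85 m.
    depth = np.full((424, 512), 2000, dtype=np.uint16)
    depth[200:260, 240:300] = 850
    mask = segment_hand(depth, palm_row=230, palm_depth_mm=850.0)
    print(mask.sum(), "hand pixels")
```

The band threshold removes the arm and torso above and below the hand, while the depth mask removes skin-coloured background such as the face, which lies at a different depth from the palm.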