Turkish Journal of Physiotherapy and Rehabilitation; 32(3) ISSN 2651-4451 | e-ISSN 2651-446X

YOGA POSTURE RECOGNITION ACCORDING TO IMAGES CAPTURED BY RGB CAMERA

J. Palanimeera1, K. Ponmozhi2
1Research Scholar, Department of Computer Applications, Kalasalingam Academy of Research and Education, Krishnankoil, Tamil Nadu, India
2Assistant Professor, Department of Computer Applications, Kalasalingam Academy of Research and Education, Krishnankoil, Tamil Nadu, India
[email protected], [email protected]

ABSTRACT

Muscle conditions are increasingly common in humans as a result of injury or aging, and they are a significant concern for the future. Physical activity can help to mitigate these conditions, and one of the most effective forms of physical activity is yoga.

To hold a yoga posture correctly, it is important to maintain the correct center of gravity (COG) for that pose, and the COG differs from one asana to another. In a yoga asana, the COG refers to a hypothetical point on the body where the body weight is concentrated; sometimes it is even located outside the body. Knowing this point helps reduce pain or load at that location. This study identifies the COG of seven asana: Pranamasana, Dhanurasana, Dandasana, Gomukhasana, Garudasana, Padmavrikshasana and Padmasana.

This study combines several techniques to classify images captured by an RGB camera and converted into depth images in order to identify seven yoga positions: Pranamasana, Dhanurasana, Dandasana, Gomukhasana, Garudasana, Padmavrikshasana and Padmasana. To obtain the human outline edge, the background depth image is subtracted from the captured depth image. The outline is projected horizontally to determine whether the person is doing yoga or not. A pre-trained convolutional neural network determines which of the seven yoga postures is being performed. If the person is not in the correct posture, the star skeleton technique is used to extract feature points of the outline. The obtained feature points and the center of gravity are then used to calculate aspect vectors and body depth values, which in turn are used to find the upper and lower lengths of the body.

Based on this, the COG of the seven asana Pranamasana, Dhanurasana, Dandasana, Gomukhasana, Garudasana, Padmavrikshasana and Padmasana was identified. In the future, the COG can be predicted for other asana as well.

Keywords: yoga posture, center of gravity, human outline edge, RGB camera, convolutional neural network, star skeleton technique, upper length and lower length

I. INTRODUCTION
As yoga has developed into a major rehabilitative technique, it is critical to assist practitioners in maintaining proper postures, i.e., the correct locations of the joints associated with each asana. Joint-angle limits are among the best options for measuring the range of linked joints, since there are well-established biomechanical limits between any two joints [1-2]. Human behaviour or activity recognition plays an important role in fields such as pattern recognition, computer vision, and human-computer interaction [3-4]. Today's processes for detecting human postures and movements from video or photos have enabled a multitude of applications, including crime identification, sports posture correction, and rehabilitative assistance systems. Human posture recognition has been reviewed in several articles over the last few years. Generally speaking, these methods can be classified into two categories.


The first category comprises marker-based methods, in which markers are placed on an individual's body or clothing to obtain particular values such as limb positions and body slope. As an example, one study required a person to wear a garment with strain sensors to determine 27 upper-body points [5]. In another scheme, a triaxial accelerometer worn around the waist was used to categorize human movement [6]. A wearable accelerometer has also been developed to monitor human activity levels and detect posture situations [7]. A disadvantage, however, is that wearable sensors and batteries were required in all of these studies, and wearing such devices may be a source of inconvenience or discomfort. Another style of approach for recognizing posture focuses on observed representations of a person's body. Colored markers placed on the human torso and limbs can be used to reflect posture characteristics; the methods discussed in [8, 9] and [10] identify human postures from the relative locations of the colored markers. Colored markers, on the other hand, may be just as inconvenient as wearing sensor devices. Image processing methods have been utilized in several experiments to extract features from human images and employ those features to determine posture. To recognize human postures such as standing, sitting, kneeling, and stooping, [11] used more than ten measurements (lengths and widths of the upper and lower torso). A three-dimensional-model-based human-body-posture-recognition approach was proposed in [12] and [13], where the projections of a human body were extracted and converted into predefined 3D posture models of standing, sitting, lying, and stooping. The skeleton of a person can also be studied geometrically: the authors of [14] evaluated the similarity between a posture and a posture template database using an entropy measure as the essential function together with a modified Hausdorff distance. Using a temporal-difference image sensor, scale- and location-invariant line features can be extracted [15], followed by a Hausdorff distance classifier that compares the similarity of the features to a library of objects. The authors of [16] used a discrete Fourier transform to extract features before using a fuzzy neural network to differentiate human body postures. The authors of [17] differentiated postures with a Support Vector Machine (SVM) applied to time-of-flight sensor images. In [18], the ratio of height to width, in addition to horizontal and vertical projections, was used as fuzzy logic input for posture recognition. Some research has used the Kinect sensor to identify human postures; for example, the authors of [19] provided a system that recognizes human postures using histograms of 3D joint positions from Kinect depth maps together with a discrete hidden Markov model. Another method, in which the human skeleton is captured by a Kinect camera, was proposed in [20] to identify the four human postures of standing, sitting, lying, and bending. The authors of [21] identified three human gestures from the vectors of 20 body-joint locations captured by a Kinect sensor. Similarly, a technique based on color and depth information gathered from a sensor was established in [22]. In [23], the RGB-D-based skeleton output of a Kinect sensor is used as one element of a multilayer system for understanding human activity.
In [24], an SVM was used to identify postures from various features captured by a Kinect sensor, including features of the forearm and thigh. Consider the human skeleton, where each segment is linked to the next in a particular way determined by biomechanical constraints.

A unique posture-recognition technique is proposed in this paper. The method depends on the use of only two devices: a laptop computer and an RGB camera. The camera unit combines a depth sensor, an RGB projector, a multi-array microphone, and a motorized tilt, which makes marker-less capture possible [25]. The depth sensor uses an infrared emitter and a monochrome CMOS sensor to capture depth images with a resolution of 320*240 pixels, while the RGB camera captures color images at a resolution of 640*480 pixels. A sound signal can be received with the multi-array microphone, but it is not used in this analysis. Horizontal projection, the vertical star skeleton, a convolutional neural network, and other image processing techniques are employed. Seven yoga postures are recognized: Pranamasana, Dhanurasana, Dandasana, Gomukhasana, Garudasana, Padmavrikshasana and Padmasana. The obtained findings can be extrapolated to other postures not listed here.

This study also adds knowledge about the physical body, such as its joints. The center of gravity (COG) is the average position of the weight of every segment in the human body. It is important to keep this COG in the proper position to provide greater stability and balance. According to [26], patients with chronic lower back pain have their COG shifted excessively towards the back. As a result, assistive systems must correct the position of each segment in order to maintain proper COG positions.

The following are the paper's key contributions. The participant does not need to wear any sensors, since an RGB camera is used. Since a depth image is used rather than a plain color image, the captured image is unaffected by lighting effects, shadows, or color matching between the participant's clothing and the background. Even though the subjects face in various directions, three posture recognition methods, based on the ratio of width to height of the body, a convolutional neural network, and a length ratio, are combined to identify a total of seven postures.

The remainder of the paper is organized as follows. Related work on skeletal representation is discussed in Section 2. Techniques for extracting body-posture features are discussed in Section 3. The convolutional neural network and the final identification method for human-body-posture recognition are discussed in Section 4. The test results are then presented and reviewed in Section 5, and the conclusion is given in the final section.

II. RELATED WORK
Different types of representations may be used to understand human posture in real life. Using a core point in the body, the symmetry of the human body can be exploited to detect patterns [15]. The relative locations of the joints can be obtained via depth sensor modules to provide body-centered information, and the body's dynamics can be represented geometrically. Using depth and skeletal information together increases the recognition rate [27]. Temporal collective coordinates [28] or a moving pose descriptor may be used to classify human behaviour. Similarly, body joint angles estimated from a pair of stereo images [30] or body-part angles estimated from geometric relationships [29] can be used. There are several ways to mark the appearance of an individual with a particular identity using computer graphics. Wei and Chai [30], as well as Lin et al. [31], assessed posture using additional parameters such as continuous base-of-support stability and joint width. Studies in [32] show that statistical limits on their ratios can be used to estimate joint lengths. Body dimensions, gender, and height can also be used to estimate bone length [33]. Signal processing techniques can likewise be used to identify human behaviour from skeletal details, and human activities can be differentiated by joint configurations. Using depth sensors such as the Microsoft Kinect, which provide skeletal information, a posture recognition and feedback system can be built. Such sensors are excellent sources because they are unaffected by minor changes in the environment. In certain applications, such as self-help practice, the practitioner's privacy is a major concern. Since depth sensors protect privacy, extracting and representing the human body skeleton from depth sensors guarantees a high level of privacy [34]. To capture this consistency, we use joint integration values in this article, and we treat each move or posture as a sequence of postures coupled with the joint variations of each subsequent posture in order to locate the centre of gravity.

1. Depth Image Processing
Seven yoga poses are identified in this study: Pranamasana, Dhanurasana, Dandasana, Gomukhasana, Garudasana, Padmavrikshasana and Padmasana. Several image processing methods are introduced and implemented for this purpose.

1.1 Yoga Pose Outline Segmentation
First, without any person present, the RGB camera captures the background depth image. It then takes another image with one person and subtracts the current depth image from the background depth image to obtain the resultant depiction. Subtraction results in a binary image in which the black pixels represent the background and the white pixels represent the foreground. Erosion and dilation are then applied several times to repair defects in the human outline and eliminate noise. Figures 1 (a), (b) and (c) illustrate this process. If the noise is not completely eliminated, a connected-components approach is used to recover the largest region of white pixels, which is referred to as the "best" region.
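As an illustration of this segmentation step, a minimal Python sketch is given below; the OpenCV calls, threshold values, and function names are assumptions for illustration rather than the exact implementation used in this work.

# Minimal sketch of the outline segmentation described above (assumed names and
# thresholds): background subtraction, erosion/dilation, and selection of the
# largest connected white region.
import cv2
import numpy as np

def segment_outline(background, frame, diff_thresh=30):
    diff = cv2.absdiff(background, frame)
    # Binary image: white pixels = foreground (person), black pixels = background.
    _, binary = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)
    binary = binary.astype(np.uint8)
    # Erosion and dilation repair small defects in the outline and remove noise.
    kernel = np.ones((5, 5), np.uint8)
    binary = cv2.erode(binary, kernel, iterations=2)
    binary = cv2.dilate(binary, kernel, iterations=2)
    # Connected components: keep the largest white region as the "best" area.
    num, labels, stats, _ = cv2.connectedComponentsWithStats(binary)
    if num <= 1:
        return None
    largest = 1 + int(np.argmax(stats[1:, cv2.CC_STAT_AREA]))
    return np.where(labels == largest, 255, 0).astype(np.uint8)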



The complete process of human outline segmentation is shown in Figure 2. Since the effective detection range of the RGB camera is 2 m to 4 m, human subjects must be placed inside this range. The size of the separated human silhouette is at least 2000 pixels in our experiments, so 2000 pixels is used as a threshold to decide that a human subject is present and within the detection range.

If the separated human silhouette is not larger than 2000 pixels, the subsequent processes are not started.
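A corresponding check, assuming the 2000-pixel threshold stated above, might look as follows (illustrative only):

MIN_PIXELS = 2000  # threshold taken from the text above

def is_valid_subject(outline_mask):
    # Count the white pixels of the segmented silhouette.
    return int((outline_mask > 0).sum()) >= MIN_PIXELS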

3.2.1 The center of gravity
The centre of gravity of the outline can be used to compute the upper-to-lower body ratio of the human. The outline is divided into the upper and lower body, so that the upper body is the area of the outline above the center of gravity and the lower body is the area below it. The centre of gravity is given by



$x = \dfrac{\sum_{i=1}^{n} m_i x_i}{\sum_{i=1}^{n} m_i}, \qquad y = \dfrac{\sum_{i=1}^{n} m_i y_i}{\sum_{i=1}^{n} m_i}$ … (1)

where (x, y) is the ordered pair of the center of gravity, computed as the weighted average of the pixel coordinates with respect to their weights; n is the total number of white pixels inside the outline; m_i is the weight of the i-th pixel (equal for all pixels of a binary silhouette); and x_i and y_i are the x-axis and y-axis values of the i-th pixel inside the outline, respectively.
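A small sketch of Equation (1) in Python is shown below, assuming every white pixel of the binary outline carries equal weight, so the weighted average reduces to the mean pixel coordinate.

import numpy as np

def center_of_gravity(outline_mask):
    ys, xs = np.nonzero(outline_mask)      # coordinates of the white pixels
    if xs.size == 0:
        return None
    # With m_i = 1 for every pixel, Equation (1) reduces to the mean coordinate.
    return float(xs.mean()), float(ys.mean())   # (x, y)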

The height of the human body can also be determined by measuring the vertical projection, i.e., counting the pixels in each row that lie inside the outline from top to bottom. The maximum values of this projection plot in the upper and lower body can then be read off relative to the location of the center of gravity, as seen in Figure 4.

Equation (2) is used to estimate the ratio of the upper-body maximum to the lower-body maximum of this histogram.

$Y_R = \dfrac{P_U}{P_L}$ … (2)

where Y_R is called the aspect value of the human posture, P_U is the maximum upper-body projection value, and P_L is the maximum lower-body projection value. It is worth noting that the Pranamasana posture differs from the others with regard to upper and lower body proportions because its aspect value is so large.



This ratio is therefore very useful in identifying the Pranamasana pose. According to the tests, a person's aspect value Y_R is the highest of all poses in this posture, as shown in Figure 5. A threshold can be used to distinguish the Pranamasana pose: at Y_R = 1.5, the individual's posture is judged to be Pranamasana. Section 3 explains why 1.5 was chosen as the threshold value.
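The projection and ratio test can be sketched as follows; the row-wise pixel counts, the split at the center-of-gravity row, and the comparison direction (Y_R largest for Pranamasana, as in Figure 5) are assumptions drawn from the description above.

def aspect_value(outline_mask, cog):
    """Equation (2): ratio of the maximum row-wise pixel counts above and
    below the center-of-gravity row."""
    _, y_c = cog
    row_counts = (outline_mask > 0).sum(axis=1)   # pixels inside the outline per row
    upper = row_counts[: int(y_c)]                # rows above the center of gravity
    lower = row_counts[int(y_c):]                 # rows below the center of gravity
    p_u = upper.max() if upper.size else 0
    p_l = lower.max() if lower.size else 0
    return p_u / p_l if p_l else float("inf")     # Y_R = P_U / P_L

def looks_like_pranamasana(y_r, threshold=1.5):
    # Threshold test described above; the comparison direction assumes Y_R is
    # largest for Pranamasana, as in Figure 5.
    return y_r >= threshold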

2.2.2 Feature Vector Extraction
Equation (3) gives the distance between the centre of gravity and a point on the margin of the human outline.

$d = \sqrt{(x_1 - x_2)^2 + (y_1 - y_2)^2}$ … (3)



where (x1, y1) is the center of gravity, (x2, y2) is any point on the edge, and d denotes the distance between them. The distance values are calculated beginning with the left-most edge point and moving clockwise towards the endpoint, which lies close to the starting point, as shown in Figure 6. After the feature points are obtained, the feature vectors are computed. First, each feature point p = (xp, yp) is linked to the center of gravity. Then, as shown in Figure 7, a feature skeleton structure is developed. The feature skeleton can be misaligned if the center of gravity does not lie within the person's silhouette (see Figure 7 (b)); in this case the center of gravity must be adjusted so that it lies inside the silhouette before the true feature skeleton is formed. To do this, a vertical and a horizontal line are drawn through the center of gravity, and the line that has exactly two cross points with the person's silhouette is chosen. Finally, the center of gravity is moved onto the segment between the two cross points. The new point is illustrated in Figure 8.
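The adjustment of the center of gravity can be sketched as follows, assuming that "exactly two cross points" corresponds to a single contiguous run of silhouette pixels along the chosen line and that the adjusted point is the midpoint of that run.

import numpy as np

def adjust_cog(outline_mask, cog):
    x_c, y_c = int(cog[0]), int(cog[1])
    if outline_mask[y_c, x_c]:
        return cog                                   # already inside the silhouette
    candidates = (
        (outline_mask[y_c, :], lambda i: (i, y_c)),  # horizontal line through the COG
        (outline_mask[:, x_c], lambda i: (x_c, i)),  # vertical line through the COG
    )
    for values, make_point in candidates:
        inside = np.flatnonzero(values > 0)
        if inside.size:
            # Split into contiguous runs; one run means exactly two cross points.
            runs = np.split(inside, np.where(np.diff(inside) > 1)[0] + 1)
            if len(runs) == 1:
                return make_point(int(runs[0].mean()))
    return cog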

Each branch of the skeleton in Figure 8 represents one of the human aspect vectors V = [x̂(p), ŷ(p)], which is the vector relating the feature point p to the center of gravity, as given in Equation (4).

$\hat{x}(p) = x_p - x_c, \qquad \hat{y}(p) = y_p - y_c$ … (4)
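A condensed sketch of the feature-point and aspect-vector extraction (Equations (3) and (4)) is given below; the contour tracing and the local-maximum peak picking are simplifying assumptions rather than the exact star skeleton procedure.

import cv2
import numpy as np

def aspect_vectors(outline_mask, cog):
    x_c, y_c = cog
    contours, _ = cv2.findContours(outline_mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    boundary = max(contours, key=cv2.contourArea).reshape(-1, 2)   # (x, y) edge points
    # Equation (3): distance from each boundary point to the center of gravity.
    d = np.hypot(boundary[:, 0] - x_c, boundary[:, 1] - y_c)
    # Crude feature points: local maxima of d that exceed the mean distance.
    peaks = [i for i in range(len(d))
             if d[i] > d[i - 1] and d[i] >= d[(i + 1) % len(d)] and d[i] > d.mean()]
    # Equation (4): aspect vector relating each feature point p to the COG.
    return [(int(xp - x_c), int(yp - y_c)) for xp, yp in boundary[peaks]]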


III. CONVOLUTIONAL NEURAL NETWORK AND FINAL IDENTIFICATION
Following feature extraction, the derived feature vectors are used to identify the poses with a convolutional neural network. Figure 9 shows the convolutional neural network, which is commonly used for classification. Because of its simple structure, efficiency, and high error tolerance, we chose it as the yoga posture classifier for human posture recognition in this analysis. However, there are two things to bear in mind when preparing the convolutional neural network: first, the order of the network inputs must be arranged, and second, the aspect vectors V must be normalized. These two aspects are described below.

4.1 Normalization of feature vectors
If the person is far from (or close to) the RGB camera, the person's perceived size will be small (or large). As a result, the values in Table 1 must be normalized in advance so that the recognition accuracy is not affected. Let $L_{max} = \max_i(L_i)$, $i = 1, \ldots, m$; each length is then normalized as $L_i / L_{max}$ and each angle is divided by 360. The normalized vector V_i is then used to represent the aspect vector. Table 1 shows the result. Another benefit of normalizing the aspect vectors is that they become ratio values that do not depend on the height of the human subject, so the proposed posture authentication algorithm applies to people of different heights.
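Under the interpretation above (lengths divided by L_max, angles divided by 360 degrees), the normalization can be sketched as follows; the vector-to-length/angle conversion is an illustrative assumption.

import math

def normalize_features(vectors):
    lengths = [math.hypot(vx, vy) for vx, vy in vectors]
    angles = [math.degrees(math.atan2(vy, vx)) % 360.0 for vx, vy in vectors]
    l_max = max(lengths) if lengths else 1.0
    # Each aspect vector becomes a pair of scale-independent ratios.
    return [(l / l_max, a / 360.0) for l, a in zip(lengths, angles)]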



4.2 CNN function
The previous section demonstrated how a person's upper and lower body proportions can be used to identify one yoga posture. However, there are several more poses that must be identified, so the next step in the authentication process is the CNN. The network is made up of twelve input neurons, 500 hidden neurons, and four output neurons. The twelve inputs consist of the lengths of the aspect vectors, seven angles, and two depth values. The CNN has 1000 sets of training data available for use, and the hidden layer needs 500 hidden neurons to memorize the training data, so that number is set to 500. The four output neurons represent the four posture classes Dhanurasana, Garudasana, Navasana, and Padmasana.

Since one of these outputs may correspond to either Dhanurasana or Garudasana, a further check is needed to decide which of the two postures is being performed. The training data contain 200 Dhanurasana, 300 Garudasana, 200 Navasana, and 250 Padmasana samples, and the poses shown in Figure 10 are included in the training details. The poses have different orientations; the weights between the inputs and the hidden neurons are obtained after training.

Equation (5) gives the output weights W.

$W = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$ … (5)

W stands for the 4 × 600 weight matrix. For example, the element W(1, 150) equal to 1 indicates that the weight of the connection between hidden neuron X_150 and output Y_1 is 1.

When the variation of the input weights is less than 0.10, the CNN training is stopped, as seen in Figure 11. The 36th cycle marks the end of training.

Since the CNN is supervised, the inputs for training and testing should be placed in order I or order II. The reason for using two order arrangements is that, according to the findings of several trials, using two different arrangements for two different situations achieves a much higher recognition rate than using just one arrangement.
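The stated sizes (twelve inputs, 500 hidden neurons, four output poses) correspond to fully connected layers, so the following Python sketch uses dense layers; the optimizer, activation, and stopping check are assumptions rather than the exact training settings used in this work.

import torch
import torch.nn as nn

class PostureNet(nn.Module):
    def __init__(self, n_inputs=12, n_hidden=500, n_outputs=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_inputs, n_hidden),
            nn.ReLU(),
            nn.Linear(n_hidden, n_outputs),  # Dhanurasana, Garudasana, Navasana, Padmasana
        )

    def forward(self, x):
        return self.net(x)

def train(model, features, labels, epochs=100, tol=0.10):
    """Train until the change in loss falls below a small tolerance,
    a stand-in for the weight-variation stopping rule described above."""
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    criterion = nn.CrossEntropyLoss()
    previous = float("inf")
    for _ in range(epochs):
        optimizer.zero_grad()
        loss = criterion(model(features), labels)
        loss.backward()
        optimizer.step()
        if abs(previous - loss.item()) < tol:
            break
        previous = loss.item()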

4.3 The authentication procedure
Image capture and pre-processing are the first step. Feature extraction is the second step; the features include the upper-to-lower body ratio Y_R of the human silhouette and the feature vectors. If Y_R is less than 1.5, the person's posture is determined to be Pranamasana. The CNN stage is the third step; its outputs, Dhanurasana, Garudasana, Navasana and Padmasana, together with the earlier steps cover the seven yoga poses that can be described. If the CNN output is Garudasana or Navasana, the procedure continues with a further check: if R is less than 3.1, the posture is classified as Padmasana; otherwise the pose is classified as Navasana.
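Tying the steps together, an end-to-end sketch of the authentication procedure might look as follows; it reuses the illustrative helper functions sketched earlier, and the ratio R used for the Garudasana/Navasana check is not fully specified in the text, so a stand-in is marked in the comments.

import torch

def recognize_posture(background, frame, model, r_threshold=3.1):
    # Step 1: image capture and pre-processing.
    outline = segment_outline(background, frame)
    if outline is None or not is_valid_subject(outline):
        return "no subject"
    # Step 2: feature extraction.
    cog = adjust_cog(outline, center_of_gravity(outline))
    y_r = aspect_value(outline, cog)
    if looks_like_pranamasana(y_r):
        return "Pranamasana"
    values = [v for pair in normalize_features(aspect_vectors(outline, cog)) for v in pair]
    flat = torch.tensor((values + [0.0] * 12)[:12], dtype=torch.float32).unsqueeze(0)
    # Step 3: CNN classification over the remaining poses.
    pose = ["Dhanurasana", "Garudasana", "Navasana", "Padmasana"][int(model(flat).argmax())]
    if pose in ("Garudasana", "Navasana"):
        r = y_r  # stand-in for the ratio R; the exact definition is not given in the text
        pose = "Padmasana" if r < r_threshold else "Navasana"
    return pose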

IV. TEST FINDINGS AND DISCUSSION
In all the experiments, the RGB camera and the PC were placed on the same table at a height of 60 cm, and the human subject stood 2.15 to 4.15 m in front of the table. The proposed method of identifying the seven human postures Pranamasana, Dhanurasana, Dandasana, Gomukhasana, Garudasana, Padmavrikshasana and Padmasana was implemented using the RGB camera. Each pose was captured in several different orientations in the experiments, as shown in Figure 12. In Figure 12 (a), the Pranamasana pose is tested 60 times, with each orientation tested 15 times. In Figure 12 (b), the Padmasana pose is tested 95 times, with each orientation tested 10 times. Each of the other three poses is tested 70 times. Python is used to implement the posture recognition algorithm. The examinations were performed on seven students. The successful recognition rate for each test subject is more than 99 percent, as shown in Table 2. Each pose has a successful recognition rate of over 98 percent, with an overall successful recognition rate of 99.25 percent, which is very high.



Although the proposed posture recognition mechanism has several stages, it takes only three milliseconds to identify a posture in the experiments. Figure 13 shows all of the human poses that were successfully identified in various contexts and at various distances between the RGB camera and the subject, as shown in the first column of the figure.



The fourth indoor setting is low light. Images were captured and recognized in each of these situations. Figure 14 shows the breakdown of the calculation time taken to identify each of the 7 poses, with TCL indicating the total calculation time. The remaining terms Ti, i = 1, 2, …, 6, denote the computation time spent on each stage of the posture recognition process, as follows: T1 is the image processing for noise removal; T2 is the outer-margin segmentation from the captured images; T3 is the horizontal projection and the Pranamasana posture judgment; T4 is aspect-vector extraction; T5 is the CNN identification of the yoga poses and the Padmasana/Garudasana decision; and T6 determines whether the subject is doing yoga or not. If Ti = 0, the i-th process is not needed; for example, to recognize posture (iii) in Figure 13, the processes T4, T5, and T6 are not needed. Because of the broad object outline and the part activation involved in the separation process, some poses take longer to identify than others. The poses of 12 individuals were also correctly recognized within three milliseconds each. As a result, it can be inferred that this approach can be used to recognize yoga postures in real time.

V. CONCLUSION
Even though the human subjects appear at various distances and orientations, this article provides an effective method for identifying the seven yoga postures of Pranamasana, Dhanurasana, Dandasana, Gomukhasana, Garudasana, Padmavrikshasana and Padmasana. The proposed posture recognition system was found to have a success rate of 99.25 percent in tests. Because the RGB camera provides depth information, the image processing is not influenced by light and shadow. The extraction of several features combined with the CNN yields an effective posture recognition process, and compared with other approaches the system is very cost effective. However, when a portion of the subject's body is occluded, incorrect feature extraction may result in recognition failure, and additional research is required for such situations; this problem is an opportunity for future work. Identifying posture dynamically is considered the most reliable form of posture recognition. In the future we hope to develop a reliable yoga posture recognition technique that can be used as part of a home care system to monitor people who do yoga at home. The proposed posture recognition system achieves a high recognition rate with the available training data, and it is capable of detecting irregular posture, for example unnatural posture resulting from a fall.

REFERENCES
1 H. Hatze. A three-dimensional multivariate model of passive human joint torques and articular boundaries. Clinical Biomechanics, 12(2):128–135, 1997. 2 T. Kodek and M. Munich. Identifying shoulder and elbow passive moments and muscle contributions. In IEEE Int. Conf. on Intelligent Robots and Systems, volume 2, pages 1391–1396, 2002. 3 P. Turaga, R. Chellapa, V. S. Subrahmanian, and O. Udrea. Machine recognition of human activities: A survey. IEEE Transactions on Circuits and Systems for Video Technology, 18(11):1473–1488, 2008. 4 J. K. Aggarwal and M. S. Ryoo. Human activity analysis: A review. In ACM Computing Surveys, 2011.

Turkish Journal of Physiotherapy and Rehabilitation; 32(3) ISSN 2651-4451 | e-ISSN 2651-446X

5 Mattmann C, Amft O, Harms H, Troster G, Clemens F. Recognizing Upper Body Postures Using Textile Strain Sensors. In: 2007 11th IEEE International Symposium on Wearable Computers; 11-13 October; Boston, MA. 2007. pp. 29-36. DOI: 10.1109/ISWC.2007.4373773. 6 Karantonis DM, Narayanan MR, Mathie M, Lovell NH, Celler BG. Implementation of a Real-time Human Movement Classifier Using a Triaxial Accelerometer for Ambulatory Monitoring. IEEE Transactions on Information Technology in Biomedicine.2006; 10(1): 156-167. DOI: 10.1109/TITB. 2005.856864. 7 Jeong D-U, Kim S-J, Chung WY. Classification of Posture and Movement Using a 3-axis Accelerometer. In: International Conference on Convergence Information Technology. November 21-23;Gyeongju, Korea. 2007. pp. 837-844. DOI: 10.1109/ICCIT.2007.202. 8 Ukida H, Kaji S, Tanimoto Y, Yamamoto H. Human Motion Capture System using Color Markers and Silhouette. In: Proceedings of the IEEE Instrumentation and Measurement Technology Conference; 24-27 April; Sorrento, Italy. 2006. pp. 151-156. DOI:10.1109/IMTC.2006.328334. 9 Chiu CY, Wu CC, Wu YC, Wu MY, Chao SP, Yang SN. Retrieval and Constraint-based Human Posture Reconstruction from a Single Image. Journal of Visual Communication and Image Representation.2006; 17(4): 892-915. DOI: 10.1016/j.jvcir.2005.01.002. 10 Liu HY, Wang WJ, Wang RJ, Tung CW, Wang PJ, Chang IP.Image Recognition and Force Measurement Application in the Humanoid Robot Imitation.IEEE Transactions on Instrumentation and Measurement. 2012; 61(1): 149-161. DOI: 10.1109/ TIM.2011.2161025. 11 Li C-C, Chen Y-Y.Human Posture Recognition by Simple Rules. In: IEEE International Conference on Systems, Man and Cybernetics; 8-11 October; Taipei, Taiwan. 2006. p. 3237 - 3240. DOI: 10.1109/ICSMC.2006.384616. 12 Boulay B, Bremond F, Thonnat M. Posture Recognition with a 3d Human Model. In: The IEE Inter‐ national Symposium on Imaging for Crime Detection and Prevention; 7-8 June; 2005. pp. 135-138. DOI: 10.1049/ic:20050085. 13 Boulay B, Bremond F, Thonnat M. Applying 3d Human Model in a Posture Recognition System. Pattern Recognition Letters. 2006; 27(15): 1788- 1796.DOI: 10.1016/j.patrec.2006.02.008. 14 Chen D-T, Liao H-YM, Tyan H-R, Lin C-W. Automatic Key Posture Selection for Human Behavior Analysis. In: IEEE 7th Workshop on Multimedia Signal Processing; 30 October-2 November; 2005. pp. 1-4. DOI: 10.1109/MMSP.2005.248572. 15 Barranco B, Culurciello E. Efficient Feedforward Categorization of Objects and Human Postures with Address-Event Image Sensors. IEEE Transactions on Pattern Analysis and Machine Intelligence. 2012; 34(2): 302-314. DOI: 10.1109/TPAMI.2011.120. 16 Juang CF, Chang CM. Human Body Posture Classification by a Neural Fuzzy Network and Home Care System Application. IEEE Transactions on Systems Man and Cybernetics Part A – Systems and Humans. 2007; 37(6): 984-994. DOI: 10.1109/ TSMCA.2007.897609. 17 Diraco G, Leone A, Siciliano P. Human Posture Recognition with a Time-of-flight 3d Sensor for Inhome Applications. Expert Systems with Applications. 2013;4 0(2): 744-751. DOI: 10.1016/j.eswa. 2012.08.007. 18 Brulin D, Benezeth Y, Courtial E. Posture Recognition Based on Fuzzy Logic for Home Monitoring of the Elderly. IEEE Transactions on Information Technology in Biomedicine. 2012; 16(5): 974-982. DOI: 10.1109/TITB.2012.2208757. 19 Xia L, Chen C-C, Aggarwal JK. View Invariant Human Action Recognition Using Histograms of 3d Joints. In: IEEE Computer Society Conference onComputer Vision and Pattern Recognition Workshops;16-21 June; Providence, RI. 
2012. pp. 20-27.DOI: 10.1109/CVPRW.2012.6239233. 20 Le T-L, Nguyen M-Q, Nguyen T-T-M. Human posture recognition using human skeleton provided by Kinect. In: International Conference on Computing, Management and Telecommunications;21-24 January; Ho Chi Minh, Vietnam. 2013.pp. 340-345. DOI: 10.1109/ComManTel.2013.6482417. 21 Patsadu I, Nukoolkit C, Watanapa B. Human gesture recognition using Kinect camera. In: International Joint Conference on Computer Science and Software Engineering (JCSSE); 30 May-01 June 2012; Bangkok, Thailand. 2012. pp. 28-32. DOI: 10.1109/JCSSE.2012.6261920. 22 Southwell BJ, Fang G. Human Object Recognition Using Colour and Depth Information from an RGBD Kinect Sensor. International Journal of Advanced Robotic Systems. 2013;10(171)DOI: 10.5772/55717. 23 Granata C, Ibanez A, Bidaud P. Human Activityunderstanding: A Multilayer Approach Combining Body Movements and Contextual Descriptors Analysis. International Journal of Advanced Robotic Systems. 2015; 12(89) DOI: 10.5772/60525. 24 Zhang Z, Liu Y, Li A, Wang M. A novel method for user-defined human posture recognition using Kinect. In: 7th International Congress on Image and Signal Processing; 14-16 October; Dalian, China. 2014. p. 736-740. DOI: 10.1109/CISP.2014.7003875. 25 D.Galantino ML, Bzdewka TM, Eissler-Russo JL, Holbrook ML, Mogck EP,Geigle P, et al. The impact of modified on chronic low back pain: A pilot study. AlternTher Health Med. 2004;10:56–9. 26 G. Chen, D. Clarke, M. Giuliani, A. Gaschler, and A. Knoll, “Combining unsupervised learning and discrimination for 3D actionrecognition,”SignalProcessing,vol.110,pp.67–81,2015. 27 C. Wang, Y. Wang, A. L. Yuille, Mining 3d key-pose-motifs for action recognition, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 2639–2647. 28 R. Vemulapalli, F. Arrate, and R. Chellappa. Human action recognition by representing 3d skeletons as points in a lie group. In IEEE Conference on Computer Vision and Pattern Recognition, pages 588–595. IEEE, 2014 29 M. Z. Uddin, N. D. Thang, J.T. Kim and T.S. Kim, Human Activity Recognition Using Body Joint-Angle Features and Hidden Markov Model. ETRI Journal, vol.33, no.4, Aug., pp.569-579, 2011 30 X. K. Wei and J. Chai. Intuitive interactive human-character posing with millions of example poses. Computer Graphics and Applications, IEEE, 31(4):78–88, 2011. 31 J. Lin, T. Igarashi, J. Mitani, M. Liao, and Y. He. A sketching interface for sitting pose design in the virtual environment. VisualizationandComputerGraphics,IEEETransactions on, 18(11):1979–1991, 2012. 32 C. BenAbdelkader and Y. Yacoob. Statistical estimation of human anthropometry from a single uncalibrated image. InMethods,Applications,andChallengesinComputerassisted Criminal Investigations, Studies in Computational Intelligence. Springer-Verlag, 2008. 33 P. Guan, A. Weiss, A. Balan, and M. J. Black. Estimating human shape and pose from a single image. In Int. Conf. on Computer Vision, ICCV, pages 1381–1388, Sept. 2009 34 T. Batabyal, A. Vaccari, S. T. Acton, Ugrasp: A unified framework for activity recognition and person identification using graph signal processing, in: 2015 IEEE International Conference on Image Processing (ICIP), IEEE, 2015, pp. 3270–3274 .
