
3D Res (2017) 8:18 DOI 10.1007/s13319-017-0124-0

3DR EXPRESS

Performance-Driven Hybrid Full-Body Character Control for Navigation and Interaction in Virtual Environments

Christos Mousas · Christos-Nikolaos Anagnostopoulos

Received: 26 December 2016 / Revised: 18 February 2017 / Accepted: 17 March 2017
© 3D Research Center, Kwangwoon University and Springer-Verlag Berlin Heidelberg 2017

Abstract This paper presents a hybrid character control interface that provides the ability to synthesize in real time a variety of actions based on the user's performance capture. The proposed methodology enables three different performance interaction modules: the performance animation control, which enables the direct mapping of the user's pose to the character; the motion controller, which synthesizes the desired motion of the character based on an activity recognition methodology; and the hybrid control, which lies between the performance animation and the motion controller. With the methodology presented, the user has the freedom to interact within the virtual environment, as well as the ability to manipulate the character and to synthesize a variety of actions that cannot be performed directly by him/her, but which the system synthesizes. Therefore, the user is able to interact with the virtual environment in a more sophisticated fashion. This paper presents examples of different scenarios based on the three different full-body character control methodologies.

Keywords Character animation · Hybrid controller · Navigation · Object manipulation · Virtual reality interaction

Electronic supplementary material The online version of this article (doi:10.1007/s13319-017-0124-0) contains supplementary material, which is available to authorized users.

C. Mousas, Department of Computer Science, Southern Illinois University, Carbondale, IL 62901, USA. e-mail: [email protected]
C.-N. Anagnostopoulos, Department of Cultural Technology and Communication, University of the Aegean, 81100 Mytilene, Greece. e-mail: [email protected]

1 Introduction

Recently, with the rapid development of low-cost motion capture systems, such as Microsoft's Kinect [1] and Asus Xtion [2], as well as the development of various games that use these technologies, users can interact directly with the virtual environment. These solutions enable more sophisticated interaction with the virtual environment than the basic controllers that videogames use [3], because users can use their body poses to interact with the environment. Therefore, the user's immersion in the environment increases, since he/she is provided with the means to take part in events that evolve within it.

Conversely, even if the user is able to interact with various tasks, the basic disadvantage of existing applications that use motion capture technologies is that the system can either synthesize predefined actions or provide direct manipulation of the character's body parts. When using predefined gestures or body activity, the system recognizes the user's activity and synthesizes the desired motion. The ability to synthesize only a limited number of motions restricts the user's actual intention. For this reason, it is assumed that enhancing the actions that the user is able to perform would be beneficial: it would ensure that the user can perform, and that the system can synthesize, not only the predefined motions but also a variety of freeform actions.

Based on these requirements, this paper presents a character control interface that provides the user with the ability to directly manipulate the virtual character or to use his/her body actions to synthesize the desired motion sequences. Considering the variety of actions that can be performed by the user, as well as the variety of motions that cannot be synthesized directly by the system, a novel hybrid character control interface is introduced. This hybrid controller lies between the activity recognition and the direct manipulation methodologies. The activity recognition process communicates with an animation controller that is responsible for animating the virtual character based on a number of motion sequences (most of them related to locomotion). Moreover, the direct manipulation allows the user to manipulate specified body parts of the virtual character based on the performance capture process. The hybrid controller allows the user to perform different actions that are contained in the database of motions, and to enhance these synthesized actions by directly mapping specific body parts of the user to the virtual character. An example of this control method is the ability of the system to recognize a walking motion in conjunction with the user's ability to wave his/her hand during the character's locomotion (see Fig. 1). We believe that such a solution could be ideal for virtual reality applications that require navigation, interaction and/or manipulation of objects [59].

Fig. 1 The three different character animation controllers that the proposed methodology can synthesize: a walking motion based on the motion controller (left), manipulation of the character's body based on the direct manipulation controller (middle), and a walking motion in combination with a hand wave motion based on the hybrid controller (right)

To achieve this hybrid controller, it is necessary to have an efficient methodology that is able to determine whether the user intends to perform an action and whether the user intends to manipulate specific body parts of the character. The proposed methodology satisfies both of these requirements simultaneously. This is achieved as follows. Firstly, by analyzing small amounts of motion capture data based on its motion features, the necessary patterns are defined. Secondly, by using a searching algorithm, all possible motion-joint combinations are found and stored in a database. Finally, based on a searching algorithm implemented for the purposes of the proposed methodology, the system returns the body parts of the character that are manipulated by the user and those that are manipulated by the motion controller.

In addition to the hybrid controller presented in this paper, the user should be able to interact with tasks, objects and the environment. Hence, in the proposed methodology, additional parameters that influence the motion synthesis process are implemented in an action controller that is attached to the character. These parameters are responsible for keeping information about the task that the character performs, allowing interactions with the environment and the objects located within it. Thus, based on the aforementioned action controller, different examples are presented in this paper.

The remainder of the paper is organized as follows. In Sect. 2, related work on animation techniques is presented. In Sect. 3, an overview of the proposed methodology is presented. The methodology used to generate the hybrid motion controller is presented in Sect. 4. Examples of scenarios developed with the proposed methodology are presented in Sect. 5. The implementation and the results obtained when evaluating the proposed methodology are presented in Sect. 6. Finally, in Sect. 7, conclusions are drawn and potential future work is discussed.

2 Related Work

There are various ways to animate a virtual character, but the three most common are data-driven, kinematics, and physics-based techniques, and there has been extensive research in these areas [4, 5]. Interactive character control can also be categorized according to the input device used for the character animation process [6]. In general, the character controller can be a standard input device, such as a keyboard or joystick [7]. Alternatively, it can be more specialized, such as text input [8–10, 60], prosodic features of speech [11], sketch-based interfaces [12, 13], or the body [14–16] or body parts [17] of a user (performance capture), whose motion is captured using motion capture technologies. The methodology presented in this paper lies in the field of data-driven motion synthesis, activity recognition and performance animation techniques, since the system permits one either to synthesize the motion of the character based on an activity recognition methodology or to manipulate the character's body according to

Conversely, performance animation techniques, which are also known as computer puppetry [28, 29], manipulate body parts by using kinematics solutions [16, 30], recognize the performer's action (activity recognition) and display the motion from a database [16, 31–34], or synthesize a new motion sequence by using the existing motion data contained in a database (motion reconstruction) [15, 35–38]. In these three approaches, the input signals retrieved from motion capture devices are used as the main parameters for the motion synthesis process. Hence, for animating the virtual character, methodologies that use accelerometers [31, 35, 39] or optical motion capture devices [40, 41] provide the desired control parameters for the system. Among other subjects, the research community has focused on the ability to synthesize motion sequences as naturally as possible when using a reduced number of inputs. Hence, solutions that use six [42] or even two [42] input parameters are able to reconstruct the motion of the character in real time. These methodologies, which are generally based on a statistical analysis of existing human motion data [15, 43], map the reduced input parameters to a database of postures to find and synthesize those that are most appropriate.

Less attention has been given to a methodology that is able to combine the activity recognition and the direct manipulation of the character's body parts by using input parameters provided by a motion capture device. Ishigaki et al. [44] and Seol et al. [45] proposed solutions that are closest to what this paper presents.
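As an illustration of the hybrid control idea, the following sketch shows one possible way to partition the skeleton between the motion controller and the performance-animation controller: joints whose captured positions deviate strongly from the pose predicted by the recognized activity are assumed to be deliberately manipulated by the user, while the remaining joints follow the database motion. This is a minimal sketch under assumed conventions; the joint names, the deviation test, and the threshold value are all hypothetical, and the paper's actual pattern analysis and searching algorithm are not reproduced here.

```python
import math

# Hypothetical joint set; the paper's actual skeleton definition
# is not given in this excerpt.
JOINTS = ["pelvis", "spine", "head",
          "l_shoulder", "l_elbow", "l_wrist",
          "r_shoulder", "r_elbow", "r_wrist",
          "l_hip", "l_knee", "l_ankle",
          "r_hip", "r_knee", "r_ankle"]

def assign_joint_controllers(user_pose, recognized_pose, threshold=0.15):
    """Split the skeleton between the two controllers.

    user_pose, recognized_pose: dicts mapping joint name -> 3D position
    (in metres). A joint whose captured position deviates from the pose
    predicted by the recognized activity by more than `threshold` is
    treated as user-manipulated (performance animation); the rest are
    driven by the motion controller. The threshold is illustrative only.
    """
    user_joints, motion_joints = [], []
    for j in JOINTS:
        deviation = math.dist(user_pose[j], recognized_pose[j])
        (user_joints if deviation > threshold else motion_joints).append(j)
    return user_joints, motion_joints

def blend_pose(user_pose, recognized_pose, user_joints):
    """Compose the final frame: user-driven joints come from the
    performance capture, all others from the recognized motion clip."""
    return {j: (user_pose[j] if j in user_joints else recognized_pose[j])
            for j in JOINTS}
```

For the walking-plus-waving example of Fig. 1, the wrist and elbow of the waving arm would exceed the deviation threshold and be driven by the user's captured pose, while the lower body would continue to follow the recognized walking clip.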