Background
Animated agents have many uses [7]. They are employed on commercial web pages; in educational, training, and simulation environments; and in entertainment applications. In 3D virtual reality environments they move around and use 3D gestures and pointing actions to guide and explain [35]. According to Johnson's research, animated pedagogical agents adapt their behavior to the learning opportunities that emerge during interaction with the software [19]. They individualize the learning process and promote student motivation. They give the user an impression of realism similar to human interaction, and they engage in continuous dialogue that mimics aspects of human dialogue. A learner who enjoys interacting with a pedagogical agent may have a more positive perception of the overall learning experience and may want to do more [36].
This paper will make frequent reference to several implemented animated pedagogical agents. These agents will be used to illustrate the range of behaviors that such agents are capable of producing and the design requirements that they must satisfy. Some of these behaviors are similar to those found in intelligent tutoring systems, while others are quite different.
Researchers face a common problem: how to create an effective user interface that provides the user with a believable experience. The idea is to create a system that uses intelligent, fully animated agents to engage its users in natural face-to-face conversational interaction. To use agents most powerfully, designers can incorporate findings from agent research concerning speech quality, the personality or ethnicity of the agent, and the frequency and verbosity of reward. Designers can also build what research says about effective human teachers or therapists into the behavior of their agent [10]. In the development of Marni, both kinds of research were incorporated to make the agent more powerful. The virtual tutor Marni gives hints and encouragement to students based on specific errors or error patterns and on built-in knowledge about handling these errors [10].
What makes Marni unique is the technology that animates her. Developed at CSLR, she can produce convincing facial emotions and accurate movements of the lips, tongue, and jaw during speech production. She was built with CU Animate [28], a toolkit designed for the research, development, control, and real-time rendering of 3D animated characters. To depict visual speech accurately [29, 30], the team at CSLR used motion capture data collected from markers attached to a person's lips and face while the person said words containing all permissible sequences of adjacent phonemes in English. The motion capture data for these phoneme sequences are stored in a database and concatenated to create a representation of the lip movements for any English word or sentence. By mapping the motion capture points from concatenated sequences to the vertices of the polygons on the lips and face of the 3D model, and by applying sophisticated algorithms to ensure accurate movements of all associated polygons, the 3D model mimics the movements of a person producing the same speech [28]. Marni can be made to produce arbitrary movements of the eyes, eyebrows, and head using the CU Animate Markup Language (CU-AML), an easy-to-use yet flexible and powerful tool for controlling Marni's face movements by marking up text. CU-AML enables designers to control facial expressions and emotions while Marni narrates a text or provides instructions, hints, encouragement, or feedback to students in learning tasks [10].
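For a sense of how such text markup works in general, the fragment below is purely illustrative: the tag and attribute names are invented for this sketch, and CU-AML's actual syntax is defined by CSLR and described in [10] and [28].

```
<emotion type="happy" level="0.7">
  Nice work! Let's try the next sentence together.
</emotion>
<head nod="slight"> Now listen carefully to this word. </head>
```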
Educational software should focus its teaching instructions on students' needs, attending to different student categories and modeling teaching processes. It should be adapted to student models so as to address the specific difficulties of each student category [36]. The Animated Pedagogical Agent was inserted into the IVTE to provide adapted teaching through strategies drawn from a student model base. The main aim of inserting an Animated Pedagogical Agent into the IVTE is to reach a high pedagogical level, since the agent acts as a tutor in the teaching-learning process [36].
The Intelligent Virtual Teaching Environment project is justified by the new teaching and learning technologies it brings to intelligent tutoring systems, improving the efficiency of the teaching processes carried out by the Animated Pedagogical Agent and enhancing the cognitive process [36].
According to Oliveira, the Animated Pedagogical Agent in the IVTE software is a cognitive agent, given its autonomy, its memory of past actions, its knowledge of the environment and of the other agents in its society, its planning for the future, and its pro-activity [37]. Cognitive agents are knowledge-based; that is, they show intelligent behavior in many situations and have both implicit and explicit knowledge representations [36].
In the IVTE software, the Animated Pedagogical Agent is represented by a "worm" called Guilly, whose name was chosen during the field research; it selects the appropriate teaching strategy according to the specific student model [36].
The environment is based on non-immersive virtual reality, in which the student has the feeling of being in a real environment. The student has only a partial view of the environment and can interact only with elements in his or her immediate vicinity [36].
Our Approach
Fig. 1 illustrates the system from the user's perspective, while a simple model of the system is shown in Fig. 2. A more detailed description follows. The system is composed of three main components:
1. the animated tutor
2. the behavior logic
3. the animated tutoring system
A. The animated tutor
The animated tutor is a 3D virtual character capable of full face and body animation using skinning and morphing, based on the character animation framework called the CDLU toolkit [5]. Integrating the virtual character into the desktop application was a simple matter, which served this purpose well. We used Java Swing components, which allowed us to create an avatar component and add it to a container that shows the animated tutor and plays the animations.
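A minimal sketch of this integration is shown below. The AvatarComponent class is a hypothetical stand-in for the CDLU toolkit's avatar component (the toolkit's real API is not reproduced here); the Swing wiring around it is the part the text describes.

```java
import javax.swing.JComponent;
import javax.swing.JFrame;
import javax.swing.SwingUtilities;
import java.awt.BorderLayout;
import java.awt.Dimension;

// Hypothetical stand-in for the CDLU toolkit's avatar component;
// the real toolkit would render and animate the 3D character here.
class AvatarComponent extends JComponent {
    void playAnimation(String name) {
        // Placeholder: the real component would play the named animation.
        repaint();
    }
    @Override
    public Dimension getPreferredSize() {
        return new Dimension(320, 480);
    }
}

public class TutorWindow {
    public static void main(String[] args) {
        SwingUtilities.invokeLater(() -> {
            JFrame frame = new JFrame("Animated Tutor");
            AvatarComponent avatar = new AvatarComponent();
            // Creating the avatar component and adding it to a container
            // is all the integration the desktop application needs.
            frame.getContentPane().add(avatar, BorderLayout.CENTER);
            frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
            frame.pack();
            frame.setVisible(true);
            avatar.playAnimation("greet");
        });
    }
}
```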
B. The behavior logic
The behavior logic is the core component of the system. It manages communication between the animated tutor and the animated tutoring system and decides which action the animated tutor will perform. The behavior logic reacts to events coming from the animated tutoring system and sends instructions to the animated tutor.
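A minimal sketch of this decision step follows; the event names and the instruction record are assumptions, since the paper does not list the actual event vocabulary.

```java
// Hypothetical events emitted by the animated tutoring system.
enum TutorEvent { LECTURE_OPENED, CORRECT_ANSWER, WRONG_ANSWER }

// Hypothetical instruction sent to the animated tutor.
record TutorInstruction(String animation, String utterance) { }

class BehaviorLogic {
    // React to an event from the tutoring system and decide which
    // action the animated tutor should perform.
    TutorInstruction onEvent(TutorEvent event) {
        return switch (event) {
            case LECTURE_OPENED -> new TutorInstruction("greet", "Welcome to today's lecture.");
            case CORRECT_ANSWER -> new TutorInstruction("nod", "Well done!");
            case WRONG_ANSWER   -> new TutorInstruction("shake_head", "Not quite. Try again.");
        };
    }
}
```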
C. The animated tutoring system
The animated tutoring system is the front-end component through which the user interacts with the whole system. From the user's point of view, the animated tutor is part of the animated tutoring system, which was one of the goals of this work. In its current form it is a simple data-file reader: all relevant data is provided as a text file whose path is passed to the system, supplying the behavior logic with the information it needs. A simplified overview of the runtime component interactions follows. The process flows in this manner:
1. A user (student) connects to the online tutoring system.
2. The behavior logic retrieves the animation sequence from a text file.
3. The user interacts with the online tutoring system and triggers an event.
4. The behavior logic receives the event, decides what to do, and sends instructions to the animated tutor.
5. The animated tutor speaks and/or gestures.
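The sketch below walks through these steps under the same assumptions; the file name and the one-command-per-line format are illustrative, not the system's actual data layout.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

public class TutoringSession {
    public static void main(String[] args) throws IOException {
        // Step 2: the behavior logic retrieves the animation sequence
        // from a text file whose path is handed to the system.
        List<String> sequence = Files.readAllLines(Path.of("lecture01.txt"));

        // Step 3: each line stands in for an event the user triggers
        // while interacting with the tutoring system.
        for (String command : sequence) {
            // Step 4: the behavior logic decides what to do ...
            String animation = command.startsWith("say ") ? "talk" : "gesture";
            // Step 5: ... and the animated tutor speaks and/or gestures.
            System.out.println("tutor plays '" + animation + "' for: " + command);
        }
    }
}
```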
Implementation
Software design involves conceiving, planning out, and specifying the externally observable characteristics of the software product. The design process comprises data design, architectural design, and user interface design; these are explained in the following section. The goal of the design process is to provide a blueprint for implementation, testing, and maintenance activities.
Index Files
The index files are text files that list each aspect of the lecture, divided into subsections for ease of reference. The Notepad application was used to create these files. The format for an index file is as follows:
A lecture comprises the title of that lecture and the various subsections included in it. The primary index file records the course the student is registered for and, for each course, an outline of the topics to be taught that semester. The primary index file follows the same general format described above; in addition, it contains several more features that allow the application to differentiate each lecture and course. It is as follows:
---
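Purely as an illustration of such a primary index file, the sketch below shows one possible layout; the course name, topic titles, file names, and separators are all assumptions rather than the system's actual format.

```
Course. Introduction to Computing | CS101
Lecture. 1 | History of Computers | lecture01.txt
Lecture. 2 | Hardware Basics | lecture02.txt
Quiz. 1 | Hardware Basics Quiz | quiz01.txt
```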
Lecture files
Based on the outline topic the user selected, the discussion for that section is displayed in a box below the agent while the agent describes the text. If an image is associated with the discussion, another window pops up with the image and its corresponding text. The topic file contains several references for the agent to work with as needed. The format for the lecture file is as follows:
---
LoadImage |
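Building on the LoadImage command shown above, a hypothetical lecture file might look like the sketch below; the Text keyword, the pipe-separated fields, and the image file name are assumptions.

```
Text | A computer system consists of hardware and software working together.
LoadImage | cpu_diagram.png | The central processing unit and its buses.
Text | The CPU fetches and executes instructions stored in memory.
```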
In future versions of this application, video files will be added to lectures and the agent will be able to describe the contents of the video.
Quiz files
The quiz file contains the questions and answers (the correct answer is indicated with the question), any comments the instructor has for the user, and options to shuffle both questions and answers. The format is as follows; note that the commands listed below control the shuffling of questions and answers:
Comment.
Shuffle Answers.
Don't Shuffle Answers.
Shuffle Questions.
Don't Shuffle Questions.
Shuffle These Answers.
Don't Shuffle These Answers.
Question.
Question.
Adding the keyword “these” shuffles or unshuffles only the current question and answer.
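As a hypothetical illustration of these commands in use (the Answer keyword and the asterisk marking the correct answer are assumptions; only the shuffle commands and the Question and Comment keywords come from the description above):

```
Shuffle Questions.
Don't Shuffle Answers.
Question. What does CPU stand for?
Answer. Central Processing Unit. *
Answer. Computer Power Unit.
Comment. The CPU is the part of the computer that executes instructions.
Shuffle These Answers.
Question. Which component stores data permanently?
Answer. The hard disk. *
Answer. RAM.
Comment. RAM is volatile; its contents are lost when power is switched off.
```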
Evaluation
We conducted a preliminary summative evaluation to investigate the strengths and weaknesses of the Virtual Tutor design. We used the following instruments to evaluate the agent: we observed the subjects' perception of the agent and gave them questionnaires. Subjects were given quick instructions on which version they would be interacting with, and were expected to have better comprehension and retention of the information after viewing. The subjects were then given a number of tasks to complete and were expected to try to solve these tasks without any further help from the agent. It was hoped that, with the inclusion of a realistic agent, better comprehension and retention rates would be observed.
The preliminary testing was done with 30 undergraduate students. The viewing condition for each subject was chosen at random: a third of the subjects viewed the agent with no animation, another third viewed the agent with head movements but no emotion, and the remaining third viewed it with prominence movements and emotion. After viewing the agent, the subjects filled out questionnaires. The questionnaires indicated that the agent design was suitable. The fully animated storyteller elicited the most positive results, and the survey questions for this condition were rated the highest, as illustrated in the figure below.
Figure: preliminary testing
Preliminary results showed that facial expressions and head movements have a great impact on students' impressions of, and engagement with, the virtual storyteller.
Discussion
Demonstrating a task may be far more effective than trying to describe how to perform it, especially when the task involves spatial motor skills, and the experience of seeing a task performed is likely to lead to better retention [20]. Moreover, an interactive demonstration given by an agent offers a number of advantages over showing students a videotape. Students have the opportunity to move around in the environment and view the demonstration from different perspectives.
Because of significant advances in graphics technologies over the past decade, tutoring systems increasingly incorporate visual aids, ranging from automatically generated maps and charts to 3D simulations of physical phenomena and full-scale 3D simulated worlds [20]. To draw students' attention to a specific aspect of a chart, graphic, or animation, tutoring systems use devices such as arrows and color highlighting. An animated agent, however, can guide a student's attention with the most common and natural non-verbal cues: gaze and gesture. In human-computer interaction, people often interpret interactions with the computer as interactions with humans. Social agency theory suggests that social cues such as the agent's face and voice motivate this interpretation. In two off-line experiments in which comprehension scores and liking ratings were collected, we found that participants preferred natural agents with natural voices [27].
According to Louwerse et al. [27], when building an intelligent tutoring system with an embedded pedagogical agent, there are several relevant questions to take into account: Do users enjoy interacting with a computational agent? Do users interact with the agent as they would with a human? Do animated conversational tutoring agents yield pedagogical benefits [27]? In answering these questions, it is recommended that a pedagogical agent have a human-like persona, to better simulate social contexts and to promote learner-agent interaction [21]. The agent in its current form performs this task at the most basic level of interaction. Future implementations of the Virtual Tutor will approximate human behavior more effectively by providing correct and appropriate social cues. In addition to the types of interaction described above, animated pedagogical agents need many of the same pedagogical abilities as other intelligent tutoring systems. For instance, it is useful for them to be able to answer questions, generate explanations, ask probing questions, and track the learners' skill levels. An animated pedagogical agent must be able to perform these functions while simultaneously responding to the learners' actions. Thus the context of face-to-face interaction has a pervasive influence on the pedagogical functions incorporated in an animated pedagogical agent [20].
Conclusion
Animated pedagogical agents offer enormous promise for interactive learning environments. It is becoming apparent that this new generation of learning technologies will have a significant impact on education and training. With the technological advances being made and applied to human-human tutoring, animated pedagogical agents are slowly but surely becoming something akin to what ITS founders envisioned at the inception of the field. Now, rather than being restricted to textual dialogue on a terminal, animated pedagogical agents can perform a variety of tasks in surprisingly lifelike ways.
References
[1] R. Atkinson, "Optimizing Learning From Examples Using Animated Pedagogical Agents," Journal of Educational Psychology, vol. 94, no. 2, p. 416, 2002. [online] Academic Search Premier Database [Accessed: August 11, 2009].
[2] A. L. Baylor, R. Cole, A. Graesser and L. Johnson, Pedagogical agent research and development: Next steps and future possibilities, in Proceedings of AI-ED (Artificial Intelligence in Education), Amsterdam July, 2005.
[3] A. L. Baylor and S. Kim, “Designing nonverbal communication for pedagogical agents: When less is more,” Computers in Human Behavior, vol.25 no.2, pp.450-457, 2009.
[4] A. L. Baylor and J. Ryu, “Does the presence of image and animation enhance pedagogical agent persona?” Journal of Educational Computing Research, vol. 28, no. 4, pp.373-395, 2003.
[5] A. L. Baylor and R. B. Rosenberg-Kima, Interface agents to alleviate online frustration, International Conference of the Learning Sciences, Bloomington, Indiana, 2006.
[6] A. L. Baylor, R. B. Rosenberg-Kima and E. A. Plant, Interface Agents as Social Models: The Impact of Appearance on Females’ Attitude toward Engineering, Conference on Human Factors in Computing Systems (CHI) 2006, Montreal, Canada, 2006.
[7] J. Cassell, Y. Nakano, T. Bickmore, C. Sidner & C. Rich, Annotating and generating posture from discourse structure in embodied conversational agents, in Workshop on representing, annotating, and evaluating non-verbal and verbal communicative acts to achieve contextual embodied agents, Autonomous Agents 2001 Conference, Montreal, Quebec, 2001.
[8] R. E. Clark and S. Choi, “Five Design Principles for Experiments on the Effects of Animated Pedagogical Agents,” J. Educational Computing Research, vol. 32, no. 3, pp.209-225, 2005.
[9] R. Cole, J. Y. Ma, B. Pellom, W. Ward, and B. Wise, “Accurate Automatic Visible Speech Synthesis of Arbitrary 3D Models Based on Concatenation of Diviseme Motion Capture Data,” Computer Animation & Virtual Worlds, vol. 15, no.5, pp.485-500, 2004.
[10] R. Cole, S. van Vuuren, B. Pellom, K. Hacioglu, J. Ma, J. Movellan, S. Schwartz, D. Wade- Stein, W. Ward and J. Yan, “Perceptive Animated Interfaces: First Steps Toward a New Paradigm for Human Computer Interaction,” Proceedings of the IEEE: Special Issue on Human Computer Interaction, vol. 91, no. 9, pp.1391-1405, 2003.
[11] M. J. Davidson, "PAULA: A Computer-Based Sign Language Tutor for Hearing Adults," 2006. [online] Available: www.facweb.cs.depaul.edu/elulis/Davidson.pdf [Accessed: June 15, 2008].
[12] D. M. Dehn and S. Van Mulken, “The impact of animated interface agents: a review of empirical research,” International Journal of Human-Computer Studies, vol. 52, pp.1–22, 2000.
[13] A. Graesser, K. Wiemer-Hastings, P. Wiemer-Hastings and R. Kreuz, “AutoTutor: A simulation of a human tutor,” J. Cognitive Syst. Res., vol. 1, pp. 35–51, 1999.
[14] A. C. Graesser and X. Hu, “Teaching with the Help of Talking Heads,” Proceedings of the IEEE International Conference on Advanced Learning Techniques, pp. 460-461, 2001.
[15] A. C. Graesser, K. VanLehn, C. P.Rosé, P. W. Jordan and D. Harter, “Intelligent tutoring systems with conversational dialogue,” AI Mag, vol. 22, no.4, pp. 39-51, 2001.
[16] A. Graesser, M. Jeon and D. Dufty, “Agent Technologies Designed to Facilitate Interactive Knowledge Construction,” Discourse Processes, vol. 45, pp.298-322, 2008.
[17] P. M. Greenfield and R. R. Cocking, Interacting with Video: Advances in Applied Developmental Psychology, vol. 11, Norwood, NJ: Ablex Publishing Corp., 1996, p. 218.
[18] X. Hu and A. C. Graesser, "Human use regulatory affairs advisor (HURAA): Learning about research ethics with intelligent learning modules," Behavior Research Methods, Instruments, & Computers, vol. 36, no. 2, pp. 241-249, 2004.
[19] W. L. Johnson, "Pedagogical Agents," ICCE98 - Proceedings of the Sixth International Conference on Computers in Education, China, 1998. [online] Available: http://www.isi.edu/isd/carte/ped_agents/pedagogical_agents.html [Accessed: June 15, 2008].
[20] W. L. Johnson and J. T Rickel. “Animated Pedagogical Agents: Face-to-Face Interaction in Interactive Learning Environments,” International Journal of Artificial Intelligence in Education, vol. 11, pp. 47- 78, 2000.
[21] Y. Kim and A. Baylor, “Pedagogical Agents as Learning Companions: The Role of Agent Competency and Type of Interaction,” Educational Technology Research & Development, vol. 54, no. 3, pp.223- 243, 2006.
[22] A. Laureano-Cruces, J. Ramírez-Rodríguez, F. De Arriaga, and R. Escarela-Pérez, “Agents control in intelligent learning systems: The case of reactive characteristics,” Interactive Learning Environments, vol. 14, no. 2, pp.95-118, 2006.
[23] M. Lee & A. L. Baylor, “Designing Metacognitive Maps for Web-Based Learning,” Educational Technology & Society, vol. 9, no.1, pp.344-348, 2006.
[24] J. C. Lester, S. A. Converse, S. E. Kahler, S. T. Barlow, B. A. Stone, and R. S. Bhogal, “The persona effect: Affective impact of animated pedagogical agents,” in Proceedings of CHI '97, pp.359-366, 1997.
[25] J. C. Lester, B. A. Strone and G. D. Stelling, “Lifelike Pedagogical Agents for Mixed-Initiative Problem Solving in Constructivist Learning Environments,” User Modeling and User-Adapted Interaction, vol. 9, pp.1-44, 1999.
[26] J. C. Lester, J. L. Voerman, S. G. Towns and C. B. Callaway, “Deictic Believability: Coordinated Gesture, Locomotion, and Speech in Lifelike Pedagogical agents,” Applied Artificial Intelligence, vol. 13, no. 4, pp. 383-414, 1999.
[27] M. Louwerse, A. Graesser, L. Shulan and H. H. Mitchell, “Social Cues in Animated Conversational Agents,” Applied Cognitive Psychology, vol. 19, pp. 693-704, 2005.
[28] J. Ma, J. Yan and R. Cole, CU Animate: Tools for Enabling Conversations with Animated Characters, in International Conference on Spoken Language Processing (ICSLP), Denver, 2002.
[29] J. Ma, R. Cole, B. Pellom, W. Ward and B. Wise, "Accurate Automatic Visible Speech Synthesis of Arbitrary 3D Models Based on Concatenation of Di-Viseme Motion Capture Data," Journal of Computer Animation and Virtual Worlds, vol. 15, no. 5, pp. 485-500, 2004.
[30] J. Ma and R. Cole, "Animating Visible Speech and Facial Expressions," Visual Computer, vol. 20, no. 2-3, pp. 86-105, 2004.
[31] V. Mallikarjunan, (2003) “Animated Pedagogical Agents for Open Learning Environments,”[online] Available: filebox.vt.edu/users/vijaya/ITMA/portfolio/docs/report.doc [Accessed December 9, 2009]
[32] S. C. Marsella and W. L. Johnson, An instructor's assistant for team-training in dynamic multi-agent virtual worlds in Proceedings of the Fourth International Conference on Intelligent Tutoring Systems (ITS '98), no. 1452 in Lecture Notes in Computer Science, pp. 464-473, 1998.
[33] D. W. Massaro, "Symbiotic value of an embodied agent in language learning," Proceedings of the 37th Annual Hawaii International Conference on System Sciences (HICSS'04), Track 5, vol. 5, 2004.
[34] “Animated 3-D Boosts Deaf Education; ‘Andy’ The Avatar Interprets By Signing” sciencedaily.com March 2001, [online] ScienceDaily, Available: http://www.sciencedaily.com/releases/2001/03/010307071110.htm [Accessed April 11, 2008]
[35] A. Nijholt, "Towards the Automatic Generation of Virtual Presenter Agents," Informing Science Journal, vol. 9, pp. 97-110, 2006.
[36] M. A. S. N. Nunes, L. L. Dihl, L. C. de Olivera, C. R. Woszezenki, L. Fraga, C. R. D. Nogueira, D. J. Francisco, G. J. C. Machado and M. G. C. Notargiacomo, “Animated Pedagogical Agent in the Intelligent Virtual Teaching Environment,” Interactive Educational Multimedia, vol. 4, pp.53-61, 2002.
[37] L. C. de Olivera, M. A. S. N. Nunes, L. L. Dihl, C. R. Woszezenki, L. Fraga, C. R. D. Nogueira, D. J. Francisco, G. J. C. Machado and M. G. C. Notargiacomo, “Animated Pedagogical Agent in Teaching Environment,” [online] Available: http://www.die.informatik.uni- siegen.de/dortmund2002/web/web/nunes.pdf [Accessed: June 30, 2008]
[38] N.K. Person, A.C. Graesser, R.J. Kreuz, V. Pomeroy, and the Tutoring Research Group, “Simulating human tutor dialog moves in AutoTutor,” International Journal of Artificial Intelligence in Education, in press 2001.
[39] P. Suraweera and A. Mitrovic, "An Animated Pedagogical Agent for SQL-Tutor," 1999. [online] Available: http://www.cosc.canterbury.ac.nz/research/reports/HonsReps/1999/hons_9908.pdf [Accessed: August 11, 2009].