OPTIMIZING E-LEARNING IN GENETICS: CREATING AND COMPARING THREE CATEGORIES OF MULTIMEDIA

by Jenny R. Wang

A thesis submitted to Johns Hopkins University in conformity with the requirements for the degree of Master of Arts

Baltimore, Maryland
March 2020

© 2020 Jenny Wang
All Rights Reserved

Abstract

Online learning is rapidly expanding in the United States. One feature of online

learning is the increased use of animations, especially in the sciences. However, there are

contradictions within the literature regarding the effectiveness of animations in scientific

education. Some studies claim that animation is the best modality for teaching scientific

topics, while others have shown that it increases cognitive load, leading to reduced

effectiveness. This thesis will test these opposing positions by measuring the

effectiveness (retention and engagement) across three types of multimedia that we

created: (i) a 6 minute 38 second traditional 2D animation, (ii) a 6 minute 43 second

whiteboard animation, and (iii) an 8 minute 11 second PowerPoint video edited together

from lecture videos. This three-way comparative approach will determine intrinsic

differences and similarities across multimedia.

We recruited study participants from Amazon Mechanical Turk (N = 168), split into six groups of 28 differentiated by video order. Retention and engagement scores were collected via survey in JHM Qualtrics. Using single factor ANOVA, we found no significant difference among the three modalities for retention. However, the whiteboard animation performed better on word recall than the other two videos, suggesting that simultaneous narration with written text leads to better learner outcomes. We also found

that the two animation formats performed better (p < 0.05) than the PowerPoint lecture

for engagement (enjoyment, attention, understanding). This project aims to provide

insight for e-Learning creators into which modalities work best for engaging and teaching

learners while also considering monetary costs.

Jenny Wang


Chairpersons of the Supervisory Committee

Ada Hamosh, MD, MPH, Preceptor
Dr. Frank V. Sutland Professor of Pediatric Genetics, Department of Pediatrics
Professor, McKusick-Nathans Department of Genetic Medicine
The Johns Hopkins University School of Medicine

Jeffrey Day, MD, MA, Advisor
Instructor, Department of Art as Applied to Medicine
The Johns Hopkins University School of Medicine

Carolyn Applegate, MGC, CGC, Content Advisor
Senior Genetic Counselor, McKusick-Nathans Department of Genetic Medicine
The Johns Hopkins University School of Medicine


Acknowledgements

I want to express my deepest appreciation and gratitude to those who have accompanied me on this journey of discovery and research. This project could not have been accomplished without the support of these wonderful individuals.

I would like to thank my faculty advisor, Dr. Jeffrey Day, for his steadfast guidance,

ever-so-timely advice, and unwavering support throughout this project. His encouraging

energy throughout this project helped me never lose sight of my goals.

A huge thank you to my preceptor, Dr. Ada Hamosh, for imbuing this project with

expert advice and energetic zeal. Her continued support has proved invaluable to the

conception and completion of this study.

I would like to thank the instructors of the Online Genetic Assistant Training Program at

Johns Hopkins University, Carolyn Applegate and Kelsey Guthrie, for providing

invaluable resources via the OGATP and their critical feedback throughout the asset

creation phase of this project. Additional thanks to Kelsey for bringing Dr. Sophie to life

through her narration. A special thank you to Lindsay Ledebur from the Office of

Online Education at JHU for her e-Learning expertise and steadfast support.

Sincerest gratitude to the Sutland and Pakula Family through the Dr. Frank V.

Sutland Chair and the Vesalius Trust for providing the funding needed to complete this

study. The study was also made possible via administrative assistance from Dacia Balch,

Cory Sandone and Carol Pfeffer.


Statistical consultation was provided by Dr. Dhananjay Vaidya, Associate Professor of

Medicine in the General Internal Medicine Department. His guidance helped us build a strong statistical foundation for this study.

To the entire faculty and staff of the Department of Art as Applied to Medicine, thank you for your guidance, support and shared camaraderie. To my classmates, Noelle

Burgess, William Guzman, Jamie Peterson, Kellyn Sanders, Morgan Summerlin and Helen Tang – thanks for the fun times and belly laughs. You have all helped me grow, both as an individual and as a medical illustrator.

Finally, a big thank you to my family and friends for providing me with a shoulder to lean on over these past few months. Special thanks to Lawrance Lee for always being my number one fan.


Table of Contents

Abstract ...... ii

Chairpersons of the Supervisory Committee ...... iii

Acknowledgements ...... iv

Table of Contents ...... vi

List of Tables ...... viii

List of Figures ...... ix

Introduction ...... 1

What is e-Learning? ...... 1
The Online Genetic Assistant Training Program ...... 2
Subtypes of Animation ...... 3
Cognitive Theory of Multimedia Learning ...... 6
Principles of Multimedia Learning ...... 7
Project Objectives ...... 8
Intended Audience ...... 8

Materials and Methods ...... 9

Content Preparation ...... 9
Story Outline ...... 11
Script Writing ...... 12
Cognitive Theory of Multimedia Learning and Content Creation ...... 12
Traditional Animation ...... 16
Whiteboard Animation ...... 35
Study Design ...... 41
Data Analysis ...... 61

Comparing Change (Δ) in Retention for Overall Quiz Scores ...... 61
Comparing Retention for Individual Questions ...... 61
Comparing Engagement for Full Length Videos ...... 62
Comparing Overall Comparative Engagement ...... 62


IRB ...... 63
Project Funding ...... 63
Statistical Consultation ...... 63

Results ...... 64

Multimedia Produced ...... 64
Mean Change (Δ) in Test Scores ...... 65
Individual Question Measurements ...... 66
Overall Full Video Engagement ...... 69
Comparative Engagement ...... 75
Access to Assets Resulting from this Thesis ...... 80

Discussion ...... 81

Overall Retention ...... 81
Individual Retention ...... 81
Engagement ...... 86
Estimated Costs ...... 87
Future Considerations ...... 90

Conclusion ...... 92

APPENDIX A: Whiteboard Animation Storyboard ...... 94

APPENDIX B: Traditional animation storyboard ...... 101

APPENDIX C: Qualtrics survey module ...... 108

APPENDIX D: Survey Free-Response Comments ...... 118

References ...... 121

Vita ...... 126


List of Tables

Table 1. Subset of Principles of Multimedia Learning ...... 7
Table 2. Subdivision of study groups ...... 48
Table 3. Pros and Cons of Amazon MTurk ...... 57
Table 4. Data collection timetable ...... 60
Table 5. Single factor ANOVA analysis for Mean change in overall test scores ...... 65
Table 6. Single factor ANOVA analysis for Mean Enjoyment ...... 71
Table 7. Unpaired T-Test of Unequal Variance for Mean Enjoyment ...... 71
Table 8. Single factor ANOVA analysis for Mean Attention ...... 72
Table 9. Unpaired T-Test of Unequal Variance for Mean Attention ...... 73
Table 10. Single factor ANOVA analysis for Mean Understanding ...... 74
Table 11. Unpaired T-Test of Unequal Variance for Mean Understanding ...... 74
Table 12. Single factor ANOVA analysis for Mean Comparative Enjoyment ...... 76
Table 13. Unpaired T-Test of Unequal Variance for Mean Comparative Enjoyment ...... 76
Table 14. Single factor ANOVA analysis for Mean Comparative Attention ...... 77
Table 15. Unpaired T-Test of Unequal Variance for Mean Comparative Attention ...... 78
Table 16. Unpaired T-Test of Unequal Variance for Mean Comparative Understanding ...... 80
Table 17. Time spent on animations ...... 88


List of Figures

Figure 1. Screenshot of a PowerPoint lecture video ...... 3
Figure 2. Screenshot of a traditional 2D animation ...... 4
Figure 3. Screenshot of a whiteboard animation ...... 5
Figure 4. Pedigree lecture video on Blackboard ...... 9
Figure 5. Spatial Contiguity Principle in traditional animation ...... 14
Figure 6. Storyboard revisions ...... 16
Figure 7. Initial character style sketches ...... 17
Figure 8. Separating layers in Adobe Illustrator ...... 18
Figure 9. Importing Illustrator layers into After Effects ...... 19
Figure 10. Cool and warm color palettes to visually distinguish sections ...... 20
Figure 11. DUIK interface and slider type ...... 21
Figure 12. Parented Illustrator layers to sliders ...... 22
Figure 13. Illustrator layers parented to "Facial Features" null ...... 22
Figure 14. Keyframe positions for "facial features" null ...... 23
Figure 15. Connecting keyframed X position layer to properties of Controller ...... 24
Figure 16. DUIK Rigging menu with "Arm" selected ...... 26
Figure 17. Simplified Structure and Controller setup using DUIK ...... 27
Figure 18. Structures involved in Dr. Sophie's walk cycle, except arms ...... 28
Figure 19. Walk Cycle feature of DUIK ...... 29
Figure 20. Phonation Chart used for Dr. Sophie's mouth movements ...... 30
Figure 21. Dr. Sophie head rig with mouth shape slider ...... 31
Figure 22. Syncing mouth movements to audio waveform ...... 32
Figure 23. Color Contrast Analyzer ...... 34
Figure 24. Procreate interface with transparent grouped Scene layers ...... 36
Figure 25. Selecting "Mask" icon in AutoWhiteboard ...... 37
Figure 26. Masking out PNG layers in AutoWhiteboard ...... 38
Figure 27. All components masked out in AutoWhiteboard ...... 38
Figure 28. Using "Separate masks" toggle in AutoWhiteboard ...... 39
Figure 29. Applying style preset in AutoWhiteboard ...... 39
Figure 30. Speed Control in AutoWhiteboard ...... 40


Figure 31. Erasing in AutoWhiteboard ...... 40
Figure 32. Qualtrics Survey Flow ...... 42
Figure 33. Adding embedded data in Qualtrics ...... 45
Figure 34. Naming embedded data in Qualtrics ...... 45
Figure 35. Random number generator ...... 46
Figure 36. Piping embedded randomized ID text ...... 46
Figure 37. End result of adding embedded data ...... 47
Figure 38A. Project page on Amazon MTurk ...... 48
Figure 38B. Project page on Amazon MTurk ...... 49
Figure 38C. Project page on Amazon MTurk ...... 49
Figure 39. Excluding Workers ...... 51
Figure 40. CSV file setup in Excel ...... 51
Figure 41. Uploading .CSV file to "Manage Workers" page on MTurk ...... 52
Figure 42. Assigning qualifications to Workers ...... 53
Figure 43. MTurk survey description ...... 54
Figure 44. HIT progression screen after batch submission ...... 55
Figure 45. Assignment approval after batch has been completed ...... 56
Figure 46. Summary of the MTurk Workflow sequence ...... 57
Figure 47. Timer module in Qualtrics ...... 59
Figure 48. Full-length multimedia produced for testing ...... 64
Figure 49. Shortened clips produced for testing ...... 64
Figure 50. Mean change in overall test scores ...... 65
Figure 51. Easiness of Questions Based on Pre-Test Scores ...... 66
Figure 52. Question 1 Difference in Pre/Post-Test Scores ...... 67
Figure 53. Question 2 Difference in Pre/Post-Test Scores ...... 67
Figure 54. Question 3 Difference in Pre/Post-Test Scores ...... 68
Figure 55. Question 4 Difference in Pre/Post-Test Scores ...... 68
Figure 56. Question 5 Difference in Pre/Post-Test Scores ...... 69
Figure 57. Question 6 Difference in Pre/Post-Test Scores ...... 69
Figure 58. Mean enjoyment for full length videos ...... 70
Figure 59. Mean attention for full length videos ...... 72


Figure 60. Mean understanding for full length videos ...... 73
Figure 61. Mean comparative enjoyment for short clips ...... 75
Figure 62. Mean comparative attention for short clips ...... 77
Figure 63. Mean comparative understanding for short clips ...... 79
Figure 64. Single factor ANOVA analysis for Mean Comparative Understanding ...... 79
Figure 65. Question 1 context from video screenshots ...... 83
Figure 66. Question 4 context from video screenshots ...... 84
Figure 67. Question 2 context from video screenshots ...... 84
Figure 68. Question 3 context from video screenshots ...... 84
Figure 69. Question 5 context from video screenshots ...... 85
Figure 70. Question 6 context from video screenshots ...... 85
Figure 71. Video comparison study summary. Not all text intended to be read ...... 93
Figure 72. Whiteboard animation storyboard, page 1 ...... 94
Figure 73. Whiteboard animation storyboard, page 2 ...... 95
Figure 74. Whiteboard animation storyboard, page 3 ...... 96
Figure 75. Whiteboard animation storyboard, page 4 ...... 97
Figure 76. Whiteboard animation storyboard, page 5 ...... 98
Figure 77. Whiteboard animation storyboard, page 6 ...... 99
Figure 78. Whiteboard animation storyboard, page 7 ...... 100
Figure 79. Traditional animation storyboard, page 1 ...... 101
Figure 80. Traditional animation storyboard, page 2 ...... 102
Figure 81. Traditional animation storyboard, page 3 ...... 103
Figure 82. Traditional animation storyboard, page 4 ...... 104
Figure 83. Traditional animation storyboard, page 5 ...... 105
Figure 84. Traditional animation storyboard, page 6 ...... 106
Figure 85. Traditional animation storyboard, page 7 ...... 107
Figure 86. Qualtrics module, page 1 ...... 108
Figure 87. Qualtrics module, page 2 ...... 109
Figure 88. Qualtrics module, page 3 ...... 110
Figure 89. Qualtrics module, page 4 ...... 111
Figure 90. Qualtrics module, page 5 ...... 112


Figure 91. Qualtrics module, page 6 ...... 113
Figure 92. Qualtrics module, page 7 ...... 114
Figure 93. Qualtrics module, page 8 ...... 115
Figure 94. Qualtrics module, page 9 ...... 116
Figure 95. Qualtrics module, page 10 ...... 117


Introduction

What is e-Learning?

e-Learning, which is sometimes synonymous with online learning, is the use of

electronic technologies to teach material in lieu of a traditional classroom setting. This

type of education is becoming increasingly popular because it provides a way to learn

from any location or time zone, provided the learner has online access and the

appropriate technology (Lewis, 2014). In 2017, 33.1% of higher education students in the

US took at least one online course, an increase of 2% from the previous year (Ginder,

Kelly-Reid and Mann, 2018). One feature of e-Learning is the use of animations, especially in the sciences. However, there have been many contradicting results regarding the impact of animations on educational outcomes (Lewis, 2014; Betrancourt, 2005).

Some studies have advocated for more animation and propose that it is the best modality for teaching scientific topics (Falvo, 2008). However, other studies have shown that animations increase cognitive load, thus reducing their overall effectiveness (Wong,

2012). Based on these contradictions, one practical approach is not to determine if, but how an animation affects learning (Turkay, 2016). How an animation affects learning depends greatly on how the animation is structured: there are many categories of animations that all work best in specific educational scenarios (Plass, Homer and

Hayward, 2009). To shed more light on animation effectiveness, we will measure

effectiveness (retention and engagement) across three different types of multimedia: (i)

PowerPoint lecture (existing material serving as the control), and two newly created

animations: (ii) a traditional animation and (iii) a whiteboard animation. We will use this


three-way comparative approach to examine intrinsic differences and similarities across

multimedia.

The Online Genetic Assistant Training Program

Established in 2019, the Online Genetic Assistant Training Program (OGATP) at the

Johns Hopkins University (JHU) School of Medicine is the very first of its kind, providing courses for those interested in becoming genetic assistants, or an added learning opportunity for those already employed as genetic assistants. The program is completely online and has students enrolled across North America. Lectures, quizzes, exams, and notifications can all be found within the Blackboard application portal through JHU. Currently, lectures are around 40 minutes long and split into roughly 6- to 8-minute segments. They are in PowerPoint slide format, with a lecturer “floating head” superimposed in the bottom left corner. The lecturer provides accompanying narration to the material that is being presented on-screen.

The instructors at OGATP noticed that some topics were difficult for students to visualize conceptually and felt that added animations could help bolster the existing material. This thesis will choose a topic from the curriculum deemed difficult by students, “Interpreting a Genetic Pedigree”, and measure learner responses to different animation interpretations of the material. The subsequent study will help inform the

OGATP on online teaching strategy for future animations.


Subtypes of Animation

Animations combine auditory and visual stimuli (i.e. multimedia) to foster learning.

We have categorized three types of videos to compare in this study: a PowerPoint lecture, a traditional 2D animation, and a whiteboard animation.

I. PowerPoint lecture

A PowerPoint lecture video consists of a timed slideshow of still images, usually text

with an accompanying image on each slide. Audio narration is paired with the content in

each slide. In certain cases, a “floating head” (i.e. a green screen video of the instructor

teaching) will overlay a portion of the slide.

Figure 1. Screenshot of a PowerPoint lecture video

II. Traditional 2D animation

Traditional animation is a form of media that depicts events happening over time via motion, which helps the narrative proceed. This form of media helps viewers create

“dynamic mental models” in their mind as the events in the video unfold and works particularly well for the understanding of complex ideas (Plass et al., 2009). However, the introduction of animation may hinder learning, as adding motion or special effects may create more cognitive processing for the viewer (Wong, 2012; Spanjers, 2011).

Figure 2. Screenshot of a traditional 2D animation

III. Whiteboard animation

Whiteboard animation is a video format that depicts the creation of a drawing on a blank backdrop while the content is narrated. The earliest whiteboard animation videos were uploaded to YouTube around 2009 (IdeaRocket, 2019), which makes this format relatively new compared to the others. It mimics the traditional classroom, simulating the

illustration of a concept as a teacher would by physically drawing the concept out on a whiteboard. Whiteboard animations differ from traditional animations in that the

“animation” itself is the process of drawing a static image or writing text, similar to what a lecturer would do in a classroom. Proponents of whiteboard animations claim that this format is more educationally effective than traditional animations, as it helps viewers mentally construct the concepts as they are drawn (Lee, Kazi, & Smith 2013). Compared to traditionally animated segments, whiteboard animations are typically more time-effective and less costly to create.

Figure 3. Screenshot of a whiteboard animation

Despite the growing popularity of whiteboard animation in online education, there has been a relative lack of studies on the effectiveness of whiteboard animation for student learning (Turkay, 2016). This may be due to how recently the technique emerged, and it warrants further study.

Cognitive Theory of Multimedia Learning

Mayer (2009) presents a framework for how the brain processes information that can help with evaluating the effectiveness of different types of animations. His multimedia learning hypothesis states that “individuals learn more deeply from words and pictures than words alone”. This is based on the brain’s ability to process information, which includes three assumptions:

• The Dual Channel Assumption: The human brain contains two channels for

processing: a visual/pictorial channel and an auditory/verbal channel.

• The Limited Capacity Assumption: Each channel has limited space for

information processing.

• The Active Processing Assumption: “Active learning”, or the construction of

mental representation of a subject, consists of a series of processes which occur

when information enters the system. These processes, in order, include:

o Selection: Choosing relevant material to focus on, out of the whole that is

presented.

o Organization: Sorting the information above into discrete cognitive

structures.

o Integration: Combining similar cognitive structures with ones from

knowledge stored in long-term memory (Mayer, 2009).

Media created with these assumptions in mind have been shown to generate good learner responses and a better understanding of the concept being taught. These assumptions can be translated into a practical set of learning principles, which will be mentioned here and detailed separately in the Materials and Methods section.


Principles of Multimedia Learning

Shown below is a chart of techniques that are used to enhance education, via (1) reducing extraneous processing and (2) generating motivation for learners to maintain their interest and attention on learning the material. These techniques have been tested and shown to be effective in achieving the above goals (Mayer, 2009), and provide guidance for the creation of the animations in this study.

Technique Description

Coherence principle Extraneous material is removed

Signaling principle Relevant material is highlighted

Redundancy principle Printed and spoken text are not combined

Spatial contiguity principle Text is placed near corresponding image

Temporal contiguity principle Image and text are presented simultaneously

Segmenting principle Presentation is split into discrete parts

Multimedia principle Words and images are used together rather than text alone

Personalization principle Script is conversational in tone

Voice principle Human narration is used for spoken text

Embodiment principle Animated characters have human-like gestures

Table 1. Subset of Principles of Multimedia Learning (Mayer, 2009)

The explosive growth of online education in recent years has created opportunities for researchers to explore these principles even further.


Project Objectives

The objectives of this thesis project are to:

1. Create two testable multimedia assets - a whiteboard and a traditional 2D

animation - with the topic “Understanding a Genetic Pedigree”, based on the same

script, narrator and style.

2. Determine the retention and engagement value of each type of multimedia,

including a PowerPoint lecture provided by OGATP.

3. Use appropriate univariate and multivariate statistical analyses to determine if

there are differences in engagement or retention of knowledge for each

multimedia type.

4. Examine the cost-effectiveness (i.e. the amount of effort needed to create each

multimedia type compared with the learning/engagement benefit that it provides),

to help e-Learning creators decide where and when to best use the types of

multimedia in our study.

Intended Audience

The intended audience of these materials is the future students of the Online Genetic

Assistant Training Program at Johns Hopkins University, consisting of adult high school and college graduates who have taken some science courses. Multimedia created for this study will eventually be used in the “Interpreting a Genetic Pedigree” portion of the online curriculum. More broadly, findings from the study may provide insight for any content creator considering multimedia for teaching science in e-Learning.


Materials and Methods

Content Preparation

The content of the animated segments was chosen through brainstorming and

discussion sessions with lecturers from the Johns Hopkins University Online Genetics

Assistant Training Program (OGATP). Lecturers were asked which topic in the

curriculum was the most difficult for OGATP students to grasp based on student surveys

and their personal experience as genetics instructors. It was decided that “Understanding

a Pedigree” was a key topic that could benefit from animation supplementation. The

“Pedigree” portion of the OGATP curriculum is its own unit (named “Module 3:

Pedigree” in the Blackboard application) that consists of an hour-long lecture series split into roughly four 13-minute videos. The series is composed entirely of PowerPoint lecture videos with a “floating head” instructor.

Figure 4. Pedigree lecture video on Blackboard


The content of the “Pedigree” module is taught by Genetic Counselor Kelsey Guthrie and

consists of the following subtopics:

● Introduction to Pedigrees (14:01)

● Taking a Pedigree & Family History (11:15)

● Inheritance Patterns & Pedigree Tools (12:19)

● Pedigree Analysis & Testing (13:25)

The learning objectives of this lecture series are:

● Explain the basic symbols used in pedigrees

● Apply strategies for taking a pedigree

● Explain special circumstances that may be encountered when taking a pedigree,

with consideration for psychosocial aspects of family history

● Describe inheritance patterns

● Identify inheritance patterns in a pedigree

● Explain pedigree analysis and how to use pedigrees to develop test strategies

Given the timeline of the thesis project, we decided to create a summary animation for the topics instead of more detailed individual animations for each subtopic. The summary animation was written for adults at a high school level, which matches the minimum

criteria for our target audience: students enrolling in OGATP.


Story Outline

An outline of the test animation was created based on the learning objectives listed above, with iterative edits from the OGATP course creators and content experts from the

Department of Genetic Medicine. The finalized working outline is as follows:

1. Pedigree definition

a. Definition of pedigree

b. Clinical benefits of taking a pedigree

2. How a pedigree is drawn

a. Labeling pedigree with identifying information

b. Conventional nomenclature regarding pedigree symbols

3. General review of genetics concepts

a. Classical definitions of gene, allele, trait and how they are related

b. Combinations of alleles represent different gene variations

4. Mendelian inheritance pattern

a. Dominant inheritance pattern

b. Recessive inheritance pattern

c. X-Linked inheritance pattern (removed)

d. Mitochondrial inheritance pattern (removed)

5. Closing

a. Most traits are influenced by multiple genes and environmental factors


Script Writing

A script was created from the above outline. The script was iteratively edited by the

instructors of the OGATP and verified for accuracy. Several sections were removed to

condense the script and to help us focus on our study questions. The total narration time

of 9 minutes was reduced to approximately 6 minutes by removing the “X-Linked

inheritance pattern” and “Mitochondrial inheritance pattern” segments.

Cognitive Theory of Multimedia Learning and Content Creation

Personalization principle:

A conversational tone was established throughout the script, following the

personalization principle (McLaren, DeLeeuw, and Mayer 2011). One major technique of

the personalization principle is to convert third person statements into first person “I, you,

we” statements (i.e. “Hi, I’m Sophie. In this video, we’ll be talking about how to

understand a pedigree”). The aim of this conversion is to introduce social cues in the

educational material. Studies have shown that recognition of social cues can lead to a

deeper cognitive response in learners (Ginns 2013).

Voice principle:

A human voice was used for the narrator instead of an auto-generated voice. Kelsey

Guthrie, an instructor from the OGATP, voiced the full narration of the script. Based on prior studies, the human quality of a recorded narration conveys a “social presence”, or the illusion that someone is speaking directly to the viewer. This could enhance the viewer’s engagement with the medium and subsequently the learning outcome (Mayer,

DaPra 2012).


Coherence Principle:

In order to reduce cognitive overload in learners, it is essential to manage and reduce any extraneous material that the learner may encounter during the lesson. This material may include extraneous text, graphics or sounds that could interfere with a learner’s focus on essential processing (Moreno, Mayer 2000). In the traditionally animated segment of this study, a choice was made not to include any sound effects or music other than

Kelsey’s narration. In this manner, the learner can focus their attention on her voice alone and the accompanying animation. Only a few key words were shown on screen at any given time with care not to inundate the viewer with too much information all at once.

Segmenting Principle:

If a multimedia message has disparate topics that all need to be communicated, it is effective to split these up into digestible segments. Research has shown that pre- organizing topics into smaller units mitigates some extraneous processing by the human brain (Khacharem, Spanjers, 2013). The segmenting principle was applied to both whiteboard and traditionally animated segments in this thesis, separating the videos into segments according to the outline. Transitional pauses were added between segments to chunk up the video and signify changes in content.

Spatial and Temporal Contiguity Principles:

These principles focus on how information is presented either visually or aurally. The spatial contiguity principle states that presented text should be spatially close to its corresponding image so the learner does not have to expend additional processing power figuring out if the two are related (Sweller, Chandler, 1990). In both traditional and

whiteboard animations, labels were moved close to the corresponding visual element.

Due to composition choices, there were a few instances where the label did not appear next to its element; this was alleviated by staggering the animation so that attention could be directed to each element temporally.

Figure 5. Traditional Animation screenshot highlighting spatial contiguity of text to image

The temporal contiguity principle is similar, where the spoken narration of a sequence should match with the animation that is occurring. This simultaneous coordination helps the learner synchronize the information being presented in both auditory and visual channels (Mayer, Anderson, 1991). Again, both animations in this thesis utilized the temporal contiguity principle as care was taken to ensure every animated sequence was synced up with its corresponding narration.

Embodiment Principle:

Embodiment refers to the degree of “human-ness” an animated figure has. Like the

Voice and Personalization Principles mentioned above, the Embodiment Principle can

also prime a social response in the learner, which then leads to higher cognitive

processing and a better educational outcome. In a 2012 experiment run by Mayer and

DaPra, a low-embodiment character (static image) was compared to a high-embodiment

character (blinking, humanlike movement). Across 11 comparisons, participants

who watched the high-embodiment character had a greater learning outcome than

participants who watched the low-embodiment character. In the traditionally animated

2D segment of this thesis, head rigs and lip sync rigs were built for this purpose. Instead

of having the characters appear static on the screen, effort was made to increase human-

like qualities (i.e. adding blinking, eyebrow, and hair movements).

Signaling Principle:

To enhance the learning of essential material, the signaling principle was applied

throughout the whiteboard animation. Under the signaling principle, the essential

material in multimedia is highlighted, or cued, in some way (Ozcelik, Arslan-Ari, 2010).

One form of highlighting is emphasizing important words in text, which we applied to the whiteboard assets. The color palette of the whiteboard animation was sparse, using only a dark grey, orange, and blue. The orange and blue serve as highlight colors, and only important words and structures are highlighted.


Traditional Animation

Storyboarding

Several iterations of a storyboard were created following the initial verified draft of the script. The script was further refined within the context of the storyboard. The storyboard template was drafted in Adobe InDesign, consisting of the script (red) and action (black) on the left side and the corresponding still frame on the right-hand side.

Figure 6. Storyboard revisions. Full text available in Appendix B.

Still frames were painted in greyscale and actions in red on the iPad application

Procreate. Each still frame was then imported as a PNG file into an InDesign storyboard template. Final Adobe Illustrator assets replaced the PNGs as they were completed.

Asset Creation

Concept art was created for Dr. Sophie and the remaining characters. Initial style sketches were also drafted - a simple, cartoon vector style was decided upon to simplify asset creation and eventual animation. In addition, the simplified style could assist in managing cognitive overload.

Figure 7. Initial character style sketches

Assets were created in Adobe Illustrator. Based on storyboard frames, background and scene assets were created accordingly and separated out into distinct layers based on whether or not the contents of the layer would be animated. After Effects does not recognize Illustrator sublayers on import unless shape layers in After Effects are created - but shape layers are more cumbersome to animate. Therefore, the rule of “1 Illustrator

layer = 1 separately animated object” was adhered to, and any objects that were animated together were grouped into a single layer in Illustrator.

Figure 8. Separated layers in Adobe Illustrator. Text not intended to be read.


Figure 9. Imported Illustrator layers into After Effects. Text not intended to be read.

To introduce more cultural diversity to the animation, skin colors and overall styles varied while character head shapes and facial ratios generally remained the same, to maintain visual consistency. Unique color themes within segments of animations were chosen in order to visually distinguish one from the other. For example, the entire

Recessive Inheritance section had a warm color palette to distinguish from the cool palette of the Dominant Inheritance section.


Figure 10. Cool and warm color palettes to visually distinguish sections. Text not intended to be read.

Animation

In this thesis, we define “traditional animation” as computer interpolated animation.

Interpolation refers to computer-generated keyframing: between two hand-set keyframes on an image layer, the software will automatically “tween”, or fill in the missing frames.

To further modify movement, Bezier curves are utilized to smooth and naturalize motions.
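To make the tweening concrete, here is a minimal Python sketch of computer-interpolated keyframing; the easing function is a generic smoothstep stand-in for a Bezier speed curve, not Adobe's actual implementation, and all values are illustrative.

```python
def ease(t):
    """Smoothstep easing, a simple stand-in for a Bezier speed curve:
    zero velocity at both keyframes, similar in spirit to Easy Ease."""
    return t * t * (3 - 2 * t)

def tween(k0, k1, frame, eased=True):
    """Fill in ('tween') a property value between two hand-set keyframes.
    k0 and k1 are (frame, value) pairs; frame is the current frame."""
    f0, v0 = k0
    f1, v1 = k1
    t = (frame - f0) / (f1 - f0)   # normalized time in [0, 1]
    if eased:
        t = ease(t)                # eased rather than linear motion
    return v0 + (v1 - v0) * t

# A layer's X position animating from 100 px to 500 px over 24 frames:
for frame in (0, 6, 12, 18, 24):
    print(frame, round(tween((0, 100), (24, 500), frame), 1))
```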

The After Effects Plugin Animation Composer by Mister Horse was helpful in creating smooth in and out transitions for layers and text, greatly reducing the time spent on manual manipulation of speed charts.

Limited Head Rig in DUIK Bassel 16

Head rigs were created for each character using the free After Effects Plugin DUIK

Bassel 16. The purpose of this was to simplify head and facial feature movements using only a few keyframed controllers, and to make animated head movement appear more natural.

Each component of the head was separated in Illustrator into individual layers.

Elements like pupils that move synchronously were grouped into one layer. This

Illustrator file was then imported into After Effects as a new composition, selecting

“Composition - Retain Layer Sizes” when prompted. In DUIK, a 2D slider was chosen

from the Connector menu. Connectors in DUIK allow the connection of one property of

one layer to multiple properties of another layer. This creates flexible, simplified rigs

which can drive a lot of animations.

Figure 11. DUIK interface and slider type. Text not intended to be read.

Clicking on the 2D slider will generate a Controller layer. This layer’s position value will

be used to keyframe head and facial feature rotations. A Controller background layer will

also be generated, which shows the bounds of the controller (see #1, Figure 12).

First, we want to have the facial features rotate, as if the character were in 3D space.

The features on the face that are rotating are all parented to a null object (see #2, Figure

13) labeled “Facial Features”. Since the head rig is in the middle of the composition, the

anchor point of the null object was set to (50,50) to center the null on the middle of the

face. In order to have full 360-degree head rotations, the X and Y dimensions must be

separated on the Null object layer and connected to the slider separately (Null object

layer -> P -> Right click -> Separate dimensions.) Using DUIK commands, the X and

Y values of the null were zeroed out for more precise movement (Null object layer ->

DUIK Commands -> Zero - under “Links & Constraints”).

Figure 12. Parented Illustrator layers to sliders. Text not intended to be read.

Figure 13. Illustrator layers parented to "Facial Features" null.

To establish the range of X values the character’s head moves through, the properties needed to be keyframed from the leftmost possible position of the facial features (the leftmost range of the controller) to the rightmost position (the rightmost range of the controller), with a neutral position in the center at (0,0). The same process was repeated for the range of Y values, but with the leftmost keyframe at the highest position and the rightmost at the lowest.

Figure 14. Keyframe positions for "facial features" null. Text not intended to be read.

To add more 3D depth to the head rotating in space, three more X dimension keyframes

(Left, Center, Right) were also generated for the “Nose” layer to have the nose protrude

out more than the rest of the face while turning.

The head layer was duplicated and became an alpha matte for the R cheek. This prevents

the R cheek from overlapping into the background.


Figure 15. Connected keyframed X position layer to properties of Controller

All the keyframes set so far were selected, and Property -> X Value -> Connect to properties was clicked in the DUIK menu. This tied the X values of the set keyframes to the X axis of the controller handle, and the Y values of the set keyframes to the Y axis of the controller handle. Now, the character can look up, down and from side to side.
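Conceptually, the connector behaves like a clamped linear remap from the controller's position to the keyframed property values, much like the built-in After Effects linear() expression. A small Python sketch with hypothetical ranges (not the rig's actual numbers):

```python
def remap(x, in_min, in_max, out_min, out_max):
    """Clamped linear remap of x from one range to another
    (the same behavior as the After Effects linear() expression)."""
    t = (x - in_min) / (in_max - in_min)
    t = max(0.0, min(1.0, t))
    return out_min + t * (out_max - out_min)

# Hypothetical values: the controller's X runs -50..50 inside its background
# bounds, driving the "Facial Features" null across -30..30 px of travel.
controller_x = 25
print(remap(controller_x, -50, 50, -30, 30))  # 15.0, halfway into a right turn
```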

When the head turns in 3D space to the right, the right ear should disappear behind the

head while the left ear protrudes out. Thus, these steps were repeated for the X value

under “position” for the ears, so that the ears may rotate in space with the head (the ears

are not a part of the “facial features null”).

The blink was achieved by inputting keyframes for a skin colored overlay layer. The overlay layer was the eye whites layer duplicated and filled in with the character’s default skin color. Blinks were keyframed along the Y axis, in random intervals and eventually parented to the “Facial Features” null, so the character will blink while the face moves.

The Position property was selected, then Animation -> Add Expression was clicked and “loopOut()” was typed in, making the character blink ad infinitum. The blink could also have been controlled with a separate slider, but the method above was faster.

Some time-saving advice from creating all the head rigs required for this animation:

● Since DUIK controllers are run by scripts, it is in the best interest to parent as

many things as possible to one controller. This will minimize the amount of lag

that occurs.

● Shy layers away after connecting them to the controller. This will free up space

and make it easier to see what is left to rig.


Arm Rig

Separate arm rigs were created in DUIK for the first and last scenes of the traditional animation to give the characters more expression. Full body rigs were not necessary for the first scene, as both characters (Dr. Sophie and patient) are seated. First, forearm and arm layers were separated in Adobe Illustrator and imported into After Effects. Each layer was a rounded rectangle. The arm had a flex point at the elbow, where the two rounded ends overlapped. Once input into After Effects, Rigging -> Create Structures -

> Arm (or front leg) was opened up in DUIK Bassel.

Figure 16. DUIK Rigging menu with "Arm" selected

Only Arm and Forearm were checked off in the side panel menu, as the characters created in this animation did not need to have separate hand movements. This created three structures: an arm tip which was attached to the tip of the “hand”, a forearm that was attached to the elbow joint, and an arm that was attached to the top of the shoulder.

The original Illustrator arm layers were then parented to the new DUIK structure layers

(Forearm -> Arm tip and Arm -> Forearm). Finally, all three arm structures were highlighted and “AutoRig & IK” was selected on the DUIK menu under Rigging ->

Create Structures. This created a keyframable controller layer that could move the whole arm by dragging the arm tip, symbolized by a light green hand. Arms were then keyframed to gesture to certain objects in the scene and give the character some natural secondary motion.

Figure 17. Simplified Structure and Controller setup using DUIK. Text not intended to be read.

Walk Cycle

A walk cycle can be quickly created with DUIK. First, like the steps for the arm rig above, a limited full body rig was created for Dr. Sophie for the last scene where she crosses the stage holding a “Trait” box. The word “limited” is used in the sense that the

legs, hips, torso and spine were rigged for the Walk Cycle controller, but the head and

arms were rigged separately. Once all the Illustrator layers were parented to their

corresponding structures, Automation -> Walk Cycle was clicked under the DUIK

menu with all structures selected.

Figure 18. Structures involved in Dr. Sophie's walk cycle, except arms.


Figure 19. Layers were selected and plugged into the Walk Cycle feature of DUIK

Under the new Walk Cycle menu, droppers can be clicked on to select which Structure layer the Walk Cycle should connect to. (Only Neck & Shoulders, Body, Hips and the Feet were selected). Once Create was selected, a Walk Cycle controller layer was created. The Walk Cycle can then be further tuned by selecting the correct Kinematics, duration, and amount of secondary movement (we used 65% because 100% was too

“bouncy”). Since Dr. Sophie’s feet were hidden during her walk, there was no extra need to fine tune foot movements.

Lip Sync

A full head rig was made for Dr. Sophie, using the techniques mentioned above. Since

Dr. Sophie is the narrator and is focused on in multiple scenes of the animation, her movements needed to be finely manipulated. This necessitated the creation of multiple sliders for multiple purposes. One aspect of the head rig was implementing the ability to lip sync Dr. Sophie with the recorded narration. Firstly, a 2D animation syllable chart

was referenced, and mouth shapes for the corresponding phonation sounds were created in

Adobe Illustrator. Each mouth shape was placed on a separate layer.

Figure 20. Phonation Chart used for Dr. Sophie's mouth movements


Figure 21. Dr. Sophie head rig with mouth shape slider


Figure 22. Synced mouth movements to audio waveform

Once the controller was set up for lip syncing, the Position property of the controller layer was opened. The audio was then imported into After Effects. To simplify the process, each spoken phrase/word was marked by adding a marker (by pressing * on the number pad with the audio layer selected). Waveforms were also pulled up by pressing L twice on the keyboard with the audio layer selected. Each phonation was then keyframed (under the controller position layer) using hold keyframes

(no tweening or transition between keyframes). During pauses, the Rest phonation image was used. Overall, this method of keyframing was not very time-intensive and produced believable results.
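The hold-keyframe lookup can be sketched as a step function: each audio marker maps to a mouth shape, and the active shape at any frame is simply the most recent marker at or before that time. The marker times and phonations below are hypothetical:

```python
import bisect

# Hypothetical (time in seconds, phonation) markers from the audio layer.
markers = [(0.0, "Rest"), (0.4, "AH"), (0.7, "EE"), (1.1, "M"), (1.5, "Rest")]
times = [t for t, _ in markers]

def mouth_shape_at(t):
    """Hold-keyframe behavior: no tweening, just the latest marker <= t."""
    i = bisect.bisect_right(times, t) - 1
    return markers[max(i, 0)][1]

# Only the returned shape's layer would be set to 100% opacity at time t.
for t in (0.2, 0.5, 0.9, 1.2, 2.0):
    print(t, mouth_shape_at(t))
```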


Accessibility

Text contrast was adjusted to ensure readability in accordance with the Web Content

Accessibility Guidelines (WCAG 2.1). To achieve the WCAG’s AA standard of accessibility, the contrast ratio of text overlaying another color must be at least 3:1 for large scale text (at least 18pt, or bold and at least 14pt) and 4.5:1 for regular sized text. Generally, we adhered to values within that range, since the smallest font size used was 45px

(which converts to 33.75pt). Contrast was measured and adjusted using the Colour

Contrast Analyzer software.
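For reference, the WCAG 2.1 contrast ratio can be computed directly from two sRGB colors; the sketch below implements the standard formula (the sample colors are placeholders, not the thesis palette):

```python
def rel_luminance(rgb):
    """WCAG 2.1 relative luminance from 8-bit sRGB components."""
    def channel(c):
        c /= 255.0
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """(L_lighter + 0.05) / (L_darker + 0.05); AA requires >= 4.5,
    or >= 3.0 for large-scale text."""
    lighter, darker = sorted((rel_luminance(fg), rel_luminance(bg)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))        # 21.0, the maximum
print(round(contrast_ratio((118, 118, 118), (255, 255, 255)), 2))  # ~4.54, near the AA cutoff
```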


Figure 23. Color Contrast Analyzer was used to ensure text fit within WCAG guidelines.

Whiteboard Animation

Storyboarding

Like the traditional animation, a storyboard was created in Adobe InDesign using progressively refined stills. The style was chosen to be simple hand-drawn lines to avoid visual overload and extraneous processing. The animation incorporated a limited three-color palette: orange and blue for accent colors, and black for everything else. A 30% grey was also used for drop shadows to give elements depth and to create a value range for character skin color.

Asset Creation

Assets were created in the Procreate app on the iPad Pro, using a Studio Pen brush with limited taper to get a hand-drawn feel, while maintaining 100% opacity. The “base” of each scene was drawn onto one layer, with any overlays or additions on separate layers. Each scene was organized as a group of layers.


Figure 24. Procreate interface with transparent grouped Scene layers. Text not intended to be read.

Each of these layers was then selected in Procreate (with a transparent background) and exported as a transparent PNG into a whiteboard asset folder. Asset creation once again worked in tandem with animation, as it was much simpler to work scene-by-scene and fix any spacing or drawing errors immediately by saving over the original PNG using

Procreate.

Animation

In order to create the illusion of a hand drawing elements on a whiteboard, the finished drawing was reverse masked and synchronized with a “hand with marker” overlay moving across the screen. Animation was largely assisted by the After Effects plugin

AutoWhiteboard, designed to save creators time during the masking process by having presets already keyframed. However, each added mask created another copy of the original PNG, so scenes with many masks were loaded down with assets and that slowed down the RAM preview. A workaround to this problem was to export each composition with full wipes at the beginning and end (using an animated whiteboard eraser mask) into high definition .mp4 clips via Adobe Media Encoder. Previewing and making appropriate changes by watching the rendered clip and going back into Procreate/AE was much faster than waiting for RAM preview to load.

Figure 25. "Mask" icon was selected to create a large mask layer (in blue) over the screen. Text not intended to be read.


Figure 26. Using the Pen tool, each separate component of the PNG layer was masked out. Text not intended to be read.

Figure 27. All components were masked out. Text not intended to be read.

Figure 28. "Separate masks" toggle was clicked, which separated the original PNG into its own separate layers. Text not intended to be read.

Figure 29. "Marker" and style preset was selected, “Apply” toggle was clicked. Text not intended to be read.

Figure 30. Speed was slowed from 250 to 200 under the Speed Control layer. Text not intended to be read.

Figure 31. Erasing uses the same technique but in reverse. Text not intended to be read.

Study Design

Overview

We used Amazon Mechanical Turk to perform a nationwide survey of the

effectiveness of our created videos. We created surveys to compare our three videos:

traditional animation, whiteboard animation, and a PowerPoint video. Our inclusion

criteria were: (a) has a high school diploma, and (b) resides in the US to match

characteristics of OGATP students.

We analyzed two parameters: retention and engagement. Retention was measured through quantitative analysis of participants' quiz scores before and after they watched a video. Engagement was measured from participants' visual analog scale ratings and qualitative input after they had experienced a portion of all the modalities.

For each of the three modalities, pre- and post- testing was administered to evaluate the retention (six multiple choice questions) and survey engagement value (rating from 1-

100). Participants engaged with multimedia and answered questions in a 30-min session using JHM Qualtrics.
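As a sketch of the planned comparisons (detailed under Data Analysis), the per-participant change in quiz score can be compared across groups with a single factor ANOVA, and pairwise engagement contrasts with an unpaired t-test of unequal variance (Welch's test). The arrays below are placeholder data, not study results:

```python
from scipy import stats

# Placeholder Δ (post-test minus pre-test) scores for the three groups.
traditional = [2, 1, 3, 0, 2, 1]
whiteboard = [3, 2, 2, 1, 3, 2]
powerpoint = [1, 1, 2, 0, 1, 2]

# Single factor ANOVA across the three modalities (retention).
f_stat, p_anova = stats.f_oneway(traditional, whiteboard, powerpoint)

# Unpaired t-test of unequal variance for one pairwise engagement contrast.
t_stat, p_welch = stats.ttest_ind(traditional, powerpoint, equal_var=False)

print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.3f}")
print(f"Welch t-test: t = {t_stat:.2f}, p = {p_welch:.3f}")
```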

Online Data Collection

We chose to run an online study because:

• Traditional recruitment and testing can take weeks to months, whereas online

recruitment and testing can be completed in a matter of hours to days.

• The need for travel and set-up is eliminated.


• Users can participate from their own home, as they would in an online seminar.

Our study aims to simulate the online learning experience.

• We can quickly capture a large, geographically diverse, de-identified audience.

Survey Design and Data Collection: JHM Qualtrics

We used Qualtrics to create secure online surveys. Each survey consisted of a video with a pretest, a post-test, and a post-survey. Participants then watched abbreviated one-minute clips of the two other videos in our study and filled out surveys pertaining to the two clips. The clips were abbreviated to keep the surveys to less than 30 minutes to reduce survey fatigue and overall costs.

The order of events we included in each Qualtrics module is shown below. The complete module can be viewed in the appendix.

Figure 32. Qualtrics Survey Flow

Survey Introduction

The introduction provides a brief project description and instructions for the next segment. Study participants are recruited via MTurk (described below), and the participant is prompted to enter their MTurk ID to simplify coordination of data, and to screen and prevent repeat participants.

Pre- and Post- Tests: Measuring Retention

To test the retention value of the videos, we implemented a pre-test and a post-test for the first full video in the survey. The tests are identical, so the knowledge gained from watching the video can be measured quantitatively by comparing the differences in answers. There are six multiple choice questions in total:

• Two questions measure text recall: whether text on screen and spoken narration

(whiteboard) yields a better result than only spoken narration (traditional). This is

a test of the Redundancy Principle, which states that presenting the same material

in multiple forms (text, spoken) interferes with learning.

• Two questions measure “essential processing” via symbol recognition: whether

users can recall symbols and their associated meanings from the videos. Essential

processing leads to creating verbal or pictorial representations of presented

material to be stored in working memory (Mayer 2005).

• Two questions measure “generative processing” via application of knowledge:

whether users can apply what they’ve learned to new scenarios. Generative

processing refers to a learner organizing new material and integrating it into their

43

own existing mental constructs, which ultimately can lead to long term memory

storage (Sweller 1999).

The “Forced Response” tag was added to all multiple-choice questions in both Pre- and

Post-Tests. This measure was to ensure all questions were answered, as the survey will

not progress unless the user has completed all the questions on the page. In addition, a

“Timer” tag was also added to the Pre- and Post- Tests, which records the amount of time

a user spends on a certain segment (without the user’s knowledge). This could give

additional information regarding user processing times before and after watching the first

video in full.

Surveys: Measuring Engagement

In addition to testing retention, engagement values were measured via visual analog scales, which featured sliders with a continuous value range from 0-100. We analyzed three measurements of engagement:

• Enjoyment

• Understandability

• Attention-holding

The first survey occurs after the Worker views the first video to completion and clicks the

next button. The second and third surveys are combined and occur after the Worker

views the second and third shorter video clips to completion.


Assigning Completion IDs to Guarantee Completion

To ensure that Workers have completed the Qualtrics survey prior to payment, each

Worker was assigned a randomized Survey ID at the end of the Qualtrics module. The

Survey IDs are randomized 5-digit numbers generated by Qualtrics using native

Embedded Data. Workers typed this Survey ID into a text box in MTurk to confirm completion of their survey.
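Qualtrics exposes a random-number piped-text token for exactly this purpose; a 5-digit code can be generated with a token along the following lines (the exact syntax should be verified against current Qualtrics documentation):

```
${rand://int/10000:99999}
```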

Figure 33. Randomized completion code. Added embedded data in Qualtrics. Text not intended to be read.

Figure 34. Randomized completion code. Named embedded data. Text not intended to be read.


Figure 35. Randomized completion code. Random number generator. Text not intended to be read.

Figure 36. Randomized completion code. Piped embedded randomized ID text into question text box. Text not intended to be read.

46 Figure 37. Randomized completion code. Result of adding embedded data “Random ID”

Participant Recruitment and Compensation: Amazon Mechanical Turk (MTurk)

Survey participants were recruited via MTurk, a crowdsourcing study site where compensated online users (known as Workers) complete tasks (known as “Human

Intelligence Tasks - HITs”) for researchers (known as Requesters).

Multiple workers can work on a HIT simultaneously, generating numerous responses.

These resulting responses can be approved or rejected by the Requester. If approved,

MTurk will release compensation to the assigned Worker. To increase accuracy and speed of responses, incentivizing the study via increased pay is recommended. MTurk user responses on the subreddit /r/MTurk suggested a current standard of pay to be $0.15

- $0.20 per minute of estimated work. The estimated amount of time required to finish our survey is roughly 20-25 minutes, so we compensated $4 for each HIT. However, since the PowerPoint survey was about one and a half minutes longer than our other two surveys, we added $0.50 on top of the base amount of $4 for segments which contained the full PowerPoint video. MTurk charges an additional 20% of the original paid amount, so we budgeted a total of about $6 per HIT.

Our study was split into three groups, one for each multimedia modality. The sample size of each group was 56, which gave a predicted 13% margin of error with a confidence level of 95%, assuming a population size of 10,000.
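The quoted figure can be reproduced with the standard worst-case (p = 0.5) margin-of-error formula at a 95% confidence level, with a finite population correction for N = 10,000; a quick Python check:

```python
import math

def margin_of_error(n, N=10_000, z=1.96, p=0.5):
    """Worst-case margin of error with a finite population correction."""
    moe = z * math.sqrt(p * (1 - p) / n)   # infinite-population margin
    fpc = math.sqrt((N - n) / (N - 1))     # finite population correction
    return moe * fpc

print(f"{margin_of_error(56):.1%}")  # ~13.1%, matching the predicted 13%
```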

Each of the groups was further subdivided into two groups, to randomize the order in which participants watched the video clips. The order of multimedia that participants watched for each study group is summarized in the table below:

Group   Main Video              Clip #1                 Clip #2
A1      Traditional Animation   Whiteboard Animation    PowerPoint Video
A2      Traditional Animation   PowerPoint Video        Whiteboard Animation
B1      Whiteboard Animation    PowerPoint Video        Traditional Animation
B2      Whiteboard Animation    Traditional Animation   PowerPoint Video
C1      PowerPoint Video        Traditional Animation   Whiteboard Animation
C2      PowerPoint Video        Whiteboard Animation    Traditional Animation

Table 2. Subdivision of study groups

Figure 38A. Project page on Amazon MTurk

Figure 38B. Project page on Amazon MTurk

Figure 38C. Project page on Amazon MTurk

Inclusion Criteria

A HIT recruitment page was created on MTurk for users to read and accept. A few required qualifications were set, including:

• HIT approval rate > 97%

• Total approved HITs > 100

  o The first two qualifications screen for trusted participants and increase the likelihood of receiving quality responses.

• Located in the United States

  o This increases the likelihood that the participant is in a similar time zone and is an English speaker who can read our survey.

• Is a US high school graduate (Premium qualification: $0.05 added to fees)

  o This is a criterion for students enrolling in the OGATP program, and it also increases the likelihood that the participant is an English speaker who can read our survey.

• Must not have completed this HIT, or similar HITs, before*

  o *This qualification is needed after the first batch submission and is detailed below.

*Excluding Repeat Workers on MTurk:

This was achieved by creating a custom Qualification type under MTurk -> Manage -> Qualification Types. It is essential to give the Qualification a sensible name, like "Already Completed", because Workers who view the HIT will see it. The description "Workers who have already completed a study" was entered into the description bar.


Figure 41. Created an "Already Completed" qualification type for those who have completed the module. Text not intended to be read.

After the first batch of Workers had completed their surveys, their MTurk Worker IDs were copied from our Qualtrics form and pasted into Microsoft Excel in a column titled "Worker IDs". We labeled the second column "Update - Already Completed" and entered a value of 1 for each of these Workers; this value assigns the Qualification to only these Workers. The Excel file was then saved as a .CSV (UTF-8) file named "Already Completed".

Figure 42. CSV file setup in Excel

The .CSV file was uploaded to the Manage Workers page in MTurk. Upon upload, MTurk prompted whether the "Already Completed" Qualification should be assigned to the listed Workers, which was confirmed. We verified that the number of Workers in the .CSV matched the number of Workers shown on this screen.

Figure 43. Uploaded .CSV file to" Manage Workers" page on MTurk. Not all text intended to be read.

When designing the next HIT, we required that the Qualification "Already Completed" "has not been granted". This way, Workers who had already taken the study (and been assigned a value of 1) were excluded from the HIT.


Figure 44. Assigned "Already Completed" qualification to Workers during HIT creation. Text not intended to be read.

This Qualification needs updating every time a new batch finishes and more Workers are added to the "Already Completed" .CSV.
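For larger studies, the same create-and-assign workflow can be scripted against the MTurk API rather than repeated through the web interface. A sketch using boto3 (an alternative to the CSV workflow described above, not the method used in this study; the Worker IDs are placeholders):

```python
import boto3

mturk = boto3.client("mturk", region_name="us-east-1")

# Create the Qualification once; Workers who view the HIT can see its name.
qual = mturk.create_qualification_type(
    Name="Already Completed",
    Description="Workers who have already completed a study",
    QualificationTypeStatus="Active",
)
qual_id = qual["QualificationType"]["QualificationTypeId"]

# Assign a value of 1 to each Worker ID copied from the Qualtrics export.
completed_workers = ["A1EXAMPLE0000", "A2EXAMPLE0000"]  # placeholder IDs
for worker_id in completed_workers:
    mturk.associate_qualification_with_worker(
        QualificationTypeId=qual_id,
        WorkerId=worker_id,
        IntegerValue=1,
        SendNotification=False,
    )
```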

Note: There is an option to Block Workers, which should not be used to exclude Workers from future HITs (unless they are purposely returning bad data). Blocking reflects on Workers' personal scores and may impact their future HITs.

MTurk Description for Workers

The following information was provided in the description of the study:

• Suggested qualifications for the study

• Rough estimate of how long the HIT will take

• How much compensation the Worker will receive


Figure 45. MTurk survey description. Text not intended to be read.


Figure 46. HIT progression screen after batch submission. Text not intended to be read.

Ensuring Complete Data

Administrative data was collected while running this study to help facilitate Worker-Requester transactions and simplify data analysis. Workers in MTurk are assigned a unique, anonymized MTurk ID. However, since our study was run through different software (Qualtrics), we needed a way to match each Worker to their completed survey for compensation purposes and to avoid repeat Worker input. To solve this, Workers were required to input their MTurk IDs in Qualtrics before starting their modules.

Only Workers who completed the full module could receive compensation. To measure completion of all the tasks, a unique code consisting of random numbers and letters was placed after the last segment of the module. Only Workers who completed the task received the code, which they entered on the MTurk page before submitting the HIT.

Once a submission occurred, the data could be approved by the Requester. The auto-approval interval was set to three days, so HITs were automatically approved and funds distributed after three days unless the Requester intervened.

Figure 47. Assignment approval after batch has been completed. Text not intended to be read.
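Approval can also be handled through the API for submissions reviewed before the auto-approval window closes; a sketch reusing a boto3 MTurk client like the one shown earlier:

```python
def approve_submitted(mturk, hit_id):
    """Approve every still-pending assignment for a HIT, releasing payment."""
    resp = mturk.list_assignments_for_hit(
        HITId=hit_id, AssignmentStatuses=["Submitted"]
    )
    for assignment in resp["Assignments"]:
        mturk.approve_assignment(AssignmentId=assignment["AssignmentId"])
```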

MTurk is used to recruit and pay users for studies, but study data is usually collected on other sites, often through a portal link provided with the HIT. Our study recruited users through MTurk and collected data using Qualtrics.

Figure 48. Summary of the MTurk workflow sequence

Pros and Cons of Amazon MTurk

Amazon MTurk is a powerful tool that can expedite and simplify data collection, but it is important to understand its limitations as well. Below is a table describing the pros and cons of using MTurk:

Pros:
• Simulates online learning
• Easy recruitment
• Fast results
• Complete participant anonymity
• Can be cheaper than traditional studies
• No travel/set-up expenses

Cons:
• Limited qualifications
• Increased chance of Worker dishonesty due to anonymity
• Study limited to people who use MTurk
• Limited to online studies (questionnaires, surveys, image identification, etc.)

Table 3. Pros and Cons of Amazon MTurk


Recruitment Assumptions

1. We assumed the MTurk participants tested were representative of our target population. Our only inclusion criteria were (1) US located and (2) high school graduate. Based on increased fees on MTurk and a Qualification limit, we did not specify further criteria, so our Worker pool may have a more varied background than originally planned for. For example, some Workers who had a high school diploma but no genetics education felt that certain concepts (like "alleles") in our video were difficult to understand. According to free-response comments, their focus was lost amidst the unfamiliar jargon presented. Ideally, our target population would understand basic genetic concepts before beginning the module so that the videos could build on their foundational knowledge. However, MTurk is limited in the qualifications that it can set, so we had to compromise in order to collect data this way.

2. To shorten the span of data collection, we collected data on MTurk from 10AM-4PM every day from Thursday through Sunday. We assumed that:

   o Our Worker pool would be actively using MTurk on all these days with equal frequency

   o The Worker population was the same on each day, including the weekend

Criteria for Survey Rejection

Criteria were developed to screen for quality responses. Listed below are the criteria for rejecting a survey from analysis:


• Incomplete survey: The participant did not completely fill out the survey.

• Participant did not watch the first video entirely: Participants needed to watch the videos to near entirety in order to fairly judge pre/post-test responses. The Timer feature in Qualtrics gave an estimate of whether the participant watched the video in its entirety.

Figure 49. Timer module in Qualtrics. Text not intended to be read.

The total running time for the traditional and whiteboard animations was approximately 400 seconds. We considered video acceleration, so we set the lower limit of exclusion to 200 seconds (since YouTube has a maximum playback speed of 2x). Therefore, if the Timer showed that a Worker had viewed the animation for less than 200 seconds, their data was excluded. Even if a Worker's data was excluded, their entry was still accepted and compensated in MTurk. Worker HITs were not immediately rejected for insufficient submissions because MTurk counts rejections against Workers' HIT approval scores, and Workers lose opportunities if their approval score drops too low.

• Survey completed too quickly: If a survey was completed too quickly, it might indicate that a participant marked down answers without reading the questions. The Timer feature in Qualtrics allowed for time tracking; individual questions answered in less than three seconds indicated that they may not have been read.
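Both timing screens reduce to simple thresholds over the Qualtrics Timer output. A sketch with hypothetical field names ("video_seconds" and "question_times" stand in for whatever the Timer columns are called in the export):

```python
MIN_VIDEO_SECONDS = 200   # ~400 s runtime at YouTube's maximum 2x playback
MIN_QUESTION_SECONDS = 3  # faster answers were flagged as possibly unread

def keep_response(response):
    """True if a response passes both timing-based quality screens."""
    watched = response["video_seconds"] >= MIN_VIDEO_SECONDS
    read = all(t >= MIN_QUESTION_SECONDS for t in response["question_times"])
    return watched and read

responses = [
    {"video_seconds": 405, "question_times": [12, 9, 15, 8, 20, 11]},
    {"video_seconds": 150, "question_times": [5, 4, 6, 7, 5, 6]},  # excluded
]
print(len([r for r in responses if keep_response(r)]))  # -> 1
```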

Data Collection Timeline

Date  | Section (n) | Time
03/05 | A1 (10)     | 3-5PM
03/06 | A1 (18)     | 10AM-noon
03/06 | A2 (28)     | 1-4PM
03/07 | B1 (28)     | 10AM-noon
03/07 | B2 (28)     | 1-4PM
03/09 | B1 (2)      | 1-2PM
03/09 | C1 (28)     | 10AM-noon
03/09 | C2 (16)     | 1-3PM
03/09 | C2 (9)      | 3-4PM
03/09 | C1 (1)      | 5-5:30PM
03/12 | C2 (3)      | 10AM-noon
03/12 | C1 (27)     | 1-4PM

Table 4. Data collection timetable

We ran six batches of 28 surveys on MTurk and received results from 168 participants.

However, group C1 (27 surveys) and part of C2 (3 surveys) had to be rerun due to a glitch in Qualtrics that returned incompatible results. In addition, two results from B1 were removed because the time spent on video viewing was insufficient. In all, an additional 33 surveys were rerun, for a total of 201 surveys, of which we used 168 (28 surveys in each group).

Data collection was completed over eight days, from March 5th to March 12th, 2020. To ensure that Workers did not repeat surveys, categories were run consecutively instead of simultaneously (consecutive surveys allowed us to use MTurk IDs as exclusion criteria, as described above). Surveys were run roughly between the hours of 10AM and 4PM. We ran roughly two batches per day, but C1 and C2 ran into a glitch in testing, so those batches were published over four days.


Data Analysis

Survey results from Qualtrics were organized and analyzed in Excel.

Comparing Change (Δ) in Retention for Overall Quiz Scores

The difference between pre- and post-test scores for each individual Worker was measured and organized by category (A - traditional animation, B - whiteboard animation, and C - PowerPoint video). Groups 1 and 2 of each category were combined (A1 and A2, B1 and B2, C1 and C2). A single factor ANOVA analysis was run to determine whether there was any significant difference (p < 0.05) between the three categories. If F calculated > F critical, we would compare the categories via unpaired two-tailed t-tests of unequal variance; if F calculated < F critical, we would compare them via unpaired two-tailed t-tests of equal variance.

Mean change in quiz scores, standard deviation, and standard error were compared for each category.
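This pipeline maps directly onto scipy; a sketch with placeholder data standing in for the per-Worker score changes (the real values came from the Qualtrics export):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Placeholder samples, 56 Workers per category, centered on the observed means.
a = rng.normal(2.63, 1.7, 56)  # A: traditional animation
b = rng.normal(2.78, 1.7, 56)  # B: whiteboard animation
c = rng.normal(2.38, 1.7, 56)  # C: PowerPoint video

f_stat, p_value = stats.f_oneway(a, b, c)
print(f_stat, p_value)

# Pairwise follow-up: unpaired two-tailed t-tests (Welch's, unequal variance).
for x, y, label in [(a, b, "A-B"), (a, c, "A-C"), (b, c, "B-C")]:
    t, p = stats.ttest_ind(x, y, equal_var=False)
    print(label, p)
```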

Comparing Retention for Individual Questions

We measured the difficulty of each question by summing the correct pre-test answers for each separate question across all Workers. We also measured the change in individual question scores (post-test score minus pre-test score) to shed light on which individual questions showed the most improvement for a specific video type.
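With answers coded as 0/1 correctness matrices, both measures are one-line reductions; a sketch over toy data (two questions by four Workers):

```python
import numpy as np

# Rows are questions, columns are Workers; 1 = answered correctly.
pre = np.array([[1, 0, 0, 1],
                [0, 1, 1, 1]])
post = np.array([[1, 1, 1, 1],
                 [1, 1, 1, 1]])

easiness = pre.sum(axis=1)        # correct pre-test answers per question
delta = (post - pre).sum(axis=1)  # net improvement per question
print(easiness, delta)            # -> [2 3] [2 1]
```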


Comparing Engagement for Full Length Videos

Engagement with the first full video was measured via continuous analog scales (0-100) for Enjoyment, Attention, and Understanding. This data reflects Worker opinion without any point of comparison to the other modalities.

Each variable was analyzed separately using a single factor ANOVA analysis to determine significant differences (p < 0.05) between the three video categories. If p < 0.05, we would compare the categories via unpaired two-tailed t-tests of unequal variance. Mean engagement scores, standard deviation, and standard error were compared for each category.

Comparing Overall Comparative Engagement

Engagement with the two 1-minute clips was measured via continuous analog scales (0-100) for Enjoyment, Attention, and Understanding. This data reflects Worker opinion on video engagement relative to the other clips and to the long video seen previously.

Each variable was analyzed separately using a single factor ANOVA analysis to determine significant differences (p < 0.05) between the three video categories. If p < 0.05, we would compare the categories via unpaired two-tailed t-tests of unequal variance. Mean engagement scores, standard deviation, and standard error were compared for each category.


IRB

The study protocol “Comparison between digital e-Learning modalities in delivering online curricular education” (IRB00226187) was reviewed and approved by the Johns

Hopkins University Institutional Review Board on November 26, 2019.

Project Funding

The Study was funded by the Sutland and Pakula Family through the Dr. Frank V.

Sutland Chair at Johns Hopkins University and a research grant from the Vesalius Trust.

Statistical Consultation

We would like to acknowledge support for statistical consultation from the National

Center for Research Resources and the National Center for Advancing Translational

Sciences (NCATS) of the National Institutes of Health through Grant Number

1UL1TR001079.


Results

Multimedia Produced

We produced six pieces of multimedia for testing:

Figure 50. Full-length multimedia produced for testing. Text not intended to be read.

Figure 51. Shortened clips produced for testing. Text not intended to be read.

1. A 6 minute and 38 second traditional animation

2. A 6 minute and 43 second whiteboard animation

3. An 8 minute and 11 second PowerPoint video, edited together from previously existing OGATP lecture videos

4. A 1 minute and 1 second traditional animation clip (a short segment from the full animation)

5. A 1 minute and 1 second whiteboard animation clip (a short segment from the full animation)

6. A 1 minute and 5 second PowerPoint video clip (a short segment from the full video)

Mean Change (Δ) in Test Scores

Figure 52. Mean change in overall test scores. Error bars are based on standard error.

The whiteboard animation had the highest increase in mean overall test scores (2.78), followed by traditional animation (2.63) and finally PowerPoint video (2.38).

The differences in means between the three groups were not shown to be significant by single factor ANOVA analysis (p = 0.47) at the p < 0.05 level.

ANOVA for Mean Change in Overall Test Scores
Source of Variation | SS         | df  | MS         | F          | P-value   | F crit
Between Groups      | 4.42857143 | 2   | 2.21428571 | 0.75140475 | 0.4733106 | 3.050787
Within Groups       | 486.232143 | 165 | 2.94686147 |            |           |
Total               | 490.660714 | 167 |            |            |           |

Table 5. Single factor ANOVA analysis for Mean change in overall test scores

Individual Question Measurements

Difficulty of test questions was measured by calculating the sum of scores across all six groups for each individual pre-test question. For example, 22 out of 168 Workers answered the first question correctly in the pre-test, and 33 out of 168 Workers answered the second question correctly. Based on this comparison, we can infer that Question 1 was measurably more difficult for our Workers to answer than Question 2.

Figure 53. Easiness of Questions Based on Pre-Test Scores. The y-axis is the number of Workers who answered the pre-test question correctly; a higher score means more people answered correctly, meaning the question was easier to answer.

We also calculated the difference between the total pre- and post-test scores for each individual question to observe how much teaching effect each video had on that question, as shown below.


Question 1: What is/are the benefit(s) of creating a pedigree?

Figure 54. Question 1 Difference in Pre/Post-Test Scores

Question 2: In a pedigree, what does the symbol above mean?

Figure 55. Question 2 Difference in Pre/Post-Test Scores

Question 3: In a pedigree, what does the symbol above mean?

Figure 56. Question 3 Difference in Pre/Post-Test Scores

Question 4: In an individual human, each gene usually has:

Figure 57. Question 4 Difference in Pre/Post-Test Scores

Question 5: The pedigree below tracks polycystic kidney disease (PKD), a dominant trait, through three generations. If individuals II-3 and II-4 (outlined in blue) have another child, what is the chance that the child would have PKD?

Figure 58. Question 5 Difference in Pre/Post-Test Scores

Question 6: The pedigree below tracks cystic fibrosis (CF), a recessive trait, through three generations. If individuals II-3 and II-4 (outlined in blue) have another child, what is the chance that the child would have CF?

Figure 59. Question 6 Difference in Pre/Post-Test Scores

Overall Full Video Engagement

We compared engagement results across the three full-length videos. Means are represented on visual analog slider scales from 0-100.

Enjoyment

Figure 60. Mean enjoyment for full length videos. Error bars are based on standard error.

The whiteboard animation had the highest mean enjoyment (79.79), followed by the traditional animation (73.86) and lastly, the PowerPoint video (65.16).

Single factor ANOVA analysis (p = 0.001) showed a significant difference between the categories at the p < 0.05 level.

ANOVA for Mean Enjoyment (Full Length Videos)
Source of Variation | SS         | df  | MS        | F        | P-value  | F crit
Between Groups      | 6060.44048 | 2   | 3030.2202 | 6.956444 | 0.001257 | 3.05078701
Within Groups       | 71873.8393 | 165 | 435.59903 |          |          |
Total               | 77934.2798 | 167 |           |          |          |

Table 6. Single factor ANOVA analysis for Mean Enjoyment (Full Length Videos)

The calculated F statistic (6.96) exceeded the critical F value (3.05), so, following our analysis plan, we compared the groups with unpaired two-tailed t-tests of unequal variance.

Unpaired T-Tests of Unequal Variance for Mean Enjoyment (Full Length Videos)
Traditional – Whiteboard | 0.075121073
Traditional – PowerPoint | 0.047233745
Whiteboard – PowerPoint  | 0.000630469

Table 7. Unpaired T-Test of Unequal Variance for Mean Enjoyment (Full Length Videos)

From these results, we can see that there is a significant difference between the traditional animation and the PowerPoint video, and between the whiteboard animation and the PowerPoint video, at the p < 0.05 level. There is no statistically significant difference between the traditional and whiteboard animations at the p < 0.05 level.


Attention

Figure 61. Mean attention for full length videos. Error bars are based on standard error.

The whiteboard animation had the highest mean attention (85.02), followed by the traditional animation (79.91) and lastly, the PowerPoint video (69.5).

Single factor ANOVA analysis (p = 0.0006) showed a significant difference between the categories at the p < 0.05 level.

ANOVA for Mean Attention (Full Length Videos)
Source of Variation | SS       | df  | MS       | F          | P-value    | F crit
Between Groups      | 7005.036 | 2   | 3502.518 | 7.75605335 | 0.00060347 | 3.050787
Within Groups       | 74511.54 | 165 | 451.5851 |            |            |
Total               | 81516.57 | 167 |          |            |            |

Table 8. Single factor ANOVA analysis for Mean Attention (Full Length Videos)

The calculated F statistic (7.76) exceeded the critical F value (3.05), so we again compared the groups with unpaired two-tailed t-tests of unequal variance.

Unpaired T-Tests of Unequal Variance for Mean Attention (Full Length Videos)
Traditional – Whiteboard | 0.157619183
Traditional – PowerPoint | 0.020338443
Whiteboard – PowerPoint  | 0.000192167

Table 9. Unpaired T-Test of Unequal Variance for Mean Attention (Full Length Videos)

From these results, we can see that there is a significant difference between the traditional animation and the PowerPoint video, and between the whiteboard animation and the PowerPoint video, at the p < 0.05 level. There is no statistically significant difference between the traditional and whiteboard animations at the p < 0.05 level.

Understandability

Figure 62. Mean understanding for full length videos. Error bars are based on standard error.

The traditional animation had the highest mean understanding (79.16), followed by the whiteboard animation (75.98) and lastly, the PowerPoint video (65.16).

Single factor ANOVA analysis (p = 0.0015) showed a significant difference between the categories at the p < 0.05 level.

ANOVA for Mean Understanding (Full Length Videos)
Source of Variation | SS         | df  | MS        | F          | P-value  | F crit
Between Groups      | 6033.19048 | 2   | 3016.5952 | 6.75741414 | 0.001511 | 3.05079
Within Groups       | 73658.0893 | 165 | 446.41266 |            |          |
Total               | 79691.2798 | 167 |           |            |          |

Table 10. Single factor ANOVA analysis for Mean Understanding (Full Length Videos)

The calculated F statistic (6.76) exceeded the critical F value (3.05), so we compared the groups with unpaired two-tailed t-tests of unequal variance.

Unpaired T-Tests of Unequal Variance for Mean Understanding (Full Length Videos)
Traditional – Whiteboard | 0.349866681
Traditional – PowerPoint | 0.001272575
Whiteboard – PowerPoint  | 0.013738746

Table 11. Unpaired T-Test of Unequal Variance for Mean Understanding (Full Length Videos)

From these results, we can see that there is a significant difference between the traditional animation and the PowerPoint video, and between the whiteboard animation and the PowerPoint video, at the p < 0.05 level. There is no statistically significant difference between the traditional and whiteboard animations at the p < 0.05 level.


Comparative Engagement

We averaged comparative engagement results across the three short 1-minute clips. Means are represented on visual analog slider scales from 0-100.

Enjoyment

Figure 63. Mean comparative enjoyment for short clips. Error bars are based on standard error.

The traditional animation had the highest mean comparative enjoyment (77.39), followed by the whiteboard animation (71.09) and lastly, the PowerPoint video (41.61).

Single factor ANOVA analysis (p = 1.4 x 10^-27) showed a significant difference between the categories at the p < 0.05 level.

ANOVA for Mean Comparative Enjoyment (Short Clips)
Source of Variation | SS         | df  | MS         | F           | P-value    | F crit
Between Groups      | 81743.1667 | 2   | 40871.5833 | 74.87592638 | 1.4026E-27 | 3.022844822
Within Groups       | 181770.536 | 333 | 545.857465 |             |            |
Total               | 263513.702 | 335 |            |             |            |

Table 12. Single factor ANOVA analysis for Mean Comparative Enjoyment (Short Clips)

The calculated F statistic (74.88) exceeded the critical F value (3.02), so we compared the groups with unpaired two-tailed t-tests of unequal variance.

Unpaired T-Tests of Unequal Variance for Mean Comparative Enjoyment (Short Clips)
Traditional – Whiteboard | 0.027246249
Traditional – PowerPoint | 3.71384E-23
Whiteboard – PowerPoint  | 3.22759E-16

Table 13. Unpaired T-Test of Unequal Variance for Mean Comparative Enjoyment (Short Clips)

From these results, we can see that all three pairwise differences were significant at the p < 0.05 level: between the traditional animation and the PowerPoint video, between the whiteboard animation and the PowerPoint video, and, unlike the full-length results, between the traditional and whiteboard animations (p = 0.027).


Attention

Figure 64. Mean comparative attention for short clips. Error bars are based on standard error.

The traditional animation had the highest mean comparative attention (81.9), followed by the whiteboard animation (78.56) and lastly, the PowerPoint video (47.71).

Single factor ANOVA analysis (p = 1.3 x 10^-26) showed a significant difference between the categories at the p < 0.05 level.

ANOVA for Mean Comparative Attention (Short Clips)
Source of Variation | SS       | df  | MS       | F          | P-value    | F crit
Between Groups      | 79577.8  | 2   | 39788.9  | 71.6524259 | 1.3156E-26 | 3.0228448
Within Groups       | 184916.3 | 333 | 555.3043 |            |            |
Total               | 264494.1 | 335 |          |            |            |

Table 14. Single factor ANOVA analysis for Mean Comparative Attention (Short Clips)

The calculated F statistic (71.65) exceeded the critical F value (3.02), so we compared the groups with unpaired two-tailed t-tests of unequal variance.

Unpaired T-Tests of Unequal Variance for Mean Comparative Attention (Short Clips)
Traditional – Whiteboard | 0.218142007
Traditional – PowerPoint | 1.2715E-20
Whiteboard – PowerPoint  | 2.0397E-16

Table 15. Unpaired T-Test of Unequal Variance for Mean Comparative Attention (Short Clips)

From these results, we can see that there is a significant difference between the traditional animation and the PowerPoint video, and between the whiteboard animation and the PowerPoint video, at the p < 0.05 level. There is no statistically significant difference between the traditional and whiteboard animations at the p < 0.05 level.


Understandability

Figure 65. Mean comparative understanding for short clips. Error bars are based on standard error.

The whiteboard animation had the highest mean comparative understanding (80.82), followed by the traditional animation (76.51) and lastly, the PowerPoint video (51.63).

Single factor ANOVA analysis (p = 1.4 x 10^-18) showed a significant difference between the categories at the p < 0.05 level.

ANOVA for Mean Comparative Understanding (Short Clips)
Source of Variation | SS       | df  | MS       | F         | P-value    | F crit
Between Groups      | 55635.59 | 2   | 27817.79 | 46.587143 | 1.4477E-18 | 3.022845
Within Groups       | 198838.7 | 333 | 597.1131 |           |            |
Total               | 254474.3 | 335 |          |           |            |

Figure 66. Single factor ANOVA analysis for Mean Comparative Understanding (Short Clips)

The calculated F statistic (46.59) exceeded the critical F value (3.02), so we compared the groups with unpaired two-tailed t-tests of unequal variance.

Unpaired T-Tests of Unequal Variance for Mean Comparative Understanding (Short Clips)
Traditional – Whiteboard | 0.160465733
Traditional – PowerPoint | 1.34599E-11
Whiteboard – PowerPoint  | 1.23321E-16

Table 16. Unpaired T-Test of Unequal Variance for Mean Comparative Understanding (Short Clips)

From these results, we can see that there is a significant difference between the traditional animation and the PowerPoint video, and between the whiteboard animation and the PowerPoint video, at the p < 0.05 level. There is no statistically significant difference between the traditional and whiteboard animations at the p < 0.05 level.

Access to Assets Resulting from this Thesis

The whiteboard and traditional animations resulting from this thesis project can be found at www.banyanvisuals.com. The author of this project can be reached through the

Johns Hopkins University School of Medicine Department of Art as Applied to Medicine at medicalart.johnshopkins.edu.


Discussion

Overview

Overall, the traditional and whiteboard animations performed better than the PowerPoint video. All three videos performed equally in knowledge retention, but the traditional and whiteboard videos were much more engaging.

Overall Retention

From our overall retention results, the whiteboard animation performed best in the knowledge retention tests, followed by the traditional animation and finally the PowerPoint lecture. However, there was no significant difference in learner retention between any of the three videos, suggesting that all three had comparable educational value. This is consistent with current literature; empirically, animation did not significantly improve retention when compared to static formats across many studies (Bétrancourt & Berney, 2012). Our finding could be a result of many factors, such as the suitability of animation for this given topic, or a sample size too small to detect significant changes in overall retention. Greater sample sizes could tease out this difference with more resolution.

Individual Retention

Even though a significant difference was not found in overall retention, we analyzed

Worker responses to specific question types that could further distinguish between how

information was presented.


Word Recall (Q1 & Q4)

We wanted to see whether adding written text at the same time as spoken text affected learner results. Questions 1 and 4 in our retention test focused on specific word recall, i.e., whether learners could remember what was said in a specific part of the animation. Both the whiteboard animation and the PowerPoint video had written text appear at the same time as the corresponding narration, whereas the traditional animation had narration only. For both questions, the whiteboard animation had the greatest improvement in correct answers targeting word recall, which suggests that written text appearing simultaneously with narration may improve learning more than narration alone.

However, this contradicts the Redundancy Principle of Multimedia Learning, which

states that presenting the same material in multiple forms concurrently (e.g. text and

narration) interferes with learning. According to previous studies, having text come on screen at the same time as narration would overload the dual channel processing system and lead to cognitive overload and inhibited learning (Kalyuga, Chandler and Sweller

1999). Instead, researchers demonstrated that presenting spoken text first and delaying written text yields less learning inhibition than presenting them concurrently (Kalyuga,

Chandler and Sweller 2004). This contradicts the whiteboard animation model, which employs spoken and written text concurrently as its hallmark (Türkay, 2016).

One reason that our whiteboard animation may have fared better than the Redundancy Principle would predict is that we animated only key narrative text, not all narrative text, in the whiteboard animation. This is similar to emphasizing important words during a lecture. Learners could hear the narration and focus visually on only the key words being written, which reinforces key material. More studies measuring the effect of simultaneous text and speech on learner response in the specific context of whiteboard animation are needed to shed more light on this contradiction.

One explanation for why the PowerPoint video scores for Question 1 were lowest is the presence of extraneous text on screen, which could have led to cognitive overload and extraneous processing. For Question 4, we infer that PowerPoint video scores were low in part because "two alleles" was not explicitly stated in the video (in either narration or images) but only heavily implied within the Genetics review section. In contrast, "two alleles" was specifically written into the animation scripts.

Of interest, both Questions 1 and 4 were difficult for Workers to answer correctly prior to watching any of our videos. This could be because these questions were based on direct phrasing from the OGATP curriculum material. These two questions proved to be a good test of the Redundancy Principle, since Workers were unlikely to know the answers to the pre-test questions.

Figure 67. Question 1 context from video screenshots. Text not intended to be read.


Figure 68. Question 4 context from video screenshots. Text not intended to be read.

Image Recall (Q2 & Q3)

Questions 2 and 3 both tested image recall. In these questions, Workers were asked to identify pedigree symbols based on the video watched. All three videos showed similar images but were stylistically different. Despite the different styles, scores for image recall were very similar across all three video types. The relatively high delta scores for these questions also suggest that viewers tended to remember symbols and images well.

Figure 69. Question 2 context from video screenshots. Text not intended to be read.

Figure 70. Question 3 context from video screenshots. Text not intended to be read.

84 Knowledge Application (Q5 & Q6)

For the last two questions, we wanted to measure the degree of knowledge transfer, seeing if Workers could apply the skill learned in the video to an example situation. The results from these two specific questions were scattered; however, the PowerPoint video consistently performed high compared to the other two. These results were interesting, as the PowerPoint video used only text to relay the pedigree information versus graphics in the other two modalities. Perhaps the text in the PowerPoint video gave more direction to best structure a mental representation of this information to apply it in novel scenarios.

However, because delta scores were low for these two questions, this result may not be robust; a large portion of the Worker pool may already have known how to answer these questions. We may require a larger sample size to clearly measure knowledge application across these modalities.

Figure 71. Question 5 context from video screenshots. Text not intended to be read.

Figure 72. Question 6 context from video screenshots. Text not intended to be read.

Engagement

Both traditional and whiteboard animations scored significantly above PowerPoint video across all three engagement variables: enjoyment, attention and understanding.

This was evident in both the full video engagement survey results and the comparative video engagement survey results, with an even more drastic difference in the comparative results. Engagement is an important measurement, as it is critical to elicit and maintain learner attention (Parette, 2011). Motivation to commit mental resources to learning increases the more engaging a format is (Roberts, 2017), and viewers attend to a video more if it contains dynamic stimuli, reducing attention drop-off and loss of understanding (Pinto, Olivers, & Theeuwes, 2008). Worker comments from our study supported these conclusions, noting that the art style and animated elements made it easier to focus attention on the information and provided a sense of enjoyment (see Appendix D for the full list).

Our test videos ran only about six minutes. Perhaps engagement plays a greater role in learning when a viewer watches an entire 45-minute lecture, or multiple lectures back-to-back. In the future, it would be interesting to test whether better attention over long periods could facilitate improved learning scores. Creators may also want to consider whether greater engagement yields greater learner satisfaction, especially for learners paying tuition. Finally, if educational videos are created for free use on the web, increased engagement is paramount to attracting attention and gaining viewers in the highly competitive educational video market.

While the PowerPoint video had lower engagement results in this study, we must note that it was handicapped by being spliced together from multiple clips of the existing OGATP curriculum. Clip jumps may have affected the PowerPoint video's engagement score, and a better comparison would have had the narrator speak from the same script used in the animations.

The study also could not generalize effectiveness to traditional, whiteboard, and

PowerPoint videos as a whole, since each video used different images and timing.

However, lessons could be applied from the more engaging traditional and whiteboard

videos to future PowerPoint videos to improve engagement. These include tenets from

the Cognitive Theory of Multimedia Learning such as:

• Reducing extraneous information on lecture slides while highlighting key points

• Timing visuals and text on screen to narration

• Using more visuals to help explain concepts

• Using a more conversational script

In the future, various PowerPoint videos could also be compared to each other to test if

the above suggestions affect learning and engagement.

Estimated Costs

An important factor not yet discussed in detail is the cost-effectiveness of each video type, which varies depending on creation time and available resources. This measurement matters so that clients interested in using e-Learning can fully utilize their available budget to obtain the best results. On the production side, we analyzed the time spent and the resulting cost-effectiveness of creating each type of animation.

Below is a table describing the amount of time required to complete each segment of the animation process (storyboarding and asset creation/animation). Asset creation and animation were combined into one category, as they occurred simultaneously and iteratively.

                         | Traditional 2D | Whiteboard
Storyboarding            | 15.3           | 8.5
Asset Creation/Animation | 152.19         | 52.35
Total Hours              | 167.49         | 60.85

Table 17. Time spent on animations (hours)

For this project, the traditional animation required roughly three times more time to create than the whiteboard animation.

This is not a definitive ratio, as time spent may vary from case to case based on several factors, like available software and the varying complexity of animations. After Effects plugins (Animation Composer and AutoWhiteboard, used in the traditional and whiteboard animations respectively) cut down the time needed to create each animation roughly equally; without those plugins, each animation would have taken about 20-30 extra hours to produce. Our production workflow was relatively controlled, with the same script and similar complexity across both animations, so our results can provide some insight for e-Learning creators about the level of effort required to create different types of animations.

Monetary cost for animation production can be estimated by video length and style, or by level of effort. The traditional 2D animation from this study resembles explainer videos in style, creation method, and pacing, and can be compared to explainer videos for pricing. A HubSpot market analysis report surveyed 70 explainer video companies and found that an average of $7,972 was charged for a 60-second explainer video (Ferguson, 2018), which comes out to about $132 per second of animation. At that rate, our 6 minute and 38 second traditional animation would cost roughly $52,500. If measured by effort at a low/modest rate of $100/hour, our traditional animation would cost $16,749.

Whiteboard animations can be a cheaper alternative to traditional animation, depending on the level of complexity and amount of animation. ideaMACHINE Studio, a global whiteboard animation company, prices entry-level whiteboard animations at $2,800 per minute, or $46.67 per second (ideaMACHINE Studio, 2020). The complexity of their entry-level example video is similar to the one created for this study, so our 6 minute and 43 second whiteboard video would come out to roughly $18,800, about a third of the estimated cost of our traditional animation. If measured by effort at a low/modest rate of $100/hour, our whiteboard animation would cost $6,085, which is also about a third of our effort cost for the traditional animation.
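The estimates above reduce to rate-times-length arithmetic; a quick sketch using the quoted market rates and logged hours:

```python
EXPLAINER_RATE = 132.0   # USD per second, from the HubSpot survey average
WHITEBOARD_RATE = 46.67  # USD per second, ideaMACHINE entry-level pricing

traditional_len = 6 * 60 + 38  # 398 s
whiteboard_len = 6 * 60 + 43   # 403 s

print(round(EXPLAINER_RATE * traditional_len))  # -> 52536, roughly $52,500
print(round(WHITEBOARD_RATE * whiteboard_len))  # -> 18808, roughly $18,800

# Effort-based estimates at a low/modest $100/hour
print(167.49 * 100, 60.85 * 100)  # -> 16749.0 6085.0
```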

As demonstrated above, there can be a wide price range for video production, often depending on market type. Commercial explainer animations for businesses and enterprises generally command higher rates than academic work, but the cost ratio between animation types remains roughly the same.

Out of the three videos, the PowerPoint modality would be the least expensive teaching method, requiring the fewest resources to create. According to Lindsay Ledebur, an e-Learning designer who helped create PowerPoint lecture videos for the OGATP, a 45-minute lecture video split into 8-minute segments would cost roughly $350 to create. However, this estimate accounts only for video filming and editing, and does not include the greater cost of the instructor's salary for time spent developing the curriculum and narrating the videos.


Future Considerations

With more time and funding, some future study design considerations could include:

• Re-creating the PowerPoint lecture video from scratch instead of compiling clips from an existing curriculum. Aligning the script and pacing of the PowerPoint video with the other animations would reduce confounding variables that could affect learner engagement or retention.

• Increasing the sample size, which would give more certainty to our findings.

• Creating a method to ensure study participants watch the video only once within the Qualtrics module, as repeat viewings could have a significant impact on retention results.

• Keeping a more stringent testing schedule to ensure participant consistency, for example testing only from 1-4PM on weekdays, or, if possible, running all the tests at once so that time and day are less of a variable.

Future studies could analyze the effect of style on retention and engagement. One example is learner response to animation styles. A flat, vector cartoon style was applied to both whiteboard and traditional animations for this particular study. However, other styles (like hand-drawn or realistic) were untested but could potentially play a large role in learner response.

In the same vein, it is important to know what kind of animation suits a given topic. For our study, the symbol-heavy content involved in pedigree creation seemed to benefit from simplified pictorial depictions (found in our whiteboard and traditional animations). However, this may not be the case for all topics and should be investigated further.

Open-ended survey comments left by Workers (see Appendix D) could be a good source of feedback for future iterations of the study. Several distinct points were made by multiple Workers (e.g., "I thought the video went too fast to fully comprehend the information." for the whiteboard animation) that targeted specific aspects of each video to improve.

Finally, integrating interactivity into this study could yield even more interesting results, as this format of education promotes active learning. Common e-Learning tools like interactive web modules allow the user to interact with the provided content. Interactivity may have different effects on retention and engagement, which would be interesting to measure.


Conclusion

This study provides insight into multimedia creation for e-Learning while considering budgetary and deadline constraints. Our study showed that traditional, whiteboard, and

PowerPoint videos are about equal in their ability to teach, but the whiteboard and

traditional animations were more engaging than the PowerPoint lecture.

While the whiteboard and traditional animations increase learner engagement, they cost much more to create than the PowerPoint video. Creators must weigh the added benefits of learner engagement against cost and the goals of the video. If budget does not allow for full animation, creators can still consider shorter animated clips to keep viewers engaged during longer videos. Additionally, best practices from our tested animations and tenets from the Cognitive Theory of Multimedia Learning can be applied to future PowerPoint videos to improve their engagement.

In our study, we discovered that the whiteboard animation's use of concurrent on-screen text and narration performed better than the traditional animation's narration alone, which runs against the Redundancy Principle. In the future, we would like to test this idea more specifically by examining videos of the same type (traditional animation, whiteboard animation, or PowerPoint video) and testing for one variable at a time.

The methods detailed in this project provide a framework for survey-based e-Learning

studies in the future. Amazon MTurk is an efficient and anonymized way to recruit study

participants and to collect a large amount of data quickly. In addition, data collection

software like JHM Qualtrics is versatile and can be adapted for many types of studies.


In summary, we hope our findings will benefit the ever-growing online learning community. Below is a chart summarizing the results of our study.

Figure 73. Video comparison study summary. Not all text intended to be read.

APPENDIX A: Whiteboard Animation Storyboard

Figure 74. Whiteboard animation storyboard, page 1
Figure 75. Whiteboard animation storyboard, page 2
Figure 76. Whiteboard animation storyboard, page 3
Figure 77. Whiteboard animation storyboard, page 4
Figure 78. Whiteboard animation storyboard, page 5
Figure 79. Whiteboard animation storyboard, page 6
Figure 80. Whiteboard animation storyboard, page 7

APPENDIX B: Traditional Animation Storyboard

Figure 81. Traditional animation storyboard, page 1
Figure 82. Traditional animation storyboard, page 2
Figure 83. Traditional animation storyboard, page 3
Figure 84. Traditional animation storyboard, page 4
Figure 85. Traditional animation storyboard, page 5
Figure 86. Traditional animation storyboard, page 6
Figure 87. Traditional animation storyboard, page 7

APPENDIX C: Qualtrics Survey Module

Figure 88. Qualtrics module, page 1
Figure 89. Qualtrics module, page 2
Figure 90. Qualtrics module, page 3
Figure 91. Qualtrics module, page 4
Figure 92. Qualtrics module, page 5
Figure 93. Qualtrics module, page 6
Figure 94. Qualtrics module, page 7
Figure 95. Qualtrics module, page 8
Figure 96. Qualtrics module, page 9
Figure 97. Qualtrics module, page 10

APPENDIX D: Survey Free-Response Comments

Traditional Animation

• "I would like to see the same information for eye color. this was very informative"
• "I thought it was educational and entertaining."
• "I found it somewhat complicated, and I'd assume it would be for anyone without a medical background in genes and such."
• "Great way to explain Mendelian genetics! My husband and I were actually just talking about this some time ago, and if I can find this vid on YouTube, I may show it to him! Very easy to understand."
• "Started getting lost when trying to learn the big W and little w"
• "I really like how the information was presented in a way that was easy to understand."
• "It was clear and understandable."
• "I never knew that this was the method for determining likelihood of trait inheritance, I found it very interesting."
• "I got confused about a quarter of the way through the video"
• "I liked the demonstrations of what the narrator was saying during the video. It made it easier to understand."
• "It was easy to understand and absorb the information given."
• "It was a good video. Pretty simple to follow."
• "The animation was very well done. The whole video seemed very polished and high quality."
• "I liked the visuals. They sort of helped me understand things"
• "Video was very enjoyable, it was very well put together easy to understand. I like the animations which makes it even more easier to understand."
• "It was probably more interesting than other formats that likely would have been much more boring if not confusing"


Whiteboard Animation

• "This way of presenting the information was more engaging to me than hearing the same information but as a lecture or PowerPoint."
• "Liked the video and how simple it was to understand. I would actually have to watch it more to fully understand and to get the hang of it all. It was very interesting to watch though. Thanks!"
• "She went through it so fast. The information needed a couple of examples to stick."
• "I thought the video went too fast to fully comprehend the information."
• "The video was a bit long. I also found the style of watching the woman's hand draw things to be distracting and take away from the information being presented."
• "It was very interesting. But I would have learned a lot more if it was not so fast"
• "It was a super interesting video. I like how easily a fairly complex topic was broken down for someone like me who has little knowledge of the subject"
• "I want to look up why the term ʻsyndromeʻ was used in woolly hair syndrome"
• "The art style was good."
• "I felt like more examples needed to be given. Also, I had to go backward a bit to RE-watch certain items that went too fast for me. :("
• "Everything was pretty layman's terms, so I enjoyed that."
• "I thought that I'd be bored watching it but I think it really taught me a lot and held my attention."
• "It was somewhat boring"
• "It was easy to follow along with even though I knew next to nothing about the topics discussed."
• "The drawings made the complicated subject matter easier to process."
• "It was easy to understand, just a lot of information to retain in 6 minutes."
• "It was probably about as simple as it could be explained for such a complex topic with so many variables."
• "I like the animations; the pictures held my interest better than just words"


PowerPoint Video

• "I like the way the speaker "taught""
• "I would have liked more examples of diseases or conditions being passed."
• "Was somewhat interesting, just a very irrelevant topic for my interest. Tried my best to pay attention!"
• "I actually very much enjoyed the whole thing. I liked the instructor, and I also liked the presentation. You made this morning very interesting. This wasn't taught when I was in high school, and I never took any college biology classes."
• "If this video is intended with people who are unfamiliar with the information, I would add more graphics for comprehension and retention."
• "The presentation was easy to follow and comprehend."
• "It went very quickly, but I could have paused it and watched it over if something wasn't clear."
• "I like that the teacher was actually in the video speaking about the concepts instead of just having to read PowerPoint slides alone. It is nice to be guided"
• "Need to make it funnier to be more interesting"
• "It would have been more enjoyable if the text wasn't something that looked straight out of a book. It also felt like the woman was pretty much just reading what the text said and not really interacting."
• "Very highly technical terms and explanations."
• "She explained a lot in a simple manner to comprehend. Not using fancy terms that no one would understand."
• "The video makes the topic look interesting"
• "I thought it was great. I do believe there should have been an overall review at the end of everything discussed but the video overall was very enjoyable and the speaker was great."


References 1. Air, Jon, Eric Oakland, and Chipp Walters. The Secrets Behind the Rise of Video

Scribing. Bristol, UK: Sparkol Books, 2015.

2. Amazon Mechanical Turk. “Tutorial: Getting Great Survey Results from MTurk and

SurveyMonkey.” Medium. Happenings at MTurk, July 31, 2017.

https://blog.mturk.com/tutorial-getting-great-survey-results-from-mturk-and-

surveymonkey-f49c2891ca6f.

3. Bétrancourt, Mireille, and Sandra Berney. “Animation and Learning.” Encyclopedia

of the Sciences of Learning, 2012, 252–54. https://doi.org/10.1007/978-1-4419-1428-

6_31.

4. Bétrancourt, Mireille. “The Animation and Interactivity Principles in Multimedia

Learning.” The Cambridge Handbook of Multimedia Learning, 2005, 287–96.

https://doi.org/10.1017/cbo9780511816819.019.

5. “Chi-Square Tests for Categorical Data | AP® Statistics.” Khan Academy. Khan

Academy. Accessed March 3, 2020. https://www.khanacademy.org/math/ap-

statistics/chi-square-tests/chi-square-goodness-fit/v/chi-square-statistic?modal=1.

6. Falvo, David A. “Animations and Simulations for Teaching and Learning Molecular

Chemistry.” International Journal of Technology in Teaching and Learning 4, no. 1

(2008): 68–77.

7. Ferguson, Samantha. “How Much Does an Explainer Video Actually Cost? [New

Data].” HubSpot Blog. Accessed March 2, 2020.

https://blog.hubspot.com/marketing/explainer-video-cost.

121

8. Ginns, Paul, Andrew J. Martin, and Herbert W. Marsh. “Designing Instructional Text

in a Conversational Style: A Meta-Analysis.” Educational Psychology Review 25, no.

4 (July 2013): 445–72. https://doi.org/10.1007/s10648-013-9228-0.

9. Ginder, S.A., Kelly-Reid, J.E., and Mann, F.B. Enrollment and Employees in

Postsecondary Institutions, Fall 2017; and Financial Statistics and Academic

Libraries, Fiscal Year 2017: First Look (Provisional Data) (NCES 201021rev). U.S.

Department of Education. Washington, DC: National Center for Education Statistics.

2018.

10. “How Much Does a Whiteboard Video Cost?” ideaMACHINE Studio. Accessed

March 2, 2020. https://www.whiteboardanimation.com/cost.

11. “How Whiteboard Animation Changed Online Video.” IdeaRocket, November 18,

2019. https://idearocketanimation.com/15321-whiteboard-animation-history/

12. Khacharem, Aïmen, Ingrid A.e. Spanjers, Bachir Zoudji, Slava Kalyuga, and Hubert

Ripoll. “Using Segmentation to Support the Learning from Animated Soccer Scenes:

An Effect of Prior Knowledge.” Psychology of Sport and Exercise 14, no. 2 (2013):

154–60. https://doi.org/10.1016/j.psychsport.2012.10.006.

13. Lee, Bongshin, Rubaiat Habib Kazi, and Greg Smith. “SketchStory: Telling More

Engaging Stories with Data through Freeform Sketching.” In IEEE Transactions on

Visualization and Computer Graphics, 12th ed., 19:2416–25, 2013.

14. Lewis, Kadriye O, Michal J Cidon, Teresa L Seto, Haiqin D Chen, and John

undefined Mahan. “Leveraging e-Learning in Medical Education.” Current Problems

in Pediatric and Adolescent Health Care 44 (2014): 150–63.

https://doi.org/10.1016/j.cppeds.2014.01.004.

122

15. Mayer, Richard E. Multimedia Learning. Cambridge: Cambridge University Press,

2009.

16. Mayer, Richard E., and Richard B. Anderson. “Animations Need Narrations: An

Experimental Test of a Dual-Coding Hypothesis.” Journal of Educational Psychology

83, no. 4 (1991): 484–90. https://doi.org/10.1037/0022-0663.83.4.484.

17. Mayer, Richard E., and C. Scott Dapra. “An Embodiment Effect in Computer-Based

Learning with Animated Pedagogical Agents.” Journal of Experimental Psychology:

Applied 18, no. 3 (2012): 239–52. https://doi.org/10.1037/a0028616.

18. McLaren, Bruce M., Krista E. Deleeuw, and Richard E. Mayer. “Polite Web-Based

Intelligent Tutors: Can They Improve Learning in Classrooms?” Computers &

Education 56, no. 3 (2011): 574–84. https://doi.org/10.1016/j.compedu.2010.09.019.

19. Moreno, Roxana, and Richard E. Mayer. “A Coherence Effect in Multimedia

Learning: The Case for Minimizing Irrelevant Sounds in the Design of Multimedia

Instructional Messages.” Journal of Educational Psychology 92, no. 1 (2000): 117–

25. https://doi.org/10.1037/0022-0663.92.1.117.

20. Ozcelik, Erol, Ismahan Arslan-Ari, and Kursat Cagiltay. “Why Does Signaling

Enhance Multimedia Learning? Evidence from Eye Movements.” Computers in

Human Behavior 26, no. 1 (2010): 110–17. https://doi.org/10.1016/j.chb.2009.09.001.

21. Paolacci, Gabriele, and Johannes Boegershausen. “Excluding MTurk Workers Who

Participated in Your Previous Studies: An Excel Solution,” March 1, 2018.

22. Parette, Howard P., Jack Hourcade, and Craig Blum. “Using Animation in Microsoft

PowerPoint to Enhance Engagement and Learning in Young Learners with

123

Developmental Delay.” TEACHING Exceptional Children 43, no. 4 (2011): 58–67.

https://doi.org/10.1177/004005991104300406.

23. Peer, Eyal, Gabriele Paolacci, Jesse Chandler, and Pam Mueller. “Screening

Participants from Previous Studies on Amazon Mechanical Turk and Qualtrics.”

experimentalturk.files.wordpress.com, May 2, 2012.

https://experimentalturk.files.wordpress.com/2012/02/screening-amt-workers-on-

qualtrics-5-2.pdf.

24. Pinto, Y., C. N. L. Olivers, and J. Theeuwes. “Selecting from Dynamic

Environments: Attention Distinguishes between Blinking and Moving.” Perception &

Psychophysics 70, no. 1 (January 2008): 166–78. https://doi.org/10.3758/pp.70.1.166.

25. Plass, Jan L., Bruce D. Homer, and Elizabeth O. Hayward. “Design Factors for

Educationally Effective Animations and Simulations.” Journal of Computing in

Higher Education 21, no. 1 (2009): 31–61. https://doi.org/10.1007/s12528-009-9011-

x.

26. Roberts, David. “The Engagement Agenda, Multimedia Learning and the Use of

Images in Higher Education Lecturing: or, How to End Death by PowerPoint.”

Journal of Further and Higher Education 42, no. 7 (2017): 969–85.

https://doi.org/10.1080/0309877x.2017.1332356.

27. “Sample Size Calculator: Understanding Sample Sizes.” SurveyMonkey. Accessed

March 3, 2020. https://www.surveymonkey.com/mp/sample-size-calculator/.

28. Spanjers, Ingrid A. E., Tamara van Gog, and Jeroen J. G. van Merriënboer. “Segmentation of

Worked Examples: Effects on Cognitive Load and Learning.” Applied Cognitive

Psychology 26, no. 3 (2011): 352–58. https://doi.org/10.1002/acp.1832.


29. Sweller, John, Paul Chandler, Paul Tierney, and Martin Cooper. “Cognitive Load as a

Factor in the Structuring of Technical Material.” Journal of Experimental

Psychology: General 119, no. 2 (1990): 176–92. https://doi.org/10.1037/0096-

3445.119.2.176.

30. Tavangarian, Djamshid, Markus E. Leypold, Kristin Nolting, Marc Roser, and Denny

Voigt. “Is e-Learning the Solution for Individual Learning?” Electronic Journal of e-

Learning 2, no. 2 (2004): 273–79. www.ejel.org.

31. Türkay, Selen. “The Effects of Whiteboard Animations on Retention and Subjective

Experiences When Learning Advanced Physics Topics.” Computers & Education 98

(2016): 102–14. https://doi.org/10.1016/j.compedu.2016.03.004.

32. Vaughn, Kalif E., Jeremy Cone, and Nate Kornell. “A User's Guide to Collecting Data

Online.” In Handbook of Research Methods in Human Memory, 2018.

33. Wong, Anna, Wayne Leahy, Nadine Marcus, and John Sweller. “Cognitive Load

Theory, the Transient Information Effect and e-Learning.” Learning and Instruction

22, no. 6 (2012): 449–57. https://doi.org/10.1016/j.learninstruc.2012.05.004.


Vita

Jenny Wang was born in the city of Lanzhou, China, the capital of Gansu Province in

Northwest China. She spent the first five years of her life in the countryside with her grandparents, cultivating a taste for spicy foods. She then immigrated to New Jersey, where she spent most of her time drawing and making animated flipbooks.

In 2012, Jenny left the east coast to pursue the sciences at Washington University in

St. Louis, receiving a degree in biology and a minor in fine art. Her last year in the

Midwest was spent working in a pediatric neurology lab and volunteering as an art therapy mentor at the WashU Medical School, where she talked with oncology patients and collected patient histories from pediatric patients diagnosed with multiple sclerosis. After these experiences, she felt acutely the need for better patient education and became determined to pursue a career in medical illustration.

Jenny is currently a second-year medical illustration graduate student at the Johns

Hopkins University School of Medicine. Under the guidance of excellent faculty, her passion for animation was rekindled. She received the Frank H. Netter, MD

Memorial Scholarship in Medical Art during her first year of study. In March of 2020, she was honored to be a recipient of the Alan Cole Scholarship from the Vesalius Trust for her thesis proposal. Jenny is scheduled to receive her Master’s degree in Medical and

Biological Illustration in May of 2020. Her goal of improving healthcare education burns bright: she aims to use vibrant animation and effective design to communicate difficult topics in the scientific and medical fields.
