Preservice Teachers’ Perceptions of Artificial Intelligence Tutors for Learning

A dissertation presented to

the faculty of

The Gladys W. and David H. Patton College of Education

In partial fulfillment

of the requirements for the degree

Doctor of Philosophy

Federica Incerti

May 2020

© 2020 Federica Incerti. All Rights Reserved.

This dissertation titled

Preservice Teachers’ Perceptions of Artificial Intelligence Tutors for Learning

by

FEDERICA INCERTI

has been approved for

the Department of Educational Studies

and The Gladys W. and David H. Patton College of Education by

Greg Kessler

Professor of Educational Studies

Renée A. Middleton

Dean, The Patton College of Education


Abstract

INCERTI, FEDERICA, Ph.D., May 2020, Instructional Technology.

Preservice Teachers’ Perceptions of Artificial Intelligence Tutors for Learning

Director of Dissertation: Greg Kessler

The purpose of this non-experimental study was to examine the concerns of preservice teachers regarding the Amazon Echo, powered by Alexa, as measured by the Stages of Concern Questionnaire (SoCQ) (George, Hall, & Stiegelbauer, 2006). The study participants were preservice teachers enrolled in a technology course at a Midwestern university (n = 124). Two two-way multivariate analyses of variance (MANOVAs) were conducted to determine whether there were statistically significant differences in means between the dependent variables, the preservice teachers’ Stages of Concern (Stages 0-6), and the independent variables of gender, projected grade to teach, and geographical teaching area. Results from the SoC Questionnaire are as follows: Preservice teachers would like to receive more information regarding the Amazon Alexa as a tool for teaching in formal and informal settings. The two-way MANOVA models did not reveal conclusive results; low statistical power and small effect sizes may have affected the overall model results. Implications for preservice teacher training, suggestions for further research, and the limitations of this study are discussed.


Dedication

To my late nephew Davide Ciuffreda (1/29/2007 - 1/3/2018).

Full many a flower is born to blush unseen,

and waste its sweetness on the desert air.

~ Thomas Gray, 1751

Thank God, I will see you again!

Al mio carissimo nipote Davide Ciuffreda (1/29/2007 - 1/3/2018).

Ci sono fiori che nascono per poi fiorire in silenzio

disperdono l’essenza della loro fragranza nell’aria del deserto.

~ Thomas Gray, 1751

Grazie a Dio ti rivedrò!

<><


Acknowledgments

First, I thank God for His faithfulness and for giving me the strength to complete this journey (Isaiah 41:29). I thank my advisor and Chair of this committee, Dr. Gregory Kessler, for his guidance and encouragement during my doctorate studies and the dissertation journey, and during the preparation of this document. Thank you to all my dissertation committee members: Thank you, Dr. Teresa Franklin, for your endless help, mentorship, and unwavering support over the years and when I most needed it. Thank you, Dr. Alan Wu, for the immeasurable amount of help facilitating the data collection.

You made this step very easy! Thank you, Dr. Gordon Brooks, for helping me shape this project, and for your patience in working with me on the statistical procedures. Thank you, Dr. Danielle Dani, for your suggestions and for making this research more meaningful.

This degree would not have been possible without the support of my family.

Thank you, Mom & Dad, Primo Incerti and Ilde Rosati, my sister Antonella Incerti, and her husband Michele Ciuffreda. Special thanks to Peter Winant, Director of the School of Art at GMU, for your leadership and encouragement to finish this project. A special thanks to Elyce Friedman for your help with editing – you are a Godsend! Thank you to my Athens friends and extended family, the Thom & Beth Roell family, and all my close OU friends and colleagues who supported me over the years while navigating grad school. Thank you also to all my Virginia extended family. Thank you: Dean Rick Davis for the encouragement to finish this degree, Cheryl Buckley for your encouragement and help at work, Gail Scott White and Kirby Malone for your encouragement and support, and Philip Butts – you are such a blessing! Thank you, Laura Workman, for your prayers and support, and Johnny Baker and Ellyn Whitt for making me part of your extended family. Thank you to all my students and student workers – your enthusiasm and your thirst for knowledge gets me up in the morning!

Dulcis in fundo, I thank the Incerti, Rosati, and Benelli families. You are wonderful uncles, aunts, and cousins! A special thanks to my Italian friends: Daria Arini, thank you for your support and comfort; there are no words to express how much you sustained us in our time of need. Samantha Chiossi, Graziana Campani, and Francesca Salardi, thank you for your support, your prayers, and your help and encouragement to press forward with strength through all these years.


Table of Contents

Page

Abstract ...... 3
Dedication ...... 4
Acknowledgments ...... 5
List of Tables ...... 10
List of Figures ...... 11
Chapter 1: Introduction ...... 12
Background of the Study ...... 12
Research Problem ...... 23
Purpose of the Study ...... 27
Significance of the Study ...... 29
Target Audience for This Research ...... 30
Research Questions ...... 30
Definitions of Terms ...... 31
Limitations of the Study ...... 36
Delimitation ...... 37
Organization of the Chapters ...... 40
Chapter Summary ...... 40
Chapter 2: Review of Related Literature ...... 41
A New Generation of Learners ...... 41
Formal Learning ...... 42
Informal Learning ...... 44
Bridging Formal and Informal Learning ...... 46
Constructivism and Social Constructivism and AI Tutors ...... 48
Electronic Tutors, Virtual Assistants, Chatbots ...... 52
Artificial Intelligence ...... 53
AI-Compliant Culture and Society ...... 55
Moral Issues with AI Systems ...... 58
Artificial Intelligence in Education ...... 60
AI Tutors in Gaming ...... 62
AIs - Video Games Versus Tutors ...... 64
Different Kinds of AI Tutors in Education ...... 65
Education and Implementation Challenges for AI ...... 67
Device Embedded AI Tutors ...... 68
Amazon Echo and Alexa ...... 69
Amazon Echo (Alexa) in the Lives of People ...... 71
Amazon Echo, Technical Perspective ...... 73
Amazon Alexa Custom Applications ...... 74
Amazon Alexa in Education ...... 76
Alexa and Data Security and Privacy Risks ...... 80
Preservice Teachers’ Preparation ...... 81
Preservice Teachers and Digital Adoption ...... 83
Generation Z ...... 86
Teaching in Different Socio-Economic Geographical Areas ...... 88
Gender ...... 91
Educational Change Models ...... 92
The Diffusion of Innovation Theory ...... 93
Unified Theory of Acceptance and Use of Technology (UTAUT) ...... 96
Concerns-Based Adoption Model ...... 97
Chapter Summary ...... 101
Chapter 3: Methodology ...... 102
Research Design ...... 102
Quantitative Research ...... 103
Population and Sample ...... 104
Instrument and Measures ...... 105
Reliability ...... 108
Validity ...... 110
Exploratory Factor Analysis ...... 111
Data Collection Procedure ...... 112
Ethical Considerations ...... 114
Piloting ...... 114
Primary Data Collection ...... 116
Data Analysis Procedure ...... 117
Chapter Summary ...... 125
Chapter 4: Data Analysis and Results ...... 126
Data Organization ...... 127
Question 1 Results ...... 131
Question 2 Results ...... 144
MANOVA Procedure 1 - Gender and Geographical Area Type ...... 147
MANOVA Procedure 2 - Gender and Grade Planned to Teach ...... 148
Chapter Summary ...... 151
Chapter 5: Discussion and Conclusion ...... 153
Research Overview ...... 153
Research Question 1 Results Interpretation ...... 153
Additional Considerations from Literature ...... 157
Research Question 2 Results Interpretation ...... 159
Considerations and Implications ...... 163
Suggestions for Future Research ...... 165
Chapter Summary ...... 166
References ...... 167
Appendix A: IRB Approval ...... 213
Appendix B: IRB Amendment ...... 214
Appendix C: Permission to Use and Adapt the CBAM Survey ...... 215
Appendix D: Informed Consent ...... 217
Appendix E: Stages of Concern Questionnaire ...... 219
Appendix F: Graphic Permissions ...... 223
Appendix F: Graphic Permissions (Cont.) ...... 224
Appendix G: Stages of Concern Quick Scoring Device ...... 225
Appendix H: SPSS Calculations ...... 227
Appendix I: MANOVA Assumptions ...... 231
Appendix J: Exploratory Factor Analysis ...... 233


List of Tables

Page

Table 1 Coefficient of Internal Reliability of the Stages of Concern Questionnaire ...... 109
Table 2 Test – Retest Correlation on the Stages of Concern Questionnaire (n=132) ...... 109
Table 3 Percent of Respondents’ Highest Stage of Concern, Initial Stratified Sample ...... 110
Table 4 Data Analysis as it Relates to Research Questions ...... 118
Table 5 Data Analysis Performing MANOVAs ...... 120
Table 6 Biographical Information: Gender for n=124 ...... 128
Table 7 Biographical Information: Years of Service; Academic Standing; Age ...... 128
Table 8 Biographical Information: Major and Grade to Teach for n=124 ...... 129
Table 9 Kaiser-Meyer-Olkin and Bartlett’s Test ...... 131
Table 10 Descriptive Statistics – Highest and Second Highest ...... 132
Table 11 Statistics – Highest Score by Stage ...... 133
Table 12 Stages of Concern Peaks – Highest and Second Highest ...... 141
Table 13 Question 2 – Items, Variables, Data Types, and Scale Types ...... 144
Table 14 Gender and Plan to Teach Box’s Test of Equality of Covariance Matrices ...... 146
Table 15 Gender and Area Type Box’s Test of Equality of Covariance Matrices ...... 147
Table 16 MANOVA Data Analysis - Multivariate Test – Area Type and Gender ...... 148
Table 17 MANOVA Data Analysis - Multivariate Test – Plan to Teach and Gender ...... 149
Table 18 Crosstabulation for Area Type and Second Highest Peak Stages of Concern ...... 150
Table 19 Crosstabulation for Gender and Second Highest Peak Stages of Concern ...... 150
Table 20 Crosstabulation for Plan to Teach and Second Highest Peak Stages of Concern ...... 151


List of Figures

Page

Figure 1. Usage of virtual assistants among Internet users Worldwide by age ...... 17
Figure 2. U.S. Voice-Enabled Speaker Usage Share, by Player 2017 ...... 18
Figure 3. The Concerns-Based Adoption Model ...... 388
Figure 4. The Stages of Concern About an Innovation ...... 988
Figure 5. Typical Expressions of Concern About an Innovation ...... 9999
Figure 6. Statements on the SoC Questionnaire Arranged According to Stage ...... 1076
Figure 7. Hypothesized Development of Stages of Concern ...... 1074
Figure 8. Descriptive Statistics of 7 SoC Raw Scores Used for Calculations ...... 127
Figure 9. Preservice Teachers’ Peak Stages of Stages of Concern (n=124) ...... 134
Figure 10. Levels of Use of the Innovation ...... 135
Figure 11. Preservice Teachers’ Peak Stages of Concern (n=124) ...... 137
Figure 12. Typical Nonuser SoCQ ...... 138
Figure 13. Preservice Teachers’ Second Highest Peak Stages of Concern (n=124) ...... 140
Figure 14. Preservice Teachers’ Highest & Second Highest Peak Stages of Concern ...... 142


Chapter 1: Introduction

Background of the Study

If we teach today as we taught yesterday, we rob our children of tomorrow (Dewey, 1944, p. 167).

For the past few decades, technology has stood as one of the predominant driving forces in our society (de Miranda, 2009; Peixiao, Man & Gang, 2016; Xu, Hao, & Han, 2017). The latest computer-based innovations infused with artificial intelligence have impacted all areas of our lives, including communication, medicine, business, politics, labor markets, science and education (Giddens, 2018; Mohammed, 2019; Reed, 2014).

Today, technology remains at the forefront of education, which is the progressive ladder for human society: “Education is critical to the development of a nation and even in the whole human society” (Wang, Liu, An, Li, Li, Chen, ... & Gu, 2018, p. 129). The latest advancements in the fields of machine learning, knowledge reasoning, and deep learning are projecting us into the intelligence age (Mohammed, 2019). This is the age in which educational resources are increasingly available, and more flexible modes, patterns, and multi-variant intelligence systems can aid teaching. During the next decade, the educational arena can expect great changes (Wang et al., 2018).

Researchers indicate that there is a strong relationship between students’ learning and their academic achievement, and that the use of technology can increase both student motivation and achievement (Kimbell-Lopez, Cummins & Manning, 2016; Spector & Park, 2017; Torres & Statti, 2019). A couple of decades ago, digital applications for learning were confined to classroom usage. Today, these types of technologies are widely available outside of academic settings. Many devices and applications are often marketed as interactive and educational. This entices consumers to purchase devices infused with applications that can support educational functions along with infotainment (Xu, Hao, & Han, 2017). Examples of these newer technologies include smart devices, tablets, and mobile technologies which promise children fun learning apps and promise adults internet assistance, increased connectivity, and improvement in their quality of life (Google, 2017; Wireless Watch, 2017).

A few years ago, Amazon launched the first talking speaker, an electronic assistant called the Amazon® Echo (Amazon, 2019). During the 2015 Christmas holidays, this item became so popular that it sold out on the Amazon® site (Leswing, 2016). Endowed with a voice named Alexa, the Amazon® Echo was predicted to become a billion-dollar business by 2020, with over 60 million devices sold (Kharpal, 2017). Since Alexa’s birth in November 2014, Amazon® has increased its number of applications and improved their uses, including learning applications and learning games. Currently, some K-12 schools have incorporated Alexa into their classrooms (Day, 2019).

As use of this new technology increases among consumers, the need to study the impact of emerging technologies in education becomes crucial. The NMC Horizon Report: 2016 Higher Education Edition (Johnson, Becker, Cummins, Estrada, Freeman, & Hall, 2016), in its discussion of important developments in educational technology for higher education, stated that there are many significant emerging technologies which “may be relevant to learning and creative inquiry” (Johnson et al., 2016, p. 34). These types of new technologies for learning develop frequently and saturate the market, making it difficult for teachers to adopt them for classroom use. The intricacy of these new technologies requires teachers to commit additional time and effort to cope with the challenges of learning how to use these new devices and how to implement them in their classrooms and curriculums. To accomplish this, additional professional development is required to teach instructors to maintain their technology and to educate them about these new innovations for learning (Jogezai, Ismail & Ahmed, 2016; Johnson et al., 2016).

Guided effort is necessary to incorporate new technologies into curriculum. The Horizon Report asserted that “it is clear that simply capitalizing on emerging technology is not enough” (Johnson et al., 2016, p. 26); new teaching approaches must incorporate these “tools and services to engage students on a deeper level and ensure academic quality” (Johnson et al., 2016, p. 26). The successful integration of these devices into the classroom depends upon learning institutions accepting responsibility for preparing faculty, planning the transition to innovation use, and ensuring the quality of the new learning experiences for students (Johnson et al., 2016).

Today’s students, who were raised with technology and who own innovative digital technologies, routinely bring their devices to the classroom. They use a variety of digital applications at home to aid in learning and to complete school assignments (Fuhrman, 2015; Grush, 2016; Selwyn, Nemorin, Bulfin, & Johnson, 2017). As technology plays a progressively larger role in the academic lives of students, proper instruction and positive habits for its use have become vital for learning success (Johnson et al., 2016).

Another important factor affecting the role of technology in the classroom is the expected increase in classroom population. Between 2013 and 2025, national enrollment in public schools is projected to increase by 3 percent, from 55.4 million students to 57.1 million students. It is estimated that the pupil/teacher ratio will decrease from 15.6-15.9 to 15.0 (Hussar & Bailey, 2017). During the same period, other predictions indicate that high schools and post-secondary schools will also experience an increase in enrollment: From 2014 to 2025, postsecondary institutions will increase their enrollment by 15 percent, from 20.2 million to 23.3 million (Hussar & Bailey, 2017). This increase in school population, along with a decrease in the amount of resources allocated for education (Executive Office of the President of the United States, Office of Management and Budget, 2018), intensifies the importance of understanding the relationship between innovative digital applications and education (Hussar & Bailey, 2017). With these increasing enrollments in our schools and the ever-increasing plethora of new devices entering the classroom, the need for preservice teachers’ mastery of technological tools that support curriculum is imperative. Effective teachers in the 21st century must be technologically literate (Richardson, 2013), or in the future, educators will lose the competition with electronics for learners’ attention (McCoy, 2016).

Over the past few years, the most innovative applications have involved smart technologies that include complex applications designed with artificial intelligence (AI) techniques. These new electronics are transforming technology in education and all areas of society (Fuhrman, 2015; Grush, 2016; Hammond, 2015; Kurshan, 2016; Nogrady, 2016; Russel, 2016). Some of the most popular inventions among these advanced technologies are conversational interfaces, or intelligent virtual assistants (McTear, Callejas, & Griol, 2016). These application programs are usually embedded in portable devices like cellular phones or tablets, and they can understand voice-activated commands to perform various tasks (McTear et al., 2016).

This use of conversational interfaces seems to be more pronounced for portable applications like Apple® Siri, Microsoft® Cortana, and Amazon® Alexa. Consumers of these technologies employ these applications for learning and an assortment of other uses (Kiseleva et al., 2016; Shead, 2017; Wireless Watch, 2017). Statistics support the increased popularity of these digital conveniences, especially among the younger generation (Chuang, 2016; Kiseleva, Williams, Jiang, Awadallah, Crook, Zitouni, & Anastasakos, 2016; Milanesi, 2016).

Capable of using conventional language semantics and pragmatics, intelligent virtual assistants are changing the way we communicate and interact with other humans and machines. These developing technologies display ‘intelligent’ and ‘socially-aware’ human interactions using natural language (Large, Clark, Quandt, Burnett, & Skrypchuk, 2017). The ability to embody human-like traits personifies these machines and creates a greater appeal to younger audiences who were raised around these tools embedded in a variety of societal contexts (Large et al., 2017). Strong marketing and peer pressure, especially among Generation Z, are making this new way of using language very popular (see Figure 1).


Figure 1. Usage of virtual assistants among Internet users Worldwide by age (eMarketer, 2017, with permission).

According to eMarketer, in 2017, 35.6 million Americans used a voice-activated assistant device at least once a month. This revealed an increase of 128.9% compared to the previous year. Amazon’s Echo (Alexa) led the way with 70.6% of users, and Google® Home accounted for 23.8% of the market (see Figure 2). eMarketer expected Amazon® to remain the dominant player in the category for the foreseeable future (eMarketer, 2017).


Figure 2. U.S. Voice-Enabled Speaker Usage Share, by Player 2017 (eMarketer, 2017, with permission).

The continuation of the rapid distribution of such devices to everyday mobile technologies increases the need for understanding the role of these host technologies in the fabric of our culture (Large et al., 2017). In a recent study, participants stated that voice searches were appealing because they provide one direct answer to questions (Negri, Turchi, de Souza, & Falavigna, 2014), and the lifelike voice of the personal assistant is friendly and faster than typing (Kiseleva et al., 2016). According to experts, young learners approach these technologies as vast reservoirs of knowledge that assist them with every-day tasks such as research, information acquisition, and homework (Kats, 2017; Shead, 2017; Wireless Watch, 2017).

In 2018, the number of U.S. households owning a smart speaker grew from 47.3 million to 66.4 million (Kinsella & Mutchler, 2019). Despite a decrease of about 9% in sales in 2019 from 2018, the Amazon® Echo still owned the largest part of the market share. Devices such as the Google® Home gained about 5% more of the market share, and non-Amazon® and non-Google® devices such as the Microsoft® Cortana and the Apple® HomePod gained over 4% of the market share (Kinsella & Mutchler, 2019). The increase in sales of these smart devices seemed to be driven by low prices and sales. The combination of Amazon® and Google®’s intense advertising campaigns and the cheap availability of their devices caused the sales of devices like Amazon’s Echo Dot® and Google®’s Home Mini to skyrocket (Kinsella & Mutchler, 2019).

In recent years, a host of companies have released smart speaker devices. It was estimated that by the end of 2018, 66.4 million U.S. adults would own a device like the Amazon® Echo, indicating that 26.2% of the adult U.S. population would have access to a smart voice device (Kinsella & Mutchler, 2019). While the Amazon® Echo still holds the largest share of the market, other competitors such as Google® and Apple® have started to increase their presence in this market (Kinsella & Mutchler, 2019).

In education, the concept of an intelligent computer program that acts as a personal assistant, or a tutor, is not new. Decades ago, Sleeman and Brown (1982) dubbed these applications Intelligent Tutoring Systems (ITS). Early AI tutoring systems had a variety of limitations, but collaboration among researchers from a variety of schools produced prototypes that students found helpful in learning contexts. Over time, learning successes with these technologies motivated schools to invest in these applications.

Today, each year, over 3,000 schools enroll over half a million students in courses augmented with intelligent tutors which are installed in stand-alone systems that are created and delivered to classrooms (Koedinger & Aleven, 2016). Literature as well as news articles have reported the use of voice assistants in educational settings to aid students’ success in their academic endeavors (Almurshidi & Naser, 2017; Mahdi, Alhabbash, & Naser, 2016; Seckel, 2017).

While the majority of these applications are advanced, designed to represent the pedagogy and knowledge of teachers, others are more basic and can only perform simple instructive functions (Jones, 1985; Wenger, 2014; Woolf, 2010). Recently, with the advent of ecosystems like the Internet of Things, cloud computing, big data, and artificial intelligence, giants of the information industry have developed new commercial applications designed to function as personal assistants for everyone (Amazon, 2019; Google Store, 2017; McNeal, 2016).

An example of a more advanced AI tutor application is found at Arizona State University (ASU), which provided over 1,600 Echo Dots to engineering students in the Fall 2017 semester. Collaborating with Amazon®, ASU launched the first academic program of its kind, designed to help engineering students learn to create and utilize their own voice-interface technology. John Rome, ASU’s deputy chief information officer, stated that “By working with Amazon to create the first voice-enabled campus, we’re furthering ASU’s position as the No. 1 university in the U.S. for innovation” (Seckel, 2017, para. 1).

In addition to building Tooker House, a new voice-enabled high-tech dorm for engineering students, ASU’s voice initiative offered four undergraduate engineering courses that augmented student understanding of voice-user interface development. Engineering students were encouraged to use the Alexa Skills Kit to build their own Alexa skills, both on their own and in class. This new program helped these students master the in-demand skills that they needed upon their graduation. ASU has made further plans to build its smart campus with new technology to provide students with an individualized education (Seckel, 2017).

Steve Rabuchin, Amazon’s vice president of Alexa, asserted that “The university shares a vision with us for the future of voice, and we believe it’s paramount to engage students in a way that sparks their imaginations and inspires them to build the technology of tomorrow” (Seckel, 2017, para. 1). Programs which are built on emerging technologies allow these systems to have customizable features which respond to individualized needs (Amazon, 2019; McNeal, 2016). One of the powerful features of these new tutors is that they are commercially available, so they can also be utilized outside of traditional educational settings (McTear et al., 2016).
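To make the kind of custom voice application discussed above concrete, the sketch below shows the general shape of an Alexa skill backend of the sort students build with the Alexa Skills Kit. It is an illustrative example only, not drawn from this study: Alexa sends the skill a JSON request describing what the user said, and the skill returns JSON telling Alexa what to speak. The intent and slot names here (“DefineTermIntent”, “Term”) are hypothetical.

```python
# Minimal sketch of a custom Alexa skill handler. The intent name
# "DefineTermIntent" and slot "Term" are invented for illustration;
# the JSON envelope follows the documented Alexa response format.

def build_response(speech_text, end_session=True):
    """Wrap spoken text in the Alexa response envelope."""
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech_text},
            "shouldEndSession": end_session,
        },
    }

def handle_request(event):
    """Dispatch an incoming Alexa request to a spoken reply."""
    request = event["request"]
    if request["type"] == "LaunchRequest":
        # User opened the skill without asking anything yet.
        return build_response(
            "Welcome to the study helper. Ask me to define a term.",
            end_session=False,
        )
    if request["type"] == "IntentRequest":
        intent = request["intent"]
        if intent["name"] == "DefineTermIntent":
            # Slot values carry what the user actually said.
            term = intent["slots"]["Term"]["value"]
            return build_response(f"Looking up the definition of {term}.")
    return build_response("Sorry, I did not understand that request.")
```

In practice, a handler like this is deployed as a cloud function (for example, AWS Lambda) that Alexa invokes for each utterance; the tutoring logic lives entirely in how the skill maps intents and slots to spoken responses.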

AI Tutors Naming Convention

There is a substantial body of literature concerning AI voice-activated applications serving as tutors, and most of these studies can be grouped under the AI tutor category (Duchastel & Imbeau, 1988; Duffy & Azevedo, 2015; Heidig & Clarebout, 2011; Schroeder, Adesope, & Gilbert, 2013; Wenger, 2014). However, the research on these electronic devices differs in type and context. For instance, software applications embedded in a browser or standalone devices can both be found in the same research category. Additionally, the names of these applications vary in the literature according to their design, the researchers’ designation, and the place that the application is used.

Historically, the development of machines capable of aiding learning began in the late seventies and early eighties. These early interfaces were called Intelligent Computer-Aided Instruction (ICAI) and Computer-Aided Instruction (CAI), the first computer systems programmed to promote learning on very specific topics in medicine and computer science (Duchastel & Imbeau, 1988; Wenger, 2014). The name Intelligent Tutoring System began to emerge in the eighties (Sleeman & Brown, 1982) for devices that understood tasks and had the ability to generalize that knowledge in a basic way (Larkin & Chabay, 1992). More recent literature reported names such as Intelligent Tutors (Schroeder, Adesope, & Gilbert, 2013), Artificial Intelligence Tutors (Woolf, 2010), and Conversational Interfaces (McTear et al., 2016). The availability of the Internet helped the advancement of AI applications by making them more knowledgeable and capable of functionalities that help students acquire knowledge.

Amazon® referred to the voice assistant that powered devices such as the Amazon® Echo, Tap, Dot, Show, and Stick as Alexa, the digital assistant (Amazon, 2019). McTear et al. (2016) called this type of technology “intelligent personal assistants, digital personal assistants, mobile assistants, or voice assistants” (McTear et al., 2016, p. 11). Lastly, these applications may also be known as Voice Personal Assistants (VPAs), especially when referring to the voice powering mobile devices such as Alexa®, Apple® Siri, Google® Voice, Facebook® M, and Microsoft® Cortana. For the purposes of this research, the researcher will use the term artificial intelligence tutors, or AI tutors, to indicate these types of standalone systems that are designed to help the learning process.

In this study, the researcher explored the concerns of preservice teachers at a Midwestern university regarding the use of AI tutors such as the Amazon® Echo in formal and informal learning. The researcher chose to study the adoption of the AI assistant Amazon Alexa®, “the voice service that powers Amazon® Echo and other devices such as Amazon® Fire and Amazon® TV” (McTear et al., 2016, p. 204), as a tool for teaching and learning. Among studies conducted on AI tutoring systems, information regarding the impact of this new type of tutor (like the Amazon® Echo - Alexa) is very limited.

Several publications are available on the Echo as an assistant, and researchers in previous studies have delved into the use of AI tutors for teaching and learning. However, research regarding AI tutors such as the one presented in this paper is scarce and represents a significant gap in the literature. Therefore, the researcher will conduct a study that reveals the factors contributing to the adoption of this type of technology for learning. The study results will augment the existing literature on the use of AI tutoring systems for formal and informal learning and will aid in evaluating the positive and negative effects of their use on teaching and learning.

Research Problem

Teachers’ difficulties in integrating technology into the classroom have been expressed in studies and in the literature for over a decade (Coleman, Gibson, Cotten, Howell-Moroney, & Stringer, 2016; Jogezai et al., 2016). In 2011, Arne Duncan, the U.S. Secretary of Education, reported that the USDOE awarded $350 million to help states evaluate students’ abilities to use technology for college and career readiness. This problem is still present today; a large number of educators neglect the effective integration of technology in their classrooms and fail to prepare their students for success in our technology-driven society (Johnson et al., 2016; Ottenbreit-Leftwich, Ertmer, & Tondeur, 2015; Prensky, 2013; Vongkulluksn, Xie, & Bowman, 2018).

The U.S. Department of Education recognized the need for each state to raise its standards of knowledge and skills to prepare students for the competitive workforce in this information age. A longstanding problem is that many schools possess digital applications, yet their use is constrained by educators who use outdated teaching models that do not optimally embrace or utilize technology. Findings from more recent studies indicate that the use of technology is often an add-on to more traditional curriculums, rather than an essential basis for lesson planning and training experience (Duncan, 2011; Vongkulluksn, Xie, & Bowman, 2018). Improving the integration of technology in the classroom and improving teachers’ attitudes toward the use of digital applications in schools is a necessity. It is vital for preservice teachers to attain more computer-based preparation (Coleman et al., 2016; Makki, O'Neal, Cotton, & Rikard, 2018; Vongkulluksn, Xie, & Bowman, 2018). Teacher preparation courses need to increase training in technology skills, and today’s educators must also teach students the critical thinking skills that are essential for the completion of complex work. Inquiry-oriented thinking, or critical thinking, encourages novices to develop the skills needed to process information, develop plans, and adjust strategies in the face of hard-to-predict, real-world challenges (Cope, 2005; Jackson, 2015; Puron-Cid, Gil-Garcia, & Luna-Reyes, 2016; St-Jean & Audet, 2012).

The field of intelligent technologies for commercial home devices is in its genesis phase, and limited research exists on artificial intelligence tutors such as Amazon® Alexa. Many studies on electronic tutors focus on school-bound applications and on devices that offer standardized levels of tutoring using primitive AI systems. These systems are made to serve a single function, or a few specific functions, that require intelligence (Strong, 2016; Weber & Brusilovsky, 2016; Wenger, 2014). Unlike other AI tutors, Alexa® possesses the capability to learn from users and their environment. It can connect to external applications or be extended with customized apps designed to adjust to the changing needs of individual learners. Tutors with multitudes of uses, such as the Amazon® Alexa, have yet to be researched as learning tools that can be used in formal or informal learning settings.

Awareness of both the rapid societal changes that technological progress creates and the growing popularity of digital information use among younger audiences is very important for educators. Assistants, or tutors, like the Amazon® Echo (Alexa), Apple® Siri, Google® Home, and Microsoft® Cortana, are increasingly being included in students’ environments and are becoming students’ resources of information (Shad, 2017).

Although some areas of education have adopted emerging technologies, the successful implementation of these innovations stems from the revision of all areas of the institutional hierarchy: processes of administration, teaching, learning, and academic work (Instefjord & Munthe, 2016; Kafyulilo, Fisser, & Voogt, 2016; Murgatroyd, 2017).

With the ever-increasing dependence on information systems and the integration of new technologies into learning environments, identifying the critical factors related to user acceptance of technology has become a vital issue (Coleman et al., 2016; Makki et al., 2018; Marangunić & Granić, 2015; Mun & Hwang, 2003).

Today’s students, who are part of Generation Z, have never known a world without the Internet. They have been raised around digital systems and are accustomed to handling an increasing number of technologies, which earns them the title of digital-age learners (Deloitte, 2017). Modern students have changed to such an extent that 20th century knowledge and career training can no longer serve as the optimum template for their academic needs (Arghode, Brieger, & McLean, 2017).

According to Daniel H. Pink, author of A Whole New Mind: Why Right-Brainers Will Rule the Future, the future belongs to those who think, create, and recognize patterns, and the jobs of the future will be open to thinkers and innovators (Pink, 2006). College students depend on various types of technology during their college years and expect technology to be part of their academic preparation (Yakin & Tinmaz, 2015). The combination of a new generation of learners and the challenges of technology integration creates a wide gap in the literature (Marangunić & Granić, 2015; Park & Jo, 2015).

Previous research on the effectiveness of older, school-bound AI tutors is limited. In available studies, researchers report that this type of tutor can produce a small, yet significant, effect on learning (Duffy & Azevedo, 2015; Heidig & Clarebout, 2011; Schroeder, Adesope, & Gilbert, 2013). However, with the advent of newer data-gathering tools such as big data and crowdsourcing, the playing field for AI technologies in education is changing. For example, the companies Third Space Learning, Thinkster Math, Carnegie Learning, and Brainly are implementing AI tutoring platforms that support this type of learning (Dickson, 2017). With these emerging AI technologies, we are starting to observe a more significant impact on learning.

The latest learning AI applications are transforming the classroom. In Intelligence Unleashed: An Argument for AI in Education (2016), one of the latest books from Pearson, authors Rose Luckin, Wayne Holmes, Mark Griffiths, and Laurie Forcier claim that thanks to AI systems embedded in web classroom applications, lessons can finally be individualized and break away from the one-size-fits-all approach. Students who learn at different rates or need extra support can find help. These technologies are capable of recognizing areas where students struggle before teachers do, and the AIs can then provide targeted support in the correct format and at the appropriate moment that help is needed (Dickson, 2017; Luckin, Holmes, Griffiths, & Forcier, 2016). The rapid proliferation of these new AI technologies in standalone applications such as the Amazon® Echo (Alexa) creates a critical need for research in this area of artificial intelligence tutors.

Purpose of the Study

The purpose of this study is to use the Concerns-Based Adoption Model (CBAM) (Hall & Hord, 2006), as operationalized by the 35-item Stages of Concern Questionnaire (SoCQ) (George, Hall, & Stiegelbauer, 2006), to collect data on preservice teachers’ intentions to adopt and accept AI tutoring systems as a learning tool in formal and informal settings. The aim of this study is to explore and examine the factors influencing preservice teachers’ decisions regarding their acceptance and use of standalone artificial intelligence tutors, as well as to determine which factors are associated with the peak of concern about this technology as a new learning tool. The CBAM conceptual framework (Hall & Hord, 2006) will be used in the quantitative analysis.

Identifying the factors and sub-factors that influence the adoption of AI tutors is significant in that it helps educational institutions enhance existing teaching and learning practices. Innovative technologies include advanced AI techniques unlike those in previous AI tutors. Characteristics of these new AIs should be investigated to determine the benefits and challenges of technology adoption and their efficacy for learning in a variety of settings. Previous studies in this research area focused only on electronic tutors used in formal learning settings. Due to technology costs and availability, these digital helpers were usually found embedded in other classroom applications, like websites or standalone computer programs.

This rise of technology impacts both students and educators. Incorporating technologies into classrooms is vital for student preparation and requires teachers, at every level, to take on new roles. This new task of helping students adopt individualized learning methods that extend beyond knowledge acquisition requires training. Supplemental instruction for seasoned teachers is necessary for this adoption of innovations in the classroom, and without specific technology instruction, preservice teachers will not be able to recognize and apply digital applications as tools that aid education (Gardner, 2013; Admoni, 2016; Jogezai et al., 2016; Luckin et al., 2016; Reyes, 2015; Schindlholzer, 2016; Wixom et al., 2014).

Training our future teachers to use new technologies as instruments for skill acquisition will lead to positive learning outcomes. Recently, researchers have found that teachers who use digital applications in the classroom often create inquiry-oriented environments and are more likely to deviate from traditional content delivery styles such as lectures or worksheets (Kormos, 2018). The results from this study will provide insights into current teacher preparation programs and into preservice teachers’ impressions of newer technologies, and will offer suggestions on how to improve current processes of teaching and learning.

Significance of the Study

According to the U.S. Department of Education, technology brings “fundamental structural changes that can be integral to achieving significant improvements in productivity” (U.S. Department of Education, n.d., para. 1). This innovation can be “used to support both teaching and learning, technology infuses classrooms with digital learning tools” (U.S. Department of Education, n.d., para. 1). The variety of tools includes computers and hand-held devices, which expand “course offerings, experiences, and learning materials; supports learning 24 hours a day, 7 days a week; builds 21st century skills; increases student engagement and motivation; and accelerates learning” (U.S. Department of Education, n.d., para. 1).

Furthermore, “technology also has the power to transform teaching by ushering in a new model of connected teaching. This model links teachers to their students and to professional content, resources, and systems to help them improve their own instruction and to personalize learning” (U.S. Department of Education, n.d., para. 1). For this reason, it is important that educators are aware of students’ learning sources and of how these channels of information are impacting or competing with curricular goals. Students might view smart-technology resources as an immense amount of free and available content; yet, they often lack the foresight to evaluate which data are reliable and, thus, lack the ability to select tools and activities that positively affect learning outcomes (Conde, García-Peñalvo, Rodríguez-Conde, Alier, Casany, & Piguillem, 2014; List, Grossnickle, & Alexander, 2016). Innovations in the field of research for education can increase knowledge about the learning patterns of students and bring awareness about the use of artificial intelligence technologies in formal and informal settings. To ensure that teachers continue to improve their practices through the adoption of innovative teaching and learning strategies, their concerns about AI tutors for formal and informal learning should be examined (Luckin et al., 2016; Marangunić & Granić, 2015).

This study may offer insight on the influence that emerging AI tutor technologies may have on teaching methods. As artificial intelligence devices become an increasingly integral part of today’s world (Hammond, 2015; Kurshan, 2016; Nogrady, 2016; Russel, 2016), educators will need to acquire a deeper understanding of the benefits and limitations of AI tutors and of their utility for aiding learning in a variety of contexts.

Target Audience for This Research

This study will be instrumental to modern educators and to preservice teachers who will soon enter teaching practice. As technological advancements continue to change our culture and our youth, it is important to understand their benefits and challenges for our educational system and its landscape. This study may also benefit students, as it exposes characteristics of AI tutors that could be instrumental to their learning.

Research Questions

Researchers suggest that the integration of technology is faster in fields outside of education (Dotong, De Castro, & Dolot, 2016; Laferrière, Hamel, & Searson, 2013; Polly & Orrill, 2016). Obstacles to integration range from inadequate financial support or infrastructure, to instructors’ lack of knowledge, to the overwhelming influx of emerging technologies forced on students by businesses (Dotong, De Castro, & Dolot, 2016; Rutter, 2016; Polly & Orrill, 2016; Shaltry, Henriksen, Wu, & Dickson, 2013). This study allowed the researcher to investigate innovative technology integration initiatives through classroom standalone AI tutor devices like the Amazon® Alexa. Utilizing the conceptual framework constructed on the Stages of Concern (SoC) (George, Hall, & Stiegelbauer, 2006) in the Concerns-Based Adoption Model (CBAM) (Hall & Hord, 2006), the identification of predominant Stages of Concern, as scored on the 35-item questionnaire developed by George, Hall, and Stiegelbauer (2006), was leveraged to understand teachers’ concerns toward this innovation. Preservice teachers’ characteristics were used to further examine the outcomes. Specific to the field of technology, gender differences can present issues; while the literature reports inconsistencies on this topic, many researchers have found gender differences surrounding technology integration for learning purposes (Buabeng-Andoh, 2012; Teo, Fan, & Du, 2015; Cai, Fan, & Du, 2017). Based on the theoretical foundation and statement of the problem, the main research questions framing this study are:

1) What are preservice teachers’ peak Stages of Concern (as described in the 35-item questionnaire) toward the implementation of AI tutoring systems such as the Amazon® Echo (Alexa) as a learning tool in formal and informal settings?

2) Are there significant relationships between preservice teachers’ peak Stages of Concern and the factors of Grade to Teach, Teaching Geographical Area, and Gender?
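In SoCQ practice, the peak Stage of Concern named in the first question is identified by converting each stage’s raw score to a percentile using the norm tables in the SoCQ manual and then selecting the highest-scoring stage. Those norm tables are not reproduced here; the sketch below only illustrates the final selection step, with made-up percentile values:

```python
def peak_stage(percentiles):
    """Return the peak Stage of Concern (0-6), i.e., the stage with the
    highest percentile score. `percentiles` holds one value per stage,
    indexed by stage number; on ties, the lower-numbered stage wins."""
    if len(percentiles) != 7:
        raise ValueError("expected one percentile per stage (7 values)")
    # max() returns the first stage encountered with the maximal value,
    # so iterating 0..6 resolves ties toward the lower stage.
    return max(range(7), key=lambda stage: percentiles[stage])

# Illustrative (not normed) profile: a high Stage 1 (Informational) score,
# typical of respondents who mainly want more information.
profile = [48, 91, 83, 55, 38, 40, 52]
print(peak_stage(profile))  # -> 1
```

A real scoring script would first sum the five questionnaire items that belong to each stage and look up the percentile conversion before this step.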

Definitions of Terms

The terminology central to this study is defined in this section. These definitions have been provided to facilitate understanding of how the terms are used within the context of the study.

Adoption. This term refers to a decision that teachers make about an innovation (Klobas & Renzi, 2009).

Application Programming Interface (API). A set of commands, functions, protocols, and objects that software developers use to create software applications or to interact with external systems. APIs offer programmers the ability to incorporate already-developed commands, applications, and sets of operations with ease, without having to rewrite the code from the beginning (Christensson, 2016).
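As a small, concrete illustration of this definition, Python’s standard-library json module is itself an API in this sense: a developer calls its ready-made functions instead of rewriting serialization code from the beginning:

```python
import json

# The json module's API exposes ready-made functions; developers call
# dumps()/loads() rather than re-implementing JSON serialization.
record = {"device": "Echo", "skills": ["quiz", "flashcards"]}
text = json.dumps(record)     # object -> JSON string, via the API
restored = json.loads(text)   # JSON string -> object, via the API
assert restored == record     # lossless round trip
```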

Apps. Short for applications, or software programs. Today, the term app is used for programs on any hardware platform; however, it originally referred to software for mobile devices like smartphones and tablets (Christensson, 2012).

Artificial Intelligence (AI). The “intelligence exhibited by an artificial entity to solve complex problems and such a system is generally assumed to be a computer or machine” (Strong, 2016, p. 64). It is a computer science technique designed to provide machines with the ability to exhibit human-like intelligence, so that machines can imitate intelligent human behavior (Merriam-Webster Dictionary, n.d.). In this context, artificial intelligence refers to Strong AI and Weak AI (Strong, 2016), further described with the specifications of Narrow AI and General AI (Hammond, 2015).

Artificial Intelligence Tutors. Historically, computer systems used for education were labeled as Intelligent Computer-Aided Instruction (ICAI) and Computer-Aided Instruction (CAI) (Wenger, 2014). More current literature uses the terms Intelligent Tutoring System (Sleeman & Brown, 1982), Artificial Intelligence Tutors (Woolf, 2010), Intelligent Tutors (Schroeder, Adesope, & Gilbert, 2013), and Conversational Interfaces (McTear et al., 2016). Also, Voice Personal Assistants (VPA) are known as “intelligent personal assistants, digital personal assistants, mobile assistants, or voice assistants” (McTear et al., 2016, p. 11). For the purposes of this research, the researcher will use the term AI Tutors to indicate all types of AI systems designed to help the learning process.

Amazon Echo. Amazon®’s voice-controlled, Internet-connected, cylindrically shaped device, capable of playing songs and performing a wide variety of tasks. Available with customizable apps, the Echo was designed by Amazon®’s engineers to become an intelligent helper for everyone (Amazon, 2019; McTear et al., 2016). As a device, it has the unique ability to learn about its users and their environment, increasing its capacity to assist them (Amazon, 2019).

Amazon Alexa. The voice service that powers the Amazon® Echo, Tap, Dot, Slide, and other voice-activated devices designed by Amazon® (Amazon, 2019).
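To give a flavor of how developers customize Alexa, a skill answers the Alexa service with a small JSON document. The sketch below assembles a response in the general shape described in Amazon’s public Alexa Skills Kit documentation; it is simplified and illustrative, not code from this study:

```python
def build_alexa_response(speech_text, end_session=True):
    """Assemble a minimal Alexa-style skill response (simplified shape,
    paraphrased from Amazon's public Skills Kit documentation)."""
    return {
        "version": "1.0",
        "response": {
            # What the device should say aloud to the user.
            "outputSpeech": {"type": "PlainText", "text": speech_text},
            "shouldEndSession": end_session,
        },
    }

reply = build_alexa_response("Photosynthesis converts light into chemical energy.")
print(reply["response"]["outputSpeech"]["text"])
```

A deployed skill would also handle the incoming intent request and typically run as a cloud function; only the response payload is sketched here.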

Attitudes. An evaluative judgment, based on personal preference, toward any object in our social world (Maio & Haddock, 2007).

Cloud Computing. This term refers to a set of applications and services which are offered over the Internet. These storage services are offered from multiple data banks located around the globe and are collectively referred to as the ‘cloud’ (Christensson, 2009). Cloud computing is used for backup services, social network storage, and personal data storage, such as Apple® iCloud, and for online applications, like Microsoft Online Services (Christensson, 2009).

Concern Based Adoption Model (CBAM). CBAM is a theoretical model that explains and predicts behaviors of end-users in a change process involving an innovation (Hall & Hord, 2006).

Concerns. A combination of feelings, thoughts, and considerations toward a particular task or issue (Hall & Hord, 2006).

Crowdsourcing. The practice of leveraging knowledge, wisdom, services, or content from a crowd of individuals who contribute to this body of knowledge from inside or outside a group without the prompting of an employee or a supplier (Bücheler, Füchslin, Pfeifer, & Sieg, 2010). An example of online crowdsourcing is Wikipedia.

Digital Learner. Students who belong to the digital generation (Van Eck, 2006) and are very familiar with technology and its uses. They are called the Net Generation and digital natives by Mark Prensky (2001), and digital-age learners: individuals raised around technology who handle technology under a free-agent model of learning (Prensky, 2001).

Generation Z or Gen Z. People born between 1995 and 2005 (Taylor & Keeter, 2010).

Infotainment. The neologism infotainment, a portmanteau of information and entertainment, first appeared in the 1980s (Thussu, 2007).

Innovation. According to Lynn and Gelb (1997), innovation is the “tendency of an individual consumer to adopt new products before large numbers of others do” (Lynn & Gelb, 1997, p. 44). According to Rogers (2003), an innovation is “an idea, practice or object that is perceived as new by an individual or other unit of adoption” (Rogers, 2003, p. 11).

Internet of Things (IoT). “IoT is a global network which connects real and virtual objects in a unique way, by making use of the data captured by sensors of the communication and localization devices” (Georgescu & Popescu, 2015, p. 67).

Quantitative Research Method. A research design, or “an inquiry approach useful for describing trends and explaining the relationship among variables found in the literature” (Creswell, 2008, p. 626). This type of research is used to examine relationships between and among variables by employing research questions and hypotheses that are “specific, narrow, measurable, and observable; collecting numeric data from a large number of people” (Creswell, 2008, p. 626). Quantitative research is conducted with instruments, such as surveys, that present preset questions and answers used to gather information from reliable sources; the resulting data are then organized into numbers, statistics, charts, graphs, and other numeric formats.

Preservice Teachers. Students enrolled in an undergraduate program that prepares them to become teachers.

Serious Games. Computer games with explicit and carefully crafted educational purposes (Bente & Breuer, 2010). These are “games that engage the user and contribute to the achievement of a defined purpose other than pure entertainment (whether or not the user is consciously aware of it)” (Susi, Johannesson, & Backlund, 2007, p. 5). Over the years, serious games have found many applications in education and in military training.

Smart-Technologies or Smart Tech. As explained by Netlingo, these types of technologies are “known as ‘smart’ because of the notion of allowing previously inanimate objects—from cars to basketballs to clothes—to talk back to us and even guide our behavior” (Netlingo, n.d., para. 4). The term SMART is an acronym for “Self-Monitoring, Analysis and Reporting Technology” (Netlingo, n.d., para. 4).

Trainee Teachers. See Preservice Teachers.

Limitations of the Study

Limitations are considered uncontrollable issues that potentially impact the internal validity of a study (Nenty, 2009). The following paragraphs describe some possible limitations of this study. One limitation pertains to the collection of data: in this study, the data were collected from preservice teachers at a Midwestern university enrolled in a required technology course. Therefore, the results of this study can only be generalized to PK-12 preservice teachers’ perspectives. Participants voluntarily and independently completed the Stages of Concern Questionnaire, and the data collection relied on preservice teachers’ willingness to respond to the questionnaire honestly. Therefore, the results of this study may have been affected by whether the preservice teachers’ responses represented true reflections of their present concerns.

Lastly, a limitation of this study was the scarcity of literature on the topic of artificial intelligence assistants used as tutors for education, that is, artificial intelligence tutors with advanced speech functions, innovative technology uses, and customization options. This innovative topic is quickly entering our society, but it is entering the research literature slowly.

To research the topic of this literature review, Ohio University’s library databases were used heavily. In August 2017, EBSCOhost provided access to valuable sources from a variety of databases. The search via the EBSCOhost databases was limited to peer-reviewed articles. The keyword phrase artificial intelligence tutors yielded 27 articles from academic journals and 23 from other journals. These papers are mostly concerned with the areas of engineering and linguistics, with a few exceptions on education. However, the articles from the area of education focus on computer-based learning rather than on this area of research. A second search using the keywords Amazon Alexa returned 9 articles in academic journals and 8 articles from other journals. Most of these articles focus on the linguistic ability of Alexa, the fast diffusion of the Amazon® device, and the use of Alexa for language learning. For this reason, the researcher had to rely more heavily on articles from newspapers, web articles, and manufacturers’ specifications, since preferred journal articles were limited.

Delimitation

This study will be limited to the Midwestern University where the researcher is currently a doctoral student and had access during the academic year 2019-2020, when the study was conducted. This research will include preservice teachers enrolled in a course titled Technology Applications in Education, which is required for all preservice teachers at this Midwestern university. This class focuses on the use of technologies in PK-12 classrooms for teaching and learning; it incorporates a section examining the use of technologies to aid preservice teachers in learning, as well as exploring emerging technologies that might enter their classrooms in the future.

Conceptual Framework

The topic of technology integration and adaptation can be explored through a variety of frameworks that have implications for professional development. Specific to the field of education, theories like Rogers’ Innovation Diffusion Theory (1995) and Hall and Hord’s (2010) Concerns-Based Adoption Model (CBAM) were developed. Other theories, such as Davis’ (1989) Technology Acceptance Model (TAM) and the Unified Theory of Acceptance and Use of Technology (UTAUT) developed by Venkatesh, Morris, Davis, and Davis (2003), were specifically developed to answer questions about technology adoption and originated in the computer science field (Straub, 2009). The CBAM (Hall & Hord, 2006) is specifically constructed for technology integration in the education arena and applies to anyone experiencing such changes; therefore, the researcher elected to use the CBAM (Hall & Hord, 2006) for this study. This framework was used as a theoretical lens to explore the concerns of preservice teachers in adapting to standalone AI tutors such as the Amazon® Alexa for formal and informal learning.

Figure 3. The Concerns-Based Adoption Model. From: George, A. A., Hall, G. E., & Stiegelbauer, S. M. (2006). Measuring implementation in schools: The stages of concern questionnaire. Austin, TX: SEDL, p. 36. Copyright 2006 by SEDL. Referenced with permission.

The Concerns-Based Adoption Model is the fruit of the collaborative work of Hall and Hord (2006) and other professionals who identified and assessed Stages of Concern as they align with changes. The SoC Questionnaire (see Appendix E) is designed to assess individual teachers’ concerns, following the major assumptions of the CBAM, which hypothesizes that (Hall & Hord, 2006):

1) Change is a process and is attained in stages.

2) Individuals are an integral part of the change.

3) Change is personal.

4) The stages of change are influenced by the concerns and individual mastery of the innovation.

Personal connection inevitably affects change, as it involves individuals’ feelings and perceptions; these can help or hinder the process of change (Hall & Hord, 2010). For instance, when teachers are enthusiastic about digital innovations, they are more likely to incorporate technologies into their teaching. According to Hall and Hord (2010), concerns are the feelings and perceptions of teachers, and they can be collected with the SoC Questionnaire.

The theoretical framework of the Concerns-Based Adoption Model (CBAM) was designed at the Research and Development Center for Teacher Education at the University of Texas by Hall, Wallace, and Dossett (1973). These theorists first proposed that “there was a set of developmental stages and levels teachers and others moved through as they became increasingly sophisticated and skilled in using new programs and procedures” (Hall & Hord, 1987, p. 7). These seven Stages of Concern fall into three major patterns: self (adequacy as a teacher), task (teaching methods, teaching performance), and impact (pupil learning needs).
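The seven stages and the self/task/impact grouping described above can be captured in a small lookup table. The stage labels below follow the SoCQ literature, and the grouping reflects the common CBAM summary rather than a verbatim reproduction of the manual:

```python
# Stage number -> (label, dimension). Labels per the SoCQ literature;
# the self/task/impact grouping is the common CBAM summary.
STAGES = {
    0: ("Unconcerned", "self"),
    1: ("Informational", "self"),
    2: ("Personal", "self"),
    3: ("Management", "task"),
    4: ("Consequence", "impact"),
    5: ("Collaboration", "impact"),
    6: ("Refocusing", "impact"),
}

def dimension(stage):
    """Return the concern dimension (self, task, or impact) for a stage."""
    return STAGES[stage][1]
```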

The data were collected through the use of the Stages of Concern Questionnaire (SoCQ) developed by George, Hall, and Stiegelbauer (2006). This CBAM-based instrument (Hall & Hord, 2006) was used to meet the purposes of this study with the constructs and moderators listed above. All constructs and moderating factors were studied through the use of quantitative methodologies, including a predictive correlational design and the use of Multivariate Analysis of Variance (MANOVA).
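For intuition about what a MANOVA tests, the sketch below computes Wilks’ Lambda, a core MANOVA statistic, for a toy two-group, two-dependent-variable dataset in plain Python. This is purely illustrative: it is not the study’s data or analysis code, which would also involve significance tests and interaction terms handled by statistical software. Lambda compares within-group variation (E) to total variation (E + H); values near zero indicate that the groups differ on the dependent variables jointly.

```python
def sscp(pairs, center):
    """2x2 sum-of-squares-and-cross-products matrix of (y1, y2) pairs
    about a given center point."""
    sxx = sum((a - center[0]) ** 2 for a, b in pairs)
    syy = sum((b - center[1]) ** 2 for a, b in pairs)
    sxy = sum((a - center[0]) * (b - center[1]) for a, b in pairs)
    return [[sxx, sxy], [sxy, syy]]

def mat_add(m, n):
    return [[m[i][j] + n[i][j] for j in range(2)] for i in range(2)]

def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def wilks_lambda(groups):
    """Wilks' Lambda = det(E) / det(E + H) for two dependent variables,
    where E is the pooled within-group SSCP and H the between-group SSCP."""
    all_obs = [p for g in groups for p in g]
    grand = (sum(a for a, b in all_obs) / len(all_obs),
             sum(b for a, b in all_obs) / len(all_obs))
    E = [[0.0, 0.0], [0.0, 0.0]]
    H = [[0.0, 0.0], [0.0, 0.0]]
    for g in groups:
        mean = (sum(a for a, b in g) / len(g), sum(b for a, b in g) / len(g))
        E = mat_add(E, sscp(g, mean))
        # n_g * outer(mean - grand): this group's between-group contribution
        H = mat_add(H, [[len(g) * x for x in row] for row in sscp([mean], grand)])
    return det2(E) / det2(mat_add(E, H))

# Toy "groups" of (stage-score-1, stage-score-2) observations:
group_a = [(1, 2), (2, 1), (3, 3)]
group_b = [(4, 5), (5, 7), (6, 6)]
print(round(wilks_lambda([group_a, group_b]), 4))  # small Lambda -> groups differ
```

In practice, packages such as statsmodels or SPSS compute this statistic, its F approximation, and p-values directly.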

Organization of the Chapters

This dissertation is structured into five chapters. The first chapter includes an introduction, a background of the research topic, the theoretical framework, the study problem, the purpose and the significance of the study, the research questions and hypotheses, and the limitations and delimitations; the chapter concludes with the organization of the chapters. Chapter two provides a review of relevant literature, including the historical development of the theoretical model CBAM (Hall & Hord, 2006) and the capabilities of the Amazon® Echo. The third chapter incorporates the methodology, or the research design, that was used to collect and analyze data for this study; it examines the rationale, methodology, and data analysis, including the instrument and data collection. Chapter four showcases the quantitative data obtained from the survey and lists the findings. Chapter five summarizes and interprets the findings and closes with recommendations for future research.

Chapter Summary

This chapter provides an introduction to the study. The chapter includes the statement of the problem, the potential audiences, the purpose of the study and the research questions, terminology, delimitations and limitations of the study, and the theoretical framework.

Chapter 2: Review of Related Literature

There is no more critical indicator of the future of a society than the character, competence, and integrity of its youth (Bronfenbrenner, McClelland, Wethington, Moen, Ceci, Hembrooke, ... & White, 1996, p. 1).

A New Generation of Learners

Students today are accustomed to learning from a variety of devices and applications both inside and outside of the classroom (Anshari, Almunawar, Shahrill, Wicaksono, & Huda, 2017; Reychav, Dunaway, & Kobayashi, 2015). Unlike previous generations, who relied strictly upon formal classroom learning, Gen Z students are digital learners who do not view acquiring skills or knowledge as an activity that is bound inside classroom walls. Instead, they perceive learning to be a more versatile experience that can be formally organized by an institution or informally structured in non-traditional learning environments (Dufur, Parcel, & Troutman, 2013; Kidd & Morris, 2017; Morreale & Staley, 2016; Park & Jo, 2015). Generation Z learners’ increasing usage and ownership of personal mobile devices has given them round-the-clock Internet access, which means “that learning occurs in physical [spaces] where students are simultaneously connected to other spaces and places” (Andrews & Johnson, 2015, p. 4).

Some experts argue that the importance of formal learning and of meeting in the classroom is rising because it helps “counter the inequalities and injustices of the informal learning landscape outside school” (Facer, 2011, p. 28). In-class learning is instrumental to students’ mastery of skills; great value is placed on “being together in the same ‘physical’ space even in an increasingly online, learning at distance and off-campus ‘university experience’” (Andrews & Jones, 2015, p. 4).

In the literature, there is an extensive body of research on the benefits of using technology in education, independent of the physical space that learners occupy (Bruce & Levin, 1997; Firmin & Genesi, 2013; Lindbeck & Fodrey, 2011; Marangunić & Granić, 2015; Richardson, 2013). Technology helps students understand concepts, acquire skills, and strengthen their critical thinking processes (Almurshidi & Naser, 2017; Firmin & Genesi, 2013; Lindbeck & Fodrey, 2011). Limited literature covers the use of technology for learning in informal settings (Marsick & Watkins, 2015). However, underestimating the learning gains acquired outside of established learning institutions would be a mistake; ultimately, the informal activities that learners carry out individually or in groups work together to complement the experiences that they glean from the classroom (Griffiths & García-Peñalvo, 2016).

Formal Learning

Formal learning is characteristically institutionally supported, classroom-based, and highly structured (Marsick & Watkins, 2001). Comprised of a prescribed curriculum that corresponds with official goals (Malcolm, Hodkinson, & Colley, 2003a), the aim of formal learning is set in advance “to meet the externally determined needs of others with more power – a dominant teacher, an examination board, an employer, the government, etc.” (Malcolm, Hodkinson, & Colley, 2003b, p. 3). In formal learning, objectives, course schedules, assessments, the external specifications of merit or outcome, and the subsequent awards of mastery (or diplomas) are usually administered by instructors inside of educational institutions, or sometimes remotely (Eraut, 2000; Kyndt, Dochy, & Nijs, 2009; Leone, 2014; Marsick & Watkins, 2015; Merriam, Caffarella, & Baumgartner, 2012). Advantages and characteristics of formal learning include:

• It incorporates a wide variety of learning programs and methods to meet learners’ needs (Merriam, Caffarella, & Baumgartner, 2012).

• A large quantity of students can learn at the same time (Merriam, Caffarella, & Baumgartner, 2012).

• It is structured around time and standards, which is necessary for serious engagement in subject matter and science studies (Bevan, Dillon, Hein, Macdonald, Michalchik, Miller, … Yoou, 2010).

• The goals, locations, and methods are externally determined by the educational or training providers (Cofer, 2000).

• The aims and pursuit of knowledge or skills are individually or group determined (Cofer, 2000).

Statistics suggest that 70–90% of human learning falls into the informal learning category (Latchem, 2014). According to a two-year study conducted by Cofer (2000), each hour of formal learning gives rise to four hours of informal learning, a 4:1 ratio.

Researchers from Promethean World, a U.K. company that specializes in classroom technology, reported that “over the last 12 months, the majority of schools have still not rolled-out technology use as a core part of the learning experience across all subjects, so it is no surprise that many educators are yet to achieve the benefits of the technology-enabled classroom. Too many teachers still see technology as an activity that is confined to the computer room” (Couros, 2018, p. 21). As digital devices become ubiquitous, it is important that trainee teachers familiarize themselves with devices that can be used for skill acquisition and to support learning. While there are many benefits to structured, formal learning, students have also reported academic progress in less formal contexts (Couros, 2018).

Informal Learning

Education is what survives when what has been learned has been forgotten (Skinner, 1964, p. 483).

Informal learning is defined as learning which occurs outside of the formal educational environment. Unlike formal learning, it is self-directed, intentional, interest-based rather than curriculum-based, non-assessment driven, and non-qualification oriented (Eshach, 2007; Laurillard, 2009). It is also unstructured and not classroom-bound (Marsick & Watkins, 2015). Typically, informal learning involves course-related activities that are done outside established institutions (Aspden & Thorpe, 2009; Kassens-Noor, 2012).

This type of learning can be intentional or serendipitous, as it occurs within a variety of contexts and experiences (Fink, 2013; Marsick & Watkins, 2015). Digital learners are particularly inclined to use this style of knowledge acquisition because informal learning is controlled by the learners themselves rather than their instructors. Thus, it offers them more freedom, autonomy, and control over their own learning (Bennett, 2012; Lai, 2019). Students in informal learning settings feel empowered to pursue their interests without the stress of strict academic expectations (Griffiths & García-Peñalvo, 2016; Jong et al., 2012; Lai, 2019; Marsick & Watkins, 2015). Intrinsic motivation and the ability to infer from context are two essential characteristics of informal learning that make this type of education successful (Lai, 2019; Marsick & Watkins, 2015).

Especially today, with the rise of web and mobile technologies opening new opportunities for informal and independent learning (Holmes, 2011; Jong, Lai, Hsia, Lin, & Lu, 2012), students can learn in social online contexts such as social networks or online games (Chen & Bryer, 2012; Kassens-Noor, 2012; Ritterfeld, Cody, & Vorderer, 2009). According to Marsick and Volpe (1999), informal learning includes the following characteristics:

• It is integrated with daily routines.

• It is triggered by an internal or external jolt.

• It is not highly conscious.

• It is haphazard and influenced by chance.

• It is an inductive process of reflection and action.

• It is linked to the learning of others (Marsick & Volpe, 1999, p. 5).

The acceptance of informal learning is increasing among young and adult learners. In 2002, researchers surveyed Canadians and reported that 90% of adults engaged in informal learning activities for about 15 hours per week, compared with about three hours per week in formal, organized education (Livingstone, 2002).

Informal learning is becoming very prevalent, and while extending the results of Livingstone’s (2002) research to other industrialized nations might be overreaching, young adults and working adults are spending many hours learning in informal contexts (Bennett, 2012).

According to Noam, Biancarosa, and Dechausay (2003), building bridges between formal and informal environments is the best way to relate to the curricular worlds in which our youth are living. A formal-informal link can join these two seemingly dichotomous styles of learning for the benefit of the students (Noam et al., 2003). The importance of informal learning has been recognized since the beginning of the millennium by the Council of Europe (2000), which acknowledged that formal education alone cannot respond to the challenges of our modern society. The support of informal educational practices is indispensable if learning is to become an effective partner in the lifelong process of bettering oneself (Council of Europe, 2000). Together, these divergent ways of learning provide the framework that students need to acquire an enduring desire to carry on learning. According to the NFE Book – The impact of non-formal education on young people and society (2007), “Formal, non-formal and informal education are complementary and mutually reinforcing elements of a lifelong learning process” (Novosadova, Selen, Piskunowicz, Mousa, Suoheimo, Radinja, & Reuter, 2007, p. 10).

Bridging Formal and Informal Learning

In 1938, Dewey introduced the concept of informal learning, which he defined as experiences that require the continuity of experience and interaction, and he became the first to propose that a parallel exists between formal and informal learning (Dewey, 1938). The body of literature on this topic is growing, and it highlights the disconnection between work preparedness and graduates’ mastery of skills. To counteract this problem, researchers have paid increasing attention to the significance of more tacit kinds of learning, such as the production of social knowledge through distributed and situated learning (Johnson et al., 2016; Olson, 2015). As we examine Bernstein’s (1971) broader view of formal and informal learning, we note that the digital information era is well suited for a learning continuum that creates opportunities for learners to frame, classify, and evaluate knowledge (Bernstein, 1971).

Today, the Internet gives us the ability to learn something about almost anything, and with the increase of smart devices, learning can occur in the palm of one’s hand. As the availability of learning materials online increases, so does students’ interest in self-directed and curiosity-based learning. These types of learning have fueled knowledge gathering in museums, science centers, and online learning networks (Falk & Dierking, 2018; Mills, Knezek, & Khaddage, 2014). There are also more serendipitous forms of informal learning, such as life experiences and situational learning, which increase engagement by encouraging learners to follow their interests. Other emerging models blend formal and informal learning to repair the inefficiencies of the traditional system for nontraditional students (Baker, 2015; Falk & Dierking, 2018).

More recently, researchers reported that blending formal and informal learning can produce learning environments that foster experimentation, curiosity, and creativity. Therefore, institutions of higher education are slowly incorporating non-formal models of learning across the curriculum (Johnson et al., 2016). Since digital innovations lend themselves to more flexible types of learning, learners can acquire knowledge in group or individual settings and in a variety of contexts with the aid of emerging digital technologies and AI techniques (Wang et al., 2018).

Constructivism

An important framework that models how students may learn from AI technologies is the constructivist approach. Theorists such as Dewey (1938) and Piaget (1994) asserted that learning occurs when students are actively engaged and that new learning is integrated when students connect their new experiences with their previous ones. Constructivism capitalizes on the knowledge and understanding that learners already possess as key factors of the learning process (Dewey, 1938). Vygotsky (1978), who also viewed students’ current knowledge and skills as the starting point for learning, believed that learning leads to development and proposed the “Zone of Proximal Development (ZPD)” (Vygotsky, 1978, p. 90). He described this zone as “the distance between the actual development level as determined by independent problem solving and the level of potential development” (Vygotsky, 1978, p. 86).

Specifically, the development of skills within social contexts, such as with the use of an AI tutor, is supported by Vygotsky’s (1978) social constructivism theory. This theory holds that human development is socially situated and that knowledge is constructed through interaction with others (Vygotsky, 1978). It also stresses a key point in identity construction: that people’s (or students’) ideas coincide with their experiences and build on their socio-cultural awareness (Vygotsky, 1978).

Constructivism and Social Constructivism and AI Tutors

AI technologies designed for learning have the ability to assess students’ knowledge and to individualize learning at the level of the students’ understanding (Kamenetz, 2016). Electronic tutors, for instance, have supported positive learning outcomes by offering “online help, cues/prompts, informative feedback, and other activities” (Garris, Ahlers, & Driskell, 2002, p. 446). AI tutors are constructed to aid students in acquiring knowledge by starting at the level of difficulty that students have mastered and then increasing the difficulty of assignments as students demonstrate their mastery of previous concepts (Garris, Ahlers, & Driskell, 2002).

A recent study on the use of Alexa reports that users are satisfied with this AI tutor’s answers even when it produces unsought information, which suggests that users value the interaction experience more than the interaction output (Lopatovska, Rink, Knight, Raines, Cosenza, Williams, ... & Martinez, 2018). Using AI voice tutors like Alexa, teachers can access factual information to check that students’ work has been completed rather than only using the tutor for learning drills (Yoder, 2018). A more constructivist approach can also be applied by asking students to use the Amazon® Alexa tutor’s answers to create something original. For instance, students could be asked to reenact a conversation between historical figures or to create public service announcements (Yoder, 2018).

The use of a voice-activated AI device in the classroom can also give students a more lifelike experience than digital learning applications; students can query a system that returns human-like answers which may vary in intonation and wording. Constructivists view this dialogue as an active form of learning, since “all forms of constructivism understand learning to be an active rather than a passive endeavor”; consequently, learning occurs “through dialogue, collaborative learning, and cooperative learning” (Merriam, Caffarella, & Baumgartner, 2012, p. 292). An electronic tutor can answer questions directed to an individual as well as to a group. By bringing pupils together, Alexa can create a sense of community and collaboration among students, and learning experiences flourish as “one learns through engaging, incorporating, and critically exploring the views of others, and new possibilities of interpretations are opened through the interaction” (Gergen, 1994, p. 34).

This learning approach is what Wood, Bruner, and Ross (1976) coined scaffolding. Based on Vygotsky’s ZPD, scaffolding is a supported instructional process in which higher levels of supportive structure are offered at the beginning of the learning process; then, as students progress toward their learning goals, the assistance provided by the structure is diminished. Eventually the scaffolding is withdrawn when students have mastered the material, and new scaffolding is created to support them in their next phase of learning (Wood et al., 1976).
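The fading cycle that Wood, Bruner, and Ross describe can be sketched in a few lines of code. The following minimal illustration is not drawn from any cited system; the mastery thresholds and support labels are hypothetical, chosen only to show how support starts high, fades as estimated mastery grows, and is rebuilt when a new topic begins.

```python
# Hypothetical scaffolding levels, ordered from most to least supportive.
SUPPORT_LEVELS = ["worked example", "guided hints", "prompt only", "no support"]

def support_for(mastery: float) -> str:
    """Map an estimated mastery score (0.0-1.0) to a scaffolding level.

    Thresholds are illustrative, not taken from the literature.
    """
    if mastery < 0.3:
        return SUPPORT_LEVELS[0]
    if mastery < 0.6:
        return SUPPORT_LEVELS[1]
    if mastery < 0.9:
        return SUPPORT_LEVELS[2]
    return SUPPORT_LEVELS[3]  # scaffolding withdrawn

def start_new_topic() -> float:
    """New material: rebuild scaffolding by resetting estimated mastery."""
    return 0.0

# As the learner's estimated mastery rises, support fades step by step.
for score in [0.2, 0.5, 0.8, 0.95]:
    print(score, "->", support_for(score))

mastery = start_new_topic()  # next phase of learning begins with full support
```

The design point is simply that support is a function of the learner’s current estimate, not of time: a drop in performance would move the learner back to a more supportive level automatically.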

Education and Technological Advances

Historically, every technology that has affected our society has also had an impact on education. Howard and Mozejko (2015) propose three distinctive ages in the history of technology integration in education: the pre-digital age, the personal computer age, and the Internet age. Classroom technology in the pre-digital era includes chalk and pens, as well as more sophisticated technologies such as the film projector and radio, which were introduced in schools in the early decades of the twentieth century (Howard & Mozejko, 2015). The third most impactful classroom technology of the twentieth century was the television, which was introduced in the 1950s. These technologies provided a delivery channel for knowledge to students through visual and audio means (Howard & Mozejko, 2015). Popular belief regarding these innovations’ benefits was so positive that people believed students could learn everything they required solely by viewing films and television or listening to radio broadcast programming (Selwyn, 2016).

The 1970s and early 1980s can be classified as the personal computer era (Howard & Mozejko, 2015), when desktop computers became affordable for schools. Notably, after the advent of the Internet in the late 1980s and early 1990s, electronic learning (e-learning) emerged as a learning method that used computers without requiring students to be physically present in the classroom. As e-learning freed education from the restrictions of the classroom, it expanded learning settings to any geographic location offering an Internet connection (Gupta & , 2010; Hashemi, Aziznezhad, Najafi, & Nesari, 2011).

After the millennium, the advent of mobile technologies and the diffusion of smart devices triggered a natural progression of education toward mobile learning. The wide diffusion of mobile electronics led technology giants to increase research on AI-rich products, which are often labeled smart devices. Especially in higher education programs, among Gen Z learners, there is a strong demand for innovative technologies like mobile learning and smart devices, since they integrate a vast array of apps (Klimova & Poulova, 2016). Most of the educational community recognizes the need for these innovative technologies to improve teaching and learning processes (Vinu, Sherimon, & Krishnan, 2011). Simultaneously, higher education is increasingly becoming a competitive environment where schools struggle to “respond to national and global economic, political and social change such as the growing need to increase the proportion of students in certain disciplines” (Daniel, 2015, p. 904).

These factors lead to competition among institutions, which helps to ensure that learning programs are of high quality at a national and global level and increases the offerings of innovative learning initiatives (Daniel, 2015). The upside of this pressure is that scholastic organizations are encouraged to be more proactive about finding ways to link our digital society with educational methods (Kidd & Morris, 2017). Innovative technologies offer many benefits to digital learners, and smart devices further assist learners in transitioning from learning inside the classroom to learning outside of it. This transition encourages students to make queries outside of traditional educational settings and in the world they live in (Kidd & Morris, 2017).

Electronic Tutors, Virtual Assistants, Chatbots

Development of the first devices capable of interacting with humans started in the 1960s. As with many technological advancements, it took new developments in related fields to facilitate the progress toward successful voice-activated devices like the Amazon® Echo. The first voice interfaces were called chatbots; this type of technology forms the basis for today’s artificially intelligent virtual assistants (Ciechanowski, Przegalinska, Magnuski, & Gloor, 2018).

ELIZA was the first interactive program to demonstrate that humans could interact with machines. While the technology was rudimentary and ELIZA’s capabilities were elementary, its development ignited a new field called human-computer interaction (HCI) (Norvig, 2012; Weizenbaum, 1976). Over subsequent decades, a variety of applications led to the creation of artificially intelligent products. While its first versions were simplistic, Siri®, the first personal assistant, became a much more complex and intelligent application which could complete tasks rather than just collect data and give information (Naone, 2009). Today, virtual personal assistants are embedded in personal devices like wearables, smartphones, appliances, and automobiles. Artificial intelligence and natural language processing technologies have made incredible strides. As a study on artificially intelligent tutors reports, voice-activated devices “will play a very important role in the near future” because “we humans take more time to solve a problem than the agent” (Bhinderwala, Shukla, & Cherarajan, 2014, p. 12).

Artificial Intelligence

Although some of these technological visions took years to become realities, scientists and researchers have worked on artificial intelligence techniques since the 1940s. Historically, some of the best-documented early efforts in the study of AI are attributed to Alan Turing’s (1950) paper Computing Machinery and Intelligence, which suggested the possibility of a thinking computer (McCorduck, 2004; Turing, 1950), and IBM® scientist Arthur Samuel’s (1959) checkers program, which is often noted as the catalyst for studies in machine learning, or artificial intelligence techniques (Samuel, 1959).

In today’s society, artificial intelligence (AI) applications are widespread; for instance, we can see commercial applications of this intelligence in junk-email filters, self-driving cars, human-less stock trading computers, traffic control, and features within applications such as Facebook®’s facial recognition for photos, to name a few (Hammond, 2015; Kurshan, 2016; Nogrady, 2016). Intelligent systems help us with complex tasks such as detecting cancer and with simpler tasks such as keeping spam out of our email inboxes (Nogrady, 2016).

AI “is defined as intelligence exhibited by an artificial entity to solve complex problems and such a system is generally assumed to be a computer or machine” (Strong, 2016, p. 64). This science, which emerged from the integration of computer science and structural intelligence, focuses on creating intelligent computers that can solve problems as we humans can, only much more quickly. In this context, “intelligence is the ability to think to imagine creating memorizing and understanding, recognizing patterns, making choices adapting to change and learning from experience” (Strong, 2016, p. 64). AIs are smart and can surpass human intelligence, but in most instances their intelligence is segregated to specific areas of intellect. For this reason, science divided AIs into two distinctive categories, Strong AI and Weak AI (Strong, 2016), and further defined these categories with the specifications of Narrow AI and General AI (Hammond, 2015).

A Strong AI is the closest digital exemplification of a human mind, and its creation remains an intangible vision. Even the latest innovations in the field have not arrived at a machine that possesses the complete intelligence and functionality of a human being (Strong, 2016). The characteristics and limitations of a Strong AI incite debate among scientists because, to be classified as a Strong AI, the machine must “have the ability to reason, think and do all functions that a human is capable of doing” (Strong, 2016, p. 64).

Unlike Strong AIs, Weak AIs are prevalent today. A Weak AI is an intelligent machine that is built to serve a specific function that requires intelligence (Strong, 2016). This type of AI might behave in a humanlike way, but the final product has little to do with how humans think or act (Hammond, 2015). One of the best-known applications of Weak AI is in video games in which humans play against a computer opponent: players might believe that their adversary is human because it exhibits human-like traits during game-play. Yet the game, or the weak AI behind those strategic moves, is not thinking like a human. While these systems may use human reasoning as a guide, they cannot replicate it (Hammond, 2015; Strong, 2016). The Amazon® Echo is another example of weak AI; it is very capable, but it is not a machine that thinks like a human (Amazon, 2019).

Weak AIs also have “the capacity for knowledge and the ability to acquire it,” and therefore they can increase their knowledge and skills (Strong, 2016, p. 64). Additionally, they have “the ability to judge, understand relationships and last but not least to produce original thoughts” (Strong, 2016, p. 64). Another example of a weak AI is IBM®’s Watson, which is capable of finding answers by building evidence from reviewing thousands of pieces of text and using each piece to increase the level of confidence it needs to draw conclusions. Watson looks for patterns and for evidence to support the patterns, a behavior that has been modeled on humans (Hammond, 2015).

Further specifications of AIs include Narrow AI and General AI. These terms differentiate AIs that are designed to complete specific tasks (Narrow AIs) from those designed for general and varied tasks (General AIs) (Hammond, 2015). For instance, “systems that can recommend things to you based on your past behavior will be different from systems that can learn to recognize images” (Hammond, 2015, para. 17). Netflix® can suggest movies based on your browsing history and movie selections. This type of AI differs “from systems that can make decisions based on the syntheses of evidence” (Hammond, 2015, para. 17).

AI-Compliant Culture and Society

On December 20, 2016, the Executive Office of the President of the U.S. released a report on Artificial Intelligence, Automation, and the Economy which projected the congressional point of view on how the rise of AI-driven automation will impact our society within the next 20 years. In this report, Congress devised strategies for the development, implementation, and adaptation of artificial intelligence technologies in the United States. The Office of the President expects AI-driven automation to make the American economy prosper; congressmen add that “while many will benefit, that growth will not be costless and will be accompanied by changes in the skills that workers need to succeed in the economy, and structural changes in the economy” (Furman et al., 2016, p. 1).

The governmental document listed a set of strategies for job creation that will affect our educational system in order to “educate and train Americans for jobs of the future” (Furman et al., 2016, p. 3). So that future students will be employable, “American workers will need to be prepared with the education and training that can help them continue to succeed” (Furman et al., 2016, p. 3). To begin this process, Congress plans to “start with providing all children with access to high-quality early education so that all families can prepare their students for continued education” (Furman et al., 2016, p. 3). Lastly, Congress will be “investing in graduating all students from high school and college-career-ready and ensuring that all Americans have access to affordable post-secondary education” (Furman et al., 2016, p. 3).

On one hand, advances in the field of technology are making devices more affordable for both schools’ and families’ budgets. On the other hand, the benefits that AIs offer could be unattainable for people who cannot afford AI-compliant devices or Internet access. Even though the kind of one-to-one help derived from tutoring is one of the most effective approaches to teaching (Cassidy, 2016), these new technologies are expensive, and with already overstretched budgets, many schools are not able to afford the rising cost of artificial intelligence tutors.

This results in lost opportunities for students from disadvantaged areas and families, who will not have the chance to receive the type of help that students from affluent communities or families can afford (Makki et al., 2018; Simoni, Gibson, Cotten, Stringer, & Coleman, 2016). Financially secure families may be able to supply commercial-grade AIs (such as the Amazon® Echo or Google® Home) for home use because they are affordable to the middle class. Consequently, since AI technologies will be more prevalent among students from the middle and upper classes, those students may be better prepared for the future and for their careers (Simoni et al., 2016).

Another issue is Internet connection; AIs need to be connected to the network to operate. The most disadvantaged students belong to the least connected segments of the U.S. population. According to one report, Hispanic immigrants are less connected to the internet than other low-to-moderate income families (Perrin & Duggan, 2015). Pew Research Center also reports that “African-Americans and Hispanics have been somewhat less likely than whites or English-speaking Asian-Americans to be internet users, but the gaps have narrowed. Today, 78% of blacks and 81% of Hispanics use the internet, compared with 85% of whites and 97% of English-speaking Asian Americans” (Perrin & Duggan, 2015, para. 5). Connectivity issues can make these types of technologies prohibitive for low-income students in rural communities or in other school districts that cannot afford salient technologies (Perrin & Duggan, 2015).

Moral Issues with AI Systems

In a more general sense, AI systems are only able to make decisions based on the data that we provide to them. Therefore, the decisions that AIs make are only as reliable as the data that we furnish to them (Nogrady, 2016). This might not sound like a negative feature, because with most applications, manual intervention and manipulation of information can be done easily. However, AI systems surpass human intelligence when making decisions based on large datasets; therefore, answers from these systems cannot be traced, or broken down into the decision-making steps that the AI took to arrive at its conclusions (Nogrady, 2016). Without the ability to open such decision-making systems to scrutiny, we could end up with systems that return erroneous information and create problems for individuals (Nogrady, 2016).

In the case of AI error, decisions regarding spam emails or warnings that our vehicles are about to break down might not be as impactful as more life-changing answers regarding someone’s propensity to commit a crime, or an errant conclusion that an innocent person is guilty (Nogrady, 2016). By the same token, if we use AIs in schools to collect and analyze student performance data, we could find answers that help students flourish in school. But AIs can also infer that a specific student is unsuited to a specific academic pursuit when that student may have innate abilities that the AIs were unable to recognize. Since we are the ones building AI techniques and deciding which data they should use to make decisions, these machines will most likely replicate both our brilliant and our fallacious human thinking (Nogrady, 2016).
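The claim that an AI’s decisions are only as reliable as its data can be made concrete with a toy example. The sketch below uses entirely fabricated records and a deliberately trivial “model” (a per-group majority vote, not any real educational AI): because group “B” was rarely recorded as “gifted” in the historical data, the model now labels every “B” student “average,” regardless of individual ability.

```python
from collections import Counter

def train_majority(records):
    """Return the most common label per group in the training data.

    A caricature of data-driven prediction: the model can only
    reproduce whatever skew the historical records contain.
    """
    by_group = {}
    for group, label in records:
        by_group.setdefault(group, Counter())[label] += 1
    return {g: counts.most_common(1)[0][0] for g, counts in by_group.items()}

# Fabricated, skewed history: "B" students were seldom labeled "gifted".
records = [("A", "gifted")] * 8 + [("A", "average")] * 2 \
        + [("B", "gifted")] * 2 + [("B", "average")] * 8

model = train_majority(records)
print(model)  # {'A': 'gifted', 'B': 'average'}
```

No step of this pipeline is malicious, yet the output systematically disadvantages one group, and nothing in the final model reveals why, which is precisely the traceability problem described above.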

Lastly, rapid advances in artificial intelligence techniques attracted the attention of renowned physicist Stephen Hawking, who publicly expressed his concerns, stating, “In short, the rise of powerful AI will be either the best or the worst thing, ever to happen to humanity. We do not yet know which” (Cellan-Jones, 2016, para. 8). Technology icons Elon Musk, Bill Gates, and Steve Wozniak echo Professor Hawking’s reservations concerning the use of AIs. While developing their upcoming creations with this computer science technique, they are aware that they are working with a powerful technological innovation which has the power to benefit humanity, but also to destroy us (Cellan-Jones, 2016).

Despite the potential problems that this new technology can cause, the opportunities for positive changes are significant. Some scholars believe that AI should be fully embraced in the education arena, and that we “should harness the power and strength of AI itself. In that way we can help teachers to equip learners – whatever their age – with the knowledge and flexible skills that will allow them to unleash their human intelligence and thrive in this re-shaped workforce” (Luckin et al., 2016, p. 12).

Educators and scholars alike need to research artificially intelligent applications for teaching and learning, and to look for opportunities to apply these techniques for the betterment of educational practices. As the business sector continues its development and sale of AI-embedded devices, it is imperative that educational leaders and practitioners learn about these innovations and maintain a level of critical scrutiny surrounding the creation of policy and regulation for these devices.

Artificial Intelligence in Education

In education, the concept of an intelligent computer program that acts as a personal assistant, or a tutor, is not new. While industrial models of AIs are widely available, education lags behind in the adoption of intelligent machines to support learning (Kurshan, 2016). Decades ago, Sleeman and Brown (1982) dubbed these applications Intelligent Tutoring Systems (ITS). More recently, many researchers have dedicated their studies to the creation of intelligent devices capable of helping learners. Artificial intelligence pioneer Marvin Minsky and his colleagues underlined key aspects of their vision for AI in education with the following statement:

…we could try to build a personalized teaching machine that would adapt itself to someone’s particular circumstances, difficulties, and needs. The system would carry out a conversation with you, to help you understand a problem or achieve some goal. You could discuss with it such subjects as how to choose a house or car, how to learn to play a game or get better at some subject, how to decide whether to go to the doctor, and so forth. It would help you by telling you what to read, stepping you through solutions, and teaching you about the subject in other ways it found to be effective for you. Textbooks then could be replaced by systems that know how to explain ideas to you in particular, because they would know your background, your skills, and how you best learn (Minsky, Singh & Sloman, 2004, p. 122).

Over the last decade, education has seen weak artificial intelligence applications specifically in the areas of language processing, reasoning, planning, and cognitive modeling (Woolf, 2010). Over the years, many scholars have contributed to this line of research: John Brown and Sleeman from Stanford University; Richard Burton from Duke University; Ira Goldstein from the Massachusetts Institute of Technology; William Clancey, who graduated from Stanford University and researched AI and learning environments with Elliot Soloway from the University of Michigan (Jones, 1985); and, more recently, professor and researcher Beverly Woolf of the University of Massachusetts (Woolf, 2010).

Most of the research done on AI tutors surrounds applications that can be used for pedagogical purposes. However, not all tutors are alike: Veletsianos (2010) suggested that one distinction between pedagogical and conversational agents, or tutors, is that the former are limited to delivering instructional messages, while the latter can actually respond to student queries. Heidig and Clarebout (2011) determined that the variable features of these pedagogical AI tutors make them open to a variety of uses in educational contexts.

Specific intelligent tutoring systems designed for education are computer software capable of tracking how learners progress through assignments during problem-solving tasks, detecting errors or learning misconceptions, estimating learners’ levels of proficiency, and offering suggestions for improvement. The timeliness with which these AI tutors can provide feedback can help to empower learners to develop the skills of self-regulation, self-monitoring, and self-explanation (Woolf, 2010).

The ability to suggest and prescribe appropriate learning activities at the learners’ levels of mastery helps scaffolding and aids the acquisition of concepts previously outside of the learners’ reach (Azevedo & Hadwin, 2005; Duffy & Azevedo, 2015; Shute, 2008; VanLehn, 2006). A smart AI device would become like a companion, acting as a tutorial assistant that knows everything about the student and is therefore capable of offering advice, providing suggestions, disclosing the location of necessary resources, helping with tasks, and giving encouragement to the learner when needed (Kamenetz, 2016). In this way, students will have machine-assisted, customized help with their education (Kamenetz, 2016). This can result in positive learning experiences, especially for students who feel a lack of confidence in their academic abilities (Dickson, 2017; Duffy & Azevedo, 2015; Luckin et al., 2016).

Some of the most successful implementations of artificial intelligence in education have been in language learning and in emergent technologies such as the Amazon® Echo, one of the most advanced commercial AIs available (Amazon, 2019; Woolf, 2010). In the future, learning will originate from different sources: from inside the classroom (formal learning) and from the multitude of technologies that students use to socialize, connect with others, or be entertained outside the classroom (informal learning) (Dufur, Parcel, & Troutman, 2013). Devices which incorporate AI techniques will be able to aid students’ learning by making knowledge available from simple vocal requests.

Most recent applications of AI tutoring systems report learning successes in teaching topics in information security (Mahdi, Alhabbash, & Naser, 2016), medical concepts such as diabetes (Almurshidi & Naser, 2017), English and grammar (Alhabbash,

Mahdi, & Naser, 2016), and linear programming (Naser, 2012), to name a few.

AI Tutors in Gaming

Today, learning AIs are available in Digital Games Based Learning (DGBL) and serious games (Lester, Ha, Lee, Mott, Rowe, & Sabourin, 2013). In the near future, with the aid of AI applications, students will be able to learn by using study tools that adapt to their level of concept mastery (Admoni, 2016). In this setup, overachieving students will be challenged with harder assignments, and underachieving students will be supported by assignments which aim to help them where they are weak (Admoni, 2016; Kamenetz, 2016).

One of the major advantages of having an intelligent tutor is that some systems report their analysis back to the learners and to the teachers. In this way, learners and teachers share “valuable information about the learner’s achievements, their affective state, or any misconceptions that they held” (Luckin et al., 2016, p. 20). This feedback aids teachers who are trying to understand how students are approaching learning “and allows them to shape future learning experiences appropriately” (Luckin et al., 2016, p. 20). Students using this approach may feel empowered, as they would be able to reflect on their learning habits and track their progress (Luckin et al., 2016).

According to Luckin, Holmes, Griffiths, and Forcier (2016), AI tutoring systems use artificial intelligence

techniques to simulate one-to-one human tutoring, delivering learning activities best matched to a learner’s cognitive needs and providing targeted and timely feedback, all without an individual teacher having to be present. Some ITS put the learner in control of their own learning in order to help students develop self-regulation skills; others use pedagogical strategies to scaffold learning so that the learner is appropriately challenged and supported (Luckin et al., 2016, p. 24).

The literature reports successful learning outcomes with the use of AI tutoring game systems; these applications often outperform untrained tutors and, with their ability to learn, can become proficient tutors (VanLehn, 2011). AI tutoring games have the potential to help students synthesize content and to support content delivery, such as in the use of learning digital games.

A body of literature dedicated to the study of AI games used for education exists; however, it is limited and includes diverse research on AI gaming applications with a variety of purposes, backgrounds, and interests related to electronic educational games, or serious games (Frutos-Pascual & Zapirain, 2015). In the limited research experiments based on serious games, the use of AIs yielded successful learning outcomes and showed potential for unique experiences customizable to the needs and skill levels of the players (Faghihi, Brautigam, Jorgenson, Martin, Brown, Measures, & Maldonado-Bouchard, 2014; Frutos-Pascual & Zapirain, 2015; Johnson, Vilhjálmsson, & Marsella, 2005). Successful learning implementations have been reported in language acquisition, math, 21st-century skills, and other disciplines (Dobrovsky, Borghoff, & Hofmann, 2019; Faghihi et al., 2014; Johnson et al., 2005). Artificial intelligence has the potential to create unique learning avenues for individual learners in face-to-face, blended, or online learning environments.

AIs - Video Games Versus Tutors

While AI tutor applications can foster learning, computer-based video games can produce even larger gains in some areas. A study that contrasted tutoring systems in school with computer-based video games designed to promote learning revealed that the video game activities produced better results than conversational (AI) applications for activities in the areas of recall, strategic skills, problem-solving, and higher-level cognitive processes (Chuang & Chen, 2009). In the same study by Chuang and Chen (2009), no significant difference was found with judgment-related tasks such as identifying similarities and differences. These findings were consistent with a study by Razzaq and Heffernan (2009), which explored technologies for the design of online instruction programs and the merits of their individual characteristics for instruction.

According to Razzaq and Heffernan (2009), tutoring systems have the disadvantage of being time consuming, since students must attend in depth to every step in the problem-solving process, regardless of whether they need assistance with that step.

Their study also compared the effectiveness of AI tutoring systems with high and low proficiency students, and it yielded mixed results. The study utilized the ASSISTments system, controlled for students’ time on task, and explained that tutored problem-solving involved seeing fewer problem solutions with step-by-step tutor assistance and that untutored problem-solving involved seeing more problems and solutions without tutor assistance (Razzaq & Heffernan, 2009).

The results indicated that tutoring systems helped the low proficiency students to learn more. Conversely, high proficiency students learned more when they were not using the tutoring system. These findings show that although tutors benefitted low proficiency students who need more thorough instruction, they may have impeded the learning progress of high proficiency students (Razzaq & Heffernan, 2009). As AI techniques continue to improve, it is important to continue to research these devices and their effect on learning.

Different Kinds of AI Tutors in Education

Specifically, in education, artificial intelligence techniques have been used for about 40 years in a variety of applications with specific emphasis on intelligent tutoring systems (ITSs) (Brown, Burton, & Bell, 1975). Thanks to studies in machine learning languages, AI techniques have evolved into robust systems; today’s AI tutors have a variety of capabilities which “help learners in specific areas of instruction based on a preset number of questions and answers” (du Boulay, 2016, p. 76). The literature reports that the use of AI tutors fosters positive learning outcomes.

Now that AI-infused devices are sold commercially, students can learn both in class and outside of class with technology devices that dispense knowledge which, in most cases, is more up to date than the information in textbooks (Schindlholzer, 2016).

Thus, the role of educators expands to become one of helping students adopt individualized learning methods that span beyond acquiring knowledge and cross into critical thinking and problem-solving skills (Schindlholzer, 2016). In the future, intelligent devices will be the dispensers of knowledge, and students will no longer be employed for their knowledge, but for their ability to use it. Individuals will need the skills to apply knowledge to fulfill a goal (Schindlholzer, 2016).

Annually, three thousand schools report enrolling over half a million students in courses augmented with AI tutors installed for learning support (Koedinger & Aleven,

2016). Students are quick to adapt to intelligent avatar tutors for specific areas of instruction. School applications consist of web-based applications, software packages, or customized stand-alone systems created by or for the schools. These applications serve specific purposes, such as teaching eighth-grade students about microbiology (Lester, Ha, Lee, Mott, Rowe, & Sabourin, 2013). While the applications might produce more enjoyment for students who are able to converse with a character such as an avatar or a wizard, these tutors offer limited assistance and are difficult to maintain, to upgrade, and to assess for learning gains (Lester et al., 2013). In the literature, another type of AI tutor that students find in multimedia learning environments is the avatar: a humanlike computer-game guide which appears as a character in serious games (Lester et al., 2013).

Education and Implementation Challenges for AI

Many factors need to be considered when implementing AI systems in certain areas of our society and in education. Experts in the fields of education and computer science report several challenges for the implementation of AIs in education (Woolf et al., 2013), such as ensuring that each learner is given the aid of an AI tutor. In this way, all students will share this privilege, and data comparisons will be possible through the collection of cohort data as well as single-student data (Woolf et al., 2013). AI systems are not yet able to assist learners with self-direction, self-assessment, teamwork, and other 21st-century skills that will help students receive adequate preparation (Woolf et al., 2013).

To effectively help students with their academic pursuits, all data available across platforms should be analyzed together. This means that data collected from learning contexts, social contexts, and personal interests should be made available to AIs for decision making. This type of scrutiny entails privacy concerns and security issues for which policy and regulations must be implemented (Woolf et al., 2013). AI systems should give us the opportunity to increase our interconnectivity and access to a type of classroom that is worldwide. While this is an opportunity that will eventually become a reality, today we are far from this prospect (Woolf et al., 2013). Finally, AI systems designed for education should also include lifelong learning features. In this way, students can continue to learn outside the classroom and as they enter their careers

(Woolf et al., 2013).

Device Embedded AI Tutors

Much of the research conducted on AI tutoring systems focused on devices that can offer standardized levels of tutoring with technologies that are plagued by poor speech recognition, incorrect pronunciation, or response latency (Nass & Brave, 2005).

Older models especially either lack the ability to be customized to adapt to an individual learner’s level of knowledge or can provide only a pre-set number of answers to a pre-set number of instructions (Martin & Mitrovic, 2002).

By contrast, today’s devices, such as the Amazon® Echo, Apple® Siri, and Google® Now, embody ‘intelligent’ and ‘socially aware’ human interactions in natural language (Large et al., 2017). Some of these devices, like the Amazon® Echo (Amazon, 2019), also offer customizable applications through developers’ tools. For these AI tools, research is very limited since the field of artificial intelligence for commercial home devices is in its genesis phase. McTear et al. (2016) assert that

“Apple’s Siri, Google Now, Microsoft Cortana, Amazon Alexa, Samsung S Voice,

Facebook’ s M, and Nuance Dragon” (p. 11) are devices that have limitless uses: People can obtain information available on the Internet by simply querying the interface. Voice personal assistants offer simple and specialized functions; they can offer information regarding location, directions, calendaring, and more complex functions, such as, fitness or health tracking (McTear et al., 2016).

These devices are innovative and disruptive: Back in 2011, after Apple® launched Siri®, Google® sought support from a congressional antitrust investigation on the claim that its business model was threatened by this new way of web searching. Google®’s business model is built on its popular search engine algorithm, and web searches performed via voice activation and context awareness, as with Siri, would change the search industry. Voice-activated searches return only one result and give users exactly what they want (McTear et al., 2016). Unlike Siri®, Google®’s engine returns a collection of answers, which gives clients multiple results and simultaneously gives Google® a way to include ad space on its search-return page. Today, this new breed of powerful AI assistants like the Amazon® Alexa has the potential to disrupt far more than the web search industry. And, as artificial intelligence technologies continue to become more prevalent in our society, they will change the way we conduct business, the way we educate our youth, and the way we conduct our personal lives (Shead, 2017; Wireless Watch, 2017).

Amazon Echo and Alexa

On November 6, 2014, Amazon® released the Echo, a voice-controlled, Internet-connected, cylinder-shaped device powered by Alexa and capable of playing your favorite song, ordering pizza, turning the lights in your home on and off, and, most recently, making phone calls. Amazon®’s engineers designed the Echo to become an intelligent helper for everyone (Amazon, 2019a; McTear et al., 2016). Until the end of 2016, sales of this new gadget remained modest as customers and Amazon® continued to find new purposes for it. At the end of 2016, Amazon® reported selling nine times as many Echos as the previous year; millions of these devices were sold worldwide (Heater, 2016). During the Christmas holidays, this item became so popular that it sold out on the Amazon® site (Leswing, 2016). Today, Amazon® Alexa, the voice that powers the Echo, is predicted to become a billion-dollar business by 2020, with over 60 million devices sold (Kharpal, 2017).

The appeal tied to this device is that, unlike other voice command systems, its intelligent procedures make it capable of understanding commands semantically by studying users’ speech patterns (McTear et al., 2016). The Amazon® Echo is a weak AI built to use cloud computing technologies and machine learning languages that allow it to transcend other models (Amazon, 2019). Alexa responds to questions in a lifelike voice akin to a live human tutor, which helps put users at ease. Operators can ask Alexa questions in the same way that they can with other systems, but the Echo outplays competitors with its capacity for natural, dialogue-based interaction; as a semantic listener, Alexa is able to understand natural expressions.

At the time of its release, the Echo was ahead of its time; nevertheless, Amazon® confidently invested in an aggressive marketing campaign and in partnerships with other giant companies to increase the reach of this intelligent learning device for private use and for industries including schools and hospitals. Today, many businesses are working with Amazon® to embed Alexa, the voice of the Echo, in our most commonly used electronics, such as cellular phones, automobiles, refrigerators, and other appliances, which, once connected, extend Amazon®’s creation as a portable application or an add-on feature for targeted marketing products (Wireless Watch, 2017). Emerging technologies networked via Internet of Things (IoT) connections have the power to turn static devices attached to a cord into boundless applications that can be used remotely with smart technologies. For instance, the Amazon® Echo can be guided with the use of the companion app, which is downloadable on most smart devices (Amazon, 2019).

This evolution of technologies and the adoption of emerging technologies in education is moving us toward the creation of more individualized learning approaches that will include a better understanding of students’ profiles and inclinations, their learning goals, and their learning challenges. Supplementing education with technology tools can also support learning in formal and informal settings. Unlike other assistants or tutors, the artificial intelligence Alexa offers virtually infinite capabilities for learning.

This learning device is constantly acquiring new skills that connect it to other devices or services for people (Amazon, 2019; McTear et al., 2016). The use of Application Program Interfaces (APIs) connected via the virtually inexhaustible Internet of Things, storage structured with cloud computing, and a powerful AI system designed to learn from its environment and its users make this device an assistant, or a tutor, for its owner (Amazon, 2019; McTear et al., 2016; Villanueva, Villa, Moya, Santofimia, & López, 2012).

In contrast to school-bound, topic-defined AIs, the availability of commercial AI tutors like the Amazon® Echo gives learners the ability to learn in formal or informal settings, such as inside the home. The availability and affordability of this device make it a likely candidate for the next common small household appliance

(Amazon, 2019; Shead, 2017; Wireless Watch, 2017).

Amazon Echo (Alexa) in the Lives of People

Alexa is not like other personal assistants on the market; while other assistants such as Siri® are designed to furnish a specific set of answers to specific sets of questions (Russel, 2016), Alexa uses emerging technologies to satisfy users’ queries (Amazon, 2019). The Echo is an IoT device powered by an artificially intelligent agent that scours the web for information and retains customers’ voice commands and answers in the cloud architecture. With the help of big data analytics, Alexa continues to learn topics of interest to its owners (Amazon, 2019). Among the many applications that people have listed for the Echo, a few stand out as services for people who have specific sets of challenges. For instance, the Amazon® Echo can assist patients with dementia and their caregivers. Patients can ask Alexa the same questions repeatedly without trying their caregivers’ patience.

This device can aid patients with loneliness; the friendly voice of Alexa can help set patients at ease, even though it cannot replace human touch or human conversation.

The Amazon® Echo is recommended by Rick Phelps, who is 63 and “was diagnosed with

Early Onset Alzheimer’s disease in November 2010. After his diagnosis, he became an advocate for dementia awareness and founded the Memory People private community. Memory People now has over 13,000 members!” (DailyCaring Editorial

Team, n.d., para. 5). Rick’s blog reports examples of how Alexa helps him to cope with dementia. Rick talks about Alexa as an extension of his memory; the number of times he can ask Alexa the same question and receive the correct answer is infinite. Alexa reminds him to take his meds at a specific time, and Rick calls on the Echo to play music he enjoys. Other applications for Alexa include assisting people with mobility issues by giving them more control over their environment. These applications can give individuals with physical challenges more independence: turning on the lights or playing music in a room can be done without getting up or having to ask for assistance (DailyCaring

Editorial Team, n.d.).

Another advantage of the device is that it comes with an Alexa developer’s kit that enables consumers to create apps for the device (Amazon Developer, 2019c).

Utilizing this feature to their advantage, Boston Children’s Hospital designed the KidsMD app, which lets users receive advice regarding their child’s ailments (Comstock,

2016). The app lets users access medical information from Boston Children’s Hospital

(BCH), which is a world-class medical institution. Connected to the BCH doctors’ cloud-based content, this app prepares Alexa to help parents determine if children need to see a physician. It helps determine drug dosages by relating specific guidelines on dosages of medicine per weight and age of the patient, and it also covers over-the-counter drugs

(Comstock, 2016).

BCH plans to add more functionalities for the Echo, and it is not the only medical facility working on integrating Alexa into healthcare facilities and into patients’ rooms. The Amazon® Echo has the potential to assist doctors and nurses by taking notes, reading charts back to them, helping surgeons during surgery, or sorting through information when doctors need to make a diagnosis (Bailey, 2016). The hospital has many ideas for the use of the Echo, including adding an Echo in every child’s room to keep them company. Although they are aware these projects will present challenges with security, confidentiality, and continuous Wi-Fi connectivity (Bailey, 2016), Boston Children’s Hospital prides itself on taking new initiatives that benefit people’s health.

The hospital aspires to improve patient health and the value of its services by pursuing the goal of integrating digital tools that motivate people to proactively take care of themselves (Comstock, 2016).

Amazon Echo, Technical Perspective

Despite the recent prevalence of conversational interfaces, the assortment of tools supporting these technologies can be diverse depending upon which task needs to be performed. For instance, tools “for tasks such as tokenization and part-of-speech tagging are used for low-level processing that will contribute to subsequent stages of analysis, while others perform more high-level tasks such as providing a semantic interpretation of an utterance” (McTear et al., 2016, p. 187). The function of a conversational interface is to detect the user’s intent from an utterance and glean the significant entities. Users’ intents are actions, commands, or requests, such as “setting an alarm, scheduling a meeting, sending a text message, or booking a table at a restaurant” (McTear et al., 2016, p. 187).

Entities “are those elements of meaning that are essential to the execution of the action, such as the time for the alarm or the meeting, the recipient of the text message and its content, or the number of people for the restaurant booking” (McTear et al., 2016, p. 187).
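To make the intent/entity distinction concrete, the following is a minimal, purely illustrative sketch in Python. It uses hand-written regular expressions rather than the trained statistical models that real platforms such as Api.ai, Alexa, or LUIS employ; the intent names and slot names are hypothetical.

```python
import re

# Hypothetical intent patterns; a named group captures each entity (slot).
# Real conversational platforms learn these mappings from training data.
INTENT_PATTERNS = {
    "SetAlarm": re.compile(
        r"set an alarm for (?P<time>\d{1,2}(?::\d{2})?\s*(?:am|pm))", re.I),
    "BookTable": re.compile(
        r"book a table for (?P<party_size>\d+)", re.I),
}

def parse_utterance(utterance):
    """Return (intent, entities) for an utterance, or ("Unknown", {})."""
    for intent, pattern in INTENT_PATTERNS.items():
        match = pattern.search(utterance)
        if match:
            return intent, match.groupdict()
    return "Unknown", {}

intent, entities = parse_utterance("Please set an alarm for 7:30 am")
print(intent, entities)
```

The point of the sketch is only the division of labor the passage describes: one step recognizes the user’s intent, a second step extracts the entities that the action needs.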

Commercial applications include platforms that use the “approach of intent recognition and entity extraction, including Api.ai, Wit.ai, Amazon Alexa, and Microsoft LUIS” (McTear et al., 2016, p. 187). One of the most popular platforms for personal assistants is Api.ai; this platform “supports the development of mobile apps as well as apps involving wearables, robots, motor vehicles, smart homes, and smart TV” (McTear et al., 2016, p. 188). At this time, it is available in 15 languages “on a range of platforms and coding languages, including iOS, Apple Watch, Android, Cordova, Python, C#, Xamarin, Windows Phone, and Unity” (McTear et al., 2016, p. 188).

Amazon Alexa Custom Applications

Custom Alexa applications take two types of input. The first is an Intent Schema, a JSON structure which declares the set of intents your service can accept and process (Amazon Developer, 2019a). The second is the spoken input data, a set of sample utterances: a structured text file that connects the intents to likely spoken phrases and contains as many representative phrases as possible (Amazon Developer, 2019b). Within the context of Alexa, “an intent represents an action that fulfills a user's spoken request. Intents can optionally have arguments called slots. The sample utterances are set of likely spoken phrases mapped to the intents” (Amazon Developer, 2019a, para. 1); this setup is flexible, and slots can be added to intents to allow for a variety of spoken answers (Amazon Developer, 2019c). For instance, “an Intent called SetAlarm would have a slot for the time. Similarly, ASK provides several built-in Intents as well as support for slot types such as AMAZON.NUMBER, AMAZON.DATE, AMAZON.TIME, AMAZON.US_CITY, and others” (McTear et al., 2016, p. 204).
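The following sketch shows what these two inputs might look like for the SetAlarm example above. It is schematic only: the exact JSON layout required by the Alexa Skills Kit has changed across versions, so the field names here should be read as illustrative rather than as the official format.

```python
import json

# Illustrative intent schema: one custom intent with a slot that uses a
# built-in slot type, plus a built-in intent. Field names are schematic.
intent_schema = {
    "intents": [
        {
            "intent": "SetAlarm",
            "slots": [
                {"name": "Time", "type": "AMAZON.TIME"},
            ],
        },
        {"intent": "AMAZON.HelpIntent"},
    ]
}

# Sample utterances connect likely spoken phrases to the intents above;
# the {Time} placeholder marks where the slot value appears in speech.
sample_utterances = [
    "SetAlarm set an alarm for {Time}",
    "SetAlarm wake me up at {Time}",
]

print(json.dumps(intent_schema, indent=2))
```

Together, the schema tells the service which actions exist and what arguments they take, while the utterance list teaches it which phrasings should map to each action.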

Custom apps, which for Alexa are called skills, can be added to Alexa with the use of the Alexa Skills Kit (ASK) designed for developers; the kit provides instructions, templates, and a number of APIs that can be adapted to custom applications (Amazon

Developer, 2019b). Some of the built-in capabilities offer “playing music from multiple providers, answering questions, providing weather forecasts, and querying Wikipedia”

(Amazon Developer, 2019b, para. 1). The process of designing a skill starts by defining a voice interface, which identifies the mapping between users’ verbal input and the intents handled by the cloud-based Amazon® services (Amazon Developer, 2019b).

The possible utterances for a particular intent are contained in the Sample Utterances File. The Amazon® Echo’s design divides intents into three distinctive categories: “Full intents, Partial intents, and No intent” (McTear et al., 2016, p. 205). With a full intent structure, “the user says everything in a single utterance that is required to fulfill their request, as in a one-shot query. With a partial intent, one or more slots are missing and the system has to prompt for the missing values” (McTear et al., 2016, p. 205).

Lastly, “A no intent is where the user’s intent is unclear and the system has to request clarification by presenting a short list of options to choose from” (McTear et al., 2016, p. 205).
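The three-way distinction can be sketched as a small dispatcher. This is a toy model, not Alexa’s actual dialogue management: the function names and prompts are invented for illustration, but the branching mirrors the full/partial/no-intent logic just described.

```python
def handle_request(intent, slots, required_slots):
    """Toy dispatcher illustrating full, partial, and no-intent handling."""
    if intent is None:
        # No intent: the request is unclear, so offer a short list of options.
        return "Did you mean: set an alarm, or check the weather?"
    missing = [s for s in required_slots if slots.get(s) is None]
    if missing:
        # Partial intent: a required slot is missing, so prompt for it.
        return f"What {missing[0]} would you like?"
    # Full intent: everything needed arrived in a single utterance.
    return f"OK, performing {intent} with {slots}."

print(handle_request("SetAlarm", {"time": "7 am"}, ["time"]))  # full intent
print(handle_request("SetAlarm", {}, ["time"]))                # partial intent
print(handle_request(None, {}, []))                            # no intent
```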

The result is that users can interact with Alexa with commands or with questions.

All “requests are sent to the Alexa service in the cloud and routed to the specific service that provides the logic and a response” (McTear et al., 2016, p. 205). Alexa responds with either a spoken answer, or with visual output displayed on the companion Alexa app.

This infrastructure uses the Speech Synthesis Markup Language (SSML); when the Amazon® service supporting the skill returns a reply to a user’s call, the text that developers added to the intent is converted to speech by Alexa’s services, and “Alexa automatically handles normal punctuation, such as pausing after a period, or speaking a sentence ending in a question mark as a question” (Amazon Developer, 2019d, para. 2).
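As a small illustration, a skill’s response text can be wrapped in SSML markup to control how it is spoken. The snippet below builds such a string in Python using core SSML tags (speak, break, emphasis); the surrounding function and its wording are hypothetical, not taken from any official sample.

```python
# Hypothetical helper that wraps an answer in minimal SSML markup.
# <break> inserts a pause; <emphasis> stresses the answer when spoken.
def build_ssml(answer):
    return (
        "<speak>"
        "Here is what I found. "
        '<break time="500ms"/>'
        f'<emphasis level="moderate">{answer}</emphasis>'
        "</speak>"
    )

print(build_ssml("Photosynthesis converts light energy into chemical energy."))
```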

Amazon Alexa in Education

A study conducted by Incerti, Franklin, and Kessler (2017) on the opinions of preservice teachers on the use of the Amazon® Echo, with its voice Alexa, for education utilized a sample (n = 68) of preservice teachers enrolled in technology courses at a Midwestern university and yielded the following perspectives. Findings reported that trainee-teachers perceived the Amazon® Echo as a device that can be used for teaching and learning and as a tutor or teacher’s assistant. Among survey respondents, only two students indicated that they had prior experience with Alexa. Findings also reported that 83% of preservice teachers “stated that Alexa could be used to tutor K-12 students, adding that the number one quality of AI is its ability to learn new things and to quickly find up-to-date answers noting that Alexa is more up-to-date than printed books (54.51%)” (Incerti, Franklin, & Kessler, 2017, p. 30). As for the custom apps that can be developed for Alexa, preservice teachers reported that it “could provide customized practice drills and custom learning apps (23.53%), teacher’s aide services

(17.65%), and interaction with teachers solely with the use of voice (10.29%)” (Incerti,

Franklin, & Kessler, 2017, p. 30). Lastly, 50% of trainee-teachers “indicated they would use Alexa with students completing homework in a variety of ways, including drill games for acquisition of content” (Incerti, Franklin, & Kessler, 2017, p. 30).

The study also asked preservice teachers to list their apprehensions or challenges in using Alexa in the classroom; overall, “preservice teachers reported 81 challenges (43.78%) that were technological in nature and 104 challenges (57.22%) that were user-related” (Incerti, Franklin, & Kessler, 2017, p. 32). Among the technology challenges, respondents reported “limitations of the Echo device in understanding or providing answers to preservice teachers and answering with accuracy (32.09%); WiFi connection (30.86%); selecting teaching applications (23.46%); cost of the device is prohibitive (11.11%); and low response speed (2.5%)” (Incerti, Franklin, & Kessler, 2017, p. 32).

Regarding specific classroom challenges, trainee-teachers noted that students might misuse or use the device inappropriately (30.77%), that the device might become a distraction from studies (27.88%), difficulties in learning how to use the device (16.35%), that the device may become a cause for arguments among children (12.50%), and that students “may learn to rely on technology instead of their own learning skills (10.58%). Lastly, teachers might not be able to design apps (1.92%)” (Incerti, Franklin, & Kessler, 2017, p. 32). The study also reports a surprising insight about the perceptions of preservice teachers; findings show that “43.27% of the challenges listed represent apprehension over the potential misconduct of students using the device instead of challenges that might arise from designing educational applications for Alexa” (Incerti, Franklin, & Kessler, 2017, p. 32).

During the 2017-2018 academic year, the University of Idaho (UI) ran the Echo Project, “an initiative to investigate perceptions and challenges related to integrating artificial intelligence in classrooms” (Dousay & Hall, 2018, p. 1414). This example of the use of the Echo in the classroom illustrated the positive impact of training teachers about the integration of innovations. Researchers used 90 Echo Dot devices and worked with teachers in four school districts that included approximately 900 students. The UI

Information Technology department helped school districts with the device setup, and teachers participating in this study were given resources on how to use the Echo Dot in the classroom.

At the beginning of the study, teachers were asked to participate in a day of professional development so that they could work with researchers to directly explore some of the instructional material (Dousay & Hall, 2018). The outcome of the study indicated that the most successful assimilations of the device occurred when teachers possessed a positive attitude toward the project and were committed to fully utilizing the potential of the Amazon® Echo Dot in the classroom. This research also suggests that future incorporation of passive AIs into educational settings may offer intriguing prospects for teachers, administrators, and students (Dousay & Hall, 2018). Lastly, the researchers asserted that one of the benefits of using an AI virtual tutor such as Alexa, rather than a tablet or laptop, is that this type of technology delivers faster and more comprehensive aid to its users. The AI tutor’s speed and versatility are most enhanced when the device is used simultaneously with other tools (Dousay & Hall, 2018). Alexa provides possibilities for teachers and administrators who are looking to improve classroom practices, or who want to address issues like classroom management (Dousay & Hall, 2018). The Amazon® Alexa can produce positive learning outcomes when teachers master basic knowledge of how to use the device in the classroom and are given instructional material (Dousay & Hall, 2018). Researchers noted that “teacher enthusiasm concerning integration of the Echo Dot and determination to utilize the device to its potential were integral to successful integration” (Dousay & Hall, 2018, p. 1417).

One of the most recent applications of Alexa in education is ProblemPal, an app that “can reduce the content generation burden on teachers and simultaneously provide personalized learning experiences” (Trivedi, 2018, p. 80). A student coding school at Watchung Hills Regional High School created ProblemPal to decrease the amount of time that teachers spend every week searching for instructional materials and producing prep exercises for students. The app was defined as “an Amazon Alexa Skill that enables teachers to automatically generate practice content with voice commands” (Trivedi, 2018, p. 80). With ProblemPal’s aid, teachers can ask about any subject and create questions for practice rounds on that subject. The machine learning behind this AI device produces “practice content, which is then automatically shared with students via a Google Classroom API integration” (Trivedi, 2018, p. 80). This app “is integrated with APIs from Wikipedia, Wolfram Alpha, and Khan Academy, allowing the creation of practice content for virtually any topic” (Trivedi, 2018, p. 80).
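Trivedi’s (2018) description implies a simple pipeline: parse a voice command into a topic, pull practice items from external content sources, and share the result with a class. The sketch below caricatures that pipeline in plain Python. It is hypothetical, not ProblemPal’s actual code; the function names and the dictionaries standing in for the Wikipedia, Wolfram Alpha, and Khan Academy APIs are invented for illustration.

```python
# Hypothetical sketch of a ProblemPal-style pipeline. All names are
# invented; the dictionaries stand in for real content-API lookups.

def generate_practice(topic, sources, n=3):
    """Collect up to n practice questions on a topic from whichever
    content sources know about it."""
    questions = []
    for source in sources:
        questions.extend(source.get(topic, []))
    return questions[:n]

def handle_voice_command(utterance, sources):
    """Crude intent parsing for commands like 'create practice for <topic>'."""
    prefix = "create practice for "
    lowered = utterance.lower()
    if not lowered.startswith(prefix):
        return None  # not a practice-generation request
    topic = lowered[len(prefix):].strip()
    return generate_practice(topic, sources)

# Stub content sources, keyed by topic (stand-ins for API calls).
wiki = {"photosynthesis": ["What gas do plants absorb?"]}
khan = {"photosynthesis": ["Name the pigment that captures light.",
                           "Where in the cell does photosynthesis occur?"]}

qs = handle_voice_command("Create practice for photosynthesis", [wiki, khan])
# qs now holds three questions that a real skill would push to the roster
# via something like the Google Classroom API.
```

In the real skill, the final step would be an authenticated call to the Google Classroom API rather than a returned list, but the control flow (command, topic, content, distribution) is the same.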

Alexa and Data Security and Privacy Risks

In recent years, one of the most controversial topics regarding the Amazon AI tutor Alexa is the issue of security and personal privacy. Since the device records and keeps a log of all interactions with users, it is easy to see how this information could be exploited to compromise users’ personal lives. In the past few years, the news has reported some alarming incidents. Some of these reports included flaws in smart speaker operating systems and bugs that Amazon fixed. However, owners of devices such as the

Amazon Echo, Google Home, and other commercial AI tutors should be made aware of the potential security risks associated with these types of devices (Sacks, 2018).

The applications on these types of devices are designed to start listening when triggered by a ‘wake word’ such as ‘Alexa’; however, they do not always hear this word correctly, and in those cases the devices start recording unintentionally. One of the most egregious cases of such a security breach was reported in Portland, Oregon, where a woman “had her private conversations secretly recorded by the voice-controlled Amazon virtual devices in her home and then sent to a random contact in Seattle” (Sacks, 2018, para. 1). After this incident, Amazon offered a plausible explanation. Understandably, the explanation did not fully reassure the public, which retains mounting concerns regarding the vulnerabilities behind these new AI technologies (Sacks, 2018). Experts on consumer privacy say that these types of devices “should do a better job at partitioning different types of information with different layers of security to prevent private information from being so easily shared” (Sacks, 2018, para. 15).
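The false-trigger problem can be mimicked with a toy sketch. Real wake-word detection runs acoustic models on the device rather than comparing strings, and the similarity threshold below is an invented stand-in; the point is only to show how a tolerant matcher accepts near-misses and thereby starts an unintended recording.

```python
# Illustrative only: real devices use acoustic models, not string matching.
import difflib

WAKE_WORD = "alexa"
THRESHOLD = 0.7  # invented tolerance; lower values mean more false triggers

def is_wake_word(heard):
    """Return True when the heard word is 'close enough' to the wake word."""
    ratio = difflib.SequenceMatcher(None, heard.lower(), WAKE_WORD).ratio()
    return ratio >= THRESHOLD

# The intended trigger fires, as it should:
is_wake_word("Alexa")    # True
# But a similar-sounding word also clears the threshold, so the device
# would begin recording a conversation no one meant to capture:
is_wake_word("Alexis")   # True (similarity ratio about 0.73)
# Unrelated words are rejected:
is_wake_word("banana")   # False
```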

Most of the more glaring examples reported in the news entail cases in which users of the Echo have connected home applications through APIs, meaning that they have added their contact lists or installed questionable third-party applications. For classroom use, some of these erratic behaviors could not occur unless the devices used in the classroom belonged to an individual rather than the school. Nevertheless, companies such as Amazon, Google, and Facebook, which have been in the public eye regarding the lack of privacy of their AI interacting technologies, must increase layers of security to prevent security breaches.

Another set of security concerns was expressed by researchers during a 2018 study run by the University of Idaho. The study explained that managing devices through Alexa for Business accounts, rather than personal accounts, is preferable for schools that want to adopt this technology, as they “may want or need access to this information for monitoring or other official purposes” (Dousay & Hall, 2018, p. 1416). Amazon charges a monthly fee for this type of account; however, “this approach also facilitates managing skills similar to current processes used to push apps out to iPads or Android tablets” (Dousay & Hall, 2018, p. 1416). Owning an Alexa for Business account may also help with potential barriers to integration related to WiFi settings.

Preservice Teachers’ Preparation

Among scholars, there are many concerns related to student teachers’ preparation.

Today, preservice teachers graduating from academic institutions that offer teacher training are expected to integrate digital classroom equipment into their teaching practices. However, studies report that teaching with the support of technology is not yet a common practice among beginning teachers (Mouza, Karchmer-Klein, Nandakumar, Yilmaz-Ozden, & Hu, 2014). Some scholars assert that there is a disconnect between learning about technologies in teacher-training courses and actually integrating technology once preservice teachers start their teaching careers (Ottenbreit-Leftwich, Glazewski, & Newby, 2010). According to researchers, this disconnect stems from a curricular problem.

Preservice teachers who enter teacher preparation programs need technology infused throughout the entire preservice curriculum (Tondeur, Braak, Sang, Voogt, Fisser, & Ottenbreit-Leftwich, 2012). Exposing preservice teachers to new technologies that can be found inside and outside the classroom helps them think about the variety of uses for digital innovations, and it gives them a chance to familiarize themselves with these devices at their own pace.

Grabill and Hicks (2005) articulate concerns about twenty-first century students’ need for digital literacies and the necessary changes in pre-service teachers’ preparation programs that such education mandates. For instance, they suggest revising the focal point of English education so that technology will generate new pedagogical practices.

Grabill and Hicks (2005) indicate that “functional, critical, and rhetorical literacies could become the new way in which we work as teacher educators and position ourselves as educational researchers” (p. 307). These diverse literacies should become a stronger aspect of preservice teachers’ preparation requirements (Grabill & Hicks, 2005). The problem reported in the literature over a decade ago is still current; more recent studies continue to report the need for preservice and in-service teachers to participate in technological and pedagogical training, and for guidance on how to help them effectively translate their skills into classroom applications (Cutucache, Leas, Grandgenett, Nelson, Rodie, Shuster, ... & Tapprich, 2017).

Preservice Teachers and Digital Adoption

Literature reports that learning outcomes with technology are negatively influenced by the inability of instructors to assimilate new technologies into academic courses (Hollins, 2015). In order to become effective teachers, trainee teachers need to master new and innovative technology (Levin & Wadmany, 2008). According to Lambert and Cuper (2008), “preparation of tomorrow’s teachers, however, does not depend solely on how well emerging technologies are incorporated into college coursework; instead, it rests on how well incoming teachers are taught to leverage the technologies to help their students develop these same skills” (p. 265).

Ottenbreit-Leftwich, Ertmer, and Tondeur (2015) “documented several areas in which teacher education programs may not be preparing teachers to be successful in the field” (pp. 1260-1261). Crompton (2015) asserts that training teachers is a methodical process that needs to include technology, subject content, and pedagogy. Preparation of preservice teachers is vital to the ongoing synthesis of new innovations in the classroom.

Preservice teachers “should complete their programs with technological knowledge, and also the ability to integrate this technological knowledge with the subject content and pedagogical practice to form a cohesive, effective practice” (Crompton, 2015, p. 83).

Therefore, if the goal is to instruct pre-service teachers to become effective educators, they will need to become practitioners who can effectively integrate technology into their classrooms (Crompton, 2015; Lambert & Gong, 2010).

We are heading toward a future where teachers will be able to make instructional decisions guided by AI systems. Teachers’ preparation needs to incorporate skills in using technologies to coach students with the help of information captured by emerging technologies (Gardner, 2013; Admoni, 2016; Luckin et al., 2016;

Reyes, 2015; Schindlholzer, 2016; Wixom et al., 2014). AI systems have the ability to detect patterns of learning and strengths and weaknesses in students’ understanding that teachers are not able to see, especially in classrooms where the teacher-student ratio is high (Luckin et al., 2016). Educational change typically occurs slowly, and the immediate feedback provided by systems that offer insights about students’ comprehension and application of subjects is essential to today’s teaching (Griffiths &

García-Peñalvo, 2016; Luckin et al., 2016). Teacher training must include a variety of technological landscapes, learning devices, and experiences that will contribute to a collective knowledge that teachers will be able to utilize in their teaching careers (Green,

Facer, Rudd, Dillon, & Humphreys, 2005; Luckin et al., 2016; Schindlholzer, 2016).

The 2016 Horizon Report (Johnson et al., 2016) indicated that “simply capitalizing on emerging technology is not enough” (p. 26). Teachers need to know how to use technology in the classroom and “must use these tools and services to engage students on a deeper level and ensure academic quality” (Johnson et al., 2016, p. 26).

Institutions have the responsibility to ensure that teachers, faculty and students are aware of the reasons for the integration of technology into their schools and curriculum. In this way, educators and students can be aware of what they need to learn to support this transformation. Through adaptation to these innovations and to the new experiences they offer, the habitual replication of non-technical methods of teaching and learning can be eradicated (Johnson et al., 2016). Lastly, institutions have the responsibility to ensure that

“when students and faculty are connected it is with the purpose of transformation instead of replicating experiences that could take place without technology” (Johnson et al.,

2016, p. 30).

It is also important that preservice teachers receive training on how to use and incorporate new technologies into the curriculum for teaching as well as for class preparation. Russell, Bebell, O’Dwyer, and O’Connor (2003) stress that preservice teachers prefer to use technology to prepare class materials and activities, while in-service teachers prefer using technologies in the classroom. A study that focused on preservice teachers’ perspectives on the integration of technology to teach science reported that “positive changes in beliefs and behaviors relating to technology integration in science instruction among preservice teachers are possible through explicit instruction” (Rehmat & Bailey, 2014, p. 744). The same study also reported that self-efficacy was one of the barriers to technology integration (Rehmat & Bailey, 2014).

Ultimately, it is reported that factors impacting technology adaptation for preservice teachers include teacher pedagogical beliefs (Han, Shin, & Ko, 2017; Hsu, Liang, Chai & Tsai, 2013; Kim et al., 2013), self-efficacy (Al-Awidi & Alghazo, 2012; Lemon & Garvis, 2016; Rehmat & Bailey, 2014), and attitudes towards technology (Teo, 2011).

The literature also includes a strong body of research pointing out a lack of training and proper preparation to teach with technology (Gavaldon & McGarr, 2019; Han, Shin, & Ko, 2017; Kormos, 2018; Sánchez-Prieto et al., 2017) and a lack of supportive environments (Gavaldon & McGarr, 2019; Kormos, 2018).

Generation Z

Comprised of people born between 1995 and 2005 (Taylor & Keeter, 2010),

Generation Z will include over 20% of the workforce in the next few years (Deloitte,

2017). While Baby Boomers are exiting the workforce, Gen Zs are taking over a large portion of the labor market that the previous generation used to occupy. This results in a broad shift in work culture and in our work environments (Solnet, Baum, Robinson, &

Lockstone-Binney, 2016).

Many members of Generation Z have never experienced life without the Internet.

Raised with instant access to data and information, they possess proficient multi-tasking skills and use a variety of internet-enabled devices including smartphones, tablets, laptops, and TVs (Kalkhurst, 2018; Zimmer, 2017). Often, Gen Z’s multiple-device usage occupies about 10 hours of their day, and “while Millennials used three screens on average, Gen Z students frequently use up to five. Most use a smartphone, TV, laptop, desktop, and a tablet” (Kalkhurst, 2018, para. 3). Consequences of their lengthy screen time and simultaneous use of several devices include short attention spans and distractions elicited by digital devices (Kalkhurst, 2018; Zimmer, 2017).

While Gen Z students are notorious for being social, they prefer texting to talking.

Therefore, “not all young people know how to learn in cooperative groups, and not all teachers know how to apply best practices when creating cooperative learning activities”

(Igel & Urquhart, 2012, p. 16). Electronics dominate their world, yet members of

Generation Z value face-to-face interaction and the opportunity to collaborate on projects

(Robertson, 2019). Recent research points out that today’s students want to be part of the learning process rather than being passive bystanders (Kalkhurst, 2018). Using technology to personalize learning experiences so that they are more engaging and relevant is one way to improve and enhance learning for Gen Z students (Thomas, 2016).

The introduction of a variety of innovative technologies in the classroom could also help these students connect with learning. For members of Generation Z, learning gains will not be achieved solely via lecture, but through projects, class collaboration, and other initiatives that place the learners at the center of learning.

Initial research on Gen Z learners reports that this young generation differs from previous generations; the connections in the brains of members of Generation Z are structurally different. Rather than a result of genetics, this difference is a response to their external environment (Rothman, 2016). Neuroscientist Rothman (2016) pointed out that “the brains of Generation Zs have become wired to sophisticated, complex visual imagery, and as a result, the part of the brain responsible for visual ability is far more developed, making visual forms of learning more effective” (Rothman, 2016, p. 2). For this reason, older teaching styles, such as lectures and discussions, are strongly disliked by this group of individuals; yet, “interactive games, collaborative projects, advance organizers, and challenges, are appreciated” (Rothman, 2016, p. 2). Visual learning is better suited for Gen Z students; web browsing and information overload have made the area of the brain related to visual ability much more developed. Therefore, these visual forms of learning are more effective and much more enjoyable (Bertagni,

2015).

The arrival of Gen Z students in higher education presents universities with unique challenges in preparing new teachers to educate them. Therefore, teaching these learners how to use digital applications to attain knowledge that can lead to the acquisition of skills must be part of our curricula. Recent research on Alexa corroborates that voice-activated devices, or AI tutors, will be more popular with younger students “because the modality of verbally asking a question, as opposed to typing something on a device, is more natural and will cause fewer interruptions in a student’s train of thought” (Horn & Thinkingabout, 2018, p. 83). This younger generation does welcome electronic innovations in the classroom. Yet, the rapid rate at which new applications are entering the classroom makes it difficult for teachers to learn how to use these digital innovations and to find ways of incorporating them into their classrooms and curriculum (Johnson et al., 2016). The overuse of digital applications in Gen Z’s lives creates an attitude of reliance upon technology that disrupts their acquisition of knowledge. They appear to develop a superficial understanding of information and the appearance of being up-to-date and informed. However, they are not truly knowledgeable

(Davou & Sidiropoulou, 2017). Since Gen Z learners are used to taking the availability of smart electronics in their everyday life for granted, they do not assimilate newfound information into their knowledge base and are thus unable to access it when needed

(Davou & Sidiropoulou, 2017). This indicates that their tendency is to invest little effort in the critical comprehension of information: they adopt a superficial approach to learning instead of an in-depth approach (Davou & Sidiropoulou, 2017).

Teaching in Different Socio-Economic Geographical Areas

It is well documented that there are significant gaps in measures of economic well-being among urban, suburban and rural counties (Parker, Horowitz, Brown, Fry, &

Cohn, 2018). This challenge extends to education: Public schools in rural, suburban, and urban areas are as unique and diversified as the communities that they educate. The disparity in funding among school districts has led to divisions among students, and this inequity has affected their access to technologies.

A recent study from Pew Research Center (2018) reports that “among rural and urban dwellers, those with more education are particularly likely to feel that others lack understanding of the types of problems people in their type of community face” (Parker et al., 2018, p. 42). The study also reports that “people in urban, rural and suburban areas who grew up in a different type of community are particularly likely to say they understand the problems faced by those who live in the type of community where they grew up” (Parker et al., 2018, p. 43). Statistics report that “81% of urban dwellers who grew up in a rural area say they understand the problems people in rural areas face, compared with 55% of current urban residents who grew up in an urban area and 48% of those who grew up in a suburb” (Parker et al., 2018, p. 43).

In rural areas, “poverty, geographic isolation, low teacher salaries, and a lack of community amenities seem to trump perks of living in rural communities” (Azano & Stewart, 2016, p. 108). In the literature, teaching in rural communities is described as the most challenging due to its diverse nature. A study run by Azano and Stewart in 2015 reports that it takes a unique sensibility to work with students from rural and urban areas (Azano & Stewart, 2015).

Some of the most remarked differences between rural communities and urban and suburban schools include lower quality of education, limited resources, difficulties with teacher recruitment, and teachers’ lack of qualifications and preparation to work in rural communities (Azano & Stewart, 2015; Miller, 2012). Preservice teachers who were brought up in these areas have a particular sensibility for working with students who come from the same areas. Additionally, teacher preparation courses do not prepare preservice teachers to deal with the challenges faced when teaching in different geo-economic and cultural areas (Azano & Stewart, 2015).

Literature that accentuates the contrasts between teaching in geographical areas such as urban, suburban, and rural areas reports a variety of challenges that encompass resources and culture (Azano & Stewart, 2015; Azano & Stewart, 2016; Kormos, 2018; Miller, 2012; Wachira & Keengwe, 2011; Zimmerle & Lamber, 2019). In studies that include in-service teachers in schools across geographical areas, researchers indicate that the use and perception of digital applications in classrooms differ between suburban and rural areas (Kormos, 2018; Zimmerle & Lamber, 2019).

Over 43% of “Ohio University's 2017 Freshmen have estimated family incomes of more than $100,000, and this percentage is higher than in 2007 and 2011” (Office of Institutional Research and Effectiveness, 2017, p. 15). Additionally, 15% of these Freshmen came from families with incomes between $80,000 and $100,000; 13% came from families with incomes between $60,000 and $80,000; and 29% came from families with incomes of less than $60,000 (Office of Institutional Research and Effectiveness, 2017, p. 17). When we compare these percentages to those of 2007 and 2011, we notice that an increasing number of Freshmen came from families with incomes greater than $100,000. This data is not surprising when the geographical areas of origin of the majority of Ohio University students are taken into consideration.

According to the Ohio University Office of Institutional Research, most students do not come from the rural community of Athens County, where the median income in 2019 is $37,191 and 28.8% of the population lives in poverty (Ohio University Office of Institutional Research, n.d.). The Ohio University Book of Facts – October 2019 reports that among in-state students, only 1,275 (7%) out of 18,610 come from Athens County; about 93% of the Ohio University student population comes not from the rural area of Athens County but from more affluent counties of Ohio (Ohio University Office of Institutional Research, n.d.). The majority of Ohio University students are from more affluent counties such as: Franklin County, where the median income is $56,319 and 16% of the population lives in poverty; Fairfield County, where the median income is $55,549 and 9% of the population lives in poverty; and Hamilton County, where the median income is $52,389 and 16.2% of the population lives in poverty. Lastly, there is Cuyahoga County, where the average income is $46,720 and 18.1% of the inhabitants are impoverished (United States Census Bureau, 2019a, 2019b, 2019c, 2019d).

Gender

Specific factors influence the adoption of technologies (or Information

Technologies) into the classroom. For instance, instructor characteristics such as age, years of teaching experience, attitudes towards digital machines and new teaching methods can impact technology use in education (Inan & Lowther, 2009; Teo, Fan & Du,

2015). Several publications report that gender differences among teachers affect the integration of technologies into their classrooms. These studies indicate that, between the sexes in the teaching profession, males are more likely to integrate digital tools and materials into their teaching methods (Buabeng-Andoh, 2012; Teo, Fan & Du, 2015). A study on Technological Pedagogical Content Knowledge (TPACK), however, reports higher TPACK values in female preservice teachers than in males (Karaca, 2015).

According to Teo, Fan, and Du (2015), there is a remarkable difference in technology adaptation between female and male preservice teachers. The outcomes of studies by Goktas, Yildirim, and Yildirim (2009) and by Russell and Bradley (1997) support these findings. Goktas et al. (2009) also found that gender significantly affects computer competence. Similarly, a study on computer competency between male and female teachers revealed that while access to computers and computer ownership significantly changed the degree of computer competency, the outcome of the study favored males (Russell & Bradley, 1997).

In more recent years, researchers on gender and the adaptation of technology have been reporting more mixed results. In the past, results from studies indicated that gender was a predictor for technology adaptation in education. However, in more recent studies, gender is often considered a non-significant predictor (Tondeur, Aesaert, Prestridge, &

Consuegra, 2018; Sánchez-Prieto, Olmos-Migueláñez & García-Peñalvo, 2017). A study run by Padmavathi (2016) reported that gender differences were not significant predictors of the adoption of technologies in teaching situations. The same study by

Padmavathi (2016) also reported differences in adoption by subject areas like

“Languages, Science, Mathematics, and Social Studies” (Padmavathi, 2016, p. 29).

Although researchers report mixed findings on this issue, this researcher will investigate the role of gender in the adaptation of AI tutoring systems like Alexa for formal and informal learning contexts.

Educational Change Models

The topic of technology integration and adaptation can be explored through a variety of frameworks that have implications for professional development. Specific to the field of education, theories like Rogers’ Innovation Diffusion Theory (1995) and Hall and Hord’s

(2010) Concern Based Adoption Model (CBAM) were developed. Other theories such as

Davis’ (1989) Technology Acceptance Model (TAM) and the Unified Theory of Acceptance and Use of Technology (UTAUT) developed by Venkatesh, Morris, Davis, and Davis

(2003) were specifically developed to answer questions about technology innovations, adoption, and diffusion. Some of these theories were initially developed for the computer science field (Straub, 2009) while others were developed for the agricultural field and then expanded to other areas of research (Rogers, 2003).

Over the past decades, we have seen many fast-paced technological advances increasingly entering our schools and our homes. Technology can help close learning gaps and provide innovative solutions for our educational challenges. However, no coherent, widely shared plan for understanding or adopting innovation has been cemented.

As educators, we are still in the process of understanding which implementations can be the most effective and engaging uses of technologies (Aesaert, Vanderlinde, Tondeur, &

Van Braak, 2013; Johnson et al., 2016).

The Diffusion of Innovation Theory

In 1962, Professor Everett Rogers constructed the Diffusion of Innovations (DOI) theory

(Rogers, 1962). While Rogers’ theory was developed in the field of agriculture, in the decades following the first publication of his book Diffusion of Innovations (Rogers,

1962), the theory was introduced into a variety of fields including medicine, political science, and education. Rogers’s theoretical framework includes four elements: the innovation, communication channels, time, and the social system. In more recent years, Rogers’ research became accepted as authoritative in technology adoption studies.

The latest version of his book includes the impact of the Internet on communication

(Rogers, 2003).

Rogers (2003) defined an innovation as “an idea, practice or object that is perceived as new by an individual or other unit of adoption” (p. 11). This term is often used synonymously with technology, which is described as the design for an instrumental action that clarifies the relationship between cause and effect concerning a desired goal.

Technology can be divided into two types of components: hardware and software.

Frequently, technology consists of clusters where multiple components are seen as closely related (Rogers, 2003).

According to Rogers (Rogers, 2003), innovations have five key characteristics:

1) “Relative advantage is the degree to which an innovation is perceived as being

better than the idea it supersedes” (p. 229).

2) “Compatibility is the degree to which an innovation is perceived as consistent

with the existing values, past experiences, and needs of potential adopters” (p.

15).

3) “Complexity is the degree to which an innovation is perceived as relatively

difficult to understand and use” (p. 15).

4) “Trialability is the degree to which an innovation may be experimented with on a

limited basis” (p. 16).

5) “Observability is the degree to which the results of an innovation are visible to

others” (p. 16).
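Purely as an illustrative aid (Rogers prescribes no numeric scoring; the 1-5 scale, the scoring rule, and the example ratings below are all invented), the five perceived characteristics can be captured in a small data structure whose favorability score rises with four of the attributes and falls with complexity:

```python
# Illustrative only: Rogers (2003) does not define a numeric scoring
# scheme; scale and formula here are invented for demonstration.
from dataclasses import dataclass

@dataclass
class PerceivedAttributes:
    """An adopter's 1-5 ratings of an innovation on Rogers' five characteristics."""
    relative_advantage: int
    compatibility: int
    complexity: int      # higher = harder to understand and use
    trialability: int
    observability: int

    def favorability(self):
        """Naive average favorability; complexity counts against adoption."""
        positives = (self.relative_advantage + self.compatibility +
                     self.trialability + self.observability)
        return (positives + (6 - self.complexity)) / 5

# A hypothetical rating of a classroom voice assistant.
echo = PerceivedAttributes(relative_advantage=4, compatibility=3,
                           complexity=2, trialability=5, observability=4)
score = echo.favorability()  # (4 + 3 + 5 + 4 + (6 - 2)) / 5 = 4.0
```

The inversion of complexity mirrors Rogers’ observation that, unlike the other four attributes, perceived complexity works against adoption.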

Technology Acceptance Model (TAM)

In 1989, Fred D. Davis, a Massachusetts Institute of Technology (MIT) doctoral student, proposed the Technology Acceptance Model (TAM) as the topic of his dissertation (Davis, 1989). His vision of TAM was inspired by Ajzen and Fishbein’s (1975) Theory of Reasoned Action (TRA). In the literature, TAM is considered one of the most commonly used models in the arena of technology acceptance. According to Davis, Bagozzi, and Warshaw (1989), TAM explains why users have a propensity to accept or reject new technologies. The framework provides a basis with which one can trace how external variables influence belief, attitude, and intention to use. Extensive literature reports use of the TAM model in the private sector (Gefen & Straub, 1997;

Igbaria, Guimaraes, & Davis, 1995) and in the educational field to study student acceptance of emerging technologies (Park, 2009). The TAM has also been utilized in mobile learning (Calisir, Gumussoy, Bayraktaroglu, & Karaali, 2014; Park, Nam & Cha,

2012) and in user acceptance of YouTube videos for procedural learning (Lee & Lehto,

2013).

In Davis’s (1989) model, the attitudes expressed in the TRA are replaced with two fundamental determinants: the perceived usefulness (PU) and the perceived ease of use (PEOU) of the system (Davis, 1989). Perceived usefulness is defined as the user’s belief that using the system will improve his or her job performance, while perceived ease of use is defined as the degree to which a person believes that using a particular system would be free of effort (Davis, 1989). The TAM emphasizes that system usage is determined by the user’s behavioral intention (BI), which is affected directly by the user’s attitude towards the system and its perceived usefulness, and indirectly by the perceived ease of use. In this model, both variables (PU and PEOU) directly affect the user’s attitudes towards the system, or adaptation, and PEOU has a direct effect on PU

(Lee, Cheung & Chen, 2005).
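The causal structure just described is often summarized as a set of linear relations. This is a standard textbook rendering rather than a formula quoted from Davis (1989) or from this study; the β coefficients are regression weights estimated separately in each study, X stands for the external variables, and the ε terms are residuals:

```latex
\begin{align}
PU &= \beta_{1}\,PEOU + \beta_{2}\,X + \varepsilon_{1}\\
A  &= \beta_{3}\,PU + \beta_{4}\,PEOU + \varepsilon_{2}\\
BI &= \beta_{5}\,A + \beta_{6}\,PU + \varepsilon_{3}
\end{align}
```

The last relation reflects the direct effects of attitude (A) and PU on behavioral intention (BI); PEOU influences BI only indirectly, through the first two relations.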

Several researchers, in their studies, have highlighted some limitations of TAM.

The measurement of self-reported use data is the most commonly reported limitation of TAM (Shroff, Deneen & Ng, 2011; Straub & Burton-Jones, 2007). In some studies, the type of participants or the choice of sample size were found to be limitations of

TAM. Lastly, certain studies revealed that findings from university student samples, or independent users, are difficult to generalize to whole populations (Aggorowati et al., 2012;

Koch, Toker & Brulez, 2011). Lee, Kozar, and Larsen (2003) conducted a meta-analysis of over 100 studies published in ICT journals and conference proceedings over the past 20 years.

Researchers discovered several limitations of TAM, including: the use of single measurement scales, single information systems, small sample sizes and short experience with the systems under study, little consideration of cultural differences, self-selection bias, one-time cross-sectional designs, and university environments.

Unified Theory of Acceptance and Use of Technology (UTAUT)

UTAUT was developed by Venkatesh, Morris, Davis, and Davis (2003) as a theoretical framework that attempts to explain and investigate why people choose to use certain technologies and how they develop their patterns of behavior when using such technologies. UTAUT was built upon eight theoretical models from social psychology and sociology (Venkatesh et al., 2003). This theory is comprehensive and measures individuals' Behavioral Intentions by drawing on a variety of models.

The conceptual model includes four sets of variables, or factors, of intention to use new technologies: Performance expectancy (PE), Effort expectancy (EE), Social

Influence (SI), and Facilitating Conditions (FC). UTAUT can describe “more of the variance in BI compared to the other eight theoretical models, such as TAM at 40%"

(Venkatesh et al., 2003, p. 471). This model is comprehensive and measures individuals’

Behavioral Intentions in different modes, as it includes multiple factors affecting users' intentions (Venkatesh et al., 2003). Ongoing research on the different versions of UTAUT continues to investigate its performance, and some scholars report inconclusive results, neither confirming nor discrediting the model (Nüttgens, Gadatsch, Kautz, Schirmer, Blinn, Dwivedi, … Williams, 2011).

Concerns-Based Adoption Model

This model has been used in a variety of settings since 1973, and since the 1980s the CBAM has been used in studies investigating the concerns of teachers adopting new technologies for classroom use (Cicchelli & Baecher, 1989; Heller & Marting, 1987;

Gershner & Snider, 2001; Liu & Szabo, 2009; Wedman & Heller, 1984).

Since the CBAM (Hall & Hord, 2006) is specifically constructed to measure technology integration in the educational arena and applies to anyone experiencing such changes, it was chosen for this research. It was used as a theoretical lens to explore the concerns of preservice teachers in adapting to standalone

AI tutors such as the Amazon® Alexa for formal and informal learning. The Stages of

Concern model as presented by Hall and Hord (2010) is part of a larger Concerns-Based

Adoption Model (CBAM), which includes seven stages as shown in Figure 4. The first stage, labeled Unconcerned, is one of no concern: The individual is concerned about other things. The second stage is Informational: At this stage, individuals would like to be more educated about the innovation. The third stage is Personal, in which individuals are concerned about how using the innovation will affect them. The fourth stage, called Management, pertains to the time required to prepare materials.

Figure 4. The Stages of Concern About an Innovation. From: George, A. A., Hall, G. E., & Stiegelbauer, S. M. (2006). Measuring implementation in schools: The stages of concern questionnaire. Austin, TX: SEDL, p. 8. Copyright 2006 by SEDL. Referenced and adapted with permission.


The fifth stage is Consequence, which concerns how the innovation will affect clients. The sixth is Collaboration, which relates to coordination and cooperation with co-workers. Lastly, the seventh is Refocusing, which pertains to ideas individuals have for making the innovation better (Hall & Hord,

2010) as shown in Figure 5.

Figure 5. Typical Expressions of Concern About an Innovation. From: George, A. A., Hall, G. E., & Stiegelbauer, S. M. (2006). Measuring implementation in schools: The stages of concern questionnaire. Austin, TX: SEDL, p. 4. Copyright 2006 by SEDL. Referenced and adapted with permission.

According to Ellsworth (2000), understanding the changes that educators experience in meeting the needs of students is important, because change is "an inherently systemic process and must be treated as such" (Ellsworth, 2000, p. 34). In order to prepare students for their future careers, it is necessary to help teachers transition through the changes they must make to become effective teachers. Technology infusion that results in positive learning outcomes cannot occur if teachers do not adequately prepare through ongoing professional development opportunities (Gunn &

Hollingsworth, 2013; Johnson et al., 2016). The CBAM (Hall & Hord, 2006) is ideal for capturing the perspective of teachers regarding an innovation, as it addresses three main dimensions through three analytical measurements: the individual's concerns about the innovation, the way in which the innovation is delivered or implemented, and the adaptation of the innovation as experienced by individuals. The intensity of the feelings and perceptions that individuals have about the innovation is collected with the

Stages of Concern (SoC) (George, Hall, & Stiegelbauer, 2006) measure. Effective professional development for educators should include the viewpoint of teachers combined with a deep understanding of teachers’ needs during their adaptations to an innovation (Duran, Brunvand, Ellsworth, & Sendag, 2011).

Ellsworth (2000) postulates that in practice, the CBAM (Hall & Hord, 2006) aims to answer the following questions: "What stages do teachers go through as an innovation is implemented?"; "What will be the major focus of their concerns at each stage?"; "What levels of innovation use are likely to be exhibited at each stage?"; "How do I identify which stage teachers are at right now?"; and "How do I assess the extent to which teachers are actually using the innovation as its developers intended?" (Ellsworth, 2000, p. 37). Unlike other models, the CBAM (Hall &

Hord, 2010) is mostly focused on change rather than the specific innovation.

Comparing theories, the Consequence stage in the CBAM (Hall & Hord, 2006) is similar to Rogers's Perceived Relative Advantage (Rogers, 2003). However, in Rogers's model, the individual reports a degree of perception concerning how the innovation will affect stakeholders. In the CBAM (Hall & Hord, 2006), there is no such distinction; the model focuses on how the innovation will affect teachers. Since the purpose of this research is to better understand preservice teachers' concerns regarding the integration and adoption of standalone AI tutor devices, the CBAM theory (Hall &

Hord, 2006) will be adopted as the theoretical framework for this study.

Chapter Summary

In this chapter, literature from scholarly articles pertaining to formal and informal learning theories, experiential learning theory, artificial intelligence, and applications of AI techniques from social, political, and educational points of view is reviewed.

Information regarding the Amazon® Echo and other similar devices is included, along with concerns about preservice teacher preparation for the use of educational technology, as well as data about these teachers' attitudes toward technology in general. Lastly, the chapter includes an overview of these technology adaptation theories: Diffusion of

Innovation, TAM (Davis, 1989), UTAUT (Venkatesh et al., 2003) and CBAM (Hall &

Hord, 2006) which is the theoretical framework used for this research.


Chapter 3: Methodology

This chapter describes the methodology of the study and the statistical procedure.

The focus of this research is to identify preservice teachers' concerns toward AI tutors like the Amazon® Alexa for use in formal and informal contexts. The study aims to understand preservice teachers' degrees of acceptance of this technological innovation as it relates to their Stages of Concern profiles. The CBAM theoretical model (Hall & Hord,

2006) will be used to determine the Stages of Concern of preservice teachers by utilizing the Stages of Concern Questionnaire. The following questions will guide this research:

1. What are preservice teachers' peak Stages of Concern (as described in the 35-item questionnaire) toward the implementation of AI tutoring systems such as the Amazon® Echo (Alexa) as a learning tool in formal and informal settings?

2. Are there significant relationships between preservice teachers' peak Stages of Concern and the factors of Grade to Teach, Teaching Geographical Area, and Gender?

Research Design

This researcher will conduct a non-experimental quantitative analysis using a predictive correlational design (Creswell, 2014), and this design will use the survey method of data collection. According to Creswell (2014), correlational research comprises two types of designs: explanatory and predictive. An explanatory design investigates the association among variables; in this type of design, the researcher focuses on the extent of co-variance between two variables (Creswell,

2014). When a survey or a questionnaire is used as the instrument for data collection, as in this research, scholars call the research survey research (Best & Kahn, 2006; Cohen,

Manion, & Morrison, 2007; Johnson & Christensen, 2010). This type of research is descriptive and will be applied using an online cross-sectional survey method to collect data at a single point in time. Cohen et al. (2007) identify a cross-sectional study as "one that produces a 'snapshot' of a population at a particular point of time" (p.

68). In this study, preservice teachers will be asked to participate after a face-to-face presentation of the Amazon® Alexa device.

The primary purpose of the study is to capture the preservice teachers’ perceptions and understanding about the adaptation of AI tutoring systems like the

Amazon® Echo (Alexa) for teaching and learning in formal and informal contexts. This quantitative study will be based on the research questions and objectives using the SoCQ

(George, Hall, & Stiegelbauer, 2006) cross-sectional survey instrument, which provides a measurable result of participants' concerns, attitudes, opinions, beliefs, and behaviors.

The SoC Questionnaire (George, Hall, & Stiegelbauer, 2006) will be the main instrument for the study, and it will quantitatively measure the preservice teachers' Stages of

Concern.

Quantitative Research

This study is considered quantitative because it will be based on numerical data collected using a questionnaire designed to return numerical values (Johnson &

Christensen, 2010). Johnson and Christensen (2010) indicate that one characteristic of quantitative design is the investigation of relationships among distinct variables.

Quantitative research methods offer several advantages over qualitative methods: numerical data ease comparisons among groups and clarify the extent of differences or corresponding patterns among participants (Yauch & Steudel, 2003).

Population and Sample

For the purposes of a study, a population is the entire group or unit sharing a set of characteristics that interest researchers (Best & Kahn, 2006; Creswell, 2015). In this study, preservice teachers enrolled in teacher preparation courses in The Gladys W. and

David H. Patton College of Education at Ohio University (OU) comprised the target population. Course enrollment data for the 2018-2019 academic year indicated that approximately 170 preservice teachers were enrolled in EDCT2030 during the Spring semester. The College of Education offered eight sections of this course, and the researcher collaborated with each instructor to collect data.

EDCT2030 is a required course for all teachers who want to be licensed to teach in Ohio. Students enrolled in this technology course are a cohort who come from an assortment of majors in education, including early childhood, adolescent to young adult, middle childhood, and others. EDCT2030 is specifically designed to introduce preservice teachers to technology use in the classroom and to present innovative strategies for using technologies for teaching and learning. Students in this course are taught how to effectively identify, locate, evaluate, design, prepare, and efficiently use educational technologies as instructional resources. Technology used in the classroom includes a variety of open source and licensed applications

(hardware and software) designed to enhance classroom instruction. The researcher for this study invited the participation of the entire population of preservice teachers at this college. The participants were undergraduate students enrolled in EDCT2030,

Technology Applications in Education. Since each participant must have 'professional' status, which occurs during their second year of college attendance, they were all over the age of 18.

Instrument and Measures

The instrument for this study is the SoC survey questionnaire (See Appendix E).

Researchers Hall, Wallace, and Dossett (1973) designed this survey at the University of

Texas Research and Development Center for Teacher Education. The instrument has been tested and refined into the 35-item SoCQ (George, Hall, & Stiegelbauer, 2006) from the answers of 363 teacher participants who completed the initial 195-item instrument in 1974. Between 1975 and 1976 it was used multiple times; "the designers explored several formats and methodologies before choosing the final structure.

The resulting SoCQ (George, Hall, & Stiegelbauer, 2006) was tested for estimates of reliability, internal consistency, and validity with several samples and 11 innovations”

(George, Hall, & Stiegelbauer, 2006, p. 11). These innovations were included in both cross-sectional and longitudinal educational studies to collect the concerns of teachers regarding innovations (George, Hall, & Stiegelbauer, 2006). The researcher of this study received a license (See Appendix C) to administer the SoC Questionnaire (George, Hall, & Stiegelbauer, 2006) and received permission to change the words in the original SoCQ

(See Appendix C) to include the Amazon® Alexa innovation. Based on the scale developers' suggestion, "we recommend replacing the words the innovation with a phrase they will recognize, such as the name of the innovation or initiative" (George, Hall, &

Stiegelbauer, 2006, p. 25). This researcher changed the term ‘innovation’ in the original survey to ‘Amazon Alexa’ (See Figure 6). 106

The Stages of Concern Questionnaire (SoCQ) (George, Hall, & Stiegelbauer,

2006) includes 35 concern statements reflective of the Stages of Concern about an

Innovation, the Amazon® Alexa. The SoCQ (George, Hall, & Stiegelbauer, 2006) is a self-reporting questionnaire designed to measure concerns associated with the implementation of new technologies in education.



Figure 6. Statements on the SoC Questionnaire Arranged According to Stage. From: George, A. A., Hall, G. E., & Stiegelbauer, S. M. (2006). Measuring implementation in schools: The stages of concern questionnaire. Austin, TX: SEDL, p. 27. Copyright 2006 by SEDL. Referenced and adapted with permission.

The questionnaire is a diagnostic tool designed to provide a quick-scoring measure of the seven Stages of Concern expressed by teachers about an innovation. The respondents are asked to read and to consider the degree to which each statement reflects their level of concern regarding an innovation. Respondents can select a number on a

Likert scale that ranges from 0 which is the low end of the scale (completely irrelevant) to 7 the high end of the scale (very true to me now) which indicates high concern. Each stage of concern is associated with five items on the survey. Dependent variables in this 108 study are the teachers’ seven Stages of Concern regarding the AmazonÒ Alexa standalone AI tutor Echo. In Figure 6, the Stages of Concern questionnaire are arranged according to stage. Each stage includes five questions.

Reliability

Cohen et al. (2007) indicate that "the Cronbach's Alpha provides a coefficient of inter-item correlations, that is, the correlations of each item, and is useful for multi-item scales" (p. 148). A Cronbach's Alpha can range from 0 to 1, with zero indicating low and 1 indicating high internal consistency. Literature affirms that an alpha value of .70 is considered low and a value of .90 is considered high; a Cronbach's Alpha of .80 is a reasonable benchmark for internal reliability (Cohen et al., 2007; Kline, 2000).

Hall and Hord (2006) report that for the SoCQ (George, Hall, & Stiegelbauer, 2006), "test/retest reliabilities range from .65 to .86" (p. 147) and the "α-coefficients range from .64 to .83" (p. 147). Drawing a parallel between the literature on the reliability of Cronbach's Alpha and Hall and Hord's (2006) findings, we can assert that the SoCQ (George, Hall, & Stiegelbauer, 2006) has "strong reliability estimates and internal consistency" (Hall & Hord, 2006, p. 147).
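As an illustrative aside, the α-coefficients reported above can be computed directly from item-level responses with the standard formula α = (k / (k − 1)) × (1 − Σ item variances / variance of the total score). The sketch below is not part of the study's procedure, and the five-item response data are hypothetical:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha: (k / (k - 1)) * (1 - sum of item variances
    / variance of the summed scale score)."""
    items = np.asarray(items, dtype=float)  # shape: respondents x items
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses of four people to one five-item stage scale
data = [[4, 5, 4, 4, 5],
        [2, 1, 2, 2, 1],
        [6, 6, 5, 6, 6],
        [3, 3, 3, 2, 3]]
print(round(cronbach_alpha(data), 2))  # → 0.98
```

Because each hypothetical respondent answers all five items similarly, the items covary strongly and alpha is high; responses scattered independently across items would drive alpha toward zero.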

From the initial 195 items, the developers of this instrument scaled the item number down to 35 by including only the items or statements that are highly correlated. Researchers "included a statement, or item, only if its responses correlated more highly with responses to other items measuring the same Stage of Concern than with responses to items for other stages" (George, Hall, & Stiegelbauer, 2006, p. 20). The developers reported that this rigorous selection ensures high internal reliability in the Stages of Concern Questionnaire. During development, they tested the instrument with a sample of 830 teachers, and in 1975 they tested the validity of the SoCQ (George, Hall, & Stiegelbauer, 2006) again with 132 respondents. Table 1 shows the coefficients of internal reliability for the Stages of Concern Questionnaire. The Cronbach's Alphas were calculated from a stratified sample of 830 teachers and faculty.

Table 1. Coefficient of Internal Reliability of the Stages of Concern Questionnaire (35 items, n=830, Fall 1974)

Stage 0 1 2 3 4 5 6

Alpha .64 .78 .83 .75 .76 .82 .71

Note: Table 1. From: George, A. A., Hall, G. E., & Stiegelbauer, S. M. (2006). Measuring implementation in schools: The stages of concern questionnaire. Austin, TX: SEDL, p. 20. Copyright 2006 by SEDL. Referenced and adapted with permission.

Researchers retested after two weeks, with 132 individuals asked to participate in the second test. Table 2 shows the test-retest correlations on the Stages of

Concern questionnaire.

Table 2. Test-Retest correlation on the Stages of Concern Questionnaire (n=132)

Stage 0 1 2 3 4 5 6

Correlation .65 .86 .82 .81 .76 .84 .71

Note: Table 2. Adapted from: George, A. A., Hall, G. E., & Stiegelbauer, S. M. (2006). Measuring implementation in schools: The stages of concern questionnaire. Austin, TX: SEDL, p. 20. Copyright 2006 by SEDL. Referenced and adapted with permission.


Table 3 presents the percentile scores from 830 teachers and university faculty and "the distribution of highest Stage of Concern within this sample" (George, Hall, & Stiegelbauer, 2006, p. 21). The developers emphasize that "without such a diverse group, it would not have been possible to obtain reliable estimates of the alpha coefficients and other characteristics of the SoCQ" (George, Hall, & Stiegelbauer, 2006, p. 21). The researcher of this study used the software package SPSS® to estimate the reliability of the SoCQ (George, Hall, & Stiegelbauer, 2006).

Table 3. Percent of Respondents’ Highest Stage of Concern, Initial Stratified Sample (n=830)

Stage 0 1 2 3 4 5 6

Percent 22 12 9 13 13 20 11

Note: Table 3. Adapted from: George, A. A., Hall, G. E., & Stiegelbauer, S. M. (2006). Measuring implementation in schools: The stages of concern questionnaire. Austin, TX: SEDL, p. 20. Copyright 2006 by SEDL. Referenced and adapted with permission.

Validity

In statistics, validity gauges how well an instrument measures what it is intended to assess. The developers of the SoC Questionnaire (George, Hall, & Stiegelbauer, 2006) assessed the validity of the instrument using Cronbach and Meehl's (1955) strategy, which examines how scores on the scales relate to one another and to other variables (Cronbach & Meehl, 1955). According to the developers, "intercorrelation matrices, judgments of concern based interview data, and confirmation of expected group differences and changes over time were used to investigate the validity of the SoCQ scores" (George, Hall, & Stiegelbauer, 2006, p. 12). The Stages of Concern

Questionnaire manual reports two examples of two-year longitudinal studies. The first study involved the faculty of two urban elementary schools in the same school district. Over a period of 5 weeks, the faculty's concerns about a new type of reading instruction were assessed. In the second longitudinal study, a single school faculty was assessed on their Stages of Concern towards establishing teams as a routine. Results indicated that both studies supported the hypothesized Stages of Concern theory, and that teachers' concern profiles "like this one add support not only to the validity of the SoCQ

Questionnaire, but also to the overall concern theory” (George, Hall, & Stiegelbauer,

2006, p. 20).

Exploratory Factor Analysis

Factor analysis is a statistical procedure used to analyze the interrelationships between several variables to reveal the common underlying relationships (Hair, Black, &

Babin, 2013). This analysis determines which variables group together to create a factor (Hair et al., 2013). Having one factor means that the items measure one construct (unidimensional). When more than one factor is present, the items measure multiple latent variables (multidimensional) (Johnson & Christensen, 2010). The number of factors represents the dimensionality of the data (Hair et al., 2013). In statistics, two types of factor analysis are reported: Confirmatory Factor Analysis (CFA) and Exploratory Factor

Analysis (EFA). EFA is usually employed for exploratory purposes, and CFA is used for confirmatory purposes (Costello & Osborne, 2005), such as confirming the pattern of factor structure in data (Suhr, 2006). This study employed exploratory factor analysis, because EFA is appropriate when developing or testing an instrument (Costello &

Osborne, 2005). Exploratory factor analysis is helpful when trying to understand the relationships between variables, and it can be used to identify the structure of relationships between the variables and the respondents (Hair, Black, Babin, Anderson & Tatham,

1998). Exploratory factor analysis requires a large sample. A large sample size is particularly helpful when the communality of an item is smaller than .4, when there are cross-loading items, and when there are factors with fewer than three items (Costello & Osborne, 2005).
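To illustrate the idea of latent factors underlying observed items, the following sketch generates six hypothetical items driven by two latent factors and recovers the number of factors using the Kaiser eigenvalue-greater-than-one criterion, one common EFA retention rule. The data are synthetic and not drawn from this study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: two hypothetical latent factors, each driving
# three observed items (e.g., survey questions), plus noise.
n = 300
f1 = rng.normal(size=n)
f2 = rng.normal(size=n)
items = np.column_stack([
    f1 + 0.3 * rng.normal(size=n),
    f1 + 0.3 * rng.normal(size=n),
    f1 + 0.3 * rng.normal(size=n),
    f2 + 0.3 * rng.normal(size=n),
    f2 + 0.3 * rng.normal(size=n),
    f2 + 0.3 * rng.normal(size=n),
])

# Eigenvalues of the item correlation matrix; the Kaiser criterion
# retains factors whose eigenvalues exceed 1.
corr = np.corrcoef(items, rowvar=False)
eigenvalues = np.linalg.eigvalsh(corr)[::-1]  # sorted descending
n_factors = int((eigenvalues > 1).sum())
print(n_factors)  # → 2: the two underlying factors are recovered
```

Here the first two eigenvalues each absorb the shared variance of one three-item cluster, while the remaining eigenvalues reflect only noise, so the criterion correctly retains two factors.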

Data Collection Procedure

The researcher obtained the license from the SEDL to use the copyrighted SoC

Questionnaire and supporting documentation to modify (replace the word innovation with

'Amazon Alexa'), use, and administer the SoCQ (George, Hall, & Stiegelbauer, 2006) in this research (see Appendix E). She also added a demographic survey, independent of the

SoCQ (George, Hall, & Stiegelbauer, 2006) items (see the bottom of Appendix E). She then contacted the coordinator of the EDCT2030 courses to collaborate on the data collection procedure for administering the questionnaire. Both the researcher and the coordinator agreed on a course of action: After obtaining consent to perform the data collection, an e-mail invitation was sent to each instructor detailing the study and explaining the expectations for the participants and for the researcher. All instructors of the seven sections of EDCT2030 received the content and instructions concerning the research. Only the sections whose instructors approved participation in the survey took part in data collection.

Instructors teaching EDCT2030 courses sent out a message from the learning management system Blackboard® to invite the students to participate in the survey, and also to inform them that their academic grades would not be affected in any way by their decision to participate or not participate in the research study. A direct link to the survey's introduction page was provided to preservice teachers in an email message and in the content area of the course in Blackboard®. In this way, only students enrolled in a specific section of the EDCT2030 courses were able to view the link and access the survey. Recent versions of the online software Qualtrics® integrate responsive web design, which opens access to the survey from a variety of platforms. Students could access the survey from school-owned computers and with their personal mobile devices.

Once the survey page loaded on their browsers, participants were greeted with a welcoming message that explained the study in detail, including the benefits and potential risks associated with their participation in the study as requested by the Institutional

Review Board (IRB) at Ohio University. At the bottom of this introduction page, individuals were asked to check the "Yes" checkbox and submit if they were willing to participate in the study. If students clicked the "No" checkbox and the "Submit" button, they were notified by the browser that they had exited the survey. This survey was used to collect the raw data for the research.

To perform the data collection, the researcher visited each section of EDCT2030 and gave a 25-minute face-to-face presentation about the Amazon® Echo. After demonstrating the device during the presentation, the researcher invited the preservice teachers to participate in the research. The researcher gave each student a copy of the

Informed Consent (See Appendix D), and after reading the consent, students who were willing to participate accessed Blackboard® on their devices and followed the link to the survey provided by the instructor. At the end of the 35-item survey, respondents were asked to add biographical data such as their degree, major, and gender.

Preservice teachers also had a comment section to include their thoughts, as well as an option to email the researcher if they wanted additional information about the research.

To help increase the response rate, within a week of the class presentation, the researcher worked with faculty to send the preservice teachers a follow-up email invitation to complete the survey. The email thanked the respondents who had already filled out the questionnaire for their feedback, and it advised the participants who had not filled out the survey that the link was still available in Blackboard® and that their participation was welcome and voluntary.

Ethical Considerations

The researcher obtained approval from the Institutional Review Board at Ohio

University to conduct research (IRB Approval No. 17-E384). The Board ensured that this study would meet all requirements and procedural standards for safe and ethical conduct of research involving human subjects (See Appendix A, B). Participation in the online survey was voluntary. The informed consent of study participants was distributed in paper format prior to data collection and was also made available on the introduction page of the survey and upon request. On the day of data collection, the consent form was read and distributed in paper format to all participants.

Piloting

Prior to launching the main data collection, it is recommended to address issues such as internal consistency, item readability and understanding, low response rate, and questionnaire optimization. To address these concerns, the researcher performed a pilot study structured in the same format as the primary data collection. A pilot study helps to determine "that the individuals in the sample are capable of completing the survey and that they can understand the questions" (Creswell, 2008, p. 390). Essentially, a pilot test is a small-scale preliminary investigation; a "procedure in which a researcher makes changes in an instrument based on feedback from a small number of individuals who complete and evaluate the instrument" (Creswell, 2008, p. 390).

The instrument was used to capture preservice teachers' thoughts and feelings regarding the Amazon Alexa. Participants were asked to respond to the items on the survey. After they completed the survey, the researcher asked students in the classroom about the survey's readability and about their own comments, questions, and impressions regarding the questionnaire. This pilot test was performed during the

Summer semester of 2018, and to increase the number of participants, one entire section of EDCT2030 was asked to participate. Twenty completed surveys were collected during the test. To answer the first research question, the researcher analyzed the preservice teachers' Stages of Concern Questionnaire responses following the guidelines in the SoCQ manual (George, Hall, & Stiegelbauer, 2006). The profile of the respondent group was generated by converting the raw score means for each of the seven Stages of Concern into percentile scores, based on the Stages of Concern Percentile Conversion Chart obtained from the CBAM's SoCQ manual (George, Hall, & Stiegelbauer, 2006).
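The raw-to-percentile conversion can be sketched as a simple table lookup. The chart values and respondent scores below are hypothetical placeholders; the actual percentiles come from the copyrighted Stages of Concern Percentile Conversion Chart in the SoCQ manual (George, Hall, & Stiegelbauer, 2006):

```python
# Sketch of converting a group's raw stage-score mean into a percentile
# score via a lookup chart. The chart excerpt below is hypothetical;
# the real values are tabulated per stage in the SoCQ manual.

# Hypothetical chart for one stage: raw score -> percentile
conversion_chart = {0: 1, 5: 10, 10: 30, 15: 55, 20: 75, 25: 90, 30: 96, 35: 99}

def to_percentile(raw_mean, chart):
    """Round the raw-score mean to the nearest charted raw score, look it up."""
    nearest = min(chart, key=lambda raw: abs(raw - raw_mean))
    return chart[nearest]

raw_scores = [12, 18, 9, 15]                    # hypothetical respondents, one stage
group_mean = sum(raw_scores) / len(raw_scores)  # 13.5
print(to_percentile(group_mean, conversion_chart))  # → 55
```

Repeating this lookup for each of the seven stages yields the seven percentile values that are plotted as the group's Stages of Concern profile.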

Individual profiles for the participants were developed from the analysis of the data collected during the pilot test. This process revealed that a high number of participants reported marginal interest in the innovation. This might be due to the inability of the researcher to connect the Amazon® Echo device to the University

Secured Wi-Fi Network to demonstrate the use of the device during the presentation.

Nonusers of the innovation usually report Peak Stages of Concern in Stages 0, 1, and 2;

"nonusers' concerns normally are highest on Stages 0, 1, and 2 and lowest on Stages 4, 5, and 6" (George, Hall, & Stiegelbauer, 2006, p. 37). The individual profiles followed these patterns for approximately 90% of the participants. The remaining 10% indicated peak Stages of Concern in Stage 5, which is an anomaly: The typical nonuser profile would not yield peak Stages of Concern in the fifth and sixth stages; these scores would usually be low. One reason for this anomaly might be the small sample of participants in the pilot test. Overall, respondents' individual profiles followed the patterns of nonusers of the Amazon® Alexa innovation, which further explained the high number of participants reporting marginal interest in the innovation. The small sample also prevented the researcher from carrying out the SPSS® procedures to determine differences in the respondents' concern profiles based on the variables identified from the biographical data collection.

Primary Data Collection

Both the pilot test and the primary data collection included a 25-minute presentation which gave preservice teachers a brief overview of emerging technologies such as artificial intelligence applications and the Amazon® Echo device powered by the Alexa voice assistant. This introduction included a brief explanation of the newer technologies used to develop the Echo and the techniques used to connect the Echo to a variety of other smart devices. It also described how technologies such as the Internet of Things and big data help Alexa understand spoken commands semantically: Alexa users can ask the device the same question using different words, or give the same commands with the words spoken in a different order (Amazon, 2019). Requests appropriate for educational purposes were also included, such as spelling a difficult word, asking Alexa for the meaning of a word, performing calculations, asking the circumference of the earth or the distance between cities, setting a timer, playing a song, and answering questions about history and government. More complex questions, such as "Which winter was the coldest in Cincinnati, Ohio?", were also addressed. During the final part of the presentation, the researcher described how she developed an Alexa app to help her study for the U.S. Citizenship test, explaining how the app aided her preparation for the verbal interview given to applicants who wish to become U.S. citizens.

Data Analysis Procedure

Data collected from the surveys were analyzed with the IBM® Statistical Package for the Social Sciences (SPSS®), Version 25.0. This software provided in-depth descriptive and inferential statistics to explore the concerns of preservice teachers. The data analysis began with descriptive statistics, defined as a comprehensive technique for summarizing, analyzing, and reporting scores, or a procedure that yields numerical and graphical (visual) results (Aronheim, Aron, & Coups, 2011; Best & Kahn, 2006; Field, 2013; Warner, 2008). The focus of descriptive analysis is to summarize and describe a group of numbers or scores (Aronheim et al., 2011).

Descriptive analysis included central tendency, variance, frequencies, cross tabulations, and standard deviation, which were instrumental in analyzing gender, age, teaching experience, school settings (urban, suburban, rural), and the major areas of the survey answered by the preservice teachers. Inferential analysis can be performed with multiple techniques that guide researchers in drawing conclusions and making generalizations based on the statistics collected from the research study (Aronheim et al., 2011; Best & Kahn, 2006). This analysis tested the hypotheses and the findings related to the sample or population, and it employed statistical tools to display scores, the associations between scores, and multiple-variable analyses (Creswell, 2008).

Table 4. Data Analysis as it Relates to Research Questions

   Research Question                        Procedure        Data Analysis
 1 What are preservice teachers' peak      Questionnaire,   Frequency, Mean,
   Stages of Concern (as described in      124 samples      Standard Deviation,
   the 35-item questionnaire) toward                        Graphs.
   the implementation of AI tutoring
   systems such as the Amazon Echo
   (Alexa) as a learning tool in formal
   and informal settings?

 2 Are there significant relationships     Questionnaire,   A two-way Multivariate
   between preservice teachers' peak       124 samples      Analysis of Variance
   Stages of Concern and the factors                        (Stages of Concern by
   of their Teaching Geographical Area,                     Grade to Teach and
   Grade to Teach, and Gender?                              Gender); Frequency,
                                                            Mean, Standard
                                                            Deviation, Cross
                                                            Tabulations, Boxplots.

                                                            A two-way Multivariate
                                                            Analysis of Variance
                                                            (Stages of Concern by
                                                            Teaching Geographical
                                                            Area and Gender);
                                                            Frequency, Mean,
                                                            Standard Deviation,
                                                            Cross Tabulations.

119

After confirming that the Cronbach's alpha values from the pilot test indicated acceptable reliability for the sample of the study, the researcher invited preservice teachers enrolled in EDCT2030 to rate the 35 questionnaire items on a Likert scale of intensity ranging from 0 (not related) to 7 (very high). Table 4 explains the procedures for data collection and data analysis used in the study. There were no univariate or multivariate outliers.

Descriptive statistics were utilized to analyze question one: What are preservice teachers' peak Stages of Concern (as described in the 35-item questionnaire) toward the implementation of AI tutoring systems such as the Amazon Echo (Alexa) as a learning tool in formal and informal settings? The peak Stage of Concern is the highest score among the seven Stages of Concern, and it can be identified by following the Quick Scoring Device in the SoCQ manual (George, Hall, & Stiegelbauer, 2006; Appendix G). The Quick Scoring Device is the scoring tool that the developers of the SoCQ provided in the manual. Following its step-by-step instructions, "the SoCQ responses are transferred to the device, entered into seven sub-scales, and each sub-scale is totaled. Then the seven raw sub-scale score-totals are translated into percentile scores and plotted on a grid to produce the individual's SoCQ profile" (George, Hall, & Stiegelbauer, 2006, p. 85). The individual SoCQ profile consists of seven scores that indicate the Stage of Concern profile for each preservice teacher. The scoring device can also be used to calculate percentile scores and group averages, and to output individual and group concern profiles (George, Hall, & Stiegelbauer, 2006). The researcher carried out the scoring with the SPSS® statistical package, calculating the raw score for each of the seven stages as well as frequencies, means, standard deviations, and cross tabulations.
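In outline, the scoring logic reduces to summing each five-item subscale and taking the highest total. The sketch below uses a placeholder item-to-stage key; the actual assignment of the 35 items to the seven stages is given by the manual's Quick Scoring Device (Appendix G).

```python
# Placeholder key mapping each stage to five item numbers; the real
# 35-item assignment comes from the SoCQ Quick Scoring Device.
ITEM_KEY = {stage: list(range(stage * 5 + 1, stage * 5 + 6)) for stage in range(7)}

def score_respondent(responses):
    """responses: dict of item number (1-35) -> 0-7 intensity rating.
    Returns the seven raw subscale totals and the peak stage."""
    raw = {stage: sum(responses[item] for item in items)
           for stage, items in ITEM_KEY.items()}
    peak = max(raw, key=raw.get)
    return raw, peak
```

The raw totals would then be converted to percentiles with the manual's conversion chart before plotting a profile.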

Research question 2 asked: Are there significant relationships between preservice teachers' peak Stages of Concern and the factors of their Teaching Geographical Area, Grade to Teach, and Gender?

The researcher used a two-way Multivariate Analysis of Variance (MANOVA) to determine whether there were statistically significant differences in means between and among the respondent groups on each peak Stage of Concern, with the Stages of Concern as the dependent variables (Stages 0-6) and Grade to Teach and Gender as the independent variables (IVs). A second two-way MANOVA examined the relationship between the peak Stages of Concern (DVs) and the independent variables Teaching Geographical Area and Gender (IVs).
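SPSS® carries out these computations internally. As an illustration of the statistic at the heart of MANOVA, the sketch below computes Wilks' lambda for the simpler one-way case with NumPy; it is a teaching sketch under simplified assumptions, not a substitute for the two-way models run in SPSS®.

```python
import numpy as np

def wilks_lambda(groups):
    """One-way MANOVA statistic: Lambda = det(E) / det(E + H).

    groups: list of (n_i, p) arrays of observations on p dependent
    variables (here, the seven Stage of Concern scores), one per group.
    """
    all_obs = np.vstack(groups)
    grand_mean = all_obs.mean(axis=0)
    # E: within-group (error) sums-of-squares-and-cross-products matrix
    E = sum((g - g.mean(axis=0)).T @ (g - g.mean(axis=0)) for g in groups)
    # H: between-group (hypothesis) SSCP matrix
    H = sum(len(g) * np.outer(g.mean(axis=0) - grand_mean,
                              g.mean(axis=0) - grand_mean) for g in groups)
    return np.linalg.det(E) / np.linalg.det(E + H)
```

A lambda near 1 indicates the group mean vectors are similar on all dependent variables jointly; values near 0 indicate strong separation between groups.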

Table 5. Data Analysis Performing MANOVAs

   Data Analysis Tool                       Dependent Variables   Independent Variables
 1 Two-way Multivariate analysis of        Stages 0-6            Grade to Teach,
   variance (MANOVA)                                             Gender

 2 Two-way Multivariate analysis of        Stages 0-6            Teaching Area,
   variance (MANOVA)                                             Gender

121

Andy Field (2009) asserted that "we can use MANOVA when there is only one independent variable or when there are several, we can look at interactions between independent variables, and we can even do contrasts to see which groups differ from each other" (p. 585). MANOVA can test for differences between groups on two or more dependent variables simultaneously while considering all dependent variables at the same time. Unlike the ANOVA, which is a univariate test, "MANOVA is designed to look at several dependent variables (outcomes) simultaneously and so is a multivariate test" (Field, 2009, p. 585). An alpha level of p < .05 was set for the MANOVA tests, which were followed up with univariate analyses and post-hoc tests for significance. George et al. (2006) noted that "correlations of high stages of concern scores with demographic data can lead to improved explanations and interpretations of concerns data" (p. 52).

Using a MANOVA rather than separate ANOVAs allowed for the control of Type I error rates across multiple statistical tests (McDonald, Seifert, Lorenzet, Givens, & Jaccard, 2002; Field, 2009). According to Bray and Maxwell (1982), other reasons for using MANOVA are when "the researcher is interested in the effects of treatments on several criterion variables individually, and the researcher is interested in the relationships among the p variates" (p. 341). Field (2009) claimed that "if separate ANOVAs are conducted on each dependent variable, then any relationship between dependent variables is ignored" (p. 586). This could have resulted in a loss of "information about any correlations that might exist between the dependent variables" (Field, 2009, p. 586). The purpose of this study was to explain the differences between the independent variables (demographics) on the dependent variables (Hinkle, Wiersma, & Jurs, 2003; Johnson & Christensen, 2010; Warner, 2008). Rather than using multiple t-tests to compare the means of variables, the researcher decided to use a MANOVA, thus lowering the risk of committing a Type I error (Carlson & Winquist, 2017).
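The Type I error argument can be made concrete with a line of arithmetic: testing the seven stage scores with seven separate tests at α = .05 inflates the familywise error rate to roughly 30%, which a single MANOVA avoids.

```python
alpha = 0.05   # per-test significance level
k = 7          # one separate test per Stage of Concern (Stages 0-6)

# Probability of at least one false positive across k independent tests
familywise = 1 - (1 - alpha) ** k
print(round(familywise, 3))  # → 0.302
```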

Interpretation of SoC Questionnaire Data

The SoCQ (George, Hall, & Stiegelbauer, 2006) data can be interpreted in depth or superficially. The SoCQ developers offer guidance in the manual: "the simplest form of interpretation is to identify the highest stage score (Peak Stage Score Interpretation). Examining both the highest and second highest stage scores (First and Second High Stage Score Interpretation) makes a more detailed interpretation possible" (George, Hall, & Stiegelbauer, 2006, p. 31). They assert that "analyzing the complete profile allows for the most sensitive interpretation of respondents' concerns (Profile Interpretation). A rich clinical picture can be developed by examining the percentile scores for all seven stages and interpreting the meaning of the highs and lows and their interrelationships" (George, Hall, & Stiegelbauer, 2006, p. 31). The group peak as well as individual peaks can be captured for comparison (George, Hall, & Stiegelbauer, 2006). The raw scores are totaled and converted to percentile scores using a conversion chart (see Appendix G) to construct profiles for individual participants and for the group.

Scores at Stage 0 (Unconcerned) indicate that preservice teachers are not placing a high priority or relative intensity of concern on the Amazon® Alexa. Stage 0 "does not provide information about whether the respondent is a user or nonuser; instead Stage 0 addresses the degree of interest in and engagement with the innovation in comparison to other tasks, activities, and efforts of the respondent" (George, Hall, & Stiegelbauer, 2006, p. 31). Additionally, "a low score on Stage 0 is an indication that the innovation is of high priority and central to the thinking and work of the respondent. The higher the Stage 0 score, the more the respondent is indicating that there are a number of other initiatives, tasks, and activities that are of concern to him or her" (George, Hall, & Stiegelbauer, 2006, p. 31).

For Stage 1 (Informational), a "high score in informational indicates that the respondent would like to know more about the innovation" (George, Hall, & Stiegelbauer, 2006, p. 31). Such respondents are not concerned with comprehensive information about the innovation; rather, they want an overall understanding of how the innovation would change their teaching approach, or of what learning about the Amazon® Alexa would involve. A high score in Stage 2 (Personal) indicates "ego-oriented questions and uncertainties. Respondents are most concerned about status, rewards, and what effects the innovation might have on them. A respondent with relatively intense personal concerns might, in effect, block out more substantive concerns about the innovation" (George, Hall, & Stiegelbauer, 2006, p. 31). A high score in Stage 3 (Management) demonstrates preservice teachers' concerns about the best use of the information and resources available for using Alexa in education for organization, management, or scheduling (George, Hall, & Stiegelbauer, 2006).

A peak score in Stage 4 (Consequence) shows that preservice teachers are concerned with how Alexa will impact their students. A high score in Stage 5 (Collaboration) indicates preservice teachers' focus on coordinating and collaborating with other instructors and stakeholders to ensure the innovation is used. Lastly, a peak in Stage 6 (Refocusing) shows that preservice teachers are thinking about ways to create more benefits from the Amazon® Alexa in education by replacing or changing some existing devices or practices with the innovation (George, Hall, & Stiegelbauer, 2006).

Figure 7. Hypothesized Development of Stages of Concern. From: George, A. A., Hall, G. E., & Stiegelbauer, S. M. (2006). Measuring implementation in schools: The stages of concern questionnaire. Austin, TX: SEDL, p. 27. Copyright 2006 by SEDL. Referenced and adapted with permission.

Ideally, a concern profile illustrates the hypothesized pattern of development over the change process. George, Hall, and Stiegelbauer (2006) assert that "if the innovation is appropriate and well designed and if there is adequate support for its implementation, an individual's concerns profile plotted over time should look like a wave moving from left to right" (p. 37). This pattern is depicted in Figure 7, Hypothesized Development of Stages of Concern.

Chapter Summary

This chapter included an overview and rationale of the research design, the methodology, and the analysis of the research questions designed to address the research problem. The chapter provided detailed information about the operational definitions of variables, the population, sampling procedures, and instrumentation; the statistical tools (two-way MANOVAs) and data analysis procedures; and the interpretation of the SoC Questionnaire according to the developers of the tool.


Chapter 4: Data Analysis and Results

The purpose of this study was to capture preservice teachers' impressions of the use of the Amazon® Alexa for teaching and learning at a large university in the Midwestern United States. Research Question 1 examined preservice teachers' peak Stages of Concern on the scale designed for the CBAM (Hall & Hord, 2006) theoretical framework (Stages 0-6). The researcher utilized two-way MANOVA tests to answer Question 2, examining relationships between the Stages of Concern as dependent variables and the categories of three independent variables: (1) Teaching Geographical Area, (2) Grade Level to Teach, and (3) Gender.

During the Spring of 2019, when the EDCT2030 course was offered in six sections, the researcher carried out the data collection. After presenting the Amazon® Echo to the preservice teachers enrolled in the course, the researcher invited the students to participate in the survey. The survey was opened in Blackboard® on the date of the presentation and remained available for 15 days (March 4, 2019 - March 18, 2019). During those two weeks, the professor who taught the course reminded the students twice a week to take the survey.

The sample population for this study comprised 160 preservice teachers (N = 160). Of this number, 134 individuals accessed the survey and 128 students completed it, an 80% response rate. The six incomplete surveys were removed from the analysis. Further examination of the completed surveys revealed that four participants were not planning to become teachers; these four surveys were also removed, leaving a total of 124 surveys for analysis.

Data Organization

As shown in Figure 8, the data are distributed across the seven Stages of Concern. Values differ for each stage according to the answers the sample provided on the surveys. Peak Stages of Concern were calculated from the raw data, per the scale developers' suggestion (George, Hall, & Stiegelbauer, 2006).

Figure 8. Descriptive Statistics of 7 SoC Raw Scores Used for Calculations

The sample of participants was 32% male (40 participants) and 68% female (84 participants); see Table 6.


Table 6. Biographical Information: Gender for n=124

Gender   Number of Participants   Percent   Valid Percent   Cumulative Percentage
Female   84                       67.7      67.7            67.7
Male     40                       32.3      32.3            100.0
Total    124                      100.0     100.0

Table 7. Biographical Information: Years of Service; Academic Standing; Age for n=124

Years of   Number of      Academic    Number of      Age   Number of
Service    Participants   Standing    Participants         Participants
0          86             Freshman    34             18    19
.5         2              Sophomore   60             19    36
1          16             Junior      24             20    42
1.5        1              Senior      6              21    18
2          24                                        22    6
3          2                                         23    3
4          2
5          1
6          1
8          1

Preservice teachers answered the question "Are you planning to teach in an urban, suburban, or rural area?" This yielded the following responses: 36 chose an "Urban" area; 66 chose a "Suburban" area; and 22 chose a "Rural" area. When asked "Which grade are you planning to teach?" preservice teachers gave the following responses.


Table 8. Biographical Information: Major and Grade to Teach for n=124

Major Number of Grade to Teach Number of Participants Participants

Early Education 34 Elementary Schools 34 Middle Childhood Education 15 Middle Schools 15 Integrated Education - 53 High School 53 Adolescence to Young Adult Special Education 22 K-12 22

Twenty-two students reported wanting to teach in Special Education (K-12); 34 chose to teach in an elementary school; 15 reported wanting to teach in a middle school; and 53 chose to teach in high schools (AYA). The researcher arranged the data by applying the categories of licensure offered at Ohio University's Gladys W. and David H. Patton College of Education and the categories of licensure used in the State of Ohio, as explained on the College's web site (Ohio University, n.d., para. 1). Each category had content majors developed to meet the needs of the specific licensure area. For example, Special Education was considered a K-12 degree due to the nature of the students taught within this area. Preservice teachers noted their major, their content area, and the grade they desired to teach; the concurrent examination of these three parameters placed each person within the four categories of licensure offered by the college. Preservice teachers would have to hold the corresponding Ohio licensure to teach in their chosen grade or content area.

Several cases emerged in which preservice teachers expressed a desire to teach both "Middle & High school," a choice that would not be permissible under the framework of the Ohio licensure laws. In order to classify each respondent into the correct teaching category, the researcher reviewed individual cases and considered answers about their major and content areas. This information identified which grade level a preservice teacher would be able to teach in accordance with licensing in the State of Ohio. In Ohio, education majors must select a grade band for licensure; if a second grade band is desired, they must take additional courses to meet that band's requirements.

Lastly, when asked "Please describe the environment where you will want to teach," the majority of preservice teachers answered "unknown" or "N/A." Other answers included adjectives and descriptions such as: "friendly place," "A welcoming environment," "diverse student body," "Private," "Fun, safe, valued, energetic, charismatic," "Warm," "Technology inclusive and positive environment," and "Young children, personal, friendly, and involved."

Validity of the Study Instrument

To test the construct validity of the SoCQ instrument, Exploratory Factor Analysis (EFA) was used to ensure that the instrument truly examines what it is intended to examine. The procedure investigated ten variables: the seven Stages of Concern (35 items), Gender (two items), Grade to Teach (four items), and Teaching Geographical Area (two items).


Table 9. Kaiser-Meyer-Olkin and Bartlett's Test

Kaiser-Meyer-Olkin Measure of Sampling Adequacy                   .744
Bartlett's Test of Sphericity    Approximate Chi-Square        411.054
                                 df                                 45
                                 Sig.                             .000

According to Coakes, Steed, and Dzidic (2006), the KMO should be greater than .60 for correlations among possible factors to be strong enough. As shown in Table 9, the Kaiser-Meyer-Olkin measure was comfortably above this threshold (KMO = .744), indicating that factor analysis was suitable for the collected data. Moreover, Bartlett's Test of Sphericity (Chi-Square = 411.054) was significant (p < .001). Based on these results, the data could be described as factorable, the assumptions about relationships among the variables were met, and the instrument was providing reliable scores (Field, 2009). Since the factors used in this analysis are related, oblique rotation is recommended (Field, 2009). This Exploratory Factor Analysis was performed using oblique Oblimin rotation with Kaiser normalization, and most of the items loaded suitably on the expected factors (see Appendix J).
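As a sketch of what SPSS® reports for Bartlett's test, the chi-square statistic can be computed directly from the determinant of the correlation matrix (a minimal NumPy sketch; the p-value would then come from a chi-square distribution with the returned degrees of freedom, which for the ten variables here is 10 × 9 / 2 = 45, matching Table 9).

```python
import numpy as np

def bartlett_sphericity(data):
    """Bartlett's test of sphericity on an (n, p) data matrix.

    Tests H0 that the correlation matrix is the identity (variables are
    uncorrelated, in which case factor analysis would be inappropriate).
    Returns (chi_square, degrees_of_freedom).
    """
    n, p = data.shape
    R = np.corrcoef(data, rowvar=False)
    chi_square = -((n - 1) - (2 * p + 5) / 6) * np.log(np.linalg.det(R))
    dof = p * (p - 1) // 2
    return chi_square, dof
```

A large chi-square relative to the degrees of freedom (as with 411.054 on 45 df here) rejects the hypothesis of an identity correlation matrix.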

Question 1 Results

The first research question for this study was: What are preservice teachers' peak Stages of Concern toward the implementation of an AI tutoring system, such as the Amazon® Echo (Alexa), as a learning tool in formal and informal settings? For this purpose, SPSS® Version 25.0 for Mac was utilized to sort and count the data according to the instructions in the manual Measuring Implementation in Schools: The Stages of Concern Questionnaire (George, Hall, & Stiegelbauer, 2006). For the quick scoring device in Appendix B of the manual, "by following the step-by-step instructions, the SoCQ responses are transferred to the device" and "entered into seven scales, and each scale is totaled. Then the seven raw scale score totals are translated into percentile scores and plotted on a grid to produce the individual's SoCQ profile" (George, Hall, & Stiegelbauer, 2006, p. 87).

The calculation of Stages of Concern raw scores and percentiles revealed the highest peaks and second highest peaks of the Stages of Concern. Results were inferred following the SoCQ (George, Hall, & Stiegelbauer, 2006) manual’s information for profile interpretation. Scale developers designated each Stage of Concern with a specific meaning.

Table 10. Descriptive Statistics – Highest and Second Highest Score.

                      Highest       Second Highest   Gender   Plan To   Area
                      Stage Score   Stage Score               Teach     Type
N          Valid      124           124              124      124       124
           Missing    0             0                0        0         0
Mean                  .59           1.94
Std. Error of Mean    .101          .116
Median                .00           2.00
Mode                  0             2
Std. Deviation        1.126         1.293
Variance              1.268         1.671
Range                 6             6
Minimum               0             0
Maximum               6             6
Sum                   73            240


The analysis of the 124 surveys on the CBAM (Hall & Hord, 2006) scale categorized 84 participants (67.7%) into the Unconcerned category, peak Stage of Concern 0. Stage of Concern 1, also called the Informational category, included 22 participants (17.7%). Peak Stage of Concern 2, the Personal category, included 12 participants (9.7%). The Management category, peak Stage of Concern 3, comprised 2 respondents (1.6%). Peak Stage of Concern 5, the Collaboration category, included 3 respondents (2.4%). Lastly, peak Stage of Concern 6, the Refocusing category, comprised one participant (0.8%). For the sample, n = 124, µ = 0.59, and SD = 1.126. See Figure 9, Preservice Teachers' Peak Stages of Concern (n = 124).
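The percentages above follow directly from the frequency counts reported for this sample; as a quick arithmetic check:

```python
# Peak-stage frequency counts for this sample (n = 124), from the analysis above
counts = {0: 84, 1: 22, 2: 12, 3: 2, 4: 0, 5: 3, 6: 1}
n = sum(counts.values())
percents = {stage: round(100 * c / n, 1) for stage, c in counts.items()}
print(n, percents)  # → 124 {0: 67.7, 1: 17.7, 2: 9.7, 3: 1.6, 4: 0.0, 5: 2.4, 6: 0.8}
```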

Table 11. Statistics - Highest Score by Stage.

Stages   Frequency   Percent   Valid Percent   Cumulative Percent
0        84          67.7      67.7            67.7
1        22          17.7      17.7            85.5
2        12          9.7       9.7             95.2
3        2           1.6       1.6             96.8
5        3           2.4       2.4             99.2
6        1           .8        .8              100.0
Total    124         100.0     100.0

The highest peak Stage of Concern was Stage 0: Of the 124 survey takers, 84 participants (67.7%) fell into this category. The SoCQ manual asserts that "Stage 0 scores provide an indication of the degree of priority the respondent is placing on the innovation and the relative intensity of concern about the innovation" (George, Hall, & Stiegelbauer, 2006, p. 33). Among the 84 responses in this category, 29 (34.5%) had lower Stage 0 scores, indicating "that the innovation is of high priority and central to the thinking and work of the respondent" (George, Hall, & Stiegelbauer, 2006, p. 33). However, the majority of these respondents, 55 (65.5%), recorded very high scores between 95 and 99.

Figure 9. Preservice Teachers’ Peak Stages of Concern (n=124).


Within Stage 0, high scores indicate little interest in the innovation (George, Hall, & Stiegelbauer, 2006). The manual explains that "the higher the Stage 0 score, the more the respondent is indicating that there are a number of other initiatives, tasks, and activities that are of concern to him or her" (George, Hall, & Stiegelbauer, 2006, p. 33). This suggests that respondents had little knowledge of the Amazon Alexa and that "the innovation is not the only thing the respondent is concerned about" (George, Hall, & Stiegelbauer, 2006, p. 33).

Figure 10. Levels of Use of the Innovation (Hall, Dirksen, & George, 2006, p. 5) From: George, A. A., Hall, G. E., & Stiegelbauer, S. M. (2006). Measuring implementation in schools: The stages of concern questionnaire. Austin, TX: SEDL, p. 36. Copyright 2006 by SEDL. Referenced and adapted with permission.


Applying the SoCQ (George, Hall, & Stiegelbauer, 2006) interpretation to the survey results indicated that this group of preservice teachers did not seem concerned about this device for learning. The second highest peak of concern, Stage 1 or the Informational Stage, comprised 22 participants (17.7%). In this case, the scale developers asserted that these preservice teachers were aware of this type of device and "would like to know more about the innovation" (George, Hall, & Stiegelbauer, 2006, p. 33). Furthermore, they explained that such respondents are not interested in an in-depth understanding of how the device works "but, rather, want fundamental information about what the innovation is, what it will do, and what its use will involve" (George, Hall, & Stiegelbauer, 2006, p. 33). This implies that participants would have liked to receive more basic information about the Amazon® Alexa rather than deeper knowledge of its usage (George, Hall, & Stiegelbauer, 2006). The third highest peak of concern, Stage 2, was the Personal Stage, comprising 12 participants (9.7%). Preservice teachers in this stage tended to be skeptical about how the Amazon® Alexa could improve their teaching; the scale developers asserted that these "respondents are most concerned about status, rewards, and what effects the innovation might have on them" (George, Hall, & Stiegelbauer, 2006, p. 33). Only 2 participants (1.6%) scored in peak Stage of Concern 3, the Management Stage, indicating that a marginal number of respondents expressed "intense concern about management, time, and logistical aspects of the innovation" (George, Hall, & Stiegelbauer, 2006, p. 33).



Figure 11. Preservice Teachers’ Peak Stages of Concern (n=124).

Stage of Concern 4, the Consequence Stage, showed no responses, suggesting that preservice teachers were not concerned about the impact of the Amazon® Alexa on students. These "considerations include the relevance of the innovation for students; the evaluation of student outcomes, including performance and competencies and the changes needed to improve student outcomes" (George, Hall, & Stiegelbauer, 2006, p. 8). Marginal results were also evidenced in Stages of Concern 5 and 6. A total of 3 respondents (2.4%) were tabulated into Stage of Concern 5, the Collaboration Stage; the SoCQ manual describes this stage as comprising individuals who tend to focus on coordinating and cooperating with other individuals in the adoption of the Amazon® Alexa for learning (George, Hall, & Stiegelbauer, 2006).

Only 1 participant (0.81%) was included in Stage 6, the Refocusing Stage. According to the scale developers, this preservice teacher's focus was "on exploring ways to reap more universal benefits from the innovation, including the possibility of making major changes to it or replacing it with a more powerful alternative" (George, Hall, & Stiegelbauer, 2006, p. 8).

Figure 12. Typical Nonuser SoCQ. From “Measuring Implementation in Schools: The Stages of Concern Questionnaire,” by A. A. George, G. E. Hall, and S. M. Stiegelbauer, 2006, Austin, TX: Southwest Educational Development Laboratory, p. 38. Copyright 2006 by SEDL. Referenced with permission.


Lastly, the SoCQ (George, Hall, & Stiegelbauer, 2006) manual also includes an explanation that spans multiple stages: when results are highest "on Stages 0, 1, and 2 and lowest on Stages 4, 5, and 6" (p. 37), the scores usually indicate that the participants are nonusers of the innovation.

Figure 12, Typical Nonuser SoCQ, shows a visual representation of typical results tabulated from a nonuser sample. Comparing the manual's representation of results from subjects who do not use the innovation with the representation of the sample collected for this research suggested that the preservice teachers may not have been aware of the capabilities of the Amazon® Echo to aid learning, and that they were somewhat concerned about other things (George, Hall, & Stiegelbauer, 2006). The second and third peak Stages of Concern were highest in Stages 1 and 2, from which, according to the manual, it can "be inferred that the individual is interested in learning more about the innovation" (George, Hall, & Stiegelbauer, 2006, p. 39).

According to George, Hall, and Stiegelbauer (2006), the highest and second highest peaks of the Stages of Concern should be taken into consideration for analysis. The tabulation of the second highest peaks showed that the highest values fell on Stage 2 (53 participants, 42.7%) and Stage 1 (42 participants, 33.9%). While these results gave an overview of the entire sample, the scale developers also explained that the highest and second highest peaks in each profile should be given strong consideration within the context of each individual's SoC Questionnaire.


Figure 13. Preservice Teachers’ Second Highest Peak Stages of Concern (n=124).

The tabulation of answers provided in the questionnaire follows in Table 12.

The analysis of the highest and second highest peaks of concern revealed a more meaningful account of the reasons why preservice teachers were marginally interested in the use of the Amazon® Alexa for learning in the classroom. A representation of the total second highest peaks of concern revealed a less skewed distribution of values (see Figure 13, Preservice Teachers' Second Highest Peak Stages of Concern (n = 124)).


Table 12. Stages of Concern Peaks – Highest and Second Highest.

First Highest Peak   Second Highest Peak   Number of Participants
0                    1                     33
0                    2                     35
0                    3                     9
0                    5                     5
0                    6                     2
1                    0                     2
1                    2                     16
1                    3                     1
1                    5                     2
2                    0                     3
2                    1                     6
2                    5                     2
3                    0                     1
3                    1                     1
5                    0                     1
5                    1                     1
5                    2                     1
6                    3                     1

When the two highest peaks of concern were evaluated together, the combination of Stage 0 and Stage 2 had the highest count, with 35 responses or 28.2% of the sample. This combination indicated that these preservice teachers were unconcerned with the Amazon® Alexa for learning in the classroom and were uncertain about the demands that the Amazon® Alexa might place on their time. Preservice teachers considered the pros and cons of using the device in the classroom and determined their "part in decision making and considering potential conflicts with existing structures or personal commitment" (George et al., 2006, p. 8). These potential conflicts include the personal commitment required to learn how to use the product, the expense of the device, and how school administrations might affect the integration of the device. According to George, Hall, and Stiegelbauer (2006), such concerns are usually tied to the costs associated with acquiring the device, the capability of the school and its infrastructure to support the integration, and administrative and teaching staff support for the innovation.


Figure 14. Preservice Teachers’ Highest & Second Highest Peak Stages of Concern.

The second highest count resulted from the combination of Stage 0 and Stage 1 as the highest and second highest scores, comprising 33 responses or 26.6% of the sample. This combination revealed that, although these preservice teachers were interested in learning more about the device, they were not very concerned about the Amazon® Alexa for classroom use. In this case, "any interest is in impersonal, substantive aspects of the innovation, such as its general characteristics, effects, and requirements for use" (George et al., 2006, p. 8).

The third highest count resulted from the combination of Stage 1 and Stage 2, respectively, as the highest and second highest scores, encompassing 16 responses or 12.9% of the sample. According to the scale developers, this combination connotes that these preservice teachers would like to know more about the Amazon® Alexa at an impersonal level and are unsure about the demands that learning to use the Amazon® Alexa may impose and about their own adequacy to meet those demands (George, Hall, & Stiegelbauer, 2006). Based upon this information, this group of preservice teachers was concerned about the time and resources that would be demanded by their role as users and promoters of the Amazon® Alexa in the classroom.

This combination showed that 12.9% of respondents were interested in learning more about the Amazon® Alexa but were concerned about potential conflicts, such as the personal commitment required to learn how to use the product and how school administrations might affect the integration of the device. According to the scale developers, these concerns are usually tied to the costs associated with acquiring the device, the capability of the school and its infrastructure to support the integration, and the support of fellow teachers and colleagues for the innovation (George, Hall, & Stiegelbauer, 2006). Lastly, it is important to point out that the CBAM (Hall & Hord, 2006) scale is comprised of seven stages, and that the scale developers grouped the stages into three main categories: Self, Task, and Impact. Results from this sample of preservice teachers indicated that the majority of participants were concerned with Self, which includes Stages 0 through 2 (George, Hall, & Stiegelbauer, 2006).

Question 2 Results

The second research question proposed was "Are there significant relationships between preservice teachers' peak Stages of Concern and the factors of Grade to Teach, Teaching Geographical Area, and Gender?"

Table 13. Question 2 - Items, Variables, Data Types, and Scale Types.

Item: Teacher's Concerns – CBAM Items (35 items, 5 per stage)
  Variable Type: Dependent variables – Stages of Concern (0–6)
  Data Type: Ordinal (continuous raw score, 0–35 per stage)
  Scale Type: Likert scale
  Coding: 0 = Unconcerned, 1 = Informational, 2 = Personal, 3 = Management, 4 = Consequence, 5 = Collaborating, 6 = Refocusing

Item: Gender
  Variable Type: Independent variable
  Data Type: Nominal/Categorical
  Scale Type: Multiple choice, single response
  Coding: 0 = Male, 1 = Female

Item: Grade to Teach (educational level participant plans to teach)
  Variable Type: Independent variable
  Data Type: Nominal/Categorical
  Scale Type: Fill in, single response
  Coding: 0 = Elementary (E), 1 = K-12 (K), 2 = Middle (M), 3 = AYA (A)

Item: Teaching Geographical Area (school location participant plans to teach)
  Variable Type: Independent variable
  Data Type: Nominal/Categorical
  Scale Type: Fill in, single response
  Coding: 0 = Urban (U), 1 = Suburban (S), 2 = Rural (R)


The demographic data was used to investigate whether the teachers' characteristics were associated with SoCQ Peak Stage scores. The SoCQ (George, Hall, & Stiegelbauer, 2006) was employed to calculate the raw scores for each stage, which were then converted into a graphical representation of the data, or concerns profile (Hall & Hord, 2014).

Two sets of two-way MANOVAs were utilized for the statistical analysis of the multiple dependent variables (seven Stages of Concern) and the three independent variables of (1) Gender, (2) Grade to Teach and (3) Teaching Geographical Area. This calculation created different linear combinations of dependent variables for each main effect or independent variable. Table 13 represents items, dependent and independent variables, the data types, the scale types, and the coding methods used to answer question two.

The researcher performed all statistical calculations with data from the raw score totals for each of the seven stages of the Stages of Concern Questionnaire. The raw scale score for each dependent variable (the seven Stages of Concern scores, Stages 0 to 6) was a continuous score that could range from 0 to 35. The total raw scores for each of the seven stages (dependent variables) were entered in IBM SPSS© V.25 as individual subject scores along with Gender, Grade to Teach, and Teaching Geographical Area (independent variables) (George, Hall, & Stiegelbauer, 2006).
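The stage raw-score totaling that precedes SPSS entry can be sketched as follows. The item-to-stage mapping shown here is hypothetical (the actual mapping is defined in the SoCQ manual); each stage sums five items rated 0–7, giving the 0–35 range noted above:

```python
# Sketch: totaling SoCQ raw scores per stage. Each of the 7 stages has
# 5 items rated 0-7, so each stage total ranges from 0 to 35.
def stage_raw_scores(item_responses, stage_items):
    """item_responses: dict of item number -> rating (0-7).
    stage_items: dict of stage -> its 5 item numbers (hypothetical map)."""
    return {stage: sum(item_responses[i] for i in items)
            for stage, items in stage_items.items()}

# Toy example: two stages with made-up item groupings
responses = {1: 7, 2: 6, 3: 5, 4: 7, 5: 6, 6: 1, 7: 0, 8: 2, 9: 1, 10: 1}
mapping = {0: [1, 2, 3, 4, 5], 1: [6, 7, 8, 9, 10]}
totals = stage_raw_scores(responses, mapping)   # {0: 31, 1: 5}
```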

MANOVA assumptions were tested. All dependent variables were continuous (interval and ratio), and the independent variables were categorical, independent groups. The sample was small, but each group contained cases: data was available for each Stage of Concern, and, although not all Stages of Concern showed peak areas, there was data available for analysis. The sample size was adequate. The data was examined for outliers.

There were no univariate outliers in the data, as assessed by inspection of boxplots. There was a linear relationship between the dependent variables, as assessed by scatterplot (Appendix I). Multivariate outliers were assessed by Mahalanobis distance (p > .001), and none were found. The test for multicollinearity reported no evidence of multicollinearity, as assessed by Pearson correlation (|r| < 0.9). The data was also assessed for multivariate normality and homogeneity of variance across groups.

Table 14. Gender and Plan to Teach Box’s Test of Equality of Covariance Matrices.

Box's Test of Equality of Covariance Matrices
Box's M    128.401
F          .940
df1        112
df2        7946.093
Sig.       .660

Box's Test of Equality of Covariance Matrices (evaluated at p < .001) was used to test the null hypothesis that the observed covariance matrices of the dependent variables are equal across groups. Results in Table 14 come from the design (Intercept + Gender + Plan to Teach + Gender * Plan to Teach) and show that the assumption of homogeneity of covariance matrices was not violated (p = .660) for Gender and Plan to Teach.


Table 15. Gender and Area Type Box’s Test of Equality of Covariance Matrices.

Box's Test of Equality of Covariance Matrices
Box's M    156.979
F          1.172
df1        112
df2        11373.867
Sig.       .105

Table 15 shows Box's Test of Equality of Covariance Matrices (evaluated at p < .001) for the design (Intercept + Gender + Area Type + Gender * Area Type). The assumption of homogeneity of covariance matrices was also not violated (p = .105) for Gender and Area Type.

MANOVA Procedure 1 - Gender and Geographical Area Type

The first 2 X 3 MANOVA was conducted with Gender and Area Type as the independent variables and the raw values of the peak Stages of Concern as the dependent variables. The results (see Table 16) of this model indicated that Gender, F(7, 112) = .310, p = .948, Wilks' Λ = .981, partial η² = .019, and Area Type, F(14, 224) = .706, p = .767, Wilks' Λ = .917, partial η² = .042, were not significant. Table 16 also shows that there was no statistically significant interaction effect, F(14, 224) = .901, p = .559, Wilks' Λ = .896, partial η² = .053.


Table 16. MANOVA Data Analysis - Multivariate Test – Area Type and Gender.

Multivariate Test – Area Type and Gender

Effect               Test            Value   F         Hypothesis df   Error df   Sig.   Partial Eta Squared
Intercept            Wilks' Lambda   .050    301.902   7.000           112.000    .000   .950
Gender               Wilks' Lambda   .981    .310      7.000           112.000    .948   .019
Area Type            Wilks' Lambda   .917    .706      14.000          224.000    .767   .042
Gender * Area Type   Wilks' Lambda   .896    .901      14.000          224.000    .559   .053
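The partial eta squared values reported for each effect follow directly from the Wilks' Lambda values via η² = 1 − Λ^(1/s), where s is the smaller of the number of dependent variables and the effect's degrees of freedom. A minimal sketch checking this relation against the Gender, Area Type, and interaction effects:

```python
# Sketch: converting Wilks' Lambda to partial eta squared,
# eta^2 = 1 - Lambda**(1/s), s = min(number of DVs, effect df).
def partial_eta_sq(wilks_lambda, n_dvs, effect_df):
    s = min(n_dvs, effect_df)
    return 1 - wilks_lambda ** (1 / s)

eta_gender = partial_eta_sq(0.981, 7, 1)  # Gender: 1 effect df
eta_area = partial_eta_sq(0.917, 7, 2)    # Area Type: 3 levels -> 2 df
eta_inter = partial_eta_sq(0.896, 7, 2)   # Gender * Area Type interaction
```

Rounded to three decimals, these reproduce the .019, .042, and .053 in the table above.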

MANOVA Procedure 2 - Gender and Grade Planned to Teach

The second 2 X 4 MANOVA was conducted with Gender and Plan to Teach as the independent variables and the raw scores of the peak Stages of Concern as the dependent variables. The results (see Table 17) of this model indicated that Gender, F(7, 110) = .604, p = .752, Wilks' Λ = .963, partial η² = .037, and Plan to Teach, F(21, 316.411) = 1.136, p = .309, Wilks' Λ = .812, partial η² = .042, were not significant. Table 17 also shows that there was no statistically significant interaction effect, F(21, 316.411) = 1.033, p = .423, Wilks' Λ = .827, partial η² = .061. The crosstabulations for all independent variables were analyzed to help determine possible reasons for the non-significant results of the MANOVA models.


Table 17. MANOVA Data Analysis - Multivariate Test – Plan to Teach and Gender.

Multivariate Test – Plan to Teach and Gender

Effect                   Test            Value   F         Hypothesis df   Error df   Sig.   Partial Eta Squared
Intercept                Wilks' Lambda   .059    251.112   7.000           110.000    .000   .941
Gender                   Wilks' Lambda   .963    .604      7.000           110.000    .752   .037
Plan to Teach            Wilks' Lambda   .812    1.136     21.000          316.411    .309   .042
Gender * Plan to Teach   Wilks' Lambda   .827    1.033     21.000          316.411    .423   .061

Results from the crosstabs suggest that less than ideal power made it difficult to detect any true differences among the variables in the models. Although the statistical power for the overall study was sufficient for n=124, some subgroups did not include large enough numbers of cases for differences among groups to be noticeable. For instance, the rural male subgroup did not contain enough cases to produce sufficient statistical power to provide distinct results. This suggests that low statistical power and small effect size could have affected the overall results of these models.

Since the Second Highest Stages of Concern scores (n = 124, M = 1.94, SD = 1.193) were the least skewed for the sample (see Figure 9. Preservice Teachers' Peak Stages of Concern), the researcher also performed crosstabs for Area Type and the Second Highest Peak of Concern variables.


Table 18. Crosstabulation for Area Type and Second Highest Peak Stages of Concern.

Area Type * Second Highest Stage Score Crosstabulation (Count)

             Second Highest Stage Score
Area Type    0    1    2    3    5    6   Total
R            2    7    8    4    1    0    22
S            4   25   27    4    5    1    66
U            2    9   17    3    4    1    36
Total        8   41   52   11   10    2   124
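The margins of Table 18 can be recomputed directly from its published cell counts, which is a quick internal-consistency check on the crosstabulation. A minimal sketch using only the cell values reported above:

```python
# Sketch: recomputing the row, column, and grand totals of Table 18
# from its published cell counts (Area Type x Second Highest Stage Score).
table18 = {
    "R": {0: 2, 1: 7, 2: 8, 3: 4, 5: 1, 6: 0},
    "S": {0: 4, 1: 25, 2: 27, 3: 4, 5: 5, 6: 1},
    "U": {0: 2, 1: 9, 2: 17, 3: 3, 5: 4, 6: 1},
}
row_totals = {area: sum(cells.values()) for area, cells in table18.items()}
col_totals = {s: sum(cells[s] for cells in table18.values())
              for s in (0, 1, 2, 3, 5, 6)}
grand_total = sum(row_totals.values())   # 124, the full sample
```

The recomputed margins match the published totals (22/66/36 by area, 124 overall), and the sparse cells (e.g., R at Stage 6) make the loss of power discussed below visible at a glance.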

Tables 18, 19, and 20 illustrate the loss of statistical power for certain subgroups, such as the Urban (U) Area Type for Stages of Concern 3, 4, 5, and 6 and the Rural (R) Area Type for Stages of Concern 5 and 6 (see Table 18). Table 19 illustrates the loss of statistical power for the Male (M) subcategory of Gender in Stages of Concern 0, 3, 5, and 6.

Table 19. Crosstabulation for Gender and Second Highest Peak Stages of Concern.

Gender * Second Highest Stage Score Crosstabulation (Count)

          Second Highest Stage Score
Gender    0    1    2    3    5    6   Total
F         6   25   35    9    7    2    84
M         2   16   17    2    3    0    40
Total     8   41   52   11   10    2   124


Table 20 illustrates the loss of statistical power for participants in Stages of Concern 0, 3, and 6 who plan to teach K-12 (K), and for preservice teachers in Stages of Concern 0, 3, 5, and 6 who plan to teach Middle (M) school.

Table 20. Crosstabulation for Plan to Teach and Second Highest Peak Stages of Concern.

Plan to Teach * Second Highest Stage Score Crosstabulation (Count)

                 Second Highest Stage Score
Plan to Teach    0    1    2    3    5    6   Total
A                3   19   21    5    3    2    53
E                3    9   16    3    3    0    34
K                1    9    8    1    3    0    22
M                1    4    7    2    1    0    15
Total            8   41   52   11   10    2   124

Chapter Summary

This chapter includes the results of the analysis of the data obtained during data collection. The data was screened for accuracy, outliers, normality, linearity, homoscedasticity, multicollinearity, and singularity. Research Question 1 concerned the examination of statistically significant differences in teachers' peak Stages of Concern on the scale designed for the CBAM theoretical framework (Stages 0-6) (Hall & Hord, 2006). The researcher employed the SoCQ scoring device (George, Hall, & Stiegelbauer, 2006). The chapter provides detailed information about the procedures and the applications of the SoC Questionnaire. The SoCQ manual (George, Hall, & Stiegelbauer, 2006) was used for scoring participant profiles to determine their Highest Peaks of Concern and Second Highest Peaks of Concern. Results showed the highest peaks at Stages 0 and 1 and the second highest peaks at Stages 1 and 2.

Question 2 concerned significant differences among Gender, Teaching Geographical Area, and Grade Level to Teach. To compare these variables with the Stages of Concern, the researcher utilized MANOVA. Accepted research and statistical techniques were used to address potential violations of assumptions. The first MANOVA model returned a non-significant interaction effect and non-significant main effects for Gender and Teaching Geographical Area on the seven Stages of Concern. The second MANOVA model, with Gender and Grade Level to Teach, also returned a non-significant interaction effect and non-significant main effects.


Chapter 5: Discussion and Conclusion

Life without technology is unimaginable (Hasib, 2014, p. 1).

Research Overview

This study at a Midwestern University provided information about the concerns of preservice teachers regarding the implementation of an instructional innovation, the Amazon® Echo, in the classroom for learning. In this chapter, findings are discussed as they relate to available research. Findings for each research question are examined, followed by conclusions and suggestions for future research.

Research Question 1 Results Interpretation

The SoC manual states that the first Peak of Concern reveals participants' highest concern about an innovation, while the second Peak of Concern offers additional information about why participants reported disinterest in the device (George, Hall, & Stiegelbauer, 2006). Combining the highest Peak of Concern with the second highest Peak of Concern offers additional information about why participants felt their selection about the innovation was appropriate (George, Hall, & Stiegelbauer, 2006). The data analysis revealed that the highest Peak Stage of Concern was Stage 0, which encompassed 84 participants (67.7% of the sample). The second highest Peak of Concern was Stage 2, consisting of 52 participants or 41.9% of the sample. Utilizing the procedure in the SoCQ manual (George, Hall, & Stiegelbauer, 2006), this researcher evaluated the surveys by combining the highest first and second Peaks of Concern. Of the 84 surveys with the highest Peak of Concern in Stage 0, 35 (41.7%) had the second highest Peak of Concern in Stage 2, and 33 (39.3%) had the second highest Peak of Concern in Stage 1. This sample of participants had the highest level of concern in Stage 0 (awareness), which revealed that they had limited interest in the use of the Amazon® Echo as a device for teaching and learning in the classroom.

In 2017, Incerti, Franklin, and Kessler ran a study at Ohio University whose findings indicated that preservice teachers were very enthusiastic about the idea of using the Amazon® Echo in the classroom and described the device as having great potential for teaching and learning (Incerti, Franklin, & Kessler, 2017). Conversely, the current 2019 study revealed that preservice teachers had limited interest in the use of the Amazon® Echo as a device for teaching and learning in the classroom.

One probable explanation for these inconsistent results may be found in a comparison of the timing of the two studies. At the conclusion of the 2017 study, the Amazon® Echo was still considered a novelty, since it was the only available commercial-grade artificial intelligence tutor of its kind. After its release in 2014, the Echo was a relatively unknown innovation that had no competitors until the end of 2016, when Google® Home was released. During the 2017 study, when the Amazon® Echo was presented to the classroom and students were asked if they had seen the device before, only a few students raised their hands. The data collection for the current study took place in 2019. By this time, most students had seen the device, and at the end of the presentation, some students disclosed that they owned an Echo. These individuals told the researcher that they had never thought of it as a classroom device.

Between January 2018 and January 2019, sales of electronic tutors like the Amazon® Echo rose 39.8%, and positive as well as negative reviews of these electronics became available to the masses (Hartung, 2017; Kinsella & Mutchler, 2019). Over the past two years, the additional growth of artificial intelligence tutors in the marketplace led Amazon® to spend millions on advertising and improvements in order to boost Alexa's popularity. Today it is almost a common household item (Amazon, 2019; Hartung, 2017; Kinsella & Mutchler, 2019), positioned as a personal assistant rather than a personal AI tutor.

As reported in the 2018 study from the University of Idaho, students and teachers who were familiar with using Alexa outside the classroom had difficulties accepting the device as an AI tutor for classroom use. Dousay and Hall state that "the primary issue for these individuals involved adjusting to how they might use the device for learning rather than at home" (Dousay & Hall, 2018, p. 1417). In the current study, preservice teachers' familiarity with this kind of technology as a domestic assistant may have impacted their acceptance of the technology as an innovation for learning. Findings of this study suggest that the majority of preservice teachers would have liked more information regarding the Amazon® Alexa. This may indicate that limited knowledge of the device as an electronic tutor caused these preservice teachers to question the utility of incorporating the innovation into their curriculum and classrooms.

Specific results from the overall study reported the highest values in Stages of Concern 0, 1, and 2. According to the scale developers, when results are highest in Stages 0 and 1, the participants would like "more information about the innovation" (George, Hall, & Stiegelbauer, 2006, p. 53). Additionally, they "have intense personal concerns about the innovation and its consequences for them" (George, Hall, & Stiegelbauer, 2006, p. 53), and "While these concerns reflect uneasiness regarding the innovation, they do not necessarily indicate resistance" (George, Hall, & Stiegelbauer, 2006, p. 53).

George, Hall, and Stiegelbauer (2006) interpreted such results to indicate that participants had apprehensions about the personal commitment required to learn how to use the product, the expense of the device, and the methods the school administration might use to integrate the new technology. Given the marginal interest in using the Amazon® Alexa in the classroom reported by participants, the scale developers suggest that these participants may be interested in learning more about the device, but not necessarily for classroom use (George, Hall, & Stiegelbauer, 2006). Fuller's explanation for such findings is that education students who lack teaching experience rarely have specific concerns about teaching itself (Fuller, 1969). According to Fuller, "this pre-teaching period seemed to be a period of non-concern with the specifics of teaching, or at least a period of relatively low involvement in teaching" (Fuller, 1969, p. 219).

According to the scale developers, the focus of preservice teachers may be best expressed by two questions. With the first, "Where do I stand?," "teachers are trying to gauge how much support they will have from their supervising teachers and principals and the limits of their acceptance as professionals within the school" (George, Hall, & Stiegelbauer, 2006, p. 3). The SoCQ (George, Hall, & Stiegelbauer, 2006) responses regarding the use of the Amazon® Alexa revealed that this sample of preservice teachers was concerned about potential obstacles of this nature. With the second question, "How adequate am I?," "teachers are expressing concerns about their ability to deal with class control, their general adequacy, and their preparedness to handle the classroom situation" (George, Hall, & Stiegelbauer, 2006, p. 3).

According to Hall and Hord (2014), preservice teachers are also more likely to have their highest concerns in the "self-concern" stages near the beginning of their careers, and professional development targeted at these concerns affects their level of intensity. A similar study on the integration of technology in the classroom, evaluated with CBAM, reports that while the age of the teachers influenced the adoption of technology, years of service greatly impacted technology integration (Gu, Zhu, & Guo, 2013).

Additional Considerations from Literature

Extensive literature also suggests that preservice teachers who have limited preparation to work with technology may have increased concerns regarding their level of self-efficacy when required to use technology in the classroom (Gu, Zhu, & Guo, 2013; Lemon & Garvis, 2016; Wang, Ertmer, & Newby, 2004). Providing preservice teachers with innovative technology tools would probably not lead to integration unless they were also instructed on how to use them to support learning (Gu, Zhu, & Guo, 2013; Mishra & Koehler, 2006; Orlando & Attard, 2016). Elucidating this point, Dousay and Hall's 2018 study at the University of Idaho reported that "teacher enthusiasm concerning integration of the Echo Dot and determination to utilize the device to its potential were integral to successful integration" (Dousay & Hall, 2018, p. 1417). Other suggestions offered in that study could improve the adoption of the Amazon® Echo in the classroom. For instance, Dousay and Hall (2018) held a professional development session for one group of teachers in which "the design of the session was based on hands-on application and teacher play" (Dousay & Hall, 2018, p. 1417). While the researchers affirm that teachers' enthusiasm about the tutor was paramount to the integration of this technology in their classrooms, professional development helped teachers envision how to use the device as a learning technology instrument.

The level of competition for teachers' attention in the classroom is high. Teachers who mistrust technology may view newer devices as fleeting applications whose mastery requires too much of their time and effort (Oğuzhan, 2019). The preservice teachers in the current study had not been previously exposed to AI technologies, and their lack of familiarity with new devices may increase the learning curve and slow the adaptation rate (Gavaldon & McGarr, 2019; Kormos, 2018). The CBAM and Peak Stages of Concern results point out that, at this time, preservice teachers are skeptical about the adoption of AI tutors, such as the Amazon® Alexa, in the classroom, and they question their own ability to integrate this new technology.

Methods to help change future educators' attitudes toward AI technologies require more in-depth exploration: They are crucial to teacher professional development and teacher preparation courses. Especially in the first few years of teaching, there are a number of challenges that teachers must overcome, and adequate preparation for the use of technology in the classroom can help new in-service teachers feel more confident in their professional skills and in their new position. As Clausen (2007) explains, "First-year teachers however, begin their careers with a host of developmental and contextual issues that create potential challenges and may affect whether they use technology with their students" (Clausen, 2007, p. 246).

The literature reports that training and professional development are two key elements for technology adoption in education (Gavaldon & McGarr, 2019; Kormos, 2018). It is also important to re-evaluate teacher preparation courses to determine whether adequate time is spent training preservice teachers to use newer technologies, such as AI tutors for classroom use and newer commercial devices that have multiple uses inside and outside of the classroom.

Research Question 2 Results Interpretation

The data analysis from the two-way MANOVAs attempted to answer the question: Are there significant relationships between preservice teachers' peak Stages of Concern and the factors of Grade to Teach, Teaching Geographical Area, and Gender?

The models tested with the MANOVA analyses indicated non-significant differences among Gender, Grade to Teach, Teaching Geographical Area, and the reported seven Stages of Concern. This suggests that preservice teachers' gender, the grade they plan to teach according to their concentration, and the geographical area where they plan to teach were not contributing factors in the answers preservice teachers provided on the SoCQ (George, Hall, & Stiegelbauer, 2006) regarding the Amazon® Echo. Results from the two MANOVA models may have been affected by small effect size and loss of statistical power.

However, the literature reports that gender, geographical area, and grade to teach impact technology adoption in education in some contexts but are not determining factors. This indicates that, for the current sample, the insights derived from these findings could be valid and consistent with the findings of other studies. However, further research is needed to provide conclusive results.

Findings from the current study indicate that gender is not a factor in predicting adoption of the Amazon® Echo in the classroom for teaching and learning. The recent body of literature on gender and technology adoption shows that the gap between genders is closing, supporting the conclusion that gender differences are not significant predictors of the adoption of technologies in teaching situations (Padmavathi, 2016; Tondeur, Aesaert, Prestridge, & Consuegra, 2018; Sánchez-Prieto, Olmos-Migueláñez, & García-Peñalvo, 2017). Findings from these studies are consistent with the findings from the current study.

Regarding the grade to teach as a factor that influences technology adoption, a large body of literature on technology adoption across grades points out that innovations can be successfully implemented across grades (kindergarten, elementary, middle school, and high school) (Al-Awidi & Alghazo, 2012; Hammond, Fragkouli, Suandi, Crosson, Ingram, Johnston-Wilder, & Wray, 2009; Gavaldon & McGarr, 2019; Han, Shin, & Ko, 2017; Hsu, Liang, Chai, & Tsai, 2013; Kim et al., 2013; Kormos, 2018; Lemon & Garvis, 2016; Maninger, 2007; Rehmat & Bailey, 2014).

Many studies on the subject of technology adoption in the classroom indicate that preservice teachers' lack of expertise in the field impairs their judgement about the adoption of emerging technologies for the classroom (Campbell, Heller, & Sutter, 2019). Findings from a recent study articulate that preservice "teachers may be aware of emerging technologies but not how, when, or why to integrate them into learning" (Campbell, Heller, & Sutter, 2019, p. 1494). These findings are also consistent with the views of SoCQ scale developers George, Hall, and Stiegelbauer (2006) and of Fuller (1969), who over 50 years ago suggested that opinions from in-service teachers tend to focus on how innovations can constructively impact students' learning and the practical aspect of getting materials ready, whereas preservice teachers tend to focus on self-efficacy.

In the current study, the combination of the Highest Peak Stage of Concern and Second Highest Peak Stage of Concern revealed that preservice teachers' main adoption concern was their own self-efficacy. A large body of literature indicates that preservice teachers find it difficult to integrate technology into the classroom due to lack of preparation, training, and self-efficacy. This reveals that teacher preparation programs still need to include hands-on components for the technology tools that teachers need to use inside the classroom (Al-Awidi & Alghazo, 2012; Anderson & Maninger, 2007; Gavaldon & McGarr, 2019; Han, Shin, & Ko, 2017; Hammond et al., 2009; Hsu et al., 2013; Kim et al., 2013; Kormos, 2018; Lemon & Garvis, 2016; Rehmat & Bailey, 2014).

Although literature on the impact of geographical area on preservice teachers' opinions of technology innovation adoption in classrooms is limited, the review of literature does suggest that students raised in a specific geographical area are more likely to be hired by schools in that same area, and preservice teachers are more likely to select jobs in the geographical location where they were raised (Azano & Stewart, 2016; Miller, 2012). A study run by Texas Christian University in 2001 asked preservice teachers to state where they would feel most comfortable teaching after graduating. The trainee teachers answered that they felt "comfortable teaching in schools in which their own backgrounds matched those of the students" (Groulx, 2001, p. 75). From this we can infer that the preservice teachers who participated in this study and chose to work in rural (18%), suburban (53%), or urban (29%) area schools likely did so because they were raised in similar communities (Azano & Stewart, 2015; Azano & Stewart, 2016; Miller, 2012; Groulx, 2001). The OU population includes over 43% of students coming from more affluent families with incomes above $100,000, while only 29% of students come from families with incomes below $60,000 (Ohio University Office of Institutional Research, n.d.).

Lastly, the MANOVA model comparing relationships between Grade to Teach and the seven Stages of Concern concluded that the preservice teachers’ choice of grade level to teach is not a predictor for the integration of the AmazonÒ Echo in their classrooms. A large body of literature on this topic points out that technology innovations can be successfully implemented across grades (kindergarten, elementary, middle-school, and high school), with success mostly derived from teacher pedagogical beliefs (Han,

Shin & Ko, 2017; Hsu et al., 2013; Kim et al., 2013), teachers’ training (Gavaldon &

McGarr, 2019; Kormos, 2018), teachers’ self-efficacy (Al-Awidi & Alghazo, 2012;

Anderson & Maninger, 2007; Hammond et al., 2009; Lemon & Garvis, 2016; Rehmat &

Bailey, 2014), and a supportive environment (Gavaldon & McGarr, 2019; Kormos, 2018).

Applying these findings to teaching Generation Z students, it is important to point out that they take a different approach to learning, one directed not specifically at

AI tutors but at technology in general. They learn new technologies when they need them, and on their own terms. If they find a technology valuable, they apply it; if they do not, they discard it (Davou & Sidiropoulou, 2017). It is difficult to motivate this younger generation to use a particular technology in the classroom, and equally difficult to manufacture enthusiasm and willingness to adopt these new technologies.

Surprisingly, preservice teachers participating in this study showed awareness of the importance of a supportive environment by providing the following answers to the question “Please describe the environment where you will want to teach?”

Other answers included adjectives and descriptions such as: “friendly place,” “A welcoming environment,” “diverse student body,” “Private,” “Fun, safe, valued, energetic, charismatic,” “Warm,” “Technology inclusive and positive environment,”

“Young children, personal, friendly, and involved”.

Considerations and Implications

This study aimed to understand the perspectives of preservice teachers regarding commercial AI tutors such as the Amazon Alexa for class learning through the theoretical lens of CBAM and the SoCQ instrument. While research on electronic tutors spans decades, and research on AI started over fifty years ago, in recent years, especially with the growing diffusion of informal learning, more and more commercial devices are becoming familiar learning accessories in students’ hands at home and in the classroom. From this study, we can draw three important considerations: 1) Preservice teachers’ predisposed attitude toward the Amazon Alexa might have impacted the results;

2) Preservice teachers’ training and preparation in the use of technology needs to be improved; 3) When researching commercial devices and AI applications, a theoretical lens and study instrument designed to capture trainee teachers’ impressions of modern applications may be a more suitable approach.

1) Preservice teachers' attitude toward the Amazon® Alexa: Looking at preservice

teachers’ difference in attitude toward the Echo when we compare the 2017 study to

the current study, it could be inferred that the perspectives of these trainee teachers

had been influenced by the persistent advertising campaigns marketing these devices

as ‘personal assistants’ rather than as classroom learning tools. Therefore, they had

difficulty accepting this device as a tool for learning. As more and more commercial

devices enter the classroom, it is important that teacher preparation programs take

time to analyze ways in which commercial devices can be used for teaching and

learning. Looking at the suggestions and results achieved by Dousay and Hall (2018)

at the University of Idaho with the integration of the Amazon® Alexa for

learning, effective methods for integration of this device can most likely be obtained

with training and by providing sets of instructions with examples on how the device

could be used in the classroom.

2) Training and preparation of preservice teachers: Since teacher preparation programs

such as the one at OU only include a very limited number of hours on the use of

technologies, start teaching how to use a variety of technologies in lower school

levels may result in faster adoption of technologies for teaching and learning. Today’s

students spend a lot of time using mobile devices. Teaching them how to use these

devices for learning, rather than for entertainment, could support successful

implementation of technologies in our classrooms. Adding technology courses to the K-

12 curriculum with the idea that students can use newer commercial technologies for

learning could help develop a malleable mindset regarding the use of non-school-

specific devices for the acquisition of information and the formation of marketable

skills.

3) Theory and the instrument: CBAM is a reliable model for technology adaptation in

schools, and the Exploratory Factor Analysis procedure indicated that, for the current

study, the instrument provided reliable results. However, CBAM seems to be more

suitable for the analysis of answers from in-service teachers (for instance, questions

7 and 30). This instrument has not been updated in many years, and some of

the questions could be updated to include expressions that are more suitable for

technologies in the twenty-first century and AI applications. Perhaps other

technology adoption instruments might be more suitable for studies that test artificial

intelligence applications evaluated by preservice teachers.

Suggestions for Future Research

The following are suggestions for future research on the use of the Amazon® Echo in the classroom for learning:

1. Future research could replicate this study in the context of other universities with

a larger population, or in different geographical areas to extend the research

model.

2. Future research could replicate this study with a different population, asking, for

example: ‘What are the concerns of teachers with 3 to 5 years of professional teaching

experience for integrating the Amazon® Echo into their classrooms for learning?’

and ‘What are the concerns of teachers with 7 or more years of professional

teaching experience regarding the integration of the Amazon® Echo into their

classrooms for learning?’

3. More research is needed to ascertain if preservice teachers who plan to teach

lower grades, and those who plan to teach high school have different motivations

for the inclusion of the Amazon® Echo in the classroom.

4. The results indicate that participants of this study were not users of the Amazon®

Alexa. Further research should explore the use of this device among in-service

teachers using the innovation for teaching or learning.

5. Repeating the study with a larger sample size that includes cases across all Stages

of Concern could provide more defined answers regarding the impact of

designated geographical areas, specific teaching grades, and gender on the

adaptation of technology in schools.

6. More in-depth insights on technology adoption in different population strata could

be achieved if this study could be repeated with a sample of in-service teachers

from Rural, Suburban, and Urban areas.

7. More in-depth insights on the concerns of preservice teachers could be attained

with a mixed-methods approach that includes interviews of a random

sample of the participants.

8. This investigation could be repeated with a different theoretical model of

technology adoption and a more up-to-date instrument which includes survey

questions that target modern technology applications such as AI devices.
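Several of these suggestions, particularly numbers 1 and 5, turn on statistical power and sample size. As a minimal illustration of how a target sample size could be planned before a replication, the sketch below applies the standard normal-approximation formula for a two-group comparison. The effect size, alpha, and power values are conventional illustrative choices, not figures drawn from this study's data.

```python
# Required sample size for detecting an effect in a two-group comparison,
# using the normal-approximation formula:
#   n per group = 2 * ((z_{1-alpha/2} + z_{1-beta}) / d) ** 2
# All numeric inputs below are conventional illustrative assumptions,
# not values taken from this dissertation's data.
import math
from statistics import NormalDist

def required_n_per_group(d: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Smallest per-group n for a two-sided, two-group test (normal approximation)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # ~1.96 for alpha = .05
    z_beta = z.inv_cdf(power)           # ~0.84 for power = .80
    return math.ceil(2 * ((z_alpha + z_beta) / d) ** 2)

# A small effect (Cohen's d = 0.2) needs a far larger sample than n = 124:
per_group = required_n_per_group(d=0.2)
print(per_group, 2 * per_group)  # prints 393 786
```

Under these assumed values, detecting a small effect would require roughly 393 participants per group, far more than the 124 participants available here, which is consistent with the low-power interpretation of the inconclusive MANOVA results.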

Chapter Summary

This chapter includes the interpretation of results from analysis of data as described in Chapter 4. Detailed information about the interpretation of the SoC

Questionnaire (George, Hall, & Stiegelbauer, 2006) and the results from the MANOVA models are discussed. Results are presented in relation to the body of literature available on the topics of technology adaptation and preservice teachers’ projected use of digital innovations in their careers. Suggestions for future research, considerations, and implications are included in this chapter.


References

Admoni, H. (2016). Nonverbal communication in socially assistive human-robot

interaction. AI Matters, 2(4), 9-10.

Aesaert, K., Vanderlinde, R., Tondeur, J., & Van Braak, J. (2013). The content of

educational curricula: A cross-curricular state of the art. Education Tech Research

Development, 61, 131-151. doi:10.1007/s11423-012-9279-9

Aggorowati, M. A., Iriawan, N., & Gautama, H. (2012). Restructuring and expanding

technology acceptance model structural equation model and Bayesian

approach. American Journal of Applied Sciences, 9(4), 496.

Arghode, V., Brieger, E., & McLean, G. (2017). Adult learning theories: Implications for

online instruction. European Journal of Training and Development, 41(7), 593-

609. DOI: 10.1108/EJTD-02-2017-0014

Al-Awidi, H. M., & Alghazo, I. M. (2012). The effect of student teaching experience on

preservice elementary teachers’ self-efficacy beliefs for technology integration in

the UAE. Educational Technology Research and Development, 60(5), 923-941.

Alhabbash, M. I., Mahdi, A. O., & Naser, S. S. A. (2016). An Intelligent Tutoring System

for Teaching Grammar English Tenses. European Academic Research, 4(9), 1-15.

Almurshidi, S. H., & Naser, S. S. A. (2017). Design and Development of Diabetes

Intelligent Tutoring System. European Academic Research 4(9),1-9.

Alomary, A., & Woollard, J. (2015). How is technology accepted by users? A review of

technology acceptance and adoption models and theories. (Proceedings of The

IRES 17th International Conference, London, United Kingdom, 21st November

2015, ISBN: 978-93-85832-48-2).

Anderson, S. E., & Maninger, R. M. (2007). Preservice teachers' abilities, beliefs, and

intentions regarding technology integration. Journal of Educational Computing

Research, 37(2), 151-172.

Andrews, J. & Jones, M., (2015). What’s Happening in ‘Their Space’? Exploring the

Borders of Formal and Informal Learning with Undergraduate Students of

Education in the Age of Mobile Technologies. Journal of Interactive Media in

Education, 2015(1),16:1-10. DOI: http://doi.org/10.5334/jime.ax

Anshari, M., Almunawar, M. N., Shahrill, M., Wicaksono, D. K., & Huda, M. (2017).

Smartphones usage in the classrooms: Learning aid or interference?. Education

and Information Technologies, 22(6), 3063-3079.

Amazon (2019). Amazon Alexa (Second Generation) – Premium sound with built-in

smart home hub – Charcoal. Retrieved January 6, 2019 from

https://www.amazon.com/All-new-

EchoPlus2ndbuilt/dp/B0794W1SKP/ref=sr_1_3?keywords=alexa&qid=15500212

69&s=gateway&sr=8-3

Amazon Developer (2019a). Create Intents, Utterances, and Slots. Retrieved January 6,

2019 from https://developer.amazon.com/public/solutions/alexa/alexa-skills-

kit/docs/defining-the-voice-interface

Amazon Developer (2019b). Build Skills with the Alexa Skills Kit. Retrieved January 6,

2019 from https://developer.amazon.com/public/solutions/alexa/alexa-skills-

kit/getting-started-guide

Amazon Developer (2019c). Voice Design Handbook (Legacy). Retrieved January 6,

2019 from https://developer.amazon.com/public/solutions/alexa/alexa-skills-

kit/docs/alexa-skills-kit-voice-design-handbook

Amazon Developer (2019d). Speech Synthesis Markup Language (SSML) Reference.

Retrieved January 6, 2019 from

https://developer.amazon.com/public/solutions/alexa/alexa-skills-kit/docs/speech-

synthesis-markup-language-ssml-reference

Aron, A., Aron, E. N., & Coups, E. J. (2011). Statistics for the behavioral and social

sciences: A brief course. Boston: Prentice Hall.

Merriam-Webster Dictionary. (n.d.) Artificial Intelligence. Retrieved May 27, 2017, from

https://www.merriam-webster.com/dictionary/artificial+intelligence

Aspden E. J., & Thorpe, L. P. (2009). Where do you learn? Tweeting to inform learning

space development. Educause Quarterly, 32(1), 1-3.

Azano, A. P., & Stewart, T. T. (2015). Exploring Place and Practicing Justice: Preparing

Pre-Service Teachers for Success in Rural Schools. Journal of Research in Rural

Education, 26(9), 1-13.

Azano, A. P., & Stewart, T. T. (2016). Confronting Challenges at the Intersection of

Rurality, Place, and Teacher Preparation: Improving Efforts in Teacher Education

to Staff Rural Schools. Global Education Review, 3(1), 108-128.

Azevedo, R., & Hadwin, A. F. (2005). Scaffolding self-regulated learning and

metacognition–Implications for the design of computer-based scaffolds.

Instructional Science, 33(5), 367-379.

Bailey, M. (2016, May 31). Alexa, pull those lab results’: A hospital tries out virtual

assistants. STAT. Retrieved January 6, 2020 from

https://www.statnews.com/2016/05/31/hospital-virtual-assistants/

Baker, R. (2015). The student experience. How competency-based experience providers

serve education. University of California Irvine. Center on Higher Education

Reform. American Enterprise Institute. Retrieved on June 8, 2019 from

https://www.luminafoundation.org/files/resources/the-student-experience.pdf

Barrows, H. S. (1985). How to design a problem-based curriculum for the preclinical

years. Springer Publishing Company: New York.

Bennett, E. E. (2012). A Four-Part Model of Informal Learning: Extending

Schugurensky's Conceptual Model. Adult Education Research Conference. In

Proceedings of the Adult Education Research Conference. Saratoga Springs, NY:

AERC.

Bente, G., & Breuer, J. (2009). Making the implicit explicit. Serious games: Mechanisms

and effects, 322-343. In U. Ritterfeld, M. Cody & P. Vorderer (eds.) Serious

Games: Mechanisms and Effects. New York, NY: Routledge.

Bernstein, B. (1971) ‘On the Classification and Framing of Educational Knowledge’, pp.

47–69 in Young, M. (ed.) Knowledge and Control: New Directions for the

Sociology of Education. London: Collier Macmillan.

Bertagni, B. (2015). Dealing with complexity in a simple way: How visualization boosts

understanding in learning process. The Z Generation case. In: Salvetti, F., Rosa

M. La. & Bertagni B. (Eds.), Employability. Knowledge, Skills and Abilities for

the “Glocal” World, Milan: Franco Angeli

Best, J. W., & Kahn, J. V. (2006). Research in Education (10th ed.). New York:Allyn &

Bacon.

Bevan, B., Dillon, J., Hein, G. E., Macdonald, M., Michalchik, V., Miller, D., … Yoon,

S. (2010). Making science matter: Collaborations between informal science

education organizations and schools. A CAISE Inquiry Group report. Science and

Technology.

Bhinderwala, A., Shukla, N., & Cherarajan, V. (2014). Intelligent personal agent. Int. J.

Comput. Appl, 9-12. (0975 – 8887) National Conference on Role of Engineers in

Nation Building 2014 (NCRENB-14).

Bray, J. H., & Maxwell, S. E. (1982). Analyzing and interpreting significant MANOVAs.

Review of Educational research, 52(3), 340-367.

Brown, J. S., Burton, R. R., & Bell, A. G. (1975). SOPHIE: A step toward creating a

reactive learning environment. International Journal of Man-Machine

Studies, 7(5), 675-696.

Bruce, B. C., & Levin, J. A. (1997). Educational technology: Media for inquiry,

communication, construction, and expression. Journal of Educational Computing

Research, 17(1), 79-102.

Buabeng-Andoh, C. (2012). Factors influencing teachers’ adoption and integration of

information and communication technology into teaching: A review of literature.

International Journal of Education and Development using Information and

Communication Technology, 8(1), 136.

Bücheler, T., Füchslin, R. M., Pfeifer, R., Sieg, J., H. (2010). Crowdsourcing, Open

Innovation and Collective Intelligence in the scientific method: a research agenda

and operational framework. In: Artificial Life XII -- Twelfth International

Conference on the Synthesis and Simulation of Living Systems. Odense,

Denmark, 19 August 2010 - 23 August 2010, 679-686.

Bronfenbrenner, U., Mcclelland, P. D., Wethington, E., Moen, P., Ceci, S. J.,

Hembrooke, H., ... & White, T. L. (1996). The state of Americans: This

generation and the next. Simon and Schuster. The Free Press: NY

Cai, Z., Fan, X., & Du, J. (2017). Gender and attitudes toward technology use: A meta-

analysis. Computers & Education, 105, 1-13.

Calisir, F., Gumossoy, C. A., Bayraktaroglu, A. E., & Karaali, D. (2014). Human motion

simulation for vehicle and workplace design. Human Factors and Ergonomics in

Manufacturing, 24(5), 515–531. doi:10.1002/trtr.1356

Carlson, K. A., & Winquist, J. R. (2013). An Introduction to Statistics: An Active

Learning Approach. Thousand Oaks, CA: Sage Publications.

Campbell, L.O., Heller, S. & Sutter, C. (2019). Pre-Service Teachers’ Intentions towards

using Emerging Technologies. In K. Graziano (Ed.), Proceedings of Society for

Information Technology & Teacher Education International Conference (pp.

1746-1749). Las Vegas, NV, United States: Association for the Advancement of

Computing in Education (AACE). Retrieved February 1, 2020 from

https://www.learntechlib.org/primary/p/207879/.

Cassidy, S. (2016). Virtual Learning Environments as Mediating Factors in Student

Satisfaction with Teaching and Learning in Higher Education. Journal of

Curriculum and Teaching, 5(1), 113-123.

Cellan-Jones, R. (2016, October 20). Stephen Hawking – will AI kill or save humankind?

BBC Technology. Retrieved January 6, 2017 from

http://www.bbc.com/news/technology-37713629

Chaudhri, V. K., Lane, H. C., Gunning, D., & Roschelle, J. (2013). Intelligent learning

technologies: Applications of artificial intelligence to contemporary and emerging

educational challenges. AI Magazine, 34(3), 10-12.

Chen, B., & Bryer, T. (2012). Investigating instructional strategies for using social media

in formal and informal learning. The International Review of Research in Open

and Distributed Learning, 13(1), 87-104.

Cheng, A. (2017, December 4). Digital voice assistants prove they are not just gimmicks.

Widespread use bodes well for sales of other smart devices. EMarketers.

Retrieved January 6, 2018 from https://retail.emarketer.com/article/digital-voice-

assistants-prove-they-not-just-gimmicks/5a25ced4ebd4000570c897f4

Choy, L. T. (2014). The Strengths and Weaknesses of Research Methodology:

Comparison and Complimentary between Qualitative and Quantitative

Approaches. IOSR Journal of Humanities and Social Science, 19(4), 99–104. doi:

10.9790/0837-194399104

Christensson, P. (2009, April 23). Cloud Computing Definition. Retrieved April 1, 2019

from https://techterms.com

Christensson, P. (2012, September 22). App Definition. Retrieved April 1, 2019 from

https://techterms.com

Christensson, P. (2016, June 20). API Definition. Retrieved April 1, 2019 from

https://techterms.com


Chuang, T. Y., & Chen, W., F. (2009). Effect of Computer-Based Video Games on

Children: An Experimental Study. 2009 First IEEE International Workshop on

Digital Game and Intelligent Toy Enhanced Learning (DIGITEL07). doi:

10.1109/digitel.2007.24

Chuang, T. (2016, September 23). How intelligent is the Amazon Tap when it comes to

homework? Artificially intelligent Alexa meets the fifth grade. The Denver Post.

Retrieved January 6, 2019 from http://www.denverpost.com/2016/08/29/amazon-

tap-homework-heritage-elementary/

Chuttur, M. Y. (2009). Overview of the technology acceptance model: Origins,

developments and future directions. Working Papers on Information Systems,

9(37), 9-37.

Cicchelli, T., & Baecher, R. (1989). Microcomputers in the classroom: Focusing on

teacher concerns. Educational Research Quarterly, 13(1), 37–46.

Ciechanowski, L., Przegalinska, A., Magnuski, M., & Gloor, P. (2019). In the shades of

the uncanny valley: An experimental study of human–chatbot interaction. Future

Generation Computer Systems, 92, 539-548.

Clausen, J. M. (2007). Beginning Teachers’ Technology Use. Journal of Research on

Technology in Education, 39(3), 245-261.

DOI:10.1080/15391523.2007.10782482

Coakes, S. J., Steed, L. G., & Dzidic, P. (2006). SPSS version 13.0 for windows: Analysis

without anguish. John Wiley & Sons, Australia.

Cofer, D. A. (2000). Informal workplace learning (Practical Application Brief No. 10).

Columbus, OH: Center on Education and Training for Employment.

Cohen, L., Manion, L., & Morrison, K. (2007). Research Methods in Education (6th ed.).

London: Routledge.

Coleman, L. O., Gibson, P., Cotten, S. R., Howell-Moroney, M., & Stringer, K. (2016).

Integrating computing across the curriculum: The impact of internal barriers and

training intensity on computer integration in the elementary school classroom.

Journal of Educational Computing Research, 54(2), 275-294. DOI:

10.1177/0735633115616645

Conde, M. Á., García-Peñalvo, F. J., Rodríguez-Conde, M. J., Alier, M., Casany, M. J., &

Piguillem, J. (2014). An evolving Learning Management System for new

educational environments using 2.0 tools. Interactive Learning Environments,

22(2), 188-204.

Comstock, J. (2016, April 13). Boston Children's Hospital launches Amazon Alexa app

KidsMD. Healthcare IT News. Retrieved January 6, 2020 from

https://www.healthcareitnews.com/news/boston-childrens-hospital-launches-

amazon-alexa-app-kidsmd

Cope, J. (2005). Toward a dynamic learning perspective of entrepreneurship.

Entrepreneurship Theory and Practice, 29(4), 373–397.

Council of Europe (2000). Recommendation 1437. (2000). Non-formal

education. Assembly debate on 24 January 2000 (1st Sitting). Text adopted by

the Assembly on 24 January 2000 (1st Sitting).

Couros, G. (2018). The State of Technology in Education Report 2017/2018 UK&I

Edition. Promethean. Retrieved January 6, 2019 from

http://www2.prometheanworld.com/education-technology-report

Costello, A. B., & Osborne, J. W. (2005). Best practices in exploratory factor analysis:

Four recommendations for getting the most from your analysis. Practical

Assessment Research and Evaluation, 10(7), 1-9.

Creswell, J. W. (2008). Educational research: planning, conducting, and evaluating

quantitative and qualitative research (3rd ed.). Upper Saddle, NJ: Pearson

Education International.

Creswell, J. W. (2014). Research Design: Qualitative, Quantitative, and Mixed Methods

Approaches. Thousand Oaks, CA: SAGE Publications, Inc.

Creswell, J. W. (2015). Educational research: Planning, Conducting, and Evaluating

Quantitative. Upper Saddle River, NJ: Pearson.

Cronbach, L. J., & Meehl, P. E. (1955). Construct validity in psychological tests.

Psychological bulletin, 52(4), 281.

Crompton, H. (2015). Preparing teachers to use technology effectively using the

technological, pedagogical, content knowledge (TPACK) framework. Glokalde,

1(2), 82-92.

Cutucache, C. E., Leas, H. D., Grandgenett, N. F., Nelson, K. L., Rodie, S., Shuster, R.,

... & Tapprich, W. E. (2017). Genuine Faculty-Mentored Research Experiences

for In-Service Science Teachers: Increases in Science Knowledge, Perception,

and Confidence Levels. Journal of Science Teacher Education, 28(8), 724-744.

du Boulay, B. (2016). Artificial Intelligence as an Effective Classroom Assistant. IEEE

Intelligent Systems, 31(6), 76-81.

Daniel, B. (2015). Big Data and analytics in higher education: Opportunities and

challenges. British journal of educational technology, 46(5), 904-920.

Davis, F. D., Bagozzi, R.P., & Warshaw, P.R. (1989). User acceptance of computer

technology: A comparison of two theoretical models. Management Science, 35(8),

982–1003.

Davis, F. D. (1989). Perceived Usefulness, Perceived Ease of Use, and User Acceptance

of Information Technology. MIS Quarterly, 13(3), 319-339. DOI:

10.2307/249008

Davou, B., & Sidiropoulou, A. (2017). Family life around screens: Some thoughts on the

impact of ICTs on psychological development and the development of

relationships. Contemporary Family Therapy, 39(4), 261–270.

Day, M. (2019, June 14). Alexa in the classroom? Amazon’s voice assistant leads kids’

story time. Los Angeles Times. Retrieved August 6, 2019, from

https://www.latimes.com/business/la-fi-amazon-alexa-classroom-20190614-

story.html

DailyCaring Editorial Team (n.d.). Amazon Echo Alexa for seniors with dementia. Daily

Caring. Retrieved January 6, 2020 from https://dailycaring.com/amazon-echo-for-

dementia-technology-for-seniors/

Deloitte. (2017). The 2017 Deloitte Millennial Survey–apprehensive Millennials: seeking

stability and opportunities in an uncertain world [online]. London: Deloitte.

Retrieved October 14, 2019 from

https://www2.deloitte.com/content/dam/Deloitte/global/Documents/About-

Deloitte/gx-deloitte-millennial-survey-2017-executive-summary.pdf

de Miranda, A. (2009). Technological determinism and ideology: Questioning the

‘information society’ and the ‘digital divide’. In J. Burnett, P. Senker & K.

Walker (Eds.), The myths of technology: Innovation and inequality (pp. 23-38).

New York: Peter Lang Publishing.

Dewey, J. (1944). Democracy and Education: An introduction to the philosophy of

education. New York: Macmillan Company.

Dewey, J. (1938). Experience and Education. New York, NY: Touchstone.

Dickson, B. (2017, March 13) How Artificial Intelligence enhances education. The Next

Web. Retrieved January 6, 2018 from https://thenextweb.com/artificial-

intelligence/2017/03/13/how-artificial-intelligence-enhances-education/

Dobrovsky, A., Borghoff, U. M., & Hofmann, M. (2019). Improving Adaptive Gameplay

in Serious Games Through Interactive Deep Reinforcement Learning. In

Cognitive Infocommunications, Theory and Applications (pp. 411-432). Springer,

Cham.

Dousay, T.A. & Hall, C. (2018). “Alexa, tell me about using a virtual assistant in the

classroom”. In T. Bastiaens, J. Van Braak, M. Brown, L. Cantoni, M. Castro, R.

Christensen, G. Davidson-Shivers, K. DePryck, M. Ebner, M. Fominykh, C.

Fulford, S. Hatzipanagos, G. Knezek, K. Kreijns, G. Marks, E. Sointu, E.

Korsgaard Sorensen, J. Viteli, J. Voogt, P. Weber, E. Weippl & O. Zawacki-

Richter (Eds.), Proceedings of EdMedia: World Conference on Educational

Media and Technology (pp. 1413-1419). Amsterdam, Netherlands: Association

for the Advancement of Computing in Education (AACE). Retrieved August 14,

2019 from https://www.learntechlib.org/primary/p/184359/

Duchastel, P., & Imbeau, J. (1988). Intelligent Computer-assisted Instruction (ICAI):

Flexible Learning Through Better Student-Computer Interaction. Journal of

Information Technology, 3(2), 102-105.

Duffy, M. C., & Azevedo, R. (2015). Motivation matters: Interactions between

achievement goals and agent scaffolding for self-regulated learning within an

intelligent tutoring system. Computers in Human Behavior, 52, 338-348.

Dufur, M. J., Parcel, T. L., & Troutman, K. P. (2013). Does capital at home matter more

than capital at school? Social capital effects on academic achievement. Research

in Social Stratification and Mobility, 31, 1-21.

Duncan, A. (2011). Harness the power of technology. Learning and Leading with

Technology, 10-13.

Duran, M., Brunvand, S., Ellsworth, J. & Sendag, S. (2011). Impact of research-based

professional development: Investigation of in-service teacher learning and

practice in wiki integration. Journal of Research on Technology in Education,

44(4), 313-334. DOI: 10.1080/15391523.2012.10782593

Ellsworth, J. B. (2000). Surviving Change: A Survey of Educational Change Models.

Syracuse, N.Y.: Clearinghouse on Information & Technology, Syracuse

University.

eMarketer (2017, May 8). Alexa, Say What?! Voice-Enabled Speaker Usage to Grow

Nearly 130% This Year. EMarketers. Retrieved January 6, 2018 from

https://www.emarketer.com/Article/Alexa-Say-What-Voice-Enabled-Speaker-

Usage-Grow-Nearly-130-This-Year/1015812

eMarketer (2017, April 10). Usage of Virtual Assistants Among Internet Users

Worldwide, by Age, Nov 2016 (% of respondents in each group). EMarketers.

Retrieved January 6, 2019 from http://www.emarketer.com/Chart/Usage-of-

Virtual-Assistants-Among-Internet-Users-Worldwide-by-Age-Nov-2016-of-

respondents-each-group/203019

Eraut, M. (2000). Non-formal learning and tacit knowledge in professional work. British

journal of educational psychology, 70(1), 113-136.

Eraut, M. (2004). Informal learning in the workplace. Studies in Continuing Education

26(2), 247-273. doi: 10.1080/158037042000225245

Eshach, H. (2007). Bridging in-school and out-of-school learning: Formal, non-formal,

and informal education. Journal of Science Education and Technology, 16(2),

171-190.

Facer, K (2011). Learning Futures: Education, Technology and Social Change.

Abingdon: Routledge. DOI: 10.4324/9780203817308

Faghihi, U., Brautigam, A., Jorgenson, K., Martin, D., Brown, A., Measures, E., &

Maldonado-Bouchard, S. (2014). How gamification applies for educational

purpose specially with college algebra. Procedia Computer Science, 41, 182-187.

Falk, J. H., & Dierking, L. D. (2018). Learning from museums. Rowman & Littlefield.

Maryland: Lanham. Second Edition.

Field, A. (2009). Discovering statistics using IBM SPSS statistics (3rd ed.). London, UK:

Sage.

Field, A. (2013). Discovering statistics using IBM SPSS statistics (4th ed.). Los Angeles,

CA: Sage.

Fink, L. D. (2013). Creating significant learning experiences, revised and updated: An

integrated approach to designing college courses. San Francisco, CA: Jossey-

Bass.

Firmin, M. W., & Genesi, D. J. (2013). History and implementation of classroom

technology. Procedia-Social and Behavioral Sciences, 93, 1603-1617.

Fishbein, M., & Ajzen, I. (1975). Belief, Attitude, Intention, and Behavior: An

Introduction to Theory and Research. Reading, MA: Addison-Wesley.

Francis, R. P. (2016). Physician's acceptance of data from patient self-monitoring

devices (Doctoral dissertation). Capella University.

Frutos-Pascual, M., & Zapirain, B. G. (2015). Review of the Use of AI techniques in

serious games: Decision making and machine learning. IEEE, 99(1).

Fuhrman, T. (2015, January 12). Virtual personal assistant app helps students manage

college life. Campus Technology. Retrieved January 6, 2019 from

https://campustechnology.com/articles/2015/01/12/virtual-personal-assistant-app-

helps-students-manage-college-life.aspx

Fuller, F. F. (1969). Concerns of teachers: A developmental conceptualization. American

educational research journal, 6(2), 207-226.

Furman, J., Holdren, J., Muñoz, C., & Smith, M. (2016, December 20). Artificial

intelligence, automation, and the economy. Executive Office of the President.

Washington, D.C. Retrieved January 6, 2019 from

https://obamawhitehouse.archives.gov/sites/whitehouse.gov/files/documents/Artif

icial-Intelligence-Automation-Economy.PDF

García-Peñalvo, F. J., Hernández-García, Á., Conde, M. Á., Fidalgo-Blanco, Á., Sein-

Echaluce, M. L., Alier-Forment, M., ... & Iglesias-Pradas, S. (2017). Enhancing

education for the knowledge society era with learning ecosystems. In Open

Source Solutions for Knowledge Management and Technological Ecosystems (pp.

1–24) IGI Global. doi: 10.4018/978-1-5225-0905-9.ch001

Gardner, L. (2013, August 14). IBM and Universities team up to close a ‘big data’ skills

gap. The Chronicles of Higher Education. Retrieved January 6, 2019 from

http://chronicle.com/article/IBMUniversities-Team-Up/141111/

Garris, R., Ahlers, R., & Driskell, J. E. (2002). Games, motivation, and learning: A

research and practice model. Simulation & Gaming, 33(4), 441-467.

Gavaldon, G., & McGarr, O. (2019). Exploring pre-service teachers’ future intentions to

use technology through the use of comics. Teaching and Teacher Education, 83,

99-109.

Gefen, D., & Straub, D. W. (1997). Gender Differences in the Perception and Use of E-

Mail: An Extension to the Technology Acceptance Model. MIS Quarterly, 21(4),

389– 400.

George, A. A., Hall, G. E., & Stiegelbauer, S. M. (2006). Measuring implementation in

schools: The stages of concern questionnaire. Austin, TX: SEDL.

Georgescu, M., & Popescu, D. (2015). How Could Internet of things change the e-

learning environment. In Conference proceedings of “eLearning and Software for

Education” (eLSE) (No. 01, pp. 68-71). Universitatea Nationala de Aparare Carol

I.

Gergen, K. J. (1994). Social construction and the educational process. In L. P. Steffe & J. E. Gale (Eds.), Constructivism in education (pp. 17-39). Hillsdale, NJ: Lawrence Erlbaum.

Gershner, V. T., & Snider, S. L. (2001). Integrating the Use of Internet as an Instructional

Tool: Examining the Process of Change. Journal of Educational Computing

Research, 25(3), 283–300. doi: 10.2190/4fvw-w3uw-40nx-y8n5

Giddens, A. (2018). Globalization. In Sociology of Globalization (pp. 19-26). New York

and London: Routledge.

Goktas, Y., Yildirim, Z., & Yildirim, S. (2009). Investigation of K-12 teachers' ICT competences and the contributing factors in acquiring these competences. The New Educational Review, 17, 276-294.

Grabill, J. T., & Hicks, T. (2005). Multiliteracies meet methods: The case for digital writing in English education. English Education, 37(4), 301-311.

Google Inc. (2014, October 14). Teens use voice search most, even in bathroom,

Google’s mobile voice study finds. PRNewswire. Retrieved January 6, 2019

from http://prn.to/1sfiQRr

Google Store. (2017). Google Home. Google Store. Retrieved January 6, 2019 from

https://store.google.com/us/product/google_home?hl=en-US

Green, H., Facer, K., Rudd, T., Dillon, P., & Humphreys, P. (2005). Personalisation and

digital technologies. Bristol, UK: Futurelab.

Griffiths, D., & García-Peñalvo, F. J. (2016). Informal learning recognition and

management. Computers in Human Behavior, 55(PA), 501-503.

Groulx, J. G. (2001). Changing preservice teacher perceptions of minority schools. Urban

Education, 36(1), 60-92.

Grush, M. (2016). From IoT to IoE: More ways for institutions to connect everything.

Campus Technology. Retrieved January 6, 2019 from

https://campustechnology.com/Articles/2016/09/20/From-IoT-to-IoE-Institutions-

Connect-to-Everything.aspx?Page=1

Gu, X., Zhu, Y. & Guo, X. (2013). Meeting the “Digital Natives”: Understanding the

acceptance of technology in classrooms. Educational Technology & Society, 16

(1), 392–402.

Gupta, B., & Koo, Y. (2010). Applications of mobile learning in Higher Education: An

empirical study. International Journal of Information and Communication

Technology Education (IJICTE), 6(3), 75-87. doi:10.4018/jicte.2010070107

Maio, G. R., & Haddock, G. (2007). Attitude change. In A. W. Kruglanski & E. T. Higgins (Eds.), Social psychology: Handbook of basic principles (pp. 565-586). New York, NY: Guilford Press.

Hair, J. F., Anderson, R. E., Tatham, R. L., & Black, W. C. (1998). Multivariate data

analysis (5th ed.). Upper Saddle River, NJ: Prentice Hall.

Hair, J. F., Black, W. C., & Babin, B. J. (2013). Multivariate data analysis: A global

perspective. International version (7th ed.) London, UK: Pearson Education.

Hall, G. E., & Hord, S. M. (2010). Implementing change: patterns, principles, and

potholes (3rd ed.). Upper Saddle River, NJ: Pearson.

Hammond, K. (2015, April 10). What is artificial intelligence? Computer

World. Retrieved January 6, 2019 from

http://www.computerworld.com/article/2906336/emerging-technology/what-is-

artificial-intelligence.html

Hammond, M., Fragkouli, E., Suandi, I., Crosson, S., Ingram, J., Johnston-Wilder, P., ...

& Wray, D. (2009). What happens as student teachers who made very good use of

ICT during pre-service training enter their first year of teaching? Teacher

Development, 13(2), 93-106.

Han, I., Shin, W. S., & Ko, Y. (2017). The effect of student teaching experience and

teacher beliefs on pre-service teachers’ self-efficacy and intention to use

technology in teaching. Teachers and Teaching, 23(7), 829-842.

Hartung, A. (2017, July 14). Why Amazon Echo is killing it while Windows phone is

dead – Developers are what matters. Forbes. Retrieved on June 7, 2019 from

https://www.forbes.com/sites/adamhartung/2017/07/14/why-amazon-echo-is-

killing-it-while-windows-phone-is-dead-developers-are-what-

matters/#6aeab4225a33

Hashemi, M., Azizinezhad, M., Najafi, V., & Nesari, A. J. (2011). What is Mobile

Learning? Challenges and Capabilities. Procedia - Social and Behavioral

Sciences, 30, 2477–2481. doi:10.1016/j.sbspro.2011.10.483

Hasib, M. (2015). Cybersecurity leadership: Powering the modern organization (3rd ed.)

Tomorrow’s Strategy Today, LLC. CreateSpace. Lexington, KY.

Heater, B. (2016, December 26). Amazon sold nine times as many Amazon Echo devices

this holiday. Tech Crunch. Retrieved January 6, 2019 from

https://techcrunch.com/2016/12/27/amazon-echo-3/

Heidig, S., & Clarebout, G. (2011). Do pedagogical agents make a difference to student

motivation and learning? Educational Research Review, 6(1), 27-54.

Heller, R. S., & Martin, C. D. (1987). Measuring the level of teacher concerns over

microcomputers in instruction. Education and Computing, 3(3-4), 133–139.

doi:16/S0167-9287(87)80003-1

Hinkle, D. E., Wiersma, W., & Jurs, S. G. (2003). Applied statistics for the behavioral

sciences (Vol. 663). Houghton Mifflin College Division.

Hollins, E. R. (2015). Rethinking field experiences in preservice teacher preparation:

Meeting new challenges for accountability. Routledge. New York and London.

Holmes, J. A. (2011). Informal learning: Student achievement and motivation in science

through museum-based learning. Learning Environments Research, 14(3), 263-

277.

Horn, M. B. (2018). Hey Alexa, can you help kids learn more? Education Next, 18(2).

Howard, S. K., & Mozejko, A. (2015). Teachers: Technology, change and resistance. In

M. Henderson & G. Romeo (Eds.), Teaching and digital technologies: Big issues

and critical questions (pp. 307–317). Cambridge: Cambridge University Press.

Hsu, C. Y., Liang, J. C., Chai, C. S., & Tsai, C. C. (2013). Exploring preschool teachers’ technological pedagogical content knowledge of educational games. Journal of Educational Computing Research, 49(4), 461-479.

Hussar, W. J., & Bailey, T. M. (2017). Projections of education statistics to 2025 (NCES 2017-019). Department of Education, National Center for Education Statistics. Retrieved January 6, 2019 from https://nces.ed.gov/pubs2017/2017019.pdf

Igbaria, M., Guimarães, T., & Davis, G. B. (1995). Testing the determinants of

microcomputer usage via a structural equation model. Journal of Management

Information Systems, 11(4), 87–114.

Igel, C., & Urquhart, V. (2012). Generation Z, meet cooperative learning: Properly

implemented cooperative learning strategies can increase student engagement and

achievement. Middle School Journal, 43(4), 16-21.

Inan, F. A., & Lowther, D. L. (2009). Factors affecting technology integration in K-12

classrooms: A model. Educational Technology Research and

Development, 58(2), 137–154. doi: 10.1007/s11423-009-9132-y

Incerti, F., Franklin, T. J., & Kessler, G. K. (2017). Amazon Echo: Perceptions of an emerging technology for formal and informal learning. In Y. Baek (Ed.), Game-based learning: Theory, strategies and performance outcomes. Hauppauge, NY: Nova Science Publishers.

Instefjord, E., & Munthe, E. (2016). Preparing pre-service teachers to integrate

technology: an analysis of the emphasis on digital competence in teacher

education curricula. European Journal of Teacher Education, 39(1), 77-93. DOI:

10.1080/02619768.2015.1100602

Jackson, D. (2015). Employability skill development in work-integrated learning:

Barriers and best practice. Studies in Higher Education, 40(2), 350-367.

Johnson, L., Adams Becker, S., Cummins, M., Estrada, V., Freeman, A. & Hall, C.

(2016). NMC Horizon Report: 2016 Higher Education Edition. Austin, Texas:

The New Media Consortium. Retrieved February 2, 2020 from

https://www.learntechlib.org/p/171478/

Johnson, B., & Christensen, L. (2010). Educational research: Quantitative, qualitative,

and mixed approaches (4th ed.). Thousand Oaks, CA: Sage Publications.

Johnson, W. L., Vilhjalmsson, H., & Marsella, S. (2005). Serious games for language learning: How much game, how much AI? In Proceedings of the 2005 Conference on Artificial Intelligence in Education: Supporting Learning through Intelligent and Socially Informed Technology (pp. 306-313). Amsterdam, The Netherlands: IOS Press.

Jogezai, N. A., Ismail, S. A. M. M., & Ahmed, F. (2016). ICT integration & the role of

school leadership: perceptions of head teachers of secondary schools in Quetta

Pakistan. International Journal of Innovation and Scientific Research, 27(1), 155-163.

Jones, M. (1985). Applications of artificial intelligence within education. Computers &

Mathematics with Applications, 11(5), 517-526.

Jong, B. S., Lai, C. H., Hsia, Y. T., Lin, T. W., & Lu, C. Y. (2012). Using game-based

cooperative learning to improve learning motivation: A study of online game use

in an operating systems course. IEEE Transactions on Education, 56(2), 183-190.

Kafyulilo, A., Fisser, P., & Voogt, J. (2016). Factors affecting teachers’ continuation of

technology use in teaching. Education and Information Technologies, 21(6),

1535-1554.

Kalkhurst, D. (2018, March 12). Engaging Gen Z students and learners. Pearson

Education Limited. Retrieved January 6, 2019 from

https://www.pearsoned.com/engaging-gen-z-students/

Kamenetz, A. (2016, March 16). What artificial intelligence could mean for education.

NPREd. Retrieved January 6, 2019 from

http://www.npr.org/sections/ed/2016/03/16/470011574/what-artificial-

intelligence-could-mean-for-education

Karaca, F. (2015). An Investigation of Preservice Teachers’ Technological Pedagogical

Content Knowledge Based on a Variety of Characteristics. International Journal

of Higher Education, 4(4), 128-136. DOI:10.5430/ijhe.v4n4p128

Kassens-Noor, E. (2012). Twitter as a teaching practice to enhance active and informal

learning in higher education: The case of sustainable tweets. Active Learning in

Higher Education, 13(1), 9-21.

Kharpal, A. (2017, March 10). Amazon’s voice assistant Alexa could be a $10 billion

‘mega-hit’ by 2020: Research. CNBC, Tech Transformers. Retrieved January 6,

2019 from http://www.cnbc.com/2017/03/10/amazon-alexa-voice-assistan-could-

be-a-10-billion-mega-hit-by-2020-research.html

Kidd, T., & Morris, L. R. (2017). Handbook of Research on Instructional Systems and Educational Technology. The Chicago School of Professional Psychology, USA.

Kinsella, B. & Mutchler, A. (2019). Smart Speaker Consumer Adoption Report – March

2019. Voicebot.ai. Voicify. Retrieved June 7, 2019 from https://voicebot.ai/wp-

content/uploads/2019/03/smart_speaker_consumer_adoption_report_2019.pdf

Kim, C., Kim, M. K., Lee, C., Spector, J. M., & DeMeester, K. (2013). Teacher beliefs

and technology integration. Teaching and teacher education, 29, 76-85.

Kimbell-Lopez, K., Cummins, C., & Manning, E. (2016). Developing digital literacy in

the middle school classroom. Computers in the Schools, 33(4), 211-226.

Kiseleva, J., Williams, K., Jiang, J., Hassan Awadallah, A., Crook, A. C., Zitouni, I., &

Anastasakos, T. (2016, March). Understanding user satisfaction with intelligent

assistants. In Proceedings of the 2016 ACM on Conference on Human

Information Interaction and Retrieval (pp. 121-130). ACM.

Kline, P. (2000). The handbook of psychological testing. London: Routledge.

Klimova, B., & Poulova, P. (2016). Mobile Learning in Higher Education. Advanced

Science Letters, 22(5-6), 1111-1114.

Koedinger, K. R., & Aleven, V. (2016). An interview reflection on “Intelligent tutoring

goes to school in the big city”. International Journal of Artificial Intelligence in

Education, 26(1), 13-24.

Kormos, E. M. (2018). The unseen digital divide: Urban, suburban, and rural teacher use

and perceptions of web-based classroom technologies. Computers in the Schools,

35(1), 19-31. DOI: 10.1080/07380569.2018.1429168

Kurshan, B. (2016, March 10). The future of Artificial Intelligence in Education. Forbes.

Retrieved January 6, 2019 from

http://www.forbes.com/sites/barbarakurshan/2016/03/10/the-future-of-artificial-

intelligence-in-education/#5583b1341e64

Kyndt, E., Dochy, F., & Nijs, H. (2009). Learning conditions for non-formal and informal

workplace learning. Journal of Workplace Learning, 21(5), 369-383.

Laferrière, T., Hamel, C., & Searson, M. (2013). Barriers to successful implementation of

technology integration in educational settings: A case study. Journal of Computer

Assisted Learning, 29(5), 463-473.

Lai, C. (2019). Technology and Learner Autonomy: An Argument in Favor of the Nexus

of Formal and Informal Language Learning. Annual Review of Applied

Linguistics, 39, 52-58.

Lambert, J. & Cuper, P. (2008). Multimedia technologies and familiar spaces: 21st

century teaching for 21st century learners. Contemporary Issues in Technology

and Teacher Education, 8(3), 264-276.

Lambert, J., & Gong, Y. (2010). 21st century paradigms for pre-service teacher

technology preparation. Computers in the Schools, 27(1), 54-70.

doi:10.1080/07380560903536272

Large, D. R., Clark, L., Quandt, A., Burnett, G., & Skrypchuk, L. (2017). Steering the conversation: A linguistic exploration of natural language interactions with a digital assistant during simulated driving. Applied Ergonomics, 63, 53-61.

Larkin, J. H., & Chabay, R. W. (1992). Computer-assisted instruction and intelligent tutoring systems: Shared goals and complementary approaches. Technology in Education. Hillsdale, NJ: Lawrence Erlbaum Associates, Inc.

Latchem, C. R. (2014). Informal learning and non-formal education for development. Journal of Learning for Development-JL4D, 1(1).

Laurillard, D. (2009). The pedagogical challenges to collaborative technologies. International Journal of Computer-Supported Collaborative Learning, 4(1), 5-20.

Lemon, N., & Garvis, S. (2016). Pre-service teacher self-efficacy in digital technology.

Teachers and Teaching, 22(3), 387-408.

Leone, S. (ed.) (2014). Synergic Integration of Formal and Informal E-Learning

Environments for Adult Lifelong Learners. Hershey, PA: IGI Publisher.

Lester, J. C., Ha, E. Y., Lee, S. Y., Mott, B. W., Rowe, J. P., & Sabourin, J. L. (2013).

Serious games get smart: intelligent game-based learning environments. AI

Magazine, 34(4), 31-45.

Leswing, K. (2016, December 21). Amazon Echo is sold out for Christmas. Business

Insider. Retrieved January 12, 2020 from

http://www.businessinsider.com/amazon-echo-dot-fire-stick-shipping-dates-after-

christmas-2016-12

Levin, T., & Wadmany, R. (2008). Teachers' views on factors affecting effective

integration of information technology in the classroom: Developmental scenery.

Journal of Technology and Teacher Education, 16, 233-263.

Lindbeck, R., & Fodrey, B. (2010). Integrating technology into the college classroom:

Current practices and future opportunities. Journalism & Mass Communication

Educator, 70(3), 235-250.

Liu, Y., & Szabo, Z. (2009). Teachers’ attitudes toward technology integration in

schools: A four‐year study. Teachers and Teaching: theory and practice, 15(1), 5-

23.

List, A., Grossnickle, E. M., & Alexander, P. A. (2016). Undergraduate students’ justifications for source selection in a digital academic context. Journal of Educational Computing Research, 54(1), 22-61. DOI: 10.1177/0735633115606659

Livingstone, D. W. (2002). Working and learning in the information age: A profile of Canadians (Discussion Paper). Ottawa: Canadian Policy Research Network.

Lopatovska, I., Rink, K., Knight, I., Raines, K., Cosenza, K., Williams, H., ... Martinez, A. (2018). Talk to me: Exploring user interactions with the Amazon

Alexa. Journal of Librarianship and Information Science.

DOI:10.1177/0961000618759414

Luckin, R., Holmes, W., Griffiths, M. & Forcier, L. B. (2016). Intelligence Unleashed.

An argument for AI in Education. London: Pearson.

Lynn, M., & Gelb, B. D. (1997). Identifying innovative national markets for technical

consumer goods. International Marketing Review, 13(6), 43-57.

Mahdi, A. O., Alhabbash, M. I., & Naser, S. S. A. (2016). An intelligent tutoring system

for teaching advanced topics in information security. World Wide Journal of

Multidisciplinary Research and Development 2 (12):1-9.

Makki, T. W., O'Neal, L. J., Cotten, S. R., & Rikard, R. V. (2018). When first-order

barriers are high: A comparison of second-and third-order barriers to classroom

computing integration. Computers & Education, 120, 90-97.

Malcolm, J., Hodkinson, P., & Colley, H. (2003). Informality and formality in learning: a

report for the Learning and Skills Research Centre. Learning and Skills Research

Centre.

Malcolm, J., Hodkinson, P., & Colley, H. (2003). The interrelationships between

informal and formal learning. Journal of Workplace Learning, 15(7/8), 313-318.

Marangunić, N., & Granić, A. (2015). Technology acceptance model: A literature review

from 1986 to 2013. Universal Access in the Information Society, 14(1), 81-95.

DOI: 10.1007/s10209-014-0348-1

Markauskaite, L. (2006). Towards an integrated analytical framework of information and

communications technology literacy: from intended to implemented and achieved

dimensions. Information Research: An International Electronic journal, 11(3)3.

Marsick, V. J., & Watkins, K. E. (2001). Informal and incidental learning. New directions

for adult and continuing education, 2001(89), 25-34.

Marsick, V. J., & Watkins, K. E. (2015). Informal and Incidental Learning in the

Workplace (Routledge Revivals). London, UK: Routledge.

https://doi.org/10.4324/9781315715926

Marsick, V. J., & Volpe, M. (1999). The nature and need for informal learning. Advances

in developing human resources, 1(3), 1-9.

Martin, B., & Mitrovic, A. (2002). Automatic problem generation in constraint-based

tutors. In Intelligent Tutoring Systems: 6th International Conference, ITS 2002,

Biarritz, France and San Sebastian, Spain, June 2-7, 2002. Proceedings (pp. 33-

45). Berlin/Heidelberg: Springer.

McCorduck, P. (2004). Machines who think: A personal inquiry into the history and prospects of artificial intelligence (2nd ed.). Natick, MA: A K Peters/CRC Press.

McCoy, B. R. (2016). Digital distractions in the classroom phase II: Student classroom

use of digital devices for non-class related purposes. Journal of Media Education

7 (1), 5-32.

McDonald, R. A., Seifert, C. F., Lorenzet, S. J., Givens, S., & Jaccard, J. (2002). The

effectiveness of methods for analyzing multivariate factorial data. Organizational

Research Methods, 5(3), 255-274.

McMahon, M., & Pospisil, R. (2005). Laptops for a digital lifestyle: Millennial students

and wireless mobile technologies. Proceedings of the Australasian Society for

Computers in Learning in Tertiary Education, 421-431.

McNeal, M. (2016, December 7). A Siri for Higher Ed Aims to Boost Student

Engagement. EdSurge. Retrieved January 6, 2019 from

https://www.edsurge.com/news/2016-12-07-a-siri-for-higher-ed-aims-to-boost-

student-engagement

McTear, M., Callejas, Z., & Griol, D. (2016). The conversational interface: Talking to

smart devices. Springer International Publishing.

Merriam, S. B., Caffarella, R. S., & Baumgartner, L. (2012). Learning in adulthood: A

comprehensive guide (3rd ed.). New York: John Wiley & Sons

Milanesi, C. (2016, November 10). While Amazon Alexa quickly becomes part of the

family, Google Home is like a stranger who knows too much about you. Recode.

Retrieved January 6, 2019 from

https://www.recode.net/2016/11/10/13587170/google-home-amazon-echo-alexa-

digital-assistants-voice

Miller, L. C. (2012). Situating the rural teacher labor market in the broader context:

A descriptive analysis of the market dynamics in New York state. Journal of

Research in Rural Education, 27(13), 1-31. Retrieved January 6, 2020 from

http://jrre.vmhost.psu.edu/wp-content/uploads/2014/02/27-13.pdf

Mills, L. A., Knezek, G., & Khaddage, F. (2014). Information seeking, information

sharing, and going mobile: Three bridges to informal learning. Computers in

Human Behavior, 32, 324-334. DOI: 10.1016/j.chb.2013.08.008

Minsky, M. L., Singh, P., & Sloman, A. (2004). The St. Thomas common sense

symposium: Designing architectures for human-level intelligence. AI

Magazine, 25(2), 113.

Mishra, P., & Koehler, M. J. (2006). Technological pedagogical content knowledge: A

framework for teacher knowledge. Teachers College Record, 108(6), 1017–1054.

Mohammed, P. S. (2019). Towards Inclusive Education in the Age of Artificial

Intelligence: Perspectives, Challenges, and Opportunities. In J. Knox, Y. Wang,

M. Gallagher (Eds.) Artificial Intelligence and Inclusive Education. Springer,

Singapore. DOI: 10.1007/978-981-13-8161-4

Morreale, S. P., & Staley, C. M. (2016). Millennials, teaching and learning, and the

elephant in the college classroom. Communication Education, 65(3), 370-373.

Mouza, C., Karchmer-Klein, R., Nandakumar, R., Yilmaz-Ozden, S., & Hun, L. (2014).

Investigating the impact of an integrated approach to the development of

preservice teachers’ technological pedagogical content knowledge (TPACK).

Computers & Education, 71, 206–221. doi: 10.1016/j.compedu.2013.09.020

Mun, Y. Y., & Hwang, Y. (2003). Predicting the use of web-based information systems:

Self-efficacy, enjoyment, learning goal orientation, and the technology acceptance

model. International Journal of Human-Computer Studies, 59(4), 431-449. DOI:

10.1016/S1071-5819(03)00114-9

Murgatroyd, S. (2017). Education, technology and simple innovation. In

Khare, B. Stewart, R. Schatz (Eds.) Phantom Ex Machina. Switzerland: Springer

International Publishing. DOI: 10.1007/978-3-319-44468-0

Naone, E. (2009). TR10: Intelligent software assistant. MIT Technology Review.

Retrieved January 6, 2019 from

http://www2.technologyreview.com/news/412191/tr10-intelligent-software-

assistant/

Naser, S. S. A. (2012). A Qualitative Study of LP-ITS: Linear Programming Intelligent

Tutoring System. International Journal of Computer Science & Information

Technology, 4(1), 209.

Nass, C. I., & Brave, S. (2005). Wired for speech: How voice activates and advances the

human-computer relationship. Cambridge, MA: MIT Press.

National Research Council. (2000). How people learn: Brain, mind, experience, and

school: Expanded edition. Washington, DC: The National Academies Press.

https://doi.org/10.17226/9853

Negri, M., Turchi, M., de Souza, J. G., & Falavigna, D. (2014, August). Quality

Estimation for Automatic Speech Recognition. In COLING (pp. 1813-1823).

Nenty, H. J. (2009). Writing a quantitative research thesis. International Journal of

Educational Sciences, 1(1), 19-32.

Netlingo (n.d.). SMART Tech Definition. Retrieved October 6, 2019 from

https://www.netlingo.com/word/smart-tech.php

Noam, G. G., Biancarosa, G., & Dechausay, N. (2003). Afterschool education:

Approaches to an emerging field (Kindle). Cambridge, MA: Harvard Educational

Publishing.

Nogrady, B. (2016, November 10). The real risks of artificial intelligence. BBC

Future. Retrieved January 6, 2019 from

http://www.bbc.com/future/story/20161110-the-real-risks-of-artificial-intelligence

Norvig, P. (2012, December). Artificial intelligence. NewScientist, ii–vii. Retrieved

January 6, 2019 from

https://www.newscientist.com/data/doc/article/dn19554/instant_expert_27_artifici

al_intelligence.pdf

Novosadova, M., Selen, G., Piskunowicz, A., Mousa, S. H. N., Suoheimo, S., Radinja, T.,

& Reuter, P. (2007). NFE Book–The impact of Non Formal Education on young

people and society. Brussels: AEGEE Europe.

Nüttgens, M., Gadatsch, A., Kautz, K., Schirmer, I., Blinn, N., Dwivedi, Y., …Williams,

M. (2011). Governance and Sustainability in Information Systems. Managing the

Transfer and Diffusion of IT: IFIP WG 8.6 International Working Conference,

September 22-24, Vol. 366, Springer Science & Business Media, Hamburg, (pp.

155-170).

Executive Office of the President of the United States, Office of Management and

Budget. (2018). Efficient, effective, accountable: an American budget: budget of

the U.S. government. Washington, D.C. Retrieved January 6, 2019 from

https://www.govinfo.gov/content/pkg/BUDGET-2019-BUD/pdf/BUDGET-2019-

BUD.pdf

Oğuzhan, A. (2019). Challenges in integrating technology into education. Information

Technologies and Applied Sciences, 14(1), 1-19. Retrieved January 6, 2019 from

https://www.researchgate.net/publication/332056490_Challenges_in_Integrating_

Technology_into_Education. DOI: 10.7827/TurkishStudies.14810

Ohio University (n.d.). Teacher Education Majors. Retrieved May 11, 2019, from

https://www.ohio.edu/education/teacher-ed/majors

Ohio University Office of Institutional Research. (n.d.). ACT Class Profile A Graphic

Presentation of the First-Year (Freshman) Class Entering Fall 2017 Ohio

University. Retrieved October 19, 2019 from

https://www.ohio.edu/instres/student/ACTClassProfile.pdf

Ohio University Office of Institutional Research and Effectiveness. (2017). Ohio

University fact book. Retrieved November 23, 2019 from

https://www.ohio.edu/instres/factbook.pdf

Olson, M. P. (2015). A multilateral approach to bridging the global skills gap. Cornell

University, ILR School. Cornell HR. Retrieved on June 8, 2019 from

https://digitalcommons.ilr.cornell.edu/cgi/viewcontent.cgi?article=1072&context

=chrr

Orlando, J., & Attard, C. (2016). Digital natives come of age: the reality of today’s early

career teachers using mobile devices to teach mathematics. Mathematics

Education Research Journal, 28(1), 107–121. DOI: 10.1007/s13394-015-0159-6

Ottenbreit-Leftwich, A. T., Ertmer, P. A., & Tondeur, J. (2015). Interpretation of research

on technology integration in teacher education in the USA: Preparation and

current practices. In H. Fives & M. G. Gill (Eds.), International handbook of

interpretation in educational research (pp. 1239-1262). Netherlands: Springer.

DOI: 10.1007/978-94-017-9282-0_61

Ottenbreit-Leftwich, A., Glazewski, K. & Newby, T. (2010). Preservice Technology

Integration Course Revision: A Conceptual Guide. Journal of Technology and

Teacher Education, 18(1), 5-33. Waynesville, NC USA: Society for Information

Technology & Teacher Education. Retrieved April 20, 2018 from

https://www.learntechlib.org/p/28346/

Padmavathi, M. (2016). A study of student-teachers’ readiness to use computers in

teaching: An empirical study. I-Manager’s Journal on School Educational

Technology, 11(3), 29-39.

Park, S. Y. (2009). An Analysis of the Technology Acceptance Model in Understanding

University Students’ Behavioral Intention to Use e-Learning Research hypotheses.

Educational Technology & Society, 12(3), 150–162.

Park, S. Y., & Jo, I. H. (2015). Development of the learning analytics dashboard to

support students’ learning performance. Journal of Universal Computer Science,

21(1), 110.

Park, S. Y., Nam, M. W., & Cha, S. B. (2012). University students’ behavioral intention

to use mobile learning: Evaluating the technology acceptance model. British

Journal of Educational Technology, 43(4), 592-605. DOI: 10.1111/j.1467-

8535.2011.01229.x

Parker, K., Horowitz, J., Brown, A., Fry, R., & Cohn, D. (2018). What Unites and

Divides Urban, Suburban and Rural Communities. Retrieved October 19, 2019

from https://www.pewsocialtrends.org/2018/05/22/what-unites-and-divides-

urban-suburban-and-rural-communities/

Peixiao, Q., Man, Z., & Gang, W. (2016, September). Analysis on science & technology

innovation and its culture in China. In Management of Engineering and

Technology (PICMET), 2016 Portland International Conference on (pp. 539-

544). IEEE.

Perrin, A., & Duggan, M. (2015). Americans’ Internet access: 2000-2015. Pew Research

Center. Retrieved from

https://www.pewresearch.org/internet/2015/06/26/americans-internet-access-

2000-2015/

Piaget, J. (1994). The equilibration of cognitive structures: The central problem of

intellectual development. Chicago, IL: University of Chicago Press.

Pink, D. H. (2012). A whole new mind: why right-brainers will rule the future. New

York, NY: Riverhead Books.

Polly, D., & Orrill, C. H. (2016). Designing professional development to support

teachers’ TPACK in elementary school mathematics. In: M. C. Herring, M. J.

Koehler, & P. Mishra (Eds.), Handbook of technological pedagogical content

knowledge (TPACK) for educators (pp. 259–268). New York & London:

Routledge.

Prensky, M. (2001). Digital natives, digital immigrants. On the Horizon, 9(5), 1-6.

Prensky, M. (2013). Our brains extended. Educational Leadership, 70(6), 22-27.

Puron-Cid, G., Gil-Garcia, J. R., & Luna-Reyes, L. F. (2016). Opportunities and

challenges of policy informatics: Tackling complex problems through the

combination of open data, technology and analytics. International Journal of

Public Administration in the Digital Age (IJPADA), 3(2), 66-85.

Raman, A., & Don, Y. (2013). Preservice teachers’ acceptance of Learning Management

Software: An application of the UTAUT2 model. International Education

Studies, 6(7).

Razzaq, L. & Heffernan, N. (2009). To tutor or not to tutor: That is the question. In

Dimitrova, Mizoguchi, du Boulay & Graesser (Eds.) Proceedings of the 2009

Artificial Intelligence in Education Conference. pp. 457-464.

Reed, T. (2014). Digitized Lives. New York: Routledge. DOI: 10.4324/9780203374672

Rehmat, A. P., & Bailey, J. M. (2014). Technology integration in a science classroom:

Preservice teachers’ perceptions. Journal of Science Education and Technology,

23(6), 744-755. DOI:10.1007/s10956-014-9507-7

Reychav, I., Dunaway, M., & Kobayashi, M. (2015). Understanding mobile technology-

fit behaviors outside the classroom. Computers & Education, 87, 142-150. DOI:

10.1016/j.compedu.2015.04.005

Reyes, J. A. (2015). The skinny on big data in education: Learning analytics simplified.

TechTrends, 59(2), 75-80.

Richardson, W. (2013). Students first, not stuff. Educational Leadership, 70(6), 10-14.

Richter, F. (2016, August 26). Digital Assistants – Always at Your Service. Statista.

Retrieved January 6, 2020 from https://www.statista.com/chart/5621/users-of-

virtual-digital-assistants/

Ritterfeld, U., Cody, M. J., & Vorderer, P. (2009). Serious games: mechanisms and

effects. New York: Routledge.

Robertson, A. (2019, March). Preparing for Generation Z: How can technology enhanced

learning be firmly embedded in our students' learning experience? A case study

from Abertay University. In 13th International Technology, Education and

Development Conference (pp. 5769-5773). IATED Academy.

Roehl, A., Reddy, S. L., & Shannon, G. J. (2013). The flipped classroom: an opportunity

to engage Millennial students through active learning. Journal of Family and

Consumer Sciences, 105(2), 44-49.

Rogers, E. M. (1962). Diffusion of Innovations. New York: Free Press.

Rogers E. M. (1995) Diffusion of Innovations: Modifications of a Model for

Telecommunications. In: M.W., Stoetzer, A. Mahler (eds.) In die Diffusion von

Innovationen in der Telekommunikation. Schriftenreihe des Wissenschaftlichen

Instituts für Kommunikationsdienste. 17. Springer, Berlin, Heidelberg DOI:

10.1007/978-3-642-79868-9_2

Rogers, E. M. (2003). Diffusion of Innovations, (5th ed.). New York: Free Press.

Rondan-Cataluña, F., Arenas-Gaitán, J., & Ramírez-Correa, P. (2015). A comparison of

the different versions of popular technology acceptance models: A non-linear

perspective. Kybernetes, 44(5), 788-805. DOI: 10.1108/K-09-2014-0184

Rothman, D. (2016). A tsunami of learners called Generation Z. Maryland Public Safety Online Journal, 1(1). Retrieved September 21, 2019 from https://mdle.net/Journal/A_Tsunami_of_Learners_Called_Generation_Z.pdf

Russell, G., & Bradley, G. (1997). Teachers’ computer anxiety: Implications for professional development. Education and Information Technologies, 2, 17-30.

Russell, M., Bebell, D., O’Dwyer, L., & O’Connor, K. (2003). Examining teacher technology use: Implications for preservice and inservice teacher preparation. Journal of Teacher Education, 54(4), 297-310.

Russell, S. (2016, January 22). The state of artificial intelligence [Video file]. World Economic Forum. Davos, Switzerland. Retrieved January 6, 2019 from https://www.weforum.org/events/world-economic-forum-annual-meeting-2016/sessions/the-state-of-artificial-intelligence/

Sacks, E. (2018, May 26). Alexa privacy fail highlights risks of smart speakers. NBC

News. Retrieved December 13, 2019 from

https://www.nbcnews.com/tech/innovation/alexa-privacy-fail-highlights-risks-

smart-speakers-n877671

Sánchez-Prieto, J. C., Olmos-Migueláñez, S., & García-Peñalvo, F. J. (2017). MLearning

and pre-service teachers: An assessment of the behavioral intention using an

expanded TAM model. Computers in Human Behavior, 72, 644-654. DOI:

10.1016/j.chb.2016.09.061

Samuel, A. L. (1959). Some studies in machine learning using the game of checkers. IBM

Journal of Research and Development, 3(3), 210-229.

Schindlholzer, B. (2016). Artificial intelligence & the future of education [Video file].

TEDxTalks. Retrieved January 6, 2019 from

https://www.youtube.com/watch?v=ZdHhs-I9FVo

Schroeder, N. L., Adesope, O. O., & Gilbert, R. B. (2013). How effective are pedagogical

agents for learning? A meta-analytic review. Journal of Educational Computing

Research, 49(1), 1-39. DOI: 10.2190/EC.49.1.a

Seckel, S. (2017, August 17). ASU, Amazon bring first-of-its-kind voice-technology

program to campus. Ira A. Fulton Schools of Engineering. ASU Now. Retrieved

January 6, 2019 from https://asunow.asu.edu/20170817-asu-news-asu-amazon-

dots-tooker-house

Selwyn, N. (2016). Education and technology: key issues and debates. New York:

Bloomsbury Academic.

Selwyn, N., Nemorin, S., Bulfin, S., & Johnson, N. F. (2017). Left to their own devices:

the everyday realities of one-to-one classrooms. Oxford Review of

Education, 43(3), 289-310.

Shead, S. (2017, April 6). Report: 1 in 4 people have fantasized about Alexa, Siri and

other AI assistants. Business Insider. Retrieved January 6, 2019 from

http://www.businessinsider.com/jwt-speak-easy-study-people-fantasised-about-

alexa-2017-4

Shroff, R. H., Deneen, C., & Ng, E. M. W. (2011). An analysis of the technology

acceptance model in examining students’ behavioral intention to use an electronic

portfolio system. Australasian Journal of Educational Technology, 27(4), 600-

618.

Shute, V. J. (2008). Focus on formative feedback. Review of Educational

Research, 78(1), 153-189.

Simoni, Z. R., Gibson, P., Cotten, S. R., Stringer, K., & Coleman, L. O. (2016). Does

place matter? The effects of concentrated poverty on the computer use of

elementary students. Journal of Urban Technology, 23(3), 3-21.

Skinner, B. F. (1964). New methods and new aims in teaching. New Scientist, 122, 483–

484.

Sleeman, D., & Brown, J. S. (1982). Intelligent Tutoring Systems. London, UK:

Academic Press.

Solnet, D., Baum, T., Robinson, R. N., & Lockstone-Binney, L. (2016). What about the

workers? Roles and skills for employees in hotels of the future. Journal of

Vacation Marketing, 22(3), 212-226.

Spector, J. M., & Park, S. W. (2017). Motivation, learning, and technology: Embodied

educational motivation. Milton: Taylor and Francis.

St-Jean, E., & Audet, J. (2012). The role of mentoring in the learning development of the

novice entrepreneur. International Entrepreneurship and Management Journal,

8(1), 119–140.

Straub, D., & Burton-Jones, A. (2007). Veni, vidi, vici: Breaking the TAM logjam.

Journal of the Association for Information Systems, 8(4), 5.

Straub, E. T. (2009). Understanding technology adoption: Theory and future directions

for informal learning. Review of Educational Research, 79(2), 625-649.

Strong, A. I. (2016). Applications of artificial intelligence & associated

technologies. Science [ETEBMS-2016], 5, 6.

Suhr, D. (2006). Exploratory or Confirmatory Factor Analysis. SAS Users Group

International Conference (pp. 1-17). Cary, NC: SAS Institute, Inc.

Susi, T., Johannesson, M., & Backlund, P. (2007). Serious games: An overview. School

of Humanities and Informatics, University of Skövde, Sweden, Technical Report

HS-IKI-TR-07-001, 2007.

Taylor, P., & Keeter, S. (2010). Millennials: A portrait of generation next. Washington,

DC: Pew Research Center. Retrieved January 6, 2019 from

http://www.pewsocialtrends.org/files/2010/10/millennials-confident-connected-

open-to-change.pdf

Teo, T. (2011). Factors influencing teachers’ intention to use technology: Model

development and test. Computers & Education, 57(4), 2432-2440.

Teo, T., Fan, X., & Du, J. (2015). Technology acceptance among pre-service teachers:

Does gender matter? Australasian Journal of Educational Technology, 31(3),

235-251. DOI: 10.14742/ajet.1672

Thomas, S. (2016). Future ready learning: Reimagining the role of technology in

education. 2016 National education technology plan. Washington, DC: Office of

Educational Technology, US Department of Education.

Thussu, D. K. (2007). News as entertainment: The rise of global infotainment. London:

SAGE Publications Ltd. DOI: 10.4135/9781446220337

Tondeur, J., Aesaert, K., Prestridge, S., & Consuegra, E. (2018). A multilevel analysis of

what matters in the training of pre-service teacher's ICT competencies. Computers

& Education, 122, 32-42.

Tondeur, J., Van Braak, J., Sang, G., Voogt, J., Fisser, P., & Ottenbreit-Leftwich, A.

(2012). Preparing pre-service teachers to integrate technology in education: A

synthesis of qualitative evidence. Computers & Education, 59(1), 134-144. DOI:

10.1016/j.compedu.2011.10.009

Torres, K. M., & Statti, A. (2019). A Global Perspective of Classroom Technology

Integration and Use. In Educational Technology and the New World of Persistent

Learning (pp. 93-113). IGI Global.

Trivedi, N. (2018). ProblemPal: Generating Autonomous Practice Content in Real-Time

with Voice Commands and Amazon Alexa. In World Conference on E-Learning

in Corporate, Government, Healthcare, and Higher Education (pp. 80-82). Las

Vegas, NV, Association for the Advancement of Computing in Education

(AACE). Retrieved from https://www.learntechlib.org/primary/p/184950/

Turing, A. M. (1950). Computing machinery and intelligence. Mind, 59(236), 433-460.

U.S. Department of Education. (n.d.). Use of Technology in Teaching and Learning.

Retrieved January 6, 2019, from https://www.ed.gov/oii-news/use-technology-

teaching-and-learning

United States Census Bureau. (2019). Athens County, OH. Retrieved October 19, 2019

from https://www.census.gov/search-

results.html?q=athens++county+ohio&page=1&stateGeo=none&searchtype=web

&cssp=SERP&_charset_=UTF-8

United States Census Bureau. (2019). Cuyahoga County, OH. Retrieved October 19,

2019 from https://www.census.gov/search-

results.html?q=Cuyahoga++county+ohio&page=1&stateGeo=none&searchtype=

web&cssp=SERP&_charset_=UTF-8

United States Census Bureau. (2019). Franklin County, OH. Retrieved October 19, 2019

from https://www.census.gov/search-

results.html?q=franklin+county+ohio&page=1&stateGeo=none&searchtype=web

&cssp=SERP&_charset_=UTF-8

United States Census Bureau. (2019). Hamilton County, OH. Retrieved October 19, 2019

from https://www.census.gov/search-

results.html?q=hamilton+county+ohio&page=1&stateGeo=none&searchtype=we

b&cssp=SERP&_charset_=UTF-8

Van Eck, R. (2006). Digital game-based learning: It’s not just the digital natives who are

restless. EDUCAUSE Review, 41(2), 16.

VanLehn, K. (2006). The behavior of tutoring systems. International Journal of Artificial

Intelligence in Education, 16(3), 227-265.

VanLehn, K. (2011). The relative effectiveness of human tutoring, intelligent tutoring

systems, and other tutoring systems. Educational Psychologist, 46(4), 197-221.

DOI: 10.1080/00461520.2011.611369

Veletsianos, G. (2010). Contextually relevant pedagogical agents: Visual appearance,

stereotypes, and first impressions and their impact on learning. Computers &

Education, 55(2), 576-585.

Venkatesh, V., & Bala, H. (2008). Technology acceptance model 3 and a research agenda

on interventions. Decision Sciences, 39(2), 273-315. DOI: 10.1111/j.1540-

5915.2008.00192.x

Venkatesh, V., Morris, M. G., Davis, G. B., & Davis, F. D. (2003). User acceptance of

information technology: Toward a unified view. MIS Quarterly, 27(3), 425-478.

Villanueva, F. J., Villa, D., Moya, F., Santofimia, M. J., & López, J. C. (2012). Internet

of Things architecture for an RFID-based product tracking business model. In

2012 Sixth International Conference on Innovative Mobile and Internet Services

in Ubiquitous Computing (IMIS), Palermo (pp. 811-816). IEEE.

Vinu, P. V., Sherimon, P. C., & Krishnan, R. (2011). Towards pervasive mobile learning

– the vision of 21st century. Procedia Social and Behavioral Sciences, 15, 3067–

3073.

Vongkulluksn, V. W., Xie, K., & Bowman, M. A. (2018). The role of value on teachers'

internalization of external barriers and externalization of personal beliefs for

classroom technology integration. Computers & Education, 118, 70-81.

Vygotsky, L. S. (1978). Mind in society: The development of higher psychological

processes. Cambridge, MA: Harvard University Press.

Wachira, P., & Keengwe, J. (2011). Technology integration barriers: Urban school

mathematics teachers’ perspectives. Journal of science education and technology,

20(1), 17-25.

Wang, B., Liu, H., An, P., Li, Q., Li, K., Chen, L., ... & Gu, S. (2018). Artificial

Intelligence and Education. In Reconstructing Our Orders (pp. 129-161).

Springer, Singapore.

Wang, L., Ertmer, P. A., & Newby, T. J. (2004). Increasing preservice teachers’ self-

efficacy beliefs for technology integration. Journal of Research on Technology in

Education, 36(3), 231-250. DOI:10.1080/15391523.2004.10782414

Warner, R. M. (2008). Applied statistics: From bivariate through multivariate

techniques. Thousand Oaks, CA: Sage.

Weber, G., & Brusilovsky, P. (2016). ELM-ART: An interactive and intelligent web-

based electronic textbook. International Journal of Artificial Intelligence in

Education, 26(1), 72-81.

Wedman, J., & Heller, M. (1984). Concerns of teachers about educational computing.

AEDS Journal, 18(1), 31-40.

Weizenbaum, J. (1976). Computer power and human reason: From judgment to

calculation. W. H. Freeman & Company.

Wenger, E. (2014). Artificial intelligence and tutoring systems: Computational and

cognitive approaches to the communication of knowledge. Los Altos, CA:

Morgan Kaufmann.

Wireless Watch (2017, May 3). Amazon pushes Alexa into host of devices as teeming

throng of AIs fills room. The Register. Retrieved January 6, 2019 from

https://www.theregister.co.uk/2017/05/03/ai_assistants_advance_party_in_battle_

for_new_web_experience/

Wixom, B., Ariyachandra, T., Douglas, D., Goul, M., Gupta, B., Iyer, L., Kulkarni, U.,

Mooney, J. G., Phillips-Wren, G., & Turetken, O. (2014). The current state of

business intelligence in academia: The arrival of big data. Communications of the

Association for Information Systems, 34(1). Retrieved January 6, 2019 from

http://aisel.aisnet.org/cais/vol34/iss1/1

Wood, D. J., Bruner, J. S., & Ross, G. (1976). The role of tutoring in problem solving.

Journal of Child Psychology & Psychiatry, 17(2), 89–100.

Woolf, B. P. (2010). Building Intelligent Interactive Tutors: Student-Centered Strategies

for Revolutionizing E-Learning. Burlington, MA: Morgan Kaufmann.

World Economic Forum (2016). The State of Artificial Intelligence [Video file].

Retrieved January 6, 2017 from https://www.weforum.org/events/world-

economic-forum-annual-meeting-2016/sessions/the-state-of-artificial-intelligence/

Xu, C., Hao, Q., & Han, G. (2017). Research on the marketing strategy of the new media

age based on AISAS model: A Case study of micro channel marketing. In Fourth

International Forum on Decision Sciences (pp. 477-486). Singapore: Springer

International.

Yakin, I., & Tinmaz, H. (2015). Theoretical Guidelines for the Utilization of Instructional

Social Networking Websites. Turkish Online Journal of Distance Education,

16(4), 67-83.

Yauch, C. A., & Steudel, H. J. (2003). Complementary use of qualitative and quantitative

cultural assessment methods. Organizational Research Methods, 6(4), 465-481.

Yoder, M. (2018, March 21). 8 Reasons to use a digital assistant in your classroom. ISTE.

Retrieved January 6, 2019 from

https://www.iste.org/explore/articleDetail?articleid=2170

Zimmer, C. (2017). Getting to know Gen Z–Exploring middle and high schoolers’

expectations for Higher Education. Barnes & Noble College. Retrieved January 6,

2019 from http://next.bncollege.com/wp-content/uploads/2015/10/Gen-Z-

Research-Report-Final.pdf

Zimmerle, J. C., & Lambert, C. (2019). Globally Connected. Theory & Practice in Rural

Education, 9(1), 91-104. https://doi.org/10.3776/tpre.2019.v9n1p91-104

Appendix A: IRB Approval


Appendix B: IRB Amendment


Appendix C: Permission to Use and Adapt the CBAM Survey



Appendix D: Informed Consent

Dear EDCT 2030 Student,

My name is Federica Incerti and I am a doctoral student at Ohio University in the Patton College of Education. I am currently working on a study exploring perceptions of an emerging technology, the Amazon Alexa, for formal and informal learning in educational settings. I would appreciate your participation in my survey, which should take about 10 minutes to complete. To conduct research at Ohio University, I must obtain informed consent. This means that your participation is voluntary. The information that follows tells you about the study.

You are being asked to participate in research. For you to be able to decide whether you want to participate in this project, you should understand what the project is about, as well as the possible risks and benefits in order to make an informed decision. This process is known as informed consent. This form describes the purpose, procedures, possible benefits, and risks. It also explains how your personal information will be used and protected. Once you have read this form and your questions about the study are answered, you will be asked to sign it. This will allow your participation in this study. You should receive a copy of this document to take with you.

Explanation of Study: This exploratory study examines the perceptions of preservice teachers enrolled in EDCT 2030, Technology Applications in Education, regarding the use of emerging technologies such as Amazon’s Alexa for teaching and learning in classrooms. This study will hopefully provide insights into how emerging technologies, currently developing or to be developed over the next five years, may be adopted for learning in K-12. This is an exploratory study designed to provide insights for the development of apps for the Amazon Alexa that may be employed for teaching and learning in K-12 in the future, where the device might be used as an “electronic tutor.” If you agree to participate, you will be asked to 1) read educational documents on the Amazon Alexa and emerging technologies, 2) view a presentation on the Amazon Alexa as part of your class and participate in the use of the device, and 3) complete a survey found on your Blackboard course site. Your participation in the study will last two class periods at most, and if needed, volunteers may be sought to member-check findings. Grades in the class will not be affected one way or the other by your decision to participate or not participate in the research study. A copy of this agreement will be posted on your Blackboard course site for reference.

Risks and Discomforts: There are no known risks or discomforts associated with this research for the participant. Emerging technologies are part of the course and will be studied over the semester. The Amazon Alexa is one such emerging technology and is part of a larger group of emerging technologies you will discuss as part of the EDCT 2030 course. No risks or discomforts are anticipated as part of the discussion of emerging technologies.

Benefits: This study is important to science/society because it will provide baseline data on the perceptions of preservice teachers who will enter the field of teaching in the next three to four years about their use and understanding of this emerging technology. By knowing the perceptions of preservice teachers concerning the use and possible implementation of the Amazon Alexa in K-12, application developers can make better decisions about the development of K-12 tools which act as artificial intelligence (AI) tutors. These preservice teachers are also part of the millennial generation known for their expertise in the use of technology. Individually, you may benefit by learning about the Amazon Alexa as an AI tutor for use with students in K-12 settings.

Confidentiality and Records: Your study information will be kept confidential by the collection of data without individual identifiers. The study is interested in aggregated data; therefore, the information collected will not be identifiable to the individuals who completed the study or to faculty of the course in which the survey is distributed. Additionally, while every effort will be made to keep your study-related information confidential, there may be circumstances where this information must be shared with: * federal agencies, for example the Office for Human Research Protections, whose responsibility is to protect human subjects in research, and * representatives of Ohio University (OU), including the Institutional Review Board, a committee that oversees the research at OU.

Compensation: There is no compensation for participation in the study. The Amazon Alexa has been chosen as one of the emerging technologies studied this semester.

Contact Information: If you have any questions regarding this study, please contact the investigator, Federica Incerti, [email protected], 703-993-1976, or the advisor, Dr. Greg Kessler, [email protected], 740-593-2748.

If you have any questions regarding your rights as a research participant, please contact Dr. Chris Hayhow, Director of Research Compliance, Ohio University, (740)593-0664 or [email protected].

You are agreeing that:

• you have read this consent form (or it has been read to you) and have been given the opportunity to ask questions and have them answered;

• you have been informed of potential risks and they have been explained to your satisfaction;

• you understand Ohio University has no funds set aside for any injuries you might receive as a result of participating in this study;

• you are 18 years of age or older;

• your participation in this research is completely voluntary;

• you may leave the study at any time; if you decide to stop participating in the study, there will be no penalty to you and you will not lose any benefits to which you are otherwise entitled.

Appendix E: Stages of Concern Questionnaire

Instructions

The survey questions were developed from typical responses of schoolteachers. You can take this survey even if you do not know anything about AI tutors like the Amazon Alexa, or if you have no experience working with the Amazon Alexa. Therefore, many of the items on this questionnaire may appear to be of little relevance or irrelevant to you at this time. For completely irrelevant items, please select “0” on the scale. Other items (1-7) will represent those concerns you do have, in varying degrees of intensity, and should be marked higher on the scale. For least concern select “1”; for highest concern select “7”.

Please respond to each statement in terms of your present concerns about your use or potential use of AI Tutors like Amazon Alexa, Google Home, or similar devices.

Thank you for taking time to complete this task.


0 1 2 3 4 5 6 7 Irrelevant Not true of me now Somewhat true of me now Very true of me now

Circle One Number For Each Item

1. I am concerned about students’ attitudes toward the Amazon Alexa when using such a device in my classroom. 0 1 2 3 4 5 6 7

2. I now know of some other approaches that might work better than Amazon Alexa. 0 1 2 3 4 5 6 7

3. I am more concerned about another innovation than using the Amazon Alexa in my classroom. 0 1 2 3 4 5 6 7

4. I am concerned about not having enough time to organize myself each day if I use the Amazon Alexa in my classroom. 0 1 2 3 4 5 6 7

5. I would like to help other teachers with their use of Amazon Alexa. 0 1 2 3 4 5 6 7

6. I have a very limited knowledge of the Amazon Alexa for classroom use. 0 1 2 3 4 5 6 7

7. I would like to know the effect of reorganization on my professional status (teacher) when using an Amazon Alexa. 0 1 2 3 4 5 6 7

8. I am concerned about conflict between my interests and my responsibilities because of my future use of the Amazon Alexa. 0 1 2 3 4 5 6 7

9. I am concerned about revising my use of Amazon Alexa. 0 1 2 3 4 5 6 7

10. I would like to develop working relationships with both our teachers and outside teachers using Amazon Alexa. 0 1 2 3 4 5 6 7

11. I am concerned about how Amazon Alexa may affect students. 0 1 2 3 4 5 6 7

12. I am not concerned about using the Amazon Alexa at this time. 0 1 2 3 4 5 6 7

13. I would like to know who will make the decisions on using the Amazon Alexa in my classroom. 0 1 2 3 4 5 6 7

14. I would like to discuss the possibility of using Amazon Alexa in the classroom. 0 1 2 3 4 5 6 7

15. I would like to know what resources are available if we decide to adopt Amazon Alexa for use in the classroom. 0 1 2 3 4 5 6 7

16. I am concerned about my inability to manage all that using the Amazon Alexa requires in my classroom. 0 1 2 3 4 5 6 7

17. I would like to know how my teaching or administration is supposed to change by using the Amazon Alexa in my classroom. 0 1 2 3 4 5 6 7

18. I would like to familiarize other departments or teachers with the progress of this new approach in the classroom using the Amazon Alexa. 0 1 2 3 4 5 6 7


19. I am concerned about evaluating my impact on students. 0 1 2 3 4 5 6 7

20. I would like to revise the use of the Amazon Alexa’s approach in my classroom. 0 1 2 3 4 5 6 7

21. I am preoccupied with things other than Amazon Alexa. 0 1 2 3 4 5 6 7

22. I would like to modify our use of Amazon Alexa based on the experiences of my students. 0 1 2 3 4 5 6 7

23. I spend little time thinking about Amazon Alexa. 0 1 2 3 4 5 6 7

24. I would like to excite my students about using the Amazon Alexa to help with their schoolwork. 0 1 2 3 4 5 6 7

25. I am concerned about time spent working with nonacademic problems related to Amazon Alexa. 0 1 2 3 4 5 6 7

26. I would like to know what the use of Amazon Alexa will require in the immediate future for me in my classroom. 0 1 2 3 4 5 6 7

27. I would like to coordinate my efforts with others to maximize the Amazon Alexa’s effects. 0 1 2 3 4 5 6 7

28. I would like to have more information on time and energy commitments required by Amazon Alexa. 0 1 2 3 4 5 6 7

29. I would like to know what other teachers are doing with Amazon Alexa. 0 1 2 3 4 5 6 7

30. Currently, other priorities prevent me from focusing my attention on Amazon Alexa. 0 1 2 3 4 5 6 7

31. I would like to determine how to supplement, enhance, or replace the use of Amazon Alexa. 0 1 2 3 4 5 6 7

32. I would like to use feedback from students to change the use of Amazon Alexa. 0 1 2 3 4 5 6 7

33. I would like to know how my role will change when I am using Amazon Alexa. 0 1 2 3 4 5 6 7

34. Coordination of tasks and people is taking too much of my time when using the Amazon Alexa. 0 1 2 3 4 5 6 7

35. I would like to know how Amazon Alexa is better than what we have now. 0 1 2 3 4 5 6 7

Additional Demographic Questions

36. What year are you in your program/degree?

Freshman ______Sophomore ______Junior ______Senior ______

37. What is your major?

AYA Integrated Ed. ______Middle School Ed. ______Other ______

38. Please enter your age: ______

39. Please enter your gender: ______

40. How many years of experience do you have in teaching? ______

41. Please describe the environment where you want to teach:

______

42. How many students are enrolled in the school where you will be teaching? ______

43. Which grade are you planning to teach? ______

44. Are you planning to teach in an urban, suburban, or rural area?

______

45. If you have any comments, or would like to receive additional information about the study, please enter your name and email address.

______


Appendix F: Graphic Permissions


Appendix F: Graphic Permissions (Cont.)


Appendix G: Stages of Concern Quick Scoring Device


Stages of Concern about the Innovation

0 Awareness: Little concern or involvement with the innovation is indicated.

1 Informational: A general awareness of the innovation and interest in learning more detail about it is indicated. The person seems to be unworried about himself/herself in relation to the innovation. She/he is interested in substantive aspects of the innovation in a selfless manner such as general characteristics, effects, and requirements for use.

2 Personal: Individual is uncertain about the demands of the innovation, his/her inadequacy to meet those demands, and his/her role with the innovation. This includes analysis of his/her role in relation to the reward structure of the organization, decision making, and consideration of potential conflicts with existing structures or personal commitment. Financial or status implications of the program for self and colleagues may also be reflected.

3 Management: Attention is focused on the processes and tasks of using the innovation and the best use of information and resources. Issues related to efficiency, organizing, managing, scheduling, and time demands are utmost.

4 Consequence: Attention focuses on impact of the innovation on students in his/her immediate sphere of influence. The focus is on relevance of the innovation for students, evaluation of student outcomes, including performance and competencies, and changes needed to increase student outcomes.

5 Collaboration: The focus is on coordination and cooperation with others regarding use of the innovation.

6 Refocusing: The focus is on exploration of more universal benefits from the innovation, including the possibility of major changes or replacement with a more powerful alternative. Individual has definite ideas about alternatives to the proposed or existing form of the innovation (Hall & Hord, 1987, p. 60).


Appendix H: SPSS Calculations

SPSS Syntax for finding Raw Score Total for Stage Zero to Stage Six

COMPUTE FiveItemRawScaleStage_0=Q.3 + Q.12 + Q.21 + Q.23 + Q.30.

EXECUTE.

COMPUTE FiveItemRawScaleStage_1=Q.6 + Q.14 + Q.15 + Q.26 + Q.35.

EXECUTE.

COMPUTE FiveItemRawScaleStage_2=Q.7 + Q.13 + Q.17 + Q.28 + Q.33.

EXECUTE.

COMPUTE FiveItemRawScaleStage_3=Q.4 + Q.8 + Q.16 + Q.25 + Q.34.

EXECUTE.

COMPUTE FiveItemRawScaleStage_4=Q.1 + Q.11 + Q.19 + Q.24 + Q.32.

EXECUTE.

COMPUTE FiveItemRawScaleStage_5=Q.5 + Q.10 + Q.18 + Q.27 + Q.29.

EXECUTE.

COMPUTE FiveItemRawScaleStage_6=Q.2 + Q.9 + Q.20 + Q.22 + Q.31.

EXECUTE.
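The raw-score computation above can be expressed equivalently outside SPSS. The following Python sketch (not part of the original analysis; the function name is illustrative) applies the same item-to-stage mapping used in the COMPUTE statements:

```python
# Item-to-stage mapping for the 35-item SoCQ, matching the SPSS syntax above.
# Keys are stages 0-6; values are 1-indexed questionnaire item numbers.
STAGE_ITEMS = {
    0: [3, 12, 21, 23, 30],
    1: [6, 14, 15, 26, 35],
    2: [7, 13, 17, 28, 33],
    3: [4, 8, 16, 25, 34],
    4: [1, 11, 19, 24, 32],
    5: [5, 10, 18, 27, 29],
    6: [2, 9, 20, 22, 31],
}

def raw_scale_scores(responses):
    """Given {item_number: response on the 0-7 scale}, return the
    five-item raw score total for each of the seven stages."""
    return {stage: sum(responses[i] for i in items)
            for stage, items in STAGE_ITEMS.items()}
```

Each raw total therefore ranges from 0 (all five items marked 0) to 35 (all five marked 7), which is the input range the percentile recodes below expect.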

SPSS Syntax for finding Percentile Score for Stage Zero to Stage Six

RECODE FiveItemRawScaleStage_0 (4=7) (5=14) (6=22) (7=31) (8=40) (9=48)

(10=55) (11=61) (12=69) (13=75) (14=81) (15=87) (16=91) (17=94) (18=96) (19=97)

(20=98) (3=4) (0 thru 2=Copy) (21 thru 35=99) INTO Stage_0_Percentile.

EXECUTE.

RECODE FiveItemRawScaleStage_1 (0=5) (1=12) (2=16) (3=19) (4=23) (5=27)


(6=30) (7=34) (8=37) (9=40) (10=43) (11=45) (12=48) (13=51) (14=54) (15=57)

(16=60) (17=63) (18=66) (19=69) (20=72) (21=75) (22=80) (23=84) (24=88) (25=90)

(26=91) (27=93) (28=95) (29=96) (30=97) (31=98) (32 thru 35=99) INTO Stage_1_Percentile.

EXECUTE.

RECODE FiveItemRawScaleStage_2 (0=5) (1=12) (2=14) (3=17) (4=21) (5=25) (6=28) (7=31)

(8=35) (9=39) (10=41) (11=45) (12=48) (13=52) (14=55) (15=57) (16=59) (17=63) (18=67)

(19=70) (20=72) (21=76) (22=78) (23=80) (24=83) (25=85) (26=87) (27=89) (28=91) (29=92)

(30=94) (31=95) (32 thru 33=96) (34=97) (35=99) INTO Stage_2_Percentile.

EXECUTE.

RECODE FiveItemRawScaleStage_3 (0=2) (1=5) (2=7) (3=9) (4=11) (5=15) (6=18) (7=23)

(8=27) (9=30) (10=34) (11=39) (12=43) (13=47) (14=52) (15=56) (16=60) (17=65) (18=69)

(19=73) (20=77) (21=80) (22=83) (23=85) (24=88) (25=90) (26=92) (27=94) (28=95) (29=97)

(30=97) (31=98) (32=98) (33 thru 35=99) INTO Stage_3_Percentile.

EXECUTE.

RECODE FiveItemRawScaleStage_4 (0 thru 2=1) (3=2) (4=2) (5=3) (6=3) (7=4) (8=5) (9=5)

(10=7) (11=8) (12=9) (13=11) (14=13) (15=16) (16=19) (17=21) (18=24) (19=27)

(20=30) (21=33) (22=38) (23=43) (24=48) (25=54) (26=59) (27=63) (28=66) (29=71)

(30=76) (31=82) (32=86) (33=90) (34=92) (35=96) INTO Stage_4_Percentile.

EXECUTE.

RECODE FiveItemRawScaleStage_5 (0=1) (1=2) (2=3) (3=3) (4=4) (5=5) (6=7) (7=9) (8=10)

(9=12) (10=14) (11=16) (12=19) (13=22) (14=25) (15=28) (16=31) (17=36) (18=40) (19=44)

(20=48) (21=52) (22=55) (23=59) (24=64) (25=68) (26=72) (27=76) (28=80) (29=84)

(30=88) (31=91) (32=93) (33=95) (34=97) (35=98) INTO Stage_5_Percentile.


EXECUTE.

RECODE FiveItemRawScaleStage_6 (0=1) (1=2) (2=3) (3=5) (4=6) (5=9) (6=11) (7=14)

(8=17) (9=20) (10=22) (11=26) (12=30) (13=34) (14=38) (15=42) (16=47) (17=52)

(18=57) (19=60) (20=65) (21=69) (22=73) (23=77) (24=81) (25=84) (26=87) (27=90)

(28=92) (29=94) (30=96) (31=97) (32=98) (33 thru 35=99) INTO Stage_6_Percentile.

EXECUTE.
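Each RECODE above is a lookup table mapping a raw stage total to its percentile in the SoCQ norm tables. As an illustration (a sketch, not part of the original analysis), the Stage 0 conversion can be written as a dictionary lookup transcribed from the first RECODE statement, with the same pass-through for totals of 0-2 and the 99th-percentile cap for totals of 21 and above:

```python
# Raw total -> percentile for Stage 0, transcribed from the RECODE above.
STAGE_0_PERCENTILES = {
    3: 4, 4: 7, 5: 14, 6: 22, 7: 31, 8: 40, 9: 48, 10: 55, 11: 61,
    12: 69, 13: 75, 14: 81, 15: 87, 16: 91, 17: 94, 18: 96, 19: 97, 20: 98,
}

def stage_0_percentile(raw_total):
    """Mirror the SPSS RECODE: totals 0-2 are copied unchanged,
    totals 21-35 are capped at the 99th percentile."""
    if raw_total <= 2:
        return raw_total
    if raw_total >= 21:
        return 99
    return STAGE_0_PERCENTILES[raw_total]
```

The other six stages follow the same pattern with their own norm tables.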

SPSS Syntax for finding Highest Score

COMPUTE

Highest_Stage_Score=Max(Stage_0_Percentile,Stage_1_Percentile,Stage_2_Percentile,Stage_3_Percentile,

Stage_4_Percentile,Stage_5_Percentile,Stage_6_Percentile).

EXECUTE.

DO IF (MAX(Stage_0_Percentile TO Stage_6_Percentile) EQ Stage_0_Percentile).

COMPUTE X=0.

ELSE IF (MAX(Stage_0_Percentile TO Stage_6_Percentile) EQ Stage_1_Percentile).

COMPUTE X = 1.

ELSE IF (MAX(Stage_0_Percentile TO Stage_6_Percentile) EQ Stage_2_Percentile).

COMPUTE X = 2.

ELSE IF (MAX(Stage_0_Percentile TO Stage_6_Percentile) EQ Stage_3_Percentile).

COMPUTE X = 3.

ELSE IF (MAX(Stage_0_Percentile TO Stage_6_Percentile) EQ Stage_4_Percentile).

COMPUTE X = 4.

ELSE IF (MAX(Stage_0_Percentile TO Stage_6_Percentile) EQ Stage_5_Percentile).

COMPUTE X = 5.

ELSE IF (MAX(Stage_0_Percentile TO Stage_6_Percentile) EQ Stage_6_Percentile).


COMPUTE X = 6.

END IF.

EXECUTE.

* Track the highest (HI/Stage1) and second-highest (SCNDHI/Stage2) percentile
* scores. The stage counter C advances on every iteration, and a displaced
* highest score is demoted to second-highest.

COMPUTE HI = -1.

COMPUTE SCNDHI = -1.

COMPUTE Stage1 = -1.

COMPUTE Stage2 = -1.

COMPUTE C = 0.

EXECUTE.

DO REPEAT R = Stage_0_Percentile TO Stage_6_Percentile.

DO IF (R > HI).

COMPUTE SCNDHI = HI.

COMPUTE Stage2 = Stage1.

COMPUTE HI = R.

COMPUTE Stage1 = C.

ELSE IF (R > SCNDHI).

COMPUTE SCNDHI = R.

COMPUTE Stage2 = C.

END IF.

COMPUTE C = C + 1.

END REPEAT.

EXECUTE.
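The loop above scans the seven percentile scores to find the stages with the highest and second-highest values. The same computation can be sketched in Python (an illustrative rewrite, not part of the original syntax):

```python
def highest_two_stages(percentiles):
    """Given a list of seven stage percentile scores (index = stage 0-6),
    return (first_stage, second_stage): the stage numbers holding the
    highest and second-highest percentile scores. Ties resolve to the
    lower-numbered stage, matching a left-to-right scan."""
    ranked = sorted(range(7), key=lambda s: (-percentiles[s], s))
    return ranked[0], ranked[1]
```

For example, a profile peaking at Stage 1 (Informational) with a secondary peak at Stage 6 (Refocusing) returns (1, 6).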


Appendix I: MANOVA Assumptions

Test for linearity


Checking Correlation Assumption of Dependent Variables for MANOVA
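The figure referenced here checked that the seven dependent variables (the stage percentile scores) were moderately intercorrelated, as MANOVA assumes. A minimal sketch of such a check, assuming participant scores are available as rows (the function name is hypothetical and this is not the original analysis code):

```python
import numpy as np

def correlation_matrix(scores):
    """scores: one row per participant, each row holding the seven
    stage percentile scores. Returns the 7x7 Pearson correlation
    matrix; very high pairwise correlations (e.g., |r| > .9) would
    suggest multicollinearity among the dependent variables."""
    return np.corrcoef(np.array(scores, dtype=float), rowvar=False)
```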


Appendix J: Exploratory Factor Analysis


Thesis and Dissertation Services