THE EFFECT OF GENDER, PRIOR EXPERIENCE AND LEARNING SETTING ON COMPUTER COMPETENCY

by

FRANCIS HUEITSU FENG

B.S. E.E.T., The University of Houston, Texas, U.S.A., 1979

A THESIS SUBMITTED IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF ARTS

in

THE FACULTY OF GRADUATE STUDIES
Department of Curriculum Studies
Faculty of Education

We accept this thesis as conforming to the required standard

THE UNIVERSITY OF BRITISH COLUMBIA

May 1996

© Francis Hueitsu Feng, 1996

In presenting this thesis in partial fulfilment of the requirements for an advanced degree at the University of British Columbia, I agree that the Library shall make it freely available for reference and study. I further agree that permission for extensive copying of this thesis for scholarly purposes may be granted by the head of my department or by his or her representatives. It is understood that copying or publication of this thesis for financial gain shall not be allowed without my written permission.

Department of Curriculum Studies

The University of British Columbia
Vancouver, Canada

Date

ABSTRACT

The Effect of Gender, Prior Experience and Learning Setting

on Computer Competency

The purpose of this thesis was to investigate the effect of gender, prior computing experience and dominant learning setting on university students' self-reported competency scores in languages, operating systems and applications. Gender differences have been found in computing in terms of attitude (Campbell, 1990; Chen, 1986; Loyd & Gressard, 1986), competency (Kay, 1989b), and participation (Sanders & Stone, 1986).

The gender gap within computing begins early (Jones, 1987), with the aggressive and monopolizing behaviour of males in preschool years (Schubert, 1986), and has far-reaching consequences. Females face different expectations from parents (Nelson & Watson, 1991), are socialized against computers at home (Shashaani, 1994a), view programming as a male-oriented activity (Hawkins, 1985), and relate to computers differently than males (Turkle, 1984).

The recent decline in women pursuing computing-related fields in colleges and universities has been well documented (Shashaani, 1994a; Taylor & Mounfield, 1994). The problem of disparity has been postulated to stem not from a lack of ability, but from a lack of participation (McCormick & Ross, 1990; Taylor & Mounfield, 1994). The definition of participation, however, is multi-faceted in the literature. For example, definitions range from self-initiated computing (Chen, 1986; Krendle & Lieberman, 1988), to use of computers after school (Becker & Sterling, 1987), to attendance at informal camps and courses (Hess & Miura, 1985), to ownership of a computer (Taylor & Mounfield, 1994). A body of literature exists which recognizes participation as prior computing experiences (McInerney, McInerney & Sinclair, 1994) and links access to a computer at home to achievement (Nolan, MacKinnon & Soler, 1992; Schmidinger, 1993) and attitude (Levin & Gordon, 1989). Other researchers have confirmed the beneficial effects of computer experience toward the diminution of computer anxiety (Chen, 1986; Liu, Reed & Phillips, 1992).

Three research questions addressed issues of gender, age of first computer experience and learning setting. Data were gathered using a questionnaire based upon the revised Gates (1981)

Software Abstraction Model as the conceptual framework. Face and content validity of the instrument were established through an expert panel and pilot studies. Employing an ex post facto survey method, the questionnaire was administered to 765 students in first-year computer science courses at a large western university. Using factor analysis, the three underlying software constructs were re-operationalized as Application, High-Level Language and Low-Level Language competencies. The inter-item cohesiveness and stability of the Software Competency Scale (SCS) were established by high coefficients for test-retest reliability (0.84) and internal consistency for all subscales (0.82, 0.73, 0.80) and the instrument (0.89), which allowed conversion of raw scores into three subscales. Following the advice of the literature (McInerney, McInerney & Sinclair, 1994), prior computing experience was redefined as the combination of first age of computer experience and dominant learning setting. Multivariate comparisons were conducted with the three subscales using MANOVA with follow-up ANOVAs and, where appropriate, Scheffe post-hoc range tests. For additional support, univariate analysis was also employed using a combination of t-tests and Chi-square analysis.
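Internal-consistency coefficients of this kind are conventionally Cronbach's alpha. Assuming that is the statistic behind the reported values, the computation can be sketched as follows; the item scores below are invented for illustration and are not the thesis data.

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a multi-item scale.

    items: list of k lists, each holding one item's scores
    across the same n respondents.
    """
    k = len(items)
    n = len(items[0])

    def variance(xs):  # sample variance (n - 1 denominator)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    # Sum of the individual item variances
    item_var = sum(variance(col) for col in items)
    # Variance of each respondent's total score
    totals = [sum(col[i] for col in items) for i in range(n)]
    total_var = variance(totals)
    return (k / (k - 1)) * (1 - item_var / total_var)

# Hypothetical 3-item scale, 5 respondents (illustrative only)
scores = [
    [3, 4, 5, 2, 4],
    [3, 5, 4, 2, 5],
    [2, 4, 5, 3, 4],
]
print(round(cronbach_alpha(scores), 2))  # → 0.89
```

Alpha grows with the number of items and with how strongly items covary relative to their individual variances, which is consistent with the full 27-item instrument (0.89) scoring above its shorter subscales (0.82, 0.73, 0.80).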

The survey assessed competency across 27 types of software. With the exception of the Application competency subscale, scores for female university students were significantly lower than those of males on the High-Level Language and Low-Level Language competency subscales. Main effects of Gender (F(1,601) = 13.97, p < 0.0001), First Age of Computer Experience (F(1,601) = 17.27, p < 0.0001) and Learning Setting (F(2,601) = 7.06, p < 0.0001) were significant, but interactions were not. Findings indicated that being male, having early experience with computers, or computing predominantly at home increased the likelihood of higher competency scores. Although early beginners and "home learners" tended to be males, and university students who either had earlier computing experiences or primarily learned at home had significantly higher competency scores, the non-significant interactions meant that these differences based upon the age of first computer experience or the learning setting were the same for males and females.

TABLE OF CONTENTS

ABSTRACT ii

TABLE OF CONTENTS iv

LIST OF TABLES vii

LIST OF FIGURES viii

ACKNOWLEDGEMENT ix

CHAPTER ONE 1
    Scope of the Investigation 1
    Introduction 1
    Research in Computer Applications 2
    The Dependent Measure 2
    Learning Settings 4
    Gender Disparity and the Literature 4
    Statement of the Research Problem 6
    Structure of the Thesis 6

CHAPTER TWO 8
    Review of the Literature 8
    Research on Gender Differences 8
    Link to Mathematics Research 9
    Gender Equity and Computing 10
    Gender and Societal Factors 11
    Gender and Stereotyping 12
    Gender and Participation 12
    Gender, Computer Anxiety and Achievement 14
    Gender and Applications 15
    First Computer Experiences 17
    Gender and Prior Experience 19
    Learning Setting 21
    Gender, Prior Experience and Setting 23
    Studies with Post-Secondary Students 27
    Educational Studies 28
    Causal Studies 30
    Problems with Definitions of Constructs 31
    Gaps in the Literature 34
    Conceptual Framework 37
    Gates' Conception of Software Abstraction 37
    Tangorra's Model of Multi-Level System Structure 39

CHAPTER THREE 40
    Methodology 40
    The Research Design 40
    Synopsis of Research Plan 42
    Preparation Phase 42
    Analysis Phase 42
    Univariate Tests 42
    Test-Retest Reliability 42
    Preliminary Reliability Test 43
    Factor Analysis and Internal Reliability Analysis 43
    Operationalize Independent Variable (IV) 43
    Background Discussion 44
    Pre-Pilot Findings 44
    Theoretical Perspective 45
    Gates' (1981) Model as the Conceptual Framework 45
    Instrument Development 47
    Competency-Based Anchors 47
    A Priori Conjectures of Competency Constructs 47
    Development of Final Survey Questionnaire 48
    Preliminary Scale Analysis 50
    Programming Language Conventional Subscale 50
    Operating System Conventional Subscale 50
    Application Conventional Subscale 50
    Preliminary Analysis Discussion 52
    Test-Retest Subjects 52
    Results of Test-Retest Analysis 52
    Main Study Subjects and Analysis 52
    Operationalizing the Independent Variables 53
    Operationalizing Age of First Computer Experience 53
    Operationalizing Learning Setting 54
    Statistical Procedures Used to Analyze Results 54
    Self-Reported Competency Scores 55
    Factorial Validity and Reliability of a Scale 56

CHAPTER FOUR 57
    The Main Study Findings 57
    Reporting and Presentation Plan 57
    Descriptive Statistics 57
    Mean Competency Scores by Gender 59
    Breakdown Descriptive Statistics 59
    Factor Analysis 61
    Confirmatory Factor Analytic Procedure 61
    Creation of Subscales through Factor Analysis 61
    Scale Refinement 62
    Multivariate Analysis of Variance 63
    Results of the MANOVA 64
    Inferential Interpretation 66
    Results of Univariate ANOVAs 66
    Post-Hoc Analysis 68
    Summary of Findings 71
    General Interpretation Across Findings 71

CHAPTER FIVE 73
    Conclusions and Implications 73
    Empirical Research 73
    Findings and the Literature 75
    Interactions 79
    Gates-Tangorra-Feng Model 80
    Implications for Teaching and Learning 83
    Implications for Teacher Education 84
    Implications for Parents, Teachers and Researchers 84
    Limitations of the Study 86
    Implications for Future Research 87

REFERENCES 88

APPENDIX A 111
    Univariate Tests 111
    Item-Level Analysis 112
    Non-Parametric Chi-Square Analyses 112
    Parametric t-test Analyses 114
    Comparison Between Parametric and Non-Parametric Tests 115
    Comparison Between Univariate and Multivariate Results 117

APPENDIX B 119
    List of 23 Software Items 119

APPENDIX C 121
    Software Competency Scale (SCS) 121

APPENDIX D 126
    Data Coding Protocol 126

APPENDIX E 136
    Additional Descriptive Statistics and Raw Data 136

APPENDIX F 143
    Factor Analysis 143
    Check for Theoretical Assumptions of Factor Analysis 144
    Interpretation of Initial and Final Statistics 145
    Rotated Factor Matrix 146
    Interpretation of Factor Analysis 147
    Discussion of Results 148

APPENDIX G 150
    Scale Refinement 150
    Scale Analysis 151
    New Application Subscale 151
    New Low-Level Language Subscale 151
    New High-Level Language Subscale 151
    Discussion of Refined Instrument 152
    Intercorrelations with Gender and First Age of Experience 153
    Instrumentation 154

APPENDIX H 155
    MANOVA Analysis 155
    Considerations for the MANOVA 156
    Skewness, Missing Data, Homoscedasticity, Outliers 156
    Checking Underlying Assumptions 157
    Setting the Alpha Level 158

APPENDIX I 159
    The Gates-Tangorra-Feng Model 159
    The Revised Model 160

LIST OF TABLES

Table 1 Comparison of Terms Found in the Literature 3
Table 2 Scope of Investigations Into Possible Causes of Gender Disparity 5
Table 3 Comparison of Terms Found in the Literature For Computer Experience 34
Table 4 Mean, Standard Deviation, Item-Total Correlations For Preliminary Software Competency Scale (SCS) 51
Table 5 Percentage of Respondents by Gender 58
Table 6 Percentage of Respondents by Gender Across Age of First Computer Experience 58
Table 7 Percentage of Respondents by Gender Across Learning Settings 59
Table 8 Comparison of Raw Competency Scores by Gender 60
Table 9 Comparison of Preliminary and Refined Scales 63
Table 10 Sets of Means to Compare for Factors of the MANOVA 64
Table 11 MANOVA Results Table 65
Table 12 Follow-up ANOVA for Gender 67
Table 13 Follow-up ANOVA for FirstAge 67
Table 14 Follow-up ANOVA for LearnSet 68
Table 15 Scheffe Post-Hoc Comparisons 69
Table A.1 Chi-Square Analysis 114
Table A.2 Parametric t-tests by Gender 116
Table B.1 List of 23 Original Software Items 120
Table C.1 Software Competency Scale (SCS) 122
Table D.1 Data Entry Protocol Codes and Description 127
Table E.1 Scores Across First Computer Experience by Gender 137
Table E.2 Frequency Across Age of First Computer Experience by Gender 138
Table E.3 Mean Ratings by Gender Across Learning Settings 139
Table E.4 Percentage by Gender Across Learning Settings 140
Table E.5 Frequency Distribution by Gender Across Learning Settings 141
Table E.6 Comparison of Age of Learning Software by Gender 142
Table F.1 Eigenvalues of a Factor Solution 145
Table F.2 Factor Loadings of Variables Across Three Factors 147
Table G.1 Mean, Standard Deviation, and Item-Total Correlations For Refined Software Competency Scale (SCS) 152
Table G.2 Correlation Matrix for Preliminary Subscales 153
Table G.3 Correlation Matrix for Refined Subscales 153

LIST OF FIGURES

Figure 1 Gates (1981) Pyramidal Conception of Software Abstraction 38
Figure 2 The Multi-Level Structure of a Computer System Tangorra (1990) 39
Figure 3 Map of Research Plan 41
Figure 4 Proposed Gates-Tangorra-Feng Model of Software Abstraction 82

ACKNOWLEDGEMENTS

This study owes its completion to the hardworking and patient members of my committee.

Most of all, I would like to express my sincere gratitude to Dr. Sherrill (Papa Sherrill, or the "light at the end of the tunnel") for the faith, caring, dedication, guidance and humanity that he has shown me, and for putting his own credibility on the line for me at several junctures. I am also grateful to Dr. Westrom for championing my cause, for the humanity that he showed me in the hour of my greatest need, and for being the subject expert on my committee. I would also like to thank Dr. Chan for his assistance, first in an unofficial capacity and then as a member of the committee.

Under Dr. Sherrill's direction, the committee members have worked long and hard, patiently reading and editing my numerous submissions to help me craft this thesis into a quality document. In the same breath, I would like to express my special thanks to Drs. Anderson and Peterat for speaking to Graduate Studies on my behalf, and for believing that I would arrive at a quality piece of research. I also want to thank Dr. Anderson for the humanity that she has shown me, her encouraging warm words, and her continued support. Without the help of these people, this work would not have been completed, and I would like to say a collective thank you to them all.

I would also like to thank Dr. Robitaille for his leadership in making our department fertile ground for scholarship and collegiality, and for providing access to research opportunities and resources. I would like to thank all my professors, but in particular Dr. Boldt for being my statistical mentor and an Aristotle in his pedagogic delivery, Dr. Gaskell for my initial acceptance, Dr. Ratzlaff for starting me on the long road to quantitative methodology and for his kind understanding in my hour of greatest need, Drs. Echols and Willms for their compassion and for imparting the nuances of research methodology to me, Dr. Ungerlieder for helping me with my visit to the schools, Dr. Willinsky for resurrecting my ability to think and write critically again after the trauma of my father's passing, Dr. Werner for teaching me the art of implementing curricular change and for my first opportunity at heading a research project, Dr. Goldman-Seagull for her continued support, Dr. Boshier for my perspective transformation, Dr. Kishor for factor analysis, Dr. Conry for instrumentation, Dr. Donn for introducing the notion of paradigmatic shifts, Dr. Allison for my introduction to the literature, Dr. Schultz for the longest statistics exam I ever had, and Dr. Riechl for sorting out my difficulties with the MANOVA.

I am grateful for working with Dr. Erickson and Lorna on BCAMS, and privileged to have worked with Dr. Marshall, Dr. Taylor, Sue Brigden and Joel Zapata on the TIMSS and BCAMS projects. I am especially grateful and fortunate for Sue's guidance, the opportunity to chair the Mathematics Coding Committee for the TIMSS Performance Assessment, and the chance to represent Saskatchewan in the TIMSS PA. I am also thankful for the opportunity of working at Educational Computing Services with Dr. Bruce and his staff. I appreciate Mike Sheppard's assistance and his comments regarding my questionnaire.

Moving over to Computer Science, I would like to thank Dr. Dempster for his initial assistance, and Dr. Rosenberg for coordinating the administration of the data collection. I would also like to express my sincere gratitude to the entire faculty teaching first-year computer science (Drs. Boutillier, Casselman, Coatta, Forsey, Kirkpatrick, Little, Pai, Pippenger, Tsiknis, Velthuys, Zhu) for serving on the expert panel and for their support of the research project. I especially want to thank Drs. Little and Tsiknis for their ardent support, and Dr. Tsiknis for volunteering his students for the pilot project. Thanks to Irene Amiraslany and her wonderful staff at Data-Entry for entering the data. In addition, I would like to thank Dave Ellis for his notes and initial help in the first round of pre-pilots at Eric Hamber. As well, I would like to thank the subjects who participated in the pre-pilots and pilot, and the 765 volunteer subjects in the main study.

The methodology for this research drew upon suggestions from several sources, which I would like to acknowledge at this time. I would like to thank Dr. Chan for the ideas of using factor analysis, instrumentation and test-retest reliability. I have always been interested in conducting a gender study in computing, and I attribute my present endeavour and the use of the MANOVA to Joel Zapata. I am thankful to Dr. Boldt for verifying the results of my factor analysis, and for validating the missing step which bridges my factor analytic results to the MANOVA.

Besides Joel, I would like to thank my fellow travelers George, Cynthia, Sandra, Teresita and Gabriel on the path to knowledge. George, Cynthia, Sandra and Teresita made the difference for my oral defence with their continuous practicing and care. I thank George for staying with me into the wee hours of the night until I got my oral defence right, and Cynthia for her serious remarks which helped to shape my attitude towards the defence. I thank Sandra for worrying about how I would fare at my oral defence, for organizing the practice sessions, and for her repeated attempts to help me get my presentation correct. I thank Teresita for her speedy typing and her help with the graphics and the references. I would also like to thank Joel, George and Gabriel for their suggestion of asking Dr. Sherrill to be my advisor. I would also like to thank Saroj and the Department for their humanity in my hour of crisis, and Saroj for her dedication. I would also like to thank our fine support team, Brian, Bob and Will, for helping me with my overheads and trying to retrieve my lost WORD files.

This study owes its completion to all the individuals above who have helped me in countless ways, but it also owes its successful completion to my family. In closing, I would like to begin by dedicating this thesis to the memory of my dear departed father, Feng Yuen Sheng, who, even though he is not here today, started me back onto this journey. I would like to thank my mother, my brother Simon, and his wife Wendy for their continued emotional and financial support, and for believing that I would do a good job. I would also like to thank my uncle Dr. Yunping Feng, who showed up out of nowhere and helped me practice my oral. I have left the heart for last, but not least. I would like to thank Kym, my other half and beautiful friend in life, for waiting and believing in me, for taking care of the children, for showing me the humanity and beauty of life, for making me believe in the good of humanity, for giving me the space to complete this project, and for always being positive and believing in me in that last little place in her heart even though it has been very hard. I also want to thank the five beautiful children in our lives who give us meaning and hope, Alexander, Jasmine, Michael-Philip, Serena and Jonas, for their patience over my absence. I would also like to dedicate this piece of work to all these beautiful people who make my life complete.

CHAPTER ONE

Scope of the Investigation

Introduction

In keeping with the rapid pace of technological change, the provincial Ministry of Education (1995) published a policy paper entitled Technology in British Columbia Public Schools: Report and Action Plan: 1995 to 2000. The paper announced the plan to emphasize computer technology and bring "improved access to, awareness of and expertise of new technologies" (cover page) into public schools. The action plan stressed the importance of technological literacy, and the necessity of lifelong learning and skills training. The Ministry recognized that along with extra funding and additional state-of-the-art physical resources, the action plan had to make provision for in-service teacher training "to learn specific software packages; to integrate that software into the curriculum; and to acquire basic telecommunications skills" (Ministry of Education, 1995, p. 5).

Kay (1989a) claimed that "technological progress has changed computer software and hardware enough to dramatically increase accessibility and reduce skills required to use the computer" (p. 41). The advent of "user-friendly" software (Fried, 1982) has resulted in a rapid increase in the number of people learning to use computers without needing to know how to program (Bernstein, 1991; Haigh, 1985; Jackson, Clements & Jones, 1984; Soper & Lee, 1985). Often the user only requires the ability to read and write to perform sophisticated procedures (Kay, 1993c). Computer scientists conceive of programming in terms of language paradigms, and classify the use of applications as newly evolved forms of programming in a long succession of languages (Brookshear, 1994; Budd & Pandey, 1995; Luker, 1989). Some computer scientists directly refer to applications as fourth-generation languages (Brookshear, 1994).

Research in Computer Applications

There is widespread recognition that the nature of computing has fundamentally changed and that applications use should be investigated as an important variable (Byrd & Koohang, 1989; Haigh, 1985; Hunt & Bohlin, 1993; Jackson, Clements & Jones, 1984; Kay, 1993; Levin, 1983; Rhodes, 1986). One of the most prolific areas of application research is in preservice teacher training (Hignite & Echternacht, 1992; Hunt & Bohlin, 1993; Kay, 1990; Liu, Reed & Phillips, 1992; Reed & Overbaugh, 1993; Woodrow, 1992).

Previous studies (Anderson, 1984; Lockheed, Neilson & Stone, 1983) discovered the existence of gender disparity in terms of enrollment in programming courses. Other studies explored gender disparities in programming, in the use of applications like word processing (Becker & Sterling, 1987), or in word processing, data processing and games (Chambers & Clarke, 1987). Chen (1986) noted that because studies of "non-programming" uses of computers have found smaller differences in enrollment between males and females, the literature on gender differences in computing recommended looking at gender differences in specific types of computer experiences. The importance of isolating differences by type of application has been similarly echoed by other researchers (Koohang, 1987; Levin & Gordon, 1989; McInerney, McInerney & Sinclair, 1994).

There is a paucity of comprehensive studies that investigate a range of computer experiences. The literature review presented in Chapter Two yielded only five such studies undertaken to date which provide empirical evidence of the specific nature and extent of computer experience (Bozeman & Spuck, 1991; Clarke & Chambers, 1989; Deschenes, 1988; Hecht & Dwyer, 1993; Standing Committee on Educational Technology, 1991).

The Dependent Measure

The researcher wanted to assess the subjects' competency in programming languages, controlling operating systems and using applications. In order to do so, the researcher consulted the literature for a term with which to operationalize competency. The search indicated that there was neither universal terminology nor a universal operationalization for this "ability" to work with software. Table 1 below illustrates this inherent problem with the construct, showing the variety of descriptions researchers have used to define and operationalize "the ability to use or work with computers."

Table 1 Comparison of Terms Found in the Literature

Terminology Used        Studies
Ability                 Kay, 1993b
Awareness               Jackson, Clements & Jones, 1984
Competency              Anderson, 1987; Campbell & Williams, 1990; Cheng, Plake & Stevens, 1985; Colley, Hill, Hill & Jones, 1995; Gardner, Discenza & Dukes, 1993; Geissler & Horridge, 1993; Hignite & Echternacht, 1992; Lee, Pliskin & Kahn, 1994; Liu, Reed & Phillips, 1992; Marcinkiewitz, 1994; McInerney, McInerney & Sinclair, 1994; Wagner & Vinsonhaler, 1991
Efficacy                Bernstein, 1991; Busch, 1995; McInerney et al., 1994
Experience              Clarke & Chambers, 1989; Colley, Gale & Harris, 1994
Knowledge or literacy   Hignite & Echternacht, 1992; Kay, 1990; Marshall & Bannon, 1986; Weil, Rosen & Wugalter, 1990
Proficiency             Campbell & Williams, 1990
Use/Experience          Bernt, Bugbee & Arceo, 1990
Use                     Chen, 1986
Skill                   Beard, 1993

As shown in Table 1, researchers have apparently converged upon the term "competency." However, some terms have been used interchangeably with it. For example, Cheng, Plake and Stevens (1985) used the term "competency" synonymously with "literacy," Marcinkiewitz (1994) referred to "competency" when attempting to measure "use," while Clarke and Chambers (1989) used the term "experience" as a measure of "competency." The term "competency" has also been used across disciplines; researchers in the fields of artificial intelligence (Wagner & Vinsonhaler, 1991), psychology (Gardner, Discenza & Dukes, 1993) and education (Lee, Pliskin & Kahn, 1994; Liu, Reed & Phillips, 1992; Marcinkiewitz, 1994; McInerney, McInerney & Sinclair, 1994) have used it to refer to this "ability" to work with computers.

Regardless of the terms used, researchers and educators agree that applications should be integrated into the curriculum to encourage greater participation by females, and that there needs to be more research into the learning and use of applications (Woodrow, 1991; Buder, 1991). Since there is convergence and support both within education and across disciplines, the term "competency" appears to be the appropriate label for the dependent measure in this present study. The notion of experience is also important; for the purpose of this study, experience will be represented by the combination of the independent variables of the age of learning and the learning setting.

Learning Settings

It is a common marketing strategy for the software industry to continually innovate and to supply consumers with newer and better programming languages, operating systems and applications (Wilkes, 1991). However, between the arrival of new software on the market and its implementation into school curricula there is an inherent lag, which "makes it impossible for the average person to keep abreast of an exponentially increasing base of computer knowledge" (Kay, 1989a, p. 39). Empirical evidence indicates that learning related to computers occurs outside the school system (Clarke & Chambers, 1989; Kersteen et al., 1988; Nichols, 1992).

When students engage in learning outside the school context, what settings do they choose, how do these settings influence their learning, and when do the students (or their parents) begin this pursuit? Do boys and girls begin learning computers at different times, and are they treated the same? Do boys and girls adopt different learning strategies and schedules based upon the process of socialization around their lives? Finally, if students are learning by themselves outside the school system, what are the implications for the role of the school in the teaching and learning of computing? Rather than deal with all these questions, for the purpose of this present study, the researcher will focus on gender disparity in terms of the effect of gender, the first age of computer experience and learning setting upon the computer competency of university students.

Gender Disparity and the Literature

Computers are metal and wiring, imbued electronically with representations of logic (Smith, 1985; Tangorra, 1990; Winograd & Flores, 1987). As such, computers are intrinsically non-discriminatory machines (Schubert & Bakke, 1989), and were thought of as neutral tools when they were first introduced into the schools (Molnar, 1978). More recent thought regards computers not as innocuous neutral machines but as value-laden devices, and a branch of discourse has developed dedicated to the social implications of computing (Bowers, 1988; Franklin, 1990; Kling, 1983; Postman, 1987; Ragsdale, 1988). Gender disparity in computing, the focus of investigation of this thesis, was one of the outcomes stemming from the social implications of computer implementations in the schools.

Shortly after the implementation of computers into high school curricula, although differences in attitude towards computers were disputed (Loyd & Gressard, 1984; Marshall & Bannon, 1986), educators and researchers began to notice gaps in computing performance in favour of boys, similar to those observed almost ten years earlier in mathematics achievement (Chen, 1986). As a result, besides positing connections to mathematics, researchers looked in several other directions for an explanation of the observed anomaly in performance between genders. In the effort to find answers, researchers hypothesized disparate causes for the observed variance, and research fanned out in all directions. Table 2 provides an indication of the breadth of these investigations.

Table 2 Scope of Investigations Into Possible Causes of Gender Disparity

Variables                 Studies
Cognitive aspects         Linn, 1985
Duration of Experiences   Gabriel, 1985
Early Exposure            Chen, 1986; Eccles, 1987; Kersteen et al., 1988; Schubert, 1986
Frequency of Experiences  Hunt & Bohlin, 1993
Gender                    Loyd & Gressard, 1984
Home Learning             Colley, Gale & Harris, 1994; Nichols, 1992; Miura, 1986
Lack of Encouragement     Chen, 1986; Shashaani, 1994; Voogt, 1987
Learning Settings         Lee, Pliskin & Kahn, 1994
Nature of Experiences     Koohang, 1987
Ownership                 Schmidinger, 1993
Participation in camps    Chen, 1986; Hess & Miura, 1985
Social Interactions       Ames & Archer, 1987; Turkle, 1984
Stereotyping              Colley, Gale & Harris, 1994

A positive correlation has been noted between prior computing experience and performance (Marcoulides, 1988), and between prior computing experience and positive attitudes toward computers (Hunt & Bohlin, 1993). Chen (1986) found confidence to be positively related to computer experience. Although there are contextual contentions as to the quality of that experience (Bernstein, 1991; Collis, 1985; Krendle, Broihier & Fleetwood, 1989; Linn, 1985), overall researchers appeared to agree that greater experience leads to a diminution of anxiety (Heinssen, Glass & Knight, 1987; Howard & Smith, 1986; McInerney, McInerney & Sinclair, 1994), better self-efficacy (Bandura, 1977; Klein, Knupfer & Crooks, 1993; Kersteen et al., 1988), higher outcome expectancy (Bandura, 1977) and achievement (Marcoulides, 1988).

Across several studies, gender was not determined to be a statistically significant factor in attitude towards computers (Campbell, 1989; Gressard & Loyd, 1986; Lockheed, 1985; Loyd & Gressard, 1984). Ogozalek (1989) did not find evidence of male-domain stereotyping in her study of computer science students. In their research into networking the home and the university, Watson et al. (1989) contradicted the literature when they reported that daughters used computers more than sons. Koohang (1986) also claimed that gender was not a significant factor in his study, corroborating the findings of Loyd and Gressard (1984) and Marshall and Bannon (1986).

Statement of the Research Problem

An examination of the variables in Table 2 indicates that several studies have concentrated on gender, learning settings and early experiences. Guided by this body of literature, the present study focused on the effect of gender, first age of computing experience and learning setting on university students' self-reported competencies with programming languages, operating systems and applications.

Specifically, the study focused upon three research questions:

RQ1. Do male and female university students differ in self-reported ratings in software competencies with programming languages, operating systems and applications?

RQ2. Do university students who learn computing at different ages differ in self-reported ratings in software competencies with programming languages, operating systems and applications?

RQ3. Do university students who learn computing in different settings differ in self-reported ratings in software competencies with programming languages, operating systems and applications?

Structure of the Thesis

This thesis has been organized around five chapters. This introductory chapter presents a brief background to the problem and a statement of the research problem. Chapter Two presents the literature surrounding gender, prior computer experience and learning setting, and causal studies, along with a sampling of literature critical of current methodology in gender research, followed by a discussion of the gaps found in the literature. The chapter closes with a review of the literature consulted to formulate the conceptual framework employed in the design of the instrument.

Chapter Three presents the research plan, the background of the study, the two pre-pilots, the pilot, the decisions which shaped the change of direction to the current study, and the quantitative strategies designed to answer the research questions. The chapter traces the evolution of the design of the new instrument, from the pilot to the main study, and presents the test-retest results. The plan to answer the three research questions, including the operationalization of the independent variables, and the analytical process which combines factor analysis, reliability analysis and Multivariate Analysis of Variance (MANOVA), is discussed in detail. The chapter closes with a discussion of the consistency of self-reported scores, and a section detailing how such an analytical design has been used in similar research to ensure the factorial validity and reliability of the scale. The original nature of this design, and the departure from that of past research, is also noted.

Chapter Four reports the main study findings and the basic descriptive statistics, and answers the three research questions using MANOVA with follow-up ANOVAs and where appropriate, post hoc Scheffe tests. Additionally, results of the internal reliability tests, findings from the Factor Analytic procedures, and the refinement of the Software Competency Scale (SCS) leading up to the MANOVA, are also presented.
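The follow-up univariate tests described above rest on the standard one-way ANOVA decomposition of variance. As an illustrative sketch in generic notation (the symbols below are not taken from the thesis itself), the F ratio computed for each dependent competency score, with k groups of sizes n_i and N subjects in total, is:

```latex
% One-way ANOVA F ratio for a single dependent score
% (generic notation: k groups, n_i subjects in group i, N total)
F = \frac{MS_{\text{between}}}{MS_{\text{within}}}
  = \frac{\displaystyle \sum_{i=1}^{k} n_i\,(\bar{y}_i - \bar{y})^2 \Big/ (k-1)}
         {\displaystyle \sum_{i=1}^{k}\sum_{j=1}^{n_i} (y_{ij} - \bar{y}_i)^2 \Big/ (N-k)}
```

MANOVA generalizes this ratio by replacing the between- and within-group sums of squares with the corresponding sums-of-squares-and-cross-products matrices across all dependent scores, which is why significant multivariate effects are then decomposed with the univariate follow-up ANOVAs and Scheffé contrasts reported in Chapter Four.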

Chapter Five summarizes how findings from the three research questions compare to findings in the literature, and presents the proposed Gates-Tangorra-Feng Model of Software Abstraction. Finally, the implications for teaching and learning, for participation, for future research, for teacher education, and for parents, teachers, district administrators and researchers are discussed. The chapter concludes with observations on the limitations of the study and suggestions for future research.

CHAPTER TWO

Review of the Literature

There are four purposes to this chapter: first, to review the relevant literature concerned with the variables employed in this study; second, to present critical studies calling into question findings from studies of gender discrepancy, because of the prevalence of non-standard construct definitions and the lack of theory-driven research; third, to discuss gaps in the literature; and fourth, to elaborate on the formulation of the conceptual framework employed in the development of the questionnaire used in this study.

Research on Gender Differences

Gender disparity has been substantiated in mathematics (Becker, 1981; Cross, 1988; Fennema & Sherman, 1977, 1978; Hanna & Leder, 1990; Lamb & Daniels, 1993; Leder, 1982, as cited in Zapata, 1995). In the 1990 British Columbia Mathematics Assessment, Gaskell, McLaren, Oberg and Eyre (1990) documented gender issues in students' choices in mathematics and science, citing various works in mathematics (Reyes & Stanic, 1988), in science (Baker, 1987; Ethington & Wolfe, 1988; Garratt, 1986; Piburn & Baker, 1989; Rennie, 1987; Zerega, Haertal & Tsai, 1986), in computing (Culley, 1988; Dubois & Schubert, 1986; Greaves, 1988), in mathematics and science (Erickson, Erickson & Haggerty, 1980) and in mathematics, science and computing (Morrow, 1986).

These studies generally found that gender disparity was prevalent across the entire mathematical, scientific and computing spectrum. Furthermore, as these subjects are highly correlated (Chambers & Clarke, 1987), the overlapping variance becomes even more compelling, as gender disparity in one domain, like mathematics, has been shown in the literature (Chambers & Clarke, 1987; Chen, 1986) to affect another, like computing.

Link to Mathematics Research.

In 1978, Fennema and Sherman called attention to a growing trend in mathematics education. In the primary grades, there was no discrepancy between boys' and girls' perceptions of the usefulness of mathematics. However, the researchers noted "by as early as the sixth grade, girls expressed less confidence than boys in their ability to do mathematics, and the subject was clearly sex-typed, especially by boys" (Fennema & Sherman, 1978, p. 202). They concluded that, unchecked, negative attitudes continued and compounded until they manifested as differential enrollment in advanced mathematics classes.

Fennema and Sherman (1977) put forth these conjectures in the form of the "differential coursework hypothesis," which suggested that differences in achievement are a function of preparation, which in turn is often the by-product of course enrollments. The Fennema and Sherman (1977) hypothesis has been supported in other studies of mathematics achievement (Armstrong, 1979; Wise, Steel & MacDonald, 1979).

Chen (1986) contended that most of what educators and researchers have learned from the existing body of research on gender differences in attitude towards mathematics could be applied to computers, and transferred to help in the planning and investigation of computer attitudes. To make his point, Chen (1986) cited the "differential coursework hypothesis" of Fennema and Sherman (1978) as evidence of shifts occurring within computing similar to those that had happened earlier within mathematics education.

In addition, the link to research in mathematics achievement was supported by factors identified in mathematics achievement research that are also common to studies of computing.

For example, parallels to mathematics achievement research have been found in stereotyping (Aman, 1992; Clarke, 1992; Howell, 1993; Kersteen et al., 1988; Klein, 1992; Levin & Gordon, 1989; Moses, 1993), in participation (Hawkins, 1985; Hess & Miura, 1986; Linn, 1984; Lockheed, 1985; Revelle et al., 1984) and in self-efficacy (Dambrot et al., 1985; Lockheed, 1985; Miura, 1987, in Shashaani, 1994a; Ogozalek, 1989).

Research in computing studies suggests that the interpretation is broader than enrollment (Chen, 1986; Shashaani, 1994). Other related factors such as use (Chen, 1986), access (Dalbey & Linn, 1985; McCormick & Ross, 1990) and experience (Clarke & Chambers, 1989) also have an effect on attitudes and achievement.

In a test of computer science administered to 450 students, Gabriel (1985) found no significant differences between males and females at grade four. However, he noted "a gap beginning to appear in the middle grades or junior high and widening through the high school years" (Gabriel, 1985, p. 166). Fetler (1984) noted a similar trend in other student populations.

Swadener & Hannafin (1987) observed no differences in elementary school children's attitude towards computing prior to their first computing experience.

There is some contention with regard to differences in attitudes towards computers. Loyd and Gressard (1986), Lockheed (1985), Lockheed and Frakt (1984), Griffin, Gillis and Brown (1986), Collis (1985), Chen (1985), Miura (1986), and Liu, Reed and Phillips (1992) all reported gender effects in attitudes towards computers, but Campbell (1989), Koohang (1986) and Marshall and Bannon (1986) did not. Loyd and Gressard (1986) found significant gender differences in computer anxiety and computer confidence among public school teachers. Liu, Reed and Phillips (1992) conducted a large-scale study with teacher education students which concluded that more males than females had no prior experience with computers, and that males had significantly lower computer anxiety than females.

Other studies took different directions. Sanders and Stone (1986) confirmed that females were underrepresented in computer courses, professional computer jobs, and home computer usage. Eaton, Schubert, Dubois and Welman (1985) discovered that there was a lack of gender equity in computer access for students in rural schools, and that parents purchased computers more for their sons than their daughters. Eaton et al. (1985) noted that, considering the implications of learning from home, this was a cause for concern. Consistent with the Eaton et al. (1985) study, McGrath et al. (1992) found that more males than females had home computers, and more males than females used them.

Gender Equity and Computing

Gender equity is an amorphous variable and may manifest itself as participation, achievement, stereotyping, use or experience. In a sense, all gender literature falls under this rubric. However, there are some studies which concentrate on gender equity as an issue in itself (Bernstein, 1991; Bohlin, 1993; Teague, 1992). In bringing awareness to the issue of gender equity, Bohlin (1993) cited a large-scale survey of 1818 students in grades 8 and 12 conducted by Collis (1985), who found that girls were more likely to associate computer users with stereotypes, and had a "limited perspective of the use of applications and uses of computers" (Bohlin, 1993, p. 158). The findings of Fuchs (1986) concurred: he found that boys had more experience with programming and computer applications than girls.

Once aware of prevalent computer anxiety among females, however, Bohlin (1993) noted that intervention was possible; Winkle and Matthews (1982) identified steps to reduce computer anxiety and increase female participation. Linn (1984) reported that women were usually underrepresented in computer courses which had potential to develop cognitive skills. Gender inequity has also been a subject of interest in curriculum, and interest in developing proactive intervention is high (Keller, 1983; Keller & Kopp, 1987; Keller & Suzuki, 1988). In the area of curriculum development, Erhart and Sandler (1987), Linn (1986), and Jump, Harris and Held (1985) suggested methods of addressing issues of gender equity. For the intervention, Bohlin (1993) cited the various works of Keller (Keller, 1983; Keller & Kopp, 1987; Keller & Suzuki, 1988).

Gender and Societal Factors

Numerous researchers have voiced the importance and influence of societal factors (Aman, 1992; Chen, 1986; Clarke, 1992; Eccles-Parsons et al., 1983; Hanchey, 1993; Klein, 1992; Kersteen et al., 1988; Maccoby, 1986; Sutton, 1991; Taylor & Mounfield, 1994) upon the perceptions of girls in regard to computing. Advocates of socialization theory (Eccles, 1987; Jacobs, 1991; Serbin et al., 1990) believe socio-cultural factors are responsible for creating gender-differential beliefs and behaviour. Chen (1986) underscored the importance of socialization when he cited the work of Maccoby (1986) on the notion of "separate cultures" of computing among adolescent boys and girls shaped by factors in the social environment, and the work of Eccles-Parsons et al. (1983) on the notion of "subjective task values" which attach differentially by gender. The problem of the formation of deeply entrenched societal values was addressed by Taylor and Mounfield (1994), who noted that "... attitudes and perceptions about computing are being formed during middle and high school years that are often carried into adulthood" (p. 292). Taylor and Mounfield (1994) pointed out that researchers like Hanchey (1994) and Sutton (1991) provided lengthy bibliographies of studies about these societal factors and the interventions developed in response.

Gender and Stereotyping

Fennema and Sherman's (1978) work on gender stereotyping and the perception of mathematics as a male domain has found echoes in computing studies. The perception of computing as a male domain has been one of the major factors cited for the lack of female participation, and there is a body of research around stereotyping and programming as a male-domain activity (Aman, 1992; Collis, 1985; Howell, 1993; Johnson & Swoope, 1987; Klein, 1992; Kersteen et al., 1988; Levin & Gordon, 1989; Lockheed, 1985; Moses, 1993).

Researchers have identified certain prior experiences, like self-initiated exploration, playing computer games, membership in computer clubs and home use of computers, that help to reinforce stereotypes (Howell, 1993; Kersteen et al., 1988; Levin & Gordon, 1989). To combat the socially rooted "subjective task values" described by Eccles-Parsons et al. (1983), which both genders attach to their activities, computer work has been restructured away from isolation through mentoring programs (Moses, 1993; Myers, 1992), group learning (Hawkins et al., 1982; Johnson & Johnson, 1987, 1988; Levin & Dareev, 1980; Moses, 1993; Muller & Perlmutter, 1985; Nolan, Mackinnon & Soler, 1992), supportive environments (Hanchey, 1993; Kersteen et al., 1988) and new learning paradigms (Hanchey, 1993; Nolan, Mackinnon & Soler, 1992; Martin & Murchie-Beyma, 1992).

Gender and Participation

Some would claim the reasons for gender discrepancies are innate (Benbow & Stanley, 1980; Halpern, 1986; Hines, 1982). There is a body of research, however, which concludes that gender differences in mathematics and computation are not due to lack of natural ability (Anderson, Klassen, Krohn, & Smith-Cunnien, 1982; Canada & Brusca, 1991; Kiesler, Sproull & Eccles, 1985; Perry & Greber, 1990; Shashaani, 1993). The claim of innate inability has also been refuted from the sociological, cultural and psychological perspectives by Skolnick, Langhort and Day (1982), who attributed the cause to past experiences.

Gender differences have been attributed at the most basic level to participation, not to lack of ability (McCormick & Ross, 1990). Researchers have found that more boys than girls enrolled in computer classes held in school and in computer camps (Anderson, Welch & Harris, 1984; Hess & Miura, 1985; Kramer & Lehman, 1990; Linn, 1985). Along similar lines, Culley (1988) found that boys tended to use computers more than girls in their leisure time, including lunch hours. These trends in participation have been recognized, and researchers have noted that over the past decade, significant research has gone into the participation of females in computing activities, courses, and professional computing careers (Shashaani, 1994; Taylor & Mounfield, 1994).

The problem, however, is multi-faceted. Part of the problem stems from females who do not consider computing an option (Clarke & Chambers, 1989). Some researchers approach gender differences in participation in terms of enrollment (Hawkins, 1985; Hess & Miura, 1985; Lockheed, 1985; Linn, 1984), while others operationalize participation generally as use (Becker, 1985) and access (Teague & Clarke, 1993).

Still other researchers operationalized participation to include experience (which subsumed use and access), and found that males tend to use computers more than females (Chen, 1986; Eaton et al., 1983; Eaton, Schubert, Dubois & Welman, 1985; Levin & Gordon, 1989; McGrath, 1992; Ogletree & Williams, 1990). Others have noted that the ratio of males to females increased with the difficulty of the course (Chen, 1986; Hess & Miura, 1985; Lepper, 1985; Liu, Reed & Phillips, 1992).

When this trend is viewed against the backdrop of studies which found lowered anxiety from programming (Aman, 1992; Clarke, 1992; Carlson & Wright, 1993; Kersteen et al., 1988; Marcoulides, 1988), the implications for success in computing from experience with difficult content become quite relevant and important. The interpretation of participation was further expanded by Sanders and Stone (1986), who found females underrepresented in computer courses, professional computer jobs, and home computer use. Other studies, on a more positive and encouraging note, have found that gender-related differences were either reduced or disappeared when background was controlled (Bernstein, 1991; Campbell, 1989; Chambers & Clarke, 1987; Chen, 1986; Clarke, 1990, 1992; Loyd & Gressard, 1984; Schmidinger, 1993).

Clarke and Chambers (1989) investigated the lower participation of women in tertiary computer science courses and designed a conceptual framework utilizing a questionnaire to cover experiences with formal classes, work, and computer systems, languages and applications. The researchers reported that significantly more men than women had prior computing experience using the computer system, had taken computer and mathematics courses in high school, and indicated they planned to continue studies in computing. They found that men had significantly more experience with languages and applications, made significantly greater use of computers, and were the main users of home computers. In addition, they found that gender-typing was stronger for men, and that there were differences in students' perceptions of their own abilities: men cited their ability as a reason for success, while women attributed failure to their lack of ability. There were no significant differences in attrition or achievement, and the most important single contributor to predicting intentions to pursue computing was attitude toward computing. For both men and women, previous study of computing in grade twelve was a significant predictor of achievement; in addition, BASIC for men, and word processing for women, were significant predictors of achievement.

Gender, Computer Anxiety and Achievement

There is considerable research on the relationship between gender and attitudes, particularly anxiety. Consistent with studies of attitudes towards mathematics, many researchers have found gender effects in attitudes towards computers (Chen, 1985; Collis, 1985; Griffin, Gillis & Brown, 1986; Lockheed, 1985; Lockheed & Frakt, 1984; Loyd & Gressard, 1986; Miura, 1986; Liu, Reed & Phillips, 1992). However, this finding was not universal. Marshall and Bannon (1986) used a large-scale assessment and found that although males had greater knowledge about computers, there was no relation between gender and attitude toward computers. Similarly, Campbell (1989) did not find any significant difference in computer anxiety due to a gender effect. Overall, however, reviews of the literature on attitudes have found that computer experience is positively related to attitude (Badagliacco, 1990; Chen, 1986; Gressard & Loyd, 1986; Koohang, 1986, 1987; Koohang & Byrd, 1987; Lever, Sherrod & Bransford, 1989; Loyd & Gressard, 1984). Attitude toward computers reveals persistent and consistent gender differences which favour males in computer-related activities (Collis, Kass & Kieran, 1989; Fetler, 1985; Levelson, 1991; Wilder, Mackie & Cooper, 1985).

Several studies have found that males generally have lower computer anxiety levels than females (Chen, 1985, 1986; Fuchs, 1986; Liu, Reed & Phillips, 1992; Miura, 1986). Some researchers, like Chen (1985, 1986), found an inverse relationship between computer anxiety and confidence, interest, and respect for computer competence. Others, like Carlson and Wright (1993), have confirmed that computer achievement is inversely related to computer anxiety. Marcoulides (1988) went one step further and found that computer anxiety could be used as a significant predictor of computing achievement. Still other studies (Kersteen et al., 1988) have connected computer anxiety to prior computing experience. Taylor and Mounfield (1994), citing the work of Aman (1992), Clarke (1992) and Carlson and Wright (1993), noted that past studies found the most reliable predictions of computing attitude and achievement to be based on knowledge of the amount of prior computing experience. Many researchers (Bernstein, 1991; Clarke & Chambers, 1989; Kersteen et al., 1988) have demonstrated the predictive value of experience for computer achievement. On a subject related to achievement, gender has been shown to be a significant predictor of persistence in computing (Jagacinski, Lebold & Salvendi, 1988).

Gender and Applications

A number of researchers have credited the use of applications with decreased computer anxiety in females (Chen, 1986; Koohang, 1987; Shashaani, 1994; Woodrow, 1992). This finding was arrived at after a sequence of events. In the "early days of computing," researchers (Anderson, 1984; Lockheed, Neilsen, & Stone, 1983) observed differential use of computers between boys and girls in time spent on computers and enrollment in computer classes. Looking at the reported statistics from the national assessment in science, Anderson (1984) noticed gender discrepancy in computing and postulated that the discrepancy could be attributed to cultural socialization, and to "structural factors in the schools that inhibit females from taking advantage from computer opportunities" (p. 7).

Kurland and Kurland (1987) noted that from 1980 to 1985, the use of computers to teach programming and general literacy predominated in the schools, but by 1987 controversy was growing and the trend was leveling. Woodrow (1991) cited Lockheed and Mandinach (1986) as noting that "forcing novices to program ... before they were permitted to [use computers] for other purposes ... [led to] neutral or negative attitudes" and was "cited as a major factor in the declining interest of computers in secondary schools" (p. 491). Although the value of "literacy" was questioned, Kurland and Kurland (1987) noted that there was a "desire to integrate computers directly into the standard curriculum" (p. 324).

With respect to the cultural socialization issue, and to differential participation with computers, Turkle's (1984) hypothesis of "hard" and "soft" mastery, by gender, described the perception of computers as a reflection of culturally-mediated notions of gender. Turkle suggested that in North American culture, girls were taught "soft" mastery and thus related to computers in terms of negotiation, compromise and give-and-take, while boys learned decisiveness and objectivity ("hard" mastery). Shashaani (1994), in reference to Turkle (1984), noted that "males are more programming oriented, whereas females are more applications oriented" (p. 360).

Nelson and Watson (1991) noted that as a result, girls become involved with the artistic and interactive, and boys with the scientific and mechanical, aspects of computing. Kurland and Kurland (1987) concurred that "the problem may not be with computers per se but the socialization of girls from an early age to avoid technical areas" (p. 340). In addition, Kurland and Kurland (1987) noted that since emphasis with the computer was placed upon male values, even though software manufacturers were prompted to design special software for females to encourage use, there were girls who were "put off by the condescending attitude that these programs had towards girls' abilities." This reaction by female students appeared to suggest that what was needed was not condescending software for females, but general applications software that made it "increasingly ... easier to use computers, often requiring only the ability to read and write to perform highly sophisticated maneuvers" (Kay, 1989, p. 308).

Becker and Sterling's (1987) national survey of 2331 public and non-public elementary and secondary schools revealed that although parity was achieved between overall computer use and word processing, many schools continued to report male dominance. The Becker and Sterling (1987) survey reflected the transition from teaching programming to integrating applications into the curriculum. This led to Munger and Loyd's (1989) finding that the "weak association between math [sic] performance and computer attitudes in general, suggest computers are no longer subsumed under mathematics and sciences in the schools but are introduced into other areas of the curriculum." After a review of the literature, Nelson and Watson (1991) reached similar conclusions: "research conducted during the late 1970s and early 1980s identified gender differences in computing skills as a component of math [sic] anxiety," but integration of the computer into the curriculum and word processing accounted for "eliminating math [sic] anxiety as the primary correlate of poor programming skills by the mid-1980s" (p. 348).

First Computer Experiences

Levin and Gordon (1989) underscored the importance of the first computer experience in the following statement:

The attitudes of pupils at the onset of their computer instruction may very well affect their success in future computer programs. It is therefore essential to determine these attitudes prior to computer instruction so that steps can be taken to counteract negative attitudes. (p. 71)

The literature acknowledges the importance of early exposure to computers (Ames & Archer, 1987; Chambers & Clarke, 1987; Chen, 1986; Dambrot et al., 1985; Gardner, Dukes & Discenza, 1993; Fetler, 1984; Frey & Ruble, 1987; Guimaraes & Ramanujam, 1986; Hattie & Fitzgerald, 1987; Hess & Miura, 1985; Hughes et al., 1985; Jones, 1987; Koohang, 1989; Lee, Pliskin & Kahn, 1994; Nelson & Watson, 1990-91; Nelson, Wiese & Cooper, 1991; Nickerson, 1981; Paxton & Turner, 1984; Weil, Rosen & Wugalter, 1990; Schubert, 1986; Shore, 1985; Smith, 1987; Swadener & Hannafin, 1987; Turkle, 1984; Whiting & Edwards, 1988). Several studies reveal a gender-typed pattern of development in computer use and attitudes in preschool and early elementary grades; for example, Nelson and Watson (1990-91) cited the work of Chen (1986), Schubert (1986), Hattie and Fitzgerald (1987) and Jones (1987), who found that no gender differences were apparent in preschool and early elementary grades. Similarly, Swadener and Hannafin (1987), cited in Levin and Gordon (1989), reported no differences in elementary school children's attitudes prior to exposure to computers. There is evidence to suggest that attitudes become more polarized as children get older (Bear, Richards & Lancaster, 1987; Hattie & Fitzgerald, 1987) and that the school context is important to the formation of such attitudes (Schubert, 1986).

Studies have reported that whether in structured or non-structured settings, gender problems begin early; rather than take turns, boys as young as four or five have been found to be aggressive and to monopolize computers (Schubert, 1986). Charlton and Birkett (1995), citing Gribbin (1987) and Klein (1992), found the same monopolizing problem, and noted that the work of both Aman (1992) and Colley, Comber and Hargreaves (1995) found that single-gender schools for females had a role in promoting females' computer-related confidence and computing attitudes, as there was no competition from males.

Kinnear (1995) found that boys were less likely to agree that computers were difficult to use. She found unstructured access resulted in male dominance, and scores between the genders became more polarized after more experience. She attributed the polarization to negative experiences, but noted that on balance, children did not hold sexist beliefs of male superiority, did not believe that computers would be more useful for boys, were not fearful about using computers, and felt positive towards computer use in classrooms (although girls were less positive).

Nelson and Watson (1990-91) noted that by the third or fourth grade, disparities in attitudes and performance emerge, and girls become less technologically motivated and less interested in future computing experiences. They cited research by Chen (1986), Jones (1987), Fetler (1984) and Frey and Ruble (1987) which found these trends became more pronounced in high school. Nelson and Watson (1990-91) further claimed that by adolescence, girls had developed an "ardent dislike for computers while boys' enjoyment and skill levels increase" (p. 347). They also cited Ames and Archer (1987) and Hess and Miura (1985) for the observation that females were spending as much time as their male counterparts using computers. They concluded by noting that factors identified as shaping the "computing skills gender disparity" (Nelson & Watson, 1990-91, p. 348) were to be found in interpersonal interactions and school software selection.

Ames and Archer (1987) conducted an extensive study of 500 mothers' impact on children's school attitudes and found important implications of parental attitudes. On a positive note, they found that parental encouragement could overcome the negative impact of school experiences. The researchers reported that their findings indicated that even without the presence of a computer at home, parents' expectations could turn out to be the deciding factor which could positively motivate females to achieve more when exposed to computer-based experiences. Through social interaction with parents, relatives and teachers, children select the activities which determine the quality of school experiences and their subsequent careers. This has been corroborated by Shashaani (1994).

Nelson, Wiese and Cooper (1991) claimed that these initial or early encounters are the "most critical in determining whether an individual will take to computers" (p. 186). They further maintained that "initial acclimation to ... the culture of computing will shape subsequent encounters ... and could provide the incentive to avoid computers" (Nelson, Wiese & Cooper, 1991, p. 186). Gardner, Dukes and Discenza (1993) underscored the importance of the quality of these early experiences by warning that their findings suggested that if early experiences are negative, computers will be avoided and students will develop negative attitudes towards them. They stressed that educators should focus on being aware of early failures and on preventing these failures from becoming stable negative attitudes towards computers.

Gender and Prior Experience

From their findings, Levin and Gordon (1989) cautioned that:

Investigating sex differences in attitudes towards computers without taking prior computer exposure into account distorts the picture. It also emphasizes the importance of identifying types of computer exposure and experience and the need to examine their effect on attitudes. (p. 85)

Over the years, numerous studies cited in Liu, Reed and Phillips (1992) have documented a consistent increase in computer use and availability in public schooling (Ordovensky, 1989; Webb, 1985; Wright, 1984). Studies have confirmed that an increased number of students arrive at university with prior computer experience (Franklin, 1987; Guinan & Stephens, 1988; Howerton, 1988; Liu, Reed & Phillips, 1992; Ramberg, 1986; Taylor & Mounfield, 1994). However, other researchers observed that females were less likely to have prior computing experience than males (Clarke & Chambers, 1989; Kiesler, Sproull & Eccles, 1986; Kersteen et al., 1988).

Researchers have noted that in the last few years, there has been a dramatic decline in female participation in computer science programs and in the number of females in technical careers (Shashaani, 1994; Taylor & Mounfield, 1994). The subject of prior experience has been controversial, and there are critics who charge that the decline in college enrollment is due to pre-college exposure to computers, which can act as a deterrent to participation in computer science programs. Collis (1985), using a survey of high school students, found that participation in a required computer literacy course did not necessarily improve female attitudes towards computers. She reported that girls who took the course expressed less interest and lower confidence than those who did not take the course in the same school. Linn (1985) reported no difference in computer performance despite the greater computer experience of boys over girls. Krendle, Broihier and Fleetwood (1989) found boys and girls responded differently after gaining experience. Although Taylor and Mounfield (1994) recognized that some pre-exposure could bias females away from the field, they agreed with the bulk of researchers who prescribed pre-college computing experience precisely for addressing these inequities (Aman, 1992; Klein, 1992). Moreover, the research of Taylor and Mounfield (1994) showed that "virtually all experiences were beneficial for females" (Taylor & Mounfield, 1994, p. 291). From another perspective, Levin and Gordon (1989), in a survey of students in grades eight through ten, found prior computing exposure had a stronger effect than gender.

Liu, Reed and Phillips (1992), in a longitudinal four-year study of 914 teacher education students, operationalized prior computer experience into four levels: none, Computer Assisted Instruction (CAI), Computer Managed Instruction (CMI) and programming. The researchers employed gender, major, year and prior computing experience as independent variables and computer anxiety as the dependent measure. They found that gender, year, major and prior experience all had significant main effects on computer anxiety, and reported significantly lower computer anxiety for males. As well, they noted significant differences in computer anxiety across the four computing experience categories. They found that 44.7% of the subjects had no prior computing experience and were highly apprehensive towards computers. They reported that 17% of females had CMI experiences (mostly word processing), compared to 9% of males. Contrary to the literature, however, they found that more males had no prior experience: 47.3% of males versus 43.6% of females. Consistent with other studies, they found programming experience related to lower anxiety; unlike other studies, they found this also to be true of CMI-related use, such as word processing.

Learning Setting

In the literature, the notion of different learning settings appears to be connected with notions of self-directed learning (Kersteen, et al., 1988; Perelman, 1992). Garcia and Weingarten (1986) noted that:

In the information age, people will have to update their knowledge continually. Lifelong education will become the norm, and much of it will occur outside of the traditional education process. As information rapidly changes, the emphasis will be on process, on how we learn - rather than content, on what we learn (1986, p. 83).

The above quote from Garcia and Weingarten (1986) underscores the necessity, inherent in the pace of the proliferation of information, of learning outside of school.

In the same vein, Lieberman and Linn (1991) maintained that computers had "introduced unprecedented levels of autonomy into education requiring us to rethink the traditional concept of learning how to learn" (p.373). Citing Lepper's (1985) finding that students normally resistant to learning activity became captivated by computer environments, Lieberman and Linn (1991) believed computer learning was a unique self-directed exercise which could motivate students to learn autonomously.

In line with these conjectures, researchers like Kersteen, et al. (1988) have found evidence of computing in the home and in recreation-like environments. Other researchers such as Levin and Gordon (1989) have noticed that pupils who own computers felt a strong need for computers in their lives and were more motivated to become familiar with computers, and that boys had significantly greater extra-curricular exposure to computers. Francis and Evans (1995) have noticed that "institutions of higher education are continuing to expand their facilities in terms of hardware, software, courses and provisions for self-directed individual learning" (Francis & Evans, 1995, p.135). Kersteen, et al. (1988) reported that males gained more prior experience through hacking and unguided exploration and acquired "self-initiated experiential differences" (p.327).

Expressions of this need to learn outside the school setting come from various quarters, some with political overtones. Perelman (1992) not only recognized the necessity of learning outside of school, but also challenged long-held notions. Perelman (1992) has argued that it is untrue that people learn best in schools, that school is preparation for the real world, that the learner is a mere passive receiver, that facts are more important than skills, that schooling is good for socializing, and that learning is an individual performance. Instead he claimed "that schools are disconnected from the needs of working and living" (Perelman, 1992, p.166) and summarized the need for learning outside of school in this way: "The gathering momentum of hyperlearning technology, guided by the revealed truth of the science of learning, spells the inevitable extinction of school and the turgid bureaucracy of education" (Perelman, 1992, p.168).

Taylor (1986) recognized the same need but, in contrast, noted that:

Working with computers demonstrates ... that education is endless. Every machine and every piece of software just leads to the next one, requiring the learning of new information and procedures, rendering vast amounts of information totally worthless. By forcing us to help learners adjust to this phenomena, computing could help us affirm in the practice that education is indeed a life-long process (p. 189).

The Standing Committee on Educational Technology (1991) conducted an inventory of existing educational technology resources within the provincial college/institute system. The report focused on educational technology and covered the use of operating systems and applications. The preliminary results indicated that there was a range of computing platforms and applications within the system, and that word processing, electronic mail, spreadsheets and databases were in use system-wide. The report queried all levels of the college/institute system, and administrators, staff and students took part in the survey. The survey found that word processing, followed by spreadsheets, were the most popular applications used by students, and students reported they felt most comfortable with word processing. The survey projected that, in the years ahead, greater computer literacy is likely to be the greatest change affecting the use of information technology in the college/institute system, both in the content being offered and in the way the content will be taught.

Gender, Prior Experience and Setting

Krendl and Lieberman (1988) noted that:

Unstructured environments such as home, after school computer labs, and computer camps may offer different experiences than those provided in structured classroom settings. They permit more risk-taking and experimentation, and may therefore facilitate more creativity and higher-order thinking (p.377).

Researchers have pointed out that aside from in-school use (Chambers & Clarke, 1987; Chen, 1986), much of the unexplained variance in experience can be accounted for by home learning (Campbell, 1989; Chen, 1986; Eaton, et al., 1985; Harvey & Wilson, 1985; Kersteen, et al., 1988; Levin & Gordon, 1989; Nichols, 1992; Sanders, 1984) and by learning in other informal settings (Becker & Sterling, 1987; Chambers & Clarke, 1987; Chen, 1986; Kersteen, et al., 1988; Kiesler, Sproull & Eccles, 1983; Levin & Gordon, 1989; Sanders, 1984). The common thread in all these experiences is unstructured learning. The literature shows that voluntary learning might in some cases prevent computer anxiety (Weil, Rosen & Wugalter, 1990).

Using data on computer experience, ownership, informal use of computers, demographics and social influences, Chen (1986) examined gender differences in the computing attitudes of a random sample of 1138 students from five high schools in both formal and informal settings. Factor analysis yielded five factors: Computer Interest, Gender Equality, Computer Confidence, Computer Anxiety, and Respect through Computers. In general, he found that males exhibited more positive attitudes toward computers and had greater exposure to programming and voluntary experience. He found fewer gender differences in non-programming language course enrollment, significant differences in learning from using a friend's computer and belonging to a computer club, but no significant differences in learning at a public library or office, or in stereotyping among girls. An ANOVA across five levels of experience groups revealed significant main effects for Computer Experience on the factors of Computer Interest, Computer Confidence and Computer Anxiety, and significant effects for gender on Computer Confidence and Computer Anxiety. Contrary to the literature, findings revealed insignificant differences for other courses, no gender difference in the average number of high school mathematics and science courses, and similar performance on recent grades. He noted that males had more interest and confidence with computers than females, but when he controlled for amount of computer experience, both genders responded with similar levels of interest. He concluded that social influences, especially peer groups, played an important role; more boys reported having knowledgeable friends who encouraged use and conferred approval. Contradicting the literature, he did not find comparable social differences for family members, although boys agreed more strongly than girls that having computer skills leads to respect from parents and peers.

Most of the studies on learning setting have focused on the home. Researchers have primarily looked at home computing and its consequences in terms of societal influence and use (Hecht & Dwyer, 1993; Miller & McInerney, 1994-1995). Some studies cite parents' traditional encouragement of their male offspring to pursue computing activities and careers and to use the home computer, and the domination of home computer use by males (Teague & Clarke, 1993). There is research which goes as far as to equate home computer ownership with achievement and opportunity (Howerton, 1988; Levin & Gordon, 1989; Nichols, 1992; Nolan, McKinnon & Soler, 1992; Schimidinger, 1993; Smith, 1989; Widmer & Parker, 1985). From another angle, Miura (1986) found that students with access to home computers were more interested in further participation in computer classes. Ogletree and Williams (1990) found that ownership was associated with more positive attitudes and high self-efficacy scores, and that for males, but not females, ownership was correlated with current computer use. This finding is supported by Teague and Clarke (1993), who pointed out that males tended to use the computers they own more than females. Home computer ownership has been singled out as the best predictor of success by Taylor and Mounfield (1994), a conclusion supported by Levin and Gordon (1989), Nolan, McKinnon and Soler (1992), Teague and Clarke (1993) and Schimidinger (1993).

There have been several quantitative studies, conducted in structured or unstructured fashion, in which equipment was set up in students' homes and monitored (Hecht & Dwyer, 1993; Miller & McInerney, 1994-1995; Watson, Penny, Scanzoni & Penny, 1989). In contrast, Giacquinta, Bauer and Levin (1993) went into 70 homes and conducted a qualitative investigation of the dynamics of home computing within families. Most of their findings confirmed the literature. The researchers found that males did differ from females in the way they used computers. As a general rule, males tended to have broader use and interest. The researchers reported that mothers of both boys and girls tended to use word processing exclusively. There was a clear difference in the time spent on the computer by females and males. Fathers and sons were the major users and key decision makers with regard to the purchase, adoption and use of the computer. The location of the computer and priority of use were also dominated by the males. The researchers reported that attitudes toward computers differed, and that computers brought more social contact for the sons. Females were dependent on other family members for assistance and, unlike the males, did not share their experiences with friends. As well, in line with Turkle's (1984) observations, there was a gender difference in expressions toward the computers: males tended to relate to computers in terms of recreation and experimentation, while females perceived only the utilitarian aspects. The researchers noted that mothers often feared the computer but believed computer literacy was important for their children.

Levin and Gordon (1989) were interested in finding the source of the gender discrepancy in attitudes toward computers that could not be found within the school levels. From previous work, they suspected attitude problems stemmed from socialization, citing Sanders (1984); prior computer experience and literacy, citing Bear, Richards and Lancaster (1987); prior computing experience, citing Loyd and Gressard (1984); and non-class activities, citing the work of Patterson (1984). To this end they operationalized their "prior exposure" variable into three levels consisting of owning a computer at home, participating in extra-curricular programming courses and knowing how to work with a computer. They found that boys had significantly more extra-curricular exposure (in particular, having a home computer) than girls. As well, boys tended to hold more stereotyped attitudes and were more positive toward the computer as a medium of instruction. They found that computer owners felt a strong need for computers in their lives, perceived computers as being more capable, and were more motivated to familiarize themselves with computers than those who did not own one. Their findings suggested that prior computing experience, particularly having a computer at home, had a stronger influence on attitudes toward computers than gender. They inferred, however, that ownership may not be causing attitude differences; perhaps the same positive parental attitudes which put the computers into the home in the first place were at work. They suggested that Loyd, Loyd and Gressard's (1987) finding of erosion of girls' initial affective gains over time revealed that orientation is necessary throughout the school experience and should involve the participation of parents.

Extending their earlier studies on success in college computer science, Taylor and Mounfield (1994) studied the effect of prior computer experience and gender on the success of 656 students enrolled in an introductory Pascal course. Using a survey instrument, the researchers obtained data on students' demographics and academic and computing backgrounds. Experience was operationalized as high school computing course, high school programming course, high school structured programming course, computer applications, and ownership of a computer.

Since this group of students was taking a general introductory course, the proportions were not very discrepant (55% males and 45% females); studies have shown that female enrollment in computer courses decreases as the difficulty level increases (Chen, 1986; Hess & Miura, 1985; Lepper, 1985; Liu, Reed & Phillips, 1992). The researchers found about two-thirds of each gender group had some form of computing experience. The findings underscored the importance of prior computing experience. The researchers noted that "while only certain prior experiences correlated with success for males, virtually all prior experiences were beneficial for females" (Taylor & Mounfield, 1994, p.291). Contrary to the literature, they found that females compared favorably to males in success rate and final grade. Their findings demonstrated "an important role [of pre-college computing] in achieving gender equity ... [and] a significant correlation between early prior computing experiences and success by females in a college computer course" (Taylor & Mounfield, 1994, p.291). Of note, the researchers found computer ownership to be the most significant factor in course success; the female sub-group with the highest success percentages owned a computer, and the percentage of males who owned computers was slightly higher. As well, they found that females were less likely to engage in informal computer activities or recreation, a finding corroborated by Clarke and Chambers (1989) and Kersteen, et al. (1988).

Deschenes (1988) compared users to non-users in a large cross-Canada household survey, with a sample of 2502 Canadian men and women aged 15 and over, on the use of computer technology in the workplace and the home. Although the emphasis of her research was on work, part of the study has relevance to education. Deschenes (1988) found that courses and training were still the most preferred way to learn. Nearly half of the subjects learned through courses and training, but a quarter taught themselves, and 18% were taught by co-workers. Overall, Deschenes (1988) found that men and women learned differently. More women than men relied on courses and training, but men tended to self-direct their learning more than women. She found that most users were quite confident, and there were differences in attitudes between users and non-users. People with more educational attainment tended to use computers more in their work, and the applications used were data processing (41%), database management (35%), word processing (33%) and programming (21%). Deschenes (1988) found that the frequency of home use varied and increased with educational attainment. She also found that the people who used computers in their personal studies fell into the 15-24 age group, and women were well represented. Deschenes (1988) found that the use of computers to play games was more prevalent among men, and an examination of the main types of software revealed that word processing was most common.

Studies with Post-Secondary Students

So far the literature review has emphasized issues surrounding the early introduction of computers; as such, most of the review has concerned the use of computers by primary and secondary students. In this section, studies involving post-secondary students are reviewed. Most early tertiary studies within computer science focused on investigating predictors of achievement in computer science. Ellis (1989) cited researchers who investigated high school GPA and aptitude scores like the SAT and ACT (Butcher & Muth, 1985; Campbell & McCabe, 1984; Gathers, 1986; Oman, 1986; Sorge & Wark, 1984; Whipkey & Stephens, 1984), personality traits (Werth, 1986; Whipkey & Stephens, 1984), previous computer experience with different programming languages (Bateman, 1973; Dey & Mand, 1986; McGee, Polychronopoulos & Wilson, 1987; Oman, 1986; Stephens, Wileman & Konvalina, 1981), cognitive variables (Cafolla, 1987-1988; Dixon, 1987; Hunt & Randhawa, 1973; Sharma, 1987; Stevens, 1983), spatial reasoning ability (Schroeder, 1978) and programming aptitude tests (Gray, 1974).

Ellis (1989) also cited other studies that have been interested in issues surrounding curriculum and programming (Atherton, 1982; Bork, 1982; Brookshear, 1985; Cherniak, 1976; Crawford, 1978; Dey & Mand, 1986; Dijkstra, 1968; Kimura, 1979; Lemos, 1987; Papert, 1980; Ralston, 1984; Self, 1983) and gender as a source of variance for achievement in computing (Campbell & McCabe, 1984; Fowler & Glorfeld, 1981; Kurtz, 1980; Mazlack, 1980; Plog, 1981; Sharma, 1987; Stephens, Wileman & Konvalina, 1981; Werth, 1986).

Since Ellis' study, much of the preoccupation in the computer science literature has been with curriculum and programming (in the form of course proposals, teaching strategies and content) (Carrasquel, 1993; Doremer, 1993; Kelsh, 1993; Van Houten, 1987), surveys of curriculum (Morton & Norgaard, 1993), prerequisites and arguments for the choice of programming language taught in first-year computing courses (Bagert, 1989; Bryant & De Palma, 1993; Konstam & Howland, 1994; Mallozzi, 1985; Mody, 1991; Pagan, 1986; Peacock, Lee & Jeffreys, 1988; Shirkhandfe & Singh, 1986; Stoob, 1984), and interest in the evolution of programming languages and the notion of paradigmatic shifts (Backus, 1978; Brookshear, 1994; Budd & Pandey, 1995; Luker, 1989; Mazaitis, 1993; Pandey, 1990; Tharp, 1984; Turner, 1982). A growing number of studies advocate a return to the traditional emphasis on familiarity with computer architecture and assembly or machine-level programming (Bernat, 1986; Coey, 1993; Decker, 1985; Dunworth & Upatising, 1989; Eckert, 1987; Fuller, 1992; Henry, 1987; Lees, 1986; Sayers & Martin, 1988; Searls, 1993; Schneider, 1986; Styer, 1994; Tangorra, 1990).

There is interest in societal issues in computing (Forrester, 1992; Turoff, 1989), issues in teaching adult learners (Maren, 1987; Ogozalek, et al., 1994; Scott, 1988) and the software crisis (Luker, 1989; Parnas, 1985; Smith, 1985). There is also a developing body of research within computer science investigating gender inequity in computing (Bernstein, 1991; Galpin & Sanders, 1993; Howell, 1993; ISACS, 1995; Kerner & Vargas, 1994; Klawe & Leveson, 1995; Moses, 1993; Ogozalek, 1989).

Educational Studies

Several recent educational studies have been conducted with different student populations: undergraduates (Brock, Thomsen & Kohl, 1992; Campbell, 1992; Francis & Evans, 1995; Lee, Pliskin & Kahn, 1994; Weil, Rosen & Wugalter, 1990), undergraduates and graduates (Farina, Arce, Sobral & Carames, 1991; Pancer, George & Gebotys, 1992; Pope-Davis & Vispoel, 1993), graduate students (Overbaugh & Reed, 1994-1995), college students (Maurer & Simonson, 1993-1994), community college and private university students (Nelson, Wiese & Cooper, 1991), computer science students (Bundersen & Christiansen, 1995; Klawe & Leveson, 1995) and mature or re-entry students (Baack, Thomas & Brown, 1991; Dyck & Smither, 1994; Massoud, 1990, 1991; Ogozalek, et al., 1994; Scott, 1988). Still other studies were carried out with administrators (Flake, 1991) and faculty members (Jackson, Clements & Jones, 1984).

Turnipseed and Burns (1991) compared attitudes between university students and non-student adults using a modified version of Lee's (1970) survey instrument. They found a difference in the structure of attitudes between the two groups. In addition, the researchers found that attitudes had evolved and that negative attitudes were more prevalent among non-student adults and older persons. Their findings led them to advocate shifting the focus of computer education from programming to include the impact of computers on society. Hakkinen (1995) investigated changes in computer anxiety and attitudes towards computers after first-year education students were taught a course in basic computer science. Using a questionnaire, Hakkinen (1995) gathered pre- and post-course ratings. He found that the course and experience with computer equipment reduced anxiety. Klein, Knupfer and Crooks (1993) found that re-entry students were more confident and more interested than traditional students and outperformed them as well. The researchers found that re-entry students were significantly better than their traditional counterparts in the areas of word processing, spreadsheets, databases and graphics. Baack, Thomas and Brown (1991) found that older adults indicated less favorable attitudes towards computer use, while Massoud (1991) reported that older males had significantly more positive attitudes than older females. Dyck and Smither (1994) observed that although older adults were less confident, they were also less anxious, had more positive attitudes and had more affinity for computers than younger adults. Dyck and Smither (1994) also found no gender differences in computer anxiety or attitudes when computer experience was controlled, and that for both younger and older adults, higher levels of computer experience were associated with lower levels of computer anxiety and more positive attitudes towards computers.

In the area of educational research, much of the research at the tertiary level has been in teacher education (Bryd & Koohang, 1989; Hignite & Echternacht, 1992; Kay, 1989, 1990, 1993; Koohang, 1989; Loyd & Gressard, 1986; Marcinkiewicz, 1994; Reed & Overbaugh, 1993; Violato, Marini & Hunter, 1989). Marcinkiewicz (1995) wanted to compare how practicing teachers and pre-service teachers differed in their use of computers and what variables contributed to that use. Marcinkiewicz (1994) reported that nearly all of the pre-service teachers expected to use computers to teach, but only half of the practicing teachers reported that they used computers to teach. They found that while it was self-competence and innovativeness which predicted practicing teachers' use, it was perceived relevance for pre-service teachers. Woodrow (1991) investigated teachers' ratings of their own needs and students' needs. Woodrow (1991) found that while the need for programming was uniformly rated low, teachers rated word processing as the primary need for teachers and students alike, and that knowledge of applications was rated highly by teachers. Regardless of the population of interest, most of these studies on gender disparity in computing are, by definition, concerned with the relationship between experience and attitudes, and with finding strong correlates of computer anxiety (Francis & Evans, 1995; Hakkinen, 1995; Hignite & Echternacht, 1992; Kay, 1993; Klein, Knupfer & Crooks, 1993; Maurer & Simonson, 1993; Nelson, Wiese & Cooper, 1991; Pope-Davis & Vispoel, 1992; Reed & Overbaugh, 1993).

Causal Studies

In one of the few causal studies, Gardner, Dukes and Discenza (1993) employed the attitude-behaviour theory of Ajzen and Fishbein (1980), cited in Pancer, George & Gebotys (1992), to answer why males and females differ in attitudes towards computers. To do this they used a sample of 723 subjects and "empirically examined three core variables" (Gardner, Dukes & Discenza, 1993, p.428): the degree of actual use, self-confidence in using computers and attitudes towards computers. They reasoned that these variables were "core in the sense that if any of the three is negative or deficient, then successful, positive, repeated experiences are less likely to occur" (Gardner, Dukes & Discenza, 1993, p.428). They studied students in fifth through ninth grades because these were the postulated ages for the formation of, and experience with, these constructs.

Their findings supported the predicted effect of computer use on reducing anxiety, increasing confidence and producing favorable attitudes. However, the researchers also reported some unexpected findings. The data appeared to indicate that the more experience students had with computers, the less they appeared to like them. They concluded that even as experience increased confidence, "familiarity tended to breed contempt" (Gardner, Dukes & Discenza, 1993, p.436). They speculated that if students' early computer experiences were negative, students may have avoided computers and never developed strong and stable self-beliefs about their abilities to use them. They further speculated that these brief experiences may have been sufficient to create dislike, as reflected in negative attitudes. They believed that the students:

"... in this state of attitude equilibrium would have weakened the empirical relationship between experience and confidence (and the resulting link to attitudes) because they lacked sufficient experience to have stable, accurate views of their computer abilities" (Gardner, Dukes & Discenza, 1993, p.437).

They realized that these students knew enough to know that they did not like computers, which accounted for the weak but negative link between experience and attitudes. The researchers concluded by emphasizing the importance for educators of ensuring that negative early experiences do not take root in children.

Problems with Definitions of Constructs

Kay's (1992) critique of published educational research claimed that numerous studies investigating gender disparity in computing, both past and current, have been conducted in the absence of sound theoretical frameworks. Kay (1992) is representative of the researchers who have criticized methodologies and called into question findings from theoretically unclear approaches (Gardner, Dukes & Discenza, 1993; Kay, 1992; McInerney, McInerney & Sinclair, 1994; Pancer, George & Gebotys, 1992). However, problems of theoretical underpinnings are closely related to, and predicated on, problems of construct definition. The next section will conclude this review with a brief summary of current problems in the definition of constructs that have been identified in the literature, and their impact on findings and subsequent implications.

With the multitude of computer attitude scales available, a variety of cross-validation studies have been conducted to help educators choose from the broad selection. The present researcher has noticed that these studies appear to follow three types of approaches: replication studies (Pope-Davis & Twing, 1991), single-scale cross-validation studies (Chu & Spires, 1991; Kluever, et al., 1994; Pope-Davis & Vispoel, 1993) and correlational cross-validation studies (Gardner, Discenza & Dukes, 1993; Woodrow, 1991).

However, even as more instruments are added to the selection, researchers are also becoming aware of problems from multiple definitions of anxiety (Okebukola, Sumampouw & Jegede, 1992; Weil, Rosen & Wugalter, 1990). Both Colley, Gale and Harris (1994) and Woodrow (1994) cite Kay's (1992) concerns with problems which stem from at least 14 different definitions of the attitude construct.

The literature of gender disparity research in computing has been dominated by research on gender differences in attitudes, specifically computer anxiety and its relationship to self-efficacy (Jorde-Blom, 1988; Harrison & Rainer, 1992; Miura, 1987; Murphy, Coover & Owen, 1989), to locus of control (Kay, 1989), to confidence (Chen, 1986; Clarke & Chambers, 1989; Koohang, 1989), to interest (Chen, 1986; Kwan, Trauth & Driehaus, 1985) and to stereotyping (Clarke & Chambers, 1989; Collis, 1985; Wilder, Mackie & Cooper, 1985). The trouble is that except for those describing self-efficacy, cited by Busch (1995), all the other variables and researchers listed above were cited by Kay (1992) as variants of 14 different definitions of attitude. The clarity of these constructs thus appears very problematic.

The problem does not stop there. Kay (1992) summarized it by claiming that "The principle constructs in computer-behaviour research are attitudes, aptitude and use, yet there is no common definition of these terms" (Kay, 1992, p.283). He noticed that computer aptitude has been defined as application software (Chen, 1986), as awareness (Griswold, 1983), as experience (Chambers & Clarke, 1987; Durndell, Macleod & Siann, 1987), as terminology (Schaeffer & Sprigle, 1988), as LOGO programming (Siann, et al., 1990, cited in Kay, 1992), as games (Mandinach & Corno, 1985), as word processing (Becker & Sterling, 1987) and as programming (Becker & Sterling, 1987; Linn, 1985; Mandinach & Linn, 1987). Kay (1992) also observed that a similar problem exists with the multitude of definitions of computer use: as camp participation (Hess & Miura, 1985), as computer course enrollment (Miura, 1987; Mandinach & Linn, 1987), as graphing (Hattie & Fitzgerald, 1987), as ownership (Swadener & Jarrett, 1986; Nelson, 1988), as word processing (Hattie & Fitzgerald, 1987) and as extra-curricular activities (Chambers & Clarke, 1987; Chen, 1986; Fetler, 1984).

Although Kay did not explicitly dwell on the relationship among the variables, this presents the problem of overlap; word processing is treated as computer aptitude by Becker and Sterling (1987), but operationalized as computer use by Hattie and Fitzgerald (1987). Even after years of trying to arrive at standard definitions of computer literacy, conceptions of computer literacy are still unclear. Not only are there various competing definitions, some of these very same constructs, such as programming (Cheng, Plake & Stevens, 1985; Gabriel, 1985; Molnar, 1978), awareness (Anderson & Klassen, 1981; Kay, 1990) and application software ability (Kay, 1990), have been used as measures of computer literacy. Kay (1992) concluded that:

A critical review of measures used in gender research is required so that inadequate measures can be weeded out and researchers can begin to speak a common language with respect to the constructs of attitude, aptitude and use (Kay, 1992, p. 284).

The present researcher concurs and has noticed that computer literacy sometimes appears in the literature synonymously with ability, skill, competencies and knowledge (Woodrow, 1992).

This thesis researcher has also noticed, as proof that this is an acknowledged problem but one that has not been dealt with, that Tests in Print IV (Buros Institute of Mental Measurements, 1994) carries a caveat on its first page: "... the reader should keep in mind that a given variable (or concept) of interest may be defined in several different ways". The main thrust of current gender research is to investigate the effect of prior computing experience on anxiety. There is also an attendant problem with varying definitions of experience. As shown in Table 3, which follows this discussion, prior computer experience has been operationalized quite differently in the literature. The multitude of definitions makes analyses of prior computer experience and comparisons between studies problematic.

McInerney, McInerney and Sinclair (1994) investigated the effect of computer experience on the computer anxiety of student teachers. They made three important points which are germane to the current study. First, there is a problem of ambiguity of terminology. The present author had noticed that although most research used the term "computer anxiety", there were various definitions of attitude and anxiety, and similar terms like "technophobia" or "computer aversion", which made comparisons difficult. McInerney, McInerney and Sinclair (1994) echoed this problem, but went one step further, noting that:

Not only does this relatively recent area of research suffer from lack of common, or in some cases, any definition of terms, there is no common psychological or sociological theory which underlies the literature in this area (p. 28).

Second, they also used self-ranked measures of competence on a 4-point Likert scale: Novice, Beginner, Intermediate, Advanced. The researchers found that the competency measures captured a great proportion of the variance and were significant predictors on subscales of the Computer Anxiety Rating Scale (CARS). Third, competency scores interacted with several variables. This led to their conclusion that the "... complexities of these interactions demonstrate the need for a broader, more encompassing definition of experience to be derived with regard to computing than that which commonly occurs in the literature" (McInerney, McInerney & Sinclair, 1994, p. 42).

Table 3. Comparisons of Terms Found in the Literature for Computer Experience

Terminology Used: Studies

Applications: Clarke & Chambers, 1989
Computer experience with different systems: Clarke & Chambers, 1989
Computer-related courses: Clarke & Chambers, 1989; Kay, 1990; Kersteen et al., 1988
Duration: Gardner, Dukes & Discenza, 1993; Gressard & Loyd, 1986; Kay, 1990; Koohang, 1989
Frequency: Gardner, Dukes & Discenza, 1993; Kay, 1990; Woodrow, 1991
Home use or ownership: Chen, 1986; Clarke & Chambers, 1989; Fetler, 1984; Kersteen et al., 1988; Levin & Gordon, 1989; Nichols, 1992
Knowledge of how to work with computers: Levin & Gordon, 1989
Mathematics experience: Clarke & Chambers, 1989
Number of proficient programs: Kersteen et al., 1988
Participation in computer camps and computer clubs: Miura, 1987; Wilder, Mackie & Cooper, 1985
Programming: Clarke & Chambers, 1989; Gardner, Dukes & Discenza, 1993; Taylor & Mounfield, 1991; Wilder, Mackie & Cooper, 1985; Woodrow, 1991
School use: Chambers & Clarke, 1987; Chen, 1986; Gardner, Dukes & Discenza, 1993
Self-reported Likert ratings: Hall & Cooper, 1991
Use in informal settings or non-class use: Chambers & Clarke, 1987; Chen, 1986; Kersteen et al., 1988; Levin & Gordon, 1989

Gaps in the Literature

Chapter One summarized the gaps found in the literature. The review of the literature revealed several important gaps. The first gap, discussed at length above, was the lack of standardization of terminology and operationalization. The next gap has to do with the definition of application software within educational research. Lambrecht (1993) conducted research and found conceptual support for referring to spreadsheets as cognitive enhancers. Brownell, Jadallah and Brownell (1993) claimed that use of databases, spreadsheets, simulations and multimedia systems requires a high level of formal reasoning. Beard (1993) pointed out that courses on word processors, spreadsheets and databases could be focused "on the development of transferable higher order thinking skills" (Beard, 1993, p. 427). This is the same type of terminology that has been used in the cognitive sciences for programming. The education community has embraced the need to teach application software (Byrd & Koohang, 1989; Haigh, 1985; Hunt & Bohlin, 1993; Jackson, Clements & Jones, 1984; Kay, 1993; Levin, 1983). Applications use has been legitimized as a form of programming in computer science (Brookshear, 1994). Cognitive researchers such as Lambrecht (1993) support the notion that application software has properties akin to programming. The education literature may be reacting to this development, but no studies to date have been found which acknowledge applications as programming.

A similar gap exists for operating system use; it does not appear to be a variable that is commonly investigated in educational research. In the hundreds of books and articles the researcher searched through, only four cases were found where the "operating system" variable was investigated (Clarke & Chambers, 1989; Standing Committee on Educational Technology, 1991; Geissler & Horridge, 1993; Kay, 1990). Of these studies, both Geissler and Horridge (1993) and Kay (1990) had a similar one-item question asking about Disk Operating System (DOS) use, while both the Clarke and Chambers (1989) and the Standing Committee on Educational Technology (1991) studies made indirect reference to systems, rather than operating systems. Hence, in all four cases, the question was asked generally, and none queried programming languages, operating systems and applications inclusively. The present study identified all the dominant operating systems used in current personal computing, along with a list of software from the other two domains of programming languages and applications.

No studies were found in education which used the age of first computer experience as an independent variable. One study was found in the cognitive psychology literature, Weil, Rosen and Wugalter (1990), on the etiology of "computerphobia." Although the study did not isolate gender differences, it contributed to understanding the importance of early experiences, and the implications of learning in unstructured settings. Initially, Weil, Rosen and Wugalter (1990) administered the Computer Anxiety Rating Scale (CARS) and the Computer Thoughts Survey (CTS) to 500 students. After receiving a profile of their CARS and CTS scores, 60 subjects volunteered for the second part of the study. After deleting five subjects who did not complete the study, the remaining 55 subjects were assigned into three groups: "Control" (no computer anxiety or negative cognitions), "Computerphobics" (moderate to high computer anxiety and/or moderate-to-extreme negative computer cognitions) and the "Uncomfortable" group (low anxiety, slight negative computer cognitions). The operationalization of computer anxiety was "cast broadly to capture all uses of technology" (Weil, Rosen & Wugalter, 1990, p. 365), and subjects were asked to describe four "significant" experiences with technology: their first experience with computers, their most significant experience with computers, their most significant childhood experience with mechanical objects, and their most significant adolescent experience with mechanical objects.

Findings indicated that the first experience varied and occurred in similar proportions across age ranges. In addition, "Computerphobics" were more likely to have been introduced to their childhood mechanical experience by females. The researchers, noting that the study "points to the importance of early experiences in the development of computer phobia" (p. 376), suggested that the role of the "introducer of technology" in these early experiences was crucial, and that the introducer should have "positive attitudes about technology and feel skilled and comfortable with computers" (p. 377). In addition, the researchers noted that "Computerphobics" may suffer from a more generalized form of "technophobia" and that the notion of it as a transitory problem is inadequate. Of significance to parents, educators and researchers, they found that early experiences were more entertainment based and usually did not have an evaluative component. In contrast, "significant" computer experiences usually had an evaluative component, as in an academic setting, citing Gressard and Loyd (1986). The researchers agreed with Gressard and Loyd's (1986) suggestion that exposure to computers should occur in non-threatening, non-evaluative and non-academic settings and that "when free play is allowed, significant experiences tend to be positive" (Weil, Rosen & Wugalter, 1990, p. 377).

Conceptual Framework

The Gates (1981) conceptual framework was employed in the design of the instrument, the Software Competency Scale (SCS), used in this thesis research. The researcher conceptualized the framework for this study using Gates' (1981) conception of the multi-level, hierarchical nature of software abstraction.

Gates' Conception of Software Abstraction

At the time when personal computers were beginning to make their entry into mass use, the researcher designed microprocessors and Arithmetic and Logic Units (ALUs), and wrote programs in assembly languages and higher-level languages. The researcher, therefore, had a solid understanding of the process by which electronic signals were logically abstracted into more formalized forms such as languages and applications. By contrast, the researcher did not understand how or where operating systems fit into the picture.

In 1981, at one of the launches of IBM's new operating system (IBM PC DOS 1.0) for the new IBM PC, the researcher took the opportunity to interview Gates informally regarding the relationship between the operating software and the hardware. In reply, Gates (1981) drew the sketch of the pyramid shown in Figure 1 below, and placed the Operating System hierarchically into the multi-level pyramid at Level 2. Gates' pyramid began with the hardware base (Level 1), followed by operating systems (Level 2), languages (Level 3) and applications (Level 4). Gates explained that the pyramid structure reflected the inverse relationship between the increase in abstraction and the decrease in availability as one went up through the levels. The researcher reflected that there were indeed many incompatible varieties of microcomputer systems available, several contending operating systems which supported a few major dominant languages, and only a handful of applications which were written in those languages. The Gates (1981) model did not explicitly account for assembly languages, however.

That was in 1981. Currently, the consensus is that Gates is the leader in the personal computing field. However, the literature review did not yield academic publications of his conceptualized pyramid. Nevertheless, there appears to be predictive validity in Gates' (1981) understanding of how software interacts with hardware.

Figure 1. Gates' (1981) Pyramidal Conception of Software Abstraction.

    Level 4: Application
    Level 3: Language
    Level 2: Operating System
    Level 1: Hardware

The heuristic which Gates (1981) had conveyed to the researcher in 1981 has predictive validity in that it has been employed successfully years later (albeit updated) as a theoretical basis to isolate software constructs in this 1996 thesis. However, since 1981, the picture of computing has changed. There is now a large selection of applications, but only a few computer systems (IBM, Macintosh, Sun Microsystems, etc.). The researcher therefore concluded that the opposite relationship now holds true and the pyramid has inverted; therefore, the actual heuristic used in the conceptual framework should be an inverted version of the Gates (1981) pyramid.

Tangorra's Model of Multi-Level System Structure

The search of the literature yielded Tangorra's (1990) model, illustrated in Figure 2. The Tangorra (1990) model was not only inclusive of the Gates (1981) model, but also showed how the abstraction process emanated from similar layers of hardware. The researcher noticed that, in general terms, except for the placement of the Operating System, Tangorra's (1990) model agreed with Gates' (1981) model. The researcher realized that with slight modifications, both models could be incorporated into a hybrid model (please see Chapter Five and Appendix I).

Figure 2. The Multilevel Structure of a Computer System (Tangorra, 1990)

    Level 7
    Level 6: High-level languages
    Level 5: Assembly level
    Level 4: Operating system level
    Level 3: Machine level
    Level 2: Microprogramming level
    Level 1: Digital logic level

CHAPTER THREE

Methodology

The purpose of this study is to investigate the relationship between university students' self-reported software competencies (with programming languages, operating systems and applications) and gender, age of first computer experience, and predominant learning setting. Data were obtained using a 27-item self-reported questionnaire to survey the three types of software competencies represented by the dependent variables.

Since the resulting list would contain a multitude of software items to be analyzed simultaneously, a methodology was devised to organize these software competencies into larger constructs that would be more amenable to analysis. The resulting research design consisted of a combination of univariate and multivariate statistical techniques, and parametric and non-parametric procedures. Due to the relative complexity of the design, this chapter introduces the research design using an overall map of the research accompanied by a short synopsis, which is followed by a more detailed explanation of the individual analyses.

The Research Design

Recall that the three research questions were:

RQ1. Do male and female university students differ in self-reported ratings in software competencies with programming languages, operating systems and applications?

RQ2. Do university students who learn computing at different ages differ in self-reported ratings in software competencies with programming languages, operating systems and applications?

RQ3. Do university students who learn computing in different settings differ in self-reported ratings in software competencies with programming languages, operating systems and applications?

These research questions were addressed by comparing university students' software competency scores by gender, by age of first computer experience and by learning setting, using a multivariate factorial design. The three independent variables represent the factors in the design.

Figure 3. Map of Research Plan (flowchart of the Preparation Phase and Analyses Phase).

Legend: a = algorithm; Ellis' Q = Ellis (1989) questionnaire; f = factor analysis; L = levels; o = operationalization; p = preliminary scale analysis; Q#1 = pre-pilot questionnaire; Q#2 = pilot questionnaire; Q#3 = final questionnaire; r = refined scale analysis; S = subscale scores; t = test-retest analysis; u = univariate.

Synopsis of Research Plan

The map in Figure 3 is made up of two sections. The Preparation Phase is indicated by the first series of processes (represented by the boxes) which funnels down to a single arrow pointing at the "Main Study." The Analyses Phase represents the five analytic procedures used to support the analyses of the three research questions.

Preparation Phase

The series of boxes and arrows on the upper half of the figure trace the processes behind the development of the final questionnaire. The boxes and arrows show that a questionnaire was obtained from the literature (Ellis, 1989), pre-piloted, presented to a panel of expert judges, revised (informed by a conceptual framework), re-piloted, re-revised, and employed in its final form to collect the survey data for the "Main Study."

Analysis Phase

In the "Main Study", there are five arrows corresponding to the univariate t-tests (u), the test-retest analysis (t), the preliminary scale analysis (p), the factor analysis (f), and the operationalization process (o). The meanings of these analyses are detailed below.

Univariate tests.

The "u" series of univariate tests represents two sets of parametric and non-parametric tests used to test significance at the item level. These univariate tests compare all 27 of the raw dependent scores. For item-level significance, the non-parametric tests partitioned the Likert scores into three groups of competency levels [1 = Low, (2, 3, 4) = Medium and (5, 6) = High], whereas the t-tests compared mean raw Likert scores. Overall agreement was found between the two univariate methods. (Please see Appendix A for the univariate findings.)
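The two item-level routes can be sketched as follows, using invented ratings rather than the thesis data: the partitioning of raw 6-point Likert scores into the Low/Medium/High levels used by the non-parametric tests, and the group means compared by the t-tests.

```python
# A sketch with invented ratings (not the thesis data) of the item-level
# comparison described above: partition 6-point Likert competency scores
# into three levels for the non-parametric tests, and compare mean raw
# scores for the parametric t-tests.
from collections import Counter

def competency_level(score):
    """Map a raw 1-6 Likert rating onto the Low/Medium/High partition."""
    if score == 1:
        return "Low"
    if score in (2, 3, 4):
        return "Medium"
    if score in (5, 6):
        return "High"
    raise ValueError("Likert ratings must be integers from 1 to 6")

# Hypothetical ratings for a single software item from two groups
male_ratings = [1, 3, 5, 6, 2, 4]
female_ratings = [1, 2, 2, 3, 5, 1]

# Non-parametric route: frequency of each competency level per group
male_levels = Counter(competency_level(s) for s in male_ratings)
female_levels = Counter(competency_level(s) for s in female_ratings)

# Parametric route: mean raw Likert score per group
male_mean = sum(male_ratings) / len(male_ratings)
female_mean = sum(female_ratings) / len(female_ratings)
```

Collapsing the six scale points into three levels yields better-filled frequency cells for the non-parametric comparison, while the t-tests retain the full resolution of the raw scores.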

Test-retest reliability.

The "t" represents the test-retest reliability of the instrument, which was tested using a sub-sample of 88 subjects. Results indicated that the Software Competency Scale (SCS) was stable (alpha = 0.84) and could be administered over time.

Preliminary Reliability Test.

The "p" represents the preliminary scale analysis, to be compared with "r", the refined scale analysis, which was generated through factor analysis. (The diamond shape "Scale Test" represents the comparison process between the results of the internal reliability tests of the preliminary scale analysis (indicated by the letter "p") and the refined scale analysis generated by the factor analysis (indicated by the letter "r").) This comparison is further explained below.

Factor Analysis and Internal Reliability Analysis.

The "f" denotes factor analysis. Factor analysis confirmed the construct validity of the conceptual framework [an inverted version of the Gates (1981) Model], and reduced the original 27 software competencies to the three underlying software competency constructs. The items extracted for the three constructs were, in turn, used to generate three subscale scores for all subjects. (The arrow pointing directly to the "Factor Analysis" box indicates that the dependent raw scores from the "Main Study" were factor analyzed and used to generate three "refined" subscale scores.)

As noted above, the results of the factor analysis (represented by the letter "r") were compared for internal reliability with those of the preliminary scale analysis. The scale comparisons revealed that the refined subscales resulted in sharper discrimination and could be used as transformed dependent variables (denoted by "S") in the Multivariate Analysis of Variance (MANOVA) to enable subscale comparisons by groups to answer the three research questions.
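The subscale-scoring step can be sketched as follows. The item-to-factor assignment and the subject's ratings below are invented for illustration; they are not the factor solution reported in this thesis.

```python
# A sketch of the subscale-scoring step: average each subject's item
# ratings within each factor's item set. The item-to-factor assignment
# and the ratings are invented, not the thesis's actual factor solution.
factor_items = {
    "languages": ["BASIC", "Pascal", "C"],
    "operating_systems": ["MSDOS", "UNIX"],
    "applications": ["Word Processing", "Spreadsheet", "Database"],
}

def subscale_scores(responses, factor_items):
    """Return one mean score per factor from a subject's item ratings."""
    scores = {}
    for factor, items in factor_items.items():
        ratings = [responses[item] for item in items if item in responses]
        scores[factor] = sum(ratings) / len(ratings) if ratings else None
    return scores

# One hypothetical subject's raw 1-6 Likert ratings
subject = {"BASIC": 4, "Pascal": 2, "C": 3, "MSDOS": 5, "UNIX": 1,
           "Word Processing": 6, "Spreadsheet": 5, "Database": 4}
```

Averaging within each empirically derived item set produces the three transformed dependent variables ("S" in the map) that enter the MANOVA.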

Operationalize Independent Variable (IV).

The "o" arrow from the "Main Study" indicates that the independent variables from the "Main Study" were operationalized into levels (denoted by "L") for the MANOVA comparison. The "a" arrow (which branches from the main "o" arrow) indicates that for post-hoc comparisons, a separate algorithm was written to ensure extraction of the same subjects from the "Main Study" that were used in the MANOVA. The MANOVA analysis, together with the follow-up ANOVAs and post-hoc Scheffé range tests, was employed to answer the three research questions.
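As a minimal sketch of the follow-up univariate step, the function below computes a one-way ANOVA F statistic for a single subscale score compared across three hypothetical groups of one independent variable. The data are invented; the thesis analyses themselves were multivariate, and this shows only the univariate building block.

```python
# A minimal sketch (invented data) of a follow-up univariate ANOVA:
# the F statistic for one subscale score compared across groups of one
# independent variable.
def one_way_anova_f(groups):
    """Return the F statistic for a one-way ANOVA over a list of groups."""
    all_scores = [s for g in groups for s in g]
    n_total, k = len(all_scores), len(groups)
    grand_mean = sum(all_scores) / n_total
    group_means = [sum(g) / len(g) for g in groups]
    # Between-group and within-group sums of squares
    ss_between = sum(len(g) * (m - grand_mean) ** 2
                     for g, m in zip(groups, group_means))
    ss_within = sum(sum((s - m) ** 2 for s in g)
                    for g, m in zip(groups, group_means))
    ms_between = ss_between / (k - 1)      # between-group mean square
    ms_within = ss_within / (n_total - k)  # within-group mean square
    return ms_between / ms_within

# Hypothetical subscale scores for three learning-setting groups
f_stat = one_way_anova_f([[2, 3, 2, 3], [4, 5, 4, 5], [1, 2, 1, 2]])
```

A large F indicates that the between-group variance dominates the within-group variance, which is then followed by post-hoc range tests (such as Scheffé's) to locate which group pairs differ.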

Background Discussion

The preceding paragraphs presented a brief synopsis of the overall research plan. In this next section, the background to the research problem, the theoretical framework, the empirical validation of the questionnaire, and the individual analyses are presented in sequential order. Preliminary results from the test-retest analysis and subscale analysis are also discussed.

At the top of the map (see Figure 3) is a reference to "Ellis' Q." This represents the Ellis (1989) questionnaire which was obtained from the literature and modified for this study. As can be seen in the diagram, there were pre-pilots and a pilot before the main study. At each of the stages, feedback necessitated changes, and the questionnaire was redesigned twice (Q#1, Q#2) before reaching its present form (Q#3).

This study was originally intended to be a replication of Ellis' (1989) study of group differences in programming knowledge and achievement. Since Ellis' study, the advent of powerful, user-friendly software has influenced personal computing (Bernstein, 1991) and outdated the previous meaning of computer literacy and programming (Kay, 1989). As well, the field is presently preoccupied with philosophically-based arguments about paradigmatic shifts in languages (Brookshear, 1994; Luker, 1989).

In addition, there is thinking that applications "foster higher cognitive skills similar to those engendered in programming" (Lockheed & Mandinach, 1986, p. 17, cited in Woodrow, 1991), and constitute computing ability performed at a level which does not require intimate knowledge of the hardware or software to run (White, 1993). In terms of function, these sophisticated tasks are on par with those that are self-programmed, and it is easier to purchase pre-programmed packages than to design them from scratch (Barnes, 1986, cited in Kay, 1989).

Pre-Pilot Findings

The Ellis (1989) questionnaire had four questions on programming languages. The first redesign of the pre-pilot questionnaire (Q#1) added to the Ellis questionnaire a second page with four questions on applications, access to different computer systems, student academic major and reasons for choice of major. Pre-pilots were carried out with grade 12 students and students enrolled in first-year computer science courses at the same university used in the main study. Findings indicated that the list of programs was too short, some computer systems were obsolete, the second part of the questionnaire only allowed categorical-level analysis, and the open-ended question on choice of major was non-trivial. In addition, the pre-pilots revealed that students could preview the first page and choose to opt out of the study. The findings also underscored the importance of having prior rapport with the computer science faculty whose classes were visited.

Theoretical Perspective

Findings from the two initial pre-pilots led the researcher to conclude that not only was it necessary to make significant revisions to the pre-pilot questionnaire (Q#1), the research needed an entirely different perspective. In addition to the new languages like C and Scheme which had come into prominence since Ellis' study, the items on the pre-pilot questionnaire (Q#1) had only asked about computer systems rather than explicitly about operating systems. Thus, to reflect these changes, the revised questionnaire needed to accommodate operating system competency alongside competency with programming languages and applications.

The researcher dropped the idea of a replication study but decided to extend Ellis' (1989) work to answer a different set of research questions on gender differences, using a modified version of his questionnaire. The literature review did not yield a comprehensive and specific instrument for measuring software competency. To ensure that the new instrument would be theoretically sound, the researcher employed a conceptual framework using an inverted version of the Gates (1981) Model reviewed in Chapter Two.

Gates' (1981) Model as the Conceptual Framework

The literature expressed the importance of assessing a broad range of experience with various software items (Clarke & Chambers, 1989; McInerney, McInerney & Sinclair, 1994). Some researchers, like Kay (1993-94), went further, claiming that "As a software package develops and changes its format, the construct used to assess ability to use that software has to change accordingly" (p. 272).

With respect to the three research questions, and considerations like the previous claim from the literature, the Gates (1981) Model of Software Abstraction helped to clarify the meaning of the dependent variables. As noted in Chapter Two, the conceptual framework which was employed to ensure that the design of the revised pilot questionnaire was theoretically sound was obtained from an informal interview with Gates (1981). Specifically, the Gates (1981) Model of Software Abstraction [hereafter referred to as the Gates (1981) Model] was used to operationalize the dependent variables of the study, and to ensure that the coverage of software competencies was complete.

The researcher reasoned that the distinct software layers as envisioned by Gates (1981) could be used to map the underlying constructs of the three types of software competency present in current personal computing. With the help of a Computing Studies in Education professor, the researcher proceeded to use the Gates (1981) Model as a heuristic to compile a list of software items which fell under these constructs, and to develop a psychometrically sound instrument to measure software competency. The proposed software items were incorporated into the revised pilot questionnaire, specific competency-based anchors were designed for these items, and the instrument was verified by an expert panel of first-year Computer Science professors for completeness, and for face and content validity.

There was, however, one important difference in the interpretation of the Gates (1981) Model. As explained in Chapter Two, while the positions of the different layers of the pyramid still held true relative to each other in 1993, the proportions of the different software layers represented by the pyramid were the inverse of Gates' (1981) conceptualization. (To put this idea into perspective: in 1981, there were more computers than there were applications software, but in 1993, there were more applications software than computers.)

The Gates (1981) model was updated by the researcher for the state of computing in 1993, and an upside-down version of the Gates (1981) pyramid was used as the conceptual framework for this study. This change was reflected in the proportion of the number of items in the final questionnaire representing each construct; there were ten applications, eight programming languages and five operating systems. To further assure the validity of the revised instrument, it was re-piloted with Computer Science students from the same university and improvements were made to the final instrument. This section provided an overall summary of the "preparation phase" indicated in the research map. The details of the procedures follow.

Instrument Development

The initial list of items, together with recommendations from the expert panel, resulted in the inclusion of five more languages (Assembler, C, C++, HyperCard, and Scheme) for a total of eight software items for the programming language section (Assembler, BASIC, C, C++, LOGO, Pascal, HyperCard, and Scheme) and two more applications (Music and Statistics) for a total of ten items for the applications section (Accounting, Communications, Database, Desktop Publishing, Games, Graphics, Music, Spreadsheet, Statistics and Word Processing). In addition, a new category containing five software items was added for the operating system section (Apple ProDOS, Macintosh Finder, MSDOS, UNIX and Windows).
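For reference, the 23-item pilot pool described above can be organized as a simple data structure. The grouping mirrors the conventional categories used in the pilot questionnaire; the dictionary itself is an illustrative sketch, not the instrument's actual format.

```python
# The 23-item pilot pool described above, organized for reference.
# Item names are transcribed from the text; the structure is a sketch,
# not the instrument's actual file format.
ITEM_POOL = {
    "programming_languages": ["Assembler", "BASIC", "C", "C++", "LOGO",
                              "Pascal", "HyperCard", "Scheme"],
    "operating_systems": ["Apple ProDOS", "Macintosh Finder", "MSDOS",
                          "UNIX", "Windows"],
    "applications": ["Accounting", "Communications", "Database",
                     "Desktop Publishing", "Games", "Graphics", "Music",
                     "Spreadsheet", "Statistics", "Word Processing"],
}
total_items = sum(len(items) for items in ITEM_POOL.values())  # 23 items
```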

Competency-Based Anchors

A Computing Studies in Education professor designed the 6-point, competency-based, anchored Likert scale which was used to guide respondents in providing software competency scores for the above software items. The competency-based anchors were specific to each subscale, which was reflected in the revised action verbs for the various software types. The universal question "How well do you know ..." used throughout the Ellis (1989) questionnaire was replaced by three action verbs which emphasized competence and were content-specific to the type of software under query. The statement "How well you can program" was used for languages, "How well you can control" was used for operating systems, and "How well you can use" was used for applications. Please refer to Appendices B and D respectively for the initial 23 software-item pool for the pilot instrument (Q#2) and the data coding protocol.
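The anchor-stem logic described above amounts to a simple mapping from software type to action verb, which can be sketched as follows. The function name and output format are illustrative, not part of the instrument.

```python
# A sketch of the anchor-stem logic described above: each software type
# receives its own content-specific action verb. The function name and
# output format are illustrative, not part of the instrument.
STEMS = {
    "language": "How well you can program",
    "operating_system": "How well you can control",
    "application": "How well you can use",
}

def question_stem(software_type, item):
    """Build the competency question stem for one software item."""
    return f"{STEMS[software_type]} {item}?"
```

For example, a language item would read "How well you can program Pascal?" while an application item would read "How well you can use Spreadsheet?", each rated on the 6-point anchored scale.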

A Priori Conjectures of Competency Constructs

Sethi (1989) noted that "Originally written in [assembly language], the kernel of the UNIX operating system was rewritten in the programming language C ..." (p. 9). Since UNIX was originally written in assembler and rewritten in C, this invites the conjecture that there may possibly be closer links between C and UNIX or Assembler than there are between C and Pascal, since C and Pascal are very disparate programming languages. The problem is that software is currently grouped only by conventionally understood labels, which may be a less relevant method of grouping. Although the researcher has not found empirical proof which compares the grouping of different types of software, the above connection, and others like it, had not escaped the expert panel or the researcher.

Both the researcher and the expert panel had conjectured a priori that the true structure of the construct of software competency was not as clear-cut as the conventional categories imply. It was understood by both the researcher and the expert panel that the underlying constructs revealed by the factor analysis would likely see software items which had been grouped together by label only shift from their conventional categories. However, for the purposes of completeness of software items for the questionnaire, and the face and content validity of the items, items were grouped by conventionally understood categories in the revised pilot questionnaire (Q#2) and submitted for verification by the expert panel.

Development of Final Survey Questionnaire

Sixty-three Computer Science and seven Computing Studies in Education students participated in the final pilot study prior to the main study. To resolve the previewing problem, questionnaires were sealed in paper envelopes, and opening the envelope implied consent. Unfortunately, the envelopes introduced a monitoring function and interfered with the survey. For the main study, participation questions were placed on a separate page and overheads were used in conjunction to explain the voluntary nature of the study.

The feedback from the two groups of students offered some useful insight into the nature of computing for different groups of students. The Computer Science subjects offered many valuable suggestions, spotted typographical errors, stressed the need for examples, suggested instrument revisions, and called attention to the omission of key software items and to problems with formatting. Three of their points were especially important. First, it appeared that many subjects were unfamiliar with certain terminology, and the subjects stressed that they needed examples to help them understand terms they had not heard before.

Second, they suggested the inclusion of four items (Fortran, OS/2, Windows NT and Utilities) which, if omitted from the main study, could have been very problematic and could have resulted in the distortion of the subscales. Third, they noticed errors in the naming of software items (e.g., UN*X instead of UNIX and C+ instead of C++) and in instrumentation (inappropriate use of number of lines as a measure of programming competence) which would have affected the reliability and validity. The feedback from the Computer Science class provided a crucial perspective which paralleled that of the professors, and greatly contributed towards the reliability and validity of the questionnaire and the findings of this research.

Another important concern of the researcher was whether the scale would be able to discriminate sufficiently among responses. As mentioned above, competency-based anchored stems were used to help in this discrimination. Without the ability to discriminate well, all subsequent analysis would have been jeopardized. Descriptive statistics of the pilot indicated that this objective was achieved.

The seven Computing Studies in Education graduate students' scores discriminated as well as the Computer Science students', but their knowledge range was narrower and their scores were lower. The education students were all familiar with the Macintosh Finder, but most did not know how to program in any language. These students represented the growing presence, since the Ellis (1989) survey, of a new type of computer user - people who could use computers but did not need to know how to program. Their contribution was significant because they gave the researcher indications of a different type of user, an idea of how a group of education students would answer the questionnaire, and issues concerning norming the new scale to different groups.

The pilot resulted in the inclusion of one more programming language item (Fortran), two more operating system items (OS/2 and Windows NT) and one more application item (Utilities). These changes brought the programming language subscale to nine software items, the operating system subscale to seven software items, and the application subscale to 11 software items, for a total of 27 items. Please refer to Appendices C and D for a copy of the final survey questionnaire and the data coding protocol respectively. Taking all suggestions of the pilot groups into consideration, the final questionnaire (Q#3) was generated for the test-retest reliability and main studies.

Preliminary Scale Analysis

For the preliminary scale analysis, a conventional subscale is defined as a subscale whose items are grouped according to conventionally understood labels. For example, all the items in Section II of the questionnaire, which contains the programming languages, are grouped into a "Programming Language Conventional Subscale." The items for the Operating System Conventional Subscale and the Applications Conventional Subscale were similarly grouped.

The means, standard deviations, and internal reliability coefficients were computed for each of the three conventional subscales and summarized in Table 4 below. The intention was to compute internal reliability coefficients for the preliminary subscales prior to the factor analysis, so that the refined subscales produced by the factor analysis could be compared against them for improvement. These computations are listed below in order for each of the three conventional subscales.
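The internal reliability coefficients reported below are Cronbach's alphas. As a minimal sketch of how such coefficients can be reproduced from an items-by-subjects score matrix (in Python with NumPy, not the SPSS routines actually used in the study; the function names are illustrative):

```python
import numpy as np

def cronbach_alpha(items):
    """Raw Cronbach's alpha for an (n_subjects x k_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # per-item variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def standardized_alpha(items):
    """Standardized item alpha, from the mean inter-item correlation."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    r = np.corrcoef(items, rowvar=False)         # k x k correlation matrix
    mean_r = (r.sum() - k) / (k * (k - 1))       # mean off-diagonal correlation
    return k * mean_r / (1 + (k - 1) * mean_r)
```

The standardized form is the one quoted in the subscale summaries (e.g. 0.72 for the language subscale); it depends only on the mean inter-item correlation, so it is insensitive to items having different variances.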

Programming Language Conventional Subscale

The programming language subscale item means ranged from 1.23 for C++ to 3.77 for Pascal. The standard deviations for these items ranged from 0.84 for Assembler to 1.81 for Fortran. The subscale mean was 2.05 (range 1-6). The internal reliability coefficient, the standardized item alpha, for the language subscale was 0.72.

Operating System Conventional Subscale

The operating system conventional subscale item means ranged from 1.23 for Windows NT and OS/2 to 3.73 for MSDOS. The standard deviations for these items ranged from 0.76 for Windows NT to 1.56 for Windows and MSDOS. The subscale mean was 2.08 (range 1-6). The internal reliability coefficient, the standardized item alpha, for the operating system subscale was 0.68.

Application Conventional Subscale

The application conventional subscale item means ranged from 1.38 for Statistics to 3.82 for Word Processing. The standard deviations for these items ranged from 0.98 for Statistics to 1.93 for Games. The subscale mean was 2.42 (range 1-6). The internal reliability coefficient, the standardized item alpha, for the application subscale was 0.84.

Table 4
Means, Standard Deviations and Item-Total Correlations for Preliminary Conventional Subscales

Conventional Subscales                        Mean   SD    ITC^a

Programming Language Conventional Subscale
How well you can program in Assembler?        1.26   0.84  0.49
How well you can program in BASIC?            2.86   1.78  0.55
How well you can program in C?                1.36   1.08  0.49
How well you can program in C++?              1.23   0.85  0.46
How well you can program in Fortran?          2.68   1.81  0.08
How well you can program in HyperCard?        1.39   1.13  0.28
How well you can program in LOGO?             1.59   1.20  0.44
How well you can program in Pascal?           3.77   1.80  0.53
How well you can program in Scheme?           2.27   1.75  0.18
Total Subscale (alpha=0.72, N=657)            2.05

Operating System Conventional Subscale
How well you can control Apple ProDOS?        1.30   0.86  0.34
How well you can control Macintosh?           1.83   1.51  0.36
How well you can control MSDOS?               3.73   1.56  0.52
How well you can control OS/2?                1.23   0.77  0.38
How well you can control UNIX?                1.52   1.02  0.32
How well you can control Windows?             3.72   1.56  0.55
How well you can control Windows NT?          1.23   0.76  0.30
Total Subscale (alpha=0.68, N=646)            2.08

Application Conventional Subscale
How well you can use Accounting?              1.49   1.07  0.33
How well you can use Communications?          2.06   1.64  0.61
How well you can use Database?                2.31   1.46  0.48
How well you can use Desktop Publishing?      1.93   1.43  0.54
How well you can use Games?                   3.79   1.93  0.59
How well you can use Graphics?                2.92   1.67  0.63
How well you can use Music?                   1.53   1.14  0.51
How well you can use Spreadsheet?             3.04   1.58  0.58
How well you can use Statistics?              1.38   0.98  0.23
How well you can use Utilities?               2.32   1.78  0.65
How well you can use Word Processing?         3.82   1.85  0.64
Total Subscale (alpha=0.84, N=587)            2.42

Note: ^a ITC = Inter-Item Correlation

Preliminary Analysis Discussion

Preliminary results indicated that the Programming Language Conventional Subscale and the Operating System Conventional Subscale had moderate internal reliabilities (0.72 and 0.68, respectively), but the Applications Conventional Subscale had a high internal reliability (0.84).

The difference in internal reliability suggested that there was a question of consistency among the subscales. Moreover, this finding appeared to agree with the a priori conjecture of shifting items.

The internal reliability of the total scale was high at 0.90. This suggested that most of the software items arrived at through expert opinion were valid for measuring software competency.

Some of the Inter-Item Correlations (ITCs) were low, which suggests that the scales, as grouped by conventional labels, were not distinct scales. Factor analysis is reported in Chapter Four to clarify the refinement and creation of distinct subscales.

Test-retest Subjects

Subjects for the "Main Study" were students enrolled in Fall 1993 and Spring 1994 first-year computer courses. A sub-sample of 88 students who responded to the questionnaire in the Fall did so again in the Spring. The scores from both terms were matched for these 88 subjects to produce an index of test-retest reliability. The ten-week interval between the two tests was a deliberate strategy to ensure that there was no threat of testing effects from the carry-over associated with memory retention.

Results of Test-Retest Analysis

As can be seen from Figure 3, the test-retest, denoted by the letter "t", was an independent small study involving a sub-sample of 88 subjects, whose two sets of scores for Fall and Spring were matched and correlated. Although there were some individual item fluctuations, overall the correlation remained quite high at 0.84, which indicated that the scale was reasonably stable over time.

Main Study Subjects and Analysis

The subjects for this study consisted of students enrolled in first-year computer science courses for the 1993-1994 academic year at a large provincial university. These students were registered in 12 courses. Seven courses were surveyed in November and December of 1993, and five courses were surveyed in February 1994.

Since all the data collection from the courses was from intact samples, the methodology employed was ex post facto research (Wiersma, 1986). Prior arrangement had been made with each of the professors to interrupt class time for 15 minutes to hand out the questionnaires. Before each administration, the researcher used overheads to train subjects on how to fill out the questionnaire, in an attempt to reduce error from subjects misunderstanding the survey items. Participation was voluntary and anonymity was assured.

The 27 dependent competency scores for all the subjects were subjected to a factor analytic procedure. The results captured three constructs which were used to produce the Software Competency Scale (SCS). Dependent scaled scores were generated using the factor analysis-reliability analysis method. Operationalizing the levels for the independent variables was an involved procedure and is detailed below.

Operationalizing the Independent Variables

For each subject, besides the 27 competency scores, there were 27 levels of the "age learned" variable and 27 levels of the "learning setting" variable. The methodology had to operationalize these variables according to a logic informed by the literature, parsimoniously reducing each independent variable to a manageable number of levels.

Operationalizing Age of First Computer Experience

Two concerns expressed in the literature, that gender disparity has been reported to begin early (Bear, Richards & Lancaster, 1987) and that computer experience should begin early (Schubert, 1986), provided the rationale for operationalizing the Age Learned variable. The literature contends that it matters when a child is first exposed to computers. The researcher accordingly operationalized the Age Learned variable as the age of first computer experience (FirstAge). Hereafter, the researcher will use the term FirstAge to refer to the age of first computer experience.

In this way, Age Learned was operationalized as a very specific age. To operationalize this variable for relevance to the research question in terms of schooling, the researcher decided to locate the delimitation of the levels between the sixth and seventh grade, which approximately corresponds to the transition from elementary to secondary school. Thirty-one subjects for whom Age Learned was undefined were lost.
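The text does not spell out exactly how the 27 per-item responses were collapsed into a single FirstAge level, so the following Python sketch makes two labeled assumptions: that the earliest reported grade counts as the first experience, and that subjects with no usable response are treated as undefined (mirroring the 31 dropped subjects):

```python
def first_age_level(grades_learned):
    """Collapse per-item 'grade first learned' responses to one FirstAge level.

    `grades_learned` holds the grade (1-12, or None if unanswered) at which
    each software item was first learned. ASSUMPTION of this sketch: the
    earliest reported grade defines the first computer experience; the thesis
    does not state the exact derivation rule.
    """
    answered = [g for g in grades_learned if g is not None]
    if not answered:
        return None                 # FirstAge undefined -> subject dropped
    earliest = min(answered)
    # Delimitation between Grade 6/7 schooling transition: "Grade 7 or
    # earlier" vs "after Grade 7", as in Table 6.
    return "K-7" if earliest <= 7 else "Post 7"
```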

Operationalizing Learning Setting

The operationalization of the Learned At variable was not quite as straightforward. The literature on learning setting referred to learning in several settings, but most of the extracurricular learning centered on the home. Taking this criterion into consideration, multiple responses were dropped and only responses which fit into the categories of School, Higher Education and Home learning were retained.

In contrast to the Age Learned variable which was a specific age, the Learned At variable appeared to be more global. It made more sense to construe this variable in terms of predominant learning setting rather than the first setting. For example, a student may learn Pascal at school, go home and practice it, go to summer camp and work on it some more, and use it in her/his job. A special algorithm was written in SPSS to derive the predominant learning setting for each subject.

The operationalization resulted in a loss of 94 subjects for whom Learned At was undefined. The researcher similarly operationalized the Learned At variable as the learning setting (LearnSet); hereafter, the researcher will use the term LearnSet to refer to the learning setting.
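The SPSS algorithm for deriving the predominant setting is not reproduced in the text; one plausible reading is a per-subject mode over the retained settings, with ties and empty responses treated as undefined. A hedged Python sketch under those assumptions:

```python
from collections import Counter

def predominant_setting(settings):
    """Derive the predominant learning setting for one subject.

    `settings` holds one reported setting per software item; only 'School',
    'Higher Education' and 'Home' are retained, as in the text. ASSUMPTIONS
    of this sketch: the predominant setting is the modal retained response,
    and ties or empty responses are undefined (one plausible reading of how
    94 subjects came to be dropped).
    """
    counts = Counter(s for s in settings
                     if s in ("School", "Higher Education", "Home"))
    if not counts:
        return None                 # LearnSet undefined -> subject dropped
    ranked = counts.most_common(2)
    if len(ranked) == 2 and ranked[0][1] == ranked[1][1]:
        return None                 # no single predominant setting
    return ranked[0][0]
```

For example, a student who reports learning Pascal at school but most other software at home comes out as a Home learner, matching the "predominant rather than first setting" construal above.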

Statistical Procedures Used to Analyze Results

Since covariance as well as variance exist between and among the software competency scores, between and among the independent variables, and across the independent and dependent variables, MANOVA was employed to test for all main and joint-effects. As explained above, the dependent competency scores had already been generated as subscale scores from the factor analysis and internal reliability analysis. Once the levels of the Age Learned and Learned At were partitioned, combining both dimensions resulted in the successful execution of the MANOVA. 55

Prior to interpreting the findings of the factor analysis and MANOVA, the underlying assumptions of each procedure were checked to ensure confidence in the generated significance levels. (Please refer to Appendix H for tests regarding the underlying assumptions of the MANOVA). In the event of significance, if it was necessary to determine which pairs of means differed significantly (as was the case for learning settings), post hoc Scheffe range tests were conducted. The results of the MANOVA (and the default follow-up ANOVAs), and if necessary the Scheffe range tests, indicate whether differences are significant between males and females, between earlier and later introductory experience with computers, and between learning at school, in higher education or at home.

Before leaving this section on methodology, two more considerations need to be addressed. The researcher needs to provide the justification for using a self-reported measure of software competency and the rationale behind the chosen factor analysis-reliability analysis-MANOVA methodology.

Self-Reported Competency Scores

Behavioral research often involves obtaining dependent measures through self-reporting (Cronbach, 1960; Kerlinger, 1973; Horrocks, 1964; Nunnally, 1978). The attitude-behavior theory of Fishbein (1979) and Fishbein & Ajzen (1975) has been cited in the literature to attest to the validity of self-reporting. Kay (1993), citing Ajzen (1988), stated that, "Although self-reports are often considered inferior to direct observation, Ajzen (1988) maintains that they are comparable in terms of accuracy" (p. 374).

As indicated in the literature review, Gardner, Dukes and Discenza (1993) used this theory of reasoned action by Fishbein and Ajzen (1975) to determine causation between beliefs about self-perceived computer confidence and attitudes towards computers. The theory has also been validated in an applied study of managerial attitudes towards computer information systems (Pavri, 1988, cited in Gardner, Dukes & Discenza, 1993). According to the theory of reasoned action, experiences give rise to beliefs, which in turn lead to attitudes and behavioral intentions (Gardner, Dukes & Discenza, 1993), and these intentions are predictive of overt behaviors (Kay, 1988).

Furthermore, the search of the literature revealed at least three other studies which used self-reported measures of competency (Geissler & Horridge, 1993; Marcinkiewitz, 1994; McInerney, McInerney & Sinclair, 1994). Geissler & Horridge (1993) used a 5-point Likert scale ranging from Very Low to Very High. Marcinkiewitz (1994) used a 7-point Likert scale ranging from Strongly Agree to Strongly Disagree. McInerney, McInerney & Sinclair (1994) used a 4-point Likert scale which rated competency as 1=Novice, 2=Beginner, 3=Intermediate and 4=Advanced.

Factorial Validity and Reliability of a Scale

The standard procedure employed in the literature to test the factorial validity of a new scale consists of two complementary analyses - an internal reliability analysis and a factor analysis (Massoud, 1990; Kay, 1993). The internal reliability analysis is conducted to verify the stability of subscale scores (Kay, 1990; Kleuver et al., 1994). Factor analysis is performed to determine construct validity and to clarify that the subscales are distinct. For comparison using subscale scores, MANOVAs have been performed on items within subscales (Shashaani, 1994a, 1994b). This research combined the approach of all of the above studies to arrive at the present design. Factor analytic procedures were employed to create subscales which were then checked for internal reliability. Rather than performing a MANOVA for items within several subscales, the MANOVA was performed for all subscales in this study.

Using the research map, the researcher outlined the plan for the data collection and the data analysis. This discussion included how the Gates (1981) Model was employed as the conceptual framework to operationalize the dependent variables, and the reasons for the choice of the three sub-analyses used to support the main analysis (the reliability analysis, the factor analysis, and the univariate tests). Next, the plan by which both the dependent and independent variables were reduced to enter the MANOVA as levels and measures was followed in detail. Finally, the justification for using self-reported software competency measures, as well as the merging of different quantitative analyses based upon examples in the literature, was discussed.

The next chapter applies these methodologies and discusses findings from employing them.

CHAPTER FOUR

The Main Study Findings

Reporting and Presentation Plan

The purpose of this research was to investigate the effect of gender and prior computing experience on university students' self-reported competency scores in programming languages, operating systems and applications. For the purpose of this thesis, the "prior computing experience" variable was comprised of the two dimensions, the age of first computer experience and the predominant learning setting. This chapter reports the descriptive statistics, the main findings of instrumentation, the factor analytic procedures, MANOVA analysis and Scheffe post hoc range tests. (The details of these analytical procedures and supplementary analyses, however, have been moved to the appendices).

Descriptive Statistics

The data were obtained from a sample of 765 university students from several faculties, taking first-year computer science courses in the Fall 1993 and Spring 1994 academic terms. For the academic year 1993-1994, 83% of the first-year, first-time university students came from the K-12 system in British Columbia (UBC Factbook, 1993-1994). Based upon enrollment figures for the classes surveyed, 59% of the students enrolled in the courses participated in the survey. Of these students, 740 subjects were retained for the final analysis after deleting subjects with incomplete forms and dropping outliers. The next few paragraphs and Tables 5, 6, 7, 8 and 9 detail the descriptive statistics for these subjects. In these tables, the amounts of specified data are included and percentages are based upon the entire sample.

Table 5 reports that the males in the survey outnumbered the females by a ratio of 7:3. This finding corresponds to typical participation rates in the literature (Bunderson & Christiansen, 1995; Shashaani, 1994a). According to Table 6, 42.8% of these respondents had their first computer experiences in Grade 7 or earlier, while 53% had their first computer experiences after Grade 7.

At first glance, these figures suggest approximately even numbers of students who had earlier computer experiences (Grade 7 or earlier) and later computer experiences (after Grade 7). However, when this figure is broken down by gender, the gender disparity in computing emerges. According to the data, males had earlier computer experiences (Nelson & Watson, 1991). This finding is supported by the literature: the majority of the girls were not exposed to computers until secondary school (Shashaani, 1994a). Table 6 shows that only 26% of the females had their first exposure in Grade 7 or earlier. In contrast, approximately half the males (50.3%) had their first exposure in Grade 7 or earlier.

Table 5
Percentage of Respondents by Gender

Gender           Frequency   Percent
Female               219       29.6
Male                 511       69.1
Not Specified         10        1.4
Total                740      100.0

Table 6
Percentage of Respondents by Gender Across Age of First Computer Experience

                     Females         Males          Group
First Experience    n      %       n      %       n      %
K-7^a              57   26.0     257   50.3     317   42.8
Post 7^b          149   68.0     240   47.0     392   53.0
Not Specified      13    5.9      14    2.7      31    4.2
Total             219  100.0     511  100.0     740  100.0

Note: ^a K-7 = First Computer Experience in Grade 7 or Before
      ^b Post 7 = First Computer Experience After Grade 7

With respect to the learning setting, a similar pattern emerges. The learning setting data corroborate the data on the age of first computer experience. While 47.3% of the respondents primarily learned at home, the breakdown by gender in Table 7 reveals that it is the males who predominantly learned at home. The data indicate that females tended to be evenly split between learning at School (29.7%), Higher Education (25.1%) and Home (28.3%), but males predominantly learned at Home (55.4%). Based upon the earlier data on the age of learning, this was an expected finding which is supported by the same body of literature (Chen, 1986; Levin & Gordon, 1989).

Table 7
Percentage of Respondents by Gender Across Learning Setting

                      Females         Males          Group
Learning Setting     n      %       n      %       n      %
School              65   29.7     105   20.5     170   23.0
Higher Education    55   25.1      71   13.9     126   17.0
Home                62   28.3     283   55.4     350   47.3
Not Specified       37   16.9      52   10.2      94   12.7
Total              219  100.0     511  100.0     740  100.0

Mean Competency Scores by Gender

So far the gender discrepancy in computing has been examined from various approaches: proportion of participation (Table 5), age of first experience (Table 6), and learning setting (Table 7). Table 8 on the next page compares the raw software competency mean scores by gender. As can be seen, across all 27 software items, the male mean scores were higher than the female mean scores. This finding is consistent with the literature (please see Chapter Two).

Going down the list, for both males and females, means were also comparatively higher for items like BASIC, Fortran, Pascal, Scheme, MSDOS, Windows, Database, Games, Graphics, Spreadsheet, Utilities and Word Processing. The similarity in trend invites the conjecture that uniformly higher scores for both males and females may contribute towards balancing the aforementioned difference, reducing the probability of significance in the MANOVA for certain subscales, or the chances of an interaction.

Breakdown Descriptive Statistics

Tables 5, 6, 7, and 8 present the overall descriptive statistics but do not show the breakdown by software item in terms of frequency or mean ratings across FirstAge or LearnSet. The reader is referred to Appendices E.1 through E.6 for these details. The tables in these appendices reveal the details of the age and learning setting data prior to operationalization for the MANOVA analysis. For example, Tables E.2 and E.6 approach the gender discrepancy in computing in terms of the numbers of students who learn different software under different settings, and detail the disproportionate representation of males for all software competencies and their preference for home learning. These raw data indicate that, on average, males begin earlier than females, males tend to dominate both in numbers and percentage, and males have a preference for learning at home. The next part of this chapter deals with findings from inferential statistics, which apply broader strokes to see if the basic descriptive statistics are indicative of gender disparity in computing.

Table 8
Comparison of Raw Software Competency Scores by Gender

                       Females         Males          Group
Software Items        Mean   SD      Mean   SD      Mean   SD
Assembler             1.06  0.37     1.34  0.97     1.26  0.84
BASIC                 2.14  1.50     3.22  1.78     2.92  1.78
C                     1.08  0.50     1.50  1.25     1.38  1.10
C++                   1.06  0.44     1.32  0.97     1.24  0.86
Fortran               2.42  1.73     2.91  1.84     2.75  1.82
HyperCard             1.18  0.70     1.53  1.30     1.42  1.16
Logo                  1.27  0.79     1.77  1.37     1.61  1.24
Pascal                3.31  1.74     4.00  1.76     3.81  1.78
Scheme                2.02  1.59     2.40  1.82     2.30  1.76

Apple ProDOS          1.14  0.54     1.41  1.01     1.33  0.90
Finder (Mac)          1.66  1.29     1.95  1.63     1.85  1.53
MSDOS                 2.86  1.40     4.09  1.46     3.73  1.55
OS/2                  1.11  0.66     1.32  0.86     1.26  0.81
UNIX                  1.33  0.80     1.66  1.13     1.56  1.05
Windows               3.24  1.49     3.96  1.50     3.74  1.53
Windows NT            1.14  0.62     1.29  0.86     1.24  0.80

Accounting            1.43  0.93     1.53  1.15     1.51  1.09
Communications        1.47  0.99     2.41  1.79     2.12  1.65
Database              2.28  1.42     2.43  1.50     2.38  1.47
Desktop Publishing    1.62  1.11     2.12  1.52     1.97  1.43
Games                 2.81  1.69     4.40  1.80     3.93  1.91
Graphics              2.62  1.56     3.24  1.70     3.04  1.68
Music                 1.32  0.83     1.66  1.26     1.55  1.15
Spreadsheet           2.84  1.49     3.29  1.57     3.14  1.56
Statistics            1.38  0.95     1.44  1.06     1.42  1.03
Utilities             1.36  0.97     2.79  1.88     2.35  1.78
Word Processing       3.27  1.87     4.27  1.68     3.95  1.80

Factor Analysis

University students' self-reported software competency scores from Sections II, III and IV of the questionnaire (corresponding to the programming languages, operating systems and applications raw competency scores) were the dependent variables for the factor analysis.

Confirmatory factor analysis was conducted using the three sets of software competency scores.

These competency scores consisted of nine software item scores from the Programming Language Section (Assembler, BASIC, C, C++, Fortran, HyperCard, Logo, Pascal and Scheme); seven software item scores from the Operating System Section (ProDOS, Macintosh Finder, MSDOS, OS/2, UNIX, Windows and Windows NT); and 11 software item scores from the Applications Section (Accounting, Communications, Database, Desktop Publishing, Games, Graphics, Music, Spreadsheet, Statistics, Utilities and Word Processing).

Confirmatory Factor Analytic Procedure

Factor analysis was performed to reduce the 27 variables to three new underlying constructs representing subscales hypothesized to represent the programming languages, operating systems and applications domains. To capture the latent dimensions representing these three types of software, a confirmatory factor analysis was employed to extract three factors corresponding to the software competency dimensions. (Only the results of the factor analysis are presented below; the reader is referred to Appendix F for details).
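The fixed three-factor extraction described above can be illustrated in miniature. The following NumPy sketch stands in for the SPSS procedure (it uses a principal-components extraction without rotation, which is a simplification of the full analysis in Appendix F): it extracts three components of the item correlation matrix and assigns each item to the factor on which it loads most heavily.

```python
import numpy as np

def three_factor_loadings(scores, n_factors=3):
    """Loadings from a fixed extraction of the top principal components.

    `scores` is an (n_subjects x n_items) matrix of raw competency ratings.
    Returns an (n_items x n_factors) loading matrix: eigenvectors of the item
    correlation matrix scaled by the square roots of their eigenvalues.
    """
    r = np.corrcoef(np.asarray(scores, dtype=float), rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(r)            # ascending eigenvalues
    order = np.argsort(eigvals)[::-1][:n_factors]   # keep the top components
    return eigvecs[:, order] * np.sqrt(eigvals[order])

def assign_items(loadings):
    """Group item indices by the factor with the greatest absolute loading."""
    picks = np.abs(loadings).argmax(axis=1)
    return {f: np.where(picks == f)[0].tolist()
            for f in range(loadings.shape[1])}
```

On data with three well-separated item clusters, `assign_items` recovers the clusters; this mirrors how items were sorted into the three competency subscales by their greatest loadings.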

Creation of Subscales through Factor Analysis

In this section, only the big picture is presented. The reader is referred to Appendix F if they are interested in the detailed considerations for testing the underlying assumptions of the factor analysis, or the step-by-step explanation of the rotation and extraction process.

Factor analysis extracted three constructs which were appropriately named Applications competency, Low-Level Language competency and High-Level Language competency (see Appendix F). Using the items with the greatest factor loadings for each of the three factors from the confirmatory factor analysis, three subscales were constructed corresponding to the new definitions of High-Level Language competency, Low-Level Language competency and Applications competency. The new subscales were designated HSCORE, LSCORE and ASCORE respectively. As they were linear composites of the items, these three subscales also represented the software items under the respective subscales.
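The composite scoring itself is simple. As a hedged sketch (the text says each subscale is a linear composite of its items but does not state the weights; averaging the retained items, as below, is one common choice, and the item and subscale names in the example mapping are only illustrative):

```python
import numpy as np

def subscale_scores(ratings, subscale_items):
    """Compute linear-composite subscale scores from raw item ratings.

    `ratings` maps item names to one subject's 1-6 ratings; `subscale_items`
    maps a subscale name (e.g. 'HSCORE') to the list of items retained in it.
    ASSUMPTION: the composite is an unweighted mean of the retained items.
    """
    return {name: float(np.mean([ratings[i] for i in items]))
            for name, items in subscale_items.items()}
```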

Although the analysis began with the items sorted in a different manner, the items were always the same, and items within these subscales had established both face and content validity through expert opinion and the pilot process. As noted in Chapter Three, the internal reliability of the total scale was high at 0.89, which suggested that the software items arrived at through expert opinion were valid for measuring software competency.

As discussed in the last part of Chapter Three, the method employed in this research represents the standard approach for validation of a new scale in the literature. Factor analysis was employed to establish construct validity and the underlying distinct and measurable constructs. To establish the new Software Competency Scale (SCS), and to ensure that items in each subscale belonged together in these distinct constructs, it was also necessary to compute the internal reliability and verify the stability of the subscale scores. Only the results of the reliability analysis are presented below; the reader is referred to Appendix G for details.

Scale Refinement

Both the preliminary subscales and the refined subscales were merged in Table 9 below to allow tabular comparison. From the table, one can see relatively high alpha coefficients (0.80, 0.83 and 0.78) for all three new software competency subscales, which showed the subscales were internally reliable. As well, the mean Inter-Item Correlation (ITC) had increased, and results of the Principal Components Analysis (PCA) attested to the structural independence of these three competency constructs.

The mean alpha had also improved overall; although the ASCORE alpha decreased from 0.84 to 0.80, both the HSCORE and LSCORE alphas had increased, from 0.72 to 0.83 and from 0.68 to 0.78 respectively. This increase in internal consistency suggests that the software items which were responsible for the high alpha in ASCORE "evened out" the alphas for all three subscales when they were distributed to the other two subscales as per the structure revealed by the PCA. The alpha for all 22 items remained high at 0.89. All these factors showed that the reduction in software items and the regrouping by the factor analysis were successful. Three new subscale scores were computed for all subjects. Having established a scale, the researcher was now ready to compare the means of these three subscale scores using MANOVA.

Table 9 Comparison of Preliminary and Refined Subscales

SubScale Scale Mean #of Alpha Std. ITCe Mean SD items Item Alphad Prehminarv ASCOREa 2.42 1.50 11 0.85 0.84 0.53 HSCOREb 2.05 1.36 9 0.67 0.72 0.39 LSCOREc 2.08 1.15 7 0.68 0.68 0.40 Overall 2.18 1.34 27 0.74 0.75 0.44 Alpha for all 27 items =0.89

Refined ASCORE 2.07 1.37 8 0.80 0.80 0.51 HSCORE 3.37 1.76 8 0.82 0.83 0.55 LSCORE 1.63 1.21 6 0.73 0.78 0.51 Overall 2.36 1.45 22 0.79 0.80 0.52 Alpha for all 22 items =0.89 Note: aASCORE - Applications Subscale b HSCORE - High-Level Language Subscale c LSCORE - Low-Level Language Subscale d Standardized item alphas e ITC - Inter-Item Correlations

Multivariate Analysis of Variance

As noted in Chapter Three, when the research question has multiple independent and dependent variables, all correlated to one another in varying magnitudes, and the researcher is interested in assessing group differences in the dependent variables, the design becomes fairly complicated. The multivariate analysis takes into consideration the covariance between and among variables, and mirrors the reality under study more closely because it does not isolate variables artificially, without consideration of effects from possible interactions.

However, aside from the multiple dependent variables, the factorial nature of this design means that the independent variables comprise two or more factors with multiple levels within each factor. As such, an N-way (factorial) MANOVA was the approach used for the analysis. Specifically, the researcher employed the factorial MANOVA to investigate how measures of self-reported software competency with three types of conventionally accepted computer software (programming languages, operating systems and applications) varied as a function of the three independent variables.

The independent variables were: Gender (with two levels; Female=1, Male=2), FirstAge (with two levels; K-7=1, Post 7=2) and LearnSet (with three levels; School=1, Higher Education=2, Home=3). This resulted in a 2 (Gender) X 2 (FirstAge) X 3 (LearnSet) factorial MANOVA design. The reader is referred to Appendix H for details of the testing of the underlying assumptions of the MANOVA. Table 10 below shows the sets of means for each of these independent variables which will be compared in the MANOVA for each of the three subscale scores.
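The 2 X 2 X 3 layout implies twelve cells, one per combination of factor levels. A quick enumeration (factor labels taken from the text) illustrates the design:

```python
from itertools import product

gender = ["Female", "Male"]                          # Gender: 2 levels
first_age = ["K-7", "Post 7"]                        # FirstAge: 2 levels
learn_set = ["School", "Higher Education", "Home"]   # LearnSet: 3 levels

# Every combination of factor levels is one cell of the factorial design
cells = list(product(gender, first_age, learn_set))
print(len(cells))  # 2 x 2 x 3 = 12 cells
for g, fa, ls in cells[:3]:
    print(g, fa, ls)
```

Each respondent falls into exactly one of these cells, and the MANOVA compares the three subscale means across them.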

Table 10 Sets of Means to Compare for Factors of the MANOVA

                  Gender            First Age              LearnSet
Subscale    Females   Males       K-7    Post 7     School  H. Ed.   Home
ASCORE        1.91     2.29       2.45     1.96       2.27    1.73    2.29
HSCORE        2.76     3.76       4.07     2.99       3.33    2.57    3.87
LSCORE        1.33     1.83       1.95     1.47       1.54    1.29    1.90
Note: H. Ed. = Higher Education

Results of the MANOVA

As shown in Table 11, all main effects were statistically significant (p < 0.01), but none of the interaction effects were statistically significant. The MANOVA found significant multivariate effects for Gender, F(3,602)=15.32, p<0.0001, FirstAge, F(3,602)=16.95, p<0.0001, and LearnSet, F(6,1206)=6.89, p<0.0001. The non-significant interactions implied that the pattern of response was the same for males and females. This finding is very interesting by itself and will be integrated into the discussion of the main effects associated with the three research questions.

Table 11 MANOVA Results Table

Source of Variation               df    Multivariate F       p
Gender (G)                         3         15.32        0.0001*
Age of First Experience (FA)       3         16.95        0.0001*
Learning Setting (LS)              6          6.89        0.0001*
G X FA                             3          1.25        0.29
G X LS                             6          1.20        0.304
FA X LS                            6          2.03        0.059
G X FA X LS                        6          1.09        0.364
Note: * α was set at p<0.01 for all multivariate Fs

RQ1. Do male and female university students differ in self-reported ratings in software competencies with programming languages, operating systems and applications?

The MANOVA reported significant multivariate effects for Gender, F(3,602)=15.32, p<0.0001. This Gender significance, along with the data in Table 10, indicated that when FirstAge and LearnSet are controlled, differences between genders are much greater than differences within gender, and males had significantly higher self-reported competency scores than females. However, this does not mean that significance was found for all three subscales; it only indicates that at least one of the subscales was significantly different. To find the answer one has to look at the follow-up ANOVAs.

RQ2. Do university students who learn computing at different ages differ in self-reported ratings in software competencies with programming languages, operating systems and applications?

The MANOVA found significant multivariate effects for FirstAge, F(3,602)=16.95, p<0.0001. This FirstAge significance, along with the data in Table 10, indicated that when Gender and LearnSet are controlled, differences between FirstAge groups are greater than differences within them, and university students who were exposed to computers before secondary school tended to score significantly higher in self-reported competency. However, this does not mean that significance was found for all three subscales; it only indicates that at least one of the subscales was significantly different. To find the answer one has to look at the follow-up ANOVAs.

RQ3. Do university students who learn computing in different settings differ in self-reported ratings in software competencies with programming languages, operating systems and applications?

The MANOVA uncovered significant multivariate effects for LearnSet, F(6,1206)=6.89, p<0.0001. This LearnSet significance, along with the data in Table 10, indicated that when Gender and FirstAge are controlled, differences between learning settings are greater than differences within them: among the settings of School, Higher Education and Home, at least one pair of settings differs significantly for at least one of the three subscales. The analysis, however, requires an additional step for the learning setting comparison, since LearnSet has three levels versus only two for Gender and FirstAge.

Inferential Interpretation

The purpose in using MANOVA was to investigate whether there were any differences in how respondents rated themselves in software competency among the various groups. Since the observed multivariate F was greater than the expected F for all of the independent variables (Gender, FirstAge and LearnSet), in terms of software competency the answer to RQ1, RQ2 and RQ3 is that among university students, males differ from females, early experience with computers differs from later experience, and learning setting differs among School, Higher Education and Home, all significantly at the 0.0001 level.

Results of Univariate ANOVAs

The MANOVA procedure used in this study continues with univariate ANOVAs for all of the sources of variation; however, only the ANOVAs following significant multivariate Fs are interpreted. Tables 12, 13, 14 and 15 summarize the results of the follow-up ANOVAs and the post-hoc analysis. For both Gender (Table 12) and FirstAge (Table 13), since the two means being compared are shown, significance can be inferred directly. For LearnSet, however, since all three means are being compared simultaneously, the significance shown in Table 14 does not indicate how the significant finding arose. To determine which pairs of learning setting means were significantly different, the reader is referred to Table 15, which details the post-hoc comparisons.

RQ1. Do male and female university students differ in self-reported ratings in software competencies with programming languages, operating systems and applications?

Table 12 Follow-up ANOVAs for Gender

                         Gender
Source      df      Females   Males       F         p
ASCORE    1, 604      1.91     2.29      5.03     0.025
HSCORE    1, 604      2.76     3.76     42.71     0.0001*
LSCORE    1, 604      1.33     1.83     14.13     0.0001*
Note: * α was set at p<0.01

Comparing down the columns, the greatest mean scores were from HSCORE, the High-Level Language Subscale, and the lowest were from LSCORE, the Low-Level Language Subscale. Males had significantly higher competency scores than females for HSCORE and LSCORE. This answers RQ1: there are mixed results depending on the subscale. The null hypothesis is retained for the ASCORE subscale.

RQ2. Do university students who learn computing at different ages differ in self-reported ratings in software competencies with programming languages, operating systems and applications?

Table 13 Follow-up ANOVAs for FirstAge

               Age of First Computer Experience
Source      df        K-7     Post 7      F         p
ASCORE    1, 604      2.45     1.96      6.17     0.013
HSCORE    1, 604      4.07     2.99     47.61     0.0001*
LSCORE    1, 604      1.95     1.47      2.76     0.097
Note: * α was set at p<0.01

As shown in Table 13, the highest scores were found in HSCORE, the High-Level Language Subscale, and the lowest were from LSCORE, the Low-Level Language Subscale. The group that had early experience with computers reported higher competency scores; however, only HSCORE was found to be significant. This answers RQ2: there are mixed results depending on the subscale. The null hypothesis is retained for the LSCORE and ASCORE subscales. The non-significant (p=0.29) G X FA interaction (see Table 11) indicates that the difference between students who had earlier first computer experiences and those who had later first experiences was the same for males and females.

RQ3. Do university students who learn computing in different settings differ in self-reported ratings in software competencies with programming languages, operating systems and applications?

Table 14 Follow-up ANOVAs for LearnSet

                          Learning Setting
Subscale     df      School  Higher Ed.   Home       F         p
ASCORE     2, 604     2.27      1.73      2.29      6.79     0.001*
HSCORE     2, 604     3.33      2.57      3.87     17.39     0.0001*
LSCORE     2, 604     1.54      1.29      1.90      6.19     0.002*
Note: * α was set at p<0.01

Although the follow-up ANOVAs indicate that all subscales were significant, this only establishes that at least one of the pairs of settings differed for each subscale. Since there are three pairs, it was not known which of the pairs were significant, so a post-hoc test was needed. For this research question, the null hypothesis was not tenable for any of the three subscales.
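The follow-up univariate F statistic above compares between-group to within-group variability. A minimal one-way ANOVA sketch on invented scores (not the study's data) shows the computation:

```python
def one_way_f(groups):
    """Return the one-way ANOVA F statistic for a list of score groups."""
    n_total = sum(len(g) for g in groups)
    k = len(groups)
    grand = sum(sum(g) for g in groups) / n_total
    # Between-groups and within-groups sums of squares
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    ms_between = ss_between / (k - 1)          # df = k - 1
    ms_within = ss_within / (n_total - k)      # df = N - k
    return ms_between / ms_within

# Hypothetical ratings for three learning settings
school, higher_ed, home = [2, 3, 2, 3], [1, 2, 1, 2], [4, 3, 4, 3]
print(round(one_way_f([school, higher_ed, home]), 2))  # 12.0 for this toy data
```

A large F, as in Table 14, says the setting means differ more than chance alone would explain, but not which pair of settings is responsible.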

Post-Hoc Analysis

A number of post-hoc analysis methods were available. Scheffe post-hoc comparisons and range tests were chosen because they are powerful, allow for simple and complex contrasts, are ideal for situations with unequal n's, and control the experimentwise alpha. However, there was a problem: the SPSS MANOVA procedure does not provide post-hoc comparisons (Tabachnick & Fidell, 1989). This is probably because, for ideal data, it is a rather trivial exercise to run a separate ANOVA with the proper post-hoc analysis on the same data set; the two runs need not be connected.

Since there were missing independent as well as dependent data in this study, comparing two groups by gender would only result in listwise deletion by gender. This would end up comparing a different group of subjects (730 subjects, deleting the 10 who did not declare their gender) than the one used to generate the results of the MANOVA.

Table 15 Scheffe Post-Hoc Comparisons

Subscale   Learning Setting    Mean     Higher Ed.   School
ASCORE     Higher Ed.         1.7331
           School             2.2723        *
           Home               2.2947        *

HSCORE     Higher Ed.         2.5656
           School             3.3317        *
           Home               3.8695        *           *

LSCORE     Higher Ed.         1.2905
           School             1.5431
           Home               1.8972        *           *
Note: * α was set at p<0.01 (indicates pairs of groups significantly different at this level)
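The starred comparisons in Table 15 follow the Scheffe criterion: a pair of means differs significantly when the contrast's F exceeds (k-1) times the critical F. A sketch of the pairwise test follows; the error mean square, group sizes and critical F below are illustrative assumptions, not values from this study, while the means come from Table 15:

```python
def scheffe_significant(m1, m2, n1, n2, ms_error, k, f_crit):
    """Scheffe pairwise test: compare the contrast F against (k - 1) * F_crit."""
    f_contrast = (m1 - m2) ** 2 / (ms_error * (1 / n1 + 1 / n2))
    return f_contrast > (k - 1) * f_crit

# HSCORE Home vs. Higher Education means from Table 15; n, MSE and F_crit assumed
print(scheffe_significant(3.8695, 2.5656, 200, 200, 2.5, 3, 4.63))
# LSCORE School vs. Higher Education, same assumptions: a much smaller contrast
print(scheffe_significant(1.5431, 1.2905, 200, 200, 2.5, 3, 4.63))
```

Under these assumed error terms the large HSCORE contrast clears the threshold while the small LSCORE contrast does not, mirroring the starred and unstarred cells of the table.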

The solution was to write an algorithm in SPSS to simulate listwise deletion over both the dependent and independent variables used in the MANOVA. The Scheffe range tests revealed that there were significant differences across learning settings, but they differed according to the subscales. This answers RQ3 and indicates that, like Gender and FirstAge, there are also mixed findings for LearnSet. As well, the non-significant (p=0.30) G X LS interaction (see Table 11) indicates that the differences between students who had learned at School, Higher Education and Home were the same for males and females.
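The listwise-deletion step described above, dropping any case with a missing value on any variable entering the MANOVA, can be sketched as follows (the variable names and cases are illustrative, not the study's data):

```python
def listwise_delete(cases, variables):
    """Keep only cases with no missing value on any listed variable."""
    return [c for c in cases if all(c.get(v) is not None for v in variables)]

# Hypothetical cases; None marks a missing response
cases = [
    {"gender": 2, "first_age": 1, "learn_set": 3, "ascore": 2.3},
    {"gender": None, "first_age": 2, "learn_set": 1, "ascore": 1.8},  # undeclared gender
    {"gender": 1, "first_age": 1, "learn_set": None, "ascore": 2.0},  # missing setting
]
kept = listwise_delete(cases, ["gender", "first_age", "learn_set", "ascore"])
print(len(kept))  # only the complete case survives
```

Applying the same deletion rule before both the MANOVA and the post-hoc ANOVA guarantees the two runs analyze an identical set of subjects.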

For the ASCORE and the software items represented by the Applications Subscale (Spreadsheet, Graphics, Database, Desktop Publishing, Macintosh Finder, HyperCard, Music and Accounting), significant differences were observed between School and Higher Education (p<0.01) and between Higher Education and Home (p<0.01), but no significant difference was found between School and Home. The scores of students who learned predominantly at Home were the highest (2.29), followed by those who learned at School (2.27), followed by Higher Education (1.73).

For the HSCORE and the software items represented by the High-Level Language Subscale (DOS, Pascal, Fortran, Games, Windows, BASIC, Word Processing and Utilities), significant differences were found between School and Higher Education (p<0.01), School and Home (p<0.01), and Higher Education and Home (p<0.01); that is, there were significant differences across all settings. The scores of students who learned predominantly at Home were the highest (3.87), followed by School (3.33), followed by Higher Education (2.57).

For the LSCORE and the software items represented by the Low-Level Language Subscale (C, C++, Assembler, Unix, Scheme and Communications), significant differences were found between School and Home (p<0.01) and between Higher Education and Home (p<0.01). As before, the scores from Home were highest (1.90), followed by School (1.54), followed by Higher Education (1.29).

The highest competency scores were from students who primarily learned at Home. Students who learned primarily through School consistently rated second, while the means of those who primarily learned through Higher Education were consistently the lowest. This suggests that students who primarily learned the same software item later (in Higher Education) tended to have lower competency scores. This has implications for all three learning environments, and the import of this message needs to be carefully understood. In addition, the HSCOREs were consistently the highest scores across all learning settings, and the LSCOREs were the lowest.

Summary of Findings

To summarize, the multivariate findings answered all three research questions and indicated mixed results. Gender was significant at the omnibus level (F(3,602)=15.32, p<0.0001), but subsequent ANOVAs revealed that this omnibus significance was due to only two of the subscales: HSCORE (F(1,604)=42.71, p<0.0001) and LSCORE (F(1,604)=14.13, p<0.0001), but not ASCORE. FirstAge was also significant at the omnibus level (F(3,602)=16.95, p<0.0001), but follow-up ANOVAs indicated that the omnibus significance could be attributed only to the HSCORE subscale. For LearnSet, the omnibus significance could be attributed to significance of all three subscales: ASCORE (F(2,604)=6.79, p<0.001), HSCORE (F(2,604)=17.39, p<0.0001) and LSCORE (F(2,604)=6.19, p<0.01).

The ability to detect non-significant interactions was important, and was a planned feature of the factorial MANOVA design. The orthogonal, non-significant G X FA (p=0.29) and G X LS (p=0.30) interactions indicate that the differences between students who had earlier first computer experiences and those who had later first experiences were the same for males and females, and that the differences between students who had learned at School, Higher Education and Home were also the same for males and females. Taken together, this implies that although descriptive statistics may suggest otherwise, and significance was found for both FirstAge and LearnSet, the learning pattern was similar for males and females, and the aforementioned significant differences had the same pattern regardless of gender.

Furthermore, the non-significant (p=0.06) FA X LS interaction indicates that the difference in mean subscale scores between students who had learned at School, Higher Education and Home was the same for students with earlier and later first computer experiences. In addition, the non-significant (p=0.36) G X FA X LS interaction indicates that the non-significant G X FA interaction holds true at every learning setting.

General Interpretation Across Findings

Overall, across all subscales and independent of the factors studied, the highest reported competency scores were from the HSCORE or High-Level Language Subscale, while the lowest were from the LSCORE or Low-Level Language Subscale. One implication is that regardless of gender, first experience, or where software is learned, the software items represented by the LSCORE subscale may be more difficult to master. The descriptive statistics at the beginning of this chapter indicated that, with the exception of Communications, the items of this subscale were learned last by both genders. This difficulty in achieving competency with the LSCORE items could have an important bearing on learning the other software items.

Typically, the students who reported the highest competency scores were male, had earlier experiences with computers and primarily studied at home. This finding is in agreement with the literature and lends support to documented accounts of how gender disparity is started and perpetuated. The literature (see Chapter Two) suggests that gender disparity begins early (Chen, 1986; Fetler, 1985; Jones, 1987), that most males are inclined toward voluntary learning (Chambers & Clarke, 1987; Kersteen et al., 1988; Levin & Gordon, 1989) and compete for computer resources at home, in recreational learning and at school (Fetler, 1985; Levin & Gordon, 1989). Furthermore, males are encouraged by their parents (Eccles, 1982; Jacobs, 1991), work on school computers after school (Chen, 1986) and engage in computer clubs (Kersteen et al., 1988; Levin & Gordon, 1989). Compare this to females, who start late (Schubert, 1986), whose first exposure to computers is often through school (Shashaani, 1994), who are not encouraged at home (Giaquinta, Bauer & Levin, 1993), and who do not view computing as a feminine activity (Campbell, 1990; Shashaani, 1994; Wilder, Mackie & Cooper, 1985), and the result is a gender gap in computing that is continually recycled and reinforced into the status quo.

CHAPTER FIVE

Conclusions and Implications

The main objective of this study was to investigate the effect of gender, age of first computer experience, and learning setting on university students' self-reported software competency with programming languages, operating systems and applications. A related and prerequisite purpose was to develop an instrument to collect the software competency data. This chapter begins with a short description of the empirical process. Next, the chapter addresses the three research questions and compares the findings with the literature. This is followed by a discussion of the new model which was the product of the analysis. The chapter concludes with implications for research and suggests some directions for future research. In reviewing the findings, parents, educators, educational administrators and researchers should keep in mind that two of the independent variables of this study are controllable, to the extent that a parent could decide when to introduce children (male or female) to computers, and where this learning takes place.

Empirical Research

This study formulated the research questions and hypotheses using the Gates (1981) Model of Software Abstractions as the conceptual framework. The variables employed for the research questions were selected because they had been shown in the literature to contribute toward significant differences. The literature has attempted to operationalize experience by employing duration as a correlate (Gabriel, 1985), and ability by using different types of software (Clarke & Chambers, 1989; Kay, 1993; Koohang, 1989). Researchers have pointed out that there are problems with both of these approaches: the duration variable has range problems because studies typically assess short periods (Pope-Davis & Twing, 1991), and the ability variable has problems with type, diversity and complexity (McInerney, McInerney & Sinclair, 1994).

Pope-Davis and Twing (1991) claimed that it was no longer valid to assess experience in terms of short durations of weeks and months because "important information regarding long term users may have been concealed" (p. 334) and "scores were mixed across years of experience" (p. 338). The researchers argued that without an adequate range, findings might be contradictory. More germane to this study, McInerney, McInerney and Sinclair (1994) reported that while subjects' self-ranked computer competence and experience with courses were significant predictors of anxiety, "The complexities of these interactions demonstrate the need for a broader more encompassing definition of "experience" to be derived with regard to computing" (p. 42).

This research attempts to address both of these problems by proposing, for the experience variable, a composite of learning expressed through age and setting, together with a multitude of software measures. The advantage of this approach is that it is guided by a model and incorporates both the historical aspect of learning and the current ability component of competency. Using this data matrix, the researcher was able to extract the variance from 27 different types of software experience.

This research did not use an established instrument with reported reliabilities. Instead, the researcher modified a questionnaire from the literature, using an inverted version of the Gates (1981) Model as the conceptual framework to operationalize the dependent variables. The face and content validity of the new instrument was established through expert opinion and a series of pre-pilots and pilots. Factorial validity obtained through Principal Components Analysis (PCA) confirmed the structural independence of the three competency constructs.
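Factorial validity of this kind, distinct item clusters emerging from a PCA, can be illustrated on a synthetic correlation matrix. In the sketch below the two clusters of items and the 0.8 within-cluster correlation are invented for the example, not values from this study:

```python
import numpy as np

def kaiser_components(corr):
    """Count principal components with eigenvalue > 1 (Kaiser criterion)."""
    eigvals = np.linalg.eigvalsh(corr)
    return int((eigvals > 1.0).sum())

# Six items forming two uncorrelated clusters of three highly correlated items
block = np.full((3, 3), 0.8)
np.fill_diagonal(block, 1.0)
corr = np.zeros((6, 6))
corr[:3, :3] = block
corr[3:, 3:] = block

print(kaiser_components(corr))  # two clusters -> two retained components
```

In the same way, the PCA on the 27 software items let clusters of correlated items surface as constructs, rather than imposing the expected three-way split in advance.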

Although the factor analysis confirmed the independence of three distinct constructs, from which subscales could be derived to allow comparisons between the three groups of interest, the results were different from those expected. The factor analysis revealed that operating system competency, although integral to the composition of the subscales from which comparisons could be made, was not itself a construct. The factor analysis confirmed the existence of three independent constructs, but instead of capturing language competency, operating systems competency and applications competency, it captured Applications competency, High-Level Language competency and Low-Level Language competency.

Importantly, these three new constructs are still relationally consistent with the Gates (1981) Model. Instead of having applications above languages, which are above operating systems, the factor analysis revealed applications above high-level languages, which are above low-level languages. In addition, although the constructs turned out to be different than expected, the methodology which was used to derive them was sound, and the subscales which were created from them were valid and reliable for comparisons.

High alpha coefficients for the subscales (0.83, 0.78, 0.80) and the instrument (0.89) verified the internal reliability of the instrument, and relatively high intercorrelations among the three subscales justified their combination into a single scaled score. As the test-retest reliability was reported at 0.84, the instrument was also established to be stable over separate administrations. Through the process of factor analysis, this research has also been reductive, simplifying 27 disparate software items into three parsimonious underlying structures. This reduction has abstracted the separate software items into three theoretical structures similar in relation to those hypothesized by Gates (1981).

Findings and the Literature

Gender has been found in the literature to be significantly linked to achievement, participation and literacy in several studies (see Chapter Two). Researchers, however, have not always reported significance in terms of gender (Koohang, 1986; Loyd & Gressard, 1984; Marshall & Bannon, 1986; Ogozalek, 1989). The MANOVA revealed that software competency differences between males and females were not always significant. This finding concurs with Chen's (1986) and Koohang's (1987) contention that it is important to be aware of gender differences by type of software competency. The literature maintains that males usually score better than females in programming (Chen, 1986), but the differences disappear in the use of applications (Koohang, 1987).

The literature emphasizes the need to begin computing early (Jones, 1987; Kinnear, 1995) and refers to the early socialization of children (Schubert, 1986) and its impact on their self-perceptions. Some of these perceptions have to do with the perception of the computer as being objective (Shashaani, 1994; Turkle, 1984) and male-oriented, which could in turn lead to stereotyping. The literature also shows that computer use at home is dominated by fathers and sons (Colley, Gale & Harris, 1994; Giaquinta, Bauer & Levin, 1993), and computers are often purchased for sons (Eaton et al., 1985; Giaquinta, Bauer & Levin, 1993). Research has also shown that there are often few or no female computer role-models for females at home (Clarke & Chambers, 1989; Giaquinta, Bauer & Levin, 1993).

RQ1 Do male and female university students differ in self-reported ratings in software competencies with programming languages, operating systems and applications?

The first research question asked if there was a difference between the self-reported ratings of male and female university students. As noted in Chapter Four, the answer is a partial yes. Significant gender differences were found in the follow-up results for HSCORE (F(1,604)=42.71, p<0.0001) and LSCORE (F(1,604)=14.13, p<0.0001), but not ASCORE (F(1,604)=5.03, p=0.025). This finding supports the literature. Overall, the literature shows that females have been narrowing the gap in applications, which could explain the finding that ASCORE was not significant. However, it also points out that females were still significantly behind in High-Level Language and Low-Level Language competencies. One reason attributed in the literature is the perception, by males and females alike, that computing is a male domain.

The researcher believes that an important contribution of the study was the isolation of the distinct competency that has to do with Low-Level Languages. This component of competency may also have special significance for females, as it exemplifies the objective aspect of computing hypothesized by Turkle (1984). (The implications surrounding Low-Level Language competency will be explored in the discussion of the new proposed model.)

RQ2 Do university students who learn computing at different ages differ in self-reported ratings in software competencies with programming languages, operating systems and applications?

The second research question dealt with the age of first experience with a computer. Overall the findings concurred with the literature regarding age: students who learned early tended to score higher. Upon examination of the cell means for the 27 software items, the researcher noted that, with the exception of Statistics, self-reported competency scores for all the other software items were higher if a subject was first exposed to computers in K-7. However, the MANOVA findings qualified this observation by indicating that significant differences exist in favor of earlier experiences with computers, but only for HSCORE (F(1,601)=49.00, p<0.0001), not ASCORE and LSCORE.

This finding suggests that FirstAge was critical for the HSCORE items. In other words, students who were exposed to computers earlier in life reported higher competencies for MSDOS, Pascal, Fortran, Games, Windows, BASIC, Word Processing and Utilities. The above finding not only supports the literature, it adds to it. The literature suggests that programming should be taught early (Krendle & Lieberman, 1988; Nelson & Watson, 1990-1991). Findings from the current research suggest that it is just as important to begin early with operating systems like MSDOS and Windows, and to familiarize children with the concepts of Word Processing, Utilities and Games. These findings also suggest that it may be more important to teach key software earlier.

The results address the question of early experiences with computers with findings of significance, but based upon the literature, the researcher can only speculate that there must be socialization reasons which account for the later exposure of females to computers. The findings support the literature which emphasizes the need to begin computing earlier, but go further by specifying significance by subscale. The findings also indicate that, despite different times of initial computer experience, the non-significant interaction of Age of First Computer Experience by Gender (G X FA) indicated that differences were similar for males and females.

RQ3 Do university students who learn computing in different settings differ in self-reported ratings in software competencies with programming languages, operating systems and applications?

The third research question asked if students who have learned computing in different settings differ in their self-reported competency scores. This was true across all the subscale scores: HSCORE (F(2,604)=17.39, p<0.0001), ASCORE (F(2,604)=6.79, p<0.001) and LSCORE (F(2,604)=6.19, p<0.01). The findings indicated that it mattered where computing was learned. Overall, the lowest means were found consistently among those students who had learned computing primarily through higher education; this level of the learning setting represented universities, institutes and colleges. As noted previously in the discussion of RQ2, this finding implies that subjects who had predominantly learned past K-7 tended to rate their competencies the lowest. Consistently across all software categories, the highest means were found among students who learned computing primarily at home. Overall the findings agree with the literature and substantiate the importance of investigating the learning setting.

Chen (1986) noted that when learning occurs in unstructured and non-evaluative environments like homes, versus structured environments like schools and universities, students express more interest and learn better. The high competency scores of home learning over the other two settings lend credence to these claims. Additionally, studies have found that learning at home is important in explaining interest, attitudes, and confidence towards computers (Levin & Gordon, 1989).

The present study found that while 30% of the females learned predominantly at school, the reverse is true of males: almost 70% of the males predominantly learned outside of school, and only 28% of the females versus 55% of the males learned predominantly at home. The literature also reports that differential socially-based expectations favor male computing at home (Eccles, 1989; Shashaani, 1994). If this is true, then these discrepancies in subscale competency scores may reflect the odds against females learning at home. The present research found that competency scores tended to be lower for students who had primarily learned through Higher Education, implying that subjects who had predominantly learned later (past K-7) tended to rate their competencies the lowest.

Of the students who learned at home, an inspection of cell means for every software item showed that males had higher competency scores. The literature showed that males benefit most from learning at home, and are the most active in its pursuit. This research examined the role of different learning settings and the effect they might have on self-reported software competencies. The study found clear indications that the majority of males learn at home, and begin earlier than females. However, the non-significant interaction of Gender by Learning Setting (G X LS) indicated that although there were distinct differences between learning settings across all subscales, the differences were similar for both genders.

Interactions

Although no interactions were found to be statistically significant, the main effects for Gender, FirstAge and LearnSet were significantly different for certain subscales. Through the development of the instrument and the ASCORE, HSCORE and LSCORE subscales, the researcher identified specific software items within each subscale that should be targeted in the event of subscale significance.

Based upon participation and enrollment, there were distinct gender differences in computing, and interestingly, the sample reflected the total course enrollment: the figures were very similar, with 29.6% females versus 69.1% males participating in the study, and 27.6% females versus 72.4% males enrolled in the classes. This study did not investigate attitudes, socialization, stereotyping or other reasons leading to different approaches used by males and females which might culminate in gender disparity in computing. This study also did not uncover why males predominantly learn at home, or why females have their first experiences after males. The findings, however, did confirm the importance of early learning and home learning, and the disproportionate percentage of females who begin computing after Grade 7. Descriptive statistics also imply that females have not taken advantage of the benefits of learning at home. In keeping with the literature, the data indicate that females learned later and had their first computer experiences at school (Shashaani, 1994).

The main effects of gender, age of first computer experience and learning setting were all significant. These findings confirmed that gender, the age of first computer experience and learning setting matter with regard to gender equity in computing. What was equally interesting was that the study did not report any significant interaction. Despite the descriptive statistics which suggest that males learn earlier than females, the finding that the competency of early learners is significantly greater than that of those who begin later, and the literature claiming that most of these early beginners tend to be males, the non-significant Gender X FirstAge (G X FA) interaction indicates that the differences between early and later learners were the same for both males and females.

Similarly, despite the descriptive statistics which suggest that males take more advantage of home learning, the finding that the competency of home learners is significantly greater than that of those who primarily learn in other settings, and the literature claiming that males dominate home computing, the non-significant Gender X LearnSet (G X LS) interaction indicated that differences among university students in terms of predominant learning setting were the same for males and females.

What these combined findings indicate is that while it is important to begin computing early and to learn at home, and despite what the literature suggests, females and males tended to be similarly advantaged or disadvantaged by these factors. While the combined findings confirm that there are significant differences in computing by gender, age of first computer experience and learning setting, the non-significant interactions mean that the differences associated with the age of first experience and learning setting follow the same pattern regardless of gender.
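The logic of a non-significant interaction can be sketched numerically. In a two-way design, the interaction is a "difference of differences": if the gender gap in mean competency is the same size in every learning setting, the interaction term is zero even when both main effects are large. The cell means below are hypothetical, chosen only to illustrate the pattern; they are not the study's data.

```python
# Hypothetical cell means for a 2 x 2 design: Gender x Learning Setting.
# Both main effects are present (males > females; home > school), yet the
# gender gap is identical in each setting, so the interaction is zero.
means = {
    ("male", "home"): 70.0,
    ("male", "school"): 60.0,
    ("female", "home"): 65.0,
    ("female", "school"): 55.0,
}

gap_home = means[("male", "home")] - means[("female", "home")]
gap_school = means[("male", "school")] - means[("female", "school")]
interaction = gap_home - gap_school  # difference of differences

print(gap_home, gap_school, interaction)  # 5.0 5.0 0.0
```

This mirrors the study's pattern of results: real main effects, but parallel differences across genders.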

Gates-Tangorra-Feng Model

Although there is an established body of literature which looks at high-level language competency and applications competency, past literature has not isolated a Low-Level Language competency component. This study provides empirical support that low-level competency is distinct from high-level competency and applications competency.

However, the intercorrelations indicate that although the three competencies exist as distinct constructs, there is, nevertheless, some correlation between them.

Based upon the intercorrelations between subscale scores, competency in lower-level languages has been shown to be related to competency with higher-level languages.
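The kind of subscale intercorrelation described here can be illustrated with a small sketch. The `pearson` helper and the five-subject score lists below are hypothetical, for illustration only; the study's actual subscale scores are not reproduced here.

```python
import math

def pearson(x, y):
    """Pearson correlation between two equal-length lists of scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical subscale scores for five subjects (not the study's data):
lscore = [1, 2, 2, 3, 4]  # low-level language competency
hscore = [2, 3, 4, 4, 5]  # high-level language competency

r = pearson(lscore, hscore)
print(round(r, 3))  # 0.923
```

A correlation of this size would indicate related, but not identical, constructs, which is the pattern the subscale intercorrelations showed.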

The proposed Gates-Tangorra-Feng Model offers an exploratory model with which to approach and describe software competencies (for the interested reader, a brief description of the model and an explanation of how it is constituted can be found in Appendix K). Figure 4 illustrates pictorially that Low-Level Language competency (corresponding to Lower Level Languages) is distinct from, and operates at a lower level than, the other competencies. From observing how the subjects learned different software at different ages, the descriptive statistics suggest that competency at this level may also be related to development.

Additionally, Low-Level Language competency was not significant for FirstAge, but was found to be significant for both Gender and LearnSet. This suggests that gender and learning setting may play a significant role in the developmental aspect of acquiring competency with Low-Level Languages. Recall that the descriptive statistics show that males mostly learn at home and, in particular, learn the more "difficult" aspects of computing at home.

Regardless of gender, university students scored lower on the LSCORE than on the ASCORE or HSCORE. This indicates that this research has isolated a distinct aspect of computer competency which all students need to improve. Applications competency was the first factor extracted from the factor analysis, followed by Low-Level Language competency and High-Level Language competency. As Low-Level Language competency was extracted before High-Level Language competency, it explains more of the variance in competency scores and is perhaps, in terms of variance, a more important construct than High-Level Language competency.
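The extraction order referred to here reflects variance explained: in factor analysis, the variance a factor accounts for is the sum of its squared loadings, and factors are extracted in descending order of that quantity. The loadings below are hypothetical, chosen only to mirror the extraction order reported in the text; they are not the study's factor solution.

```python
# Hypothetical factor loadings (one list per factor). Variance explained by
# a factor = sum of its squared loadings; extraction proceeds in descending
# order of variance explained.
loadings = {
    "applications": [0.8, 0.7, 0.9, 0.6],
    "low_level":    [0.7, 0.6, 0.5],
    "high_level":   [0.6, 0.5, 0.4],
}

variance = {name: sum(l * l for l in ls) for name, ls in loadings.items()}
order = sorted(variance, key=variance.get, reverse=True)

print(order)  # ['applications', 'low_level', 'high_level']
```

Under these illustrative numbers the ordering matches the study's: applications first, then low-level, then high-level language competency.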

There is also a more direct way of comparing the contribution of LSCORE, or Low-Level Language competency, to overall software competency. An inspection of the three subscale scores and the total scaled score generated for every subject indicated that subjects' overall software competency fluctuated in accordance with their LSCORE subscale scores. That is, whenever the LSCORE was high, the other two subscale scores and the total scaled score tended to go up accordingly. As the LSCORE represents the more difficult software in computing, if a subject's score on this subscale was higher, it is reasonable to believe that the subject's scores would also be higher on the "easier" subscales (applications, high-level languages).

Figure 4 Proposed Gates-Tangorra-Feng Model of Computer Abstractions

Applications Level

High Level Languages

Low Level Languages

Operating System Level

Assembly Level

Machine Level

Microprogramming Level

Digital Logic Level

This suggests that a curriculum directed at learning Low-Level Languages might produce gains in High-Level and Applications competency. Conversely, it may be the case that students will need to develop a certain level of competency with HSCORE and ASCORE before moving on to LSCORE (as per the developmental aspect suggested earlier).

Implications for Teaching and Learning

Consistent with the literature, this study has found great variability in university students' software competency (Bishop-Clark & Wheeler, 1994; Lee, Pliskin & Kahn, 1994; Liu, Reed & Phillips, 1992; Overbaugh & Reed, 1994-95; Taylor & Mounfield, 1994). For both teachers and students, there are special challenges which arise in a university class with great variability.

Whether the comparison is by subject or by software, the researcher has found great variability between students in the same class. What, for example, are the policy implications of putting together in one class a student who has scored on virtually every type of software competency with a student who hands in a largely blank SCS because of a lack of prior computing experience, especially if the student with less experience is female?

What are the implications for curriculum design, for instructional strategies, for evaluation and for assessment? Should the curriculum reflect the diverse needs of both types of students?

What about the implications for teaching? At what level should the class be taught? What are the appropriate instructional strategies? How can evaluation be fair? Who should belong in such a class? How should classes be organized? The purpose of asking these questions was not to answer them but to use them as focus questions to illuminate some of the problems which arise when students arrive at higher educational institutions with varying levels of background in computing.

The last few questions articulate the problem of access to education. The recent decline in female participation in computer science has been well documented, and has been attributed to the notion that computing has joined mathematics as a "filter" into higher education and associated careers (Campbell & Williams, 1990). Part of the problem has to do with the tremendous variability, mentioned earlier, which students bring to universities. Bishop-Clark and Wheeler (1994) claimed that "Within the field of Computer Science there is some agreement that there are tremendous individual differences in student's achievement in programming" (p. 358), citing figures by Schneiderman (1980), who reported differences in programming performance as high as 100:1. With ratios this high, it is understandable why students could be discouraged from pursuing courses or a career in computer science.

Implications for Teacher Education

All the general implications about teaching and learning from the preceding section apply here. Two of the studies cited above that noted the variability in competency involved research on prospective pre-service teachers (Liu, Reed & Phillips, 1992; Overbaugh & Reed, 1994-95). This study has confirmed the existence of this variability. When applied to teacher education, the general implications about teaching and learning are further compounded by the variability among prospective teachers, and by the possibility that students may be more familiar with certain software than their teachers.

On another point, teacher education has traditionally been involved in applications research. This thesis introduces the possibility of including in that research competency in the areas of operating systems and Lower-Level Languages. The Software Competency Scale (SCS) could either be used for self-assessment by preservice teachers, or included as part of a program, as it is also able to track changes in the competencies of beginning preservice teachers. The SCS can generate three subscale means, reflecting relative degrees of competency, and a total scaled score.

If the results are to be used for assessment and evaluation, then, based upon findings reported by Sail (1989), Nowaczyk (1983), and Goodwin and Wilkes (1986, cited in Sail, 1989), and upon Ajzen and Fishbein's (1980) theory of reasoned action, self-reported measures have been shown to be reliable.

Implications for Parents, Teachers and Researchers

The data seem to imply that the gender differences in computing found in this study represent results accumulated over the years. If so, then an urgent message must go out from this research: gender as a variable in isolation misses the point about the computing experience. This study has shown that while it is important to have early computer experiences and ideal learning settings for both males and females, the differences in learning have not occurred along gender lines. The answer could lie internally, as the preceding pages argue, or externally, with the differential socialization of males and females.

As a consequence of the inequitable and unfair socialization of females in traditional North American society, gender has been used successfully by itself to predict achievement, aptitude and a host of other measures. The study has similarly reported gender differences in computing. Both female and male children grow up in a society and a home which influence their learning, but the findings from this study and the literature tell us that the opportunities for home computing are unequal for girls and boys. On a positive note, the findings indicate that early introduction to computers and emphasis on computing at home have great potential to help address the present gender imbalance in computing. Furthermore, not all competencies were significant for gender or the age of first computer experience. However, all competencies were significant in terms of learning setting.

Although there were also significant differences traced to schools and higher education, across all subscales, the home learning subscale scores were the highest.

Based upon the literature, it is not surprising that the findings reveal differences between home learning and other learning settings among university students. Perhaps it is time for teachers and researchers to issue an urgent and grave message to concerned parents. However, to help resolve the gender difference problem, one has to first become a role model as well. There were no female professors in any of the classes visited for the data collection.

More females should be teaching computing, and one should tell children that there are positive female role models in computing such as Admiral/Dr. Grace Hopper, Ada Lovelace and Jean Sammet. One could tell children that Admiral/Dr. Hopper was one of the pioneer workers on the Mark I, the forerunner of modern computers, who "epitomized the development of the software field from primitive programs to sophisticated compilers and their verification" (Sammet, 1992). Or that Ada Lovelace was a brilliant mathematician and the world's first programmer, who had a programming language named after her, and was "one hundred years before [her] time" (Baum, 1986). Or Jean Sammet, who is recognized internationally as one of the foremost authorities on programming languages, who was involved in the design of one of the earliest computer languages, and who was a past president of the ACM and the first chairperson of the AFIPS History of Computing Committee (Sammet, 1991).

One should also tell parents that many researchers have been working very hard for a very long time, and have come up with different pieces of a big puzzle that leads toward gender equity in computing. Armed with the evidence of accumulated research, one can tell parents that their attitudes, value systems and beliefs at home have a direct relationship to how their children act and choose with regard to computing. One can show parents empirical evidence that their daughters are negatively impacted by such attitudes and by the persistence of stereotypes. One can tell them that socialization has an effect on their children's learning which can result in lowered participation in computer activities, and that present choices of courses in high school can affect future enrollment and career choices. One should suggest that parents instead adopt gender-neutral attitudes, introduce computers early to their children, encourage all their children to work on computers, and that mothers act as positive role models for daughters.

Limitations of the Study

The study used questionnaires to survey intact classrooms. Participation in the study was voluntary; thus neither random selection nor random assignment was possible. Since the nature of the study was confidential and participation was voluntary, subjects effectively self-selected into the study. Hence the sample was a select group, and there was a possible problem of selection bias. Due to the number of competencies queried and the tendency for subjects to skip items with which they were not competent, missing data were inherent.

Furthermore, the open-ended portion of the questionnaire had to be operationalized into levels (as explained in Chapter Four), which resulted in a further loss of subjects. A related drawback concerned the nature of the closed-ended questions, which could not provide in-depth explanations of the answers. The self-reporting of competency could also be called into question.

Since this was an ex post facto design, randomness was not a priority, and neither was generalization, because this was a study of the relationship between a set of correlated dependent scores and a set of correlated independent factors. Although there were missing data, this was compensated for by the large sample size. For example, stringent listwise deletion often resulted in a loss of subjects, but enough subjects remained for the analysis. Although missing data contributed to the loss of some variables for the subscales, overall 22 of the 27 variables were captured in the factor analysis. While it is true that the closed-ended questionnaire could not answer the "why" questions, it could answer the "what", "where", "when" and "how well" questions quite effectively. As well, Ajzen and Fishbein's (1980) theory of reasoned action was used to justify that self-reporting captures accurate representations.
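The effect of stringent listwise deletion mentioned above can be sketched in a few lines: a subject is dropped if any one of the analysed variables is missing. The subject records below are hypothetical; `None` stands for a skipped questionnaire item.

```python
# Hypothetical subject records; None marks an item the subject skipped.
subjects = [
    {"id": 1, "ascore": 4, "hscore": 3,    "lscore": 1},
    {"id": 2, "ascore": 5, "hscore": None, "lscore": 2},
    {"id": 3, "ascore": 3, "hscore": 2,    "lscore": None},
    {"id": 4, "ascore": 4, "hscore": 4,    "lscore": 3},
]

keys = ("ascore", "hscore", "lscore")
# Listwise deletion: keep only subjects with no missing value on any key.
complete = [s for s in subjects if all(s[k] is not None for k in keys)]

print(len(subjects), len(complete))  # 4 2
```

This is why a large initial sample matters: even modest per-item missingness removes whole cases once all variables are required simultaneously.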

Implications for Future Research

The researcher, therefore, believes that further research is warranted to reveal the significance of low-level language competency, and this study needs to be replicated. Since this is an exploratory study, the researcher is not advocating curricular changes at this time. The researcher will only comment that if such a change were planned, the language "C" is a possibility, but it is an order of magnitude more difficult than the computer programming languages currently taught in high schools. Several suggestions for research are implicit in the previous section on implications. However, in closing, the researcher would like to offer some suggestions for directions which would provide a logical extension of this research:

1. Replicating this study with additional independent variables from the literature.

2. Administering the Software Competency Scale (SCS) to different groups of students.

3. Using the SCS in teacher education research to assess software competency.

4. Using other instruments in conjunction with the SCS for correlational purposes.

5. Updating the instrument with software like Windows 95, browsers and email.

6. Comparing the results of the SCS with actual observed competency.

7. Comparing the relationship between competency and achievement in computing.

8. Interviewing selected subjects from this study to answer the "why" questions.

9. Using the Gates-Tangorra-Feng Model in research to include hardware competency.

10. Investigating further the relationship between LSCORE and ASCORE and HSCORE.

11. Studying the home as a potential source of influence on computing.

REFERENCES

Aman, J. R. (1992). Gender and attitude toward computers. In C. D. Martin & E. Murchie-Beyma (Eds.). In search of gender-free paradigms for computer science education. (pp. 33-46.). Eugene: International Society for Technology in Education.

Ames, C. & Archer, J. (1987). Mother's beliefs about the role of ability and effort in school learning. Journal of Educational Psychology, 79, 409-414.

Anderson, R. E. (1984). Statement of Dr. Ronald E. Anderson before the subcommittee on investigations and oversight of the house science and technology committee. ACM SIGCUE Bulletin, 18(1), 4-10.

Anderson, R. E. (1987). Females surpass males in computer problem solving: Findings from the Minnesota computer literacy assessment. Journal of Educational Computing Research, 5(1), 39-51.

Anderson, R. E. & Klassen, D. L. (1981). A conceptual framework for developing computer literacy instruction. AEDS Journal, 14(3), 128-150.

Anderson, R. E., Klassen, D. L., Krohn, K. R. & Smith-Cunnien, P. (1982). Assessing computer literacy: Computer awareness and literacy: An empirical assessment. Minnesota Educational Computing Consortium, Minneapolis.

Anderson, R. E., Welch, W. W. & Harris, L. J. (1984). Inequalities in opportunities for computer literacy. The Computing Teacher, 77(8), 10-12.

Armstrong, J. (1979). A national assessment of achievement and participation of women in mathematics. Education Commission of the States. Denver, Colorado.

Atherton, R. (1982). Structured programming with COMAL. Chichester: Ellis Horwood Limited.

Ajzen, I. (1988). Attitudes, personality, and behavior. Chicago: Dorsey Press.

Ajzen, I. & Fishbein, M. (1980). Understanding attitudes and predicting social behavior. Englewood Cliffs, New Jersey: Prentice-Hall.

Baack, S. A., Brown, T. S. & Brown, J. T. (1991). Attitudes toward computers: View of older adults compared with those of young adults. Journal of Research on Computing in Education , 23 (3), 422-433.

Backus, J. (1978). Can programming be liberated from the von Neumann style? A functional style and its algebra of programs. Communications of the ACM, 21 (8), 613-641.

Badagliacco, J. M. (1990). Gender and race differences in computer abilities and experience. Social Science Computer Review, 8, 42-63.

Bagert, D. J. J. (1989). On teaching computer science using the three basic processes from the Denning report. ACM SIGSCE Bulletin, 21 (4), 13-14.

Bandura, A. (1977). Self-efficacy: Toward a unifying theory of behavioral change. Psychological Review, 84, 191-215.

Baker, D. R. (1987). The influence of role specific self-concept and sex role identity on career choice in science. Journal of Research in Science Teaching, 26 (8), 739-756. In J. Gaskell, A. McLaren, A. Oberg, & L. Eyre. (Eds.). (1990). The 1990 British Columbia mathematics assessment: Gender issues in student choices in mathematics and science (pp. 199). Victoria: Queen's Printer.

Barnes, C. C. (1986). Teaching computer literacy: A nontraditional approach. Journal of Education for Business, 13 (9), 27-28.

Bateman, G. R. (1973). Predicting performance in a basic computer course. Proceedings of the 5th Annual Meeting of the American Institute for Decision Sciences, 130-133.

Baum, J. (1986). The calculating passion of Ada Byron. Hamden: Archon Books.

Bear, G. G., Richards, H. C. & Lancaster, P. (1987). Attitudes toward computers: Validation of a computer attitudes scale. Journal of Educational Computing Research, 3 (2), 207-218.

Beard, C. H. (1993). Transfer of computer skills from introductory computer courses. Journal of Research on Computing in Education, 25 (4), 413-430.

Becker, J. (1981). Different treatment of females and males in mathematics class. Journal for Research in Mathematics Education, 12, 40-53.

Becker, H. (1985). How schools use microcomputers: Results from a national survey. In M. Chen & W. J. J. Paisley (Eds.). Children and microcomputers: Research on the newest medium (pp. 87-108). Beverly Hills: Sage Publications.

Becker, H. J. & Sterling, C. W. (1987). Equity in school computer use: National data and neglected considerations. Journal of Educational Computing Research, 3 (3), 289-311.

Benbow, C. & Stanley, J. (1980). Sex differences in mathematical ability: Fact or artifact? Science, 210, 1262-1264.

Bernstein, D. R. (1991). Comfort and experience with computing: Are they the same for women and men? ACM SIGCSE Bulletin, 23 (3), 57-60.

Bernat, A. P. (1986). An interactive interpreter/graphic-simulator for IBM S/370 architecture assembly language. ACM SIGSCE Bulletin, 18, (2), 13-16.

Bernt, F. M., Bugbee, A. C. & Arceo, R. D. (1990). Factors influencing student resistance to computer-administered testing. Journal of Research on Computing in Education, 22(3), 265-275.

Bishop-Clark, C. & Wheeler, D. D. (1994). The Myers-Briggs personality type and its relationship to computer programming. Journal of Research on Computing in Education, 26 (3), 359-370.

Bohlin, R. M. (1993). Computers and gender differences: Achieving equity. Computers in Schools, 9 (2/3), 155-165.

Bork, A. (1982). Computers and learning. Educational Technology, 22 (4), 33-34.

Bowers, C. A. (1988). The cultural dimensions of educational computing: Understanding the non- neutrality of technology. New York: Teachers College Press.

Bozeman, W. C. & Spuck, D. W. (1991). Technological competence: Training educational leaders. Journal of Research on Computing in Education, 23 (4), 515-529.

Brock, F. J., Thomsen, W. E. & Kohl, J. P. (1992). The effects of demographics on computer literacy of university freshmen. Journal of Research on Computing in Education, 24 (4), 561-570.

Brookshear, J. G. (1985). The university computer science curriculum: Education versus training. ACM SIGSCE Bulletin, 17 (1), 23-30.

Brookshear, J. G. (1994). Computer science: An overview. (Fourth edition). Redwood City: The Benjamin Cummings Publishing Company.

Brownell, G., Jadallah, E. & Brownell, N. (1993). Formal reasoning ability in preservice elementary education students: Matched to the technology education task at hand? Journal of Research on Computing in Education, 25 (4), 439-446.

Bryant, R. & De Palma, P. (1993). A first course in computer science for small four year computer science programs. ACM SIGSCE Bulletin, 25 (2), 31-34.

Budd, T. A. & Pandey, R. V. (1995). Never mind the paradigm, what about multiparadigm languages? ACM SIGCSE Bulletin, 27 (2), 25-30.

Bunderson, E. D. & Christiansen, M. (1995). An analysis of retention problems for female students in university computer science programs. Journal of Research on Computing in Education, 27 (1), 1-18.

Busch, T. (1995). Gender differences in self-efficacy and attitudes toward computers. Journal of Educational Computing Research, 12 (2), 147-158.

Butcher, D. & Muth, W. (1985). Predicting performance in an introductory computer science course. Communications of the ACM, 27 (11), 263-268.

Butler, P. A. (1985). Research notes. Journal of Educational Computing Research, 1, 121-125.

Byrd, D. M. & Koohang, A. A. (1989). A professional development question: Is computer experience associated with subjects' attitudes toward the perceived usefulness of computers? Journal of Research on Computing in Education, 21(4), 401-410.

Cafolla, R. (1987-1988). Piagetian operations and other cognitive correlates of achievement in computer programming. Journal of Educational Technology Systems, 16, 45-55.

Campbell, N. J. (1989). Computer anxiety of rural middle and secondary school students. Journal of Educational Computing Research, 5 (2), 213-220.

Campbell, N. J. (1990). High school students' computer attitudes and attributions: Gender and ethnic group differences. Journal of Adolescent Research, 5, 485-499.

Campbell, N. J. (1992). Enrollment in computer courses by college students: Computer proficiency, attitudes, and attributions. Journal of Research on Computing in Education, 25 (1), 61-74.

Campbell, P. F. & McCabe, G. (1984). Predicting the success of freshmen in a computer science major. Communications of the ACM, 27 (11), 1108-1113.

Campbell, N. J. & Williams, J. E. (1990). Relations of computer attitudes and computer attributions to enrollment in high school computer courses and self-perceived computer proficiency. Journal of Research on Computing in Education , 22(3), 276-289.

Canada, K. & Brusca, F. (1991). The technological gender gap: Evidence and recommendations for educators and computer-based instruction design. Educational Technology Research & Development, 39 (2), 43-51.

Carlson, R. & Wright, D. (1993). Computer anxiety and communication apprehension: Relationship and introductory college course effects. Journal of Educational Computing Research, 9 (3), 329-338.

Carrasquel, J. (1993). Necessity is the mother of language features. ACM SIGSCE Bulletin, 25 (2), 59-64.

Chambers, S. M. & Clarke, V. A. (1987). Is inequity cumulative? The relationship between disadvantaged group membership and students' computing experience, knowledge, attitudes and intentions. Journal of Educational Computing Research, 3(4), 493-516.

Charlton, J. P. & Birkett, P. E. (1995). The development and validation of the computer apathy and anxiety scale. Journal of Educational Computing Research, 11 (1), 41-59.

Chen, M. (1985). A macro-focus in microcomputers: Eight utilization and effects issues. In M. Chen & W. Paisley (Eds.). Children and microcomputers: Research on the newest medium, (pp. 37-58) Beverley Hills: Sage.

Chen, M. (1986) Gender and computers: The beneficial effects of experience on attitudes. Journal of Educational Computing Research, 2 (3), 265-282.

Cheng, T. T., Plake, B. & Stevens, D. J. (1985). A validation study of the computer literacy examination: Cognitive aspect. AEDS Journal, 18(3), 139-151.

Cherniak, R. (1976). Introductory programming reconsidered: A user-oriented approach. ACM SIGSCE Bulletin, 8 (11), 65-68.

Chu, P. C. & Spires, E. E. (1991). Validating the computer anxiety rating scale: Effects of cognitive style and computer courses on computer anxiety. Computers in Human Behavior, 7, 7-21.

Clarke, V. A. (1990). Sex differences in computing participation: Concerns, extent, reasons, and strategies. Australian Journal of Education, 34(1) 52-66.

Clarke, V. A. (1992). Strategies for involving girls in computer science. In C. D. Martin & E. Murchie-Beyma (Eds.). In search of gender-free paradigms for computer science education (pp. 71-86) Eugene: International Society for Technology in Education.

Clarke, V. & Chambers, S. M. (1989). Gender-based factors in computing enrollments and achievement: Evidence from a study of tertiary students. Journal of Educational Computing Research, 5 (4), 409-429.

Coey, W. A. (1993). An interactive tutorial system for MC68000 assembly language using hypercard. ACM SIGSCE Bulletin, 25 (2), 19.

Colley, A., Comber, C. & Hargreaves, D. J. (1995a). Computer attitudes and experience of pupils in single sex and co-educational schools. Proceedings of the British Psychological Society, 3(1), p. 17.

Colley, A. M., Gale, M. T. & Harris, T. A. (1994). Effects of gender role identity and experience on computer attitude components. Journal of Educational Computing Research, 10 (2), 129-137.

Colley, A., Hill, F., Hill, J. & Jones, A. (1995). Gender effects in the stereotyping of those with different kinds of computing experience. Journal of Educational Computing Research, 11 (1), 19-27.

Collis, B. (1985). Sex differences in secondary school students' attitudes toward computers: Implications for counselors. The School Counselor, 32(2), 120-130.

Collis, B. A., Kass, H. & Kieren, T. E. (1989). National trends in computer use among Canadian secondary school students: Implications for cross-cultural analyses. Journal of Research on Computing in Education, 22, 77-89.

Crawford, T. (1978). Solutions to the problems of teaching an introductory course in data processing. Direction, 7 (1), 11-15.

Cronbach, L. J. (1960). Essentials of psychological testing. (Second edition). New York: Harper & Row, Publishers.

Cross, R. T. (1988). Task value intervention: Increasing girls' awareness of the importance of math and physical science for career choice. School Science and Mathematics, 88 (5), 397- 412.

Culley, L. (1988). Girls, boys and computers. Educational Studies, 14 (1), 3-8. In J. Gaskell, A. McLaren, A. Oberg, & L. Eyre. (Eds.). (1990). The 1990 British Columbia mathematics assessment: Gender issues in student choices in mathematics and science, (pp. 204). Victoria: Queen's Printer.

Dalbey, J. & Linn, M. C. (1985). The demands and requirements of computer programming: A literature review. Journal of Educational Computing Research, 1 (3), 253-274.

Dambrot, F. H., Watkins-Malek, M. A., Silling, S. M., Marshall, R. S. & Garver, J. A. (1985). Correlates of sex differences in attitudes toward and involvement with computers. Journal of Vocational Behavior, 27, 71-86.

Decker, W. F. (1985). A modern approach to teaching computer organization and assembly language programming. ACM SIGSCE Bulletin, 17 (4), 38-44.

Deschenes, L. (1988). Computers in daily life: Canadians' behavior and attitudes regarding computer technology. Laval: Minister of Supply and Services Canada.

Dey, S. and Mand, L. R. (1986). Effects of mathematics preparation and prior language exposure on perceived performance in introductory computer science courses. ACM SIGCSE Bulletin, 18 (1), 144-148.

Dijkstra, E. W. (1968). The structured "THE"-multiprogramming system. Communications of the ACM, 11, 341-346.

Dixon, V. A. (1987). An investigation of prior sources of difficulties in learning university computer science. A paper presented at the National Education Computer Conference. Philadelphia, Pennsylvania.

Doremer, D. (1993). Improving the learning environment in CS I-Experiences with communication strategies. ACM SIGSCE Bulletin, 25 (3), 31.

Dubois, P. A. & Schubert, J. G. (1986). Do your school policies provide equal access to computers? Are you sure? Educational Leadership, 43 (6), 41-44. In J. Gaskell, A. McLaren, A. Oberg, & L. Eyre. (Eds.). (1990). The 1990 British Columbia mathematics assessment: Gender issues in student choices in mathematics and science, (pp. 205). Victoria: Queen's Printer.

Dunworth, A. & Upatising, V. (1989). UMAC: A simulated microprogrammable teaching aid. ACM SIGSCE Bulletin, 21 (3), 39-43.

Durndell, A., Macleod, H. & Siann, G. (1987). A survey of attitudes to, knowledge about, and experience of computers. Computers and Education, 11 (3), 167-175.

Dyck, J. L. & Smither, J. A. A. (1994). Age differences in computer anxiety: The role of computer experience, gender and education. Journal of Educational Computing Research, 10 (3), 239-248.

Eaton, M. S., Schubert, J. G., DuBois, P. A. & Wolman, J. M. (1985). Out-of-school computer access: An equity issue. The Computing Teacher, 12 (9), 20-21.

Eccles, J. S. (1987). Gender roles and women's achievement-related decisions. Psychology of Women Quarterly, 11, 135-172.

Eccles, J. S., Adler, T. & Kaczala, C. (1982). Socialization of achievement attitudes and beliefs: Parental influences. Child Development, 53, 310-321.

Eccles-Parsons, J., Adler, T. F., Futterman, R., Goff, S., Kaczala, C., Meece, J. L. & Midgley, C. (1983). Expectancies, values, and academic choice. In J. Spencer (Ed.), Achievement and achievement motivation. San Francisco: W. H. Freeman and Company.

Eckert, R. R. (1987). Kicking off a course in computer organization and assembly machine language programming. ACM SIGSCE Bulletin, 19 (4), 2-9.

Ehrhart, J. K. & Sandler, B. R. (1987). Looking for more than a few good women in traditionally male fields. Project on the status and education of women. Association of American Colleges.

Ellis, D. W. (1989). An evaluation of BASIC computer language as a prerequisite to university computer science. Unpublished master's thesis. University of British Columbia, Vancouver.

Erickson, G., Erickson, L. & Haggerty, S. (1980). Gender and mathematics/science education in elementary and secondary schools. Province of British Columbia, Ministry of Education. (Discussion paper 08/80). In J. Gaskell, A. McLaren, A. Oberg, & L. Eyre. (Eds.). (1990). The 1990 British Columbia mathematics assessment: Gender issues in student choices in mathematics and science (pp. 207). Victoria: Queen's Printer.

Ethington, C. A. & Wolfe, L. M. (1988). Women's selection of quantitative undergraduate fields of study: Direct and indirect influences. American Educational Research Journal, 25 (2), 157-175. In J. Gaskell, A. McLaren, A. Oberg, & L. Eyre. (Eds.). (1990). The 1990 British Columbia mathematics assessment: Gender issues in student choices in mathematics and science, (pp. 207). Victoria: Queen's Printer.

Farina, F., Arce, R., Sobral, J. & Carames, R. (1991). Predictors of anxiety towards computers. Computers in Human Behavior, 7, 265-267.

Fennema, E. & Sherman, J. (1976). Fennema-Sherman mathematics attitudes scales: Instruments designed to measure attitudes toward the learning of mathematics by females and males. Journal for Research in Mathematics Education, 7, 324-326.

Fennema, E. & Sherman, J. A. (1977). Sex-related differences in mathematics achievement, spatial visualization, and affective factors. American Educational Research Journal, 14 (1), 51-71.

Fennema, E. & Sherman, J. A. (1978). Sex-related differences in mathematics achievement and related factors: A further study. Journal for Research in Mathematics Education, 9 (3), 189-203.

Fetler, M. (1984). Computer literacy in California schools. Paper presented at the annual meeting of the American Educational Research Association, New Orleans, Louisiana.

Fetler, M. (1985). Sex differences on the California statewide assessment of computer literacy. Sex Roles, 13 (3/4), 181-191.

Fishbein, M. (1979). A theory of reasoned action: Some applications and implications. In H. Howe & M. Page (Eds.), Nebraska Symposium on Motivation (pp. 65-116). Lincoln: University of Nebraska Press.

Fishbein, M. & Ajzen, I. (1975). Belief, attitude, intention and behavior: An introduction to theory and research. Reading: Addison-Wesley.

Flake, W. L. (1991). Influence of gender, dogmatism, and risk-taking propensity upon attitudes toward information from computers. Computers in Human Behavior, 7, 227-235.

Forester, T. (1992). Megatrends or megamistakes? Computers & Society, 22 (1-4), 2-11.

Fowler, G. C. & Glorfeld, L. W. (1981). Predicting aptitude in introductory computing: A classification model. AEDS Journal, 14, 96-191.

Francis, L. J. & Evans, T. E. (1995). The reliability and validity of the Bath County computer attitude scale. Journal of Educational Computing Research, 12 (2), 135-146.

Franklin, R. (1987). What academic impact are high school computing courses having on the entry-level computer science curriculum? ACM SIGCSE Bulletin, 19 (1), 253-256.

Franklin, U. (1990). The real world of technology. Montreal: CBC Enterprises.

Frey, K. S. & Ruble, D. N. (1987). What children say about classroom performance: Sex and grade differences in perceived competence. Child Development, 58, 1066-1078.

Fried, L. (1982). Principles of ergonomic software. Datamation, 163-166.

Fuchs, L. (1986). Closing the gender gap: Girls and computers. Florida Instructional Computing Conference. ERIC Document No. ED 271 103.

Fuller, R. (1992). Microcode simulator for Apple Macintosh. ACM SIGCSE Bulletin, 24 (4), 49.

Gabriel, R. M. (1985a). Assessing computer literacy: A validated instrument and empirical results. AEDS Journal, 18 (3), 153-171.

Gabriel, R. M. (1985b). Computer literacy assessment and validation: Empirical relationships at both student and school levels. Journal of Educational Computing Research, 1 (14), 415-419.

Galpin, V. & Sanders, I. (1993). Gender imbalances in computer science at the University of Witwatersrand. ACM SIGCSE Bulletin, 25 (4), 2-4.

Garcia, D. L. & Weingarten, F. W. (1987). Information technology and education: Public policy and America's future. SIGCUE Bulletin, 18 (2-4), 80-87.

Gardner, D. G., Discenza, R. & Dukes, R. L. (1993a). The measurement of computer attitudes: An empirical comparison of available scales. Journal of Educational Computing Research, 9 (4), 487-507.

Gardner, D. G., Dukes, R. L. & Discenza, R. (1993b). Computer use, self-confidence, and attitudes: A causal analysis. Computers in Human Behavior, 9, 427-440.

Garratt, L. (1986). Gender differences in relation to science choice at A level. Educational Review, 38 (1), 67-77. In J. Gaskell, A. McLaren, A. Oberg, & L. Eyre. (Eds.). (1990). The 1990 British Columbia mathematics assessment: Gender issues in student choices in mathematics and science, (pp. 199). Victoria: Queen's Printer.

Gaskell, P. J., McLaren, A., Oberg, A. & Eyre, L. (1990). The 1990 British Columbia mathematics assessment: Gender issues in student choices in mathematics and science. Victoria: Queen's Printer.

Gates, W. (1981). Gates' pyramid of software abstraction. Unpublished interview, Sigma Computer Conference: Seattle.

Gathers, E. (1986). Screening freshmen computer science majors. ACM SIGCSE Bulletin, 18 (3), 44-48.

Geissler, J. E. & Horridge, P. (1993). University students' computer knowledge and commitment to learning. Journal of Research on Computing in Education, 25 (3), 347-365.

Giacquinta, J. B., Bauer, J. A. & Levin, J. E. (1993). Beyond technology's promise. Cambridge: Cambridge University Press.

Goodwin, L. & Wilkes, J. M. (1986). The psychological and background characteristics influencing students' success in computer programming. Association for Educational Data Systems Journal, 20, 1-9.

Gray, J. D. (1974). Predictability of success and achievement level of data processing technology students at the two-year post-secondary level. (Doctoral dissertation, Georgia State University, 1974). Dissertation Abstracts International, 35, 2208-A.

Greaves, M. (1988). Girls into computing won't go. Computers in Education, 59, 4-7. In J. Gaskell, A. McLaren, A. Oberg, & L. Eyre. (Eds.). (1990). The 1990 British Columbia mathematics assessment: Gender issues in student choices in mathematics and science. (pp. 211). Victoria: Queen's Printer.

Gressard, C. & Loyd, B. H. (1986). Validation studies of a new computer attitude scale. AEDS Journal, 19 (4), 295-301.

Gribbin, M. (1987). Boys muscle in on the keyboard: Girls and information technology. In Educational Computing. E. Scanlon & T O'Shea (Eds.). Chichester: Wiley.

Griffin, B. L., Gillis, M. K. & Brown, M. (1986). The counselor as a computer consultant: Understanding children's attitudes toward computers. Elementary School Guidance & Counseling, 20 (4), 246-249.

Griswold, P. A. (1983). Some determinants of computer awareness among education majors. AEDS Journal, 16 (2), 92-103.

Guinan, T. & Stephens, L. (1988). Factors affecting the achievement of high school students in beginning computer science courses. Journal of Computers in Mathematics and Science Teaching, 8 (1), 61-64.

Guimaraes, T. & Ramanujam, V. (1986). Personal computing trends and problems: An empirical study. MIS Quarterly, 10, 179-187.

Haigh, R. W. (1985). Planning for computer literacy. Journal of Higher Education, 56 (2), 161-171.

Hakkinen, P. (1995). Changes in computer anxiety in a required computer course. Journal of Research on Computing in Education, 27 (2), 141-153.

Halpern, D. (1986). Sex differences in cognitive abilities. Hillsdale: Lawrence Erlbaum Associates.

Hanchey, C. (1992). Gender equity-A partial list of resources. In C. D. Martin & E. Murchie-Beyma (Eds.). In search of gender-free paradigms for computer science education (pp. 105-110). Eugene: International Society for Technology in Education.

Hanchey, C. (1993). Labs, learning styles, and gender. In NECC Proceedings, (pp. 255-258). Orlando, Florida.

Hanna, L. E. & Leder, G. C. (1990). International Conference: The mathematics education of women. IOWME Newsletter, 6 (1), 4-10.

Harrison, A. W. & Rainer, R. K. (1992). The influence of individual differences on skill in end-user computing. Journal of Management Information Systems, 9, 93-111.

Harvey, T. J. & Wilson, B. (1985). Gender differences in attitudes toward microcomputers shown by primary and secondary school pupils. British Journal of Educational Technology, 16 (3), 183-187.

Hattie, J. & Fitzgerald, D. (1987). Sex differences in attitude, achievement and use of computers. Australian Journal of Education, 31, 3-26.

Hawkins, J. (1985). Computers and girls: Rethinking the issues. Sex Roles, 13, 165-180.

Hawkins, J., Sheingold, K., Gearhart, M. & Berger, C. (1982). Microcomputers in schools: Impact on the social life of elementary classrooms. Journal of Applied Developmental Psychology, 3, 361-373.

Hecht, J. B. & Dwyer, D. J. (1993). Structured computer learning activities at schools and participation in out-of-school structured activities. Journal of Research on Computing in Education, 26 (1), 70-82.

Heinssen, R. K., Glass, C. R. & Knight, L. A. (1987). Assessing computer anxiety: Development and validation of the computer anxiety scale. Computers in Human Behavior, 3, 59-69.

Henry, M. (1987). An interfacing and electronics course for computer science majors. ACM SIGCSE Bulletin, 19 (2), 12-19.

Hess, R. & Miura, I. (1985). Gender differences in enrollment in computer camps and classes. Sex Roles, 13 (4), 193-203.

Hignite, M. A. & Echternacht, L. J. (1992). Assessment of the relationships between the computer attitudes and computer literacy levels of prospective educators. Journal of Research on Computing in Education, 24 (3), 381-391.

Hines, M. C. (1982). Prenatal gonadal hormones and sex differences in human behavior. Psychological Bulletin, 92, 56-80.

Horrocks, J. E. (1964). Assessment of behavior. Columbus: Charles E. Merrill Books.

Howard, G. S. & Smith, R. (1986). Computer anxiety in management: Myth or reality? Communications of the ACM, 29, 611-665.

Howell, K. (1993). The experience of women in undergraduate computer science: What does research say? ACM SIGCSE Bulletin, 25 (2), 1-8.

Howerton, C. (1988). The impact of pre-college computer exposure on student achievement in introductory computer programming courses. Computer Science Education, 1 (1), 73-84.

Hughes, M., MacLeod, H., Potts, C. & Rogers, J. (1985). Are computers only for boys? New Society, 75-76.

Hunt, N. P. & Bohlin, R. M. (1993). Teacher education students' attitudes toward using computers. Journal of Research on Computing in Education, 25 (4), 487-497.

Hunt, D. & Randhawa, B. S. (1973). Relationship between and among cognitive variables and achievement in computational science. Educational and Psychological Measurement, 33, 921-928.

ISACS (1985). Mission: Define computer literacy. The Computing Teacher, 13 (3), 10-15.

Jacobs, J. E. (1991). Influence of gender stereotypes on parent and child mathematics attitudes. Journal of Educational Psychology, 83 (4), 518-527.

Jackson, W. K., Clements, D. G. & Jones, L. G. (1984). Computer awareness and use at a research university. Journal of Educational Technology Systems, 13 (1), 47-56.

Jagacinski, C. M., LeBold, W. K. & Salvendy, G. (1988). Gender differences in persistence in computer-related fields. Journal of Educational Computing Research, 4 (2), 185-202.

Johnson, R. T. & Johnson, D. W. (1987). Learning together and alone: Cooperative, competitive and individualistic learning (Third Edition). Englewood Cliffs: Prentice-Hall.

Johnson, R. T. & Johnson, D. W. (1988). Cooperative learning and the computer. In J. D. Ellis (Ed.). 1988 AETS Yearbook: Information technology and Science Education.

Johnson, C. S. & Swoope, K. F. (1987). Boys' and girls' interest in using computers: Implications for the classroom. Arithmetic Teacher, 34 (1), 14-16.

Jones, P. K. (1987). The relative effectiveness of computer-assisted remediation with male and female students. Technological Horizons in Education Journal, 3, 61-63.

Jorde-Blom, P. (1988). Self-efficacy expectations as a predictor of computer use: A look at early childhood administrators. Computers in Schools, 5, 45-63.

Jump, T. L., Harris, J. J. & Held, C. A. (1985). Training for equitable attributes in mathematics and the sciences: A research and training paradigm. A paper presented at the annual meeting of the National Science Teacher Association, Cincinnati, Ohio.

Kay, R. H. (1989a). Bringing computer literacy into perspective. Journal of Research on Computing in Education, 22 (1), 35-47.

Kay, R. H. (1989b). Gender differences in computer attitudes, literacy, locus of control and commitment. Journal of Research on Computing in Education, 21, 307-316.

Kay, R. H. (1990). The relation between locus of control and computer literacy. Journal of Research on Computing in Education, 22 (4), 464-474.

Kay, R. H. (1992a). An analysis of methods used to examine gender differences in computer- related behavior. Journal of Educational Computing Research, 8 (3), 277-290.

Kay, R. H. (1992b). The computer literacy potpourri: A review of the literature or McLuhan revisited. Journal of Research on Computing in Education, 24 (4), 446-455.

Kay, R. H. (1992c). Understanding gender differences in computer attitudes, aptitude, and use: An invitation to build theory. Journal of Research on Computing in Education, 25 (1), 159-171.

Kay, R. H. (1993a). A critical evaluation of gender differences in computer-related behavior. Computers in Schools, 9 (4), 81-93.

Kay, R. H. (1993b). An exploration of theoretical and practical foundations for assessing attitudes toward computers: The computer attitude measure (CAM). Computers in Human Behavior, 9, 371-386.

Kay, R. H. (1993c). A practical research tool for assessing ability to use computers: The computer ability survey (CAS). Journal of Research on Computing in Education, 26 (1), 16-27.

Kay, R. H. (1993-1994). Understanding and evaluating measures of computer ability: Making a case for an alternative metric. Journal of Research on Computing in Education, 26 (2), 270-284.

Keller, J. M. (1983). Motivational design of instruction. In C. M. Reigeluth (Ed.). Instructional design theories and models: An overview of their current status (pp. 383-434). Hillsdale: Lawrence Erlbaum.

Keller, J. M. & Kopp, T. (1987). Applications of the ARCS model of motivational design. In C. M. Reigeluth (Ed.). Instructional design theories and models: An overview of their current status (pp. 383-434). Hillsdale: Lawrence Erlbaum.

Keller, J. M. & Suzuki, K. (1988). Use of ARCS models in courseware design. In D. Jonassen (Ed.). Instructional designs for computer courseware. New York: Lawrence Erlbaum.

Kelsh, J. P. (1993). Levels of abstraction in CS2. ACM SIGCSE Bulletin, 25 (2), 35-37.

Kerlinger, F. N. (1973b). Foundations of behavioral research (Second edition). New York: Holt, Rinehart and Winston.

Kerner, J. T. & Vargas, K. (1994). Women and computers: What we can learn from science. ACM SIGCSE Bulletin, 26 (2), 52-56.

Kersteen, Z. A., Linn, M. C., Clancy, M. & Hardyck, C. (1988). Previous experience and the learning of computer programming: The computer helps those who help themselves. Journal of Educational Computing Research, 4 (3), 321-333.

Kiesler, S., Sproull, L. & Eccles, J. S. (1985). Poolhalls, chips, and war games: Women in the culture of computing. Psychology of Women Quarterly, 9 (4), 451-463.

Kimura, T. (1979). Reading before composition. ACM SIGCSE Bulletin, 11 (1), 162-166.

Kinnear, A. (1995). Introduction of microcomputers: A case study of patterns of use and children's perceptions. Journal of Educational Computing Research, 13 (1), 27-40.

Klawe, M. & Leveson, N. (1995). Women in computing: Where are we now? Communications of the ACM, 38 (1), 29-44.

Klein, L. (1992). Female students' underachievement in computer science and mathematics. In C. D. Martin & E. Murchie-Beyma (Eds.). In search of gender free paradigms for computer science education (pp. 47-56). Eugene: International Society for Technology in Education.

Klein, J. D., Knupfer, N. N. & Crooks, S. M. (1993). Differences in computer attitudes and performance among re-entry and traditional college students. Journal of Research on Computing in Education, 25 (4), 499-505.

Kluever, R. C., Lam, T. C. M., Hoffman, E., Green, K. E. & Swearingen, D. L. (1994). The computer attitude scale: Assessing changes in teachers' attitudes toward computers. Journal of Educational Computing Research, 11 (3), 251-261.

Kling, B. (1983). Value conflicts in computing developments. Telecommunications Policy, 7 (1), 12-34.

Koohang, A. A. (1986). The effects of age, gender, college status, and computer experience on attitudes toward the Library Computer System (LCS). Library and Information Science Research, 8 (4), 349-355.

Koohang, A. A. (1987). A study of the attitudes of pre-service teachers toward the use of computers. Educational Communication and Technology Journal, 35 (3), 145-149.

Koohang, A. A. (1989). A study of attitudes toward computers: Anxiety, confidence, liking, and perception of usefulness. Journal of Research on Computing in Education, 22 (2), 137-150.

Koohang, A. A. & Byrd, D. M. (1987). A study of selected variables and further study. Library and Information Science Research, 9 (1), 105-111.

Konstam, A. & Howland, J. E. (1994). Teaching computer science principles to liberal arts students using Scheme. ACM SIGCSE Bulletin, 26 (4), 29-34.

Kramer, P. E. & Lehman, S. (1990). Mismeasuring women: A critique of research on computer ability and avoidance. Signs: Journal of Women in Culture and Society, 16 (11), 1-18.

Krendl, K. A., Broihier, C. M. & Fleetwood, C. (1989). Children and computers: Do sex-related differences persist? Journal of Communication, 39 (3), 85-93.

Krendl, K. A. & Lieberman, D. (1988). Computers and learning: A review of recent research. Journal of Educational Computing Research, 4 (4), 367-389.

Kurland, D. M. & Kurland, L. C. (1987). Computer applications in education: A historical overview. Annual Review of Computer Science, 2, 317-358.

Kurtz, B. L. (1980). Investigating the relationship between the development of abstract reasoning and performance in an introductory programming class. ACM SIGCSE Bulletin, 12 (1), 110-117.

Kwan, S. K., Trauth, E. M. & Driehaus, K. C. (1985). Gender differences and computing: Students' assessment of societal influences. Education & Computing, 1, 187-194.

Lamb, J. & Daniels, R. (1993). Gifted girls in a rural community: Math attitudes and career opinions. Exceptional Children, 59 (6), 513-517.

Lambrecht, J. J. (1993). Applications software as cognitive enhancers. Journal of Research on Computing in Education, 25 (4), 506-520.

Leder, G. C. (1982). Mathematics achievement and fear of success. Journal for Research in Mathematics Education, 13, 124-135.

Lee, R. (1970). Social attitudes and the computer revolution. Public Opinion Quarterly, 34, 53-59.

Lee, D. M. S., Pliskin, N. & Kahn, B. (1994). The relationship between performance in a computer literacy course and students' prior achievement and knowledge. Journal of Educational Computing Research, 10 (1), 63-77.

Lees, B. (1986). Teaching microcomputer concepts through modeling. ACM SIGCSE Bulletin, 18 (2), 19-24.

Lemos, R. S. (1978). Students' attitudes towards programming: The effects of structured walk-throughs. Computers & Education, 2, 301-306.

Lepper, M. R. (1985). Microcomputers in education: Motivational and social issues. American Psychologist, 40, 1-18.

Lever, S., Sherrod, K. B. & Bransford, J. (1989). The effects of Logo instruction on elementary students' attitudes toward computers and school. Computers in the Schools, 6, 45-65.

Levin, D. (1983). Everyone wants "computer literacy" so maybe we should know what it means. The American School Board Journal, 25-28.

Levin, J. & Kareev, Y. (1980). Problem-solving in everyday situations. The Quarterly Newsletter of the Laboratory of Comparative Human Cognition, 2, 47-51.

Levin, T. & Gordon, C. (1989). Effect of gender and computer experience on attitudes towards computers. Journal of Educational Computing Research, 5 (1), 69-88.

Lieberman, D. A. & Linn, M. C. (1991). Learning to learn revisited: Computers and the development of self-directed learning skills. Journal of Research on Computing in Education, 23 (3), 373-395.

Linn, M. C. (1984). Fostering equitable consequences from computer learning environments. Berkeley: University of California Press.

Linn, M. C. (1985). Fostering equitable consequences from computer learning environments. Sex Roles, 13 (3/4), 229-239.

Linn, M. C. (1986). Instructional strategies for fostering girls in math. Title IX Line (Center for Sex Equity in Schools, The University of Michigan), 6 (1), 8-10.

Liu, M., Reed, M. & Phillips, P. D. (1992). Teacher education students and computers: Gender, major, prior computer experience, occurrence, and anxiety. Journal of Research on Computing in Education, 24 (4), 457-467.

Lockheed, M. E. (1985). Women, girls, and computers: A first look at the evidence. Sex Roles, 13 (3/4), 115-122.

Lockheed, M. E. & Frakt, S. (1984). Sex equity: Increasing girls' use of computers. The Computing Teacher, 11 (8), 16-18.

Lockheed, M. E. & Mandinach, E. B. (1986). Trends in educational computing: Decreasing interest and the changing focus of instruction. Educational Researcher, 15, 21-26.

Lockheed, M., Nielsen, A. & Stone, M. (1983). Sex differences in microcomputer literacy. In Proceedings of the National Educational Computer Conference, Baltimore, Maryland.

Louden, K. (1989). LOGO as a prelude to Lisp: Some surprising results. ACM SIGCSE Bulletin, 21 (3), 35-38.

Loyd, B. H., Loyd, D. E. & Gressard, C. (1987). Gender and computer experience as factors in the computer attitudes of middle school students. Journal of Early Adolescence, 7 (1), 13-19.

Loyd, B. H. & Gressard, C. P. (1984). The effects of sex, age, and computer experience on computer attitudes. AEDS Journal, 18 (2), 67-76.

Loyd, B. H. & Gressard, C. P. (1986). Gender and amount of computer experience of teachers in staff development programs: Effects on computer attitudes and perceptions of the usefulness of computers. AEDS Journal, 19 (4), 302-311.

Luker, P. (1989). Never mind the language, what about the paradigm? ACM SIGCSE Bulletin, 21 (1), 252-256.

Maccoby, E. E. (1986). Social groupings in childhood: Their relationships to prosocial and antisocial behavior in boys and girls. In Development of antisocial and prosocial behaviour. J. Block, D. Olweus & M. R. Yarrow (Eds.). New York: Academic Press.

Mandinach, E. B. & Corno, L. (1985). Cognitive engagement variations among students of different ability levels and sex in a computer problem solving game. Sex Roles, 13 (3/4), 241-251.

Mandinach, E. B. & Linn, M. C. (1987). Cognitive consequences of programming: Achievements of experienced and talented programmers. Journal of Educational Computing Research, 3, 53-72.

Mallozzi, J. S. (1985). A course in programming languages for educational computing. ACM SIGCSE Bulletin, 17 (2), 29-31.

Marcinkiewicz, H. R. (1994). Differences in computer use of practicing versus preservice teachers. Journal of Research on Computing in Education, 27 (2), 185-197.

Marcoulides, G. (1988). The relationship between computer anxiety and computer achievement. Journal of Educational Computing Research, 4 (2), 151-158.

Maren, J. (1987). Computer literacy and the older learner: A computer department's response. ACM SIGCSE Bulletin, 19 (3), 25-28.

Marshall, J. C. & Bannon, S. H. (1986). Computer attitudes and computer knowledge of students and educators. AEDS Journal, 19(4), 270-286.

Massoud, S. L. (1990). Factorial validity of a computer attitude scale. Journal of Research on Computing in Education, 22 (3), 290-299.

Massoud, S. L. (1991). Computer attitudes and computer knowledge of adult students. Journal of Educational Computing Research, 7 (3), 269-291.

Martin, C. D. & Murchie-Beyma, E. (Eds.). (1992). In search of gender free paradigms for computer education. Eugene, Oregon: International Society for Technology in Education.

Maurer, M. M. & Simonson, M. R. (1993-1994). The reduction of computer anxiety: Its relation to relaxation training, previous computer coursework, and need for cognition. Journal of Research on Computing in Education, 26 (2), 205-219.

Mazaitis, D. (1993). The object-oriented paradigm in the undergraduate curriculum: A survey of implementations and issues. ACM SIGCSE Bulletin, 25 (3), 58-64.

Mazlack, L. J. (1980). Identifying potential to acquire programming skills. Communications of the ACM, 23, 14-17.

McCormick, D. & Ross, S. M. (1990). Effects of computer access and flow charting on students' attitudes and performance in learning computer programming. Journal of Educational Computing Research, 6 (2), 203-213.

McGee, L., Polychronopoulos, G. & Wilson, C. (1987). The influences of BASIC on performance in introductory science computer courses using Pascal. ACM SIGCSE Bulletin, 19 (3), 29-37.

McGrath, D., Thurston, L. P., McLellan, H., Stone, D. & Tischhauser, M. (1992). Sex differences in computer attitudes and beliefs among rural middle school children after a teacher training intervention. Journal of Research on Computing in Education, 24 (4), 468-485.

McInerney, V., McInerney, D. M. & Sinclair, K. E. (1994). Student teachers, computer anxiety and computer experience. Journal of Educational Computing Research, 11 (1), 27-50.

Miller, M. D. & McInerney, W. D. (1994-1995). Effects on achievement of a home/school computer project. Journal of Research on Computing in Education, 27 (2), 198-210.

Ministry of Education (1995). Technology in British Columbia public schools. Report and Action Plan 1995-2000. Victoria: Queen's Printer.

Miura, I. (1986). A multivariate study of school-aged children's computer interest and usage.

Miura, I. T. (1987a). Gender and socioeconomic status differences in middle-school computer interest and use. Journal of Early Adolescence, 7, 243-254.

Miura, I. T. (1987b). The relationship of self-efficacy to computer interest and course enrollment in college. Sex Roles, 16 (2), 303-311.

Mody, R. P. (1991). Computer in education and software engineering. ACM SIGCSE Bulletin, 23 (3), 45-56.

Molnar, A. R. (1978). The next great crisis in American education: Computer literacy. AEDS Journal, 12, 11-20.

Morrow, D. (1986). A commentary on gender differences. Manitoba: Manitoba Education & Planning and Research. In J. Gaskell, A. McLaren, A. Oberg, & L. Eyre. (Eds.). (1990). The 1990 British Columbia mathematics assessment: Gender issues in student choices in mathematics and science, (pp. 226). Victoria: Queen's Printer.

Morton, L. & Norgaard, N. (1993). A survey of programming languages in CS programs. ACM SIGCSE Bulletin, 25 (2), 9-11.

Moses, L. (1993). Our computer science classrooms: Are they "friendly" to female students? ACM SIGCSE Bulletin, 25 (3), 3-13.

Muller, A. & Perlmutter, M. (1985). Preschool children's problem-solving interactions at computers and jigsaw puzzles. In J. D. Ellis (Ed.). 1988 AETS Yearbook: Information technology and Science Education.

Munger, G. F. & Loyd, B. H. (1989). Gender and attitudes toward computers and calculators: Their relationship to math performance. Journal of Educational Computing Research, 5 (2), 167-177.

Murphy, L. L., Conoley, J. C. & Impara, J. C. (Eds.) (1994). Tests in print. (Volume IV). The Buros Institute of Mental Measurement. Lincoln: The University of Nebraska Press.

Murphy, C. A., Coover, D. & Owen, S. V. (1989). Development and validation of the computer self-efficacy scale. Educational and Psychological Measurement, 49, 893-899.

Myers, J. P. (1992). Men supporting women computing science students. ACM SIGCSE Bulletin, 24 (1), 63-66.

Nelson, C. S. & Watson, J. A. (1990-1991). The computer gender gap: Children's attitudes, performance and socialization. Journal of Educational Technology Systems, 19 (4), 345-353.

Nelson, L. J., Wiese, G. M. & Cooper, J. (1991). Getting started with computers: Experience, anxiety, and relational style. Computers in Human Behavior, 7, 185-202.

Nelson, L. R. (1988). Attitude of Western Australian students toward microcomputers. British Journal of Educational Technology, 19, 53-57.

Nichols, L. M. (1992). The influence of student computer ownership and in-home use on achievement in an elementary school computer programming curriculum. Journal of Educational Computing Research, 8 (4), 407-421.

Nickerson, R. S. (1981). Why interactive computer systems are sometimes not used by people who might benefit from them. International Journal of Man-Machine Studies, 15, 469-483.

Nolan, P., MacKinnon, D. & Soler, J. (1992). Computers in education: Achieving equitable access and use. Journal of Research on Computing in Education, 24 (3), 299-314.

Nowaczyk, R. H. (1983). Cognitive skills needed in computer programming. Paper presented at the Annual Meeting of the Southeastern Psychological Association (29th), Atlanta. ERIC ED 236 466.

Nunnally, J. C. (1978). Psychometric theory. (Second edition). New York: McGraw-Hill.

Ogletree, S. M. & Williams, S. W. (1990). Sex and sex-typing effects on computer attitudes and aptitude. Sex Roles, 23 (11/12), 703-712.

Ogozalek, V. Z. (1989). A comparison of male and female computer science students' attitudes toward computers. ACM SIGCSE Bulletin, 21 (2), 8-14.

Ogozalek, V. Z., Bush, C., Hayeck, E. & Lockwood, J. (1994). Introducing elderly college students to multimedia: An intergenerational approach. SIGCUE Outlook, 22 (3), 26-32.

Okebukola, P. A., Sumampouw, W. & Jegede, O. J. (1992). The experience factor in computer anxiety and interest. Journal of Educational Technology Systems, 20 (3), 221-229.

Oman, P. W. (1986). Identifying student characteristics influencing success in introductory computer science courses. AEDS Journal, 19 (2/3), 226-233.

Ordovensky, P. (1989). Kids connect with PC and disks. USA Today, International Edition, 7 (86), 7-8.

Overbaugh, R. C. & Reed, W. M. (1994-1995). Effects of an introductory versus a content-specific computer course on computer anxiety and stages of concern. Journal of Research on Computing in Education, 27 (2), 211-220.

Pagan, F. G. (1986). On the feasibility of teaching Backus-type functional programming (FP) as a first language. ACM SIGCSE Bulletin, 18 (3), 31-35.

Pancer, S. M., George, M. & Gebotys, R. J. (1992). Understanding and predicting attitudes towards computers. Computers in Human Behavior, 8, 211-222.

Pandey, R. (1990). Getting the languages for a programming languages course. ACM SIGCSE Bulletin, 22 (4), 11-14.

Papert, S. (1980). Mindstorms: Children, computers and powerful ideas. New York: Basic Books.

Parnas, D. L. (1985). Software aspects of strategic defense systems. Communications of the ACM, 28 (12), 1326-1335.

Patterson, L. (1984). Sexual stereotypes taint computer classes. Instructional Innovator, 29 (9), 27.

Pavri, F. (1988). An empirical study of the factors contributing to microcomputer usage. Unpublished doctoral dissertation, University of Western Ontario, London, Ontario.

Paxton, A. L. & Turner, E. J. (1984). The application of human factors to the needs of the novice computer user. International Journal of Man-Machine Studies, 15, 518-530.

Peacock, D., Ralhan, V. K., Lee, M. P. & Jeffreys, S. (1988). A first year course in software design and use. ACM SIGSCE Bulletin, 20 (4), 2-8.

Perelman, L. J. (1992). School's out. New York: William Morrow & Company

Perry, R. & Greber, L. (1990). Women and computers: An introduction . Signs: Journal of Women in Culture and Society, 16 (1), 74-103.

Piburn, M. D. & Baker, D. R. (1989). Sex differences in formal reasoning ability: Task and interviewer effects. Science Education, 73(1), 101-113. In J. Gaskell, A. McLaren, A. Oberg, & L. Eyre. (Eds.). (1990). The 1990 British Columbia mathematics assessment: Gender issues in student choices in mathematics and science, (pp.228). Victoria: Queen's Printer.

Plog, C. E. (1981). The relationship of selected variables in predicting academic success in computer programming. (Doctoral dissertation, East Texas State University, 1981). Dissertation Abstracts International, 41, 2903A-2904A.

Pope-Davis, D. B. & Twing, J. S. (1991). The effects of age, gender, and experience on measures of attitude regarding computers. Computers in Human Behavior, 7, 333-339.

Pope-Davis, D. B. & Vispoel, W. P. (1993). How instruction influences attitudes of college men and women towards computers. Computers in Human Behavior, 9, 83-93.

Postman, N. (1987). Will the new technologies of communication weaken or destroy what is most worth preserving in education and culture? In IMTEC Conference "School-year 2000", Hyvinkää, Finland: IMTEC.

Ragsdale, R. G. (1988). Permissible computing in education: Values, assumptions, and needs. New York: Praeger.

Ralston, A. (1984). The first course in computer science needs a mathematics corequisite. Communications of the ACM, 27, 1002-1005.

Ramberg, P. (1986). A new look at an old problem: Keys to success for computing science students. ACM SIGCSE Bulletin, 18 (3), 36-39.

Reed, M. W. & Overbaugh, R. C. (1993). The effects of prior experience and instructional format on teacher education students' computer anxiety and performance. Computers in the Schools, 9 (2/3), 75-89.

Rennie, L. J. (1987). Detecting and accounting for gender differences in mixed-sex and single-sex groupings in science lessons. Educational Review, 39 (1), 65-73. In J. Gaskell, A. McLaren, A. Oberg, & L. Eyre. (Eds.). (1990). The 1990 British Columbia mathematics assessment: Gender issues in student choices in mathematics and science, (pp.230). Victoria: Queen's Printer.

Revelle, G., Honey, M., Amsel, E., Schauble, L. & Levine, G. (1984). Sex differences in the use of computers. Paper presented at the annual meeting of the American Educational Research Association, New Orleans, Louisiana.

Reyes, L. H. & Stanic, G.M.A. (1988). Race, sex, socio-economic status, and mathematics. Journal for Research in Mathematics Education, 19 (1), 26-43. In J. Gaskell, A. McLaren, A. Oberg, & L. Eyre. (Eds.). (1990). The 1990 British Columbia mathematics assessment: Gender issues in student choices in mathematics and science, (pp. 230). Victoria: Queen's Printer.

Rhodes, L. A. (1986). On computers, personal styles, and being human: A conversation with Sherry Turkle. Educational Leadership, 43 (6), 12-16.

Sail, M. S. (1989). Variables predicting achievement in introductory computer science courses. Unpublished master's thesis, University of British Columbia, Vancouver.

Sammet, J. E. (1991). Some approaches to, and illustrations of, programming language history. Annals of the History of Computing, 13 (1), 33-50.

Sammet, J. E. (1992). Farewell to Grace Hopper: End of an era! Communications of the ACM, 35 (4), 128-131.

Sanders, J. S. (1984). The computer: Male, female or androgynous. The Computing Teacher, 11 (8), 31-34.

Sanders, J. S. & Stone, A. (1986). The neuter computer: Computers for girls and boys. New York: Neal-Schulman.

Sayers, J. E. & Martin, D. E. (1988). A hypothetical computer to simulate microprogramming and conventional machine language. ACM SIGCSE Bulletin, 20 (4), 43.

Schaefer, L. & Sprigle, J. E. (1988). Gender differences in the use of the logo programming language. Journal of Educational Computing Research, 4 (1), 49-55.

Schmidinger, E. (1993). Gender and new technologies in school. Informatics and changes in learning. In A Knierzinger & M. Moser (Eds.). Proceedings of the IFIP Open Conference, (pp.5-8). Gmunden, Austria.

Schneider, G. M. (1986). A proposed redesign of the introductory service course in computer science. ACM SIGCSE Bulletin, 18 (4), 15-21.

Shneiderman, B. (1980). Software psychology: Human factors in computer and information systems. Cambridge: Winthrop Publishers.

Schroeder, M. H. (1978). Piagetian mathematical and spatial reasoning as predictors of success in computer programming. (Doctoral dissertation, University of Colorado, 1979). Dissertation Abstracts International , 39, 4850-A.

Schubert, J. G. (1986). Gender equity in computer learning. Theory into Practice, 25, 267-275.

Schubert, J. G. & Bakke, T. W. (1984). Practical solutions to overcoming equity in computer use. The Computing Teacher, 11, 28-30.

Scott, J. A. (1988). Training adult learners: A new face in end users. SIGUCCS Newsletter, XVIII (1), 25-29.

Searls, D. E. (1993). An integrated hardware simulator. ACM SIGCSE Bulletin, 25 (2), 24.

Self, C. C. (1983). A position on a computer literacy course. A paper presented at the University of Massachusetts-Amherst, Amherst, Massachusetts.

Serbin, L., Zelkowitz, P., Doyle, A., Gold, D. & Wheaton, B. (1990). The socialization of sex-differentiated skills and academic performance: A mediational model. Sex Roles, 23 (11/12), 613-627.

Sethi, R. (1989). Programming languages: Concepts and constructs. Murray Hill: AT&T Bell Labs.

Sharma, S. (1987). Learners' cognitive styles and psychological types as intervening variables influencing performance in computer science courses. Journal of Educational Technology Systems, 15, 391-399.

Shashaani, L. (1993). Gender-based differences in attitudes toward computers. Computers & Education, 20 (2), 169-181.

Shashaani, L. (1994a). Gender differences in computer experience and its influence on computer attitudes. Journal of Educational Computing Research, 11 (4), 347-367.

Shashaani, L. (1994b). Socioeconomic status, parents' sex-role stereotypes, and the gender gap in computing. Journal of Research on Computing in Education, 26 (4), 433-451.

Shirkhande, N. & Singh, L. P. S. (1986). The war of languages. ACM SIGCSE Bulletin, 18 (2), 63, 81.

Shore, J. (1985). The Sachertorte algorithm. New York: Viking.

Siann, G., Macleod, H., Glissov, P. & Durndell, A. (1990). The effect of computer use on gender differences in attitudes to computers. Computers in Education, 14, 183-191.

Skolnick, J., Longhort, C. & Day, L. (1982). How to encourage girls in math and science. Englewood Cliffs: Prentice-Hall.

Smith, B. C. (1985). The limits of correctness. Computers and Society, 14 (4)/15 (1-3), 18-26.

Smith, S. D. (1987). Computer attitudes of teachers and students in relationship to gender and grade level. Journal of Educational Computing Research, 3 (4), 479-494.

Soper, J. B. & Lee, M. P. (1985). Spreadsheets in teaching statistics. The Statistician, 34, 317-321.

Sorge, D. H. & Wark, L. K. (1984). Factors for success as a computer science major. AEDS Journal, 17 (4), 36-45.

Standing Committee on Educational Technology (1991). The 1991 educational technology report. B.C. Colleges & Institutes. Ministry of Advanced Education, Training and Technology. Victoria: Queen's Printer.

Stephens, L. J., Wileman, S. & Konvalina, J. (1981). Group differences in computer science aptitude. AEDS Journal, 14(2), 84-95.

Stevens, D. J. (1983). Cognitive processes and success of students in instructional computer courses. AEDS Journal, 16 (4), 228-233.

Stoob, J. C. (1984). Thoughts on university computer curricula. ACM SIGCSE Bulletin, 16 (3), 13-16.

Styer, E. (1994). On the design and use of a simulator for teaching computer architecture. ACM SIGCSE Bulletin, 26 (3), 45-50.

Sutton, R. (1991). Equity and computers in the schools: A decade of research. Review of Educational Research, 61 (4), 475-503.

Swadener, M. & Hannafin, M. (1987). Gender similarities and differences in sixth graders attitudes toward computers: An exploratory study. Educational Technology, 27(1), 37-42.

Swadener, M. & Jarrett, K. (1986). Gender differences in middle grade students' actual and preferred computer use. Educational Technology, 26 (9), 41-47.

Tabachnick, B. G. & Fidell, L. S. (1989). Using multivariate statistics . (Second edition). New York: Harper & Row.

Tangorra, F. (1990). The role of the computer architecture simulation in the laboratory. ACM SIGCSE Bulletin, 22 (2), 5-10.

Taylor, R. P. (1986). How will computing change education? SIGCUE Bulletin, 18 (2-4), 186-191.

Taylor, H. & Mounfield, L. (1991). An analysis of success factors in college computer science: High school methodology is a key element. Journal of Research on Computing in Education, 24 (2), 240-245.

Taylor, H. G. & Mounfield, L. C. (1994). Exploration of the relationship between prior computing experience and gender on success in college computer science. Journal of Educational Computing Research, 11 (4), 291-306.

Teague, J. (1992). Raising the self confidence and self esteem of final year female students prior to job interviews. ACM SIGCSE Bulletin, 24 (1), 67-71.

Teague, J. & Clarke, A. (1993). Attracting women to tertiary computing courses: Two programs directed at secondary level. ACM SIGCSE Bulletin, 25 (1), 208-212.

Tharp, A. L. (1984). The impact of fourth generation programming languages. ACM SIGCSE Bulletin, 16 (2), 37-41.

Turkle, S. (1984). The second self: Computers and the human spirit. New York: Simon and Schuster.

Turner, D. A. (1982). Recursion equations as a programming language. In Functional programming and its applications. Cambridge: Cambridge University Press.

Turnipseed, D. L. & Burns, O. M. (1991). Contemporary attitudes toward computers: An explanation of behavior. Journal of Research on Computing in Education, 23 (4), 611-625.

Turoff, M. (1985). The rational, the pragmatic and the inquiry process: The social study of information-communication systems. Paper invited for panel on Contrasting Perspectives on Social Studies for Communication Technologies, International Communications Association Annual Meeting, May 1985, Honolulu.

UBC (1993-1994). The University of British Columbia Factbook. Office of Budget and Planning. Vancouver: University of British Columbia Press.

Van Houten, K. (1987). Software support for computer science video courses. ACM SIGCSE Bulletin, 19 (3), 35-37.

Violato, C., Marini, A. & Hunter, W. (1989). A confirmatory factor analysis of a four-factor model of attitudes toward computers: A study of preservice teachers. Journal of Research on Computing in Education, 22 (2), 199-213.

Wagner, C. & Vinsonhaler, J. (1991). An artificial intelligence theory of computer competency. ACM SIGCSE Bulletin, 23 (2), 45-50.

Watson, J. A., Penny, J. M., Scanzoni, J. & Penny, J. (1989). Networking the home and university: How families can be integrated into proximate/distant computer systems. Journal of Research on Computing in Education, 22(1), 107-117.

Webb, N. M. (1985). The role of gender in computer programming learning processes. Journal of Educational Computing Research, 1 (4), 441-458.

Weil, M. M., Rosen, L. D. & Wugalter, S. E. (1990). The etiology of computerphobia. Computers in Human Behavior, 6, 361-379.

Weirsma, W. (1986). Research methods in education: An introduction. (Fourth edition). Boston: Allyn and Bacon.

Werth, L. H. (1986). Predicting student performance in a beginning computer science class. ACM SIGCSE Bulletin, 18 (1), 138-143.

Whipkey, K. L. & Stephens, J. T. (1984). Identifying predictors of programming skill. ACM SIGCSE Bulletin, 16 (4), 36-42.

Whiting, B. B. & Edwards, C. P. (1988). Children of different worlds: The formation of social behavior. Cambridge: Harvard University Press.

Widmer, C. & Parker, J. (1985). A study of characteristics of student programmers. Educational Technology, 25, 47-50.

Wilder, G., Mackie, D. & Cooper, J. (1985). Gender and computers: Computer-related attitudes. Sex Roles, 13 (3/4), 215-228.

Wilkes, M. V. (1991). Could it have been predicted? Communications of the ACM, 34 (10), 19-21.

Winkle, L. & Mathews, W. M. (1982). Computer equity comes of age. Phi Delta Kappan, 63 (5), 314-315.

Winograd, T. & Flores, F. (1987). Understanding computers and cognition: A new foundation for design. Norwood: Ablex Publishing Corporation.

Wise, L., Steel, L. & MacDonald, C. (1979). Origins and career consequences of sex differences in high school mathematics. American Institutes for Research. Palo Alto, California.

Woodrow, J. E. (1991a). A comparison of four computer attitude scales. Journal of Educational Computing Research, 7 (2), 165-187.

Woodrow, J. E. (1991b). Teachers' perceptions of computer needs. Journal of Research on Computing in Education, 23 (4), 475-495.

Woodrow, J. E. (1992). The influence of programming training on the computer literacy and attitudes of preservice teachers. Journal of Research on Computing in Education, 25 (2), 200-219.

Woodrow, J. E. (1994). The development of computer-related attitudes of secondary students. Journal of Educational Computing Research, 11 (4), 307-338.

Wright, D. (1984). Instructional uses of computers in public schools. (Report No. FRSS-12. NCES-84-201). Rockville, MD: Westat Research, Inc. (ERIC Document No. ED 244 618).

Voogt, J. (1987). Computer literacy in secondary education: The performance and engagement of girls. Computers in Education, 11 (4), 305-312.

Zapata (1995). Dominican students' intentions to pursue a mathematics-related career: An exploratory study of gender and affective issues. Unpublished master's thesis, University of British Columbia, Vancouver.

Zerega, M. E., Haertel, G. D. & Tsai, S. L. (1986). Late adolescent sex differences in science learning. Science Education, 70 (4), 447-460. In J. Gaskell, A. McLaren, A. Oberg, & L. Eyre. (Eds.). (1990). The 1990 British Columbia mathematics assessment: Gender issues in student choices in mathematics and science, (pp. 243). Victoria: Queen's Printer.

APPENDIX A

Univariate Tests

Item-Level Analysis

The factorial MANOVA represents a powerful approach to handling both covariation and variation between and among the independent and dependent variables, and it can detect joint effects. Although the MANOVA was the proper research methodology and the most appropriate tool for the given research question, it nevertheless has some inherent weaknesses. One weakness stems from the requirement to satisfy the underlying assumptions on which the MANOVA is based. Another is that the findings apply only at the level of analysis of the dependent variables.

Ideally, the researcher would like to make claims at the item level, but because the comparisons were made with subscales, such claims cannot be drawn from the subscale results: although subscales are linear composites of the items within them, they are not the items themselves. Recognizing this limitation, the researcher employed a combination of univariate techniques, using both parametric and non-parametric statistics, to augment the study with item-level findings.

Non-Parametric Chi-Square Analyses

To allow for non-parametric analysis at the software item level, a Chi-Square Goodness of Fit test was used for all 27 software items. The analysis tests whether the proportions of the different competency groups differ significantly from chance. To do this, it is necessary to collapse the frequencies of the six Likert response categories into proportions and test whether those proportions differ significantly from chance.

Initially, the researcher tried to dichotomize the responses into two categories [1 = Don't know and (2,3,4,5,6) = Know]. This dichotomizing procedure failed to produce significant gender differences in competency at the software item level. As there were clear differences at the mean raw-score level, the researcher realized that the failure to capture the significant differences must arise from the way in which the proportions were partitioned. The researcher reasoned that the dichotomizing procedure was unable to detect the gender differences in computing because a great deal of detail (variance) had been lost in terms of degree of competence between categories (2,3,4,5,6). These categories had to be further partitioned to recapture this lost variance.

The researcher decided to collapse the six Likert categories into three categories instead. Specifically, the score of "1" on the Likert scale (which stood for "don't know") was used untransformed to represent the new non-parametric category of "Low". The scores of "2", "3" and "4" were collapsed to represent "Medium", while scores of "5" and "6" were collapsed to represent "High". The new partitioning resulted in three categories [1 = Low, (2,3,4) = Medium and (5,6) = High]. As shown in Table A.1, the trichotomizing procedure succeeded in isolating the difference in proportion of variance between the three categories. In retrospect, this finding confirmed that the dichotomizing procedure had failed to find significance because it had combined the last two of these categories, which represented two significantly and quantifiably different levels of competence. The next section details the statistical procedures employed for the analysis and the findings.

After recoding the data into the three broader categories of software competency using the SPSS RECODE procedure, the SPSS CROSSTABS procedure was used to determine whether the transformed proportions of the categories differed significantly from chance. The SPSS CROSSTABS procedure produces Chi-square estimates using the Pearson, Likelihood Ratio and Mantel-Haenszel methods; the Pearson Chi-square value is the most commonly used estimate.

For a conservative measure, the p-level was set at 0.01. Using this value, the researcher found most of the software items to be significantly different at the 0.01 level. Exceptions were Finder (Mac), χ²(2, n=686) = 6.64, p = 0.04; Accounting, χ²(2, n=692) = 5.57, p = 0.06; Database, χ²(2, n=704) = 0.18, p = 0.92; and Statistics, χ²(2, n=672) = 0.12, p = 0.94, for which both genders were equally represented.

Table A.1
Chi-Square Analysis

Software Item          n    Chi-Square   df        p
Assembler            702      23.27**     2   0.00001
BASIC                716      61.98**     2   0.00000
C                    697      26.55**     2   0.00000
C++                  697      21.57**     2   0.00002
Fortran              699      12.52**     2   0.00191
HyperCard            693      15.15**     2   0.00051
Logo                 696      23.32**     2   0.00001
Pascal               720      21.63**     2   0.00002
Scheme               697       8.80*      2   0.01225

Apple ProDOS         686      10.99*      2   0.00411
Finder (Mac)         686       6.64       2   0.03609
DOS                  721      76.27**     2   0.00000
OS/2                 691      25.30**     2   0.00000
UNIX                 693      15.87**     2   0.00036
Windows              718      24.48**     2   0.00000
Windows NT           677       9.21*      2   0.01000

Accounting           692       5.57       2   0.06166
Communications       690      39.39**     2   0.00000
Database             704       0.18       2   0.91620
Desktop Publishing   673      10.30**     2   0.00580
Games                696     115.92**     2   0.00000
Graphics             690      15.32**     2   0.00047
Music                675       9.02*      2   0.01101
Spreadsheet          704       8.04*      2   0.01798
Statistics           672       0.12       2   0.94289
Utilities            676      93.87**     2   0.00000
Word Processing      713      40.44**     2   0.00000

Notes:
1. The n fluctuates because SPSS deletes subjects using the listwise comparison method. For example, n = 702 for Assembler means that 38 subjects were missing for comparisons with this item. Overall, few subjects were missing for any item.
2. * p < 0.05; ** p < 0.01.
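The recode-and-crosstabulate procedure described above (SPSS RECODE followed by CROSSTABS with a Pearson Chi-square) can be sketched in modern terms with pandas and SciPy. The data below are simulated for illustration only, not the thesis data, and the column names are hypothetical.

```python
import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency

# Simulated six-point Likert responses (1 = don't know ... 6 = expert)
# for one software item, with a gender column, standing in for the data
# the thesis processed with SPSS RECODE/CROSSTABS.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "gender": rng.choice(["F", "M"], size=700),
    "score": rng.integers(1, 7, size=700),
})

# Collapse the six Likert categories into three competency levels,
# as in the thesis: 1 -> Low, (2,3,4) -> Medium, (5,6) -> High.
df["level"] = pd.cut(df["score"], bins=[0, 1, 4, 6],
                     labels=["Low", "Medium", "High"])

# Cross-tabulate gender by competency level and run a Pearson
# chi-square test of association on the resulting 2 x 3 table.
table = pd.crosstab(df["gender"], df["level"])
chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.4f}")
```

With three competency columns and two gender rows the test has (2-1)(3-1) = 2 degrees of freedom, matching the df column of Table A.1.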

Parametric t-test Analyses

In a Chi-Square analysis, parametric data are dummy coded and reduced to categorical data. As such, the way the data are partitioned becomes an area of contention. For example, as the findings indicated, trichotomizing was a better way than dichotomizing to partition the proportions in order to capture item-level significance with the Chi-Square Test of Association. Parametric tests like t-tests (and ANOVAs), however, use untransformed data and are more powerful than non-parametric tests. The researcher employed the ANOVA (the univariate equivalent of the MANOVA) to include comparisons of those software items which were not sufficiently robust to enter the factor analysis, and to triangulate the findings with the multivariate analysis.

The procedure involved multiple t-tests by gender for all the variables. Although t-tests are robust, they carry underlying assumptions similar to those of the multivariate analysis; these assumptions are less demanding for t-tests and had already been verified in the multivariate analysis. The main problem with multiple t-tests is an inflated alpha level, or experimentwise alpha error, from performing many repeated tests. To guard against spurious results from inflated alpha levels, the researcher used the Bonferroni adjustment. This conservative adjustment sets the p-level for interpreting significance at less than alpha divided by the number of t-tests, or 0.05/27 ≈ 0.002. Table A.2 shows the parametric t-test results.
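The multiple-t-test procedure with the Bonferroni adjustment can be sketched as follows. The scores are simulated and the group sizes merely resemble those in Table A.2; this is an illustration of the correction, not a reanalysis of the thesis data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_items = 27
alpha = 0.05 / n_items          # Bonferroni-adjusted level, about 0.002

significant = []
for item in range(n_items):
    # Hypothetical item scores for each gender group (simulated).
    females = rng.normal(2.0, 1.5, size=210)
    males = rng.normal(2.5, 1.7, size=480)
    # Independent-samples t-test; Welch's version does not assume
    # equal variances in the two groups.
    t, p = stats.ttest_ind(females, males, equal_var=False)
    if p < alpha:
        significant.append(item)

print(f"adjusted alpha = {alpha:.4f}; "
      f"{len(significant)} of {n_items} items significant")
```

Dividing the familywise alpha by the number of tests guarantees that the probability of at least one spurious rejection across all 27 comparisons stays below 0.05, at the cost of reduced power for each individual test.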

Comparison Between Parametric and Non-Parametric Tests

How did the two sets of results compare? If everything went well, they should have been reasonably close, and this was indeed the case. Applying the strict Bonferroni alpha levels for significance, the gender means differed significantly for virtually every software item except Finder (Mac) (p < 0.02), Accounting (p < 0.23), Database (p < 0.23) and Statistics (p < 0.53). Not only are these the same items as those found non-significant in the non-parametric Chi-Square Tests of Association, they also rank in the same order of magnitude of p-levels: the highest p-level for non-significance was for Statistics (p < 0.53), followed by Database (p < 0.23), Accounting (p < 0.23) and Finder (Mac) (p < 0.02). Compare this with the non-parametric findings for non-significance: Statistics, χ²(2, n=672) = 0.12, p = 0.94; followed by Database, χ²(2, n=704) = 0.18, p = 0.92; Accounting, χ²(2, n=692) = 5.57, p = 0.06; and Finder (Mac), χ²(2, n=686) = 6.64, p = 0.04.

Table A.2
Parametric t-tests by Gender

                          Females                 Males
Software             Mean    SD     n      Mean    SD     n        F        p
Assembler            1.06   0.37   213     1.34   0.97   489    17.57   0.0000
BASIC                2.14   1.50   212     3.22   1.78   504    59.31   0.0000
C                    1.08   0.50   213     1.50   1.25   484    22.42   0.0000
C++                  1.06   0.44   214     1.32   0.97   483    14.07   0.0002
Fortran              2.42   1.73   214     2.91   1.84   485    11.09   0.0009
HyperCard            1.18   0.70   214     1.53   1.30   479    13.87   0.0002
LOGO                 1.27   0.79   212     1.77   1.36   484    24.63   0.0000
Pascal               2.31   1.74   219     4.00   1.76   501    23.8    0.0000
Scheme               2.02   1.59   213     2.41   1.82   484     6.99   0.0084

Apple ProDOS         1.14   0.54   210     1.41   1.01   476    12.97   0.0003
Finder (Mac)         1.66   1.29   209     1.95   1.63   477     5.20   0.0229
MSDOS                2.86   1.40   214     4.09   1.46   507   108.25   0.0000
OS/2                 1.11   0.66   209     1.32   0.86   482     9.39   0.0023
UNIX                 1.33   0.80   209     1.66   1.13   484    13.92   0.0002
Windows              3.24   1.49   217     3.96   1.50   501    34.44   0.0000
Windows NT           1.14   0.62   207     1.29   0.86   470     5.35   0.0211

Accounting           1.43   0.93   213     1.53   1.15   479     1.44   0.2309
Communications       1.47   0.99   211     2.41   1.79   479    51.05   0.0000
Database             2.28   1.42   211     2.43   1.50   493     1.46   0.2279
Desktop Publishing   1.62   1.11   216     2.12   1.52   467    18.07   0.0000
Games                2.81   1.69   211     4.40   1.80   485   118.99   0.0000
Graphics             2.62   1.56   210     3.24   1.70   480    20.35   0.0000
Music                1.32   0.83   209     1.66   1.26   466    12.95   0.0003
Spreadsheet          2.84   1.48   215     3.29   1.57   489    12.54   0.0004
Statistics           1.38   0.95   210     1.44   1.06   462     0.40   0.5276
Utilities            1.36   0.97   207     2.79   1.88   469   107.11   0.0000
Word Processing      3.27   1.87   216     4.27   1.68   497    49.94   0.0000
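A note on reading Table A.2: for a two-group comparison, the one-way ANOVA F statistic equals the square of the independent-samples t statistic, so the t values underlying the table can be recovered directly from the F column. A minimal check, using the Assembler row as an example:

```python
import math

# For two groups, F = t^2, so |t| = sqrt(F).
f_assembler = 17.57            # F value for Assembler, from Table A.2
t_assembler = math.sqrt(f_assembler)
print(f"|t| = {t_assembler:.2f}")   # about 4.19
```

This is why a table of F statistics and a table of t statistics for the same two-group comparisons yield identical p-values.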

This parametric equivalent of the MANOVA matches and validates the trichotomizing strategy employed for partitioning the proportions in the Chi-Square comparisons. This means that, at the item level, we can state with some confidence that, with the exception of Statistics, Database, Accounting and Finder (Mac), the gender difference in means is significant for virtually every software item used in current personal computing.

The parametric and non-parametric analyses at the software level have shown that even the software items not robust enough to enter the powerful parametric MANOVA analysis (e.g. LOGO, Windows NT and Statistics) followed a similar pattern of gender discrepancy. This also points out a drawback of employing a powerful analysis like the MANOVA, which demands that many stringent conditions be met: the trade-off for a powerful analysis can be non-significance or insufficient robustness for multivariate analysis.

Comparison Between Univariate and Multivariate Results

The three subscales provided information at the subscale level. The non-parametric Chi-Square analyses and the parametric t-tests provided additional information at the item level within each subscale. The Chi-Square analysis was not sensitive to problems with normality and linearity, and was therefore an ideal tool for comparing software items which could not be analyzed under the stringent MANOVA criteria. The parametric t-tests had potential problems with inflated experimentwise alpha levels, but with conservative alpha levels they overcame the hurdles which stopped the MANOVA, and they verified the non-parametric findings.

At first glance, the item-level findings appeared to contradict the subscale-level findings. The item-level analyses captured all 27 items, whereas the subscale-level analysis captured only 22 software items; furthermore, Statistics, which was captured in the item-level analysis, was not part of any of the subscales. Upon closer inspection, however, the findings at the item level generally support the subscale findings. When the subscales were split into items, most items were significant (including those not captured by the subscales), but four items "resisted significance".

In both the parametric and non-parametric analyses, the four software items which resisted significance were Accounting, Database, Finder (Mac) and Statistics. These results concur with the findings at the subscale level (MANOVA): three of the four non-significant gender differences in software competency means were found within the Applications subscale, the only subscale which was not significant by gender in the MANOVA. There was no contradiction: findings at both levels indicate that females were doing comparatively better in the Applications subscale. The difference is that at the item level, virtually every item mean was significantly higher in favor of males. The item-level analyses allowed the researcher to examine the five software items which were not robust enough for the factor analysis (LOGO, Apple ProDOS, OS/2, Windows NT, Statistics), and to see which of the items captured within the subscales were contributing to the non-significance of a whole subscale.

The results of the item-level analyses lead the researcher to the conjecture that these four items could tip the whole Applications subscale toward non-significance. The univariate findings suggested that it was perhaps too simplistic to say that females were gaining within the Applications competency subscale; they were gaining only in specific areas within that subscale. This finding supported Koohang's (1986) claim that significance has to be specific.

The comparison of findings at the subscale level and the item level also illustrated the benefit of reducing a large set of variables to an underlying construct. All four non-significant software items were found within the Applications competency construct. If the item-level tests had not been performed, the significant difference would have been observed only at the subscale level; although the results would not have been as specific, the researcher would still have been looking at the right construct (Applications competency) under which these items were to be found. This finding also illustrates that even with a multivariate design, it may be prudent to run both univariate (t-test and Chi-Square) and multivariate analyses for triangulation purposes.

APPENDIX B

List of 23 Software Items

Table B.1
List of 23 Original Software Items (Initial Item List Before the Pilot)

How well you can program in Assembler?
How well you can program in BASIC?
How well you can program in C?
How well you can program in C++?
How well you can program in HyperCard?
How well you can program in LOGO?
How well you can program in Pascal?
How well you can program in Scheme?

How well you can control Apple ProDOS?
How well you can control Finder (Mac)?
How well you can control MSDOS?
How well you can control UNIX?
How well you can control Windows?

How well you can use Accounting?
How well you can use Communications?
How well you can use Database?
How well you can use Desktop Publishing?
How well you can use Games?
How well you can use Graphics?
How well you can use Music?
How well you can use Spreadsheet?
How well you can use Statistics?
How well you can use Word Processing?

APPENDIX C

Software Competency Scale (SCS) 123

Explaining Programming Success

Part 2. Languages previously learned Please provide information on how well you can program in the following computer languages.

(lightly capable of quile familiar Programming in p—, don't know this •School

[Part 2: Programming Languages. The original checkbox layout could not be recovered from this copy. For each of ten languages — Assembler, BASIC, C, C++ (or other object-oriented language; specify), Fortran, HyperCard, LOGO, Pascal, Scheme, and Other (specify) — respondents rated their competency on a six-point scale (don't know this language; slightly familiar with this language; capable of reading this language; able to write simple programs; quite familiar with this language; can write complex programs) and recorded the age at which the language was learned and where it was learned (School, University, Home, Work, or Other).]

Explaining Programming Success

Part 3: Operating Systems

Please provide information on how well you can control the following operating systems.

[The original checkbox layout could not be recovered from this copy. For each of eight operating systems — Apple ProDOS, Finder (Mac), MS DOS, OS/2, UNIX, Windows, Windows NT, and Other (specify) — respondents rated their competency on a six-point scale (don't know this operating system; slightly familiar with this system; beginning user of this system; functional user of this system; quite familiar with this system; master of this system) and recorded the machine used, the age at which the system was learned, and where it was learned (School, University, Home, Work, or Other).]

Explaining Programming Success

Part 4: Applications

Please provide information on how well you can use the following application programs.

[The original checkbox layout could not be recovered from this copy. For each of twelve application categories — Accounting (e.g., Bedford), Communications (e.g., Kermit), Database (e.g., Dbase IV), Desktop Publishing (e.g., PageMaker), Games (e.g., Flight Simulator), Graphics (e.g., SuperPaint), Music (e.g., Music Workshop), Spreadsheet (e.g., Excel), Statistics (e.g., Systat), Utilities (e.g., Norton's), Word Processing (e.g., MS Word), and Other (specify) — respondents rated their competency on a six-point scale (don't know this application; slightly familiar with this application; beginning user of this application; functional user of this application; quite familiar with this application; master of this application) and recorded the age at which the application was learned and where it was learned (School, University, Home, Work, or Other).]

APPENDIX D

Data Coding Protocol

Table D.1 Variable Description and Format

Code Description Coding Format

SUBJNO Case identification number 4-digit number

STUDNO Student number 10-digit number

GENDER Gender of Respondent 1=Female 2=Male

AGE Age of Respondent (in years) 2-digit number

P1 Assembler 1=Don't know this language 2=Slightly familiar with this language 3=Capable of reading this language 4=Able to write simple programs 5=Quite familiar with this language 6=Can write complex program

P2 BASIC 1=Don't know this language 2=Slightly familiar with this language 3=Capable of reading this language 4=Able to write simple programs 5=Quite familiar with this language 6=Can write complex program

P3 C 1=Don't know this language 2=Slightly familiar with this language 3=Capable of reading this language 4=Able to write simple programs 5=Quite familiar with this language 6=Can write complex program

P4 C++ 1=Don't know this language 2=Slightly familiar with this language 3=Capable of reading this language 4=Able to write simple programs 5=Quite familiar with this language 6=Can write complex program


P5 Fortran 1=Don't know this language 2=Slightly familiar with this language 3=Capable of reading this language 4=Able to write simple programs 5=Quite familiar with this language 6=Can write complex program

P6 HyperCard 1=Don't know this language 2=Slightly familiar with this language 3=Capable of reading this language 4=Able to write simple programs 5=Quite familiar with this language 6=Can write complex program

P7 Logo 1=Don't know this language 2=Slightly familiar with this language 3=Capable of reading this language 4=Able to write simple programs 5=Quite familiar with this language 6=Can write complex program

P8 Pascal 1=Don't know this language 2=Slightly familiar with this language 3=Capable of reading this language 4=Able to write simple programs 5=Quite familiar with this language 6=Can write complex program

P9 Scheme 1=Don't know this language 2=Slightly familiar with this language 3=Capable of reading this language 4=Able to write simple programs 5=Quite familiar with this language 6=Can write complex program


O1 Apple ProDOS 1=Don't know this operating system 2=Slightly familiar with this system 3=Beginning user of this system 4=Functional user of this system 5=Quite familiar with this system 6=Master of this system

O2 Finder (Mac) 1=Don't know this operating system 2=Slightly familiar with this system 3=Beginning user of this system 4=Functional user of this system 5=Quite familiar with this system 6=Master of this system

O3 MSDOS 1=Don't know this operating system 2=Slightly familiar with this system 3=Beginning user of this system 4=Functional user of this system 5=Quite familiar with this system 6=Master of this system

O4 OS/2 1=Don't know this operating system 2=Slightly familiar with this system 3=Beginning user of this system 4=Functional user of this system 5=Quite familiar with this system 6=Master of this system

O5 Unix 1=Don't know this operating system 2=Slightly familiar with this system 3=Beginning user of this system 4=Functional user of this system 5=Quite familiar with this system 6=Master of this system


O6 Windows 1=Don't know this operating system 2=Slightly familiar with this system 3=Beginning user of this system 4=Functional user of this system 5=Quite familiar with this system 6=Master of this system

O7 Windows NT 1=Don't know this operating system 2=Slightly familiar with this system 3=Beginning user of this system 4=Functional user of this system 5=Quite familiar with this system 6=Master of this system

A1 Accounting 1=Don't know this application 2=Slightly familiar with application 3=Beginning user of application 4=Functional user of application 5=Quite familiar with application 6=Master of this application

A2 Communications 1=Don't know this application 2=Slightly familiar with application 3=Beginning user of application 4=Functional user of application 5=Quite familiar with application 6=Master of this application

A3 Database 1=Don't know this application 2=Slightly familiar with application 3=Beginning user of application 4=Functional user of application 5=Quite familiar with application 6=Master of this application


A4 Desktop Publishing 1=Don't know this application 2=Slightly familiar with application 3=Beginning user of application 4=Functional user of application 5=Quite familiar with application 6=Master of this application

A5 Games 1=Don't know this application 2=Slightly familiar with application 3=Beginning user of application 4=Functional user of application 5=Quite familiar with application 6=Master of this application

A6 Graphics 1=Don't know this application 2=Slightly familiar with application 3=Beginning user of application 4=Functional user of application 5=Quite familiar with application 6=Master of this application

A7 Music 1=Don't know this application 2=Slightly familiar with application 3=Beginning user of application 4=Functional user of application 5=Quite familiar with application 6=Master of this application

A8 Spreadsheet 1=Don't know this application 2=Slightly familiar with application 3=Beginning user of application 4=Functional user of application 5=Quite familiar with application 6=Master of this application


A9 Statistics 1=Don't know this application 2=Slightly familiar with application 3=Beginning user of application 4=Functional user of application 5=Quite familiar with application 6=Master of this application

A10 Utilities 1=Don't know this application 2=Slightly familiar with application 3=Beginning user of application 4=Functional user of application 5=Quite familiar with application 6=Master of this application

A11 Wordprocessing 1=Don't know this application 2=Slightly familiar with application 3=Beginning user of application 4=Functional user of application 5=Quite familiar with application 6=Master of this application

P1age When Assembler learned 2-digit number
P2age When BASIC learned 2-digit number
P3age When C learned 2-digit number
P4age When C++ learned 2-digit number
P5age When Fortran learned 2-digit number
P6age When HyperCard learned 2-digit number
P7age When Logo learned 2-digit number
P8age When Pascal learned 2-digit number
P9age When Scheme learned 2-digit number
O1age When Apple ProDOS learned 2-digit number
O2age When Finder (Mac) learned 2-digit number
O3age When MSDOS learned 2-digit number


O4age When OS/2 learned 2-digit number
O5age When Unix learned 2-digit number
O6age When Windows learned 2-digit number
O7age When Windows NT learned 2-digit number
A1age When Accounting learned 2-digit number
A2age When Communications learned 2-digit number
A3age When Database learned 2-digit number
A4age When Desktop Publishing learned 2-digit number
A5age When Games learned 2-digit number
A6age When Graphics learned 2-digit number
A7age When Music learned 2-digit number
A8age When Spreadsheet learned 2-digit number
A9age When Statistics learned 2-digit number
A10age When Utilities learned 2-digit number
A11age When Word Processing learned 2-digit number
P1wh Where Assembler learned 1=School, 2=University, 3=Home, 4=Work, 5=Other
P2wh Where BASIC learned 1=School, 2=University, 3=Home, 4=Work, 5=Other
P3wh Where C learned 1=School, 2=University, 3=Home, 4=Work, 5=Other
P4wh Where C++ learned 1=School, 2=University, 3=Home, 4=Work, 5=Other
P5wh Where Fortran learned 1=School, 2=University, 3=Home, 4=Work, 5=Other
P6wh Where HyperCard learned 1=School, 2=University, 3=Home, 4=Work, 5=Other
P7wh Where Logo learned 1=School, 2=University, 3=Home, 4=Work, 5=Other
P8wh Where Pascal learned 1=School, 2=University, 3=Home, 4=Work, 5=Other
P9wh Where Scheme learned 1=School, 2=University, 3=Home, 4=Work, 5=Other


O1wh Where Apple ProDOS learned 1=School, 2=University, 3=Home, 4=Work, 5=Other
O2wh Where Finder (Mac) learned 1=School, 2=University, 3=Home, 4=Work, 5=Other
O3wh Where MSDOS learned 1=School, 2=University, 3=Home, 4=Work, 5=Other
O4wh Where OS/2 learned 1=School, 2=University, 3=Home, 4=Work, 5=Other
O5wh Where Unix learned 1=School, 2=University, 3=Home, 4=Work, 5=Other
O6wh Where Windows learned 1=School, 2=University, 3=Home, 4=Work, 5=Other
O7wh Where Windows NT learned 1=School, 2=University, 3=Home, 4=Work, 5=Other
A1wh Where Accounting learned 1=School, 2=University, 3=Home, 4=Work, 5=Other
A2wh Where Communications learned 1=School, 2=University, 3=Home, 4=Work, 5=Other
A3wh Where Database learned 1=School, 2=University, 3=Home, 4=Work, 5=Other
A4wh Where Desktop Publishing learned 1=School, 2=University, 3=Home, 4=Work, 5=Other
A5wh Where Games learned 1=School, 2=University, 3=Home, 4=Work, 5=Other
A6wh Where Graphics learned 1=School, 2=University, 3=Home, 4=Work, 5=Other
A7wh Where Music learned 1=School, 2=University, 3=Home, 4=Work, 5=Other
A8wh Where Spreadsheet learned 1=School, 2=University, 3=Home, 4=Work, 5=Other
A9wh Where Statistics learned 1=School, 2=University, 3=Home, 4=Work, 5=Other
A10wh Where Utilities learned 1=School, 2=University, 3=Home, 4=Work, 5=Other


A11wh Where Word Processing learned 1=School, 2=University, 3=Home, 4=Work, 5=Other
COURSE Course 2-digit number
FACULTY Faculty 2-digit number
YEAR Year in university 1-digit number
FIRSTAGE Age of First Computer Experience 1=K-7, 2=Post 7
LEARNSET Learning Setting 1=School, 2=Higher Education, 3=Home
ASCORE Applications Subscale Competency Means Raw Mean Scores
HSCORE High-Level Language Competency Means Raw Mean Scores
LSCORE Low-Level Language Competency Means Raw Mean Scores
ALLSCL Total Scaled Score Means Raw Mean Scores

APPENDIX E

Additional Descriptive Statistics and Raw Data

Table E.1 Scores Across Age of First Computer Experience By Gender

Software             K-7^a          Post 7^b       Totals
                     F^c    M^d     F      M       F      M       Mean   SD     n

Assembler            1.05   1.43    1.06   1.27    1.06   1.35    1.27   0.86   675
BASIC                2.59   3.69    2.01   2.79    2.17   3.26    2.94   1.78   689
C                    1.02   1.74    1.12   1.26    1.09   1.51    1.39   1.11   670
C++                  1.00   1.48    1.08   1.15    1.06   1.32    1.24   0.87   670
Fortran              3.04   3.17    2.29   2.62    2.50   2.91    2.79   1.83   672
HyperCard            1.36   1.81    1.12   1.25    1.19   1.54    1.44   1.18   666
LOGO                 1.75   2.28    1.08   1.25    1.26   1.79    1.64   1.26   669
Pascal               3.82   4.58    3.20   3.41    3.37   4.02    3.83   1.77   693
Scheme               2.15   2.58    1.89   2.27    1.96   2.43    2.29   1.77   670

Apple ProDOS         1.28   1.55    1.09   1.25    1.15   1.41    1.33   0.91   659
Finder (Mac)         1.93   2.27    1.59   1.63    1.68   1.97    1.88   1.56   659
MSDOS                3.12   4.52    2.86   3.66    2.94   4.11    3.77   1.53   694
OS/2                 1.00   1.41    1.16   1.23    1.12   1.32    1.26   0.82   665
Unix                 1.35   1.89    1.35   1.43    1.35   1.66    1.57   1.06   667
Windows              3.53   4.31    3.20   3.61    3.29   3.98    3.78   1.52   691
Windows NT           1.09   1.35    1.15   1.23    1.13   1.29    1.25   0.81   650

Accounting           1.30   1.70    1.45   1.37    1.41   1.54    1.50   1.09   666
Communications       1.72   2.99    1.41   1.82    1.49   2.44    2.16   1.67   664
Database             2.41   2.63    2.28   2.20    2.32   2.43    2.39   1.48   677
Desktop Publishing   2.28   2.39    1.42   1.86    1.66   2.14    1.99   1.44   646
Games                3.82   5.23    2.44   3.54    2.84   4.43    3.96   1.91   670
Graphics             2.93   3.65    2.57   2.83    2.66   3.25    3.08   1.69   664
Music                1.61   1.97    1.23   1.34    1.34   1.67    1.57   1.17   648
Spreadsheet          3.02   3.43    2.82   3.13    2.88   3.29    3.17   1.56   678
Statistics           1.38   1.37    1.37   1.49    1.37   1.42    1.41   1.02   647
Utilities            1.56   3.31    1.29   2.21    1.36   2.79    2.36   1.79   650
Word Processing      4.21   4.76    3.00   3.78    3.34   4.29    4.01   1.78   687

Note: ^a K-7 = Grade 7 or earlier. ^b Post 7 = After Grade 7. ^c F = Females. ^d M = Males.

Table E.2 Frequency Across Age of First Computer Experience By Gender

Software             K-7^a          Post 7^b       Totals
                     F^c    M^d     F      M       F      M       n

Assembler            55     248     145    227     200    475     675
BASIC                54     255     145    235     199    490     689
C                    55     247     145    223     200    470     670
C++                  55     247     146    222     201    469     670
Fortran              55     247     146    224     201    471     672
HyperCard            55     245     146    220     201    465     666
LOGO                 53     248     146    222     199    470     669
Pascal               57     253     149    234     206    487     693
Scheme               54     245     146    225     200    470     670

Apple ProDOS         57     244     140    218     197    462     659
Finder (Mac)         56     244     140    219     196    463     659
MSDOS                57     256     144    237     201    493     694
OS/2                 54     245     142    224     196    469     665
Unix                 55     244     141    227     196    471     667
Windows              57     256     147    231     204    487     691
Windows NT           54     240     140    216     194    456     650

Accounting           56     245     144    221     200    466     666
Communications       54     246     144    220     198    466     664
Database             54     252     144    227     198    479     677
Desktop Publishing   54     236     139    217     193    453     646
Games                57     249     141    223     198    472     670
Graphics             54     242     143    225     197    467     664
Music                54     236     142    216     196    452     648
Spreadsheet          56     246     146    230     202    476     678
Statistics           56     232     141    218     197    450     647
Utilities            54     238     140    218     194    456     650
Word Processing      57     250     146    234     203    484     687

Note: ^a K-7 = Grade 7 or earlier. ^b Post 7 = After Grade 7. ^c F = Females. ^d M = Males.

Table E.3 Mean Ratings By Gender Across Learning Setting

Software             School         Higher Ed.^a   Home           Totals
                     F^b    M^c     F      M       F      M       Mean   SD     n

Assembler            1.12   1.22    1.08   1.10    1.02   1.45    1.27   0.86   612
BASIC                2.53   3.21    1.73   2.15    2.36   3.58    2.99   1.77   625
C                    1.07   1.39    1.13   1.16    1.08   1.68    1.41   1.14   608
C++                  1.05   1.23    1.00   1.04    1.07   1.43    1.25   0.86   607
Fortran              2.46   3.11    2.44   3.17    2.60   2.81    2.81   1.83   610
HyperCard            1.28   1.89    1.06   1.11    1.25   1.55    1.45   1.21   603
LOGO                 1.40   1.78    1.00   1.23    1.32   1.91    1.62   1.24   606
Pascal               3.84   4.42    2.92   3.40    3.57   4.08    3.89   1.75   629
Scheme               2.18   2.13    1.51   1.85    2.05   2.66    2.27   1.77   607

Apple ProDOS         1.15   1.56    1.14   1.16    1.14   1.45    1.35   0.94   596
Finder (Mac)         1.60   2.05    1.71   1.50    1.97   2.10    1.93   1.59   596
MSDOS                3.00   3.93    2.52   2.97    3.18   4.51    3.80   1.55   629
OS/2                 1.20   1.30    1.08   1.07    1.04   1.41    1.27   0.83   602
UNIX                 1.18   1.52    1.31   1.50    1.73   1.81    1.61   1.10   605
Windows              3.21   3.72    3.08   3.40    3.60   4.25    3.80   1.52   627
Windows NT           1.08   1.37    1.10   1.12    1.14   1.33    1.25   0.82   590

Accounting           1.89   1.73    1.10   1.12    1.33   1.60    1.53   1.13   603
Communications       1.37   2.10    1.37   1.35    1.84   2.86    2.20   1.69   602
Database             2.60   2.85    2.28   2.06    2.07   2.39    2.42   1.49   614
Desktop Publishing   1.75   2.08    1.44   1.60    1.89   2.36    2.04   1.46   587
Games                2.84   4.21    2.08   3.12    3.52   4.87    4.00   1.89   607
Graphics             2.53   3.27    2.55   2.58    2.93   3.50    3.13   1.69   601
Music                1.32   1.69    1.14   1.06    1.57   1.85    1.60   1.18   586
Spreadsheet          2.98   3.47    2.82   2.97    2.98   3.39    3.23   1.53   614
Statistics           1.33   1.27    1.41   1.68    1.45   1.47    1.44   1.06   582
Utilities            1.30   2.40    1.14   1.69    1.70   3.32    2.44   1.81   586
Word Processing      3.33   4.07    2.76   3.38    4.24   4.70    4.10   1.72   623

Note: ^a High Ed. = Higher Education. ^b F = Female. ^c M = Male.

Table E.4 Percentage by Gender Across Learning Settings

Columns: School (Female, Male), Higher Education (Female, Male), Home (Female, Male). [Blank cells in the original were lost in this copy; rows below listing fewer than six values contained blanks whose positions could not be recovered.]

Assembler 1.8 3.3 0.5 1.2 0.5 8.6
BASIC 30.6 33.1 1.4 2.0 6.4 24.7
C 0.5 2.5 0.9 1.4 1.8 9.6
C++ 0.5 1.8 0.5 1.0 0.5 6.1
Fortran 0.5 4.3 39.3 43.4 2.5
HyperCard 5.0 11.7 1.4 2.2
LOGO 8.7 20.7 0.4 0.9 3.1
Pascal 32.0 39.9 36.1 27.0 0.9 4.3
Scheme 1.4 2.0 23.7 31.7 0.2

Apple ProDOS 2.7 4.1 0.9 0.8 2.7 8.0
Finder (Mac) 8.7 11.2 8.2 3.9 4.1 5.5
MSDOS 11.4 8.8 7.8 3.9 37.4 53.6
OS/2 2.3 1.0 0.4 0.5 9.4
UNIX 1.8 2.5 10.0 12.9 1.4 4.9
Windows 13.2 6.3 11.4 5.1 38.4 51.1
Windows NT 0.5 0.2 0.5 1.0 2.3 6.8

Accounting 13.2 7.8 3.2 7.2
Communications 2.7 1.2 5.9 2.5 8.2 28.8
Database 20.1 18.6 14.2 5.7 6.8 14.1
Desktop Publishing 8.2 7.0 6.4 2.2 8.7 18.8
Games 2.7 1.8 0.9 0.6 41.6 54.2
Graphics 12.8 10.4 12.3 4.9 19.2 30.3
Music 5.0 4.3 0.5 5.5 12.7
Spreadsheet 18.3 16.0 19.6 9.6 12.3 25.8
Statistics 1.8 1.0 11.0 7.6 0.5 2.7
Utilities 0.5 1.6 0.5 0.4 9.1 32.1
Word Processing 11.4 7.0 5.9 3.1 29.2 39.7

Table E.5 Frequency Distributions By Gender Across Learning Setting

Columns: School (Female, Male), Higher Ed. (Female, Male), Home (Female, Male), Totals (Female, Male). [Blank cells in the original were lost in this copy; rows below listing fewer than eight values contained blanks whose positions could not be recovered.]

Assembler 4 17 1 6 1 44 6 67
BASIC 67 169 1 10 14 126 84 305
C 1 13 2 7 4 49 37 69
C++ 1 9 1 5 1 31 3 45
Fortran 1 22 86 222 13 87 257
HyperCard 11 60 3 11 14 71
LOGO 19 106 2 2 16 21 124
Pascal 70 204 79 138 2 22 151 364
Scheme 3 10 52 162 1 55 173

Apple ProDOS 6 21 2 4 6 41 14 66
Finder (Mac) 19 57 18 20 9 28 46 105
MSDOS 25 45 17 20 82 274 124 339
OS/2 5 5 2 1 48 6 55
UNIX 4 13 22 66 3 25 29 104
Windows 29 32 25 26 84 261 138 319
Windows NT 1 1 1 5 5 35 7 41

Accounting 29 40 7 37 36 77
Communications 6 6 13 13 18 147 37 166
Database 44 95 31 29 15 72 90 196
Desktop Publishing 18 36 14 11 19 96 51 143
Games 6 9 2 3 91 277 99 289
Graphics 28 53 27 25 42 155 97 233
Music 11 22 1 12 65 24 87
Spreadsheet 40 82 43 49 27 132 110 263
Statistics 4 5 24 39 1 14 29 58
Utilities 1 8 1 2 20 164 22 174
Word Processing 25 36 13 16 64 203 102 255

Table E.6 Comparison of Age of Learning Software by Gender

Software             Females         Males           Group
                     Mean    SD      Mean    SD      Mean    SD

Age                  19.81   2.82    19.88   3.15    19.86   3.06

Age of Learning:
Assembler            18.40   3.44    17.11   3.44    17.17   3.42
BASIC                15.07   2.62    13.91   3.57    14.18   3.42
C                    19.14   1.77    17.56   3.91    17.72   3.78
C++                  22.00   2.83    17.98   2.20    18.16   2.31
Fortran              18.81   2.12    18.91   2.43    18.90   2.39
HyperCard            15.80   0.94    16.15   2.08    16.10   1.94
Logo                 12.35   2.27    12.03   2.48    12.07   2.45
Pascal               18.15   3.26    17.52   2.50    17.71   2.76
Scheme               19.28   1.67    19.71   3.01    19.60   2.74

Apple ProDos         14.07   3.88    14.04   4.30    14.09   4.21
Finder (Mac)         17.71   3.36    16.64   3.71    16.93   3.63
MSDOS                17.08   3.49    16.32   3.47    16.52   3.48
OS/2                 16.67   0.52    18.25   2.66    18.10   2.57
Unix                 19.11   1.60    19.36   2.98    19.31   2.78
Windows              18.28   3.50    17.78   3.12    17.91   3.25
Windows NT           18.86   1.95    18.91   2.62    18.91   2.53

Accounting           17.22   1.64    16.90   3.08    17.02   2.72
Communications       18.29   1.95    17.19   2.47    17.45   2.48
Database             18.04   3.93    17.31   3.64    17.53   3.73
Desktop Publishing   17.96   3.23    17.44   3.62    17.57   3.51
Games                15.32   3.31    13.91   3.80    14.27   3.75
Graphics             17.41   4.30    16.82   3.49    16.98   3.73
Music                16.46   2.77    16.13   3.06    16.20   2.99
Spreadsheet          18.39   3.94    17.58   3.30    17.81   3.51
Statistics           19.28   1.25    19.84   3.16    19.68   2.70
Utilities            18.43   4.30    16.70   2.90    16.93   3.13
Word Processing      17.05   4.35    16.61   3.34    16.74   3.63

APPENDIX F

Factor Analysis

Check for Theoretical Assumptions of Factor Analysis

Before running the factor analysis, the data were inspected and univariate outliers were removed, resulting in a sample of 740 subjects. From this reduced sample, subjects with missing data were further deleted listwise by SPSS for the factor analysis. Only 507 cases were retained in the final analysis, due to the stringent listwise deletion procedure of SPSS FACTOR. (Under listwise deletion, any subject missing a value on one of the variables entering the factor analysis was dropped.)

The remaining sample size of 507 is still considered a "very good" sample size, one which results in reliable correlation coefficients, according to Comrey (1973), cited in Tabachnick and Fidell (1989). The sample size also satisfies the "rule of thumb" of at least five cases for each observed variable. Some missing values were found to be non-random. Substitution as recommended by Tabachnick and Fidell (1989) was attempted using SPSS RECODE. Although the procedure retrieved some of the missing data, it also introduced error. As the number of subjects was sufficient, the substitution scheme was abandoned.
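The listwise-deletion procedure and the five-cases-per-variable rule of thumb can both be illustrated with a toy example. The data frame and column names below are invented for illustration; only the counts 507 and 27 come from the analysis described here:

```python
import numpy as np
import pandas as pd

# Hypothetical miniature of the competency data. Listwise deletion drops any
# respondent who is missing a value on any variable entering the analysis,
# which is what SPSS FACTOR did here (740 cases reduced to 507).
df = pd.DataFrame({
    "p2_basic":     [3, 5, np.nan, 2, 6],
    "o3_msdos":     [4, np.nan, 2, 3, 5],
    "a11_wordproc": [5, 6, 4, np.nan, 6],
})
complete = df.dropna()                 # SPSS-style listwise deletion
print(len(df), "->", len(complete))    # 5 respondents -> 2 complete cases

# The "at least five cases per observed variable" rule for the real data:
cases_per_variable = 507 / 27          # retained cases / variables in the FA
print(round(cases_per_variable, 1))    # about 18.8, comfortably above 5
```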

Tabachnick and Fidell (1989) noted that when principal components analysis and factor analysis are used "descriptively to summarize the relationship in a large set of observed variables, assumptions regarding the distributions of variables are not in force" (p. 603). Spot-checking scatterplots using MINITAB revealed that the relationships among variables were linear. No problems were found with multicollinearity or singularity. The matrix was factorable, as intercorrelations ranged well above the standard of 0.30 (Tabachnick & Fidell, 1989). The Kaiser-Meyer-Olkin measure of sampling adequacy was 0.885, well above the figure given by Tabachnick and Fidell (1989), who cited Kaiser's measure of sampling adequacy (1970, 1974) and concluded that "Values of 0.6 and above are required for a good FA" (p. 604).

Additionally, the researchers cited Gorsuch (1983), who claimed that results of the scree test are more reliable when sample sizes are large, communality values are high, and each factor has several variables with high loadings. The results of the confirmatory factor analysis for the full set of variables satisfy all three of these criteria.

Interpretation of Initial and Final Statistics

The data in Table F.1 indicate that the maximum number of factors in the initial solution corresponded to the number of variables. That, however, was unreasonable, as the main purpose of the factor analysis was to reduce the number of factors to a set of postulated underlying latent dimensions. The Gates (1981) Model of Software Abstraction suggested that there were three underlying dimensions. Tabachnick and Fidell (1989) recommended, as one criterion for determining the number of factors to accept, that the researcher, guided by theory, select factors with Eigenvalues of 1.0 or greater.

Table F.1 Eigenvalues of Factor Solution

Factor   Eigenvalue   Percent of Variance   Cumulative Percentage
1        7.80         28.9                  28.9
2        2.20         8.2                   37.0
3        1.58         5.8                   42.9
4        1.42         5.3                   48.1
5        1.36         5.0                   53.2
6        1.04         3.9                   57.0
7        1.01         3.7                   60.8
8        0.92         3.4                   64.2
9        0.87         3.2                   67.4
10       0.78         2.9                   70.3
11       0.75         2.8                   73.1
12       0.71         2.6                   75.7
13       0.64         2.4                   78.0
14       0.62         2.3                   80.3
15       0.59         2.2                   82.5
16       0.57         2.1                   84.7
17       0.53         2.0                   86.6
18       0.52         1.9                   88.5
19       0.49         1.8                   90.4
20       0.42         1.6                   91.9
21       0.40         1.5                   93.4
22       0.37         1.4                   94.8
23       0.35         1.3                   96.1
24       0.30         1.1                   97.2
25       0.28         1.0                   98.3
26       0.26         1.0                   99.2
27       0.21         0.8                   100.0
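The Eigenvalues-greater-than-1.0 rule can be checked directly against these values. The snippet below re-enters the first ten reported Eigenvalues (each variable contributes 1.0 to the total variance, so a factor's percent of variance is its Eigenvalue divided by the 27 variables):

```python
import numpy as np

# Kaiser criterion sketch: retain factors whose eigenvalues exceed 1.0.
# Values are the first ten of the 27 eigenvalues reported in Table F.1.
eigenvalues = np.array([7.80, 2.20, 1.58, 1.42, 1.36,
                        1.04, 1.01, 0.92, 0.87, 0.78])
retained = (eigenvalues > 1.0).sum()     # 7 factors pass the 1.0 cutoff
pct_variance = 100 * eigenvalues / 27    # 27 variables entered the analysis
print(retained)                          # 7
print(round(pct_variance[0], 1))         # 28.9 (% of variance for Factor 1)
```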

The data in Table F.1 are initial statistics and as such have not been altered by any external algorithm. Although these initial statistics show seven factors with Eigenvalues greater than 1.0, in terms of percent of variance the difference in explained variance was greatest among the first three factors. (The difference in percent of variance was 20.7% between Factor 1 and Factor 2, and 2.4% between Factor 2 and Factor 3. After this, the differences began to level off: between Factor 3 and Factor 4 the difference was 0.5%, between Factor 4 and Factor 5 it was 0.3%, and so forth.) Moreover, the table shows that the first three factors captured a combined 42.9% of the variance, while the four remaining factors with Eigenvalues above 1.0 accounted for only 17.9%. However, these preliminary conclusions had to be verified empirically.

To enable this empirical verification, several comparisons were made with confirmatory factor analyses using three, four, five, six and seven factors. As the initial statistics seemed to indicate, beyond three factors the additional "factors" essentially captured spurious error. This was evident because some "factors" loaded only two or three items. The comparison among the three-, four-, five-, six- and seven-factor attempts confirmed that any solution beyond three factors added little to the variance but instead introduced error. The comparisons supported the viability of the three-factor solution corresponding to the Gates (1981) model.

Rotated Factor Matrix

Table F.2 shows the results of the most viable of these solutions. The confirmatory principal-components analysis extracted a three-factor solution comprising what appear to be a High-Level Language factor, a Low-Level Language factor and an Applications factor. (The rationale behind the naming of the three factors is provided below.) The table shows the varimax loadings greater than or equal to 0.40 for each variable on the rotated factor matrix. With the exception of Windows, BASIC, Word Processing and Utilities, which showed cross-loadings, the software competency constructs formed three unique factors, together accounting for 42.9% of the variance.

With respect to items that load on multiple factors, although this could be an artifact of wording, that is improbable, given that all items had the same question stem. It is more likely that the pattern reflects reality. The multiple loadings support the researcher's a priori notion that software items have acquired meanings beyond their conventional labels. The distinction appears to be no longer clear: rather than being perceived by their conventional labels, these software items are perceived as hybrid combinations of applications and programs. (For example, even though word processors are conceived of as applications, they can also be programmed to repeat common operations through built-in macros.)

In the event of multiple loadings, Tabachnick and Fidell (1989) suggest assigning variables that load above 0.30 to the factor on which they load most highly. Applying this "rule of thumb" loads these four items on the High-Level Language factor. The next section details the interpretation of the three-factor solution.
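The highest-loading rule is mechanical and can be sketched directly. The loadings below are the cross-loading rows reported in Table F.2; note that the column placement of each value is an assumption here, since only the rule's outcome (assignment to the High-Level Language factor) is stated in the text:

```python
import numpy as np

# Sketch of the cross-loading rule: an item loading >= 0.30 on more than one
# factor is assigned to the factor with its highest loading. Column order
# (Applications, Low-Level, High-Level) is assumed for illustration.
items = ["Windows", "BASIC", "Word Processing", "Utilities"]
loadings = np.array([
    [0.42, 0.00, 0.56],
    [0.44, 0.00, 0.51],
    [0.46, 0.00, 0.50],
    [0.41, 0.40, 0.47],
])
factors = ["Applications", "Low-Level Language", "High-Level Language"]
assigned = {item: factors[row.argmax()] for item, row in zip(items, loadings)}
print(assigned["Utilities"])  # High-Level Language
```

All four cross-loading items end up on the High-Level Language factor, matching the assignment described above.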

Interpretation of Factor Analysis

Table F.2 shows the three factors extracted and the factor loadings for each variable. Using the cutoff loading of 0.40, the factor analysis captured the underlying constructs of 22 of these variables. As expected, five of the six variables with low communalities did not have high loadings, and as a result were not captured by the three factors.

Table F.2 Factor Loadings of Variables Across the Three Factors

Items                  Applications   Low-Level     High-Level
                       (Factor 1)     Language      Language
                                      (Factor 2)    (Factor 3)

Spreadsheet               0.68
Graphics                  0.67
Database                  0.63
Desktop Publishing        0.62
Finder (Mac)              0.60
HyperCard                 0.57
Music                     0.52
Accounting                0.48

C                                        0.80
C++                                      0.76
Assembler                                0.71
Unix                                     0.55
Scheme                                   0.54
Communications                           0.47

MSDOS                                                  0.73
Pascal                                                 0.71
Fortran                                                0.68
Games                                                  0.61
Windows                   0.42                         0.56
BASIC                     0.44                         0.51
Word Processing           0.46                         0.50
Utilities                 0.41           0.40          0.47

Ordered by magnitude of factor loadings, the first factor captured Spreadsheet, Graphics, Database, Desktop Publishing, Finder (Mac), HyperCard, Music and Accounting. As six of the eight items captured were applications (Spreadsheet, Graphics, Database, Desktop Publishing, Music and Accounting), it appears appropriate to name this factor the Applications Factor.

Ordered by factor loadings, the next factor captured C, C++, Assembler, UNIX, Scheme and Communications. Since only one operating system is captured by the factor, it appears inappropriate to call this the Operating System Factor. Rather, this factor appears to have to do with low-level languages, and it is therefore more appropriately named the Low-Level Language Factor. The factor captured UNIX, the one operating system which is most abstract and most closely related to low-level languages. Assembler is at the base machine level. Both C++ and Scheme are abstract languages, and it is logical that they fell into this factor. The last variable which loaded onto this factor, Communications, was a surprise and a totally unexpected finding. It is strictly speculation, but perhaps it is because Communications has a low-level language counterpart, the modem script, the language of letters and numbers that the modem uses to communicate. It is interesting to note that, from the list of nine languages, this factor captured those languages which were closer to the base of Gates' pyramid (C, C++, Assembler, Scheme).

In order of loading magnitude, the third factor captured MSDOS, Pascal, Fortran, Games, Windows, BASIC, Word Processing and Utilities. Aside from the multiple loadings (discussed above), since several high-level languages like Pascal, Fortran and BASIC loaded onto this factor, it appears appropriate to call this third factor the High-Level Language Factor. In contrast to Factor 2, from the list of nine languages the third factor captured Pascal, Fortran and BASIC, the high-level languages. Together the analysis captured eight of the nine languages (LOGO was not robust enough): four at the low-level language level (C, C++, Assembler, Scheme), three at the high-level language level (Pascal, Fortran and BASIC) and one at the applications level (HyperCard).

Discussion of Results

How do these separate analyses compare overall? It is instructive to note that it is the applications which dominate personal computing and which captured the most variance. Perhaps this reflects the shift toward applications, which Brookshear (1994) contends are fourth-generation languages. The factor analysis successfully captured three software constructs, but failed to capture an "operating system" construct. Thus, although the Gates (1981) model (which suggests that there are three underlying software constructs) is borne out, the factor analysis appears to suggest a different way of envisioning these constructs. However, what is interesting is that there is no contention in terms of relation; the results are relationally correct. That is, these languages locate at similar levels relative to the base machine, but the explanation flounders when it comes to labels.

That was precisely the point behind the a priori conjecture: that software, as currently defined, is perceived according to labels rather than function. The factor analytic results provide construct validity to the notion that there are three different types of software. However, according to the analysis, software competency is comprised not of languages, operating systems and applications, but of High-Level Languages, Low-Level Languages and Applications.

Although no construct was found for operating systems, the role of operating systems appears to be important and instrumental in helping to achieve the separation, and this role is implicit rather than explicit. For example, if this analysis had not included operating systems, the analysis would have been inadequate, or separation might not have occurred.

APPENDIX G

Scale Refinement

Scale Analysis

New means, standard deviations, and internal reliability coefficients were computed for each of the three refined subscales. Following a short description of the internal reliability analysis for each of these new subscales, the results are summarized in Table G.1. Two additional columns have been added. In addition to the Item-Total Correlation (ITC) for the refined subscale, the fourth column lists the equivalent ITC on the preliminary conventional subscale for the same item. For an estimate of how well the same item is represented by the refined subscale, the fifth column lists the factor loadings of each of the items in relation to the other items for each of the three factors.

New Application Subscale

The Application subscale item means ranged from 1.38 for HyperCard to 3.07 for Spreadsheet. The standard deviations for these items ranged from 1.06 for Accounting to 1.68 for Graphics. The subscale mean was 2.07 (maximum = 6, minimum = 1). The internal reliability coefficient (the standardized item alpha) for the Application subscale was 0.80.
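The standardized item alpha reported for each subscale follows the formula alpha = k * rbar / (1 + (k - 1) * rbar), where k is the number of items and rbar the mean inter-item correlation. As a sketch, the value 0.33 below is an assumed mean inter-item correlation chosen to reproduce the reported 0.80, not a figure taken from the thesis data:

```python
# Standardized item alpha: k items with mean inter-item correlation rbar.
def standardized_alpha(k, mean_inter_item_r):
    return k * mean_inter_item_r / (1 + (k - 1) * mean_inter_item_r)

# Eight Application items with an assumed mean inter-item r of 0.33
# reproduce the reported alpha of 0.80.
print(round(standardized_alpha(8, 0.33), 2))  # -> 0.8
```

The formula also shows why dropping weak items can raise alpha: fewer items is a penalty, but a higher mean inter-item correlation more than compensates.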

New Low-Level Language Subscale

The Low-Level Language subscale item means ranged from 1.24 for C++ to 2.28 for Scheme. The standard deviations for these items ranged from 0.84 for Assembler to 1.75 for Scheme. The subscale mean was 1.63 (maximum = 6, minimum = 1), and the internal reliability coefficient (the standardized item alpha) for the Low-Level Language subscale was 0.78.

New High-Level Language Subscale

The High-Level Language subscale item means ranged from 2.36 for Utilities to 3.89 for Games. The standard deviations for these items ranged from 1.55 for Windows to 1.92 for Games. The subscale mean was 3.37 (maximum = 6, minimum = 1). The internal reliability coefficient (the standardized item alpha) for the High-Level Language subscale was 0.83.

Table G.1 Means, Standard Deviations and Item-Total Correlations for the Refined Software Competency Scale

Refined Subscales

Application Subscale                        Mean    SD    ITCr   ITCp   FA1
How well you can use Spreadsheet?           3.07   1.57   0.59   0.58   0.68
How well you can use Graphics?              2.97   1.68   0.62   0.63   0.67
How well you can use Database?              2.32   1.46   0.52   0.48   0.63
How well you can use Desktop Publishing?    1.94   1.42   0.52   0.54   0.62
How well you can control Macintosh?         1.84   1.53   0.48   0.36   0.60
How well you can program in HyperCard?      1.38   1.11   0.52   0.28   0.57
How well you can use Music?                 1.55   1.15   0.46   0.51   0.52
How well you can use Accounting?            1.49   1.06   0.36   0.33   0.48
Total Subscale (alpha=0.80, N=599)          2.07

Low-Level Language Subscale                 Mean    SD    ITCr   ITCp   FA2
How well you can program in C?              1.38   1.11   0.64   0.49   0.80
How well you can program in C++?            1.24   0.87   0.57   0.46   0.76
How well you can program in Assembler?      1.26   0.84   0.55   0.49   0.71
How well you can control UNIX?              1.55   1.04   0.50   0.32   0.55
How well you can program in Scheme?         2.28   1.75   0.31   0.18   0.54
How well you can use Communications?        2.10   1.65   0.49   0.61   0.47
Total Subscale (alpha=0.78, N=651)          1.63

High-Level Language Subscale                Mean    SD    ITCr   ITCp   FA3
How well you can control MSDOS?             3.70   1.57   0.74   0.52   0.73
How well you can program in Pascal?         3.80   1.79   0.57   0.53   0.71
How well you can program in Fortran?        2.71   1.82   0.25   0.08   0.68
How well you can use Games?                 3.89   1.92   0.61   0.59   0.61
How well you can control Windows?           3.73   1.55   0.58   0.55   0.56
How well you can program in BASIC?          2.89   1.78   0.47   0.55   0.51
How well you can use Word Processing?       3.87   1.84   0.60   0.64   0.50
How well you can use Utilities?             2.36   1.79   0.61   0.65   0.47
Total Subscale (alpha=0.83, N=627)          3.37

Note: ITCr is the item-total correlation for the refined subscale; ITCp is the item-total correlation for the preliminary subscale; FA1, FA2 and FA3 are the factor loadings of subscale items on the three factors.

Discussion of Refined Instrument

In general, both the ITCs and the factor loadings appear to be fairly high, and most of the ITCs for the same items appear to be higher on the new subscales. This suggests an overall increase in discrimination, in spite of the reduction in the number of items from 27 to 22. Another way to confirm this sharper discrimination was to compare the correlation matrices in Tables G.2 and G.3.

Table G.2 Correlation Matrix for Preliminary Subscales

           HSCORE   LSCORE   ASCORE   ALLSCL   FirstAge   gender
HSCORE      1.00     0.64     0.60     0.83     -0.33      0.27
LSCORE               1.00     0.74     0.90     -0.25      0.27
ASCORE                        1.00     0.90     -0.34      0.31
ALLSCL                                 1.00     -0.36      0.32
FirstAge                                         1.00     -0.22
gender                                                     1.00

Note: n = 507, 27 variables

Table G.3 Correlation Matrix for Refined Subscales

           HSCORE   LSCORE   ASCORE   ALLSCL   FirstAge   gender
HSCORE      1.00     0.52     0.60     0.89     -0.45      0.40
LSCORE               1.00     0.42     0.76     -0.21      0.24
ASCORE                        1.00     0.81     -0.27      0.18
ALLSCL                                 1.00     -0.39      0.35
FirstAge                                         1.00     -0.22
gender                                                     1.00

Note 1: n = 533, 22 variables
Note 2: For both tables: ALLSCL = HSCORE + LSCORE + ASCORE
Note 3: FirstAge = Age of first computer experience
Note 4: p < 0.01 for all correlations

The correlations in both cases were high and significant (p < 0.01). When the two correlation matrices are compared side by side, one sees that two of the three intercorrelations between subscales decreased while the third remained constant: the intercorrelation between HSCORE and LSCORE decreased from 0.64 to 0.52, and that between LSCORE and ASCORE decreased from 0.74 to 0.42, while the intercorrelation between HSCORE and ASCORE remained unchanged at 0.60. Since there was less overlap among the new subscales, this suggested that the new subscales were more distinct and could discriminate better.
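The side-by-side comparison can be sketched mechanically, with the subscale intercorrelations transcribed from Tables G.2 and G.3:

```python
# Compare subscale intercorrelations before and after scale refinement
# (values transcribed from Tables G.2 and G.3).
preliminary = {("HSCORE", "LSCORE"): 0.64,
               ("LSCORE", "ASCORE"): 0.74,
               ("HSCORE", "ASCORE"): 0.60}
refined = {("HSCORE", "LSCORE"): 0.52,
           ("LSCORE", "ASCORE"): 0.42,
           ("HSCORE", "ASCORE"): 0.60}

for pair in preliminary:
    change = refined[pair] - preliminary[pair]
    trend = ("decreased" if change < 0
             else "unchanged" if change == 0 else "increased")
    print(f"{pair[0]} vs {pair[1]}: "
          f"{preliminary[pair]:.2f} -> {refined[pair]:.2f} ({trend})")
```

Lower intercorrelations between subscales, with reliability held high, indicate that the refined subscales measure more distinct constructs.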

Intercorrelations with Gender and First Age of Experience

The correlation matrices also showed the relationship of gender and first age of computer experience to the competency subscales. The polarity of the correlations indicated that gender correlated positively with HSCORE, LSCORE, ASCORE and ALLSCL (the composite score for the subscales). Since the code for males was "2", this suggested that higher subscale scores tended to be associated with males.

Similarly, FirstAge, the age of first computer experience, correlated negatively with the subscale scores. When one considers that Post 7 was coded as "2", this suggested that the higher this variable (meaning a later initial experience with computers), the lower the scores.

(Please turn to Appendix D for the description of all variables which have been mentioned here).

Instrumentation

Combining the results of the factor analysis and the internal reliability analysis, three conclusions can be drawn. First, the results from the principal-components factor analysis demonstrated that the three software competency constructs were structurally independent. Second, the high alpha coefficients for all three subscales indicated that the subscales were internally reliable. Third, the relatively high intercorrelations among the subscales indicated that although the competency constructs are independent, the three subscales can be combined to form a total score, as indicated by HSCORE + LSCORE + ASCORE = ALLSCL.

APPENDIX H

MANOVA Analysis

Considerations for the MANOVA

Two sets of conditions need to be considered before reporting significance levels from the MANOVA or the follow-up ANOVAs. The first involves checking for violations of the underlying assumptions of the MANOVA. The second involves the inflated experiment-wise alpha error that results from performing numerous follow-up ANOVAs.

Skewness, Missing Data, Homoscedasticity, Outliers

Upon inspection, two observations were obvious. The first has to do with the departure of variables from normality arising from large individual differences, and the related problems of outliers and homoscedasticity. There was also a problem with missing data. Upon inspection across subjects of the responses to the 27 software competency items in the questionnaire, there appear to be two types of learners.

The inspection by subject reveals that the first type of learner is a "jack of all trades" who feels somewhat competent with many of these items. The second type of learner is the "I don't know" who does not feel competent with the majority of these items, and despite opportunities to take computing courses in high school, has arrived at the university with low software competency.

When the analysis is performed by variable, the same differences show up, but this time with many of the dependent variables being positively skewed. However, this is not a universal problem. There were variables such as BASIC, Pascal, DOS, Windows, Database, Games, Graphics and Spreadsheet which approached normality, and others like Word Processing which were actually negatively skewed. The variables with positive skew tended to be the less popular and obscure software items such as Assembler, OS/2 and Windows NT.

One of the problems with these positively skewed items was their lack of variance, which meant they were not robust enough to survive the rigor of parametric statistics. The researcher conjectured that it was the items which approached normality (most of which were applications-oriented) that carried the day and captured enough variance and communality to allow the factor analysis, the internal reliability analysis, and the MANOVA to be executed smoothly.

Checking Underlying Assumptions

Although these problems existed, there were many positive aspects as well. The sample size was large, there were no problems of singularity or multicollinearity, and spot checks for bivariate relationships [as recommended by Tabachnick and Fidell (1989)] did not reveal problems of non-linearity. As well, the ratio of cases to dependent variables (dvs) was high, and the condition of more cases than dependent variables in every cell was satisfied.

First, the problem with skewness. Tabachnick and Fidell (1989) noted that Mardia (1971) had demonstrated that "MANOVA is robust to modest violations of normality if the skewness is caused by skewness rather than outliers" (Tabachnick & Fidell, 1989, p. 378). They added that "even with unequal n and only a few dvs, a sample size of 20 in the smallest cell should ensure robustness."

The sample had to be checked for multivariate outliers from the centroid before Mardia's (1971) condition could be considered satisfied. Tabachnick and Fidell (1989) recommended "computation of Mahalanobis distance for each case" (p. 68), using the dependent variables of the MANOVA as independent variables in a regression to predict a dummy variable such as subject number. This yielded a distance criterion represented by a number analogous to a chi-square. The calculated figure for each subject had to be less than the chi-square value of 16.266 (p < 0.001), with the degrees of freedom equal to the number of dependent variables. In this way, five subjects were identified as multivariate outliers.
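The screening step can be sketched as follows. The data are simulated (the thesis used SPSS regression output); only the chi-square criterion of 16.266 (three dependent variables, p < 0.001) comes from the text:

```python
import numpy as np

# Sketch of the multivariate-outlier screen: squared Mahalanobis distance
# of each case from the centroid of the three subscale scores, flagged
# against the chi-square criterion (df = 3, p < 0.001). Data are illustrative.
rng = np.random.default_rng(1)
scores = rng.normal(loc=[3.4, 1.6, 2.1], scale=[1.0, 0.6, 0.8], size=(500, 3))
scores[0] = [9.0, 6.0, 8.0]  # plant one extreme case

centroid = scores.mean(axis=0)
inv_cov = np.linalg.inv(np.cov(scores, rowvar=False))
diff = scores - centroid
d2 = np.einsum("ij,jk,ik->i", diff, inv_cov, diff)  # squared distances

CRITERION = 16.266  # chi-square critical value, df = 3, p < 0.001
outliers = np.flatnonzero(d2 > CRITERION)
print("multivariate outliers:", outliers)
```

Computing the distance directly from the inverse covariance matrix is equivalent to the regression trick described above; the regression approach was simply a convenient way to obtain the same distances in SPSS.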

However, even when the MANOVA was rerun with these outliers removed, the results remained the same. Thus it was concluded that these outliers were not serious enough to upset Mardia's (1971) condition, and that the problem was essentially one of skewness.

Before Mardia's (1971) criteria of robustness could be applied with confidence, the problem of low cell counts also had to be investigated. The lowest cell size was below 20, but no significant interactions were found, and since this was a gender-based study, the MANOVA was repeated using two separate sets of MANOVAs:

1) a 2 (gender) X 2 (age of first computer experience) MANOVA, and
2) a 2 (gender) X 3 (learning setting) MANOVA.

Both of these analyses satisfied the 20-per-cell criterion, but the compromise was that these MANOVAs did not analyze the three-way interaction of gender, age of first computer experience and learning setting, nor the interaction between age of first experience and learning setting. The two separate MANOVAs yielded similar results to the 2 x 2 x 3 MANOVA. The large sample sizes for all three MANOVAs offer a measure of compensation for the unequal cells, and should offer a measure of robustness for the full model with the smaller cells.

More importantly, the problem of homoscedasticity was resolved. Tabachnick and Fidell (1989) noted that Box's M test is a notoriously sensitive test of the homogeneity of the variance-covariance matrices, and recommended that in the event of unequal cells, "If cells with larger samples produce larger variance and covariances, the alpha level is conservative so that the null hypothesis can be rejected with confidence" (p. 379). This was the case for all three MANOVAs.

Due to the large initial data set, the problem of missing data did not turn out to be serious. SPSS MANOVA took care of the deletion of subjects, rejecting in a listwise manner the 127 cases that had missing data. In conclusion, even though the MANOVA model is based upon critical underlying assumptions about linearity, normality, homoscedasticity and outliers, it can be surprisingly robust even with violations (Tabachnick & Fidell, 1989).

Setting the Alpha Level

Although Bray and Maxwell (1985) believe that another advantage of the MANOVA is that follow-up ANOVAs from a MANOVA are protected from Type-I errors arising from an inflated experiment-wise alpha, Tabachnick and Fidell (1989) contend that protection is not guaranteed even under the umbrella of the MANOVA. Instead, these researchers recommend that, to guard against Type-I errors, the alpha level be either set to a moderately conservative level (p = 0.01), or set to the more conservative Bonferroni level, which ties alpha directly to the number of comparisons by dividing the overall alpha by the number of tests (alpha/n).
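The Bonferroni option amounts to a line of arithmetic (the function name below is ours, for illustration):

```python
# Bonferroni correction: divide the family-wise alpha by the number of tests.
def bonferroni_alpha(alpha, n_tests):
    """Per-test alpha that keeps the family-wise error rate at `alpha`."""
    return alpha / n_tests

family_alpha = 0.05
n_followup_anovas = 3
print(bonferroni_alpha(family_alpha, n_followup_anovas))  # 0.05 / 3 ~ 0.0167
```

With only three follow-up ANOVAs, the Bonferroni per-test alpha (about 0.0167) is less stringent than the fixed p = 0.01 that was ultimately chosen, so the chosen level guards at least as strongly against Type-I error.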

Since there were only three univariate follow-up ANOVAs, the chance of a Type-I error arising from multiple ANOVAs was greatly reduced. Therefore, rather than use the more conservative Bonferroni alpha, but to still provide adequate assurance against problems from non-ideal data, an alpha of p = 0.01 was chosen and considered adequate for the MANOVA.

APPENDIX I

The Gates-Tangorra-Feng Model

The Revised Model

As mentioned at the end of Chapter Two, there was a section on the updated model of the Gates (1981) pyramid which was used as the conceptual framework for the study. The chapter also mentioned a hybrid model which incorporated both the Gates (1981) model and the Tangorra (1990) model.

The search of the literature yielded Tangorra's (1990) model, which was inclusive of the Gates (1981) model but also represented the layers of hardware. The researcher noticed that, in general, except for the placement of the Operating System, Tangorra's (1990) model agreed with the Gates (1981) model. Tangorra had placed the Operating System level (level 4) above the Machine level (level 3), and the Assembly level (level 5) above the Operating System level (level 4). For this study, the researcher proposed a different arrangement based upon a revision of the basic Gates (1981) model, augmented with Tangorra's interpretation of the hardware abstraction levels.

Based upon the researcher's computer architecture background, the researcher combined elements of both the Gates (1981) and the Tangorra (1990) models to arrive at a hybrid model. The Tangorra Assembly level (level 5) appeared to be more appropriately placed below the Operating System level (level 4), but above the Machine level (level 3). Or, in terms of the Operating System, the Operating System level should be between the Assembly level and the High-Level Languages, rather than between the Assembly level and the Machine level.

The researcher justified this placement because although assembly language is software, it is unique in that it can also communicate directly with the hardware (the screen, the printer, the mouse, etc.) without requiring the convenience of an operating system. As White (1993) noted, it is "possible to write software that does not need an operating system; such software sends instructions directly to the microprocessor and other hardware components" (cover page).

Furthermore, in terms of the Assembly level, since assembly language is essentially a one-to-one reflection of the underlying hardware, the Assembly level should more appropriately be located slightly above the base of Gates' triangle. (In retrospect, the researcher conjectures that Gates probably did not mention assembly language intentionally, because he knew that it was a unique kind of language, intimately and implicitly tied to the base hardware, and a direct manifestation of it.)

Other than relocating the levels, there was also a need for another, more systemic change. This is because the previous relationship between the levels and the availability of software has also changed. The reader will note that the revised model which formed the conceptual framework for this study (as depicted in Figure 4, page 82) has relocated the operating system in accordance with Gates' layering schema, and flipped the whole pyramid around on its end.

The reason for this "flip" is that the exact inverse of the Gates (1981) relationship now holds true. With the advent of a multitude of applications for virtually every type of need, software availability now decreases the lower down the levels one descends. The rationale for placing the hardware at the narrow base can also be substantiated by the fact that the variety of languages and the ubiquitous applications mentioned above are supported by only a handful of different hardware systems. This downward-decreasing relationship was taken into account during the questionnaire design; the final questionnaire incorporated 11 applications, nine languages and seven operating systems. In this way, the inverted model represents the process of both hardware and software abstraction, and reflects the new direct (rather than inverse) relationship between software availability and level. The researcher is aware that only the software portion of the model has been addressed here. The Gates-Tangorra-Feng model has been presented for completeness, as it represents the abstraction at both the hardware and software levels: a model of abstraction for the whole machine.