Establishing the Human Perspective of the Information Society

Helen Partridge BA (UQ), GradDipPsych (UQ), MIT (QUT)

Submitted in fulfilment of the requirements of the Doctor of Philosophy

Faculty of Information Technology
Queensland University of Technology
2007

Supervisory Panel

Principal Supervisor
Associate Professor Sylvia Edwards, Faculty of Information Technology, QUT

Associate Supervisors
Professor Christine Bruce, Faculty of Information Technology, QUT
Associate Professor Gillian Hallam, Faculty of Information Technology, QUT
Dr Paul Baxter, Department of Employment and Training, Queensland State Government

Abstract

The digital divide is a core issue of the information society. It refers to the division between those who have access to, or are comfortable using, information and communication technology (ICT) (the “haves”) and those who do not have access to, or are not comfortable using, ICT (the “have-nots”). The digital divide is a complex phenomenon. The majority of studies to date have examined the digital divide from a socio-economic perspective. These studies have identified income, education and employment as the key factors in determining the division between the “haves” and the “have-nots”. Very little research has explored the psychological, social or cultural factors that contribute to digital inequality in community. The current study filled this gap by using Bandura’s social cognitive theory (SCT) to examine the psychological barriers that prevent individuals from integrating ICT into their everyday lives.

SCT postulates that a person will act according to their perceived capabilities and the anticipated consequences of their actions. Four studies have explored the digital divide using SCT. Because of limitations in their research designs these studies have shed only limited light on current understanding of digital inequality in community. The current research was the first study exploring the digital divide that (i) incorporated both socio-economic and socio-cognitive factors, (ii) used a community context that ensured the recruitment of participants who represented the full spectrum of the general population, and (iii) was conducted in both the US and Australia. Data was gathered via self-administered questionnaires in two communities: Brisbane, Australia and San Jose, USA. Completed questionnaires were obtained from 330 and 398 participants from the US and Australia, respectively.

Hierarchical regression analysis was used to explore the research question: what influence do socio-cognitive factors have in predicting internet use by members of the general population when the effects of socio-economic factors are controlled? The results of this analysis revealed that attitudes do matter. The US study found that socio-economic factors were not statistically significant predictors of internet use. The only factor found to be a significant predictor of use was internet self-efficacy. In short, individuals with higher levels of internet self-efficacy reported higher levels of internet use. Unlike the US study, the Australian study found that by themselves several socio-economic factors predicted internet use. In order of importance these were age, gender, income and ethnicity. However, the study also revealed that when socio-economic factors are controlled for, and socio-cognitive variables included in the analysis, it is the socio-cognitive and not the socio-economic variables that are the dominant (in fact the only!) predictors of internet use.

The research illustrated that the digital divide involves more than just the availability of resources and funds to access those resources. It incorporates the internal forces of an individual that motivate them to use or integrate ICT into their lives. The digital divide is not just about ICT such as computers and the internet. It is about people. As such, the key to solving the issue of digital inequality is not going to be found with corporate or government funds providing physical access to technology. Instead, the key to solving digital inequality is inside the individual person. The alternative view of the digital divide presented in this research is by no means intended to minimise the role played by socio-economic factors. Indeed, the socio-economic perspective has helped shed light on a very real social issue. What this research has done is suggest that the digital divide is more complex and more involved than has been imagined, and that further and different research is required if genuine insights and real steps are going to be made in establishing an information society for all.

Keywords: information society, digital divide, social cognitive theory, psychology, “haves”, “have-nots”, self-efficacy, survey method, internet, Australia, USA

Table of Contents

Supervisory panel
Abstract
Keywords
Table of contents
List of tables
List of figures
Statement of original authorship
Acknowledgements

Chapter 1: Introduction
1.1 Introduction
1.2 The research problem
1.3 The research questions
1.4 The research study
1.5 Significance of the research
1.6 Research limitations
1.7 Thesis overview
1.8 Conclusion

Part A: Setting the scene

Chapter 2: The information society
2.1 Introduction
2.2 The information society is here!
2.3 What is the information society?
2.4 Does the information society really exist?
2.5 Ways of researching the information society
2.6 Information literacy: the fifth school of thought
2.7 Conclusion

Chapter 3: The digital divide
3.1 Introduction
3.2 What is the digital divide?
3.3 Crisis or myth?
3.4 Who is excluded by the digital divide?
3.5 Digital divide or digital divides?
3.6 Access, access, access!
3.7 A theoretical framework to the digital divide?
3.8 Conclusion

Chapter 4: Social cognitive theory
4.1 Introduction
4.2 Triadic reciprocality
4.3 The role of human agency
4.4 The key construct: self-efficacy
4.5 What self-efficacy is not
4.6 General or domain specific?
4.7 Sources of self-efficacy
4.8 Consequences of self-efficacy
4.9 Self-efficacy and information literacy: closely linked concepts
4.10 Application of self-efficacy in understanding the digital divide
4.11 Criticisms of social cognitive theory
4.12 Conclusion

Part B: Exploring the problem

Chapter 5: The method
5.1 Introduction
5.2 Research philosophy
5.3 Research problem
5.4 Research plan
5.5 Research context
5.6 Sampling plan
5.7 The research instrument
5.8 Ethical issues
5.9 Data analysis
5.10 Conclusion

Chapter 6: The participants
6.1 Introduction
6.2 The US participants
6.3 The Australian participants
6.4 Conclusion

Chapter 7: Construct validation
7.1 Introduction
7.2 Methods used for construct validation
7.3 Factor analysis
7.4 Construct validity of the social-cognitive measures
7.5 Construct validity of internet use measure
7.6 Common method variance testing
7.7 Conclusion

Part C: Developing the theory

Chapter 8: The results
8.1 Introduction
8.2 Multiple regression analysis
8.3 The analysis
8.4 The revised research model
8.5 Conclusion

Chapter 9: Discussion and recommendations
9.1 Introduction
9.2 Overview of the research
9.3 Overview of the research findings
9.4 Significance and contributions of the research
9.5 Limitations
9.6 Recommendations for future research and practice
9.7 Conclusion

References

Appendices
Appendix 1: Self-administered questionnaires
Appendix 2: Expert comment form
Appendix 3: APS poster 2003
Appendix 4: The pre and pilot tests
Appendix 5: Administration instructions

List of Tables

Table 2.1 The seven faces of information literacy
Table 3.1 Twelve perspectives or dimensions on the digital divide
Table 6.1 Summary of characteristics against US questionnaire type
Table 6.2 Questionnaire type by gender (US)
Table 6.3 Questionnaire type by age (US)
Table 6.4 Questionnaire type by highest education level (US)
Table 6.5 Questionnaire type by current employment (US)
Table 6.6 Questionnaire type by income (US)
Table 6.7 Questionnaire type by ethnicity (US)
Table 6.8 Questionnaire type by disability (US)
Table 6.9 Summary statistics for the demographic variables (US)
Table 6.10 Summary statistics for San Jose, CA, USA
Table 6.11 Summary of characteristics against Australian questionnaire type
Table 6.12 Questionnaire type by gender (Australia)
Table 6.13 Questionnaire type by age (Australia)
Table 6.14 Questionnaire type by highest education level (Australia)
Table 6.15 Questionnaire type by regrouped employment (Australia)
Table 6.16 Questionnaire type by income (Australia)
Table 6.17 Questionnaire type by ethnicity (Australia)
Table 6.18 Questionnaire type by disability (Australia)
Table 6.19 Analysis of variance – internet use by survey type (Australia)
Table 6.20 Summary statistics for the demographic variables (Australia)
Table 6.21 Summary statistics for Brisbane, QLD, Australia
Table 7.1 Summary of factor analysis approach applied in the current study
Table 7.2 Parallel analysis (US sample)
Table 7.3 Initial factor analysis (US sample)
Table 7.4 Seven factor solution (US sample)
Table 7.5 Two factor solution (US sample)
Table 7.6 Parallel analysis (Australian sample)
Table 7.7 Initial factor analysis (Australian sample)
Table 7.8 Seven factor analysis (Australian sample)
Table 7.9 Two factor solution (Australian sample)
Table 8.1 Dummy coding used in the current research
Table 8.2 Hierarchical regression for internet use (US sample)
Table 8.3 Hierarchical regression for internet self-efficacy (US sample)
Table 8.4 Hierarchical regression for internet use (Australian sample)
Table 8.5 Hierarchical regression for internet self-efficacy (Australian sample)

List of Figures

Figure 1.1 The research hierarchy as applied to the current research
Figure 2.1 The information society schools of thought
Figure 4.1 The triadic relationship
Figure 4.2 Connection between self-efficacy and information literacy
Figure 5.1 The research model
Figure 5.2 The research process
Figure 7.1 Scree plot (US sample)
Figure 7.2 Scree plot (Australian sample)
Figure 8.1 The two step hierarchical regression process for internet use
Figure 8.2 The evolving models of the digital divide
Figure 8.3 The revised research model

Statement of Original Authorship

The work contained in this thesis has not been previously submitted to meet requirements for an award at this or any other higher education institution. To the best of my knowledge and belief, the thesis contains no material previously published or written by another person except where due reference is made.

Signature

Date

Acknowledgements

I am very grateful to my family, friends and colleagues who supported me during this research. I would especially like to thank my supervisory team, Sylvia Edwards, Christine Bruce, Gillian Hallam, and Paul Baxter. I have greatly appreciated the guidance, support, enthusiasm and encouragement you have each offered me over the years. I would also like to thank Taizan Chan, Rebecca Spooner-Lane and Clive Bean who provided me with much needed and much appreciated statistical advice.

Helen Partridge Brisbane, Australia May 2007


For Barry and Barbara Partridge

Chapter 1: Introduction

1.1 Introduction

It has become common practice to say that we are living in an information society. The term “information society” has arisen out of the accepted belief that information has become a core part of contemporary life, both work and play – so much so, that it has become a symbol of the very age we live in (Martin, 1988). Since the 1960s the information society has emerged as a significant domain of research and enquiry. It has been examined by four schools of thought (Duff, 2001): (i) the economic school, where the focus has been on the information economy and the changing nature of the workforce, (ii) the information explosion school, where the focus has been on the increasing amount of information in the world, (iii) the information technology school, where the focus has been on the increasing amount of information and communication technology (ICT) in society and (iv) the synthetic school, which combined all three previous schools. These schools of thought have been criticised for their quantitative focus (i.e. number of information workers, amount of information, amount of ICT), and consequently for not being able to provide a full and informed discourse on the information society. None of the existing schools of thought have adopted a “user” or “person-centred” focus. Information literacy is an emerging area of scholarly enquiry. An information literacy school of thought would focus on the person and the way in which he or she experienced information and the information world. As such, the information literacy school of thought would provide the missing “user” or “person-centred” focus. In 1998 Stichler and Hauptman asserted that the information revolution “has been widely acclaimed as a great benefit for humanity, but the massive global change it is producing brings new ethical dilemmas” (p. 1). Unless information society research begins to explore the phenomenon from non-quantitative and more user-centred perspectives we will not be fully prepared or able to meet the new ethical dilemmas facing us. This thesis explored a key information society issue – the digital divide – from the perspective of the information literacy school of thought.

1.2 The research problem

The digital divide between information and communication technology (ICT) “haves” and “have-nots” has been a topic of considerable discussion since the US federal government released its 1995 report on household access to technologies such as the telephone, computers and the internet (NTIA, 1995). Since this time many organisations have endeavoured to bridge the digital divide through a diverse range of initiatives and projects, and government agencies have established and implemented public policy aimed at closing the divide. These initiatives and projects have been developed based on the current understanding of the digital divide. This understanding has been developed primarily from a socio-economic perspective. According to current studies (Lenhart, Horrigan, Ranie, Allen, Boyce, Madden & O’Grady, 2003; NOIE, 2004; NTIA, 2005) the primary factors contributing to the digital divide have been income, employment and education.

As personal computer prices have fallen and internet services to the household have become increasingly less expensive, the socio-economic perspective of the digital divide has become less convincing as an explanation for all ICT non-use. The 1999 study by the National Telecommunications and Information Administration (NTIA) into the digital divide in the United States suggested that the “don’t want it” attitude was fast rivalling cost as a factor explaining non-use of the internet. Further support for this suggestion was recently given by a Pew Internet and American Life Project study (Lenhart et al, 2003) which stated that nearly one quarter of Americans were “truly disconnected”, having no direct or indirect experience with the internet, with another 20% of Americans being “net evaders”, that is, people who live with someone who uses the internet from home. “Net evaders” might “use” the internet by having others send and receive email or do online searches for information for them. Recent criticism of the current digital divide studies (Jung, Qiu & Kim, 2001) has suggested that the studies have failed to consider the psychological, social and cultural barriers underlying the digital divide.

Vernon Harper (n.d.) has suggested the existence of two digital divides: the access digital divide and the social digital divide. The access digital divide was based upon cost factors and frequently discussed in terms of the presence of computers or internet connections in the household. The social digital divide was "a product of differences that are based on perception, culture and interpersonal relationships that contribute to the gap in computer and internet penetration" (Harper, n.d., p. 4). Harper recommended that the scholarly community continue to build research that explored the socio-psychological and cultural differences that contributed to the social digital divide. Harper (n.d.) concluded by stating "the issues surrounding the digital divide must be redefined away from the hardware and towards humanity" (p. 5). If organisations are to develop programmes and initiatives that would assist members of community to fully participate in economic, political and social life then efforts must be made to more clearly understand the social, psychological and cultural differences that have contributed to the digital divide.

This thesis contributed to understanding the social digital divide by exploring digital inequality from a psychological perspective. The research used Bandura’s (1997) social cognitive theory (SCT). According to SCT, behaviour is best understood in terms of a reciprocal relationship between personal factors, behaviour and the environment. SCT has two core constructs: self-efficacy and outcome expectancy. Self-efficacy is the more important of the two and refers to a person’s judgement of their perceived capability for performing a task. To date, only four studies have attempted to explore the digital divide from a psychological perspective. All four studies have used SCT as the theoretical framework. Three of the studies were conducted in the US and involved high school and college students and their use of computers and the internet. Two of these studies focused specifically on the experience of African American students as compared to European American (i.e. white American) students. One study was conducted in Hong Kong with older adults and their use of the same technology. All four studies have helped to expand current understanding of how psychological factors have impacted upon a person’s willingness to engage with ICT. Three observations can be made in regards to these studies. First, they have focused on only a very small segment of the population and have not fully explored the idea of digital inequality within community. Second, the studies have not incorporated all socio-economic factors alongside the socio-cognitive factors. Third, none of the studies were conducted in Australia. The current research filled these gaps.

1.3 The research questions

This study used the research hierarchy proposed by Cooper and Emory (1995). The research hierarchy consisted of (i) a management question, (ii) research question/s, (iii) investigative question/s and (iv) measurement question/s. Although the hierarchy was developed as a guide for research being conducted within business contexts specifically, it can also be used as a systematic approach to the research process in other non-business contexts (i.e. community contexts). Figure 1.1 summarises how the research hierarchy was used to derive the research questions for the current study. According to Cooper and Emory (1995) the management question was the problem or question prompting the research. They warned that a “poorly defined management problem or question will misdirect research efforts” (1995, p. 56). The management question driving this study was:

How can understanding of the digital divide be improved by including a psychological or human perspective?

Once the management question was identified Cooper and Emory (1995) recommended establishing a research question that was “a fact oriented information gathering question” (1995, p. 57). It represented the general purpose of the study. The research question that followed from the above management question was:

Can a more detailed understanding of the digital divide in community be obtained by exploring both socio-cognitive and socio-economic factors?

The investigative question “guides the development of the research direction” (1995, p. 58). It served the purpose of breaking down the research question into a more specific question about which data can be gathered. The investigative question that followed from the research question stated above was:

What influence do socio-cognitive factors have in predicting internet use when the effects of socio-economic factors are controlled?

Measurement questions constituted the fourth and last level of questions within the Cooper and Emory (1995) research hierarchy. They were the questions from which the actual set of data was collected (i.e. questions within questionnaires and interviews). Full details on the measurement questions used in the current study have been provided in Chapter 5.

Figure 1.1: The research hierarchy as applied to the current research (based upon Cooper & Emory, 1995)
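To make the investigative question concrete: in statistical terms, “controlling for” the socio-economic factors means entering them as a first block in a hierarchical (blockwise) regression, then adding the socio-cognitive variables as a second block and examining the change in explained variance. The sketch below illustrates this two-step procedure in Python using pandas and statsmodels. It is an illustrative sketch only, not the analysis script used in the thesis: the file name and column names are hypothetical stand-ins for the study’s variables, and categorical predictors are assumed to have already been dummy coded.

```python
# Illustrative two-step hierarchical regression (not the thesis's actual script).
# Assumes a CSV of survey responses with hypothetical column names, and that
# categorical predictors (gender, ethnicity, etc.) are already dummy coded.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("survey_responses.csv")  # hypothetical data file

# Block 1: socio-economic predictors only.
socio_economic = ["age", "gender_male", "income", "ethnicity_other", "education"]
X1 = sm.add_constant(df[socio_economic])
step1 = sm.OLS(df["internet_use"], X1).fit()

# Block 2: socio-cognitive predictors entered while the socio-economic
# block remains in the model (i.e. its effects are controlled).
socio_cognitive = ["internet_self_efficacy", "outcome_expectancy"]
X2 = sm.add_constant(df[socio_economic + socio_cognitive])
step2 = sm.OLS(df["internet_use"], X2).fit()

# The R-squared change between the steps estimates the variance in internet
# use explained by the socio-cognitive factors over and above socio-economics.
print(f"Step 1 R^2: {step1.rsquared:.3f}")
print(f"Step 2 R^2: {step2.rsquared:.3f}")
print(f"R^2 change: {step2.rsquared - step1.rsquared:.3f}")
```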

1.4 The research study

The current research was guided by critical theory. That is, the research explored new ways of understanding a social phenomenon (i.e. the digital divide) with the view to enacting future change by empowering individuals. The research had an international context. Study participants were obtained from the two cities of Brisbane, Australia and San Jose, USA. A self-administered questionnaire was used for data collection. The final data collection took place in January 2005 in the US context and November/December 2005 in the Australian context. A total of 330 questionnaires were obtained in the US and 389 were obtained in Australia.

1.5 Significance of the research

This research was significant because it developed a new theoretical framework through which to view the division between information “haves” and information “have-nots” within society. The research illustrated that the digital divide involved more than just the availability of resources and funds to access those resources. It incorporated the internal forces of an individual that motivated them to use or integrate technology into their lives. Using Bandura’s (1997) social cognitive theory to examine these internal forces, this research has added another layer of understanding to digital inequality in community. The findings of the study have provided support for the existence of the social digital divide as proposed by Harper (n.d.).

In addition, this was the first time that internet self-efficacy has been explored within the context of the wider community. Existing studies that have examined self-efficacy have done so using university or high school students. The differences in these groups suggested that these studies could not be generalised to the broader population. Importantly, this was the first time a study exploring internet self-efficacy and the digital divide had taken place in Australia. The majority of studies to date have originated from the United States. The research developed an internet self-efficacy scale that was appropriate for use within the context of the general population.

This research was important because it expanded current understanding of a phenomenon that has far reaching social and economic implications. The research has contributed to the development of a more precise understanding of what constitutes digital inequality in society and who it affects. Developing a clearer and more comprehensive picture of the forces behind the division in society between “haves” and “have-nots” is a vital step in bridging the gap. The findings of this research can be used by organisations working to bridge the digital divide to develop and tailor services and programs to more accurately and effectively narrow the gap between “information-rich” and “information-poor”. As a consequence real steps can now be made in bridging the gap between the “haves” and the “have-nots” in society. The research findings will allow for all members of community to have an equal chance of establishing and maintaining productive personal and professional lives in this rapidly emerging digital age.

1.6 Research limitations

The research had several limitations that must be considered. First, the research employed cross-sectional data to identify the significant relationships between the research variables. Consequently, no firm conclusions can be made regarding the exact magnitude of the causal effects. Longitudinal designs, although much more difficult to achieve (especially in the community setting), are crucial for furthering current understanding of the nature of the digital divide. Second, the measures used to assess the main variables of interest in the research were all self-reported. Self-reported measures provide a useful opportunity to collect data otherwise not readily available. But self-reported data is limited by what “individuals know about their attitudes and are willing to relate” (Nunnally, 1967, p. 590). As such, a significant potential limitation in the current study was the overall validity of the measures employed. Third, by focusing only on self-efficacy and outcome expectancy the research provided a limited understanding of people’s psychological engagement with the internet. The study could have been expanded to include other psychological or attitudinal constructs such as perceived need and internet anxiety. Fourth, it is acknowledged that the validity and reliability of a construct cannot be established by a single study. The internet self-efficacy measure developed for the purpose of this research requires further testing and revising in order to improve its psychometric properties. Fifth, some caution must be taken when interpreting the findings in relation to the Australian and US populations. The participants were recruited from a small catchment (i.e. only one city in each nation, and only specific areas within each city) and a non-random sampling approach was used. Thus, what is presented here is a picture of the digital divide as understood by two “small worlds” and more specifically by only a very small and exclusive percentage of members from these small worlds. Limited information was available on non-response (i.e. those who did not complete, or were excluded from completing, the questionnaire). The existing picture could be deepened through replication. Finally, the study can be critiqued on the statistical techniques used for data analysis (i.e. factor analysis and regression) when other more favoured techniques such as structural equation modelling existed.

1.7 Thesis overview

The research is divided into three parts: (i) Part A, setting the scene; (ii) Part B, exploring the problem; and (iii) Part C, developing the theory. An overview of each part is provided below.

• Part A: Setting the scene. This section of the thesis provides an overview of the literature relevant to the research. In doing so it establishes the research problem that the current thesis will explore. It consists of three chapters:

o Chapter two: The information society The information society provides the context for the current research. This chapter provides an overview of the information society, giving particular attention to the existing frameworks used to explore the phenomenon. Information literacy is presented as an alternative framework from which to view the information society. The chapter sets the stage on which the current research will unfold, and highlights how the research will advance current thinking on what constitutes an information society.

o Chapter three: The digital divide The digital divide is the information society issue that the current research will explore. The chapter provides an overview of the digital divide exploring definition, terminology and existing digital divide research. The chapter establishes the contributions of the research to the evolving digital divide research agenda, and how the research will advance knowledge of an important social and economic phenomenon.

o Chapter four: Social cognitive theory This chapter provides an overview of SCT. SCT provides the theoretical framework for the current research’s exploration of the digital divide. The chapter provides an overview of the key assumptions and principles, including the core constructs, of the theory. The chapter establishes the relevance of using SCT in the current research.

• Part B: Exploring the problem. This section of the thesis provides an overview of the method and analysis used to conduct the research. In doing so it establishes the method and analysis used to explore the research problem. It consists of three chapters:

o Chapter five: The method This chapter provides a detailed overview of the method used to explore the research problem. The chapter describes the research philosophy, research approach including plan, method and technique, the research context and sampling plan, the development of the research instrument and data gathering implementation. Ethical considerations are also discussed.

o Chapter six: The participants Using descriptive statistics and informed judgement, this chapter explores the research participants. More specifically, two important sources of error are examined: non-response error and sampling error. The chapter establishes that both the US and the Australian samples are closely representative of the populations from which they were drawn.

o Chapter seven: Construct validation This chapter provides an overview of the psychometric soundness of the scales used in the current research. This is a necessary step before examining the variables further. Factor analysis was used to provide supporting evidence to suggest that the eight scales possess adequate levels of internal consistency and construct validity.

• Part C: Developing the theory. This section of the thesis provides an overview of the research program. In doing so it establishes the significance and contributions of the research to theory and practice. It consists of two chapters:

o Chapter eight: The results This chapter is dedicated to testing the research question. Hierarchical regression analysis was used to establish that the social-cognitive factor – self-efficacy – is the primary predictor of internet use in community.

o Chapter nine: Discussion and recommendations This chapter presents an overall discussion of the current research. The chapter provides an overview of the research program as well as a discussion of the implications of the research for both theory and practice. Recommendations for future research are made, including suggestions to address the limitations of the current research.

1.8 Conclusion

Chapter 1 has introduced the research problem. The research hierarchy by Cooper and Emory (1995) was used to establish the research question. Key features of the research study were outlined, including the overall significance the study will have in both theory and practice. The limitations of this study have been identified.


Part A: Setting the scene

Chapter 2: The information society

2.1 Introduction

The information society provided the context for this research. This chapter examines the information society. The chapter has two aims. First, it examines the current understanding of the information society. This examination focuses on the definition and evolution of the concept. It also provides an overview of the existing information society research, giving particular consideration to the frameworks used to explore the phenomenon. Second, the chapter examines information literacy. This examination considers information literacy as an alternative school of thought from which to view and understand the information society. It also discusses the application of information literacy in understanding the information society to date. In undertaking these two aims the chapter sets the stage on which the current research will unfold, and shows how the research will advance current thinking on what constitutes an information society. The chapter does not attempt to deal with every aspect of the information society or information literacy, nor every author who has written about the concepts. Instead, the chapter provides an overview of the topic based upon the key themes and authors who have made significant contributions to the field of study.

2.2 The information society is here!¹

It has become commonplace to say that we are living in an information society. In much of the popular press, in scholarly journals and in government reports, authors have been readily telling us that “information is the distinguishing feature” of the modern world, that we have entered an “information age” and that an “information revolution” is occurring in which we are experiencing an “informatisation” of life with the world moving into a “global information economy” (Webster, 2003). The term “information society” has arisen out of the accepted belief that “information has become so important today as to merit treatment as a symbol for the very age in which we live” (Martin, 1988, p. 303). Schement and Curtis (1995) proposed that the information society concept was a powerful idea because it provoked the imagination. Similarly, Webster (2004) observed that the information society

…conjures so many different things, each of them of the moment – a world of media saturation, of extended education for the vast majority of us in advanced locations, of generally clever and better informed people, of large numbers of occupations concerned with ‘think work’, of instantaneous movement of information across time and space and of an array of new technologies and especially the internet, but including also cable and satellite television, DVD systems and so on…. All of these appear to be distinguishing features of our world so not surprisingly when we hear the words Information Society we readily consider them to be reasonable as a description of what we are. And yet it is remarkable that information society should evoke so many different associations in our minds – new technologies, higher education, symbolic work and round the clock entertainment (p. 9)

¹ Quote from Lehtonen, 1988, p. 104.

In short, Webster (2004) contended that the term information society “feels right about how we live today” (p. 9). However, he also concluded: “it cannot be long after reflecting on these different images that one begins to ask what fundamentally distinguishes the information society?” (2004, p. 9). Webster raised this question because ultimately he believed that it was incorrect to refer to the current society as an information society. This was an important question for the current research, which was grounded in the premise that the questions of what the information society is, and whether today’s society can be distinguished as an information society, cannot yet be answered given that information society discourse has not yet fully considered all aspects of the concept. The next section discusses how current information society discourse defines the phenomenon under discussion.

2.3 What is the information society?

The information society is a fuzzy concept. Indeed, there has been great difficulty in “deciding exactly what the term means and hence of determining whether we are really members of such a society” (Duff, 2000, p. 278). Whilst the term information society has become commonplace², its use has not been universal. The term has been used with varying connotations and meanings through both popular and scholarly literature. Even scholars specialising in describing the nature of the information society have argued about what it is, and what it means for society. According to Hugh Mackay (2001) the “information society” was an “umbrella term for some important contemporary social transformations that are underway” (p. vii). Mackay (2001) noted that the term has been used to refer to a wide variety of different concepts. However, one commonality existed: those using the term were doing so in an effort to refer to “a new kind of society, one which has to be understood in new terms, which begs some new social analysis or understanding” (p. 11). Beyond this there was considerable variation and “often a lack of clarity in what is being discussed” (p. 12).

² Whilst the information society is perhaps the most common expression, other terms have included post-industrial society and learning society. Whatever the term used, the general consensus is that today’s society is somehow fundamentally different to previous societies and this difference has resulted in a new type of society; a society in which information has taken on a new role and level of importance. For ease of communication in the current research the term ‘information society’ will be used.

Webster (2003) believed that it was important and “especially helpful to examine at the outset what those who refer to an information society mean when they evoke this term” (2003, p. 8). He noted that within the information society literature many commentators wrote with undeveloped definitions of their subject and were “curiously vague about their operational criteria” (2003, p. 10). Webster (2003) argued that it was important “to give close attention to a term which exercises such leverage over current thought” (p. 8). He highlighted the “need to pay attention to the definitions which are brought into play by participants in the debates” (2003, p. 8) if we were going to “adequately appreciate different approaches to understanding information trends and issues” (2003, p. 10) in current society. Given that the information society formed the context for the current research, establishing a clear understanding of the concept was an important first step. Two key resources are used to establish a definition for the information society: the work by Frank Webster (2003) and the recent World Summit on the Information Society (WSIS, 2006).

2.3.1 Webster’s definitions

Whilst over the years several definitions of the information society have been offered, it is the commentary by Frank Webster that has become the seminal work in the area. In Theories of the Information Society, Webster (2003) distinguished five definitions of the information society: technological, economic, occupational, spatial and cultural. These definitions were not mutually exclusive and often overlapped with one another, displaying the complexity of the concept and accounting for its varying and equivocal usage.

• Technological
The technological definition of the information society has been the most common definition. It highlighted the huge innovations in technology. The key innovations were technological advancements in information creation, processing, storage and transmission that have enabled the application of information and communication technologies in every sphere of society. Some of these technologies included computer technology and telecommunications technologies, which have revolutionised the socio-economic context of modern society.

• Economic
The economic approach defined the “information society” in terms of the growing importance of information activities to a nation’s economy. The definition was grounded in the idea that “once the greater part of the economic activity is taken up by information activity rather than agriculture or industrial manufacture, then it follows that we may speak of an information society” (Webster, 2003, p. 12).

• Occupational
The occupational definition highlighted occupational change as a basis for a new form of society. According to this perspective the information society had emerged when the majority of occupations were based upon information activities. That is, the information society had arrived when clerks, teachers, lawyers and entertainers outnumbered coal miners, steel workers, dockers and builders.

• Spatial
The spatial approach emphasised the role and importance of information networks that have been used to connect locations. In this definition information has become the key strategic resource for the world economy and communication technologies (such as satellite, cable and online databases) have removed the boundaries that once existed because of geographic location.

• Cultural
The cultural definition pointed to the increase in information in social circulation and how it affected the pattern of our everyday lives. Webster (2003) noted that “contemporary culture is manifestly more heavily information laden than any of its predecessors” (p. 19).

Upon reviewing all five possible definitions of the information society Webster (2003) concluded that:

what comes clear is that they are either or both underdeveloped or imprecise. Whether it is a technological, economic, occupational, spatial, or cultural conception we are left with highly problematic notions of what constitutes, and how to distinguish, an information society. It is important that we remain aware of these difficulties. Though as a heuristic device the term information society might have some value in helping us to explore features of the contemporary world, it is far too inexact to be acceptable as a definitive term (p. 21).

2.3.2 The world summit on the information society

The World Summit on the Information Society (WSIS) was a logical place to find the answer to the question: what is the information society? The WSIS was held in two phases: in Geneva, December 2003 and Tunis, November 2005. Hosted by the United Nations, the objective of the WSIS was to “develop and foster a clear statement of political will and take concrete steps to establish the foundations for an Information Society for all, reflecting all the different interests at stake” (WSIS, 2006, para. 2). More than 30 000 participants from 175 of the world’s nations participated in the event. Participants included Heads of State/government, Vice Presidents, Ministers and Vice Ministers, as well as high-level representatives from international organisations, the private sector and civil society. The WSIS produced a number of key documents including the Declaration of Principles, the Plan of Action and the Tunis Agenda for the Information Society. A definition for the information society was not provided in any of the WSIS documents. The closest definition for the term was found in the first paragraph of the Declaration of Principles (WSIS, 2003a), where it stated:

we the representatives of the peoples of the world…declare our common desire and commitment to build a people-centred, inclusive and development- oriented Information Society, where everyone can create, access, utilize and share information and knowledge, enabling individuals, communities and peoples to achieve their full potential in promoting their sustainable development and improving their quality of life (para. 1)

Thus, it appeared that an information society as defined by the WSIS was based on some notion of sharing knowledge and information to achieve development goals. A slightly more explicit statement can be found in the Plan of Action: “the information society is an evolving concept that has reached different levels across the world, reflecting the different stages of development” (WSIS, 2003b, para. 2). In a recent content analysis of these documents Pyati (2005), a PhD student at UCLA’s Department of Information Studies, noted that whilst a formal definition was not provided, the documents conveyed a sense of the “important values” and “principles” associated with the term, including communication, human dignity, marginalised and vulnerable groups in society, improving access to ICT, and the ability for all to access and contribute information, ideas and knowledge; the vision was also intrinsically global in nature. In total, 38 values were identified as “key components of the information society” (Pyati, 2005, para. 30). These values and principles provided a sense of the scope and ambition of the WSIS. Pyati (2005) noted that “in the course of this analysis it also appears that the coming of the information society is fundamentally about ICTs and their uses and applications” (para. 32) and as such concluded that for the WSIS the information society was fundamentally based on the role of technology.

2.4 Does the information society really exist?

Given the many and diverse ways in which the information society concept has been defined since it was first introduced into popular discourse in the early 1960s, it is perhaps not surprising that the concept has generated considerable critical debate. Much of this debate has focused on whether the evidence for the information society represented an extension of the industrial society or whether it marked a radical social change resulting in a new social order. In his seminal work Frank Webster (2003) examined the “significance of information in the world today” (p. 263) by asking “how and why it is that information has come to be perceived as arguably the defining feature of our times” (p. 262). Central to the information society concept was the question of whether the current society could indeed be described as an information society. In short, has the information society arrived? In undertaking his examination Webster critically considered two groups of theorists: those who focused on change by proclaiming that a new society had emerged that was quite different to any society that had gone before; and those who focused on continuity and argued that the current society was no different to any other society in history.

In the former group Webster (2003) placed theorists such as Daniel Bell (1972) and Manuel Castells (1996; 1997; 1998). Webster contended that such theorists focused on the novelty of the information society. They argued that this novelty “marks a systematic break with what has gone before” (p. 266). In contrast was the latter group of theorists, including Herbert Schiller (1981) and Anthony Giddens (1998). These theorists acknowledged that change was taking place, with information holding a more central place in society than ever before. What they rejected, according to Webster (2003), was “any suggestion that the ’information revolution’ has overturned everything that went before, that it signals a radically other sort of social order than we have hitherto experienced” (p. 38). It was this latter group that have been most vocal in making their case. In 1995 David Lyon described the information society concept as “problematic”, pointing out that many of the anticipated benefits of an information society had failed to materialise. These anticipated benefits included a leisure lifestyle in a culture of self-expression, political participation and an emphasis on the quality of life. Lyon (1995) ultimately questioned whether the “information society concept should be consigned to the waste bin of redundant ideas” (p. 54). Similarly, Garnham (2000) concluded that the information society concept

has become largely meaningless and the vision bears very little, if any relation, to any concretely graspable reality. It therefore operates not as a useful concept for theoretical analysis but as an ideology. Rather than serving to enhance our understanding of the world we live, it is used to elicit uncritical assent to whatever dubious proposition is being put forward beneath its protective umbrella. Indeed so widespread is its casual and careless deployment in policy discourse, and in what passes for serious journalism and business and economic comment, that it reminds me of nothing so much as those medieval outbreaks of mass flagellation. One can only hope that it passes without causing too much permanent damage (p. 140).

One of the most vocal critics of the information society concept has been Krishan Kumar (1995). Whilst critical of the view that the current society was an information society, Kumar did not dismiss the important role information has had in contemporary life: “the information revolution is a reality and we inhabit that reality. It has affected the way we see the world and way we live in it” (Kumar, 1995, p. 202). Kumar (1995) contended that it would “be perverse and foolhardy to deny the reality of much of what the information society theorists assert” (p. 215) but he was also quick to note that “an information revolution…is not the same thing as an information society” (p. 215). Information may have changed our attitudes to work and family life, but this change does not add up to a new society (Kumar, 1995).

Kumar (1995) questioned the claim that we have entered a “new phase of social evolution” that is comparable to “the ‘great transformation’ ushered in by the industrial revolution” (p. 252). He argued that the industrial revolution achieved new relationships between “town and country, home and work, men and women, parents and children” (Kumar, 1995, p. 251) and that it brought “new work ethics and new social philosophies” (p. 251). However, Kumar concluded that there was no evidence that the information revolution had caused any such major changes. The evidence, according to Kumar, suggested that the information revolution had simply enabled “industrial societies to do more comprehensively what they have already been doing” (p. 251). Kumar (1995) questioned how information society theories could uphold the claim that we have entered a new society when the trends they single out “are extrapolations, intensifications, and clarifications of tendencies which were apparent from the very birth of industrialisms” (p. 252).


Taking both sides into consideration Webster (2003) ultimately concluded: “I have quite forcefully rejected the validity of the concept…the information society concept is flawed” (p. 262). In making this conclusion Webster (2003) argued that those who emphasised the novelty of the information age had fallen into the “trap of presentism”, or “the conceit that one’s own times are radically different from those that went before” (p. 268). This point was also noted by Duff (2000) when he observed that

there is something … arrogant about the assertion that we are now, while they were not then, living in information societies. The computer programmer works with information; so did the railway clerk. The executive of IBM works with information; so too, no doubt, did the butler at Chatsworth. The information scientist at Aslib is an information worker; so was the curator at the Bodleian and, no doubt, the tablet cataloguer at Ebla. To be sure, a plausible case can be made that the contemporary USA, or Japan or Britain is an information society but it is not so easy to argue convincingly that other societies are not-informational (p. 171).

Webster (2003) advocated that the best way to appreciate the current information trends in society was to view them within the historical context of capitalist development. Webster (2003) was not suggesting that “capitalism is the same today as it ever was” (p. 267). Indeed, he proposed that the “informational capitalism we have today was significantly different from the corporate capitalism that was established in the opening decades of the twentieth century, just as that was distinguishable from the period of laissez-faire of the mid to late nineteenth century” (p. 267). Similarly Kumar (1995) observed that the information society:

has not produced a radical shift in the way industrial societies are organized, or in the direction in which they have been moving. The imperatives of profit, power and control seem as prominent now as they have ever been in the history of capitalist industrialism. The difference lies in the greater range and intensity of the applications…not in any change in the principles themselves (p. 154).

Duff (2000) contended that access to information has always been a condition of social power, and that the functioning of even the most primitive societies has been dependent upon information flow. He proposed that

A little objective reflection should tell us that all societies are information societies, but also that they are information societies in different ways. What is special about some advanced societies is that they are dependent upon computerized and telecommunicated information and especially nowadays upon what Joho Shakai researchers called ‘segmented’ (bunshu) electronic information. If this were what the information society thesis was claiming then it would surely command much more respect from its critics (Duff, 2000, p. 12).

In summary, there has been some debate as to whether the information society really does exist. Perhaps the reason so many scholars and commentators have suggested that the current society does not represent a radically new form of information society was that their observations were based on a limited understanding of the phenomenon. This limited understanding has arisen because information society research has not yet explored the full nature of the phenomenon. This was an important point for the current research. A brief examination of the ways of researching, and therefore understanding, the information society follows.

2.5 Ways of researching the information society

In 2000 Alistair Duff published Information Society Studies. The work aimed to determine whether the much-heralded information society had indeed arrived. Duff (2000) argued that there “is no gainsaying that the concept of the information society is an intellectual force to be reckoned with” (p. 13). Instead he proposed that the “great question which needs to be settled is whether there are valid grounds for this phenomenon” (p. 13). Duff (2000) stated: “put bluntly, the prevalence of a belief in the information society does not entail that the information society actually exists in any coherent sense” (p. 13). Duff concluded that finding the answer to this important question was a “methodological matter”. He argued that “to prove the information society existed one must provide a method of verifying its existence. It must be shown how one would go about confirming the proposition that society x is an information society” (2000, p. 14). Duff also concluded that there was no such thing as “the” information society thesis “in the sense of a single set of substantive claims regarding the role of information in society” (2001, p. 232). Instead, he contended that there are several sui generis schools of thought that have concerned themselves with propounding proofs of the “informatisation of society” (2001, p. 232). Whilst each school advanced the same thesis (i.e. that advanced societies have become, or are becoming, information societies) they did so with a different sense of meaning and via different methodological underpinnings. Duff provided a critical analysis of the various schools of thought on the existence of the information society. The schools of thought identified by Duff were clearly grounded in the “research perspectives” originally identified by Steinfeld and Salvaggio (1989) only 11 years earlier, and can be easily linked to the five definitions proposed by Webster. The schools of thought include: economic; information technology; information explosion; and synthetic.

2.5.1 The economic school

The first school of thought was focused on the belief that modern economies were characterised by the expansion of an information sector. This school of thought maintained that workers in advanced societies were typically no longer working with things, as was the case in industrial societies, but with information in some form. Duff identified this as “the American perspective” as he believed that this perspective was grounded in the work by Machlup (1962) and Porat (1977). Steinfeld and Salvaggio (1989) suggested that “a research perspective that places its focus on the information economy as the primary attribute of the information society has both conceptual appeal and empirical support” (p. 38) but that ultimately it would provide only a limited view. Duff (2000) described the research within this school as being “methodologically weak”. He noted that research from this school of thought would provide a picture of greater amounts of information work taking place, but it would not offer any means of differentiating the varying dimensions of information work. Whilst Duff (2000) agreed that Machlup’s work “was without doubt a classic” he suggested that “it needs to be seen as a suggestive tract rather than as a sacred text” (2001, p. 234). Duff (2001) argued that “information society studies has been handicapped by its over-reliance on [this] version” (2001, p. 234) of information society enquiry.

2.5.2 The IT school

The second school of thought focused on the spread or diffusion of information technology. Duff (2001) considered this “the popular version” of the information society thesis as it was the one most commonly referred to in the popular media. He also identified it as “the European version” of the information society thesis. This school of thought emphasised the technological infrastructure almost to the exclusion of other social, economic and political attributes. It was generally futuristic in perspective and invariably optimistic about the impacts of technology. However, as noted by Steinfeld and Salvaggio (1989), “such weighty emphasis on technology generally removed from a social, cultural and political context, is unable to provide an adequate foundation for defining the attributes of information societies” (p. 36). In addition, this school of thought was “not characterized by empirical research, making it difficult to objectively compare across different societies or measure any one country’s progress toward becoming an information society” (Duff, 2000, p. 151). Duff (2000) suggested that the school of thought “lacks a serious research tradition and thus a cogent methodology” (p. 154). Duff (2000) also questioned this school’s assertion that ICT indicated the emergence of an information society, noting that it did not indicate at which point along the technological scale a society could be judged as having entered such a society.

2.5.3 The information explosion school

The third school of thought concentrated on the “information explosion”. This phenomenon has traditionally been understood in terms of a “supposedly exponential growth of scholarly literature, as measured by such familiar yardsticks as the number of journals in existence or the size of university library” (Duff, 2000, p. 186). Duff (2000) classified this as “the Japanese version” of the information society thesis as he identified this area with the work of the “Johoka Shakai” or “informationalised society” researchers (e.g. Ito, 1981). Whilst Duff was highly critical of the previous two schools of thought, he saw great merit in the methods employed in the “information explosion” area. He concluded that “the potential of the Japanese version has not been sufficiently exploited” (2001, p. 234). He identified a basic strength of this version as “its intuitiveness, its congruence with lay perceptions about ‘information overload’ and ‘media saturation’” (Duff, 2001, p. 234). He was of the strong opinion that this school of thought could offer much to further understanding of the information society thesis by indicating that what we were looking for was not so much “the information society as the qualified information society or particular kinds of information society” (Duff, 2001, p. 234). However, in making these claims Duff (2001) also acknowledged that the research approach had “methodological and philosophical matters that need to be settled as with any other immature research tradition” (p. 235).

2.5.4 The synthetic school

Duff (2001) proposed that the fourth and final school of thought was one that blended and brought together all existing schools of thought. Duff called this school of thought the synthetic version. This school acknowledged that “society is multifaceted; so is information” (2001, p. 236). Duff noted that information work, information technology and information flow were all clearly changing society and that “it is hard to see how a final theory of the information society can be one-dimensional” (Duff, 2001, p. 236). This point was also made by Steinfeld and Salvaggio (1989) when they observed that “one dimensional approaches provide little insight into the overall social framework of an information society” (p. 41). Whilst Duff acknowledged the unique contributions of theorists such as Daniel Bell (1973) and Manuel Castells (1998) as potentially falling into this school of thought, he concluded that “a genuinely synthetic theory, one that justifies each element and then skilfully fuses those elements into a cohesive whole, is much rarer” (2001, p. 236). He contended that the “achievement of this goal is still a long way off” (2001, p. 236).

2.5.5 A missing school?

Webster (2003) voiced concern that current definitions or schools of thought for the information society relied too heavily on quantitative measures – for example, the number of information workers, the amount of information flow, or the amount of new technology. These quantitative measures were all grounded in the assumption that “at some unspecific point we enter an information society when this [i.e. information workers, more information, more technology] begins to predominate” (p. 21). Webster (2003) posited that with this approach to understanding the information society there were no clear grounds for proposing that a new type of society existed. He reflected that if there was just more information or more information workers or more ICT, then “it is hard to understand why anyone should suggest that we have before us something radically new” (Webster, 2003, p. 21). In making these criticisms Webster was not completely rejecting the information society concept but rather indicating that current research methods – with their focus on quantitative approaches – had not allowed for informed discourse. Webster (2003) suggested that it may however “be feasible to describe [it] as a new sort of society, one in which it is possible to locate information of a qualitatively different order and function” (p. 21). Thus, for Webster (2003) “the point is that quantitative measures cannot of themselves identify a break with previous systems, while it is at least theoretically possible to regard small but decisive qualitative changes as marking a system break” (p. 22).

Webster (2003) was not alone in his observation that information society research had relied too heavily on quantitative measures. The November/December 2006 issue of the Information Society journal presented four papers from the 2004 working group Measuring the information society: what, how, whom and what for?, organised as part of the 2004 conference of the Association of Internet Researchers (AoIR). According to the editors, Menou and Taylor (2006), the issue showed that “alternative approaches to measuring the information society are necessary and feasible” (p. 262).


In making his case for a qualitative approach to understanding the information society, Webster pointed to the work of Roszak (1986, cited in Webster, 2003b). Roszak emphasised the importance of qualitatively distinguishing “information” and extending it to what each of us does on an everyday basis when we differentiate between phenomena such as data, knowledge, experience and wisdom. In Roszak’s view the present “cult of information” served to destroy the “qualitative distinctions that are the stuff of real life” (Webster, 2003, p. 22). He argued that this was achieved by researchers taking the view that information was a purely quantitative thing subject to statistical measurement, and that by doing this the qualitative dimensions of the subject (is the information useful? is it true or false?) were laid aside:

For the information theorist it does not matter whether we are transmitting a fact, a judgment, a shallow cliché, a deep teaching, a sublime truth, or a nasty obscenity (Roszak, 1986, p. 14)

Webster (2003) contended that these qualitative issues were overlooked as information was homogenized and made amenable to numbering. Roszak (1986) suggested that it was necessary to consider carefully the meaning given to the term “information”. He posited that current research into the information society was limited because it defined “information” as a quantity that was measured in “bits” and articulated in terms of the probabilities of occurrence of symbols. Webster (2003) referred to the words of Stonier (1990) to highlight this point further:

Information exists. It does not need to be perceived to exist. It does not need to be understood to exist. It requires no intelligence to interpret it. It does not have to have meaning to exist. It exists (p. 21)

Webster (2003) advocated the use of a semantic definition, where information “is meaningful; it has a subject; it is intelligence or instruction about something or someone” (p. 24). Webster (2003) proposed that

Information society theorists have jettisoned meaning from their concept of information in order to produce quantitative measures of its growth, then conclude that such is its increased economic worth, the scale of its generation or simply the amount of symbols swirling around, that society must encourage profoundly meaningful change. We have, in other words, the assessment of information in nonsocial terms – it just is – but we must adjust to its social consequences (p. 25).


Webster (2003) suggested that this was “inadequate as an analysis of social change” (p. 25). He advocated that “for any genuine appreciation of what an information society is like, and how different – or similar – it is to other social systems, we must surely examine the meaning and quality of the information” (p. 26). Grounded in this perspective, Webster proposed an alternative framework for considering the information society. This framework was based not on the view that more available information was changing people’s lives but on the premise that there were changes in the way in which life was being conducted because of information. More specifically, Webster referred to the role of “theoretical knowledge” in people’s lives. Webster was not the first to do this. Daniel Bell (1973) introduced the idea of the value of information both quantitatively and qualitatively in his theory of post-industrial society. Webster did not provide a detailed explanation of what he actually meant by the concept of theoretical knowledge – a point he openly admitted. He offered his framework simply as a new perspective that was yet to be fully developed, in the hope that someone would move his ideas forward. This research has taken up Webster’s challenge by proposing a fifth school of thought.

In summary, the information society has emerged as a significant domain of research and enquiry. It has been examined from four schools of thought: (i) the economic, where the focus has been on the economy and the changing nature of the workforce; (ii) the information technology, where the focus has been on the impact that ICT has on society; (iii) the information explosion, where the focus has been on the increasing amount of information in the world; and (iv) the synthetic, which brings together all three preceding schools. However, the information society remains a controversial concept, with continuing debate as to whether it represents continuity or change. Perhaps the answer to this question can be found in a new school of thought. This new school would focus not on economic, ICT or information aspects but on the person. Focusing on the person in the information society and their experience of information could provide a new way of understanding the information society. This is not a radically new proposal. In fact, a whole body of academic discourse has arisen focusing on the information user and their experience of information. This area of discourse is called “information literacy”. The current research therefore lays the foundations for a new school of thought – the information literacy school. The information literacy school of thought would differ from the previous schools of thought because it would focus on the person and their experience of information and their information environment (see Figure 2.1). The information literacy school of thought would focus on establishing the human perspective of the information society.

Figure 2.1: The information society schools of thought

2.6 Information literacy: the fifth school of thought

The term information literacy was first coined in 1974 by Paul Zurkowski in a report to the US National Commission on Libraries and Information Science titled The Information Service Environment, Relationships and Priorities. Although information literacy is a relatively young area of academic discourse, a considerable body of writing and research has arisen exploring the domain. In 2000 the prominent information literacy scholar Christine Bruce observed that “information literacy has been seen by the research and professional community in varying ways” (p. 92). She noted that the two most prominent information literacy paradigms were the behavioural and the experiential 3.

The behavioural paradigm is perhaps the most widely accepted way of seeing information literacy. It is based on the view that information literacy is an amalgam of skills, attitudes and knowledge. One of the oft-cited studies exploring information literacy from this paradigm was conducted by Christina Doyle. Doyle (1992) identified the attributes of the information literate person. The study was conducted on behalf of the National Forum on Information Literacy.

3 It should be noted that Bruce did not use the terms ‘behavioural’ and ‘experiential’; these terms are being used here simply as an aid for effective communication.

Doyle’s list of skills and attributes arose out of a Delphi study in which a group of experts discussed and agreed upon both a definition and the characteristics and attributes that an information literate person would possess. The definition established was: “information literacy is the ability to access, evaluate and use information from a variety of sources” (p. 2). According to Doyle the information literate person was one who:

• recognises the need for information
• recognises that accurate and complete information is the basis for intelligent decision making
• identifies potential sources of information
• develops successful search strategies
• accesses sources of information, including computer-based and other technologies
• evaluates information
• organises information for practical application
• integrates new information into an existing body of knowledge, and
• uses information in critical thinking and problem solving (p. 2).

Doyle pointed out that this definition “highlights the process of information literacy” (1992, p. 2). She described her list of attributes as a “potential rubric for a checklist of skills” (1992, p. 2), regarding her definition as a valuable tool that moved beyond explaining the function of information literacy by also providing an operational list of desired outcomes. She suggested that these outcomes would result in personal empowerment for the individual. Following Doyle’s lead, other attempts to articulate the skills and attributes of the information literate person have emerged, including Lenox and Walker (1993) and Dubois (1997). In addition, many library organisations and institutions have made systematic efforts to develop information literacy programmes and to specify information literacy standards or frameworks. These include the Information Literacy Standards for Higher Education published in 2000 by the Association of College and Research Libraries (ACRL); the Australian and New Zealand Institute for Information Literacy (ANZIIL) Information Literacy Framework (released in 2001 (CAUL, 2001) and revised in 2004 (Bundy, 2004)); and the Seven Pillars Information Skills Model developed by the UK Society of College, National and University Libraries (SCONUL) (1999). These lists of skills and knowledge developed by Doyle and others, along with the information literacy standards, have driven the development of information literacy programs within the educational setting.

The behavioural way of seeing information literacy has in recent years come under critical review. In 2003 Webber and Johnson warned that a focus on skills and attributes resulted in a “tick the box” approach that trivialised human information practices. Bruce (1997, cited in Todd, 2000) highlighted some of the key limitations of conceptions of information literacy as skills or attributes: they tended to be overgeneralised, being for all people at all times – a one-size-fits-all notion; they did not take contextual factors into account, such as the workplace and everyday life, tending instead to be contextualised in formal schooling; they were not constructed on people’s experiences and derived from real people interacting in real world information environments, but were rather the conceptions of experts; and they did not take into account individuals’ idiosyncratic ways of operating. Todd (2000) noted that Bruce did not discount the contribution of the behavioural paradigm to understanding information literacy. Instead she was simply calling for a broadening of scholarly enquiry by developing conceptualisations of information literacy from the real world experiences of users of information in their many different settings.

The experiential paradigm of information literacy has arisen out of this call. It is based on the belief that information literacy may be interpreted as “a complex of ways of experiencing information” (Bruce, 1997). The work by Christine Bruce (1997) is the most frequently referenced study from this paradigm. Bruce (1997) used the phenomenographic method, a form of descriptive analysis that attempts to explain how people (in her study, university academics) conceive of topics or phenomena, to propose a relational model of information literacy. Bruce’s model offered a unique approach to researching and defining information literacy in that it emphasised the importance of understanding the way the concept of information literacy is conceived by information users themselves. By adopting the perspective of the user, the relational model promotes an experiential approach that depicts the interaction between the user and his or her surroundings, stresses the relationship between the user and information, and promotes the view of users rather than that of experts. Bruce identified seven different ways in which individuals experience information literacy. The ‘seven faces’ of information literacy are outlined in Table 2.1.


Face One – information technology: information literacy is seen as using information technology for information retrieval and communication

Face Two – information sources: finding information located in information sources

Face Three – information processing: executing a process

Face Four – information control: controlling information

Face Five – knowledge construction: building up a personal knowledge base in a new area of interest

Face Six – knowledge extension: working with knowledge and personal perspectives adopted in such a way that novel insights are gained

Face Seven – wisdom: using information wisely for the benefit of others

Table 2.1: The seven faces of information literacy (Adapted from Bruce, 1997)

In her model Bruce (1997) noted that information literacy included aspects of computer literacy, learning to learn, information skills, IT literacy and library skills. She defined an information literate person as

one who engages in independent, self-directed learning using a variety of resources (print and electronic). He or she values information and its use, approaches information critically and has developed a personal information style. Information literacy is described as a construct developed by the information literate individual. Such an individual creates a specific relationship with information in which he or she interacts with it in a way that provides personal meaning (1997, p. x).

For the context of the current research it was interesting to note that information technology, or IT, is present in all seven faces of Bruce’s model. However, the emphasis on IT in each of the faces (i.e. in people’s experiences) varied considerably. Bruce (1997) observed that there was a need for IT to recede from the foreground of attention to enable effective use. For example, in the first face IT sat at the very core of the experience of information literacy. Bruce (1997) noted that this was because the lack of use of IT signified ineffectiveness or information illiteracy. In contrast, in the seventh face IT had a more peripheral role. This observation by Bruce (1997) highlighted that IT has an important role in people’s evolving experience with their information world. Being able to successfully deal with IT in an individual’s information world was a vital first step in their progression towards a higher level of information literacy experience.

Interestingly, on the American Library Association (ALA) Institute for Information Literacy website, Bruce’s work is described as showing that “information literacy is far more fluid and complex than American standards and guidelines might suggest” (ACRL, n.d., para. 4). In addition to the principles of independent learning and critical thinking, the relational model promoted the development of personal values that encourage the critical use of information, the acquisition of sound knowledge of information environments and a personal information style that facilitates the learner’s interaction with the world at large. The work by Bruce was “significant in that it shifted the focus from an assumed list of behaviours and attributes embedded in most definitions of information literacy to a more conceptual understanding of people as active consumers and users of information” (Todd, 2000, p. 28). Bruce has not been the only researcher to explore experience and conceptions of information literacy. Other investigations have looked at the experience of information literacy in the workplace (Cheuk, 2005; Lloyd, 2005); students’ experiences of learning through information seeking and use (Limberg, 1999); experiences of using thesauri in the online environment (Klaus, 2000); students’ experiences of online searching (Edwards, 2006); students’ perceptions of information skills (Morrison, 1997); faculty attitudes towards information literacy in science and engineering (Leckie and Fullerton, 1999); and faculty conceptions of information literacy and pedagogy (Webber and Johnson, 2006).

Whilst Bruce noted that both paradigms were required to develop a complete picture of information literacy, it was the experiential paradigm that provided the conceptual foundations for the current research. This paradigm was identified as an appropriate conceptual framework for the current research because it offered a more complex, sophisticated and person-centred way of viewing information and information literacy. It is for this reason that the experiential paradigm could also provide the basis for a new school of thought exploring the information society.

2.6.1 Information literacy and the information society: closely linked concepts

Information literacy and the information society are closely linked concepts. According to the United Nations’ Prague declaration, information literacy was a “prerequisite for participating effectively in the information society” (ILME, 2003, para. 1). It is essential in reducing inequities, promoting tolerance and closing the digital divide (a point of particular relevance to the current study!). Philip Candy observed: “information literacy is the zeitgeist of the times…an idea whose time has at long last come” (1996, p. 139). The importance of information literacy was succinctly conveyed in the 2006 Australian Library and Information Association (ALIA) Statement on information literacy for all Australians. The first object of the ALIA was: “To promote the free flow of information and ideas in the interest of all Australians and a thriving culture, economy and democracy” (para. 1). This object is grounded in the principle that

a thriving national and global culture, economy and democracy will best be advanced by people who are empowered in all walks of life to seek, evaluate, use and create information effectively to achieve their personal, social, occupational and educational goals. It is a basic human right in a digital world and promotes social inclusion within a range of cultural contexts (Garner, 2006).

According to ALIA, information literacy could contribute to:

• learning for life;
• the creation of new knowledge;
• acquisition of skills;
• personal, vocational, corporate and organisational empowerment;
• social inclusion;
• participative citizenship; and
• innovation and enterprise (para. 3).

The 1994 information literacy policy statement of the Australian School Library Association (ASLA) was lengthier than the ALIA statement, but provided some useful additional points about why information literacy was important.

Information is as essential to our survival as water, food, shelter and sleep. Information is, however, much more than a survival tool. Information unleashes our imagination and challenges our preconceptions and thereby provides us with a pathway to personal growth and fulfilment.

Throughout history the processing of information has been essential to assist human survival and growth. The last few decades have witnessed an amazing increase in the quantity of information and the Australian workforce is now concentrated around the collection, analysis, manipulation and communication of that information. Change has been so dramatic that Australia can now be described as an information society.

Today's decision makers are often overwhelmed with information and the challenge for them is to choose that which is appropriate. Effective decision making is built upon timely access to this information and the ability to process the available information to suit the requirements of the decision. This problem exists for the aged, for those in employment, for the unemployed and for those who are at school.

The need to be able to use information effectively has in many cases become more important than the acquiring of factual knowledge itself. The sum total of information increases at such a rate each day that yesterday's best answer may be known to be incorrect today. Much of what many children learn during their school life will be quite obsolete by the time they enter the workforce.

Effective learners are not just those people who are knowledgeable but rather they are people who are able to find and use information as required. We might say that effective learners are those who are information literate. Information literacy is synonymous with knowing how to learn. This means that the ability to process and use information effectively is more than a basic tool for the empowerment of school students. It is in fact the basic survival skill for those who wish to be successful in the 1990s and beyond (ASLA, 1994, para. 1-5).

In reviewing both the ALIA and ASLA statements, Bundy (2001) suggested “several bold but irrefutable conclusions can be drawn” (p. 6):

• Information, knowledge, learning community and lifelong learning issues will dominate in the 21st century, and likely beyond.

• The more that citizens have access to information – and the Information Literacy to use it – that is relevant to local, national and global socioeconomic and political development, the more they will prosper.

• The more information rights citizens have the more they will be free, if we accept the Jeffersonian tenet that 'information is the currency of democracy' (p. 6).

Further support for these conclusions, and especially the link between information literacy and democracy, can be found in the American Library Association (ALA) 1989 report:

Citizenship in a modern democracy involves more than knowledge of how to access vital information. It also involves a capacity to recognize propaganda, distortion, and other misuses and abuses of information. People are daily subjected to statistics about health, the economy, national defence and countless products. One person arranges the information to prove his point, another arranges it to prove hers. One political party says the social indicators are encouraging, another calls them frightening. One drug company states most doctors prefer its products, another ‘proves’ doctors favour its products. In such an environment information literacy provides insight into the manifold ways in which people can all be deceived and misled. Information literate citizens are able to spot and expose chicanery, disinformation and lies.

To say that information literacy is crucial to effective citizenship is simply to say it is central to the practice of democracy. Any society committed to individual freedom and democratic government must ensure the free flow of information to all its citizens in order to protect personal liberties and to guard its future. As US Representative Major R. Owens has said: information literacy is needed to guarantee the survival of democratic institutions. All men are created equal, but voters with information resources are in a position to make more intelligent decisions than citizens who are information illiterate. The application of information resources to the process of decision making to fulfil civic responsibilities is a vital necessity (para. 17-18).

UNESCO refers to the importance of information literacy in terms of capacity building: “everybody should have the opportunity to acquire the skills in order to understand, participate actively in and benefit fully from the emerging knowledge societies” (2005, para. 1). Marchionini (1999) argued that information literacy was not just about capacity building but about “basic rights”. He proposed that citizens should “demand and expect”:

• Direct access to basic survival information
• Access to accurate and authoritative information
• Access to timely information
• Cost-effective access to information
• Powerful, easy to use systems for accessing and using information
• Open (uncensored), high bandwidth public channels for communication and information transfer
• in accessing and using information
• Alternative sources and forms of information (p. 24).

A similar view was noted in the 2005 United Nations sponsored colloquium on information literacy and lifelong learning held at the Bibliotheca Alexandrina in Egypt. The resulting document – the Alexandria Proclamation on information literacy and lifelong learning – stated that


information literacy lies at the core of lifelong learning. It empowers people in all walks of life to seek, evaluate, use and create information effectively to achieve their personal, social, occupational and educational goals. It is a basic human right in a digital world and promotes social inclusion in all nations (ILME, para. 2).

The important relationship between information literacy and lifelong learning noted in the Alexandria Proclamation was expanded in the IFLA Guidelines on Information Literacy for Lifelong Learning (Lau, 2006):

information literacy and lifelong learning have a strategic, mutually reinforcing relationship with each other that is critical to the success of every individual, organization, institution, and nation state in the global information society. These two modern paradigms should ideally be harnessed to work symbiotically and synergistically with one another if people and institutions are to successfully survive and compete in the 21st century and beyond (p. 12).

A 2006 UNESCO publication provided a real life example of the importance of information literacy. Referring to the December 2004 tsunami in South and South East Asia, the report noted that whilst tens of thousands of lives were lost during the tragic event, “some were actually saved thanks to the information literacy of one child” (Sayers, 2006, p. 71).

A ten year old girl on holiday saved over 100 lives in Phuket, Thailand when the tsunami hit in December 2004 because she was ill….Tilly Smith of Oxshott, England, having researched tsunamis two weeks prior to her holiday in geography class, recognized the early warning signs of an imminent tsunami, and took action. Because of her ability to use and apply the knowledge she had learned, the beach was cleared and no lives were lost (Sayers, 2006, p. 71).

This story clearly highlighted the importance of both information and information literacy in people’s lives.

2.6.2 Application of information literacy in understanding the information society

Whilst information literacy has rapidly become an object of research interest in the education and the library and information science disciplines, the vast majority of this research has been confined to educational (Bruce, 1997) and workplace settings (Lloyd, 2005). Little to no research (from either the behavioural or the experiential paradigm) has been conducted exploring information literacy within the context of everyday life. This point was commented on by Todd (1999):


[information literacy is an] essential dimension to personal empowerment and to the quality of life beyond formal school. Information makes a difference to the everyday lives of people and that having the knowledge and skills to connect with and interact with this information can enable people to solve real world problems and address life concerns. The information literacy literature to date gives little attention to this (p. 30).

The absence of research into information literacy within the everyday context is perhaps not unexpected given that information literacy research in general is, as noted by Bruce, “still in its infancy” (2000, p. 91). Information literacy research is just over twenty years old and only now entering what Bruce calls the “evolving phase” in which the “research territory is…beginning to emerge” (p. 104). Spink and Cole (2001b) observed that everyday life information seeking (ELIS) represented a relatively new research focus within the library and information science research community and that it offered an important and challenging area of scholarly enquiry because ELIS was “fluid, depending on the motivation, education and other characteristics of the multitude of ordinary people seeking information for a multitude of aspects of every day life” (Spink & Cole, 2001b, p. 301).

A growing body of research has begun to investigate the nature of information as it has been experienced by people within the context of their everyday life. To date studies have investigated the information behaviour of battered women (Harris, Stickney, Gaslye, Hutchinson, Greave & Boyd, 2001; Dunne, 2002); women (Young, 2002); older adults (Wicks, 2004); homeless people (Gale, 1998); and African Americans (Spink & Cole, 2001a). Hargittai and Hinnant (2006) noted that a distinguishing feature of the emerging research into information in the everyday context was the focus on the “small worlds” of the groups being studied. A “small world” was a “society or world in which members share a common worldview…Members [of the small world] determine what is, and what is not, important, and which sources can be trusted” (Fulton, 2005, p. 80). In addition, a number of models or frameworks have been developed in an attempt to explain information practices in the everyday context. For example, the everyday life information seeking (ELIS) model by Reijo Savolainen (1995), the ecological theory of human information behaviour by Kirsty Williamson (1998), information grounds by Karen Fisher (Pettigrew, 1998), and Chatman’s suite of information behaviour frameworks (1985; 1990; 1996). It is beyond the scope of the current work to provide a detailed discussion of these frameworks and models.


However, what is relevant to the current research is the growing number of studies exploring the way people use the internet within the context of their everyday information practices 4. These studies are briefly reviewed here. As noted by Savolainen (2004), the vast majority of these studies have focused either on the popularity of the services used (i.e. email vs. the web) or on the purpose of use. Overall, these studies have concluded that the internet has become embedded in everyday life, that it has taken on the “role of a complementary information system in everyday life, side by side with existing information resources”, and that “not only does access to the internet in everyday life change people’s habits and practices of information activities it also adds to them with new dimensions” (Savolainen, 2004). In the last few years a growing number of studies have begun to move away from simply exploring the use of the internet within day to day life and have instead begun to explore how people conceptualise the internet as an information resource. As noted by Savolainen (2004), “the more people face complex information environments with a growing number of alternative sources, the more important the general conception of the qualities of alternative information sources will be when making decisions to consult or avoid them”. These studies were of greater interest within the context of the current research as they were starting to explore the cognitive, affective and attitudinal factors that impact upon a person’s decision to engage in a particular information or IT related behaviour or context.

A 2002 study by Slone was one of the first to explore how people conceive of the internet. Interviews were conducted with 31 public library users, who were asked to describe the internet in their own words. Slone broadly classified the answers according to their level of specificity as follows: vague, satisfactory, technical, and glowing. Those offering vague descriptions used adjectives, metaphors or broad descriptions of internet content, without words such as “computer”, “search”, “technology” or their synonyms. Instead they used terms such as “informative” or “convenient”. People using a vague description suggested that the internet was “a sort of…big video game with unlimited resources”. As perhaps expected, many of those with vague descriptions appeared to be novice users. The descriptions classified as satisfactory were broad descriptions of the internet’s capabilities or the ways in which it could be used, and contained no unrealistic expectations of what could be achieved online.

4 It should be noted that quite a number of studies have explored people’s experience of the internet within the academic setting (for example, Edwards, 2006; Hughes, 2004); however, as the current research is focused on the community setting, these studies will not be explored.


Once again, words such as “computer”, “technology”, “search” or their synonyms were not used. In contrast, technical descriptions contained detailed descriptions of the internet and/or its capabilities and comments that implied no unrealistic expectations. Words used included “modem” and “network”. Glowing descriptions contained expressions such as “anything”, “every”, or “all” in reference to the amount, source or kind of information that could be found on the internet.

Inspired by the work of Slone (2002), Savolainen and Kari (2004) conducted a study in Finland in which they interviewed 18 people to determine their conceptions of the internet as an information tool. They noted that “similar to Slone’s findings…the respondents of our study articulated their conceptions of the internet on varying levels of specificity and concreteness” (p. 222). Two conceptions of the internet as an information tool were identified: a metaphorical conception and a conception based on actual use experience. Savolainen and Kari (2004) noted that the metaphorical conceptions varied from the unspecific, such as “an infinite space”, to the more conventional, such as “an ocean of information where everyone may fish”. Many metaphors were library based, for example, “the internet is like a big library in the corner of one’s own home” (p. 222). The researchers noted that the majority of metaphors were positive in outlook, emphasising the opportunities offered by the internet. It was interesting to note that very few respondents paralleled the internet with computers or a network. The second conception the study identified was based upon an individual’s experience of using the internet, most notably the utilisation of email and the search engines available on the Web. Savolainen and Kari (2004) noted that the most specific discussions in this conception related to the technical and social qualities of the internet. For example, “the internet is constituted by small networks…a kind of global network…I see the internet as a communication tool, a work tool, an information tool, a tool connecting people” (p. 224). It was also important to note that this conception of the internet was dynamic; as new use experiences were gained new conceptions appeared. The authors concluded that people have difficulty in trying to define or conceptualise the internet due to the fluctuating nature of the technology. They noted that whilst the internet had not fully met the expectations of many of the respondents in the study, it had largely been accepted as an information resource. Savolainen and Kari (2004) proposed that people’s conceptions of the internet were important when developing information literacy programs. They recommended that people participating in information literacy programs should be encouraged to reflect on their current conceptions of the internet and to explain what they expected from it as an information source as compared to alternative sources. In this way, they suggested, “the internet can be placed more meaningfully in the field of everyday information sources” (Savolainen & Kari, 2004, p. 226).

Building on this work, Savolainen (2004) conducted another study exploring how people construct the internet as a meaningful (or meaningless) source of information, as compared to alternative sources. According to Savolainen (2004) the study provided “a new perspective to internet use…by focusing on the variability of ways in which people characterize the significance of the internet in ELIS” (para. 3). Three conceptions of the internet were identified: the enthusiastic repertoire, the realistic repertoire, and the critical repertoire. Those drawing on the enthusiastic repertoire had an “overt optimism for the new kinds of potential” (para. 18) the internet offers. The internet was seen as rapid, easy, versatile and interactive. People taking this conception were “high spirited and innovative actors who take advantage of the new technology and adopt new ways to seek information, replacing the old fashioned and less effective sources and channels” (para. 34). The realistic repertoire was characterised by a more reserved way of talking about the advantages offered by the internet. The role of the internet in information seeking was constructed as something that depended on situational factors such as the task at hand. The importance of putting the internet in “its own place” (para. 35) was emphasised: people should “control the networked tools, not allow them to occupy too central a role in information seeking and daily transactions” (para. 35). People taking on this conception were skilful, context-sensitive searchers who were aware of alternative sources and their strengths and weaknesses. The critical repertoire was directly opposed to the enthusiastic repertoire. It was characterised by a reserved, even negative, standpoint towards the internet. People taking this conception were sceptical and doubtful: doubtful about the quality of the material found on the internet and sceptical about their own abilities to cope in the evolving networked information environment. Savolainen (2004) pointed out that none of the repertoires was better than the others; they should be seen not as competitors but as alternative interpretative resources. He suggested they may be replaced by new repertoires as people adopt new viewpoints on the internet as a source of information.

In summary, the recent studies exploring how people experience and engage with their information environments – including digital tools such as the internet – provide support for information literacy as a relevant new way of understanding the information dominated age we live in.

2.6.3 Relationship to other literacies

Given that today’s society is very much a digital one, it is also important to consider the relationship of information literacy to IT related literacies such as computer/IT literacy, digital literacy, media literacy, network literacy, internet literacy, web literacy and e-literacy. These other types of literacy have arisen, for the most part, because the vast majority of information today is in digital form, with the result that many information users are required to develop new capacities, skills and knowledge to be successful. Indeed, this is one of the issues at the heart of the digital divide. Town (2003) noted that “the most critical distinction, and the most damaging elision for the understanding of the information society issues, was that between information literacy and IT/computer literacy. Societies which failed to understand the distinction were unlikely to create the full range of programmes necessary for their citizens” (p. 84). Town (2003) thus suggested that information literacy and computer/IT literacy were two very different concepts with different skill sets and knowledge bases, and that both were required in today’s information rich and IT heavy world. To focus solely on one to the exclusion of the other would not lead to success for either the individual or society. It could be argued that this is what is currently happening in society, with the latter – IT/computer literacy – being the sole focus. This point has recently been noted by digital divide researchers and is discussed further in Chapter 3. Given that the current research was using information literacy as a conceptual framework to explore digital inequality in community, it was important to consider the evolving way in which technology and information literacy have been perceived.

In 1983 Forest Woody Horton provided one of the first attempts to differentiate information literacy from computer literacy:

Information literacy, then, as opposed to computer literacy means raising the level of awareness of individuals and enterprises to the knowledge explosion, and how machine-aided handling systems can help identify, access and obtain data, documents and literature needed for problem solving and decision making (p. 15).

Horton (1983) viewed information literacy as bridging a “literacy gap” between knowing and not knowing what was available and how to access it. Information literacy moved beyond computer literacy in that it updated the working knowledge of users on machine-aided tools and resources such as online databases, email and library networks: “while computer literacy is a prerequisite to information literacy, it is no longer adequate” (p. 15).

Similarly, Taylor (1986, cited in Bawden, 2001) noted:

there is an unfortunate tendency to equate computers and information and hence to equate computer literacy for information literacy. Computer literacy however is not enough and never will be enough for intelligent survival (p. 227).

Oxbrow (1998) saw information literacy as differing from, and going beyond, computer literacy by virtue of a changed focus on “the content that flows through the technology – a focus on information and knowledge”. Without this broader focus, Oxbrow suggested, “a computer literacy society is likely to be inefficient and frustrated” (p. 359).

Mutch (1997) took a stronger view, stating that information literacy must be placed above computer literacy because whereas the latter focuses on the ability to use a computer, the former is strongly linked to lifelong learning. This point of view is fully supported by the Association of College and Research Libraries (ACRL) in its Information Literacy Competency Standards, which stated that “information literacy initiates, sustains and extends lifelong learning through abilities which may use technology but are ultimately independent of them” (ACRL, 2000, p. 5).

While most writers have seen information literacy as a subset of, or advance on, computer or IT literacy, Brouwer (1997) viewed information literacy as a component of a broader concept of computer literacy. He based this view upon a critical thinking approach to information technology, in which he recognised the computer as a cultural artefact, a tool that both amplifies and reduces aspects of the human experience. He identified three principal components of computer literacy: (i) an understanding of the power and limitation of technology tools, (ii) information literacy, based upon a critical approach to understanding and using information, and (iii) social-political dimensions of understanding information technology use.

In direct contrast to the views noted thus far was the view held by Tuckett (1989, cited in Bawden, 2001). He asserted that “while you can be computer literate without being information literate, you cannot possibly be information literate…without also being computer literate” (p. 227). Tuckett proposed that information literacy and computer literacy were separate but related concepts and that

[it is not] particularly important [to] define the exact difference between them, whether for example, computer literacy is actually a subset of information literacy or whether they stand as separate, distinct, but related sets of skills and knowledge (p. 228).

It must be noted that whilst the current research focused on people’s experience of IT (and specifically the internet) in their information worlds, the research did not follow the view supported by Tuckett. Rather, the research was grounded in the belief that an individual can be information literate without being IT or computer literate. In short, the research did not view members of community who have chosen not to integrate ICT into their lives (for whatever reason) as information illiterate.

The US National Research Council also discussed information literacy and information technology in the 1999 document Being Fluent with Information Technology. The report stated:

Information literacy focuses on content and communication: it encompasses authoring, information finding and organization, research and information analysis, assessment and evaluation. Content can take many forms: text, images, video, computer simulations, and multimedia interactive works. Content can also serve many purposes: , art, entertainment, education, research and scholarship, advertising, politics, commerce and documents and records that structure activities of everyday business and personal life….By contrast, FITness focuses on a set of intellectual capabilities, conceptual knowledge and contemporary skills associated with information technology….Both information literacy and FITness are essential for individuals to use information technology effectively (pp. 48-50).

Candy (2000) recognised the difficulties in comparing information literacy to ICT literacy in an effort to identify the distinguishing features between these two concepts. He argued that, from the perspective of skills requirements, those needed to access and retrieve information were completely different from the competencies required in assessing the information. However, a degree of overlap between these two processes occurs because “information in the digital environment is at least partly an artefact of the technology itself” (p. 137). He concluded that “information in the digital environment is a challenging matter, and one that cannot readily be divorced from the technological competence of the inquirer” (p. 138).

And what of more recent literacies such as network literacy, digital literacy, internet literacy or web literacy? Charles McClure (1994) was among the first to introduce the concept of network literacy and proposed that network literacy was not a stand-alone concept but that it worked in conjunction with other literacies such as media literacy, computer literacy and traditional literacy, within the context of information problem solving skills, to achieve information literacy. McClure acknowledged that the concept of network literacy, and how it related to other literacies, still required much attention and research. Indeed, commentators such as Kathleen Tyner (1998) suggested that whilst McClure had put forth a

sincere and well intentioned attempt to integrate the overwhelming competencies of the various literacies and to further a much needed interdisciplinary discourse about the many goals and competencies that literacies have in common. It is problematic to adherents of other multi-literacies because it positions their literacy in the service of other literacies (p. 103).

Digital literacy has been used by a number of authors throughout the 1990s to refer to the ability “to understand and use information in multiple formats from a wide variety of sources when it is presented via computers” (Gilster, 1997, p. 1). One of the leading commentators on digital literacy has been Gilster, who suggested that “digital literacy is about mastering ideas, not keystrokes” (p. 1) and, as such, distinguished this concept from more restricted views of computer or IT literacy. Gilster viewed digital literacy as “literacy in the digital age” and thus an extension of traditional literacy to meet the needs of the contemporary age. The internet has a central role in Gilster’s understanding of digital literacy. Indeed, he suggested that acquiring digital literacy for the internet involves mastering a set of core competencies including: (i) the ability to make informed judgements about what is found online; (ii) skills in reading and understanding in a dynamic and non-sequential hypertext environment; (iii) knowledge assembly skills; (iv) search skills; (v) managing the information flow; (vi) managing the multimedia flow; (vii) creating a personal information strategy; (viii) an awareness of other people; (ix) being able to understand a problem/information need; (x) understanding of backing up traditional forms of content with networked tools; and (xi) wariness in judging the validity and completeness of material. Clearly, from this list, digital literacy and information literacy have a strong connection.


Interestingly, whilst the term internet literacy has received much attention in the popular media – indeed the concept has been widely popularised by Fred Hofstetter (1998), whose book of the same title is now in its fourth edition – no formal definition of the concept, or a structured list of specific skills or competencies that are unique to internet literacy, has been developed. Bawden (2001) suggested that internet literacy “appears to denote essentially the same as network literacy and to a large extent digital literacy”. A related concept is that of web literacy, which Piper (2000) described as critical evaluation skills that are “not qualitatively different than information literacy” (para. 59).

It is perhaps appealing to spend much time discussing the finer points of sometimes contradictory definitions, but as Bawden (2001) stated:

To deal with the complexities of the current information environment a complex and broad form of literacy is required. It must subsume all the skills based literacies, but cannot be restricted to them, nor can it be restricted to any particular technology or set of technologies. Understanding meaning and context must be central to it. It is not important whether this is called information literacy, digital literacy or simply literacy for an information age (p. 251).

2.7 Conclusion

In a recent publication Van Dijk noted that “with good reason the concept of the information society is controversial” (2005, p. 133). This chapter has provided an overview of the information society. The information society concept has been defined in five ways: technological, economic, occupational, spatial and cultural. Over the years the information society concept has generated considerable critical debate on whether the evidence for the information society represented an extension of the industrial society or whether it marked a radical change resulting in a new social order. In reviewing existing information society research, Duff proposed that there are four primary schools of thought: (i) the economic school, where the focus has been on the economy and the changing nature of the workforce; (ii) the information technology school, where the focus has been on the impact of ICT on society; (iii) the information explosion school, where the focus has been on the increasing amount of information in the world; and (iv) the synthetic school, which has combined all three previous schools. The schools of thought have been criticised for their quantitative focus (e.g. number of information workers, amount of information, amount of ICT), and consequently for their inability to provide a full and informed discourse on the information society. None of the existing schools of thought has adopted a “user” or “person” centred focus. An information literacy school of thought would focus on the person and the way in which he or she experienced information and their information environment. To date only a small but growing number of studies have explored people’s experience of information within the context of their everyday life, and especially the way in which people experience or engage with ICT such as the internet. These studies have illustrated that information literacy is a suitable mechanism from which to view and understand people’s experience of the contemporary age we live in. Consequently, information literacy has the potential to be the fifth school of thought in understanding the information society. The current research contributed to the growing body of studies exploring the information society by exploring a core information society issue – the digital divide – from an information literacy school of thought. In short, the research explored digital inequality within community by focusing on the individual’s experience of digital technology. This research has therefore started the conversation regarding the “information literacy school of thought” of the information society.


Chapter 3: The digital divide

3.1 Introduction

The previous chapter provided an overview of the information society. It established that current understanding of the information society has arisen via four schools of thought: the economic, the information technology, the information explosion and the synthetic. There has been a growing need for the scholarly community to undertake research exploring the information society from the human perspective, that is, from the individual’s experience of information and ICT in everyday life. Information literacy is an emerging field of study that can provide this alternative human perspective. The research outlined in this thesis was undertaken from the perspective of the information literacy school of thought. This chapter examines the digital divide, the information society issue that the current research explores. The chapter has two aims. First, it provides an overview of the digital divide, focusing on the definition, terminology and significance of the issue. Second, it examines existing digital divide research, focusing on current research into the digital divide as well as the ongoing debate about what constitutes digital divide research. In undertaking these two aims the chapter establishes the contributions of the current research to the evolving digital divide research agenda, and highlights how the research will advance current knowledge of an important social and economic phenomenon. There is a burgeoning literature on the digital divide. It is beyond the scope of the current work to provide an exhaustive overview of all commentators in the field and of all sub-elements of the phenomenon. Instead, the work focuses on the key scholars and key issues that have dominated and shaped the academic and popular discourse.

3.2 What is the digital divide?

This section explores how the digital divide has been defined. The issue of terminology is also considered. How the digital divide has been defined, and what language has been used to communicate the concept, has an impact on how digital divide research has been approached and, ultimately, on how understanding of the phenomenon has developed and how attempts to bridge the digital divide have been crafted. The issue of definition was of crucial importance for the current research because the research was grounded in the premise that the existing understanding of the digital divide, an understanding that has guided government policy and action to date, was limited by the uni-dimensional definitions that have guided research endeavours.

3.2.1 The need for a conceptual and operational definition

In a recent paper entitled Assessing the digital divide, Arquette, a PhD candidate in the Department of Communication Studies at Northwestern University in the US, proposed the existence of a “definitional divide” (2001, p. 2) within the current digital divide literature. According to Arquette (2001) the “entire ‘e’ research community is plagued by a lack of definitional clarity” (p. 3), where “often times, the conversation between researchers can evoke ships passing in the darkness of night” (p. 3). Arquette pointed out that

’e’ researchers use terminology such as digital divide…as though there were a general consensus as to what [these] terms included and excluded, or as though the definitions for [these] terms were as clear as for any term in Webster’s Dictionary…Each researcher assumes other researchers use the same definitional frameworks (p. 3).

Arquette (2001) warned that “one should look before one leaps” (p. 3), cautioning that there is “no such shared meaning in nomenclature” (p. 3). According to Arquette (2001) the recent explosion of digital divide research has had no common analytical framework, and the diverse connotative and denotative meanings that each researcher has used when conducting research into the digital divide have created “a body of scholarship lacking recognizable coherence” (p. 2). As such, “comparisons of research results are at best limited to the particular study and at times misleading when generalized beyond the definitional framework used by the researcher in the study” (Arquette, 2001, p. 3).

Arquette (2001) pointed to three problems that could arise because of the lack of attention by researchers to conceptual and operational definitions: (i) conceptual incoherence, (ii) operational fragmentation, and (iii) conceptual fit disjuncture. When considering current digital divide research, conceptual incoherence can be noted in the reliance on “the same tongue-in-cheek approach used by US Supreme Court Justice Potter Stewart when defining pornography, ‘I know it when I see it’” (Arquette, 2001, p. 4). Arquette (2001) warned that this approach to conceptual definition construction was not sufficient for social scientific inquiry. Operational fragmentation in digital divide research was present in the “potpourri of different ways to observe and measure” (p. 5) the phenomenon, which only served to magnify the growing confusion in the area and to limit “the external validity of the particulars of any given study to the broader project of advancing dialogue on digital divide research” (p. 5). The final issue, conceptual fit disjuncture, occurred when the “matches between conceptual and operational definitions within studies may be suspect” (p. 6); as an example, Arquette (2001) pointed to the many attempts in the literature to measure the global digital divide in terms of internet penetration. Taken together these three problems have converged to pose “a very real challenge to…[the] scholarship in the broader research conversation on the digital divide” (Arquette, 2001, p. 3).

Using a meta-analysis of the current digital divide literature, Arquette (2001) provided an interesting illustration of this challenge. The analysis revealed that when the divide was defined in terms of either infrastructure or use there was clear evidence of an existing divide, but when it was defined in terms of access the presence of a divide was unclear “and more likely not present at all” (Arquette, 2001, p. 20). According to Arquette (2001) the ramifications of the “definitional divide” are enormous as

researchers continue to debate the absence/presence of a divide and its scope without a shared understanding of what is meant by the digital divide. It is no wonder that research and policy consensus has yet to emerge (p. 2).

Arquette (2001) has not been the only researcher to comment on the lack of definitional clarity in digital divide research. In 2002 Mitchell undertook a study exploring the perceptions of the digital divide held by thirteen individuals who led various efforts to bridge the digital divide in Washington State. In describing the digital divide, Mitchell (2002) noted that the interviewees provided several different meanings, and that the “lack of a single, focused definition of the digital divide is problematic for the efforts to ameliorate the divide” (para. 93). Mitchell (2002) postulated that the existence of multiple meanings for the digital divide concept may indicate that: (i) ICT’s influence on society was still unfolding, (ii) ICT was influencing nearly every dimension of society including the macro socio-cultural dimension, and (iii) ICT’s influence may be so pervasive that it may not be adequately definable using currently available contexts and concepts. Whatever the reasons for the development of multiple definitions for the term, “without a common definition of the digital divide, establishing common goals and approaches for bridging the digital divide will be difficult” (2002, para. 93).


The concerns raised by digital divide scholars such as Arquette (2001) and Mitchell (2002) about establishing a clear conceptual and operational definition when engaging in research were not new. In a 1989 publication Larry Barker reminded the research community that “researchers must ensure that definitions used in a given study are accurate, adequate and understandable by readers of their research reports” (p. 7). Barker (1989) noted “one difficulty in establishing logical relationships between previous research and a given problem stems from researchers’ use of different definitions and operationalisations for the same concept or variable” (p. 70). Arquette (2001) suggested that this compromised the research process, with the process being “stuck in a sort of holding pattern, perhaps at best spiralling, but generally unable to advance. In short, the conversation between researchers is short circuited” (p. 70). Barker (1989) also noted that the different terms being used in academic research may be the result of different disciplines bringing alternative definitions without clearly articulating or specifying their particular definition. This was a very real issue for contemporary digital divide research, which stems from a multitude of disciplines including psychology, philosophy and information science. According to Barker (1989)

such differences often result in confusion among readers, incorrect references and generalizations from the results to other variables or situations and the appearance of contradictory results among studies when, in fact, the different definitions of variables were the ‘culprits’ (p. 70).

Barker (1989) highlighted three important questions that should be considered when evaluating a researcher’s definition of a concept: (i) is the definition adequate? (ii) is the definition accurate? and (iii) is the definition clear?

The issue of definition was of crucial importance for the current research. Indeed, this research was grounded in the belief that the popularly accepted definition of the digital divide (i.e. physical access to ICT) has resulted in the formation of a blinkered research agenda that has provided only limited insight into the totality of the phenomenon. This, in turn, has led to the formation of constrained government policy and action. Thus, guided by the need to avoid the “definitional divide”, a short discussion of the digital divide concept, and how it has been defined to date, is presented below. The discussion also establishes how the term was used in the current research.


3.2.2 Attempts at a definition

Like many issues, the digital divide existed long before the problem was named and studied. The term digital divide gained popularity in the mid 1990s as a convenient metaphor to describe the disparity between those who had access to the internet and those who did not. The term was first introduced into the popular domain in 1996 when Jonathan Webber and Amy Harmon, journalists at the Los Angeles Times, wrote an article describing the social division between those who were using technology and those who were not (Cisler, 2000). Illustrating the early gendered aspect of the problem, their article used the term to describe “the split between a husband who spent a great deal of time online and a wife who felt alienated from him because of his obsession with computers” (Cisler, 2000, para. 3). In that same year the digital divide expression entered the political arena when the then US Vice President, Al Gore, made the following speech in a ceremony honouring Blue Ribbon schools 5:

we’ve tried…to make certain we don’t have a gap between information haves and information have-nots. As part of our Empowerment Zone Initiative, we launch this Cyber-Ed truck, a Bookmobile for the digital age. It’s rolling into communities, connecting schools in our poorest neighborhoods and paving over the digital divide [italics added] (cited in Carvin, 2004, para. 5).

The first scholarly use of the concept can be traced back to a 1997 publication by Katz and Aspden in which they noted that the digital divide between “information rich” and “information poor”, or “digital haves” and “digital have-nots”, would lead to “many societal and political consequences” (para. 3) if it should endure. For them the digital divide revolved around awareness, and use, of the internet. Their study pointed to a digital divide based on race, gender, educational attainment and income.

In 1998 the second of the Falling through the Net reports issued by the US Department of Commerce (NTIA, 1998) referred for the first time to the widening “digital divide” between “information haves” and “information have-nots” in regard to access to telephones, computers and the internet, with income, education and race/ethnicity the key factors in determining the division between the haves and have-nots.

5 The Blue Ribbon Schools Program is a United States government program established in 1982 to honor schools for academic excellence.


One year later the Department of Commerce (NTIA, 1999) elaborated on the brief definition with the following starker description:

[some individuals] have the most powerful computers, the best telephone service and fastest internet service, as well as a wealth of content and training relevant to their lives… Another group of people don’t have access to the newest and best computers, the most reliable telephone services or the fastest or most convenient internet services. The difference between these two groups is…the digital divide (Executive Summary, para. 9)

In 2001 the American Library Association (ALA) proposed that the digital divide consisted of

differences due to geography, race, economic status, gender and physical ability in access to information through the internet, and other information technologies and services, as well as in the skills, knowledge and abilities to use information, the internet and other technologies (cited in de la Peña McCook, 2002, para. 110)

Whilst this definition followed earlier definitions in describing the digital divide as a technological separation in our society between people of different races, economic status and general background, it also advocated a refinement of the concept to include a focus on skills and knowledge, and on information, the internet and other technologies.

DiMaggio, Hargittai, Neuman and Robinson (2001), in a recent review of sociological research on the internet, defined the digital divide as “inequalities in access to the internet, extent of use, knowledge of search strategies, quality of technical connections and social support, the ability to evaluate the quality of information and diversity of uses” (p. 310).

Like the ALA definition, this definition went beyond the issue of technological access to include a focus on skills in the use of, not only the technology, but also the information and knowledge transferred through the technology. The definition extended the ALA and other definitions in several ways. Firstly, it did not posit a polarisation or a divide (i.e. based on income, race, geography) between those who have internet access and those who do not. Secondly, it restricted the “digital” domain to the internet only; it excluded computers not connected to the internet, and activity not on the internet, such as word processing or accounting with spreadsheet software. Thirdly, it introduced a realm outside that of technology, by referring to the value of “social support” and the critical “evaluation of information”.

In 2001 the Organisation for Economic Cooperation and Development (OECD), in a publication entitled Understanding the Digital Divide, suggested that the term digital divide

refers to the gap between individuals, households, businesses and geographic areas at different socio-economic levels with regard to their opportunities to access information and communication technology (ICTs) and to their use of the internet for a wide variety of activities. The digital divide reflects various differences among and within countries (p. 5).

This definition highlighted another complication in defining the digital divide: the term can be used to refer to two very broad and different concepts – the digital divide within countries and the digital divide between countries. Norris (2001) referred to the former as the social divide, which concerns the gap between the information rich and poor within each nation. She called the latter the global divide and defined it as the “divergence of internet access between industrialized and developing societies” (Norris, 2001, p. 131). It is important to note that it was the within-countries, or social, divide that was the context for the current research. It would be inappropriate to assume that the adoption and diffusion of the internet in developing countries would mimic the process in developed countries. Thus, whilst the research may shed light on the global digital divide, this was not the intended purpose of the study.

In 2003 the United Nations held the first of two World Summits on the Information Society (WSIS). In establishing a common vernacular for the discussions, the WSIS drafted the following definition of the digital divide:

The digital divide separates those who are connected to the digital revolution in ICTs and those who have no access to the benefits of the new technologies. This happens across international frontiers as well as within communities where people are separated by economic and knowledge barriers. At WSIS Geneva, world leaders declared “We are fully committed to turning this digital divide into a digital opportunity for all, particularly for those who risk being left behind and being further marginalized” (WSIS, n.d. para. 1).

In reviewing the growing body of existing definitions of the digital divide, Greyling (2003) noted that there are many commonalities among them, “all – in some form or the other – addressing the core of what the digital divide really represents and how it will affect today’s and tomorrow’s social and economical environment” (para. 42). Drawing on the many conceptual definitions available, Greyling (2003) compiled the following list:

• The digital divide exists
• It is not an isolated concept, but the outcome of a complex set of social, economic, cultural, educational and political conditions
• It is measurable, or at least some of the components (e.g. number of computers per number of inhabitants, internet access)
• The digital divide relates not only to the capacity to produce ICT goods and services, but also to the capacity to apply and benefit from them, and to be self sufficient in ICT
• It is not limited to the lower socio-economical areas, but also exists amongst the higher socio-economical areas. The difference lies in the availability of means to address it, and the degree of impact of the divide on overall development
• Similarly, it is not limited to only developing countries, but is also considered a reality in developed countries (para. 43).

It was this conceptual understanding of the digital divide, as espoused by Greyling (2003), that guided the current research. It should also be noted that the current research sought not just to use, but to expand, this conceptual understanding by contributing new insight into the phenomenon.

3.2.3 A rose by any other name? 6

In more recent years a discussion regarding the simplistic connotations of the phrase “digital divide” has arisen. Steve Cisler (2000), a strong advocate against the digital divide term, observed that the

binary expression of a very complex series of problems is perhaps fitting for the digital age. You are online or offline; you have a computer or you are without one; you are trained for the digital future, or you are in dead-end, low-paying work. The reality is that all of us online exist on a spectrum of connectivity…[and]…the reasons for being offline vary…Digital divide is a term as demeaning as one from a past era, ‘they live on the wrong side of the tracks’ (para. 10-11, 13 & 22).

6 Quote from William Shakespeare’s play Romeo & Juliet Act II, Sc. II


Henry Jenkins, Director of Comparative Media Studies at the Massachusetts Institute of Technology, contended, “the rhetoric of the digital divide holds open this division between civilized tool-users and uncivilized nonusers. As well meaning as it is as a policy initiative, it can be marginalizing and patronizing in its own terms” (cited in Young, 2001, para. 3). Warschauer (2002) supported this view: “the notion of a binary divide between the haves and have-nots is thus inaccurate and can even be patronizing as it fails to value the social resources that diverse groups bring to the table” (para. 24).

Cisler (2000) proposed that the term “is simplistic, insulting to some and if it has the half-life of other tech jargon, it should last no longer than ‘infobahn’ and ‘technorealism’ did” (para. 1). He identified four problems with the term: (i) it posits a binary split in the world based on connectivity, which was a “crude representation of the situation” (Cisler, 2003, para. 3); (ii) it exaggerated the consequences of being offline; (iii) the efforts by high technology companies to bridge the digital divide were seen as another case of creating new markets and generating fear, uncertainty and doubt; and (iv) the term obscured the many reasons why people do not have access to ICTs.

Fink and Kenny (2003) proposed that the term digital divide “came to prominence more for its alliterative potential than for its inherent terminological exactitude” (p. 2) and, as such, suggested that other possibilities could just as easily have included the “silicon split”, the “gigabyte gap”, or the “pentium partition”. In making this commentary, however, Fink and Kenny (2003) also suggested that “it would be wrong to ponder for too long on what, exactly, should be meant by the term” (p. 2) and that it has simply become a signpost for a very real social issue. This point was also noted by Jan Van Dijk in the introduction to his 2005 publication The Deepening Divide, where he stated:

In the past 5 years I have often considered dropping the concept of the digital divide altogether, and replacing it with the general concept of information inequality and a number of more specific terms. It has caused so many misunderstandings…Particularly it has led to the misconception of the digital divide as a primarily technological problem. It spurred the narrow interpretation of the digital divide as a physical access problem: of having computers and networks and being able to handle them…Nevertheless I have chosen to maintain the concept of the digital divide for strategic reasons. It has managed to be put on the public and political agenda. It should not be moved from the table and smashed to pieces by scientific hair-splitting and political opportunism. It is a long term problem that will mark all future information societies. However, to reach a better understanding of this problem, the concept of the digital divide has to be reframed (p. 3).

In contrast to this pragmatic acceptance of terminology, Bill Callahan, a US based community worker, stated, “eventually we are going to need a pithier, more evocative, more specific name for the thing we’re fighting” (cited in Cisler, 2000, para. 22). Similarly, Lenhart and Horrigan (2003) suggested that “how an issue is imagined or labeled constrains and shapes how society responds to it” (p. 24). They referred to the words of Mehan (1997, cited in Lenhart & Horrigan, 2003, p. 25):

Language has power. The language that we use in public political discourse and the way we talk about events and people in everyday life makes a difference in the way we think and act about them. Words have constitutive power; they make meaning. And when we make meaning, the world is changed as a consequence.

Over the last few years several alternatives to the digital divide phrase have been proposed in the literature. These alternatives have arisen out of a growing awareness that “at its heart the digital divide is not simply a question of technological deployment” (Jarboe, 2001, para. 48). According to Jarboe the issue was not about bridging a digital divide but about ensuring social inclusion. As such, current thinking and discussion in the area must “focus on the transformation, not the technology…our framework should be one of inclusion for all in the broader activities that make up society and economy” (para. 48), with the ultimate goal being a transformation of society so that everyone can participate “in civic and economic activities, however those activities are carried out in this new information age” (para. 48). Thus, there was a need to “move from divide to inclusion as the central organizing principle of our analysis and actions” (para. 7).

In keeping with this social inclusion philosophy, Carly Fiorina (2000), then CEO of Hewlett Packard, suggested the term e-inclusion. According to Fiorina, “we are playing on a worldwide field and we need everybody [italics added] to make this work, to succeed, to grow” (para. 41). E-inclusion was about “how to make sure that everyone [italics added] is included”. Similarly, Warschauer (2002), whilst “recognizing the historical value of the digital divide concept (i.e. that it helped focus attention on an important issue)” (para. 28), preferred to use the phrase technology for social inclusion as a way of “more accurately [portraying] the issues at stake and the social challenges ahead” (para. 28).


Digital inequality has been suggested by several researchers as an alternative to the digital divide phrase. Kvasny (2002) used the phrase to “signify a shift and distinction in focus from access to use of information technology” (p. 16). DiMaggio and Hargittai (2001) used the term to refer to differences not just in access but also to inequality among persons with formal access to the internet. They identified five dimensions of digital inequality: equipment, autonomy of use, skill, social support and purposes for using the internet.

The US Bush Administration began to favour the phrase achievement gap as more accurately reflecting the nature of the problem. Bush proposed that whilst we can harness technology, “technology alone cannot make children learn” (cited in Carvin, 2000). His platform stressed “digital opportunity”, or the view that the growth of a technology-dependent economy should be expanded and ensured by strengthening the existing systems that built it. Bush’s chairman of the Federal Communications Commission, Michael Powell, declared the term digital divide “dangerous” because “it suggests that the minute a new and innovative technology comes to market there is a divide unless it’s equitably distributed among every part of society…and if companies think they can’t produce a new product unless they can produce it cheap enough for all, it could deter them from innovating and offering new goods” (cited in Srinivasan, 2001, para. 2 & 5).

Fink and Kenny (2003) supported the US Government’s view on “digital opportunity”, by advocating a “need for a gentle paradigm shift from the notion of a growing digital divide with cataclysmic consequences…the new paradigm of ‘digital opportunity’ would retain the idea that new ICTs offer significant opportunities to people” (p. 16). For others, the government’s new term placed “a blandly positive spin on all things computer related” (Strover, 2003, p. 275).

Larry Irving, the original head of the National Telecommunications Information Administration (NTIA), questioned the efforts of industry and government to move away from the term digital divide to something “meaningless or innocuous” (2001, para. 26) such as digital opportunity. Irving (2001) argued that the rest of the world, including the press, had picked up on the term digital divide and as such questioned the “wrongheadedness of trying to take a phrase that has near universal acceptance and understanding and turn it into a typical Washington style Orwellian Newspeak” (para. 27).


A small but growing number of scholars have begun to call for a complete shift away from the focus on ICT by removing any and all reference to it. These scholars instead see the real focus of the phenomenon as information – a point of great relevance to the current research. Jupp and 6 (2001) advocated the use of the term information exclusion, grounded in the belief that “in the long run it is exclusion by information which matters most…the rise of an [information] economy…creates the risk that…many people could be excluded from basic services and opportunities essential to achieve a decent standard of living” (p. 7). Dr Alan Bundy proposed that “in a complex information intensive society…the greatest divide is between those who have the understandings and capabilities to operate effectively in that society and those who do not and that this constitutes the information literacy divide, of which the digital divide or edivide is but one important aspect” (2003, p. 2).

This brief review has shown that it is possible, and clearly appealing, for scholars interested in the digital divide to spend a great deal of time discussing the finer points of terminology. Perhaps the best antidote to this is to adopt a Popperian 7 position of explaining, rather than defining, terms. That is, the labels attached to concepts do not matter; the concepts themselves and their significance for practice do. In short, semantics should not be a barrier to understanding. Thus, for ease of communication, the current research used the term “digital divide” in referring to the concept being explored. Perhaps as a direct result of the uncertainty and confusion about definition and language, many commentators in the area have begun to question whether the digital divide actually exists. This issue is considered in the next section.

3.3 Crisis or myth?

In 2001 Benjamin M. Compaine, a Senior Researcher at the Massachusetts Institute of Technology, published a work entitled The Digital Divide: Facing a Crisis or Creating a Myth. In the book’s preface Compaine stated: “I am skeptical of the entire digital divide concept” (p. xi). In his work Compaine presented documents that “make the case for such a public policy issue, as well as material that questions the existence or at least the severity of the issue” (p. xi). Central to the digital divide debate was the question of the actual importance of the divide. Like many other social controversies, this core question has polarised the commentators in the debate into two camps: those who view the digital divide as a legitimate crisis and those who think the problem has been overblown by media-fuelled hype.

7 Sir Karl Raimund Popper (July 28, 1902 – September 17, 1994) was an Austrian born, British philosopher and a professor at the London School of Economics. He is counted among the most influential philosophers of science of the 20th century, and also wrote extensively on social and political philosophy.

For many scholars, the digital divide was a crisis because information and communication technology had become a core part of society, so much so that society had been transformed by it. It was for this reason that the digital divide represented a social, economic and political imperative. The digital divide has been described as one of the “leading economic and civil rights issues” (NTIA, 1999, p. xiii) of our time, with many commentators suggesting that it threatened to create “cyberghettos” (McKissack, 1998) and a “cyberapartheid” (Putnam, 2000) where those without equal access to technology “will be further segregated into the periphery of public life” (Carvin, 2000, para. 3). The internet in particular has been identified as the technology of greatest importance in transforming society:

Using the net…can take back power from large institutions, including government, corporations and the media. Trends like personalization, decentralization and disintermediation…will allow us to each have more control over life’s details: what news and entertainment we are exposed to, how we learn and work, whom we socialize with, even how goods are distributed and political outcomes are reached…The net will allow us to transcend the limitations of geography and circumstances to create new social bonds (Shapiro, 1998, para. 4 & 6).

In 2001 Luciano Floridi provided one of the most compelling arguments for the digital divide as crisis. In his invited address to the UNESCO World Commission on the Ethics of Scientific Knowledge and Technology (COMEST) Floridi stated that “how information and communication technologies can contribute to the sustainable development of an equitable society is one of the most crucial global issues of our time” (Floridi, 2001, p. 2). He stated that the digital divide “disempowers, discriminates and generates dependency. It can engender new forms of colonialism and apartheid that must be prevented, opposed and ultimately eradicated” (Floridi, 2002, para. 8). Floridi conceded that on a global scale the issues of health, education and the acceptance of elementary human rights should be among humanity’s “foremost priorities” (2002, para. 7). However, Floridi argued that “underestimating the importance of the [digital divide], and hence letting it widen, means exacerbating these problems as well” (2002, para. 7). Floridi concluded by announcing that our challenge is to build an information society for all, and this is a “historical opportunity we cannot afford to miss” (Floridi, 2002, para. 14).


At the other end of the spectrum are those who felt that the digital divide had been purposely exaggerated and was far from a crisis. Those in this camp pointed to data suggesting that technology such as the internet was merely a convenience or a luxury, and that, as such, those without access were “not facing an imperiled future” (Brady, 2000, para. 17). Brady (2000) contended that the idea of the internet as a necessity for survival in the modern age was ridiculous, and scoffed at the idea of the digital divide as a crisis:

As long as the so-called digital divide sucks money and attention from the world’s real problems, it is a dangerous myth…The digital divide is not a crisis. World hunger, war, AIDS and environment decay are crises. When the internet can solve these problems [then] maybe everyone needs to have a computer (para. 20 & 21).

Many critics of the digital divide viewed the phenomenon as just the latest in a long history of economic gaps. In the words of Mark Lloyd, executive director of the Civil Rights Forum on Communications Policy: “we had the agriculture divide when our economy was based on agriculture. We had the industrial divide when our economy was based on…industry…[Now with an IT based economy], it’s not a surprise that we have a digital divide” (cited in Mumford, 2000, p. 1). Even Microsoft Chairman Bill Gates questioned the very idea of the digital divide when he said “most of the world doesn’t have cars, but we don’t talk about the auto divide” (cited in Richman, 2000, para. 11). This was an ironic statement given that the Bill and Melinda Gates Foundation was established that same year, with the mission of helping to reduce inequities in the United States and around the world; reducing the digital divide by increasing access to technology was identified as a key strategy in meeting this mission. Building on the car analogy, Federal Communications Commission Chief Michael Powell dismissed the issue of the digital divide with the following comment: “I think there’s a Mercedes divide…I’d like to have one, but can’t afford one” (cited in Chuck 45, 2001, para. 3).

Many commentators in this camp also felt that the time had passed when the digital divide could possibly have been called a crisis. They argued that four or five years earlier, when home computer costs and internet access prices were much higher, there existed a real digital divide, but that the wide availability of much more affordable technology had since made debating the digital divide a pointless task. As noted by Brady (2000), cheaper computers and free internet access meant that even “low income families could find a way to get wired if they viewed it as a high enough priority” (para. 19). Building on this idea are those who advocated that the “literacy divide” must be addressed before the digital divide: “if children cannot read, write or master basic arithmetic all the wonders of the internet are beyond their reach, even if a computer is within it” (Bailey, 2000, para. 15).

As Tom Lipscomb, President of the Centre of the Digital Future, said: “society better worry about the fact that failure of basic education does not go well with the computer based, highly unforgiving environment of the internet…if you can’t spell you can’t URL” (cited in Murdock, 2000, para. 13). To these critics the focus on the internet ignored the real problems in education. Christopher Forman, a senior fellow at the Brookings Institution, derided the value of internet technology in education:

we have existing technology called ‘books’ that are currently in existing neighborhood centers called ‘libraries’. We already have a problem getting young people to use this existing technology. It makes me nervous that people will focus on the digital divide without paying attention to the old fashioned paper-divide (cited in Kuttan & Laurence, 2003, p. 9).

In summary, the debate as to whether the digital divide was crisis or myth continues. Perhaps the reason so many scholars and commentators suggested that the digital divide was no longer a crisis was that their observations were based on a limited understanding of the phenomenon, and this limited understanding arose because digital divide research had not yet explored the full nature of the phenomenon. This was an important point for the current research. A brief examination of the main studies informing the existing digital divide discourse follows.

3.4 Who is excluded by the digital divide?

Research into the digital divide has been conducted in various parts of the world since the mid 1990s. As the notion of the digital divide gained attention, universities, government agencies and other community groups conducted studies to document the ICT gap. These studies have attempted to quantify or measure the digital divide and, in so doing, provide a profile of who has been excluded by the digital divide and who has not. These studies have suggested that the primary factors contributing to the digital divide were income, employment, education, geography, gender, age, disability, and ethnicity. Individuals who can be identified through these factors were more likely to represent the “have-nots” in the digital divide. As the current research took place in Australia and the US, a brief overview of how the digital divide has been quantified and measured in these two nations is warranted.

3.4.1 USA studies

Under the direction of the Clinton Administration the US federal government began to examine the digital divide. A series of reports under the generic title Falling through the Net were produced by the US Department of Commerce National Telecommunications Information Administration (NTIA). The data reported by the NTIA were drawn from the US Census Bureau’s Current Population Survey. The NTIA reports were among the earliest studies in the world attempting to explore the digital divide and have received the widest exposure. In 1995 the first of the NTIA reports into the digital divide was released. Entitled Falling through the Net: A Survey of the ‘Have-nots’ in Rural and Urban America, the report focused on the penetration of telephones, personal computers and modems. It is important to note that at that point in time the internet had not yet gained widespread popular acceptance and was still mainly a tool for academia and the defence industry. As such the internet went unnoticed in the first of the NTIA reports. The study identified the “information disadvantaged” as groups of people who “are not connected to the National Information Infrastructure (NII)” (para. 5). The report noted that access to the NII was related to socio-economic and geographic factors, with the “information have-nots” disproportionately found in rural areas and central cities. The study also showed that race, age and education within these areas had an impact on whether someone was a “have-not”. The report concluded that “more work needs to be done to better assess the characteristics of these ‘have-nots’” (para. 16). This was the goal for the next NTIA report.

The second NTIA report was released in 1998. Entitled Falling through the Net II: New Data on the Digital Divide, the report found that “as a nation, Americans have increasingly embraced the information age through electronic access in their homes” (NTIA, 1998, para. 6). The report noted significant increases in access to telecommunications technology such as telephones, computers and the internet. (It should be noted that the first Falling through the Net report did not include a measure of internet access; instead it focused solely on telephone and computer access and ownership.) However, gaps based on demographic and geographical factors persisted. The report noted a growing disparity based on race, geography, age, education, ethnicity and income when compared with the levels reported in the first report. Household type (i.e. family structure) was introduced for the first time as an important characteristic within the divide, with single parent and female-headed households lagging significantly behind. These differences were particularly acute when examining internet access and computer ownership. This was the first NTIA report to refer directly to a “persisting digital divide” between certain groups of Americans. It is also interesting to note that the term “information disadvantaged” used in the first report was abandoned in this and subsequent NTIA reports and replaced with the “digital divide” phrase.

The third NTIA report, released in 1999, noted that “the good news is that Americans are more connected than ever before. Access to computers and the internet had soared for people in all demographic groups and geographic locations” (NTIA, 1999, Executive Summary, para. 2). However, “accompanying this good news…is the persistence of the digital divide between the information rich…and the information poor” (NTIA, 1999, Executive Summary, para. 3). The patterns disclosed in the 1999 NTIA report indicated that as the internet became a more pervasive technology used by an increasing number of Americans, the gap between the “information haves” and the “information have-nots” had also increased. The factors driving the digital divide appeared to be location, race, education, income and household type. For households with higher income and education levels the digital divide appeared to be narrowing; among households with lower income and education levels, however, it appeared to be widening. For example, during the single year between the second and the third report, the gaps in internet access between White and Hispanic households, and between White and Black households, grew by approximately five percentage points. Over the same period the gaps in internet access based on education and income also increased, with the gap between those at the highest and lowest education levels increasing by 25 percent and the gap between those at the highest and lowest income levels growing by 29 percent. In reviewing the third NTIA report it is interesting to note its changing tone. The NTIA had shifted from stating that access to IT “may be as important…as telephone services” (NTIA, 1998, para. 1) to making a much stronger claim:

With the emerging digital economy becoming a major driving force of our nation’s economic well being, we must ensure that all Americans have the information tools and skills that are critical to their participation. Access to such tools is an important step to ensure that our economy grows strongly and that in the future no one is left behind (NTIA, 1999, Introductory Letter, para. 1).


In its third report, the NTIA highlighted for the first time the role played by community access centers, noting that these venues were “particularly well used by those groups who lack access at home or at work” (NTIA, 1999, Executive Summary, p. 9). It was also the first NTIA report to provide information on individual internet usage (e.g. access points, activities online). It was noted that groups with lower rates of home and work related access were using the internet at higher rates to search for jobs or take courses. Consequently, the report suggested that providing public access to the internet “will help these groups advance economically, as well as provide them the technical skills to compete professionally in today’s digital economy” (NTIA, 1999, Executive Summary, p. 9). It is interesting to note that whilst the report referred to the need for “skills”, no data on what skills were needed or how they were developed were gathered. The report also noted that “while many Americans are embracing computers and the internet, there are many others who did not realize that this technology is relevant to their lives” (NTIA, 1999, Part III Challenges Ahead, para. 14). The authors noted a growing need to find out why people were not connected, with the report concluding that “no one should be left behind as our nation advances in the 21st century, where having access to computers and the internet may be key to becoming a successful member of society” (NTIA, 1999, Part III Challenges Ahead, para. 20). In other words, those without access to computers and the internet were being left behind; the report presented such access as a precondition for successful participation in society. The report showed that, as of 1999, there were large segments of society who did not have access.

Although the second and third NTIA reports told a story of inequality, the next report began to show signs of a narrowing divide. The fourth report (2000), Falling through the Net: Toward Digital Inclusion, found that internet access had increased across all groups and that the gender gap had largely disappeared:

groups that have traditionally been digital have-nots are now making dramatic gains…[these] gains occurred at every income category, at all education levels, among all racial groups, in both rural and urban America, and in every family type…This year…households in the middle income and education ranges are gaining ground in connecting to the internet at a rate as fast or faster than those at the top ranges. [This] suggests, that in some cases the digital divide has begun to narrow or will do so soon, and that we are entering a period of fuller digital inclusion (NTIA, 2000, p. 1-2).


Nevertheless the report also observed that some Americans were still connecting at far lower rates than others. Results showed that divides still existed between those with different levels of income and education, between racial and ethnic groups, between old and young, and between single and dual parent families; for the first time the study also revealed a divide between those with and without disabilities. Once again the report marked a shift in focus: the NTIA had moved from stating that the “internet may be key to becoming a successful member of society” (NTIA, 1999, Part III Challenges Ahead, para. 20) to announcing that “internet access was no longer a luxury” (NTIA, 2000, p. xviii); that it was becoming

an increasingly vital tool in our information society…each year being digitally connected becomes ever more critical to economic and educational advancement and community participation…Therefore raising the level of digital inclusion by increasing the number of Americans using the technology tools of the digital age is a vitally important national goal (NTIA, 2000, p. xv).

The fifth NTIA report, published in 2002, was called A Nation Online: How Americans are Expanding their Use of the Internet. This new title revealed a shift in focus – from focusing on differences to focusing on the growing number of Americans going online. Once again the report noted increases across all groups and found that urban and rural differences were disappearing. Nevertheless, it reported persistent gaps in internet use based on age, income, education, race, ethnicity, dual/single parent households and, in particular, mental or physical disabilities. Unlike the other NTIA studies, A Nation Online emphasised internet use in several contexts such as work, school and home, instead of focusing solely on internet access at home. The report concluded that “we are more and more becoming a nation online: a nation that can take advantage of the information resources provided by the internet, as well as a nation developing the technical skills to compete in our global economy” (NTIA, 2002, p. 91). It is interesting to note that the notion of skills originally raised in the 1999 report was once again highlighted, and that yet again the data obtained in the survey did not tap into this domain.

The most recent report, A Nation Online: Entering the Broadband Age, was issued in September 2004. It continued the new focus set by the 2000 report by shifting attention from internet access or use to broadband technologies. The 2004 report showed strong support for President Bush’s national goal of “universal, affordable access for broadband technology by the year 2007” (NTIA, 2004, para. 3). The report clearly reflected a belief that the digital divide phenomenon had changed: the question was no longer who was connected but how people were connected. The report observed that broadband use was growing swiftly but that not all geographic locations were using high speed services to the same degree.

In addition to the NTIA studies, the Pew Internet and American Life Project (“Pew” for short) has explored the impact of the internet on US society since 1995. The Project aimed to be “an authoritative source on the evolution of the internet through collection of data and analysis of real-world developments as they affect the virtual world” (Pew, 2005, para. 1). Like the NTIA studies, the Pew studies were based primarily on survey data, and there was considerable overlap between the findings in the two series of reports. For example, a 2003 study (Lenhart et al., 2003) revealed that several demographic factors were strong predictors of internet use: having a college degree, being a student, being white, being employed and having a comfortable household income. Each of these factors independently predicted internet use. An interesting difference, however, was Pew’s growing interest in not just quantifying the digital divide in terms of who had access to technology but also in exploring how that technology was being used. Specific areas of research included online activities and pursuits such as blogging, online banking, pod-casting and online dating, and e-government and e-policy issues such as the internet and education. Whilst these studies provided interesting insight into how people used the internet in their work and personal lives, they did not necessarily focus on digital divide issues and as such are not considered here.

The studies by the NTIA and Pew have frequently been criticised for their unsophisticated statistical analysis, relying solely upon descriptive statistics to draw their conclusions. In response to these criticisms, a number of academic studies in the US have also taken place. These have explored which factors were responsible for disparities in access in the digital divide, using more advanced statistical techniques such as multivariate regression that can isolate the effect of specific factors on who has access to computers and the internet. Like the NTIA and Pew studies, the academic studies have focused on developing a picture of how socio-economic factors influence ICT adoption. In 1999 Neu, Anderson and Bikson reported a study in which they used data gathered in 1993 and 1997 to explore access to a home computer and email use. They found that income, education and ethnicity were important factors for both. Computer access and email use were greatest among Asian Americans and whites who were more affluent and more educated. Age and region also played a part in determining access, but to a lesser extent.
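To make the contrast with the descriptive NTIA and Pew approach concrete, the sketch below illustrates the kind of multivariate technique these academic studies relied on: a logistic regression in which each coefficient estimates the effect of one socio-economic factor on the odds of internet access while the other factors are held constant. The data, variable names and coefficients are entirely hypothetical; this is a generic illustration of the method, not a reconstruction of any study cited above.

```python
# A minimal sketch (Python, statsmodels) of multivariate logistic regression
# as used in academic digital divide studies. All data are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 1000

# Hypothetical survey respondents.
df = pd.DataFrame({
    "age": rng.integers(18, 80, n),
    "income": rng.normal(50, 20, n).clip(5, 150),  # household income, $000s
    "education": rng.integers(8, 20, n),           # years of schooling
    "female": rng.integers(0, 2, n),
})

# Simulate internet access so that income and education dominate.
log_odds = -6 + 0.05 * df["income"] + 0.25 * df["education"] - 0.01 * df["age"]
df["internet"] = (rng.random(n) < 1 / (1 + np.exp(-log_odds))).astype(int)

# Each fitted coefficient is the change in the log-odds of internet access
# for a one-unit change in that factor, controlling for the others; this
# "isolating" property is what descriptive statistics lack.
model = smf.logit("internet ~ age + income + education + female", data=df).fit()
print(model.summary())
print(np.exp(model.params))  # odds ratios for easier interpretation
```

On data simulated this way, the model recovers large, statistically significant coefficients for income and education and a smaller one for age, mirroring the pattern of findings reported in the studies discussed in this section.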

Hoffman and Novak (1998) used data gathered in December 1996 and January 1997 by Nielsen Media Research to explore the differences between whites and African Americans with respect to computer access and web use. Data from 5813 respondents were analyzed. Key findings from the study included: (i) greater education corresponded to an increased likelihood of work computer access, regardless of age; (ii) while increasing levels of income corresponded to an increased likelihood of owning a home computer regardless of race, whites were still more likely than African Americans to own a home computer at each and every education level, even after controlling for differences in education; and (iii) white Americans were almost six times more likely than African Americans to have used the web in the past week, and also significantly more likely to have used the web at home or other locations. Notably, race differences in web use disappeared as income increased. Hoffman and Novak (1998) concluded that “education is what counts” (p. 391) and argued that improving educational opportunities for African Americans was critical for ensuring the “participation of all Americans in the information revolution” (p. 391). The study’s findings were especially interesting when compared with the results of other related studies at the time (i.e. the NTIA series, the Pew reports, Mossberger, Tolbert & Stansbury, 2003). The work provided a completely new way of understanding the influence of race, education and income on access to and use of ICTs.

A study by Nie and Erbring (2000), from the Stanford Institute for the Quantitative Study of Society (SIQSS), found that only education and age mattered in the digital divide. Their study was based on data collected in December 1999 through a national online survey of 4113 individuals. However, the results of this study must be viewed with caution, as the data were gathered via an internet survey, which clearly has methodological implications when internet access is the issue under study. Also, unlike the studies by Neu, Anderson and Bikson (1999) and Hoffman and Novak (1998), this investigation relied only on descriptive statistics in reaching its conclusions. Katz and Aspden (1997) found that internet “dropouts” (people who reported previous usage of the internet and then stopped usage) tended to be less affluent, less well educated and younger than other users. Users with higher educational levels were less likely to become internet dropouts, which may be related to their better understanding of the opportunities provided by the internet. They were also more likely to have a job that required the use of the internet.

Mossberger, Tolbert and Stansbury (2003) examined the factors associated with computer use and internet access by exploring survey data gathered in 1996, 1998, 2000 and 2001. The study found that the poor, the less educated and the old were significantly less likely to have a home computer, an email address or internet access. Both African Americans and Latinos were significantly less likely than whites to have home computers, email addresses or internet access after controlling for socio-economic conditions. The findings clearly demonstrated that race and ethnicity matter in the digital divide even after accounting for variations in income and education.
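The phrase “after controlling for socio-economic conditions” describes a nested-model comparison, which the hypothetical sketch below makes concrete: a base regression with socio-economic variables only is compared against a model that adds a race/ethnicity indicator, and a likelihood-ratio test asks whether the added variable explains internet access beyond the controls. All variables and data are invented for illustration; this is a generic demonstration of the technique, not Mossberger, Tolbert and Stansbury’s actual analysis.

```python
# A hypothetical sketch of "controlling for" socio-economic conditions:
# fit a base model with socio-economic variables only, then add a
# race/ethnicity indicator and test whether it improves the fit.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

rng = np.random.default_rng(7)
n = 1500

df = pd.DataFrame({
    "income": rng.normal(45, 18, n).clip(5, 140),
    "education": rng.integers(8, 20, n),
    "age": rng.integers(18, 85, n),
    "minority": rng.integers(0, 2, n),  # hypothetical race/ethnicity indicator
})
log_odds = -5 + 0.04 * df["income"] + 0.2 * df["education"] - 0.9 * df["minority"]
df["internet"] = (rng.random(n) < 1 / (1 + np.exp(-log_odds))).astype(int)

base = smf.logit("internet ~ income + education + age", data=df).fit(disp=False)
full = smf.logit("internet ~ income + education + age + minority", data=df).fit(disp=False)

# Likelihood-ratio test: does race/ethnicity matter once income,
# education and age are controlled for?
lr = 2 * (full.llf - base.llf)
p = stats.chi2.sf(lr, df=1)
print(f"LR statistic = {lr:.2f}, p = {p:.4f}")
```

A significant likelihood-ratio statistic indicates that the race/ethnicity indicator carries explanatory power over and above the socio-economic controls, which is the logical form of the finding reported above.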

Rice and Katz (2003) reported results from a national telephone questionnaire of 1305 Americans conducted in 2000. The study suggested the existence of three kinds of digital divides with respect to both internet and mobile phone usage: users/non-users, veteran/recent users, and continuing/dropout users. Using correlation and regression analysis, the study suggested that there are similarities and differences among these digital divides based on demographic variables. As internet access was the primary focus of the current research, only the internet results will be reported here. The study made the following observations: (i) the gap between internet users and non-users was associated with income and age, but no longer with gender and race once other variables were controlled; (ii) the veteran/recent internet gap was predicted by income, age, education, phone use, membership in community religious organisations, having children, and gender; and (iii) the gap between continuing and dropout users was predicted by education.

In a more recent study, Chaudhuri, Flamm and Horrigan (2005) explored the impact of a variety of socio-economic influences on households’ decisions to pay for basic internet access. Data from the Pew Internet and American Life Project were used for the analysis. Regression analysis revealed that education and income were the strongest predictors of internet purchase. The study also showed that African Americans and Hispanics were less likely to be online than other racial categories and that people who were married were more likely to have internet access. The authors concluded that “demography is destiny when it comes to internet penetration” (p. 754).


A number of studies have explored digital inequality between metropolitan and non-metropolitan areas. In 2000 Hindman conducted a study based on data collected in 1995 and 1998 by Princeton Survey Research Associates and sponsored by the Pew Research Center for the People and the Press. Regression analysis showed that income, age and education were more closely associated with the use of information technologies than was geographical location. Three years later, Mills and Whitacre (2003) used a logit estimation approach to explore data obtained from the 2001 US Current Population Survey Internet and Computer Use Supplement. The study revealed that differences in metropolitan/non-metropolitan digital connection were explained by education and income. More recently, Chakraborty and Bosman (2005) used the Lorenz curve and Gini coefficient to assess the distribution of family income and wealth in order to measure inequalities in home personal computer ownership at the national, regional and state levels in the US. The study used data from the US Census. Findings from the study suggested that PC ownership changed with the level of household income and that there are income-related distributional inequalities across the nation; the largest inequality was observed in the Southern states and the smallest in the states of the Pacific and Mountain regions. The study, however, also concluded that the degree of income inequality in PC ownership was steadily declining across the US. Results also indicated that economic inequality in PC ownership was substantially smaller among white households than among African American households across the US. Regional level analysis revealed that the largest income inequality for African Americans was observed in the South and the Midwest. The study also noted that income inequalities in home PC ownership had declined more rapidly among white households than among African American households in all census regions. The authors suggested this raised the contentious question: are African American households benefiting from the information revolution or are they actually worse off because of it?
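For readers unfamiliar with the measure, the Gini coefficient has a standard definition in terms of mean absolute differences (the exact estimator used by Chakraborty and Bosman may differ in detail). In LaTeX notation:

    G = \frac{\sum_{i=1}^{n} \sum_{j=1}^{n} \lvert x_i - x_j \rvert}{2 n^{2} \bar{x}}

where x_i is the income of household i among the n households considered and \bar{x} is the mean income; G = 0 indicates perfect equality and values approaching 1 indicate maximal inequality. The Lorenz curve plots the cumulative share of income against the cumulative share of households, and G equals twice the area between that curve and the line of perfect equality.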

In summary, the US studies to date, whether conducted by the government or by academics, have suggested that socio-economic factors such as income, education, age, geography, gender and ethnicity have an impact on an individual’s access to ICT such as computers and the internet. The studies reviewed in this section represent some of the most current research that provides both a quantitative snapshot of the divide and an indication of how the digital divide has been evolving in the US.


3.4.2 Australian studies

In 1991 the Australian Commonwealth government released a report entitled Australia as an Information Society: Grasping New Paradigms (Jones, 1991). Only seven years later the Australian Government’s focus had clearly shifted with the release, in December 1998, of A Strategic Framework for the Information Economy – identifying priorities for action (DCITA, 1998). Gone was the “information society” and in its place stood the “information economy”. This shift helps to shed light on the Australian Government’s interest in ICT and its role in Australian society, and on the nature and substance of the government-based research that has arisen. In the foreword to the document, Senator Alston, then Minister for Communications, Information Technology and the Arts, stated that the information economy would generate opportunities across all sectors and be a source of employment for regional and city-based Australians. It would provide

opportunities for Australian business, wealth creation through ready access to a global marketplace, and reductions in the cost of information and transactions. An exciting aspect of this information revolution is the potential for enhanced social interaction and community participation (1998, p.1).

He declared that the Framework represented the “government’s commitment to ensure that Australians, including those in regional and rural areas, enjoy the social and economic benefits offered by the growth of the information economy” (1998, p. 1). It announced that the government’s mission was to

ensure that the lives, work and well being of Australians are enriched, jobs are created, and national wealth is enhanced, through the participation of all [italics added] Australian’s in the growing information economy (1998, p. 6)

Two principles guiding this mission were the belief that firstly,

all Australians – wherever they live and work, and whatever their economic circumstances – need to be able to access the information economy at sufficient bandwidth and affordable cost; and need to be equipped with the skills and knowledge to harness the information economy’s benefits for employment and living standards (p. 6).

Secondly, it was the

role of government…to create the environment…to provide direction, education, training and encouragement to business and consumers; and to provide a legal and regulatory framework that ensures the information economy is safe, secure, respectful of personal privacy, certain and open (1998, p.7).

The first strategic priority identified in the document was: “maximise opportunities for all Australians to benefit from the information economy” (1998, p. 8). The Australian Government was “committed to ensuring that all Australians have open and equitable access to information available online” (1998, p. 9) and identified the need to ensure equity of access to opportunities online as “critical if we are to avoid a social polarisation between the so-called ‘information rich’ and ‘information poor’” (1998, p. 10).

Less than 18 months after the publication of the framework, the National Office for the Information Economy (NOIE) released the first of a series of reports titled The Current State of Play. The NOIE reports grew out of the rising interest of the federal government in the “information economy”. The report “presents a statistical compendium presenting a range of data on Australia’s progress on the emerging information economy” (NOIE, 2002, p. 4) by considering three dimensions: (i) the readiness of Australians to participate in the information economy (i.e. the adoption of the internet in homes and businesses), (ii) the intensity of current and social activity being undertaken via the internet (i.e. the frequency of use and the type and scope of activities undertaken online) and (iii) the impact activities performed online have on Australian society (measured by the number of persons or businesses operating online and perceptions of benefits from use of the internet).

With more than six million adult Australians accessing the internet and nearly 1.8 million households (25% of all households) connected to the internet, the report observed that “the rapid growth in the use of the internet, which is underpinning the emergence of the Information Economy, is transforming Australia economically and socially” (NOIE, 2000a, p. 1). The report noted, however, that males, younger adults and families with children characterised by high incomes and residing in capital cities were leading the way in adoption of the internet. The second report in the series was issued in July 2000, only four months after the initial report. It is interesting to note that the government’s interest in understanding more about “Australia’s progress in the emerging information economy” (NOIE, 2002, p. 4) led to the publication of the first three reports in the series within a nine month period (i.e. March 2000a; July 2000b; November 2000c). It also appeared that, with the issuing of each new report, the Australian Government became more and more convinced of Australia’s place as one of the “leading information societies around the world” (NOIE, 2004, p. 5), with the internet “now an established feature of life in industry, households, offices and schools” (NOIE, 2005, p. 4).

The second report, published in July the same year, confirmed the findings from the first report. Internet access was influenced by geography, age, employment status, income and household type. However, two new factors were introduced: education and employment type. The report revealed that those with a higher education and those in a professional or managerial occupation were more likely to have internet access. The report also started exploring what activities were being conducted online, noting that a small number of internet users were engaging in online shopping. The important influence of education, income, employment status, occupation type and family type continued to reveal itself in each of the subsequent reports issued. However, in the third report, released in November 2000, gender was identified as no longer a key factor in determining internet access. In addition, the study explored for the first time the barriers to computer and internet use, with lack of interest being identified as the main barrier to internet access.

The fourth report in the series was published in June 2001 and reported that the gap between city and rural internet access, whilst still present, was steadily closing. A focus on broadband access was introduced in the 2002 report released in April that year; the report revealed that only 5% of all households had high speed internet access at that time. The study also began to develop a picture of what pursuits Australians were conducting online, with email being the most popular activity but internet shopping steadily increasing in popularity. The 2003 report was the fifth report in the series. It began with the observation that “a growing majority of Australians use the internet”. Grounded in this belief, the report provided a more detailed focus on the online activities of Australians than the previous reports. The report revealed that, whilst communication via email and discussion lists was still the most popular activity, significant growth was occurring in online banking and finance as well as buying and selling online. By the 2004 report it was noted that “Australians’ enthusiasm for the uptake of new technologies, have positioned the country at the forefront of the emerging global information economy” (NOIE, 2004, p. 4). The Australian Government’s focus on the “information economy” was clearly demonstrated by the growing focus of the NOIE reports on business access to and use of IT. The most recent report was issued in November 2005 and concluded that “in Australia the internet is now an established feature of life in industry, households, offices and schools. Online participation continues to expand across age and gender groups, albeit at a slower pace than in previous years” (NOIE, 2005, p. 4).

The Current State of Play series provided an invaluable insight into the adoption of ICT by the Australian population and into the Australian Government’s view of ICT within Australian society. It is interesting to note that, unlike the NTIA reports, the Australian studies did not explicitly refer to the digital divide or to “haves” and “have-nots”. Even so, the studies clearly showed how a range of socio-economic factors have over the years separated those who had access to IT, such as the internet, from those who did not. These factors included race, gender, geography, household structure, age, income, education and disability.

The NOIE was not the only Australian government agency involved in measuring ICT adoption within Australia. From February 1998 to November 2000 the Australian Bureau of Statistics (ABS) issued 12 releases of the report Use of the Internet by Householders, Australia (cat. 8147.0). The report series aimed to describe who in Australia was using the internet and how they were using it. The final release in the series observed that the “influence of the internet has spread considerably throughout Australian households over the three years for which information was collected” (ABS, 2000, para. 5). The report noted that in 1998 only one in every eight households had home internet access compared to one in every three by 2000. The report predicted that by the end of 2001 every second household in Australia would have internet access. In September 2000 the ABS released the first of a new report series, Internet Activity, Australia (cat. 8153.0). Thirteen releases have been made in this series. This new report had a different focus to the earlier report series in that it concentrated primarily on internet service providers and the services they offered, not on the individual user’s or household’s use of the internet. From 1994 to 2006 the ABS also released an annual report entitled Household Use of Information Technology, Australia (cat. 8128.0 & cat. 8146.0). The series focused on the use of IT such as computers and the internet in households. The most recent release in the series, published in December 2006, focused not on how many Australians were online but on whether they were connected via broadband. It was noted that the majority of Australian children aged 5-14 years used a computer and close to two-thirds accessed the internet. It also noted that the three main reasons for households not having home internet access were “no use”, “lack of interest” and “costs are too high”. Like the NOIE studies, the ABS reports clearly showed that Australia was rapidly becoming a nation online.


A number of other smaller studies exploring the digital divide in Australia have been undertaken in the last few years. These studies have predominantly focused on the divide between rural and metropolitan areas. In 2000 the National Centre for Social and Economic Modelling (NATSEM) published a report on Barriers to the take-up of new technology (Lloyd & Hellwig, 2000). The report noted that “in Australia the concern about the digital divide has taken a regional focus because of differences to metropolitan and regional rates of access to new telecommunications services” (Lloyd & Hellwig, 2000, p. 2). The NATSEM report concluded that people living in non-metropolitan areas were digitally disadvantaged. It was noted that this disadvantage existed because of the high costs and poor quality of service available in non-metropolitan areas. The report argued that the Australian “digital divide” was one of income and social situation, not geography: “a large proportion of Australians do not participate in the – not because of where they live, but because of their economic or social circumstances” (Lloyd & Hellwig, 2000, p. 34). The report questioned the government’s focus and policy directions at the time, which were based on concern with supply to rural areas. The report concluded

supply-side policy solutions will not be enough to bridge the digital divide. Improved infrastructure may improve the quality of services in regional areas but will not overcome the disparity in access rates for different social groups… a more complex social policy agenda directly targeting digitally disadvantaged communities and families is necessary if Australian is to seriously address the root causes of the digital divide (Lloyd & Hellwig, 2000, p. 34).

The NATSEM report was significant in that it was the only Australian study to date that moved beyond descriptive statistical analysis of the digital divide by applying regression analysis to the data gathered.

The issue of geography and its role in the digital divide in Australia was also considered in Jennifer Curtin’s 2001 research brief. Curtin contended that “whether or not rural and regional Australians are on the disadvantaged side of the digital divide remains a contested issue” (p. 2) and that the brief “seeks to shed some light on the debate by reviewing the socio-economic and democratic aspects of the digital divide and integrating throughout a rural and regional perspective on internet access” (p. 2). Undertaken on behalf of the Australian parliamentary library, the report explored whether the internet can lead to a more participatory democracy. Curtin used current statistical data available from the Australian Bureau of Statistics (ABS) to draw her conclusions. The brief concluded that the “internet has not yet become a medium of the masses” (p. 16), that a regional dimension to the digital divide existed and that “it is too early to tell the extent to which the internet will influence the practice of politics and whether it indeed has the capacity to produce a qualitatively different democracy” (p. 16).

The NATSEM report and the Curtin research brief extended two other works: (i) the Accessibility to electronic commerce and new service and information technologies for older Australians and people with a disability report by the Australian Human Rights & Equal Opportunity Commission (2000) and (ii) the 1999 report on Web Sites for Rural Australia: Designing for Accessibility by the Rural Industries Research & Development Corporation (RIRDC) (Groves, 1999). Both reports explored issues pertaining to web site design and other associated aspects of access for particular groups (i.e. rural residents, people with disabilities and the elderly). Key recommendations from these reports included:

• Increased efforts by relevant government agencies in co-operation with industry associations and community organisations to ensure that people developing and implementing new technologies were aware of access issues

• Increased business and government support for community access points for online services and for awareness and education and training for people who might otherwise remain on the wrong side of a “digital divide”

• Increased focus on provision of appropriate equipment, software, training and information to meet the needs of people requiring adapted or customised equipment to achieve effective internet access.

In 2002 McLaren and Zappala published the results of a study into the digital divide among financially disadvantaged families. More specifically, the study considered the question: what factors are associated with home computer and internet access for children from low socio-economic backgrounds? Completed questionnaires were obtained from 6874 children from financially disadvantaged families. By controlling for income, the study was able to suggest that parental education was a strong influence on ICT access and use by the children.

Drawing upon data from the Australian national census undertaken by the Australian Bureau of Statistics in 2001, Gibson (2003) examined the social and spatial inequalities of personal usage of information technologies (i.e. computers and the internet) in New South Wales. Correlation and regression analysis was used to explore the relationship between ICT use and socio-economic variables. The results supported findings from the previous Australian studies by suggesting the existence of a class - as well as spatial - dimension to the digital divide within the state. The study found that education, income, location and birthplace all had an impact on an individual’s use of both computers and the internet.

More recently, Holloway (2005) undertook a study questioning 776 residents of Western Sydney. The study findings supported previous research suggesting that older people, the unemployed, and those on low incomes are less likely to have internet access. The study showed that discrepancies in access exist even within metropolitan areas. In that same year, a discussion paper by Samaras (2005) raised the issue of indigenous Australians and the digital divide. The author suggested that “as social and economic opportunity becomes increasingly wedded to ICT access in the information society, the digital disadvantage of indigenous Australians threatens to perpetuate or exacerbate the socioeconomic disadvantage” (p. 91). Samaras called for research into this area.

In summary, the Australian studies to date, whether conducted by the government or by academics, have suggested that socio-economic factors such as income, education, employment, age, ethnicity and geography have an impact on an individual’s access to ICT such as computers and the internet. The reports reviewed in this section represent some of the most current research that provides both a quantitative snapshot of the divide and an indication of how the digital divide has been evolving in Australia.

3.4.3 Summary of US and Australian studies

The US and Australian studies discussed here have been an invaluable starting point for developing knowledge of the digital divide in the two nations. The studies have clearly shown how a range of socio-economic factors have, over the years, separated those who have had access to ICT, such as the internet, from those who have not. These factors included race, gender, geography, household structure, age, income, education and disability. The studies have also shown that the digital divide is a dynamic concept, not a static one, and that the way in which socio-economic factors impact upon the digital divide has changed over the years. It is important to note that the bulk of these studies have taken the form of socio-economic/demographic analysis, or statistical descriptions of who had access to ICT and who did not. Whilst limited by their use of unsophisticated statistical analysis, these studies have been useful in illustrating trends and suggesting possible relationships. In addition, they have served as important indicators of a developing policy problem and have placed the digital divide issue in the public spotlight. However, data of these types were inadequate for making claims about the root causes of the problem and can be open to different interpretations. The methods used to analyse the digital divide to date have been insufficient to separate the effects of overlapping influences and to establish with any certainty which factors matter – race, education, income or all of the above. Additionally, the studies have focused on only one aspect of the digital divide – physical access to ICT – and have failed to consider other access issues. The issue of measurement and focus within existing digital divide studies is explored further in the next section.

3.5 Digital divide or digital divides?

The research presented in the previous section portrays the digital divide as a relatively simple premise: the digital divide is a dichotomous concept. You either have access to ICT or you do not, and this access has been determined by socio-economic factors such as income, employment and education. From this perspective “the digital divide is easily defined and as a result easily closed, bridged and overcome” (Selwyn, 2004, p. 345). Burgelman (2000) suggested that this portrayal of the digital divide was “simplistic, formalistic and thus idealistic” (p. 56). This simplistic view of the digital divide has arisen because the studies have taken a narrow definition of the digital divide and used empirical measures that were “rather basic”. Neice (1998) noted that the measures used in digital divide research have “been developed mainly for market research, advocacy or public policy purposes” (p. 4) and were therefore of questionable relevance in any form of research that sought to establish a sophisticated understanding of the phenomenon. This point was also noted by Jung, Qiu and Kim (2001) when they observed that current studies exploring the digital divide were limited by their focus on three primary measuring techniques: a dichotomous comparison, which focuses on the issue of simple access or ownership (i.e. computer owner vs. non-owner); a time based measure, where more time spent online is equated with “regular use”; and a measure of activities conducted online, where the frequency of engaging in activities such as online banking and online shopping is measured. Jung, Qiu and Kim (2001) contended that these measures failed to consider the social context in which people incorporated technology into their lives. The personal and social effects of the internet must be considered in comprehending the more subtle aspects of the digital divide. They suggested that once people have access to the internet, the question to be addressed is how people can and do construct meaning from being connected. They concluded that “existing inequalities even after gaining access to the internet can directly affect the capacity and the desire of people to utilize their connections for purposes of social mobility” (Jung, Qiu & Kim, 2001, p. 8).

The need to focus on the personal and social aspects within digital divide research was also proposed by Selwyn (2004), who argued that “people’s non-use of technologies is a complex, fluid and ambiguous issue” (p. 352) and that “despite the high profile nature of the digital divide debate, academic understanding of who is making little or no use of information and communication technologies (ICTs) remains weak” (p. 352). Up to this point, digital divide research had concentrated on describing the “characteristics of those who are using ICTs or, at best simply pathologised the ‘have nots’ in terms of individual deficits” (2004, p. 355). Selwyn (2004) suggested that an individual’s interaction with ICT was not as simple as the “user”/“non-user” dichotomy applied within much of the digital divide research. He supported Frissen’s (2000) view that “knowledge of the dynamics of everyday life is indispensable to understanding the processes of acceptance of ICTs” (2004, p. 356). Thus, according to Selwyn (2004), when “focusing on non- and low-use of technologies we must begin to recognize the importance of the social” (p. 355). Selwyn (2004) pointed to the work of Brulan, who noted that resistance to technology is by no means irrational or conservative and “can only be understood in terms of the interaction between technology and its social context” (p. 355).

What Selwyn and others have suggested is that the digital divide is not a “relatively simple premise”. Rather, it is a complex issue that has many facets and sides, including personal and social elements that must be considered. Existing digital divide studies have not taken these elements into full examination. This point was also raised by Vernon Harper (n.d.). In a recent discussion paper Harper suggested that whilst the digital divide metaphor works, it focused too much attention on the divide as opposed to the divided. According to Harper (n.d.) the digital divide has been conceptualised as a hardware problem, which could be readily and easily solved once the barriers to access were removed. Harper (n.d.) questioned the legitimacy of this perspective and proposed that in reality there were two digital divides: the access digital divide (ADD) and the social digital divide (SDD). The ADD was based upon cost factors and was frequently discussed in terms of the presence of computers or internet access in the household. The SDD was “a product of differences that are based on perception, culture and interpersonal relationships that contribute to the gap in computer and internet penetration” (Harper, n.d., p. 4). It was composed of barriers relating to motivation, knowledge, skill, content and social networks. Harper (n.d.) concluded by stating that “the issues surrounding the digital divide must be redefined away from the hardware and towards humanity” (p. 5) and recommending that the scholarly community build research that explored the social, psychological and cultural differences that contributed to the SDD. A vital part of this redefinition of the digital divide proposed by Harper was the re-conceptualisation of “access” within academic and popular discourse. The issue of access, and how it has been and should be defined, is considered in the next section.

3.6 Access, access, access! 8

In recent years a growing number of scholars have called for the re-conceptualisation of access as applied to the digital divide. Selwyn (2004) noted that “access is a woefully ill-defined term in relation to technology and information” (p. 347). Van Dijk and Hacker (2003) suggested that the major obstacle in current digital divide research was the “multifaceted concept of access” (p. 315): “It is used freely in everyday discussions without an acknowledgement of the fact that there are many divergent meanings in play” (Van Dijk & Hacker, 2003, p. 315). Van Dijk and Hacker (2003) proposed that “the meaning of simply having a computer connection and a network connection” (p. 315) was the most common one in use today. They proposed, however, that there are four kinds of access: (i) motivational access, (ii) material or physical access, (iii) skills access, and (iv) usage access. Access problems of digital technology gradually shift from the first two kinds of access to the last two. When the “problems of mental and material access have been solved, wholly or partly, the problems of structurally different skills and use become more operative” (Van Dijk & Hacker, 2003, p. 316). Van Dijk and Hacker (2003) did not limit the definition of skills to the abilities to operate computers and network connections. They also included the abilities to search, select, process and apply information from an abundance of sources, and the ability to strategically use this information to improve one’s position in society. They referred to these as instrumental, informational and strategic skills respectively.

8 Quote from Andy Carvin, Keynote Address, NYU Third Act Conference, May 19, 2000.


Van Dijk and Hacker (2003) were not the first scholars to highlight the need to move beyond current understanding of “access” and, in particular, to refocus access within a skills framework. In their 2003 study, Mossberger, Tolbert and Stansbury argued that “having access to a computer is insufficient if individuals lack the skills they need to take advantage of technology” (p. 55). They proposed a broader definition of the digital divide as consisting of multiple divides: an access divide, a skills divide, an economic opportunity divide and a democratic divide. They asked: do individuals have the skills they need to participate fully in society, particularly in the economic and political arena? They suggested that without appropriate skills access was meaningless. Like Van Dijk and Hacker (2003), Mossberger et al. (2003) noted the need for both computer skills and information literacy skills.

DiMaggio and Hargittai (2001) suggested that “the dichotomous view of the digital divide as a distinction between people who do and do not have internet access was natural and appropriate at the beginning of the diffusion process” (p. 2). They recommended redefining “access” in social as well as technological terms. They noted that as technology became more prevalent in society the question would no longer be “who can find a network connection at home, work in a library or community centre”; instead the question would be “what are people doing, and what are they able to do when they go online” (pp. 3-4). Eszter Hargittai (2002; 2005) provided evidence of the skills divide. She conducted experiments with American user groups charged with tasks of finding particular information. In one experiment, a demographically diverse group of 54 subjects was charged with five internet tasks, from finding a music file and downloading a tax form to discovering a web site that compared different presidential candidates’ views on abortion. Only half of the group was able to complete all tasks. Music files were found by almost everyone (51 of the total 54) but the time needed varied from 5 seconds to 7.83 minutes. However, only 33 of the 54 subjects succeeded in finding a web site comparing candidates’ views; the time required ranged from 27 seconds to 13.53 minutes. No significant gender differences were found in this investigation, although age and education proved to be highly significant. Subjects older than 30 years completed far fewer tasks than subjects in their late teens and 20s. Moreover, they needed more time: those between 30 and 50 years old used twice as much time and those between 50 and 80, three times as much. People with a graduate degree completed more tasks and were much faster than people with no college degree. The same applied to people with 3 to 7 years experience on the internet compared to people with fewer than 3 years experience.


In another test of a random sample of 100 internet users, Hargittai (2002) found that only one subject used the Find button on web sites and that many users were not aware of the Back button. Silverstein, Henzinger, Marais and Moricz (1999) and Spink and Jansen (2006) observed primitive use of search engines. Silverstein and his colleagues (1999) discovered that 85% of users viewed only the first page of results. Spink and Jansen (2006) found approximately the same in their study of nine different search engines. The vast majority of users only made simple queries and did not use any advanced search options.

In summary, access as applied to the digital divide was slowly being re-conceptualised. With the advent of the new millennium many digital divide commentators began to acknowledge that access was not just about physical connections to technology. Digital divide research was slowly moving away from viewing the digital divide solely from a socio-economic perspective and was beginning to view digital inequality from a number of different perspectives or frameworks. In particular, there was a growing focus on the skills – both computer and information – that are required for success in the digital world. The growing number of perspectives that have been used to explore the digital divide is considered in the next section.

3.7 A theoretical framework for the digital divide?

In a 2002 paper, Carl Cuneo, Professor of Sociology at McMaster University in Canada, attempted to synthesise the growing body of digital divide research to determine if the many dimensions being explored (including race, gender, age, occupation, education, income) could be unified into a theoretical model of the digital divide. Cuneo (2002) contended that there were twelve different “theoretical perspectives of the digital divide” and that

each contributes a partial understanding but on its own remains a partial explanation of the overall digital divide. The challenge is to put them together into a synthetic understanding of the digital divide as a whole (p. 7).

An overview of each of the twelve perspectives is provided in Table 3.1.


Theoretical Perspective | Base Concept | Relation Across the Divide | Barrier | Resolution
Demographic | Population; individuals | Computer-to-person ratio | Individual access | Government programs; employment opportunities
Geographic/engineering | Data packet; nation-state | Transmission | Infrastructure | Wireless
Gerontologist | Age | Life cycle | Experience | Training
Feminist | Gender/sex | Patriarchy | Harassment | Androgyny
Psychological | Attitude; disposition | Confidence | Fear, technophobia | Long-term supportive training and socialization
Educational | Knowledge | Learning | Traditional distance education | Online education
Economic | Capital | Markets | Government/privatization | Regulation
Sociological | Occupational classes | Inequality | Unequal life chances | Equalization of conditions of opportunity
Labour | Work; skills | Exploitation via technologies | Property | Socialization
Cultural | Ethnicity | Majority-minority relations | Discrimination | Multilingualism
Disabilities | Body | Physical & mental impairments | Lack of understanding and social, economic and … | Adaptive & designed technologies (e.g. screen readers)
Political | Power | Rule | Non-democratic exercise of power | Online democracy

Table 3.1: Twelve perspectives or dimensions on the digital divide (adapted from Cuneo, 2002)


Cuneo proposed that

none of these views by themselves give us a complete understanding of the digital divide. In fact, each dimension by itself is somewhat simplistic providing a distorted view of the digital divide. There is even competition among academic and political groups as to which dimension is more important or which is more primary…Yet each view has something to contribute to an overall comprehension of this important phenomenon of the 21st century (p. 3).

In reviewing the twelve perspectives, Cuneo (2002) observed that a considerable body of literature had arisen in all perspectives except one. Cuneo indicated that very little research to date had been conducted from the psychological perspective. Indeed, Cuneo (2002) was able to identify only two studies that had explored the digital divide from this domain of enquiry (Eastin & LaRose, 2000; Wallace, 1999). Both studies used Bandura’s social cognitive theory (SCT) as a theoretical framework. According to SCT, behaviour is best understood in terms of a reciprocal relationship between behaviour, personal factors and environment. This theory is explored further in Chapter 4. Cuneo (2002) concluded that “there is an underlying psychological dimension to the digital divide that is complex and little understood; it deserves much more careful and extensive research” (p. 27).

A small but growing number of studies looking at the psychology or the personal aspects of using information technology have emerged since Cuneo (2002) published his work. In 2005 Kang, Bagchi-Sen, Rao and Banerjee used the 2002 Pew Internet survey data to explore “net dropouts” (i.e. non-users who were once online but stopped and have not gone back) and “intermittent users” (i.e. users who dropped offline for an extended period of time but who are now back online). The study explored the impact of factors such as trust, danger, confusion, cost, gender, race and education on both categories of users. The results showed that intermittent users trusted others more in general, perceived less danger, and experienced less confusion in the internet environment than the internet dropouts. The authors noted that the cost of accessing the internet was not a main factor in determining dropout from internet use, and concluded that the costs of internet connection were not aggravating the digital divide. The results of this study built on the 2003 work by Crump and McIlroy, who outlined a community based project that evaluated the use and non-use of a computing centre (hub) in a lower socio-economic urban area in New Zealand. A questionnaire was administered to 159 of the residents in the area. Prior to administering the questionnaire it was noted that the majority of the residents in the area did not use the free computing facilities. Subsequently the questionnaire focused on the following issues: access, awareness, and factors that would encourage residents to use the hub. Interestingly, the results indicated that the majority of the residents were simply “not interested” in using the computer facilities. The researchers concluded that the belief that the digital divide was solely about physical access, and that all members of the community want to engage with ICT, was flawed. They recommended that further research be undertaken exploring the nexus between personal motivation and participation in virtual community projects.

Cuneo (2002) was not the only scholar to comment on the lack of research exploring the psychological aspects of digital inequality in community. This point was also noted by Van Dijk (2005) who observed that there was a prevalence of sociological and economic research but that contributions from psychology and even from communication and education studies were relatively small. Van Dijk (2005) concluded that the digital divide could not be understood without addressing issues such as attitudes toward technology, technophobia or computer anxiety, communication in new media diffusion, educational views of digital skills and cultural analysis of daily usage patterns.

3.8 Conclusion

This chapter has provided an overview of the digital divide. The digital divide has rapidly become the accepted term to describe the disparity between those who have access to ICT and those who do not. The term has been criticised by many commentators as being too simplistic and as trivialising a genuine social problem. Like many other social controversies, there have been those who have advocated that the digital divide is a legitimate crisis, and those who have advocated that it is a myth. It could be suggested that the reason why so many scholars and commentators have proposed that the digital divide is a myth is that their observations were based on a limited understanding of the phenomenon. This limited understanding has arisen because digital divide research has not yet explored the full nature of the phenomenon. Digital divide research to date has taken the form of demographic analysis or statistical descriptions of who has had access to digital technology and who has not. This has resulted in the portrayal of the digital divide as a relatively simple premise: it is a dichotomous concept - you either have access to ICT or you do not - and this access is determined by socio-economic factors such as income, employment and education. In more recent years digital divide commentators have begun to acknowledge that “access” needs to be re-conceptualised to focus on the personal and the social aspects. Consequently, a small but growing number of studies have begun to explore the digital divide from different perspectives, including educational, cultural and sociological ones. In 2002 Cuneo noted that there had been little to no research exploring the digital divide from a psychological perspective. The current research filled this gap by undertaking a study that explored the internal forces that influence a person’s choice to incorporate, and to engage with, ICT as part of their personal information environment. In doing this the research has added to the growing body of knowledge on the digital divide.


Chapter 4: Social cognitive theory

4.1 Introduction

The previous chapter provided an overview of the digital divide. It established that current research into the digital divide has taken primarily a socio-economic perspective. Consequently, there is a growing need for the scholarly community to undertake research exploring social, psychological and cultural perspectives of digital inequality. This chapter provides an overview of social cognitive theory (SCT). SCT provides the theoretical framework for the current research’s exploration of the digital divide. The chapter has two aims. First, it provides a broad overview of SCT. This overview focuses on the key assumptions and principles, including the core constructs, of the theory. Second, the chapter examines existing research into the digital divide that has used SCT as a theoretical framework. This examination identifies the scope and key findings of the research, as well as areas for further study. In undertaking these two aims the chapter establishes the relevance of using SCT in the current research. It also outlines how the research will advance current understanding of the digital divide by combining both socio-cognitive and socio-economic factors into the research design.

4.2 Triadic reciprocality

Social cognitive theory (SCT) asserts that behaviour is best understood in terms of a triadic reciprocality (Bandura, 1997), where behaviour, personal factors and the environment exist in a reciprocal relationship in which they mutually interact and influence each other bidirectionally. Maddux (1995) suggested that this principle of “triadic reciprocal determinism is perhaps the most important assumption of social cognitive theory” (p. 120). The triadic relationship is shown in Figure 4.1.

Figure 4.1: The triadic relationship (adapted from Bandura, 1986)


The first bidirectional interaction, of personal factors to behaviour, reflects the influence of an individual’s thought and affect on action (Bandura, 1997). An individual’s expectations, beliefs, self perceptions, goals and intentions provide shape and direction to behaviour. Conversely, the behaviour an individual undertakes impacts on thoughts and emotions. That is, a person’s actions determine their thought patterns and emotional reactions (Bandura, 1997).

The second bidirectional interaction, of environment to personal factors, is concerned with the relationship between personal characteristics and environmental influences. Human expectations, beliefs and cognitive competencies are developed and modified by social influences occurring in the environment (i.e. through modelling, instruction and social persuasion). People also evoke different reactions from their social environment by their physical characteristics such as their age, size, race, sex and physical attractiveness, quite apart from what they say and do. Similarly, people activate different social reactions depending on their socially conferred roles and status (Bandura, 1997).

The third bidirectional interaction involves behaviour and the environment. An individual’s behaviour changes the environmental conditions to which they are exposed, and is in turn altered by those conditions. In short, individuals are both products and producers of their environments. Most aspects of the environment do not operate as an influence until they are activated by appropriate behaviour. For example, hot stove tops do not burn unless they are touched (Bandura, 1997). As a result, based on learned preferences and competencies, individuals choose with whom they interact and in which activities they participate. Behaviour also determines which of the various potential environmental influences will be present and what forms they will take. These environmental influences will in turn partially determine which forms of behaviour are created and activated (Bandura, 1997).

Although the three determinants (i.e. behaviour, environment and personal factors) are described as having a reciprocal relationship, reciprocity does not mean that the three determinants are of equal strength, nor do they occur simultaneously. There would be times when some determinants may be stronger than others and their influence will change for different activities, and under different situations in which the behaviour occurs (Bandura, 1997).


4.3 The role of human agency

SCT is grounded in the belief that “people can exercise influence over what they do” (Bandura, 1997, p. 3). Bandura called this belief “human agency” (Bandura, 1997, p. 3). According to Bandura, people are “agents” who are proactively engaged in their own development and are able to make things happen by their own actions. Intentionality, or the power to originate actions for given purposes, is the key feature of human agency. In short, human agency is based upon the notion of personal control, with people possessing, among other personal factors, self beliefs that enable them to exercise a measure of control over their thoughts, feelings and actions. Essentially, “what people think, believe and feel affects how they behave” (Bandura, 1986, p. 25). Human agency is firmly grounded in the understanding that individuals have certain capabilities that define what it is to be human. These capabilities are:

• People have powerful symbolising capabilities. Through the formation of symbols, such as images or words, people are able to give meaning, form and continuity to their experiences. In addition, through the creation of symbols people can store information in their memories that can be used to guide future behaviour (Bandura, 1997). It is through this process that people are able to model observed behaviour.

• People can learn vicariously by observing other people's behaviour and its consequences. This allows people to avoid trial-and-error learning and allows for the rapid development of complex skills (Bandura, 1997).

• People are self-reflective and capable of analysing and evaluating their own thoughts and experiences. Such capabilities allow for self-control of thought and behaviour (Bandura, 1997). Bandura considered this the most “distinctly human” of all the capabilities, as it is through this form of self referent thought that people evaluate and alter their own thinking and behaviour. As such, it is a prominent feature of SCT (i.e. self-efficacy).

• People are capable of self-regulation, having control over their own thoughts, feelings, motivation and actions. Self-regulated behaviour is initiated, monitored and evaluated by the individual to accomplish their own goals (Bandura, 1997).


• People's behaviour is directed toward particular goals or purposes and is guided by forethought, where forethought is a person's capability to motivate themselves and guide their own actions (Bandura, 1997). It is because of the capability to plan alternative strategies that one can anticipate the consequences of an action without actually engaging in it.

SCT takes the view that behavioural change is facilitated by a personal sense of control. If people believe that they can take action to solve a problem, they will become more inclined to do so and feel more committed to this decision. According to Bandura (1997) there are two constructs instrumental in establishing a person’s beliefs: outcome expectancy and self-efficacy. Outcome expectancy refers to a person’s belief about the outcomes that will result from a given behaviour. These outcomes can take the form of physical, social or self evaluative effects. In contrast, self-efficacy involves an individual’s beliefs about his or her ability to perform a particular behaviour. An individual’s choice of activities and behaviours, and their persistence in performance, are influenced by both outcome expectancy and self-efficacy. In other words, people are motivated to perform behaviours that they believe will produce desired outcomes and which they believe they are capable of performing. Outcome expectancy and self-efficacy are differentiated within SCT because an individual can believe that certain behaviour will result in a specific outcome, yet may not believe that they are capable of performing the behaviour required for the outcome to occur. However, Bandura (1997) noted that outcome expectancies are based largely on the individual’s self-efficacy expectations. The types of outcomes people anticipate generally depend on their judgements of how well they will be able to perform the behaviour. That is, individuals who consider themselves to be highly efficacious will expect favourable outcomes. Expected outcomes, therefore, are highly dependent on self-efficacy judgements. Bandura (1997) postulated that, on their own, expected outcomes may not add much to the prediction of behaviour. It is the construct of self-efficacy that is the central factor in determining an individual’s choice to undertake a behaviour or task. Self-efficacy, therefore, was the central focus for the current research.

4.4 The key construct: self-efficacy

Bandura (1986) described self-efficacy as "people's judgments of their capabilities to organise and execute courses of action required to attain designated types of performances". Self-efficacy is “concerned not with the skills one has but with the judgements of what one can do with whatever skills one possesses” (p. 391). More simply stated, self-efficacy is the belief a person has about their capability to successfully perform a particular behaviour or task (Cassidy & Eachus, n.d., para. 2). Self-efficacy, therefore, will enhance or impede a person’s motivation to act. Bandura (1986) noted, for example, that “people tend to avoid tasks and situations they believe exceed their capabilities, but they undertake and perform assuredly activities they judge themselves capable of handling” (p. 393). Pajares (n.d.) noted that self-efficacy has a vital part to play in people’s lives. It is a powerful mechanism for self direction and action. Self-efficacy beliefs help determine how much effort a person will expend on an activity, how long they will persevere when confronted by obstacles, and how resilient they will prove to be in the face of adverse situations – the higher the sense of efficacy, the greater the effort, persistence and resilience (Pajares, n.d.). Self-efficacy beliefs also influence an individual’s thought patterns and emotional reactions. People with low self-efficacy may believe that things are tougher than they really are, which could lead to stress, depression and a narrow vision of how best to solve a problem. High self-efficacy, on the other hand, helps to create feelings of calmness in approaching difficult tasks and activities. As a result of these influences, self-efficacy beliefs are strong determinants and predictors of the level of accomplishment that individuals finally attain (Pajares, 1997). For these reasons Bandura (1997) argued that “beliefs of personal efficacy constitute the key factor of human agency” (p. 3). It is important to note that self-efficacy is not an innate or fixed quality; it is a dynamic construct that changes over time as new information and experience are acquired (Bandura, 1997). Self-efficacy beliefs also involve a mobilisation component. That is, self-efficacy reflects a “process involving the construction and orchestration of adaptive performance to fit changing circumstances” (Bandura, cited in Gist & Mitchell, 1992, p. 185). Individuals who have the same skills may perform differently based on their use, combination and sequencing of those skills. Self-efficacy is also a task-specific construct, concerned with one’s perceived capabilities to perform given behaviours in particular situations. Being task-specific, an individual can have high self-efficacy for performing one task, but low self-efficacy for a task requiring a different set of skills (Bandura, 1997). The issue of general versus domain-specific self-efficacy is explored further in section 4.6.

Self-efficacy is a multidimensional construct consisting of three distinct and interrelated dimensions: magnitude, strength and generality (Bandura, 1997). Self-efficacy magnitude refers to the level of difficulty a person believes he or she is capable of performing. For example, an individual possessing a high magnitude of self-efficacy will view themselves as having the ability to accomplish difficult tasks, while an individual with a low self-efficacy magnitude will view themselves as having the ability to perform only simple forms of the behaviour. Self-efficacy strength refers to the level of conviction a person has that he or she can perform a task or behaviour. For example, individuals with weak self-efficacy beliefs will be frustrated more easily by obstacles relevant to their performance and will respond by reducing their perceptions of their capability. Conversely, individuals with strong self-efficacy beliefs will not view difficult tasks as deterrents but instead will retain their sense of self-efficacy and, through continued persistence, are more likely to overcome obstacles. Self-efficacy generality refers to the extent to which a person’s success or failure to perform a specific task or behaviour will influence the person’s self-efficacy in other tasks or behaviours. Generality can vary along a number of different factors, including the degree of similarity between the tasks or behaviours and the qualitative features of the situation under which the task or behaviour is being performed. For example, an individual may believe in his or her capability of performing a specific task or behaviour but only under a given set of circumstances, and thus would be less inclined to attempt the behaviour when the circumstances were altered, or to attempt a task or behaviour that is clearly related but somewhat different (Bandura, 1997).

4.5 What self-efficacy is not

It is important to distinguish carefully between self-efficacy and other self referent constructs. Bandura (1997) has emphasised the distinction between self-efficacy and self esteem: "self-efficacy is concerned with judgements of personal capabilities whereas self esteem is concerned with judgements of self worth" (p. 11). In making this statement Bandura (1997) noted that there was no fixed relationship between beliefs about one's capabilities and whether one likes or dislikes oneself. For example, an individual may have a low sense of self-efficacy for a particular behaviour but experience no loss of self esteem because they do not invest their self worth in that behaviour. Similarly, individuals may have high self-efficacy for a particular behaviour but take no pride in performing it well. Bandura (1997) suggested that "people tend to cultivate their capabilities in activities that give them a sense of self worth" (p. 11) but that overall "people need much more than high self esteem to do well in given pursuits" (p. 11). Similarly, Bandura (1997) quite explicitly argued that self-efficacy and self confidence referred to entirely different things. In fact, Bandura (1997) posited that self confidence was a "catchword…rather than a construct embedded in a theoretical system" (p. 382). It was a "nondescript term that refers to strength of belief but does not necessarily specify what the certainty is about" (Bandura, 1997, p. 382). Self-efficacy, in contrast, specifies both the capability in question and the strength of the belief.

Self-efficacy also differs considerably from the well established locus of control construct. Bandura (1997) observed that "beliefs about whether one can produce certain actions (perceived self-efficacy) cannot by any stretch of the imagination be considered the same as beliefs about whether actions affect outcomes (locus of control)" (p. 20). In making this observation Bandura cited the available empirical evidence, such as Bandura (1991) and Wollman and Stouder (1991). Additionally, self-efficacy must be viewed as a broader construct than that of effort expectancy "because it encompasses much more than effort determinants of performance" (Bandura, 1997, p. 126). According to Bandura (1997) effort was only one of the many factors that determined the level and quality of a person's performance. People judge their capabilities for undertaking challenging activities not just on the degree of effort they give but also in terms of the knowledge, skills and strategies they have at their disposal. Performances that "call for ingenuity, resourcefulness and adaptability depend more on adroit use of skills, specialized knowledge, and analytical strategies than on simple dint of effort" (Bandura, 1997, pp. 126-127). In addition, people who coped poorly with stressors expected that their poor performances would be determined by their self debilitating thought patterns rather than by how much effort they invested. Indeed, "the harder they try the more likely they are to impair their execution of the activity" (Bandura, 1997, p. 127). Bandura (1997) suggested that expectancy theorists identified effort as the sole cause of performance because they were focusing only on how hard people work at routine activities unimpeded by obstacles or threats. For this reason, Bandura (1997) proposed that it was people's perceived perseverant capabilities (i.e. their belief that they can exert themselves sufficiently to attain required levels of productivity) that were most important or "germane" in determining how much a person can and does accomplish (Bandura, 1997).

Self-efficacy is also quite distinct from the mental representation of personal attributes (for example, statements such as "I am a good person", "I am a talented dancer" or "I have poor social skills"). These are not seen as self-efficacy appraisals but as aspects of self knowledge. This point is important in that it clearly shows that people's self-efficacy is not of a general nature but related to specific situations (Bandura, 1997). Individuals can judge themselves to be very competent in a specific task and less competent in another task. For example, a person can be convinced that he or she is able to run 10 kilometres but be quite certain he or she is not able to run a marathon (Bandura, 1997). This means that self-efficacy is related to specific situations and tasks, which is not the case for the concepts highlighted earlier such as self esteem, self confidence and locus of control. Unlike self-efficacy, these are personal characteristics of individuals, which have a certain stable influence on people's behaviour. In other words, for each individual it can be established whether he or she has much or little self confidence, but not whether this individual generally has a high or low measure of self-efficacy. No global sense of self-efficacy exists. Thus, self-efficacy is not a personality trait but a temporary and readily influenced characteristic that is strictly situation and task related (Bandura, 1997). This is an important point, and one that must be carefully considered in the design and implementation of the current research.

4.6 General or domain specific?

Recently it has been proposed that there may be a generalised sense of self-efficacy that refers to a global confidence in one's coping ability across a wide range of demanding or novel situations. According to Sherer et al. (1982) "an individual's past experiences with success and failure in a variety of situations should result in a general set of expectations that the individual carries into new situations" (p. 664). Several general self-efficacy measures have been developed and validated in recent years (Sherer et al., 1982; Schwarzer & Jerusalem, 1995). Bandura (1997), however, posited that self-efficacy beliefs were "contextual" or "domain" specific and that the concept of general self-efficacy "violates the basic assumptions…of self-efficacy beliefs" (p. 48). He argued that

it is unrealistic to expect personality measures cast in generalities to shed much light on the contribution of personal factors to psychosocial functioning in different task domains and contexts and under diverse circumstances (Bandura, 1997, p. 40).

As such, Bandura (1997) warned that general self-efficacy measures were not appropriate for tests of self-efficacy theory and that these measures have limited predictive utility. Bandura (1997) advised that self-efficacy measures must be tailored to the specific domain of functioning being explored. He contended that "there is no all purpose measure of perceived self-efficacy" (Bandura, 2005, p. 307) and that the "one measure fits all" approach had limited explanatory and predictive value because the items in an all purpose measure had little or no relevance to the selected domain of functioning. But Bandura (1997) also highlighted that "self-efficacy is commonly misconstrued as being concerned solely with specific behaviour in specific situations" (p. 49). He advised that this was "an erroneous characterization" (Bandura, 1997, p. 49) and suggested that domain particularity did not necessarily mean behavioural specificity. One can distinguish among three levels of self-efficacy assessment. The most specific level measures perceived self-efficacy for a particular performance under a specific set of conditions. The intermediate level measures self-efficacy for a class of performances within the same activity domain under a class of conditions sharing common practices. The most general and global level measures belief in self-efficacy without specifying the activities or the conditions under which they must be performed. Bandura (1997) noted that "the optimal level of generality at which self-efficacy is assessed varies depending on what one seeks to predict and the degree of foreknowledge of the situational demands" (p. 49). This issue of self-efficacy, domain of functioning and measurement, as applied within the context of the current research, is explored further in Chapter 5.

4.7 Sources of self-efficacy

Bandura (1997) suggested that an individual acquires information about their personal self-efficacy from four primary sources: (a) enactive attainment, (b) vicarious experience, (c) verbal persuasion, and (d) physiological states.

• Enactive attainment
Enactive attainment or actual experience provides the most "authentic evidence of whether one can muster whatever it takes to succeed" (Bandura, 1997, p. 80). For this reason Bandura (1997) described enactive attainment as the most influential and reliable source of self-efficacy information. Success in performing a behaviour (mastery) enhances self-efficacy, while repeated failure in performing a behaviour lowers self-efficacy, especially if "the failures occur before a sense of efficacy is firmly established" (Bandura, 1994, para. 7). Bandura (1997) also noted that developing a sense of efficacy through mastery experiences was not simply a matter of adopting ready-made habits. Rather, it involved acquiring the cognitive, behavioural and self regulatory tools needed to create and execute appropriate actions in ever changing life circumstances. If people experienced only easy successes they learned to expect quick results and were easily discouraged by failure. A resilient sense of efficacy required experience in overcoming obstacles through perseverant effort. However, performance alone did not strengthen self-efficacy. Other factors such as preconceptions of ability, the perceived difficulty of the task, the amount of effort expended, the external aid received, the circumstances under which the task was performed, the past pattern of successes and failures, and the way these past experiences were cognitively appraised and recalled all affect the individual's appraisal of self-efficacy (Bandura, 1997).

• Vicarious experience
Self-efficacy beliefs are also influenced by vicarious experience, that is, by seeing other people successfully perform a behaviour (Bandura, 1997). In this way people can serve as examples or role models. Seeing a person similar to oneself succeed by continued effort increases the observer's belief that they too possess the capabilities to master the behaviour. For the same reason, observing others fail despite high effort will decrease the observer's judgement of their own self-efficacy and weaken their efforts. The impact of vicarious experience on self-efficacy is strongly influenced by the observer's perceived similarity to the role model: the greater the assumed similarity, the more persuasive are the model's successes and failures. Bandura (1997) noted that vicarious experiences provided more than just a social standard against which to judge one's own capabilities. People would also seek out proficient models who possessed the competencies to which they aspired. Through their behaviour and expressed ways of thinking, competent models transmitted knowledge and taught observers effective skills and strategies for managing environmental demands. Whilst vicarious experience is a weaker source of self-efficacy than enactive attainment, there are certain situations in which it is extremely effective. When the individual has never been exposed to the behaviour, or has had little experience with it, vicarious experience is likely to have a greater impact (Bandura, 1997). Additionally, when clear guidelines for performance are not explicated, personal efficacy will be more likely to be influenced by the performance of others (Bandura, 1997).

• Verbal persuasion
Verbal persuasion is the most widely used source of self-efficacy information because it is the easiest to use (Bandura, 1997). It involves verbally telling an individual that he or she has the capabilities to master the given behaviour. Verbal persuasion is a weaker source of self-efficacy than the previous two sources because it does not involve the individual's own experiences or the observed experiences of others. However, verbal persuasion can be a good supplement to the other sources. If individuals are convinced of their abilities they will be more inclined to persevere and will not give up easily, although this is only the case with individuals who already think they are able to carry out the task. Unrealistic attempts to boost self-efficacy via verbal persuasion are quickly disconfirmed by the disappointing results of one's efforts. Bandura (1997) highlighted that an important point with verbal persuasion was that positive encouragement must be accompanied by situations in which success is attainable. Bandura (1997) also noted that it was easier to weaken self-efficacy beliefs through negative appraisals than to strengthen them through positive encouragement.

• Physiological feedback
People also rely on their physiological and emotional states in judging their capabilities to perform a specific behaviour (Bandura, 1997). Stress reactions and tension are traditionally perceived as signs of poor performance. Consequently, people expect to be more successful when they are not stressed than when they are; stress can therefore have a negative influence on self-efficacy. Mood also affects people's judgements of self-efficacy, with positive mood enhancing perceived self-efficacy and despondent or depressed moods decreasing it. Physiological feedback is the weakest source of self-efficacy because it is the least concrete, requiring people to rely on their physical and emotional states to judge their capabilities (Bandura, 1997).

When forming a judgement of efficacy, individuals have to weigh and integrate information from all four sources and then process this information cognitively (Bandura, 1997). Bandura (1997) suggested that this cognitive process was not well understood, and that many factors would have an influence, including personal, situational, social and time factors. An additional complicating factor is that because people are judging themselves, they may not be entirely objective (Bandura, 1997). This, in turn, may result in faulty appraisals. Among the most important interpretive factors involved in this process are the reasons an individual provides for their success or failure at a given task. For example, if success is attributed to personal skill rather than chance, then this is likely to enhance self-efficacy perceptions (Bandura, 1997). The impact of these four sources on developing self-efficacy for specific behaviours is considered further within the context of the current research in Chapter 9.

4.8 Consequences of self-efficacy

Self-efficacy beliefs influence how people feel, think, motivate themselves and act. To this end Bandura (1997) proposed that self-efficacy beliefs influence human functioning via four processes: cognitive, motivational, affective and selective processes. Bandura (1997) noted that these processes "operate in concert, rather than in isolation, in the ongoing regulation of human functioning" (p. 16).

• Cognitive processes
Efficacy beliefs can influence the thought patterns that enhance or undermine performance (Bandura, 1997). This influence takes several forms. People with high self-efficacy will set higher and more challenging goals for themselves and will have a firmer commitment to achieving them; will visualise success scenarios that provide positive guides and supports for performance; and will engage in analytical thought processes in reaction to setbacks and difficulties (Bandura, 1997).

• Motivational processes
Self-efficacy beliefs play a key role in the self regulation of motivation (Bandura, 1997). Motivation is regulated by the expectation that a given course of behaviour will produce certain valued outcomes. People act not only on their beliefs about what they can do, but also on their beliefs about the likely outcomes of performing the behaviour. Self-efficacy beliefs contribute to motivation in several ways. They determine the goals people set for themselves, how much effort they expend, how long they persevere in the face of difficulties, and their resilience to failure. Individuals who have a high level of self-efficacy are more persistent in the face of difficulties than those with a lower level of self-efficacy. Also, in the case of failures or setbacks, people with low self-efficacy tend to give up or reduce their effort, whereas those with high self-efficacy generally intensify their efforts until they succeed (Bandura, 1997).

• Affective processes
Self-efficacy also plays a role in the self regulation of affective states (Bandura, 1997). More specifically, self-efficacy affects a person's emotional state in three ways: through thought, action and affect. People with high self-efficacy will have better control over disturbing thoughts; will act in ways that make the environment less threatening; and will improve aversive affective states once they are aroused. In this mode of affective control, efficacy beliefs regulate stress and anxiety through their impact on coping behaviour. The stronger the sense of self-efficacy, the bolder people are in taking on the problematic situations that lead to stress, and the greater their success in shaping those situations more to their liking (Bandura, 1997).

• Selective processes
Self-efficacy beliefs can play a role in which activities or environments a person chooses to pursue or avoid (Bandura, 1997). People will avoid tasks and situations they believe exceed their capabilities while pursuing those they feel competent to perform. Through the choices they make, people cultivate different competencies, interests and social networks. As such, any factor that influences choice behaviour can profoundly affect the direction of personal development. The higher a person's self-efficacy, the more challenging the activities they select (Bandura, 1997).

In summary, self-efficacy influences the choices people make, their aspirations and the amount of effort they will put into reaching certain goals. It also determines how long they will persevere in the face of setbacks or failures, their thinking patterns and the amount of stress they will experience.

4.9 Self-efficacy and information literacy: closely linked concepts

In 2003 Kurbanoglu proposed the existence of a connection between information literacy, lifelong learning and self-efficacy. He suggested that self-efficacy was a key component of self regulation; that self regulation was an essential ingredient of information literacy; and that information literacy was a core element of lifelong learning. This relationship is illustrated in Figure 4.2.

Kurbanoglu's (2003) proposition was grounded in the belief that information and computer literacy skills had become the "necessary intellectual ingredient" (p. 636) of an individual's life. However, he also noted that learning these skills was not enough; an individual must also develop confidence (i.e. self-efficacy) in the skills being learned. In other words, success was not simply based on the possession of the necessary skills for performance; it also required the self-belief to use those skills effectively. Therefore, apart from having information and computer literacy skills, individuals in today's society must also feel confident and competent in the use of these skills (i.e. self-efficacy).

Figure 4.2: Connection between self-efficacy and information literacy as proposed by Kurbanoglu, 2003

In making this point Kurbanoglu (2003) referred to the self-efficacy literature. The knowledge and skills an individual possessed played a crucial role in the choices made and the courses of action pursued. However, Bandura (1997) identified that people's levels of motivation, affective states and actions were based more on what they believed than on what was objectively true. Individuals tended to select tasks and activities in which they felt competent and confident and avoided those in which they did not. Kurbanoglu (2003) proposed that this was one reason why self-efficacy was important for lifelong learning. If individuals felt competent and confident about their information literacy skills they would willingly undertake information problem solving activities and would more easily become self regulated learners. Because a high level of self-efficacy leads to a desire and willingness to act, and to risk trying a new behaviour, it is important for the application of information literacy skills in lifelong learning.

Self-efficacy beliefs also helped to determine how much effort individuals would expend on an activity, how long they would persevere when confronting obstacles, and how resilient they would be in the face of adverse situations. Individuals with positive self-efficacy expected to succeed and would persevere in an activity until the task was completed. On the other hand, individuals with low self-efficacy anticipated failure and were less likely to attempt or persist in challenging activities. Persistence and resilience are two factors crucial for information problem solving, self regulated learning and lifelong learning. This was the second reason why self-efficacy was important for lifelong learning (Kurbanoglu, 2003).

To test his hypothesis – that there was a relationship between self-efficacy, information literacy and lifelong learning – Kurbanoglu (2003) conducted a study exploring university students' self-efficacy for information and computer literacy. An information literacy self-efficacy scale was developed and validated for the purposes of the study. Using 179 university students from Hacettepe University in Turkey, the study concluded that there was a relationship between self-efficacy and information literacy. Unfortunately the self-efficacy scale developed for the Kurbanoglu study was not developed in full compliance with Bandura's recommendations and, as such, this must be considered when interpreting the final findings. Nonetheless the study was valuable in that it highlighted the relationship between self-efficacy and information literacy and raised the need to explore this relationship further.
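Scale validation of the kind Kurbanoglu (2003) reports typically includes an internal-consistency check such as Cronbach's alpha. As a minimal sketch of that step only – the data and item count here are invented, and this is not Kurbanoglu's actual procedure – alpha can be computed from a respondents-by-items score matrix as follows.

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for a (respondents x items) matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                          # number of items
    item_vars = scores.var(axis=0, ddof=1)       # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of respondents' totals
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Invented responses: 6 respondents x 4 Likert-type items
data = [[4, 5, 4, 4],
        [3, 3, 2, 3],
        [5, 5, 5, 4],
        [2, 2, 3, 2],
        [4, 4, 4, 5],
        [3, 2, 3, 3]]
print(round(cronbach_alpha(data), 2))  # ~0.93 for this toy matrix
```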

Kurbanoglu was not the first to hypothesise a link between information (including computer) skills and self-efficacy. Wilson (1997) included self-efficacy in the 1996 revision of his general model of information seeking behaviour. In making revisions to his model Wilson (1997) noted that "information science does not have a monopoly" (p. 563) on attempting to understand how people seek and make use of information. For this reason his revised model drew from literature in other fields including decision making, psychology, consumer behaviour, innovation research, health communication and information systems. Wilson (1997) included self-efficacy in his revised model because "we could argue that [self-efficacy] can be applied as a general concept determining information seeking behavior" (p. 563). He provided the following example:

an individual may be aware that use of an information source may produce useful information, but doubt his or her capacity properly to access the source, or properly to carry out a search. In such a case failure to use the source might occur (Wilson, 1997, p. 563).

For these reasons Wilson included self-efficacy as one of two “activating mechanisms” that performed an intermediating role between when a person determined their information need and when the person initiated action to satisfy that need. Risk/reward theory was included as the other activating mechanism.


A number of researchers have explored the influence of self-efficacy on information seeking. In a series of studies in the early to mid 1990s Nahl (1993; 1996; Nahl and Tenopir, 1996) found evidence that individuals (college and high school students) displaying high levels of self-efficacy outperformed those with lower levels in relation to a range of factors including search efficiency, success and satisfaction. She noted that "novices who expected to be successful on a search task are more efficient and adaptive searchers than those who express doubt and a lack of self confidence in their ability to carry out successful searches" (Nahl, 1996, p. 100). Nahl (1996) recommended that affective elements, such as self-efficacy, should be included as a core part of any training programme focused on developing information seeking skills. Support for this point was found in a 2000 study by Ren, in which eighty-five university students were questioned before and after receiving library instruction on how to use electronic information. The study found that students' self-efficacy in electronic searching increased after the training. Ren (2000) concluded that

library instruction has the potential to induce students to engage in electronic information searches on their own, it not only teaches technical search skills but also cultivates self-efficacy beliefs. Self-efficacy and electronic information searching will reinforce each other so that higher self-efficacy leads to more frequent and effective electronic searching which in turn further enhances self- efficacy (p. 328).

Hill and Hannafin (1997) found that self-efficacy affected both the number and type of strategies employed in online information searching by university students. Students possessing higher self-efficacy employed more strategies, at higher levels, and were more willing to engage in exploration of the system. Hill and Hannafin (1997) concluded that increased exploration, as developed through increased levels of self-efficacy, appeared to create more opportunities, thus increasing the participant's prospects of locating the desired information. In a 1999 study examining the searching for government information by eighty-one small business executives, Ren found that when users had higher self-efficacy in using an information source they were more likely to use that source. In particular, executives with higher internet self-efficacy used the internet more frequently for government information searches than those with lower scores. Similarly, Waldman (2003) found that college students' self-efficacy was linked to increased library and electronic resource use. Thompson, Meriac and Cope (2002) found links between internet self-efficacy and the number of items correctly retrieved in an experiment in which participants were required to search to find the names of industrial organisational psychologists. In studying the information seeking behaviour of working adults, Brown, Challagalla and Ganesan (2001) found that "employees with high self-efficacy effectively seek, integrate and use information to increase role clarity and performance, whereas employees with low self-efficacy do not" (Brown et al., 2001, p. 1049). Ford and Miller (1996) published the results of a study using a specially constructed inventory designed to measure perceptions of and approaches to internet based information seeking. Self-efficacy was included as part of the inventory. The study noted a relationship between gender, self-efficacy and information seeking. Females reported being unable to find their way effectively around the internet, getting lost, not being in control and tending only to look at things suggested to them. All of these features were linked, in a further study by Ford, Miller and Moss (2001), with low levels of retrieval success. In the latter study links were also found between poor retrieval effectiveness and fear of failure and poor time management. Similarly, studies by Tsai and Tsai (2003) exploring the web searching of university students at a Taiwanese university concluded that students with high internet self-efficacy have better information searching strategies than those with low internet self-efficacy. Whilst the studies outlined here have provided some evidence of the link between self-efficacy and the ability to find information (a core part of information literacy), it must be noted that several of the studies used problematic self-efficacy scales (i.e. Ford & Miller, 1996; Ford, Miller & Moss, 2001; Waldman, 2003). Bandura (1997) provided clear and quite prescriptive guidelines on how to effectively measure self-efficacy to ensure reliability and validity.

In summary, a small but growing body of research has begun to highlight the link between self-efficacy and information literacy skills, and especially the relevance of self-efficacy to finding information in online environments. This body of research has thus provided support for the use of self-efficacy within the current research. The relevance of self-efficacy to people's information practices and worlds within the current internet age was noted by Bandura (2000) himself. In his keynote address at the 2000 New Media in the Development of Mind conference held in Naples, Italy, Bandura noted:

we are entering a new era in which the construction of knowledge will rely increasingly on electronic inquiry. At present about 50% of information is available solely in electronic form. Before long most information will be available only in electronic form. Those lacking internet literacy will be cut off from critical information and disadvantaged in managing their daily life. Constructing knowledge through internet based inquiry is a complex task. Information seekers face an avalanche of information in diverse sources of varying quality. Small changes in strategies can lead down radically different information pathways. It is hard to know whether one is on the right track or on an unproductive one. Knowing how to access, process and evaluate this glut of information is vital for knowledge construction and cognitive functioning. People who doubt their efficacy to conduct productive inquiries and to manage the electronic technology quickly become overwhelmed by the informational overload. Social Cognitive Theory provides guides for building the personal efficacy and cognitive skills needed to use the internet productively and creatively (Bandura, 2002, p. 5).

4.10 Application of self-efficacy in understanding the digital divide

The construct of self-efficacy has had a relatively brief history in that it was introduced into academic discourse only in 1977, with Bandura's publication of Self-efficacy: toward a unifying theory of behavioural change. Since this time the self-efficacy construct has been explored and tested in various disciplines and settings. Indeed, in 1995, Maddux noted that self-efficacy "has generated more research in clinical, social and personality psychology in the past decade and a half than other such models and theories" (p. 4). The construct has been applied to such diverse areas as school achievement, emotional disorders, mental and physical health, career choice and sociopolitical change. In recent years a significant number of studies have arisen exploring the concept of computer self-efficacy. Computer self-efficacy refers to the judgement of one's capability to use a computer (Compeau & Higgins, 1995). Over the years a number of computer self-efficacy scales have been developed and validated (Murphy, Coover & Owen, 1989; Compeau & Higgins, 1995). The studies to date have revealed that, in general, computer self-efficacy was an important personal factor that influenced an individual's decision to use computers. However, it has only been in very recent years that internet self-efficacy – the judgement of one's capability to use the internet – has begun to be explored in the academic literature. A small number of internet self-efficacy scales have been developed and validated 9 (Eastin & LaRose, 2000; Hnilo, 1997; Maitland, 1996; Torkzadeh & Van Dyke, 2001). Like the studies exploring computer self-efficacy, these studies found a link between an individual's internet use and their level of self-efficacy. The computer and internet self-efficacy studies have predominantly drawn participants from educational or workplace settings, and were not undertaken to explore the notion of the digital divide per se. Thus, whilst they have provided confirmation that self-efficacy does have an influencing role in people's decisions to use technology, they do not provide insight into the issue of the "haves" versus the "have-nots" in community. It is important to note that the digital divide concept has been studied by using information and communication technology such as the internet and personal computers as the focus or variable of enquiry. It is therefore not surprising that studies exploring the digital divide from a socio-cognitive perspective have done so by examining people's internet or computer self-efficacy.

9 It is important to note that the internet self-efficacy scales discussed here differ from the scales mentioned in the previous section, in that the internet self-efficacy scales adopt a broader focus on internet use (e.g. email, chat rooms, searching) whereas the self-efficacy scales discussed in the previous section have a limited focus (e.g. online searching).

To date, only four studies have explored the digital divide using SCT. One of the first studies examining the psychology of the digital divide was undertaken at Michigan State University. In considering the digital divide, Eastin and LaRose (2000) noted that whilst the "importance of class and ethnicity cannot be denied" (para. 4) in understanding digital inequality, it was also important to consider psychological barriers such as self-efficacy. They proposed that all "internet users face psychological as well as socio-economic and racial barriers" (Eastin & LaRose, 2000, para. 4). Using a convenience sample of 171 university students from an introductory communication class at a large Midwestern university, Eastin and LaRose (2000) conducted a study that explored internet self-efficacy and its relationship to internet use. An eight item internet self-efficacy scale was developed and validated for use in the study. The study findings indicated that self-efficacy was a significant predictor of internet use, and that prior internet experience was the strongest predictor of internet self-efficacy. The researchers concluded that "our alternative formulation of the digital divide problem is by no means intended to minimize the role played by race and class discrimination in creating unequal access to the internet" (Eastin & LaRose, 2000, para. 47). In fact, the researchers noted that there were likely to be race and class differences in internet self-efficacy as well. In drawing their conclusions Eastin and LaRose (2000) highlighted that the convenience sample used in the study restricted the generalisability of the results. This was a significant point in that the results could not be easily transferred to the wider population. Additionally, the participants used did not necessarily allow for a true examination of the digital divide. Digital divide studies to date (see Chapter 3) have clearly revealed that individuals with higher levels of education (i.e. university) were less likely to be among the digital "have-nots" than those with lower levels of education. Given that the participants in the Eastin and LaRose (2000) study were college students (i.e. they were clearly working towards higher levels of education), and that no demographic data was provided, any conclusions drawn about internet self-efficacy and its role in the digital divide were suggestive at best.
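The analytic logic of studies such as Eastin and LaRose's – testing whether self-efficacy predicts internet use over and above demographic variables – can be sketched as a two-step ordinary least squares regression. The sketch below is a generic illustration only: the variable names, coefficients and synthetic data are invented and do not reproduce the published study's model or data.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200

# Synthetic stand-ins for survey responses (invented for illustration)
age = rng.integers(18, 75, n).astype(float)
income = rng.integers(1, 6, n).astype(float)        # 5-band income category
self_efficacy = rng.uniform(1, 7, n)                # e.g. a 7-point scale score
internet_use = 2 + 0.8 * self_efficacy + rng.normal(0, 1, n)  # toy outcome

# Step 1: demographic predictors only; Step 2: add internet self-efficacy
X1 = sm.add_constant(np.column_stack([age, income]))
X2 = sm.add_constant(np.column_stack([age, income, self_efficacy]))

step1 = sm.OLS(internet_use, X1).fit()
step2 = sm.OLS(internet_use, X2).fit()

# The increase in R-squared from step 1 to step 2 indicates what
# self-efficacy adds beyond the demographic predictors.
print(step1.rsquared, step2.rsquared)
```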


In 2001 Tonja Lynne Ringgold completed her PhD research at Morgan State University. Ringgold (2001) conducted a study at an urban community college in Baltimore City, Maryland. Unlike the Eastin and LaRose (2000) study, which focused on internet self-efficacy, Ringgold's work explored the impact of computer self-efficacy on computer use by minority students. A total of 562 students participated in the study. The majority of the students involved in the study were African American (77%). Only 7% of participants were European American (i.e. white) and the remaining 16% of participants identified themselves as from other ethnic groups including Hispanic and Native American. In recent years the US Government has reported huge numbers of Americans (including African Americans) owning computers and using email. Despite this, a digital divide between whites and minorities still persisted. Ringgold undertook her study in an effort to shed light on this situation. Like the Eastin and LaRose (2000) study, Ringgold proposed that self-efficacy would provide an alternative to socioeconomic explanations of the digital divide and computer usage. The study revealed that experience with computers was a strong influence on computer self-efficacy, and that higher computer self-efficacy led to increased computer use. Based on this finding Ringgold (2001) concluded that the major policy implication was that colleges needed to ensure that everyone had equal access to computers: "students with frequent and meaningful access to computers are more likely than others to use it continuously" (pp. 124-125). The results of this study provided clear evidence of the impact of self-efficacy on computer use. However, like the Eastin and LaRose (2000) study, the participants in the Ringgold study – African American college students – limited the extent to which the results could be generalised to other populations, and thus any conclusions drawn about computer self-efficacy and its role in the digital divide were suggestive at best.

Completing his PhD research at the University of Alabama, Foster (2001) undertook a study that examined the "possibility that there are some major social (internal) components behind African American students' reluctance to use technology" (p. 68). Specifically, the study examined the extent to which self-efficacy in general, and computer self-efficacy and perceptions of computers in particular, interacted with race and gender. The study was conducted in response to the growing number of studies that had explored the digital divide in the US and concluded that African Americans were more likely to be included among the digital "have-nots". Foster (2001) noted that these studies were conducted to "highlight the differences rather than eliminate them" (p. 70). Foster's work explored motivation as a way of understanding and responding to the growing digital divide between African Americans and European Americans (whites). A study was conducted in Alabama with both African American and European American high school and college students. All students completed a self administered questionnaire, while the high school students also took part in an "on task observation" to examine the impact of self-efficacy on students' participation in computer classes.

Whilst the study found no difference between students on either general or computer self-efficacy based on race, it was noted that, when comparing mean scores, European American students reported higher mean scores on computer self-efficacy than African American students, and that African American males reported lower computer self-efficacy than any other group (including females) among both high school and college students. This last point was interesting as it did not support previous studies that had reported males having higher self-efficacy scores. Whilst this was a non-significant result, it was interesting to consider it for the high school students against the "on task observations" that they completed. During the computer tasks African American males took longer than any other group to get on task. They appeared to be hesitant to engage in the tasks, and asked for less teacher assistance than any other group. In contrast, the African American females, who obtained higher levels of self-efficacy than the African American males, were more systematic in their approaches to problem solving than the males. They used notepads to sketch out possible paths to solutions before engaging the technology, were more vocal and willing to ask for assistance from neighbours and the instructor, and were willing to establish support groups in completing the computer activity. They were quick to offer their own assistance if their neighbour was struggling. The study revealed that there was a significant correlation between the amount of time spent at the task and the race and gender of the subject. More specifically, African American males had lower computer self-efficacy than any other group and consequently spent significantly less time attending to the computer related task than any other group. The findings also noted a high correlation between computer perceptions and self-efficacy: as students' computer self-efficacy rose, their perceptions of computers and computer users became more positive. Foster and Starker (2003) concluded that the study revealed "a cycle of defeat breeding defeat, and success giving birth to behavior that brings around more success" (p. 8). According to Foster and Starker (2003), the students' low computer self-efficacy led to low levels of involvement with computer related tasks; this in turn created an environment conducive to future failures and more negative experiences with technology. Those failures added to the already existing low self-efficacy and reaffirmed the idea that computers are "not for me". Thus, computer use came to be viewed negatively and even less time was spent on the task. Foster and Starker (2003) argued that "all of these components come together to create what we know as the digital divide" (p. 8). The work by Foster, in collaboration with Starker, clearly highlights the important place of psychological factors such as self-efficacy, and attitudinal factors such as perceptions of computers, in people's engagement with technology. The study also revealed that race and gender were not by themselves always indicators of who would or would not be part of the digital "have-nots". However, the social and cultural factors unique to the study's participants (i.e. African Americans) suggest that the findings are not easily generalisable to the wider population.

The fourth and most recent study exploring the digital divide from the SCT perspective was conducted in 2005. It was also the only study to take place outside of the US. Lam and Lee (2005) explored the role of internet self-efficacy in older adults' usage of the internet in a community ICT centre. The researchers noted that adults aged 55 or older are frequently considered to be among the digital "have-nots" in Hong Kong, and that in an effort to bridge the digital divide the Chinese government (like many others around the world) had in recent years been offering training programs and establishing computer facilities for the general public. Lam and Lee (2005) proposed that offering training programs and enabling access to facilities were not sufficient on their own. Drawing upon over 1000 participants, an exploratory investigation was conducted into the voluntary adoption by adults aged 55 or older of the computer facilities provided by the government. In particular, the study focused on the older adults' intention to use the internet. Noting that prior studies had indicated that older adults tend to have lower levels of self-efficacy when confronted with the idea of doing something new, Lam and Lee (2005) used the self-efficacy construct (among several others) as the focal point for their study. The study concluded that the more confidence the elderly had, the less worried they were about using computers and the internet; internet self-efficacy exerted a strong influence on usage intention.

In summary, the four studies outlined here have helped to expand current understanding of the socio-cognitive factors that impact upon a person's willingness to engage with information and communication technology. However, three important aspects need to be highlighted. Firstly, the participants used (i.e. college students, African American high school students and older adults) were drawn from very specific subsections of the population, so the findings are not easily generalisable to other sections of the population. Additionally, the studies have not included a focus on all members of the general population – that is, the "everyday person". As such, the studies have shed only limited light on the impact internet self-efficacy has had on the digital divide within the broader community. Secondly, the studies have not incorporated both socio-cognitive and socio-economic factors into the research design, thus restricting the degree to which a full understanding can be achieved of the contribution each factor makes to the digital divide. Finally, no studies have taken place in Australia. This research addressed these gaps.

4.11 Criticisms of social cognitive theory

SCT, and self-efficacy specifically, is not without its critics. In using SCT as the theoretical framework in the current research it was important to be aware of the limitations associated with the theory. In 1989 Lee argued that the ability of the concept of self-efficacy to explain human behaviour is largely illusory. Lee (1990) contended that since many theories of psychology (including those that incorporate the self-efficacy concept) rely on unobservable variables, and as such cannot be tested in a scientific way, they were "useless in understanding or predicting behavior although they may be seductive ways of talking about that behavior afterwards" (p. 143). Lee (1989), therefore, proposed that whilst SCT, and self-efficacy in particular, had strengths as "a metaphor for describing human behaviour its weakness is that it is not a model for explaining behavior" (p. 117). In short, Bandura's self-efficacy theory, according to Lee (1989), was "a vague descriptive model, not an explanatory theory" (p. 122). Many of the issues and criticisms surrounding self-efficacy have centred on the point raised by Lee, as well as the question of whether self-efficacy is a predictor of behaviour or a cause of behaviour. In 1992 Hawkins readily acknowledged that there was "an abundance of evidence indicating a relationship between self-efficacy and behavior" (p. 252) and that "self-efficacy has a certain utility in terms of predicting behavior" (p. 251). However, he concluded that self-efficacy had "no claim to being a cause of behavior" (p. 251). He stated: "I would be pleased to support the theory rather than criticise it, if it were not for the claim of causation" (p. 236). The criticisms of self-efficacy have originated out of a predominantly behaviourist perspective and, as such, the critics may never be appeased. In the last ten years SCT generally, and self-efficacy specifically, has been used within a broad range of disciplines and has been measured and reported in a growing number of studies. Overall, these investigations have found self-efficacy beliefs to be reasonably accurate predictors of outcomes. This can be viewed, to some extent, as evidence of widespread familiarity with and acceptance of the theory.

It is also important to note that SCT is not the only theory from which to view human behaviour. The theory of reasoned action (TRA) (Ajzen & Fishbein, 1980) and the theory of planned behaviour (TPB) (Ajzen, 1988) are two prominent alternative frameworks. According to TRA, performance of a behaviour is primarily determined by a person's intention to perform that behaviour. This intention is determined by two major factors: the person's attitude toward the behaviour (i.e. beliefs about the outcomes of the behaviour and the value of these outcomes) and the influence of the person's social environment or subjective norm (i.e. beliefs about what other people think the person should do, as well as the person's motivation to comply with the opinions of others). TPB adds to the TRA the concept of perceived control over the opportunities, resources and skills necessary to perform a behaviour. The concept of perceived behavioural control is similar to the concept of self-efficacy in SCT, and such perceived control is believed to be a critical aspect of behaviour change processes. Like SCT, the theory of planned behaviour has been criticised for the distinction between prediction and causation of behaviour and for difficulties with construct definition and measurement. However, the successful use of SCT in previous ICT and digital divide studies, coupled with the current researcher's experience and knowledge of SCT, led to the selection of SCT as the guiding framework for the current research.

4.12 Conclusion

As Foster (2001) noted: "attitudes do matter". This chapter has provided an overview of the theoretical framework for the current thesis – social cognitive theory (SCT). SCT asserts that human behaviour is best understood as a reciprocal relationship between personal factors, behaviour and the environment. SCT is grounded in a view of human agency in which people can exercise influence or control over what they do. The theory has two core constructs – self-efficacy and outcome expectancy. Self-efficacy is the more important of the two and refers to a person's judgement of perceived capability for performing a task. People acquire information about their self-efficacy from four sources: enactive attainment, vicarious experience, verbal persuasion and physiological states. Self-efficacy beliefs influence human functioning via four processes: cognitive processes, motivational processes, affective processes and selective processes. Self-efficacy is behaviour specific. Recent studies have shown that self-efficacy and information literacy are closely linked concepts. To survive in today's information rich world, individuals need not only the necessary information skills or knowledge to function in their information environment but also the appropriate confidence in their ability to successfully apply these skills to their information needs. More and more information is becoming available via digital formats (i.e. the internet). Recent studies have also shown that self-efficacy is a significant factor in influencing an individual's decision to use technology such as computers or the internet. Four studies have explored the digital divide using the self-efficacy construct. These studies have taken place in the US and in Hong Kong. The studies provided initial support for an alternative psychological perspective to the current socio-economic understanding of the digital divide. Because of limitations in their research designs, these studies have shed only limited light on our understanding of digital inequality in community. The current research addressed the design gaps in these previous studies by (i) using a community context that ensured the recruitment of participants who represented the full spectrum of the general population, (ii) developing a more sophisticated understanding of the digital divide by incorporating both socio-economic and socio-cognitive factors into the research design and (iii) using a research context that incorporated both a US and an Australian setting. By doing this, the research has added to the growing body of knowledge on the digital divide per se, and on the application of the socio-cognitive framework to understanding the digital divide in particular.


Part B: Exploring the problem

Chapter 5: The method

5.1 Introduction

Part A provided an examination of the key literature relevant to the research. It set the scene for the research by establishing the research gap that the current research explored. This chapter provides a detailed overview of the method used to explore the research problem. The chapter describes the research philosophy; the research approach, including plan, method and technique; the research context and sampling plan; the development of the research instrument; and the implementation of data gathering. Ethical considerations are also discussed.

5.2 Research philosophy

Social research, such as the current research, is the means by which social scientists understand, explain and predict the social world. Social research "does not exist in a bubble hermetically sealed off from the social sciences and the various intellectual allegiances that their practitioners hold" (Bryman & Bell, 2003, p. 4) 10. The values and theoretical preferences of a researcher will inevitably influence the research they engage in. As Mackay (2001) observed: "social scientists like others have their sympathies, preferences and commitments" (p. 46). The sociologist Zygmunt Bauman (1990, cited in Mackay, 1993) argued that social science required the researcher to "defamiliarize" themselves, to look with a more open mind at social phenomena to which they may sometimes be very close. This notion of "defamiliarization" reminds the social scientist to approach the research process with full awareness of the "baggage" that he or she brings in terms of perceptions, experiences and values. Consequently, one of the driving forces behind any type of social research is the research paradigm or philosophy of the researcher. A research paradigm is the "set of interrelated assumptions about the social world which provides a philosophical and conceptual framework for the systematic study of the world" (Kuhn, 1970, p. 10). Because different paradigms are grounded in different assumptions they produce different ways of approaching and conducting research. For this reason it is important to identify, explain and justify the research paradigm adopted in any social research. There are three common research paradigms: positivism, interpretivism and critical theory.

10 Bryman and Bell made this statement in regard to business research, but it could easily be argued that it applies to any type of social research.


• The positivism paradigm argues that reality consists of what is available to the senses – that is, what can be seen, smelt and touched (Mackay, 2001). The positivism paradigm suggests that research should be based upon scientific observation (Neuman, 2003). Positivism assumes that researchers can understand society, and make predictions about social change, by analysing quantitative data. Positivist research, therefore, is based mainly on deductive styles of reasoning, where the deductive process is seen as clear cut and linear (Neuman, 2003). Because of their fundamental beliefs about measuring and quantifying, positivists seek to be as objective as possible in their inquiry process. As such, positivists attempt to hold their personal beliefs and biases in check insofar as possible during their research to avoid contaminating the phenomena under investigation. According to Neuman (2003) positivism sees social science as "an organized method for combining deductive logic with precise empirical observations of individual behavior in order to discover and confirm a set of probabilistic causal laws that can be used to predict general patterns of human activity" (p. 66). Critics charge that positivism reduces people to numbers and that its concern with abstract laws or formulas is not relevant to the actual lives of real people.

• The interpretivism paradigm starts from the position that making sense of society and social change involves understanding the thinking, meaning and intentions of those being researched (Mackay, 2001). The central tenet of interpretivism is that people are constantly involved in interpreting their ever changing world (Neuman, 2003). It does not try to be value free. Thus, the interpretive approach is "the systematic analysis of socially meaningful action through the direct detailed observation of people in natural settings in order to arrive at understandings and interpretations of how people create and maintain their social worlds" (Neuman, 2003, p. 71). Interpretive research designs are mainly based on inductive reasoning, in that researchers attempt to make sense of the situation without imposing pre-existing expectations. The interpretivism paradigm is frequently associated with qualitative research; however, it should be noted that quantitative research can also be conducted within this paradigm (Neuman, 2003). Interpretivist researchers plan their studies but are much less "linear" in their approach than positivist researchers. They "seek to be totally open to the setting and subjects of their study" (Gorman & Clayton, 1997, p. 38). Interpretivism is critical of positivism for its failure to deal with the meanings of real people and their capacity to feel and think. Interpretivism also argues that positivism ignores the social context and is anti-humanistic (Neuman, 2003). But, like positivism, interpretivism has its own limitations. It has been criticised for having limited generalisability because of the idiosyncrasies of the individuals or settings under study (Neuman, 2003).

Critical theory developed in response to both positivism and interpretivism. This paradigm seeks to question currently held values and assumptions and to challenge conventional social structures (Mackay, 2001). It invites researchers to discard their “false consciousness” in order to develop new ways of understanding as a guide to effective action (Neuman, 2003). As Marx and Engels observed, “the philosophers have only interpreted the world, in various ways: the point is to change it” (1998, cited in Mackay, 2001, p. 61). Critical researchers use the same methods as positivists and interpretivists, but use them not just to observe a phenomenon but to contribute to enacting social change. Critical researchers argue that positivist and interpretivist researchers have three weaknesses (Neuman, 2003). First, they present incomplete accounts of social behaviour by their neglect of the political, social and ideological contexts. Second, they engage in social research without a coherent theory of structured power; they treat all social subjects as of equal worth, disregarding social divisions and inequalities. And finally, they are limited because they seek only to understand the existing situation instead of questioning or transforming it. The critical paradigm defines social science as a “critical process of inquiry that goes beyond surface illusions to uncover the real structures in the material world in order to help people change conditions and build a better world for themselves” (Neuman, 2003, p. 81). The purpose of critical research is to change the world. More specifically, social research should “uncover myths, reveal hidden truths and help people to change the world for themselves” (Neuman, 2003, p. 81). Ultimately critical research is focused on empowering individuals (Neuman, 2003).

Critical theory is limited in two ways. First, with theory leading research, there is a danger that research findings simply confirm the theory. Research can become little more than an illustration of a theoretical argument, substantiating the researcher’s theoretical predisposition. Second, with the view that research is revolutionary, critical theory carries a tension between engaging in objective academic inquiry and seeking to promote a particular political or social priority. Critical theory, however, has two important strengths. First, it stresses the importance of the social and historical context of research. Second, it reinforces the notion that the researcher’s role is critical and has an impact on research findings and therefore involves some responsibility (Neuman, 2003).

Despite the clear differences between the three research paradigms, Neuman (2003) concluded that there are some common features. He noted that the three approaches all proposed that “social science strive to create systematically gathered, empirically based theoretical knowledge through public processes that are self reflective and open ended” (p. 135). In recent years a pluralistic approach to research design has been advocated (Robey, 1996; Mingers, 2001). A pluralistic research approach permits (indeed encourages) the researcher to draw upon multiple methods and paradigms. The pluralistic approach posits (i) that no one paradigm is better than another and that each paradigm offers a different focus or way of perceiving the research problem, and (ii) that research methods (i.e. quantitative or qualitative) are not paradigm specific, and that a method can be detached from a paradigm and used on the basis of whether it fits the purpose of the investigation (Mingers, 2001).

Critical theory provided the basis for the current research. Critical theory is increasingly being used within academic research. This is especially true for disciplines such as social and community informatics, information systems and library and information science. Examples include Woodfield’s (2002) examination of gender discrimination in the information systems field and Kvasny and Keil’s (2006) use of Bourdieu’s theory of cultural and social reproduction to explore the digital divide initiatives in two US cities. Critical theory is rapidly emerging as a “promising approach to addressing some of the complex and thus far intractable issues we face today” (p. 98) in regard to the role and impact of ICT in community and organisations. The current research was critical in that it sought to “uncover myths, reveal hidden truths and help people to change the world for themselves” (Neuman, 2003, p. 81). The study acknowledged that individuals can access and use ICT to change their social and economic conditions. However, it was also recognised that ICT access and use can be restricted by various forms of limitations (e.g. social, cultural, psychological, political and economic). The study heeded the words of Floridi (2001) that the digital divide “disempowers, discriminates and generates dependency. It can engender new forms of colonialism and apartheid that must be prevented, opposed and ultimately eradicated” (para. 8). The study therefore examined the digital divide by giving primary attention to the psychological factors that impact upon a person’s decision to engage with ICT. Some researchers have called for more empirically based critical theory studies (Boudreau, 1997; Alvesson & Deetz, 2000) and this work addressed that call.

5.3 Research problem

Having established the research philosophy that influenced the current study, it is appropriate to revisit the research question that was formulated. The research question can be restated from Chapter 1 as: what influence do socio-cognitive factors have in predicting internet use by members of the general population when the effects of socio-economic factors are controlled? A visual model of this research question is depicted in Figure 5.1. This model was based upon existing literature:

• studies into the digital divide have clearly shown that socio-economic factors were predictors of internet use¹¹

• studies into ICT adoption have clearly shown that socio-cognitive factors were predictors of internet use

• initial but limited studies into the digital divide suggest socio-cognitive factors (i.e. self-efficacy and outcome expectancy) may be predictors of internet use.
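Taken together, these findings motivate a model in which socio-economic variables are entered first and socio-cognitive variables second, so that the unique contribution of the latter can be assessed. To make the logic of “controlling for” concrete, a minimal sketch of such a two-block (hierarchical) regression follows. It is illustrative only: the data file and variable names are hypothetical, pandas and statsmodels are assumed, and the analysis actually conducted in this research is described in Chapters 7 and 8.

import pandas as pd
import statsmodels.api as sm

# Hypothetical file and column names, for illustration only; categorical
# predictors (e.g. gender, ethnicity) are assumed to be dummy coded already.
df = pd.read_csv("survey_responses.csv")

block1 = ["age", "gender", "income", "education", "ethnicity"]
block2 = block1 + ["internet_self_efficacy", "outcome_expectancy"]

# Block 1: socio-economic predictors only.
m1 = sm.OLS(df["internet_use"], sm.add_constant(df[block1])).fit()
# Block 2: socio-cognitive predictors entered with the socio-economic
# factors retained as controls.
m2 = sm.OLS(df["internet_use"], sm.add_constant(df[block2])).fit()

# The R-squared change is the variance in internet use explained by the
# socio-cognitive block over and above the socio-economic factors.
print(f"R2, block 1: {m1.rsquared:.3f}")
print(f"R2, block 2: {m2.rsquared:.3f}")
print(f"R2 change:   {m2.rsquared - m1.rsquared:.3f}")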

The purpose of this research was to test this model and in so doing determine the influence of socio-cognitive and socio-economic factors. As such, a research plan appropriate to achieving this purpose was established. This plan is discussed in the next section.

¹¹ The exact socio-economic factors have varied from study to study given the different contexts and populations studied; however, in general the studies have indicated that the following socio-economic factors are good predictors of internet use: age, income, gender, employment, education, ethnicity and disability.


Figure 5.1: The research model

5.4 Research plan

Research requires a specific plan for proceeding. As noted by Neuman (2003) “it is not a blind excursion into the unknown with the hope that the data necessary to answer the question at hand will somehow fortuitously turn up. It is instead a carefully planned attack, a search and discover mission explicitly outlined in advance” (p. 143). Depending on the research question, different methods, data gathering techniques and overall processes will be more or less appropriate, all of which are of course influenced by the researcher’s overall research philosophy. This section outlines the research method and data collection technique that were identified as appropriate for the current research. It also outlines the overall process that was used in conducting the study.


5.4.1 Research method

The survey is one of the most widely used research methods and, as noted by Dane (1990), is frequently the research method with which the general public is most familiar. Surveys have been employed in a wealth of disciplines including the social sciences, law, business and library and information studies. The survey method involves “obtaining information directly from a group of individuals” by posing questions (Dane, 1990, p. 120). The questions may be presented orally, on paper or in some combination. The survey method relies upon the researcher and the participant working together to collect the data: the researcher asks the questions and the participant answers (Dane, 1990). The ultimate goal of the survey method is to allow researchers to generalise about a larger population by studying only a small portion of that population (Neuman, 2003). For this reason the survey method is one of the best methods available to the social researcher who is interested in collecting original data for describing a population too large to observe directly (Neuman, 2003). As such, it was an appropriate method for the current research.

The survey method has many strengths and weaknesses, all of which needed to be considered in the current research (Neuman, 2003). One major strength of the survey method is its ability to accommodate large sample sizes at a relatively low cost. Increasing the sample size increases the geographic flexibility of the research as well as the researcher’s ability to generalise from sample to population. Another strength of the survey method is the ease of administration. Surveys can be implemented in a timely fashion so that data can be gathered in a relatively short period of time. Besides the convenience afforded by this approach, there is also the advantage of obtaining a snapshot of the population at a particular moment in time. Another factor in favour of surveys is that they allow for the collection of standardised common data. That is, all respondents give answers to the same questions, which allows for direct comparisons between respondents. This data can then be used in advanced statistical analysis. A final advantage of surveys is their ability to tap into factors or concepts that are not directly observable (e.g. attitudes, feelings, preferences).

The survey method is not without its problems (Neuman, 2003). Whilst implementation of the method is relatively easy, developing the appropriate survey can be difficult. A poorly developed survey can result in the collection of irrelevant and poor quality data. A second potential weakness of survey research is the use of standardised questions, which can result in “fitting round pegs in square holes”. Standardised questions often represent the least common denominator in assessing people’s attitudes, orientations, circumstances and experiences. By designing surveys to be appropriate to all respondents the researcher may provide only a superficial coverage of complex topics. Another factor against the survey method is that surveys seldom deal with the context of social life. They do not allow the researcher to develop a “feel for the total life situation in which respondents are thinking and acting”. Inflexibility is another limitation of the survey method. Surveys typically require that an initial study design remain unchanged throughout the research project. As such, changes or modifications based upon new insights cannot be incorporated. A final disadvantage of surveys is artificiality, which may have two aspects. First, the topic of study may not be amenable to measurement through a questionnaire method. Second, the act of studying that topic may affect it.

There are two types of surveys: descriptive and explanatory (Neuman, 2003). Descriptive or status surveys are primarily concerned with fact gathering, with enumerating and describing. They focus on answering “who”, “what”, “when” or “where” questions. Explanatory or analytical surveys are probing in nature, seeking to explore the interrelationships of variables and likely causal links between them. They focus on answering the “why” and “how” questions. The current study used an explanatory or analytical survey method. In addition to deciding the type of survey method used, it is necessary to decide on the approach to survey design. There are two possible designs: cross-sectional or longitudinal. In a cross-sectional design data are collected at one point in time from a sample selected to describe some larger population at that time. In contrast, the longitudinal survey permits the analysis of data over time, with data being collected at different points in time. Thus, the current study used an explanatory survey with a cross-sectional design. The next point to consider is the data collection technique.

5.4.2 Data collection technique

The survey method encompasses a variety of data collection techniques: questionnaires (print and electronic), interviews (face-to-face and telephone) and observation techniques (Neuman, 2003). When selecting which survey technique is most appropriate for a given research project, several aspects need to be considered, including (i) situation characteristics such as the budget of available resources, the time frame and the quality requirements of the data, (ii) task characteristics such as the difficulty of the task, the stimuli needed to elicit a response, the amount of information needed and the sensitivity of the research topic, and (iii) respondent characteristics such as diversity, incidence rate and degree of participation (Hair, Babin, Money & Samouel, 2003). The self-administered questionnaire was identified as the most appropriate technique for use in the current study given the variety and depth of data sought to effectively address the research question.

Self-administered questionnaires are one of the most widely used data gathering techniques within the survey method (Neuman, 2003). Self-administered questionnaires were an appropriate choice for the current study because of their general suitability for investigating questions about self-reported beliefs or behaviour (Neuman, 2003, p. 247). Additionally, previous studies exploring self-efficacy in relation to computer or internet use have clearly demonstrated that self-administered questionnaires are an appropriate technique for this type of research. In addition to the strengths and weaknesses associated with the survey method in general (as outlined in the previous section), there are advantages and disadvantages unique to the self-administered questionnaire that must be considered (Neuman, 2003).

The self-administered questionnaire is appropriate when the research purpose is easily explained in print and when the instructions and questions to be asked are straightforward. This type of survey is by far the cheapest and fastest to administer, even when the questionnaire is complex or lengthy. It can be conducted by a single researcher and the respondent can complete the questionnaire at their own pace. The technique offers greater anonymity and avoids interviewer bias. With some topics of a personal nature the anonymity of the self-administered questionnaire can encourage respondents to provide frank, truthful answers. There is not the embarrassment of disclosing information to a stranger (Neuman, 2003).

One major shortcoming of the self-administered questionnaire is the tendency for poor response rates. Linked to this is the difficulty in obtaining responses from a representative cross section of the target population. Non-respondents may differ in characteristics from respondents and the reasons for non-response can be hard to determine. The researcher cannot control the conditions under which the questionnaire is completed. The respondent is free to interpret the questions or the questionnaire in their own way, thus potentially having an impact on the quality and nature of the data collected. There is a lack of opportunity for respondents to qualify answers or for researchers to probe for further information. The data gathered via self-administered questionnaires is also affected by the characteristics of the respondents (i.e. memory, knowledge, experience, motivation) and their willingness or ability to report their beliefs and attitudes accurately (i.e. a social desirability bias). Self-administered questionnaires are heavily “reactive” in that respondents know they are being studied and can easily manipulate the information about themselves that is being gathered. Self-administered questionnaires can be a poor technique for analysing influences on behaviour of which people may be largely unaware (Neuman, 2003).

Every effort was made in the current research to strengthen the advantages, and to limit the disadvantages, of the self-administered questionnaire technique.

5.4.3 Research process

Research requires a sequence of steps that must be followed to ensure the research objective is met. Various scholars have commented on what these steps are and how they should be undertaken. Two different, but related, approaches were used in establishing the research steps for the current research: the marketing research process by Aaker, Kumar and Day (2004) and the 5-phase project management process proposed by Weiss and Wysocki (1992).

Aaker et al (2004) have proposed a series of steps when conducting marketing research. They suggested that the research process “provides a systematic, planned approach to the research project and ensures that all aspects of the research project are consistent with each other” (p. 124). They proposed a seven step process that included agreeing on the research process, establishing research objectives, estimating the value of the information, designing the research, collecting the data, preparing and analysing the data, and reporting the research results. As marketing research is usually conducted with members of the general population the process advocated by Aaker et al (2004) was considered an appropriate choice for the current research.

According to Weiss and Wysocki (1992) project management was a “method and a set of techniques based on the accepted principles of management used for planning, estimating and controlling work activities to reach a desired end result on time, within budget and according to specification” (p. 5). To plan and execute these principles the authors developed a 5-phase methodology. The five phases of the methodology are define, plan, organise, control and close. Each phase contains specific steps that must be accomplished to successfully undertake project management. Establishing a careful and systematic plan is a core requirement in achieving any outcome. The research process is no different. For this reason project management was also identified as relevant to the design and conduct of the current research.

Figure 5.2 presents the research process that was developed in conducting and implementing the current research. This process was grounded in the Weiss and Wysocki (1992) 5-phase project management approach and the Aaker et al (2004) marketing research process. A three-stage process was used in the current research. Stage one of the research process involved the preliminary planning. This included the initial investigation into research topics and opportunities, establishing a research problem and questions, and identifying the significance and overall value of the research. Stage two involved developing the research approach and establishing the research tactics. The third and last stage of the research involved considering cost and timing estimates, data collection and analysis, and the drawing of conclusions and recommendations. In considering the research process used in the current study, it is also important to note that research is an interactive process in which steps blend into each other (Neuman, 2003). A later step may stimulate reconsideration of a previous one. In this sense the process is not strictly linear. As such, whilst the research conducted was broken up into three stages with a number of specific steps undertaken in each stage, the research also involved iterative elements. This chapter does not outline all three stages and their associated tasks; instead it provides an overview of the key elements necessary to establish a clear picture of how the research was undertaken.


Figure 5.2: The research process


5.5 Research context

The research project had an international context. The two cities of Brisbane, Australia and San Jose, United States of America provided the communities through which study participants were obtained. A brief profile of each city follows:

• Brisbane is the capital city of the state of Queensland. At the time of the research, it was the third largest city in Australia, covering an area of approximately 1,350 km² and supporting a total population of over 1.6 million. In the planning document Living in Brisbane 2010 the Brisbane City Council articulated its vision for Brisbane as a “smart city [that] actively embraces new technologies…Brisbane should seek to be a more open society where technology makes it easier for people to have their say, gain access to services and to stay in touch with what is happening around them, simply and cheaply. All residents will have access to the Internet, and the ability to use it” (BCC, 2001). The “smart city” goal was still present in the city’s Living in Brisbane 2026 planning document.

• San Jose is a city in the State of California. At the time of the research, it was the third largest city in California, covering an area of approximately 176.6 square miles and supporting a total population of just under 1 million. It is the eleventh largest city in the United States of America. The San Jose city council has adopted a “smart growth” strategy for the future development of the city, where “smart growth” refers to a city that “feels safe…where new housing is affordable…where there are jobs for all its residents” (City of San Jose, 2001, p. 6).

The two contexts were chosen for the current research for several reasons. Firstly, with the majority of digital divide research taking place in the US and very little research being conducted in Australia, extending the research into the US context allowed the researcher to establish links and synergies of effort with US researchers in the field. Secondly, the two cities – Brisbane and San Jose – were similar in terms of size and population composition, thus offering a unique opportunity for comparison.

5.6 Sampling plan

Having established the context for the research, the next important issue to be considered is how study participants in that context were to be obtained. An important methodological objective for all empirical research is the ability to draw conclusions about a larger population (Neuman, 2003). This largely depends on the representativeness of the sample. A representative sample is one in which the sample’s characteristics mirror the population from which it comes (Neuman, 2003). Associated with the issue of representativeness is how the level of representativeness is demonstrated. Of particular importance to the achievement and demonstration of representativeness is the sampling plan established for the research (Neuman, 2003).

A sampling plan is the blueprint or framework needed to ensure that the raw data collected are representative of the defined target population (Neuman, 2003). The sampling plan assists in reducing sampling error; in particular, it can help reduce sample selection error and coverage error. Sample selection error occurs when only some, and not all, of the target population is sampled (Neuman, 2003). The best strategy for overcoming sampling error is to ensure a large sample size. In short, the larger the sample size, the smaller the sampling error (Neuman, 2003). The issue of sample size is discussed in greater detail in the next chapter.

Coverage error occurs when a sample is drawn from a sampling list that does not include all elements of the population, thus making it impossible to give all elements of the population an equal or known chance of being included in the sample (Neuman, 2003). Most questionnaires have a certain amount of coverage error which cannot be specified. Dillman (2000) noted that poor coverage was typically the most significant source of error in questionnaires of the general public, because there is no list available that represents the entire general population. Dillman (2000) also noted that coverage exclusions were quite common in questionnaires of the general public. For example, questionnaires would typically exclude the institutionalised population, such as people living in hospitals, nursing homes, college dormitories, prisons and military bases. Presumably the proportion of the population that is institutionalised is small enough that this exclusion should not have a major impact on most questionnaire results.

Another important source of exclusion is that of language. This point was of particular importance for the current research. Previous digital divide studies have suggested that ethnicity was a factor in determining the “have-nots”. The current research was undertaken by an English speaking researcher with English language research instruments. Consequently, if members of the two populations explored did not speak or read English they were excluded from the study. Unfortunately, resource restrictions prevented the support of non-English speaking or English as a second language members of the community. Given that the current research involved members of the general population it was inevitable that there would be sampling error and that the extent of this sampling error would not be known. In considering the issue of sampling in the current research it is worth noting the words of McCready (2006):

it is important to remember that you cannot reach perfect representation. The sample will never perfectly replicate the population. Representativeness in science is about estimates and approximations not duplication. We’re trying to make a sample that is a best estimate of the population under the conditions we face (p. 147)

Malhotra (2004) noted that there were several critical factors that would influence how a sampling plan was developed for a particular research project. These included the research objectives, the degree of accuracy required, the availability of resources, the time frame, advanced knowledge of the target population, the scope of the research and the intended statistical analysis. A good sampling plan should include the following five steps: (i) define the target population; (ii) identify the sampling frame; (iii) select the sampling technique; (iv) determine sample size; and (v) execute the sampling process (Malhotra, 2004). Each will be discussed in turn.

• Target Population: The target population is the complete group of objects or elements relevant to the research project (Malhotra, 2004). They are relevant because they possess the information the research project is designed to collect (Hair et al, 2003). The target population should be defined in terms of elements (the object about which, or from which, the information is desired), sampling units (an element or a unit containing the element), extent (the geographical boundaries) and time (the time period under consideration). The population for the current research comprised adults (over 17 years) who could read English, located in the cities of San Jose, California, USA and Brisbane, Queensland, Australia during 2001–2006.

• Sample Frame: A sampling frame is a comprehensive list of the elements from which the sample is drawn (Malhotra, 2004). It is usually a list of population members used to obtain a sample, for example a list of magazine subscribers. Usually an exact match between the sampling frame and the population is not found. The challenge is to determine what portions of the population are excluded by the sampling frame and what biases are therefore created. The existence of such biases usually will not affect the study’s usefulness, as long as the biases are identified and the interpretation of the results takes them into consideration (Aaker et al, 2004). Robson (1993) noted that the constraints of undertaking real world studies could mean that the requirements for representative sampling were very difficult if not impossible to fulfil and that sampling frames could be impossible to obtain. Given that the current research involved drawing participants from the population of two large metropolitan cities, obtaining a full list of possible population members was not possible. As such, no sampling frame in the traditional sense of the term was developed. Nonetheless, parameters were established as to the type of people it would be necessary to have in the study to ensure the research question could be answered. The current research needed to include participants who had not only low internet use and low socio-economic status but also low internet use and high socio-economic status and high internet use and low socio-economic status. Because no formal sampling frame could be established the sampling technique was an important issue to be considered.

• Sampling technique: Sampling techniques can be divided into the two broad categories of probability and non-probability samples (Neuman, 2003). Probability techniques are based on the premise that each element of the target population has a known, but not necessarily equal, probability of being selected in a sample (Neuman, 2003). In probability sampling, sampling elements are selected randomly and the probability of being selected is determined ahead of time by the researcher. If done properly, probability sampling ensures that the sample is representative. In contrast, in non-probability sampling the inclusion or exclusion of elements in a sample is left to the discretion of the researcher (Neuman, 2003). As such, not every element of the target population has a chance of being selected into the sample. Despite this, a skilful selection process can result in a reasonably representative sample. That is, the sample “represents” the researcher’s judgement of what he or she wants but is not based on chance (Neuman, 2003). Ideally, a representative sample for the present study would require randomly obtaining participants from all suburbs within the two cities of San Jose and Brisbane. However, this sampling approach was not practical and was beyond the scope of the present study. Due to limited financial resources and timeline constraints, a non-probability technique was used. There are three types of non-probability sampling approaches that can be used in social science research: a convenience sample, a quota sample and a purposive sample (Neuman, 2003). A convenience sample refers simply to respondents who are available or convenient to the researcher (Neuman, 2003). A quota sample is a non random sample selected on demographic characteristics represented in the population of interest (Neuman, 2003). A purposive sample is one in which the sample elements are handpicked based on the researcher’s judgement that they appear to be relevant to the research topic (Neuman, 2003). A purposive sample method may be useful in exploratory designs. The strength of the purposive approach is that it is low cost, convenient and not time consuming. However, its weaknesses include its subjective nature and the fact that it does not allow generalisability (Neuman, 2003). A combined convenience and purposive sampling approach was selected as the most effective option to access the sample population in the current research. Convenience sampling was used in that study participants were drawn from community locations to which the researcher had access (i.e. public libraries, community centres, health centres). Purposive sampling was used in that the study involved the selection of a sample based on prior research and on the theory being explored. That is, as the data were being collected the participants were compared to the sample parameters that had been established and, where necessary, data collection was modified or tailored to ensure those parameters were being met.

Two data collection contexts were involved in the current study: San Jose, USA and Brisbane, Australia. A different sampling strategy was required for each context:

o San Jose was the US city involved in the research. Because data collection was to take place over a short time frame (i.e. four weeks) and because the researcher did not have an established relationship with community groups or businesses in San Jose, a very focused sampling strategy was employed. The San Jose Public Library (SJPL) service was used for data collection. Public libraries have a strong commitment to supporting all members of the community in accessing and using information technology, such as the Internet, and are a well used community resource. Given these points and the existing relationship between the researcher and the library service, the San Jose Public Library was a logical point from which to access study participants. At the time of the research the SJPL service consisted of 18 static branch libraries. It had a membership base of approximately 650,000 people, and served a total population of 918,800 spanning approximately 177 square miles (441 km²). It circulated over nine million items each year. The library service was a vital and active part of the community, offering a wide range of services and resources including Internet access and training. Four branches of the library service were used in the study – Biblioteca, Almaden, Hillview and Dr Martin Luther King Jr. The branches were selected based upon their potential to provide access to participants matching the desired sample parameters (see Section 5.6). Whilst the first three branches participated in the pilot study, only the Dr Martin Luther King Jr branch was used in the final study. This was because of changing circumstances and restrictions that arose during the three year duration of the research project. This meant that only those San Jose residents who used the Dr Martin Luther King Jr branch (an inner city location) were included in the study. The representativeness of the sample used in the current study is explored in Chapter 6.

o Brisbane was the Australian city involved in the study. As in the San Jose study, the Brisbane City Council (BCC) library service was identified as a logical starting point for data collection. At the time of the research the BCC library service consisted of 32 static branch libraries. It had a membership base of approximately 362,000 people, and served a total population of 865,000 spanning approximately 1,350 km². It circulated over nine million items each year and had over 4 million visitors during this time. The library service was a vital and active part of the community, offering a wide range of services and resources including Internet access and training. Three branches of the library service were used in this study – Inala, Indooroopilly and Garden City. The branches were selected based upon their potential to provide access to participants matching the desired sample parameters (see Section 5.6). Unlike the US context for the study, the researcher had a stronger rapport with the community groups and businesses within Brisbane city. Consequently, a more detailed sampling strategy was established. As such, study participants were also drawn from the following locations: staff from the Accor hotel group, students from the SouthBank TAFE, gyms, parent groups, and individuals using public transport. An effort was made to cover a diverse range of Brisbane suburbs representing diverse socio-economic areas, including Inala, Mt Gravatt, Indooroopilly, Carindale, Rocklea and Greenslopes. The representativeness of the sample used in the current study is explored in Chapter 6.

• Sample size: Two conditions were taken into account in identifying the sample size. First, the limitation of time and financial resources made the inclusion of all elements in the population impossible. Second, the sample size had to be large enough to provide powerful statistical testing of the theoretically hypothesised relationships. It is the absolute size of the sample that is the crucial factor, rather than the relative size or the proportion of the population sampled (Neuman, 2003). A discussion of the sample sizes appropriate for the statistical analysis conducted in the current research is provided in Chapters 7 and 8.

In summary, the population for the current research was adults (over 17 years) who could read English, located in the cities of San Jose, USA and Brisbane, Australia during 2001–2006. Because the research involved drawing participants from the population of two large metropolitan cities, obtaining a full list of possible population members was not possible and, as such, no sampling frame in the traditional sense could be developed. However, it was acknowledged that to answer the research question the research would need to include participants from all socio-demographic backgrounds and with a diverse range of internet use. A non-probability sampling technique, specifically a combined convenience and purposive sampling, was used to obtain study participants. Because of timing restrictions, participants were obtained only from the San Jose Public Library for the US context. Participants were obtained from a variety of locations including gyms, public transport, public libraries and workplaces for the Australian context.

5.7 The research instrument

The research instrument is essential to the success of any research (Neuman, 2003). It is the tool through which data are obtained so that they may be analysed and conclusions drawn (Neuman, 2003). Therefore, a detailed discussion of the research instrument (a self-administered questionnaire) is important. The self-administered questionnaire in the current study consisted of three sections:


• Section 1: Some Facts About You
• Section 2: Your Use of the Internet, and
• Section 3: Your Experience of the Internet.

The data obtained via the self-administered questionnaire were used directly to answer the study’s research question. The questionnaires used in the US and Australian studies can be found in Appendix 1.

5.7.1 Questionnaire design process

The development of a good questionnaire was vital to this study. A poorly designed questionnaire could have significant repercussions for the quality of the research being undertaken. Fink (2006) provided the following nine step guideline for developing a questionnaire:

Step 1: Specify what information will be sought
Step 2: Determine type of questionnaire and method of administration
Step 3: Determine content of individual questions
Step 4: Determine form of response to each question
Step 5: Determine wording of each question
Step 6: Determine sequence of questions
Step 7: Design physical characteristics of questionnaire
Step 8: Re-examine steps 1–7 and revise if necessary
Step 9: Pre-test the questionnaire, revise where necessary.

Fink (2006) noted that whilst in theory the nine steps occur in sequence, the reality was that the step-by-step procedure was often modified via some iteration and looping. She suggested that working back and forth among the steps was natural, and that the steps should not be taken too literally in that they were intended merely as a guide or checklist. Thus, whilst the nine steps for questionnaire design were used in the current research this chapter does not provide a detailed discussion of each step but highlights those parts of the design process that are important in understanding how the research was undertaken and how response error and measurement error were controlled.

5.7.2 Questionnaire design issues

Whilst several authors have offered comprehensive guidelines for questionnaire design and development (Dillman, 2000; Neuman, 2003), of particular interest was the total design method (TDM) (more recently updated to the tailored design method) developed by Dillman (2000). Based upon the principles of social exchange theory, the TDM was developed as a process for good questionnaire design. Social exchange theory asserts that “the actions of individuals [such as completing a questionnaire] are motivated by the return these actions are expected to bring…from others” (Dillman, 2000, p. 14). There are three elements critical for predicting a particular action: rewards, costs and trust. Rewards are what one expects to gain from a particular activity, costs are what one gives up or spends to obtain the rewards, and trust is the expectation that in the long run the rewards of doing something will outweigh the costs. According to Dillman (2000) questionnaires that motivate people to respond should focus on all three of these elements. In the context of questionnaire design and implementation: design features are used to improve the rewards by making the questionnaire appear interesting and important; costs are reduced by making the questionnaire easy to manipulate and easy to complete; and trust is encouraged through attention to detail that makes the questionnaire look and seem important (Dillman, 2000). The TDM was used to assist the design and development of the questionnaire in the current research. In addition, the guidelines provided advice on questionnaire design issues in regard to physical characteristics, wording and sequencing.

• Physical Characteristics: Aaker, Kumar and Day (2004) argued that the physical layout of the questionnaire was an important factor for consideration in the design stage as it directly influenced the appeal and the enthusiasm of a respondent to complete the questionnaire. The questionnaire should look professional and use high quality paper. Dillman (2000) recommended a booklet structure as booklets “are handled more or less automatically and usually without error” (p. 234). A booklet would also help prevent the loss of pages. He also noted that the questionnaire should be attractively and clearly presented and that spacing was important. The overall length of the questionnaire was also an important point to consider. A questionnaire that is too long may result in respondent fatigue, which is a threat to the internal validity of the instrument as it may lead to response bias, such as providing the same response to all items (Hinkin, 1995, cited in Dillman, 2000). Length would also influence how easy the questionnaire was to administer. Further, participant fatigue caused by long questionnaires may result in lower response rates. Thus, not surprisingly, shorter questionnaires tend to achieve better response rates than longer ones. However, Dillman (2000) cautioned against making a questionnaire so short that it loses meaning to respondents, noting that “there is more to length than just a simple count of pages and that the frequent tendency to cram more questions into few pages to make a questionnaire short is not likely to accomplish its intended task” (p. 235). There is no rule for deciding when a questionnaire becomes too long, although Dillman (2000) recommended that questionnaires should not exceed 10 pages. Several criteria have been suggested to facilitate this decision making, including cost, effect on response rate and the limits of respondent ability and willingness to answer questions. Neuman (2003) observed that “the extent to which the length of questionnaire affects costs and response rates varies with the population being studied and the topic: generalizations are difficult” (p. 324). Following the recommendations by Dillman, a booklet format was used for the questionnaire in the current research. In addition, being aware that members of the general population may be unwilling to provide large amounts of time to complete the questionnaire, every effort was made to keep the questionnaire length to a minimum. At the same time, care was taken to ensure the questionnaire was visually appealing. A four page double-sided questionnaire was used in the US study and a three page double-sided questionnaire was used in the Australian study (see Appendix 1).

• Wording: Careful consideration of the wording of questions in a questionnaire is important (DeVellis, 2003). Poor phrasing will cause respondents to skip over a question or to answer it incorrectly. DeVellis (2003) proposed that “question writing is more of an art than a science; it takes skill, practice, patience and creativity” (p. 134). A number of well established recommendations on what not to do when developing items for the questionnaire were used in the current research, including:

o Avoid ambiguous or vague words and sentences
o Avoid leading questions
o Avoid implicit assumptions
o Avoid prejudicial language or language that contains sexist, disability or racist stereotyping
o Avoid questions that make assumptions about people’s beliefs or behaviours
o Avoid generalisations and estimates
o Avoid double-barrelled questions
o Avoid jargon, slang and abbreviations
o Avoid emotional language and prestige bias
o Avoid asking questions that are beyond respondents’ capabilities
o Avoid double negatives (DeVellis, 2003; Neuman, 2003).

In addition to the wording of the specific items, care must be given to the wording of the instructions provided in the questionnaire. Instructions should be short and precise to avoid confusion, yet descriptive enough to establish the cognitive mindset that respondents are encouraged to be in when responding to each component of the questionnaire (DeVellis, 2003). As such, attention was given to the presentation and wording of the instructions to ensure clearly presented and simply stated instructions were provided.

DeVellis (2003) noted that another related consideration in choosing or developing items and instructions for a questionnaire was the degree of reading difficulty. There are a variety of methods for assigning reading levels to passages of prose, including scale items. These typically equate longer words and sentences with higher reading levels. According to DeVellis (2003) most local newspapers require a 6th grade reading level. Fry (1977) delineated several steps for quantifying reading level. The first is to select a sample of text that begins with the first word of a sentence and contains exactly 100 words. Next, count the number of complete sentences and individual syllables in the text sample. These values can then be used as entry points for a graph that provides grade equivalents for different combinations of sentence and syllable counts from the 100 word sample. For example, the graph indicates that the average number of words and syllables present for a 5th grade reading level are 14 and 18 respectively. An average sentence at the 6th grade level will have 15 or 16 words and a total of 20 syllables; a 7th grade level sentence will have about 18 words and 24 syllables. Shorter sentences with a higher proportion of longer words, or longer sentences with fewer long words, can yield an equivalent grade level. DeVellis (2003) recommended a reading level between 5th and 7th grade for most questionnaires used with members of the general population. Fry (1977) noted that semantic and syntactic factors should be considered in assessing reading difficulty. DeVellis (2003) provided the following comment on the Fry method of determining reading level: “because short words tend to be more common and short sentences tend to be syntactically simpler, his procedure is an acceptable alternative to more complex difficulty assessment methods” (p. 68). However, DeVellis (2003) warned that common sense must always be used when applying reading level methods. In keeping with the recommendation by DeVellis (2003), a 7th grade reading level was used in the current study. The Fry Reading Formula (1977) was used where possible to determine the reading level of scales used within the questionnaire, including all items and instructions. The Readability Plus™ software package was used to conduct the analysis. Details of reading levels for the various sections of the questionnaire are provided below.
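As an aside, the counting stage of Fry’s procedure is simple enough to sketch in code. The short Python sketch below is illustrative only and is not the Readability Plus software used in the study: it takes a 100-word sample, counts sentence-ending punctuation and estimates syllables with a naive vowel-group heuristic, yielding the two entry points for Fry’s graph.

import re

def fry_graph_inputs(text: str):
    """Return (sentence count, syllable count) for a 100-word sample.

    These are the two entry points for Fry's (1977) readability graph;
    the grade level itself is then read from the published graph.
    """
    words = text.split()[:100]                      # up to exactly 100 words
    sample = " ".join(words)
    sentences = len(re.findall(r"[.!?]+", sample))  # naive sentence count
    # Naive syllable estimate: count groups of consecutive vowels,
    # with a minimum of one syllable per word.
    syllables = sum(max(1, len(re.findall(r"[aeiouy]+", w.lower()))) for w in words)
    return sentences, syllables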

• Sequencing: The order in which items are presented is crucial to good questionnaire design (DeVellis, 2003). Good sequencing can help in gaining and maintaining the respondents’ cooperation and in making the questionnaire as easy as possible to administer. It can also help reduce item order errors, where, for example, the appearance of one question can affect the answers given to later ones (DeVellis, 2003). It is important to note that the impact of item order is not uniform. When Benton and Daly (1991, cited in DeVellis, 2003) conducted a local government questionnaire, they found that less educated respondents were more influenced by the order of questionnaire items than were those with more education. Some researchers overcome the effect of item order by randomising the order of items on a questionnaire. However, DeVellis (2003) suggested that this approach was usually futile. Instead he recommended that the safest solution was sensitivity to the problem. That is, whilst a researcher cannot avoid the effect of item order, he or she can try to reduce its impact by carefully considering the nature and purpose of the questionnaire and the intended respondent.

A number of different guidelines for sequencing a questionnaire have been proposed (DeVellis, 2003; Netemeyer et al, 2003). It is usually best to begin a questionnaire with the most interesting set of items. The potential respondents who glance casually over the first few items should want to answer them. At the same time the initial items should not be threatening. The first questions should also apply to everyone and be easy to answer so they help a respondent feel comfortable about the questionnaire. Sensitive or personal questions should be asked later in the questionnaire. Burns and Bush (1998, cited in DeVellis, 2003) recommended that requests for demographic data such as age and gender should generally be placed at the end of a self-administered questionnaire because of their personal nature. Placing such items at the beginning may feel invasive, and it also gives the questionnaire the initial appearance of a routine form. In both instances the respondent may not be motivated to complete it. A funnel approach should be used in ordering the main body of items in the questionnaire; that is, start with broad questions and progressively get narrower. Additionally, items should be grouped logically into coherent sections of those that deal with a specific aspect of the topic, or that use the same response options.

Where possible these guidelines were applied in the sequencing of the questionnaire used in the current study. There were three main sections to the questionnaire: Section 1: Some Facts About You, Section 2: Your Use of the Internet and Section 3: Your Experience of the Internet. All three sections were essential to the current study. Sections 1 and 3 provided access to the independent variables: socio-economic factors and socio-cognitive theory constructs. Section 2 provided access to the dependent variable: internet use. The sequence of the questionnaire met the guidelines discussed above in that the questions were all logically grouped and the first question applied to all respondents. However, the sequence differed from the guidelines in one significant way: socio-economic variables were placed first. This was done for two primary reasons. First, these variables were vital to the study. If they were not completed the questionnaire could not be used in the research. Placing them first helped to avoid questionnaire fatigue and made it easy for the administrator to quickly scan the completed questionnaire and ask the respondent to complete any missing item. Second, the other parts of the questionnaire required the respondent to critically reflect on their internet experience. In this way the demographic variables required less thought and, as such, could be classified as easier to answer.

5.7.3 Description of variables investigated

As noted earlier in this chapter, the questionnaire in the current research consisted of three sections: Section 1: Some Facts About You, Section 2: Your Use of the Internet, and Section 3: Your Experience of the Internet. This section provides a detailed discussion of how the variables being explored in the current study were operationalised into questionnaire items.


• Section 1: Some Facts About You. The purpose of Section 1 was to obtain the necessary socio-economic data needed to respond to the research question. Socio-economic measurement can be difficult and, as noted by Bartley (2004), has frequently been poorly done, with measures tending to be based on convenience rather than theory. It was not within the scope of the current research to explore the intricacies of defining and measuring socio-economic status or position. Given that this research was building upon the existing research into the digital divide, it was important that a similar approach to socio-economic measurement was used to allow for easy comparison between studies. Research exploring the digital divide in the US and Australia has, over the years, suggested a number of factors that impact on digital inequality. Whilst the importance of factors has varied across the research, in general the factors were: income, employment, gender, age, disability, ethnicity, geography and household type. To allow the current research to build upon these studies the following socio-economic variables were included: gender, age, income, education, employment, disability and ethnicity. As the current research was based in metropolitan cities, geography was not included. Household type was not included as it was used in varying ways in the NTIA and NOIE studies and, more often than not, household type referred to average household income and education and did not add to the data already being obtained by the existing income and education variables. Additionally, the current research focused on the individual, not the family. Because existing studies focused on each variable’s unique contribution to the digital divide, the variables were not combined into one all-purpose measure of socio-economic status. A tick box approach was used in this section to provide a faster, easier experience for the respondent (i.e. not having to think too much, simply locating and ticking the box that best describes them). Unfortunately, because of the way the questions were structured (i.e. short staccato sentences and a tick list of answer options), the Fry Reading Formula could not be easily applied to the socio-economic variables. Nonetheless the language was kept as simple as possible to ensure a suitable reading level. Seven variables were included, resulting in seven items within the questionnaire (i.e. one item per variable):

o Gender was measured by asking the respondent to select the category that best described them (i.e. female or male).


o Age was measured by asking the respondent to select the category that best described their current age (i.e. 17-20, 21-30, 31-40, 41-50, 51-60, 61-70, 71-80, 81-90, over 90). o Education level was measured by asking the respondent to select the category that best described their highest level of education (i.e. primary school, high school, TAFE/technical college, bachelor degree, masters, PhD or doctorate, other, please specify).

o Employment status was measured by asking the respondent to select the category that best described their current employment status (i.e. unemployed, full-time employed, part-time employed, casually employed, contract employed, job share, retired, other, please specify). o Income was measured by asking respondents to select the category that best described their average annual income over the last three years (i.e. no income, $1-$10,000, $10,001-$20,000, $20,001-$30,000, $30,001-$40,000, $40,001-$50,000, $50,001-$60,000, $60,001-$70,000, $70,001-$80,000, $80,001-$90,000, $90,001-$100,000, over $100,000). Respondents were asked to identify the average over three years instead of just their current income as this gave a more realistic picture. For example, someone who had been retrenched the week before would give very different answers for their current income and for their average income over the last three years.

o Ethnicity was measured by asking respondents to select the category that best described their ethnicity. Ethnicity categories varied for US and Australian respondents. The Australian questionnaire included the following categories: Australian Aboriginal or Torres Strait Islander, Asian, Hispanic or Latino, Caucasian/White. The US questionnaire included the following categories: African American, Asian American or Pacific Islander, Hispanic or Latino, Native American, Caucasian/White. Both questionnaires also included an ‘Other, please specify’ category. The difference in categories simply reflected the different ethnic groups present in the two research contexts.


o Disability was measured by asking respondents whether they identified themselves as having a disability (i.e. yes or no).

• Section 2: Your Use of the Internet

The purpose of Section 2 was to obtain the internet use data needed to respond to the research question. The measure used in the current research was based upon an existing measure of internet use by LaRose, Mastro and Eastin (2001), who, in a study exploring the internet use of college students, used an additive index of four self-reported items (α = .82). Participants were asked how much time they spent on the internet on a typical weekend day (Item 1) and a typical weekday (Item 2). Both items were coded 1 if none, 2 if less than an hour, 3 if 1 to 2 hours, 4 for more than 2 and up to 5 hours, and 5 if more than 5 hours. Participants were also asked how many days in a typical week they went on the internet (Item 3), with responses ranging from 0 to 7, and how much time they spent surfing the web each week (Item 4), with responses coded 1 for none, 2 for less than an hour, 3 for 2 to 4 hours, 4 for 5 to 7 hours, 5 for 7 to 9 hours, 6 for 10 to 20 hours and 7 for more than 20 hours. For the current research a decision was made to use only the first two items (i.e. internet use on a typical weekend day and a typical weekday). Thus, an additive index of two items was used to measure internet use, and respondents' scores could range from 2 to 10. The higher the score obtained, the more a respondent used the internet. For the US study participants were asked to provide a free text answer to both questions, with the researcher coding the answers after completion of the questionnaire. However, a considerable number of questionnaires in the US study had to be excluded because of item non-response error (i.e. missing data, or incorrect data such as an answer of 50 hours per day). Thus, to reduce this type of error, for the Australian study the participants were asked to select from the coded options. This led to a considerable reduction in item non-response error for this scale and for the questionnaire overall. The items had a combined Fry reading grade of 7.
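To make the scoring of the two-item index concrete, the following minimal sketch shows how it could be computed. This is purely illustrative (the study itself coded responses by hand and analysed them in SPSS); the function name and the exact category labels are assumptions of this sketch.

    # Illustrative Python sketch of the two-item additive internet use index.
    # Category labels are paraphrased from the coding scheme described above.
    TIME_CODES = {
        "none": 1,
        "less than an hour": 2,
        "1 to 2 hours": 3,
        "more than 2 and up to 5 hours": 4,
        "more than 5 hours": 5,
    }

    def internet_use_index(weekend_day: str, weekday: str) -> int:
        """Additive index of the two items; scores range from 2 to 10."""
        return TIME_CODES[weekend_day] + TIME_CODES[weekday]

    # A respondent reporting "1 to 2 hours" on a weekend day and
    # "more than 5 hours" on a weekday scores 3 + 5 = 8.
    print(internet_use_index("1 to 2 hours", "more than 5 hours"))  # 8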


• Section 3: Your Experience of the Internet

The purpose of this section was to measure the two key constructs in Bandura's Social Cognitive Theory, self-efficacy and outcome expectancy. 12 In so doing, the necessary socio-cognitive data needed to respond to the research question would be obtained.

o An internet self-efficacy scale was developed and validated for use in the research. The self-report scale was a measure of an individual's perceived self-efficacy for using the internet. Participants responded to each of the items by indicating how confident they were that they could do the internet tasks listed, on a scale ranging from "I am not at all confident" (0), through "I am moderately confident" (5), to "I am totally confident" (10). Scores could range from 0 to 400. The higher the score obtained, the more an individual was characterised by high perceived internet self-efficacy. Full details on the development of the scale can be found in the next section of this chapter. The scale had a Fry reading grade of 5.
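As a minimal sketch of this scoring (assuming the finalised 40-item version of the scale described later in this chapter; the function name is hypothetical), a total score could be derived as follows:

    def self_efficacy_score(item_ratings: list[int]) -> int:
        """Sum 40 item ratings, each on the 0-10 confidence scale (total 0-400)."""
        assert len(item_ratings) == 40, "finalised scale had 40 items"
        assert all(0 <= r <= 10 for r in item_ratings), "ratings use the 0-10 scale"
        return sum(item_ratings)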

o Six measures of internet outcome expectancy were used. All measures were developed by LaRose and Eastin (2003). Respondents indicated the likelihood of each internet outcome using a Likert scale ranging from "Extremely Unlikely" (1) to "Extremely Likely" (7). The higher the score obtained on each scale, the more an individual found the outcome to be likely. The four-item Activity Outcome scale (α = .73) measured the likelihood of finding enjoyable activities on the internet (e.g. "feel entertained"). The four-item Novel Sensory Outcome scale (α = .74) assessed the likelihood of finding information on the internet (e.g. "get immediate knowledge of big events"). The four-item Social Outcome scale (α = .89) assessed the likelihood of developing relationships over the internet (e.g. "get support from others"). The three-item Self Evaluative Outcome scale (α = .77) measured the likelihood of finding entertainment over the internet (e.g. "relieve boredom"). The four-item Status Outcome scale (α = .75) assessed the likelihood of obtaining improvements in life (e.g. "improve my future prospects in life"). The four-item Monetary Outcome scale (α = .72) measured the likelihood of saving money on the internet (e.g. "get products for free"). The outcome scales had a combined Fry reading grade of 5.

12 It should be noted that additional constructs were also included in this section (for example, anxiety, perceived encouragement from others and perceived success of others). These constructs were included to allow for further exploration of the research after the PhD and were not used in the current research.
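The α values reported for these scales are Cronbach's alpha reliability coefficients. The following sketch shows how such a coefficient is computed from a respondents-by-items score matrix; the simulated ratings are purely illustrative and are not the study's data.

    import numpy as np

    def cronbach_alpha(items: np.ndarray) -> float:
        """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
        item_vars = items.var(axis=0, ddof=1)      # variance of each item
        total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
        k = items.shape[1]                         # number of items
        return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

    # Simulated 7-point ratings for a four-item scale (200 respondents)
    rng = np.random.default_rng(0)
    latent = rng.normal(size=(200, 1))
    ratings = np.clip(np.round(4 + latent + rng.normal(scale=0.8, size=(200, 4))), 1, 7)
    print(round(cronbach_alpha(ratings), 2))  # a high value, around .8 here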

5.7.4 Internet self-efficacy scale development

In an effort to further understand the psychological aspects of the digital divide the present study built on past research to develop a new measure of internet self-efficacy. Numerous articles and books have described how to develop new measurements (e.g. DeVellis, 2003; Spector, 1992). Whilst steps and procedures have varied slightly from author to author, most writers in the area have agreed on a common set of guidelines for measurement or scale development. The four-step approach for scale development proposed by Netemeyer, Bearden and Sharma (2003) was used in developing the internet self-efficacy scale in the current research:

Step 1: Construct definition and content domain
Step 2: Generating and judging measurement items
Step 3: Designing and conducting studies to develop and refine the scale
Step 4: Finalising the scale

These steps were combined with psychometric theory guidelines for assessing psychological phenomena, and with the well-established protocol for developing self-efficacy scales established by Albert Bandura. This protocol was outlined in his seminal publication Self-efficacy: The Exercise of Control (1997) and in the more recent Guide for Constructing Self-efficacy Scales (2005).

5.7.4.1 Step 1: Construct definition and content domain

One of the most important steps in the development of a scale is the task of defining the construct to be measured (Netemeyer et al, 2003). Spector (1992) contended that "it almost goes without saying that a scale cannot be developed to measure a construct unless the nature of that construct is clearly delineated" (p. 12). He noted that "when scales go wrong, more often than not it is because the developer overlooked the importance of carefully and specifically delineating the construct. Without a well-defined construct, it is difficult to write good items and to derive hypotheses for validation purposes" (p. 12-13). Care must be taken in defining constructs. If the construct is defined too narrowly, the measures created may fail to include important facets of the construct; this has been referred to as construct under-representation (Spector, 1992). If the construct is defined too broadly, the measures created could include extraneous factors, causing what has been referred to as construct irrelevant variance. Netemeyer et al (2003) proposed that a detailed and systematic literature review is the key to construct definition. A literature review reveals prior attempts to measure the construct and the strengths and weaknesses of such attempts. It also highlights whether there is a genuine need for a new measurement of the construct.

A review of the literature revealed that three measures of internet self-efficacy existed at the commencement of the research project (Eastin & LaRose, 2000; Maitland, 1996; Torkzadeh & Van Dyke, 2001). Each of these scales was pre-tested on the target population (n=19) (see Appendix 4 for full details of the pre-testing). All the scales tested had been developed using American college students; the current research was the first time the scales were being used with members of the community. Many of the participants stated that they were unsure of some of the words used, including: "internet hardware", "internet software", "internet program", "decrypting", "encrypting", "downloading", "URL" and "hypertext". Three of the participants did not fully complete the research instrument because they felt that the questionnaire was "irrelevant to them" and "beyond their understanding". These three participants were all internet non-users and therefore of direct relevance to the research. Thus, the preliminary results suggested that none of the existing internet self-efficacy scales were applicable for use with community members. College students, even those who perceived themselves as internet non-users or novices, may have had a more highly developed knowledge and understanding of the internet than members of the general public. Bandura (2005) recommended that measures of self-efficacy be tailored to meet the specific "reading level" (p. 4) of the population being examined. The pre-test therefore suggested that the "reading level" of participants within the current study (i.e. members of the general public) was significantly different to the "reading level" of the participants used in developing the existing internet self-efficacy scales (i.e. American college students). The key finding from the pre-testing was that an internet self-efficacy scale for use within the community needed to be constructed. It was also noted that the existing measures were limited by their non-compliance with Bandura's protocol for self-efficacy measurement (most notably the use of Likert scales instead of the recommended 0-100 scale).

In developing self-efficacy measures Bandura (1997) reminded the researcher that "self-efficacy was not a global trait" (p. 42); rather, self-efficacy is a task- or behaviour-specific construct. Bandura (1997) therefore recommended that "to achieve explanatory and predictive power measures of personal efficacy must be tailored to domains of functioning" (p. 42). This required a clear definition of the domain of interest, including a good conceptual analysis of the different types of capabilities needed and the range of situations in which these capabilities might be applied. As noted in Chapter 4, there are three levels of self-efficacy measurement. The most specific level measures perceived self-efficacy for a particular performance under a specific set of conditions. The intermediate level measures self-efficacy for a class of performances within the same activity domain under a class of conditions sharing common properties. The most general and global level measures belief in self-efficacy without specifying the activities or the conditions under which they must be performed. Bandura (1997) noted that "the optimal level of generality at which self-efficacy is assessed varies depending on what one seeks to predict and the degree of foreknowledge of the situational demands". The three previous measures of internet self-efficacy (Eastin & LaRose, 2000; Maitland, 1996; Torkzadeh & Van Dyke, 2001) were established at the global level, and the self-efficacy scale for the current research was also developed at this level. The measure taps into overall internet use and not just one specific internet-related activity (e.g. online shopping). As noted earlier, self-efficacy must be clearly distinguished from other constructs such as self-esteem, self-confidence and locus of control.

For the current research to proceed, a clear understanding of "internet self-efficacy" as applied in this research had to be established. Eastin and LaRose (2000) defined internet self-efficacy as the "belief in one's capabilities to organize and execute a course of Internet actions requiring given attainments" (p. 1). Torkzadeh and Van Dyke (2001) suggested that internet self-efficacy was the "self-perception held by individuals of their ability to interact with the Internet" (p. 275). But what are "Internet actions"? What does it mean to "interact with the Internet"? In 2003 the Pew Internet and American Life Project released findings from a study into the use of the internet by American citizens (Lenhart et al, 2003). Based on a sample of 3,553 Americans, the study concluded that about 72 million American adults go online per day. According to the study, the five most common daily activities performed on the internet were sending email, getting news, surfing the web for fun, looking for information on a hobby and sending instant messages (Lenhart et al, 2003). As such, internet self-efficacy for the current study was broadly defined as a person's judgement of their capability to use the internet. No attempt was made to specify the reason for use, such as social, academic or professional. However, it was acknowledged that the content domain would include three key areas: (i) communication, including email and joining newsgroups; (ii) information searching; and (iii) advanced skills, such as web design.

5.7.4.2 Step 2: Generating and judging measurement items

Once the construct had been clearly defined, the next step was to generate a large pool of items that were candidates for eventual inclusion in the scale. In generating an item pool an important goal is to systematically sample all content areas of the construct. Netemeyer et al (2003) stated that three issues should be kept in mind. First, scale items generated should tap into the content domain of the construct and exhibit content validity, where content validity referred to "the degree to which elements of an assessment instrument are relevant to and representative of the targeted construct for a particular assessment purpose" (p. 142). Second, the items should have face validity (i.e. be acceptable to respondents). A highly face-valid scale can enhance the cooperation of respondents through its ease of use, appropriate reading level and clarity, as well as its instructions and response formats. Third, the researcher should "go beyond his or her own view of the construct" (p. 143) in generating an initial item pool and consult several different sources. For example, looking at how previous studies have operationalised the construct (or related constructs) can be a valuable source in item generation. In terms of the current research, the existing internet self-efficacy scales provided some guidance on what to do and what not to do. Additionally, self-efficacy scales in the related domain of computer self-efficacy were also used to assist in item pool generation (i.e. Compeau & Higgins, 1995; Murphy, Coover & Owen, 1989). Care must also be taken to ensure that each content area of the construct has an adequate sampling of items (Netemeyer et al, 2003). Although difficult to achieve in practice, broader content areas should be represented by a larger item pool than narrower content areas. Netemeyer et al (2003) suggested that, whilst there is no hard and fast rule for the size of an initial item pool, a larger number of items is generally preferred. In addition to these general guidelines on generating an item pool, Bandura (1997) also provided a prescriptive process for identifying and developing items for inclusion in self-efficacy measures.

The following points were all considered in developing items for the initial item pool for the current internet self-efficacy scale:

• Self-efficacy should be measured at different levels of performance. “Sensitive measures of efficacy beliefs link operative capabilities to levels of challenge in particular domains of functioning” (Bandura, 1997, p. 38).

• The three dimensions of self-efficacy should all be considered when designing self-efficacy measures:

o The level of self-efficacy belief should be assessed by the inclusion of items that represent varying degrees of challenge to successful performance within the domain of functioning. “Sufficient impediments and challenges should be built into efficacy items to avoid ceiling effect” (Bandura, 1997, p. 43).

o The generality of self-efficacy should be assessed by the inclusion of items to measure self-efficacy in a variety of situations or contexts related to the domain of functioning (Bandura, 1997, p. 43).

o The strength of self-efficacy should be assessed by asking participants to rate the strength of their belief in their ability to execute the requisite activity on a scale. Bandura (1997) recommended a 100-point rating scale ranging in 10-unit intervals from 0 to 100, where 0 = "Cannot do", 50 = "Moderately can do", and 100 = "Certain can do". Efficacy scales should be unipolar, ranging from 0 to a maximum strength. They should not include negative numbers because a judgement of complete incapability (0) has no lower gradations. Scales that use only a few steps should be avoided because they are less sensitive and less reliable (Bandura, 1997, p. 44). A recent study by Pajares, Hartley and Valiante (2001), exploring the impact of different response formats in self-efficacy scales, concluded that the 0-100 format was psychometrically stronger than a traditional Likert format. A pilot study was conducted for the current research. It involved testing the complete internet self-efficacy scale on five members of the target population using the 0 to 100 rating scale. On completing the scale, respondents were asked whether they would have preferred a 0 to 10 scale; all indicated that they would. It is also worth noting that one respondent ignored the 0-100 instructions and used 0-10 anyway. Given the target population and Bandura's emphasis on using a suitable reading level, the 0-10 approach was identified as more appropriate.

• Items in a self-efficacy scale should be phrased in terms of "can do" instead of "will do". Bandura (1997) noted that "can" is a judgement of capability whilst "will" is a statement of intention (p. 43).

• The hierarchical structure of self-efficacy questionnaires should either be random in order or ascending in order of task demands. Ordering items from most to least difficult can affect self-efficacy judgements (Bandura, 1997, p. 44).

• Response bias or social evaluation concerns should be minimised by using standard testing techniques, such as assessing participants in private and without identification, and by including an instruction that emphasises frank judgement during self-assessment. In addition, respondents should be asked to judge their capabilities as at the time of completing the scale, not their potential capabilities or their expected future capabilities (Bandura, 2005).

In addition to the points raised above, another issue to be considered was that of negatively and positively worded items. By using a combination of both, some scale developers have suggested that they are able to "keep the respondent honest" and thus "avoid response bias in the form of acquiescence, affirmation or yea-saying" (Netemeyer et al, 2003). However, Netemeyer et al (2003) noted that negatively worded items do not exhibit reliability as high as positively worded items and can be confusing to the respondent. They also noted that negatively and positively worded items performed differently under factor analysis: positively worded items tended to load highly on one factor and negatively worded items on another. They advised the researcher to weigh the potential advantages and disadvantages of using negatively worded items in the item pool. For the reasons raised, and given the target population (members of the general population), negatively worded items were not used in the internet self-efficacy scale developed for the current research. A total of 67 items were generated.

Once the item pool had been generated, the next step was to have the pool reviewed or judged by experts knowledgeable in the content area. Experts can provide assistance in establishing both the content validity and face validity of the items (Netemeyer et al, 2003). Reviewers can evaluate the items' clarity and conciseness, and can point out ways of tapping into the phenomenon that have not been included in the item pool. Netemeyer et al (2003) provided two guidelines for submitting the item pool to expert review. First, all elements of the items should be provided for review; that is, the items themselves, the response formats, the number of scale points and the instructions to the respondent should all be judged. Second, as a rule of thumb, the more judges (i.e. five or more) the better, as the detection of bad or marginal items is more likely with more raters. A third guideline was proposed by Haynes, Richard and Kubany (1995), who advocated that a 5- or 7-point rating scale be used in asking judges to rate each item in terms of representativeness, specificity and clarity.

Since this study involved two areas, namely internet use and self-efficacy, it was decided that two different groups would be consulted as subject matter experts. One group was based upon subject expertise in internet use. This group consisted of six experts, three from the US and three from Australia. The experts were all qualified librarians who were involved, or had in recent years been involved, in providing internet training to members of the general public. These experts were advised that they were being viewed as experts in the area of internet use and, from this perspective, were asked to provide comment on the face and content validity of the items. They were also invited to identify any other internet activities or tasks that should be included on the scale. The second group was based upon subject expertise in social cognitive theory and the concept of self-efficacy. This group consisted of five experts, four from the US and one from Australia. The experts were all academics engaged in research using social cognitive theory. These experts were advised that they were being viewed as experts in the area of social cognitive theory and, from this perspective, were asked to provide comment on the face and content validity of the items. Both groups of experts made their ratings using a 5-point scale ranging from "Strongly Disagree with Item's Validity" to "Strongly Agree with Item's Validity". In addition, there was an opportunity to provide open comment on each item. The expert comment forms are provided in Appendix 2. Expert review was also obtained through the presentation of a poster at the American Psychological Society (APS) 13 annual convention held in Atlanta, Georgia in 2003. A copy of the poster is available in Appendix 3. Feedback from the two groups of experts resulted in both the removal of items and the addition of new items, and a total of 40 items was finalised after consulting with the expert groups.

5.7.4.3 Step 3: Designing and Conducting Studies to Develop and Refine the Scale, and Step 4: Finalising the Scale

Once a suitable pool of items had been generated and judged, empirical testing of the items on a relevant sample was the next step, followed by testing on additional samples to finalise the scale. Two studies were undertaken to help develop and refine the scale: the US study and the Australian study. Full details on the results of these studies are available in Chapter 8.

5.7.5 Pre-testing and pilot testing

Pre-testing and pilot testing are a necessary and vital part of questionnaire development (Litwin, 1995). According to Litwin (1995), such testing "provides useful information about how [the] questionnaire instrument actually plays in the field" (p. 67). A pre-test is when sections of the research instrument are tested on a sample of the population; a pilot test is when the entire research instrument is tested on a sample of the population (Litwin, 1995). Although conducting pre- and pilot-tests requires additional time, energy and resources, it is an important step that helps in determining the practical application of the questionnaire (Litwin, 1995). The pre- and pilot-testing in the current study consisted of four phases. Phases one to three involved pre-testing or obtaining expert views on the internet self-efficacy scale developed for the current research. Phase four was the pilot testing of the finalised instrument. Full details on all four phases can be found in Appendix 4.

13 The APS was established in 1988 and has approximately 18,000 members, including psychological scientists and academics, clinicians, researchers, teachers and administrators.


5.7.6 Final study

The current research consisted of two final studies. Study 1, the US study, took place in San Jose over a four-week period in January 2005. Study 2, the Australian study, took place in Brisbane over an eight-week period in November-December 2005. As Dillman (2000) noted, the questionnaire is only one element of a well-done survey; implementation procedures also have a significant influence on response rates. Research assistants (RAs) were employed to assist with the data gathering in both studies: four RAs in the US study (three females and one male) and three in the Australian study (all female). To control for variation in the data gathering process, a standardised procedure for data collection was established. This procedure outlined the parameters for how potential respondents were to be approached and what they would be told about the study. The procedure is available in Appendix 5.

5.7.7 Maintaining quality

A quality research instrument is essential for ensuring the success of any research endeavour (Neuman, 2003). The research instrument in the current research was a self-administered questionnaire. As noted earlier, self-administered questionnaires are a very popular data collection technique in the social sciences but, like all data collection techniques, they have sources of error that can quickly reduce the accuracy and quality of the data being collected (Neuman, 2003).

Error in survey research can include random sampling error or non-sampling error (Hair et al, 2003). Random sampling error is the statistically measured difference between the actual sampled results and the estimated true population results (Hair et al, 2003). This type of error is inversely related to sample size: as sample size increases, sampling error decreases. It is reduced by implementing a carefully designed sampling plan (see section 5.6).
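The inverse relationship between sample size and random sampling error can be illustrated with a quick simulation (illustrative only; this is not part of the study's analysis):

    import numpy as np

    rng = np.random.default_rng(42)
    population = rng.normal(loc=50, scale=10, size=100_000)

    # Empirical standard error of the sample mean at increasing sample sizes
    for n in (25, 100, 400):
        means = [rng.choice(population, size=n).mean() for _ in range(2000)]
        print(f"n = {n:4d}: standard error of the mean ~ {np.std(means):.2f}")
    # The printed values fall roughly as 1/sqrt(n): about 2.0, 1.0 and 0.5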

Non-sampling error refers to any error that can enter the questionnaire research design that is not related to the sampling method (Neuman, 2003). Non-sampling errors create systematic differences in the raw data that are not a natural occurrence or fluctuation in participants' responses (Neuman, 2003); that is, they lower the quality of the data being collected and the information being provided to the researcher. Non-sampling errors are controllable and, as such, the researcher can put mechanisms in place to reduce or eliminate them. There are three main sources of non-sampling error: measurement error, administrative error and respondent error:

• Measurement error occurs when a participant's answer to a question is "inaccurate, imprecise, or cannot be compared in any useful way to other respondents' answers" (Dillman, 2000). This error often results from poor question wording and questionnaire construction. As such, the design and construction of the questionnaire must be approached in a careful and systematic manner (see section 5.7).

• Administrative error occurs when inconsistencies or variations in the administration of the questionnaire, or in the processing of the collected data, result in error (Neuman, 2003). To control for this, a structured and systematic approach to the data gathering process was used (see section 5.7.6).

• Respondent error refers to faults in the way responses are provided by the sample (Neuman, 2003). The main type of respondent error to be considered in the current research was non-response error. Non-response error occurs when "a significant number of people in the survey sample do not respond to the questionnaire and have different characteristics from those who do respond, when these characteristics are important to the study" (Dillman, 2000, p. 10). Non-response can be divided into two categories: item non-response and total non-response:

o Item non-response is when some but not all information is collected from one or more individuals (Aaker et al, 2004). This type of non-response may occur for a number of reasons: for example, because the participant does not know the answer to a particular question, or because they are pressed for time. To help reduce item non-response the questionnaire was carefully designed (including pre- and pilot-testing) with an easy to understand format and language, and could be completed within 10-15 minutes (Aaker et al, 2004) (see section 5.5).

o Total non-response is when no information is collected at all for one or more participants (Aaker et al, 2004). This type of non-response can occur for several reasons, including that an individual may refuse to participate or may not be able to participate at that moment in time (Aaker et al, 2004). Drawing upon the recommendations of Malhotra (2004), the following efforts were used to lower total non-response: (i) the questionnaire was designed to be attractive; (ii) several research assistants were used to recruit participants; and (iii) the research assistants were encouraged to tailor the standardised procedure for data collection so that it was more relevant and motivating to each potential participant (see sections 5.5 & 5.6). Malhotra (2004) recommended that non-response error be estimated and reported. However, the use of a non-probability sampling approach in the current study limited the extent to which non-response error could be calculated. To help reduce this limitation the advice of Fraenkel and Wallen (1996) was followed: the issue of non-response was explored and reported by describing the samples from both the US and Australian studies as thoroughly as possible regarding key characteristics (e.g. age, income, education). Each sample was also compared to known community profiles (see Chapter 6). By providing this level of description, other researchers and practitioners are able to judge for themselves the degree of validity and reliability of the study.

For the current research the reliance on self-reported data was another major source of error. The use of self-reported data is very common in investigations of psychological variables (Neuman, 2003). Self-reported data are frequently the most effective means of determining the experiences of the people being investigated. This approach entails a standardised procedure for asking subjects how they feel, what they think, what they believe or what attitudes they have toward a particular stimulus. Asking is often the best method of obtaining the information required, and has the advantage of being quick and relatively inexpensive in comparison to alternative methods. But self-report data are susceptible to response bias, that is, the tendency of participants to distort their responses (Neuman, 2003). This distortion can occur for a number of reasons. Failure to recall the information being asked about is one source of inaccuracy (DeVellis, 2003). This problem, however, was unlikely to be significant in this research as the data collection instrument did not include historically focused questions; instead, the questions focused on present and/or immediate past behaviours, feelings and observations. There was also the potential for responses to be prejudiced by social desirability, with participants overestimating or underestimating their level of self-efficacy, internet use or socio-demographic factors such as income or education, depending on how they wished to represent themselves. Answers on questionnaires may not only be a function of respondents' true perceptions but also of their own agenda regarding their participation in the research and how they wish to appear (DeVellis, 2003). It was difficult to determine whether the current research findings were influenced by social desirability bias.

5.8 Ethical issues

Research involving members of the general population must adhere to high ethical standards. The current research received the approval of the Queensland University of Technology Ethics Committee. De Vaus (2002) identified five ethical issues and responsibilities that should be addressed in questionnaire research: voluntary participation, informed consent, no harm, anonymity and confidentiality. Each of these was addressed in the current research. There was no cost to participants other than their time to complete the questionnaire. The study procedures did not involve any foreseeable risks or harm to participants beyond those associated with completing a questionnaire. Informed consent was obtained by including a front cover to the questionnaire that disclosed details of the study, including its purpose and procedure, potential benefits, expectations of participants, time commitments and contact details (i.e. telephone and email) of the researcher. An explanation about confidentiality and the right to withdraw from the study at any time was also outlined on the cover sheet. Anonymity was assured by not recording any personal data (i.e. participant name) on the questionnaire. In addition, a verbal explanation about the study and the associated issues of anonymity and confidentiality was provided to each participant when they were approached and invited to participate. As required by QUT policy, all data will be kept locked in a secure place for five years from the date of completion of the study.

5.9 Data analysis

The successful analysis of quantitative data requires careful planning and attention to detail (Polit & Beck, 2004). The data analysis process varies from project to project; however, regardless of what the research entails, there are certain steps that data analysis will inevitably involve. The current research used the data analysis plan advocated by Polit and Beck (2004), which involves five phases: the preanalysis phase, the preliminary assessments phase, the preliminary actions phase, the principal analyses phase and the interpretive phase. Polit and Beck (2004) noted that the plan is not linear but provides a framework from which to work. As such, not every phase of the plan is discussed here; Chapters 6, 7 and 8 provide the discussion of the data analysis for the current research, highlighting the elements of the plan that allow for a detailed understanding of how the analysis was conducted. All quantitative data were analysed using the Statistical Package for the Social Sciences (SPSS) computer program, Version 12.0.

5.10 Conclusion

This chapter has provided an overview of the research method for the current research. The current research was guided by critical theory: it explored new ways of understanding a social phenomenon (i.e. the digital divide) with the view to enacting future change. The survey method was used; in particular, self-administered questionnaires were used for data collection. The research had an international context, with the two cities of Brisbane, Australia and San Jose, USA providing the communities from which study participants were obtained. The final data collection took place in January 2005 in the US context and November/December 2005 in the Australian context. Common sources of error, such as sampling and non-sampling error, were considered in the design and implementation of the research. The primary source of error identified was the reliance on self-reported data, which is susceptible to response bias (i.e. a participant's tendency to distort responses). The research received the approval of the Queensland University of Technology Ethics Committee, and the five ethical issues and responsibilities identified by De Vaus (2002) were considered: voluntary participation, informed consent, no harm, anonymity and confidentiality.


Chapter 6: The participants

6.1 Introduction

The previous chapter outlined the research philosophy and approach, the research context and sampling plan, and the instrument design and administration process that allowed the collection of a data set suitable for analysis. Using descriptive statistics and informed judgement this chapter explores the research participants. More specifically, two sources of error are examined: non-response error and sampling error. In regard to the former, the chapter examines item non-response error (i.e. the difference, if any, between those who completed the questionnaire and those who did not fully complete it); the issue of total non-response error is also addressed. In regard to the latter, the chapter examines to what extent the questionnaire respondents are representative of the population. Overall, this analysis represents the first level of attention to the external validity of the questionnaire findings.

6.2 The US participants

All participants in the US study were drawn from the San Jose Public Library. A total of 488 questionnaires were obtained. The questionnaires can be classified into three types:

• Complete: Those questionnaires where the respondent completed all items. A total of 228 Complete questionnaires were obtained. This was 46.72% of the total number of questionnaires.

• Incomplete: Those questionnaires where the respondent did not complete 1 or 2 items in the questionnaire. A total of 102 Incomplete questionnaires were obtained. This was 20.90% of the total number of questionnaires.

• Invalid: Those questionnaires where three or more items were not completed by the respondent. A total of 158 Invalid questionnaires were obtained. This was 32.38% of the total number of questionnaires.

The Incomplete and Invalid questionnaires represented the item non-response error for the US study. The following assumption guided the distinction between Incomplete and Invalid questionnaires: respondents who failed to answer only one or two items were classified as Incomplete rather than Invalid, as it was determined that a participant could genuinely miss this number of items without this suggesting that they had misread the instructions. The same could not be assumed for respondents who missed three or more items. This criterion was used as a mechanism for establishing a degree of rigour in the work. Invalid questionnaires were removed from all data analysis, while Complete and Incomplete questionnaires were used for the statistical analysis associated with exploring the research question (see Chapters 7 & 8). With 32.38% of the questionnaires classified as Invalid and 20.90% classified as Incomplete, an important first step in the data analysis process was ascertaining any significant differences between the respondents who completed the entire questionnaire and those who did not (i.e. item non-response error).
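The classification rule just described can be expressed compactly in code. The sketch below is illustrative only (the data are hypothetical and the study's questionnaires were screened manually); it applies the rule that 0 missing items is Complete, 1-2 missing is Incomplete, and 3 or more missing is Invalid:

    import pandas as pd

    def classify(row: pd.Series) -> str:
        """Classify a questionnaire by its number of missing items."""
        n_missing = int(row.isna().sum())
        if n_missing == 0:
            return "Complete"
        return "Incomplete" if n_missing <= 2 else "Invalid"

    # Hypothetical responses: each column is an item, NaN is a skipped item
    responses = pd.DataFrame({
        "item1": [1, 2, None, 4],
        "item2": [3, None, None, 2],
        "item3": [5, 4, None, 1],
        "item4": [2, 1, None, 5],
    })
    print(responses.apply(classify, axis=1))
    # row 0 -> Complete, row 1 -> Incomplete, row 2 -> Invalid, row 3 -> Complete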

6.2.1 Approach to analysing non-response

Two statistical approaches were used in exploring the respondent differences between the three questionnaire types: the Chi-square test of independence and the one way ANOVA.

• The Chi-square test of independence was used to explore the data measured on nominal and ordinal scales 14 (see Sections 6.2.2.1 to 6.2.2.7). The Chi-square test of independence determines whether two categorical variables are related (Pallant, 2005). At one extreme the variables could have no relationship, that is, they are independent of each other; in this instance the effect size estimate comes out to zero. At the other extreme, the variables might be perfectly related to each other; in this case the effect size estimate equals one (Aron & Aron, 1999). Thus,

the closer the effect size estimate is to 0, the less relationship, or the closer to independence between the two nominal variables. The closer the effect size is to 1, the more relationship or the closer to a perfect relationship between the two nominal variables (Aron & Aron, 1999, p. 448).

14 Nominal scales use numbers as labels to identify and classify objects, individuals or events. For example, gender can be divided into a nominal scale with two labels, "male" and "female", where female is assigned the number 1 and male the number 2. The numbers have no value; they are just labels. An ordinal scale, in contrast, is a ranking scale: it places objects into predetermined categories that are rank ordered according to some criterion such as age, income or importance. The points on an ordinal scale do not indicate equal distances, but they do indicate order (Hair, Babin, Money & Samouel, 2003).


A significance value equal to or less than .05 was used to indicate that a result was significant (i.e. that there was a difference between the variables). The primary assumption to be considered when using a Chi-square test of independence is that the lowest expected frequency in any cell should be 5 or more (Pallant, 2005), although some authors have suggested a less stringent criterion whereby at least 80% of cells should have expected frequencies of 5 or more. In contrast, Delucchi (1993) suggested that the minimum cell frequency of 5 was not the most important issue to consider when undertaking a Chi-square analysis, proposing instead that the most important principle was that there should be at least five times as many individuals as there are cells. The Chi-square test also assumes random and independent sampling and that the set of combinations for the two variables is mutually exclusive and exhaustive (McGrath, 1997, p. 306). Chi-square analysis was conducted to explore the differences between respondents in the three questionnaire categories on the following demographic (and independent) variables: gender, age, highest education level, employment status, income, ethnicity and disability. It should be noted that whilst the missing data were not included in the analysis, they were included in the tables presented in this chapter to present the full set of questionnaires. This may give the impression that some cells fell below the recommended frequency of 5; this, however, was not the case, as the missing data were not included in the analysis. Thus, in summary, the following guidelines were followed (where possible) in conducting the Chi-square analysis: (i) a significance level of .05 was used, (ii) cells should have expected frequencies of 5 or more, and (iii) there should be at least five times as many respondents as there were cells. (An illustrative sketch of this test in code is provided after this list.)

• The one way Analysis of Variance (ANOVA) was used to analyse the continuous or interval 15 data (see Section 6.2.2.8). A one way ANOVA tests for significant differences between groups. It should be used when there are two or more groups to be compared and only one independent variable (Pallant, 2005). Several assumptions needed to be met: (i) the dependent variable was continuous; (ii) normal distribution; (iii) random sampling; (iv) the groups should be mutually exclusive (independent of each other); and (v) the groups should have equal variance (homogeneity of variance) (Pallant, 2005). An F ratio was calculated that represented the variance between the groups. A significant F indicated that there was a difference between the groups. Type I and Type II 16 errors should be carefully considered by selecting an appropriate alpha level (e.g. .05) for determining significance and by considering power and sample size. According to Stevens (1996), when the sample size is large (e.g. 100 or more) then "power is not an issue" (p. 6). However, when the group size is small it must be noted that a non-significant result may be due to insufficient power. Stevens (1996) suggested that when small group sizes are involved it may be necessary to adjust the alpha level to compensate (e.g. set a cut-off of .10 or .15 rather than .05).

15 An interval scale uses numbers to rate objects or events so that the distances between the numbers are equal (Hair et al, 2003).
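To make the Chi-square procedure described above concrete, the following minimal sketch runs the test of independence on the gender by questionnaire type counts reported in Table 6.2 (using Python's scipy library rather than the SPSS package employed in the study):

    import numpy as np
    from scipy import stats

    # Observed counts from Table 6.2, excluding missing data
    observed = np.array([
        [114, 44, 82],   # male:   Complete, Incomplete, Invalid
        [114, 53, 70],   # female: Complete, Incomplete, Invalid
    ])
    chi2, p, dof, expected = stats.chi2_contingency(observed)
    print(f"chi2 = {chi2:.2f}, df = {dof}, p = {p:.2f}")  # chi2 = 1.76, df = 2, p = 0.42
    print((expected >= 5).all())  # True: every cell meets the minimum frequency of 5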

6.2.2 Non-response analysis

Participants provided information on seven demographic characteristics (independent variables): gender, age, highest education level, employment status, income, ethnicity and disability. The three questionnaire types (Complete, Incomplete and Invalid) were compared with regard to non-response against these characteristics. In addition, the questionnaire types were compared with regard to the dependent variable of internet use. Table 6.1 provides a summary of these comparisons. A detailed discussion of each analysis follows.

Characteristic        Difference between questionnaire types

Gender                No significant difference
Age                   No significant difference
Highest education     Significant difference but only a small to medium effect
Employment            Significant difference but only a small to medium effect
Income                No significant difference
Ethnicity             No significant difference
Disability            No significant difference
Internet use          Significant difference but should not impact on statistical analysis

Table 6.1: Summary of characteristics against US questionnaire type

16 A Type I error is when the null hypothesis is rejected when it is in fact true (i.e. deciding there is a difference between the two groups when there is not one). A Type II error is failing to reject the null hypothesis when it is in fact false (i.e. believing that the groups do not differ when they do) (Hair et al, 2003).


6.2.2.1 Gender

A total of 240 males (49.2%) and 237 females (48.6%) participated in the questionnaire (11 unknown). Table 6.2 presents the number and percentage of Complete, Incomplete and Invalid questionnaires by gender. The distribution by gender for each of the questionnaire types was similar, although there was a larger proportion of females in the Incomplete questionnaire type and a larger proportion of males in the Invalid questionnaire type. Nonetheless, gender and questionnaire type were not significantly related (χ²(2, N=477) = 1.76, p = .42). Both the minimum cell frequency criterion and the respondent to cell ratio were met.

           Complete        Incomplete      Invalid         Total
           N      %        N      %        N      %        N      %
Male       114    50.0     44     43.1     82     51.9     240    49.18
Female     114    50.0     53     52.0     70     44.3     237    48.57
Missing    0      0.0      5      4.9      6      3.8      11     2.25
Total      228    100.0    102    100.0    158    100.0    488    100.0

Chi-Square = 1.76   D.F. = 2   Sig = .42   Cells with E.F. < 5 = 0 of 6   Res/cell ratio = 79.5

Table 6.2: Questionnaire type by gender (US)

6.2.2.2 Age

Participants ranged from 17 to 90 years in age, with those aged 21 to 30 the most frequently occurring age range (n=130, or 27% of the sample). Six respondents did not supply their age. Table 6.3 presents the number and percentage of Complete, Incomplete and Invalid questionnaires by age group. To prevent violation of the assumption that expected cell frequencies be equal to or greater than 5, the age groupings were re-grouped (i.e. 61-70, 71-80, 81-90 and Over 90 became 61+). The respondent to cell ratio was met. The distribution of ages for each of the questionnaire types was similar, although there were some minor differences. For example, there was a stronger representation of respondents aged 21-30 and 31-40 in the Complete questionnaire type, and more respondents aged 61+ in the Incomplete and Invalid types. Overall, age and questionnaire type were not significantly related (χ²(10, N=482) = 10.31, p = .41).

           Complete        Incomplete      Invalid         Total
           N      %        N      %        N      %        N      %
17-20      17     7.5      11     10.9     13     8.5      41     8.5
21-30      69     30.3     25     24.8     36     23.5     130    27.0
31-40      61     26.8     25     24.8     30     19.6     116    24.2
41-50      33     14.5     16     15.8     29     19.0     78     15.98
51-60      33     14.5     15     14.9     25     16.3     73     14.96
61+        15     6.6      9      8.9      20     13.1     44     6.76
Missing    0      0.0      1      1.0      5      3.2      6      1.23
Total      228    100.0    102    100.0    158    100.0    488    100.0

Chi-Square = 10.31   D.F. = 10   Sig = .41   Cells with E.F. < 5 = 0 of 18   Res/cell ratio = 26.77

Table 6.3: Questionnaire type by age (US)


6.2.2.3 Highest education level

Table 6.4 presents the number and percentage of Complete, Incomplete and Invalid questionnaires by the highest level of completed education. To prevent violation of the assumption that expected cell frequencies be equal to or greater than 5, the education level categories were re-grouped (i.e. High school or less = Primary school + High school; Graduate degree = Masters degree + PhD/Doctorate). Both the minimum cell frequency and the respondent to cell ratio were met. The distribution of highest education level between the three questionnaire types suggested a few differences. For example, there was a larger proportion of respondents indicating High school or less or TAFE/Technical college in the Invalid questionnaire type, with bachelor degree proportionally better represented in the Complete questionnaire type. Whilst this suggested that respondents with lower levels of education had difficulties in completing the questionnaire, it is interesting to note that respondents with a graduate qualification were better represented in the Incomplete type. Highest education level and questionnaire type were significantly related (χ²(6, N=479) = 18.28, p = .006).

                         Complete        Incomplete      Invalid         Total
                         N      %        N      %        N      %        N      %
High school or less      71     31.1     28     28.3     54     35.5     153    31.35
TAFE/Technical college   17     7.5      6      5.9      25     15.8     48     9.84
Bachelor degree          90     39.5     34     33.3     39     24.7     163    33.40
Graduate degree          50     21.9     31     31.3     34     22.4     115    23.57
Missing                  0      0.0      3      2.94     6      3.80     9      1.84
Total                    228    100.0    102    100.0    158    100.0    488    100.0

Chi-Square = 18.28   D.F. = 6   Sig = .006   Cells with E.F. < 5 = 0 of 18   Res/cell ratio = 39.92

Table 6.4: Questionnaire type by highest education level (US)

Aron and Aron (1999) suggested that once a significant relationship has been found between two variables it is important to explore the "size of the effect" (i.e. the strength of the relationship). Cramer's phi statistic is calculated to achieve this:

Cramer's Φ = √( χ² / (N × df_smaller) )

When calculating Cramer's phi the important aspect to consider is the degrees of freedom for the smaller side of the table (in the current analysis the smaller side is 3, giving 2 degrees of freedom). Based on this, the Cramer's phi for the current analysis equals .14. According to Cramer this sits between a small and a medium effect size (for 2 degrees of freedom a small effect equals .07, a medium effect equals .21 and a large effect equals .29). This suggested that whilst there was a difference between the questionnaire types according to highest education level, the difference would not be very important in practice.
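The calculation can be verified with a few lines of code; this sketch is illustrative only, using the values already reported above:

    import math

    def cramers_phi(chi2: float, n: int, smaller_side: int) -> float:
        """Cramer's phi = sqrt(chi2 / (N * df)), df = smaller side minus one."""
        return math.sqrt(chi2 / (n * (smaller_side - 1)))

    # Highest education level: chi2 = 18.28, N = 479, smaller side of table = 3
    print(round(cramers_phi(18.28, 479, 3), 2))  # 0.14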

6.2.2.4 Employment

The majority of study participants were in some type of employment (79.71%). Table 6.5 presents the number and percentage of Complete, Incomplete and Invalid questionnaires by type of employment. To prevent violation of the assumption that expected cell frequencies be equal to or greater than 5, the employment categories were re-grouped. Given that the US Census Bureau used only two categories to describe employment status (employed and unemployed), and that ultimately the study would use the census to compare the study's sample to the San Jose population, this grouping was used in the current study. The "Employed" status included the following categories: full-time employed, part-time employed, casually employed, contract employed, job share or self-employed. Those who indicated they were on leave were also included in this category, given that these individuals were presumably on leave from some type of employment (mostly maternity leave). The "Unemployed" status included all those respondents who indicated they were either unemployed or retired. The Invalid questionnaire type had proportionally more unemployed and fewer employed respondents than the two other questionnaire types. This suggested that respondents who were unemployed had greater difficulty in completing the questionnaire instrument. Employment status and questionnaire type were significantly related (χ²(2, N=480) = 18.99, p < .001).

             Complete        Incomplete      Invalid         Total
             N      %        N      %        N      %        N      %
Unemployed   29     12.7     16     16.0     46     30.3     91     18.65
Employed     199    87.3     84     84.0     106    69.7     389    79.71
Missing      0      0.0      2      1.96     6      3.8      8      1.64
Total        228    100.0    102    100.0    158    100.0    488    100.0

Chi-Square = 18.99   D.F. = 2   Sig = .00   Cells with E.F. < 5 = 0 of 12   Res/cell ratio = 38

Table 6.5: Questionnaire type by current employment (US)

Following the advice of Aron and Aron (1999) outlined earlier, the "size of the effect" (i.e. the strength of the relationship) was explored by calculating Cramer's phi statistic. In the current analysis the smaller side of the table is 3, giving 2 degrees of freedom; based on this, the Cramer's phi for the current analysis equals .14. According to Cramer this sits between a small and a medium effect size (for 2 degrees of freedom a small effect equals .07, a medium effect equals .21 and a large effect equals .29). This suggested that whilst there was a difference between the questionnaire types according to employment status, the difference would not be very important in practice.

6.2.2.5 Income

The income of participants ranged from zero to over $100,000. Table 6.6 presents the number and percentage of Complete, Incomplete and Invalid questionnaires by income. To prevent violation of the assumption that expected cell frequencies be equal to or greater than 5, the income categories were re-grouped (i.e. from groups of $10,000, such as $20,001-$30,000, to groups of $20,000, such as $20,001-$40,000). The minimum respondent to cell ratio was met. The Invalid questionnaire type had proportionally more respondents with incomes of $20,000 or less and fewer respondents with incomes of $40,001-$60,000, but also appeared to have proportionally more respondents with incomes over $100,000. The Complete questionnaire type had proportionally more respondents with incomes of $60,001-$80,000 and the Incomplete questionnaire type had more respondents with incomes of $20,001-$40,000. This suggested that respondents with higher incomes were more likely to complete the questionnaire. However, income and questionnaire type were not significantly related (χ²(10, N=472) = 12.52, p = .25).

                     Complete        Incomplete      Invalid         Total
                     N      %        N      %        N      %        N      %
$20,000 or less      67     29.4     27     26.5     51     34.9     145    29.71
$20,001-$40,000      54     23.7     29     29.6     38     26.0     121    24.80
$40,001-$60,000      51     22.4     23     23.5     23     15.8     97     19.88
$60,001-$80,000      34     14.9     7      7.1      15     10.3     56     11.48
$80,001-$100,000     12     5.3      8      8.2      7      4.8      27     5.53
Over $100,000        10     4.4      5      4.9      12     7.6      27     5.53
Missing              0      0.0      4      3.9      12     7.6      16     3.28
Total                228    100.0    102    100.0    158    100.0    488    100.0

Chi-Square = 12.52   D.F. = 10   Sig = .25   Cells with E.F. < 5 = 0 of 18   Res/cell ratio = 26.2

Table 6.6: Questionnaire type by income (US)

6.2.2.6 Ethnicity

The majority of study participants identified themselves as Caucasian/White (49.3%). Table 6.7 presents the number and percentage of Complete, Incomplete and Invalid questionnaires by ethnicity. The distribution of ethnicity for each of the questionnaire types was markedly similar, although it was noted that the Invalid category had more Asian or Pacific Islander, Hispanic or Latino and Native American respondents. This suggested that individuals who identified themselves as being of a minority ethnic background may have had difficulties in completing the questionnaire. It should be noted that four of the cells violated the minimum cell frequency assumption (Complete: Native American; Incomplete: African American and Native American; Invalid: Native American), but the minimum respondent to cell ratio was met. Despite this, ethnicity and questionnaire type were not significantly related (χ²(10, N=472) = 14.63, p = .15).

                            Complete        Incomplete      Invalid         Total
                            N      %        N      %        N      %        N      %
African American            13     5.7      3      2.9      8      5.1      24     4.92
Asian or Pacific Islander   35     15.4     15     14.7     31     19.6     81     16.60
Hispanic or Latino          29     12.7     17     16.7     31     19.6     77     15.78
Native American             2      0.9      1      1.0      4      2.5      7      1.43
Caucasian/White             126    55.3     55     53.9     60     38.0     241    49.39
Other                       23     10.1     6      5.9      13     8.2      42     8.61
Missing                     0      0.0      5      4.9      11     7.0      16     3.28
Total                       228    100.0    102    100.0    158    100.0    488    100.0

Chi-Square = 14.63   D.F. = 10   Sig = .15   Cells with E.F. < 5 = 4 of 18   Res/cell ratio = 72.5

Table 6.7: Questionnaire type by ethnicity (US)

6.2.2.7 Disability The majority of study participants did not identify themselves as having a disability (82.38%). Table 6.8 presents the number and percentage of Complete, Incomplete and Invalid questionnaires by disability. Both the minimum cell frequency and the minimum respondent to cell ratio were met. The distribution of disability for each of the different questionnaire categories was similar. It is interesting to note that the percentage of respondents who identified themselves as having a disability increased across the three questionnaire types, with the Invalid questionnaire type having the most respondents identifying themselves as having a disability. This suggested that respondents who identified themselves as having a disability may have had difficulties in completing the questionnaire. Disability and questionnaire type were significantly related (χ2(2, N=435) = 7.42, p = .024). However, following the effect-size approach used above, Cramer's phi for this analysis equals .09, a small effect (for 2 degrees of freedom a small effect equals .07, a medium effect equals .21 and a large effect equals .29). This suggested that whilst there was a difference between the questionnaire types according to disability, the difference would not be very important in practice.

          Complete        Incomplete      Invalid         Total
          N       %       N       %       N       %       N       %
Yes       10      4.4     8       7.8     15      9.5     33      6.76
No        218     95.6    75      73.5    109     69.0    402     82.38
Missing   0       0       19      18.6    34      21.5    53      10.86
Total     228     100     102     100     158     100     488     100

Chi-Square = 7.42   D.F. = 2   Sig = .02   Cells E.F. < 5 = 0 out of 6   Res/cell ratio = 72.5

Table 6.8: Questionnaire type by disability (US)


6.2.2.8 Internet use In addition to exploring the differences between the questionnaires in terms of demographic factors, it was important to consider to what extent the three questionnaire types differed in regard to the dependent variable – internet use. A one way ANOVA was originally selected for this analysis. However, initial analysis suggested that the data violated the assumption of homogeneity of variance, with the Levene test returning a significance of less than .05. Thus the Kruskal-Wallis test, a non-parametric alternative, was used. With the Kruskal-Wallis test, a chi-square statistic is used to compare the scores on a continuous variable for three or more groups: scores are converted to ranks and the mean rank for each group is compared (Pallant, 2005, p. 294). The test was significant (χ2(2, N=454) = 4.84, p = .01), suggesting a difference in internet use across the three questionnaire types. An inspection of the mean ranks for the groups suggested that the Complete questionnaire type had the highest internet use scores, with the Incomplete questionnaire type reporting the lowest.

Mann-Whitney U tests were conducted to evaluate pairwise differences among the three questionnaire types. To protect against a Type I error a Bonferroni correction was used; that is, the level of significance was divided by the number of comparisons to be made (0.05/3 = 0.0167). For a comparison to be considered significant it therefore had to reach a significance level of 0.0167 rather than 0.05. The result of the test between the Complete and Incomplete questionnaires was significant (z = -2.75, N=330, p = .006): Complete questionnaires had an average rank of 174.94 while Incomplete questionnaires had an average rank of 144.39. The result of the test between Complete and Invalid questionnaires was not significant (z = -1.95, N=352, p = .059). Similarly, the result of the test between Incomplete and Invalid questionnaires was not significant (z = -.457, N=226, p = .648). Therefore, the only difference among the three questionnaire types was between the Complete and Incomplete types – the two types that were combined for the statistical analysis used to answer the research question (see Chapter 7). Neither of these types differed significantly from the Invalid questionnaires, meaning that the respondents whose questionnaires were retained for analysis did not differ from those whose questionnaires were excluded. Therefore, the impact of this difference on the study's overall validity should be minimal.
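A sketch of this two-step procedure in Python (SciPy) is given below. The scores are simulated placeholders – the thesis data are not reproduced here, and the real analysis was run in SPSS – but the group sizes follow the chapter.

    import numpy as np
    from scipy.stats import kruskal, mannwhitneyu

    rng = np.random.default_rng(42)
    # Simulated internet-use scores standing in for the three US types.
    complete = rng.normal(6.0, 2.0, 228)
    incomplete = rng.normal(5.2, 2.0, 102)
    invalid = rng.normal(5.6, 2.2, 124)

    # Kruskal-Wallis converts scores to ranks and compares mean ranks.
    h_stat, p_overall = kruskal(complete, incomplete, invalid)
    print(f"Kruskal-Wallis H = {h_stat:.2f}, p = {p_overall:.3f}")

    # Pairwise Mann-Whitney U follow-ups at a Bonferroni-corrected alpha.
    pairs = {
        "Complete vs Incomplete": (complete, incomplete),
        "Complete vs Invalid": (complete, invalid),
        "Incomplete vs Invalid": (incomplete, invalid),
    }
    alpha = 0.05 / len(pairs)  # 0.05 / 3 = 0.0167
    for label, (a, b) in pairs.items():
        u_stat, p_pair = mannwhitneyu(a, b, alternative="two-sided")
        print(f"{label}: U = {u_stat:.0f}, p = {p_pair:.4f}, "
              f"significant at corrected alpha: {p_pair < alpha}")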


6.2.2.9 Summary In summary it can be seen that the respondents in the Complete, Incomplete and Invalid questionnaires were not significantly different in terms of age, gender, income and ethnicity. Initial analysis revealed a statistically significant difference between the questionnaires with regard to highest education, employment status and disability; however, further analysis suggested that whilst there was a difference between the questionnaire types, the difference would not be very important in practice. Initial analysis also revealed a statistically significant difference in terms of internet use between the three questionnaire types. However, closer inspection suggested that this difference would not have an impact on the overall validity of the study's statistical analysis. Given the results discussed above it was suggested that there existed a reasonably close correspondence between the three questionnaire types. Therefore, data analysis could proceed with the Complete and Incomplete questionnaires being used for statistical analysis, with confidence that there were no significant differences in the respondents for each category of questionnaire and, as such, minimal item non-response error. Descriptive statistics for the two categories of questionnaires used for data analysis are provided in Table 6.9.


                                Complete + Incomplete (n=330)
Variable                     Categories                          n       %
Gender                       Male                                158     47.9
                             Female                              167     50.6
                             Missing                             5       1.5

Age                          17-20                               28      8.5
                             21-30                               94      28.6
                             31-40                               86      26.1
                             41-50                               49      14.8
                             51-60                               48      14.5
                             61-70                               21      6.4
                             71-80                               3       0.9
                             81-90                               0       0.0
                             Over 90                             0       0.0
                             Missing                             1       0.3

Highest education level      Primary school                      9       2.7
                             High school                         90      27.3
                             TAFE/Technical college              23      7.0
                             Bachelor degree                     124     37.6
                             Masters degree                      68      20.6
                             PhD/Doctorate                       13      3.9
                             Missing                             3       0.9

Employment status            Unemployed                          45      13.6
                             Full time employed                  165     50.0
                             Part time employed                  61      18.5
                             Casually employed                   15      4.5
                             Contract employed                   13      3.9
                             Job share                           0       0
                             Retired                             20      6.1
                             Self employed                       9       2.7
                             Missing                             2       0.6

Income (average over         No income                           13      3.9
3 years)                     $1 - $10 000                        41      12.4
                             $10 001 - $20 000                   39      11.8
                             $20 001 - $30 000                   50      15.2
                             $30 001 - $40 000                   33      10.0
                             $40 001 - $50 000                   41      12.4
                             $50 001 - $60 000                   33      10.0
                             $60 001 - $70 000                   26      7.9
                             $70 001 - $80 000                   15      4.5
                             $80 001 - $90 000                   9       2.7
                             $90 001 - $100 000                  11      3.3
                             Over $100 000                       15      4.5
                             Missing                             4       1.2

Ethnicity                    African American                    16      4.8
                             Asian or Pacific Islander           50      15.2
                             Hispanic or Latino                  46      13.9
                             Native American                     3       0.9
                             Caucasian/White                     181     54.8
                             Other**                             29      8.8
                             Missing                             5       1.5

Disability                   Yes                                 18      5.5
                             No                                  293     88.8
                             Missing                             19      5.8

** Other – participants who chose this category indicated they identified as having a mixed ethnicity.

Table 6.9: Summary statistics for the demographic variables (US)


6.2.3 Sample representativeness In addition to exploring item non-response, it was also important to consider how well the participants compared to the population from which they were drawn. As part of this process the issue of total non-response was addressed. This section briefly considers how well the participants being used for data analysis represent the San Jose community. Table 6.10 provides an overview of the San Jose community in terms of gender, age, education level, employment status, income, ethnicity and disability. Data was obtained from the 2004 American Community Survey conducted by the US Census Bureau (US Census Bureau, 2004) 17.

Variable                              Categories                              %

Gender                                Male                                    51
                                      Female                                  49

Age                                   18-24                                   8
                                      25-44                                   33
                                      45-64                                   23
                                      65 and over                             10

Median age                            35.3 years

Highest education level               Less than high school                   17
(25 years and over)                   High school diploma or equivalency      18
                                      Some college, no degree                 19
                                      Associate degree                        8
                                      Bachelor degree                         25
                                      Graduate or professional degree         13

Employment status                     Unemployment rate                       7

Income                                Median earnings                         $34 235

Ethnicity                             African American                        2
                                      Asian or Pacific Islander               31
                                      Hispanic or Latino                      32
                                      Native American                         0.5
                                      Caucasian/White                         58
                                      Two or more races                       4

Disability                            21-64 years with a disability           7

Table 6.10: Summary statistics for San Jose, CA, USA.

It can be seen from Table 6.10 that the respondents in the current study resembled the socio-economic composition of the San Jose community. Using the combined Complete and Incomplete questionnaires (n=330), a comparison of demographics against the San Jose community was made.

17 This was the most recent demographic data available for comparison. The US Census data was gathered in 2004 and the current study's data gathered in January/February 2005.

• San Jose had a relatively even division of males to females (51% males and 49% females) and this was broadly evident in the sample population, with 47.9% male and 50.6% female. Unlike the San Jose community, however, the sample leaned slightly towards more females than males.

• Thirty-three percent of the San Jose population were aged between 25 and 44, 10% of the population were aged 65 and over, and the population had a median age of 35.3. Whilst the type of data collected for age did not allow for a full comparison between the San Jose community and the study sample, some similarities were suggested. Like the San Jose community, the study sample had a dominant age of respondents in their 20s, 30s and 40s (69.5% of the sample) and a very small proportion (7.3%) aged 61 and over.

• The San Jose community was a dominantly white community (58%), with other ethnic groups represented as follows: 32% Hispanic or Latino, 31% Asian or Pacific Islander, 2% African American, 0.5% Native American, 4% two or more races and 8% some other race. The current sample, whilst not exactly the same in percentage terms, had a similar ethnic mix. The majority of respondents identified themselves as white or Caucasian (54.8%) and the next major ethnic groups were, in order, Asian or Pacific Islander (15.2%), Hispanic or Latino (13.9%), two or more races and other races (combined because of the way the questionnaire was constructed) (8.8%), African American (4.8%) and Native American (0.9%).

• The current study's sample differed slightly from the San Jose community in terms of education level. In 2004, 83% of the San Jose population had at least graduated from high school and 37% had a bachelor degree or higher. In the current sample 96.4% of the respondents had at least graduated from high school and 62.1% had a bachelor degree or higher. It appeared that individuals with higher levels of education were over-represented in the present study. This was perhaps not surprising given the data collection point – the joint San Jose Public Library/San Jose State University Library.


• The median earnings in San Jose were $34,235 in 2004; 10% of the study's respondents indicated an income ranging from $30,001 to $40,000, and the largest group of respondents (15.2%) indicated an income between $20,001 and $30,000. In 2000, 5.9% of the community had incomes below $15,000; in the current sample 16.3% had incomes below $10,000. This suggested that individuals with lower incomes might be slightly over-represented in the present study.

• In 2004 San Jose had an unemployment rate of 7%. In the current study 13.6% of respondents were unemployed, thus suggesting that people who were unemployed were slightly over-represented in the present study.

• Seven percent of the San Jose population aged 21 to 64 identified themselves as having a disability. In the current study 5.5% of the total sample (keeping in mind that only 7.3% of the total sample were aged 61 and over) identified themselves as having a disability. It could be suggested that people with disabilities were slightly under-represented in the present study, but only marginally so.

In summary, the researcher endeavoured to take appropriate steps to choose a representative sample despite the study's limitation of having a relatively small catchment area for recruiting respondents (see Chapter 5 for full details). It has been demonstrated that the current sample's characteristics were similar to those of the San Jose community. The only minor differences between the current study's sample and the San Jose community were that people with higher education, people with lower incomes and those unemployed were slightly over-represented.

As stated in Chapter 5, total non-response refers to when an individual is not able to participate in the study for whatever reason (i.e. refusal, exclusion). Total non-response was an issue in the current study in two ways. Firstly, no information was available on those individuals who were approached and invited to participate in the study but refused. Secondly, participants were recruited from one location only – the King Branch of the joint San Jose State University Library and San Jose Public Library. Thus, members of the San Jose community who did not use the King branch were excluded from the study. No information was available on what differences, if any, existed between San Jose residents who used the King branch and those who did not (i.e. in education, income, ethnicity). Consequently, generalising from the sample to the San Jose population must be done with caution.


6.2.4 Summary of US participants On examination of the US study participants two conclusions can be made. First, there was minimal non-response error, with those respondents who completed the questionnaire not differing significantly from those who did not. The only significant differences between the questionnaires occurred in regard to highest education, employment status, disability and internet use; however, further analysis suggested that these differences would not be very important in practice or would have limited impact on the statistical analysis to be undertaken. Second, there was minimal sampling error, with the current sample's characteristics being a close approximation to the San Jose population. The only minor differences between the current study's sample and the San Jose community were that people with higher education, people with lower incomes and those unemployed were slightly over-represented.

6.3 The Australian participants For the Australian study, participants were drawn from various locations throughout Brisbane (see Chapter 5). A total of 433 questionnaires were obtained. As in the US study, the questionnaires in the Australian study were classified into three types:

• Complete: Those questionnaires where the respondent completed all items. A total of 294 complete questionnaires were obtained. This was 67.90% of the total number of questionnaires.

• Incomplete: Those questionnaires where the respondent did not complete 1 or 2 items in the questionnaire. A total of 104 incomplete questionnaires were obtained. This was 24.02% of the total number of questionnaires.

• Invalid: Those questionnaires where 3 or more items in the questionnaire were not completed by the respondent. A total of 35 invalid questionnaires were obtained. This was 8.08% of the total number of questionnaires.

The Incomplete and Invalid questionnaires represented the item non-response for the Australian sample. The same rules for distinguishing between Incomplete and Invalid questionnaires that were applied in the US study were used in the Australian study (see section 6.2). Once again, Invalid questionnaires were removed from any form of data analysis while the Complete and Incomplete questionnaires were used for statistical analysis in exploring the research question (see Chapters 7 and 8). With 8% of the questionnaires classified as Invalid and 24% classified as Incomplete, an important first step in the data analysis process was ascertaining any significant differences between the respondents who successfully completed the questionnaire and those who did not (i.e. non-response error).

6.3.1 Non-response analysis As with the US sample, two statistical analysis techniques were used to explore the differences between the three questionnaire types: the chi-square test for independence and one way ANOVA. Full details on how these techniques were applied are available in section 6.2.1. Participants provided information on seven demographic characteristics (independent variables): gender, age, highest education level, employment status, income, ethnicity and disability. The three questionnaire types (Complete, Incomplete and Invalid) were compared with regard to non-response against these characteristics. In addition, the questionnaire types were compared in regard to the dependent variable, internet use. Table 6.11 provides a summary of these comparisons. A detailed discussion of each analysis follows.

Characteristic Difference between 3 questionnaire categories

Gender No significant difference

Age No significant difference

Highest education Significant difference but only a small to medium effect

Employment No significant difference

Income No significant difference

Ethnicity No significant difference

Disability Significant difference but only a small effect

Internet use    Significant difference but difference should not impact on statistical analysis

Table 6.11: Summary of characteristics against Australian questionnaire types

6.3.1.1 Gender A total of 158 males (36.5%) and 265 females (61.2%) participated in the questionnaire (10 unknown). Table 6.12 presents the number and percentage of Complete, Incomplete and Invalid questionnaires by gender. The distribution of gender for each of the different questionnaires was similar, although there was a slightly smaller proportion of males and a slightly larger proportion of females in the Incomplete questionnaire type. Nonetheless, gender and questionnaire type were not significantly related (χ2(2, N=423) = 3.29, p = .19). Both the minimum cell frequency criterion and the respondent to cell ratio were met.

          Complete        Incomplete      Invalid         Total
          N       %       N       %       N       %       N       %
Male      116     39.5    28      26.92   14      40.00   158     36.49
Female    178     60.5    67      64.42   20      57.14   265     61.20
Missing   0       0       9       8.65    1       2.86    10      2.31
Total     294     100.0   104     100.00  35      100.00  433     100

Chi-Square = 3.29   D.F. = 2   Sig = .19   Cells E.F. < 5 = 0 of 6   Res/cell ratio = 70.5

Table 6.12: Questionnaire type by gender (Australia)

6.3.1.2 Age Participants completing the questionnaire ranged from 17 to 80 years in age, with those aged 31 to 40 the most frequently occurring age range (n=123, or 28.4% of the total sample). Only one respondent did not supply their age. Table 6.13 presents the number and percentage of Complete, Incomplete and Invalid questionnaires by age group. To avoid violating the assumption that expected cell frequencies be equal to or greater than 5, the age groupings were re-grouped (i.e. 61-70, 71-80, 81-90 and Over 90 became 61+). The respondent to cell ratio was met. The distribution of ages for each of the different questionnaires was markedly similar, although there were some minor differences. For example, there was a higher proportion of respondents aged 17-20 and 21-30 in the Invalid questionnaire type; a higher proportion of respondents aged 31-40 in the Complete questionnaire type; and a higher proportion of respondents aged 51-60 in the Incomplete questionnaire type. Interestingly, there were no respondents aged 61 and over in the Invalid questionnaire type. Overall, age and questionnaire type were not significantly related (χ2(10, N=432) = 13.95, p = .19).

          Complete        Incomplete      Invalid         Total
          N       %       N       %       N       %       N       %
17-20     40      13.6    11      10.58   8       22.9    59      13.63
21-30     61      20.7    22      21.15   11      31.4    94      21.71
31-40     90      30.6    26      25.00   7       20.0    123     28.41
41-50     69      23.5    22      21.15   6       17.1    97      22.40
51-60     25      8.5     16      15.39   3       8.6     44      10.16
61+       9       3.1     6       5.77    0       0       15      3.46
Missing   0       0       1       0.96    0       0       1       0.23
Total     294     100     104     100     35      100     433     100

Chi-Square = 13.95   D.F. = 10   Sig = .19   Cells E.F. < 5 = 2 of 18   Res/cell ratio = 24.0

Table 6.13: Questionnaire type by age (Australia)


6.3.1.3 Highest education level Table 6.14 presents the number and percentage of Complete, Incomplete and Invalid questionnaires by the highest level of completed education. To avoid violating the assumption that expected cell frequencies be equal to or greater than 5, the education level categories were re-grouped (i.e. High school or less = Primary school + High school; Graduate degree = Masters degree + PhD/Doctorate). The respondent to cell ratio was met. The distribution of highest education level between the three questionnaire categories suggested a few differences. For example, there was a higher proportion of respondents with high school or less education in the Incomplete and Invalid questionnaire types. Interestingly, the opposite was true for the TAFE/Technical category, where the higher proportions were in the Complete and Incomplete questionnaire types. Respondents with a Bachelor degree as their highest education level were proportionally better represented in the Complete questionnaire type, and no respondents with a Graduate degree as their highest qualification were in the Invalid type. This suggested that those with lower levels of education had difficulties in completing the questionnaire. Highest education level and questionnaire type were significantly related (χ2(6, N=431) = 15.08, p = .020).

                         Complete        Incomplete      Invalid         Total
                         N       %       N       %       N       %       N       %
High school or less      120     40.8    56      53.85   23      65.71   199     45.96
TAFE/Technical college   69      23.5    17      16.35   5       14.29   91      21.02
Bachelor degree          84      28.6    27      25.96   6       17.14   117     27.02
Graduate degree          21      7.1     3       2.88    0       0       24      5.54
Missing                  0       0       1       0.96    1       2.86    2       0.46
Total                    294     100     104     100     35      100     433     100

Chi-Square = 15.08   D.F. = 6   Sig = .020   Cells E.F. < 5 = 2 of 12   Res/cell ratio = 35.92

Table 6.14: Questionnaire type by highest education level (Australia)

To gauge the size of the effect (i.e. the strength of the relationship), Cramer's phi was calculated. Cramer's phi for the current analysis equals 0.13. This sits between a small and a medium effect size (for 2 degrees of freedom a small effect equals .07, a medium effect equals .21 and a large effect equals .29). This suggested that whilst there was a difference between the questionnaire types according to highest education, the difference would not be very important in practice.

6.3.1.4 Employment The majority of study participants were employed (75.3%). Table 6.15 presents the number and percentage of Complete, Incomplete and Invalid questionnaires by employment status. To avoid violating the assumption that expected cell frequencies be equal to or greater than 5, the employment levels were regrouped into Employed and Unemployed. The "Employed" status included all respondents who were full time employed, part time employed, casually employed, contract employed, job sharing or self employed. Those who indicated that they were on leave were also included in this category, given that these individuals were presumably on leave (mostly maternity leave) from some type of employment. The "Unemployed" status included all respondents who indicated that they were either unemployed or retired. Both the minimum cell frequency criterion and the respondent to cell ratio were met. Not surprisingly, the Invalid questionnaire type had proportionally more Unemployed and fewer Employed respondents than the two other questionnaire types. This suggested that those who were Unemployed had greater difficulty in completing the questionnaire instrument. Nonetheless, the regrouped employment status and questionnaire type were not significantly related (χ2(2, N=433) = 2.58, p = .28).

             Complete        Incomplete      Invalid         Total
             N       %       N       %       N       %       N       %
Unemployed   67      22.8    28      26.9    12      34.3    107     24.7
Employed     227     77.2    76      73.1    23      65.7    326     75.3
Missing      0       0       0       0       0       0       0       0
Total        294     100     104     100     35      100     433     100

Chi-Square = 2.58   D.F. = 2   Sig = .28   Cells E.F. < 5 = 0 out of 6   Res/cell ratio = 72.12

Table 6.15: Questionnaire type by regrouped employment (Australia)

6.3.1.5 Income The income of participants completing the questionnaire ranged from zero through to over $100,000. Table 6.16 presents the number and percentage of Complete, Incomplete and Invalid questionnaires by income. To avoid violating the assumption that expected cell frequencies be equal to or greater than 5, the income categories were re-grouped (i.e. from groups of $10,000 such as $20,001-$30,000 to groups of $20,000 such as $20,001-$40,000). The minimum respondent to cell ratio was met. The Invalid questionnaire type had proportionally more respondents with no income and proportionally fewer respondents with incomes above $20,000 than the other two questionnaire types. Whilst this suggested that respondents with higher incomes were more likely to complete the questionnaire, income and questionnaire type were not significantly related (χ2(10, N=419) = 11.24, p = .34).


                    Complete        Incomplete      Invalid         Total
                    N       %       N       %       N       %       N       %
No income           26      8.8     11      10.58   5       14.29   42      9.70
$1 - $20 000        65      22.1    25      24.04   7       20.00   97      22.40
$20 001 - $40 000   98      33.3    37      35.58   13      37.14   148     34.18
$40 001 - $60 000   65      22.1    15      14.42   3       8.57    83      19.17
$60 001 - $80 000   29      9.9     6       5.77    0       0       35      8.08
$80 001 +           11      3.7     2       1.92    1       2.86    14      3.32
Missing             0       0       8       7.69    6       17.14   15      3.46
Total               294     100     104     100     35      100     433     100

Chi-Square = 11.24   D.F. = 10   Sig = .34   Cells E.F. < 5 = 4 out of 18   Res/cell ratio = 72.12

Table 6.16: Questionnaire type by income (Australia)

6.3.1.6 Ethnicity The majority of study participants identified themselves as Caucasian/White (66.5%). Table 6.17 presents the number and percentage of Complete, Incomplete and Invalid questionnaires by ethnicity. A few differences between the questionnaire types were noted. The Complete questionnaire type had proportionally fewer respondents who identified themselves as Australian Aboriginal or Torres Strait Islander and proportionally more respondents who identified themselves as Caucasian/White. The Invalid questionnaire type had proportionally more respondents who identified themselves as Asian. It should be noted that five of the cells violated the minimum cell frequency assumption, but the minimum respondent to cell ratio was met. Nonetheless, ethnicity and questionnaire type were not significantly related (χ2(8, N=398) = 10.21, p = .25).

                                          Complete        Incomplete      Invalid         Total
                                          N       %       N       %       N       %       N       %
Aust. Aboriginal/Torres Strait Islander   19      6.5     9       8.65    3       8.57    31      7.16
Asian                                     36      12.2    11      10.58   7       20.0    54      12.47
Hispanic or Latino                        7       2.4     2       1.92    0       0       9       2.08
Caucasian/White                           218     74.1    54      51.92   17      48.57   288     66.51
Other                                     14      4.8     1       0.96    0       0       16      3.70
Missing                                   0       0       27      25.96   8       22.86   35      8.08
Total                                     294     100     104     100     35      100     433     100

Chi-Square = 10.21   D.F. = 8   Sig = .25   Cells E.F. < 5 = 5 out of 15   Res/cell ratio = 26.53

Table 6.17: Questionnaire type by ethnicity (Australia)

6.3.1.7 Disability The majority of study participants did not identify themselves as having a disability (93.3% of those responding to the item). Table 6.18 presents the number and percentage of Complete, Incomplete and Invalid questionnaires by disability. Both the minimum cell frequency and the minimum respondent to cell ratio were met. The distribution of disability for each of the different questionnaire categories was markedly similar, although it is interesting to note that the percentage of respondents who identified themselves as having a disability varied between the three questionnaire types, with the Incomplete type having the most respondents identifying themselves as having a disability. Disability and questionnaire type were significantly related (χ2(2, N=405) = 6.91, p = .032).

          Complete        Incomplete      Invalid         Total
          N       %       N       %       N       %       N       %
Yes       14      4.8     10      9.62    3       8.57    27      6.24
No        280     95.2    67      64.42   31      88.57   378     87.30
Missing   0       0       27      25.96   1       2.86    28      6.47
Total     294     100     104     100     35      100     433     100

Chi-Square = 6.91   D.F. = 2   Sig = .032   Cells E.F. < 5 = 0 out of 6   Res/cell ratio = 67.5

Table 6.18: Questionnaire type by disability (Australia)

To gauge the size of the effect (i.e. the strength of the relationship), Cramer's phi was calculated. Cramer's phi for the current analysis equalled 0.09. This represents a small effect size (for 2 degrees of freedom a small effect equals .07, a medium effect equals .21 and a large effect equals .29). This suggested that whilst there was a difference between the questionnaire types according to disability, the difference would not be very important in practice.

6.3.1.8 Internet use A one way ANOVA was conducted to explore the impact of questionnaire type on internet use. Table 6.19 presents the results of the ANOVA. There was a statistically significant difference at the p < .05 level between the three groups (F(2, 425) = 3.8, p = .023). Despite reaching statistical significance, the actual difference in mean scores between the groups was quite small (5.80, 5.23 and 5.00). The effect size, calculated using eta squared, was .02; this is considered a small effect size (Cohen, 1988). Post hoc comparisons using the Tukey HSD test indicated no significant difference between the mean scores of any pair of groups. It should be noted that the group sizes were considerably different (Complete n=294; Incomplete n=102; Invalid n=31). Tabachnick and Fidell (2001) noted that "problems created by unequal group sizes are relatively minor" (p. 46), especially if the assumption of homogeneity of variance has been met (as was the case in the current analysis). Tabachnick and Fidell (2001) also noted that extreme differences in group sizes may result in a Type I error (i.e. deciding there is a difference between groups when there is not one). This was possibly the case in the current analysis.
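The sequence of checks described above can be sketched in Python (SciPy) as follows. The scores are simulated placeholders for the Australian internet-use data, and scipy.stats.tukey_hsd (available in recent SciPy releases) stands in for the SPSS Tukey HSD procedure actually used.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(7)
    # Simulated scores standing in for the three questionnaire types.
    groups = [rng.normal(5.80, 2.2, 294),   # Complete
              rng.normal(5.23, 2.2, 102),   # Incomplete
              rng.normal(5.00, 2.2, 31)]    # Invalid

    # Levene's test checks the homogeneity of variance assumption.
    _, levene_p = stats.levene(*groups)

    # One-way ANOVA across the three groups.
    f_stat, p_val = stats.f_oneway(*groups)

    # Eta squared = SS_between / SS_total, the effect size used above.
    scores = np.concatenate(groups)
    grand_mean = scores.mean()
    ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
    ss_total = ((scores - grand_mean) ** 2).sum()
    print(f"Levene p = {levene_p:.3f}, F = {f_stat:.2f}, "
          f"p = {p_val:.3f}, eta^2 = {ss_between / ss_total:.3f}")

    # Post hoc pairwise comparisons (Tukey HSD).
    print(stats.tukey_hsd(*groups))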


               Between Groups                  Within Groups                    Total
               D.F.   Sum Sqs.   Mean Sqs.     D.F.   Sum Sqs.    Mean Sqs.     D.F.   Sum Sqs.    F Ratio   F Prob.
Internet use   2      36.314     18.157        425    2016.163    4.744         427    2052.477    3.827     .023

Table 6.19 Analysis of variance – internet use by questionnaire type (Australia)

6.3.1.9 Summary In summary, it can be seen that the respondents in the Complete, Incomplete and Invalid questionnaires were not significantly different in terms of age, gender, employment, income and ethnicity. Initial analysis revealed a statistically significant difference between the questionnaires in regard to highest education and disability; however, further analysis suggested that whilst there was a difference between the questionnaire types along these two demographics, the difference would not be very important in practice. Initial analysis also revealed a statistically significant difference in terms of internet use; however, post hoc analysis failed to locate any significant pairwise differences between the groups. Overall, given the results discussed above, it was suggested that there existed a reasonably close correspondence between the three groups. Therefore, data analysis could proceed with the Complete and Incomplete questionnaires being used for factor analysis and for the causal analysis, given that there were no significant differences in the respondents for each type of questionnaire. Descriptive statistics for the two categories of questionnaires used for data analysis are provided in Table 6.20.


                                Complete + Incomplete (n=398)
Variable                     Categories                                   n       %
Gender                       Male                                         144     36.2
                             Female                                       245     61.6
                             Missing                                      9       2.3

Age                          17-20                                        51      12.8
                             21-30                                        83      20.9
                             31-40                                        116     29.1
                             41-50                                        91      22.9
                             51-60                                        41      10.3
                             61-70                                        12      3.0
                             71-80                                        3       0.8
                             81-90                                        0       0
                             Over 90                                      0       0
                             Missing                                      1       0.3

Highest education level      Primary school                               19      4.8
                             High school                                  157     39.4
                             TAFE/Technical college                       86      21.6
                             Bachelor degree                              111     27.9
                             Masters degree                               16      4.0
                             PhD/Doctorate                                2       0.5
                             Other (Postgrad qual.)                       6       1.5
                             Missing                                      1       0.3

Employment status            Unemployed                                   79      19.8
                             Full time employed                           161     40.5
                             Part time employed                           60      15.1
                             Casually employed                            53      13.3
                             Contract employed                            18      4.5
                             Job share                                    2       0.5
                             Retired                                      16      4.0
                             Self employed                                7       1.8
                             On leave                                     2       0.5

Income (average over         No income                                    37      9.3
3 years)                     $1 - $10 000                                 42      10.6
                             $10 001 - $20 000                            48      12.1
                             $20 001 - $30 000                            72      18.1
                             $30 001 - $40 000                            63      15.8
                             $40 001 - $50 000                            51      12.8
                             $50 001 - $60 000                            29      7.3
                             $60 001 - $70 000                            23      5.8
                             $70 001 - $80 000                            12      3.0
                             $80 001 - $90 000                            6       1.5
                             $90 001 - $100 000                           2       0.5
                             Over $100 000                                5       1.3
                             Missing                                      8       2.0

Ethnicity                    Australian Aboriginal or Torres Strait       28      7.0
                             Islander
                             Asian                                        47      11.8
                             Hispanic or Latino                           9       2.3
                             Caucasian/White                              272     68.3
                             Other**                                      15      3.8
                             Missing                                      27      6.8

Disability                   Yes                                          24      6.0
                             No                                           347     87.2
                             Missing                                      27      6.8

** Other – participants who chose this category indicated they identified as having a mixed ethnicity.

Table 6.20: Summary statistics for the demographic variables (Australia)


6.3.2 Sample representativeness In addition to exploring item non-response, it was also important to consider how well the participants compared to the population from which they were drawn. As part of this process the issue of total non-response was addressed. The following section briefly considers how well the participants being used for data analysis represent the Brisbane community. Table 6.21 provides an overview of the Brisbane community in terms of gender, age, education level, employment status, income, ethnicity and disability. Data was obtained from the 2001 Australian census conducted by the Australian Bureau of Statistics (ABS, 2002) 18.

It can be seen from Table 6.21 that the respondents in the current study resembled the socio-demographic composition of the Brisbane community. Using the combined Complete and Incomplete questionnaires (n=398), a comparison of demographics against the Brisbane community was made.

• Brisbane had a relatively even division of males to females (49% males and 51% females), with females slightly dominant. Whilst the female dominance was evident in the sample population, with 61.6% female and 36.2% male, the sample ratio did not replicate the population ratio: the sample had proportionally more females than were found within the Brisbane population.

• The Brisbane community had 31% of the population aged between 25 and 44 and 10.8% of the population aged 65 and over; the modal age group was 25-34 years (15.6%). Whilst the type of data collected for age did not allow for a full comparison between the Brisbane community and the study sample, it did suggest some similarities. Like the Brisbane community, the study sample had a dominant age of respondents in their 20s, 30s and 40s (72.9% of the sample) and a very small proportion (3.8%) aged 61 and over. It was noted, however, that the current study had slightly fewer older adults than are found within the Brisbane population.

• The current study's sample differed slightly from the Brisbane community in terms of education level. In 2001, 21.2% of Brisbane residents had a TAFE or Technical college qualification, 10.5% had a bachelor degree and 3.4% had a postgraduate qualification. In the current sample 21.6% of the respondents had a TAFE or Technical college qualification, 27.9% had a bachelor degree and 6% had postgraduate qualifications. This suggested that individuals with higher levels of education were over-represented in the present study. Whilst the TAFE/Technical qualifications closely mimicked the Brisbane community, the current sample clearly had an over-representation of individuals with Bachelor degrees as compared to the Brisbane community in general.

18 This was the most recent demographic data available for comparison at the time. The Australian Census data was gathered in August 2001 and the current study's data was gathered in November/December 2005.

• Whilst the type of data collected for employment did not allow for a full comparison between the Brisbane community and the study sample, it did suggest some similarities. When taking into consideration all categories of employment, the current sample had slightly more employed respondents (76.2%) than the Brisbane population (58.3%). It is interesting to note that in several of the employment categories the current sample very closely mimicked the Brisbane population. In 2001, 37.8% and 18.8% of the Brisbane community were in full-time and part-time employment, respectively. The current sample was very similar to these figures, with 40.5% and 15.1% of respondents indicating they were in full time and part time employment, respectively. An interesting difference existed in the unemployment figures. According to the ABS, in 2001 4.9% of the Brisbane community was unemployed, with a further 33.7% not in the labour force. The current study did not gather employment data in the same way; that is, it did not provide the option for respondents to indicate if they were intentionally not in the workforce. Instead, this was viewed as being "unemployed", although it should be noted that respondents were able to indicate if they were retired. Additionally, a small number of respondents used the other category as a way of indicating being out of the workforce for reasons other than unemployment. Thus, the current sample had 25.6% of respondents who were either unemployed or not in the workforce, compared to the 38.6% of the Brisbane community who were either unemployed or out of the workforce. Whilst it was difficult, given the difference in the data collected, to draw strong conclusions, two points needed to be considered: (i) people who were unemployed were slightly over-represented in the present study, and (ii) overall, people who were unemployed or out of the workforce for whatever reason were under-represented in the current sample.


• The type of data gathered prevented any detailed comparison between the current sample and the Brisbane population in terms of ethnicity. Only one observation could be made: the study’s sample had proportionally more respondents who identified themselves as Australian Aboriginal or Torres Strait Islander (7%) than the target population (1.66%).

• Only state wide statistics are available on disability (Disability Services Queensland, 2005). As such, no comparison could be made between the current sample and the Brisbane community on this characteristic.

In summary, the researcher endeavoured to take appropriate steps to choose a representative sample despite the study’s limitation of having a relatively small catchment area for recruiting respondents (see Chapter 5 for full details). It has been demonstrated that the current sample’s characteristics were similar to the Brisbane Community. The only minor differences between the current study’s sample and the Brisbane community were that people with higher education, people with lower incomes and those unemployed were slightly over-represented. Nonetheless, statistical analysis could proceed given that the study sample was a close representation of the population being explored.


Variable                              Categories                                  %

Gender                                Male                                        49
                                      Female                                      51

Age                                   0-14                                        20.7
                                      15-24                                       15.5
                                      25-34                                       15.6
                                      35-44                                       15.1
                                      45-54                                       13.8
                                      55-64                                       8.9
                                      65-74                                       5.7
                                      75+                                         5.1

Type of educational institution       Preschool                                   5.8
attending (all ages)                  Primary school                              36.9
                                      Secondary school                            24.8
                                      Technical or tertiary                       29.5
                                      Other                                       2.8

Qualifications                        Postgraduate                                2.0
                                      Graduate diploma & graduate certificate     1.4
                                      Bachelor                                    10.5
                                      Advanced diploma & diploma                  6.1
                                      Certificate                                 15.1
                                      Not stated                                  9.7
                                      None or still studying                      55.1

Employment status                     Unemployment rate                           7
                                      Full time                                   37.8
                                      Part time                                   18.8
                                      Employed but hours not stated               1.7
                                      Unemployed                                  4.9
                                      Not in the labour force                     33.7
                                      Not stated                                  3.2

Income (weekly)                       Nil                                         6.3
                                      $1 - $199                                   20
                                      $200 - $399                                 21.4
                                      $400 - $599                                 16.8
                                      $600 - $999                                 18.7
                                      $1000 +                                     10.2
                                      Not stated                                  6.5

Ethnicity                             Indigenous persons                          1.66
                                      Born overseas                               20.77
                                      Speak other language                        9.58

Disability                            No data available

Table 6.21: Summary statistics for Brisbane, QLD, Australia.


6.3.3 Summary of Australian participants On examination of the Australian study participants two conclusions were made. First, there was minimal non-response error, with respondents who completed the questionnaire not differing significantly from those who did not. The only significant differences between the questionnaires were noted in regard to highest education and disability; however, further analysis suggested that the differences noted would not be very important in practice and would have limited impact on the statistical analysis to be undertaken. Second, there was minimal sampling error, with the current sample's characteristics being similar to the Brisbane population. The only minor differences between the current study's sample and the Brisbane community were that people with higher education, people with lower incomes, those unemployed and those who identified themselves as Australian Aboriginal or Torres Strait Islander were slightly over-represented. As with the US sample, the issue of total non-response must be acknowledged, in that no information was available on those individuals who were approached and invited to participate in the study but refused. Additionally, whilst the Australian sample was drawn from multiple data collection points, it must be acknowledged that these points were relatively limited when compared to the full range of possible data collection points. Consequently, members of the Brisbane community who did not live or work within the areas served by the data collection points used in the research were excluded from the study. No information was available on what differences, if any, existed between Brisbane residents who lived and worked within these areas and those who did not (i.e. in education, income, ethnicity).

6.4 Conclusion Using descriptive statistics and informed judgement, this chapter explored the research participants. More specifically, two important sources of error were examined: non-response error and sampling error. Both the US and Australian samples revealed minimal non-response error, with respondents who completed the questionnaire not differing significantly from those who did not. It is interesting to note that both samples showed a difference between questionnaire types (Complete, Incomplete, Invalid) with regard to highest education level: a higher proportion of respondents who indicated that their education level was high school or less were present in the Invalid questionnaire type. This suggested that those with lower levels of education had difficulties in completing the questionnaire. In both samples, however, post hoc analysis indicated that whilst there was a difference between the questionnaire types, the difference would not be important in practice.


Both the US and Australian samples had good sample coverage with minimal sampling error. Once again it is interesting to note a similarity between the US and Australian samples: both differed from their respective populations in that people with higher education, people with lower incomes and those unemployed were slightly over-represented. It must be acknowledged that both samples (but most notably the US sample) had issues related to total non-response. However, recalling the words of McCready (2006) – "we're trying to make a sample that is a best estimate of the population under the conditions we face" (p. 147) – the current research could proceed with confidence that both the US and the Australian samples were "best estimates" of their respective populations.


Chapter 7: Construct validation

7.1 Introduction The previous chapter outlined the representativeness of the sample against the two target populations: San Jose, USA and Brisbane, Australia. It established that there was minimal non-response error and that both samples were a close representation of the target populations. Statistical analysis could therefore proceed. This chapter provides an overview of the psychometric soundness of the scales used in the current research, a necessary step before examining the variables further. The chapter proceeds in two sections. First, a brief introduction to the methods used for construct validation is provided, followed by a summary of the procedures each of these techniques entails. Then, validation of the variables is provided.

7.2 Methods used for construct validation Validity and reliability are the two criteria used in determining the quality of a scale or instrument designed to measure a specific construct (Hair et al, 2003). When these two criteria are addressed properly, measurement error is reduced. As noted in Chapter 5, measurement error occurs when the values obtained in a questionnaire are inaccurate or imprecise (Dillman, 2000).

• Validity refers to the extent to which a construct measures what it is supposed to measure. A construct with perfect validity contains no measurement error (Hair, et al, 2003). There are four types of validity that are commonly referred to: face validity, content validity, construct validity and criterion validity.

o Face validity is the extent to which an instrument appears to be valid to those who are completing it. Face validity is met when consensus is obtained among a group of subject matter experts that the instrument completely and comprehensively covers the variable that it intends to measure. Face validity, therefore, may be more concerned with what respondents from relevant populations infer with respect to what is being measured than with how well the actual measure works (Hair et al, 2003).

o Content validity refers to a measure's items being a proper sample of the theoretical domain of the construct (Netemeyer et al, 2003). Like face validity, content validity is also determined by expert judgement and is a subjective process (Netemeyer et al, 2003).

o Construct validity refers to how well a measure actually measures the construct it is intended to measure (Netemeyer et al, 2003). It is the ultimate goal in the development of an assessment instrument. It is assessed by hypothesising the relationship between variables based on prior established theory or a priori models. Two common examples of construct validity are convergent validity and discriminant validity. The former refers to the degree to which two measures of the same construct are related. The latter assesses the degree to which two measures designed to measure similar but conceptually different constructs are related (Netemeyer et al, 2003).

o Criterion validity refers to the degree to which the proposed measurement items provide results consistent with an independent external criterion measure. It is usually tested by looking at how a measure correlates with other measures of the same construct assessed either concurrently or in the future. Predictive validity is a type of criterion validity. It refers to the predictive power of a scale over the unobservable construct that it is intended to measure (Netemeyer et al, 2003).

Face and content validity are typically embedded in the instrument design phase. Chapter 5 discussed the extensive attention given to instrument design and operationalisation in this research. Thus, this chapter focuses on the validity tests conducted after data collection, with a primary focus on construct validity. Criterion validity could not be considered, as only one measure of each construct employed in the study was available and therefore included in the questionnaire.

Reliability is concerned with that "portion of measurement that is due to permanent effects that persist from sample to sample" (Netemeyer et al, 2003, p. 10). There are two broad types of reliability: test-retest and internal consistency. Test-retest reliability is demonstrated when a person receives the same score on the same set of items at two points in time. Internal consistency refers to the "internal relatedness among items or sets of items in the scale" (Netemeyer et al, 2003, p. 10). Internal consistency was used in the current research to establish reliability in the constructs. More specifically, internal consistency assesses item interrelatedness (Netemeyer et al, 2003); items composing a scale should show high levels of internal consistency. The most widely used internal consistency reliability coefficient is Cronbach's coefficient alpha (Cronbach, 1951). The commonly accepted level of alpha is 0.7 or above (Pallant, 2005; Tabachnick & Fidell, 2001). However, overall scale length must also be considered: as the number of items increases, alpha will increase (Netemeyer et al, 2003). Netemeyer and colleagues (2003) suggested that an important question was "how many items does it take to measure a construct?" (p. 11). They acknowledged that the answer would vary depending on the domain the construct was attempting to explore. They also noted that, with self-administered measures, response fatigue and/or non-cooperation needed to be considered; as such, scale brevity is often advantageous. In addition to providing an alpha level for the entire scale, SPSS reports the change in Cronbach's alpha with the deletion of each individual item. If the deletion of an item causes a considerable increase in alpha then that item should be considered for removal from the scale. Thus, Cronbach's alpha can be used to justify the deletion of items from a scale.
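A minimal sketch of coefficient alpha and the SPSS-style "alpha if item deleted" diagnostic, written in Python with simulated item scores (the thesis computed these values in SPSS):

    import numpy as np

    def cronbach_alpha(items: np.ndarray) -> float:
        """Coefficient alpha for an (n_respondents, n_items) matrix:
        alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1)
        total_var = items.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

    def alpha_if_item_deleted(items: np.ndarray) -> list[float]:
        """Alpha recomputed with each item removed in turn; a marked
        increase flags the item as a candidate for removal."""
        return [cronbach_alpha(np.delete(items, j, axis=1))
                for j in range(items.shape[1])]

    # Simulated responses: 300 respondents, 6 items on a 7-point scale.
    rng = np.random.default_rng(0)
    latent = rng.normal(4, 1, (300, 1))
    items = np.clip(np.rint(latent + rng.normal(0, 1, (300, 6))), 1, 7)
    print(round(cronbach_alpha(items), 2))
    print([round(a, 2) for a in alpha_if_item_deleted(items)])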

7.3 Factor analysis Factor analysis is a statistical technique used to analyse the interrelationships between a group of variables. Factor analysis can assist researchers to define their variables more precisely and decide what variables they should include in their studies and how these variables should relate to each other (Comrey and Lee, 1992). Pallant (2005) noted that factor analysis was used extensively by researchers involved in the development and validation of tests and scales and that it can also be used to reduce a large number of related variables to a more manageable number prior to using them in other analysis such as multiple regression. It was for both of these reasons that factor analysis was used in the current research. There are two main approaches to factor analysis: exploratory and confirmatory. Exploratory factor analysis (EFA) is used when the researcher is seeking to explore the underlying dimensions of the construct of interest. Confirmatory factor analysis (CFA), in contrast, is used when the researcher has an existing knowledge about the underlying structure of the construct being investigated (Tabachnick & Fidell, 2001). It was the former approach that was used in the current research.


7.3.1 Exploratory factor analysis Exploratory factor analysis (EFA) is usually performed in the early stages of research, "when it provides a tool for consolidating variables and generating hypotheses about relationships in a reduced data set" (Tabachnick & Fidell, 2001, p. 584). Stevens (1996) suggested that EFA was considered to be more of a theory generating than a theory testing procedure and that, in contrast, CFA was generally based on a strong empirical foundation that allowed the researcher to specify an exact factor model in advance. In considering the differences between EFA and CFA, Saucier and Goldberg (1996) stated that "because exploratory factor analysis provides a more rigorous replication test than confirmatory analysis, the former technique may often be preferred to the latter" (p. 35). Netemeyer et al (2003) noted that if data from different samples produced essentially identical factor analytic results using exploratory approaches, the likelihood of those results being a recurring quirk was quite small. They suggested that CFA focused on taking an existing model and seeing if the current data set supported it; in other words, the computer conducting the analysis was given a "heavy hint as to how things should turn out" (p. 15). They also suggested that rediscovering a prior factor structure without recourse to such hints, as can happen with repeated exploratory analysis, can be very persuasive. For this reason EFA was used in the current research. The caveat by DeVillis (2003) also guided the research:

With all factor analysis approaches common sense is needed to make the best decisions. The analyses are merely guides to the decision-making process and evidence in support of the decisions made. They should not, in my judgement, entirely replace investigator decision making (p. 132).

A brief discussion on the factor analytic approach used in exploring the scales used in the current research is provided in the following section.

7.3.2 Exploratory factor analysis approach A brief discussion on the EFA approach employed within the current study is provided below. This discussion describes: (i) treatment of missing data; (ii) the method of factor extraction; (iii) the method of factor rotation; (iv) tests of factor analysis appropriateness; (v) criteria for choosing the number of factors extracted; (vi) variable loadings; (vii) factor scores; and (viii) naming the factors.


• Treatment of missing data Cases were omitted from the analysis based on pairwise treatment of missing data. In pairwise treatment, cases or persons are excluded only "if they are missing the data they required for the specific analysis" (Pallant, 2005, p. 52). Pallant (2005) recommended that pairwise exclusion be used unless there was a "pressing reason to do otherwise" (p. 53). By undertaking pairwise exclusion the current study made the best use of the available data (i.e. the fullest number of cases was utilised for each analysis), as the brief sketch below illustrates.
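The distinction between pairwise and listwise exclusion can be illustrated in a few lines of Python with pandas (the thesis used the equivalent SPSS options):

    import numpy as np
    import pandas as pd

    df = pd.DataFrame({"item1": [1.0, 2.0, np.nan, 4.0, 5.0],
                       "item2": [2.0, np.nan, 3.0, 5.0, 4.0],
                       "item3": [1.0, 2.0, 3.0, 4.0, 5.0]})

    # Pairwise exclusion: each correlation uses every case that has data
    # on both variables involved (pandas' default for .corr()).
    print(df.corr())

    # Listwise exclusion, for contrast: any case with a missing value on
    # any variable is dropped from all correlations.
    print(df.dropna().corr())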

• Assumptions Like many statistical analysis techniques, factor analysis has certain assumptions that must be met before the analysis can proceed, although Hair et al (2006) noted that the critical assumptions underlying factor analysis were more conceptual than statistical. They suggested that the overriding concern was with the character and composition of the variables included in the analysis as much as with their statistical qualities. However, they noted that from a statistical standpoint, departures from normality, homoscedasticity and linearity apply only to the extent that they diminish the observed correlations, and that meeting the conceptual requirements for the variables included in the analysis was more important. Nonetheless, some of the key assumptions to be considered included: sample size, normality, linearity, outliers, multicollinearity and singularity.

o Sample size. The sample size appropriate for factor analysis is an issue that has been well discussed in the literature, and over the years many different recommendations and rules of thumb have been offered. According to Comrey (1973), a larger sample increases the generalisability of the conclusions reached in factor analysis. He classified sample sizes for factor analysis in the following way: 100 as poor, 200 as fair, 300 as good, 500 as very good and 1000 as excellent. More recently, he stated that 200 was adequate in most cases of ordinary factor analysis involving no more than 40 items. By 2001, Tabachnick and Fidell were suggesting that “it is comforting to have at least 300 cases for factor analysis” (p. 588), but they also conceded that a smaller sample size (e.g. 150 cases) should be sufficient if solutions have several high-loading marker variables (above .80). Marcoulides (1998) offered the following advice: “sample size issue depends to a great extent on the aims of the analysis and the properties of the data, at the very least one should have a reasonable sample in order to tap into the characteristics of the population” (p. 314). The current study followed the advice of Pallant (2005) (“the larger the better”) and Tabachnick and Fidell (2001), with a sample size of approximately 300.

o Multicollinearity and singularity. Singularity or extreme multicollinearity can be a problem for EFA (Tabachnick & Fidell, 2001). Multicollinearity refers to the relationship among the independent variables (Pallant, 2005); it exists when the independent variables are highly correlated (i.e. above .9). Singularity occurs when one independent variable is actually a combination of other independent variables. The correlation matrix should be inspected before analysis to check for correlations above .9. In addition, the “collinearity diagnostics” recommended by Pallant (2005) were used; Pallant (2005) suggested that these diagnostics “can pick up problems with multicollinearity that may not be evident by the correlation matrix” (p. 150). Two outcome values were relevant here: Tolerance and the Variance Inflation Factor (VIF). Tolerance is “an indicator of how much of the variability of the specified independent variables is not explained by other variables” (Pallant, 2005, p. 150); this value should be greater than 0.1. VIF is the inverse of the Tolerance value and should be less than 10 (Pallant, 2005, p. 150). A code sketch of these diagnostics follows this list of assumptions.

o Normality, outliers and linearity. Normality, outliers and linearity, whilst important, are not as large an issue as the previous two points. Tabachnick and Fidell (2001) noted that normality was not a mandatory requirement for EFA: if the variables are normally distributed the solution is enhanced, but if normality fails the solution is not completely unusable. Similarly, because factor analysis is based on correlation, it is assumed that the relationships between the variables are linear. Tabachnick and Fidell (2001) suggested a ‘spot check’ of some combination of variables; unless there is clear evidence of a curvilinear relationship, it is safe to proceed, providing an adequate sample size and ratio of cases to variables is maintained. The current studies had 330 and 398 responses for the US and the Australian studies respectively, which were considered adequate to perform factor analysis. Outliers may influence the factor solution more than other cases; for this reason, both univariate and multivariate outliers should be detected and dealt with.
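To make the collinearity diagnostics concrete, the following is a minimal sketch (not the SPSS procedure actually used in the study) of how Tolerance and VIF can be computed for each item; the DataFrame of questionnaire responses is a hypothetical input:

```python
import numpy as np
import pandas as pd

def collinearity_diagnostics(items: pd.DataFrame) -> pd.DataFrame:
    """Tolerance and VIF per item: regress each item on all the others
    and flag Tolerance < .10 or VIF > 10 as potential multicollinearity."""
    X = items.to_numpy(dtype=float)
    rows = []
    for j, name in enumerate(items.columns):
        y = X[:, j]
        # Intercept plus every other item as predictors
        others = np.column_stack([np.ones(len(y)), np.delete(X, j, axis=1)])
        coef, *_ = np.linalg.lstsq(others, y, rcond=None)
        resid = y - others @ coef
        r_squared = 1 - resid.var() / y.var()
        tolerance = 1 - r_squared
        rows.append((name, tolerance, 1 / tolerance))
    out = pd.DataFrame(rows, columns=["item", "tolerance", "VIF"])
    out["flagged"] = (out["tolerance"] < 0.10) | (out["VIF"] > 10)
    return out
```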

• Method of factor analysis There are many different methods for undertaking factor analysis. In analysing the factor structure of the data in the current study, principal axis factoring (PAF) was used. PAF is a form of factor analysis that seeks the smallest number of factors that can account for the common variance (correlation) of a set of variables.

• Method for Factor Rotation Rotation is undertaken to assist in the process of interpreting the factors that have been extracted from the data set. The ultimate goal of rotation, according to Hair et al (2006), was to obtain “some theoretically meaningful factors and if possible the simplest factor structure” (p. 120). A simple structure, according to DeVellis (2003), is obtained if each of the original items relates to (i.e. loads on) one and only one factor. There are two main approaches to rotation, resulting in either orthogonal (uncorrelated) or oblique (correlated) factor solutions. Tabachnick and Fidell (2001) proposed that the “best way to decide between orthogonal and oblique rotations was to request oblique rotation with the desired number of factors specified and inspect the size of the correlations among factors” (p. 408). If the correlations exceed .30 then they recommended that the researcher interpret and report using oblique rotation. The current study followed the process advocated by Tabachnick and Fidell (2001) by using the oblique rotation first. In doing this, the following two criteria were employed: (i) correlations should exceed .30, and (ii) a simple structure should be obtained. If these criteria were not met then orthogonal rotation was used. A sketch of this decision rule appears below.
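As a hedged illustration of the oblique-first strategy, the open-source Python package factor_analyzer can fit a PAF solution with oblimin rotation and expose the inter-factor correlation matrix (the "principal" method option and the phi_ attribute are assumptions based on recent releases of the package; the study itself used SPSS):

```python
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer

def fit_paf(items: pd.DataFrame, n_factors: int) -> FactorAnalyzer:
    """Oblique (oblimin) PAF first; fall back to orthogonal (varimax)
    when no inter-factor correlation exceeds .30. Assumes n_factors > 1."""
    fa = FactorAnalyzer(n_factors=n_factors, rotation="oblimin",
                        method="principal")
    fa.fit(items)
    phi = fa.phi_  # factor correlation matrix under the oblique rotation
    off_diagonal = np.abs(phi[~np.eye(len(phi), dtype=bool)])
    if off_diagonal.max() <= 0.30:
        fa = FactorAnalyzer(n_factors=n_factors, rotation="varimax",
                            method="principal")
        fa.fit(items)
    return fa
```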

• Tests for Factor Analysis Appropriateness A review of the literature quickly revealed much debate over the correct manner of determining whether factor analysis can be undertaken on a specific set of data. There are a number of ways of determining whether factor analysis is appropriate. The current study erred on the side of caution by utilising a combination of techniques for determining the appropriateness of factor analysis. These included: sample size, the subjects-to-variables ratio, the Kaiser-Meyer-Olkin measure, Bartlett’s Test of Sphericity and variance explained.

o Subjects to variables The number of variables analysed and the total number of subjects must be considered when deciding on a dataset’s appropriateness for factor analysis (DeVellis, 2003). DeVellis proposed that the larger the number of items to be factored and the larger the number of factors anticipated, the more subjects should be included in the analysis. However, he also observed that whilst “it is tempting to seek a standard ratio of subjects to items” (p. 136), it must also be acknowledged that as “the sample gets progressively larger, the ratio of subjects to items can safely diminish” (p. 136). Over the years many ratios of subjects to items/variables have been proposed. Nunnally (1978) advocated a ratio of 10 to 1, that is, 10 cases for each item to be factor analysed. Tabachnick and Fidell (2001) advocated a 5 to 1 ratio. Hair et al (2006) offered the following advice: “the researcher should always try to obtain the highest cases-per-variable ratio to minimize the chances of ‘over fitting’ the data” (i.e. deriving factors that are sample specific with little generalizability). Arrindell and van der Ende (1985) argued that the ratio of subjects to variables was not as important as the ratio of sample size to factors; they advocated that there should be more than 20 subjects for each factor. The current study acknowledged the sample size restrictions that are inevitable when conducting research within the general community. As such, the study followed the advice of Tabachnick and Fidell (2001), who advocated a ratio of 5 subjects per item, and Arrindell and van der Ende (1985), with a ratio of 20 subjects per factor. These checks are easily automated, as the sketch below shows.
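The three sampling criteria adopted here reduce to simple arithmetic; a minimal helper (names and thresholds taken from the text above) might look like:

```python
def sampling_adequacy(n_cases: int, n_items: int, n_factors: int) -> dict:
    """The study's three screens: a sample of about 300, at least 5 cases
    per item (Tabachnick & Fidell), and at least 20 cases per factor
    (Arrindell & van der Ende)."""
    return {
        "minimum_sample": n_cases >= 300,
        "cases_per_item": n_cases / n_items >= 5,
        "cases_per_factor": n_cases / n_factors >= 20,
    }

# e.g. the US sample: 327 usable cases, 53 items, up to 7 candidate factors
print(sampling_adequacy(327, 53, 7))
```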

o The Kaiser-Meyer-Olkin measure The Kaiser-Meyer-Olkin measure, or KMO, is a measure of sampling adequacy that determines whether a distribution of values is adequate for conducting factor analysis. It is an index comparing the magnitudes of the observed correlation coefficients to the magnitudes of the partial correlation coefficients. Small values of the KMO measure indicate that a factor analysis of the variables may not be advisable, as correlations between pairs of variables cannot be explained by the other variables. Kaiser (Kaiser & Rice, 1974) proposed the following levels for assessing the KMO statistic: >.90 is marvellous, >.80 is meritorious, >.70 is middling, >.60 is mediocre, >.50 is miserable, and <.50 is unacceptable. Overall, Kaiser advocated that a KMO score should be .60 or higher to proceed with factor analysis (Kaiser & Rice, 1974). The criterion outlined by Kaiser (i.e. .60 or higher) guided the process in the current study.

o Bartlett’s Test of Sphericity Bartlett’s Test of Sphericity or BTS (Bartlett, 1954) is a “statistical test for the presence of correlations among the variables” (Hair et al, 2006, p. 120). If the BTS is large and the associated significance level is small, there is adequate correlation amongst the items (.3 or greater is generally accepted) and factor analysis is acceptable. The BTS should be statistically significant at p<.05 (Pallant, 2005). In recent years researchers have sounded a note of caution about relying solely on the BTS to determine whether factor analysis is appropriate. Tabachnick and Fidell (2001) suggested that “because of its sensitivity and its dependence on N, Bartlett’s test will be significant with samples of substantial size even if linkages among variables are slight and is therefore not recommended” (p. 350). Similarly, Hair et al (2006) concluded that “increasing the sample size causes the Bartlett test to become more sensitive to detecting correlations among the variables” (p. 120). In applying the BTS in the current research, a statistical significance of .05 was used; in addition, the correlation matrix had to show at least some correlations of r=.3 or greater. A sketch combining the KMO and BTS screens follows.
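Both screens are available as convenience functions in the factor_analyzer package; the following hedged sketch applies the study's two cut-offs (the DataFrame name is a placeholder):

```python
import pandas as pd
from factor_analyzer.factor_analyzer import (
    calculate_bartlett_sphericity,
    calculate_kmo,
)

def factorable(items: pd.DataFrame) -> bool:
    """True when Bartlett's test is significant at .05 and the overall
    KMO reaches the .60 threshold adopted in the text."""
    chi_square, p_value = calculate_bartlett_sphericity(items)
    _, kmo_overall = calculate_kmo(items)
    print(f"BTS chi-square = {chi_square:.1f} (p = {p_value:.4f}), "
          f"KMO = {kmo_overall:.2f}")
    return p_value < .05 and kmo_overall >= .60
```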

• The Number of Factors to be Extracted A key step in the factor analysis process is identifying the number of factors that works best in explaining the amount of variance (i.e. the collective variance of all items). Pallant (2005) noted that “it is up to the researcher to determine the number of factors that he/she considers best describes the underlying relationship among the variables” (p. 183). She also noted that this process involved balancing two conflicting needs: “the need to find a simple solution with as few factors as possible, and the need to explain as much of the variance in the original data set as possible” (p. 183). A number of different approaches have been developed for resolving the issue of how many factors to extract from a set of data. Hair et al (2006) highlighted that “in practice most factor analysts seldom use a single criterion in determining how many factors to extract” (p. 219). Gorsuch (1983) suggested that “the number of factors and the associated method of determining that number may legitimately vary with the research design” (p. 171). He also advocated erring on the side of extracting too many factors over too few: “if one is in doubt concerning extracting the proper number of factors, the error should probably be slightly on the side of too many factors” (p. 172). Tabachnick and Fidell (2001) recommended that researchers adopt an exploratory approach, experimenting with different numbers of factors until a satisfactory solution is found. Among the approaches that have been developed are Kaiser’s criterion, the scree test and parallel analysis. Guided by the concerns raised by Hair et al (2006) and other commentators, multiple techniques for determining the number of factors were used in the current study, including Kaiser’s criterion, total variance explained, Cattell’s scree test, parallel analysis and conceptual analysis.

o Kaiser’s criterion Kaiser’s criterion, or the eigenvalue rule, is one of the more popular techniques for selecting the appropriate number of factors to extract (Kaiser, 1970). The eigenvalue of a factor represents the amount of total variance explained by that factor. It is always largest for the first factor, which accounts for the greatest proportion of variance, and becomes progressively smaller with each additional factor (Tabachnick & Fidell, 2001). Factors with an eigenvalue of 1.0 or more are retained for further investigation. In recent years Kaiser’s criterion has been criticised for retaining too many factors in some situations; Tabachnick and Fidell (2001) noted that it is reliable only when the number of variables is between 20 and 50. A sketch of the rule is given below.
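The rule itself is a one-liner over the eigenvalues of the item correlation matrix; a minimal sketch (the input is assumed to be a cases-by-items array):

```python
import numpy as np

def kaiser_retained(X: np.ndarray) -> int:
    """Count the factors retained under the eigenvalue-above-one rule."""
    eigenvalues = np.sort(np.linalg.eigvalsh(np.corrcoef(X, rowvar=False)))[::-1]
    return int((eigenvalues > 1.0).sum())
```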

o Total variance explained Gorsuch (1983) noted that factor extraction stops when additional factors add only a very small amount to the total variance extracted. He observed that this was normally when 75, 80 or 85% of the variance had been accounted for. Hair et al (2006) suggested that in the natural sciences factors were extracted until at least 95% of the variance was explained, or until the last factor accounted for only a small portion (i.e. less than 5 percent). However, they noted that in “the social sciences where information is less precise, it is not uncommon for the analyst to consider a solution that accounts for 60 percent of the total variance as a satisfactory solution” (Hair et al, 2006, p. 134). In addition to the total variance explained, the researcher should also consider the items-per-factor statistic, which indicates the total number of items in the analysis over the number of factors extracted. Thurstone (1947, cited in Tabachnick & Fidell, 2001) called for at least three variables for each factor; more recently, researchers have begun to advocate at least twice as many variables as factors in the analysis. In the current study, factor solutions accounting for a minimum of 60% variance were sought, at least twice as many variables as factors were maintained, and factor extraction was stopped when the last factor accounted for less than 5% of the variance.

o Cattell’s Scree test The underlying rationale for Cattell’s scree test is “based on the fact that within a set of variables a limited number of factors are measured more precisely than the others. As a result the predominant factors account for most of the variance and have large eigenvalues” (Comrey and Lee, 1992, p. 125). It “involves plotting each of the eigenvalues of the factors…and inspecting the plot to find a point at which the shape of the curve changes direction and becomes horizontal” (p. 135). Cattell called this point the elbow or break (Tabachnick & Fidell, 2001). A scree plot consists of a vertical axis corresponding to eigenvalues, a horizontal axis corresponding to successive factors, and markers indicating the eigenvalue of each factor. The point at which the curve becomes horizontal is taken as the maximum number of factors that should be extracted. According to Cattell, it is only necessary to retain factors above the elbow or break, as these contribute the most to the explanation of variance in the data set (Tabachnick & Fidell, 2001). Heck (1998) highlighted an important distinction between the scree test and Kaiser’s criterion: “sometimes the eigenvalues will level off at 1.0 which is consistent with the Kaiser-Eigenvalue criterion but there could be an occasion where they level off somewhat below that point” (p. 188). As such, the scree test might retain a factor that would have been eliminated under the Kaiser-Eigenvalue criterion. A number of criticisms have been levelled at Cattell’s scree test. DeVellis (2003) questioned the reliability of interpreting scree plots, suggesting that “much of the process is art” and that it was not a suitable technique for novice researchers, as experience with the technique was invaluable for success. Comparing the methods then available for selecting the correct number of factors, Barrett and Kline (1982) observed that the scree test was one of the best techniques available. Similarly, Zwick and Velicer (1986) examined the effect of different criteria for extracting factors and concluded that the scree test “provides an acceptable starting point for determining how many factors to extract from a data set”. Cattell’s scree test was used to guide factor extraction in the current study.

o Parallel analysis Parallel analysis involves comparing the size of the eigenvalues with those obtained from a randomly generated data set of the same size; only those eigenvalues that exceed the corresponding values from the random data set are retained. According to Lautenschlager (1989), the “basic rationale underlying the parallel analysis criterion is that ‘meaningful’ components extracted from actual sample data should tend to have eigenvalues larger in size than eigenvalues of the same order obtained from random normal variables generated to simulate the same sample size and number of variables” (p. 366). Studies comparing parallel analysis to both Kaiser’s criterion and Cattell’s scree test have shown parallel analysis to be the most accurate method for determining the number of factors (Zwick & Velicer, 1986). Parallel analysis was the fourth factor extraction technique used in the current study. As SPSS does not support parallel analysis, the Monte Carlo PCA for Parallel Analysis program (Watkins, 2000) was used. The logic of the procedure is sketched below.
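Although the study used Watkins' standalone program, the procedure is simple enough to sketch directly (a minimal version assuming a cases-by-items matrix; Horn's mean-eigenvalue criterion is used here rather than a percentile):

```python
import numpy as np

def parallel_analysis(X: np.ndarray, n_sims: int = 100, seed: int = 0) -> int:
    """Number of factors whose real eigenvalues exceed the mean eigenvalues
    of random-normal data with the same number of cases and variables."""
    rng = np.random.default_rng(seed)
    n_cases, n_vars = X.shape
    real = np.sort(np.linalg.eigvalsh(np.corrcoef(X, rowvar=False)))[::-1]
    sims = np.empty((n_sims, n_vars))
    for s in range(n_sims):
        Z = rng.standard_normal((n_cases, n_vars))
        sims[s] = np.sort(np.linalg.eigvalsh(np.corrcoef(Z, rowvar=False)))[::-1]
    criterion = sims.mean(axis=0)
    keep = real > criterion
    # Count the run of leading factors that beat the random criterion
    return int(keep.argmin()) if not keep.all() else n_vars
```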


o Conceptual analysis “Ultimately after considering all of these practical and statistical criteria, one should remember that a good solution ‘makes sense’ in that it appears to fit what is known about the phenomenon being studied” (Heck, 1998). What Heck was suggesting was that “it is important to remember that theory should be a guide” when deciding on the factors to be extracted. In the current study, the theoretical framework guiding the research (social cognitive theory) assisted in deciding upon the factors to be extracted.

• Variable loadings In order to assess which variables are associated with each factor, a criterion for distinguishing a ‘significant’ loading is required. Factor loadings describe either positive or negative correlations between individual items and a factor; the higher the loading, the stronger the relationship (Tabachnick & Fidell, 2001). There is little consensus between researchers on acceptable criteria for determining factor loadings. Tabachnick and Fidell (2001) argued that loadings in excess of .30 were eligible for interpretation whereas lower ones were not, because a factor loading of .30 indicates at least a 9% overlap in variance between the variable and the factor. In contrast, Comrey (1973) suggested that loadings in excess of .71 (50% variance) were considered excellent, .63 (40%) very good, .55 (30%) good, .45 (20%) fair and .32 (10% of variance) poor. Hair et al (2006) suggested that sample size must also be considered when setting the factor loading cutoff: for example, whilst .30 was appropriate for a sample size of 350, .35 was appropriate for a sample size of 250. They also suggested that the number of variables being analysed was important in deciding whether loadings were significant. They concluded that three criteria should guide the decision on factor loading cutoffs: (i) the larger the sample size, the smaller the loading to be considered significant; (ii) the larger the number of variables being analysed, the smaller the loading to be considered significant; and (iii) the larger the number of factors, the larger the size of the loading on later factors to be considered significant for interpretation. In addition to these criteria, Zwick and Velicer (1986) contended that each retained factor must have at least 3 substantial loadings. The issue of cross-loaded items (i.e. items that load on more than one factor) must also be considered. In this research, cross-loaded items were considered on an individual basis, guided by the following criteria: (i) the item should be considered alongside the factor on which it loaded most strongly, and (ii) the item should be considered alongside the factor it was conceptually most like. Tabachnick and Fidell (2001) concluded that ultimately the “choice of the cutoff of size of loading to be interpreted is a matter of researcher preference” (p. 411). The current study used the following guidelines when considering factor loadings: (i) loadings of .50 or above were eligible for interpretation, and (ii) a minimum of 3 significant loadings was required for each factor. These two rules are illustrated in the sketch below.
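A hedged sketch of the two rules, assuming a loadings table indexed by item with one column per factor (for example, built from a fitted model's loadings):

```python
import pandas as pd

def salient_loadings(loadings: pd.DataFrame, cutoff: float = 0.50,
                     min_per_factor: int = 3) -> pd.DataFrame:
    """Blank out loadings below the cutoff and warn about factors that
    fail the minimum-of-three-salient-loadings rule."""
    salient = loadings.where(loadings.abs() >= cutoff)
    counts = salient.notna().sum(axis=0)
    weak = list(counts[counts < min_per_factor].index)
    if weak:
        print(f"Factors with fewer than {min_per_factor} salient loadings:", weak)
    return salient
```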

• Naming of the factors Tabachnick and Fidell (2001) noted that “at some point a researcher generally tries to characterise a factor by assigning it a name or a label, a process that involves art as well as science” (p. 625). Hair et al (2003) suggested that factor naming should take into consideration all significant factor loadings, but that “variables with higher loadings influence to a greater extent the name or label selected to represent a factor” (p. 234). In the current study the naming of factors took into consideration all factor loadings, as suggested by Hair et al (2003), but placed greater weight on the high loadings of each factor.

In summary, Table 7.1 outlines the EFA approach applied within the current study:


Method of factor analysis: Principal axis factoring

Method of rotation: Oblique, if (i) a simple structure is obtained and (ii) correlations exceed .30; otherwise orthogonal rotation will be used

Missing data: Pairwise treatment to allow for best use of available data

Assumptions:
• Linearity detected by ‘spot check’ of variables
• Multicollinearity and singularity detected by Tolerance <.10, VIF >10 and correlations of .9 or above

Testing for factor analysis appropriateness:
• Minimum sample size of 300
• Subject to variable ratio of 5 to 1
• Subject to factor ratio of 20 to 1
• Kaiser-Meyer-Olkin of .60 or higher
• Bartlett’s test of sphericity significant at .05 and a correlation matrix with some correlations of .3 or greater

Number of factors to be extracted:
• Kaiser’s criterion (eigenvalues above 1.0 accepted)
• 60% minimum total variance explained
• Cattell’s scree test
• Parallel analysis
• Conceptual analysis

Variable loadings:
• .50 will be considered for interpretation
• a minimum of 3 significant factor loadings for each factor
• cross-loaded items will be considered on an individual basis, (i) alongside the factor they load on most strongly, and (ii) alongside the factor they are most conceptually alike

Factor structure consistency: Cronbach’s alpha with a minimum of .70

Naming of factors: Whilst all factor loadings are considered, greater weight is placed on the high loadings of each factor

Table 7.1: Summary of factor analysis approach applied in the current study

7.4 Construct validity of the social-cognitive measures Eight scales are examined in this chapter for their factor structure. The Internet Self-efficacy (ISE) scale was designed and developed for use in the current research; full details on how this scale was developed and operationalised are available in Chapter 5. Six outcome expectancy scales developed by LaRose and Eastin (2003) are examined. These scales were developed for use with college students, and this was the first time the scales were used with members of the general public. Details on each of the scales are available in Chapter 5. In addition, a two item measure of internet use is examined. This measure was based upon the work of LaRose, Mastro and Eastin (2001); scale details are available in Chapter 5. A discussion of the factor analytic process for each scale follows. It should be noted that two samples were used in the current research: the US sample and the Australian sample. As noted by DeVellis (2003), the best way to confirm a factor solution and therefore establish generalisability is to replicate the solution on multiple samples. In the current research both samples were used to establish the generalisability of the factor structures for the scales. The social-cognitive constructs – self-efficacy and outcome expectancy – were entered into the factor analytic process together. This allowed for a more detailed examination of the items and how they related to the constructs: items thought to be measuring internet self-efficacy may in fact be measuring outcome expectancy, and this can be checked by including all items in the analysis together.

7.4.1 The US sample The assumptions for factor analysis were explored before proceeding. Using both the correlation matrix and the Tolerance and VIF values, the following items were removed to reduce multicollinearity: ISE2, ISE19, ISE20, ISE21, ISE30, ISE31, ISE34, ISE35, ISE36 and ISE39. Thus, a total of 53 items was included in the factor analysis. The suitability of the data for factor analysis was then assessed. Inspection of the correlation matrix revealed the presence of many coefficients of .3 and above. The Kaiser-Meyer-Olkin value was .94, exceeding the recommended value of .6 (Kaiser, 1970), and Bartlett’s Test of Sphericity reached statistical significance, supporting the factorability of the correlation matrix. In addition, the ratio of subjects (327) to items (53) met both the sample size criterion (at least 300) and the subject-to-items ratio (5:1).

No specification was given as to the number of factors to be extracted. Eight factors with eigenvalues above 1.00 were obtained in the first analysis, explaining 38.36%, 15.26%, 5.15%, 3.44%, 3.02%, 2.85%, 2.14% and 2.08% of the variance respectively. It was interesting to note that the amount of variance explained individually by factors 4, 5, 6, 7 and 8 was below 5%, and that factor 3 was only just above this figure. This suggested that these six factors were not contributing significantly to the overall factor structure and that a two factor solution might be the most appropriate model. An inspection of the scree plot indicated that anywhere from a two to a seven factor solution might be appropriate (see Figure 7.1).


[Scree plot omitted: eigenvalues (vertical axis) plotted against factor number (horizontal axis) for the 53 items in the US sample]

Figure 7.1: Scree plot (US sample)

A four factor solution was supported by the results of the parallel analysis, which showed only four factors with eigenvalues exceeding the corresponding criterion values for a randomly generated data matrix of the same size (53 variables x 327 respondents). The results of this are shown in Table 7.2.

Factor number | Actual eigenvalue from FA | Criterion value from parallel analysis | Decision
1 | 20.328 | 1.8749 | Accept
2 | 8.087 | 1.7973 | Accept
3 | 2.730 | 1.7318 | Accept
4 | 1.823 | 1.6782 | Accept
5 | 1.603 | 1.6240 | Reject
6 | 1.511 | 1.5771 | Reject

Table 7.2: Parallel analysis (US sample)

Table 7.3 presents the results of this initial factor analysis. An inspection of the loadings clearly suggested that a two factor solution might be the most appropriate, with one factor relating to internet self-efficacy and the second factor relating to internet outcome expectancy. Given the uncertainty as to the exact number of factors (i.e. anywhere from 2 to 7), a decision was made to first consider the factor structure based upon the literature and theories upon which the constructs and scales were based. According to LaRose and Eastin (2003), the 23 outcome expectancy items should load onto 6 factors. Thus, it could be argued that there should be at least 7 factors: the six outcome expectancy factors and an internet self-efficacy factor (in reality this could be more than one factor, but for the purposes of moving the investigation forward a one factor solution was used as a starting point).

Items Factors 1 2 3 4 5 6 7 Open a browser such as Internet Explorer or Netscape to access .615 the Web (ISE1) Use a search tool (i.e. Google or Yahoo) to find what you want (i.e. graphics, computer files, documents, information, web pages, .656 sound files etc) (ISE3) Use hypertext in a web page to find out about a subject that .572 interests you (ISE4) Determine if a web site or the information you have found on the .647 web site is reliable and valid (ISE5) Determine if a web site is a secure site (ISE6) .640

Print a web page (ISE7) .644

Save a web site as a Bookmark or a Favorite (ISE8) .734 Delete a web site you have saved as a Bookmark or Favorite .755 (ISE9) Organise your Bookmarks or Favorites into folders (ISE10) .766

Download (save) a file from a web site (ISE11) .795 Check for a virus in a file that you are downloading (saving) from a .747 web site (ISE12) View a multimedia (audio or visual) file (ISE13) .758

Download (save) and install new software to your computer (ISE14) .784 Understand internet words/terms such as URL or FTP or browser .701 (ISE15) Solve most problems or mistakes you experience when using the .800 Internet (ISE16) Activate or deactivate cookies linked to a web page (ISE17) .701 Log on and off to an email system (i.e. Hotmail, Yahoo Mail) .685 (ISE18) Delete an email address from your address book (ISE22) .692

Attach a file to an email message (ISE23) .740

View a file attached to an email message (ISE24) .742

Save a file attached to an email. (ISE25) .762

Scan an email message attachment for a virus (ISE26) .702

Add an email address to your address book (ISE27) .706

Delete an email address from your address book (ISE28) .739

Locate a discussion group or e-list (ISE29) .757

Reply to a message on a discussion group or e-list (ISE32) .752

Unsubscribe from a discussion group or e-list (ISE33) .759

Leave a chat room (ISE37) .724

Table 7.3: Initial factor analysis (US sample)


Items Factors 1 2 3 4 5 6 7 Create a web page (ISE38) .607 Share files with others via the Web (ISE40) .699 Obtain information that I can’t find elsewhere (Novel1)

Get immediate knowledge of big events (Novel2)

Find a wealth of information (Novel3)

Solve a problem (Novel4) .526

Hear music I like (Activity1)

Feel entertained (Activity2) .541

Have fun (Activity3) .513 .555

Play a game I like (Activity4)

Feel like I belong to a group (Social1) .566

Find something to talk about (Social2) .602

Get support from others (Social3) .629

Maintain a relationship I value (social4)

Forget my problems (Self1) .639

Find a way to pass the time (Self2) .514

Relieve boredom (Self3) .538

Improve my future prospects in life (Status1) .528

Find people like me (Status2) .660 Find others who respect my views (Status3) .680 Get up to date with new technology (Status4) .549 Save time shopping (Money1) Find bargains on products and services (Money2) Get free information that would otherwise cost me money (Money3) Get products for free (Money4) .511

Table 7.3: Initial factor analysis (US sample) cont.

A seven factor solution was forced. The factors were rotated by employing oblique (Oblimin) rotation. The seven factor solution explained 65.79% of the total variance. The outcome of this analysis is presented in Table 7.4.


Factors 1 2 3 4 5 6 7 Open a browser such as Internet Explorer or Netscape .759 to access the Web (ISE1) Use a search tool (i.e. Google or Yahoo) to find what you want (i.e. graphics, computer files, documents, -.695 information, web pages, sound files etc). (ISE3) Use hypertext in a web page to find out about a subject that interests you (ISE4) Determine if a web site or the information you have .508 found on the web site is reliable and valid (ISE5)

Determine if a web site is a secure site (ISE6) .650

Print a web page (ISE7) .509

Save a web site as a Bookmark or a Favorite (ISE8)

Delete a web site you have saved as a Bookmark or Favorite (ISE9) Organise your Bookmarks or Favorites into folders .580 (ISE10) Download (save) a file from a web site (ISE11) .555 Check for a virus in a file that you are downloading .826 (saving) from a web site (ISE12) View a multimedia (audio or visual) file (ISE13) .561 Download (save) and install new software to your .720 computer (ISE14) Understand internet words/terms such as URL or FTP .817 or browser (ISE15) Solve most problems or mistakes you experience when .751 using the Internet (ISE16) Activate or deactivate cookies linked to a web page .742 (ISE17) Log on and off to an email system (i.e. Hotmail, Yahoo .828 Mail) (ISE18) Delete an email address from your address book .940 (ISE22)

Attach a file to an email message (ISE23) .768

View a file attached to an email message (ISE24) .918

Save a file attached to an email. (ISE25) .665

Scan an email message attachment for a virus (ISE26) .751

Add an email address to your address book (ISE27) .752

Delete an email address from your address book .838 (ISE28) Locate a discussion group or e-list (ISE29) .596 Reply to a message on a discussion group or e-list .587 (ISE32) Unsubscribe from a discussion group or e-list (ISE33) .680

Leave a chat room (ISE37) .547

Create a web page (ISE38) .739

Share files with others via the Web (ISE40) .821

Obtain information that I can’t find elsewhere (Novel1)

Table 7.4: Seven factor solution (US sample)


Factors 1 2 3 4 5 6 7

Get immediate knowledge of big events (Novel2) .754

Find a wealth of information (Novel3) .881

Solve a problem (Novel4)

Hear music I like (Activity1) -.575

Feel entertained (Activity2) -.733

Have fun (Activity3) -.718 Play a game I like (Activity4) -.638 Feel like I belong to a group (Social1) .619 Find something to talk about (Social2) .630

Get support from others (Social3) .724

Maintain a relationship I value (social4) .727

Forget my problems (Self1) .653

Find a way to pass the time (Self2)

Relieve boredom (Self3)

Improve my future prospects in life (Status1) .556

Find people like me (Status2) .838

Find others who respect my views (Status3) .805

Get up to date with new technology (Status4)

Save time shopping (Money1) .752 Find bargains on products and services (Money2) .795 Get free information that would otherwise cost me .501 money (Money3) Get products for free (Money4)

Table 7.4: Seven factor solution (US sample) cont.

Items loading onto Factors 1 and 2 accounted for 37.75% and 14.63% of the total variability, respectively. Factors 3 to 7 each accounted for less than 5% of the total variance. This suggested that these factors were not contributing significantly to the overall explained variance. In addition, an examination of the factors did not reveal a clear interpretation. An inspection of the loadings suggested that a two factor solution might be appropriate for the internet self-efficacy items. Similarly, a three factor solution appeared appropriate for the outcome expectancy items. However, a closer examination of the loadings revealed no obvious conceptual commonality linking the self-efficacy items or the outcome expectancy items that loaded on each factor (even when only the highest loadings were considered, as recommended by Hair et al, 2003). Thus, a two factor solution appeared to be the most appropriate solution for all 48 items: factor one referring to internet self-efficacy and factor two referring to outcome expectancy. However, before moving to analysis based on a two factor structure, the recommendation by Tabachnick and Fidell (2001) was followed: “researchers should experiment with different numbers of factors until a satisfactory solution is found”. Thus, factor analysis was run again with 3, 4, 5, 6 and 8 factors being forced. In all instances the factor structure obtained made little sense theoretically. Finally, the analysis was run with two factors being forced. The factors were rotated by employing oblique (Oblimin) rotation; this rotation technique was appropriate because the correlation between the two factors exceeded .30. The two factor solution explained 51.90% of the total variance. Items loading on to factors one and two accounted for 37.52% and 14.38% of the total variability, respectively. It was noted that five items (ISE38, Novel1, Novel2, Novel3 and Money1) did not load on either factor. The factor analysis was rerun with these 5 items removed, and a slight increase in explained variance was noted (54.20%). The resulting two factor solution is presented in Table 7.5. Factor one, consisting of 29 items, was identified as internet self-efficacy; factor two, consisting of 19 items, was identified as internet outcome expectancy. Whilst the two factor solution explained less variance than the 7 factor model (approximately 10% less), it was preferred as it provided a simpler model that was more in keeping with the theoretical framework on which it was based (i.e. SCT). Internal consistency was good, with Cronbach alphas of .98 and .95 for factor 1 and factor 2, respectively.
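Cronbach's alpha, used throughout this chapter to gauge internal consistency, is straightforward to compute directly; a minimal sketch (the items matrix is assumed to hold complete cases, rows by items):

```python
import numpy as np

def cronbach_alpha(X: np.ndarray) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance
    of the total score)."""
    X = np.asarray(X, dtype=float)
    k = X.shape[1]
    item_variances = X.var(axis=0, ddof=1).sum()
    total_variance = X.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)
```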

Items Factors 1 2 Open a browser such as Internet Explorer or Netscape to access the Web (ISE1) .785 Use a search tool (i.e. Google or Yahoo) to find what you want (i.e. graphics, computer files, .743 documents, information, web pages, sound files etc) (ISE3) Use hypertext in a web page to find out about a subject that interests you (ISE4) .611

Determine if a web site or the information you have found on the web site is reliable & valid (ISE5) .647

Determine if a web site is a secure site (ISE6) .611

Print a web page (ISE7) .787

Save a web site as a Bookmark or a Favorite (ISE8) .832

Delete a web site you have saved as a Bookmark or Favorite (ISE9) .811

Organise your Bookmarks or Favorites into folders (ISE10) .803

Download (save) a file from a web site (ISE11) .835

Check for a virus in a file that you are downloading (saving) from a web site (ISE12) .709

Table 7.5: A two factor solution (US sample)


Items Factors 1 2 View a multimedia (audio or visual) file (ISE13) .756 Download (save) and install new software to your computer (ISE14) .751 Understand internet words/terms such as URL or FTP or browser (ISE15) .746

Solve most problems or mistakes you experience when using the Internet (ISE16) .765

Activate or deactivate cookies linked to a web page (ISE17) .624

Log on and off to an email system (i.e. Hotmail, Yahoo Mail) (ISE18) .803

Delete an email address from your address book (ISE22) .828

Attach a file to an email message (ISE23) .840

View a file attached to an email message (ISE24) .877 Save a file attached to an email. (ISE25) .846 Scan an email message attachment for a virus (ISE26) .733 Add an email address to your address book (ISE27) .755 Delete an email address from your address book (ISE28) .795

Locate a discussion group or e-list (ISE29) .666

Reply to a message on a discussion group or e-list (ISE32) .714

Unsubscribe from a discussion group or e-list (ISE33) .725

Leave a chat room (ISE37) .563

Share files with others via the Web (ISE40) .590

Solve a problem (Novel4) .523 Hear music I like (Activity1) .610 Feel entertained (Activity2) .702 .730 Have fun (Activity3) .640 Play a game I like (Activity4) Feel like I belong to a group (Social1) .752

Find something to talk about (Social2) .793

Get support from others (Social3) .815

Maintain a relationship I value (social4) .615

Forget my problems (Self1) .777

Find a way to pass the time (Self2) .683 Relieve boredom (Self3) .704 Improve my future prospects in life (Status1) .673 .817 Find people like me (Status2) Find others who respect my views (Status3) .837

Get up to date with new technology (Status4) .607

Find bargains on products and services (Money2) .520

Get free information that would otherwise cost me money (Money3) .624

Get products for free (Money4) .633

Table 7.5: A two factor solution (US sample) cont.


7.4.2 The Australian sample The assumptions for factor analysis were explored before proceeding. Using both the correlation matrix and the Tolerance and VIF values, the following items were removed to reduce multicollinearity: ISE2, ISE9, ISE19, ISE20, ISE23, ISE25, ISE27, ISE30, ISE31, ISE32, ISE33, ISE34, ISE35, ISE36 and ISE39. Thus, a total of 48 items was included in the factor analysis. The suitability of the data for factor analysis was then assessed. Inspection of the correlation matrix revealed the presence of many coefficients of .3 and above. The Kaiser-Meyer-Olkin value was .96, exceeding the recommended value of .6 (Kaiser, 1970), and Bartlett’s Test of Sphericity reached statistical significance, supporting the factorability of the correlation matrix. In addition, the ratio of subjects (389) to items (48) met both the sample size criterion (at least 300) and the subject-to-items ratio (5:1).

No specification was given as to the number of factors to be extracted. Six factors with eigenvalues above 1.00 were obtained in the first analysis, explaining 42.06%, 16.84%, 4.77%, 3.56%, 3.21% and 2.68% of the variance respectively. As with the US data, it was interesting to note that the majority of the variance was explained by factors 1 and 2 (58.90%) and that factors 3, 4, 5 and 6 each individually accounted for less than 5% of the variance. Once again this suggested that these four factors were not contributing significantly to the overall structure and that a two factor solution might be the most appropriate. An inspection of the scree plot indicated that anywhere from a two to a six factor solution might be appropriate (see Figure 7.2).

[Scree plot omitted: eigenvalues (vertical axis) plotted against factor number (horizontal axis) for the 48 items in the Australian sample]

Figure 7.2: Scree plot (Australian sample)


A five factor solution was supported by the results of the parallel analysis, which showed only five factors with eigenvalues exceeding the corresponding criterion values for a randomly generated data matrix of the same size (49 variables x 374 respondents). The results of this are shown in Table 7.6.

Factor number | Actual eigenvalue from FA | Criterion value from parallel analysis | Decision
1 | 20.188 | 1.7826 | Accept
2 | 8.081 | 1.6971 | Accept
3 | 2.287 | 1.6384 | Accept
4 | 1.710 | 1.5869 | Accept
5 | 1.542 | 1.5432 | Accept
6 | 1.289 | 1.4997 | Reject

Table 7.6: Parallel analysis (Australian sample)

Table 7.7 presents the results of the initial factor analysis. An inspection of the loadings suggested that a two factor solution may be the most appropriate, with one factor referring to internet self-efficacy and the second factor relating to internet outcome expectancy. The same approach used in the US sample was applied to investigate the factor structure in the Australian sample. Consequently, a second analysis was run with seven factors being forced.

1 2 3 4 5 6 Open a browser such as Internet Explorer or Netscape to access the Web .739 (ISE1) Use a search tool (i.e. Google or Yahoo) to find what you want (i.e. graphics, computer files, documents, information, web pages, sound files .800 etc). (ISE3) Use hypertext in a web page to find out about a subject that interests you .740 (ISE4) Determine if a web site or the information you have found on the web site is .825 reliable and valid (ISE5) Determine if a web site is a secure site (ISE6) .803

Print a web page (ISE7) .743

Save a web site as a Bookmark or a Favorite (ISE8) .795

Organise your Bookmarks or Favorites into folders (ISE10)

Download (save) a file from a web site (ISE11) .825 Check for a virus in a file that you are downloading (saving) from a web site .832 (ISE12)

Download (save) and install new software to your computer (ISE14) .824

Understand internet words/terms such as URL or FTP or browser (ISE15) .814 Solve most problems or mistakes you experience when using the Internet .847 (ISE16)

Table 7.7: Initial factor analysis (Australian sample)


1 2 3 4 5 6 Activate or deactivate cookies linked to a web page (ISE17) .799

Log on and off to an email system (i.e. Hotmail, Yahoo Mail) (ISE18) .708

Forward an email message (ISE21)

Delete an email address from your address book (ISE22) .710

View a file attached to an email message (ISE24) .707

Scan an email message attachment for a virus (ISE26) .804

Delete an email address from your address book (ISE28) .751

Locate a discussion group or e-list (ISE29) .799

Leave a chat room (ISE37) .798

Create a web page (ISE38) .698

Share files with others via the Web (ISE40) .748

Obtain information that I can’t find elsewhere (Novel1)

Get immediate knowledge of big events (Novel2) .517

Find a wealth of information (Novel3) .506

Solve a problem (Novel4) .621

Hear music I like (Activity1) .598

Feel entertained (Activity2) .630

Have fun (Activity3) .637

Play a game I like (Activity4) .516 .506

Feel like I belong to a group (Social1) .694

Find something to talk about (Social2) .526 .608

Get support from others (Social3) .624

Maintain a relationship I value (social4) .536

Forget my problems (Self1) .659

Find a way to pass the time (Self2) .567

Relieve boredom (Self3) .561

Improve my future prospects in life (Status1)

Find people like me (Status2) .685

Find others who respect my views (Status3) .686

Get up to date with new technology (Status4) .557

Save time shopping (Money1) .528

Find bargains on products and services (Money2) .540

Get free information that would otherwise cost me money (Money3)

Get products for free (Money4) .562

Table 7.7: Initial factor analysis (Australian sample) cont.


The factors were rotated by employing oblique (Oblimin) rotation; this was an appropriate technique because the correlations among the factors exceeded .30. The seven factor solution explained 71% of the total variance. The outcome of this analysis is presented in Table 7.8. Items loading onto Factors 1 and 2 accounted for 41.53% and 16.28% of the total variability, respectively. Factors 3 to 7 each explained less than 5% of the total variance, suggesting that these factors were not contributing significantly to the overall explained variance. Thus, a two factor solution may be the most appropriate solution for all 48 items: factor 1 referring to internet self-efficacy and factor 2 referring to internet outcome expectancy.

Factors 1 2 3 4 5 6 7 Open a browser such as Internet Explorer or Netscape -.767 to access the Web (ISE1) Use a search tool (i.e. Google or Yahoo) to find what you want (i.e. graphics, computer files, documents, -.695 information, web pages, sound files etc). (ISE3) Use hypertext in a web page to find out about a subject

that interests you (ISE4) Determine if a web site or the information you have .642 found on the web site is reliable and valid (ISE5) Determine if a web site is a secure site (ISE6) .759

Print a web page (ISE7) -.773

Save a web site as a Bookmark or a Favorite (ISE8) -.598 Organise your Bookmarks or Favorites into folders

(ISE10) Download (save) a file from a web site (ISE11) -.516 Check for a virus in a file that you are downloading .816 (saving) from a web site (ISE12) View a multimedia (audio or visual) file (ISE13) .556 Download (save) and install new software to your .639 computer (ISE14) Understand internet words/terms such as URL or FTP or .760 browser (ISE15) Solve most problems or mistakes you experience when .779 using the Internet (ISE16) Activate or deactivate cookies linked to a web page .859 (ISE17) Log on and off to an email system (i.e. Hotmail, Yahoo -.753 Mail) (ISE18) Forward an email message (ISE21) Delete an email address from your address book -.911 (ISE22) View a file attached to an email message (ISE24) -.917

Scan an email message attachment for a virus (ISE26) .632 Delete an email address from your address book -.725 (ISE28) Locate a discussion group or e-list (ISE29) .712

Leave a chat room (ISE37) .669

Table 7.8: Seven factor solution (Australian sample)


Factors 1 2 3 4 5 6 7 Create a web page (ISE38) .898 Share files with others via the Web (ISE40) .920

Obtain information that I can’t find elsewhere (Novel1) .722

Get immediate knowledge of big events (Novel2)

Find a wealth of information (Novel3) .793

Solve a problem (Novel4) .597

Hear music I like (Activity1) -.612

Feel entertained (Activity2) -.847

Have fun (Activity3) -.747

Play a game I like (Activity4) -.534

Feel like I belong to a group (Social1) .866 Find something to talk about (Social2) .704 Get support from others (Social3) .810 Maintain a relationship I value (social4) .692

Forget my problems (Self1)

Find a way to pass the time (Self2) -.831

Relieve boredom (Self3) -.858

Improve my future prospects in life (Status1) Find people like me (Status2) .834 Find others who respect my views (Status3) .790 Get up to date with new technology (Status4) Save time shopping (Money1) -.829 Find bargains on products and services (Money2) -.889 Get free information that would otherwise cost me -.637 money (Money3) Get products for free (Money4) -.559

Table 7.8: Seven factor solution (Australian sample) cont.

As with the US sample, the strategy of Tabachnick and Fidell (2001) was followed and factor analysis was run with 3, 4, 5, 6 and 8 factors being forced. In all instances the factor structure obtained made little sense theoretically. Finally, the analysis was run with two factors being forced. The factors were rotated by employing oblique (Oblimin) rotation; this was an appropriate technique because the correlation between the two factors exceeded .30. The two factor solution explained 57.27% of the total variance. Items loading on to factors one and two accounted for 41.30% and 16.00% of the total variability, respectively. It was noted that five items (ISE21, Novel1, Novel2, Novel3 and Novel4) did not load on either factor. When factor analysis was rerun with these 5 items removed, a slight increase in variance was noted (61.15%). Factor 1, consisting of 24 items, was identified as internet self-efficacy and Factor 2, consisting of 19 items, was identified as outcome expectancy. The resulting two factor solution is presented in Table 7.9. Whilst the two factor solution explained less variance than the models with more factors (i.e. approximately 10% less than the 7 factor solution), it was preferred as it provided a simpler model that was more in keeping with the theoretical framework on which it was based (i.e. SCT). Internal consistency was good, with Cronbach alphas of .97 and .88 for factor 1 and factor 2, respectively.

Item Factor 1 2 Open a browser such as Internet Explorer or Netscape to access the Web (ISE1) .862 Use a search tool (i.e. Google or Yahoo) to find what you want (i.e. graphics, computer files, .869 documents, information, web pages, sound files etc). (ISE3) Use hypertext in a web page to find out about a subject that interests you (ISE4) .820

Determine if a web site or the information you have found on the web site is reliable & valid (ISE5) .828

Determine if a web site is a secure site (ISE6) .834

Print a web page (ISE7) .870

Save a web site as a Bookmark or a Favorite (ISE8) .867

Organise your Bookmarks or Favorites into folders (ISE10) .514

Download (save) a file from a web site (ISE11) .920

Check for a virus in a file that you are downloading (saving) from a web site (ISE12) .831

View a multimedia (audio or visual) file (ISE13) .888

Download (save) and install new software to your computer (ISE14) .859

Understand internet words/terms such as URL or FTP or browser (ISE15) .840

Solve most problems or mistakes you experience when using the Internet (ISE16) .886

Activate or deactivate cookies linked to a web page (ISE17) .745

Log on and off to an email system (i.e. Hotmail, Yahoo Mail) (ISE18) .824

Delete an email address from your address book (ISE22) .830

View a file attached to an email message (ISE24) .846

Scan an email message attachment for a virus (ISE26) .833

Delete an email address from your address book (ISE28) .823

Locate a discussion group or e-list (ISE29) .725

Table 7.9: Two factor solution (Australian sample)


Item Factor 1 2 Leave a chat room (ISE37) .701 Create a web page (ISE38) .646

Share files with others via the Web (ISE40) .678

Hear music I like (Activity1) .617

Feel entertained (Activity2) .643

Have fun (Activity3) .644

Play a game I like (Activity4) .693

Feel like I belong to a group (Social1) .858

Find something to talk about (Social2) .797

Get support from others (Social3) .791

Maintain a relationship I value (social4) .669

Forget my problems (Self1) .807

Find a way to pass the time (Self2) .727

Relieve boredom (Self3) .715

Improve my future prospects in life (Status1) .640 Find people like me (Status2) .840 Find others who respect my views (Status3) .842 Get up to date with new technology (Status4) .533

Save time shopping (Money1) .630

Find bargains on products and services (Money2) .661

Get free information that would otherwise cost me money (Money3) .580

Get products for free (Money4) .696

Table 7.9: Two factor solution (Australian sample) cont.

7.4.3 Inter-scale correlations Further support for the construct validity of the internet self-efficacy scale and the outcome expectancy scale was found by examining the inter-scale correlations. The relationship between internet self-efficacy and outcome expectancy was investigated using the Pearson product-moment correlation coefficient for both the US and Australian samples. Preliminary analyses were performed to ensure no violation of the assumptions of normality, linearity and homoscedasticity. Both samples revealed a low to medium positive correlation between the two variables (Australian sample: r=.321, n=375, p<.01; US sample: r=.329, n=296, p<.01), with high levels of self-efficacy associated with high levels of outcome expectancy.


Bandura (1997) noted that self-efficacy and outcome expectancy were two related concepts and as such, a significant relationship would be expected between the two scales.
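A hedged sketch of this inter-scale check (the data file, the scale-score construction and the column prefixes are all assumptions; any per-person scoring of the retained items would do):

```python
import pandas as pd
from scipy.stats import pearsonr

df = pd.read_csv("survey.csv")  # hypothetical file of item responses

# Score each scale as the mean of its retained items; here we assume the
# ISE items share an "ISE" prefix and all remaining columns are outcome items
ise_score = df.filter(like="ISE").mean(axis=1)
oe_score = df.drop(columns=df.filter(like="ISE").columns).mean(axis=1)

r, p = pearsonr(ise_score, oe_score)
print(f"r = {r:.3f}, p = {p:.4f}")  # the text reports r of about .32 in both samples
```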

7.5 Construct validity of the internet use measure A two item measure of internet use was used in the current research. This was based upon a four item measure that LaRose, Mastro and Eastin (2001) developed using US college students; this was the first time it had been applied to members of the general population. The measure was only just suitable for factor analysis, with both samples having a Kaiser-Meyer-Olkin value of only .5¹⁹. Bartlett’s Test of Sphericity reached statistical significance, supporting the factorability of the correlation matrix. In addition, the 10 to 1 ratio of cases to variables was comfortably met, with 197.5 cases to each variable. The one factor solution in the US sample explained 39% of the total variance with loadings above .6. The one factor solution in the Australian sample explained 60% of the total variance with loadings above .7. Internal consistency was good in the Australian sample, with a Cronbach alpha of .75, but only passable for the US sample, with a Cronbach alpha of .59²⁰. The four item measure in the LaRose, Mastro and Eastin (2001) study had good internal reliability, with a Cronbach alpha of .82. The difference between the US and the Australian samples may be a result of the different approaches used for data collection. The US sample used free text questions to elicit the required information (e.g. “On a typical weekday how many hours would you use the Internet?”); the answers provided by participants were then coded by the researcher after data collection. This was the approach used by LaRose and colleagues (2003). It was noted, however, that a considerable number of surveys had to be excluded because of response error (i.e. missing data or incorrect data, such as an answer of 50 hours per day). Consequently, the approach was changed in the Australian study by providing participants with pre-coded options to select from. This revised approach may have resulted in a more effective and reliable data collection process. It was noted that if no difference emerged between the two samples in the final analysis, then the difference noted above may not be of significance.

19 Hair et al (2006) note that .5 is the absolute minimum cut off for factor analysis. Whilst .6 is preferred, analysis can proceed at .5. 20 Nunnally (1967) notes that an internal reliability of .5 or .6 can be an accepted minimum, especially in new research.


7.6 Common Method Variance Testing The study variables were all collected using the same method: a self reported scale. As such, a test for common method variance (CMV) was undertaken. CMV can cause researchers to find a significant effect in self reported data when the effect observed is actually due to the method employed (Woszczynski & Whitman, 2003). Harman’s one factor statistical test was performed to provide a level of assurance that CMV was not in effect in the current study (Podsakoff & Organ, 1986). This test involves subjecting items that are assumed to measure a number of different constructs to a single factor analysis; the dominance of one factor would suggest that the items are related because of common method. All items for each of the two samples were entered into a factor analysis (53 items for the US study and 49 items for the Australian study). In the US sample, eight factors were extracted with eigenvalues greater than one, which together accounted for 72% of the variance; the first factor accounted for 38.33% of the variance. In the Australian sample, six factors were extracted with eigenvalues greater than one, together accounting for 73% of the variance; the first factor accounted for 42.06% of the variance. Since a single factor did not emerge, and no general factor accounted for the majority of the variance in either sample, a substantial amount of CMV was not evident.
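Harman's screen reduces to inspecting the unrotated first factor; a minimal sketch (using the correlation-matrix eigenvalues as a stand-in for the unrotated extraction, with a cases-by-items array assumed):

```python
import numpy as np

def harman_screen(X: np.ndarray) -> tuple[int, float]:
    """Factors with eigenvalue > 1 and the first factor's share of total
    variance; a dominant first factor would point to common method variance."""
    eigenvalues = np.sort(np.linalg.eigvalsh(np.corrcoef(X, rowvar=False)))[::-1]
    n_factors = int((eigenvalues > 1.0).sum())
    first_share = float(eigenvalues[0] / eigenvalues.sum())
    return n_factors, first_share
```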

7.7 Conclusion This chapter has examined the psychometric soundness of the socio-cognitive and internet use measures used in the current research. The findings provided evidence to suggest that the measures possessed adequate levels of internal consistency and construct validity. Exploratory factor analysis was used to explore the validity of the scales, and Cronbach’s alpha was used to establish the reliability or internal consistency of the measures. The following key observations were noted:

• In an effort to further understand the psychological aspects of the digital divide, the present study built on past research to develop a new measure of internet self-efficacy. The factor structure was explored in both the US and the Australian samples. Factor analysis revealed a one factor solution in both samples, with 29 and 24 items in the US and the Australian factor structures, respectively. Both scales had good internal reliability, with Cronbach’s alphas of .98 and .97 in the US and Australian samples, respectively. In sum, there was good evidence of the reliability and validity of the internet self-efficacy scale. In addition, there was good indication of the generalisability of the factor solution, with a very similar solution being identified in the two samples.

• Six outcome expectancy scales developed by LaRose, Mastro and Eastin (2001) were used in this study. This was the first time that these scales had been used with members of the general population (the scales were developed using US college students). Interestingly, the six dimensions or factors did not emerge from the two samples used in the current research. Instead, a one factor solution offered the simplest structure in both samples, with 21 items in the US sample and 19 items in the Australian sample. It is interesting to note that in both samples three items (Novel1, Novel2 and Novel3) did not load on to the factor. Cronbach’s alpha was sound, at .95 and .88 in the US and Australian samples, respectively. In sum, there is sound evidence of the reliability and validity of the outcome expectancy measure.

• Further evidence of the construct validity of the internet self-efficacy scales and the six outcome expectancy scales can be noted from the inter-scale correlations. Bandura (1997) proposed that self-efficacy and outcome expectancy were two linked constructs within SCT. In particular, Bandura (1997) stated that self-efficacy was the central factor in determining behaviour and that outcome expectancies are dependent on self-efficacy beliefs. Thus, it was not unexpected to find significant low to moderate correlations between the internet self-efficacy and outcome expectancy scales.

• A two item measure of internet use was employed in the current research. This measure was based upon the four item measure used by LaRose, Mastro and Eastin (2001). As expected, a one factor solution was identified, although considerable difference between the samples was noted. In the Australian sample the one factor solution explained 60% of the total variance, with good internal reliability of .75. In the US sample the one factor solution explained 39% of the total variance, with average internal reliability of .59. The difference between the two samples may be the result of different data collection approaches. Nonetheless, the results indicate that the measure was of a reasonable quality for proceeding with further analysis. (The computation behind the reliability figures cited throughout this chapter is sketched below for reference.)
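Cronbach's alpha can be computed directly from a respondents-by-items matrix using the standard formula alpha = k/(k-1) x (1 - sum of item variances / variance of the total score). The sketch below is illustrative only (Python/NumPy; the matrix and function names are assumptions, not code used in the study):

    import numpy as np

    def cronbach_alpha(items):
        """Cronbach's alpha for a respondents-by-items matrix."""
        k = items.shape[1]                         # number of items
        item_vars = items.var(axis=0, ddof=1)      # variance of each item
        total_var = items.sum(axis=1).var(ddof=1)  # variance of summed scale
        return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

Applied to a scale such as the two item internet use measure, a value approaching the .75 reported for the Australian sample would indicate good internal consistency.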


Part B: Exploring the problem

Chapter 8: The results

8.1 Introduction

Part A provided an examination of the key literature relevant to the research. It set the scene for the research by establishing the research gap that the current research explored. Part B provided details on the method and participants involved in the research; it established how the research problem was explored. This chapter is dedicated to testing the research question. The chapter proceeds in two sections. First, a brief introduction on the methods used for exploring the research question is provided. Then, the results of the analysis are outlined.

8.2 Multiple regression analysis

Multiple regression is a collection of techniques that can be used to explore the relationship between one continuous dependent variable and a number of independent variables or predictors (Pallant, 2005). It is based on correlation but allows a more sophisticated exploration of the interrelationships among a set of variables (Pallant, 2005). Multiple regression can be used to answer several different questions including, (i) how well a set of variables is able to predict a particular outcome, (ii) which variable in a set of variables is the best predictor of an outcome, and (iii) whether a particular predictor variable is still able to predict an outcome when the effects of another variable are controlled for (Pallant, 2005). There are three main types of multiple regression: standard, stepwise and hierarchical.

• In standard multiple regression the independent variables are entered into the equation at the same time. Each independent variable is evaluated in terms of its predictive power over and above that offered by all other independent variables. This is the most commonly used technique (Pallant, 2005; Tabachnick & Fidell, 2001).

• In stepwise multiple regression the researcher provides the statistical software program with a list of independent variables and allows the program to decide the order in which the variables will be entered into the equation. This technique has been heavily criticised in the literature: because the order of entry is not based on theoretical grounds, it produces unstable results that are sample specific, and it often leads to incorrect conclusions regarding the relative importance of predictors that are statistically dependent upon variables already entered into the analysis (Pallant, 2005; Tabachnick & Fidell, 2001).

• In hierarchical multiple regression the independent variables are entered into the equation in the order specified by the researcher based on theoretical grounds. Variables are entered in steps with each independent variable being assessed in terms of what it adds to the prediction of the dependent variable (Pallant, 2005; Tabachnick & Fidell, 2001).

Hierarchical multiple regression was used in the current research as it allowed for an exploration of the impact of socio-cognitive variables whilst controlling for socio-economic variables. The assumptions associated with multiple regression are discussed in the next section.
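To make the two step logic concrete, the sketch below expresses hierarchical entry as a pair of nested ordinary least squares models and tests the R squared change contributed by the second block. It is an illustrative sketch only (Python with statsmodels; the variable names are assumptions, and this is not the statistical package procedure actually used in the study):

    import numpy as np
    import statsmodels.api as sm

    def hierarchical_regression(y, block1, block2):
        """Fit block 1 alone, then blocks 1 and 2, and test the R2 change."""
        m1 = sm.OLS(y, sm.add_constant(block1)).fit()   # step 1
        full = np.column_stack([block1, block2])
        m2 = sm.OLS(y, sm.add_constant(full)).fit()     # step 2
        df1 = block2.shape[1]                           # predictors added
        df2 = m2.df_resid                               # residual df, full model
        f_change = ((m2.rsquared - m1.rsquared) / df1) / ((1 - m2.rsquared) / df2)
        return m1.rsquared, m2.rsquared, f_change

Here block1 would hold the dummy coded socio-economic variables and block2 the socio-cognitive scale scores, mirroring the order of entry described in Section 8.3.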

8.2.1 Multiple regression analysis approach

Pallant (2005) noted that multiple regression "is one of the fussiest of the statistical techniques" (p. 142). There are a number of assumptions and conditions about the data that must be met before multiple regression can proceed. A brief discussion of the multiple regression approach employed within the current study is provided below. This discussion describes: (i) treatment of missing data, (ii) sample size, (iii) multicollinearity and singularity, (iv) outliers and (v) normality, linearity and homoscedasticity.

• Treatment of missing data: Cases were omitted from the analysis based on pairwise treatment of missing data. In pairwise treatment, cases or persons are only excluded "if they are missing the data they required for the specific analysis" (Pallant, 2005, p. 52). Pallant (2005) "strongly recommends that you use pairwise exclusion of missing data unless you have a pressing reason to do otherwise" (p. 53). By undertaking pairwise exclusion the current study made best use of the available data (i.e. the fullest number of cases was utilised for each analysis).

• Sample size: Sample size will have an impact on the overall generalisability of the results obtained in the analysis. Tabachnick & Fidell (2001) provided a formula for calculating sample size (N) requirements for regression: N > 50 + 8m (where m is the number of independent variables). In addition, Hair et al (2006) noted that the ratio of observations to independent variables should never fall below 5:1, that is, five observations for each independent variable. However, they noted that the more desirable level was between 15 and 20 observations for each independent variable. The formula by Tabachnick & Fidell (2001) and the observations-to-variables ratio by Hair et al (2006) were used in the current research (these checks are sketched in code after this list).

• Multicollinearity and singularity: As noted in Chapter 7, multicollinearity refers to the relationship among the independent variables (Pallant, 2005). Multicollinearity exists when the independent variables are highly correlated (i.e. above .9). The correlation matrix was inspected before each regression analysis and the "collinearity diagnostics" Tolerance and VIF (as discussed in Chapter 7) were used (Pallant, 2005).

• Outliers: Multiple regression is sensitive to outliers (Pallant, 2005). Each variable was checked for extreme scores. Two techniques were used: (i) the standardised residual plot was inspected for values beyond +/- 3.3; and (ii) the Mahalanobis and Cook's distances were used. For Mahalanobis distance a critical chi-square value was determined using the number of independent variables as the degrees of freedom, with an alpha value of .0001 as the cut off point (Tabachnick & Fidell, 2001). For Cook's distance, values were required to be less than 1 for an accurate interpretation of the regression analysis.

• Normality, linearity and homoscedasticity: These refer to the distribution of the scores and the nature of the underlying relationship between the variables (Pallant, 2005). They were checked by an inspection of the residuals scatter plots. Residuals are the differences between the obtained and the predicted dependent variable scores. For normality the residuals should be normally distributed about the predicted dependent variable scores. For linearity the residuals should have a straight line relationship with the predicted dependent variable scores. For homoscedasticity the variance of the residuals about the predicted dependent variable scores should be the same for all predicted scores (Pallant, 2005).
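The sketch below gathers the checks described in this list (the N > 50 + 8m sample size rule, Tolerance/VIF, Mahalanobis distance against a chi-square cut off at alpha = .0001, and Cook's distance) into a single illustrative routine. It is a hedged approximation only (Python with statsmodels and SciPy); the function and variable names are assumptions, and the thresholds simply restate the rules above rather than any code used in the original analysis:

    import numpy as np
    from scipy import stats
    import statsmodels.api as sm
    from statsmodels.stats.outliers_influence import variance_inflation_factor

    def screen_regression_data(y, X, alpha=0.0001):
        """Illustrative pre-regression checks for an n-by-m predictor matrix X."""
        n, m = X.shape
        size_ok = n > 50 + 8 * m                     # Tabachnick & Fidell rule
        Xc = sm.add_constant(X)
        vifs = [variance_inflation_factor(Xc, i)     # VIF above 10 flags trouble
                for i in range(1, Xc.shape[1])]
        centred = X - X.mean(axis=0)
        inv_cov = np.linalg.inv(np.cov(X, rowvar=False))
        md2 = np.sum(centred @ inv_cov * centred, axis=1)  # squared Mahalanobis
        cut = stats.chi2.ppf(1 - alpha, df=m)        # critical chi-square value
        mahal_outliers = np.where(md2 > cut)[0]
        cooks = sm.OLS(y, Xc).fit().get_influence().cooks_distance[0]
        cooks_flags = np.where(cooks > 1)[0]         # values over 1 warrant review
        return size_ok, vifs, mahal_outliers, cooks_flags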


8.3 The analysis

A two step analysis was conducted. In Step 1 the socio-economic variables were entered. In Step 2 the socio-cognitive variables were entered. Internet use was the dependent variable. This two step approach can be seen in Figure 8.1.

Step 1 (socio-economic variables): gender, age, income, education, employment, ethnicity, disability.

Step 2 (socio-cognitive variables): internet self-efficacy; novel, activity, social, self evaluative, status and monetary outcome expectancies.

Figure 8.1: The two step hierarchical regression process for Internet use

Because multiple regression required metric independent variables, dummy coding was used to convert non-metric variables into metric variables (Hair et al, 2006). The seven socio-economic variables used in this research needed to be converted from non-metric to metric via dummy coding. The most common type of dummy coding is indicator coding. In this technique each category of the non-metric variable is represented by either 1 or 0 (Hair et al, 2006). Table 8.1 provides details on the dummy coding approach used in the current research.

Variable      Dummy coding approach
Age           0 = 40 years and younger; 1 = 41 years and over
Gender        0 = female; 1 = male
Education     0 = University level; 1 = Technical college or below
Employment    0 = Employed; 1 = Unemployed
Income        0 = Less than 40 000; 1 = More than 40 000
Ethnicity     0 = Yes; 1 = No
Disability    0 = No; 1 = Yes

Table 8.1: Dummy coding used in the current research
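As an illustration of indicator coding, the fragment below applies two of the Table 8.1 rules with pandas; the raw column names are hypothetical:

    import pandas as pd

    raw = pd.DataFrame({"age_years": [35, 62, 28],
                        "gender": ["female", "male", "female"]})
    coded = pd.DataFrame()
    coded["age"] = (raw["age_years"] >= 41).astype(int)      # 0 = 40 and younger, 1 = 41 and over
    coded["gender"] = (raw["gender"] == "male").astype(int)  # 0 = female, 1 = male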

The results of the analysis for both the US and the Australian sample are discussed in the following section.

8.3.1 The US sample

The data was checked for multicollinearity. Collinearity diagnostic tests indicated VIF values were all less than 10 and Tolerance values were all greater than 0.1 for all independent variables. Thus, the data set did not have any multicollinearity to be concerned with. Inspection of the normal probability plot revealed no major deviation from normality. Inspection of the residuals plot and subsequent calculations of Mahalanobis distance and Cook's distance indicated the presence of outliers. Cases 2, 47, 58, 68, 70, 141, 152, 154, 186, 187, 282 and 289 were removed. Nine independent measures were used in the analysis. According to Tabachnick & Fidell (2001) the current analysis would need a minimum of 122 cases. The dataset had 273 valid cases and was thus suitable for regression testing. The data set also met the "desired ratio" required by Hair et al (2006), with 30 observations to each independent variable. With all assumptions satisfied, regression analysis was able to proceed. Table 8.2 provides the results of the analysis.

Independent Variables    Step 1    Step 2
Age                      -.147*    -.042
Gender                    .036     -.013
Income                    .047      .072
Employment               -.114     -.016
Education                 .009     -.073
Disability               -.003      .013
Ethnicity                -.087      .090
Self-efficacy                       .421**
Outcome expectancy                 -.018

F Change                 1.598    25.788**
R2                        .040      .197
Adj R2                    .015      .170
R2 Change                 .040      .157
Sig F Change              .136      .000

* p < .05  ** p < .001

Table 8.2: Hierarchical regression for internet use (US sample)

At step one only one of the socio-economic variables was a significant predictor of internet use. An examination of the standardised beta coefficients (see note 21) suggested that age was a significant negative predictor of internet use (B = -.147, p<.05). This indicated that younger participants reported higher internet use than older participants. After the variables in block one were entered (the socio-economic factors) the overall model explained 4% of the variance. This was not statistically significant.

At step 2 only internet self-efficacy was a significant positive predictor of internet use. Participants reporting higher levels of internet self-efficacy (B = .421, p<.001) reported higher internet use. The age factor was no longer a significant predictor.

21. The beta values represent the unique contribution of each variable when the overlapping effects of all other variables are statistically removed.


An inspection of the R Square change value indicated that the second block of variables accounted for an additional 15.7% of the variance in internet use when socio-economic factors were controlled for. This was a statistically significant contribution as indicated by the F Change value (F(2, 264) = 25.788, p<.001). The final model (with both blocks entered) explained 19.7% of the variance. Thus, the regression analysis clearly suggested that it was socio-cognitive factors (and specifically self-efficacy) rather than socio-economic factors that were positive predictors of internet use.
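The significance of an R Square change can be verified from the reported figures alone using the standard F change formula for nested models. The sketch below is illustrative (plain Python); plugging in the Table 8.2 values (R Square of .040 then .197, two predictors added, 264 residual degrees of freedom) reproduces an F Change of roughly 25.8:

    def f_change(r2_reduced, r2_full, df_added, df_resid):
        """F statistic for the R squared change between nested models."""
        return ((r2_full - r2_reduced) / df_added) / ((1 - r2_full) / df_resid)

    print(f_change(0.040, 0.197, df_added=2, df_resid=264))  # approx. 25.8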

It was interesting to note that outcome expectancy was not a predictor of internet use. This is perhaps not surprising given that Bandura (1997) noted that self-efficacy was the core construct in SCT and that on its own outcome expectancy may not add much to the prediction of behaviour; it may, however, contribute to the formation of self-efficacy beliefs. To determine if this was the case in the current research a second regression analysis was run with self-efficacy as the dependent variable. This analysis was run to explore the impact, if any, that the socio-economic factors and the outcome expectancy factors may have on an individual's self-efficacy. The results of this analysis are presented in Table 8.3.

Independent Variables    Step 1    Step 2
Age                      -.259**   -.179*
Gender                    .054      .047
Income                   -.051     -.066
Employment               -.018     -.292
Education                 .180*     .281**
Disability               -.044     -.089
Ethnicity                -.006     -.016
Outcome expectancy                  .407**

F Change                 3.716**  49.543**
R2                        .080      .233
Adj R2                    .065      .209
R2 Change                 .079      .143
Sig F Change              .001      .000

* p < .05  ** p < .001

Table 8.3: Hierarchical regression for internet self-efficacy (US sample)

The dataset was checked for multicollinearity. Collinearity diagnostic tests indicated VIF values were all less than 10 and Tolerance values were all greater than 0.1 for all independent variables. Thus, the data set did not have any multicollinearity to be concerned about. Inspection of the normal probability plot revealed no major deviation from normality. Inspection of the residuals plot and subsequent calculations of Mahalanobis distance and Cook's distance indicated the presence of outliers. In addition to the 12 cases that were earlier identified as outliers and removed, cases 243 and 296 were also removed. With eight independent measures in the analysis, Tabachnick & Fidell (2001) would recommend a minimum of 114 cases before analysis could proceed. This dataset had 287 valid cases, which fulfilled the Tabachnick and Fidell (2001) requirement. The sample size also met the "desired ratio level" indicated by Hair et al (2006), with 35 observations to each independent variable. With all assumptions having been satisfied, regression analysis was able to proceed.

At step one age was a significant negative predictor, and education a significant positive predictor, of internet self-efficacy. It can be inferred that younger participants (B = -.259, p<.001) reported higher levels of internet self-efficacy than older participants, and that participants with higher levels of education reported higher levels of internet self-efficacy (B = .180, p<.05). Step one accounted for a small but significant 8.0% of the variance in internet self-efficacy.

At step two examination of the standardised beta coefficients revealed that outcome expectancy was a positive predictor of internet self-efficacy (B = .407, p<.001). This indicated that as participants reported higher levels of outcome expectancy they also reported higher levels of internet self-efficacy. Age (B = -.179, p<.05) and education (B = .281, p<.001) remained significant in step two, but the order of importance in predicting self-efficacy changed, with education now the more important of the two. An inspection of the R Square change value indicated that the second block of variables accounted for an additional 14.3% of the variance in internet self-efficacy when socio-economic factors were controlled for. This was a statistically significant contribution as indicated by the F Change value (49.543, p<.001). The final model accounted for 23.3% of the variance in internet self-efficacy. Thus, the regression analysis suggested that there were three variables that had an impact on internet self-efficacy. In order of importance these were: outcome expectancy, education and age.

8.3.2 The Australian sample

The data was checked for multicollinearity. Collinearity diagnostic tests indicated VIF values were all less than 10 and Tolerance values were all greater than 0.1 for all independent variables. Thus, the data set did not have any multicollinearity to be concerned about. Inspection of the normal probability plot revealed no major deviation from normality. Inspection of the residuals plot and subsequent calculations of Mahalanobis distance and Cook's distance indicated the presence of one outlier (case 4). This case was removed. Nine independent measures were used in the analysis. According to Tabachnick & Fidell (2001) a minimum of 122 cases would be required for analysis. This dataset had 321 valid cases, which exceeded this requirement. The data set also exceeded the "desired ratio" required by Hair et al (2006), with 35 observations to each independent variable. With all assumptions satisfied, regression analysis was able to proceed. Table 8.4 provides the results of the analysis.

Independent Variables    Step 1    Step 2
Age                      -.206**   -.027
Gender                    .145*     .052
Income                    .125*     .029
Employment               -.065     -.090
Education                -.084     -.033
Disability                .010      .011
Ethnicity                 .185*     .073
Self-efficacy                       .603**
Outcome expectancy                  .080

F Change                 7.953**  28.500**
R2                        .143      .444
Adj R2                    .125      .429
R2 Change                 .143      .302
Sig F Change              .000      .000

* p < .05  ** p < .001

Table 8.4: Hierarchical regression for internet use (Australian sample)

At step one several of the socio-economic variables were significant predictors of internet use. These included, in order of importance, age, ethnicity, gender and income. It appeared that younger participants reported higher levels of internet use (B = -.206, p<.001). Participants who identified themselves as not having an ethnic background reported higher levels of internet use than those who identified themselves as having an ethnic background (B = .185, p<.05). Males reported higher levels of internet use than females (B = .145, p<.05), and participants with higher levels of income reported higher levels of internet use (B = .125, p<.05). After the variables in block one were entered (all socio-economic) the overall model explained 14.3% of the variance.

At step two internet self-efficacy was a significant positive predictor of internet use. Participants reporting higher levels of internet self-efficacy (B = .603, p<.001) reported higher internet use. The socio-economic factors from step one were no longer significant predictors. An inspection of the R Square change value indicated that the second block of variables accounted for an additional 30.2% of the variance in internet use when socio-economic factors were controlled for. This was a statistically significant contribution as indicated by the F Change value (28.500, p<.001). The final model accounted for 44.4% of the variance in internet use.

Thus, the regression analysis clearly suggested that when socio-economic factors were controlled for, socio-cognitive factors (more specifically self-efficacy) positively predicted internet use. Once again it was interesting to note that outcome expectancy was not a predictor of internet use. As with the US sample, a second regression analysis was run with self-efficacy as the dependent variable. This analysis was run to explore the impact, if any, that the socio-economic factors and the outcome expectancy factors may have on an individual's self-efficacy.

The dataset was checked against the assumptions. Multicollinearity was explored; collinearity diagnostic tests indicated VIF values were all less than 10 and Tolerance values were all greater than 0.1 for all independent variables. Thus, the data set did not have any multicollinearity to be concerned about. Inspection of the normal probability plot revealed no major deviation from normality. Inspection of the residuals plot and subsequent calculations of Mahalanobis distance and Cook's distance indicated that outliers were not an issue. With eight independent measures in the analysis, Tabachnick & Fidell (2001) would recommend a minimum of 114 cases before analysis could proceed. This dataset had 321 valid cases, which fulfilled the requirement of a suitable sample size for regression testing. It also exceeded the "desired ratio level" outlined by Hair et al (2006), with 40 observations to each independent variable. With all assumptions met, regression analysis was able to proceed. Table 8.5 provides the results of the analysis.

At step one several of the socio-economic variables were significant predictors of self-efficacy. These included, in order of importance, age, ethnicity, gender and income. It can be inferred that: younger participants reported higher levels of internet self-efficacy (B = -.332, p<.001); participants who identified themselves as not having an ethnic background reported higher levels of internet self-efficacy (B = .194, p<.001); males reported higher levels of internet self-efficacy (B = .157, p<.05); and those with higher incomes had higher levels of internet self-efficacy (B = .155, p<.05). After the variables in block one were entered (all socio-economic) the overall model explained 25.3% of the variance.


Independent Variables    Step 1    Step 2
Age                      -.332**   -.195**
Gender                    .157*     .163*
Income                    .155*     .146*
Employment                .064      .122*
Education                -.086     -.094
Disability                .023     -.009
Ethnicity                 .194**    .211**
Outcome expectancy                  .330**

F Change                16.245**  43.702**
R2                        .253      .340
Adj R2                    .238      .324
R2 Change                 .253      .086
Sig F Change              .000      .000

* p < .05  ** p < .001

Table 8.5: Hierarchical regression for internet self-efficacy (Australian sample)

At step two examination of the standardised beta coefficients revealed that outcome expectancy was a significant predictor of self-efficacy (B = .330, p<.001). This suggested that as participants reported higher levels of outcome expectancy they also reported higher levels of internet self-efficacy. Age (B = -.195, p<.001), gender (B = .163, p<.05), ethnicity (B = .211, p<.001) and income (B = .146, p<.05) remained significant predictors of internet self-efficacy in step two. It was also noted that employment (B = .122, p<.05) emerged as a predictor, suggesting that unemployed participants reported higher levels of internet self-efficacy. An inspection of the R Square change value indicated that the second block of variables accounted for an additional 8.6% of the variance in internet self-efficacy when socio-economic factors were controlled for. This was a statistically significant contribution as indicated by the F Change value (43.702, p<.001). The final model accounted for 34% of the variance in internet self-efficacy. Thus, the regression analysis suggested that there were six variables that had an impact on internet self-efficacy. In order of importance these were: outcome expectancy, ethnicity, age, gender, income and employment.

8.3.3 Summary of US and Australian regression analysis

The aim of the research was to establish a model of digital inequality in two communities, San Jose (USA) and Brisbane (Australia), that considered both socio-economic and socio-cognitive factors. More specifically, it aimed to consider the following research question: What influence do socio-cognitive factors have in predicting internet use when the effects of socio-economic factors are controlled? Recent findings in the literature have suggested that socio-economic factors were the primary influencing factors on internet use and in understanding the digital divide in the US. The current study found that this was not the case. Socio-economic factors were not statistically significant predictors of internet use. The only predictor that was found to be significant was internet self-efficacy. In short, individuals with higher levels of internet self-efficacy reported higher levels of internet use. Further analysis revealed that outcome expectancy, education level and age were significant predictors of internet self-efficacy. In contrast, the Australian study found that by themselves several socio-economic variables predicted internet use. In order of importance these were age, gender, income and ethnicity. However, the study mimicked the US study in showing that when socio-economic variables were controlled for and socio-cognitive variables included in the analysis, it was not the socio-economic variables that predicted internet use but the socio-cognitive (internet self-efficacy specifically). Thus, both the US and the Australian studies agreed on the overarching influence of internet self-efficacy in predicting internet use. Further analysis of the Australian data revealed that outcome expectancy as well as age, gender, ethnicity, income and employment were significant predictors of internet self-efficacy; thus revealing that outcome expectancy and age were common across both samples in predicting internet self-efficacy.

8.4 The revised research model

Existing research exploring the digital divide has tended to take a socio-economic focus. These studies have suggested that the primary factors contributing to the digital divide were income, employment, education, gender, age, ethnicity and disability. Individuals who can be identified through these factors were more likely to represent the "have-nots" in the digital divide. These studies were useful in illustrating trends and suggesting possible relationships. They also helped place the digital divide issue into the public spotlight and onto the government agenda. The studies were nonetheless limited by their narrow focus. A "socio-economic only" perspective does not provide a full portrait of digital inequality in community. A graphical representation of the socio-economic perspective of the digital divide is provided in Figure 8.2 (see Image A).

The current study built upon the existing socio-economic perspective. The study explored a model of digital inequality in community that considered both socio-economic and socio-cognitive factors. The research was based on the premise that combining both socio-economic and socio-cognitive factors would establish a richer, more detailed and accurate picture of digital inequality. A graphical representation of the combined socio-economic and socio-cognitive perspective of the digital divide is provided in Figure 8.2 (see Image B).


A. The socio-economic perspective of the digital divide.

B. A combined socio-economic and socio-cognitive perspective of the digital divide.

C. A socio-cognitive perspective of the digital divide.

Figure 8.2: The evolving models of the digital divide within the San Jose and Brisbane community


The results of the research revealed that when considered together it was socio-cognitive factors, not socio-economic factors, that were the primary predictors of internet use in community. As such, a socio-cognitive framework of the digital divide provided the most accurate perspective for understanding the digital divide in community. A graphical representation of this framework is provided in Figure 8.2 (see Image C). The proposed framework illustrates that digital inequality in community is far more complex and evolved than has been imagined. It also adds support to the argument that the "digital divide" phrase is simplistic and misleading. Digital inequality in community is more than just a "have" and "have-not" dichotomy of physical access to technology, with socio-economic factors such as income, employment and education the key elements determining the division between the "haves" and the "have-nots". Instead the study proposed that digital inequality was about an increasing spectrum of digital inclusion and empowerment that was supported by an individual's evolving level of self-efficacy. Based on the results of both the US and the Australian samples, a revision to the research model outlined in Chapter 5 (see Figure 5.1) was made. The revised research model is provided in Figure 8.3.

Internet self-efficacy -> Internet use

Figure 8.3: The revised research model

8.5 Conclusion

This chapter examined the research question: "What influence do socio-cognitive factors have in predicting internet use by members of the general population when the effects of socio-economic factors are controlled?" The study used hierarchical regression analysis to examine the relationship of socio-economic factors and socio-cognitive factors in predicting internet use in two community settings: San Jose, USA and Brisbane, Australia. Taking into consideration the findings from both studies three main observations were drawn. Firstly, internet self-efficacy was the strongest predictor, when compared with socio-economic factors, of internet use for members of the general public in both the US and Australian samples. Secondly, socio-economic factors were not a predictor of internet use in the US study; in the Australian study they offered some minor influence (but only when socio-cognitive factors were not taken into consideration). Thirdly, across both samples, age and outcome expectancy were significant predictors of internet self-efficacy. The present research extended current knowledge of the major antecedents of internet use in community.


Chapter 9: Discussion and recommendations

9.1 Introduction

The previous chapter tested the research question: what influence do socio-cognitive factors have in predicting internet use by members of the general population when the effects of socio-economic factors are controlled? The chapter established that the current research extended existing knowledge of the major antecedents of internet use in community, and that the socio-cognitive model of the digital divide provided the most accurate portrayal of digital inequality. This chapter presents an overall discussion of the current research. First, an overview of the research program is given. Second, the main empirical findings from the research are briefly discussed. Next, the implications of the research are considered. Finally, recommendations for future research, including suggestions to address the limitations of the current research, are made.

9.2 Overview of the research

The information society was the context for the current research (see Chapter 2). Over the years the information society concept has generated considerable critical debate. The information society has been examined from four schools of thought: (i) the economic school, where the focus has been on the economy and the changing nature of the workforce; (ii) the information technology school, where the focus has been on the impact of ICT on society; (iii) the information explosion school, where the focus has been on the increasing amount of information in the world; and (iv) the synthetic school, which draws together the three previous schools. These schools of thought have been criticised for their quantitative focus (i.e. number of information workers, amount of information, amount of ICT), and consequently for their inability to provide a full and informed discourse on the information society. None of the existing schools of thought have adopted a "user" or "person"-centred focus. An information literacy school of thought would focus on the person and the way in which he or she experienced information and their information environment. As such, the information literacy school of thought would provide the missing "user" or "person"-centred perspective. The current research explored a core information society issue – the digital divide – from the information literacy school of thought.

The digital divide has rapidly become the accepted term to describe the disparity between those who have access to ICT and those who do not (see Chapter 3). Like many other social controversies, there have been those who have advocated that the digital divide was a legitimate crisis, and those who have advocated that it was a myth developed by the popular press, or that if it was once a crisis it no longer is. It could be suggested that the reason so many scholars and commentators proposed that the digital divide was a myth was because their observations were based on a limited understanding of the phenomenon, and that this limited understanding arose because digital divide research has not yet explored the full nature of the phenomenon. Digital divide research to date has taken the form of demographic analyses or statistical descriptions of who has access to digital technology and who does not. This has resulted in the portrayal of the digital divide as a relatively simple premise. It is viewed as a dichotomous concept – you either have access to ICT or you do not – with access determined by socio-economic factors such as income, employment and education. In more recent years digital divide commentators have begun to acknowledge that "access" needs to be re-conceptualised to focus on the personal and the social aspects. Consequently, a small but growing number of studies have begun to explore the digital divide from different perspectives, including educational, cultural and sociological perspectives. To date, very few studies have explored the digital divide from a psychological perspective. Psychology is the study of human behaviour. It is a field of scholarly enquiry focused on the human perspective. As such, adopting a psychological theoretical framework in the current research was a natural fit for a study that was positioned within the information literacy school of thought.

Social cognitive theory (SCT) provided the theoretical framework for the current research (see Chapter 4). SCT asserts that human behaviour is best understood through the reciprocal relationship between personal factors, behaviour and the environment. It is grounded in a view of human agency where people can exercise influence or control over what they do. Self-efficacy is the core SCT construct, and refers to a person's judgement of perceived capability for performing a task. Recent studies have shown that self-efficacy and information literacy are closely linked concepts. To survive in today's information rich world, individuals need not only the necessary information skills or knowledge to function in their information environment but also the appropriate confidence in their ability to successfully apply these skills to their information needs. As more and more information becomes available via digital formats (i.e. the internet), an individual needs not only the skills and knowledge to access and use digital technology, but also the confidence to apply those skills and knowledge when needed. Recent studies have demonstrated that self-efficacy is a significant factor in influencing an individual's decision to use technology, such as computers or the internet.

Four studies have explored the digital divide using the self-efficacy construct. These studies have taken place in the US and in Hong Kong. The studies provided initial support for an alternative psychological perspective to the current socio-economic understanding of the digital divide. However, these studies were limited in several ways: (i) the participants used (i.e. college students, African American students, senior citizens) resulted in limited generalisability to other populations, with only one of the studies using participants drawn from the general population; and (ii) none of the studies included both socio-economic and socio-cognitive factors. In addition, none of the studies were conducted in Australia.

Self administered questionnaires were used for data collection in the two cities of Brisbane, Australia and San Jose, USA (see Chapter 5). The questionnaire gathered information on both socio-economic factors (i.e. age, gender, income, employment, ethnicity, disability and education) and socio-cognitive factors (i.e. self-efficacy and outcome expectancy). Details were obtained about participants' internet use. An internet self-efficacy scale was developed for the purpose of the research. The final data collection took place in January 2005 in the US context and November/December 2005 in the Australian context. 330 completed surveys were obtained in the US study and 398 completed surveys were obtained in the Australian study (see Chapters 6 & 7).

9.3 Overview of the research findings

Using the research hierarchy approach recommended by Cooper and Emory (1995), the management question driving the study was: How can understanding of the digital divide be improved by including a psychological or human perspective? The research question that followed from the management question was: Can a more detailed understanding of the digital divide in community be obtained by exploring both socio-cognitive and socio-economic factors? The investigative question that followed was: What influence do socio-cognitive factors have in predicting internet use when the effects of socio-economic factors are controlled? Hierarchical multiple regression was used to address the investigative question. The results of this analysis revealed that attitudes do matter. The US study found that socio-economic factors were not statistically significant predictors of internet use. The only predictor that was found to be significant was internet self-efficacy. In short, individuals with higher levels of internet self-efficacy reported higher levels of internet use. Further analysis of the US data revealed that outcome expectancy, education level and age were significant predictors of internet self-efficacy. The Australian study, unlike the US study, found that by themselves several socio-economic variables predicted internet use. In order of importance, these were age, gender, income and ethnicity. However, the study also showed that when socio-economic variables were controlled for, and socio-cognitive variables included in the analysis, it was the socio-cognitive and not the socio-economic factors that were the dominant (in fact the only!) predictors of internet use. Further analysis of the Australian data revealed that internet outcome expectancy as well as age, gender, ethnicity, income and employment were significant predictors of internet self-efficacy.

Taking into consideration the above findings three main observations can be drawn. Firstly, internet self-efficacy was the strongest predictor, when compared with socio-economic factors, of internet use for members of the general public in both the US and Australian samples. Secondly, socio-economic factors were not a predictor of internet use in the US study; they offered some minor influence in the Australian study (but only when socio-cognitive factors were not taken into consideration). Thirdly, outcome expectancy and age were significant predictors of internet self-efficacy in both the US and Australian samples. The present research extended current knowledge of the major antecedents of internet use in community. In the following section the implications of this research are considered.

9.4 Significance and contribution of the research

The research was significant because it was the first time that a study exploring the digital divide combined both socio-economic and socio-cognitive factors in the research design, and used members of the general population in the data collection process. Additionally, the research used both Australian and US samples. Critical theory provided the basis for the current research. Critical theory seeks to not just give an account of society and behaviour but to "realise a society that is based on equality and democracy for all" (Cohen, Mannion & Morrison, 2000, p. 28). Research conducted from a critical theory perspective aims not just to understand a situation or phenomenon but to effect change (Cohen et al, 2000). Consequently, this research had implications for both theory and practice, and these implications are offered as a way of emancipating the disempowered and redressing inequality (Cohen et al, 2000).


• The research has theoretical implications and contributions with regard to the information society, information literacy, the digital divide and social cognitive theory:

o Drawing from the user-centred philosophy that would underpin the information literacy school, the current research explored a core information society issue – the digital divide. The research provided practical evidence that taking a user or person-centred perspective to examine the information society could provide new insight into the phenomenon. In short, the research suggested that information literacy, as viewed from the experiential paradigm, has the potential to be the fifth school of thought from which to view and understand the information society. Further studies exploring other information society issues would help in establishing and validating the information literacy school. The research contributed to the small but growing number of information literacy studies that have begun to explore the way people engaged with and experienced their information worlds in the community context. The research has helped to set the agenda for “community information literacy”.

o The outcomes of this research have influenced our understanding of the digital divide in a number of ways. Firstly, it established a way of thinking about and understanding digital inequality in community that moved beyond simple access or connection to technology. It illustrated that attitudes do matter, and that the internal forces inside a person have a significant (indeed the primary!) impact on their decision on whether or not to engage with technology and to incorporate it into their information worlds. Secondly, the findings provided evidence that the characteristics or make-up of the digital divide were more complex than the current dichotomous understanding. The socio-economic perspective that has dominated understanding of the digital divide to date suggested that the lower an individual's socio-economic status, the more likely they were to represent the "have-nots". In contrast, the higher an individual's socio-economic status, the more likely they were to represent the "haves". In this case a "have-not" was someone who did not, or rarely, used ICT such as computers or the internet, and a "have" was someone who regularly used ICT. The socio-economic studies did not shed light on why some individuals with high socio-economic status were choosing to not, or rarely, use ICT and why some individuals with low socio-economic status were choosing to use ICT. This phenomenon suggested something else was influencing people's decisions to engage with ICT in their lives. The current research illustrated that this "something else" was self-efficacy and that the digital divide involved both more members of the population and different members of the population than current research had shown to date. As such, the current research has brought to light elements of the digital divide that have not been considered in contemporary discourse about the phenomenon.

o The research supported previous studies that have demonstrated that social cognitive theory, and self-efficacy specifically, was a key predictor of people’s use of ICT (i.e. computer or internet). The research contributed to the SCT research community through the development of an internet self-efficacy measure that could be used with members of the general population. This was the first measure of its kind.

• The research has practical implications and contributions with regard to organisations and government agencies involved in supporting the information and ICT needs of community, and in developing public policy:

o The research illustrated that organisations (e.g. public libraries, community centres) aimed at supporting the information and ICT needs of community needed to incorporate both physical access to technology and programs that help develop people's self-efficacy beliefs.

o Programs to develop self-efficacy beliefs should include the four core sources of self-efficacy noted by Bandura (1997): enactive attainment, verbal persuasion, vicarious experience and physiological feedback.


 Enactive attainment provides the most authentic evidence of whether one can succeed in a task. As such, it is the most influential source for establishing self-efficacy beliefs. It is therefore important that opportunities for people to access and use the internet are maximised. Charging for internet access works against this strategy. Opportunities should be made available for people in the community who do not normally have access, for whatever reason; for example, mobile internet services to regional or remote communities. This should be more than just "hit and run" access or one-off classes, as these do not allow the opportunity to steadily build on the skills being acquired. Enactive attainment requires frequent successful use of the technology.

 Having access to role models (i.e. vicarious experience) is an important part of developing self-efficacy beliefs. It is important that the role model is perceived to be similar to the individual observing. This means providing opportunities wherever possible for individuals to see "similar others" succeed at using the internet. This could include having internet classes conducted by trainers of similar age or background to the learners, and having internet classes that are focused on one particular community group (e.g. seniors). This would provide opportunities for members of the community who do not normally use the internet to observe it being used by similar others. For example, community members who are identified as not active or frequent internet users could be invited to attend a social event or other activity they would normally engage in (i.e. art show, poetry reading etc) that could also discreetly include the opportunity to view similar others using the internet. However, vicarious experience can also have a negative impact. The observation of failure on the part of similar others can have a devastating effect on self-efficacy. In this sense the practice of having group internet classes and using the "buddy system" to share computer resources can have a negative effect on self-efficacy in populations where internet skills are generally low. Here, observing the failure of peers is likely to discourage (i.e. decrease the self-efficacy of) those struggling with internet use and may also negatively affect those who have achieved early success. Individual instruction would thus be preferable. Failing that, computer labs could be redesigned with partitions or staggered seating to restrict information about the failure of peers.

 Self-efficacy beliefs can be strengthened through verbal persuasion about performance, but this persuasion must be delivered by competent and credible evaluators. It must also be constructive. Telling individuals that they will succeed only through hard work, or that they need to work harder, is likely to lower self-efficacy in the long run, since this message conveys that the user must have been deficient to begin with if such hard work is required to succeed.

 Physiological feedback refers to the idea of creating situations that do not produce stress or tension. Whilst it is the weakest source of self-efficacy, it cannot be completely ignored. Great care should be taken in the manner in which internet access or training is provided. This would include the location, language, set up and other factors that may have a physiological impact.

The use of these four sources of self-efficacy may require that staff involved in designing, delivering and supporting the information and ICT needs of community undertake additional training in order to carry out their duties adequately. It will inevitably require support from policy makers at the most senior levels, and it will need greater budgetary assistance.

9.5 Limitations

While the contributions of this study were unique it must also be recognised that the current research had some limitations. The following seven limitations were noted in Chapter 1 and are expanded upon here.


First, the research employed cross-sectional data to identify the significant relationships between the research variables. Consequently, no firm conclusions can be made regarding the exact magnitude of the causal effects. Longitudinal designs, although much more difficult to achieve (especially in the community setting), are crucial for furthering current understanding of the nature of the digital divide. This limitation was acknowledged. The study was the first attempt to explore the relationship between socio-cognitive and socio-economic factors on internet use in community. In addition, limited resources were available for undertaking a community based longitudinal study. It was thus decided to constrain the scope and design of the study. The study was developed as an exploratory work that would provide initial evidence of a new understanding or way of perceiving digital inequality in community. With the initial evidence now in place, the next step in the inquiry process will be a longitudinal study to provide a basis for substantiated explanatory theory.

Second, the measures used to assess the main variables of interest in the research were all self report instruments. Self report measures were open to biases in reporting. For example, the participants may have under-estimated or over- estimated their levels of internet self-efficacy or internet use. While the inherent weakness of this data collection technique must be acknowledged as a limitation of the study, the researcher had no other option than to use this approach given the large sample size required to undertake the desired statistical analysis and the limited resources available.

Third, by focusing only on self-efficacy and outcome expectancy the research provided only a limited understanding of people's psychological engagement with the internet. Any research project must establish its scope or boundary of enquiry. This is especially true for research involving members of the general public. Participants in the current study were able to complete the questionnaire in 10 to 15 minutes. Whilst it would be interesting to include in the study design other psychological or attitudinal constructs or dimensions (e.g. perceived need, locus of control, internet anxiety), this would impact upon the overall length of the data collection instrument. A longer questionnaire could negatively affect the participant recruitment process and the overall sample size. For this reason the scope of the study was limited to only those constructs associated with Bandura's social cognitive theory (and the core socio-economic factors).


Fourth, it was acknowledged that the validity and reliability of a construct cannot be established by a single study. The internet self-efficacy measure developed for the purpose of this research requires further testing and revising in order to improve its psychometric properties. While this must be acknowledged as a limitation, care was taken to develop a psychometrically sound measure. Common method variance (CMV) was considered by using Harman's one factor statistical test, and this provided assurance that the observed relationships were unlikely to be due to CMV. It must also be noted that the scale was used within two different samples (US and Australian), which provided strong initial evidence for the validity and reliability of the measure.

Fifth, a combined convenience and purposive sampling approach was used in the study. Convenience sampling was used in that study participants were drawn from community locations that the researcher had access to (i.e. public library, community centres, health centres). In terms of purposive sampling, the study involved the selection of two samples based on prior research and on the theory being explored. That is, as the data was being collected the participants were compared to the sample frame that was established (see Chapter 5) and, where necessary, data collection was modified or tailored to ensure the sample frame was being met. While the inherent weaknesses of this non-random sampling approach must be acknowledged as a limitation of the study, the researcher had no option other than to use this approach due to the dispersed nature of the population in both the US and Australian studies, and the limited resources available to conduct the sampling process. The results of the study were therefore interpreted in light of this weakness in data collection.

The sixth limitation builds upon the previous limitation. Caution must be taken when interpreting the findings in relation to the Australian and US populations. The participants were recruited from a small catchment (i.e. only one city in each nation, and specific areas within each city). Thus, the research has presented a picture of the digital divide as understood by two "small worlds", and more specifically by only a very small percentage and very specific cohort of members from these small worlds. This was especially the case for the US sample, where the issue of total non-response was most apparent because all participants were recruited from only one location: the Dr Martin Luther King Jr Branch of the joint San Jose State University Library and San Jose Public Library. Whilst the US sample was a reasonable match to the statistical profile of the city of San Jose, this fact must be considered in light of two points: (i) no information was available on those individuals who were approached and invited to participate in the study but refused, and (ii) members of the San Jose community who do not use the King branch were not included within the study. No information was available on what differences, if any, exist between San Jose citizens who use the King branch and those who do not (i.e. education, income, ethnicity). Consequently, generalisation from the sample to the San Jose population must be made with caution. To help reduce these limitations the researcher followed the advice of Fraenkel and Wallen (1996) and described the samples from both the US and the Australian studies as thoroughly as possible regarding key characteristics (e.g. age, gender, ethnicity). By providing this level of description other researchers and practitioners are able to judge for themselves the degree of validity and reliability of the study. The existing "small world" picture can be deepened through replication.

Finally, the study can also be critiqued on the statistical techniques used for data analysis (factor analysis, regression analysis) when more favoured techniques such as structural equation modelling (SEM) exist. Unlike multiple regression, which only tests a single step in a model, SEM is a statistical approach that enables testing of a series of interrelated causal relationships simultaneously (Hoyle, 1995). It is for this reason that SEM is gaining popularity over regression analysis among researchers who are interested in understanding complex patterns of interrelationships among variables. This limitation must be acknowledged and was the result of attempting to manage the scope of the research. The research was the first international study exploring the influence of both socio-cognitive and socio-economic factors on internet use with members of the community. It was thus decided to view the work as a study aimed at providing the initial evidence in support of the theory. The quantitative data analysis was therefore constrained to factor analysis and regression. However, the intention was that once the initial evidence in support of the theory was established the work would move on to SEM, but this was outside the scope of the thesis.

9.6 Recommendations for future research and practice

Any research topic is likely to provide more questions than can actually be resolved during the immediate research activities. This is the case here. This section provides recommendations for future research and practice:


Recommendation 1: That the present study be replicated in other community contexts in the US and Australia. This would help determine whether the findings revealed in this study were present in other parts of the US and Australia. Replication would also lead to the affirmation or modification of the internet self-efficacy scale developed.

Recommendation 2: That the present study be replicated in communities in other cultures. This would help to determine if the findings of this study were also found in other cultural contexts. This is of particular importance for developing nations and consequently the “global digital divide”.

Recommendation 3: That further studies be conducted to explore in greater detail the factors that influence the formation of self-efficacy beliefs. Whilst the current research shed some initial light on this point, and there is a wealth of information in the SCT literature itself, further studies would help establish the key factors for supporting community members in relation to this particular phenomenon.

Recommendation 4: That further studies be conducted exploring self-efficacy and digital inequality with other ICTs. This study focused on the internet because it is the accepted “face” of digital inequality. However, each day new technology and new developments arise which impact upon people’s information worlds. Extending the research to incorporate these new developments would help to shed more light on the phenomenon.

Recommendation 5: That studies be conducted using a longitudinal approach to study the phenomenon. These studies should also incorporate exploration of intervention programs. Conducting pre- and post-tests based on people’s experience of intervention programs designed to help establish self-efficacy would assist in determining the most effective strategies for bridging the digital divide, as the sketch following this recommendation illustrates.
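
As a concrete illustration of the pre- and post-test comparison this recommendation envisages, the sketch below applies a paired-samples t-test to synthetic before-and-after self-efficacy scores. It is a hypothetical example only: the data, the assumed gain and the variable names are assumptions, not results from this research.

```python
# A minimal sketch of a pre/post intervention comparison on synthetic data.
# The assumed gain of 0.8 scale points is illustrative only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
pre = rng.uniform(2, 6, 40)              # self-efficacy before the program
post = pre + rng.normal(0.8, 1.0, 40)    # scores after, with an assumed gain

t, p = stats.ttest_rel(post, pre)        # paired test: same participants twice
print(f"t = {t:.2f}, p = {p:.4f}")       # a small p suggests a real change
```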

Recommendation 6: That further studies be conducted using information literacy as a conceptual framework to explore other information society issues. This would help establish the value of information literacy (i.e. taking the human perspective) in exploring and understanding contemporary society.


Recommendation 7: That organisations and policy makers provide both access to technology and programs and services aimed at helping members of community to develop their self-efficacy beliefs. These programs and services should be based upon the four core sources of self-efficacy noted by Bandura (1997): enactive attainment, verbal persuasion, vicarious experience and physiological feedback. This may require additional resources, or training for staff involved in the design and delivery of community based services. It may also involve the redevelopment of current infrastructure.

9.7 Conclusion

The aim of this research was to extend the current understanding of the digital divide by developing a theoretical framework for viewing digital inequality in community that considers socio-cognitive factors alongside socio-economic factors. An alternative perspective for understanding the digital divide has been proposed. The research has shown that socio-cognitive factors, and self-efficacy in particular, are the major predictors of internet use in community. The digital divide is not about computers, modems, the internet and hardware. It is about people. As such, the key to solving the issue of digital inequality is not going to be found in corporate or government funds and resources providing physical access to technology. Instead, the key to solving digital inequality lies inside the individual user. We need to develop programs and services that support the individual. As the adage goes: “you can lead a horse to water, but you can’t make him drink”. Access alone is not the answer. Whilst access is certainly a good starting point, it is most certainly not the end point. The alternative formulation of the digital divide presented in this research is by no means intended to minimise the role played by socio-economic factors. Indeed, the socio-economic perspective has helped shed light on a very real social issue. This research has suggested that the digital divide is more complex and more involved than has previously been imagined, and that further and different research is required if genuine insight is to be gained and real steps are to be made in establishing an information society for all.


References

Aaker, D. A., Kumar, V., & Day, G. S. (2004). Marketing research (8 ed.). New York: Wiley.

Ajzen, I. (1988). Attitudes, personality and behaviour . Milton Keynes: Open University Press.

Ajzen, I. & Fishbein, M. (1980). Understanding attitudes and predicting social behavior. Englewood Cliffs, NJ: Prentice Hall.

Alvesson, M., & Deetz, S. (2000). Doing critical management research. London, UK: Sage Publications.

American Library Association (ALA). (1989). Presidential Committee on Information Literacy: final report. Washington, DC: American Library Association.

Aron, A., & Aron, E. N. (1999). Statistics for psychology (2 ed.). Upper Saddle River, N.J.: Prentice Hall.

Arquette, T. (2001). Assessing the digital divide: empirical analysis of a meta-analytic framework for assessing the current state of information and communication system development. Retrieved 13 October, 2005, from http://communication.utexas.edu/college/digital_divide_symposium/papers/arquette.pdf

Arrindel, W., & Ende, V. (1985). An empirical test of the utility of the observations-to-variables ratio in factor and component analysis. Applied Psychological Measurement, 9, 165-178.

Association of College and Research Libraries (ACRL). (2000). Information literacy competency standards for higher education . Chicago: American Library Association.

Association of College & Research Libraries. (n.d.). Recommended readings for librarians new to instruction. Retrieved 11 April, 2006, from http://www.ala.org/ala/acrlbucket/infolit/bibliographies1/recommendedreadings.htm

Australian Bureau of Statistics (ABS). (1994-2006) Household use of information technology (cat. 8146.0). Retrieved 10 April, 2006, from http://www.abs.gov.au

Australian Bureau of Statistics (ABS). (1998-2000). Use of the internet by householders, Australia (cat. 8147.0). Retrieved, 11 May 2005, from http://www.abs.gov.au

Australian Bureau of Statistics (ABS). (2000). Use of the internet by householders, Australia; final summary (cat. 8147.0). Retrieved, 11 May 2005, from http://144.53.252.30/AUSSTATS/[email protected]/allprimarymainfeatures/AE8E67619446DB22CA2568A9001393F8?opendocument

Australian Bureau of Statistics (ABS). (2000-2006). Internet activity, Australia (cat. 8153.0). Retrieved 16 May, 2005, from http://www.abs.gov.au

Australian Bureau of Statistics (ABS). (2001). 2001 Census. Retrieved, 11 May 2005, from http://www.abs.gov.au/websitedbs/d3310114.nsf/home/Census%20data

Australian Library and Information Association. (2006). Statement of information literacy for all Australians . Retrieved 10 July, 2006, from http://www.alia.org.au/policies/information.literacy.html

Australian School Library Association. (1994). Information literacy policy statement . Retrieved 10 September, 2006, from http://www.asla.org.au/policy/p_infol.htm

Bailey, B. (2000). The private sector is closing the digital divide . Retrieved 10 March, 2006, from http://www.ncpa.org/ba/ba331/ba331.html

Bandura, A. (1977). Self-efficacy: toward a unifying theory of behavioral change , Psychological Review, 84, 191-215.


Bandura, A. (1986). Social foundations of thought and action: a social cognitive theory . Englewood Cliffs, N.J.: Prentice-Hall.

Bandura, A. (1991). Self efficacy mechanisms in physiological activation and health- promoting behavior. In J. Madden (Ed.), Neurobiology of learning, emotion and affect (pp. 229-270). New York: Raven.

Bandura, A. (1994). Self efficacy. Retrieved 11 August, 2006, from http://www.des.emory.edu/mfp/BanEncy.html

Bandura, A. (1997). Self efficacy: the exercise of control . New York: W. H. Freeman and Company.

Bandura, A. (2002). Growing primacy of human agency in adaptation and change in the electronic era. European Psychologist, 7 (1), 2-16.

Bandura, A. (2005). Guide for creating self efficacy scales. In F. Pajares & T. Urdan (Eds.), Self-efficacy beliefs of adolescents (pp. 301-338). Greenwich, Conn.: Information Age Publishing.

Barker, L. L. (1989). Evaluating research. In P. Emmert & L. L. Barker (Eds.), Measurement of communication behavior (pp. 68-83). New York, NY: Longman Publishing.

Barrett, P.T. & Kline, P. (1982). Factor extraction: an examination of three methods. Personality and Group Behaviour, 1 , 84-98.

Bartley, M. (2004). Measuring socio-economic position. In Mel Bartley. (Ed.). Health inequality : an introduction to theories, concepts and methods. Cambridge: Blackwell Publishing

Bauman, Z. (1990). Thinking sociologically. Oxford; Cambridge, Mass: B. Blackwell.

Bawden, D. (2001). Information and digital literacies: a review of concepts. Journal of Documentation, 57 (2), 218-259.


Bell, D. (1973). The coming of post-industrial society : a venture in social forecasting . London: Heinemann Educational.

Benton, J. E. & Daly, J. L. (1991). A Question Order Effect in a Local Government Survey. Public Opinion Quarterly, 55(4) , 640-642.

Bourdieu, M. (1997). Report on the discussion at the panel on Assessing Critical Social Theory Research in Information Systems at IFIP WG8.2 Conference. Retrieved 13 October 2007, from http://www.people.vcu.edu/~aslee/Philadelphia-CST.htm

Brady, M. (2000). The digital divide myth. Ecommerce Times . Retrieved 9 April, 2006 from http://www.ecommercetimes.com/story/3953.html

Brisbane City Council (2001). Living in Brisbane 2010. Retrieved, 8 August 2005, from http://www.brisbane.qld.gov.au/council_at_work/planning/brisbane_2010/index.shtml

Brisbane City Council (2006). Living in Brisbane 2026. Retrieved, 1 May 2007, from http://www.brisbane.qld.gov.au/bccwr/about_council/documents/vision2026_final_fulldocument.pdf

Brouwer, P. S. (1997). Critical thinking in the information age. Journal of Educational Technology Systems, 25(2), 189-197.

Brown, S. P., Ganesan, S., & Challagalla, G. (2001). Self efficacy as a moderator of information-seeking effectiveness. Journal of Applied Psychology, 86(5), 1043-1051.

Bruce, C. (1997). The seven faces of information literacy . Adelaide: Auslib Press.

Bruce, C. (2000). Information literacy research: dimensions of an emerging collective consciousness , Australian Academic and Research Libraries, 31(2) , 91-109.

Bryman, A. (2003). Business research methods. Oxford: Oxford University Press.


Bundy, A. (2001). The twenty first century profession: objects, values and responsibilities . Retrieved 10 April, 2006, from http://www.library.unisa.edu.au/about/papers/21century.pdf

Bundy, A. (2003). One essential direction: information literacy, information technology fluency. Paper presented at eLit 2003, 2nd International conference on information and IT literacy, Glasgow. 11-13 June 2003.

Bundy, A. (Ed.). (2004). Australian and New Zealand Information Literacy Framework: Principles, standards and practice (2 ed.). Adelaide: Australian New Zealand Institute of Information Literacy.

Burgelman, J. (2000). Regulating access in the information society: the need for rethinking public and universal service. New Media and Society, 2(1) , 51-66.

Burns, A.C., & Bush, R.F. (2001). Marketing research . London: Prentice-Hall

Candy, P. (1996). Major themes and future directions. Paper presented at the Learning for Life: information literacy and the autonomous learner (Proceedings of the Second National Information Literacy Conference), Adelaide, South Australia. 30 November-1 December 1995

Candy, P. (2000). Mining in Cyberia: researching information literacy for the digital age. In C. Bruce & P. Candy (Eds.), Information literacy around the world: advances in programs and research (pp. 139-151). Wagga Wagga, NSW: Centre for Information Studies, Charles Sturt University.

Carvin, A. (2000). Mind the Gap: The Digital Divide as the Civil Rights Issue of the New Millennium. MultiMedia Schools . Retrieved 9 August, 2006, from http://www.infotoday.com/MMSchools/Jan00/carvin.htm

Carvin, A. (2004). Diluting digital divide discourse . Retrieved 11 August, 2006, from http://www.andycarvin.com/archives/2004/08/diluting_digita.html

Cassidy, S., & Eachus, P. (n.d.). Developing the computer self-efficacy (CSE) scale: investigating the relationship between CSE, gender and experience with computers. Retrieved 11 August, 2006, from http://www.chssc.salford.ac.uk/healthSci/selfeff/SELFEFFa.htm

Castells, M. (1996). The rise of the networked society (Vol. 1). Cambridge, MA: Blackwell Publishers.

Castells, M. (1997). The power of identity (Vol. 2). Malden, Mass: Blackwell.

Castells, M. (1998). End of millennium (Vol. 3). Malden, Mass.: Blackwell Publishers.

Chakraborty, J., & Bosman, M. M. (2005). Measuring the digital divide in the US: race, income and personal computer ownership. The Professional Geographer, 57(3) , 395-410.

Chatman, E. A. (1985). Information, Mass Media Use and the Working Poor. Library and Information Science Research, 7 , 97-113.

Chatman, E. A. (1990). Alienation theory: application of a conceptual framework to a study of information among janitors. RQ, 29 (3), 355-368.

Chatman, E. A. (1996). The impoverished life-world of outsiders. Journal of the American Society for Information Science, 47 , 193-206.

Chaudhri, A., Flamm, K. S., & Horrigan, J. (2005). An analysis of the determinants of internet access. Telecommunications Policy, 29 (9/10), 731-755.

Cheuk, B. (2005). Information literacy in the workplace: issues, best practices and challenges. Retrieved, 11 May, 2006, from http://www.nclis.gov/libinter/infolitconf&meet/papers/cheuk-fullpaper.pdf

Chuck 45. (2001). Chairman Mike and the digital divide . Retrieved 10 March, 2006, from http://www.thegully.com/essays/US/politics_2001/010212powell_fcc.html

Churchill, G. A. J. (1979). A paradigm for developing better measures of marketing constructs. Journal of Marketing Research, 16 (1), 64-73.


Cisler, S. (2000). Hot button: online haves vs. have-nots. Subtract the digital divide. San Jose Mercury News. 1 July 2000. Retrieved, 9 August 2006, from http://www.athenaalliance.org/rpapers/cisler.html

Cisler, S. (2003). Digital Divide - Metastasis of a . Retrieved 11 August, 2006, from http://makeworlds.org/node/13

City of San Jose. (2001). Smart growth: imagine a city…Inside San Jose. Retrieved 10 July, 2003, from http://www.sanjoseca.gov/planning/isjsmartgrowh.pdf

Cohen, J. (1988). Statistical power analysis for the behavioral sciences. Hillsdale, N.J. : L. Erlbaum Associates.

Cohen, L., Manion, L., & Morrison, K. (2000). Research methods in education (5th ed.). London: Routledge.

Commonwealth Department for Information Technology and Arts (DICTA) (1998). Strategic Framework for the Information Economy: Identifying Priorities for Action - December 1998 . Available on request only.

Compaine, B. M. (2001). Digital divide: facing a crisis or creating a myth . Cambridge, Massachusetts: MIT Press.

Compeau, D., & Higgins, C. (1995). Computer self efficacy: development of a measure and initial test. MIS Quarterly, 19 , 189-211.

Comrey, A. L. (1973). First course in factor analysis . New York: Academic Press.

Comrey, A. L., & Lee, H. B. (1992). A first course in factor analysis . Hillsdale, N.J: L. Erlbaum Associates.

Cooper, D. R., & Emory, W. (1995). Business research methods. Chicago: Irwin.

Council of Australian University Librarians (CAUL). (2001) Information literacy standards. First Edition. Retrieved 8 August, 2005, from http://www.caul.edu.au/caul-doc/InfoLitStandards2001.doc


Crump, B., & McIlroy, A. (2003). The digital divide: why the "don't-wants-tos" won't compute: lessons from a New Zealand ICT project. First Monday, 8(12). Retrieved 11 August, 2006, from http://www.firstmonday.org/issues/issue8_12/crump/index.html

Cuneo, C. (2002). Globalised and localised digital divides along the information highway: a fragile synthesis across bridges, ramps, cloverleaves and ladders. Retrieved 29 June, 2006, from http://socserv2.mcmaster.ca/sociology/Digital-Divide-Sorokin-4.pdf

Curtin, J. (2001). A digital divide in rural and regional Australia? (No. 1): Department of the Parliamentary Library. Retrieved 15 April, 2006, from http://www.aph.gov.au/library/pubs/cib/2001-02/02cib01.htm

Dane, F. C. (1990). Research methods. Pacific Grove, Calif: Brooks/Cole Publishing Company.

de la Peña McCook, K. (2002). Rocks in the Whirlpool. Retrieved 11 August, 2006, from http://www.ala.org/ala/ourassociation/governingdocs/keyactionareas/equityaction/rockswhirlpool.htm

de Vaus, D. (2002). Surveys in social research (5th ed.). London: Routledge.

Delucchi, K. (1993). On the use and misuse of chi-square. In G. Keren & C. Lewis (Eds.), A Handbook for data analysis in the behavioral sciences : methodological issues . Hillsdale, NJ: L. Erlbaum Associates.

DeVellis, R. F. (2003). Scale development: theory and applications (Vol. 26). Thousand Island: Sage Publications.

Dillman, D. A. (2000). Mail and Internet surveys: the tailored design method . New York: Wiley.

DiMaggio, P., & Hargittai, E. (2001). From 'digital divide' to 'digital inequality': studying internet use as penetration increases. Retrieved 18 September, 2006, from http://www.princeton.edu/~artspol/workpap/WP15%20-%20DiMaggio%2BHargittai.pdf

Disability Services Queensland. (2005). Disability: a Queensland profile, 2005. Retrieved 11 August, 2006, from http://www.disability.qld.gov.au/information/disabilities-queensland-profile-2005.pdf

Doyle, C. (1992). Outcome measures for information literacy within the National Educational Goal of 1990. Final Report to the National Forum on Information Literacy. Summary of Findings. (No. ED 351033): National Forum on Information Literacy.

Duff, A. (2000). Information society studies . London: Routledge.

Duff, A. (2001). On the present state of information studies. Education for information, 19(3) , 231-244.

Dunne, J. (2002). Information seeking and use by battered women: A person-in- progressive-situations approach. Library & information science research, 24 (4), 343-355.

Dupuis, E. A. (1997). The information literacy challenge: addressing the changing needs of our students through our programs. Internet Reference Services Quarterly, 2 (2/3), 93-111.

Eastin, M. S., & LaRose, R. (2000). Internet self efficacy and the psychology of the digital divide. Journal of Computer Mediated Communication, 6 (1). Retrieved, 25 July, 2005 from, http://jcms.indiana.edu/vol16/issue1/eastin.html

Edwards, S. (2006). The net lenses model: focusing on web searching experiences . Retrieved 16 October 2006, 2006, from http://sky.fit.qut.edu.au/~edwardss/NetLenses/index.html

Fink, A. (2006). How to conduct surveys : a step-by-step guide . Thousand Oaks: Sage Publications.


Fink, C., & Kenny, C. J. (2003). W(h)ither the digital divide? Retrieved 11 August, 2006, from http://www.itu.int/wsis/docs/background/themes/digital_divide/fink-kenny.pdf

Fiorina, C. (2000). Speech at National Governors Association. Retrieved 18 September, 2006, from http://www.hp.com/hpinfo/execteam/speeches/fiorina/ceo_gov_assoc.html

Floridi, L. (2001). The new information and communication technologies for the development of education. Paper presented at the UNESCO World Commission on the Ethics of Scientific Knowledge and Technology (COMEST), Paris, France. Retrieved, 9 May, 2005 from http://www.wolfson.ox.ac.uk/~floridi/pdf/eieu.pdf

Floridi, L. (2002). Ethics in the infosphere. Blesok, 24 (Jan/Feb). Retrieved, 11 June, 2006 from, http://www.blesok.com.mk/tekst_print.asp?lang=eng&tekst=374

Ford, N., & Miller, D. (1996). Gender differences in internet perception and use. Paper presented at the 3rd Electronic Library and Visual Information Research Conference, London. 30 April 1996.

Ford, N., Miller, D., & Moss, N. (2001). The role of individual differences in internet searching: an empirical study. Journal of the American Society for Information Science, 52(12), 1049-1066.

Foster, J. J. I. (2001). Self-efficacy, African-American youth, and technology: Why the belief that "I cannot do that" is driving the digital divide. Unpublished PhD, University of Alabama, Tuscaloosa.

Foster, J. J. I., & Starker, T. (2003). African American students and the digital divide: wassup with it? Paper presented at the Proceedings of Society for Information Technology and Teacher Education International Conference 2003, Chesapeake, VA.

Fraenkel, J., & Wallen, N. (1996). How to design and evaluate research (3rd ed.). New York: McGraw Hill.


Frissen, V. A. (2000). ICTs in the rush hour of life. The information society, 16 (1), 65-75.

Fry, E. (1977). Fry's readability graph: clarifications, validity and extensions to level 17. Journal of Reading, 21 , 242-252.

Fulton, C. (2005). Chatman's life in the round. In Karen E. Fisher, Sandra Erdelez & Lynne McKechnie (Eds.), Theories of information behavior. Medford, N.J.: Information Today.

Gale, K. (1998). Information and youth homelessness: an assessment of the information requirements of young people in housing need and the role of information in preventing youth homelessness. University of Sheffield, Sheffield. Retrieved, 11 May 2006, from http://dagda.shef.ac.uk/dissertations/1997-98/gale.pdf

Garner, S. D. (2006). High level colloquium on information literacy and lifelong learning. Bibliotheca Alexandrina, Alexandria, Egypt, November 6-9, 2005. Retrieved, 11 May 2007, from http://www.ifla.org/III/wsis/High-Level-Colloquium.pdf

Garnham, N. (2000). Information society: as theory or ideology. Information Communication and Society, 3 (3) , 139-152.

Gibson, C. (2003). Digital divides in New South Wales: a research note on socio- spatial inequality using 2001 census data on computer and internet technology. Australian Geographer, 34 (2), 239-257.

Giddens, A. (1998). The third way: the renewal of social democracy. Cambridge: Polity.

Gilster, P. (1997). Digital literacy . New York: Wiley Computer Publishing.

Gist, M. E., & Mitchell, T. R. (1992). Self efficacy: a theoretical analysis of its determinants and malleability. The Academy of Management Review, 17 (2), 182-211.


Gorman, G.E., & Clayton, P. (1997). Qualitative research for the information professional. London: Library Association.

Gorsuch, R. L. (1983). Factor analysis . Hillsdale, N.J: L. Erlbaum Associates.

Greyling, W. (2003). From digital divide to digital opportunity. Retrieved 11 August, 2006, from http://www.iconnect-online.org/Stories/Story.import5191

Groves, J. (1999). Web sites for rural Australia: designing for accessibility (No. 00/13). Rural Industries Research and Development Corporation.

Hair, J. F., Babin, B., Money, A. H., & Samouel, P. (2003). Essentials of business research methods . New York: Wiley.

Hair, J. F. J., Black, W. C., Babin, B., Anderson, R. H., & Tatham, R. L. (2006). Multivariate data analysis (6 ed.). Upper Saddle River, NJ: Pearson - Prentice Hall.

Hargittai, E. (2002). Second level digital divide: differences in people's online skills. First Monday, 7 (4). Retrieved 11 July, 2005, from http://www.firstmonday.org/issues/issue7_4/hargittai/index.html

Hargittai, E. (2005). Survey measures of web oriented digital literacy. Social Science Computer Review, 23 (3), 371-379.

Hargittai, E., & Hinnant, A. (2006). Social framework for information seeking. In A. Spink & C. Cole (Eds.), New Directions in Human Information Behavior . New York: Springer.

Harper, V. B. (n.d.). Digital divide (DD): redirecting the efforts of the scholarly community. Retrieved 11 August, 2005, from http://cal.csusb.edu/cmcrp/documents/Digital%20Divide%20position%20paper1(hypertext%20verseion).doc and http://www.editlib.org/index.cfm/files/paper_17796.pdf?fuseaction=Reader.DownloadFullText&paper_id=17796


Harris, R., Stickney, J., Grasley, C., Hutchinson, G., Greaves, L., & Boyd, T. (2001). Searching for help and information abused women speak out. Library and Information Science Research, 23 , 123-144.

Hawkins, R. M. F. (1992). Self efficacy: a predicator but not a cause of behavior. Journal of Behavior Therapy and Experimental Psychiatry, 23 (4), 251-256.

Haynes, S., Richard, D. C., & Kubany, E. S. (1995) Content validity in psychological assessment: a functional approach to concepts and methods. Psychological Assessment, 7 (3), 238-247.

Heck, R. (1998). Factor analysis. In G. A. Marcoulides (Ed.), Modern methods for business research (pp. 177-215). Mahwah, NJ: Lawrence Erlbaum.

Hill, J. R., & Hannafin, M. J. (1997). Cognitive strategies and learning from the World Wide Web. Educational Technology, Research and Development, 45 (4), 37-64.

Hindman, D. B. (2000). The rural-urban digital divide. Journalism and Mass Communication Quarterly, 77(3) , 549-560.

Hnilo, L. A. R. (1997). The hierarchy of self efficacy and the development of an internet self efficacy scale . Available on request from the author.

Hoffman, D., & Novak, T. P. (1998). Bridging the digital divide: the impact of race on computer access and internet use. Science, 280 (5362), 390-391.

Hofstetter, F. T. (1998). Internet literacy . San Francisco: Irwin/McGraw-Hill.

Holloway, D. (2005). The digital divide in Sydney. Information Communication and Society, 8 (2), 168-193.

Horton, F. W. (1983). Information literacy v's computer literacy. Bulletin of the American Society for Information Science, 9 (4), 14-16.

Hoyle, R. (1995). Structural Equation Modeling . Thousand Oaks, CA.: SAGE Publications.


Hughes, H. (2004). Researching the experiences of international students. Paper presented at the Lifelong learning: Whose responsibility and what is your contribution? Refereed papers from the 3rd International Lifelong Learning Conference, Yeppoon, QLD. 13-16 June 2004.

Human Rights and Equal Opportunity Commission. (2000). Accessibility to electronic commerce and new service and information technologies for older Australians and people with a disability . Retrieved, 19 September 2005, from http://www.hreoc.gov.au/disability_rights/inquiries/ecom/ecomrep.htm

Information Literacy Meeting of Experts (ILME). (2003). Prague Declaration: towards an information literate society. Retrieved 10 March, 2006, from http://www.ilfonline.org/conf/AnnualConference/2006%20Handouts/Breivik-Prague.doc

Irving, L. (2001). Origin of the term digital divide . Retrieved 18 September, 2006, from http://www.rtpnet.org/lists/rtpnet-tact/msg00080.html

Ito, Y. (1981). The 'johoka shakai' approach to the study of communication in Japan. Mass Communication Review Yearbook, 2 , 671-698.

Jarboe, K. P. (2001). Inclusion in the Information Age:Reframing the Debate . Retrieved 18 September, 2006, from http://www.athenaalliance.org/apapers/inclusionsummary.html

Jones, B (1991). Australia as an information society: grasping new paradigms. Report of the Standing Committee for Long Term Strategies. Canberra [ACT]: Australian Government Publishing Service.

Jung, J., Qiu, J. L., & Kim, Y. (2001). Internet connectedness and inequality: beyond the digital divide. Communication Research, 28 (4), 507-538.

Jupp, B., & 6, P. (2001). Divided by information? : the "digital divide" and the implications of the new meritocracy. London: Demos.


Kang, H., Bagchi-Sen, S., Rao, H.R., and Banerjee, S. (2005). Internet Skeptics: An Analysis of Intermittent Users and Net-Dropouts, IEEE Technology and Society Magazine, 24 (2) , 26-31.

Kaiser, H. F. (1970). A second generation Little Jiffy. Psychometrika, 35, 401-406.

Kaiser, H. F., & Rice, J. (1974). Little Jiffy mark IV. Educational and Psychological Measurement, 34, 111-117.

Katz, J., & Aspen, P. (1997). Motivation for and barriers to internet usage: results of a national public opinion survey. Internet Research, 7 (3).

Klaus, H. (2000). Understanding scholarly and professional communication: thesauri and database searching. In C. Bruce & P. Candy (Eds.), Information literacy around the world: advances in programs and research (pp. 209-222). Wagga Wagga, NSW: Centre for Information Studies.

Kuhn, T. (1970). The structure of scientific revolutions . Chicago: The University of Chicago Press.

Kumar, K. (1995). From post-industrial to post-modern society: new theories of the contemporary world . Oxford: Blackwell Publishing.

Kurbanoglu, S. S. (2003). Self-efficacy: a concept closely linked to information literacy and lifelong learning. Journal of Documentation, 59 (6), 635-646.

Kuttan, A., & Laurence, P. (2003). From digital divide to digital opportunity . Lanham, MD: Scarecrow Press.

Kvasny, L. (2002). Problematizing the digital divide: cultural and social reproduction in a community technology initiative. Unpublished PhD, Georgia State University, Atlanta.

Kvasny, L., & Keil, M. (2006). The challenges of redressing the digital divide: a tale of two US cities. Information Systems Journal, 16, 23-65.


Lam, J., & Lee, M. (2005). Bridging the digital divide - the roles of internet self efficacy towards learning computer and the internet among elderly in Hong Kong, China. Paper presented at the Proceedings of the 38th Annual Hawaii International Conference on System Sciences (HICSS), Hawaii. 3-6 January 2005.

LaRose, R., Mastro, D., & Eastin, M. S. (2001). Understanding internet usage: a social cognitive approach to understanding usage and gratifications. Social Science Computer Review, 19 (4), 395-413.

LaRose, R., & Eastin, M. S. (2003). A social cognitive explanation of internet uses and gratifications: toward a new theory of media attendance. Paper presented at the National Communication Association Conference, New Orleans, LA.

Lau, J. (2006). Guidelines on information literacy for lifelong learning : International Federation of Library and Information Associations. Retrieved, 11 May 2006, from http://www.ifla.org/VII/s42/pub/IL-Guidelines2006.pdf

Lautenschlager, G. J. (1989). A comparison of alternatives to conducting Monte Carlo analyses for determining parallel analysis criteria. Multivariate Behavioral Research, 24 (3), 365-395.

Leckie, G. J., & Fullerton, A. (1999). Information literacy in science and engineering undergraduate education: faculty attitudes and pedagogical practices. College and Research Libraries, January, 9-29.

Lee, C. (1989). Theoretical weaknesses lead to practical problems: the example of self efficacy theory. Journal of Behavior Therapy and Experimental Psychiatry, 20 (2), 115-123.

Lee, C. (1990). Theoretical weaknesses: fundamental flaws in cognitive-behavioral theories are more than a problem of probability. Journal of Behavior Therapy and Experimental Psychiatry, 21 , 143-145.

Lehtonen, J. (1988). The information society and the new competence. The American Behavioral Scientist, 32 (3), 104-111.


Lenhart, A., & Horrigan, J. B. (2003). Revisualizing the digital divide as digital spectrum. IT&Society, 1 (5), 23-39. Retrieved , 11 May, 2005, from http://www.stanford.edu/group/siqss/itandsociety/v01i05/v01i05a02.pdf

Lenhart, A., Horrigan, J. B., Rainie, L., Boyce, A., Madden, M., & O'Grady, E. (2003). The ever shifting internet population: a new look at internet access and the digital divide. Retrieved, 11 May, 2005 from, http://www.pewinternet.org/pdfs/PIP_Shifting_Net_Pop_Report.pdf

Lenox, M. F., & Walker, M. L. (1993). Information literacy in the educational process. The Educational Forum, 57 (2), 312-324.

Limberg, L. (1999). Experiencing information seeking and learning. Information Research, 5 (1). Retrieved, 11 May, 2006, from http://informationr.net/ir/5-1/paper68.html

Litwin, M. S. (1995). How to measure survey reliability and validity . London: Sage Publications.

Lloyd, A. (2005). No man (or woman) is an island: information literacy, affordances and communities of practice. Australian Library Journal, 54 (3). Retrieved 14 September, 2006, from, http://www.alia.org.au/publishing/alj/54.3/full.text/lloyd.html

Lloyd, R., & Hellwig, O. (2000). Barriers to the take up of new technology (No. 53). Canberra: National Centre for Social and Economic Modelling (NATSEM), University of Canberra.

Lyon, D. (1995). Roots of the information society idea. In N. Heap, R. Thomas, G. Einon, R. Mason & H. Mackay (Eds.), Information technology and society : a reader (pp. 54-73). London: Sage publications in association with Open University Press.

Machlup, F. (1962). Production and distribution of knowledge in the United States . Princeton, N.J: Princeton University Press.

Mackay, H. (2001). Investigating the information society . London: Routledge.


Maddux, J. E. (1995). Self efficacy, adaptation and adjustment: theory, research and application. New York: Plenum Press.

Maitland, C. (1996). Measurement of computer/internet self efficacy: a preliminary analysis of computer self efficacy and internet self efficacy measurement instruments . Available on request from author.

Malhotra, N. K. (2004). Marketing research: an applied orientation (Vol. 4th). Upper Saddle River, NJ: Pearson Prentice Hall.

Marchionini, G. (1999). Educating responsible citizens in the information society. Educational Technology, Research and Development, 32 (2), 17-26.

Marcoulides, G. A. (1998). Modern methods for business research . Mahwah, N.J.: Lawrence Erlbaum.

Martin, W. J. (1988). The information society: idea or entity? Aslib Proceedings , 40(11/12) , 303-309.

Marx, K. & Engels, F. (1998). The German ideology. London: Elecbook.

McClure, C. (1994). Network literacy: a role for libraries. Information technology and libraries, June , 115-125.

McCready, W. (2006). Applying sampling procedures. In F. T. Leong & J. T. Austin (Eds.), The psychology research handbook : a guide for graduate students and research assistants (pp. 147-160). Thousand Oaks, Calif: Sage Publications.

McGrath, R. E. (1997). Understanding statistics : a research perspective . New York: Longman.

McKissack, F. (1998). Cyberghettos: blacks are falling through the net. The Progressive, 62 (2), 20-22.


McLaren, J., & Zappala, G. (2002). The 'digital divide' among financially disadvantaged families in Australia. First Monday, 7 (11). Retrieved, 11 June, 2006, from, http://www.firstmonday.org/issues/issue7_11/mclaren/index.html

Mehan, H. (1997). The discourse of the illegal immigration debate: a case study in the politics of representation. Discourse and Society, 8 (2), 249-270.

Menou, M. J., & Taylor, R. D. (2006). A "grand challenge": measuring information societies. Information Society, 22 (5), 261-267.

Mills, B. F., & Whitacre, B. E. (2003). Understanding the non-metropolitan- metropolitan digital divide. Growth and Change, 34(2) , 219-221.

Mingers, J. (2001). Combining IS research methods: towards a pluralist methodology. Information Systems Research, 12(3) , 240-259.

Mitchell, M. M. (2002). Exploring the future of the digital divide through ethnographic futures research. First Monday, 7 (11). Retrieved, 11 June, 2005, from, http://www.firstmonday.org/issues/issue7_11/mitchell/index.html

Morrison, H. (1997). Information literacy skills: an exploratory focus group study of student perceptions. Research Strategies, 15 (1), 4-17.

Mossberger, K., Tolbert, C. J., & Stansbury, M. (2003). Virtual inequality: beyond the digital divide . Washington, D.C: Georgetown University Press.

Mumford, K. (2000). True nature of 'digital divide' divides experts. Retrieved 10 March, 2006, from http://shulman.ucsur.pitt.edu/doc/Supplemental/DigitalDivide/FreedomForum.pdf

Murdock, D. (2000). Digital divide? What digital divide? Retrieved 10 March, 2006, from http://www.cato.org/dailys/06-16-00.html

Murphy, C. A., Coover, D., & Owen, S. V. (1989). Development and validation of the computer self efficacy scale. Educational and Psychological Measurement, 49 (4), 893-899.


Mutch, A. (1997). Information literacy: an exploration. International Journal of Information Management, 17 (5), 377-386.

Nahl, D. (1993). CD ROM point-of-use instructions for novice searchers: a comparison of user centered affectively elaborated and system centered unelaborated text. Unpublished PhD, University of Hawaii, Honolulu.

Nahl, D. (1996). Affective monitoring of internet learners: perceived self efficacy and success. Paper presented at the Global complexity: information, chaos and control. Proceedings of the 59th ASIS Annual Meeting, Baltimore, Maryland. 21-24 October 1996.

Nahl, D., & Tenopir, C. (1996). Affective and cognitive searching behavior of novice end-users of a full-text database. Journal of the American Society for Information Science, 47 (4), 276-286.

National Office for the Information Economy (NOIE). (2000a). Current state of play: Australia and the information economy. Retrieved, 14 May, 2005, from http://pandora.nla.gov.au/parchive/2000/S2000-Aug-15/www.noie.gov.au/projects/information_economy/ecommerce_analysis/ie_stats/State_of_Play.pdf

National Office for the Information Economy (NOIE). (2000b). Current State of Play - July 2000 . Retrieved, 14 May, 2005 from http://www.egov.vic.gov.au/pdfs/csp_june.pdf

National Office for the Information Economy (NOIE). (2000c). Current State of Play - November 2000. Retrieved, 14 May, 2005, from http://pandora.nla.gov.au/parchive/2000/S2000-Nov-23/www.noie.gov.au/projects/information_economy/ecommerce_analysis/ie_stats/StateofPlayNov2000/csp_november2000.pdf

National Office for the Information Economy (NOIE). (2001). Current State of Play - June 2001. Retrieved, 14 May, 2005, from http://pandora.nla.gov.au/pan/13411/20020317/www.noie.gov.au/projects/information_economy/research&analysis/ie_stats/CSOP_June2001/CSOP_June01.pdf


National Office for the Information Economy (NOIE). (2002). The Current State of Play. Retrieved, 14 May, 2005, from http://www.noie.go.au/projects/framework/Progress/ie_stats/CSOP_April2002/SCOP_April2002.pdf

National Office for the Information Economy (NOIE). (2003). Current State of Play: online participation and activities. Retrieved, 14 May, 2005, from http://www.dcita.gov.au/__data/assets/pdf_file/21413/CSOP_December_2003.pdf

National Office for the Information Economy (NOIE). (2004). Current State of Play . Retrieved, 14 May, 2005, from, http://www.dcita.gov.au/__data/assets/pdf_file/23426/CSP_2004.pdf

National Office for the Information Economy (NOIE). (2005). Current State of Play. Retrieved, 20 July, 2006, from http://www.dcita.gov.au/__data/assets/pdf_file/33120/Current_State_of_Play_-_November_2005.pdf

National Telecommunications and Information Administration (NTIA). (1995). Falling through the net. A Survey of the "Have Nots" in Rural and Urban America . Retrieved 18 May, 2005, from, http://www.ntia.doc.gov/ntiahome/fallingthru.html

National Telecommunications Information Administration (NTIA). (1998). Falling Through the Net II: New Data on the Digital Divide . Retrieved, 18 May, 2005, from, http://www.ntia.doc.gov/ntiahome/net2/falling.html

National Telecommunications Information Administration (NTIA). (1999). Falling through the net: defining the digital divide . Retrieved, 18 May, 2005, from, http://www.ntia.doc.gov/NTIAHOME/FTTN99/contents.html

National Telecommunications and Information Administration (NTIA). (2000). Falling through the net: towards digital inclusion . Retrieved, 18 May, 2005, from http://search.ntia.doc.gov/pdf/fttn00.pdf


National Telecommunications Information Administration (NTIA). (2002). Falling through the net: towards digital inclusion . Retrieved, 18 May, 2005, from http://www.ntia.doc.gov/ntiahome/dn/index.html

National Telecommunications and Information Administration (NTIA). (2004). A nation online: entering the broadband age. Retrieved, 18 May, 2005, from http://www.ntia.doc.gov/reports/anol/NationOnlineBroadband04.pdf

Neice, D. C. (1998). Measures of participation in the digital techno-structure: Internet access (No. 44). Brighton: SPRU.

Netemeyer, R. G., Bearden, W. O., & Sharma, S. (2003). Scaling procedures: issues and applications . Thousand Oaks: Sage Publications.

Neuman, W. L. (2003). Social research methods: qualitative and quantitative approaches . Boston: Allyn and Bacon.

Nie, N., & Erbring, L. (2000). Internet and society: a preliminary report. Stanford, California: Stanford Institute for the Quantitative Study of Society, Stanford University.

Norris, P. (2001). Digital divide: civic engagement, information poverty, and the internet worldwide. New York: Cambridge University Press.

Neu, C. R., Anderson, R. H., & Bikson, T. K. (1999). Sending your government a message: e-mail communication between citizens and government. Santa Monica, California: RAND.

Nunnally, J. C. (1967). Psychometric theory . New York: McGraw Hill.

Organisation for Economic Cooperation and Development (OECD). (2001). Understanding the digital divide. Retrieved, 17 March, 2006, from http://www.oecd.org/dataoecd/38/57/1888451.pdf

Oxbrow, N. (1998). Information literacy - the final key to an information society. Electronic Library, 16(6) , 359-360.


Pajares, F. (1997). Current directions in self efficacy research. In M. Maehr & P. R. Pintrich (Eds.), Advances in motivation and achievement (Vol. 10, pp. 1-49). Greenwich, CT: JAI Press.

Pajares, F. (n.d.). Overview of self efficacy . Retrieved 11 August, 2006, from http://www.emory.edu/EDUCATION/mfp/eff.html

Pajares, F., Hartley, J., & Valiante, G. (2001). Response format in writing self efficacy assessment: greater discrimination increases prediction. Measurement & Evaluation in Counselling & Development, 33 (4). 214-221.

Pallant, J. (2005). SPSS survival manual: a step by step guide to data analysis using SPSS (2 ed.). Crows Nest, NSW: Allen & Unwin.

Pettigrew, K. E. (1998). The role of community health nurses in providing information and referral to the elderly: a study based on social network theory . Unpublished PhD, University of Western Ontario, Canada.

Pew Internet and American Life Project (PEW). (2005). About us. Retrieved, 17 March, 2006, from http://www.pewinternet.org/about_mission.asp

Piper, P. S. (2000). Better read that again: web hoaxes and misinformation. Searcher, 8 (8), Retrieved 11 April, 2006, from http://www.infotoday.com/searcher/sep00/piper.htm

Podsakoff, P. M., & Organ, D. W. (1986). Self-reports in organisational research: problems and prospects. Journal of Management, 19(4) , 531-544.

Polit, D. F., & Beck, C. T. (2004). Nursing research: principles and methods . Philadelphia: Lippincott Williams & Wilkins.

Porat, M. (1977). The information economy: definition and measurements . Washington, DC: US Department of Commerce.

Putnam, R. D. (2000). Bowling alone: the collapse and revival of the American community . New York: Simon & Schuster.


Pyati, A. K. (2005). WSIS: Whose vision of an information society? First Monday, 10 (5). Retrieved 11 June, 2006, from, http://www.firstmonday.org/issues/issue10_5/pyati/index.html

Ren, W. (1999). Self efficacy and the search for government information. Reference and User Service Quarterly, 38 (3), 283-291.

Ren, W. (2000). Library instruction and college student self efficacy in electronic information searching. Journal of Academic Librarianship, 26 (5), 323-328.

Rice, R. E., & Katz, J. E. (2003). Comparing internet and mobile phone usage: digital divides of usage, adoption and dropouts. Telecommunications Policy, 27 , 597-623.

Richman, D. (2000, 19 October). Gates rejects idea of e-utopia. Seattle Post-Intelligencer Reporter. Retrieved 15 May 2006, from http://seattlepi.nwsource.com/business/gate19.shtml

Ringgold, R. L. (2001). The evaluation of computer attitudes and the validation of computer self efficacy scales of minority students at an urban community college in Baltimore City, Maryland. Unpublished PhD, Morgan State University, Baltimore, Maryland.

Robey, D. (1996). Diversity in information systems research: threat, promise and responsibility. Information Systems Research, 7(4), 400-408.

Robson, C. (1993). Real world research : a resource for social scientists and practitioner-researchers . Oxford: Blackwell.

Roszak, T. (1986). The cult of information: the folklore of computers and the true art of thinking . NY: Pantheon.

Samaras, K. (2005). The digital divide and indigenous Australians. Libri, 55 (2/3), 84-95.


Saucier, G., & Goldberg, L. R. (1996). The language of personality: lexical perspectives on the five-factor model. In J. S. Wiggins (Ed.), The five factor model of personality: theoretical perspectives (pp. 21-50). NY: Guilford.

Savolainen, R. (1995). Everyday life information seeking: approaching information seeking in the context of "way of life". Library and Information Science Research, 17(3) , 259-294.

Savolainen, R. (1999). Seeking and using information from the internet: the context of non-work use. In T. D. Wilson & K. Allen (Eds.), Exploring the contexts of information behaviour (pp. 356-370). London: Taylor Graham.

Savolainen, R. (2004). Enthusiastic, realistic and critical: discourses of internet use in the context of everyday life information seeking. Information Research, 10 (1). Retrieved, 10 June, 2006, from http://informationr.net/10-1/paper198.html

Savolainen, R., & Kari, J. (2004). Conceptions of the internet in everyday life information seeking. Journal of Information Science, 30 (3), 219-226.

Sayers, R. (2006). Principles of awareness-raising for information literacy, a case study : UNESCO. Retrieved, 10 July, 2007, from http://unesdoc.unesco.org/images/0014/001476/147637e.pdf

Schement, J. R., & Curtis, T. (1995). Tendencies and tensions of the information age: the production and distribution of information in the United States . New Brunswick, N.J: Transaction Publishers.

Schiller, H. I. (1981). Who knows: information in the age of the fortune 500. Norwood, NJ: Ablex.

Schwarzer, R., & Jerusalem, M. (1995). Generalized self efficacy scale. In J. Weinman, S. Wright & M. Johnston (Eds.), Measures in health psychology: a users portfolio. Causal and control beliefs. (pp. 35-37). Windsor, UK: NFER- Nelson.


SCONUL Advisory Committee on Information Literacy. (1999). Information literacy skills in higher education: a briefing paper. Retrieved 8 August, 2005, from http://www.sconul.ac.uk/groups/information_literacy/publications/papers/Seven_pillars2.pdf

Selwyn, N. (2004). Reconsidering political and popular understandings of the digital divide. New Media and Society, 6 (3), 341-362.

Shapiro, A. (1998). Is the Net democratic? Yes -- and no. Retrieved 10 March, 2006, from http://cyber.law.harvard.edu/works/shapiro/net_democ.html

Sherer, M., Madden, J., Mercandante, B., Prentice-Dunn, S., Jacobs, B., & Rogers, R. W. (1982). The self efficacy scale: construction and validation. Psychological Report, 52 , 663-671.

Silverstein, C., Henzinger, M., Marais, H., & Moricz, M. (1999). Analysis of a very large AltaVista query log. ACM SIGIR Forum, 33(1), 6-12.

Slone, D. J. (2002). The influence of mental models and goal on search patterns during web interaction. Journal of the American Society for Information Science, 53 (13), 1152-1169.

Smith, M. K. (2000). 'The theory and rhetoric of the learning society'. In The encyclopaedia of informal education . Retrieved, 19 September, 2006, from http://www.infed.org/lifelonglearning/b-lrnsoc.htm

Spector, P. E. (1992). Summated rating scale construction: an introduction. Newbury Park [Calif.]: Sage Publications.

Spink, A., & Cole, C. (2001a). Information and poverty: information seeking channels used by African American low income households. Library & information science research, 23 , 45-65.

Spink, A., & Cole, C. (2001b). Introduction to the special issue: everyday life information-seeking research. Library and Information Science Research, 23 , 301-304.


Spink, A., & Jansen, B. J. (2006). How are we searching the world wide web? A comparison of nine search engine transaction logs. Information Processing and Management, 42 , 248-263.

Srinivasan, K. (2001). New communications chief raises eyebrow at digital divide. Retrieved 11 August, 2006, from http://www.bgnews.com/home/index.cfm?event=displayArticlePrinterFriendly&uStory_id=6dea8a54-2d94-429b-98de-753972f65d48

Steinfeld, C., & Salvaggio, J. (1989). Toward a definition of the information society. In J. Salvaggio (Ed.), The information society: economic, social and structural issues . Hillsdale, N.J: Lawrence Erlbaum Associates.

Stevens, J. (1996). Applied multivariate statistics for the social sciences . Mahwah, N.J.: Lawrence Erlbaum Associates.

Stichler, R., & Hauptman, R. (Eds.). (1998). Ethics, information, and technology: readings. McFarland.

Stonier, T. (1983). The wealth of information: a profile of the post-industrial economy . London: Thames Methuen.

Strover, S. (2003). Remapping the digital divide. Information Society, 19 (4), 275-277.

Tabachnick, B. G., & Fidell, L. S. (2001). Using multivariate statistics (4 ed.). Boston: Allyn and Bacon.

Taylor, R. S. (1986). Value-added processes in information systems. Norwood, NJ: Ablex Publishing.

Thompson, L. F., Meriac, J. P., & Cope, J. G. (2002). Motivating online performance. Social Science Computer Review, 20 (2), 149-160.

Thurstone, L. L. (1947). Multiple factor analysis. Chicago: University of Chicago Press.


Todd, R. (2000). Information literacy: concept, challenge and conundrum. Proceedings of the fourth national information literacy conference conducted by the University of South Australia Library and the Australian Library and Information Association Information Literacy Special Interest Group, Adelaide, South Australia. 3-5 December 1999.

Torkzadeh, G., & Van Dyke, T. P. (2001). Development and validation of an internet self efficacy scale. Behaviour and Information Technology, 20 (4), 275-280.

Town, J. S. (2003). Information literacy and the information society. In S. Hornby & Z. Clarke (Eds.), Challenge and change in the information society (pp. 83- 103). London: Facet.

Tsai, M., & Tsai, C. (2003). Information searching strategies in web-based science learning: the role of internet self efficacy. Innovations in Education and Teaching International, 40 (1), 43-50.

Tuckett, H. W. (1989). Computer literacy, information literacy and the role of the instruction librarian. In G. E. Mensching & T. B. Mensching (Eds.), Coping with information illiteracy: bibliographic instruction for the information age. Ann Arbor, Michigan: Pierian Press.

Tyler, K. (1998). Literacy in the digital world . New Jersey: Lawrence Erlbaum Associates.

UNESCO. (2005). Capacity building. Retrieved 10 July, 2006, from http://portal.unesco.org/ci/en/ev.php-URL_ID=19487&URL_DO=DO_TOPIC&URL_SECTION=201.html

US Census Bureau. (2004) 2004 American Community Survey . Retrieved, 11 May 2006, from http://www.census.gov/acs/www/

US National Research Council. (1999). Being fluent with information technology . Retrieved, 8 May, 2005, http://newton.nap.edu/html/beingfluent/index.html

Van Dijk, J. (2005). The deepening divide: inequality in the information society . London: Sage Publications.


Van Dijk, J., & Hacker, K. (2000). The digital divide as a complex and dynamic phenomenon. Paper presented at the 50th Annual Conference of the International Communication Association, Acapulco, 1-5 June 2000. Retrieved, 15 September, 2005, from http://web.nmsu.edu/~comstudy/tis.pdf

Waldman, M. (2003). Freshman's use of library electronic resources and self efficacy. Information Research, 8 (2). Retrieved, 15 June 2006, from http://informationr.net/ir/8-2/paper150.html

Wallace, P. (1999). The psychology of the internet . Cambridge, U.K.: Cambridge University Press.

Warschauer, M. (2002). Reconceptualizing the digital divide. First Monday, 7 (7).

Watkins, M. W. (2000). Monte Carlo PCA for parallel analysis. State College, P.A.: Ed & Psych Associates.

Webber, S., & Johnson, B. (2003). Information literacy in higher education: a review and case study. Studies in Higher Education, 28 (3), 335-352.

Webber, S., & Johnson, B. (2006). UK academics' conceptions of, and pedagogy for, information literacy . Retrieved 18 September, 2006, from http://dis.shef.ac.uk/literacy/project/index.html

Webster, F. (2003). Theories of the information society . London: Routledge.

Webster, F. (2004) The Information Society. In Frank Webster (Ed.), The information society reader. (pp. 9-12). London: Routledge.

Weiss, J., & Wysocki, R. K. (1992). 5-phase project management: a practical planning & implementation guide . Reading: Addison-Wesley.

Wicks, D. A. (2004). Older adults and their information seeking. Behavioral and Social Sciences Librarians, 22 (2), 1-26.


Williamson, K. (1998). Discovered by chance: the role of incidental information acquisition in an ecological model of information use. Library and Information Science Research, 20 (1), 23-40.

Wilson, T. D. (1997). Information behavior: an interdisciplinary perspective. Information Processing and Management, 33 (4), 551-572.

Woodfield, R., (2002). Women and information systems development: not just a pretty (inter)face?, Information Technology and People, 15(2) , 119-138.

Wollman, N., & Stouder, R. (1991). Believed efficacy and political activity: a test of the specificity hypothesis. Journal of Social Psychology, 131 , 557-566.

World Summit on the Information Society (WSIS). (2003a). Declaration of principles. Building the information society: a global challenge in the new millennium. Retrieved August 11, 2006, from http://www.itu.int/wsis/docs/geneva/official/dop.html

World Summit on the Information Society (WSIS) (2003b). Plan of action . Retrieved 10 March, 2006, from http://www.itu.int/wsis/docs/geneva/official/poa.html

World Summit on the Information Society (WSIS). (2006). Basic information: about WSIS. Retrieved 10 March, 2006, from http://www.itu.int/wsis/basic/about.html

World Summit on the Information Society. (n.d.). What is the digital divide? Retrieved 11 August, 2006, from http://www.itu.int/wsis/basic/faqs_answer.asp?lang=en&faq_id=43

Woszczynski, A. B., & Whitman, M. E. (2003). The problem of common method variance. In Michael E. Whitman & Amy Woszczynski (Eds.), The handbook of information systems research. Hershey, PA: Idea Group.

Young, J. R. (2001). Does 'Digital Divide' Rhetoric Do More Harm Than Good? The Chronicle of Higher Education . 9 November 2001. Retrieved, 18 July, 2006, from, http://chronicle.com/free/v48/i11/11a05101.htm


Young, K. (2002). Women's Information Needs Study, final report : NSW Department for Women, Sydney. Retrieved, 22 September, 2006, from, www.women.nsw.gov.au/pdf/win_report.pdf

Zurkowski, P. G. (1974). The information service environment relationships and priorities (No. 6). Washington, D. C.: National Commission on Libraries and Information Science.

Zwick, W. R., & Velicer, W. F. (1986). Factors influencing five rules for determining the number of components to retain. Psychological Bulletin, 99 , 432-442.


Appendix 1: Self-administered questionnaire 1

– The US version
– The Australian version

1 Please note the format of the questionnaires has been modified slightly (i.e. smaller font size) to allow for inclusion into this bound version of the document.


Understanding Internet Use within Community

Helen Partridge
Lecturer (Assistant Professor)
Doctor of Philosophy Candidate
Faculty of Information Technology
Queensland University of Technology
PO Box 2434, Brisbane QLD

This research is being conducted as part of a Doctor of Philosophy program at the Queensland University of Technology (QUT), Australia. The research seeks to develop an understanding of how different people within the community use the Internet.

Your responses will help in developing a better understanding of Internet use by members of your community.

Information on the Internet use of community members can assist scholars and those organizations within community (such as public libraries) to develop programs and services which will ensure that the Internet and other digital resources are accessible to all members of the community.

Please note this is not an evaluation of the San Jose City Library. It is asking about your personal thoughts and experiences with the Internet in general.

Your participation in this survey is voluntary . If you agree to participate, you may be assured that your involvement in the research will not impact upon your relationship with QUT or the San Jose Library Service in any way.

Your responses to this survey will remain confidential and anonymous . Please answer truthfully . Any answer is acceptable. The survey will take approximately 10 minutes to complete .

If you are unsure of the meaning of any word or phrase please ask the Researcher.

If you have any concerns about the ethical conduct of this research project, please contact the QUT Human Research Ethics Committee (tel. +61 7 3864 2902 or email [email protected]).

Thank you for participating in this survey.

Section 1: To begin: some FACTS about YOU

Please place a tick in the appropriate box. For example 

1. Are you?  Male  Female

2. How old are you?  17-20  21-30  31-40  41-50  51-60  61-70  71-80  81-90  Over 90

3. What is your highest completed level of education?

 Primary school  High school  TAFE/Technical college  Bachelor degree  Masters  PhD  Other, please specify

4. Are you currently enrolled in formal education?  Yes, go to Question 5  No, go to Question 6

5. What level of study are you enrolled in?

 Primary School  High School  TAFE/Technical College  Bachelor degree  Masters  PhD  Other, please specify

6. What is your employment status?

 Unemployed  Full Time Employed  Part-time Employed  Casually Employed  Contract Employed  Job Share  Retired  Other, please specify

7. Over the last three years, what has been your average annual income?

 No Income  $1 - $10,000  $10,001 - $20,000  $20,001 - $30,000  $30,001 - $40,000  $40,001 - $50,000  $50,001 - $60,000  $60,001 - $70,000  $70,001 - $80,000  $80,001 - $90,000  $90,001 - $100,000  Over $100,000

8. Do you identify yourself as:  African American  Asian American or Pacific Islander  Hispanic or Latino  Native American  Caucasian/White  Other, please specify

Section 2: YOUR use of the Internet

1. On a typical weekday how many hours would you use the Internet? ______ Hour(s)

2. On a typical weekend how many hours would you use the Internet? ______ Hour(s)

3. When was the last time you used the Internet?

 Never  Less than 24 hours  Between 24 hours and 1 week  Between 1 week and 1 month  Between 1 and 3 months  Between 3 and 6 months  Between 6 and 12 months

 Over 12 months

4. In general, how successful do you rate your use of the Internet?

1 = Extremely Unsuccessful, 2 = Quite Unsuccessful, 3 = Slightly Unsuccessful, 4 = Neutral/Not Sure/Not Applicable, 5 = Slightly Successful, 6 = Quite Successful, 7 = Extremely Successful

Section 3: YOUR experience of the Internet

Listed below are a number of tasks that can be performed when using the Internet. Please use the scale provided to rate how confident you are that you can do each of the tasks at this moment in time. You may use any number between 0 and 10. Even if you have never used the Internet, how confident are you that you could do the tasks listed?

0 = I am not at all confident, 5 = I am moderately confident, 10 = I am totally confident

Open a browser such as Internet Explorer or Netscape to access the Web 0 1 2 3 4 5 6 7 8 9 10

Go to a specific web site by typing in the web address (URL) 0 1 2 3 4 5 6 7 8 9 10
Use a search tool (i.e. Google or Yahoo) to find what you want (i.e. graphics, computer files, documents, information, web pages, sound files etc) 0 1 2 3 4 5 6 7 8 9 10
Use hypertext in a web page to find out about a subject that interests you 0 1 2 3 4 5 6 7 8 9 10
Determine if a web site or the information you have found on a web site is reliable and valid 0 1 2 3 4 5 6 7 8 9 10
Determine if a web site is a secure site 0 1 2 3 4 5 6 7 8 9 10
Print a web page 0 1 2 3 4 5 6 7 8 9 10
Save a web site as a Bookmark or Favorite 0 1 2 3 4 5 6 7 8 9 10
Delete a web site you have saved as a Bookmark or Favorite 0 1 2 3 4 5 6 7 8 9 10
Organise your Bookmarks or Favorites into folders 0 1 2 3 4 5 6 7 8 9 10
Download (save) a file from a Web site 0 1 2 3 4 5 6 7 8 9 10
Check for a virus in a file that you are downloading (saving) from a Web site 0 1 2 3 4 5 6 7 8 9 10
View a multimedia (audio or visual) file 0 1 2 3 4 5 6 7 8 9 10
Download (save) and install new software to your computer 0 1 2 3 4 5 6 7 8 9 10
Understand internet words/terms such as URL or FTP or browser 0 1 2 3 4 5 6 7 8 9 10
Solve most problems or mistakes you experience when using the Internet 0 1 2 3 4 5 6 7 8 9 10
Activate or deactivate cookies linked to a web page 0 1 2 3 4 5 6 7 8 9 10
Log on and off to an email system (i.e. Hotmail, Yahoo Mail) 0 1 2 3 4 5 6 7 8 9 10
Delete an email message from your inbox 0 1 2 3 4 5 6 7 8 9 10
Reply to an email message 0 1 2 3 4 5 6 7 8 9 10
Forward an email message 0 1 2 3 4 5 6 7 8 9 10
Send an email message to more than one person at the same time 0 1 2 3 4 5 6 7 8 9 10
Attach a file to an email message 0 1 2 3 4 5 6 7 8 9 10
View a file attached to an email message 0 1 2 3 4 5 6 7 8 9 10
Save a file attached to an email message 0 1 2 3 4 5 6 7 8 9 10
Scan an email message attachment for a virus 0 1 2 3 4 5 6 7 8 9 10
Add an email address to your address book 0 1 2 3 4 5 6 7 8 9 10
Delete an email address from your address book 0 1 2 3 4 5 6 7 8 9 10
Locate a discussion group or e-list you would like to join 0 1 2 3 4 5 6 7 8 9 10
Subscribe to a discussion group or e-list 0 1 2 3 4 5 6 7 8 9 10
Send a message to a discussion group or e-list (i.e. create a new thread) 0 1 2 3 4 5 6 7 8 9 10

Reply to a message on a discussion group or e-list 0 1 2 3 4 5 6 7 8 9 10

Unsubscribe from a discussion group or e-list 0 1 2 3 4 5 6 7 8 9 10

Locate a chat room you would like to join 0 1 2 3 4 5 6 7 8 9 10

Enter a chat room 0 1 2 3 4 5 6 7 8 9 10

Take part in a conversation in a chat room 0 1 2 3 4 5 6 7 8 9 10

Leave a chat room 0 1 2 3 4 5 6 7 8 9 10

Create a web page 0 1 2 3 4 5 6 7 8 9 10

Load a web page to the Web 0 1 2 3 4 5 6 7 8 9 10

Share files with others via the Web 0 1 2 3 4 5 6 7 8 9 10
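For analysis, responses to the confidence items above would typically be combined into a single internet self-efficacy score. A minimal sketch in Python, assuming a simple unweighted mean of the 0-10 item ratings (the thesis does not prescribe this exact computation at this point):

```python
from statistics import mean

def internet_self_efficacy(ratings):
    """Combine 0-10 confidence ratings into one self-efficacy score.

    `ratings` holds one response per task item; unanswered items are
    passed as None and excluded. Assumption: an unweighted mean.
    """
    answered = [r for r in ratings if r is not None]
    if not answered:
        return None  # respondent skipped the whole scale
    if any(not 0 <= r <= 10 for r in answered):
        raise ValueError("ratings must be on the 0-10 scale")
    return mean(answered)

# Example: confident with browsing and email, unsure about chat rooms
print(internet_self_efficacy([9, 8, 10, None, 2, 0, 7]))  # 6.0
```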

What are the OUTCOMES of using the internet? Using the scale provided please indicate how likely you believe each outcome would be. Even if you have never used the Internet, how confident would you be that the Internet could help you in these areas? You may use any number between 1 and 7.

1 = Extremely Unlikely, 2 = Quite Unlikely, 3 = Slightly Unlikely, 4 = Neither, 5 = Slightly Likely, 6 = Quite Likely, 7 = Extremely Likely

Obtain information that I can’t find elsewhere 1 2 3 4 5 6 7

Get immediate knowledge of big events 1 2 3 4 5 6 7

Find a wealth of information 1 2 3 4 5 6 7

Solve a problem 1 2 3 4 5 6 7

Hear music I like 1 2 3 4 5 6 7

Feel entertained 1 2 3 4 5 6 7

Have fun 1 2 3 4 5 6 7

Play a game I like 1 2 3 4 5 6 7

Feel like I belong to a group 1 2 3 4 5 6 7

Find something to talk about 1 2 3 4 5 6 7

Get support from others 1 2 3 4 5 6 7

Maintain a relationship I value 1 2 3 4 5 6 7

Forget my problems 1 2 3 4 5 6 7

Find a way to pass the time 1 2 3 4 5 6 7

Relieve boredom 1 2 3 4 5 6 7

Improve my future prospects in life 1 2 3 4 5 6 7

Find people like me 1 2 3 4 5 6 7

Find others who respect my views 1 2 3 4 5 6 7

Get up to date with new technology 1 2 3 4 5 6 7

Save time shopping 1 2 3 4 5 6 7

Find bargains on products and services 1 2 3 4 5 6 7

Get free information that would otherwise cost me money 1 2 3 4 5 6 7

Get products for free 1 2 3 4 5 6 7

Let’s now consider HOW IMPORTANT the outcomes of using the internet are to YOU. Using the scale provided rate each item in terms of its level of importance to you. You may use any number between 1 and 7.

1 = Extremely Unimportant, 2 = Quite Unimportant, 3 = Slightly Unimportant, 4 = Neither, 5 = Slightly Important, 6 = Quite Important, 7 = Extremely Important

Obtain information that I can’t find elsewhere 1 2 3 4 5 6 7
Get immediate knowledge of big events 1 2 3 4 5 6 7
Find a wealth of information 1 2 3 4 5 6 7
Solve a problem 1 2 3 4 5 6 7
Hear music I like 1 2 3 4 5 6 7
Feel entertained 1 2 3 4 5 6 7
Have fun 1 2 3 4 5 6 7
Play a game I like 1 2 3 4 5 6 7
Feel like I belong to a group 1 2 3 4 5 6 7
Find something to talk about 1 2 3 4 5 6 7
Get support from others 1 2 3 4 5 6 7
Maintain a relationship I value 1 2 3 4 5 6 7
Forget my problems 1 2 3 4 5 6 7
Find a way to pass the time 1 2 3 4 5 6 7
Relieve boredom 1 2 3 4 5 6 7
Improve my future prospects in life 1 2 3 4 5 6 7
Find people like me 1 2 3 4 5 6 7
Find others who respect my views 1 2 3 4 5 6 7
Get up to date with new technology 1 2 3 4 5 6 7
Save time shopping 1 2 3 4 5 6 7
Find bargains on products and services 1 2 3 4 5 6 7
Get free information that would otherwise cost me money 1 2 3 4 5 6 7

Get products for free 1 2 3 4 5 6 7
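The questionnaire thus pairs each outcome’s perceived likelihood (previous section) with its personal importance (this section). In social cognitive analyses these two ratings are often combined multiplicatively into a weighted outcome expectancy. A minimal sketch, assuming the classic expectancy-value product averaged over outcomes (one plausible scoring; the thesis does not specify its computation here):

```python
def weighted_outcome_expectancy(likelihood, importance):
    """Combine paired 1-7 ratings into one outcome-expectancy score.

    Assumption: likelihood x importance per outcome, averaged over
    all outcomes, so scores range from 1 to 49.
    """
    if len(likelihood) != len(importance):
        raise ValueError("each outcome needs both ratings")
    products = [l * i for l, i in zip(likelihood, importance)]
    return sum(products) / len(products)

# Two outcomes: likely and important (7, 6); unlikely and trivial (2, 1)
print(weighted_outcome_expectancy([7, 2], [6, 1]))  # (42 + 2) / 2 = 22.0
```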

Please continue to the next page

Let’s now consider YOUR ATTITUDE towards the Internet. A number of statements about using the Internet are listed below. Using the scale provided please indicate to what extent each statement describes you. You may use any number between 1 and 7.

If you have NO or LITTLE EXPERIENCE in USING the Internet please complete PART A. If you have EXPERIENCE in USING the Internet please complete PART B. Please note you are ASKED TO COMPLETE ONLY ONE PART – EITHER PART A OR B.

1 = Strongly Disagree, 2 = Moderately Disagree, 3 = Mildly Disagree, 4 = Neither, 5 = Mildly Agree, 6 = Moderately Agree, 7 = Strongly Agree

PART A – Complete this part if you HAVE NO or LITTLE EXPERIENCE in USING the Internet.

I would like working with the Internet 1 2 3 4 5 6 7
The challenge of solving problems with the Internet would not appeal to me 1 2 3 4 5 6 7
I think working with the Internet would be enjoyable and stimulating 1 2 3 4 5 6 7
Figuring out Internet problems would not appeal to me 1 2 3 4 5 6 7
When there is a problem with the Internet that I can’t solve straight away, I would stick with it until I have the answer 1 2 3 4 5 6 7
I don’t understand how some people can spend so much time working with the Internet and seem to enjoy it 1 2 3 4 5 6 7
Once I start to work with the Internet, I would find it hard to stop 1 2 3 4 5 6 7
I would do as little work with the Internet as possible 1 2 3 4 5 6 7
If a problem with the Internet is left unsolved, I would continue to think about it afterward 1 2 3 4 5 6 7
I do not enjoy talking with others about the Internet 1 2 3 4 5 6 7

PART B – Complete this part if you HAVE EXPERIENCE in USING the Internet

I like working with the Internet 1 2 3 4 5 6 7
The challenge of solving problems with the Internet does not appeal to me 1 2 3 4 5 6 7
I think working with the Internet is enjoyable and stimulating 1 2 3 4 5 6 7
Figuring out Internet problems does not appeal to me 1 2 3 4 5 6 7
When there is a problem with the Internet that I can’t solve straight away, I stick with it until I have the answer 1 2 3 4 5 6 7
I don’t understand how some people can spend so much time working with the Internet and seem to enjoy it 1 2 3 4 5 6 7
Once I start to work with the Internet, I find it hard to stop 1 2 3 4 5 6 7
I do as little work with the Internet as possible 1 2 3 4 5 6 7
If a problem with the Internet is left unsolved, I continue to think about it afterward 1 2 3 4 5 6 7

I do not enjoy talking with others about the Internet 1 2 3 4 5 6 7
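Half of the attitude items in each part are negatively worded (e.g. “I do as little work with the Internet as possible”), so before the ten 1-7 ratings are summed those items would normally be reverse-coded. A minimal sketch, with the negative-item positions read off the wording above (an assumption; the thesis does not list them here):

```python
# 0-based positions of the negatively worded items in the ten-item
# attitude scale above -- inferred from the item wording.
NEGATIVE_ITEMS = {1, 3, 5, 7, 9}

def attitude_score(responses):
    """Sum 1-7 agreement ratings after reverse-coding negative items.

    Reverse-coding maps 1 -> 7, 2 -> 6, ..., 7 -> 1 so that a higher
    total always means a more positive attitude toward the Internet.
    """
    return sum((8 - r) if i in NEGATIVE_ITEMS else r
               for i, r in enumerate(responses))

# Agreeing (7) with every positive item and disagreeing (1) with every
# negative item yields the maximum score of 70.
print(attitude_score([7, 1, 7, 1, 7, 1, 7, 1, 7, 1]))  # 70
```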

Imagine you are about to use the Internet. A number of statements are listed below which people have used to describe themselves when using the Internet. Please read each statement and then using the scale provided indicate how you would feel as you begin to use the Internet. You may use any number between 1 and 7 .

1 = Completely Untrue, 2 = Mostly Untrue, 3 = Somewhat Untrue, 4 = Neutral, 5 = Somewhat True, 6 = Mostly True, 7 = Completely True

I feel calm 1 2 3 4 5 6 7

I feel secure 1 2 3 4 5 6 7

I am tense 1 2 3 4 5 6 7

I am regretful 1 2 3 4 5 6 7
I feel at ease 1 2 3 4 5 6 7
I feel upset 1 2 3 4 5 6 7
I am presently worrying over possible misfortunes 1 2 3 4 5 6 7
I feel rested 1 2 3 4 5 6 7
I feel anxious 1 2 3 4 5 6 7
I feel comfortable 1 2 3 4 5 6 7
I feel self-confident 1 2 3 4 5 6 7
I feel nervous 1 2 3 4 5 6 7
I am jittery 1 2 3 4 5 6 7

I feel ‘highly strung’ 1 2 3 4 5 6 7

I am relaxed 1 2 3 4 5 6 7

I feel content 1 2 3 4 5 6 7

I am worried 1 2 3 4 5 6 7
I feel pleasant 1 2 3 4 5 6 7

A number of possible situations are listed below which could occur when using the Internet. Please read each statement and then using the scale provided indicate how likely you think it is that the situation would occur if you used the Internet. Remember there are no right or wrong answers. You may use any number between 1 and 7 .

1 = Extremely Unlikely, 2 = Quite Unlikely, 3 = Slightly Unlikely, 4 = Neither, 5 = Slightly Likely, 6 = Quite Likely, 7 = Extremely Likely

Trouble getting onto the internet 1 2 3 4 5 6 7
Have trouble finding what I am looking for 1 2 3 4 5 6 7
Have my computer freeze up 1 2 3 4 5 6 7

Get blocked by password protection 1 2 3 4 5 6 7

Using the scale provided indicate to what extent you believe your use of the Internet is encouraged by each of the following groups. You may use any number between 1 and 7.

1 = Strongly Discouraged, 2 = Moderately Discouraged, 3 = Mildly Discouraged, 4 = Neither/Not Applicable, 5 = Mildly Encouraged, 6 = Moderately Encouraged, 7 = Strongly Encouraged

Your friends 1 2 3 4 5 6 7
Your partner/spouse 1 2 3 4 5 6 7
Your children 1 2 3 4 5 6 7
Your parents 1 2 3 4 5 6 7
Your siblings 1 2 3 4 5 6 7
Your work colleagues 1 2 3 4 5 6 7
Your boss 1 2 3 4 5 6 7
The media (i.e. TV shows, movies, celebrities) 1 2 3 4 5 6 7

Using the scale provided indicate to what extent you believe others use the Internet. You may use any number between 1 and 7.

1 = Not Sure/Not Applicable, 2 = Never, 3 = Once a month, 4 = 2-3 times a month, 5 = Once a week, 6 = 2-6 times a week, 7 = At least once a day

Your friends 1 2 3 4 5 6 7
Your partner/spouse 1 2 3 4 5 6 7
Your children 1 2 3 4 5 6 7
Your parents 1 2 3 4 5 6 7
Your siblings 1 2 3 4 5 6 7
Your work colleagues 1 2 3 4 5 6 7
Your boss 1 2 3 4 5 6 7

In general how successful do you believe others are in using the Internet? Use the scale provided. You may use any number between 1 and 7.

1 = Extremely Unsuccessful, 2 = Quite Unsuccessful, 3 = Slightly Unsuccessful, 4 = Neutral/Not Sure/Not Applicable, 5 = Slightly Successful, 6 = Quite Successful, 7 = Extremely Successful

Your friends 1 2 3 4 5 6 7

Your partner/spouse 1 2 3 4 5 6 7

Your children 1 2 3 4 5 6 7
Your parents 1 2 3 4 5 6 7
Your siblings 1 2 3 4 5 6 7
Your work colleagues 1 2 3 4 5 6 7
Your boss 1 2 3 4 5 6 7

Terminology Guide

Address Book: a collection of email addresses that you can access for later use.

Bookmark: a web address that is stored in order to return to it easily.

Browser: software you use to view web pages.

Chat Room: a virtual place where people can type comments to one another in real time.

Cookies: small pieces of data that can be used to exchange information between web pages and web servers.

Discussion Group: a group of people who exchange messages about a particular topic via the Internet.

Download: to save a file to disc or hard drive.

E-mail: an electronic method of sending messages from one person to another.

Favorite: a web address that is stored in order to return to it easily.

FTP: File Transfer Protocol; a protocol that is used to transfer data from one computer to another.

Hypertext: a point in a web page that contains a link to other web pages.

Internet: a worldwide network of computers that allows for communication and information exchange.

Log on: to begin or start a system.

Log off: to close or end a system.

URL: Uniform Resource Locator or web address.

Search tool: a tool which searches for information on the web.

Virus: a software program that is loaded onto your computer without your knowledge and runs against your wishes and that can cause your computer to malfunction.

World Wide Web: a way of accessing information on the Internet using text and graphics.

Thank you for your participation

Understanding Internet Use within Community

Helen Partridge
Lecturer, Doctor of Philosophy Candidate
Faculty of Information Technology
Queensland University of Technology
PO Box 2434, Brisbane QLD

This research is being conducted as part of a Doctor of Philosophy program at the Queensland University of Technology (QUT), Australia. The research seeks to develop an understanding of how different people within the community use the Internet.

Your responses will help in developing a better understanding of Internet use by members of your community.

Information on the Internet use of community members can assist scholars and organizations within the community (such as public libraries) to develop programs and services which will ensure that the Internet and other digital resources are accessible to all members of the community.

Please note this is not an evaluation of the Brisbane City Library. It is asking about your personal thoughts and experiences with the Internet in general.

Your participation in this survey is voluntary. If you agree to participate, you may be assured that your involvement in the research will not impact upon your relationship with QUT or the Brisbane City Library Service in any way.

Your responses to this survey will remain confidential and anonymous. Please answer truthfully. Any answer is acceptable. The survey will take approximately 10 minutes to complete.

If you are unsure of the meaning of any word or phrase please ask the Researcher.

If you have any concerns about the ethical conduct of this research project, please contact the QUT Human Research Ethics Committee (tel. +61 7 3864 2902 or email [email protected]).

Thank you for participating in this survey.

Section 1: To begin: some FACTS about YOU

Please place a tick in the appropriate box. For example 

1. Are you?  Male  Female

2. How old are you?  17-20  21-30  31-40  41-50  51-60  61-70  71-80  81-90  Over 90

3. What is your highest completed level of education?

 Primary school  High school  TAFE/Technical college  Bachelor degree  Masters  PhD  Other, please specify

4. Are you currently enrolled in formal education?  Yes, go to Question 5  No, go to Question 6

5. What level of study are you enrolled in?

 Primary School  High School  TAFE/Technical College  Bachelor degree  Masters  PhD  Other, please specify

6. What is your employment status?

 Unemployed  Full Time Employed  Part-time Employed  Casually Employed  Contract Employed  Job Share  Retired  Other, please specify

7. Over the last three years, what has been your average annual income?

 No Income  $1 - $10,000  $10,001 - $20,000  $20,001 - $30,000  $30,001 - $40,000  $40,001 - $50,000  $50,001 - $60,000  $60,001 - $70,000  $70,001 - $80,000  $80,001 - $90,000  $90,001 - $100,000  Over $100,000

8. Do you identify yourself as:  Australian Aboriginal or Torres Strait Islander  Hispanic or Latino  Asian  Caucasian/White  Other, please specify

9. Do you identify yourself as having a disability?  Yes  No

Section 2: YOUR use of the Internet

1. On a typical weekday how many hours would you use the Internet?

 None  Less than 1 hour  Between 1 and 2 hours  More than 2 hours and less than 5 hours  More than 5 hours

2. On a typical weekend how many hours would you use the Internet?

 None  Less than 1 hour  Between 1 and 2 hours  More than 2 hours and less than 5 hours  More than 5 hours

3. When was the last time you used the Internet?

 Never  Less than 24 hours  Between 24 hours and 1 week  Between 1 week and 1 month  Between 1 and 3 months  Between 3 and 6 months  Between 6 and 12 months  Over 12 months

4. In general, how successful do you rate your use of the Internet?

1 = Extremely Unsuccessful, 2 = Quite Unsuccessful, 3 = Slightly Unsuccessful, 4 = Neutral/Not Sure/Not Applicable, 5 = Slightly Successful, 6 = Quite Successful, 7 = Extremely Successful

Section 3: YOUR experience of the Internet

Listed below are a number of tasks that can be performed when using the Internet. Please use the scale provided to rate how confident you are that you can do each of the tasks at this moment in time. You may use any number between 0 and 10. Even if you have never used the Internet, how confident are you that you could do the tasks listed?

0 = I am not at all confident, 5 = I am moderately confident, 10 = I am totally confident

Open a browser such as Internet Explorer or Netscape to access the Web 0 1 2 3 4 5 6 7 8 9 10
Go to a specific web site by typing in the web address (URL) 0 1 2 3 4 5 6 7 8 9 10
Use a search tool (i.e. Google or Yahoo) to find what you want (i.e. graphics, computer files, documents, information, web pages, sound files etc) 0 1 2 3 4 5 6 7 8 9 10
Use hypertext in a web page to find out about a subject that interests you 0 1 2 3 4 5 6 7 8 9 10
Determine if a web site or the information you have found on a web site is reliable and valid 0 1 2 3 4 5 6 7 8 9 10
Determine if a web site is a secure site 0 1 2 3 4 5 6 7 8 9 10
Print a web page 0 1 2 3 4 5 6 7 8 9 10
Save a web site as a Bookmark or Favorite 0 1 2 3 4 5 6 7 8 9 10
Delete a web site you have saved as a Bookmark or Favorite 0 1 2 3 4 5 6 7 8 9 10
Organise your Bookmarks or Favorites into folders 0 1 2 3 4 5 6 7 8 9 10
Download (save) a file from a Web site 0 1 2 3 4 5 6 7 8 9 10
Check for a virus in a file that you are downloading (saving) from a Web site 0 1 2 3 4 5 6 7 8 9 10
View a multimedia (audio or visual) file 0 1 2 3 4 5 6 7 8 9 10
Download (save) and install new software to your computer 0 1 2 3 4 5 6 7 8 9 10
Understand internet words/terms such as URL or FTP or browser 0 1 2 3 4 5 6 7 8 9 10
Solve most problems or mistakes you experience when using the Internet 0 1 2 3 4 5 6 7 8 9 10
Activate or deactivate cookies linked to a web page 0 1 2 3 4 5 6 7 8 9 10
Log on and off to an email system (i.e. Hotmail, Yahoo Mail) 0 1 2 3 4 5 6 7 8 9 10
Delete an email message from your inbox 0 1 2 3 4 5 6 7 8 9 10
Reply to an email message 0 1 2 3 4 5 6 7 8 9 10
Forward an email message 0 1 2 3 4 5 6 7 8 9 10
Send an email message to more than one person at the same time 0 1 2 3 4 5 6 7 8 9 10
Attach a file to an email message 0 1 2 3 4 5 6 7 8 9 10
View a file attached to an email message 0 1 2 3 4 5 6 7 8 9 10
Save a file attached to an email message 0 1 2 3 4 5 6 7 8 9 10
Scan an email message attachment for a virus 0 1 2 3 4 5 6 7 8 9 10
Add an email address to your address book 0 1 2 3 4 5 6 7 8 9 10
Delete an email address from your address book 0 1 2 3 4 5 6 7 8 9 10
Locate a discussion group or e-list you would like to join 0 1 2 3 4 5 6 7 8 9 10
Subscribe to a discussion group or e-list 0 1 2 3 4 5 6 7 8 9 10
Send a message to a discussion group or e-list (i.e. create a new thread) 0 1 2 3 4 5 6 7 8 9 10
Reply to a message on a discussion group or e-list 0 1 2 3 4 5 6 7 8 9 10
Unsubscribe from a discussion group or e-list 0 1 2 3 4 5 6 7 8 9 10
Locate a chat room you would like to join 0 1 2 3 4 5 6 7 8 9 10
Enter a chat room 0 1 2 3 4 5 6 7 8 9 10
Take part in a conversation in a chat room 0 1 2 3 4 5 6 7 8 9 10
Leave a chat room 0 1 2 3 4 5 6 7 8 9 10
Create a web page 0 1 2 3 4 5 6 7 8 9 10
Load a web page to the Web 0 1 2 3 4 5 6 7 8 9 10

Share files with others via the Web 0 1 2 3 4 5 6 7 8 9 10

Let’s now consider YOUR ATTITUDE towards the Internet. A number of statements about using the Internet are listed below. Using the scale provided please indicate to what extent each statement describes you. You may use any number between 1 (Strongly Disagree) and 7 (Strongly Agree).

1 = Strongly Disagree, 2 = Moderately Disagree, 3 = Mildly Disagree, 4 = Neither, 5 = Mildly Agree, 6 = Moderately Agree, 7 = Strongly Agree

I like working with the Internet 1 2 3 4 5 6 7
The challenge of solving problems with the Internet does not appeal to me 1 2 3 4 5 6 7
I think working with the Internet is enjoyable and stimulating 1 2 3 4 5 6 7
Figuring out Internet problems does not appeal to me 1 2 3 4 5 6 7
When there is a problem with the Internet that I can’t solve straight away, I stick with it until I have the answer 1 2 3 4 5 6 7
I don’t understand how some people can spend so much time working with the Internet and seem to enjoy it 1 2 3 4 5 6 7
Once I start to work with the Internet, I find it hard to stop 1 2 3 4 5 6 7
I do as little work with the Internet as possible 1 2 3 4 5 6 7
If a problem with the Internet is left unsolved, I continue to think about it afterward 1 2 3 4 5 6 7
I do not enjoy talking with others about the Internet 1 2 3 4 5 6 7

What are the OUTCOMES of using the internet? Using the scale provided please indicate how likely you believe each outcome would be. Even if you have never used the Internet, how confident would you be that the Internet could help you in these areas? You may use any number between 1 (Extremely Unlikely) and 7 (Extremely Likely).

1 = Extremely Unlikely, 2 = Quite Unlikely, 3 = Slightly Unlikely, 4 = Neither, 5 = Slightly Likely, 6 = Quite Likely, 7 = Extremely Likely

Obtain information that I can’t find elsewhere 1 2 3 4 5 6 7
Get immediate knowledge of big events 1 2 3 4 5 6 7

Find a wealth of information 1 2 3 4 5 6 7

Solve a problem 1 2 3 4 5 6 7

Hear music I like 1 2 3 4 5 6 7

Feel entertained 1 2 3 4 5 6 7

Have fun 1 2 3 4 5 6 7

Play a game I like 1 2 3 4 5 6 7

Feel like I belong to a group 1 2 3 4 5 6 7

Find something to talk about 1 2 3 4 5 6 7

Get support from others 1 2 3 4 5 6 7

Maintain a relationship I value 1 2 3 4 5 6 7

Forget my problems 1 2 3 4 5 6 7

Find a way to pass the time 1 2 3 4 5 6 7

Relieve boredom 1 2 3 4 5 6 7

Improve my future prospects in life 1 2 3 4 5 6 7

Find people like me 1 2 3 4 5 6 7

Find others who respect my views 1 2 3 4 5 6 7
Get up to date with new technology 1 2 3 4 5 6 7

Save time shopping 1 2 3 4 5 6 7

Find bargains on products and services 1 2 3 4 5 6 7
Get free information that would otherwise cost me money 1 2 3 4 5 6 7

Get products for free 1 2 3 4 5 6 7

Please continue to the next page

Let’s now consider HOW IMPORTANT the outcomes of using the internet are to YOU. Using the scale provided rate each item in terms of its level of importance to you. You may use any number between 1 (Extremely Unimportant) and 7 (Extremely Important).

1 = Extremely Unimportant, 2 = Quite Unimportant, 3 = Slightly Unimportant, 4 = Neither, 5 = Slightly Important, 6 = Quite Important, 7 = Extremely Important

Obtain information that I can’t find elsewhere 1 2 3 4 5 6 7
Get immediate knowledge of big events 1 2 3 4 5 6 7
Find a wealth of information 1 2 3 4 5 6 7
Solve a problem 1 2 3 4 5 6 7
Hear music I like 1 2 3 4 5 6 7
Feel entertained 1 2 3 4 5 6 7
Have fun 1 2 3 4 5 6 7
Play a game I like 1 2 3 4 5 6 7
Feel like I belong to a group 1 2 3 4 5 6 7
Find something to talk about 1 2 3 4 5 6 7
Get support from others 1 2 3 4 5 6 7
Maintain a relationship I value 1 2 3 4 5 6 7
Forget my problems 1 2 3 4 5 6 7
Find a way to pass the time 1 2 3 4 5 6 7
Relieve boredom 1 2 3 4 5 6 7
Improve my future prospects in life 1 2 3 4 5 6 7
Find people like me 1 2 3 4 5 6 7
Find others who respect my views 1 2 3 4 5 6 7
Get up to date with new technology 1 2 3 4 5 6 7
Save time shopping 1 2 3 4 5 6 7
Find bargains on products and services 1 2 3 4 5 6 7
Get free information that would otherwise cost me money 1 2 3 4 5 6 7
Get products for free 1 2 3 4 5 6 7

Imagine you are about to use the Internet. A number of statements are listed below which people have used to describe themselves when using the Internet. Please read each statement and then using the scale provided indicate how you would feel as you begin to use the Internet. You may use any number between 1 (Completely Untrue) and 7 (Completely True).

1 = Completely Untrue, 2 = Mostly Untrue, 3 = Somewhat Untrue, 4 = Neutral, 5 = Somewhat True, 6 = Mostly True, 7 = Completely True

I feel calm 1 2 3 4 5 6 7

I feel secure 1 2 3 4 5 6 7

I am tense 1 2 3 4 5 6 7

I am regretful 1 2 3 4 5 6 7

I feel at ease 1 2 3 4 5 6 7

I feel upset 1 2 3 4 5 6 7
I am presently worrying over possible misfortunes 1 2 3 4 5 6 7
I feel rested 1 2 3 4 5 6 7

I feel anxious 1 2 3 4 5 6 7

I feel comfortable 1 2 3 4 5 6 7

I feel self-confident 1 2 3 4 5 6 7

I feel nervous 1 2 3 4 5 6 7

I am jittery 1 2 3 4 5 6 7

I feel ‘highly strung’ 1 2 3 4 5 6 7

I am relaxed 1 2 3 4 5 6 7

I feel content 1 2 3 4 5 6 7

I am worried 1 2 3 4 5 6 7

I feel pleasant 1 2 3 4 5 6 7

Terminology Guide

Address Book: a collection of email addresses that you can access for later use.

Bookmark: a web address that is stored in order to return to it easily.

Browser: software you use to view web pages.

Chat Room: a virtual place where people can type comments to one another in real time.

Cookies: small pieces of data that can be used to exchange information between web pages and web servers.

Discussion Group: a group of people who exchange messages about a particular topic via the Internet.

Download: to save a file to disc or hard drive.

E-mail: an electronic method of sending messages from one person to another.

Favorite: a web address that is stored in order to return to it easily.

FTP: File Transfer Protocol; a protocol that is used to transfer data from one computer to another.

Hypertext: a point in a web page that contains a link to other web pages.

Internet: a worldwide network of computers that allows for communication and information exchange.

Log on: to begin or start a system.

Log off: to close or end a system.

URL: Uniform Resource Locator or web address.

Search tool: a tool which searches for information on the web.

Virus: a software program that is loaded onto your computer without your knowledge and runs against your wishes and that can cause your computer to malfunction.

World Wide Web: a way of accessing information on the Internet using text and graphics.

Thank you for your participation

Appendix 2: Expert comment form 2

2 Only one of the two expert comment forms used in the study is provided here (i.e. the form for self efficacy experts). The other form (for internet experts) differed in only one aspect – the introduction. It should also be noted that the format of the questionnaire has been modified to meet copyright requirements (i.e. removal of images).

Internet Self Efficacy Feedback Form

Thank you for your interest in the development of this survey instrument. My name is Helen Partridge and I am a doctoral student at the Queensland University of Technology, Brisbane, Australia, in the area of Information Technology. I am designing a self-appraisal questionnaire to measure one’s self-efficacy beliefs related to using the Internet. Please read through the following introductory statement designed to acquaint you with the theoretical/conceptual basis of this survey.

What is self-efficacy and how does it relate to Internet use? Self-efficacy is a psychological construct developed by Dr. Albert Bandura describing a person’s belief in their ability to plan and complete the actions necessary for a given task under a given set of conditions. For example, you could rate your belief in your capability to lift an item weighing 50 pounds over your head. It is widely accepted that personal confidence influences behaviour, i.e., people engage in activities they feel confident they can undertake and avoid activities when uncertain of their ability to perform well. Since Internet use often involves novel tasks, it seems appropriate to examine self-efficacy related to participating in Internet use. For the purposes of this survey instrument, Internet use refers to engaging in tasks or activity using the worldwide computer network commonly referred to as the Internet.

How was the questionnaire developed and what is the intended population? The population for this questionnaire will be adults who may or may not use the Internet. The questionnaire items are based on research gathered on how people use the Internet (i.e. what activities or functions they most commonly perform) and include areas such as communicating via the Internet, searching for information and technical skills. Additionally, to properly assess self-efficacy beliefs, the questions examine tasks with varying levels of difficulty.

How will your information be used? If you participate in this pilot testing, the data you provide will only be used by me. I will not reveal your name or private information without your expressed permission. You do not have to give your name, organization, title or email address to participate in this pilot test, but it would help me in categorizing your responses. The data from this pilot study will be considered as expert opinion on Internet use and used to revise this survey instrument before final administration.

How can you help? I would like you to examine the 67 items below and decide whether each item is easy to understand and whether it is an important part of participating in Internet use. I am viewing your responses as expert opinion on the subject of self-efficacy. Please rate each item on your belief of its usefulness/validity as a measure of Internet Self Efficacy using the 5-point rating scale. Under each item is a text box for you to give me additional feedback on the item, including its clarity, comments on wording, and related questions you feel are important. Any additional feedback you give about each item, or about the questionnaire in general, is greatly appreciated.
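One common way to summarize expert ratings of this kind (not specified in the thesis) is an item-level content validity index: the proportion of experts rating an item 4 or 5 on the scale below. A minimal sketch under that assumption:

```python
def content_validity_index(ratings_by_item):
    """Proportion of experts rating each item 4 or 5 on the 1-5 scale.

    Assumption: the common item-level CVI; items with a low index are
    candidates for revision or removal before final administration.
    """
    return {
        item: sum(1 for r in ratings if r >= 4) / len(ratings)
        for item, ratings in ratings_by_item.items()
    }

# Three hypothetical experts rate two items (names are illustrative)
print(content_validity_index({
    "send email to many": [5, 4, 4],   # CVI = 1.0 -> retain
    "use metasearch tool": [2, 3, 4],  # CVI ~ 0.33 -> review
}))
```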

If you have any questions or comments about the feedback form please do not hesitate to contact me at [email protected] or (61 7) 3864 9047. If you have any concerns about the ethical conduct of this research project, please contact The Secretary, University Human Research Ethics Committee (tel: (61 7) 3864 2902 or email: [email protected]).

Thank you for your participation in this process.

The questionnaire is based upon the work by Fredrick Augustus Randall (2001).

[Participants involved in the actual study will receive the following instructions on how to complete the scale:

A number of tasks that can be carried out while using the Internet are described below. Using the scale below, please rate how confident you are that you can perform each of the tasks at this moment in time . Note that the scale ranges from 0 (I cannot do this activity) to 100 (I am certain that I can do this activity successfully). You may use any number between 0 and 100.]

Each of the 67 items below was presented with the five-point rating scale shown here, followed by a text box for comments on the item:

1 = Strongly disagree with item’s validity; 2 = Disagree with item’s validity; 3 = Undecided regarding item’s validity; 4 = Agree with item’s validity; 5 = Strongly agree with item’s validity

1. Send an email message to more than one person at the same time
2. Use a World Wide Web browser such as Internet Explorer or Netscape
3. Use the World Wide Web if I could call someone for help if I got stuck
4. Delete an email message
5. Use the World Wide Web if someone showed me how to do it first
6. Contribute to the discussion in a newsgroup
7. Use a discussion board if someone showed me how to do it first
8. Locate information using the World Wide Web
9. Use a newsgroup if someone else has helped me to get started
10. Download (save) a file from a Web site
11. Use a chat room if there was no one around to tell me what to do as I go
12. Scan an email message attachment for a virus
13. Use a chat room if I had an instruction manual for reference
14. Create an address book
15. Use a discussion board if there was no one around to tell me what to do as I go
16. Log on and off to an email system
17. Use a discussion board if I had used another system like it before
18. Locate a chat room I would like to join
19. Use a newsgroup if I could call someone for help if I got stuck
20. Forward an email message
21. Use a newsgroup if I have an instruction manual for reference
22. Add a message to a discussion board
23. Use a chat room if someone else helped me to get started
24. Go to a specific web site by typing in the web address (URL)
25. Use a search engine such as Google or Alta Vista
26. Determine if a web site is a secure site
27. Use a discussion board if I had an instruction manual for reference
28. Activate and deactivate the cookies linked to a web page
29. Use a metasearch tool such as Metacrawler or Dogpile
30. Print a page from the World Wide Web
31. Use a chat room if someone showed me how to do it first
32. Use a discussion board if I had seen someone else using it before trying it myself
33. Use a subject directory such as Yahoo or Infomine
34. View a multimedia (audio or visual) clip
35. Use a chat room if I had used another system like it before
36. Connect to the Internet from home
37. Use a chat room if I could call someone for help if I got stuck
38. Use the World Wide Web if there was no one around to tell me what to do as I go
39. Send an email message to one person
40. Use the World Wide Web if I had seen someone else using it before trying it myself
41. Reply to an email message
42. Use the World Wide Web if someone else has helped me to get started
43. Create a web page
44. Understand Internet words/terms such as URL or FTP or browser
45. Use hypertext in Web pages to find out more about a subject that interests me
46. Know what to do when the web site I am trying to access will not open
47. Use the World Wide Web if I had used another system like it before
48. Locate a discussion board I would like to use
49. Use the World Wide Web if I had an instruction manual for reference
50. Solve any problems or mistakes I experience when using the Internet
51. Use a newsgroup if there was no one around to tell me what to do as I go
52. Learn how to do advanced searching on the World Wide Web
53. Use a newsgroup if I had seen someone else using it before trying it myself
54. Attach a file to an email message
55. Use a newsgroup if someone showed me how to do it first
56. View a file attached to an email message
57. Use a newsgroup if I had used another system like it before
58. Contribute to the conversation in a chat room
59. Use a chat room if I had seen someone else using it before trying it myself
60. Save a file attached to an email message
61. Use a discussion board if I could call someone for help if I got stuck
62. Create a Bookmark or a Favorite
63. Locate a newsgroup I would like to join
64. Determine if a web site or the information I have found on the web is quality
65. Organise my Bookmarks or Favorites into folders
66. Download (save) and install new software to my computer
67. Acquire new computer or Internet skills required to continue using the Internet

Please provide any other comments or suggestions you would like to make about the items or the questionnaire generally.

You do not have to provide your details below; however, it would be appreciated, as it would help me in categorizing the responses.

Name:

Institution:

Position:

Qualifications:

Appendix 3: APS poster 2003


Appendix 4: The pre- and pilot-tests


A4.1 Introduction

Pre-testing and pilot testing are a necessary and vital part of survey development (Litwin, 1995). According to Litwin (1995) “it provides useful information about how [the] survey instrument actually plays in the field” (p. 67). Although conducting a pre-test and a pilot test requires additional time, energy and resources, it is an important step that helps in determining the practical application of the survey instrument. The pre-testing and pilot testing in the current study consisted of four phases. Phase One involved testing the internet self-efficacy scale and the socio-demographic variables on participants from the Brisbane community. Based on the findings of this phase a second survey was developed and piloted. The piloting of this survey constituted Phase Two, which involved both the Brisbane and San Jose communities and the pre-testing of other existing self-efficacy scales. The results of this pre-test revealed a strong need for the current research to create its own internet self-efficacy scale. This led to the third phase of the testing process, which involved administering the internet self-efficacy scale created for the current research to a small sample of both the US and the Australian communities. The fourth, and final, phase of the testing process involved the pilot testing of the entire instrument in both communities.

A4.2 Phase One

Phase One of the Pilot Study took place from March to April 2002.

A4.2.1 Pre-piloting

It is important that once a first draft of a questionnaire is completed it is evaluated by experts and then piloted on a subset of the target population. Prior to the first phase of the pilot study a pre-pilot of the survey instrument was completed. The participants of this pre-pilot test were two librarians from the Brisbane City Council Library Service. Both participants commented that the length of the instrument (8 pages double sided with 30 questions) might deter members of the public from completing the survey. It was also noted that several words or phrases used in the ISE Scale by Eastin and LaRose (2000) might not be appropriate for use with members of the general public. These words included: “data”, “Internet hardware”, “Internet software”, “explaining why a task will not run on the Internet” and “Internet program”. Whilst these concerns were noted, no changes were made to the survey instrument. The ISE Scale is a soundly developed measure of internet self-efficacy, so before making any changes to this scale the researcher wanted to obtain direct evidence from the intended survey participants.

When administering the survey the researcher asked participants if there were any words or phrases in the survey that they were not comfortable with or did not understand. If no immediate response was received the researcher would draw the participants’ attention to specific words and phrases, including those highlighted in the pre-pilot.

A4.2.2 Phase one participants

The Inala and Indooroopilly branches of the BCC Library Service were used to obtain participants in the first phase of the Pilot Study. The two branches were chosen based upon recommendations by the library staff that they would provide access to the desired participant profile (i.e. novice internet users from low and high socio-economic backgrounds). Nine participants took part in the study: 6 males and 3 females. All participants except one were aged 61 or older. No participants were employed, with half indicating that they were retired from the workforce. Three participants indicated that they had no income, 2 had an income ranging from $20,000 to $30,000, and 1 participant had an income greater than $70,000. Education standards varied, with participants ranging from primary school through to Masters level. It should be noted that 2 participants chose not to complete all parts of the survey instrument. Both participants, however, did assist the researcher by discussing their views and concerns.

Five of the participants had Internet access at home, with the remaining 3 participants having no access to the Internet other than through the public library. Almost all (7 out of 8) of the participants classified themselves as novice Internet users. One participant indicated they were an intermediate user; however, this statement contradicted other responses made by the participant, which stated that they never used the Internet. Over half (5) of the participants had never accessed the Internet or had been accessing it for less than two months. Two of the participants had been using the Internet for between 6 and 12 months, and one participant had been using the Internet for over 2 years. Over half (5) of the participants never used the Internet on weekdays or weekends. Two of the participants used the Internet on average 2 days, at an average of less than 1 hour each use.

Table A4.1 provides details on the training times, number of people approached and the number of participants for each branch.

Branch         | Training date & time        | Number attending training | Number of survey participants
Indooroopilly  | 10 am, Wednesday 13 March   | 8*                        | 4
Inala          | 10 am, Thursday 14 March    | 7                         | 4
Inala          | 5 pm, Thursday 4 April      | 3                         | 1
TOTAL          |                             | 18                        | 9

* This is an approximate figure as the exact number of people attending the session was not recorded.

Table A4.1: Phase one study sample

A4.2.3 Phase one gathering of data

Each library branch holds regular 1-hour Introduction to the Internet training classes for members of the public. These training sessions were used to obtain access to potential study participants, based on the assumption that individuals attending these classes would be novice users of the Internet. According to Bandura (1977), engaging in a training session that is related to a particular task or behaviour will increase an individual’s self-efficacy regarding that task or behaviour. It was therefore important in the current study to establish the impact that the Internet training session would have on an individual’s internet self-efficacy. To determine this impact the ISE scale was administered before and after each training session. In addition, after completing the training session participants were asked to respond to two questions: When would you have preferred to complete the survey instrument (before or after the training session)? What impact did the training session have on your understanding of the Internet?
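The before-and-after administration described above lends itself to a paired comparison of each participant’s two ISE scores. A minimal sketch, assuming a paired-samples t-test via SciPy and using made-up illustrative numbers rather than study data:

```python
from scipy.stats import ttest_rel  # requires SciPy

# Hypothetical pre- and post-training ISE means for nine participants
pre = [3.1, 2.0, 4.5, 1.2, 2.8, 3.3, 0.9, 2.2, 3.9]
post = [4.0, 2.6, 4.4, 2.5, 3.1, 4.1, 1.8, 2.9, 4.2]

# Paired t-test: did the training session shift internet self-efficacy?
t_stat, p_value = ttest_rel(post, pre)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```

With only nine participants such a test would have very little power, which is consistent with Phase One being treated as exploratory.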

The survey was delivered to different groups at different times during the data collection period. To maintain consistency in the survey approach an “introduction checklist” was used by the researcher. The checklist outlined the key points that each group of participants needed to be told. These points included an outline of the purpose of the study, who was conducting the study and for what reasons.


A brief explanation of how to complete the survey was provided. The researcher emphasised the importance of answering honestly and that all data gathered would remain confidential. This helped to guard against the risk of social desirability bias and threatening questions, which could pose a problem with the study’s participants. Participants may try to present a positive image of themselves to the researcher instead of giving truthful answers, providing what they believe to be the normative or socially desirable answer (Neuman, 2000, p. 230). As participants completed and returned the surveys they were thanked for their participation.

The researcher conducted each session personally, as it was believed that the presence of the researcher would have a more positive impact on encouraging participants to complete the survey than if the instrument were administered by library personnel not closely associated with the study.

A4.2.4 Issues arising from phase one: implications for the main study

Two key areas emerged as issues for consideration as a result of the first phase of the pilot study: (i) the availability of, and access to, the desired participants; and (ii) the content, presentation and administration of the survey instrument.

• Participant availability and access

The Introduction to the Internet training classes conducted by the BCC library branches did not provide access to a broad sample range. The results suggest that older members of the community (i.e. aged 50 and over) predominantly attend the library training classes, especially those community members who are now retired from the workforce. This result was supported by anecdotal evidence from the training staff at both branches who, when asked to describe the type of people attending the library training sessions, described most trainees as “older” and as “retirees”. Whilst this is an important group to include within the current study, it does not allow for generalisability of the results to the community as a whole. This result could, however, be directly related to the small number of training sessions attended by the researcher (a total of 3) and the small number of branches used for data gathering (a total of 2). Conducting the study over a longer period of time and at different locations (both in the BCC library service and outside of it) may yield different results. In addition, relying on people attending an internet training session may also skew the sample by obtaining only those people who have an interest in learning more about the internet, and, as such, not obtaining those people who have no interest in the internet for whatever reason. These issues are explored further in the second phase of the pilot study.

• Survey Content, Design and Administration

Feedback from the participants supported the comments made by the BCC library staff regarding the survey's content during the pre-pilot process. Many of the participants were unsure of the theoretical meaning or practical application of the following words or phrases: "internet hardware", "internet software", "troubleshooting", "why a task will not run on the Internet", "Internet program" and "online discussion group". In addition, one participant commented that the grammatical structure of the questions in the ISE scale was "convoluted". A sample question from the ISE scale is provided in Figure A4.1. Feedback was also obtained from the two participants who did not fully complete the survey. Both participants stated that the survey was "irrelevant to them". Yet both participants were novice Internet users, and as such, were of direct relevance to the study. One of the participants stated that they could not comment on their "confidence levels" regarding something (i.e. the Internet) of which they had no knowledge. Both participants were older retired males. This finding could be the result of a difference between the current sample and the sample used to create the ISE scale.

Figure A4.1: A sample question from the Eastin and LaRose (2000) ISE Scale

The internet self-efficacy scale developed by Eastin and LaRose (2000) was constructed in a study using 171 first year university students at Michigan State University. The mean age of the participants was 21. The current study is the first time that the scale has been used with members of the general public. The results suggest that the instrument is not applicable for use with the general public. University students, even those who perceive themselves as novice internet users, may have a more highly developed knowledge and understanding of the Internet than members of the general public.

Bandura (2005) provides a comprehensive methodology for constructing valid and reliable self-efficacy scales. According to Bandura, the "items [included in a self-efficacy scale] should be written at the reading level of the participant" (2005, p. 4). Bandura (2005) also suggests avoiding "technical jargon that is not part of everyday life" (p. 4) and not using ambiguous or poorly worded items. The Phase One findings suggest that the ISE scale developed by Eastin and LaRose (2000) is not appropriate to the "reading level" of the current study's participants. This issue is explored further in the second phase of the pilot study.

A4.3 Phase two

Phase Two of the Pilot Study took place from May to July 2002.

A4.3.1 Phase two survey instrument

One of the key findings that emerged from the first phase of the pilot study was the potential inappropriateness of the ISE scale developed by Eastin and LaRose (2000) for use with members of the general public. In response to this finding a second survey instrument was developed. A detailed search of the literature revealed the existence of three other internet self-efficacy scales. The survey instrument from Phase One was modified to incorporate these scales, in order to determine whether any internet self-efficacy scale currently in existence would be suitable for use with members of the general public. The internet self-efficacy scales added to the survey instrument included the following:

1. The Internet Self-Efficacy Scale created by Torkzadeh and Van Dyke (2001) is a 17-item scale designed to "measure an individual's perception and self-competency with the Internet" (p. 1). The instrument is a five point likert scale where 1 corresponds to "strongly disagree" and 5 corresponds to "strongly agree". The scale was developed using 277 undergraduates from a university in the southwest of the United States. The mean age of the student participants was 24.88. The scale has high internal consistency, with a Cronbach alpha of 0.96.
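For reference, Cronbach's alpha is a standard statistic rather than anything specific to this scale; for a $k$-item instrument it is computed from the item variances and the variance of the total score as

$$\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma^{2}_{Y_i}}{\sigma^{2}_{X}}\right)$$

where $\sigma^{2}_{Y_i}$ is the variance of item $i$ and $\sigma^{2}_{X}$ is the variance of participants' total scores. Values close to 1, such as the 0.96 reported here, indicate high internal consistency.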

2. The Computer/Self-efficacy Scale created by Maitland (1996) was designed to "take the computer self-efficacy scale closer to the 21st century…by including items measuring Internet self-efficacy" (1996, p. 1). More specifically, the scale is a measure of World Wide Web self-efficacy, as it focuses only on this aspect of the Internet. The scale was developed using 36 undergraduate students from a North American university. No data regarding its reliability or validity is available. Because this scale incorporates both computer and Web self-efficacy, only those questions relating to the Web were included in the current survey instrument.

A third internet self-efficacy scale, created by Hnilo (1997), was also located, but a full copy of the instrument could not be obtained.

A4.3.2 Phase two participants

The BCC Library Service and the SJP Library Service were used to gather participants in the second phase of the Pilot Study. Ten participants took part in the study, 5 males and 5 females. The participants' ages ranged from 31 to 70, with half of the participants aged between 41 and 60. Four of the participants were unemployed. Three participants were engaged in part time employment. Seven of the study's participants had no income or earned less than $30 000 per annum. One participant did not provide information on annual income.

Five of the ten participants indicated that they used the Internet. Access to the Internet was obtained either from the library or at home. Length of Internet use varied greatly, ranging from less than 2 months to greater than 24 months. Weekly use was predominantly 2 (2 participants) to 3 (2 participants) days per week. One participant used it 7 days per week. Most participants used it on average 1 to 5 hours per week. One participant indicated use of between 15 and 20 hours per week, and 1 participant indicated no use at all. This final point contradicts that participant's earlier indication that they used the Internet at least 1 day per week. Only 1 participant had attended an Internet training session; the session was at a beginner level. Overall, most of the Internet-using participants (4 participants) rated themselves as having beginner level skills with the Internet. One participant rated himself or herself at the intermediate level. Of the participants who did not use the Internet, two had attended an Internet training session. Both indicated the session was at the intermediate level. The two participants who had not attended a training session rated themselves at a beginner level.

Table A4.2 provides details on the number of people approached and the number of participants at each branch. The survey for each library branch was also produced on a different paper colour; this made an extra question regarding branch affiliation unnecessary (a minimal sketch of this recoding is given after Table A4.2).

Branch                        Session Date and Time       Number Approached   Number of Participants

San Jose
Biblioteca                    Wednesday 15 May, 2.30pm    5                   1
Hillview                      Tuesday 14 May, 12pm        5                   3
Almaden                       Tuesday 14 May, 3pm         12                  1
Dr. Martin Luther King Jr.    Wednesday 15 May, 11am      8                   2

Brisbane
Inala                         Friday 12 July, 11am        1                   1
Indooroopilly                 Wednesday 26 June, 11am     11                  2

TOTAL                                                     48                  10

Table A4.2: Phase two pilot study sample
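As a minimal sketch of the recoding described above, the mapping below shows how branch affiliation could be recovered from paper colour at data entry. The actual colour assignments are not recorded in this appendix, so the values here are hypothetical.

```python
# Hypothetical sketch: recovering branch affiliation from the survey's
# paper colour at data entry. The colour assignments below are invented
# for illustration; the appendix does not record the actual colours.
PAPER_COLOUR_TO_BRANCH = {
    "blue": "Biblioteca",
    "green": "Hillview",
    "yellow": "Almaden",
    "pink": "Dr. Martin Luther King Jr.",
    "white": "Inala",
    "grey": "Indooroopilly",
}

def branch_for_survey(paper_colour: str) -> str:
    """Look up the library branch implied by a returned survey's paper colour."""
    return PAPER_COLOUR_TO_BRANCH[paper_colour.strip().lower()]

print(branch_for_survey("Green"))  # -> Hillview
```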

A4.3.3 Phase two gathering of data

One of the key findings that emerged from the first phase of the pilot study was the inability to obtain access to the desired participants through the Introduction to the Internet training classes conducted at the library branches. In response to this finding it was decided that participants would be obtained not via the training classes but by approaching those members of the community using the library (e.g. browsing the library shelves, waiting in queues). Once an individual was approached the researcher would briefly explain who they were, where they were from and what they were doing. The library user would then be asked whether they considered themselves to be someone who does not use, or is new to using, the Internet. If the library user responded in the affirmative they were asked if they would be interested in helping out with the survey. If the library user consented the researcher would provide a more detailed explanation of the survey and the study as a whole. To maintain consistency in the survey approach the standard checklist from Phase One of the pilot study was again used by the researcher. The researcher emphasised the importance of answering honestly and that all data gathered would be confidential. As participants completed and returned the surveys they were thanked for their participation.

A4.3.4 Issues arising from phase two: implications for the main study

The findings obtained from Phase Two of the Pilot Study support the findings obtained in the first phase of the study. Two key areas emerged as issues for consideration: (i) the availability of and access to the desired participants; and (ii) the content, presentation and administration of the survey instrument.

• Participant Availability and Access

The branches used in the BCC Library Service and the SJP Library Service did not provide access to a broad sample range. Even though changes were made to the data gathering method in response to the findings that emerged from Phase One, the study still did not allow access to the type of participants required. Only one fifth of the library users approached identified themselves as novice Internet users and were willing to take part in the study. Participants in the study were predominantly older and frequently retired members of the community. Accessing such a small segment of the entire population does not allow for generalisability of the results to the wider population. As in Phase One, this finding could be directly related to the small number of branches used for data gathering (a total of 6). Conducting the study over a longer period of time and at more branches may yield different results. However, when viewed in the light of the findings obtained in Phase One of the pilot study, it is strongly recommended that alternative avenues for obtaining study participants be explored as a means of supplementing those obtained via the public library system.

• Survey Content, Design and Administration

Many of the participants were unsure of the following words or phrases from the three scales used to measure internet self-efficacy: "internet hardware", "internet software", "troubleshooting", "why a task will not run on the Internet", "Internet program", "online discussion group", "decrypting", "encrypting", "scanning", "downloading", "recovering a file", "browser", "URL", "Google", "search engine" and "hypertext". The one participant who did not fully complete the survey provided feedback to the researcher regarding their views and concerns with the survey. In line with the findings from Phase One, the participant voiced concern that the survey was not relevant to them. The participant was a novice Internet user, and as such, was of direct relevance to the study. The participant was an older retired male. As in Phase One, these findings could be the result of a difference between the current sample and the samples used to create the internet self-efficacy scales being tested. All three scales used in the second phase survey instrument were developed using American university students.

The results of both Phase One and Phase Two of the Pilot Study suggest that current internet self-efficacy scales are not applicable for use with the general public. According to Bandura (1977) measures of self-efficacy must be tailored to the specific "reading level" of the population being examined. Consequently, it may be suggested that the "reading level" of participants within the current study (i.e. members of the general public who are novice Internet users) was significantly different to that of the participants used in developing the internet self-efficacy scales currently available (i.e. university students who are novice internet users).

Additional observations made by the researcher include the following:

• No difference was observed in the language and sentence structure required between members of the American and Australian general public.

• The survey instrument's front cover should be redesigned to be more aesthetically appealing and to make it easier for a prospective participant to access the necessary instructions.

• The survey instrument will need to be translated into languages other than English. This was apparent in the Inala branch of the BCC library service, where the library supports a large number of Chinese and Vietnamese users, and in the Biblioteca branch of the SJP Library Service, where Spanish-speaking citizens are the primary users of the library.

A4.4 Phase three

This phase involved the administration of the internet self-efficacy scale developed for use in the study to members of the target population. The instrument consisted of two sections: section one obtained socio-economic details and section two consisted of the internet self-efficacy scale. Six members of the San Jose community participated in the survey. All participants were obtained via the San Jose public library. The main focus of this pre-test was to determine whether there were any wording issues with the questions and to test the scaling approach (i.e. Bandura recommends using 0 to 100). It was noted that a few of the terms used in the scale were not easily understood by all participants; for example, URL and browser. Some of these words were "core" internet terms and it would be hard to remove them entirely from the scale. Thus, it was decided a two-pronged strategy would be used: (i) where possible, any acronyms would be spelled out in full; and (ii) a glossary would be included. When asked about the 0 to 100 scale many of the participants indicated they found it confusing to use, and expressed a preference for a 0 to 10 format instead. This revised format would be used in the final study.
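A minimal sketch of this scaling decision, assuming that responses captured on the revised 0 to 10 format are linearly rescaled back onto Bandura's recommended 0 to 100 strength metric for analysis; the function and variable names below are hypothetical.

```python
# Minimal sketch, assuming responses captured on the revised 0-10 format
# are linearly rescaled onto Bandura's recommended 0-100 strength metric.
# Names are hypothetical; the thesis does not prescribe this code.
def to_bandura_metric(responses: list[float]) -> list[float]:
    """Rescale 0-10 confidence ratings onto the 0-100 metric."""
    for r in responses:
        if not 0 <= r <= 10:
            raise ValueError(f"response {r} is outside the 0-10 range")
    return [r * 10 for r in responses]

print(to_bandura_metric([0, 3, 7.5, 10]))  # -> [0, 30, 75.0, 100]
```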

A4.5 Phase Four

This phase was the pilot test proper; that is, the entire survey instrument was tested on members of the target population. Ten members of the Brisbane community and seven members of the San Jose community participated. In both instances the local public library was used to obtain access to participants. No major issues were noted in either sample. In the US sample only one point was raised which altered the overall format of the survey instrument. A number of the participants voiced concern over having questions on the front cover (i.e. visible to others); they were concerned about privacy issues. In an effort to make the survey as short as possible questions had originally been placed on the front cover; however, with this comment in mind a decision was made to have no questions on the front or back covers. It was felt that doing this might help improve the response rate.

A4.6 Conclusion

Pre-tests and pilot tests are an invaluable part of any research project. Through a four phase testing programme the current research established a questionnaire that is most appropriate for obtaining the desired data from the target audience.

Appendix 5: Administration instructions


Data Collection Check List (US Study)

• When introducing yourself and inviting the participant to complete the survey (in no particular order):

o Introduce the research project - international study exploring internet use and non-use in community. DO NOT use the words/phrases ‘digital divide’ or ‘psychology’

o Confidential and anonymous – no names!

o 10 minutes to complete.

o Do not think too long on any one question – the first response is usually the best.

o Offer to stay with the person or to come back in 5 minutes to check on them.

o Survey is NOT an evaluation of the library internet service.

• Provide the participant with a copy of the survey instrument, a pen and the glossary.

• Check on the participant after 5 minutes to see if they need assistance.

• Collect the completed survey from the participant and thank them for taking part in the study.

Data Collection Check List (Australian Study)

• When introducing yourself and inviting the participant to complete the survey (in no particular order):

o Introduce the research project - international study exploring internet use and non-use in community. DO NOT use the words/phrases ‘digital divide’ or ‘psychology’

o Confidential and anonymous – no names!

o 10 minutes to complete.

o Do not think too long on any one question – the first response is usually the best.

o Find out if the participant has used the internet before. If they have (even just once) they can use the “standard survey”. If they have never used the internet provide them with the “abridged survey” – it has the star symbol on the front page, bottom right-hand corner.

o Offer to stay with the person or to come back in 5 minutes to check on them.

o When administering in BCC library emphasize that the survey is NOT an evaluation of the library internet service.

• Provide the participant with a copy of the survey instrument and a pen, and point out that a glossary is available on the back cover.

• Check on the participant after 5 minutes to see if they need assistance.

• Collect the completed survey from the participant and thank them for taking part in the study. Check that the survey has been completed in full; if there are any gaps or incorrectly answered questions, bring these to the participant’s attention.