The Second Stage of the Evolution


Typologising Networked Learning Communities: a mixed models approach

Steven Case and Chris Kubiak

This paper is one of four prepared for a symposium for the British Educational Research Association (BERA) Conference 2004, Manchester, September 14th – 18th.

This paper is in draft form. Please do not quote without asking us first.

Steven Case
Department of Applied Social Studies
University of Wales Swansea
Vivian Tower, Singleton Park
Swansea SA2 8PP

Chris Kubiak
NCSL Networked Learning Group
Derwent House, University Way
Cranfield, Bedfordshire MK43 0AZ

[email protected] [email protected]

Abstract

This paper describes the development of a typology of network activity using a mixed model approach. This approach was adopted in order to maintain alignment between the research team and a programme orientation that stresses meeting multiple audience needs and developing a deep and rich understanding of the role of networks in school reform. The methodology involved the integrated analysis of a qualitative dataset in combination with a quantitative dataset in order to illuminate distinctions between networks. The dynamics of the research process are described, including the intertwining of inductive and deductive reasoning and the integrated use of qualitative and quantitative analysis, which enabled considerable flexibility. The danger that computer-assisted data analysis can create detachment from the data is considered and strategies for its avoidance are offered.

Introduction

The Networked Learning Communities (NLC) programme was launched in September 2002 by the National College for School Leadership (NCSL). An NLC is a cluster of schools (minimum of 6, average of 10) working in interdependent partnerships that may also involve at least one Higher Education Institute (HEI), Local Education Authority (LEA) and/or Community Group partner. Since the inception of the programme, 137 NLCs have been established across England. Thus the programme has the potential to reach in excess of 1,000 schools, 20,000 staff and 500,000 pupils. As such, the programme is possibly the largest school-improvement initiative in the world.

NLCs are seen as a response to a decade of centralised "outside-in" school improvement initiatives such as the National Literacy and Numeracy strategies. While these strategies have raised attainment levels in schools, they are questioned in terms of their sustainability and impact on achievement gaps (Jackson, 2002). Moreover, while the traditional model of the school as an individual unit set within hierarchically designed structural forms (typically LEAs or School Districts) may have been appropriate during times of stability, our current times of rapid and multiple change demand a more fluid knowledge flow in order to foster responsiveness (Jackson, 2002).

Thus, within the UK and elsewhere, networks are emerging as having a key role to play in supporting innovation. Drawing on the OECD (2000) conceptualisation of networks, NLCs are an organisational form that promotes the dissemination of good practice, enhances the professional development of teachers, supports capacity building in schools, mediates between centralised and decentralised structures, and assists in the process of re-structuring and re-culturing educational organisational systems. The programme proposes that NLCs support the development of schools as learning communities and build capacity for continuous improvement through the development of local, context-specific practices and solutions (Jackson, 2002).

The emphasis in NLCs is on enquiry-based school improvement through sustained and inclusive opportunities for school-to-school collaboration. Significantly, it parallels increased teacher involvement in action-enquiry occurring elsewhere outside of the Higher Education/University accreditation framework (see for example, Myers, 1996). It is based upon an increasing recognition, by practitioners and theorists, of the action-enquiry process as fundamental to school improvement and enhanced capacity to lead and manage change effectively (Hopkins et al, 1994).

The research programme

By forming into Networked Learning Communities, schools have explicitly committed to becoming part of a development and enquiry programme. The Networked Learning Group are not just developing networks. They are also conducting research for system learning, which provides direction for networks in their own development efforts and influences the policy agenda in the UK. As such the programme is working at an interface between multiple stakeholders – chalk-face practitioners and network leaders, policy makers and researchers, local authorities and academics.

Working this interface and addressing the complexity of networked reform demands what Creswell (2003) calls a pragmatic, fit-for-purpose research strategy. In essence, a strategy that is not wedded to a particular methodology but is multi-model. The multi-model approach does more than address the oft-quoted rationale that 'social questions are inevitably complex and require multifaceted answers' (Bazeley, 2003; Furlong, 2004; Tashakkori and Teddlie, 2004). It also addresses a number of issues central to the programme's philosophical and practical orientation:

1. The multi-model research strategy is designed to meet the needs of policy-makers and network practitioners. As such, it attempts to take a pragmatic approach that lies between what Furlong (2004) calls the neo-realist 'scientific' paradigm and the humanities-based paradigm. That is, the programme acknowledges the scientific research tradition, in which research knowledge can improve our understanding of educational policy and practice, an approach that demands quantitative methods and meets the needs of policy makers and funders. However, it also values the humanities-based qualitative tradition, where the aim is to inspire practitioners to reflect and develop insight into the complex processes of networked learning rather than to tell them what to do.

2. Understanding “what works” in networked learning is a driving concern. The programme seeks to learn directly from practitioner experiences of building a network and draws directly from reports of their networked experience gathered through interviews and focus groups as well as their internal reports and artefacts. It also seeks to understand how the impact of networked learning will differ according to the nature of the NLC (for example, those dealing with challenging circumstances or ‘initiative-rich’) which requires the large quantitative datasets held by the Department for Education and Skills. Working with a mixed model enables researchers to draw on additional perspectives and insights beyond the scope of a single technique (Borkan, 2004) and create a holistic and deepened understanding that moves beyond the glibness of “what works.”

3. The programme seeks to understand the complexity of networks. Context matters. One reason scientific evidence is not translated into practice is a lack of fit with the practice setting (Stange, 2004). Traditionally, one-to-many modes of reform lack the sensitivity needed to address the unique challenges of the diverse contexts of schools (Jackson, 2002). The NLG seeks to take a different approach – it is not enough for the programme to determine that networks have an impact on pupil achievement; it must also capture the diversity of contexts involved. Thus, while the research programme contains summative, quantitative elements that demonstrate the scope and breadth of impact, it also conducts detailed qualitative studies which provide the richness of detail to allow practitioners to apply learnings to their own context.

4. The programme adopts a stance appropriate to the non-hierarchical, inside-out nature of networks. It attempts to stimulate and thus utilise practitioner innovation and ownership of the programme as a whole – a practitioner-based, school reform movement. Thus, where

possible, programme-wide enquiries involve working alongside networks using methods that are generative of networked learning in the first instance. Programme-based network facilitation is enquiry-based. Attempts are made to conduct research that is of value at the point of data collection. Such participatory approaches typically involve qualitative methods (Creswell, 2003), but a programme of this scale demands quantitative work as well.

It was within this philosophical framework that the year one review was developed. Its primary purpose was to encourage network members to critically reflect upon their first year of networked learning and to plan for year two. However, this review was designed to do double duty. While it was intended to be directly generative of networked learning, it was also conceptualised as an implementation review which, in combination with quantitative data sources, could be developed into a typology of networked learning communities.

A typology of networked learning was conceptualised as a description of the landscape of networked learning. It would move beyond the descriptive characteristics of networks to provide a description of the constructs at their essence – the nature of key learning and network building activities, their scope and reach and the effect of "network type". This typology would serve multiple purposes. It would provide a summative overview of NLC activity and capture the heterogeneity of NLCs. It would give a sense of the scope and breadth of activity in NLCs but would also provide guidance for purposeful sampling of networks for studies into specific areas or types of networks. However, in addition to a typology, the programme had a commitment to developing material that would provide networks with insights into the priorities and problems of the first year of networked life – a project that was ultimately qualitative. With these goals in mind, a multi-model approach was an obvious route.

Methodology and Analysis

The Annual Review design combined coding and analysis of qualitative ‘reporting templates’ completed by Networked Learning Communities with secondary data analysis of quantitative network statistics obtained from the Department for Education and Skills (DfES).

Qualitative reporting templates

The Networked Learning Group (NLG) conducted an 'Annual Review' of the progress of Networked Learning Communities (NLCs) at the end of their first year in the programme. This multi-process review necessitated the completion of several reporting templates that addressed separate elements of the network's experience and activity in the first year, namely:

• Traffic lights exercise – Each network revisited their original application form/proposal to the NLC programme, highlighting in red those elements that had never been initiated or had been abandoned, marking elements that had yet to start but would (in amber) and noting proposed activities that had been implemented (in green).

• Achievements template – Networks were asked to identify their key achievements from year one and their main plans for year two in relation to the 'levels of learning' framework that conceptualises NLC activity as occurring on multiple levels (i.e. a focus on pupil learning, adult learning, leadership learning, and school-wide, school-to-school, network-wide and network-to-network activity).

• Red Lights and New Starts templates – Networks were provided with separate sheets with more space to elaborate upon the activities highlighted in the Traffic Lights exercise as never started (red lights), or as not started but about to be, or implemented as per the original proposal (new starts).

As the procedure will detail, the Annual Review process created a stream of initial qualitative data from these bespoke ‘reporting templates’, precipitating relatively robust findings, initial judgements and analysis that were critically challenged in subsequent rounds of qualitative and quantitative data collection (see, for example, Silverman 2000).

Quantitative Secondary Data Analysis (SDA)

The Research Team identified key demographic variables by which networks could be grouped (i.e. pre-existing network status, phase of education, number of schools/network size) and elicited this data through an extant working relationship with the DfES, the parent organisation of the NLC programme. Data for each variable was cleaned and validated during an exhaustive exploratory data analysis process. Variables were segmented into groups or quartiles where appropriate, namely:

• Pre-existing status – established or new partnerships

• Phase of education – all infant/primary schools, all secondary schools, or a 'cross-phase' mix of infant/primary and secondary schools

• Network size – split into quartiles in order to distinguish between 'small' and 'large' networks

In the initial stages, these three variables were the focus of the analysis, but at later stages other network descriptors could be used as well. These demographic variables, known as 'attributes', were merged with the reporting templates in the NVivo qualitative data analysis package at the second stage of analysis (see below).
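As a hedged sketch of the segmentation step described above (the network names and sizes below are invented, and the team's actual processing was done with SPSS rather than Python), quartile-based grouping of a descriptor such as network size might look like:

```python
# Illustrative only: segment a quantitative network descriptor
# (number of schools) into quartile-based size bands that can later
# be attached to networks as 'attributes'.
import statistics

# Hypothetical networks and their school counts (not real data).
networks = {"NLC-A": 6, "NLC-B": 8, "NLC-C": 10, "NLC-D": 14, "NLC-E": 22}

sizes = sorted(networks.values())
# statistics.quantiles with n=4 returns the three quartile cut points.
q1, q2, q3 = statistics.quantiles(sizes, n=4)

def size_band(n_schools: int) -> str:
    """Label a network relative to the quartile cut points."""
    if n_schools <= q1:
        return "small"
    if n_schools >= q3:
        return "large"
    return "mid"

# The resulting attribute records are what would be merged into the
# qualitative analysis alongside phase and pre-existing status.
attributes = {name: {"size": n, "size_band": size_band(n)}
              for name, n in networks.items()}
```

In practice the cut points would come from the full population of 137 networks; the point of the sketch is only the quartile-to-label mapping.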

The Annual Review 'Mixed Model'

In general terms, a research methodology is chosen dependent upon whether the intent of the research is to specify information to be collected or allow it to emerge inductively (Creswell 2003). However, the Annual Review methodology was designed to follow a developmental sequence, where analysis began at the same time as data collection and the two processes could develop symbiotically (see also Silverman 2000).

The Annual Review process worked to a 'mixed model' or 'multistrand' design, as it fused qualitative and quantitative approaches at the stages of data collection, analysis and inference (Tashakkori and Teddlie 1998; 2004). For years, educational research methodology has struggled with the debate over the supremacy of the two approaches, a legacy of the so-called 'paradigm wars' (e.g. Furlong, 2004; Oakley 2000). Many educational and social researchers assert that it is impossible to combine inductive/qualitative and deductive/quantitative approaches in a single study because they are based on opposing paradigms and, moreover, distinct epistemological claims. However, this 'incompatibility thesis' is refuted by the paradigm of pragmatism that underpins mixed model designs (Howe 1988), on the argument that the research question is more important than either the method or the paradigm that underlies the method. Tashakkori and Teddlie reinforce this assertion:

"Study what interests and is of value to you, study it in the different ways that you deem appropriate, and utilize the results in ways that can bring about positive consequences within your value system" (1998: 30)

Proponents of mixed model designs argue that exploiting the advantages of both the qualitative and the quantitative paradigms adds complexity to a design and more accurately reflects the research cycle. This offers the potential for more rigorous, methodologically sound studies, as data can be integrated at some stage of the research process. In contrast, neither qualitative nor quantitative methods are sufficient in themselves to capture the trends and details of the situation (ref).

The Annual Review mixed the analysis and interpretation of qualitative and quantitative data sources in an iterative, dynamic fashion by expanding the meaning of the qualitative results from the reporting templates using the numerical results from secondary data analysis. Qualitative data was employed to explore statistical findings, whilst quantitative data assisted the researchers in their selection of qualitative cases to examine in greater depth. This process is summarised in figure 1 below.

[Figure 1, reproduced here in outline: DfES schools data and the networks' year 1 self review feed the process. Network funding was awarded on submission of a bid; bids were analysed for network descriptors, and implementation information was submitted in reporting templates. Data were aggregated at network level; annual review data were reformatted and imported into NVivo, alongside an SPSS-based attribute file. Three team members coded the data; the SPSS data were converted to text via Microsoft Excel and the NVivo-based datasets were merged into a single project file.]

Figure 1. Annual review analysis process.

Coding and Content Analysis

Analysis of qualitative data from the reporting templates was underpinned by content analysis, which involved the identification and tagging/coding of common themes within the data. All coded text was then compared, supplemented by cross-referencing/cross-tabulating of different codes in order to highlight the incidence (or lack of incidence) of cases where themes occurred together (see CAQDAS section; see also David and Sutton 2004). The coding tree that was eventually developed is detailed below:

Level of activity – Within school; School-to-school; Network-wide; Network-to-network

Learning activities – Research & enquiry; Intervisitation; Learning infrastructure; Training, INSET, speakers; Mentoring & coaching; Communication & knowledge sharing; Collaborative and joint activity

Network building – Strategic and Structural Development; Social Conditions; Meeting Points; Resourcing

Who – Pupils; Teaching Assistants; Teachers; Governors; Headteachers; Internal Facilitators; NCSL; External Educational Agencies; Family; Community; Adult (unspecified); Leaders

ICT and multimedia

Programme phase – Year 1 achievements; Year 2 priorities; Year 2 next steps

Initially, a coding frame was generated, which served as a descriptive catalogue to identify and define all the codes to be applied to the data. Specific codes were identified deductively (‘coding down’) from previous research and enquiry exercises (e.g. the Levels of Learning Activity) and also generated inductively as they emerged through the analysis (‘coding up’).

The deductive, a priori production of a list of coding categories (also known as ‘axial codes’ or ‘parent nodes’) and an initial coding scheme was informed and underpinned by the Levels of Learning Activity conducted by the Networked Learning Group research team. Further ‘summary codes’, focusing on the general characteristics of the reporting templates (e.g. learning activities, network building activities) and the sample studied were identified inductively and integrated within the coding frame following initial data collection and interrogation of the data. These summary codes allowed the research team to comprehend the data generated through the Annual Review processes and enabled quick and easy comparisons between single cases. Summary codes are generally considered to be non-distorting as they do not seek to impose a particular agenda on the text (David and Sutton 2004). The main utility of inductively generating summary codes was that it prevented the research team becoming too focused upon specific issues prematurely.
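The coding frame can be pictured as a tree of parent nodes with child codes. The sketch below is illustrative only (the helper function and the truncated code lists are our own, not the team's software), showing how codes generated inductively during analysis ('coding up') are slotted under deductively defined parent nodes:

```python
# A coding frame as a simple dict-of-lists tree. Parent categories here
# follow the deductive 'levels of learning' framework; child codes are
# abbreviated for illustration.
coding_frame = {
    "Level of activity": ["Within school", "School-to-school",
                          "Network-wide", "Network-to-network"],
    "Who": ["Pupils", "Teachers", "Headteachers"],
}

def add_inductive_code(frame: dict, parent: str, code: str) -> None:
    """Add a code that emerged from the data ('coding up') under a parent
    node, creating the parent if it did not exist in the a priori frame."""
    frame.setdefault(parent, [])
    if code not in frame[parent]:
        frame[parent].append(code)

# An inductively generated code is integrated into the existing frame.
add_inductive_code(coding_frame, "Who", "Teaching Assistants")
```

The design point mirrors the text: the a priori frame fixes the top-level structure, while the frame stays open to codes that emerge during analysis.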

Following provisional organisation of the qualitative data using axial and summary codes, ‘pattern’ coding (also known as ‘specific’ or ‘depth’ coding) was employed to illuminate underlying (latent) patterns within the data, facilitating the investigation of relationships within the specific content and the identification of specific recurrent themes. For example, categories were formed to focus upon ‘who’ were the instigators and recipients of activities (e.g. pupils, teachers, leaders).

This distinction between the manifest and latent content of a document embodies the difference between the surface meaning of a text and the underlying meaning of that narrative (Tashakkori and Teddlie 1998). By supplementing the a priori coding structure and manifest

content analysis (e.g. counting the number of incidences of a code or relationship) with latent content analysis, the Annual Review research was able to access the underlying meaning of the reporting templates with reference to the context of the Annual Review and its objectives.

Integrated Data Analysis

Qualitative and quantitative data were integrated at the analysis stage by:

• Identifying the presence or absence of meaningful themes, common and/or divergent ideas, beliefs, practices and relationships through qualitative data analysis

• Bringing quantitative data (demographic variables known as attributes) into the qualitative NVivo project

This permitted a comprehensive comparison of texts across subgroups (networks) by presenting matrices (cross-tabulations) of grouped data to elucidate patterns in the text through numeric displays, then interpreting the statistics using latent content analysis.

Computer-assisted qualitative data analysis (CAQDAS)

The emergence of advanced software packages for qualitative analysis has accelerated the onset of new levels of analytic integration of qualitative and quantitative data sources (Bazeley 2002). As Bazeley (2003) suggests, boundaries between numerically and textually based research are becoming less distinct, so data can now be more readily transformed from one type to another. For example:

• Data consolidation – combining quantitative and qualitative data to create new or consolidated variables or data sets

• Data conversion/transformation – converting one type of data to another, such as changing qualitative data into numerical codes for statistical analysis ('quantitizing') or transforming statistical output into a qualitative form ('qualitizing')
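To make the two operations concrete, here is a minimal, assumption-laden sketch: the documents, codes and the ten-school threshold below are invented for illustration and are not drawn from the study.

```python
# Quantitizing: turn coded qualitative segments into 0/1 indicator
# variables suitable for statistical analysis.
docs = {
    "network1": {"enquiry", "coaching"},   # codes applied to network1's text
    "network2": {"enquiry"},
}
all_codes = ["enquiry", "coaching"]
quantitized = {doc: [1 if c in codes else 0 for c in all_codes]
               for doc, codes in docs.items()}

# Qualitizing: turn statistical output (here, a school count) into a
# qualitative category label.
def qualitize(n_schools: int) -> str:
    # The threshold of 10 is an arbitrary illustrative choice.
    return "large network" if n_schools > 10 else "small network"
```

Either direction preserves the same underlying observations; only the representation changes, which is what lets the two strands of analysis talk to each other.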

To facilitate this element of the research, CAQDAS was employed in the form of code and retrieve procedures through the NVivo qualitative analysis program. Text for analysis was entered into the program, inspected and assigned codes. NVivo was then asked to retrieve (combinations of) codes and display relationships between codes in a tabular format (see Lea and Esterhuizen 2000). In addition, the NVivo qualitative data analysis software package has evolved the capacity to incorporate quantitative data (e.g. attributes from the SPSS program) into a qualitative analysis and to transform qualitative coding and matrices into a format which allows statistical analysis (e.g. matrix intersection search tables).

Although parallel analysis can provide a rich understanding of variables and their relationships, it limits the investigator to one type of data analysis (quantitative or qualitative) on each subset of the data. However, the integrated form of data analysis used in this study is more insightful in its ability to confirm and expand the inferences derived from one method of data analysis through a secondary analysis of the same data with a different approach (Caracelli and Greene 1993).

Retrieval involved complex cross tabulations of activity (see figure 2 below, which illustrates an example of a matrix intersection), which created an immediate sense of the context of particular achievements – for example, when networks report they are involved in enquiry, who is most likely to be involved? These cross-tabulations could be further explored using the demographic characteristics (attributes) of networks (e.g. size, phase, pre-existing status). For example, is headteacher involvement in enquiry more common in new networks or in those that are well established? As each cell in the matrix intersection search was hyperlinked to all the material coded at that point, the research team could remain in touch with the originating data – any inexplicable peaks of activity could be explored.

Figure 2. Matrix intersection table.
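Since the matrix intersection table itself cannot be reproduced here, the following sketch shows the computation such a search performs: counting, for each pair of codes, the text segments coded at both. The segments and code names are hypothetical; NVivo additionally hyperlinks each cell back to the underlying coded text.

```python
# Each coded text segment is represented as the set of codes applied
# to it (invented example data).
segments = [
    {"enquiry", "headteachers"},
    {"enquiry", "teachers"},
    {"coaching", "teachers"},
    {"enquiry", "headteachers"},
]
rows = ["enquiry", "coaching"]        # learning-activity codes
cols = ["teachers", "headteachers"]   # 'who' codes

# The matrix intersection: cell (r, c) counts segments coded at both
# r and c, answering questions like "when networks report enquiry,
# who is involved?"
matrix = {r: {c: sum(1 for s in segments if r in s and c in s)
              for c in cols}
          for r in rows}
```

Peaks in such a matrix (here, enquiry with headteachers) are exactly the cells the team would then drill into qualitatively via the hyperlinked source text.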

Conducting matrix intersection searches afforded the flexibility of including selected nodes from any tree in any analysis or profile of the coding, with each cell of a matrix created as a separate (child) node to aid organisation, analysis and interpretation of data. For example, as new research questions emerged within the programme, searches at the intersection of multiple nodes could be conducted – for example, are there any networks that are planning to involve teaching assistants in the use of ICT for coaching? The power of the matrix intersection tables is that they visually and comprehensibly reduce complex qualitative data into what Miles and Huberman (1994) term the 'think display'. This useful representational device renders patterns easier to identify and elucidates links between themes that can inform and direct further analysis. Moreover, in a programme that works in close interaction with non-researchers, the year 1 review data could be presented to colleagues in a powerful way.

The researchers combined more quantitative content analysis (counting the number of times themes were coded in close proximity) and qualitative content analysis (exploration of what underpins relationships and other findings). This ‘parallel’ model of contemporaneous implementation of qualitative and quantitative approaches, giving equal priority to both and integrating the two types of data at the analysis stage, enabled the research to move simultaneously between inductive and deductive reasoning, thus providing a comprehensive analysis of the research questions (see Creswell 2003).
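The more quantitative strand, counting themes coded in close proximity, can be sketched as follows; the paragraph positions and the two-paragraph window are invented for illustration.

```python
# Positions (e.g. paragraph indices within a reporting template) at
# which each theme was coded (hypothetical data).
positions = {
    "enquiry": [1, 5, 9],
    "dissemination": [2, 14],
}

def proximity_count(a: str, b: str, window: int = 2) -> int:
    """Count pairs of codings of themes a and b that fall within
    'window' paragraphs of each other."""
    return sum(1 for i in positions[a] for j in positions[b]
               if abs(i - j) <= window)
```

A count like this flags candidate relationships (e.g. between enquiry and dissemination); the qualitative strand then examines what underpins them in the source text.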

Collaborative / Team Research

The organisation and implementation of a mixed model approach requires more time and effort than single-approach studies (Creswell 2002), but such holistic designs are necessary to answer increasingly complex and multifaceted research questions (Tashakkori & Teddlie, in press). The NLG decided that adopting a mixed model approach to the Annual Review research process required a collaborative, team approach (in line with Shulha & Wilson 2004). The collaborative team research adopted for the Annual Review forced the reflexive process into open communication between members (see Creswell et al 2004; David and Sutton 2004). The Annual Review team regularly updated each other on their emerging analysis findings and interpretations, which facilitated a reflexive dynamic of new insights and inferences feeding into subsequent stages of data collection, analysis and interpretation. The formation of a three-person research team also relieved each individual researcher of the responsibility of having broad knowledge of both qualitative and quantitative designs at each stage of the research process, as team members brought different approaches, different perspectives and different experience to the Annual Review.

The collaborative research approach was not only necessary for tackling the complex research questions emerging from the Annual Review but also enriched the experiences and competencies of the researchers involved (Tashakkori and Teddlie 2004), who may previously have been limited to one preferred methodology. The joint effort created "unique and productive ways for individuals to gather, manage and interpret data" (Shulha and Wilson 2004: 644). Methodological decisions were grounded (inductively) in the needs and emerging complexity of the Annual Review rather than predicated (deductively) upon established methodological practices.

Discussion

Research in a programme such as the NLC cannot be insulated from the wider context of the organisation, the schools it serves and the policy arena. Indeed, as stated earlier, the NLG works the interface between these various bodies, demanding flexibility in all activities, including research work. The mixed model design enabled the research team to analyse a qualitative dataset drawn from a process that was participatory and generative of learning for networks. The mixed model design used for the Annual Review allowed the research team to address research questions that other methodologies could not, simultaneously addressing confirmatory and exploratory questions, whilst verifying and generating theory in the same study. This produced multiple inferences that could confirm and complement each other, with inferences made at one stage of the research informing future questions and feeding into the design of subsequent phases. As Tashakkori and Teddlie state, "we need a variety of data sources to completely understand complex multifaceted institutions or realities" (2004: 16). The following section discusses first the dynamics and challenges of the mixed model research and considers implications for research practice. The final section considers future directions.

Simultaneous inductive/deductive analysis

Authors working with mixed methods (for example, Patton, 1988) describe the privileging of one data type over another, or analysis involving inductive-deductive sequences (in which qualitative data is used to create quantitative data collection instruments). In contrast, the mixed/multistrand model constructed for the Annual Review combined qualitative and quantitative processes at the stages of data collection, data analysis and inference in a parallel, 'multiple simultaneous' form that moved concurrently between the inductive and deductive (see, for example, Tashakkori and Teddlie 1997). Drawing from Huberman and Miles (1994), who argue that both deductive

and inductive methods are legitimate and useful paths of analysis, axial, summary and manifest codes were adapted from the Levels of Learning framework whilst pattern and latent nodes were generated inductively as they emerged during deeper analysis of the text.

The simultaneous inductive/deductive approach was adopted for a number of reasons. First, the programme strives for an accumulation of knowledge over its four-year life. By working deductively using concepts from elsewhere, such as the levels of learning model, a cumulative understanding of networked learning could be built from layers of different data sources. Second, the uptake of research findings in our organisation relies on fit with frameworks already in existence. Shared conceptual frames such as the levels of learning model serve as what Wenger (1998) would call a boundary object – a point of shared meaning that allows practices from one arena to be easily comprehensible to practitioners in another. However, this is not to say that the inductive development of codes was secondary. After all, networks are an innovative organisational form in themselves, one that often presents surprising turns. The research team wanted to remain open to the unexpected elements of networks and capture the subtlety of network activities through sensitive application of coding.

Simultaneous qualitative and quantitative analysis

In working with large datasets, it can be easy to lose sight of the subtle interactions between variables (for example, the relationship between enquiry and dissemination in network learning activities) or the wider context of the network itself. The Annual Review research team was inspired by Bazeley's cogent promotion of "the capacity to link qualitative with quantitative data and qualitative interpretation of text with interpretation of numeric analyses (p. )" as a means to open up an entirely new range of analyses and engender "creative methodological thinking" (Bazeley 2002).

NVivo’s matrix intersection searches enabled the team to transform qualitative data into quantitative and carry out finely tuned scoping searches to identify peaks in the interactions between activities. These peaks of activity could be explored further in fine grained qualitative analysis. These matrix intersections themselves could be interrogated further by importing the quantitative descriptors of network attributes. These attributes wrapped greater context around the data (for example, network maturity, types of schools involved, size of network or location in the UK) and enabled enriched analysis of patterns across the data.

The flexibility of the methods above presents advantages and challenges. Analysing data interdependently and simultaneously lent the research team the facility to constantly compare and interpret the most current results against tentative explanations and inferences generated from previous rounds of analysis (see, for example, David and Sutton 2004). The research team could shuttle between "eyeballing" quantitative output for patterns and returning to the original qualitative data to do a second level of thematic analysis. The process also created a dataset that was easy to interrogate. Preliminary results could be fed back to the wider research team and the programme as a whole, allowing fresh perspectives and more probing analysis questions. Thus, the analysis procedure allowed a more fluid and reflexive dynamic through which the focus of the research could change and evolve where appropriate. Through frequent modification and challenging of methods and assumptions, the researchers developed the flexibility, adaptability, technical competency and innovative problem-solving ability that Bazeley (2003) considers essential to address any divergences and contradictions that arose within and between data sources.

However, this approach also presents challenges. First, as Kubiak and Bertram (2004) argue, networks are a dynamic, ever-changing organisational form. Schools move in and out of networked partnerships as needs and issues change in priority. Networks have a boundless quality in that NLCs may overlap with a range of other informal and formal networks (for example, many networks were also Education Action Zones). Thus some quantitative network descriptors may quickly date or inadequately represent context. The research team therefore took care to select descriptors in which they could retain a degree of confidence.

Second, a challenge lies in drawing together a diverse research team containing members with solid groundings in both qualitative and quantitative research methods. The challenge is not so much establishing a disciplinarily diverse team as building productive dialogue between disciplines and working on ever-shifting ground, which demands both an openness to challenge and a certainty of research principles. Such an approach is not always a comfortable space for a researcher.

Content Analysis and CAQDAS (Computer Assisted Qualitative Data Analysis Software)

CAQDAS has a number of critics. Fielding and Lee (1998) highlight an 'epistemological suspicion' in relation to CAQDAS and the inherent dangers of transforming qualitative data into quantitative data (e.g. quantitizing data for content analysis). While many acknowledge that using computer software speeds up the processes of searching data, identifying relationships, coding, modelling and building theory from data, computerised analysis may discourage or preclude engagement with data, such that the researcher may only skim the surface of even the richest material (see Weitzman and Levkoff, 2000). Systematic coding of larger amounts of textual data can be overly mechanistic, generic and superficial. Thus, CAQDAS can induce a lack of contact with the data itself, such that a lack of understanding of the analysis process produces meaningless outputs (see, for example, Crowley et al 2002). Moreover, as with manual analysis, segments of text coded at the same category/node bear an apparent conceptual relationship to each other, but once retrieved can appear divorced from their original context (Dey 1999). Detachment from context can be compounded by the transformation from qualitative to quantitative data. Thus, while CAQDAS presented great power as an analysis tool, it also presented possible pitfalls.

First, the hierarchic ('fixed') conceptualisation of network activity created by structuring codes into a node tree has the potential to be inappropriate to an investigation, particularly if researchers apply and interpret it unquestioningly (Crowley et al 2002). For example, presenting a distinction between the "network learning activity" and "network building activity" node trees could be arbitrary and obscure the way in which learning is ultimately a network building activity. However, the Annual Review team would counter that the node tree facility in NVivo was used as a simple organising system to create common frameworks within the team and enable more efficient interrogation of text. The Annual Review node tree could be (and was) restructured at any time, so it was never used in a hierarchical or fixed manner – the same piece of data was coded at multiple nodes. By creating matrix intersections that could demonstrate relationships between nodes in different hierarchies (for example, activities that served both learning and network building needs), the team could cut across any arbitrary and potentially superficial distinctions created by hierarchical node trees.

Second, as stated earlier, the research team employed inductive, grounded data collection and classification (rather than over-emphasising pre-emptive, deductive classification) to facilitate the exploration and development of more valid systems of coding, which then formed the basis for subsequent deductive forms of enquiry (in line with Berg, 2001). Throughout the coding process, the research team established a "no assumptions" rule – they never assumed shared meaning or common understanding of text or of the coding definitions employed without discussion. They worked to ensure inter-rater reliability by meeting regularly to coordinate and moderate coding practices. During these meetings, researchers were required to justify inferences and any connections identified between themes using the heuristic suggested by David and Sutton (2004): for any pattern to be considered meaningful, it should appear in at least three examples within the text. Such evidencing also avoided unquestioningly equating the researchers' perceptions with those of respondents, thus rendering the analysis more accountable and more reflexive (see Berg, 2001; Fielding and Lee, 1998). There was also a prohibition on what came to be known as "blitz coding" – coding the data in intensive sessions running over a number of days, leading to "data fatigue" that could cause the research team to work mechanistically rather than reflectively.
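A crude version of the agreement check that such moderation meetings support can be sketched numerically. This is an illustrative, hypothetical example (the coders, segments and code labels are invented; the Annual Review team moderated through discussion rather than through any particular statistic): it computes simple percent agreement between two coders over the same segments.

```python
# Illustrative sketch only: simple percent agreement between two coders.
# The segment codings below are hypothetical, not Annual Review data.
coder_a = ["enquiry", "dissemination", "enquiry", "network building"]
coder_b = ["enquiry", "dissemination", "joint practice", "network building"]

# Count segments where the two coders applied the same code.
agreements = sum(a == b for a, b in zip(coder_a, coder_b))
agreement_rate = agreements / len(coder_a)

print(agreement_rate)  # prints 0.75
```

A figure like this is only a starting point: the disagreeing segment (here, the third) is exactly the case a moderation meeting would discuss, requiring each coder to justify the inference behind their code.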

Third, the team balanced immersion in coding with analysis generating both broad pictures of the data and fine-grained interrogations. Any detachment from the data was countered by NVivo's practical, readily searchable coding structure and interactive dataset, which allowed the team to uncover and explore surprising findings. For example, the matrix intersection search tables were interactive – any surprising or incongruous findings could be probed further by working through the data at each intersection.

Fourth, the research team remained aware that CAQDAS software can shape the analysis process (e.g. by presenting coded text in a particular order or form that may well affect the researcher's perceptions). Thus, a key issue for the research team was to ensure that NVivo was employed to assist the analysis process, rather than to drive the abstraction of information from data and the direction of the research (emphasised by Lee and Esterhuizen 2000). With this in mind, the research team acknowledged the underpinning challenge of how best to utilise NVivo's capabilities to further the objectives of the Annual Review. This issue was addressed by consistently reflecting on the impact of NVivo within the research team. For example, they presented their work to the wider research team and the wider organisation to gather challenge, reflection and analysis questions from those unfamiliar with the software. They also used NVivo to make repeated passes through the Annual Review data, giving momentum to the analysis and rendering it difficult to make glib summaries of the data. In accordance with Fielding and Lee (1998: 84), we believe that "the computer delivered data management benefits rather than transforming analytic practice".

Future directions and applications

The analysis ultimately allowed both breadth and depth. The process yielded an overview of key areas of activity in NLCs, and significant interactions between variables could be identified. It also enabled depth: significant peaks of activity could be explored through fine-grained qualitative analysis. Throughout the process, the research team has been careful to avoid "over-sweating the data" – making generalisations and interpretations beyond the scope of the results from the Annual Review. As far as possible, the research team eschewed conclusions that were not internally valid because they rested on interpretations open to alternative explanations. External data sources have been drawn on for triangulation.

The overall gain was that the process yielded a typology of networked learning. Because the coding frame will be used in subsequent reviews and other investigations, the evolution of the typology can be tracked over time. The power of this typology also lies in its ability to lever purposeful sampling (Patton, 1986) or what Gorard (2002) calls 'new political arithmetic'. Transforming qualitative data into quantitative form enabled the research team to identify particular patterns in the first year of networked learning. These patterns will be explored in more depth using qualitative techniques with a sub-set of networks selected from the typology. As Gorard (2002) explains, this approach moves research from attempting both to describe and to explain phenomena using only in-depth approaches (which are, of necessity, smaller in scale) to one that takes into account the overall landscape of networked activity.
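The mechanics of typology-driven purposeful sampling can be sketched simply. The example below is hypothetical (the network names and type labels are invented, and the real selection would weigh many criteria, not just first occurrence): it shows how, once each network carries a typology label, a sub-set covering every type can be drawn for follow-up qualitative work.

```python
# Illustrative sketch only: drawing one network per typology category
# for in-depth follow-up. Names and type labels are invented.
networks = [
    ("Network A", "enquiry-led"),
    ("Network B", "dissemination-led"),
    ("Network C", "enquiry-led"),
    ("Network D", "leadership-focused"),
]

# Keep the first network encountered for each typology category.
sample = {}
for name, network_type in networks:
    sample.setdefault(network_type, name)

print(sorted(sample.values()))  # prints ['Network A', 'Network B', 'Network D']
```

The point of the design choice is coverage: because the sample spans every category of the typology, the subsequent in-depth qualitative work can speak to the overall landscape of networked activity rather than to an unrepresentative handful of cases.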

Overall, the significant learning through the process has been that the leadership of the research team itself is key: bringing together a disciplinarily diverse team, facilitating inter-rater reliability, and creating a reflective and dialogue-rich environment underpin a methodologically robust process.

References

Bazeley, P. (2002) 'The evolution of a project involving an integrated analysis of structured qualitative and quantitative data: from N3 to NVivo', International Journal of Social Research Methodology, Vol. 5, No. 3, pp. 229-243.

Bazeley, P. (2003) 'Teaching Mixed Methods', Qualitative Research Journal, Special Issue, pp. 117-126.

Berg, B.L. (2001) Qualitative Research Methods for the Social Sciences. Boston: Allyn and Bacon.

Borkan, J.M. (2004) 'Mixed Methods Studies: a foundation for Primary Care Research', Annals of Family Medicine, Vol. 2, No. 1, pp. 4-6.

Caracelli, V.J. and Greene, J.C. (1993) Data analysis strategies for mixed-methods evaluation designs. Educational Evaluation and Policy Analysis, 15(2), 195-207.

Creswell, J.W. (2003) Research Design: Qualitative, Quantitative, and Mixed Methods Approaches. Thousand Oaks, CA: Sage.

Crowley, C., Harre, R. and Tagg, C. (2002) 'Qualitative research and computing: methodological issues and practices in using QSR NVivo and NUD*IST', International Journal of Social Research Methodology, Vol. 5, No. 3, pp. 193-197.

Fielding, N. and Lee, R.M. (1998) Computer Analysis and Qualitative Research. London: Sage.

Furlong, J. (2004) 'BERA at 30: Have we come of age?', British Educational Research Journal, Vol. 30, No. 3, pp. 343-358.

Gilbert, N. (2001) Researching Social Life. London: Sage.

Gorard, S. (2002) 'How do we overcome the methodological schism (or can there be a "compleat" researcher)?', ESRC Teaching and Learning Research Programme, Occasional Paper Series, Paper 47.

Hopkins, D., Ainscow, M. and West, M. (1994) School Improvement in an Era of Change. London: Cassell.

Howe, K.R. (1988) Against the quantitative-qualitative incompatibility thesis or dogmas die hard. Educational Researcher, 17, pp. 10-16.

Jackson, D. (2002). Networks and networked learning: knowledge management and collaborative enquiry for school and system learning, paper presented for the annual SCETT conference, Grantham, 4th – 5th October, 2002.

Kubiak, C. and Bertram, J. (2004) 'Network Leaders' Perspectives on the growth of Networked Learning Communities', paper presented to the British Educational Research Association (BERA) Conference, Manchester, 15th-18th September, 2004.

Lee, R.M. and Esterhuizen, L. (2000) 'Computer software and qualitative analysis: trends, issues and resources', International Journal of Social Research Methodology, Vol. 3, No. 3, pp. 231-243.

Miles, M.B. and Huberman, A.M. (1994) Qualitative Data Analysis: A Sourcebook of New Methods. Newbury Park, CA: Sage.

Mishler, E. G. (1990) Validation in enquiry-guided research: The role of exemplars in narrative studies. Harvard Educational Review, 60, pp. 415-442.

Myers, K. (1996) School Improvement in Practice: Schools Make A Difference Project. London: Falmer Press.

Oakley, A. (2000) Experiments in Knowing. Cambridge: Polity.

OECD (2000) Schooling for tomorrow: innovation and networks, Portuguese Seminar, Lisbon, 14th-15th September, 2000.

Patton, M.Q. (1986) Utilisation-Focused Evaluation. Newbury Park, CA: Sage.

Robson, C. (2002) Real World Research: A Resource for Social Scientists and Practitioner-researchers. London: Blackwell.

Shulha, L.M. and Wilson, R.J. (2004) 'Collaborative Mixed-Method Research', in A. Tashakkori and C. Teddlie (eds) Handbook of Mixed Methods in Social and Behavioral Research. Thousand Oaks, CA: Sage.

Silverman, D. (2000) 'Analyzing talk and text', in N.K. Denzin and Y.S. Lincoln (eds) Handbook of Qualitative Research, pp. 821-834. Thousand Oaks, CA: Sage.

Tashakkori, A. and Teddlie, C. (1998) Mixed Methodology. London: Sage.

Tashakkori, A. and Teddlie, C. (2004) Handbook of Mixed Methods in Social and Behavioural Research. London: Sage.

Weitzman, P.F. and Levkoff, S.E. (2000) Combining qualitative and quantitative methods in health research with minority elders: Lessons from a study of dementia caregiving. Field Methods, 12(3), pp. 195-208.

Wenger, E. (1998) Communities of Practice: Learning, Meaning and Identity. Cambridge: Cambridge University Press.
