

Clements, D. H., & Sarama, J. (2015). Methods for developing scientific education: Research-based development of practices, pedagogies, programs, and policies. In O. N. Saracho (Ed.), Handbook of research methods in early childhood education: Review of research methodologies (Vol. 1, pp. 717–751). Charlotte, NC: Information Age.

CHAPTER · MARCH 2015


CHAPTER 21

METHODS FOR DEVELOPING SCIENTIFIC EDUCATION: Research-Based Development of Practices, Pedagogies, Programs, and Policies¹

Douglas H. Clements and Julie Sarama

Many types of studies contribute to the field of education. But too few, in our opinion, go to the heart of the educational enterprise: developing scientifically based practices, pedagogies, programs, and policies. Whether developed for children or their teachers, these are the main malleable factors that affect the quality of children's educational experiences. In this chapter, we describe why we believe that this type of research-and-development program should take precedence in early childhood education and then describe a framework for such a program.

Handbook of Research Methods in Early Childhood Education, pages 717–751. Copyright © 2015 by Information Age Publishing. All rights of reproduction in any form reserved.

WHY DO WE NEED RESEARCH-BASED DEVELOPMENT OF PRACTICES, PEDAGOGIES, PROGRAMS, AND POLICIES?

What directly affects the quality and effectiveness of young children's experiences in the classroom? Teachers do, including their practices and pedagogical strategies (Darling-Hammond, 1997; Ferguson, 1991; National Research Council, 2001; Schoen, Cebulla, Finn, & Fi, 2003). In addition, programs or curricula for children have a substantial impact on teachers and their practices and on what children experience and learn (Goodlad, 1984; Grant, Peterson, & Shojgreen-Downer, 1996; National Research Council, 2009; Whitehurst, 2009). Similarly, professional development practices (e.g., workshops) and programs (e.g., certifications, degrees) affect teachers (Darling-Hammond, 1997; Ferguson, 1991; National Research Council, 2001; Sarama & DiBiase, 2004; Schoen et al., 2003). (Indeed, a combination may be best. Top-down imposition of a new curriculum with limited professional development and support, for example, may have limited influence on teachers' beliefs and practices; Stein & Kim, 2009.)

However, the quality of all these varies widely and does not show steady improvement year to year (Early et al., 2005; Goodlad, 1984; National Research Council, 2009). A major reason is that practices, programs, and the policies that should support them are rarely developed or evaluated and revised following systematic, much less scientific, research methods (Clements & Battista, 2000; Davidson, Fields, & Yang, 2009). We begin by defining what we mean by scientific research. Science includes the observation, description, analysis, hypothesizing, experimental investigation, and theoretical explanation of phenomena. Scientific knowledge is accepted as more reliable than everyday knowledge because the way in which it is developed is explicit and repeatable. "Our faith [in it] rests entirely on the certainty of reproducing or seeing again a certain phenomenon by means of certain well defined acts" (Valéry, 1957, p. 1253, as quoted in Glasersfeld, 1995).

Scientific method, or research, is disciplined inquiry (Cronbach & Suppes, 1969). The term "inquiry" suggests that the investigation's goal is answering a specific question. The term "disciplined" suggests that the investigation should be guided by concepts and methods from disciplines and connected to relevant work in those disciplines, and also that it should be in the public view so that the inquiry can be inspected and criticized. The use of research methods, and the conscientious documentation and full reporting of these processes (data collection, argumentation, reasoning, and checking for counterhypotheses) distinguishes disciplined inquiry from other sources of opinion and belief (Cronbach & Suppes, 1969; National Research Council, 2002; Shulman, 1997).

Science does not, however, produce the "truth" or a single correct view. It provides reliable ways of dealing with experiences and pursuing and achieving goals (Glasersfeld, 1995). It involves the process of progressive problem solving (Scardamalia & Bereiter, 1994). Thus, the goal for scientific methods of research and development cannot be to develop a single "ideal" practice, pedagogy, program, or policy (hence referred to as products), but rather dynamic problem solving, progress, and advancement beyond present limits of competence (Dewey, 1929; Scardamalia & Bereiter, 1994; Tyler, 1949). Ironically, another implication is that educational products should be based on research, as defined here. Given that traditions, social interactions, and politics have strong effects on education, the checks and balances of scientific research are essential to progress.

Still, does that not limit the creativity of researchers, developers, and teachers? Somewhat ironically, we believe the opposite. Scientific knowledge is necessary, but not sufficient, for the continued development of high-quality educational products. More than 120 years ago, William James made this same argument, speaking of the young science of his own time:

You make a great, a very great mistake, if you think that psychology, being the science of the mind's laws, is something from which you can deduce definite programmes and schemes and methods of instruction for immediate classroom use. Psychology is a science, and teaching is an art; and sciences never generate arts directly out of themselves. An intermediary inventive mind must make the application, by using its originality. (James, 1892/1958, pp. 23–24)

James argues that scientific knowledge is applied artfully to create teaching products. Such research-to-practice methods are included in our framework. However, this method used alone is incorrect in its presumptions (that extant research is a sufficient source for development of products), insensitive to changing goals in the content area (new standards are created at a fast pace that research cannot comprehensively address), and unable to contribute to a revision of the theory and knowledge on which it is built (i.e., it is inherently conservative, evaluating "what is"), the second critical goal of scientific research and development. In contrast, research should be present in all phases of the creative or development process.

In this way, the framework is mainly about researcher-developers (which includes some teachers, of course), but before we leave this section, let's address whether such a scientific approach denies professionalism and creativity to classroom teachers. Again, we claim it promotes them, in scientists, developers, and teachers. One reason is that professionals such as doctors and teachers share a scientific knowledge base; that is, like all professionals, they share scientific guidelines of systematic, rather than idiosyncratic, practice. Such systematic practice is more effective and amenable to scientifically based improvement than private, idiosyncratic practice (Raudenbush, 2009). This is not to say that teachers should deliver "scripted" lessons with little or no interpretation. Rather, it is to argue that their creativity should be in building upon the research foundation, using the resources of science (and the wisdom of expert practice not yet studied) to create environments and interactions that promote their children's development and learning. It does mean that many of us hold notions of teacher creativity that may benefit from a revision. As a personal example, when one of us (Clements) taught kindergarten, all the early childhood educators around him believed that "creative teachers" made up all their ideas and made all their materials. I too believed, then, that if I copied an idea or game that another successful teacher used, I was "less creative." Such thinking erects an unfortunate barrier to the spread of the most effective practices and programs. Instead, we believe, teachers' creativity is best used to select and imaginatively apply the best of the resources of science and the wisdom of expert practice. All this is not to say that scientific programs cannot be outperformed (e.g., by a talented, idiosyncratic teacher). James had more to say on this matter:

The science of logic never made a man reason rightly, and the science of ethics (if there be such a thing) never made a man behave rightly. The most such sciences can do is to help us to catch ourselves up and check ourselves, if we start to reason or to behave wrongly; and to criticise ourselves more articulately after we have made mistakes. (James, 1892/1958)

Thus, there are several approaches, but each should be consistent with what is known about teaching and learning. Those that appear successful, such as our talented teacher's, should be documented, investigated as to why the approach is successful, and added to the research literature. Without such research methodologies, the talented teacher's practices and materials will be limited in their contribution to the myriad teachers and researcher-developers who come after.

This is also not to say that all research should be scientific. Other types of research may make serious contributions, such as narrative (Bruner, 1986) or humanistic (Schwandt, 2002) perspectives, historical research (Darling-Hammond & Snyder, 1992), aesthetic approaches (Eisner, 1998), or literary criticism (Papert, 1987), just to name a few. Such approaches complement the scientific research methods described here. Of course, no single scientific finding or set of findings should dictate pedagogy. Consistent with James, Dewey stated the following.

No conclusion of scientific research can be converted into an immediate rule of educational art. For there is no educational practice whatever which is not highly complex; that is to say, which does not contain many other conditions and factors than are included in the scientific finding. Nevertheless, scientific findings are of practical utility, and the situation is wrongly interpreted when it is used to disparage the value of science in the art of education. What it militates against is the transformation of scientific findings into rules of action. (Dewey, 1929, p. 19)

Consistent with Dewey's formulation, our framework for research-and-development rejects strict "rules" but values scientific research for its practical, and political, utility.

In summary, scientific knowledge is valued because it offers reliable, self-correcting, documented, shared knowledge based on research methodology (Mayer, 2000; National Research Council, 2002). Education is a design science (Brown, 1992; H. A. Simon, 1969; R. Walker, 2011; Wittmann, 1995), and knowledge created during research-and-development should be both generalized and placed within a scientific research corpus, peer reviewed, and published. However, this is not a deterministic science, and certainly not one limited to (although it includes) quantitative methods (Dewey, 1929). As the framework presented here will make clear, many research methodologies, mostly qualitative, are used to produce research-based education.

EARLY ATTEMPTS TO BASE EDUCATION ON RESEARCH

Research, especially psychological research from the time of William James on, has played a substantial role in education, especially in early childhood (Clements, 2008b). However, its role has been less to produce practical materials for teaching than to interpret the phenomena of early education (Ginsburg, Klein, & Starkey, 1998). As stated previously, this is important, but indirect. We believe that most of the ways that development might be based on research should be employed. We next describe a small number of early attempts to base product development on research.

Early efforts to write research-based teaching approaches and materials often were grounded in broad philosophies, theories, and empirical results on learning and teaching. For example, in early childhood, early applications of Piaget's theories often led to suggestions that children be taught to perform accurately on Piagetian clinical tasks. Others incorporated materials directly adapted from those tasks (Forman & Fosnot, 1982; Kamii, 1973). These were not particularly successful. Even detailed analyses of Piagetian research failed to guide the development of programs or curricula in directly useful ways (Duckworth, 1979).

Others based their educational programs on Piaget's constructivist foundation. For example, Duckworth encouraged teachers to create environments in which children would "have wonderful ideas" (Duckworth, 1973). Such programs have been arguably more successful, although the interpretations varied widely (Forman, 1993). Indeed, the programs were distinct. The broad philosophy and theory, unsurprisingly, leaves much room for interpretation and provides little specific guidance for teaching or the development of materials.

In summary, the research-to-practice model has a less than stellar historical record (Clements & Battista, 2000; Gravemeijer, 1994b). As stated previously, it also is limited in its contribution to either theory or practice (Clements, 2007, 2008b). The alternative we propose here is less about new research approaches or practices, and more about a specific framework for synthesizing those that have been used successfully into a complete scientific research-and-development system for designing and evaluating educational products.

A COMPREHENSIVE FRAMEWORK FOR RESEARCH-BASED EDUCATION

We developed a comprehensive framework detailing the methods used to create and evaluate research-based practices, pedagogies, and programs (with implications for policies) so we could contribute to both theory and practice. First, we established goals based on the belief that any valid scientific development product should address two basic issues, effects and conditions, in three domains: practice, policy, and theory, as diagrammed in Figure 21.1.

To achieve these goals, researcher-developers must build on previous research, structure and revise the nature and content of components in accordance with models of children's thinking and learning in a domain, and conduct formative and summative evaluations in a series of progressively expanding social contexts. These form the categories of research-and-development activity that define our framework. These categories include ten phases of such activity that warrant claiming that a product is based on research (Clements, 2007; Sarama & Clements, 2008). The categories and phases involve a combination of research methods; no single method would be adequate. For example, design experiments (Brown, 1992; Cobb, Confrey, diSessa, Lehrer, & Schauble, 2003; Ruthven, Laborde, Leach, & Tiberghien, 2009; The Design-Based Research Collective, 2003; R. Walker, 2011), developed as a way to conduct formative research to test and refine educational designs

Effects
- Practice: Is the intervention effective in helping children achieve specific learning goals? (all) Are the intended and unintended consequences positive? (6–10) Is there credible documentation of both a priori research and research indicating the efficacy of the approach as compared to alternative approaches? (all)
- Policy: Are the goals important (e.g., to meeting standards)? (1, 5, 10) What are the effect sizes? (9, 10) What effects does it have on teachers? (10)
- Theory: Why is the intervention effective? What were the theoretical bases? (1, 2, 3) What cognitive changes occurred and what processes were responsible? That is, what specific components and features account for its impact and why? (4, 6, 7)

Conditions
- Practice: When and where? Under what conditions is the intervention effective? (7) (Do findings generalize?) (8, 10)
- Policy: What are the support requirements for various contexts? (8–10)
- Theory: Why do certain conditions change the effectiveness? (6–10) How do specific strategies produce previously unattained results and why? (6–10)

Note: Numbers in parentheses refer to the phases of the Framework described in the following sections. Figure 21.1 Goals of research and development (adapted from Clements, 2007).

(Collins, Joseph, & Bielaczyc, 2004), are central, but are usually limited to pilot testing (Fishman, Marx, Blumenfeld, Krajcik, & Soloway, 2004; National Research Council, 2004, p. 75), put too little focus on the development of curricula, and do not address the full range of questions (Clements, 2008a; R. Walker, 2011). Our work is based on the assumption that all appropriate methods should be synthesized into a coherent, complete framework for research and development, as described in Figure 21.2 (see Clements, 2007, for a full description). (Space prohibits describing the work of many researcher-developers from which this framework was abstracted, but see citations and descriptions of their work in Clements, 2002, 2007, 2008b; Clements & Battista, 2000; Sarama & Clements, 2008.)

DESCRIPTION AND APPLICATION OF THE FRAMEWORK

In this section, we describe the categories and phases of the Framework in more detail. We also briefly illustrate its application using our instantiation of it in developing and evaluating the Building Blocks early childhood mathematics curriculum and the TRIAD model of intervention at scale.

What is the effectiveness as implemented in realistic contexts? Summative Assessment: Empirical evidence is collected on the effects of the intervention. Phases 9 and 10 both use randomized field trials and differ on scale. Phase 9 checks the efficacy. Phase 10 examines the fidelity of enactment, and sustainability, of the curriculum when implemented on a large scale, and the critical contextual supports.

Is it usable by, and effective with, various groups of children and teachers? Formative Assessment: Empirical evidence is collected to evaluate the appeal, usability, and effectiveness of an instantiation of the intervention, which is revised after each phase. What meaning do teachers and children give to the intervention in expanding social contexts? Phase 6 ascertains the usability and effectiveness of specific components in small groups. Phase 7 is similar, implemented by a teacher in a single classroom. Phase 8 involves implementing the materials in multiple classrooms.

How might the development be consistent with children's thinking and learning? Learning Trajectories: Activities are structured in accordance with empirically-based models of children's thinking and learning in the targeted subject-matter domain. In phase 4, the nature and content of activities is based on models of students' thinking, and in phase 5, the set of activities (the hypothetical mechanism of the research) is refined.

What is already known that can be applied to the anticipated curriculum? A Priori Foundations: In variants of the research-to-practice model, extant research is reviewed and implications for the nascent development effort drawn in three domains. Phase 1: subject matter content, including the role it would play in children's development. Phase 2: general issues concerning psychology, education, and systemic change. Phase 3: pedagogy, including the effectiveness of certain types of environments and activities.

Figure 21.2 The Framework for Comprehensive Research and Development.

A Priori Foundations

In this category, established research review procedures (e.g., Galvin, 2009; Light & Pillemer, 1984) and content analyses (National Research Council, 2004) are employed to garner knowledge concerning the specific subject matter content, including the role it would play in children's development (phase 1); general issues concerning psychology, education, and systemic change (phase 2); and pedagogy, including the effectiveness of certain types of environments and activities (phase 3).

Phase 1: Subject Matter A Priori Foundation
Developing goals is a complex process, much of which is not amenable to scientific investigation. Societally determined values and goals are substantive components of any program (Hiebert, 1999; National Research Council, 2002; Schwandt, 2002; Tyler, 1949). Creating goals requires a cooperative process among the many legitimate direct and indirect stakeholders (van Oers, 2003). The array of advice from a wide variety of such stakeholders involved in such large-scale projects as those in the domain of mathematics, the Principles and Standards for School Mathematics (National Council of Teachers of Mathematics, 2000) and Common Core State Standards (CCSSO/NGA, 2010), illustrates this point. For example, subject matter experts and their organizations evaluated whether these goals included those concepts and procedures that play a central role in the domain (cf. content analyses in National Research Council, 2004). Nevertheless, scientific procedures help identify subject-matter content that is valid within the discipline and makes a substantive contribution to the development of children in the target population. That is, concepts and procedures of the domain should build from the children's past and present experiences (Dewey, 1902/1976) and be generative in children's development of future understanding (Clements, Sarama, & DiBiase, 2004).

In our Building Blocks project (Clements & Sarama, 2007a), we built our goals upon the results of a large conference we organized (funded by NSF and the ExxonMobil Education Foundation) that involved representatives from state departments of education and from the U.S. Department of Education, mathematicians, mathematics and early childhood educators (pre-K to university) and researchers, childhood policy makers, and developers.
This was preceded and followed by extensive research reviews (for a full report, which also influenced the Curriculum Focal Points and Common Core, see Clements et al., 2004). We vetted the specific goals for our project with an advisory board consisting of members of these same groups. As an example, we determined that the competence of subitizing is crucial to young children's mathematical development (Clements, 1999; Clements & Conference Working Group, 2004). Subitizing, the ability to recognize and name the numerosity of a group quickly (from the Latin "to arrive suddenly"), is the earliest developing quantitative, or numerical, ability (Sarama & Clements, 2009). It also contributes to many other competencies, such as counting and arithmetic (Baroody, 1987; Clements, 1999; Fuson, 1992; Sarama & Clements, 2009).

Phase 2: General A Priori Foundation
In this phase, philosophies, theories, and empirical results on teaching and general educational issues are reviewed for their applicability to the product. Researcher-developers might start from an Ausubelian, Piagetian, or general constructivist perspective and proceed in any of several directions (Forman, 1993; Lawton, 1993). In addition, theory and research offer perspectives on children's and teachers' experiences with similar products. For our own part, we used theory and research on early childhood learning and teaching (Clements, 2001; National Research Council, 2001) to decide that the basic approach of Building Blocks would be finding the mathematics in, and developing mathematics from, children's activity. The materials were designed to facilitate children's extending and mathematizing their everyday activities, such as building blocks, art projects, stories, songs, and puzzles.

Phase 3: Pedagogical A Priori Foundation
In this phase, research relevant to creating specific types of educational environments and activities is reviewed. The intuition of practitioners, the art of teaching, is also garnered as much as possible by viewing patterns of promising practice (Dewey, 1929; Hiebert, 1999).

A science only lays down lines within which the rules of the art must fall, laws which the follower of the art must not transgress; but what particular thing he shall positively do within those lines is left exclusively to his own genius... many diverse methods of teaching may equally well agree with psychological laws. (James, 1892/1958, p. 24)

Note that James treats research only as providing a priori foundations. Our framework uses research in this way, but considers this just a beginning. Building Blocks pedagogical foundations were based on the same body of research (e.g., National Research Council, 2001), including a wide range of grouping (whole group, small group, individual) and teaching strategies (from the design of the entire environment to explicit instruction to centers and "teachable moments" during play). As just one example, for a minor, but important, component of the curriculum, we consulted empirical data on features that appeared to make computer programs motivating (Escobedo & Evans, 1997; Lahm, 1996; Shade, 1994) and effective (Childers, 1989; Clements & Sarama, 1998; Lavin & Sanders, 1983; Murphy & Appel, 1984; Sarama, Clements, & Vukelic, 1996).

Learning Model: Learning Trajectories

This phase differs from phase 3 in the focus on the children's thinking and learning, rather than teaching strategies alone, in the greater degree of specificity, and in the iterative nature of its application. That is, in practice, models are usually created or refined along with the development of instructional tasks, using clinical interviews, teaching experiments, and design experiments.

Phase 4: Structure According to Specific Learning Trajectories
Learning trajectories are found or developed to form the core of the product, especially for a curriculum or teaching sequence (M. A. Simon, 1995). Learning trajectories are based on the idea that children follow natural developmental progressions in learning and development. Just as they learn to crawl, then walk, then run, skip, and jump with increasing speed and dexterity, they follow natural developmental progressions in learning in other domains. Learning trajectories built upon natural developmental progressions and empirically based models of children's thinking and learning are more mature in some areas, such as literacy (e.g., progressions within phonemic awareness and alphabet recognition, as well as movement from these to early graphophonemic analysis; Anthony, Lonigan, Driscoll, Phillips, & Burgess, 2003; Brice & Brice, 2009; Justice, Pence, Bowles, & Wiggins, 2006; Levy, Gong, Hessels, Evans, & Jared, 2006) and mathematics (Carpenter & Moser, 1984; Case, 1982; Griffin & Case, 1997), but are also developed in science (Hmelo-Silver & Duncan, 2009; National Research Council, 2007), albeit less for the earliest years (Brenneman & Gelman, 2009), and in social-emotional development (e.g., Bredekamp, 2014). Sometimes different names are used (e.g., "learning progressions"), and some describe developmental progressions but not instructional suggestions. However, they share a family resemblance, and each can be used to serve the purposes of research and development as proposed here.

We believe that much of the educational potential of learning trajectories lies in their ability to connect developmental progressions to the educational environment and to teaching.
We define learning trajectories as "descriptions of children's thinking and learning in a specific domain, and a related, conjectured route through a set of instructional tasks designed to engender those mental processes or actions hypothesized to move children through a developmental progression of levels of thinking, created with the intent of supporting children's achievement of specific goals in that domain" (Clements & Sarama, 2004b, p. 83). Thus, in our view, complete learning trajectories have three parts: a goal; a developmental progression, or path along which children develop to reach that goal; and a set of recommendations for educational environments and activities, matched to each of the levels of thinking in that path, that help children develop ever higher levels of thinking.

The product of this stage is a well-developed cognitive model of children's learning as expressed in learning trajectories. Ideally, such models specify knowledge structures, the development of these structures, mechanisms or processes of development, and developmental progressions of nascent learning trajectories that specify hypothetical routes that children might take in achieving the goal (Sarama & Clements, 2009, presents detailed cognitive models not included here).

As an example, our synthesis of research for the Building Blocks project created a first draft of a developmental progression for subitizing. We chose to illustrate the subitizing learning trajectory in this chapter due to its simplicity² (i.e., learning trajectories for other topics are longer and more complex). Although there are several features, the basic characteristics are the number of objects and the development of the type of subitizing. First, of course, very young children begin with very small numbers, one or two. They slowly develop the ability to subitize larger numbers.
Second, the type of subitizing develops from perceptual subitizing to conceptual subitizing. Perceptual subitizing involves recognizing a number of objects without consciously using other mental or mathematical processes and then naming it. This is limited to sets of up to 4 to 6 objects. Conceptual subitizing plays an advanced organizing role, such as seeing "10" on a pair of dice by recognizing the two collections (via perceptual subitizing) and consciously composing them. These advancements can be seen in Figure 21.3, which illustrates a portion of this learning trajectory. The left column names and describes each level of thinking in the developmental progression; examples of behaviors are shown in a smaller font (from Sarama & Clements, 2009, which also describes cognitive science descriptions of mental components and processes not included here). We used clinical interviews to check that developmental progression. A simple example of a revision these engendered was a differentiation between the perceptual subitizer to 5 and the conceptual subitizer to 5, which did not exist in the original version. We then used research reviews and targeted teaching experiments to design the initial instructional activities. The right column of Figure 21.3 provides examples of the types of environments and activities that help children construct that level of thinking (from Clements & Sarama, 2009). These involve informal interactions (e.g., see the first level) and intentional activities.

As an example of the latter, a simple, enjoyable "snapshots" game is played at many levels, starting with "Perceptual Subitizer to 4." The activity is introduced by talking about cameras, and having "our eyes and mind take snapshots" like cameras. The teacher covers, say, three counters with a dark cloth. She reminds children to watch carefully to take a "snapshot," then uncovers the counters for two seconds only. Children show how many counters they saw with their fingers. Once they have seen all responses, teachers ask children to whisper to each other how many counters they saw. The teacher then uncovers the counters to check answers, and so forth.
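The three parts of a learning trajectory just described (a goal, an ordered developmental progression of levels of thinking, and instructional activities matched to each level) can be sketched as a simple data structure. This is purely our illustrative sketch; the names `Level`, `LearningTrajectory`, and `activities_for` are hypothetical and not part of the Building Blocks materials:

```python
from dataclasses import dataclass, field

@dataclass
class Level:
    """One level of thinking in a developmental progression."""
    name: str          # e.g., "Perceptual Subitizer to 4"
    behavior: str      # observable behavior characterizing the level
    activities: list = field(default_factory=list)  # matched instructional tasks

@dataclass
class LearningTrajectory:
    """A goal, an ordered developmental progression, and matched instruction."""
    goal: str
    progression: list  # ordered list of Level objects, lowest to highest

    def activities_for(self, level_name):
        """Return the instructional tasks matched to one level of thinking."""
        for level in self.progression:
            if level.name == level_name:
                return level.activities
        raise KeyError(level_name)

# A fragment of the subitizing trajectory sketched in this chapter
subitizing = LearningTrajectory(
    goal="Subitizing: instantly recognizing and naming small numerosities",
    progression=[
        Level("Small Collection Namer", "Names groups of 1 to 2, sometimes 3",
              ["Name small collections during everyday interactions"]),
        Level("Perceptual Subitizer to 4", "Instantly names collections up to 4",
              ["Play 'Snapshots' with 1 to 4 objects"]),
    ],
)
```

The point of the structure is simply that activities are indexed by level of thinking rather than presented as an undifferentiated list; for example, `subitizing.activities_for("Perceptual Subitizer to 4")` returns only the tasks matched to that level.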

Developmental Progression Instructional Activities

Small Collection Namer: Names groups of 1 to 2, sometimes 3. (Shown a pair of shoes, says, "Two shoes.")
Instructional activities: Gesture to a small group of objects (1 or 2, later 3 when the children are capable). Say, "There are two balls. Two!" When the children are able, ask them how many there are. This should be a natural part of interaction throughout the day. Name collections as "two." Also include non-examples as well as examples, saying, for instance, "That's not two. That's three!" Or, put out three groups of 2 and one group of 3 and have the child find "the one that is not like the others." Talk about why.

Maker of Small Collections Ask children to get the right number of crackers (etc.) Nonverba!ly makes a for a small number of children. small collection (no more Lay out a small collection, say 2 blocks. Hide them. than 4, usually 1-3} with Ask children to make a group that has the same the same number as number of blocks as your group has. After they have another collection. Might finished, show them your group and ask them if also be verbal. they got the same number. Name the number. When shown a In this and every other level, continue to name coflec­ collection of 3, makes tions throughout the day. "Would you please put another collection those four books on the shelves?" Ah, three beautiful of 3 .. flowers." "Nice design of five squares you made!"

Perceptual Subitizer to 4 Play "Snapshots" (see the text). At this level, play Instantly recognizes with collections of 1 to 4 objects, arranged in a collections up to 4 briefly line or other simple arrangement, asking children shown and verbally to respond verbally with the number name. Start names the number of with the smaller numbers and easier arrangements, items. (the top row of dots) moving to others as children When shown 4 become competent and confident. objects briefly, says "four." 4 •••• •• "'"''"" • ••• • •• • • • •• ''"'"'" • • • •• •• • Figure 21.3 A !earning trajCctory for suhitizing-s·-unple levds (adapted from Clements & Sarama, 2009; Sararna & Clements, 2009). 130 • D. H. CLEMENTS and J. SARAiviA


Play "Snapshots" on the computer: (a) children see an arrangement of dots for 2 seconds; (b) they are then asked to click on the corresponding numeral, and can "peek" for 2 more seconds if necessary; (c) they are given feedback verbally and by seeing the dots again.
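The interaction logic just described can be sketched as a small simulation. This is a hypothetical illustration only (the actual Building Blocks software is not reproduced here, and the function and field names are invented):

```python
def snapshot_trial(dots_shown, child_answer, peek_used=False):
    """Simulate one round of the computer "Snapshots" game:
    dots are flashed briefly, the child clicks a numeral, an
    optional "peek" is allowed, and the feedback restates the
    quantity verbally and by showing the dots again."""
    correct = (child_answer == dots_shown)
    return {
        "correct": correct,
        "peek_used": peek_used,
        # Feedback always names the quantity, matching the game's design.
        "message": (f"Yes! {dots_shown} dots." if correct
                    else f"There are {dots_shown} dots."),
    }

# A child sees 3 dots, answers 4, then peeks and answers 3.
first_try = snapshot_trial(3, 4)
second_try = snapshot_trial(3, 3, peek_used=True)
```

The point of the sketch is that the feedback names the target quantity whether or not the answer was correct, so every trial reinforces the number word for the arrangement shown.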

Perceptual Subitizer to 5
Developmental progression: Instantly recognizes briefly shown collections up to 5 and verbally names the number of items. Example: shown 5 objects briefly, says "5."
Instructional activities: Play "Snapshots" on or off the computer, matching dots to numerals, with groups up to and including five. Play "Snapshots" with dot cards, starting with easy arrangements, moving to more difficult arrangements as children are able.

Figure 21.3 (continued) A learning trajectory for subitizing, sample levels (adapted from Clements & Sarama, 2009; Sarama & Clements, 2009).

Conceptual Subitizer to 5
Developmental progression: Verbally labels all arrangements to about 5, when shown only briefly. Example: "5! Why? I saw 3 and 2 and so I said 5."
Instructional activities: Use different arrangements and the various modifications of "Snapshots" to develop conceptual subitizing and ideas about addition and subtraction. The goal is to encourage students to "see the addends and the sum as in 'two olives and two olives make four olives'" (Fuson, 1992, p. 248).

Conceptual Subitizer to 10
Developmental progression: Verbally labels most briefly shown arrangements to 6, then up to 10, using groups. Example: "In my mind, I made two groups of 3 and one more, so 7."
Instructional activities: Play "Snapshots" on or off the computer, matching dots to numerals. The computer version's feedback emphasizes that "three and four make seven."

Figure 21.3 (continued) A learning trajectory for subitizing, sample levels (adapted from Clements & Sarama, 2009; Sarama & Clements, 2009).

The goal, developmental sequence, and instruction make up the complete learning trajectories. Learning trajectories such as these formed the skeleton of the nascent Building Blocks product.

Evaluation

The remaining six phases, in the third category, evaluation, involve collecting specific empirical evidence in marketing, formative, and summative evaluations. The goal is to evaluate the attractiveness, usability, and efficacy of the product, even if it is still in draft form.

Phase 5: Market Research
Market research is consumer-oriented research. Often, it is not done scientifically. For example, publishers may create prototype materials that are presented to "focus groups" in a geographically balanced sample of sites, along with general questions about what they are looking for. Identities and results are hidden, parting with the scientific criterion that methods and results should be in the public view so that the inquiry can be inspected and criticized. In contrast, collecting useful information about goals, needs, usability, and probability of adoption and implementation is important for dissemination and adoption (Tushnet et al., 2000). Our framework includes market research that is scientific; that is, fully grounded in the disciplines, in the public view, conscientiously documented, and fully reported (Jaeger, 1988). Such market research is conducted at several points in the developmental cycle, from the beginning, as a component of the A Priori Foundations phases, through the last phase of planning for diffusion (Rogers, 2003). We worked with 35 teachers in developing the Building Blocks products, and reached out to hundreds of others for advice.

Formative Evaluation
The following three phases involve repeated cycles of design, enactment, analysis, and revision (Clements & Battista, 2000), with increasing grain size of the populations and the research variables. The goal is to discover whether the product is usable by, and effective with, various children and teachers. In formative phases 6 to 8, researchers seek to understand the meaning that both children and teachers give to the product in progressively expanding social contexts. For example, researchers assess the ease of use and efficacy of the parts and attributes of the product as implemented first by a teacher who is familiar with the materials, working with small groups of children (phase 6) and, later, whole classes (phase 7). Later, similar studies are conducted in cooperation with a more diverse group of teachers (phase 8). Methods include interpretive work using a mix of model-testing and model-generation strategies, including design experiments, microgenetic, microethnographic, and phenomenological approaches (phase 6); classroom-based teaching experiments and ethnographic methods (phase 7); and these plus content analyses when appropriate (phase 8). The product is refined based on these studies, especially including issues of support for teachers.

Phase 6: Formative Research: Small Group
This phase involves intensive pilot testing with individuals or small groups of children, often focusing only on one section of the product at a time. Scientific approaches include design experiments, as well as grounded theory, microgenetic, microethnographic, and phenomenological approaches (Siegler & Crowley, 1991; Spradley, 1979; Steffe, Thompson, &

Glasersfeld, 2000; Strauss & Corbin, 1990). The object of these studies is to gain an understanding of the meaning that children give to the product or the instantiation of the product (e.g., see Lincoln, 1992). The focus is on the congruity between the actions of the children and the learning model or learning trajectory. If there is a mismatch, then some aspect of the learning trajectory is changed. (This is an advantage of the Framework compared to traditional formative and summative evaluations, which often do not connect to theory and do not typically create new theories; cf. Barab & Squire, 2004.) For example, one asks whether the children use the objects provided (e.g., manipulatives, tables or graphs, software tools or features) to perform the actions they are designed to engender, either spontaneously or with prompting from the teacher (if the latter, what type of prompting?). Using the cognitive and learning trajectories as guides, and the tasks as catalysts, the researcher-developer creates more refined models of the thinking and especially the learning of children and particular children or groups of children. At the same time, the researcher-developer describes what elements of the teaching and learning environment, such as teaching strategies or "moves," appear to contribute to learning (D. F. Walker, 1992). The ultimate goal is to connect children's learning processes with specific characteristics of the environment and the teacher's actions, and thus begin to describe the competencies expected of an effective teacher.

As in all phases, but especially here, equity is a concern (Confrey, 2000). Convenience samples are often inadequate. For example, a product cannot be effectively designed for "all children" or specifically at-risk children if the field testing is done in affluent schools.
It is not uncommon to see evaluations in which sites are selected through advertisements, often resulting in samples mostly of white, middle-income, suburban populations.

This may be the most intensive phase of cycling the research and design processes, sometimes as quickly as every twenty-four hours (Burkhardt, Fraser, & Ridgway, 1990; Clements & Sarama, 1995). Refined or newly created activities or approaches might be developed one night and tried the next day. Several classrooms may also be used so that revised lessons can be tested in a different classroom, with one staggered to be from one to five days behind the other in implementing the product (Flagg, 1990). Not only are these activities challenging, but it is easy to fail to perform the necessary documentation that will permit researchers to connect findings to specific revisions of the product. Field notes, audiotapes, and videotapes can help. Computer programs are available that allow researchers to transcribe, code, and analyze such recordings. Technology can also assist in documenting children's ongoing activity, as in solution-path recording (Gerber, Semmel, & Semmel, 1994; Lesh, 1990). Stored solution paths can be re-executed and examined by the teacher, child, or researcher. Such documentation

should be used to evaluate and reflect on those components of the design that were based on intuition, aesthetics, and subconscious beliefs. Although this phase includes a model-testing approach, there remains significant adaptation to children's actions and their own creative responses to the product. For example, their free exploration of environments and materials may be encouraged and observed before the introduction of any structured activities.

As previously stated, one of the beneficial, albeit challenging, features of the proposed research-and-development program is that it studies what could be, in contrast to traditional research, which usually investigates what is. The Framework provides an alternative to research that allows, or even encourages, unfortunate confirmation bias and, instead, attempts to invent ways to produce previously unattained results (Greenwald, Pratkanis, Leippe, & Baumgardner, 1986; see Sarama & Clements, 2009, pp. 363-364, for a description of the problems with confirmation bias in early childhood research).

In summary, research in this phase has much to offer. Using the learning trajectories as a guide, and the tasks as a catalyst, the researcher-developer creates more refined models of particular children and groups of children. Researchers also learn about the value of various characteristics of the teaching and learning environments, many of which will emerge from interaction of the teacher-developer and the child.

As an example from our Building Blocks project, the "snapshots" activities from our first instructional sequence did not include a second look (another "peek"). We found that young children often needed that repeated exposure to attend, build the image, and generate the quantity. Without such mental activity, some did not progress through the learning trajectory. Therefore, we built it into whole- and small-group activities, as well as the software (see the blue "Peek" button in the software screens).
As another example, we originally planned an entire separate, related learning trajectory dealing with numerosity estimation. That is, following the subitizing activities in which children named exact quantities, we asked children to estimate the amount in larger sets. We tried a variety of sequences and activities but eventually abandoned this learning trajectory, because children's estimation abilities did not improve. We hypothesized that until exact quantities are well established, benchmarks for numerosity estimation are too weak to justify the time spent on this learning trajectory in the earliest years. As disappointing as the results seemed at the time (we spent so long developing this learning trajectory!), research at this phase potentially saved many teachers and children from wasting their time on unproductive instructional activities.

One final type of work at this phase is significant. Given the importance yet paucity of child-designed projects, provision for such self-motivated, self-maintained work should not be ignored. Open-ended activities using

the objects and actions should therefore be a part of the design so that the environment can be a setting in which children think creatively. Design activity on the part of children is one way for that to happen. In geometry, such activity can be generated, with children producing interesting, relevant, and aesthetically attractive designs (e.g., see Clements & Sarama, 2009). In comparison, design activity with small sets of objects seems difficult or impossible. In our present work, however, we used Donald Crews' book, Ten Black Dots, as a starting point, and encouraged children to make their own designs with small numbers of black dots. This has been successful.

Phase 7: Formative Research: Single Classroom
Teachers are involved in all phases, but this phase includes a special emphasis on the process of enactment (Ball & Cohen, 1996; Dow, 1991; Snyder, Bolin, & Zumwalt, 1992). For example, a goal of the product may be to help teachers interpret children's thinking about the goals or content they are designed to teach; support teachers' learning of the goals and content; and provide support for representing that content (Ball & Cohen, 1996), often in the 100 languages of children (Edwards, Gandini, & Forman, 1993). So, this phase contains two research foci. Classroom-based teaching experiments are used to document and evaluate child development, to understand how children think and learn in a classroom implementing the product (Clements, Battista, Sarama, & Swaminathan, 1996; for examples, see Clements, Battista, Sarama, Swaminathan, & McMillen, 1997; Gravemeijer, 1994a; Pinar, Reynolds, Slattery, & Taubman, 1995). Field notes and often videotapes are used so that children's performances can be examined, often repeatedly, for evidence of their interpretations and learning.

The second focus is on the entire class, as the researchers seek information about the usability and efficacy of the product. Ethnographic participant observation may be used to examine the teacher and children as they interact to build the classroom cultures and learning environments (Spradley, 1980). Observations focus on how teachers and children use the materials, how the teacher guides children, what attributes of these interactive environments emerge, and, of course, how these processes are connected to both intended and unintended child outcomes. During this phase the class may be taught either by a team including one of the researcher-developers and the teacher, or by a teacher familiar with and intensively involved in the product's development.
The goal is to observe learning in the context produced by teachers who can implement the product with high fidelity, consistent with the developers' vision (in contrast to observing how the product is implemented in classrooms in general, which is one focus of the following phase). "High fidelity" does not necessarily mean following a script. Many pedagogical approaches are not implemented with fidelity to the creator's vision without creative and adaptive enactment. In

other words, the nature of the product and of the researchers' vision influences the interpretation of fidelity on a continuum from compliance to the creative implementation and adaptation of an individual with a particular educational vision.

Whatever the position along this continuum, this phase seeks "super-realization" (Cronbach et al., 1980): a painstaking assessment of what the product can accomplish "at its best." This usually implies frequent meetings of teachers and researchers. Video and written records can serve both as research evidence and as useful "existence proofs" that are effective complements to other research data for researchers, and especially practitioners and policy makers. The end result of these efforts is a better-developed draft of the product, along with measures of child outcomes and fidelity of implementation (Snyder et al., 1992). As stated, this pilot test stage involves teachers working closely with the researcher-developers. The Building Blocks project included several tests of learning trajectories (e.g., the full subitizing learning trajectory) in this phase (e.g., Clements & Sarama, 2004a).

Phase 8: Formative Research: Multiple Classrooms
Building on the previous phase, here several classrooms are observed for information about the efficacy and ease of implementation of the product. The focus turns more to the conditions under which the product is more or less effective, and how it might be altered or complemented to better serve any conditions in which it was not as effective. Too often, innovative materials provide less support for teachers relative to their need; that is, because the approach is new, more support is needed. The first of three main research questions for this phase, then, is whether the supporting materials are adequate in supporting multiple contexts, modes of instruction (e.g., whole groups, small groups, centers, incidental and informal interactions), and styles of management and teaching. Addressing this question goes beyond evaluating and increasing a product's effectiveness: by employing strategies of condition seeking, it extends the research program's inoculation against the unfortunate phenomenon that we mentioned previously, confirmation bias (Greenwald et al., 1986). That is, by trying to fail (e.g., finding contexts or populations for which the product is less successful), researchers identify the limiting, necessary, and sufficient conditions and may learn how to be successful (often where few were successful previously). In so doing, they extend theory, effectiveness, and guidance to future design and work. Collaborative work with others, especially those not previously involved in the development (and thus not ego-involved in the product), can also help.

A second question is whether the product supports teachers if they desire to learn more about their children's thinking and then teach them differently. A third question asks which contextual factors support productive adaptations and which allow lethal mutations (Brown & Campione, 1996) and why, as well as how the product might be changed to facilitate the former, which are especially valuable for formative assessment, and eliminate the latter. As learning trajectories in curricula or programs are actually hypothetical learning trajectories (M. A. Simon, 1995) that must be realized or coconstructed in each classroom, so too is a product a hypothetical path to teaching and learning that is sensitive to local contexts (Herbst, 2003). Modifications are not expected to make products "fool-proof"; rather, support is provided for as wide a variety of contexts as possible.

Ethnographic research (Spradley, 1979, 1980) is again important in this phase, because teachers may agree with the product's goals and approach but their implementation of these may not be consistent with the researcher-developers' vision (Sarama, Clements, & Henry, 1998). This phase should determine the meaning that the various materials have for both teachers and children. Professional development approaches and materials may be created, or revised, based on this research evidence, and assessment instruments for the future summative evaluations may be revised and validated. In addition, qualitative methods may uncover previously ignored factors (variables) that provide a better explanation for a product's effects and indicate what design features may provide a more efficacious product. Finally, another set of content analyses may inform revisions to the product before summative evaluations begin, ideally conducted by multiple experts from different perspectives using approved procedures (National Research Council, 2004).
Our work at these levels is too extensive to summarize here (but see Clements & Sarama, 2004a, 2006; Sarama, 2004).

Summative Evaluation
In these final two phases, researchers determine the effectiveness of the product, now in its complete form, as it is implemented in realistic contexts. Summative phases 9 and 10 both use cluster randomized field trials. They differ in scale. That is, phase 9 involves a few classrooms, whereas phase 10 examines the fidelity of enactment, and sustainability, of the product when implemented on a large scale, and the critical contextual and implementation variables that influence its effectiveness.

In both phases, experimental or carefully planned quasi-experimental designs, incorporating observational measures and surveys, are useful for generating political and public support, as well as for their research advantages. The main such advantage is that experiments provide the most efficient and least biased designs to assess causal relationships (Cook, 2002). In addition, qualitative approaches continue to be useful for dealing with the

complexity and indeterminateness of educational activity (Lester & Wiliam, 2002). This mixed methods approach, synthesizing the two approaches, is a powerful pairing.

The cluster randomized design requires that the product is well described and able to be implemented with fidelity. Also, the curricula or practices used in the comparison classrooms should be fully and explicitly described, and ideally selected on a principled basis. To do so, the quantity and quality of the environment and teaching must be measured in all participating classrooms. Experiments should be designed to have greater explanatory power by connecting specific processes and contexts to outcomes so that moderating and mediating variables are identified (Cook, 2002). Finally, if only quasi-experimental designs are possible, careful consideration of bias must be conducted to ensure comparability (e.g., of children, teachers, and classroom contexts; National Research Council, 2004).
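The defining feature of a cluster randomized design is that whole classrooms, not individual children, are the units of random assignment. A minimal sketch of that assignment step follows (an illustration under invented classroom IDs, not the authors' actual procedure):

```python
import random

def assign_clusters(classrooms, conditions=("treatment", "control"), seed=42):
    """Randomly assign whole classrooms (the clusters) to conditions,
    keeping the unit of assignment the same as the unit of analysis."""
    rng = random.Random(seed)  # a fixed seed makes the assignment auditable
    shuffled = list(classrooms)
    rng.shuffle(shuffled)
    # Alternate conditions down the shuffled list so the arms stay balanced.
    return {room: conditions[i % len(conditions)]
            for i, room in enumerate(shuffled)}

rooms = [f"classroom_{k}" for k in range(1, 9)]  # 8 hypothetical classrooms
assignment = assign_clusters(rooms)
```

Because assignment happens at the classroom level, later statistical analyses must also treat the classroom (not the child) as the unit of analysis, as the chapter emphasizes.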

Phase 9: Summative Research: Small Scale
In this phase, researchers evaluate what can actually be achieved with typical teachers under realistic circumstances (Burkhardt et al., 1990; Rogers, 2003). In a few classrooms, from about 4 to about 10, researchers conduct pre- and posttest cluster randomized experimental designs using measures of learning. As stated, experiments are conducted to complement the methodologies previously described. Qualitative work is stronger if conducted within the context of a randomized experiment. For example, if teachers volunteer to implement the product in a quasi-experimental design, neither quantitative nor qualitative techniques alone will easily discriminate between the effects of the implementation of the product and the teachers' dispositions and knowledge that led to their decisions to volunteer.

Surveys and interviews of teacher participants also may be used to compare data collected before and after they have used the product, as well as to collect such data as teachers' backgrounds, professional development, and resources. The combined interpretive and survey information also evaluates whether supports are viewed as adequate by teachers and whether their teaching practices have been influenced. Do before-and-after comparisons indicate that they have learned about children's thinking in specific subject matter domains and adopted new teaching practices? Have they changed previous approaches to teaching and assessment of the subject matter?

Such research is similar to, but differs from, traditional summative evaluations. A theoretical frame is essential; comparison of scores outside theory, permitted in traditional evaluation, is inadequate. A related point is that the comparison curriculum or practices must be selected deliberately, to focus on specific research issues.
Further, connecting the product's objects and activities and the processes of enactment, including all components of the implementation, to the outcomes is important for theoretical,

developmental, and practical reasons. Variables from the broader data collected should be linked to child outcomes. Links also should be made across experimental and comparison classrooms. Without such links, there is an inadequate basis for contributing to theories of learning and teaching in complex settings, guiding future research as well as implementations of the product in various contexts. Finally, statistical analyses performed on the appropriate unit of analysis, often the classroom or school, should allow making those connections (National Research Council, 2004) and provide estimates of the efficacy of the product expressed as effect sizes.

The first summative (phase 9) evaluation of Building Blocks resulted in significant differences, with effect sizes of 1.71 for number and 2.12 for geometry (Cohen's d; Clements & Sarama, 2007b). Effect sizes of the first of two large-scale evaluations (phase 10) ranged from .46 (compared to another research-based curriculum) to 1.11 (compared to a "home-grown" control curriculum). Achievement gains of the experimental group were thus comparable to the sought-after 2-sigma effect of individual tutoring (Bloom, 1984).
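Effect sizes like those reported above (Cohen's d) are the difference between group means divided by a pooled standard deviation. A minimal sketch of the computation (the numbers below are invented for illustration, not the study's data):

```python
from math import sqrt

def cohens_d(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
    """Cohen's d: the difference in group means divided by the
    pooled standard deviation of the two groups."""
    pooled_sd = sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2)
                     / (n_t + n_c - 2))
    return (mean_t - mean_c) / pooled_sd

# Invented example: treatment mean 55 (SD 10, n 30) vs. control mean 45 (SD 10, n 30)
d = cohens_d(55, 10, 30, 45, 10, 30)  # d = 1.0: a one-standard-deviation advantage
```

An effect size of 1.0 thus means the average treatment child scored one pooled standard deviation above the average control child, which is what makes values such as 1.71 and 2.12 so notable.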

Phase 10: Summative Research: Large Scale
Commonly known is the "deep, systemic incapacity of U.S. schools, and the practitioners who work in them, to develop, incorporate, and extend new ideas about teaching and learning in anything but a small fraction of schools and classrooms" (see also Berends, Kirby, Naftel, & McKelvey, 2001; Cuban, 2001; Elmore, 1996, p. 1). Thus, with any product, but especially one that differs from tradition, evaluations must be conducted on a large scale (after considering issues of ethics and practical consequences; see Lester & Wiliam, 2002; Schwandt, 2002). Such research should use a broad set of instruments to assess the impact of the implementation on participating children, teachers, program administrators, and parents, as well as document the fidelity of the implementation and effects of the product across diverse contexts (from Clements, 2007). That is, unlike the treatment standardization necessary to answer the questions of previous phases, here it is assumed that implementation fidelity will vary (often widely, with research indicating that people who take advantage of all program components are more likely to benefit; Ramey & Ramey, 1998), with the questions centering on the product's likely effects in settings where standard implementation cannot be guaranteed (Cook, 2002).

A related goal is to measure and analyze the critical variables, including contextual variables (e.g., settings, such as urban/suburban/rural; type of program; class size; teacher characteristics; child/family characteristics) and implementation variables (e.g., engagement in professional development opportunities; fidelity of implementation; leadership, such as principal leadership, as well as support and availability of resources, funds,

and time; peer relations at the school; "convergent perspectives" of the researcher-developers, school administrators, and teachers in a cohort; and incentives used) (Berends et al., 2001; Cohen, 1996; Elmore, 1996; Fullan, 1992; Mohrman & Lawler III, 1996; Sarama et al., 1998; Weiss, 2002).

A randomized experiment provides an assessment of the average impact of exposure to a product. A series of analyses (e.g., hierarchical linear modeling, or HLM, which provides correct estimates of effects and standard errors when the data are collected at several levels; that is, repeated observations nested within individual children, children nested within classrooms) relate outcome measures to a set of target contextual and implementation variables, critical for identifying moderating and mediating variables (appropriate units of analysis, such as the class, should be defined and should be identical to the unit used for random assignment). Ideally, because no set of experimental variables is complete or appropriate for each situation, qualitative inquiries supplement these analyses. From the wide breadth of documents, including field notes, theoretical notes (methodological and personal journals), drafts of research literature syntheses, and the like, researchers conduct iterative analyses to determine the significant meanings, relationships, and critical variables that affect implementation and effectiveness (Lincoln & Guba, 1985) and thus meaningfully connect implementation processes to learning outcomes.

Finally, summative evaluations are not complete until two criteria are met. First, the product must be sustained and evaluated in multiple sites for more than two years, with full documentation of the contextual and implementation variables, including practical requirements, procedures, and costs (Berends et al., 2001; Bodilly, 1998; Borman, Hewes, Overman, & Brown, 2003; Fishman et al., 2004; Fullan, 1992).
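The reason multilevel models such as HLM are needed is that children in the same classroom are not statistically independent: their shared context induces an intraclass correlation (ICC), and the resulting "design effect," 1 + (m - 1) * ICC for classrooms of size m, inflates standard errors computed as if children were independent. A minimal sketch of the one-way ANOVA estimator of the ICC (invented scores, equal classroom sizes assumed; not the authors' analysis):

```python
from statistics import mean

def icc_and_design_effect(classroom_scores):
    """Estimate the intraclass correlation (ICC) from a one-way
    ANOVA decomposition, then the design effect 1 + (m - 1) * ICC,
    where m is the common classroom size."""
    groups = list(classroom_scores.values())
    m = len(groups[0])                      # children per classroom (assumed equal)
    k = len(groups)                         # number of classrooms
    grand = mean(s for g in groups for s in g)
    # Between-classroom and within-classroom mean squares.
    ms_between = sum(m * (mean(g) - grand) ** 2 for g in groups) / (k - 1)
    ms_within = sum((s - mean(g)) ** 2 for g in groups for s in g) / (k * (m - 1))
    icc = (ms_between - ms_within) / (ms_between + (m - 1) * ms_within)
    return icc, 1 + (m - 1) * icc

# Invented data: two classrooms of three children each.
scores = {"room_a": [50, 52, 54], "room_b": [60, 62, 64]}
icc, deff = icc_and_design_effect(scores)
```

When the design effect is well above 1, as in this deliberately clustered example, analyses that ignore the classroom level will understate standard errors, which is exactly why the unit of analysis must match the unit of random assignment.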
Second, evaluations must be confirmed by researchers unrelated to the developers of the product (Darling-Hammond & Snyder, 1992), with attention given to issues of adoption and diffusion of the product (Fishman et al., 2004; Rogers, 2003; Zaritsky, Kelly, Flowers, Rogers, & O'Neil, 2003). The large expense and effort involved in meeting these criteria is another reason that previous evaluation phases should be employed first; only effective programs should be scaled up.

Given this variety of possibilities, claims that a product is based on research should be questioned to reveal the nature and extent of the connection between the two, including the specific phases used of the ten described and the results obtained with each.

We built a new model to scale up. The TRIAD (Technology-enhanced, Research-based, Instruction, Assessment, and professional Development) model has the goal of increasing math achievement in young children, especially those at risk, by means of a high-quality implementation of Building Blocks, with all aspects of the curriculum (mathematical content, pedagogy, teacher's guide, technology, and assessments) based on a common core of learning trajectories. For example, we performed a "gold standard" randomized cluster trial (RCT) in three states. Forty-two schools serving low-resource communities were randomly selected and randomly assigned to three treatment groups using a randomized block design involving 1,375 preschoolers in 106 classrooms. Teachers implemented the intervention with adequate fidelity. Pre- to posttest scores revealed that the children in the TRIAD/Building Blocks group learned more mathematics than the children in the control group (effect size, g = 0.72; Clements & Sarama, 2007b). They also developed better language competencies (e.g., effect sizes ranging from .16 to .36; Sarama, Lange, Clements, & Wolfe, 2012).

CONCLUSIONS AND IMPLICATIONS

In this final section, we describe ramifications of our Framework. First, theoretical purity is less important than a consideration of all relevant theories and empirical work. The complexity of the field often creates a Babel of disciplines (Latour, 1987) in which the lack of communication prevents progress. This is one conceit researcher-developers can ill afford. Instead, they must meld academic issues and practical teaching demands, no less than a serious consideration of what researchers and teachers from other philosophical positions experience and report. This does not imply inconsistent positions. It does imply that overzealous applications (often misinterpretations and overgeneralizations) can limit practical effectiveness. As merely one illustration, constructivism does not imply that practice is unnecessary, and it does not dictate specific pedagogical practices (Clements, 1997; M. A. Simon, 1995).

Second, particular research designs and methods are suited for specific kinds of investigations and questions, but can rarely illuminate all the questions and issues in a line of inquiry (cf. National Research Council, 2002, p. 4; 2004). This is why different methods are used in various phases of the Framework (Clements, 2007). For example, although iterating through one or two of the phases might lead to an effective product and high-quality research, this would not meet all the goals of an integrated research and development program. As a simple example, the curriculum might be effective in some settings, but not others, or it might be too difficult to scale up. Moreover, we would not know why the curriculum is effective.

Third, the Framework is resource intensive. Some might argue that using multiple stages and phases is logistically or practically infeasible. Just producing satisfactory evaluation data (National Research Council, 2004) is costly.
Consider, however, the hundreds of millions of dollars undoubtedly spent on developing and testing products without it: Is it impracticable to use the proposed framework? We argue, paradoxically, that it is impractical to spend such sums without using it.

Fourth, the education community should support and heed the results of research frameworks such as the one proposed. Given the grounding in both comprehensive research and classroom experience, the curricular products and empirical findings of such integrated research and development programs should be implemented in classrooms. Researcher-developers should follow such models and base their development on the findings and lessons learned from these projects. Administrators and policy makers should accept and promote curricula based upon similar research-based models. Educators at all levels should eschew software that is not developed consonant with research on children's learning and that does not have the support of empirical evaluation. This would eliminate much of what is presently used in classrooms. This is a strong position, but one that may avoid a backlash against the use of computers in education, and the use of innovative curricula in general, and that will, we believe, ultimately benefit children. Fortunately, the design models discussed here, with their tight cycles of planning, instruction, and analysis, are consistent with the practices of teachers who develop broad conceptual and procedural knowledge in their children (Cobb, 2001; Lampert, 1988; M. A. Simon, 1995; Stigler & Hiebert, 1999). Therefore, the products and findings are not only applicable to other classrooms but also support exactly those practices.

Fifth, and in a similar vein, universities should legitimize research programs such as these. There is a long history of bias against design sciences.

As professional schools, including the independent engineering schools, are more and more absorbed into the general culture of the university, they hanker after academic respectability. In terms of the prevailing norms, academic respectability calls for subject matter that is intellectually tough, analytic, formalizable, and teachable. In the past, much, if not most, of what we knew about design and about the artificial sciences was intellectually soft, intuitive, informal, and cookbooky. Why would anyone in a university stoop to teach or learn about designing machines or planning market strategies when he could concern himself with solid-state physics? The answer has been clear: he usually wouldn't. (H. A. Simon, 1969, pp. 56-57)

In particular, the more that schools of education in prestigious research universities "have rowed toward the shores of scholarly research the more distant they have become from the public schools they are bound to serve" (Clifford & Guthrie, 1988, p. 3). This is a dangerous prejudice, and one that we should resist. Education might be seen largely as a design science, with a unique status and autonomy (Wittmann, 1995). "Attempts to organize ... education by using related disciplines as models miss the point because they overlook the overriding importance of creative design for conceptual and practical innovations" (Wittmann, 1995, p. 363). The converse of this argument is that universities benefit: because the approaches described here will prove practically useful, they will legitimize academic research per se.

In summary, traditional research is conservative; it studies "what is" rather than "what could be." When research is an integral component of the design process, when it helps uncover and invent models of children's thinking and builds these into a creative product, then research moves to the vanguard in innovation and reform of education.

NOTES

1. This paper was supported in part by the Institute of Education Sciences (U.S. Department of Education) under Grants No. R305K05157 and R305A110188; by the National Science Foundation under Grant No. DRL-1020118; and by the James C. Kennedy Institute for Educational Success and the Marsico Institute for Early Learning and Literacy at the Morgridge College of Education. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the funding agencies.
2. Rather, subitizing is amenable to a simple presentation. A complete account, including the multiple theories and studies on the innate processes that underlie it, the role of learning and development (including language), and so forth, would be chapter or even book length (Sarama & Clements, 2009). Not discussed here, but represented somewhat in Figure 3, are such features as the arrangement of objects and even the type of object (visual, auditory, etc.) that can be subitized. Thus, none of this is actually "simple," merely simplified for our purposes here.

REFERENCES

Anthony, J. L., Lonigan, C. J., Driscoll, K., Phillips, B. M., & Burgess, S. R. (2003). Phonological sensitivity: A quasi-parallel progression of word structure units and cognitive operations. Reading Research Quarterly, 38(4), 470-487.
Ball, D. L., & Cohen, D. K. (1996). Reform by the book: What is, or might be, the role of curriculum materials in teacher learning and instructional reform? Educational Researcher, 25(9), 6-8, 14.
Barab, S., & Squire, K. (2004). Design-based research: Putting a stake in the ground. The Journal of the Learning Sciences, 13, 1-14.
Baroody, A. J. (1987). Children's mathematical thinking. New York, NY: Teachers College Press.
Berends, M., Kirby, S. N., Naftel, S., & McKelvey, C. (2001). Implementation and performance in New American Schools: Three years into scale-up. Santa Monica, CA: RAND Education.

Bloom, B. S. (1984). The 2 sigma problem: The search for methods of group instruction as effective as one-to-one tutoring. Educational Researcher, 13, 4-16.
Bodilly, S. J. (1998). Lessons from New American Schools' scale-up phase. Santa Monica, CA: RAND Education.
Borman, G. D., Hewes, G. M., Overman, L. T., & Brown, S. (2003). Comprehensive school reform and achievement: A meta-analysis. Review of Educational Research, 73, 125-230.
CCSSO/NGA. (2010). Common Core State Standards. Washington, DC: Council of Chief State School Officers and the National Governors Association Center for Best Practices.
Childers, R. D. (1989). Implementation of the Writing to Read instructional system in 13 rural elementary schools in southern West Virginia: 1988-89 annual report.
Clements, D. H. (1997). (Mis?)constructing constructivism. Teaching Children Mathematics, 4(4), 198-200.

Clements, D. H. (2002). Linking research and curriculum development. In L. D. English (Ed.), Handbook of international research in mathematics education (pp. 599-630). Mahwah, NJ: Erlbaum.
Clements, D. H. (2007). Curriculum research: Toward a framework for "research-based curricula." Journal for Research in Mathematics Education, 38, 35-70.
Clements, D. H. (2008a). Design experiments and curriculum research. In A. E. Kelly, R. A. Lesh, & J. Y. Baek (Eds.), Handbook of innovative design research in science, technology, engineering and mathematics (STEM) education (pp. 761-776). Mahwah, NJ: Erlbaum.
Clements, D. H. (2008b). Linking research and curriculum development. In L. D. English (Ed.), Handbook of international research in mathematics education (2nd ed., pp. 589-625). New York, NY: Taylor & Francis.
Clements, D. H., & Battista, M. T. (2000). Designing effective software. In A. E. Kelly & R. A. Lesh (Eds.), Handbook of research design in mathematics and science education (pp. 761-776). Mahwah, NJ: Erlbaum.
Clements, D. H., Battista, M. T., Sarama, J., & Swaminathan, S. (1996). Development of turn and turn measurement concepts in a computer-based instructional unit. Educational Studies in Mathematics, 30, 313-337.
Clements, D. H., Battista, M. T., Sarama, J., Swaminathan, S., & McMillen, S. (1997). Students' development of length measurement concepts in a Logo-based unit on geometric paths. Journal for Research in Mathematics Education, 28(1), 70-95.
Clements, D. H., & Conference Working Group. (2004). Part one: Major themes and recommendations. In D. H. Clements, J. Sarama, & A.-M. DiBiase (Eds.), Engaging young children in mathematics: Standards for early childhood mathematics education (pp. 1-72). Mahwah, NJ: Erlbaum.
Clements, D. H., & Sarama, J. (1995). Design of a Logo environment for elementary geometry. Journal of Mathematical Behavior, 14, 381-398.
Clements, D. H., & Sarama, J. (1998).
Building Blocks-Foundations for mathematical thinking, pre-kindergarten to grade 2: Research-based materials development [National Science Foundation, grant number ESI-9730804; see http://www.gse.buffalo.edu/org/buildingblocks/]. Buffalo: State University of New York at Buffalo.
Clements, D. H., & Sarama, J. (2004a). Building Blocks for early childhood mathematics. Early Childhood Research Quarterly, 19, 181-189.
Clements, D. H., & Sarama, J. (2004b). Learning trajectories in mathematics education. Mathematical Thinking and Learning, 6, 81-89.
Clements, D. H., & Sarama, J. (2006). Final report of Building Blocks-Foundations for mathematical thinking, pre-kindergarten to grade 2: Research-based materials development (NSF Grant No. ESI-9730804). Buffalo: State University of New York at Buffalo.
Clements, D. H., & Sarama, J. (2007a). Building Blocks-SRA Real Math Teacher's Edition, Grade PreK. Columbus, OH: SRA/McGraw-Hill.
Clements, D. H., & Sarama, J. (2007b). Effects of a preschool mathematics curriculum: Summative research on the Building Blocks project. Journal for Research in Mathematics Education, 38, 136-163.
Clements, D. H., & Sarama, J. (2009). Learning and teaching early math: The learning trajectories approach. New York, NY: Routledge.

Clements, D. H., Sarama, J., & DiBiase, A.-M. (2004). Engaging young children in mathematics: Standards for early childhood mathematics education. Mahwah, NJ: Erlbaum.
Clifford, G. J., & Guthrie, J. W. (1988). Ed school: A brief for professional education. Chicago, IL: The University of Chicago Press.
Cobb, P. (2001). Supporting the improvement of learning and teaching in social and institutional context. In S. Carver & D. Klahr (Eds.), Cognition and instruction: Twenty-five years of progress (pp. 455-478). Mahwah, NJ: Erlbaum.
Cobb, P., Confrey, J., diSessa, A., Lehrer, R., & Schauble, L. (2003). Design experiments in educational research. Educational Researcher, 32(1), 9-13.
Cohen, D. K. (1996). Rewarding teachers for student performance. In S. H. Fuhrman & J. A. O'Day (Eds.), Rewards and reform: Creating educational incentives that work (pp. 61-112). San Francisco, CA: Jossey-Bass.
Collins, A., Joseph, D., & Bielaczyc, K. (2004). Design research: Theoretical and methodological issues. Journal of the Learning Sciences, 13(1), 15-42.
Confrey, J. (2000). Improving research and systemic reform toward equity and quality. In A. E. Kelly & R. A. Lesh (Eds.), Handbook of research design in mathematics and science education (pp. 87-106). Mahwah, NJ: Erlbaum.
Cook, T. D. (2002). Randomized experiments in educational policy research: A critical examination of the reasons the community has offered for not doing them. Educational Evaluation and Policy Analysis, 24, 175-199.
Cronbach, L. J., Ambron, S. R., Dornbusch, S. M., Hess, R. D., Hornik, R. C., Phillips, D. C., et al. (Eds.). (1980). Toward reform of program evaluation: Aims, methods, and institutional arrangements. San Francisco, CA: Jossey-Bass.
Cronbach, L. J., & Suppes, P. (Eds.). (1969). Research for tomorrow's schools: Disciplined inquiry for education. New York, NY: Macmillan.
Cuban, L. (2001). Oversold and underused.
Cambridge, MA: Harvard University Press.
Darling-Hammond, L. (1997). The right to learn: A blueprint for creating schools that work. San Francisco, CA: Jossey-Bass.
Darling-Hammond, L., & Snyder, J. (1992). Curriculum studies and the traditions of inquiry: The scientific tradition. In P. W. Jackson (Ed.), Handbook of research on curriculum (pp. 41-78). New York, NY: Macmillan.
Davidson, M. R., Fields, M. K., & Yang, J. (2009). A randomized trial study of a preschool literacy curriculum: The importance of implementation. Journal of Research on Educational Effectiveness, 2, 177-208.
Dewey, J. (1902/1976). The child and the curriculum. In J. A. Boydston (Ed.), John Dewey: The middle works, 1899-1924. Volume 2 (pp. 271-291). Carbondale, IL: Southern Illinois University Press.

Early, D., Barbarin, O., Burchinal, M. R., Chang, F., Clifford, R., Crawford, C., et al. (2005). Pre-kindergarten in eleven states: NCEDL's multi-state study of pre-kindergarten & study of state-wide early education programs (SWEEP). Chapel Hill, NC: University of North Carolina.
Edwards, C., Gandini, L., & Forman, G. E. (1993). The hundred languages of children: The Reggio Emilia approach to early childhood education. Norwood, NJ: Ablex.
Eisner, E. W. (1988). The primacy of experience and the politics of method. Educational Researcher, 17(5), 15-20.
Elmore, R. F. (1996). Getting to scale with good educational practices. Harvard Educational Review, 66, 1-25.
Escobedo, T. H., & Evans, S. (1997, March). A comparison of child-tested early childhood education software with professional ratings. Paper presented at the annual meeting of the American Educational Research Association, Chicago, IL.
Ferguson, R. F. (1991). Paying for public education: New evidence on how and why money matters. Harvard Journal on Legislation, 28(2), 465-498.
Fishman, B., Marx, R. W., Blumenfeld, P. C., Krajcik, J. S., & Soloway, E. (2004). Creating a framework for research on systemic technology innovations. The Journal of the Learning Sciences, 13, 43-76.
Flagg, B. N. (1990). Formative evaluation for educational technologies. Hillsdale, NJ: Lawrence Erlbaum Associates.
Forman, G. E. (1993). The constructivist perspective to early education. In J. L. Roopnarine & J. E. Johnson (Eds.), Approaches to early childhood education (2nd ed., pp. 137-155). New York, NY: Merrill.
Forman, G. E., & Fosnot, C. T. (1982). The use of Piaget's constructivism in early childhood education programs. In B. Spodek (Ed.), Handbook of research in early childhood education (pp. 185-211). New York, NY: The Free Press.
Fullan, M. G. (1992). Successful school improvement. Philadelphia, PA: Open University Press.
Fuson, K. C. (1992). Research on whole number addition and subtraction.
In D. A. Grouws (Ed.), Handbook of research on mathematics teaching and learning (pp. 243-275). New York, NY: Macmillan.
Galvan, J. L. (2009). Writing literature reviews: A guide for students of the social and behavioral sciences. Glendale, CA: Pyrczak Publishing.
Gerber, M. M., Semmel, D. S., & Semmel, M. I. (1994). Computer-based dynamic assessment of multidigit multiplication. Exceptional Children, 61, 114-125.
Ginsburg, H. P., Klein, A., & Starkey, P. (1998). The development of children's mathematical thinking: Connecting research with practice. In W. Damon, I. E. Sigel, & K. A. Renninger (Eds.), Handbook of child psychology. Volume 4: Child psychology in practice (pp. 401-476). New York, NY: John Wiley & Sons.
Glasersfeld, E. v. (1995). Radical constructivism: A way of knowing and learning. London, UK: Falmer Press.
Goodlad, J. I. (1984). A place called school: Prospects for the future. New York, NY: McGraw-Hill.
Grant, S. G., Peterson, P. L., & Shojgreen-Downer, A. (1996). Learning to teach mathematics in the context of system reform. American Educational Research Journal, 33(2), 509-541.

Gravemeijer, K. P. E. (1994a). Developing realistic mathematics instruction. Utrecht, The Netherlands: Freudenthal Institute.
Gravemeijer, K. P. E. (1994b). Educational development and developmental research in mathematics education. Journal for Research in Mathematics Education, 25, 443-471.
Greenwald, A. G., Pratkanis, A. R., Leippe, M. R., & Baumgardner, M. H. (1986). Under what conditions does theory obstruct research progress? Psychological Review, 93, 216-229.
Griffin, S., & Case, R. (1997). Re-thinking the math curriculum: An approach based on cognitive science. Issues in Education, 3, 1-49.
Herbst, P. G. (2003). Using novel tasks in teaching mathematics: Three tensions affecting the work of the teacher. American Educational Research Journal, 40, 197-238.
Hiebert, J. C. (1999). Relationships between research and the NCTM Standards. Journal for Research in Mathematics Education, 30, 3-19.
Hmelo-Silver, C. E., & Duncan, R. G. (2009). Learning progressions (special issue). Journal of Research in Science Teaching, 46(6), 605-737.
Jaeger, R. M. (1988). Survey research methods in education. In R. M. Jaeger (Ed.), Complementary methods for research in education (pp. 303-340). Washington, DC: American Educational Research Association.
James, W. (1892/1958). Talks to teachers on psychology: And to students on some of life's ideals. New York, NY: Norton.
Justice, L. M., Pence, K., Bowles, R. B., & Wiggins, A. (2006). An investigation of four hypotheses concerning the order by which 4-year-old children learn the alphabet letters. Early Childhood Research Quarterly, 21, 374-389.
Kamii, C. (1973). Pedagogical principles derived from Piaget's theory: Relevance for educational practice. In M. Schwebel & J. Raph (Eds.), Piaget in the classroom (pp. 199-215). New York, NY: Basic Books.
Lahm, E. A. (1996).
Software that engaged young children with disabilities: A study of design features. Focus on Autism and Other Developmental Disabilities, 11(2), 115-124.
Lampert, M. (1988). Teaching that connects students' inquiry with curricular agendas in schools (Technical Report). Cambridge, MA: Educational Technology Center, Harvard Graduate School of Education.
Latour, B. (1987). Science in action. Cambridge, MA: Harvard University Press.
Lavin, R. J., & Sanders, J. E. (1983). Longitudinal evaluation of the C/A/I Computer Assisted Instruction Title 1 Project: 1979-82. Chelmsford, MA: Merrimack Education Center.
Lawton, J. T. (1993). The Ausubelian preschool classroom. In J. L. Roopnarine & J. E. Johnson (Eds.), Approaches to early childhood education (2nd ed., pp. 157-177). New York, NY: Merrill.
Lesh, R. A. (1990). Computer-based assessment of higher order understandings and processes in elementary mathematics. In G. Kulm (Ed.), Assessing higher order thinking in mathematics (pp. 81-110). Washington, DC: American Association for the Advancement of Science.
Lester, F. K., Jr., & Wiliam, D. (2002). On the purpose of mathematics education research: Making productive contributions to policy and practice. In L. D.

English (Ed.), Handbook of international research in mathematics education (pp. 489-506). Mahwah, NJ: Erlbaum.
Levy, B. A., Gong, Z., Hessels, S., Evans, M. A., & Jared, D. (2006). Understanding print: Early reading development and the contributions of home literacy experiences. Journal of Experimental Child Psychology, 93(1), 63-93.
Light, R. J., & Pillemer, D. B. (1984). Summing up: The science of reviewing research. Cambridge, MA: Harvard University Press.
Lincoln, Y. S. (1992). Curriculum studies and the traditions of inquiry: The humanistic tradition. In P. W. Jackson (Ed.), Handbook of research on curriculum (pp. 79-97). New York, NY: Macmillan.
Lincoln, Y. S., & Guba, E. G. (1985). Naturalistic inquiry. Newbury Park, CA: Sage.
Mayer, R. E. (2000). What is the place of science in educational research? Educational Researcher, 29(6), 38-39.
Mohrman, S. A., & Lawler III, E. E. (1996). Motivation for school reform. In S. H. Fuhrman & J. A. O'Day (Eds.), Rewards and reform: Creating educational incentives that work (pp. 115-143). San Francisco, CA: Jossey-Bass.
Murphy, R. T., & Appel, L. R. (1984). Evaluation of Writing to Read. Princeton, NJ: Educational Testing Service.
National Council of Teachers of Mathematics. (2000). Principles and standards for school mathematics. Reston, VA: Author.
National Research Council. (2001). Eager to learn: Educating our preschoolers. Washington, DC: National Academy Press.
National Research Council. (2002). Scientific research in education. Washington, DC: National Academy Press.
National Research Council. (2004). On evaluating curricular effectiveness: Judging the quality of K-12 mathematics evaluations. Washington, DC: The National Academies Press.
National Research Council. (2007). Taking science to school: Learning and teaching science in grades K-8.
Washington, DC: National Academy Press.
National Research Council. (2009). Mathematics in early childhood: Learning paths toward excellence and equity. Washington, DC: National Academy Press.
Papert, S. (1987). Computer criticism vs. technocentric thinking. Educational Researcher, 16(1), 22-30.
Pinar, W. F., Reynolds, W. M., Slattery, P., & Taubman, P. M. (1995). Understanding curriculum: An introduction to the study of historical and contemporary curriculum discourses. New York, NY: Peter Lang.
Ramey, C. T., & Ramey, S. L. (1998). Early intervention and early experience. American Psychologist, 53, 109-120.
Raudenbush, S. W. (2009). The Brown legacy and the O'Connor challenge: Transforming schools in the images of children's potential. Educational Researcher, 38(3), 169-180.
Rogers, E. M. (2003). Diffusion of innovations (5th ed.). New York, NY: The Free Press.
Ruthven, K., Laborde, C., Leach, J., & Tiberghien, A. (2009). Design tools in didactical research: Instrumenting the epistemological and cognitive aspects of the design of teaching sequences. Educational Researcher, 38(5), 329-342.

Sarama, J. (2004). Technology in early childhood mathematics: Building Blocks as an innovative technology-based curriculum. In D. H. Clements, J. Sarama, & A.-M. DiBiase (Eds.), Engaging young children in mathematics: Standards for early childhood mathematics education (pp. 361-375). Mahwah, NJ: Erlbaum.

Spradley, J. P. (1979). The ethnographic interview. New York, NY: Holt, Rinehart & Winston.
Spradley, J. P. (1980). Participant observation. New York, NY: Holt, Rinehart & Winston.
Steffe, L. P., Thompson, P. W., & Glasersfeld, E. v. (2000). Teaching experiment methodology: Underlying principles and essential elements. In A. E. Kelly & R. A. Lesh (Eds.), Handbook of research design in mathematics and science education (pp. 267-306). Mahwah, NJ: Erlbaum.
Stein, M. K., & Kim, G. (2009). The role of mathematics curriculum materials in large-scale urban reform. In J. T. Remillard, G. M. Lloyd, & B. A. Herbel-Eisenmann (Eds.), Mathematics teachers at work: Connecting curriculum materials and classroom instruction (pp. 37-55). New York, NY: Routledge.
Stigler, J. W., & Hiebert, J. C. (1999). The teaching gap: Best ideas from the world's teachers for improving education in the classroom. New York, NY: The Free Press.
Strauss, A., & Corbin, J. (1990). Basics of qualitative research: Grounded theory procedures and techniques. Newbury Park, CA: Sage.
The Design-Based Research Collective. (2003). Design-based research: An emerging paradigm for educational inquiry. Educational Researcher, 32(1), 5-8.