The nature of the evidence about lifelong learning: remapping the territory

Leonard L. Baird, The Ohio State University, USA

Paper presented at the 39th Annual SCUTREA Conference, 7-9 July 2009, University of Cambridge

Several scholars have described the multiple conceptions of lifelong learning and the philosophical and political issues they involve (Jarvis, 2008; Bagnall, 2009). Lifelong learning clearly can take place in a variety of settings, forms, and modes. This paper focuses on programs of lifelong learning that take place under the oversight of an educational institution. It also focuses on the logic of providing evidence about the efficacy of lifelong learning, roughly following the analytic framework of Pascarella and Terenzini (2005). I attempt to provide an overview of the research on, and assessment of, the elements of lifelong learning that can yield information useful for improvement as well as for evaluation. A key to providing this evidence of the effectiveness of lifelong learning is authentic measurement and assessment of learner characteristics, lifelong educational experiences and outcomes (Knox, Lindsay and Kolb, 1993). These measurements and assessments are important because they allow us to estimate the effects of lifelong learning in general and for specific groups. Given the great variety within lifelong education and adult learning (Hoare, 2006), such assessment is complex. The goal of this paper is to present a comprehensive conceptual “map” of the evidence needed to provide accurate evaluations of lifelong learning. The map is intended to help us address critical questions by discussing the availability of models, the identification of variables, the measurement of those variables, and the feasibility of obtaining those measures on a routine basis.

The first area on the map (1) is concerned with the question of what is important to know about what participants bring with them to lifelong learning. Although the literature on adult learners tends to portray them as a monolithic group (Chen et al., 2008), there are important differences among them. How important are students’ levels of academic preparation, educational and career goals, attitudes and views about education, motivation, social class, age, sex, ethnicity and life roles in forming their expectations and readiness for lifelong learning (Brooks and Everett, 2008)? For example, do students of different cultural groups or ages have different expectations that could influence their learning? A study by Lundberg (2003) found that younger adult students reported better relationships with other students, but older students reported better relationships with faculty and administrators, illustrating how a simple variable, age, can affect adaptation to a lifelong learning program. Other researchers have found that sex, ethnic group, and academic preparation affect classroom participation. It seems reasonable that the other characteristics mentioned would also influence students’ responses to lifelong learning.

The second area on the map (2) consists of the transitional processes by which students choose to attend, and the influences on attendance of finances, access, gender, social class, ability, ethnicity and so on, which can affect students’ approach to lifelong learning (Cruikshank, 2008). By identifying the financial, social and personal obstacles to attendance, we can act to alleviate them (Chapman, Cartwright and McGilp, 2006). One important consideration is the extent to which participation in lifelong learning is seen as voluntary and undertaken for reasons of self-development, or as mandated, directly or indirectly, by external forces. For example, Cruikshank (2008) found that programs focused on high-skills development for global competitiveness were perceived as leading to increased workload, job insecurity and lower job satisfaction. It is important to research the extent to which different orientations and motivations result in greater or lesser learning and satisfaction, and how the climate toward lifelong learners shapes these responses.

The third area (3) is concerned with the extent to which the intended processes of the programs are those actually experienced by students. Although it is important to understand differences in formal and curricular program characteristics, it is even more important to assess the types of within-program experiences, including the classroom environment (Clow and Dawn, 2006). The point here is whether we have the proper characterizations of lifelong educational experiences (Baird, 2005). For example, Grabinski (2005) and the contributors to Osborne, Houston and Toman’s (2007) book discuss the elements for creating a positive learning experience for students with an array of backgrounds. What evidence would show that students are having such experiences? There are a variety of classroom assessment instruments, mainly designed for the traditional postsecondary classroom, that could provide relevant assessments; research is needed to examine their appropriateness in lifelong education. Since programs are often embedded in a larger institution, it is also important to assess the climate or culture of the institution. An interesting conceptualization of climate comes from Donaldson and Townsend’s (2007) review of scholarship on adult learners in higher education institutions. Their classification of institutional orientations toward adult learners included ‘Invisible’, where adult students are ignored or assumed to be the same as traditional-age students; ‘Acknowledged but Devalued’, where adult students are considered deficient or problematic; ‘Accepted’, where adult and traditional students are perceived as different but equal in status, and programs for adults are designed to fit their needs rather than to remediate problems; and ‘Embraced’, where adult students are valued for what they bring to the institution and any ‘lack of fit’ is attributed to the limitations of the institution, not the adult student. Although assessments of institutional climate based on this conceptualization have not yet been developed, research into the extent and the effects of different climates on adult students could yield information that would serve to evaluate current policies and practices.

The next area on the map (4) is the meaning of the retention and attrition of students. Students may leave a program or class when they have attained their personal goals, and consider themselves successful, whereas institutions may regard them as failures because they did not complete their studies. There have been numerous attempts to define and codify the possible meanings of retention and attrition (Desjardins et al., 2002; Herzog, 2005). When students leave lifelong learning programs without meeting their goals, or are involuntarily dismissed, it is important to understand the reasons for such outcomes. At least one study (Cleveland-Innes, 1994) found that adult students were more likely to remain at their institutions if they had a high level of commitment; however, neither social integration nor academic integration affected retention. Cleveland-Innes notes the need for better assessments of students’ external responsibilities, such as the pull of work and family.
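To make this distinction concrete, the short sketch below (written in Python; the record fields and category labels are hypothetical illustrations rather than categories drawn from the studies cited above) shows one way a program might classify leavers so that students who depart after attaining their personal goals are not simply counted as dropouts.

# Illustrative sketch only: classifying leavers so that 'attrition' is not
# conflated with failure. Field names and categories are hypothetical.
from dataclasses import dataclass

@dataclass
class ExitRecord:
    completed_program: bool    # finished the formal program or credential
    met_personal_goal: bool    # self-reported goal attainment at exit
    dismissed: bool            # involuntarily removed by the institution

def classify_exit(record: ExitRecord) -> str:
    if record.completed_program:
        return "completer"
    if record.dismissed:
        return "dismissed"
    if record.met_personal_goal:
        return "goal completer"        # left satisfied, without the credential
    return "non-completing leaver"     # left before meeting personal goals

# Example: a student who left after attaining a personal goal
print(classify_exit(ExitRecord(False, True, False)))  # prints "goal completer"

Any operational scheme of this kind would, of course, depend on collecting exit data of the sort Cleveland-Innes calls for, including measures of students’ external responsibilities.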

The next area on the map (5) concerns the question of whether program outcomes can be identified and assessed. Clearly, the choice of outcomes depends on one’s values and interpretations of the purposes of lifelong education (Cranton, 2006), and the appropriateness and technical quality of possible assessments depend on the choice of outcomes (Moran, 2001; Aspin, 2007). For example, the outcome of “transformation” is difficult to define and will vary across students. Here, perhaps more than in any other area, the issue is the logic of choosing which outcomes to study (Banta, 2007). Major issues in the assessment of outcomes are who defines them, whether they are utilitarian or developmental, and whether they focus on individual success or contributions to the community (Baird, 2003). Outcome goals can be set by the educational institution, by external agents such as businesses or governments, or by the individual learner. Analyses of the effectiveness of a program may yield very different answers depending on whose outcome goals are considered. For example, a corporation may be satisfied with evidence that workers who participate in a lifelong learning program learn skills that make them more efficient on the job, whereas the individual learners may feel coerced into participation and see the skills as leading to a more intense work life. In another case, individual learners may show gains in ethical reasoning but little gain in work-related outcomes. The second distinction, between utilitarian and personal development outcomes, presents something of a challenge to research in this area. Utilitarian outcomes are comparatively easy to assess, and the long-range consequences of their attainment can be evaluated; personal development outcomes are often difficult to assess, whatever their value to the individual, and the long-range consequences of their attainment are fuzzy. The last distinction, between individual success and contributions to the community, creates additional assessment complexities. It is comparatively straightforward to assess how well an individual does in reaching an outcome, in contrast to contributions to the community, which require information about the impact of the lifelong learning program on the community itself.

The next area on the map (6), educational effects, concerns the question of how we can demonstrate the influence of programs on student outcomes. The emphasis here should be on the words ‘change’ and ‘gain’. Essentially, program effects research is concerned with the differential impact of programs, that is, why one has a more positive influence than another (Pascarella and Terenzini, 2005; Miller, 2006). The point is to attribute change or growth in student characteristics to the program characteristics or environment while controlling for the students’ initial status. Since it is difficult to use approaches such as a controlled experimental design, other designs and approaches need to be used, including qualitative studies. This area is fraught with problems of logic, measurement, statistical design and evidence, as discussed by Kasworm and Pike (1994).
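As a minimal sketch of the kind of covariate-adjusted, non-experimental analysis this area calls for, the example below (written in Python with the pandas and statsmodels libraries; the file name and the variables pre_score, post_score, program, age and prior_credits are hypothetical placeholders, not measures from any study cited here) regresses a post-program outcome on a participation indicator while controlling for learners’ initial status and background.

# Illustrative sketch only: a covariate-adjusted (ANCOVA-style) estimate of a
# program's association with an outcome, controlling for initial status.
# All file and variable names are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

# Assume one row per learner: a pre-measure, a post-measure, a participation
# indicator (program = 1 or 0), and background covariates such as age.
data = pd.read_csv("learners.csv")

# The coefficient on 'program' estimates the difference in post-program
# outcomes between participants and non-participants with comparable
# starting points and backgrounds; it is not proof of a causal effect.
model = smf.ols("post_score ~ pre_score + program + age + prior_credits",
                data=data).fit()
print(model.summary())

Such an adjusted comparison only approximates a controlled design; as noted above, its results need to be interpreted alongside other designs and approaches, including qualitative studies.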

The next area on the map (7) concerns the extent to which students go on to further study or to the attainment of degrees, where appropriate. This area has not been well studied, largely because of the logistical problems of conducting longitudinal studies of graduates, but it is very important for some programs. Although statistics indicate that adult graduates are more likely to pursue further learning, we know little about the paths that lead to attendance, as Brooks and Everett (2008) have emphasized. They studied adult learners five years after their first degree and found that attendance was affected by the graduates’ approach to the process of learning, their identities as learners, and their understanding of the relationship between learning and the wider world. This study suggests several areas for assessment that would be useful in working with adult graduates.

The last area on the map (8) is concerned with the hope of educators that they will have a lasting influence on their students. Finding definitive evidence on the career or life success of students is considered by some to be the most important area of all, although the assessment of ‘success’ can be quite problematical. Most careers involve complex and multiple indicators of ‘success’, and even such seemingly objective criteria as annual salary are problematical. Complex careers such as those in health care can have a bewildering number of possible criteria, which may even be negatively related to one another. However, the evidence from post-secondary research indicates that individuals with more education are generally more ‘successful’ on many indicators (Pascarella and Terenzini, 2005), and there is reason to think the same would apply to lifelong learners. These indicators include health and healthy life styles, consumer behavior, mental health, child rearing practices, community participation, political involvement, and interest in and support of the arts.

There are several implications one could draw from a consideration of the map. First, understanding the lifelong learning process is important because of the increasing numbers of adults who enter postsecondary education for the first time. It is unclear what methods are most appropriate to assess their readiness for education (Donaldson and Graham, 1999). Although older applicants as a group score lower on traditional tests, they often do much better than predicted in their classes, so other variables, including life skills, must be operating (Richardson and King, 1998). Several systems have been developed to assess the skills and knowledge that adults have gained through their experiences (Fiddler, Marienau and Whitaker, 2006). These provide standard methods to evaluate the extent and depth of adult learning. For example, an adult who has worked on the staff of a hospital may have learned skills in management, report writing, grant proposal writing, personnel management, grief counseling, budgeting, computer software, and public speaking, to mention some of the possibilities. These skills can be evaluated, at least to a fair degree, and they would be useful in the role of student. One task for lifelong educators is to help adults see that their experiences have led to skills that can help them make a transition into another status or life change (Merriam, 2008).

There are a number of valuable developments on the technical side of assessment, such as new psychometric approaches, the possibilities of computerized assessment, sampling procedures, statistical models, and so on. In addition, specific measures represent intriguing recent developments, such as various measures of students’ personal and moral maturity and Robert Sternberg’s attempts to assess cognitive capabilities based on recent models of how the mind functions. There have also been a multitude of qualitative approaches to assessment and research that can provide deep and rich understandings of lifelong learning. For example, Harper and Museus (2007) show how qualitative methods can enhance assessment, how campus cultures can be assessed, how students’ academic success and retention can be promoted, how the impact of programs can be evaluated, and how teaching effectiveness can be increased. Useful as they are, however, I do not think the major advances we will make in assessment and research will come from such technical improvements. Rather, it is more important to concentrate on the major questions we want to answer, because true progress in assessment and research comes from developing an understanding of the areas we are concerned with and from the construction of testable models.

The areas on the map provide a comprehensive view that can lead to more effective lifelong learning. By developing comprehensive assessments, we can understand how to organize lifelong learning to meet the needs and backgrounds of different kinds of students, we can be more certain that the intended experience is the one that students actually have, we can assess the extent to which both program goals and student goals are met, and we can examine the effects of lifelong learning in the long run. This information can be used to reshape our programs where needed and to re-emphasize what is effective. I would also argue that an advantage of the map is that it indicates the interconnections among areas. For example, learners’ backgrounds and motivations will partly determine their reactions to the educational experience, and the climate of the institution will affect the workings of the program. Learner outcomes are a product of all the influences that have come before. By taking this broad view of the lifelong learning process we can make our programs more effective.

References

Aspin D N (2007) Philosophical perspectives on lifelong learning, Dordrecht, Springer.
Bagnall R G (2009) ‘Lifelong learning as an idea for our times: an extended review’ in International Journal of Lifelong Education, 28, 2, pp.277-285.
Baird L L (2005) ‘College environments and climates: assessments and their theoretical assumptions’ in J C Smart (ed) Higher education: handbook of theory and research, Vol. XX, pp.507-538.
Baird L L (2003) ‘New lessons from research on student outcomes’ in S R Komives and D B Woodard (eds) Student services: a handbook for the profession, fourth edition, San Francisco, Jossey-Bass.
Banta T (2007) Assessing learning in the disciplines, San Francisco, Jossey-Bass.
Brooks R & Everett G (2008) ‘The impact of higher education on lifelong learning’ in International Journal of Lifelong Education, 27, 3, pp.239-254.
Chapman P C, Cartwright P & McGilp E J (eds) (2006) Lifelong learning, participation and equity, Dordrecht, Springer.
Chen L, Moon P & Merriam S B (2008) ‘A review and critique of the portrayal of older adult learners in adult education journals, 1980-2006’ in Adult Education Quarterly, 59, 1, pp.3-21.
Cleveland-Innes M (1994) ‘Adult student dropout at post-secondary institutions’ in Review of Higher Education, 17, 4, pp.423-445.
Clow R & Dawn T (2006) The ultimate FE lecturer’s handbook, New York, Continuum International.
Cranton P (2006) Understanding and promoting transformative learning, second edition, San Francisco, Jossey-Bass.
Cruikshank J (2008) ‘Lifelong learning and the new economy: limitations of a market model’ in International Journal of Lifelong Education, 27, 1, pp.51-69.
Desjardins S L, Ahlburg D A & McCall B P (2002) ‘A temporal investigation of factors related to timely degree completion’ in Journal of Higher Education, 73, 5, pp.555-581.
Donaldson J F & Graham S (1999) ‘A model of college outcomes for adults’ in Adult Education Quarterly, 50, 1, pp.24-40.
Donaldson J F & Townsend B K (2007) ‘Higher education journals’ discourse about adult undergraduate students’ in Journal of Higher Education, 78, 1, pp.27-50.
Fiddler M, Marienau C & Whitaker U (2006) Assessing learning: standards, principles, and procedures, second edition, Dubuque, Iowa, Kendall-Hunt Publishing.
Grabinski C J (2005) ‘Environments for learning’ in M A Wolf (ed) Adulthood: New Terrain, San Francisco, Jossey-Bass.
Harper S R & Museus S D (eds) (2007) Using qualitative methods in institutional assessment, San Francisco, Jossey-Bass.
Herzog S (2005) ‘Measuring determinants of student return vs dropout/stopout vs transfer: a first to second year analysis of new freshmen’ in Research in Higher Education, 46, 8, pp.883-928.
Hoare C (ed) (2006) Handbook of adult development and learning, Oxford, Oxford University Press.
Jarvis P (ed) (2008) The Routledge international handbook of lifelong learning, London and New York, Routledge.
Kasworm C E & Pike G R (1994) ‘Adult undergraduate students: evaluating the appropriateness of a traditional model of academic performance’ in Research in Higher Education, 35, 6, pp.689-710.
Knox W E, Lindsay P & Kolb P L (1993) Does college make a difference? Long-term changes in activities and attitudes, Westport, CT, Greenwood Press.
Lundberg C A (2003) ‘The influence of time limitations, faculty, and peer relationships on adult student learning: a causal model’ in Journal of Higher Education, 74, 6, pp.665-688.
Merriam S B (2008) ‘How adult life transitions foster learning and development’ in M A Wolf (ed) Adulthood: New Terrain, San Francisco, Jossey-Bass.
Miller B A (2006) Assessing organizational performance in higher education, San Francisco, Jossey-Bass.
Moran J J (2001) Assessing adult learning: a guide for practitioners, Malabar, FL, Krieger.
Osborne M, Houston M & Toman N (eds) (2007) The pedagogy of lifelong learning: understanding effective teaching and learning in diverse contexts, London and New York, Routledge.
Pascarella E T & Terenzini P T (2005) How college affects students, Vol. 2: a third decade of research, San Francisco, Jossey-Bass.
Richardson J T & King E (1998) ‘Adult students in higher education: burden or boon?’ in Journal of Higher Education, 69, 1, pp.65-88.
