ISSN 1303-0485 • eISSN 2148-7561 • DOI 10.12738/estp.2016.1.0064
Copyright © 2015 EDAM • http://www.estp.com.tr
Educational Sciences: Theory & Practice • 2015 December • 15(6) • 1503-1515
Received | April 29, 2015 • Accepted | September 9, 2015 • OnlineFirst | October 5, 2015

A De Novo Tool to Measure the Preclinical Learning Climate of Medical Faculties in Turkey

Nilufer Demiral Yilmaz, Ege University
Serpil Velipasaoglu, Dokuz Eylül University
Hatice Sahin, Ege University
Bilge Uzun Basusta, Hacettepe University
Ozlem Midik, Ondokuz Mayis University
Ozlem Coskun, Gazi University
Isil Irem Budakoglu, Gazi University
Sumer Mamakli, Akdeniz University
Funda Ifakat Tengiz, Izmir University of Economics
Halil Ibrahim Durak, Ege University
Sema Ozan, Dokuz Eylül University

Abstract
Although several scales are used to measure general and clinical learning climates, there are no scales that assess the preclinical learning climate. Therefore, the purpose of this study was to develop an effective measurement tool in order to assess the preclinical learning climate. In this cross-sectional study, data were collected from 3,540 preclinical medical students of six medical faculties in Turkey. The methodology included the following activities: generate an item pool, receive expert opinions, perform a pretest to purify the instrument, and conduct factor and reliability analyses. According to the factor analysis, eight factors were determined and their contribution to the variance was 50.39%. In addition, the item factor loadings ranged from .31 to .91, Cronbach's alpha coefficients for the subscales ranged from .72 to .77, and the item-total correlation coefficients for the subscales ranged from .44 to .76. All the items significantly discriminated between the low- and high-performing students (t = 99.57; p = .01). The scale included 52 items with the following subscales: management, teaching, teaching staff, institutional commitment, emotions, inter-student relationships, physical environment, and motivation. The analysis of this newly developed Preclinical Learning Climate Scale (PLCS) indicated that its psychometric properties are appropriate and this scale can be employed to evaluate medical education programs.

Keywords: Education • Medical • Undergraduate • Learning climate • Psychometrics

a Corresponding author: Nilufer Demiral Yilmaz (PhD), Department of Medical Education, Faculty of Medicine, Ege University, Izmir 35100 Turkey. Research areas: Medical education; Education administration; Vocational skills training; Assessment and evaluation; Psychometrics. Email: [email protected]
b Asst. Prof. Serpil Velipasaoglu (MD, PhD), Department of Medical Education, Faculty of Medicine, Dokuz Eylül University, İnciralti, Izmir 35340 Turkey. Email: [email protected]
c Assoc. Prof. Hatice Sahin (MD, PhD), Department of Medical Education, Faculty of Medicine, Ege University, Bornova, Izmir 35100 Turkey. Email: [email protected]
d Bilge Uzun Basusta (PhD), Department of Medical Education and Informatics, Faculty of Medicine, Hacettepe University, Sihhiye, Ankara 06100 Turkey. Email: [email protected]
e Ozlem Midik (MD, PhD), Department of Medical Education, Faculty of Medicine, Ondokuz Mayis University, Atakum, Samsun 55280 Turkey. Email: [email protected]
f Ozlem Coskun (MD, PhD), Department of Medical Education, Faculty of Medicine, Gazi University, Yenimahalle, Ankara 06560 Turkey. Email: [email protected]
g Prof. Işil Irem Budakoglu (MD, PhD), Department of Medical Education, Faculty of Medicine, Gazi University, Yenimahalle, Ankara 06560 Turkey. Email: [email protected]
h Sumer Mamakli (MD, PhD), Department of Medical Education, Faculty of Medicine, Akdeniz University, Kampus, Antalya 07070 Turkey. Email: [email protected]
i Funda Ifakat Tengiz (MD, PhD), Vocational School of Health Services, Izmir University of Economics, Balçova, Izmir 35330 Turkey. Email: [email protected]
j Prof. Halil Ibrahim Durak (MD, PhD), Department of Medical Education, Faculty of Medicine, Ege University, Bornova, Izmir 35100 Turkey. Email: [email protected]
k Assoc. Prof. Sema Ozan (MD, PhD), Department of Medical Education, Faculty of Medicine, Dokuz Eylül University, İnciralti, Izmir 35340 Turkey. Email: [email protected]

Learning climates are important as they directly influence the learners' work within them (and vice versa) and provide a measure to determine the effectiveness of such climates in educational institutions (Dornan, Mann, Scherpbier, Spencer, & Norman, 2011). This topic has been one of the main subjects of focus in previous evaluations of medical education programs (Kennedy, Lilley, Kiss, Littvays, & Harden, 2013; Soemantri, Herrera, & Riquelme, 2010; WFME Standards, 2003). Learning climate has been defined as the quality of the learning environment as perceived by teaching staff and students (Genn, 2001). When students attend a new educational institution, they not only become acquainted with the curriculum but also become aware of the learning climate when they take exams, attend classes, or participate in activities (Dornan et al., 2011; Roff & McAleer, 2001). In this regard, the curriculum's structure and its management, the educational-social activities offered to the students, and the school's relationship with the environment all shape the school's learning climate (Demiral Yilmaz, 2010; Roff & McAleer, 2001). Thus, by measuring the learning climate, an educational program can be evaluated to identify its positive and negative aspects (based on the opinions of teaching staff and students), following which inter-institutional comparisons can be performed and overall improvements can be achieved (Dornan et al., 2011).

An examination of recent medical educational environment instruments reveals that there are many differences between educational institutions. These differences are in part attributable to the fact that the instruments were often tailored to a specific setting of interest (Schönrock-Adema, Bouwkamp-Timmer, van Hell, & Cohen-Schotanus, 2012).

As shown in Figure 1, recent scales have assessed the overall learning climate, the individual clinical phase, or certain themes that can influence the learning climate. The scales that have assessed the overall learning climate include the Medical School Learning Environment Survey (MSLES) (Marshall, 1978); the scale developed by Pololi and Price (2000); the Medical Education Environment Measure (MEEM) (Roff, McAleer, Harden, & Al-Qahtani, 1996); and the Dundee Ready Education Environment Measure (DREEM) (Roff, 2005; Roff et al., 1996; Roff et al., 1997). The MSLES was the first measurement tool used to evaluate the learning climate among medical students. The DREEM, another widely used scale to assess undergraduate medical education, distinguishes the learning environment in traditional medical schools from the learning environment in innovative medical schools and measures both the clinical and preclinical phases of a program (Dornan et al., 2011; Soemantri et al., 2010). This tool was previously employed in a study on medical students in Turkey (Demirören, Palaoglu, Kemahli, Özyurda, & Ayhan, 2008). Furthermore, some scales are specific to clinical education, such as the Clinical Learning Climate Scale (CLCS) (Demiral Yilmaz, 2010; Wangsaturak & McAleer, 2008), the Manchester Clinical Placement Index (MCPI) (Dornan et al., 2011), and the Undergraduate Clinical Education Environment Measure (UCEEM) (Strand et al., 2013). However, no scales specifically measure the preclinical learning climate in undergraduate medical education.

It is impossible to conduct clinical studies without performing preclinical studies in undergraduate medical education. The basic difference between the two learning climates is the transition from the preclinical phase to the clinical phase. Although the main objectives in the preclinical phase are to foster learning through small or large group activities and provide assessments of the students' competency, the overall objective in the clinical phase is to learn by participating in daily practices (Dornan et al., 2011; Riley, 2009).

To date, in order to ensure vertical and horizontal integration, medical training programs have undergone several changes. More specifically, the preclinical period currently includes learning activities such as early clinical contact, simulation practice, and skills labs, while the clinical period consists of basic medical science courses in addition to the clinical applications (Cate, 2007; Harden & Davis, 1995; McClean, 2004).

Figure 1: Scales used to assess the learning climate in medical education programs.

Despite these efforts, the preclinical period continues to bear its distinctive features. These differences between the two phases can change the learning climate and affect the students. During the clinical stage, which is known to be a major source of stress for students (Dahlin & Runeson, 2007; Sandars, Patel, Steele, & Mcareavey, 2014), the students become more involved as future members of the profession. However, this is not the case for the preclinical phase. Given that it has been shown to be important to assess the learning environment of each phase of undergraduate medical education (Soemantri et al., 2010; Wangsaturak & McAleer, 2008), the purpose of the present study was to develop a measurement tool that specifically assesses the learning climate of the preclinical stage.

Methods

This cross-sectional study focused on six well-established medical faculties in four different geographical regions of Turkey. In Turkey, medical education generally lasts for six years (12 semesters), with the first three years constituting the preclinical phase and the second three years the clinical phase. During the preclinical phase, medical students acquire basic medical knowledge and skills in simulated environments in order to prepare themselves for the clinical phase; teaching is primarily through classroom lectures and practical applications in small groups. During the clinical phase, students develop their medical knowledge and skills through clinical rotations that build on their preclinical learning. These education programs are structured in such a way as to allow the horizontal and vertical integration of both phases. Among the six medical faculties participating in this study, two implemented a problem-based learning (PBL) program, whereas the other four implemented a system-based integrated program. During the development of the scale, there was no predetermined sample size; thus, all of the preclinical students (N = 5,833) in the aforementioned six medical faculties were included in this study.

Development of the Instrument

The Preclinical Learning Climate Scale (PLCS) was developed in five stages. While the validity analyses (content, construct) were carried out in the first four stages, the reliability analysis was conducted in the fifth stage.

Literature Review and Generation of the Item Pool: In Stage 1, a literature review was performed by the research team. In addition, the feedback collected from the participating students in the preclinical phase was taken into account. The students' feedback focused on various aspects such as their opinions regarding their educators, theoretical and practical educational activities, educational resources, learning motivation, information, and social facilities. Based on the literature review and the feedback data, a conceptual framework related to the learning climate (teaching, learning experience, educational resources, success, human relations, educators, assistant educators, emotions, health and stress, motivation, and management) was determined and the item pool was generated. Consensus regarding the items to be included in the initial version of the instrument was established using the Delphi panel technique. For this approach, the list of items was electronically sent to the researchers of this study by the principal researcher. The researchers were then requested to assess the appropriateness of the items with a standard form in accordance with certain criteria (i.e., whether the items represent the property to be measured, whether the items can be understood by the target population, whether the statements are clear enough, and whether the items are relevant to the conceptual framework). This technique was repeated three times until a consensus on the items was reached. As a result, the initial version of the instrument was developed.

Expert Review Panel: In Stage 2, in order to determine the content validity of the initial version of the instrument, 10 outside experts (nine postgraduates in medical education and one psychologist) from different medical education departments were consulted. After the initial version of the instrument was developed, the expert panel evaluated the scale to determine whether it was consistent with the purpose of the study. The experts were requested to evaluate each item of the initial version of the instrument as "appropriate" or "not appropriate" in accordance with the criteria used in the Delphi panel. They were also asked to provide suggestions for the items that they considered "not appropriate." Based on the experts' opinions, the content validity ratio (CVR) for each of the items in the scale was calculated using Lawshe's technique (Lawshe, 1975). In this technique, if the number of experts is 10, then the CVR is expected to be a minimum of .62 at the α = .05 level of significance (Veneziano & Hooper, 1997). Thus, the preliminary instrument retained only those items whose calculated CVR was over .62.
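For illustration, Lawshe's technique reduces each item's expert ratings to a single index, CVR = (n_e − N/2)/(N/2), where n_e is the number of experts rating the item "appropriate" and N is the panel size. A minimal Python sketch of the retention rule used here (the item names and rating counts below are hypothetical):

```python
def content_validity_ratio(n_appropriate: int, n_experts: int) -> float:
    """Lawshe (1975): CVR = (n_e - N/2) / (N/2).

    n_appropriate -- experts who rated the item "appropriate"
    n_experts     -- total number of experts on the panel
    """
    half = n_experts / 2
    return (n_appropriate - half) / half

# Hypothetical "appropriate" counts for two items rated by the 10-expert
# panel; with N = 10, an item must reach CVR >= .62 to be retained
# (Veneziano & Hooper, 1997).
ratings = {"item_01": 9, "item_02": 6}
retained = {item: round(content_validity_ratio(n, 10), 2)
            for item, n in ratings.items()
            if content_validity_ratio(n, 10) >= 0.62}
print(retained)  # {'item_01': 0.8} -- item_02 (CVR = 0.2) is dropped
```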


Instructions describing the purpose of the scale and how the items are marked were provided with the preliminary instrument. The preliminary instrument utilized a five-point Likert scale ranging from 1 (strongly disagree) to 5 (strongly agree). An assessment expert examined the style of the preliminary instrument, while a Turkish linguist evaluated the clarity of the language. In Stages 1 and 2, the content validity of the preliminary instrument was also examined.

Pretest and Purification of the Instrument: In Stage 3, the preliminary instrument was given to a group of student volunteers (n = 20) who evaluated the items in terms of meaningfulness, readability, comprehensibility, sentence length, and clarity of meaning. At a subsequent meeting, two researchers obtained the students' views (both orally and in writing), following which their opinions were evaluated by the research team. After the pretest, the preliminary instrument was deemed ready for implementation.

Data: The preliminary instrument was given to the preclinical students in the medical faculties between April 2012 and May 2012. Prior to the application of the scale, the students were informed about the purpose of the study and the confidentiality of their personal data, following which their consents were obtained. The collected data were entered into a standard database by the researcher(s) in the relevant faculty. After data quality control was performed, any surveys with missing values were removed. The final data were gathered and analyzed.

Factor Analysis: In Stage 4, the construct validity of the preliminary instrument was examined with exploratory factor analysis (EFA). The Kaiser-Meyer-Olkin (KMO) test and Bartlett's test of sphericity were conducted to determine whether factor analysis was applicable (Sharma, 1996). During the EFA, the number of subscales in the scale was determined via principal components analysis and the Promax rotation method. Based on the rule that the sample size for an item with a factor loading of .30 should be at least 350 (Hair, Tatham, Anderson, & Black, 1998), the cut-off point for the factor loading values was accepted as .30.

At the conclusion of Stage 4, the definitive instrument was developed by testing the construct validity of the scale. Discriminant validity analysis was conducted to support the construct validity of the data collected with the definitive instrument. During this assessment, the mean scores obtained by the students from the instrument were compared in terms of the following variables: curriculum, their year in the faculty, whether they failed a class and repeated it, whether it was their own decision to attend the medical faculty, their academic grade point averages, and their perceptions of success.

Reliability Analysis: In Stage 5, the reliability analysis of the definitive instrument was performed. In order to determine the reliability of the instrument, Cronbach's alpha coefficient was calculated; the cut-off point for Cronbach's alpha was accepted as .70 (Nunnally, 1978). To investigate the relationship between each item and the total score of the scale, the item-total correlations were calculated. In addition, to assess the discrimination of each item on the scale, the top and bottom 27% groups, i.e., the students who received the highest and lowest total scores, respectively, were compared.

To evaluate the validity evidence in this study, the five headings published by the American Psychological Association, the American Educational Research Association, and the National Council on Measurement in Education were used (AERA-APA-NCME, 1999). First, content validity evidence was obtained from the data generated through the literature review, preclinical students' feedback, item pool, Delphi panel, and expert review panel. Validity evidence based on the response process was obtained from the data generated from the sample method, response rate, self-reported method, informed consent, anonymous response, five-point Likert scale, data quality control, and standard database. Validity evidence based on the internal structure was obtained from the data generated through the EFA, Cronbach's alpha, item-total correlation, and item discrimination. Validity evidence based on the relation to other variables was obtained from the data on observed and expected scores, and from the discriminant analysis. Finally, to obtain evidence based on consequences, the reports recorded in each stage of the research were shared and evaluated by the researchers.

At the end of the five stages, the validity and reliability of the scale were established, the subscales were identified, and the min-max scores for the scale and subscales were determined. There was no cut-off point for the scores obtained from the scale; scores closer to the maximum score indicated that the learning climate was positively perceived. The statistical analysis was performed using PASW Statistics for Windows (SPSS Inc., IBM), Version 18.0.
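As an illustration of the Stage 5 statistics (the study's analyses were run in PASW/SPSS 18; this Python version with pandas and SciPy is a sketch only, assuming a respondents-by-items DataFrame of 1-5 Likert answers):

```python
import pandas as pd
from scipy import stats

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Internal consistency of one (sub)scale: rows = respondents, columns = items."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

def corrected_item_total(items: pd.DataFrame) -> pd.Series:
    """Correlation of each item with the sum of the remaining items."""
    total = items.sum(axis=1)
    return items.apply(lambda col: col.corr(total - col))

def item_discrimination(item: pd.Series, total: pd.Series, frac: float = 0.27):
    """t-test comparing an item between the top and bottom 27% total-score groups."""
    n_cut = int(round(len(total) * frac))
    ranked = total.sort_values().index
    low, high = item.loc[ranked[:n_cut]], item.loc[ranked[-n_cut:]]
    return stats.ttest_ind(high, low)
```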


Ethical Considerations
This study was approved by the Ethics Committee of the Ege University Faculty of Medicine (Decision Number: 11-6.1/10), and all the students' written and verbal consents were obtained. There were no conflicts of interest in this study.

Results

Characteristics of the Study Group
Among the 5,833 preclinical students attending the six medical faculties in this study, 61% (n = 3,540) responded to the preliminary instrument. The mean age of the students in the study group was 20 ± 1.45 years (min = 17, max = 36). Among the students, 54% were male, 38% were first-year students, 32% were second-year students, 29% were third-year students, 6% had failed a class and repeated it, and 85% had willingly chosen the medical school. Regarding their academic grade point averages, 7% of the students had less than 59, 86% had between 60 and 84, and 7% had 85 and higher. Finally, 17% of the students considered themselves "not successful," while 52% stated that they were "moderately successful" and 31% felt that they were "successful." Figure 2 presents the algorithm related to the development process of the PLCS, which lasted for two years (June 2011-June 2013).

Initial Version of the Instrument
Upon completion of the literature review, a 235-item pool was generated. The researchers reached a consensus on 85 items at the end of the third round of the Delphi panel, and the initial version of the instrument was based on this material.

Preliminary Instrument
In order to examine the content validity of the initial version of the instrument, 10 outside experts were consulted, and the content validity ratio (CVR) was calculated for each of the items in the scale. It was found that the CVR for 64 of the 85 items in the initial version of the instrument was above .62; the other 21 items were excluded on the basis of the experts' opinions. The preliminary instrument included only the items whose CVR was over .62. An assessment expert examined the preliminary instrument, and according to the Turkish linguist, it required no revisions and was ready for pretesting. The content validity of the preliminary instrument was thus established after the Delphi and expert panels. In total, 20 medical student volunteers participated in the pretesting phase of the preliminary instrument (64 items). The students stated that the items in the scale were meaningful, readable, and understandable.

Figure 2: Development process of the PLCS.


The students also considered the scale appropriate in terms of sentence length. Based on these evaluations, the instrument was deemed ready for implementation.

Definitive Instrument (Factor Analyses)
To determine the construct validity of the scale, the values for the KMO test (.95) and Bartlett's test of sphericity (χ² = 69484.88; p < .001) were considered significant, and it was accepted that the survey data were appropriate for factor analysis. In order to decide the number of factors in the scale, exploratory factor analysis (EFA) was applied to the 64 items, and the eigenvalues, percentage of total variance explained, and factor loadings were examined. Based on the EFA, eight factors were obtained. The total contribution of the eight factors to the variance was found to be 50.39%. The factor loadings of the items in the eight factors, all of which had eigenvalues of 1.15 or higher, varied between .31 and .91. The eigenvalues, percentage of total variance explained, and factor loadings of the factors in the scale are presented in Table 1.

Table 1
Factor Structure of the Preliminary Instrument
Factor | Communalities | Eigenvalue | % of Total Variance Explained | Cumulative % of Total Variance Explained | Factor Loadings
F1 | 0.42-0.60 | 12.78 | 24.58 | 24.58 | 0.77-0.38
F2 | 0.40-0.62 | 2.92 | 5.62 | 30.20 | 0.79-0.38
F3 | 0.48-0.63 | 2.56 | 4.93 | 35.13 | 0.79-0.41
F4 | 0.38-0.68 | 2.15 | 4.13 | 39.26 | 0.71-0.32
F5 | 0.38-0.77 | 1.66 | 3.20 | 42.46 | 0.91-0.41
F6 | 0.34-0.63 | 1.60 | 3.07 | 45.53 | 0.81-0.49
F7 | 0.31-0.62 | 1.38 | 2.65 | 48.18 | 0.76-0.31
F8 | 0.48-0.77 | 1.15 | 2.21 | 50.39 | 0.89-0.39

The resulting eight factors were evaluated in terms of content integrity, following which the following subscales were established: management, teaching, teaching staff, institutional commitment, emotions, inter-student relationships, physical environment, and motivation. When the subscales were named, the logical and semantic relationships between the items were taken into account and the entire conceptual framework was included.

Based on the responses given by the preclinical medical students during the development stage of the PLCS, the expected min-max scores and observed mean scores of the scale and subscales were obtained (Table 2).

The ratio of the observed mean score to the expected maximum score ranged from 54% to 77% across the scale and subscales. An observed mean score close to the expected maximum score indicates that the preclinical learning climate was perceived positively, whereas an observed mean score close to the expected minimum score points to problematic areas of the learning climate.

Based on the EFA, 12 items were removed from the scale since their factor loadings were under the threshold value of .30. The construct validity of the definitive instrument was thus established, following which the subscales (and the items under each subscale) were renumbered. The resulting instrument, the PLCS, includes 52 items, four of which are reversely scored (Item Nos. 45, 46, 47, and 54).
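The Stage 4 analysis itself was run in SPSS; an equivalent open-source sketch, assuming the `factor_analyzer` Python package, a hypothetical data file, and principal-factor extraction as an approximation of the principal components approach used in the study, might look like:

```python
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import (
    calculate_bartlett_sphericity, calculate_kmo)

# Hypothetical file: 3,540 respondents x 64 pretested items
df = pd.read_csv("plcs_responses.csv")

# Suitability checks (the study reports KMO = .95, Bartlett chi2 = 69484.88)
chi2, p = calculate_bartlett_sphericity(df)
kmo_per_item, kmo_overall = calculate_kmo(df)

# Eight factors with Promax (oblique) rotation, as in the study
fa = FactorAnalyzer(n_factors=8, method="principal", rotation="promax")
fa.fit(df)

loadings = pd.DataFrame(fa.loadings_, index=df.columns)
# Items whose maximum absolute loading falls below the .30 cut-off are
# dropped (12 items in the study).
weak_items = loadings.abs().max(axis=1) < 0.30
print(loadings.index[weak_items].tolist())
```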

Table 2
The Number of Items in the Subscales and the Subscale Scores
Factor | Subscale | Number of Items | Min-Max Score | Observed Mean Score (SD)
F1 | Management | 7 | 7-35 | 18.95 (5.21)
F2 | Teaching | 11 | 11-55 | 32.84 (7.00)
F3 | Teaching Staff | 9 | 9-45 | 29.57 (5.46)
F4 | Institutional Commitment | 7 | 7-35 | 21.60 (5.46)
F5 | Emotions | 5 | 5-25 | 14.03 (4.43)
F6 | Inter-Student Relationships | 5 | 5-25 | 15.78 (3.36)
F7 | Physical Environment | 5 | 5-25 | 16.11 (3.48)
F8 | Motivation | 3 | 3-15 | 11.49 (2.32)
Total | PLCS | 52 | 52-260 | 160.37 (26.07)
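To make the scoring rules concrete (reverse-keying Items 45-47 and 54 on the 1-5 Likert range, then summing within subscales), the following is a hypothetical helper; the item-to-subscale mapping is taken from Table 4, while the column naming (`item_1` ... `item_64`, original item numbering) is an assumption:

```python
import pandas as pd

REVERSE_ITEMS = [45, 46, 47, 54]  # reversely scored items (original numbering)
SUBSCALES = {                      # item numbers per subscale (from Table 4)
    "management": [32, 43, 59, 60, 61, 62, 63],
    "teaching": [1, 2, 3, 4, 5, 10, 11, 12, 13, 22, 23],
    "teaching_staff": [33, 34, 35, 36, 38, 39, 40, 41, 42],
    "institutional_commitment": [50, 51, 52, 53, 54, 58, 64],
    "emotions": [45, 46, 47, 48, 49],
    "inter_student_relationships": [25, 26, 27, 28, 30],
    "physical_environment": [15, 17, 18, 19, 44],
    "motivation": [55, 56, 57],
}

def score_plcs(responses: pd.DataFrame) -> pd.DataFrame:
    """responses: one row per student, columns 'item_<n>' holding 1-5 answers."""
    data = responses.copy()
    for n in REVERSE_ITEMS:
        data[f"item_{n}"] = 6 - data[f"item_{n}"]  # 1<->5, 2<->4
    scores = pd.DataFrame({
        name: data[[f"item_{n}" for n in items]].sum(axis=1)
        for name, items in SUBSCALES.items()
    })
    scores["total"] = scores.sum(axis=1)  # possible range 52-260 (Table 2)
    return scores
```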


Upon completion of the discriminant validity analysis, it was determined that the instrument could discriminate between the students according to the following independent variables: curriculum, year at school, repeating the class, choosing the faculty on his/her own will, academic grade point average, and perception of success (Table 3).

Table 3
Distribution of the PLCS Scores in Terms of Independent Variables
Independent Variable | Group | N | Mean | SD | t / F | p
Faculty educational program type | Problem-based learning | 1126 | 165.48 | 25.45 | 8.04* | .00
 | System-based integration | 2414 | 157.98 | 26.01 | |
Year at school | 1 | 1357 | 167.00 | 24.78 | 83.41** | .00
 | 2 | 1140 | 158.43 | 25.63 | |
 | 3 | 1043 | 153.85 | 26.21 | |
Repeating the class | Yes | 200 | 155.57 | 28.18 | 2.69* | .01
 | No | 3340 | 160.66 | 25.91 | |
Choosing the faculty on his/her own will | Yes | 3013 | 161.96 | 25.45 | 8.29* | .00
 | No | 527 | 151.26 | 27.66 | |
Academic grade point average | ≤59 | 237 | 155.63 | 26.66 | 11.65** | .00
 | 60-84 | 3050 | 160.21 | 25.98 | |
 | 85-100 | 253 | 166.76 | 25.51 | |
Perception of success | Not successful | 591 | 149.43 | 27.36 | 109.75** | .00
 | Moderately successful | 1849 | 159.21 | 24.38 | |
 | Successful | 1100 | 168.19 | 25.67 | |
* t-test; ** One-way ANOVA

Preclinical Learning Climate Scale
For the internal consistency analysis, the Cronbach's alpha coefficient of the overall scale was .94, while the Cronbach's alpha coefficients of the subscales ranged from .72 to .77. The item-total correlation coefficients of the subscales were found to be between .44 and .76. All the items significantly discriminated between the low- and high-performing students (t = 99.57; p = .01). Based on the results of the analysis, the PLCS was found to be a reliable scale.

In Table 4, the items of the PLCS are presented according to the factor loadings of the subscales determined through the EFA. In addition, the Cronbach's alpha (α) and the item-total correlation coefficient (r) of each subscale are given.

As a result of the analysis, the 52-item PLCS, comprising eight subscales, was developed, and the validity and reliability of the scale were established. The possible minimum and maximum scores obtained from the scale were 52 and 260, respectively.

Discussion
In this study, the Preclinical Learning Climate Scale (PLCS) was developed to determine medical students' perceptions of the preclinical learning climate. The methodology, recommended by McKenzie, Wood, Kotecki, Clark, and Brey (1999), Delamere, Wankel, and Hinch (2001), Worthington and Whittaker (2006), and Ringsted, Hodges, and Scherpbier (2011), included the following activities: generate an item pool, receive expert opinions, perform a pretest to purify the instrument, and conduct factor and reliability analyses.

In general, content validity plays an important role in the assessment of the validity criteria when developing any type of scale. Such evidence can be gathered through literature reviews, observations and discussions with experts, qualitative and quantitative content evaluations, structured/unstructured observations, focus groups, etc. (Soemantri et al., 2010). In the present study, content validity evidence was obtained through the literature-based item pool, the Delphi panel technique, and outside expert opinions.

Factor analysis is a fundamental step in demonstrating construct validity when developing any scale (DeVellis, 2003; Johnson & Wichern, 2002; Tabachnick & Fidell, 2001). In the present study, the construct validity of the scale was demonstrated through exploratory factor analysis, in which the total contribution of the eight factors to the variance was 50.39%. According to Scherer, Wiebe, Luther, and Adams (1988), a percentage of total variance between 40% and 60% is appropriate in multifactorial designs. Thus, the contribution of the aforementioned eight factors to the total variance was appropriate in this study.


Table 4
The Subscales of the PLCS Determined according to Factor Loadings
For each subscale, Cronbach's alpha (α) and the subscale's item-total correlation coefficient (r) are given; the value before each item is that item's item-total correlation.

Management (α = 0.73; r = 0.73)
0.56 | 62. The consulting services provided for the students in the school are sufficient.
0.57 | 60. At the school, the students can voice their concerns in the decision-making process.
0.55 | 61. The school administration takes into account the feedback from the students.
0.51 | 59. The school also provides opportunities for students to become involved in non-medical pursuits.
0.44 | 43. The student affairs unit at the school displays a supportive attitude toward students.
0.53 | 32. The school administration is closely concerned with the students' problems.
0.58 | 63. The quality of the education is of importance to the school administration.

Teaching (α = 0.72; r = 0.75)
0.49 | 4. The teaching is conducted in a manner that provides the skills necessary to serve in my profession.
0.53 | 3. The teaching is conducted in a manner that provides the knowledge necessary to serve in my profession.
0.51 | 2. The teaching encourages me to become an active learner.
0.57 | 12. The teaching ensures the development of my critical thinking and evaluation skills.
0.48 | 5. The teaching is conducted in a manner that provides the attitudes and behaviors necessary to serve in my profession.
0.45 | 1. The students clearly know what they are supposed to learn.
0.50 | 11. The recommended learning resources are helpful for the students.
0.63 | 13. I feel that I am receiving quality training.
0.51 | 23. The contents of the exams focus on topics that I will use in my profession.
0.52 | 22. The students know what is expected of them in the exams.
0.40 | 10. The students are informed about the learning resources (i.e., books, presentations, electronic resources, etc.) that we need to use in advance.

Teaching staff (α = 0.73; r = 0.73)
0.46 | 34. The teachers are prepared for their classes.
0.39 | 35. The teachers are knowledgeable about their area of specialization.
0.49 | 40. The teachers behave respectfully toward the students.
0.57 | 38. The teachers feel responsible for encouraging students to learn.
0.54 | 36. The way that the teachers provide training is good.
0.51 | 41. The teachers believe that students can be successful.
0.53 | 39. The teachers are open to students' different opinions and ideas.
0.56 | 33. The teachers are willing to take part in the students' training.
0.52 | 42. The students have easy access to teachers when needed.

Institutional commitment (α = 0.73; r = 0.76)
0.62 | 58. I am proud of my school.
0.61 | 53. I feel that I belong to my school.
0.66 | 50. I am pleased to be a student in this school.
0.64 | 52. I feel valuable in the school.
0.62 | 51. I feel that I am generally trusted in the school.
0.41 | 54. This school has disappointed me. (reverse scored)
0.52 | 64. I feel safe on campus and in the hospital.

Emotions (α = 0.75; r = 0.50)
0.35 | 45. During this academic year, I feel depressed. (reverse scored)
0.39 | 46. During this academic year, I feel angry. (reverse scored)
0.33 | 47. During this academic year, I feel anxious. (reverse scored)
0.60 | 49. During this academic year, I generally feel good.
0.48 | 48. During this academic year, I have been able to allocate time for myself.

Inter-student relationships (α = 0.76; r = 0.57)
0.37 | 25. The students in the classroom know one another.
0.52 | 26. There is a positive/relaxing atmosphere in our classroom.
0.41 | 27. In our school, the students behave respectfully toward one another.
0.37 | 30. There is a supportive relationship between the junior and senior students in the school.
0.39 | 28. There is an environment for students to get together in the school.

Physical environment (α = 0.75; r = 0.57)
0.43 | 18. The school provides convenient and comfortable study areas for the students.
0.36 | 17. The locations (i.e., classrooms, labs, learning resource center, etc.) that I make use of are clean.
0.43 | 15. The learning resources (i.e., printed or electronic documents, manikins, models, etc.) are provided by the school so that the students can easily access them.
0.38 | 19. The locations used for purposes other than education (i.e., canteen, cafeteria, toilets, etc.) meet the needs of the students.
0.41 | 44. The school staff (i.e., laboratory workers, cleaners, canteen staff, etc.) display supportive attitudes toward the students.

Motivation (α = 0.77; r = 0.44)
0.31 | 56. I am willing to learn about topics related to my profession.
0.31 | 55. I want to become a physician.
0.57 | 57. The teaching ensures the development of my professional self-confidence.


Hair et al. (1998) recommended minimum sample sizes for interpreting factor loadings of a given magnitude. The sample size of the present study is suitable for the cut-off value of .30 that was used to evaluate factor loadings. When the items included in the subscales are evaluated from this perspective, they are considered to have been sufficiently loaded.

The eight-subscale model included the following factors:

Management: This factor examines the learning climate from the perspective of education management, which includes administrative features such as allowing students to participate in the decision-making process, dealing with students' problems, and providing support and counseling.

Teaching: This factor focuses on the teaching strategies and the impact of measurement-assessment activities on students and the learning climate.

Teaching staff: This factor investigates educators' versatile and challenging roles (other than the transfer of information) that affect the learning climate.

Institutional commitment: This factor includes items related to students' sense of belonging to their institution.

Emotions: This factor assesses students' affective perceptions of the learning environment.

Inter-student relationships: This factor assesses the effects of trust, value, and communication on the learning climate, especially in terms of intra-class and intra-faculty interactions.

Physical environment: This factor focuses on the location of educational activities, learning resources, and logistical support, all of which have an effect on the learning climate.

Motivation: This factor examines the impact of the learning environment on motivation, in addition to the students' intrinsic motivation.

While the PLCS has similarities with some of the previously developed learning climate scales, it differs in terms of the names of the subscales. The following are the commonalities and differences between the PLCS and other learning climate scales:

1) The teaching subscale is also included in the DREEM (Roff et al., 1997), the Practice-based Educational Environment Measure (PEEM) (Mulrooney, 2005), and the Postgraduate Hospital Educational Environment Measure (PHEEM) (Roff, 2005).

2) The teaching staff subscale is also included in several other scales. However, it is referred to as teachers in the DREEM (Roff et al., 1997) and the CLCS (Wangsaturak & McAleer, 2008), as coaching and assessment in the Dutch Residency Educational Climate Test (D-RECT) (Boor, van der Vleuten, Teunissen, Scherpbier, & Scheele, 2011), as trainer and training in the Surgical Theatre Educational Environment Measure (STEEM) (Cassar, 2004), and as teaching and teachers in the Anaesthetic Theatre Educational Environment Measure (ATEEM) (Holt & Roff, 2004).

3) The inter-student relationships subscale is referred to as student interaction in the Medical School Learning Environment Questionnaire (LEQ) (Rothman & Ayoade, 1970).

4) The management subscale is referred to as organization and flexibility in the MSLES (Marshall, 1978), and as student involvement (in curriculum etc.) and administratively flexible in the Medical School Environment Questionnaire (MSEQ) (Wakeford, 1981).

5) The emotions subscale is referred to as academic enthusiasm in the LEQ (Rothman & Ayoade, 1970), as emotional climate in the MSLES (Marshall, 1978), and as enjoyable in the MSEQ (Wakeford, 1981).

6) The physical environment subscale is referred to as physical environment and educational resources in the CLCS (Wangsaturak & McAleer, 2008).

7) The motivation subscale is referred to as intellectual maturity in the LEQ (Rothman & Ayoade, 1970) and as motivation in the CLCS (Wangsaturak & McAleer, 2008).

8) The institutional commitment subscale included in the PLCS is not included in the other scales, nor is it referred to by any other name.

When the PLCS is compared with other learning climate scales, two features distinguish it from the other scales. First, some items included in the subscales of the PLCS show a holistic structure. Second, the institutional commitment subscale is included as a new conceptual dimension of the learning climate.

According to the discriminant validity analysis, the scores differed significantly across the variables of faculty education program type, year at school, repeating the class, choosing the faculty on his/her own will, academic grade point average, and perception of success. These differences between students demonstrate the discrimination strength of the instrument and also support the construct validity of the scale.
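The group comparisons behind this analysis (Table 3) are standard independent-samples t-tests for two-level variables and one-way ANOVAs for variables with three levels. A minimal SciPy sketch, using scores simulated from the group means, SDs, and sizes reported in Table 3 (the raw study data are not available here):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=0)

# Simulated total PLCS scores for the two program types (Table 3)
pbl = rng.normal(165.48, 25.45, 1126)
integrated = rng.normal(157.98, 26.01, 2414)
t, p = stats.ttest_ind(pbl, integrated)

# Simulated scores for the three preclinical years (Table 3)
year1 = rng.normal(167.00, 24.78, 1357)
year2 = rng.normal(158.43, 25.63, 1140)
year3 = rng.normal(153.85, 26.21, 1043)
f, p_anova = stats.f_oneway(year1, year2, year3)

print(f"t = {t:.2f}, p = {p:.3f}; F = {f:.2f}, p = {p_anova:.3f}")
```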


In the development of any scale, inter-rater reliability, item discrimination tests, internal consistency (Cronbach's alpha), test-retest measures, and generalizability analysis are used to estimate overall reliability (Dornan et al., 2011). In the present study, Cronbach's alpha, item-total correlation, and the item discrimination test were conducted. It has been shown that a Cronbach's alpha coefficient greater than .70 indicates high reliability (Munro, 2005). Since the Cronbach's alpha coefficient of the PLCS was above .70, the scale was deemed highly reliable. It has been recommended that the item-total correlation coefficient, which indicates the contribution of each item to the entire scale, should be at least .30 (Streiner & Norman, 2008). In the present study, the item-total correlation coefficient for all of the subscales was above .44, which suggests that the items included in each subscale were consistent with one another. Finally, according to the item discrimination test results, all of the items significantly discriminated between the low- and high-performing students. All these findings indicate that the PLCS is highly reliable.

In this study, validity evidence was assessed in accordance with the standards developed by the American Psychological Association (APA), the American Educational Research Association (AERA), and the National Council on Measurement in Education (NCME) (1999). The validity evidence obtained according to these criteria indicates that the PLCS is a valid and reliable instrument.

The aforementioned findings concerning the validity and reliability of the scale indicate that the PLCS provides valid and reliable scores, and it can be used to assess medical students' perceptions of the learning climate in the preclinical phase. The following are the strengths of the scale:

1) It covers all observed factors that affect learning during the preclinical period;
2) It includes items regarding students' participation in the management process;
3) It includes the teaching subscale, with items addressing the identification of students' needs, educational objectives and strategies, and sources related to testing and learning;
4) It includes the institutional commitment subscale, which is unique to the scale;
5) The scale was administered to a large sample group, which increased the evidentiary value of the validity and reliability analyses;
6) The scale can measure the learning climate in medical schools with different education programs.

In the literature (Cohen, 2006; Dornan et al., 2011), the importance and effects of the learning climate have been identified. However, since the learning climate covers all the variables that affect learning, researchers have focused on different aspects of it, which has resulted in a number of scales with different dimensions. Therefore, the PLCS was structured to include all of the observed factors that affect learning in the preclinical stage in medical schools. It excludes no dimension regarding the learning climate, and it includes the institutional commitment subscale, which is unique to the scale.

Finally, this study includes several limitations. First, the findings are specific to the preclinical stage in medical schools, and the scale cannot be used in other stages of medical education. Second, since the number of items in the scale is considerable, the time required to administer the scale can vary widely. Third, generalizability analysis, which analyzes the reliability of the evidence under specific conditions, was not conducted in this study. Therefore, future studies should further refine and evaluate the generalizability of this scale.

Conclusions
In this study, the primary reason why a new scale was developed, rather than using an established scale (e.g., the DREEM), is that previous scales cannot fully examine the climatic factors that affect the educational environments in medical schools. Thus, the 52-item PLCS was developed as a measurement tool to assess the preclinical learning climate in medical schools. The scale includes eight subscales: management, teaching, teaching staff, institutional commitment, emotions, inter-student relationships, physical environment, and motivation. The results suggest initial support for the new instrument as a measure of preclinical medical students' perceptions of the learning climate. However, although the authors of this study recommend the PLCS as a valid and reliable tool for the evaluation of the preclinical period in medical schools, they also suggest that future studies that plan to use the PLCS perform confirmatory factor analysis and, when applying the scale in different cultures, generalizability analysis.

Acknowledgments: The authors would particularly like to thank all the medical students who participated in this study.


References
American Educational Research Association, American Psychological Association, & National Council on Measurement in Education. (1999). Standards for educational and psychological testing. Washington, DC: American Educational Research Association.
Boor, K., van der Vleuten, C., Teunissen, P., Scherpbier, A., & Scheele, F. (2011). Development and analysis of D-RECT, an instrument measuring residents' learning climate. Medical Teacher, 33, 820-827.
Cassar, K. (2004). Development of an instrument to measure the surgical operating theatre learning environment as perceived by basic surgical trainees. Medical Teacher, 26, 260-264.
Cate, O. T. (2007). Medical education in the Netherlands. Medical Teacher, 29, 752-757.
Cohen, J. (2006). Social, emotional, ethical, and academic education: Creating a climate for learning, participation in democracy, and well-being. Harvard Educational Review, 76(2), 201-237.
Dahlin, M. E., & Runeson, B. (2007). Burnout and psychiatric morbidity among medical students entering clinical training: A three year prospective questionnaire and interview-based study. BMC Medical Education, 7, 6.
Delamere, T. A., Wankel, L. M., & Hinch, T. D. (2001). Development of a scale to measure resident attitudes toward the social impacts of community festivals, Part I: Item generation and purification of the measure. Event Management, 7, 11-24.
Demiral-Yılmaz, N. (2010). The examination of medicine students' learning climate perceptions regarding the academical self efficacy, attitude towards medicine occupation and academical success (Doctoral dissertation, Ege University, Institute of Social Sciences, Department of Educational Sciences, İzmir, Turkey). Retrieved from https://tez.yok.gov.tr/UlusalTezMerkezi/
Demirören, M., Palaoglu, Ö., Kemahli, S., Özyurda, F., & Ayhan, H. I. (2008). Perceptions of students in different phases of medical education of educational environment: Ankara University Faculty of Medicine. Medical Education Online, 13, 8.
DeVellis, R. F. (2003). Scale development: Theory and applications (2nd ed.). Thousand Oaks, CA: Sage.
Dornan, T., Mann, K., Scherpbier, A., Spencer, J., & Norman, G. (2011). Medical education: Theory and practice. Edinburgh, UK: Churchill Livingstone.
Genn, J. M. (2001). AMEE Medical Education Guide No. 23 (Part 1): Curriculum, environment, climate, quality and change in medical education: A unifying perspective. Medical Teacher, 23(5), 445-454.
Hair, J. F., Tatham, R. L., Anderson, R. E., & Black, W. (1998). Multivariate data analysis (5th ed.). London, UK: Prentice Hall.
Harden, R. M., & Davis, M. H. (1995). AMEE Medical Education Guide No. 5: The core curriculum with options or special study modules. Medical Teacher, 17(2), 125-148.
Holt, M. C., & Roff, S. (2004). Development and validation of the Anaesthetic Theatre Educational Environment Measure (ATEEM). Medical Teacher, 26, 553-558.
Johnson, R. A., & Wichern, D. W. (2002). Applied multivariate statistical analysis. Upper Saddle River, NJ: Prentice Hall.
Kennedy, C., Lilley, P., Kiss, L., Littvays, L., & Harden, R. (2013). MEDINE2 Work Package 5: Curriculum trends in medical education in Europe in the 21st century. Retrieved from http://medine2.com/Public/docs/MEDINE2-WP5.pdf
Lawshe, C. H. (1975). A quantitative approach to content validity. Personnel Psychology, 28, 563-575.
Marshall, R. E. (1978). Measuring the medical school learning environment. Journal of Medical Education, 53, 98-104.
McClean, M. (2004). Sometimes we do get it right! Early clinical contact is a rewarding experience. Education for Health, 17(1), 42-52.
McKenzie, J. F., Wood, M. L., Kotecki, J. E., Clark, J. K., & Brey, R. A. (1999). Establishing content validity: Using qualitative and quantitative steps. American Journal of Health Behavior, 23(4), 311-318.
Mulrooney, A. (2005). Development of an instrument to measure the practice vocational training environment in Ireland. Medical Teacher, 27, 338-342.
Munro, B. H. (2005). Statistical methods for health care research (5th ed.). Philadelphia, PA: Lippincott Williams & Wilkins.
Nunnally, J. (1978). Psychometric theory. New York, NY: McGraw-Hill.
Pololi, L., & Price, J. (2000). Validation and use of an instrument to measure the learning environment as perceived by medical students. Teaching and Learning in Medicine, 12(4), 201-207.
Riley, S. C. (2009). AMEE Medical Education Guide No. 46: Student Selected Components (SSCs). Medical Teacher, 31, 885-894.
Ringsted, C., Hodges, B., & Scherpbier, A. (2011). AMEE Medical Education Guide No. 56: "The research compass": An introduction to research in medical education. Medical Teacher, 33, 695-709.
Roff, S. (2005). The Dundee Ready Educational Environment Measure (DREEM): A generic instrument for measuring students' perceptions of undergraduate health professions curricula. Medical Teacher, 27(4), 322-325.
Roff, S., & McAleer, S. (2001). What is educational climate? Medical Teacher, 23(4), 333-334.
Roff, S., McAleer, S., Harden, R. M., & Al-Qahtani, M. A. (1996). Development of a validated Medical Education Environment Measure (MEEM). Proceedings of the Meeting of the Scottish Educational Research Association.
Roff, S., McAleer, S., Harden, R. M., Al-Qahtani, M., Ahmed, A. U., Deza, H., … Primparyon, P. (1997). Development and validation of the Dundee Ready Education Environment Measure (DREEM). Medical Teacher, 19(4), 295-299.
Rothman, A. I., & Ayoade, F. (1970). The development of a learning environment: A questionnaire for use in curriculum evaluation. Journal of Medical Education, 45, 754-759.
Sandars, J., Patel, R., Steele, H., & Mcareavey, M. (2014). AMEE Medical Education Guide No. 92: Developmental student support in undergraduate medical education. Medical Teacher, 36, 1015-1026.
Scherer, R. F., Wiebe, F. A., Luther, D. C., & Adams, J. S. (1988). Dimensionality of coping: Factor stability using the Ways of Coping Questionnaire. Psychological Reports, 62, 763-770.
Schönrock-Adema, J., Bouwkamp-Timmer, T., van Hell, E. A., & Cohen-Schotanus, J. (2012). Key elements in assessing the educational environment: Where is the theory? Advances in Health Sciences Education, 17, 727-742.
Sharma, S. (1996). Applied multivariate techniques. New York, NY: John Wiley and Sons.


Soemantri, D., Herrera, C., & Riquelme, A. (2010). Measuring the educational environment in health professions studies: A systematic review. Medical Teacher, 32, 947-952.
Strand, P., Sjöborg, K., Stalmeijer, R., Wichmann-Hansen, G., Jakobsson, U., & Edgren, G. (2013). Development and psychometric evaluation of the Undergraduate Clinical Education Environment Measure (UCEEM). Medical Teacher, 35, 1014-1026.
Streiner, D. L., & Norman, G. R. (2008). Health measurement scales: A practical guide to their development and use (4th ed.). New York, NY: Oxford University Press.
Tabachnick, B. G., & Fidell, L. S. (2001). Using multivariate statistics. Boston, MA: Allyn and Bacon.
Veneziano, L., & Hooper, J. (1997). A method for quantifying content validity of health-related questionnaires. American Journal of Health Behavior, 21(1), 67-70.
Wakeford, R. E. (1981). Students' perception of the medical school learning environment: A pilot study into some differences and similarities between clinical schools in the UK. Assessment and Evaluation in Higher Education, 6, 206-217.
Wangsaturak, D., & McAleer, S. (2008). Development of the Clinical Learning Climate Measure for undergraduate medical education. South East Asian Journal of Medical Education, 2(2), 41-51.
WFME Standards. (2003). Basic medical education: WFME global standards for quality improvement. Copenhagen, Denmark: University of Copenhagen.
Worthington, R., & Whittaker, T. (2006). Scale development research: A content analysis and recommendations for best practices. Counseling Psychologist, 34, 806-838.
