PEERS IN PIRS: CHALLENGES & CONSIDERATIONS FOR RATING GROUPS OF POSTSECONDARY INSTITUTIONS

Acknowledgements

NASFAA would like to thank independent consultant Alisa Cunningham for her work on this study.

Executive Summary

As the Obama Administration works to develop a new college rating system, known as the Postsecondary Institution Ratings System (PIRS), a number of the proposal's elements have yet to be defined. According to the Department of Education (ED), the proposed system, in which institutional outcomes would ultimately be linked to student financial aid, would be based on such measures as the percentage of students receiving Pell Grants, average cost of attendance, student loan debt, and graduation and transfer rates. A fact sheet released in 2013 by the White House Office of the Press Secretary noted that the ratings would compare colleges "with similar missions," but did not provide details on how colleges would be grouped. ED has been tasked with developing and publishing the new college ratings system by the 2015-16 award year, with the goal of allocating financial aid based on the ratings by 2018, though the latter would require congressional action.

Commissioned by NASFAA, this brief focuses on methods of grouping peer institutions under PIRS. It is currently unclear whether the goal of PIRS is to provide information to students and parents, to serve as a mechanism for institutional accountability, or some combination of the two. Regardless of intent, this brief suggests that if the federal government is going to create a system of rating colleges, it is important to have valid institutional comparison groups within which comparable outcomes can be assessed.

Suggested Areas of Consideration in Peer Group Selection for PIRS

To foster a better understanding of the challenges faced in this process, this brief provides an overview of selected research related to the designation of comparison groups in the social sciences. Using this existing research and institutional examples as a basis, NASFAA makes the case that postsecondary outcomes need to be "corrected" for inputs, such as the characteristics and backgrounds of entering students, and provides examples that speak to the feasibility of "mission" as a peer-group identifier. This type of correction is commonly referred to as "input-adjusted" metrics. NASFAA puts forward several considerations that should be taken under advisement when creating peer groups under the rating system:

1. Input adjustment should be used when comparing different institutions because it allows the examination of student outcomes while controlling for unique student and institutional factors.
• Adjusting for inputs allows calculation of the "added value" of education, which is a preferable way to measure institutional performance.

2. Characteristics for peer groups should be chosen based on the goal of the comparison.
• In trying to understand student outcomes, it is important to control for the characteristics of entering students, including characteristics not under the college's control. However, if the goal is to assess institutional performance, the comparison variables might be different. For example, in order to determine value added, academic background (SAT/ACT scores), financial background (percent receiving Pell Grants), student demographics, and institutional characteristics (e.g., enrollment, Carnegie Classification) might be used to calculate a predicted graduation rate for each institution, which can then be compared to actual outcomes (see the sketch after this list).
• The mix of programs of varying levels and types, as well as research and other institutional activities, will affect outcome measures.

3. Diversity can exist even within broad categories of institutions based on mission.
• As illustrated in the brief, even institutions in the same sector and state may vary widely in terms of the characteristics of their students and mix of programs.
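To make the input-adjustment idea in items 1 and 2 concrete, the following is a minimal sketch in Python of how a predicted graduation rate might be computed and compared to actual outcomes. The data, variable choices, and simple linear model are hypothetical illustrations for this brief, not the metrics or method ED has specified for PIRS.

    import numpy as np

    # Hypothetical institution-level inputs (one row per institution):
    # percent of entering students receiving Pell Grants, median SAT of
    # the entering class, and log of total enrollment. Real input
    # adjustment would draw on many more variables (demographics,
    # Carnegie Classification, etc.) across far more institutions.
    inputs = np.array([
        [0.62, 910,  8.4],
        [0.58, 940,  8.9],
        [0.66, 880,  7.9],
        [0.30, 1150, 9.6],
        [0.22, 1280, 9.8],
        [0.45, 1050, 9.2],
        [0.27, 1230, 10.0],
        [0.70, 900,  7.8],
    ])
    actual_grad_rate = np.array([0.41, 0.48, 0.33, 0.62, 0.80, 0.52, 0.71, 0.38])

    # Fit ordinary least squares: predicted graduation rate as a linear
    # function of the inputs (an intercept column is prepended).
    X = np.column_stack([np.ones(len(inputs)), inputs])
    coef, *_ = np.linalg.lstsq(X, actual_grad_rate, rcond=None)
    predicted = X @ coef

    # "Value added": how far each institution's actual graduation rate
    # sits above or below the rate predicted from its inputs alone.
    value_added = actual_grad_rate - predicted
    for i, v in enumerate(value_added):
        print(f"Institution {i}: value added {v:+.3f}")

Institutions with positive residuals graduate more students than their inputs alone would predict. In practice such a model would be estimated within a peer group, which is one reason the construction of the groups themselves matters so much.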
From conceptual issues about how to target audiences and outcome measures to the number of institutions to include and the availability of appropriate data, the findings in this brief illustrate some of the challenges that would be faced in trying to create valid peer comparison groups by which to rate schools' performance. Regardless of methodology, the analysis finds it essential that some strategy be used to take into account the diversity of mission in postsecondary education and the wide differences in the backgrounds of students who attend.

Introduction

In August 2013, President Obama proposed a new college rating system, known as the Postsecondary Institution Ratings System (PIRS). The proposed system would include a number of outcomes and would ultimately be linked to student financial aid (Jaschik 2013; White House Office of the Press Secretary 2013). For example, students at colleges with higher ratings could be eligible for larger Pell Grants or more favorable rates on student loans. According to the proposal, colleges could receive a bonus if they enroll large numbers of Pell-eligible students. At the same time, requirements for institutions to be eligible for their students to receive federal student aid could become tougher. These elements are not well defined and would likely be clarified as the system is implemented.

Although not yet determined specifically, the ratings would be based on such measures as the percentage of students receiving Pell Grants, average cost of attendance, student loan debt, and graduation and transfer rates (ED 2013). Importantly, the ratings would compare colleges "with similar missions" (White House Office of the Press Secretary 2013). However, the details of how colleges would be grouped are also not yet defined. ED has been tasked with developing the new college ratings system and publishing it online through the College Scorecard by the 2015-16 award year, with the goal of allocating financial aid based on the ratings by 2018, though the latter would require congressional action (ED 2013; White House Office of the Press Secretary 2013). ED has stated that it intends to identify colleges that help students from disadvantaged backgrounds and those improving their performance. A recent Request for Information (RFI) from ED gave stakeholders the opportunity to provide input on which data elements could be used for the ratings, methods for weighting or adjusting metrics, and methods of grouping institutions for appropriate comparison given differences in missions, student characteristics, and resources. Numerous organizations, including the National Association of Student Financial Aid Administrators (NASFAA), have submitted comments to ED,1 and ED hosted a symposium2 where panelists provided insight into data elements, weighting, and comparison groups that might be relevant for the proposed system. This brief was commissioned by NASFAA to further explore the issues raised in the RFI, focusing on one area in particular: methods of grouping peer institutions.

As noted by Miller (2013), colleges have been rated or grouped over time through mechanisms such as the Carnegie Classification system, which classifies institutions based on their mission3 (types of degrees awarded, selectivity, size, geography, and so on), or through associations that represent specific groups of institutions, such as research universities. Miller noted that such classifications play an important role in reflecting the diversity of U.S. postsecondary education. But how higher education institutions are grouped often differs depending on the goal of the classification. For PIRS, it is currently unclear whether the goal is to provide information to students and parents, to serve as a mechanism for institutional accountability, or some combination of both.4

This brief suggests that if one is going to create a system of ratings for colleges, it is important to have valid institutional comparison groups within which outcomes can be assessed. These peer comparison groups must be small enough to illustrate similarities among a set of colleges, but still large enough to allow sufficient institutions for analysis. The determination of the institutional groups is critical in producing useful comparisons of outcomes. Further, the goal, or goals, of the proposed ratings should influence which characteristics are used to define the comparison groups, as the grouping sketch below illustrates.
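As one illustration of how such groups might be formed, the sketch below clusters institutions on standardized characteristics using a simple k-means procedure. This is only one of many possible grouping strategies, and the characteristics, data, and choice of two groups are hypothetical; neither this method nor these variables have been specified by ED.

    import numpy as np

    # Hypothetical per-institution characteristics: percent Pell, median
    # SAT, log enrollment, and percent part-time. These are illustrative
    # stand-ins for whatever mission and student-body measures a rating
    # system might actually use.
    chars = np.array([
        [0.62, 910,  8.4, 0.48],
        [0.58, 940,  8.9, 0.41],
        [0.66, 880,  7.9, 0.55],
        [0.60, 930,  9.1, 0.50],
        [0.22, 1280, 9.8, 0.08],
        [0.18, 1340, 9.5, 0.05],
        [0.27, 1230, 10.0, 0.12],
        [0.24, 1290, 9.3, 0.09],
    ])

    # Standardize each characteristic so no single scale (e.g., SAT
    # points) dominates the distance calculation.
    z = (chars - chars.mean(axis=0)) / chars.std(axis=0)

    def kmeans(points, k, iters=100, seed=0):
        """Minimal k-means: returns a peer-group label per institution."""
        rng = np.random.default_rng(seed)
        centers = points[rng.choice(len(points), size=k, replace=False)]
        for _ in range(iters):
            # Assign each institution to its nearest group center.
            dists = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
            labels = dists.argmin(axis=1)
            # Move each center to the mean of its members (keep a center
            # in place if it ends up with no members).
            new = np.array([points[labels == j].mean(axis=0)
                            if np.any(labels == j) else centers[j]
                            for j in range(k)])
            if np.allclose(new, centers):
                break
            centers = new
        return labels

    labels = kmeans(z, k=2)
    for j in range(2):
        members = np.where(labels == j)[0].tolist()
        print(f"Peer group {j}: institutions {members}")

Whatever technique is used, the same tension appears: tighter groups are more homogeneous but contain fewer institutions against which to compare outcomes.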
The brief provides an overview of selected research and efforts to design comparison groups in order to foster a better understanding of the challenges faced in this process. It also uses a small number of institutional examples to illustrate some of the differences and similarities among colleges and universities in three states.

1 See NASFAA's comments at: http://www.nasfaa.org/EntrancePDF.aspx?id=18322.
2 February 6, 2014.
3 The Carnegie Classification is often thought of as capturing institutional mission, and the 2010 classifications have multiple ways of sorting institutions based on sector, degree offerings, special focus, urban/rural nature, enrollment size, and so on. However, the classification does not include the universe of institutions; in particular, as noted in the New America Foundation's comments (2014), many non-degree-granting institutions are excluded, and some categories have only a few institutions while others are very broad. They also note that many mission-specific indicators, such as transfer and developmental education focus, are not available at the federal level.