Russian Universities in URAP World Ranking (Overall) 2013-2014
International Workshop: Rankings and Enhancement of International Competitiveness of Russian Universities
Moscow, 30 September – 1 October 2014
URAP – University Ranking by Academic Performance
Presented by Prof. Dr. Ural AKBULUT (URAP Coordinator and Former President of Middle East Technical University)
www.urapcenter.org | [email protected]

URAP RANKING

The University Ranking by Academic Performance (URAP) Research Laboratory was established at the Informatics Institute of Middle East Technical University in 2009. URAP is a nonprofit organization. Its main objective is to develop a ranking system for the world's universities based on academic performance, as determined by the quality and quantity of scholarly publications. In line with this objective, a World Ranking of 2,000 Higher Education Institutions has been released annually since 2010. Present and previous ranking results are posted on the URAP web page, and all results are open to the public.

INTRODUCTION (*)

Performance analysis and benchmarking of universities at national and global levels have attracted increasing interest from researchers, policy makers, students and news media outlets in the past decade. University rankings are used for different purposes by various stakeholders in society. Prospective students and their families are the foremost audience of university rankings, given the stakes involved in choosing an appropriate university at home or abroad in terms of time investment, future career prospects and financial resources. In addition, university administrators increasingly see rankings as a means to develop strategies for fostering the growth and development of their institutions. Policy makers often use rankings to evaluate trends in higher education systems in the world and in their own countries.
Finally, news media outlets utilize university rankings as a means to inform society about the status of higher education institutions in the country and in the world.

The first attempts to rank universities date back to Galton, who analyzed distinguished British scientists who were Fellows of the Royal Society. He studied the education, occupation, race and other characteristics of those scientists' parents to see whether their success was predetermined by their parents. In a similar study, Maclean examined successful university graduates and then ranked British universities based on the number of successful graduates. Ellis ranked British universities according to the number of “genius” graduates. Ranking efforts in the US trace back to Cattell's directory, American Men of Science. He listed scientists in order, considering the universities they attended, the awards they received and the universities that employed them. Cattell identified the top 1,000 scientists as the most successful, and then ranked American universities based on the number of top scientists they employed. B. W. Kunkel and D. B. Prentice ranked US universities according to the percentage of their living alumni listed in the 1928 edition of Who's Who in America. K. C. Babcock prepared a report in 1910 that ranked the undergraduate education of universities; Babcock did not publish the report because it was not expected to be accepted by the universities. Hughes wrote to hundreds of faculty members and asked them to list the top scientists and the best departments in their fields in the US. In 1924 there were 65 universities in the US with doctoral programs; Hughes ranked 38 of them in 20 disciplines. Hughes' system has been considered the first ranking based on reputation.
Cartter & Sawyer announced a ranking system in 1966 for the American Council on Education. The ranking was well received by US higher education institutions. In 1982, the Committee on Quality-Related Characteristics of Research-Doctorate Programs published a report on American universities. US News and World Report magazine published a ranking of American universities first in 1983 and then annually since 1987. In 1991, Maclean's started ranking Canadian universities using multiple criteria such as students, faculty, resources, student support, library and reputation. In the UK, The Times published the first national ranking in 1992, and The Daily Telegraph announced another national ranking in 1997. In 1998, three new rankings were announced for universities in the UK: two were published by the Financial Times and the Sunday Times newspapers, and the third was prepared by Red Mole, sponsored by Reed Graduates. Four national rankings in the UK (The Complete University Guide, The Guardian, The Sunday Times and The Times) have since been announced annually. More recently, numerous national university ranking systems have been developed, including Folha's University Ranking in Brazil, ARWU's Greater China Ranking, ARWU's Macedonian HEIs Ranking, El Mercurio in Chile, Netbig in China, the CHE University Ranking in Germany, India Today-Nielsen in India, Vision in Italy, the IQAA Ranking in Kazakhstan, the KCUE Ranking in Korea, Veidas in Lithuania, Setera in Malaysia, the PBRF Ranking in New Zealand, the HEC Ranking in Pakistan, Perspektywy in Poland, the Ad-Astra Ranking in Romania, ARRA in Slovakia, and the Top-200 Ranking in Ukraine. National university rankings in the USA and the United Kingdom paved the way for other national rankings as well as global rankings.
These developments led to the inception of several global ranking systems, such as the Academic Ranking of World Universities (ARWU), the Leiden Ranking, the Times Higher Education Ranking, the QS World Ranking, the National Taiwan University Ranking (formerly known as the HEEACT Ranking), SCImago, the URAP World Ranking, Webometrics and U-Multirank, which rank universities across countries. Since most global rankings cover only the top 500–750 universities in the world, they tend to include mostly institutions from developed countries. Even though such rankings can inform analysis at the national level for developed countries like the USA, the UK and Germany, they are not informative for institutions located in developing countries, which has motivated the development of several national ranking systems tailored to the peculiarities of each country. The use of bibliometric data obtained from widely known and credible information resources such as Web of Science, Scopus, and Google Scholar has contributed to the objectivity of these ranking systems. Universities in other countries around the world also deserve and need to know where they stand among other institutions at global and national levels. This motivated us to develop a multi-criteria ranking system that is more comprehensive in coverage, so that more universities have a chance to observe the state of their academic progress.

(*) Alaşehir, O., Çakır, M. P., Acartürk, C., Baykal, N., & Akbulut, U. (2014). URAP-TR: a national ranking for Turkish universities based on academic performance. Scientometrics, 1–20. doi:10.1007/s11192-014-1333-4

AIM AND SCOPE

As globalization drives rapid change in all aspects of research & development, international competition and collaboration have become high-priority items on the agenda of most universities around the world.
In this climate of competition and collaboration, ranking universities in terms of their performance has become a widely popular and debated research area. All universities need to know where they stand among other universities in the world in order to evaluate their current academic performance and to develop strategic plans that can help them strengthen and sustain their progress.

The URAP ranking system's focus is on academic quality. URAP gathers data on 3,000 universities across the world in an effort to rank these institutions by their academic performance. The overall score of each university is based on its performance over several indicators, which are described in the Ranking Indicators section. The ranking covers universities only and excludes governmental academic institutions such as the Chinese Academy of Sciences and the Russian Academy of Sciences. Data for 3,000 HEIs have been processed, and the top 2,000 of them are scored. Thus, URAP covers approximately 10% of all higher education institutions in the world. The URAP ranking system is based entirely on objective data obtained from reliable bibliometric sources.

The goal of the URAP ranking system is not to label world universities as best or worst. Our intention is to help universities identify potential areas of progress with respect to specific academic performance indicators. Like other ranking systems, the URAP system is neither exhaustive nor definitive, and it is open to new ideas and improvements. The current ranking system will be continuously upgraded based on our ongoing research and the constructive feedback of our colleagues.

DATA COLLECTION AND SCORING (OVERALL RANKING OF 2000 UNIVERSITIES)

Data are gathered from Web of Science and other sources that provide lists of world universities. The reliability of a ranking depends mostly on the quality of the data used. Pre-processing and data cleansing techniques were employed to improve the reliability of the data.
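An overall score built from several indicators, as described above, is in essence a weighted aggregation of per-indicator scores followed by a sort. The following is a minimal sketch of that idea only; the indicator names and weights below are illustrative placeholders, not URAP's actual indicators or weights (those are defined in URAP's Ranking Indicators section).

```python
# Sketch of a multi-indicator ranking: weighted sum of per-indicator
# scores, then sort by the overall score. Indicator names and weights
# are hypothetical examples, not URAP's published methodology.

WEIGHTS = {
    "articles": 0.25,
    "citations": 0.25,
    "total_documents": 0.20,
    "article_impact": 0.15,
    "international_collaboration": 0.15,
}

def overall_score(indicator_scores: dict) -> float:
    """Weighted sum of indicator scores (each assumed pre-normalized)."""
    return sum(WEIGHTS[name] * indicator_scores.get(name, 0.0)
               for name in WEIGHTS)

def rank(universities: dict) -> list:
    """Return university names sorted by descending overall score."""
    return sorted(universities,
                  key=lambda u: overall_score(universities[u]),
                  reverse=True)

# Toy data for two hypothetical institutions:
unis = {
    "Uni A": {"articles": 90, "citations": 80, "total_documents": 85,
              "article_impact": 70, "international_collaboration": 60},
    "Uni B": {"articles": 70, "citations": 95, "total_documents": 75,
              "article_impact": 80, "international_collaboration": 90},
}
print(rank(unis))  # Uni B's citation and collaboration scores outweigh Uni A's output
```

In practice the raw indicator values would first be normalized (for example, scaled relative to the top-performing institution) so that indicators measured on different scales can be combined.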