World University Rankings
Introduction
While the business of ranking universities in the United States and many other countries has become fairly commonplace, the ranking of universities on a regional or global level is a much more recent phenomenon. Just as globalization has had a profound effect on the way companies and countries do business, so it has had an effect on the way colleges market themselves to students and on the way students go about selecting their study destinations.
In most countries, higher education has traditionally been viewed as a public good provided and guaranteed by the state. In the last few decades, however, this paradigm has been radically shaken up to the point that education is now on the agenda of the World Trade Organization — under the General Agreement on Trade in Services — as a commodity to be freely traded across borders. Today, many courses and degree programs are packaged and marketed just like consumer goods, and students are increasingly seen as customers with a world of choice in front of them.
As a result, far more students than ever before are crossing borders to attend universities and colleges outside their countries of origin, while universities and colleges are increasingly looking for ways to export their services to students in those students’ home countries. This “export” of university services is most often achieved by franchising degree programs, opening branch campuses, or developing the means to offer programs remotely via the internet.
In the same way that a prospective car buyer may look to indicators such as safety standards or fuel efficiency when shopping for a new vehicle, student consumers entering the market for higher education are now demanding ways to differentiate between the academic products that are on offer to them. In the realm of international academic comparison shopping, there are now two high-profile ranking systems answering the call, and it would probably be a safe bet to suggest that more are on the way.
Ranking Across Borders: Shanghai and London
As the provision of higher educational opportunities becomes increasingly international, so the need for reliable means of international institutional comparison becomes more pressing. Where, at the turn of the century, no truly international ranking of higher education institutions existed, a number of organizations now compile and publish annual global university rankings. The two most frequently cited of these rankings are the Academic Ranking of World Universities, compiled by researchers from the Institute of Higher Education at Shanghai Jiaotong University, and the Times Higher Education Supplement World University Rankings, compiled by employees of the Times Higher Education Supplement, based in London.
These two rankings currently represent the most comprehensive efforts to compare universities across borders, although it should be noted that in specific fields such as business administration, top schools have been ranked by a number of different publications for some time. Business Week started the trend in 1988, and the Economist, Forbes, The Wall Street Journal, and The Financial Times have all since followed suit.
Indeed, the proliferation of rankings and the incessant requests for information have led some schools to complain of acute ranking fatigue. The 2005 Economist rankings, for example, did not feature Harvard Business School or the Wharton School at the University of Pennsylvania because both chose not to provide the London-based weekly newsmagazine with access to the alumni records it had requested.
Shanghai Jiaotong Academic Ranking of World Universities
The Shanghai Jiaotong University (SJTU) ranking was originally conceived to discern what kind of research gap existed between Chinese and ‘world-class’ universities, and was conducted as an academic exercise rather than an act of consumer advocacy for international consumption. In response to requests from international colleagues, SJTU researchers have since agreed to publish their findings on the World Wide Web.
Offering its first results in 2004 (gauging academic year 2003), the Academic Ranking of World Universities (ARWU) lists the world’s top 500 institutions of higher education according to a methodology based heavily on faculty and alumni research output and awards. Over 2,000 universities were reviewed and approximately 1,000 ranked. In addition to a global top 500, ARWU also breaks its rankings down by region, providing top-100 league tables for North and Latin America, Europe and Asia/Pacific.
Methodology
Institutions are compared and ranked on a strictly quantitative basis, with no room made for subjective impressions. Academic and research performances are measured using the following indicators and weightings:
- 10% — Alumni of an institution winning Nobel Prizes and Fields Medals
- 20% — Faculty of an institution winning Nobel Prizes and Fields Medals
- 20% — Highly cited researchers in 21 broad subject areas
- 20% — Articles published in Nature and Science*
- 20% — Articles in Science Citation Index-Expanded (SCIE), Social Science Citation Index (SSCI), and Arts & Humanities Citation Index (AHCI).
- 10% — Academic performance with respect to the size of an institution — the scores of the first five indicators divided by the number of full-time equivalent staff.
* For institutions specialized in humanities and social sciences such as London School of Economics, N&S is not considered, and the weight of N&S is relocated to other indicators.
For each indicator, the institution scoring the highest mark is awarded 100 points, and all other institutions receive a score expressed as a percentage of that top mark. The weighted indicator scores are then summed to arrive at a total for each institution. Finally, the institution with the highest total is awarded an overall score of 100, and all other institutions receive an overall score expressed as a percentage of the top institution’s total.
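To make the scoring procedure concrete, the sketch below applies the weightings and normalization steps described above to a small set of invented indicator values. The institutions and figures are hypothetical and are not actual ARWU data.

```python
# Illustrative sketch of the ARWU-style scoring described above. The institutions
# and raw indicator values are invented for demonstration; they are not ARWU data.

ARWU_WEIGHTS = {
    "alumni_awards": 0.10,     # alumni winning Nobel Prizes and Fields Medals
    "faculty_awards": 0.20,    # faculty winning Nobel Prizes and Fields Medals
    "highly_cited": 0.20,      # highly cited researchers in 21 subject areas
    "nature_science": 0.20,    # articles published in Nature and Science
    "citation_indexes": 0.20,  # articles in SCIE, SSCI and AHCI
    "per_capita": 0.10,        # size-adjusted academic performance
}

raw = {
    "Univ A": {"alumni_awards": 30, "faculty_awards": 60, "highly_cited": 80,
               "nature_science": 120, "citation_indexes": 9000, "per_capita": 55},
    "Univ B": {"alumni_awards": 12, "faculty_awards": 25, "highly_cited": 40,
               "nature_science": 60, "citation_indexes": 7000, "per_capita": 70},
    "Univ C": {"alumni_awards": 5, "faculty_awards": 10, "highly_cited": 15,
               "nature_science": 20, "citation_indexes": 4000, "per_capita": 30},
}

def arwu_style_scores(data, weights):
    """Score each indicator against the top performer (top = 100), weight and sum,
    then rescale so the highest overall total is also 100."""
    best = {ind: max(inst[ind] for inst in data.values()) for ind in weights}
    totals = {name: sum(weights[ind] * 100 * vals[ind] / best[ind] for ind in weights)
              for name, vals in data.items()}
    top_total = max(totals.values())
    return {name: round(100 * t / top_total, 1) for name, t in totals.items()}

print(arwu_style_scores(raw, ARWU_WEIGHTS))  # top-scoring institution comes out at 100.0
```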
Criticisms
Although the Shanghai rankings are perhaps the most frequently cited of international rankings, the methodology is certainly not without its critics. Foremost among the critiques is the heavy bias in favor of science and technology-related subjects, and the relative disregard for a university’s performance in the arts, humanities and social sciences. A commonly cited example of this bias is the fact that there are three Nobel Prizes in the sciences and just one each for the social sciences and the arts, while the Fields Medals are awarded exclusively to mathematicians.
Similarly, there is an imbalance in the production of scholarly articles across different fields, and also in the way those articles are counted by the Shanghai ranking. This helps to explain why many well-regarded liberal arts schools are ranked relatively low in comparison to institutions with, for example, medical faculties. In response, SJTU researchers have made an exception for institutions specializing in the humanities and social sciences with regard to the Nature and Science (N&S) indicator, disregarding their N&S rating and relocating its weight to the other indicators. A further critique of using citations as an indicator is the fact that English-speaking scholars (and, by extension, institutions in English-speaking countries) are more likely to be published in major research publications, as the vast majority of the top scholarly journals are published in English.
Other critics question whether the characteristics measured by the Shanghai ranking are the most meaningful indicators of quality, while also arguing that the weightings applied to each indicator are entirely arbitrary. This common criticism of rankings in general argues that if different measures of quality were chosen, or different weightings applied, an entirely new set of overall results would be produced.
Question marks have also been raised over the entirely quantitative nature of the Shanghai rankings. If the quality of academic work is open to both subjective and objective scrutiny, then shouldn’t the quality of an academic institution be open to the same type of evaluation?
Among the most vocal (and visible) critics of the Shanghai rankings has been the Times Higher Education Supplement. The flaws perceived by the British newspaper led it to produce its own annual international ranking of universities. In an editorial that accompanied its inaugural report in 2004, the publication questioned the validity of using the number of prizewinners among faculty and alumni as a criterion for gauging the overall quality of a university, especially in an historical context. Why credit a university for enrolling a prizewinner 40 years ago? Furthermore, why credit only the university at which the original research was conducted and not the institution that currently pays the prizewinner’s salary? These are valid questions, considering that the Shanghai rankings allot a 30 percent weighting to the faculty and alumni prizewinner categories. In response, the THES sought to produce a ranking that takes into account a broader spectrum of criteria on which to judge the academic quality of universities worldwide.
Times Higher Education Supplement (THES)
The THES World University Rankings lists a global top 200 with accompanying listings for the top 50 European, top 50 North American, and top 50 universities from the rest of the world. In addition, the 2005 assessment includes rankings for the fields of science, technology, biomedicine, social science and the arts and humanities.
Methodology
Building on its criticism of the SJTU rankings and its experience producing domestic league tables, the THES has sought to produce a ranking that is “current, rather than historical,” and that finds suitable “proxies for excellence in teaching and research.” With this in mind, the THES methodology places great weight (70 percent) on peer-review-based measures.
The peer-review sample used for the 2005 rankings comprised 2,375 research-active academics from Asia, Europe and North America, with a smaller number from Africa and Latin America. Academics were chosen in roughly equal numbers from the sciences, technology, biomedicine, the social sciences and the arts; they were asked to name the top universities in their subject areas and geographical regions. The data derived from this survey accounted for 40 percent of the total score, while a survey of 333 responding recruiters accounted for a further 10 percent. Rounding out this 70 percent is the number of academic citations per staff member in the ten-year period to 2005, which accounts for 20 percent of the overall score. The remaining 30 percent of the total score is measured by staff-student ratio (20 percent) and the international orientation of the campus (5 percent staff, 5 percent students).
- 40% — Peer review
- 20% — Faculty citations (last ten years)
- 20% — Staff/student ratio
- 10% — Recruiter review
- 5% — Percentage of international students
- 5% — Percentage of international faculty
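As a rough sketch of how these weightings combine into a single score, the example below forms the weighted sum for one hypothetical institution, assuming each component has already been normalised to a 0-100 scale. The component names and scores are invented for illustration.

```python
# Minimal sketch of the THES weighted composite, assuming each component has
# already been normalised to a 0-100 scale. The example scores are invented.

THES_WEIGHTS = {
    "peer_review": 0.40,
    "citations_per_faculty": 0.20,
    "staff_student_ratio": 0.20,
    "recruiter_review": 0.10,
    "intl_students": 0.05,
    "intl_faculty": 0.05,
}

def thes_composite(component_scores, weights=THES_WEIGHTS):
    """Weighted sum of normalised component scores (0-100 each)."""
    return sum(weights[k] * component_scores[k] for k in weights)

example_institution = {
    "peer_review": 85, "citations_per_faculty": 70, "staff_student_ratio": 60,
    "recruiter_review": 90, "intl_students": 40, "intl_faculty": 50,
}
print(round(thes_composite(example_institution), 1))  # 73.5 for these invented scores
```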
Many of the criticisms leveled against the Shanghai rankings could also be applied to the THES rankings, although the THES does attempt to reduce the emphasis on scientific and technological fields of study in composing its ranking. Nonetheless, the somewhat arbitrary nature of choosing and weighting criteria — often termed the “weight-and-add” approach, and common to the SJTU, THES and a majority of university rankings around the world — leaves the THES ranking open to criticism; after all, there are far more than six factors, and interactions among those factors, that determine the overall environment of a campus.
World’s Top 20 Universities as an Average of THES & SJTU 2005 Rankings
Rank | Av. Rank | Institution | Country | THES | SJTU |
1 | 1 | Harvard | U.S. | 1 | 1 |
2 | 2.5 | Cambridge | U.K. | 3 | 2 |
3 | 3.5 | Massachusetts Inst. of Technology | U.S. | 2 | 5 |
4 | 4 | Stanford | U.S. | 5 | 3 |
5 | 5 | University of California, Berkeley | U.S. | 6 | 4 |
6 | 7 | California Inst. of Technology | U.S. | 8 | 6 |
6 | 7 | Oxford | U.K. | 4 | 10 |
8 | 8.5 | Princeton | U.S. | 9 | 8 |
9 | 9 | Yale | U.S. | 7 | 11 |
10 | 13 | Cornell | U.S. | 14 | 12 |
11 | 13.5 | Chicago | U.S. | 18 | 9 |
11 | 13.5 | Columbia | U.S. | 20 | 7 |
13 | 17.5 | University of California, S.F. | U.S. | 17 | 18 |
14 | 18 | Tokyo | Japan | 16 | 20 |
14 | 18 | Imperial College London | U.K. | 13 | 23 |
16 | 21.5 | Duke | U.S. | 11 | 32 |
17 | 23 | Johns Hopkins | U.S. | 27 | 19 |
18 | 23.5 | University of Pennsylvania | U.S. | 32 | 15 |
19 | 24 | Swiss Federal Institute of Technology, Zurich | Switzerland | 21 | 27 |
20 | 25.5 | University of California, L.A. | U.S. | 37 | 14 |
Sources: Shanghai Jiaotong Academic Ranking of World Universities – 2005; Times Higher Education Supplement World University Rankings – 2005.
Alternative Models Used to Compile International Rankings
Fuzzy Clustering
An alternative to weighting and adding different sets of criteria to produce a single result has been proposed by Peter Hirst on his website, University Metrics. Using a statistical approach known as fuzzy clustering, universities are grouped according to their statistical similarities across a range of unweighted criteria (for the purposes of his website presentation, Hirst uses the two sets of criteria defined by the THES and SJTU rankings). Rather than ranking universities on an absolute scale, the fuzzy-clustering approach classifies universities according to their similarities with their “statistical peers” and removes arbitrary weightings from the process. As Hirst concedes, however, “the actual criteria used, and ways those are scored still can be questionable.”
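The toy sketch below illustrates the general fuzzy-clustering idea with a bare-bones fuzzy c-means routine: rather than receiving a single rank, each university ends up with a degree of membership in every cluster of statistical peers. The indicator values are invented, and the code is a generic illustration of the technique, not Hirst’s actual method.

```python
# Bare-bones fuzzy c-means: each row of X is a university, each column an
# unweighted, normalised indicator. Memberships are degrees of belonging, not ranks.
import numpy as np

def fuzzy_cmeans(X, n_clusters=3, m=2.0, n_iter=100, seed=0):
    """Return cluster centres and the membership matrix (each row sums to 1)."""
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], n_clusters))
    U /= U.sum(axis=1, keepdims=True)                 # random fuzzy memberships
    for _ in range(n_iter):
        Um = U ** m
        centres = (Um.T @ X) / Um.sum(axis=0)[:, None]            # weighted centroids
        dist = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2) + 1e-12
        inv = dist ** (-2.0 / (m - 1.0))                           # standard FCM update
        U = inv / inv.sum(axis=1, keepdims=True)
    return centres, U

# Invented indicator scores for five hypothetical universities.
X = np.array([
    [95, 90, 88],   # strong across the board
    [92, 85, 90],
    [60, 70, 55],   # mid-range group
    [58, 65, 60],
    [30, 25, 35],   # weaker group
])
centres, memberships = fuzzy_cmeans(X)
print(np.round(memberships, 2))  # each university's degree of membership in each cluster
```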
On his website, Hirst offers another alternative way to rank universities, which he dubs the G-Factor, based on incoming links to university web pages from other university web pages (see below).
International Rankings by Internet Presence
Webometrics Ranking of World Universities
Produced by the Spain-based Cybermetrics Research Group and first published in 2005, the Webometrics ranking rates universities on their overall internet presence. Specifically, the ranking is designed both to measure and to promote an institution’s commitment to online publishing and Open Access learning. On their homepage, the Webometrics researchers state, “if the web performance of an institution is below the expected position according to their academic excellence, university authorities should reconsider their web policy, promoting substantial increases in the volume and quality of their electronic publications.”
The ranking is published twice a year, in January and July. Web presence is measured using three broad categories: size, visibility and rich files. Under the ‘size’ category, an institution’s absolute web presence is calculated as the total number of web pages under the university domain name. The ‘visibility’ category is a measure of an institution’s total number of unique external in-links, and the ‘rich-file’ category is an attempt by Webometrics researchers to define a series of indicators that can be used to measure an institution’s commitment to providing online access to its research. The indicators used are the total numbers of .pdf, .ps (Postscript), .doc (Word) and .ppt (PowerPoint) files associated with the university domain name. These four file extensions, the so-called rich files, are considered by the researchers to represent the files most closely related to publication activity. A weighting ratio of 4:2:1 appears to have been applied to visibility, size, and rich files respectively.
A full explanation of the methodology is available from: www.webometrics.info/methodology.html.
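As a rough illustration of how such a composite might be formed, the sketch below applies the apparent 4:2:1 visibility:size:rich-files ratio to invented counts, normalising each component against the best performer. This is one plausible reading of the published ratio, not the Cybermetrics group’s actual calculation, and the institutions and figures are hypothetical.

```python
# One possible reading of the 4:2:1 visibility:size:rich-files ratio described above.
# Raw counts are invented; each component is scored relative to the best performer.

WEIGHTS = {"visibility": 4, "size": 2, "rich_files": 1}   # 4:2:1 ratio

raw = {
    "Univ A": {"visibility": 250_000, "size": 1_200_000, "rich_files": 40_000},
    "Univ B": {"visibility": 180_000, "size": 1_500_000, "rich_files": 25_000},
    "Univ C": {"visibility":  90_000, "size":   600_000, "rich_files": 12_000},
}

def webometrics_style_score(data, weights):
    """Weighted average of components normalised against the category leader."""
    best = {k: max(inst[k] for inst in data.values()) for k in weights}
    total_weight = sum(weights.values())
    return {
        name: round(100 * sum(weights[k] * vals[k] / best[k] for k in weights) / total_weight, 1)
        for name, vals in data.items()
    }

print(webometrics_style_score(raw, WEIGHTS))  # higher score = stronger web presence
```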
University Metrics: The “G-Factor”
Similar to the Webometrics approach, the G-Factor methodology is a measure of an institution’s internet presence; however, it is exclusively concerned with the number of links from other university websites. The (G)oogle-Factor could therefore be considered something of an internet-based peer review, in which the popularity (as a surrogate for importance) of an institution’s website is measured from the combined perspectives of a global pool of university websites.
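The sketch below shows the G-Factor idea in miniature: given a small pool of university sites and the links observed between them, each institution is scored by the number of other pool members linking to it. The domains and link data are invented; a real implementation would crawl or query the full pool of university websites.

```python
# Toy illustration of a G-Factor-style count of inbound links within a fixed pool
# of university websites. All domains and link data below are invented.

# Outbound links observed from each university's site to others in the pool.
outbound_links = {
    "uni-a.edu": {"uni-b.edu", "uni-c.edu"},
    "uni-b.edu": {"uni-a.edu", "uni-c.edu"},
    "uni-c.edu": {"uni-a.edu"},
    "uni-d.edu": {"uni-a.edu", "uni-b.edu", "uni-c.edu"},
}

def g_factor_counts(links):
    """Count inbound links to each site from the other sites in the pool (self-links ignored)."""
    counts = {site: 0 for site in links}
    for source, targets in links.items():
        for target in targets:
            if target != source and target in counts:
                counts[target] += 1
    return dict(sorted(counts.items(), key=lambda kv: kv[1], reverse=True))

print(g_factor_counts(outbound_links))
# {'uni-a.edu': 3, 'uni-c.edu': 3, 'uni-b.edu': 2, 'uni-d.edu': 0}
```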
Internet-Based Rankings as an Average of ‘G-Factor’ and Webometric Ranks
Rank | Av. Rank | Institution | Country | Webo | G-Factor |
1 | 1.5 | Massachusetts Inst of Technology | U.S. | 2 | 1 |
2 | 2 | Univ California Berkeley | U.S. | 1 | 3 |
3 | 2.5 | Harvard Univ | U.S. | 3 | 2 |
4 | 4 | Stanford Univ | U.S. | 4 | 4 |
5 | 6.5 | Univ Washington | U.S. | 6 | 7 |
6 | 8.5 | Univ Illinois Urbana Champaign | U.S. | 9 | 8 |
7 | 9.5 | Univ Pennsylvania | U.S. | 13 | 6 |
8 | 10 | Univ Wisconsin Madison | U.S. | 7 | 13 |
8 | 10 | Univ Michigan | U.S. | 8 | 12 |
10 | 12 | Cornell Univ | U.S. | 10 | 14 |
10 | 12 | Univ Cambridge | U.K. | 23 | 11 |
12 | 14.5 | Univ Texas Austin | U.S. | 5 | 24 |
13 | 15 | Carnegie Mellon Univ | U.S. | 21 | 9 |
14 | 16 | Univ California LA | U.S. | 14 | 18 |
15 | 18 | Univ Virginia | U.S. | 15 | 21 |
15 | 18 | Univ Minnesota | U.S. | 17 | 19 |
15 | 18 | Rutgers Univ | U.S. | 26 | 10 |
18 | 19.5 | Columbia Univ | U.S. | 12 | 27 |
18 | 19.5 | Univ North Carolina Chapel Hill | U.S. | 16 | 23 |
20 | 24.5 | Princeton Univ | U.S. | 44 | 5 |
20 | 24.5 | Arizona Univ | U.S. | 34 | 15 |
Sources: Webometrics and University Metrics
4 International Colleges & Universities
This website maintains a ranking of world universities by ‘web popularity.’ However, an explanation of the ranking methodology could not be found.
Concluding Remarks
In both the web-based rankings and the SJTU and THES “weight-and-add” rankings, institutions from the United States and the United Kingdom consistently dominate, with U.S. institutions particularly prominent. One possible explanation, for the SJTU ranking in particular, is the widespread use of the English language in academia: in terms of citations and publication records, this makes a huge difference, as the vast majority of top journals are published in English. By extension, the use of English as the international language of academia makes it easier for institutions in the English-speaking world to attract top faculty and students. Undoubtedly the biggest pull for top faculty, however, is the promise of generous salaries and state-of-the-art facilities, and in this domain the U.S. currently has little competition.