WENR

International Academic Data and Rankings: The Season

Late summer is a busy time in the world of comparative international education, especially since the advent of global university rankings and what is now sportingly referred to as ‘ranking season’: a month-long window in which three major academic rankers publish within weeks of each other and two international organizations release major datasets comparing national systems of education.

In the global university ranking world, Shanghai Jiaotong University [1] set the ball rolling in 2003 with the tabulation of its first Academic Ranking of World Universities [2], which was promptly matched the following year by the release of the World University Ranking [3] from the Times Higher Education Supplement and its partner Quacquarelli Symonds [4] (QS). Now that Times Higher Education (THE) and QS have parted ways, a third international ranking has been added to the maelstrom: THE has created a completely new ranking [5] methodology that now competes with Shanghai and with QS (which retained ownership of the World University Ranking and now works with media partner US News & World Report) in the market for the eyes and ears of internationally mobile students and their parents.

But wait, there’s more. One could also be forgiven for mistaking a fourth and fifth release of data for competitors in the tabulation wars, especially if some of the media headlines that accompany the annual release of the OECD’s ‘Education at a Glance’ [6] and the UNESCO Institute for Statistics’ Global Education Digest [7] are to be believed. This year, the British press in particular had a field day with the OECD statistics, which arrived amid an intensifying debate over the public funding of British universities, after a strict cap on domestic student numbers left tens of thousands without a university place this year.

Simplifying Complex Data Tables

The headline figure for nearly every British news outlet after the release of the OECD data concerned the country’s comparatively poor placement among industrialized nations in a table of university graduation rates (table A3.2 [8] in the report). According to the data, the proportion of people in the relevant age group earning a first degree dropped two percentage points between 2000 and 2008, from 37 percent to 35 percent, over a period in which many emerging economies significantly improved their university graduation rates. Together, these two trends dropped Britain from third to 15th among the 31 OECD countries in this particular table, placing it below the OECD average of 38 percent.

Smelling blood, much of the British media distilled the OECD’s complex and wide-ranging dataset down to this one table, proclaiming that Britain was plummeting down the ‘university tables.’ Many also quoted Wendy Piatt, director general of the Russell Group [9], which represents 20 of the country’s leading research institutions, who said in response to the OECD data that funding cuts could lead to British universities being “relegated to a lower division of higher education quality,” a metaphorical nod to both university rankings and Britain’s soccer leagues.

To be fair, it should be made clear (and it wasn’t by any news report we read) that the U.K.’s graduation rate had actually increased to 39 percent by 2007, and that the subsequent drop to 35 percent in 2008 was the result of changes in how data were collected for the most recent year of statistics (2008). Nonetheless, the U.K.’s comparative position on graduation rates between 2000 and 2007 still dropped from third to joint 12th (tied with the Slovak Republic and Japan), which is perhaps testimony to the thirst for education among the youth of many of the OECD’s rapidly developing economies more than it is evidence of rising drop-out rates among Britain’s undergraduate population, as some media reports postulated.

OECD on International Enrollments

The two biggest host nations in 2008 – the United States and the United Kingdom – maintained their dominance in absolute numbers; however, their strength as measured by total market share continued to drop. This decline is the result of two statistical forces: strong enrollment growth in emerging academic host nations and an overall pool of international students that continues to balloon.

According to the OECD data, the number of internationally mobile students grew by 10.7 percent between 2007 and 2008, to a total of 3.3 million. Five countries – Australia, France, Germany, the United Kingdom and the United States – continued to host over half of those students; however, the United States saw its share of the market drop from 26 percent in 2000 to 19 percent in 2008. The U.K.’s market share also fell, if by a smaller two-percentage-point margin, over the same period, while Australia, Canada, New Zealand, Russia and South Korea all saw their shares increase.

It should be noted here that a drop in market share by no means indicates a drop in total enrollments. In the U.S. case, absolute international enrollments grew from 475,000 in 2000 to 625,000 in 2008, according to the OECD statistics, but because the total global pool of international students grew at a much faster rate – from 1.8 million to 3.3 million over the same period – the U.S. share of global enrollments dropped considerably. Quite clearly, the pace of change in international higher education has been dramatic over the last decade, meaning that slower growth in mature, established markets translates into proportional decline relative to the overall market.
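
To make that arithmetic concrete, here is a minimal sketch using the rounded OECD figures quoted above; the numbers and the calculation are illustrative only.

```python
# Illustrative sketch: enrollment figures are the rounded OECD numbers cited above.
us_enrollments = {2000: 475_000, 2008: 625_000}     # international students hosted by the U.S.
global_pool = {2000: 1_800_000, 2008: 3_300_000}    # internationally mobile students worldwide

for year in (2000, 2008):
    share = us_enrollments[year] / global_pool[year] * 100
    print(f"{year}: U.S. share of global enrollments is about {share:.0f} percent")

# Prints roughly 26 percent for 2000 and 19 percent for 2008: absolute enrollments
# grew by some 150,000 students, yet market share fell because the pool grew faster.
```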

Perhaps one of the key data points for the British system this year, and one that was picked up by a number of national newspapers, is not that the country’s international market share has dropped two percentage points over the last eight years, but that international students make up almost 15 percent of the British student body. This figure, the third highest among OECD countries, suggests that British universities do an excellent job of marketing themselves abroad and of offering a vibrant, multicultural environment for higher learning. However, critics would also point out that a reliance on international students as a source of revenue comes with a series of concerns. One such concern, given the current political context of budget cuts and capped domestic enrollments, relates to capacity, overcrowding and the role of public financing in higher education.

In Australia, where more than one in five students is of international origin (easily the highest proportion among OECD countries), question marks over the quality of provision for international students have also arisen this past year. School closures, visa fraud, violence and questionable immigration practices have all damaged the sector’s reputation among full-fee-paying international students. However, the 2008 OECD data have yet to reflect the impact of those troubles on enrollments from abroad, which this year will be hit hard by a collapse in the Indian market.

Funding

Total spending on higher education in the United Kingdom as a percentage of GDP was unchanged at 1.3 percent, below the OECD average of 1.5 percent and well below that of the United States, which was by far the biggest spender at 3.1 percent of GDP. U.S. spending on higher education was a full half a percentage point above Canada’s and seven-tenths of a percentage point above Korea’s; apart from Canada, Korea was the only other country to spend more than 2 percent of GDP on higher education in 2007.

As in the United States, a majority of U.K. funding for higher education comes from non-government sources; however, somewhat surprisingly, public funding of institutions of higher learning in the United States is greater as a proportion of GDP than in the United Kingdom.

Across all OECD countries, overall spending on higher education from both public and private sources remained relatively static from 2006 to 2007, according to the report.

Other Key Data Points

All these data points are useful in and of themselves, and they provide fuel both for policymakers looking to push their education agendas and for those outside the system looking to change national education policies. But what the data fail to capture is the overall quality of a system or, perhaps more importantly to consumers (students), the quality of the institutions within it.

And this is where international rankings attempt to fill the gap.

Rankings

The ‘big three’ in the international rankings game generally base their tables on composite scores that aggregate weighted indicators, such as a university’s research output, its reputation and its level of internationalization. As many critics have pointed out, however, such methodologies tend to focus too heavily on research, especially in the hard sciences, where papers tend to have high citation rates, and on subjective reputation, while paying insufficient attention to other important university missions, such as the quality of teaching and how well institutions respond to the needs of their local labor markets. Others question the value of assessing institutions as a whole, suggesting that it would be more useful to assess individual departments, or simply to provide easily readable datasets that students, parents and others can compare and contrast according to their particular needs.
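
As a rough sketch of how such a composite score is typically assembled, consider the snippet below; the indicator names, weights and scores are invented for illustration and do not reproduce the methodology of any particular ranker.

```python
# Hypothetical illustration of a weighted composite score; indicators, weights
# and scores are invented for this sketch, not any ranker's actual methodology.

def composite_score(indicators: dict, weights: dict) -> float:
    """Combine normalized indicator scores (0-100) into a single weighted score."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights should sum to 1"
    return sum(indicators[name] * weight for name, weight in weights.items())

weights = {"research_output": 0.4, "reputation": 0.4, "internationalization": 0.2}
university_a = {"research_output": 90.0, "reputation": 75.0, "internationalization": 60.0}

print(f"Composite score: {composite_score(university_a, weights):.1f}")  # 78.0
```

Because the final score is nothing more than this weighted sum, even a small change to the weights can reshuffle the resulting league table.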

Methodologies

In response to the intense criticism to which ranking methodologies have been subjected, Times Higher Education, in partnership with its new data provider Thomson Reuters, promised months in advance of the launch of its revamped evaluation system that it would deliver a ranking based on reliable and quantifiable measures of quality rather than on subjective values, such as the reputational surveys that formed the backbone of the magazine’s previous methodology (and continue to do so for the QS ranking). In a news release accompanying the publication of its ranking, THE says that it is now giving more “weight to hard measures of excellence in all three core elements of a university’s mission—research, teaching, and knowledge transfer.”

The new methodology is based on 13 indicators across five broad categories: teaching (weighted 30 percent); research influence, as measured by citations (32.5 percent); research, based on volume, income and reputation (30 percent); internationalization, based on student and staff ratios (5 percent); and knowledge transfer and innovation, based on industry income (2.5 percent). It should be noted that, while THE stresses its move away from reputational surveys, over a third of the weighting is still based on a teaching reputational survey (15 percent of the total) and a research reputational survey (19.5 percent of the total). Previously, teaching quality was measured exclusively by student-staff ratios; now, in addition to the reputational survey, student-staff ratios at both the undergraduate and doctoral levels are used as proxies for teaching quality.
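
To make the arithmetic above concrete, the short sketch below simply tallies the five category weights reported for the new THE methodology and the share carried by the two reputational surveys; the figures are those cited in the text, and the labels are informal shorthand.

```python
# Category weights reported above for the 2010 THE methodology (percent of total),
# plus the share carried by the two reputational surveys. Labels are shorthand.
the_category_weights = {
    "teaching": 30.0,
    "research_influence_citations": 32.5,
    "research_volume_income_reputation": 30.0,
    "internationalization": 5.0,
    "industry_income": 2.5,
}
reputational_surveys = {"teaching_reputation": 15.0, "research_reputation": 19.5}

print(sum(the_category_weights.values()))   # 100.0
print(sum(reputational_surveys.values()))   # 34.5 -- just over a third of the total
```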

By contrast, QS uses a 20 percent weighting for teaching quality, which is measured exclusively by student-staff ratios. The QS ranking (like the former THE methodology) derives half of each institution’s overall score from two reputational surveys: 40 percent from a peer review survey and 10 percent from an employer survey. The Shanghai ranking uses no reputational surveys, preferring instead to rely exclusively on quantitative measures of research output and quality, and of the quality of those conducting the research.

A further 20 percent of the QS weighting is devoted to citations per faculty, an indicator THE has bumped up to 32.5 percent and altered to measure research influence rather than a straight count of citation numbers.

A full 70 percent of the Shanghai ranking is devoted to academic citations and publication output, with the remaining 30 percent based on faculty and alumni quality as measured by Nobel Prizes and Fields Medals.

For QS and THE, internationalization is also seen as a mark of a world-class university, with both retaining international student and international faculty ratios as indicators of quality (10 percent of the total QS weighting; 5 percent under the new THE methodology, as noted above).
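
For reference, the sketch below consolidates the QS and Shanghai weightings as described in the preceding paragraphs (the new THE category weights appear in the earlier snippet); the labels are informal shorthand rather than the rankers’ official indicator names.

```python
# Shorthand consolidation of the QS and Shanghai weightings described above
# (percent of total score); labels are informal, not the rankers' official names.
qs_weights = {
    "peer_review_survey": 40,
    "employer_survey": 10,
    "student_staff_ratio": 20,
    "citations_per_faculty": 20,
    "international_students_and_faculty": 10,
}
shanghai_weights = {
    "citations_and_publication_output": 70,
    "nobel_and_fields_awards_faculty_and_alumni": 30,
}

for name, weights in (("QS", qs_weights), ("Shanghai", shanghai_weights)):
    assert sum(weights.values()) == 100  # each scheme sums to 100 percent
    print(name, weights)
```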

2010 Ranking Results

Who’s the Best?

The biggest headline grabber of this year’s ranking season was Cambridge University [10], with its rise to pole position in the QS World University Ranking, knocking off perennial chart topper Harvard [11], which slid to No. 2 for the first time since the ranking was introduced in 2004. This came on the heels of Harvard’s continued perfect record in the Shanghai ranking, where it again finished first. The new Times ranking, therefore, was the de facto tiebreaker.

With its move away from reputational surveys and a renewed emphasis on research output and other quantifiable measures of quality, the Times agreed with the Shanghai rankers in finding Harvard to be the best university in the world.

Delving (a little) Deeper

Shanghai released its predictably predictable ranking in mid-August, finding that 17 of the world’s 20 best universities are located in the United States, and that 14 of the top 20 were just as good relative to their peers as they were last year (i.e., they maintained the same position in the ranking). Those that did move did so by just one place, up or down.

Getting a jump on its former partner, QS pre-released its top 200 to the media in early September, a week before THE went to press, grabbing headlines with its anointment of Cambridge as the best university in the world, bumping Harvard from first place and sending other perennial U.S. favorites tumbling down the list (just 53 U.S. universities made the top 200). This prompted headlines suggesting that the U.S. system might be in decline, before such speculation was put to bed a week later with the release of the Times’ findings, which not only re-anointed Harvard as the world’s best university but also reasserted the dominance of the U.S. system overall, with a total of 72 universities in the top 200 and a sweep of the top five spots. Cambridge polled a distant sixth, in a tie with Oxford [12].

Conclusion

Clearly, a university ranking is only as good or as useful as its methodology. A tweak here, a tweak there, and all of a sudden you have a completely different set of results. Rankings, at best, are an imperfect measure of comparative institutional quality.

However, as fickle and somewhat absurd as these rankings can seem, one cannot simply dismiss them as exercises in futility. Many schools take them extremely seriously, and most will advertise ranking results deemed sufficiently worthy, a practice that gives the rankings an inflated sense of credibility.

And beyond bragging and marketing rights, public and private funding can be attached to ranking performance, especially in developing countries eager to boost their research output. In many of these countries, governments are looking to develop core groups of schools with a chance of performing well in the rankings, in a bid to build the country’s academic reputation and perhaps entice foreign investment into the domestic economy. And the obsession with university performance is not reserved for governments of developing economies alone. Indeed, French president Nicolas Sarkozy recently instructed his science and higher-education ministry to set “the objective of having two French establishments in the top 20, and 10 in the top 100.”

There is little agreement in academia, or among the rankers themselves, on how best to gauge institutional quality. But there is consensus that in a globalized university system, which in 2008 counted 3.3 million internationally mobile students, there is a thirst among students and parents for information on where best to invest tuition dollars. And right now, the three dominant global rankings are filling that space.

Recognizing the influential role of rankings in the globalization of higher education, international organizations, think tanks and industry bodies such as the Organization for Economic Cooperation and Development, the European Centre for Higher Education [13] (UNESCO-CEPES), the Centre for Higher Education Development (Germany), the Institute for Higher Education Policy [14], and the Observatory on Academic Rankings and Excellence [15] (formerly the International Rankings Expert Group – IREG) are researching ways to develop more reliable indicators of institutional quality in a bid to better serve international students and institutions.

If there is agreement that rankings are highly problematic, there is also agreement that they are not going away any time soon. What they have done, therefore, is make obvious the need for reliable data and information on universities that can serve as a tool for transparency and accountability. Such information does not have to be used to rank universities against one another; it can certainly be used by governments and institutions to see where improvements can be made and how to better serve students.

Resources related to the improvement of international ranking systems