Friday, July 27, 2007

Research Guide: Educational Rankings



This is a very good page produced by Boston College, with links to sites and articles on university rankings. For a start, take a look at 'Playing with Numbers' by Nicholas Thompson.

Wednesday, July 25, 2007

A Mystery Solved

One of the more interesting elements in the Guide to the World's Top Universities by John O'Leary, Nunzio Quacquarelli and Martin Ince, published by QS Quacquarelli Symonds at the end of 2006, is the information about student faculty ratio provided in the directory of over 500 universities and in the profiles of the world's top 100 universities.

These are, even at first glance, not plausible: 590.30 students per faculty member at Pretoria, 43.30 at Colorado State University, 18.10 at Harvard and 3.50 at Dublin Institute of Technology.

Scepticism increases when the Guide's data for student faculty ratio is correlated with the ratios derived from the scores out of 100 for this measure in the 2006 rankings (cross-checked against the data on individual universities on QS's topuniversities site). The correlation for 517 universities is negligible at .057 and statistically insignificant (2-tailed p = .195).

Comparing the two sets of data on student faculty ratio for the British universities in the rankings shows that the problem lies with the information in the Guide, not with that in the rankings. The rankings data correlates highly with the figures provided by the Higher Education Statistics Agency (HESA: see earlier post) (.712, sig = .000) and with those taken from the website williseemytutor (.812, sig = .000). There is no significant correlation between the data in the Guide and either the HESA data (.133, sig = .389) or the williseemytutor data (.179, sig = .250).
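For anyone who wants to repeat the exercise, the cross-checks above amount to a handful of Pearson correlations. Here is a minimal sketch in Python, using pandas and scipy rather than SPSS; the file name and column names are hypothetical stand-ins for the Guide, rankings, HESA and williseemytutor figures.

# A minimal sketch of the cross-checks described above, done with scipy
# instead of SPSS. The CSV file and its column names are hypothetical;
# the real figures came from the Guide, the 2006 rankings, HESA and
# williseemytutor.
import pandas as pd
from scipy.stats import pearsonr

df = pd.read_csv("uk_student_faculty_ratios.csv")  # hypothetical: one row per UK university

for column in ["guide_ratio", "hesa_ratio", "williseemytutor_ratio"]:
    r, p = pearsonr(df["rankings_ratio"], df[column])
    print(f"rankings ratio vs {column}: r = {r:.3f}, 2-tailed p = {p:.3f}")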

So, where did the Guide's student faculty data come from?

First, here are the most favourable student faculty ratios calculated from the scores in the rankings (they can be cross-checked at the topuniversities site) and rounded to one decimal place.

Duke 3.5

Yale 3.7

Eindhoven University of Technology 3.8

Rochester 3.8

London Imperial College 3.9

Paris Sciences Po 4.0

Tsing Hua 4.1

Emory 4.1

Geneva 4.3

Vanderbilt 4.3


Now, here are the most favourable ratios given in the Guide.

Dublin Institute of Technology 3.5

Wollongong 3.7

Ecole Polytechnique 3.8

Rio de Janeiro 3.8

Ljubljana 3.9

Oulu 4.0

Trento 4.1

Edinburgh 4.1

Fudan 4.3

Utrecht 4.3


Notice that the ratio of 3.5 is assigned to Duke University in the rankings and to Dublin IT in the Guide. If the universities are arranged alphabetically, these two would be in adjacent rows. Likewise, the other ratios listed above are assigned to universities that would be next to each other, or nearly so, in an alphabetical listing.

Next are the least favourable ratios derived from the rankings data.

Pune 580

Delhi 316

Tor Vergata 53

Bologna 51

Cairo 49

Concordia 42


Now the ratios in the Guide.

Pretoria 590

De La Salle 319

RMIT 53

Bilkent 51

Bucharest 49

Colorado 42

Notice again that, except for Tor Vergata and RMIT, each ratio in the two data sets is shared by universities that are adjacent or nearly adjacent alphabetically.

The conclusion is unavoidable. When the Guide was being prepared, somebody created a new file and made a mistake, slipping down one, two or a few rows and pasting the rankings data into the wrong rows. As a result, every university in the Guide's directory acquired a new and erroneous student faculty ratio.
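To see how such a slip would produce exactly the pattern above, here is a toy sketch in Python. The table is a schematic fragment, not the actual file used by QS; only Duke's 3.5 and Yale's 3.7 are taken from the rankings figures quoted above, the other two ratios are placeholders.

# A toy illustration of the hypothesised slip: paste the rankings-derived
# ratios one row out of step with an alphabetically sorted list of
# universities and each ratio attaches itself to the adjacent institution.
rankings_ratio = {
    "Dublin Institute of Technology": 18.0,  # hypothetical placeholder
    "Duke": 3.5,
    "Wollongong": 22.0,                      # hypothetical placeholder
    "Yale": 3.7,
}

ordered = sorted(rankings_ratio)  # alphabetical order, as in the directory

# Correct pairing: each university keeps its own ratio.
correct = [(u, rankings_ratio[u]) for u in ordered]

# Misaligned pairing: the ratio column slips by one row, so each university
# shows the ratio of the institution one row below it.
shifted = [(ordered[i], rankings_ratio[ordered[i + 1]]) for i in range(len(ordered) - 1)]

print("correct:", correct)
print("shifted:", shifted)  # Dublin IT now shows Duke's 3.5 and Wollongong shows Yale's 3.7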

Since this is the piece of information most likely to interest prospective undergraduates, it is not a trivial error.

Is this error any less serious than QS's getting the two North Carolina business schools mixed up?

Sunday, July 22, 2007

What is a Peer Review?

The third Asia Pacific Professional Leaders in Education conference was held in Hong Kong recently. The conference was organised by QS Quacquarelli Symonds (QS), consultants for the THES rankings, and a substantial part of the proceedings seems to have been concerned with international university rankings. There is a report by Karen Chapman in the Kuala Lumpur Star. There are hints that the methods of the THES-QS rankings may be revised and improved this year. The QS head of research, Ben Sowter, has referred to a revision of the questionnaires and to an audit and validation of information. Perhaps the deficiencies of previous rankings will be corrected.

There is also a reference to a presentation by John O'Leary, former editor of the THES, who is reported as saying that

“Peer review is the centrepiece of the rankings as that is the way academic value is measured.”

The second part of this sentence is correct, but conventional peer review in scientific and academic research is totally different from the survey that is the centrepiece of the THES rankings.

Peer review means that research is scrutinised by researchers who have been recognised as authorities in a narrowly defined research field. However, inclusion in the THES-QS survey of academic opinion has so far required no more expertise than the ability to sign on to the mailing list of World Scientific, a Singapore-based academic publisher. Those who are surveyed by QS are, in effect, allowed to give their opinions about subjects of which they may know absolutely nothing. Possibly, the reference to redesigning the survey means that it will become more like a genuine peer review.

It cannot be stressed too strongly or repeated too often that, on the basis of the information released so far by QS, the THES-QS survey is not a peer review.




The Consequences of Ranking

There is an excellent post by Eric Beerkens at Beerkens' Blog reporting on an article by Wendy Nelson Espeland and Michael Sauder in the American Journal of Sociology. The article, 'Rankings and reactivity: How public measures recreate social worlds', describes how the law school rankings of the US News and World Report affect the behaviour of students, university administrators and others.

Beerkens argues that international university rankings also have several consequences:

1. Rankings affect external audiences. Trivial differences between institutions may lead to large differences in the quality and quantity of applicants.

2. Rankings may amplify differences in reputation. If researchers or administrators are asked to assess universities of which they have no knowledge, they are likely to rely on the results of previous rankings.

3. Resources such as grants may be distributed on the basis of rankings.

4. Universities will give up objectives that are not measured in the rankings and try to become more like the institutions that achieve high scores.

Saturday, July 21, 2007

Blog on University Rankings

There is a Spanish-language blog on university rankings and other academic matters by Alejandro Pisanty that is well worth looking at.

Tuesday, July 17, 2007

Somebody Else Has Noticed

Matt Rayner has posted an interesting question on the QS topuniversities site. He has noticed that in the Guide to the World's Top Universities, published by QS, Cambridge is supposed to have a student faculty ratio of 18.9 and a score of 64 for this part of the 2006 World Rankings, while Glasgow, with an almost identical ratio of 18.8, gets a score of 35.

As already noted, this anomaly is not confined to Cambridge and Glasgow. The student faculty ratios provided in the data about individual universities in the Guide are completely different from those given in the rankings.

There is in fact no significant relationship between the two sets of data, as a quick correlation run in SPSS will show.
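One way to see how badly the two figures fit together: suppose, purely for illustration, that the 2006 faculty-student score simply scaled the best ratio in the rankings (3.5, for Duke, as listed in the earlier post) by 100 divided by the score. That scaling rule is an assumption, not something QS has published, but under it the scores Matt quotes cannot correspond to the ratios printed in the Guide, and two nearly identical ratios certainly cannot map to scores of 64 and 35.

# Back-of-the-envelope check. ASSUMPTION (for illustration only): the 2006
# faculty-student score scales the best ratio (3.5) by 100/score, so the
# ratio implied by a score is 3.5 * 100 / score.
BEST_RATIO = 3.5  # most favourable ratio in the 2006 rankings data

def implied_ratio(score: float) -> float:
    return BEST_RATIO * 100.0 / score

# Figures quoted in Matt Rayner's question: (name, ratio printed in the Guide, score)
for name, guide_ratio, score in [("Cambridge", 18.9, 64), ("Glasgow", 18.8, 35)]:
    print(f"{name}: Guide prints {guide_ratio}, but a score of {score} "
          f"implies roughly {implied_ratio(score):.1f}")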

It will be even more interesting to see when and how QS reply to Matt's question.