Thursday, December 21, 2006

Reply to Open Letter

John O'Leary, editor of the THES, has replied to my open letter:


Dear Mr Holmes

Thank you for your email about our world rankings. As you have raised a number of detailed points, I have forwarded it to QS for their comments. I will get back to you as soon as those have arrived but I suspect that may be early in the New Year, since a lot of UK companies have an extended Christmas break.

Best wishes

John O'Leary
Editor
The Times Higher Education Supplement


If nothing else, perhaps we will find out how long an extended Christmas break is.



Saturday, December 16, 2006

Open Letter to the Times Higher Education Supplement

This letter has been sent to THES

Dear John O’Leary
The Times Higher Education Supplement (THES) world university rankings have acquired remarkable influence in a very short period. It has, for example, become very common for institutions to include their ranks in advertising or on web sites. It is also likely that many decisions to apply for university courses are now based on these rankings.

Furthermore, careers of prominent administrators have suffered or have been endangered because of a fall in the rankings. A recent example is that of the president of Yonsei University, Korea, who has been criticised for the decline of that university in the THES rankings compared to Korea University (1) although it still does better on the Shanghai Jiao Tong University index (2). Ironically, the President of Korea University seems to have got into trouble for trying too hard and has been attacked for changes designed to promote the international standing, and therefore the position in the rankings, of the university. (3) Another case is the Vice-Chancellor of Universiti Malaya, Malaysia, whose departure is widely believed to have been linked to a fall in the rankings between 2004 and 2005, which turned out to be the result of the rectification of a research error.

In many countries, administrative decisions and policies are shaped by the perception of their potential effect on places in the rankings. Universities are stepping up efforts to recruit international students or to pressure staff to produce more citable research. Also, ranking scores are used as ammunition for or against administrative reforms. Recently, we saw a claim that Oxford’s performance renders any proposed administrative change unnecessary (4).

It would then be unfortunate for THES to produce data that is in any way misleading, incomplete or affected by errors. I note that the publishers of the forthcoming book that will include data on 500+ universities include a comment by Gordon Gee, Chancellor of Vanderbilt University, that the THES rankings are “the gold standard” of university evaluation (5). I also note that on the website of your consultants, QS Quacquarelli Symonds, readers are told that your index is the best (6).

It is therefore very desirable that the THES rankings should be as valid and as reliable as possible and that they should adhere to standard social science research procedures. We should not expect errors that affect the standing of institutions and mislead students, teachers, researchers, administrators and the general public.

I would therefore like to ask a few questions concerning three components of the rankings that add up to 65% of the overall evaluation.

Faculty-student ratio
In 2005 there were a number of obvious, although apparently universally ignored, errors in the faculty-student ratio section. These include ascribing inflated faculty numbers to Ecole Polytechnique in Paris, Ecole Normale Superieure in Paris, Ecole Polytechnique Federale in Lausanne, Peking (Beijing) University and Duke University, USA. Thus, Ecole Polytechnique was reported on the site of QS Quacquarelli Symonds (7), your consultants, to have 1,900 faculty and 2,468 students, a ratio of 1.30 students per faculty; Ecole Normale Superieure 900 faculty and 1,800 students, a ratio of two students per faculty; Ecole Polytechnique Federale 3,210 faculty and 6,530 students, a ratio of 2.03; Peking University 15,558 faculty and 76,572 students, a ratio of 4.92; and Duke 6,244 faculty and 12,223 students, a ratio of 1.96.
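(For readers who want to check the arithmetic, here is a minimal Python sketch that reproduces the ratios above from QS's own reported figures; the code is purely illustrative.)

# Students per faculty member, from the 2005 figures reported on the QS site.
reported_2005 = {
    "Ecole Polytechnique, Paris": (1900, 2468),
    "Ecole Normale Superieure, Paris": (900, 1800),
    "Ecole Polytechnique Federale, Lausanne": (3210, 6530),
    "Peking (Beijing) University": (15558, 76572),
    "Duke University": (6244, 12223),
}
for name, (faculty, students) in reported_2005.items():
    print(f"{name}: {students / faculty:.2f} students per faculty member")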

In 2006 the worst errors seem to have been corrected although I have not noticed any acknowledgement that any error had occurred or explanation that dramatic fluctuations in the faculty-student ratio or the overall score were not the result of any achievement or failing on the part of the universities concerned.

However, there still appear to be problems. I will deal with the case of Duke University, which this year is supposed to have the best score for faculty-student ratio. In 2005 Duke, according to the QS Topgraduates site, had, as I have just noted, 6,244 faculty and 12,223 students, giving it a ratio of about one faculty member to two students. This is quite implausible and most probably resulted from a data entry error, with an assistant or intern confusing the number of undergraduates listed on the Duke site, 6,244 in the fall of 2005, with the number of faculty. (8)

This year the data provided are not so implausible but they are still highly problematical. In 2006 Duke, according to QS, has 11,106 students but the Duke site refers to 13,088. True, the site may be in need of updating but it is difficult to believe that a university could reduce its total enrollment by about a sixth in the space of a year. Also, the QS site would have us believe that in 2006 Duke has 3,192 faculty members. But the Duke site refers to 1,595 tenure and tenure track faculty. Even if you count other faculty, including research professors, clinical professors and medical associates, the total of 2,518 is still much less than the QS figure. I cannot see how QS could arrive at such a low figure for students and such a high figure for faculty. Counting part timers would not make up the difference, even if this were a legitimate procedure, since, according to the US News & World Report (America’s Best Colleges 2007 Edition), only three percent of Duke faculty are part time. My incredulity is increased by the surprise expressed by a senior Duke administrator (9) and by Duke's being surpassed by several other US institutions on this measure, according to the USNWR.

There are of course genuine problems about how to calculate this measure, including the question of part-time and temporary staff, visiting professors, research staff and so on. However, it is rather difficult to see how any consistently applied conventions could have produced your data for Duke.

I am afraid that I cannot help but wonder whether what happened was that data for 2005 and 2006 were entered in adjacent rows in a database for all three years and that the top score of 100 for Ecole Polytechnique in 2005 was entered into the data for Duke in 2006 – Duke was immediately below the Ecole in the 2005 rankings – and the numbers of faculty and students worked out backwards. I hope that this is not the case.

-- Could you please indicate the procedures that were employed for counting part-timers, visiting lecturers, research faculty and so on?
-- Could you also indicate when, how and from whom the figures for faculty and students at Duke were obtained?
-- I would like to point out that if the faculty-student ratio for Duke is incorrect then so are all the scores for this component, since the scores are indexed against the top scorer, and therefore all the overall scores. Also, if the ratio for Duke is based on an incorrect figure for faculty, then Duke’s score for citations per faculty is incorrect. If the Duke score does turn out to be incorrect would you consider recalculating the rankings and issuing a revised and corrected version?
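To illustrate why the whole column depends on the top scorer, here is a minimal sketch of the indexing logic as I understand it (a simple linear benchmark in which the best ratio receives 100; this is my reading of the published tables, not a confirmed description of your method):

def faculty_student_score(own_ratio, best_ratio):
    # Indexed against the top scorer: the university with the lowest
    # students-per-faculty ratio gets 100, everyone else proportionally less.
    return 100 * best_ratio / own_ratio

# A university with five students per faculty member scores differently
# depending on whether the benchmark is Duke's reported 3.48 or Yale's 3.74.
print(faculty_student_score(5.0, 3.48))   # about 69.6
print(faculty_student_score(5.0, 3.74))   # about 74.8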


International faculty
This year the university with the top score for international faculty is Macquarie, in Australia. On this measure it has made a giant leap forward from 55 to 100 (10).

This is not, I admit, totally unbelievable. THES has noted that in 2004 and 2005 it was not possible to get data for Australian universities about international faculty. The figures for Australian universities for these years therefore simply represent an estimate for Australian universities as a whole, with every Australian university getting the same, or almost the same, score. This year the scores are different, suggesting that data has now been obtained for specific universities.

I would like to digress a little here. On the QS Topgraduate website the data for 2005 gives the number of international faculty at each Australian university. I suspect that most visitors to the site would assume that these represent authentic data and not an estimate derived from applying a percentage to the total number of faculty. The failure to indicate that these data are estimates is perhaps a little misleading.

Also, I note that in the 2005 rankings the international faculty score for the Australian National University is 52, for Monash 54, for Curtin University of Technology 54 and for the University of Technology Sydney 33. For the other thirteen Australian and New Zealand universities it is 53. It is most unlikely that if data for these four universities were not estimates they would all differ from the general Australasian score by just one digit. It is likely then that in four out of seventeen cases there have been data entry errors or rounding errors. This suggests that it is possible that there have been other errors, perhaps more serious. The probability that errors have occurred is also increased by the claim, uncorrected for several weeks at the time of writing, on the QS Topuniversities site that in 2006 190,000 e-mails were sent out for the peer review.

This year the Australian and New Zealand universities have different scores for international faculty. I am wondering how they were obtained. I have spent several hours scouring the Internet, including annual reports and academic papers, but have been unable to find any information about the numbers of international faculty in any Australian university.

-- Can you please describe how you obtained this information? Was it from verifiable administrative or government sources? It is crucially important that the information for Macquarie is correct because if not then, once again, all the scores for this section are wrong.

Peer Review
This is not really a peer review in the conventional academic sense but I will use the term to avoid distracting arguments. My first concern with this section is that the results are wildly at variance with data that you yourselves have provided and with data from other sources. East Asian and Australian and some European universities do spectacularly better on the peer review, either overall or in specific disciplinary groups, than they do on any other criterion. I shall, first of all, look at Peking University (which you usually call Beijing University) and the Australian National University (ANU).

According to your rankings, Peking is in 2006 the 14th best university in the world (11). It is 11th on the general peer review, which according to your consultants explicitly assesses research accomplishment, and 12th for science, 20th for technology, 8th for biomedicine, 17th for social science and 10th for arts and humanities.

This is impressive, all the more so because it appears to be contradicted by the data provided by THES itself. On citations per paper Peking is 77th for science and 76th for technology. This measure is an indicator of how a research paper is regarded by other researchers. One that is frequently cited has aroused the interest of other researchers. It is difficult to see how Peking University could be so highly regarded when its research has such a modest impact. For biomedicine and social sciences Peking did not even do enough research for the citations to be counted.

If we compare overall research achievements with the peer review we find some extraordinary contrasts. Peking does much better on the peer review than the California Institute of Technology (Caltech), with a score of 70 against Caltech's 53, but for citations per faculty Peking’s score is only 2 compared to Caltech's 100.

We find similar contrasts when we look at ANU. It was 16th overall and had an outstanding score on the peer review, ranking 7th on this criterion. It was also 16th for science, 24th for technology, 26th for biomedicine, 6th for social science and 6th for arts and humanities.

However, the scores for citations per paper are distinctly less impressive. On this measure, ANU ranks 35th for science, 62nd for technology and 56th for social science. It does not produce enough research to be counted for biomedicine.

Like Peking, ANU does much better than Caltech on the peer review, with a score of 72 to Caltech's 53, but its research record is much less distinguished, with a score of 13.

I should also like to look at the relative position of Cambridge and Harvard. According to the peer review Cambridge is more highly regarded than Harvard. Not only that, but its advantage increased appreciably in 2006. But Cambridge lags behind Harvard on other criteria, in particular citations per faculty and citations per paper in specific disciplinary groups. Cambridge is also decidedly inferior to Harvard and a few other US universities on most components of the Shanghai Jiao Tong index (12).

How can a university that has such an outstanding reputation perform so consistently less well on every other measure? Moreover, how can its reputation improve so dramatically in the course of two years?

I see no alternative but to conclude that much of the remarkable performance of Peking University, ANU and Cambridge is nothing more than an artifact of the research design. If you assign one third of your survey to Europe and one third to Asia on economic rather than academic grounds and then allow or encourage respondents to nominate universities in those areas, then you are going to have large numbers of universities nominated simply because they are the best of a mediocre bunch. Is ANU really the sixth best university in the world for social science and Peking the tenth best for arts and humanities, or is it just that there are so few competitors in those disciplines in their regions?

There may be more. The performance on the peer review of Australian and Chinese universities suggests that a disproportionate number of e-mails were sent to and received from these places even within the Asia-Pacific region. The remarkable improvement of Cambridge between 2004 and 2006 also suggests that a disproportionate number of responses were received from Europe or the UK in 2006 compared to 2005 and 2004.

Perhaps there are other explanations for the discrepancy between the peer review scores for these universities and their performance on other measures. One is that citation counts favour English speaking researchers and universities but the peer review does not. This might explain the scores of Peking University but not Cambridge and ANU. Perhaps, Cambridge has a fine reputation based on past glories but this would not apply to ANU and why should there be such a wave of nostalgia sweeping the academic world between 2004 and 2006? Perhaps citation counts favour the natural sciences and do not reflect accomplishments in the humanities but the imbalances here seem to apply across the board in all disciplines.

There are also references to some very suspicious procedures. These include soliciting more responses in order to get more universities from certain areas in 2004. In 2006, there is a reference to weighting responses from certain regions. Also puzzling is the remarkable closing of the gap between high and low scoring institutions between 2004 and 2005. Thus in 2004 the mean score for the peer review of all universities in the top 200 was 105.69 compared to a top score of 665, while in 2005 it was 32.82 compared to a top score of 100.
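The puzzle can be stated numerically: a purely linear rescaling to a maximum of 100 preserves the ratio of the mean to the top score, which is not what your figures show. A brief illustrative calculation:

mean_2004, top_2004 = 105.69, 665.0
mean_2005, top_2005 = 32.82, 100.0

print(mean_2004 / top_2004)           # about 0.16 of the top score in 2004
print(mean_2005 / top_2005)           # about 0.33 of the top score in 2005

# A simple linear rescaling of the 2004 scores to a maximum of 100 would have
# produced a mean of about 15.9, not 32.8, so some other transformation was used.
print(100 * mean_2004 / top_2004)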

I would therefore like to ask these questions.

-- Can you indicate the university affiliation of your respondents in 2004, 2005 and 2006?
-- What was the exact question asked in each year?
-- How exactly were the respondents selected?
-- Were any precautions taken to ensure that those and only those to whom it was sent completed the survey?
-- How do you explain the general inflation of peer review scores between 2004 and 2005?
-- What exactly was the weighting given to certain regions in 2006 and to whom exactly was it given?
-- Would you consider publishing raw data to show the number of nominations that universities received from outside their regions and therefore the genuine extent of their international reputations?

The reputation of the THES rankings would be enormously increased if there were satisfactory answers to these questions. Even if errors have occurred it would surely be to THES’s long-term advantage to admit and to correct them.

Yours sincerely
Richard Holmes
Malaysia


Notes
(1) http://times.hankooki.com/lpage/nation/200611/kt2006110620382111990.htm
(2) http://ed.sjtu.edu.cn/ranking.htm
(3) http://english.chosun.com/w21data/html/news/200611/200611150020.html
(4) http://www.timesonline.co.uk/article/0,,3284-2452314,00.html
(5) http://www.blackwellpublishing.com/more_reviews.asp?ref=9781405163125&site=1
(6) http://www.topuniversities.com/worlduniversityrankings/2006/faqs/
(7) http://www.topgraduate.com
(8) http://www.dukenews.duke.edu/resources/quickfacts.html
(9) http://www.dukechronicle.com
(10) http://www.thes.co.uk
(11) http://www.thes.co.uk
(12) http://ed.sjtu.edu.cn/ranking.htm








Analysis of the THES Rankings

The Australian has an outstanding short article on the THES rankings by Professor Simon Marginson of the University of Melbourne here.

An extract is provided below.

Methodologically, the index is open to some criticism. It is not specified who is surveyed or what questions are asked. The student internationalisation indicator rewards entrepreneurial volume building but not the quality of student demand or the quality of programs or services. Teaching quality cannot be adequately assessed using student-staff ratios. Research plays a lesser role in this index than in most understandings of the role of universities. The THES rankings reward a university's marketing division better than its researchers. Further, the THES index is too easily open to manipulation. By changing the recipients of the two surveys, or the way the survey results are factored in, the results can be shifted markedly.

This illustrates the more general point that rankings frame competitive market standing as much or more than they reflect it.

Results have been highly volatile. There have been many sharp rises and falls, especially in the second half of the THES top 200 where small differences in metrics can generate large rankings effects. Fudan in China has oscillated between 72 and 195, RMIT in Australia between 55 and 146. In the US Emory has risen from 173 to 56 and Purdue fell from 59 to 127. They must have let their THES subscriptions lapse.

Second, the British universities do too well in the THES table. They have done better each successive year. This year Cambridge and Oxford suddenly improved their performance despite Oxford's present problems. The British have two of the THES top three and Cambridge has almost closed the gap on Harvard. Yet the British universities are manifestly under-funded and the Harvard faculty is cited at 3 1/2 times the rate of its British counterparts. It does not add up. But the point is that it depends on who fills out the reputational survey and how each survey return is weighted.

Third, the performance of the Australian universities is also inflated.

Despite a relatively poor citation rate and moderate staffing ratios they do exceptionally well in the reputational academic survey and internationalisation indicators, especially that for students. My university, Melbourne, has been ranked by the 3703 academic peers surveyed by the THES at the same level as Yale and ahead of Princeton, Caltech, Chicago, Penn and University of California, Los Angeles. That's very generous to us but I do not believe it.

Friday, December 08, 2006

Comments on the THES Rankings

Professor Thomas W. Simon, a Fulbright scholar, has this to say about the THES rankings on a Malaysian blog.


Legitimate appearance
Quacquarelli Symonds (QS), the corporate engine behind the THES rankings, sells consulting for business education, job fairs, and other services. Its entrepreneurial, award-winning CEO, Nunzio Quacquarelli, noted that Asian universities hardly ever made the grade in previous ranking studies. Curiously, on the last QS rankings, Asian universities improved dramatically. They accounted for nearly 30% of the top 100 slots. Did QS choose its methodology to reflect some desired trends?

QS has managed to persuade academics to accept a business ploy as an academic report. Both The Economist and THES provide a cover by giving the rankings the appearance of legitimacy. Shockingly, many educators have rushed to get tickets and to perform for the magic show, yet it has no more credibility than a fairy tale and as much objectivity as a ranking of the world's most scenic spots.

Major flaws
It would be difficult to list all the flaws, but let us consider a few critical problems. Scientists expose their data to public criticism whereas QS has not published its raw data or detailed methodologies. The survey cleverly misuses scholarly terms to describe its methodology. THES calls its opinion poll (of over 2,000 academic experts) “peer review”, but an opinion poll of 2,000 self-proclaimed academic experts bears no resemblance to scholars submitting their research to those with expertise in a narrow field. Further, from one year to the next, QS unapologetically changed the weighting (from 50% peer review to 40%) and added new categories (employer surveys, 10%).

He concludes:

Concerned individuals should expose the QS/THES scandal. Some scholars have done scathing critiques of pieces of the survey; however, they should now launch an all-out attack. Fortunately, this survey is only a few years old. Let us stop it before it grows to maturity and finds a safe niche in an increasingly commercialised global world.

WE ARE CASH COWS

This is the title of a letter to the New Straits Times of Kuala Lumpur (print edition 1/12/06) from David Chan Weng Chong, a student at a university in Sydney. He writes:

In recent years, the Australian government has slashed funding to universities. University education has suffered because of the lack of resources and expertise.

Because of this drying up of government funds, almost all universities are using foreign students as cash cows. Foreign students contribute about A$6 billion (RM17 billion) each year to the economy.

We hope the Australian government will ensure we get our money's worth by increasing spending in higher education.

One wonders how many Asian students have been attracted to Australian universities by excellent scores on the THES rankings and how many are as disillusioned as David.

Perhaps internationalisation is not always an indicator of excellence. It could be nothing more than a response to financial problems.

Wednesday, November 15, 2006

Oxford and Cambridge and THES

George Best used to tell a story about being asked by a waiter in a five star hotel "where did it all go wrong?" Best, who was signing a bill for champagne, with his winnings from the casino scattered around the room and Miss World waiting for him, remarked "he must have seen something that I'd missed". It looks like The Times Higher Education Supplement (THES) has seen something about Oxford and Cambridge that everybody else has missed.

The THES world university rankings have proved to be extraordinarily influential. One example is criticism of the president of Yonsei University in Korea for his institution's poor performance on the rankings.

Another is the belief of Terence Kealey, Vice-Chancellor of the University of Buckingham, that since Oxford and Cambridge are the best universities in the world apart from Harvard, according to THES, they are in no need of reform. He argues that Oxford should reject proposals for administrative change since Oxford and Cambridge are the best run universities in the world.


Oxford's corporate madness
by Terence Kealey
THIS YEAR'S rankings of world universities reveal that Oxford is one of the three best in the world. The other two are Cambridge and Harvard.

It is obvious that Oxford and Cambridge are the best managed universities in the world when you consider that Harvard has endowments of $25 billion (many times more than Oxford or Cambridge's); that Princeton, Yale and Stanford also have vast endowments; and that US universities can charge huge fees which British universities are forbidden to do by law.

Kealey evidently has complete confidence in the reliability of the THES rankings and if they were indeed reliable then he would have a very good point. But if they are not then the rankings would have done an immense disservice to British higher education by promoting a false sense of superiority leading to a rejection of attempts that might reverse a steady decline.

Let's have a look at the THES rankings. On most components the record of Oxford and Cambridge is undistinguished. For international faculty, international students, and faculty-student ratio they have scores of 54 and 58, 39 and 43, 61 and 64 respectively, compared to top scores of 100, although these scores are perhaps not very significant and are easily manipulated. More telling is the score for citations per faculty, a measure of the significance of the institutions' research output. Here, the record is rather miserable with Oxford and Cambridge coming behind many institutions including the Indian Institutes of Technology, Helsinki and the University of Massachusetts at Amherst.

I would be the first to admit that the latter measure has to be taken with a pinch of salt. Science and technology are more citation-heavy than the humanities and social sciences, which would help to explain why the Indian Institutes of Technology apparently do so well, but the figures are still suggestive.

Of course, this measure also depends on the number of faculty as well as the number of citations. If there has been an error in counting the number of faculty then the citations per faculty score would also be affected. I am wondering whether something like that happened to the Indian Institutes. THES refers to the Institutes, in the plural, but its consultants, QS, refer to a single institute and provide a link to the institute in Delhi. Can we be confident that QS did not count the faculty for Delhi but citations for all the IITs?

When we look at the data provided by THES for citations per paper, a measure of research quality, we find that the record of Oxford and Cambridge is equally unremarkable. For Science, Oxford is 20th and Cambridge 19th. For technology, Oxford is 11th and Cambridge 29th. For biomedicine, Oxford is seventh and Cambridge ninth. For Social Sciences, Oxford is 19th and Cambridge is 22nd.

The comparative performance of Oxford and Cambridge is just as unimpressive when we look at the data provided by Shanghai Jiao Tong University. Cambridge is 2nd on alumni and awards, getting credit for Nobel prizes awarded early in the last century, but 15th for highly cited researchers, 6th for publications in Nature and Science and 12th for citations in the Science Citation Index and Social Science Citation Index. Oxford is 9th for awards, 20th for highly cited researchers, 7th for papers in Nature and Science and 13th for citations in the SCI and SSCI.

So how did Oxford and Cambridge do so well on the overall THES rankings? It was solely because of the peer review. Even on the recruiter ratings they were only 8th and 6th. On the peer review, Cambridge was first and Oxford second. How is this possible? How can reviewers give such a high rating to universities whose research in most fields is inferior in quality to that of a dozen or more US universities, and which now produce relatively few Nobel prize winners, citations or papers in leading journals?

Perhaps, like the waiter in the George Best story, the THES reviewers have seen something that everybody else has missed.

Or is it simply a product of poor research design? I suspect that QS sent out a disproportionate number of surveys to European researchers and also to those in East Asia and Australia. We know that respondents were invited to pick universities in geographical areas with which they were familiar. This in itself is enough to render the peer review invalid as a survey of international academic opinion even if we could be sure that an appropriate selection procedure was used.
It is surely time for THES to provide more information about how the peer review was conducted.

Wednesday, November 08, 2006

Is Korea University's Rise the Result of a THES Error?

It looks as though the Times Higher Education Supplement (THES) world university rankings will soon claim another victim. The president of Yonsei University, Republic of Korea, Jung Chang-young, has been criticised by his faculty. According to the Korea Times:


The school has been downgraded on recent college evaluation charts, and Jung has been held responsible for the downturn.

Associations of professors and alumni, as well as many students, are questioning the president’s leadership. Jung’s poor results on the survey were highlighted by the school’s rival, Korea University, steadily climbing the global education ranks.

When Jung took the position in 2004, he stressed the importance of the international competitiveness of the university. “Domestic college ranking is meaningless, and I will foster the school as a world-renowned university,” he said during his inauguration speech.

However, the school has moved in the other direction. Yonsei University ranked behind Korea University in this year’s JoongAng Daily September college evaluation for the first time since 1994. While its rival university saw its global rank jump from 184th last year to 150th this year, Yonsei University failed to have its name listed among the world’s top 200 universities in the ranking by the London-based The Times.


It is rather interesting that Yonsei University was ahead of Korea University (KU) between 1995 and 2005 on a local evaluation while lagging far behind on the THES rankings in 2005 and 2006. In 2005 Korea was 184th in the THES rankings while Yonsei was 467th. This year Korea University was 150th and Yonsei 486th.

It is also strange that Yonsei does quite a bit better than KU on most parts of the Shanghai Jiao Tong rankings. Both get zero for alumni and awards but Yonsei does better on highly cited researchers (7.7 and 0), articles in Nature and Science (8.7 and 1.5), and the Science Citation Index (46.4 and 42.6), while being slightly behind on size (16 and 16.6). Overall, Yonsei is in the 201-300 band and KU in the 301-400.

So why has KU done so well on the THES rankings while Yonsei is languishing almost at the bottom? It is not because of research. KU gets a score of precisely 1 in both 2005 and 2006 and, anyway, Yonsei does better for research on the Shanghai index. One obvious contribution to KU’s outstanding performance is the faculty-student ratio. KU had a score of 15 on this measure in 2005 and of 55 in 2006, when the top scoring university is supposedly Duke with a ratio of 3.48.

According to QS Quacquarelli Symonds, the consultants who prepared the data for THES, Korea University has 4,407 faculty and 28,042 students, giving a ratio of 6.36.

There is something very odd about this. Just last month the president of KU said that his university had 28 students per faculty and was trying hard to get the ratio down to 12 students per faculty. Didn’t he know that, according to THES and QS, KU had done that already?

President Euh also noted that in order for Korean universities to provide better education and stand higher among the world universities' ranking, the faculty-student ratio should improve from the current 1:28 (in the case of Korea University) to 1:12, the level of other OECD member nations. He insisted that in order for this to be realized, government support for overall higher education should be increased from the current level of 0.5% of GNP to 1% of GNP to be in line with other OECD nations.

It is very unlikely that the president of KU has made a mistake. The World of Learning 2003 indicates that KU had 21,685 students and 627 full-time teachers. That gives us a ratio of 1:35, suggesting that KU has been making steady progress in this respect over the last few years.

How then did QS arrive at the remarkable ratio of 6.36? I could not find any data on the KU web site. The number of students on the QS site, however, seems reasonable, suggesting a substantial but plausible increase over the last few years, but 4,407 faculty seems quite unrealistic. Where did this figure come from? Whatever the answer, it is quite clear that KU’s faculty-student score is grossly inflated and so therefore is its total score. If Duke had a score of 100 and a ratio of 3.48 (see archives) then KU’s score for faculty-student ratio should have been, by my calculation, 12 and not 55. Therefore its overall score, after calibrating against top-scoring Harvard’s, would have been 20.3 and not 32.2. This would have left KU well outside the top 200.
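For what it is worth, the calculation behind those figures looks roughly like this. It assumes a simple linear benchmark against the top scorer and a 20 per cent weight for the faculty-student component, which is my reading of the published methodology rather than anything QS has confirmed:

# KU's faculty-student score, re-benchmarked against Duke's reported ratio of 3.48
# but using the 1:28 ratio given by KU's own president.
duke_ratio = 3.48
ku_ratio = 28.0
ku_fs_score = 100 * duke_ratio / ku_ratio
print(round(ku_fs_score))              # about 12, not the published 55

# On a 20% weighting (my assumption), replacing 55 with 12 strips roughly 8.5
# weighted points from KU's raw total before the final rescaling against Harvard,
# which is how an overall score of 32.2 can fall to somewhere near 20.
print(0.2 * (55 - ku_fs_score))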

Incidentally, Yonsei’s faculty-student ratio, according to QS and its own web site, is 34.25, quite close to KU's self-admitted ratio.

It appears that the criticism directed at Yonsei’s president is perhaps misplaced since KU’s position in the rankings is the result of a QS error. Without that error, Yonsei might have been ahead of KU or at least not too far behind.

Wednesday, November 01, 2006

The Best Universities for Biomedicine?

THES has published a list of the world's 100 best universities for biomedicine. This is based, like the other subject rankings, on peer review. Here are the top twenty according to the THES reviewers.

1. Cambridge
2. Harvard
3. Oxford
4. Imperial College London
5. Stanford
6. Johns Hopkins
7. Melbourne
8. Beijing (Peking)
9. National University of Singapore
10. Berkeley
11. Yale
12. Tokyo
13. MIT
14. University of California at San Diego
15. Edinburgh
16. University College London
17. Kyoto
18. Toronto
19. Monash
20. Sydney

Here are the top twenty according to citations per paper, a measure of the quality of research.


1. MIT
2. Caltech
3. Princeton
4. Berkeley
5. Stanford
6. Harvard
7. Oxford
8. University of California at San Diego
9. Cambridge
10. Yale
11. Washington (St Louis)
12. Johns Hopkins
13. ETH Zurich
14. Duke
15. Dundee
16. University of Washington
17. Chicago
18. Vanderbilt
19. Columbia
20. UCLA

The two lists are quite different. Here are the positions according to citations per paper of some of the universities that were in the top twenty for the peer review:

University College London -- 24
Edinburgh -- 25
Imperial College London -- 28
Tokyo -- 34
Toronto -- 35
Kyoto -- 36
Monash -- 52
Melbourne -- 58
Sydney -- 67
National University of Singapore -- 74
Beijing -- 78=

Again, there is a consistent pattern of British, Australian and East Asia universities doing dramatically better in the peer review than in citations per paper. How did they acquire such a remarkable reputation if their research was of such undistinguished quality? Did they acquire a reputation for producing a large quantity of mediocre research?

Notice that Cambridge with the top score for peer review produces research of a quality inferior to, according to QS's data, eight universities, seven of which are in the US and four in California.

There are also 23 universities that produced insufficient papers to be counted by the consultants. Thirteen are in Asia, five in Australia and New Zealand, four in Europe and one in the US. How did they acquire such a remarkable reputation while producing so little research? Was the little research they did of a high quality?

More on Methodology

Christopher Pandit has asked QS why the Universities of Essex and East Anglia and Royal Holloway are not included in the THES-QS top 500 universities. Check here and see if you are impressed by the answer.

I would like to ask why, if the State University of New York at Stony Brook is at number 165 in the rankings, the other three SUNY university centers at Albany, Binghamton and Buffalo cannot even get into the top 520.

THES and QS: Some Remarks on Methodology

The Times Higher Education Supplement (THES) has come out with their third international ranking of universities. The most important part is a peer review, with academics responding to a survey in which they are asked to nominate the top universities in subject areas and geographical regions.

QS Quacquarelli Symonds, THES's consultants, have published a brief description of how they did the peer review.

Here is what they have to say:

Peer Review: Over 190,000 academics were emailed a request to complete our online survey this year. Over 1600 responded - contributing to our response universe of 3,703 unique responses in the last three years. Previous respondents are given the opportunity to update their response.

Respondents are asked to identify both their subject area of expertise and their regional knowledge. They are then asked to select up to 30 institutions from their region(s) that they consider to be the best in their area(s) of expertise. There are at present approximately 540 institutions in the initial list. Responses are weighted by region to generate a peer review score for each of our principal subject areas which are:

Arts & Humanities
Engineering & IT
Life Sciences & Biomedicine
Natural Sciences
Social Sciences

The five scores by subject area are compiled into a single overall peer review score with an equal emphasis placed on each of the five areas.

The claim that QS sent e-mails to 190,000 academics is unbelievable and the correct number is surely 1,900. I have queried QS about this but so far there has been no response.

If these numbers are correct then it means that QS have probably achieved the lowest response rate in survey research history. If they sent e-mails to 1,900 academics and added a couple of zeros by mistake, we have to ask how many more mistakes they have made. Anyway, it will be interesting to see how QS responds to my question, if indeed they ever do.
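The arithmetic behind the response-rate point, for what it is worth (the 1,900 figure is, as I say, only my conjecture):

responses = 1600
print(100 * responses / 190000)   # about 0.8% - a startlingly low response rate
print(100 * responses / 1900)     # about 84% - startlingly high, if 1,900 is the true figure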

Combined with other snippets of information we can get some sort of picture of how QS proceeded with the peer review.

In 2004 they sent emails to academics containing a list of 300 universities, divided into subject and regional areas and 1,300 replied. Respondents were asked to pick up to 30 universities in the subjects and the geographical areas in which they felt they had expertise. They were allowed to add names to the lists.

In 2005, the 2004 reviewers were asked if they wanted to add to or subtract from their previous responses. Additional reviewers were sent e-mails so that the total was now 2,375.

In 2006 the 2004 and 2005 reviewers were asked whether they wanted to make changes. A further 1,900 (surely?) academics were sent forms and 1,600 returned them, making a total (after, presumably, some of the old reviewers did not reply) of 3,703 reviewers. With additions made in previous years, QS now has a list of 520 institutions.

I would like to make three points. Firstly, it seems that a lot depends on getting on to the original list of 300 universities in 2004. Once on, it seems that universities are not removed. If not included, it is possible that a university might be very good but never quite good enough to get a "write-in vote". So how was the original list chosen?

Secondly, the subject areas in three cases are different from those indicated by THES. QS has Natural Sciences, Engineering and IT, and Life Sciences and Biomedicine, while THES has Science, Technology and Biomedicine. This is a bit sloppy and maybe indicative of communication problems between THES and QS.

Thirdly, it is obvious that the review is of research quality -- QS explicitly says so -- and not of other things as some people have assumed.

Monday, October 30, 2006

More on the Duke and Beijing Scandals

Sorry, there's nothing here about lacrosse players or exotic dancers. This is about how Duke supposedly has the best faculty-student ratio of any university in the world and how Beijing (Peking) university is supposedly the top university in Asia.

In previous posts I reported how Duke, Beijing and the Ecole Polytechnique in Paris (see archives) had apparently been overrated in the Times Higher Education Supplement (THES) world university rankings because of errors in counting the number of faculty and students.

QS Quacquarelli Symonds, the consultants who conducted the collection of data for THES, have now provided links to data for each of the universities in the top 200 in the latest THES ranking.
Although some errors have been corrected, it seems that new ones have been committed.

First of all, this year Duke was supposed to be top for faculty-student ratio. The QS site gives a figure of 3,192 faculty and 11,106 students, that is, a ratio of 3.48, which is roughly what I suspected it might be for this year. Second-placed Yale, with 3,063 faculty and 11,441 students according to QS, had a ratio of 3.74 and Beijing (Peking University -- congratulations to QS for getting the name right this year even if THES did not), with 5,381 faculty and 26,912 students, a ratio of 5.01.

But are QS's figures accurate? First of all, looking at the Duke site, there are 13,088 students. So how did QS manage to reduce the number by nearly 2,000? No doubt, the site needs updating but universities do not lose nearly a sixth of their students in a year.

Next, the Duke site lists 1,595 tenure and tenure track faculty and 925 non-teaching faculty. Even counting the latter we are still far short of QS's 3,192.

If we count only teaching faculty the Duke faculty-student ratio would be 8.21 students per faculty. Counting non-teaching faculty would produce a ratio of 5.20, still a long way behind Yale.

It is clear then from data provided by QS themselves that Duke should not be in first place in this part of the rankings. This means that all the data for this component are wrong since all universities are benchmarked against the top scorer in each category and, therefore, that all the overall scores are wrong. Probably not by very much, but QS does claim to be the best.
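A short sketch of the knock-on effect, assuming the simple linear benchmark described above (score = 100 x best ratio / own ratio, which is my reading of the method rather than QS's stated formula). The corrected Duke figures are taken from Duke's own web site as quoted:

ratios = {
    "Yale": 11441 / 3063,              # 3.74, from QS's figures
    "Beijing (Peking)": 26912 / 5381,  # 5.00, from QS's figures
    "Duke (corrected)": 13088 / 2520,  # 5.19, students and all faculty from Duke's own site
}
best = min(ratios.values())            # Yale would now hold the best ratio
for name, ratio in ratios.items():
    print(f"{name}: ratio {ratio:.2f}, benchmarked score {100 * best / ratio:.0f}")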

Where did the incorrect figures come from? Perhaps Duke gave QS a different set of figures from those on its web site. If so, this surely is deliberate deception. But I doubt if that is what happened for the Duke administration seems to have been as surprised as anyone by the THES rankings.

I am wondering if this has something to do with Duke in 2005 being just below the Ecole Polytechnique Paris in the overall ranking. The Ecole was top scorer for the faculty-student component in 2005. Is it possible that the data for 2006 were entered into a form that also included the 2005 data and that the Ecole's 100 for 2005 was typed in for Duke for 2006? Is it possible then that the data for numbers of students and faculty were constructed to fit the score of 100 for Duke?

As for Beijing (Peking), QS this year provides a drastically reduced number of faculty and students, 5,381 and 26,912 respectively. But even these figures seem to be wrong. The Peking University site indicates 4,574 faculty. So where did the other 800 plus come from?

The number of students provided by QS is roughly equal to the number of undergraduates, master's and doctoral students listed on Peking University's site. It presumably excludes night school and correspondence students and international students. It could perhaps be argued that the first two groups should not be counted but this would be a valid argument only if the university itself did not count them in the total number of students and if their teachers were not counted in the number of faculty. It still seems that the most accurate ratio would be about 10 students per faculty and that Beijing's overall position is much too high.

Finally, QS has now produced much more realistic data for the number of faculty at the Ecole Polytechnique Paris, Ecole Normale Superieure Paris and Ecole Polytechnique Federale Lausanne. Presumably, this year part-time staff were not counted.

Saturday, October 28, 2006

The Best Universities for Technology?

The Times Higher Education Supplement (THES) have published a list of the supposed top 100 universities in the world in the field of technology. The list purports to be based on opinion of experts in the field. However, like the ranking for science, it cannot be considered valid. First, let us compare the top 20 universities according to peer review and then the top 20 according to the data provided by THES for citations per paper, a reasonable measure of the quality of research.

First, the peer review:

1. MIT
2. Berkeley
3. Indian Institutes of Technology (all of them)
4. Imperial College London
5. Stanford
6. Cambridge
7. Tokyo
8. National University of Singapore
9. Caltech
10. Carnegie-Mellon
11. Oxford
12. ETH Zurich
13. Delft University of Technology
14. Tsing Hua
15. Nanyang Technological University
16. Melbourne
17. Hong Kong University of Science and Technology
18. Tokyo Institute of Technology
19. New South Wales
20. Beijing (Peking University)

Now, the top twenty ranked according to citations per paper:

1. Caltech
2. Harvard
3. Yale
4. Stanford
5. Berkeley
6. University of California at Santa Barbara
7. Princeton
8. Technical University of Denmark
9. University of California at San Diego
10. MIT
11. Oxford
12. University of Pennsylvania
13. Pennsylvania State University
14. Cornell
15. Johns Hopkins
16. Boston
17. Northwestern
18. Columbia
19. Washington (St. Louis)
20. Technion (Israel)

Notice that the Indian Institutes of Technology, Tokyo, National University of Singapore, Nanyang Technological University, Tsing Hua, Melbourne, New South Wales and Beijing are not ranked in the top 20 according to quality of published research. Admittedly, it is possible that in this field a substantial amount of research consists of unpublished reports for state organizations or private companies but this would surely be more likely to affect American rather than Asian or Australian universities.

Looking a bit more closely at some of the universities in the top twenty for technology according to the peer review, we find that, when ranked for citations per paper, Tokyo is in 59th place, National University of Singapore 70th, Tsing Hua 86th, Indian Institutes of Technology 88th, Melbourne 35th, New South Wales 71st, and Beijing 76th. Even Cambridge, sixth in the peer review, falls to 29th.

Again, there are a large number of institutions that did not even produce enough papers to be worth counting, raising the question of how they could be sufficiently well known for there to be peers to vote for them. This is the list:

Indian Institutes of Technology
Korean Advanced Institute of Science and Technology
Tokyo Institute of Technology
Auckland
Royal Institute of Technology Sweden
Indian Institutes of Management
Queensland University of Technology
Adelaide
Sydney Technological University
Chulalongkorn
RMIT
Fudan
Nanjing

Once again there is a very clear pattern of the peer review massively favoring Asian and Australasian universities. Once again, I can see no other explanation than an overrepresentation of these regions, and a somewhat less glaring one of Europe, in the survey of peers combined with questions that allow or encourage respondents to nominate universities from their own regions or countries.

It is also rather disturbing that once again Cambridge does so much better on the peer review than on citations. Is it possible that THES and QS are manipulating the peer review to create an artificial race for supremacy – “Best of British Closing in on Uncle Sam’s finest”. Would it be cynical to suspect that next year Cambridge and Harvard will be in a circulation-boosting race for the number one position?

According to citations per paper, Harvard was 4th for science, 2nd for technology and 6th for biomedicine, while Cambridge was 19th, 29th and 9th.

For the peer review, Cambridge was 1st for science, 6th for technology and 1st for biomedicine. Harvard was 4th, 23rd and 2nd.

Overall, there is no significant relationship between the peer review and research quality as measured by citations per paper. The correlation between the two is .169, which is statistically insignificant. For the few Asian universities that produced enough research to be counted, the correlation is .009, effectively no better than chance.
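For anyone who wants to check the significance claim, the usual t-test for a Pearson correlation is easy to reproduce. The sample size below is my rough guess at the number of universities with both a peer review and a citations-per-paper score, so treat the output as illustrative:

import math

def t_statistic(r, n):
    # t statistic for testing whether a Pearson correlation differs from zero.
    return r * math.sqrt((n - 2) / (1 - r * r))

# With roughly 80 paired observations (an assumed figure) the 5% two-tailed
# critical value is about 1.99, so neither correlation comes close to significance.
print(t_statistic(0.169, 80))   # about 1.5
print(t_statistic(0.009, 80))   # about 0.08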

At the risk of being boringly repetitive, it is becoming clearer and clearer that the THES rankings, especially the peer review component, are devoid of validity.

Thursday, October 26, 2006

The World’s Best Science Universities?

The Times Higher Education Supplement (THES) has now started to publish lists of the world’s top 100 universities in five disciplinary areas. The first to appear were those for science and technology.

THES publishes scores for its peer review of the disciplinary areas, conducted among people described variously as “research-active academics” or just as “smart people”, along with the number of citations per paper. The ranking is, however, based solely on the peer review, although a careless reader might conclude that the citations were considered as well.

We should ask for a moment what a peer review, essentially a measure of a university’s reputation, can accomplish that an analysis of citations cannot. A citation is basically an indication that another researcher has found something of interest in a paper. The number of citations of a paper indicates how much interest a paper has aroused among the community of researchers. It coincides closely with the overall quality of research, although occasionally a paper may attract attention because there is something very wrong with it.

Citations then are a good measure of a university’s reputation for research. For one thing, votes are weighted. A researcher who publishes a great deal has more votes and his or her opinion will have more weight than someone who publishes nothing. There are abuses of course. Some researchers are rather too fond of citing themselves and journals have been known to ask authors to cite papers by other researchers whose work they have published but such practices do not make a substantial difference.

In providing the number of citations per paper as well as the score for peer review, THES and their consultants, QS Quacquarelli Symonds, have really blown their feet off. If the scores for peer review and the citations are radically different it almost certainly means that there is something wrong with the review. The scores are in fact very different and there is something very wrong with the review.

This post will review the THES rankings for science.

Here are the top twenty universities for the peer review in science:

1. Cambridge
2. Oxford
3. Berkeley
4. Harvard
5. MIT
6. Princeton
7. Stanford
8. Caltech
9. Imperial College, London
10. Tokyo
11. ETH Zurich
12. Beijing (Peking University)
13. Kyoto
14. Yale
15. Cornell
16. Australian National University
17. Ecole Normale Superieure, Paris
18. Chicago
19. Lomonosov Moscow State University
20. Toronto


And here are the top 20 universities ranked by citations per paper:


1. Caltech
2. Princeton
3. Chicago
4. Harvard
5. Johns Hopkins
6. Carnegie-Mellon
7. MIT
8. Berkeley
9. Stanford
10. Yale
11. University of California at Santa Barbara
12. University of Pennsylvania
13. Washington (Saint Louis?)
14. Columbia
15. Brown
16. University of California at San Diego
17. UCLA
18. Edinburgh
19. Cambridge
20. Oxford


The most obvious thing about the second list is that it is overwhelmingly dominated by American universities, with the top 17 places going to the US. Cambridge and Oxford, first and second in the peer review, are 19th and 20th by this measure. Imperial College London, Beijing, Tokyo, Kyoto and the Australian National University are in the top 20 for peer review but not for citations.

Some of the differences are truly extraordinary. Beijing is 12th for peer review and 77th for citations, Kyoto 13th and 57th, the Australian National University 16th and 35th, Ecole Normale Superieure, Paris 17th and 37th, Lomonosov Moscow State University 18th and 82nd, the National University of Singapore 25th and 75th, Sydney 35th and 70th, and Toronto 20th and 38th. Bear in mind that there are almost certainly several universities that were not in the peer review top 100 but have more citations per paper than some of these institutions.

It is no use saying that citations are biased against researchers who do not publish in English. For better or worse, English is the lingua franca of the natural sciences and technology, and researchers and universities that do not publish extensively in English will simply not be noticed by other academics. Also, a bias towards English does not explain the comparatively poor citations performance of Sydney, ANU and the National University of Singapore alongside their high rankings on the peer review.

Furthermore, there are some places for which no citation score is given. Presumably, they did not produce enough papers to be even considered. But if they produce so few papers, how could they become so widely known that their peers would place them in the world’s top 100? These universities are:

Indian Institutes of Technology (all of them)
Monash
Auckland
Universiti Kebangsaan Malaysia
Fudan
Warwick
Tokyo Institute of Technology
Hong Kong University of Science and Technology
Hong Kong
St. Petersburg
Adelaide
Korean Advanced Institute of Science and Technology
New York University
King’s College London
Nanyang Technological University
Vienna Technical University
Trinity College Dublin
Universiti Malaya
Waterloo

These universities are overwhelmingly East Asian, Australian and European. None of them appear to be small, specialized universities that might produce a small amount of high quality research.

The peer review and citations per paper thus give a totally different picture. The first suggests that Asian and European universities are challenging those of the United States and that Oxford and Cambridge are the best in the world. The second indicates that the quality of research of American universities is still unchallenged, that the record of Oxford and Cambridge is undistinguished and that East Asian and Australian universities have a long way to go before being considered world class in any meaningful sense of the word.

A further indication of how different the two lists are can be found by calculating their correlation. Overall, the correlation is, as expected, weak (.390). For Asia-Pacific (.217) and for Europe (.341) it is even weaker and statistically insignificant. If we exclude Australia from the list of Asia-Pacific universities and just consider the remaining 25, there is almost no association at all between the two measures. The correlation is .099, for practical purposes no better than chance. Whatever criteria the peer reviewers used to pick Asian universities, quality of research could not have been among them.
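Anyone with the published tables can replicate this sort of calculation. A minimal sketch of the method (the two lists below are placeholders showing the format, not the actual THES scores):

import math

def pearson(xs, ys):
    # Plain Pearson product-moment correlation of two equal-length lists of scores.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Pair each university's peer review score with its citations-per-paper score
# from the published top 100 table, then call pearson() on the two lists.
peer_review = [100, 97, 92, 88, 70]            # placeholder values only
citations_per_paper = [55, 60, 90, 40, 30]     # placeholder values only
print(pearson(peer_review, citations_per_paper))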

So has the THES peer review found out something that is not apparent from other measures? Is it possible that academics around the world are aware of research programmes that have yet to produce large numbers of citations? This, frankly, is quite implausible since it would require that nascent research projects have an uncanny tendency to concentrate in Europe, East Asia and Australia.

There seems to be no other explanation for the overrepresentation of Europe, East Asia and Australia in the science top 100 than a combination of a sampling procedure that included a disproportionate number of respondents from these regions, questions that allowed or encouraged respondents to nominate universities in their own regions or even countries, and a disproportionate distribution of forms to certain countries within regions.

I am not sure whether this is the result of extreme methodological naivety, with THES and QS thinking that they are performing some sort of global affirmative action by rigging the vote in favour of East Asia and Europe or whether it is a cynical attempt to curry favour with those regions that are involved in the management education business or are in the forefront of globalization.

Whatever is going on, the peer review gives a very false picture of current research performance in science. If people are to apply for universities or accept jobs or award grants in the belief that Beijing is better at scientific research than Yale, ANU than Chicago, Lomonosov than UCLA, Tsinghua than Johns Hopkins then they are going to make bad decisions.

If this is unfair then there is no reason why THES or QS should not indicate the following:

The universities and institutions to which the peer review forms were sent.
The precise questions that were asked.
The number of nominations received by universities from outside their own regions and countries.
The response rate.
The criteria by which respondents were chosen.

Until THES and/or QS do this, we can only assume that the rankings are an example of how almost any result can be produced with the appropriate, or inappropriate, research design.

Thursday, October 19, 2006

Why Beijing University is not the Best in Asia

According to the Times Higher Education Supplement's (THES) recent ranking of world universities, Beijing University (the correct name is actually Peking University, but never mind) is the best university in Asia and 14th in the world.

Unfortunately, it is not. It is just another mistake by QS Quacquarelli Symonds, THES's consultants. Unless, of course, they have information that has been kept secret from everybody else.

In 2005, Beijing University was, according to THES, ranked 15th in the world. This was partly due to remarkably high scores for the peer review and the recruiter ratings. It also did quite well on the faculty/student section with a score of 26. In that year the top scorer on that part of the ranking was the Ecole Polytechnique in Paris, whose score of 100 appears to represent a ratio of 1.3 students per faculty. It seems that QS derived this ratio from their datafile for the Ecole, although they also give other figures in another part of their page for this institution. Comparing the Ecole's score to others confirms that this was the data used by QS. It is also clear that for this measure QS was counting all students, not just undergraduates, although there is perhaps some inconsistency about the inclusion of non-teaching faculty. It seems, then, that according to QS, Beijing University had a ratio of five students per faculty.
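The back-calculation behind that figure can be sketched as follows, assuming (as the pattern of scores suggests) that the faculty-student score is proportional to faculty per student, with the Ecole Polytechnique's 2005 datafile figures defining the top score of 100. This is an illustration, not QS's published method.

```python
# Sketch of the back-calculation, under the linear-score assumption stated above.
top_students_per_faculty = 2468 / 1900   # Ecole Polytechnique, QS 2005 datafile: ~1.30

def implied_students_per_faculty(score, top_ratio=top_students_per_faculty):
    """Convert a 0-100 faculty-student score back into students per faculty."""
    faculty_per_student_top = 1 / top_ratio          # ~0.77 faculty per student
    faculty_per_student = faculty_per_student_top * score / 100
    return 1 / faculty_per_student

# Beijing's 2005 score of 26 implies roughly five students per faculty
print(round(implied_students_per_faculty(26), 1))
```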

Here is the page from QS's web site with the 2005 data for Beijing University.



Datafile

Demographic
No. of faculty: 15,558
No. of international faculty: 617
No. of students: 76,572
No. of international students: 2,015
No. of undergraduates: 15,182
No. of international undergraduates: 1,025
No. of postgraduates: 13,763
No. of international postgrads: 308

Financial
Average undergrad course fees: USD$ 3,700
Average postgrad course fees: USD$ 4,700
Annual library spend: USD$ 72,000

Source: World University Research (QS & Times Higher Education Supplement)




Notice that it indicates that there are 76,572 students and 15,558 faculty, which would give a ratio of 4.92, very close to 5. We can therefore safely assume that this is where QS got the faculty/student ratio.

But there is something wrong with the data. QS gives a total of 76,572 students, but there are only 15,182 undergraduates and 13,763 postgraduates, a total of 28,945. So where did the 46,000-plus extra students come from? When there is such a glaring discrepancy in a text it usually means that two different sources were used and imperfectly synthesised. If we look at Beijing University's web site (it calls itself Peking University), we find this data.





Faculty

At present, Peking University has over 4,574 teachers, 2,691 of whom are full or associate professors. Among the teachers are not only a number of senior professors of high academic standing and world fame, but also a host of creative young and middle-aged experts who have been working at the forefront of teaching and research.


And this.





At present, Peking University has 46,074 students:
15,001 undergraduates
8,119 master candidates
3,956 doctoral candidates
18,998 candidates for correspondence courses or study at the night school
1,776 international students from 62 countries and regions


QS's data were used for the 2005 ranking exercise. The information on Peking University's web site has no doubt been updated since then. However, it looks as though QS obtained the numbers of undergraduates and postgraduates from Peking University's site, although they left out the 18,998 correspondence and night school students that the university counted.

According to the university's own definitions of students and teachers, the faculty-student ratio would be 10.07. Excluding correspondence and night school students but counting international students gives us a ratio of 6.31. The former ratio is probably the correct one to use. THES's definition of a student is someone "studying towards degrees or substantial qualifications" and there is no indication that these students are studying for anything less. Therefore, it seems that the correct ratio for Beijing University should be around 10 students per faculty.
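The arithmetic, using the figures quoted above from the university's own site, is straightforward:

```python
# Ratios derived from Peking University's own published figures, as quoted above.
teachers = 4_574

undergrad  = 15_001
masters    = 8_119
doctoral   = 3_956
night_corr = 18_998   # correspondence courses / night school
internat   = 1_776    # international students

all_students = 46_074                       # the university's headline total
print(round(all_students / teachers, 2))    # ~10.07 students per faculty

regular_only = undergrad + masters + doctoral + internat   # excludes correspondence/night school
print(round(regular_only / teachers, 2))    # ~6.31 students per faculty
```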

Looking at the reference work The World of Learning 2003 (2002), we find that Beijing University had 55,000 students and 4,537 teachers. Probably the data reported to this reference included several thousand students from research institutes or branch campuses, or was simply an overstatement. The number of teachers is, however, almost identical. Whatever the exact numbers, it is clear that QS made a serious mistake and that the score for faculty/student ratio in 2005 was therefore incorrect. Since it appears that a similar or identical ratio was used for this year's ranking as well, the ratio for 2006 is also wrong.

We still have the problem of where QS came up with the figure of 76,572 students and 15,558 faculty on its web site. It did not come from Peking University.

Or maybe it did. This is from a brief history of Peking University on its site.





After the readjustment, Peking University became a university comprising departments of both liberal Arts and Sciences and emphasizing the teaching and research of basic sciences. By 1962, the total enrollment grew to 10,671 undergraduate students and 280 graduate students. Since 1949, Peking University has trained for the country 73,000 undergraduates and specialty students, 10,000 postgraduates and 20,000 adult-education students, and many of them have become the backbones on all fronts in China.

There has evidently been a massive expansion in the number of postgraduate students recently. The figure of 73,000 undergraduates who have ever completed studies at Peking University is close enough to QS's total number of students to arouse suspicion that somebody may have interpreted data for degrees awarded as data for current enrollment.

There is another possible source. There are several specialist universities in the Beijing area, which is one reason why it is rather silly of THES and QS to refer to Peking University as Beijing University. These include the Beijing Foreign Studies University, the Beijing University of Aeronautics and Astronautics, the Beijing University of Business and Technology and so on.

The sum total of students at these institutions, according to the World of Learning, is 75,746 students and 12,826 teachers. The first figure is very close to QS's and the second somewhat so. A bit of double counting somewhere might have brought the number of teachers closer to that given by QS. I am inclined to suspect that the figures resulted from an enquiry that was interpreted as a request for information about the specialist Beijing universities.

So what about 2006? Wherever the numbers came from, this much is clear. Using Yale as a benchmark for 2006 (there are problems, already discussed, with top-scoring Duke), it would appear that the ratio of 5 students per faculty was used in 2006 as well as in 2005. But according to the data on the university's web site, the ratio should be around 10.

What this means is that Beijing University should have got a score for faculty/student ratio of 31 and not 69. I calculate that Beijing University's overall score, applying THES's weighting, dividing by Harvard's total score and then multiplying by 100, should be 57.3. This would put Beijing University in 28th position and not 14th. It would also mean that Beijing University is not the best in the Asia-Pacific region: that honour belongs to the Australian National University. Nor is it the best in Asia: that would be the National University of Singapore. Tokyo and Melbourne are also ahead of Beijing University.
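For readers who want to check the method, here is a minimal sketch of the kind of recalculation involved. The weights are THES's published 2005 weights; the component scores below, apart from the corrected faculty-student score of 31 argued for above, are placeholders rather than the actual published figures, so the printed number will not reproduce the 57.3 quoted here.

```python
# Sketch of the overall-score recalculation: apply the published weights,
# then rescale so that the top university's (Harvard's) weighted total is 100.
WEIGHTS = {
    "peer_review": 0.40,
    "recruiter": 0.10,
    "faculty_student": 0.20,
    "intl_faculty": 0.05,
    "intl_students": 0.05,
    "citations_per_faculty": 0.20,
}

def weighted_total(scores):
    return sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)

def overall(scores, top_scores):
    """Rescale so that the top university's weighted total becomes 100."""
    return 100 * weighted_total(scores) / weighted_total(top_scores)

# Placeholder component scores, for illustration only
harvard = {"peer_review": 100, "recruiter": 100, "faculty_student": 80,
           "intl_faculty": 30, "intl_students": 60, "citations_per_faculty": 100}
beijing = {"peer_review": 90, "recruiter": 75, "faculty_student": 31,   # 31 = corrected score
           "intl_faculty": 20, "intl_students": 15, "citations_per_faculty": 10}

print(round(overall(beijing, harvard), 1))
```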

If there is a mistake in these calculations please tell me and I will correct it.

This is, of course, assuming that the data for these universities are correct. We have already noted that the score for Duke is too high, but if there are no further errors (a very big assumption, I admit) then Beijing should have a much lower position than the one assigned by QS. If QS have information from Beijing University that has not been divulged to the public, then they have a duty to let us know.


In a little while I shall write to THES and see what happens.




Thursday, October 12, 2006

No Cover-up at Duke

The Duke administration has commented on the university's performance in the latest THES university rankings. While welcoming Duke's continued high placing, senior administrator John Burness has expressed surprise at Duke's score of 100 for faculty-student ratio. He notes that several universities are recorded as doing better on this measure.

The full story is here.

I wonder what THES and QS are going to say.

Wednesday, October 11, 2006

The Names of Universities

This year the THES online edition referred to "University of Kebangsaan Malaysia" although QS used the correct form, "Universiti Kebangsaan Malaysia". In 2005, however, QS had "University Putra Malaysia" and "University Sains Malaysia". In 2004 THES had "Sains Malaysia University".

If they can't get things like this right, what else have they got wrong?

What happened to Macquarie?

In 2005 Macquarie University in Sydney, Australia, was ranked 67th on the THES university rankings. In 2006 it slipped a bit to 82nd place, even though its score rose on every section except for citations per faculty. This is not in itself a problem since it is possible that changes at the top could cause nearly everybody to go up if 100 represented a smaller number.

What is surprising is that Macquarie got a score of 100 for the international faculty component, compared with 53 in 2005.

We should point out that in 2005 Australian universities received the same or almost the same score for this component. Thirteen were given a score of 53, two a score of 54, one a score of 52 and one 33. I would guess that the four different scores are probably data entry errors since they all differ from the majority by a single digit. This makes it more plausible that some of the more surprising changes in 2006 may have resulted from similar errors.

THES indicated that in 2005 in some cases they had to make an estimate for some data so presumably the 2005 figures represent an estimate of the proportion of international faculty for the whole of Australia.

The doubling of Macquarie's score may not then be so implausible. Perhaps the 2005 figure was actually far too low. Also, it is possible Macquarie's 100 may represent a lower figure than the 100 that was given to the City University of Hong Kong in 2005.

Even so, it does look like a surprisingly high score for Macquarie. In 2005 the City University of Hong Kong had 55.47% international faculty, which was converted into a score of 100. In 2006 City University had a score of 75, so if its percentage of international faculty had remained the same, Macquarie's score of 100 would represent 73.96%. In 2006, second-ranking LSE had, according to its web site, 44% international faculty, which corresponds to its score of 89.

This means that Macquarie would have an international faculty of somewhere between 48 and 74 per cent. The first number might be plausible but the second seems far too high. But does Macquarie really have such a proportion? I have spent a few hours trawling the internet trying to find anything about the proportion of international faculty at Macquarie or other Australian universities, without success.
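A minimal sketch of that estimate, assuming the international faculty score is simply proportional to the percentage of international staff, so that one benchmark university's known percentage and score fix what a score of 100 represents:

```python
# Back-of-the-envelope estimate of what Macquarie's score of 100 could represent,
# under the linear-score assumption stated above.
def pct_for_score_100(benchmark_pct, benchmark_score):
    return benchmark_pct * 100 / benchmark_score

# City University of Hong Kong: 55.47% international faculty, score of 75 in 2006
print(round(pct_for_score_100(55.47, 75), 1))   # ~74%, the upper end of the range above

# LSE: 44% international faculty (its own figure), score of 89 in 2006
print(round(pct_for_score_100(44, 89), 1))      # ~49%, the lower end of the range above
```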

Is there anybody out there who knows anything about the proportion of international faculty at Macquarie?

Is it possible that we have another data entry error here that has affected all the scores in that section?

Tuesday, October 10, 2006

Congratulations!

Congratulations to THES on their discovery of a new city, Kebangsaan, in Malaysia. The online edition refers to "University of Kebangsaan Malaysia".

Monday, October 09, 2006

Letter sent to THES

The Editor
Times Higher Education Supplement

Dear Sir
I would like to draw your attention to an apparent error in the latest world university rankings, which may have rendered them invalid.

The faculty/student score indicates that Duke University was the highest scorer in this category. Therefore, its score was converted to 100 and the other scores adjusted accordingly. It is, in fact, difficult to see how Duke could really be the top scorer in this category. According to US News and World Report's America's Best Colleges (2007 edition), Duke has eight students per faculty, a ratio that is confirmed by Duke's own data, which refer to a total of 13,088 students, of whom 6,244 were undergraduates, and 1,595 tenure and tenure-track faculty, producing a ratio of 8.2. However, USNWR lists 13 US national universities with better ratios than this: Caltech at three to one, Princeton at five to one, and so on. There are also probably a few non-US institutions that do better.

It is, of course, possible that QS, your consultants, and USNWR adopted different conventions with regard to including or excluding adjunct staff, researchers without teaching responsibilities, medical school staff and so on. This still does not get over the problem. QS appears to have constructed this component of the rankings largely from data entered into files that are available on its web site. These are linked to the 2005 rankings and some have no doubt been revised this year, but in general one would expect them to be similar to whatever data were used for 2006.
Thus, QS refers to 2,172 students and 441 staff at Caltech, a ratio of 4.93 students per faculty, and 4,633 students and 664 faculty at Rice, a ratio of 6.98. If QS were using these data -- and a quick survey of this category suggests that they were -- then Caltech's ratio of 4.93 and score of 67 would turn Duke's score of 100 into a ratio of 3.3. Rice's ratio of 6.98 and score of 50 would produce a ratio of 3.5 for Duke. In general it appears that QS gave Duke a student to faculty ratio of somewhere between 3 and 3.5.
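The back-calculation, for anyone who wants to verify it, is simple; this sketch assumes, as the scores suggest, that the score is proportional to faculty per student, so that a benchmark's ratio and score imply what the top score of 100 represents.

```python
# What Duke's score of 100 would imply, benchmarked against Caltech and Rice
# (QS datafile figures quoted above), under the linear-score assumption.
def implied_ratio_for_top(benchmark_students_per_faculty, benchmark_score, top_score=100):
    # If the score is proportional to faculty per student, students per faculty
    # scales the other way round.
    return benchmark_students_per_faculty * benchmark_score / top_score

print(round(implied_ratio_for_top(2172 / 441, 67), 2))   # Caltech benchmark -> Duke at ~3.3
print(round(implied_ratio_for_top(4633 / 664, 50), 2))   # Rice benchmark    -> Duke at ~3.5
```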

So Duke, according to QS, would have a ratio of somewhere between 3 and 3.5 students per faculty, which is far lower than any figure that can be derived from the university's current data.

The problem is aggravated by QS's data on Duke, which record a total of 12,223 students and 6,244 faculty. The latter figure is obviously far too high and is most probably a data entry error that occurred when someone transferred the figure of 6,244 undergraduates indicated on Duke's web site to the faculty section of QS's files. This results in a ratio of 1.96 students per faculty, which was probably the figure used by QS in the 2005 rankings. It is probably not, however, the number used in 2006. If it were, then the scores for other universities would have been very much lower; Rice, for example, would have had a score of around 30 rather than 50.

It is hard to see how QS came up with the ratio of 3.0 - 3.5 for Duke. It certainly does not come from any information that the university itself has provided. Perhaps it was just another data entry error that nobody noticed.

Anyway, there is no way in which the data can be manipulated, stretched or compressed to put Duke at the top of the faculty-student ratio component. That position probably belongs to Yale, which, according to Yale itself, QS and Wikipedia, has about three students per faculty. Therefore Yale's score, whatever the exact ratio it represents, should be 100 instead of 93, and the scores of everybody except Duke would have to go up accordingly. All the scores for this part need to be corrected, and so therefore do the total scores. You might argue that the changes are so small that they are not worth bothering about. At the top, maybe, but further down they might make quite a bit of difference. In any case, an attempt to rank world-class universities ought surely to be held to the highest methodological standards.

If you can provide a reasonable explanation for Duke's high score, such as the use of information withheld by Duke from the general public, I am sure that everybody would be glad to hear it. Otherwise, it might be a good idea to withdraw the rankings and republish them after they have been thoroughly checked.


Richard Holmes
Shah Alam
Malaysia

Saturday, October 07, 2006

Institute or Institutes Revisited

QS says Indian Institute of Management and of Technology (one of each). THES says Institutes (five of each).

So which is it?

Friday, October 06, 2006

Institute or Institutes?

In 2005, THES lumped all the Indian Institutes of Technology and Management (five of each, according to the World of Learning) together as single institutions. Now, according to the QS site, only one of each has been included in the rankings. In both cases the -s has been omitted.

This could be a simple error or maybe the error was in 2005. Consultants who can change the name of China's most venerable university or confuse undergraduate students with faculty are probably capable of anything.

But if there is really only one Institute of Technology and one Institute of Management this year, which one is it? And did QS ensure that they collected data from only one and not from all the institutes?

I wonder when we will know?

Comments on the THES Top 200 Universities

Looking at the list of 200 universities, the most striking thing is the remarkable change in the positions of many universities. The Sorbonne has risen from 305 to 200, Wollongong from 308 to 196, Aberdeen from 267 to 195, Tubingen from 260 to 170, Ulm in Germany from 240 to 158 and Queensland University of Technology from 192 to 118. Meanwhile, Purdue has fallen from 61 to 127, Helsinki from 62 to 116, and the Korean Advanced Institute of Science and Technology from 143 to 198.

Altogether, 41 universities went up or down 50 places or more.

This is, to say the least, very strange. If the scores for 2005 and 2006 were both valid, then there would be very little change. A change in policy on the admission of international students or recruitment of international faculty would take a few years to produce a change in overall numbers. A massive increase in research funding would take even longer to produce more research papers, let alone an increase in the citations of those papers.

Some changes may have occurred because of research errors or corrections of errors, "clarification of data" as THES likes to put it. In other cases, universities may have done a bit of rearranging of data about numbers of faculty and students.

However, most of the changes are probably the result of changes in the peer review score. Unless THES and QS are more forthcoming about how they conducted this survey, we can only assume that rises and falls on the peer review reflect nothing more than QS's distribution of their survey, with a lot of forms this year being sent to Britain, continental Europe and the US. This in turn is probably influenced by the international ebb and flow of MBA recruitment. However, we will have to wait until the thirteenth to be certain about this.


Here is something interesting from the Cambridge Evening News


CAMBRIDGE is the best university in the world, say academics.

It comes second in the Times Higher Educational Supplement's (THES) world ranking of universities - but made it into first place when given a score by academics.

Last year Cambridge came third in the overall rankings, 14 percentage points behind Harvard, but this year it is only narrowly beaten by the US university.

Oxford made third place and Yale fourth.

But academics said Cambridge was the best university, followed by Oxford and then Harvard.

John O'Leary, editor of The THES, said: "These results show academics think Cambridge is the world's best university, with Oxford close behind. On this measure they both come ahead of Harvard.

"In addition, Cambridge is popular with employers. Its score on quantitative criteria such as international appeal and staff/student ratio provides numerical corroboration of its excellence."

So, according to the academic peer review, which is 40% of the total score, Cambridge and Oxford both beat Harvard this year. In 2004, however, Cambridge was 19% behind Harvard and in 2005 4% behind. Oxford was 16% behind in 2004 and 7% behind in 2005. A rise of this magnitude in the popularity of Oxford and Cambridge over two years is simply not possible if the same reviewers were on the panel in these three years or if comparable, objectively selected respondents were polled. It would be possible, though, if QS sent a larger number of forms this year to UK universities, particularly to Oxbridge.

Thursday, October 05, 2006

Breaking News: THES Top 200 Revealed

The THES top 200 universities is now available on the QS website. http://www.topgraduate.com/universityrankings/thes_qs_world_university_rankings_2006/

More in a little while.

Preliminary observations are that there are some massive changes up and down that could only come from fluctuations in the peer review and the employers' ratings.

Meanwhile, Universiti Malaya has fallen a bit from 169 to 192 while Universiti Kebangsaan Malaysia has gone up from 289 to 185, something that certainly needs explaining.

Preview of the 2006 THES World University Rankings

THES has just released a preview of its 2006 university ranking exercise. See The Times Online. This includes the top 100 universities with their positions in 2005. Here are some initial observations.
  • There are, like last year, some dramatic changes. I counted over twenty universities that went up or down twenty places. That does not include any university that slipped out of the top 100 altogether. This is a bad sign. If the rankings were accurate in both 2005 and in 2006 there would be very little change from one year to another. Changes like this are only likely to occur if there has been a change in the scoring method, data collection or entry errors, corrections of errors or variations in survey bias.
  • The Ecole Polytechnique in Paris has fallen from tenth place to 37th. Very probably, this is because QS has corrected an error in the counting of faculty in 2005, but we will have to wait until October 13th to be sure.
  • Two Indian institutions or groups of institutions are in the top 100. The Times Online list refers to "Indian Inst of Tech" and "Indian Inst of Management", obscuring whether this refers to Institutes as in 2005 or Institute as in 2004. Whether singular or plural, a bit of digging needs to be done.
  • A number of Eastern US universities have done dramatically well. For example, Vanderbilt has risen from 114th to 53rd, Emory from 141st to 56th, Pittsburgh from 193rd to 86th and Dartmouth from 117th to 61st. Does this represent a genuine improvement or is it an artifact of the distribution of the peer review survey, with QS trying to rectify an earlier bias to the West coast?
  • There are a couple of cases of pairs of universities in the same city where one goes up and one goes down. See Munich University and the Technical University of Munich and the University of Lausanne and EPF Lausanne. Is it possible that in previous years these institutions got mixed up and have now been disentangled?
  • There are dramatic rises by a couple of New Zealand universities, Auckland and Otago, and some Asian universities, Tsing Hua, Osaka and Seoul National. Again, this is probably a result of the way the peer review was conducted.
  • Duke falls a little bit, suggesting that the errors in its 2005 score have not been corrected.
  • Overall, it looks like the rankings have changed largely because of the peer review. It is hard to see how an increase in the number of international students or faculty or citations of research papers that were started several years ago could produce such remarkable changes in a matter of months. This time the review, and perhaps the recruiter ratings also, seems to have worked to the advantage of British and Eastern US universities and a few select East Asian, Swiss and New Zealand institutions.

Sunday, October 01, 2006

Another Duke Scandal?

Part 2

It looks as though Duke University achieved its outstanding score on the Times Higher Education Supplement (THES) world university rankings in 2005 largely through another blunder by THES's consultants, QS. What is scandalous about this is that none of the statisticians, mathematicians and educational management experts at Duke noticed this or, if they did notice it, that they kept quiet about it.

Between 2004 and 2005 Duke went zooming up the THES rankings from 57th to 11th place. This would be truly remarkable if the THES rankings were at all accurate. It would mean that the university had in twelve months recruited hundreds of new faculty, multiplied the number of its international students, doubled its research output, convinced hundreds of academics and employers around the world of its superlative qualities, or some combination of the above. If this did not happen, it means that there must have been something wrong with THES's figures.

So how did Duke achieve its extraordinary rise? First, we had better point out that when you are constructing rankings based on thousands of students, hundreds of faculty, tens of thousands of research papers or the views of hundreds or thousands of reviewers, you are not going to get very much change from year to year if the ranking is reliable. This is why we can be fairly confident that the Shanghai Jiao Tong University index, limited though it is in many respects, is accurate. This, unfortunately, also makes it rather boring. Notice the near-total lack of interest in the latest edition of the index, which came out a few weeks ago. It is hard to get excited about Tokyo inching up one place to number 19. Wait another four decades and it will be challenging Harvard! Compare that with the catastrophic fall of Universiti Malaya between 2004 and 2005 on the THES index and the local media uproar that ensued. That was much more interesting. What a pity that it was all the result of a research error, or a "clarification of data".

Or look at what happened to Duke. Last September it crept up from number 32 to 31 on the Shanghai Jiao Tong ranking, reversing its fall to 32 in 2004 from 31 in 2003. Who is going to get excited about that?

Anyway, let's have a look at the THES rankings in detail. A brief summary, which could be passed over by those familiar with the rankings, is that in 2005 it gave a weighting of 40% to a peer review by "research-active academics", 10% to a rating by employers of "internationally mobile graduates", 20% to faculty-student ratio, 10% to proportion of international faculty and international students and 20% to the citations of research papers per faculty member.

I will just mention the peer review section and then go on to the faculty-student ratio where the real scandal can be found.

Peer Review
Duke got a score of 61 on the peer review compared to a top score (Berkeley) of 665 in 2004. This is equivalent to a score of 9.17 out of 100. In 2005 it got a score of 36 compared with the top score of 100 (Harvard), effectively almost quadrupling its score. To some extent, this is explained by the fact that everybody except Berkeley went up on this measure between 2004 and 2005. But Duke did much better than most. In 2004 Duke was noticeably below the mean score of the 164 universities that appeared in the top 200 in both years, but in 2005 it was slightly above the mean of these universities. Its position on this measure rose from 95th to 64th.
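The rescaling, for anyone checking the arithmetic, is simply a matter of putting both years on the same 0-100 scale by dividing by the top scorer:

```python
# Peer review scores rescaled against each year's top scorer, as described above.
score_2004, top_2004 = 61, 665    # Berkeley topped the 2004 peer review with 665
score_2005, top_2005 = 36, 100    # Harvard topped the 2005 peer review with 100

print(round(100 * score_2004 / top_2004, 2))   # 2004: ~9.17 out of 100
print(round(100 * score_2005 / top_2005, 2))   # 2005: 36.0 out of 100
```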

How is this possible? How could there be such a dramatic rise in the academic peers' opinions of Duke university? Remember that the reviewers of 2005 included those of 2004 so there must have been a very dramatic shift among the additional reviewers towards Duke in 2005.

A genuine change in opinion is thoroughly implausible. Two other explanations are more likely. One is that QS polled many more academics from the east coast or the south of the United States in 2005, perhaps because of a perceived bias towards California in 2004. The other is that the Duke academics invited to serve on the panel passed the survey to others who, in a spontaneous or organised fashion, returned a large number of responses to QS.

Faculty-Student Ratio
Next, take a look at the scores for faculty-student ratio. Duke did well in this category in 2004 with a score of 66. In that year the top scorer was the Ecole Normale Superieure (ENS) in Paris which, with 1,800 students and 900 faculty according to QS, would have had 0.50 faculty per student. Duke would therefore have had 0.33 faculty per student. This would be a very good ratio if true.

In 2005, the top scorer was the Ecole Polytechnique in Paris, which supposedly had 2,468 students and 1,900 faculty, or 0.77 faculty per student. ENS's score went down to 65, which is exactly what you would expect if its ratio remained unchanged at 0.5. Duke's score in 2005 was 56, which works out at 0.43 faculty per student.
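As a check on these conversions, here is the same linear-score assumption applied to the QS figures quoted above; again, this is an illustration of the apparent scaling, not QS's published method.

```python
# 2005 conversions under the assumption that the score is proportional to
# faculty per student, with the Ecole Polytechnique set to 100.
polytechnique = 1900 / 2468     # ~0.77 faculty per student, score 100 in 2005
ens           = 900 / 1800      # 0.50 faculty per student

print(round(100 * ens / polytechnique))     # ~65, matching ENS's 2005 score
print(round(polytechnique * 56 / 100, 2))   # Duke's score of 56 -> ~0.43 faculty per student
```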

Duke's faculty-student ratio, currently according to QS's web site, is 0.51, or 6,244 faculty divided by 12,223 students. This is fairly close to the 2005 figure indicated in the THES rankings. Here is the information provided by QS:


Datafile

Demographic
No. of faculty: 6,244
No. of international faculty: 825
No. of students: 12,223

However, Duke's web site currently gives a figure of 13,088 students and 1,595 tenure and tenure track faculty and 923 others (professors of the practice, lecturers, research professors and medical associates). This makes a total of 2,518 tenure, tenure track and other regular faculty.

So where on Earth did QS find another 3,700 plus faculty?

Take a look at this data provided by Duke. Notice anything?

STUDENTS
Enrollment (full-time), Fall 2005
Undergraduate: 6,244


So what happened is that someone at QS confused the number of undergraduate students with the number of faculty and nobody at QS noticed and nobody at THES noticed. Perhaps nobody at Duke noticed either, although I find that hard to believe. This resulted in a grossly inflated score for Duke on the faculty-student ratio component and contributed to its undeserved ascent in the rankings.

Anyway, it's time to stop and post this, and then work out what Duke's score really should have been.

Saturday, September 30, 2006

Another Duke Scandal?

Part 1

This post was inspired by a juxtaposition of two documents. One was an article written for a journal published by Universiti Teknologi MARA (UiTM) in Malaysia. I had to edit this article and it was not always a pleasant experience. It was full of grammatical errors such as omission of articles, sentences without subjects and so on. But at least I could usually understand what the author was talking about.

Taking a break, I skimmed some higher education sites and, via a blog by Professor K.C. Johnson, an historian at Brooklyn College, New York, arrived at a piece by Karla Holloway, a professor at Duke University (North Carolina), in an online journal, the Scholar and Feminist Online, published by the Barnard Center for Research on Women. The center is run by Barnard College, an independent college affiliated to Columbia University.

A bit of background first. In March of this year, an exotic dancer, hired to perform at a party held by Duke University lacrosse players, claimed that she had been raped. Some of the players certainly seem to have been rude and loutish, but the accusation looks more dubious every day. The alleged incident did, however, give rise to some soul searching by the university administration. Committees were formed, one of which, dealing with race, was headed by Professor Holloway. Here is a link to Professor Holloway's article. Take a look at it for a few minutes.

Professor Holloway, after some remarks about the affair that have been criticized in quite a few places, describes how tired she is after sitting on the committee and that she is thinking of quitting.

I write these thoughts, considering what it would mean to resign from the committee charged with managing the post culture of the Lacrosse team's assault to the character of the university. My decision is fraught with a personal history that has made me understand the deep ambiguity in loving and caring for someone who has committed an egregious wrong. It is complicated with an administrative history that has made me appreciate the frailties of faculty and students and how a university's conduct toward those who have abused its privileges as well as protected them is burdened with legal residue, as well as personal empathy. My decision has vacillated between the guilt over my worry that if not me, which other body like mine will be pulled into this service? Who do I render vulnerable if I lose my courage to stay this course? On the other side is my increasingly desperate need to run for cover, to vacate the battlefield, and to seek personal shelter. It does feel like a battle. So when asked to provide the labor, once again, for the aftermath of a conduct that visibly associates me, in terms of race and gender, with the imbalance of power, especially without an appreciable notice of this as the contestatory space that women and black folk are asked to inhabit, I find myself preoccupied with a decision on whether or not to demur from this association in an effort, however feeble, to protect the vulnerability that is inherent to this assigned and necessary meditative role.

Until we recognize that sports reinforces exactly those behaviors of entitlement which have been and can be so abusive to women and girls and those "othered" by their sports' history of membership, the bodies who will bear evidence and consequence of the field's conduct will remain, after the fact of the matter, laboring to retrieve the lofty goals of education, to elevate the character of the place, to restore a space where they can do the work they came to the university to accomplish. However, as long as the bodies of women and minorities are evidence as well as restitution, the troubled terrain we labor over is as much a battlefield as it is a sports arena. At this moment, I have little appreciable sense of difference between the requisite conduct and consequence of either space.

Getting to the point, I am fairly confident that no journal published by Universiti Teknologi MARA would ever accept anything as impenetrable as this. Even though most people writing for Malaysian academic journals are not native speakers of English and many do not have doctorates, they do not write stuff as reader-unfriendly as this. I must add that, being allergic to committees, I am much more sympathetic to Professor Holloway than some other commentators.

If Professor Holloway were a graduate student who had been reading too much post-modern criticism and French philosophy, perhaps she could be excused. But she is nothing less than the William R. Kenan Professor of English at Duke. We surely expect clearer and less “reader-othering” writing than this from a professor, especially a professor of English. And what sort of comments does she write on her student essays?

Nor is this a rough draft that could be polished later. The article, we are told, has been read generously and carefully by Robyn Wieger, the Margaret Taylor Smith Director of Women’s Studies and Professor in Women’s Studies and Literature at Duke, and William Chafe, the Alice Mary Baldwin Professor and Dean of History at Duke. It also got an "intuitive and tremendously helpful" review from Janet Jakobsen, Full Professor and Director of the Barnard Centre for Research on Women.

Duke is, according to the Times Higher Education Supplement (THES), one of the best universities in the world. This is not entirely a result of THES's erratic scoring – a bit more about that later – for the Shanghai Jiao Tong ranking also gives Duke a high rating. UiTM, however, is not in THES's top 200 or Shanghai Jiao Tong's top 500, even though it tries to maintain a certain minimal standard of communicative competence in the academic journals it publishes.

So how can Duke give professorships to people who write like that, and how can Barnard College publish that sort of journal?

Is it possible to introduce a ranking system that will give some credit to universities that refrain from publishing stuff like this? I am wondering whether somebody could do something like Alan Sokal's famous Social Text hoax, sending pages of pretentious nonsense to a cultural studies journal, which had no qualms about accepting them, but this time sending a piece to journals published by universities at different levels of the global hierarchy. My hypothesis is that universities in countries like Malaysia might be better able to see through this sort of thing than some of the academic superstars. An NDI (nonsense detection index) might then be incorporated into a ranking system and, I suspect, might work to the disadvantage of places like Duke.

Another idea that might be more immediately practical is inspired by Professor Johnson's observation that Professor Holloway has a very light teaching load. She does in fact, according to the Duke website, spend five hours and fifty minutes a week in the classroom and, presumably, an equivalent time marking, counselling and so on. This is about a half, maybe even a third or a quarter, of the teaching load of most Malaysian university lecturers. It might be possible to construct an index based on teaching hours per dollar of salary, combined with a score for research articles or citations per dollar. Once again, I suspect that the score of Duke and similar places might not be quite so spectacular.

Also, one wonders whether Duke really deserves quite such a high THES ranking after all. Looking at the THES rankings for 2004 and 2005, it is clear that Duke has advanced remarkably and perhaps just a little unbelievably. In 2004 Duke was in 52nd place and in 2005 it rose to eleventh, just behind the Ecole Polytechnique in Paris and equal to the London School of Economics.

How did it do that? More in a little while.