Friday, January 15, 2016

Aussies not impressed with THE any more


Back in 2012 The Australian published a list of the most influential figures in Australian higher education. In 14th place was Phil Baty, the editor of the Times Higher Education (THE) World University Rankings.

Recently, the newspaper came out with another influential list full of the usual bureaucrats and boosters plus the Australian dollar at number five. Then at number 10 was not a person, not even Times Higher Education, but "rankings". A step up for rankings but a demotion for THE.

To make things worse for THE, the Leiden Ranking and the Shanghai Academic Ranking of World Universities were designated the leaders.

Then we have a reference to "new and increasingly obscure league tables peddled by unreliable metrics merchants, with volatile methodologies triggering inexplicably spectacular rises and falls from grace."

But what are these new and increasingly obscure league tables? They can't be URAP, the National Taiwan University Rankings, the QS world rankings, or Scimago, because those are not new. The US News Best Global Universities and the Russian Round University Ranking are new, but so far their methodologies are not volatile. Webometrics can be a bit volatile at times, but it is not new either. Maybe they are referring to the QS subject rankings.

Or could it be that The Australian is thinking of the THE World University Rankings? What happened last autumn to universities in France, Korea and Turkey was certainly a case of volatile methodology. But new? Maybe The Australian has decided that the methodology was changed so much that it constituted a new league table.




Sunday, January 10, 2016

Diversity Makes You Brighter ... if You're a Latino Stockpicker in Texas or Chinese in Singapore


Nearly everybody, or at least those who run the western mainstream media, agrees that some things are sacred. Unfortunately, this is not always obvious to the uncredentialled, who from time to time need to be beaten about their empty heads with the "findings" of "studies".

So we find that academic papers, often with small or completely inappropriate samples, minimal effect sizes, marginal significance levels, dubious data-collection procedures, unreproduced results or implausible assumptions, are published in top-flight journals, cited all over the Internet and even showcased in the pages of the "quality" or mass-market press.

For example, anyone with any sort of mind knows that the environment is the only thing that determines intelligence.

So in 2009 we had an article in the Journal of Neuroscience that supposedly proved that a stimulating environment would make not only its beneficiaries more intelligent but also the children of the experimental subjects.

A headline in the Daily Mail proclaimed that "Mothers who enjoyed a stimulating childhood 'have brainier babies'".

The first sentence of the report claims that "[a] mother's childhood experiences may influence not only her own brain development but also that of her sons and daughters, a study suggests."

Wonderful. This could, of course, be an argument for allowing programs like Head Start to run for another three decades so that their effects would show up in the next generation. Then the next sentence gives the game away.

"Researchers in the US found that a stimulating environment early in life improved the memory of female mice with a genetic learning defect."

Notice that the experiment involved mice, not humans or any other mammal bigger than a ferret; that it improved memory and nothing else; and that the subjects had a genetic learning defect.

Still, that did not stop the MIT Technology Review from reporting Moshe Szyf of McGill University as saying "[i]f the findings can be conveyed to human, it means that girls' education is important not just to their generation but to the next one."

All of this, if confirmed, would be a serious blow against modern evolutionary theory. The MIT Technology Review got it right when it spoke about a comeback for Lamarckianism. But if there is anything scientists should have learnt over the last few decades it is that an experiment that appears to overthrow current theory, not to mention common sense and observation, is often flawed in some way. Confronted with evidence in 2011 that neutrinos were travelling faster than light, physicists with CERN reviewed their experimental procedures until they found that the apparent theory busting observation was caused by a loose fibre optic cable.

If a study had shown that a stimulating environment had a negative effect on the subjects or on the next generation or that it was stimulation for fathers that made the difference, would it have been cited in the Daily Mail or the MIT Technology Review? Would it even have been published in the Journal of Neuroscience? Wouldn't everybody have been looking for the equivalent of a loose cable?

A related idea that has reached the status of unassailable truth is that the famous academic achievement gap between Asians and Whites, and African Americans and Hispanics, could be eradicated by some sort of environmental manipulation such as spending money, providing safe spaces or laptops,  boosting self esteem or fine tuning teaching methods.

A few years ago Science, the apex of scientific research, published a paper by Geoffrey L. Cohen, Julio Garcia, Nancy Apfel and Allison Master that claimed a few minutes spent writing an essay affirming students' values (the control group wrote about somebody else's values) would start a process leading to an improvement in their relative academic performance. The effect applied only to low-achieving African American students.

I suspect that anyone with any sort of experience of secondary school classrooms would be surprised by the claim that such a brief exercise could have such a disproportionate impact.

The authors in their conclusion say:

"Finally, our apparently disproportionate results rested on an obvious precondition: the existence in the school of adequate material, social, and psychological resources and support to permit and sustain positive academic outcomes. Students must also have had the skills to perform significantly better. What appear to be small or brief events in isolation may in reality be the last element required to set in motion a process whose other necessary conditions already lay, not fully realised, in the situation."

In other words, the experiment would not work unless there were "adequate material, social, and psychological resources and support" in the school, and unless students had "the skills to perform significantly better".

Is it possible that a school with all those resources, support and skills might also be one where students, mentors, teachers or classmates might just somehow leak who was in the experimental and who was in the control group?

Perhaps the experiment really is valid. If so we can expect to see millions of US secondary school students and perhaps university students writing their self affirmation essays and watch the achievement gap wither away.

In 2012, this study made the top 20 of studies that Psychfiledrawer would like to see reproduced, along with studies that showed that participants were more likely to give up trying to solve a puzzle if they ate radishes than if they ate cookies, that anxiety-reducing interventions boost exam scores, that music training raises IQ, and, of course, Rosenthal and Jacobson's famous study showing that teacher expectations can change students' IQ.

Geoffrey Cohen has provided a short list of studies that he claims replicate his findings. I suspect that only someone already convinced of the reality of self affirmation would be impressed.

Another variant of the environmental determinism creed is that diversity (racial or perhaps gender, although certainly not intellectual or ideological) is a wonderful thing that enriches the lives of everybody. There are powerful economic motives for universities to believe this, and so we find that a succession of dubious studies are showcased as though they were the last and definitive word on the topic.

The latest such study is by Sheen S. Levine, David Stark and others and was the basis for an op-ed in the New York Times (NYT).

The background is that the US Supreme Court back in 2003 had decided that universities could not admit students on the basis of race but they could try to recruit more minority students because having large numbers of a minority group would be good for everybody. Now the court is revisiting the issue and asking whether racial preferences can be justified by the benefits they supposedly provide for everyone.

Levine and Stark in their NYT piece claim that they can, and refer to a study that they published with four other authors in the Proceedings of the National Academy of Sciences. Essentially, this involved an experiment simulating stock trading, and it was found that homogenous "markets" in Singapore and Kingsville, Texas (ethnically Chinese and Latino respectively), were less accurate in pricing stocks than those that were ethnically diverse, with participants from minority groups (Indian and Malay in Singapore; non-Hispanic White, Black and Asian in Texas).

They argue that:

"racial and ethnic diversity matter for learning, the core purpose of a university. Increasing diversity is not only a way to let the historically disadvantaged into college, but also to promote sharper thinking for everyone.

Our research provides such evidence. Diversity improves the way people think. By disrupting conformity, racial and ethnic diversity prompts people to scrutinize facts, think more deeply and develop their own opinions. Our findings show that such diversity actually benefits everyone, minorities and majority alike."

From this very specific exercise the authors conclude that diversity is beneficial for American universities, which are surely not comparable to a simulated stock market.

Frankly, if this is the best they can do to justify diversity then it looks as though affirmative action in US education is doomed.

Looking at the original paper also suggests that quite different conclusions could be drawn. It is true that in each country the diverse market was more accurate than the homogenous one (Chinese in Singapore, Latino in Texas) but the homogenous Singapore market was more accurate than the diverse Texas market (see fig. 2) and very much more accurate than the homogenous Texas market. Notice that this difference is obscured by the way the data is presented.

There is a moral case for affirmative action provided that it is limited to the descendants of the enslaved and the dispossessed but it is wasting everybody's time to cherry-pick studies like these to support questionable empirical claims and to stretch their generalisability well beyond reasonable limits.








Wednesday, January 06, 2016

Towards a transparent university ranking system


For the last few years global university rankings have been getting more complicated and more "sophisticated".

Data makes its way from branch campuses, research institutes and far-flung faculties and departments and is analysed, decomposed, recomposed, scrutinised for anomalies and outliers, and then enters the files of the rankers, where it is normalised, standardised, square-rooted, weighted and/or subjected to regional modification. Sometimes what comes out the other end makes sense: Harvard in first place, Chinese high fliers flying higher. Sometimes it stretches academic credulity: Alexandria University in fourth place in the world for research impact, King Abdulaziz University in the world's top ten for mathematics.

The transparency of the various indicators in the global rankings varies. Checking the scores for Nature and Science papers and indexed publications in the Shanghai rankings is easy if you have access to the Web of Knowledge. It is also not difficult to check the numbers of faculty and students on the QS, Times Higher Education (THE) and US News websites.

On the other hand, getting into the data behind the THE citations is close to impossible. Citations are normalised by field, year of publication and year of citation. Then, until last year the score for each university was adjusted by division by the square root of the citation impact score of the country in which it was located. Now this applies to half the score for the indicator. Reproducing the THE citations score is impossible for almost everybody since it requires calculating the world average citation score for 250 or 300 fields and then the total citation score for every country.
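To see why reproducing the THE citations score is so demanding, it helps to sketch the arithmetic the paragraph above describes. The following is a minimal illustration only, based on the description here, not on THE's actual code; the function names and input formats are assumptions:

```python
from math import sqrt

def field_normalised_impact(paper_citations, world_averages):
    """Basic field normalisation as described: each paper's citations are
    divided by the world average for its (field, year) cell, and the
    ratios are averaged. Reproducing this for real requires world
    averages for 250-300 fields, which is why outsiders cannot check it."""
    ratios = [cites / world_averages[(field, year)]
              for (field, year, cites) in paper_citations]
    return sum(ratios) / len(ratios)

def country_adjusted(raw_score, country_score):
    """The post-2015 scheme as described: the square-root country
    adjustment (division by the square root of the country's citation
    impact) now applies to only half of the indicator score."""
    return 0.5 * raw_score + 0.5 * raw_score / sqrt(country_score)
```

For example, a university with a raw impact of 2.0 in a country whose citation impact score is 4.0 would get 0.5 × 2.0 + 0.5 × 2.0 / 2 = 1.5 under this sketch, rather than 1.0 under the old full square-root adjustment.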

It is now possible to access third party data from sources such as Google, World Intellectual Property Organisation and various social media such as LinkedIn. One promising development is the creation of public citation profiles by Google Scholar.

The Cybermetrics Lab in Spain, publisher of the Webometrics Ranking Web of Universities, has announced the beta version of a ranking based on nearly one million individual profiles in the Google Scholar Citations database. The object is to see whether this data can be included in future editions of the Ranking Web of Universities.

It uses data from the institutional profiles and counts the citations in the top ten public profiles for each institution, excluding the first profile.
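The scoring rule just described is simple enough to sketch. This is a minimal illustration of the stated method, not Cybermetrics' actual code; the function name and input format are assumptions:

```python
def institution_score(profile_citations):
    """Webometrics beta method as described: rank an institution's public
    Google Scholar profiles by citation count, exclude the first (top)
    profile, and sum the citations of the next ten."""
    ranked = sorted(profile_citations, reverse=True)
    return sum(ranked[1:11])
```

Excluding the top profile is presumably a guard against distortion, since the largest profile at an institution is sometimes a duplicate or an aggregate account rather than an individual researcher.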

The ranking is incomplete since many researchers and institutions have not participated fully. There are, for example, no Russian institutions in the top 600. In addition, there are technical issues such as the duplication of profiles.

The leading university is Harvard which is well ahead of its closest rival, the University of Chicago. English speaking universities are dominant with 17 of the top 20 places going to US institutions and three, Oxford, Cambridge and University College London, going to the UK.

Overall the top twenty are:

  1.   Harvard University
  2.   University of Chicago
  3.   Stanford University
  4.   University of California Berkeley
  5.   Massachusetts Institute of Technology (MIT)
  6.   University of Oxford
  7.   University College London
  8.   University of Cambridge
  9.   Johns Hopkins University
  10.   University of Michigan
  11.   Michigan State University
  12.   Yale University
  13.   University of California San Diego
  14.   UCLA
  15.   Columbia University
  16.   Duke University
  17.   University of Washington
  18.   Princeton University
  19.   Carnegie Mellon University
  20.   Washington University in St Louis.

The top universities in selected countries and regions are:

Africa: University of Cape Town, South Africa 244th
Arab Region: King Abdullah University of Science and Technology, Saudi Arabia 148th
Asia and Southeast Asia: National University of Singapore 40th
Australia and Oceania: Australian National University 57th
Canada: University of Toronto 22nd
China: Zhejiang University 85th
France: Université Paris 6 Pierre and Marie Curie 133rd
Germany: Ludwig Maximilians Universität München 194th
Japan: Kyoto University 100th
Latin America: Universidade de São Paulo 164th
Middle East: Hebrew University of Jerusalem 110th
South Asia: Indian Institute of Science Bangalore 420th.

This seems plausible and sensible so it is likely that the method could be extended and improved.

Tuesday, January 05, 2016

Worth Reading 5: Another Year, Another Methodology

International Higher Education, published by Boston College, has an article by Ellen Hazelkorn and Andrew Gibson that reviews the recent changes in the methodology of the brand name world rankings.

Nothing new here, except that they have noticed the Round University Rankings from Russia.