Australia's failure to capture impact in the recent Excellence in Research for Australia program, together with the program's complex methodology, has "culminated in little or no worthwhile information improvements", according to a visiting research expert.

Patrick Dunleavy, a professor of political science and public policy at the London School of Economics and Political Science, told a seminar at the Australian National University last week that the Excellence in Research for Australia program, like Britain's research assessment exercises before it, was "predigital in nature", rendering such exercises clumsy and bureaucratic.

He said "eyeballing 200,000 outputs" or attributing proxy measures via journal articles had failed to capture the real value of research, particularly in the social sciences.

Professor Dunleavy is an advocate of a system developed by University of Melbourne academic Anne-Wil Harzing that he said was used extensively by universities in Europe and Britain.

The program, Publish or Perish, requires academics to run their names through Google Scholar, which returns all their publications, citations and related statistics.
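
The kind of lookup the program automates is easy to reproduce. Below is a minimal sketch that pulls one academic's publication and citation statistics from a Google Scholar profile. It assumes the third-party Python package scholarly, which neither Professor Dunleavy nor Publish or Perish's documentation is quoted as endorsing; the field names are that package's, chosen here purely for illustration.

```python
# Illustration only: fetch one researcher's Google Scholar profile using the
# third-party "scholarly" package (pip install scholarly). The package and
# its field names are assumptions of this sketch, not anything named in the
# article or prescribed by Publish or Perish.
from scholarly import scholarly

def fetch_profile(name: str) -> dict:
    """Return basic publication and citation statistics for one academic."""
    author = next(scholarly.search_author(name))  # first matching profile
    author = scholarly.fill(author, sections=["basics", "indices", "publications"])
    return {
        "name": author.get("name"),
        "affiliation": author.get("affiliation"),
        "total_citations": author.get("citedby"),
        "h_index": author.get("hindex"),
        "publications": len(author.get("publications", [])),
    }

if __name__ == "__main__":
    print(fetch_profile("Anne-Wil Harzing"))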

"All universities would need to do is clean up the report, submit their academics' Google scholar profiles to government and you would have a national academic census," Professor Dunleavy said.

He is also an advocate of measuring research impact, but said most non-academic or external "occasions of influence" were likely to be overlooked by conventional metrics. He has devised six indicators of external academic impact that could be added to the Harzing/Google profile, "providing a fully transparent and completely robust evidence base for research funding allocations".

Professor Dunleavy said that universities' communications systems also tended to be old-fashioned. "Universities need to find a space to communicate their academics' research that lies somewhere between the press release and the hard-boiled academic journal," he said.

"There have been major advances in communicating the sciences . . . but broader understanding of the social sciences is still lamentable."

Professor Dunleavy said the main problem with collecting inaccurate data on research quality and impact was that it created "strong distortions in academic behaviours, such as discriminating against applied, multi-disciplinary or truly innovative work".