Friday, 31 May 2013

Why journal rankings are evil



I have been sitting in a workshop for the past couple of days. The workshop was on universities and markets. So, as you can perhaps imagine, I have been on a rollercoaster of surging hope, of deep sadness and regret and, sometimes, of profound crankiness. I know, nerd, right?

My thoughts on universities have been popping up here for some time, so you probably already know roughly what they are.

One of the things that has bothered me deeply for some time is journal rankings. Officially we don’t have them. When we did, we blamed the government and pointed at the bureaucrats, all the while converting this system of ranking into an internal economy that regulates not only the allocation of jobs and resources but also the behaviour of academics.

The reason we no longer have rankings is that the government looked at this situation with alarm. The rankings were intended to provide non-specialists with information about the quality of the places where we publish.

There were always several problems with this, and academics were not quiet about them. Some of the problems were functional. For example, if you rank journals one year, the ranking becomes true the next year, because it shapes where scholars choose to publish.

But what is most disturbing is that, despite the official declaration of death for journal rankings, they are still being used in universities. Indeed, their use seems to be escalating – and this week I heard a call for them to be officially reinstated.

This is alarming.

And the reason this is alarming is that some of the problems with ranking concern the ways it enforces disciplinary complicity and thus – as the economists fear – stifles innovation.

If everyone needs to publish in the same handful of ‘top’ journals, whose content is regulated and edited by certain selections of ‘peers’ (these people, for early career researchers, are not really ‘peers’…but that is another story), new ideas will never emerge. We would NEVER have had feminism emerge in the academy under this kind of regime, for example. We need to be allowed to value the journals that will take risks as well as the old reliables – otherwise new knowledge will never emerge and we will keep recycling vaguely different hues of the same old ideas.

But university administrators like journal ranking because it gives a numeric value to quality – as money does to commodities – that helps them allocate scarce resources. Allocating resources has become such a priority for them (mostly because it is horribly hard and strains their relationships with their colleagues) that they will sacrifice nearly anything for tools that help them do it.

They need to find another way.

For if we prioritise the means of allocating funds at the expense of structures that enable and facilitate the good production of knowledge, we might as well all pack our bags and move out of the universities.

Thursday, 2 May 2013

Enlivened learning

For a little while I have been thinking about 'inclusive knowledge' - what a more inclusive university might look like if we add KNOWLEDGE, not just demographics, to the question of inclusion. My as-yet-unexplored assumption (based on SOME evidence, just not much systematic evidence yet) is that the exclusion of people and the limits of academic knowledge are related.

The idea links to Raewyn Connell's discussion of Southern Theory - emphasising certain kinds of knowledge bolsters power in the locations where that knowledge is made.

Since this has been rattling around in my head for a bit, I was excited when, yesterday, I met an amazing couple, Kelly and Udi, who have 'dropped out' of academic life and are touring the world on their own savings to explore the ways people are 'doing' higher education beyond the academy. This includes indigenous knowledge, but also other ways of knowing 'otherwise'.

They are calling their trip Enlivened Learning.

Somebody needs to give this couple some money. There is really important stuff here.

I plan to spend a good amount of time with their blog.