Citation metrics as accountability sinks

I just started Dan Davies’ new book, The Unaccountability Machine. His last book, on financial fraud, was both interesting in its own right and full of surprising lessons for academic scientists. So I’m hoping for the same from Davies’ new book, and so far it’s satisfying my hopes.

Right in the first chapter, there’s an interesting perspective I hadn’t seen before on publication and citation metrics: that they’re “accountability sinks.”

An “accountability sink” is an organizational policy or management system that’s designed (intentionally or otherwise) to diffuse individual responsibility for decision making. If a decision is the outcome of “company policy,” or of following “established best practices,” or the like, then no individual person made the decision. Which means that nobody has the power to change a bad decision. Which means nobody can be held accountable for a bad decision.

Davies’ thesis is that the world has become filled with too many accountability sinks. To be clear, accountability sinks absolutely have their place. Laws, rules, and policies often exist for good reasons, or at least understandable reasons. “Every decision should be made on a case-by-case basis by an individual person who can be held accountable for that decision,” obviously isn’t a feasible or desirable basis on which to organize, well, anything. So accountability sinks are good. But Davies’ thesis is that you can have too much of a good thing.

One of Davies’ first examples of an accountability sink is citation metrics. He suggests that for-profit journal publishing persists, even though academics mostly hate it, in part because

[S]omewhere along the way, the journal publishing industry managed to insert itself into the promotion and recruitment function of universities all over the world. In doing so, it created an extremely useful accountability sink for senior academics and managers of universities, while also solving an awkward and unpleasant personal problem for them – how to judge the quality of scholarship without offending the scholars.

The truly valuable output of the academic publishing industry is not journals, but citations. Academic papers cite one another, and the best ones get cited a lot…If you can work out which are the best journals, and which scholars publish in them, and which papers get the most prestigious citations, then you can use fairly standard statistical techniques to generate a ‘score’ for every academic…

[I]f you have to decide which academics should be promoted or employed, the ‘weighted citation count’ is a perfect accountability sink. Academic politics is notoriously vicious, and academic careers tend to intersect a lot – what goes around comes around, and people need to collaborate. In that sort of environment, a system in which academics directly assessed each other’s promotion cases would cause all sorts of interpersonal problems…

So although the citation index is in all probability a bad measure that seems to lock the universities into an expensive and unsatisfactory publishing model, the outsourcing of the academic performance measurement system is a solution rather than a problem. It redirects potentially destructive negative emotions to a place where they can be relatively harmlessly dissipated.

I found this a very interesting perspective. The suggestion is that, as much as academics hate being evaluated by citation metrics, they’d hate being evaluated directly by their fellow academics even more. A world in which academics aren’t upset about the h-index is a world in which they’re upset with each other.

I found it interesting even though it doesn’t really line up with my own anecdotal experience. In my experience, anonymity and secrecy, not citation indices, do a lot of the work of preventing or redirecting the resentment and jealousy that would otherwise attach to academic hiring, tenure, and promotion decisions. For instance, faculty hiring committees make decisions behind closed doors precisely so that they can evaluate applicants both holistically and honestly, without having anyone yell at them or take revenge on them for their decisions. So I’m not sure citation metrics actually function as accountability sinks, at least not in my experience. But clearly they could function that way, and perhaps at some institutions they do.
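(A quick aside for anyone unfamiliar with the h-index that came up above: it’s the largest number h such that an author has h papers with at least h citations each. Here’s a minimal Python sketch of that computation, with made-up citation counts for illustration; real citation-based scores like Davies’ “weighted citation count” get more elaborate, but this is the basic flavor.)

```python
def h_index(citations: list[int]) -> int:
    """Largest h such that the author has h papers with >= h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # at least `rank` papers have >= `rank` citations
        else:
            break
    return h

# Hypothetical author with five papers: three papers have >= 3 citations,
# but there aren't four papers with >= 4 citations, so the h-index is 3.
print(h_index([10, 8, 3, 2, 1]))  # 3
```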

What do you think? Do citation metrics function as an accountability sink?
