Saturday, November 29, 2008

The Impact Factor hydra


Interesting post on the entrenchment of impact factors in small countries over at A Blog Around the Clock. The third comment down, by Comrade PhysioProf, is particularly troubling, because it's almost certainly true:

The bottom line for impact factor--or whatever other quantitative metric might replace or supplement it--is that people are lazy and time is highly rate limiting in the professional lives of academics. People are always gonna rely on something fast and easy to obtain--like a journal impact factor--rather than on something difficult and time consuming to obtain--like a developed opinion about the solidity and importance of a particular published paper.

Anything that is gonna replace impact factor of journals in which scientists publish papers as a metric for comparative assessment of scientific productivity is gonna have to be as braindead easy to deploy as impact factor.

So we have what you might call the Impact Factor Paradox: getting a real handle on something as slippery as the value of someone's scientific contributions is inevitably going to be time-consuming and hard; metrics that allegedly measure that value are going to fall along a spectrum from "time-consuming but accurate" to "quick, easy, and horribly flawed"; in a system where time is the limiting resource, there will always be a sort of grim undertow toward the quick-'n-greasy metrics.

Your thoughts on how to avoid this trend are welcome.


Saturday, July 05, 2008

Impact factors, copyright law, and other science publishing buzzwords

One of my former labmates just sent around the latest list of journal impact factors. I'd repost 'em here, except that I don't want to perpetuate the perception that they are important (although I was grateful to have seen the last version, in a checkout-stand-tabloid-curiosity sense). Here's why:

People seem prone to forgetting that journals have impact factors because individual papers are cited widely, or not. It's not like every paper that Nature (IF = 28.something) publishes is widely cited, or every paper that Canadian Journal of Earth Sciences (IF = 0.955) publishes is not. In a field as small as sauropod paleobiology, everyone is going to read all of the literature no matter where it is published (the last remaining exception being obscure foreign journals that are not easily available as PDFs; and I mean foreign here as in "outside any researcher's country", not just "outside the US"; getting hold of un-PDFed papers from Oklahoma Geology Notes is probably a cast iron bitch if you're in a local museum in China). The only real advantage of a high-profile journal is to possibly bring the paper to the attention of non-sauropod workers. Whether Nature does that any better than CJES is probably up for discussion (at least in the small world of sauropod paleobiology). Unless some avian physiologist or human bone biomech person is going to have their world rocked by what I write, that 'crossover' appeal is probably not worth stressing over too much.
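
For the record, the two-year ISI impact factor is just an average: citations received this year to a journal's papers from the previous two years, divided by the number of citable items the journal published in those two years. Here's a minimal sketch of that arithmetic in Python--every citation count below is invented for illustration--showing how a couple of blockbuster papers can drag the average way up while the typical paper sits near zero:

    # Two-year impact factor: citations this year to papers from the
    # previous two years, divided by citable items from those two years.
    def impact_factor(citations_per_paper):
        return sum(citations_per_paper) / len(citations_per_paper)

    # An invented journal: 45 papers that are barely cited, plus 2 blockbusters.
    citations = [0] * 30 + [1] * 10 + [2] * 5 + [150, 90]

    print(round(impact_factor(citations), 2))      # 5.53 -- looks impressive
    print(sorted(citations)[len(citations) // 2])  # 0 -- the median paper

The average tells you about the journal's outliers, not about the paper actually in your hands.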

And there is an entirely different form of impact that we need to be thinking about, which is: how easy is it for people who are interested in your research area to find out about your work and acquire it? I'll bet that my effective impact is much greater than it would be otherwise simply because all of my papers are freely available online (thank you, Mike!). And individual researchers are doing this more and more--see the badly-in-need-of-updating list here.

And there's a third kind of impact that is arguably more important than either of the other two, which is: how big is the intellectual footprint of a given paper? For example, I'd argue that Kristi Curry-Rogers's paper on Apatosaurus ontogeny in JVP has been far more important and influential than her paper on Rapetosaurus in Nature (feel free to argue otherwise, I'm just shooting from the hip here).

Seems to me that what matters for Impact(3) is quality and timeliness of ideas (and timeliness often trumps quality), and publication venue is almost completely irrelevant (although I'd be interested in seeing a counterargument). What matters for Impact(2) is availability of work, which is better for papers that came out in open access journals but easily remedied for papers that didn't (at least until publishers crack down on free distribution of PDFs, if ever*). And Impact(1) is not unimportant, but it's also not nearly as important as people think it is; indeed, Impact(2) and Impact(3) actively erode the importance of Impact(1).

The main thing propping up Impact(1) as something we all have to at least pretend to care about is that the perceived prestige of having published in a high-IF journal really does matter to bean-counters in universities and funding agencies who can't be bothered to assess the actual quality of someone's work and for whom a nice convenient number is a godsend, even if it is horribly flawed. Therefore it also matters to many of us, who can't get ahead without having the approbation of those faceless but immensely powerful entities. See also: student teaching evaluations.

If we're going to be stuck with one number to evaluate something like the impact of a scientific paper, how about the number of citations that pop up on Google scholar? It's fast, free, and doesn't pretend to be anything other than what it is: the number of papers indexed by Google Scholar that cite the paper in question. Tragically, I suspect that ISI impact factors are popular in bureaucracies specifically because they are slow, inaccurate, and not transparently available to all.
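
And citation counts are easy enough to grab programmatically these days. A minimal sketch in Python, assuming the third-party scholarly package (Scholar has no official API, so this screen-scrapes; the title below is just a placeholder):

    # Fetch a Google Scholar citation count via the third-party `scholarly`
    # package (pip install scholarly). A sketch, not a blessed interface.
    from scholarly import scholarly

    # Search by title and take the top hit; 'num_citations' is simply the
    # number of Scholar-indexed papers that cite it -- nothing more.
    hit = next(scholarly.search_pubs("Some paper title goes here"))
    print(hit["num_citations"])

That's the whole appeal: one transparent number anyone can check in seconds, rather than a proprietary figure computed behind a paywall.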

* Recently there was a post on the Vert Paleo listserv about how copyright is a barrier to making the existing backlog of literature easily available electronically. My feeling is that those of us needing to get work done will scan what we need ourselves, circulate it through private channels, and keep plowing on. Those who try to stop us will be publicly humiliated at worst (when they come down on us and we all collectively vow never to use their stuff again) or marginalized at best (when we just stop using their stuff because it's harder to get hold of).

Finally, if you haven't been following the blogosphere brush fire (or brush war?) that sprang up in the wake of Nature's pathetically transparent slam of PLoS, Greg Laden has kindly assembled about a million relevant links here.

Discuss.
