Reason #317 to be depressed: journal impact factors
Everywhere I look, I see a common problem. Someone a long time ago came up with a numerical measure of X, and simply because this number exists, bureaucrats and administrators have jumped on it as a convenient measure of Y.
One of the most pernicious is the use of student evaluations to decide almost everything. Student evaluations are a popularity contest, pure and simple. Are the best teachers always the most popular? Think about what factors would make a student give a teacher high marks--light workload, infrequent class meetings, easy tests, big curves, and a please-walk-all-over-my-life office hour policy come to mind. Are those the things we really want to be rewarding?
Another is the use of journal impact factors to decide how hot your research is. If you're not familiar with these, the top journals are rated by how frequently their articles are cited. Usually higher impact factors go with more exclusive journals with higher turnover rates. A lot of museum publications and regional journals aren't listed at all. And people (well, bureaucrats) use these things. A lot. So often the decision of whether to hire or promote someone is based less on, "Is their work any good?" than "What journals have they been published in?"
(One obvious loophole is that a paper can be cited a billion times not because it is good but because it is spectacularly bad.)
So there are big problems with journal impact factors even if they were calculated in an honest, transparent way by a disinterested party. They're not.
Thomson Scientific, the sole arbiter of the impact factor game, is part of The Thomson Corporation, a for-profit organization that is responsible primarily to its shareholders. It has no obligation to be accountable to any of the stakeholders who care most about the impact factor—the authors and readers of scientific research. Although we have not attempted to play this game, we did, because of the value that authors place on it, attempt to understand the rules. During discussions with Thomson Scientific over which article types in PLoS Medicine the company deems as “citable,” it became clear that the process of determining a journal's impact factor is unscientific and arbitrary. After one in-person meeting, a telephone conversation, and a flurry of e-mail exchanges, we came to realize that Thomson Scientific has no explicit process for deciding which articles other than original research articles it deems as citable. We conclude that science is currently rated by a process that is itself unscientific, subjective, and secretive.
During the course of our discussions with Thomson Scientific, PLoS Medicine's potential impact factor—based on the same articles published in the same year—seesawed between as much as 11 (when only research articles are entered into the denominator) to less than 3 (when almost all article types in the magazine section are included, as Thomson Scientific had initially done—wrongly, we argued, when comparing such article types with comparable ones published by other medical journals).
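To see just how much the denominator choice swings the number, here's a back-of-the-envelope sketch. The two-year impact factor is citations received this year to the previous two years' content, divided by the number of "citable items" from those years. All the figures below are invented for illustration (they are not PLoS Medicine's real counts), chosen only to reproduce the 11-to-under-3 seesaw described above:

```python
# Two-year impact factor: citations in year Y to articles published in
# years Y-1 and Y-2, divided by the number of "citable items" from those
# two years. Everything hinges on what counts as a citable item.
# NOTE: all numbers here are hypothetical, for illustration only.

citations = 220          # citations this year to the last two years' content

research_articles = 20   # original research articles (clearly "citable")
magazine_items = 55      # editorials, essays, news pieces, etc.

# Denominator = research articles only:
if_narrow = citations / research_articles                      # 220 / 20

# Denominator = nearly everything published:
if_broad = citations / (research_articles + magazine_items)    # 220 / 75

print(if_narrow)            # 11.0
print(round(if_broad, 2))   # 2.93
```

Same journal, same citations, same year; the "impact" nearly quadruples depending on an accounting decision made behind closed doors.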
Makes me wanna holler.
Thanks to Mike for the tip.
Labels: Hackademia, More Of A Complaint Really, Open Access, Posts That You Will Not Find Amusing
4 Comments:
Spoke recently with a French friend who has a short paper in the works that will -- definitely -- get cited a lot when it's out. (Can't say who because I haven't asked his permission.) Because this paper is kind of urgent, I suggested that he send it just anywhere that would give him a PDF that he could send people. He said that, for a French researcher, that simply wasn't an option: over there, they live and die by impact factors. If he sends this paper to a low-IF journal, then for the purposes of getting a job he might just as well never have written it. Even if it gets cited constantly.
That's just silly.
Spoke recently with a French friend who has a short paper in the works that will -- definitely -- get cited a lot when it's out.
Oooooo -- this must be the spectacular, long-rumored fossil of an articulated, three-dimensional skeleton of a Permian-age basal archosaur with feathers, skin, three-dimensional parasites, intact gut contents, cololites, and coprolites, and intact blood vessels with intact blood cells and DNA in them, preserved roosting on a nest of eggs with complete embryos in a paleobotanically identifiable vegetation nest. Yay! (NOTE: don't I wish!!!!)
One of the most bizarre things about impact factors is that books never get a ranking at all. A professor who writes a terrific book, a textbook, or even just a chapter in an edited volume gets no credit for any of them. Not sure why, since an "impact factor" ought to be calculable for a book. Once published, it wouldn't change on a yearly basis the way a journal's does (if anything, it would probably decline steadily as the book goes out of date), but even if the book is a best seller, it doesn't factor in at all.
I agree that these things are almost entirely pointless and their use in "valuating" a professor's "worth" is stupid and, frankly, lazy ('cuz it's always easier for bureaucrats to look at a number to evaluate worth rather than have to actually read the works themselves and make inquiries amongst the professor's colleagues in the field to find out if the work actually is worth anything...).
But then again, keep in mind that a single private corporation is responsible for inflicting mandatory (yet NOT FREE) college entrance exams (SAT, ACT, GRE...but not, I think, LSAT and MCAT) on students to "ascertain" whether or not the student is "worthy" of getting into college by seeing if the student knows some arcane crap that the test makers deem valid rather than seeing what the student actually knows, period. Somehow, somewhen, someone concocted these atrocities and managed to convince entire nations (well, at least the US) to make them mandatory but not free. I perceive journal impact factors in the same vein, except that Thomson isn't, to my knowledge, pushing the use of their numbers in the ways they've been used. Just goes to show what happens when lots of people are gullible simultaneously...
He said that, for a French researcher, that simply wasn't an option: over there, they live and die by impact factors.
Yes, that's true.
The CNRS* (at least) has recently changed to counting the actual citations as opposed to looking at a person's impact factor. But of course, your citations are counted by the same company**.
* Centre National de la Recherche Scientifique. The national organization that employs research scientists who don't need to teach.
** I thought it was called ISI, Institute for Scientific Information? Did just the name change?
"Thomson Scientific (formerly known as Thomson ISI)", says the PLoS Medicine paper, to answer my own question.