Here’s a method for grading students’ work that seems ludicrous. But it’s essentially what we subject ourselves to as researchers all the time. Forget presidents’ salaries and predatory publishers; the most egregious waste in research universities may be coming from us, thanks to our bizarrely inefficient way of grading ourselves.
Imagine that your dean tells you that she’s piloting a new approach to grading student projects, as a way of ensuring quality control. Instead of submitting the project to one person who assigns a grade, each of your colleagues will be allowed to give out one letter grade only. Adams gives out the As, Burton gives out the Bs, and your newly hired colleague Chandler gives out the Cs. Students can choose whom to submit their work to, but each faculty member can give only that one grade, or no grade at all. Each grader also has the option of telling the student that she might make the grade if she reworks the project. If the student is turned down by one grader, she can take the project to another grader, presumably one who gives out lesser grades. But once a grader accepts a project, the game stops and the student cannot turn to anybody else. So if the student submits to Burton and he loves the work, she can’t turn around and try her luck with Adams instead. Some students might game the system by making a half-baked effort and submitting it to Burton or Chandler, to try their luck and to find out the minimum needed to get a grade; they find this more efficient than predicting your colleagues’ whims. It’s a real drain on faculty time, because the graders have to read most students’ work multiple times before assigning a grade.
It seems crazy, right? But that, folks, is pretty much what we do to ourselves all the time. Replace the letter grades with your favorite journal titles, and you have the way that we use the peer review system to grade our work.
Once upon a time, journals served the function of making the results of science accessible. The internet made that largely irrelevant. I can’t remember when I last read an article in a print journal.
That leaves two useful functions that journals still perform. First, we use peer review to check the soundness of our work. Second, we use the same peer review process to grade the quality, impact, or importance of the work.
The existence of hundreds of different journals, each with things like page or article limits, is a quaint relic of a bygone era. The complex hierarchy of journals that each dribble out small amounts of content mostly serves the grading function. If the Journal of Dreadfully Ingenious Experiments only publishes 20 articles a year, and lots of cool stuff appears there, then it’s a big feather in your cap to get an article accepted there. So some researchers are willing to wait a year or more for the opportunity to get the imprimatur that this brings, though this comes with the risk of needing to start from scratch with one or two more journals. Young researchers who are trying to make their name and secure employment are especially trapped by this process: should they aim high and risk looking like they’ve done nothing, or should they aim low and risk the perception that they’re no good?
This is a disaster. And we can do better.
Last year Claudia Felser, Matt Wagers and I started a special topic of Frontiers in Psychology/Language Sciences. We did this mostly because of the need for more outlets that serve the growing cadre of young researchers who bridge linguistics and psychology. All three of us were a little apprehensive about how the Frontiers model would work for us. We heard some grumbles, almost all from older colleagues. But so far I am impressed. It is still early days, but the average time from when a paper first reaches us until it is published online is just 3-4 months, and that includes a couple of rounds of review. That timing is off the charts relative to other outlets that I know, where submission-to-publication in under 18 months is unusually fast. (Yes, we have rejected submissions, mostly at the abstract submission stage.)
How is Frontiers so fast? Importantly, the Frontiers model distinguishes the vetting function from the grading function. Reviewers are told to assess the soundness of an article, and are given clear instructions on what that means. They’re told that evaluating the impact of an article is not their job. The back-and-forth between reviewers and authors is kept on track by a series of focused questions. This makes it hard for reviewers to go off on self-indulgent tangents, or to raise major new issues in the second round of review. There are no capacity limits on the journal, so no time is lost while editors figure out how papers rank relative to other submissions.
I don’t think that Frontiers has solved everything. But it is so much more efficient than the alternatives that it’s worth taking seriously. The huge speed difference between Frontiers and traditional journals implies that it is the grading function that makes the traditional system so inefficient. And it’s a crazy approach to grading. So surely we can come up with a better way of doing the grading.
Recently a colleague whom I hold in the highest regard told me that the journal that he edits rejects 90% of the submissions that it receives. He said this with some pride, suggesting that this creates high standards and is good for the field. I’m not so sure. It sounds to me more like an enormous waste of authors’ and reviewers’ time and energy. I’m all for setting high standards, but there has to be a more effective way to do this.
** One side note: let’s not get derailed by the fact that open access journals like Frontiers charge $1000/article or more. Yes, this sounds like a lot, but it’s what your articles cost already — you just don’t see the charges yourself, because they’re coming from libraries, organizations, etc. The challenge is simply one of moving funds from traditional channels to new channels. And besides, this cost is a fraction of the cost in your time that goes into the inefficient submission-and-review system. But this is a topic for another day.
This blog post is very interesting, and I started asking people around me what they think of open access journals. My impression so far is that there are still quite persistent concerns or myths about open access journals, and we need more open discussion forums like this to clarify what really happens behind the scenes.
Two frequent reactions I’ve seen are:
a) famous researchers don’t publish their papers in these open access journals, and therefore the impact or visibility of papers in open access journals is not great;
b) the quality of (fast) reviews in open access journals is lower than that of traditional journals. When I asked them what led them to think that, they usually didn’t have a good answer, but some people said that they believe the editors earn more ‘prize’ money when papers are published, so they accept papers quickly even when there are negative reviews.
For the first reaction, I don’t have any statistics to support this, but I’ve definitely seen big names publishing in open access journals (especially in Cognitive Neuroscience), so there may be slight differences in attitude across sub-fields of Linguistics and Cognitive Science.
For the second reaction, I wasn’t able to get clear answers on where these ideas came from, and I also don’t know whether they are true. Other blogs report very good reviews and comments from open access journals, but reports like this one (http://www.theguardian.com/higher-education-network/2013/oct/04/open-access-journals-fake-paper) may have fed negative views about the quality of reviews in open access journals.
I agree. Perceptions of the new publishing outlets are mixed at best, and probably for good reason. But I think that in pointing out flaws of some implementations, folks often lose sight of the insanity of the standard system. Because at least it’s the insanity that they’re used to.
The concern about who is publishing their work in the new venues is well taken. More people are dipping their toes in the water, and it certainly does help if influential folks show that you can do this and still be taken seriously. One interesting anecdote: we submitted a paper to Frontiers in April, and it was published in June. This week I got an inquiry from a PhD student from abroad who was interested in visiting for a semester, based on her interest in that paper. Normally, the paper would still be a year or two from seeing the light of day.
The concern about review quality is also understandable. Some journals’ reviewing practices are questionable. The story about the open access sting operation published in Science in late 2013 is very interesting. But the problem of spam publishing isn’t restricted to OA journals or to e-journals, nor does it apply to all such journals. What that concern also highlights is that folks just take it for granted that journals should be the ones to serve the grading function for science (“I’ll read your paper only because it appeared in some fancy journal”). And that’s where I think the existing system is incredibly costly and inefficient. OA journals haven’t solved the grading function, but I think they’ve highlighted the need for a better solution.
(And thanks for being the very first commenter on this blog!)
Just to clarify, I am totally on board with the idea of open access journals – I just wanted to share and discuss how skeptics react to them, so we can think more about how to dispel these strange beliefs, especially among senior colleagues who are well established in the field.
Perhaps one of the most effective ways is to entice them to publish papers in these open access journals, so that they can see the whole process for themselves and shed their stereotypes. To that end, the Research Topic component of Frontiers is great, as it’s probably easier to solicit submissions for a specific theme, so I’m really grateful that you, Matt and Claudia are doing this. Perhaps an open discussion forum at major conferences would also be helpful.
I’m really enthusiastic about this particular Frontiers special issue and the new publication model, but not so excited about having to pay money for each submission. The Journal of Language Modelling also has fast turnarounds, but does not cost anything. The Journal of Eye Movement Research as well. Why can’t we have a journal like these for psycholinguistics? Colin, I would join you if you want to push something in this direction, and I think that Doug Davidson would also be interested. We had an exchange about doing something along these lines on Gelman’s blog.
Yes, these discussions inevitably turn to money. The aim of my post was not to say “Frontiers rocks, it has everything figured out”, but rather “Frontiers exposes an insanely inefficient part of our standard grading scheme.” But since we’re talking turkey: I’ll admit that I’m not thrilled when I fork over the $1000 or so for publishing a Frontiers article. But we’re mistaken if we think that other approaches are cheaper. It’s just a matter of who’s paying and how hidden it is.

For traditional journals, libraries are forking over vast sums to publishers, and in a pay-to-read model it’s harder to give reduced or free access to folks at institutions or in countries that can’t pay. The costs per article are probably higher than for Frontiers, but they’re generally hidden from researchers. In the free-to-publish OA journals that I know of, there’s always somebody picking up the cost, and it’s at least as expensive as Frontiers. For example, the well-regarded semantics journal Semantics & Pragmatics is jointly funded by the LSA and by the editors’ wealthy institutions (MIT, U Texas), and the costs per article are probably higher than for Frontiers. The costs are again hidden from researchers, but it’s a benevolent dictatorship model. There may be cases where the editor is doing the work for “free”, but that just means that the editor’s institution is paying for the time. Somebody’s always paying. The solution, which we don’t yet have, would involve redistribution of funds, e.g., from library budgets to a central institutional pool for OA fees.
The Wikipedia page on OA funding issues has a remark from somebody at the Wellcome Trust, complaining that the new model could hurt science because it shifts 1-2% of funding from the cost of doing science to the cost of publishing it. This misses the elephant in the room. Aside from the fact that somebody is already paying for publishing (often funded in part by indirect costs from grants), the far bigger cost is the amount of time that researchers spend on submitting, re-submitting, reviewing, and re-reviewing. Somebody, somewhere is paying for our time, and the cost of the inefficient process dwarfs all publishing fees.
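To put a rough number on that claim, here is a back-of-envelope sketch with purely hypothetical figures. Suppose a paper is rejected at two journals before being accepted at a third, that each wasted round costs the authors 40 hours of revising and reformatting plus three reviewers 8 hours each, and that salaried research time costs $75/hour. Then the detour alone costs 2 × (40 + 3 × 8) × $75 = $9,600 in labor, nearly ten times a $1,000 publication fee, before we even count editors’ time or the career cost of the delay. The exact numbers don’t matter much; under almost any plausible assumptions, the time cost swamps the fee.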
I agree that, in practice, the process is inefficient (for instance, I’m against “revise and resubmit”), but it could easily be made more efficient, e.g. by journals committing themselves to a maximum reviewing time, say eight weeks (if the editor hasn’t found enough reviews by that time, the paper counts as rejected). What is not clear to me is whether a better system can be devised. The current trend towards mega-journals with an author-pays model will not advance science, I fear – it will help people who inhabit rich, established institutions, especially senior scientists. Junior scholars and scientists from outside the rich countries will be sidelined. See my recent blog post about this: http://www.frank-m-richter.de/freescienceblog/2014/10/30/how-would-non-exclusive-author-pays-publication-mega-journals-transform-our-science/
Thanks Martin. Interesting post.
I recommend that anybody reading this pop over to Martin’s post. It’s interesting, and it directly addresses the question of how to “grade” publications in the new regime, something that I don’t solve here. Martin seems happier with the grading status quo than I am, but he correctly points out that it has some valuable egalitarian features, if used as intended. Junior researchers in particular need to be able to get their work recognized, and venue prestige does that in a way that article-level metrics currently don’t. Ironically, though, Martin also favors a publisher-pays open access model, in which institutions pay for the prestige of owning the journals and publishing houses. That doesn’t sound terribly egalitarian to me.
If MIT Press decides to turn Linguistic Inquiry into a publisher-pays journal (maybe renaming it to “MIT Linguistics Journal” or similar, to strengthen its brand name), it should make sure not to accept papers from MIT linguists, because this would be a conflict of interest. But MIT linguists would be involved in editing the journal, and it would be their task to make sure that only the best work is published with MIT’s money. The provenance of the authors would be irrelevant, as in the old subscription model, so it would be as egalitarian as the old model (not perfect, of course).