Here’s a method for grading students’ work that seems ludicrous. But it’s essentially what we subject ourselves to as researchers all the time. Forget presidents’ salaries and predatory publishers; the most egregious waste in research universities may be coming from us, thanks to our bizarrely inefficient way of grading ourselves.

Imagine that your dean tells you that she’s piloting a new approach to grading student projects, as a way of ensuring quality control. Instead of submitting the project to one person who assigns a grade, each of your colleagues will be allowed to give out one letter grade only. Adams gives out the As, Burton gives out the Bs, and your newly hired colleague Chandler gives out the Cs. Students can choose whom to submit their work to, but each faculty member can give only the one grade, or no grade at all. Each grader also has the option of telling the student that she might make the grade if she reworks the project. If the student is turned down by one grader, she can take the project to another grader, presumably one who gives out lesser grades. But once a grader accepts a project, the game stops and the student cannot turn to anybody else. So if the student submits to Burton and he loves the work, she can’t turn around and try her luck with Adams instead. Some students might game the system by making a half-baked effort and submitting it to Burton or Chandler, to try their luck and find out the minimum needed to get a grade; they find this more efficient than trying to predict your colleagues’ whims. It’s a real drain on faculty time, because most students’ work has to be read multiple times before it earns a grade.

It seems crazy, right? But that, folks, is pretty much what we do to ourselves all the time. Replace the letter grades with your favorite journal titles, and you have the way that we use the peer review system to grade our work.

Once upon a time, journals served the function of making the results of science accessible. The internet made that largely irrelevant. I can’t remember when I last read an article in a print journal.

That leaves two useful functions that journals still perform. First, we use peer review to check the soundness of our work. Second, we use the same peer review process to grade the quality, impact, or importance of the work.

The hundreds of different journals, and the page and article limits they impose, are quaint relics of a bygone era. The existence of complex hierarchies of journals that each dribble out small amounts of content mostly serves the grading function. If the Journal of Dreadfully Ingenious Experiments publishes only 20 articles a year, and lots of cool stuff appears there, then it’s a big feather in your cap to get an article accepted there. So some researchers are willing to wait a year or more for the opportunity to get the imprimatur that this brings, though this comes with the risk of having to start from scratch with one or two more journals. Young researchers who are trying to make their name and secure employment are especially trapped by this process: should they aim high and risk looking like they’ve done nothing, or should they aim low and risk the perception that they’re no good?
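To see how much time the cascade can eat up, here is a minimal back-of-envelope sketch in Python. Every number in it is invented for illustration: the three-tier hierarchy, the acceptance rates, and the months per review round are assumptions, not data about any real journal.

```python
import random

# Invented illustration: (acceptance probability, months per review round)
# for a three-tier journal hierarchy, from most to least selective.
# The bottom tier is assumed to accept everything it receives.
TIERS = [(0.10, 6), (0.30, 5), (1.00, 4)]

def one_paper(rng):
    """Cascade one paper down the hierarchy; return (months, review rounds)."""
    months = rounds = 0
    for accept_p, review_months in TIERS:
        rounds += 1
        months += review_months
        if rng.random() < accept_p:
            break
    return months, rounds

rng = random.Random(0)
trials = [one_paper(rng) for _ in range(100_000)]
print(f"average months to publication: {sum(m for m, _ in trials) / len(trials):.1f}")
print(f"average full review rounds:    {sum(r for _, r in trials) / len(trials):.2f}")
```

With these made-up numbers, the average paper spends about 13 months and 2.5 full rounds of review in the pipeline, even though each round after the first mostly re-litigates work that has already been vetted once. Compare that with a system that reviews each paper once, in one place.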

This is a disaster. And we can do better.

Last year Claudia Felser, Matt Wagers and I started a special topic of Frontiers in Psychology/Language Sciences. We did this mostly because of the need for more outlets that serve the growing cadre of young researchers who bridge linguistics and psychology. All three of us were a little apprehensive about how the Frontiers model would work for us. We heard some grumbles, almost all from older colleagues. But so far I am impressed. It is still early days, but the average time from when a paper first reaches us until it is published online is just 3-4 months, and that includes a couple of rounds of review. That timing is off the charts relative to the other outlets I know, where submission-to-publication in under 18 months is unusually fast. (Yes, we have rejected submissions — mostly at the abstract submission stage.)

How is Frontiers so fast? Importantly, the Frontiers model distinguishes the vetting function from the grading function. Reviewers are told to assess the soundness of an article, and are given clear instructions on what that means. They’re told that evaluating the impact of an article is not their job. The back-and-forth between reviewers and authors is kept on track by a series of focused questions. This makes it hard for reviewers to go off on self-indulgent tangents, or to raise major new issues in the second round of review. There are no capacity limits on the journal, so no time is lost while editors figure out how papers rank relative to other submissions.

I don’t think that Frontiers has solved everything. But it is so very much more efficient than the alternatives that it’s worth taking seriously. Frontiers keeps the vetting and drops the grading, and the delays mostly disappear; the huge speed difference implies that it is the grading function that makes the traditional system so inefficient. And it’s a crazy approach to grading. So surely we can come up with a better way of doing the grading.

Recently a colleague whom I hold in the highest regard told me that the journal that he edits rejects 90% of the submissions that it receives. He said this with some pride, suggesting that this creates high standards and is good for the field. I’m not so sure. Run the arithmetic: if most of those submissions are refereed, the journal is commissioning something like ten full rounds of review for every article it prints, and the nine rejected papers go on to consume still more reviewer time at their next stops. It sounds to me more like an enormous waste of authors’ and reviewers’ time and energy. I’m all for setting high standards, but there has to be a more effective way to do this.


** One side note: let’s not get derailed by the fact that open access journals like Frontiers charge $1000/article or more. Yes, this sounds like a lot, but it’s what your articles cost already — you just don’t see the charges yourself, because they’re coming from libraries, organizations, etc. The challenge is simply one of moving funds from traditional channels to new channels. And besides, this cost is a fraction of the cost in your time that goes into the inefficient submission-and-review system. But this is a topic for another day.