Our little publishing experiment was not supposed to go like this. Largely by accident, we created a peer-reviewed outlet with impact that matches the top journals in linguistics, with time-to-publication that is almost implausibly fast relative to peers, that saves institutions money, and that saves authors lots of stress. And we did this by charging people lots of money and accepting almost everything that we received.

This is misleading, of course. But only slightly.

This is a preliminary report on the experience that Matt Wagers, Claudia Felser and I have had over the past 2 years of editing a Frontiers research topic on Encoding and Navigating Linguistic Representations in Memory. Frontiers is a family of open-access mega-journals that has grown alarmingly quickly in the past few years. A Frontiers research topic is similar to a thematic special issue in a traditional journal, except that it is not limited in size, and the articles do not appear at the same time – in our case they’re appearing over the course of a couple of years, whenever each paper is ready. Our research topic lies within Frontiers in Language Sciences, which itself is a section of the journal Frontiers in Psychology.

I’m not here to claim that everything that Frontiers does is awesome. In fact, some of our experiences were on the frustrating-to-embarrassing end of the scale. But working within their system has turned up some useful surprises.

Frontiers has attracted a lot of attention, some positive, some not so much. Open access means that all articles are freely available online to readers. In this case, authors have to pay an article processing charge (APC), though they can request a waiver in some circumstances. Frontiers has an unusual peer review model. Articles are judged only for soundness, not for impact, i.e., if your study is sound but has minimal novelty or importance, it can still be accepted. Frontiers instead focuses on gathering post-publication impact measures. Reviewers are expected to fill out a “twenty questions”-style focused review form, and they then engage in an online back-and-forth with the authors until they are satisfied on each point in the review. The paper is published if two reviewers endorse publication, and those reviewers’ names are published together with the paper. (Reviewers who do not endorse publication remain anonymous.) There has been lots of grumbling elsewhere about the Frontiers review process, with some reviewers complaining that they feel pressured to endorse publication, others complaining that this leads to publication of junk, and some labeling the organization a “predatory publisher”. We feared this too.

For me, Matt, and Claudia the experience has been mostly rather positive. We were originally looking for an outlet for some papers from a couple of small workshops that we organized in Potsdam in 2012 (Claudia’s was an official GLOW satellite; ours was more of a glorified lab meeting, generously hosted by Shravan Vasishth and his students). We tried a couple of traditional journals, without getting very far, and then came across Frontiers as an alternative. We were initially apprehensive, and it took us a little while to get the hang of things, but the experience has made us rethink some features of traditional publishing that we had taken for granted.

Another reason for taking on this project was that we wanted to test the market for publishing at the intersection of linguistics and psychology. Lots of talented young researchers are entering this area nowadays, but their main publication options are either in traditional psychology-based journals, which are often averse to linguistic detail (and have almost no linguists on their editorial boards), or in traditional linguistics journals, which have a complementary set of biases. The Frontiers experience has confirmed our suspicion that there’s a strong market for new outlets in this area.

This post is about an Open Access (OA) project, but it’s not about the aspects of OA that have drawn huge recent attention in linguistics (here, here, and here – cool pic on this last one). Important questions about who pays and who gets to read shouldn’t drown out the other things that make the traditional publishing process highly inefficient.

Five Outcomes from the Experiment

Our experiment isn’t yet finished, but it’s getting close. All of the submissions are now in, and most papers are either published or soon-to-be-published. So we have a good sense of some of the results.

1. Participation. By the end of the experiment, we’ll have published almost 50 papers in a little over 2 years, with around 40 appearing in 2015 and early 2016. As of the end of 2015, 35 papers have appeared, 8 are in the late stages of review, 4 are in the early stages, and there may be one or two more to wrap things up. The research topic will have served around 150 authors, with the help of almost 100 different reviewers. (A couple of people did 4-5 reviews – we salute you!) That’s more papers than most free-standing journals in linguistics publish over a comparable period. It’s on track to become the most prolific of all 385 research topics currently under the Frontiers in Psychology banner (yes, that’s a lot of topics; I had no idea).

So the first conclusion is that there’s lots of interest.

2. Speed. Authors have been amazed by the publication speeds. So have we. All papers receive the equivalent of two or more rounds of review and author response, which is normal in our field. For the 35 papers published by the end of 2015, the time from submission to online publication has a mean of 108 days, a median of 108 days, and a range of 56-176 days. This is almost implausibly fast. The norm in linguistics is 1-3 years.

As a point of comparison, I calculated averages for Semantics and Pragmatics (S&P), the most successful open access journal in linguistics that runs under a more traditional model. For the articles published in 2014 (9 articles), the average time from submission to publication was 16.8 months (range 7-33 months). To be clear, I am not singling out S&P because it does poorly. S&P is a model of best practices in linguistics journal publishing (kudos to Kai and David), and this includes unusual transparency about submission dates etc. That’s why the data are available. I would expect most other journals in the field to be slower.

So, our average time-to-publication is almost 5 times faster than the average for one of the most forward-looking journals in the field. And that’s only counting the papers that are ultimately accepted in S&P. Most submissions to S&P are presumably either published elsewhere (= even more slowly) or never published at all.
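
For the skeptical, here’s the back-of-the-envelope arithmetic behind that “almost 5 times” claim; the only added assumption is an average month of about 30.4 days:

```python
# Quick check of the speed comparison: S&P's 16.8-month average
# versus our 108-day average, assuming ~30.4 days per month.
sp_months = 16.8          # S&P 2014 mean, submission to publication
frontiers_days = 108      # our mean, submission to online publication

sp_days = sp_months * 30.4
print(round(sp_days))                      # ~511 days
print(round(sp_days / frontiers_days, 1))  # ~4.7, i.e. almost 5x
```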

3. Impact. Based on a standard(-ish) measure, our research topic is comparable in impact to the top journals in linguistics, despite being brand new and having lax publication standards. How so?

Apologies for the bibliometric geekery, but it’s time to talk a little about impact factor (IF). That’s a standard measure that’s used to evaluate the importance of journals. Basic idea: if you publish a lot of influential papers, then you’re probably an important journal. And “influential” is generally equated with “highly cited”. The citation measure that’s most commonly applied to journals is the 2-year IF, as calculated by the folks at Thomson Reuters. In detail, the 2015 IF for a journal is the number of citations in 2015 to articles published in that journal in the previous 2 years (i.e., 2013 and 2014), divided by the total number of articles that it published in those 2 years. Fancy journals like Science or Nature have IFs of 30-40. Top psychology or neuroscience journals are in the 5-15 range. And IFs for linguistics journals are pretty much in the toilet, with the good ones in the 0.5-2.0 range, and others not even rated. Needless to say, this does not help with the perception that linguistics is an important field.
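
For readers who prefer code to prose, here’s the same arithmetic as a toy calculation (my sketch of the idea, not Thomson Reuters’ actual pipeline):

```python
# 2-year impact factor, in its simplest form: citations this year to
# the previous two years' articles, divided by how many such articles
# there were.
def two_year_if(citations_this_year, articles_prev_two_years):
    return citations_this_year / articles_prev_two_years

# A journal that published 100 articles across 2013-2014 and drew
# 150 citations to them in 2015 would have a 2015 IF of 1.5.
print(two_year_if(150, 100))  # 1.5
```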

(Why is IF based on 2 years only? It’s arbitrary, of course. But it works for the biologists. So live with it. Yes, there are other measures, and they probably make a lot more sense for linguistics, where citations extend over a longer period. But most people start falling asleep or looking at their phones if you start complaining about that. This isn’t good.)

I can’t calculate a reliable 2-year IF for our research topic. It’s too new and short-lived. But I can make a quick-and-dirty substitute. I used Google Scholar to estimate a 1-year IF: for articles published in 2014, what is the average number of citations in Google Scholar to those articles in 2015 (through late October)? And how does that compare to a couple of other leading journals in linguistics?

Our Frontiers topic: 8 papers published in 2014, 32 citations to date, 22 in 2015. 1-year IF is 2.75.

Language: this is the flagship journal of the Linguistic Society of America. The most widely circulated print journal in the field, worldwide. 21 papers published in 2014, 136 citations to date, 56 in 2015. 1-year IF is 2.67.

Semantics & Pragmatics: also published by the LSA, this is the most successful open access journal in linguistics, run by influential and forward-thinking editors, and quickly attaining high prestige. 9 papers published in 2014, 89 citations to date, 28 in 2015. 1-year IF is 3.1.
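
Here’s the same quick-and-dirty 1-year estimate as a short script, using the hand-counted numbers above (the 2015 citation counts come from Google Scholar, with all the caveats discussed below):

```python
# 1-year "impact factor" estimate: 2015 Google Scholar citations to a
# journal's 2014 articles, divided by the number of 2014 articles.
cohorts_2014 = {
    "Frontiers research topic": {"articles": 8,  "citations_2015": 22},
    "Language":                 {"articles": 21, "citations_2015": 56},
    "Semantics & Pragmatics":   {"articles": 9,  "citations_2015": 28},
}

for journal, d in cohorts_2014.items():
    print(f"{journal}: {d['citations_2015'] / d['articles']:.2f}")
# Frontiers research topic: 2.75
# Language: 2.67
# Semantics & Pragmatics: 3.11
```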

This is a limited sample, of course, and the data are noisy. I counted a citation as being from 2015 if Google Scholar lists it as 2015. That could be unreliable. Google Scholar picks up far more citations than Thomson Reuters, which limits its attention to the journals that it indexes. So Google Scholar is probably a more useful measure for the diverse publication landscape in linguistics, and certainly not biased towards psycholinguistics. I only chose a couple of linguistics journals for comparison, but I’m confident that other journals would almost all fare worse. Language and S&P are both very good journals. One clear difference that you’ll note is that there are more total citations-to-date for the traditional journals, whereas the 2015 citations make up a higher proportion of citations-to-date for the articles in our research topic. That’s because our articles are genuinely new, whereas the articles in the traditional journals have often been around for a while, and routinely have citations that pre-date their publication. This is normal in linguistics, because publication cycles are so slow.

Incidentally, Frontiers makes it easy to see other quantitative measures of the impact of papers. For example, the 35 papers published to date have been viewed a total of 38,000 times, and have been downloaded 6,000 times. (Individual example here.) I have no idea how that compares to other outlets.

The moral of this: let’s be clear that we have not really created a prestigious outlet. We received papers whose authors didn’t want to go through the traditional journal process, either because (i) they didn’t expect to succeed, or (ii) they didn’t want to wait a couple of years for the outcome, or (iii) they had already tried and failed, perhaps multiple times. Our group contributed papers to the research topic for all of these reasons (side note: topic editors’ papers are handled by independent ad hoc editors). What this exercise shows is that simply by publishing more efficiently it’s possible for a motley set of papers to match the top journals in the field, based on conventional measures of impact.

So if a linguistics journal could publish the very best work with similar efficiency, then that journal’s IF and its prestige could go through the roof. Needless to say, that could be a rather good thing for the field.

4. Selectivity. Here’s the crazy part. We have accepted almost every paper that we received. That seems ridiculous. By comparison, Language accepts around 20% of the manuscripts that it receives. S&P takes pride in the fact that it accepts only around 13% of manuscripts. In the traditional publishing world, this contributes to their prestige, and editors are rewarded for running a journal that rejects most of what it receives. The traditional model rewards an incredibly inefficient process. (To be clear: I don’t think that everything should get the imprimatur of publication. But making high rejection rates desirable comes at a great cost.)

With such a high acceptance rate, it’s tempting to think that we have no standards at all. Or that our reviewers, who agree to be publicly named when they endorse publication, are complete pushovers. But that’s misleading. All authors had to first submit a 1-page abstract for review by the editors, and only if the abstract was judged suitable was a full paper submission invited. We rejected a number of papers at the abstract stage, though still only a relatively small percentage. And in most cases we gave feedback to the authors based on the abstract, suggesting things that they should address in the full paper. This triage process helped to ensure that the full papers that we did receive were of higher quality.

Nevertheless, of the full papers submitted so far, almost all are on track to be published. There are only a couple of exceptions (one withdrawn following reviews; one rejected based on reviewer consensus; one currently in arbitration due to reviewer disagreement). There have been cases where we worried that the paper might not make it through the review process, but we’ve been encouraged by the constructive interaction between reviewers and authors, which has led to genuine improvements. A benefit of judging papers on soundness rather than impact is that authors do not need to make unwarranted claims in order to justify the importance of their paper. This allows them to be more honest. Reviewers appreciate that.

5. Satisfaction. We expected that authors and reviewers would find the quirks of the Frontiers process frustrating. Or that it would get on our nerves. But that hasn’t happened. Well, mostly not. Authors have been delighted with the speed of the process. Authors and reviewers have told us that they found the interactive review format generally positive. We have been impressed with the overall constructive approach that reviewers have taken, and the authors have appreciated that too, though some have reacted to the constraints of this process more warmly than others. Perhaps this is because the reviewers know that their names will eventually be published (if they endorse). Perhaps it’s because the “twenty questions” review format supports a more objective review. Perhaps it’s because our (generally young) band of reviewers is just a nice group of people. We can’t know for sure. Reviewers have also mentioned that they appreciate the speed of the process: because the paper is still fresh in their minds on each new cycle, they don’t have to go through the hassle of reminding themselves what it’s about.

As editors, we have also found the process surprisingly effective. The editorial back end works very well, both in terms of technology and human support. We’re bombarded with emails at each step of the process, but those messages give us clear instructions on what needs doing (the most common instruction is: “not very much”; I love that!). And the staff support at Frontiers is fast, efficient, and capable, so the editors really don’t need to worry much about the logistics. We can focus our efforts on the things that we’re qualified to do. That’s really valuable, as it makes it more feasible for busy people to take on editorial roles.

Not everything has been perfect, for editors, authors, or reviewers. Some of the annoyances:

  • If we don’t assign hand-picked reviewers within a few days of the submission, Frontiers starts auto-spamming reviewers from its database, with emails that look like they’re coming from us. So we have to act fast. This does have a benefit, as it forces us to stay on task, if we want to avoid having a robot annoy all of our friends.
  • We got frustrated at one point when we were challenged on whether our subject matter properly fell within the scope of psychology. That’s a sore point.
  • The editorial goalposts have shifted a few times, as Frontiers has updated its rules. This has been a bit annoying, but most of the changes seem to reflect efforts to address ethical concerns, e.g., avoiding conflicts of interest in reviewer selection.
  • Some reviewers did not take kindly to the frequent emails reminding them of due or overdue reports. Others didn’t much care. I wonder if this is because these nag-bots are still new to some in the linguistics world, while for others it’s just a part of the daily grind.
  • Most embarrassing: one of our authors was falsely accused of plagiarism … of himself. Frontiers apparently uses an automatic plagiarism detector to match text against web-crawled materials, and accusatory emails are sent automatically when a match is detected. A human check could have easily found that the match was to a draft of the author’s own work. Not so good.

BUT WHAT ABOUT THE COST?

This is one of the elephants in the room. Frontiers has APCs. For a research topic the charge is currently around $1400 per article. This is not cheap. It’s lower than the $3000 or more that many open access journals charge. It’s higher than the fees at reputable outlets like Ubiquity Press or the U of California’s Collabra, which are in the $450-$675 per article range. And it’s of course higher than at the open access journals that for various reasons are able to avoid author-facing APCs, because somebody else is paying. This is off-putting for some authors. That’s understandable, because it’s a sizable amount of money. Some find it objectionable on principle. I’m less sympathetic to the “on principle” objection, for a couple of reasons.

First, publishing our esoteric work isn’t cheap. Practically nobody is interested in what we write, so we don’t benefit from the economies of scale that you’d see with a Stephen King novel. Or that Stephen Hawking book that you bought without reading. Or that Steven Pinker book that you wish you had written yourself. The amounts that Frontiers charges authors are not far out of line with what it costs, on a per-article basis, to run a typical specialist journal. It is entirely routine for a linguistics journal to cost well above $1000 per article to produce. I have run the numbers for print journals produced by traditional publishers or by scholarly societies like the LSA, and for online journals funded by the editors’ home institutions. There’s always somebody paying for the work that goes into the editing-and-publishing process, even if it’s hidden from view, and it’s almost always more expensive than Frontiers. I don’t favor the idea of authors paying APCs from their own pockets, but I have no reservations about having institutional budgets shift away from journal subscription fees towards individual APCs (or consortia that achieve the same thing). It’s easy for us to be in denial about the true cost of our publishing activity. APCs get the issue into the open.

Second, I think that hand-wringing over subscription costs and APCs distracts from the fact that other parts of the publishing process are far more expensive and inefficient. Most of us are paid to do research, either by taxpayers or by students, and our time is expensive. (Yes, even underpaid grad students are expensive, once you factor in benefits, tuition, and infrastructure.) Our research takes a huge amount of time, and the traditional publication process is almost bizarrely inefficient (see my earlier post: A grading method from hell). If we honestly calculate the costs of our research process, then even a four-figure APC seems trifling compared to the other inefficiencies that we willingly tolerate.

More importantly, a process that saves a lot of time and stress for authors is one that makes the research process more efficient, and hence delivers better value for money to the institutions, students, and taxpayers who mostly fund our research.

I should mention that the financial side of things has been 100% invisible to us as editors of the research topic. Some of our authors may have requested APC waivers from Frontiers, but we have no idea about that. Frontiers maintains a firewall between the editorial process and the payment process. That’s surely a good thing. Also, it’s worth noting that the “editorial cost” has been surprisingly low. Thanks to the excellent technology and back-room staff, it has not taken a whole lot of time to be a research topic editor, and there has been no need to hire a local editorial assistant (a cost that editors of linguistics journals routinely ask publishers or deans to cover). Frontiers does not pay topic editors.

OK, BUT THIS IS DECEPTIVE, RIGHT?

You would be right to be suspicious about the picture that I’m painting here. Some of the facts are likely misleading.

  • We should be cautious about comparing citation rates across (sub-)fields. Our research topic had a lot of papers on English, German, and other “privileged” languages. That can distort the comparison with other journals. That’s certainly right. But my point is that even a relatively undistinguished set of papers can get more attention if they are published efficiently.

  • Short-term citation ≠ impact ≠ prestige. Very true. It’s important to not confuse simple metrics with qualities that we’re really interested in. Some fantastic research is hardly ever cited. But we shouldn’t expect the decision makers to grasp the subtleties of this.

  • Our sample is biased. We can tout the virtues of accepting most submitted articles, and willingness to pay APCs. But we may have a non-representative sample. Authors who have the resources to afford the APCs may be more likely to know how to write papers that please reviewers. There’s something to this objection. Our articles generally come from teams with substantial publishing experience, and this has surely helped to keep the rejection rate low. Our experience might not transfer fully to other subfields, or to APC-less models. Also, it happens that psycholinguists are generally fairly well trained in how to analyze and interpret their data, so meeting the soundness criterion may be easier for psycholinguists than for some other varieties of linguist.

Also, there are key functions of a traditional journal that a “very open access” outlet does not serve. Traditional journals use their selectivity as a way of grading the papers that they receive, and the scientific community relies heavily on this in its reward system. Frontiers deliberately doesn’t do that, and some people advise against publishing there, because it is often assumed to be a venue of last resort. The Old Guard tend to be especially skeptical. That’s particularly relevant to young researchers who are looking to establish a reputation. Or to get a job. Or tenure.

SO WHY DID IT WORK SO WELL?

The best that I can do is speculate on some things that may have contributed to the success of our experiment.

  • The abstract triage stage helped a lot. This made it easy for editors to confer quickly about a paper, and to turn down a paper before the authors had invested too much time in writing it. This avoided a lot of unnecessary effort on the part of authors, editors, and reviewers. Frontiers doesn’t require this step, but we came to insist upon it, as it helped so much.

  • The publication criteria made the task easier for reviewers, editors, and authors. Focusing on soundness, and not needing to worry about impact, made the task more straightforward for everybody. Authors feel less pressure to hide wrinkles in their data or to overstate their claims (a great article on this here, by psychologist Jon Maner). Reviewers and editors don’t need to worry about comparative assessments with other articles.

  • The focused review process helps to keep reviewers on track, and it also helps authors to see more clearly what reviewers did and did not agree with. This focus reduces conflict.

  • The (ultimately) non-blind review process may have helped. You don’t want to sound like a jerk in your review if your name will ultimately be published with the paper.

  • Of course, none of it would have worked without helpful reviewers, a lot of them. We were impressed by how willing they were to help out. (Side note: we had a 60% accept rate on review requests; I don’t know how this compares to other outlets. But there was big variation in accept rates across editors, which also colored our experience. Not sure if that reflects who’s asking, how they’re asking, or who’s being asked. This needs more investigation.)
  • Back-end support. I keep mentioning this, because I think it’s important. The Frontiers technology and staff make it much easier for editors to do their job effectively. I would be a terrible editor in the traditional model. But this system makes me less of a loser.

  • There’s a lot of overlap between the authors and the reviewers, so participants are motivated to see the process work well.

  • Something that is a little more quirky: we have been serving a very hungry audience. Our authors are familiar with the frustration of submitting to traditional journals in linguistics or psychology, where editors and/or reviewers are often skeptical of a big part of what we care about. Creating a welcoming space for this hybrid community is highly valued.

  • Our research topic has a narrow focus. This made it easier to triage unsuitable contributions.

How much of this depends on this being an all-electronic OA journal? I’m not sure. Some features could work in traditional journals. But some probably depend on publishing at scale, with no constraints on journal size. Subscription-based models create strong pressure to limit size, which in turn inevitably leads to many of the other features of traditional journals.

WHAT NEXT?

That’s a good question. We’re pleased with how the experiment has gone. It has shown that there’s a growing demand for publication venues at the intersection of linguistics and psychology. And that there’s a community of constructive reviewers who could contribute to a well-managed process. Our Frontiers experiment is winding down, and we’re interested in finding ways to build on this experience. But more on that soon …