Our little publishing experiment was not supposed to go like this. Largely by accident, we created a peer-reviewed outlet with impact that matches the top journals in linguistics, with time-to-publication that is almost implausibly fast relative to peers, that saves institutions money, and that saves authors lots of stress. And we did this by charging people lots of money and accepting almost everything that we received.
This is misleading, of course. But only slightly.
This is a preliminary report on the experience that Matt Wagers, Claudia Felser and I have had over the past 2 years of editing a Frontiers research topic on Encoding and Navigating Linguistic Representations in Memory. Frontiers is a family of open-access mega-journals that has grown alarmingly quickly in the past few years. A Frontiers research topic is similar to a thematic special issue in a traditional journal, except that it is not limited in size, and the articles do not appear at the same time – in our case they’re appearing over the course of a couple of years, whenever each paper is ready. Our research topic lies within Frontiers in Language Sciences, which itself is a section of the journal Frontiers in Psychology.
I’m not here to claim that everything that Frontiers does is awesome. In fact, some of our experiences were on the frustrating-to-embarrassing end of the scale. But working within their system has turned up some useful surprises.
Frontiers has attracted a lot of attention, some positive, some not so much. Open access means that all articles are freely available online for readers. In this case, authors have to pay an article processing charge (APC), though they can request a waiver in some circumstances. Frontiers has an unusual peer review model. Articles are judged only for soundness, not for impact, i.e., if your study is sound but has minimal novelty or importance, it can still be accepted. Frontiers instead focuses on gathering post-publication impact measures. Reviewers are expected to fill out a focused, “twenty questions”-style review form, and they engage in an online back-and-forth with the authors until the reviewer is satisfied with each point in the review. The paper is published if two reviewers endorse publication, and those reviewers’ names are published together with the paper. (Reviewers who do not endorse publication remain anonymous.) There has been lots of grumbling elsewhere about the Frontiers review process, with some reviewers complaining that they feel pressured to endorse publication, and others complaining that this leads to publication of junk, or labeling the organization as a “predatory publisher”. We feared this too.
For me, Matt, and Claudia the experience has been mostly rather positive. We were originally looking for an outlet for some papers from a couple of small workshops that we organized in Potsdam in 2012 (Claudia’s was an official GLOW satellite, ours was more of a glorified lab meeting, generously hosted by Shravan Vasishth and his students). We approached a couple of traditional journals, but those attempts didn’t get very far, and then we came across Frontiers as an alternative. We were initially apprehensive, and it took us a little while to get the hang of things, but the experience has made us rethink some features of traditional publishing that we had taken for granted.
Another reason for taking on this project was that we wanted to test the market for publishing at the intersection of linguistics and psychology. Lots of talented young researchers are entering this area nowadays, but their main publication options are either in traditional psychology-based journals, which are often averse to linguistic detail (and have almost no linguists on their editorial boards), or in traditional linguistics journals, which have a complementary set of biases. The Frontiers experience has confirmed our suspicion that there’s a strong market for new outlets in this area.
This post is about an Open Access (OA) project, but it’s not about the aspects of OA that have drawn huge recent attention in linguistics (here, here, and here – cool pic on this last one). Important questions about who pays and who gets to read shouldn’t drown out the other things that make the traditional publishing process highly inefficient.
FIVE OUTCOMES FROM THE EXPERIMENT
Our experiment isn’t yet finished, but it’s getting close. All of the submissions are in. Most papers are either published or soon to be published. We have a good sense of some of the results.
1. Participation. By the end of the experiment, we’ll have published almost 50 papers in a little over 2 years, with around 40 in 2015 and early 2016. As of the end of 2015, 35 papers have appeared, 8 are in the late stage of review, 4 are in the early stage, and there may be one or two more to wrap things up. The research topic will have served around 150 authors, with the help of almost 100 different reviewers. (A couple of people did 4-5 reviews – we salute you!) This is more than most free-standing journals in linguistics. It’s on track to become the most prolific of all 385 research topics currently under the Frontiers in Psychology banner (yes, that’s a lot of topics; I had no idea).
So the first conclusion is that there’s lots of interest.
2. Speed. Authors have been amazed by the publication speeds. And so have we. All papers receive the equivalent of two or more rounds of review and response from the authors, which is normal in our field. For the 35 papers published by the end of 2015, the time from submission to online publication has a mean of 108 days, a median of 108 days, and a range of 56-176 days. This is almost implausibly fast. The norm in linguistics is 1-3 years.
As a point of comparison, I calculated averages for Semantics and Pragmatics, the most successful open access journal in linguistics that runs under a more traditional model. For articles published in 2014 (9 articles), the average time from submission to publication was 16.8 months (range 7-33 months). To be clear, I am not singling out S&P because it does poorly. S&P is a model of best practices in linguistics journal publishing (kudos to Kai and David), and this includes unusual transparency about submission dates etc. That’s why the data is available. I would expect most other journals in the field to be slower.
So, our average time-to-publication is almost 5 times faster than the average for one of the most forward-looking journals in the field (16.8 months is roughly 510 days, versus our mean of 108 days). And that’s only considering the papers that are ultimately accepted in S&P. Most submissions to S&P are presumably either published elsewhere (= even more slowly) or never published at all.
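For anyone who wants to check that ratio, here is a quick back-of-the-envelope sketch in Python, using only the figures reported above and assuming an average month of about 30.4 days:

```python
# Rough check of the speed comparison above.
# Assumption: an average month is 365.25 / 12, i.e. about 30.4 days.

frontiers_mean_days = 108    # mean time-to-publication for our first 35 papers
sp_mean_months = 16.8        # Semantics & Pragmatics, articles published in 2014

sp_mean_days = sp_mean_months * 365.25 / 12
print(f"S&P mean: {sp_mean_days:.0f} days")                   # ~511 days
print(f"Ratio: {sp_mean_days / frontiers_mean_days:.1f}x")    # ~4.7x, i.e. almost 5x faster
```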
3. Impact. Based on a standard(-ish) measure of impact, our research topic is comparable in impact to the top journals in linguistics, despite being brand new and having lax publication standards. How so?
Apologies for the bibliometric geekery, but it’s time to talk a little about impact factor (IF). That’s a standard measure that’s used to evaluate the importance of journals. Basic idea: if you publish a lot of influential papers, then you’re probably an important journal. And “influential” is generally equated with “highly cited”. The citation measure that’s most commonly applied to journals is the 2-year IF, as calculated by the folks at Thomson Reuters. In detail, the 2015 IF for a journal corresponds to the number of citations in 2015 to articles published in that journal in the previous 2 years (i.e., 2013, 2014), divided by the total number of articles published in those two years. Fancy journals like Science or Nature have IFs of 30-40. Top psychology or neuroscience journals are in the 5-15 range. And IFs for linguistics journals are pretty much in the toilet, with the good ones in the 0.5 – 2.0 range, and others not even rated. Needless to say, this does not help with the perception that linguistics is an important field.
(Why is IF based on 2 years only? It’s arbitrary, of course. But it works for the biologists. So live with it. Yes, there are other measures, and they probably make a lot more sense for linguistics, where citations extend over a longer period. But most people start falling asleep or looking at their phones if you start complaining about that. This isn’t good.)
I can’t calculate a reliable 2-year IF for our research topic. It’s too new and short-lived. But I can make a quick-and-dirty substitute. I used Google Scholar to estimate a 1-year IF: for articles published in 2014, what is the average number of citations in Google Scholar to those articles in 2015 (through late October)? And how does that compare to a couple of other leading journals in linguistics?
Our Frontiers topic: 8 papers published in 2014, 32 citations to date, 22 in 2015. 1-year IF is 2.75.
Language: this is the flagship journal of the Linguistic Society of America. The most widely circulated print journal in the field, worldwide. 21 papers published in 2014, 136 citations to date, 56 in 2015. 1-year IF is 2.67.
Semantics & Pragmatics: also published by the LSA, this is the most successful open access journal in linguistics, run by influential and forward-thinking editors, and quickly attaining high prestige. 9 papers published in 2014, 89 citations to date, 28 in 2015. 1-year IF is 3.1.
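For concreteness, here is a minimal sketch of the arithmetic behind these quick-and-dirty figures. The counts are the hand-collected Google Scholar numbers reported above; nothing is fetched automatically, and the venue labels are just for display:

```python
# Quick-and-dirty "1-year IF": citations received in 2015 to a venue's
# 2014 articles, divided by the number of articles it published in 2014.
# Counts were collected by hand from Google Scholar (through late October 2015).

def one_year_if(citations_2015, articles_2014):
    return citations_2015 / articles_2014

venues = {
    # venue: (2015 citations to 2014 articles, articles published in 2014)
    "Our Frontiers research topic": (22, 8),
    "Language": (56, 21),
    "Semantics & Pragmatics": (28, 9),
}

for name, (cites, n_articles) in venues.items():
    print(f"{name}: 1-year IF ~ {one_year_if(cites, n_articles):.2f}")
# Our Frontiers research topic: 1-year IF ~ 2.75
# Language: 1-year IF ~ 2.67
# Semantics & Pragmatics: 1-year IF ~ 3.11
```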
This is a limited sample, of course, and the data are noisy. I counted a citation as being from 2015 if Google Scholar lists it as 2015. That could be unreliable. Google Scholar picks up far more citations than Thomson Reuters, which limits its attention to the journals that it indexes. So Google Scholar is probably a more useful measure for the diverse publication landscape in linguistics, and certainly not biased towards psycholinguistics. I only chose a couple of linguistics journals for comparison, but I’m confident that other journals would almost all fare worse. Language and S&P are both very good journals. One clear difference that you’ll note is that there are more total citations-to-date for the traditional journals, whereas the 2015 citations make up a higher proportion of citations-to-date for the articles in our research topic. That’s because our articles are genuinely new, whereas the articles in the traditional journals have often been around for a while, and routinely have citations that pre-date their publication. This is normal in linguistics, because publication cycles are so slow.
Incidentally, Frontiers makes it easy to see other quantitative measures of the impact of papers. For example, the 35 papers published to date have been viewed a total of 38,000 times, and have been downloaded 6,000 times. (Individual example here.) I have no idea how that compares to other outlets.
The moral of this: let’s be clear that we have not really created a prestigious outlet. We received papers whose authors didn’t want to go through the traditional journal process, either because (i) they didn’t expect to succeed, or (ii) they didn’t want to wait a couple of years for the outcome, or (iii) they had already tried and failed, perhaps multiple times. Our group contributed papers to the research topic for all of these reasons (side note: topic editors’ papers are handled by independent ad hoc editors). What this exercise shows is that simply by publishing more efficiently it’s possible for a motley set of papers to match the top journals in the field, based on conventional measures of impact.
So if a linguistics journal could publish the very best work with similar efficiency, then that journal’s IF and its prestige could go through the roof. Needless to say, that could be a rather good thing for the field.
4. Selectivity. Here’s the crazy part. We have accepted almost every paper that we received. That seems ridiculous. By comparison, Language accepts around 20% of the manuscripts that it receives. S&P takes pride in the fact that it accepts only around 13% of manuscripts. In the traditional publishing world, this contributes to their prestige, and editors are rewarded for running a journal that rejects most of what it receives. The traditional model rewards an incredibly inefficient process. (To be clear: I don’t think that everything should get the imprimatur of publication. But making high rejection rates desirable comes at a great cost.)
With such a high acceptance rate, it’s tempting to think that we have no standards at all. Or that our reviewers, who agree to be publicly named when they endorse publication, are complete pushovers. But that’s misleading. All authors had to first submit a 1-page abstract for review by the editors, and only if the abstract was judged suitable was a full paper submission invited. We rejected a number of papers at the abstract submission stage, though still only a relatively small percentage. And in most cases we gave feedback to the authors based on the abstract, suggesting things that they should address in the full paper. This triage process helped to ensure that the papers that we did receive were of a higher quality.
Nevertheless, of the full papers submitted so far, almost all are on track to be published. There are only a couple of exceptions (one withdrawn following reviews; one rejected based on reviewer consensus; one currently in arbitration due to reviewer disagreement). There have been cases where we worried that the paper might not make it through the review process, but we’ve been encouraged by the constructive interaction between reviewers and authors, which has led to genuine improvements. A benefit of the fact that papers are judged on soundness and not on impact is that authors do not need to make unwarranted claims in order to justify the impact of their paper. This allows them to be more honest. Reviewers appreciate that.
5. Satisfaction. We expected that authors and reviewers would find the quirks of the Frontiers process to be frustrating. Or that it would get on our nerves. But that hasn’t happened. Well, mostly not. Authors have been delighted with the speed of the process. Authors and reviewers have told us that they found the interactive review format to be generally positive. We have been impressed with the overall constructive approach that reviewers have taken, and the authors have appreciated that too, though some have reacted to the constraints of this process more warmly than others. Perhaps this is because the reviewers know that their names will eventually be published (if they endorse). Perhaps it’s because the “twenty questions” review format supports a more objective review. Perhaps it’s because our (generally young) band of reviewers is just a nice group of people. We can’t know for sure. Reviewers have also mentioned that they appreciated the speed of the process: because the paper is still fresh in their mind on each new cycle, they don’t need to go through the hassle of reminding themselves what it is about.
As editors, we have also found the process surprisingly effective. The editorial back end works very well, both in terms of technology and human support. We’re bombarded with emails at each step of the process, but those messages give us clear instructions on what needs doing at each step (the most common instruction is: “not very much”; I love that!). And the staff support at Frontiers is fast, efficient, and capable, so that the editors really don’t need to worry much about the logistics. We can focus our efforts on the things that we’re qualified to do. That’s really valuable, as it makes it more feasible for busy people to take on editorial roles.
Not everything has been perfect, for editors, authors, or reviewers. Some of the annoyances:
- If we don’t assign hand-picked reviewers within a few days of the submission, Frontiers starts auto-spamming reviewers from its database, with emails that look like they’re coming from us. So we have to act fast. This does have a benefit, as it forces us to stay on task, if we want to avoid having a robot annoy all of our friends.
- We got frustrated at one point when we were challenged on whether our subject matter properly fell within the scope of psychology. That’s a sore point.
- The editorial goalposts have shifted a few times, as Frontiers has updated its rules. This has been a bit annoying, but most of the changes seem to reflect efforts to address ethical concerns, e.g., avoiding conflicts of interest in reviewer selection.
- Some reviewers did not take kindly to the frequent emails reminding them of due or overdue reports. Others didn’t much care. I wonder if this is because these nag-bots are still new to some in the linguistics world, while for others it’s just a part of the daily grind.
- Most embarrassing: one of our authors was falsely accused of plagiarism … of himself. Frontiers apparently uses an automatic plagiarism detector to match text against web-crawled materials, and accusatory emails are sent automatically when a match is detected. A human check could have easily found that the match was to a draft of the author’s own work. Not so good.
BUT WHAT ABOUT THE COST?
This is one of the elephants in the room. Frontiers has APCs. For a research topic the charge is currently around $1400 per article. This is not cheap. It’s lower than many open access journals that charge $3000 or more. It’s higher than the fees at reputable outlets like Ubiquity Press or the U of California’s Collabra, which are in the $450-$675 per article range. And it’s of course higher than the open access journals that for various reasons are able to avoid author-facing APCs, because somebody else is paying. This is off-putting for some authors. That’s understandable, because it’s a sizable amount of money. Some find it objectionable on principle. I’m less sympathetic to the “on principle” objection, for a couple of reasons.
First, publishing our esoteric work isn’t cheap. Practically nobody is interested in what we write, so we don’t benefit from the economies of scale that you’d see with a Stephen King novel. Or that Stephen Hawking book that you bought without reading. Or that Steven Pinker book that you wish you had written yourself. The amounts that Frontiers charges to authors are not far out of line with how much it costs, on a per-article basis, to run a typical specialist journal. It is entirely routine for a linguistics journal to cost well above $1000 per article to produce. I have run the numbers for print journals produced by traditional publishers or by scholarly societies like the LSA, and for online journals funded by the editors’ home institutions. There’s always somebody who’s paying for the work that goes into the editing-and-publishing process, even if it’s hidden from view, and it’s almost always more expensive than Frontiers. I don’t favor the idea of authors paying APCs from their own pocket, but I have no reservations about having institutional budgets shift away from journal subscription fees towards individual APCs (or consortia that achieve the same thing). It’s easy for us to be in denial about the true cost of our publishing activity. APCs get the issue into the open.
Second, I think that hand-wringing over subscription costs and APCs distracts from the fact that other parts of the publishing process are far more expensive and inefficient. Most of us are paid to do research, either by taxpayers or by students, and our time is expensive. (Yes, even underpaid grad students are expensive, once you factor in benefits, tuition, and infrastructure.) Our research takes a huge amount of time, and the traditional publication process is almost bizarrely inefficient (see my earlier post: A grading method from hell). If we honestly calculate the costs of our research process, then a few hundred dollars in APCs seems trifling compared to other things that we willingly do that are rather inefficient.
More importantly, a process that saves a lot of time and stress for authors is one that makes the research process more efficient, and hence delivers better value for money to the institutions, students, and taxpayers who mostly fund our research.
I should mention that the financial side of things has been 100% invisible to us as editors of the research topic. Some of our authors may have requested APC waivers from Frontiers, but we have no idea about that. Frontiers maintains a firewall between the editorial process and the payment process. That’s surely a good thing. Also, it’s worth noting that the “editorial cost” has been surprisingly low. Thanks to the excellent technology and back-room staff, it has not taken a whole lot of time to be a research topic editor, and there has been no need to hire a local editorial assistant (a cost that editors of linguistics journals routinely ask publishers or deans to cover). Frontiers does not pay topic editors.
OK, BUT THIS IS DECEPTIVE, RIGHT?
You would be right to be suspicious about the picture that I’m painting here. Some of the facts are likely misleading.
- We should be cautious about comparing citation rates across (sub-)fields. Our research topic had a lot of papers on English, German, and other “privileged” languages. That can distort the comparison with other journals. That’s certainly right. But my point is that even a relatively undistinguished set of papers can get more attention if they are published efficiently.
- Short-term citation ≠ impact ≠ prestige. Very true. It’s important to not confuse simple metrics with qualities that we’re really interested in. Some fantastic research is hardly ever cited. But we shouldn’t expect the decision makers to grasp the subtleties of this.
- Our sample is biased. We can tout the virtues of accepting most submitted articles, and of authors’ willingness to pay APCs. But we may have a non-representative sample. Authors who have the resources to afford the APCs may be more likely to know how to write papers that please reviewers. There’s something to this objection. Our articles generally come from teams with substantial publishing experience, and this has surely helped to keep the rejection rate low. Our experience might not transfer fully to other subfields, or to APC-less models. Also, it happens that psycholinguists are generally fairly well trained in how to analyze and interpret their data, so meeting the soundness criterion may be easier for psycholinguists than for some other varieties of linguist.
Also, there are key functions of a traditional journal that a “very open access” outlet does not serve. Traditional journals use their selectivity as a way of grading the papers that they receive, and the scientific community relies heavily on this in its reward system. Frontiers deliberately doesn’t do that, and some people advise against publishing there, because it is often assumed to be a venue of last resort. The Old Guard tend to be especially skeptical. That’s particularly relevant to young researchers who are looking to establish a reputation. Or to get a job. Or tenure.
SO WHY DID IT WORK SO WELL?
The best that I can do is speculate on some things that may have contributed to the success of our experiment.
- The abstract triage stage helped a lot. This made it easy for editors to confer quickly about a paper, and to turn down a paper before the authors had invested too much time in writing it. This avoided a lot of unnecessary effort on the part of authors, editors, and reviewers. Frontiers doesn’t require this step, but we came to insist upon it, as it helped so much.
- The publication criteria made the task easier for reviewers, editors, and authors. Focusing on soundness, and not needing to worry about impact, made the task more straightforward for everybody. Authors feel less pressure to hide wrinkles in their data or to overstate their claims (a great article on this here, by psychologist Jon Maner). Reviewers and editors don’t need to worry about comparative assessments with other articles.
- The focused review process helps to keep reviewers on track, and it also helps authors to see more clearly what reviewers did and did not agree with. This focus reduces conflict.
- The (ultimately) non-blind review process may have helped. You don’t want to sound like a jerk in your review if your name will ultimately be published with the paper.
- Of course, none of it would have worked without helpful reviewers, a lot of them. We were impressed by how willing they were to help out. (Side note: we had a 60% accept rate on review requests; I don’t know how this compares to other outlets. But there was big variation in accept rates across editors, which also colored our experience. Not sure if that reflects who’s asking, how they’re asking, or who’s being asked. This needs more investigation.)
- Back-end support. I keep mentioning this, because I think it’s important. The Frontiers technology and staff make it much easier for editors to do their job effectively. I would be a terrible editor in the traditional model. But this system makes me less of a loser.
- There’s a lot of overlap between the authors and the reviewers, so participants are motivated to see the process work well.
- Something that is a little more quirky: we have been serving a very hungry audience. Our authors are familiar with the frustration of submitting to traditional journals in linguistics or psychology, where editors and/or reviewers are often skeptical of a big part of what we care about. Creating a welcoming space for this hybrid community is highly valued.
- Our research topic has a narrow focus. This made it easier to triage unsuitable contributions.
How much of this depends on this being an all-electronic OA journal? I’m not sure. Some features could work in traditional journals. But some probably depend on publishing at scale, with no constraints on journal size. Subscription-based models create strong pressure to limit size, which in turn inevitably leads to many of the other features of traditional journals.
WHAT NEXT?
That’s a good question. We’re pleased with how the experiment has gone. It has shown that there’s a growing demand for publication venues at the intersection of linguistics and psychology. And that there’s a community of constructive reviewers who could contribute to a well-managed process. Our Frontiers experiment is winding down. And we’re interested in finding ways to build on this experience. But more on that soon …
This Frontiers issue has been an interesting experience indeed. We submitted several papers, and I must say that the quality of the reviews was often as good as in regular journals with high IFs. A lot depends on the editors though; it could have turned out badly if the reviewers had not been the right people.
I should mention that the German government has facilitated open access a great deal, by creating a mechanism whereby our university simply covers the cost of any paper we want to publish in an OA journal. So at least for Potsdam, OA is no longer a problem from the financial perspective (at least, not yet).
For our field, I think that the time is right for an open access journal where we can submit our work and get it published in a reasonable time frame. We need to start such a journal, and it should be led by well-known people in the field. The key problem is building a reputation for a journal that publishes methodologically solid work. I’m also quite tired of the time-consuming process of standard journal reviews, and welcome the speedier process.
One other thing I am unhappy about (apart from slow, closed-access journals) is that journal articles are still not accompanied by the data and code used to generate the results. It’s appalling that even in 2016 we can’t make this a routine step in the publication process. But that’s a rant for another day.
Thanks for sharing, Colin.
You make the Frontiers model sound interesting, and the following bit stood out to me: “if a linguistics journal could publish the very best work with similar efficiency, then that journal’s IF and its prestige could go through the roof. Needless to say, that could be a rather good thing for the field.”
But it’s not clear to me what the concrete take-away is for non-experimental theoretical linguists. If the appropriateness of your research topic was challenged for the Language Sciences section, I would assume that any paper I write or any topic I might want to propose would be laughed out of the room. The lesson then seems to be that there are benefits to exploring more creative (perhaps Frontiers-like) review models and more efficient publication, but not necessarily that Frontiers itself is a useful venue for theoretical linguistic work. Does that sound right? I’d be happy to be wrong about that.
Thanks Mitcho. Yes, that’s exactly right. I’m not arguing that Frontiers is the solution. But I think they do some interesting things that could be ported over to a different platform. Though I don’t think that replicating what they do is trivial. The technology and staff support that they provide really do help, and that’s what you pay for as an author. From what I’ve found so far, alternative platforms that charge much less also offer reduced services.
Note that there are non-experimental papers in Frontiers, such as this recent piece by David Adger and Peter Svenonius (I was one of the reviewers). I’m not sure why we were hassled at one point over our topic. I got uppity, and we didn’t get further complaints.
I also want to add to Colin’s point about the quality of Frontiers’ tech support: it is (surprisingly) better than PLoS ONE’s. For example, for more than a year I have been trying to remove “Nurses” as a keyword for this paper:
http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0100986
No success so far. I notice that they have added even more ridiculous keywords in the meantime (Schools? Built structures? Payment?). Also, during the production process, the technical guy couldn’t figure out how to display the linguistic examples, despite our giving him all the LaTeX style files. Frontiers was awesome in comparison.
This was very interesting for me. I’ve been repeatedly asked to start something for experimental pragmatics. The APC seems a little high, and I would like the option of a venue that is not limited in time. What’s with the “more on that soon”? Some of the relevant people are meeting in Berlin two weeks from now.
Could you perhaps say a bit more about the “articles are judged for soundness, not impact” part? There is a continuum between “trivial” and “impactful”, and Frontiers has decided to set a lower threshold on this continuum than traditional journals do. I just don’t understand how much lower.
As a perhaps extreme case, I can imagine being asked to review (by, let’s say, NLLT, but the journal name doesn’t really matter so long as it is an imprint with a relatively high threshold on the trivial-impactful continuum) a paper that shows that some understudied language from a faraway country respects island constraints and obeys binding conditions A-B-C, and that’s it. This would be new and sound research, but my evaluation would be that this alone would not be enough to justify publication. It’s nice to know that this language behaves as expected, but this result, in and of itself, doesn’t really influence our knowledge of typology and/or syntactic theory. You would have to show that your results correct previous erroneous claims about the language, or use them as a stepping stone to produce more significant results. Now, would Frontiers accept this kind of paper, just because the results are correct?
Good question. The guidelines are designed for fields whose “minimal publishable unit” (MPU) is different from ours, and it is not straightforward to apply them to a field like ours where the data is typically cheap. In principle, your case would meet the “soundness” criterion, even though it’s not terribly newsworthy. (Though remember that there are cases of polysynthetic languages where this could be real news.) In the Frontiers model, it’s not for the journal to declare whether the work is important; that’s for posterity to figure out. Of course, that model forces us to drop the traditional notion that “publication” = “mark of distinction”. And we probably do need ways to recognize distinguished/important work, independent of “altmetrics”, which distort the picture. This is especially important for younger scholars.
In practice, Matt, Claudia, and I were able to apply a ‘light’ version of impact-based review, in a specific sense. In order for an abstract to be invited for a full paper, it had to say something relevant to our research topic. And in the review process it was not sufficient for authors to accurately report their experimental results. The soundness criterion extended to the link between the findings and the conclusions. Reviewers played a key role in that, of course. We didn’t face any cases where somebody tried to submit something theoretically trivial.
So two questions that we’d like to figure out are: (i) is it possible to retain the benefits of the Frontiers model while adding some mild degree of impact/relevance-based review (I suspect so; we already did that), and (ii) is it possible to implement an effective “grading” mechanism that avoids the absurd features of the traditional model, without simply resorting to citations/downloads/likes etc.? The second part will be more difficult, though ultimately worthwhile.
Thanks for the blog post about your experience, Colin. It was interesting and insightful to read. I’m curious if you have thoughts about the second point you raised in your comment. What do you think an effective grading system might look like? I think having a more reasonable grading system would be something that would also help with the slow transition to more open access journals. As you point out in your other post, the current grading system depends a lot on the name/prestige of the journal. And these are things that will be lost—or might at least be perceived to be lost—if established journals are abandoned and open access journals started in their stead (at least for an initial period of time, until the new open access journals have time to establish themselves).
If we had a grading system that was a better index of the quality of papers, then this would hopefully be a moot point. So I’m curious if you have thoughts about what figuring out (ii) might look like.
Very good question, Adam. I don’t have a worked-out solution. Two preliminary thoughts. There could be a simple two-tier system where a journal decides (i) whether to accept an article, and (ii) which articles to designate as a “notable article”, based on some assessment of originality or potential impact. One could potentially have more detailed tiering. Frontiers has tried to implement a tiering system, but I don’t think that their metrics-based approach is the best solution. In this approach, the prestige would be associated not with the mere fact of publishing in journal X, but with being given a mark of distinction by journal X. With luck, this could fix the problem that I complained about in A grading method from hell.
(I’m not sure if this will appear in the correct place. There doesn’t seem to be a reply button for your most recent comment, only the original comment.) Anyway, those are some interesting thoughts. I also wonder if there’s a place for anything like ‘voting’ in the sense of the Stack Exchange websites or Reddit. It would be tricky to implement, since you wouldn’t want just anyone with an internet connection to be able to ‘upvote’ a published paper, but perhaps there would be some way to implement such a system where peers in the field are able to publicly evaluate a paper after publication (perhaps it could be more nuanced than just ‘upvoting’ or ‘downvoting’). Perhaps one way of implementing this is by having a “journal” that is a curated list of publications, like Kai suggests in his recent “Beyond Open Access” post. I’m not sure. Interesting to think about, though. Thanks for the blog post and the discussion!
My first Frontiers experience came when an article I’d submitted to Behavioral and Brain Sciences was rejected without review because they already had a paper in the pipeline with a similar title. I’d aimed for BBS because they publish articles with commentaries, and I’d really wanted to provoke discussion across psycholinguistics & linguistics. I flailed around for a bit trying to figure out where to send my paper and contacted the Frontiers editors to ask whether they’d consider publishing a target article with commentaries in the Frontiers in Language Sciences division of the journal. They agreed to try this, the review process was thorough, as Colin and Shravan describe above, and the editor did invite commentaries from linguists and psycholinguists. At least by Frontiers’ metric (23.6K views to date of the 14 papers published in 2013), this research topic also did well (it’s here: http://journal.frontiersin.org/researchtopic/1407/does-language-production-shape-language-form-and-comprehension). And (now getting to my main point) Frontiers does not charge for 1000-word commentaries, so it’s possible to encourage published debate of ideas without forcing your critics/supporters to pay. Frontiers isn’t perfect, but Colin’s experiences seem repeatable, and the research topic method does offer expanded opportunities for publishing (psycho)linguistic research.
That’s a good point about the free commentaries, Maryellen. We never made an effort to encourage those, but perhaps should have done so.
I think that one of the positive features of Frontiers is the disclosure of the reviewers’ names. Within the scientific community, prestige is a treasure, a value that any scientist protects. People and colleagues trust us not only because of the rationality (i.e., soundness) of our work, but also because of our “auctoritas”. I cannot imagine an authority in a field taking the risk of endorsing a low-quality manuscript, and the same applies to the associate (or guest) editor. The names of the reviewers/editors qualitatively confirm the scientific value of the manuscript. This is why “endorsement” differs from a traditional “recommendation”. It means: “I, an authority, not only recommend this piece of science but also support its content.” Junior scholars would find it hard to build a reputation on endorsements of poor-quality manuscripts. Endorsing a low-quality manuscript is a risk I would not take. On the authors’ side, being endorsed by an authority fairly, honestly, and transparently adds value (i.e., prestige) to the manuscript and to their career.
Yes, that’s a good point. As an editor, I have been surprised at how little push-back it has caused among reviewers. That is encouraging!