Academic publishing is in a dynamic state these days with large numbers of new journals popping up on a regular basis. Some of these new journals are actively experimenting with changing traditional approaches to publication and peer review in potentially important ways. So, I thought I’d provide a quick introduction to some of the new kids on the block that I think have the potential to change our approach to academic publishing.
PeerJ

PeerJ is in some ways a fairly standard PLOS One style open access journal. Like PLOS One they only publish primary research (no reviews or opinion pieces) and that research is evaluated only on the quality of the science, not on its potential impact. However, what makes PeerJ different (and the reason that I’m volunteering my time as an associate editor for them) is their philosophy that in the era of the modern web it should be both cheap and easy to publish scientific papers:
We aim to drive the costs of publishing down, while improving the overall publishing experience, and providing authors with a publication venue suitable for the 21st Century.
The pricing model is really interesting. Instead of a flat fee per paper, PeerJ uses lifetime author memberships. For $99 (total for life) you can publish 1 paper/year. For $199 you can publish 2 papers/year, and for $299 you can publish unlimited papers for life. Every author has to have a membership, so for a group of 5 authors publishing in PeerJ for the first time it would cost $495, but that’s still about 1/3 of what you’d pay at PLOS One and 1/6 of what you’d pay to make a paper open access at a Wiley journal. And that same group of authors can publish again next year for free. How they can publish for so much less than anyone else (and whether it is sustainable) is a bit of an open question, but they have clearly spent a lot of time (and serious publishing experience) thinking about how to automate and scale publication in an affordable manner, both technically and in terms of things like typesetting (single-column text with no attempt to wrap text around tables and figures is presumably much easier to typeset). If you “follow the money” as Brian McGill suggests then the path may well lead you to PeerJ.
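The arithmetic behind that comparison can be sketched out in a few lines (a back-of-the-envelope sketch; the PLOS One and Wiley fee figures below are assumptions inferred from the "about 1/3" and "1/6" fractions above, not official fee schedules, and the function name is my own):

```python
# Back-of-the-envelope comparison of first-paper costs for a group of authors.
# The PLOS One and Wiley figures are assumptions inferred from the fractions
# quoted in the post, not official prices.
PEERJ_BASIC_MEMBERSHIP = 99   # lifetime membership, 1 paper/year
PLOS_ONE_FEE = 1350           # assumed flat per-paper fee
WILEY_OA_FEE = 3000           # assumed open-access fee at a Wiley journal

def peerj_first_paper_cost(n_authors, membership=PEERJ_BASIC_MEMBERSHIP):
    """Every author needs a membership, so the first paper costs n * membership."""
    return n_authors * membership

cost = peerj_first_paper_cost(5)
print(cost)                    # 495: five basic memberships
print(cost / PLOS_ONE_FEE)     # roughly 1/3 of a PLOS One fee
print(cost / WILEY_OA_FEE)     # roughly 1/6 of a Wiley open access fee
```

And because the memberships are lifetime, the marginal cost of that same group's paper next year is $0, which is the whole point of the model.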
Other cool things about PeerJ:
- Optional open review (authors decide whether reviews are posted with accepted manuscripts, reviewers decide whether to sign reviews)
- Ability to comment on manuscripts with points being given for good comments.
- A focus on making life easy for authors, reviewers, and editors, including a website that is an absolute joy to interact with compared to most journal systems, and a lack of rigid formatting guidelines that have to be satisfied for a paper to be reviewed.
We want authors spending their time doing science, not formatting. We include reference formatting as a guide to make it easier for editors, reviewers, and PrePrint readers, but will not strictly enforce the specific formatting rules as long as the full citation is clear. Styles will be normalized by us if your manuscript is accepted.
Now there’s a definable piece of added value.
Faculty of 1000 Research
Faculty of 1000 Research’s novelty comes from a focus on post-publication peer review. Like PLOS One & PeerJ it reviews based on quality rather than potential impact, and it has a standard per-paper pricing model. However, when you submit a paper to F1000 it is immediately posted publicly online, as a preprint of sorts. They then contact reviewers to review the manuscript. Reviews are posted publicly with the reviewers’ names. Each review includes a status designation of “Approved” (similar to Accept or Minor Revisions), “Approved with Reservations” (similar to Major Revisions), or “Not Approved” (similar to Reject). Authors can upload new versions of the paper to address reviewers’ comments (along with a summary/explanation of the changes made), and reviewers can provide new reviews and new ratings. If an article receives two “Approved” ratings, or one “Approved” and two “Approved with Reservations” ratings, then it is considered accepted. It is then identified on the site as having passed peer review, and is indexed in standard journal databases. The peer review process is also open to anyone, so if you want to write a review of a paper you can, no invite required.
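In code form, my reading of that acceptance rule looks something like this (a sketch of the rule as described above, not F1000's actual implementation; the function name is my own):

```python
def f1000_passed_review(review_statuses):
    """Decide whether an article counts as having passed F1000-style peer review.

    `review_statuses` is a list of the current designations, e.g.
    ["Approved", "Approved with Reservations"]. An article passes with two
    "Approved" ratings, or one "Approved" plus two "Approved with Reservations".
    """
    approved = review_statuses.count("Approved")
    reservations = review_statuses.count("Approved with Reservations")
    return approved >= 2 or (approved >= 1 and reservations >= 2)

print(f1000_passed_review(["Approved", "Approved"]))                    # True
print(f1000_passed_review(["Approved", "Approved with Reservations"]))  # False: still awaiting reviews
```

Because authors can upload revised versions and reviewers can update their ratings, the status is effectively recomputed over time rather than fixed at a single decision point.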
It’s important to note that the individuals who are invited to review the paper are recommended by the authors. They are checked to make sure that they don’t have conflicts of interest and are reasonably qualified before being invited, but there isn’t a significant editorial hand in selecting reviewers. This could be seen as resulting in biased reviews, since one is likely to select reviewers who may be biased towards liking your work. However, this is tempered by the fact that the reviewer’s name and review are publicly attached to the paper, and therefore they are putting their scientific reputation on the line when they support a paper (as argued more extensively by Aarssen & Lortie 2011).
In effect, F1000 is modeling a system of exclusively post-publication peer review, with a slight twist of not considering something “published/accepted” until a minimum number of positive reviews are received. This is a bold move since many scientists are not comfortable with this model of peer review, but it has the potential to vastly speed up the rate of scientific communication in the same way that preprints do. So, I for one think this is an experiment worth conducting, which is why I recently reviewed a paper there.
Oh, and ecologists can currently publish there for free (until the end of the year).
Frontiers in X
I have the least personal experience with the Frontiers journals (including the soon-to-launch Frontiers in Ecology & Evolution). Like F1000Research, the groundbreaking nature of Frontiers is in peer review, but instead of moving towards a focus on post-publication peer review they are attempting to change how pre-publication review works. They are trying to make review a more collaborative effort between reviewers and authors to improve the quality of the paper.
As with PeerJ and F1000Research, Frontiers is open access and has a review process that focuses on “the accuracy and validity of articles, not on evaluating their significance”. What makes Frontiers different is their two-step review process. The first step appears to be a fairly standard pre-publication peer review, where “review editors” provide independent assessments of the paper. The second step (the “Interactive Review phase”) is where the collaboration comes in. Using an “Interactive Review Forum” the authors and all of the reviewers (and if desirable the associate editor and even the editor in chief for the subdiscipline) work collaboratively to improve the paper to the point that the reviewers support its publication. If disagreements arise, the associate editor is tasked with acting as a mediator in the conversation. If a paper is eventually accepted then the reviewers’ names are included with the paper and taken as indicating that they sign off on the quality of the paper (see Aarssen & Lortie 2011 for more discussion of this idea; reviewers can withdraw from the process at any point, in which case their names are not included).
I think this is an interesting approach because it attempts to make review a friendlier and more interactive process that focuses on quickly converging through conversation on acceptable solutions, rather than slow long-form exchanges through multiple rounds of conventional peer review that can often end up focusing as much on judging as on improving. While I don’t have any personal experience with this system, I’ve seen a number of associate editors talk very positively about the process at Frontiers.
This post isn’t intended to advocate for any of these particular journals or approaches. These are definitely experimental and we may find that some of them have serious limitations. What I do advocate for is that we conduct these kinds of experiments with academic publishing and support the folks who are taking the lead by developing and test driving these systems to see how they work. To do anything else strikes me as accepting that current academic publishing practices are at their global optimum. That seems fairly unlikely to me, which makes the scientist in me want to explore different approaches so that we can find out how to best evaluate and improve scientific research.
UPDATE: Fixed link to the Faculty of 1000 Research paper that I reviewed. Thanks Jeremy!
UPDATE 2: Added a missing link to Faculty of 1000 Research’s main site.
UPDATE 3: Fixed the missing link to Frontiers in Ecology & Evolution. Apparently I was seriously linking challenged this morning.
Thanks for this overview Ethan.
In case readers are interested, I’ve published a paper with Frontiers in X. I don’t think I’d do so again. I found them to be bad at communicating with authors and reviewers as to how they want the journal to operate. There are reasons to be concerned about the whole “reviewers can withdraw at any time” thing. And their website is an impenetrable mess and their pricing scheme very difficult to figure out. They have their virtues, but overall I was underwhelmed. See this old post for discussion:
p.s. your link to how you reviewed a paper for F1000 Research is broken. 🙂
I love the fact that you can make a statement like “fairly standard PLOS One style open access journal.” Not too long ago a journal like PLOS One was anything but standard. Cool that it is now.
As an aside, any thoughts on Peerage of Science (http://www.peerageofscience.org)? I know it isn’t a journal, but it is potentially a game changer WRT how peer review is conducted. At least one of the new journals you mentioned (PeerJ) already accepts papers through it.
Thanks Ethan – the other cool thing about PeerJ (one of many!) is the PeerJ PrePrints part of our product suite (https://peerj.com/preprints/)
We see the tight integration of the preprint server (+ versioning + feedback + Q&A) with the peer reviewed journal (+ Q&A) as being extremely powerful. When you combine this with the ‘reputation metrics’ that are naturally built into our system, we believe we have something very new and powerful.
Jeremy – Interesting to hear about your experience at Frontiers. I’ve really only heard about it from the AE perspective and those are of course folks who have a pretty clear idea of what the journal is all about. I definitely think that one of the challenges for journals that do things a different way is communicating their vision to folks who are very used to a particular way of doing things. For example, it is common to have reviews come in at PLOS One that evaluate papers based on potential impact and recommend rejection based on that alone. Hopefully this post will help clarify some of this for ecologists.
Jeff – Yes, PLOS One’s approach to things is definitely becoming much more common. Most of the big publishers have equivalent journals now and most of the new open access journals work the same way. I think fairly soon we’ll see that most of the papers published every year are in journals that don’t pay attention to impact up front. That doesn’t mean that the ones that do won’t still potentially be important, precisely because they pay attention to impact, but I think the hierarchy will at least be a lot flatter.
I have mixed feelings about Peerage of Science. In general I think it’s a cool idea, but personally I feel that they emphasize anonymity to a fault. I think opening up the review process has some real benefits and they are the opposite of that to the point that if you forget to update your conflicts of interest table you can (completely accidentally) be guilty of (in their mind) unethical behavior. Other than that I like the idea of centralizing review, though the need for this is primarily based on having a hierarchical journal space. If things become less hierarchical as mentioned above and the focus for publication is simply on whether the paper is scientifically sound or not then I think this becomes less necessary. That said, like with all of these other publishing experiments I support trying it out. I signed up as a peer early on intending to do so myself, but have been too busy with other editing and reviewing duties to actually get around to it.
Peter – Yeah, I meant to get into some of the other things that PeerJ does that are transforming how we work. I’m certainly a big fan of the preprint server and especially the fact that it has integrated comments and metric tracking (something that other well known preprint servers shockingly lack in the modern era), but I tend to think of the preprint server as a separate category of thing (though it’s certainly convenient that you allow direct submission).
In many ways I think that the crucial innovation that you are making is that you are simply doing everything right with respect to how to run a journal on the modern web. The advances in concept space may seem a bit incremental in comparison to F1000 and Frontiers, but the execution is amazing. I think of PeerJ as being like the iPhone when it first came out: there were online journals (like there were other smart phones), but no one had ever really seen what an online journal should look like until PeerJ came along.
I suspect what’s eventually going to look like PeerJ’s key innovation is the integration of preprint server with the megajournal proper. Having gone through preprinting and fixed deficiencies pointed out by commenters, why wouldn’t I hit the button to go ahead and submit formally?
Mike – Interesting, I guess I’ve really been thinking of them as distinct components of an ecosystem, but I can see the argument for it all being a single continuous process. I agree that integration is definitely a nice thing (every hour of my time saved is a benefit), but I guess I’m just not sure that single-click submission makes my life massively better (to the point of being a “key innovation”) as long as the journal submission system is fairly smooth and well built (i.e., like PeerJ’s). This also seems like something that could be built relatively generically, so that it’s possible to provide a link to a preprint to any journal and basically get a one-click submission (apparently AmNat is building towards something like this with arXiv at the moment).
I think the key part of what PeerJ has done here is psychological or cultural rather than technical. Having done this already, why would I stop there? The wheels are greased. It matters.
Great post Ethan. This definitely helps provide some concise clarity about the process associated with these outlets. For those who don’t click on the Aarssen and Lortie paper, I’ll point to Ideas in Ecology and Evolution (IEE) as another new journal changing how we publish, with their option for an Author-Directed Peer Review (ADPR). IEE is also quite inexpensive. The only issue I have with IEE is that the website is visually unappealing, and I wonder if some people might not take the journal as seriously if it doesn’t look polished (all the final product PDFs look nice).
>I suspect what’s eventually going to look like PeerJ’s key innovation is
> the integration of preprint server with the megajournal proper.
@Mike – whether or not it is the key innovation remains to be seen, but we do think it is a very important one that people perhaps don’t fully realize yet.
Mike & Pete – OK, I see what you’re saying. In effect PeerJ is providing a one stop shop for preprint review and revision followed by (potentially open) conventional peer review and publication. This is not only convenient (which is certainly important) but also provides the potential for easily seeing an integrated history of the entire review and revision process all in one place (which I hadn’t considered). Are there plans (or existing functionality) to provide a link from the published paper back to the preprint and all of its relevant history?
@Ethan – yes, that is what this level of integration provides. And then we also tie all the feedback back into our reputation system as well, which is accessible via anyone’s profile page (e.g. https://peerj.com/MikeTaylor/ ).
And yes – the 2 versions link back and forth etc.
Thanks Dan. Great point about IEE. I think that Lonnie Aarssen and others at IEE have been thinking in really creative ways about scientific publishing and I look forward to their editorial every year. The interesting thing about IEE is that they aren’t really targeting conventional research papers and so I think that is going to limit their appeal as an outlet for a lot of folks. That said, since outlets like PLOS One and PeerJ don’t typically publish the kinds of ideas papers that IEE does (though you can post them as preprints at PeerJ), it’s a nice outlet for those kinds of thought pieces.
@Pete – Ah yes, the integrated reputation as well. I was of course aware of it but hadn’t thought about the value of the integration there either. Now if we can just get those journals with “non-profit only” preprint server requirements to change their minds. American Naturalist can be a real blocker for us in this area, because if we think there’s any chance we’ll send a paper there then we can’t post the preprint to PeerJ, which is the main reason that our last two preprints are on arXiv.
@Ethan – I wonder if you have had a look at the recently launched @BioDataJournal (http://biodiversitydatajournal.com) and associated Pensoft Writing Tool (PWT) (http://pwt.pensoft.net). Together they form the first-ever workflow that puts authoring, peer review, publishing, and dissemination within a single online collaborative platform. This is also one of the first workflows which definitely aims at integrating narrative (text) and data publishing at every possible opportunity. Most of the features you mention in your blog are available in @BioDataJournal, but also some more. Link to the press release: http://www.eurekalert.org/pub_releases/2013-09/pp-tbd091613.php. Presentation: http://www.slideshare.net/pensoft/revolution-in-publishing-bio-horizon-rome-2013
@Lyubomir – I had heard of the Biodiversity Data Journal, but wanted to focus on journals publishing research papers rather than data here. A post on some of the new data journals would definitely be a good idea though. I’ll put it on the list.
@Ethan – regarding anonymity in Peerage of Science, what we want to offer is choice and freedom:
– author has a tick-box to choose upon submission whether to display name to reviewers, editors or both;
– reviewer can of course choose to sign a review (though trying to ride your name prestige instead of carefully justifying arguments is sternly frowned upon – but not banned in any way);
– reviewer can choose to disclose identity to specific editors upon request;
– editor can choose to track a process quietly until they want to make an offer, or they can write comments to the Open Discussion Channel in the manuscript view and either sign the comment as “Editor” or with their name.
Essentially, we make triple-blind possible and do think it would be cool if that evolves to be the community standard, but everybody is free to choose what to do with their own name.
The reason we are strict about conflict of interest is that it is THE most common complaint against the idea – some people think free reviewer engagement leads to authors arranging sympathetic reviewers for themselves. It does not happen in reality, and due to peer-review-of-peer-review, PoS peer reviewers take care to justify their arguments and are often ruthlessly critical. But nonetheless we have found the affiliation restrictions are appreciated by people who have such fears.
But we could extend the freedom even a little further (and thanks for this idea): perhaps all affiliation bans can be lifted when the author has chosen to disclose their name to all, and the reviewer chooses to disclose their name and the nature of the affiliation to all: then it would be OK to peer review your lab-mate’s or your wife’s paper, as others can see the affiliation and can judge the peer review accordingly.
@Janne – Thanks for the clarification. It definitely sounds like you’re striking a much better balance than I’d presumed. My apologies for not having looked into it more carefully before commenting.
We do seem to be at an interesting crossroads in reviewing at the moment, where there are strong opinions about the importance of anonymity and the risk of biased reviews (either through review selection or fear of retribution), but also an increasing number of folks who feel the solution is to move towards a more open model of review, where good behavior is enforced through individual reputation by making reviewers’ names public and allowing review of reviews.
I greatly appreciate the fact that you are building a system that allows for flexibility in approach since this will allow community norms to evolve rather than being constrained by how the system is built. If you implement the final phase of this that you mention above it would definitely encourage me to become more involved.
I can’t help but wonder if we’re doing this whole thing backwards. People don’t usually write books and then shop them around to publishers, you start with a proposal or a few chapters, get a contract, then finish. There’s review along the way, and a guarantee of publication as long as you finish. Wouldn’t this model be better for science than the one we currently operate in? Some instances that come to mind:
– How many post-hoc reviewers write critiques of experimental design, or want to add experiments, or suggest different sample sizes or even model organisms — these things are ridiculous to consider after months or even years of labor. Yet we all go through it all the time. Bring reviewers on board from the start and everything becomes vastly more efficient and I suspect, more fair.
– Negative results would have to be published (what ethical publication contract would stipulate p<0.05?) and the stigma associated with these findings would vanish.
– Exploratory research would have a higher likelihood of being published since the rationale and approach would be reviewed from the start, not questioned after, especially valuable if you don't wind up finding anything so interesting.
– Societies that offer funding for research (e.g. graduate student grants in aid of research) might offer contracts for publishing the work in society journals after it was complete. If they're funding it, there must already be a good proposal and plan in place, so why abandon the student to sink or swim among all of the various publishers and the stochasticity of post-hoc review?
– Wouldn't you prefer reviewing a study as it was getting started rather than after it had been written up?
– Writing and submitting grant proposals accomplishes some of this, but why duplicate the review process by breaking it up between funding agencies and journals? Why not secure the project and its review with a journal contract and then bring that to the funding agency for financial support?
Dear @Jeremy Fox
I am Costanza Zucca, Editorial Director of Frontiers. I’m sorry to hear that we were not able to meet your initial expectations.
We disclose reviewer names on all accepted articles as a way to maximize constructiveness, accountability and transparency. Also, it’s important to acknowledge their hard work. However, nobody can be forced to review a paper, and even if a reviewer contributes significantly to the improvement of a manuscript, but still does not like it, they need the option to withdraw and remain anonymous. But we can also disclose this information if people find it relevant, for example, ‘one reviewer with their name on the paper’ and ‘one withdrew’.
As you know, we are a group of scientists helping researchers to improve scholarly publishing for the benefit of the scientific community. To make this mission possible, constructive and ongoing communication with editors, authors and supporters is incredibly important. We are planning to launch our new review forum interface that will offer an improved and friendlier user experience. We’d love to invite you to try this out and receive your feedback. Please get in touch if you’re interested: firstname.lastname@example.org
Interesting evolution in scientific publishing; I had some experience with Frontiers, which I considered better than PLOS. Still, my biggest fear regarding this evolution is that we will end up with 5 publishing platforms: Frontiers by NPG, PLOS, F1000, Thomson, and Elsevier. (Just guessing here.) These platforms will be some kind of walled gardens with hardly any communication or data sharing between them, like Facebook or LinkedIn or even ResearchGate. I much prefer the idea of modularity, with different players focusing on their specialty and good APIs for communication – like figshare, ORCID, Altmetric, Zotero, ImpactStory, Peerage, Publons, bioRxiv, and PeerJ. The best thing about PeerJ’s integration between preprint and publication is that it is optional. This allows innovation in the scientific pipeline, as you don’t need to create a whole platform at once.