Is it OK to cite preprints? Yes, yes it is.
Should you cite preprints in your papers and should journals allow this? This is a topic that gets debated periodically. The most recent round of Twitter debate started last week when Martin Hunt pointed out that the journal Nucleic Acids Research wouldn’t allow him to cite them. A couple of days later I suggested that journals that don’t allow citing preprints are putting their authors at risk by forcing them not to cite relevant work. Roughly forty games of Sleeping Queens later (my kid is really into Sleeping Queens) I reopened Twitter and found a roiling debate over whether citing preprints was appropriate at all.
The basic argument against citing preprints is that they aren’t peer reviewed, and that this could lead to the citation of bad work and the potential decay of science.
There are three reasons I disagree with this argument:
- We already cite lots of non-peer reviewed things in ecology
- Lots of fields already do this and they are doing just fine
- Responsibility for the citation lies with the citer
We already cite non-peer reviewed things in ecology
As Auriel Fournier, Stephen Heard, Michael Hoffman, Terry McGlynn, and ATMoody pointed out, we already cite lots of things that aren’t peer reviewed, including government agency reports, white papers, and other “grey literature”.
We also cite lots of other really important non-peer reviewed things like data and software. We’ve been doing this for decades. Ecology hasn’t become polluted with pseudoscience. It will all be OK.
Lots of other fields already do this
One of the things I find amusing/exhausting about biologists debating preprints is the ignorance of the history and use of preprints in other fields. It’s a bit like debating the name of an actor for two hours when you could easily look it up on Google.
In this particular case (as Eric Pedersen pointed out) we know that citation of preprints isn’t going to cause problems for the field because it hasn’t caused issues in other fields and has almost invariably become standard practice in fields that use preprints. Unless you think Physics and Math are having real issues, it’s difficult to argue that this is a meaningful problem. Just ask a physicist.
You are responsible for your citations
Why hasn’t citing unreviewed work caused the wheels to fall off of science? Because citing appropriate work in the proper context is part of our job. There are good preprints and bad preprints, good reports and bad reports, good data and bad data, good software and bad software, and good papers and bad papers. As Belinda Phipson, Casey Green, Dave Harris and Sebastian Raschka point out, it is up to us as the people citing research to make professional judgments about what is good science and should be cited. Casey’s take captures my thoughts on this exactly.
TLDR
So yes, you should cite preprints and other unreviewed things that are important for your work. That’s called proper attribution. It has worked in ecology and other fields for decades. It will continue to work because we are scientists and evaluating the science we cite is part of our jobs. You can even cite this blog post if you want to.
Thanks to everyone both linked here and not for the spirited discussion. Sorry I wasn’t there, but Sleeping Queens is a pretty awesome game.
UPDATE: For those of you new to this discussion, it’s been going on for a long time even in biology. Here is Graham Coop’s excellent post from nearly 4 years ago.
UPDATE: Discussion of why it’s important to put preprint citations in the reference list
Thoughts on preprints and citations
A couple of months ago Micah J. Marty and I had a twitter conversation and subsequent email exchange about how citations worked with preprints. I asked Micah if I could share our email discussion since I thought it would be useful to others and he kindly said yes. What follows are Micah’s questions followed by my responses.
Right now, I am finishing up a multi-chapter Master’s thesis and I plan to publish a few papers from my work. I may want to submit a preprint of one manuscript but before I propose this avenue to my advisor, I want to understand it fully myself. And I have remaining questions about the syntax of citing works when preprints come into play. What happens to a citation of a preprint after the manuscript is later published in a peer reviewed venue?
At the level of the journal nothing happens. So, if you cite a preprint in a published ms, and that preprint is later published as a paper, then the citation is still to the preprint. However, some of the services indexing citations recognize the relationship between the preprint and the paper and aggregate the citations. Specifically, Google Scholar treats the preprint and the published paper as the same for citation analysis purposes. See the citation record for our paper on Best practices for scientific computing which has been cited 49 times, but the vast majority of those are citations to the preprints.
Here’s an example with names we can play with: Manuscript 1 (M1) may require some extra analysis, but it presents some important unexpected results that I would like to get out on the table as soon as possible. M1 is submitted to PeerJ Preprints and accepted (i.e., published online as a preprint with a DOI). M2 is submitted to Marine Ecology Progress Series (MEPS) for peer review, and M2 cites the PeerJ Preprint M1.
Just a point related to vocabulary, I wouldn’t typically think of the preprint as being “accepted”. Any checking prior to posting is just a quick glance to make sure that it isn’t embarrassingly bad, so as long as it’s reasonably written and doesn’t have a title like “E is not equal to mc squared” it will be posted almost immediately (within 48 hours on most preprint servers).
1) Are preprints considered “grey literature”? That is, is it illegitimate for M2 to cite a work that has not been peer reviewed?
Yes, in the sense that they haven’t been formally peer reviewed prior to posting, they are similar to “grey literature”. Whether or not they can be cited depends on the journal. Some journals are happy to allow citing of preprints. For example, this recent paper in TREE cites a preprint of ours on arXiv. Their paper was published before ours was accepted, so if it wasn’t for the preprint it couldn’t have been cited.
2) Is there a problem if M1 is eventually published in a peer reviewed journal but the published article of M2 cites only the PeerJ Preprint of M1?
I would say no for two reasons. First, assuming that M2 is published before M1 then the choice is between having a citation to something that people can read, science can benefit from, and that can potentially be indexed (giving you citation credit) vs. a citation to “Marty et al. unpublished data”, which basically does nothing. Second, all preprint servers provide a mechanism for linking to the final version, so if someone finds the preprint via a citation in M2 then that link will point them in the direction of the final version that they can then read/cite/etc.
In short, I think as long as you aren’t planning on submitting to a behind-the-times journal that doesn’t allow the submission of papers that have been posted as preprints (and the list of journals with this policy is shrinking rapidly) then there is no downside to posting preprints. In the best case scenario it can lead to more people reading your research and citing it. The worst case scenario is exactly the same as if you didn’t post a preprint.
How technology can help scientists with chronic illnesses (or Technology FTW!)
This is a guest post by Elita Baldridge (@elitabaldridge)
I am currently the remotely working member of Weecology, finishing up my PhD in the lower elevation and better air of Kansas while the rest of my colleagues are still in Utah, after developing a chronic illness and finally getting diagnosed with fibromyalgia. The relocation is actually working out really well. I’m in better shape because I’m not having to fight the air too, and I’m finally making real progress toward finishing my dissertation again.
I ruthlessly culled everything that wasn’t directly related to my dissertation. I was going to attend the Gordon Conference this year, having heard fantastic things about it for years without being ready to go, but I had to drop that because I wasn’t physically able to travel. I did not go to ESA, because I couldn’t travel. There are working groups and workshops galore, all involving travel, which I cannot do. Right now, the closest thing that we have to bringing absent scientists to an event is live tweeting, which is not nearly as good as hearing a speaker for yourself, and is pretty heartbreaking if you had to cancel your plans to attend an event because you were too infirm to go.

The tools that I’m using to do science remotely are not just for increasing accessibility for a single chronically ill macroecologist. They are good tools for science in general. I’m using GitHub to version control my code, and Dropbox to share data and figures. Ethan can see what I’m working on as I’m doing it, and I’ve got a clear record of what I was doing and what decisions I made. While my cognitive dysfunction may be a bit more extreme of a problem, I know that we’ve all stayed up too late coding and broken something we shouldn’t have, and the ability to wave the magic Git wand and make any poor decisions that I made while my brain was out to lunch go away is priceless.
Open access? Having open access to papers is really important when you are going to be faced shortly with probably not having any institutional access anymore. It is also important for everyone else who isn’t at a major university with very expensive subscriptions to all the journals. Having open access to data and code is crucial when you can’t collect your own data and are going to be doing research from your home computer on the cheap because you can’t rely on your body to work at any given point in time.
Video conferencing is working well for me to meet with the lab, but could also be great for attending conferences and workshops. This would not only be good for a certain macroecologist, but would also be good to include people from smaller universities, etc. who would like to participate in these types of things too, but can’t due to the travel. I did my master’s degree at Fort Hays State University, and I still love it dearly. This type of increased accessibility would have been great for me while I was a perfectly healthy master’s student. Fort Hays is a primarily undergraduate institution in the middle of Kansas, about four hours away from any major city, and it does not have some of the resources that a larger university would have. No seminar series, no workshops, not much travel money to go to workshops or conferences, which doesn’t mean that good science can’t still be happening.
Many of my labmates are looking for post-docs, or are already in postdoc positions at this point. I’m very excited for all of them, and eagerly await all the stories of the exciting new things they are doing. Having a chronic illness limits what I am capable of doing physically. I am not going to be able to move across the country for a post-doc. That does not mean that I do not want to play science too. I’ve got my home base set up, and I can reach pretty far from here. I still want to be a part of living science; I don’t want to have to get to the party after everyone else has gone home.
And I wonder, why can I not do these things? Is it not the future? Do we not have the internet, with video chat? I get to meet with Ethan and talk science at our weekly meetings. I go to lab meetings with video chat, and get to see what my labmates are doing, and crack jokes, and laugh at other people’s jokes. It wouldn’t be hard to get me to conferences and working groups either.
With technology, I get to be a part of living, breathing science, and it is a beautiful thing.
Ecology Letters now allows preprints; and why this is a big deal for ecology
As announced by Noam Ross on Twitter (and confirmed by the Editor in Chief of Ecology Letters), Ecology Letters will now allow the submission of manuscripts that have been posted as preprints. Details will be published in an editorial in Ecology Letters. I want to say a heartfelt thanks to Marcel Holyoak and the entire Ecology Letters editorial board for listening to the ecological community and modifying their policies. Science is working a little better today than it was yesterday thanks to their efforts.
For those of you who are new to the concept of preprints, they are manuscripts that have not yet been published in peer reviewed journals and are posted to websites like arXiv, PeerJ, and bioRxiv. This process allows for more rapid communication of scientific results and improved quality of published papers through more expansive pre-publication peer review. If you’d like to read more check out our paper on The Case for Open Preprints in Biology.
The fact that Ecology Letters now allows preprints is a big deal for ecology because they were the last of the major ecology journals to make the transition. The ESA journals began allowing preprints just over two years ago and the BES journals made the switch about 9 months ago. In addition, Science, Nature, PNAS, PLOS Biology, and a number of other ecology journals (e.g., Biotropica) all support preprints. This means that all of the top ecology journals, and all of the top general science journals that most ecologists publish in, allow the posting of preprints. As such, there is no longer a reason not to post preprints based on the possibility of not being able to publish in a preferred journal. This can potentially shave months to years off of the time between discovery and initial communication of results in ecology.
It also means that other ecology journals that still do not allow the posting of preprints are under significant pressure to change their policies. With all of the big journals allowing preprints they have no reasonable excuse for not modernizing their policies, and they risk losing out on papers that are initially submitted to higher profile journals and are posted as preprints.
It’s a good day for science. Celebrate by posting your next manuscript as a preprint.
Which preprint server should I use?
Preprints are rapidly becoming popular in biology as a way to speed up the process of science, get feedback on manuscripts prior to publication, and establish precedence (Desjardins-Proulx et al. 2013). Since biologists are still learning about preprints I regularly get asked which of the available preprint servers to use. Here’s the long-form version of my response.
The good news is that you can’t go wrong right now. The posting of a preprint and telling people about it is far more important than the particular preprint server you choose. All of the major preprint servers are good choices.

Of course you still need to pick one and the best way to do that is to think about the differences between available options. Here’s my take on four of the major preprint servers: arXiv, bioRxiv, PeerJ, and figshare.
arXiv
arXiv is the oldest of the science preprint servers. As a result it is the most well established, it is well respected, more people have heard of it than any of the other preprint servers, and there is no risk of it disappearing any time soon. The downside to having been around for a long time is that arXiv is currently missing some features that are increasingly valued on the modern web. In particular there is currently no ability to comment on preprints (though they are working on this) and there are no altmetrics (things like download counts that can indicate how popular a preprint is). The other thing to consider is that arXiv’s focus is on the quantitative sciences, which can be both a pro and a con. If you do math, physics, computer science, etc., this is the preprint server for you. If you do biology it depends on the kind of research you do. If your work is quantitative then your research may be seen by folks outside of your discipline working on related quantitative problems. If your work isn’t particularly quantitative it won’t fit in as well. arXiv allows an array of licenses that can either allow or restrict reuse. In my experience it can take about a week for a preprint to go up on arXiv and the submission process is probably the most difficult of the available options (but it’s still far easier than submitting a paper to a journal).
bioRxiv
bioRxiv is the new kid on the block, having launched less than a year ago. It has both commenting and altmetrics, but whether it will become as established as arXiv and stick around for a long time remains to be seen. It is explicitly biology focused and accepts research of any kind in the biological sciences. If you’re a biologist, this means that you’re less likely to reach people outside of biology, but it may be more likely that biology folks come across your work. bioRxiv allows an array of licenses that can either allow or restrict reuse. However, they explicitly override the less open licenses for text mining purposes, so all preprints there can be text-mined. In my experience it can take about a week for a preprint to go up on bioRxiv.
PeerJ Preprints
PeerJ Preprints is another new preprint server that is focused on biology and accepts research from across the biological sciences. Like bioRxiv it has commenting and altmetrics. It is the fastest of the preprint servers, with less than 24 hours from submission to posting in my experience. PeerJ has a strong commitment to open access, so all of its preprints are licensed with the Creative Commons Attribution License. PeerJ also publishes an open access journal, but you can post preprints to PeerJ Preprints without submitting them to the journal (and this is very common). If you do decide to submit your manuscript to the PeerJ journal after posting it as a preprint you can do this with a single click and, should it be published, the preprint will be linked to the paper. PeerJ has the most modern infrastructure of any of the preprint servers, which makes for really pleasant submission, reading, and commenting experiences. You can also earn PeerJ reputation points for posting preprints and engaging in discussions about them. PeerJ is the only major preprint server run by a for-profit company. This is only an issue if you plan to submit your paper to a journal that only allows the posting of non-commercial preprints. I know of only one journal with this restriction, but it is American Naturalist, which can be an important journal in some areas of biology.
Figshare
figshare is a place to put any kind of research output including data, figures, slides, and preprints. The benefit of this general approach to archiving research outputs is that you can use figshare to store all kinds of research outputs in the same place. The downside is that because it doesn’t focus on preprints people may be less likely to find your manuscript among all of the other research objects. One of the things I like about this broad approach to archiving anything is that I feel comfortable posting things that aren’t really manuscripts. For example, I post grant proposals there. figshare accepts research from any branch of science and has commenting and altmetrics. There is no delay from submission to posting. Like PeerJ, figshare is a for-profit company and any document posted there will be licensed with the Creative Commons Attribution License.
Those are my thoughts. I have preprints on all three preprint servers plus figshare and I’ve been happy with the experience in every case. As I said at the beginning, the most important thing is to help speed up the scientific process by posting your work as preprints. Everything else is just details.
UPDATE: It looks like, due to a hiccup with scheduling this post, an early version went out to some folks without the figshare section.
UPDATE: In the comments Richard Sever notes that bioRxiv’s preprints are typically posted within 48 hours of submission and that their interpretation of the text mining clause is that this is covered by fair use. See our discussion in the comments for more details.
Sharing in Science: my full reply to Eli Kintisch
A couple of weeks ago Eli Kintisch (@elikint) interviewed me for what turned out to be a great article on “Sharing in Science” for Science Careers. He also interviewed Titus Brown (@ctitusbrown) who has since posted the full text of his reply, so I thought I’d do the same thing.
How has sharing code, data, R methods helped you with your scientific research?
Definitely. Sharing code and data helps the scientific community make more rapid progress by avoiding duplicated effort and by facilitating more reproducible research. Working together in this way helps us tackle the big scientific questions and that’s why I got into science in the first place. More directly, sharing benefits my group’s research in a number of ways:
- Sharing code and data results in the community being more aware of the research you are doing and more appreciative of the contributions you are making to the field as a whole. This results in new collaborations, invitations to give seminars and write papers, and access to excellent students and postdocs who might not have heard about my lab otherwise.
- Developing code and data so that it can be shared saves us a lot of time. We reuse each other’s code and data within the lab for different projects, and when a reviewer requests a small change in an analysis we can make a small change in our code and then regenerate the results and figures for the project by running a single program (see the sketch after this list). This also makes our research more reproducible and allows me to quickly answer questions about analyses years after they’ve been conducted when the student or postdoc leading the project is no longer in the lab. We invest a little more time up front, but it saves us a lot of time in the long run. Getting folks to work this way is difficult unless they know they are going to be sharing things publicly.
- One of the biggest benefits of sharing code and data is in competing for grants. Funding agencies want to know how the money they spend will benefit science as a whole, and being able to make a compelling case that you share your code and data, and that it is used by others in the community, is important for satisfying this goal of the funders. Most major funding agencies have now codified this requirement in the form of data management plans that describe how the data and code will be managed and when and how it will be shared. Having a well established track record in sharing makes a compelling argument that you will benefit science beyond your own publications, and I have definitely benefited from that in the grant review process.
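To make the “regenerate everything by running a single program” idea concrete, here is a minimal sketch of what such a driver script can look like. Everything in it is hypothetical (the file names, columns, and analysis are placeholders rather than code from one of our actual projects); the point is simply that a single entry point rebuilds the results and figures after any change to the code or data.

```python
"""run_all.py -- hypothetical driver script for a project.

Being able to regenerate every result and figure with one command is what
makes it cheap to respond to a reviewer's request: edit the relevant
function, rerun this file, and everything downstream is rebuilt.
"""
from pathlib import Path

import pandas as pd
import matplotlib.pyplot as plt

DATA = Path("data/surveys.csv")   # hypothetical input data
RESULTS = Path("results")         # tables and figures are written here


def load_data(path):
    """Read the raw survey data."""
    return pd.read_csv(path)


def summarize(surveys):
    """Example analysis: mean abundance for each species."""
    return surveys.groupby("species")["abundance"].mean().reset_index()


def make_figure(summary, outfile):
    """Regenerate the figure used in the paper."""
    fig, ax = plt.subplots()
    ax.bar(summary["species"], summary["abundance"])
    ax.set_ylabel("Mean abundance")
    fig.savefig(outfile)


if __name__ == "__main__":
    RESULTS.mkdir(exist_ok=True)
    summary = summarize(load_data(DATA))
    summary.to_csv(RESULTS / "species_summary.csv", index=False)
    make_figure(summary, RESULTS / "species_summary.png")
```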
What barriers exist in your mind to more people doing so?
There is a lot of fear about openly sharing data and code. People believe that making their work public will result in being scooped or that their efforts will be criticized because they are too messy. There is a strong perception that sharing code and data takes a lot of extra time and effort. So the biggest barriers are sociological at the moment.
To address these barriers we need to do a better job of providing credit to scientists for sharing good data and code. We also need to do a better job of educating folks about the benefits of doing so. For example, in my experience, the time and effort dedicated to developing and documenting code and data as if you plan to share it actually ends up saving the individual researcher time in the long run. This happens because when you return to a project a few months or years after the original data collection or code development, it is much easier if the code and data are in a form that makes it easy to work with.
How has twitter helped your research efforts?
Twitter has been great for finding out about exciting new research, spreading the word about our research, getting feedback from a broad array of folks in the science and tech community, and developing new collaborations. A recent paper that I co-authored in PLOS Biology actually started as a conversation on twitter.
How has R Open Science helped you with your work, or why is it important or not?
rOpenSci is making it easier for scientists to acquire and analyze the large amounts of scientific data that are available on the web. They have been wrapping many of the major science related APIs in R, which makes these rich data sources available to large numbers of scientists who don’t even know what an API is. It also makes it easier for scientists with more developed computational skills to get research done. Instead of spending time figuring out the APIs for potentially dozens of different data sources, they can simply access rOpenSci’s suite of packages to quickly and easily download the data they need and get back to doing science. My research group has used some of their packages to access data in this way and we are in the process of developing a package with them that makes one of our Python tools for acquiring ecological data (the EcoData Retriever) easy to use in R.
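As a toy illustration of what “wrapping an API” means in practice, here is a rough sketch in Python (the rOpenSci packages themselves are R packages, and the endpoint, parameters, and field names below are entirely made up for illustration). The wrapper hides the request building and JSON parsing so that the user just gets back a table of data.

```python
# Hypothetical example of wrapping a web API so users never see the plumbing.
# The URL and JSON structure are invented; they do not correspond to any
# real rOpenSci package or data provider.
import pandas as pd
import requests


def get_occurrences(species, base_url="https://api.example.org/occurrences"):
    """Return occurrence records for a species as a data frame.

    Builds the query, calls the (fictional) API, checks for errors, and
    converts the returned JSON into a pandas DataFrame.
    """
    response = requests.get(base_url, params={"species": species}, timeout=30)
    response.raise_for_status()
    return pd.DataFrame(response.json()["records"])


# With a wrapper like this in place, acquiring the data is a single line:
# occurrences = get_occurrences("Dipodomys ordii")
```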
Any practical tips you’d share on making sharing easier?
One of the things I think is most important when sharing both code and data is to use standard licenses. Scientists have a habit of thinking they are lawyers and writing their own licenses and data use agreements that govern how the data and code can be used. This leads to a lot of ambiguity and difficulty in using data and code from multiple sources. Using standard open source and open data licenses vastly simplifies the process of making your work available and will allow science to benefit the most from your efforts.
And do you think sharing data/methods will help you get tenure? Evidence it has helped others?
I have tenure and I certainly emphasized my open science efforts in my packet. One of the big emphases in tenure packets is demonstrating the impact of your research, and showing that other people are using your data and code is a strong way to do this. Whether or not this directly impacted the decision to give me tenure I don’t know. Sharing data and code is definitely beneficial to competing for grants (as I described above) and increasingly to publishing papers as many journals now require the inclusion of data and code for replication. It also benefits your reputation (as I described above). Since tenure at most research universities is largely a combination of papers, grants, and reputation, I think that sharing at least increases one’s chances of getting tenure indirectly.
UPDATE: Added missing link to Titus Brown’s post: http://ivory.idyll.org/blog/2014-eli-conversation.html
British Ecological Society journals now allow preprints
The British Ecological Society has announced that it will now allow the submission of papers with preprints (formal language here). This means that you can now submit preprinted papers to Journal of Ecology, Journal of Animal Ecology, Methods in Ecology and Evolution, Journal of Applied Ecology, and Functional Ecology. By allowing preprints BES joins the Ecological Society of America, which instituted a pro-preprint policy last year. While BES’s formal policy is still a little more vague than I would like*, they have confirmed via Twitter that even preprints with open licenses are OK as long as they are not updated following peer review.
Preprints are important because they:
- Speed up the progress of science by allowing research to be discussed and built on as soon as it is finished
- Allow early career scientists to establish themselves more rapidly
- Improve the quality of published research by allowing a potentially large pool of reviewers to comment on and improve the manuscript (see our excellent experience with this)
BES getting on board with preprints is particularly great news because the number of ecology journals that do not allow preprints is rapidly shrinking to the point that ecologists will no longer need to consider where they might want to submit their papers when deciding whether or not to post preprints. The only major blocker at this point to my mind is Ecology Letters. So, my thanks to BES for helping move science forward!
*Which is why I waited 3 weeks for clarification before posting.
Exploring MaxEnt based species-area relationship predictions [Research Summary]
This is a guest post by Dan McGlinn, a weecology postdoc (@DanMcGlinn on Twitter). It is a Research Summary of: McGlinn, D.J., X. Xiao, and E.P. White. 2013. An empirical evaluation of four variants of a universal species–area relationship. PeerJ 1:e212 http://dx.doi.org/10.7717/peerj.212. These posts are intended to help communicate our research to folks who might not have the time, energy, expertise, or inclination to read the full paper, but who are interested in a <1000 word general language summary.
It is well established in ecology that if the area of a sample is increased you will in general see an increase in the number of species observed. There are a lot of different reasons why larger areas harbor more species: larger areas contain more individuals, habitats, and environmental variation, and they are likely to cross more barriers to dispersal – all things that allow more species to exist together in an area. We typically observe relatively smooth and simple looking increases in species number with area. This observation has mystified ecologists: How can a pattern that should be influenced by many different and biologically idiosyncratic processes appear so similar across scales, taxonomic groups, and ecological systems?
Recently a theory was proposed (Harte et al. 2008, Harte et al. 2009) which suggests that detailed knowledge of the complex processes that influence the increase in species number may not be necessary to accurately predict the pattern. The theory proposes that ecological systems tend to simply be in their most likely configuration. Specifically, the theory suggests that if we have information on the total number of species and individuals in an area then we can predict the number of species in smaller portions of that area.
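Roughly speaking (this is schematic shorthand, not the exact notation used in the paper), the downscaled prediction takes the form

\[
\bar{S}(A) \;=\; \sum_{i=1}^{S_0} \left[\, 1 - P_i\!\left(0 \mid A;\, S_0, N_0, A_0\right) \right],
\]

where \(S_0\) and \(N_0\) are the total numbers of species and individuals in the whole area \(A_0\), and \(P_i(0 \mid \cdot)\) is the predicted probability that species \(i\) is absent from a subplot of area \(A\). The different versions of the theory discussed below amount to different ways of constructing those absence probabilities from this limited information.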
Published work on this new theory suggests that it has potential for accurately predicting how species number changes with area; however, it has not been appreciated that there are actually four different ways that the theory can be operationalized to make a prediction. We were interested to learn:
- Can the theory accurately predict how species number changes with area across many different ecological systems?
- Do some versions of the theory consistently perform better than others?
To answer these questions we needed data. We searched online and made requests to our colleagues for datasets that documented the spatial configuration of ecological communities. We were able to pull together a collection of 16 plant community datasets. The communities spanned a wide range of systems including hyper-diverse, old-growth tropical forests, a disturbance prone tropical forest, temperate oak-hickory and pine forests, a Mediterranean mixed-evergreen forest, a low diversity oak woodland, and a serpentine grassland.
Fig 1. A) Results from one of the datasets: the open circles display the observed data and the lines are the four different versions of the theory we examined. B) A comparison of the observed and predicted number of species across all areas and communities we examined for one of the versions of the theory.
Across the different communities we found that the theory was generally quite accurate at predicting the number of species (Fig 1 above), and that one of the versions of the theory was typically better than the others in terms of the accuracy of its predictions and the quantity of information it required to make predictions. There were a couple of noteworthy exceptions in our results. The low diversity oak woodland and the serpentine grassland both displayed unusual patterns of change in richness. The species in the serpentine grassland were more spatially clustered than was typically observed in the other communities and thus better described by the versions of the theory that predicted stronger clustering. Abundance in the oak woodland was primarily distributed across two species whereas the other 5 species were only observed once or twice. This unusual pattern of abundance resulted in a rather unique S-shaped relationship between the number of species and area and required inputting the observed species abundances to accurately model the pattern.
The two key findings from our study were:
- The theory provides a practical tool for accurately predicting the number of species in sub-samples of a given site using only information on the total number of species and individuals in that entire area.
- The different versions of the theory do make different predictions and one appears to be superior
Of course there are still a lot of interesting questions to address. One question we are interested in is whether or not we can predict the inputs of the theory (total number of species and individuals for a community) using a statistical model and then plug those predictions into the theory to generate accurate fine-scaled predictions. This kind of approach would be important for conservation applications because it would allow scientists to estimate the spatial pattern of rarity and diversity in the community without having to sample it directly. We are also interested in future development of the theory that provides predictions for the number of species at areas that are larger (rather than smaller) than the reference point, which may have greater applicability to conservation work.
The accuracy of the theory also has the potential to help us understand the role of specific biological processes in shaping the relationship between species number and area. Because the theory didn’t include any explicit biological processes, our findings suggest that specific processes may only influence the observed relationship indirectly through the total number of species and individuals. Our results do not suggest that biological processes are not shaping the relationship but only that their influence may be rather indirect. This may be welcome news to practitioners who rely on the relationship between species number and area to devise reserve designs and predict the effects of habitat loss on diversity.
If you want to learn more you can read the full paper (it’s open access!) or check out the code underlying the analysis (it’s open source and includes instructions for replicating the analysis!).
References:
Harte, J., A. B. Smith, and D. Storch. 2009. Biodiversity scales from plots to biomes with a universal species-area curve. Ecology Letters 12:789–797.
Harte, J., T. Zillio, E. Conlisk, and A. B. Smith. 2008. Maximum entropy and the state-variable approach to macroecology. Ecology 89:2700–2711.
New journals that are changing the way we publish
Academic publishing is in a dynamic state these days with large numbers of new journals popping up on a regular basis. Some of these new journals are actively experimenting with changing traditional approaches to publication and peer review in potentially important ways. So, I thought I’d provide a quick introduction to some of the new kids on the block that I think have the potential to change our approach to academic publishing.
PeerJ
PeerJ is in some ways a fairly standard PLOS One style open access journal. Like PLOS One they only publish primary research (no reviews or opinion pieces) and that research is evaluated only on the quality of the science not on its potential impact. However, what makes PeerJ different (and the reason that I’m volunteering my time as an associate editor for them) is their philosophy that in the era of the modern web it should be both cheap and easy to publish scientific papers:
We aim to drive the costs of publishing down, while improving the overall publishing experience, and providing authors with a publication venue suitable for the 21st Century.
The pricing model is really interesting. Instead of a flat fee per paper, PeerJ uses lifetime author memberships. For $99 (total for life) you can publish 1 paper/year. For $199 you can publish 2 papers/year and for $299 you can publish unlimited papers for life. Every author has to have a membership so for a group of 5 authors publishing in PeerJ for the first time it would cost $495, but that’s still about 1/3 of what you’d pay at PLOS One and 1/6 of what you’d pay to make a paper open access at a Wiley journal. And that same group of authors can publish again next year for free. How they can publish for so much less than anyone else (and whether it is sustainable) is a bit of an open question, but they have clearly spent a lot of time (and serious publishing experience) thinking about how to automate and scale publication in an affordable manner, both technically and in terms of things like typesetting (since single-column text with no attempt to wrap text around tables and figures is presumably much easier to typeset). If you “follow the money” as Brian McGill suggests then the path may well lead you to PeerJ.
Other cool things about PeerJ:
- Optional open review (authors decide whether reviews are posted with accepted manuscripts, reviewers decide whether to sign reviews)
- Ability to comment on manuscripts with points being given for good comments.
- A focus on making life easy for authors, reviewers, and editors, including a website that is an absolute joy to interact with and a lack of rigid formatting guidelines that have to be satisfied for a paper to be reviewed.
We want authors spending their time doing science, not formatting. We include reference formatting as a guide to make it easier for editors, reviewers, and PrePrint readers, but will not strictly enforce the specific formatting rules as long as the full citation is clear. Styles will be normalized by us if your manuscript is accepted.
Now there’s a definable piece of added value.
Faculty of 1000 Research
Faculty of 1000 Research‘s novelty comes from a focus on post-publication peer review. Like PLOS One & PeerJ it reviews based on quality rather than potential impact, and it has a standard per paper pricing model. However, when you submit a paper to F1000 it is immediately posted publicly online, as a preprint of sorts. They then contact reviewers to review the manuscript. Reviews are posted publicly with the reviewers names. Each review includes a status designation of “Approved” (similar to Accept or Minor Revisions), “Approved with Reservations” (similar to Major Revisions), and “Not Approved” (similar to Reject). Authors can upload new versions of the paper to satisfy reviewers comments (along with a summary/explanation of the changes made), and reviewers can provide new reviews and new ratings. If an article receives two “Approved” ratings or one “Approved” and two “Approved with Reservations” ratings then it is considered accepted. It is then identified on the site as having passed peer review, and is indexed in standard journal databases. The peer review process is also open to anyone, so if you want to write a review of a paper you can, no invite required.
It’s important to note that the individuals who are invited to review the paper are recommended by the authors. They are checked to make sure that they don’t have conflicts of interest and are reasonably qualified before being invited, but there isn’t a significant editorial hand in selecting reviewers. This could be seen as resulting in biased reviews, since one is likely to select reviewers that may be biased towards liking your work. However, this is tempered by the fact that the reviewers’ names and reviews are publicly attached to the paper, and therefore they are putting their scientific reputation on the line when they support a paper (as argued more extensively by Aarssen & Lortie 2011).
In effect, F1000 is modeling a system of exclusively post-publication peer review, with a slight twist of not considering something “published/accepted” until a minimum number of positive reviews are received. This is a bold move since many scientists are not comfortable with this model of peer review, but it has the potential to vastly speed up the rate of scientific communication in the same way that preprints do. So, I for one think this is an experiment worth conducting, which is why I recently reviewed a paper there.
Oh, and ecologists can currently publish there for free (until the end of the year).
Frontiers in X
I have the least personal experience with the Frontiers journals (including the soon to launch Frontiers in Ecology & Evolution). Like F1000Research, the groundbreaking nature of Frontiers is in peer review, but instead of moving towards a focus on post-publication peer review they are attempting to change how pre-publication review works. They are trying to make review a more collaborative effort between reviewers and authors to improve the quality of the paper.
As with PeerJ and F1000Research, Frontiers is open access and has a review process that focuses on “the accuracy and validity of articles, not on evaluating their significance”. What makes Frontiers different is their two step review process. The first step appears to be a fairly standard pre-publication peer review, where “review editors” provide independent assessments of the paper. The second step (the “Interactive Review phase”) is where the collaboration comes in. Using an “Interactive Review Forum” the authors and all of the reviewers (and if desirable the associate editor and even the editor in chief for the subdiscipline) work collaboratively to improve the paper to the point that the reviewers support its publication. If disagreements arise the associate editor is tasked with acting as a mediator in the conversation. If a paper is eventually accepted then the reviewers’ names are included with the paper and taken as indicating that they sign off on the quality of the paper (see Aarssen & Lortie 2011 for more discussion of this idea; reviewers can withdraw from the process at any point, in which case their names are not included).
I think this is an interesting approach because it attempts to make the review process a friendlier and more interactive process that focuses on quickly converging through conversation on acceptable solutions rather than slow long-form exchanges through multiple rounds of conventional peer review that can often end up focusing as much on judging as improving. While I don’t have any personal experiences with this system I’ve seen a number of associate editors talk very positively about the process at Frontiers.
Conclusions
This post isn’t intended to advocate for any of these particular journals or approaches. These are definitely experimental and we may find that some of them have serious limitations. What I do advocate for is that we conduct these kinds of experiments with academic publishing and support the folks who are taking the lead by developing and test driving these systems to see how they work. To do anything else strikes me as accepting that current academic publishing practices are at their global optimum. That seems fairly unlikely to me, which makes the scientist in me want to explore different approaches so that we can find out how to best evaluate and improve scientific research.
UPDATE: Fixed link to the Faculty of 1000 Research paper that I reviewed. Thanks Jeremy!
UPDATE 2: Added a missing link to Faculty of 1000 Research’s main site.
UPDATE 3: Fixed the missing link to Frontiers in Ecology & Evolution. Apparently I was seriously linking challenged this morning.
An open letter to Ecology Letters and the British Ecological Society about preprints
UPDATE: Both Ecology Letters and the British Ecological Society journals now allow preprints. Thanks to both groups for listening to the community and supporting the rapid and open exchange of scientific ideas.
Dear Ecology Letters and the British Ecological Society,
I am writing to ask that you support the scientific good by allowing the submission of papers that have been posted as preprints. I or my colleagues have reached out to you before without success, but I have heard through various grapevines that both of you are discussing this possibility and I want to encourage you to move forward with allowing this important practice.
The benefits of preprints to science are substantial. They include:
- More rapid communication and discussion of important scientific results
- Improved quality of published research by allowing for more extensive pre-publication peer review
- A fair mechanism for establishing precedence that is not contingent on the idiosyncrasies of formal peer review
- A way for early-career scientists to demonstrate productivity and impact on a time scale that matches their need to apply for postdoctoral fellowships and jobs
I am writing to you specifically because your journals represent the major stumbling block for those of us interested in improving science by posting preprints. Your journals either explicitly do not allow the submission of papers that have preprints posted online or lack explicit statements that it is OK to do so. This means that if there is any possibility of eventually submitting a paper to one of these journals then researchers must avoid posting preprints.
The standard justification that journals give for not allowing preprints is that they constitute “prior publication”. However, this is not an issue for two reasons. First, preprints are not peer reviewed. They are the equivalent of a long established practice in biology of sending manuscripts to colleagues for friendly review and to make them aware of cutting edge work. They simply take advantage of the internet to scale this to larger numbers of colleagues. Second, the vast majority of publication outlets do not believe that preprints represent prior publication, and therefore the publication ethics of the broader field of academic publishing clearly allows this. In particular Science, Nature, PNAS, the Ecological Society of America, the Royal Society, Springer, and Elsevier all generally allow the posting of preprints. Nature even wrote about this policy nearly a decade ago stating that:
Nature never wishes to stand in the way of communication between researchers. We seek rather to add value for authors and the community at large in our peer review, selection and editing… Communication between researchers includes not only conferences but also preprint servers… As first stated in an editorial in 1997, and since then in our Guide to Authors, if scientists wish to display drafts of their research papers on an established preprint server before or during submission to Nature or any Nature journal, that’s fine by us.
If you’d like to learn more about the value of preprints, and see explanations of why some of the other common concerns about preprints are unjustified, some colleagues and I have published a paper on The Case for Open Preprints in Biology.
So, I am asking that, for the good of science and to bring your journals in line with widely accepted publication practices, you please move quickly to explicitly allow the submission of papers that have been posted as preprints.
Regards,
Ethan White
Some alternative advice on how to decide where to submit your paper
Over at Dynamic Ecology this morning Jeremy Fox has a post giving advice on how to decide where to submit a paper. It’s the same basic advice that I received when I started grad school almost 15 years ago and as a result I don’t think it considers some rather significant changes that have happened in academic publishing over the last decade and a half. So, I thought it would be constructive for folks to see an alternative viewpoint. Since this is really a response to Jeremy’s post, not a description of my process, I’m going to use his categories in the same order as the original post and offer my more… youthful… perspective.
- Aim as high as you reasonably can. The crux of Jeremy’s point is “if you’d prefer for more people to read and think highly of your paper, you should aim to publish it in a selective, internationally-leading journal.” From a practical perspective journal reputation used to be quite important. In the days before easy electronic access, good search algorithms, and social networking, most folks found papers by reading the table of contents of individual journals. In addition, before there was easy access to paper level citation data and alt-metrics, if you needed to make a quick judgment on the quality of someone’s science the journal name was a decent starting point. But none of those things are true anymore. I use searches, filtered RSS feeds, Google Scholar’s recommendations, and social media to identify papers I want to read. I do still subscribe to tables of contents via RSS, but I watch PLOS ONE and PeerJ just as closely as Science and Nature. If I’m evaluating a CV as a member of a search committee or a tenure committee I’m interested in the response to your work, not where it is published, so in addition to looking at some of your papers I use citation data and alt-metrics related to your paper. To be sure, there are lots of folks like Jeremy that focus on where you publish to find papers and evaluate CVs, but it’s certainly not all of us.
- Don’t just go by journal prestige; consider “fit”. Again, this used to matter more before there were better ways to find papers of interest.
- How much will it cost? Definitely a valid concern, though my experience has been that waivers are typically easy to obtain. This is certainly true for PLOS ONE.
- How likely is the journal to send your paper out for external review? This is a strong tradeoff against Jeremy’s point about aiming high since “high impact” journals also typically have high pre-review rejection rates. I agree with Jeremy that wasting time in the review process is something to be avoided, but I’ll go into more detail on that below.
- Is the journal open access? I won’t get into the arguments for open access here, but it’s worth noting that increasing numbers of us value open access and think that it is important for science. We value open access publications, so if you want us to “think highly of your paper” then publishing it somewhere open access helps. Open access can also be important if you “prefer for more people to read… your paper” because it makes it easier to actually do so. In contrast to Jeremy, I am more likely to read your paper if it is open access than if it is published in a “top” journal, and here’s why: I can do it easily. Yes, my university has access to all of the top journals in my field, but I often don’t read papers while I’m at work. I typically read papers in little bits of spare time while I’m at home in the morning or evenings, or on my phone or tablet while traveling or waiting for a meeting to start. If I click on a link to your paper and I hit a paywall then I have to decide whether it’s worth the extra effort to go to my library’s website, log in, and then find the paper again through that system. At this point unless the paper is obviously really important to my research the activation energy typically becomes too great (or I simply don’t have that extra couple of minutes) and I stop. This is one reason that my group publishes a lot using Reports in Ecology. It’s a nice compromise between being open access and still being in a well regarded journal.
- Does the journal evaluate papers only on technical soundness? The reason that many of us think this approach has some value is simple: it reduces the amount of time and energy spent trying to get perfectly good research published in the most highly ranked journal possible. This can actually be really important for younger researchers in terms of how many papers they produce at certain critical points in the career process. For example, I would estimate that the average amount of time that my group spends getting a paper into a high profile journal is over a year. This is a combination of submitting to multiple, often equivalent caliber, journals until you get the right roll of the dice on reviewers, and the typically extended rounds of review that are necessary not only to satisfy the reviewers about what you’ve done, but also to address requests for additional analyses that often aren’t critical and to change how one has described things so that it sits better with reviewers. If you are finishing your PhD then having two or three papers published in a PLOS ONE style journal vs. in review at a journal that filters on “importance” can make a big difference in the prospect of obtaining a postdoc. Having these same papers out for an extra year accumulating citations can make a big difference when applying for faculty positions or going up for tenure if folks who value paper level metrics over journal name are involved in evaluating your packet.
- Is the journal part of a review cascade? I don’t actually know a lot of journals that do this, but I think it’s a good compromise between aiming high and not wasting a lot of time in review. This is why we think that ESA should have a review cascade to Ecosphere.
- Is it a society journal? I agree that this has value and it’s one of the reasons we continue to support American Naturalist and Ecology even though they aren’t quite as open as I would personally prefer.
- Have you had good experiences with the journal in the past? Sure.
- Is there anyone on the editorial board who’d be a good person to handle your paper? Having a sympathetic editor can certainly increase your chances of acceptance, so if you’re aiming high then having a well matched editor or two to recommend is definitely a benefit.
To be clear, there are still plenty of folks out there who approach the literature in exactly the way Jeremy does and I’m not suggesting that you ignore his advice. In fact, when advising my own students about these things I often actively consider and present Jeremy’s perspective. However, there are also an increasing number of folks who think like I do and who have a very different set of perspectives on these sorts of things. That makes life more difficult when strategizing over where to submit, but the truth is that the most important thing is to do the best science possible and publish it somewhere for the world to see. So, go forth, do interesting things, and don’t worry so much about the details.
UPDATE: More great discussion here, here, here and here. [If I missed yours just let me know in the comments and I’ll add it]
ESA journals will now allow papers with preprints
ESA has just announced that it has changed its policy on preprints and will now allow articles that have been posted on major preprint servers, like arXiv, to be considered for publication in its journals.
I am very excited about this change for two reasons. First, as nicely laid out in the INNGE blog post by Philippe Desjardins-Proulx*, there are many positive benefits to science from the preprint culture. They make science more accessible, allow researchers to get feedback from the community prior to peer review, and speed up the scientific process by making ideas available to others as quickly as possible. We should take this opportunity as a community to start developing the kind of vibrant preprint culture that has benefited so many other disciplines. Second, I am encouraged by the rapid response of ESA to the concerns expressed by myself and other members of the community, and take it as a sign that my favorite society is open to making the kinds of changes that are necessary to best facilitate science in the modern era. More work is clearly necessary, but this is a very encouraging start.
UPDATE: Carl Boettiger has posted his very nice letter to Don Strong that played a critical role in taking this discussion from a bunch of folks talking over social media to something that effected meaningful change.
—————————————————————————————————————————–
*See also, posts by GCBias and Titus Brown