
Why the Ecology Letters editorial board should reconsider its No vote on preprints

As I’ve argued here, and in PLOS Biology, preprints are important. They accelerate the scientific dialog, improve the quality of published research, and provide both a fair mechanism for establishing precedence and an opportunity for early-career researchers to quickly demonstrate the importance of their research. And I’m certainly not the only one who thinks this.

One of the things slowing the use of preprints in ecology is the fact that some journals still have policies against considering manuscripts that have been posted as preprints. The argument is typically based on the Ingelfinger rule, which prohibits publishing the same original research in multiple journals. However, almost no one actually believes that this rule applies to preprints anymore. Science, Nature, PNAS, the Ecological Society of America, the British Ecological Society, the Royal Society, Springer, Wiley, and Elsevier all generally allow the posting of preprints. In fact, there is only one major journal in ecology that does not consider manuscripts that are posted as preprints: Ecology Letters.

I’ve been corresponding with the Editor in Chief of Ecology Letters for some time now attempting to convince the journal to address their outdated approach to preprints. He kindly asked the editorial board to vote on this last fall and has been nice enough to both share the results and allow me to blog about them.

Sadly, the editorial board voted 2:1 to not allow consideration of manuscripts posted as preprints based primarily on the following reasons:

  1. Authors might release results before they have been adequately reviewed and considered. In particular, the editors were concerned that “early career authors might do this”.
  2. Because Ecology Letters is considered to be a quick-turnaround journal, the need for preprints is lessened

I’d like to take this opportunity to explain to the members of the editorial board why these arguments are not valid and why it should reconsider its vote.

First, the idea that authors might release results before they have been sufficiently reviewed is not a legitimate basis for a journal to refuse to consider preprinted manuscripts, for the following reasons:

  1. This simply isn’t a journal’s call to make. Journals can make policy based on things like scientific ethics, but preventing researchers from making poor decisions is not their job.
  2. Preprints are understood to not have been peer reviewed. We have a long history in science of getting feedback from other scientists on papers prior to submitting them to journals, and I’ve personally heard the previous Editor in Chief of Ecology Letters argue passionately for scientists to get external feedback before submitting to the journal. This is one of the primary reasons for posting preprints: to get review from a much broader audience than the 2-3 reviewers who will look at a paper for a journal.
  3. All of the other major ecology and general science journals already allow preprints. This means that any justification for not allowing them would need to explain why Ecology Letters is different from Science, Nature, PNAS, the ESA journals, the BES journals, the Royal Society journals, and several of the major corporate publishers. In addition, since every other major ecology journal allows preprints, this policy would only influence papers that were intended to be submitted to Ecology Letters. This is such a small fraction of the ecology literature that it will have no influence on the stated goal.
  4. We already present results prior to publication in all kinds of forms, the most common of which is at conferences, so unless we are also going to disallow presenting unpublished results in talks, this policy won’t accomplish its stated goal.

Second, the idea that preprints are unnecessary because Ecology Letters is so fast doesn’t actually hold for most papers. Most importantly, this argument ignores the importance of preprints for providing prepublication review. In addition, even in the best case scenario this reasoning only holds for articles that are first submitted to Ecology Letters and accepted. Ecology Letters has roughly a 90% rejection rate (the last time I heard a number), and since a lot of the papers that are accepted there were submitted elsewhere first, I suspect that the proportion of the papers they handle for which this argument works is <5%. For all other papers the delay will be much longer. For example, let’s say I do some super exciting research (well, at least I think it’s super exciting) that I think has a chance at Science/Nature. Science and Nature are fine with me posting a preprint, but since there’s a chance the paper won’t get in there, I still can’t post a preprint because I might end up submitting to Ecology Letters. My paper goes out for review at Science but gets rejected, I send it to Nature where it doesn’t go out for review, and then to PNAS where it goes out for review again and is rejected. I then send it to Letters where it goes through two rounds of review and is eventually accepted. Give or take, this process will take about a year, and that is not a short period of time in science.

So, I am writing this in the hopes that the editorial board will reconsider their decision and take Ecology Letters from a journal that is actively slowing down the scientific process back to its proud history of increasing the speed with which scientific communication happens. If you know members of the Ecology Letters editorial board personally I encourage you to email them a link to this article. If any members of the editorial board disagree with the ideas presented here and in our PLOS Biology paper, I encourage them to join me in the comments section to discuss their concerns.

UPDATE: Added Wiley to the list of major publishers that allow preprints. As Emilio Bruna points out in the comments they are happy to have journals that allow posting of preprints and Biotropica is a great example of one of their journals making this shift.

UPDATE: Fixed link to Paul Krugman’s post.

UPDATE: Ecology Letters now allows preprints!!

Sharing in Science: my full reply to Eli Kintisch

A couple of weeks ago Eli Kintisch (@elikint) interviewed me for what turned out to be a great article on “Sharing in Science” for Science Careers. He also interviewed Titus Brown (@ctitusbrown) who has since posted the full text of his reply, so I thought I’d do the same thing.

How has sharing code, data, R methods helped you with your scientific research?

Definitely. Sharing code and data helps the scientific community make more rapid progress by avoiding duplicated effort and by facilitating more reproducible research. Working together in this way helps us tackle the big scientific questions and that’s why I got into science in the first place. More directly, sharing benefits my group’s research in a number of ways:

  1. Sharing code and data results in the community being more aware of the research you are doing and more appreciative of the contributions you are making to the field as a whole. This results in new collaborations, invitations to give seminars and write papers, and access to excellent students and postdocs who might not have heard about my lab otherwise.
  2. Developing code and data so that it can be shared saves us a lot of time. We reuse each other’s code and data within the lab for different projects, and when a reviewer requests a small change in an analysis we can make a small change in our code and then regenerate the results and figures for the project by running a single program. This also makes our research more reproducible and allows me to quickly answer questions about analyses years after they’ve been conducted, when the student or postdoc leading the project is no longer in the lab. We invest a little more time up front, but it saves us a lot of time in the long run. Getting folks to work this way is difficult unless they know they are going to be sharing things publicly.
  3. One of the biggest benefits of sharing code and data is in competing for grants. Funding agencies want to know how the money they spend will benefit science as a whole, and being able to make a compelling case that you share your code and data, and that it is used by others in the community, is important for satisfying this goal of the funders. Most major funding agencies have now codified this requirement in the form of data management plans that describe how the data and code will be managed and when and how it will be shared. Having a well established track record in sharing makes a compelling argument that you will benefit science beyond your own publications, and I have definitely benefited from that in the grant review process.

What barriers exist in your mind to more people doing so?

There is a lot of fear about openly sharing data and code. People believe that making their work public will result in being scooped or that their efforts will be criticized because they are too messy. There is a strong perception that sharing code and data takes a lot of extra time and effort. So the biggest barriers are sociological at the moment.

To address these barriers we need to do a better job of providing credit to scientists for sharing good data and code. We also need to do a better job of educating folks about the benefits of doing so. For example, in my experience, the time and effort dedicated to developing and documenting code and data as if you plan to share it actually ends up saving the individual researcher time in the long run. This happens because when you return to a project a few months or years after the original data collection or code development, it is much easier if the code and data are in a form that makes them easy to work with.

How has twitter helped your research efforts?

Twitter has been great for finding out about exciting new research, spreading the word about our research, getting feedback from a broad array of folks in the science and tech community, and developing new collaborations. A recent paper that I co-authored in PLOS Biology actually started as a conversation on twitter.

How has R Open Science helped you with your work, or why is it important or not?

rOpenSci is making it easier for scientists to acquire and analyze the large amounts of scientific data that are available on the web. They have been wrapping many of the major science related APIs in R, which makes these rich data sources available to large numbers of scientists who don’t even know what an API is. It also makes it easier for scientists with more developed computational skills to get research done. Instead of spending time figuring out the APIs for potentially dozens of different data sources, they can simply access rOpenSci’s suite of packages to quickly and easily download the data they need and get back to doing science. My research group has used some of their packages to access data in this way and we are in the process of developing a package with them that makes one of our Python tools for acquiring ecological data (the EcoData Retriever) easy to use in R.

Any practical tips you’d share on making sharing easier?

We actually wrote a paper on this for data last year: Nine simple ways to make it easier to (re)use your data

One of the things I think is most important when sharing both code and data is to use standard licenses. Scientists have a habit of thinking they are lawyers and writing their own licenses and data use agreements that govern how the data and code can be used. This leads to a lot of ambiguity and difficulty in using data and code from multiple sources. Using standard open source and open data licenses vastly simplifies the process of making your work available and will allow science to benefit the most from your efforts.

And do you think sharing data/methods will help you get tenure? Evidence it has helped others?

I have tenure and I certainly emphasized my open science efforts in my packet. One of the big emphases in tenure packets is demonstrating the impact of your research, and showing that other people are using your data and code is a strong way to do this. Whether or not this directly impacted the decision to give me tenure I don’t know. Sharing data and code is definitely beneficial to competing for grants (as I described above) and increasingly to publishing papers, as many journals now require the inclusion of data and code for replication. It also benefits your reputation (as I described above). Since tenure at most research universities is largely a combination of papers, grants, and reputation, I think that sharing at least indirectly increases one’s chances of getting tenure.

UPDATE: Added missing link to Titus Brown’s post: http://ivory.idyll.org/blog/2014-eli-conversation.html

EcoData Retriever: quickly download and cleanup ecological data so you can get back to doing science


If you’ve ever worked with scientific data, your own or someone else’s, you know that you can end up spending a lot of time just cleaning up the data and getting it into a state that makes it ready for analysis. This involves everything from cleaning up non-standard null values to completely restructuring the data so that tools like R, Python, and database management systems (e.g., MS Access, PostgreSQL) know how to work with them. Doing this for one dataset can be a lot of work, and if you work with a number of different databases like I do, the time and energy can really take away from the time you have to actually do science.

Over the last few years Ben Morris and I have been working on a project called the EcoData Retriever to make this process easier and more repeatable for ecologists. With a click of a button, or a single call from the command line, the Retriever will download an ecological dataset, clean it up, restructure and assemble it (if necessary), and install it into your database management system of choice (including MS Access, PostgreSQL, MySQL, or SQLite) or provide you with CSV files to load into R, Python, or Excel.

Just click on the box to get the data:

[Screenshot: the Retriever’s main interface]

Or run a command like this from the command line:

retriever install msaccess BBS --file myaccessdb.accdb

This means that instead of spending a couple of days wrangling a large dataset like the North American Breeding Bird Survey into a state where you can do some science, you just ask the Retriever to take care of it for you. If you work actively with Breeding Bird Survey data and you always like to use the most up to date version with the newest data and the latest error corrections, this can save you a couple of days a year. If you also work with some of the other complicated ecological datasets like Forest Inventory and Analysis and Alwyn Gentry’s Forest Transect data, the time savings can easily be a week.
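If you prefer flat files to a database, the same workflow applies: have the Retriever write CSV files and load them directly into Python or R. The sketch below is a hypothetical example (the csv engine call and the output file name are assumptions; check what the Retriever actually writes on your machine):

retriever install csv BBS

# Then, in Python, find the generated files and load one with pandas.
import glob

import pandas as pd

print(glob.glob("*.csv"))               # list the CSV files the Retriever produced
counts = pd.read_csv("BBS_counts.csv")  # hypothetical file name; adjust as needed
print(counts.head())                    # quick look at the cleaned-up table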

The Retriever handles things like:

  1. Creating the underlying database structures
  2. Automatically determining delimiters and data types
  3. Downloading the data (and if there are over 100 data files that can be a lot of clicks)
  4. Transforming data into standard structures so that common tools in R, Python, and relational database management systems know how to work with it (e.g., converting cross-tabulated data; see the sketch after this list)
  5. Converting non-standard null values (e.g., 999.0, -999, NoData) into standard ones
  6. Combining multiple data files into single tables
  7. Placing all related tables in a single database or schema
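To make items 4 and 5 concrete, here is a small pandas illustration of the kind of transformation involved. This is purely illustrative and not the Retriever’s internal code; the example data are made up:

# Illustration only: reshape a cross-tabulated table and standardize null values.
import pandas as pd

# A tiny cross-tabulated example: one column per species.
wide = pd.DataFrame({
    "site": ["A", "B"],
    "sp1": [3, -999],   # -999 used here as a non-standard null
    "sp2": [0, 12],
})

# Reshape to long format: one row per site/species combination.
long_form = wide.melt(id_vars="site", var_name="species", value_name="count")

# Replace non-standard null codes with a standard missing value.
long_form["count"] = long_form["count"].replace([-999, 999.0], float("nan"))

print(long_form)

The Retriever does this sort of cleanup automatically, across many files and formats, which is where the time savings described above come from.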

The EcoData Retriever currently includes a number of large, openly available, ecological datasets (see a full list here). It’s also easy to add new datasets to the EcoData Retriever if you want to. For simple data tables a Retriever script can be as simple as:

name: Name of the dataset
description: A brief description of the dataset of ~25 words.
shortname: A one word name for the dataset
table: MyTableName, http://awesomedatasource.com/dataset

The Retriever has an installer for Windows, an App for Mac, and a package for Ubuntu/Debian Linux. See the quick explanation of how to get started and then go take it for a spin.

If you’re interested in reading more about the Retriever you can check out the website or read our paper on the project.

We also have some exciting new features on the To Do list including:

  • Automatically cleaning up the taxonomy using existing services
  • Providing detailed tracking of the provenance of your data by recording the date it was downloaded, the version of the software used, and information about what cleanup steps the Retriever performed
  • Integration into R and Python

Let us know what you think we should work on next in the comments.

I am a graduate student. I have fibromyalgia.

This is a guest post by Elita Baldridge (@elitabaldridge). She is a graduate student in our group who has been navigating the development of a chronic illness during graduate school. She is sharing her story to help spread awareness of the challenges faced by graduate students with chronic illnesses. She wrote an excellent post on the PhDisabled blog about the initial development of her illness that I encourage you to read first.

During my time as a Ph.D. student, I developed a host of bizarre, productivity-eating symptoms, and have been trying to make progress on my dissertation while also spending a lot of time at doctors’ offices trying to figure out what is wrong with me. I wrote an earlier blog post about dealing with the development of a chronic illness as a graduate student at the PhDisabled Blog.

When the rheumatologist handed me a yellow pamphlet labeled “Fibromyalgia”, I felt a great sense of relief. My mystery illness had a diagnosis, so I had a better idea of what to expect. While chronic, at least fibromyalgia isn’t doing any permanent damage to joints or brain. However, there isn’t a lot known about it, the treatment options are limited, and the primary literature is full of appallingly small sample sizes.

There are many symptoms, which basically consist of feeling like you have the flu all the time, with all the associated aches and pains. The worst one for me, because it interferes with my highly prized ability to think, is the cognitive dysfunction, or, in common parlance, “fibro fog”. This is a problem when you are actively trying to get research done, as sometimes you remember what you need to do, but can’t quite figure out how navigating to your files on your computer works, what to do with the mouse, or how to get the computer on. I frequently finish sentences with a wave of my hand and the word “thingy”. Sometimes I cannot do simple math, as I do not know what the numbers mean, or what to do next. Depending on the severity, the cognitive dysfunction can render me unable to work on my dissertation, as I simply cannot understand what I am supposed to do. I’m not able to drive anymore, due to the general fogginess, but I never liked driving that much anyway. Sometimes I need a cane, because my balance is off or I cannot walk in a straight line, and I need the extra help. Sometimes I can’t be in a vertical position, because verticality renders me so dizzy that I vomit.

I am actually doing really well for a fibromyalgia patient. I know this, because the rheumatologist who diagnosed me told me that I was doing remarkably well. I am both smug that I am doing better than average, because I’m competitive that way, and also slightly disappointed that this level of functioning is the new good. I would have been more disappointed, only I had a decent amount of time to get used to the idea that whatever was going on was chronic and “good” was going to need to be redefined. My primary care doctor had already found a medication that relieved the aches and pains before I got an official diagnosis. Thus, before receiving an official diagnosis, I was already doing pretty much everything that can be done medication wise, and I had already figured out coping mechanisms for the rest of it. I keep to a strict sleep schedule, which I’ve always done anyway, and I’ve continued exercising, which is really important in reducing the impact of fibromyalgia. I should be able to work up my exercise slowly so that I can start riding my bicycle short distances again, but the long 50+ mile rides I used to do are probably out.

Fortunately, my research interests have always been well suited to a macroecological approach, which leaves me well able to do science when my brain is functioning well enough. I can test my questions without having to collect data from the field or lab, and it’s easy to do all the work I need to from home. My work station is set up right by the couch, so I can lie down and rest when I need to. I have to be careful to take frequent breaks, lest working too long in one position cause a flare up. This is much easier than going up to campus, which involves putting on my healthy person mask to avoid sympathy, pity, and questions, and either a long bus ride or getting a ride from my husband. And sometimes, real people clothes and shoes hurt, which means I’m more comfortable and spending less energy if I can just wear pajamas and socks, instead of jeans and shoes.

Understand that I am not sharing all of this because I want sympathy or pity. I am sharing my experience as a Ph.D. student developing and being diagnosed with a chronic illness because I, unlike many students with any number of other short term or long term disabling conditions, have a lot of support. Because I have a great deal of family support, departmental support, and support from the other Weecologists and our fearless leaders, I should be able to limp through the rest of my Ph.D. If I did not have this support, it is very likely that I would not be able to continue with my dissertation. If I did not have support from ALL of these sources, it is also very likely that I would not be able to continue. While I hope that I will be able to contribute to science with my dissertation, I also think that I can contribute to science by facilitating discussion about some of the problems that chronically ill students face, and hopefully finding solutions to some of those problems. To that end, I have started an open GitHub repository to provide a database of resources that can help students continue their training, and I would welcome additional contributions. Unfortunately, there doesn’t seem to be a lot out there. Many medical Leave of Absence programs prevent students from accessing university resources, which frequently includes access to subsidized health insurance and potentially the student’s doctor, as well as removing the student from deferred student loans.

I have fibromyalgia. I also have contributions to make to science. While I am, of course, biased, I think that some contribution is better than no contribution. I’d rather be defined by my contributions than by my limitations, and I’m glad that my university and my lab aren’t defining me by my limitations, but are instead helping me to make contributions to science to the best of my ability.

British Ecological Society journals now allow preprints

The British Ecological Society has announced that it will now allow the submission of papers with preprints (formal language here). This means that you can now submit preprinted papers to Journal of Ecology, Journal of Animal Ecology, Methods in Ecology and Evolution, Journal of Applied Ecology, and Functional Ecology. By allowing preprints BES joins the Ecological Society of America, which instituted a pro-preprint policy last year. While BES’s formal policy is still a little more vague than I would like*, they have confirmed via Twitter that even preprints with open licenses are OK as long as they are not updated following peer review.

Preprints are important because they:

  • Speed up the progress of science by allowing research to be discussed and built on as soon as it is finished
  • Allow early career scientists to establish themselves more rapidly
  • Improve the quality of published research by allowing a potentially large pool of reviewers to comment on and improve the manuscript (see our excellent experience with this)

BES getting on board with preprints is particularly great news because the number of ecology journals that do not allow preprints is rapidly shrinking to the point that ecologists will no longer need to consider where they might want to submit their papers when deciding whether or not to post preprints. The only major blocker at this point to my mind is Ecology Letters. So, my thanks to BES for helping move science forward!

*Which is why I waited 3 weeks for clarification before posting.

Exploring MaxEnt based species-area relationship predictions [Research Summary]

This is a guest post by Dan McGlinn, a weecology postdoc (@DanMcGlinn on Twitter). It is a Research Summary of: McGlinn, D.J., X. Xiao, and E.P. White. 2013. An empirical evaluation of four variants of a universal species–area relationship. PeerJ 1:e212 http://dx.doi.org/10.7717/peerj.212. These posts are intended to help communicate our research to folks who might not have the time, energy, expertise, or inclination to read the full paper, but who are interested in a <1000 word general-language summary.

It is well established in ecology that if the area of a sample is increased you will, in general, see an increase in the number of species observed. There are a lot of different reasons why larger areas harbor more species: larger areas contain more individuals, habitats, and environmental variation, and they are likely to cross more barriers to dispersal – all things that allow more species to exist together in an area. We typically observe relatively smooth and simple looking increases in species number with area. This observation has mystified ecologists: How can a pattern that should be influenced by many different and biologically idiosyncratic processes appear so similar across scales, taxonomic groups, and ecological systems?

Recently a theory was proposed (Harte et al. 2008, Harte et al. 2009) which suggests that detailed knowledge of the complex processes that influence the increase in species number may not be necessary to accurately predict the pattern. The theory proposes that ecological systems tend to simply be in their most likely configuration. Specifically, the theory suggests that if we have information on the total number of species and individuals in an area then we can predict the number of species in smaller portions of that area.

Published work on this new theory suggests that it has potential for accurately predicting how species number changes with area; however, it has not been widely appreciated that there are actually four different ways that the theory can be operationalized to make a prediction. We were interested in learning:

  1. Can the theory accurately predict how species number changes with area across many different ecological systems?
  2. Do some versions of the theory consistently perform better than others?

To answer these questions we needed data. We searched online and made requests to our colleagues for datasets that documented the spatial configuration of ecological communities.  We were able to pull together a collection of 16 plant community datasets. The communities spanned a wide range of systems including hyper-diverse, old-growth tropical forests, a disturbance prone tropical forest, temperate oak-hickory and pine forests, a Mediterranean mixed-evergreen forest, a low diversity oak woodland, and a serpentine grassland.

Fig 1. A) Results from one of the datasets; the open circles display the observed data and the lines are the four different versions of the theory we examined. B) A comparison of the observed and predicted number of species across all areas and communities we examined for one of the versions of the theory.

Across the different communities we found that the theory was generally quite accurate at predicting the number of species (Fig 1 above), and that one of the versions of the theory was typically better than the others in terms of the accuracy of its predictions and the quantity of information it required to make predictions. There were a couple of noteworthy exceptions in our results. The low diversity oak woodland and the serpentine grassland both displayed unusual patterns of change in richness. The species in the serpentine grassland were more spatially clustered than was typical in the other communities and thus were better described by the versions of the theory that predicted stronger clustering. Abundance in the oak woodland was primarily distributed across two species, whereas the other 5 species were only observed once or twice. This unusual pattern of abundance resulted in a rather unique S-shaped relationship between the number of species and area and required inputting the observed species abundances to accurately model the pattern.

The two key findings from our study were

  1. The theory provides a practical tool for accurately predicting the number of species in sub-samples of a given site using only information on the total number of species and individuals in that entire area.
  2. The different versions of the theory do make different predictions, and one appears to be superior.

Of course there are still a lot of interesting questions to address. One question we are interested in is whether or not we can predict the inputs of the theory (the total number of species and individuals for a community) using a statistical model and then plug those predictions into the theory to generate accurate fine-scaled predictions. This would be important for conservation applications because it would allow scientists to estimate the spatial pattern of rarity and diversity in a community without having to sample it directly. We are also interested in future development of the theory that provides predictions for the number of species at areas that are larger (rather than smaller) than the reference point, which may have greater applicability to conservation work.

The accuracy of the theory also has the potential to help us understand the role of specific biological processes in shaping the relationship between species number and area.  Because the theory didn’t include any explicit biological processes, our findings suggest that specific processes may only influence the observed relationship indirectly through the total number of species and individuals. Our results do not suggest that biological processes are not shaping the relationship but only that their influence may be rather indirect.  This may be welcome news to practitioners who rely on the relationship between species number and area to devise reserve designs and predict the effects of habitat loss on diversity.

If you want to learn more you can read the full paper (it’s open access!) or check out the code underlying the analysis (it’s open source and includes instructions for replicating the analysis!).

References:

Harte, J., A. B. Smith, and D. Storch. 2009. Biodiversity scales from plots to biomes with a universal species-area curve. Ecology Letters 12:789–797.

Harte, J., T. Zillio, E. Conlisk, and A. B. Smith. 2008. Maximum entropy and the state-variable approach to macroecology. Ecology 89:2700–2711.

How I stay sane in science and academia: My Why File

Doing science in academia involves a lot of rejection and negative feedback. Between grant agencies’ single-digit funding rates, pressure to publish in a few "top" journals all of which have rejection rates of 90% or higher [1], and the growing gulf between the number of academic jobs and the number of graduate students and postdocs [2], spending even a small amount of time in academia pretty much guarantees that you’ll see a lot of rejection. In addition, even when things are going well we tend to focus on providing as much negative feedback as possible. Paper reviews, grant reviews, and most university evaluation and committee meetings are focused on the negatives. Even students with awesome projects that are progressing well and junior faculty who are cruising towards tenure have at least one meeting a year where someone in a position of power will try their best to enumerate all of the things you could be doing better [3]. This isn’t always a bad thing [4] and I’m sure it isn’t restricted to academia or science (these are just the worlds I know), but it does make keeping a positive attitude and reasonable sense of self-worth a bit… challenging.

One of the things that I do to help me remember why I keep doing this is my Why File. It’s a file where I copy and paste reminders of the positive things that happen throughout the year [5]. These typically aren’t the sort of things that end up on my CV. I have my CV for tracking that sort of thing and frankly the number of papers I’ve published and grants I’ve received isn’t really what gets me out of bed in the morning. My Why File contains things like:

  • Email from students in my courses, or comments on evaluations, telling me how much of an impact the skills they learned have had on their ability to do science
  • Notes from my graduate students, postdocs, and undergraduate researchers thanking me for supporting them, inspiring them, or giving them good advice
  • Positive feedback from mentors and people I respect that help remind me that I’m not an impostor
  • Tweets from folks reaffirming that an issue or approach I’m advocating for is changing what they do or how they do it
  • Pictures of thank you cards or creative things that people in my lab have done
  • And even things that in a lot of ways are kind of silly, but that still make me smile, like screen shots of being retweeted by Jimmy Wales or of Tim O’Reilly plugging one of my papers.

If you’ve said something nice to me in the past few years be it in person, by email, on twitter, or in a handwritten note, there’s a good chance that it’s in my Why File helping me keep going at the end of a long week or a long day. And that’s the other key message of this post. We often don’t realize how important it is to say thanks to the folks who are having a positive influence on us from time to time. Or, maybe we feel uncomfortable doing so because we think these folks are so talented and awesome that they don’t need it, or won’t care, or might see this positive feedback as silly or disingenuous. Well, as Julio Betancourt once said, "You can’t hug your reprints", so don’t be afraid to tell a mentor, a student, or a colleague when you think they’re doing a great job. You might just end up in their Why File.

What do you do to help you stay sane in academia, science, or any other job that regularly reminds you of how imperfect you really are?


[1] The idea that it’s where you publish, not what you publish, that matters is a problem, but not the subject of this post.

[2] There are lots of great ways to use a PhD, but unfortunately not everyone takes that to heart.

[3] Of course the people doing this are (at least sometimes) doing so with the best intentions, but I personally think it would be surprisingly productive to just say, "You’re doing an awesome job. Keep it up." every once in a while.

[4] There is often a goal to the negativity, e.g., helping a paper or person reach their maximum potential, but again I think we tend to undervalue the consequences of this negativity in terms of motivation [4b].

[4b] Hmm, apparently I should write a blog post on this since it now has two footnotes worth of material.

[5] I use a Markdown file, but a simple text file or a MS Word document would work just fine as well for most things.

New journals that are changing the way we publish

Academic publishing is in a dynamic state these days with large numbers of new journals popping up on a regular basis. Some of these new journals are actively experimenting with changing traditional approaches to publication and peer review in potentially important ways. So, I thought I’d provide a quick introduction to some of the new kids on the block that I think have the potential to change our approach to academic publishing.

PeerJ

PeerJ is in some ways a fairly standard PLOS One style open access journal. Like PLOS One they only publish primary research (no reviews or opinion pieces) and that research is evaluated only on the quality of the science, not on its potential impact. However, what makes PeerJ different (and the reason that I’m volunteering my time as an associate editor for them) is their philosophy that in the era of the modern web it should be both cheap and easy to publish scientific papers:

We aim to drive the costs of publishing down, while improving the overall publishing experience, and providing authors with a publication venue suitable for the 21st Century.

The pricing model is really interesting. Instead of a flat fee per paper, PeerJ uses lifetime author memberships. For $99 (total, for life) you can publish 1 paper/year. For $199 you can publish 2 papers/year, and for $299 you can publish unlimited papers for life. Every author has to have a membership, so for a group of 5 authors publishing in PeerJ for the first time it would cost $495, but that’s still about 1/3 of what you’d pay at PLOS One and 1/6 of what you’d pay to make a paper open access at a Wiley journal. And that same group of authors can publish again next year for free. How they can publish for so much less than anyone else (and whether it is sustainable) is a bit of an open question, but they have clearly spent a lot of time (and serious publishing experience) thinking about how to automate and scale publication in an affordable manner, both technically and in terms of things like typesetting (since single-column text with no attempt to wrap text around tables and figures is presumably much easier to typeset). If you “follow the money” as Brian McGill suggests, then the path may well lead you to PeerJ.

Other cool things about PeerJ:

  • Optional open review (authors decide whether reviews are posted with accepted manuscripts, reviewers decide whether to sign reviews)
  • Ability to comment on manuscripts with points being given for good comments.
  • A focus on making life easy for authors, reviewers, and editors, including a website that is an absolute joy to interact with compared to most journal websites, and a lack of rigid formatting guidelines that have to be satisfied for a paper to be reviewed.

We want authors spending their time doing science, not formatting. We include reference formatting as a guide to make it easier for editors, reviewers, and PrePrint readers, but will not strictly enforce the specific formatting rules as long as the full citation is clear. Styles will be normalized by us if your manuscript is accepted.

Now there’s a definable piece of added value.

Faculty of 1000 Research

Faculty of 1000 Research’s novelty comes from a focus on post-publication peer review. Like PLOS One & PeerJ it reviews based on quality rather than potential impact, and it has a standard per-paper pricing model. However, when you submit a paper to F1000 it is immediately posted publicly online, as a preprint of sorts. They then contact reviewers to review the manuscript. Reviews are posted publicly with the reviewers’ names. Each review includes a status designation of “Approved” (similar to Accept or Minor Revisions), “Approved with Reservations” (similar to Major Revisions), or “Not Approved” (similar to Reject). Authors can upload new versions of the paper to satisfy reviewers’ comments (along with a summary/explanation of the changes made), and reviewers can provide new reviews and new ratings. If an article receives two “Approved” ratings, or one “Approved” and two “Approved with Reservations” ratings, then it is considered accepted. It is then identified on the site as having passed peer review, and is indexed in standard journal databases. The peer review process is also open to anyone, so if you want to write a review of a paper you can, no invite required.

It’s important to note that the individuals who are invited to review the paper are recommended by the authors. They are checked to make sure that they don’t have conflicts of interest and are reasonably qualified before being invited, but there isn’t a significant editorial hand in selecting reviewers. This could be seen as resulting in biased reviews, since one is likely to select reviewers who may be biased towards liking your work. However, this is tempered by the fact that the reviewer’s name and review are publicly attached to the paper, and therefore they are putting their scientific reputation on the line when they support a paper (as argued more extensively by Aarssen & Lortie 2011).

In effect, F1000 is modeling a system of exclusively post-publication peer review, with a slight twist of not considering something “published/accepted” until a minimum number of positive reviews are received. This is a bold move since many scientists are not comfortable with this model of peer review, but it has the potential to vastly speed up the rate of scientific communication in the same way that preprints do. So, I for one think this is an experiment worth conducting, which is why I recently reviewed a paper there.

Oh, and ecologists can currently publish there for free (until the end of the year).

Frontiers in X

I have the least personal experience with the Frontiers journals (including the soon to launch Frontiers in Ecology & Evolution). Like F1000Research, the groundbreaking nature of Frontiers is in peer review, but instead of moving towards a focus on post-publication peer review they are attempting to change how pre-publication review works. They are trying to make review a more collaborative effort between reviewers and authors to improve the quality of the paper.

As with PeerJ and F1000Research, Frontiers is open access and has a review process that focuses on “the accuracy and validity of articles, not on evaluating their significance”. What makes Frontiers different is their two-step review process. The first step appears to be a fairly standard pre-publication peer review, where “review editors” provide independent assessments of the paper. The second step (the “Interactive Review phase”) is where the collaboration comes in. Using an “Interactive Review Forum” the authors and all of the reviewers (and if desirable the associate editor and even the editor in chief for the subdiscipline) work collaboratively to improve the paper to the point that the reviewers support its publication. If disagreements arise the associate editor is tasked with acting as a mediator in the conversation. If a paper is eventually accepted then the reviewers’ names are included with the paper and taken as indicating that they sign off on the quality of the paper (see Aarssen & Lortie 2011 for more discussion of this idea; reviewers can withdraw from the process at any point, in which case their names are not included).

I think this is an interesting approach because it attempts to make the review process a friendlier and more interactive process that focuses on quickly converging through conversation on acceptable solutions rather than slow long-form exchanges through multiple rounds of conventional peer review that can often end up focusing as much on judging as improving. While I don’t have any personal experiences with this system I’ve seen a number of associate editors talk very positively about the process at Frontiers.

Conclusions

This post isn’t intended to advocate for any of these particular journals or approaches. These are definitely experimental and we may find that some of them have serious limitations. What I do advocate for is that we conduct these kinds of experiments with academic publishing and support the folks who are taking the lead by developing and test driving these systems to see how they work. To do anything else strikes me as accepting that current academic publishing practices are at their global optimum. That seems fairly unlikely to me, which makes the scientist in me want to explore different approaches so that we can find out how to best evaluate and improve scientific research.

UPDATE: Fixed link to the Faculty of 1000 Research paper that I reviewed. Thanks Jeremy!

UPDATE 2: Added a missing link to Faculty of 1000 Research’s main site.

UPDATE 3: Fixed the missing link to Frontiers in Ecology & Evolution. Apparently I was seriously linking challenged this morning.

EcoBloggers: The ecology blog aggregator

Screenshot of EcoBloggers website

EcoBloggers is a relatively new blog aggregator started by the awesome International Network of Next-Generation Ecologists (INNGE). Blog aggregators pull together posts from a number of related blogs to provide a one stop shop for folks interested in that topic. The most famous example of a blog aggregator in science is probably Research Blogging. I’m a big fan of EcoBloggers for three related reasons.

  1. It provides easy access to the conversations going on in the ecology blogosphere for folks who don’t have a well organized system for keeping up with blogs. If your only approach to keeping up with blogs is to check them yourself via your browser when you have a few spare minutes (or when you’re procrastinating on writing that next paper or grant) it really helps if you don’t have to remember to check a dozen or more sites, especially since some of those sites won’t post particularly frequently. Just checking EcoBloggers can quickly let you see what everyone’s been talking about over the last few days or weeks. Of course, I would really recommend using a feed reader both for tracking blogs and journal tables of contents, but lots of folks aren’t going to do that and blog aggregators are the next best thing.
  2. EcoBloggers helps new blogs, blogs with smaller audiences, and those that don’t post frequently, reach the broader community of ecologists. This is important for building a strong ecological blogging community by keeping lots of bloggers engaged and participating in the conversation.
  3. It helps expose readers to the breadth of conversations happening across ecology. This helps us remember that not everyone thinks like us or is interested in exactly the same things.

The site is also nicely implemented so that it respects the original sources of the content:

  1. It’s opt-in
  2. Each post lists the name of the originating blog and the original author
  3. All links take you to the original source
  4. It aggregates using RSS feeds, so you can set your site so that only partial articles show up on EcoBloggers (of course this requires you to ignore my advice on providing full feeds)

Are there any downsides to having your blog on EcoBloggers? I don’t think so. The one issue that might be raised is that if someone reads your article on EcoBloggers, then they may not actually visit your site and your stats could end up being lower than they would have otherwise. If any of the ecology blogs were making a lot of money off of advertising I could see this being an issue, but they aren’t. We’re presumably all here to engage in scientific dialogue and to communicate our ideas as broadly as possible. This is only aided by participating in an aggregator because your writing will reach more people than it would otherwise.

So, check out EcoBloggers, use it to keep up with what’s going on in the ecology blogosphere, and sign up your blog today.

UPDATE: According to a short chat on Twitter, EcoBloggers will soon be automatically shortening the posts on their site even if your blog is providing full feeds. This means that if you didn’t buy my arguments above and were worried about losing page views, there’s nothing to worry about. If the first paragraph or so of your post is interesting enough to get people hooked, they’ll have to come over to your blog to read the rest.

An open letter to Ecology Letters and the British Ecological Society about preprints

UPDATE: Both Ecology Letters and the British Ecological Society journals now allow preprints. Thanks to both groups for listening to the community and supporting the rapid and open exchange of scientific ideas.

Dear Ecology Letters and the British Ecological Society,

I am writing to ask that you support the scientific good by allowing the submission of papers that have been posted as preprints. I or my colleagues have reached out to you before without success, but I have heard through various grapevines that both of you are discussing this possibility and I want to encourage you to move forward with allowing this important practice.

The benefits of preprints to science are substantial. They include:

  1. More rapid communication and discussion of important scientific results
  2. Improved quality of published research by allowing for more extensive pre-publication peer review
  3. A fair mechanism for establishing precedence that is not contingent on the idiosyncrasies of formal peer review
  4. A way for early-career scientists to demonstrate productivity and impact on a time scale that matches their need to apply for postdoctoral fellowships and jobs

I am writing to you specifically because your journals represent the major stumbling block for those of us interested in improving science by posting preprints. Your journals either explicitly do not allow the submission of papers that have preprints posted online or lack explicit statements that it is OK to do so. This means that if there is any possibility of eventually submitting a paper to one of these journals then researchers must avoid posting preprints.

The standard justification that journals give for not allowing preprints is that they constitute “prior publication”. However, this is not an issue for two reasons. First, preprints are not peer reviewed. They are the equivalent of a long established practice in biology of sending manuscripts to colleagues for friendly review and to make them aware of cutting edge work. They simply take advantage of the internet to scale this to larger numbers of colleagues. Second, the vast majority of publication outlets do not believe that preprints represent prior publication, and therefore the publication ethics of the broader field of academic publishing clearly allows this. In particular Science, Nature, PNAS, the Ecological Society of America, the Royal Society, Springer, and Elsevier all generally allow the posting of preprints. Nature even wrote about this policy nearly a decade ago stating that:

Nature never wishes to stand in the way of communication between researchers. We seek rather to add value for authors and the community at large in our peer review, selection and editing… Communication between researchers includes not only conferences but also preprint servers… As first stated in an editorial in 1997, and since then in our Guide to Authors, if scientists wish to display drafts of their research papers on an established preprint server before or during submission to Nature or any Nature journal, that’s fine by us.

If you’d like to learn more about the value of preprints, and see explanations of why some of the other common concerns about preprints are unjustified, some colleagues and I have published a paper on The Case for Open Preprints in Biology.

So, I am asking, for the good of science, and to bring your journals in line with widely accepted publication practices, that you please move quickly to explicitly allow the submission of papers that have been posted as preprints.

Regards,
Ethan White
