Jabberwocky Ecology

Is it OK to cite preprints? Yes, yes it is.

Should you cite preprints in your papers, and should journals allow this? This is a topic that gets debated periodically. The most recent round of Twitter debate started last week when Martin Hunt pointed out that the journal Nucleic Acids Research wouldn’t allow him to cite preprints. A couple of days later I suggested that journals that don’t allow citing preprints are putting their authors at risk by forcing them not to cite relevant work. Roughly forty games of Sleeping Queens later (my kid is really into Sleeping Queens) I reopened Twitter and found a roiling debate over whether citing preprints was appropriate at all.

The basic argument against citing preprints is that they aren’t peer reviewed. E.g.,

and that this could lead to the citation of bad work and the potential decay of science. E.g.,

There are three reasons I disagree with this argument:

  1. We already cite lots of non-peer reviewed things in ecology
  2. Lots of fields already do this and they are doing just fine
  3. Responsibility for the citation lies with the citer

We already cite non-peer reviewed things in ecology

As Auriel Fournier, Stephen Heard, Michael Hoffman, Terry McGlynn and ATMoody pointed out, we already cite lots of things that aren’t peer reviewed, including government agency reports, white papers, and other “grey literature”.

We also cite lots of other really important non-peer reviewed things like data and software. We’ve been doing this for decades. Ecology hasn’t become polluted with pseudoscience. It will all be OK.

Lots of other fields already do this

One of the things I find amusing/exhausting about biologists debating preprints is the general ignorance of the history and use of preprints in other fields. It’s a bit like debating the name of an actor for two hours when you could easily look it up on Google.

In this particular case (as Eric Pedersen pointed out) we know that citation of preprints isn’t going to cause problems for the field because it hasn’t caused issues in other fields and has almost invariably become standard practice in fields that use preprints. Unless you think Physics and Math are having real issues, it’s difficult to argue that this is a meaningful problem. Just ask a physicist.

You are responsible for your citations

Why hasn’t citing unreviewed work caused the wheels to fall off of science? Because citing appropriate work in the proper context is part of our job. There are good preprints and bad preprints, good reports and bad reports, good data and bad data, good software and bad software, and good papers and bad papers. As Belinda Phipson, Casey Green, Dave Harris and Sebastian Raschka point out it is up to us as the people citing research to make professional judgments about what is good science and should be cited. Casey’s take captures my thoughts on this exactly:

TLDR

So yes, you should cite preprints and other unreviewed things that are important for your work. That’s called proper attribution. It has worked in ecology and other fields for decades. It will continue to work because we are scientists and evaluating the science we cite is part of our jobs. You can even cite this blog post if you want to.

Thanks to everyone both linked here and not for the spirited discussion. Sorry I wasn’t there, but Sleeping Queens is a pretty awesome game.

UPDATE: For those of you new to this discussion, it’s been going on for a long time even in biology. Here is Graham Coop’s excellent post from nearly 4 years ago.

UPDATE: Discussion of why it’s important to put preprint citations in the reference list.

Data Analyst position in ecology research group

The Weecology lab group run by Ethan White and Morgan Ernest at the University of Florida is seeking a Data Analyst to work collaboratively with faculty, graduate students, and postdocs to understand and model ecological systems. We’re looking for someone who enjoys tidying, managing, manipulating, visualizing, and analyzing data to help support scientific discovery.

The position will include:

  • Organizing, analyzing, and visualizing large amounts of ecological data, including spatial and remotely sensed data. Modifying existing analytical approaches and data protocols as needed.
  • Planning and executing the analysis of data related to newly forming questions from the group. Assisting in the statistical analysis of ecological data, as determined by the needs of the research group.
  • Providing assistance and guidance to members of the research group on existing research projects. Working collaboratively with undergraduates, graduate students and postdocs in the group and from related projects.
  • Learning new analytical tools and software as needed.

This is a staff position in the group and will be focused on data management and analysis. All members of this collaborative group are considered equal partners in the scientific process, and this position will be actively involved in collaborations. Weecology believes in the importance of open science, so most work done as part of this position will involve writing open source code, using open source software, and producing and using open data.

Weecology is a partnership between the White Lab, which studies ecology using quantitative and computational approaches, and the Ernest Lab, which tends to be more field and community ecology oriented. The Weecology group supports and encourages members interested in a variety of career paths. Former weecologists are currently employed in the tech industry, with the National Ecological Observatory Network, as faculty at teaching-focused colleges, and as postdocs and faculty at research universities. We are also committed to supporting and training a diverse scientific workforce. Current and former group members encompass a variety of racial and ethnic backgrounds from the U.S. and other countries, members of the LGBTQ community, military veterans, people with chronic illnesses, and first-generation college students. More information about the Weecology group and respective labs is available on our website. You can also check us out on Twitter (@skmorgane, @ethanwhite, @weecology), GitHub, and our blog Jabberwocky Ecology.

The ideal candidate will have:

  • Experience working with data in R or Python, some exposure to version control (preferably Git and GitHub), and potentially some background with database management systems (e.g., PostgreSQL, SQLite, MySQL) and spatial data.
  • Research experience in ecology
  • Interest in open approaches to science
  • Experience collecting or working with ecological data

That said, don’t let the absence of any of these stop you from applying. If this sounds like a job you’d like to have please go ahead and put in an application.

We currently have funding for this position for 2.5 years. Minimum salary is $40,000/year (which goes a pretty long way in Gainesville), but there is significant flexibility in this number for highly qualified candidates. We are open to the possibility of someone working remotely. The position will remain open until filled, with initial review of applications beginning on May 5th. If you’re interested in applying you can do so through the official UF position page. If you have any questions or just want to let us know that you’re applying you can email Weecology’s project manager Glenda Yenni at glenda@weecology.org.

Postdoctoral research position in the Temporal Dynamics of Communities

The Weecology lab group run by Morgan Ernest and Ethan White at the University of Florida is seeking a post-doctoral researcher to study changes in ecological communities through time. This position will primarily involve broad-scale comparative analyses across communities using large time-series datasets and/or in-depth analyses of our own long-term dataset (the Portal Project). Experience with any of the following is useful, but not required: long-term data, macroecology, paleoecology, quantitative/theoretical ecology, and programming/data analysis in R or Python. The successful applicant will be expected to collaborate on lab projects on community dynamics and develop their own research projects in this area according to their interests.

Weecology is a partnership between the Ernest Lab, which tends to be more field and community ecology oriented and the White Lab, which tends to be more quantitative and computationally oriented. The Weecology group supports and encourages students interested in a variety of career paths. Former weecologists are currently employed in the tech industry, with the National Ecological Observatory Network, as faculty at teaching-focused colleges, and as postdocs and faculty at research universities. We are also committed to supporting and training a diverse scientific workforce. Current and former group members encompass a variety of racial and ethnic backgrounds from the U.S. and other countries, members of the LGBTQ community, military veterans, people with chronic illnesses, and first-generation college students. More information about the Weecology group and respective labs is available on our website. You can also check us out on Twitter (@skmorgane, @ethanwhite, @weecology), GitHub, and our blog Jabberwocky Ecology.

This 2-year postdoc has a flexible start date, but can start as early as June 1st 2017. Interested applicants should contact Dr. Morgan Ernest (skmorgane@ufl.edu) with their CV including a list of three references, a cover letter detailing their research interests/experiences, and one or more research samples (a PDF or link to a scientific product such as a published paper, preprint, software, data analysis code, etc.). The position will remain open until filled, with initial review of applications beginning on April 24th.

Data Retriever 2.0: We handle the data so you can focus on the analysis

We are very excited to announce a major new release of the Data Retriever, our software for making it quick and easy to get clean, ready-to-analyze versions of publicly available data.

The Data Retriever automates the downloading, cleaning, and installing of ecological and environmental data into your choice of databases and flat file formats. Instead of hours tracking down the data on the web, downloading it, trying to import it, running into issues (e.g., non-standard nulls, problematic column names, encoding issues), fixing one problem, and then encountering the next, all you need to do is run a single command from the command line:

$ retriever install csv iris
$ retriever install sqlite breed-bird-survey -f bbs.sqlite

or from R:

> rdataretriever::install('postgres', 'wine-quality')
> portal_data <- rdataretriever::fetch('portal')

The Data Retriever uses information in Frictionless Data datapackage.json files to automatically handle all of the complexities of “simple” data for you. For more complicated datasets, with dozens of components or major data structure issues, the Retriever uses Python scripts as plugins to handle the major data cleaning work and then automatically handles the rest.
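For readers who haven’t seen one, here is a minimal sketch of roughly what a datapackage.json description looks like. The dataset name, URL, and fields below are purely illustrative (this is not one of the Retriever’s actual scripts), and the field names follow general Frictionless Data conventions:

{
  "name": "example-dataset",
  "title": "An example tabular dataset",
  "resources": [
    {
      "name": "counts",
      "url": "https://example.org/counts.csv",
      "schema": {
        "fields": [
          {"name": "site", "type": "string"},
          {"name": "species", "type": "string"},
          {"name": "count", "type": "integer"}
        ]
      }
    }
  ]
}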

To find out more about the Data Retriever check out the websites, the full documentation, and the GitHub repositories for both the Data Retriever and the R Data Retriever package.

Expanded focus and name change

For those of you familiar with the EcoData Retriever, this is the same software with a new name. Challenges with the data end of the analysis pipeline occur across disciplines and our tools work just as well for non-ecological data, so we’ve started adding non-ecological data and changed our name to reflect that. We’d love to hear from anyone interested in leading a push to add data from another discipline or just interested in adding a single favorite dataset.

As part of this we’ve changed the name of the R package from ecoretriever to rdataretriever.

Major changes

The 2.0 release includes a number of major changes including:

  • Python 3 support (a single code base runs on both Python 2 and 3)
  • Adoption of the Frictionless Data datapackage.json standard (replacing our old YAML-like metadata system), including a command line interface for creating and editing datapackage.json files
  • Addition of JSON and XML as available output formats
  • Major expansion of the documentation, which is now hosted at Read the Docs
  • Removal of the graphical user interface (to let us focus that development time on wrappers for other languages)
  • Lots of work under the hood and major improvements in testing
  • A broader scope that now includes non-ecological data

We are also in the process of releasing version 1.0 of the R package. This version adds the new features in the Data Retriever and also includes major stability improvements, in particular in RStudio and on Windows.

We also have a brand new website.

Upgrading to the new version (UPDATED)

To ensure the smoothest upgrade to the new version we recommend:

  1. Run retriever reset scripts from the command line
  2. Uninstall the old version of the EcoData Retriever
  3. Install the new version
  4. Run retriever update from the command line
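As a rough sketch, assuming a pip-based install (the package name below is an assumption; if you installed the EcoData Retriever with a platform installer, remove it the same way you installed it), the sequence looks like:

$ retriever reset scripts    # 1. clear the old dataset scripts
$ pip uninstall retriever    # 2. remove the old EcoData Retriever
$ pip install retriever      # 3. install the new Data Retriever
$ retriever update           # 4. download the new datapackage.json-based scripts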

Acknowledgments

Henry Senyondo is the lead developer for the Data Retriever and has done an amazing job over the past year developing new features and shoring up the fundamentals for the software. He led the work on 2.0 from start to finish.

Akash Goel was a Google Summer of Code student with the project last summer and was responsible for the majority of the work adding Python 3 support and switching the project over to the datapackage.json standard.

Dan McGlinn, the creator of the R package, has continued his excellent leadership of the development of this package. Shawn Taylor, a new contributor, was instrumental in solving the stability issues on Windows/RStudio.

In addition to these core folks, our growing group of contributors to both projects has been invaluable for adding new functionality, fixing bugs, and testing new changes. We are super excited to have contributions from 30 different people and will keep working hard to make sure that everyone feels welcome and supported in contributing to the project.

The level of work done to get these releases out the door was only possible due to the generous support of the Gordon and Betty Moore Foundation’s Data-Driven Discovery Initiative. This support allowed my group to employ Henry as a full-time software engineer to work on these and other projects. This kind of active support for the development and maintenance of research-oriented software makes sustainable software development at universities possible.

The potential for collaborative open lesson development for college coursework

Last week Zack Brym and I formally announced a semester-long Data Carpentry course that we’ve been building over the last year. One of the things I’m most excited about in this effort is our attempt to support collaborative lesson development for university/college coursework.

I’ve experienced first-hand the potential for this sort of collaborative lesson development through the development of workshop lessons in Software Carpentry and Data Carpentry. Many of the workshop lessons developed by these two organizations now have 100+ contributors. As far as I’m aware, Software Carpentry was the first demonstration that large-scale open collaboration on lessons could work (but I’d love to hear of earlier examples if folks are aware of them) and it has resulted in what is widely regarded as really high quality lesson material. Having seen this work so effectively for workshops, I’m interested in seeing how well it can work for full-length courses.

Most college and university courses that I’m aware of start in one of three ways: 1) someone sits down and develops a course completely from scratch; 2) the course directly follows a textbook; or 3) a new professor inherits a course from the person who taught it previously and adapts it.

Developing a course from scratch, even one following a textbook fairly closely, is a huge time commitment. In contrast, with collaboratively developed courses, new faculty, or faculty teaching new courses, wouldn’t need to start from scratch. They would be able to pick up an existing course to adapt and improve. I can’t even begin to describe how much easier this would have made my first few years as a faculty member. More generally, if we are teaching similar courses across dozens or hundreds of universities, it is much more efficient to share the effort of building and improving those courses than to have each person who teaches them do so independently.

In addition to the time and energy, there are often a lot of things that don’t work well the first time you teach a course, and it typically takes a few rounds of teaching it to figure out what works best. One of the challenges of developing lessons in isolation is that you only teach a class every 1 or 2 years. This makes it hard and slow to figure out what needs work. In contrast, a collaboratively developed course might be taught dozens or hundreds of times each year, allowing the course to be improved much more rapidly through large-scale sampling and discussion of what works and what doesn’t. In addition to having more information, the fact that faculty are spending less time developing courses from scratch should leave them with more time for improving the materials. In combination this results in the potential for higher quality courses across institutions.

By involving large numbers of lesson developers, collaborative development also has the potential to help make courses more accurate, more up-to-date, and more approachable by novices. More lesson developers means a greater chance of having an expert on any particular topic involved, thus making the material more accurate and reducing the amount of bad practice/knowledge that gets taught. New faculty with more recent training on the development team can help keep both the material and the pedagogical practices up-to-date (this is hard when the same person teaches the same course for 20 years). More lesson developers also increases the likelihood that someone who isn’t an expert in any given piece of material is also involved, which should help make sure that the lesson avoids issues with expert blindness, thus making the material more accessible to students.

Collaborative college/university lesson development will not be without challenges. Collaborative lesson development in the style of Software and Data Carpentry requires proficiency with computational approaches not familiar to many academics. The necessary skills include things like version control, developing materials in markdown, and working with static site generators like Jekyll. This means this approach is currently most accessible for those with some computational training and may initially work best for computing-focused courses. In addition, organizing open collaborations takes time and energy, as does collectively deciding on how to design and update classes. Universities and colleges are not typically good at valuing time invested in non-traditional efforts, and that would need to change to help support those managing development of courses with large numbers of faculty involved. More substantial may be the fact that faculty are not used to collaborating with other people on course development and are therefore not used to compromising and negotiating what should go into a course. This can be compensated for to some degree by making courses easy to modify and customize, as we’ve tried to do with the Data Carpentry Semester course, but ultimately there will still need to be a shift from prioritizing the personal desires of the faculty member to the best interests of the course more broadly. This approach will likely work best where there are a number of places that all want to teach the same general material.

Is it time? When I built my first version of Programming for Biologists back in 2010 I was really excited about the potential for collaborative open course development. I built the course using Drupal, emailed a bunch of my friends who were teaching similar courses and said “Hey, we should work together on this stuff”, and stuck some welcoming language on the homepage. Nothing happened. A few years later I was on sabbatical at the University of North Carolina and got the opportunity to talk a fair bit with Elliot Hauser, who was part of a team trying to encourage this through a start-up called Coursefork. I was somewhat skeptical that this approach would work broadly at the time, but I thought it was really awesome that they were trying. They ended up pivoting to focus on helping computing education through a somewhat different route and became trinket. A couple of years later I converted my course to Jekyll on GitHub and told a lot of people about it. There was much excitement. Still nothing happened. So why might this work now? I think there are three things that increase the possibility of this becoming a bigger deal going forward. First, open source software development is becoming more frequent in academia. It still isn’t rewarded anywhere close to sufficiently, but the ethos of using and contributing to collaboratively developed tools is growing. Second, the technical tools that make this kind of collaboration easier are becoming more widely used and easier to learn through training efforts like Software Carpentry. Third, more and more people are actively developing university courses using these tools and making them available under open source licenses. Two of my favorites are Jenny Bryan’s Stat 545 and Karl Broman’s Tools for Reproducible Research. Our development of the Data Carpentry semester course has already benefited from using openly available materials like these and from feedback from members of the computational teaching community. I guess we’ll see what happens next.

This post benefited from a number of comments and suggestions by Zack Brym, who has also played a central and absolutely essential role in the development of the Data Carpentry semester-long course. The post also benefited from several conversations with Tracy Teal, the Executive Director of Data Carpentry, about the potential value of these approaches for college courses.

Fork our course: A semester-long Data Carpentry course for biologists

This post is co-authored by Zack Brym and Ethan White.

Over the last year and a half we have been actively developing a semester-long Data Carpentry course designed to be easily customized and integrated into existing graduate and undergraduate curricula.

Data Carpentry for Biologists contains course materials for teaching scientists how to work more effectively with data. The course provides introductions to data management and relational databases, data manipulation and analysis, and data visualization. It covers the same general types of material as a two-day Data Carpentry workshop, but expands the materials and opportunities for practice into a full-length university course. The teaching material uses R and SQLite, with some corresponding materials for Python as well. To help students understand the direct applications to their interests, the examples and exercises focus on biological questions and working with real data. The course emphasizes using best practices to produce reusable and reproducible data analysis.

Active-learning Teaching Materials

Learning computing requires active practice working through programming problems. Just diving into computing is challenging for most scientists, so the course instruction is designed to combine short live-coding introductions to concepts with students immediately working on a related exercise. Additional exercises are assigned later for practice. This follows the “I do”, “We do”, “You do” approach to teaching, which leverages the benefits of active learning and flipped classrooms without leaving students who are less comfortable with the material feeling lost. The bulk of class time is spent working on assigned exercises, with the instructor moving around the room helping guide students through things they don’t understand and engaging with students who are thinking about advanced applications of what they’ve learned.

This approach is the result of lots of reading about effective teaching methods and Ethan’s experience teaching this and related courses over the last six years at Utah State University and the University of Florida. It seems to work well for both students that get the material easily and those that find it more challenging. We’ve also tried to make these materials as useful as possible for self-guided students.

Open course development

Software Carpentry and Data Carpentry have shown how powerful collaborative lesson development can be and we’re interested in bringing that to the university classroom. We have designed the course materials to be modular and easy to modify, and the course website easy to clone and set up. All of the teaching materials and associated website files are openly available at the Data Carpentry for Biologists repository on GitHub under CC-BY and MIT licenses. The course materials are all written in Markdown and everything runs on Jekyll through GitHub Pages. Making your own version of the course should take less than an hour. We’ve developed documentation for how to create your own version of the course and how to contribute to development. Exercises and assignments are modular and changing exercises and assignments simply involves reordering items in a list. Adding a new exercise involves creating a new Markdown file and then adding its title to the list of exercises for an assignment.
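As a purely hypothetical sketch (the file name, front matter, and list format below are illustrative, not the repository’s exact layout), adding an exercise amounts to creating a small Markdown file like:

---
title: Species counts
---

Using the surveys table, count the number of individuals of each species observed at each site.

and then adding its title to the ordered list of exercises in the relevant assignment file:

exercises:
  - Species counts
  - Data visualization basics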

Get Involved

If you teach, or want to teach, a course like this, we’d love to get you involved. Here are some useful links for getting started.

–   I want to teach the course.
–   I have some feedback.
–   I want to contribute to the project.

We want to be sure getting involved is as easy as possible. We’ve worked hard to provide documentation and help resources for students and instructors. Students can find all they need to know at our student start guide. Instructors have access to course content and site design documentation.

If you’re having trouble finding something or getting something to work, or simply have some feedback about the course, please open a new issue on GitHub or send us an email.

Acknowledgements

Development of this course was generously supported by the Gordon and Betty Moore Foundation’s Data-Driven Discovery Initiative through Grant GBMF4563 to Ethan White and by the National Science Foundation as part of a CAREER award to Ethan White.

New release of the EcoData Retriever

We are very excited to announce the newest release of the EcoData Retriever, our software for automating the downloading, cleaning, and installing of ecological and environmental data. Instead of hours or days trying to get complicated datasets like the Breeding Bird Survey ready for analysis, the Retriever lets you simply click a button or run a single command from R or the command line, and your computer does the rest.


It’s been over a year since the last retriever release and there are lots of new features and improvements to be excited about.

  • We’ve added 21 new datasets, including major ecological and environmental datasets like eBird, Vertnet, the Global Wood Density Database, and the PRISM climate data.
  • To support all of these datasets we’ve added support for additional data types, including greater-than-memory archive files, and we’ve also improved the ability to control where downloaded files are stored and how they are clustered together.
  • We’ve significantly improved documentation and now have a new automatically built documentation site at Read The Docs.
  • We’ve also made a lot of under the hood improvements.

This is also the first release that has been overseen by Weecology’s new software engineer, Henry Senyondo. We’re excited to have Henry on the team, and now that he’s around, development of both the EcoData Retriever and other lab software projects will be happening more quickly.

A big thanks to the Gordon and Betty Moore Foundation’s Data-Driven Discovery Initiative for funding this development through Grant GBMF4563 and to the National Science Foundation for funding as part of a CAREER award to Ethan White.

UPDATE: Led by Dan McGlinn we also released a new version of the ecoretriever R interface for the Retriever last fall. This makes using the Retriever from R as simple as:

data <- ecoretriever::fetch("BBS")