Jabberwocky Ecology

Sharing the Long-term Portal Project Data: 37 years of Rodent, Plant, Ant, and Weather Data

It is with great glee that I can announce the latest release of the Portal Project Database. For those of you who just want to go play with the data – here’s the link to the Data Paper we just published in Ecology.

But I would encourage you to read on, as there is more data-related news below.

But first, a story.

As some of you know, I manage a long-term ecological study: the Portal Project. It was started by Jim Brown, Diane Davidson, and Jim Reichman back in 1977 to study competition and plant/animal interactions. That original team moved on (intellectually) and eventually retired. Tom Valone and I inherited the mantle of responsibility for the site. Jim Brown believed in sharing data with whoever asked for it, and in 2009 we formalized that philosophy by publishing all of the data from 1977-2003 that we felt was in good enough shape to document and share. We chose to release the data as an Ecology Data Paper, using Ecological Archives. Partly that was because I had great previous experiences publishing data through Ecology, and partly because I wanted something permanent. I’ve seen many people talk about their “publicly available data” that was either not actually publicly available, stored on a now-defunct personal website, or had so many data-owner-imposed hoops to jump through that it was effectively not public. I wanted the data to be available even if I died (a little grim, I know, but a real consideration when we talk about data archiving).

But we kept collecting data, which meant that in 2013 we realized we had an additional 10 years of data we could share. We had also cleaned up and documented additional data that we wanted to add. So we started the process of publishing the next chunk of data. But how should we do this? Should we just add on to the existing Data Paper (assuming Ecological Archives even allowed this, it would be awkward since the title of the original data paper included the words 1977-2003)? We also decided to add all the graduate students who had been funded to collect the data for the project from 2003-2013, but tracking down people from the 1970s and 80s seemed unfeasible. The short version of the story is that we opted for a separate data paper for 2003-2013, but Ecological Archives wanted a new Data Paper with all the years of data in one place – so that’s what we ended up doing. Our new Data Paper contains all the data in the original Data Paper, plus the new years of data, plus old ant and weather data that we felt we now understood well enough to let loose in the world.

It should come as no surprise to those who follow this blog that we here at Weecology are interested in open science. I love Ecological Archives as a permanent repository1 – the data is safely in the public sphere even if I die, change universities, forget to update my website, or hand the research over to someone who doesn’t share my ideals. But publishing new data papers is a big ordeal that I only want to undertake every few years. If we want to make data available more rapidly (and we do), we need another mechanism for delivering it to the public.

Thus begins the Portal Project GitHub Database experiment.

What is GitHub?

GitHub is a web-based hosting service for version control, typically used to manage software projects. We have created a repository on GitHub (https://github.com/weecology/PortalData) where we can create new releases of the data after it has undergone our quality control processes. Here’s a screenshot of what this page looks like:

Figure 1

Version 1.0.0 (which is currently available) matches what is available on Ecological Archives and can be reached through this link: https://github.com/weecology/PortalData/releases or by clicking the release button on the main page of the repository (see above).

When will new data be released?

Our aim is to release a new vetted and updated version approximately every 6 months. However, you can also get our most up-to-date data from GitHub. You can find it on the main page (see figure above). As part of this process, we have moved our data entry and quality control processes to center around the Portal Data repository. Yes, that’s right, you’ll be able to access our new data as soon as we’ve entered it from our field datasheets. New data has not gone through the same level of quality control – so user beware. That data will be less stable than the release data.
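If you’d rather pull the data straight into an analysis, here is a minimal sketch in R. Note that the branch name and file path below are assumptions based on how the repository is currently laid out, so check the repo itself if they have moved:

# A minimal sketch, assuming the default branch is "master" and the rodent
# observations live in Rodents/Portal_rodent.csv (check the repository if
# the layout has changed).
url <- "https://raw.githubusercontent.com/weecology/PortalData/master/Rodents/Portal_rodent.csv"
rodents <- read.csv(url, stringsAsFactors = FALSE)
head(rodents)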

Why GitHub?

GitHub met a variety of our data publishing and data management needs. I won’t go into everything here, but the big one is version control. Every time we make a change to the data files, it is documented. This has not been the case in the past. Though we did try to keep records, it relied on someone making a change in the database and then remembering to write it down somewhere. Now with our new setup, any changes will be automatically documented by commit messages (descriptions of changes that accompany any modification to a file on GitHub). It’s also publicly available, so users can draw on our history of changes as well, maybe to track down why results differ between two different downloads. How can you do this? Select one of the folders in the current repo – let’s randomly pick the rodent folder and look at the history of the rodent data file (Portal_rodent.csv).



This gives you all the commit messages that are associated with changes to this file. Maybe one of these catches your eye. You can see exactly what got changed by clicking on it.


The red shows a row that was deleted. The green shows a row that is new.
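If you’d rather browse the change history locally instead of through the web interface, here is a rough sketch using the git2r R package. This is just one way to do it, not part of our workflow, and it assumes you are willing to clone the full repository:

# Clone the repository and list recent commits with the git2r package.
library(git2r)
repo <- clone("https://github.com/weecology/PortalData", local_path = "PortalData")
# Each commit carries the message describing what changed and why; the web
# interface (shown above) additionally lets you filter the history by file.
head(commits(repo), 10)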

How do we feel about this shift to GitHub?

We were very nervous about this initially. While the White Lab has some serious Git-Fu skills, the Ernest Lab views itself as field ecologists and GitHub is not exactly intuitive to us. We worried we would screw up the data. We worried we were adding complexity to an already complex quality control process. But so far we are really happy with our new system. By integrating data entry into the data publishing process, it ensures that we are always providing updated data, even if we’re slow on official releases. Version control is allowing us to document all the changes being made to the database – and everyone involved with the project has a chance to see the changes and comment on them if they have concerns. And everyone in our group (and now the world) has access to the most up-to-date data (and can choose between extremely current data that is still being vetted for errors, or less current data that is more stable and less error-prone). We’re not alone in taking this step toward using GitHub for data management; other examples of projects that have moved to GitHub include the Biomass and Allometry Database for woody plants (BAAD) and the Open Tree of Life.

I want to end by saying that I don’t currently intend to stop submitting major updates to Ecological Archives or some other permanent repository. What GitHub provides is more transparency on how the data is being managed (both for people within and outside our group) and faster data streaming to other scientists than we’re capable of doing through Ecological Archives. But what it doesn’t do is provide the data in a stable way for ecologists in the future – and that is something we take very seriously! So if you only want to use our data via Data Papers, never fear, you now have all the data through 2013 and more will come eventually. But in the meantime, you might want to check out our data repository.


1 I might love it a little less right now since my data files are ‘Wiley Property’ housed on Wiley servers, but that’s a separate blog post.

New release of the EcoData Retriever

We are very excited to announce the newest release of the EcoData Retriever, our software for automating the downloading, cleaning, and installing of ecological and environmental data. Instead of spending hours or days trying to get complicated datasets like the Breeding Bird Survey ready for analysis, the Retriever lets you simply click a button or run a single command from R or the command line, and your computer does the rest.


It’s been over a year since the last retriever release and there are lots of new features and improvements to be excited about.

  • We’ve added 21 new datasets, including major ecological and environmental datasets like eBird, VertNet, the Global Wood Density Database, and the PRISM climate data.
  • To support all of these datasets we’ve added support for additional data types, including greater-than-memory archive files, and we’ve also improved the ability to control where downloaded files are stored and how they are clustered together.
  • We’ve significantly improved documentation and now have a new automatically built documentation site at Read The Docs.
  • We’ve also made a lot of under-the-hood improvements.

This is also the first release that has been overseen by Weecology’s new software engineer, Henry Senyondo. We’re excited to have Henry on the team, and now that he’s around development of both the EcoData Retriever and other lab software projects will be happening more quickly.

A big thanks to the Gordon and Betty Moore Foundation’s Data-Driven Discovery Initiative for funding this development through Grant GBMF4563 and to the National Science Foundation for funding as part of a CAREER award to Ethan White.

UPDATE: Led by Dan McGlinn we also released a new version of the ecoretriever R interface for the Retriever last fall. This makes using the Retriever from R as simple as:

data <- ecoretriever::fetch("BBS")

GEB adds unlimited data references section to papers

In a big step forward for allowing proper credit to be provided to all of the awesome folks collecting and publishing data, the journal Global Ecology & Biogeography has just announced that they will start supporting an unlimited set of references to datasets used in a paper.

A growing concern in the macroecological community has been that many papers whose data are used in meta-analyses or data-compilation papers have not been getting citation credit because most journals require these papers to only be listed in the supplemental material (which is not indexed by most indexing services). GEB is proud to support the inclusion of a second list of references within the main paper for all data papers used… To our knowledge, GEB is the first journal in the ecological field to do this. And we’ll be working with Wiley to further improve options in this area.

These references will be included immediately following the traditional references section in both the html and pdf versions of the paper. You can see an example in Olds et al. (2016).

What this means is that when you combine data from dozens or hundreds of studies to conduct a synthetic analysis, you can cite all of the sources in a way that will provide citation credit to those collecting the data1. It also means that scientists using large data compilations can cite the original data sources as well as the compilation itself2.

This is important for encouraging the publication of data, since one of the common reasons scientists give for not publishing data is a lack of credit, and citation only in non-indexed supplementary materials sections is a frequent concern.

Facilitating proper citation of all data sources is something the community has been requesting and it’s great to see GEB taking the lead in this area. Since Wiley, the publisher of GEB, is the largest publisher of ecology journals, it should be straightforward to implement this new approach widely. If other journals follow GEB’s lead, we will enter a new era where citation of data can be as complete as possible, allowing proper credit to everyone who collects and publishes data.

1GEB will need to make sure that this section gets properly picked up by the indexers, and tweak the presentation as necessary if it isn’t.
2Provided that the compilation provides a method for compiling a citation list of all associated sources.

Trait Databases: What is the End Goal?

For the past few years I’ve been involved in a collaboration to put together a broad-coverage life history database for mammals, reptiles, and birds. The project started because my collaborator, Nathan Myhrvold, and I both had projects we were interested in that involved comparing life history traits of reptiles, mammals, and birds, and only mammals had easily accessible life history databases with broad taxonomic coverage. So, we decided to work together to fix this. To save others the hassle of redoing what we were doing, we decided to make the dataset available to the scientific community. While this post started out as a standard “Hey, check out this new publication from our group” post (Here it is, by the way: Myhrvold, N.P., E. Baldridge, B. Chan, D. Sivam, D.L. Freeman, S.K.M. Ernest. 2015. An Amniote Life-history Database to Perform Comparative Analyses with Birds, Mammals, and Reptiles. Ecology 96:3109), I’ve realized that there’s something more important that needs to be discussed: what is the future of trait databases?

Trait databases are all the rage these days, for good reason. Traits are interesting from evolutionary and ecological perspectives: How and why do species differ in traits, how do traits evolve, how quickly do traits change in response to a changing environment, and what impacts do these differences have on community assembly and ecosystem function? They have the potential to link individual performance with local, regional, and even global processes. There’s lots of trait data out there, but most of it has been buried in papers, books, theses, gray literature, field guides, etc. This has led to an explosion of compendiums compiling trait data. Some of these are published as Data Papers (e.g.: Mammals: Jones et al 2009, Plankton: Kremer et al 2014) or on-line databases (e.g. AnAge, FishBase), which are open for everyone to use. Many of these open datasets are generated by a small number of scientists to address some particular question. Some are quasi-open/quasi-private resources generated by consortiums of scientists (TRY).

There are a variety of issues surrounding these trait compendiums, not least of which is credit: the compendiums pull data from numerous sources, but how do data generators get credit, and what type of credit is reasonable? This is a doozy that I don’t have an answer to. Instead, my focus today is on the eventual endgame of trait databases. No trait database currently being produced has all the trait data of interest for every species. This means we have a bunch of incomplete data products running around. So, every few years, a bigger – more complete, but still incomplete – trait dataset is produced for some group of species. Sometimes the bigger dataset replicates the effort of the smaller one, sometimes it incorporates the smaller compilation whole-cloth, sometimes they have little overlap in sources whatsoever. Data compilations also vary in ease of use and accessibility. Some databases are widely known, some are known only to a few insiders. I could keep going. Clearly this state of affairs is less than optimal for rapid progress in studying traits.

So what’s the end game here? What should we be doing? In my opinion, what we need is a centralized trait database where people can contribute trait data and where that data is easily accessible by anyone who wants to use it for research (not just to the contributing members of the database). It would also be nice if people who contribute significant amounts of data (no, I’m not going to define that here) could get specific credit for that contribution – maybe as a Data Paper or E-Publication. To discourage people from just downloading data, adding to it, and then sitting on the expanded dataset, embargoes could be put in place: people could add their data to the dataset but have it protected for a limited period of time, giving that researcher first crack at publications using those entries. It’d be really nice if people who use the database could easily download all the references for the data they used so they can be easily incorporated into a literature cited section. The central database could get credit (let’s face it, it needs to be able to justify the funding that such an endeavor would require) by having people register papers published using data from the database. It could then keep track of the number of pubs and citations to those pubs to help track the database’s impact.

Right about now, my Paleo brethren may be thinking “this sounds suspiciously familiar”. I’ve pretty much lifted this list right off of the Paleobiology Database website (https://paleobiodb.org/#/faq). While ecologists have been running our every-database-for-itself experiment with trait databases, the paleobiologists have been experimenting with collaborative open databases for fossil records. I’m an outsider, so I don’t really know how the database is perceived within the paleo community, but from the outside I have been a big fan of the database, the work that has emerged from its existence, and the community that surrounds it. Which is why I’ve wondered if ecology could do something similar.

But if we’re going to do this, I think we need to copy something else from the Paleobiology Database: a focus on individual records. Currently, many trait databases focus on a species-level value; what is the average number of offspring per litter? Seed mass? Average body size? This is a logical place to start building a database if many of the questions are focused on comparing central tendencies across species. But our understanding of traits and the questions we want to ask have evolved. Having any info is still better than no info, but often we need info on variability across individuals within a species, or we want to know how the trait might vary with changes in the environment. For this, we need record-level data. By this, I mean that instead of pooling observations to obtain an average for a species, we now often want to know that the average litter size for a species at location X is 3 but 8 at location Y. For some species, traits are especially sensitive to temperature or some other environmental variable – so knowing whether the body size was measured at 28C or 32C can be important. This data could then be summarized in whatever way the user needed (species averages, region-specific averages, etc.). This, of course, is the hard part, because while we have an increasing number of trait compilations, they have either jettisoned the record-level information, or little of that info is associated with the data point except maybe the citation (I say this knowing I’m guilty of it). It also involves doing some form of georeferencing if we want the location info to be usable (like they’ve been doing for museum records). This means we would need to basically uncompile the compilations – find the original citations, extract as much info as we can from them, and then re-enter them as part of a more sophisticated database. This is an extraordinary amount of work that (to be clear) I am not volunteering for.
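To make the contrast concrete, here is a toy sketch in R – the species, locations, values, and sources are entirely invented for illustration – showing how a record-level structure can be rolled up into either species-level or location-specific summaries as needed:

# Hypothetical record-level trait data: one row per observation, keeping the
# context (location, temperature, source) alongside the trait value.
records <- data.frame(
  species     = c("sp_A", "sp_A", "sp_A", "sp_B", "sp_B"),
  location    = c("X", "X", "Y", "X", "Y"),
  temp_C      = c(28, 28, 32, 28, 32),
  litter_size = c(3, 3, 8, 4, 5),
  source      = c("Smith 1990", "Smith 1990", "Jones 2005", "Smith 1990", "Lee 2012")
)

# Species-level averages (what most current compilations store)...
aggregate(litter_size ~ species, data = records, FUN = mean)

# ...versus location-specific averages, which only a record-level
# structure lets the user compute on demand.
aggregate(litter_size ~ species + location, data = records, FUN = mean)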

There are undoubtedly some in the trait community who are about to explode because they’ve been thinking “but we’re doing what you are talking about!”. There are indeed already some bigger initiatives out there (AnAge, FishBase, TRY), but they are either not community-based (i.e. run by a closed group), taxon-centric, a nightmare of open and closed policies that make extracting data needlessly burdensome, or some unfortunate combo of the above. The one that seems closest to the Paleobiology Database model is TraitBank at the Encyclopedia of Life. Its goal, however, is different from the record-based trait database that I outlined above. Its goal is to have a webpage (and trait data) for every species on the planet, so this still seems to be a species-average approach. As I mentioned before, some info is better than no info, so this alone would be a huge benefit to trait research, but it still carries the restrictions of species-average values. On the plus side, data in the database is available for everyone to use, and each data entry has the specific reference listed with it. But I don’t think it’s had broad buy-in from the trait community. TraitBank only lists 50 data sources and 327 “content partners” (websites/databases that have agreed to share their data via Encyclopedia of Life pages). Admittedly, these sources are some of the biggest data aggregations around, but it’s inconceivable that they cover the wide array of trait info for all of life. Without broad buy-in from the trait community, both using it for research and contributing their data to it, I don’t see this working in the way I’ve outlined above.

So where does this leave us? Well, things are currently in a muddle with respect to trait data, but there’s also tremendous opportunity for someone who can envision the type of database the field needs, sell broad swaths of the trait data community on its importance, and figure out how to build both the database and the community to support and use it. This may involve better community buy-in with TraitBank and/or some new initiative working on a record-level product that would allow finer-level questions to be asked. The question is how does this happen, and is there enough will in the trait community to give up on the current idiosyncratic ad hoc approach and contribute to something with broad trait and taxonomic coverage and an open data policy?

On Ecological Rants and Microcosms

Recently, over at the blog Ecological Rants, the eminent ecologist Charles Krebs wrote a post about the ills of simplification in ecology. The post focuses specifically on how ecology has been ‘led astray’ by simplified models and lab studies. This has recently been picked up on Dynamic Ecology by Jeremy Fox, who responded generally to the post but specifically to the affront to microcosms. I strongly recommend you check them out for yourself and not just rely on my version of events.

I went on record a long time ago (in blog years I think 2011 was a century ago) that I believe we need a multitude of approaches, so I don’t plan on wading into the microcosm debate. That we’re still having this debate exhausts me. Instead, I want to focus on a different angle in Krebs’ post. Here’s the specific section:

“If we assume equilibrial dynamics in our communities and ecosystems, we fly in violation of almost all long term studies of populations, communities, and ecosystems. The problem lies in the space and time vision of our science. Our studies are too short to show even a good representation of dynamics over a 100 year time scale, and the problems of landscape ecology highlight that what we see in patch A may be greatly influenced by whether patches B and C are close by or not. We see this darkly in a few small studies but are compelled to believe that such landscape effects are unusual or atypical. This may in fact be the case, but we need much more work to see if it is rare or common. And the broader issue is what use do we as ecologists have for ecological predictions that cannot be tested without data for the next 100 years?”

I agree with a lot of this paragraph, though my perspective on it is different. I agree that our focus for much of the past 60 years in community ecology has been on equilibrial dynamics at a specific spatial scale, with limited understanding of the impact that context (i.e. which patches are near which other patches) can have on the local community. Does this make it difficult for us to predict what will happen in the dynamic world we actually live in? Yes. But unlike Krebs I don’t see the past few decades of research as a waste. We’ve learned a great deal about the fundamentals of ecological systems – species interactions, food web structure, biodiversity, niche partitioning, colonization, extinction, etc etc etc – all with the help of microcosms and simplified theory (and field studies and macroecology). We needed those decades of work to understand the basics of how communities are structured under idealized conditions.



Left: A child’s line drawing of SpongeBob’s Squidward. Right: Squidward.

Does the drawing capture the essence of Squidward? I’m biased, but I say yes. But how does a child get to being able to create a reasonable facsimile of something without first learning how pencils work, how they respond to hand movement, and how to simplify an image but still make it recognizable to others? I think this is also true with ecology. How do we know how to reasonably abstract a complicated system down to its most important components without first understanding what the components are and how to convey them in simple understandable ways?

Now, our challenge is to take what we have learned and apply it to the more complicated scenarios that are happening in nature (i.e. how does our Squidward change as he interacts with the dynamic setting of Bikini Bottom*). How do ecosystems change through time? What is the role of species interactions, context-dependence, and processes at different spatial and temporal scales in driving (or ameliorating) changes in food webs, niche partitioning, etc? These are pressing questions for our society as we try to predict how nature will respond to human perturbations, but these are also important for the basic development of our science. Some of this work will be done through detailed case studies out in the field, but some (hopefully) will be done with the help of theory, controlled experiments, and data-intensive approaches like macroecology to generate generalizations that help us know how to think and predict likely responses and scenarios.

The danger that I think Krebs is concerned about is that we become so attached to our clean, simplified view, our polished theories, that we refuse to engage with the more complicated scenarios. For example, if long-term studies suggest that the focus on equilibrial communities is misplaced, it would be to our detriment to continue to focus only on equilibrial communities in our theories and experiments. However, I don’t think this is happening (or if it was, I think momentum is shifting). Landscape ecology, metacommunity theory, and biogeography are all areas where people have been actively studying the very spatial issues Krebs bemoans us neglecting. I think he is more accurate about community ecology shying away from rigorously thinking about temporal dynamics, but I have a whole post on that planned, so I’ll spare you my rant. That we are starting to think about these more complex issues is what makes ecology exciting right now (and frustrating and really really hard). We have a grasp (tenuous, maybe, but a grasp nonetheless) on the fundamental, general concepts that bridge across ecosystems and organisms. We have more data, better tools, and better theoretical constructs than at any time in the past. Now is the time to tackle these more complex questions, and to do so will require all the scientific approaches available to us – that includes field ecology, macroecology, theory, and, yes, microcosms.


*Yes I have been forced to watch too much SpongeBob lately.

Sciencing with a chronic illness: Tips, tricks, & technology

*This is a guest post by Elita Baldridge.*

This is the third in a series of posts about my experiences completing a PhD with a chronic illness (Part 1, Part 2, and background information).

This post is not only about the tools I used to complete my PhD; I am also optimistic that these tools and coping mechanisms will allow me to be a scientist who gets paid for doing science.

The tips & tricks:

Remote work: Working remotely accommodated the variability in my functioning levels, and allowed me to be as productive as possible without having to allocate most of my energy to getting to/working at a physical location or trying to conserve enough energy so that I could make it home on the bus (since I can’t drive anymore).

Ergonomics: Finding what triggered more discomfort and what allowed me to work for longer periods of time really helped make it possible for me to finish up.

  • Laptop of Science & a desktop, running Synergy to share one mouse and keyboard between both computers.
  • Monitor risers to prevent fatigue.
  • Kneeling chair to avoid obnoxious pressure points on hips, back and arms.
  • Wrist rests galore.
  • Kinesis Freestyle 2 keyboards, one for desk work with the dual-machine setup, one for a reclining setup with just the Laptop of Science.

Travel: Travel is dreadful. It involves a lot of discomfort while traveling, plus a lot of discomfort for weeks after. The thing that I am traveling for had better provide enough benefits to me that it is actually worth it because it is truly, truly unpleasant (of the crying and vomiting from pain variety). Remote attendance is vastly preferred.

However, if I really, really must:

  • Grabber 12+ hour Peel N’ Stick body warmers, which make it possible to function on a reasonably human level most of the time.
  • Cane or forearm crutches
  • Wheelchair service in airports/Redcap on trains. (Voice of Experience: When you are asked if you can get to places on your own, up stairs, etc., select “no” if the answer is “yes, but it will be exceptionally unpleasant and there may be crying, whimpering, or falling over”.)
  • Rest day after travel/accommodations really close to wherever you are supposed to be.
  • Electric blanket for hotel (as full-body heat pad)
  • Small travel blanket (for padding uncomfortable chair backs, etc.)

The technology:

Version control: Using version control (I used GitHub) allowed for a more efficient workflow between me & dissertation collaborators (mostly Ethan, but also Xiao Xiao), plus I was insulated against the effects of cognitive dysfunction through commit messages, issues, and the ability to revert commits.

Kubi: A teleconferencing robot that allowed me to turn my (remote) head and look at people when they were speaking through whatever teleconferencing system we were using. I cannot say enough good things about how much this made me feel more like a part of whatever was going on.

Web conferencing: We tended to use browser-based options like Google Hangouts or Firefox Hello for this, but Skype is another option as well; I just had some difficulty getting it to behave well on my laptop.

Live-streaming: For my defense, I wanted to make the presentation a demonstration of making a talk accessible, and also how easy this can be. Full details of the accommodations & accessibility statement that I used for my defense are available on the event announcement. I used Google Hangouts on Air to live-stream my defense, then close captioned the talk afterward with the editor available on YouTube. This was all straightforward and took very little time. Handouts were available in advance of the talk, and an accessibility statement was provided with my defense announcement.

Data Carpentry receives Moore Foundation funding

For the last 5 years I’ve been actively involved in training efforts through Software Carpentry and Data Carpentry to train researchers in best practices for software development and data analysis. These concepts are fundamental to the research we do in my group and to my commitment to open and reproducible research.

As one of the founding members of the Data Carpentry Steering Committee, I am excited to announce that Data Carpentry has received a grant from the Gordon and Betty Moore Foundation that will help support our work over the next two years.

For those of you who aren’t familiar with Data Carpentry, we are a non-profit organization whose goal is to help teach scientists the skills they need to manage and analyze the increasingly large amounts of data that are being generated across the sciences. We do this through a combination of two-day workshops at universities (if you’re interested in a workshop at your university, request one here) and online resources including lesson material and forums. Data Carpentry is both similar to, and associated with, Software Carpentry, but with an emphasis on teaching material that is specific to particular scientific disciplines and focused on data management and analysis. We currently deliver courses for ecology/organismal biology and are in the process of developing material on genomics and geospatial data, the latter in collaboration with the awesome training group at NEON.

The support from the Moore Foundation will help us expand our efforts to cover new scientific domains, run far more workshops than we could have otherwise, and develop strategies for delivering this material in online workshops. I will also be leading the development of a semester-long Data Carpentry course designed to make it easy to integrate these crucial skills into university classrooms. Check out the full proposal for more details.

I look forward to continuing my work with Data Carpentry and am excited about the opportunity for us to continue to enable data-intensive science by providing scientists the computational and data-oriented training they need to work with the large quantities of data we now have access to.