Jabberwocky Ecology

The future of Ecosphere the journal: a suggestion

As some may be aware, ESA has launched a new journal: Ecosphere. ESA describes Ecosphere as “…the newest addition to the ESA family of journals, is an online-only, open-access alternative with a scope as broad as the science of ecology itself.”

The description is vague – is it a new incarnation of Ecology? Or is it an ecologically focused equivalent of PLoS One? I’m not the only one who is confused, as illustrated by a comment by Jeremy Fox from Dynamic Ecology. I recently had an interesting experience with Ecosphere that clarified for me both what Ecosphere is and what I think its potential is. I’ve been meaning to post on this for a while, but seeing that Jeremy and I are having similar thoughts finally encouraged me to get off my butt and write the post.

What is Ecosphere? If you’ve ever reviewed a paper, you know that part of your decision is based on the paper and part is based on the journal itself. For journals like Ecology, Am Nat, and Ecology Letters, you are judging both the rigor of the science and its potential impact. For PLoS One, the potential impact is not supposed to be part of the review decision, just whether the science is sound. I recently reviewed a paper for Ecosphere that was sound but not broadly interesting. What to recommend? The editor made it clear to me that Ecosphere is an ecological PLoS One and that what mattered was the scientific soundness. Often I hear these components of the review process conflated – but interesting and rigorous are not actually the same thing. So when Ecosphere talks about maintaining the same ‘rigorous peer-review standards’ as Ecology, it means that it is focusing on the soundness, not the interest component.

Future of Ecosphere? I have no insights into whether Ecosphere is performing as ESA had hoped but I think Jeremy’s view on Ecosphere is probably common. I suspect public relations outreach to clarify the role of Ecosphere in the journal pantheon would help. I also suspect that it could greatly benefit from an incentive to publish there. But what is an easy incentive that doesn’t undercut the economic benefits of Ecosphere for ESA? Jeremy nails it in his comment and it’s the same thought I had after I reviewed for them: make it easy for solid but rejected papers from Ecology to be rapidly accepted in Ecosphere. You see, the paper I reviewed for Ecosphere was also a paper I had just reviewed for Ecology and recommended rejecting based not on any issues with the science, but on its importance. How much easier would it be if there were a button on the Ecology reviewer form that says “Is this paper suitable for Ecosphere? If yes, is it acceptable as is, with minor revision, or with major revision?” Essentially, reviewers could review for both journals at the same time. Then if a paper is rejected from Ecology but recommended for acceptance at Ecosphere, the author can get a letter saying – so sorry about your Ecology rejection, but (if you would like) congratulations on your acceptance to Ecosphere!

I think this is a good idea for Ecosphere because it provides a mechanism whereby really good papers can still end up in that journal, thus helping improve its impact (used generically, though I suppose it might also help its impact factor). Let’s be honest, when only 3 people are judging whether a paper will be ‘interesting’ to the broader field, bad things can happen to good papers. The direct tie between Ecology and Ecosphere increases the probability of getting those papers into Ecosphere because a guaranteed acceptance can be hard to turn down. If a paper has been ‘making the circuit’, it can be tempting to just take that acceptance, even if it’s not the “quality” of journal you might have been hoping for.

I also think this is a good thing for science. Perhaps your review process experience is always smooth sailing, but many of us are spending a lot of time revising and resubmitting papers that are technically sound but that reviewers dislike because of the topic, because they are uncomfortable with the take home message, or (my favorite) because it isn’t the paper that they would have written themselves. Science slows down when sound science is rejected based on ‘interest’ and not on technical reasons, because papers may take an additional year or more to be published as they are repeatedly submitted to multiple journals. The big journals have the right to judge on interest, and there is some value to this in that they can help serve as filters for the deluge of new papers, but I think having quick avenues for publication of sound science is good for us all. Tying the big ESA journals to Ecosphere provides the benefits of both – the time cost of taking a shot at Ecology is minimized because if judged sound the paper would still get an acceptance into Ecosphere, even if rejected from Ecology because it ‘wasn’t interesting enough’.

Finally, it’s good for ESA to have Ecosphere capture more of the ecological literature through its open access model. Right now, if rejected from Ecology, the next step for most papers is probably not Ecosphere but some Elsevier or Wiley-Blackwell journal (or maybe PLoS One). Each scientifically sound paper that does not end up at Ecosphere is $1,250 that ESA doesn’t get (based on page charge cost for members). The more papers that end up at Ecosphere, the more $$ goes to ESA, which can then use that money to do all the great things it does both for its members and for ecology in general.

UPDATE: Click here to read discussion re: Ecosphere on ECOLOG back in March (thanks to @JJVenky for pointing this out)

On making my grant proposals open access

As I announced on Twitter about a week ago, I am now making all of my grant proposals open access. To start with I’m doing this for all of my sole-PI proposals, because I don’t have to convince my collaborators to participate in this rather aggressively open style of science. At the moment this includes three funded proposals: my NSF Postdoctoral Fellowship proposal, an associated Research Starter Grant proposal, and my NSF CAREER award.

So, why am I doing this, especially with the CAREER award that still has several years left on it and some cool ideas that we haven’t worked on yet? I’m doing it for a few reasons. First, I think that openness is inherently good for science. While there may be benefits for me in keeping my ideas secret until they are published, this certainly doesn’t benefit science more broadly. By sharing our proposals the cutting edge of scientific thought will no longer be hidden from view for several years, and that will allow us to make more rapid progress. Second, I think having examples of grants available to young scientists has the potential to help them learn how to write good proposals (and other folks seem to agree) and therefore decrease the importance of grantsmanship relative to cool science in the awarding of limited funds. Finally, I just think that folks deserve to be able to see what their tax dollars are paying for, and to be able to compare what I’ve said I will do to what I actually accomplish. I’ve been influenced in my thinking about this by posts by several of the big open science folks out there including Titus Brown, Heather Piwowar, and Rod Page.

To make my grants open access I chose to use figshare for several reasons.

  1. Credit. Figshare assigns a DOI to all of its public objects, which means that you can easily cite them in scientific papers. If someone gets an idea out of one of my proposals and works on it before I do, this lets them acknowledge that fact. Stats are also available for views, shares, and (soon) citations, making it easier to track the impact of your larger corpus of research outputs.
  2. Open Access. All public written material is licensed under CC-BY (basically just cite the original work) allowing folks to do cool things without asking.
  3. Permanence. I can’t just change my mind and delete the proposal and I also expect that figshare will be around for a long time.
  4. Version control. For proposals that are not funded, revised, not funded, revised, etc. figshare allows me to post multiple versions of the proposal while maintaining the previous versions for posterity/citation.

During this process I’ve come across several other folks doing similar things and even inspired others to post their proposals, so I’m in the process of compiling a list of all of the publicly available biology proposals that I’m aware of and will post a list with links soon. It’s my hope that this will serve as a valuable resource for young and old researchers alike and will help to lead the way forward to a more open scientific dialogue.

Weecology at ESA 2012

Sadly, Ethan and I are missing ESA this year, but our group still has a strong presence. In fact you can see a weecologist every day of the conference if you so desire! If you’re at ESA and want to know what the weecologists are up to, go check out our various talks and posters. If you’re like us and can’t make it this year but want to know what’s going on, you can follow the conference on twitter, just search for the hashtag #esa2012. Twitterers, we are depending on you to keep us informed on the cool talks you see!

Our group’s talks/posters organized by day are below. Names of current weecology members are in bold, former weecologists are in italics, and titles are linked to the ESA abstracts if you want to know more.

Enjoy!

Monday

Poster Title: Macroecological life-history trait database for birds, mammals, and reptiles

Authors: Elita Baldridge, Nathan Myhrvold, S.K. Morgan Ernest

Info/Location: PS 19-218

 

Tuesday

Talk Title: Experimental macroecological approach tests the influence of biotic interactions, species richness, and abundance as determinants of the species abundance distribution

Authors: Sarah R. Supp, S.K. Morgan Ernest

Info/Location: 4:20 pm, F151 Oregon Convention Center

 

Poster Title: Bird and mammal sampling strategies: NEON’s contribution to the continental-scale ecology of vertebrates

Author: Katherine M. Thibault

Info/Location: OPS 2-6

 

Wednesday

Poster Title: The adequate currency for community-level energetic constraint based on Maximum Entropy

Authors: Xiao Xiao, Ethan P. White

Info/Location: PS 57-165

 

Thursday

Poster Title: Developing an agroecological approach to biomass scaling and branching architecture using orchard trees

Author: Zachary T. Brym

Info/Location: PS 79-153

 

Friday

Talk Title: Strong self-limitation for rare species across environments and taxa

Author: Glenda M. Yenni

Info/Location: 10:10 am, A103 Oregon Convention Center

ESA journals do not allow papers with preprints

Over the weekend I saw this great tweet:

by Philippe Desjardins-Proulx and was pleased to see yet another actively open young scientist. Then I saw his follow up tweet:

At first I was confused. I thought ESA’s policy was that preprints were allowed based on the following text on their website (emphasis mine; still available in Google’s Cache):

A posting of a manuscript or thesis on a personal or institutional homepage or ftp site will generally be considered as a preprint; this will not be grounds for viewing the manuscript as published. Similarly, posting of manuscripts in public preprint archives or in an institution’s public archive of unpublished theses will not be considered grounds for declaring a manuscript published. If a manuscript is available as part of a digital publication such as a journal, technical series or some other entity to which a library can subscribe (especially if that publication has an ISSN or ISBN), we will consider that the manuscript has been published and is thus not eligible for consideration by our journals. A partial test for prior publication is whether the manuscript has appeared in some entity with archival value so that it is permanently available to reasonably diligent scholars. A necessary test for prior publication is whether the author can legally transfer copyright to ESA.

So I asked Philippe to explain his tweet:

This got me a little riled up so I broadcast my displeasure:

And then Jarrett Byrnes questioned where this was coming from given the stated policy:

So I emailed ESA to check and, sure enough, preprints on arXiv and similar preprint servers are considered prior publication and therefore cannot be submitted to ESA journals, despite the fact that this isn’t a problem for a few journals you may have heard of including Science, Nature, PNAS, and PLoS Biology. ESA (to their credit) has now clarified this point on their website (emphasis mine; thanks to Jaime Ashander for the heads up):

A posting of a manuscript or thesis on an author’s personal or home institution’s website or ftp site generally will not be considered previous publication. Similarly posting of a “working paper” in an institutional repository is allowed so long as at least one of the authors is affiliated with that institution. However, if a manuscript is available as part of a digital publication such as a journal, technical series, or some other entity to which a library can subscribe (especially if that publication has an ISSN or ISBN), we will consider that the manuscript has been published and is thus not eligible for consideration by our journals. Likewise, if a manuscript is posted in a citable public archive outside the author’s home institution, then we consider the paper to be self-published and ineligible for submission to ESA journals. Finally, a necessary test for prior publication is whether the author can legally transfer copyright to ESA.

In my opinion the idea that a preprint is “self-published” and therefore represents prior publication is poorly justified* and not in the best interests of science, and I’m not the only one:

So now I’m hoping that Jarrett is right:

and that things might change (and hopefully soon). If you know someone on the ESA board, please point them in the direction of this post.

UPDATE: Just as I was finishing working on this post ESA responded to the tweet stream from the last few days:

I’m very excited that ESA is reviewing their policies in this area. As I should have said in the original post, I have, up until this year, been quite impressed with ESA’s generally open, and certainly pro-science policies. This last year or so has been a bad one, but I’m hoping that’s just a lag in adjusting to the new era in scientific publishing.

UPDATE 2: ESA has announced that they have changed their policy and will now consider articles with preprints.

———————————————————————————————————————————————————————–

*I asked ESA if they wanted to clarify their justification for this policy and haven’t heard back (though it has been less than 2 days). If they get back to me I’ll update or add a new post.
   

Three ways to improve impact factors

It’s that time of year again when the new Impact Factor values are released. This is such a big deal to a lot of folks that it’s pretty hard to avoid hearing about it. We’re not the sort of folks that object to the use of impact factors in general – we are scientists after all and part of being a scientist is quantifying things. However, if we’re going to quantify things it is incumbent upon us to try to do it well, and there are several things that we need to address if we are going to have faith in our measures of journal quality.

1. Stop using the impact factor; use Eigenfactor-based metrics instead

The impact factor simply counts the number of citations to a journal’s recent papers and calculates the average per paper. This might have been a decent approach when the IF was first invented, but it’s a terrible approach now. The problem is that, as network theory and some important applications thereof (e.g., Google) have shown, it is also important to take into account the importance of the papers/journals that are doing the citing. Fortunately we now have metrics that do this properly: the Eigenfactor and associated Article Influence Score. These are even reported by ISI right next to the IF.

Here’s a quick way to think about this. You have two papers, one that has been cited 30 times by papers that are never cited, and one that has been cited 30 times by papers that are themselves each cited 30 times. If you think the two papers are equally important, then please continue using the impact factor based metrics. If you think that the second paper is more important then please never mention the words “impact factor” again and start focusing on better approaches for quantifying the influence of nodes in a network.
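If you want to see the difference as a toy calculation, here is a quick sketch in Python. To be clear about what is made up: the four-journal citation matrix is invented, and the damped eigenvector (PageRank-style) score is a simplified stand-in for the actual Eigenfactor algorithm, which uses additional normalization and a citation window.

    import numpy as np

    # Toy illustration, not ISI's actual Eigenfactor calculation: compare raw
    # citation counts with a PageRank-style score that also weights citations
    # by the importance of the citing journal. The citation matrix is made up.
    # C[i, j] = citations from journal j to journal i
    C = np.array([[0, 3, 2, 0],   # journal A
                  [3, 0, 0, 1],   # journal B
                  [1, 1, 0, 0],   # journal C
                  [0, 0, 5, 0]],  # journal D
                 dtype=float)

    raw_counts = C.sum(axis=1)  # impact-factor-style: just count incoming citations

    # Let each citing journal hand out one unit of influence, split among the
    # journals it cites, then iterate a damped eigenvector calculation.
    P = C / C.sum(axis=0)
    n = len(C)
    score = np.ones(n) / n
    for _ in range(100):
        score = 0.85 * P @ score + 0.15 / n
    score /= score.sum()

    for name, cites, s in zip("ABCD", raw_counts, score):
        print(f"journal {name}: {cites:.0f} citations, network score {s:.3f}")

    # A and D receive the same number of raw citations, but A's citations come
    # from better-connected journals, so its network score ends up higher.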

2. Separate reviews (and maybe methods) from original research

We’ve known pretty much forever that reviews are cited more than original research papers, so it doesn’t make sense to compare review journals to non-review journals. While it’s easy to just say that TREE and Ecology are apples and oranges, the real problem is journals that mix reviews and original research. Since reviews are more highly cited, just changing the mix of these two article types can manipulate the impact factor. Sarah Supp and I have a paper on this if you’re interested in seeing some science and further commentary on the issue. The answer is easy: separate the analyses for review papers. It has also been suggested that methods papers have higher citation rates as well, but as I admit in my back and forth with Bob O’Hara (the relevant part of which is still awaiting moderation as I’m posting) there doesn’t seem to be any actual research on this to back it up.

3. Solve the problem of metrics that are strongly influenced by the number of papers

In the citation analysis of individual scientists there has always been the problem of how to deal with the number of papers. The total number of citations isn’t great since one way to get a large number of citations is to write a lot of not particularly valuable papers. The average number of citations per paper is probably even worse because no one would argue that a scientist who writes a single important paper and then stops publishing is contributing maximally to the progress of science.

In journal-level citation analyses these two end points have, up until recently, been all we had, with ISI choosing to focus on the average number of citations per paper and Eigenfactor on the total number of citations [1]. The problem is that these approaches encourage journals to game the system by publishing either the most or the fewest papers possible. Since the issues with publishing too many papers are obvious I’ll focus on the issue of publishing too few. Assuming that journals have the ability to predict the impact of individual papers [2], the best way to maximize per article measures like the impact factor is to publish as few papers as possible. Adding additional papers simply dilutes the average citation rate. The problem is that by doing so the journal is choosing to have less influence on the field (by adding more, largely equivalent quality, papers) in favor of having a higher perceived impact. Think about it this way. Is a journal that publishes a total of 100 papers that are cited 5 times each really more important than a journal that publishes 200 papers, 100 of which are cited 5 times each and 100 that are cited 4 times each? I think that the second journal is more important, and that’s why I’m glad to see that Google Scholar is focusing on the kinds of integrative metrics (like the h-index) that we use to evaluate individual researchers.
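To make the arithmetic concrete, here is that comparison as a few lines of Python; the citation counts are just the hypothetical ones from the example above, not real journal data.

    # The two hypothetical journals from the example above (made-up numbers).
    journal_a = [5] * 100               # 100 papers, each cited 5 times
    journal_b = [5] * 100 + [4] * 100   # 200 papers: 100 cited 5 times, 100 cited 4 times

    for name, cites in [("A", journal_a), ("B", journal_b)]:
        print(f"journal {name}: {len(cites)} papers, "
              f"mean = {sum(cites) / len(cites):.1f}, total = {sum(cites)}")

    # Journal B's per-paper average drops to 4.5 (vs. 5.0) even though it
    # contributes almost twice as many citations (900 vs. 500) to the field.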

Moving forward

The good news is that we do have better metrics that are available right now. The first thing that we should do is start promoting those instead of the metric that shall not be named. We should also think about improving these metrics further. If they’re worth talking about, they are worth improving. I’d love to see a combination of the network approaches in Eigenfactor with the approaches to solving the number of publications problem taken by Google. Of course, more broadly, we are already in the process of moving away from journal level metrics and focusing more on the impact of individual papers. I personally prefer this approach and think that it’s good for science, but I’ll leave my thoughts on that for another day.

UPDATE: Point 3 relates to two great pieces in Ideas in Ecology and Evolution, one by Lonnie Aarssen and one by David Wardle.

UPDATE 2: Fixed the broken link to the “Why Eigenfactor?” page.

———————————————————————————————————————————

[1] Both sets of metrics actually include both approaches: ISI also reports total citations, and the Article Influence Score is the per-paper equivalent of the Eigenfactor; it’s just that these don’t seem to get as much… um… attention.

[2] And if they didn’t then all we’re measuring is how well different journals game the system plus some positive feedback where journals that are known to be highly cited garner more readers and therefore more future citations.

The NSF Preproposal Process: Pt 2. A promising start.

When last we left our intrepid scientists, they were starting to ponder the changes that might result from the new pre-proposal process. In general, we really like the new system because it helps reviewers focus on the value of big picture thinking and potentially reduces the overall workload of both grant writing and grant reviewing. Of course academics are generally nervous about the major shift in the proposal process (and, let’s face it, change in general). Below we’ll talk about: 1) things we like about the new process; 2) concerns that we’ve heard expressed by colleagues and our thoughts on those issues; and 3) modifications to the system that we think are worth considering.

An emphasis on big picture thinking. As discussed in part 1, the 4-page proposal seems to shift the focus of the reader from the details of the project to the overall goals of the study. We are excited by this. The combined pre-proposal/full proposal process – with their different strengths and weaknesses – can potentially generate a strong synergy: the pre-proposal panel assesses which proposals could yield important enough results to warrant further scrutiny and the full-proposal panel assesses whether the research plan is sound enough to yield a reasonable chance of success. In the current reality of limited funding, it seems logical to increase the probability that funds go towards research that is both conceptually important and scientifically sound. Since many of us are more comfortable critiquing work based on specific methodological issues than on ‘general interest’, having a phase in the review that helps focus on the importance of the research seems valuable. However, if reviewers still focus primarily on methodological details (as seemed to be the case on Prof-Like Substance’s panel) then the new system could end up putting even less emphasis on big ideas, because the 4 pages will be entirely filled up with methods. Based on our experience this wasn’t a major concern, but it is definitely a possibility that NSF needs to be aware of.

Reduced reviewer workload: This was the primary motivation for the new system. We feel like we probably spent about as much time reading and reviewing proposals before the panel as we would have with full proposals, but we enjoyed it more because it involved more thinking about big questions and looking around in the literature and less slogging through 10 pages of methodological details. More importantly, there were no ad hoc reviewers for the pre-proposals, which greatly reduces the overall reviewer burden. The full proposals will have ad hocs, but because there are fewer of them we should all end up getting fewer requests from NSF.

Reduced grant writer workload: One common concern about the new system is that people who write a successful pre-proposal will then have to also write a 15-page proposal, thus increasing the workload to 20 pages spread across two separate submissions (pre-proposal + proposal). Folks argue that this results in more time spent grant writing and less time doing science. Our perspective is that while not perfect, the new system is much better than the old system where many people we knew were putting in 1-2 (or even more) 15-page proposals per deadline (i.e., 2-4 proposals/year) with only a 5-10% funding rate (vs. 20-30% for full proposals under the new system). That’s a lot more wasted effort, especially when you consider that much of the prose from the pre-proposal will presumably be used in the full proposal. As grant writers we also really liked that we didn’t need to generate dozens of pages of time-consuming supplemental documents (budgets, postdoc mentoring plans, etc.) until we knew there was at least a reasonable chance of the proposal being funded. The scientific community should definitely have a discussion about how to streamline the process further to optimize the ratio of effort in proposal writing and review to quality of science being funded, but the current system is definitely a step forward in our opinion. If you’re interested in some of the mechanisms by which the PI proposal writing workload could be modified, both Prof-Like Substance’s and Jack’s posts contain some interesting ideas.

New investigators: Everyone, everyone, everyone is concerned about the untenured people. Given the culture among universities that grants = tenure, untenured faculty don’t have the luxury of time, and the big concern is that only having 1 deadline/year gives untenured people fewer chances to get funding before tenure decisions. Since the number of proposals NSF is funding isn’t changing, this isn’t quite as bad as it seems. However, if it takes a new investigator a couple of rounds to make it past the pre-proposal stage then they may not have very many tries to figure out how to write a successful full proposal. The counterarguments are that the once-yearly deadline gives investigators more time to refine ideas, digest feedback, obtain friendly reviews from colleagues, and therefore (hopefully) submit stronger proposals as a result. It also (potentially) restricts the amount of time that untenured folks spend writing grants, therefore freeing up more time to focus on scholarly publications, mentoring students, and creating strong learning environments in our classrooms, which (theoretically) also are important for tenure. We love the ideas behind the counterarguments and if things really play out that way it would be to the betterment of science, but we do worry about how this ideal fares against the grants=tenure mentality.

Collaboration: One of our big concerns (and that of others as well) is the potential impact of the 2 proposal limit on interdisciplinary collaboration. Much of science is now highly interdisciplinary and collaborative, and if team size is limited because of proposal limits this will make both justifying and accomplishing major projects more difficult. We have already run into this problem both in having former co-PIs remove themselves from existing proposals and in having to turn down potential collaborations. We have no problem with a limit on the number of lead-PI proposals – in a lot of ways we think it will help improve the balance between proposing science and actually doing it – but the limit on collaboration is a major concern.

In general, we think that the new system is a definite improvement over the old system, but there are clearly still things to be discussed and fine tuned. Possible changes to consider include:

  • Find a way to allow full proposals that do well to skip the pre-proposal stage the next year. This will reduce stochasticity and frustration. These proposals could still count towards any limit on the number of proposals.
  • Clearly and repeatedly communicate to the pre-proposal panels (let’s face it, faculty don’t tend to listen very well) the desired difference in emphasis between evaluating preliminary proposals and full proposals. This will help maintain the emphasis on interesting ideas and might also help alleviate the angst some panelists felt about what to do about proposals that were missing important details but not obviously flawed.
  • Consider making the proposal limit apply only to the number of proposals on which someone is the lead PI. This still discourages excessive submissions without hurting the collaborative, interdisciplinary approach to science that we’ve all been working hard to foster.

So there it is. Our 2-part opinion piece on the new NSF-process. If you were hoping for a pre-proposal magic template, we’re sorry to disappoint, but hopefully you found a lot to think about here while you were looking for it!

UPDATE: If you were hoping for a pre-proposal magic template, check out the nice post over at Sociobiology.

The NSF Pre-Proposal Process: Pt 1. Judging Preproposals

Before we start, this post refers to posts already written on this topic. To make sure no one gets lost, please follow the sequence of operations below:

Step 1: Do you know about the new pre-proposal process at NSF?

  • If Yes: Continue to Step 2.
  • If No: please read one of these posts and then proceed to Step 2.

Step 2: Have you read Jack Williams’ most excellent post (posted on Jacquelyn Gill’s most excellent blog) about a preproposal panelist’s perspective on the new process?

Step 3: Have you read Prof-like Substance’s post about his experience on a pre-proposal panel? (What? You haven’t read Prof-Like Substance’s blog before?! Go check him out.)

  • If Yes, continue to Step 4
  • If No, go to The Spandrel Shop and read Prof-like Substance’s post and return.

Step 4: Read our post! Like Jack and Prof-Like Substance, we also have experience with the new pre-proposal panels. The nuts and bolts of our experiences were similar to theirs (i.e., number of proposals read, assigning pre-proposals to one of three categories, etc). The main differences are really in our perceptions of the experience and the implications for the broader field. Please remember, there were a TON of pre-proposal panels this spring in both IOS and DEB. Differences from other panelists may reflect idiosyncratic differences in panels or differences in disciplines or just different takes on the same thing – because of NSF confidentiality rules, we can’t identify anything specific about our experiences – so don’t ask. And, speaking of rules: [start legalese] all opinions expressed within this post (including our comments, but not the comments of others) reflect only the aggregated opinions of Ethan & Morgan – henceforth referred to as Weecology – and do not represent official opinions by any entity other than Morgan & Ethan (even our daughter does not claim affiliation with our opinion…though to be honest, she’s two and she disagrees with everything we say anyway). [end legalese]

1) The Importance of Big Ideas. Our perspective on what made for a successful pre-proposal jibes largely with Jack’s. The scope of the question being asked was really important. The panelists had to believe that the research would be a strong and important contribution to the field as a whole – not just to a specific system or taxon. Not only did the question being proposed need to be one that would have broad relevance to the program’s mission, it needed a logical framework for accomplishing that goal. In our experience, disconnects between what you propose to address and what you’re actually doing become glaringly obvious in 4 pages.

2) Judging Methods. The limited space for methods was tricky for both reviewers and writers. Sometimes the methods are just bad – if a design is flawed in 4 pages, it’ll still be flawed in 40 pages. The challenge was how to judge proposals where nothing was obviously wrong, but important details were missing. After reviewing full proposals, where you are trying to decide whether a proposal should be funded as is, this was a rough transition to make because all the details can’t reasonably be fit into 4 pages. While the panel was cognizant of this, it is still hard to jettison old habits. Sometimes proposals were nixed because of those missing details and sometimes not. We honestly don’t have a good feel for why, but it might reflect a complex algorithm involving: a) how cool the idea was, b) the abilities of the research team – i.e., is there a PI with demonstrated experience related to the unclear area, and c) just how important those missing details really seemed to a panelist.

3) Methods vs. Ideas. Our impression is that the 4-page format seems to alter the focus of the reviewer. In 15 pages, so much of the proposal is the methods – the details of questions, designs, data collection, analyses. It’s only natural for the reader to focus on what takes up most of the proposal. In contrast, the structure of the pre-proposal really shifts the focus of the reviewer to the idea. Discussions with our fellow panelists suggest we weren’t the only ones to perceive this, though it’s important to note that not everyone feels this way – Prof-Like Substance’s post and comments flesh out an alternative to our experience.

4) Reviewers spend more time thinking about your proposal. This was an interesting and unexpected outcome of the short proposals. We both spent more time reading the literature to better understand the relevance of a pre-proposal for the field, looking up techniques, cited literature, etc. There was also a general feeling that panelists were more likely to reread pre-proposals. In our experience, most panelists felt like they spent about as much time reviewing each preproposal as they would a 15-pager, but more of this time was spent reading the literature and thinking about the proposal.

In general, like Jack, we came away with a positive feeling about the ability of the panel to assess the pre-proposals. A common refrain among panelists is that we were generally surprised at how well assessing a 4-page proposal actually worked. However, the differences in how a 4-pager is evaluated could have some interesting implications for the type of science funded – something we will speculate on in our next blog post (yes, this is as close as an academic blog gets to a cliff-hanger…).

Metabolic Basis of Ecology Meeting [Announcement]

Are you interested in stoichiometry? Energy flow through individuals, communities or ecosystems? Implications of organismal physiology? Do you like macroecology? Field experiments? Lab experiments? Theory? Are you particularly interested in integrating various combinations of the above? Every two years, people with a general interest in talking about metabolism and how it impacts various aspects of ecology and evolution get together at a Gordon Research Conference focused on the Metabolic Basis of Ecology. The topic is broadly defined and this year is organized around the theme: The Metabolic Basis of Ecology and Evolution in a Changing World. One of the nice things about the meeting is that it’s typically small (<150 people) and includes a lot of broad thinkers. If you’ve never attended a Gordon Conference before, they are organized around invited speaker sessions, small poster sessions, and scheduled time for meeting and interacting in between. You have to apply for the conference before you can register, but the deadline for those applications is imminent (June 24th). The meeting is July 22-27, 2012 at the University of New England (Biddeford, ME). The list of speakers and other information about the conference can be found here.

Crowdfunding for Science 101 [guest post]

Ethan and I have been watching the emergence of crowdfunding in science with great interest. We meant to blog about it, but our rate of blog idea generation is >> our rate of blog writing. So, when Mary Rogalski, a graduate student at Yale who is participating in #SciFund (one of the crowdfunding sites being run by ecologists), asked if we might be interested in blogging about this new phenomenon, we thought this was an opportune time for us to recruit a knowledgeable guest blogger! When you’re done reading her post, wander over to #SciFund and check out Mary’s project and the other intrepid young scientists experimenting with this new venue.

 Now, introducing Mary Rogalski….

*********************************************

You may have heard of crowdfunding – it’s sort of a combination of venture capitalism and social networking.  Artists, musicians, and video game developers have netted thousands or even millions of dollars by gathering small donations from the interested public.  In fact, crowdfunding is now a multibillion dollar industry.

Until recently I was peripherally aware of this flurry of activity, but it was only after I heard of scientists using crowdfunding to support their research that I began to pay attention.   If you’ve ever applied for research grants you know how competitive the process can be.  This only seems to have intensified as we tighten our belts to deal with the ongoing recession.

Two students in my lab recently raised $7,000 for their master’s project by crowdfunding through the group Petridish.  Impressed with their success, I decided to investigate the possibilities.  A friend shared an article in Nature that discussed crowdfunding, featuring the #SciFund Challenge. #SciFund caught my eye for two reasons.  First, unlike some crowdfunding campaigns, participants receive funds even if they fail to reach their funding target.  Second, #SciFund’s mission to teach scientists to more effectively engage with the general public resonates with my own career goals.

I submitted a short description of my research to the #SciFund organizers, Jai Ranganathan and Jarrett Byrnes, and was deemed worthy of joining round 2 of the #SciFund Challenge!  I quickly found that crowdfunding requires a lot of time and energy.  Overall I would say that I have spent close to 40 hours creating my project description and video, and an hour or two per day over the past three weeks promoting my project.

A short video serves as the centerpiece of a #SciFund campaign.   In only 2-3 minutes I had a lot of information to convey.  I study ecological and evolutionary responses to pollution exposure over long time scales.  I work in lakes, using the sediment record to reconstruct changes in heavy metal contamination and cyanobacteria blooms over the past century.  Zooplankton resting egg banks in these same sediments provide a means of examining ecological and evolutionary trends over the same time scales.  I will hatch Daphnia from resting eggs to see which species were better able to tolerate polluted conditions.  Later I will examine evolutionary responses over time.

I struggled to explain my project in three minutes – not to mention, I had never made a video before!  I decided that people would be most interested in the fact that I can “resurrect” animals from the past to see how they were affected by environmental conditions that they experienced.  In focusing on the “how” of my research, I think I might have sacrificed a bit too much of the “why”.  Why do we even care about long-term effects of pollution?  (I can give you lots of reasons, but they didn’t end up in the video!)  Considering it’s my first attempt at making such a video, I do like how it turned out.

During the month of April, the 75 participants in the #SciFund Challenge created draft videos and written descriptions of our research.   We reviewed each other’s work, focusing on creating clear, compelling language.

When the Challenge launched on May 1, we were coached on how to best spread the word about our projects.  First I alerted my close friends and family about my crowdfunding campaign.  Once I received some traction, I reached out to my broader social networks, asking my friends and colleagues to spread the word.  From here, outreach is only limited by your own creativity and time investment.  Before beginning my crowdfunding adventure my exposure to the world of science media was limited.  I felt overwhelmed by the number and diversity of blogs out there, not to mention newspapers, journals, Facebook groups, and scientists that Tweet.  I also felt awkward promoting myself, especially before doing the research that I propose.  In the end I just jumped right in and did my best to wade through what for me represents a wealth of new opportunities to reach out to the public.

With the #SciFund Challenge coming to an end on May 31, I can reflect on my experience.  First, I have been overwhelmed and humbled by the support that my project has received from friends and family.  Crowdfunding also turned out to be a great networking opportunity.  I have connected with other ecologists through Twitter, a form of social media that I had completely avoided until now.  I even found out that there is another paleolimnologist in my own department at Yale!  We are going for a coffee next week to chat about our research.  These interactions began because of my search for research funds, but the end result has been so much richer.

So, will I continue to crowdfund my research?  Do I think it is the wave of the future for science funding?  Could crowdfunding ever replace NSF?  I think the answers to these questions are yes, maybe and probably not.  However, that elusive crowd of people interested in my research, outside of my friends and family, will take years to cultivate.   As I build my career as a scientist I will implement the lessons I have learned from crowdfunding and continue reaching out to audiences outside of academia.  My new blog is a start!

I think that crowdfunding may not be for everyone, and that some types of science might be a tougher sell.  Major research programs requiring hundreds of thousands of dollars will likely not be easily supported in this way.  But who am I to say?  Perhaps crowdfunding could take off and replace traditional sources of science research funding.  Only time will tell!

Mary Rogalski
PhD Candidate, 2014
Yale School of Forestry & Environmental Studies

Why your science blog should provide full feeds

People find blog posts in different ways. Some visit the website regularly, some subscribe to email updates, and some subscribe using the blog’s feed. Feeds can be a huge time saver for processing the ever-increasing amount of information that science generates, by placing much of that information in a single place in a simple, standardized format. They also let you consume one piece of information at a time and keep your inbox relatively free of clutter (for more about why using a feed reader is awesome see this post).

When setting up their feeds bloggers can choose to either provide the entire content of the post, or a small teaser containing just the first few sentences. In this post I am going to argue that science bloggers should choose to provide full posts.

The core reason is that we are doing this to facilitate scientific dialog, and we are all very busy. In addition to the usual academic workload of teaching, doing research, and helping our departments and universities function, we are now dealing with keeping up with a rapidly expanding literature plus a bloom of scientific blogs, tweets, and status updates (and oh yeah, some of us even have personal lives). This means that we are consuming a massive amount of information on a daily basis and we need to be able to do so quickly. I squeeze this in during small windows of time (bus rides home, gaps between meetings, while I’m running my toddler’s bath) and often on a mobile device.

I can do this easily if I have full feeds. I open my feed reader, open the first item, read it, move on to the next one. My brain knows exactly what format to expect, cognitive load is low, and the information is instantly available. If instead I encounter a teaser, I first have to make a conscious decision about whether or not I want to click through to the actual post, then I have to hit the link, wait for the page to load (which can still be a fairly long time on a phone), adjust to a format that varies widely across blogs, often adjust the zoom and rotate my screen (if I’m reading on my phone), read the item, and then return to my reader. This might not seem like a huge deal for a handful of items, but multiply the lost time by a few hundred or a few thousand items a week and it adds up in a hurry. On top of that I store and tag full-text, searchable copies of posts for all of the blogs I follow in my feed reader so that I can find posts again. This is handy when I remember there is a post I want to either share with someone or link to, but can’t remember who wrote it.

So, if your blog doesn’t provide full feeds this means three things. First, I am less likely to read a post if it’s a teaser. It costs me extra time, so the threshold for how interesting it needs to be goes up. Second, if I do read it I now have less time to do other things. Third, if I want to find your post again to recommend it to someone or link to it, the chances of my doing so successfully are decreased. So, if your goal is science communication, or even just not being disrespectful of your readers’ time, full feeds are the way to go.

This all goes for journal tables of contents as well. As I’ve mentioned before, if the journal feed doesn’t include the abstracts and the full author line, it is just costing the papers readers and costing the journal’s readers time, and therefore making the scientific process run more slowly than it could.

So, bloggers and journal editors, for your readers’ sake, for science’s sake, please turn on full feeds. It will only take you two minutes. It will save science hundreds of hours. It will probably be the most productive thing you do for science all week.

Characterizing the species-abundance distribution with only information on richness and total abundance [Research Summary]

This is the first of a new category of posts here at Jabberwocky Ecology called Research Summaries. We like the idea of communicating our research more broadly than to the small number of folks who have the time, energy, and interest to read through entire papers. So, for every paper that we publish we will (hopefully) also do a blog post communicating the basic idea in a manner targeted towards a more general audience. As a result these posts will intentionally skip over a lot of detail (technical and otherwise), and will intentionally use language that is less precise, in order to communicate more broadly. We suspect that it will take us quite a while to figure out how to do this well. Feedback is certainly welcome.

This is a Research Summary of: White, E.P., K.M. Thibault, and X. Xiao. 2012. Characterizing species-abundance distributions across taxa and ecosystems using a simple maximum entropy model. Ecology. http://dx.doi.org/10.1890/11-2177.1*

The species-abundance distribution describes the number of species with different numbers of individuals. It is well known that within an ecological community most species are relatively rare and only a few species are common, and understanding the detailed form of this distribution of individuals among species has been of interest in ecology for decades. This distribution is considered interesting both because it is a complete characterization of the commonness and rarity of species and because the distribution can be used to test and parameterize ecological models.

Numerous mathematical descriptions of this distribution have been proposed and much of the research into this pattern has focused on trying to figure out which of these descriptions is “the best” for a particular group of species at a small number of sites. We took an alternative approach to this pattern and asked: Can we explain broad scale, cross-taxonomic patterns in the general shape of the abundance distribution using a simple model that requires only knowledge of the species richness and total abundance (summed across all species) at a site?

To do this we used a model that basically describes the most likely form of the distribution if the average number of individuals per species is fixed (which turns out to be a slightly modified version of the classic log-series distribution; see the paper or John Harte’s new book for details). As a result this model involves no detailed biological processes, and if we know richness and total abundance we can predict the abundance of each species in the community (i.e., the abundance of the most common species, second most common species… rarest species).
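For anyone who wants a feel for how a prediction like this can be generated, here is a rough sketch in Python. It uses the standard (untruncated) log-series from scipy rather than the slightly modified version used in the paper, so treat it as an illustration of the general approach rather than our actual analysis; the example community (20 species, 300 individuals) is made up.

    import numpy as np
    from scipy import optimize, stats

    def predicted_rank_abundance(S, N):
        """Predict the abundance of each species rank from richness S and total abundance N."""
        target_mean = N / S
        # Solve for the log-series parameter p so that the distribution's mean
        # abundance matches the observed average abundance per species.
        p = optimize.brentq(lambda x: stats.logser(x).mean() - target_mean,
                            1e-6, 1 - 1e-9)
        # Predicted abundance of each rank: the quantile at the midpoint of
        # that rank's slice of the fitted distribution.
        quantiles = (np.arange(1, S + 1) - 0.5) / S
        return np.sort(stats.logser(p).ppf(quantiles))[::-1]  # most common first

    # Made-up example: a community with 20 species and 300 individuals.
    print(predicted_rank_abundance(S=20, N=300))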

Since we wanted to know how well this works in general (not how well it works for birds in Utah or trees in Panama) we put together a dataset of more than 15,000 communities. We did this by combining 6 major datasets that are either citizen science projects, big government efforts, or compilations from the literature. This compilation includes data on birds, trees, mammals, and butterflies. So, while we’re missing the microbes and aquatic species, I think that we can be pretty confident that we have an idea of the general pattern.

In general, we can do an excellent job of predicting the abundance of each rank of species (most abundant, second most abundant…) at each site using only information on the species richness and total abundance at the site. Here is a plot of the observed number of individuals in a given rank at a given site against the number predicted. The plot is for Breeding Bird Survey data, but the rest of the datasets produce similar results.

Observed-predicted plot for nearly 3000 Breeding Bird Survey communities. Since there are over 100,000 points on this plot we’ve color coded them by the number of points in the vicinity of the focal point, so red areas have lots of points nearby and blue areas have very few points. The black line is the 1:1 line.

The model isn’t perfect of course (they never are and we highlight some of its failures in the paper), but it means that if we know the richness and total abundance of a site then we can capture over 90% of the variation in the form of the species-abundance distribution across ecosystems and taxonomic groups.

This result is interesting for two reasons:

First, it suggests that the species-abundance distribution, on its own, doesn’t tell us much about the detailed biological processes structuring a community. Ecologists have known that it wasn’t fully sufficient for distinguishing between different models for a while (though we didn’t always act like it), but our results suggest that in fact there is very little additional information in the distribution beyond knowing the species richness and total abundance. As such, any model that yields reasonable richness and total abundance values will probably produce a reasonable species-abundance distribution.

Second, this means that we can potentially predict the full distribution of commonness and rarity even at locations we have never visited. This is possible because richness and total abundance can, at least sometimes, be well predicted using remotely sensed data. These predictions could then be combined with this model of the species-abundance distribution to make predictions for things like the number of rare species at a site. In general, we’re interested in figuring out how much ecological pattern and process can be effectively characterized and predicted at large spatial scales, and this research helps expand that ability.

So, that’s the end of our first Research Summary. I hope it’s a useful thing that folks get something out of. In addition to the science in this paper, I’m also really excited about the process that we used to accomplish this research and to make it as reproducible as possible. So, stay tuned for some follow up posts on big data in ecology, collaborative code development, and making ecological research more reproducible.

———————————————————————————————————————————————————————————————
*The paper will be Open Access once it is officially published but, for reasons that don’t make a lot of sense to me, it is behind a paywall until it comes out in print.

On the value of fundamental scientific research

Jeremy Fox over at the Oikos Blog has written an excellent piece explaining why fundamental, basic science, research is worth investing in, even when time and resources are limited. His central points include:

  • Fundamental research is where a lot of our methodological advances come from.
  • Fundamental research provides generally-applicable insights.
  • Current applied research often relies on past fundamental research.
  • Fundamental research often is relevant to the solution of many different problems, but in diffuse and indirect ways.
  • Fundamental research lets us address newly-relevant issues.
  • Fundamental research alerts us to relevant questions and possibilities we didn’t recognize as relevant.
  • Fundamental research suggests novel solutions to practical problems.
  • The only way to train fundamental researchers is to fund fundamental research.

I don’t have a lot to add to what Jeremy has already said, except that I strongly agree with the points that he has made and think that in an era where much of ecology has direct applications to things like global change we need to guard against the temptation to justify all of our research based on its applications.

When I think about the value of fundamental research I always recall a scene from an early season of The West Wing where a politician (SAM) and a scientist (MILLGATE) are discussing how to explain the importance of something akin to the Large Hadron Collider. It loses a little something as a script (compliments of the Unofficial West Wing Transcript Archive), but nonetheless:

SAM
What is it?

MILLGATE
It’s a machine that reveals the origin of matter… By smashing protons together at very high speeds and at very high temperatures, we can recreate the Big Bang in a laboratory setting, creating the kinds of particles that only existed in the first trillionth of a second after the universe was created.

SAM
Okay, terrific. I understand that. What kind of practical applications does it have?

MILLGATE
None at all.

SAM
You’re not in any way a helpful person.

MILLGATE
Don’t have to be. I have tenure.

SAM
Doctor.

MILLGATE
There are no practical applications, Sam. Anybody who says different is lying.

ENLOW
If only we could only say what benefit this thing has, but no one’s been able to do that.

MILLGATE
That’s because great achievement has no road map. The X-ray’s pretty good. So is penicillin. Neither were discovered with a practical objective in mind. I mean, when the electron was discovered in 1897, it was useless. And now, we have an entire world run by electronics. Haydn and Mozart never studied the classics. They couldn’t. They invented them.

SAM
Discovery.

MILLGATE
What?

SAM
That’s the thing that you were… Discovery is what. That’s what this is used for. It’s for discovery.

The episode is “Dead Irish Writers” and I’d highly recommend watching the whole thing if you want to feel inspired about doing fundamental research.