
The NSF Pre-Proposal Process: Pt 2. A promising start.

When last we left our intrepid scientists, they were starting to ponder the changes that might result from the new pre-proposal process. In general, we really like the new system because it helps reviewers focus on the value of big-picture thinking and potentially reduces the overall workload of both grant writing and grant reviewing. Of course, academics are generally nervous about the major shift in the proposal process (and, let’s face it, about change in general). Below we’ll talk about: 1) things we like about the new process; 2) concerns that we’ve heard expressed by colleagues and our thoughts on those issues; and 3) modifications to the system that we think are worth considering.

An emphasis on big-picture thinking.  As discussed in Part 1, the 4-page proposal seems to shift the focus of the reader from the details of the project to the overall goals of the study. We are excited by this. The combined pre-proposal/full-proposal process – with their different strengths and weaknesses – can potentially generate a strong synergy: the pre-proposal panel assesses which proposals could yield important enough results to warrant further scrutiny, and the full-proposal panel assesses whether the research plan is sound enough to yield a reasonable chance of success. In the current reality of limited funding, it seems logical to increase the probability that funds go towards research that is both conceptually important and scientifically sound. Since many of us are more comfortable critiquing work based on specific methodological issues than on ‘general interest’, having a phase in the review that focuses on the importance of the research seems valuable. However, if reviewers still focus primarily on methodological details (as seemed to be the case on Prof-Like Substance’s panel), then the new system could end up putting even less emphasis on big ideas, because the 4 pages will be entirely filled up with methods. Based on our experience this wasn’t a major concern, but it is definitely a possibility that NSF needs to be aware of.

Reduced reviewer workload: This was the primary motivation for the new system. We feel like we probably spent about as much time on pre-panel reading and reviewing as we would have under the old system, but we enjoyed it more because it involved more thinking about big questions and looking around in the literature, and less slogging through 10 pages of methodological details. More importantly, there were no ad hoc reviewers for the pre-proposals, which greatly reduces the overall reviewer burden. The full proposals will have ad hoc reviewers, but because there are fewer of them we should all end up getting fewer requests from NSF.

Reduced grant-writer workload: One common concern about the new system is that people who write a successful pre-proposal will then also have to write a 15-page full proposal, thus increasing the workload to roughly 20 pages spread across two separate submissions (pre-proposal + full proposal). Folks argue that this means more time writing grants and less time doing science. Our perspective is that, while not perfect, the new system is much better than the old system, where many people we knew were putting in 1-2 (or even more) 15-page proposals per deadline (i.e., 2-4 proposals/year) with only a 5-10% funding rate (vs. 20-30% for full proposals under the new system). That’s a lot more wasted effort, especially when you consider that much of the prose from the pre-proposal will presumably be reused in the full proposal. As grant writers we also really liked that we didn’t need to generate dozens of pages of time-consuming supplemental documents (budgets, postdoc mentoring plans, etc.) until we knew there was at least a reasonable chance of the proposal being funded. The scientific community should definitely discuss how to streamline the process further to optimize the ratio of effort spent writing and reviewing proposals to the quality of the science being funded, but the current system is definitely a step forward in our opinion. If you’re interested in ways the PI proposal-writing workload could be modified, both Prof-Like Substance’s and Jack’s posts contain some interesting ideas.

New investigators: Everyone, everyone, everyone is concerned about the untenured people. Given the culture among universities that grants = tenure, untenured faculty don’t have the luxury of time, and the big concern is that having only 1 deadline/year gives untenured people fewer chances to get funding before tenure decisions. Since the number of proposals NSF is funding isn’t changing, this isn’t quite as bad as it seems. However, if it takes a new investigator a couple of rounds to make it past the pre-proposal stage, then they may not have very many tries to figure out how to write a successful full proposal. The counterarguments are that the once-yearly deadline gives investigators more time to refine ideas, digest feedback, and obtain friendly reviews from colleagues, and therefore (hopefully) submit stronger proposals as a result. It also (potentially) restricts the amount of time that untenured folks spend writing grants, freeing up more time to focus on scholarly publications, mentoring students, and creating strong learning environments in our classrooms, which (theoretically) also are important for tenure. We love the ideas behind the counterarguments, and if things really play out that way it will be to the betterment of science, but we do worry about how this ideal fares against the grants = tenure mentality.

Collaboration: One of our big concerns (and that of others as well) is the potential impact of the two-proposal limit on interdisciplinary collaboration. Much of science is now highly interdisciplinary and collaborative, and if team size is limited because of proposal limits, justifying and accomplishing major projects will become more difficult. We have already run into this problem, both in having former co-PIs remove themselves from existing proposals and in having to turn down potential collaborations. We have no problem with a limit on the number of lead-PI proposals; in a lot of ways we think it will help improve the balance between proposing science and actually doing it. But the limit on collaboration is a major concern.

In general, we think that the new system is a definite improvement over the old system, but there are clearly still things to be discussed and fine-tuned. Possible changes to consider include:

  • Find a way to allow full proposals that do well to skip the pre-proposal stage the next year. This will reduce stochasticity and frustration. These proposals could still count towards any limit on the number of proposals.
  • Clearly and repeatedly communicate to the pre-proposal panels (let’s face it, faculty don’t tend to listen very well) the desired difference in emphasis between evaluating preliminary proposals and full proposals. This will help maintain the emphasis on interesting ideas and might also help alleviate the angst some panelists felt over how to handle proposals that were missing important details but not obviously flawed.
  • Consider making the proposal limit apply only to proposals on which someone is the lead PI. This still discourages excessive submissions without hurting the collaborative, interdisciplinary approach to science that we’ve all been working hard to foster.

So there it is. Our two-part opinion piece on the new NSF process. If you were hoping for a pre-proposal magic template, we’re sorry to disappoint, but hopefully you found a lot to think about here while you were looking for it!

UPDATE: If you were hoping for a pre-proposal magic template, check out the nice post over at Sociobiology.

The NSF Pre-Proposal Process: Pt 1. Judging Pre-Proposals

Before we start, this post refers to posts already written on this topic. To make sure no one gets lost, please follow the sequence of operations below:

Step 1: Do you know about the new pre-proposal process at NSF?

  • If Yes: Continue to Step 2.
  • If No: please read one of these posts and then proceed to Step 2.

Step 2: Have you read Jack Williams’ most excellent post (posted on Jacquelyn Gill’s most excellent blog) about a pre-proposal panelist’s perspective on the new process?

Step 3: Have you read Prof-like Substance’s post about his experience on a pre-proposal panel? (What? You haven’t read Prof-Like Substance’s blog before?! Go check him out.)

  • If Yes: Continue to Step 4.
  • If No: Go to The Spandrel Shop, read Prof-Like Substance’s post, and return.

Step 4: Read our post! Like Jack and Prof-Like Substance, we also have experience with the new pre-proposal panels. The nuts and bolts of our experiences were similar to theirs (i.e., number of proposals read, assigning pre-proposals to one of three categories, etc.). The main differences are really in our perceptions of the experience and the implications for the broader field. Please remember, there were a TON of pre-proposal panels this spring in both IOS and DEB. Differences from other panelists may reflect idiosyncratic differences in panels, differences in disciplines, or just different takes on the same thing – and because of NSF confidentiality rules, we can’t identify anything specific about our experiences, so don’t ask. And, speaking of rules: [start legalese] all opinions expressed within this post (including our comments, but not the comments of others) reflect only the aggregated opinions of Ethan & Morgan – henceforth referred to as Weecology – and do not represent official opinions by any entity other than Morgan & Ethan (even our daughter does not claim affiliation with our opinion…though to be honest, she’s two and she disagrees with everything we say anyway). [end legalese]

1) The Importance of Big Ideas. Our perspective on what made for a successful pre-proposal jibes largely with Jack’s. The scope of the question being asked was really important. The panelists had to believe that the research would be a strong and important contribution to the field as a whole – not just to a specific system or taxon. Not only did the question being proposed need to have broad relevance to the program’s mission, the pre-proposal also needed a logical framework for accomplishing that goal. In our experience, disconnects between what you propose to address and what you’re actually doing become glaringly obvious in 4 pages.

2) Judging Methods. The limited space for methods was tricky for both reviewers and writers. Sometimes the methods are just bad – if a design is flawed in 4 pages, it’ll still be flawed in 40 pages. The challenge was how to judge proposals where nothing was obviously wrong, but important details were missing. Coming from reviewing full proposals, where you are trying to decide whether a proposal should be funded as is, this was a rough transition to make, because all the details can’t reasonably fit into 4 pages. While the panel was cognizant of this, it is still hard to jettison old habits. Sometimes proposals were nixed because of those missing details and sometimes not. We honestly don’t have a good feel for why, but it might reflect a complex algorithm involving: a) how cool the idea was, b) the abilities of the research team – i.e., is there a PI with demonstrated experience related to the unclear area, and c) just how important those missing details really seemed to a panelist.

3) Methods vs. Ideas. Our impression is that the 4-page format seems to alter the focus of the reviewer. In 15 pages, so much of the proposal is the methods – the details of questions, designs, data collection, and analyses. It’s only natural for the reader to focus on what takes up most of the proposal. In contrast, the structure of the pre-proposal really shifts the focus of the reviewer to the idea. Discussions with our fellow panelists suggest we weren’t the only ones to perceive this, though it’s important to note that not everyone feels this way – Prof-Like Substance’s post and comments flesh out an alternative to our experience.

4) Reviewers spend more time thinking about your proposal. This was an interesting and unexpected outcome of the short proposals. We both spent more time reading the literature to better understand the relevance of a pre-proposal for the field, looking up techniques, cited literature, etc. There was also a general feeling that panelists were more likely to reread pre-proposals. In our experience, most panelists felt like they spent about as much time reviewing each pre-proposal as they would a 15-pager, but more of this time was spent reading the literature and thinking about the proposal.

In general, like Jack, we came away with a positive feeling about the ability of the panel to assess the pre-proposals. A common refrain among panelists was that we were generally surprised at how well assessing a 4-page proposal actually worked. However, the differences in how a 4-pager is evaluated could have some interesting implications for the type of science funded – something we will speculate on in our next blog post (yes, this is as close as an academic blog gets to a cliff-hanger…).

A new database for mammalian community ecology and macroecology

There are a number of great datasets available for doing macroecology and community ecology at broad spatial scales. These include data on birds (Breeding Bird Survey, Christmas Bird Count), plants (Forest Inventory & Analysis, Gentry’s transects), and insects (North American Butterfly Association Counts). However, if you wanted to do work that relied on knowing the presence or abundance of individuals at particular sites (i.e., you’re looking for something other than range maps), there has never been a decent dataset to work with for mammals.

Announcing the Mammal Community Database (MCDB)

Over the past couple of years we’ve been working to fill that gap as best we could. Since coordinated continental scale surveys of mammals don’t yet exist [1] we dug into the extensive mammalogy literature and compiled a database of 1000 globally distributed communities. Thanks to Kate Thibault‘s leadership and the hard work of Sarah Supp and Mikaelle Giffen, we are happy to announce that this data is now freely available as a data paper on Ecological Archives.

In addition to species lists for 1000 locales, the database contains abundance data for 940 of the locations, some site-level body size data (~50 sites), and a handful of reasonably long (> 10 yr) time series. Most of the data reflect the particular mode of sampling that an individual mammalogist used, and as a result much of the data is for small mammals captured in Sherman traps.

Working with data compilations like this is always challenging because the differences in sampling intensity and approach between studies can make it very difficult to compare data across sites. We’ve put together a detailed table of information on how sampling was conducted to help folks break the data into comparable subsets and/or attempt to control for the influence of sampling differences in their statistical models; a rough sketch of what that kind of subsetting might look like is shown below.
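As a minimal, hypothetical illustration (in Python with pandas), here is one way someone might use a sampling-metadata table to pull out a roughly comparable subset of sites. The file names, column names, and the effort adjustment are placeholders we made up for this sketch, not the actual MCDB layout; check the metadata on Ecological Archives for the real structure.

```python
# Rough sketch of subsetting a compilation like the MCDB to comparable sites.
# NOTE: file names, column names, and values below are hypothetical placeholders.
import pandas as pd

sites = pd.read_csv("MCDB_sites.csv")              # hypothetical: one row per site, with sampling info
communities = pd.read_csv("MCDB_communities.csv")  # hypothetical: one row per site x species, with abundance

# Keep only sites surveyed with Sherman traps that report trapping effort,
# so that abundances are at least roughly comparable across studies.
comparable = sites[(sites["trap_type"] == "Sherman") & sites["trap_nights"].notna()]

subset = communities.merge(
    comparable[["site_id", "trap_nights"]], on="site_id", how="inner"
)

# One simple way to control for effort: captures per 1000 trap-nights.
subset["captures_per_1000_tn"] = 1000 * subset["abundance"] / subset["trap_nights"]

print(subset.head())
```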

The joys of Open Science

We’ve been gradually working on making the science that we do at Weecology more and more open, and the MCDB is an example of that. We submitted the database to Ecological Archives before we had actually done much of anything with it ourselves [2], because the main point of collecting the data was to provide a broadly useful resource to the ecological community, not to answer a specific question. We were really excited to see that, as soon as we announced it on Twitter, folks started picking it up and doing cool things with it [3]. We hope that folks will find all sorts of uses for it going forward.

Going forward

We know that there is tons more data out there on mammal communities. Some of it is unpublished, or not published in enough detail for us to include. Some of it has licenses that mean that we can’t add it to the MCDB without special permission (e.g., there is a lot of great LTER mammal data out there). Lots of it we just didn’t find while searching through the literature.

If folks know of more data we’d love to hear about it. If you can give us permission to add data that has more restrictive licensing then we’d love to do so [4]. If you’re interested in collaborating on growing the database let us know. If there’s enough interest we can invest some time in developing a public portal.

The footnotes [5]

[1] We are anxiously awaiting NEON’s upcoming surveys, headed up by former Weecology postdoc Kate Thibault.

[2] We have a single paper that is currently in review that uses the data.

[3] Thanks to Scott Chamberlain and Markus Gesmann. You guys are awesome!

[4] To be clear, we haven’t been asking for permission yet, so no one has turned us down. We wanted to get the first round of data collection done first to show that this was a serious effort.

[5] Because anything that David Foster Wallace loved has to be a good thing.

Weecology at ESA

If folks are interested in seeing what Weecology has been up to lately we have a bunch of posters and talks at ESA this year. In order of appearance:

  • Tuesday at 2:30 pm in Room 9AB our new postdoctoral researcher Dan McGlinn will be giving a talk on looking at community assembly using patterns of within- and between-species spatial variation.
  • Tuesday afternoon at poster #28 Morgan will be presenting research on how the long-term community dynamics of the plant and rodent communities near Portal, AZ are related to decadal scale climate cycles. She’ll be there from 4:30 to 6:30 to chat, or stop by any time and take a look.
  • Wednesday at 1:50 pm in Room 19A one of our new members, Elita Baldridge, will be giving a talk on her masters research on nested subsets.
  • Wednesday at poster #139 Ethan will be presenting on our two attempts to make it easier to find and use ecological data. He’ll be there from 4:30 to 6:30 to chat, or stop by any time and take a look (or grab a computer and check out EcologicalData and the EcoData Retriever).
  • Thursday at 1:50 pm in Room 10A another of our new members, Zack Brym, will be giving a talk on his masters research on controls on the invasion of an exotic shrub.
  • Thursday at 4 pm in Room 8 Sarah Supp will give a talk on her work looking at the impacts of experimental manipulations on macroecological patterns (highlighted as a talk to see by the Oikos blog).
  • And last, but certainly not least, bright and early Friday morning at 8 am in Room 8 Kate Thibault (who has now moved on to fame and fortune at NEON) will be presenting on our work using maximum entropy models to predict the species abundance distributions of 16,000 communities.

Enjoy!

Michael Nielsen on the importance and value of Open Science

We are pretty excited about what modern technology can do for science, and in particular about the potential for increasingly rapid sharing of, and collaboration on, data and ideas. This big picture is why we like to blog, tweet, and publish data and code, and we’ve benefited greatly from others who do the same. So, when we saw this great talk by Michael Nielsen about Open Science, we just had to share.

(via, appropriately enough, @gvwilson and @TEDxWaterloo on Twitter)

Postdoc position in Jim Brown’s group studying the major patterns of biodiversity

There is a new postdoctoral research position available in Jim Brown’s lab at the University of New Mexico to study some of the major patterns of biodiversity. We know a bit about the research and it’s going to be an awesome project with a bunch of incredibly bright people involved. Jim’s lab is also one of the most intellectually stimulating and supportive environments that you could possibly work in. Seriously, if you are even remotely qualified then you should apply for this position. We’re both thinking about applying and we already have faculty positions :). Here’s the full ad:

The Department of Biology at the University of New Mexico is seeking applications for a post-doc position in ecology/biodiversity. The post-doc will be expected to play a major role in a multi-investigator, multi-institutional project supported by a four-year NSF Macrosystems Ecology grant. The research will focus on metabolic processes underlying the major patterns of biodiversity, especially their pervasive temperature dependence, and requires a demonstrated working knowledge of theory and mathematical and computer modeling skills.

Applicants must have a Ph.D. in ecology or a related discipline.

Review begins with the first applications and continues until the position is filled. Applicants must submit a cover letter, a curriculum vitae, the phone numbers of at least three references, three letters of recommendation, and PDFs of relevant preprints and publications, to be sent directly to ecohire@unm.edu, attn: James Brown. Application materials must be received by July 25, 2011, for best consideration.

Questions related to this posting may be directed to Dr. James Brown at ecohire@unm.edu or to Katherine Thannisch at kthannis@unm.edu.

The University of New Mexico is an Equal Opportunity/Affirmative Action Employer and Educator. Women and underrepresented minorities are encouraged to apply.

Things you should read

We’ve been thinking a lot recently about the idea that the social web can/should play an increasing role in filtering the large quantity of published information to allow the best and most important work to float to the top (see e.g., posts by The Scholarly Kitchen and Academhack). In its simplest form the idea is that folks like us will mention publications that we think are good/important and then people who think we’re worth listening to will be more likely to read those papers and then pass on recommendations of their own. In concept this should allow for good papers to be found by the scientific community regardless of where they are published. Ecology is far from having reached the level of social media integration required to fully realize this possibility, but there are examples of other fields where this sort of thing has actually occurred.

We think this is a cool idea, but currently it is a relatively ineffective way to find interesting papers; primarily because there simply aren’t enough folks in ecology discussing what they’ve read. EEB and Flow does a great job of this and a few other blogs by practicing scientists make occasional contributions in this regard (e.g., I’m a chordata, urochordata), but there certainly isn’t a critical mass yet. Part of the reason for this is that putting together full posts on articles one has read can take quite a bit of time, and time isn’t something most of us have a lot of lying around. Here at JE we have half a dozen Research Blogging style posts that we keep planning on writing, but finding a couple of hours to reread the paper and a couple of related works and put together a full post just doesn’t seem to happen.

So, today Jabberwocky Ecology announces a new kind of post – Things you should read. The idea behind these posts is to reduce the activation energy for posting about papers that we like. As such, these might be as short as the title of the paper and a link. Most of the time we’ll try to contextualize things a bit with a few sentences or a paragraph to help you figure out if the linked material is relevant to you, but these won’t be full blown summaries because these are things you should read, not things you should read about.

Blogrolling graduate student ecology blogs

We’ve recently been following a couple of blogs by graduate students studying ecology and have been enjoying them enough that we thought we’d point folks in their direction.

Transient Theorist is a first-year PhD student interested in quantitative and interdisciplinary approaches to ecology. How could we not love his blog? Particularly good recent posts include Ups and Downs and Intimidating questions.

Karina at Ruminations of an Aspiring Ecologist is a third-year PhD student who travels to remote foreign lands for field work (we love her use of ‘Ukenzagapia’ to pseudonymize the location). Good recent posts include Timescales in graduate school and Even more of my life in comics: writing to professors.

We are glad to see graduate students blogging for a variety of reasons. First, graduate school can sometimes be an incredibly isolating experience, in that it can feel like some of the difficult situations are unique to you, when in fact hundreds of students are going through exactly the same thing. Having a cadre of students writing about these experiences helps their readers feel less alone in their struggles. Second, as faculty we appreciate the opportunity to be reminded of the graduate student perspective on academia. We’re not too far out of graduate school, but it is already difficult to recall what a committee meeting was like from a student perspective. Reading students’ thoughts, especially the sort of honest presentation of internal thoughts made possible by pseudonymous blogging, helps remind us that things often look very different to students than they do to us, which (we hope) helps make us better advisers, committee members, and teachers. Third, it provides opportunities for mentoring and interaction beyond the traditionally defined boundaries of one’s own department or university. Finally, and most importantly, it helps to build the nascent community of ecological bloggers. If you know of other good blogs by students studying ecology, let us know in the comments.

Getting things done in academia

In a couple of days I’m participating in a panel to help young faculty prepare for their 3rd-year review (the halfway step to tenure, which is kind of a big deal at my institution). This is the sort of thing that I normally say no to, but I’ve been to a couple of these things and I just couldn’t bear the thought of another group of young faculty being told that what they really need to do to get tenure is have a really spiffy tenure binder… so I’m going to talk about what they actually need to do to get tenure – get stuff done – and I thought it would be worth posting my thoughts here for broader consumption. This advice is targeted at assistant professors at research universities, but folks in other situations may be able to adapt it to their individual circumstances (e.g., if you’re at a small liberal arts college or other teaching-centered school, try swapping research and teaching below). Since the goal of the workshop is getting through the first phase of tenure, this is about what you need to do to accomplish that goal, not what you should be doing in any broader philosophical sense. This advice is built on the lessons that Morgan (my wife and co-blogger, for those of you new to JE; in fact she was so instrumental in developing these ideas that even though I’m using the first person singular, this will be listed as a co-authored post) and I have learned during our time as assistant professors.


Laying the Groundwork for Change [Quote]

How then is it possible to modify and improve upon an academic culture populated by smart, creative individuals who are motivated by ideals more than by money, who have deep, intense interests, value substance over form, have little patience for conformity, think for themselves, do not defer to authority, and see their work not as a job but as a calling? Clearly the challenge is to find the incentives and rewards that will motivate this unique workforce to buy into desired changes and work willingly toward implementing them. But the first step is to explain clearly why change is necessary and, even more important, why change does not mean abandoning core academic values. To win the hearts of academics, one first has to educate them.

- James C. Garland, Saving Alma Mater

This is just one of many brilliantly reasoned (and worded) arguments from Saving Alma Mater. If you are an academic, or an administrator at an academic institution, you really should read this book.
