The NSF Proposal Revolution: The DEB Data

Over the past year, you can’t get two scientists who submit to the BIO Directorate at NSF together without the conversation drifting to the radical changes in the proposal process. If you have no idea what I’m talking about, I’ve added some links at the bottom of the post for you to check out. For everyone else, suffice it to say that there has been immense speculation about the possible impacts of these changes on the scientific process. Well, over the winter break, DEB released its data to date (IOS did this a little earlier, and comparisons between IOS and DEB are discussed here). So let’s see what happened!

Table 1. Basic Stats on Funding Rates

Preproposals Submitted: 1624
Preproposal Invites for Full Submission: 380
Full Proposals Recommended for Funding: 259
*^ Number of Proposals to be Funded: 83.6
Preproposal Invitation Rate: 23.4%
New Investigator Preproposal Invitation Rate: 20.4%
Full Proposal Panel Recommendation Rate: 68%
Early Career Investigator Full Proposal Panel Recommendation Rate: 35%
* Anticipated Overall Fund Rate on Full Proposal Panel: 22%
*^ Overall Fund Rate from Preproposal Pool: 5.1%

^ Numbers I’ve estimated given the statistics provided by NSF.
* Value complicated by the uncertain fund rate.

You’ll notice some of the items in the table are starred. That’s because things get a little…complicated in the full proposal funding data. When DEB released the data, funding decisions weren’t finalized, so they only had an estimate of funding rates. Also, some full proposals didn’t need to go through the preproposal stage at DEB (e.g. CAREER, OPUS, RCN, and co-reviewed proposals from other divisions), so the starred items have two possible sources of fuzziness: non-preproposal proposals and uncertain fund rates. The NSF info doesn’t make some of this transparent. For example, NSF reports a full proposal ‘success rate’ of 35% for the 82 full proposals submitted by early career investigators through the preproposal process. However, the accompanying table (see below) on success rates over the past 5 years shows the 2012 data as 16% out of 181 proposals. I assume the numbers don’t match because of the proposals submitted outside the preproposal process (i.e. CAREERs). It’s also unclear to me whether ‘success rate’ means ‘recommended for funding’ or actual funding.
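For the curious, here’s the back-of-the-envelope arithmetic behind the caret-marked estimates in Table 1. This is a sketch only: it takes NSF’s anticipated 22% fund rate at face value and applies it to the 380 invited full proposals, ignoring the non-preproposal complications above.

```python
# Reconstructing the estimated (^) values in Table 1 from NSF's reported numbers.
# Everything derived here is only as good as the anticipated 22% fund rate,
# which was not final when DEB released the data.
preproposals_submitted = 1624
full_proposal_invites = 380
anticipated_fund_rate = 0.22  # NSF's anticipated fund rate on the full proposal panel

# Estimated number of funded proposals (hence the odd fractional value in Table 1)
n_funded = anticipated_fund_rate * full_proposal_invites
print(n_funded)  # 83.6

# Estimated overall fund rate from the original preproposal pool
print(f"{n_funded / preproposals_submitted:.1%}")  # 5.1%
```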

Table 2. Statistics for Early Career Investigators over the past 5 years:

Fiscal Year   Success Rate   # Proposals   % of Total Submissions
2007          15.3%          308           23.7
2008          13.8%          320           24.2
2009          17.6%          289           22.8
2010          12.4%          363           24.5
2011          12.3%          350           24.8
2012          16.0%          181           22.7

Interesting Stats to Chew on:

1) Preproposal funding rates: Let’s assume funding rates for full proposals did not differ between CAREERs, RCNs, and invited preproposals (an assumption that is probably wrong). If that’s the case, then in Table 1 I estimated the funding rate of the preproposals at 5.1% (i.e. 5.1% of preproposals eventually got funded as a full proposal). It’s important to note that 5.1% is probably wrong, but how wrong is unclear, as it hinges on how different the fund rate is for CAREERs, etc. My guess is that the preproposal fund rate is a little higher, because things like CAREERs have a lower fund rate and thus bring down the overall average (see the toy sketch after this list). However, I’d be surprised if the difference takes preproposals above 10%.

2) Quality of full proposals: 68% of proposals made a funding category (i.e. were not allocated to the ‘Do Not Fund’ category). I’d be interested in seeing the data from previous years, but 68% seems high based on my limited experience.

3) Early career vs. overall funding rates: Focusing on the preproposal process data only (i.e. not the Table 2 data), my interpretation is that the young people fared well through the preproposal process but took a serious hit in the full proposal process (35% of young investigators recommended for funding vs. an overall recommendation rate of 68%). As a disclaimer, the preproposal data is post-portfolio-balancing while the full proposal data seems to be pre-portfolio-balancing, so it’s possible that the preproposal panels were equally hard on the youngsters but that the Program Directors corrected for it.

4) Early career funding rates: I’ve been studiously ignoring Table 2 (as you might have noticed). The truth is, even if funding rates were equal between established and early career scientists, 5.1% success rates (or even 10%) mean that anyone who needs a grant to have a job (whether to get tenure or because they are on soft money) or to keep research going (labs that need techs or uninterrupted data collection) is in a tough spot right now. Additional biases in funding rates clearly exacerbate the situation for our young scientists, and this is something we should all be aware of when our young colleagues ask for our help or come up for tenure.
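Returning to point 1, here’s a toy sketch of the weighted-average logic behind my guess that the true preproposal fund rate sits a bit above 5.1%. The count of non-preproposal proposals (80) and their fund rate (10%) below are made-up numbers for illustration, not NSF figures.

```python
# Toy illustration: everything except the 380 invites, the 1624 preproposals,
# and the blended 22% rate is hypothetical.
preproposals = 1624
invited = 380        # full proposals invited via the preproposal track (NSF figure)
other = 80           # hypothetical CAREER/OPUS/RCN/co-reviewed proposals on the same panels
blended_rate = 0.22  # NSF's anticipated fund rate across the whole full proposal panel
other_rate = 0.10    # hypothetical (lower) fund rate for the non-preproposal proposals

# If the blended 22% covers both groups, subtracting the hypothetical
# non-preproposal awards leaves the awards from the preproposal track.
total_funded = blended_rate * (invited + other)     # 101.2
invited_funded = total_funded - other_rate * other  # 93.2

print(f"Invited-track fund rate: {invited_funded / invited:.1%}")         # ~24.5%
print(f"Rate from preproposal pool: {invited_funded / preproposals:.1%}")  # ~5.7%
```

Under these made-up numbers the preproposal-pool rate only creeps up from 5.1% to about 5.7%; it would take much more extreme assumptions to push it past 10%, which is why I’d be surprised by a bigger correction.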

Summary:

There’s enough nebulous stuff here that I’m going to hold off on any grandiose statements until NSF releases its full report in early 2013. But the following are things that the data made me start thinking about:

1) There’s nothing in the NSF data thus far that changes my opinion about the preproposal revolution: until NSF has more money to fund science, 5.1% funding rates are the real enemy of science. What NSF is doing is more akin to shuffling the deck chairs than to blowing a hole in the hull.

2) The preproposals per se don’t seem to be filtering out the young people; it’s the full proposal process that seems to be the big hurdle to funding. I suspect that is not a novel result of the new system but has been true all along. The interesting insight the preproposal data might suggest is that the lower funding rates have nothing to do with the ideas of the young scientists and more to do with either the methodologies or with how those methodologies are being communicated.

3) There are clearly two things that will help our younger scientists: a) increasing funding rates overall (not a solution in NSF’s power) and b) figuring out why the bias in the full proposal rates exists and how to fix it (something we can all try to work on). Assuming that the lower recommendation rate for young scientists’ full proposals is due to the proposals themselves and not to a bias against young scientists (i.e. lower name recognition), then this might be a legitimate argument for how the new system could hurt young people: they may need more submissions of full proposals (and more panel feedback) before managing to get a proposal recommended for funding.

Additional Links about the Changes at NSF:

Prof Like Substance: NSF BIO decides to screw new investigators, What I learned at an NSF BIO preproposal panel

Contemplative Mammoth: Inside NSF-DEB’s New Pre-Proposals: A Panelist’s Perspective

Jabberwocky Ecology: Changes in NSF process for submissions to DEB and IOS*, NSF Proposal Changes – Follow-up

Comments:

  1. Nice analysis, Morgan. At The Spandrel Shop I recently read about one other thing that makes it difficult to assess the final funding rate: when federal agencies are funded under a continuing resolution, NSF operates at 80% of its budget. This further limits the number of awards a program office can make. Of course, this means we go from 5% to maybe 7 or 8% funding rates, i.e., from (really bad)^4 to (really bad)^3.

    http://scientopia.org/blogs/proflikesubstance/2012/12/13/the-life-of-an-nsf-program-officer-part-2-what-is-it-like-to-work-at-nsf/

    Hope you don’t mind, but I sent the link to the post around to some colleagues.

  2. Thanks, Emilio, for pointing out that NSF is only operating at 80% right now, which is indeed another complication in how to interpret the numbers. I loved your assessment of its impact on the situation, by the way! Feel free to send a link to this to as many people as you want. A serious assessment of things probably has to wait until NSF releases the final numbers, but given the angst over the new process, I thought starting the dialogue on the data was useful. I’m sure others might disagree with my interpretations, but my hope is that fruitful discussion will help us all understand the severity of the situation for early career scientists, and that an open discussion of the new process (as opposed to angry ranting) might actually be useful for all of us!

  3. Nice post – real data on the topic instead of just opinions!

    Your analogy of moving around the deck chairs is exactly on target. At 5% or even 15% success rates, there are real problems. This is true whether it is grants or journals. I recall a discussion at the editorial board of a journal (which I won’t name) along these lines, about not wanting the acceptance rate to go too low because of all the problems that ensue. The correlation between quality and success goes way down. There is a strong propensity to become conservative: truly novel ideas are filtered against. The contingency of who is assigned to read a proposal matters more. And so on.

    Any process (again, journals or grants) with the goal of making the best quality choices needs to get success rates up to 25% or so.

    Although NSF doesn’t control its budget, it does have the power to do this: namely, reduce award sizes and increase award numbers. Canada is more along these lines, and having experienced both systems, I believe this approach is more efficient. Not sure why this discussion is not being had.

    Thanks.

  4. Hi Brian! Thanks for stopping by. I agree with the need to get funding rates up for all the reasons you listed. I wondered if that was part of the impetus for the new ‘small grants’ initiative at NSF, which got added for this new round of preproposals.

  5. Oh ffs, let’s not get into a Canada funding discussion again, please. The Canadian system doesn’t work in the U.S. Period. Yet it comes up constantly. Just because ecology can sometimes be done on the cheap, don’t extrapolate that to everything NSF funds, even in DEB alone. We have a little issue in the U.S. called “overhead”. It’s kind of a big deal. My expanded thoughts here: http://scientopia.org/blogs/proflikesubstance/2011/08/19/why-we-cant-all-move-to-canada/

    However, that isn’t the point here. You did a much better job with the numbers here than I did as I was rushing through before the break. Interestingly, I’ve talked to people at DEB, and the probability that these numbers are going to change significantly is high because of the shifting budgetary sands at the moment. The other issue is the definition of EC (early career), which is tougher than you might think to pin down. It’ll be interesting to see the final numbers when the dust settles, which won’t be for a few months.

  6. Wow, Prof Like Substance. I put forward a specific suggestion – having a conversation about having more grants of smaller dollar size – and then a supporting fact from my personal experience (which, I’m sorry, your vehement opinion does not overrule). And it’s insulting to say I don’t know about overhead (having had grants in both countries, it would be kind of hard not to…), but that’s also immaterial to my main point. There are 100 differences between the US and Canadian systems, some better in each case (overhead being better in Canada).

    But only one of them is germane to my point – the balance between number and size – which you didn’t address at all. I’m sure even you can see that a $300K grant with $200K after overhead could be turned into two $150K grants with $100K after overhead each. And I’m pretty sure the people out there sweating tenure because they haven’t been able to get even one grant in this environment would like to have this discussion. And as somebody who has tenure and who has done successful research in both a many-small-grants and a few-large-grants environment (and gotten one of the few large grants), I also express a personal preference for the second.

    I guess I see why we don’t have these conversations if people can get away with acting like this (swear words in your first sentence – really the level of academic discussion I aspire to) every time it is brought up. So stop trying to shut down the argument, admit that you are defending your self-interest, and engage on a serious level.

  7. I didn’t really go far enough. How about 4 grants at $50K (plus $25K overhead) or 8 grants at $25K (plus $12.5K overhead)? At these lower numbers you start to run into other differences, like not needing summer salary in Canada, which make such small grants not feasible in the US. But I am convinced the optimum, even leaving the rest of the US system unchanged, is smaller than the current very large grant size. Personally, I suspect that 2 grants at $150K or 4 grants at $50K is maximally efficient. But I’d be curious to hear what other people think (for example, I’m less dependent on graduate student field work than some people are).

  8. I think a mix of the big and small grant avenues might actually be ideal (though what the best balance of those is, I don’t know). I’ve been keeping my field site going for the past couple of years with a mixture of small grants (one at < $80k total including overhead and one at ~$20k with no overhead). This has been hugely important to my research, so obviously I am a proponent of more small grants to keep people going in this climate. However, making things work on a small budget has required ingenuity and some sacrifices in the data we collect, which means there are bigger, more integrative questions we can’t afford to pursue. So I also see the need for the larger grants – especially with the growing integration of lab-based or technological tools in my field of community ecology (physiological measurement, genetics, isotopes, sensor arrays) to ask questions that can’t be addressed otherwise. Like Brian, I also know a variety of young people who don’t need large grants to be highly productive scientists, but their institutions are telling them they need ‘federal $$’ to get tenure. So I see advantages to both the big grant and small grant systems. This is why I was so excited to see NSF try a small grant experiment. I don’t know how much money from the general panel pool is going to be allocated for the small grants, but I think it’s a great experiment and I can’t wait to hear how it works out.

    Having watched this debate on the blogosphere for a while, I suspect part of the big grant/small grant tension depends on whether one’s research can survive on a small grant or not. Field ecologists and macroecologists (of which I am both) can hunker down and survive on small grants if necessary (though with potential costs to the field ecologists – and to some extent the macroecologists, if student or technical support is needed – in the scope of questions that can be addressed). Therefore, having a mechanism for small grants can be a major boon to scientific productivity. But I know that small grants may not have the same research life-support payoff for my bench-oriented colleagues – which probably leads them to see it as a giant waste of money. I have a friend who does ecosystem nutrient work, and it costs an eye-popping amount per month just for the basic maintenance and supplies to keep her primary machine running – forget actually collecting samples, processing samples so they can be run, paying techs, etc.

  9. Don’t bother, Brian. I’ve tried making the exact same suggestion to PLS in the past. I too was informed that I was an ignorant moron for suggesting that NSF might have any flexibility whatsoever to reduce average grant size in order to bump up success rates.

  10. Very good points Morgan. That different kinds of science cost different amounts is one argument for big granting agencies to offer a mix of programs–which of course they do. What the “optimal” mix is, or whether there even is a single optimum (and an optimum from whose perspective?), isn’t easy to figure out.

    Besides different intrinsic costs of different sorts of research, I suspect one’s views on the optimal mix of programs depend on one’s risk aversion. Andrew Hendry – who has foregone some genomic work he’d like to do because he can’t find a way to fund it in the Canadian system – once suggested to me that NSERC should cut everyone’s Discovery Grant by, say, 10% in order to fund a program offering bigger, NSF-type grants. The success rate for that program would probably be extremely low, but Andrew argued that nobody would mind because (i) we’d still have our Discovery Grants as backstops, and (ii) everybody would be cocky enough to think that they’d be one of the fortunate few to get one of those big new NSF-type grants! I told Andrew that I for one would mind – I don’t want to pay 10% of my Discovery Grant for what’s effectively a lottery ticket. I don’t play the lottery in my personal life, and I don’t want to play it in my professional life either! In other words, I’m not a gambler – I’m risk averse, or at least more risk averse than Andrew is.

  11. Pingback: Friday links: do ecologists just reinvent the wheel, NSF grant stats, and more | Dynamic Ecology

  12. Thanks for the thoughts and discussion. Some clarification on one important piece of the story is now posted at http://alantownsend.net/.

    Short story: the sizable (and unusual) budget cut so far this FY is the major driver of any differences in success rates. That means it’s key to hold off a bit and see what happens with this year’s budget relative to past years before final conclusions about success rates this year (vs. others) can be drawn. Hope the longer write-up on my blog helps clarify.

  13. Thanks, Alan, for stopping by and pointing out your blog post on the impacts of the unusual budget situation on NSF. Other than hearing about friends here or there who have been put on a funding waiting list, it’s hard for those of us outside NSF to understand how this is impacting the NSF and what that means for comparing statistics to previous years.

    As an aside, we’re big fans here of the increasing use of social media by people at NSF!

  14. Morgan, you’re welcome and understood – I’ve certainly learned a lot since rotating at NSF that I didn’t really understand before coming here. We’ll do all we can to get info out as it clarifies, so that discussions of the new process can be as informed as possible – hopefully this includes an official DEB blog before too long.

  15. Pingback: Flump up the jam | BioDiverse Perspectives

  16. Hi GY Zhang – Here’s my best guess at what those #s mean. I suspect they are a combo of a couple of things. First, many programs have small pots of money that can be used to fund projects – conferences/workshops, time-sensitive projects (responses to hurricanes, etc.), promising projects that need preliminary data, etc. These are often discussed in advance of submission, and you need to be invited by a program officer to submit a proposal – which they typically don’t do unless they are pretty sure they’ll fund it. So funding success for these is close to 100%. I don’t know how many of these small grants there are, but they can potentially make funding rates look higher. Second, when I drilled down to the programs I’m familiar with, I suspect the data is only for full proposals. The preproposal round may not be counted because nothing is ‘funded’. The preproposal round reduces the pool to a smaller number of full proposals that then get considered. If you make it to the full proposal round, the funding rates are close to the numbers reported for the programs I’m familiar with.

    Does that help?

  17. Pingback: Open Thread: Snowquester Edition – DEBrief
