We’ve had a bit of discussion here at JE about potential solutions to the tragedy of the reviewer commons, so I found it interesting that a recent letter in Nature (warning – it’s behind a paywall) suggests there may not actually be a problem. The take-home message is:
At the journal Molecular Ecology, we find little evidence for the common belief that the peer-review system is overburdened by the rising tide of submissions.
and the authors base this conclusion on some basic statistics about the number of review requests required to obtain a reviewer and the average number of authors and reviewers for each paper. It’s not exactly the kind of hard, convincing data that will formally answer the question of whether there is a problem, but it’s interesting to hear that at least one journal’s editorial group isn’t particularly concerned about this supposedly impending disaster.
Senior authorship is the practice whereby the last position on an author line is occupied by the leader of the lab in which the project was conducted (i.e., the P.I., the advisor, whatever terminology you prefer). Being the senior author on a paper is considered a sign of leadership on the project and is arguably at least as prestigious as being the first author. This practice is commonplace (i.e., practically required) in the cellular, molecular & biomedical fields, and is becoming increasingly prevalent in ecology.
Nearly two years ago I suggested that the idea of using the last position on an author line to indicate the “senior author” was bad for collaborative, interdisciplinary fields such as ecology. While I still believe this to be true, I’m wondering if this is a battle that has already been quietly fought and lost. I’ve seen more and more examples of labs that are using this senior authorship model (i.e., the advisor is always in last place on the author line, and presumably not because they always make the smallest contribution), and just in the last few weeks I’ve noticed that Wiley’s RSS feeds no longer even list the first author of the paper, just the last author. So, I thought I’d ask you (and any of your friends you’d like to forward this to) what you think, so that folks starting their own labs (including me) can get a feel for the field’s take on last authorship.
Feel free to discuss further in the comments.
UPDATE: Corrected Freudian slip in the title.
After posting about PubCreds I emailed the authors of the original article to invite a response because: 1) it’s only fair if you’re going to criticize someone’s idea to give them a chance to defend it; and 2) I think that the blogosphere is actually the ideal place to have these kinds of discussions because unlike journals it is actually designed to allow for… well… discussions. Below follows a guest post by Jeremy Fox & Owen Petchey. My thanks to Jeremy and Owen for taking the time to respond. Enjoy.
First, thanks to Ethan for a very thoughtful post on PubCreds. This kind of constructive criticism is actually more welcome and valuable than unreserved praise. Thanks also to Ethan for inviting Owen and me to respond. Owen and I have chatted about our response, and I’ve taken the lead on actually writing it.
The peer-review system has recently come under increasing pressure as the number of papers submitted has skyrocketed. Jeremy Fox and Owen Petchey have recently proposed a new system for fixing this so-called “tragedy of the reviewer commons.” The crux of the argument is that for every paper a researcher submits they must review three papers in exchange (thus balancing the review load imposed by each submitted paper). A centralized PubCred bank would keep track of reviews, submissions, and the balance of credits for each researcher.
At first glance this seems like kind of a cool idea, and I’ve seen a recent surge of interest in it via email and an enthusiastic post over at I’m a chordata, urochordata. However, there are, as I see it, two major challenges for this type of system. The first is that in order to make it function properly there have to be a bunch of detailed rules in place for special circumstances. The authors of the proposal address some of these and acknowledge that there will be others†. But who should make these rules? Certainly we won’t all agree on the best solutions (e.g., I think that forcing reviewers to re-review manuscripts without additional credit, as proposed, is dangerous and likely to lead to increasingly poor editorial practice*), so who decides? I, for one, would be loath to hand this responsibility off to the publishers‡, so I guess we’ll need some sort of council of researchers, from across a breadth of disciplines and countries, preferably elected in some sort of democratic process, who can then meet and vote on the rules. That sounds difficult to set up and organize, but we are talking about a group that would control a major aspect of the scientific process, so we’d better do it right.
The other major challenge is setting up the actual technical aspects of the system. Fox & Petchey suggest that, given currently available web technology, the basic setup should take no more than three person-months, and that sounds about right to me for the basic site. But it ignores some important complexities. The most serious of these is the lack of a universal author identification system. There are tens (if not hundreds) of thousands of individuals contributing to the writing and review of papers across disciplines, and this will lead to numerous instances where authors/reviewers have similar or identical names. There are initiatives underway to address this problem (primarily motivated by search issues), but none of them is complete and we are probably years away from an agreed-upon standard within disciplines (let alone among them). Until such a system becomes established it is difficult to see how PubCreds could properly operate. I suppose the PubCred system could try to take on this responsibility itself, but I suspect that the political realities of numerous groups competing to provide this service will make that complicated. In addition, we would presumably need a secure solution for validating payments from non-lead authors (in the proposal, authors are allowed to split up the “cost” of review however they deem appropriate). If this is really going to be a currency valuable enough to solve the reviewer issue, then it will be valuable enough to generate unseemly behavior. Maybe a simple email confirmation process would suffice, but we need something to prevent the lead author from unilaterally deciding how to divide up the cost.
Regardless, my point is simply that while the basic system is easy, if we are going to use this to literally govern whether or not a (potentially important) scientific paper can be submitted, then the system needs to be about as robust as a banking system, and accounting for complex contingencies and putting together appropriate security makes this quite a bit more than a 3 person-month job.
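To make the accounting concrete, here is a minimal sketch of the kind of ledger a PubCred bank would need, assuming the proposed 3-reviews-per-submission exchange rate and author-chosen cost splitting. The class and method names are hypothetical (the proposal specifies no implementation), and a production version would need all the authentication and security discussed above.

```python
REVIEWS_PER_SUBMISSION = 3  # the proposed 3:1 review-to-submission ratio


class PubCredBank:
    """Hypothetical sketch of the centralized credit ledger."""

    def __init__(self):
        self.balances = {}  # researcher id -> current credit balance

    def credit_review(self, reviewer):
        """Award one credit for a completed review."""
        self.balances[reviewer] = self.balances.get(reviewer, 0) + 1

    def submit_paper(self, shares):
        """Debit a submission, with the cost split among authors.

        `shares` maps author id -> credits that author pays; the shares
        must sum to the full submission cost, and every payer must hold
        enough credits (this is where payment validation would live).
        """
        if sum(shares.values()) != REVIEWS_PER_SUBMISSION:
            raise ValueError("shares must sum to the submission cost")
        for author, share in shares.items():
            if self.balances.get(author, 0) < share:
                raise ValueError(f"{author} has insufficient credits")
        for author, share in shares.items():
            self.balances[author] -= share
```

Even this toy version shows where the rule-making questions bite: the ratio, the split validation, and the insufficient-funds policy are all decisions someone has to own.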
So I guess we’d better get to work, because to do this properly is going to take a lot of organization and some serious effort. Or, we could just “privatize the reviewer commons” in exactly the same way we “privatize” everything else. We could use money. This has already been proposed quite eloquently in an editorial by Lonnie Aarssen (and he even implemented this idea for a while at his new journal – IEE; see also the follow-up editorial) that we’ve discussed here before. The current proposal discounts this possibility because:
…a fee-to-submit system would disadvantage authors who lack the means to pay, might require exorbitant payments in order to attract referees who would not otherwise agree to serve, likely would cause authors to avoid journals charging submission fees, and would require frequent currency exchange due to the international nature of science.
I consider the first three points to be logically flawed relative to the proposed system for the following reasons (in the same order as the original objections):
- We’re all broke in PubCred land. We all start with zero credits and have to earn enough to submit manuscripts. If we replace credits with a fixed-fee system in which each reviewer is paid one third of the cost of a submission for each review, then this is exactly the same situation as if I have $0 in my bank account: I have to review 3 papers to have earned enough money to submit one.
- It doesn’t matter how high these numbers have to get because they are offset by the cost of submitting a paper.
- Just like the proposed PubCred system, this only works if a large number of powerful journals are involved in a coordinated manner. Clearly having a small number of journals implement either system will lead to authors simply avoiding those journals (as happened to IEE when it tried implementing the money based system).
So, it seems to me that a simple logical substitution of dollars for credits negates all but one of the supposed objections to a monetarily based system. The final point related to currency exchange simply seems inconsequential.
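The substitution can be checked with trivial arithmetic. Assuming a hypothetical fixed fee (the dollar amount below is arbitrary, chosen only for illustration) and the proposal’s 3:1 review-to-submission ratio, a researcher starting from zero dollars is in exactly the same position as one starting from zero credits:

```python
# Hypothetical fixed-fee version of PubCred; the dollar amount is invented.
SUBMISSION_FEE = 300                 # dollars, illustrative only
REVIEW_PAYMENT = SUBMISSION_FEE / 3  # mirrors the 3 reviews : 1 submission ratio

balance = 0                          # "we're all broke in PubCred land"
for _ in range(3):                   # completing three reviews...
    balance += REVIEW_PAYMENT
balance -= SUBMISSION_FEE            # ...pays for exactly one submission
```

After three reviews and one submission the balance is back to zero, just as it would be with credits.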
In addition to being more straightforward than implementing a new PubCred system, I think that a monetary approach has an additional advantage. It allows the market to operate on the peer review system. I’m sure that I haven’t even begun to imagine all of the ways that the market could influence peer review, but here’s a short list of things that come to mind:
- Great reviewers could be rewarded more than mediocre reviewers. PubCred treats reviews dichotomously: they are either good enough for credit or not. But we all know that reviews and reviewers don’t just fall into two groups, so why not reward reviewers on a sliding scale? Each journal could keep track of the quality of reviews and use that information to decide how much to offer a reviewer to entice them to review.
- Journals that want papers reviewed faster can offer higher payments to entice reviewers.
- Top journals that want more reviewers can charge higher submission fees to cover the expense.
- Down the road this potentially provides an avenue for appropriately charging for-profit journals for the massive amount of… free… labor upon which they rely to make large profits.
- Funding agencies and universities could potentially stop funding publication costs.
In conclusion I should say that I am super impressed with Fox & Petchey for being some of the first folks out there to actually put forth a serious suggestion for fixing the current problems with peer review, and I think that they have (with an appropriately long lead-time and substantial up-front investment) come up with a system that would actually work. It’s just that overall it seems like there is a much simpler approach available. Take their approach, replace each credit with a fixed number of dollars (to start), and as a result get rid of all of the decision making and infrastructure.
UPDATE: Owen Petchey’s name is now spelled correctly. Sorry Owen.
†Even the most basic rule of a 3:1 ratio of reviews to submissions seems like it should be a topic of discussion. What about journals like Science and Nature that, out of an abundance of caution, often get 3-5 reviews on a manuscript instead of the standard 2? Because the current proposal does not allow different journals to charge different numbers of credits for submissions or provide less credit for reviews, journals that utilize more reviewers will put a burden on the system (NB: editors also receive credit for managing manuscripts, so the 3:1 ratio is really appropriate for a standard 2-reviewer system). So, should the ratio be increased to 4:1 or 5:1, or should journals be given flexibility with regards to credits and/or payments?
*We here at JE have noticed an increasing trend of late in the number of re-reviews requested, and an apparent unwillingness on the part of some editors to take the time to evaluate whether the changes recommended by the reviewer have been made. Instead they simply keep sending the paper back to the original reviewers until those reviewers have no comments left. This slows down the system, wastes reviewer time and motivation, and frustrates authors, and under the proposed system there is nothing to stop editors from doing this ad nauseam – the reviewer has no recourse, because if they don’t complete the potentially never-ending re-reviews they receive no credit.
‡Who are in most cases motivated more by profit margins than the good of science. This is perfectly reasonable given that in the vast majority of cases they are private corporations, but it means that we don’t want them being the ones who are making critical decisions that would have large impacts on the scientific process.
We’ve been thinking a lot recently about the idea that the social web can/should play an increasing role in filtering the large quantity of published information to allow the best and most important work to float to the top (see e.g., posts by The Scholarly Kitchen and Academhack). In its simplest form the idea is that folks like us will mention publications that we think are good/important and then people who think we’re worth listening to will be more likely to read those papers and then pass on recommendations of their own. In concept this should allow for good papers to be found by the scientific community regardless of where they are published. Ecology is far from having reached the level of social media integration required to fully realize this possibility, but there are examples of other fields where this sort of thing has actually occurred.
We think this is a cool idea, but currently it is a relatively ineffective way to find interesting papers; primarily because there simply aren’t enough folks in ecology discussing what they’ve read. EEB and Flow does a great job of this and a few other blogs by practicing scientists make occasional contributions in this regard (e.g., I’m a chordata, urochordata), but there certainly isn’t a critical mass yet. Part of the reason for this is that putting together full posts on articles one has read can take quite a bit of time, and time isn’t something most of us have a lot of lying around. Here at JE we have half a dozen Research Blogging style posts that we keep planning on writing, but finding a couple of hours to reread the paper and a couple of related works and put together a full post just doesn’t seem to happen.
So, today Jabberwocky Ecology announces a new kind of post – Things you should read. The idea behind these posts is to reduce the activation energy for posting about papers that we like. As such, these might be as short as the title of the paper and a link. Most of the time we’ll try to contextualize things a bit with a few sentences or a paragraph to help you figure out if the linked material is relevant to you, but these won’t be full blown summaries because these are things you should read, not things you should read about.
Cell Press has recently announced what I consider to be the most interesting advance in journal publishing since articles started being posted online. Basically, they have started to harness the power of the web to aggregate the information present in articles in more useful and efficient ways. For example, there is a Data tab for each article that provides an overview of all figures, with large amounts of information on the selected figure, including both its caption and the actual context for its citation in the text. Raw data files are also readily accessible from this same screen. References are dynamically expandable to show their context in the text (without refreshing, which is awesome), filterable by year or author, and linked directly to the original publication. You’ll also notice a comments tab where editor-moderated comments related to the paper will be posted (showing the kind of integrated commenting system that I expect we will see everywhere eventually).
I have seen a lot of discussion of how the web is going to revolutionize publishing, but to quote one of my favorite movies “Talking ain’t doing.” Cell Press is actually doing.
I just read the excellently forward-thinking year-end editorial of the new journal Ideas in Ecology and Evolution. The editorial was written by Lonnie Aarssen and Christopher Lortie and is filled with Aarssen’s trademark creative, outside-the-proverbial-box thinking. In this case it applies to the field of scientific publishing: the things they’ve tried to change with their new journal, and those attempts that have failed and required rethinking. There are a lot of great ideas embodied in this editorial and in the one from the journal’s launch the previous year.
Some years ago, someone wrote a book called “The Seven Laws of Money.” One of the “laws” went something like this: “Do good work and don’t worry about money; it will come along as a side effect.” Whether or not that’s true of money, I don’t know, but in my experience, it’s true of credit for scientific work. Just make sure you keep working at important problems, enjoying a life of science, and don’t worry so much about credit. You will probably get what you deserve — as a side effect.
Nils Nilsson (via Vladimir Lifschitz)
Frequency distributions for ecologists V: Don’t let the lack of a perfect tool prevent you from asking interesting questions
I had an interesting conversation with someone the other day that made me think I needed one last frequency distribution post, so that this series doesn’t discourage anyone from moving forward with addressing interesting questions.
As a quantitative ecologist I spend a fair amount of time trying to figure out the best way to do things. In other words, I often want to know the best available method for answering a particular question. When I think I’ve figured this out I (sometimes, if I have the energy) try to communicate the best methodology more broadly to encourage good practice and accurate answers to questions of interest to ecologists. In some cases finding the best approach is fairly easy. For example, likelihood-based methods for fitting and comparing simple frequency distributions are often straightforward and can be easily looked up online. However, in many cases the methodological challenges are more substantial, or the question being asked is not general enough that the methods have been worked out and clearly presented. This happens in the case of frequency distributions when one needs non-standard minimum and maximum values (a common case in ecological studies) or when one needs discrete analogs of traditionally continuous distributions. It’s not that these cases can’t be addressed; it’s just that you can’t look the solutions up on Wikipedia.
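As an illustration of how simple the straightforward cases can be, here is a sketch of a closed-form maximum likelihood fit for one common example with a non-standard minimum: a continuous power law with an arbitrary lower bound xmin, using the standard estimator alpha_hat = 1 + n / sum(ln(x_i/xmin)). The function name and the simulated data are my own, for illustration only.

```python
import math
import random

def fit_power_law(data, xmin):
    """MLE of the exponent alpha for a continuous power law
    p(x) proportional to x**-alpha on x >= xmin."""
    tail = [x for x in data if x >= xmin]  # only the tail follows the power law
    n = len(tail)
    return 1 + n / sum(math.log(x / xmin) for x in tail)

# Quick sanity check on simulated data (inverse-CDF sampling):
random.seed(1)
alpha_true, xmin = 2.5, 1.0
sample = [xmin * (1 - random.random()) ** (-1 / (alpha_true - 1))
          for _ in range(20000)]
alpha_hat = fit_power_law(sample, xmin)  # should land close to alpha_true
```

When a one-liner like this exists, use it; the harder cases below are where improvisation becomes necessary.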
So, what is someone without a sufficient background to do (and, btw, that might be all of us if the problem is really hard or even… intractable)? First, I’d recommend asking for help. Talk to a statistician at your university or a quantitative colleague and see if they can help you figure things out. I am always pleased to try to help out because I always learn something in the process. Then, if that fails, just do something. Morgan and I will probably write more about this later, but please, please, please don’t let the questions you ask as an ecologist be defined by the availability of an ideal statistical methodology that is easy to implement. In the context of the current series of posts, if you are trying to do something with a more complex frequency distribution and you can’t find a solution to your problem using likelihood, then use something else. If it were me I’d go with either normalized logarithmic binning or something based on the CDF, as these methods can behave reasonably well. Sure, people like me may complain, but that’s fine. Just make clear that you are aware of the potential weaknesses and that you did what you did because you couldn’t figure out an appropriate alternative approach. That way you still get to make progress on the question of interest and you may motivate people to help work on developing better methods. Sure, you might not be presenting the “right” answer, but then I very much doubt that we ever are when studying ecological systems anyway.
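For what it’s worth, here is a rough sketch of the normalized logarithmic binning fallback mentioned above: tally counts in geometrically spaced bins, then divide each count by its bin width (and the sample size) so the bar heights approximate a probability density. The function name and the particular binning choices are illustrative, not a standard recipe.

```python
import bisect
import math

def log_binned_density(data, n_bins=10):
    """Histogram positive data on logarithmically spaced bins and
    normalize by bin width so the result approximates a density.
    Returns (bin_edges, densities)."""
    lo, hi = min(data), max(data)
    edges = [lo * (hi / lo) ** (i / n_bins) for i in range(n_bins + 1)]
    counts = [0] * n_bins
    for x in data:
        # bisect locates the bin whose [edge_i, edge_{i+1}) interval
        # holds x; the maximum value is folded into the last bin
        i = min(bisect.bisect_right(edges, x) - 1, n_bins - 1)
        counts[i] += 1
    n = len(data)
    densities = [counts[i] / (n * (edges[i + 1] - edges[i]))
                 for i in range(n_bins)]
    return edges, densities
```

Because each count is divided by its (unequal) bin width, the densities integrate to one across the binned range, which is what lets you compare them directly against a candidate distribution.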
Many of us have had the feeling that something is not right these days with the peer-review system in science. Whenever I chat with colleagues about the peer-review system, two issues consistently crop up: an increasing number of review requests that we cannot possibly keep up with, and/or reviews that seem to indicate the reviewer did not spend much time with the manuscript they were reviewing. So, when Ecology Letters published an article in 2008 (Hochberg et al.), written by a group of its editors, titled “The tragedy of the reviewer commons”, I read it with great interest. However, I was dismayed to see that apparently the entire fault for the current sad state of affairs lay with people like me: reviewers and authors. I was slightly peeved at the tone of the article, which implied that things would improve if only reviewers/authors behaved better. Where was the responsibility of the journals/editors in this mess? I thought, “I really need to write a blog post on this.” I never got around to it. Since then, at conferences and in additional publications (e.g., McPeek et al. 2008), I have heard the same refrains: scientists need to review faster, better, smarter. I began to wonder if I was alone in this world in my feeling that reviewers/authors are only half of the equation. Then I read a blog article over at the Chronicle of Higher Education. This article was also about the problems with the peer-review system, but from the perspective of a reviewer/author. And I realized not only was I not alone, but that we needed more voices demanding real dialogue on this issue. So here we go: a reviewer/author’s take on how journals/editors can help reviewers/authors make journals/editors happier.
1) Better reviewer databases: I say no a lot to reviews because I say yes a lot to reviews, not because I lack a sense of scientific responsibility. The Chronicle blog (by a sociologist) points out that the number of members in the American Sociological Association is more than enough to support a reasonable number of reviews per person. However, a much smaller number of people seem to be shouldering the load. I suspect the same is true for ecology. So why is this? Undoubtedly the journals are right that there are curmudgeons who simply refuse to review. But I also suspect that editors are busy people like the rest of us, and when we are busy we go with the names of people who come to mind quickly; these “go-to” people are “the most obvious people” to review a paper or give a talk. However, those go-to people are often the same for many people – resulting in a smaller number of people getting a higher load of review requests. As a reviewer I try to help with this situation by recommending people I think are not yet “in the system” (post-docs, young assistant professors, etc.), but I might humbly suggest that journals invest in better reviewer databases to help editors come up with a better diversity of names.
2) More editorial control: My next two suggestions are not going to make me popular with either authors or editors. And I know (if they got implemented) I would occasionally get hoist with my own petard, but I strongly believe that, with the demands journals are making on reviewers these days (thorough reviews, lots of reviews, quick reviews), journals have a responsibility to protect reviewers from superfluous reviews (i.e., unnecessary review requests).
a) Better pre-review vetting. Many authors will hate this, because it means one person is probably deciding whether or not to send something out for review. A bad draw on an editor (who has a strong personal opinion on the validity/novelty of your work) can kill your submission. However, I am not alone in having received manuscripts for review that are so poorly written that they are in effect incomprehensible, or so far from the journal’s standard that clearly no editor looked at the manuscript before sending it on to me. I’m not talking about borderline cases but manuscripts so bad I barely know how to review them. As a reviewer this just makes me mad and takes up valuable time that could have been dedicated to a manuscript that actually deserved consideration. As the Chronicle post points out, manuscripts do not have a fundamental right to be reviewed.
b) Stop looking for reviewer consensus. I have noticed a trend at certain journals: manuscripts keep being sent back to the reviewer until the reviewer “signs off” on the manuscript. This is consistent with the idea in the Ecology Letters article that authors are needlessly lengthening the review process by ignoring reviewer comments. As much as we may all wish otherwise, not all reviewer comments reflect absolute truth. We all have our opinions on things that (if we’re being honest with ourselves) actually are in gray areas. Sometimes reviewers just flub things. And, journals are right, sometimes reviewers give shoddy reviews. As both a reviewer and an author I recognize this. As a reviewer, I assume the editor will read my review (and the paper) and decide for themselves whether they agree with my opinion. As an author, I assume that the editor will read my response to a reviewer and decide whether my objections to a certain critique have merit. As a reviewer, the only time I want to re-review a paper is if I have labeled my concern as “fatal” and the editor is uncertain whether the authors have either dealt with that concern or made a valid argument for why it is not a concern. In a world where reviewers are scarce, manuscripts should only go back to reviewers when absolutely necessary. This requires editors to insert themselves more into the process than perhaps they have been accustomed to.
Maybe journals and editors already feel like they do these things. I don’t know. I do know I feel like I already do the things they want me as a reviewer to do! However, given how widespread concern over the strain on the peer-review process is, it seems to me that perhaps it’s time for a real dialogue – and that involves both sides talking about their perspectives and making suggestions about how to improve things. Anyone out there have additional ideas for things that could be done?
A couple of weeks ago we made it possible for folks to subscribe to JE using email. We did this because we realized that many scientists, even those who are otherwise computationally savvy, really haven’t embraced feed readers as a method of tracking information. When I wrote that post I promised to return with an argument for why you should start using a feed reader instead – so here it is. If anyone is interested in a more instructional post about how to do this then let us know in the comments.
The main argument
I’m going to base my argument on something that pretty much all practicing scientists do – keeping track of the current scientific literature by reading Tables of Contents (TOCs). Back in the dark ages the only way to get these TOCs was to either have a personal subscription to the journal or to leave the office and walk the two blocks to the library (I wonder if anyone has done a study on scientists getting fatter now that they don’t have to go to the library anymore). About a decade ago (I’m not really sure when, but this seems like it’s in the right ballpark) journals started offering email subscriptions to their TOCs. Every time a new issue was published you’d receive an email that included the titles and authors of each contribution and links to the papers (once the journal actually had the papers online of course). This made it much easier to keep track of the papers being published in a wide variety of journals by speeding up the process of determining if there was anything of interest in a given issue. While the increase in convenience of using a feed reader may not be on quite the same scale as that generated by the email TOCs, it is still fairly substantial.
The nice thing about feed readers is that they operate one item at a time. So, instead of receiving one email with 10-100 articles in it, you receive 10-100 items in your feed reader. This leads to the largest single advantage of feeds over email for tracking TOCs: you only need to process one article at a time. Just think about the last time you had 5 minutes before lunch and you decided to try to clear an email or two out of your inbox. You probably opened up a TOC email and started going through it top to bottom. If you were really lucky then maybe there were only a dozen papers, none of them were of interest, and you could finish going through the email and delete it. Most of the time, however, there are either too many articles or you want to look at at least one, so you go to the website, read the abstract, maybe download the paper, and the next thing you know it’s time for lunch and you haven’t finished going through the table of contents, so it continues to sit in your inbox. Then, of course, by the time you get back to it you probably don’t even remember where you left off and you basically have to start back at the beginning again. I don’t know about you, but this process typically resulted in my having dozens of emailed TOCs lying around my inbox at any one time.
With a feed reader it’s totally different. If you have five minutes you start going through the posts for individual articles one at a time, and you can often clear out 5 or 10 articles (or even 50 if the feed is well tagged, like PNAS’s feed), which means that you can use your small chunks of free time much more effectively for keeping up with the literature. In addition, all major feed readers allow you to ‘star’ posts – in other words, you can mark them in such a way that you can go back to them later and look at them in more detail. So, instead of the old system – where if you were interested in looking at a paper you had to stop going through the table of contents, go to the website, decide from the abstract if you wanted to actually look at the paper, and then either download or print a copy of the paper to look at later – with a feed reader you achieve the same thing with a one-second click. This means that you can often go through a fairly large TOC in less than 10 minutes.
Of course much of this utility depends on the journals actually providing feeds that include all of the relevant information.
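To make that concrete, here is a rough sketch of what a feed reader does with a journal TOC feed, using only the standard library. The feed snippet is invented for illustration; real journal feeds vary widely in which of these fields they actually populate, which is exactly the point about feeds needing to include all the relevant information.

```python
import xml.etree.ElementTree as ET

# Invented example of an RSS 2.0 TOC feed with one article item.
SAMPLE_FEED = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example Journal: Table of Contents</title>
    <item>
      <title>A hypothetical paper on reviewer effort</title>
      <link>https://example.org/articles/1</link>
      <description>Authors: A. Nonymous and S. Omeone</description>
    </item>
  </channel>
</rss>"""

def list_articles(feed_xml):
    """Return (title, link) pairs, one per article item in the feed."""
    root = ET.fromstring(feed_xml)
    return [(item.findtext("title"), item.findtext("link"))
            for item in root.iter("item")]
```

Each `<item>` becomes one entry in your reader, which is why you can process articles one at a time instead of wading through a monolithic email.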
Keeping your TOCs and other feeds outside of your email allows for greater separation of different aspects of online communication. If you monitor your email fairly continuously, the last thing you need is to receive multiple TOC emails each day that could distract you from actually getting work done. Having a separate feed reader lets you decide when you want to look at this information (like in those 5-minute gaps before lunch, or at the end of the day when you’re too brain-dead to do anything else).
Now that journals post many of their articles online as soon as the proofs stage is complete, it can be advantageous to know about these articles as soon as they are available. Most journal feeds do exactly this, posting a few papers at a time as they are uploaded to the online-early site.
Sharing – want to tell your friends about a cool paper you just read? You could copy the link, open a new email, paste the link, and then send it on to them. Or, you could accomplish this with a single click (NB: this technology is still developing and varies among feed readers).
And then of course there are blogs
I’ve attempted to appeal to our non-feedreader-readers by focusing on a topic that they can clearly identify with. That said, the world of academic communication is rapidly expanding beyond the walls of the journal article. Blogs play an increasingly important role in scientific discourse, and if you’re going to follow blogs you really need a feed reader. Why? Because while some blogs update daily (e.g., most of the blogs over at ScienceBlogs), many good blogs update on average once a week, or once a month. You don’t want to have to check the webpage of one of these blogs every day just to see if something new has been posted, so subscribe to its feed, kick back, and let the computer tell you what’s going on in the world.
I’d recommend checking out this post by River Continua about an impressively sophisticated phishing scam targeted at academics. They’re going to catch a bunch of folks with this one.
UPDATE: Apparently this is something the EPA actually does, which the EPA employee who wrote the original post was unaware of. They definitely need to rethink the composition of the email, though, as I would have been (and obviously was) equally suspicious.