Category Archives: publishing
We have all bemoaned the increasing difficulty of keeping up with the growing body of literature. Many of us, me included, have been relying increasingly on following only a subset of journals, but with the growing popularity of the large open-access journals I for one am increasingly likely to miss papers. The purpose of this post isn’t to offer a panacea (sadly, I don’t think one exists, though I have hopes that someone will come up with something viable in the future). The purpose of this post is to let you know about an interesting addition, or alternative (for the brave), to the frantic scanning of tables of contents and RSS feeds: Google Scholar.
Almost everyone at this point knows you can go to Google Scholar, search for keywords, and get a list of papers. But did you also know that you can set up a Google Scholar profile with your published articles, and that Google can use it to find articles that might be of interest to you? How does it do that? I’ll have to quote Google’s blog because it’s a little like voodoo to me (obviously this is Morgan writing this post, not Ethan): “We determine relevance using a statistical model that incorporates what your work is about, the citation graph between articles, the fact that interests can change over time, and the authors you work with and cite.” When you go to Google Scholar’s homepage (and you’re logged in as you) it will notify you if there are new articles on your suggested list. I have been pleasantly surprised by the articles it has identified for me, including some book chapters I would never have seen. For example, here are several things that sound really interesting to me but that I would never have come across otherwise:
MC Emmerson – Marine Biodiversity and Ecosystem Functioning: …, 2012 – books.google.com
A Potochnik, B McGill – Philosophy of Science, 2012 – JSTOR
D West, J BRUCE – International Journal of Modern Physics B, 2012 – World Scientific
It doesn’t just cover published journal articles; for example, there are preprints from arXiv and government reports on my list. I don’t know if this would work as well for young graduate students and postdocs, since it uses the citations in your existing papers and our junior colleagues give Google less data to work with. However, once you have a profile you can also follow other people who have profiles, which means you get an email every time scholarly work is added to their profile. Are you a huge Simon Levin groupie? Follow him and you can get an email alerting you every time a paper is added to his profile. I also use this to follow a bunch of interesting younger people, because they often publish less frequently or in journals I don’t happen to follow, and this way I don’t miss their stuff when my Google Reader hits 1000+ articles to be perused! You can also sign up for alerts when someone you follow has their work cited (and you can set up alerts for when your own work gets cited as well).
As I said before, I don’t think Google Scholar is a one-stop literature monitoring shop (yet), but I find it useful for getting me out of my high-impact-factor monitoring rut. The only thing you need to do is set up your Google Scholar profile, and the only reason not to do that is if you’re worried it’ll give Google the edge when it finally becomes self-aware and renames itself Skynet (ha ha ha ha….hmmm).
Over the weekend I saw this great tweet:
by Philippe Desjardins-Proulx and was pleased to see yet another actively open young scientist. Then I saw his follow up tweet:
At first I was confused. I thought ESA’s policy was that preprints were allowed, based on the following text on their website (emphasis mine; still available in Google’s cache):
A posting of a manuscript or thesis on a personal or institutional homepage or ftp site will generally be considered as a preprint; this will not be grounds for viewing the manuscript as published. Similarly, posting of manuscripts in public preprint archives or in an institution’s public archive of unpublished theses will not be considered grounds for declaring a manuscript published. If a manuscript is available as part of a digital publication such as a journal, technical series or some other entity to which a library can subscribe (especially if that publication has an ISSN or ISBN), we will consider that the manuscript has been published and is thus not eligible for consideration by our journals. A partial test for prior publication is whether the manuscript has appeared in some entity with archival value so that it is permanently available to reasonably diligent scholars. A necessary test for prior publication is whether the author can legally transfer copyright to ESA.
So I asked Philippe to explain his tweet:
This got me a little riled up so I broadcast my displeasure:
And then Jarrett Byrnes questioned where this was coming from given the stated policy:
So I emailed ESA to check and, sure enough, preprints on arXiv and similar preprint servers are considered prior publication and therefore cannot be submitted to ESA journals, despite the fact that this isn’t a problem for a few journals you may have heard of including Science, Nature, PNAS, and PLoS Biology. ESA (to their credit) has now clarified this point on their website (emphasis mine; thanks to Jaime Ashander for the heads up):
A posting of a manuscript or thesis on an author’s personal or home institution’s website or ftp site generally will not be considered previous publication. Similarly posting of a “working paper” in an institutional repository is allowed so long as at least one of the authors is affiliated with that institution. However, if a manuscript is available as part of a digital publication such as a journal, technical series, or some other entity to which a library can subscribe (especially if that publication has an ISSN or ISBN), we will consider that the manuscript has been published and is thus not eligible for consideration by our journals. Likewise, if a manuscript is posted in a citable public archive outside the author’s home institution, then we consider the paper to be self-published and ineligible for submission to ESA journals. Finally, a necessary test for prior publication is whether the author can legally transfer copyright to ESA.
In my opinion the idea that a preprint is “self-published” and therefore represents prior publication is poorly justified* and not in the best interests of science, and I’m not the only one:
So now I’m hoping that Jarrett is right:
and that things might change (and hopefully soon). If you know someone on the ESA board, please point them in the direction of this post.
UPDATE: Just as I was finishing working on this post ESA responded to the tweet stream from the last few days:
I’m very excited that ESA is reviewing their policies in this area. As I should have said in the original post, up until this year I had been quite impressed with ESA’s generally open, and certainly pro-science, policies. This last year or so has been a bad one, but I’m hoping that’s just a lag in adjusting to the new era in scientific publishing.
UPDATE 2: ESA has announced that they have changed their policy and will now consider articles with preprints.
—————————
* I asked ESA if they wanted to clarify their justification for this policy and haven’t heard back (though it has been less than 2 days). If they get back to me I’ll update or add a new post.
It’s that time of year again when the new Impact Factor values are released. This is such a big deal to a lot of folks that it’s pretty hard to avoid hearing about it. We’re not the sort of folks who object to the use of impact factors in general: we are scientists, after all, and part of being a scientist is quantifying things. However, if we’re going to quantify things it is incumbent upon us to try to do it well, and there are several things we need to address if we are going to have faith in our measures of journal quality.
1. Stop using the impact factor; use Eigenfactor-based metrics instead
The impact factor simply counts the citations to each paper a journal has published and takes the average. This might have been a decent approach when the IF was first invented, but it’s a terrible approach now. The problem is that, according to network theory and some important applications thereof (e.g., Google), it also matters how important the papers/journals doing the citing are. Fortunately we now have metrics that do this properly: the Eigenfactor and the associated Article Influence Score. These are even reported by ISI right next to the IF.
Here’s a quick way to think about this. You have two papers: one that has been cited 30 times by papers that are never cited, and one that has been cited 30 times by papers that are themselves each cited 30 times. If you think the two papers are equally important, then please continue using impact-factor-based metrics. If you think the second paper is more important, then please never mention the words “impact factor” again and start focusing on better approaches to quantifying the influence of nodes in a network.
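That intuition is exactly what network-aware metrics capture. Here is a toy sketch of my own (it is not the actual Eigenfactor algorithm, which aggregates to the journal level and handles many details differently; the citation network and damping value are invented for illustration) showing how a PageRank-style score separates two papers that a raw citation count cannot tell apart:

```python
import numpy as np

def influence_scores(adj, damping=0.85, iters=100):
    """PageRank-style scores: citations from influential papers
    count for more than citations from obscure ones.
    adj[i, j] = 1 means paper i cites paper j."""
    n = adj.shape[0]
    out = adj.sum(axis=1, keepdims=True)
    # Each paper splits its 'vote' evenly among the papers it cites
    transition = np.divide(adj, out, out=np.zeros_like(adj), where=out > 0)
    scores = np.full(n, 1.0 / n)
    for _ in range(iters):
        scores = (1 - damping) / n + damping * (scores @ transition)
    return scores / scores.sum()

# Papers 0 and 1 are each cited twice, so a raw count (the impact
# factor's view) can't distinguish them. But paper 1's citers are
# themselves well cited, while paper 0's citers are never cited.
adj = np.zeros((10, 10))
adj[2, 0] = adj[3, 0] = 1   # uncited papers cite paper 0
adj[4, 1] = adj[5, 1] = 1   # well-cited papers cite paper 1
adj[6, 4] = adj[7, 4] = adj[8, 5] = adj[9, 5] = 1

raw_counts = adj.sum(axis=0)
scores = influence_scores(adj)
print(raw_counts[0] == raw_counts[1])  # True: raw counts are tied
print(scores[1] > scores[0])           # True: the network score is not
```

Both papers have two citations, but the network-aware score ranks paper 1 higher because its citers carry weight of their own.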
2. Separate reviews (and maybe methods) from original research
We’ve known pretty much forever that reviews are cited more than original research papers, so it doesn’t make sense to compare review journals to non-review journals. While it’s easy to say that TREE and Ecology are apples and oranges, the real problem is journals that mix reviews and original research. Since reviews are more highly cited, just changing the mix of these two article types can manipulate the impact factor. Sarah Supp and I have a paper on this if you’re interested in seeing some science and further commentary on the issue. The answer is easy: separate the analyses for review papers. It has also been suggested that methods papers have higher citation rates, but as I admit in my back and forth with Bob O’Hara (the relevant part of which is still awaiting moderation as I’m posting), there doesn’t seem to be any actual research to back this up.
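A quick back-of-the-envelope sketch of the manipulation (the citation counts here are invented purely for illustration):

```python
# Hypothetical citation counts: reviews tend to be cited far more than
# primary research, so shifting the article mix shifts the average.
research = [3] * 20   # 20 research papers, 3 citations each
reviews = [15] * 5    # 5 review papers, 15 citations each

def mean_citations(papers):
    """An impact-factor-style average: total citations / total papers."""
    return sum(papers) / len(papers)

print(mean_citations(research))            # 3.0 with research alone
print(mean_citations(research + reviews))  # 5.4 after adding a few reviews
```

Nothing about the underlying research changed; only the article mix did, and the average nearly doubled.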
3. Solve the problem of metrics that are strongly influenced by the number of papers
In the citation analysis of individual scientists there has always been the problem of how to deal with the number of papers. The total number of citations isn’t great since one way to get a large number of citations is to write a lot of not particularly valuable papers. The average number of citations per paper is probably even worse because no one would argue that a scientist who writes a single important paper and then stops publishing is contributing maximally to the progress of science.
In journal-level citation analyses these two endpoints have, until recently, been all we had, with ISI choosing to focus on the average number of citations per paper and Eigenfactor on the total number of citations. The problem is that these approaches encourage journals to game the system by publishing either the most or the fewest papers possible. Since the issues with publishing too many papers are obvious, I’ll focus on the issue of publishing too few. Assuming that journals have the ability to predict the impact of individual papers, the best way to maximize per-article measures like the impact factor is to publish as few papers as possible, since adding additional papers simply dilutes the average citation rate. But by doing so the journal is choosing to have less influence on the field (by adding more, largely equivalent-quality, papers) in favor of having a higher perceived impact. Think about it this way: is a journal that publishes a total of 100 papers that are cited 5 times each really more important than a journal that publishes 200 papers, 100 of which are cited 5 times each and 100 of which are cited 4 times each? I think the second journal is more important, and that’s why I’m glad to see that Google Scholar is focusing on the kinds of integrative metrics (like the h-index) that we use to evaluate individual researchers.
The good news is that we do have better metrics that are available right now. The first thing we should do is start promoting those instead of the metric that shall not be named. We should also think about improving these metrics further; if they’re worth talking about, they are worth improving. I’d love to see a combination of the network approaches in Eigenfactor with Google’s approaches to solving the number-of-publications problem. Of course, more broadly, we are already in the process of moving away from journal-level metrics and focusing more on the impact of individual papers. I personally prefer this approach and think that it’s good for science, but I’ll leave my thoughts on that for another day.
UPDATE 2: Fixed the broken link to the “Why Eigenfactor?” page.
Both sets of metrics actually include both approaches, with total citations available from ISI and the Article Influence Score being the per-paper equivalent of the Eigenfactor; it’s just that these don’t seem to get as much… um… attention.
And if they didn’t, then all we’re measuring is how well different journals game the system, plus some positive feedback where journals that are known to be highly cited garner more readers and therefore more future citations.
People find blog posts in different ways. Some visit the website regularly, some subscribe to email updates, and some subscribe using the blog’s feed. Feeds can be a huge time saver for processing the ever-increasing amount of information that science generates, because they place much of that information in a single place in a simple, standardized format. A feed also lets you consume one piece of information at a time and keeps your inbox relatively free of clutter (for more about why using a feed reader is awesome see this post).
When setting up their feeds, bloggers can choose to provide either the entire content of each post or just a teaser containing its first few sentences. In this post I am going to argue that science bloggers should choose to provide full posts.
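Mechanically, the difference is whether each feed item carries the whole post body or only a short description. A minimal sketch (the feed content here is hypothetical; real feeds vary) using the common RSS convention of putting the full text in a `content:encoded` element:

```python
import xml.etree.ElementTree as ET

# A made-up two-item feed: one teaser-style item, one full-content item.
RSS = """<rss version="2.0"
  xmlns:content="http://purl.org/rss/1.0/modules/content/">
 <channel>
  <item>
   <title>Teaser post</title>
   <description>The first few sentences, then a link...</description>
  </item>
  <item>
   <title>Full post</title>
   <description>The first few sentences, then a link...</description>
   <content:encoded>The entire post body, readable in place
     in any feed reader.</content:encoded>
  </item>
 </channel>
</rss>"""

NS = {"content": "http://purl.org/rss/1.0/modules/content/"}

def feed_styles(rss_text):
    """Map each item's title to True if it ships the full post body."""
    styles = {}
    for item in ET.fromstring(rss_text).iter("item"):
        styles[item.findtext("title")] = (
            item.find("content:encoded", NS) is not None
        )
    return styles

print(feed_styles(RSS))  # {'Teaser post': False, 'Full post': True}
```

A feed reader renders the first item as a snippet plus a link, and the second in full, right in the reader, which is the whole argument of this post in two XML elements.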
The core reason is that we are doing this to facilitate scientific dialog, and we are all very busy. In addition to the usual academic workload of teaching, doing research, and helping our departments and universities function, we are now dealing with keeping up with a rapidly expanding literature plus a bloom of scientific blogs, tweets, and status updates (and oh yeah, some of us even have personal lives). This means that we are consuming a massive amount of information on a daily basis and we need to be able to do so quickly. I squeeze this in during small windows of time (bus rides home, gaps between meetings, while I’m running my toddler’s bath) and often on a mobile device.
I can do this easily if I have full feeds: I open my feed reader, open the first item, read it, and move on to the next one. My brain knows exactly what format to expect, cognitive load is low, and the information is instantly available. If instead I encounter a teaser, I first have to make a conscious decision about whether or not I want to click through to the actual post, then I have to hit the link, wait for the page to load (which can still take a fairly long time on a phone), adjust to a format that varies widely across blogs, often adjust the zoom and rotate my screen (if I’m reading on my phone), read the item, and then return to my reader. This might not seem like a huge deal for a handful of items, but multiply the lost time by a few hundred or a few thousand items a week and it adds up in a hurry. On top of that, I store and tag full-text, searchable copies of posts for all of the blogs I follow in my feed reader so that I can find posts again. This is handy when I remember there is a post I want to either share with someone or link to, but can’t remember who wrote it.
So, if your blog doesn’t provide full feeds, this means three things. First, I am less likely to read a post: a teaser costs me extra time, so the threshold for how interesting it needs to be goes up. Second, if I do read it, I now have less time to do other things. Third, if I want to find your post again to recommend it to someone or link to it, my chances of doing so successfully go down. So, if your goal is science communication, or even just not being disrespectful of your readers’ time, full feeds are the way to go.
This all goes for journal tables of contents as well. As I’ve mentioned before, if the journal feed doesn’t include the abstracts and the full author line, it is just costing the papers readers, and the journal’s readers time, and therefore making the scientific process run more slowly than it could.
So, bloggers and journal editors, for your readers’ sake, for science’s sake, please turn on full feeds. It will take you only two minutes. It will save science hundreds of hours. It will probably be the most productive thing you do for science all week.
I have, for a while, been frustrated and annoyed by the behavior of several of the large for-profit publishers. I understand that their motivations are different from my own, but I’ve always felt that an industry that relies entirely on both large amounts of federal funding (to pay scientists to do the research and write up the results) and a massive volunteer effort to conduct peer review (the scientists again) needed to strike a balance between the needs of the folks doing all of the work and the corporations need to maximize profits.
Despite my concerns about the impacts of increasingly closed journals, with increasingly high costs, on the dissemination of research and the ability of universities to support their core missions of teaching and research, I have continued to volunteer my time and effort as a reviewer to Elsevier and Wiley-Blackwell. I did this because I have continued to see valuable contributions made by these journals and I felt that this combined with the contribution that I was making to science by helping improve the science published in high profile places made supporting these journals worthwhile. I no longer believe this to be the case and from now on I will no longer be reviewing for any journal that is published by Elsevier, Springer, or Wiley-Blackwell (including society journals that publish through them).
Why have I changed my mind? Because of the pursuit/support by these companies of the Research Works Act. This act seeks to prevent funding agencies from requiring that the results of research that they funded be made publicly available. In other words it seeks to prevent the government (and the taxpayers that fund it), which pays for a very large fraction of the cost of any given paper through both funding the research and paying the salaries of reviewers and editors, from having any say in how that research is disseminated. I think that Mike Taylor in the Guardian said most clearly how I feel about this attempt to exert legislative control requiring us to support corporate profits over the dissemination of scientific research:
Academic publishers have become the enemies of science
This is the moment academic publishers gave up all pretence of being on the side of scientists. Their rhetoric has traditionally been of partnering with scientists, but the truth is that for some time now scientific publishers have been anti-science and anti-publication. The Research Works Act, introduced in the US Congress on 16 December, amounts to a declaration of war by the publishers.
You should read the entire article. It’s powerful. There are lots of other great articles about the RWA, including Michael Eisen in the New York Times, a nice post by INNGE, and an interesting piece by Paul Krugman (via oikosjeremy). I’m also late to the party in declaring my peer review strike, and less eloquent than many of my peers in explaining why (see great posts by Michael Taylor, Gavin Simpson, and Timothy Gowers). But I’m here now, and I’m letting you know so that you can consider whether or not you also want to stop volunteering for companies that don’t have science’s best interests in mind.
If you’d like to read up on the publishers’ side of this argument (they have costs, they have a right to recoup them), you can see Springer’s official position or an Elsevier exec’s exchange with Michael Eisen. My problem with all of these arguments is that there is nothing in any funding agency’s policy that requires publishers to publish work funded by that agency. This is not (as Springer has argued) an “unfunded mandate”; this is a stakeholder that has certain requirements related to the publication of research in which it has an interest. This is just like an author (in any non-academic publishing situation) negotiating with a publisher. If the publisher doesn’t like the terms the author demands, then they don’t have to publish the book. Likewise, if a publisher doesn’t like the NIH policy then they should simply not agree to publish NIH-funded research.
To be clear, I am not as extreme in my position as some. I still support and will review for independent society journals like Ecology and American Naturalist even though they aren’t Open Access and even though ESA has made some absurd comments in support of the same ideas that are in RWA. The important thing for me is that these journals have the best interests of science in mind, even if they are often frustratingly behind the times in how they think and operate.
And don’t worry, I’ve still got plenty of journal related work to keep me busy, thanks to my new position on the editorial board at PLoS ONE.
UPDATE: The links to the INNGE and Timothy Gowers post have now been fixed, and here are links to a couple of great posts by Casey Bergman that I somehow left out: one on how to turn down reviews while making a point and one on the not so positive response he received to one of these emails.
UPDATE 2: A great collection of posts on RWA. There are a lot of really unhappy scientists out there.
UPDATE 3: A formal Boycott of Elsevier. Almost 1000 scientists have signed on so far.
UPDATE 4: Wiley-Blackwell has now distanced itself from RWA and said that “We do not believe that legislative initiatives are the best way forward at this time and so have no plans to endorse RWA. Instead we believe that research funder-publisher partnerships will be more productive.” In addition, it was announced that a bill that would do the opposite of RWA has now been introduced. Hooray for collective action!
I logged into one of my reviewer accounts at a Wiley journal this morning and was greeted by a redirect that took me to a page with the following message:
We appreciate your involvement with this publication, which is published by a John Wiley & Sons company. The publisher would like to contact you by email/post with details of publications and services that may be of interest to you, specific to your subject area, from companies in the John Wiley & Sons group (only) worldwide. Your information will never be passed to any third party companies and as part of any communications you will be given the opportunity to unsubscribe from receiving further contact. Please indicate whether you wish to receive this information by answering the CONSENT question below.
Asking someone who is already working for you for free if it’s OK to also try to sell them stuff while they’re doing it seems like a pretty good definition of classless to me.
The last week has been an interesting one for academic publishing. First, a 24-year-old programmer named Aaron Swartz was arrested for allegedly breaking into MIT’s network and downloading 5 million articles from JSTOR. Given his background, it has been surmised that he planned on making the documents publicly available. He faces up to 35 years in federal prison.
In response to the arrest, Gregory Maxwell, a “technologist” and hobbyist scientist, uploaded nearly 20,000 JSTOR articles from the Philosophical Transactions of the Royal Society to The Pirate Bay, a bittorrent file sharing site infamous for facilitating the illegal sharing of music and movies. As explanation for the upload, Maxwell posted a scathing, and generally trenchant, critique of the current academic publishing system, which I am going to reproduce here in its entirety so that those uncomfortable with, or blocked from, visiting The Pirate Bay can read it. In it he notes that since all of the articles he posted were published prior to 1923, they are all in the public domain.
This archive contains 18,592 scientific publications totaling 33GiB, all from Philosophical Transactions of the Royal Society and which should be available to everyone at no cost, but most have previously only been made available at high prices through paywall gatekeepers like JSTOR. Limited access to the documents here is typically sold for $19 USD per article, though some of the older ones are available as cheaply as $8. Purchasing access to this collection one article at a time would cost hundreds of thousands of dollars. Also included is the basic factual metadata allowing you to locate works by title, author, or publication date, and a checksum file to allow you to check for corruption. I've had these files for a long time, but I've been afraid that if I published them I would be subject to unjust legal harassment by those who profit from controlling access to these works. I now feel that I've been making the wrong decision. On July 19th 2011, Aaron Swartz was criminally charged by the US Attorney General's office for, effectively, downloading too many academic papers from JSTOR. Academic publishing is an odd system - the authors are not paid for their writing, nor are the peer reviewers (they're just more unpaid academics), and in some fields even the journal editors are unpaid. Sometimes the authors must even pay the publishers. And yet scientific publications are some of the most outrageously expensive pieces of literature you can buy. In the past, the high access fees supported the costly mechanical reproduction of niche paper journals, but online distribution has mostly made this function obsolete. As far as I can tell, the money paid for access today serves little significant purpose except to perpetuate dead business models. The "publish or perish" pressure in academia gives the authors an impossibly weak negotiating position, and the existing system has enormous inertia. 
Those with the most power to change the system--the long-tenured luminary scholars whose works give legitimacy and prestige to the journals, rather than the other way around--are the least impacted by its failures. They are supported by institutions who invisibly provide access to all of the resources they need. And as the journals depend on them, they may ask for alterations to the standard contract without risking their career on the loss of a publication offer. Many don't even realize the extent to which academic work is inaccessible to the general public, nor do they realize what sort of work is being done outside universities that would benefit by it. Large publishers are now able to purchase the political clout needed to abuse the narrow commercial scope of copyright protection, extending it to completely inapplicable areas: slavish reproductions of historic documents and art, for example, and exploiting the labors of unpaid scientists. They're even able to make the taxpayers pay for their attacks on free society by pursuing criminal prosecution (copyright has classically been a civil matter) and by burdening public institutions with outrageous subscription fees. Copyright is a legal fiction representing a narrow compromise: we give up some of our natural right to exchange information in exchange for creating an economic incentive to author, so that we may all enjoy more works. When publishers abuse the system to prop up their existence, when they misrepresent the extent of copyright coverage, when they use threats of frivolous litigation to suppress the dissemination of publicly owned works, they are stealing from everyone else. Several years ago I came into possession, through rather boring and lawful means, of a large collection of JSTOR documents. These particular documents are the historic back archives of the Philosophical Transactions of the Royal Society - a prestigious scientific journal with a history extending back to the 1600s. 
The portion of the collection included in this archive, ones published prior to 1923 and therefore obviously in the public domain, total some 18,592 papers and 33 gigabytes of data. The documents are part of the shared heritage of all mankind, and are rightfully in the public domain, but they are not available freely. Instead the articles are available at $19 each--for one month's viewing, by one person, on one computer. It's a steal. From you. When I received these documents I had grand plans of uploading them to Wikipedia's sister site for reference works, Wikisource - where they could be tightly interlinked with Wikipedia, providing interesting historical context to the encyclopedia articles. For example, Uranus was discovered in 1781 by William Herschel; why not take a look at the paper where he originally disclosed his discovery? (Or one of the several follow on publications about its satellites, or the dozens of other papers he authored?) But I soon found the reality of the situation to be less than appealing: publishing the documents freely was likely to bring frivolous litigation from the publishers. As in many other cases, I could expect them to claim that their slavish reproduction - scanning the documents - created a new copyright interest. Or that distributing the documents complete with the trivial watermarks they added constituted unlawful copying of that mark. They might even pursue strawman criminal charges claiming that whoever obtained the files must have violated some kind of anti-hacking laws. In my discreet inquiry, I was unable to find anyone willing to cover the potentially unbounded legal costs I risked, even though the only unlawful action here is the fraudulent misuse of copyright by JSTOR and the Royal Society to withhold access from the public to that which is legally and morally everyone's property. 
In the meantime, and to great fanfare as part of their 350th anniversary, the RSOL opened up "free" access to their historic archives - but "free" only meant "with many odious terms", and access was limited to about 100 articles. All too often journals, galleries, and museums are becoming not disseminators of knowledge - as their lofty mission statements suggest - but censors of knowledge, because censoring is the one thing they do better than the Internet does. Stewardship and curation are valuable functions, but their value is negative when there is only one steward and one curator, whose judgment reigns supreme as the final word on what everyone else sees and knows. If their recommendations have value they can be heeded without the coercive abuse of copyright to silence competition. The liberal dissemination of knowledge is essential to scientific inquiry. More than in any other area, the application of restrictive copyright is inappropriate for academic works: there is no sticky question of how to pay authors or reviewers, as the publishers are already not paying them. And unlike 'mere' works of entertainment, liberal access to scientific work impacts the well-being of all mankind. Our continued survival may even depend on it. If I can remove even one dollar of ill-gained income from a poisonous industry which acts to suppress scientific and historic understanding, then whatever personal cost I suffer will be justified - it will be one less dollar spent in the war against knowledge. One less dollar spent lobbying for laws that make downloading too many scientific papers a crime. I had considered releasing this collection anonymously, but others pointed out that the obviously overzealous prosecutors of Aaron Swartz would probably accuse him of it and add it to their growing list of ridiculous charges. This didn't sit well with my conscience, and I generally believe that anything worth doing is worth attaching your name to.
I'm interested in hearing about any enjoyable discoveries or even useful applications which come of this archive.

----
Greg Maxwell
July 20th 2011
firstname.lastname@example.org
Bitcoin: 14csFEJHk3SYbkBmajyJ3ktpsd2TmwDEBb
These stories have been covered widely and the discussion has been heavy on Twitter and in the blogosphere. The important part of this discussion for academic publishing is that it has brought many of the absurdities of the current academic publishing system into the public eye, and a lot of people are shocked and unhappy. This is all happening at the same time that Britain is finally standing up to the big publishing companies as their profits and business models increasingly hamper rather than benefit the scientific process, and serious questions are being raised about whether we should be publishing in peer-reviewed journals at all. I suspect that we will look back on 2011 as the tipping-point year when academic publishing changed forever.
In an interview with Wired Campus, JSTOR claimed that these aren’t technically their articles: even though JSTOR did digitize these files, and each file includes an indication of JSTOR’s involvement, the files lack JSTOR’s cover page, so they’re not really JSTOR’s files, they’re the Royal Society’s files. Which first made me think “Wow, that’s about the lamest duck-and-cover excuse I’ve ever heard,” and then “Hey, so if I just delete the cover page off a JSTOR file, then apparently they surrender all claim to it. Nice!”
In addition to the questionable legality of the site, some of the advertising there isn’t exactly workplace appropriate.
I think that given the context he would be fine with us reprinting the entire statement. I’ve done some very minor cleaning up of some junk characters for readability. The original is available here.
 ~$120 million/year for Wiley and ~$1 billion/year for Reed Elsevier (source LibraryJournal.com).
We are pretty excited about what modern technology can do for science, and in particular about the potential for increasingly rapid sharing of, and collaboration on, data and ideas. That big picture is why we like to blog, tweet, and publish data and code, and we’ve benefited greatly from others who do the same. So, when we saw this great talk by Michael Nielsen about Open Science, we just had to share.
Thanks to an email from Jeremy Fox I just found out that Oikos has started a blog. It clearly isn’t on most folks’ radars (I represent 50% of its Google Reader subscribers), but Jeremy has been putting up some really interesting posts over there, so I thought it was worth a mention. According to Jeremy:
I view the Oikos blog as a place where the Oikos editors can try to do the sort of wonderful armchair ecology that John [Lawton] used to do in his ‘View From the Park’ column. I say ‘try’ because I doubt any of us could live up to John’s high standard (I’m sure I don’t!). I’m going to try to do posts that will be thought-provoking for students in particular. Oikos used to be the place to go with interesting, provocative ideas that were well worth publishing even if they were a bit off the wall or not totally correct. It’s our hope (well, my hope anyway) that this blog will become one way for Oikos to reclaim that niche.
I think they’re doing a pretty good job of accomplishing their goal, so go check out recent posts on the importance of hand waving and synthesizing ecology, and then think about subscribing to keep up on the new provocative things they’re up to.
There is an excellent post on open science, prestige economies, and the social web over at Marciovm’s posterous*. For those of you who aren’t insanely nerdy** GitHub is… well… let’s just call it a very impressive collaborative tool for developing and sharing software***. But don’t worry, you don’t need to spend your days tied to a computer or have any interest in writing your own software to enjoy gems like:
Evangelists for Open Science should focus on promoting new, post-publication prestige metrics that will properly incentivize scientists to focus on the utility of their work, which will allow them to start worrying less about publishing in the right journals.
*A blog I’d never heard of before, but I subscribed to its RSS feed before I’d even finished the entire post.
**As far as biologists go. And, yes, when I say “insanely nerdy” I do mean it as a compliment.