I’ve recently started reading two scientific programming blogs that I think are well worth paying attention to, so I’m blogrolling them and offering a brief introduction here.
Serendipity is Steve Easterbrook’s blog about the interface between software engineering and climate science. Steve has a realistic and balanced viewpoint regarding the reality of programming in scientific disciplines. The blog is well written, insightful, etc., but I think the thing that really won me over was his sharp-witted responses to the periodically asinine comments he receives. For example:
I’d care a lot less about seeing all the source and data if I could just ignore climate scientists and shop elsewhere. But since I’m expected to hand over $$$ and change my lifestyle because of this research, your arguments ring hollow…
[You can shop elsewhere – there are thousands of climate scientists across the world. If you don’t like the CRU folks, go to any one of a large number of climate science labs elsewhere (start here: http://www.realclimate.org/index.php/data-sources/). An analogy: Imagine your doctor told you that you have to change your eating habits, or your heart is unlikely to last out the year. You would go and get a second opinion from another doctor. And maybe a third. But when every qualified doctor tells you the same thing, do you finally accept their advice, or do you go around claiming that all doctors are corrupt? – Steve]
Software Carpentry is the sister blog to an excellent online (and occasionally in-person) course on basic software development for scientists. I strongly recommend the course to anyone interested in getting more serious about their programming, and the blog is a nice complement, pointing readers to other resources and discussions related to scientific programming.
A couple of weeks ago we made it possible for folks to subscribe to JE using email. We did this because we realized that many scientists, even those who are otherwise computationally savvy, really haven’t embraced feed readers as a method of tracking information. When I wrote that post I promised to return with an argument for why you should start using a feed reader instead – so here it is. If anyone is interested in a more instructional post about how to do this, let us know in the comments.
The main argument
I’m going to base my argument on something that pretty much all practicing scientists do – keeping track of the current scientific literature by reading Tables of Contents (TOCs). Back in the dark ages the only way to get these TOCs was either to have a personal subscription to the journal or to leave the office and walk the two blocks to the library (I wonder if anyone has done a study on scientists getting fatter now that they don’t have to go to the library anymore). About a decade ago (I’m not really sure when, but this seems like it’s in the right ballpark) journals started offering email subscriptions to their TOCs. Every time a new issue was published you’d receive an email that included the titles and authors of each contribution and links to the papers (once the journal actually had the papers online, of course). This made it much easier to keep track of the papers being published in a wide variety of journals by speeding up the process of determining whether there was anything of interest in a given issue. While the increase in convenience from using a feed reader may not be on quite the same scale as that generated by the email TOCs, it is still fairly substantial.
The nice thing about feed readers is that they operate one item at a time. So, instead of receiving one email with 10-100 articles in it, you receive 10-100 items in your feed reader. This leads to the largest single advantage of feeds over email for tracking TOCs: you only need to process one article at a time. Just think about the last time you had 5 minutes before lunch and you decided to try to clear an email or two out of your inbox. You probably opened up a TOC email and started going through it top to bottom. If you were really lucky then maybe there were only a dozen papers, none of them were of interest, and you could finish going through the email and delete it. Most of the time, however, there are either too many articles or you want to look at at least one, so you go to the website, read the abstract, maybe download the paper, and the next thing you know it’s time for lunch and you haven’t finished going through the table, so it continues to sit in your inbox. Then, of course, by the time you get back to it you probably don’t even remember where you left off and you basically have to start at the beginning again. I don’t know about you, but this process typically resulted in my having dozens of emailed TOCs lying around my inbox at any one time.
With a feed reader it’s totally different. You go through the posts for individual articles one at a time, so in five minutes you can often clear out 5 or 10 articles (or even 50 if the feed is well tagged, like PNAS’s feed), which means that you can use your small chunks of free time much more effectively for keeping up with the literature. In addition, all major feed readers allow you to ‘star’ posts – in other words, you can mark them in such a way that you can go back to them later and look at them in more detail. So, instead of the old system – where, if you were interested in a paper, you had to stop going through the table of contents, go to the website, decide from the abstract whether you actually wanted to look at the paper, and then either download or print a copy to look at later – with a feed reader you achieve the same thing with a one-second click. This means that you can often get through a fairly large TOC in less than 10 minutes.
Of course, much of this utility depends on the journals actually providing feeds that include all of the relevant information.
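For the curious, there’s no magic here: a journal’s TOC feed is just an RSS (or Atom) XML document with one entry per article, which is exactly why a feed reader can present articles one at a time. Here’s a minimal sketch in Python (standard library only, with a made-up two-article feed – the journal name and URLs are hypothetical) of what a reader does when it parses one:

```python
import xml.etree.ElementTree as ET

# A tiny, hypothetical journal TOC feed: one <item> per article.
rss = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Journal of Examples</title>
    <item>
      <title>On the ecology of widgets</title>
      <link>http://example.org/articles/1</link>
    </item>
    <item>
      <title>A second paper</title>
      <link>http://example.org/articles/2</link>
    </item>
  </channel>
</rss>"""

root = ET.fromstring(rss)

# Each <item> is a separate, self-contained entry, so the reader can
# show (and let you star or dismiss) each article independently.
articles = [
    (item.findtext("title"), item.findtext("link"))
    for item in root.iter("item")
]

for title, link in articles:
    print(f"{title}: {link}")
```

In practice a reader fetches the feed over HTTP and remembers which items you’ve already seen, but the per-article structure above is what makes the one-at-a-time workflow possible.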
Keeping your TOCs and other feeds outside of your email allows for greater separation of different aspects of online communication. If you monitor your email fairly continuously, the last thing you need is multiple TOC emails each day distracting you from actually getting work done. Having a separate feed reader lets you decide when you want to look at this information (like in those five-minute gaps before lunch, or at the end of the day when you’re too brain-dead to do anything else).
Now that journals post many of their articles online as soon as the proofs stage is complete, it can be advantageous to know about these articles as soon as they are available. Most journal feeds do exactly this, posting a few papers at a time as they are uploaded to the online-early site.
Sharing – want to tell your friends about a cool paper you just read? You could copy the link, open a new email, paste the link, and send it on to them. Or you could accomplish the same thing with a single click (NB: this technology is still developing and varies among feed readers).
And then of course there are blogs
I’ve attempted to appeal to our non-feed-reader readers by focusing on a topic that they can clearly identify with. That said, the world of academic communication is rapidly expanding beyond the walls of the journal article. Blogs play an increasingly important role in scientific discourse, and if you’re going to follow blogs you really need a feed reader. Why? Because while some blogs update daily (e.g., most of the blogs over at ScienceBlogs), many good blogs update at an average rate of once a week or once a month. You don’t want to have to check the webpage of one of these blogs every day just to see if something new has been posted, so subscribe to its feed, kick back, and let the computer tell you what’s going on in the world.
Nathan over at Flowing Data just posted an interesting piece on the emergence of a new class of scientists whose work focuses on the manipulation, analysis, and presentation of data. The take-home message is that in order to fully master the ability to understand and communicate patterns in large quantities of data, one needs to have some ability in:
- Computer science – for acquiring, managing and manipulating data
- Mathematics and Statistics – for mining and analyzing data
- Graphic design and Interactive interface design – to present the results of analyses in an easy to understand manner and encourage interaction and additional analysis by less technical users
His point is that while one could get together a group of people (one with each of these skills) to undertake this kind of task, the challenges of cross-disciplinary collaboration can slow down progress (or even prevent it entirely). As such, there is a need for individuals with at least some experience in several of these fields to help facilitate the process. I think this is a good model for this kind of work in ecology, though given the already extensive multidisciplinarity required in the field, I view this role as one occupied by only a fairly small fraction of folks.
The other thing that I really liked about this post (and about Flowing Data’s broader message) is the focus on the end user. The goal is to make ideas and tools available to the broadest possible audience, and too often the more technical folks in the biological sciences seem to forget that their goal should be to make things easy to understand and simple for non-technical users to use. This is undoubtedly a challenging task, but one that we should work to accomplish whenever possible.