Create a calendar and block out time for you.
Sounds simple, and honestly a little stupid, but it’s the best advice I can give. Why?
When you start your job, or a semester, your calendar is empty. You have oodles and oodles of time for you to work on your science. However, you will quickly find that there are a lot of demands on your time. Students want meetings. Faculty want meetings. Collaborators want meetings. Administrators want meetings. You name a type of person (and some you suspect might not be people at all) and they will want you to do something for them. Many of these things will seem quick. The most dangerous words are “This will only take 5 minutes” (It won’t. I promise you, it never does). Next thing you know you have a week chopped up into meetings with only 15 minutes here or there of ‘free time’. That’s now the free time you have to do all that science you were planning on. Trust me, in the 15 minutes you’ll have between meetings and things you have to do to prepare for meetings, you won’t feel like working on your science.
This is where the calendar comes in. Schedule blocks of time during each week for research. Once on your calendar, these blocks are sacrosanct. You wouldn’t cancel that meeting with the Department Chair to meet with Fred down the hall to talk about the seminar committee. Your research is in many ways more important for your future than the Department Chair, so don’t cancel on it.
Why do you need the calendar? Why can’t you just tell yourself that every Monday morning you will focus on research? Because it doesn’t work. When someone comes and asks for a meeting and you think about your week, Monday morning will be ‘free’ in your head. The easiest thing in the world is to fill that ‘empty’ slot and now you’ve just broken up your research time into useless chunks. (Trust me, I’ve been there). On the other hand, if that time is already blocked out on your calendar as ‘busy’ then it’s a reminder that you have something you need to be doing during that time.
As an ancillary note, blocking out time for you and your mental health is also important. Exercise important for keeping you sane? Need time for activities with friends or you go ballistic? Whatever you need to keep you sane and happy, make sure you schedule time for it. Because when you’re insane, your work is never as good as you think it is. Trust me on this.
Preprints are rapidly becoming popular in biology as a way to speed up the process of science, get feedback on manuscripts prior to publication, and establish precedence (Desjardins-Proulx et al. 2013). Since biologists are still learning about preprints I regularly get asked which of the available preprint servers to use. Here’s the long-form version of my response.
The good news is that you can’t go wrong right now. The posting of a preprint and telling people about it is far more important than the particular preprint server you choose. All of the major preprint servers are good choices. Of course you still need to pick one and the best way to do that is to think about the differences between available options. Here’s my take on four of the major preprint servers: arXiv, bioRxiv, PeerJ, and figshare.
arXiv is the oldest of the science preprint servers. As a result it is the most well established, it is well respected, more people have heard of it than any of the other preprint servers, and there is no risk of it disappearing any time soon. The downside to having been around for a long time is that arXiv is currently missing some features that are increasingly valued on the modern web. In particular there is currently no ability to comment on preprints (though they are working on this) and there are no altmetrics (things like download counts that can indicate how popular a preprint is). The other thing to consider is that arXiv’s focus is on the quantitative sciences, which can be both a pro and a con. If you do math, physics, computer science, etc., this is the preprint server for you. If you do biology it depends on the kind of research you do. If your work is quantitative then your research may be seen by folks outside of your discipline working on related quantitative problems. If your work isn’t particularly quantitative it won’t fit in as well. arXiv allows an array of licenses that can either allow or restrict reuse. In my experience it can take about a week for a preprint to go up on arXiv and the submission process is probably the most difficult of the available options (but it’s still far easier than submitting a paper to a journal).
bioRxiv is the new kid on the block having launched less than a year ago. It has both commenting and altmetrics, but whether it will become as established as arXiv and stick around for a long time remains to be seen. It is explicitly biology focused and accepts research of any kind in the biological sciences. If you’re a biologist, this means that you’re less likely to reach people outside of biology, but it may be more likely that biology folks come across your work. bioRxiv allows an array of licenses that can either allow or restrict reuse. However, they explicitly override the less open licenses for text mining purposes, so all preprints there can be text-mined. In my experience it can take about a week for a preprint to go up on bioRxiv.
PeerJ Preprints is another new preprint server that is focused on biology and accepts research from across the biological sciences. Like bioRxiv it has commenting and altmetrics. It is the fastest of the preprint servers, with less than 24 hours from submission to posting in my experience. PeerJ has a strong commitment to open access, so all of its preprints are licensed with the Creative Commons Attribution License. PeerJ also publishes an open access journal, but you can post preprints to PeerJ Preprints without submitting them to the journal (and this is very common). If you do decide to submit your manuscript to the PeerJ journal after posting it as a preprint you can do this with a single click and, should it be published, the preprint will be linked to the paper. PeerJ has the most modern infrastructure of any of the preprint servers, which makes for really pleasant submission, reading, and commenting experiences. You can also earn PeerJ reputation points for posting preprints and engaging in discussions about them. PeerJ is the only major preprint server run by a for-profit company. This is only an issue if you plan to submit your paper to a journal that only allows the posting of non-commercial preprints. I know of only one journal with this restriction, American Naturalist, which can be an important journal in some areas of biology.
figshare is a place to put any kind of research output including data, figures, slides, and preprints. The benefit of this general approach to archiving research outputs is that you can use figshare to store all kinds of research outputs in the same place. The downside is that because it doesn’t focus on preprints people may be less likely to find your manuscript among all of the other research objects. One of the things I like about this broad approach to archiving anything is that I feel comfortable posting things that aren’t really manuscripts. For example, I post grant proposals there. figshare accepts research from any branch of science and has commenting and altmetrics. There is no delay from submission to posting. Like PeerJ, figshare is a for-profit company and any document posted there will be licensed with the Creative Commons Attribution License.
Those are my thoughts. I have preprints on all three preprint servers plus figshare and I’ve been happy with each experience. As I said at the beginning, the most important thing is to help speed up the scientific process by posting your work as preprints. Everything else is just details.
UPDATE: It looks like, due to a hiccup with scheduling this post, an early version went out to some folks without the figshare section.
UPDATE: In the comments Richard Sever notes that bioRxiv’s preprints are typically posted within 48 hours of submission and that their interpretation of the text mining clause is that this is covered by fair use. See our discussion in the comments for more details.
As I’ve argued here, and in PLOS Biology, preprints are important. They accelerate the scientific dialog, improve the quality of published research, and provide both a fair mechanism for establishing precedence and an opportunity for early-career researchers to quickly demonstrate the importance of their research. And I’m certainly not the only one who thinks this:
- Population biologists turn to pre-publication server to gain wider readership and rapid review
- All the cool kids are on arXiv and Haldane’s Sieve… why you should be too
- Open science and the econoblogosphere
- A good way to publish – arXiv FTW
- Nature respects preprint servers
- ESA changes Arxiv policy following community comments
One of the things slowing the use of preprints in ecology is the fact that some journals still have policies against considering manuscripts that have been posted as preprints. The argument is typically based on the Ingelfinger rule, which prohibits publishing the same original research in multiple journals. However, almost no one actually believes that this rule applies to preprints anymore. Science, Nature, PNAS, the Ecological Society of America, the British Ecological Society, the Royal Society, Springer, Wiley, and Elsevier all generally allow the posting of preprints. In fact, there is only one major journal in ecology that does not consider manuscripts that are posted as preprints: Ecology Letters.
I’ve been corresponding with the Editor in Chief of Ecology Letters for some time now attempting to convince the journal to address their outdated approach to preprints. He kindly asked the editorial board to vote on this last fall and has been nice enough to both share the results and allow me to blog about them.
Sadly, the editorial board voted 2:1 to not allow consideration of manuscripts posted as preprints based primarily on the following reasons:
- Authors might release results before they have been adequately reviewed and considered. In particular the editors were concerned that “early career authors might do this”.
- Because Ecology Letters is considered to be a quick turnaround journal the need for preprints is lessened
I’d like to take this opportunity to explain to the members of the editorial board why these arguments are not valid and why the board should reconsider its vote.
First, the idea that authors might release results before they have been sufficiently reviewed is not a legitimate reason for a journal to not consider preprinted manuscripts for the following reasons:
- This simply isn’t a journal’s call to make. Journals can make policy based on things like scientific ethics, but preventing researchers from making poor decisions is not their job.
- Preprints are understood to not have been peer reviewed. We have a long history in science of getting feedback from other scientists on papers prior to submitting them to journals and I’ve personally heard the previous Editor in Chief of Ecology Letters argue passionately for scientists to get external feedback before submitting to the journal. This is one of the primary reasons for posting preprints; to get review from a much broader audience than the 2-3 reviewers that will look at a paper for a journal.
- All of the other major ecology and general science journals already allow preprints. This means that any justification for not allowing them would need to explain why Ecology Letters is different from Science, Nature, PNAS, the ESA journals, the BES journals, the Royal Society journals, and several of the major corporate publishers. In addition, since every other major ecology journal allows preprints, this policy would only influence papers that were intended to be submitted to Ecology Letters. This is such a small fraction of the ecology literature that it will have no influence on the stated goal.
- We already present results prior to publication in all kinds of forms, the most common of which is at conferences, so unless we are going to disallow presenting unpublished results in talks this policy won’t accomplish its stated goal.
Second, the idea that preprints are unnecessary because Ecology Letters is so fast doesn’t actually hold for most papers. Most importantly, this argument ignores the importance of preprints for providing prepublication review. In addition, in the best case scenario this reasoning only holds for articles that are first submitted to Ecology Letters and are accepted. Ecology Letters has roughly a 90% rejection rate (the last time I heard a number). Since a lot of the papers that are accepted there are submitted elsewhere first, I suspect that this argument holds for <5% of the papers they handle. For all other papers the delay will be much longer. For example, let’s say I do some super exciting research (well, at least I think it’s super exciting) that I think has a chance at Science/Nature. Science and Nature are fine with me posting a preprint, but since there’s a chance that it won’t get in there, I still can’t post a preprint because I might end up submitting to Ecology Letters. My paper goes out for review at Science but gets rejected, I send it to Nature where it doesn’t go out for review, and then to PNAS where it goes out again and is rejected. I then send it to Letters where it goes out for 2 rounds of review and is eventually accepted. Give or take, this process will take about a year, and a year is a long time in science.
So, I am writing this in the hopes that the editorial board will reconsider their decision and take Ecology Letters from a journal that is actively slowing down the scientific process back to its proud history of increasing the speed with which scientific communication happens. If you know members of the Ecology Letters editorial board personally I encourage you to email them a link to this article. If any members of the editorial board disagree with the ideas presented here and in our PLOS Biology paper, I encourage them to join me in the comments section to discuss their concerns.
UPDATE: Added Wiley to the list of major publishers that allow preprints. As Emilio Bruna points out in the comments they are happy to have journals that allow posting of preprints and Biotropica is a great example of one of their journals making this shift.
UPDATE: Fixed link to Paul Krugman’s post.
A couple of weeks ago Eli Kintisch (@elikint) interviewed me for what turned out to be a great article on “Sharing in Science” for Science Careers. He also interviewed Titus Brown (@ctitusbrown) who has since posted the full text of his reply, so I thought I’d do the same thing.
How has sharing code, data, R methods helped you with your scientific research?
Definitely. Sharing code and data helps the scientific community make more rapid progress by avoiding duplicated effort and by facilitating more reproducible research. Working together in this way helps us tackle the big scientific questions and that’s why I got into science in the first place. More directly, sharing benefits my group’s research in a number of ways:
- Sharing code and data results in the community being more aware of the research you are doing and more appreciative of the contributions you are making to the field as a whole. This results in new collaborations, invitations to give seminars and write papers, and access to excellent students and postdocs who might not have heard about my lab otherwise.
- Developing code and data so that it can be shared saves us a lot of time. We reuse each other’s code and data within the lab for different projects, and when a reviewer requests a small change in an analysis we can make a small change in our code and then regenerate the results and figures for the project by running a single program. This also makes our research more reproducible and allows me to quickly answer questions about analyses years after they’ve been conducted when the student or postdoc leading the project is no longer in the lab. We invest a little more time up front, but it saves us a lot of time in the long run. Getting folks to work this way is difficult unless they know they are going to be sharing things publicly.
- One of the biggest benefits of sharing code and data is in competing for grants. Funding agencies want to know how the money they spend will benefit science as a whole, and being able to make a compelling case that you share your code and data, and that it is used by others in the community, is important for satisfying this goal of the funders. Most major funding agencies have now codified this requirement in the form of data management plans that describe how the data and code will be managed and when and how it will be shared. Having a well established track record in sharing makes a compelling argument that you will benefit science beyond your own publications, and I have definitely benefited from that in the grant review process.
What barriers exist in your mind to more people doing so?
There is a lot of fear about openly sharing data and code. People believe that making their work public will result in being scooped or that their efforts will be criticized because they are too messy. There is a strong perception that sharing code and data takes a lot of extra time and effort. So the biggest barriers are sociological at the moment.
To address these barriers we need to do a better job of providing credit to scientists for sharing good data and code. We also need to do a better job of educating folks about the benefits of doing so. For example, in my experience, the time and effort dedicated to developing and documenting code and data as if you plan to share it actually ends up saving the individual research time in the long run. This happens because when you return to a project a few months or years after the original data collection or code development, it is much easier if the code and data are in a form that makes it easy to work with.
How has twitter helped your research efforts?
Twitter has been great for finding out about exciting new research, spreading the word about our research, getting feedback from a broad array of folks in the science and tech community, and developing new collaborations. A recent paper that I co-authored in PLOS Biology actually started as a conversation on twitter.
How has R Open Science helped you with your work, or why is it important or not?
rOpenSci is making it easier for scientists to acquire and analyze the large amounts of scientific data that are available on the web. They have been wrapping many of the major science related APIs in R, which makes these rich data sources available to large numbers of scientists who don’t even know what an API is. It also makes it easier for scientists with more developed computational skills to get research done. Instead of spending time figuring out the APIs for potentially dozens of different data sources, they can simply access rOpenSci’s suite of packages to quickly and easily download the data they need and get back to doing science. My research group has used some of their packages to access data in this way and we are in the process of developing a package with them that makes one of our Python tools for acquiring ecological data (the EcoData Retriever) easy to use in R.
Any practical tips you’d share on making sharing easier?
One of the things I think is most important when sharing both code and data is to use standard licenses. Scientists have a habit of thinking they are lawyers and writing their own licenses and data use agreements that govern how the data and code can be used. This leads to a lot of ambiguity and difficulty in using data and code from multiple sources. Using standard open source and open data licenses vastly simplifies the process of making your work available and will allow science to benefit the most from your efforts.
And do you think sharing data/methods will help you get tenure? Evidence it has helped others?
I have tenure and I certainly emphasized my open science efforts in my packet. One of the big emphases in tenure packets is demonstrating the impact of your research, and showing that other people are using your data and code is a strong way to do this. Whether or not this directly impacted the decision to give me tenure I don’t know. Sharing data and code is definitely beneficial to competing for grants (as I described above) and increasingly to publishing papers as many journals now require the inclusion of data and code for replication. It also benefits your reputation (as I described above). Since tenure at most research universities is largely a combination of papers, grants, and reputation, I think that sharing at least increases one’s chances of getting tenure indirectly.
UPDATE: Added missing link to Titus Brown’s post: http://ivory.idyll.org/blog/2014-eli-conversation.html
EcoData Retriever: quickly download and cleanup ecological data so you can get back to doing science
If you’ve ever worked with scientific data, your own or someone else’s, you know that you can end up spending a lot of time just cleaning up the data and getting it into a state that makes it ready for analysis. This involves everything from cleaning up non-standard null values to completely restructuring the data so that tools like R, Python, and database management systems (e.g., MS Access, PostgreSQL) know how to work with them. Doing this for one dataset can be a lot of work, and if you work with a number of different databases like I do the time and energy can really take away from the time you have to actually do science.
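To give a flavor of what this kind of cleanup looks like, here’s a minimal Python sketch of converting ad hoc “no data” values into proper nulls. This is just an illustration, not the Retriever’s actual code, and the particular sentinel values and data are invented:

```python
# Sentinel strings sometimes used to mean "no data" in raw data files.
# These particular values are examples, not an exhaustive list.
SENTINELS = {"-999", "-999.0", "999.0", "NoData", ""}

def clean_value(raw):
    """Return None for a recognized sentinel, otherwise the parsed float."""
    return None if raw.strip() in SENTINELS else float(raw)

row = ["12.5", "-999", "NoData", "3.0"]
print([clean_value(v) for v in row])  # [12.5, None, None, 3.0]
```

Multiply this by dozens of files, each with its own conventions, and you can see where the days go.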
Over the last few years Ben Morris and I have been working on a project called the EcoData Retriever to make this process easier and more repeatable for ecologists. With a click of a button, or a single call from the command line, the Retriever will download an ecological dataset, clean it up, restructure and assemble it (if necessary) and install it into your database management system of choice (including MS Access, PostgreSQL, MySQL, or SQLite) or provide you with CSV files to load into R, Python, or Excel.
Just click on the box to get the data:
Or run a command like this from the command line:
retriever install msaccess BBS --file myaccessdb.accdb
This means that instead of spending a couple of days wrangling a large dataset like the North American Breeding Bird Survey into a state where you can do some science, you just ask the Retriever to take care of it for you. If you work actively with Breeding Bird Survey data and you always like to use the most up to date version with the newest data and the latest error corrections, this can save you a couple of days a year. If you also work with some of the other complicated ecological datasets like Forest Inventory and Analysis and Alwyn Gentry’s Forest Transect data, the time savings can easily be a week.
The Retriever handles things like:
- Creating the underlying database structures
- Automatically determining delimiters and data types
- Downloading the data (and if there are over 100 data files that can be a lot of clicks)
- Transforming data into standard structures so that common tools in R and Python and relational database management systems know how to work with it (e.g., converting cross-tabulated data)
- Converting non-standard null values (e.g., 999.0, -999, NoData) into standard ones
- Combining multiple data files into single tables
- Placing all related tables in a single database or schema
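As a toy illustration of the “transforming data into standard structures” step above, here is a short Python sketch that converts a cross-tabulated table (one column per species) into the long form that R, Python, and relational databases handle most easily. The column names and values are invented for illustration; the Retriever’s own implementation is more general:

```python
import csv
import io

# Hypothetical cross-tabulated data: one row per site, one column per species
crosstab = io.StringIO(
    "site,sp1,sp2,sp3\n"
    "A,3,0,7\n"
    "B,1,5,0\n"
)

# Convert to long form: one (site, species, count) record per observation
long_rows = []
for row in csv.DictReader(crosstab):
    site = row.pop("site")
    for species, count in row.items():
        long_rows.append((site, species, int(count)))

print(long_rows[0])  # ('A', 'sp1', 3)
```

Doing this by hand in a spreadsheet for a few files is tedious; doing it reliably for a dataset with a hundred files is exactly the kind of thing you want automated.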
The EcoData Retriever currently includes a number of large, openly available, ecological datasets (see a full list here). It’s also easy to add new datasets to the EcoData Retriever if you want to. For simple data tables a Retriever script can be as simple as:
name: Name of the dataset
description: A brief description of the dataset of ~25 words.
shortname: A one word name for the dataset
table: MyTableName, http://awesomedatasource.com/dataset
We also have some exciting new features on the To Do list including:
- Automatically cleaning up the taxonomy using existing services
- Providing detailed tracking of the provenance of your data by recording the date it was downloaded, the version of the software used, and information about what cleanup steps the Retriever performed
- Integration into R and Python
Let us know what you think we should work on next in the comments.
Martorell, C. & R.P. Freckleton. 2014. Testing the roles of competition, facilitation and stochasticity on community structure in a species-rich assemblage. Journal of Ecology doi:10.1111/1365-2745.12173
At a given location in nature, why are some species present and others absent? Why do some species thrive and have lots of individuals and others are barely eking out an existence? What determines how many species can live together there? These questions have fascinated (some might say obsessed) community ecologists for an almost embarrassing number of decades. They have proven difficult questions to answer and everyone has their favorite process they like to use to answer those questions. Competition for limiting resources is perennially a favorite process used to explain who gets into a community and who does well once they’re in it. But there are also a number of other processes that clearly play important roles. Theory and data are showing that the movement of species from location to location can alter what species exist where and how many individuals they have at a site. The role of facilitation (positive interactions among species) has increasingly been getting play as well, especially in stressful environments. There can also be a random component to the order that species arrive at a particular location. Because it can be difficult for very similar species to coexist, who is already at a location can influence who can then get into that location (this is sometimes referred to as historical or priority effects). I’m sure I missed some processes and I’m equally sure that someone out there right now is upset I didn’t include theirs. Others might (and by might I mean probably will) disagree with what I’m about to say, but it seems to me that we spend most of our time arguing about which process is most important. It’s competition! No it’s dispersal limitation! Niches! No niches! I have come to find this binary approach to studying communities wearisome. And here’s why. Does competition influence who exists at a particular location? Yes. Does dispersal? Yes. Does facilitation? Yes. Do stochastic processes? Yes. Do priority effects? Yes.
We are at a point in ecology where I think we can feel confident that these various processes both exist and that they affect what we see in nature. Instead, we need to figure out how these processes work together to create the communities we observe. Does the role of a process stay constant through time? Or does it change depending on whether a community has been recently disturbed or is more established? Can we weave together these processes to predict how a community will look through time?
Right about now, you’re wondering if I will ever actually mention the Martorell & Freckleton paper. Here you go. Martorell & Freckleton (2014) take data from a long-term study of plants in Mexico and analyze all the pair-wise interactions among species in order to “document the intensity and demographic importance of interactions and stochasticity in terms of per capita effects, and to set them in a community context”. In effect, they used population models and the spatio-temporal data on plants to assess for each species observed how its presence and population growth/abundance was impacted by interactions with other species, interactions with individuals of the same species, variability in the environment, dispersal, and population stochasticity. If you want to know how they did this, you’ll need to read the paper. They found that both competition and facilitation between species played an important role in determining whether a new species could colonize a particular site. Once established, competition and facilitation played less important roles in explaining the abundance of species. Most of the variation in abundance between species can be explained by interactions with other members of the same species and by stochastic events influencing dynamics at a location.*
So why do I like this paper? Because it’s a step towards that integration of processes that I think we need to start doing. Their end message isn’t: process x affects ‘thing I’m interested in’ y. Their end message is about how these processes are working together and when they play a more (or less) important role for determining what species are present and how well they are doing at a site. Their results suggest a model of communities where interactions among species influence who establishes at a particular location (i.e. the species composition in community ecology lingo). However, stochastic events and interactions among members of the same species become important for understanding differences among species in abundances and population growth rates. Only time will tell if this particular integration of processes holds across different types of ecosystems. But right now it allows us to start talking about more sophisticated models of how species come together to create the diversity of species and abundances in a community.
And what does this paper say about predicting the species in a community and their abundances? My interpretation is that it says what I think a growing number of us have suspected for a while. For a specific location there is not a single expected configuration of a community. There are many possible configurations. This means that precisely predicting the species composition of a community will be difficult. But it also makes me wonder whether it might be possible to predict the space of possibilities and how probable those possibilities are. Given this disturbance rate and this pool of possible species, there’s a 60% chance of this configuration of species, but only a 10% chance for this one. I suspect many of my colleagues think that even this level of prediction or forecasting is pure science fiction thinking on my part. But like some of my other blogging colleagues (hi, Brian! hi, Peter!) I believe that pushing our field from one focused on ‘understanding’ to one focused on ‘forecasting’ or ‘predicting’ is one of the greatest challenges our science faces**. Figuring out how and when different processes operate and what aspects of community structure they are controlling is the first step towards forecasting. And that is exactly why I like this article.
* Disclaimer: I’ve distilled the paper down to the core message of what I found interesting and why. To understand what Martorell & Freckleton did, all of their results, and what they thought made their results interesting, you should really read the paper.
**Acknowledgments: Sadly, I can’t also link to the long and awesome conversations that Ethan, Allen Hurlbert and I have been having on this topic while on sabbatical. Trust me, they’ve been revolutionary experiences that you wish you were there for.
Engaging in Art and Science Collaborations
This is a guest post by Zack Brym (@ZackBrym). He is a graduate student in our group interested in the form and function of orchard trees. He has also developed an interest in scientific communication. He is sharing his recent experience translating his Ph.D. research into dance and what he learned from it.
Why is it that some scientists experience large swings between accomplishing a lot all at once and drought-like periods of inactivity? Successful scientists maintain a manageable pace of activity and work efficiently around deadlines, but sometimes the ups and downs of productivity are dictated by something more intangible. Wisely, an adviser helped explain to me that “Science is a creative process. You just don’t have it in you all the time.” I very much agree with him and have embraced the opportunity to explore creativity in science.
There is a place for creativity within all steps of the scientific process. Innovation is the result of imagining new experimental designs, analytical techniques, or revolutionary ideas. Creativity can also be expressed while troubleshooting an idea or figuring out how to communicate key findings. I believe creativity is especially required for scientists working at the forefront of their discipline and I personally seek it out just as much as I do more conventional academic stimuli.
Communicating science clearly and broadly to the public through creative outreach tools is of increasing importance to scientists. Naturally, there are a number of ways that this is currently happening. Mark Brunson and Michele Baker are working hard at improving the visibility of science at Utah State University with their Translational Ecology Curriculum. At the University of Florida, the Creative Campus Committee developed a speaker series to explore “Analogous Thinking in the Arts and Sciences”. Part of the program included a biology professor, Jamie Gillooly, who actively engaged as a scholar in residence at the School of Art. Nalini Nadkarni at the University of Utah has developed the Research Ambassador Program, which provides resources to scientists to reach out to diverse public audiences like prisoners, religious groups, the elderly, and urban youth. Participants in a successful outreach program are given the opportunity to engage creatively with science, which promotes curiosity about complex systems and critical thinking skills.
At the mid-point of my Ph.D., I used creativity to help express the fundamental concepts of my research by working to convey those ideas clearly through art. This year, I joined the ranks of Dance Your Ph.D. entrants with my video Prune to Wild. Dance Your Ph.D. is an international video competition. The challenge is to communicate a key concept of your Ph.D. research using an interpretive dance. Doing the Dance Your Ph.D. video gave me two great benefits. I produced an amazing outreach tool for my science and I gained new insight and perspective about my research while developing the project. People are constantly asking me more about my research after seeing the video and telling others about it (e.g., Salt Lake Tribune, Herald Journal). And because of this project I have a better idea of what I want to say to them.
My communication skills benefited during the Dance Your Ph.D. project because I had to formulate a simple message about the fundamental concepts of my Ph.D. research. To produce a meaningful dance video, I had to describe my science in a way that my choreographer (Stephanie White) and filmmaker (Andy Lorimer) could translate into dance. Moreover, we were inspired by the Dance Your Ph.D. judging instructions to take an unconventional approach to the video and give artistry equal emphasis with the science. The contest rules read, “The judges can penalize videos that rely heavily on elements that do not involve dance at all, such as written text and graphics.” Most entries use words and text in association with dance, but we chose to exclude words and text from the video aside from the title and credits.
If you are only using dance, it becomes extremely difficult to convey a strong message. I provide the intended message of the dance on my professional website for folks who might be less confident with their interpretation. Interestingly, it seems that viewers underestimate the amount of information they get from my video when they first approach me about it. I am generally pleased to confirm their basic understanding as correct and then we get to continue on in greater detail.
Making the video has been such a positive exercise in developing a simple research message that I have continued to seek out opportunities to reconstruct and simplify my thesis. Can you describe your science using only the 1000 most common words? My attempt is:
Wood grows to hold leaves to the sun. Some wood also holds food for people. Humans manage this wood to grow less and be short, but to make more food. I want to know more about the growing of wood so I can learn how to make more food with less wood, work, water, and whatever. (Developed at UpGoer5)
The message of my video was a bit more complex in word choice, but arguably as simple.
My goal is to develop an understanding of fruit trees so that I can recommend a management strategy that acknowledges the physiological constraints imposed on fruit trees through evolution while also producing an economically viable fruit. The resulting tree represents a “natural” tree architecture that actually uses fewer resources to produce fruit by achieving maximum physiological efficiency.
This text is a direct result of my Dance Your Ph.D. project. The video became a truly collaborative effort, and I am very grateful for the contributions of artists in creatively describing my research. Collaborations between artists and scientists are becoming more widely recognized for their benefits. Support for these efforts has been formalized by the “STEM to STEAM” movement, which is gaining traction in education policy. What’s more, through this project I am more certain than ever that science and its creative discoveries should be presented in an open and transparent manner, so everyone has the chance to engage in the scientific process and contribute back in meaningful ways.
This is a guest post by Elita Baldridge (@elitabaldridge). She is a graduate student in our group who has been navigating the development of a chronic illness during graduate school. She is sharing her story to help spread awareness of the challenges faced by graduate students with chronic illnesses. She wrote an excellent post on the PhDisabled blog about the initial development of her illness that I encourage you to read first.
During my time as a Ph.D. student, I developed a host of bizarre, productivity-eating symptoms, and I have been trying to make progress on my dissertation while also spending a lot of time at doctors’ offices trying to figure out what is wrong with me. I wrote an earlier blog post about dealing with the development of a chronic illness as a graduate student at the PhDisabled Blog.
When the rheumatologist handed me a yellow pamphlet labeled “Fibromyalgia”, I felt a great sense of relief. My mystery illness had a diagnosis, so I had a better idea of what to expect. While chronic, at least fibromyalgia isn’t doing any permanent damage to joints or brain. However, there isn’t a lot known about it, the treatment options are limited, and the primary literature is full of appallingly small sample sizes.
There are many symptoms, which basically consist of feeling like you have the flu all the time, with all the associated aches and pains. The worst one for me, because it interferes with my highly prized ability to think, is the cognitive dysfunction, or, in common parlance, “fibro fog”. This is a problem when you are actively trying to get research done, as sometimes you remember what you need to do but can’t quite figure out how to navigate to the files on your computer, what to do with the mouse, or how to turn the computer on. I frequently finish sentences with a wave of my hand and the word “thingy”. Sometimes I cannot do simple math, as I do not know what the numbers mean or what to do next. Depending on the severity, the cognitive dysfunction can render me unable to work on my dissertation, as I simply cannot understand what I am supposed to do. I’m not able to drive anymore, due to the general fogginess, but I never liked driving that much anyway. Sometimes I need a cane, because my balance is off or I cannot walk in a straight line, and I need the extra help. Sometimes I can’t be in a vertical position, because verticality renders me so dizzy that I vomit.
I am actually doing really well for a fibromyalgia patient. I know this, because the rheumatologist who diagnosed me told me that I was doing remarkably well. I am both smug that I am doing better than average, because I’m competitive that way, and also slightly disappointed that this level of functioning is the new good. I would have been more disappointed, only I had a decent amount of time to get used to the idea that whatever was going on was chronic and “good” was going to need to be redefined. My primary care doctor had already found a medication that relieved the aches and pains before I got an official diagnosis. Thus, before receiving an official diagnosis, I was already doing pretty much everything that can be done medication-wise, and I had already figured out coping mechanisms for the rest of it. I keep to a strict sleep schedule, which I’ve always done anyway, and I’ve continued exercising, which is really important in reducing the impact of fibromyalgia. I should be able to work up my exercise slowly so that I can start riding my bicycle short distances again, but the long 50+ mile rides I used to do are probably out.
Fortunately, my research interests have always been well suited to a macroecological approach, which leaves me well able to do science when my brain is functioning well enough. I can test my questions without having to collect data from the field or lab, and it’s easy to do all the work I need to from home. My work station is set up right by the couch, so I can lie down and rest when I need to. I have to be careful to take frequent breaks, lest working too long in one position cause a flare-up. This is much easier than going up to campus, which involves putting on my healthy person mask to avoid sympathy, pity, and questions, and either a long bus ride or getting a ride from my husband. And sometimes, real people clothes and shoes hurt, which means I’m more comfortable and spending less energy if I can just wear pajamas and socks, instead of jeans and shoes.
Understand that I am not sharing all of this because I want sympathy or pity. I am sharing my experience as a Ph.D. student developing and being diagnosed with a chronic illness because I, unlike many students with any number of other short-term or long-term disabling conditions, have a lot of support. Because I have a great deal of family support, departmental support, and support from the other Weecologists and our fearless leaders, I should be able to limp through the rest of my Ph.D. If I did not have support from ALL of these sources, it is very likely that I would not be able to continue with my dissertation. While I hope that I will be able to contribute to science with my dissertation, I also think that I can contribute to science by facilitating discussion about some of the problems that chronically ill students face, and hopefully finding solutions to some of those problems. To that end, I have started an open GitHub repository to provide a database of resources that can help students continue their training, and I would welcome additional contributions. Unfortunately, there don’t seem to be many such resources. Many medical leave-of-absence programs cut students off from university resources, which frequently includes access to subsidized health insurance (and potentially the student’s doctor), and also removes the student from deferred student loans.
I have fibromyalgia. I also have contributions to make to science. While I am, of course, biased, I think that some contribution is better than no contribution. I’d rather be defined by my contributions, rather than my limitations, and I’m glad that my university and my lab aren’t defining me by my limitations, but are rather helping me to make contributions to science to the best of my ability.
As some of you may know, I’ve been working with Michael Angilletta for the past year on organizing a Gordon Research Conference. I announced the mentoring program that is affiliated with the conference last week, but here is the official info on the conference itself. Please forgive a little repetition from the mentoring program post.
Application Deadline: June 22, 2014
When and Where: July 20-25 2014 at the University of New England, Biddeford Maine
Conference Topic: Many of the impacts humans have on nature affect patterns and processes at multiple spatial, temporal, or organizational scales. Predicting the response of nature to human impacts is therefore challenging, because changes at one scale can have profound impacts on patterns and processes at other scales. Because ecology has traditionally focused on patterns and processes at single scales, we have few approaches for understanding the cross-scale feedbacks that influence the patterns and processes we are interested in predicting. The Gordon Research Conference on ‘Unifying Ecology Across Scales: the role of nutrients, metabolism, and physiology’ is a small conference focused on exploring how the availability, acquisition, and transference of energy and nutrients can link patterns and processes across spatial, organizational, and temporal scales. Our goal is to provide a venue for people interested in this topic to discuss the current state of the field and identify promising avenues of future research. Research interests of participants span the diverse areas of ecology, evolution, and physiology, but are united by an interest in using energy and nutrients to unify different areas and approaches to ecology.
What is a Gordon Research Conference?: Gordon Research Conferences (GRCs) are well known in some fields, but the number of ecology-related GRCs is low, so many of us haven’t heard of one before. A Gordon Research Conference is a small conference (<200 people) focused on a specific topic. In our case, the topic is linking patterns and processes across scales using nutrients, metabolism, and physiology. Speakers at GRCs are by invite only, but there is a poster session almost every afternoon for attendees to present their research. The poster session is not just for junior people to present. Well-known senior people tack up posters and stand by them too.
The structure of a GRC is also pretty unique. Talks occur in the mornings and evenings, leaving the afternoons free for informal discussions, formation of collaborations, and recreational activities (our conference site has kayaking as well as other organized opportunities). Attendees all sleep in the same dorm and eat at the same cafeteria, further creating opportunities for interactions and discussions.
Applying to attend: Registration is now open.
GRCs have a unique approach to the application process. You submit an application, which the conference chairs (that’s me and Michael Angilletta) can then accept or reject. If accepted, you’ll get an ‘invitation’ to actually register. Don’t let the fear of rejection stop you from applying, though. We have historically had space for everyone who wants to come.
Special events for graduate students and postdocs: We have a Gordon Research Seminar (GRS) associated with our conference focused on “understanding the drivers of biological systems by integrating metabolism, physiology, and macroecology”. Gordon Research Seminars provide opportunities for graduate students and postdocs to present their research and network with their peers and a small number of senior scientist mentors before the main conference. Feedback from people who have attended these has been universally positive. In fact, when we didn’t have one year, there was a huge outcry to bring them back. You have to apply for the GRS separately from the GRC. The conference chairs for the GRS are Sarah Supp and Sarah Diamond. The GRS registration process is also currently open. Dates for the GRS are July 19-20, 2014.
This year we are also excited to announce we have a mentoring program at the conference that graduate students and postdocs who plan on attending the conference can apply for. We have limited slots for this (approximately 20). Details can be found here.
Who is Speaking?: To (hopefully) get you even more excited about attending, here is the list of session topics, speakers, and discussion leaders for the conference. UPDATED: We’ve added a number of lightning talks (short talks). Those speakers have now been added below. If you want titles as well, the full schedule for the conference (with talk titles) is available here.
Session Topic 1: Developing Unified Theories of Ecology
Leader Name: Pablo Marquet
Session Topic 2: Macrophysiology Meets Macroecology
Leader Name: Lauren Buckley
Session Topic 3: Biogeography of Environmental Tolerance
Leader Name: Jennifer Sunday
Lightning Talks: Lacy Chick / Richard Feldman
Session Topic 4: Metabolic Adaptation to Changing Environments
Leader Name: Craig White
Session Topic 5: Mechanistic Basis of Macroecological Patterns
Leader Name: Brian Enquist
Session Topic 6: Linking Organismal Traits to Community Dynamics
Leader Name: Elena Litchman
Session Topic 7: Using Stoichiometry to Link Organisms and Ecosystems
Leader Name: Susan Kilham
Session Topic 8: Predicting Diversity across Scales
Leader Name: Brian McGill
Session Topic 9: Integrating Ecological Processes at the Macroscale
Leader Name: James Brown
The British Ecological Society has announced that it will now allow the submission of papers that have been posted as preprints (formal language here). This means that you can now submit preprinted papers to Journal of Ecology, Journal of Animal Ecology, Methods in Ecology and Evolution, Journal of Applied Ecology, and Functional Ecology. By allowing preprints, BES joins the Ecological Society of America, which instituted a pro-preprint policy last year. While BES’s formal policy is still a little more vague than I would like*, they have confirmed via Twitter that even preprints with open licenses are OK as long as they are not updated following peer review.
Preprints are important because they:
- Speed up the progress of science by allowing research to be discussed and built on as soon as it is finished
- Allow early career scientists to establish themselves more rapidly
- Improve the quality of published research by allowing a potentially large pool of reviewers to comment on and improve the manuscript (see our excellent experience with this)
BES getting on board with preprints is particularly great news because the number of ecology journals that do not allow preprints is rapidly shrinking to the point that ecologists will no longer need to consider where they might want to submit their papers when deciding whether or not to post preprints. The only major blocker at this point to my mind is Ecology Letters. So, my thanks to BES for helping move science forward!
*Which is why I waited 3 weeks for clarification before posting.