A brief introduction to using WebR and Quarto for client-side interactive lesson material
Last week an exciting new addition to the R ecosystem was announced: webR. webR lets you run R from inside your browser, without installing R [1]. This is a really exciting development for educational materials, because it makes it possible to provide interactive materials where students can modify, create, and run code right inside a web page with zero setup. While it has been possible to do this in the past, it required that the website be set up to run code on a server, which both required expertise that a lot of folks teaching R don’t have and introduced a lot of complexities related to things like security and server costs. With webR that all goes away.
One of the key recent tools that makes developing lessons (and lots of other things) easier is Quarto, which lets you write documents in markdown with embedded code blocks and then automatically turn them into outputs like web pages. And almost as soon as the first release of webR was announced there was an extension for using it as part of Quarto. So I spent a few hours this weekend seeing how far I could get with putting together some demo interactive lessons.
Results
If you just want to see what you can do, go check out the interactive lessons demo site. There are example lessons for basic R, dplyr, and ggplot. Give webR a few seconds to load things, then click on some Run Code buttons, change the code in the box, and click Run Code again. You can do whatever you want; it’s running on your local machine right in the browser.
Details
You can see the current state of the material producing the demo site in the GitHub repository.
Installation
You need to have Quarto installed to start. Then install the webR Quarto extension.
quarto install extension coatless/quarto-webr
Basics
Inside a Quarto document where you want to use webR you need to do two things. First, add this to your YAML front matter:
engine: knitr
filters:
- webr
Second, create code blocks for webR using:
```{webr}
# Your code
```
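Putting those two pieces together, a minimal Quarto document might look something like the following sketch (the title and the code in the chunk are just placeholders, not material from the demo lessons):

````
---
title: "webR demo"
engine: knitr
filters:
- webr
---

Click Run Code to execute the R code below, or edit it and run it again.

```{webr}
# Placeholder example: summarize a built-in dataset
summary(cars)
```
````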
Then just preview or render the Quarto file and you should be able to run and modify those code cells right on the resulting web page.
You can do everything you would in a normal R script, including downloading files from URLs and then loading them into R (there’s an example of this in the dplyr demo).
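For example, a webr code block along these lines would download a CSV file and load it into R (the URL here is just a placeholder, not the one used in the demo):

```r
# Download a CSV file and read it into R (placeholder URL)
download.file('https://example.com/surveys.csv', destfile = 'surveys.csv')
surveys <- read.csv('surveys.csv')
head(surveys)
```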
Adding packages
Things get a little trickier if you need to install packages. The simplest option is to have the user install them using a webR code block. You can’t do this using install.packages; you need to use webR’s own package system. To install dplyr you run
webr::install('dplyr')
This is fine, but it might be a little confusing since it’s not how you install packages normally, and you often want to jump straight to teaching without the boilerplate of package installation.
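For example, a lesson that uses dplyr would need to start with a block along these lines before getting to the actual teaching (a sketch, not the exact code from the demo lessons):

```r
# Install dplyr from the webR package repository, then load it
webr::install('dplyr')
library(dplyr)
```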
Building on the webR docs and a great example by boB Rudis [2], I used the following piece of JavaScript, which can be copied and pasted directly into any Quarto page:
<html>
<script type="module">
//=== WebR setup ===
import { WebR } from 'https://webr.r-wasm.org/latest/webr.mjs';
globalThis.webR = new WebR();
await globalThis.webR.init();
globalThis.webRCodeShelter = await new globalThis.webR.Shelter();
await globalThis.webR.installPackages(['dplyr', 'readr'])
</script>
</html>
Just change the packages on the installPackages line to what you need. Not all R packages are available, but the main tidyverse packages are.
Making things faster (& more secure)
Once you start installing packages things start to get a little slow. You can make things a lot faster [3] (and more secure) by following the webR docs’ strong suggestion: “To ensure a web page is cross-origin isolated, serve the page with both the COOP and COEP HTTP headers set”. The docs provide an example for doing this locally. If you’re deploying using Netlify, a common way of putting Quarto sites on the web, you can do this by creating a file named _headers in your deploy directory that contains the following:
/*
  Cross-Origin-Opener-Policy: same-origin
  Cross-Origin-Embedder-Policy: require-corp
Other bits and pieces
With the headers properly configured I found installing and loading dplyr and readr to both be fast enough that most folks won’t notice much. ggplot2, on the other hand, is a heavier dependency and it was slow enough that I felt the need to add a warning to the top of the page to avoid confusion. That said, once it’s loaded everything is super snappy, so it’s just a question of getting the student/user to understand that there will be a little bit of a lag at the very beginning.
I’ve also started playing with the use of callout boxes with webR code blocks for providing an in-browser way to work on exercises. Here’s what the example in the ggplot demo lesson looks like:

The student can type their own code directly into the code box, while easily scrolling up in the web page to remind themselves of what they’ve just learned. If they want to check if they have the right idea they can start by looking at the expected result (which expands when clicked to show the desired figure) or look at the solution code that the lesson designer created. Side note, even for graded material in my courses I always give the students access to the expected result so that they know what they are supposed to be producing. I’ve found it really helpful for avoiding confusion about the intent of questions.
Summary
After playing around with webR + Quarto for a day or so, I think this (and related systems in other languages) is going to provide a lot of really cool opportunities for making it easy to deploy interactive lesson material, both for formal classes and, most importantly, for open educational resources. I definitely recommend checking out this approach, and I’m happy to answer questions in the comments if you decide to give it a try.
A huge thanks to all of the folks making these incredible tools, including George Stagg and Lionel Henry for webR and James Balamuta for the Quarto extension. Also a huge thanks to boB Rudis for all his work sharing these tools and how to use them.
Footnotes
[1] This is part of a larger set of work to bring more meaningful computing to the browser, including implementations for Python.
[2] I couldn’t have done most of this without boB’s excellent posts on Mastodon. Check out his webR blog post for a full write-up.
Data Analyst position in ecology research group
The Weecology lab group run by Ethan White and Morgan Ernest at the University of Florida is seeking a Data Analyst to work collaboratively with faculty, graduate students, and postdocs to understand and model ecological systems. We’re looking for someone who enjoys tidying, managing, manipulating, visualizing, and analyzing data to help support scientific discovery.
The position will include:
- Organizing, analyzing, and visualizing large amounts of ecological data, including spatial and remotely sensed data. Modifying existing analytical approaches and data protocols as needed.
- Planning and executing the analysis of data related to newly forming questions from the group. Assisting in the statistical analysis of ecological data, as determined by the needs of the research group.
- Providing assistance and guidance to members of the research group on existing research projects. Working collaboratively with undergraduates, graduate students and postdocs in the group and from related projects.
- Learning new analytical tools and software as needed.
This is a staff position in the group and will be focused on data management and analysis. All members of this collaborative group are considered equal partners in the scientific process and this position will be actively involved in collaborations. Weecology believes in the importance of open science, so most work done as part of this position will involve writing open source code, use of open source software, and production and use of open data.
Weecology is a partnership between the White Lab, which studies ecology using quantitative and computational approaches, and the Ernest Lab, which tends to be more field and community ecology oriented. The Weecology group supports and encourages members interested in a variety of career paths. Former weecologists are currently employed in the tech industry, with the National Ecological Observatory Network, as faculty at teaching-focused colleges, and as postdocs and faculty at research universities. We are also committed to supporting and training a diverse scientific workforce. Current and former group members encompass a variety of racial and ethnic backgrounds from the U.S. and other countries, members of the LGBTQ community, military veterans, people with chronic illnesses, and first-generation college students. More information about the Weecology group and respective labs is available on our website. You can also check us out on Twitter (@skmorgane, @ethanwhite, @weecology), GitHub, and our blog Jabberwocky Ecology.
The ideal candidate will have:
- Experience working with data in R or Python, some exposure to version control (preferably Git and GitHub), and potentially some background with database management systems (e.g., PostgreSQL, SQLite, MySQL) and spatial data.
- Research experience in ecology
- Interest in open approaches to science
- Experience collecting or working with ecological data
That said, don’t let the absence of any of these stop you from applying. If this sounds like a job you’d like to have please go ahead and put in an application.
We currently have funding for this position for 2.5 years. Minimum salary is $40,000/year (which goes a pretty long way in Gainesville), but there is significant flexibility in this number for highly qualified candidates. We are open to the possibility of someone working remotely. The position will remain open until filled, with initial review of applications beginning on May 5th. If you’re interested in applying you can do so through the official UF position page. If you have any questions or just want to let us know that you’re applying you can email Weecology’s project manager Glenda Yenni at glenda@weecology.org.
Postdoctoral research position in the Temporal Dynamics of Communities
The Weecology lab group run by Morgan Ernest and Ethan White at the University of Florida is seeking a post-doctoral researcher to study changes in ecological communities through time. This position will primarily involve broad-scale comparative analyses across communities using large time-series datasets and/or in-depth analyses of our own long-term dataset (the Portal Project). Experience with any of the following is useful, but not required: long-term data, macroecology, paleoecology, quantitative/theoretical ecology, and programming/data analysis in R or Python. The successful applicant will be expected to collaborate on lab projects on community dynamics and develop their own research projects in this area according to their interests.
Weecology is a partnership between the Ernest Lab, which tends to be more field and community ecology oriented and the White Lab, which tends to be more quantitative and computationally oriented. The Weecology group supports and encourages students interested in a variety of career paths. Former weecologists are currently employed in the tech industry, with the National Ecological Observatory Network, as faculty at teaching-focused colleges, and as postdocs and faculty at research universities. We are also committed to supporting and training a diverse scientific workforce. Current and former group members encompass a variety of racial and ethnic backgrounds from the U.S. and other countries, members of the LGBTQ community, military veterans, people with chronic illnesses, and first-generation college students. More information about the Weecology group and respective labs is available on our website. You can also check us out on Twitter (@skmorgane, @ethanwhite, @weecology), GitHub, and our blog Jabberwocky Ecology.
This 2-year postdoc has a flexible start date, but can start as early as June 1st 2017. Interested students should contact Dr. Morgan Ernest (skmorgane@ufl.edu) with their CV including a list of three references, a cover letter detailing their research interests/experiences, and one or more research samples (a PDF or link to a scientific product such as a published paper, preprint, software, data analysis code, etc). The position will remain open until filled, with initial review of applications beginning on April 24th.
Data Retriever 2.0: We handle the data so you can focus on the analysis
We are very excited to announce a major new release of the Data Retriever, our software for making it quick and easy to get clean, ready-to-analyze versions of publicly available data.
The Data Retriever automates the downloading, cleaning, and installing of ecological and environmental data into your choice of databases and flat file formats. Instead of spending hours tracking down the data on the web, downloading it, trying to import it, running into issues (e.g., non-standard nulls, problematic column names, encoding issues), fixing one problem, and then encountering the next, all you need to do is run a single command from the command line:
$ retriever install csv iris
$ retriever install sqlite breed-bird-survey -f bbs.sqlite
or from R:
rdataretriever::install('postgres', 'wine-quality')
portal_data <- rdataretriever::fetch('portal')
The Data Retriever uses information in Frictionless Data datapackage.json files to automatically handle all of the complexities of “simple” data for you. For more complicated datasets, with dozens of components or major data structure issues, the Retriever uses Python scripts as plugins to handle the major data cleaning work and then automatically handles the rest.
To find out more about the Data Retriever check out the websites, the full documentation, and the GitHub repositories for both the Data Retriever and the R Data Retriever package.
Expanded focus and name change
For those of you familiar with the EcoData Retriever, this is the same software with a new name. Challenges with the data end of the analysis pipeline occur across disciplines and our tools work just as well for non-ecological data, so we’ve started adding non-ecological data and changed our name to reflect that. We’d love to hear from anyone interested in leading a push to add data from another discipline or just interested in adding a single favorite dataset.
As part of this we’ve changed the name of the R package from ecoretriever to rdataretriever.
Major changes
The 2.0 release includes a number of major changes including:
- Python 3 support (a single code base runs on both Python 2 and 3)
- Adoption of the Frictionless Data datapackage.json standard (replacing our old YAML-like metadata system), including a command line interface for creating and editing datapackage.json files
- Addition of JSON and XML as available output formats
- Major expansion of the documentation and hosting of the documentation at Read the Docs
- Removal of the graphical user interface (to allow us to focus that development time on wrappers for other languages)
- Lots of work under the hood and major improvements in testing
- A broadened scope that includes non-ecological data
We are also in the process of releasing version 1.0 of the R package. This version adds the new features in the Data Retriever and also includes major stability improvements, in particular in RStudio and on Windows.
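As a rough sketch of what working with the R package looks like (assuming the function names match the examples above and mirror the old ecoretriever interface):

```r
library(rdataretriever)

# List the datasets the Retriever knows about
rdataretriever::datasets()

# Install a dataset into a database, then load another one directly into R
rdataretriever::install('postgres', 'wine-quality')
portal_data <- rdataretriever::fetch('portal')
names(portal_data)
```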
We also have a brand new website.
Upgrading to the new version (UPDATED)
To ensure the smoothest upgrade to the new version we recommend:
- Run retriever reset scripts from the command line
- Uninstall the old version of the EcoData Retriever
- Install the new version
- Run retriever update from the command line
Acknowledgments
Henry Senyondo is the lead developer for the Data Retriever and has done an amazing job over the past year developing new features and shoring up the fundamentals for the software. He led the work on 2.0 from start to finish.
Akash Goel was a Google Summer of Code student with the project last summer and was responsible for the majority of the work adding Python 3 support and switching the project over to the datapackage.json standard.
Dan McGlinn, the creator of the R package, has continued his excellent leadership of the development of this package. Shawn Taylor, a new contributor, was instrumental in solving the stability issues on Windows/RStudio.
In addition to these core folks our growing group of contributors to both projects have been invaluable for adding new functionality, fixing bugs, and testing new changes. We are super excited to have contributions from 30 different people and will keep working hard to make sure that everyone feels welcome and supported in contributing to the project.
The level of work done to get these releases out the door was only possible due to generous support of the Gordon and Betty Moore Foundation’s Data Driven Discovery Initiative. This support allowed my group to employ Henry as a full time software engineer to work on these and other projects. This kind of active support for the development and maintenance of research oriented software makes sustainable software development at universities possible.
Fork our course: A semester-long Data Carpentry course for biologists
This post is co-authored by Zack Brym and Ethan White
Over the last year and a half we have been actively developing a semester-long Data Carpentry course designed to be easily customized and integrated into existing graduate and undergraduate curricula.
Data Carpentry for Biologists contains course materials for teaching scientists how to work more effectively with data. The course provides introductions to data management and relational databases, data manipulation and analysis, and data visualization. It covers the same general types of material as a two-day Data Carpentry workshop, but expands the materials and opportunities for practice into a full-length university course. The teaching material uses R and SQLite, with some corresponding materials for Python as well. To help students understand the direct applications to their interests, the examples and exercises focus on biological questions and working with real data. The course emphasizes using best practices to produce reusable and reproducible data analysis.
Active-learning Teaching Materials
Learning computing requires active practice by working through programming problems. Just diving in to computing is challenging for most scientists, so the course instruction is designed to combine short live-coding introductions to concepts followed immediately by the students working on a related exercise. Additional exercises are assigned later for practice. This follows the “I do”, “We do”, “You do” approach to teaching, which leverages the benefits of active-learning and flipped classrooms without leaving students who are less comfortable with the material feeling lost. The bulk of class time is spent working on assigned exercises with the instructor moving around the room helping guide students through things they don’t understand and engaging with students who are thinking about advanced applications of what they’ve learned.
This approach is the result of lots of reading about effective teaching methods and Ethan’s experience teaching this and related courses over the last six years at Utah State University and the University of Florida. It seems to work well for both students that get the material easily and those that find it more challenging. We’ve also tried to make these materials as useful as possible for self-guided students.
Open course development
Software Carpentry and Data Carpentry have shown how powerful collaborative lesson development can be and we’re interested in bringing that to the university classroom. We have designed the course materials to be modular and easy to modify, and the course website easy to clone and set up. All of the teaching materials and associated website files are openly available at the Data Carpentry for Biologists repository on GitHub under CC-BY and MIT licenses. The course materials are all written in Markdown and everything runs on Jekyll through GitHub Pages. Making your own version of the course should take less than an hour. We’ve developed documentation for how to create your own version of the course and how to contribute to development. Exercises and assignments are modular and changing exercises and assignments simply involves reordering items in a list. Adding a new exercise involves creating a new Markdown file and then adding its title to the list of exercises for an assignment.
Get Involved
If you teach, or want to teach, a course like this, we’d love to get you involved. Here are some useful links for getting started.
– I want to teach the course.
– I have some feedback.
– I want to contribute to the project.
We want to be sure getting involved is as easy as possible. We’ve worked hard to provide documentation and help resources for students and instructors. Students can find all they need to know at our student start guide. Instructors have access to course content and site design documentation.
If you’re having trouble finding something or getting something to work, or simply have some feedback about the course, please open a new issue on GitHub or send us an email.
Acknowledgements
Development of this course was generously supported by the Gordon and Betty Moore Foundation’s Data-Driven Discovery Initiative through Grant GBMF4563 to Ethan White and the National Science Foundation as part of a CAREER award to Ethan White.
New release of the EcoData Retriever
We are very excited to announce the newest release of the EcoData Retriever, our software for automating the downloading, cleaning, and installing of ecological and environmental data. Instead of hours or days trying to get complicated datasets like the Breeding Bird Survey ready for analysis, the Retriever lets you simply click a button or run a single command from R or the command line, and your computer does the rest.
It’s been over a year since the last retriever release and there are lots of new features and improvements to be excited about.
- We’ve added 21 new datasets, including major ecological and environmental datasets like eBird, Vertnet, the Global Wood Density Database, and the PRISM climate data.
- To support all of these datasets we’ve added support for additional data types, including larger-than-memory archive files, and we’ve also improved the ability to control where downloaded files are stored and how they are clustered together.
- We’ve significantly improved documentation and now have a new automatically built documentation site at Read The Docs.
- We’ve also made a lot of under the hood improvements.
This is also the first release that has been overseen by Weecology’s new software engineer, Henry Senyondo. We’re excited to have Henry on the team, and now that he’s around development of both the EcoData Retriever and other lab software projects will be happening more quickly.
A big thanks to the Gordon and Betty Moore Foundation’s Data-Driven Discovery Initiative for funding this development through Grant GBMF4563 and to the National Science Foundation for funding as part of a CAREER award to Ethan White.
UPDATE: Led by Dan McGlinn we also released a new version of the ecoretriever R interface for the Retriever last fall. This makes using the Retriever from R as simple as:
data <- ecoretriever::fetch("BBS")
White Lab PhD openings at the University of Florida
I’m looking for one or more graduate students to join my group next fall. In addition to the official ad (below) I’d like to add a few extra thoughts. As Morgan Ernest noted in her recent ad, we have a relatively unique setup at Weecology in that we interact actively with members of the Ernest Lab. We share space, have joint lab meetings, and generally maintain a very close intellectual relationship. We do this with the goal of breaking down the barriers between the quantitative side of ecology and the field/lab side of ecology. Our goal is to train scientists who span these barriers in a way that allows them to tackle interesting and important questions.
I also believe it’s important to train students for multiple potential career paths. Members of my lab have gone on to faculty positions, postdocs, and jobs in both science non-profits and the software industry.
Scientists in my group regularly both write papers (e.g., these recent papers from dissertation chapters: Locey & White 2013, Xiao et al. 2014) and develop or contribute to software (e.g., EcoData Retriever, ecoretriever, rpartitions & pypartitions) even if they’ve never coded before they joined my lab.
My group generally works on problems at the population, community, and ecosystem levels of ecology. You can find out more about what we’ve been up to by checking out our website. If you’re interested in learning more about where the lab is headed I recommend reading my recently funded Moore Investigator in Data-Driven Discovery proposal.
PH.D STUDENT OPENINGS IN QUANTITATIVE, COMPUTATIONAL, AND MACRO- ECOLOGY
The White Lab at the University of Florida has openings for one or more PhD students in quantitative, computational, and/or macro- ecology to start fall 2015. The student(s) will be supported as graduate research assistants from a combination of NSF, Moore Foundation, and University of Florida sources depending on their research interests.
The White Lab uses computational, mathematical, and advanced statistical/machine learning methods to understand and make predictions/forecasts for ecological systems using large amounts of data. Background in quantitative and computational techniques is not necessary, only an interest in learning and applying them. Students are encouraged to develop their own research projects related to their interests.
The White Lab is currently at Utah State University, but is moving to the Department of Wildlife Ecology and Conservation at the University of Florida starting summer 2015.
Interested students should contact Dr. Ethan White (ethan@weecology.org) by Nov 15th, 2014 with their CV, GRE scores, and a brief statement of research interests.
UPDATE: Added a note that we work at population, community, and ecosystem levels.
EcoData Retriever now supports R and environmental data, and has more datasets
We are very excited to announce the newest release of our EcoData Retriever software and the first release of a supporting R package, ecoretriever. If you’re not familiar with the EcoData Retriever you can read more here.
The biggest improvement to the Retriever in this set of releases is the ability to run it directly from R. Dan McGlinn did a great job leading the development of this package and we got a ton of fantastic help from the folks at rOpenSci (most notably Scott Chamberlain, Gavin Simpson, and Karthik Ram). Now, once you install the main EcoData Retriever, you can run it from inside R by doing things like:
install.packages('ecoretriever')
library(ecoretriever)

# List the datasets available via the Retriever
ecoretriever::datasets()

# Install the Gentry dataset into csv files in your working directory
ecoretriever::install('Gentry', 'csv')

# Download the raw Gentry dataset files, without any processing,
# to the subdirectory named data
ecoretriever::download('Gentry', './data/')

# Install and load a dataset as a list
Gentry = ecoretriever::fetch('Gentry')
names(Gentry)
head(Gentry$counts)
The other big advance in this release is the ability to have the Retriever directly download files instead of processing them. This allows us to support data that doesn’t come in standard tabular forms. So, we can now include things like environmental data in GIS formats and phylogenetic data such as supertrees. We’ve used this new capability to allow the automatic downloading of the Bioclim data, one of the most widely used climate datasets in ecology, and the supertree for mammals from Fritz et al. 2009.
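As a rough sketch, downloading one of these new datasets from R looks something like this (the dataset identifier here is an assumption; ecoretriever::datasets() lists the exact names):

```r
library(ecoretriever)

# Download the raw Bioclim files, without processing, to a local directory
# (dataset name is assumed; check ecoretriever::datasets() for the exact identifier)
ecoretriever::download('Bioclim', './data/')
```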
Finally, we’ve also added the very cool mammalian diet dataset from Dryad.
Four basic skill areas for a macroecologist [Guest post]
This is a guest post by Elita Baldridge (@elitabaldridge), a graduate student in Ethan White’s lab in the Ecology Center at Utah State University.
As a budding macroecologist, I have thought a lot about what skills I need to acquire during my Ph.D. This is my model of the four basic attributes for a macroecologist, although I think it is more generally applicable to many ecologists as well:
- Data
- Statistics
- Math
- Programming
Data:
- Knowledge of SQL
- Dealing with proper database format and structure
- Finding data
- Appropriate treatments of data
- Understanding what good data are
Statistics:
- Bayesian
- Monte Carlo methods
- Maximum likelihood methods
- Power analysis
- etc.
Math:
- Higher level calculus
- Should be able to derive analytical solutions for problems
- Modelling
Programming:
- Should be able to write programs for analysis, not just simple statistics and simple graphs.
- Able to use version control
- Once you can program in one language, you should be able to program in other languages without much effort, but should be fluent in at least one language.
General recommendations:
Achieve expertise in at least 2 out of the 4 basic areas, but be able to communicate with people who have skills in the other areas. However, if you are good at collaboration and come up with really good questions, you can make up for skill deficiencies by collaborating with others who possess those skills. Start with smaller collaborations with the people in your lab, then expand outside your lab or increase the number of collaborators as your collaboration skills improve.
Gaining skills:
Achieving proficiency in an area is best done by using it for a project that you are interested in. The more you struggle with something, the better you understand it eventually, so working on a project is a better way to learn than trying to learn by completing exercises.
The attribute should be generalizable to other problems: For example, if you need to learn maximum likelihood for your project, you should understand how to apply it to other questions. If you need to run an SQL query to get data from one database, you should understand how to write an SQL query to get data from a different database.
In graduate school:
Someone who wants to compile their own data or work with existing data sets needs to develop a good intuitive feel for data; even if they cannot write SQL code, they need to understand what good and bad databases look like and develop a good sense for questionable data, and how known issues with data could affect the appropriateness of data for a given question. The data skill is also useful if a student is collecting field data, because a little bit of thought before data collection goes a long way toward preventing problems later on.
A student who is getting a terminal master’s and is planning on using pre-existing data should probably focus on the data skill (because data is a highly marketable skill, and understanding data prevents major mistakes). If the data are not coming from a central database, like the BBS, where the quality of the data is known, additional time will be needed to compile the data, clean it, figure out whether it can be used responsibly, and fill holes in it.
Master’s students who want to go on for a Ph.D. should decide what questions they are interested in and should try to pick a project that focuses on learning a good skill that will give them a head start, whether more empirical (programming or stats), more theoretical (math), or more applied (math, e.g., for developing models; stats, e.g., applying and evaluating pre-existing models; or programming, e.g., making tools for people to use).
Ph.D. students need to figure out what types of questions they are interested in, and learn those skills that will allow them to answer those questions. Don’t learn a skill because it is trendy or you think it will help you get a job later if you don’t actually want to use that skill. Conversely, don’t shy away from learning a skill if it is essential for you to pursue the questions you are interested in.
Right now, as a Ph.D. student, I am specializing in data and programming. I speak enough math and stats that I can communicate with other scientists and learn the specific analytical techniques I need for a given project. For my interests (testing questions with large datasets), I think that by the time I am done with my Ph.D., I will have the skills I need to be fairly independent with my research.
Postdoc in Evolutionary Bioinformatics [Jobs]
There is an exciting postdoc opportunity for folks interested in quantitative approaches to studying evolution in Michael Gilchrist’s lab at the University of Tennessee. I knew Mike when we were both in New Mexico. He’s really sharp, a nice guy, and a very patient teacher. He taught me all about likelihood and numerical maximization and opened my mind to a whole new way of modeling biological systems. This will definitely be a great postdoc for the right person, especially since NIMBioS is at UTK as well. Here’s the ad:
Outstanding, motivated candidates are being sought for a post-doctoral position in the Gilchrist lab in the Department of Ecology & Evolutionary Biology at the University of Tennessee, Knoxville. The successful candidate will be supported by a three year NSF grant whose goal is to develop, integrate and test mathematical models of protein translation and sequence evolution using available genomic sequence and expression level datasets. Publications directly related to this work include Gilchrist, M.A. 2007, Molec. Bio. & Evol. (http://www.tinyurl/shahgilchrist11) and Shah, P. and M.A. Gilchrist 2011, PNAS (http://www.tinyurl/gilchrist07a).
The emphasis of the laboratory is focused on using biologically motivated models to analyze complex, heterogeneous datasets to answer biologically motivated questions. The research associated with this position draws upon a wide range of scientific disciplines including: cellular biology, evolutionary theory, statistical physics, protein folding, differential equations, and probability. Consequently, the ideal candidate would have a Ph.D. in either biology, mathematics, physics, computer science, engineering, or statistics with a background and interest in at least one of the other areas.
The researcher will collaborate closely with the PIs (Drs. Michael Gilchrist and Russell Zaretzki) on this project but will potentially have time to collaborate on other research projects with the PIs. In addition, the researcher will have opportunities to interact with other faculty members in the Division of Biology as well as researchers at the National Institute for Mathematical and Biological Synthesis (http://www.nimbios.org).
Review of applications begins immediately and will continue until the position is filled. To apply, please submit curriculum vitae including three references, a brief statement of research background and interests, and 1-3 relevant manuscripts to mikeg[at]utk[dot]edu.
Why computer labs should never be controlled by individual colleges/departments
Some time ago in academia we realized that it didn’t make sense for individual scientists or even entire departments to maintain their own high performance computing resources. Use of these resources by an individual is intensive, but sporadic, and maintenance of the resources is expensive [1] so the universities soon realized they were better off having centralized high performance computing centers so that computing resources were available when needed and the averaging effects of having large numbers of individuals using the same computers meant that the machines didn’t spend much time sitting idle. This was obviously a smart decision.
So, why haven’t universities been smart enough to centralize an even more valuable computational resource, their computer labs?
As any student of Software Carpentry will tell you, it is far more important to be able to program well than it is to have access to a really large high performance computing center. This means that the most important computational resource a university has is the classes that teach their students how to program, and the computer labs on which they rely.
At my university [2] all of the computer labs on campus are controlled by either individual departments or individual colleges. This means that if you want to teach a class in one of them you can’t request it as a room through the normal scheduling process; you have to ask the cognizant university fiefdom for permission. This wouldn’t be a huge issue, except that in my experience the answer is typically a resounding no. And it’s not a “no, we’re really sorry but the classroom is booked solid with our own classes,” it’s “no, that computer lab is ours, good luck” [3].
And this means that we end up wasting a lot of expensive university resources. For example, last year I taught in a computer lab “owned” by another college [4]. I taught in the second class slot of a four slot afternoon. In the slot before my class there was a class that used the room about four times during the semester (out of 48 class periods). There were no classes in the other two afternoon slots [5]. That means that classes were being taught in the lab only 27% of the time or 2% of the time if I hadn’t been granted an exception to use the lab [6].
Since computing skills are increasingly critical to many areas of science (and everything else for that matter) this territoriality with respect to computer labs means that they proliferate across campus. The departments/colleges of Computer Science, Engineering, Social Sciences, Natural Resources and Biology [7] all end up creating and maintaining their own computer labs, and those labs end up sitting empty (or being used by students to send email) most of the time. This is horrifyingly inefficient in an era where funds for higher education are increasingly hard to come by and where technology turns over at an ever increasing rate.

Which [8] brings me to the title of this post. The solution to this problem is for universities to stop allowing computer labs to be controlled by individual colleges/departments in exactly the same way that most classrooms are not controlled by colleges/departments. Most universities have a central unit that schedules classrooms and classes are fit into the available spaces. There is of course a highly justified bias to putting classes in the buildings of the cognizant department, but large classes in particular may very well not be in the department’s building. It works this way because if it didn’t then the university would be wasting huge amounts of space having one or more lecture halls in every department, even if they were only needed a few hours a week.

The same issue applies to computer labs, only they are also packed full of expensive electronics. So please universities, for the love of all that is good and right and simply fiscally sound in the world, start treating computer labs like what they are: really valuable and expensive classrooms.
—————————————————-
[1] Think of a single scientist who keeps 10 expensive computers, only uses them a total of 1-2 months per year, but when he does the 10 computers aren’t really enough so he has to wait a long time to finish the analysis.
[2] And I think the point I’m about to make is generally true; at least it has been at several other universities I’ve worked over the years.
[3] Or in some cases something more like “Frak you. You fraking biologists have no fraking right to teach anyone a fraking thing about fraking computers.” Needless to say, the individual in question wasn’t actually saying frak, but this is a family blog.
[4] As a result of a personal favor done for one administrator by another administrator.
[5] I know because I took advantage of this to hold my office hours in the computer lab following class.
[6] To be fair it should be noted that this and other computer labs are often used by students for doing homework (along with other less educationally oriented activities) when classes are not using the rooms, but in this case the classroom was a small part of a much larger lab and since I never witnessed the non-classroom portion of the lab being filled to capacity, the argument stands.
[7] etc., etc., etc.
[8] finally…
A GitHub of Science? [Things you should read]
There is an excellent post on open science, prestige economies, and the social web over at Marciovm’s posterous*. For those of you who aren’t insanely nerdy** GitHub is… well… let’s just call it a very impressive collaborative tool for developing and sharing software***. But don’t worry, you don’t need to spend your days tied to a computer or have any interest in writing your own software to enjoy gems like:
Evangelists for Open Science should focus on promoting new, post-publication prestige metrics that will properly incentivize scientists to focus on the utility of their work, which will allow them to start worrying less about publishing in the right journals.
Thanks to Carl Boettiger for pointing me to the post. It’s definitely worth reading in its entirety.
_______________________________________________________
*A blog I’d never heard of before, but I subscribed to its RSS feed before I’d even finished the entire post.
**As far as biologists go. And, yes, when I say “insanely nerdy” I do mean it as a compliment.
***For those interested in slightly more detail it’s a social application wrapped around the popular distributed version control system named Git. Kind of like Sourceforge on steroids.