Data science competition: Converting remote sensing into trees

Understanding and managing forests is crucial to understanding and potentially mitigating the effects of climate change, invasive species, and shifting land use on natural systems and human society. However, collecting data on individual trees in the field is expensive and time-consuming, which limits the scales at which this crucial data is collected. Remotely sensed imagery from satellites, airplanes, and drones provides the potential to observe ecosystems at much larger scales than is possible using field data collection methods alone.

We are running the second in a series of data science competitions where multiple groups attempt to use the same remote sensing data from low-flying airplanes to infer the locations, sizes, and species identities of millions of trees. This kind of collaborative data analysis challenge has proven highly effective in other fields for quickly improving methods for converting image data into useful information. This round of the competition focuses on exploring how methods generalize beyond a single forest.

There are two tasks in the current competition: 1) identifying individual trees in remote sensing images; and 2) classifying trees into species.

[Figure] Panel 1 (Identify): RGB image of trees from above, with individual trees outlined by blue polygons. Panel 2 (Classify): the polygons from Panel 1, color-coded by species identity.

Teams (or individuals) can participate in either or both tasks. Task 1 requires working with remote sensing data (RGB, LiDAR, and hyperspectral). Task 2 can either leverage this raw remote sensing data or use simplified tabular data provided by the organizers. Details of the different tasks will be available starting March 1st. To read more and sign up, check out the competition website:
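To give a feel for what Task 2 might involve when using the simplified tabular data, here is a minimal sketch of training a species classifier on per-tree features. The feature columns and species codes below are entirely made up for illustration; the actual data format will be described in the competition materials.

```python
# Hypothetical sketch of Task 2 (species classification) on simplified tabular
# data. All feature values and species labels here are synthetic; the real
# competition data will have its own columns and species codes.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Fake per-tree features: e.g. mean reflectance in a few hyperspectral bands
# plus canopy height from LiDAR (all values randomly generated).
n_trees = 300
X = rng.normal(size=(n_trees, 4))
species = rng.integers(0, 3, size=n_trees)  # three hypothetical species codes

X_train, X_test, y_train, y_test = train_test_split(
    X, species, test_size=0.25, random_state=0
)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
pred = clf.predict(X_test)
print(pred.shape)
```

With the real competition data, the synthetic arrays would be replaced by the provided per-tree feature table, and predictions would be submitted in whatever format the organizers specify.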

IDTreeS data science challenge

You can learn about the results of the first round of this competition in the summary paper and full PeerJ paper collection. We plan to write up the results of this round of the competition in a similar way, with a synthesis paper covering the competition, data, and comparison of different methods, and with each team given the opportunity to write up and publish associated short papers on the methods they used and the results they produced.

This challenge is supported by the Gordon and Betty Moore Foundation’s Data-Driven Discovery Initiative through grant GBMF4563 and the National Science Foundation through grant DEB-1926542. It uses data from the National Ecological Observatory Network in addition to data collected by the organizers. It is being organized by the Weecology lab, Machine Learning and Sensing lab, Data Science Research lab, and Stephanie Bohlman and Aditya Singh‘s labs all at the University of Florida and the Environmental Observation & Informatics program at the Nelson Institute for Environmental Studies at the University of Wisconsin.
