Season Spotter Major News

Sorry that it’s been so quiet on the blog this summer. We’ve been busy analyzing all the data you have helped produce and we’ve written and published the first Season Spotter paper!

In the paper, which was published in the science journal Remote Sensing, we describe the project and the data produced, and we show that the data is of good quality and useful. You can read the abstract (scientific summary), the whole paper, or a plain language summary I wrote about the paper. And you can find yourself on our list of contributors.

We now know which types of data from Season Spotter are of good quality, and we have created the post-processing software to turn your classifications into that data. We have also learned what doesn’t work so well (e.g. identifying grass seedheads) and what has been less than ideal (e.g. not taking advantage of the fact that the images are in sequences).

So we’ve got two major next steps:

  1. We’re going to revamp the classification interface. When we launched in July 2015, the Zooniverse Project Builder was still pretty simple. Now it’s more sophisticated and I think we can make many of the classification tasks much more efficient by asking questions about multiple images at a time, instead of just one or two at a time. To create the new classification interface, I’d love to have your feedback. I’ve put together a project called Season Spotter Sandbox, where we can try out different ways of doing classifications. Tell me what you like and what you don’t like in its attached Talk forum.
  2. We’re going to identify the science question(s) we’d like to address next with Season Spotter data. I’m personally leaning towards tree-circling classifications, so we can figure out how to connect different types of phenology data at different scales. In other words, we have data from ground observers, from PhenoCams, and from satellites, but it’s not always clear how to use them together. If we could calculate individual tree phenology from the PhenoCam images, we could connect the first two. But there are other possibilities. If you have questions you think we should ask with the Season Spotter data, please leave a comment below.

Thank you again for all your classifications. I’m looking forward to the next season of Season Spotter.

Posted in Project report, Research

It’s finally springtime in the Northeast

Last time I posted, I commented on the fact that winter temperatures in New Hampshire were almost 7°C warmer in 2016 than 2015, and that “spring 2016 is just around the corner.”

It turns out that spring has been a long time coming. While it isn’t a particularly late year overall, it does seem to be a particularly slow year. Take a look at the “greenness” data we derive from imagery from our PhenoCam overlooking the Boston Common. What you will notice is the much more gradual rise in greenness compared to previous years. In 2016, greenness began to trend upward in mid-March and isn’t going to peak until late May. By comparison, in 2015, greenness began to trend upward in late April and reached its peak by mid-May.

The pictures below compare April 19, 2015 (left) with March 19, 2016 (right) – they look pretty similar, despite the 2016 picture being a full month earlier.


And these pictures compare May 17, 2015 (left) with May 17, 2016 (right) – you can see that this year, we’re lagging behind last year on the same date by just a little.


Temperature is the main factor driving these differences; spring 2016 got off to a quick start in early April because March was much warmer in 2016 (monthly mean temperature of 5.8°C) than in 2015 (monthly mean temperature of 1.0°C). But this year we had a lot of cool weather in April, which continued into May, and that really slowed the rate of development: for example, the mean temperature for the first two weeks of May was 12°C this year, compared to 16°C last year.

That said, the last few days have finally started to feel like spring!

Interested in reading more? Check out the article I wrote last year for Arnoldia, the magazine of Harvard’s Arnold Arboretum, about using the Boston Common PhenoCam to track the phenology of urban trees.

Posted in Camera images, Science

Friday favorites: Grassland at dawn


Dawn at the Sevilleta Long Term Ecological Research Site in New Mexico highlights a patchwork of grasses and herbaceous plants.

Posted in Camera images

Season Spotter Jornada

Sorry for the lack of posts last week. I have been working intensely on all the data you have generated in Season Spotter and putting together a scientific paper. Thanks so much for helping get those fall images classified in time for the paper. I’ve been analyzing them yesterday and today and the data look awesome. More about autumn in a future post.

Meanwhile, we have another small task that would be great to complete soon. At Jornada (pronounced HOR-na-da) Experimental Range in New Mexico, there’s a dry grassland dotted with mesquite shrubs. A collaborator there has on-the-ground field data of when various grasses and the mesquite flower. We’d love to compare Season Spotter data with the field data to get a sense of how accurate the Season Spotter data is. Can we see flowers only when there are many of them? Or do we capture the whole flowering period?

We have Season Spotter Jornada images for most of 2013, 2014, and 2015, for a total of just under 1,000 images. This is peanuts compared to the spring and fall images, which had about 10,000 image pairs each. So hopefully we can zip through the Jornada images quickly. If you have a moment, click on over and classify a few. Thanks!


Posted in Field work, Project report, Research

Friday favorites: Cool spring

This week has been abysmally cool and wet in Massachusetts. Buds are eager to burst into leaves as soon as we have a little warmth at Harvard Forest. Here the imminent green of new spring leaves vibrates against the orange carpet of last year’s leaves.

Posted in Camera images

Results from tree outlining

In Season Spotter Image Marking, one of the tasks we ask you to do is to outline individual trees and tell us if they are broadleaf trees or needle-leaf ones.


It’s easier to see different trees at different times of year. For example, evergreens are easier to see in winter when nearby deciduous trees have lost their leaves. Likewise, deciduous trees whose leaves change color are easier to see in autumn. And different lighting conditions change the highlights and shadows around different trees, making some easier to see in one image and others easier to see in other images. So we’ve taken a handful of images from each site in each year, and we show these pictures to you.

Then we combine your classifications across images. Remember that we only ask you to outline three trees per image. But not everyone outlines the same three trees. In the end we get a good sampling of all the most easily delineated trees in the image.

Here is the view from the Arbutus Lake PhenoCam in Huntington Forest, New York:


And here’s what it looks like if we put everyone’s markings for this site on top of one another:


Here, white shapes are broadleaf tree markings and yellow shapes are needle-leaf ones. As humans, it’s pretty easy to pick out the major trees. We also see a lot of stray shapes. For example, sometimes people outline entire vegetated regions instead of single trees. This is probably because another question in Season Spotter Image Marking asks volunteers to do just that, and it gets a bit confusing.

What we really want is a single shape for each major tree. So I implemented a clustering algorithm that takes all these shapes and finds those that are most similar to one another. It groups all these shapes into clusters, and hopefully each cluster represents one tree. What’s tricky is that we have to define a threshold of just how similar the shapes must be to be in the same cluster. If we require that the shapes be super similar, then shapes around the same tree that are just slightly different don’t get grouped together. But if we are too lax about our clustering rules, then multiple nearby trees — especially small ones — get grouped together in a single cluster.

Once we have our clusters, we need to take all the shapes in that cluster and combine them into a single shape. I did this by taking a few of the smallest shapes in each cluster and taking their geometric union. This gives a conservative estimate of the extent of the tree crown. I did it this way because the next step is to run some automated algorithms on the resulting regions, and I wanted to make sure I wasn’t including any pixels that were outside the focal tree.
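The clustering-and-consensus steps above can be sketched in code. This is an illustrative reimplementation in Python, not the project’s actual software: shapes are represented as sets of pixels, similarity is measured as intersection-over-union (IoU), clusters are grouped by single linkage against a similarity threshold, and the consensus shape is the union of the few smallest shapes in each cluster.

```python
# Illustrative sketch of clustering volunteer tree outlines.
# A "shape" here is a set of (row, col) pixel coordinates.

def iou(a, b):
    """Intersection-over-union of two pixel sets (1.0 = identical)."""
    return len(a & b) / len(a | b)

def cluster_shapes(shapes, threshold=0.5):
    """Group shapes whose IoU exceeds `threshold` (single linkage).

    A strict threshold can split outlines of the same tree; a lax one
    can merge nearby trees -- the trade-off described in the post.
    """
    clusters = []
    for shape in shapes:
        merged = None
        for cluster in clusters:
            if cluster and any(iou(shape, other) > threshold for other in cluster):
                if merged is None:
                    cluster.append(shape)
                    merged = cluster
                else:  # shape links two clusters: merge them into one
                    merged.extend(cluster)
                    cluster.clear()
        clusters = [c for c in clusters if c]
        if merged is None:
            clusters.append([shape])
    return clusters

def consensus_shape(cluster, k=3):
    """Union of the k smallest shapes: a conservative crown estimate."""
    smallest = sorted(cluster, key=len)[:k]
    out = set()
    for s in smallest:
        out |= s
    return out
```

With two heavily overlapping rectangles and one far-away rectangle, the first two end up in one cluster and the third in its own, and each cluster collapses to a single consensus shape.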

Here’s what the clusters and resulting “consensus shapes” look like:


The next thing to do is to run our automated algorithms on each outlined tree and compare that with the results when we run it on the whole landscape. Traditional phenology measures have been on-the-ground and done on a tree-by-tree or plant-by-plant basis. And satellite phenology measures integrate across broad areas. This analysis will help link those two types of phenology measurements, so that we can scale-up from the ground observations and understand the satellite observations more biologically.

Posted in Camera images, Project report, Research

What is Temporal Mismatching?

There are many reasons why data on the timing of plant phenophases are important. One of these reasons is to provide data to support conservation efforts. Specifically, many species depend on the timing of when plants flower, leaf, and produce fruit for their reproduction and survival. As temperatures warm due to climate change, shifts in the timing of when plants flower and when pollinators emerge can result in what we call temporal mismatching.

Why is this important? If a pollinator emerges before its host plant blooms, or if a plant blooms before its pollinator emerges, this leaves the pollinator without food or the plant without a way to reproduce. The interactions between these species are complex, so new research continues to inform how climate change might result in temporal mismatching not only between plants and pollinators but also between other species. The data you provide through Season Spotter support this type of research.


Town of Washington Nature Trail

Virginia Master Naturalist volunteers install a native plant garden in the Town of Washington, Virginia. Photo: Marie Majarov

You can also support the conservation of pollinators in your community by planting a native plant garden. Native plants provide nectar, pollen, and seeds to the native species that depend on them. Some pollinators, like the monarch, are host-dependent: monarch larvae need milkweed to survive. Declines in milkweed populations in recent years have led to declines in the monarch population. So you can support monarch conservation by planting milkweed in your own garden, or support other native pollinators by planting other native plants.

Posted in Uncategorized

Spring Challenge more results

Last week I described some initial results from the Spring Challenge. I showed how we used individual classifications to build a dataset for a single site in a single year. And we discovered that using paired images 7 days apart and then smoothing the classifications gave us a good estimate of the “start of spring” date and “end of spring” date for that site in that year.

Since then, I’ve been comparing these “start of spring” and “end of spring” estimates with other estimates that we get that are automatically derived from our greenness time series. To get these automated estimates, we first take the daily greenness values and draw a smooth curve through them. Then we look at the total amplitude (height) of the greenness signal and pick the date where the smoothed curve passes through 20% of that amplitude. We call that the “start of spring.” We call the date where the smoothed curve passes through 80% of that amplitude the “end of spring.”
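The amplitude-threshold idea can be sketched in code. This is my own illustration of the method described above, not the actual PhenoCam processing software; it assumes we already have a smoothed daily greenness series whose rising (spring) limb increases roughly monotonically up to the seasonal peak.

```python
# Illustrative sketch: find the days where a smoothed greenness curve
# crosses 20% and 80% of its total amplitude ("start" and "end" of spring).

def spring_transition_days(days, smoothed_gcc, lo=0.20, hi=0.80):
    """Return (start_of_spring, end_of_spring) as interpolated day values.

    `days` is a list of day-of-year values; `smoothed_gcc` is the smoothed
    greenness on those days. Assumes the curve rises monotonically from
    its minimum up to its seasonal peak.
    """
    gmin, gmax = min(smoothed_gcc), max(smoothed_gcc)
    start_level = gmin + lo * (gmax - gmin)   # 20% of amplitude
    end_level = gmin + hi * (gmax - gmin)     # 80% of amplitude
    peak = smoothed_gcc.index(gmax)           # only scan the rising limb

    def crossing(target):
        # Walk day by day and linearly interpolate the crossing date.
        for i in range(1, peak + 1):
            g0, g1 = smoothed_gcc[i - 1], smoothed_gcc[i]
            if g0 <= target <= g1:
                frac = (target - g0) / (g1 - g0) if g1 != g0 else 0.0
                return days[i - 1] + frac * (days[i] - days[i - 1])
        return None

    return crossing(start_level), crossing(end_level)
```

On a toy linear greenness ramp, the 20% and 80% crossings fall exactly one-fifth and four-fifths of the way along the rise, which is a quick sanity check on the interpolation.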

We can plot our Season Spotter estimates on a curve, along with the automated estimates to get an idea of how the two compare. Here is a plot from 2014 using data from the “canadaOA” camera at Prince Albert National Park, Saskatchewan, Canada — the same site and year we looked at last time.

Each of the green dots represents the greenness measure for a single day. The black line is the smoothed line that is fit through the green points, and the gray region around it is our certainty range. The two orange squares are the automated estimates of “start of spring” and “end of spring.” The blue squares are the estimates of “start of spring” and “end of spring” from Season Spotter. Both the orange and blue squares may have horizontal lines coming out of them showing the likely range of dates for these estimates. The longer the line, the less confidence we have in exactly where our square lies. For all the squares, I’ve drawn dotted lines from them to the smoothed line so we can visually compare them more easily.

As we suspected from our analysis last time, the estimates from pairs of images 1 day apart and 3 days apart are closer to the middle of spring, where it’s easier to see the change in leaves. The estimates from the 7-day apart images look very good — even better than the automated estimates!

If we look at “start of spring” estimates from 7-day apart images from all the sites and years that we put into Season Spotter, we see a trend:

Here, each site has a different color and each rectangle is a year. So, for example, there are three purple rectangles showing three different years from the canadaOA camera. The lines coming out of the rectangles show us our certainty, as before. Going across is the “start of spring” from the automated method. And going up is the “start of spring” from Season Spotter classifications. The diagonal dotted line is the one-to-one line. If all our rectangles were on this line or scattered evenly around it, it would mean that the estimates from the automated method and from Season Spotter pretty much agree. Instead, we can see that Season Spotter regularly predicts an earlier spring than the automated method, because most of the rectangles lie below the one-to-one line. This suggests that we might want to tweak our automated process to use a lower threshold.

Posted in Project report, Research

Friday favorites: Something odd in Maryland


A turkey vulture takes a rest in front of the new NEON PhenoCam at the Smithsonian Environmental Research Center (SERC) in Maryland.

Posted in Camera images

Spring Challenge first results

Having all the pairs of spring images classified means I can now analyze them! In particular, I’m using these images to get a best estimate for the “start of spring” and the “end of spring”. These are metrics that are used in remote sensing — both using the PhenoCams to do automated greenness processing and by researchers using satellite imagery to understand earth’s vegetation.

Several weeks ago, I wrote about what images are in Season Spotter and how we will do the analysis. Read about it here, if you haven’t already.

Let’s look at some actual data from a camera called “canadaOA”, which is in Prince Albert National Park, Saskatchewan, Canada. First, here’s what the view looks like at this camera in spring:

Now here’s a graph showing the distribution of classifications you provided for this site in 2014 when image pairs were spaced one day apart:

The way to read this graph is as follows: Going from left to right, we have days during the spring, with some late winter and early summer days on either end to make sure we capture the full spring. Each green bar shows how many people classified the image that occurred second chronologically as the one having bigger or greener leaves. In other words, the green bars show confidence that spring changes were able to be seen by volunteers. The blue bars show how many people classified an image as “the images are the same” or said that the earlier image had bigger or greener leaves. In other words, blue bars show confidence that spring changes were NOT able to be seen between the two images. The red bars indicate the number of people who said at least one of the images was a bad image. If more than half of people said that there was a bad image, we don’t use any data from that pair. You can see, for example, that the longest red bar doesn’t have any green or blue bars above it. All the bars have been scaled so that the longest possible bar length means “everybody who saw this pair of images” and a bar half that long means “half of all people who saw this pair of images”.

The orange and purple dotted lines are the best guess “start of spring” and “end of spring” dates based on this data. In this case, the orange line is between May 21 and May 22, indicating that May 22 is the “start of spring”. And the purple line is between May 25 and May 26, indicating that May 25 is the “end of spring”.
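The per-pair tallying and the majority bad-image filter described above can be sketched as follows. This is an illustrative example with a hypothetical data layout, not the project’s actual pipeline: each image pair comes with a list of volunteer votes, and pairs where more than half the volunteers flagged a bad image are discarded.

```python
# Illustrative sketch: tally the votes behind the green/blue/red bars.

def tally_pair(votes):
    """votes: list of 'change', 'no_change', or 'bad' classifications
    for one image pair.

    Returns (frac_change, frac_no_change), each scaled by the total
    number of volunteers who saw the pair (so the bars are comparable
    across pairs), or None if a majority flagged a bad image.
    """
    n = len(votes)
    if votes.count("bad") > n / 2:
        return None  # majority said bad image: drop this pair entirely
    return votes.count("change") / n, votes.count("no_change") / n
```

For a pair where 6 of 10 volunteers saw change, 3 saw none, and 1 flagged a bad image, this yields bar fractions of 0.6 and 0.3; a pair where 6 of 10 flagged a bad image is dropped.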

Hmm, but it seems a bit odd that spring is only 4 days long. It generally takes longer than that for leaves to go from buds to leaves and then fully grow all the way out. Let’s look at the greenness curve for this site in 2014:

From the greenness curve, we see that start of spring should be around mid-May and that the end of spring isn’t until the very beginning of June. It looks like the May 21-25 period is the steepest part of the curve, where day-to-day change might be most obvious.

Let’s look next at the data from this same site and year where images were 3 days apart:


We see the same sort of pattern again, but now we have more confidence that things are still changing later on in the spring. Our estimate now is that we start to see change between May 19 and May 22, and that we stop seeing change between June 3 and June 6. This makes sense. It’s easier to see that something has changed three days apart than one day apart.

And if we look at the data from when images were 7 days apart, it looks like this:


Here, we start to see change between May 15 and May 22 and stop seeing change between June 1 and June 8. That seems pretty accurate based on the greenness curve. But those ranges are really big. We’d really like to know what day is the start of spring, not in what week it occurred.

We can get a day estimate from the week-apart images. To do so, we create a new dataset derived from the 7-day-apart one. We take the classifications from a pair of images 7 days apart, and consider those classifications valid for each day in that range. So each day consists of classifications from seven different image pairs (or fewer if they’re at the beginning or end of the time period we’re looking at). This also has the advantage of smoothing over pairs of images that were bad and those that were simply hard to tell. Our new smoothed dataset looks like this:
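The smoothing step just described can be sketched in code. This is an illustration with a hypothetical data layout, not the actual analysis script: each 7-day pair is stored as (start_day, end_day, votes), and its classifications are counted toward every day the pair spans, so each day accumulates votes from up to seven overlapping pairs.

```python
# Illustrative sketch: spread each 7-day pair's votes across the days
# it spans, then compute a per-day fraction of "change" votes.
from collections import defaultdict

def smooth_pairs(pairs):
    """pairs: list of (start_day, end_day, votes), where votes is a list
    of 'change' / 'no_change' classifications for that image pair.

    Returns {day: fraction of votes that saw change on that day}.
    """
    votes_by_day = defaultdict(list)
    for start_day, end_day, votes in pairs:
        # A pair taken start_day and end_day=start_day+7 covers 7 days.
        for day in range(start_day, end_day):
            votes_by_day[day].extend(votes)
    return {day: v.count("change") / len(v)
            for day, v in sorted(votes_by_day.items())}
```

With two overlapping toy pairs, days covered by only the "change" pair score 1.0, days covered by both score 0.5, and days covered by only the "no change" pair score 0.0, which is the smoothing effect described above.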


This dataset tells us that the start of spring is May 19 and the end of spring is June 5. This seems very reasonable when we look at the greenness time series.

I’ve done this same analysis for the seven sites and all 31 site-years that you made classifications for. And the same points seem to be true across them all:

  • People have a hard time seeing differences in the leaves when paired images are only one day apart. (For some sites, people almost never see changes between one-day-apart images.)
  • People are most able to see differences in the leaves when paired images are seven days apart.
  • Using a smoothed dataset derived from the one where paired images are seven days apart seems to give good estimates for start of spring and end of spring.

The next thing for me to do is to measure the uncertainty in these estimates for start of spring and end of spring. And then I am going to compare our estimates from Season Spotter with some automated estimates done by running algorithms directly on the greenness curves. I’ll talk about these analyses in a future post.

Posted in Project report, Research