We have mentioned previously that we hope to use volunteer classifications to train a machine-learning algorithm to help us do further (and quicker!) analysis of high-speed electronic speckle pattern interferometry images like the ones in this project.
Our collaborator at Belmont University in Tennessee has made quite a bit of progress using the averages of all the classifications done so far. Here is a short clip of some of his results that show how his algorithm has identified antinodes and counted the fringes of each of them:
This doesn’t mean we don’t need you to keep working on our project! It is critical that we keep going in order to validate the algorithm and, if possible, improve it.
Thanks for your interest in the project – we will have more updates soon!
Our volunteers have added THOUSANDS of classifications so far this summer – THANK YOU! Take a look at this animation which strings together many of the frames in sequence with the average ellipses indicated in blue:
This animation has roughly DOUBLE the amount of data that we had last time we shared a view like this on our blog. We’re getting a lot closer to having enough classifications to do a deeper analysis of the vibrations in the steelpan.
Here’s a glimpse of what we’re heading towards with the classifications:
This animation strings together many (but not all) of the frames from one of the two data sets we are currently focusing on. With your help we will be able to complete the entire sequence and show a full analysis of all the frames.
In our last post, we showed some really promising average classifications – after removing obvious outliers. In this post, we want to illustrate the reasons that we need such a high number of classifications before we retire an image.
The above image is an example of a frame where there is one antinode region in the upper left corner that has not been identified enough times to be included in the analysis. While it is possible that this antinode may not ultimately be important to our overall analysis, we would like to see as many of these marked as possible. Our hope is that by getting more volunteers to classify this image, enough people will see that antinode and mark it with an accurate ellipse that we can include in an average.
The above image has two missing antinode regions – one is the strike note on the lower left side of the image, where all the fringes have merged together, and the other is centered at position (200, 150). It has become obvious that the strike note is the most often missed antinode. Often the vibration amplitude of the strike note is so high that there are no distinctly visible fringes. In those cases, the fringe count should be marked as 11, which is our way of saying “more than 10 fringes present”. It is quite apparent to us that the strike note CAN be found by many of our volunteers, so we believe that having more classifications will allow for all the antinodes on all the frames to be marked.
Additionally, the average ellipse that we do see is possibly not the best representation for that particular antinode. We are hopeful that with more classifications, the average ellipse would be a better representation.
In the above image, we are really happy to see the two antinodes identified by the classifications, but again, the strike note (on the left side) has not been marked enough times to be included in this analysis.
Here is an example of an image that has been seen at least 5 times, but there was not enough agreement on the position of the antinodes to include either of them in the analysis.
We wanted to take some time and update you all on what we are doing with all the classifications that our volunteers (that’s you) have been making.
Even though we only have one retired image in our project, we have been working on the analysis that we plan to do when the images are all retired. We have been able to identify individual antinode regions that you have marked with ellipses. When we eliminate outliers from each cluster, we take the remaining ellipses and calculate an “average” ellipse. Right now, we have done this analysis with images that have 6 or more classifications on them.
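To make the averaging step concrete, here is a minimal sketch of how outlier removal and ellipse averaging could work. This is an illustration under our own assumptions, not the team’s actual pipeline: ellipses are represented as (cx, cy, rx, ry) tuples (orientation angle omitted for simplicity), and the 30-pixel distance threshold is invented for the example.

```python
# A minimal sketch (not the project's actual pipeline) of averaging
# clustered ellipse classifications after removing outliers.
# Each ellipse: (cx, cy, rx, ry) - center and radii; angle omitted.

from statistics import mean, median

def remove_outliers(ellipses, max_dist=30.0):
    """Keep ellipses whose center lies within max_dist pixels
    of the cluster's median center (threshold is an assumption)."""
    mx = median(e[0] for e in ellipses)
    my = median(e[1] for e in ellipses)
    return [e for e in ellipses
            if ((e[0] - mx) ** 2 + (e[1] - my) ** 2) ** 0.5 <= max_dist]

def average_ellipse(ellipses):
    """Component-wise mean of the surviving ellipses."""
    return tuple(mean(vals) for vals in zip(*ellipses))

# Hypothetical cluster: four agreeing marks plus one stray outlier.
cluster = [(200, 150, 40, 25), (205, 148, 42, 27),
           (198, 152, 38, 24), (202, 151, 41, 26),
           (400, 300, 10, 10)]  # outlier, far from the median center
kept = remove_outliers(cluster)
print(average_ellipse(kept))  # -> (201.25, 150.25, 40.25, 25.5)
```

A real version would also need to handle the ellipse orientation, which requires a circular mean rather than a simple arithmetic one.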
Here are some examples of what it looks like when we overlay the average ellipses on top of the original frames:
What we really like about these results is that our volunteers seem to be capable of identifying all of the antinode regions in the images and that the average ellipse can be a good representation of the antinodes.
The following two examples tell a similar tale, but with a few subtle differences.
The above image shows a note centered at (290, 50) which is vibrating primarily with second harmonic motion, as evidenced by the two antinode regions. While the classifications made do pick out the two antinodes on this note, the average ellipse does not represent the antinode region as well as what we see in the top example on this page.
The last example, above, shows that the strike note is identified by our volunteers – it is the largest ellipse on the left side of the figure. However, the average ellipse is slightly larger than the actual antinode region, as can be clearly seen on the lower right side of that region.
What we really want you all to see is that your effort to help out with this project is absolutely paying off! We are getting the information that we hoped we would get – thank you for your hard work and please keep it up!
Yesterday we sent out an email newsletter to all our volunteers. Here is that newsletter:
Hello, fellow Zooniverse members,
The team working on Steelpan Vibrations has picked the project back up, and we are happy to keep you all updated. A biography of each of the team members can be found on the blog. Now let’s get on to the details.
We are still collecting classifications of our high-speed electronic speckle pattern interferometry images. So far, we have around 25,000 classifications from the Zooniverse community, but we have set a goal to get up to 100,000 by the end of the summer! The team plans to get this done by utilizing social media in an attempt to reach a wider audience.
You might be asking: What can I do to help? One big goal we set for ourselves is to get more people interested in our project. So, tell your family, your friends, anyone you can think of about our page on Zooniverse as well as other Zooniverse projects to get them more involved with this online community. This tactic can improve not only our project but many others.
We have recently taken an interest in a machine-learning algorithm that will make use of your classifications. Look for more information on this over the summer.
We are also in the process of recruiting talk moderators. We have messaged the most active people on our project and asked them to moderate our talk pages. If you are interested, feel free to message us and we will happily consider you for a moderator role.
We wanted to share with you some updates from the past few months of our project. The primary update is that we wanted to bring to your attention a recent paper that we published in the Proceedings of Meetings on Acoustics:
Our second update is that we are gearing up for our summer research session with a fresh group of students coming on to the project. We’re really excited to see what will come of the project, and we hope that you will watch along with us and help us classify all of our project images.
Thanks for all the work you do to help us understand why steelpans sound the way they do!
We’re so thrilled to see our 1,000th volunteer registered tonight! We are so thankful for all the classifications that you have made to help us with our project.
Although there is nothing particularly significant about the round number 1000, it does make the math a little easier: we have 1000 volunteers and a total of 13,557 classifications – so on average each volunteer has made 13.557 classifications.
Last week we looked a little bit at the classification data to see what our volunteers have been doing to help us get through all the images we need examined. The figure above is a histogram of the volunteers and the classifications they have completed.
You can see that over all users, the average number of classifications done was about 6.5 per user. As of last week there were over 1700 users who had done classifications, although that does not account for the same volunteer working sometimes logged in and sometimes not logged in, or any other case where the same person shows up as multiple users. (We believe those cases are rare.)
As is typical with citizen science projects like this, there are a large number of volunteers who try one or a few images and then never come back. The next graph shows the statistics for users who have done 5 or more classifications for us.
As you can see, there were 547 users who, on average, did almost 17 classifications each! We are SO thankful for ALL our awesome volunteers, but these are the users who are truly pushing this project forward. These users make up 30% of all visitors to our project, but account for 79% of all the classifications!
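For anyone curious how statistics like these are derived from a list of per-user classification counts, here is a small sketch. The numbers below are invented for illustration (they are not our project’s real data), and `user_stats` and the 5-classification threshold are just names chosen for this example.

```python
# A toy sketch (illustrative numbers, not the project's real data)
# of the per-user statistics described above: the overall mean, plus
# the mean and classification share of "core" users with 5 or more
# classifications.

def user_stats(counts, threshold=5):
    """Summarize a list of per-user classification counts."""
    core = [c for c in counts if c >= threshold]
    return {
        "users": len(counts),                     # total users
        "mean_all": sum(counts) / len(counts),    # mean over everyone
        "core_users": len(core),                  # users at/over threshold
        "mean_core": sum(core) / len(core),       # mean over core users
        "core_share": sum(core) / sum(counts),    # share of classifications
    }

# Hypothetical counts: many one-off visitors, a few heavy classifiers.
counts = [1, 1, 2, 1, 3, 8, 12, 25, 40, 7]
print(user_stats(counts))
```

Even in this toy data set, the pattern from the real project shows up: a minority of users above the threshold contributes the large majority of the classifications.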