Listening In on Scientific Data
17:37 minutes
When it comes to analyzing scientific data, there are the old standbys, plots and graphs. But what if instead of poring over visuals, scientists could listen to their data—and make new discoveries with their ears? That’s one of the goals driving the emerging field of sonification, the process of transforming data into sound. Sonification specialist Robert Alexander talks about his work giving voice to solar wind with the University of Michigan’s Solar Heliospheric Research Group. And cellist and composer Margaret Schedel explains why scientists at Brookhaven National Lab may soon be listening to nanomaterials.
Below, you can listen to Robert Alexander’s “Music from the Sun,” a musical sonification of solar wind data collected by the Advanced Composition Explorer Satellite during 2003.
Margaret Schedel is a composer and cellist. She’s an Associate Professor of Music and Director, Consortium for Digital Arts, Culture, and Technology at Stony Brook University in Stony Brook, New York.
Robert Alexander is a sonification specialist with the University of Michigan’s Solar and Heliospheric Research Group, and the director and chief innovation officer of the Munger Graduate Residences.
IRA FLATOW: This is Science Friday. I’m Ira Flatow.
Back, oh, two months ago– remember, it seems like yesterday? LIGO scientists announced they had finally done it. They had spotted gravitational waves. And they made the announcement with a sound.
That’s the sound of two black holes bashing into each other 1.3 billion years ago. But that sound came later, because the actual discovery of gravitational waves was made the usual way: visually, when a LIGO post-doc spotted a telltale squiggle on his computer screen.
But what if LIGO scientists had been able to listen to their data and not just look at it? Imagine that chirp sounding out over a set of speakers in the LIGO lab. Somebody’s eating lunch, says whoops. Wow, something came through. I just heard it.
It could happen sooner than you think. My next guests are experts in sonification. And that is the art and science of turning data into sound.
And they’ve been helping scientists see, or I should say hear, their data in a whole new way. Robert Alexander is a sonification specialist with the University of Michigan Solar and Heliospheric Research Group, and director of the Munger Graduate Residences in Ann Arbor, Michigan. Welcome to Science Friday.
ROBERT ALEXANDER: Hi, thanks for having me.
IRA FLATOW: You’re welcome. Margaret Schedel is a composer and a professor of music at Stony Brook University in Stony Brook, Long Island. Welcome to Science Friday.
MARGARET SCHEDEL: Thanks.
IRA FLATOW: All right, Robert, let’s start with the basics. What is sonification?
ROBERT ALEXANDER: Sure thing. So I could start with a definition being that sonification is the translation of information into sound for the purposes of then conveying some new knowledge. And that’s a little bit formal for my liking. So if you just think about any way in which we can use our ears to more fundamentally understand the world around us.
So it could be a parameter mapping, whereby we’re taking something like the brightness of a star. We’re mapping that to a melody. So the melody goes up and down as the brightness of the star then goes up and down in a binary star system.
Or it could be something like audification. Audification is the direct translation of data samples to audio samples. When you audify something, it’s essentially like you’re just pushing play on the data plot. And you’re listening directly to the data.
And then you could start to hear some of the richness and some of the nuances in these data that we stream down from things like satellites. And it gives you a really fundamental understanding that you might not receive from just looking at it with your eyes.
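To make the audification idea concrete, here is a minimal sketch in Python, assuming the measurements are already loaded as a plain numeric array (the series below is synthetic, not actual satellite telemetry). Each data point becomes one audio sample, so years of measurements play back in seconds:

```python
# Minimal audification sketch: treat each data sample as an audio sample
# and write the series out as a WAV file.
import numpy as np
import wave

def audify(samples, path="audified.wav", rate=44100):
    """Normalize a 1-D data series to [-1, 1] and write it as 16-bit mono audio."""
    x = np.asarray(samples, dtype=float)
    x = x - x.mean()                      # remove any constant offset
    peak = np.max(np.abs(x))
    if peak > 0:
        x = x / peak                      # scale peaks to full range
    pcm = (x * 32767).astype(np.int16)    # convert to 16-bit PCM
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)                 # 2 bytes = 16 bits per sample
        w.setframerate(rate)
        w.writeframes(pcm.tobytes())

# Example: a slow oscillation plus noise, standing in for magnetic-field data.
t = np.linspace(0, 10, 10 * 44100)
audify(np.sin(2 * np.pi * 3 * t) + 0.2 * np.random.randn(t.size))
```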
IRA FLATOW: Well, let me explore that a little more. Margaret, why is listening to data better than actually looking at it?
MARGARET SCHEDEL: So I wouldn’t say it’s necessarily better, but it lets us use a different part of our brain. So there’s a reason that we evolved to have both eyes and ears. So why not use that when we’re doing science?
IRA FLATOW: Ah. So Robert, I understand you’ve been working with scientists at the University of Michigan to make sounds from solar wind. And let’s listen to some of that solar wind.
That is the sound of solar wind. Robert, first tell us, what is solar wind?
ROBERT ALEXANDER: It is indeed. So the solar wind is a stream of highly charged particles that’s constantly flowing outward from the sun. It’s something akin to the sun’s atmosphere that you could say is constantly blowing past earth. And it creates things like the Aurora Borealis, the Northern Lights. That dazzling light display that we get up in the hemispheres.
And when we do something like send out astronauts, when we send out satellites, we need to know when there’s something like a large coronal mass ejection that’s shooting out particles and we have to then bring in the shields on satellites. Or if there’s an astronaut out, bring them back into the spacecraft. So our ability to predict and forecast space weather is very important in terms of the sort of exploration work that we want to do.
IRA FLATOW: That sounded to me like the end of the record. An old vinyl record, and the stylus is going round and round.
ROBERT ALEXANDER: Yeah. And so what we’re hearing there actually, if you think about the sun, it’s this very dynamic, turbulent system– I mean, it’s the source of all life on earth, which is really humbling. But what we’re listening to there is the dynamism in the data. We’re listening to the magnetic fluctuations.
And so we’re listening– it’s the Wind satellite, which has a magnetometer instrument. And we’re listening essentially to thousands and thousands of data samples. And it sounds actually a bit like wind hitting a microphone. And that’s kind of cool, that it actually relates back to something like terrestrial wind here on earth.
IRA FLATOW: And you’re a– I know that you’re a composer by training. And before this sun stuff you were making music based on dancers’ movements. So are you just thinking about sound all the time and then you thought about this?
ROBERT ALEXANDER: Yeah. Absolutely. I mean, even at a very young age I liked to just drop rocks into an air conditioner just to see how they would sound. And I was very much enthralled by the world of audio.
And now it’s definitely grown into thinking about my time in higher education. Thinking about how can we use sound as a medium to explore the world around us? How can we use sound to supplement and augment the creative process? And then how can you infuse the scientific process with the creative process and kind of achieve some unexpected results oftentimes?
IRA FLATOW: Well, let’s talk about unexpected results, or how you massage the sound. Because not too long ago, I know you were listening to some solar wind. And you heard this.
Wow. What is that whooshing sound?
ROBERT ALEXANDER: Yeah. So that is an ion cyclotron wave storm. At least that’s what we’ve discovered through the research that we then followed up and conducted. And we published it in an astrophysical journal just recently.
And so if you can imagine that first sound, which is kind of bland, you know, vanilla. It was just wind hitting a microphone, not that much going on.
And then there was this wonderful pause as I was listening to the data. And all of a sudden that whoosh just emerged out of nowhere. And to me, that was music to my ears. Because these types of features, when you hear them, you immediately recognize that there’s something different, right? And we start to build up this auditory vocabulary, a sort of auditory palette.
So that then– I mean, you could sic a supercomputer on this. You could run an automated statistical signal processing algorithm and say, find all these kind of things. But it’s hard to tell it, find something that’s novel that may also be of scientific interest.
So as I was listening through years of this type of data, that’s the kind of skill that I refined. And that’s why this particular feature stood out because it’s so clear and so unique.
IRA FLATOW: So you heard something by listening to it that the other scientists have missed in their research by not listening to it?
ROBERT ALEXANDER: Yeah. And actually there’s just so much data. A lot of the best practice involves doing summary plots.
So we take a year’s worth of information. And then we run the spectrum so we get a sense for it. Yep, that looks like a traditional– oh, we see a spike down here on the low frequency. Oh, we have a spike on the high frequency.
But what you miss is a lot of the subtle nuance. And that’s just because the scientists are inundated with an absolute flood of information right now.
IRA FLATOW: When you heard that, did you have to go tell, hey, NASA, listen to what I discovered? Did you have a eureka moment like that?
ROBERT ALEXANDER: I was actually sitting at NASA Goddard Space Flight Center, so I had the opportunity to just turn to one of my colleagues right next to me. And I was actually compiling a list of features.
So there were 610 features that I had identified as potentially interesting. We boiled that down to 10. And then this was number one on my list.
IRA FLATOW: Wow. Now Margaret, you’ve been collaborating with the team over at Brookhaven National Laboratory. A team that includes your husband, Kevin Yager. First explain what you were doing over there at Brookhaven.
MARGARET SCHEDEL: Right. So I’m working with my husband. And he works at the Center for Functional Nanomaterials, where they shoot x-rays at little tiny particles to try to figure out what they might be useful for.
IRA FLATOW: And so the x-rays bounce off?
MARGARET SCHEDEL: So the x-rays– yeah. So instead of a traditional x-ray like you might think of with your hand, and you shoot the x-ray through it. And it doesn’t go through where your bones are. And you get the picture of your hand. The x-rays, in fact, scatter off the materials and then are detected.
IRA FLATOW: And then you decided, gee. Maybe these make a sound when this happens?
MARGARET SCHEDEL: Yes. So I have done a lot of sonification of data before this. And I was trying to understand really exactly what my husband was doing when he went off to work. And he was explaining how they scatter off.
And then he says, and then we get a Fourier transform of the real space density distribution of the atomic structure. And I was like, oh. OK. Cool.
And he says, wait. Why do you know about Fourier transforms? And I say, because we use them all the time in computer music. And so it was a really, really natural way to do this sonification was using this sort of formula that we have in common in our fields.
IRA FLATOW: So let’s listen to the sound of what happens when x-rays bounce off a metal alloy.
Wow. And that means something to you.
MARGARET SCHEDEL: It does. So the high pitch means that the atoms are spaced really closely together. And the fact that they are sort of constant means that pattern is pretty regular.
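As an illustration of the kind of mapping Schedel describes (a hypothetical sketch, not the actual Brookhaven software), one simple choice is to turn the scattering wavevector q = 2π/d into pitch, so tighter atomic spacing sounds higher and a regular structure holds a steady tone:

```python
# Hypothetical parameter mapping: atomic spacing -> scattering wavevector -> pitch.
import numpy as np

RATE = 44100

def spacing_to_tone(d_nm, duration=0.5, base_hz=220.0):
    """Map an atomic spacing in nanometers to a sine tone.

    The scale is arbitrary: the mapping simplifies to base_hz / d_nm,
    so a 1 nm spacing lands at 220 Hz and a 0.5 nm spacing at 440 Hz.
    """
    q = 2 * np.pi / d_nm                  # reciprocal-space magnitude
    freq = base_hz * q / (2 * np.pi)      # pitch grows as spacing shrinks
    t = np.linspace(0, duration, int(RATE * duration), endpoint=False)
    return np.sin(2 * np.pi * freq * t)

# A regular structure gives a steady pitch; a drifting spacing sweeps the pitch.
regular = np.concatenate([spacing_to_tone(0.5) for _ in range(4)])
drifting = np.concatenate([spacing_to_tone(d) for d in np.linspace(0.5, 0.3, 4)])
```

The resulting arrays could then be written to disk with an audification helper like the one sketched earlier.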
IRA FLATOW: Now I want to play a little bit from the same metal alloy. But this time it doesn’t sound quite the same.
Why do they not sound the same?
MARGARET SCHEDEL: Right. So if you can imagine that you have these x-rays. And they’re being generated in this machine that is about a half a mile in diameter. And then it has to go down a couple other machines including vacuums, and lasers, and freezers.
And then you’re shooting it at something which is 125 thousandth the width of a human hair. And then trying to catch the things that bounce off of it. Things like misalignment happen a lot. And so what happens here is somewhere along the way something wasn’t in the right place and the detector didn’t detect anything.
IRA FLATOW: Oh. So you could– so it’s very easy just to listen to it instead of having to look at all the squiggles on graphs, and paper, and stuff like that, or an oscilloscope and things.
MARGARET SCHEDEL: Exactly. And I like to imagine that they’re pouring chemicals back and forth in beakers. And they listen and, oh. I better go fix the machine.
IRA FLATOW: Does that really– did that really happen? Did you tell them, hey. I heard this. And there may be something wrong with the alignment?
MARGARET SCHEDEL: Well, the new machine, what we’re affectionately calling the Frankenline Beamstein, will be sonified in real time, so that the scientists can be looking at the computer or preparing samples and then hear when things are going well or going poorly, just like in a hospital.
IRA FLATOW: Now, I know that both of you are composers. So I know that you’re listening all the time for these sounds. What attracts you, Margaret, to these sounds as a musician?
MARGARET SCHEDEL: So I’ve been doing computer music since I was about 15. And I go to a lot of conferences and hear a lot of music. And there’s something about these sounds that’s pretty unique. So I’m hoping to use them in a musical composition in addition to the scientific research.
IRA FLATOW: Sort of like Cage’s music he used to have years ago.
MARGARET SCHEDEL: And Robert, has nature ever surprised you with a really beautiful sound? You sonify a dataset and out comes this gorgeous sound?
ROBERT ALEXANDER: Well, actually, even the very first moment that I sonified a piece of data and was able to actually just listen to the raw hum– that was just such an awe-inspiring moment for me, because I actually wasn’t sure if my algorithm was just a little bit wrong.
Because I heard this hum in every single file that I was generating and I was like, all right. I have to go back to the drawing board. I’m probably doing this wrong.
And then actually crunching some numbers and recognizing I’m actually listening to the rotational period of the sun. And not only that, I can hear harmonics in the solar magnetic fields. I’m listening to resonances. And so I’m hearing now an octave, a fifth, a musical third.
So to be listening directly to data and to be hearing musical intervals that emerge from this data– that was so exciting for me as a composer, just to think, you know, now what else can I listen to? And certainly there have been so many instances where you hear whooshes and warbles.
And working with scientists in sonifying things like coronal mass ejections, so massive explosions. And they actually sound like massive explosions. So it’s always fun to sit in a room with them and just kind of gather around the latest explosion that we find.
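For reference, the intervals Alexander mentions fall out of simple harmonic ratios: if the fundamental is the solar rotation frequency, consecutive harmonics relate as

```latex
f_n = n\,f_{\mathrm{rot}}, \qquad
\frac{f_2}{f_1} = 2 \ (\text{octave}), \qquad
\frac{f_3}{f_2} = \frac{3}{2} \ (\text{perfect fifth}), \qquad
\frac{f_5}{f_4} = \frac{5}{4} \ (\text{major third}).
```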
IRA FLATOW: I’m Ira Flatow. This is Science Friday from PRI, Public Radio International, talking with the two scientist-composers, Margaret Schedel and Robert Alexander. Do the scientists look a little askance at you guys when you say these things?
MARGARET SCHEDEL: I think that they used to.
IRA FLATOW: You can talk, we’re amongst friends here.
MARGARET SCHEDEL: They used to a lot more. But sonification, I think, is sort of entering their vocabulary now. Especially we’re in this era of big data and they’re just scrambling to find something to help them understand.
And our ears are really good at picking up regular patterns. We hear them as pitch. And computers have sound cards built-in. So this is something that’s an accessible way to add knowledge about this big data that they’re all completely obsessed with.
IRA FLATOW: Well, that’s an interesting point you make. Because we are in this era of big data. Big data is everything. And perhaps we’ve reached the limit of what visually we can see and make a pattern out of. But now our ears take over.
ROBERT ALEXANDER: Absolutely. I know that the initial inkling is to just think, like, oh, we’re listening to things. Oh, it’s touchy-feely. Isn’t sound subjective?
But I mean truly we are able to extract information from sound that we just can’t get through any other means. Like I don’t know, what is your favorite recent album that you’ve listened to?
IRA FLATOW: My recent–
ROBERT ALEXANDER: On the spot.
IRA FLATOW: My recent album. Well, I’m always just listening to some Broadway musicals. I like Broadway musicals.
ROBERT ALEXANDER: Oh, perfect. So now, if someone– if you were to just try to describe the Broadway musicals over the air, you could talk for an hour.
IRA FLATOW: Right.
ROBERT ALEXANDER: And just pushing play on the soundtrack instantly would convey a type of information that people just– they couldn’t get it otherwise.
And it’s that sort of qualitative richness that just lends so much to the entire scientific method and to the entire research process. And you just can’t get it from your eyes.
IRA FLATOW: Could we have had done– could we have done sonification before the advent of computers? Is it– are they critical for doing this?
MARGARET SCHEDEL: We did do sonification before computers. So Galileo actually did an experiment about the acceleration of balls. He dropped balls off a tower.
But he also built this inclined plane with bells on it. And they were spaced according to the quadratic formula. And because they rang in a regular rhythm, that meant that it was accelerating.
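The arithmetic behind Galileo’s setup: a ball starting from rest under constant acceleration covers a distance proportional to the square of the elapsed time, so bells placed at distances proportional to the square numbers ring at equal intervals,

```latex
d(t) = \tfrac{1}{2} a t^{2}, \qquad
d_n = \tfrac{1}{2} a\,(n\,\Delta t)^{2} \propto n^{2}
\;\Longrightarrow\;
t_n = \sqrt{\frac{2 d_n}{a}} = n\,\Delta t .
```

A ring every Δt seconds, a steady rhythm, is therefore the audible signature of uniform acceleration; an irregular rhythm would have meant the quadratic law was wrong.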
IRA FLATOW: That’s interesting. Because I remember back in the day, I’m thinking of biofeedback. When that was pretty hot about 30 years ago.
ROBERT ALEXANDER: Sure. Well, I’m actually doing some biofeedback sonification work. We’re doing a lot of experimenting with breath. So taking a user or someone who just really wants to relax.
And then you can sonify the breath in real time. Turn that into music. Turn that into the sound of whooshing wind.
And just the way in which it immediately enhances proprioception. So that is just to say the immediate effect that it gives to tune you into your own body is deeply relaxing. And this is something I’m going to continue to explore just because it’s been a lot of fun.
IRA FLATOW: Well, today we would call it mindfulness I think.
ROBERT ALEXANDER: Indeed it is underneath the umbrella of mindfulness. Certainly.
IRA FLATOW: Margaret, are you doing work with that too?
MARGARET SCHEDEL: I’m sort of doing the opposite. So I am making unpleasant sounds for people who have problems with their gait.
They’re going to listen to their own music library. But if they’re having problems with their gait, it’s going to distort their music. So they can’t hear it correctly until they walk correctly.
And this is working with people with Parkinson’s. But we’re also hoping that anyone that’s having problems with motility might be able to use this system.
IRA FLATOW: Wow. We want to thank you both very much, Robert Alexander, Margaret Schedel. We’re going to go out, Robert, with one of your compositions, “Music From the Sun.” Tell us a little bit about this piece as it starts up.
ROBERT ALEXANDER: Sure. So what we’re going to listen to is a data stream that was gathered by the Advanced Composition Explorer satellite. We’re going to listen to particle velocities. We’re going to listen to a cloud of voices that represents the temperature of the sun. And we’re going to hear a lot of whooshing as the solar wind does its thing.
IRA FLATOW: And you can hear Robert Alexander’s full track, “Music From the Sun,” at sciencefriday.com/sun.
Coming up, one of the trickiest organisms out in nature. They can mimic animals, make toxic chemicals. We’ll talk all about the orchid. Stay with us and listen.
Copyright © 2016 Science Friday Initiative. All rights reserved. Science Friday transcripts are produced on a tight deadline by 3Play Media. Fidelity to the original aired/published audio or video file might vary, and text might be updated or amended in the future. For the authoritative record of Science Friday’s programming, please visit the original aired/published recording. For terms of use and more information, visit our policies pages at http://www.sciencefriday.com/about/policies.
Annie Minoff is a producer for The Journal from Gimlet Media and the Wall Street Journal, and a former co-host and producer of Undiscovered. She also plays the banjo.