Listening to uncertainty: Information that sings

(This post is based on the following article from Significance magazine.)

Alexander Graham Bell, the inventor of the telephone, wrote rapturously to his father in 1880 about his newly invented photophone. He was so excited that his wife had to convince him not to name their daughter “Photophone”. The device used photosensors connected to a telephone to turn light signals into
sound. Its main purpose was to communicate signals by modulating light, but he was intrigued by the idea of applying the technology to study the spectra of stars and sunspots by listening to the sounds produced by the photophone receiver. Bell was on to something. After all, we use our listening skills all the time for complex analysis and diagnosis tasks. How is the car engine running? Is the wall hollow? Where is that ambulance siren coming from? The Geiger counter represents radiation levels with audible clicks and remains popular a century after its invention. We use our eyes to absorb information; why should we not also use our ears?

We process auditory information differently than visual information. Sometimes that difference can give us new insights. Our eyes can distinguish colour and shape, but cannot follow fast movements. Our ears can distinguish more things: pitch, rhythm, loudness, spatial location and complex timbres. They can do so
with incredible resolution in time. Patterns that are hidden from our eyes can speak out loudly and clearly in sound.

Yet our exploration of data through sound remains eerily silent. There are dozens of different kinds of graphs, charts and maps that give visual information; yet with sound we have barely scratched the surface. The emerging field of sonification is the audio equivalent of visualisation. It explores the opportunities for using our powerful auditory sense to understand the world’s variation and uncertainty. To start with, vision is not always an option, which is why NASA’s MathTrax software
takes functions and datasets and turns them not into graphs but into sounds for maths and science students with vision disabilities. And a growing body of researchers has developed a variety of sonification displays which have the potential to enhance the way we think about data and models (see the box below). Sonification presents data as sound. It can be used to communicate the general shape and structure of data through the audio equivalent of scatterplots and boxplots. It can explore complex structures in time series – sound that varies over time is, of course, music. It can create virtual sound models of data for analysis and presentation. We shall look below at all of these. Finally, we describe an interactive sound-and-vision map that uses vision and audio together to convey the uncertainty around climate projections for the United Kingdom.
Hearing the shape of the data

Visualisations represent the different variables in a dataset by visual parameters such as position and colour. It is the same idea with sonification: data variables are represented by audio parameters such as timing and pitch. Let us look at a simple visualisation: you might want to see how fuel efficiency is related to engine size in popular US cars. A scatterplot would do the job. Engine size is represented by horizontal position and fuel efficiency is represented by vertical position. We end up with a pattern of dots (Figure 1a). We can see a downward trend – bigger engines get fewer miles per gallon – though the data forms a wide band and on the right there are several relatively fuel-efficient cars with large engines.

For our sonification, we represent engine size by timing and fuel efficiency by pitch, forming a sort of data song (Figure 1b) analogous to the visual one, with each note representing a car model just as points do in Figure 1a. The dots have simply become notes. And when you listen to the notes the same trend is audible: you hear a thick descending cascade with several stray high notes right at the end – those large-engined fuel-efficient cars again.
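To make the mapping concrete, here is a minimal Python sketch of the audio-scatterplot idea. The handful of (engine litres, miles per gallon) pairs is invented for illustration and is not the article’s dataset; the sketch simply sends engine size to note onset time and fuel efficiency to pitch, and writes the resulting “data song” to a WAV file.

```python
# A minimal audio-scatterplot sketch: engine size -> timing, fuel efficiency -> pitch.
# The car data below is hypothetical, chosen only to mimic the trend described in the text.
import numpy as np
from scipy.io import wavfile

SAMPLE_RATE = 44100

# Hypothetical (engine litres, miles per gallon) pairs; the last car is a
# large-engined but relatively fuel-efficient one, the "stray high note".
cars = [(1.6, 38), (2.0, 32), (2.4, 29), (3.0, 24), (3.5, 21),
        (4.6, 17), (5.0, 15), (5.7, 26)]

def scale(x, lo, hi, new_lo, new_hi):
    """Linearly rescale x from [lo, hi] to [new_lo, new_hi]."""
    return new_lo + (x - lo) * (new_hi - new_lo) / (hi - lo)

litres = [c[0] for c in cars]
mpgs = [c[1] for c in cars]

duration = 4.0    # total length of the data song, in seconds
note_len = 0.3    # each car sounds for 0.3 s
track = np.zeros(int(SAMPLE_RATE * (duration + note_len)))

for litre, mpg in cars:
    onset = scale(litre, min(litres), max(litres), 0.0, duration)   # engine size -> when the note plays
    freq = scale(mpg, min(mpgs), max(mpgs), 220.0, 880.0)           # fuel efficiency -> how high it is
    t = np.arange(int(SAMPLE_RATE * note_len)) / SAMPLE_RATE
    note = np.sin(2 * np.pi * freq * t) * np.hanning(t.size)        # a soft sine "dot"
    start = int(onset * SAMPLE_RATE)
    track[start:start + note.size] += note

track = 0.8 * track / np.max(np.abs(track))                         # normalise to avoid clipping
wavfile.write("data_song.wav", SAMPLE_RATE, (track * 32767).astype(np.int16))
```

Listening to the output of such a sketch, bigger engines arrive later and sound lower, so the downward trend in the scatterplot becomes a descending cascade of notes.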
Data that is sound already

Think of a recorded audio sample. It is nothing more than a series of sound pressure levels that vary over time – in statistical jargon, a time series. Computer speakers vibrate as instructed by that time series to produce the compressions and rarefactions of air that we hear as sound. What if we went the other way, starting with a time series and playing it as if it were a recorded sound? This technique, called audification, is in fact useful for exploring long time series datasets. This should come as no great surprise. Statistical time series analysis and audio signal processing theory are based on the same mathematical machinery, and the concepts that are useful for shaping sounds are the same ones used to understand time series. For the mathematics of how this works, see the box “Time series as a waveform”.

Some real datasets are already sound-like, and it can be useful to hear them to understand their structure. For instance, some processes are either already sound but inaudible to the human ear (bat calls) or are mechanical vibrations that act like sounds (heartbeats that we listen to by using a stethoscope). The audification process takes these sound-like datasets and turns them into sound that we can interpret using our well-adapted ability to make sense of natural vibrations, as opposed to the unnatural (if more musical) techniques described in the audio scatterplot example above.

Audification is by far the oldest member of the sonification family, and not just because of the stethoscope. Researchers in 1878 analysed the frequencies of muscle cell reactions using the recently invented telephone technology; ecologists identify bat species using black boxes that lower the pitch of bat calls to frequencies that we can hear. Earthquake researchers have listened to the fascinating sounds of an audified seismograph since the 1960s (example at sonification.de/handbook/media/chapter12/SHB-S12.3.mp3).
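As a rough illustration of audification, here is a short Python sketch. The time series is synthetic, standing in for something like a seismogram (the article’s seismograph example uses real data); audification simply normalises the samples and replays them at an audio sample rate, which compresses hours into seconds and shifts slow, inaudible oscillations up into the audible range.

```python
# A minimal audification sketch: treat a long time series as an audio waveform.
# The "seismogram" below is synthetic, used only to stand in for real data.
import numpy as np
from scipy.io import wavfile

fs_data = 100                               # original sampling rate of the data (Hz)
n = fs_data * 7200                          # two hours of samples
t = np.arange(n) / fs_data

series = 0.02 * np.random.randn(n)          # background noise
event = np.exp(-((t - 3600) / 60) ** 2) * np.sin(2 * np.pi * 2.0 * t)
series += event                             # a 2 Hz burst an hour in, inaudible at true speed

# Audification: every data sample becomes one audio sample, so playback at
# 44.1 kHz is 441 times faster than real time.
fs_audio = 44100
audio = 0.8 * series / np.max(np.abs(series))   # normalise to safe amplitude
wavfile.write("audified.wav", fs_audio, (audio * 32767).astype(np.int16))
print(f"{n / fs_data:.0f} s of data compressed to {n / fs_audio:.1f} s of audio")
```

Played back this way, two hours of 100 Hz data become roughly 16 seconds of sound, and the 2 Hz burst is heard near 880 Hz – the same pitch-shifting trick the bat detectors use, only in the opposite direction.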

The many soundscapes of data

Sonification is being applied all over the academic world to represent data in a novel way.
• Physics. A group at CERN, LHCsound, has been exploring high-energy particle collisions from the Large Hadron Collider; NASA has created a tool to sonify the cosmic background radiation in a variety of model universes with different physical constants than ours.
• Models and optimisation. Bielefeld University in Germany, a hub of complex sonification research, has led projects in sonifying machine learning algorithms so that users can more fully interact with neural network models and optimisations as they progress.
• Sport. Nina Schaffert, a human movement scientist at the University of Hamburg, leads research on training elite rowers by sonifying their acceleration. This gives rowers real-time feedback while they are still rowing without a distracting visual display.
• Public exhibits. The “Walk on the Sun” exhibit from Design Rhythmics Sonification Lab invites people to walk over the sun’s image, triggering synthesisers that sonify solar data based on their positions. The University of California at Santa Barbara recently exhibited “The Allobrain”, an interactive and multimedia virtual-reality world created from functional magnetic resonance imaging data.

Figures and the rest of the article can be found here.
