What if you could convert fractal shapes into music?
You can, and the technique is called sonification.
According to the Oxford Dictionary, “Sonification is the use of non-speech sound to convey quantifiable information or represent data, typically as the output from an electronic device; the conversion of data into sound for this purpose.”
In plain English, it means representing data as sound signals.
Although you can convert almost any data into sound, the data best suited to auditory analysis contains patterns that translate into recognizable musical structure. Effective sonification also requires conditions such as reproducibility and intelligibility.
Types of sonification techniques:
1. Auditory Icons
2. Parameter-Mapping Sonification
3. Model-Based Sonification
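Parameter-mapping sonification, the second technique above, can be sketched in a few lines. The example below is an illustrative Python sketch, not a reference implementation: the function names (`map_value`, `sonify`) and the 220–880 Hz pitch range are my own choices. It maps each value in a data series to the pitch of a short sine tone and writes the result as a WAV file using only the standard library.

```python
import math
import struct
import wave

SAMPLE_RATE = 44100  # samples per second

def map_value(value, lo, hi, out_lo, out_hi):
    """Linearly map `value` from the range [lo, hi] to [out_lo, out_hi]."""
    if hi == lo:
        return out_lo
    return out_lo + (value - lo) * (out_hi - out_lo) / (hi - lo)

def sonify(data, filename="sonified.wav", tone_seconds=0.25,
           freq_lo=220.0, freq_hi=880.0):
    """Render each data point as a short sine tone whose pitch tracks
    the value: the minimum maps to 220 Hz, the maximum to 880 Hz."""
    lo, hi = min(data), max(data)
    samples = []
    for value in data:
        freq = map_value(value, lo, hi, freq_lo, freq_hi)
        for i in range(int(SAMPLE_RATE * tone_seconds)):
            samples.append(0.5 * math.sin(2 * math.pi * freq * i / SAMPLE_RATE))
    with wave.open(filename, "w") as f:
        f.setnchannels(1)            # mono
        f.setsampwidth(2)            # 16-bit samples
        f.setframerate(SAMPLE_RATE)
        f.writeframes(b"".join(
            struct.pack("<h", int(s * 32767)) for s in samples))
    return filename
```

Calling `sonify([1, 5, 3, 9, 2])` would produce a five-tone file where the melody rises and falls with the data, which is exactly what makes patterned data more rewarding to listen to than noise.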
Representing data effectively calls for interdisciplinary collaboration among psychologists, computer scientists, engineers, physicists, composers, and musicians, along with specialists in the application areas.
A Brief History of Sonification
One of the first applications of sonification, the Geiger counter, was invented in 1908.
The instrument consists of a tube (the sensing element that detects radiation) and processing electronics that display the result. When radiation ionizes the gas in the tube, a pulse of current is produced, which in turn produces an audible click. Before 1928, the instrument could detect only alpha particles. Later, Geiger and Muller improved the counter so that it could detect more types of ionizing radiation.
The next step in sonification was the invention of the optophone in 1913 by Dr. Edmund Fournier. The device detects black print and converts it into audible output with the help of selenium photosensors. Blind readers can use the device to identify letters as the optophone scans text and generates chords of tones.
In 1954, Pollack and Ficks conducted an experiment on transmitting information via auditory display. By combining sound dimensions, they found that subjects could register changes in multiple dimensions at the same time.
The earliest work on auditory graphing was done by Chambers, Mathews, and Moore in 1974. In the technical memorandum “Auditory Data Inspection,” they augmented a scatterplot with sounds that varied in frequency, spectral content, and amplitude modulation for use in classification.
Pulse oximetry gained popularity later: in the 1980s, people started using this method to monitor a person’s oxygen saturation. Pulse oximeters can sonify the oxygen concentration of blood by emitting higher pitches for higher concentrations, although clinicians do not always use the audio output because of the risk of adding to the audio stimuli already present in the medical environment.
Gregory Kramer founded the International Community for Auditory Display (ICAD) in 1992. It began as a forum for research on auditory display, including data sonification, and now unites researchers interested in using sound to deliver information.
In a Word
Sonification is an alternative way to convey information by turning data into non-speech sound, which opens up new accessible and engaging possibilities for data exploration. To represent data as sound, parameters such as volume, pitch, and rhythm are used. Sonification is becoming more and more popular, so it’s worth learning how the process works to make the most of it.
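As a rough illustration of how a single data value can drive all three of the sound parameters named above, the sketch below (the function name `note_parameters` and the specific ranges are hypothetical choices, not a standard) normalizes each value and derives a pitch, a volume, and a note length from it:

```python
def note_parameters(value, lo, hi):
    """Map one data value onto three sound dimensions:
    pitch in Hz, volume in 0..1, and rhythm as note length in seconds.
    The ranges are illustrative choices for this sketch."""
    t = 0.0 if hi == lo else (value - lo) / (hi - lo)  # normalize to 0..1
    pitch = 220.0 + t * (880.0 - 220.0)  # larger values sound higher
    volume = 0.2 + 0.8 * t               # larger values sound louder
    rhythm = 0.5 - 0.3 * t               # larger values get shorter, busier notes
    return pitch, volume, rhythm

# Turn a small series into a list of (pitch, volume, rhythm) notes.
series = [3, 9, 6]
notes = [note_parameters(v, min(series), max(series)) for v in series]
```

Mapping several parameters at once is what lets a listener track more than one aspect of the data simultaneously, which is precisely the effect Pollack and Ficks observed.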