Saturday, July 14, 2012

icad 2012
i attended icad (international conference on auditory display) this year in atlanta. i participated in the student think tank and the sonification contest, and also submitted an extended abstract with colleagues on what we have been doing since the beginning of my phd.
after a very long trip from graz to atlanta, i had a great day at the student think tank. it was a great platform to share what i have been doing so far and to get feedback from sonification experts and other students in the field. learning about the other students' projects was also mind-blowing and gave me a much broader perspective.


a couple of projects were very similar to mine but used a different approach. the first was "visual and auditory representations of earthquake interactions" by chastity aiken at georgia tech. she produced animations with time-compressed sounds to demonstrate both immediate aftershocks and remotely triggered tremors related to the tohoku-oki, japan earthquake. she used audification of seismic data, which is not a new concept. recordings of nearby earthquakes and tremors contain frequencies of up to 100 Hz, which are at the lower end of the audible spectrum (20 Hz - 20 kHz). one of the audification techniques she used to represent such data is to play back the recording at a faster speed, up to 500 times faster (i.e. time compression). this also helps scientists go through a larger amount of data in a shorter time. the earthquake parameters are mapped to sound properties, and a voltage-controlled oscillator is used to represent low and high frequency components.
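to make the time-compression idea concrete, here is a minimal sketch (my own, not aiken's actual code) of audification in python: the raw seismic samples are written out as audio at a sample rate 500 times the native sampling rate, which shifts sub-audible frequencies up into the hearing range. the seismogram below is a synthetic stand-in, since real waveform data is not included here.

```python
import numpy as np
import wave

def audify(trace, native_rate_hz, speedup=500, out_path="quake.wav"):
    """Audify a seismic trace by time compression: write the samples
    to a WAV file whose sample rate is `speedup` times the native
    sampling rate, so a 2 Hz ground motion is heard at 1 kHz."""
    # normalize to the 16-bit PCM range
    peak = np.max(np.abs(trace)) or 1.0
    pcm = (trace / peak * 32767).astype(np.int16)
    with wave.open(out_path, "wb") as wav:
        wav.setnchannels(1)
        wav.setsampwidth(2)  # 16-bit samples
        wav.setframerate(int(native_rate_hz * speedup))  # e.g. 100 Hz * 500 = 50 kHz
        wav.writeframes(pcm.tobytes())

# synthetic stand-in for a seismogram: a decaying 2 Hz oscillation
rate = 100  # Hz, roughly the upper limit mentioned for nearby recordings
t = np.arange(0, 60, 1 / rate)  # one minute of data
trace = np.exp(-0.1 * t) * np.sin(2 * np.pi * 2 * t)
audify(trace, rate, speedup=500)  # one minute collapses to 0.12 s of audio
```

note that no resampling is needed: time compression is just a matter of declaring a higher playback rate for the same samples.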

another related project was "climate variations: solar insolation, ecological abundance and stable isotopes" by danny goddard. he also used audification. geological data sets are also huge, and in order to find events he needs to go through millions of years of data within a couple of minutes. he created a data-driven timbre tuning system. to describe the shape of the earth's orbit he mapped eccentricity to pitch, and he mapped temperature to panning degree (i.e. colder temperatures to the right channel, warmer to the left). he also used sine wave oscillators and mapped isotopes to pitch.
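the parameter-mapping part of this can be sketched as follows. this is my own illustrative version, not goddard's implementation: each data point becomes a short sine segment whose pitch follows eccentricity and whose stereo pan follows temperature (warm to the left, cold to the right, as described above). the frequency range and segment length are arbitrary choices.

```python
import numpy as np

def sonify(eccentricity, temperature, seg_dur=0.1, sample_rate=44100):
    """Parameter-mapping sonification sketch: one short sine segment
    per data point; eccentricity -> pitch, temperature -> stereo pan."""
    def norm(x):
        x = np.asarray(x, dtype=float)
        span = x.max() - x.min()
        return (x - x.min()) / span if span else np.zeros_like(x)

    freqs = 220 + 660 * norm(eccentricity)  # map to 220-880 Hz (assumed range)
    pans = 1.0 - norm(temperature)          # 0 = fully left (warm), 1 = fully right (cold)
    n = int(seg_dur * sample_rate)
    t = np.arange(n) / sample_rate
    segments = []
    for f, p in zip(freqs, pans):
        tone = np.sin(2 * np.pi * f * t)
        # equal-power panning between the two channels
        left = tone * np.cos(p * np.pi / 2)
        right = tone * np.sin(p * np.pi / 2)
        segments.append(np.stack([left, right], axis=1))
    return np.concatenate(segments)  # stereo signal, shape (points * n, 2)

# three made-up (eccentricity, temperature) samples
stereo = sonify(eccentricity=[0.01, 0.03, 0.05], temperature=[14.2, 13.1, 15.0])
```

going through "millions of years within a couple of minutes" then comes down to choosing `seg_dur` so that the whole record fits the listening budget.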

both of these projects are run by domain scientists who are new to the sonification field, which makes their approach the total opposite of ours: we try to integrate sonification into climate scientists' world. the advantage of their approach is that there is no cultural barrier, because they are the scientists who found sonification interesting and are trying to use it in their own field. the only problem they have to solve is which sonification methodologies work best for them. we, on the other hand, have the problem the other way around: we have to do systematic research to figure out what our users' needs are before even trying any sonifications.

i also got a lot of feedback about our sonification project. i was praised for using a user-centered design approach and for not sonifying before knowing the users' needs. i also got a lot of advice on how to do it efficiently and effectively, to encourage users to make use of sonification within their workflow.

on the second day i took part in two workshops, one on sonification using chuck and the other on sonification using matlab. both were good workshops, but they were too short to implement anything; we only went through the basic sonification capabilities of each tool. for the chuck workshop perry cook put a document together: course notes and code examples