Wednesday, July 3, 2013
it has been seven years since i last went to chi, but i finally made it back this year. i attended the doctoral consortium, which provides a great opportunity to get feedback on your thesis in an interdisciplinary workshop under the guidance of a panel of distinguished researchers. since i work in an audio research institution i have great access to audio experts, but i rarely get any feedback from the hci community. we were about 15 doctoral students and 5 experts. as i presented my sonification work, i got very interesting advice on which aspects of my research i should focus on in the next stages of the project.
i have been comparing auditory perception with visual perception in most of my presentations. i usually bring up the fact that, regarding temporal resolution and frequency range, the ear exceeds visual capabilities by far: movies show 24 frames/sec, whereas the ear can resolve temporal microstructures on the order of a few milliseconds; and while we can hear pitches across roughly 10 octaves (20hz - 20khz), we can only see about one octave (400 - 700nm). these arguments, and more about sonification, were not enough to convince the panel of experts at chi. it was a great lesson for me to back up my arguments with more comparisons to visual displays, which are more familiar to the hci community.
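for anyone who wants to check that octave math, here is a quick back-of-the-envelope calculation of my own (the exact limits of hearing and vision are of course approximate):

```python
import math

# rough comparison of the frequency ranges of hearing and vision,
# expressed in octaves (doublings of frequency)

# hearing: roughly 20 Hz to 20 kHz
hearing_octaves = math.log2(20_000 / 20)

# vision: visible light spans roughly 400-700 nm in wavelength;
# in frequency terms the ratio is simply 700/400
vision_octaves = math.log2(700 / 400)

print(f"hearing spans ~{hearing_octaves:.1f} octaves")  # ~10.0
print(f"vision spans  ~{vision_octaves:.1f} octaves")   # ~0.8
```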
it was overall a great experience and i also enjoyed the interactivity exhibits. i look forward to next year's chi in toronto, if my papers get accepted.
Monday, July 30, 2012
smc 2012
last week i took part in the smc (sound and music computing) conference's summer school on product sound design in copenhagen. we had very interesting lectures on interaction design, sound design, and pure data. although i have some experience with all of them, this was a good way to gain more insight into aspects i'm not usually involved in; especially learning more about pure data from a guru was amazing. moreover, we had to build a prototype using dul sensors and sennheiser headphones. we created an interactive interface that uses head gestures to communicate: we hooked up one of the dul sensors to the headphones to capture motion as input data and used pure data to map the gestures to different sound parameters. for more details check out the project's website; we will also post videos and sound samples of the demo shortly.
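to give a flavor of the gesture-to-sound mapping, here is a rough python sketch of the idea (my own illustration, not the actual pure data patch; the tilt range and the frequency range are assumptions):

```python
# illustrative sketch: mapping a head-tilt reading from a motion sensor
# to a sound parameter. the value ranges below are assumed for illustration.

def map_range(x, in_lo, in_hi, out_lo, out_hi):
    """linearly rescale x from [in_lo, in_hi] to [out_lo, out_hi]."""
    x = max(in_lo, min(in_hi, x))            # clamp to the input range
    t = (x - in_lo) / (in_hi - in_lo)
    return out_lo + t * (out_hi - out_lo)

def tilt_to_pitch(tilt_degrees):
    # e.g. a nodding range of -45..+45 degrees mapped to 200..800 Hz (assumed)
    return map_range(tilt_degrees, -45.0, 45.0, 200.0, 800.0)

if __name__ == "__main__":
    for tilt in (-45, 0, 45):
        print(tilt, "degrees ->", round(tilt_to_pitch(tilt), 1), "Hz")
```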
although the conference was in the suburbs and we didn't have much time for sightseeing, i tried to see something new every day after the conference. one of my favorite places was the louisiana museum of modern art, which is open until 10 pm, so i managed to see the magnificent exhibits there. the picture you see above is a light and mirror installation by the japanese artist yayoi kusama. i also enjoyed the sculpture park at the museum: louisiana is located by the sea in a very green area, and the sculptures are by modern artists such as henry moore. the combination of rock, glass, and metal blends beautifully into the landscape, and it is an amazing place to be.
Saturday, July 14, 2012
icad 2012
i attended icad (the international conference on auditory display) this year in atlanta. i participated in the student think tank and the sonification contest, and also submitted an extended abstract with colleagues about what we have been doing since the beginning of my phd.
after a very long trip from graz to atlanta, i had a great day at the student think tank. it was a great platform to share what i have been doing so far and to get feedback from sonification experts and other students in the field. learning about the other students' projects was also mind-blowing and gave me a much broader perspective.
a couple of projects were very similar to mine but used a different approach. the first was "visual and auditory representations of earthquake interactions" by chastity aiken at georgia tech. she produced animations with time-compressed sounds to demonstrate both immediate aftershocks and remotely triggered tremors related to the tohoku-oki, japan earthquake. she used audification of seismic data, which is not a new concept. recordings of nearby earthquakes and tremors contain frequencies of up to 100 Hz, which are at the lower end of the audible spectrum (20 Hz - 20 kHz). one of the audification techniques she used to represent such data is to play it back faster, up to 500 times faster (i.e. time compression). this also helps scientists go through a larger amount of data in a shorter time. the earthquake parameters are mapped to sound properties, and a voltage-controlled oscillator is used to represent the low- and high-frequency components.
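to make the time-compression idea concrete, here is a minimal audification sketch of my own (not aiken's pipeline): data sampled at a low rate is simply written out as audio at a much higher sample rate, which compresses it in time and shifts its energy into the audible range. the 100 Hz sampling rate, the synthetic "seismogram", and the file name are assumptions for illustration.

```python
import numpy as np
from scipy.io import wavfile

data_rate = 100          # seismic sampling rate in Hz (assumed)
speedup = 441            # playback speedup: 100 Hz * 441 = 44.1 kHz audio rate

# stand-in for a real seismogram: a noisy low-frequency wobble, one hour long
t = np.arange(0, 3600, 1.0 / data_rate)
seismogram = np.sin(2 * np.pi * 0.5 * t) + 0.3 * np.random.randn(t.size)

# normalize to the 16-bit range and write at the compressed (audio) rate
audio = np.int16(seismogram / np.max(np.abs(seismogram)) * 32767)
wavfile.write("audified_quake.wav", data_rate * speedup, audio)
# the hour of "seismic data" now plays back in roughly 8 seconds
```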
another related project was "climate variations: solar insolation, ecological abundance and stable isotopes" by danny goddard. he also used audification. geological data is also huge, and in order to find events he needs to go through millions of years of data within a couple of minutes. he created a data-driven timbre tuning system. to describe the shape of the earth's orbit he mapped eccentricity to pitch, and he mapped temperature to panning degree (i.e. colder temperatures to the right channel, warmer to the left). he also used sine wave oscillators and mapped isotope values to pitch.
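here is a small python sketch of that kind of parameter mapping as i understood it (my own illustration, not goddard's code; the data ranges, frequency range, and tone duration are assumptions):

```python
import numpy as np

sr = 44100  # audio sample rate

def sonify_point(eccentricity, temperature, dur=0.25):
    """return a short stereo sine tone for one data point."""
    # eccentricity in [0, 0.06] mapped to 220..880 Hz (assumed ranges)
    freq = 220 + (eccentricity / 0.06) * (880 - 220)
    # temperature anomaly in [-5, +5] C mapped to pan: cold -> right, warm -> left
    pan = (temperature + 5) / 10          # 0 = fully right, 1 = fully left
    t = np.arange(int(sr * dur)) / sr
    tone = np.sin(2 * np.pi * freq * t)
    left, right = tone * pan, tone * (1 - pan)
    return np.stack([left, right], axis=1)

# string a few (eccentricity, temperature) points into one stereo signal
signal = np.concatenate([sonify_point(e, c) for e, c in
                         [(0.01, -3.0), (0.03, 0.0), (0.05, 4.0)]])
```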
both of these projects are run by domain scientists who are new to the sonification field, which makes them the total opposite of our approach: we try to integrate sonification into climate scientists' world. the advantage of their approach is that there is no cultural barrier, because they are the scientists who found sonification interesting and are trying to use it in their field; the only problem they have to solve is which sonification methodologies work best for them. we, on the other hand, have the problem the other way around: we have to do systematic research to figure out what our users' needs are before even trying any sonifications.
i also got a lot of feedback about our sonification project. i was praised for using a user-centered design approach and not sonifying before knowing the users' needs, and i got plenty of advice on how to do it efficiently and effectively so that users are encouraged to make use of sonification within their workflow.
on the second day i took part in two workshops, one on sonification using chuck and the other on sonification using matlab. both were good workshops, but they were too short to implement anything; we only went through the basic sonification capabilities of each tool. for the chuck workshop perry cook has put a document together: course notes and code examples
Monday, March 12, 2012
from san francisco to graz
as you might have read in my other blog i have had a crazy time in the last couple of months relocating from the bay area to graz in austria. i joined the institute for electronic music and acoustics (iem) at the university of music and performing arts graz for the project sysson – a systematic procedure to develop sonifications.
the goal of sysson is the systematic development of sonifications – from finding sound metaphors and creating a ‘sound library’ to aesthetic/scientific evaluation and finally the sonification tool. the procedure is developed with and tested on data from climate science.
i now have my first weeks of work behind me and it has been a great experience working with such amazing and smart people. i'm busy working on a sonification project for a conference, but we keep updating sysson's website and soundcloud account throughout the project. we would love to hear your thoughts and comments on the project and the whole process.
in the meantime i might not have enough time to make a lot of music, but at least i try to attend the concerts organized by iem and engineered by iem's team and kug. the university of music and performing arts has an amazing new house of music called mumuth. i was lucky to attend a concert at the ligeti hall of mumuth in the first week of my work: a performance of brice pauset's anima mundi. the acoustics of the building are wonderful and the sounds were clear and crisp. i also love the architecture of mumuth; it looks a lot more intriguing at night, with colorful lights changing the color of the building every twenty seconds, like a glass and metal aquarium. i look forward to attending the upcoming concerts at the cube in iem, which reminds me of the listening room at ccrma with its 24 speakers.
Sunday, October 2, 2011
soundcloud global meetup
if you haven't used soundcloud and you are a sound creator or listener, you are missing out on tons. soundcloud is a social platform for sharing sound, music, and any type of audio. you can record, upload, and share anything from sounds you capture in your surroundings to your own musical creations, whether you want to run your own alternative radio or simply share a conversation. what i love about soundcloud is that sound creators are not promoted based on their record label or studio; instead, it's a social network where people from all over the world listen to whatever sounds interest them most. if you are interested in adding your own features, soundcloud also has a fantastic api that lets you build your own plug-ins or apps.
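to give a rough idea of the kind of thing the api lets you do, here is a tiny sketch of searching for tracks over its http interface. the endpoint, parameters, and response fields are from my memory of the public api and may well have changed, so treat them as assumptions; you also need your own registered client_id.

```python
import requests

CLIENT_ID = "YOUR_CLIENT_ID"  # obtained by registering an app with soundcloud

# search for public tracks matching a query (endpoint/params assumed)
resp = requests.get(
    "https://api.soundcloud.com/tracks",
    params={"q": "field recording", "client_id": CLIENT_ID, "limit": 5},
)
resp.raise_for_status()
for track in resp.json():
    print(track.get("title"), "-", track.get("permalink_url"))
```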
good news for users and non-users of soundcloud alike: there will be a global soundcloud day this coming wednesday, with soundcloud events all over the globe. i helped a fantastic group of soundclouders in san francisco organize one in our city. if you would like to hear more about soundcloud, share your experience with it, and meet some people who love sharing sounds or using the soundcloud api, join us on wednesday october 5th 2011, 7pm-10pm @ the summit, 780 valencia.
we will feature soundcloud user installations and apps built with the soundcloud api, and we will be online on twitter under the #SCMeetupSF hashtag, so you can join us even if you're not at the summit.
the poster for the event was generously created by max poynton.
Tuesday, September 27, 2011
transitions 2011
don't miss these two great concerts at ccrma. every year when new students arrive there's a transitions concert to welcome them and let them hear what kinds of sounds are made at ccrma and what the possibilities are. bring your blankets and hang out in the ccrma courtyard on these two evenings of electronic music. there will be sixteen loudspeakers, four subwoofers, and twenty-seven squirrels that live under the lawn.
TRANSITIONS 2011 - night 1: soundscapes under the stars
Wednesday, 9/28, 8pm, CCRMA courtyard
https://ccrma.stanford.edu/events/transitions-2011-night-1-acousmatic
Acousmatic music by John Chowning, Gilles Gobeil, Jonty Harrison,
Fernando Lopez-Lezcano, Ake Parmerud, Hans Tutschku, and Unknown!
TRANSITIONS 2011 - night 2: live electronic music
Thursday, 9/29, 8pm, CCRMA courtyard
https://ccrma.stanford.edu/events/transitions-2011-night-2-live-electronic-music
Music by Chris Carlson, Bjoern Erlach, Mark Applebaum, John Granzow &
Hongchan Choi, Fernando Lopez-Lezcano, Mike Rotondo & Nick Kruge, Luke
Dahl, Locky, Cloud Veins.
Monday, September 26, 2011
sfemf 2011
the 12th annual san francisco electronic music festival took place on september 8-11 at sfmoma and the brava theater. it was the first time sfmoma hosted one of the nights, and it was wonderful to hear such sounds in that space.
one of the highlights of this year's festival was a tribute to max mathews. as you may already know, we lost the father of computer music earlier this year. at sfemf, marielle jakobsons and diane douglas played improv for olympiad using max's phaser filters. max mathews' phaser filters create tuned resonances within any sound played through them, much the way a room can amplify certain frequencies depending on its shape. max loved to play these filters melodically to constantly shift the resonant space. he was also interested in harmonics, for example changing the tuning of his filters to quarter tones or complex chords.
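to give a rough feel for what a tuned resonance does, here is a minimal python sketch of a bank of two-pole resonators tuned to a chord (this is only an illustration of the general idea, not max's actual filter design; the pole radius and the chord tuning are assumptions):

```python
import numpy as np
from scipy.signal import lfilter

sr = 44100
r = 0.999                      # pole radius; closer to 1 = sharper, longer ring

def resonator(x, freq):
    """two-pole resonator tuned to freq (Hz): the input rings at that frequency."""
    w0 = 2 * np.pi * freq / sr
    a = [1.0, -2 * r * np.cos(w0), r * r]   # feedback coefficients
    return lfilter([1.0], a, x)

# excite the filter bank with a short noise burst (a stand-in for "any sound")
x = np.zeros(sr)
x[: sr // 100] = np.random.randn(sr // 100)

chord = [220.0, 275.0, 330.0]              # an illustrative tuning
y = sum(resonator(x, f) for f in chord)
y /= np.max(np.abs(y))                     # normalize before playback
```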
among the other new composers and media artists, i was most fascinated by the music of cadet kuhne. her piece stora bioern is a collaboration with the visual artist alba corral. the piece is based on the great bear constellation and uses a recursive algorithm to compose its dynamics and structures. corral used processing to create the visuals; since i am a fan of the open source software processing, i really enjoyed what she did with it. there are regular processing workshops at gaffta if you are interested in learning more about the software. cadet's pulsing sounds progressing in sync with the visuals were also fascinating.