Friday, December 4, 2009

mopho

mopho: stanford mobile phone orchestra had its first performance of the year last night at ccrma. the variety of interfaces was amazing and it sounded great. compared to last year, when we had only a couple of iphone instruments making drone-like sounds, last night's instruments were diverse and astounding. my favorite piece was vox aeterna by ge. he used human voices augmented by the auto-tune of i am t-pain. another interesting piece was nick's wind chimes, where he used the iphone's compass and microphone to control recorded sounds slowly evolving into surround sound.
another amazing performance was jieun's colorful gathering, where she brought all the performers together to jam on her multi-touch iphone instrument.
i uploaded some low quality videos of the event on youtube.

Monday, November 16, 2009

ccrma fall concert

we had an amazing experience at ccrma on thursday. it was in the character of a john cage musiccircus: the audience was encouraged to amble through the building with refreshments in hand and enjoy the performances and installations all over the building.
i had an installation in the listening room. this room is very special to me because i can take advantage of its 16 speakers, arranged in octagons 92 inches apart on three vertical levels: around the room, above it, and below it.
i had a mixed media piece for 16-channel audio + text-based visuals + paper junkmail covering the hallway. my installation was called "junkmail" as a reaction to the fact that "it takes more than 100 million trees to produce the total volume of junk mail that arrives in american mailboxes each year." this fact and a lot more on this topic can be found on forest ethics.

there were lots of good performances all over the building, inside and outside ccrma. one of the highlights was carr's lolfo. i also liked the amazing drones of the sweat shop boys. i'll upload a short video of my installation as soon as i have it, but here are a couple of pics for now.

a short video of the installation is available on youtube.

Thursday, October 29, 2009

barcmut

i have been going to the bay area computer music technology group meetups for a while and never got the chance to write about these amazing events. it's a fantastic gathering of computer musicians, programmers, djs, and more in the bay area.

last night's event was in an amazing venue in the middle of the tenderloin: gray area foundation for the arts. i totally recommend visiting this place even if you don't make it to any of the events. one of the amazing exhibits at the moment is tenderloin dynamic, a series of maps and interactive objects that explore the tenderloin district mostly through data sets.

the first presenter last night was jef scott. his work is an interactive biofeedback musical instrument using max/msp/jitter and ableton live. data coming from the body is transformed into audio/visual feedback: the more nervous the user is, the more their physical and mental energy is transformed into reddish colors and rough, industrial sounds.

the second presenter was edison from the monome community. he was djing breakbeats and creating sound with his diy yellow lunchbox, using max/msp along with a nintendo ds to transfer input from the lunchbox.

the third presenter was preshish moments, who also brought his diy wooden controller, shredder, and control software written in max/msp.

Thursday, September 17, 2009

transitions: outdoor ccrma concert


we have an outdoor concert in ccrma's courtyard tonight. i have a new piece, shooting stars, to perform in this concert. it's a perfect summer night for lying on the grass and listening to computer music.


mixed reality performance: una serata in Sirikata

we had an interesting performance over the weekend, playing laptops and acoustic instruments in three different locations: ccrma at stanford, milan, and banff. we met online in the sirikata environment before a live audience of 300 in milan. the highlight was a new realization of terry riley's piece in c. a short video of the performance is available. the concert was part of the mito festival of music in milan.

Thursday, July 2, 2009

distinctive voices @ the beckman center

we had a concert in march at uc irvine. the video is uploaded now, check it out. i have to mention that the sound quality is not good at all.

Monday, June 1, 2009

maker faire

another crazy week. we had the open house at ccrma, then the maker faire weekend, and now we're preparing for the slork concert.
last weekend was the maker faire in san mateo. i saw lots of interesting and some amazing projects. we also had a section for computer music, where i presented my gestonic instrument. it was a lot of fun to see so many people exploring it.


Tuesday, May 19, 2009

performances

the last couple of weeks were crazy. we had to prepare for two slork concerts, but it was totally worth it and we delivered two fantastic brand-new performances. to see the pieces we played and access the audio files, check out the website: slorktastic!

i also composed a piece for cello and computer and performed it at the IPL concert on may 4th. i have another piece coming out at the ccrma spring concert next week. i look forward to seeing you there.

also the BIG slork concert is coming up on june 4th. you shouldn't miss this one.

two more events to bring to your attention:
there is ccrma's open house on may 29th at the knoll.
we are also presenting our projects at the maker faire, at the technology group booth.
i am going to demonstrate my gestonic instrument at both events.

Monday, April 27, 2009

algorithmic music and images

bret battey lectured in our composition seminar last week. he creates electronic music, multimedia works, and installations. he has a diverse professional and educational background in music composition, computer science, graphics, and electronics. his works have a very strong algorithmic component in addition to their creative nature. he presented to the class the development of his audiovisual works over the last couple of decades. he also performed an hour of his pieces on the ccrma stage. it was a great opportunity to see his works on a big screen and hear them through very good speakers.

Wednesday, April 8, 2009

from kabuki to dumb type

we had an interesting discussion on modern opera in japan in composition class today. our seminar started with kabuki, bunraku, and noh. kabuki has been japan's popular entertainment art since 1603. bunraku is the puppet theatre; japanese puppet theatre is very fragile and detailed. noh is the most elite and sophisticated type of theatre in japan: it gets into the spirit and depth of a person, is extremely slow, and needs a lot of concentration. if you don't know what these terms mean and are curious, check out some examples or read about them. i just wanted to introduce these art forms to you; for each of these topics one could write a whole dissertation to get into its depth.
what is more interesting to me are the modern movements of recent decades. after the second world war, the butoh movement started in japan. it takes up taboo topics of the society, performed as dance by dancers in white moving their bodies in hyper-controlled motions. if i had to translate the movements into music, i would say it has a lot of slow but exact microtones in it. check out the video and explore it yourself: butoh by sankai juku.
another interesting group in japan's new theatre scene is dumb type, an interdisciplinary group of artists in kyoto. they make amazing multimedia projects, dance, and theatre performances. their most famous pieces are voyage, memorandum, and pH.
i like true as well.

Monday, April 6, 2009

gestonic

Specification

Gestonic is a video-based interface for the sonification of hand gestures for real-time timbre control. The central role of hand gestures in social and musical interaction (such as conducting) was the original motivation for this project. Gestonic is being used to make computer-based instruments more interactive. It also allows musicians to create sonic and visual compositions in real time. Gestonic explores models for the sonification of musical expression. It does not use the direct gesture-to-sound mapping commonly found in acoustic instruments; instead, it employs an indirect mapping strategy built around the color and timbre of the sound. The system consists of a laptop's camera, the filtering of camera input in the open-source software Processing, the sending of OSC control messages to the audio programming language ChucK, and finally parameter mapping and sound synthesis in ChucK.

Gestonic consists of two main components:

• Gesture and Image Processing: This part of the system consists of a laptop's video camera and the open-source software Processing, which filters and calibrates the data received from the camera. In the current prototype of Gestonic, the input screen is divided into four sections, each representing a different instrument. In each section, the relative and absolute brightness and the amount of change compared to the previous frame in red, green, and blue are measured. Furthermore, four blobs, each detecting a different color (white, red, green, and blue), show up on the screen. By moving objects of the same color as a blob, those objects can be color-tracked with blob tracking, giving four more parameters to map to sound. Processing and ChucK communicate via OSC: messages sent from Processing to ChucK manipulate the sound, and messages sent in the opposite direction control the video output to make the instrument more expressive.

• Data Processing and Sound Synthesis: ChucK programs are used to manipulate data received from Processing to synthesize sound.
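As a rough illustration of how the two components talk, here is a minimal sketch in Python (rather than Processing or ChucK) of packing an OSC 1.0 message by hand; the address `/gestonic/brightness` and port 6449 are made up for illustration, not the actual names used in Gestonic.

```python
import struct

def osc_pad(data: bytes) -> bytes:
    """Null-terminate and zero-pad to a multiple of 4 bytes, per OSC 1.0."""
    data += b"\x00"
    while len(data) % 4:
        data += b"\x00"
    return data

def osc_message(address: str, *args: float) -> bytes:
    """Encode an OSC message whose arguments are all float32 (big-endian)."""
    packet = osc_pad(address.encode("ascii"))
    packet += osc_pad(("," + "f" * len(args)).encode("ascii"))  # type-tag string
    for a in args:
        packet += struct.pack(">f", a)
    return packet

# Sending one brightness value per screen section over UDP would look like:
#   import socket
#   sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
#   sock.sendto(osc_message("/gestonic/brightness", 0.25, 0.5, 0.75, 1.0),
#               ("127.0.0.1", 6449))
```

In practice Processing and ChucK each ship with their own OSC facilities, so no hand-packing is needed; the sketch only shows what travels over the wire.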

Work in Progress

Gestonic is a work in progress and there is a lot more to be done to formulate expressive sounds from expressive gestures. Each section of the video frame is mapped to a different instrument. So far, modules for four types of instruments are implemented. The first is a drone-like sound. The second is a randomly generated, particle-like sound; its timbre and reverb are manipulated with gestures, and in the future the density of these random sounds will be indirectly mapped to the density of motion in the image. The third is a beat-detecting instrument tracking the beats in motion. The fourth is a set of human voices, manipulated with a granular synthesizer whose grain parameters are mapped to blob motions received from the video.

Progress week 2

I started reading about neural networks in order to train the instrument to recognize basic gestures. I looked into neural networks in Processing and the Neural Network Toolbox in Matlab. Some neural-network-related references are added below.

Progress week 3

After playing around with Matlab's Neural Network Toolbox and learning about basic concepts of image recognition such as morphology, I decided to use something more practical. Matlab is good for analyzing images, but not for real-time performance.

I am finally using Wekinator, a free package that facilitates rapid development with machine learning in live music performance. The big advantage of this package is that it is very ChucK-friendly: it helps me combine real-time motion extraction from camera input, the learning methods implemented in Wekinator, and sound synthesis in ChucK.

Progress week 4

This week I started to build a simple one-layer neural network in Processing. It gets input from the mouse; I haven't mapped it to the video camera yet. So far I can read six different drawings from the screen and train the network on those drawings. The longer the training, the lower the error in recognizing the proper drawing. The next step is to get input from the camera. Then the question is how to proceed: how can I make the training work in real time?
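For the curious, the idea can be sketched in plain Python (a toy re-creation, not the actual Processing code): a single layer of weights with one output unit per drawing class, trained with the delta rule.

```python
def train_single_layer(samples, labels, classes=6, epochs=200, lr=0.1):
    """Delta-rule training of a one-layer network: one weight vector per class."""
    n = len(samples[0])
    w = [[0.0] * (n + 1) for _ in range(classes)]  # +1 weight for the bias
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            xb = list(x) + [1.0]  # append bias input
            for c in range(classes):
                out = sum(wi * xi for wi, xi in zip(w[c], xb))
                target = 1.0 if c == y else 0.0
                for i in range(n + 1):  # nudge weights toward the target
                    w[c][i] += lr * (target - out) * xb[i]
    return w

def classify(w, x):
    """Pick the class whose output unit responds most strongly."""
    xb = list(x) + [1.0]
    scores = [sum(wi * xi for wi, xi in zip(wc, xb)) for wc in w]
    return scores.index(max(scores))

# toy "drawings": six 3x3 patterns flattened to 9 pixels, each with its own lit pixel
drawings = [[1.0 if j == i else 0.0 for j in range(9)] for i in range(6)]
weights = train_single_layer(drawings, list(range(6)))
```

Each training pass shrinks the output error, which matches the behavior above: the longer the training, the lower the recognition error.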

Progress week 5

As we approach the middle of the quarter, we have to deliver the first draft of our paper for this project, so I started to read more and get a deeper understanding of gesture-based systems using neural networks. I ran into at least twenty different systems, each similar to the others in some ways but also unique in certain ways.

- Glove Talker
- Japanese sign-language recognition system
- Japanese manual alphabet recognition system
- Musical conducting gesture recognition system
- handshape recognition system
- GIVEN: a handshape (posture) and dynamic gesture recognition system
- Coverbal gesture recognition system
- Sign motion understanding system

I am going to explain some details about these systems and some of their similarities that are useful in my implementation. Some main structural components of gestures that were used in most of these systems are:

- motion path length
- gesture duration
- maximum hand velocity
- flex of the thumb, index, middle, and ring fingers
- hand orientations
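To make these components concrete, here is a small Python sketch (my own illustration, not code from any of the systems above) of how the first three can be computed from a tracked hand position:

```python
import math

def gesture_features(track):
    """track: list of (x, y, t) samples of a tracked hand position over time."""
    path_length = 0.0
    max_velocity = 0.0
    for (x0, y0, t0), (x1, y1, t1) in zip(track, track[1:]):
        step = math.hypot(x1 - x0, y1 - y0)  # distance moved between samples
        path_length += step
        if t1 > t0:
            max_velocity = max(max_velocity, step / (t1 - t0))
    return {
        "path_length": path_length,
        "duration": track[-1][2] - track[0][2],
        "max_velocity": max_velocity,
    }
```

Feature vectors like this are what gets fed to the neural network instead of raw pixels, which keeps the input small and roughly invariant to where in the frame the gesture happens.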

Progress week 6

This week we are submitting the first draft of our paper. I will upload my paper here soon.
In addition, I worked on some image processing. I have approached the problem in two different ways:
- analyzing by brightness
- analyzing by pixelation

I am still working on feeding these values to the neural net.
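As a sketch of the brightness approach (a simplified Python stand-in for the Processing code, not the actual implementation), dividing a grayscale frame into the four instrument sections and measuring each section's mean brightness plus the frame-to-frame change might look like:

```python
def section_brightness(frame):
    """Mean brightness of each quadrant of a grayscale frame (a list of rows)."""
    h, w = len(frame), len(frame[0])
    hy, hx = h // 2, w // 2
    sections = []
    for sy in (0, hy):          # top row of quadrants, then bottom
        for sx in (0, hx):      # left quadrant, then right
            total = sum(frame[y][x]
                        for y in range(sy, sy + hy)
                        for x in range(sx, sx + hx))
            sections.append(total / (hy * hx))
    return sections

def frame_delta(prev, cur):
    """Total absolute pixel change between two consecutive frames."""
    return sum(abs(a - b)
               for row_a, row_b in zip(prev, cur)
               for a, b in zip(row_a, row_b))
```

These per-section averages and deltas are the kinds of values that would then be fed to the neural net.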

Progress week 7

This week I worked on making new sounds to map to gestures. It is hard to make sounds that are interesting enough and to map them in a non-linear way that makes the instrument more EXPRESSIVE!

A good inspiration was meeting the Troika Ranch dance company. They demonstrated their software, Isadora, which is exactly what I want to achieve with my software; unlike their approach, though, I only use open-source software.

Results

The final paper, which summarizes all the findings of this project, has been submitted and published at the IHCI 2009 conference in San Diego. The paper is available upon request.


Links and References


1. Machover, T.: Instruments, Interactivity, and Inevitability. Proceedings of the NIME International Conference (2002)

2. Kurze, M.: TDraw: a Computer-based Tactile Drawing Tool for Blind People. Proceedings of 2nd Annual ACM Conference on Assistive technologies. ACM Press. Canada (1996) 131-138

3. Fels, S.S., Hinton, G.E.: Glove-Talk: A Neural Network Interface between a Data-glove and a Speech Synthesizer. IEEE Trans. On Neural Networks, Vol. 4, No. 1 (1993)

4. “Processing” website

5. Wright, M., Freed, A.: Open SoundControl: A New Protocol for Communicating with Sound Synthesizers. ICMC. Thessaloniki (1997)

6. Wang, G., Cook, P.R.: ChucK: A Concurrent, On-the-fly Audio Programming Language. Proceedings of the ICMC (2003)

7. Carterette, E.C., Kendall, R.A.: Comparative Music Perception and Cognition. Academic Press (1999)


Neural Network References

1. Hunt, A., Hermann, T. : The Importance of Interaction in Sonification, ICAD (2004).

2. Kolman, E., Margaliot, M. : A New Approach to Knowledge-Based Design of Recurrent Neural Networks. (2006)

3. Franklin, K., Roberts, J. : A Path Based Model for Sonification.

4. Boehm, K., Broll, W., Sokolewicz, M. : Dynamic Gesture Recognition Using Neural Networks; A Fundament for Advanced Interaction Construction, SPIE Conference Electronic Imaging Science & Technology, San Jose California. (1994)



Friday, April 3, 2009

rocco di pietro

we had the honor of hosting a guest composer during the winter quarter at ccrma. rocco advised us on making music and composed a lot himself in this period. he even composed a piece for laptop orchestra, and we performed it at ccrma and uc irvine. his piece for slork is called 'one stone flow'. it starts with nature sounds played by the laptops, leading into drone-like sounds from joystick-controlled laptops and chords played on the piano. finally, the laptops play interviews with composers who influenced rocco's music or were his teachers, such as maderna, foss, boulez, and finally john chowning. the composers' voices are then manipulated with granular synthesis, building up a chaotic sound. the piece ends with a huge laugh played by all the laptops.
if you want to hear rocco's interview with sica check out here.


Monday, March 9, 2009

more composers

last week we had a very interesting guest composer, alvin curran. his music is made of electronic and environmental sounds. he has compositions ranging from lake concerts featuring musicians in rowboats to ship-horn concerts. one of my favourites is his "floor plan/notes from underground", a holocaust memorial installation at ars electronica in linz.
good news for san francisco and bay area residents: he has a concert coming up this sunday (march 15th) at the contemporary jewish museum in san francisco.

Thursday, February 26, 2009

guest composers

we have had very interesting composers as guests at ccrma in the last couple of weeks. first, yinam leef was here for the pan-asian music festival, and we had the honor of a class with him in the composition seminar. i don't need to explain how rich and beautiful his music is; you can just listen to it yourself. what moved me in his class was this: he and lots of great composers learned composition in a very classical western style, and they are not sure that's the best way to teach it to the next generations. on the one hand, having the luxury of learning harmony and counterpoint at an early age helps a composer reach a deeper level in music, but doesn't it take away some of her creativity? for example, at the "berlin hochschule der kuenste" the composition students don't go through this classical education and don't limit themselves to it. what do you think?

Friday, February 6, 2009

tape festival

last weekend we went to the tape music festival in san francisco. it was at cellspace, which is a very cool venue, but not necessarily for tape music. the acoustics of the room add another roughness to the sound, which, depending on the sound, can make it cooler or not.
i really enjoyed the piece "etude aux sons animes" by pierre schaeffer. he made use of such unique, fantastic sounds that i could feel myself floating in a metal bowl or having metal balls dropped on my ears. another favourite of mine was a composition by thom blum, which was especially clean; i enjoyed how rich the whole sound was and how cleanly it moved from one texture to another. and of course my very favourite wave that night was ligeti's artikulation. it was nice to hear these pieces through eight speakers in a new environment.

Friday, January 23, 2009

waves of the week!

i have a very crazy schedule and never have time to listen to music attentively during the week. but every weekend i borrow five to ten cds from the stanford music library and take time to listen to them, and sometimes to analyze them. if you do the same, it could be great to start some listening discussions on this blog.

one of the cds i got this week is voices 1900/2000, a choral journey through the twentieth century. very beautiful sounds.
another set of waves i listened to this week was tons of gyoergy ligeti's music. even though i lived in vienna during the last decade, i didn't hear as much ligeti there as i did in the last week. some of his masterpieces were performed at wien modern (a yearly festival of 20th-century music in vienna), though. my favourite ligeti orchestral piece is atmospheres. this piece has such a thick texture with a huge variety of timbres. you might have heard it in stanley kubrick's 2001: a space odyssey. enjoy listening to it while looking at the score, which is visually as rich and thick as the sound.

Thursday, January 22, 2009

why wavelounge?

this blog is devoted to computer music: generating, listening to, reading about, and discussing electroacoustic music.
feel free to discuss and add comments and suggestions.
i take part in composing, coding, and performance with slork (stanford laptop orchestra). we had a performance a couple of weeks ago at macworld. here's a video to warm up this blog.
slork at macworld

Monday, January 19, 2009

who's visda?

i am currently a music, science and technology student at ccrma (center for computer research in music and acoustics) at stanford university.
i was born in tehran, emigrated to austria/vienna in the second decade of my life, then re-emigrated to california/ san francisco in the third decade of my life. next i am planning to emigrate to saturn to fulfill my childhood dreams.
my passion for astronomy and outer space led me to study physics as an undergrad; then the fascination of computer science and new technologies led me to continue my education in computer science. i have always had a passion for music and have played piano since childhood. as i improved my knowledge of computer technology, i learnt to use it as a musical instrument to express my passions. to go deeper into connecting music and technology, i am studying at ccrma.
i also have a personal blog where i write about my trips and the culture shocks i deal with while moving around the world: http://twoday.tuwien.ac.at/sfd/

sounds

2012-tweetup by visda

2012 Tweetup is a plunderphonics tape piece made of the sonification of Twitter data. Follower data for the people and events that were trending in 2012 demonstrates the development and popularity of those events on social media. The sonic interpretation of this development is created by mapping the Twitter data to parameters that modulate (using granular synthesis) audio recordings related to each specific trend. Using simple granulation techniques on the data, a variety of interesting timbres and textures are obtained. Gap size, grain size, amplitude, and the random spread of the grains are controlled by the data. Where the number of followers is higher, the samples are played back as recognizable parts of the recorded tracks.
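A minimal Python sketch of this kind of data-to-grain mapping (the parameter ranges here are illustrative, not the ones used in the piece): follower counts are normalized, then pushed through linear ramps so that bigger audiences yield longer, denser, louder, less scattered grains.

```python
def grain_params(followers, f_min, f_max):
    """Map a follower count onto granular-synthesis parameters (illustrative ranges)."""
    t = (followers - f_min) / (f_max - f_min)  # normalize to 0..1
    return {
        "grain_size_ms": 20 + t * 480,  # longer grains -> more recognizable source
        "gap_ms": 200 - t * 180,        # denser grain stream as popularity grows
        "amplitude": 0.2 + t * 0.8,
        "spread": 1.0 - t,              # less random scatter at the peaks
    }

# e.g. the @MarsCuriosity peak (1.2M followers) against a 0..1.2M range:
params = grain_params(1_200_000, 0, 1_200_000)
```

At the top of the range the grains approach the 500 ms ceiling with almost no gap or scatter, which is why the samples become recognizable parts of the original recordings at the peaks.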
The trends appear in the piece in chronological order, as in the Twitter data. The piece includes the following tweets:
- Tweets from Mars: NASA’s Jet Propulsion Laboratory live-tweeted as the Mars rover, Curiosity, made its descent onto the Red Planet. The world received Twitter updates directly from the mission’s command center. @MarsCuriosity has more than 1.2 million followers.
- Farewell to Whitney Houston: the day the R&B superstar died, tweets about her death peaked at 73,662 per minute. Nearly 2 million tweets were sent during her televised funeral on Feb. 19.
- The U.S. presidential elections: The first presidential debate between President Obama and Republican presidential candidate Mitt Romney generated 10 million tweets.
- Superstorm Sandy from space: The International Space Station (ISS) captured live images of the storm as it tore its way up the East Coast. The astronauts tweeted live images to the world.
- When Felix Baumgartner made his record-breaking skydive from space, people all over the world took to Twitter to witness the event unfold.
- President Obama’s simple election night message: “Four more years.”

junkmail is an installation in the listening room. this room is very special to me because i can take advantage of its 16 speakers, arranged in octagons 92 inches apart on three vertical levels: around the room, above it, and below it. i had a mixed media piece for 16-channel audio + text-based visuals + paper junkmail covering the hallway. the installation was a reaction to the fact that "it takes more than 100 million trees to produce the total volume of junk mail that arrives in american mailboxes each year." this fact and a lot more on this topic can be found on forest ethics.
shootingstars by visda

this piece was composed for 'Transitions:', an open-air evening of computer music outdoors by CCRMA in the Knoll courtyard. we had an 8-channel surround system, projections, and space for the audience to lay out their blankets.

    this piece is written for the stanford laptop orchestra (slork) with two types of instruments and live vocals. one is a smashing instrument, played by smashing on the laptop, which uses the macbook's sensors as input to manipulate drum sounds. the second is a granulator, used on the vocal sounds, which can change the length, pitch, and randomness of the grains. the piece is based on a poem by rumi (a persian sufi poet). details about the instruments can be found on slork's website; a video of the performance is available on youtube.
    chuckucello by visda

    this is an electroacoustic piece composed for cello and laptop. the laptop part is written in chuck and performed in real time. thanks to "broer oatis" for performing the cello part. this recording of the piece is from an IPL concert in spring 2009.
    this piece was originally written for 8 channels and performed on the ccrma stage, where 8 channels were available; here is a stereo version of it. the fire sounds are actual recordings of fire and burning; the rest is extracted from kurdish music.
    for this assignment we were supposed to create a live generative music system; i built three types of instruments. one has a drone sound; the second sounds like particles, with a very sharp sound created by random numbers in a specific scale; another one renders drum buffers. the challenge in this project was to make my instruments work with the computer's keyboard; for this, the S.M.E.L.T. website has the best examples for chuck. here is my chuck code.
    this week in ge's class (compositional algorithms) we explored different filters and timbre spaces using fm synthesis. for our assignment we had to make a music of changes, just like john cage.
    in my music of changes, the sounds are somewhat random, but not so much as to interrupt the flow of the piece. it starts with a drone sound made by fm synthesis, followed by quark sounds that i randomly generate and to which i add rhythmic features using shakers from chuck. for the related chuck files, check out the website.
    this composition is made of computer-generated voices welcoming people to ccrma in different languages. the piece is originally in 16 channels and is presented in ccrma's listening room. the goal was to pan voices from different parts of the world so that the listening room works as a simulation of the surrounding world.

    this sound is composed using real sounds of the caltrain from san francisco to palo alto and some ambient sounds from both places. the sounds were recorded with a homebrew microphone, then processed by chuck files partly inspired by steve reich's piano phase.

    this composition is generated by two chuck files, one producing a rhythmic and the other an arrhythmic sound. the combination is meant to give the listener a spacey sound space.

    projects


    this project/instrument is still a work in progress as part of the 220b and 220c projects. my paper for this project has been accepted for IHCI 2009.
    master's thesis at vienna university of technology, research group for human-computer interaction. available on the tuwien library's website.

    this paper is published in chi 2006 extended abstracts on human factors in computing systems.

    this project was a cooperation between me and my colleagues at tu wien and the akademie der bildenden kuenste in vienna. i implemented the audio components of the project in c++ using the bass library: analyzing the sound samples with fft and sending messages to the video components to control the visuals.
    256 final project: simulation of human movement using opengl and sonification of the simulation using rtaudio.

    250a final project: used max/msp's jitter to determine the location and motion of the eyes, nose, and mouth; controlled news tracks through granular synthesis; and mapped facial movement parameters to grain parameters using silence detection and large grains.