I have been aware of synthesizers since I was in high school around 1976. In fact, I wrote an essay for a writing class about how in the future synthesizers would replace acoustic instruments. This has partly come true, hasn’t it?
When I began my composition studies at Northern Illinois University in 1977, I took an electronic music course that was part history, part hands-on. The teacher was Joe Pinzarroni, who had worked with Cage in New York on dance music using live electronics and computers. Partially because of Joe, live electronics rather than studio electronics seemed to prevail at NIU. I found that area of electro-acoustic music far more fascinating than sitting in a studio splicing tape.
I composed three pieces involving live electronics during my two years at NIU. One was for solo cello with a contact mic running to a reverb unit; the cello part was strictly notated and the reverb settings were also given in the score. Another piece, inspired by George Crumb’s “Black Angels,” was for a miked string quartet. Unlike the Crumb, my only interest was to assign each instrument to a speaker in one of the four corners of the concert hall to heighten the separation of the instruments’ independent lines. Again, it was strictly notated and composed knowing that the sounds would be coming from a particular corner of the hall.

The third work was a multimedia piece thought up by the artist John Goss. A full solar eclipse was to happen on February 26, 1979, and John was going to recreate Stonehenge in a field between the art and music buildings. He had numbers that were somehow connected with Stonehenge, and he assigned me a set of those Stonehenge numbers to do with as I wanted in the realm of sound. I multiplied the numbers by 100, used the results as frequencies, and spaced them out over the time from when the eclipse began until it ended. At certain time intervals the Buchla-generated frequencies, amplified through large speakers set on the art building’s steps and facing the mock Stonehenge, would change. There were also dancers circling the mock Stonehenge who had their own Stonehenge-generated numbers. Special sunglasses were handed out to the spectators to protect their eyes. If I remember correctly, the eclipse occurred around 10:30 am CST. There was a large amount of snow on the ground, but it wasn’t a very cold day. I wouldn’t call my contribution music so much as some type of aural experience. I enjoyed working in multimedia, but that was my first and last such piece.
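Roughly, the mapping amounted to something like the sketch below. The numbers and timings in it are purely hypothetical placeholders, not the values I actually used, and it only illustrates the idea of turning the assigned numbers into frequencies and spacing the changes over the eclipse.

```python
# Illustrative sketch only -- the numbers and timings here are hypothetical placeholders,
# not the actual Stonehenge-derived values used in 1979.
stonehenge_numbers = [7, 19, 30, 56]                     # hypothetical set of assigned numbers
frequencies = [n * 100 for n in stonehenge_numbers]      # each number times 100, taken as Hz

eclipse_start, eclipse_end = 0.0, 150.0                  # hypothetical eclipse span, in minutes
span = eclipse_end - eclipse_start

# Space the frequency changes evenly from the beginning of the eclipse to its end.
change_times = [eclipse_start + i * span / (len(frequencies) - 1)
                for i in range(len(frequencies))]

for t, f in zip(change_times, frequencies):
    print(f"at {t:5.1f} min into the eclipse, set the Buchla oscillator to {f} Hz")
```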
The next year I studied at the University of Illinois with Ben Johnston. The U of I had a great electro-acoustic setup, with a classical tape studio as well as many synthesizers and computer labs, and it had a long (for electronic music) and storied past. However, other than Sal Martirano’s SalMar Construction live performances, real-time electronics was not happening there, nor was there equipment to use for it. I do remember that around 1981 someone came back from Japan with a Walkman recorder that was not yet available in the US. He had fed a grand piano into the recorder, which contained some type of prerecorded material that modulated the piano’s output. I thought it had potential, but that was about it. So I stuck with acoustic-only music.
Following U of I, I moved to NYC to study at Columbia University. I took an electronic music course with Mario Davidovsky. It was classical studio work, cut and splice only. He frowned upon tape loops and synthesizers; you had to create everything with a razor blade and whatever dirty old equipment was in the studio. I decided to go back to live electronics, but in the way Davidovsky does it in his Synchronisms: the sounds are all on tape, but the tape machine plays or pauses, and the performers (I composed for flute and contrabass) interact closely with what is on the tape. It isn’t the way I thought, or think, about live electronics, but it was a way. And that way turned me off to electro-acoustic music until now.
Mind you, I have followed much of what has been happening since the late 1980s. While I was in Amsterdam on a Fulbright, I heard concerts that demonstrated what many composers were doing in computer research funded by Philips. George Lewis was also in Amsterdam at that time. I heard many concerts in New York throughout the ’80s and ’90s. I knew composers who were using Max in the mid-1990s, and I had a look at it. I was certainly fascinated with some of what I heard, but the learning curve was daunting.
The Max/MSP learning curve is still steep, but less so than it was 15 years ago. And laptop computers make having a home studio feasible. As a result, I am getting involved with electro-acoustic music again after a 29-year layoff.
For now I am using patches that alter the sound of my clarinet in real time, and I would like to find a way for Max to become part of the composition process in real time.
I have no goal. I am learning the software.