programmer

Compositions using custom software

Since the late 1990s, during my studies with Howard Sandroff at the University of Chicago Computer Music Studio, I have been programming in Max, an object-oriented graphical programming environment first developed by Miller Puckette at IRCAM (Institut de Recherche et Coordination Acoustique/Musique). I have followed the life of this environment as it has grown to include digital audio and video and migrated through Opcode to Cycling '74. The pieces in this list are all programmed in Max and were performed under my direction. Not included are several other compositions for performers and fixed media that may have used Max in their creation, but for which no stand-alone software exists as a performance tool.

Dismantle (2017) for piano and live processing

This piece manipulates and transforms the overtones of individual piano notes, creating a counterpoint to the pianist's live performance. As the sound is dismantled and manipulated, so are the piano gestures, until these elements are joined in stasis. Using the number pi, particular partials of an opening tone are extracted and processed, bringing out the side of the piano's sound that is not derived from traditional harmony. The piece also includes live manipulation of audio captured during the performance.
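
In the piece itself the partial selection happens inside a Max patch. As a rough illustration of one way the idea can work, the Python sketch below uses successive digits of pi to pick which partials of an opening tone to extract; the digit extraction and the digit-to-partial mapping are my assumptions, not the patch's actual method.

```python
# A minimal sketch, assuming successive digits of pi choose partial
# numbers of the opening tone. Hard-coding the digits keeps it simple.
PI_DIGITS = "31415926535897932384"   # first 20 digits of pi

def selected_partials(fundamental_hz, n=8):
    """Frequencies of the partials picked by the first n pi digits."""
    partials = []
    for d in PI_DIGITS[:n]:
        k = int(d) or 10             # treat digit 0 as the 10th partial
        partials.append(fundamental_hz * k)
    return partials

print(selected_partials(55.0))       # partials of a low A (55 Hz)
```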

Performed April 7 by Lawrence Axelrod, piano, at the Chicago Electro-Acoustic Music Festival.

Water Studies (2016-2017) for bass clarinet and live processing

This set of short pieces involves live manipulation of audio captured during performance. Each piece touches on sounds associated with water.

Performances in April 2016 by Alejandro Acierto at the University of Chicago and the Experimental Sound Studio.

Innaturalis (2015)

This piece, featuring transformed and surreal animal sounds, premiered in two cities simultaneously by connecting two laptop ensembles, one in Salford near Manchester, England, and one in Chicago, USA, as part of the Sonic Fusion Festival held at the University of Salford. The ensemble C_LEns was located at the Motion Capture (MoCap) studio at Columbia College Chicago, while ALE (Adelphi Laptop Ensemble), directed by Stephen Davismoon, was located on the festival stage at the University of Salford, MediaCityUK. The project used a multichannel LOLA (low-latency audiovisual streaming) system developed through research at the Conservatorio di Musica Giuseppe Tartini in Trieste, Italy. The two institutions were connected via Internet2, allowing bidirectional high-speed transfer of video and multichannel audio.

I created an iOS/Android interface using TouchOSC so that members of each ensemble could perform gesturally onstage, their smartphones sending control data to the computers over WiFi.
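
TouchOSC transmits its control data as OSC (Open Sound Control) messages over UDP. As a minimal sketch of the receiving side, the Python code below uses the python-osc package; the address /1/fader1 comes from TouchOSC's stock layout, whereas the custom Innaturalis layout, and its Max receiver, used its own addresses.

```python
# A minimal OSC receiver, assuming TouchOSC is sending to this host
# on port 8000 (a common TouchOSC setting; the actual port is user-set).
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

def on_fader(address, value):
    # In a real patch this value would drive a synthesis parameter.
    print(f"{address}: {value:.3f}")

dispatcher = Dispatcher()
dispatcher.map("/1/fader1", on_fader)

server = BlockingOSCUDPServer(("0.0.0.0", 8000), dispatcher)
server.serve_forever()
```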

Performance: C_LEns / ALE – Sonic Fusion Festival, Manchester, UK / Chicago

Xcymbalum (2013 & 2015) – Due East

Written for the flute/percussion duo Due East, this live-processing piece employs live-generated music notation and is composed for a wide range of flutes, from piccolo to contrabass flute, as well as a battery of metallic percussion. The pitch material is established at the opening as three cymbals are bowed and their overtones are captured. Over the course of the piece, the players perform each other's instruments virtually: the flutist 'performs' percussion sounds via live tracking, and the vibraphone immediately triggers flute and piccolo sounds.
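
The overtone capture at the opening amounts to spectral peak extraction: analyze a frame of the bowed cymbal sound and keep the strongest peaks as pitch material. The piece does this inside a Max patch; the Python sketch below, with an assumed frame length and a synthetic test tone standing in for a cymbal, shows the basic idea.

```python
import numpy as np

def capture_overtones(frame, sample_rate, n_partials=8):
    """Return the n strongest spectral peaks (Hz) in one audio frame.

    A stand-in for the opening overtone capture; `frame` is a 1-D
    float array of samples.
    """
    windowed = frame * np.hanning(len(frame))
    spectrum = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)
    # Local maxima: bins louder than both neighbours.
    peaks = [i for i in range(1, len(spectrum) - 1)
             if spectrum[i] > spectrum[i - 1] and spectrum[i] > spectrum[i + 1]]
    peaks.sort(key=lambda i: spectrum[i], reverse=True)
    return sorted(freqs[i] for i in peaks[:n_partials])

if __name__ == "__main__":
    sr = 44100
    t = np.arange(sr) / sr
    # Synthetic inharmonic test tone standing in for a bowed cymbal.
    test = sum(np.sin(2 * np.pi * f * t) / k
               for k, f in enumerate([220, 367, 521, 804], start=1))
    print([round(f, 1) for f in capture_overtones(test, sr, n_partials=4)])
```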

The piece premiered at CollaborAction in Chicago in 2013; a second performance took place at Illinois State University in 2015.

Vessels (2011) for wine glass and live processing

Vessels was first performed in 2011 using two wine glasses and live digital processing; it is now usually performed with a single wine glass. Every sound in the piece originates during the performance, produced by rubbing, striking, or shifting the contents of a partially filled vessel, and is processed in software developed in the Max environment. This piece joins three currents in my recent work:

1. compositions derived from a single sound source,
2. algorithmic composition, and
3. complexity theory (the butterfly effect).

The latter part of the piece features an automated contrapuntal exchange between strands of sustained wine glass sounds, following rules that govern intervallic relationships, based loosely on Stravinsky-esque contrapuntal logic. That algorithmic counterpoint is different in each performance, and this is where the butterfly effect comes in: as the performer listens, changing the time signature and adding and subtracting voices, each voice changes its notes in reaction to the others. Any slight difference at the outset yields entirely different melodic results from performance to performance. While the first half of the piece is concerned with audible transformations of the original wine glass sound, both struck and sustained, the second half returns to the original sustained sound.
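
As a toy model of this sensitivity, the Python sketch below generates voices by rule-constrained random steps. The step set and interval rules are my own simplified assumptions, not the piece's actual logic; the point is that shifting one starting pitch by a semitone makes the two runs diverge almost immediately, because every voice reacts to every other voice at each step.

```python
import random

# Allowed melodic steps (semitones) and forbidden harmonic interval
# classes: illustrative stand-ins for the piece's intervallic rules.
STEPS = [-7, -5, -3, -2, 2, 3, 5, 7]
FORBIDDEN_HARMONIC = {1, 6, 11}   # minor 2nds, tritones, major 7ths

def next_note(pitch, other_pitches, rng):
    """Choose a step that avoids forbidden intervals against other voices."""
    candidates = [pitch + s for s in STEPS
                  if all((pitch + s - o) % 12 not in FORBIDDEN_HARMONIC
                         for o in other_pitches)]
    return rng.choice(candidates) if candidates else pitch

def run(start_pitches, length=16, seed=0):
    rng = random.Random(seed)
    voices = list(start_pitches)
    history = [tuple(voices)]
    for _ in range(length):
        for i in range(len(voices)):
            others = voices[:i] + voices[i + 1:]
            voices[i] = next_note(voices[i], others, rng)
        history.append(tuple(voices))
    return history

# Butterfly effect: one starting pitch moved a semitone, same random seed.
a = run([60, 64, 67])
b = run([60, 64, 68])
print(sum(x != y for x, y in zip(a, b)), "of", len(a), "steps differ")
```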

Alphabeticon for laptop ensemble (2011)

This laptop ensemble interface and composition focused on the computer keyboard as an input device. Each key was given an onomatopoeic vocal sound based on its letter or punctuation symbol. It is one of several pieces/interfaces developed for C_LEns during the years 2011-2016, some of which were vehicles for collective improvisation.
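
As a minimal sketch of the mapping idea, assuming a hypothetical sample-file layout (the real interface is a Max patch), each typed character could resolve to a vocal sample like this:

```python
import string

def sample_for_key(char):
    """Map one typed character to a sample file path (assumed layout)."""
    if char in string.ascii_letters:
        return f"samples/letter_{char.lower()}.wav"
    if char in string.punctuation:
        return f"samples/punct_{ord(char)}.wav"
    return None   # unmapped keys stay silent

for key in "a,b!":
    print(key, "->", sample_for_key(key))
```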

Performed by C_LEns in 2012 at Columbia College Chicago.

Toys in the Playroom (2008) for noisemakers and live processing

This piece used a modified version of the processing engine developed for Stochasm. I performed it myself at a Chicago Composers' Consortium concert in 2008.

Stochasm (2005) for solo violin and live processing

This piece uses a live interactive digital audio processing engine for electro-acoustic composition, developed in 2005 for this work.

The sources of the electronic sounds in this piece are captured by microphone during the performance and digitally processed in a rather large Max/MSP software patch. Some sounds are processed directly; others are recorded during the performance for more elaborate processing. Using a MIDI foot controller, the violinist initiates recordings to be processed later in the piece. Although the violinist reads from a musical score, the processing comes out differently each time because of the complexity of the processing algorithms and the decisions made by the software during the performance. This in turn affects the timing, texture, pitch (to some extent), and intensity of the continuing performance. “Stochasm” is a portmanteau of “stochastic” and “chasm”: at times the piece evokes images of a deep cavern or an echoic space of varying dimensions.
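
To give a flavor of the kind of decision-making involved, the sketch below picks a processing treatment and its parameters by weighted chance on each simulated pedal press. The treatment names, weights, and parameter ranges are illustrative assumptions, not the patch's actual repertoire.

```python
import random

# Hypothetical treatments with relative probabilities of being chosen.
TREATMENTS = {
    "granular_stretch": 0.4,
    "ring_modulate":    0.25,
    "spectral_freeze":  0.2,
    "reverse_delay":    0.15,
}

def choose_treatment(rng):
    """Pick one treatment by weight, then randomize its parameters."""
    names, weights = zip(*TREATMENTS.items())
    name = rng.choices(names, weights=weights)[0]
    params = {
        "transpose_semitones": rng.uniform(-12, 12),
        "duration_scale": rng.uniform(0.5, 8.0),
    }
    return name, params

rng = random.Random()          # unseeded: different every performance
for trigger in range(3):       # three simulated foot-pedal presses
    print(trigger, choose_treatment(rng))
```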

Other software projects

On another page you can find a list, with descriptions, of software projects other than specific compositions.