In C (2023 - The New Series at Juilliard)
As part of my role at Juilliard, I work with a team at the Center for Innovation in the Arts that puts on a variety of multimedia, experimental productions. Every production brings new challenges to solve. For this production of Terry Riley’s In C, Juilliard students in NYC performed alongside musicians at Juilliard Tianjin, almost seven thousand miles away. Working with my colleagues in the department, my role was to design a system to make that happen.
Space and Time
Sound travels through air at a fairly slow rate (about 1,125 ft/s, varying with temperature and altitude), so musicians performing together on a stage of any size have to be able to synchronize even if there’s some degree of latency between them. However, if that latency gets too great, musicians can’t play together effectively. Anyone who’s played in a marching band or on a big stadium stage will have experienced this, and will have solved it in different ways. Marching bands tend to solve it via a visual reference (the drum major) or by playing ahead of or behind the beat; on big stadium tours, the issue is solved by piping sound electronically right to musicians’ headphones or stage monitors. When trying to send sound from New York to Tianjin and back, we run into an inescapable truth: it’s just not possible to send sound there and back over the internet with anything resembling acceptable latency for real-time performance. We were able to get it down to something on the order of hundreds of milliseconds, but that’s still way too much. However, the structure of Terry Riley’s piece gives us an out.
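To put rough numbers on that intuition, here’s a back-of-the-envelope comparison (the stage distances are illustrative; the network figure is the approximate round trip we measured):

```python
# Acoustic latency across a stage vs. a transpacific network round trip.
SPEED_OF_SOUND_FT_S = 1_125  # ft/s in air, approximately, at room temperature

def acoustic_latency_ms(distance_ft: float) -> float:
    """Time in milliseconds for sound to travel `distance_ft` through air."""
    return distance_ft / SPEED_OF_SOUND_FT_S * 1_000

print(f"Across a 50 ft stage:   {acoustic_latency_ms(50):5.1f} ms")   # ~44 ms
print(f"Across a 300 ft field:  {acoustic_latency_ms(300):5.1f} ms")  # ~267 ms
print("NYC to Tianjin and back: hundreds of ms, and unstable")
```

A round trip in the hundreds of milliseconds is like trying to play with someone at the far end of a stadium, except that the distance keeps changing.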
In C is composed of a series of rhythmic cells that the musicians gradually move through, repeating each cell as many times as they individually desire. This creates a cloud of overlapping melodies that gradually shifts its center of density through the score, so in every performance some musicians will be a little ahead of others. They all synchronize their rhythms to a common pulse (often delivered by a piano playing eighth-note Cs). If I could get the latency of the sound from NYC to China and back again to exactly match a whole number of beats, and maintain an exact tempo, the musicians in NYC and Tianjin could be offset by a preset number of bars and nobody (including the musicians) would realize it. They might think that their counterparts are responding to shifts in timbre and dynamics a little more slowly than would otherwise be expected, but with ensembles of any size those shifts are often slow anyway.
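The timing math behind that trick is simple; a quick sketch (the tempo and meter here are illustrative, not the production’s actual settings):

```python
# How much delay puts the remote ensemble exactly N bars behind?
def bar_duration_ms(bpm: float, beats_per_bar: int = 4) -> float:
    """Duration of one bar, in milliseconds, at a given tempo."""
    return beats_per_bar * 60_000 / bpm

bpm = 120  # hypothetical tempo
for bars in (1, 2):
    print(f"{bars} bar(s) at {bpm} BPM = {bars * bar_duration_ms(bpm):.0f} ms")
# 1 bar = 2000 ms, 2 bars = 4000 ms. Pad the round trip out to exactly one
# of these values and the remote musicians are simply playing the same piece
# a bar or two behind, which the structure of In C happily absorbs.
```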
Technology
To make this work, I put together two Ableton sessions. The first was controlled by a performer and caused a Yamaha Disklavier to play back the C pulse at a set rate. That performer also played the notated cells of the piece using a bank of synths I put together in Ableton, all controlled with an APC40 control surface. The performer could control the dynamics of the Disklavier, as well as the timbre, cell, and dynamics of the synth parts. This ensured that the ensemble would be locked into a single BPM. This blending of electronic and acoustic timbres, all controlled by digital devices, is something I find quite expressive.
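For a sense of what driving that pulse looks like in code, here’s a minimal sketch using the mido library (our actual pulse lived in the Ableton session; the port name, tempo, and note choice below are assumptions):

```python
import time
import mido

BPM = 120                # hypothetical tempo
EIGHTH_S = 60 / BPM / 2  # duration of one eighth note, in seconds
PULSE_NOTE = 84          # a high C; In C's pulse is traditionally high Cs

out = mido.open_output("Disklavier")  # assumed MIDI port name

start = time.monotonic()
for i in range(64):  # 64 eighth notes of pulse
    out.send(mido.Message("note_on", note=PULSE_NOTE, velocity=80))
    time.sleep(EIGHTH_S * 0.5)  # short, percussive articulation
    out.send(mido.Message("note_off", note=PULSE_NOTE))
    # Schedule against the start time so timing error never accumulates.
    time.sleep(max(0.0, start + (i + 1) * EIGHTH_S - time.monotonic()))
```

Scheduling each note against a fixed start time, rather than sleeping a constant interval, is what keeps the pulse from drifting, and a drift-free tempo is the property the whole latency scheme depends on.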
The second Ableton session was designed to capture a stereo mix of the ensemble (provided by Marc Waithe, Sound Supervisor and Chief Audio Engineer at Juilliard) as well as a feed from the Disklavier, and to broadcast it to our counterparts in Tianjin via Sonobus. That signal was routed to in-ear monitors for the musicians in Tianjin, who played along. Individual mic feeds from the ~18 musicians in Tianjin, along with the Disklavier signal, were then routed back through Sonobus, over the network, to my Ableton session in NYC. We tested the system at various times and were sometimes able to run the whole session with 24-bit WAV audio, but network conditions were constantly changing, so we settled on a 256 kbps AAC stream for stability.

To set the latency appropriately, I put together a multi-channel delay network in Ableton that allowed me to dial in the exact delay that would put the returning signal one or two bars behind the musicians in NYC. Somewhat horrifyingly, the latency drifted constantly during all rehearsals and performances. To account for this, I monitored the live and round-tripped Disklavier signals and adjusted my delay time manually to keep a tight sync; Ableton’s “fade” delay mode prevented any unwanted pitch artifacts as the delay time changed. I tested various methods for automating the latency setting, but found that in the case of serious dropouts (which, while unlikely, did happen during rehearsals) it was very challenging to keep things from going totally haywire. With more time, a better solution could have been found (as in Ninjam).

From Ableton, I routed all of the individual mics from Tianjin out to Marc’s console via Dante. The show was in the Peter Jay Sharp Theater at Juilliard, a fairly large space, and we ran the mix fairly loud. That allowed Marc to rely more on amplified than acoustic sound, and to blend the Tianjin musicians’ signals in smoothly enough that it was challenging or impossible to tell by ear which musicians were in the room and which were remote.
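For the sync itself, I judged the alignment by ear, but the underlying measurement is just a lag estimate between two copies of the same signal. Here’s a sketch of what an automated version might start from (the function and signal names are hypothetical; nothing like this ran in production):

```python
import numpy as np

def estimate_lag_ms(live: np.ndarray, returned: np.ndarray,
                    sample_rate: int = 48_000) -> float:
    """Lag, in milliseconds, at which `returned` best lines up with `live`."""
    # Full cross-correlation: the index of the peak tells us how far the
    # round-tripped pulse trails the live one.
    corr = np.correlate(returned, live, mode="full")
    lag_samples = int(corr.argmax()) - (len(live) - 1)
    return lag_samples / sample_rate * 1_000
```

A few seconds of the Disklavier pulse at 48 kHz resolves the lag to a fraction of a millisecond; the hard part, as noted above, is keeping any automatic adjustment well-behaved through serious dropouts.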
In addition to the audio from Tianjin, we received multiple channels of video, which were projected behind the musicians in New York. My colleague Willie Fastenow designed a system in Resolume to ingest those signals and combine them with prerecorded footage of dancers that we had captured earlier in the school year. The visuals were effected, modulated, and triggered by a video performing artist (one of my students) using gestural controllers and a MIDI foot pedal.
There’s a video of the performance embedded below.