Future Stages (2022)

The Juilliard School Center for Innovation in the Arts’ 2022 production “Future Stages” was a major foray for the department into immersive multimedia performance. The performance, which took place in Juilliard’s Willson Theater, featured an eight-channel ambisonic sound system, six channels of video playback surrounding the space and on a scrim at its center, and a variety of interactive controllers used in performance. There were two programs: one focused on theatrical, dance, video, and multimedia works, and another focused more on music works (though both certainly blurred the boundaries of those classifications). My role was to oversee sound, including mixing, sound design, and audio interactivity, all while mentoring students in the creation of their works. To those ends, my work fell into a few categories.

Sound/Environment Design and Playing the “Keyboard”

As part of the program, we collaborated with a team of video designers and technologists who built a 3D environment used to render multi-channel video precisely mapped to the locations of our screens, creating the effect of a dynamic, moving environment around the audience. That environment served as a pre-show installation and as a mid-show transition experience as we moved between major sections of the program. To support those visuals, I created an eight-channel soundscape, programmed in SuperCollider, that let me generate a series of audio moments that could dynamically lengthen or contract as needed. While I wasn’t starting from a blank page when I created these files, I did a fair amount of live coding along the way to get the kind of dynamic environment and musical structure I was looking for. I probably could have gotten close to that sound in a standard DAW, but the probabilistic tools and iterative structures available in a programming language let me go further than I otherwise would have. I have something of a mixed relationship with live coding - on one hand, it can help me generate tremendously dynamic, expressive streams of data with very little (physical) effort. On the other hand, I sometimes feel as if I am skating a razor-thin edge between expressive sound and ear-destroying, full-scale feedback destruction. In this instance, I decided to be conservative and not do anything too risky, so I made pre-recorded versions for safety.
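
To give a flavor of what I mean by probabilistic tools, here is a minimal sketch of the kind of pattern-based texture involved - not the actual show files, which were considerably more involved. It assumes a server already booted with at least eight output channels, and every value in it is illustrative.

    // A probabilistic eight-channel texture sketch. Assumes the server is booted
    // with s.options.numOutputBusChannels = 8; all values here are placeholders.
    (
    SynthDef(\grain8, { |out = 0, freq = 440, amp = 0.1, pan = 0, dur = 1|
        var env = EnvGen.kr(Env.sine(dur), doneAction: 2);
        var sig = SinOsc.ar(freq) * env * amp;
        // PanAz places each event somewhere on a ring of eight equidistant speakers
        Out.ar(out, PanAz.ar(8, sig, pan));
    }).add;
    )

    (
    // Exponential durations and weighted pitch choices let the texture stretch
    // or contract without committing to a fixed timeline.
    ~texture = Pbind(
        \instrument, \grain8,
        \dur, Pexprand(0.2, 3.0, inf),
        \freq, Pwrand([200, 300, 450, 675], [0.4, 0.3, 0.2, 0.1], inf),
        \amp, Pexprand(0.02, 0.1, inf),
        \pan, Pwhite(-1.0, 1.0, inf)   // position wraps around the speaker ring
    ).play;
    )

    // ~texture.stop;  // end the moment when the transition is over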

Upmixing

Many of the student and guest artists involved had never worked in immersive formats before, so my role involved quite a bit of mentoring them through that (as well as mixing or upmixing their projects when needed). It’s always interesting to me how different artists solve similar problems. For example, one of the pieces (by an alum) was written for string quartet and dancer, with the dancer using a motion controller to trigger sounds generated in SuperCollider. The patch worked beautifully in stereo, with sonic events triggering based on accelerometer data, but if we had left it in stereo, the rest of the program would have made the piece sound small. I took the existing SuperCollider code and amended it to create an eight-channel panning process that moved the audio around the space in a far more immersive way than stereo could. By extrapolating the existing panning algorithm, I was able to turn this stereo computer-controlled instrument into something that could be abstracted to any system of equidistant speakers placed around the audience, as sketched below. For me, that’s the beauty of working with code rather than with a GUI.
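
As a sketch of the idea - not the alum’s actual patch - the change amounts to swapping a stereo panner for a ring panner and treating the speaker count as a variable; the SynthDef and the ~numSpeakers name below are my own illustrative choices.

    // Illustrative only: the same controller-driven pan value that once moved a
    // sound across a stereo pair now travels around a ring of N speakers.
    (
    ~numSpeakers = 8;   // any count of equidistant speakers around the audience

    SynthDef(\trigHit, { |out = 0, freq = 300, amp = 0.2, pan = 0, decay = 1.5|
        var env = EnvGen.kr(Env.perc(0.01, decay), doneAction: 2);
        var sig = SinOsc.ar(freq) * env * amp;
        // Stereo original, conceptually: Out.ar(out, Pan2.ar(sig, pan));
        // Generalized version: PanAz reads `pan` as a position on the ring
        Out.ar(out, PanAz.ar(~numSpeakers, sig, pan));
    }).add;
    )

    // e.g. triggered from incoming accelerometer data:
    // Synth(\trigHit, [\freq, 420, \pan, -0.25]);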

Contrasting fairly starkly with that piece was a similar scenario with a very different solution. Another piece on the program was composed by one of my students for solo electronic keyboard. He had built a backing track with some pretty heavy-duty sounds, along with a couple of synth patches he was planning to play over them. The student in question, in addition to being a formidable composer, is a virtuoso pianist. One of the challenges we found was that the level of expression possible with his patches wasn’t letting him stretch his muscles, so to speak, and since his mix was in stereo, the whole track sounded thin in the theater. The solution was to carefully craft additional channels of audio routed to the surround and elevation speakers, and to apply very careful mapping of MIDI parameters to sonic results. By getting just the right connection between velocity and amplitude, or velocity and ring modulation depth, and by mapping those velocity levels across different sounds in the acoustic environment just right, we were able to create a performance system that not only let a real pianist show off a little, but required somebody with real keyboard chops for the piece to work at all. This scenario feels almost opposed to my earlier live coding example; here, the physical capabilities of the performer are of the utmost importance.
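
Here is a rough sketch of the velocity-mapping side of that work; the synth, the curves, and the thresholds are placeholders rather than the student’s actual patch, and the surround/elevation routing is left out for brevity.

    // Illustrative velocity mapping: soft playing stays clean and quiet, hard
    // playing gets louder and increasingly ring-modulated. Values are placeholders.
    (
    SynthDef(\velKeys, { |out = 0, freq = 440, amp = 0.1, ringDepth = 0, gate = 1|
        var env = EnvGen.kr(Env.adsr(0.005, 0.2, 0.7, 0.5), gate, doneAction: 2);
        var dry = Saw.ar(freq) * env;
        var ring = dry * SinOsc.ar(freq * 1.5);   // ring-modulated copy
        var sig = XFade2.ar(dry, ring, ringDepth * 2 - 1) * amp;
        Out.ar(out, Pan2.ar(sig));
    }).add;

    MIDIClient.init;
    MIDIIn.connectAll;
    ~notes = Array.newClear(128);

    MIDIdef.noteOn(\keysOn, { |vel, num|
        var v = vel / 127;
        ~notes[num] = Synth(\velKeys, [
            \freq, num.midicps,
            // velocity to amplitude on a gentle exponential curve
            \amp, v.linexp(0, 1, 0.02, 0.3),
            // ring modulation only starts to bite above roughly mezzo-forte
            \ringDepth, v.linlin(0.6, 1.0, 0.0, 1.0)
        ]);
    });

    MIDIdef.noteOff(\keysOff, { |vel, num|
        if(~notes[num].notNil) { ~notes[num].release; ~notes[num] = nil };
    });
    )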

Moving Audience and Planning in Advance

One of the major challenges with an installation-type event like the first night of the performance is that the audience can move throughout the space and experience the music from different angles. That means that if the music has a “front” or “proscenium” feeling, things can’t totally fall apart when an audience member rotates. While we do have 5.1 and four-channel playback setups in our facility, we don’t have a dedicated space to workshop events like this, so I built two tools for testing the visual and audio elements in advance. For visuals, I built a mockup of the space in Unity, with a number of virtual screens in the app that could ingest NDI feeds. Since Resolume can output NDI, I was able to network the full show rig (video machine, control machine, audio machine, plus a synthesis/performer machine) and, instead of connecting to projectors, simply route the video into the Unity app. For audio playback and mockup, I used a variety of head trackers with the IEM Suite hosted in Reaper. This was also a chance to test solutions that would be affordable for my students. I had good enough results with the NYU NVSonic (though the built-in app was a little buggy on my machine, so I quite messily rolled my own bridge in Max), with a Mugic sensor strapped to my head and its data translated in Max, and with my iPhone strapped to my head sending data via the Holonist app. While the purpose-built tools for this sort of thing work very nicely, it’s nice to be able to make it work with devices that are a little more accessible.
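
For what it’s worth, that translation layer could just as easily live in SuperCollider as in Max. Here is a hedged sketch of the idea; the incoming /ypr address, the port number, and the outgoing /SceneRotator addresses are all assumptions standing in for whatever your particular tracker and rotator plug-in actually expect.

    // A text-code equivalent of the Max translation patch: receive orientation
    // data from a head tracker over OSC and forward it to a rotator plug-in.
    // Addresses and port are assumptions -- check what your devices expect.
    (
    ~rotator = NetAddr("127.0.0.1", 9000);   // port the rotator listens on (assumed)

    OSCdef(\headTracker, { |msg|
        var yaw = msg[1], pitch = msg[2], roll = msg[3];
        // flip or offset axes here if the tracker's frame doesn't match the plug-in's
        ~rotator.sendMsg("/SceneRotator/yaw", yaw);
        ~rotator.sendMsg("/SceneRotator/pitch", pitch);
        ~rotator.sendMsg("/SceneRotator/roll", roll);
    }, '/ypr');   // incoming address from the tracker app (assumed)
    )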

An image of the immersive version of Troy Ogilvie’s Fridge <a tentacular digestion>. Here’s a link to a stereo video file. I contributed sound.

An image of Kai Kim and Phoebe Dunn’s Metamorphose. For this piece, I put together an ambisonic soundscape of digitally degraded audio, inspired by low-resolution codecs and packet loss. The sounds we dread hearing when consuming audio, when pumped up to intense levels and coming from all directions, carry quite a bit of emotional heft.
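
As a flavor of that sound world, here is a tiny sketch of the kind of degradation I mean - downsampling, bit reduction, and dropouts standing in for packet loss - with placeholder source material and values, not the actual material from the piece.

    // Illustrative degradation chain; all rates and values are placeholders.
    (
    SynthDef(\degrade, { |out = 0, amp = 0.3|
        var sig = SinOsc.ar(LFNoise1.kr(0.2).exprange(200, 800));
        // crude sample-rate reduction: hold the signal at a low "codec" rate
        sig = Latch.ar(sig, Impulse.ar(2500));
        // bit reduction: quantize the waveform to a handful of levels
        sig = sig.round(1/16);
        // "packet loss": randomly drop chunks of the signal
        sig = sig * (LFNoise0.kr(12) > 0);
        Out.ar(out, PanAz.ar(8, sig * amp, LFNoise1.kr(0.3)));
    }).add;
    )
    // Synth(\degrade);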

An image of the setup for the second day of concerts. The keyboard setup pictured was used for the aforementioned student piece. A side note for any composers reading this - that’s a lot of tuned nipple gongs on the table at center. Many institutions won’t have that many, and the rental fee can be pretty extreme. I later created a custom sample library of different nipple gongs for the composer so that the piece could be performed at a wider variety of venues. That Kontakt library can be played with a regular MIDI keyboard, or for more percussion style points, something like a MalletKat.