Tap Synth (2024)
Tap Synth is a work-in-progress system for controlling synthesizers via tap dance. NYC tap dancer Michela Marino Lerman approached me with the idea in 2023, and I immediately found it inspiring. A number of existing projects address this problem, ranging from drum triggers attached to modular synthesizers to multiple trigger surfaces played by tapping in certain zones. What didn’t exist, however, was a system that would let a tap dancer use a regular tap floor: capture the audio, detect onsets (and their amplitude), and use them to dynamically trigger synths in a way that could be controlled and programmed for an entire set of music with live musicians.
The problem basically breaks into two chunks: how does one detect onsets reliably with minimal latency, and how does one actually choose what notes to play? The first problem has been solved many different ways in Max/MSP with all sorts of algorithms (such as the venerable Bonk object). I chose the machine-learning-based onset detection in Rodrigo Constanzo’s SP-Tools, which worked great. Hats off to him and the folks behind FluCoMa (one of the packages SP-Tools relies on) for making their tools so easy to integrate. With an onset detection method chosen, I simply pipe in audio from either a mic on the tap floor or a pickup, process the audio to remove extraneous noise, and detect steps.
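To give a feel for what onset detection is doing under the hood, here is a minimal sketch in Python of a simple energy-based detector. This is purely illustrative: the actual project uses SP-Tools’ machine-learning onset detection inside Max/MSP, and the frame size, threshold, and refractory window below are assumptions, not values from the patch.

```python
import numpy as np

def detect_onsets(audio, sr, frame=256, threshold=4.0, refractory_s=0.05):
    """Toy energy-based onset detector (illustrative only; the real
    system uses SP-Tools in Max/MSP). Flags a frame as an onset when
    its RMS energy jumps sharply relative to the previous frame.
    Returns a list of (sample_index, amplitude) pairs."""
    onsets = []
    prev_rms = 1e-9
    last_onset = -float("inf")
    refractory = int(refractory_s * sr)  # debounce window for one tap
    for i in range(len(audio) // frame):
        chunk = audio[i * frame:(i + 1) * frame]
        rms = float(np.sqrt(np.mean(chunk ** 2))) + 1e-9
        # A tap shows up as a sharp energy rise; the refractory window
        # keeps one physical step from registering as several onsets.
        if rms / prev_rms > threshold and i * frame - last_onset > refractory:
            onsets.append((i * frame, rms))
            last_onset = i * frame
        prev_rms = rms
    return onsets
```

The amplitude returned alongside each onset is what lets a step’s loudness map onto MIDI velocity downstream.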
To turn this into a useful performance tool, I built a system in Max that ingests a CSV filled with preset information and then parses it to generate MIDI based on detected onsets and their amplitude. Based on what’s programmed into the preset CSV, each individual sub-preset can be a sequence of notes, a group of notes to be selected randomly from a set, notes selected dynamically with a MIDI keyboard, or a combination of the above. Additionally, the presets can send control data to Ableton (or another device) to change timbres. All of these presets in Max can be switched with either a keyboard or a wireless handheld gamepad held in one hand. This way, Michela can program a preset for each chord change in a song and then dance with an ensemble, changing chords dynamically as the ensemble plays. The CSV is designed to be reasonably human-readable, so that it’s not too hard to program presets. I also created a spreadsheet utility for generating note numbers from MIDI note names, as well as a Max utility for generating formatted note numbers from MIDI input, allowing one to play in the desired sequence of notes.
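The preset idea can be sketched in Python. To be clear about what’s assumed: the CSV column names (`name`, `mode`, `notes`), the two modes shown (`sequence` and `random`), and the note-name convention (scientific pitch, C4 = 60) are all hypothetical stand-ins — the actual Max patch and CSV schema may differ.

```python
import csv
import io
import random

NOTE_OFFSETS = {"C": 0, "D": 2, "E": 4, "F": 5, "G": 7, "A": 9, "B": 11}

def note_to_midi(name):
    """Convert a note name like 'C4' or 'F#3' to a MIDI note number,
    assuming the C4 = 60 convention (conventions vary by software)."""
    letter, rest = name[0], name[1:]
    accidental = 0
    if rest and rest[0] in "#b":
        accidental = 1 if rest[0] == "#" else -1
        rest = rest[1:]
    return (int(rest) + 1) * 12 + NOTE_OFFSETS[letter] + accidental

class Preset:
    """One sub-preset: either a fixed sequence stepped through on each
    detected onset, or a pool of notes chosen at random per onset."""
    def __init__(self, mode, notes):
        self.mode, self.notes, self.i = mode, notes, 0

    def next_note(self):
        if self.mode == "random":
            return random.choice(self.notes)
        note = self.notes[self.i % len(self.notes)]  # wrap the sequence
        self.i += 1
        return note

def load_presets(csv_text):
    """Parse a human-readable preset CSV (hypothetical schema) into
    Preset objects keyed by name."""
    presets = {}
    for row in csv.DictReader(io.StringIO(csv_text)):
        notes = [note_to_midi(n) for n in row["notes"].split()]
        presets[row["name"]] = Preset(row["mode"], notes)
    return presets
```

In a performance setting, each detected onset would call `next_note()` on the currently selected preset and send the result as a MIDI note-on, with the onset amplitude scaled into velocity.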
For the moment, my system doesn’t reliably differentiate between types of tap step (e.g. heel, toe, stomp), but perhaps further tweaking will get us closer and allow for dynamic timbre changes based on tap technique or position on the tap floor.
Check out a snippet from rehearsal on Michela’s Instagram page to the right.
(If you’d like the patch for this, send me an email at nathan@nathanprillaman.com. It’s in working condition, but I won’t be sharing it publicly until I’ve made it a little more robust.)