Artworks

My first career was as a composer for dance and performance, and as a designer of audio-visual and other technological systems for artists, mostly in theater and installation. The bulk of this work was done in the mid-00s and 10s, and as such the documentation is sparse and … leaves much to the imagination. You can find most of the music I wrote for these works on my Bandcamp; you just have to scroll down.

In the last couple of years, I’ve been lucky enough to make a few small pieces of my own, and to get them documented. Here are some examples.

You Made It Through, You Did (178.2014090344) – Summer 2021 – Old New England Projects, Oakland CA

You Made It Through, You Did (178.2014090344) is an installation consisting of a Raspberry Pi, custom software, a stereo power amplifier, and high-wattage surface transducers. The statement read as follows:

Two transducers turn two parallel panes of glass – which visitors must walk between to enter – into speakers. These window-speakers play a set of tones centered around 178-and-some hertz, or: wavelengths which complete full cycles in the space separating one window and the other. Put another way: resonant frequencies of that small, vestibular area.  

The effect of this should be twofold. The sound will be enveloping and, ideally, consuming. I’ve asked Ken to play the piece as loud as possible without distortion, but I understand there are neighbors. And theoretically, it will also sound as though your ears are able to ‘read through’ certain portions of the sonic aggregate as you move. Movement changes your ears’ position along the phase of standing waves, thus altering your experience of sound that is otherwise steady. In practice, acoustics is a rather complicated affair, and even so, pinpointing this effect is in no way central to “getting” this piece.

The audio is synthesized in real time, and its sequence loops every few hours. Please spend as long as you need to in the entryway. You may try gently touching different areas of the glass to dampen the vibrations, thus altering the sound. The presence of others in the small space will also impact your experience. Please join your friends (and strangers) in close proximity, to the degree that doing so comports with pandemic-related safety measures.
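
To give a rough sense of the arithmetic and the synthesis described in the statement above, here is a small, hypothetical sketch in Python. It is not the installation code (that is documented on Patreon); the pane separation, speed of sound, file name, and function names are all my own illustrative guesses. It picks tones whose wavelengths complete whole cycles across the gap between the panes and renders their sum to a short audio file.

    # Hypothetical sketch, not the actual installation code.
    # Pane separation and speed of sound are illustrative guesses.
    import wave
    import numpy as np

    SPEED_OF_SOUND = 343.0    # m/s, roughly room temperature
    PANE_SEPARATION = 1.925   # metres between the two panes (a guess)
    SAMPLE_RATE = 44100

    def gap_frequencies(gap_m, count=3):
        """Frequencies whose wavelengths fit a whole number of cycles in the gap."""
        return [n * SPEED_OF_SOUND / gap_m for n in range(1, count + 1)]

    def render_tones(freqs, seconds=10.0):
        """Sum equal-amplitude sine tones into one 16-bit mono buffer."""
        t = np.arange(int(seconds * SAMPLE_RATE)) / SAMPLE_RATE
        mix = sum(np.sin(2 * np.pi * f * t) for f in freqs)
        return (0.8 * mix / len(freqs) * 32767).astype(np.int16)

    if __name__ == "__main__":
        freqs = gap_frequencies(PANE_SEPARATION)   # about 178 Hz and multiples
        samples = render_tones(freqs)
        with wave.open("tones.wav", "wb") as w:
            w.setnchannels(1)
            w.setsampwidth(2)
            w.setframerate(SAMPLE_RATE)
            w.writeframes(samples.tobytes())

With a gap of roughly 1.9 metres, the lowest such frequency comes out near 178 Hz, which is consistent with the “178-and-some hertz” in the statement; the real piece’s exact numbers are not given here.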

Information on how this piece was constructed can be found on Patreon.

If Not Now When – October 17, 2021 – Chocolate Factory Theater, Queens, NYC

If Not Now When was made in collaboration with my friends Madeline Best and Brian Rogers at The Chocolate Factory Theater in Queens. It was made possible by a grant from NYFA’s City Artists Corps Grants. It consists of lighting design, projection design, and sound. The following is the proposal statement for the sound component:

I have wondered for a long time what it would mean to “outline” an architectural space in audio – to get (or deliver, or gesture at?) something of a sense of its size and shape and character using sound only, in and on top of the space itself. Put another way – is there a way one can get a “sense” of a room from some sonic experience, if the room were otherwise pitch black?

I think about the sonic characteristics of spaces that emerge from their use – and which lead, if only by inference, to some “sense” of them. When the main terminal at Grand Central is full of people, and you close your eyes, do you get a sense of the space? I think so. When the rehearsal space has three dancers in it, and a single stereo playing back some music in the corner – do you get an acoustic portrait of the space from the actions taking place within it? I think so.

What would it be like, then, to do this on purpose – to let the space “speak for itself”, as it were? Instead of adding bodies to it, identify the bodies already there (columns, corners, recesses, entryways, notable features) and have them “sound” – affix to each a sound-producing device. And then … coordinate those devices, have them play some pre-programmed audio in a meaningful sequence: clockwise around the east wall. From top to bottom in an outward spiral. All the corners at once. The entryways in sequence. And so on.

Until recently, this would have been difficult to do. Accomplishing this task with a computer would be cost-prohibitive, as even a modest 16 or so independent audio channels requires a thousand or so dollars’ worth of audio equipment. But in the last couple of years, the prevalence of WiFi-enabled microcontrollers like the Pi Zero or the ESP32, which can cost $5 to $10 each, makes the prospect of producing localized sound at dozens, if not hundreds, of locations around a large room much more feasible – both economically and technologically.
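
To make the coordination idea concrete, here is a small, hypothetical Python sketch of how a handful of such nodes might be cued over WiFi. It is not the system used at The Chocolate Factory (details are on Patreon); the node names, positions, port number, and message format are all invented for illustration, and it assumes each Pi Zero / ESP32 node runs a small listener that plays its stored cue when it hears its own name.

    # Hypothetical sketch of the control idea, not the show's actual system.
    # Assumes each node listens on UDP port 9000 and plays a local cue
    # whenever it receives a message containing its own name.
    import math
    import socket
    import time

    # Node name -> (x, y) position in the room, in metres (made-up layout).
    NODES = {
        "corner-ne": (10.0, 8.0), "corner-nw": (0.0, 8.0),
        "corner-sw": (0.0, 0.0),  "corner-se": (10.0, 0.0),
        "column-1":  (3.0, 4.0),  "doorway":   (5.0, 0.0),
    }
    CENTER = (5.0, 4.0)
    PORT = 9000

    def clockwise_order(nodes):
        """Order nodes clockwise around the room's center (a rough heuristic)."""
        def angle(item):
            x, y = item[1]
            return -math.atan2(y - CENTER[1], x - CENTER[0])
        return [name for name, _ in sorted(nodes.items(), key=angle)]

    def run_sequence(order, gap_seconds=2.0):
        """Broadcast each node's name in turn; the node plays its cue on receipt."""
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        for name in order:
            sock.sendto(name.encode(), ("255.255.255.255", PORT))
            time.sleep(gap_seconds)

    if __name__ == "__main__":
        run_sequence(clockwise_order(NODES))

Other orderings (all the corners at once, the entryways in sequence, an outward spiral) would just be different sort functions or groupings over the same node map.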

Information on how this piece was constructed can be found on Patreon.


Past Performances

A weekend performance in Oct. 2014 at the Wattis Gallery at CCA in San Francisco, with Em Eifler (virtual reality headset, 360° camera rig) and Arletta Anderson (dance). I wrote the music and installed the audio system. Attendees were treated to a durational performance / installation, a segment of which was recreated in VR with a starkly altered soundtrack. This is the only documentation of the performance that persists, as far as I know.

Hotbox, 2012 – Brian Rogers, Madeline Best. Designed autonomous, multichannel robotic camera and projection system
Selective Memory, 2010 – Brian Rogers, Madeline Best. Designed interactive, multichannel robotic camera and mapped projection system. NYT review here.

I’ll add more when / if I can find documentation for them…