After attending a paper session on a vibration-based metronome, I began to consider the relationship between auditory and non-auditory sensory experiences. In what ways can non-musical stimuli be translated into musical understanding? This eventually led me to the idea of The Train Song.
The idea was simple: while inside a vehicle, map the vehicle’s speed to the tempo of a backing track. If a singer inside the vehicle is able to sing on beat with the track, this suggests an ability to translate the experience of acceleration into a shift in tempo.
I got to work. To control the playback of my audio, I decided to use SuperCollider, “A platform for audio synthesis and algorithmic composition, used by musicians, artists and researchers working with sound.” A tricky question soon arose: how should I handle the time manipulation of the audio? It could either be generated within SuperCollider or imported as a sample. If imported, time-stretching it by changing the playback rate would inevitably shift its pitch as well. A performer would then have to track both tempo and pitch, and slow speeds would produce pitches far too low to sing, so pitch correction would be necessary. But even with a powerful pitch-shifting algorithm to correct for the stretching, artifacts were sure to appear as the train approached zero speed at stops.
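The coupling between playback rate and pitch is easy to quantify: resampling audio by a rate factor r shifts every pitch by 12·log₂(r) semitones. A minimal sketch (plain Python, function name my own) of what low train speeds would have done to an imported track:

```python
import math

def pitch_shift_semitones(rate: float) -> float:
    """Semitone shift caused by playing a sample back at `rate` times
    its original speed (naive resampling, no pitch correction)."""
    if rate <= 0:
        raise ValueError("playback rate must be positive")
    return 12 * math.log2(rate)

# Half speed drops the track a full octave:
print(pitch_shift_semitones(0.5))                # -12.0
# A train crawling at 20% of cruising speed sits ~28 semitones low:
print(round(pitch_shift_semitones(0.2), 1))      # -27.9
```

At a full stop the rate approaches zero and the shift diverges toward negative infinity, which is exactly why no pitch-correction algorithm could have rescued the sampled approach.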
For this reason, I decided to generate my audio within SuperCollider, controlling when note events were triggered instead of playback rate. This gave me an opportunity to develop my skills with the platform. I decided to write a simple song about riding the trains from my hometown, Boston.
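The event-triggering approach boils down to mapping speed to a tempo and scheduling note onsets at the resulting beat interval, with pitch untouched. A hedged sketch of that mapping; the constants (`base_bpm`, `bpm_per_mps`) are invented for illustration, not the values I actually used:

```python
def speed_to_bpm(speed_mps: float, base_bpm: float = 60.0,
                 bpm_per_mps: float = 6.0) -> float:
    """Map vehicle speed (m/s) to a tempo. Constants are illustrative:
    the track never stops entirely, it just slows to base_bpm."""
    return base_bpm + bpm_per_mps * max(speed_mps, 0.0)

def beat_interval(bpm: float) -> float:
    """Seconds between note onsets at a given tempo."""
    return 60.0 / bpm

# Stopped at a station: 60 BPM, one onset per second.
print(beat_interval(speed_to_bpm(0.0)))    # 1.0
# Cruising at 15 m/s (~54 km/h): 150 BPM, 0.4 s between onsets.
print(beat_interval(speed_to_bpm(15.0)))   # 0.4
```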
I created a banjo-like sound using three layered Karplus-Strong string voices. I wrote a short melodic loop to repeat as my song structure, then a melody to be sung along with lyrics about each of the MBTA trains. Finally, I set up the playback controller: I used the mobile app GyrOSC to send OSC messages carrying GPS speed data from my phone to my laptop, and processed the data within SuperCollider.
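For reference, the Karplus-Strong algorithm behind that banjo sound is just a noise burst circulating through an averaging delay line (SuperCollider provides this as a UGen, e.g. `Pluck`). A bare-bones Python version of the idea, written from scratch as an illustration rather than my actual synth:

```python
import random

def karplus_strong(freq: float, duration: float, sr: int = 44100,
                   decay: float = 0.996) -> list[float]:
    """Plucked-string synthesis: a random noise burst circulates through
    a delay line whose length sets the pitch; averaging adjacent samples
    low-passes the loop so the tone decays like a plucked string."""
    n = int(sr / freq)                           # delay length ~ one period
    buf = [random.uniform(-1.0, 1.0) for _ in range(n)]
    out = []
    for _ in range(int(sr * duration)):
        out.append(buf[0])
        # Feed back the average of the two oldest samples, slightly damped.
        buf.append(decay * 0.5 * (buf[0] + buf[1]))
        buf.pop(0)
    return out

samples = karplus_strong(220.0, 0.5)             # half a second of A3
print(len(samples))                              # 22050
```

Layering three of these voices detuned or at different decay settings gives the twangy, banjo-ish composite.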
And so I set out to record. I ran into many hurdles in the process. First, GPS data was sparse underground and caused technical issues when it dropped out, so I had to find a part of the T that runs completely above ground. Next, the degree to which speed changes affected the tempo had to be tuned until the result was aesthetically pleasing. Finally, I had to remember the words! After many hours riding the trains, we finally got a recording. I went home to do some minor editing to the video, adding text explaining how everything works. And I finished with this video:
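Handling the sparse, dropout-prone GPS stream is mostly a matter of holding the last good value and smoothing between fixes. A sketch of that logic; the class and its parameters are my own invention for illustration, not GyrOSC's API (note that some GPS APIs report a negative speed for an invalid fix, which is treated as a dropout here):

```python
class SpeedSmoother:
    """Hold the last valid GPS speed through dropouts and apply
    exponential smoothing so the tempo doesn't jump between fixes."""

    def __init__(self, alpha: float = 0.3):
        self.alpha = alpha   # smoothing factor: higher = more responsive
        self.value = 0.0     # current speed estimate (m/s)

    def update(self, speed_mps):
        if speed_mps is None or speed_mps < 0:   # dropout or invalid fix
            return self.value                     # hold the last estimate
        self.value += self.alpha * (speed_mps - self.value)
        return self.value

s = SpeedSmoother(alpha=0.5)
print(s.update(10.0))   # 5.0
print(s.update(None))   # 5.0  (dropout: hold)
print(s.update(10.0))   # 7.5
```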
Singing this, I felt that the train’s movement helped me to anticipate the shifts in tempo, and it made changing my own tempo feel natural. However, I did not feel as though I could predict the specific tempo, only whether it would be slower or faster. In the future I’d like to study this more rigorously, comparing people’s experiences on vs. off the train, and how repetition over time influences these results.