Dispersion is a project whose aim is to enhance the functionality and expressivity of The Federation Bells, by way of the human voice.
This is being achieved by developing software that can analyze the spoken and sung voice in real time, then remap and interpret that analysis as sonic events and gestures enacted and articulated through the mechanism of the bells.
The voice will be dispersed through the bells. The voice, in its entirety, plays the bells, utterly.
System prototyping and development by Terence McDermott in collaboration with experimental vocalist Carolyn Connors.
This project is collaborative in nature. Carolyn and Terry will work together to produce a tool that accommodates their ideas, and which can be adapted to other performers and performance contexts in the future.
The intention, then, is to provide an original piece of software as a performance interface, together with the ongoing development of performance potentials as a structural outcome of the software's inception.
Supported by the City of Melbourne COVID-19 Arts Grants
The software will analyze the frequency content of the voice, using specific digital signal processing techniques, so that salient features of the voice are extracted. As well as pitch-tracking, other more sophisticated analyses can be done, such as the shape and trajectory of a vocal utterance, its tonal quality, sibilance, intensity and so forth. These real-time analyses are a vehicle to automatically and intelligently activate or "play" the bells.
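As a rough illustration of the pitch-tracking step described above, here is a minimal sketch using autocorrelation, one common technique for real-time fundamental-frequency estimation. The function name, parameters, and the autocorrelation approach itself are assumptions for illustration; the project's actual analysis chain is not specified.

```python
import numpy as np

def estimate_pitch(signal, sample_rate, fmin=80.0, fmax=1000.0):
    """Estimate the fundamental frequency of a frame by autocorrelation.

    This is a sketch of one standard method, not the project's actual
    algorithm: the strongest autocorrelation peak within the plausible
    vocal range is taken as the period of the voice.
    """
    signal = signal - np.mean(signal)            # remove DC offset
    corr = np.correlate(signal, signal, mode="full")
    corr = corr[len(corr) // 2:]                 # keep non-negative lags
    lag_min = int(sample_rate / fmax)            # shortest period of interest
    lag_max = int(sample_rate / fmin)            # longest period of interest
    lag = lag_min + np.argmax(corr[lag_min:lag_max])
    return sample_rate / lag

# A synthesized 440 Hz sine should be detected close to 440 Hz.
sr = 44100
t = np.arange(0, 0.05, 1 / sr)
tone = np.sin(2 * np.pi * 440.0 * t)
detected = estimate_pitch(tone, sr)
```

In a live setting, this estimation would run on short overlapping frames of microphone input, with the resulting pitch stream feeding the bell-mapping stage.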
Writing the software, like any software development process, is iterative: we start with a simple idea, test it, then see how we can build on it, informed by the results of the test.
The software will be tested with Carolyn at each stage of development. We'll basically be trying out different ideas: while Terry writes fragments of functional code, Carolyn will try them out, and the outcomes will generate performance ideas and a "vocabulary" of vocal elements that can be built upon with each iteration of the development cycle.
Some simple ideas, starting points:
The bells are just-intoned, so they don't correspond exactly to the ubiquitous 12-tone equal temperament we are used to in western music. This is great for the voice, whose "tuning" is unconstrained: there are no fretboards or predetermined keys the voice has to adhere to. We can simply map the actual pitches of the bells to the same set of pitches articulated by the voice.
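The nearest-pitch mapping described here can be sketched as follows. The just-intonation ratios and the 220 Hz fundamental below are hypothetical placeholders, not the actual Federation Bells tuning, which would be substituted in; the distance is measured in cents so that "nearest" is judged logarithmically, matching pitch perception.

```python
import math

# Hypothetical just-intonation ratios over an arbitrary 220 Hz fundamental.
# The real Federation Bells pitch set would replace these values.
FUNDAMENTAL = 220.0
RATIOS = [1, 9/8, 5/4, 4/3, 3/2, 5/3, 15/8, 2]
BELL_PITCHES = [FUNDAMENTAL * r for r in RATIOS]

def nearest_bell(voice_hz):
    """Return the bell pitch closest to the sung pitch, with distance
    measured in cents (1200 * log2 of the frequency ratio)."""
    return min(BELL_PITCHES, key=lambda b: abs(1200 * math.log2(voice_hz / b)))

# A voice at 330 Hz (a just fifth above 220 Hz) maps to the 3/2 bell.
print(nearest_bell(330.0))  # → 330.0
```

Because the comparison is in cents rather than raw hertz, a voice slightly sharp or flat of a bell still snaps to the perceptually nearest one.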
We do not need to limit the articulation of the bells to matching the pitch of the voice with the nearest pitch-class of a particular bell. We can include unpitched material, such as plosives, fricatives, sibilants, unpitched shouts, screams. How the software interprets these sorts of vocal events is entirely up to us.
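One simple way to detect such unpitched material is the zero-crossing rate: noisy, sibilant sounds cross zero far more often than pitched, harmonic ones. The function names and the threshold below are assumptions for illustration, one plausible heuristic rather than the project's actual classification rule.

```python
import numpy as np

def zero_crossing_rate(frame):
    """Fraction of adjacent samples whose signs differ; high for
    noise-like (sibilant) material, low for pitched material."""
    signs = np.sign(frame)
    return float(np.mean(signs[:-1] != signs[1:]))

def is_unpitched(frame, threshold=0.3):
    # Hypothetical rule: treat high-ZCR frames as unpitched events
    # (sibilants, fricatives) and route them to percussive bell gestures.
    return zero_crossing_rate(frame) > threshold

sr = 44100
t = np.arange(0, 0.05, 1 / sr)
tone = np.sin(2 * np.pi * 220.0 * t)      # pitched: very low ZCR
rng = np.random.default_rng(0)
noise = rng.standard_normal(t.size)       # noise-like: ZCR near 0.5
```

A fuller system would likely combine this with spectral measures (flatness, centroid) to separate, say, a fricative from a shout, but a single frame-level feature is enough to branch between "pitched" and "unpitched" interpretations.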
The idea of gesture will be important: how a vocal gesture can be interpreted and re-articulated as a different gesture by the bells, and how that can create a performance dialogue.
This project was initiated during the first months of the appearance of Covid-19 in Australia.
Anticipating that our working processes would be vastly different from any previous project either of us had worked on, we decided to keep an online notebook that tracked some of our activities.
This is not so much a website about the work as a series of snapshots of each month's activities across the 2020 winter.
As such, we consider this a blog, a living document that is updated regularly and represents the evolution of the project and the ideas it generates as it unfolds.