Abstract
This paper builds upon the real-time audio-visual collaboration that the authors have engaged in since 2014. Previous
iterations of this collaboration focused on formalizing a structured response relationship between the audio and visual
components; more recently, the integration of improvisatory live coding and modular synthesis techniques has enabled
an increasingly responsive and chaotic feedback loop between the two. This paper provides an overview of the technical
details of the AppiOSC, a custom device designed to facilitate streams of correspondent yet unpredictable bi-directional
communication between The Force, a live coding environment for graphics, and the modular synthesizer.
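To illustrate the kind of bi-directional OSC routing the abstract describes, the following is a minimal sketch only, not the AppiOSC implementation: it relays control data arriving from the synthesizer side onward to a graphics environment using the python-osc library. The port numbers and OSC address paths (`/modular/cv`, `/force/uniform`) are assumptions for illustration.

```python
# Minimal sketch (assumed ports and OSC addresses, not the authors' AppiOSC)
# of relaying control data from a modular-synth-side source to a
# graphics live coding environment over OSC.
from pythonosc.udp_client import SimpleUDPClient
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

# Client that forwards incoming values onward to the visual environment.
visuals = SimpleUDPClient("127.0.0.1", 9000)  # assumed receiver port for the graphics side

def on_cv(address, *args):
    """Relay an incoming control-voltage reading to the visuals."""
    value = float(args[0]) if args else 0.0
    visuals.send_message("/force/uniform", value)  # assumed address path

dispatcher = Dispatcher()
dispatcher.map("/modular/cv", on_cv)  # assumed address for synth-side data

# Listen for data arriving from the modular synthesizer side.
server = BlockingOSCUDPServer(("0.0.0.0", 8000), dispatcher)
server.serve_forever()
```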
| Original language | English |
| --- | --- |
| Title of host publication | Second International Conference on Live Coding |
| Number of pages | 6 |
| Publication status | Published - 2016 |
Keywords
- live coding
- modular synthesizer