UCSB Center for Research in Electronic Art Technology
Sound Composition for Sensing/Speaking Space
Stephen Travis Pope
stp@create.ucsb.edu
Outline
• Sensing/Speaking Space Introduction
• The Design Principles (Policies)
• Sound Layers
• The Rumble
• The Singer
• The Speakers
• Responsiveness and Interactivity
The Policies
• Structure and Noise
• Speech and Text
• Use of Old Materials
• Responsiveness and Interactivity
• “Contemplative Place” (Ritual Place) (Zen Garden)
The Tools
• Programs written in Smalltalk, Python, C, C++, and SuperCollider over the span of 18 years (mostly public-domain or open-source tools)
• Some GUIs, some text-based programming languages; the delivery system is camera-only
• Separate manipulation of the sound material, the gesture-level score, and the interactive control
The Tools (2)
• All running on Apple Macintosh G4s (desktops and PowerBooks)
• Camera-to-Director (TrackThemColors), Director plug-in to OSC (by GarryK), SuperCollider programs reading OSC (see the sketch below)
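To make the last step concrete, here is a minimal sketch in Python (rather than the installation's Director/SuperCollider code) of how tracked-camera data can be packed into an OSC message and sent over UDP to a SuperCollider process. The address "/tracker/blob" and its arguments are hypothetical, not the piece's actual message names.

import socket
import struct

def osc_string(s):
    """Encode a string the OSC way: NUL-terminated, padded to a multiple of 4 bytes."""
    b = s.encode("ascii") + b"\x00"
    return b + b"\x00" * ((4 - len(b) % 4) % 4)

def osc_message(address, *args):
    """Build an OSC message whose arguments are all 32-bit floats."""
    msg = osc_string(address) + osc_string("," + "f" * len(args))
    for a in args:
        msg += struct.pack(">f", a)   # big-endian float32
    return msg

# Send one (hypothetical) tracking update: normalized x, y, and blob size.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(osc_message("/tracker/blob", 0.42, 0.63, 0.10), ("127.0.0.1", 57120))

(57120 is the SuperCollider language's default UDP port; the synthesis server listens on 57110.)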
The Sound Layers
• Rumble* -- low constant bell drone
• Ripples -- mode 1, resonated water sounds
• Tingles -- mode 2, resonated hand bells
• Singer* -- constant chanting bells
• Speakers* -- phoneme streams triggered by audience motion
• Transition Pipes -- short transition effects
The Rumble
• Tibetan Dorji bell
• Time-stretched and transposed across 18 Hz to 18 kHz using the “Loris” analysis/synthesis package and several Python programs (see the sketch below)
• Mixed for the Rumble and Singer sources
• Examples
  • Original bell
  • Transposed to extremes
  • Various layer mixes
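As a rough illustration only (not the actual Loris API or the scripts used for the piece), Loris-style analysis yields lists of breakpoint partials, and transposition and time-stretching amount to scaling each breakpoint's frequency and time before resynthesis. The partial data and the scaling factors below are illustrative, not the piece's settings.

def transpose_and_stretch(partials, pitch_ratio, stretch):
    """Scale every breakpoint's frequency by pitch_ratio and its time by stretch."""
    return [[(t * stretch, f * pitch_ratio, a) for (t, f, a) in partial]
            for partial in partials]

# A toy "bell": two partials, each a list of (time, frequency, amplitude) breakpoints.
bell_partials = [
    [(0.0, 440.0, 0.9), (2.0, 438.0, 0.3)],
    [(0.0, 1120.0, 0.5), (2.0, 1118.0, 0.1)],
]

# Drop the bell five octaves and stretch it 16x in time before resynthesis.
rumble_partials = transpose_and_stretch(bell_partials, pitch_ratio=1 / 32, stretch=16.0)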
The Singer
• “Cross-synthesis” of bell clusters and processed T’ang-dynasty poetry/chant
• Uses PhaseVocoder, Loris, and SuperCollider (see the sketch below)
• Examples
  • Vocoded voice
  • Bell texture
  • Cross-synthesis
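A minimal sketch of one common cross-synthesis recipe, assuming NumPy and two mono signals at the same sample rate; it illustrates the general phase-vocoder idea rather than the actual Loris/SuperCollider processing used for the Singer layer. Each output frame takes its magnitude spectrum from the voice and its phase from the bell, so the bell texture "speaks" the chanted text.

import numpy as np

def cross_synthesize(voice, bell, frame=2048, hop=512):
    """Overlap-add cross-synthesis: voice magnitudes combined with bell phases."""
    window = np.hanning(frame)
    n = min(len(voice), len(bell)) - frame
    out = np.zeros(n + frame)
    for i in range(0, n, hop):
        v = np.fft.rfft(voice[i:i + frame] * window)
        b = np.fft.rfft(bell[i:i + frame] * window)
        hybrid = np.abs(v) * np.exp(1j * np.angle(b))   # voice magnitude, bell phase
        out[i:i + frame] += np.fft.irfft(hybrid) * window
    return out / np.max(np.abs(out))                     # normalize after overlap-add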
The Speakers
• The Siren speech segmenter and phoneme database
  • Segmentation
  • Analysis
  • Similarity metrics
• The Database (one entry below; see also the sketch that follows)

// Format: name folder start stop dur maxAmpl rmsAmpl spectralCentroid
//         spectralWidth [spectralBands]
#[ '1.snd', 'Content:Sound:YYYJD:ec:r3:src', 6144, 17280, 0.252517, 17901,
   385.762652, 714.563192, 0.404395,
   [0.059394, 0.063983, 0.078587, 0.125629, 0.415792, 0.823278, 0.901691,
    0.389382, 0.225568, 0.185267, 0.166998, 0.169787, 0.192624, 0.161891,
    0.168954, 0.136718] ]
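For reference, here is a Python sketch (not Siren's own Smalltalk classes) of what each database entry holds, following the field order of the format comment above; the class and field names are mine, not Siren's.

from dataclasses import dataclass
from typing import List

@dataclass
class PhonemeRecord:
    name: str                 # sound file name
    folder: str               # source folder (Mac path)
    start: int                # start frame in samples
    stop: int                 # stop frame in samples
    dur: float                # duration in seconds
    max_ampl: float           # peak amplitude
    rms_ampl: float           # RMS amplitude
    spectral_centroid: float
    spectral_width: float
    bands: List[float]        # 16 normalized spectral-band energies

example = PhonemeRecord(
    "1.snd", "Content:Sound:YYYJD:ec:r3:src", 6144, 17280, 0.252517,
    17901, 385.762652, 714.563192, 0.404395,
    [0.059394, 0.063983, 0.078587, 0.125629, 0.415792, 0.823278, 0.901691,
     0.389382, 0.225568, 0.185267, 0.166998, 0.169787, 0.192624, 0.161891,
     0.168954, 0.136718],
)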
Siren Speech Segmenter
[Figure: the phrase “ICE MELTS” segmented into phoneme regions: I / CE / M / E / L / T / S]
Phoneme Stream Examples
• Selecting Phoneme Streams
  • Selection criteria (distance metric; see the sketch below)
  • Dynamic control: mode, threshold, density, stretch factor, position, volume, etc.
• Triggering and Tracking
  • Not completely deterministic
• Examples
  • Source texts
  • Stream 1
  • Stream 2
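A sketch of the kind of distance-based selection described above, assuming PhonemeRecord entries like the earlier sketch; the specific metric (Euclidean distance over the spectral bands) and the threshold/density handling are illustrative, not the installation's actual algorithm.

import math
import random

def spectral_distance(a, b):
    """Euclidean distance between two spectral-band vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def select_stream(records, target_bands, threshold, density):
    """Keep records within `threshold` of the target spectrum, then thin the
    candidates to roughly the requested `density` (0.0 to 1.0)."""
    near = [r for r in records if spectral_distance(r.bands, target_bands) < threshold]
    if not near:
        return []
    k = max(1, min(len(near), round(len(near) * density)))
    return random.sample(near, k)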
Sensing and Interactivity
• Through-composed version completed in Dec. 2001
• Lab set-up with a monitoring GUI in Jan. for testing of interactivity and intermedia integration
• Different sociology of the layers
• Major transitions and scenes
These slides and sound examples are on the WWW at http://create.ucsb.edu/~stp/SeSpSp.{ppt, sit.hqx, zip}