A Roadmap: Beyond Big APIs
Why Software Component Frameworks May Be Interactive Audio's Future
Chris Grigg
chrisg@control-g.com
Thesis
Big audio APIs compromise delivered audio quality:
• To use them, you have to call their functions;
• Every API uses a different interface;
• So Mixing or Changing APIs is too hard.
This sucks because:
• No single API does everything well, or runs on all platforms.
• Time invested in custom API features doesn't port.
• Many custom APIs & media types have no tools.
Thesis
• Not about specific faults of particular APIs
• A Structural Problem of all big audio APIs
• Programming itself is the bottleneck.
We want to be able to construct runtime audio systems out of arbitrary functional blocks from multiple vendors.
Our Studios Have Many Different Synthesizer Brands Because They're All Good At Different Things – Our Runtime Systems Need Similar Diversity
In this ‘What If’ world: • Units of software organization become Smaller • They become Modular • Each Unit Takes On a Specific Function • Units Have to be able to Talk to one another An Architecture Fundamentally Differentfrom our current big, do-everything Audio API development & use practices.
Things We Need To Get There
1. Software Component Framework with Object Model
2. Audio Signal Interconnect Mechanism
3. Control Signal Interconnect Mechanism
4. Scriptable Control
1. A Standardized Audio Software Component Framework with Object Model
If our audio blocks are components, we need a framework to hold them.
Typical Component Framework
[Diagram: a Framework containing instances of various component classes (SoftwareComponentA–D); a Framework Manager maintains component instances and directs calls from the Host Application.]
GUI Authoring Tool Uses Same Framework & Components As Runtime
Select, Configure, & Connect Components
Save Configuration for Use at Runtime
Having a Uniform, Implementation-Independent Object Model allows:
• Single, Universal Authoring Tool
• Cross-Platform Content
• Cross-Platform Runtime Communication
2 & 3: Interconnection
Audio Signals & Control Signals
Connecting Anything to Anything Requires Control & Audio Interconnect Standards
Most Component Frameworks Don't Have Interconnections to Deal With
No off-the-shelf solution – we need a development effort
2. Audio Interconnection Mechanism Between Components
Audio Signal Flow Between Components…
Doesn't Require Multiple Buffers
Audio Signal Flow Using a Single Buffer
Multiplexed Operations on a Single Buffer
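A minimal sketch of the single-buffer idea in C++, with invented names: each component in the connection chain processes the same buffer in place, so no per-connection buffers are needed.

```cpp
#include <vector>

struct AudioComponent {
    virtual ~AudioComponent() = default;
    virtual void process(float* buffer, int numFrames) = 0;  // in-place
};

struct Gain : AudioComponent {
    float gain = 0.5f;
    void process(float* buffer, int numFrames) override {
        for (int i = 0; i < numFrames; ++i)
            buffer[i] *= gain;               // read & write the same buffer
    }
};

// The framework walks the audio connections in order, multiplexing every
// component's work onto the one shared buffer.
void renderBlock(const std::vector<AudioComponent*>& chain,
                 float* buffer, int numFrames) {
    for (AudioComponent* c : chain)
        c->process(buffer, numFrames);
}
```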
Representing Interconnections in the Framework Object Model
Object Model View: Components and Connections
Everything derives from class Component and class Connection
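As a sketch of what this object model might look like in code (class and member names here are assumptions, not the talk's specification):

```cpp
#include <memory>
#include <string>
#include <vector>

class Component {                        // root of every component class
public:
    explicit Component(std::string name) : name_(std::move(name)) {}
    virtual ~Component() = default;
    const std::string& name() const { return name_; }
private:
    std::string name_;
};

class Connection {                       // root of every connection class
public:
    Connection(Component* src, Component* dst) : src_(src), dst_(dst) {}
    virtual ~Connection() = default;
protected:
    Component* src_;
    Component* dst_;
};

class AudioConnection : public Connection {
public:
    using Connection::Connection;
};

class ControlConnection : public Connection {
public:
    using Connection::Connection;
};

// The framework's view of a configured system: components plus connections.
struct Graph {
    std::vector<std::unique_ptr<Component>>  components;
    std::vector<std::unique_ptr<Connection>> connections;
};
```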
Benefits I
• Better Programmer/Soundmaker Communication
• Increases Clarity of Purpose
• Simpler, Faster Code Development
• Promotes Audio API Proliferation
• Encourages View of Audio APIs as Interchangeable
• Simplifies Porting of Game Engines and Titles
• Encourages Porting Components to New Platforms
3. A Message-Based Mechanism for Control Signal Interconnection Between Software Components
To Avoid Messes Like This
[Diagram: a rat's nest of function calls]
Like MIDI Cables for Software Components
Unified Control Signal – Single-Point Connection
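A minimal sketch of the "MIDI cable" idea, with invented names: one standardized message envelope travels over a single-point connection, and the cable itself knows nothing about the payload vocabulary.

```cpp
#include <cstdint>
#include <vector>

struct ControlMessage {
    uint32_t             selector;  // which command; meaning defined by the receiver
    std::vector<uint8_t> payload;   // command-specific data, opaque to the cable
};

struct ControlInput {
    virtual ~ControlInput() = default;
    virtual void receive(const ControlMessage& m) = 0;
};

// The "cable": a single-point connection that only forwards messages.
struct ControlCable {
    ControlInput* destination = nullptr;
    void send(const ControlMessage& m) {
        if (destination) destination->receive(m);
    }
};
```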
Assemble Controlled Sound Systems from Software Components
Like a Recording Studio: Audio Connections & Control Connections
Host Game Uses Same Control Mechanism
Still Like a Recording Studio: Audio Connections & Control Connections
Components Can Process Control Signals (not just audio)
Examples: Y-Cord, Merger, Adaptor
Easy Communication with External Devices Using Same Model
Consistent Binary Control Signal Format Makes This Possible
Multi-Component 'Templates' Add Configuration Power
Example: Mixer Channel Strip Template
Many Components Instantiated in a Single Step, with Internal Routing, Configuration, Initial Settings
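A sketch of how such a template might be described as data (all names hypothetical): one declarative structure that the framework expands into live instances, internal routing, and initial settings in a single step.

```cpp
#include <string>
#include <utility>
#include <vector>

struct ComponentSpec {
    std::string className;      // e.g. "EQ", "Compressor", "Fader"
    std::string instanceName;
    std::vector<std::pair<std::string, float>> initialSettings;
};

struct RoutingSpec {
    std::string fromInstance;   // internal audio routing within the template
    std::string toInstance;
};

struct TemplateSpec {
    std::string name;
    std::vector<ComponentSpec> components;
    std::vector<RoutingSpec>   routing;
};

// A channel-strip template: the framework would expand this into live,
// connected, initialized component instances in one step.
TemplateSpec channelStrip{
    "MixerChannelStrip",
    { {"EQ",         "eq1",    {{"HighShelfGain", 0.0f}}},
      {"Compressor", "comp1",  {{"Ratio", 4.0f}}},
      {"Fader",      "fader1", {{"Level", 0.8f}}} },
    { {"eq1", "comp1"}, {"comp1", "fader1"} }
};
```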
Control Signals Give Us Knobs
For Both the Programmer and the Soundmaker
Standardize the Control Signal Connection
But Not the Command Set, Parameter Set, etc.
Arbitrary Messages On The Connection
Don't Predefine the Message Vocabulary. Let Every Component Define Its Own.
How Do You Know What Messages a Component Accepts? You Ask It.
Every Component Has Its Own Command Set… And Publishes It
This Is How The GUI Authoring Tool Knows What Each Component Can Do.
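A minimal sketch, with invented names, of a self-describing component: ask it for its interface description and it returns the messages its control inputs understand, which is exactly what the authoring tool would read.

```cpp
#include <string>
#include <vector>

struct MessageDescription {
    std::string name;       // e.g. "SetVolume"
    std::string protocol;   // e.g. "Fixed16", "MIDI Messages", "Boolean Messages"
    std::string meaning;    // human-readable interpretation
};

struct ControlInputDescription {
    std::string name;                           // e.g. "MIDI In"
    std::vector<MessageDescription> messages;   // the input's command set
};

struct DescribableComponent {
    virtual ~DescribableComponent() = default;
    // "You ask it": every component publishes its own command set.
    virtual std::vector<ControlInputDescription> describeControlInputs() const = 0;
};
```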
Simple Piano Component
Simple Example: Two Control Inputs
Simple Piano Component: Control Interface Description
ControlInput1:
  Name: MIDI In
  Protocol: MIDI Messages
  Interpretation: (MIDI Implementation Chart appears here)
ControlInput2:
  Name: Sustain Switch In
  Protocol: Boolean Messages
  Interpretation: low bit 1 = pedal down, 0 = up
Ask the Component for its Interface Description and you get a list of the Messages that its Control Inputs Understand.
3D Panner Component
More Complex Example: One Control Input
3D Panner Component: Control Interface Description
Commands:
  SetVolume( Fixed16 )              -- Master out volume
  Mute( Boolean )                   -- Mute (TRUE) or pass audio
  SetSpatialPosition( Std3DVector ) -- 3-space position
The Messages that the Control Input Understands
But Weren’t We Trying to Get Away from Function Calls? Commands: SetVolume( Fixed16 ) -- Master out volume Mute( Boolean ) -- Mute (TRUE) or Pass audio SetSpatialPosition( Std3DVector ) -- 3-space This looks just like a Big Audio API!
True, this semantic is similar to the HLL function calls… but it's expressed differently at runtime.
SetVolume( Fixed16 )
Mute( Boolean )
SetSpatialPosition( Std3DVector )
Not as compile-based function calls, but rather as Data-Based Messages that can be routed dynamically at runtime.
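A minimal sketch of the distinction: the same SetVolume/Mute semantics, but carried as data messages decoded by the receiving component at runtime rather than bound as function calls at compile time. Selector values and encodings here are assumptions, not the talk's specification.

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

enum : uint32_t { kSetVolume = 1, kMute = 2, kSetSpatialPosition = 3 };

struct Message {
    uint32_t             selector;
    std::vector<uint8_t> payload;
};

struct PannerComponent {
    float volume = 1.0f;
    bool  muted  = false;

    // One generic entry point replaces N compiled function signatures,
    // so messages can be routed to it dynamically at runtime.
    void receive(const Message& m) {
        switch (m.selector) {
        case kSetVolume:
            if (m.payload.size() >= sizeof(float)) {  // float stands in for Fixed16
                float v;
                std::memcpy(&v, m.payload.data(), sizeof v);
                volume = v;
            }
            break;
        case kMute:
            muted = !m.payload.empty() && m.payload[0] != 0;
            break;
        // kSetSpatialPosition would decode a 3-vector the same way
        }
    }
};
```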
How do Components Publish their Interface Descriptions at Runtime and Authoring Time?
• Some Invention Is Required Here.
• Learn from existing solutions to similar problems:
  • COM
  • CORBA's IDL
  • Apple Event Terminology Resources
  • XML's DTD
  • MAX Objects
4. Adding the Scripting Layer
Scripted Control of All Audio Components & Media
• Game Uses Same Sound Event Cues On All Platforms
• Components, Framework & Script Player Get Ported to All Platforms
• Sound Dept. Delivers Platform-Independent Media & Scripts
With Enough Component Types and Media Types, the 'Ultimate Interactive Audio Architecture'
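A minimal sketch (all names invented) of the scripting layer's shape: the game triggers a named sound event cue, and a script player translates it into whatever control messages the sound department authored for that platform.

```cpp
#include <functional>
#include <map>
#include <string>
#include <utility>
#include <vector>

struct CueScript {
    // Each step is a deferred control-message send, authored by the sound dept.
    std::vector<std::function<void()>> steps;
    void run() const {
        for (const auto& step : steps) step();
    }
};

class ScriptPlayer {
    std::map<std::string, CueScript> cues_;
public:
    void loadCue(const std::string& name, CueScript script) {
        cues_[name] = std::move(script);
    }
    // The host game's whole audio interface can reduce to this one call,
    // identical on every platform the framework is ported to.
    void trigger(const std::string& cueName) {
        auto it = cues_.find(cueName);
        if (it != cues_.end()) it->second.run();
    }
};
```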
Benefits II
• Simplified Control of Sound From Game
• Increased API Experimentation Flexibility
• Flexible Patching of Control Sources & Destinations
• Encourages Plug-In Style Development
• Ability to Transform Game Control Variables
• Sound Designer Control over Interactivity
• Auto-Porting of Interactive Audio Content
Conclusion
An essentially non-commercial idea:
• The limits of big audio APIs are structural.
• They favor programmer contributions while marginalizing soundmaker contributions – through poor or missing tools, but also by virtue of their fundamentally compile-based architectures.
• This dynamic hurts delivered audio quality.
• Open standards for component software appear to offer a way out, and might serve as the basis of a generalized interactive audio methodology as universal and stable as MIDI has been.