Appliances, Clusters, Microcode: Trends in Post-production Infrastructure
Tom Burns, Technicolor Creative Services
Disclaimer
The views expressed herein are exclusively those of the presenter, and are not indicative of any Thomson / Technicolor official position, nor an endorsement of any current or future technology development or strategy.
Appliances
• Turnkey workstations: Bosch FGS-4000, Quantel Paintbox, Ampex ADO
• Proprietary circuit boards: expensive and time-consuming to improve
• Custom software: steep learning curve for developers
Clusters
• High Performance Computing
  • Fast, low-latency network
  • Shared storage
  • All nodes work on the same task
• 3D render farm
  • "Embarrassingly parallel", i.e. one frame per CPU (see the sketch below)
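As a minimal sketch of that "embarrassingly parallel" pattern, assuming a hypothetical render_frame stub and an illustrative 240-frame range rather than any real farm's API:

```python
# Sketch: an "embarrassingly parallel" render farm in miniature.
# Each frame is independent, so frames can simply be handed out
# one per CPU core with no inter-node communication.
from multiprocessing import Pool, cpu_count

def render_frame(frame):
    # Placeholder for the real renderer invocation; assumed here
    # purely for illustration.
    return f"shot_v001.{frame:04d}.exr"

if __name__ == "__main__":
    frames = range(1, 241)                 # a 10-second shot at 24 fps
    with Pool(processes=cpu_count()) as farm:
        outputs = farm.map(render_frame, frames)   # one frame per CPU
    print(f"rendered {len(outputs)} frames")
```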
Technology Migration over Time
"Software running on general-purpose computers will outlast custom hardware every time" – but it might take years to catch up.
Up and Down the Technology Stack
• How can we predict which innovations are likely to succeed?
• Business processes (VFX == "pipeline", Post == "workflow") move up the stack
• Appliances move down the stack
• HW & SW solutions (once paid off) become appliances
• Software evolves much more quickly than either of these
• Confusion between continuous and discontinuous innovation is the cause of many technology product failures
Merging infrastructure and workflow == pipelining
• Compression: JPEG-2000, H.264
• Transcoding software: VBR, multi-pass
• Audio QC: automated, file-based, faster than real-time
• Deliverables: multiple simultaneous file-based renders (see the sketch below)
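A minimal sketch of driving multiple file-based deliverable renders simultaneously; ffmpeg, the codec choice, and the folder layout are stand-ins assumed for illustration, not tools named in this talk:

```python
# Sketch: kick off several file-based deliverable transcodes at once.
# Each subprocess runs independently, so throughput scales with the
# number of concurrent jobs rather than real-time playout speed.
import subprocess
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

def transcode(source: Path) -> Path:
    target = source.with_suffix(".mp4")
    # ffmpeg is used here purely as an example encoder.
    subprocess.run(
        ["ffmpeg", "-y", "-i", str(source), "-c:v", "libx264", str(target)],
        check=True,
    )
    return target

if __name__ == "__main__":
    masters = sorted(Path("deliverables/in").glob("*.mov"))
    with ThreadPoolExecutor(max_workers=4) as pool:
        for done in pool.map(transcode, masters):
            print("delivered", done)
```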
Migrate bottlenecked processes to GPU
• SIMD – Single Instruction, Multiple Data
• Shared memory instead of a message-passing architecture
• Memory accesses are expensive; computation is cheap, so pack code and data (sketched below)
• RapidMind development tools
• Targets: AMD and Intel multi-core x86 CPUs, ATI/AMD FireStream 9250 GPU, NVIDIA Tesla 10P, Cell Broadband Engine (PS3)
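To illustrate why packing data pays off when memory access is the bottleneck, here is a CPU-side NumPy sketch standing in for the SIMD/GPU case; the grading function and frame size are assumptions for illustration only:

```python
# Sketch: the same per-pixel grade written two ways. The packed version
# applies one vectorized (SIMD-friendly) operation to the whole array,
# instead of touching each scalar individually.
import numpy as np

frame = np.random.rand(2160, 4096).astype(np.float32)   # one 4K plane

# Scalar view: one multiply-add per element, fetched one at a time.
def grade_scalar(img, gain=1.2, lift=0.05):
    out = np.empty_like(img)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            out[y, x] = img[y, x] * gain + lift
    return out

# Packed view: the identical multiply-add over the whole array at once.
def grade_packed(img, gain=1.2, lift=0.05):
    return img * gain + lift

graded = grade_packed(frame)
```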
GPU De-Bayering = Data-Level Parallelism
• Packing scalar data into RGBA in texture memory suits the GPU architecture very well
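A minimal CPU-side sketch of that data-level parallelism, assuming an RGGB mosaic and a half-resolution reconstruction; NumPy slicing stands in here for the packed RGBA texture fetches a GPU would use:

```python
# Sketch: every 2x2 Bayer cell (assumed RGGB) is processed the same way,
# which is exactly the access pattern that packing four samples into one
# RGBA texel exploits on the GPU.
import numpy as np

def debayer_half(raw):
    """Half-resolution de-Bayer of an RGGB mosaic: one RGB pixel per cell."""
    r  = raw[0::2, 0::2]
    g1 = raw[0::2, 1::2]
    g2 = raw[1::2, 0::2]
    b  = raw[1::2, 1::2]
    g  = (g1 + g2) / 2.0               # average the two green samples
    return np.dstack([r, g, b])        # pack the scalar planes into RGB

raw = np.random.rand(2160, 4096).astype(np.float32)   # simulated sensor data
rgb = debayer_half(raw)                                # shape (1080, 2048, 3)
```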
Object intelligence migrates from blocks to files
• A file system contains a certain amount of intelligence in the file itself (see the sketch below), in the form of:
  • the filename + numerical extension
  • directory placement (pathname)
  • attributes (e.g. atime, mtime)
• Post uses SAN, VFX uses NAS: performance and cost
• Intelligence "built in" at the object level allows more flexibility in the pipeline without changing the infrastructure
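A minimal sketch of reading that built-in intelligence back out of a frame on disk; the naming convention and directory layout assumed below are illustrative, not a Technicolor standard:

```python
# Sketch: the "intelligence" a frame already carries on a file system:
# its name and numeric frame counter, its place in the directory tree,
# and its attributes.
import re
from pathlib import Path

FRAME_RE = re.compile(r"^(?P<shot>.+)\.(?P<frame>\d+)\.(?P<ext>\w+)$")

def describe(path: Path) -> dict:
    m = FRAME_RE.match(path.name)
    st = path.stat()
    return {
        "shot":    m.group("shot") if m else None,      # from the filename
        "frame":   int(m.group("frame")) if m else None,
        "project": path.parent.parts[-2:],              # from the pathname
        "atime":   st.st_atime,                         # from the attributes
        "mtime":   st.st_mtime,
    }

# e.g. describe(Path("/mnt/show/seq010/shot_v001.0042.exr"))
```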
Enterprise Service Bus Goals
• Remap the fixed stack into a flexible pipeline
• Plan exit costs of HW as well as entry costs
• Adapt to changing business cycles
Designing an Enterprise Service Bus for Post
• Virtualize dedicated h/w processes on clusters
• Profile bottlenecks and provide GPU support for them
• Re-factor the pipeline for different projects
• Swap out software to adapt to business cycles
• Service orchestration: the "Project Coordinator" moves up the stack (QC, Delivery, Audit, Monitoring)
• Scale to a distributed bus: decentralized, smart endpoints (see the sketch below)
• The post facility becomes the hub of a networked community
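A minimal in-process sketch of the bus idea, with made-up topic names and services: endpoints stay decoupled and only share message topics, so a QC, delivery, or monitoring step can be swapped without re-plumbing the pipeline:

```python
# Sketch: a tiny service bus. Endpoints subscribe to topics; the
# orchestration layer only publishes events, so individual services
# (QC, delivery, monitoring) can be added or replaced independently.
from collections import defaultdict

class ServiceBus:
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, payload):
        for handler in self._subscribers[topic]:
            handler(payload)

bus = ServiceBus()
bus.subscribe("render.done", lambda p: print("QC queued for", p["shot"]))
bus.subscribe("render.done", lambda p: print("deliverable queued for", p["shot"]))
bus.subscribe("qc.failed",   lambda p: print("alert:", p))

# The "Project Coordinator" role becomes event publishing:
bus.publish("render.done", {"shot": "seq010_shot_v001"})
```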