SCI based scalable video on demand server
Pascal Vandeputte, Siemens Computer Systems
SCI Europe '99 - Toulouse, 2/3 September 1999
Topics
• Siemens Video on Demand activities
• Video on Demand scenario
• Streaming video, hardware and software constraints
• Building a Video on Demand service
• Small Video on Demand cluster prototype
Siemens VoD activities
• AMUSE Esprit project (Advanced Multimedia Services for residential Users)
  • Provide the server platform for VoD and WWW services for about 10 European field trials
  • Partners: Italtel, German Telecom, Italian Telecom, Swiss Telecom, Portuguese Telecom, Island Telecom, Siemens, Acorn, Videotime, ...
• DAM Esprit project (DAVIC Accompanying Measures)
  • Provide a DAVIC-compliant VoD server
  • Partners: CCETT, Deutsch Telecom, Italtel, TF1, Matra, ...
• Siemens NetVideo product
  • Client/server solution for streaming video data within intranets
• SCI Europe Esprit project
  • Demonstrate the feasibility of an SCI cluster interconnect for a high-performance video server application
  • Build a phase 1 demonstrator using existing SCI technology
  • Build a phase 2 demonstrator using technology developed within the project
Video on Demand scenario
[Diagram: a multimedia PC and a set-top box as clients; a WWW server delivers HTML pages and TV programme information; a multimedia server receives VCR commands from the clients and returns the multimedia stream.]
VoD service architecture
[Diagram: service architecture around the NetVideo Pump. Components: client control process (VCR commands), session and stream management, downstream management, file management, and a network I/O module that carries the video stream.]
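The slide does not show how the pump paces its output; the sketch below is purely illustrative (not the NetVideo implementation, all names are hypothetical) of what such a pump process has to do: read fixed-size chunks of the MPEG-2 file obtained from file management and hand them to the network I/O module at the stream's bitrate.

```python
import time

def pump_stream(video_file, send, bitrate_bps=5_000_000, chunk=64 * 1024):
    """Illustrative pacing loop: push chunk-sized pieces of an MPEG-2 file
    to send() at roughly bitrate_bps, as a VoD pump process would."""
    interval = chunk * 8 / bitrate_bps          # seconds per chunk at the target bitrate
    with open(video_file, "rb") as f:
        next_deadline = time.monotonic()
        while data := f.read(chunk):
            send(data)                          # hand off to the network I/O module
            next_deadline += interval
            delay = next_deadline - time.monotonic()
            if delay > 0:
                time.sleep(delay)               # keep the output rate at ~5 Mbit/s
```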
Building a VoD service
Requirements
• 500 MPEG-2 5 Mbit/s clients (unicast)
• At least 50 films online
Hardware needs - sample configuration (rough figures only):
• Throughput
  • 1 client => 630 kbytes/s
  • 500 clients => 312 Mbytes/s aggregate throughput
• Streams per system
  • PCI32 bandwidth: 100 Mbytes/s
  • 50 Mbytes/s disk (loading) => 3 x 20 Mbytes/s RAID units per system
  • 50 Mbytes/s network (streaming)
  • 50 Mbytes/s / 630 kbytes/s => max. 80 streams per system => at least 7 streaming systems
• Disk size
  • 2-hour film => 4.4 Gbytes
  • 50 films => 220 Gbytes
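The sizing above is simple arithmetic; a small Python sketch of the same back-of-envelope calculation (decimal units assumed; it lands at ~4.5 and ~225 Gbytes, which the slide rounds to 4.4 and 220 Gbytes):

```python
# Back-of-envelope sizing for the VoD cluster, reproducing the slide's figures.
MBIT = 1_000_000 / 8                                 # bytes per megabit

clients = 500
stream_rate = 5 * MBIT                               # 5 Mbit/s MPEG-2 ~ 625 kbytes/s per client
films = 50
film_length_s = 2 * 3600                             # 2-hour film

aggregate = clients * stream_rate                    # ~312 Mbytes/s total throughput
net_per_system = 50e6                                # 50 Mbytes/s streaming budget per node
streams_per_system = int(net_per_system // stream_rate)   # 80 streams per node
systems = -(-clients // streams_per_system)          # ceiling division -> 7 nodes

film_size = stream_rate * film_length_s              # ~4.5 Gbytes per film
total_disk = films * film_size                       # ~225 Gbytes of online content

print(f"aggregate throughput : {aggregate / 1e6:.0f} Mbytes/s")
print(f"streams per system   : {streams_per_system}")
print(f"streaming systems    : {systems}")
print(f"film size            : {film_size / 1e9:.1f} Gbytes")
print(f"total online content : {total_disk / 1e9:.0f} Gbytes")
```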
Monoliths vs. clusters
Without clustering
• 500 clients => 7 systems => 3 RAID units per system
• 50 films online on each server => 220 Gbytes per system => 1540 Gbytes in total
• The film database is duplicated 7 times
• Increasing the database size or the number of clients is difficult and expensive
With clustering
• 500 clients => 7 systems => 3 RAID units per system
• 50 films online for the cluster => 220 Gbytes per cluster
• No need to duplicate the film database
• Increasing the database size or the number of clients is easy and cheaper
• High availability is possible
Clustering network bandwidth > 300 Mbytes/s => SCI
• 1 SCI board is cheaper than 220 Gbytes of disk
• Content management is easier
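The storage trade-off is the same numbers rearranged; a minimal sketch using the figures from the previous slide:

```python
systems = 7
content_set_gb = 220          # 50 films online, from the sizing above
aggregate_mbytes_s = 312      # total stream throughput the cluster must source

without_clustering = systems * content_set_gb   # every node keeps a full copy of the films
with_clustering = content_set_gb                # one shared copy, served over the interconnect

print(f"disk without clustering : {without_clustering} Gbytes")   # 1540 Gbytes
print(f"disk with clustering    : {with_clustering} Gbytes")      # 220 Gbytes
print(f"interconnect must carry : >{aggregate_mbytes_s} Mbytes/s -> SCI")
```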
NetVideo cluster agents
[Diagram: a NetVideo Manager coordinating NetVideo MediaServers 1-3 and NetVideo Pumps 1 and 2, each pump with its own client control.]
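The slide names the agents but not how a session is placed on a pump; the following is a purely illustrative sketch (not NetVideo's API, all names hypothetical) of a manager assigning each new client session to the least-loaded pump:

```python
from dataclasses import dataclass, field

@dataclass
class Pump:
    name: str
    max_streams: int = 80        # per-node limit from the sizing slide
    active: int = 0

@dataclass
class Manager:
    """Illustrative stand-in for the NetVideo Manager: picks the least-loaded
    pump for a new client session; media servers hold the shared content."""
    pumps: list[Pump] = field(default_factory=list)

    def open_session(self, client_id: str) -> Pump:
        pump = min(self.pumps, key=lambda p: p.active)
        if pump.active >= pump.max_streams:
            raise RuntimeError("cluster is at capacity")
        pump.active += 1
        return pump

manager = Manager([Pump("pump-1"), Pump("pump-2")])
print(manager.open_session("client-42").name)   # -> pump-1
```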
Small cluster prototype demo
[Diagram: a client / client emulator and a WWW server connected via Gigabit Ethernet to a node running client control and the NetVideo Pump, linked over SCI to the NetVideo MediaServer and NetVideo Manager with a 40-Gbyte RAID5 array.]