CS 414 – Multimedia Systems Design
Lecture 30 – Media Server (Part 6)
Klara Nahrstedt, Spring 2011
Administrative
• MP3 is in progress
Outline
• Example of Multimedia File System – Symphony
  • Two-level Architecture
  • Cello Scheduling Framework at Disk Management level
  • Buffer Subsystem
  • Video Module
  • Caching
• Example of Industrial Multimedia File System – Tiger Shark System
Example Multimedia File System (Symphony)
• Source: P. Shenoy et al., “Symphony: An Integrated Multimedia File System”, SPIE/ACM MMCN 1998
• System out of UT Austin
• Symphony’s goals:
  • Support real-time and non-real-time requests
  • Support multiple block sizes and control over their placement
  • Support a variety of fault-tolerance techniques
  • Provide a two-level metadata structure so that all type-specific information can be supported
Design Decisions
Two-Level Symphony Architecture
• Resource Manager:
  • Disk scheduling system (called Cello) that uses modified SCAN-EDF for real-time requests and C-SCAN for non-real-time requests, as long as deadlines are not violated (SCAN-EDF is sketched below)
  • Admission control and resource reservation for scheduling
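To make the SCAN-EDF idea concrete, here is a minimal illustrative sketch (not Symphony's actual code): requests are ordered primarily by deadline, and ties are broken in SCAN (elevator) order relative to the current head position. The `Request` type and tie-breaking details are assumptions for illustration.

```python
# Minimal SCAN-EDF sketch: earliest deadline first; among equal
# deadlines, serve tracks in SCAN order relative to the disk head.

from dataclasses import dataclass

@dataclass
class Request:
    track: int        # disk track the request touches
    deadline: float   # absolute deadline (real-time requests)

def scan_edf_order(requests, head_pos=0):
    """Return requests sorted by (deadline, SCAN position)."""
    def key(r):
        ahead = r.track >= head_pos
        # Ahead-of-head tracks ascending first, then behind-the-head
        # tracks descending (the reverse sweep).
        return (r.deadline, not ahead, r.track if ahead else -r.track)
    return sorted(requests, key=key)

reqs = [Request(80, 2.0), Request(10, 2.0), Request(40, 1.0)]
for r in scan_edf_order(reqs, head_pos=20):
    print(r)   # track 40 first (earliest deadline), then 80, then 10
```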
Disk Subsystem Architecture
• Service Manager: supports mechanisms for efficient scheduling of best-effort, aperiodic real-time, and periodic real-time requests
• Storage Manager: supports mechanisms for allocation and de-allocation of blocks of different sizes, and for controlling data placement on the disk
• Fault Tolerance layer: enables multiple data-type-specific failure recovery techniques
• Metadata Manager: enables data-type-specific structures to be assigned to files
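One way to picture the layering is as a stack of interfaces, one per layer. The layer names come from the slide; the method names below are purely illustrative assumptions, not Symphony's API.

```python
# Hedged sketch of the four disk-subsystem layers as interface stubs.

class ServiceManager:            # schedules best-effort / real-time requests
    def submit(self, request, request_class): ...

class StorageManager:            # blocks of different sizes, placement control
    def allocate(self, nbytes, placement_hint=None): ...
    def free(self, block): ...

class FaultToleranceLayer:       # per-data-type failure recovery
    def recover(self, block, technique): ...

class MetadataManager:           # data-type-specific file structure
    def index_for(self, file_id, data_type): ...
```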
Cello Disk Scheduling Framework (Source: Prashant Shenoy, 2001)
Class-Independent Scheduler (Source: Prashant Shenoy, 2001)
Class-Specific Schedulers
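The two-level structure of Cello can be sketched as follows: a class-independent scheduler divides disk time among the classes in proportion to their weights, while each class-specific scheduler decides the order of requests within its own queue. This is a minimal illustration under assumed names, not the Cello implementation.

```python
# Two-level Cello-style scheduling sketch (names are illustrative).

class ClassQueue:
    def __init__(self, weight, order_fn):
        self.weight = weight      # share of disk bandwidth for this class
        self.order_fn = order_fn  # class-specific policy (EDF, C-SCAN, FCFS...)
        self.pending = []

    def next_batch(self, n):
        self.pending = self.order_fn(self.pending)
        batch, self.pending = self.pending[:n], self.pending[n:]
        return batch

def class_independent_schedule(queues, slots_per_round=10):
    """Move requests from class queues into one scheduled queue per
    round, in proportion to each class's weight."""
    total = sum(q.weight for q in queues)
    scheduled = []
    for q in queues:
        n = max(1, round(slots_per_round * q.weight / total))
        scheduled.extend(q.next_batch(n))
    return scheduled

rt = ClassQueue(weight=3, order_fn=lambda q: sorted(q, key=lambda r: r[0]))  # EDF
be = ClassQueue(weight=1, order_fn=lambda q: q)                              # FCFS
rt.pending = [(2.0, "video1"), (1.0, "video2")]
be.pending = ["ls", "cp"]
print(class_independent_schedule([rt, be], slots_per_round=4))
# -> [(1.0, 'video2'), (2.0, 'video1'), 'ls'] : 3 RT slots, 1 best-effort slot
```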
Validation: Symphony’s Scheduling System (Cello) (Source: Prashant Shenoy, 2001)
Buffer Subsystem
• Enables multiple data-type-specific caching policies to coexist
• Partitions the cache among the various data types and allows each caching policy to independently manage its partition
• Maintains two buffer pools:
  • a pool of de-allocated buffers
  • a pool of cached buffers
• The cache pool is further partitioned among the various caching policies
• The buffer that is least likely to be accessed is stored at the head of the list (example caching policies per cache partition: LRU, MRU)
Buffer Subsystem (Protocol)
• Receive a buffer allocation request
• Check whether the requested block is cached:
  • If yes, return the requested block
  • On a cache miss, allocate a buffer from the pool of de-allocated buffers and insert this buffer into the appropriate cache partition
• The caching policy that manages the individual cache partition determines the buffer’s position in the buffer cache
• If the pool of de-allocated buffers falls below a low watermark, buffers are evicted from the cache and returned to the de-allocated pool
  • Use TTR (Time To Re-access) values to determine victims
  • TTR – an estimate of the next time at which the buffer is likely to be accessed
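A compact sketch of this protocol, assuming a simple dictionary-based cache and a caller-supplied TTR estimate (Symphony's real data structures are richer than this):

```python
# Two pools: free (de-allocated) buffers and cached buffers. Eviction
# picks victims with the largest TTR (latest expected re-access time).

LOW_WATERMARK = 2

class BufferCache:
    def __init__(self, nbuffers):
        self.free = [bytearray(4096) for _ in range(nbuffers)]
        self.cached = {}   # block_id -> (buffer, ttr)

    def get(self, block_id, ttr, read_block):
        if block_id in self.cached:             # cache hit
            buf, _ = self.cached[block_id]
            self.cached[block_id] = (buf, ttr)  # refresh TTR estimate
            return buf
        if len(self.free) <= LOW_WATERMARK:     # below watermark: evict
            victim = max(self.cached, key=lambda b: self.cached[b][1])
            self.free.append(self.cached.pop(victim)[0])
        buf = self.free.pop()
        read_block(block_id, buf)               # cache miss: fetch from disk
        self.cached[block_id] = (buf, ttr)
        return buf

cache = BufferCache(4)
buf = cache.get(17, ttr=5.0, read_block=lambda b, dst: None)  # simulated read
```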
Video Module
• Implements policies for placement, retrieval, metadata management, and caching of video data
• Placement of video files on disk arrays is governed by two parameters: block size and striping policy
  • Supports both fixed-size blocks (fixed number of bytes) and variable-size blocks (fixed number of frames)
  • Uses location hints to minimize seek and rotational latency overheads
• Retrieval policy:
  • Supports periodic real-time requests (server-push mode) and aperiodic real-time requests (client-pull mode)
Video Module (Metadata Management)
• To allow efficient random access at both byte level and frame level, the video module maintains a two-level index structure
• The first level of the index, referred to as the frame map, maps frame offsets to byte offsets
• The second level, referred to as the byte map, maps byte offsets to disk block locations
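A toy example of the two-level lookup, with made-up frame sizes and block locations (the actual on-disk layout is Symphony's own):

```python
# frame_map: frame number -> starting byte offset in the video file
# byte_map:  maps a byte offset to the disk block holding it

frame_map = [0, 14200, 29950, 41800]        # byte offset of each frame
byte_map  = [(0, 'disk0:blk17'), (16384, 'disk1:blk4'),
             (32768, 'disk0:blk18')]        # (start_offset, block location)

def locate_frame(frame_no):
    byte_off = frame_map[frame_no]                  # level 1: frame -> byte
    for start, block in reversed(byte_map):         # level 2: byte -> block
        if byte_off >= start:
            return block, byte_off - start          # block + offset within it
    raise ValueError("offset not mapped")

print(locate_frame(2))   # frame 2 at byte 29950 -> ('disk1:blk4', 13566)
```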
Symphony Caching Policy
• Interval-based caching for the video module
• LRU caching for the text module
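Interval-based caching exploits concurrent viewers of the same video: the cache holds only the data between a leading stream and the follower behind it, so the follower is served from memory. A hedged sketch of the policy follows; the greedy smallest-interval-first selection is a common formulation, not necessarily Symphony's exact rule.

```python
# Pick leader/follower intervals to cache, smallest first, until the
# cache capacity (in blocks) is exhausted.

def pick_intervals(streams, cache_capacity):
    """streams: list of (video_id, play_position) pairs."""
    intervals, by_video = [], {}
    for vid, pos in sorted(streams, key=lambda s: (s[0], -s[1])):
        if vid in by_video:                        # a leader already exists
            leader = by_video[vid]
            intervals.append((vid, pos, leader, leader - pos))
        by_video[vid] = pos                        # this stream leads the next
    intervals.sort(key=lambda iv: iv[3])           # smallest intervals first
    chosen, used = [], 0
    for iv in intervals:
        if used + iv[3] <= cache_capacity:
            chosen.append(iv); used += iv[3]
    return chosen

print(pick_intervals([("v1", 100), ("v1", 60), ("v2", 30), ("v2", 25)], 45))
# -> [('v2', 25, 30, 5), ('v1', 60, 100, 40)] : both intervals fit in 45 blocks
```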
IBM Multimedia File System • The Tiger Shark File System • Roger L. Haskin, Frank B. Schmuck • IBM Journal of Research and Development, 1998
The Newer MM Filesystems: Classes of Requests
• The Tiger Shark filesystem defines different classes of FS requests
  • A minimum of 2 classes is needed
• Legacy requests
  • Read/write data for small files, not needed quickly at the NIC
• High-performance requests
  • Read data for large, likely-contiguous files that needs to be dumped quickly to the NIC (network interface card)
• This is similar to our newer networking paradigm: “not all traffic is equal”
• Unaddressed question that I had: can we take the concept of discardability and apply it to filesystems?
Classes of Requests
• Real-time class
  • The real-time class is fine-grained into subclasses, because Tiger Shark has resource reservation and admission control (sketched below)
  • If the controllers and disks cannot handle the predicted load, then the request is denied
• Legacy class
  • Tiger Shark also has a legacy interface for old filesystem access interfaces
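The admission-control rule reduces to a capacity check: admit a new real-time stream only if the total reserved rate, including the new stream, stays within what the disks and controllers can sustain. The numbers below are invented for illustration.

```python
# Illustrative rate-based admission control (not Tiger Shark's code).

DISK_BANDWIDTH = 400 * 1024 * 1024   # bytes/sec the disk array can sustain
admitted_rates = []                  # reserved rates of admitted streams

def admit(requested_rate):
    """Reserve bandwidth for a new real-time stream, or reject it."""
    if sum(admitted_rates) + requested_rate > DISK_BANDWIDTH:
        return False                 # predicted load too high: deny
    admitted_rates.append(requested_rate)
    return True

print(admit(200 * 1024 * 1024))   # True  : fits within capacity
print(admit(150 * 1024 * 1024))   # True
print(admit(100 * 1024 * 1024))   # False : would exceed the reservation limit
```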
Quantization and Scheduling Optimizations
• “Deadline scheduling” instead of elevator algorithms
• Block size is 256 KB (default); normal AIX uses a 4 KB size
• Tiger Shark will “chunk” contiguous block reads better than the default filesystems, to work with its large block size (see the sketch below)
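The chunking idea is request coalescing: adjacent small block requests are merged into fewer, larger reads that match the 256 KB block size. A minimal sketch, assuming 4 KB blocks and a 64-block (256 KB) maximum run:

```python
# Merge sorted block numbers into (start, length) runs for large reads.

def coalesce(block_nums, max_run=64):      # 64 x 4 KB = 256 KB per chunk
    runs = []
    for b in sorted(block_nums):
        if runs and b == runs[-1][0] + runs[-1][1] and runs[-1][1] < max_run:
            runs[-1] = (runs[-1][0], runs[-1][1] + 1)   # extend current run
        else:
            runs.append((b, 1))                          # start a new run
    return runs

print(coalesce([7, 5, 6, 9, 10, 11]))   # [(5, 3), (9, 3)] : two large reads
```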
Streamlining of Operations to Get Data from Platter to NIC
• A running daemon pre-allocates OS resources such as buffer space, disk bandwidth, and controller time
• Not a hardware-dependent solution
• Even though it does not have shared-memory hardware, Tiger Shark copies data from the disks into a shared memory area; essentially this is a very large extension of the kernel’s disk block cache
• The VFS layer for Tiger Shark intercepts repeated calls and uses the shared memory area, thereby saving kernel memory copies on subsequent requests
Platter Layout and Scaling Optimizations for Contiguous Streaming
• Striping across a maximum of 65,000 platters
• Striping method unspecified; it looks flexible and can be extended to include redundancy if desired
• All members of a block group are contiguous
  • Attempts to keep the block groups themselves contiguous
Seeking Optimizations
• Byte-range locking: allows multiple clients to access different areas of a file with real-time guarantees, if they don’t step on each other
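The core of byte-range locking is an overlap test against the ranges already held. A minimal sketch (the lock-manager structure and names are assumptions, not Tiger Shark's implementation):

```python
# Grant a byte range to a client only if it overlaps no other client's lock.

locks = []   # list of (start, end, owner) for currently held locks

def try_lock(start, end, owner):
    """Grant [start, end) to owner if it intersects no held range."""
    for s, e, o in locks:
        if start < e and s < end and o != owner:   # ranges intersect
            return False
    locks.append((start, end, owner))
    return True

print(try_lock(0, 4096, "client A"))      # True
print(try_lock(8192, 12288, "client B"))  # True  : disjoint range
print(try_lock(2048, 6144, "client B"))   # False : overlaps client A's range
```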
Current Research and Future Directions
• Tiger Shark gives us filesystem QoS, but can we do better by integrating VBR/ABR into the system?
• Replication and redundancy are always an issue, but are not addressed in this scope
• If it is a software-based system such as Tiger Shark, where in the OS should we put these optimizations? (Kernel, tack-on daemon, middleware)
• Legacy disk accesses have a huge cost in both of these systems; how can we minimize it?
Tiger Shark: Final Thoughts
• Adds QoS guarantees to current disk interface architectures
• Built to be extensible to more than just MM disk access, but definitely optimized for multimedia
• Designed to serve more concurrent sessions out of a multimedia server
  • BUT there is still a kernel bottleneck for the initial block load
• Better suited to multiple concurrent accesses than EXT3NS
Conclusion
• Data placement, scheduling, block-size decisions, caching, concurrent client support, and buffering are all very important for any media server design and implementation
• Next lecture: we discuss multimedia CPU scheduling