Post-Excellence Sub-project Report (後卓越子計畫報告)
PLLAB, Prof. Jenq-Kuen Lee (李政崑)
Component Remoting Technology Map
Research Result
• Streaming Support for Java RMI in Distributed Environment, C. C. Yang, Chung-Kai Chen, Yu-Hao Chang, Kai-Hsin Chung, and Jenq-Kuen Lee, ACM International Conference on Principles and Practices of Programming in Java (PPPJ 2006), Mannheim, Germany, August 30 - September 1, 2006.
• A patent application, "提供遠端物件具備網路串流功能的機制" (a mechanism for providing remote objects with network streaming capability), is currently being filed.
Proposed Software Architecture of Streaming RMI
[Figure: RMI server and RMI client, each with an application layer, stub, and streaming layer; the streaming layer contains a streaming controller, a streaming buffer, and continuous buffers connected by an RDMA-like transportation path.]
• Several important components are needed to support our mechanisms.
• Streaming data are pushed from the server continuous buffer to the client continuous buffer automatically.
• The streaming controller manages the content of the continuous buffers.
• The streaming controller stores aggregated data in the streaming buffer.
• The client application can consume the complete stream data from the streaming buffer (see the sketch below).
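To make the layering concrete, the Java sketch below mirrors the client-side data path: blocks arrive in continuous buffers, the streaming controller aggregates them into the streaming buffer, and the application consumes the result. All class and method names (ContinuousBuffer, StreamingBuffer, StreamingController) are hypothetical illustrations, not the actual implementation from the paper.

```java
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// NOTE: hypothetical classes used only to illustrate the layering above.
class ContinuousBuffer {
    // Raw data blocks pushed from the server side over the RDMA-like transport.
    private final BlockingQueue<byte[]> blocks = new LinkedBlockingQueue<>();

    void push(byte[] block) { blocks.add(block); }            // filled by the server push
    byte[] take() throws InterruptedException { return blocks.take(); }
}

class StreamingBuffer {
    // Aggregated, playback-ready stream data for the client application.
    private final BlockingQueue<byte[]> ready = new LinkedBlockingQueue<>();

    void put(byte[] chunk) { ready.add(chunk); }
    byte[] consume() throws InterruptedException { return ready.take(); }
}

class StreamingController implements Runnable {
    // Manages the continuous buffers and stores aggregated data in the streaming buffer.
    private final List<ContinuousBuffer> continuousBuffers;
    private final StreamingBuffer streamingBuffer;

    StreamingController(List<ContinuousBuffer> in, StreamingBuffer out) {
        this.continuousBuffers = in;
        this.streamingBuffer = out;
    }

    @Override
    public void run() {
        try {
            while (true) {
                // Aggregate one block from each continuous buffer in turn.
                for (ContinuousBuffer cb : continuousBuffers) {
                    streamingBuffer.put(cb.take());
                }
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
```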
Features of Streaming Java RMI
• Pushing
  • The idea is the same as pre-fetching.
• Aggregation
  • This is for better manipulation of streaming data from multiple streaming servers.
• Forwarding
  • It provides bandwidth sharing between clients.
A hypothetical interface sketch of these features follows below.
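As a rough illustration of how pushing and forwarding might be surfaced to a client, the sketch below defines a hypothetical remote interface using only standard java.rmi types; the method names and signatures are assumptions for illustration, not the API of the published Streaming RMI system.

```java
import java.rmi.Remote;
import java.rmi.RemoteException;

// Hypothetical interface: names and signatures are illustrative assumptions.
public interface StreamingSource extends Remote {

    // Pushing: ask the server to pre-fetch and push blocks toward the
    // client's continuous buffer instead of waiting for explicit reads.
    void startPush(String streamId) throws RemoteException;

    // Forwarding: let this client relay its received blocks to a peer,
    // so that download bandwidth can be shared between clients.
    void forwardTo(String peerHost, int peerPort, String streamId) throws RemoteException;

    // Aggregation of blocks from multiple servers is handled by the
    // client-side streaming controller rather than by this interface.
}
```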
Aggregation Policy
• Notations
  • A set of streaming servers S = {s_i | i = 1, ..., n}
  • A set of data blocks D = {d_j | j = 1, ..., m}
• For each streaming server s_i:
  • The supplying bandwidth b_i of s_i
  • The set of data blocks that exist in s_i: Blocks(s_i)
  • The completeness of data in s_i: Completeness(s_i)
  • The amount of content: k_i
• The bandwidth requirement of each data block: Req(d_j)
• The bandwidth allocation table: BAT (m × n)
Scheduling Algorithm
• Weight evaluation and sorting
• Bandwidth allocation
An illustrative sketch of these two steps follows below.
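The slides name the two steps but not their formulas, so the Java sketch below is an assumption-laden illustration: the weight function (completeness × amount of content) and the greedy filling of the bandwidth allocation table BAT are guesses that stay consistent with the notation of the previous slide, not the published algorithm.

```java
import java.util.Comparator;
import java.util.List;
import java.util.Set;

// Server mirrors the per-server notation: b_i, Completeness(s_i), k_i, Blocks(s_i).
class Server {
    double bandwidth;       // b_i, supplying bandwidth
    double completeness;    // Completeness(s_i)
    double content;         // k_i, amount of content
    Set<Integer> blocks;    // Blocks(s_i), indices of data blocks held
    double weight;          // evaluated in step 1

    Server(double b, double comp, double k, Set<Integer> blocks) {
        this.bandwidth = b; this.completeness = comp;
        this.content = k; this.blocks = blocks;
    }
}

class AggregationScheduler {

    // Step 1: weight evaluation and sorting (hypothetical weight function).
    static void evaluateAndSort(List<Server> servers) {
        for (Server s : servers) {
            s.weight = s.completeness * s.content;
        }
        servers.sort(Comparator.comparingDouble((Server s) -> s.weight).reversed());
    }

    // Step 2: greedy bandwidth allocation filling BAT[m][n] so that each
    // block d_j receives Req(d_j) without exceeding any server's b_i.
    static double[][] allocate(List<Server> servers, double[] req) {
        int m = req.length, n = servers.size();
        double[][] bat = new double[m][n];
        double[] remaining = new double[n];
        for (int i = 0; i < n; i++) remaining[i] = servers.get(i).bandwidth;

        for (int j = 0; j < m; j++) {
            double need = req[j];
            for (int i = 0; i < n && need > 0; i++) {
                if (!servers.get(i).blocks.contains(j)) continue;
                double give = Math.min(need, remaining[i]);
                bat[j][i] = give;
                remaining[i] -= give;
                need -= give;
            }
        }
        return bat;
    }
}
```

Calling evaluateAndSort before allocate means higher-weight servers are drained first, which is one plausible way to realize the aggregation policy.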
Experiment Result 1
• Compare the performance of standard RMI with that of streaming RMI.
• Demonstrate the performance improvement brought by the pushing mechanism.
Experiment Result 2
• Data overhead measurement
• Overhead for sending a 5 MB data stream
Simulation Conditions for Aggregation
• We observe the waiting time of each streaming task.
  • Waiting time is defined as the time from a client issuing the request until the stream is ready for playback.
• We take the ratio of the results using aggregation to those without aggregation (formalized below).
• Simulation variables:
  • Available bandwidth (α)
  • Completeness (β)
  • Amount of content (γ)
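Under the definitions above, the metric plotted in the following slides can be written as the ratio below; this is a hedged formalization based on the slide text, and the paper may state it slightly differently.

```latex
\[
\text{ratio} \;=\;
\frac{\overline{T}_{\text{wait}}^{\,\text{agg}}}
     {\overline{T}_{\text{wait}}^{\,\text{no-agg}}},
\qquad
T_{\text{wait}} \;=\; t_{\text{playback-ready}} - t_{\text{request}}.
\]
```

A ratio below 1 means the aggregation policy shortens the waiting time (the lower the better).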
Simulation Results (1)
• Variables: available bandwidth (α), completeness (β), amount of content (γ)
• Simulation A: α = 0.5, number of streams = 400
• Simulation B: α = 0.5, number of streams = 200
• As the number of streams decreases, our algorithm achieves a lower (better) average waiting time than the algorithm without aggregation.
Simulation Results (2)
• Variables: available bandwidth (α), completeness (β), amount of content (γ)
• Simulation C: α = 0.5, number of streams = 100
• Simulation D: α = 0.1, number of streams = 100
• Across these different variable settings, the reduction in average waiting time becomes even larger.
Ongoing Research
• We will add mobility support to our framework.
• We will extend our framework to follow the SOA specification.