Enhancing the PCI Bus to Support Real-Time Streams Scottis, M.G.; Krunz, M.; Liu, M.M.-K. Dept. of Electr. & Comput. Eng., University of Arizona, Tucson, AZ, USA 1999 IEEE International Performance, Computing and Communications Conference Yuan Ze University, Systems Laboratory 楊登傑 2000.03.13
Outline • Introduction • PCI Overview • Real-Time Scheduling Theory • The EPCI Local Bus • Simulation Study • Conclusions
Introduction • The paper presents an access-scheduling scheme for real-time streams (RTS) over the PCI bus. • It uses the Rate Monotonic Scheduling (RMS) algorithm to guarantee the timing QoS of RTS over the PCI bus. • The authors define the effective bus utilization (EBU) as the worst-case bus utilization. • PCI Overview: gives a brief overview of the PCI architecture.
Introduction (cont.) • Real-Time Scheduling Theory: presents the real-time scheduling theory on which the EPCI bus model is based. • The EPCI Local Bus: introduces the EPCI bus architecture. • Simulation Study: gives simulation results for the proposed architecture. • Conclusions: gives the concluding remarks.
PCI Overview • To allow the PCI bus to work concurrently with the CPU bus, a new bus (the CPU local bus) is inserted between the CPU and the high-speed PCI bus. • The CPU can access the level-two (L2) cache or main memory while the PCI bus is busy transferring data between its devices. • There are two types of devices: bus masters and target devices. • A bus master must arbitrate for each access it performs on the bus, whereas a target device can only respond to a bus master's request.
Real-Time Scheduling Theory • There are two classes of real-time scheduling on a shared medium: preemptive and non-preemptive scheduling. • In preemptive scheduling a higher-priority link can preempt a lower-priority link, whereas in non-preemptive scheduling there is no preemption. • Preemptive scheduling is further divided into dynamic-priority and static-priority scheduling. • Dynamic-priority schemes can change a link's priority at run time and achieve higher schedulability than static-priority schemes. • However, dynamic-priority schemes are more complex, harder to implement, and incur additional implementation overhead; for this reason the authors consider only static priorities in this paper.
Static Priority Scheduling • The RMS algorithm schedules a set of periodic links by assigning higher priorities to links with shorter periods. • Given a set of n periodic links l1, l2, l3, ..., ln ordered by increasing period (T1 ≤ T2 ≤ ... ≤ Tn), the RMS algorithm assigns priorities in decreasing order (P1 ≥ P2 ≥ ... ≥ Pn), where Pi > Pj implies that li has a higher priority than lj (i ≠ j). • A set of n independent RTS links l1, l2, l3, ..., ln is schedulable under the RMS algorithm if and only if the following inequality can be met for every 1 ≤ i ≤ n:
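The inequality itself is not reproduced on the slide. As a reference point only (my addition, not the paper's exact expression, which additionally folds per-link overhead and blocking into the bus-time terms), the classical exact schedulability test for rate monotonic scheduling can be written, for each i with 1 ≤ i ≤ n, as:

```latex
\min_{t \in S_i} \; \frac{1}{t} \sum_{j=1}^{i} C_j \left\lceil \frac{t}{T_j} \right\rceil \;\le\; 1,
\qquad
S_i = \left\{\, k\,T_j \;:\; j = 1,\dots,i,\;\; k = 1,\dots,\left\lfloor T_i / T_j \right\rfloor \,\right\},
```

where C_j is the worst-case bus time that link lj requires in each period Tj.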
The EPCI Local Bus • The hardware includes the Central Arbiter (CA), as used in current PCI bus systems, and the application-specific EPCI devices connected to the EPCI bus. • The software includes the device drivers for the corresponding EPCI devices, the user application programs on top of the OS, and the Scheduling Manager (SM), which is part of the OS and schedules real-time traffic on the bus. • The EPCI CA has a programmable priority assigned to each request-grant pair that can be changed by the SM at any time (see the sketch below). • Each EPCI device is required to have a buffer to match the rate at which the device produces or consumes data with the rate at which it can move data across the bus.
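A minimal sketch of the SM's role, under my own assumptions: the class and method names (CentralArbiter, SchedulingManager, set_priority, admit) are illustrative stand-ins, not the paper's interface, and the simple Liu-Layland utilization bound replaces the paper's full admission test (which also accounts for overhead and blocking).

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class RtsLink:
    name: str
    bus_time: float   # worst-case bus time needed per period (C_i)
    period: float     # period of the stream (T_i)

class CentralArbiter:
    """Stand-in for the EPCI CA: one programmable priority per request-grant pair."""
    def __init__(self) -> None:
        self.priorities: Dict[str, int] = {}

    def set_priority(self, link_name: str, priority: int) -> None:
        # 0 is the highest priority; the SM may reprogram this at any time.
        self.priorities[link_name] = priority

class SchedulingManager:
    """Admits RTS links and programs the arbiter with RMS priorities."""
    def __init__(self, arbiter: CentralArbiter) -> None:
        self.arbiter = arbiter
        self.links: List[RtsLink] = []

    def admit(self, link: RtsLink) -> bool:
        candidate = sorted(self.links + [link], key=lambda l: l.period)  # RMS order
        n = len(candidate)
        # Liu & Layland sufficient utilization bound, used here only for illustration.
        if sum(l.bus_time / l.period for l in candidate) > n * (2 ** (1 / n) - 1):
            return False  # reject: the new set would not be guaranteed schedulable
        self.links = candidate
        for prio, l in enumerate(self.links):  # shorter period -> higher priority
            self.arbiter.set_priority(l.name, prio)
        return True
```

For example, `SchedulingManager(CentralArbiter()).admit(RtsLink("l1", 10, 40))` would accept the link and program it as the highest-priority request-grant pair.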
Simulation Study • The authors assume four RTS links l1, l2, l3, l4, labeled such that T1 ≤ T2 ≤ T3 ≤ T4, and the RMS algorithm is used to assign the priorities. • This means that link l1 has the highest priority and link l4 the lowest. • In this example, there are four links and thus 4! = 24 possible ways to assign priorities to them. • In Figure 4, only four out of the twenty-four possible assignments are schedulable (a small enumeration sketch follows).
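A small enumeration sketch in the same spirit, with hypothetical link parameters of my own choosing (so it will not reproduce the exact "4 out of 24" count of Figure 4); it uses a generic fixed-priority response-time test rather than the paper's EBU-based condition.

```python
from itertools import permutations
from math import ceil

# Hypothetical (C_i, T_i) pairs in bus clocks; not taken from the paper.
LINKS = {"l1": (10, 40), "l2": (15, 60), "l3": (20, 100), "l4": (10, 120)}

def schedulable(order):
    """order[0] has the highest priority. Standard response-time iteration:
    the set is feasible if every link's worst-case response time fits within
    its period (deadline = period; overhead and blocking ignored)."""
    for i, name in enumerate(order):
        c_i, t_i = LINKS[name]
        r = c_i
        while r <= t_i:
            r_next = c_i + sum(ceil(r / LINKS[hp][1]) * LINKS[hp][0] for hp in order[:i])
            if r_next == r:
                break          # converged within the period
            r = r_next
        else:
            return False       # response time exceeded the period
    return True

orders = list(permutations(LINKS))
feasible = [o for o in orders if schedulable(o)]
print(f"{len(feasible)} of {len(orders)} priority assignments are schedulable")
```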
Simulation Study (cont.) • Link overhead and blocking are the key parameters in the EPCI scheduling model that might degrade schedulability. • Large link overhead or blocking severely degrades schedulability and can drive a link to miss its deadline. • Both values depend on the value of the internal latency timer (ILT).
Simulation Study (cont.) • Fig. 5: In the first region (ILT < 55) the ILT value is small and the bus master has to give up the bus early in the transaction, transferring only a small amount of data. • This causes the link overhead to dominate, resulting in a high EBU. • Fig. 6: For small ILT values, the link overhead is very high. • As the ILT value increases, the link overhead decreases and the link set becomes schedulable.
Simulation Study (cont.) • In the second region (55 < ILT < 175) the ILT has a moderate value and the EBU is at its lowest. • This is the optimum region, where neither the link overhead nor the blocking dominates. • In the third region (ILT > 175) the ILT value is high and a bus master is allowed to keep the bus for long periods even though higher-priority links may be requesting it, causing high blocking (a toy model of this tradeoff follows).
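A toy model, entirely under my own assumptions (constant per-acquisition overhead, blocking equal to one lower-priority bus tenure), to illustrate why the effective bus utilization is high at both extremes of the ILT and lowest in between; the numbers and the formula are illustrative, not the paper's.

```python
import math

OVERHEAD = 8  # assumed fixed cost per bus acquisition (arbitration + address phase), in bus clocks
LINKS = [(200, 2_000), (300, 4_000), (800, 8_000), (150, 8_000)]  # hypothetical (C_i, T_i), bus clocks

def effective_bus_utilization(ilt: int) -> float:
    """Toy worst-case utilization: each link needs ceil(C_i / ILT) bus acquisitions
    per period, each paying OVERHEAD, and the highest-priority link can be blocked
    for up to one lower-priority tenure of length min(C_j, ILT) + OVERHEAD."""
    utilization = sum((c + math.ceil(c / ilt) * OVERHEAD) / t for c, t in LINKS)
    blocking = max(min(c, ilt) + OVERHEAD for c, _ in LINKS)
    return utilization + blocking / min(t for _, t in LINKS)

for ilt in (16, 32, 64, 128, 256, 512):
    print(f"ILT={ilt:4d}  EBU≈{effective_bus_utilization(ilt):.2f}")
```

With these assumed numbers the printed curve is U-shaped: overhead dominates at small ILT, blocking dominates at large ILT, and the minimum lies in a moderate middle region, mirroring the three regions described above.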
Simulation Study (cont.) • Fig. 7: Link l4 experiences zero blocking since no lower-priority link can block it. • Link l3 can be blocked only by link l4, whose small bus-time requirement is exhausted before its ILT expires. • Links l1 and l2 can be blocked by l3 and l4; because l3 has a large bus-time requirement, l1 and l2 have to wait for l3's ILT to expire. • This results in high blocking for l1 and l2 at high ILT values.
Conclusions • In this paper, the authors present a bus management scheme that determines the schedulability of a set of real-time links over the PCI bus. • They also describe a bus scheduling model based on the rate monotonic scheduling (RMS) algorithm that guarantees a priori the schedulability of a given link set.