Block-Based Packet Buffer with Deterministic Packet Departures
Hao Wang and Bill Lin
University of California, San Diego
HSPR 2010, Dallas
Packet Buffer in Routers
• Input linecards have 40 bytes @ 40 Gb/s = 8 ns to read and write a packet.
• Routers need to store packets to deal with congestion.
• Bandwidth × RTT = 40 Gb/s × 250 ms = 10 Gb of buffering (checked in the sketch below).
• Too big to store in SRAM, hence the need to use DRAM.
• Problem: DRAM access time is ~40 ns, roughly a 10× speed difference.
[Figure: scheduler and packet buffers between input and output linecards]
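A quick numeric check of the figures on this slide (a minimal sketch; the packet size, line rate, and RTT are taken directly from the bullets above):

```python
# Back-of-the-envelope check of the numbers on this slide.
PACKET_BITS = 40 * 8          # 40-byte minimum-size packet
LINE_RATE = 40e9              # 40 Gb/s line rate
RTT = 250e-3                  # 250 ms round-trip time

slot_time = PACKET_BITS / LINE_RATE      # time budget per packet
buffer_bits = LINE_RATE * RTT            # bandwidth-delay product

print(f"time per packet: {slot_time * 1e9:.0f} ns")     # 8 ns
print(f"buffer size:     {buffer_bits / 1e9:.0f} Gb")   # 10 Gb
```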
Parallel and Interleaved DRAM
• Assume the DRAM-to-SRAM access latency ratio is 3.
[Figure: packets interleaved across multiple DRAM banks]
Problems with Parallelism
• Access patterns may create problems.
• To access packets 3, 6, 9, and 11 one after another, it is possible to issue interleaved read requests and read those packets out at line rate.
[Figure: packets 1-14 spread across DRAM banks]
Problems with Parallelism
• But accessing 2 & 3 or 10 & 11 in succession is problematic (illustrated in the sketch below).
• This is an example of a packet access conflict.
[Figure: same packet layout across DRAM banks as on the previous slide]
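A minimal sketch of the conflict, assuming a hypothetical packet-to-bank placement (the exact layout in the figure is not reproduced here): a bank needs b slots per access, so two back-to-back reads that land on the same bank cannot both be served at line rate.

```python
# Illustrative sketch (not the exact layout in the figure): each DRAM bank
# needs b time slots per access, so two back-to-back reads that hit the
# same bank cannot both be served at line rate.
b = 3                                                 # DRAM/SRAM access-latency ratio
placement = {2: 0, 3: 0, 6: 1, 9: 2, 10: 3, 11: 3}    # packet -> bank (hypothetical)

def conflicts(read_sequence, placement, b):
    """Return True if any bank is asked to start a new read before
    finishing the previous one (i.e., within b slots)."""
    last_start = {}
    for slot, pkt in enumerate(read_sequence):
        bank = placement[pkt]
        if bank in last_start and slot - last_start[bank] < b:
            return True
        last_start[bank] = slot
    return False

print(conflicts([3, 6, 9, 11], placement, b))   # False: all different banks
print(conflicts([2, 3], placement, b))          # True: same bank back-to-back
```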
Use Packet Departure Time
• In wide classes of routers (e.g., crossbar routers), packet departures are determined by the scheduler on the fly.
• Packet buffers that cater to these routers exist but are complex.
• In other high-performance routers, such as Switch-Memory-Switch and load-balanced routers, the packet departure time can be calculated when the packet is inserted into the buffer.
Solution
• We use the known departure times of the packets to schedule them to different DRAM banks so that there are no conflicts at arrival or departure.
Packet Buffer Abstraction
• Fixed-size packets, slotted time (example: 40 Gb/s, 40-byte packets => 8 ns slots).
• The buffer may contain an arbitrarily large number of logical queues, but with deterministic access.
• Single-write, single-read, time-deterministic packet buffer model.
Packet Buffer Architecture
• Interleaved memory architecture with multiple slower DRAM banks.
• K slower DRAM banks.
• b time slots to complete a single memory read or write operation.
• b consecutive time slots form a frame.
• Each bank is segmented into several sections.
• A memory block is a collection of sections (see the sketch below).
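A sketch of how these pieces might be represented in code; the class and field names are illustrative, not from the paper, and the numeric values are placeholders.

```python
from dataclasses import dataclass, field

@dataclass
class BufferConfig:
    b: int              # time slots per DRAM read/write (one frame = b slots)
    K: int              # number of interleaved DRAM banks
    N: int              # packets per memory section
    M: int              # number of memory blocks (rows of the reservation table)

@dataclass
class MemorySection:
    """One section of one DRAM bank; holds up to N packets of one block."""
    packets: list = field(default_factory=list)

# A memory block is the collection of K sections (one per bank) that share
# the same range of departure frames.
def make_block(cfg: BufferConfig):
    return [MemorySection() for _ in range(cfg.K)]

cfg = BufferConfig(b=16, K=3 * 16 - 1, N=64, M=1024)   # illustrative values
block = make_block(cfg)
```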
Proposed Architecture
[Figure: K DRAM banks organized into memory blocks, a reservation table with M rows and K columns, departure reorder buffers, an SRAM bypass buffer, and paths for arriving and departing packets]
Reservation Table
• Use a counter of size log2(N) bits to keep track of the actual number of packets in N packet locations.
• This reduces the size of the reservation table compared to tracking every packet location individually (see the sketch below).
[Figure: reservation table with K columns (one per bank) and one row per memory block, each entry holding a packet count]
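A minimal sketch of the counter idea under the assumptions above (N, M, and K values are illustrative): each entry counts how many of a section's N locations are occupied, instead of keeping one bit per location.

```python
import math

N = 64                                   # packet locations tracked by one entry (illustrative)
counter_bits = math.ceil(math.log2(N))   # ~log2(N) bits per entry, vs. N bits
                                         # if every location were tracked separately

# Reservation table: M blocks x K banks, each entry counts packets in one section.
M, K = 1024, 47                          # illustrative sizes
reservation_table = [[0] * K for _ in range(M)]

def admit(block_row, bank):
    """Record one more packet stored in section (block_row, bank)."""
    assert reservation_table[block_row][bank] < N, "section full"
    reservation_table[block_row][bank] += 1
```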
Packet Access Conflicts
• Arrival conflicts: an arriving packet keeps a bank busy for b cycles, so b-1 additional banks are needed.
• Departure conflicts: it takes b cycles to read a packet to the output, so b additional banks are needed.
• Overflow conflicts: incoming packets whose departure times fall within the same N frames are stored in the same memory block; up to N×b such packets can arrive, but each memory section stores at most N packets.
Water-filling Algorithm
• A memory block is managed by a row of the reservation table.
[Figure: a memory block spanning the banks, with sections marked as most-empty, available, busy, or occupied]
Packet Access Conflicts
• Water-filling algorithm: pick the most-empty bank to store the arriving packet (see the sketch below).
• This solves overflow conflicts.
• Theorem: with at least 3b-1 DRAM banks, it is always possible to admit all arriving packets and write them into memory blocks based on their departure times.
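A sketch of the water-filling choice: among the banks of the target memory block that are not ruled out by arrival or departure conflicts, pick the one whose section currently holds the fewest packets. The data structures and values below are illustrative.

```python
def water_fill_pick(row_counts, busy_banks, N):
    """Pick the most-empty eligible bank for an arriving packet.

    row_counts -- reservation-table row of the packet's memory block
    busy_banks -- banks currently busy with a read/write (conflicts)
    N          -- section capacity
    Returns the chosen bank index, or None if no bank is eligible.
    """
    candidates = [k for k, c in enumerate(row_counts)
                  if k not in busy_banks and c < N]
    return min(candidates, key=lambda k: row_counts[k]) if candidates else None

# Example: bank 2 is busy, bank 3 is the emptiest of the remaining banks.
print(water_fill_pick([3, 1, 0, 0, 2], busy_banks={2}, N=4))   # -> 3
```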
DRAM Selection Logic
[Figure: reservation table R with M rows and K columns, a write candidate vector W, and a window of rows s to s+u examined by the selection logic]
Packet Arrival
• Use the write candidate vector W to check for arrival conflicts and departure conflicts.
[Figure: same reservation table R and write candidate vector W as above]
Packet Arrival
• Pick the most-empty bank to store the incoming packet (see the sketch below).
[Figure: same reservation table R and write candidate vector W as above]
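A sketch of the arrival path combining the two steps above, with hypothetical helper names: build the write candidate vector W from the banks that are busy writing or reading, then pick the most-empty eligible column of the block's reservation-table row.

```python
def write_candidate_vector(K, writing_banks, reading_banks):
    """W[k] = 1 if bank k is free of both arrival and departure conflicts."""
    return [0 if (k in writing_banks or k in reading_banks) else 1
            for k in range(K)]

# Illustrative example with K = 5 banks and section capacity N = 4.
W = write_candidate_vector(K=5, writing_banks={1}, reading_banks={4})
row = [3, 1, 0, 2, 2]                       # reservation-table row of the target block
eligible = [k for k in range(5) if W[k] and row[k] < 4]
bank = min(eligible, key=lambda k: row[k])  # most-empty eligible bank
print(W, bank)                              # [1, 0, 1, 1, 0] 2
```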
Packet Departure
• Packets in a memory block are moved to one of the departure reorder buffers before their departure times.
• Upon departure, pick the fullest memory section first (see the sketch below).
• It is always possible to read all the packets from a memory section, even if the section is full.
• All packets are guaranteed to depart on time.
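A sketch of the fullest-first choice when a block's packets are staged into the departure reorder buffers (the function name and example counts are illustrative):

```python
def next_section_to_drain(row_counts):
    """Pick the fullest non-empty section of the departing memory block."""
    fullest = max(range(len(row_counts)), key=lambda k: row_counts[k])
    return fullest if row_counts[fullest] > 0 else None

print(next_section_to_drain([3, 1, 0, 4, 2]))   # -> 3 (the section holding 4 packets)
```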
SRAM Bypass Buffer
• The worst-case minimum round-trip latency for storing a packet in and retrieving it from one of the DRAM banks is (2N+1)×b time slots.
• A bypass buffer stores packets whose departure times are less than (2N+1)×b time slots away (see the sketch below).
[Figure: bypass buffer with a packet locator and head pointer sitting between arriving and departing packets]
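A sketch of the bypass decision implied by this slide, with illustrative parameter values: a packet that would depart before the DRAM round trip could complete is held in the SRAM bypass buffer instead.

```python
def route_arrival(departure_time, now, N, b):
    """Decide whether an arriving packet goes to the SRAM bypass buffer
    or through the DRAM path."""
    if departure_time - now < (2 * N + 1) * b:
        return "bypass buffer"      # too close to departure for the DRAM round trip
    return "DRAM memory block"

# Illustrative values: N = 64, b = 16 => (2N+1)*b = 2064 time slots.
print(route_arrival(departure_time=500, now=0, N=64, b=16))    # bypass buffer
print(route_arrival(departure_time=5000, now=0, N=64, b=16))   # DRAM memory block
```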
SRAM Requirement (in MB)
• N is the number of packets represented by one entry in the reservation table.
• Line rate is 100 Gb/s.
SRAM Requirement Comparison
• Line rate is 40 Gb/s, RTT = 250 ms, b = 16, K = 3b-1 = 47.
• Average packet size is 40 bytes.
• The total SRAM size in the proposed block-based packet buffer is only 8.3% of the previous frame-based scheme and 1.6% of the state-of-the-art SRAM/DRAM prefetching buffer scheme.
Conclusion
• A packet buffer architecture for routers with deterministic packet departures, e.g., Switch-Memory-Switch and load-balanced routers.
• The SRAM requirement grows logarithmically with the line rate.
• The required number of DRAM banks is a small constant, independent of the arrival traffic patterns, the number of flows, and the number of priority classes.
• Scalable to the growing packet storage requirements of future routers while matching increasing line rates.