This article provides an overview of the hardware environment and software architecture of the SPC kernel, with a focus on port-level processing.
Port-Level Processing (SPC): the MSR Kernel
Fred Kuhns
Washington University Applied Research Laboratory
Overview
• Introduction to hardware environment
• APIC core processing and buffer management
• Overview of SPC kernel software architecture and processing steps
• Plugin environment and filters
• Command Facility
Port Processors: SPC and/or FPX
[Diagram: an ATM switch core with input/output port processors (IPP/OPP) on each port, managed by a Control Processor. Each port's path runs Line Card (LC, link interface) → SPC → FPX → switch fabric; the SPCs and FPXs together form the port processors.]
Using Both FPX and SPC
[Diagram: a packet arrives from the NID at the FPX (IP classifier X.1, flow control), then passes via the APIC to the SPC (active processing plugin Z.2, DQ module). A shim carries the results of the classification step between FPX and SPC.]
Focus on SPC as Port Processor
[Diagram: a Control Processor and switch fabric with SPC-based input and output port processors; each SPC performs flow/route lookup and distributed queuing control.]
The SPC: an Embedded Processor
[Block diagram: a CPU module and DRAM sit on a PCI bus with the APIC and a System FPGA. The APIC provides the switch and link interfaces; the FPGA provides the serial ports.]
Typical Pentium PC Architecture
[Block diagram: the CPU with cache connects over address/data/control lines to the NorthBridge, which bridges DRAM and the PCI bus. The SouthBridge (PIIX3: PIC, PIT, ...) bridges the PCI bus to the ISA bus and its devices, the Super-IO (RTC, UARTs, keyboard/mouse, floppy, parallel, ...), and the BIOS, and delivers Intr/NMI/INIT to the CPU.]
SPC Hardware Architecture
[Block diagram: the same CPU/cache/NorthBridge/DRAM core as a PC, packaged as an Intel Embedded Module on the PCI bus with the APIC. The System FPGA replaces the SouthBridge, providing the interrupt path (Intr/NMI/INIT), RTC, PIC, PIT, two UART interfaces, the BIOS ROM, and the switch and link interfaces.]
SPC Components
• APIC - PCI Bus Master
• Pentium Embedded Module
  • 166 MHz MMX Pentium Processor
  • L1 Cache: 16 KB Data, 16 KB Code
  • L2 Cache: 512 KB
  • NorthBridge
    • 33 MHz, 32-bit PCI Bus
    • PCI Bus Master
• System FPGA - PCI Bus Slave
  • Xilinx XC4020XL-1 FPGA
  • 20K equivalent gates, ~75% used
SPC Components (continued)
• Memory
  • EDO DRAM
  • 64 MB (max for current design)
  • SO DIMM
• Switch Interface - 1 Gb Utopia
• Link Interface - 1 Gb Utopia
• UART
  • Two serial ports
  • NetBSD system console
  • TTY port
Overview
• Introduction to hardware environment
• APIC core processing and buffer management
• Overview of SPC kernel software architecture and processing steps
• Plugin environment and filters
• Command Facility
APIC Descriptors
• The APIC uses a data structure called a descriptor to describe available buffers and their status.
• The hardware and software follow a well-defined protocol for jointly managing the descriptors.
• The APIC controls one or more free descriptor chains, each chain representing buffers available for receive on a predefined set (one or more) of RX channels.
APIC Descriptor Management
(Some details are omitted for brevity.)
• Software
  • allocates descriptors to free pools for the APIC hardware;
  • reads the receive descriptor chain for each VC, frees each descriptor after reading it, then returns it to a free pool;
  • adds descriptors to the send queue for each VC, then recycles them after the APIC sends the corresponding frame/cell.
• Hardware
  • allocates descriptors from free pools for frames/cells received on a corresponding VC, then places them on a receive chain;
  • reads descriptors from the send queue for each VC and notifies software when the packet is sent.
APIC Descriptors and Buffers
MSR buffer (2048 B) layout, front to back: 8 B shim (the shim offset is not used on egress), LLC/SNAP header (LLC AA.AA.03, OUI 00.00.00, Type 08.00), IP header (version, h-len, TOS, total length, identification, flags, fragment offset, TTL, protocol, header checksum, source address, destination address, options), IP data (transport header and transport data), AAL5 padding (0-40 bytes), and the AAL5 trailer (CPCS-UU = 0, length = IP packet + LLC/SNAP, CRC). The last 24 bytes of the buffer are not used.
Descriptor fields: Match/Checksum plus flag bits - - - - V I S O E C L X T Y (O - read only, E - EOF, C - CRC OK, T - type, Y - valid bits); BufLen (buffer length or amount left unused); NextDesc (descriptor table index); BufAddrLo/BufAddrHi (physical address of the data buffer).
Sizing:
• A frame must be a multiple of 48 B.
• Buffers are 2048 B, so the max frame is 2016 B, or 42 cells.
• Reserve 8 B for the shim and 8 B for the trailer, so the IP datagram MTU must be 2000 B.
• At the output port a max 2016 B frame is received at an 8-byte offset in the buffer, so at most 2024 B of the buffer are used; 24 B at the end of the buffer are unused.
The constants below restate this arithmetic.
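A minimal sketch of the sizing rules as C constants; the numbers come straight from the slide, but the names are mine, not from the MSR source:

```c
/* Buffer sizing rules for the MSR, as C constants (names are hypothetical). */
#define MSR_BUF_SIZE       2048   /* bytes per MSR buffer                    */
#define ATM_CELL_PAYLOAD     48   /* AAL5 frames are multiples of 48 B       */
#define MSR_MAX_FRAME      2016   /* 42 cells * 48 B                         */
#define MSR_SHIM_LEN          8   /* reserved for the shim                   */
#define AAL5_TRAILER_LEN      8   /* CPCS-UU, length, CRC                    */
#define MSR_IP_MTU (MSR_MAX_FRAME - MSR_SHIM_LEN - AAL5_TRAILER_LEN) /* 2000 */

/* At the output port: a max frame received at an 8 B offset uses 2024 B,
 * leaving the last 24 B of the 2048 B buffer unused. */
```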
Descriptor Notes
Descriptor word layout (32 bits each):
• Word 0: Match/Checksum (upper half) and flag bits - - - - V I S O E C L X T Y
• Word 1: BufLen (size) and NextDesc (index into the descriptor table)
• Word 2: BufAddrLo (physical address of the data buffer, low 32 bits)
• Word 3: BufAddrHi
Flag bits:
• V = Volatile buffer
• I = Interrupt/notify on read
• S = SAM enable
• O = Read only
• E = End of frame
• C = CRC OK (RX)
• L = Loss priority (CLP of last cell, RX)
• X = Congestion indication from last cell's PTI (RX)
• T = BufType: 0 = data; 1 = RM; 2 = segment OAM; 3 = end-to-end OAM
• Y = Sync: 0 = Done, valid link; 1 = Done, invalid link; 2 = Not ready; 3 = Ready
Possible values for the first word:
• CAFE0083 = Tx, EoF, Ready (written by driver)
• CAFE0080 = Tx, EoF, DoneValidLink (written by APIC)
• CAFE0002 = Tx, NotReady
• CAFE0003 = Rx, Ready, no interrupt on read
• CAFE0403 = Rx, Ready, interrupt on read
• xxxx00C0 = Rx, EoF, CRC OK, DoneValidLink
• xxxx0040 = Rx, CRC OK, DoneValidLink
• xxxx00C1 = Rx, EoF, CRC OK, DoneInValidLink
• xxxx0041 = Rx, CRC OK, DoneInValidLink
A C rendering of this layout follows.
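The four-word layout maps naturally onto a C structure. This is a hedged sketch: the field and macro names are mine, and the exact bit positions are inferred from the V I S O E C L X T Y ordering and the example words above (e.g. CAFE0403 setting the I bit), not taken from the APIC documentation:

```c
#include <stdint.h>

/* One APIC descriptor: four 32-bit words, as described in the notes. */
typedef struct apic_desc {
    uint32_t match_flags;   /* match/checksum in [31:16], flags in [15:0] */
    uint32_t len_next;      /* BufLen in [31:16], NextDesc in [15:0]      */
    uint32_t buf_addr_lo;   /* low 32 bits of buffer physical address     */
    uint32_t buf_addr_hi;   /* high bits of buffer physical address       */
} apic_desc_t;

/* Assumed flag-bit positions in the low half of word 0. */
#define DESC_INTR   (1u << 10)  /* I: interrupt/notify on read */
#define DESC_EOF    (1u << 7)   /* E: end of frame             */
#define DESC_CRCOK  (1u << 6)   /* C: CRC OK on receive        */
#define DESC_SYNC   0x3u        /* Y: two sync bits            */

/* Example first-word values taken from the slide. */
#define DESC_TX_EOF_READY  0xCAFE0083u
#define DESC_RX_READY      0xCAFE0003u
```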
APIC Descriptors
[Diagram: the free descriptor chain used by the APIC during receive; each descriptor contains the physical address of an available buffer. The Pool X chain head points to descriptors marked 0xCAFE in the match field, BufLen = 2016, and first word CAFE0003 (Ready), or CAFE0002 (Not Ready) at the end of the chain.]
Descriptors on a Receive Queue
[Diagram: the receive queue for VC 101 holds completed descriptors, with the checksum written into the match field. A 1000-byte frame leaves BufLen = 1016 unused (EoF set, status DoneValidLink); a full 2016-byte frame leaves BufLen = 0 (status DoneInvalidLink).]
RX Descriptor to Buffer Mapping
[Diagram: descriptors map one-to-one onto 2 KB buffers, which replace mbufs. If the first RX descriptor is j, then descriptor j+i refers to buffer i, for i = 0..N.]
Descriptor Layout
Descriptor area layout, starting at desc_area (index 0 is an invalid descriptor); the total is msr_descr_count descriptors:
• local_start .. local_end (local_count descriptors)
• aal5rx_start .. aal5rx_end (aal5_count): RX descriptors for the shared RX/TX IP packet buffers (*aal5_pool)
• aal5tx_start .. aal5tx_end (aal5_count): TX descriptors for the same buffers
• aal0rx_start .. aal0rx_end (aal0_count): RX cell buffers (*aal0_pool), aal0_count_vci descriptors each for RX channels 0 and 1
• aal0tx_start .. aal0tx_end (aal0_count): TX cell buffers, aal0_count_vci each for TX channels 0 and 1
• remaining descriptors are unallocated
Tx Descriptor & Buffer Relationships
[Diagram: the APIC descriptor table (DT), the MSR buffer headers, and the MSR buffers (MB) are parallel arrays. An Rx descriptor is bound (at the same offset) to a specific buffer and buffer header; TX descriptors are allocated dynamically and bound to the RX descriptor and its buffer. Per-connection state tracks the current descriptor, connection status, ATM header, and resume/pacing registers; a global notification channel reports completions for ports 0-2.]
Receiving a Packet
1. An AAL5 frame is received: the APIC allocates and reads a descriptor from the RX pool, then writes back (updates) the previous Rx descriptor.
2. The APIC writes the received AAL5 frame, cell by cell, to the buffer referenced by the new Rx descriptor.
[Diagram: the APIC on one side, the descriptor table (DT) and MSR buffers (MB) indexed by base + index registers, and the driver and IP code on the other.]
Completing the Receive
1. The APIC writes (updates) the current descriptor.
2. The APIC updates the notification register on the global notification channel.
3. Last, the APIC asserts an interrupt; interrupts are then disabled on the Rx channel.
[Diagram: the IP header and data now sit in the MSR buffer, where the driver and IP code pick them up via the descriptor table.]
Sending a Packet
1. Allocate a Tx descriptor and bind it to the Rx descriptor and buffer.
2. (a) Write to the current descriptor's next index; (b) write to the resume Tx channel register.
3. The APIC sends (reads) the packet and interrupts when done.
[Diagram: the driver and IP code consult the IP lookup table, then hand the buffer back through the descriptor table; the APIC segments it into cells.]
A driver-side sketch of the receive/recycle half of this protocol follows.
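A hedged sketch of how a driver might walk the completed receive chain and recycle descriptors, under the descriptor layout assumed earlier; all names are hypothetical and the real MSR driver surely differs in detail:

```c
#include <stdint.h>

#define SYNC_DONE_VALID 0x0u        /* Y = 0: done, valid link        */
#define DESC_RX_READY   0xCAFE0003u /* mark descriptor ready again    */

typedef struct apic_desc {
    uint32_t match_flags, len_next, buf_addr_lo, buf_addr_hi;
} apic_desc_t;

/* Hypothetical per-channel driver state. */
struct rx_chan {
    apic_desc_t *desc_table;    /* base of the descriptor table       */
    uint32_t     head;          /* next descriptor index to examine   */
};

/* Deliver completed frames to IP and recycle their descriptors.
 * 'last' would come from the APIC's notification register. */
void rx_complete(struct rx_chan *ch, uint32_t last,
                 void (*ip_input)(apic_desc_t *))
{
    for (uint32_t i = ch->head; i != last;
         i = ch->desc_table[i].len_next & 0xFFFFu) {  /* NextDesc index */
        apic_desc_t *d = &ch->desc_table[i];
        if ((d->match_flags & 0x3u) != SYNC_DONE_VALID)
            break;                      /* APIC has not finished this one */
        ip_input(d);                    /* classify, route, enqueue       */
        d->match_flags = DESC_RX_READY; /* recycle to the free pool       */
    }
    ch->head = last;
}
```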
Enhancements to Buffer Management
• Currently enhancing the plugin interface to support both sinking and sourcing of packets.
• This is combined with the flow table for reserved and bound flows.
• We also support 64 datagram queues for packet scheduling.
• Changes to buffer management are shown in the next few slides.
Relationships between Buffers
[Diagram: packet buffers are indexed by rxindx/txindx through the desc_bindings table, whose index equals the descriptor index. An RX descriptor a is bound to a buffer holding shim + IP packet + AAL5 trailer with refcnt = 2; TX descriptors x and y record rxindx = a and share that buffer, while TX descriptor z (rxindx = 0) has its own. The buffer header and buffer offsets are the same as rxindx, and the header holds the start-of-frame pointer.]
Buffer Headers and FTEs
Buffer header fields:
    pkt       // pointer to start of frame
    fid       // flow table entry reference
    qlist     // linked list for PS queue
    gid       // nonexclusive GM filter
    qid       // queue id used by PS
    fwdkey    // (sid, outVIN), route
    plen      // AAL5 frame length
    atmlen    // plen + cell headers
    rxcid     // VC packet received on
    txcid     // VC to send frame on
    flags     // {CPY, Active, FPX, Ingress/Egress, IPO, Shim, RATM, Cntl, Drop}
Flow table entry (FTE) fields:
    alist     // list of active FTEs
    fwdkey    // pinned route
    qid       // unique flow id for PS
    firm_req  // requested rate
    firm_act  // allocated rate
    soft_req  // requested rate
    soft_act  // allocated rate
    weight    // PS weight
    res       // LFS reservation
    flags     // {Deny, Active, Reserv, LFS, Reclaim}
    plugin    // pointer to plugin
    filter    // exact match 5-tuple
    refcnt    // pkts in system
    pktcnt    // total pkts matching
Packet scheduler queues: each queue index heads a list of buffer headers. The buffer itself holds the shim, IP packet, AAL5 padding, and AAL5 trailer; plen covers the AAL5 frame. A C rendering follows.
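Rendered as C structures for concreteness. The field list is from the slide; the type names, field widths, and use of NetBSD's sys/queue.h list macros are assumptions:

```c
#include <stdint.h>
#include <sys/queue.h>   /* NetBSD-style list linkage; an assumption */

struct fte;              /* flow table entry, below */

/* One MSR buffer header (parallels an APIC descriptor). */
struct msr_bufhdr {
    void       *pkt;                    /* start of frame              */
    struct fte *fid;                    /* flow table entry reference  */
    TAILQ_ENTRY(msr_bufhdr) qlist;      /* linkage on a PS queue       */
    uint16_t    gid;                    /* nonexclusive GM filter      */
    uint16_t    qid;                    /* queue id used by PS         */
    uint32_t    fwdkey;                 /* (sid, outVIN) route         */
    uint16_t    plen;                   /* AAL5 frame length           */
    uint16_t    atmlen;                 /* plen + cell headers         */
    uint16_t    rxcid, txcid;           /* receive / transmit VCs      */
    uint32_t    flags;                  /* CPY, Active, FPX, ...       */
};

/* Flow table entry (FTE). */
struct fte {
    LIST_ENTRY(fte) alist;              /* list of active FTEs         */
    uint32_t fwdkey;                    /* pinned route                */
    uint16_t qid;                       /* unique flow id for PS       */
    uint32_t firm_req, firm_act;        /* firm requested/allocated    */
    uint32_t soft_req, soft_act;        /* soft requested/allocated    */
    uint16_t weight;                    /* PS weight                   */
    uint32_t res;                       /* LFS reservation             */
    uint32_t flags;                     /* Deny, Active, Reserv, ...   */
    void    *plugin;                    /* bound plugin instance       */
    uint32_t refcnt;                    /* packets in system           */
    uint32_t pktcnt;                    /* total packets matching      */
    /* plus the exact-match 5-tuple filter */
};
```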
Classifier Table Relationships
• General match filters live in the route table (RT, ordered by priority). Each entry holds: filter (with masks), prio (priority of the filter), fte (if exclusive), ilist (if nonexclusive), and flags {Exclusive, ...}; the table ends with a default route.
• Exact-match lookup hashes the IP header destination into the flow table (FT, by priority); each bucket heads a list of filters with the same hash value (on the order of 10), linked by hlist (hash list) and alist (active list), each entry holding a filter (with masks) and an fte pointer (possibly NULL).
• Exclusive filters reference a flow table entry. Nonexclusive filters are used for monitoring: they do not reference an FTE, nor can they alter application packets.
A lookup sketch follows.
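A hedged sketch of the exact-match stage (hash the 5-tuple into the flow table, then walk the bucket's chain); the table size, hash function, and all names are illustrative only — the real classifier's hash widths and offsets are configurable (msr/msr_classify.h):

```c
#include <stdint.h>
#include <string.h>

struct tuple5 { uint32_t src, dst; uint16_t sport, dport; uint8_t proto; };

struct fte;                        /* flow table entry (opaque here)    */
struct flow_filter {
    struct flow_filter *hlist;     /* next filter with the same hash    */
    struct tuple5       key;       /* exact-match 5-tuple               */
    struct fte         *fte;       /* NULL for nonexclusive filters     */
};

#define FT_BUCKETS 1024            /* assumed table size                */
static struct flow_filter *flow_table[FT_BUCKETS];

static unsigned ft_hash(const struct tuple5 *t)
{
    /* Placeholder hash; the real function is configurable. */
    return (t->src ^ t->dst ^ t->sport ^ t->dport ^ t->proto) % FT_BUCKETS;
}

/* Exact-match stage: hash, then linear search within the bucket. */
struct fte *ft_lookup(const struct tuple5 *t)
{
    struct flow_filter *f = flow_table[ft_hash(t)];
    for (; f != NULL; f = f->hlist)
        if (memcmp(&f->key, t, sizeof *t) == 0)
            return f->fte;         /* may be NULL (nonexclusive)        */
    return NULL;                   /* caller falls back to general match */
}
```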
Overview
• Introduction to hardware environment
• APIC core processing and buffer management
• Overview of SPC kernel software architecture and processing steps
• Plugin environment and filters
• Command Facility
SPC Software Architecture
[Diagram. Main components:
• Classifier: general match and exact match, feeding VOQ 0 .. VOQ 7.
• Plugin environment: plugin instances receive commands and packets.
• Command processor: handles CP commands and debug messages, returns command replies.
• APIC-specific driver code: interrupt-driven; an ingress/egress test selects the path.
• Ingress: insert/process shim, IP processing, route lookup (FIPL, simple); the APIC TX queues are VOQs whose pacing is adjusted by DQ.
• Egress: DRR service over sub-ports 0-3 (handlers for SP1..SPN send a budget per interval) onto paced APIC TX queues.
• Distributed queuing (DQ): a periodic callback interrupt (every D sec) reads DQ report cells, sets pacing, and broadcasts its own report.]
SPC Data Path - Simplified View
Frame/buffer and IP processing → NM filter / flow classifier (channel map) → plugin environment (plugins) → route lookup (shim, FIPL, simple, cache) → ingress/egress split: DQ with input queuing on ingress, DRR with output queuing on egress.
SPC Input (Ingress) Processing
[Diagram. Per packet: the APIC-specific driver code receives the frame; IP processing manages IP options; the NM filter / flow classifier (channel map) matches it, possibly invoking plugin instances (X.1, X.2, Y.1, Z.1, Z.2, W.1) through the PCU framework and the local resource manager / PCU interface; route lookup (FIPL, simple) chooses the output; the inter-port shim is inserted (replacing the intra-shim with the inter-shim), the trailer and IP header are updated; and the packet is queued on an APIC TX VOQ (SP1..SP4 onto VOQ 0..7), whose pacing is adjusted by distributed queuing. The DQ callback (every D sec) reads DQ report cells, sets pacing, and broadcasts a report.]
Intra/Inter Port Shim (Input Processing Detail)
Shim format (8 bytes): flags, stream identifier, input VIN, output VIN (first word not used). VIN format: PN (10 bits) + SPI (6 bits). Intra-port shim flags: {IP, X, AF, NR, OP, UK}.
Processing an ingress AAL5 frame (received at the Rx offset of a 2 KB MSR buffer: shim, IP datagram, padding, trailer):
• Exact match classifier: hash the IP header's 5-tuple {src_addr, dst_addr, src_port, dst_port, proto} into the flow table (field widths and offsets are configurable: msr/msr_classify.h); a match maps the packet to a flow entry.
• General match filter: linear search over the same 5-tuple (filters 1..10 in the diagram); a match maps a flow to one or more plugin instances (i1..i5 per filter), whose handlers are then invoked.
• Route: use the route cached in the flow entry; if none, call IP lookup (FIPL/simple).
• Forward: set the input and output VIN in the shim, calculate the AAL5 length, decrement the IP TTL, recalculate the IP header checksum (the APIC calculates and sets the AAL5 CRC), and place the frame in the APIC TX queue for its destination (SP1..SP4 onto VOQ 0..7).
A sketch of the forwarding step follows.
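A sketch of the forwarding step, assuming a hypothetical shim layout; the incremental checksum patch after the TTL decrement is the standard RFC 1141/1624 technique rather than anything MSR-specific:

```c
#include <sys/types.h>
#include <netinet/in.h>
#include <netinet/ip.h>   /* struct ip */
#include <arpa/inet.h>    /* htons     */
#include <stdint.h>

/* Hypothetical 8-byte inter-port shim: VIN = PN (10 bits) + SPI (6 bits). */
struct msr_shim {
    uint8_t  flags;
    uint8_t  sid;         /* stream identifier */
    uint16_t in_vin;      /* input port/subport  */
    uint16_t out_vin;     /* output port/subport */
    uint16_t unused;
};

#define MAKE_VIN(pn, spi) ((uint16_t)((((pn) & 0x3FF) << 6) | ((spi) & 0x3F)))

/* Decrement TTL and patch the checksum incrementally: the TTL occupies
 * the high byte of its 16-bit word, so add htons(0x0100) and fold. */
static void ip_ttl_dec(struct ip *iph)
{
    iph->ip_ttl--;
    uint32_t sum = iph->ip_sum + htons(0x0100);
    iph->ip_sum = (uint16_t)(sum + (sum >> 16));  /* fold the carry */
}

void forward(struct msr_shim *shim, struct ip *iph,
             uint16_t in_vin, uint16_t out_vin)
{
    shim->in_vin  = in_vin;
    shim->out_vin = out_vin;
    ip_ttl_dec(iph);
    /* ...calculate the AAL5 length, enqueue on the VOQ for out_vin... */
}
```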
Output Port (Egress) Processing
[Diagram. Per packet: the APIC-specific driver code receives the frame; IP processing processes the shim and classifies (NM filter / flow classifier with channel map), possibly invoking plugin instances through the PCU framework and local resource manager; the output VC is determined, the shim is removed, and the AAL5 trailer and IP header are updated; the packet is then queued for DRR service. DRR serves sub-ports 0-3, each handler sending a budget per flow per interval onto the paced APIC TX queues. A periodic callback interrupt (every D sec) reports TX queue lengths to distributed queuing.]
Output Port (Egress) Processing - Notes
• Every D sec the DRR handler is executed. It sends up to MAX bytes per period (minus backlog), sharing the available bandwidth among the active flows.
• Output channels are paced such that their sum is the effective link bandwidth.
• The general and exact match classifiers are the same as on ingress, except the route is obtained from the output VIN in the shim.
• Receive: verify the shim and adjust the buffer and header references (the frame arrives at the Rx offset of a 2 KB MSR buffer).
• Transmit: remove the shim for TX, adjust the buffer (TX offset), update the AAL5 trailer and IP header, and place the packet in the DRR queue for its flow (referenced by the flow entry).
A DRR sketch follows.
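A minimal sketch of a deficit-round-robin pass of this shape: a per-period byte budget shared across the active flows. The structures, quantum policy, and names are hypothetical, not the MSR scheduler:

```c
#include <stdint.h>
#include <stddef.h>

struct pkt { struct pkt *next; uint16_t len; };

struct drr_flow {
    struct drr_flow *next;      /* list of active flows          */
    struct pkt      *head;      /* queued packets for this flow  */
    uint32_t         quantum;   /* bytes of credit added per round */
    uint32_t         deficit;   /* accumulated sending credit    */
};

/* One DRR pass, called every D sec: send at most 'budget' bytes total. */
void drr_service(struct drr_flow *active, uint32_t budget,
                 void (*tx)(struct pkt *))
{
    for (struct drr_flow *f = active; f != NULL && budget > 0; f = f->next) {
        f->deficit += f->quantum;
        while (f->head != NULL && f->head->len <= f->deficit
                               && f->head->len <= budget) {
            struct pkt *p = f->head;
            f->head     = p->next;
            f->deficit -= p->len;
            budget     -= p->len;
            tx(p);                  /* hand to a paced APIC TX queue  */
        }
        if (f->head == NULL)
            f->deficit = 0;         /* idle flows accumulate no credit */
    }
}
```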
What about Ethernet?
For more detail see the GigE talk on Tuesday.
[Diagram: hosts 1..N attach to an Ethernet switch, which connects through the MSR to neighboring routers.]
GigE Link Interface - Egress
• Software creates the VIN table (4 entries) at boot time by writing to the interface. If the incoming VC != 50, look up the VC in the VIN table to obtain the IP address used for the ARP lookup (N = 4 supported):
    VC   MyIP    NhIP
    50   MyIP0   0       (endsystem, broadcast, or multicast address)
    51   MyIP0   NhIP0   (next hop #1 = Base + 1)
    52   MyIP1   NhIP1   (next hop #2 = Base + 2)
    53   MyIP2   NhIP2   (next hop #3 = Base + 3)
• Destination resolution: if the destination is broadcast or multicast, map it directly to an Ethernet address; otherwise resolve pkt->dst with ARP (table of M entries, IP → MAC). There is no ARP entry aging.
• If the ARP table lookup fails, send an ARP request to the broadcast address and drop the packet. No retries are made.
• Finally, add the Ethernet header using the derived destination address and our source address; the protocol is IP. The AAL5 frame from the FPX/SPC is sent to the next hop or end station.
A sketch of this resolution path follows.
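A hedged sketch of the egress resolution logic just described; the table shapes, sizes, and names are assumptions (only the VC numbering, drop-on-miss rule, and lack of aging come from the slide):

```c
#include <stdint.h>
#include <string.h>
#include <stdbool.h>

#define BASE_VC  50
#define ARP_SIZE 64                       /* "M" entries, assumed        */

struct vin_entry { uint32_t my_ip, nh_ip; };
struct arp_entry { uint32_t ip; uint8_t mac[6]; bool valid; };

static struct vin_entry vin_table[4];     /* built at boot, VC 50..53    */
static struct arp_entry arp_table[ARP_SIZE];

extern void send_arp_request(uint32_t ip);    /* assumed helper          */

/* Resolve the destination MAC for a frame that arrived on 'vc'.
 * Returns false to drop: on an ARP miss a request is broadcast, the
 * packet is dropped, and no retries are made. */
bool egress_resolve(uint16_t vc, uint32_t dst_ip, uint8_t mac[6])
{
    uint32_t target = (vc == BASE_VC)     /* VC 50: endsystem/bcast/mcast */
                    ? dst_ip
                    : vin_table[vc - BASE_VC].nh_ip;   /* VC 51..53       */

    if (target == 0xFFFFFFFFu) {                  /* broadcast            */
        memset(mac, 0xFF, 6);
        return true;
    }
    if ((target >> 28) == 0xEu) {                 /* multicast, 224/4     */
        static const uint8_t pfx[3] = { 0x01, 0x00, 0x5E };
        memcpy(mac, pfx, 3);
        mac[3] = (target >> 16) & 0x7F;           /* low 23 bits of group */
        mac[4] = (target >> 8)  & 0xFF;
        mac[5] =  target        & 0xFF;
        return true;
    }
    for (int i = 0; i < ARP_SIZE; i++)            /* linear scan, no aging */
        if (arp_table[i].valid && arp_table[i].ip == target) {
            memcpy(mac, arp_table[i].mac, 6);
            return true;
        }
    send_arp_request(target);                     /* then drop the packet  */
    return false;
}
```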
GigE Link Interface - Ingress
(The interface performs unicast MAC address filtering.)

    receive ethernet frame eth:
        if (eth->type == ARP)
            if (eth->arp->has != Ethernet/0001)  Drop Frame
            if (eth->arp->pas != IP/0800)        Drop Frame
            update {eth->arp->spa, eth->arp->sha} in ARP table
            if (eth->arp->tpa not in {MyIP0, MyIP1, MyIP2})
                Drop Frame                // target IP is not ours
            if (eth->arp->op == Request/01) {
                swap source and target ARP info
                set operation to Reply
                set ethernet header src and dst addresses
                send reply
            }
            // eth->arp->op == Reply/02 was already handled
            // when the cache was updated above
        else if (eth->type == IPv4)
            remove ethernet header, padding and CRC
            add AAL5 trailer and required padding
            break into cells and send on the default Base VC
        else
            Error, drop packet

[Diagram: frames from the next hop or end station are converted from Ethernet (header + IP datagram) to AAL5 (IP datagram + trailer) and forwarded to the FPX/SPC; the ARP table (M entries, IP → MAC) is consulted and updated.]
Overview
• Introduction to hardware environment
• APIC core processing and buffer management
• Overview of SPC kernel software architecture and processing steps
• Plugin environment and filters
• Command Facility
Packet Classification & Plugins
• Classification provides an opportunity to bind flows to registered plugin instances.
• General classifier - network management
  • classification using the 5-tuple <saddr, sport, daddr, dport, proto>
  • prefix match on addresses; exact match on port and proto
  • 0 is a wildcard for all fields
  • input and output ports
  • filters are added/removed via the command facility
Flow Bound to a Plugin
• General match classifier: linear search of {src_addr, dst_addr, src_port, dst_port, proto}. Options: {First, Last, All}. Rule actions: {Deny, Permit, Active}; rule flags: {All, Copy, Stop}. Each rule may reference up to five plugin instances (i1..i5), whose handlers are invoked on a match; the packet is then sent to the exact match classifier.
• Exact match classifier: hash {src_addr, dst_addr, src_port, dst_port}, then linear search for the flow spec. Options: none. Rule actions: {Deny, Permit, Active, Reserve}; rule flags: {Pinned, Idle, Remove}. A flow entry has a one-to-one relationship with its plugin instance.
• Exact-match active processing is the same as general match, except the AAL5 length and IP header checksum are calculated by the framework, so the plugin does not have to perform these operations.
• For a bound flow, the framework calls the packet handler of the bound instance with a pointer to the IP packet (struct ip *):

    instance->handle_packet(instance, packet, flags)

    handle_packet(inst, pkt, flags) {
        /* Plugin may read and/or modify content but not
         * delete it unless COPY. On return the framework
         * forwards the packet. */
        ...
        return;
    }

[The buffer passed to the handler holds the shim, IP header and data, AAL5 padding, and trailer (CPCS-UU = 0, length = IP packet + LLC/SNAP, CRC).]
General Match Classifier Notes
• General match classifier: linear search of {src_addr, dst_addr, src_port, dst_port, proto}.
• General classifier options: {First, Last, All}
• Rule actions: {Deny, Permit, Active}
• Rule flags: {All, Copy, Stop}
[Diagram: rules 1-10, each referencing up to five plugin instances (i1..i5); a search invokes the instance handler of every matching rule.]
Exact Match Classifier Notes
• Exact match classifier: hash followed by linear search - {src_addr, dst_addr, src_port, dst_port, proto}.
• Exact match classifier options: none.
• Rule actions: {Deny, Permit, Active, Reserve}
• Rule flags: {Pinned, Idle, Remove}
[Diagram: the hash selects a flow table bucket of flow entries; each flow entry has a one-to-one relationship with its plugin instance.]
Active Processing Environment
[Diagram: plugin classes A ("plugin x"), B ("plugin y"), and C ("plugin z"), each with instances, e.g. A instance 1 {Active}, B instance 1 {Deny}, C instances 1 {Active} and 2 {Active, All}, bound to general/exact match classifier rules N and P.]
• A plugin instance maps to at most one rule/filter.
• General classifier: a rule maps to at most 5 instances.
• Exact match classifier: a rule maps to at most 1 instance.
Creating an Instance
create_instance() is called by the PCU framework in response to receiving a command, creates a class instance (here of Class A, classid = 100), and returns a reference to it:

    inst_t *create_instance(class_t *, inst_id)

The instance is the base class extended by the developer:

    struct my_inst {
        inst_t base;
        /* subclass definitions */
    };

Fields defined by the base class:

    class_t *class;
    inst_t  *next;
    inst_id  id;
    fid_t    bound_fid;
    void (*handle_packet)   (inst_t *, ip_t *, flag32_t);
    void (*bind_instance)   (inst_t *);
    void (*unbind_instance) (inst_t *);
    void (*free_instance)   (inst_t *);
    int  (*handle_msg)      (inst_t *, buf_t *, flag8_t, seq_t, len_t *);
    /* class-specific data ... */
Plugin Class Specific Interface
• All plugins belong to a class. At run time a class (i.e. plugin) must be instantiated before it can be referenced.
• A plugin is passed its instance pointer (like C++) as the first argument.
• The developer may extend the base class (struct rp_instance) to include additional fields that are local to each instance.
• The plugin developer must implement the following methods (a minimal example follows):
  • void (*handle_packet)(struct rp_instance *, struct ip *, u_int32_t);
  • void (*bind_instance)(struct rp_instance *);
  • void (*unbind_instance)(struct rp_instance *);
  • void (*free_instance)(struct rp_instance *);
  • int (*handle_msg)(struct rp_instance *, void *, u_int8_t, u_int8_t, u_int8_t);
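A hedged sketch of a minimal plugin against this interface. struct rp_instance is only stubbed here with the five methods (the real base class has more fields), and the packet counter is an invented example of instance-local state:

```c
#include <sys/types.h>
#include <netinet/in.h>
#include <netinet/ip.h>

/* Stub of the base class; the real struct rp_instance has more fields. */
struct rp_instance {
    void (*handle_packet)(struct rp_instance *, struct ip *, u_int32_t);
    void (*bind_instance)(struct rp_instance *);
    void (*unbind_instance)(struct rp_instance *);
    void (*free_instance)(struct rp_instance *);
    int  (*handle_msg)(struct rp_instance *, void *, u_int8_t,
                       u_int8_t, u_int8_t);
};

/* Developer's subclass: base class first, then instance-local fields. */
struct count_inst {
    struct rp_instance base;
    u_int32_t          npkts;   /* invented example state */
};

/* Count packets; the framework forwards the packet on return. */
static void count_handle_packet(struct rp_instance *self,
                                struct ip *pkt, u_int32_t flags)
{
    struct count_inst *inst = (struct count_inst *)self;
    inst->npkts++;
    /* may read/modify *pkt, but must not delete it unless COPY */
}

static void count_bind(struct rp_instance *self)   { /* filter bound   */ }
static void count_unbind(struct rp_instance *self) { /* filter removed */ }
static void count_free(struct rp_instance *self)   { /* release state  */ }
static int  count_msg(struct rp_instance *self, void *buf,
                      u_int8_t type, u_int8_t seq, u_int8_t len)
{
    return 0;   /* no reply data */
}
```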
Plugin Framework Enhancements
• Integrated with the command framework:
  • send command cells to the PCU: create instance, free instance, bind instance to filter, unbind instance
  • send command cells to particular plugin instances
  • send command cells to a plugin base class
• Enhanced interface to address limitations noticed in Crossbow:
  • instances have access to their plugin class, instance id, and filter id
  • PCU reports describe any loaded classes, instances, and filters
Overview
• Introduction to hardware environment
• APIC core processing and buffer management
• Overview of SPC kernel software architecture and processing steps
• Plugin environment and filters
• Command Facility
Command Facility Highlights
• Overview
• High-level description - application layer
• MSR command interface overview
• Cell format and field definitions
• Example
Definitions
• Session: an open connection between the CP and a specific SPC; intended to represent open connections and command state.
• Transaction: represents a complete command. A transaction terminates when either an EOF is received by the CP or an error occurs.
• EOF: End of File is returned to the CP when the last of the command data is returned, in response to a Cancel message, or when an error occurs.
Overview - Cmd Interface on CP
• Synchronous request/response protocol.
• A timeout can be specified, as well as the number of retries - a per-session option.
• Essentially provides a reliable service.
  • Issue: if there is no reply, the command or reply message was lost in the port, the channel, or the CP; retries may be a bad thing.
• Address - MSR port and command: <MSR_Port, MSR_Command>.
• Message destination - a callback function within the port's kernel (implements the command).
A sketch of this discipline follows.
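A sketch of the CP side of this request/response discipline with per-session timeout and retry counts; everything here is illustrative rather than the MSR implementation, and the transport helpers are assumed:

```c
#include <stdbool.h>
#include <stdint.h>
#include <stddef.h>

struct session {
    int      spc_port;     /* which SPC this session talks to */
    unsigned timeout_ms;   /* per-session option              */
    unsigned max_retries;  /* per-session option              */
};

/* Assumed transport helpers: send a command cell addressed by
 * <MSR_Port, MSR_Command> and wait for a reply or a timeout. */
extern void send_cmd(int port, uint16_t cmd, const void *arg, size_t len);
extern bool wait_reply(int port, unsigned timeout_ms, void *reply);

/* Synchronous request/response: retry on timeout. Note the caveat
 * from the slide: if the reply (not the command) was lost, a retry
 * re-executes the command, which may be a bad thing. */
bool cp_request(struct session *s, uint16_t cmd,
                const void *arg, size_t len, void *reply)
{
    for (unsigned try = 0; try <= s->max_retries; try++) {
        send_cmd(s->spc_port, cmd, arg, len);
        if (wait_reply(s->spc_port, s->timeout_ms, reply))
            return true;
    }
    return false;   /* command failed after all retries */
}
```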