Enabling Fast, Dynamic Network Processing with ClickOS Joao Martins*, Mohamed Ahmed*, Costin Raiciu§, Felipe Huici* * NEC Europe, Heidelberg, Germany §University Politehnica of Bucharest firstname.lastname@neclab.eu, costin.raiciu@cs.pub.ro
The Idealized Network
[diagram: end hosts and routers connected through the classic layered stack: Application, Transport, Network, Datalink, Physical]
A Middlebox World
[diagram: a sampling of middlebox functions: ad insertion, WAN accelerator, BRAS, IDS, transcoder, session border controller, carrier-grade NAT, load balancer, DDoS protection, firewall, DPI, QoE monitor]
Hardware Middleboxes - Drawbacks
• Middleboxes are useful, but…
• Expensive
• Difficult to add new features, lock-in
• Difficult to manage
• Cannot be scaled with demand
• Cannot share a device among different tenants
• Hard for new players to enter the market
• Clearly, shifting middlebox processing to a software-based, multi-tenant platform would address these issues
• But can it be built using commodity hardware while still achieving high performance?
• ClickOS: a tiny Xen-based virtual machine that runs Click
Xen Background - Overview
[diagram: the Xen hypervisor runs on the hardware and hosts a privileged control domain (dom0, providing the dom0 interface) alongside multiple guest domains (domUs), each running a paravirtualized guest OS and its apps]
ClickOS - Contributions
[diagram: unlike a regular domU, where apps run on a full guest OS, a ClickOS domU runs Click directly on top of MiniOS, paravirtualized]
• Work consisted of
• Build system to create ClickOS images (5 MB in size)
• Emulating a Click control plane over MiniOS/Xen (see the sketch below)
• Optimizations to reduce boot times (30 milliseconds)
• Optimizations to the data plane (10 Gb/s for larger packet sizes)
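To make the control-plane bullet concrete, the sketch below shows how a tool in the driver domain might hand a Click configuration to a running ClickOS domain through the Xen store, which is the control channel shown on the next slides. This is a minimal illustration of the idea only: the store path /local/domain/<domid>/clickos/0/config is a hypothetical location, the configuration string omits element arguments, and the code is not ClickOS's actual toolstack (build against libxenstore, e.g. with -lxenstore).

/* Sketch: write a Click configuration into a ClickOS domU's Xen store area.
 * The store path is hypothetical; FromNetfront/ToNetfront are the ClickOS
 * elements named on the following slides. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <xenstore.h>

int main(int argc, char **argv)
{
    int domid = (argc > 1) ? atoi(argv[1]) : 1;    /* target ClickOS domain */
    const char *cfg = "FromNetfront -> Counter -> ToNetfront;";
    char path[64];

    struct xs_handle *xs = xs_open(0);             /* connect to xenstored */
    if (!xs) {
        perror("xs_open");
        return 1;
    }

    /* Hypothetical path where the in-VM control plane would look for configs */
    snprintf(path, sizeof(path), "/local/domain/%d/clickos/0/config", domid);

    if (!xs_write(xs, XBT_NULL, path, cfg, strlen(cfg)))
        fprintf(stderr, "xs_write to %s failed\n", path);

    xs_close(xs);
    return 0;
}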
Xen I/O Subsystem and Bottlenecks
[diagram: in the ClickOS domain, Click's FromNetfront/ToNetfront elements sit on top of netfront; netfront talks to netback in the driver domain (e.g., dom0) over the Xen ring API (data), event channels, and the Xen bus/store, and netback connects through a vif to a Linux/OVS bridge and the NW driver; measured rates at different stages of this path: 300 Kp/s, 350 Kp/s, 225 Kp/s]
Optimized Xen I/O
[diagram: same ClickOS domain (FromNetfront/ToNetfront over netfront) and driver domain (e.g., dom0), but the Linux/OVS bridge is replaced by the VALE switch and the netfront-netback data path uses the netmap API instead of the Xen ring API; control still goes over the Xen bus/store and event channels]
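For reference, the sketch below shows the userspace shape of the netmap API that the optimized data path builds on: open a port on a VALE switch, poll it, and drain whatever packets are in the rings. The port name vale0:p0 is a placeholder, and this is an illustration of the netmap programming model, not ClickOS's actual netfront code (build against the netmap headers).

/* Sketch: count packets arriving on a netmap/VALE port.
 * "vale0:p0" is a placeholder port name. */
#define NETMAP_WITH_LIBS
#include <net/netmap_user.h>
#include <poll.h>
#include <stdio.h>

int main(void)
{
    struct nm_desc *d = nm_open("vale0:p0", NULL, 0, NULL);
    if (d == NULL) {
        fprintf(stderr, "nm_open failed\n");
        return 1;
    }

    struct pollfd pfd = { .fd = NETMAP_FD(d), .events = POLLIN };
    unsigned long pkts = 0;

    while (poll(&pfd, 1, 1000) >= 0) {        /* wait for packets, 1s timeout */
        struct nm_pkthdr h;
        while (nm_nextpkt(d, &h) != NULL)     /* drain all buffered packets */
            pkts++;
        printf("%lu packets so far\n", pkts);
    }

    nm_close(d);
    return 0;
}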
Throughput – One CPU Core
[plot: packet rates achieved by a ClickOS rate-meter configuration, measured over a 10Gb/s direct cable]
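A standalone approximation of the traffic source used in such a test is sketched below: it injects minimum-sized frames into a netmap/VALE port and prints the transmit rate once per second. The port name vale0:p1 and the canned frame are placeholders; the actual experiment drove a ClickOS rate-meter configuration rather than this tool.

/* Sketch: blast minimum-sized Ethernet frames into a netmap port and report
 * the transmit rate. "vale0:p1" is a placeholder port name. */
#define NETMAP_WITH_LIBS
#include <net/netmap_user.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <time.h>

int main(void)
{
    unsigned char frame[60];                      /* min Ethernet frame, no CRC */
    memset(frame, 0xff, 12);                      /* broadcast dst, dummy src */
    memset(frame + 12, 0, sizeof(frame) - 12);    /* zero type + payload */

    struct nm_desc *d = nm_open("vale0:p1", NULL, 0, NULL);
    if (d == NULL) {
        fprintf(stderr, "nm_open failed\n");
        return 1;
    }

    unsigned long sent = 0;
    time_t last = time(NULL);
    int seconds = 0;

    while (seconds < 10) {                        /* run for ten one-second slots */
        if (nm_inject(d, frame, sizeof(frame)) > 0)
            sent++;
        else
            ioctl(NETMAP_FD(d), NIOCTXSYNC, NULL); /* ring full: flush to switch */

        time_t now = time(NULL);
        if (now != last) {
            printf("%lu pkts/s\n", sent);
            sent = 0;
            last = now;
            seconds++;
        }
    }

    nm_close(d);
    return 0;
}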
Boot times
[plot: measured boot times of 30 milliseconds and 220 milliseconds]
Conclusions
• Presented ClickOS
• Tiny (5 MB) Xen VM tailored to network processing
• Can be booted in 30 milliseconds
• Can run a large number of ClickOS VMs concurrently (128)
• Can achieve 10Gb/s throughput using only a single core
• Future work
• Implementation and performance evaluation of ClickOS middleboxes (e.g., firewalls, IDSes, carrier-grade NATs, software BRASes)
• Work to adapt the Linux netfront to the netmap API
• Service chaining