Gopher GigaNet: A Next Generation Campus Network • David Farmer (farmer@umn.edu) • Winter 2005 Joint Techs, February 14th 2005
Alternate Titles • How I spent my summer • Without any Vacation • Firewalls everywhere • But not a Policy to implement • Why MPLS • Policy, Policy, Policy • Or, I want to build a broken network, but still manage it
Agenda • About UMN • The Old Network • Design Goals • Key Technologies We Picked • Architecture Components • The “Big Picture”
Twin Cities Campus: Vital Statistics • 897 surface acres • East Bank, West Bank, St. Paul • 251 Buildings • 20-story Office Towers to Garden Sheds • Nearly 13M Assignable ft² • Nearly 22M Gross ft² • 50,954 Student Enrollment – Fall 2004 • Second largest nationally (the largest is only 41 students more) • Ranked 10th in total research
Twin Cities Campus: Network Statistics • More than 200 on-net Buildings • 1730 Wire Centers (Closets or Terminal Panels) • 842 With Network Electronics • 2774 Edge Access Switches (3750G-24TS) • 312 Aggregation Switches (3750G-12S) • 29 Core Switches (6509-NEB-A) • 5000 Virtual Firewall Instances
The Old Network • Originally installed Sept ’97 – Dec ’99 • Took way too long • 10Mb Switched Ethernet to desktop • Small amount of 100Mb for high-end desktops and servers • Typically multiple 100Mb building links • Partial-Mesh OC3 ATM backbone
The Old Network • Cisco 1924 Closet Switches • 4 switches per 100Mb uplink • Cisco 2924M-XL Closet Switches • Used for small amounts of 100Mb for servers and desktops • Single switch with two 100Mb uplinks • Cisco 5500 Core Switches • With RSMs for routing • 25 Core Nodes • FORE ASX-200 and ASX-1000 ATM switches for Core network
The Old Network – Midlife Upgrade • Installed Aug ’00 • Added GigE Backbone • Cisco 5500 Core Switches • Upgraded to Sup3s with GigE uplinks & MLS • Foundry BigIron • Center of Star Topology GigE Backbone
Design Goals • Divorce Logical and Physical Topologies • Provide more than 4096 VLANs network wide • “Advanced” Services • Routed (L3) Core, Switched (L2) Aggregation and Edge • Network Policy – AKA Security • Network Intercept • Other Stuff
Design Goals • Divorce Logical and Physical Topologies • Administrative Topology • Policy Topology • Security or Firewalls • Bandwidth shaping or Usage • QOS • Functional or Workgroup Topology
Design Goals • Provide more than 4096 VLANs network wide • More than 1000 VLANs now • Micro-segmentation for Security and other Policy could easily require 4X growth over the next 5 years • Even if we don't exceed 4096 VLANs, the VLAN number space will be very full
Design Goals • "Advanced" Services • Native IPv4 Multicast • PIM Sparse Mode, MSDP, BGP for Routing • IGMP v3 (SSM support) for L2 switching • IPv6 • Unicast for sure • Multicast best shot • Jumbo Frames • 9000-byte clean path end to end
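As a rough illustration of what native multicast and a "9000 clean" path imply at the configuration level, here is a minimal IOS-style sketch; the addresses, VLAN number, RP, and MSDP peer are hypothetical, not the actual campus values.

  ! Enable multicast routing and PIM Sparse Mode on a user-facing SVI (hypothetical values)
  ip multicast-routing
  !
  interface Vlan100
   ip pim sparse-mode
   ip igmp version 3
  !
  ! A static RP is shown only for brevity; the design above uses PIM-SM with MSDP and BGP
  ip pim rp-address 10.0.0.1
  ip msdp peer 192.0.2.1 connect-source Loopback0
  !
  ! Jumbo frames on a 3750 edge stack (takes effect after a reload)
  system mtu jumbo 9000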
Design Goals • Routed (L3) Core, Switched (L2) Aggregation and Edge • How many L3 control points do you want to configure? • Limit the scope of Spanning Tree • If possible, eliminate Spanning Tree • Minimally, limit it to protecting against mistakes, NOT an active part of the Network Design
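A minimal sketch of what "limit Spanning Tree to protecting against mistakes" can look like on an edge switch; the port number is hypothetical and this is illustrative rather than the actual campus template.

  ! Rapid PVST+ so any accidental loop converges quickly
  spanning-tree mode rapid-pvst
  !
  ! User ports: forward immediately, and err-disable the port if a BPDU ever shows up
  interface GigabitEthernet1/0/1
   switchport mode access
   spanning-tree portfast
   spanning-tree bpduguard enable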
Design Goals • Network Policy – AKA Security • Security is, at least partly, the network's problem • Let's design it into the network, rather than add it as an afterthought • The network needs to enforce Policies • Only some of these are actually related to Security • Rate Shaping, COS/QOS, AAA, just to name a few • Firewalls with stateful inspection are necessary in some locations • Network Authentication (802.1x)
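For the 802.1x piece, a hedged IOS-style sketch of port authentication against RADIUS follows; the server address, key, and port number are hypothetical.

  ! Global AAA and 802.1X (hypothetical RADIUS server and key)
  aaa new-model
  aaa authentication dot1x default group radius
  radius-server host 10.128.1.10 key s3cr3t
  dot1x system-auth-control
  !
  ! Per-port: require authentication before forwarding user traffic
  interface GigabitEthernet1/0/1
   switchport mode access
   dot1x port-control auto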
Design Goals • Network Intercept • Intrusion Detection and Prevention • Troubleshooting • Measurement and Analysis • Legal Intercept and Evidence Collection • Sinkhole Routing
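One hedged example of the sinkhole idea: advertise otherwise unused (dark) address space and pull whatever hits it toward an analysis host. The prefix, AS number, and next hop below are hypothetical.

  ! Route dark space to a hypothetical analysis host behind the sinkhole router
  ip route 10.250.0.0 255.255.0.0 10.251.0.2
  !
  ! Announce the dark space into BGP so the rest of the campus forwards matching traffic here
  router bgp 65000
   redistribute static route-map SINKHOLE
  !
  route-map SINKHOLE permit 10
   match ip address prefix-list SINKHOLE-NETS
  !
  ip prefix-list SINKHOLE-NETS seq 5 permit 10.250.0.0/16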
Design Goals • Other Stuff • Core Services • DNS • DHCP • NTP • Measurement • Localized Logging • Syslog • Netflow
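A minimal sketch of exporting the localized logging and Netflow mentioned above from a Core Node; the collector addresses and VLAN are hypothetical.

  ! Syslog and NTP toward the collocated management servers (hypothetical addresses)
  logging host 10.128.1.20
  logging trap informational
  ntp server 10.128.1.30
  !
  ! NetFlow export from a routed SVI to a local collector
  ip flow-export source Loopback0
  ip flow-export version 5
  ip flow-export destination 10.128.1.21 2055
  !
  interface Vlan100
   ip route-cache flow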
Design Goals • Other Stuff • Data Centers • Intend to support 6 – 12 Data Centers on campus • Create Separate Infrastructure • Allows different maintenance windows • Provide Higher SLA/SLE • Provide things that can’t scale to the rest of campus • Server load balancing • Dual fiber entrances • Single L2 Domain • Redundant Routers
Design Goals • Other Stuff • Management Network • Console Servers • Remote Power Control • Redundant GigE network • Allow access to critical Core Network equipment at all times • Dial-up Modem on Console Server for Emergency Backup
Key Technologies We Picked • MPLS VPNs • Cisco StackWise Bus on 3750s • Cross Stack EtherChannel provides redundancy without creating loops in the Spanning Tree topology • Cisco FWSM with Transparent Virtual Firewalls • Policy as L2 bumps on the wire • Let the Routers Route
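A sketch of the cross-stack EtherChannel idea on a 3750 stack: one member link per stack member, so the uplink survives losing a whole switch without ever creating a loop for Spanning Tree to resolve. Port numbers are hypothetical, and "mode on" reflects the static bundles that cross-stack channels required at the time.

  ! Uplink bundle spread across two stack members (hypothetical SFP ports)
  interface Port-channel1
   switchport trunk encapsulation dot1q
   switchport mode trunk
  !
  interface GigabitEthernet1/0/25
   switchport trunk encapsulation dot1q
   switchport mode trunk
   channel-group 1 mode on
  !
  interface GigabitEthernet2/0/25
   switchport trunk encapsulation dot1q
   switchport mode trunk
   channel-group 1 mode on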
How to Scale • A network with those numbers doesn't fit in your head • My mind is too small to hold it all • How about yours? • "A foolish consistency is the hobgoblin of little minds" • Emerson • Consistency is the answer to Scaling
MPLS VPNs – Short Tutorial • RFC 2547 defines layer 3 routed MPLS VPNs • Uses BGP for routing of VPNs • Routers create a VRF (VPN Routing & Forwarding) Instance • VRFs are to Routers as VLANs are to Ethernet Switches
MPLS VPNs – Short Tutorial • P – “Provider” Router • No knowledge of customer VPNs • Strictly routes MPLS tagged packets • PE – “Provider Edge” Router • Knowledge of customer VPNs & provider network • Routes packets from customer network across the provider network by adding VPN MPLS tag and tag for the remote PE
MPLS VPNs – Short Tutorial • CE – "Customer Edge" Router • No knowledge of provider network • Strictly routes IP packets to the PE • An MPLS VPN network can be built with PE routers alone; dedicated P and CE routers are optional • This is important in a Campus Network
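To make the tutorial concrete, here is a hedged IOS-style PE sketch of RFC 2547 in miniature: one VRF, one customer-facing SVI, and MP-BGP carrying the VPN routes. The AS number, route distinguisher, addresses, and neighbor are all hypothetical.

  ! Core-facing interface runs MPLS label switching
  ip cef
  interface TenGigabitEthernet4/1
   mpls ip
  !
  ! One VRF -- the router-side analog of a VLAN
  ip vrf DEPT-A
   rd 65000:100
   route-target both 65000:100
  !
  ! Customer-facing interface placed into the VRF
  interface Vlan100
   ip vrf forwarding DEPT-A
   ip address 10.100.0.1 255.255.255.0
  !
  ! MP-BGP distributes VPN routes between PE routers
  router bgp 65000
   neighbor 10.0.0.2 remote-as 65000
   neighbor 10.0.0.2 update-source Loopback0
   address-family vpnv4
    neighbor 10.0.0.2 activate
    neighbor 10.0.0.2 send-community extended
   address-family ipv4 vrf DEPT-A
    redistribute connected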
Architecture Components • Campus Border • Core Network • Aggregation Networks • Edge Nodes
Campus Border • Border Routers • Redundant routers in diverse locations • Act as CE routers for all VRFs that need Internet Access • Cisco 6509 • Dual SUP720-3BXL • Dual Power Supplies and Fans • All 6700 Series Interface Cards
Campus Border • Border Policy Enforcement • Layer 2 bumps on the wire • Cisco FWSM • Packeteer 9500 • Home-grown ResNet Authentication Control & Scanner (RACS) • Attached to or contained within the Border Router • Packets get a little dizzy passing through the Border Router L2 or L3 switching fabric several times
Core Network • Backbone Nodes • 2 Backbone Nodes producing a Dual-Star Topology • Collocated with the Border Routers • 10Gb interconnection between Backbone Nodes. • 10Gb connection to each Core Node • Cisco 6509
Core Network • Core Nodes • Located at 16 Fiber aggregation sites around campus • 10Gb connection to each Backbone Node • 2 or 3Gb to Aggregators or Edge Nodes • Cisco 6509-NEB-A
Core Network • Core Nodes • Layer 3 routing provided for End User Subnets • Layer 3 MPLS VPNs provide separate Routing Domains • Virtual Firewalls provided per Subnet as needed • Root of a VLAN Domain • 802.1q tags have local significance only • VLANs connected between Core Nodes using Layer 2 MPLS VPNs as needed
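A minimal sketch of the "VLANs connected between Core Nodes using Layer 2 MPLS VPNs" idea, using an EoMPLS cross-connect on a dot1q subinterface; the interface, VLAN, VC ID, and remote loopback address are hypothetical.

  ! Carry VLAN 100 across the MPLS core to the remote Core Node's loopback
  interface GigabitEthernet1/1.100
   encapsulation dot1Q 100
   xconnect 10.0.0.7 100 encapsulation mpls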
Aggregation Networks • Layer 2 only • Aggregates Edge Nodes & connects them to a Core Node • Cisco 3750G-12S
Aggregation Networks • Regional Aggregator • 3Gb Connection to Core Node • Area Aggregator • 3Gb Connection to Regional Aggregator • Building Aggregator • 2 or 3Gb Connection to Regional or Area Aggregator, or directly to a Core Node
Edge Nodes • Connects users and servers to the Network • Connects to a Building Aggregator • If more than one closet in a building • Otherwise connects to • Core Node • Regional Aggregator • Area Aggregator • Cisco 3750G-24TS
Data Center Networks • Data Center Core Nodes • Redundant Routers servicing all Data Centers on Campus • Collocated with the Border Routers and Backbone Nodes • 10Gb interconnection between Data Center Core Nodes • 10Gb connection to each Backbone Node • 2Gb up to 10Gb connection to each Data Center • Cisco 6509-NEB-A
Data Center Networks • Data Center Aggregator • Connected to both Data Center Core Nodes • Two 3750G-12S or two 3750G-16TD • Feeds Data Center Edge Nodes within a single Data Center
Data Center Networks • Data Center Edge Nodes • Min Stack of two 3750G-24TS • Connects to Data Center Aggregator • Or directly to Data Center Core Node if a single stack serves the Data Center • Want hosts to EtherChannel to separate switches in the Stack for redundancy
Management Network • Management Node • 3750G-24TS collocated with each Core Node • Routed as part of Control Plane & Management network • Cyclades Console server and Remote Power Control • Management Aggregator • Connects all the Mgmt Nodes
Management Network • Measurement Server collocated with each Core Node • Log Server Collocated with each Core Node • DNS, DHCP, NTP Server Collocated with each Core Node • Using Anycast for DNS Redundancy
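One common way to get the Anycast DNS redundancy mentioned above: every DNS server instance answers on the same service address, and the adjacent router advertises a host route for it into the IGP, so clients simply reach the closest live instance. The addresses and OSPF process below are hypothetical, not the campus values.

  ! Host route for the shared (anycast) DNS service address, pointing at the local server
  ip route 10.10.10.53 255.255.255.255 10.128.1.10
  !
  ! Advertise only the anycast host route into OSPF
  router ospf 1
   redistribute static subnets route-map ANYCAST-DNS
  !
  route-map ANYCAST-DNS permit 10
   match ip address prefix-list ANYCAST-DNS
  !
  ip prefix-list ANYCAST-DNS seq 5 permit 10.10.10.53/32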
Analysis Network • Analysis Node • All switches collocated in single location • Provides access to every Core Node for testing and Analysis • Provides for remote packet sniffing of any traffic on campus • Provides Sinkhole Drains for each Core Node
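The "remote packet sniffing of any traffic on campus" can be pictured as RSPAN: mirror traffic into a remote-span VLAN at a Core Node and pull it off to a capture port at the Analysis Node. The VLAN and port numbers here are hypothetical.

  ! On the Core Node carrying the traffic of interest
  vlan 900
   remote-span
  !
  monitor session 1 source vlan 100
  monitor session 1 destination remote vlan 900
  !
  ! On the Analysis Node, hand the mirrored traffic to a capture host
  monitor session 1 source remote vlan 900
  monitor session 1 destination interface GigabitEthernet1/0/10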
That's enough • That's enough rambling for now! • I really want to do more, but! • Find me and let's talk more! • I'll even argue if you want • Email me (farmer@umn.edu)