
Presentation Transcript


  1. CEN Network Technology Briefing – July 2006

  2. Briefing Agenda
  • Describe UConn's leadership in state, national and regional advanced research and education networks
  • Connecticut's optical network backbone and architecture
  • Discussion of UConn's role in providing service to CEN users
  • Overview of network content initiatives in K12, higher education and government on these networks (online learning, video, e-portfolio, etc.)
  • The relationship between the CEN, Internet2, the NOX, Abilene, NEREN and the National Lambda Rail

  3. CEN Services for K12 & Libraries
  • Every school district gets an optical drop
  • On-network peering to all other CEN sites
  • Primary Internet service provider
  • Internet2
  • Firewall
  • Child protection filtering
  • Domain Name Service
  • Generally redundant links to each site

  4. CEN Services for Higher Eds
  • Redundant optical drop to every campus
  • On-network peering to all other CEN sites
  • Optional commodity Internet services
  • Optional Internet2 services
  • Optional access to NEREN fabric
  • Future video, disaster recovery services

  5. CEN Paying Customer Connectivity
  • Who's on now: UConn (8), CSU (5), CommTech System, Charter Oak State, Albertus Magnus, Yale, Trinity, Wesleyan, UNH, Conn College, USCGA, Rensselaer, Sacred Heart, U Hartford, Fairfield, Quinnipiac, Mystic Aquarium, VBrick, American School for the Deaf, Connecticut Public Television, St. Joseph's, Mitchell
  • Who is next: St. Vincent's, Commtech (4), U Bridgeport, Lyme Academy, Williams School

  6. CEN Technologies
  • Optical backbone on leased dark fiber
  • CWDM on congested fiber paths
  • Ethernet-based network
  • Large frame size capacity (MTU of 9216)
  • MPLS-enabled core for Layer-2 cut-through
  • IP multicast
  • Capacity to deploy an IPv6 overlay
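  As a rough illustration of how several of these features land on a single core port, a Cisco IOS-style interface stanza might look like the sketch below. This is a hypothetical example, not CEN's actual configuration: the interface name and addressing are invented, and exact command syntax varies by platform.

    ! Hypothetical CEN-style core interface; names and addresses are invented
    interface TenGigabitEthernet1/1
     description Backbone link to a neighboring hub site
     ! jumbo frames, matching the 9216-byte core MTU
     mtu 9216
     ip address 10.255.0.1 255.255.255.252
     ! IP multicast on the core
     ip pim sparse-mode
     ! MPLS label switching for Layer-2 cut-through services
     mpls ip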

  7. CEN Dark Fiber Backbone
  • Fibertech Networks:
    • "On Network" dark – existing backbone areas where CEN purchased fiber by the pair
    • "Lateral Build" dark – 12 strands built for CEN with no electronics
    • E-rate leased Ethernet – built for CEN as a GBIC-based Ethernet service
  • Singlemode fiber, SMF-28
  • LX/LR optics (<10 km), ZX/ER optics (10–70 km)

  8. Hub Site Types
  • Telecom POPs (2): West Haven, New London
  • State Police locations (4): Meriden, Southbury, Litchfield, Bridgeport
  • College data centers (9): Danbury, Hamden, Hartford, Storrs, Norwich, Middletown, Stamford, Enfield, New Britain
  • Borrowed space (3): Ansonia, Waterbury

  9. Hub Site Specs
  • Design with a short fiber lateral before fiber diversity, preferably only at the building entrance
  • Type A sites (critical, typically with 10G):
    • Powering: 4 hours battery with automatic generator backup, or 8 hours battery
    • Assured 7x24 access
  • Type B sites (backup service only):
    • 8 hours battery
    • Less favorable access conditions

  10. [Statewide map of hub sites labeled by type: A, A*, B, and B→A upgrades]

  11. New London

  12. Ansonia, West Haven

  13. Waterbury, Meriden

  14. Backbone Architecture
  • Massive over-provisioning to allow multiple link failures with no service impact; typically 10G on the primary backbone
  • Physical and logical meshing implemented where possible
  • 9216 MTU size on all core links
  • MPLS tag switching on all interfaces
  • MPLS TTL propagation disabled except for troubleshooting
  • All MPLS-enabled devices in OSPF Area 0 on all interfaces
  • BGP peering for VPNv4 routes only, to 5 geographically separated route reflectors
  • No policy routing, OSPF weighting or access lists if possible (let traffic flow its default path)
  • Prefix management:
    • Global routing table only for on-network connectivity
    • All customer routes in virtual routing tables
    • Global multicast only to support MPLS MDT trees
    • Customer networks also prefer to use OSPF in VRFs, not using Area 0
  • Failure responsiveness:
    • Link-state notification on all backbone links should force immediate routing convergence
    • Longest failures should be based on BGP timers
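  To make the control-plane recipe above concrete, here is a minimal Cisco IOS-style sketch of the core settings it describes: TTL propagation off, everything in OSPF Area 0, and iBGP carrying only VPNv4 routes toward a route reflector. The AS number, addresses and process IDs are all hypothetical; this illustrates the design, not CEN's actual configuration.

    ! Keep core hops invisible to customer traceroutes; re-enable only when troubleshooting
    no mpls ip propagate-ttl
    !
    ! every MPLS-enabled device sits in OSPF Area 0 on all interfaces
    router ospf 1
     network 10.255.0.0 0.0.255.255 area 0
    !
    ! iBGP carries VPNv4 (customer VRF) routes only, toward one of the route reflectors
    router bgp 65001
     no bgp default ipv4-unicast
     neighbor 10.255.255.1 remote-as 65001
     neighbor 10.255.255.1 update-source Loopback0
     address-family vpnv4
      neighbor 10.255.255.1 activate
      neighbor 10.255.255.1 send-community extended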

  15. Fiber Tributary Design – Higher Education Sites
  [Diagram: each higher ed campus homed to two hub sites; hub-to-hub backbone typically 10GigE, campus drops GigE LX or ZX]

  16. Fiber Tributary Design – K12 Site Design
  [Diagram: K12 sites daisy-chained along a GigE LX or ZX tributary between two hub sites; backbone typically 10GigE]

  17. Tributary Design
  • Higher Ed sites:
    • 7000 series software-based routers
    • OSPF-routed /30s per port
    • Each campus dual-homed to two hub sites
    • MPLS runs to the edge device
    • >1500 MTU
    • BGP to the edge
  • K12 sites:
    • 3550 series L3 switches
    • OSPF shared /28s on backbone VLAN
    • Up to 4 (6) sites per tributary between two hub sites
    • No MPLS
    • 1500 MTU
    • No BGP
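  The addressing difference is the crux of this slide: each higher ed uplink is its own routed /30, while a K12 tributary shares one /28 across the daisy chain. A hypothetical Cisco IOS-style sketch of the two edge styles follows; interface names are invented and the addresses come from the 192.0.2.0/24 documentation range.

    ! Higher Ed edge (7000-series router): dedicated /30 per uplink, MPLS to the edge
    interface GigabitEthernet0/1
     description Uplink to hub site (hypothetical)
     ip address 192.0.2.1 255.255.255.252
     mpls ip
    !
    ! K12 edge (3550-series L3 switch): one shared /28 on the backbone VLAN, no MPLS
    interface Vlan100
     description Tributary VLAN shared by the daisy-chained K12 sites
     ip address 192.0.2.18 255.255.255.240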

  18. Backbone Construction

  19. Level(3) Conduit Route
  • 130-mile state-controlled duct
  • 108-strand cable installed: 96 singlemode, 12 LEAF
  • 48 strands spliced through
  • We own the cable

  20. Firewall, Filtering & Server Block
  [Diagram: redundant firewall clusters FWG 43A and FWG 43B – FWG 43A with DNS #1, a URL server and WhatsUp monitoring; FWG 43B with DNS #2 and URL server #2]

  21. Filtering, Firewall, Server Block
  • Design for full redundancy
  • Working towards no customer downtime when a cluster fails or goes offline
  • Building a business continuity function so East Hartford can go away without customer impact

  22. Servers
  • Cenmon (Cricket, tech support site, log server, DNS)
  • N2H2 admin & N2H2 URL servers (2)
  • TFTP/FTP
  • DNS servers (2)
  • RADIUS servers (2)
  • VoIP server
  • Firewall management station

  23. Internet Services Architecture
  • Currently 4 commodity ISPs:
    • Wiltel Hartford – 1 Gbps – Newark, NJ
    • Qwest New London – 622 Mbps – Boston, MA
    • Qwest West Haven – 622 Mbps – New York, NY
    • NEREN/OSHEAN – 1 Gbps – Boston, MA
  • 2 paths to Internet2/NOX:
    • NEREN Storrs to NOX – 1 Gbps
    • Qwest New London – OC-3
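  For scale, those commodity links add up to roughly 3.2 Gbps of aggregate egress capacity (2 × 1 Gbps + 2 × 622 Mbps ≈ 3,244 Mbps), before counting the two Internet2/NOX paths.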

  24. Internet Provider Drains
  [Map: the hub-site map from slide 10 (A/B site types) overlaid with the ISP egress points]

  25. ISP Architecture
  • All ISP routing entities (VRFs) can run to the nearest ISP egress point in the event of cohesive network collapse
  • Try not to rate limit in any instance; customers allowed to burst within reason
  • Goal is zero customer-impacting downtime
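  In a VPNv4/route-reflector design like the one on slide 14, this nearest-egress behavior typically falls out of BGP best-path selection: if each ISP-facing router originates a default route into the customer VRFs, every edge device prefers the default with the lowest IGP metric, i.e. its closest surviving egress point. (That is a general property of the design, not a statement about CEN's exact implementation.)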

  26. Internet Provider Load Balance
  • Qwest WH: Connecticut State University, Community Colleges, UConn Health Center
    • CIR = 135 Mbps @ $39/Mbps/mo
    • Backup for Wiltel
    • Averaging 135-140 Mbps peak
  • Qwest NL: all other UConn
    • CIR = 135 Mbps @ $39/Mbps/mo
    • Backup for West Haven
    • Averaging 180 Mbps peak
  • Wiltel Htfd: all K12 & libraries, all other higher ed campuses
    • CIR = 200 Mbps @ $29/Mbps/mo
    • Backup for Qwest links
    • Averaging 600 Mbps peak
  These are our provider costs, not including salaries, benefits, program management, NEREN, collocation, etc. Please consider confidential!
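  Taken at face value, the committed spend works out to about 135 × $39 = $5,265/month for each Qwest link and 200 × $29 = $5,800/month for Wiltel, roughly $16,300/month combined before the NEREN link and the overhead items noted above.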

  27. A Revolutionary Idea in Networking “Old North Church Project”

  28. Connecticut, Rhode Island and Massachusetts have purchased the route from Manhattan to Cambridge through Stamford, Storrs, Providence, Springfield and Albany for the Old North Church Project

  29. NEREN Geography
  • 32 Avenue of the Americas, NYC
  • 601 West 26th Street, NYC
  • 60 Hudson Street, NYC
  • 230 Congress Street, Boston
  • 300 Bent Street, Cambridge
  • Along Mass Pike, Lee
  • Albany
  • 375 Promenade, Providence
  • 450 Main Street, Worcester
  • 54 Meadow Street, New Haven
  • RT 44, Grand Union, Storrs
  • 101 East River Drive, E. Hartford
  • Stamford
  • Pomfret

  30. NEREN Technology
  • Currently Gigabit Ethernet from Hartford to Boston to Springfield
  • DWDM multiplexing planned:
    • 32 lambdas of minimally 2.5 Gbps capacity
    • Likely 10 Gbps Ethernet lambda deployment
  • Some interest in Infinera O-E-O products
  • Sparse network utilizing state infrastructure for local distribution
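  At the stated minimum, a fully lit 32-lambda DWDM system would carry 32 × 2.5 Gbps = 80 Gbps of aggregate capacity per fiber pair, and proportionally more wherever 10 Gbps Ethernet lambdas are deployed instead.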

  31. CEN Operational Thoughts
  • When in doubt, broadcast it out
  • Internal staff email list: CEN-ADMIN@net.cen.ct.gov
  • Customer list: CENCHANGE@list.state.ct.us
  • No core changes without discussion
  • Our change window is 5-7 AM, with 5-day customer notice
  • Edge sites are more tolerant of customer-requested timing
  • Remember K12 daisy-chain convergence issues

  32. Questions/Contact Information
  John Vittner – 860-622-2241 – John.Vittner@ct.gov
  Robin Brown – 860-622-2139 – Robin.Brown@uconn.edu
