University of Oklahoma Network Infrastructure and National Lambda Rail
Why High Speed? • Moving data. • Collaboration.
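A rough way to see why link speed matters for moving research data: the sketch below estimates ideal transfer times for a large dataset at common link rates. The 2 TB dataset size is an illustrative assumption, not a figure from the slides, and the times ignore protocol overhead and end-host limits.

```python
# Back-of-envelope transfer times for a large dataset at common link rates.
# The 2 TB dataset size is an illustrative assumption, not a figure from the
# slides; times ignore protocol overhead and end-host limits.

DATASET_BYTES = 2 * 10**12  # 2 TB, example only

LINK_RATES_BPS = {
    "100 Mb/s": 100 * 10**6,
    "1 Gb/s":   1 * 10**9,
    "10 Gb/s":  10 * 10**9,
}

for name, rate_bps in LINK_RATES_BPS.items():
    seconds = DATASET_BYTES * 8 / rate_bps  # bytes -> bits, then divide by line rate
    print(f"{name:>9}: {seconds / 3600:6.1f} hours")
```

At these rates the same dataset drops from roughly two days on a 100 Mb/s path to under half an hour at 10 Gb/s, which is the gap that makes collaboration on live data practical.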
What’s Our Strategy? • End-to-end “big picture” design. • Constantly shifting target architecture. • Consistent deployment methods. • Supportable and sustainable resources.
How Do We Design for Today’s Needs and Tomorrow’s Requirements?
Cabling… • Yesterday: Category 5 • Split-pair deployment for voice and data • Cheapest vendor • Poor performance for today’s demands
Cabling… (cont) • Today: Category 6+ • Standardized on Krone TrueNet • Gigabit capable • Trained and certified installation team • Issues with older installations still exist
Cabling… (cont) • Tomorrow: Krone 10G • 10-Gigabit capable • Purchasing new test equipment • Deployed at National Weather Center • Upgrade of older installations to 6+ or 10G
Fiber Optics… • Yesterday: • Buy cheap • Pull to nearest building • Terminate what you need
Fiber Optics… (cont) • Today: • WDM capable fiber • Pull to geographic route node • Terminate, test, and validate • Issues with “old” fiber still exist
Fiber Optics… (cont) • Tomorrow: • Alternate cable paths • Life-cycle replacement • Inspection and re-validation
Network Equipment… • Yesterday: • 10Mb/s or 10/100Mb/s to desktop • 100Mb/s or Gigabit to the building • Buy only what you need (no port growth)
Network Equipment… (cont) • Today: • 10/100/1000 to the desktop • Gigabit to the wiring closet • 25% expansion space budgeted on purchase • PoE, per-port QoS, DHCP snooping, etc.
Network Equipment… (cont) • Tomorrow: • 10-Gig to the wiring closet • Non-blocking switch backplanes • Enhanced PoE, flow collection
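As a rough illustration of what “non-blocking” implies when sizing a wiring-closet switch, the sketch below compares worst-case port demand (including the 25% expansion headroom budgeted at purchase) against backplane capacity. The port counts and the backplane figure are hypothetical examples, not specifications of an actual OU deployment.

```python
# Sketch: check whether a closet switch backplane is non-blocking for a given
# port configuration. Port counts and the backplane figure are hypothetical
# examples, not specs from an actual deployment.

import math

active_drops = 120                  # desktop drops served from the closet today
expansion_factor = 1.25             # 25% growth headroom budgeted at purchase
ports_needed = math.ceil(active_drops * expansion_factor)

access_rate_gbps = 1                # 10/100/1000 to the desktop
uplink_rate_gbps = 10               # 10-Gig uplink toward the core
backplane_gbps = 360                # hypothetical switching capacity (full duplex)

# Worst case: every port sending and receiving at line rate simultaneously.
demand_gbps = (ports_needed * access_rate_gbps + uplink_rate_gbps) * 2

print(f"Ports to provision: {ports_needed}")
print(f"Worst-case demand:  {demand_gbps} Gb/s vs backplane {backplane_gbps} Gb/s")
print("Non-blocking" if backplane_gbps >= demand_gbps else "Oversubscribed")
```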
Servers… • Yesterday: • One application = one server • Run it on whatever can be found • No consideration for network, power, HVAC, redundancy, or spare capacity
Servers… (cont) • Today: • Virtualizing the environment • Introducing VLANs to the server farm • Clustering and load balancing • Co-locating to take advantage of economies of scale (HVAC, power, rack space)
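To make “clustering and load balancing” concrete, here is a minimal least-connections dispatcher: each new request goes to the cluster member with the fewest active connections. The server names are placeholders and this is only a sketch of the idea, not the production load-balancing configuration.

```python
# Minimal least-connections dispatcher: send each new request to the cluster
# member currently handling the fewest connections. Server names are
# placeholders; a real server farm would use a dedicated load balancer
# rather than application code like this.

active = {"web-01": 0, "web-02": 0, "web-03": 0}

def pick_server():
    server = min(active, key=active.get)  # fewest active connections wins
    active[server] += 1
    return server

def release(server):
    active[server] -= 1                   # called when a connection closes

# Simulate a burst of ten requests landing on the cluster.
for i in range(10):
    print(f"request {i} -> {pick_server()}")
```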
Servers… (cont) • Tomorrow: • Data center construction • Infiniband and iSCSI • “Striping” applications across server platforms • App environment “looks like” a computing cluster (opportunities to align support)
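A simple way to picture “striping” applications across identical server platforms is hash-based placement: each unit of work maps to a host by hashing its key, so the pool behaves like a small computing cluster. The host and application names below are hypothetical, used only to illustrate the placement idea.

```python
# Sketch of "striping" application instances across a pool of identical server
# platforms: each unit of work is mapped to a host by hashing its key.
# Host and application names are hypothetical.

import hashlib

HOSTS = ["blade-01", "blade-02", "blade-03", "blade-04"]

def host_for(key: str) -> str:
    digest = hashlib.sha256(key.encode()).hexdigest()
    return HOSTS[int(digest, 16) % len(HOSTS)]

for app in ["mail-queue", "web-session-42", "batch-job-7"]:
    print(f"{app:>15} -> {host_for(app)}")
```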
ISP (OneNet)… • Yesterday: • Two dark-fiber Gigabit connections • Poor relationship between ISP and OU
ISP… (cont) • Today: • Excellent partnership between ISP & OU • 10-Gigabit BGP peer over DWDM • 10-Gig connection to NLR • BGP peer points in disparate locations
ISP… (cont) • Tomorrow: • Dual, 10-Gig peer… load shared • Gigabit, FC, and 10-Gigabit “on-demand” anywhere on the optical network • Additional ISP peering relationships to better support R&D tenants
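One common way to realize a load-shared dual 10-Gig peering is per-flow hashing, so every packet in a flow takes the same peer and arrives in order. The sketch below illustrates the idea with a hash over the flow 5-tuple; the peer names and addresses are made up for the example.

```python
# Per-flow load sharing across two 10-Gig peers: hash the flow 5-tuple and
# pick a peer, so all packets in a flow follow the same path (no reordering).
# Peer names and addresses are illustrative only.

import hashlib

PEERS = ["onenet-peer-a", "onenet-peer-b"]

def peer_for_flow(src_ip, dst_ip, proto, src_port, dst_port):
    key = f"{src_ip}|{dst_ip}|{proto}|{src_port}|{dst_port}".encode()
    return PEERS[int(hashlib.md5(key).hexdigest(), 16) % len(PEERS)]

print(peer_for_flow("10.1.1.5", "192.0.2.10", "tcp", 40512, 443))
print(peer_for_flow("10.1.1.6", "198.51.100.7", "udp", 53124, 53))
```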
WAN… • Yesterday: • OC-12 to I2 • OC-12 and OC-3 to I1 • All co-located in the same facility
WAN… (cont) • Today: • 10-Gigabit (Chicago) and 1-Gigabit (Houston) “routed” connection to NLR • OC-12 to I2, with route preference to NLR • Multiple I1 connections • Multiple I1 peers in disparate locations
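“Route preference to NLR” is essentially a policy decision: when a prefix is reachable over both NLR and I2, the NLR path wins. The sketch below mimics that with a BGP-style comparison where the highest local preference is selected; the preference values are hypothetical policy numbers, not OU’s actual configuration.

```python
# BGP-style path selection for a prefix reachable over several WAN connections:
# highest local preference wins, so research traffic prefers NLR over I2.
# The preference values are hypothetical policy numbers.

candidate_paths = [
    {"next_hop": "NLR (10-Gig, Chicago)", "local_pref": 200},
    {"next_hop": "Internet2 (OC-12)",     "local_pref": 150},
    {"next_hop": "Commodity I1",          "local_pref": 100},
]

best = max(candidate_paths, key=lambda p: p["local_pref"])
print(f"Preferred exit: {best['next_hop']}")
```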
WAN… (cont) • Tomorrow: • LEARN connection for redundant NLR and I2 connectivity • DWDM back-haul extensions to allow NLR and I2 terminations “on-campus”
To what end??? • “Condor” pool evolution • “Real-time” data migrations and streaming • Knowledge share • Support share • Ability to “spin-up” bandwidth anytime, anywhere (within reason)
Questions? zgray@ou.edu