Explore the enhancements made to campus LANs, WAN connectivity, and research networks as part of the NSF CC-NIE program. Learn about before-and-after scenarios, status of deployments at CSU, CU-Boulder, and University of Idaho, and upcoming Phase II solutions.
NSF CC-NIE Award Panel • Westnet CIO Meeting • Monday, January 6, 2014 • University of Arizona – Tucson
Aside – Pat’s “Hardware Upgrade” • I regret very much not being able to join you in person, but I am still recovering from my “hardware upgrade.” [Image: “Front Elevation View”]
Background • The NSF “Campus Cyberinfrastructure - Network Infrastructure and Engineering (CC-NIE)” program was introduced in response to the 2009 Advisory Committee for Cyberinfrastructure (ACCI) Campus Bridging task force report (www.nsf.gov/cise/aci/taskforces/TaskForceReport_CampusBridging.pdf) to: • Invest in the weakest links for enabling data-intensive research – in campus LANs and, secondarily, in WAN connectivity • Deploy perfSONAR (see the sketch below) • Expand InCommon
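perfSONAR nodes publish their measurements through a REST archive (esmond). As a hedged illustration of what a deployed perfSONAR node offers operationally, this minimal Python sketch pulls recent throughput results; the hostname ps.example.edu is a hypothetical placeholder for any perfSONAR toolkit host:

```python
# Minimal sketch: query a perfSONAR node's esmond measurement archive for
# recent throughput results. The hostname is a placeholder; any perfSONAR
# toolkit instance exposing the esmond REST API should answer similarly.
import requests

HOST = "http://ps.example.edu"
ARCHIVE = HOST + "/esmond/perfsonar/archive/"

# Find throughput test metadata recorded in the last 24 hours.
meta = requests.get(
    ARCHIVE,
    params={"event-type": "throughput", "time-range": 86400, "format": "json"},
    timeout=30,
).json()

for m in meta:
    for et in m["event-types"]:
        if et["event-type"] != "throughput":
            continue
        # Each event type exposes a base-uri from which the raw
        # (timestamp, value) series can be fetched.
        data = requests.get(
            HOST + et["base-uri"],
            params={"time-range": 86400, "format": "json"},
            timeout=30,
        ).json()
        for point in data:
            gbps = point["val"] / 1e9  # throughput values are in bits/second
            print(f'{m["source"]} -> {m["destination"]}: {gbps:.2f} Gbps')
```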
Participants • Scott Baily, Colorado State University • Thomas Hauser, Univ. of Colorado Boulder • Daniel Ewart, University of Idaho • Steve Corbató, University of Utah
CSU – Before [Network diagram: Border 1 and Border 2 feed Core 1 and Core 2, the campus core routing cluster serving the production LAN; a single 10 Gbps wave connects the border to the Internet, research networks, FRGP, and BiSON; the campus core runs at 1 Gbps; commodity users and researchers (typ.) share the production LAN.]
CSU – After [Network diagram: the same border/core routing cluster, now with a DYNES IDC steering dynamic VLANs; 3 × 10 Gbps waves to the Internet, research networks, FRGP, and BiSON; a 10 Gbps campus core; a separate “Research LAN” (40 or 100 Gbps, TBD) carrying 10 Gbps research connections (typ.) and a DYNES server with storage running FDT, while commodity users and researchers (typ.) stay on the production LAN.]
Status of Deployments: CSU • Campus LAN enhancements • WAN enhancements • DYNES implemented at 10 Gbps • Little “real” usage yet • Waiting until year two to see whether 100 Gbps for the Research DMZ will be affordable • Still unsure how to approach firewalling the Research DMZ
Status of Deployments: CU-Boulder • Upgraded CU-Boulder science DMZ core routers to redundant Arista 7504 chassis: • 80 Gb core bandwidth • distributed spine/leaf (MLAG) design • Upgraded CU-Boulder border routers to redundant Juniper MX960s: • 40 Gb to campus cores • 40 Gb to science DMZ cores • Implemented multiple perfSONAR systems: • 10 Gb & 40 Gb (39.6 Gb demonstrated across campus) • dedicated to science DMZ • Implemented secondary Bro system: • dedicated to science DMZ traffic • based on SDN using Arista DANZ/tap aggregation • goal to implement dynamic blocking via upstream ACL injection (see the sketch below) • Enabling IPv6 in the science DMZ • True OOB (serial + Ethernet) connectivity for science DMZ network devices • Implementation of sFlow collection/storage system for science DMZ telemetry • DYNES implemented at 10 Gb but no production usage yet • InstaGENI rack integration with science DMZ underway • Early-stage build-out of OpenStack environment • Looking at MPLS/VXLAN for campus/science DMZ overlay interconnects
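The “dynamic blocking via upstream ACL injection” goal can be made concrete with Arista’s eAPI, the JSON-RPC management interface the 7504s expose. This is a minimal sketch, not CU-Boulder’s actual tooling; the switch address, credentials, and ACL name are assumptions:

```python
# Hedged sketch: push a deny entry into an ACL on an Arista switch via eAPI
# (JSON-RPC over HTTPS). The switch address, credentials, and the ACL name
# "science-dmz-block" are hypothetical placeholders.
import requests

EAPI_URL = "https://dmz-core-1.example.edu/command-api"
AUTH = ("admin", "secret")  # placeholder credentials

def block_host(ip_addr: str) -> None:
    """Inject a deny rule for ip_addr, e.g. on a verdict from the Bro cluster."""
    payload = {
        "jsonrpc": "2.0",
        "method": "runCmds",
        "params": {
            "version": 1,
            "cmds": [
                "enable",
                "configure",
                "ip access-list science-dmz-block",
                f"deny ip host {ip_addr} any",
            ],
            "format": "json",
        },
        "id": "block-1",
    }
    # verify=False only because a lab switch with a self-signed certificate
    # is assumed here; production tooling should verify TLS.
    resp = requests.post(EAPI_URL, json=payload, auth=AUTH, verify=False, timeout=10)
    resp.raise_for_status()
    if "error" in resp.json():
        raise RuntimeError(resp.json()["error"])

block_host("203.0.113.45")  # documentation-range example address
```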
University of Idaho (UI) • State before grant: • Internal network: 2 × 1 Gbps internal bandwidth through 4 core Cisco 6500s • Could not move big data sets across the internal network • No way to utilize IRON 10 Gbps bandwidth to HPC resources at Idaho National Laboratory • No way to utilize 10 Gbps connectivity to Washington State University
UI Desired State • 10 Gbps internal networking between data centers containing HPC and storage • 10 Gbps bandwidth to storage and HPC shared with Idaho National Laboratory • 10 Gbps connectivity possible to WSU for DR/BC (disaster recovery/business continuity) • Upgraded core switches and firewalls • Firewalls kept separate from the research path, reflecting commitment to the Science DMZ concept
UI Grant • $447k • Collaborative partnership between: • Central IT • Research Office • Northwest Knowledge Network (NKN) • Institute for Bioinformatics and Evolutionary Studies (IBEST) • Idaho National Laboratory (INL) • Idaho Regional Optical Network (IRON)
UI Grant • Phase I – complete by March 1, 2014 • Ensure successful completion of grant requirements in preparation for submission of a CC*IIE proposal by March 17, 2014 • 10 Gbps backbone • Moscow campus • INL NKN servers via IRON • Minimize disruption to network operations • Don’t change core network topology • Implementation of perfSONAR
UI Grant • Phase II – complete by August 1, 2014 • Implement virtual router topology in core/border • Implement virtual chassis (VSS) in the data centers • Upgrade data centers to Cisco 6807 chassis • Migrate firewall out of core chassis (ITS expense, outside the grant)
Utah CC-NIE award • Funded under Network Integration and Applied Innovation track ($1M/2 years) • Leadership: • Steve Corbató, Deputy CIO, PI • Adam Bolton, Physics & Astronomy, co-PI • Joe Breen, CHPC, senior personnel • Tom Cheatham, Medicinal Chemistry, co-PI (Research Portfolio chair) • Rob Ricci, School of Computing, co-PI • Kobus van der Merwe, School of Computing, co-PI • Partnerships with Center for High Performance Computing (CHPC) and Honors College
Utah CC-NIE context - I • Close collaboration among campus and external entities • University of Utah • Flux network/systems research group (School of Computing) • Central IT (UIT) • Center for High Performance Computing (CHPC) • Common Infrastructure Services/Network • Information Security Office • Honors College • Utah Education Network (UEN) • GENI Project Office (GPO) • U.S. Ignite • Internet2 • Key premise: the Science DMZ should serve as a testbed of future perimeter defense and detection strategies for the production campus network
Utah CC-NIE context - II • Closely coordinated with other research awards • Emulab/protoGENI research – Ricci, Eide, van der Merwe, Lepreau (deceased) • EPSCoR RII Cyber Connectivity – Research@UEN optical network in northern Utah (with UEN NTIA BTOP award) • EPSCoR RII Track-2 – CI-WATER (UT/WY) – petascale data store for atmospheric and hydrological research • GENI Spiral 3 – UEN GENI – deploying GENI racks in Utah • NSF MRI – Apt – combined network research and computational science cluster (Flux/CHPC)
Utah CC-NIE objective • Leverage upgrade of UEN connection to Internet2 to 100G • Upgrade prototype Science DMZ to 100G • Incorporate SDN technology for dynamic science DMZ • Dynamic slices (Emulab/protoGENI) • Support Big Cycle/Big Data Science on campus • Incorporate novel groups • Honors College (GENI ‘opt-in’ model) • Lassonde Student Garage for venture development
Question #1: Security • What types of devices do you have connected to the research network? • Scientific instruments • HPC systems • Other server systems • Personal computers? • What types and nature of security have you implemented for/on the research network?
Question #2: Level of Effort • Enhancing/implementing the changes/upgrades • Supporting users • Investigating end-to-end performance • Tuning applications • Tuning TCP windows (see the sketch below) • Supporting SDN/DYNES
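On the “tuning TCP windows” point, the reasoning is bandwidth-delay product: a 10 Gbps path with a 50 ms RTT needs roughly 10e9 / 8 × 0.05 ≈ 62.5 MB in flight to stay full, far above typical OS defaults. A minimal Python sketch of requesting matching socket buffers follows; the link speed and RTT are illustrative, and kernel limits (e.g. net.core.rmem_max on Linux) cap what is actually granted:

```python
# Minimal sketch: size TCP socket buffers toward the bandwidth-delay product
# of a long, fast path. Numbers are illustrative; actual ceilings come from
# kernel limits (e.g. net.core.rmem_max / wmem_max on Linux).
import socket

LINK_BPS = 10e9        # 10 Gbps path
RTT_SECONDS = 0.05     # 50 ms round trip
BDP_BYTES = int(LINK_BPS / 8 * RTT_SECONDS)  # ~62.5 MB in flight

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, BDP_BYTES)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, BDP_BYTES)

# The kernel may clamp the request; report what was actually granted.
print("send buffer:", sock.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF))
print("recv buffer:", sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))
```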
Question #3 • Please describe the usage of SDN/DYNES • Now • Anticipated in the future?
Question #4: Struggles/Issues • CSU • Learning about MX960s in a clustered environment • Still drumming up interest in the Science DMZ • Getting Globus running should help there (see the sketch below) • Recruiting “early adopters” • Researchers want to put EVERYTHING in the DMZ • UCB • DYNES • Understanding OSCARS/DRAGON • Finding other sites to test/collaborate with
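For context on the Globus item, here is a hedged sketch using the present-day globus-sdk Python package (not necessarily what CSU ran in 2014, when the hosted Globus Online service was the usual route); the client ID, endpoint UUIDs, and paths are placeholders:

```python
# Hedged sketch: submit a Globus transfer into a Science DMZ data transfer
# node using the globus-sdk package. The client ID, endpoint UUIDs, and
# paths are hypothetical placeholders.
import globus_sdk

CLIENT_ID = "00000000-0000-0000-0000-000000000000"     # placeholder native-app ID
SRC_ENDPOINT = "11111111-1111-1111-1111-111111111111"  # placeholder instrument DTN
DST_ENDPOINT = "22222222-2222-2222-2222-222222222222"  # placeholder HPC/storage DTN

# Interactive native-app login: print a URL, then exchange the pasted code.
auth = globus_sdk.NativeAppAuthClient(CLIENT_ID)
auth.oauth2_start_flow()
print("Log in at:", auth.oauth2_get_authorize_url())
tokens = auth.oauth2_exchange_code_for_tokens(input("Auth code: "))
transfer_token = tokens.by_resource_server["transfer.api.globus.org"]["access_token"]

tc = globus_sdk.TransferClient(
    authorizer=globus_sdk.AccessTokenAuthorizer(transfer_token)
)

# Describe and submit a recursive directory transfer between the two DTNs.
task = globus_sdk.TransferData(tc, SRC_ENDPOINT, DST_ENDPOINT, label="DMZ demo")
task.add_item("/data/run42/", "/scratch/run42/", recursive=True)
print("Submitted task:", tc.submit_transfer(task)["task_id"])
```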
Other Unmet Infrastructure Needs • HPC • Storage • Institutional Repositories and Preservation • Supporting Data Management • SHARE for scholarly journals and data sets? • Other?
Next NSF CC solicitation is out! • Campus Cyberinfrastructure - Infrastructure, Innovation and Engineering Program (CC*IIE) • NSF 14-521 • New tracks added • Network Design and Implementation for Small Institutions • Identity and Access Management (IdAM) Integration • Campus CI Engineer resources • Regional Coordination and Partnership in Advanced Networking • Program managers • Kevin Thompson (CISE/ACI) • Bryan Lyles (CISE/CNS) • Campus CI plan required as supplement • Due March 17, 2014