SUNET TREFpunkt 20, May 14, 2009
FEDERICA: Federated E-infrastructure Dedicated to European Researchers Innovating in Computing network Architectures
Björn Rhoads, KTH/CSC
Agenda • Overview (at a Glance and Goals) • Network (Layout, PoPs and Hardware) • http://www.fp7-federica.eu/
FEDERICA at a Glance • What: European Commission co-funded project in its 7th Framework Programme, area "Capacities – Research Infrastructures" • 3.7 M€ EC contribution, 5.2 M€ budget, 461 man-months • When: 1 January 2008 – 30 June 2010 (30 months) • Who: 20 partners, stakeholders in network operations & research: • 11 National Research and Education Networks, DANTE (GÉANT2), TERENA, 4 universities, Juniper Networks, 1 SME, 1 research centre; coordinator: GARR (Italian NREN) • Where: Europe-wide shared infrastructure, based on NREN & GÉANT2 facilities, open to external connections
FEDERICA partners
National Research & Education Networks (11) • CESNET Czech Republic • DFN Germany • FCCN Portugal • GARR (coordinator) Italy • GRNET Greece • HEAnet Ireland • NIIF/HUNGARNET Hungary • NORDUnet Nordic countries • PSNC Poland • Red.es Spain • SWITCH Switzerland
Small Enterprise • Martel Consulting Switzerland
NREN Organizations • TERENA The Netherlands • DANTE United Kingdom
FEDERICA partners
Universities – Research Centers • i2CAT Spain • KTH Sweden • ICCS (NTUA) Greece • UPC Spain • PoliTO Italy
System Vendor • Juniper Networks Ireland
FEDERICA Goals Summary • Forum and support for researchers/projects on the "Future Internet" • Support of experimental activities to validate theoretical concepts, scenarios, architectures, and control & management solutions; users have full control of their virtual slice • Provide a European-scale, network- and system-agnostic e-infrastructure, deployed in phases, including its operation, maintenance and on-demand configuration • Validate and gather experimental information for the next generation of research networking, also through basic tool validation • Dissemination and fostering of cooperation between NRENs and the user community • Contribution to standards in the form of requirements and experience
FEDERICA Goals – Out of Scope • Extended research, e.g. advanced optical technology developments • Development and support of Grid applications • Offering computing power • Offering transit capacity
FEDERICA substrate
FEDERICA substrate • The substrate is configured as a single domain • Makes it easier to interoperate with remote networks and users • Own IP space and AS number • Public AS number: 47630 • Public IPv4 prefix: 194.132.52.0/23 • IPv6 block: 2001:760:3801::/48 • Currently full Internet peering through 4 NRENs • GARR, PSNC, CESNET and DFN (see the sketch below) • fp7-federica.eu registered • access granted only to users
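As a concrete illustration of the single-domain peering described above, here is a minimal Junos sketch of announcing the substrate's IPv4 prefix to one upstream NREN. The AS number and prefix are the ones from this slide; the peer address (192.0.2.1), peer AS (64496) and policy name are illustrative placeholders, not actual FEDERICA configuration.

routing-options {
    autonomous-system 47630;
}
policy-options {
    policy-statement export-federica {
        term substrate-v4 {
            from {
                /* the substrate's public IPv4 block */
                route-filter 194.132.52.0/23 exact;
            }
            then accept;
        }
        term reject-rest {
            /* announce nothing else upstream */
            then reject;
        }
    }
}
protocols {
    bgp {
        group nren-upstream {
            type external;
            export export-federica;
            /* placeholder peer AS and address (documentation ranges) */
            peer-as 64496;
            neighbor 192.0.2.1;
        }
    }
}

The IPv6 block 2001:760:3801::/48 would be exported the same way under family inet6.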
Network Topology version 8.4
[Topology figure: core nodes in DE, PL, IT and CZ; non-core PoPs at NORDUnet/SUNET/KTH, IE, CH, HU, PT, ES (i2CAT) and GR; link types are 1 GbE VLAN or L2 MPLS, 1 GbE GN2+, and 1 GbE tbd]
Network Topology • FEDERICA in GÉANT infrastructure
Typical Core PoPs Infrastructure • Core PoP architecture: • 2x virtualization servers • 1x additional server • Juniper MX 480 • Connections to the GÉANT PoP • BGP peering enabled with the local NREN infrastructure • Optional non-GÉANT connections through local infrastructure
Juniper Core switch • Core PoPs are equipped with a Juniper MX 480 in the following configuration: • 6 FPC slots, 40 Gbps throughput each • JUNOS OS • DPC combines packet forwarding and Ethernet interfaces on a single board • Switch Control Board (SCB) – allows remote management of box hardware (power on/off cards; controls clocking, system reset, rebooting and booting; monitors and controls system functions including fan speed, board power status, PDM status and control) • 4x AC power supplies • L2/L3 support • MPLS support • Logical Router capabilities (see the sketch below)
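The Logical Router capability is what lets one physical MX 480 host several isolated routing instances, one per slice. A hypothetical Junos sketch of carving out a single per-slice logical system follows; the slice name, interface, VLAN ID and address are invented for illustration and do not come from the FEDERICA configuration.

interfaces {
    ge-1/0/5 {
        /* physical-port flags stay in the main configuration */
        vlan-tagging;
    }
}
logical-systems {
    /* "slice-42" is an invented example name */
    slice-42 {
        interfaces {
            ge-1/0/5 {
                unit 100 {
                    vlan-id 100;
                    family inet {
                        /* example per-slice address */
                        address 10.42.0.1/30;
                    }
                }
            }
        }
        protocols {
            ospf {
                area 0.0.0.0 {
                    interface ge-1/0/5.100;
                }
            }
        }
    }
}

Each logical system gets its own routing table and protocol state, so one slice's routing experiment cannot disturb another's.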
Juniper Core switch • Each Juniper router has an FPC card with: • 40x 1GE SFP interfaces • 4 Packet Forwarding Engines (10 Gbps capacity each) • IPv4/IPv6 support • L2/L3 support • IEEE 802.3ad link aggregation support (see the sketch below) • Firewall filters • BGP, OSPF support • MPLS support • Packet mirroring • IEEE 802.1Q VLAN support • VPLS and VPN support • DPCE-R-40GE-SFP in CESNET and GARR • DPCE-R-Q-40GE-SFP (with enhanced queuing) in PSNC and DFN
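To make the 802.3ad item concrete, here is a hypothetical Junos sketch that bundles two of the card's 1GE ports into a single LACP aggregate; the port numbers and address are illustrative assumptions, not FEDERICA configuration.

chassis {
    aggregated-devices {
        ethernet {
            /* number of ae interfaces to create */
            device-count 1;
        }
    }
}
interfaces {
    ge-1/0/0 {
        gigether-options {
            802.3ad ae0;
        }
    }
    ge-1/0/1 {
        gigether-options {
            802.3ad ae0;
        }
    }
    ae0 {
        aggregated-ether-options {
            lacp {
                active;
            }
        }
        unit 0 {
            family inet {
                /* example link address */
                address 10.0.0.1/30;
            }
        }
    }
}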
Non-core PoPs Infrastructure • Non-core PoPs are less restricted regarding neighbor connectivity • Only 1 server is obligatory at non-core PoPs • The router is replaced by a less powerful switch
Non-core PoPs switches • Non-core PoPs will be equipped with Juniper EX3200 switches • 24x 10/100/1000Base-T ports • + 4x SFP • Due to the large number of sites connecting to RedIRIS, there are not enough SFP ports in the EX3200 chassis, so RedIRIS will instead be equipped with two EX4200 switches stacked together (see the sketch below)
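Stacking two EX4200s means joining them into a single Virtual Chassis, which the EX3200 does not support. A hypothetical Junos sketch of a preprovisioned two-member stack follows; the serial numbers are placeholders, not RedIRIS's actual hardware.

virtual-chassis {
    preprovisioned;
    member 0 {
        role routing-engine;
        /* placeholder serial number */
        serial-number XX0000000001;
    }
    member 1 {
        role routing-engine;
        /* placeholder serial number */
        serial-number XX0000000002;
    }
}

Both members take the routing-engine role so either can act as master; the stack is then managed as one switch with a single configuration.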
V-nodes equipment • Server configuration: • 2x AMD Opteron 1.9 GHz quad-core • Up to 64 GB RAM in 8 DIMM slots per processor • 16–32 GB RAM installed • 3x 10/100/1000Base-T interfaces • 1x 10/100/1000Base-T eLOM interface • Serial DB-9 port • 2x 500 GB SATA II hard disks • DVD-ROM • RAID controller • 1U form factor • 2x PCI-E dual-port 10/100/1000Base-T interfaces (7 + 1 eLOM 10/100/1000Base-T interfaces in total)
Access to VMware and virtual machines • Users access their slices via an SSH console • For each slice a Virtual Slice Management Node is created, which acts as a proxy between the FEDERICA infrastructure and the Internet
NOC Operations • Contact with users primarily over email • federica-noc@fp7-federica.eu • Adopted RT (Request Tracker) to keep track of cases • Open-source solution • Central repository for logs from equipment • Dispatcher duty rotates among the partners in SA2.1 • KTH and FCCN are the main contributors • The substrate is manually configured • Terminal sessions to routers • VMware Infrastructure Client for managing v-nodes • If the number of nodes grows, we might move to VMware vCenter • Evaluating tools to aid in slice creation • Coordination point for DANTE and NREN NOCs
Monitoring • Example of monitoring information for a single slice
Eudemo slice – test path host4 to host9
[Figure: slice topology spanning PSNC (poz.pl), CESNET (pra.cz), KTH, DFN (erl.de) and GARR (mil.it), with numbered hosts 1–15 along the path and link identifiers 1301–1304]