GEC17 Using ExoGENI
Ilya Baldin, ibaldin@renci.org
Testbed
• 14 GPO-funded racks built by IBM
  • Partnership between RENCI, Duke and IBM
• IBM x3650 M4 servers (X-series 2U)
  • 1x 146 GB 10K SAS hard drive + 1x 500 GB secondary drive
  • 48 GB RAM at 1333 MHz
  • Dual-socket 8-core CPU
  • Dual 1 Gbps adapter (management network)
  • 10G dual-port Chelsio adapter (dataplane)
• BNT 8264 10G/40G OpenFlow switch
• DS3512 6 TB sliverable storage
  • iSCSI interface for head-node image storage as well as experimenter slivering
• Cisco (UCS-B) and Dell configurations also exist
• Each rack is a small networked cloud
  • OpenStack-based with NEuca extensions
  • xCAT for bare-metal node provisioning
• http://wiki.exogeni.net
ExoGENI
• ExoGENI is a collection of off-the-shelf institutional clouds
  • With a GENI federation on top
  • xCAT – IBM product
  • OpenStack – Red Hat product
• Operators decide how much capacity to delegate to GENI and how much to retain for themselves
• Familiar industry-standard interfaces (EC2; see the sketch below)
• GENI interface
  • Mostly does what GENI experimenters expect
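Because each rack runs stock OpenStack with an EC2-compatible endpoint, the operator-retained share can be driven with ordinary EC2 tooling. A minimal sketch using the boto 2 library; the endpoint host, credentials, and image choice are illustrative assumptions, not real ExoGENI values (the port and path shown are OpenStack's historical EC2 API defaults).

```python
# Minimal sketch: talking to a rack's EC2-compatible OpenStack endpoint
# with boto 2. Host and credentials are illustrative assumptions --
# substitute whatever the rack operator issues.
from boto.ec2.connection import EC2Connection
from boto.ec2.regioninfo import RegionInfo

region = RegionInfo(name="exogeni-rack", endpoint="rack-head.example.edu")
conn = EC2Connection(
    aws_access_key_id="ACCESS_KEY",      # assumption: operator-issued key
    aws_secret_access_key="SECRET_KEY",
    is_secure=False,
    region=region,
    port=8773,                           # OpenStack's default EC2 API port
    path="/services/Cloud",
)

# List images and boot one instance, exactly as against AWS EC2.
images = conn.get_all_images()
reservation = conn.run_instances(images[0].id, instance_type="m1.small")
print(reservation.instances[0].id)
```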
ExoGENI unique features
• ExoGENI racks are separate aggregates but also act as a single aggregate
  • Transparent stitching of resources from multiple racks
• ExoGENI is designed to bridge distributed experimentation, computational sciences and Big Data
  • Already running HPC workflows linked to OSG and national supercomputers
  • Newly introduced support for storage slivering
• Strong performance isolation is one of the key goals
ExoGENI experimenter tools
• GENI tools: Flack, GENI Portal, omni (see the omni sketch below)
  • Give access to common GENI capabilities
  • Also mostly compatible with:
    • ExoGENI native stitching
    • ExoGENI automated resource binding
• ExoGENI-specific tools: Flukes
  • Accepts GENI credentials
  • Access to ExoGENI-specific features:
    • Elastic Cluster slices
    • Storage provisioning
    • Stitching to campus infrastructure
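For the GENI-tool path, the usual omni create/status/delete cycle applies. A sketch driving omni.py from Python via subprocess; the aggregate manager URL, slice name, and RSpec file name are illustrative assumptions (the ExoSM and each rack controller expose their own AM endpoints).

```python
# Sketch of the standard omni cycle against an ExoGENI aggregate
# manager. AM URL, slice name, and RSpec file are illustrative
# assumptions.
import subprocess

AM_URL = "https://geni.renci.org:11443/orca/xmlrpc"  # assumption: ExoSM-style endpoint
SLICE = "myslice"

def omni(*args):
    """Run omni.py against the chosen aggregate and return its output."""
    cmd = ["omni.py", "-a", AM_URL] + list(args)
    return subprocess.check_output(cmd)

omni("createslice", SLICE)                    # register the slice
omni("createsliver", SLICE, "request.rspec")  # submit a GENI v3 request RSpec
print(omni("sliverstatus", SLICE))            # poll until resources are ready
# ... run the experiment ...
omni("deletesliver", SLICE)                   # tear down
```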
ExoGENI activities
Experimentation
• Compute nodes
  • Up to 100 VMs in each full rack
  • A few (2) bare-metal nodes
  • BYOI (Bring Your Own Image)
• True layer-2 slice topologies can be created (see the RSpec sketch below)
  • Within individual racks
  • Between racks
  • With automatic and user-specified resource binding and slice topology embedding
  • Stitching across I2, ESnet, NLR, regional providers; dynamic wherever possible
• OpenFlow experimentation
  • Within racks
  • Between racks
  • Includes OpenFlow overlays in NLR (and I2)
  • On-ramp to campus OpenFlow network (if available)
• Experimenters are allowed and encouraged to use their own virtual appliance images
• Since Dec 2012: 2500+ slices
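To make the layer-2 topology idea concrete, here is a sketch of a GENI v3 request RSpec for a two-VM point-to-point slice, written to the request.rspec file used with omni above. The sliver type name and capacity value are illustrative assumptions; omitting the capacity properties requests a best-effort VLAN instead of a bandwidth-provisioned one.

```python
# Sketch: write a GENI v3 request RSpec describing two VMs joined by a
# layer-2 link. Sliver type and capacity are illustrative assumptions.
RSPEC = """<?xml version="1.0" encoding="UTF-8"?>
<rspec type="request" xmlns="http://www.geni.net/resources/rspec/3">
  <node client_id="vm0" exclusive="false">
    <sliver_type name="xo.small"/>
    <interface client_id="vm0:if0"/>
  </node>
  <node client_id="vm1" exclusive="false">
    <sliver_type name="xo.small"/>
    <interface client_id="vm1:if0"/>
  </node>
  <link client_id="link0">
    <interface_ref client_id="vm0:if0"/>
    <interface_ref client_id="vm1:if0"/>
    <!-- capacity in kbps; drop these for a best-effort VLAN -->
    <property source_id="vm0:if0" dest_id="vm1:if0" capacity="100000"/>
    <property source_id="vm1:if0" dest_id="vm0:if0" capacity="100000"/>
  </link>
</rspec>
"""

with open("request.rspec", "w") as f:
    f.write(RSPEC)
```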
Topology embedding and stitching
[Diagram: workflows and services drive multi-homed cloud hosts with network control; a computed embedding maps the request across a virtual colo (campus net to circuit fabric) and a virtual network exchange]
Slice half-way around the world
• ExoGENI rack in Sydney, Australia
• Multiple VLAN tags on a pinned path from Sydney to LA
• Internet2/OSCARS ORCA-provisioned dynamic circuit
  • LA, Chicago
• NSI statically pinned segment with multiple VLAN tags
  • Chicago, NY, Amsterdam
  • Planning to add a dynamic NSI interface
• ExoGENI rack in Amsterdam
• ~14,000 miles, ~120 ms delay
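As a sanity check (arithmetic added here, not from the slides): reading the quoted ~120 ms as one-way delay, fiber propagation alone accounts for most of it,

$$
\frac{14{,}000 \text{ mi} \times 1.61 \text{ km/mi}}{2\times 10^{5} \text{ km/s}} \approx 113 \text{ ms},
$$

taking the speed of light in fiber as roughly $c/1.5 \approx 2\times 10^{5}$ km/s.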
ExoGENI slice isolation
• Strong isolation is the goal
• Compute instances are KVM-based and get a dedicated number of cores
• VLANs are the basis of connectivity
  • VLANs can be best-effort or bandwidth-provisioned (within and between racks)
Scientific Workflows
• Workflow management systems
  • Pegasus, custom scripts, etc.
  • Lack tools to integrate with dynamic infrastructures
• Orchestrate the infrastructure in response to the application (see the feedback-loop sketch below)
  • Integrate data movement with workflows for optimized performance
  • Manage the application in response to the infrastructure
• Scenarios
  • Computational, with varying demands
  • Data-driven, with large static data set(s)
  • Data-driven, with a large amount of input/output data
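The "orchestrate the infrastructure in response to the application" idea reduces to a feedback loop. A schematic Python sketch, where queue_depth(), grow_slice(), and shrink_slice() are hypothetical stand-ins for Condor-queue inspection and slice-modification calls; the watermark thresholds are illustrative.

```python
# Schematic feedback loop: adapt slice size to workflow demand.
# queue_depth(), grow_slice(), and shrink_slice() are hypothetical
# placeholders; thresholds are illustrative.
import time

HIGH_WATER = 100   # queued jobs that justify adding workers
LOW_WATER = 10     # queued jobs below which capacity can be released

def queue_depth():
    """Stand-in: would parse condor_q output in a real deployment."""
    return 0

def grow_slice(nodes):
    """Stand-in: would add worker VMs to the slice via the control framework."""

def shrink_slice(nodes):
    """Stand-in: would release worker VMs back to the provider."""

def autoscale(poll_interval=60):
    while True:
        pending = queue_depth()
        if pending > HIGH_WATER:
            grow_slice(nodes=5)
        elif pending < LOW_WATER:
            shrink_slice(nodes=5)
        time.sleep(poll_interval)
```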
Workflow slices
• 462,969 Condor jobs to date using the on-ramp to engage-submit3 (OSG)
Hardware-in-the-loop slices
• Hardware-in-the-Loop facility using RTDS & PMUs (FREEDM Center, NCSU)
New GENI-WAMS Testbed
• Latency & processing delays
• Packet loss
• Network jitter
• Cyber-security: man-in-the-middle attacks
Aranya Chakrabortty, Aaron Hurz (NCSU); Yufeng Xin (RENCI/UNC)
Thank you!
• http://www.exogeni.net
• ExoBlog: http://www.exogeni.net/category/exoblog/
• http://wiki.exogeni.net