
Virtual Private Clusters: Virtual Appliances and Networks in the Cloud

Explore the power of virtual appliances, clusters, and networks in the cloud for educational and enterprise uses. Learn about virtualized infrastructure, private networks, and innovative technologies like Amazon VPC and OpenFlow. Discover how to create high-performance virtual private clusters across multiple domains.


Presentation Transcript


  1. Virtual Private Clusters: Virtual Appliances and Networks in the Cloud Renato Figueiredo ACIS Lab - University of Florida FutureGrid Team

  2. Outline • Virtual appliances • Virtual networks • Virtual clusters • Grid appliance and FutureGrid • Educational uses

  3. What is an appliance? • Physical appliances • Webster – “an instrument or device designed for a particular use or function”

  4. What is an appliance? • Hardware/software appliances • TV receiver + computer + hard disk + Linux + user interface • Computer + network interfaces + FreeBSD + user interface

  5. What is a virtual appliance? • A virtual appliance packages software and configuration needed for a particular purpose into a virtual machine “image” • The virtual appliance has no hardware – just software and configuration • The image is a (big) file • It can be instantiated on hardware

  6. Virtual appliance example • Linux + Apache + MySQL + PHP • [Diagram: a LAMP image is copied and instantiated on a virtualization layer, yielding a web server, then another web server, and so on ("Repeat…")]
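
A minimal sketch of this copy-and-instantiate pattern, assuming QEMU as the virtualization layer and a hypothetical local lamp.img appliance image:

```python
# Copy-and-instantiate, sketched with QEMU; image/file names are hypothetical.
import shutil
import subprocess

def instantiate(base_image: str, name: str) -> None:
    disk = f"{name}.img"
    shutil.copyfile(base_image, disk)   # copy the appliance image
    subprocess.Popen([                  # instantiate it as a VM
        "qemu-system-x86_64",
        "-m", "1024",                   # guest memory in MB
        "-drive", f"file={disk},format=raw",
        "-nographic",
    ])

instantiate("lamp.img", "webserver1")   # a web server
instantiate("lamp.img", "webserver2")   # another web server; repeat...
```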

  7. Clustered applications • Replace LAMP with the middleware of your choice, e.g. MPI, Hadoop, Condor • [Diagram: an MPI image is copied and instantiated on a virtualization layer, yielding an MPI worker, another MPI worker, and so on]

  8. What about the network? • Multiple Web servers might be completely independent from each other • MPI nodes are not • Need to communicate and coordinate with each other • Each worker needs an IP address, uses TCP/IP sockets • Cluster middleware stacks assume a collection of machines, typically on a LAN (Local Area Network)

  9. Virtualized machines and networks • [Diagram: virtual machines V1, V2, V3 run on a virtual infrastructure (VMM + VN) that is mapped onto a physical infrastructure spanning domains A, B, and C connected by a WAN]

  10. Why virtual networks? • Cloud-bursting: • Private enterprise LAN/cluster • Run additional worker VMs on a cloud provider • Extending the LAN to all VMs – seamless scheduling, data transfers • Federated “Inter-cloud” environments: • Multiple private LANs/clusters across various institutions inter-connected • Virtual machines can be deployed on different sites and form a distributed virtual private cluster

  11. Virtual cluster appliances • Virtual appliance + virtual network • [Diagram: an MPI + virtual network image is copied and instantiated as virtual machines, yielding an MPI worker, another MPI worker, and so on, all joined by the virtual network]

  12. Where virtualization applies • [Diagram: software on (virtual) machines at the endpoints, connected through network devices in the network fabric; virtualization applies both at the endpoints (virtualized endpoints) and in the fabric (virtualized fabric)]

  13. Example - VLAN • [Diagram: software on (virtual) machines connected across a virtual LAN by network devices] • Switching rule: RECV on portA; match VLAN tag; SEND on portB • The network devices are under the operator's control
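
A sketch of that switching rule in Python, parsing the 802.1Q tag from a raw Ethernet frame; the forwarding-table values are illustrative:

```python
# Parse the 802.1Q VLAN tag and apply a "match tag, send port" rule.
# FORWARD's contents are illustrative.
import struct
from typing import Optional

TPID_8021Q = 0x8100   # EtherType value marking a VLAN-tagged frame

def vlan_id(frame: bytes) -> Optional[int]:
    (ethertype,) = struct.unpack_from("!H", frame, 12)  # after dst+src MACs
    if ethertype != TPID_8021Q:
        return None                                     # untagged frame
    (tci,) = struct.unpack_from("!H", frame, 14)        # Tag Control Information
    return tci & 0x0FFF                                 # low 12 bits: VLAN ID

FORWARD = {10: "portB", 20: "portC"}  # VLAN ID -> output port

def switch(frame: bytes, in_port: str) -> Optional[str]:
    return FORWARD.get(vlan_id(frame))  # RECV in_port; match tag; SEND port
```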

  14. Inter-cloud Virtual Networks • Challenges - shared environment • Lack of control of networking resources in Internet infrastructure • Can’t program routers, switches • Public networks – privacy is important • Often, lack of privileged access to underlying resources • May be “root” within a VM, but lacking hypervisor privileges • Approach: Virtual Private Networks • End-to-end; tunneling over shared infrastructure

  15. Example - VPNs • [Diagram: software on (virtual) machines tunneling across the Internet through network devices that are not under the user's control] • Tunneling on SEND: encrypt the message, encapsulate it, look up the remote endpoint, then SEND over the shared network
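
A minimal sketch of this SEND path, using the cryptography library's Fernet for encryption and a hypothetical endpoint table; a real VPN negotiates keys and uses proper framing:

```python
# SEND path of a toy VPN tunnel: encrypt, encapsulate, look up endpoint, send.
# The key handling, endpoint table, and framing are simplified assumptions.
import socket
from cryptography.fernet import Fernet  # pip install cryptography

cipher = Fernet(Fernet.generate_key())  # a real VPN negotiates/pre-shares keys

# virtual destination IP -> physical (address, port) of the remote tunnel end
ENDPOINTS = {"10.10.1.2": ("203.0.113.7", 5000)}

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def vpn_send(virtual_dst: str, payload: bytes) -> None:
    encrypted = cipher.encrypt(payload)                # encrypt msg
    packet = virtual_dst.encode() + b"|" + encrypted   # encapsulate msg
    sock.sendto(packet, ENDPOINTS[virtual_dst])        # lookup endpoint; SEND

vpn_send("10.10.1.2", b"hello over the tunnel")
```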

  16. Virtualization: core primitives • Intercept events of interest: • VM: trap on “privileged” instructions • VN: intercept message sent or received • Emulate behavior of the event in the context of the virtualized resource: • VM: emulate the behavior of the intercepted instruction in the context of the virtual machine issuing it • VN: emulate the behavior of SEND/RECV in the context of the virtual network it is bound to
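
A sketch of the VN intercept primitive on Linux: attach to a tap device so that every frame sent on a virtual NIC can be read, and then emulated, by a user-level process. The device name and handler are illustrative, and this needs root:

```python
# Attach to a Linux tap device and read each Ethernet frame sent on the
# virtual NIC; the handler is a placeholder for the emulation step
# (e.g., routing the frame over an overlay).
import fcntl
import os
import struct

TUNSETIFF = 0x400454CA   # ioctl: attach this fd to a tun/tap interface
IFF_TAP = 0x0002         # layer-2 mode (whole Ethernet frames)
IFF_NO_PI = 0x1000       # no extra packet-information header

def handle_frame(frame: bytes) -> None:
    print(f"intercepted {len(frame)}-byte frame")  # emulate SEND here

tap = os.open("/dev/net/tun", os.O_RDWR)
fcntl.ioctl(tap, TUNSETIFF, struct.pack("16sH", b"tap0", IFF_TAP | IFF_NO_PI))

while True:
    handle_frame(os.read(tap, 2048))  # each read intercepts one frame
```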

  17. Layers • Layer-2 virtualization • VN supports all protocols layered on data link • Not only IP but also other protocols • Simpler integration • E.g. ARP crosses layers 2 and 3 • Downside: broadcast traffic if VN spans beyond LAN • Layer-3 virtualization • VN supports all protocols layered on IP • TCP, UDP, DHCP, … • Sufficient to handle many environments/applications • Downside: tied to IP • Innovative non-IP network protocols will not work

  18. Technologies and Techniques • Amazon VPC: • Virtual private network extending from enterprise to resources at a major IaaS commercial cloud • OpenFlow: • Open switching specification allowing programmable network devices through a forwarding instruction set • OpenStack Quantum: • Virtual private networking within a private cloud offered by a major open-source IaaS stack • ViNe: • Inter-cloud, high-performance user-level managed virtual network • IP-over-P2P (IPOP) and GroupVPN • Peer-to-peer, inter-cloud, self-organizing virtual network

  19. ViNe • Led by Mauricio Tsugawa, Jose Fortes at UF • Focus: • Virtual network architecture that allows VNs to be deployed across multiple administrative domains and offer full connectivity among hosts independently of connectivity limitations • Internet organization: • ViNe routers (VRs) are used by nodes as gateways to overlays, as Internet routers are used as gateways to route Internet messages • VRs are dynamically reconfigurable • Manipulation of operating parameters of VRs enables the management of VNs Slide provided by M. Tsugawa

  20. ViNe Architecture • Dedicated resources in each broadcast domain (LAN) for VN processing – ViNe Routers (VRs) • No VN software needed on nodes (platform independence) • VNs can be managed by controlling/reconfiguring VRs • VRs transparently address connectivity problems for nodes • VR = computer running ViNe software • Easy deployment • Proven mechanisms can be incorporated into physical routers and firewalls • In OpenFlow-enabled networks, flows can be directed to VRs for L3 processing • Overlay routing infrastructure (VRs) is decoupled from the management infrastructure Slide provided by M. Tsugawa

  21. Connectivity: ViNe approach • VRs with connectivity limitations (limited-VRs) initiate a connection (TCP or UDP) with VRs without limitations (queue-VRs) • Messages destined to limited-VRs are sent to the corresponding queue-VRs • A long-lived connection is possible between a limited-VR and its queue-VR • Generally applicable (no dependency on network equipment, firewall/NAT type, etc.) • Network virtualization processing is performed only by VRs • Firewall traversal is only needed for inter-VR communication • [Diagram: ViNe firewall traversal: the limited-VR opens a connection to the queue-VR across the Internet; messages are sent to the queue-VR, and the limited-VR retrieves them] Slide provided by M. Tsugawa
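
A simplified sketch of the traversal pattern on the limited-VR side; the REGISTER message and 4-byte length framing are invented for illustration, not ViNe's actual protocol:

```python
# Limited-VR side of the traversal: open an outbound connection (which
# NATs/firewalls allow), register, and keep reading queued messages.
import socket
import struct

QUEUE_VR = ("198.51.100.9", 7000)   # publicly reachable queue-VR (example addr)

def deliver_to_local_network(msg: bytes) -> None:
    print(f"forwarding {len(msg)} bytes into the local virtual network")

def limited_vr_loop() -> None:
    s = socket.create_connection(QUEUE_VR)  # outbound: traverses NAT/firewall
    s.sendall(b"REGISTER limited-vr-1\n")   # long-lived connection stays open
    f = s.makefile("rb")
    while True:
        header = f.read(4)                  # length prefix of next message
        if len(header) < 4:
            break                           # queue-VR closed the connection
        (length,) = struct.unpack("!I", header)
        deliver_to_local_network(f.read(length))  # retrieve queued message
```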

  22. ViNe routing performance • L3 processing implemented in Java • Mechanisms to avoid IP fragmentation • Use of data structures with low access times in the routing module • VR routing capacity over 880 Mbps (using modern CPU cores) – Gigabit line rate (120 Mbps total encapsulation overhead) • Sufficient in many cases, since WAN performance is often below 1 Gbps • Requires CPUs launched after 2006 (e.g., 2 GHz Intel Core2 microarchitecture) Slide provided by M. Tsugawa

  23. ViNe Management Architecture • VR operating parameters configurable at run-time • Overlay routing tables, buffer size, encryption on/off • Autonomic approaches possible • ViNe Central Server • Oversees global VN management • Maintains ViNe-related information • Authentication/authorization based on Public Key Infrastructure • Remotely issues commands to reconfigure VR operation • [Diagram: VRs send requests to the ViNe Central Server, which responds with configuration actions] Slide provided by M. Tsugawa
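
A toy sketch of run-time VR reconfiguration; the JSON command format is invented for illustration, and the PKI-based authentication the slide describes is elided:

```python
# A VR applying run-time reconfiguration commands from the central server.
import json

class VirtualRouter:
    def __init__(self) -> None:
        self.routes = {}        # overlay routing table: prefix -> next VR
        self.encrypt = False    # encryption on/off

    def reconfigure(self, command: str) -> None:
        cmd = json.loads(command)                      # hypothetical format
        if cmd["op"] == "set_route":
            self.routes[cmd["prefix"]] = cmd["next_vr"]
        elif cmd["op"] == "set_encryption":
            self.encrypt = bool(cmd["enabled"])

vr = VirtualRouter()
vr.reconfigure('{"op": "set_route", "prefix": "10.128.0.0/18", "next_vr": "vr2"}')
vr.reconfigure('{"op": "set_encryption", "enabled": true}')
```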

  24. Example: Inter-cloud BLAST • 3 FutureGrid sites (US) • UCSD (San Diego) • UF (Florida) • UC (Chicago) • 3 Grid’5000 sites (France) • Lille • Rennes • Sophia • Grid’5000 is fully isolated from the Internet • One machine white-listed to access FutureGrid • ViNe queue VR (Virtual Router) for other sites Slide provided by M. Tsugawa

  25. CloudBLAST Experiment • ViNe connected a virtual cluster across 3 FG and 3 Grid’5000 sites • 750 VMs, 1500 cores • Executed BLAST on Hadoop (CloudBLAST) with 870X speedup Slide provided by M. Tsugawa

  26. Unmodified applications • [Diagram: an application calls Connect(10.10.1.2, 80) on an isolated, private virtual address space (10.10.1.1 → 10.10.1.2); a virtual NIC (VNIC) captures and tunnels the traffic, and IP-over-P2P / GroupVPN virtual routers carry it over a wide-area overlay network with scalable, resilient, self-configuring routing and an object store]

  27. Virtual network: GroupVPN • Key techniques: • IP-over-P2P (IPOP) tunneling • GroupVPN Web 2.0/social network interface • Self-configuring • Avoid administrative overhead of typical VPNs • NAT and firewall traversal; DHCP virtual addresses • Scalable and robust • P2P routing deals with node joins and leaves • Networks are isolated • One or more private IP address spaces • Decentralized DHCP serves addresses for each space

  28. Under the hood: overlay architecture • Bi-directional structured overlay (Brunet library) • Self-configured NAT traversal • Self-optimized links • Direct, relay • Self-healing structure • [Diagram: overlay routers forward along a multi-hop path until a direct shortcut path is formed between the endpoints]

  29. GroupVPN Web interface • Users can request to join a group, or create their own VPN group • E.g. an instructor creates a GroupVPN for a class • Determines who is allowed to connect to the virtual network • Owner can authorize users to join, remove users, and authorize others to administer • Actions typical of a certificate authority happen in the back-end without the user having to deal with security operations • E.g. sign/revoke a VPN X.509 certificate

  30. Managing virtual IP spaces • One P2P overlay supports multiple IPOP namespaces • IP routing within a namespace • Each IPOP namespace: a unique string • Distributed Hash Table (DHT) stores mapping • Key=namespace • Value=DHCP configuration (IP range, lease, ...) • IPOP node configured with a namespace • Query namespace for DHCP configuration • Guess an IP address at random within range • Attempt to store in DHT • Key=namespace+IP • Value=IPOPid (160-bit) • IP->P2P Address resolution: • Given namespace+IP, lookup IPOPid
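
A minimal sketch of this scheme, with a plain in-memory dict standing in for the DHT; the helper names and create-if-absent semantics follow the slide, not the real IPOP API:

```python
# Namespace-based virtual DHCP and IP -> P2P address resolution over a DHT.
import ipaddress
import random

dht = {}  # key -> value; the real store is spread across the P2P overlay

def dht_create(key: str, value) -> None:
    """Store only if the key is unused (an atomic create in the real DHT)."""
    if key in dht:
        raise KeyError(key)
    dht[key] = value

dht_create("N1", {"range": "10.128.0.0/18"})  # namespace -> DHCP configuration

def acquire_address(namespace: str, ipop_id: str) -> str:
    net = ipaddress.ip_network(dht[namespace]["range"])  # query DHCP config
    while True:
        ip = str(net[random.randrange(1, net.num_addresses - 1)])  # random guess
        try:
            dht_create(f"{namespace}:{ip}", ipop_id)  # key=namespace+IP
            return ip                                 # claim succeeded
        except KeyError:
            pass                                      # collision: guess again

def resolve(namespace: str, ip: str) -> str:
    """IP -> P2P address resolution: look up the node's IPOP id."""
    return dht[f"{namespace}:{ip}"]

addr = acquire_address("N1", "x1")
assert resolve("N1", addr) == "x1"
```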

  31. IPOP Namespaces • [Diagram: two namespaces, N1 and N2, each 10.128.0.0/255.192.0.0, share one overlay of nodes x1…x8; DHTCreate(N2, 10.128.0.0/255.192.0.0) stores a namespace's DHCP configuration, DHTCreate(N2:A2, x2) claims an address for a node, and DHTLookup(N1:B1) resolves a virtual IP to a BrunetID, cached in an “ARP cache” – e.g. N1:10.129.6.71 → BrunetID x1, while N2:10.129.6.71 → BrunetID x2]

  32. Optimization: Adaptive shortcuts • At each node: • Count IPOP packets to other nodes • When the number of packets within an interval exceeds a threshold: • Initiate connection setup; create edge • Limit on the number of shortcuts • Overhead involved in connection maintenance • Drop connections no longer in use
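
A sketch of this policy in Python; the threshold, interval, and cap values are illustrative, create_direct_edge stands in for Brunet's connection setup, and dropping idle edges is omitted:

```python
# Per-node shortcut policy: count packets per destination within an
# interval and create a direct edge past a threshold, up to a cap.
import time
from collections import Counter

THRESHOLD = 50        # packets per interval before a shortcut is created
INTERVAL = 10.0       # seconds per measurement interval
MAX_SHORTCUTS = 8     # cap: every edge costs maintenance traffic

def create_direct_edge(dst: str) -> None:
    print(f"initiating connection setup to {dst}")

class ShortcutManager:
    def __init__(self) -> None:
        self.counts = Counter()
        self.window_start = time.monotonic()
        self.shortcuts = set()

    def on_packet(self, dst: str) -> None:
        if time.monotonic() - self.window_start > INTERVAL:
            self.counts.clear()                    # new interval: reset counts
            self.window_start = time.monotonic()
        self.counts[dst] += 1
        if (self.counts[dst] > THRESHOLD
                and dst not in self.shortcuts
                and len(self.shortcuts) < MAX_SHORTCUTS):
            self.shortcuts.add(dst)
            create_direct_edge(dst)                # heavy flow: add a shortcut
```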

  33. Evaluation - cloud

  34. Grid appliance - virtual clusters • Same image, per-group VPNs • [Diagram: a Hadoop + virtual network image is copied and instantiated as virtual machines, together with GroupVPN credentials from the Web site, yielding a Hadoop worker (virtual IP 10.10.1.1 via DHCP), another Hadoop worker (10.10.1.2), and so on, all joined by the GroupVPN]

  35. Grid appliance clusters • Virtual appliances: • Encapsulate a software environment in an image • Virtual disk file(s) and virtual hardware configuration • The Grid appliance: • Encapsulates cluster software environments • Current examples: Condor, MPI, Hadoop • Homogeneous images at each node • A virtual network connecting the nodes forms a cluster • Deploy within or across domains

  36. Grid appliance internals • Host O/S • Linux • Grid/cloud stack • MPI, Hadoop, Condor, … • Glue logic for zero-configuration • Automatic DHCP address assignment • Multicast DNS (Bonjour, Avahi) resource discovery • Shared data store - Distributed Hash Table • Interaction with VM/cloud
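
A sketch of the multicast-DNS discovery step using the python-zeroconf library; the service type "_gridappliance._tcp.local." is a made-up example, not the appliance's actual service name:

```python
# Multicast-DNS (Bonjour/Avahi-style) resource discovery via python-zeroconf.
import socket
from zeroconf import ServiceBrowser, Zeroconf  # pip install zeroconf

class ApplianceListener:
    def add_service(self, zc: Zeroconf, type_: str, name: str) -> None:
        info = zc.get_service_info(type_, name)
        if info:
            addr = socket.inet_ntoa(info.addresses[0])
            print(f"discovered {name} at {addr}:{info.port}")

    def remove_service(self, zc: Zeroconf, type_: str, name: str) -> None:
        print(f"{name} left the network")

    def update_service(self, zc: Zeroconf, type_: str, name: str) -> None:
        pass

zc = Zeroconf()
browser = ServiceBrowser(zc, "_gridappliance._tcp.local.", ApplianceListener())
```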

  37. One appliance, multiple hosts • Allow same logical cluster environment to instantiate on a variety of platforms • Local desktop, clusters; FutureGrid; Amazon EC2; Science Clouds… • Avoid dependence on host environment • Make minimum assumptions about VM and provisioning software • Desktop: 1 image, VMware, VirtualBox, KVM • Para-virtualized VMs (e.g. Xen) and cloud stacks – need to deal with idiosyncrasies • Minimum assumptions about networking • Private, NATed Ethernet virtual network interface

  38. Configuration framework • At the end of GroupVPN initialization: • Each node of a private virtual cluster gets a DHCP address on virtual tap interface • A barebones cluster • Additional configuration required depending on middleware • Which node is the Condor negotiator? Hadoop front-end? Which nodes are in the MPI ring? • Key frameworks used: • IP multicast discovery over GroupVPN • Front-end queries for all IPs listening in GroupVPN • Distributed hash table • Advertise (put key,value), discover (get key)
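
A minimal sketch of the multicast discovery step; the group address, port, and WHO_IS_THERE query protocol are invented here, not the Grid appliance's actual wire format:

```python
# Front-end querying for all listeners on the GroupVPN via IP multicast.
import socket
import struct

GROUP, PORT = "239.255.42.1", 9999

def worker_listen() -> None:
    """Run on each worker: join the group and answer discovery queries."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind(("", PORT))
    mreq = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
    s.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    data, sender = s.recvfrom(1024)
    if data == b"WHO_IS_THERE":
        s.sendto(b"HERE", sender)          # reply from this node's virtual IP

def frontend_query() -> list:
    """Run on the front-end: collect the IP of every responding worker."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.settimeout(2.0)
    s.sendto(b"WHO_IS_THERE", (GROUP, PORT))
    nodes = []
    try:
        while True:
            _, (ip, _) = s.recvfrom(1024)
            nodes.append(ip)
    except socket.timeout:
        return nodes
```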

  39. Configuring and deploying groups • Generate virtual floppies • Through GroupVPN Web interface • Deploy appliances image(s) • FutureGrid (Nimbus/Eucalyptus), EC2 • GUI or command line tools • Use APIs to copy virtual floppy to image • Submit jobs; terminate VMs when done

  40. Demonstration • Pre-instantiated VM to save us time: • cloud-client.sh --conf alamo.conf --run --name grid-appliance-2.05.03.gz --hours 24 • Connect to VM • ssh root@VMip • Check virtual network interface • ifconfig • Ping other VMs in the virtual cluster • Submit Condor job

  41. Use case: Education and Training • Importance of experimental work in systems research • Also needs to be addressed in education • Complement to fundamental theory • FutureGrid: a testbed for experimentation and collaboration • Education and training contributions: • Lower barrier to entry – pre-configured environments, zero-configuration technologies • Community/repository of hands-on executable environments: develop once, share and reuse

  42. Educational appliances in FutureGrid • A flexible, extensible platform for hands-on, lab-oriented education on FutureGrid • Executable modules – virtual appliances • Deployable on FutureGrid resources • Deployable on other cloud platforms, as well as virtualized desktops • Community sharing – Web 2.0 portal, appliance image repositories • An aggregation hub for executable modules and documentation

  43. Support for classes on FutureGrid • Classes are set up and managed using the FutureGrid portal • Project proposal: can be a class, workshop, short course, or tutorial • Needs to be approved by the FutureGrid project to become active • Users can be added to a project • Users create accounts using the portal • Project leaders can authorize them to gain access to resources • Students can then interactively use FG resources (e.g. to start VMs)

  44. Use of FutureGrid in classes • Cloud computing/distributed systems classes • U. of Florida, U. Central Florida, U. of Puerto Rico, Univ. of Piemonte Orientale (Italy), Univ. of Mostar (Croatia) • Distributed scientific computing • Louisiana State University • Tutorials, workshops: • Big Data for Science summer school • A cloudy view on computing • SC’11 tutorial – Clouds for science • Science Cloud Summer School

  45. Thank you! • More information: • http://www.futuregrid.org • http://grid-appliance.org • This document was developed with support from the National Science Foundation (NSF) under Grant No. 0910812 to Indiana University for "FutureGrid: An Experimental, High-Performance Grid Test-bed." Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the NSF.

  46. Local appliance deployments • Two possibilities: • Share our “bootstrap” infrastructure, but run a separate GroupVPN • Simplest to set up • Deploy your own “bootstrap” infrastructure • More work to set up • Especially if across multiple LANs • Potential for faster connectivity

  47. PlanetLab bootstrap • Shared virtual network bootstrap • Runs 24/7 on 100s of machines on the public Internet • Connect machines across multiple domains, behind NATs

  48. PlanetLab bootstrap: approach • Create GroupVPN and GroupAppliance on the Grid appliance Web site • Download configuration floppy • Point users to the interface; allow users you trust into the group • Trusted users can download configuration floppies and boot up appliances

  49. Private bootstrap: General approach • Good choice for single-domain pools • Create GroupVPN and GroupAppliance on the Grid appliance Web site • Deploy a small IPOP/GroupVPN bootstrap P2P pool • Can be on a physical machine, or appliance • Detailed instructions at grid-appliance.org • The remaining steps are the same as for the shared bootstrap
