Sandor Acs acs.sandor@sztaki.mta.hu 05/07/2012
Outline
• About the project
• LPDS private IaaS
• Infrastructure and software elements
• SZTAKI solutions in OpenNebula
• Current status
The SZTAKI Cloud Project (1)
• Goals:
  • Build a private IaaS for SZTAKI that makes its e-infrastructure more modern and economical.
  • Conduct research that makes the SZTAKI cloud easier to use and more reliable.
• Duration: 2 years (started 01/04/2012)
The SZTAKI Cloud Project (2)
• Partners:
  • DSD - Department of Distributed Systems (http://dsd.sztaki.hu/)
  • ILAB - Informatics Laboratory (http://www.sztaki.hu/infolab/)
  • ITAK - Internet Technologies Applications Center (http://itak.sztaki.hu/en)
  • LPDS - Laboratory of Parallel and Distributed Systems (http://www.lpds.sztaki.hu/)
The SZTAKI Cloud Project (3)
• Research and development topics:
  • Elasticity and automatic scaling
  • Data-intensive cloud
  • Reliability and security improvements
LPDS private IaaS
• LPDS' private IaaS was one of the preludes of the SZTAKI Cloud project
• The LPDS cloud has been running since ~summer 2011
• Need for an infrastructure which is:
  • highly reliable,
  • dynamic,
  • flexible,
  • and easy to use by developers and users.
• Before the cloud era, developers requested VMs from the infrastructure team (and that takes time!)
• Some data (from 04/06/2012):
  • 74 VMs running,
  • 118 registered images,
  • services (e.g. web), developer and test VMs as well.
Infrastructure and software elements (1)
• HW:
  • Frontend Server (2X) - DELL PowerEdge R415
    • AMD Opteron 4280 Processor (2.8GHz, 8C, 8M L2/8M L3 Cache, 95W); 16GB Memory, 1333MHz; (2X) 600GB SAS 6Gbps 15k 3.5" HD Hot Plug
  • Node Server (7X) - DELL PowerEdge R815
    • (4X) AMD Opteron 6272 (2.1GHz, 16C, 16M L2/16M L3 Cache, Turbo CORE, 80W ACP); 256GB Memory, 1600MHz; (6X) 1TB Near-Line SAS 6Gbps 7.2k 2.5" HD Hot Plug
Infrastructure and software elements (2)
• Storage server (2X)
  • DELL MD3600i
    • MD3600i External 10Gb iSCSI RAID 12 Bays Array with dual controllers; (12X) 3TB Near-Line SAS 6Gbps 7.2k 3.5" HD; Redundant Power Supply 600W
  • PowerEdge R510
    • Intel Xeon E5620, (2X) 4C, 2.40GHz, 12M Cache, 5.86GT/s, 80W TDP, Turbo, HT; 24GB Memory DDR3, 1333MHz; (12X) 3TB Near-Line SAS 6Gbps 3.5" 7.2k HD (Hot Plug)
• Switch (2X)
  • PowerConnect 6248, 48 Ports, Managed Switch, 10GbE and Stacking Capable
[Architecture diagram: SZTAKI Cloud with 7 node servers (64 cores, 256GB RAM each), 2 storage servers (36TB each), 2 frontend servers (8 cores, 16GB RAM each) and 2 switches (48 ports, 4x10G each)]
SZTAKI solutions in OpenNebula (1)
• NFS-root based nodes (for IaaS scaling)
• Improved iSCSI driver (introduced in OpenNebula 3.6)
• AoE driver for OpenNebula (in progress)
SZTAKI solutions in OpenNebula (2)
• NFS-root based nodes:
  • Centralized node management
  • Every node uses exactly the same software stack
  • Based on NFS and PXE boot
  • Additional advantage: security (read-only root)
SZTAKI solutions in OpenNebula (3)
• NFS-root based nodes (2):
  • The PXE client boots up, starts the PXE boot ROM and broadcasts a DHCP request.
  • The DHCP server responds with an IP address, default gateway, "filename" and path (TFTP server address).
  • The client sends a TFTP request to the TFTP server asking to retrieve that file.
  • The TFTP server responds and sends the file to the client.
  • The client executes the file, which loads the kernel. When the kernel executes, the root filesystem specified by root-path is mounted over NFS (see the sketch below).
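A minimal illustrative sketch of the last step in this chain, assuming a PXELINUX-style setup: it generates the boot entry served over TFTP whose kernel parameters point every node at a shared, read-only NFS root. The server address, paths and file names are assumptions, not the actual SZTAKI configuration.

# Sketch: generate a PXELINUX boot entry for NFS-root nodes.
# All addresses, paths and file names below are illustrative assumptions.

TFTP_ROOT = "/srv/tftp"              # assumed TFTP export directory
NFS_SERVER = "192.168.1.1"           # assumed frontend/NFS server address
NFS_ROOT = "/srv/nfsroot/node"       # assumed shared root filesystem

PXELINUX_ENTRY = f"""\
DEFAULT nfsroot
LABEL nfsroot
  KERNEL vmlinuz
  APPEND initrd=initrd.img root=/dev/nfs nfsroot={NFS_SERVER}:{NFS_ROOT},ro ip=dhcp
"""

def write_pxe_config(path: str = f"{TFTP_ROOT}/pxelinux.cfg/default") -> None:
    """Write the default boot entry that the TFTP server hands to every node."""
    with open(path, "w") as cfg:
        cfg.write(PXELINUX_ENTRY)

if __name__ == "__main__":
    write_pxe_config()

Because the shared root is exported and mounted read-only (the ",ro" in nfsroot), every node runs exactly the same software stack, which is also the security advantage mentioned above.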
SZTAKI solutions in OpenNebula (4)
• Improved iSCSI driver in OpenNebula:
  • iSCSI: IP-based storage networking standard for linking data storage facilities
  • Improvements:
    • support for non-persistent disk images,
    • automatic saving of the tgtd configuration,
    • independence of the iSCSI TID from the image ID (a rough sketch of these operations follows below).
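The OpenNebula transfer-manager drivers themselves are shell scripts; the following is only a rough Python sketch, with assumed paths and IQN naming, of the kind of operations involved: working on a copy of the image for a non-persistent disk, exporting it under a TID chosen independently of the image ID, and dumping the running tgtd configuration so the targets survive a restart.

import subprocess

def run(cmd):
    """Run one command and fail loudly, the way a driver script would."""
    subprocess.run(cmd, check=True)

def export_nonpersistent(image_path, tid, iqn_base="iqn.2012-07.hu.sztaki.cloud"):
    """Sketch: export a per-VM copy of an image as an iSCSI target via tgtd."""
    copy_path = image_path + ".vm-copy"      # illustrative name for the working copy
    run(["cp", image_path, copy_path])       # non-persistent: the original stays untouched
    target_iqn = f"{iqn_base}:tid{tid}"      # TID allocated independently of the image ID
    run(["tgtadm", "--lld", "iscsi", "--op", "new", "--mode", "target",
         "--tid", str(tid), "-T", target_iqn])
    run(["tgtadm", "--lld", "iscsi", "--op", "new", "--mode", "logicalunit",
         "--tid", str(tid), "--lun", "1", "-b", copy_path])
    run(["tgtadm", "--lld", "iscsi", "--op", "bind", "--mode", "target",
         "--tid", str(tid), "-I", "ALL"])
    # auto-save: dump the live tgtd configuration into the persistent config file
    with open("/etc/tgt/targets.conf", "w") as conf:
        subprocess.run(["tgt-admin", "--dump"], stdout=conf, check=True)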
SZTAKI solutions in OpenNebula (5)
• ATA over Ethernet (AoE) driver for OpenNebula:
  • AoE: a network protocol designed for simple, high-performance access to SATA storage devices over Ethernet networks. It is used to build storage area networks (SANs) with low-cost, standard technologies (see the export sketch below).
  • AoE advantages over iSCSI:
    • Simplicity
    • Less overhead
    • Better performance
  • AoE disadvantages compared to iSCSI:
    • Layer 2, non-routable (suited to local storage)
    • Less widely supported (by enterprise vendors)
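For comparison, a similarly hedged sketch of the AoE side using the standard vblade/aoetools commands; the shelf/slot numbers, interface name and device-path convention here are assumptions, not the in-progress driver itself.

import subprocess

def export_aoe(image_path, shelf, slot, interface="eth0"):
    """Sketch: serve an image file over AoE with the daemonized vblade exporter."""
    subprocess.run(["vbladed", str(shelf), str(slot), interface, image_path], check=True)

def attach_aoe(shelf, slot):
    """Sketch: trigger AoE discovery on a node and return the expected device path."""
    subprocess.run(["aoe-discover"], check=True)       # provided by aoetools
    return f"/dev/etherd/e{shelf}.{slot}"              # exported devices appear here

Because AoE frames are plain Ethernet (layer 2), this only works when the nodes and the storage share a broadcast domain, which is the routability limitation listed above.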
Current status
• Q1 finished:
  • Designing the architecture
  • Ordering the infrastructure elements
  • Building and testing local clouds
• Improvements and results:
  • Full iSCSI support
  • Easy IaaS scaling with NFS-root based nodes
  • Website (cloud.sztaki.hu) + documents
• HW is arriving soon (09/07/2012)
What is next during the hands-on session?
• OpenNebula GUI
• Marketplace
• Image, template and VM management (example commands sketched below)
• Testing
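As a taste of the image/template/VM management part, a short sketch that simply wraps the standard OpenNebula command-line tools; the image template file, names and options are illustrative and may differ in the hands-on environment.

import subprocess

def one(*args):
    """Run an OpenNebula CLI command and return its textual output (sketch)."""
    return subprocess.run(args, check=True, capture_output=True, text=True).stdout

# Illustrative session: register an image, instantiate a template, list VMs.
print(one("oneimage", "create", "ttylinux.one", "--datastore", "default"))
print(one("onetemplate", "instantiate", "ttylinux", "--name", "test-vm"))
print(one("onevm", "list"))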
Questions?
• Thank you for your attention!