Implementation of Virtualization Clusters based on Xen, ROCKS, and ThaiGrid Roll By: Supakit Prueksaaroon 1, Wittaya Konghaped 2, Vara Varavithya 2, and Sornthep Vannarat 1 — 1 Large-scale Simulation Research Laboratory, National Electronics and Computer Technology Center; 2 Department of Electrical Engineering, Faculty of Engineering, King Mongkut’s Institute of Technology North Bangkok. Presented by Wittaya Konghaped, Department of Electrical Engineering, Faculty of Engineering, King Mongkut’s Institute of Technology North Bangkok
Agenda • Introduction to Virtualization technology • ROCKS Cluster & ThaiGrid Roll • Virtual Cluster • Performance Measurement of Virtual Cluster • Conclusion
Virtualization Concepts • A further virtualization layer sits between the OS and the hardware: the Virtual Machine Monitor (VMM) • It allows multiple concurrent OS instances • Modern PCs are powerful enough to create the illusion of several virtual machines, each with its own OS, running simultaneously [Diagram: several Apps/OS stacks running on a VMM, which runs on the hardware]
Introduction to VM technology • Virtual machine technology comes in three flavors: • Emulation, full system simulation, or "full virtualization with dynamic recompilation": the virtual machine simulates the complete hardware, allowing an unmodified OS for a completely different CPU to be run. • Native virtualization and "full virtualization": the virtual machine simulates only enough hardware to allow an unmodified OS to be run in isolation, but the guest OS must be designed for the same type of CPU. The term native virtualization is also sometimes used to indicate that hardware assistance through Virtualization Technology is used. • Paravirtualization: the virtual machine does not simulate hardware but instead offers a special API that requires OS modifications. From: http://en.wikipedia.org/wiki/Virtual_machine
Full virtualization [Diagram: Applications / OS / VMM / Hardware; the hardware interface is fully abstracted] • There is a complete functional ordering between layers • Full abstraction of the machine (from BIOS to disks, DMA controllers, video, ...) • Virtualization is fully transparent: the guest OS is unchanged • Much more complex to design and implement
Paravirtualization [Diagram: Applications / OS / VMM / Hardware; the VMM interface is now much more critical] • No strictly hierarchical ordering between layers • The virtual interface is similar to the HW interface, but neither complete nor identical • Guest OSs must be modified to become VM-aware • Potential performance gain, thanks to specialization of kernel code • Easier to design • But interfaces must be thought through carefully
Benefits of Virtualization for HPC • Better utilization of hardware resources • Independence from custom libraries • Easy creation/destruction of guest OSs • Key capabilities for: • Highly available computing resources • Migration • The perfect sandbox
ROCKS • A toolkit for building cluster computers. • Front-end node • Compute nodes • ROCKS & Roll architecture • Features • Cluster tools for managing the cluster system • Support for multiple HPC and Grid software packages • Bio Roll, Grid Roll, Intel Roll, Viz Roll, and so on • Easy rebuilding and reinstallation of compute nodes
ThaiGrid Roll • A complete package of ROCKS plus integrated Grid packages supporting the Grid environment in Thailand. • ThaiGrid monitoring tools • ThaiGrid CA • ThaiGrid scripts • Developed by the "Thai National Grid Project"
Problems of Native Clusters • Application porting • Library incompatibility • OS & software version incompatibility • Security • Configuration complexity • High administrative cost • Heterogeneous operating system support
In this work • An initial study on implementing a "Virtual Cluster" • We implement VMs based on ThaiGrid Roll Objectives: • To compare overall performance and obtain measurements for large-scale Virtual Cluster simulation. • To let a ThaiGrid setup make use of spare CPU time for other tasks, such as information servers and DB servers, while co-existing well with Grid services
Goal • Share Grid resources while running concurrently with production services (Web, Mail, DB server, etc.)
Definitions • Definitions used in this work • Domain-0 (Dom-0) is the base OS on which the virtual machines run • Domain-U (Dom-U) is a guest OS running simultaneously alongside Domain-0 [Diagram: several Domain-U Apps/OS stacks running on the VMM hosted by Domain-0 on the hardware]
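To make the Dom-0/Dom-U split concrete: each Dom-U guest is described by a small config file that Dom-0 uses to boot it. The fragment below is a hypothetical minimal example in the xm config syntax; the kernel name follows the image mentioned later in these slides, while the memory size, guest name, bridge, and image path are assumptions, not values from the talk.

```
# Hypothetical minimal Xen Dom-U config, booted from Dom-0 with `xm create`
kernel = "/boot/vmlinuz-2.6.16-xen3_86.1_rhel4.1"     # xenU guest kernel (assumed path)
memory = 1024                                          # MB of RAM per guest (assumed)
name   = "compute-0-0"                                 # guest name (assumed)
vif    = ["bridge=xenbr0"]                             # bridged virtual NIC (assumed)
disk   = ["file:/opt/images/compute-0-0.img,sda1,w"]   # loopback disk image (assumed)
root   = "/dev/sda1 ro"
```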
Implementation concept • Build the ThaiGrid Roll and ROCKS 4.2.1 • Build the prototype front-end node & compute node • Install the Dom-U image (kernel-xenU 2.6.16-xen3_86.1_rhel4.1) • Copy data into the image file • Set up the Dom-0 machine using Fedora Core 5 • Distribute the images to the front-end and all compute nodes [Diagram: front-end node connected to compute nodes]
Image maker: Create image → Mount image → Edit all config files → Unmount image → Update scheduler → Boot up • Create a copy of the prototype compute-node image & configuration files • "mount -o loop <IMAGE> <Mount-point>" • Edit "/etc/hosts", "/etc/sysconfig/network", "/etc/sysconfig/network-scripts/ifcfg-eth0", and so on • "umount <Mount-point>" • Update SGE or PBS • "xm create -c config"
Experiment Details • 3 nodes of the Satellite Cluster • IBM x336, dual Xeon 2.8 GHz processors • 4 GB RAM • 73 GB SCSI hard disk • Intel e1000 network interface • 10/100 Mbps interconnect • HPL benchmark
Conclusion • Results show performance close to native Linux for runs within a single machine • A larger performance gap for runs across parallel machines • Virtual Clusters should target High-Throughput Computing • Xen's I/O performance should be improved • The results of this work serve as an initial basis for implementing a Virtual Cluster. • Virtualization technology shows high potential for HPC
Future Work • Simulate large-scale virtualization based on this work and project its performance • Move from job scheduling to sandbox scheduling • Security issues • Compatibility issues • Investigate high-throughput application performance on virtualization technology • Formulate virtualization efficiency for computation-intensive scheduling
Acknowledgement • We thank ThaiGrid for providing the cluster machines.
HPL CPU context-switch overhead [Chart: HPL results; parameters N=9000, NB=64]
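As a sanity check on the slide's problem size: a double-precision HPL run at N=9000 needs roughly 8·N² bytes for the matrix, which fits comfortably in the nodes' 4 GB of RAM. A small sketch (the 8·N² estimate is the standard HPL sizing rule; the 4 GB figure comes from the experiment slide):

```python
# Back-of-the-envelope memory footprint for the HPL run on this slide.
N = 9000                      # HPL problem size (matrix order)
bytes_per_double = 8

matrix_bytes = bytes_per_double * N * N       # 648,000,000 bytes
matrix_gib = matrix_bytes / 2**30
print(f"HPL matrix: {matrix_bytes} bytes (~{matrix_gib:.2f} GiB of the 4 GB per node)")
```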
Percentage of CPU context-switch overhead • HPL: Overhead ≈ 0.66 × (number of Domain-U) • SPECCPU2000: Overhead ≈ 0.77 × (number of Domain-U)
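The two fitted lines above can be restated as a tiny helper. The slopes 0.66 (HPL) and 0.77 (SPECCPU2000) come from the slide; the function name is ours.

```python
def context_switch_overhead_pct(num_domU, slope):
    """Linear model from the slide: overhead (%) ≈ slope × number of Dom-U guests."""
    return slope * num_domU

HPL_SLOPE = 0.66       # fitted on HPL runs
SPEC_SLOPE = 0.77      # fitted on SPECCPU2000 runs

# e.g. with 4 concurrent Dom-U guests:
print(context_switch_overhead_pct(4, HPL_SLOPE))    # ~2.64 % for HPL
print(context_switch_overhead_pct(4, SPEC_SLOPE))   # ~3.08 % for SPECCPU2000
```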