Publishing applications on the web via the EASA Portal and integrating the Sun Grid Engine. By Michael Griffiths & Deniz Savas, CiCS Dept., Sheffield University. M.Griffiths@sheffield.ac.uk, D.Savas@sheffield.ac.uk, http://www.sheffield.ac.uk/wrgrid. Sept 2007
Sheffield University – facts • Established in 1828 • 70 academic departments in 7 faculties • Number of undergraduate students: 25,500 • Number of postgraduate/research students: 5,600 • Number of international students: 3,100
‘iceberg’ – the HPC cluster at the Computer Centre • AMD Opteron based, supplied by Sun Microsystems • Processors: 320 (160 of these are designated to the Physics Dept. for the PP project) • Performance: 300 GFLOPS • Main memory: 800 GB • User filestore: 9 TB • Temporary disk space: 10 TB • Physical size: 8 racks • Power usage: 50 kW
‘iceberg’ cluster hardware components • 160 general-purpose CPUs: 80 of these are in dual-CPU configuration with 2 GBytes of memory each (V20 model, i.e. 40 boxes with 2 CPUs + 4 GBytes); 80 are in quad-CPU configuration with 4 GBytes of memory each (V40 model, i.e. 20 boxes with 4 CPUs + 16 GBytes). These are also connected via a Myrinet switch at 2 Gbps. • IPMI service processors: each box contains a service processor with a separate network interface for remote monitoring and control. (Photo: inside a V20.)
(Diagram: Iceberg cluster configuration – a head node (Eth0/Eth1) handling all remote access and running the license server, Myrinet-connected worker nodes each paired with an IPMI service processor, and a shared filestore NFS-mounted onto the worker nodes.)
(Diagram: White Rose Grid sites connected over the YHMAN network.)
Grid & HPC applications development tools • Development • Fortran 77/90, C, C++, Java compilers • MPI / MPICH-gm • OpenMP • NAG Mk 20, 21 • ACML • Grid • Sun Grid Engine • Globus 2.4.3 (via GPT 3.0) • SRB s-client tools
Features and Capabilities • Web-accessible management and execution of applications • Provides a service for rapid authoring and publication of custom applications • Easy integration of multiple heterogeneous resources
Potential benefits of an applications portal • More efficient use of resources • Ease of use • Familiar GUI • Capture of expert knowledge • Better presentation of legacy software
Potential Development • Building expert systems • Allowing novice users to take advantage of parallel HPC resources • Providing HPC services over the grid • HPC centres collaborating with each other without having to provide individual usernames, file storage etc. to remote users
WRG – Application Portal • Based on EASA • Three Usage Modes • Users • Run applications • Have storage space • Review old results • Authors • Build and publish applications • Administrators
Using • Accessing • Managing • Applications • Workspace • Results • Help
Using: Accessing • Start up a web browser and go to http://www.shef.ac.uk/wrgrid/easa.html • Log in using the provided user name and password
Using: Help • Select the Help and Support tab to register • Apply to the administrator for an account • Apply to authors to register applications
Using: Managing • Installing a client • Setting a password • Setting the mode (user/author)
Using: Applications • View and select available applications
Using: Workspace • Storage for uploaded files and old job files
Using: Results • Check results • View job progress • Export to spreadsheet
Using: Results • Viewing Results
Using: Help • Documentation • Contacts
Conclusions • Disadvantages • Thick client, license costs • Advantages • Rapid publication • Enable virtualization of HPC resources • Make applications available to a broader community and become application-focused • Effective on a network with low bandwidth • Make applications available to collaboration partners over the internet and outside one's own organisation
Demonstration Applications Developed for EASA • Demonstration of metascheduling across the White Rose Grid • Monitoring of usage across the White Rose Grid • Running applications on the local cluster • Fluent • ANSYS • Generic MATLAB and Scilab applications
Metascheduler Demonstration: Background • Enable utilisation of resources across the White Rose Grid • Exploit use of task arrays • Job submission is seamless • Demonstration uses a generic Scilab application that runs on any of the White Rose Grid nodes • Simplistic, but effective, manageable and sustainable
Metascheduler Demonstration: Method • Query and compare the job queues of the WRG nodes with qstat -g c • Use the slots available and the total number of slots to generate weights for the different queues • Compare the weights for all queues on the different nodes and use them to select a node (sketched below) • Use the standard EASA job submission technique to submit the job to the selected node • EASA itself does not know about clusters: a special easaqsub script submits the job to SGE, monitors the job status and removes the job if the wait time is exceeded; once the easaqsub monitor completes, EASA knows that the EASA compute task has completed
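The node-selection step might be sketched as follows. This is a minimal Python sketch, not the actual EASA author script: the host names are placeholders, and it assumes password-less SSH access to each head node and that qstat -g c prints the usual AVAIL and TOTAL columns.

```python
import subprocess

# Placeholder head-node names for the White Rose Grid sites (illustrative only).
WRG_NODES = ["node-a.example.ac.uk", "node-b.example.ac.uk", "node-c.example.ac.uk"]

def queue_weight(node):
    """Run 'qstat -g c' on a node and derive a weight from free/total queue slots."""
    out = subprocess.run(["ssh", node, "qstat", "-g", "c"],
                         capture_output=True, text=True, check=True).stdout.splitlines()
    header = out[0].split()
    i_avail, i_total = header.index("AVAIL"), header.index("TOTAL")
    best = 0.0
    for line in out[2:]:                        # skip the header and separator lines
        fields = line.split()
        if len(fields) <= max(i_avail, i_total):
            continue
        avail, total = float(fields[i_avail]), float(fields[i_total])
        if total > 0:
            best = max(best, avail / total)     # fraction of free slots in this queue
    return best

def select_node():
    """Pick the node whose queues currently offer the largest free-slot fraction."""
    return max(WRG_NODES, key=queue_weight)

if __name__ == "__main__":
    print("Selected node:", select_node())
```

Whichever node wins, the job is then handed to the standard EASA submission mechanism for that node.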
Metascheduler Demonstration: Running Scilab • The user provides a Scilab script file • Required resource files, e.g. data files or files for Scilab library routines • A zipped bundle of Scilab resources can be provided (see the wrapper sketch below) • Set the job submission information and then submit the job
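On the compute node, the wrapper that unpacks the user's bundle and runs the script could look roughly like this. A sketch only, assuming Scilab is on the path; the file names and the run_scilab_job helper are illustrative, not the actual EASA application code.

```python
import shutil
import subprocess
import zipfile
from pathlib import Path

def run_scilab_job(script="user_script.sce", bundle="resources.zip", workdir="job_dir"):
    """Unpack an optional resource bundle and run the user's Scilab script in batch mode."""
    work = Path(workdir)
    work.mkdir(exist_ok=True)
    if Path(bundle).exists():
        with zipfile.ZipFile(bundle) as zf:
            zf.extractall(work)                 # data files / Scilab library routines
    shutil.copy(script, work)                   # the user's uploaded script file
    # -nw runs Scilab without its GUI; exact flags may differ between Scilab versions
    subprocess.run(["scilab", "-nw", "-f", Path(script).name], cwd=work, check=True)

if __name__ == "__main__":
    run_scilab_job()
```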
Metascheduler Demonstration: Job Submission • Provide a job name and job description • Information used for metascheduling: • Jobtime (hours) • Waittime (hours) • Number of tasks (for the job array) • Submission method: • Use metascheduling • Select a particular node (the qsub mapping is sketched below)
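Roughly, those fields map onto a Sun Grid Engine qsub call along these lines. A hedged sketch assuming a typical SGE setup where h_rt is the run-time limit; the submit function stands in for, and greatly simplifies, the easaqsub wrapper described on the method slide.

```python
import subprocess

def submit(jobname, script, jobtime_hours, n_tasks=1):
    """Map the submission form fields onto an SGE qsub command (sketch, not the real easaqsub)."""
    cmd = ["qsub",
           "-N", jobname,                           # job name, as shown by qstat
           "-l", f"h_rt={int(jobtime_hours)}:00:00"]  # hard run-time limit from 'Jobtime (hours)'
    if n_tasks > 1:
        cmd += ["-t", f"1-{n_tasks}"]               # SGE task array from 'Number of tasks'
    cmd.append(script)
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return result.stdout.strip()                    # e.g. "Your job 12345 (...) has been submitted"

# Example: a 4-task array job limited to 2 hours per task
# print(submit("scilab_demo", "run_scilab.sh", jobtime_hours=2, n_tasks=4))
```

The Waittime field is not passed to qsub at all; it would drive the separate easaqsub monitoring loop that polls qstat and deletes the job with qdel once the limit is exceeded, as described on the method slide.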
Metascheduler Demonstration: Further Developments • Current method is successful! • Correctly selects clusters and improves turnaround for Scilab compute tasks • The current pattern can be extended to other EASA applications • Provide distributed storage across the White Rose Grid • Develop the metascheduling strategy to introduce greater dependency on user job requirements for node selection • Exploit other metascheduling systems, e.g. SGE transfer queues, Condor-G. THE END