Hungarian GRID Projects and ClusterGrid Initiative. P. Kacsuk, MTA SZTAKI (kacsuk@sztaki.hu, www.lpds.sztaki.hu)
Hungarian Grid projects (overview diagram): VISSZKI provides a Globus and Condor service over clusters; DemoGrid covers security, Grid monitoring, the data storage subsystem and applications; SuperGrid targets supercomputers with resource scheduling, MPICH-G, P-GRADE and accounting.
Objectives of the VISSZKI project • Testing various tools and methods for creating a virtual supercomputer (metacomputer) • Testing and evaluating Globus and Condor • Elaborating a national Grid infrastructure service based on Globus and Condor by connecting various clusters
Results of the VISSZKI project (layered architecture): low-level parallel development with PVM, MW and MPI; Grid-level job management with Condor-G; Grid middleware: Globus; Grid fabric: local job management with Condor on each of the participating clusters.
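To make the layering concrete, below is a minimal sketch of a Condor-G submit description of the kind this architecture implies: the job is given to Condor-G, which forwards it through a Globus gatekeeper to the local Condor pool of a cluster. The gatekeeper address and file names are invented for the example, not taken from the project.

    # hypothetical Condor-G submit file (illustrative sketch only)
    universe        = globus
    globusscheduler = gatekeeper.example.hu/jobmanager-condor   # assumed gatekeeper name
    executable      = simulate
    output          = simulate.out
    error           = simulate.err
    log             = simulate.log
    queue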
Structure of the DemoGrid project • GRID subsystems study and development • Data storage subsystem: relational database (RDB), OO database (OODB), geometric database (GDB), distributed file system (DFS) • Monitoring subsystem • Security subsystem • Demo applications: astrophysics, human brain simulation, particle physics, car engine design. Generic GRID architecture (diagram): applications (tightly or loosely coupled, data decomposition) sit on top of the data storage, security and monitoring subsystems, which run over the hardware layer (CPU, storage, network).
Structure of the Hungarian Supercomputing Grid (connected by the 2.5 Gb/s academic Internet backbone): • NIIFI: 2*64-processor Sun E10000 • ELTE: 16-processor Compaq AlphaServer • BME: 16-processor Compaq AlphaServer • SZTAKI: 58-processor cluster • University (ELTE, BME) clusters
The Hungarian Supercomputing GRID project (layered architecture): GRID applications; Web-based GRID access through a GRID portal; high-level parallel development layer (P-GRADE); low-level parallel development (PVM, MW, MPI); Grid-level job management (Condor-G); Grid middleware (Globus); Grid fabric: local job management with Condor, PBS, LSF and Sun Grid Engine on the clusters, the Compaq AlphaServers and the SUN HPC machine.
Distributed supercomputing: P-GRADE • P-GRADE (Parallel GRid Application Development Environment) • The first highly integrated parallel Grid application development system in the world • Provides: • Parallel, supercomputing programming for the Grid • Fast and efficient development of Grid programs • Observation and visualization of Grid programs • Fault and performance analysis of Grid programs
Condor/P-GRADE on the whole range of parallel and distributed systems (diagram): Condor flocking connects P-GRADE applications running on supercomputers, mainframes and clusters into a Grid spanning the full GFlops performance range.
Berlin CCGrid Grid Demo workshop: flocking of P-GRADE programs by Condor (diagram). P-GRADE programs started from Budapest run at the Budapest, Madison and Westminster clusters, with Condor flocking distributing the work among the three pools.
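Flocking of this kind is configured on the Condor side. A minimal, hypothetical condor_config fragment for the submitting pool might look like the sketch below; the central-manager host names are invented, and the receiving pools need matching settings of their own before jobs can flock there.

    # hypothetical condor_config fragment for the submitting (Budapest) pool
    FLOCK_TO = condor.madison.example.edu, condor.westminster.example.ac.uk
    # each remote pool must list this schedd in its own FLOCK_FROM
    # and grant it write access before flocked jobs are accepted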
Next step: check-pointing and migration of P-GRADE programs (diagram, controlled from the Wisconsin P-GRADE GUI): 1. the P-GRADE program is downloaded to London as a Condor job; 2. the program runs at the London cluster; 3. the London cluster becomes overloaded, so the job is check-pointed; 4. the P-GRADE program migrates to Budapest as a Condor job and continues at the Budapest cluster.
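As an illustration of how such a checkpointable job could be described to Condor: in Condor's standard universe the executable is relinked with condor_compile, which lets Condor take a checkpoint when the running cluster becomes overloaded and restart the job elsewhere. The names below are purely illustrative and not taken from the demo.

    # hypothetical submit file for a checkpointable (migratable) Condor job
    universe   = standard          # enables checkpointing and migration
    executable = pgrade_app        # assumed name, relinked with condor_compile
    output     = app.out
    error      = app.err
    log        = app.log
    queue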
Further development: TotalGrid • TotalGrid is a complete Grid solution that integrates the different software layers of a Grid (see next slide) and provides for companies and universities • exploitation of the free cycles of desktop machines in a Grid environment outside working hours • supercomputer capacity from the institution's existing desktops without further investment • development and testing of Grid programs
Layers of TotalGrid (top to bottom): P-GRADE, PERL-GRID, Condor or SGE, PVM or MPI, running over the Internet / Ethernet.
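At the lowest of these layers the application is an ordinary message-passing program; with P-GRADE such code is generated for the user rather than written by hand. The fragment below is only a generic MPI sketch in C of the kind of program that lives at the PVM/MPI layer, not code produced by P-GRADE or used in TotalGrid.

    /* minimal MPI sketch: each process computes a partial result and
       the master (rank 0) sums them up */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        double local = (double)rank;   /* stand-in for a real partial result */
        double total = 0.0;
        MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0)
            printf("sum over %d processes: %f\n", size, total);

        MPI_Finalize();
        return 0;
    }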
Hungarian Cluster Grid Initiative • Goal: to connect the new clusters of the Hungarian higher education institutions into a Grid • By autumn 2002, 42 new clusters will be established at various universities of Hungary. • Each cluster contains 20 PCs and a network server PC. • During the day the components of the clusters are used for education • At night all the clusters are connected into the Hungarian Grid over the Hungarian academic network (2.5 Gbit/s) • Total Grid capacity in 2002: 882 PCs • In 2003 a further 57 similar clusters will join the Hungarian Grid • Total Grid capacity in 2003: 2079 PCs • Open Grid: other clusters can join at any time
Structure of the Hungarian Cluster Grid (diagram): TotalGrid runs on each cluster, and the clusters are connected over the 2.5 Gb/s academic Internet backbone. 2002: 42*21-PC Linux clusters, 882 PCs in total; 2003: 99*21-PC Linux clusters, 2079 PCs in total.
Live demonstration of TotalGrid • MEANDER Nowcast Program Package: • Goal: ultra-short-range forecasting (30 minutes) of dangerous weather situations (storms, fog, etc.) • Method: analysis of all available meteorological information to produce parameters on a regular mesh (refined from 10 km to 1 km) • Collaborating partners: • OMSZ (Hungarian Meteorological Service) • MTA SZTAKI
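To give a feel for the 10 km to 1 km mesh step, the C sketch below refines a coarse field to a finer regular grid by bilinear interpolation. This is only an illustration of producing parameters on a denser mesh under assumed grid sizes; MEANDER's real analysis (CANARI and the delta analysis on the next slide) is far more sophisticated.

    /* illustrative bilinear refinement of a coarse field to a finer mesh */
    #define COARSE 5                      /* coarse points per side (example value) */
    #define R      10                     /* refinement factor, 10 km -> 1 km       */
    #define FINE   ((COARSE - 1) * R + 1)

    void refine(const double c[COARSE][COARSE], double f[FINE][FINE])
    {
        for (int i = 0; i < FINE; i++)
            for (int j = 0; j < FINE; j++) {
                double x = (double)i / R, y = (double)j / R;
                int i0 = (int)x, j0 = (int)y;
                int i1 = i0 + 1 < COARSE ? i0 + 1 : i0;
                int j1 = j0 + 1 < COARSE ? j0 + 1 : j0;
                double fx = x - i0, fy = y - j0;
                f[i][j] = (1 - fx) * (1 - fy) * c[i0][j0] + fx * (1 - fy) * c[i1][j0]
                        + (1 - fx) * fy * c[i0][j1] + fx * fy * c[i1][j1];
            }
    }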
Structure of MEANDER (dataflow diagram): the inputs (ALADIN first-guess data, SYNOP data, satellite, radar and lightning observations) are decoded and fed into the CANARI and delta analysis steps, which produce the basic fields (pressure, temperature, humidity, wind) on the grid; radar-to-grid and satellite-to-grid conversion, rainfall state, visibility, overcast and cloud-type modules then compute the derived fields (type of clouds, visibility, etc.) for the current time, running on the GRID; visualization delivers GIF images for end users and HAWK displays for meteorologists.
Live demo of MEANDER based on TotalGrid (diagram): the netCDF input and output files are exchanged with ftp.met.hu and the results are visualized with HAWK; PERL-GRID passes the job from P-GRADE to CONDOR-PVM for parallel execution; the sites are connected by an 11/5 Mbit dedicated link, a 34 Mbit shared link and a 512 kbit shared link.
On-line performance visualization in TotalGrid (diagram): the same MEANDER setup as on the previous slide, with GRM trace files collected from the CONDOR-PVM parallel execution and sent back over the network links for on-line visualization; GRM is SZTAKI's task in the DataGrid project.
Conclusions • Several important results already exist that can be used both by • the academic world (Cluster Grid, SuperGrid) • and commercial companies (TotalGrid) • Further efforts and projects are needed to make these Grids • more robust • more user-friendly • richer in functionality
Thanks for your attention. Further information: www.lpds.sztaki.hu