APAN for Meteorological Studies
August 27, 2003
Jai-Ho Oh, Pukyong National University, Busan, Korea
Main Goals
Establishment of uMeteo-K, a ubiquitous Korean meteorological research cooperation system
Examples of Weather Information
Meteorological disasters, industrial areas, water resources, environment, wildfire, health…
< Diagram: the uMeteo-K GRID Testbed at the center, linked to component grids and their roles - University Grid; Inter-Office Grid (super ensemble, seasonal/climate); Project Grid (applied meteorology: agro, energy, fishery); Institute Grid (model development: next generation/K-model); KMA Grid (public service: user requirements/detailed forecasts); NGIS Grid (risk management, national response); Private Grid (meteorological industry: met info/instruments); CAgM Grid (core AgMet stations, Koflux/RS reference); global environment (flux, aerosol, GHG); impact assessment (environmental impact/feedback); and APCN-Grid (network hub for the RCC) >
About uMeteo-K
The concept of a virtual laboratory for interdisciplinary meteorological research:
- Parallelized numerical weather prediction modeling (Computational Grid)
- Cooperative meteorological research system (Access Grid)
- Virtual server for large meteorological data (Data Grid)
Grid technology is essential to accomplishing these goals.
< Diagram: the climate system - atmosphere (H2O, N2, O2, CO2, O3, etc.), land, ocean, sea ice, deep ocean, rivers and lakes, and biomass, coupled through precipitation and evaporation, heat exchange, wind stress, terrestrial radiation, air-biomass coupling (e.g. N, C), air-ice coupling, land-biomass coupling (e.g. carbon), shelf and mixed-layer processes, and aerosols, CO2, and smoke; drivers include changes in solar input, atmospheric composition, chemical reaction rates and circulation, the hydrological cycle, the land surface (continental distribution, vegetation, land use, ecosystems), farming practice, and other human influences >
Required Climate Information by 2010
< Figure: horizontal resolution of climate information, improving from 27 km (by 2003) through 9 km to 3 km (by 2010) >
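This refinement is computationally expensive. A minimal sketch of the scaling, assuming a fixed domain and vertical grid, horizontal grid points growing as the square of the refinement factor, and a CFL-limited timestep shrinking linearly with grid spacing (assumptions not stated on the slide):

# Rough relative cost of refining horizontal resolution.
# Assumption: cost ~ (dx_ref / dx)**3 -- grid points scale as the square
# of the refinement factor and the CFL timestep shrinks linearly with dx.
def relative_cost(dx_km, dx_ref_km=27.0):
    """Computational cost relative to a dx_ref_km-resolution run."""
    return (dx_ref_km / dx_km) ** 3

for dx_km in (27.0, 9.0, 3.0):
    print(f"{dx_km:4.0f} km grid: ~{relative_cost(dx_km):6.0f}x the 27 km cost")

Under these assumptions each threefold refinement costs roughly 27 times more, and the full 27 km to 3 km step costs about 729 times more, which is why Earth Simulator-class machines are needed.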
Earth Simulator
A massively parallel supercomputer based on the NEC SX-5 architecture:
- 640 computational nodes
- 8 vector processors per node
- Peak performance per CPU: 8 GFLOPS
- Total peak performance: 8 GFLOPS x 8 CPUs x 640 nodes = 40 TFLOPS
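As a quick check of the quoted peak (the slide's 40 TFLOPS is the rounded figure):

# Peak-performance arithmetic from the slide.
gflops_per_cpu = 8    # one vector processor
cpus_per_node = 8
nodes = 640

peak_gflops = gflops_per_cpu * cpus_per_node * nodes
print(f"{peak_gflops} GFLOPS = {peak_gflops / 1000:.2f} TFLOPS")  # 40960 GFLOPS = 40.96 TFLOPS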
Development of a High-Resolution Atmospheric Global Model on the Earth Simulator for Climate Study
Nonhydrostatic ICosahedral Atmospheric Model (NICAM): 10 km or less horizontal resolution, 100 vertical levels
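A minimal sketch of the icosahedral grid behind NICAM, assuming the standard construction (recursive edge bisection, giving 10 * 4**l + 2 grid points at division level l; the glevel values below are illustrative and not taken from the slide):

import math

# Icosahedral grid-point count for NICAM-style grids (assumed standard
# construction): recursive bisection of the icosahedron's edges gives
# 10 * 4**glevel + 2 points at grid division level glevel.
EARTH_RADIUS_KM = 6371.0

def grid_points(glevel):
    return 10 * 4 ** glevel + 2

def mean_spacing_km(glevel):
    """Approximate mean grid spacing: sphere area divided evenly per point."""
    area_per_point = 4 * math.pi * EARTH_RADIUS_KM ** 2 / grid_points(glevel)
    return math.sqrt(area_per_point)

for gl in range(9, 12):
    print(f"glevel {gl}: {grid_points(gl):>10,} points, "
          f"~{mean_spacing_km(gl):.1f} km mean spacing")

At glevel 11 this gives roughly 42 million points at about 3.5 km spacing, comfortably inside the 10 km-or-less target.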
Integration of Human and Computational Resources
- Brain pool and Access Grid system
- Supercomputers and experimental facilities
- Computers, databases, and mass storage
- Visualization environment
- High-speed networks
< Map: participating sites across Korea - Seoul, Incheon, Suwon, Chuncheon, Chonan, Chongju, Daejeon, Jeonju, Daegu, Pohang, Ulsan, Changwon, Kwangju, Pusan, Cheju >
AG in uMeteo-K
Setup of the uMeteo-K Access Grid on a PIG (Personal Interface to the Grid) + Room Node basis (ICSYM/PKNU, CES/SNU, NCAM/KMA):
- Linkage within uMeteo-K over the KOREN network
- Establishment of a duplex video conference system with PIG & Polycom
- Establishment of a computing environment among uMeteo-K's PIGs (AG Toolkit version 1.2)
- Establishment of a PIG-based independent Room Node system (NCAM/KMA)
uMeteo-K AG configuration
< Diagram: PKNU, SNU, KMA, KISTI, KAIST, KJIST, and CNU nodes joined by multicast, with unicast links through Quick Bridge servers to ANL and the Access Grid >
Samples of uMeteo-K AG operation
< Korea AG-Group Quick Bridge server test; participants: PKNU, SNU, KISTI, KJIST, CNU, KAIST, and KMA; July 8, 2003 >
< uMeteo-K monthly meeting using VRVS: PKNU (Busan), SNU (Seoul), KMA (Seoul), and USA (Washington, D.C.); June 3, 2003 >
uMeteo-K CG Testbed
The uMeteo-K computational grid testbed uses two clusters, each with 4 nodes.
< Table: single-node specification >
uMeteo-K CG Testbed Configuration
< Diagram: two 4-node (single-CPU) clusters, each with a NAS storage server, linked through a 10/100 switch hub over 10/100 Ethernet to KOREN, with a UPS, a monitoring system, and an electrometer >
uMeteo-K CG Testbed S/W
- Linux: Paran 7.0 (kernel version 2.4.18)
- Globus 2.4
- PG Fortran 3.2 (Portland Group)
- MPICH-G2 1.2.5 for parallel job runs, built with PG Fortran
- NCAR Graphics for graphic display
- NIS, NFS
Globus linkage between testbed clusters
- An independent simple CA is installed on each master node (CA-A on Master A, CA-B on Master B).
- Each master node's PBS scheduler controls its own group of slave nodes.
CA information of each cluster (proxy credentials as reported by grid-proxy-info)

- CA-A: pknuGB01.pknu.ac.kr
  subject  : /O=uMeteoK/OU=pknu.ac.kr/CN=pknuGB1/CN=proxy
  issuer   : /O=uMeteoK/OU=pknu.ac.kr/CN=pknuGB1
  identity : /O=uMeteoK/OU=pknu.ac.kr/CN=pknuGB1
  type     : full legacy globus proxy
  strength : 512 bits
  path     : /tmp/x509up_u533
  timeleft : 10:01:23

- CA-B: pknuGB05.pknu.ac.kr
  subject  : /O=uMeteoK/OU=PKNU/OU=pknu.ac.kr/CN=pknuCA2215/CN=proxy
  issuer   : /O=uMeteoK/OU=PKNU/OU=pknu.ac.kr/CN=pknuCA2215
  identity : /O=uMeteoK/OU=PKNU/OU=pknu.ac.kr/CN=pknuCA2215
  type     : full legacy globus proxy
  strength : 512 bits
  path     : /tmp/x509up_u535
  timeleft : 10:53:37
Monitoring system on the CG testbed
< Screenshots: monitoring view before integration and after integration >
Globus script file for a parallel MM5 run (mm5.rsl)

+
( &(resourceManagerContact="pknuGB01")
   (count=4)
   (label="subjob 0")
   (environment=(GLOBUS_DUROC_SUBJOB_INDEX 0)
                (LD_LIBRARY_PATH /usr/local/globus/lib/))
   (directory="/spring/KISTI/MM5/Run")
   (executable="/spring/KISTI/MM5/Run/mm5.mpp")
)
( &(resourceManagerContact="pknuGB05")
   (count=4)
   (label="subjob 4")
   (environment=(GLOBUS_DUROC_SUBJOB_INDEX 1)
                (LD_LIBRARY_PATH /usr/local/globus/lib/))
   (directory="/summer/KISTI/MM5/Run")
   (executable="/summer/KISTI/MM5/Run/mm5.mpp")
)
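For illustration, a hypothetical Python helper (not part of the original setup) that emits a DUROC multi-request RSL of the same shape, one subjob per cluster; MPICH-G2 distinguishes the subjobs by GLOBUS_DUROC_SUBJOB_INDEX, and each subjob label carries the cumulative CPU offset:

# Hypothetical generator for a DUROC multi-request RSL like mm5.rsl above.
SUBJOB_TEMPLATE = """( &(resourceManagerContact="{host}")
   (count={count})
   (label="subjob {label}")
   (environment=(GLOBUS_DUROC_SUBJOB_INDEX {index})
                (LD_LIBRARY_PATH /usr/local/globus/lib/))
   (directory="{rundir}")
   (executable="{rundir}/mm5.mpp")
)"""

def make_rsl(clusters):
    """clusters: list of (host, cpu_count, run_directory) tuples."""
    subjobs = []
    label = 0  # cumulative CPU offset used in the subjob label
    for index, (host, count, rundir) in enumerate(clusters):
        subjobs.append(SUBJOB_TEMPLATE.format(
            host=host, count=count, label=label, index=index, rundir=rundir))
        label += count
    return "+\n" + "\n".join(subjobs) + "\n"

rsl = make_rsl([
    ("pknuGB01", 4, "/spring/KISTI/MM5/Run"),
    ("pknuGB05", 4, "/summer/KISTI/MM5/Run"),
])
with open("mm5.rsl", "w") as f:
    f.write(rsl)

A multi-request RSL of this kind is typically handed to Globus Toolkit 2 via globusrun or MPICH-G2's mpirun.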
Parallel MM5 Benchmarks with Globus
- Average job waiting time (including CA): 25 sec
- Required wall-clock time for a 3600 sec (1-hour) model integration
- Required wall-clock time for an 86400 sec (1-day) model integration
(Speedup and efficiency follow from these timings as in the sketch below.)
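The measured wall-clock times themselves are not reproduced here; a minimal sketch, with placeholder numbers, of how speedup and parallel efficiency would be derived from such measurements while charging the 25 sec average job-waiting time to each run:

JOB_WAIT_SEC = 25.0  # average Globus job waiting time (including CA), from the slide

def speedup_and_efficiency(t_serial, t_parallel, n_cpus, wait=JOB_WAIT_SEC):
    """Speedup and per-CPU efficiency, counting job-waiting overhead."""
    total = t_parallel + wait
    s = t_serial / total
    return s, s / n_cpus

# Placeholder timings for illustration only (not measured values):
s, e = speedup_and_efficiency(t_serial=3600.0, t_parallel=550.0, n_cpus=8)
print(f"speedup {s:.1f}x, efficiency {e:.0%}")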
uMeteo-K Data Grid Configuration
< Diagram: the uMeteo-K Data Grid links KMA, SNU, PKNU, and the KISTI supercomputer; model output and forecast output flow among them, with data input from NCEP, COLA, NASA, and JMA >