The 3rd FKPPL Workshop @ KIAS FKPPL VO: Status and Perspectives of Grid Infrastructure March 8-9, 2011 Soonwook Hwang, KISTI
Introduction to FKPPL • FKPPL (France-Korea Particle Physics Laboratory) • International Associated Laboratory between French and Korean laboratories • Promotes joint cooperative activities (research projects) under a scientific research program in the areas of • Particle Physics: LHC, ILC • e-Science: Bioinformatics, Grid Computing, Geant4
FKPPL Scientific Projects • FKPPL focuses on particle physics and e-science • Both require international collaboration
Grid Computing @ FKPPL • Participating Organizations • CC-IN2P3 and KISTI • Group Leaders • Dominique Boutigny, Director of CC-IN2P3, France • Soonwook Hwang, KISTI, Korea • ’10 Budget • France: ~6,000 Euro • Mainly travel costs, funded by CNRS • Korea: 20,000,000 Won • Travel costs and the organization of grid workshops and training • Funded by KRCF under the framework of the CNRS-KRCFST Joint Programme • Common Interest • Joint interest in Grid computing • Collaboration on ALICE computing: CC-IN2P3 (Tier-1) and KISTI (Tier-2) • Joint operation and maintenance of a production grid infrastructure
Objective • Background • Collaborative work between KISTI in Korea and CC-IN2P3 in France in the area of Grid computing • Objective • Provide the computing facilities and user support needed to foster the scientific applications established under the framework of the FKPPL collaboration and beyond • Promote the adoption of grid technology and grid awareness in Korea and France by providing scientists and researchers with a production Grid infrastructure and the technical support they need
FKPPL VO Grid • Built on the gLite middleware services • Up and running since October 2008, providing ~10,000 CPU cores and ~30 TB of disk storage • Since last December, KEK has joined the FKPPL VO, contributing ~1,600 CPU cores and 27 TB of disk
FKPPL VO Grid Testbed • [Diagram: gLite services (UI, WMS, VOMS, LFC, CE, SE, Wiki) deployed across the KISTI, IN2P3 and KEK sites]
gLite Grid Services on FKPPL VO • User Interface (UI): the place where users log on to access the Grid • Workload Management System (WMS): matches the user's requirements with the available resources on the Grid • File and Replica Catalog (LFC): keeps track of the location of grid files and their replicas • Computing Element (CE): a batch queue on a site's computers where the user's job is executed • Storage Element (SE): provides (large-scale) storage for files
Job Submission Example • [Diagram: the user creates a proxy on the UI (checked against the VO Management Service, a DB of VO users) and submits a job (executable + small inputs) to the WMS; the WMS queries the Information System, where sites publish their state, and submits the job to a matching Computing Element; the running job reads input file(s) from a Storage Element and registers output file(s) in the File and Replica Catalog; job status flows through the Logging and Bookkeeping service, and the user retrieves status and (small) output files back on the UI]
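To make this flow concrete, below is a minimal sketch of how a user might drive it from the UI with the standard gLite command-line tools, wrapped in Python. The commands and JDL attributes are standard gLite, but the VO name, executable and file names are illustrative assumptions, not the exact FKPPL production settings.

```python
# Minimal sketch of the gLite job submission flow, driven from the UI.
# The VO name, executable and file names below are illustrative
# assumptions; only the commands and JDL attributes are standard gLite.
import subprocess

JDL = '''
Executable          = "run_sim.sh";
Arguments           = "input.dat";
StdOutput           = "std.out";
StdError            = "std.err";
InputSandbox        = {"run_sim.sh", "input.dat"};
OutputSandbox       = {"std.out", "std.err"};
VirtualOrganisation = "fkppl.kisti.re.kr";
'''

def sh(cmd):
    """Run a shell command and return its stdout."""
    return subprocess.check_output(cmd, shell=True).decode()

with open("job.jdl", "w") as f:
    f.write(JDL)

# 1. Create a VOMS proxy (12-hour lifetime by default).
sh("voms-proxy-init --voms fkppl.kisti.re.kr")

# 2. Submit through the WMS ("-a" delegates the proxy automatically);
#    the WMS prints the job ID as an https:// URL.
out = sh("glite-wms-job-submit -a job.jdl")
job_id = [l for l in out.splitlines() if l.startswith("https://")][0]

# 3. Poll the job status, then retrieve the output sandbox when done.
print(sh("glite-wms-job-status %s" % job_id))
print(sh("glite-wms-job-output %s" % job_id))
```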
Resource and Service Usage @ CC-IN2P3 • CPU used • ~5.1 million HS06 hours • Number of jobs executed • 193,434 • Equivalent to ~72 years on a single Intel Xeon 2.5 GHz processor
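These figures are mutually consistent; as a back-of-the-envelope check (assuming a factor of roughly 8 HS06 per core, plausible for a 2.5 GHz Xeon core of that generation; the exact rating depends on the hardware):

```python
# Back-of-the-envelope check of the CC-IN2P3 usage figures.
# The HS06-per-core factor is an assumption, not a measured value.
hs06_hours = 5.1e6          # CPU used, in HS06 hours
hs06_per_core = 8.0         # assumed rating of one 2.5 GHz Xeon core
core_hours = hs06_hours / hs06_per_core   # ~637,500 core-hours
years = core_hours / (365.25 * 24)        # ~72.7 processor-years
print("%.0f core-hours ~= %.1f processor-years" % (core_hours, years))
```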
User Support • FKPPL VO Wiki site • http://esgtech.springnote.com/pages/4262263 • User Accounts on UI • 104 user accounts have been created • FKPPL VO Membership Registration • 70 users have registered as FKPPL VO members
Grid Training (1/2) • In February 2010, we organized the Geant4 and Grid Tutorial 2010 for the Korean medical physics community • Co-hosted by KISTI and NCC • About 34 participants from major hospitals in Korea • About 20 new users joined the FKPPL VO membership
Grid Training (2/2) • “2010 Summer Training Course on Geant4, GATE and Grid computing” held in Seoul in July • Co-hosted by KISTI and NCC • About 50 participants from about 20 institutes in Korea
Application Porting Support on FKPPL VO • Deployment of Geant4 applications • Used extensively by the National Cancer Center in Korea to carry out compute-intensive simulations relevant to cancer treatment planning • In collaboration with the National Cancer Center in Korea • Deployment of two-color QCD (Quantum ChromoDynamics) simulations in theoretical physics • Several hundred to a few thousand QCD jobs need to be run on the Grid, each taking about 10 days • In collaboration with Prof. Seyong Kim of Sejong University
User Community Support • Sejong University • Porting of two-color QCD (Quantum Chromodynamics) simulations to the Grid and large-scale execution on it • National Cancer Center • Porting of Geant4 simulations to the Grid for cancer treatment planning • KAIST • Used as a testbed for a grid and distributed computing course in the computer science department • East-West Neo Medical Center at Kyung Hee University • Porting of Geant4 simulations to the Grid • Ewha Womans University • Porting of GATE applications to the Grid
Our Two-color QCD Applications (1/2) • Large-scale • A large number of simulation jobs to be run with a wide range of different parameters • In our case, we planned to run a total of 1,647 different QCD jobs, one per parameter set • beta = [1.50, 1.51, 1.52, …, 2.09, 2.10] (61) • J = [0.04, 0.05, 0.06] (3) • mu = [0.0, 0.57, 0.65, 0.7, 0.75, 0.8, 0.85, 0.9, 1] (9) • Independent • Each job runs independently • Long-duration • Each QCD job goes through 400 steps to complete, each step taking an average of 1 hour, so each QCD job takes an average of 400 hours
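The job count follows directly from the parameter grid (61 × 3 × 9 = 1,647); a short Python sketch that enumerates it, with one independent job per combination:

```python
# Enumerate the two-color QCD parameter grid from the slide:
# 61 beta values x 3 J values x 9 mu values = 1,647 combinations.
from itertools import product

betas = [round(1.50 + 0.01 * i, 2) for i in range(61)]  # 1.50 .. 2.10
js    = [0.04, 0.05, 0.06]
mus   = [0.0, 0.57, 0.65, 0.7, 0.75, 0.8, 0.85, 0.9, 1]

grid = list(product(betas, js, mus))
print(len(grid))  # -> 1647, one independent QCD job per parameter set
```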
Our Two-color QCD Applications (2/2) • Need a computing facility to run a large number of jobs • The FKPPL VO provides computing resources sufficient to run all 1,647 QCD jobs concurrently • Need a grid tool to effectively manage such a large set of jobs running on the Grid without having to know the details of the underlying Grid • Ganga seems appropriate as a tool for managing such a large number of jobs on the Grid
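As a rough sketch of what bulk submission looks like in Ganga (run inside a Ganga session, where Job, Executable and the LCG backend are predefined by Ganga's GPI; the binary name su2.x appears later in this talk, but the arguments and backend settings here are illustrative assumptions rather than the production configuration):

```python
# Sketch of bulk job submission from a Ganga session; Job, Executable
# and LCG are provided by Ganga's GPI. Arguments are illustrative.
from itertools import product

betas = [round(1.50 + 0.01 * i, 2) for i in range(61)]
js    = [0.04, 0.05, 0.06]
mus   = [0.0, 0.57, 0.65, 0.7, 0.75, 0.8, 0.85, 0.9, 1]

for beta, j_val, mu in product(betas, js, mus):
    j = Job(name="qcd_b%.2f_J%.2f_mu%.2f" % (beta, j_val, mu))
    j.application = Executable(exe="su2.x",
                               args=[str(beta), str(j_val), str(mu)])
    j.backend = LCG()   # submit through the gLite WMS
    j.submit()
```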
Issues with long-running jobs on the Grid • Long-running jobs often fail to complete on the Grid • It is not straightforward to run a long-duration job like our two-color QCD simulation to completion on the Grid • A Grid proxy certificate may expire before the job's completion • By default, the proxy has a lifetime of 12 hours • Each Grid site has its own operational policy, such as a maximum CPU time that a job is allowed to consume
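One common mitigation, sketched below, is to request a longer-lived proxy and register a renewable credential with a MyProxy server so that the WMS can renew the job's proxy; the VO name and MyProxy host are assumptions, and the VOMS server typically caps the attribute lifetime regardless of what is requested.

```python
# Sketch: longer-lived proxy plus MyProxy registration for renewal.
# The VO name and MyProxy host are illustrative assumptions.
import subprocess

# Request a 96-hour proxy; the VOMS server may still cap the
# lifetime of the VOMS attributes to a shorter value.
subprocess.check_call(
    "voms-proxy-init --voms fkppl.kisti.re.kr --valid 96:00",
    shell=True)

# Store a long-lived credential that the WMS can use to renew the
# job's proxy (-d: use the proxy subject as the user name, -n: allow
# renewal without a passphrase).
subprocess.check_call(
    "myproxy-init -d -n -s myproxy.example.org", shell=True)
```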
Application-level Checkpointing/Restarting • We have modified the original two-color QCD simulation code to support an application-level checkpointing scheme • The two-color QCD code takes 400 steps to complete • Once a QCD job is launched successfully on the Grid, an intermediate result is generated at each step and saved to the checkpoint server • When a QCD job is detected to have stopped for some reason, Ganga restarts it from where it left off by resubmitting it along with the latest intermediate result
Overview of QCD Simulation Runs on the Grid • [Diagram: Ganga (re)submits a QCD job (executable + small inputs) through the WMS to a Computing Element at IN2P3; the running su2.x process reads input file(s) from a Storage Element and sends an intermediate result at each step to the checkpoint server; a heartbeat monitor checks the job status and the latest intermediate result, retrieving that result and resubmitting the job when it has stopped; status and (small) output files are retrieved at the end]
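A schematic of the restart loop on the Ganga side might look like the following; the helpers submit_qcd_job() and fetch_latest_step() are entirely hypothetical, standing in for the real Ganga submission code and the checkpoint-server interface.

```python
# Schematic restart loop for a checkpointed QCD run. The helpers
# submit_qcd_job() and fetch_latest_step() are hypothetical stand-ins
# for the real Ganga submission code and checkpoint-server interface.
import time

TOTAL_STEPS = 400   # each two-color QCD run takes 400 steps

def run_with_restarts(params):
    job = submit_qcd_job(params, start_step=0)
    while True:
        time.sleep(600)                    # poll every 10 minutes
        step = fetch_latest_step(params)   # ask the checkpoint server
        if step >= TOTAL_STEPS:
            return                         # all 400 steps done
        if job.status in ("failed", "killed", "completed"):
            # The job stopped early (proxy expiry, CPU-time limit, ...):
            # resubmit it from the latest checkpointed step.
            job = submit_qcd_job(params, start_step=step)
```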
Two-color QCD in Production • ~75 CPU years • 1,647 runs × 400 steps × 1 hour = 658,800 hours • ~1,647 concurrent QCD runs on the FKPPL VO • The simulation started at the end of August and completed on December 8th, taking ~3.5 months • The results are now under analysis by Prof. Seyong Kim of Sejong University
FJPPL/FKPPL Workshop on Grid Computing • The FJPPL/FKPPL joint workshop on Grid computing was held at KEK on December 20-22 • Hosted by KEK • CC-IN2P3, KEK and KISTI agreed to move forward towards a France-Asia VO
Perspectives for France-Asia VO • Computing Infrastructure • Based on the gLite middleware • Computing centers offering resources • CC-IN2P3, KEK, KISTI • IHEP in China? • IOIT in Vietnam? • Data Infrastructure • iRODS (integrated Rule-Oriented Data System) • Both KEK and CC-IN2P3 have expertise in the operation and management of the iRODS service and might be able to provide an iRODS service in the future • Applications • It is important to have applications that produce scientific results • As of now, we have applications such as: • In-silico docking applications • QCD simulations • Geant4/GATE applications
Perspectives for France-Asia VO • User Communities • As of now, we have user communities mainly from Korea • We might be able to attract communities from Japan and Vietnam • Geant4 communities (Japan) • In-silico drug discovery communities (Vietnam) • ?? (China) • High-level Tools/Services • It is important to provide users with easy-to-use high-level tools • Some tools we have expertise with • WISDOM • Ganga • JSAGA • DIRAC • ?? • Training • To promote awareness of the France-Asia VO, it is important to organize tutorials on the gLite middleware, high-level tools and applications