Grid setup for CMS experiment
Youngdo Oh
Center for High Energy Physics, Kyungpook National University
(On behalf of the HEP Data Grid Working Group)
2003.8.22, The 2nd International Workshop on High Energy Physics Data Grid
Installation of EDG
• The DataGrid fabric consists of a farm of centrally managed machines with multiple functionalities: Computing Elements (CE), Worker Nodes (WN), Storage Elements (SE), Resource Broker (RB), User Interface (UI), Information Service (IS), network links, … Each of these is a Linux machine. (A profile sketch follows below.)
[Diagram: the LCFG server, fed from the RPMs repository and the profile repository, installs and configures the CE/WN (PC cluster), UI, SE (GDMP) and RB. Modified slide from A. Ghiselli, INFN, Italy.]
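For illustration, each node managed this way is described by a short source profile on the LCFG server, which compiles the profiles and serves them, together with the RPM lists, to the clients. The sketch below is only indicative; the header and host names are assumptions and differ between EDG releases and sites:

    /* hypothetical LCFGng source profile for a worker node */
    #define HOSTNAME wn001
    #include "site-cfg.h"        /* assumed site-wide header: CE/SE addresses, VOs, ... */
    #include "WorkerNode-cfg.h"  /* assumed machine-type header pulling in the WN RPM list */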
EDG Testbed at CHEP
[Diagram: job flow through the testbed]
• The user submits a job to the UI and requests data from the SE via the UI.
• The available resources are checked for the given job.
• The job is submitted to a Worker Node and the output is sent back to the user.
EDG Testbed
[Diagram: testbed layout. KNU/CHEP (in operation): UI for real users, RB, CE, SE with GDMP client (with new VO), a big fat disk, and K2K and CDF CPUs; VO users are mapped on the disk with maximum security (grid-security). SNU (in operation): GDMP server (with new VO) and the VO LDAP server. SKKU (in preparation): WN with GDMP client (with new VO). Data moves over NFS and GSIFTP.]
• The EDG test beds are operated at KNU and at SNU.
• The Globus simple CA is managed at KNU and at SNU to sign certificates.
• In addition to the default VOs in EDG, a cdf VO has been constructed.
• One CDF Linux machine is embedded in the EDG testbed as a WN by installing a PBS batch server; a CDF job is running on the EDG testbed.
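Access for VO users is controlled in the standard Globus way through the grid-mapfile under /etc/grid-security; a minimal sketch of how members of a default VO and of the new cdf VO end up on local pool accounts (the DNs are made up):

    # /etc/grid-security/grid-mapfile (illustrative entries)
    "/C=KR/O=KNU/OU=CHEP/CN=Grid User One"  .cms    # CMS VO member -> next free cms pool account
    "/C=KR/O=KNU/OU=CHEP/CN=Grid User Two"  .cdf    # member of the locally added cdf VO

The leading dot is the usual pool-account convention, so VO users never share a local account.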
Mass Storage
• Software: HSM (Hierarchical Storage Management)
• 1 TB FastT200: RAID 5
• 4 × 3494 tape drives: 48 TB
Performance of Mass Storage
• NFS: 10 MB/s (writing), 61 MB/s (reading)
• FTP: 50 MB/s (writing), 38 MB/s (reading)
[Diagram: clusters at CHEP and a cluster at KT (~200 km from CHEP) accessing the mass storage over NFS/FTP.]
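Throughput numbers like the ones above are typically obtained with plain sequential transfers; a sketch of the NFS case (mount point and file name are made up, and read figures can be inflated by client-side caching):

    # 1 GB sequential write to the HSM-backed NFS mount, then read it back
    dd if=/dev/zero of=/mnt/hsm/ddtest bs=1M count=1024
    dd if=/mnt/hsm/ddtest of=/dev/null bs=1M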
Installation using PXE & Kickstart
• LCFGng was installed successfully.
• However, there are minor problems configuring clients with LCFGng; we are communicating with the LCFGng team.
• Toy installation server working like LCFG:
  kickstart: Red Hat's automatic installation tool; user applications can be added.
  => Using kickstart together with PXE and DHCP, any kind of application can be installed while Linux itself is installed automatically (a configuration sketch follows below).
  We will use this kind of system to set up and maintain the various grid software on the CHEP cluster.
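A minimal sketch of the PXE + DHCP + kickstart chain described above; all addresses, paths and package names are examples, not the actual CHEP configuration:

    # /etc/dhcpd.conf fragment on the installation server
    subnet 192.168.1.0 netmask 255.255.255.0 {
        range 192.168.1.100 192.168.1.200;
        next-server 192.168.1.1;             # TFTP server holding pxelinux.0
        filename "pxelinux.0";
    }

    # pxelinux.cfg/default entry pointing the installer at a kickstart file
    label ks
        kernel vmlinuz
        append initrd=initrd.img ks=nfs:192.168.1.1:/export/ks/node-ks.cfg

    # node-ks.cfg: extra applications are simply appended to %packages
    %packages
    @ Base
    openafs-client                           # example of an added user application

Anything that cannot be expressed as an RPM can still go into the kickstart %post script, which is what makes the scheme flexible enough for grid middleware.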
Network to CERN
• The traffic through TEIN is almost saturated: slow, less than 2 Mbps.
• So we are using KOREN => StarTap => CERN: normally 1 Mbps ~ 20 Mbps.
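For reference, quick checks of which route is taken and what it actually delivers can be done with standard tools (the host names are placeholders, and an iperf server must be running at the far end):

    traceroute gateway.cern.ch               # confirm the KOREN => StarTap => CERN path
    iperf -c testbox.cern.ch -t 30           # ~30 s TCP throughput test against the far-end server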
Web interface for job management
[Initial screen. Diagram: web browser <-> web server (web service) <-> EDG system; proxy information.]
Web interface for job management (cont.)
[Screenshots: EDG menu, editing a job, loading a JDL file.]
Web interface for job management (cont.)
• dg-job-list-match
• dg-job-submit
• dg-job-status
• dg-job-get-output
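The web interface drives the standard EDG user-interface commands listed above; a minimal job cycle run directly on the UI looks roughly like this (file names and the job identifier are placeholders):

    // hello.jdl -- minimal EDG job description
    Executable    = "/bin/hostname";
    StdOutput     = "hello.out";
    StdError      = "hello.err";
    OutputSandbox = {"hello.out", "hello.err"};

    # on the UI, after grid-proxy-init:
    dg-job-list-match hello.jdl              # which CEs match the job requirements
    dg-job-submit     hello.jdl              # returns a dg-jobid
    dg-job-status     <dg-jobid>
    dg-job-get-output <dg-jobid>             # retrieve the output sandbox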
iVDGL & LCG-1
• Gatekeeper (the CE in EDG) installed; batch system: Condor (a quick test sketch follows below).
• UI and WN under installation.
• No well-defined RB and SE in iVDGL yet; UF and Fermilab are preparing something similar.
• A general installation & setup script will be ready after successful installation of all components.
• We are waiting for a stable LCG-1 release.
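Once the gatekeeper and its Condor jobmanager are up, they can be exercised from any machine holding a valid grid proxy; a sketch (the gatekeeper host name is made up):

    grid-proxy-init                                         # create a proxy from the user certificate
    globus-job-run gate.knu.ac.kr /bin/hostname             # fork job on the gatekeeper itself
    globus-job-run gate.knu.ac.kr/jobmanager-condor /bin/hostname    # route through Condor to a node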
Plan & conclusion
• Update the CMS Grid software to LCG-1.
• For the HSM to be an efficient part of the SE, NFS between the HSM and the clusters should be tuned, or another solution should be considered.
• The network between CHEP and CERN is sometimes unstable and low in bandwidth.
• For management of the various grid systems, an efficient management server (LCFG or kickstart) will be prepared.
• More institutes will be included in the CMS grid system.