LCG Deployment in Japan
Hiroshi Sakamoto, ICEPP, Univ. of Tokyo
Contents
• Present status of LCG deployment
  • LCG Tier2
  • Certification authority
  • Implementation
• Recent topics
  • KEK-ICEPP joint R&D program
  • Network
  • Upgrade of resources
• Future plan
LCG in Japan
• Tier2 center at ICEPP, University of Tokyo
  • Decision made in October 2004
• Manpower considerations
  • A few dedicated people, including engineers and outsourced staff
• Contribution to LHC/ATLAS
  • The Japanese community is about 4% of ATLAS
  • We want to contribute more
Japanese CA for HENP
• KEK-CA is ready for operation
• The Japanese HENP community largely overlaps with KEK users
  • LHC ATLAS
  • KEKB Belle
  • J-PARC (50 GeV PS at Tokai)
  • RHIC PHENIX (RIKEN)
• CP/CPS prepared
  • Discussed between KEK and ICEPP
• To be submitted to the EU Grid PMA, or to the AP Grid PMA?
TOKYO-LCG2 cluster
• LCG-2 cluster at the University of Tokyo
  • 52 worker nodes
• Upgraded last week from LCG-2_1_1 to LCG-2_3_1, using YAIM (a sketch of the procedure follows below)
• Red Hat 7.3 (will be replaced by Scientific Linux)
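The upgrade procedure itself is not shown in the slides; the following is a minimal sketch of how such a YAIM-driven node installation could be scripted. The script locations under /opt/lcg/yaim/scripts and the site-info.def path are assumptions based on contemporary LCG-2 documentation, not details from this talk.

```python
#!/usr/bin/env python3
# Illustrative sketch only: wraps the YAIM install/configure steps used to
# bring a node to a given LCG-2 release.  Script paths and the site-info.def
# location are assumptions, not details taken from this talk.
import subprocess
import sys

SITE_INFO = "/opt/lcg/yaim/etc/site-info.def"   # hypothetical path to the site configuration
NODE_TYPE = "WN"                                # worker node

def run(cmd):
    """Run one step and stop the upgrade on the first failure."""
    print(">>", " ".join(cmd))
    if subprocess.call(cmd) != 0:
        sys.exit("step failed: " + cmd[0])

# Install the meta-package for this node type, then apply the site configuration.
run(["/opt/lcg/yaim/scripts/install_node", SITE_INFO, "lcg-" + NODE_TYPE])
run(["/opt/lcg/yaim/scripts/configure_node", SITE_INFO, NODE_TYPE])
```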
PC Farm
• HP ProLiant BL20p
  • Xeon 2.8 GHz, 2 CPUs/node
  • 1 GB memory
  • SCSI 36 GB x2, hardware RAID1
  • 3 GbE NICs
  • iLO remote administration tool
• 8 blades per enclosure (6U)
• Total 108 blades (216 CPUs) in 3 racks
Tokyo Tier2 cluster layout (diagram)
• Service nodes on the campus network (133.11.24.0/23): gateway (dggw0.icepp.jp), CE (dgce0.icepp.jp), SE (dgse0.icepp.jp), RB (dgrb0.icepp.jp), BDII (dgbdii0.icepp.jp), PXY (dgpxy0.icepp.jp), UI (dgui0.icepp.jp)
• 52 worker nodes (hpbwn7-1 ... hpbwn13-8) on a private network (172.17.0.0/24)
• LCG nodes: HP Blade BL20p G2, dual Xeon 2.8 GHz, 1 GB memory (planned upgrade to 2 GB), GbE NIC
• NFS server (dgnas0.icepp.jp): Dell 1750, dual Xeon 2.8 GHz, 2 GB memory, IDE-FC RAID with Infortrend controllers, 250 GB HDD x16 x10
• Storage attached through a Fibre Channel switch, exported as /storage and /home volumes of 1.75 TB each: 1.75 TB x 20 = 35 TB (a capacity check follows below)
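As a rough check of the quoted capacity, the 35 TB figure is consistent with ten arrays of sixteen 250 GB disks each exporting two 1.75 TB volumes; the per-array overhead of two disks (parity/spare) assumed below is not stated on the slide.

```python
# Back-of-the-envelope check of the storage figures on the diagram.
# Assumption (not stated in the talk): each 16-disk array loses two disks'
# worth of capacity to parity/spare, leaving 14 x 250 GB = 3.5 TB usable,
# exported as two 1.75 TB volumes.
disks_per_array = 16
disk_size_tb = 0.25            # 250 GB per disk
arrays = 10

raw_tb = disks_per_array * disk_size_tb * arrays             # 40 TB raw
usable_per_array_tb = (disks_per_array - 2) * disk_size_tb   # assumed overhead
volumes = arrays * 2                                         # two volumes per array
usable_tb = usable_per_array_tb * arrays                     # 35 TB

print(f"raw: {raw_tb} TB, usable: {usable_tb} TB over {volumes} volumes of "
      f"{usable_per_array_tb / 2} TB")
```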
KEK-ICEPP joint R&D
• Testbed cluster at the University of Tokyo
  • 1 worker node
  • LCG-2_4_0 with VOMS
  • Simple CA for testbed users
  • Scientific Linux with autorpm
• Testbed cluster at KEK
  • Computing Research Center
KEK LCG2 cluster layout (diagram)
• Service nodes: UI, Proxy, LCFGng, BDII-LCG2, RB, and two CEs (site GIIS) with a Classic SE
• Worker nodes managed by LSF: IBM eServer 326, Opteron 2.4 GHz, 4096 MB memory, 20 nodes (AMD Opteron-based Linux system, under integration)
• Worker nodes managed by PBS: IBM eServer xSeries, Pentium III 1.3 GHz, 256 MB RAM (test WNs)
R&D Menu
• Stand-alone grid connecting the two clusters
• 1 Gbps dedicated connection between KEK and ICEPP (SuperSINET); a transfer-measurement sketch follows below
• Exercises to understand the LCG middleware
• Special interests
  • SRB
  • Grid Datafarm (Osamu Tatebe, AIST)
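To make the data-transfer exercise concrete, here is a minimal sketch of a throughput measurement over the dedicated link, assuming a GridFTP client (globus-url-copy, part of the middleware stack deployed on the testbeds) and a valid grid proxy. The file paths, the KEK endpoint hostname, and the tuning values are hypothetical, not values from this talk.

```python
#!/usr/bin/env python3
# Illustrative sketch: time a single GridFTP transfer between the two testbed
# clusters and report the achieved throughput.  Paths, the KEK endpoint, and
# tuning values are hypothetical; a valid grid proxy is assumed to exist.
import subprocess
import time

SRC = "gsiftp://dgse0.icepp.jp/storage/test/1GB.dat"   # hypothetical source file
DST = "gsiftp://se01.kek.jp/data/test/1GB.dat"         # hypothetical destination
SIZE_MB = 1024                                         # size of the test file

cmd = [
    "globus-url-copy",
    "-p", "8",             # parallel TCP streams (illustrative)
    "-tcp-bs", "2097152",  # TCP buffer size in bytes (illustrative)
    SRC, DST,
]

start = time.time()
subprocess.check_call(cmd)
elapsed = time.time() - start

print(f"transferred {SIZE_MB} MB in {elapsed:.1f} s "
      f"({SIZE_MB * 8 / elapsed:.0f} Mbit/s)")
```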
Network
• Peer-to-peer 1 Gbps between CERN and ICEPP
  • Sustained data transfer study (a rough link-sizing calculation follows below)
• 10 Gbps to the US and EU
• Connectivity among Asia/Pacific countries is still thin, but improving
  • JP-TW to be upgraded to 1 Gbps very soon
  • JP-HK, JP-CN
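As a rough sizing of what the 1 Gbps CERN-ICEPP link can sustain, the sketch below works out the daily volume and the time to fill the current disk pool; the 50% sustained-efficiency figure is an assumption for illustration, not a measurement reported in this talk.

```python
# Rough sizing of the 1 Gbps CERN-ICEPP link.
# The 50% sustained-efficiency figure is an assumed illustration, not a
# measurement from this talk.
link_gbps = 1.0
efficiency = 0.5                      # assumed average sustained utilisation

sustained_mb_s = link_gbps * 1000 / 8 * efficiency   # ~62.5 MB/s
per_day_tb = sustained_mb_s * 86400 / 1e6            # ~5.4 TB/day

dataset_tb = 35.0                                    # e.g. the 35 TB disk pool
days = dataset_tb / per_day_tb

print(f"~{sustained_mb_s:.0f} MB/s sustained, ~{per_day_tb:.1f} TB/day; "
      f"filling {dataset_tb} TB takes about {days:.0f} days")
```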
PC Farm Upgrade
• IBM BladeCenter HS20
  • Xeon 3.6 GHz, 2 CPUs/node
  • EM64T, 2 GB memory
  • SCSI 36 GB x2, hardware RAID1
  • 2 GbE NICs
  • Integrated System Management Processor
• 14 blades per enclosure (7U)
• Total 150 blades (300 CPUs) in 2 racks, plus 1 rack for console and network switches
Foundry BigIron MG8
• Two 4 x 10 GbE modules
• Four 60-port GbE modules
Disk Array
• 16 x 250 GB SATA HDDs, 2 Fibre Channel interfaces
• 27 cabinets in total
Future plan
• LCG Memorandum of Understanding
  • To be signed in JFY2005
  • University of Tokyo as the funding body
• LCG Tier2 resources
  • More resources added to our testbed in JFY2005 (approved)
  • LCG SC4 + ATLAS DC3 in 2006
• Production system
  • Budget request submitted for JFY2006
  • Expected to become operational in January 2007