ASGC Site Report
Felix Lee, HEPiX 2011 Fall, Vancouver, 24 Oct 2011
2 ASGC Data Centre
• Total capacity: 2 MW, 400 tons AHUs, 99 racks, ~800 m²
• Resources: 15,000 CPU cores, 6 PB disk, 5 PB tape
• Cooling power : CPU power ratio: summer 1 : 1.4, winter 1 : 2
• Rack space usage (racks): AS e-Science 51.8 (55.6%), ASCC 13.2 (14.2%), IPAS 6.5 (7.0%), free 28.85 (23.2%)
• Power consumption and temperature of every piece of equipment are monitored every 10 seconds.
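As a rough illustration of that 10-second polling cadence, here is a minimal sketch (not ASGC's actual monitoring code); `read_sensor` and the device names are hypothetical stand-ins for whatever IPMI or PDU/SNMP query each piece of equipment supports.

```python
import time

POLL_INTERVAL = 10  # seconds, matching the cadence quoted above

def read_sensor(device):
    # Hypothetical stand-in: in practice this would be an IPMI or PDU/SNMP query.
    return 350.0, 24.5  # placeholder power (W) and inlet temperature (C)

def poll(devices):
    while True:
        start = time.time()
        for dev in devices:
            watts, temp_c = read_sensor(dev)
            print(f"{time.strftime('%H:%M:%S')} {dev} power={watts:.0f}W temp={temp_c:.1f}C")
        # sleep out the remainder of the 10-second window
        time.sleep(max(0.0, POLL_INTERVAL - (time.time() - start)))

if __name__ == "__main__":
    poll(["rack01-pdu", "rack01-node01"])  # hypothetical device names
```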
3 Resource Update
• Purchasing a 10GbE core switch
  • Cisco Nexus 7010
  • 7 line cards, 48 ports each; 336 ports in total
  • We hope it can be delivered by the end of Nov.
• 32 HP BL460 G7 blades were delivered in early Oct.
• 288 TB of storage was delivered at the end of Aug.
• A new 4-way system, Dell 6140 + C410 GPU expansion, has been purchased and is now awaiting delivery.
  • 96 cores in 2U, which is attractive for us
  • One nVidia 2070M GPU is included for evaluation.
4 System Re-Configuration
• Disk system re-configuration is still ongoing.
  • As reported at the last HEPiX meeting, we completed the DPM migration.
  • Now we are migrating the Castor disk servers to 10GbE.
  • It takes a long time..
• DPM improvements
  • Performance evaluation and improvement
  • pNFS and WebDAV are under testing (see the sketch after this list).
• Constructing the 10Gb backbone
  • 10GbE fibre patch panel, core switch..
• Bandwidth upgrade of legacy worker nodes
  • Upgrade the IBM blade switch module to get LACP working more efficiently.
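For the WebDAV testing mentioned above, a quick functional check against a DPM head node can look like the following sketch; the endpoint URL and credential file names are placeholders, and in practice a VOMS proxy would normally be used instead of a raw certificate/key pair.

```python
import requests

# Placeholder DPM WebDAV endpoint and credential files; adjust for the real site.
URL = "https://dpm-head.example.org/dpm/example.org/home/dteam/"
CERT = ("usercert.pem", "userkey.pem")   # key must be unencrypted for requests

# PROPFIND with "Depth: 1" lists the directory over WebDAV;
# an HTTP 207 Multi-Status response indicates the service is working.
resp = requests.request("PROPFIND", URL, cert=CERT,
                        headers={"Depth": "1"},
                        verify="/etc/grid-security/certificates")
print(resp.status_code)
print(resp.text[:500])
```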
5 Smart Center
• Power Efficiency: increase power efficiency by eliminating the use of UPS. A UPS reduces power efficiency by 30 per cent, of which 10 per cent is lost as heat that has to be carried away.
• Thermal Efficiency: apply space technology to heat conduction in the data center to increase thermal efficiency.
• Intelligent Monitoring & Control: analyzing long-term data allows us to build models that can assist us in operating the center intelligently.
• Cooling Power : CPU Power = 1 : 3 (PUE = 1.3), worked out below.
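The quoted PUE can be reproduced from the cooling : CPU power ratio if cooling is treated as the only facility overhead (a real PUE figure would also count UPS losses, lighting, etc.):

```python
def approx_pue(cooling, it):
    """PUE ~ (IT + cooling) / IT when cooling is the dominant facility overhead."""
    return (cooling + it) / it

print(approx_pue(1, 1.4))  # summer ratio 1 : 1.4  -> ~1.71
print(approx_pue(1, 2))    # winter ratio 1 : 2    -> 1.5
print(approx_pue(1, 3))    # Smart Center 1 : 3    -> ~1.33, quoted as 1.3
```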
6 e-Science Networking in Asia Pacific Region
[Network topology diagram: ASGC connected over TWAREN/TANET to ALICE, ATLAS, CMS and EUAsiaGrid partner sites across the Asia Pacific region (Japan, Korea, China, India, Taiwan, Australia and Southeast Asia), via research networks including APAN-JP, SINET, WIDE, CERNET, CSTNET, KREONET, AARNET and I2/GN2, with link capacities ranging from 622 Mb/s to 10 Gb/s.]
7 E-Science Application Support in Asia
• Not only porting computing models to EUAsia, but also establishing research-oriented production services and long-term scientific collaboration among partners
• Valuable data challenges achieved or launched
• EUAsia VO
  • Use the catch-all VO as the way to engage newcomers
  • Deployment and certification of 16 sites used by 250 people
• Application repository
  • Based on EELA-2 and INFN experience
  • Online database gathering information about application availability: affiliation to a specific domain, middleware information, abstract and material references, status overview, key research contacts (a hypothetical schema is sketched below)
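As an illustration only, a minimal schema covering the fields listed above could look like the following; the actual EUAsia repository schema is not shown on the slide, so table and column names here are assumptions.

```python
import sqlite3

conn = sqlite3.connect("app_repository.db")
conn.execute("""
CREATE TABLE IF NOT EXISTS applications (
    name        TEXT PRIMARY KEY,
    domain      TEXT,   -- affiliation to a specific scientific domain
    middleware  TEXT,   -- middleware information
    abstract    TEXT,   -- abstract and material reference
    status      TEXT,   -- status overview
    contacts    TEXT    -- key research contacts
)""")
conn.commit()
```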
8 Sample e-Science Applications in Taiwan
• Drug Discovery by AutoDocking
9 ASGC Cloud - Objectives
• Enhancing DCI for e-Science: let scientists focus on science
• Service-Oriented Architecture
  • Infrastructure, Platform and Service
  • Service re-use and re-combination according to the scientific workflow
  • Flexible and fast resource provisioning
  • Reduction of operation cost and energy consumption
  • Capability for Big Data
• Facilitating collaboration on e-Science
  • Life Science, Earth Science, Environmental Changes, Social Sciences and HEP, etc.
• Technology R&D: Grid+Cloud, Cloud Federation, versatile & persistent storage, etc.
10 Strategy and Plan
• Approach
  • VIM: OpenNebula + vNode + OpenStack
  • VMIC: working with CERN
  • CERNVM: Virtual Appliance (with Contextualisation)
  • CVMFS for ATLAS deployed and operational (a minimal client configuration is sketched after this list)
  • Extending to Blast, R and more e-Science applications
  • EMI Cloud and Virtualization Task Force
• Develop a repository of VMs and VAs
  • Interoperability
  • Policy repository and information system (along with auditing)
  • Monitoring
• Use Cases
  • Cloud trust: different requirements for data and computing
  • Data provenance, access control, federation
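For reference, a minimal CVMFS client configuration for the ATLAS repository looks roughly like the sketch below; the squid hostname and cache limit are placeholders, not ASGC's actual values.

```
# /etc/cvmfs/default.local (client side)
CVMFS_REPOSITORIES=atlas.cern.ch
CVMFS_HTTP_PROXY="http://squid.example.org:3128"   # placeholder site squid
CVMFS_QUOTA_LIMIT=20000                            # local cache limit in MB
```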