Evolving ATLAS Computing Model and Requirements Michael Ernst, BNL With slides from Borut Kersevan and Karsten Koeneke U.S. ATLAS Distributed Facilities Meeting UCSC November 13, 2012
Contributions by Country (Production and Analysis) – includes beyond-pledge resources
Contribution by Job Type (Production vs. Analysis) [chart: Tier-1 vs. Tier-2 shares] US: 22% of available CPU used for Analysis; 77% of Analysis done at Tier-2s
Contribution to Simulation (Aug-Oct) [chart: average number of fully utilized cores per site; leading values 3843, 1867, 1762, 1067, 896]
Contribution to Pile-up (Aug-Oct) [chart: average number of fully utilized cores per site; leading values 1818, 1128, 526, 374]
Contribution to Analysis (Aug-Oct) [chart: average number of fully utilized cores per site; leading values 1024, 720, 590, 395, 342]
Contribution to Reco (Aug-Oct) [chart: average number of fully utilized cores per site; values garbled in extraction]
Balancing Resources across the Tier-1 and Tier-2s for Cost/Benefit Optimization (E. Lancon, ICB Chair, at the October ICB meeting)
Evolution and Prediction of Price/Performance of CPU Servers (B. Panzer, CERN) In the US we have observed prices going up slightly between 2011 and 2012. Moore's law has not translated into a better price/performance ratio – what does the future hold?
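Since resource planning hinges on this trend, a short worked example may help. Below is a minimal Python sketch of the usual flat-budget capacity projection; the 25%/year price/performance gain, the $/HS06 starting price, and the budget figure are illustrative assumptions, not numbers from the slides.

```python
# Hypothetical sketch: how an assumed annual price/performance gain
# drives capacity planning under a flat budget. All numbers below
# are illustrative, not taken from the presentation.

def projected_capacity(budget_usd, price_per_hs06, annual_gain, years):
    """Yield (year, HS06 purchasable) assuming the cost per HS06
    falls by `annual_gain` each year while the budget stays flat."""
    for year in range(years + 1):
        cost = price_per_hs06 / (1.0 + annual_gain) ** year
        yield year, budget_usd / cost

# Flat budget of $1M; assumed $20/HS06 in year 0; assumed 25%/year gain.
for year, hs06 in projected_capacity(1_000_000, 20.0, 0.25, 4):
    print(f"year {year}: ~{hs06:,.0f} HS06")

# Setting annual_gain to 0 reproduces the flat-price scenario the
# slide warns about: a constant budget no longer buys growing capacity.
```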
Multi-core vs. Many-core
- A typical modern compute server has 12-16 cores
- The number of cores in commodity machines grows arithmetically
- The number of cores in the enterprise space still grows geometrically
- The number of cores in our datacenters grows between the two; growth is expected to slow in the long run
- Many-core is not multi-core
- We are observing memory hierarchy issues: cache coherency, NUMA
- Memory bandwidth or I/O paths may be constraining
- Multiprocess is a convenient model, but it is neither sustainable nor scalable (see the sketch after this list)
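A minimal sketch of why one independent process per core stops scaling with core count: static application data (detector geometry, conditions) is duplicated in every worker, so resident memory grows linearly with cores, whereas sharing the static data (via threads or copy-on-write forking) keeps a single copy. The memory sizes below are hypothetical placeholders, not measured ATLAS numbers.

```python
# Hypothetical per-process static data (geometry, conditions) and
# per-core transient event data, in MB. Illustrative values only.
GEOMETRY_MB = 1500
PER_EVENT_MB = 50

def resident_memory_mb(cores, share_static):
    """Estimated resident memory for `cores` workers: the static data
    is either duplicated per process or held once and shared."""
    static = GEOMETRY_MB if share_static else GEOMETRY_MB * cores
    return static + PER_EVENT_MB * cores

# Memory grows linearly with cores for independent processes, but
# stays dominated by a single static copy when that data is shared.
for cores in (8, 16, 64):
    indep = resident_memory_mb(cores, share_static=False)
    shared = resident_memory_mb(cores, share_static=True)
    print(f"{cores:3d} cores: independent ~{indep:,} MB, "
          f"shared static ~{shared:,} MB")
```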
Evolution and Prediction of Price for Disk Space (B. Panzer, CERN) Disk prices are ~1.5x higher than the 2010 predictions
Medium-Term Hardware Trends
- Pricing follows market pressure, not technology
- I/O, disk and memory are not progressing at the same rate as compute power
- The bulk of improvements in x86 still comes from Moore's Law
- Enterprise and HPC-targeted developments, where cost-effective, trickle down to our datacenter environment
- Heterogeneous architectures: cross-platform, cross-socket, hybrid CPUs, accelerators, throughput vs. classic computing
Non-Intel Hardware
- GPUs: NVIDIA working hard, but process technology is lagging; P2P communication has improved; software is getting better; against MIC, Tesla might no longer be competitive
- ARM: slow penetration of the server space; the 64-bit instruction set is defined (today you can buy only 32-bit CPUs); software improvements make ARM look like a viable option
- AMD: lagging behind; recent experiments were not compelling
- FPGA: still too far off for mainstream accelerators; software issues
- Upcoming: low-power/micro servers – 192 cores, 1 GB/core, $35k
Computing Requirements vs. LHC Bunch Spacing (S. McMahon)
[diagram: ATLAS software domains – Analysis, Offline Core, Simulation]
Summary
- The Facilities have reliably delivered in all areas according to our obligations, and in many areas beyond
- The overall system – facility hardware and services, together with the ATLAS software – needs to evolve to improve efficiency and to cope with sharply growing requirements after LS1
- Resources were used more effectively with "life without ESDs", PD2P, and a reduced number of dataset replicas, but the potential for further significant gains is shrinking
- A combined effort, driven by analysis and software experts, is needed to get ATLAS Computing prepared for the challenges ahead
- Convinced the LHC machine will deliver … LS1 is around the corner, but my impression from the last SW&C week is that there is not much activity in the software area to address these issues