The Role of Advanced Technology Systems in the ASC Platform Strategy
Douglas Doerfler
Distinguished Member of Technical Staff
Sandia National Laboratories, Scalable Computer Architectures Department
SAND 2014-3174C Unlimited Release
Topics
• ASC ATS Computing Strategy
• Partnerships & the Joint Procurement Process
• Trinity Project Status
• Advanced Architecture Test Bed Project
ASC computing strategy
• Approach: Two classes of systems
  • Advanced Technology: First of a kind systems that identify and foster technical capabilities and features that are beneficial to ASC applications
  • Commodity Technology: Robust, cost-effective systems to meet the day-to-day simulation workload needs of the program
• Investment Principles
  • Maintain continuity of production
  • Ensure that the needs of the current and future stockpile are met
  • Balance investments in system cost-performance types with computational requirements
  • Partner with industry to introduce new high-end technology constrained by life-cycle costs
  • Acquire right-sized platforms to meet the mission needs
Advanced Technology Systems
• Leadership-class platforms
• Pursue promising new technology paths with industry partners
• These systems meet unique mission needs and help prepare the program for future system designs
• Includes Non-Recurring Engineering (NRE) funding to enable delivery of leading-edge platforms
• Trinity (ATS-1) will be deployed by ACES (the New Mexico Alliance for Computing at Extreme Scale, i.e. Los Alamos & Sandia)
• ATS-2 will be deployed by LLNL
ASC Platform Timeline (FY12 through FY21)
[Timeline chart: Advanced Technology Systems (ATS): Cielo (LANL/SNL), Sequoia (LLNL), ATS 1 – Trinity (LANL/SNL), ATS 2 – (LLNL), ATS 3 – (LANL/SNL). Commodity Technology Systems (CTS): Tri-lab Linux Capacity Cluster II (TLCC II), CTS 1, CTS 2. Each system is shown through development & deployment, system delivery, use, and retirement phases by fiscal year.]
The ACES partnership since 2008
• SNL/LANL MOU signed March 2008 to integrate and leverage capabilities
• Commitment to the shared development and use of HPC to meet NW mission needs
• Major efforts are executed by project teams chartered by and accountable to the ACES co-directors
• Cielo delivered ca. 2011
• Trinity delivery in 2015 is now our dominant focus
• Both laboratories are fully committed to delivering a successful platform, as it is essential to both
NNSA/ASC and SC/ASCR are partnering on RFPs
• Trinity/NERSC-8: ACES & LBL
• CORAL: LLNL, ORNL and ANL
• Strengthen the alliance between NNSA and SC on the road to exascale
  • Show vendors a more united path on the road to exascale
  • Shared technical expertise between labs
• Should gain cost benefit
  • Saves vendors money/time responding to a single RFP and a single set of technical requirements
  • Outside perspective reduces risk: avoids tunnel vision by one lab
  • More leverage with vendors by sharing information between labs
• Benefits in production: shared bug reports, quarterly meetings
• Less likely to be a one-off system with multiple sites participating
Why are NNSA/ASC and SC/ASCR collaborating?
• The April 2011 MOU between SC & NNSA for coordinating Exascale activities was the impetus for ASC and ASCR to work together on the proposed Exascale Computing Initiative (ECI).
• While ECI is yet to be realized, ASC & ASCR program directors made strategic decisions to co-fund and collaborate on:
  • Technology R&D Investments: FastForward and DesignForward
  • System Acquisitions: Trinity/NERSC-8 and CORAL
• Great leveraging opportunities to share precious resources (budget & technical expertise) to achieve each program's mission goals, while working out some cultural/bureaucratic differences.
• The Trinity/NERSC-8 collaboration will proceed with a joint RFP and selection, separate system awards, attendance at the other system's project reviews, and collective problem solving.
Trinity & NERSC-8 are two separate projects resulting in two distinct contracts and systems
[Process diagram: separate Trinity and NERSC-8 mission drivers and market surveys feed a joint effort to create requirements, release the RFP, and select a vendor; negotiations then proceed separately, leading to distinct Trinity and NERSC-8 contracts and systems, with joint quarterly reviews, shared bug reports, collaboration on application transition, and advanced options continuing afterward.]
ATS Components of Trinity
• Application transition
  • MPI + (threads, vectors, data locality & allocation)
  • Center of Excellence with key technology providers
• Tightly integrating non-volatile memory, i.e. Burst Buffer (see the checkpoint sketch below)
  • Checkpoint/restart
  • In-situ data analysis
• Advanced power management
  • Better understand, and then control, the power usage characteristics at the platform and application level
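As a concrete illustration of the burst-buffer checkpoint idea, a minimal sketch follows. The /bb and /pfs paths, the checkpoint size, and the synchronous drain are placeholders chosen for illustration; a production burst buffer would drain to the parallel file system asynchronously through the vendor's data-movement software, and this is not drawn from Trinity's actual design.

/*
 * Hypothetical two-phase checkpoint sketch: write the defensive checkpoint
 * to fast node-local non-volatile memory (the "burst buffer"), then drain it
 * to the slower parallel file system so it survives node loss.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static int write_file(const char *path, const void *buf, size_t nbytes)
{
    FILE *f = fopen(path, "wb");
    if (!f) return -1;
    size_t written = fwrite(buf, 1, nbytes, f);
    fclose(f);
    return written == nbytes ? 0 : -1;
}

int main(void)
{
    const size_t nbytes = 64 * 1024 * 1024;   /* 64 MiB of state (placeholder) */
    double *state = malloc(nbytes);
    if (!state) return 1;
    memset(state, 0, nbytes);                 /* stand-in for application state */

    /* Phase 1: fast checkpoint to node-local NVM. The application blocks
     * only for the duration of this write (Daly's delta, later slides). */
    if (write_file("/bb/ckpt.0001", state, nbytes) != 0)
        fprintf(stderr, "burst-buffer checkpoint failed\n");

    /* Phase 2: drain to the parallel file system. Shown synchronously here
     * for brevity; in practice this happens in the background. */
    if (write_file("/pfs/ckpt.0001", state, nbytes) != 0)
        fprintf(stderr, "drain to parallel file system failed\n");

    free(state);
    return 0;
}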
Focus Area: Active Power Management
• P-states (Frequency/Voltage States)
  • P1: 2.1 GHz, 1.25 V
  • P2: 1.7 GHz, 1.1625 V
  • P3: 1.4 GHz, 1.125 V
  • P4: 1.1 GHz, 1.1 V
• An AMG demonstration on 6,144 nodes of ORNL's Jaguar shows that managing P-states allows a 32% decrease in energy used while only increasing time to solution by 7.5%
• Need to understand power at the platform & application level
  • Policy driven: a weighted combination of performance & energy (see the sketch below)
  • Energy caps based on time of day, physical capacity, etc.
• Need to understand & control power
  • Cabinet & component level current & voltage (I & V) measurements
  • Scalable collection infrastructure
  • Tunable collection fidelity: cabinets to components
  • Administrative & user accessible interface for feedback and tuning
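A minimal sketch of a policy-driven, weighted performance/energy tradeoff follows, using the P-state table above. The first-order models (runtime ~ 1/f for the frequency-sensitive fraction of the code, dynamic power ~ f·V²), the workload sensitivity, and the policy weight are illustrative assumptions, not measured Trinity or Jaguar behavior.

/*
 * Sketch of policy-driven P-state selection under a simple first-order model.
 * The P-state table matches the slide; cpu_fraction and alpha are made up.
 */
#include <stdio.h>

struct pstate { const char *name; double ghz; double volts; };

int main(void)
{
    const struct pstate ps[] = {
        {"P1", 2.1, 1.25}, {"P2", 1.7, 1.1625}, {"P3", 1.4, 1.125}, {"P4", 1.1, 1.1}
    };
    const int n = sizeof ps / sizeof ps[0];
    const double cpu_fraction = 0.6;  /* fraction of runtime sensitive to frequency (assumed) */
    const double alpha = 0.5;         /* policy weight: 1.0 = performance only, 0.0 = energy only */

    int best = 0;
    double best_cost = 1e30;
    for (int i = 0; i < n; i++) {
        /* Runtime relative to the highest P-state. */
        double time = (1.0 - cpu_fraction) + cpu_fraction * (ps[0].ghz / ps[i].ghz);
        /* Dynamic power relative to the highest P-state; energy = power * time. */
        double power = (ps[i].ghz * ps[i].volts * ps[i].volts)
                     / (ps[0].ghz * ps[0].volts * ps[0].volts);
        double energy = power * time;
        double cost = alpha * time + (1.0 - alpha) * energy;
        printf("%s: rel. time %.3f  rel. energy %.3f  cost %.3f\n",
               ps[i].name, time, energy, cost);
        if (cost < best_cost) { best_cost = cost; best = i; }
    }
    printf("policy selects %s\n", ps[best].name);
    return 0;
}

With these assumed numbers the policy lands on an intermediate P-state, echoing the AMG result above: a large energy reduction in exchange for a modest increase in time to solution.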
Checkpoint/Restart and Application Efficiency Nomenclature
John T. Daly, "A higher order estimate of the optimum checkpoint interval for restart dumps," Future Generation Computer Systems 22 (2006), pp. 303-312
• Tau (τ) = compute time between checkpoints
• Delta (δ) = time to write a checkpoint
• R = restart time
• Ts = total compute time to solution
• Tw = wallclock time to solution
• Assumptions: the number of checkpoint intervals N is large, Ts >> δ
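For reference, a sketch of how these quantities fit together, following the commonly quoted forms of Daly's model (the higher-order coefficients are in the paper and are not reproduced here). With exponentially distributed interrupts of mean M (the JMTTI):

    Tw(τ) ≈ M · e^(R/M) · (e^((τ+δ)/M) − 1) · Ts/τ
    efficiency = Ts / Tw
    τ_opt ≈ sqrt(2·δ·M) − δ   (first-order estimate, valid for δ << M)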
Daly Model Example
• As the JMTTI (job mean time to interrupt) decreases:
  • The optimal checkpoint interval decreases, so efficiency decreases
  • Choosing an optimal checkpoint interval becomes more important
• I.e., resilience techniques that improve efficiency are crucial to ensuring efficient use of a job's time allocation
Optimal Efficiency
[Graph: optimal efficiency as a function of δ/JMTTI (for the derivation see Josip Loncaric @ LANL), with the point (δ/JMTTI = 1/200, efficiency ≈ 90%) highlighted; lower δ/JMTTI is better.]
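The sketch below evaluates Daly's complete model numerically and scans for the best checkpoint interval at several JMTTI values. δ = 15 minutes, R = 0, and the JMTTI values are illustration numbers, not Trinity requirements; note that the first case corresponds to δ/JMTTI = 1/200 and comes out near 90% efficiency, consistent with the highlighted point on the graph.

/*
 * Scan tau for the best efficiency Ts/Tw under Daly's complete model
 *   Tw(tau) = M * exp(R/M) * (exp((tau+delta)/M) - 1) * Ts/tau
 * at several JMTTI values, showing that as JMTTI shrinks both the optimal
 * checkpoint interval and the achievable efficiency shrink.
 */
#include <math.h>
#include <stdio.h>

static double wallclock(double tau, double delta, double M, double R, double Ts)
{
    return M * exp(R / M) * (exp((tau + delta) / M) - 1.0) * Ts / tau;
}

int main(void)
{
    const double delta = 0.25;    /* checkpoint time, hours (assumed) */
    const double R     = 0.0;     /* restart time ignored for simplicity */
    const double Ts    = 1000.0;  /* total compute hours (arbitrary) */
    const double jmtti[] = {50.0, 25.0, 12.0, 6.0};   /* hours */

    for (int i = 0; i < 4; i++) {
        double M = jmtti[i];
        double best_tau = delta, best_eff = 0.0;
        for (double tau = 0.05; tau <= M; tau += 0.05) {
            double eff = Ts / wallclock(tau, delta, M, R, Ts);
            if (eff > best_eff) { best_eff = eff; best_tau = tau; }
        }
        printf("JMTTI %5.1f h: tau_opt ~ %4.2f h, efficiency ~ %.1f%%  "
               "(first-order sqrt(2*delta*M) = %.2f h)\n",
               M, best_tau, 100.0 * best_eff, sqrt(2.0 * delta * M));
    }
    return 0;
}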
Trinity Status
• Formal Design Review completed April 2013
• Independent Project Review (Lehman) completed May 2013
• Trinity/NERSC-8 RFP released August 2013
• Technical evaluation of the proposals completed September 2013
• Initial negotiations for both systems completed November 2013
• NNSA Independent Cost Review completed January 2014
• NERSC-8 is in the final contract approval stages; should be announced "soon"
• Trinity went back to the proposing vendors for a Best and Final Offer (BAFO)
• Target delivery date of September 2015 is unchanged
Test Bed Strategy complements ATS
• Motivation
  • Significant PRODUCTION code rewrite/modification will be required for future platforms
  • Ensure that when codes make "the change" it is the right move for longevity, porting effort, performance, etc.
  • Eliminate or reduce missteps
• Philosophy
  • Have an APPLICATION focus; reduce the impact on codes in a rapidly changing technology environment
  • Both hardware and software are intended (and have proven) to be highly dynamic
  • INTENTIONALLY closer to prototypes than production
  • It is more important to explore a diverse set of architectural alternatives @ rack scale than to push large scale
  • Primary target is ASC researchers, in addition to ASCR co-design centers
Advanced Architecture Test Beds
[Timeline chart, Sept. 2011 through Sept. 2014 plus planned systems. Test beds include Arthur, Compton / Compton V2, Teller / Teller V2, Curie (XK6) / Curie (XK7) V2, Shannon / V2 / V3, Watson, ESP, HMC, Hammer, Volta (XC), and the planned JENGA. Architectures explored include Westmere + Knights Ferry, SandyBridge + Knights Corner (B and C), Haswell + PowerInsight V2, Llano + PowerInsight, Kaveri + PowerInsight V2, Trinity + PowerInsight, Interlagos + Fermi 2090x, Interlagos + Kepler K20X, SandyBridge + Kepler K20/K40, IvyBridge, Power 7 + FPGA, Power 8, and 64-bit parts.]
Exploration Matrix
[Matrix of node-level exploration dimensions: threading, vectorization, and future languages.]
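As a flavor of the node-level restructuring those dimensions imply, here is a minimal, hypothetical OpenMP kernel combining threading, SIMD vectorization, and first-touch data placement; it is not taken from any ASC application.

/*
 * Node-level threading + vectorization sketch: an OpenMP-threaded,
 * SIMD-vectorized, stream-style triad kernel.
 * Compile with, e.g.:  cc -O2 -fopenmp triad.c -o triad
 */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const size_t n = 1 << 24;
    double *a = malloc(n * sizeof *a);
    double *b = malloc(n * sizeof *b);
    double *c = malloc(n * sizeof *c);
    if (!a || !b || !c) return 1;

    /* Thread the initialization too, so pages land near the threads that
     * will use them (first-touch data locality). */
    #pragma omp parallel for
    for (size_t i = 0; i < n; i++) { b[i] = 1.0; c[i] = 2.0; }

    /* Threads across cores, SIMD lanes within each core. */
    #pragma omp parallel for simd
    for (size_t i = 0; i < n; i++)
        a[i] = b[i] + 3.0 * c[i];

    printf("a[0] = %g\n", a[0]);   /* expect 7 */
    free(a); free(b); free(c);
    return 0;
}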
Example Analysis Using Test Beds
[Figure: example power-analysis results collected with PowerInsight instrumentation on the test beds.]
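A small sketch of the kind of analysis such instrumentation enables follows: integrating time-stamped power samples into energy. The trace values and layout are fabricated for illustration and do not reflect PowerInsight's actual data format or interfaces.

/*
 * Integrate time-stamped power samples (watts) into energy (joules) with the
 * trapezoid rule, then report average power over the trace.
 */
#include <stdio.h>

struct sample { double t_sec; double watts; };

static double energy_joules(const struct sample *s, int n)
{
    double e = 0.0;
    for (int i = 1; i < n; i++)
        e += 0.5 * (s[i].watts + s[i - 1].watts) * (s[i].t_sec - s[i - 1].t_sec);
    return e;
}

int main(void)
{
    /* Hypothetical CPU-socket power trace: idle, ramp to load, back to idle. */
    const struct sample trace[] = {
        {0.0, 40.0}, {1.0, 40.0}, {2.0, 95.0}, {10.0, 95.0}, {11.0, 42.0}, {12.0, 42.0}
    };
    const int n = sizeof trace / sizeof trace[0];
    double e = energy_joules(trace, n);
    printf("energy over %.1f s: %.1f J (avg %.1f W)\n",
           trace[n - 1].t_sec, e, e / trace[n - 1].t_sec);
    return 0;
}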
Thank You. Questions?
• Thank you to the following for content:
  • Trinity & NERSC-8 Project Teams
  • Thuc Hoang – ASC HQ
  • Manuel Vigil – LANL
  • Jim Laros – SNL
  • Josip Loncaric – LANL
  • Simon Hammond – SNL
  • and a cast of many others …
Trinity is Sized for High Fidelity Workloads
• Point design: a current LANL 3D problem runs on ¼ of Cielo today using about 80 TBytes
• Projected memory requirement for a higher fidelity problem is about 750 TBytes in the Trinity timeframe
• Trinity is sized to support 2 to 4 jobs of this class -> 2 to 4 PBytes -> ¼ to ½ of the system
Trinity Facility, Power & Cooling
• Trinity will be located in the Nicholas C. Metropolis Center (SCC) at Los Alamos National Laboratory
• Facility power is one of the primary constraints in the design of Trinity
  • 12 MW water cooling + 2-3 MW (maybe 4 MW) air cooling available
  • Inclusive of storage and any other externally attached equipment
• 300 lbs per square foot floor loading
• 10,000 to 12,000 square feet of floor space
• At least 80% of the platform will be water cooled
  • Direct (direct to chip or cold plate) is preferred
  • Indirect (e.g. radiator) method is acceptable
  • Tower water (directly from the cooling tower) at up to 32°C is preferred
  • Chilled water at 8.5°C is available but less desirable due to additional cost
  • Under-floor air at 12.5°C is available to supplement the water cooling method
• Concerns
  • Idle power efficiency
  • Rapid ramp-up / ramp-down swings of over 2 MW in load on the power grid