
The ExaNeSt Project: European Exascale System Interconnect & Storage
Dr. Giuliano Taffoni, INAF – Osservatorio Astronomico di Trieste


Presentation Transcript


  1. Dr. Giuliano Taffoni, INAF – Osservatorio Astronomico di Trieste. The ExaNeSt Project: European Exascale System Interconnect & Storage. MHPC Workshop on High Performance Computing

  2. ExaNeSt project • EU-funded project, H2020-FETHPC-1-2014 • Overall budget of about 7 MEuro • 12 partners in Europe (6 industrial partners) • Coordinator: Manolis Katevenis, FORTH (Foundation for Research & Technology – Hellas)

  3. ExaNeSt Objective “Overall strategy to develop a European low-power high-performance exascale infrastructure based on ARM-based microservers” • System architecture for data-centric Exascale-class HPC • Fast, distributed in-node non-volatile-memory storage • Low-latency unified interconnect (compute & storage traffic) • Extreme compute-power density • Advanced totally-liquid cooling technology (Iceotope) • Scalable packaging for 64-bit ARM-based microservers • Real scientific and data-center applications • Applications used to identify system requirements • Tuned versions will evaluate our solutions

  4. Ecosystem for Exascale • EuroServer: Green Computing Node for European microservers • UNIMEM address space model among ARM compute nodes • Storage and I/O shared among multiple compute nodes • ExaNoDe: European Exascale processor-memory Node Design • ARM-based chiplets on a silicon interposer • ECOSCALE: Energy-efficient Heterogeneous Computing at exaSCALE • Heterogeneous infrastructure (ARM + FPGAs), programming, runtimes • Kaleao: Energy-efficient μServers for Scalable Cloud Datacenters • Startup company, interested in commercializing many of the results
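One way to picture the UNIMEM idea of a single address space spanning ARM compute nodes is a global address built from a node identifier plus a local offset, so that a remote access is routed to the owning node instead of copying the data. The sketch below is purely illustrative: the field widths and helper names are assumptions, not the actual EuroServer/UNIMEM encoding.

    # Illustrative only: a toy "global address" pairing a node ID with a
    # local offset, in the spirit of a UNIMEM-style global address space.
    # Field widths are assumptions, not the real hardware encoding.
    NODE_BITS = 8            # assumed: up to 256 nodes
    OFFSET_BITS = 40         # assumed: up to 1 TiB of local memory per node

    def make_global_addr(node_id: int, local_offset: int) -> int:
        """Pack (node, offset) into one integer address."""
        assert node_id < (1 << NODE_BITS)
        assert local_offset < (1 << OFFSET_BITS)
        return (node_id << OFFSET_BITS) | local_offset

    def split_global_addr(gaddr: int) -> tuple:
        """Recover the owning node and the local offset."""
        return gaddr >> OFFSET_BITS, gaddr & ((1 << OFFSET_BITS) - 1)

    # A remote load would be routed to the owning node rather than copied:
    gaddr = make_global_addr(node_id=3, local_offset=0x1F400)
    print(split_global_addr(gaddr))   # (3, 128000)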

  5. The Consortium

  6. Astrophysics in the Exascale Era • New ground-based and space instrumentation will open new challenges in Astrophysics • X-ray astronomy, radio astronomy, cosmology, astroparticle physics, and more • Extrasolar planetary systems, the Milky Way, dark energy, and more • INAF is playing an important role in all of these experiments.

  7. A new vision for Computing • New experiments require High Throughput Computing (e.g. data reduction, Monte Carlo modelling, instrument simulation, and so on) and High Performance Computing (in-silico experiments). • New astronomical data centers that provide archives (storage) and computing capabilities (computation close to the data) • New algorithms and codes.
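For a flavour of the high-throughput side, the toy Monte Carlo below estimates π from independent random batches; because each batch is independent, batches map naturally onto separate jobs or nodes. This is only an illustration of the workload pattern, not code from any INAF pipeline.

    # Toy Monte Carlo illustration of a high-throughput workload:
    # independent batches of random samples, results merged at the end.
    import random

    def mc_batch(n_samples: int, seed: int) -> int:
        """Count samples falling inside the unit quarter-circle."""
        rng = random.Random(seed)
        hits = 0
        for _ in range(n_samples):
            x, y = rng.random(), rng.random()
            if x * x + y * y <= 1.0:
                hits += 1
        return hits

    # Each batch could be submitted as a separate high-throughput job.
    batches = [mc_batch(100_000, seed) for seed in range(10)]
    pi_estimate = 4.0 * sum(batches) / (10 * 100_000)
    print(pi_estimate)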

  8. The role of exascale computing • HPC is necessary in the preparatory phase and in the operation phase of each experiment. • This applies to various research fields: cosmology, planetary formation, star formation, etc. • HPC is mandatory to compare observations with theoretical models: the HPC infrastructure is the theoretical laboratory in which to test the physical processes. • New experiments require new exascale-capable laboratories!

  9. The technology challenge • Design and develop an exascale supercomputer that is highly energy-efficient (< 60 MW), scalable, fault-tolerant, and has a low-latency interconnect • ARM processors and accelerators

  10. Towards the exascale: HW infrastructure • Realistic rack-level shared memory based on UNIMEM • ~15,000 MFLOPS/Watt
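The quoted efficiency is consistent, to first order, with the power envelope on the previous slide: 15,000 MFLOPS/Watt is 15 GFLOPS/W, so an exaflop machine at that efficiency would draw roughly 67 MW, of the same order as the < 60 MW goal. A back-of-the-envelope check (numbers taken from the slides above):

    # Back-of-the-envelope check of the stated efficiency against the
    # exascale power target.
    exaflop = 1e18                    # FLOP/s
    efficiency = 15_000 * 1e6         # 15,000 MFLOPS/W = 1.5e10 FLOP/s per watt

    power_watts = exaflop / efficiency
    print(power_watts / 1e6, "MW")    # ~66.7 MW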

  11. Interconnect and Storage • Unified approach: merge inter-processor traffic with major storage traffic (photonic technologies) • Work will cover the interconnect within a rack and inter-rack connections • Packaging and network topology analyzed together • Support Quality of Service • Isolate flows with different requirements (low latency, high throughput) • Support for queueing, flow control, congestion control, scheduling, and monitoring • Keep data close to the computing (UNIMEM approach: moving data is expensive)
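The point about isolating flows with different requirements can be illustrated with a toy priority queue: latency-sensitive inter-processor messages are served before bulk storage transfers sharing the same link. This is a conceptual sketch only, not the mechanism the ExaNeSt interconnect implements in hardware.

    # Conceptual sketch: two traffic classes sharing one link, with strict
    # priority for latency-sensitive messages over bulk storage transfers.
    import heapq

    LATENCY_CLASS, BULK_CLASS = 0, 1   # lower value = higher priority

    link_queue = []                    # entries: (class, arrival order, packet)
    order = 0

    def enqueue(packet: str, traffic_class: int) -> None:
        global order
        heapq.heappush(link_queue, (traffic_class, order, packet))
        order += 1

    enqueue("storage write, 4 MiB", BULK_CLASS)
    enqueue("inter-processor message, 8 B", LATENCY_CLASS)
    enqueue("storage read, 4 MiB", BULK_CLASS)

    while link_queue:
        _, _, packet = heapq.heappop(link_queue)
        print("transmit:", packet)     # the small message is sent first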

  12. Total Liquid Cooling • Iceotope current solutions: immersed liquid-cooled systems based on convection flow • Enhancements planned during ExaNeSt: hybrid phase-change (boiling liquid) and convection-flow cooling • Electronics immersed in 3M Novec liquid • Rack-level “water” circulation

  13. Building a prototype • 3 chassis within an Iceotope rack, with 27 blades • 81 mezzanine boards, each with 4 connectors • 324 daughter cards, each containing a EuroServer node • About 4500 ARM 64-bit cores • 20% of the budget invested in the prototype • Available at the end of 2018
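The prototype figures multiply out as follows (numbers from the slide; the cores-per-card value is only the implied average, not a specification):

    # Sanity check of the prototype numbers given on the slide.
    blades = 27                        # 3 chassis, 9 blades per chassis
    mezzanines = blades * 3            # 81 mezzanine boards
    daughter_cards = mezzanines * 4    # 324 daughter cards (one EuroServer node each)
    total_cores = 4500                 # "about 4500 ARM 64-bit cores"

    print(mezzanines, daughter_cards, total_cores / daughter_cards)
    # 81 324 ~13.9 cores per daughter card on average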

  14. Role of applications • Co-design approach • Applications define the requirements for the system • Applications evaluate the solutions • They test the I/O and interconnect • They allow the definition of QoS, flow control, … • A new generation of exascale-ready applications
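A typical way applications test the interconnect is a ping-pong microbenchmark that measures round-trip message latency between two nodes. The sketch below uses mpi4py as an assumed stand-in for whatever benchmark harness the project actually uses.

    # Minimal ping-pong latency microbenchmark between ranks 0 and 1.
    # mpi4py is an assumed choice; run e.g. with: mpirun -n 2 python pingpong.py
    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    buf = np.zeros(8, dtype=np.uint8)   # small 8-byte message
    reps = 1000

    comm.Barrier()
    t0 = MPI.Wtime()
    for _ in range(reps):
        if rank == 0:
            comm.Send(buf, dest=1, tag=0)
            comm.Recv(buf, source=1, tag=0)
        elif rank == 1:
            comm.Recv(buf, source=0, tag=0)
            comm.Send(buf, dest=0, tag=0)
    t1 = MPI.Wtime()

    if rank == 0:
        print("one-way latency ~", (t1 - t0) / (2 * reps) * 1e6, "us")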

  15. Applications from different domains • Cosmological N-body and hydrodynamical code(s) suited to perform large-scale, high-resolution numerical simulations of cosmic structure formation and evolution (INAF). • Brain simulation: generate spiking behaviours and synaptic connectivity that do not change when the number of hardware processing nodes is varied (INFN) • Weather and climate simulation (ExactLab) • Material science simulations (ExactLab and EngineSoft) • Workloads for database management on the platform and initial assessment against competing approaches in the market (MonetDB) • Virtualization systems (Virtual Open Systems)
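For a flavour of the cosmological N-body workload, the sketch below performs one direct-summation gravity step for a handful of particles. It is a toy illustration only; the production codes referred to on the slide rely on tree/particle-mesh algorithms and MPI parallelism.

    # Toy direct-summation N-body step, O(N^2); illustrative only.
    import numpy as np

    def nbody_step(pos, vel, mass, dt=0.01, G=1.0, soft=1e-3):
        """Advance positions and velocities by one simple kick-drift step."""
        n = len(mass)
        acc = np.zeros_like(pos)
        for i in range(n):
            d = pos - pos[i]                     # vectors to all other particles
            r2 = (d * d).sum(axis=1) + soft**2   # softened squared distances
            r2[i] = np.inf                       # skip self-interaction
            acc[i] = G * (mass[:, None] * d / r2[:, None] ** 1.5).sum(axis=0)
        vel += acc * dt
        pos += vel * dt
        return pos, vel

    rng = np.random.default_rng(0)
    pos = rng.random((16, 3))
    vel = np.zeros((16, 3))
    mass = np.ones(16)
    pos, vel = nbody_step(pos, vel, mass)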

  16. Conclusion • Big challenge: producing the first exascale computing resource in Europe. • Big problems to solve: interconnect, storage, data movement, cooling, etc. • Real scientific applications will drive development, verify the infrastructure, and use the prototype. • A new generation of codes will be designed, ready for exascale.
