This presentation from Sandia National Laboratories discusses the end of CMOS scaling and its potential benefits for space computing. It explores the use of new technologies and fault-tolerant systems in spaceborne computing, and gives an overview of embedded low-power HPC and parallel computing in space.
The End of CMOS Scaling will be Good for Space Computing
Fault Tolerant Spaceborne Computing Employing New Technologies
May 29, 2008
Sandia National Laboratories
Erik DeBenedictis (Sandia)
Sandia is a multiprogram laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the United States Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
Overview
[Diagram: three overlapping domains – Space Computing (Rad-Hard), Embedded Low Power, and HPC Parallel – converging on a future low-power, parallel, heterogeneous platform; labels note Productivity Tools (?) and COTS/Desktop Development ($ Productivity Tools).]
Clock Rate Flatlined
• Clock rate flatlined a couple of years ago, as vendors put excess resources into multiple cores
• This is a historical fact and evident to everybody, so there is little reason to comment on the cause
• However, it has profound architectural consequences (later slide)
[Chart: clock rate (100 MHz–10 GHz, log scale) vs. year (1990–2010), showing clock rates leveling off.]
ITRS Process Integration Spreadsheet ("Big Spreadsheet")
• Columns are years
• Rows are 100+ transistor parameters
• Manual entry of process parameters by year
• Excel computes operating parameters
• Extra degrees of freedom go to making Moore's Law smooth – not the best computers
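To make the spreadsheet's structure concrete, here is a minimal sketch in C++ (my illustration: the struct fields and values are hypothetical stand-ins for the 100+ real parameter rows). Entered process parameters per year yield derived operating parameters via textbook formulas such as dynamic switching power P ≈ C·V²·f:

```cpp
#include <cstdio>

// Hypothetical per-year process parameters (illustrative values only;
// the real ITRS spreadsheet tracks 100+ rows of such parameters).
struct ProcessNode {
    int    year;
    double linewidth_nm;  // manually entered process parameter
    double vdd_V;         // supply voltage
    double gate_cap_fF;   // effective switched capacitance per gate
    double clock_GHz;     // clock frequency
};

// Derived operating parameter, computed like a spreadsheet formula:
// dynamic switching power per gate, P ~ C * Vdd^2 * f.
double dynamic_power_uW(const ProcessNode& n) {
    return n.gate_cap_fF * 1e-15 * n.vdd_V * n.vdd_V
         * n.clock_GHz * 1e9 * 1e6;  // result in microwatts
}

int main() {
    ProcessNode nodes[] = {            // one "column" per year
        {2008, 45.0, 1.0, 1.0, 3.0},   // illustrative, not ITRS data
        {2010, 32.0, 0.9, 0.7, 3.5},
    };
    for (const auto& n : nodes)
        std::printf("%d: %.2f uW/gate\n", n.year, dynamic_power_uW(n));
}
```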
kT Limit Moderates Optimism for Perpetual Exponential Growth
[Chart: energy (log scale) vs. year for technology created in a government fab; the Moore's Law trend is plotted against the kT and 100 kT limits, with 2008 marked.]
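For reference, the thermal floor the chart alludes to is the Landauer limit on the energy to erase one bit, E = kT ln 2. Evaluated at room temperature (T = 300 K is my assumption; the slide does not state one):

```latex
E_{\min} = kT \ln 2
         = (1.38\times 10^{-23}\,\mathrm{J/K})(300\,\mathrm{K})(0.693)
         \approx 2.9\times 10^{-21}\,\mathrm{J},
\qquad
100\,kT \approx 4.1\times 10^{-19}\,\mathrm{J}.
```

Reliable switching requires signal energies well above this floor, which is why the chart draws a 100 kT line alongside kT itself.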
Industry's Plans
International Technology Roadmap for Semiconductors, 2008 ITRS Update ORTC [Konigswinter, Germany, ITRS ITWG Plenary], A. Allan, Rev. 2 [notes on IRC/CTSG More Moore, More than Moore, Beyond CMOS, 04/04/08]
The Architecture Game
• This is my diagram from a paper to illustrate CMOS architecture in light of CMOS scaling limits
• [Discuss]
[Chart: log(throughput) vs. year (1980–2020), tracking a commercial speed target against CPU power-efficiency levels of 100% (can't do better), 50%, 25%, 12%, 6%, and 3%. Next moves: switch to vector architecture, switch to SIMD architecture, add coprocessor, scale linewidth, increase parallelism, increase cache, more superscalar, raise Vdd and clock, finish.]
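A back-of-envelope version of the tradeoff driving these moves (my sketch, not taken from the diagram): with dynamic power P ∝ C V² f and supply voltage scaled roughly in proportion to frequency (an assumption that holds only over a limited range), power per core grows like f³, so at a fixed chip power budget:

```latex
P_{\text{core}} \propto C V^2 f \propto f^3
\;\Longrightarrow\;
n_{\text{cores}} \propto \frac{P_{\text{budget}}}{f^3},
\qquad
\text{throughput} \propto n_{\text{cores}}\, f \propto \frac{P_{\text{budget}}}{f^2}.
```

Under these assumptions, halving the clock quadruples achievable throughput at fixed power, provided the workload parallelizes – which is why the moves above trade clock rate and superscalar depth for parallelism and special-purpose units.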
Conclusions
• Mainstream and embedded technology will become more similar: power, parallelism
• Architectures will become more special purpose
• General systems may comprise multiple special-purpose sections
[Chart: "Special Architectures Go Mainstream" – performance vs. year (2008–2010), comparing (A) a better idea but with a small budget, a traditional µP with a big budget, and (B) a µP with a big budget but a clock-rate and power handicap.]
EXOCHI: Architecture and Programming Environment for a Heterogeneous Multi-core Multithreaded System
Perry H. Wang¹, Jamison D. Collins¹, Gautham N. Chinya¹, Hong Jiang², Xinmin Tian³, Milind Girkar³, Nick Y. Yang², Guei-Yuan Lueh², and Hong Wang¹
¹ Microarchitecture Research Lab, Microprocessor Technology Labs, Intel Corporation
² Graphics Architecture, Chipset Group, Intel Corporation
³ Intel Compiler Lab, Software Solutions Group, Intel Corporation
(The following 5 viewgraphs were sent by Jamison Collins with permission to post.)
Motivation
• Future mainstream microprocessors will likely integrate heterogeneous cores
• How will we program them?
• Map computation to driver / abstraction API
• Unfamiliar development / debugging flow
• OS / driver overheads
• Accelerator in distinct memory space
[Diagram: MyApp running on the OS, dispatching work through a driver stub thread, driver API, and dispatch scheduler to "My Device Driver", which feeds "My Accelerator" alongside multiple IA CPU cores ("My IA CPU").]
CHI Programming Environment

  #pragma omp parallel target(targetISA) [clause[[,] clause]…]
  structured-block

Where clause can be any of the following:
  firstprivate(variable-list)
  private(variable-list)
  shared(variable-ptr-list)
  descriptor(descriptor-ptr-list)
  num_threads(integer-expression)
  master_nowait

Inline accelerator assembly:
  #pragma omp _asm { …… }

• Compiler
  • Modified front-end and OpenMP pragmas
  • Fork/join, producer/consumer parallelism
  • Generates fat binary
• CHI runtime
  • Multi-shredding: user-level threading
  • Extensible to multiple types of heterogeneous cores, e.g. Intel GMA X3000, or a data-streaming systolic array accelerator for communication
• Accelerator-specific assembler and domain-specific plug-ins
[Diagram: source flows through the Intel C++ Compiler and linker into a fat binary – .code (with a call to the runtime), .data, and a .special_section holding the accelerator-specific binary – executed against the CHI runtime library.]
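An illustrative use of the pragma syntax above (a sketch: it requires the EXOCHI-modified Intel C++ compiler, and the target name, thread count, and assembly body are placeholders, not taken from the paper):

```cpp
// Offload a region to an accelerator with the EXOCHI-style OpenMP
// extension described above. "GMA_X3000" is a placeholder targetISA name.
void scale(float* data, int n, float k) {
    #pragma omp parallel target(GMA_X3000) \
            shared(data) firstprivate(n, k) num_threads(8)
    {
        // Accelerator-specific region; the compiler places its binary
        // in the fat binary's .special_section, and the CHI runtime
        // dispatches it as user-level threads (shreds).
        _asm {
            // ... accelerator ISA instructions scaling data[0..n) by k ...
        }
    }
}
```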
Spaceborne Computing with Emerging Technologies
Motivation
• Greater quantities of data: perform more onboard computing, reduce communications requirements
Vision
• Multiple computing technologies, each used to best advantage
• Harness advances in semiconductors and nanotech
• Need hardware interoperability
• Need software tools to support heterogeneous hardware
Workshop
• Target date May 28–30, 2008
• At Sandia, in and out
• Immediate target: inventory resources and set plans for coordination and standards
[Diagram: a fault-tolerant, high-capability computational subsystem – CPU (1-core or multi-core), accelerator (GPU, SIMD, or ASIC), FPGA, memory (DRAM, nano), mass storage, and I/O – linked by interconnect options (bus/stream/message standards) and an inter-subsystem gateway to the spacecraft control subsystem (rad-hard processing, RAD-750, etc.); archival, maintainable source code spans the CPU part, GPU part, and Verilog/VHDL.]
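One classic building block for the fault-tolerant computational subsystem sketched above is triple modular redundancy (TMR), in which three copies of a result are voted bit by bit so that any single radiation-induced upset is outvoted. A minimal voter sketch in C++ (my illustration, not from the workshop material):

```cpp
#include <cstdint>

// Bitwise majority vote over three redundant copies of a word:
// each output bit is 1 iff at least two input bits are 1, so a
// single corrupted copy is masked.
uint32_t tmr_vote(uint32_t a, uint32_t b, uint32_t c) {
    return (a & b) | (a & c) | (b & c);
}

int main() {
    uint32_t r1 = 0xDEADBEEF;         // copy 1
    uint32_t r2 = 0xDEADBEEF ^ 0x40;  // copy 2, one bit flipped by an upset
    uint32_t r3 = 0xDEADBEEF;         // copy 3
    return tmr_vote(r1, r2, r3) == 0xDEADBEEF ? 0 : 1;  // voter recovers the value
}
```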