
High-Level Programming of High-Performance Reconfigurable Computers: MAPLD BOF-H2 Panel



Presentation Transcript


  1. High-Level Programming of High-Performance Reconfigurable Computers: MAPLD BOF-H2 Panel Tarek El-Ghazawi The George Washington University tarek@gwu.edu http://www.seas.gwu.edu/~tarek

  2. The Question and My Answer • "Can we develop a software-level programming approach (e.g., a C language compiler) for FPGAs that spans the needs of the high-performance reconfigurable computing community, with a multitude of FPGA-based HPC systems, and also the needs of the electronic design automation community, with a multitude of FPGA board designs?" • My Answer: You Bet!

  3. HOW? • Abstraction • Design a rich programming model for high-performance reconfigurable machines • Extend a standard sequential language to conform to the programming model view • Compilers to address common architectural features • Run-time systems to tune to specific machine features

  4. Programming Models for Parallel Computers • Models differ in their process/thread structure and address-space view • Data Parallel, e.g., HPF, C* • Message Passing, e.g., MPI (+ C) • Shared Memory, e.g., OpenMP (+ C) • DSM/PGAS, e.g., UPC

  5. How (cont.) • Avoid the hype! Need to know that it is hard and will take enormous time and resources, but it is REALLY WORTH IT! • Needed efforts: • Need abstract programming model(s) to express • fine-grain data parallelism and coarse-grain functional parallelism • multiple levels of locality • Need automatic H.W./S.W. partitioning and scheduling algorithms • Need compilers to automatically address general hardware optimizations and use reconfiguration to achieve them • Need run-time systems to support further machine-specific tunings • Need H.W./S.W. debuggers and performance analysis tools • And a lot more!!! But the good news is: • There are solid intermediate steps • There is work in related areas that can be leveraged
