
CPRE 583 Reconfigurable Computing Lecture 6: Wed 10/21/2009 (Design Patterns)


Presentation Transcript


  1. CPRE 583 Reconfigurable Computing, Lecture 6: Wed 10/21/2009 (Design Patterns) Instructor: Dr. Phillip Jones (phjones@iastate.edu) Reconfigurable Computing Laboratory Iowa State University Ames, Iowa, USA http://class.ee.iastate.edu/cpre583/

  2. Overview • Class Projects • Common Design Patterns

  3. Project Grading Breakdown • 60% Final Project Demo • 30% Final Project Report • 30% of your project report grade will come from your 5 project updates (due Fridays at midnight) • 10% Final Project Presentation

  4. Initial Project Proposal Slides (5-10 slides) • Project team list: Name, Responsibility (who is project leader) • Project idea • Motivation (why is this interesting, useful) • What will be the end result • High-level picture of final product • High-level Plan • Break project into milestones • Provide initial schedule: I would initially schedule aggressively to have the project complete by Thanksgiving. Issues will pop up that cause the schedule to slip. • System block diagrams • High-level algorithms (if any) • Concerns • Implementation • Conceptual • Research papers related to your project idea

  5. Project Update • The current state of your project write-up • Even in the early stages of the project you should be able to write a rough draft of the Introduction and Motivation sections • The current state of your Final Presentation • Your Initial Project proposal slides (Due Fri 10/23) should make a starting point for your Final Presentation • What things are working & not working • What roadblocks are you running into

  6. Projects • Expectations • Working system • Write-up that can potentially be submitted to a conference • Will use the DAC format as a write-up guideline • 15-20 minute PowerPoint Presentation • DAC (Design Automation Conference) • http://www.dac.com/46th/index.aspx • Due Date: 5pm (MT) Monday 12/8/2008? • Cash Prizes

  7. Projects: Relevant conferences • FPL • FPT • FCCM • DAC • ICCAD • Reconfig • RTSS • RTAS

  8. Projects: Target Timeline • Teams Formed and Idea: Wed 10/21 • Project idea in PowerPoint, 3-5 slides • Motivation (why is this interesting, useful) • What will be the end result • High-level picture of final product • Project team list: Name, Responsibility • High-level Plan: Fri 10/23 (9pm) • PowerPoint, 5-10 slides • System block diagrams • High-level algorithms (if any) • Concerns • Implementation • Conceptual

  9. Projects: Target Timeline • Work on projects: 10/27 - 12/12 • Weekly update reports • More information on updates will be given • Presentations: Last Wed & Fri of class • Present / Demo what is done at this point • 15-20 minutes (depends on number of projects) • Final write-up and HW turn-in: Day of final (TBD)

  10. What you should learn • Introduction to common Design Patterns & Compute Models

  11. Outline • Design patterns • Why are they useful? • Examples • Compute models • Why are they useful? • Examples

  12. Outline • Design patterns • Why are they useful? • Examples • Compute models • Why are they useful? • Examples

  13. References • Reconfigurable Computing (2008) [1] • Chapter 5: Compute Models and System Architectures • Scott Hauck, Andre DeHon • Design Patterns for Reconfigurable Computing [2] • Andre DeHon (FCCM 2004) • Type Architectures, Shared Memory, and the Corollary of Modest Potential [3] • Lawrence Snyder: Annual Review of Computer Science (1986)

  14. Design Patterns • Design patterns are solutions to recurring problems.

  15. Reconfigurable Hardware Design • “Building good reconfigurable designs requires an appreciation of the different costs and opportunities inherent in reconfigurable architectures” [2] • “How do we teach programmers and designers to design good reconfigurable applications and systems?” [2] • Traditional approach: • Read lots of papers for different applications • Over time figure out ad-hoc tricks • Better approach?: • Use design patterns to provide a more systematic way of learning how to design • It has been shown in other realms that studying patterns is useful • Object oriented software [93] • Computer Architecture [79]

  16. Common Language • Provides a means to organize and structure the solution to a problem • Provides a common ground from which to discuss a given design problem • Enables solutions to be shared in a consistent manner (reuse)

  17. Describing a Design Pattern [2] • 10 attributes suggested by Gamma (Design Patterns, 1995); see the sketch after this list • Name: Standard name • Intent: What problem is being addressed, and how? • Motivation: Why use this pattern? • Applicability: When can this pattern be used? • Participants: What components make up this pattern? • Collaborations: How do the components interact? • Consequences: Trade-offs • Implementation: How to implement it • Known Uses: Real examples of where this pattern has been used • Related Patterns: Similar patterns, patterns that can be used in conjunction with this pattern, and when you would choose a similar pattern instead of this one
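The attribute list above can be read as a simple record structure. Below is a minimal, hypothetical Python sketch (the class and field names are mine, not from [2] or Gamma's book) that captures the ten attributes and fills them in with the Coarse-grain Time-Multiplexing pattern described on the following slides.

```python
from dataclasses import dataclass, field

@dataclass
class DesignPattern:
    """The ten attributes Gamma suggests for documenting a pattern."""
    name: str
    intent: str
    motivation: str
    applicability: str
    participants: list = field(default_factory=list)
    collaborations: str = ""
    consequences: str = ""
    implementation: str = ""
    known_uses: list = field(default_factory=list)
    related_patterns: list = field(default_factory=list)

# Example entry, filled in from the slides that follow
coarse_grain_tm = DesignPattern(
    name="Coarse-grain Time-Multiplexing",
    intent="Run a design too large for the chip as a sequence of sub-designs",
    motivation="Share limited fixed resources",
    applicability="Configuration can be done on a large time scale",
    participants=["Computational graph", "Control algorithm"],
    collaborations="Control algorithm manages when sub-graphs are loaded",
    consequences="Reconfiguration may take millions of cycles",
    implementation="Partition the graph; a controller sequences configurations",
    known_uses=["Video processing pipeline", "Automatic Target Recognition"],
    related_patterns=["Streaming Data", "Queues with Back-pressure"],
)
```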

  18. Example Design Pattern • Coarse-grain Time-multiplexing • Template Specialization

  19. Coarse-grain Time-Multiplexing [Figure: a computation graph with inputs A and B feeding modules M1, M2, and M3 is split across two configurations; Configuration 1 runs M1 and M2 and writes a Temp buffer, Configuration 2 loads M3, which reads Temp]

  20. Coarse-grain Time-Multiplexing • Name: Coarse-grained Time-Multiplexing • Intent: Enable a design that is too large to fit on a chip all at once to run as multiple subcomponents • Motivation: A method to share limited fixed resources to implement a design that is too large to fit as a whole.

  21. Coarse-grain Time-Multiplexing • Applicability: • Configuration can be done on a large time scale • No feedback loops in the computation • Feedback loop only spans the current configuration • Feedback loop is very slow • Participants: • Computational graph • Control algorithm • Collaborations: Control algorithm manages when sub-graphs are loaded onto the device

  22. Coarse-grain Time-Multiplexing • Consequences: Platforms often take millions of cycles to reconfigure • Need an app that will run for tens of millions of cycles before needing to reconfigure (see the back-of-envelope sketch below) • May need large buffers to store data during a reconfiguration • Known Uses: • Video processing pipeline [Villasenor] • “Video Communications Using Rapidly Reconfigurable Hardware”, Transactions on Circuits and Systems for Video Technology 1995 • Automatic Target Recognition [Villasenor] • “Configurable Computing Solutions for Automatic Target Recognition”, FCCM 1996
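A back-of-envelope calculation makes this consequence concrete. The numbers below are illustrative assumptions (a 100 MHz fabric clock, 10 million reconfiguration cycles, 100 million cycles of useful work), not figures from the slides; the point is simply that the work phase must dwarf the reconfiguration phase for the overhead to stay small.

```python
# Back-of-envelope check of the reconfiguration-overhead consequence.
# All numbers below are illustrative assumptions, not measurements.

clock_hz = 100e6          # assumed 100 MHz fabric clock
reconfig_cycles = 10e6    # "millions of cycles" to reload a configuration
work_cycles = 100e6       # cycles of useful work between reconfigurations

reconfig_time = reconfig_cycles / clock_hz                  # 0.1 s per reload
overhead = reconfig_cycles / (reconfig_cycles + work_cycles)

print(f"Reconfiguration takes {reconfig_time * 1e3:.0f} ms")
print(f"Overhead: {overhead:.1%}")   # ~9%: work must dwarf reconfiguration
```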

  23. Coarse-grain Time-Multiplexing • Implementation: • Break the design into multiple subgraphs that can be configured onto the platform in sequence • Design a controller to orchestrate the configuration sequencing (see the sketch below) • Take steps to minimize configuration time • Related patterns: • Streaming Data • Queues with Back-pressure
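As a rough illustration of the controller described above, here is a minimal host-side sketch in Python. The two platform hooks are hypothetical stand-ins (not any vendor's real API) for the reconfiguration and data-streaming calls; the intent is only to show the sequencing and the buffering between configurations.

```python
# Minimal host-side controller sketch for coarse-grain time-multiplexing.
# load_bitstream() and run_subgraph() are stand-ins for a platform's real
# configuration and data-movement APIs (hypothetical, for illustration only).

def load_bitstream(bitstream):
    """Stand-in for a device reconfiguration call (takes millions of cycles)."""
    pass

def run_subgraph(compute, buffer):
    """Stand-in for streaming buffered data through the loaded sub-graph."""
    return [compute(x) for x in buffer]

def time_multiplex(subgraphs, input_data):
    """Run a design too large for the chip as a sequence of configurations.

    subgraphs: list of (bitstream, compute) pairs in dataflow order.
    Intermediate results are buffered between configurations, as the
    Consequences slide warns.
    """
    buffer = input_data
    for bitstream, compute in subgraphs:
        load_bitstream(bitstream)               # reconfigure: keep this rare
        buffer = run_subgraph(compute, buffer)  # drain the buffer through it
    return buffer

# Toy usage: configuration 1 multiplies by 3, configuration 2 adds 1
result = time_multiplex([("cfg1.bit", lambda x: 3 * x),
                         ("cfg2.bit", lambda x: x + 1)], [1, 2, 3])
print(result)  # [4, 7, 10]
```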

  24. Coarse-grain Time-Multiplexing [Figure repeated from slide 19: the full graph (A, B, M1, M2, M3) split into Configuration 1 (M1 and M2 write Temp) and Configuration 2 (M3 reads Temp)]

  25. Template Specialization [Figure: four LUTs indexed by inputs A(1), A(0) drive outputs C(3)..C(0); the empty template holds no contents, while filling the LUTs specializes it into a multiply-by-3 table (outputs 0, 3, 6, 9) or a multiply-by-5 table (outputs 0, 5, 10, 15)]

  26. Template Specialization • Name: Template Specialization • Intent: Reduce the size or time needed for a computation. (Note: the primary difference from the Template pattern is that this pattern aims to minimize run-time reconfiguration) • Motivation: Use early-bound data and slowly changing data to reduce circuit size and execution time.

  27. Template Specialization • Applicability: When circuit specialization can be adapted quickly • Example: Can treat LUTs as small memories that can be written. No interconnect modifications • Participants: • Template cell: Contains specialization configuration • Template filler: Manages what and how a configuration is written to a Template cell • Collaborations: Template filler manages Template cell

  28. Template Specialization • Consequences: Cannot optimize as much as when a circuit is fully specialized for a given instance. Overhead is needed to allow the template to implement several specializations. • Known Uses: • Multiply-by-Constant • String Matching • Implementation: Multiply-by-Constant • Use a LUT as a memory to store the answer • Use a controller to update this memory when a different constant should be used (see the sketch below)
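A minimal sketch of the multiply-by-constant implementation, assuming the 2-bit-input, 4-bit-output template from the figure on slide 25. The function name is mine; the idea is simply that specializing the template means rewriting the LUT contents (the precomputed products), not changing the circuit structure.

```python
# Sketch of the multiply-by-constant template from the slide:
# a 2-bit input A is multiplied by a constant by looking up a precomputed
# 4-bit product. "Specializing" the template means rewriting the LUT
# contents; no interconnect is modified.

def fill_template(constant, input_bits=2, output_bits=4):
    """Return the LUT contents: one output word per possible input value."""
    mask = (1 << output_bits) - 1
    return [(a * constant) & mask for a in range(1 << input_bits)]

mult_by_3 = fill_template(3)   # [0, 3, 6, 9]   as on the slide
mult_by_5 = fill_template(5)   # [0, 5, 10, 15] as on the slide

lut = fill_template(3)         # template cell currently specialized to *3
print(lut[2])                  # 2 * 3 = 6

lut = fill_template(5)         # template filler rewrites the cell for *5
print(lut[2])                  # 2 * 5 = 10
```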

  29. Template Specialization • Related patterns: • CONSTRUCTOR • EXCEPTION • TEMPLATE

  30. Template Specialization [Figure repeated from slide 25: the empty LUT template versus the same LUTs filled as multiply-by-3 (0, 3, 6, 9) and multiply-by-5 (0, 5, 10, 15) tables]

  31. Next Lecture • Compute Models

  32. Slides in Progress • Need to revise this lecture with figures and useful animations • Add some non-FPGA systems; maybe not, since GARP and PipeRench were discussed in the last lecture. Perhaps just mention them again • The main reason other architectures are not used is economy of scale. Lots of FPGAs are manufactured, thus lowering cost and enabling the use of state-of-the-art fab technology (giving high performance)

  33. Reducing Configuration Transfer Time • Arch approach • Partial reconfiguration • Compression

  34. Configuration Security • Arch approach • Partial reconfiguration • Compression
