
Welcome To The 33rd HPC User Forum Meeting, September 2009

Join the 33rd HPC User Forum Meeting in September 2009 to discuss and exchange information on high-performance computing, technologies, and market dynamics. Share your achievements and requirements with fellow industry professionals.


Presentation Transcript


  1. Welcome To The 33rd HPC User Forum Meeting, September 2009

  2. Important Dates For Your Calendar • FUTURE HPC USER FORUM MEETINGS: • October 2009 International HPC User Forum Meetings: • HLRS/University of Stuttgart, October 5-6, 2009 (midday to midday) • EPFL, Lausanne, Switzerland, October 8-9, 2009 (midday to midday) • US Meetings: • April 12 to 14, 2010 Dearborn, Michigan at the Dearborn Inn • September 2010, Seattle Washington

  3. Thank You To Our Meal Sponsors! • Wednesday Breakfast -- Hitachi Cable America • Wednesday Lunch -- Altair Engineering & AMD • Wednesday Break -- Appro International • Thursday Breakfast -- Mellanox Technologies • Thursday Lunch -- Microsoft • Thursday Break -- ScaleMP

  4. A Petascale Trivia Question • How many years would 1,000 scientists have to calculate by hand to equal 1 second of work on a 0.1 PFLOPS supercomputer? • Assuming that each can do 1 calculation every second, with no rest time (and a long life)

  5. A Petascale Trivia Answer • 3,200 Years • 0.1 PFLOPS ≈ 1,000 x 365 x 24 x 60 x 60 x 3,200 calculations • From DOD’s new Mana supercomputer in Hawaii: • A Dell PowerEdge M610 cluster with 1,152 nodes • Each node contains two 2.8 GHz Intel Nehalem processors, for a total of 9,216 compute cores • That gives it a PEAK performance of 103 TFLOPS, or 0.1 PFLOPS • From MHPCC Acting Director: David L. Stinson
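A minimal sketch (Python) of the arithmetic behind the trivia answer, using the slide's figures plus one assumption not stated on the slide: that each Nehalem core retires 4 double-precision FLOPs per clock cycle, which is how the 103 TFLOPS peak figure is usually derived.

```python
# Back-of-the-envelope check of the petascale trivia answer (slides 4-5).
SECONDS_PER_YEAR = 365 * 24 * 60 * 60             # 31,536,000 s

# Peak of the Mana system: 9,216 cores x 2.8 GHz x 4 FLOPs/cycle (assumed)
peak_flops = 9_216 * 2.8e9 * 4                    # ~1.03e14, i.e. ~103 TFLOPS

# One second of machine work, divided by the hand-calculation rate
hand_rate = 1_000 * 1                             # 1,000 scientists, 1 calculation/s each
years_by_hand = peak_flops / hand_rate / SECONDS_PER_YEAR

print(f"Peak: {peak_flops / 1e12:.0f} TFLOPS")    # -> 103 TFLOPS
print(f"Years by hand: {years_by_hand:,.0f}")     # -> ~3,270; the slide rounds to 3,200
```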

  6. Tuesday Dinner Vendor Updates: 10 Min. Only • IBM • Appro • Hitachi Cable • Luxtera • Mellanox • ScaleMP • Tech-X • Mitrionics

  7. Welcome To The 33rd HPC User Forum Meeting, September 2009

  8. Introduction: Logistics • Ask Mary if you need a receipt • Meals and events • Wednesday tour and dinner plans • We have a very tight agenda (as usual) • Please help us keep on time! • Review handouts • Note: We will post most of the presentations on the web site • Please complete the evaluation form

  9. HPC User Forum Mission • To improve the health of the high-performance computing industry through open discussions, information-sharing and initiatives involving HPC users in industry, government and academia, along with HPC vendors and other interested parties.

  10. HPC User Forum Goals • Assist HPC users in solving their ongoing computing, technical and business problems • Provide a forum for exchanging information, identifying areas of common interest, and developing unified positions on requirements • By working with users in other sectors and vendors • To help direct and push vendors to build better products • Which should also help vendors become more successful • Provide members with a continual supply of information on: • Uses of high end computers, new technologies, high end best practices, market dynamics, computer systems and tools, benchmark results, vendor activities and strategies • Provide members with a channel to present their achievements and requirements to interested parties

  11. Important Dates For Your Calendar • FUTURE HPC USER FORUM MEETINGS: • October 2009 International HPC User Forum Meetings: • HLRS/University of Stuttgart, October 5-6, 2009 (midday to midday) • EPFL, Lausanne, Switzerland, October 8-9, 2009 (midday to midday) • US Meetings: • April 12 to 14, 2010 Dearborn, Michigan at the Dearborn Inn • September 2010, Seattle Washington

  12. Thank You To Our Meal Sponsors! • Wednesday Breakfast -- Hitachi Cable America • Wednesday Lunch -- Altair Engineering • Wednesday Break -- Appro International & AMD • Thursday Breakfast -- Mellanox Technologies • Thursday Lunch -- Microsoft • Thursday Break -- ScaleMP

  13. 1Q 2009 HPC Market Update

  14. Q109 HPC Market Result – Down 16.8% • HPC servers total: $2.1B • Supercomputers (over $500K): $802M • Divisional ($250K–$500K): $237M • Departmental ($100K–$250K): $754M • Workgroup (under $100K): $282M • Source: IDC, 2009

  15. Q109 Vendor Share in Revenue

  16. Q109 Cluster Vendor Shares

  17. HPC Compared To IDC Server Numbers

  18. HPC Qview Tie To Server Tracker: 1Q 2009 Data • All WW servers as reported in the IDC Server Tracker: $9.9B • Server Tracker (QST) data focus: the compute nodes • HPC Qview data focus: the complete system (“everything needed to turn it on”) • HPC Qview compute node revenues: ~$1.05B* • HPC special revenue recognition services: ~$474M (systems sold through custom engineering, R&D offsets, or paid for over multiple quarters) • HPC computer system revenues beyond the base compute nodes: ~$576M (interconnects and switches, in-built storage, scratch disks, OS, middleware, warranties, installation fees, service nodes, special cooling features, etc.) • * This number ties the two data sets on an apples-to-apples basis
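A quick check (Python) that the three Qview pieces quoted above add back up to the ~$2.1B Q109 total from slide 14; every figure is taken directly from the slides, and only the sum is computed here.

```python
# Reconciling the 1Q09 HPC Qview revenue pieces from slide 18.
compute_nodes   = 1.05e9   # HPC Qview compute-node revenue (ties to the Server Tracker)
special_revenue = 0.474e9  # custom engineering, R&D offsets, multi-quarter payments
beyond_nodes    = 0.576e9  # interconnects, storage, OS/middleware, warranties, etc.

total = compute_nodes + special_revenue + beyond_nodes
print(f"HPC Qview total: ${total / 1e9:.1f}B")    # -> $2.1B, the Q109 figure on slide 14
```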

  19. OEM Mix Of HPC Special Revenue Recognition Services • Notes: • Includes product sales that are not reported by OEMs as product revenue in a given quarter • Sometimes HPC systems are paid for across a number of quarters or even years • Includes NRE – if required for a specific system • Includes custom engineering sales • Some examples – Earth Simulator, ASCI Red, ASCI Red Storm, DARPA systems, and many small and medium HPC systems that are sold through a custom engineering or services group because they need extra things added

  20. Areas Of HPC “Uplift” Revenues

  21. Areas Of HPC “Uplift” Revenues Notes: * Computer hardware (in cabinet) -- hybrid nodes, service nodes, accelerators, GPGPUs, FPGAs, internal interconnects, in-built disks, in-built switches, special cabinet doors, special signal processing parts, etc. * External interconnects -- switches, cables, extra cabinets to hold them, etc. * External storage -- scratch disks, interconnects to them, cabinets to hold them, etc. (this excludes user file storage devices) * Software -- includes both bundled and separately charged software if sold by the OEM or on the purchase contract: the operating system, license fees, the entire middleware stack, compilers, job schedulers, etc. (it excludes all ISV applications unless sold by the OEM and in the purchase contract) * Bundled warranties * Misc. items -- since the HPC taxonomy includes everything required to turn on the system and make it operational, this covers items like bundled installation services, special features and other add-on hardware, and even a special paint job if required

  22. Special Paint Jobs Are Back … http://www.afrl.hpc.mil/consolidated/hardware.php

  23. 2010 IDC HPC Research Areas • Quarterly HPC Forecast Updates • Until the world economy recovers • New HPC End-user Based Reports: • Clusters, processors, accelerators, storage, interconnects, system software, and applications • The evolution of government HPC budgets • China and Russia HPC trends • Power and Cooling Research • Developing a Market Model For Middleware and Management Software • Extreme Computing • Data Center Assessment and Benchmarking • Tracking Petascale and Exascale Initiatives

  24. Agenda: Day One, Wednesday Morning • 8:10am Introductions and Welcome, Steve Finn and Earl Joseph • Morning Session Chair: Steve Finn • 8:15am Weather/climate presentation from ORNL, Jim Hack • 8:45am Weather/climate presentation from NCAR, Henry Tufo • 9:15am Weather/climate presentation from NASA/Goddard, Phil Webster • 9:45am Two short vendor technology updates (Altair and Sun) • 10:15am Break • 10:30am Weather/climate presentation from NRL Monterey, Jim Doyle • 11:00am Weather and Climate Directions from an IBM perspective, Jim Edwards • 11:25am Panel on HPC Weather/Climate/Earth Sciences Requirements & Directions • Moderators: Steve Finn and Earl Joseph • 12:00pm Networking Lunch

  25. Lunch Break: Thanks to Altair Engineering. Please Return Promptly at 1:00pm

  26. Thank You Altair Engineering For Lunch

  27. Agenda: Day One, Wednesday Afternoon • Afternoon Session Chair: Paul Muzio • 1:00pm HPC in Europe, HECToR Update, Andrew Jones, NAG • 1:30pm DOD HPCMP Program Update, Larry Davis • 2:00pm Weather/climate Research at Northrop Grumman, Glenn Higgins • 2:25pm Weather and Climate Directions from a Cray perspective, Per Nyberg • 2:50pm Panel on Government and Political Issues, Concerns and Ideas for New Directions • Moderator: Charlie Hayes • 3:30pm DICE Parallel File System Project, Tracey Wilson • 4:00pm NCAR HPC User Site Tour, return by 6:00pm • 6:00pm Networking break and time for 1-on-1 meetings • 6:30pm Special Dinner Event

  28. Welcome To Day 2 Of The HPC User Forum Meeting

  29. Thank You To Our Meal Sponsors! • Wednesday Breakfast -- Hitachi Cable America • Wednesday Lunch -- Altair Engineering • Wednesday Break -- Appro International • Thursday Breakfast -- Mellanox Technologies • Thursday Lunch -- Microsoft • Thursday Break -- ScaleMP

  30. Agenda: Day Two, Thursday Morning • 8:10am Welcome, Earl Joseph and Steve Finn • Morning Session Chair: Douglas Kothe • 8:15am Power Grid Research at PNNL, Mo Khaleel • 8:45am HPC Data Center Power and Cooling Issues, and New Ways to Measure HPC Systems, Roger Panton, Avetec • 9:15am Compiler and Tools: User Requirements from ARSC, Edward Kornkven • 9:45am New HPC Directions at Microsoft, Roger Barga • 10:15am Break • 10:30am Technical Panel on HPC Front-End Compiler Requirements and Directions • Moderators: Robert Singleterry, Vince Scarafino • 12:15pm Networking Lunch

  31. 73 ?

  32. Lunch Break: Thanks to Microsoft. Please Return Promptly at 1:00pm

  33. Thank You Microsoft For Lunch

  34. Agenda: Day Two, Thursday Afternoon • Afternoon Session Chair: Jack Collins • 1:00pm ARL HPC User Site Update, Thomas Kendall • 1:30pm Weather/climate presentation from NCAR, John Michalakes • 2:00pm Technical Panel on HPC Application Scaling Issues, Requirements and Trends • Moderators: Doug Kothe and Paul Muzio • Panel members: • 3:15pm Short vendor technology update (Microsoft) • 3:30pm Break • 4:00pm Weather/climate presentation from NASA Langley, Mike Little • 4:30pm "Spider," the Largest Lustre File System, ORNL, Galen Shipman • 5:00pm Meeting Wrap-Up and Future Meeting Dates, Earl Joseph and Steve Finn • 5:00pm Meeting Ends

  35. Important Dates For Your Calendar • FUTURE HPC USER FORUM MEETINGS: • October 2009 International HPC User Forum Meetings: • HLRS/University of Stuttgart, October 5-6, 2009 (midday to midday) • EPFL, Lausanne, Switzerland, October 8-9, 2009 (midday to midday) • US Meetings: • April 12 to 14, 2010 Dearborn, Michigan at the Dearborn Inn • September 2010, Seattle Washington

  36. Thank You For Attending The 33rd HPC User Forum Meeting

  37. Questions? Please email: hpc@idc.com Or check out: www.hpcuserforum.com

  38. Questions? Please email: hpc@idc.com Or check out: www.hpcuserforum.com

  39. HPC User Forum Steering Committee Meeting, September 2009

  40. How Did The Meeting Go? • What worked well? • What needs to be changed or improved? • Dates and locations for the next Steering Committee meetings? • SC09 – Monday • January, 2010 at NASA

  41. Important Dates For Your Calendar • FUTURE HPC USER FORUM MEETINGS: • October 2009 International HPC User Forum Meetings: • HLRS/University of Stuttgart, October 5-6, 2009 (midday to midday) • EPFL, Lausanne, Switzerland, October 8-9, 2009 (midday to midday) • US Meetings: • April 12 to 14, 2010 Dearborn, Michigan at the Dearborn Inn • September 2010, Seattle Washington

  42. Questions? Please email: hpc@idc.com Or check out: www.hpcuserforum.com

  43. Q408 vs. Q109

  44. HPC Qview Tie To Server Tracker: 1Q 2009 Data • (Chart: one figure ties to the Server Tracker, the other ties to the HPC Qview)

  45. Government Panel Questions

  46. Government Panel Questions • #1 If you believe that the US’s greatest asset in the next 25 years will be our ability to lead the world in the development of intellectual property: • Do you believe the USG is providing sufficient investment to ensure US competitiveness in science and technology in general, and HPC in particular? Elaborate. • What do you think the USG should or should not do to help HPC?

  47. Government Panel Questions • #2 Most hardware vendors will agree that profit margins on USG HPC procurements, especially those at the high end, are often negligible at best. • a. While it is generally understood that the USG is obligated to try to get the best value for its money, is there a greater obligation beyond a specific procurement for the USG’s behavior towards the industry in general? • b. If you believe a healthy US HPC community is important for US competitiveness, what, if anything, should the USG specifically do to help the financial or business health of the US HPC vendors? • c. Should the vendors, via one or more of the industry groups, lobby for more lenient procurement terms, less stringent benchmarks, and lower penalties in advanced system procurements? • d. Or, should the vendors simply “no bid” more frequently, until the USG relaxes its procurement terms?

  48. Government Panel Questions • #3 Do you agree that the USG emphasis, especially within DOE and the NSF, in the area of petascale and exascale computing is appropriate and the best use of USG funding for support of the US HPC industry and HPC technology development? • Please Elaborate

  49. Government Panel Questions • #4 Over the past forty years or so, up to about the middle 1990s, industry traditionally followed the lead of the USG in adopting HPC technology. For example, Cray Research sold more Y-MP supercomputers to industry than to governments. Why hasn’t US industry followed the lead of the USG in the race to petascale computing? • a. Is it because their traditional applications don’t need to scale that high? • b. Is it because ISV software (and their own s/w) doesn’t scale? • c. Is it because of the software per-CPU costs? • d. Will this affect US competitiveness? • e. What action should the USG take, if any, to encourage industry adoption of high-end HPC specifically, or HPC of any size, in general?

  50. Government Panel Questions • #5 At the National Science Foundation there have been two major HPC system funding programs over the past three years: • a. The Track 1 Program, to fund the world’s most powerful “leadership class” petascale supercomputer, an IBM system, developed under the DARPA HPCS Program, planned for installation at NCSA in 2011. (It is important to note that Cray is also developing a multi-petaFLOP system under the DARPA HPCS Program, which is currently expected to be installed at the Oak Ridge National Laboratory, funded by DOE.) • b. The Track 2 Program, four annual procurements to install “mid-range” systems, smaller than the Track 1 system but of a size to bridge the gap between current HPC systems and more advanced petascale systems. The first Track 2 system was installed at TACC at the University of Texas. The second and third systems are scheduled for the University of Tennessee at ORNL and for the University of Pittsburgh and Carnegie Mellon at the Pittsburgh Supercomputing Center. The results of the fourth annual procurement, promised to be a multiple buy of up to four systems, have yet to be announced. • Questions: • a. Do you agree specifically with the NSF Track 1 and 2 programs, or do you think NSF’s resources should have been or should, in the future, be distributed more broadly throughout academia? Why? • b. Now that the fourth and last Track 2 procurement is about over, what do you recommend NSF should do next with respect to HPC?
