
Research Computing at Virginia Tech



  1. Research Computing at Virginia Tech Advanced Research Computing

  2. Outline • ARC Overview • ARC Resources • Training & Education • Getting Started

  3. ARC overview

  4. Terascale Computing Facility • 2,200-processor Apple G5 cluster • 10.28 teraflops; #3 on the November 2003 Top500 list

  5. Advanced Research Computing (ARC) • Unit within the Office of the Vice President for Information Technology and the Office of the Vice President for Research • Provide centralized resources for: • Research computing • Visualization • Staff to assist users • Website: http://www.arc.vt.edu/

  6. Goals • Advance the use of computing and visualization in VT research • Centralize resource acquisition, maintenance, and support for research community • HPC Investment Committee • Provide support to facilitate usage of resources and minimize barriers to entry • Enable and participate in research collaborations between departments

  7. Personnel • Terry Herdman, Associate VP for Research Computing • BD Kim, Deputy Director, HPC • Nicholas Polys, Director, Visualization • Computational Scientists • Justin Krometis • James McClure • Gabriel Mateescu • User Support GRAs

  8. ARC Resources

  9. Computational Resources • Blue Ridge – Large scale Linux cluster • Hokie Speed – GPU cluster • Hokie One – SGI UV SMP machine • Athena – Data Analysis and Viz cluster • Ithaca – IBM iDataPlex • Dante – Dell R810 • Other resources for individual research groups

  10. Blue Ridge – Large-Scale Cluster • Resources for running jobs • 318 dual-socket nodes with 16 cores/node • each socket is an eight-core Intel Sandy Bridge-EP Xeon • 4 GB/core, 64 GB/node • total: 5,088 cores, 20 TB memory • Two login nodes and two admin nodes • 128 GB/node • Interconnect: quad-data-rate (QDR) InfiniBand • Top500 #402 (November 2012) • Requires an allocation to run (the only ARC system that does) • Released to users on March 20, 2013

  11. Allocation System • Works like a bank account of system units • Time used by jobs is deducted from the allocation account (a worked example follows below) • Project PIs (i.e., faculty) request an allocation for a research project • Based on the research output of the project (papers, grants) and the type of computing/software used • Once approved, the PI can add other users (faculty, researchers, students) • Only applies to BlueRidge (no allocation is required to run on other ARC systems)
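For illustration only: the slides do not say how system units are computed, so assuming here that one system unit corresponds to one core-hour, a job that occupies 2 BlueRidge nodes (16 cores each) for 5 hours would deduct

```latex
% Hypothetical accounting example (assumes 1 system unit = 1 core-hour;
% the slides do not define the unit)
2~\text{nodes} \times 16~\tfrac{\text{cores}}{\text{node}} \times 5~\text{hours}
  = 160~\text{core-hours}
```

from the project's allocation account.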

  12. HokieSpeed – CPU/GPU Cluster • 206 nodes, each with: • Two 6-core 2.40 GHz Intel Xeon E5645 CPUs and 24 GB of RAM • Two NVIDIA M2050 Fermi GPUs (448 cores each) • Total: 2,472 CPU cores, 412 GPUs, 5 TB of RAM • Top500 #221, Green500 #43 (November 2012) • 14-foot by 4-foot 3D visualization wall • Intended use: large-scale GPU computing • Available to NSF grant Co-PIs

  13. HokieOne – SGI UV SMP System • 492 Intel Xeon X7542 (2.66 GHz) cores • Two six-core sockets per blade (12 cores/blade) • 41 blades for apps; one blade for system + login • 2.6 TB of shared memory (NUMA) • 64 GB/blade, blades connected with NUMAlink • SUSE Linux 11.1 • Recommended uses: • Memory-heavy applications • Shared-memory (e.g. OpenMP) applications (a minimal OpenMP sketch follows below)
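As an illustration of the shared-memory programming style the slide recommends for this machine, here is a minimal, generic OpenMP example in C. It is a sketch only, not taken from ARC's materials, and the compile command in the comment is an assumption rather than an ARC-specific instruction.

```c
/* Minimal OpenMP example: parallel reduction over a loop.
 * Generic sketch, not ARC-specific. Typical compile (illustrative only):
 *   gcc -fopenmp sum.c -o sum                                            */
#include <omp.h>
#include <stdio.h>

int main(void) {
    const long n = 100000000L;
    double sum = 0.0;

    /* The reduction clause gives each thread a private partial sum
     * and combines the partial sums at the end of the loop. */
    #pragma omp parallel for reduction(+:sum)
    for (long i = 1; i <= n; i++) {
        sum += 1.0 / ((double)i * (double)i);   /* converges to pi^2/6 */
    }

    printf("max threads available: %d, sum = %.12f\n",
           omp_get_max_threads(), sum);
    return 0;
}
```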

  14. Athena – Data Analytics Cluster • 42 AMD 2.3 GHz Magny-Cours quad-socket, octa-core nodes (total: 1,344 cores, 12.4 TFLOPS peak) • 32 NVIDIA Tesla S2050 (quad-core) GPUs • 6 GB GPU memory • Memory: 2 GB/core (64 GB/node, 2.7 TB total) • Quad-data-rate (QDR) InfiniBand • Recommended uses: • GPU computations • Visualization • Data-intensive applications

  15. Ithaca – IBM iDataPlex • 84 dual-socket quad-core Nehalem 2.26 GHz nodes (672 cores in all) • 66 nodes available for general use • Memory (2 TB total): • 56 nodes have 24 GB (3 GB/core) • 10 nodes have 48 GB (6 GB/core) • Quad-data-rate (QDR) InfiniBand • Recommended uses: • Parallel MATLAB • ISV apps needing an x86/Linux environment

  16. Dante (Dell R810) • 4 octa-socket, octa-core nodes (256 cores in all) • 64 GB RAM • Intel x86 64-bit, Red Hat Enterprise Linux 5.6 • No queuing system • Recommended uses: • Testing, debugging • Specialty software

  17. Visualization Resources • VisCube: 3D immersion environment with three 10′ by 10′ walls and a floor of 1920×1920 stereo projection screens • DeepSix: Six tiled monitors with combined resolution of 7680×3200 • Athena GPUs: Accelerated rendering • ROVR Stereo Wall • AISB Stereo Wall

  18. Education & Training

  19. Spring 2013 (Faculty Track) • Intro to HPC (13 Feb) • Research Computing at VT (20 Feb) • Shared-Memory Programming in OpenMP (27 Feb) • Distributed-Memory Programming using MPI (6 Mar) • Two-session courses: • Visual Computing (25 Feb, 25 Mar) • Scientific Programming with Python (1 Apr, 8 Apr) • GPU Programming (10 Apr, 17 Apr) • Parallel MATLAB (15 Apr, 22 Apr)

  20. Workshops • Offered last: January 2013, August 2012 • Two days, covering: • High-performance computing concepts • Introduction to ARC’s resources • Programming in OpenMP and MPI • Third-party libraries • Optimization • Visualization • Next offered: Summer 2013?

  21. Other Courses Offered • Parallel Programming with Intel Cilk Plus (Fall 2012) • MATLAB Optimization Toolbox (ICAM) • Others being considered / in development: • Parallel R

  22. Graduate Certificate (Proposed) • Certificate requirements (10 credits) • Two core courses, developed and taught by ARC computational scientists: • Introduction to Scientific Computing & Visualization (3 credits) • Applied Parallel Computing for Scientists & Engineers (3 credits) • A selection of existing coursework (3 credits; list provided in proposal draft) • HPC&V seminar (1 credit) • Interdisciplinary coursework (3 credits, optional) • Administration • Steering/admissions committee • Core faculty: develop the courseware and seminar, serve as PhD committee members • Affiliate faculty: instruct existing courses, give guest lectures, etc.

  23. Proposed Core Courses & Content • Introduction to Scientific Computing & Visualization • Programming environment in HPC • Numerical Analysis • Basic parallel programming with OpenMP and MPI • Visualization tools • Applied Parallel Computing for Scientists & Engineers • Advanced parallelism • Hybrid programming with MPI/OpenMP (a minimal sketch follows below) • CUDA/MIC programming • Optimization and scalability of large-scale HPC applications • Parallel & remote visualization and data analysis
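Since the second core course lists hybrid MPI/OpenMP programming, here is a minimal, generic sketch of that model: MPI ranks distributed across nodes, with an OpenMP thread team inside each rank. It is illustrative only and is not course material; the choice of the MPI_THREAD_FUNNELED support level is an assumption about how the threads would call MPI.

```c
/* Minimal hybrid MPI + OpenMP sketch (generic, not from the proposed course).
 * Typical compile (illustrative only):  mpicc -fopenmp hybrid.c -o hybrid   */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int provided, rank, size;

    /* Request a thread-support level so OpenMP threads can coexist with MPI. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Each MPI rank spawns its own team of OpenMP threads. */
    #pragma omp parallel
    {
        printf("MPI rank %d of %d, OpenMP thread %d of %d\n",
               rank, size, omp_get_thread_num(), omp_get_num_threads());
    }

    MPI_Finalize();
    return 0;
}
```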

  24. Getting Started on ARC’s Systems

  25. Getting Started Steps • Apply for an account (all users) • Apply for an allocation (PIs only, for projects that will use BlueRidge) • Log in to the system via SSH • Work through the system examples: compile and submit to the scheduler (a minimal example follows below) • Compile and submit your own programs
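To make the compile-and-submit step concrete, here is a minimal MPI "hello world" in C of the kind such system examples usually contain. This is a generic sketch, not ARC's actual example; the compile command in the comment is illustrative, and the exact modules, compilers, and scheduler commands are site-specific and covered by the ARC documentation linked on the next slide.

```c
/* Minimal MPI "hello world" (generic sketch, not ARC's system example).
 * Typical workflow, commands illustrative rather than ARC-specific:
 *   mpicc hello.c -o hello      -- compile with an MPI wrapper compiler
 *   <submit 'hello' through the cluster's batch scheduler>               */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank, size;
    MPI_Init(&argc, &argv);                  /* start the MPI runtime     */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);    /* id of this process        */
    MPI_Comm_size(MPI_COMM_WORLD, &size);    /* total number of processes */
    printf("Hello from rank %d of %d\n", rank, size);
    MPI_Finalize();
    return 0;
}
```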

  26. Resources • ARC Website: http://www.arc.vt.edu • ARC Compute Resources & Documentation: http://www.arc.vt.edu/resources/hpc/ • Allocation System: http://www.arc.vt.edu/userinfo/allocations.php • New Users Guide: http://www.arc.vt.edu/userinfo/newusers.php • Training: http://www.arc.vt.edu/userinfo/training.php

  27. Research Projects at VT: Interdisciplinary Center for Applied Mathematics • Terry L. Herdman • Associate Vice President for Research Computing • Director, Interdisciplinary Center for Applied Mathematics • Professor of Mathematics, Virginia Tech

  28. ICAM History • Founded in 1987 to promote and facilitate interdisciplinary research and education in applied and computational mathematics at Virginia Tech. Currently, ICAM has 45 members from 10 departments, 2 colleges, VBI, and ARC. • Named SCHEV Commonwealth Center of Excellence in 1990. • Named DOD Center of Research Excellence & Transition in 1996. • Received more than $25 million in external funding from federal sources and numerous industrial partners. • Received several MURI and other large center grants. • Leader of the VT effort on the Energy Efficient Buildings Hub (EEB). • AGILITY – INGENUITY – INTEGRITY • DON'T OVER-PROMISE • KEEP SCIENTIFIC CREDIBILITY & REPUTATION • BUILD EXCELLENT WORKING RELATIONSHIPS WITH INDUSTRY AND NATIONAL LABORATORIES • MATHEMATICAL MODELS FOR MANY DIFFERENT PROBLEMS

  29. Sources of ICAM’s Funding • Department of Defense: • AIR FORCE OFFICE OF SCIENTIFIC RESEARCH – AFOSR • DEFENSE ADVANCED RESEARCH PROJECTS AGENCY – DARPA • ARMY RESEARCH OFFICE – ARO • OFFICE OF NAVAL RESEARCH – ONR • ENVIRONMENTAL TECHNOLOGY DEMONSTRATION & VALIDATION PROGRAM – ESTCP • VARIOUS AIR FORCE RESEARCH LABS – AFRL • Flight Dynamics Lab – Weapons Lab – Munitions Lab • Other Agencies: • NATIONAL SCIENCE FOUNDATION – NSF • NATIONAL AERONAUTICS AND SPACE ADMINISTRATION – NASA • FEDERAL BUREAU OF INVESTIGATION – FBI • DEPARTMENT OF HOMELAND SECURITY – DHS • DEPARTMENT OF ENERGY – DOE (EERE, ORNL) • NATIONAL INSTITUTES OF HEALTH – NIH (IDIQ CONTRACT PROPOSAL) • Industry Funding Sources: AEROSOFT, INC. – BABCOCK & WILCOX – BOEING AEROSPACE – CAMBRIDGE HYDRODYNAMICS – COMMONWEALTH SCIENTIFIC CORP. – HONEYWELL – HARRIS CORP. – LOCKHEED – SAIC – TEKTRONIX – UNITED TECHNOLOGIES – SOTERA DEFENSE SOLUTIONS …

  30. Industry–National Lab Partners • Boeing (Seattle) • United Technologies (Hartford) • Honeywell (Minneapolis) • Tektronix (Beaverton) • Air Force Flight Dynamics (Dayton) • LBNL DOE Lab (Berkeley) • NREL DOE Lab (Golden) • LLNL DOE Lab (Livermore) • Babcock & Wilcox (Lynchburg) • SAIC (McLean) • ORNL (Oak Ridge) • NASA (Ames) • Sandia (Albuquerque) • AeroSoft (Blacksburg) • Air Force AEDC (Tullahoma) • NASA (Langley) • Lockheed (Los Angeles) • Air Force AFRL (Albuquerque) • Air Force Munitions Lab (Eglin) • Harris Corp. (Melbourne) • Deutsche Bank (Frankfurt, Germany) • Nestlé (Ludwigsburg, Germany)

  31. International Collaborations

  32. ICAM Team • 10 academic departments • 2 colleges • VBI • ARC (IT) • 2010–2011 core members* (*depends on current projects & funding) • Current associate members • 1 staff person: Misty Bland

  33. ICAM History of Interdisciplinary Projects • H1N1 • Immune • Cancer • HIV • Advanced control • Homeland security • Energy-efficient buildings • Life sciences • Nanotechnology • Design of jets • HPC – CS&E • Space platforms

  34. Good News / Bad News • Good News: Every IBG Science Problem has a Mathematics Component • Bad News: No IBG Science Problem has only a Mathematics Component • W.R. Pulleyblank, Director, Deep Computing Institute; Director, Exploratory Server Systems; IBM Research

  35. Two Applications to Aerospace • Past application: Airfoil Flutter • New application: Next-Generation Large Space Systems

  36. Stealth • Began as an unclassified project at DARPA in the early 1970s • Proved that physically large objects could still have a minuscule RCS (radar cross-section) • The challenge was to make it fly!

  37. ICAM History of Interdisciplinary Projects • 1987–1991, DARPA, $1.4M: An Integrated Research Program for the Modeling, Analysis and Control of Aerospace Systems (Team: VT–ICAM, NASA, USAF) • 1993–1997, USAF, $2.76M: Optimal Design and Control of Nonlinear Distributed Parameter Systems, University Research Initiative Center Grant (MURI) (Team: VT–ICAM, Boeing, USAF, NC State, Lockheed) • X-29, F-117A • DARPA also provided funds for the renovation of Wright House, ICAM’s home since 1989 • 09/14/97: F-117A crash caused by flutter • MURI topic: control of air flows

  38. Mathematical Research • Motivated by problems of interest to industry, business, and government organizations, as well as the science and engineering communities • Mathematical framework: both theoretical and computational • Projects require expertise in several disciplines • Projects require HPC • Projects require computational science: modeling, analysis, algorithm development, optimization, visualization

  39. University Research Team • John Burns • Dennis Brewer • Herman Brunner • Gene Cliff • Yanzhao Cao • Harlan Stech • Janos Turi • Dan Inman • Kazufumi Ito • Graciela Cerezo • Elena Fernandez • Brian Fulton • Z. Liu • Hoan Nguyen • Diana Rubio • Ricardo Sanchez Pena • 8 Undergraduate Students • 10 Graduate Students

  40. Research Support and Partners • AFOSR • DARPA ACM and SPO • NASA LaRC • NIA • Flight Dynamics Lab, WPAFB • Lockheed Martin

  41. Build Math Model • start simple • use and keep the Physics (Science) • use and keep Engineering Principles • do not try to be an expert in all associated disciplines – interdisciplinary team • learn enough so that you can communicate • know the literature • computational/experimental validation

  42. Spring-Mass System • h(t): plunge • α(t): pitch angle • β(t): flap angle

  43. Pitching, Plunging and Flap Motions of Airfoil

  44. Force: Lift • Note: lift depends on past history

  45. Evolution Equation for Airfoil Circulation

  46. Mathematical Model • change the 2nd-order ODE to a 1st-order system (a generic sketch of this reduction follows below) • couple the ODE with the evolution equation • the past history of the circulation function provides part of the initial conditions
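The transcript does not reproduce the airfoil equations themselves, so the following illustrates the first bullet only, with placeholder matrices M, C, K and forcing f that are not the model's actual coefficients: a second-order system is reduced to first order by taking the state to be position and velocity.

```latex
% Generic reduction of a 2nd-order ODE to a 1st-order system
% (placeholder symbols, not the specific aeroelastic model from the slides)
M\ddot{q}(t) + C\dot{q}(t) + Kq(t) = f(t)
\quad\Longrightarrow\quad
\dot{z}(t) =
\begin{pmatrix} 0 & I \\ -M^{-1}K & -M^{-1}C \end{pmatrix} z(t)
+ \begin{pmatrix} 0 \\ M^{-1}f(t) \end{pmatrix},
\qquad
z(t) = \begin{pmatrix} q(t) \\ \dot{q}(t) \end{pmatrix}
```

The coupled circulation evolution equation is then appended to this first-order system, with its past history supplying part of the initial data, as the remaining bullets indicate.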

  47. Complete Mathematical Model • $A$ is a singular $8 \times 8$ matrix: its last row is zero • $A(s)$: $A_{8i}(s) = 0$, $i = 1, 2, \dots, 7$ • $A_{88}(s) = \left[(Us - 2)/Us\right]^{1/2}$, $U$ constant • $B$ is a constant matrix, $B(s)$ is smooth • Non-atomic neutral functional differential equation (NFDE)

  48. Non-Atomic NFDE • Need a theory of non-atomic NFDEs • Well-posedness results • Approximation techniques • Parameter identification • Validation of the model

  49. Abstract Cauchy Problem
