Parallel Computation for Applications in BioMedical CFD Simulations: A Brief Lecture
Brian Henry
Advisor: Andreas Linninger
Laboratory for Process and Product Design, Chicago, IL, 60607
What am I going to be talking about? • What is parallel computation, and how can the LPPD benefit? • T-Grid: what it has done for us so far • Statistical analysis of finished simulations • How can we expand our simulations? • Papers, both finished and unfinished • Michael Larivieve collaboration paper • New: Parallel Computation in Medical Imaging Applications paper for the International Journal of Biomedical Imaging • Dennis Chau collaboration? • TeraGrid full research proposal: by end of semester • Senior Design -> triumphs and woes • LPPD computer fixing: good/bad news
What is Parallel Computing? • There are two key things to focus on with parallel computation: processor count and memory • Imagine a job that takes 20 hours on a lab computer with 4 processors • If 4 processors take 20 hours, wouldn't 8 processors take 10 hours? • After all, you have twice the resources working on the same job
What is Parallel Computing? • The other parameter, physical memory, can be an issue for the LPPD supercomputer (16GB of memory) • With most supercomputing resources, however, physical memory is not an issue • Memory for a single job can be up to 128GB on Pople, and up to a terabyte on Abe, so we're good • Because of this, you might expect a simple linear scaling rule to dictate CPU execution time under parallel computation: double the processors, halve the runtime
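Under that linear assumption, estimating a parallel runtime is just division. A minimal sketch of the arithmetic, using the 20-hour, 4-processor job from the earlier slide (the 64-core case is illustrative):

```python
def ideal_runtime(known_hours, known_procs, target_procs):
    """Linear-scaling assumption: runtime is inversely
    proportional to the number of processors."""
    return known_hours * known_procs / target_procs

# The 20-hour job on the 4-core lab machine, naively rescaled:
print(ideal_runtime(20, 4, 8))   # -> 10.0 hours under ideal scaling
print(ideal_runtime(20, 4, 64))  # -> 1.25 hours under ideal scaling
```

The next slide shows why this naive rule breaks down in practice.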
LPPD Linear Interpolation = FALSE • This is actual LPPD test data: • In order to halve the time to ~5 hours, 2x the processors were needed • BUT, to cut the time to ~2.5 hours, 4x the processors were needed • The more processors you add, the harder it is to halve the execution time • We currently have access to a maximum of 64 processors on Abe and 32 processors on Pople
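This pattern — each further halving of runtime demanding ever more processors — is what Amdahl's law predicts whenever some fraction of a job is inherently serial. A minimal sketch; the 5% serial fraction below is an illustrative assumption, not a value fitted to the LPPD test data:

```python
def amdahl_speedup(n_procs, serial_fraction):
    """Amdahl's law: only the parallel part, (1 - serial_fraction),
    goes faster as processors are added; the serial part does not."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_procs)

# Even a 5% serial fraction caps how much extra processors help:
for n in (2, 4, 8, 16, 32, 64):
    print(f"{n:2d} procs -> speedup {amdahl_speedup(n, 0.05):.2f}x")
```

With a 5% serial fraction, 64 processors yield roughly a 15x speedup rather than 64x, which matches the slide's diminishing-returns observation qualitatively.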
So, is it worth it? • Hell yes it is worth it. • We have simulations that can potentially run 20+ days • Using 64 cores can bring 20+ days (480 hours) down to a bit less than 3 days (~60 hours) • And, we don't have to worry about: • Memory • Other users • Spending $$ on a new machine
New test case • Test job: 50-60 hours • Run time: 7 hours 53 minutes
What still needs to be done? • UDF file association • All of our test cases run with UDF files to help determine boundary conditions • The UDF has to be interpreted in the batch script file • This can be done by remotely running a GUI on T-Grid • Run an LPPD test case and compare data
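A batch submission for this kind of Fluent job might look like the sketch below. This is a hedged example, not the LPPD's actual script: the queue name, node/core counts, walltime, and journal file name (run.jou) are all placeholders, and it assumes the journal file loads the case and interprets the UDF before iterating, so no GUI is needed. Actual directives would need to be checked against the T-Grid site documentation.

```shell
#!/bin/bash
#PBS -l nodes=8:ppn=8          # placeholder: 64 cores total
#PBS -l walltime=24:00:00      # placeholder walltime
#PBS -q batch                  # placeholder queue name

cd $PBS_O_WORKDIR

# 3ddp = 3D double precision; -g = no GUI; -t64 = 64 processes;
# -i run.jou = journal file (hypothetical name) that reads the case,
# interprets the UDF, and runs the iterations.
fluent 3ddp -g -t64 -i run.jou > run.log 2>&1
```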
Papers • Currently collaborating with Michael Larivieve, supplying meshes, data, etc. • I am going to be working on: • A Parallel Computation in Medical Imaging Applications paper for the International Journal of Biomedical Imaging • Writing a full research proposal for T-Grid • ** I am willing to collaborate with anyone else as well, if you need an extra pair of hands or simulations done
Working LPPD Computers • I have optimized/cleaned 3 computers (so far) in the LPPD for Ying and her undergraduates • I also put Fluent/Gambit on these computers • What these computers CAN do: run simple simulations • What these computers CAN'T do: run simulations effectively • I do think an extra computer is needed, because the simulations Ying's undergraduates run are becoming more CPU-intensive, not to mention what happens over the summer • The computers in the lab are becoming more and more outdated and underpowered • Other news: for computer maintenance, I have put an optimization tool on the S Drive for anyone to use • TuneUp Utilities 2011 • NOT a replacement for: • routine maintenance • antivirus • computer illiteracy