Support for Adaptive Computations Applied to Simulation of Fluids in Biological Systems
Immersed Boundary Method Simulation in Titanium

Objectives
• Provide an easy-to-use, high performance tool for simulation of fluid flow in biological systems.
• Two specific goals:
  • A scalable, parallel version of the heart simulation to run on machines like Blue Horizon.
  • A toolkit for general Immersed Boundary Method simulation.

Immersed Boundary Method
• Developed by Peskin and McQueen.
• Models elastic fibers in an incompressible fluid.
• Used for the heart, blood clotting, embryos, ciliated cells, and more.
• The heart model has been used to design artificial heart valves.
• The NYU software runs on vector and shared memory machines.
• 4-phase algorithm per timestep (sketched after the Conclusions): fiber activation & force calculation → spread force → Navier-Stokes solver → interpolate velocity.

Titanium
• High performance Java dialect.
• Explicitly parallel SPMD model, bulk synchronous style, global address space.
• Language level enhancements to support parallelism and performance (see the sketch after the Conclusions):
  • Immutable "value" classes
  • Multi-dimensional arrays
  • Region-based memory management
• Compiled into C with lightweight message passing.

Titanium Implementation Status
• Compiler and runtime systems for:
  • Uniprocessors
  • SMPs running POSIX threads
  • Clusters with:
    • Shared memory: SGI Origin (ANL), Tera MTA (SDSC)
    • Global address space: T3E (NERSC)
    • Active Messages: NOW & Millennium (UCB)
    • LAPI: IBM SP2 and SP3 (SDSC)

Immersed Boundary Software in Titanium
• Immersed Boundary Method software package written in Titanium.
• Based on the "generic" software from NYU.
• Sufficient for the heart model.
• Functionally complete as of 10/2000.
• A contractile torus simulation has been run on the Berkeley NOW and the ANL Origin.

Challenges to Parallelization
• Scalability of the current algorithm:
  • Irregular fiber points interact with the regular fluid lattice.
  • The elliptic solver is communication-intensive.
• Scalability of the future algorithm:
  • Add adaptivity to the fluid solver for accuracy.
  • Adaptivity further complicates communication.

Conclusions
• Performance tuning has started:
  • Uniprocessor performance: compiler enhancements underway; data collected with C code for 2 kernels.
  • Communication: alternate data layouts; an extension for irregular communication.
• Titanium support aided development:
  • ~7 months to develop a running version.
  • Indications for small language extensions.
• Future plans:
  • Add adaptivity in the fluid solver.
  • Performance tuning, especially for the IBM SP.
  • Add IB functionality for other biological systems (bending angles, anchorage points, sources & sinks).
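
Sketch: the 4-phase immersed boundary timestep. This is a minimal, hedged outline of the cycle named in the Immersed Boundary Method panel; the types and method names (FiberList, FluidGrid, activateAndComputeForces, spreadForces, solveNavierStokes, interpolateVelocityAndMove) are illustrative assumptions, not the actual NYU or Berkeley API.

```
// Hypothetical interfaces standing in for the fiber and fluid data structures.
interface FluidGrid {
  void spreadForces(FiberList fibers);   // add fiber forces to the regular lattice
  void solveNavierStokes(double dt);     // advance the incompressible fluid
}

interface FiberList {
  void activateAndComputeForces();                       // elastic/contractile forces on fiber points
  void interpolateVelocityAndMove(FluidGrid fluid, double dt); // move fibers with the local flow
}

class IBTimestep {
  // One timestep of the 4-phase algorithm from the poster.
  static void step(FiberList fibers, FluidGrid fluid, double dt) {
    fibers.activateAndComputeForces();            // Phase 1: fiber activation & force calculation
    fluid.spreadForces(fibers);                   // Phase 2: spread force onto the fluid lattice
    fluid.solveNavierStokes(dt);                  // Phase 3: Navier-Stokes solver (elliptic solve
                                                  //          is the communication-intensive part)
    fibers.interpolateVelocityAndMove(fluid, dt); // Phase 4: interpolate velocity back to the fibers
  }
}
```

The spread and interpolate phases are where the irregular fiber points touch the regular fluid lattice, which is the interaction highlighted under Challenges to Parallelization.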
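
Sketch: Titanium-style SPMD array code. A minimal sketch of the language features listed in the Titanium panel (multi-dimensional arrays over rectangular domains, foreach iteration, bulk-synchronous barriers in a global address space). The grid size and variable names are assumptions, and the fragment is not taken from the heart code.

```
class FluidPatchSketch {
  public static void main(String[] args) {
    // SPMD: every process executes main(); Ti.thisProc()/Ti.numProcs()
    // identify this process within the team.
    int me = Ti.thisProc();
    int procs = Ti.numProcs();

    // Titanium multi-dimensional array over a rectangular index domain
    // (a 64^3 patch is an illustrative size, not the heart grid).
    RectDomain<3> interior = [[0, 0, 0] : [63, 63, 63]];
    double [3d] pressure = new double[interior];

    // foreach visits every point of the domain in an unspecified order,
    // which the compiler can exploit for optimization.
    foreach (p in pressure.domain()) {
      pressure[p] = 0.0;
    }

    // Bulk-synchronous style: all processes synchronize before the next
    // phase reads remotely held data through the global address space.
    Ti.barrier();
  }
}
```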