Application of Parallel Processing to Probabilistic Fracture Mechanics Analysis of Gas Turbine Disks. Harry Millwater¹, Brian Shook¹, Sridhar Guduru², George Constantinides¹. 1 - Department of Mechanical Engineering, 2 - Department of Computer Science, University of Texas at San Antonio.
Overview • Introduction • Methodology overview • UTSA parallel processing network • Application problems • Future work • Summary and Conclusions
DARWIN® Overview (Design Assessment of Reliability With INspection) • Anomaly Distribution • NDE Inspection Schedule • Probability of Detection • Pf vs. Flights • Finite Element Stress Analysis • Probabilistic Fracture Mechanics • Risk Contribution Factors • Material Crack Growth Data
Risk Assessment Results • Risk of Fracture on Per Flight Basis
Risk Contribution Factors • Identify Regions of Rotor With Highest Risk of Failure
Zone-based Risk Assessment • Define zones based on similar stresses, inspections, defect distributions, lifetimes • Defect probability determined by defect distribution and zone volume • Probability of failure assuming a defect computed using Monte Carlo sampling or advanced methods • Zone: Pi = Pi[A] × Pi[B|A], where Pi[A] = prob. of having a defect and Pi[B|A] = prob. of failure given a defect • Disk: Pf,disk obtained by combining the zone probabilities Pi
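The zone formula above can be sketched in a few lines. This is a minimal illustration, not DARWIN's implementation: the helper names are hypothetical, the failure model is a user-supplied stand-in, and the disk-level combination assumes statistically independent zones via the standard series-system formula Pf,disk = 1 − Π(1 − Pi).

```python
import random

def zone_risk(p_defect, fails_given_defect, n_samples=100_000, seed=0):
    """Estimate one zone's Pi = Pi[A] * Pi[B|A] by Monte Carlo sampling.

    p_defect           -- Pi[A]: probability the zone contains an anomaly
                          (defect distribution x zone volume)
    fails_given_defect -- hypothetical model: returns True if a sampled
                          defect grows to fracture within the design life
    """
    rng = random.Random(seed)
    failures = sum(fails_given_defect(rng) for _ in range(n_samples))
    return p_defect * failures / n_samples  # Pi[A] * estimated Pi[B|A]

def disk_risk(zone_risks):
    """Combine zone probabilities, assuming independent zones:
    Pf,disk = 1 - prod(1 - Pi)."""
    p_survive = 1.0
    for p in zone_risks:
        p_survive *= 1.0 - p
    return 1.0 - p_survive
```

For small zone probabilities this combination is close to the simple sum of the Pi, which is why per-zone risk contributions can be ranked directly.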
Zone-based Risk Assessment • (Figure: impeller cross-section divided into zones)
Use of DARWIN by Industry • FAA Advisory Circular 33.14 requests that risk assessment be performed for all new titanium rotor designs • Designs must pass the design target risk for rotors • (Chart: risk of components A, B, C vs. the maximum allowable risk of 10⁻⁹; components above the limit require risk reduction)
Spatial Zone-based Domain Decomposition • (Diagram: via the GUI, the user splits N zones into input files Job-S1.dat, Job-S2.dat, …, Job-SK.dat; inputs are distributed to Windows 2000 PCs and Unix workstations, and results are returned)
Spatial Zone-based Domain Decomposition • Divide the zones into any number of input files • Number of zones per file is user defined (down to one zone per file) • Graphical interface creates the input files: jobname-S1.dat, jobname-S2.dat, ... • Also creates jobname-Master.dat, which contains all zones and a list of all “worker” input files, e.g., jobname-S1 • User runs the jobname-S*.dat input files in parallel, creating jobname-S*.ddb files • jobname-Master.dat is then run to combine the jobname-S* results
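The file-splitting step above can be sketched as follows. The function name, file contents, and zone representation are illustrative only; real DARWIN input files are not plain zone lists, but the master/worker naming scheme mirrors the slides.

```python
from pathlib import Path

def create_parallel_file_set(jobname, zones, zones_per_file):
    """Split a zone list into worker files jobname-S1.dat, jobname-S2.dat, ...
    and write a jobname-Master.dat listing the workers (illustrative sketch)."""
    workers = []
    for k, start in enumerate(range(0, len(zones), zones_per_file), start=1):
        name = f"{jobname}-S{k}.dat"
        # Each worker file holds a contiguous chunk of zones.
        Path(name).write_text("\n".join(zones[start:start + zones_per_file]))
        workers.append(name)
    master = f"{jobname}-Master.dat"
    # The master file records the worker files whose .ddb results it combines.
    Path(master).write_text("\n".join(workers))
    return workers, master
```

With 5 zones and 2 zones per file, this produces three worker files (the last holding the remainder) plus the master list.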
User Definition of Input Files • Select “Create Parallel File Set” • User specifies number of files • Result – master and worker files are stored for future execution
Spatial Zone-based Domain Decomposition • Zone analyses can be run independently, but some random variables are dependent across zones • Stress scatter factor and time of inspection are fully dependent • Approach: dependent variables enforce the same starting seed; independent variables enforce a random starting seed
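The seed strategy above can be sketched like this. The variable names and the seed value are hypothetical; the point is only that every worker reproduces identical draws for dependent variables while independent variables differ across workers.

```python
import random

# Variables that are fully dependent across zones (per the slide:
# stress scatter factor, time of inspection). Names are illustrative.
DEPENDENT_VARS = {"stress_scatter", "inspection_time"}

def make_streams(var_names, shared_seed=12345):
    """Give each random variable its own stream: dependent variables get
    the same fixed seed in every worker (so all workers draw identical
    sequences), while independent variables seed from OS entropy."""
    return {
        name: random.Random(shared_seed if name in DEPENDENT_VARS else None)
        for name in var_names
    }
```

Two workers calling `make_streams` will then agree on every stress-scatter draw but sample independent defect sizes.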
Job Scheduler • Condor (http://www.cs.wisc.edu/condor/) job scheduler implemented at UTSA • Free and publicly available • Cross platform: Windows, Mac OS X, Unix (HP, Linux, SGI, SUN) • Makes use of unused compute time (“cycle stealing”) • Can activate/deactivate depending upon computer usage • Works efficiently with a heterogeneous set of computers • Processes a queue of jobs using available machines • Allows individual jobs to specify minimum system requirements • Handles inter-job dependencies, i.e., job sequences • Migrates interrupted jobs to available machines • Allows users to set job priorities
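A Condor submit description for the worker jobs might look like the sketch below. All file names, the memory requirement, and the job count are hypothetical; note that Condor's `$(Process)` macro counts from 0, whereas the slides number worker files from S1.

```
# Hypothetical HTCondor submit description for the DARWIN worker jobs.
universe                = vanilla
executable              = darwin.exe
arguments               = jobname-S$(Process).dat
transfer_input_files    = jobname-S$(Process).dat, stress_results.dat
should_transfer_files   = YES
when_to_transfer_output = ON_EXIT
requirements            = (Memory >= 256)
output                  = jobname-S$(Process).out
error                   = jobname-S$(Process).err
log                     = jobname.log
queue 80
```

The `requirements` line is what lets individual jobs specify minimum system requirements, and the single `log` file is how inter-job status is tracked.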
UTSA Parallel Processing Network • 39 Windows 2000 PCs, 4-CPU SGI Origin - College of Engineering resources • More machines can be added as they come online • Non-dedicated resources: computers are primarily used for teaching and research, so resources fluctuate during an analysis and dynamic load balancing is essential • Condor runs as “non-intrusive”: any keyboard or mouse activity suspends Condor jobs, which resume after 5 minutes of inactivity
UTSA Parallel Processing Network • 39 Windows PC’s, 4 CPU SGI Origin, 4 locations
UTSA Parallel Processing Network • (Chart: average FLOPS of the network machines)
UTSA Network Availability • (Chart: machine availability over a 24-hour duration at 1-minute intervals, averaged over one week)
Application Examples • 80-zone (best case): executable installed beforehand, dedicated local network, homogeneous network; otherwise a realistic problem • 6250-zone (worst case): executable passed each time, shared distributed network, heterogeneous network; a hypothetical problem to test the limits of the system
Application Examples • (Figure: disk cross-section discretized into zones; rotational speed 6800 rpm; dimensions R1, R2, L, t, w)
Level 1 - 80 Zone Example • 80-zone AC problem • Divide into 80 input files and a master: ac80-S1.dat, ac80-S2.dat, …, ac80-S80.dat, ac80-master.dat • Run the worker files in parallel • Run ac80-Master to combine results • Half of the cross-section modeled due to symmetry
80 Zone Example • Cluster of 5 PCs (900 MHz, 256 MB RAM) • Near-linear speedup • Further increases expected with more computers
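The speedup and efficiency figures quoted in these slides follow the standard definitions, sketched below for reference (function names are mine):

```python
def speedup(t_serial, t_parallel):
    """S = T_serial / T_parallel; S = N is ideal (linear) speedup on N machines."""
    return t_serial / t_parallel

def efficiency(t_serial, t_parallel, n_machines):
    """E = S / N; E = 1.0 means perfectly linear scaling."""
    return speedup(t_serial, t_parallel) / n_machines
```

For example, a job that drops from 100 minutes serial to 25 minutes on 5 machines has speedup 4 and efficiency 0.8.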
6250 Zones • 29 PCs • Hypothetical problem to test the system • Executable passed each time: a convenient way to update the executable, but it adds substantial communication time • Files sent to each worker: darwin.exe (5.5 MB), finite element stress results (2 MB), input file (10-200 KB) • Files returned: jobname.ddb (results database, up to 10 MB)
6250 Zones • Determine the optimum file size for parallel processing • User defines how many zones to include in each input file • Too many (few large files) - poor load balancing • Too few (many small files) - increased communication overhead
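This load-balancing vs. overhead trade-off can be illustrated with a toy scheduling simulation. Everything here is hypothetical (equal-cost zones, a fixed per-file transfer overhead, machines distinguished only by a speed factor); the real network is far messier, but the U-shaped trade-off survives: too few files leaves fast machines idle, too many files pays repeated communication overhead.

```python
import heapq
import math

def simulated_walltime(n_zones, n_files, machine_speeds,
                       zone_cost=1.0, per_file_overhead=0.5):
    """Greedy list-scheduling of n_files equal chunks onto machines of
    differing speed; the slowest-finishing machine sets the wall time.
    All cost units are hypothetical."""
    zones_per_file = math.ceil(n_zones / n_files)
    work = zones_per_file * zone_cost + per_file_overhead  # cost of one file
    # Min-heap of (finish_time, speed): always hand the next file to the
    # machine that frees up first, as a dynamic scheduler would.
    heap = [(0.0, s) for s in machine_speeds]
    heapq.heapify(heap)
    for _ in range(n_files):
        t, s = heapq.heappop(heap)
        heapq.heappush(heap, (t + work / s, s))
    return max(t for t, _ in heap)
```

Sweeping `n_files` for a mixed-speed pool shows an interior optimum, consistent with the slides' finding of roughly 120 files for 29 machines.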
6250 Zones Results • Optimum result: approximately 120 files, 52 zones in each • Batch: 384 minutes; Parallel: 27 minutes
Speedup / Efficiency Results • Recommended number of files: about 3 to 4 times the number of computers available
Future Application • Engine health management (fusion of damage-based risk assessment with statistical reasoning tools) • Determine optimum inspection times • Examine effects of usage
Future Work • Apply multi-threading technology for shared-memory multi-processor computers • Use OpenMP - cross-platform Fortran & C standard (www.openmp.org) • Transparent to the user: no input file or runtime changes • Automatically takes advantage of multiple CPUs if present; no slowdown if only one CPU • Will work for single-zone or multiple-zone analyses
Summary and Conclusions • Zone-based spatial domain decomposition methodology developed for probabilistic analysis of gas turbine disks with inherent material anomalies • Regions of the disk cross-section are solved in parallel, then recombined • User defines the number of zones within an input file for local optimization • Condor job scheduler used to distribute & manage jobs • Near-linear speedup for the optimum situation, i.e., executable previously installed, dedicated system • Speedup of 16 and efficiency of 77% realized for a large number of zones on a heterogeneous, shared processing network
Summary and Conclusions • New methodology significantly reduces execution time for multi-zone problems (severalfold reduction) • Future application to engine health management is straightforward