Simulation of Chemical Reactor Performance – A Grid-Enabled Application –
Kenneth A. Bishop, Li Cheng, Karen D. Camarda
The University of Kansas
kbishop@ku.edu
Presentation Organization
• Application Background
  • Chemical Reactor Performance Evaluation
• Grid Assets In Play
  • Hardware Assets
  • Software Assets
• Contemporary Research
  • NCSA Chemical Engineering Portal Application
  • Cactus Environment Application
Chemical Reactor Description
[Reactor schematic: feed enters tubes packed with V2O5 catalyst; molten-salt coolant surrounds the tubes]
• Feed: o-xylene : air mixture
• Products: phthalic anhydride
• Catalyst: V2O5, packed in tubes
• Coolant: molten salt
• Reaction Conditions: temperature 640–770 K, pressure 2 atm
(The balanced main-reaction stoichiometry is sketched below.)
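For reference, the primary reaction carried out in this reactor is the V2O5-catalyzed partial oxidation of o-xylene to phthalic anhydride; a balanced stoichiometry for the main path (the tolualdehyde, phthalide, and COx shown in later slides arise from side and over-oxidation paths) is:

```latex
% Partial oxidation of o-xylene to phthalic anhydride over a V2O5 catalyst
\mathrm{C_8H_{10} + 3\,O_2 \;\xrightarrow{\;V_2O_5\;}\; C_8H_4O_3 + 3\,H_2O}
```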
Simulator Capabilities
• Reaction Mechanism: Heterogeneous Or Pseudo-homogeneous
• Reaction Path: Three-Species Or Five-Species Paths
• Flow Phenomena: Diffusive vs Bulk And Radial vs Axial
• Excitation: Composition And/Or Temperature
Chemical Reactor Start-up
[Temperature field over the tube: radius (center to wall) vs axial position (entrance to exit); color scale 640–770 K]
• Initial Condition: feed nitrogen, feed temperature 640 K, coolant temperature 640 K
• Final Condition: feed 1% ortho-xylene, feed temperature 683 K, coolant temperature 683 K
Reactor Start-up: t = 60
[Contour panels, low-to-high scale: Temperature, Ortho-Xylene, Phthalic Anhydride, Tolualdehyde, Phthalide, COx]
Reactor Start-up: t = [later time; value not recovered]
[Contour panels, low-to-high scale: Temperature, Ortho-Xylene, Phthalic Anhydride, Tolualdehyde, Phthalide, COx]
Grid Assets In Play – Hardware
• The University of Kansas
  • JADE: O2K [6], 250 MHz R10000, 512 MB RAM
  • PILTDOWN: Indy [1], 175 MHz R4400, 64 MB RAM
  • Linux Workstations
  • Windows Workstations
• University of Illinois (NCSA)
  • MODI4: O2K [48], 195 MHz R10000, 12 GB RAM
  • Linux Clusters (IA32 [968] & IA64 [256])
• Boston University
  • LEGO: O2K [32], 195 MHz R10000, 8 GB RAM
Grid Assets In Play – Software
• The University of Kansas
  • IRIX 6.5: Globus 2.0 (host); COG 0.9.13 [Java] (client); Cactus
  • Linux: Globus 2.0 (host); COG 0.9.13 [Java] (client); Cactus
  • Windows 2K: COG 0.9.13 (client); Cactus
• University of Illinois (NCSA)
  • IRIX 6.5: Globus 2.0 (host); COG 0.9.13 (client); Cactus
  • Linux: Cactus
• Boston University
  • IRIX 6.5: Globus 1.1.3 (host); COG 0.9.13 (client); Cactus
Research Projects
• Problem Complexity: Initial (Target)
  • Pseudo-homogeneous (Heterogeneous) Kinetics
  • Temperature And Feed Composition Excitation
  • 1,500 (70,000) Grid Nodes & 200 (1,000) Time Steps
• Applications
  • Alliance Chemical Engineering Portal; Li Cheng
    • Thrust: Distributed Computation Assets
    • Infrastructure: Method of Lines, XCAT Portal, DDASSL (a minimal sketch follows)
  • Cactus Environment; Karen Camarda
    • Thrust: Parallel Computation Algorithms
    • Infrastructure: Crank-Nicolson, Cactus, PETSc
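To illustrate the method-of-lines formulation used in the portal application, the sketch below discretizes a deliberately simplified one-dimensional pseudo-homogeneous energy/species balance in the axial direction and hands the resulting ODE system to a stiff integrator. This is a minimal sketch only: SciPy's BDF integrator stands in for DDASSL, and every kinetic and transport parameter shown is an assumed placeholder rather than a value from the KU simulator.

```python
# Minimal method-of-lines sketch: axial discretization of a simplified
# pseudo-homogeneous tubular-reactor model. All parameters are assumed
# placeholders, NOT the values used in the KU simulator; SciPy's BDF
# integrator stands in for DDASSL.
import numpy as np
from scipy.integrate import solve_ivp

N = 200                     # axial grid nodes
L = 3.0                     # reactor length, m (assumed)
dz = L / (N - 1)
u = 1.0                     # superficial velocity, m/s (assumed)
k0, Ea_R = 1.0e5, 1.3e4     # rate pre-factor and Ea/R, K (assumed)
dH = -1.3e6                 # lumped heat-of-reaction term (assumed)
Tc = 683.0                  # coolant temperature, K
Ua = 5.0                    # lumped wall heat-transfer coefficient (assumed)
rho_cp = 1.0e3              # lumped volumetric heat capacity (assumed)

def rhs(t, y):
    """y = [C_0..C_{N-1}, T_0..T_{N-1}]: o-xylene concentration and temperature."""
    C, T = y[:N], y[N:]
    r = k0 * np.exp(-Ea_R / T) * C          # first-order consumption rate
    dC, dT = np.empty(N), np.empty(N)
    dC[0], dT[0] = 0.0, 0.0                 # inlet node held at feed state
    # upwind convection + reaction + wall cooling on interior/exit nodes
    dC[1:] = -u * (C[1:] - C[:-1]) / dz - r[1:]
    dT[1:] = (-u * (T[1:] - T[:-1]) / dz
              + (-dH) * r[1:] / rho_cp
              - Ua * (T[1:] - Tc) / rho_cp)
    return np.concatenate([dC, dT])

# Start-up excitation: nitrogen-filled bed at 640 K, feed switched to
# 1% o-xylene at 683 K (feed concentration below is assumed).
C0 = np.zeros(N); C0[0] = 0.4
T0 = np.full(N, 640.0); T0[0] = 683.0
sol = solve_ivp(rhs, (0.0, 200.0), np.concatenate([C0, T0]), method="BDF")
print("exit temperature at final time:", sol.y[-1, -1], "K")
```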
ChE Portal Project Plan
• Grid Asset Deployment
  • Client: KU
  • Host: KU or NCSA or BU
• Grid Services Used
  • Globus Resource Allocation Manager (GRAM)
  • GridFTP
• Computation Distribution (File Transfer Load)
  • Direct-to-Host Job Submission (Null)
  • Client: Job Submission; Host: Simulation (Negligible)
  • Client: Simulation; Host: ODE Solver (Light)
  • Client: Solver; Host: Derivative Evaluation (Heavy)
(A sketch of job submission and file staging with these grid services follows.)
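The sketch below shows one way the GRAM and GridFTP services listed above can be driven from a client script. It is a hedged illustration only: the actual portal used the Java CoG Kit and XCAT rather than this approach, and the hostname, file paths, and RSL contents are assumed placeholders. Only standard Globus Toolkit 2.0 command-line tools (grid-proxy-init, globus-url-copy, globusrun) are invoked.

```python
# Hedged sketch of remote job submission and file staging with Globus
# Toolkit 2.0 command-line tools, driven from Python. The actual portal
# used the Java CoG Kit / XCAT; the hostname, paths, and RSL shown here
# are illustrative placeholders.
import subprocess

HOST = "modi4.ncsa.uiuc.edu"   # assumed GRAM gatekeeper contact
RSL = '&(executable="/home/user/reactor_sim")(count=1)'  # assumed RSL spec

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# 1. Obtain a proxy credential (prompts for the grid passphrase).
run(["grid-proxy-init"])

# 2. Stage the input deck to the host with GridFTP.
run(["globus-url-copy",
     "file:///home/user/reactor_input.dat",
     f"gsiftp://{HOST}/home/user/reactor_input.dat"])

# 3. Submit the simulation through GRAM using an RSL specification.
run(["globusrun", "-o", "-r", HOST, RSL])

# 4. Retrieve the results.
run(["globus-url-copy",
     f"gsiftp://{HOST}/home/user/reactor_output.dat",
     "file:///home/user/reactor_output.dat"])
```

The heavier distribution modes in the list above (client-side solver, host-side derivative evaluation) correspond to exchanging such files at every solver call, which is what makes the file-transfer load the dominant cost.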
ChE Portal Project Results
• Run Times (Wall Clock Minutes)

  Load \ Host   PILTDOWN   JADE    MODI4
  Null            76.33    22.08    7.75
  Negligible        NA     27.76    8.25
  Light             NA     35.08   13.49
  Heavy          2540 *      NA    15.00 **

• * 211,121 Derivative Evaluations
• ** Exceeded Interactive Queue Limit After 3 Time Steps (10,362 Derivative Evaluations)
ChE Portal Project Conclusions
• Conclusions
  • The Cost Of The Benefits Associated With The Use Of Grid-Enabled Assets Appears Negligible.
  • The Portal Provides Robust Mechanisms For Managing Grid-Distributed Computations.
  • The Cost Of Standard File-Transfer Procedures As A Message-Passing Mechanism Is Extremely High.
• Recommendation
  • A High Priority Must Be Assigned To Development Of High-Performance Alternatives To Standard File Transfer Protocols.
Cactus Project Plan
• Grid Asset Deployment
  • Client: KU
  • Host: NCSA (O2K, IA32 Cluster, IA64 Cluster)
• Grid Services Used
  • MPICH-G
• Cactus Environment Evaluation
  • Shared Memory vs Message Passing
  • Problem Size: 5×10^5 – 1×10^8 Algebraic Equations
  • Grid Assets: 0.5 – 8.0 O2K Processor Minutes; 0.1 – 4.0 IA32 Cluster Processor Minutes
  • Application Script Use
(A minimal message-passing sketch of the underlying domain decomposition follows.)
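To make the shared-memory vs message-passing comparison concrete, the sketch below shows the communication pattern a message-passing run relies on: a one-dimensional domain decomposition with ghost-zone exchange between neighboring ranks. It is a minimal illustration written with mpi4py under assumed sizes and a simple explicit update; the real application runs Cactus thorns over MPICH-G with a Crank-Nicolson/PETSc solve rather than the update shown here.

```python
# Minimal message-passing sketch: 1-D domain decomposition with ghost-zone
# exchange, the pattern underlying a Cactus/MPICH-G run. mpi4py is used
# for illustration; grid sizes and the explicit update are placeholders.
# Run with, e.g.: mpiexec -n 4 python halo_sketch.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

N_GLOBAL = 1_000            # global axial nodes (assumed)
n_local = N_GLOBAL // size  # interior nodes owned by this rank

# local temperature field with one ghost cell on each side
T = np.full(n_local + 2, 640.0)
if rank == 0:
    T[0] = 683.0            # inlet ghost cell held at feed temperature (assumed)
left = rank - 1 if rank > 0 else MPI.PROC_NULL
right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

for step in range(200):
    # exchange ghost zones with neighboring ranks
    comm.Sendrecv(T[1:2], dest=left, recvbuf=T[-1:], source=right)
    comm.Sendrecv(T[-2:-1], dest=right, recvbuf=T[0:1], source=left)
    # explicit diffusion-like update on interior nodes (stand-in for the
    # Crank-Nicolson/PETSc solve performed by the real code)
    T[1:-1] += 0.1 * (T[2:] - 2.0 * T[1:-1] + T[:-2])

print(f"rank {rank}: mean T = {T[1:-1].mean():.2f} K")
```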
Cactus Project Conclusions
• Conclusions
  • The IA32 Cluster Outperforms The O2K On The Small Problems Run To Date (IA32 Is Faster Than The O2K; IA32 Speedup Exceeds O2K Speedup).
  • The Cluster Computations Appear Somewhat Fragile (Convergence Problems Encountered Above A 28-Node Cluster Configuration; Similar (?) Problems With The IA64 Cluster).
  • The Grid Service (MPICH-G) Evaluation Has Only Begun.
• Recommendations
  • Continue The Planned Evaluation Of Grid Services.
  • Continue The Planned IA64 Cluster Evaluation.
Overall Conclusions
• The University Of Kansas Is Actively Involved In Developing The Grid-Enabled Computation Culture Appropriate To Its Research & Teaching Missions.
• Local Computation Assets Appropriate To Topical Application Development And Use Are Necessary.
• Understanding Of And Access To Grid-Enabled Assets Are Necessary.