Computational Grid
Carl Kesselman, Hongsuda Tangmunarunkit, David Okaya, Kim Olsen
Outline
• SCEC Grid Infrastructure
• Computational Pathway
• Interaction Plan with Pathway I and AI researchers
Computational Grid Status
• Currently we are using USCGrid (since August 2002)
• HPC: a cluster of Linux machines
• Almaak: a 64-CPU shared-memory machine
• Terra: an 8-CPU shared-memory machine
• Any researcher with a USC account can access USCGrid, provided the access permission is set up in advance (see the access sketch below)
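For concreteness, a minimal sketch of scripted grid access using the standard Globus Toolkit command-line tools (grid-proxy-init and globus-job-run are real tools; the gatekeeper host name below is a placeholder, not the actual USCGrid address):

```python
import subprocess

# Placeholder gatekeeper host; the real USCGrid address may differ.
GATEKEEPER = "hpc.usc.edu"

# Create a short-lived proxy credential from the user's grid certificate
# (grid-proxy-init prompts for the certificate pass phrase).
subprocess.run(["grid-proxy-init", "-valid", "12:00"], check=True)

# Run a trivial command on the remote cluster through the gatekeeper
# to confirm that access is working.
result = subprocess.run(
    ["globus-job-run", GATEKEEPER, "/bin/hostname"],
    check=True, capture_output=True, text=True,
)
print(result.stdout.strip())
```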
Computational Grid Plan (I)
• Goal: enable grid access from anywhere
• Client side:
  • Set up SCEC researchers at other universities (e.g., Kim Olsen at UCSB) to use the grid
  • They need to obtain NCSA or NPACI accounts and certificates
• The ultimate goal is to enable SCEC grid access through a web browser
Computational Grid Plan (II)
• Goal: expand computational resources
• Server side: gridify resources at
  • Pittsburgh Supercomputing Center: we already have two allocations on two platforms
  • San Diego Supercomputer Center: we have negotiated SCEC access to SDSC resources and plan to submit a proposal for resource allocation and usage
Current Computation Pathway on the Grid
[Dataflow diagram] Pre-computed CVM data (from CMU) and other inputs reside in storage; GridFTP transfers move data between storage and the UCSB parallel (||) code running on USCGrid.
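A minimal sketch of the GridFTP staging step shown in the diagram, using the real globus-url-copy client (both endpoint URLs are hypothetical):

```python
import subprocess

# Hypothetical GridFTP endpoints; real storage and cluster URLs will differ.
SRC = "gsiftp://storage.example.edu/scec/cvm/precomputed.dat"
DST = "gsiftp://hpc.usc.edu/scratch/scec/precomputed.dat"

# Transfer the pre-computed CVM data (from CMU) to the compute cluster;
# a second transfer in the other direction would retrieve the results.
subprocess.run(["globus-url-copy", SRC, DST], check=True)
```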
Computation Pathway Plan (Next Quarter)
[Dataflow diagram] Inputs (lat, long, depth, etc.) drive the CVM code (Harold's) and the UCSB parallel (||) code, both on USCGrid; outputs go to storage and to a local visualization display.
Note: we need to be able to access the CVM code through a command-line interface (see the sketch below).
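The note assumes a command-line interface to the CVM code; a minimal sketch of what querying such an interface from a script could look like (the cvm executable name and its flags are hypothetical):

```python
import subprocess

# Hypothetical CVM command-line interface; the executable name and
# flags are illustrative, not the actual CVM invocation.
lat, lon, depth = 34.05, -118.25, 5000.0  # example query point (depth in m)
result = subprocess.run(
    ["cvm", "--lat", str(lat), "--lon", str(lon), "--depth", str(depth)],
    check=True, capture_output=True, text=True,
)
print(result.stdout)  # e.g., velocity and density values at the query point
```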
Computational Pathway Plan (in 12 months)
[Dataflow diagram] Inputs (lat, long, depth, etc.) feed multiple velocity models (CVM1 ... CVMn) and the UCSB and other parallel (||) codes on SCECGrid; outputs go to storage, with data cached on a cluster and distributed visualization with basic functionality.
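A minimal sketch of the fan-out this plan implies: one batch job per velocity model, submitted with the real globus-job-submit tool (the gatekeeper address, executable path, and model names are hypothetical):

```python
import subprocess

GATEKEEPER = "gatekeeper.scecgrid.example.org"  # hypothetical SCECGrid host
models = ["CVM1", "CVM2", "CVM3"]  # stand-ins for CVM1 ... CVMn

# Submit one batch job per community velocity model; globus-job-submit
# prints a job contact string that can be polled for status later.
contacts = []
for model in models:
    out = subprocess.run(
        ["globus-job-submit", GATEKEEPER,
         "/usr/local/bin/run_pathway", "--model", model],
        check=True, capture_output=True, text=True,
    )
    contacts.append(out.stdout.strip())
print(contacts)
```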
Interaction with Pathway II
• Facilitate interactions among the different components
• Agree on input/output formats for
  • CVM models
  • Other physics-based models, e.g., UCSB, SDSU
• Define meta-data/ontology (see the sketch after this slide) for
  • Simulation codes: owner, version number, inputs, outputs, etc.
  • Data sets: producing code, parameter set, number of time steps, coordinates, architecture, etc.
• Parallelize the CVM code to run on the Grid, targeting a speedup roughly proportional to the number of CPUs
• Generate 4D data for validation/visualization
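One way to make the meta-data bullets concrete: a minimal sketch of records for simulation codes and data sets (the field names are illustrative, not an agreed schema):

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class CodeMetadata:
    # Descriptive record for a simulation code.
    name: str
    owner: str
    version: str
    inputs: List[str] = field(default_factory=list)
    outputs: List[str] = field(default_factory=list)

@dataclass
class DataSetMetadata:
    # Descriptive record for a data set produced by a code.
    code: str                      # name of the producing code
    parameters: Dict[str, float]   # parameter set used for the run
    time_steps: int                # number of time steps
    coordinates: str               # e.g., "lat/long/depth"
    architecture: str              # platform the data was produced on

# Example records (values are illustrative).
ucsb = CodeMetadata(name="UCSB wave propagation", owner="UCSB",
                    version="1.0", inputs=["CVM model", "source parameters"],
                    outputs=["4D wavefield volume"])
run1 = DataSetMetadata(code=ucsb.name, parameters={"depth": 5000.0},
                       time_steps=2000, coordinates="lat/long/depth",
                       architecture="Linux cluster")
```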
Other Plans
• Interaction with Pathway I
  • Run Pathway I simulations on the Grid
  • Enable parallel executions of the Pathway I code with different earthquake locations and times
  • Requires data management (e.g., data discovery, meta-data)
• Interaction with AI
  • Intelligent workflow manager: intelligently generate execution plans that are then scheduled on appropriate Grid resources (see the sketch after this list)
  • Define an ontology for grid resources and policies
  • Define meta-data/ontology for the current simulation codes and data (requires interaction with Pathway II)
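A minimal sketch of the execution-planning idea behind the workflow manager, assuming a flat task list and per-resource free-CPU counts (the resource names, CPU counts, and greedy policy are all illustrative):

```python
# Greedy execution planner sketch: assign each task to the resource
# with the most free CPUs that can still accommodate it.
resources = {"USCGrid/hpc": 64, "PSC": 128, "SDSC": 256}  # free CPUs (illustrative)
tasks = [("CVM extraction", 8), ("UCSB || code", 64), ("visualization", 4)]

plan = []
for name, cpus in tasks:
    fits = {r: free for r, free in resources.items() if free >= cpus}
    if not fits:
        raise RuntimeError(f"no resource can run {name}")
    best = max(fits, key=fits.get)   # pick the least-loaded resource
    resources[best] -= cpus          # reserve the CPUs
    plan.append((name, best))

for task, resource in plan:
    print(f"{task} -> {resource}")
```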