Grid Computing
• AEI Numerical Relativity Group has access to high-end resources at over ten centers in Europe/USA
• They want:
  • Bigger simulations, more simulations, and faster throughput
  • Intuitive I/O at the local workstation
  • No new systems/techniques to master!!
• How to make best use of these resources?
  • Provide easier access … no one can remember ten usernames, passwords, batch systems, file systems, … a great start!!!
  • Combine resources for larger production runs (more resolution badly needed!)
  • Dynamic scenarios … automatically use what is available
• Many other reasons for Grid Computing for computer scientists, funding agencies, supercomputer centers ...
Grand Picture
[Diagram: Grid-enabled Cactus runs distributed across machines (T3E: Garching; Origin: NCSA) via Globus, with simulations launched from the Cactus Portal. Remote viz and steering from Berlin, remote viz in St Louis, remote steering and monitoring from an airport, and viz of data from previous simulations in an SF café. Data moves over http with HDF5, using DataGrid/DPSS, downsampling, and isosurfaces.]
Thorn HTTPD
• Thorn which allows any simulation to act as its own web server
• Connect to the simulation from any browser, anywhere
• Monitor run: parameters, basic visualization, ...
• Change steerable parameters (a rough sketch of the idea follows this slide)
• See a running example at www.CactusCode.org
• Wireless remote viz, monitoring and steering
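As a rough illustration of what the HTTPD thorn provides, here is a minimal Python sketch, not the actual thorn (which is C code inside Cactus): a simulation loop with an embedded web server for monitoring and steering. The parameter names and port number are assumptions made for the example.

```python
# Minimal sketch of a simulation acting as its own web server.
import json
import threading
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical steerable parameters; a real thorn would read these
# from the simulation's parameter database.
params = {"courant_factor": 0.5, "output_every": 10}
lock = threading.Lock()

class SteerHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Monitoring: report the current parameter values.
        with lock:
            body = json.dumps(params).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def do_POST(self):
        # Steering: accept updates such as {"courant_factor": 0.25}.
        length = int(self.headers["Content-Length"])
        update = json.loads(self.rfile.read(length))
        with lock:
            params.update(update)
        self.send_response(200)
        self.end_headers()

# Serve in a background thread while the main loop evolves the system.
server = HTTPServer(("", 8080), SteerHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

for step in range(1000):
    with lock:
        dt = params["courant_factor"]  # picks up steered values live
    time.sleep(0.1)  # stand-in for evolving one timestep
```

Pointing a browser (or curl) at http://host:8080/ then shows the live parameter values, and a POST with a JSON body changes a steerable parameter mid-run, which is the same monitor/steer pattern the thorn offers.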
User Portal
• Find resources
  • automatically finds machines with a user allocation (group aware!)
  • continuously monitor resources, network etc.
• Authentication
  • single login, don't need to remember lots of usernames/passwords
• Launch simulation
  • automatically create executable on chosen machine
  • write data to appropriate storage location
  • negotiate local queue structures
• Monitor/steer simulations
  • access remote visualization and steering while simulation is running
  • collaborative … choose who else can look in and/or steer
  • performance … how efficient is the simulation?
• Archiving
  • store thorn lists, parameter files, output locations, configurations, …
(This workflow is sketched below.)
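The slides do not show the portal's internal interfaces, so the following Python sketch only names the workflow steps above with hypothetical helper functions; it is not a real portal or Globus API.

```python
# Hypothetical sketch of the portal workflow; the helper names are
# illustrative inventions, not a real portal or Globus API.

def login_once(credential):
    # Single sign-on: one proxy credential (MyProxy-style) replaces
    # the many per-center usernames and passwords.
    print("authenticated with", credential)

def find_resources(user):
    # Return machines where the user (or their group) holds an
    # allocation, filtered by current load and network status.
    return ["origin.ncsa.example", "t3e.garching.example"]

def launch(machine, thornlist, parfile):
    # Build the executable on the chosen machine, negotiate its local
    # queue system, and direct output to appropriate storage.
    return {"machine": machine, "job_id": 42}

def monitor(job, collaborators=()):
    # Attach remote visualization/steering; optionally let chosen
    # collaborators look in or steer too.
    print("monitoring job", job["job_id"], "with", list(collaborators))

def archive(job, thornlist, parfile):
    # Record the thorn list, parameter file, and output locations so
    # the run can be reproduced later.
    pass

login_once("myproxy:einstein")
machine = find_resources("einstein")[0]
job = launch(machine, "wavetoy.th", "wavetoy.par")
monitor(job, collaborators=["colleague@wustl"])
archive(job, "wavetoy.th", "wavetoy.par")
```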
Cactus Portal
• KDI ASC Project
• Technology: Globus, GSI, Java Beans, DHTML, Java CoG, MyProxy, GPDK, TomCat, Stronghold
• Allows submission of distributed runs
• Accesses the ASC Grid Testbed (SDSC, NCSA, Argonne, ZIB, AHPCC, WashU, AEI)
• Undergoing testing by users now!
• Main difficulty now is that it requires everything to work …
• But it is going to revolutionise our use of computing resources
ASC: Astrophysics Simulation Collaboratory
• NSF Funded Knowledge and Distributed Intelligence project
• Institutes: WashU, Rutgers, Argonne, U. Chicago, NCSA
• Aim: Develop a collaboratory for the astrophysics community to provide the capability for massively parallel computation, including AMR, interactive visualization, and metacomputing
• http://www.ASCPortal.org/
• ASC Portal: general simulation portal interfacing to Cactus
  • Globus, GSI, Java Beans, DHTML, Java CoG, MyProxy, GPDK, TomCat, Stronghold
• Version 1 included code assembly, compilation, job submission
• Version 2 now being developed, based on user feedback