

  1. Applications and Middleware Work in PRAGMA Grid Cindy Zheng PRAGMA Grid Coordinator Pacific Rim Application and Grid Middleware Assembly http://www.pragma-grid.net http://goc.pragma-grid.net

  2. Overview • Introduction to PRAGMA Grid • People • Hardware • Software • Operations • Grid Applications • Grid Middleware • Security • Infrastructure • Services • Education Grid • Collaborations/Integrations • Grid Interoperations

  3. PRAGMA Grid • UZH, Switzerland • AIST, OsakaU, UTsukuba, TITech, Japan • JLU, CNIC, GUCAS, LZU, China • NCSA, BU, UUtah, SDSC, USA • KISTI, Korea • ASGC, NCHC, Taiwan • CUHK, Hong Kong • UPRM, Puerto Rico • UoHyd, India • CICESE, UNAM, Mexico • ASTI, Philippines • NECTEC, ThaiGrid, Thailand • HCMUT, HUT, IOIT-HCM, Vietnam • CeNAT-ITCR, Costa Rica • SKU, UI, Indonesia • MIMOS, USM, Malaysia • APAC, QUT, MU, Australia • BII, IHPC, NGO, NTU, Singapore • UCN, UChile, Chile • BESTGrid, New Zealand • 32 institutions in 16 countries/regions, 27 compute sites (+10 in preparation)

  4. PRAGMA Grid Members and Team http://goc.pragma-grid.net/wiki/index.php/Site_status_and_tasks • Sites • 23 sites from PRAGMA member institutions • 15 sites from non-PRAGMA member institutions • 27 sites contributed compute clusters • Team members • 190 and growing • One management contact per site • 1~3 technical support contacts per site • 1~4 application drivers per application team • 1~5 members per middleware development team

  5. PRAGMA Grid Compute Resources http://goc.pragma-grid.net/pragma-doc/computegrid.html

  6. Characteristics of PRAGMA Grid • Grass-roots • Voluntary contributions • Open (PRAGMA member or not, Pacific Rim or not) • Long-term collaborative working experiment • Heterogeneous • Funding • No uniform infrastructure management • Variety of sciences and applications • Site policies, system and network environments • Realistically tough • Good for development, collaboration, integration and testing

  7. PRAGMA Grid Software Layers http://goc.pragma-grid.net/pragma-doc/userguide/join.html • Applications: FMO, Savannah, MM5, CSTFT, Siesta, AMBER, Phylogenetic, … • Application middleware: Ninf-G, Nimrod/G, Mpich-GX, … • Infrastructure middleware: Gfarm, SCMSWeb, CSF, MOGAS, … • Globus (required) • Local job scheduler (one required): SGE, PBS, LSF, SQMS, …

  8. PRAGMA Grid Operations

  9. One of the major lessons from PRAGMA Grid, one that everybody has noticed and would agree with: "You have to Grid People before you can Grid machines" – Rajesh Chhabra, Australia

  10. Grid Operation http://goc.pragma-grid.net, http://wiki.pragma-grid.net • Develop and maintain mutually beneficial and happy relationships among all people involved • Geographies, time-zones, languages • Funding, chain-of-command, priorities • Mutual benefit, consensus, active leadership • Coordinator, site contacts • Collaboration tools • Mailing lists, VTCs, Skype, semi-annual workshops • Grid Operation Center (GOC) • Wiki, where all sites and the application and middleware teams collaborate • Heterogeneity • Tolerate it, overcome it with technology, and take advantage of it • Software inventory instead of software stack • Many sub-grids for applications • Recommendations instead of requirements • Software licenses (Amber grid-wide license)

  11. Create New Ways To Operate http://goc.pragma-grid.net, http://wiki.pragma-grid.net • No precedent to follow • Everyone contributes ideas and suggestions • Evolving and improving over time • Everyone documents and updates (wiki) • Create new procedures • New site setup to join PRAGMA Grid http://goc.pragma-grid.net/pragma-doc/userguide/join.html • New user/application to run in PRAGMA grid http://goc.pragma-grid.net/pragma-doc/userguide/pragma_user_guide.html • Tabulate information • Application pages, site pages, resources tables, status pages • Publish instructions • Software deployment procedures, tools

  12. Application Driven

  13. Applications and Middleware http://goc.pragma-grid.net/applications/default.html • Real science applications are paired with, and drive, middleware development • Open to applications from all scientific disciplines • Achieve long runs and scientific results • ~30 applications in 3 years: • Structural biology • Phaser (MU, Australia) • Quantum mechanics, quantum chemistry • TDDFT, QM-MD, FMO/Ninf-G (AIST, Japan) • Climate simulation • Savannah/Nimrod (MU, Australia) • MM5/Mpich-GX (CICESE, Mexico; KISTI, Korea) • Genomics and meta-genomics • iGAP/Gfarm/CSF (UCSD, USA; AIST, Japan; JLU, China) • HPM: genomics (IOIT-HCM, Vietnam) • mpiBLAST/Mpich-G2 (ASGC, Taiwan) • Phylogenetics/Gfarm/CSF (UWisc and UCSD, USA) • Computational chemistry and fluid dynamics • CSE-Online (UUtah, USA) • e-AIRS (KISTI, Korea) • GAMESS-APBS/Nimrod (UZurich, Switzerland) • Molecular simulation • Siesta/Nimrod (UZurich, Switzerland; MU, Australia) • Amber/Gfarm (USM, Malaysia; AIST, Japan) • Environmental science • CSTFT/Ninf-G (UPRM, Puerto Rico)

  14. Grid Middleware

  15. Ninf-G http://ninf.apgrid.org • Developed by AIST, Japan • Based on the GridRPC model • Supports parallel computing • Integrated into NMI release 8 (first non-US software in NMI) • Integrated with Rocks • OGF standard • 4 applications ran in PRAGMA grid, 2 ran across multiple grids • TDDFT • QM/MD • FMO • CSTFT (UPRM) • Achieved long runs (50 days) • Improved fault-tolerance • Simplified deployment procedures • Sped up development cycles
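Ninf-G implements the OGF GridRPC programming model named above. As a rough illustration (not taken from the slides), a GridRPC client initializes the library from a configuration file, binds a handle to a remotely deployed routine, and invokes it much like a local call. The function names below follow the GridRPC API specification that Ninf-G implements; the configuration file name, the remote module name "example/compute", and the argument list are hypothetical and would be defined by the application's IDL.

```c
#include <grpc.h>   /* GridRPC client API header provided by Ninf-G */

int main(int argc, char *argv[])
{
    grpc_function_handle_t handle;
    double input = 3.14, result = 0.0;

    /* Read the client configuration describing remote servers (hypothetical file name). */
    if (grpc_initialize("client.conf") != GRPC_NO_ERROR)
        return 1;

    /* Bind the handle to a remotely deployed function (hypothetical module/function name). */
    grpc_function_handle_default(&handle, "example/compute");

    /* Synchronous remote call; the argument list must match the remote function's IDL. */
    grpc_call(&handle, input, &result);

    grpc_function_handle_destruct(&handle);
    grpc_finalize();
    return 0;
}
```

Asynchronous variants in the same API (grpc_call_async plus grpc_wait) support the task-parallel, fault-tolerant long runs described above.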

  16. Nimrod/G http://www.csse.monash.edu.au/~davida/nimrod (diagram: a plan file describing the parameters expands into many independent jobs) • Developed by Monash University (MU), Australia • Supports large-scale parameter sweeps on grid infrastructure • Easy user interface – Nimrod portals • MU, Australia • UZurich, Switzerland • UCSD, USA • 4 applications ran in PRAGMA grid and 2 ran in multi-grids • Savannah climate simulation (MU, Australia) • GAMESS/APBS (UZurich, Switzerland) • Siesta (UZurich, Switzerland) • Structural biology (MU, Australia) • Developed interface to Unicore with UZH • Achieved long runs (90 different scenarios of 6 weeks each) • Improved fault-tolerance (innovative time_step handling) • Enhancements in data and storage handling

  17. Mpich-Gxhttp://www.moredream.org/mpich.htm • Mpich-GX • Korea Institute of Science and Technology Information (KISTI), Korea • Based on Mpich-g2 • Grid-enabled MPI, support • Private IP • Fault tolerance • MM5 and WRF • CICESE, Mexico • Medium scale atmospheric simulation model • Experiment • KGrid • WRF work well with MPICH-GX • MM5 experienced scaling problems with MPICH-GX when use more than 24 processors in a cluster • Functionality of the private IP is usable • Performance of the private IP is reasonable

  18. Education Grid http://prime.ucsd.edu, http://prius.ics.es.osaka-u.ac.jp/en/index.html PRIME - Pacific Rim Undergraduate Experiences, providing UCSD undergraduate students international interdisciplinary research internships and cultural experiences, in collaboration with PRAGMA since 2004. PRIUS - Pacific Rim International UniverSity, providing Osaka University students expert lectures and internships abroad, in collaboration with PRAGMA since 2005. Sample middleware projects • MOGAS • Grid security analysis Sample applications run in PRAGMA grid this year: • Climate modeling • Multi-walled carbon nanotube and polyethylene oxide composite computer visualization model • Metabolic regulation of ionic currents and pumps in rabbit ventricular myocyte model • Improving binding energy using quantum mechanics • Cardiac mechanics modeling • H5N1 simulation • Shp2 Protein Tyrosine Phosphatase Inhibitor simulation for cancer research

  19. Science, Technologies, Collaborations, Integrations

  20. Collaborations With Science and Technology Teams • Grid security • Naregi (Japan), APGrid, GAMA (SDSC, USA) • Grid infrastructure • Monitoring - SCMSWeb (ThaiGrid, Thailand) • Accounting - MOGAS (NTU Singapore) • Metascheduling - Community Scheduler Framework (JLU, China) • Cyber-environment - CSE-Online (UUtah, USA) • Rocks and middleware (SDSC, USA; …) • Ninf-G, SCE, Gfarm, Bio, K*Rocks, Condor, … • Science, datagrid, sensor, network • Biosciences – Avian Flu, portal, … • Gfarm-fuse (AIST, Japan) • GEON data network • GLEON sensor network • OptIPuter • High performance networked TDW • Telescience

  21. Grid Security • Trust in PRAGMA grid, http://goc.pragma-grid.net/pragma-doc/certificates.html • IGTF distribution • Non-IGTF distribution (trust all PRAGMA Grid sites) • APGrid PMA • One of three IGTF founding PMAs • Many PRAGMA grid sites are members • PRAGMA CA • Naregi-CA • AIST, UCSD, UChile, UoHyd, UPRM • PRAGMA CA (experimental and production) • Based on Naregi-CA • Catch-all CA for PRAGMA • Production CA will be IGTF compliant • MyProxy and VOMS services • APAC and UCSD • Work with GAMA • Integrate with Naregi-CA (Naregi, UCSD) • Integration with VOMS (AIST) • Add servlet for account management (UChile)

  22. Gfarm Grid File System http://datafarm.apgrid.org • UTsukuba, open source development at SourceForge.net • Grid file system that federates the storage of each site • Meta-server keeps track of file copies and locations • Can be mounted from cluster nodes and clients (GfarmFS-FUSE) • Parallel I/O, near-site copy for scalable performance • Replication for fault tolerance • Uses GSI authentication • Easy application deployment, file sharing • Distributed meta-servers?
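Because GfarmFS-FUSE exposes the grid file system as an ordinary mount point, applications need no Gfarm-specific code: they perform plain POSIX I/O, and federation, replication, and near-site copy selection happen below the file-system interface. A minimal sketch, assuming a hypothetical /gfarm mount point:

```c
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    /* Hypothetical path under a GfarmFS-FUSE mount; data written here lands in
       the Gfarm grid file system and becomes visible at other sites. */
    FILE *fp = fopen("/gfarm/shared/results.txt", "a");
    if (fp == NULL) {
        perror("fopen");
        return EXIT_FAILURE;
    }
    fprintf(fp, "simulation output from this site\n");
    fclose(fp);
    return EXIT_SUCCESS;
}
```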

  23. Develop and Test GfarmFS-FUSE in PRAGMA Grid http://goc.pragma-grid.net/wiki/index.php/Resources_and_Data Testing with applications • iGAP (Gfarm team, Japan; UCSD, USA; JLU, China) • Huge number of small files • High meta-data access overhead • Meta-data cache server • Dramatic improvements (44 sec -> 3.54 sec) • AMBER (USM, Malaysia; Gfarm team, Japan) • Remote Gfarm meta-server • Meta-server is a bottleneck • File sharing permission, security • Version 2.0 improved performance • Used as shared storage only • Version 1.4 works well in local or regional grids • GeoGrid, Japan • CLGrid, Chile • Integration • SCMSWeb (ThaiGrid, Thailand) • Rocks (SDSC, USA; UZH, Switzerland)

  24. SCMSWeb http://www.opensce.org/components/SCMSWeb • Developed by Kasetsart University and ThaiGrid • Web-based real-time grid monitoring system • System usage, job/queue status • Probes – Globus authentication, job submission, GridFTP, Gfarm access, … • Network bandwidth measurements with Iperf • PRAGMA grid geo map • Supports Linux, Solaris; good meta-view, easy user interface, excellent user support • Developed and tested in PRAGMA grid • Deployed at 27 sites; improved scalability and performance • Sites helped with porting to ia64 and Solaris • Demands pushed fast expansion of functionalities • More regional/national grids learned about and adopted it

  25. SCMSWeb Collaborations and Integrations • Grid Interoperation Now (GIN, OGF) http://forge.gridforum.org/sf/wiki/do/viewPage/projects.gin/wiki/GinOps • Worked with PRAGMA grid, TeraGrid, OSG, NorduGrid and EGEE on GIN testbed monitoring http://goc.pragma-grid.net/cgi-bin/scmsweb/probe.cgi; added probes to handle various grid service configurations/tests • Worked with CERN and implemented an XML -> LDIF translator for the GIN geo map http://maps.google.com/maps?q=http://lfield.home.cern.ch/lfield/gin.kml • Worked with many grid monitoring software developers on a common schema for cross-grid monitoring http://wiki.pragma-grid.net/index.php?title=GIN_%28Grid_Inter-operation_Now%29_Monitoring • Software integration and interoperations • Rocks – SCE roll • MOGAS – grid accounting • Bandwidth measurements • Data federator for grid applications • Provide site software information • Standardize data extractions and formats • Improve data storage with RDBMS • Condor, CSF, … – provide resource info • Interoperate with other monitoring software • Ganglia support

  26. MOGAS http://ntu-cg.ntu.edu.sg/pragma/index.jsp • Multi-Organization Grid Accounting System (MOGAS) • Led by Nanyang Technological University (NTU), funded by the National Grid Office in Singapore • Built on the Globus core (GridFTP, GRAM, GSI) • Supports GT2, 3, 4; SGE, PBS • Job/user/cluster/OU/grid level usage; job logs; metering and charging tools • Developed and tested in PRAGMA grid • Deployed on 14 sites: different GT versions, job schedulers, GRAM scripts, security policies • Feedback drove improvements and an automated deployment procedure • Decentralized servers and a better database to improve scalability and performance • Collaborations and integrations with applications and other middleware teams push the development of an easy database interface and more efficient data collection

  27. CSF4 http://goc.pragma-grid.net/wiki/index.php/CSF_server_and_portal • Community Scheduler Framework, v4 – meta-scheduler • Developed by Jilin University, China • Grid services hosted in GT4, WSRF compliant; an execution component of Globus Toolkit 4 • Open source, http://sourceforge.net/projects/gcsf • Supports GT2 & 4, LSF, PBS, SGE, Condor • Easy user interface - portal • Testing and collaborating in PRAGMA • Testing with applications iGAP, AFG (UCSD, AIST, KISTI, …) • Collaborate and integrate with Gfarm on data staging (AIST, Japan) • Set up a CSF server and portal (SDSC, USA) • Collaborate/integrate with SCMSWeb for resource information (ThaiGrid, Thailand) • Leverage resources and a global grid testing environment

  28. Computational Science & Engineering Online http://cse-online.net • Developed by University of Utah, USA (Thanh N. Truong) • Desktop tool; user-friendly interface enables seamless access to remote data, tools and grid computing resources • Currently supports computational chemistry • Can be customized for other science domains • Developed interface to TeraGrid • Collaborate with ThaiGrid as a case study • Used for computational workshops • Extend grid access to a portal architecture • Improved security • Working on interface to PRAGMA grid • Heterogeneity • Application areas illustrated: quantum chemistry, drug design, nano-materials

  29. Collaborations with OptIPuter, GLIF and CAMERA http://www.optiputer.net • OptIPuter (Optical networking, Internet Protocol, computer storage, processing and visualization technologies) • Infrastructure that tightly couples computational resources over parallel optical networks using the IP communication mechanism • Central architectural element is optical networking, not computers • Enables scientists who are generating terabytes and petabytes of data to interactively visualize, analyze, and correlate their data from multiple storage sites connected to optical networks • Rocks/SAGE VIS-roll (SDSC) • Networked Tile Display Walls (TDW) • Low cost • For research collaboration • For remote education and conferencing • Deployed at many PRAGMA grid sites

  30. Build a Rocks / SAGE OptIPortal • Sites: NCSA & TRECC, AIST, KISTI, UIC, Calit2@UCSD, SIO@UCSD, NCMIR@UCSD, Calit2@UCI, UZurich, CNIC, NCHC, Osaka U

  31. Global Lambda Integrated Facility (GLIF) http://www.glif.is Visualization courtesy of Bob Patterson, NCSA. GLIF maps to many PRAGMA grid sites; PRAGMA grid uses GLIF to solve grid application bandwidth problems

  32. Integrate CAMERA and PRAGMA Grid • Microbial metagenomics user base: over 1300 registered users from 48 countries

  33. Calit2/PRAGMA/CAMERA/LambdaGrid Collaborations • Add CAMERA server to PRAGMA Grid testbed • Ad hoc supercomputing (Nimrod?) • New bioinformatics apps • Set up PRAGMA OptIPortal • LambdaGrid for several PRAGMA sites: KISTI, Konkuk U (Korea); AIST, Osaka U (Japan); CNIC (China); NCHC (Taiwan); APAC, UMelbourne, Monash U, U Queensland (Australia); CICESE (Mexico); UZurich (Switzerland); plus other volunteers! • PRAGMA countries with CAMERA registered users (Source: Paul Gilna, Kayo Arima, Calit2)

  34. Grid Interoperation

  35. Grid Interoperation Now (GIN) http://forge.gridforum.org/sf/wiki/do/viewPage/projects.gin/wiki/GinOps • Open Grid Forum and GIN • GIN-OPS (led by PRAGMA) • GIN testbed (February 2006 – ongoing) • One or more clusters from each grid • Still part of each production grid • Running real science applications • Explore interoperation issues • Develop solutions • Provide insight to standardization efforts • Application driven • TDDFT/Ninf-G (PRAGMA - AIST, Japan) • PRAGMA, TeraGrid, OSG, NorduGrid; EGEE • Savannah fire simulation (PRAGMA – Monash University, Australia) • PRAGMA, TeraGrid, OSG

  36. Grid Interoperation Now (GIN) http://forge.gridforum.org/sf/wiki/do/viewPage/projects.gin/wiki/GinOps • Software interface and integration • Ninf-G (AIST/PRAGMA) - NorduGrid • Nimrod/G (MU-PRIME/PRAGMA) – Unicore • SCMSWeb (ThaiGrid/PRAGMA) – Condor (UWisc/OSG) • SCMSWeb (ThaiGrid/PRAGMA) – BDII (CERN) • VDT (OSG) and Rocks (SDSC/PRAGMA) integration • Multi-grid monitoring • Led by ThaiGrid/PRAGMA • SCMSWeb probe matrix (PRAGMA - ThaiGrid, Thailand) • Common schema (http://goc.pragma-grid.net/wiki/index.php?title=GIN_%28Grid_Inter-operation_Now%29_Monitoring) • PRAGMA – SCMSWeb, MOGAS • TeraGrid – Globus GT4.0.1, Ganglia, NAGIOS • EGEE – MonALISA • NorduGrid/ARC – NorduGrid/MDS2, NorduGrid/Grid Monitor

  37. Peer-grid Interoperation Experiments http://goc.pragma-grid.net/wiki/index.php/Main_Page#Grid_Inter-operations • Different from the GIN testbed • More resources and support from each grid • Either uni-directional or bi-directional application runs • Long runs to achieve scientific results • OSG <-> PRAGMA (January 2007 – ongoing) • How • Each grid identifies management, application drivers, resource supporters • All participants document application requirements, meetings, issues, solutions, status, results, … at wiki.pragma-grid.net • Resources • OSG – FermilabGrid, will add UWisc • PRAGMA grid - any sites the application drivers choose to use • Applications • OSG – GISolve, spatial interpolation (UIowa, USA) • PRAGMA • FMO/Ninf-G, quantum chemistry (AIST, Japan) – completed • Structural biology (MU, Australia) – starting soon

  38. OSG-PRAGMA Grid Interoperation Experiments http://goc.pragma-grid.net/wiki/index.php/Main_Page#Grid_Inter-operations • More resources and support from each grid, but no special arrangements • Application long-run • GridFMO/Ninf-G – large-scale quantum chemistry (Tsutomu Ikegami, AIST, Japan) • 240 CPUs from OSG and PRAGMA grid, 10 days x 7 calculations • Fault tolerance enabled the long run • Meaningful and usable scientific results

  39. PRAGMA Summit • Start a series of three workshops • First – March~April 2008 • Organized by UZurich • Swiss Grid, European Federated Grid (Euro-Grid), and PRAGMA • Goal • Inform and learn from each other • Seek ways to collaborate • Collaboration work started summer 2007 • Nimrod interface to UNICORE

  40. Lessons Learned From Grid Interoperation http://forge.gridforum.org/sf/wiki/do/viewPage/projects.gin/wiki/GinOps • Grid interoperation makes large-scale calculations possible • Differences among grids provide learning, collaboration and integration opportunities • IGTF, VOMS (GIN) • Common Software Area (TeraGrid) • Ninf-G – NorduGrid • Nimrod/G – Unicore • SCMSWeb – Condor • SCMSWeb – BDII • SCMSWeb probe matrix for GIN testbed monitoring • Common schema among many grid monitoring software packages • VDT (OSG) and Rocks (SDSC/PRAGMA) integration • Differences in grid environments are a source of difficulties for users and applications • Different user access setup procedures - take extra effort • Different job submission protocols • GRAM, Sandbox, gridftp, modified GRAM, … • One-to-one interfaces - are they scalable? Possible standards? • Middleware fault tolerance and flexible resource management are important • Cope with unfamiliar fault conditions, lack of parallel computation support, …

  41. Collaborate in Publishing Research Results Some published papers in 2007: • Amaro RE, Minh DDL, Cheng LS, Lindstrom WM Jr, Olson AJ, Lin JH, Li WW, and McCammon JA. Remarkable Loop Flexibility in Avian Influenza N1 and Its Implications for Antiviral Drug Design. J. Am. Chem. Soc. 2007, 129, 7764-7765. (PRIME) • Choi Y, Jung S, Kim D, Lee J, Jeong K, Lim SB, Heo D, Hwang S, and Byeon OH. "Glyco-MGrid: A Collaborative Molecular Simulation Grid for e-Glycomics," in 3rd IEEE International Conference on e-Science and Grid Computing, Bangalore, India, 2007. Accepted. • Ding Z, Wei W, Luo Y, Ma D, Arzberger PW, and Li WW, "Customized Plug-in Modules in Metascheduler CSF4 for Life Sciences Applications," New Generation Computing, in press, 2007. • Ding Z, Wei S, Ma D, and Li WW, "VJM -- A Deadlock Free Resource Co-allocation Model for Cross Domain Parallel Jobs," in HPC Asia 2007, Seoul, Korea, 2007, in press. • Görgen K, Lynch H, Abramson D, Beringer J and Uotila P. "Savanna fires increase monsoon rainfall as simulated using a distributed computing environment", to appear, Geophysical Research Letters. • Ichikawa K, Date S, Krishnan S, Li W, Nakata K, Yonezawa Y, Nakamura H, and Shimojo S, "Opal OP: An extensible Grid-enabling wrapping approach for legacy applications", GCA2007 - Proceedings of the 3rd Workshop on Grid Computing & Applications, pp. 117-127, Singapore, June 2007a. (PRIUS) • Ichikawa K, Date S, and Shimojo S. "A Framework for Meta-Scheduling WSRF Based Services", Proceedings of 2007 IEEE Pacific Rim Conference on Communications, Computers and Signal Processing (PACRIM 2007), Victoria, Canada, pp. 481-484, Aug. 2007b. (PRIUS) • Kuwabara S, Ichikawa K, Date S, and Shimojo S. "A Built-in Application Control Module for SAGE", Proceedings of 2007 IEEE Pacific Rim Conference on Communications, Computers and Signal Processing (PACRIM 2007), Victoria, Canada, pp. 117-120, Aug. 2007. (PRIUS) • Takeda S, Date S, Zhang J, Lee BU, and Shimojo S. "Security Monitoring Extension For MOGAS", GCA2007 - Proceedings of the 3rd Workshop on Grid Computing & Applications, pp. 128-137, Singapore, June 2007. (PRIUS) • Tilak S, Hubbard P, Miller M, and Fountain T, "The Ring Buffer Network Bus (RBNB) DataTurbine Streaming Data Middleware for Environmental Observing Systems," to appear in the Proceedings of e-Science 2007. • Zheng C, Katz M, Papadopoulos P, Abramson D, Ayyub S, Enticott C, Garic S, Goscinski W, Arzberger P, Lee BS, Phatanapherom S, Sriprayoonsakul S, Uthayopas P, Tanaka Y, Tanimura Y, Tatebe O. Lesson Learned Through Driving Science Applications in the PRAGMA Grid. Int. J. Web and Grid Services, Vol. 3, No. 3, pp. 287-312, 2007. …

  42. Summary • PRAGMA grid • Shared vision lowers resistance to using others' software and testing on others' resources • Formed new development collaborations • Size and heterogeneity let us explore issues which any functional grid must resolve • Management, resources and software coordination • Identity and fault management • Scalability and performance • Feedback between application and middleware teams helps improve software and promote software integration • Heterogeneous global grid • Is realistic and challenging • Can be good for middleware development and testing • Can be useful for real science • Impact • Software dissemination (Rocks, Ninf-G, Nimrod, SCMSWeb, Naregi-CA, …) • Help new national/regional grids (Chile, Vietnam, Hong Kong, …) • The key is people and collaboration

  43. How Can I Participate? • Get involved now • PRAGMA or similar collaborative communities • Costs a little • Benefits a lot • Be part of a larger grid effort • Learn by doing • Build collaborations • Develop bigger/better ideas/projects • Push the use of networks and other infrastructure

  44. A Grass Roots Effort "One of the most important lessons of the Internet is that it grows most successfully where grass roots initiatives are encouraged and enabled. The Internet has historically grown from the bottom up, and this aspect continues to fuel its continued growth in the academic and commercial sectors." • Vint Cerf, UN Economic and Social Council, 2000

  45. PRAGMA is supported by the National Science Foundation (Grant No. INT-0216895, INT-0314015, OCI-0627026) and by member institutions • PRIME is supported by the National Science Foundation under NSF INT 04007508 • PRAGMA grid is the result of contributions and support from all PRAGMA grid team members Thank You http://www.pragma-grid.net http://goc.pragma-grid.net http://wiki.pragma-grid.net

  46. PRAGMA "A Practical Collaborative Framework" • People and applications • Overarching goals: strengthen existing and establish new collaborations; work with science teams to advance grid technologies and improve the underlying infrastructure; in the Pacific Rim and globally • http://www.pragma-grid.net

  47. PRAGMA Member Institutions • CRAY, PNWG, USA • JLU, China • KBSI, KISTI, Konkuk, Korea • CNIC, China • AIST, CCS, CMC, NARC, OsakaU, TITech, Japan • UUtah, USA • UoHyd, India • CalIT2, CRBS, SDSC, UCSD, USA • APAN, Japan • NCSA, StarLight, TransPAC2, USA • ASGCC, NCHC, Taiwan • CICESE, Mexico • KU, NECTEC, TNGC, Thailand • APAC, MU, Australia • BII, IHPC, NGO, Singapore • BeSTGRID, New Zealand • MIMOS, USM, Malaysia • 33 institutions from 12 countries/regions • Founded 2002 • Supported by members http://www.pragma-grid.net

  48. Overview and Approach: Process to Promote Routine Use (diagram) • Workshops and organization: information exchange, planning and review, new collaborations, new members, expand users, expand impact • Routine use: lab/testbed, testing applications, building grid and GOC, multiway dissemination, key middleware • Team science: application-driven collaborations, applications, middleware • Outcomes: improved middleware, broader use, new collaborations, technology transfer, standards, publications, new knowledge, data access, education

  49. PRAGMA Working Groups • Bioscience • Telescience • Geo-science • Resources and data • Grid middleware interoperability • Global grid usability and productivity The PRAGMA Grid effort is led by the resources and data working group, but relies on collaborations and contributions among all working groups.
