René Kobler Institute of Graphics and Parallel Processing Johannes Kepler Univ. Linz Introduction to Grid Computing
Outline History, Motivation Basic concepts The EU Data Grid Project The Austrian Grid
Why High Performance Computing? Mathematical models get more and more complex Before industry can construct products, simulations are required More complex models → simulation time increases Solution: Parallelize your problems
Example: Grand Challenges Most complex computing problems nowadays (broad scientific consensus) Examples: n-body simulations in astrophysics Protein folding (basics for understanding the fundamentals of life) Meteorological simulations (weather forecast)
How to program a parallel system Analyse your problem size Distribute problem on different processes / threads Minimize costly communication! Techniques: Shared memory Message passing “mixed-mode programming”
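The analyse-distribute-combine recipe above can be sketched with Python's standard library. This is a conceptual illustration, not grid middleware; the chunking helper and the worker kernel are hypothetical names chosen for the example.

```python
from concurrent.futures import ThreadPoolExecutor

def chunk(data, n_workers):
    """Split the problem into roughly equal, contiguous pieces,
    so each worker needs to communicate only its final result."""
    size = (len(data) + n_workers - 1) // n_workers
    return [data[i:i + size] for i in range(0, len(data), size)]

def partial_sum(piece):
    # Stand-in for a real simulation kernel.
    return sum(x * x for x in piece)

def parallel_sum_of_squares(data, n_workers=4):
    pieces = chunk(data, n_workers)
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        # One "message" per worker (its return value) keeps
        # communication cost minimal.
        return sum(pool.map(partial_sum, pieces))

print(parallel_sum_of_squares(list(range(1000))))
```

The key design point matches the slide: partition the data so that workers exchange as little as possible, since communication, not computation, usually dominates cost.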
Shared Memory vs. Message Passing Shared Memory pros: A shared address space is easy to program; message-passing programs often require tremendous restructuring of code Message Passing pros: Efficiency! Message-passing codes are usually much faster than equivalent shared-memory codes
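The trade-off can be shown in miniature with Python threads: the shared-memory version simply mutates one variable under a lock, while the message-passing version restructures the same computation around explicit sends and receives. All names here are illustrative; real HPC codes would use OpenMP or MPI.

```python
import threading
import queue

# Shared memory: all threads update one address space; a lock
# guards the increment. Easy to write, but access is serialized.
total = 0
lock = threading.Lock()

def shared_worker(values):
    global total
    for v in values:
        with lock:
            total += v

# Message passing: each worker owns its data privately and sends
# one message with its partial result; no shared mutable state.
def mp_worker(values, outbox):
    outbox.put(sum(values))

data = list(range(100))
halves = [data[:50], data[50:]]

threads = [threading.Thread(target=shared_worker, args=(h,)) for h in halves]
for t in threads: t.start()
for t in threads: t.join()

outbox = queue.Queue()
threads = [threading.Thread(target=mp_worker, args=(h, outbox)) for h in halves]
for t in threads: t.start()
for t in threads: t.join()
mp_total = outbox.get() + outbox.get()

print(total, mp_total)  # both 4950
```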
Comparing HPC Systems http://www.top500.org Since 1993, updated every June and November Next list at SC 2005 @ Seattle, Nov. 12 – 18 Every system executes the Linpack benchmark Solving a dense system of linear equations Important ratios: Rmax = maximal Linpack performance achieved, Rpeak = theoretical peak performance
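The two ratios can be made concrete with a back-of-the-envelope calculation. The machine parameters below are hypothetical, not those of any actual TOP500 entry, and the 70% Linpack efficiency is an assumed figure for illustration.

```python
def rpeak_gflops(processors, clock_ghz, flops_per_cycle):
    """Theoretical peak: every processor retires its maximum
    number of floating-point operations every cycle."""
    return processors * clock_ghz * flops_per_cycle

# Hypothetical machine: 4096 processors at 2.2 GHz, 4 flops/cycle.
rpeak = rpeak_gflops(4096, 2.2, 4)   # GFlop/s
rmax = 0.7 * rpeak                   # assume Linpack reaches 70% of peak
print(rpeak, rmax / rpeak)
```

Rmax is always below Rpeak because memory traffic, communication, and load imbalance keep the floating-point units from staying fully busy.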
Current TOP 500 (24.06.2005) IBM manufactures 6 of the top 10 systems Trend: 1993 → many vector-processors Today → commodity processors (Intel, PowerPC, AMD)
Mare Nostrum Fastest European supercomputer Housed in a majestic chapel 2,282 IBM eServer BladeCenter JS20 blade servers housed in 163 BladeCenter chassis 4,564 64-bit IBM PowerPC 970FX processors Source: IBM
Problems with HPC Systems Large-scale HPC systems are traditionally very expensive, even when built from commodity HW Less affluent countries and institutions cannot afford access to such systems Solution: Bundle distributed resources for common usage! Problem: We need higher network bandwidth Moore's Law vs. Gilder's Law
Moore's Law vs. Gilder's Law Moore's Law: Loosely stated: processor power doubles every 18 months (originally formulated in 1965, updated in 1975) Gilder's Law: Bandwidth of communication systems triples every 12 months We cannot focus on processing power alone! Gilder's Law even affects the internet's bandwidth The future lies in distributed computing
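The mismatch between the two laws is easy to quantify: doubling every 18 months versus tripling every 12 means bandwidth pulls ahead of processing power very quickly.

```python
def moore_factor(years):
    # Processor power doubles every 18 months (1.5 years).
    return 2 ** (years / 1.5)

def gilder_factor(years):
    # Communication bandwidth triples every 12 months.
    return 3 ** years

# After 3 years: processors ~4x faster, networks 27x faster.
for years in (1, 3, 6):
    print(years, moore_factor(years), gilder_factor(years))
```

This growing gap is exactly why bundling remote resources over fast networks becomes more attractive over time than buying one ever-larger machine.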
Next step: Distributed Computing Often realized in the form of cluster computing Using the internet as communication medium: SETI@Home Disadvantage: lack of transparency Therefore: Grid Computing
Grid Computing: Idea The electric power grid served as archetype for the term “grid computing” The electric power grid is used by simply plugging in electrical devices The computational grid should be used by simply submitting our problem Idea formulated in the mid-1990s by Ian Foster (Argonne/Univ. of Chicago) and Carl Kesselman (USC/ISI)
Grid concept “Coordinated resource sharing and problem solving in dynamic, multi-institutional virtual organisations” (Foster I., Kesselman C., Tuecke S., “The Anatomy of the Grid”, Int. Journal of Supercomputer Applications, 15(3), 2001) Sharing → not only file exchange, but rather direct access to computers, software, data, and other resources (e.g. Sensors, ...) Sharing rules Virtual Organization(VO) → A set of individuals and/or institutions defined by sharing rules
What do we need? Protocols, services, tools to address challenges that arise when building scalable VOs. Security solutions Management of credentials and policies Resource management protocols Information query protocols Services that provide configuration and status information about resources, organizations and services Data management services
Virtual organizations vs. actual organizations Each resource owner makes resources available, subject to constraints on when, where, and what can be done
Constraints Requires mechanisms for expressing policies for establishing the identity of a consumer or resource → Authentication for determining whether an operation is consistent with applicable sharing relationships → Authorization
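A minimal sketch of the two mechanisms, assuming a toy credential store and per-resource sharing rules; all names are hypothetical, and real grids use X.509 certificates and policy services rather than passwords and dictionaries.

```python
# Toy credential store: identity -> secret.
CREDENTIALS = {"alice": "s3cret"}

# Sharing rules: resource -> {identity: allowed operations}.
POLICIES = {"cluster-a": {"alice": {"submit_job", "read_data"}}}

def authenticate(identity, secret):
    """Establish the identity of a consumer or resource."""
    return CREDENTIALS.get(identity) == secret

def authorize(identity, resource, operation):
    """Check whether an operation is consistent with the
    applicable sharing relationships for that resource."""
    return operation in POLICIES.get(resource, {}).get(identity, set())

print(authenticate("alice", "s3cret"))                 # True
print(authorize("alice", "cluster-a", "submit_job"))   # True
print(authorize("alice", "cluster-a", "delete_data"))  # False
```

The separation mirrors the slide: authentication answers "who is asking?", authorization answers "is this particular operation allowed under the owner's sharing rules?".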
Grid Architecture The grid must be able to establish sharing relationships among any potential participants Interoperability is thus the central issue → common protocols Grid architecture = protocol architecture defining basic mechanisms by which VO users and resources negotiate, establish, manage, and exploit sharing relationships
Layers of the Grid protocol architecture Connectivity: core communication and authentication protocols required for grid-specific network transactions Resource: secure negotiation, initiation, monitoring, control, accounting, and payment of sharing operations on individual resources Collective: protocols and services of a global nature that capture interactions across collections of resources
Case Study: EU Data Grid Project Exploit and build the next-generation computing infrastructure providing intensive computation and analysis of shared large-scale databases Enable data-intensive sciences by providing worldwide Grid test beds to large distributed scientific organizations Start: Jan. 1st 2001, End: Dec. 31st 2003 Applications/Communities: High-Energy Physics, Earth Observation, Biology
Specific Project Objectives Middleware for fabric & grid management Large scale testbed Production quality demonstrations Contribute to Open Standards and International Bodies (GGF, Industry & Research Forum)
Next Steps → LHC Grid (starts 2007) Data Grid was successful → showed that Grids can cope with large amounts of data Next step: Large Hadron Collider Grid LHC: Largest scientific instrument on the planet (located at CERN) → produces roughly 15 PB of data per year 4-tier architecture Tier-0 is located at CERN and collects all data After initial processing, data is distributed to Tier-1 centres with large storage capabilities
The Austrian Grid Main Target: Pioneering Grid Computing in Austria Main Idea: Demonstration of Usefulness Main Tasks: Building a prototype Grid infrastructure in Austria Improving existing Grid software by high-level extensions Development and usage of Grid applications Inviting potential users to use Grid technology and supporting them Representing a contact institution for future partners
Middleware Extensions (1) First figure out the requirements of the applications Determine the extensions! Modules closely related to applications must be implemented by application programmers General parts must be implemented by computer scientists