
Grids for DoD and Real Time Simulations

Discover the significance of grids in addressing distributed simulation issues and supporting DoD's network-centric computing vision. Explore various grid organizations and types, including utility computing and peer-to-peer systems.



Presentation Transcript


  1. Grids for DoD and Real Time Simulations
IEEE DS-RT 2005, Montreal, Canada, Oct. 11, 2005
Geoffrey Fox
Computer Science, Informatics, Physics
Pervasive Technology Laboratories, Indiana University, Bloomington IN 47401
gcf@indiana.edu  http://www.infomall.org

  2. Why are Grids Important
• Grids are important for DoD because they more or less directly address DoD's problem and have made major progress on the core infrastructure that DoD has identified only qualitatively
• Grids are important to distributed simulation because they address all the distributed-systems issues except simulation itself; in any sophisticated distributed simulation package, most of the software deals not with simulation but with the issues Grids address
• The DoD and distributed simulation communities are too small to go it alone – they need to use technology that industry will support and enhance

  3. Internet Scale Distributed Services
• Grids use Internet technology and are distinguished by managing or organizing sets of network-connected resources
• The classic Web allows independent one-to-one access to individual resources
• Grids integrate and manage multiple Internet-connected resources: people, sensors, computers, data systems
• Organization can be explicit, as in
  • TeraGrid, which federates many supercomputers
  • the Deep Web Technologies IR Grid, which federates multiple data resources
  • CrisisGrid, which federates first responders, commanders, sensors, GIS, (tsunami) simulations, and science/public data
• Organization can be implicit, as in Internet resources such as curated databases and simulation resources that "harmonize a community"

  4. Different Visions of the Grid
• "Grid" may refer just to the technologies, or Grids may represent the full system/applications
• DoD's vision of Network Centric Computing can be considered a Grid (linking sensors, warfighters, commanders, and backend resources), and DoD is building the GiG (Global Information Grid)
• Utility Computing or X-on-demand (X = data, computer, ...) is the major computer-industry interest in Grids and is a key part of enterprise or campus Grids
• e-Science or Cyberinfrastructure Grids are virtual-organization Grids supporting globally distributed science (note that sensors, instruments, and people are all distributed)
• The Skype (Kazaa) VOIP system is a peer-to-peer Grid (and VRVS/GlobalMMCS-like Internet A/V conferencing systems are Collaboration Grids)
• Commercial 3G cell phones and the DoD ad-hoc network initiative are forming mobile Grids

  5. Types of Computing Grids
• Running "pleasingly parallel jobs" as in United Devices or Entropia (Desktop Grid) "cycle stealing" systems
  • Can be managed ("inside" the enterprise, as in Condor) or more informal (as in SETI@Home)
• Computing-on-demand in industry, where the jobs spawned may be very large (SAP, Oracle ...)
• Support for distributed file systems as in Legion (Avaki) and Globus, with a (web-enhanced) UNIX programming paradigm
  • Particle physics will run some 30,000 simultaneous jobs
• Distributed simulation HLA/RTI-style Grids
• Linking supercomputers as in TeraGrid
• Pipelined applications linking data/instruments, compute, and visualization
• Seamless access, where Grid portals allow one to choose one of multiple resources through a common interface
• Parallel computing is typically NOT suited to a Grid (latency)

  6. Old Style Metacomputing Grid [Figure: large-scale parallel computers, large disks, and analysis/visualization resources linked to spread a single large problem over multiple supercomputers]

  7. Utility and Service Computing
• An important business application of Grids is believed to be utility computing
• Namely, support a pool of computers to be assigned as needed to take up extra demand
  • The pool is shared between multiple applications
• The natural architecture is not a cluster of computers connected to each other, but rather a "Farm of Grid Services" connected to the Internet and supporting services such as
  • Web servers
  • Financial modeling
  • Running SAP
  • Data mining
  • Simulation response to a crisis such as a forest fire or earthquake
  • Media servers for video-over-IP
• Note that classic supercomputer use allows full access to do "anything" via ssh etc.
• In the service model, one pre-configures services for all programs and users access a portal to run jobs, with fewer security issues
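A minimal sketch of the "Farm of Grid Services" idea under stated assumptions: a shared pool of machines is handed out on demand to whichever pre-configured service needs extra capacity and returned when demand subsides. All names (ServiceFarm, the node and service names) are hypothetical, not from any real utility-computing product.

    from collections import defaultdict

    class ServiceFarm:
        def __init__(self, machines):
            self.free = list(machines)              # shared pool of machines
            self.assigned = defaultdict(list)       # service name -> machines in use

        def scale_up(self, service, count):
            """Take machines from the pool to absorb extra demand for one service."""
            granted = []
            while self.free and len(granted) < count:
                granted.append(self.free.pop())
            self.assigned[service].extend(granted)
            return granted

        def scale_down(self, service, count):
            """Return machines to the pool when demand subsides."""
            for _ in range(min(count, len(self.assigned[service]))):
                self.free.append(self.assigned[service].pop())

    farm = ServiceFarm(machines=[f"node{i}" for i in range(8)])
    farm.scale_up("web-server", 2)
    farm.scale_up("crisis-simulation", 3)   # e.g. forest-fire response
    print(dict(farm.assigned), len(farm.free), "machines still free")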

  8. Simulation and the Grid
• Simulation on the Grid is distributed, but it is rarely classical distributed simulation
• It is either managing multiple jobs that are identical except for the parameters controlling the simulation – the SETI@Home style of "desktop grid"
• Or workflow, which roughly corresponds to federation
• The workflow is designed to support the integration of distributed entities:
  • Simulations (maybe parallel) and filters – for example GCF, the General Coupling Framework from Manchester
  • Databases and sensors
  • Visualization and user interfaces
• RTI should be built on workflow and inherit WS-*/GS-*, with the NCOW CES built on the same

  9. Two-level Programming I
• The Web Service (Grid) paradigm implicitly assumes a two-level programming model
• We make a Service (the same as a "distributed object" or "computer program" running on a remote computer) using conventional technologies, e.g.
  • a C++, Java or Fortran Monte Carlo module
  • data streaming from a sensor or satellite
  • specialized (JDBC) database access
• Such services accept and produce data from users, files and databases
• The Grid is built by coordinating such services, assuming we have solved the problem of programming the service
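A hedged sketch of the first (service) level: a conventional compute kernel, here a toy Monte Carlo routine standing in for a C++/Java/Fortran module, is wrapped behind a simple message-in/message-out interface. The function names are illustrative; a real Grid service would expose this boundary through WSDL/SOAP rather than a Python call.

    import random

    def monte_carlo_pi(samples):
        """Conventional compute kernel (stands in for a C++/Java/Fortran module)."""
        hits = sum(1 for _ in range(samples)
                   if random.random() ** 2 + random.random() ** 2 <= 1.0)
        return 4.0 * hits / samples

    def pi_service(request):
        """Service wrapper: accepts a dict-shaped message, returns a dict-shaped message."""
        samples = int(request.get("samples", 100000))
        return {"estimate": monte_carlo_pi(samples), "samples": samples}

    print(pi_service({"samples": 200000}))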

  10. Two-level Programming II
• The Grid addresses the composition of distributed services, whose runtime interfaces are to the Grid rather than to UNIX pipes/data streams
• This is familiar from the use of UNIX shell, PERL or Python scripts to produce real applications from core programs
• Such interpretative environments are the single-processor analog of Grid programming
• Some projects, like GrADS from Rice University, are looking at integration between the service and composition levels, but the dominant effort looks at each level separately
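And a sketch of the second (composition) level, the Grid analog of a shell or Python script: hypothetical simulation, filter and visualization services are chained into a workflow; on a real Grid each hand-off would be a SOAP/WS-* message or stream rather than an in-process call.

    def simulation_service(message):
        """Hypothetical simulation stage (stand-in for a wrapped parallel code)."""
        return {"estimate": 3.14159, "samples": message["samples"]}

    def filter_service(message):
        """Hypothetical filter stage: keep only the fields downstream services need."""
        return {"value": message["estimate"]}

    def visualization_service(message):
        """Hypothetical sink stage: render or archive the result."""
        print("plotting value", message["value"])

    # Workflow = composition of services; the composition level never looks
    # inside the services, it only routes messages between them.
    def workflow(samples):
        visualization_service(filter_service(simulation_service({"samples": samples})))

    workflow(50000)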

  11. Consequences of the Rule of the Millisecond
• It is useful to remember the critical time scales of classic programming:
  • 1) 0.000001 ms – a CPU does a calculation
  • 2a) 0.001 to 0.01 ms – parallel computing MPI latency
  • 2b) 0.001 to 0.01 ms – overhead of a method call
  • 3) 1 ms – waking up a thread or process
  • 4) 10 to 1000 ms – Internet delay: workflow
• 2a) and 4) imply that geographically distributed metacomputing can't in general compete with parallel systems
• 3) << 4) implies that a software overlay network is possible without significant overhead
  • We need to explain why it adds value, of course!
• 2b) versus 3) and 4) delimits the regions where method-based and message-based programming paradigms are important
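A worked illustration of the argument, plugging in the slide's order-of-magnitude figures (the values are representative, not measured):

    # Order-of-magnitude timings from the slide, in milliseconds.
    mpi_latency   = 5e-3      # 2a) midpoint of 0.001 to 0.01 ms
    thread_wakeup = 1.0       # 3) cost of one software-overlay hop
    internet      = 100.0     # 4) a typical Internet / workflow delay

    # An overlay hop is a tiny fraction of an Internet-scale message: essentially free.
    print("overlay overhead on a workflow message: %.1f%%" % (100 * thread_wakeup / internet))

    # One Internet delay equals thousands of MPI messages, which is why geographically
    # distributed metacomputing cannot in general compete with a parallel machine.
    print("one Internet delay = %.0f MPI latencies" % (internet / mpi_latency))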

  12. Towards an International Grid Infrastructure [Figure: UK NGS sites (Leeds, Manchester, Oxford, RAL, UCL) and US TeraGrid sites (SDSC, NCSA, PSC) linked over UKLight, Starlight (Chicago) and Netherlight (Amsterdam) to local laptops at SC05 in Seattle and in the UK; all sites are connected by production networks (not all shown), with computation, steering clients, a service registry and network PoPs marked]

  13. Information/Knowledge Grids
• Distributed (10's to 1000's of) data sources (instruments, file systems, curated databases ...)
• Data deluge: 1 petabyte/year now, rising to 100's of petabytes/year by 2012 – a Moore's law for sensors; the deluge comes from the pixels/year available
• Possible filters assigned dynamically (on demand)
  • Run an image-processing algorithm on a telescope image
  • Run a gene-sequencing algorithm on compiled data
• Needs a decision-support front end with "what-if" simulations
• Metadata (provenance) is critical to annotate data
• Integrate across experiments, as in multi-wavelength astronomy

  14. Data Deluged Science
• Particle physics will now get 100 petabytes from CERN, using around 30,000 CPUs simultaneously, 24x7
• Exponential growth in data; compare:
  • The Bible = 5 megabytes
  • Annual refereed papers = 1 terabyte
  • Library of Congress = 20 terabytes
  • Internet Archive (1996–2002) = 100 terabytes
• Weather, climate, solid earth (EarthScope)
• Bioinformatics curated databases (biocomplexity has only 1000's of data points at present)
• Virtual Observatory and SkyServer in astronomy
• Environmental sensor nets
• In the past, the HPCC community worried about data in the form of parallel I/O or MPI-IO, but we didn't consider it as an enabler of new science and new ways of computing
  • Data assimilation was not central to HPCC
  • DoE ASCI was set up because DoE didn't want test data!

  15. Virtual Observatory Astronomy Grid: Integrate Experiments [Figure: the same sky region seen in radio, far-infrared and visible light, as a dust map, as visible + X-ray, and as a galaxy density map]

  16. International Virtual Observatory Alliance
• Reached international agreements on the Astronomical Data Query Language, VOTable 1.1, UCD 1+, and the Resource Metadata Schema
• Also the Image Access Protocol, the Spectral Access Protocol and Spectral Data Model, and Space-Time Coordinates definitions and schema
• Interoperable registries by Jan 2005 (NVO, AstroGrid, AVO, JVO) using OAI publishing and harvesting
• So each Community of Interest builds data AND service standards that build on GS-* and WS-*

  17. myGrid Project
• Imminent 'deluge' of data
• Highly heterogeneous
• Highly complex and inter-related
• Convergence of data and literature archives

  18. The Williams Workflows [Figure: three linked workflows]
• A: Identification of overlapping sequence
• B: Characterisation of nucleotide sequence
• C: Characterisation of protein sequence

  19. Grid of Grids: Research Grid and Education Grid [Figure: a research Grid (SERVOGrid) composed of a Sensor Grid, Database Grid, Compute Grid and GIS Grid, with discovery services, repositories and federated databases, streaming sensor data, field trip data, data filter services, research simulations, and an analysis/visualization portal; customization services move content "from research to education" to an Education Grid backed by a computer farm]

  20. SERVOGrid Requirements
• Seamless access to data repositories and large-scale computers
• Integration of multiple data sources, including sensors, databases and file systems, with the analysis system
  • Including filtered OGSA-DAI (Grid database access)
• Rich metadata generation and access, with a SERVOGrid-specific schema extending OpenGIS (geography as a Web service) standards and using the Semantic Grid
• Portals with a component model for user interfaces and web control of all capabilities
• Collaboration to support world-wide work
• Basic Grid tools: workflow and notification
• NOT metacomputing

  21. SERVOGrid Portal Screen Shots

  22. Portal Architecture [Figure: clients (pure HTML, Java applet, ...) reach an aggregation-and-rendering portal (SERVOGrid at IU) with portal internal services; local and remote/proxy portlets (portlet classes such as WebForm, GridPort, Java COG Kit) connect through Web/Grid services to computing, data stores and instruments; the hierarchical arrangement runs clients, portal, portlets, libraries, services, resources]

  23. Each Service has its own portlet [Screenshot: an individual portlet for the Proxy Manager plus other portlets; use tabs or choose different portlets to navigate through interfaces to the different services]

  24. GIS Grids and Sensor Grids
• The OGC has defined a suite of data structures and services to support Geographical Information Systems and sensors
• GML, the Geography Markup Language, defines the specification of geo-referenced data
• SensorML and O&M (Observations and Measurements) define metadata and data structures for sensors
• Services like the Web Map Service, Web Feature Service and Sensor Collection Service define service interfaces for accessing GIS and sensor information
• Grid workflow links services that are designed to support streaming input and output messages
• We are building Grid (Web) service implementations of these specifications for NASA's SERVOGrid
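As a hedged sketch of calling one of these OGC services over HTTP, the snippet below assembles a standard WFS GetFeature key-value request; the endpoint URL and feature type name are placeholders, not the actual SERVOGrid deployment.

    from urllib.parse import urlencode
    from urllib.request import urlopen

    # Hypothetical WFS endpoint and feature type; real deployments differ.
    WFS_ENDPOINT = "http://example.org/servogrid/wfs"

    params = {
        "service": "WFS",
        "version": "1.0.0",
        "request": "GetFeature",
        "typeName": "fault",          # a GML feature type like the sample two slides below
        "maxFeatures": "10",
    }

    url = WFS_ENDPOINT + "?" + urlencode(params)
    print(url)
    # gml_document = urlopen(url).read()   # would return GML feature members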

  25. A Screen Shot From the WMS Client

  26. WMS uses WFS, which uses data sources
<gml:featureMember>
  <fault>
    <name>Northridge2</name>
    <segment>Northridge2</segment>
    <author>Wald D. J.</author>
    <gml:lineStringProperty>
      <gml:LineString srsName="null">
        <gml:coordinates>-118.72,34.243 -118.591,34.176</gml:coordinates>
      </gml:LineString>
    </gml:lineStringProperty>
  </fault>
</gml:featureMember>
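A small sketch of consuming such a GML fragment on the client side with only the Python standard library; the document string is a trimmed copy of the fragment above, wrapped so the gml: prefix is bound to the standard GML namespace.

    import xml.etree.ElementTree as ET

    GML_NS = "http://www.opengis.net/gml"   # the standard GML namespace

    doc = """<gml:featureMember xmlns:gml="http://www.opengis.net/gml">
      <fault>
        <name>Northridge2</name>
        <gml:lineStringProperty>
          <gml:LineString srsName="null">
            <gml:coordinates>-118.72,34.243 -118.591,34.176</gml:coordinates>
          </gml:LineString>
        </gml:lineStringProperty>
      </fault>
    </gml:featureMember>"""

    root = ET.fromstring(doc)
    name = root.findtext("fault/name")
    coords = root.findtext(
        "fault/{%s}lineStringProperty/{%s}LineString/{%s}coordinates"
        % (GML_NS, GML_NS, GML_NS))
    points = [tuple(map(float, pair.split(","))) for pair in coords.split()]
    print(name, points)   # Northridge2 [(-118.72, 34.243), (-118.591, 34.176)]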

  27. Interdependent Critical Infrastructure Simulations [Screenshot: WMS client showing electric power and natural gas data from LANL, with zoom-in, zoom-out, feature-info, measure-distance, clear-distance, drag-and-drop, and refresh-to-initial-map controls]

  28. Typical use of Grid Messaging in NASA [Figure: a Sensor Grid, a GIS Grid and a Datamining Grid linked by Grid eventing]

  29. Typical use of Grid Messaging [Figure: a Geographical Information System Grid in which sensors post data to NaradaBrokering both before and after processing by filter/datamining services; a Web Feature Service (GIS data) subscribes and notifies, backed by Grid database archives; HPSearch manages the GIS Grid and WS-Context stores dynamic data]

  30. Real Time GPS and Google Maps: Subscribe to a live GPS station. Position data from SOPAC is combined with Google map clients. Select and zoom to a GPS station location; click icons for more information.

  31. Integrating Archived Web Feature Services and Google Maps: Google Maps can be integrated with Web Feature Service archives to filter and browse seismic records.

  32. Google Maps as a Service accessed from our WMS Client

  33. What is Happening?
• Grid ideas are being developed in (at least) four communities:
  • Web Services – W3C, OASIS, (DMTF)
  • Global Grid Forum (high-performance computing, e-Science)
  • Enterprise Grid Alliance (a commercial "Grid Forum" with a near-term focus)
• Service standards are being debated
• Grid operational infrastructure is being deployed
• Grid architecture and core software are being developed
  • Apache has several important projects, as do academia and large and small companies
• Particular system services are being developed "centrally" – OGSA or the GS-* framework for this in GGF; WS-* in OASIS/W3C/Microsoft-IBM
• Many fields are setting domain-specific standards and building domain-specific services
• The USA started it, but Europe is now probably in the lead, and Asia will soon catch the USA if current momentum (roughly zero for the USA) continues

  34. The Grid and Web Service Institutional Hierarchy (must set standards to get interoperability)
• 4: Application or Community-of-Interest-specific services such as "Run BLAST" or "Look at houses for sale"
• 3: Generally useful services and features, such as "Access a database", "Submit a job", "Manage a cluster", "Support a portal" or "Collaborative visualization"
• 2: System services and features: handlers like WS-RM and security, programming models like BPEL, or registries like UDDI
• 1: Container and run-time (hosting) environment
• Layers 3–4 correspond to OGSA, GS-* and some WS-* (GGF/W3C/...); layer 2 to WS-* from OASIS/W3C/industry; layer 1 to Apache Axis, .NET etc.

  35. Philosophy of Web Service Grids
• Much of distributed computing was built by natural extension of computing models developed for sequential machines
• This leads to the distributed object (DO) model represented by Java and CORBA
  • RPC (Remote Procedure Call), or RMI (Remote Method Invocation) for Java
• Key people think this is not a good idea, as it scales badly and ties distributed entities together too tightly
• Distributed objects are replaced by services
  • Note that CORBA was considered too complicated in both its organization and its proposed infrastructure
  • and Java was considered too "tightly coupled to Sun"
  • so there were other reasons to discard the DO model
• Thus we replace distributed objects by services connected by "one-way" messages, not by request-response messages
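A toy contrast of the two styles in plain Python, not any real RPC or messaging toolkit: the request-response call blocks the caller on the callee, while the one-way message merely deposits the request on a queue for whichever service consumes it later.

    import queue

    class DummyRemote:
        def compute(self, x):
            return x * x

    # Tightly coupled, request-response (distributed-object / RPC) style:
    # the caller blocks on the callee and depends on its interface.
    def rpc_style(remote_object, x):
        return remote_object.compute(x)

    # Loosely coupled, one-way message (service) style: the caller only knows a
    # message format and a destination; delivery and processing happen later.
    message_bus = queue.Queue()

    def send_one_way(destination, payload):
        message_bus.put({"to": destination, "body": payload})   # returns immediately

    print(rpc_style(DummyRemote(), 3))
    send_one_way("simulation-service", {"samples": 1000})
    print(message_bus.get())   # a service would consume this asynchronously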

  36. The Ten Areas Covered by the 60 Core WS-* Specifications [Table not reproduced in transcript]: RTI and NCOW need all of these?

  37. Stateful Interactions
• There are (at least) four approaches to specifying state:
  • OGSI uses factories to generate a separate service for each session, in standard distributed-object fashion
  • Globus GT-4 and WSRF use the metadata of a resource to identify the state associated with a particular session
  • WS-GAF uses WS-Context to provide an abstract context defining the state; this has the strength, and the weakness, that it reveals less about the nature of the session
  • The WS-I+ "pure Web service" approach leaves state specification to the application – e.g. put a context in the SOAP body
• I think we should smile and write a great metadata service hiding all these different models for state and metadata
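The closing suggestion, a metadata service that hides the competing state models, can be sketched as a conversation-context store keyed by an identifier carried in each message, roughly in the WS-Context spirit; all class and field names here are hypothetical.

    import uuid

    class ContextStore:
        """Hypothetical metadata service: maps a context id to session state,
        whatever model (OGSI factory, WSRF resource, WS-Context, application)
        the client happens to follow."""
        def __init__(self):
            self._store = {}

        def open_context(self, initial=None):
            cid = str(uuid.uuid4())
            self._store[cid] = dict(initial or {})
            return cid

        def get(self, cid):
            return self._store[cid]

        def update(self, cid, **fields):
            self._store[cid].update(fields)

    contexts = ContextStore()
    cid = contexts.open_context({"federation": "crisis-demo"})
    # A "pure Web service" client would carry cid inside the SOAP body;
    # WSRF or OGSI styles would expose the same state differently.
    contexts.update(cid, step=42)
    print(contexts.get(cid))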

  38. WS-* implies the Service Internet
• We have the classic (Cisco, Juniper ...) Internet routing the flood of ordinary packets in the OSI stack architecture
• Web Services build the "Service Internet" or IOI (Internet On Internet) with
  • routing via WS-Addressing, not the IP header
  • fault tolerance via WS-RM, not TCP
  • security via WS-Security/SecureConversation, not IPSec/SSL
  • data transmission by WS-Transfer, not HTTP
  • information services (UDDI/WS-Context), not DNS/configuration files
• These operate at the message/web-service level, not the packet/IP-address level
• A software-based Service Internet is possible because computers are "fast"
• This is familiar from peer-to-peer networks, and it is built as a software overlay network defining the Grid (the analogy is a VPN)
• The SOAP header contains all the information needed for the "Service Internet" (Grid operating system), with the SOAP body containing the information for the Grid application service
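As a sketch of that last point, the envelope below uses the standard SOAP 1.1 and WS-Addressing namespaces, with the routing information (wsa:To, wsa:Action, wsa:MessageID) in the header and only the application payload in the body; the service URIs and sensor id are made up.

    # Minimal SOAP 1.1 envelope with WS-Addressing headers, built as a plain string.
    # The header is what the "Grid Operating System" (handlers, brokers) reads;
    # the body is for the application service. URIs below are placeholders.
    envelope = """<soap:Envelope
        xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/"
        xmlns:wsa="http://www.w3.org/2005/08/addressing">
      <soap:Header>
        <wsa:To>http://example.org/grid/SensorService</wsa:To>
        <wsa:Action>http://example.org/grid/SensorService/ReadSensor</wsa:Action>
        <wsa:MessageID>urn:uuid:00000000-0000-0000-0000-000000000001</wsa:MessageID>
      </soap:Header>
      <soap:Body>
        <ReadSensor xmlns="http://example.org/grid/sensors">
          <sensorId>GPS-STATION-17</sensorId>
        </ReadSensor>
      </soap:Body>
    </soap:Envelope>"""
    print(envelope)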

  39. WS-I Interoperability
• A critical underpinning of Grids and Web Services is the gradually growing set of specifications in the Web Services Interoperability (WS-I) profiles
• The WS-I Basic Profile 1.0a (http://www.ws-i.org) gives us XSD, WSDL 1.1, SOAP 1.1 and UDDI, and the first security profile adds parts of WS-Security
• We imagine the "60 specifications" being checked out and evolved in the cauldron of the real world, with best practice occasionally identifying a new specification to add to WS-I, which gradually increases in scope
• Note that only 4.5 out of 60 specifications have "made it" by this definition

  40. Activities in Global Grid Forum Working Groups [Table not reproduced in transcript]: RTI and NCOW need all of these?

  41. SOAP Message Structure I [Figure: headers H1–H4 and a body pass through container handlers/filters F1–F4 to the service, under workflow control]
• A SOAP message consists of headers and a body
• Headers could be for addressing, WS-RM, security, eventing etc.
• Headers are processed by handlers or filters, controlled by the container, as a message enters or leaves a service
• The body is processed by the service itself
• The header processing defines the "Web Service Distributed Operating System"
• Containers queue messages, control the processing of headers, and offer convenient (language-specific) service interfaces
• Handlers are really the core operating-system services: they receive and give back messages like services do, but they process, and perhaps modify, particular elements of the SOAP message – the WS-* standards specify the handler structure
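A minimal sketch of the container/handler pipeline just described: each handler inspects or updates the headers as the message passes through, and only the body reaches the service. The handler and service names are illustrative, not a real container API.

    # Message = headers (for the "Web Service Distributed Operating System")
    # plus a body (for the application service).
    message = {
        "headers": {"addressing": {"to": "SimulationService"},
                    "security": {"token": "demo"},
                    "reliability": {"seq": 7}},
        "body": {"operation": "advanceTime", "step": 0.5},
    }

    def security_handler(msg):
        assert "token" in msg["headers"]["security"], "reject unauthenticated message"
        return msg

    def reliability_handler(msg):
        msg["headers"]["reliability"]["acked"] = True   # WS-RM-style bookkeeping
        return msg

    def simulation_service(body):
        return {"operation": body["operation"], "status": "done"}

    # The container runs the handler chain over the headers, then hands the
    # body to the service itself.
    for handler in (security_handler, reliability_handler):
        message = handler(message)
    print(simulation_service(message["body"]))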

  42. SOAP Message Structure II [Figure: headers H1–H4 with header parts hp1–hp5 and a body with body parts bp1–bp3]
• The content of the individual headers and of the body is defined by the XML Schema associated with the WS-* headers and the service WSDL
• The SOAP Infoset captures the header and body structure
• The XML Infosets of the individual headers and of the body capture the details of each message part
• The Web Service architecture requires that we capture the Infoset structure, but does not require that we represent XML in angle-bracket <content>value</content> notation
• The Infoset represents the semantic structure of the message and its parts

  43. High Performance Streams: optimize the stream representation and transport protocol [Figure: standard SOAP messages pass through invertible, Infoset-preserving filters (Filter1/Filter1-1, Filter2/Filter2-1) that produce optimized forms SOAP' and SOAP'' and choose the transport protocol; a WS-Context database holding the conversation context coordinates between source, sink and SOAP intermediaries, and between different messages in a stream]

  44. High Performance XML [Figure: headers H1–H4 and a body pass through container handlers, including representation and transport filters Fr and Ft; the conversation context (URI-S, URI-R, URI-T) links the replicated message header, the transported message, and the handler and service message views]
• Filters controlled by the conversation context convert messages between representations, using a permanent context (metadata) catalog to hold the conversation context
• There can be different message views for each endpoint, or even for individual handlers and the service within one endpoint
• The conversation context is a fast dynamic metadata service that enables the conversions
• NaradaBrokering will implement Fr and Ft using its support for multiple transports, fast filters and message queuing

  45. The Global Information Grid Core Enterprise Services

  46. Major Conclusions I
• One can map 7.5 out of 9 NCOW and GiG core capabilities into the Web Service (WS-*) and Grid (GS-*) architecture and core services
• The analysis of Grids in the NCOW document is inaccurate (it confuses Grids with Globus and considers only early activities)
• There are some "mismatches" on both the NCOW and Grid sides
  • GS-*/WS-* do not have collaboration and miss some messaging
  • NCOW does not have, at the core level, system metadata and resource/service scheduling and matching
• Higher-level services of importance include GIS (Geographical Information Systems), sensors and data mining

  47. Major Conclusions II
• Criticisms of Web services in a recent paper by Birman seem to be addressed by Grids, or reflect the immaturity of initial technology implementations
• NCOW does not seem to have any analysis of how to build its systems on WS-*/GS-* technologies in a layered fashion; it does have a layered service architecture, so this can be done
  • They agree with a service-oriented architecture
  • They seem to have no process for agreeing to WS-*/GS-* or setting other standards for the CES
• A Grid of Grids allows modular architectures and natural treatment of legacy systems

  48. DoD Core Services and WS-* plus GS-* I

  49. DoD Core Services: WS-* and GS-* II

  50. Grids and HLA/RTI I
• HLA, through IEEE 1516, has specified the interfaces for its key services, which are supported by the RTI (Run Time Infrastructure)
• HLA does not specify the semantics of each message or the core system services
• RTI implementations are NOT interoperable, although each one should support any HLA federation
• RTI implementations become a full distributed-system environment, as they need metadata, reliable messaging etc., with simulation support only a small part
• Grids can be used with an "unchanged" HLA via
  • dynamic assignment of compute resources to support federates
  • building web-service interfaces to federates (XMSF)
• Or use Grids as the infrastructure to build a new generation of RTI that uses Web system services and just adds simulation support
