
A Wide Range of Scientific Disciplines Will Require a Common Infrastructure



  1. A Wide Range of Scientific Disciplines Will Require a Common Infrastructure
  • Example: Two e-Science Grand Challenges
    • NSF's EarthScope (USArray)
    • NIH's Biomedical Informatics Research Network
  • Common Needs
    • Large Number of Sensors / Instruments
    • Daily Generation of Large Data Sets
    • Data on Multiple Length and Time Scales
    • Automatic Archiving in Distributed Federated Repositories
    • Large Community of End Users
    • Multi-Megapixel and Immersive Visualization
    • Collaborative Analysis From Multiple Sites
    • Complex Simulations Needed to Interpret Data

  2. NSF's EarthScope: USArray
  • Resolution of Crust & Upper Mantle Structure to Tens of Kilometers
  • Transportable Array
    • Fixed-Design Broadband Array
    • 400 Broadband Seismometers
    • ~70 km Spacing
    • ~1500 x 1500 km Grid
    • ~2-Year Deployments at Each Site
    • Rolling Deployment Over More Than 10 Years
  • Permanent Reference Network
    • GSN/NSN-Quality Seismometers
    • Geodetic-Quality GPS Receivers
  • All Data to Community in Near Real Time (see the data-volume sketch below)
  • Bandwidth Will Be Driven by Visual Analysis in Federated Repositories
  Source: Frank Vernon (IGPP, SIO, UCSD)
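
The 400 stations on this slide imply a steady daily flow of data into the federated repositories. A minimal back-of-envelope sketch, assuming 3-component sensors sampled at 40 Hz with 4-byte samples (illustrative assumptions; these figures are not given in the slides), shows the scale:

```python
# Rough daily data volume for the transportable array.
# The station count is from the slide; channel count, sample rate, and sample
# size are illustrative assumptions, not from the source.
stations = 400
components = 3          # assumed: one vertical + two horizontal channels
sample_rate_hz = 40     # assumed broadband sample rate
bytes_per_sample = 4    # assumed: 32-bit samples, uncompressed

bytes_per_day = stations * components * sample_rate_hz * bytes_per_sample * 86_400
print(f"~{bytes_per_day / 1e9:.1f} GB/day before compression")  # ~16.6 GB/day
```

Even under these modest assumptions the array produces tens of gigabytes per day, which is why near-real-time delivery and automatic archiving appear among the common needs on slide 1.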

  3. Rollout Over 14 Years Starting With Existing Broadband Stations

  4. Federated Repositories Are Needed to Link Brain Multi-Scale Structure and Function
  • Filling Information Gaps With Advanced 3D & 4D Microscopies and New Labeling Technologies
  • Leveraging Advances in Computational Capabilities
  • Electron Tomography Over Multiple Scales
  Source: Mark Ellisman, UCSD

  5. NIH Is Funding a National-Scale Grid Federating Multi-Scale Biomedical Data
  • Biomedical Informatics Research Network (BIRN)
  • NIH Plans to Expand to Other Organs and Many Laboratories
  • Part of the UCSD CRBS (Center for Research on Biological Structure) and the National Partnership for Advanced Computational Infrastructure

  6. Similar Needs for Many Other e-Science Community Resources
  • Sloan Digital Sky Survey
  • ALMA
  • LHC ATLAS

  7. A LambdaGrid Will Be the Backbone for an e-Science Network
  • Metro Area Laboratories Springing Up Worldwide
    • Developing GigE and 10GigE Applications and Services
    • Testing Optical Switches
  • Metro Optical Testbeds: The Next GigaPOP?
  • Layered Architecture (from the slide's diagram): Apps, Middleware, and Clusters Sit Above Dynamically Allocated Lightpaths, Switch Fabrics, and Physical Monitoring, Coordinated by a Control Plane (see the sketch below)
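
A minimal sketch of what "dynamically allocated lightpaths" could look like from the middleware's point of view. The LightpathRequest and ControlPlane names and methods are hypothetical illustrations, not an actual LambdaGrid or GMPLS API:

```python
from dataclasses import dataclass

# Hypothetical control-plane sketch: an application asks for a dedicated
# lightpath, uses it, and releases it. Names and fields are illustrative only.
@dataclass
class LightpathRequest:
    src: str             # e.g. a cluster's optical on-ramp
    dst: str             # e.g. a federated data repository
    bandwidth_gbps: int  # a full lambda, e.g. 10 Gb/s
    duration_s: int      # how long the application needs the path

class ControlPlane:
    def __init__(self):
        self.active = {}

    def allocate(self, req: LightpathRequest) -> str:
        """Pick a free wavelength, configure the switch fabric, return a path id."""
        path_id = f"{req.src}->{req.dst}#{len(self.active)}"
        self.active[path_id] = req
        return path_id

    def release(self, path_id: str) -> None:
        """Tear the lightpath down so the wavelength can be reused."""
        self.active.pop(path_id, None)

cp = ControlPlane()
path = cp.allocate(LightpathRequest("EVL-viz", "StarLight-repo", 10, 3600))
# ... application streams data over the dedicated lambda ...
cp.release(path)
```

The point of the sketch is the division of labor on the slide: applications and middleware live above the control plane, which owns the mapping from requests to wavelengths, switch fabrics, and monitoring.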

  8. Campus Laboratory LambdaGrid "On-Ramps" Are Needed to Link to the MetroGrid
  • Diagram: TND2, TNV2, and TNC2 clusters at UIC (LAC, EVL) and StarLight/Northwestern attach through routers and O-O-O switches with 10x10GigE links onto 2x40GigE DWDM metro connections (see the sizing sketch below)
  • TND2 = Datamining Clusters at the NU and UIC Lab. for Advanced Computing
    • 32 Deerfield processors, each with 10GigE networking; NetRam storage
  • TNV2 = Visualization Clusters at NU and UIC EVL
    • 27 Deerfield processors, each with 10GigE networking; 25 screens
  • TNC2 = TeraGrid Computing Clusters at EVL
    • 32 Deerfield processors, each with 10GigE networking
  Source: Tom DeFanti, EVL, UIC
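
Using the numbers on this slide (32 nodes with 10GigE NICs behind a 2x40GigE DWDM uplink), a quick sketch shows why the on-ramp, not the cluster, tends to be the bottleneck. Treating NIC speeds as a simple aggregate is an illustrative simplification, not a claim about the actual testbed traffic:

```python
# Rough sizing of a campus "on-ramp" using the figures on the slide:
# 32 cluster nodes with 10GigE NICs feeding a 2 x 40GigE DWDM uplink.
nodes = 32
nic_gbps = 10
uplink_gbps = 2 * 40

cluster_gbps = nodes * nic_gbps            # 320 Gb/s of potential cluster I/O
oversubscription = cluster_gbps / uplink_gbps
print(f"cluster: {cluster_gbps} Gb/s, uplink: {uplink_gbps} Gb/s, "
      f"oversubscription {oversubscription:.0f}:1")   # 4:1
```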

  9. Research Topics for Building an e-Science LambdaGrid
  • Provide Integrated Services in the Tbit/s Range
  • Lambda-Centric Communication & Computing Resource Allocation
  • Middleware Services for Real-Time Distributed Programs
  • Extend Internet QoS Provisioning Over a WDM-Based Network
  • Develop a Common Control-Plane Optical Transport Architecture:
    • Transport Traffic Over Multiple User Planes With Variable Switching Modes
      • Lambda Switching
      • Burst Switching
      • Inverse Multiplexing (One Application Uses Multiple Lambdas; see the sketch after this list)
  • Extend GMPLS:
    • Routing
    • Resource Reservation
    • Restoration
  UCSD, UCI, USC, UIC, & NW
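
A minimal sketch of the inverse-multiplexing idea named above: one application's stream is striped round-robin across several lambdas and put back in order at the receiver. The chunking, sequence numbers, and function names are illustrative; a real scheme also has to handle loss, skew, and per-lambda reordering, which this does not:

```python
from itertools import cycle

def stripe(data: bytes, n_lambdas: int, chunk: int = 1500):
    """Split one stream into (lambda_index, sequence_number, payload) frames."""
    lambdas = cycle(range(n_lambdas))
    for seq, off in enumerate(range(0, len(data), chunk)):
        yield next(lambdas), seq, data[off:off + chunk]

def reassemble(frames):
    """Reorder frames by sequence number and concatenate the payloads."""
    return b"".join(p for _, _, p in sorted(frames, key=lambda f: f[1]))

payload = bytes(10_000)
frames = list(stripe(payload, n_lambdas=4))   # one application, four lambdas
assert reassemble(frames) == payload
```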

  10. Research Topics for Building an e-Science LambdaGrid
  • Enhance Security Mechanisms:
    • End-to-End Integrity Check of Data Streams (see the sketch after this list)
    • Access Multiple Locations With Trusted Authentication Mechanisms
    • Use Grid Middleware for Authentication, Authorization, Validation, Encryption, and Forensic Analysis Across Multiple Systems and Administrative Domains
  • Distribute Storage While Optimizing Storewidth:
    • Distribute Massive Pools of Physical RAM (Network Memory)
    • Develop Visual TeraMining Techniques to Mine Petabytes of Data
    • Enable Ultrafast Image Rendering
  • Create, for Optical Storage Area Networks (OSANs):
    • Analysis and Modeling Tools
    • OSAN Control and Data Management Protocols
    • Buffering Strategies and Memory Hierarchies for WDM Optical Networks
  UCSD, UCI, USC, UIC, & NW
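
One simple way to realize the end-to-end integrity check named above: both sender and receiver fold the stream chunk-by-chunk into a cryptographic digest and compare the results. The hash choice and chunking here are assumptions for illustration, not something specified in the slides:

```python
import hashlib

def stream_digest(chunks) -> str:
    """Fold an iterable of byte chunks into a single SHA-256 digest."""
    h = hashlib.sha256()
    for chunk in chunks:       # chunks may come from disk, RAM pools, or the network
        h.update(chunk)
    return h.hexdigest()

sent = [b"slice-%d" % i for i in range(1000)]
received = list(sent)          # in practice, read back from the remote repository
assert stream_digest(sent) == stream_digest(received)
```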

  11. A Layered Software Architecture Is Needed for Defense and Civilian Applications
  Source: SPAWAR Systems Center San Diego, www.ndia-sd.org/docs/NDIA_20June00.pdf
