A Wide Range of Scientific Disciplines Will Require a Common Infrastructure
• Example: Two e-Science Grand Challenges
  • NSF's EarthScope USArray
  • NIH's Biomedical Informatics Research Network (BIRN)
• Common Needs
  • Large Number of Sensors / Instruments
  • Daily Generation of Large Data Sets
  • Data on Multiple Length and Time Scales
  • Automatic Archiving in Distributed Federated Repositories
  • Large Community of End Users
  • Multi-Megapixel and Immersive Visualization
  • Collaborative Analysis From Multiple Sites
  • Complex Simulations Needed to Interpret Data
NSF's EarthScope--USArray
• Resolution of Crust & Upper Mantle Structure to Tens of km
• Transportable Array
  • Fixed-Design Broadband Array
  • 400 Broadband Seismometers
  • ~70 km Spacing
  • ~1500 x 1500 km Grid (see station-count sketch below)
  • ~2-Year Deployments at Each Site
  • Rolling Deployment Over More Than 10 Years
• Permanent Reference Network
  • GSN/NSN-Quality Seismometers
  • Geodetic-Quality GPS Receivers
• All Data to Community in Near Real Time
• Bandwidth Will Be Driven by Visual Analysis in Federated Repositories
Source: Frank Vernon (IGPP SIO, UCSD)
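The transportable-array numbers above can be cross-checked with a minimal arithmetic sketch (plain Python, standard library only). The uniform square-grid assumption is mine; the ~70 km spacing, ~1500 x 1500 km footprint, and 400-seismometer count come from the slide.

```python
# Sanity check on the USArray transportable-array figures quoted above.
# Assumption: stations sit on a uniform square grid; the slide only gives
# ~70 km spacing, a ~1500 x 1500 km footprint, and 400 seismometers.

spacing_km = 70          # ~70 km station spacing
footprint_km = 1500      # ~1500 x 1500 km rolling footprint
n_stations = 400         # broadband seismometers per deployment

stations_per_side = footprint_km / spacing_km     # ~21.4
grid_points = round(stations_per_side) ** 2       # ~441

print(f"stations per side ~ {stations_per_side:.1f}")
print(f"uniform-grid estimate ~ {grid_points} vs. {n_stations} deployed")
# The estimate (~440) and the actual count (400) agree to within ~10%,
# i.e. 70 km spacing roughly fills the 1500 x 1500 km footprint.
```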
Rollout Over 14 Years Starting With Existing Broadband Stations
Federated Repositories Are Needed to Link Brain Multi-Scale Structure and Function
• Filling Information Gaps with Advanced 3D & 4D Microscopies and New Labeling Technologies
• Leveraging Advances in Computational Capabilities
• Electron Tomography Over Multiple Scales
Source: Mark Ellisman, UCSD
NIH is Funding a National-Scale Grid Federating Multi-Scale Biomedical Data
• Biomedical Informatics Research Network (BIRN)
• NIH Plans to Expand to Other Organs and Many Laboratories
• Part of the UCSD CRBS (Center for Research on Biological Structure)
• National Partnership for Advanced Computational Infrastructure
Similar Needs for Many Other e-Science Community Resources
• Sloan Digital Sky Survey
• ALMA
• LHC ATLAS
A LambdaGrid Will Be the Backbone for an e-Science Network
• Metro Area Laboratories Springing Up Worldwide
  • Developing GigE and 10GigE Applications and Services
  • Testing Optical Switches
• Metro Optical Testbeds: the Next GigaPOP?
[Layered-stack figure: Apps, Middleware, Clusters, Dynamically Allocated Lightpaths, Switch Fabrics, and Physical Monitoring, spanned by a common control plane (see control-plane sketch below)]
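As a rough illustration of what "Dynamically Allocated Lightpaths" under a common control plane could look like to an application, here is a minimal Python sketch. The ControlPlane class, its method names, and the endpoint hostnames are hypothetical stand-ins, not an actual GMPLS or testbed API.

```python
# Minimal sketch (all class and method names are hypothetical) of the control
# plane in the layer diagram above: an application requests a dedicated
# lightpath between two cluster endpoints, streams over it, then releases it.
import itertools

class ControlPlane:
    """Toy allocator standing in for a GMPLS-style lambda control plane."""

    def __init__(self, lambdas_per_link=16):
        self.free = lambdas_per_link          # unallocated wavelengths
        self._ids = itertools.count(1)

    def setup_lightpath(self, src, dst, gbps=10):
        """Reserve one wavelength for an end-to-end src -> dst lightpath."""
        if self.free == 0:
            raise RuntimeError("no free lambdas on this link")
        self.free -= 1
        path_id = next(self._ids)
        print(f"lightpath {path_id}: {src} -> {dst} at {gbps} Gb/s")
        return path_id

    def teardown_lightpath(self, path_id):
        """Return the wavelength to the free pool."""
        self.free += 1
        print(f"lightpath {path_id} released")

# Usage: a visualization cluster pulls a data set from a federated repository.
cp = ControlPlane()
lp = cp.setup_lightpath("viz-cluster.evl", "repository.sdsc", gbps=10)
# ... application-layer bulk transfer happens here ...
cp.teardown_lightpath(lp)
```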
Campus Laboratory LambdaGrid "On-Ramps" Are Needed to Link to MetroGrid
[Network diagram: TND2, TNV2, and TNC2 clusters at the UIC Lab for Advanced Computing, EVL, and StarLight/Northwestern, linked through O-O-O switches and 10x10GigE routers onto 2x40GigE DWDM metro links]
• TND2 = Datamining Clusters at NU and UIC Lab for Advanced Computing
  • 32 Deerfield processors with 10GigE networking each, NetRam storage (see bandwidth sketch below)
• TNV2 = Visualization Clusters at NU and UIC EVL
  • 27 Deerfield processors with 10GigE networking each, 25 screens
• TNC2 = TeraGrid Computing Clusters at EVL
  • 32 Deerfield processors with 10GigE networking each
Source: Tom DeFanti, EVL, UIC
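Reading the 2x40GigE DWDM labels in the diagram as the metro uplink capacity (an assumption on my part), a short arithmetic sketch shows why the campus on-ramp is the aggregation point: one 32-node, 10 GigE-per-node cluster can source far more traffic than the metro lambdas carry.

```python
# Rough on-ramp sizing check from the slide numbers above (no external data):
# one TND2/TNC2-class cluster has 32 nodes, each with a 10 GigE interface,
# feeding a metro link read here as 2 x 40 GigE over DWDM.

nodes = 32
nic_gbps = 10                 # 10 GigE per Deerfield node
metro_links = 2
metro_gbps_each = 40          # 40 GigE lambdas

cluster_gbps = nodes * nic_gbps             # 320 Gb/s of potential cluster I/O
metro_gbps = metro_links * metro_gbps_each  # 80 Gb/s of metro capacity

print(f"cluster aggregate : {cluster_gbps} Gb/s")
print(f"metro uplink      : {metro_gbps} Gb/s")
print(f"oversubscription  : {cluster_gbps / metro_gbps:.0f}:1")
# ~4:1 -- the campus on-ramp, not the metro DWDM, is where flows from many
# nodes have to be aggregated and scheduled onto a few lambdas.
```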
Research Topics for Building an e-Science LambdaGrid
• Provide Integrated Services in the Tbit/s Range
• Lambda-Centric Communication & Computing Resource Allocation
• Middleware Services for Real-Time Distributed Programs
• Extend Internet QoS Provisioning Over a WDM-Based Network
• Develop a Common Control-Plane Optical Transport Architecture:
  • Transport Traffic Over Multiple User Planes With Variable Switching Modes
    • Lambda Switching
    • Burst Switching
    • Inverse Multiplexing (One Application Uses Multiple Lambdas; see striping sketch below)
  • Extend GMPLS:
    • Routing
    • Resource Reservation
    • Restoration
UCSD, UCI, USC, UIC, & NW
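To make "Inverse Multiplexing (One Application Uses Multiple Lambdas)" concrete, here is a minimal sketch of striping a single application stream across several lambdas and reassembling it in order at the receiver. The chunk size, the in-memory lane representation, and the function names are illustrative assumptions, not part of the slide.

```python
# Minimal sketch of inverse multiplexing as named in the list above: one
# application stream is striped round-robin across several lambdas and
# reassembled in sequence-number order at the receiving end.

def stripe(data: bytes, n_lambdas: int, chunk: int = 64 * 1024):
    """Split a byte stream into (sequence number, payload) pieces per lambda."""
    lanes = [[] for _ in range(n_lambdas)]
    for seq, off in enumerate(range(0, len(data), chunk)):
        lanes[seq % n_lambdas].append((seq, data[off:off + chunk]))
    return lanes

def reassemble(lanes):
    """Merge the per-lambda pieces back into the original byte order."""
    pieces = sorted(p for lane in lanes for p in lane)  # sort by sequence number
    return b"".join(payload for _, payload in pieces)

payload = bytes(1_000_000)                # stand-in for a bulk data transfer
lanes = stripe(payload, n_lambdas=4)      # e.g. four 10 GigE lambdas
assert reassemble(lanes) == payload       # receiver sees one contiguous stream
```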
Research Topics for Building an e-Science LambdaGrid (Continued)
• Enhance Security Mechanisms:
  • End-to-End Integrity Check of Data Streams (see integrity sketch below)
  • Access Multiple Locations With Trusted Authentication Mechanisms
  • Use Grid Middleware for Authentication, Authorization, Validation, Encryption, and Forensic Analysis of Multiple Systems and Administrative Domains
• Distribute Storage While Optimizing Storewidth:
  • Distribute Massive Pools of Physical RAM (Network Memory)
  • Develop Visual TeraMining Techniques to Mine Petabytes of Data
  • Enable Ultrafast Image Rendering
• Create, for Optical Storage Area Networks (OSANs):
  • Analysis and Modeling Tools
  • OSAN Control and Data Management Protocols
  • Buffering Strategies and Memory Hierarchies for WDM Optical Networks
UCSD, UCI, USC, UIC, & NW
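As a concrete illustration of the "End-to-End Integrity Check of Data Streams" item, a minimal standard-library Python sketch: the sender computes per-chunk digests, and the receiver recomputes and compares them. The chunk size and the choice of SHA-256 are assumptions, not from the slide.

```python
# Minimal sketch of an end-to-end integrity check: the sender hashes the
# stream in fixed-size chunks, ships the digests alongside the data, and the
# receiver recomputes and compares them chunk by chunk.
import hashlib

CHUNK = 1 << 20   # 1 MiB per chunk (illustrative choice)

def chunk_digests(data: bytes):
    """Per-chunk SHA-256 digests computed at the sending end."""
    return [hashlib.sha256(data[i:i + CHUNK]).hexdigest()
            for i in range(0, len(data), CHUNK)]

def verify(received: bytes, digests):
    """Receiver-side check: every chunk must match the sender's digest."""
    return chunk_digests(received) == digests

payload = bytes(3 * CHUNK + 12345)        # stand-in for a transferred data set
manifest = chunk_digests(payload)         # travels with (or ahead of) the data
assert verify(payload, manifest)                        # intact end to end
assert not verify(payload[:-1] + b"\x01", manifest)     # corruption is detected
```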
A Layered Software Architecture Is Needed for Defense and Civilian Applications
Source: SPAWAR Systems Center San Diego, www.ndia-sd.org/docs/NDIA_20June00.pdf