Learn about the NMI Program's goals, middleware importance, projects, and major Grid efforts in advancing networking infrastructure research. Explore NMI objectives and testbed processes for developing scalable, sustainable middleware solutions for the research community.
Advanced Networking Infrastructure and Research
NMI (NSF Middleware Initiative)
Program Director: Alan Blatecky
NMI Agreements/Awards
• 3 Cooperative Agreements were executed to establish NMI (effective 9-1-01)
• Service Integrator and Service Provider functions were combined into a common approach and effort
• 9 middleware research projects awarded
NMI Organization: Two Teams
• GRIDS Center: ISI, NCSA, UC, UCSD & UW
• EDIT Team (Enterprise and Desktop Integration Technologies): EDUCAUSE, Internet2 & SURA

Purpose of NMI
To design, develop, deploy and support a reusable, expandable set of middleware functions and services that benefit applications in a networked environment
What does Middleware Do?
The purpose of middleware is:
1. To let scientists and engineers transparently use and share distributed resources, such as computers, data, and instruments
2. To develop effective collaboration and communications tools, such as Grid technologies, desktop video, and other advanced services, to expedite research and education
3. To develop a working architecture and approach that can be extended to Internet users around the world
Middleware is the stuff that makes "transparently use" happen, providing consistency, security, privacy and capability
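As a rough illustration of what "transparently use" means in practice, the sketch below wraps two command-line tools from the Globus Toolkit (globus-job-run for remote job execution via GRAM, and globus-url-copy for GridFTP transfers) so an application never deals with the remote site directly. The host names, contact string, and file paths are hypothetical placeholders, and exact tool options vary by toolkit version.

```python
# Illustrative sketch only: thin Python wrappers around Globus Toolkit
# command-line tools. Hosts, contact strings, and paths are hypothetical.
import subprocess

def run_remote(contact, command, *args):
    """Run a command on a remote Grid resource via GRAM (globus-job-run)."""
    result = subprocess.run(
        ["globus-job-run", contact, command, *args],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

def copy_file(src_url, dst_url):
    """Move data between sites with GridFTP (globus-url-copy)."""
    subprocess.run(["globus-url-copy", src_url, dst_url], check=True)

if __name__ == "__main__":
    # Hypothetical resources; real contact strings would come from a directory service.
    print(run_remote("grid.example.edu/jobmanager-pbs", "/bin/hostname"))
    copy_file("gsiftp://grid.example.edu/data/run01.dat", "file:///tmp/run01.dat")
```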
NMI Goals
• Facilitate scientific productivity
• Increase research collaboration through shared data, computing, facilities and applications
• Support the education enterprise, from early adoption to deployment
• Establish a level of persistence and availability for users and developers
• Encourage the participation of industry partners, government labs and agencies
• Encourage and support the development of standards and open-source approaches
• Enable scaling and sustainability to support the larger research community and beyond
NMI Process (stages from experimental software to supported deployment)
• Experimental software & research applications
• International research & education
• Early implementations: directories, Grid services, authentication, etc.
• Early adopters: GriPhyN, NEES, campuses, etc.
• Middleware testbeds: experimental, beta, scaling & "hardening"
• Consensus: disciplines, communities, public & private
• Dissemination & support
• Middleware deployment
First Year Objectives
• Develop and release a version of Grid/middleware software
  • NMI Release 1 scheduled for April
  • NMI Release 2 probably in July/August
• Develop security and directory architectures and best practices for campus integration
• Establish associated support and training mechanisms
• Develop partnerships with external groups and partners
• Develop a communication and outreach plan
• Develop a repository of NMI software and best practices
Major Grid projects and efforts
• DOE Science Grid (SciDAC)
• NASA Information Power Grid
• GriPhyN – Grid Physics Network
• TeraGrid – Distributed Terascale Facility
• iVDGL – International Virtual Data Grid Laboratory
• BIRN – Biomedical Imaging Research Network
• NEES – Network for Earthquake Engineering Simulation
• Earth Systems Grid
• Space Grid
• PPDG – Particle Physics Data Grid
• DataTAG – Trans-Atlantic Grid Testbed
More Projects/Efforts
• UK Grid Support Center
• National Fusion Grid
• CEOS (Committee on Earth Observation Satellites)
• Astronomy Virtual Observatory
• European Data Grid
• ALMA – Atacama Large Millimeter Array
• LHC Computing Grid – Large Hadron Collider
• LIGO – Laser Interferometer Gravitational-Wave Observatory
• NEON – National Ecological Observatory Network
• SDSS – Sloan Digital Sky Survey
• Open Grid Consortium
• Global Grid Forum
NMI Release v.1
• Software components
  • Globus Toolkit (GRAM 1.5, MDS 2.2, GPT, GridFTP)
  • Condor-G
  • Network Weather Service
  • KCA 1.0, CPM 2.0, KX.509 1.0
  • eduPerson 1.5, eduOrg 1.0
• Compliance, testing and packaging
• Best practices and policies
• Suite of directory schemas and services (illustrated below)
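To show how the eduPerson/eduOrg directory pieces of the release are typically consumed, here is a minimal sketch of querying a campus LDAP directory for eduPerson attributes. It assumes a server that has loaded the eduPerson schema and uses the python-ldap package; the server URI, base DN, and user ID are hypothetical, and anonymous bind is assumed only for a public white-pages directory.

```python
# Minimal sketch: look up eduPerson attributes in a campus directory.
# Server, base DN, and uid below are hypothetical placeholders.
import ldap  # python-ldap package

def lookup_person(uid):
    conn = ldap.initialize("ldap://directory.example.edu")
    conn.simple_bind_s()  # anonymous bind, assuming a public white-pages directory
    return conn.search_s(
        "ou=People,dc=example,dc=edu",
        ldap.SCOPE_SUBTREE,
        "(uid=%s)" % uid,
        ["eduPersonPrincipalName", "eduPersonAffiliation", "mail"],
    )

if __name__ == "__main__":
    for dn, attrs in lookup_person("jdoe"):
        print(dn, attrs)
```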
NMI Testbed Program
• SURA call for proposals (responses due March 8)
• Testing to include:
  • Integration and distribution to the desktop
  • Interaction with common campus infrastructure
  • Vertical integration with communities of users
  • Component scalability and consistency
• 4 sponsored testbeds (plus ~4 unsponsored)
Project Manager: Mary Fran Yafchak
MAGIC – Middleware And Grid Infrastructure Coordination
• Coordinates interagency Grid and middleware efforts
• Enhances and encourages interoperable Grid and middleware domains
• Promotes usable, widely deployed middleware tools and services
• Provides a Federal voice for effective international coordination of Grid and middleware technologies
MAGIC Status
• Established by the Large Scale Networking Committee on January 8, 2002
• Representatives and structure being discussed
• Working charter being developed
• Agencies involved: DARPA, DOE, NSF, NASA, NIH, NIST
• Next steps:
  • Identification of immediate concerns and issues
  • Establishment of a MAGIC Engineering team
  • Establishment of a meeting schedule
2nd Year Program of NMI
• Program Announcement 02-028
• http://www.nsf.gov/home/cise/
• Proposals due March 1, 2002
NMI Emphasis Areas for 2nd Year Program
• Distributed authorization and management tools
• Resource schedulers and reservation, especially across multiple domains
• Resource accounting and monitoring
• Predictive services, including Grid and network prediction tools (an illustrative forecasting sketch follows this list)
• Directories and certificate authorities
• Peer-to-peer middleware resources
• SIP-enabling collaboration tools
• Mobility: public-space 802.1X authentication infrastructure and performance improvement tools
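The sketch below is not the Network Weather Service or any specific NMI component; it only illustrates the kind of lightweight, one-step-ahead forecasting that Grid and network prediction tools perform, using simple exponential smoothing over recent bandwidth probes. The measurement values and smoothing factor are made up for the example.

```python
# Illustrative sketch of a simple network-performance predictor
# (exponential smoothing); values and alpha are hypothetical.
def smooth_forecast(measurements, alpha=0.3):
    """Return a one-step-ahead forecast from a series of measurements."""
    forecast = measurements[0]
    for value in measurements[1:]:
        forecast = alpha * value + (1 - alpha) * forecast
    return forecast

if __name__ == "__main__":
    bandwidth_mbps = [88.0, 91.5, 79.2, 85.7, 90.1]  # hypothetical probe results
    print("predicted next bandwidth: %.1f Mb/s" % smooth_forecast(bandwidth_mbps))
```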
Additional Areas of Emphasis
• Security for operating systems and middleware software
• User privacy management tools
• User data integrity and authentication (a generic integrity-check sketch follows this list)
• Authorization tools
• Peer-to-peer middleware resources
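As a generic illustration of the "user data integrity and authentication" area, the following sketch computes and verifies a keyed message digest (HMAC) with the Python standard library. It is not drawn from any particular NMI component, and the shared key and message are hypothetical; real middleware would typically rely on certificate-based credentials rather than a pre-shared key.

```python
# Generic sketch: message integrity and origin authentication with an HMAC.
# Key and message are hypothetical.
import hashlib
import hmac

def sign(key, message):
    return hmac.new(key, message, hashlib.sha256).hexdigest()

def verify(key, message, tag):
    return hmac.compare_digest(sign(key, message), tag)

if __name__ == "__main__":
    key = b"shared-secret-established-out-of-band"
    message = b"dataset=run01.dat;bytes=1048576"
    tag = sign(key, message)
    print("verified:", verify(key, message, tag))
```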