An overview of the exponential growth of the basic hardware components underlying cyberinfrastructure, its impact on scientific research, and the opportunities and challenges in implementing a robust, scalable cyberinfrastructure.
NSF Cyberinfrastructure and the StarLight Project
William Y. B. Chang, Senior Program Manager, National Science Foundation
Thomas A. DeFanti and Maxine Brown, Principal Investigators, STAR TAP
Information Technology Trends
• Basic hardware exponential growth continues: processor speed, memory density, disk storage density, fiber channel bandwidth (a short growth-projection sketch follows this list)
• Growth in scale: clusters of processors, channels per fiber; distributed computing is a reality
• Thresholds: terabytes of storage are locally affordable and petabytes are feasible; gigaflops of processing in the lab, teraflops on campus, 10+ TF at national centers
• Software: standard protocols for computer communication, standards for data communication and storage, common operating system and programming language paradigms
• Commodity/commercial and scientific/special high-end: major progress in commercial technology (hardware, operating environments, tools), but significant areas will not be addressed without direct action by the scientific community
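As a rough illustration of what "exponential growth continues" implies, here is a minimal Python sketch that projects capacity under an assumed doubling period; the starting value and the 1.5-year doubling time are assumptions for the example, not figures from this presentation.

```python
# Minimal growth-projection sketch. The doubling period and starting capacity
# are illustrative assumptions, not figures quoted in this presentation.

def projected_capacity(base: float, years: float, doubling_period_years: float) -> float:
    """Capacity after `years`, assuming it doubles every `doubling_period_years`."""
    return base * 2 ** (years / doubling_period_years)

if __name__ == "__main__":
    tb_today = 1.0  # assume ~1 TB of affordable local disk today
    for years in (3, 6, 9):
        tb = projected_capacity(tb_today, years, doubling_period_years=1.5)
        print(f"after {years} years: ~{tb:.0f} TB")
```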
Components of CI-enabled science & engineering
[Diagram: humans connect through individual and group interfaces, visualization and collaboration services, over global connectivity, to the physical world and to the components below]
• High-performance computing for modeling, simulation, data processing/mining
• Instruments for observation and characterization
• Facilities for activation, manipulation and construction
• Knowledge management institutions for collection building and curation of data, information, literature, digital objects
Classes of activities
• Layers: applications of information technology to science and engineering research; cyberinfrastructure supporting applications; core technologies incorporated into cyberinfrastructure
• Activities at each layer: research in technologies, systems, and applications; development or acquisition; operations in support of end users
Some roles of (cyber) infrastructure
• Processing, storage, connectivity: performance, sharing, integration, etc.
• Make it easy to develop and deploy new applications: tools, services, application commonality
• Interoperability and extensibility enable future collaboration across disciplines
• Best practices, models, expertise
• The greatest need is software and experienced people
Shared Opportunity & Responsibility
• Only domain science and engineering researchers can create the vision and implement the methodology and process changes
• Information technologists need to be deeply involved: what technology can be, not what it is
• Conduct research to advance the supporting technologies and systems; applications inform research
• Need hybrid teams across disciplines and job types
• Need participation from social scientists in the design and evaluation of CI-enabled work environments
• Shared responsibility requires mutual self-interest
Cyberinfrastructure Opportunities
• NVO and ALMA
• Climate change
• ATLAS and CMS
• LIGO
The number of nation-scale projects is growing rapidly!
Network for Earthquake Engineering Simulation (NEES)
[Diagram: high-performance networks link remote users, instrumented structures and sites, laboratory and field equipment, a curated data repository, leading-edge computation, and global connections]
Futures: The Computing Continuum
Building up and building out across science, policy and education: smart objects, petabyte archives, ubiquitous sensor/actuator networks, national petascale systems, responsive environments, collaboratories, terabit networks, laboratory terascale systems, contextual awareness, a ubiquitous infosphere.
Information Infrastructure is a First-Class Tool for Science Today
What does the Future Look Like?
• Research
• Infrastructure
• People
• Data
• Software
• Hardware
• Instruments
Future infrastructure drives today's research agenda.
More Diversity, New Devices, New Applications
• Sensors
• Personalized medicine
• Wireless networks
• Knowledge from data
• Instruments
[Images: earthquake-damaged bridge; digital sky survey]
Bottom-line Recommendations
• NSF should lead a national INITIATIVE to revolutionize science and engineering research, capitalizing on new computing and communications opportunities
• 21st-century cyberinfrastructure includes supercomputing, massive storage, networking, software, collaboration, visualization, and human resources
• Current centers (NCSA, SDSC, PSC) and other programs are a key resource for the INITIATIVE
• Budget estimate: an incremental $650–960 M/year (continuing)
Need Effective Organizational Structure
An INITIATIVE OFFICE to:
• Initiate competitive, discipline-driven, path-breaking cyberinfrastructure applications within NSF that contribute to the shared goals of the INITIATIVE
• Coordinate policy and allocations across fields and projects, with participants across NSF directorates, federal agencies, and international e-science
• Develop high-quality middleware and other software that is essential and special to scientific research
• Manage individual computational, storage, and networking resources at least 100x larger than individual projects or universities can provide
STAR TAP and StarLight
• STAR TAP: premier operational cross-connect of the world's high-performance academic networks (45–622 Mb/s)
• StarLight: next-generation, cutting-edge optical evolution of STAR TAP connecting experimental networks (1–10 Gb/s)
• Funded by NSF ANIR, EIA and ACIR infrastructure grants to UIC (and NU)
• Substantial support from Argonne MCS
Who is StarLight?
StarLight is jointly managed and engineered by:
• International Center for Advanced Internet Research (iCAIR), Northwestern University: Joe Mambretti, David Carr and Tim Ward
• Electronic Visualization Laboratory (EVL), University of Illinois at Chicago: Tom DeFanti, Maxine Brown, Alan Verlo, Jason Leigh
• Mathematics and Computer Science Division (MCS), Argonne National Laboratory: Linda Winkler, Bill Nickless, Caren Litvanyi, Rick Stevens and Charlie Catlett
What is StarLight?
StarLight is an experimental optical infrastructure and proving ground for network services optimized for high-performance applications.
[Photos: Abbott Hall, on Northwestern University's downtown Chicago campus; view from StarLight]
StarLight Infrastructure
StarLight is a large, research-friendly co-location facility with space, power and fiber, made available to university and national/international network collaborators as a point of presence in Chicago.
StarLight Infrastructure
StarLight is a production GigE and trial 10GigE switch/router facility for high-performance access to participating networks.
StarLight is Operational: Equipment at StarLight
Equipment installed:
• Cisco 6509 with GigE
• IPv6 router
• Juniper M10 (GigE and OC-12 interfaces)
• Cisco LS1010 with OC-12 interfaces
• Data-mining cluster with GigE NICs
• Visualization/video server cluster (on order)
• SURFnet's 12000 GSR
• Multiple vendors for 1GigE, 10GigE, DWDM and optical switching/routing in the future
TeraGrid @ StarLight
TeraGrid, an NSF-funded Major Research Equipment initiative, has its Illinois hub located at StarLight.
Commercial Providers @ StarLight
Coming soon: Level(3), SBC/Ameritech, Qwest, AT&T and AT&T Broadband, Global Crossing
USA Networks @ StarLight
• DoE ESnet
• NASA NREN
• UCAID/Internet2 Abilene
• Metropolitan Research & Education Network (Midwest GigaPoP)
StarLight Engineering Partnerships
• Developers of 6TAP, the IPv6 global testbed, notably ESnet and Viagenie (Canadian), have an IPv6 router installed at StarLight
• NLANR works with STAR TAP on network measurement and web caching; the NLANR AMP (Active Measurement Platform) is located at STAR TAP and the web cache is located at StarLight (a minimal latency-probe sketch follows this list)
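The AMP monitors perform active latency measurements between measurement hosts. As an illustration only (not the AMP software itself), here is a minimal Python sketch of one such active probe, measuring TCP connect round-trip time to a hypothetical target host.

```python
# Illustrative active-measurement probe: measures TCP connect round-trip time.
# The target host below is a placeholder, not an actual AMP measurement node.
import socket
import time

def tcp_connect_rtt(host: str, port: int = 80, timeout: float = 2.0) -> float:
    """Return one TCP connect round-trip time to (host, port), in milliseconds."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass  # connection established; closed on exiting the with-block
    return (time.perf_counter() - start) * 1000.0

if __name__ == "__main__":
    target = "example.org"  # hypothetical target
    samples = [tcp_connect_rtt(target) for _ in range(5)]
    print(f"{target}: min {min(samples):.1f} ms, avg {sum(samples)/len(samples):.1f} ms")
```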
StarLight Middleware Partnerships Forming
• Provide tools and techniques for (university) customer-controlled 10 Gigabit network flows
• Build general control mechanisms from emerging toolkits, such as Globus, for Grid network resource access and allocation services (see the sketch after this list)
• Test a range of new tools, such as GMPLS and OBGP, for designing, configuring and managing optical networks and their components
• Create a new generation of tools for appropriate monitoring and measurements at multiple levels
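To make "customer-controlled network flows" concrete, here is a hedged sketch of an allocation service that grants or rejects bandwidth requests against a fixed link capacity. The `LightpathBroker` class, its methods, and the endpoint names are hypothetical placeholders for illustration; they are not the Globus, GMPLS, or OBGP interfaces themselves.

```python
# Hypothetical sketch of customer-controlled flow allocation; not an actual
# Globus/GMPLS/OBGP interface. Names and capacities are illustrative only.
from dataclasses import dataclass
from typing import List

@dataclass
class FlowRequest:
    src: str          # requesting endpoint, e.g. a university cluster
    dst: str          # destination endpoint, e.g. a repository reached via StarLight
    gbps: float       # requested capacity
    duration_s: int   # requested allocation lifetime

class LightpathBroker:
    """Toy admission-control service in front of a fixed-capacity optical link."""
    def __init__(self, capacity_gbps: float = 10.0):
        self.capacity = capacity_gbps
        self.allocations: List[FlowRequest] = []

    def request(self, req: FlowRequest) -> bool:
        used = sum(r.gbps for r in self.allocations)
        if used + req.gbps > self.capacity:
            return False  # would oversubscribe the link: reject
        # A real service would now configure the switches/routers for this flow.
        self.allocations.append(req)
        return True

if __name__ == "__main__":
    broker = LightpathBroker(capacity_gbps=10.0)
    ok = broker.request(FlowRequest("uic-cluster", "starlight-datastore", gbps=4.0, duration_s=3600))
    print("granted" if ok else "rejected")
```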
iGrid 2002
September 23–26, 2002, Amsterdam Science and Technology Centre (WTCW), The Netherlands
Proposed iGrid 2002 Demonstrations
• To date, 14 countries/locations are proposing 29 demonstrations: Canada, CERN, France, Germany, Greece, Italy, Japan, The Netherlands, Singapore, Spain, Sweden, Taiwan, United Kingdom, United States
• Applications to be demonstrated: art, bioinformatics, chemistry, cosmology, cultural heritage, education, high-definition media streaming, manufacturing, medicine, neuroscience, physics, tele-science
• Grid technologies to be demonstrated: major emphasis on grid middleware, data management grids, data replication grids, visualization grids, data/visualization grids, computational grids, access grids, grid portals