Alliance Vision of the Future (9/2000)
• Commodity cluster hardware
  • Intel IA-32/64 processors and nodes
  • desktop to teraflop scaling
  • standard storage and interconnect
• Community open source software
  • “best of breed” software and tools
  • leveraged by vendor participation
  • integration, support, and software augmentation
  • community application codes
  • national/international user communities
• Community expansion via commodity infrastructure
  • “turnkey packages” and infrastructure
  • vendor partnering and “cluster/grid in a box”
  • emerging sensors and networks
Alliance Technology Roadmap (9/2000)
[Diagram: layered architecture rising from Networking, Devices and Systems through the Grid Fabric (resource dependent), Grid Services (resource independent), Computational Services and Access Services & Technology, the Computational Grid and Access Grid, up to Twenty-First Century Applications and Science Portals & Workbenches, with performance increasing up the stack]
• Capability computing
  • attack complex problems
  • move from rationing to computing on demand
• Building the Grid
  • eliminate distance for virtual teams
  • convert computing into a utility
• Science portals
  • bring commercial web technology to scientists
  • build electronic research communities
• Clusters as the unifying mechanism
  • user wants and review recommendations
TODAY: ??? In a Box
• Cluster in a Box
• Grid in a Box
• Display Wall in a Box
• Access Grid in a Box
• Applications in a Box
Focus on Linux Clusters
Cluster in a Box (CiB)
The CiB effort builds on the intense interest among the scientific, commercial, and computing research communities in the evolution of cluster computing built from commodity components and open source software. The goals are to simplify installation of cluster software that is compatible with that deployed on production terascale clusters and to provide a software base for the other packaging efforts, notably Grid in a Box and Display Wall in a Box.
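Commodity cluster stacks of this kind center on MPI; as a purely illustrative check of the sort a CiB install might enable, here is a minimal sketch assuming an MPI implementation plus the mpi4py bindings (neither is named in the CiB plan itself):

```python
# Minimal MPI check: each process reports its rank and host name.
# Assumes an MPI implementation (e.g., MPICH) and mpi4py are installed;
# neither is specified by the CiB plan -- this is illustrative only.
from mpi4py import MPI
import socket

comm = MPI.COMM_WORLD
rank = comm.Get_rank()   # this process's rank within the job
size = comm.Get_size()   # total number of MPI processes

print(f"rank {rank} of {size} on {socket.gethostname()}")

# Launch with, e.g.:  mpirun -np 8 python mpi_check.py
```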
Grid in a Box (GiB)
The GiB effort builds on the CiB by adding the requisite software and documentation to “Grid enable” clusters. The GiB goal is to enable users to rapidly build and deploy functional Grids for application development and execution, as well as to facilitate demonstrations, training, and continued Grid service research. (bw – I hope this includes Linux desktops and portables)
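Grid-enabling a cluster in this period generally meant the Globus Toolkit; as an illustrative sketch only (the slide does not enumerate GiB's software), the snippet below launches a trivial remote job through the GT2 command-line tool globus-job-run, with a hypothetical gatekeeper host:

```python
# Illustrative only: run a trivial job on a Grid-enabled cluster via the
# Globus Toolkit 2 command-line tool globus-job-run. The contact string
# "cluster.example.edu" is hypothetical; GiB's actual software stack is
# not enumerated on this slide.
import subprocess

contact = "cluster.example.edu"   # hypothetical gatekeeper contact
result = subprocess.run(
    ["globus-job-run", contact, "/bin/hostname"],
    capture_output=True, text=True, check=True,  # raises if the job fails
)
print("job ran on:", result.stdout.strip())
```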
Display Wall in a Box (DWiB)
The DWiB effort builds on the CiB software to simplify the creation of high-end display capabilities atop Linux PC clusters and to enable their use for high-end information presentation and visual data exploration. (bw – the new display wall at NCSA is powered by PCs and 40 projectors in an 8-wide by 5-tall configuration)
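The core bookkeeping behind such a wall is mapping one logical image onto a grid of projectors; here is a minimal sketch for the 8-wide by 5-tall configuration mentioned above, with the per-projector resolution assumed for illustration:

```python
# Compute the viewport each projector renders in an 8-wide by 5-tall
# tiled display wall like the NCSA wall described above. The per-tile
# resolution (1024x768) is assumed; the slide does not specify it.
TILES_X, TILES_Y = 8, 5               # 40 projectors total
TILE_W, TILE_H = 1024, 768            # hypothetical projector resolution

# Map (column, row) -> (x, y, width, height) in wall pixel coordinates;
# each cluster node drives one projector and renders only its rectangle.
viewports = {
    (col, row): (col * TILE_W, row * TILE_H, TILE_W, TILE_H)
    for row in range(TILES_Y)
    for col in range(TILES_X)
}

print(len(viewports), "tiles; wall is",
      TILES_X * TILE_W, "x", TILES_Y * TILE_H, "pixels")
print("tile (3, 2) covers", viewports[(3, 2)])
```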
Access Grid in a Box (AGiB)
The AGiB effort builds on Access Grid work over the past few years aimed at assembling the network, computing, and interaction resources that support group-to-group human interaction across the Grid. It includes large-format multimedia displays, presentation and interactive software environments, and interfaces to Grid middleware and remote visualization environments. (bw – over 40 sites – there is interest in inexpensive systems that are “self maintaining” and easy to set up and use by faculty, students, and researchers, with the intent of deploying to hundreds of university departments – e.g. geoscience departments – around the country)
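Access Grid media streams travel over IP multicast; the sketch below joins a multicast group and reports incoming packets, with the group address and port chosen arbitrarily for illustration (they are not real AG venue addresses):

```python
# Join an IP multicast group and print incoming packets -- the transport
# pattern underlying Access Grid media streams. The group address and
# port below are arbitrary illustrative values, not real AG venues.
import socket
import struct

GROUP, PORT = "224.2.0.1", 50000      # hypothetical multicast group/port

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# Tell the kernel to join the group on the default interface.
mreq = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

data, sender = sock.recvfrom(65536)   # blocks until a packet arrives
print(f"received {len(data)} bytes from {sender}")
```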
Application in a Box (AiB)
The AiB effort will build on the other enabling box technologies to provide science and engineering community codes for deployment on local Linux clusters that interface with the Distributed Terascale Facility (in review). (bw – the link to Alliance portal efforts appears to fit in this area)
SDSC Focus
• Storage Resource Broker and Distributed Data Management
  • Metadata serving
  • Data files
  • XML (MIX) wrapper mediator system
• Scalable Visualization Toolkit (VisTools)
  • Focus on large data sets
  • VISTA volume renderer
  • Volume scene graph for grouping and operating on large data sets (see the sketch below)
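A volume scene graph is, at heart, a tree whose group nodes let one operation apply to many volume bricks at once; the following conceptual sketch illustrates that idea only (the class and method names are hypothetical, not the VisTools API):

```python
# Conceptual sketch of a volume scene graph: group nodes let one
# operation (here, a visibility toggle) apply to many volume bricks at
# once. Class and method names are hypothetical, not the VisTools API.
class VolumeNode:
    """A leaf holding one brick of a large volumetric data set."""
    def __init__(self, name):
        self.name = name
        self.visible = True

    def set_visible(self, flag):
        self.visible = flag

class GroupNode:
    """An interior node that applies operations to all its children."""
    def __init__(self, name, children):
        self.name = name
        self.children = list(children)

    def set_visible(self, flag):
        for child in self.children:
            child.set_visible(flag)      # recurse through the subtree

# Group 64 bricks of one time step, then hide them with a single call.
bricks = [VolumeNode(f"brick_{i}") for i in range(64)]
timestep = GroupNode("timestep_0", bricks)
timestep.set_visible(False)
```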
Earth System Modeling Framework
• NASA CAN issued 9/2000
• The CMIWG (Common Modeling Infrastructure Working Group), consisting of Earth scientists and computational experts from academia, industry, and Federal research labs, has over the past few years emphasized the value of collaboratively developing a robust, flexible set of software tools to enhance ease of use, performance portability, interoperability, and reuse in Earth system models (see the sketch below)
• Three proposals were submitted (NCAR, DOE Argonne, DOE LANL, MIT, NASA, U. of Michigan, and NOAA GFDL and NCEP), addressing:
  • core development
  • implementing modeling applications
  • implementing data assimilation applications
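The kind of shared infrastructure CMIWG advocates typically standardizes a component life cycle so that models, couplers, and data assimilation systems interoperate; here is a hedged sketch of that initialize/run/finalize pattern (all names are illustrative, not the framework's eventual API):

```python
# Illustrative component life cycle of the initialize/run/finalize form
# common to coupled Earth system frameworks. Names are hypothetical and
# do not reflect the eventual framework API.
class ModelComponent:
    """Base class every model or coupler component would implement."""
    def initialize(self, config):
        raise NotImplementedError

    def run(self, n_steps):
        raise NotImplementedError

    def finalize(self):
        raise NotImplementedError

class ToyAtmosphere(ModelComponent):
    def initialize(self, config):
        self.dt = config.get("dt_seconds", 600)
        self.time = 0

    def run(self, n_steps):
        self.time += n_steps * self.dt   # stand-in for real dynamics

    def finalize(self):
        print(f"atmosphere stopped at t = {self.time} s")

# A driver can treat every component uniformly through this interface.
atm = ToyAtmosphere()
atm.initialize({"dt_seconds": 300})
atm.run(12)
atm.finalize()
```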
NCAR ITR – A Knowledge Environment for the Geosciences (KEG)
The architecture involves three interacting layers: (1) the Knowledge Portal (KP); (2) the Knowledge Framework (KF); and (3) the Sources and Repository (SR).
The Repository layer comprises observational data, a linked hierarchy of Earth system models, results from model experiments, and other sources of information. As with all other parts of the KEG, the repository will be fully scalable. Besides the current generation of climate and mesoscale models available to the proposing team, the repository will, for example, accommodate a multiscale geophysical model with a nonhydrostatic atmosphere, interactive chemistry, and cloud microphysical processes.
The Knowledge Portal will provide user access to the KEG by means ranging from an individual browser to distributed and collaborative group interaction software systems. The complement of tools in the Portal ranges from conventional packages for data analysis and software development to innovative systems for collaboration among physically separated groups and for access to massive, distributed databases.
Finally, the Knowledge Framework is a set of layered services that supports interaction among the human activities and the elements in the Repository. It is the middleware for knowledge encapsulation, discovery, recording, and sharing for large-scale distributed and collaborative research work. Each element in the architecture is designed to be highly distributed and scalable.
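To make the layering concrete, here is a minimal hypothetical sketch of a query flowing Portal to Framework to Repository; every class and method name is invented for illustration, since the proposal defines the layers only conceptually:

```python
# Hypothetical sketch of the KEG layering: the Portal never touches the
# Repository directly; all access is mediated by Framework services.
# Every name here is illustrative.
class Repository:
    """SR layer: observational data, models, and experiment results."""
    def __init__(self):
        self.holdings = {"obs/t2m_2000": "surface temperature, year 2000"}

    def fetch(self, key):
        return self.holdings[key]

class KnowledgeFramework:
    """KF layer: middleware for discovery, recording, and sharing."""
    def __init__(self, repo):
        self.repo = repo

    def discover(self, term):
        return [k for k in self.repo.holdings if term in k]

    def retrieve(self, key):
        return self.repo.fetch(key)

class KnowledgePortal:
    """KP layer: the user-facing entry point (browser, group tools)."""
    def __init__(self, kf):
        self.kf = kf

    def search(self, term):
        return {k: self.kf.retrieve(k) for k in self.kf.discover(term)}

portal = KnowledgePortal(KnowledgeFramework(Repository()))
print(portal.search("t2m"))
```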
WRF Developments
• Beta 2 release in May 2001
• Running marginally on Posic
• Exploring modified computational algorithms to optimize cache and memory usage
• Rebuilding for billion-zone calculations
• Initial HDF5 implementation in progress
WRF Objectives by 10/1/01
• HDF5 implemented and debugged under the WRF I/O API (see the sketch below)
• WRF ported to Linux, including Posic (IA-32) and Platinum (IA-64)
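As a hedged illustration of the storage target only (the actual WRF I/O API is Fortran, and every dataset and attribute name below is hypothetical), writing one gridded field to HDF5 with Python's h5py:

```python
# Write one gridded field to HDF5 -- illustrating the storage target of
# the WRF I/O work, not the WRF I/O API itself (which is Fortran).
# Dataset and attribute names here are hypothetical.
import h5py
import numpy as np

# (levels, south-north, west-east); sizes are arbitrary for the sketch
theta = np.zeros((35, 200, 200), dtype=np.float32)

with h5py.File("wrf_sample.h5", "w") as f:
    dset = f.create_dataset("THETA", data=theta, compression="gzip")
    dset.attrs["units"] = "K"
    dset.attrs["description"] = "potential temperature (illustrative)"
```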