The DAS-3 Project
Vrije Universiteit Amsterdam, Faculty of Sciences
Henri Bal
Distributed ASCI Supercomputer
• Joint infrastructure of the ASCI research school
• Clusters integrated in a single distributed testbed
• Long history and continuity: DAS-1 (1997), DAS-2 (2002), DAS-3 (Oct 2006)
DAS is a Computer Science grid
• Motivation: CS needs its own infrastructure for
• Systems research and experimentation
• Distributed experiments
• Doing many small, interactive experiments
• DAS is simpler and more homogeneous than production grids
• Single operating system
• “A simple grid that works”
Usage of DAS
• ~200 users, 32 Ph.D. theses
• Clear shift of interest: cluster computing → distributed computing → grids and P2P → virtual laboratories
Impact of DAS
• Major incentive for VL-e (20 M€ BSIK funding)
• Virtual Laboratory for e-Science
• Collaboration with the French Grid’5000 project
• Towards a European-scale CS grid?
• Collaboration with SURFnet on DAS-3
• SURFnet provides multiple 10 Gb/s light paths
DAS-3
• 272 AMD Opteron nodes: 792 cores, 1 TB memory
• More heterogeneous:
• 2.2-2.6 GHz single/dual-core nodes
• Myrinet-10G (except Delft)
• Gigabit Ethernet
[Site diagram: VU (85 nodes), TU Delft (68), Leiden (32), UvA/VL-e (40), UvA/MultimediaN (46), interconnected by SURFnet6 10 Gb/s lambdas]
Status
• Timeline:
• Sep. 04: proposal
• Apr. 05: NWO/NCF funding
• Dec. 05: European tender (with TUD/GIS, Stratix)
• Apr. 06: selected ClusterVision
• Oct. 06: operational
• SURFnet6 connection expected shortly
• Multiple 10 Gb/s dedicated lambdas
• First local Myrinet measurements (a benchmark sketch follows below):
• 2.6 μsec 1-way null-latency
• 950 MB/sec throughput
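The slides do not show how these numbers were obtained. The following is a minimal MPI ping-pong sketch of the kind commonly used to measure 1-way null-latency and throughput between two nodes; the iteration count and 1 MB message size are illustrative choices, not taken from the actual DAS-3 benchmarks.

```c
/* Minimal MPI ping-pong sketch (not the DAS-3 benchmark itself).
 * Compile with: mpicc pingpong.c -o pingpong
 * Run with two processes on two different nodes. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define ITERS 1000
#define MSG_SIZE (1 << 20)   /* 1 MB messages for the throughput test */

int main(int argc, char **argv)
{
    int rank, size, i;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if (size < 2) {
        if (rank == 0) fprintf(stderr, "run with at least 2 processes\n");
        MPI_Abort(MPI_COMM_WORLD, 1);
    }
    char *buf = malloc(MSG_SIZE);

    /* Null-latency: time ITERS round trips of 0-byte messages;
     * the 1-way latency is half the average round-trip time. */
    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    for (i = 0; i < ITERS; i++) {
        if (rank == 0) {
            MPI_Send(buf, 0, MPI_BYTE, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, 0, MPI_BYTE, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf, 0, MPI_BYTE, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Send(buf, 0, MPI_BYTE, 0, 0, MPI_COMM_WORLD);
        }
    }
    double rtt = (MPI_Wtime() - t0) / ITERS;
    if (rank == 0)
        printf("1-way null-latency: %.2f usec\n", rtt / 2 * 1e6);

    /* Throughput: same ping-pong pattern with 1 MB messages. */
    MPI_Barrier(MPI_COMM_WORLD);
    t0 = MPI_Wtime();
    for (i = 0; i < ITERS; i++) {
        if (rank == 0) {
            MPI_Send(buf, MSG_SIZE, MPI_BYTE, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, MSG_SIZE, MPI_BYTE, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf, MSG_SIZE, MPI_BYTE, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Send(buf, MSG_SIZE, MPI_BYTE, 0, 0, MPI_COMM_WORLD);
        }
    }
    double per_msg = (MPI_Wtime() - t0) / ITERS / 2;  /* one-way time per message */
    if (rank == 0)
        printf("throughput: %.0f MB/sec\n", MSG_SIZE / per_msg / 1e6);

    free(buf);
    MPI_Finalize();
    return 0;
}
```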
Projects using DAS-3
• VL-e: grid computing, scheduling, workflow, PSE, visualization
• MultimediaN: searching, classifying multimedia data
• NWO i-Science (GLANCE, VIEW, STARE): StarPlane, JADE-MM, GUARD-G, VEARD, GRAPE Grid, SCARIe, AstroStream
• NWO Computational Life Sciences: 3D-RegNet, CellMath, MesoScale
• Open competition (many)
• NCF projects (off-peak hours)
StarPlane
• Key idea:
• Applications can dynamically allocate light paths
• Applications can change the topology of the wide-area network, possibly even at sub-second timescale
• VU (Bal, Bos, Maassen) and UvA (de Laat, Grosso, Xu, Velders)
[Diagram: CPU clusters attached to routers (R) surrounding a network operations centre (NOC)]
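StarPlane's actual programming interface is not given in these slides. Purely as a hypothetical sketch of the key idea, the stubbed function below (sp_request_lightpath is an invented name, not a real StarPlane or SURFnet API) shows how an application might reconfigure the wide-area topology between phases of a computation.

```c
/* Hypothetical sketch only: illustrates an application reconfiguring
 * wide-area light paths between phases. The API is invented. */
#include <stdio.h>

/* sp_request_lightpath() is a placeholder stub, not a real interface. */
static int sp_request_lightpath(const char *src, const char *dst, int gbps)
{
    printf("requesting %d Gb/s light path %s -> %s\n", gbps, src, dst);
    return 0;  /* pretend the network controller granted the path */
}

int main(void)
{
    /* Phase 1: bulk data staging wants a fat pipe to the storage site. */
    sp_request_lightpath("VU", "UvA", 10);
    /* ... transfer input data ... */

    /* Phase 2: a tightly coupled computation prefers another topology. */
    sp_request_lightpath("VU", "Leiden", 10);
    /* ... run the distributed computation ... */
    return 0;
}
```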
StarPlane (continued)
• Challenge: how to integrate such a network infrastructure with (e-Science) applications?
• Distributed supercomputing
• Remote data access
• Visualization
[Diagram: CPU and data sites coupled through the network]
Jade-MM
• Large-scale multimedia content analysis on grids
• Problem: >30 CPU hours of processing per hour of video (a back-of-envelope sketch follows below)
• Beeld & Geluid: 20,000 hours of TV broadcasts per year
• London Underground: >120,000 years of processing for many tens of thousands of CCTV cameras
• Data dependencies at all levels of granularity
• UvA (Smeulders, Seinstra) + VU (Bal, Kielmann, Koole, van der Mei)
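A back-of-envelope check of these figures, using the slide's stated >30 CPU hours per hour of video; the assumption of 10,000 cameras recording around the clock is hypothetical, chosen only to show the order of magnitude.

```c
/* Back-of-envelope check of the Jade-MM workload figures. */
#include <stdio.h>

int main(void)
{
    const double cpu_hours_per_video_hour = 30.0;   /* from the slide */
    const double hours_per_year = 24.0 * 365.0;

    /* Beeld & Geluid: 20,000 broadcast hours per year (from the slide). */
    double bg_cpu_hours = 20000.0 * cpu_hours_per_video_hour;
    printf("Beeld&Geluid: %.0f CPU hours/year (~%.0f CPU years)\n",
           bg_cpu_hours, bg_cpu_hours / hours_per_year);

    /* CCTV: hypothetically 10,000 cameras recording around the clock
     * for one year (camera count assumed, not from the slide). */
    double cctv_video_hours = 10000.0 * hours_per_year;
    double cctv_cpu_years = cctv_video_hours * cpu_hours_per_video_hour
                            / hours_per_year;
    printf("CCTV: %.0f CPU years per year of footage\n", cctv_cpu_years);
    return 0;
}
```

Even this conservative camera count yields hundreds of thousands of CPU years per year of footage, consistent with the >120,000 years the slide quotes.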
GUARD-G
• How to turn grids into a predictable utility for computing (much like the telephone system)?
• Problems:
• Predictability of workloads
• Predictability of system availability (grids are faulty!)
• Allocation of light paths is very useful here
• TU Delft (Epema) + Leiden (Wolters)
Summary
• DAS has a major impact on experimental Computer Science research
• It has attracted a large user base
• DAS-3 provides:
• State-of-the-art CPUs: 64-bit (dual-core)
• High-speed local interconnect (Myrinet-10G)
• A flexible optical wide-area network
More info: http://www.cs.vu.nl/das3/
DAS-3 networks (vrije Universiteit)
[Network diagram. Components: compute nodes (85); Nortel 5530 + 3 × 5510 Ethernet switch; Myri-10G switch with 10 Gb/s Myrinet and 10 Gb/s Ethernet blades; headnode (10 TB mass storage); Nortel OME 6500 with DWDM blade connecting to SURFnet6. Links: 1 Gb/s Ethernet to the compute nodes (85x), 10 Gb/s Myrinet to the compute nodes (85x), 10 Gb/s Ethernet fiber (8x), 1 or 10 Gb/s campus uplink, 80 Gb/s DWDM to SURFnet6]