
WP4 and WP5 for AstroWise

This presentation covers progress on the AstroWise work packages WP4 (parallel processing) and WP5 (data storage): the NCSA In-a-Box initiatives (Cluster-in-a-Box, Display Wall-in-a-Box), current cluster developments at NOVA, Terapix, Capodimonte, and USM, parallel implementation strategies, data storage, and focal points for future work and collaboration.


Presentation Transcript


  1. WP4 and WP5 for AstroWise
     • WP4: Provide parallel processing
     • WP5: Provide data storage
     AstroWise pre kick-off Meeting

  2. Commodity Hardware
     • In-a-Box initiative: NCSA Alliance layered software
       • Cluster-in-a-Box (CiB)
       • Grid-in-a-Box (GiB)
       • Display Wall-in-a-Box (DBox)
       • Access Grid-in-a-Box (AGiB)

  3. Cluster-in-a-Box
     • Builds on OSCAR
     • Simplifies installing and running Linux clusters
     • Compatible with the Alliance's production clusters
     • Software foundation for
       • Grid toolkits
       • Display walls

  4. Display Wall-in-a-Box
     • Tiled display wall
     • WireGL, VNC, NCSA Pixel Blaster
     • Building instructions

  5. Current Developments
     • Ongoing activities
       • NOVA: testbed system
       • Terapix: production cluster
       • Capodimonte: WFI processing system
       • USM: WFI processing system

  6. Current Developments
     • NOVA
       • Leiden has a 4+1 node PIII PC cluster (400 MHz, 15 GB disk, 256 MB RAM, 100 Mb/s) for hands-on experience
       • Leiden will acquire a 16+1 node P4 PC cluster (1.5 GHz, 80 GB disk, 512 MB RAM, switched 1 Gb/s / 100 Mb/s) for hands-on experience
       • The processing cluster can be postponed until later

  7. Current Developments
     • Terapix
       • Driven by spending
       • Concentration on high-bandwidth data I/O
         • 32 bit @ 33 MHz -> 133 MB/s
         • 64 bit @ 66 MHz -> 533 MB/s
         • RAID5 delivers 80 MB/s
       • Nodes
         • 4 dual-SMP AMD nodes, 2 GB RAM, ~1 TB RAID0, 1 Gb/s + 100 Mb/s
         • 1 dual-SMP AMD node, 2 GB RAM, ~1 TB RAID5, 4 x 1 Gb/s + 100 Mb/s
       • No software parallelization: fields are processed in parallel (see the sketch below)
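
Terapix avoids parallelizing the reduction code itself and instead runs one serial reduction per field. A minimal sketch of that per-field job farming in Python, assuming a hypothetical serial pipeline script reduce_field.py and an illustrative field list:

    import subprocess
    from multiprocessing import Pool

    # Field names and reduce_field.py are hypothetical placeholders.
    FIELDS = ["field_001.fits", "field_002.fits", "field_003.fits"]

    def reduce_field(field):
        # Each field is handled by the unmodified serial pipeline in its own process.
        return subprocess.run(["python", "reduce_field.py", field]).returncode

    if __name__ == "__main__":
        # One worker per CPU (or node); the fields themselves are the unit of parallelism.
        with Pool(processes=4) as pool:
            print(pool.map(reduce_field, FIELDS))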

  8. Current Developments
     • Capodimonte
       • Driven by spending
       • Opts for a conventional system
         • 8 dual-SMP PIII nodes, 1 GHz, 40 GB disk, 512 MB RAM, 100 Mb/s
         • 1 dual-SMP PIII node, 1 GHz, ~180 GB RAID, 1 Gb/s
       • Processing examples on the ESO Beowulf system
         • Master bias from 5 raw frames: 68 s
         • Master flat field from 5 dome + 5 sky flats: 390 s
         • Catalog & astrometry on the full cluster: 140 s
         • Catalog & astrometry for a single CCD on one CPU: 88 s (see the speedup estimate below)
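
For orientation, the last two timings imply a rough parallel speedup. The arithmetic below assumes the 8-CCD WFI mosaic, which the slide does not state explicitly:

    # Rough speedup implied by the last two timings (assumes an 8-CCD WFI mosaic).
    ccds = 8
    t_one_ccd_one_cpu = 88    # s, catalog & astrometry, single CCD on one CPU
    t_full_cluster = 140      # s, catalog & astrometry, full field on the cluster

    t_serial_estimate = ccds * t_one_ccd_one_cpu          # ~704 s if done one CCD at a time
    print("estimated speedup ~ %.1fx" % (t_serial_estimate / t_full_cluster))  # ~5x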

  9. Current Developments
     • USM
       • Driven by spending
       • Opts for an off-the-shelf configuration
         • Pay for configuration/installation
         • Pay for maintenance
       • Nodes
         • 8+1 dual-SMP nodes, 4 GB RAM, ~100 GB disk, 1 Gb/s or Myrinet
         • 1.4 TB data storage
         • Front-end user stations

  10. Parallel Implementation
     • Single OS, multiple CPUs
       • MOSIX: fork and forget
       • Does load balancing and adaptive resource allocation
     • Cluster of machines
       • MPI, PVM (message passing)
       • PVFS (Parallel Virtual File System)
       • PBS (Portable Batch System)
       • MAUI (job scheduling)
       • DIY scheduling (Python sockets/pickling; see the sketch below)
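
The DIY scheduling bullet refers to shipping jobs around with Python sockets and pickling. The sketch below is one minimal way that could look; the host name, port, and task payload are illustrative and not taken from the slides:

    import pickle
    import socket

    def send_task(host, port, task):
        """Pickle a task description and ship it to a worker node."""
        with socket.create_connection((host, port)) as conn:
            conn.sendall(pickle.dumps(task))

    def receive_task(port):
        """Accept one pickled task on this node and return it unpacked."""
        with socket.socket() as srv:
            srv.bind(("", port))
            srv.listen(1)
            conn, _ = srv.accept()
            with conn:
                data = b""
                while chunk := conn.recv(4096):
                    data += chunk
                return pickle.loads(data)

    # Example (illustrative): send_task("node01", 5000, {"job": "flatfield", "field": "f123"})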

  11. Parallelization
     • Simple scripting
       • Rendezvous problem
       • Load balancing
       • Data distribution/administration
     • Code level: MPI programming
       • How deep?
       • Loops (see the mpi4py sketch below)
       • Matrix splitting
       • Sparse array coding
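
Loop-level splitting is the shallowest answer to "how deep" to push MPI. The sketch below uses mpi4py, a common Python MPI binding that is not named on the slide, to give each rank a block of the loop indices; the per-element work is a stand-in:

    # Run e.g. with: mpiexec -n 4 python split.py
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    n = 1000
    # Static block decomposition of the loop indices across ranks.
    local = range(rank * n // size, (rank + 1) * n // size)
    local_sum = sum(i * i for i in local)   # stand-in for the per-pixel work

    total = comm.reduce(local_sum, op=MPI.SUM, root=0)
    if rank == 0:
        print("sum of squares:", total)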

  12. Granularity
     • Coarse
       • Large tasks
       • Much computational work
       • Infrequent communication
     • Fine
       • Small tasks
       • Frequent communication
       • Many processes
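
The trade-off can be made concrete: for the same total work, coarse granularity means a few large tasks with infrequent communication, while fine granularity means many small tasks with frequent communication. A toy illustration (the chunk sizes are arbitrary):

    # Same total work, different task granularity.
    work = list(range(10_000))

    def chunks(seq, size):
        """Split the work into tasks of the given size."""
        return [seq[i:i + size] for i in range(0, len(seq), size)]

    coarse = chunks(work, 2_500)   # 4 large tasks   -> infrequent communication
    fine = chunks(work, 50)        # 200 small tasks -> frequent communication
    print(len(coarse), "coarse tasks vs", len(fine), "fine tasks")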

  13. Focal Points
     • Different architectures: compare performance
       • Time to reduce a field/night
       • Quality of calibration
     • Benchmark set of software and data (see the timing sketch below)
     • Share experience (exchange URLs)
       • Hardware
       • Processing
       • Software (parallelization)
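
A shared benchmark set only helps if each site times the same steps in the same way. One possible minimal timing harness is sketched below; run_step and the example call are hypothetical placeholders:

    import time

    def benchmark(name, run_step, repeats=3):
        """Time a reduction step a few times and report the best run."""
        times = []
        for _ in range(repeats):
            t0 = time.perf_counter()
            run_step()
            times.append(time.perf_counter() - t0)
        print("%s: best of %d runs = %.1f s" % (name, repeats, min(times)))

    # Example (hypothetical): benchmark("master bias (5 raws)", lambda: make_master_bias(raws))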

  14. Focal Points
     • Time cost of a unified structure for parallel processing
       • Software work needed to make it parallel (burst data)
       • What the hardware makes possible (# nodes < 8)
     • Evaluate future network capacity
       • Multiplicity
       • FireWire

  15. Focal Points
     • Data storage & Beowulf: are they different?
     • Interaction between processing and mining
     • Who pulls the wagon?
     • T0 + 1Q: design review
     • T0 + 2Q: procurement
