Lessons learned from SETI@home

1. Lessons learned from SETI@home
David P. Anderson
January 31, 2002

2. SETI@home operations
[Architecture diagram showing the data pipeline and server components: data recorder, DLT tapes, splitters, WU storage, data server, screensavers, result queue, redundancy checking, RFI elimination, repeat detection, garbage collector, science DB, master DB, user DB, acct. queue, CGI program, web page generator, web site, tape backup, tape archive/delete.]

  3. Radio SETI projects

4. History and statistics
• Conceived 1995, launched April 1999
• Funding: TPS, DiMI, numerous companies
• 3.5M users (0.5M active), 226 countries
• 40 TB of data recorded and processed
• 25 TeraFLOPS average over the last year
• No ET signals yet, but other results

5. Public-resource computing
• Original: GIMPS, distributed.net
• Commercial: United Devices, Entropia, Porivo, jxtp, Popular Power
• Academic, open-source:
  • Cosm, Folding@home, SETI@home II
• The peer-to-peer paradigm

6. Characterizing SETI@home
• Fixed-rate data processing task
• Low bandwidth/computation ratio (see the sketch below)
• Independent parallelism
• Error tolerance
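The low bandwidth/computation ratio is the property that makes the task distributable at all. A minimal back-of-the-envelope sketch in Python; the work-unit size and FLOP count below are assumed, illustrative figures, not official project numbers:

    # Rough feasibility check: how many bytes must move per FLOP of work?
    # Both constants are assumptions for illustration only.
    WORKUNIT_BYTES = 350 * 1024   # assume a ~350 KB chunk of signal data
    WORKUNIT_FLOPS = 3.0e12       # assume ~3 TFLOPs of analysis per chunk

    bytes_per_flop = WORKUNIT_BYTES / WORKUNIT_FLOPS
    print(f"{bytes_per_flop:.1e} bytes/FLOP")   # ~1.2e-07
    # At a ratio this small, even a dial-up modem can keep a fast CPU busy
    # for hours, which is what makes independent parallelism practical.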

7. Be prepared for crowds
• Server scalability
• Dealing with excess CPU time
• Redundant computing (sketched below)
  • Deals with cheating and malfunctions
• Control by changing computation
• Moore's Law is true (causes the same problems)
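A minimal sketch of the redundancy-checking idea, assuming the server sends each work unit to several hosts and accepts a result only once a quorum of independently computed answers agree. Function and parameter names here are hypothetical:

    from collections import Counter

    def canonical_result(results, quorum=2, places=3):
        """Pick a work unit's accepted result once `quorum` hosts agree.
        Results are rounded before comparison, since correct clients on
        different platforms return slightly different floating point."""
        keys = Counter(tuple(round(x, places) for x in r) for r in results)
        key, count = keys.most_common(1)[0]
        return list(key) if count >= quorum else None  # None: keep waiting

    # Usage: two honest hosts outvote one host returning junk.
    print(canonical_result([[1.0001, 2.0], [1.0002, 2.0], [9.9, 9.9]]))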

8. Network bandwidth costs money
• SSL to campus: 100 Mbps, free, unloaded
• Campus to ISP: 70 Mbps, not free
• First: load limiting at 25 Mbps
• Now: no limit, zero priority
• How to adapt load to capacity?
• What's the break-even point? (1 GB per CPU-day; see the sketch below)
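The slide's 1 GB per CPU-day figure works as a rule of thumb for whether a task is network-bound. A sketch of the arithmetic with assumed SETI@home-like numbers (~350 KB per work unit, ~12 CPU-hours each; both are assumptions):

    BREAK_EVEN_GB_PER_CPU_DAY = 1.0       # rule of thumb from the slide
    unit_gb = 350 / (1024 * 1024)         # assumed ~350 KB per work unit
    units_per_cpu_day = 24 / 12           # assumed ~12 CPU-hours per unit

    demand = unit_gb * units_per_cpu_day
    print(f"{demand:.5f} GB per CPU-day") # ~0.00067: far under break-even,
    # so the donated computation dwarfs the bandwidth cost of feeding it.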

9. How to get and retain users
• Graphics are important
  • But monitors do burn in
• Teams: users recruit other users
• Keep users informed
  • Science news
  • System management news
  • Periodic project emails

10. Reward users
• PDF certificates
• Milestone pages and emails
• Leader boards (overall, country, …)
• Class pages
• Personal signal page

11. Let users express themselves
• User profiles
• Online poll
• Newsgroup (sci.astro.seti)
• Message boards
• Learn about users

12. Users are competitive
• Patched clients, benchmark wars
• Results with no computation
• Intentionally bad results
• Team recruitment by spam
• Sale of accounts on eBay
• Accounting is tricky

13. Anything can be reverse engineered
• Patched versions of the client
  • Efforts at self-checksumming
• Replacement of the FFT routine
  • Bad results
• Digital signing: doesn't work
• Techniques for verifying work (see the sketch below)
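Since the client binary can always be patched, one verification technique (alongside redundancy) is spot-checking: trust incoming results, but recompute a random sample on a trusted server and flag mismatches. A hypothetical sketch; `recompute` stands in for a trusted server-side rerun of the analysis:

    import random

    def spot_check(submissions, recompute, sample_rate=0.01):
        """Re-run ~1% of submitted results on a trusted machine; return
        the users whose answers disagree (cheaters or broken hosts)."""
        suspects = []
        for workunit, result, user in submissions:
            if random.random() < sample_rate and recompute(workunit) != result:
                suspects.append(user)
        return suspects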

14. Users will help if you let them
• Web-site translations
• Add-ons
• Server proxies
• Statistics DB and display
• Beta testers
• Porting
• Open-source development (will use in SETI@home II)

15. Client: mechanism, not policy
• Error handling, versioning
• Load regulation
• Let the server decide (sketched below)
• Reasonable default if no server
• Put in a level of indirection
• Separate control and data
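A sketch of the mechanism-not-policy pattern, assuming a hypothetical JSON policy endpoint: the client implements the knobs, the server chooses their values, and hard-coded defaults apply when no server is reachable:

    import json, urllib.request

    DEFAULTS = {                    # reasonable defaults if no server
        "poll_interval_s": 3600,    # how often to ask for work
        "max_retries": 5,           # error-handling budget
        "min_version": "3.0",       # versioning: oldest client allowed
    }

    def fetch_policy(url="https://example.org/policy"):  # hypothetical URL
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                server_policy = json.load(resp)
            return {**DEFAULTS, **server_policy}   # let the server decide
        except (OSError, ValueError):
            return dict(DEFAULTS)                  # offline: sane fallback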

16. Cross-platform is manageable
• Windows, Mac are hard
• GNU tools and POSIX rule

17. Server reliability/performance
• Hardware
  • Air conditioning, RAID controller
• Software
  • Database server
• Architect for failure
• Develop diagnostic tools

18. What's next for public computing?
• Better handling of large data
• Network scheduling
• Reliable multicast
• Expand computation model
• Multi-application platform
• Economic model
