
CERN local High Availability solutions and experiences


Presentation Transcript


  1. CERN local High Availability solutions and experiences
     Thorsten Kleinwort, CERN IT/FIO
     WLCG Tier 2 workshop, CERN, 16.06.2006

  2. Introduction
     • Different h/w used for GRID services
     • Various techniques & first experiences & recommendations

  3. GRID Service issues
     • Most GRID services do not have built-in failover functionality
     • Stateful servers: information lost on crash
     • Domino effects on failures
     • Scalability issues: e.g. >1 CE

  4. Different approaches
     Server hardware setup:
     • Cheap 'off the shelf' h/w is sufficient only for batch nodes (WN)
     • Improve reliability (at higher cost):
       • Dual power supply (and test that it works!)
       • UPS
       • RAID on system/data disks (hot swappable)
       • Remote console/BIOS access
       • Fiber Channel disk infrastructure
     • CERN: 'Mid-Range-(Disk-)Server'

  5. [image-only slide]

  6. Various Techniques
     Where possible, increase the number of 'servers' to improve the 'service':
     • WN, UI (trivial)
     • BDII: possible; state information is very volatile and re-queried periodically
     • CE, RB, …: more difficult, but possible for scalability

  7. Load Balancing
     For multiple servers, use DNS aliases/load balancing to select the right one:
     • Simple DNS aliasing:
       • For selecting the master or slave server
       • Useful for scheduled interventions
       • Dedicated (VO-specific) host names pointing to the same machine (e.g. RB)
     • Pure DNS round robin (no server check), configurable in DNS: to equally balance load
     • Load Balancing Service, based on server load/availability/checks: uses monitoring information to determine the best machine(s)
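
To make the "pure DNS round robin" bullet concrete, here is a minimal Python sketch of what it looks like from the client side: one lookup of the alias returns several A records, and successive lookups spread clients across the servers. LXPLUS.CERN.CH is used only because the next slide uses it as an example; any round-robin alias behaves the same way.

```python
#!/usr/bin/env python3
"""Client-side view of a pure DNS round-robin alias."""

import socket

def resolve_all(alias: str) -> list[str]:
    """Return every address currently published under the alias."""
    _canonical, _aliases, addresses = socket.gethostbyname_ex(alias)
    return addresses

if __name__ == "__main__":
    try:
        print(resolve_all("lxplus.cern.ch"))
    except socket.gaierror as exc:
        print("lookup failed:", exc)
```

Because plain round robin performs no server check, a dead node stays in the answers until it is removed by hand, which is exactly what the monitoring-based Load Balancing Service in the last bullet avoids.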

  8. DNS Load Balancing
     [diagram] The LB server collects SNMP/Lemon monitoring data from the LXPLUSXXX nodes, selects the least "loaded" ones and updates the LXPLUS alias (LXPLUS.CERN.CH) in DNS.
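
A rough sketch of the selection step in the diagram above: take a monitoring snapshot, pick the least "loaded" candidates, and hand them to whatever updates the alias in DNS. The metric values, node names and the update_dns_alias() helper are invented for illustration; the real service works from SNMP/Lemon measurements and CERN's DNS update mechanism.

```python
#!/usr/bin/env python3
"""Illustrative selection step of a monitoring-based DNS load balancer."""

# Example monitoring snapshot: node -> load metric (lower is better).
MONITORING_SNAPSHOT = {
    "lxplus101.example.org": 0.42,
    "lxplus102.example.org": 0.13,
    "lxplus103.example.org": 0.97,
    "lxplus104.example.org": 0.08,
}

def least_loaded(metrics: dict[str, float], n: int = 2) -> list[str]:
    """Return the n nodes with the lowest load metric."""
    return sorted(metrics, key=metrics.get)[:n]

def update_dns_alias(alias: str, nodes: list[str]) -> None:
    """Placeholder for the dynamic DNS update of the alias."""
    print(f"{alias} -> {', '.join(nodes)}")

if __name__ == "__main__":
    update_dns_alias("lxplus.example.org", least_loaded(MONITORING_SNAPSHOT))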

  9. Load balancing & scaling
     Publish several (distinguished) servers for the same site to distribute load:
     • Can also be done for stateful services, because clients stick to the selected server
     • Helps to reduce high load (CE, RB)
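
The "clients stick to the selected server" argument can be illustrated with a short sketch: the client picks one of the published servers once and then sends every request of the session to that same host, so server-side state is never split across machines. The server names and the request() call are made-up placeholders.

```python
#!/usr/bin/env python3
"""Why publishing several servers also works for stateful services:
each session sticks to the server it selected."""

import random

PUBLISHED_SERVERS = ["rb101.example.org", "rb102.example.org"]  # hypothetical

class Session:
    """All requests of one stateful session go to the same server."""

    def __init__(self) -> None:
        self.server = random.choice(PUBLISHED_SERVERS)  # chosen once

    def request(self, what: str) -> None:
        print(f"{what} -> {self.server}")

if __name__ == "__main__":
    session = Session()
    session.request("submit job")
    session.request("query job status")   # same server, state still there
```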

  10. Use existing 'Services'
     • All GRID applications that support ORACLE as a data backend (will) use the existing ORACLE Service at CERN for state information
     • Allows for stateless and therefore load-balanced applications
     • The ORACLE solution is based on RAC servers with FC-attached disks
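
A minimal sketch of why an external database backend makes the front end stateless and therefore load-balanceable: every request reads and writes the shared store, so any instance behind the alias can serve it. Here sqlite3 stands in for the shared ORACLE/RAC backend purely so the example runs on its own, and the table layout is invented.

```python
#!/usr/bin/env python3
"""State lives in the shared database, not in the front-end process."""

import sqlite3

# In production this would be a connection to the shared ORACLE service;
# an in-memory SQLite database keeps the sketch self-contained.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE transfer_state (job_id TEXT PRIMARY KEY, status TEXT)")

def handle_submit(job_id: str) -> None:
    """Any front-end instance can record a new job."""
    db.execute("INSERT INTO transfer_state VALUES (?, ?)", (job_id, "queued"))
    db.commit()

def handle_status(job_id: str) -> str:
    """Any other instance can answer the status query afterwards."""
    row = db.execute(
        "SELECT status FROM transfer_state WHERE job_id = ?", (job_id,)
    ).fetchone()
    return row[0] if row else "unknown"

if __name__ == "__main__":
    handle_submit("job-42")
    print(handle_status("job-42"))   # prints 'queued'
```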

  11. High Availability Linux
     • Add-on to standard Linux
     • Switches the IP address between two servers:
       • If one server crashes
       • On request
     • The machines monitor each other
     • To further increase high availability: state information can be kept on (shared) FC

  12. HA Linux + FC disk
     [diagram] Two servers behind the SERVICE.CERN.CH alias monitor each other via Heartbeat; data is kept on FC disks attached through redundant FC switches, so there is no single point of failure.
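
To illustrate what HA Linux/Heartbeat does in this setup, here is a heavily simplified standby-node loop: ping the active node and, after a few missed heartbeats, bring the service IP up locally. The hostname, address and interface are made-up examples; real Heartbeat does considerably more (resource scripts, gratuitous ARP, protection against split brain).

```python
#!/usr/bin/env python3
"""Simplified standby loop in the spirit of Heartbeat's IP takeover."""

import subprocess
import time

PEER = "service-node-1.example.org"   # hypothetical active node
SERVICE_IP = "192.0.2.10/24"          # hypothetical service address
INTERFACE = "eth0"
MISSED_BEATS_LIMIT = 3

def peer_alive() -> bool:
    """One ICMP echo request with a 1 s timeout (Linux ping syntax)."""
    result = subprocess.run(
        ["ping", "-c", "1", "-W", "1", PEER],
        stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0

def take_over_service_ip() -> None:
    """Bring up the service address locally (requires root)."""
    subprocess.run(["ip", "addr", "add", SERVICE_IP, "dev", INTERFACE], check=True)
    # Real Heartbeat also sends gratuitous ARP so clients learn the new MAC.

def main() -> None:
    missed = 0
    while True:
        missed = 0 if peer_alive() else missed + 1
        if missed >= MISSED_BEATS_LIMIT:
            take_over_service_ip()
            break
        time.sleep(2)

if __name__ == "__main__":
    main()
```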

  13. WMS: CE & RB
     CE & RB are stateful servers:
     • Load balancing only for scaling:
       • Done for RB and CE
       • CEs and RBs on 'Mid-Range-Server' h/w
     • If one server goes, the information it keeps is lost
     • [Possibly an FC storage backend behind load-balanced, stateless servers]

  14. GRID Services: DMS & IS
     DMS (Data Management System):
     • SE: Storage Element
     • FTS: File Transfer Service
     • LFC: LCG File Catalogue
     IS (Information System):
     • BDII: Berkeley Database Information Index

  15. DMS: SE
     SE: castorgrid:
     • Load-balanced front-end cluster with a CASTOR storage backend
     • 8 simple machines (batch type)
     Here load balancing and failover work (but not for simple GRIDFTP)

  16. DMS: FTS & LFC
     LFC & FTS service:
     • Load-balanced front end
     • ORACLE database backend
     In addition, for the FTS service:
     • VO agent daemons: one 'warm' spare for all, which gets 'hot' when needed
     • Channel agent daemons (master & slave)

  17. FTS: VO agent (failover)
     [diagram] One VO agent node each for ALICE, ATLAS, CMS and LHCb, plus a single SPARE node that can take over for whichever agent fails.
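
The "one warm spare for all VO agents" idea from slide 16 can be sketched as a small watchdog: probe one agent host per VO and, when one stops answering, activate the spare for that VO. The hostnames, the probe port and the activate_spare() step are assumptions for illustration, not the real FTS tooling.

```python
#!/usr/bin/env python3
"""Watchdog sketch for the warm-spare VO agent scheme."""

import socket

VO_AGENTS = {
    "alice": "fts-agent-alice.example.org",
    "atlas": "fts-agent-atlas.example.org",
    "cms":   "fts-agent-cms.example.org",
    "lhcb":  "fts-agent-lhcb.example.org",
}
AGENT_PORT = 8443          # assumed health-check port
SPARE_HOST = "fts-agent-spare.example.org"

def agent_alive(host: str, port: int = AGENT_PORT, timeout: float = 3.0) -> bool:
    """A bare TCP connect stands in for a real agent health check."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def activate_spare(vo: str) -> None:
    """Placeholder: the spare would be (re)configured for this VO and the
    agent daemon started on it, turning it from 'warm' to 'hot'."""
    print(f"Promoting {SPARE_HOST} to run the {vo} VO agent")

def check_all() -> None:
    for vo, host in VO_AGENTS.items():
        if not agent_alive(host):
            activate_spare(vo)
            break  # only one spare, so it can cover one failure at a time

if __name__ == "__main__":
    check_all()
```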

  18. [image-only slide]

  19. IS: BDII
     BDII: load-balanced Mid-Range-Servers:
     • LCG-BDII: top-level BDII
     • PROD-BDII: site BDII
     • In preparation: <VO>-BDII
     • State information lives in a 'volatile' cache and is re-queried within minutes
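
A load balancer in front of several BDIIs only needs a cheap availability probe, since the data itself is volatile and re-queried anyway. The sketch below runs an anonymous ldapsearch against each instance and keeps the ones that answer; the host names are made up, and port 2170 with base "o=grid" are assumed to be the usual BDII settings.

```python
#!/usr/bin/env python3
"""Availability probe for BDII instances behind a load-balanced alias."""

import subprocess

BDII_NODES = ["bdii101.example.org", "bdii102.example.org"]  # hypothetical

def bdii_answers(host: str) -> bool:
    """Anonymous LDAP search (-x) against the assumed BDII endpoint."""
    try:
        result = subprocess.run(
            ["ldapsearch", "-x", "-H", f"ldap://{host}:2170",
             "-b", "o=grid", "-s", "base", "-l", "5"],
            stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL,
        )
    except FileNotFoundError:          # OpenLDAP client tools not installed
        return False
    return result.returncode == 0

if __name__ == "__main__":
    alive = [h for h in BDII_NODES if bdii_answers(h)]
    print("BDII instances to publish under the alias:", alive)
```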

  20. [image-only slide]

  21. Example: BDII
     • 17.11.2005 -> first BDII in production
     • 29.11.2005 -> second BDII in production

  22. AAS: MyProxy
     • MyProxy has a replication function for a slave server that allows read-only proxy retrieval [not used any more]
     • Instead, the slave server regularly gets a 'read-write' copy from the master to allow a DNS switch-over (rsync, every minute)
     • HALinux handles the failover

  23. HA Linux
     [diagram] PX101 and PX102 serve the MYPROXY.CERN.CH alias; Heartbeat switches the alias between them while rsync keeps the proxy repositories in sync.
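
The rsync step in the diagram can be sketched as a minute-by-minute mirror of the master's proxy repository onto the slave, so the slave holds a 'read-write' copy when Heartbeat moves the alias. The master hostname and repository path below are illustrative assumptions; in practice this would run from cron on the slave.

```python
#!/usr/bin/env python3
"""Once-a-minute master-to-slave copy of the MyProxy repository."""

import subprocess
import time

MASTER = "px101.example.org"                 # assumed master node name
PROXY_DIR = "/var/myproxy/"                  # assumed proxy repository path

def sync_from_master() -> None:
    """Mirror the master's proxy repository, deleting stale entries."""
    subprocess.run(
        ["rsync", "-a", "--delete", f"{MASTER}:{PROXY_DIR}", PROXY_DIR],
        check=True,
    )

if __name__ == "__main__":
    while True:          # a cron entry of '* * * * *' would do the same job
        sync_from_master()
        time.sleep(60)
```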

  24. AAS: VOMS
     [diagram] voms102 and voms103 serve the LCG-VOMS alias (switched by HALinux, checked with voms-ping); both use the shared VOMS DB on an ORACLE 10g RAC.

  25. MS: SFT, GRVW, MONB
     • Not seen as critical
     • Nevertheless, the servers are being migrated to 'Mid-Range-Servers'
     • GRVW has a 'hot standby'
     • SFT & MONB are on Mid-Range-Servers
