
Objectivity Production Issues



  1. Objectivity Production Issues • Federation Backup • Lockserver Issues • Federation Maintenance Mode • Autorecovery

  2. Central Boot File Location • The primary boot file for production federations should be kept in the federation directory on the “Lockserver” machine • The boot file is always available for admin and backup tasks typically running on this machine • The boot file will stay consistent with the federation whereas copies (e.g. in AFS) are likely to get out of sync • Since ooams is running on the lockserver machine, remote users may pick up the boot file directly from there. • Please use: OO_FD_BOOT=lockxxxx.cern.ch::/usr/objy/FDNAME/FDNAME.BOOT instead of copied boot files.

  3. Federation Repairs • We finally got DBF • Thanks! • DBF is an Objy-internal database repair tool • display the contents of transaction journals • remove unavailable DBs, containers, pages and objects, and null out any references to them • Luckily not much experience yet! • We have not lost a single federation or database yet! • Used it successfully to zero-out associations into a database file that “disappeared” • Need to start practicing (during Todd’s next visit?) with artificially damaged DBs.

  4. Federation Backup • Which data to back up: • Central Objy Data: Federation File (.FDDB) or any other representation of catalogue and schema • Central Application Data: e.g. the experiment registry (small data amount, frequent writes, always online) • Which data to leave alone: • Raw Data: huge amount, write once, mostly offline on HPSS tapes

  5. How to perform Backups? • Using Objy’s backup programs • Supports backups running concurrently with user transactions • Uses a special federation-wide MROW mode • Stores a consistent snapshot of the complete FD • Supports incremental backups at container level • Requires all database files to be online • Would stage all HPSS tapes :-( • Using simple file-based backup tools • Allows backups of subsets of the federation files • Works only if the FD is in a consistent state, i.e. no other user is modifying the FD during the backup

  6. Perl Script: fdBackup • Assumes Production Setup • Make sure that there are no update transactions • no update locks • no journal files • Create a clone of FD and Boot files • Copy FD files to local backup directory • Install the copy using ooinstallfd • Remove any catalog entries • Test (e.g. ootidy) the empty federation • Notify the file-based backup program (e.g. Legato) • about the successful copy, or about users/hosts holding transactions • Will soon be put into production

  7. Proposed Extension to Objy’s Backup Tool • Add a new backup/restore parameter specifying which DBs should be saved. • All other DBs are considered to be read-only and are ignored by backup and restore • The user takes full responsibility for the consistency of ignored DB files! • If we change any ignored DBs between backup and restore we are on our own! • This is not HEP specific! • A similar feature will be needed for any VLDB using tapes for mass storage

  8. Lockserver Problems • Keep your lockservers alive! • Killing and restarting lockservers won’t work in production • Stopping the lockserver using kill -9 risks a corrupted FD • oolockserver and ooams need to be run on a well-defined account • Check if you can clean up any locks using oocleanup • either oocleanup -local • or oocleanup -deadowner -t <transid> • Report any lockserver problems! • Lockserver core dumps • Locks which cannot be removed using oocleanup • Missing transactions during oocleanup • Leftover journal files • Any other reason to shutdown/restart the lockserver

  9. Unrecoverable Locks • Under some circumstances Objy creates locks which cannot be removed using oocleanup • e.g. oonewfd against a non-writable directory • e.g. ooattach against a corrupted db file • oocleanup fails to remove those locks • Only way out: • Remove locks by hand using Objy-internal tools • Too dangerous to make them publicly available • These problems need to be fixed soon

  10. Leftover Journal Files • oolockserver and oocleanup sometimes disagree about the existence of transactions • Leftover journal files for which no corresponding transaction is known to the lockserver • Recovery was only possible by manually removing the journal files. • BTW: renaming the journal file did not work since oocleanup is quite open when it searches for journal files: LeftJournal-oo_Ceres_65535_shd07_2022_cerescdr_1.JNL. Any file ending in .JNL will be used by oocleanup.

  11. Most Frequent Locking Problem • Two processes running this code concurrently will get into locking problems:
      startUpdate();                    // new transaction
      ooItr(SharedObj) sh_it;
      sh_it.scan(contH);
      while (sh_it.next()) {
        if (sh_it->unused)              // read lock on contH
          sh_it->methodCallingUpdate(); // upgrade to update
        ...
      }
      • Process A gets a read lock. Process B gets a read lock. • A wants to upgrade to update but waits because of B’s read lock. • B wants to upgrade to update but waits because of A’s read lock. • Both fail after their lock waiting period has expired.

  12. Similar Locking Situations • Call a method on an object using a handle and update it:
      startUpdate();                    // new transaction
      if (objH->needsUpdating())        // implicit read lock
        objH.update();                  // upgrade to update
      • Any method call through a handle obtained in an earlier transaction will implicitly open the object and its container in read mode. • Look up an object by name and update it • scope name in the scope of the enclosing container • lookup using an ooMap in the same container:
      startUpdate();
      mapH->lookup(objH, "name", oocUpdate);

  13. Resolution • Containers with objects that are updated from more than one process need special handling: • Obtain any update locks on updated containers as soon as possible. • Keep update transactions involving such containers as short as possible. • Code that handles such objects should • open the involved containers in update mode right away (see the sketch below). • make sure that no other code opens those containers in the same transaction. • Sometimes it is easier/safer to perform the actual update in a separate transaction context (e.g. using ooContext)
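  A minimal sketch of the first recommendation, reusing the names from slide 11; the call contH.open(oocUpdate) stands for whatever your binding uses to take the container update lock up front and is an assumption of this sketch, not a statement about the exact Objy API:
      startUpdate();                    // new transaction
      contH.open(oocUpdate);            // assumed call: take the container update lock up front
      ooItr(SharedObj) sh_it;
      sh_it.scan(contH);                // iteration now runs under the update lock
      while (sh_it.next()) {
        if (sh_it->unused)
          sh_it->methodCallingUpdate(); // no read-to-update upgrade, so no upgrade deadlock
      }
  Because both processes now queue for the update lock at transaction start instead of upgrading later, one of them simply waits for the other rather than both timing out.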

  14. Federation Maintenance • Federation Backup • Federation file with schema & catalogue • Central registry databases • e.g., System.db defining container groups and naming trees • Lockserver Maintenance • Removal of dangling locks • Federation Repairs • Fixing missing or corrupted files or containers • Running repair applications to fix dangling references • These tasks cannot always be performed on a deployed FD with tens or hundreds of (remote) users • Need to foresee a safe way to bring a federation into “maintenance mode” • or a whole lockserver • or a group of lockservers, if we use replication

  15. FD Maintenance Mode • The Goal: a safe way of bringing the FD into “maintenance mode” • Deny the start of new transactions • Wait for a clean end of ongoing transactions • Start the maintenance task • Use of federation/db locks does not help • If there is any other lock, “get FD lock” fails immediately (the FD lock is not queued?) • The FD lock request succeeds only if the admin is the only client (which might never happen...) • Current Procedure: neither safe nor even complete! • “Ask” all clients to disconnect • User id, PID and host may be obtained from the lock table • Kill remaining clients • Loss of valuable process status (e.g. simulation jobs) • No way to remotely terminate processes e.g. on Windows NT or over the WAN • Rollback all update transactions • Increased risk of running into FD corruption • Increased risk of running into application-level inconsistencies • Start the maintenance task

  16. BaBar’s Approach • The BaBar API checks for a special lock file in the file system • If the file is present, the application code stalls on the next transaction start (see the sketch below) • The application sleeps and periodically re-checks until the file has been removed • Disadvantages • Dependency on a common file system • Could probably be implemented using a special container in the FD • Protection is only provided at the BaBar API level • All Objy tools need to be wrapped • “Long” transactions • oobrowse may keep transactions open for days
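  For illustration only, a minimal sketch of such a gate in plain C/C++; the lock file path, the name kMaintenanceLockFile and the safeStartUpdate wrapper are invented for this example, and startUpdate() is the site transaction wrapper used on the earlier slides:
      #include <unistd.h>   // access(), sleep()

      void startUpdate();   // site transaction wrapper (see slides 11-13)

      // Hypothetical location of the maintenance lock file on the shared file system.
      static const char* kMaintenanceLockFile = "/afs/cern.ch/objy/FDNAME/MAINTENANCE.LOCK";

      // Stall before every new transaction while the maintenance lock file exists.
      void safeStartUpdate() {
        while (access(kMaintenanceLockFile, F_OK) == 0)
          sleep(30);        // sleep, then re-check until the file has been removed
        startUpdate();
      }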

  17. A Possible Solution? • Ask Objy to expose the “Master Update Lock” • A special lock exists in the Objy kernel which • is queued • locks out any other update transactions • The lockserver knows about starting update transactions • Using some (not yet existing) API the db-admin could obtain this lock to prevent any new update transactions (a hypothetical sketch follows below) • New update transactions could • either immediately fail with an error code • or wait in the queue until the FD becomes available for update again • would need a change in Objy’s start-update implementation
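  Purely to make the request concrete, a hypothetical sketch of what such an admin API could look like; none of these declarations exist in Objy today, which is exactly the point of the slide:
      // Hypothetical, not an existing Objy call: behaviour of new update
      // transactions while the master update lock is held by the db-admin.
      enum ooMasterLockPolicy { oocFailNewUpdates, oocQueueNewUpdates };

      ooStatus ooAcquireMasterUpdateLock(ooMasterLockPolicy policy); // queues behind running update transactions
      ooStatus ooReleaseMasterUpdateLock();                          // reopen the FD for updates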

  18. Federation Autorecovery • The idea behind auto-recovery • Roll back all open transactions after a lockserver crash using the remaining journal files • The lockserver process spawns a special recovery process (similar to oocleanup) • If this auto-recovery fails then any access to the FD is denied by the lockserver! • Auto-recovery is triggered when the lockserver process “sees” an FDID for the first time • E.g. after oonewfd/oochangefd or ooinstallfd • Every development FD gets autorecovered during its creation

  19. Autorecovery Problems • Our development federations typically reside on shared filesystems like AFS, NT-Domain Servers or Novell • The lockserver process (running as local root) cannot write to files in AFS/NICE. • Autorecovery fails (even if there are no journal files…) • The user who owns the FD cannot run oocleanup either, since the lockserver denies access to the federation • oocleanup -standalone works only if there is no running lockserver • As a workaround we usually run lockservers with the noautorecovery option • This option has been (silently) removed from the V5 NT-port • We now frequently run into autorecovery deadlocks!

  20. Autorecovery & Authentication • The autorecovery concept assumes that the lockserver can write to any DB file in the federation • How will this work once AMS requires e.g. a Kerberos token to authenticate any access to database files? • How will this work with user databases in AFS or NICE as part of the production federation? • Will we have to make all files in a federation writable for a single central account?

  21. Cleanup FDs using Replication • Locks in replicated DBs are propagated to each partition’s lockserver (each lockserver serving a replica of this DB?) • What if an application holding a lock on a replica crashes? • How to run oocleanup? • Just once against one partition boot file? • We saw remaining locks on the other partition’s lockserver. • Repeatedly against all partition boot files? • We ran into problems since running oocleanup on partition A removed journal files which the second oocleanup (on partition B) seemed to need. • More documentation/training needed!

  22. Summary • Need a fix for the “unrecoverable lock” and “leftover journal file” problems. • The possibility to specify an “ignore list” would allow us to exploit the advantages of Objy’s backup tools. • Need a safe way to bring an FD into maintenance mode (e.g. using the Master Update Lock) • The current implementation of auto-recovery imposes constraints on the usability of Objy on shared filesystems and on the authentication model for production federations.
