
Database Management with Minimal Downtime






Presentation Transcript


  1. Database Management with Minimal Downtime Dan Foreman BravePoint, Inc Ben Holmes, Eaton Corp

  2. Introduction - Dan Foreman • Progress User since 1984 • Presenter at USA Progress Users Conference 1990-1998 and 2002-2006

  3. Introduction - Dan Foreman Author of: • Progress Performance Tuning Guide • Progress Database Admin Guide • Progress Virtual System Tables • V9 Database Administration Jumpstart • All Books are Searchable Online for Free • ProMonitor - Database Monitoring Tool • Pro Dump & Load - Dump/Load Utility • Progress DBA Resource Kit

  4. Who are You? • Progress Version • Largest Database • Database Operating System

  5. Why Dump/Load (D&L)? • Improve Performance by reducing Table Scatter Factor & Record Fragmentation • Reclaim Space in the Database • Migrate between different Progress Versions (e.g. from V9 to V10) • Migrate between different Platforms (e.g. Windows to HP-UX)

  6. Why D&L? • Change the Database Blocksize (V8.2+) • Change the Records per Block (RPB) • Convert to Multiple Storage Areas (V9+) • Convert to V10 Type 2 Storage Areas • Recover Data from a Corrupted Database

  7. How Often? • Linear method - every X number of months • Subjective method - because it feels like about the right time to do it • Serendipitous method - “When’s the next 3 day weekend?” • Empirical method - predictive - decide when is the best time based upon various statistics • Scatter Factor • Fragmentation Factor • Application and/or Utility Performance

  8. Scatter Factor • How physically close together the Records in a Table are stored • Also known as: Physical Scatter Factor • Obtained from proutil dbanalys/tabanalys • I believe the Progress recommendations are somewhat low (conservative) for most databases; if you follow them, you will spend many weekends doing D&Ls
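
The scatter factor above comes out of the `proutil <db> -C tabanalys` report. As a minimal sketch, the fields of one table row can be pulled apart like this; note that the column layout is an assumption for illustration, since it varies across Progress versions, so verify it against your own dbanalys output:

```python
# Minimal sketch: extract figures from one table row of a proutil
# tabanalys RECORD BLOCK SUMMARY. The assumed column order is:
# table, records, size, min, max, mean, fragment-count,
# fragmentation factor, scatter factor -- an assumption that may not
# match every Progress version's report layout.

def parse_tabanalys_line(line: str) -> dict:
    parts = line.split()
    return {
        "table": parts[0],
        "records": int(parts[1]),
        "fragmentation_factor": float(parts[-2]),
        "scatter_factor": float(parts[-1]),
    }

# Hypothetical sample row, not taken from a real report
row = parse_tabanalys_line("PUB.Customer 125000 48.2M 41 312 96 126300 1.0 2.4")
print(row["table"], row["scatter_factor"])  # PUB.Customer 2.4
```
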

  9. Scatter Factor • For Large, Frequently Accessed Tables: • Below 1.0: Data overlap problem (bug) • 1.0: Green Flag - Perfect • 1.1-2.0: Probably OK • 2.1-3.0: Yellow Flag - Deteriorating • 3.1-4.0: Performance problems are probably being experienced • 4.1+: Red Flag - take action ASAP
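
The bands above translate directly into a small classifier; this is just the slide's table expressed as code, using the slide's own flag names:

```python
# The slide's scatter-factor bands for large, frequently accessed
# tables, encoded as a lookup. Band names come from the slide itself.

def scatter_flag(sf: float) -> str:
    if sf < 1.0:
        return "data overlap problem (bug)"
    if sf == 1.0:
        return "green flag - perfect"
    if sf <= 2.0:
        return "probably OK"
    if sf <= 3.0:
        return "yellow flag - deteriorating"
    if sf <= 4.0:
        return "performance problems likely"
    return "red flag - take action ASAP"

print(scatter_flag(2.4))  # yellow flag - deteriorating
```
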

  10. Scatter Factor • Scatter Factor is not as crucial in V9+ if selected Tables are stored in dedicated Storage Areas • But don’t forget there is also a Logical Scatter Factor to consider - does the Physical sequence of the records match the sequence of the keys in the most commonly used index? • New tool to measure Logical Scatter Factor (Logical Scatter Analyzer)

  11. Fragmentation Factor • A Fragment is a physical piece of a Record • A Record can consist of one or more Fragments • The more Fragments per Record, the more I/O is required to read and update a Record • VSTs (_ActRecord) can tell you how much Excess Fragment I/O is actually occurring, but unfortunately not on a table level; also the statistics are somewhat suspicious
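
The fragmentation factor reported by tabanalys is essentially fragments divided by records, so 1.0 means every record fits in a single fragment, while 1.5 means the average record is split across 1.5 fragments (and needs roughly that much extra I/O). A one-line sketch of that arithmetic:

```python
# Fragmentation factor = fragments / records. A value of 1.0 is ideal
# (one fragment per record); higher values mean extra I/O per record.

def fragmentation_factor(fragments: int, records: int) -> float:
    return fragments / records

# Hypothetical counts: 100,000 records stored in 150,000 fragments
print(fragmentation_factor(150_000, 100_000))  # 1.5
```
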

  12. Index Statistics • Obtained from proutil dbanalys/ixanalys • Block Utilization Percentage • 50% = 2X more I/O to read all the Keys • 25% = 4X more I/O to read all the Keys • B-Tree Levels • Each level means a possible extra I/O to retrieve a Key • How often an Index is used can be found using the _IndexStat VST
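
The utilization arithmetic on this slide is simple: reading all keys of an index costs roughly 100 divided by the utilization percentage times the I/O of a perfectly packed index, which is where the 2X and 4X figures come from:

```python
# The slide's index block utilization arithmetic: at 50% utilization,
# reading every key takes ~2x the I/O of a fully packed index; at 25%,
# ~4x.

def key_read_io_multiplier(utilization_pct: float) -> float:
    return 100.0 / utilization_pct

print(key_read_io_multiplier(50))  # 2.0
print(key_read_io_multiplier(25))  # 4.0
```
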

  13. How Long? • For a Database that isn’t reorganized very often, the dump is typically the longest part of the process • Identify the table that takes the longest time to dump; that will be one of the limiting factors

  14. Outage Tolerance • Example: Johnson Controls (Battery) • One Database Outage per Year • Duration: usually 24 hours

  15. Outage Tolerance • 300gb Database in Australia • Last Dump & Load in 1999 • Optimized D&L takes a “fortnight”, i.e. two weeks • Unacceptable!

  16. Solution: Pro Dump & Load • Does not make the D&L go faster • But it does reduce the D&L Downtime • Example: • 100gb Mfg/Pro Database in Hungary • Using Pro D&L, Total Downtime under One Hour

  17. How Does It Work? Initial Phase • Shutdown the Production DB • Take a Copy of the Production DB • Add Pro D&L Replication to the Production DB • Restart the Production DB • Now all changes made to the production DB are stored in a separate DB • Limiting Factor: How long to make a DB copy

  18. How Does it Work? Dump/Load Phase • Dump and Load the Copy DB • If disk space for the Replication DB is adequate, it doesn’t matter how long the D&L takes or what method is used

  19. How Does it Work? Synchronization Phase • Start Applying Replicated Data to the New (Dumped & Loaded) Database

  20. How Does it Work? Final Phase • Shutdown Production Database • Apply the Remaining Replication Data • Verify that the old and new DBs match • Replace the ‘old’ Production DB with the ‘New’ Production DB • Start the Production Database
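
The four phases above can be illustrated with a toy simulation (this is not the real Pro D&L tool, just a sketch of the idea): changes made to production during the dump/load are queued in a replication log and replayed against the rebuilt copy, so only the final drain of the queue requires production downtime.

```python
# Toy model of the replication-based D&L workflow. Dicts stand in for
# databases; a list stands in for the Pro D&L replication database.

production = {"cust1": "v1"}
repl_log = []                 # initial phase: replication switched on
copy_db = dict(production)    # copy taken at the same moment

# Dump/load phase: production keeps changing while the copy is rebuilt;
# every change is also recorded in the replication log.
production["cust1"] = "v2"; repl_log.append(("cust1", "v2"))
production["cust2"] = "v1"; repl_log.append(("cust2", "v1"))
rebuilt = dict(copy_db)       # stands in for the dumped-and-loaded copy

# Synchronization + final phase: replay the log against the rebuilt
# copy, then verify old and new match before swapping them.
for key, value in repl_log:
    rebuilt[key] = value
assert rebuilt == production
print("databases match")
```

The point of the structure is that the expensive step (rebuilding the copy) happens while production stays up; downtime is bounded by how fast the remaining log entries can be applied.
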

  21. Requirements, Warnings, and Limitations • Requires very good Progress DBA skills or BravePoint assistance • Lots of Disk Space is Required • Replication Database • Copy Database • Dump Files • Index Rebuild Sort Files • A Separate Server for the D&L is nice

  22. Requirements, Warnings, and Limitations • Tables with no Unique Indexes require extra handling; fortunately Mfg/Pro only has a few of these tables • SQL92 can’t be used to Update the Production Database during the time Pro D&L is Active • Some additional Drag on Performance, particularly if the D&L occurs on the Server where the Production DB is

  23. Introduction – Ben Holmes • Started Programming with Progress in 1989, version 4.2 • Progress Consultant/DBA for over 12 years. • Worked at QAD as Technical Service Manager. • Worked at Eaton for almost 4 years.

  24. Customer Success Story - Eaton Corp

  25. Eaton’s Environment • 250+ Production Progress Databases • Databases range from 6gb to 120gb • Multiple Versions of Progress (8 & 9) • Multiple Versions of MFG/PRO (7.4 - eB2) • Nine Different Countries • Many Different Time Zones

  26. Average Run Time • Start after Tuesday Backup • Run DB analyses • Turn Pro D&L Replication on • Restore Backup and Perform a Dump/Load using the Binary method • Friday & Saturday: Load Transactions • Attach New Database to the Pro D&L Replication Database and load older transactions (~1 million per hour) • We have had over 20 million transactions
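
At roughly 1 million transactions per hour, the backlog figure above gives a quick back-of-envelope estimate of how long the replication catch-up takes:

```python
# Back-of-envelope from the slide: draining a replication backlog at a
# steady apply rate. 20 million transactions at ~1 million/hour is
# about 20 hours of catch-up before the final cutover.

def catchup_hours(backlog_txns: int, apply_rate_per_hour: int) -> float:
    return backlog_txns / apply_rate_per_hour

print(catchup_hours(20_000_000, 1_000_000))  # 20.0
```
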

  27. Continued • Sunday, After Backup • Copy New Database over top of Production • Load remaining transactions • Run DB analyses • Compare table counts • Total amount of Production Downtime: Under 3 hours • Grand Total elapsed time: 20-75 hours

  28. Conclusion • Questions? • Thank you for coming • danf@prodb.com • BenWHolmes@eaton.com
