VERITAS NetBackup™ 4.5 Performance Tuning Dave Little Product Specialist George Winter Product Marketing Engineer
VERITAS Adaptive Software Architecture™ [architecture diagram: Service Level Delivery · VERITAS Global Operation Management · Application Uptime · Data Center Recovery · Global Data Management · Capacity Planning · Business-Critical Applications and Databases · Application Plug-ins · VERITAS Software Services · High Availability · Data Protection · Storage Infrastructure · Active SRM · Storage Plug-ins · DAS / NAS / SAN · Powered by VERITAS IP-Storage]
NetBackup Performance Tuning If you have questions like: • How big should my backup server be? • How can I tune my NetBackup server for maximum performance? • How many tape drives should I buy?
NetBackup Performance Tuning If you have questions like: • How many CPUs do I need? • Why are my backups so slow? • How can I improve my recovery times?
NetBackup Performance Tuning You’re in the right room!
NetBackup Performance Tuning The first step toward accurately determining your backup requirements is a complete understanding of your environment.
NetBackup Performance Tuning • Many performance issues can be traced to hardware/environmental issues • Basic understanding of the entire backup data path is important in determining maximum obtainable performance • Poor performance is usually the result of unrealistic expectations and/or poor planning
Q: Why Are My Backups Slow? CASE STUDY The customer reports that backups are *very* slow and that he is not making his backup window. Total throughput is 8 to 10 MB/second. He wants to know how to configure NetBackup to increase backup performance. Customer’s configuration ………
Q: Why Are My Backups Slow? Dedicated NetBackup server (Sun, 4 CPUs, 4 GB RAM) LAN TCP/IP (100BaseT) Tape library: six DLT 8000s NetBackup clients 1 Gb Fibre Channel
NetBackup Capacity Planning Every backup environment has a bottleneck. It may be a very fast bottleneck, but it will determine the maximum performance obtainable with your system.
NetBackup Capacity Planning • Total Dataset Size: • How much data will be backed up? • Approximate amount of data change? • Data Type: • Types of data – text, graphics, database • How compressible? • Number of files?
NetBackup Capacity Planning • Data Location • Is the data local or remote? • What are the characteristics of the storage subsystem? • What is the exact data path? • How busy is the storage subsystem?
NetBackup Server Sizing • Difficult to be exact but here are some guidelines: • I/O performance is generally more important than CPU performance • CPU, I/O and memory expandability should also be a consideration when choosing a server
NetBackup Server Sizing • How many CPUs do I need? or
NetBackup Server Sizing • How many CPUs do I need? Experiments have shown that a useful, conservative estimate is 5 MHz of CPU capacity per 1 MB/sec of data movement in and out of the NetBackup media server (Sun Microsystems)
NetBackup Server Sizing • How many CPUs do I need? Example: A system backing up clients over the network to a local tape drive at the rate of 10 MB/sec would need 100 MHz of available CPU power: 50 MHz to move data from network to NetBackup server 50 MHz to move data from NetBackup server to tape Note: Keep in mind that the operating system and other applications use the CPU also.
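The rule of thumb above can be written as a quick calculation. This is a sketch only: the 5 MHz per MB/sec figure is the Sun-derived estimate quoted above, the function name is illustrative, and real CPU requirements vary with hardware and workload.

```python
def cpu_mhz_needed(throughput_mb_per_sec, mhz_per_mb_per_sec=5):
    """Rough CPU estimate for a media server (5 MHz per MB/sec rule).

    Data crosses the server twice: once inbound (network -> server)
    and once outbound (server -> tape), so the rule applies to each leg.
    """
    inbound = throughput_mb_per_sec * mhz_per_mb_per_sec
    outbound = throughput_mb_per_sec * mhz_per_mb_per_sec
    return inbound + outbound

print(cpu_mhz_needed(10))  # 100 MHz for the 10 MB/sec example above
```

Remember that this is only the backup data path; the operating system and other applications need CPU headroom on top of this estimate.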
NetBackup Server Sizing • How much memory do I need? • Memory is cheap. More is generally better • At least 512 MB recommended (Java GUI) • NetBackup uses shared memory for local backups • NetBackup buffer usage will affect this (more specifics later) • Don’t forget other applications!
NetBackup Capacity Planning Tape Configuration Guidelines: • No more than two high-performance tape drives per SCSI/Fibre Channel connection • The SCSI/Fibre Channel connection should be able to handle both drives at their maximum rated (compressed) transfer rate • A tape drive is a very slow-access device compared to a disk drive • Tape drive wear and tear is much lower, and efficiency greater, if the data stream is sustained and matches the drive’s rated transfer rate
NetBackup Capacity Planning Example: Fast Wide SCSI Bus (20 MB/Sec) Two DLT 8000s rated at 12 MB/Sec (2:1 comp)
NetBackup Capacity Planning Too much tape drive for this configuration. Only one tape drive will stream Add a second SCSI bus, or move to a faster SCSI connection (assuming tape drives will support the new SCSI type)
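The streaming check above is simple arithmetic; a minimal sketch (function name is illustrative) makes the bus-versus-drive trade-off explicit:

```python
def drives_that_can_stream(bus_mb_per_sec, drive_mb_per_sec, num_drives):
    # A drive "streams" only if the bus can feed it at its full rated
    # (compressed) speed; integer division gives how many drives fit.
    return min(num_drives, int(bus_mb_per_sec // drive_mb_per_sec))

# Fast/Wide SCSI at 20 MB/sec, two DLT 8000s at 12 MB/sec (2:1 compression):
print(drives_that_can_stream(20, 12, 2))  # 1 -> the second drive starves
```

With a second bus, or a faster SCSI variant, each drive gets the full 12 MB/sec and both stream.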
Multiplexing and Multistreaming • Can be a powerful tool when configuring a system to make sure that all tape drives are streaming • With NetBackup both can be used concurrently • Do not confuse the two
Multiplexing Multiplexing writes multiple data streams to a single tape drive
Multistreaming Multistreaming writes single data streams to multiple tape drives
Multiplexing and Multistreaming • Understand the implications of MPX on restore times • A restore from a multiplexed tape must read past all nonapplicable interleaved data, although multiple simultaneous restores from an MPX tape are possible
Multiplexing and Multistreaming • Suggestions: • Find optimal value – where tape drives are just streaming • Use higher MPX for incrementals • Use lower MPX for local backups
Multiplexing and Multistreaming • NEW_STREAM Directive • Useful for fine-tuning streams so that no disk subsystem is underutilized or overutilized
Compression • Two types: client compression and tape compression • Tape compression is almost always preferred • Tape compression offloads the compression task from the client and server • If both client and tape compression are turned on, the amount of backed-up data can actually increase (recompressing already-compressed data expands it)
Compression • Client Compression • Client compression reduces the amount of data sent over the network, but adds CPU load on the client • The client bp.conf entry MEGABYTES_OF_MEMORY might help client compression performance • Avoid compressing already-compressed files by using the COMPRESS_SUFFIX bp.conf entry
NetBackup Buffer Settings • Four NetBackup buffer settings: • NET_BUFFER_SZ • SIZE_DATA_BUFFERS • NUMBER_DATA_BUFFERS • NUMBER_DATA_BUFFERS_RESTORE
NET_BUFFER_SZ • Defines the TCP/IP socket buffer size between NetBackup media servers and clients • Default is 32,032 bytes (32 K) • Usually set the same on client and media server • A larger size may improve performance — try different settings in your environment • A commonly cited optimal value on Windows systems is 132,096 bytes: twice the 64 K network buffer size plus 1 K
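The Windows figure quoted above is just arithmetic on the buffer sizes (assuming the 64 KB network buffer the formula implies):

```python
network_buffer = 64 * 1024                 # assumed 64 KB network buffer
net_buffer_sz = 2 * network_buffer + 1024  # twice the buffer size plus 1 K
print(net_buffer_sz)  # 132096
```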
SIZE_DATA_BUFFERS • Size of the shared-memory buffers used between the network or disk and the tape device • Default settings are: • 32 K for nonmultiplexed backups (UNIX) • 64 K for multiplexed backups (UNIX) • 64 K for Windows (non-MPX and MPX) • Changes take effect with the next backup — processes do not have to be bounced • Restores use the same buffer size that the backup used (unless otherwise configured)
NUMBER_DATA_BUFFERS • Default settings are: • 8 for nonmultiplexed backups • 4 for multiplexed backups • 8 for nonmultiplexed restore, verify, import and duplicate
NUMBER_DATA_BUFFERS_RESTORE • Default settings are: • 8 for a non-MPX image • 12 for an MPX image • This file is consulted only for MPX-protocol restores • Raising it might help when performing multiple simultaneous restores, e.g. databases
BPTM Log • Verify settings by checking the BPTM log. Log entry will look something like this: ….. using 8 data buffers, buffer size is 262144 This should match your buffer settings.
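Checking the bptm log line above can be automated with a small parser. This is a sketch: the timestamp/PID prefix in the sample line is illustrative, but the "using N data buffers, buffer size is S" text matches the log excerpt shown above.

```python
import re

def bptm_buffer_settings(log_line):
    """Extract (num_buffers, buffer_size) from a bptm log line."""
    m = re.search(r"using (\d+) data buffers, buffer size is (\d+)", log_line)
    return (int(m.group(1)), int(m.group(2))) if m else None

# hypothetical log line in the format quoted above:
line = "..... using 8 data buffers, buffer size is 262144"
print(bptm_buffer_settings(line))  # (8, 262144)
```

Compare the parsed values against your NUMBER_DATA_BUFFERS and SIZE_DATA_BUFFERS settings; a mismatch means your configuration is not being picked up.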
NetBackup Buffer Settings • Care should be taken when setting buffer sizes • Incorrect settings can impede performance • Settings that are too high can cause restore issues — specifically, a buffer size that exceeds the tape drive’s maximum I/O size • After modifying settings, test restores
Buffers and System Memory • Buffers use shared memory – a finite resource • How much memory is being used? Memory Used = (buffer_size * num_buffers) * num_drives * MPX
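The formula above is easy to evaluate for a candidate configuration; the drive count, buffer count, and MPX level below are hypothetical example values:

```python
def shared_memory_used(buffer_size, num_buffers, num_drives, mpx):
    # Memory Used = (buffer_size * num_buffers) * num_drives * MPX
    return buffer_size * num_buffers * num_drives * mpx

# e.g. 64 KB buffers, 16 buffers, 4 drives, MPX of 4:
used = shared_memory_used(65536, 16, 4, 4)
print(used, used // (1024 * 1024))  # 16777216 bytes = 16 MB
```

Runs like this make it obvious how quickly aggressive buffer counts and MPX levels multiply into real shared-memory pressure.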
How Can I Determine the Best Setting? • What to look for in BPTM log: …waited for full buffer 1681 times, delayed 12296 times This is the number of times bptm had to wait for a full buffer, which indicates the writing bptm process is waiting for data from the source. Changing the number of buffers will not help; adding multiplexing might.
How Can I Determine the Best Setting? • What to look for in BPTM log: ...waited for empty buffer 1883 times, delayed 14645 times This is the number of times bptm had to wait for an empty buffer, which indicates data is arriving from the source faster than it can be delivered to tape. If multiplexing, reduce the number of streams per drive. Adding more buffers might help. Be careful: reducing the multiplexing level might not have an effect on backup performance.
How Can I Determine the Best Setting? • What to look for in BPTM log: ...waited for empty buffer 1883 times, delayed 14645 times This indicates the number of times bptm waited for an available buffer. A large number should be investigated. Each delay is 30 ms.
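The wait/delay counters in the lines above can be pulled out and converted to total stall time with a small parser (a sketch; the 30 ms per delay is the figure quoted on the slide above, and the function name is illustrative):

```python
import re

DELAY_MS = 30  # each delay is ~30 ms, per the slide above

def parse_buffer_waits(log_line):
    """Extract wait/delay counters from bptm lines like the ones above."""
    m = re.search(r"waited for (full|empty) buffer (\d+) times, "
                  r"delayed (\d+) times", log_line)
    if not m:
        return None
    return {"buffer": m.group(1),
            "waits": int(m.group(2)),
            "delays": int(m.group(3)),
            "total_delay_sec": int(m.group(3)) * DELAY_MS / 1000}

stats = parse_buffer_waits(
    "waited for empty buffer 1883 times, delayed 14645 times")
print(stats["total_delay_sec"])  # 439.35 seconds spent stalled
```

Summing this across a backup window shows whether buffer stalls are a meaningful fraction of elapsed time before you start turning tuning knobs.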
Restore Performance • Common restore performance issues: • Multiplexing improperly set • Index performance issues • Fragment size • MPX_RESTORE_DELAY • NUMBER_DATA_BUFFERS_RESTORE settings
Restore Performance • Multiplexing improperly set • Multiplexing can cause more tape searching • The ideal setting is only as high as needed to stream the drives • Multiplexing is not always the cause of poor restore performance
Restore Performance • Index performance issues • The disk subsystem where the indexes are located has a large impact on restore performance — configure it for fast read performance • Use of indexing can also help • New binary index scheme in NetBackup 4.5….
Restore Performance • Fragment Size • Affects where and how many tape marks are placed • Fewer tape marks can slow restores if the SCSI fast-locate block positioning feature is not available • SCSI fast-locate block positioning can help • A typical setting is 2,048 MB (your mileage may vary)
Restore Performance • NetBackup can perform multiple restores at the same time from a single multiplexed tape • MPX_RESTORE_DELAY controls how long NetBackup waits for additional matching restore requests; the default is 30 seconds • Processes do not have to be bounced for changes to take effect