Sizing Guidelines
Jana Jamsek, ATS Europe
IBM i performance data
• It is good to use IBM i performance data to apply the sizing guidelines, even before modelling with Disk Magic
• Reports needed for sizing external storage:
  • System report / Disk utilization & Storage pool utilization
  • Resource report / Disk utilization
  • Component report / Disk activity
• Recommended way of collecting:
  • Collect performance data for 24 hours on 3 consecutive days, and during a heavy end-of-month job
  • Collection interval: 5 minutes
• Insert the reports into Disk Magic to obtain the data in an Excel spreadsheet
Sizing the disk drives in external storage
• DS8800 (recommended maximum disk utilization: 60%)
  • 15K RPM SAS disk drives
  • 10K RPM SAS disk drives
  • SSD
• DS5000 (recommended maximum disk utilization: 45%)
  • 15K RPM disk drives
  • 10K RPM disk drives
  • SSD
• XIV
  • Data modules
• Storwize V7000 (recommended maximum disk utilization: 45%)
  • 15K RPM SAS disk drives
  • 10K RPM SAS disk drives
  • SSD
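As a quick reference, these utilization thresholds can be expressed as a small lookup. A minimal Python sketch; the dictionary and helper names are hypothetical, only the percentages come from the list above:

```python
# Hypothetical helper capturing the utilization guidelines above; the names
# are illustrative, not from any IBM tool.
RECOMMENDED_MAX_DISK_UTIL = {
    "DS8800": 0.60,
    "DS5000": 0.45,
    "Storwize V7000": 0.45,
}

def usable_ops(max_disk_ops: float, system: str) -> float:
    """Scale a raw disk-ops/sec capability down to the recommended utilization."""
    return max_disk_ops * RECOMMENDED_MAX_DISK_UTIL[system]

# Example: a DS8800 rank rated at 2047 disk ops/sec gives ~1228 usable ops/sec,
# the figure used in the rank calculations that follow.
print(usable_ops(2047, "DS8800"))  # -> 1228.2
```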
Guidelines for RAID level
• RAID-10 provides better resiliency
• RAID-10 generally provides better performance:
  • RAID-5 results in 4 disk operations per write (higher write penalty)
  • RAID-10 results in 2 disk operations per write (lower write penalty)
• RAID-10 requires more capacity
• In DS8000, use RAID-10 when:
  • There are many random writes
  • Write cache efficiency is low
  • The workload is very heavy
• In midrange storage and Storwize V7000 we recommend RAID-10
DS8800: Number of ranks
• Detailed calculation of the maximum IO/sec on a RAID-5 rank (sketched below):
  • (reads/sec - read cache hits) + 4 * (writes/sec - write cache efficiency) = disk operations/sec on the rank
• One 6+P 15K RPM rank can handle a maximum of 2047 disk accesses/sec; at the recommended 60% utilization: 1228 disk ops/sec
• Divide the current disk accesses/sec by 1228
• Example: 261 reads/sec, 1704 writes/sec, 45% read cache hits, 24% write cache efficiency: (261 - 117) + 4 * (1704 - 409) = 5324; 5324 / 1228 = 4.3, so 4 to 5 ranks
  • Recommended: 4 ranks
• The calculation is based on performance measurements in Storage development and the recommended disk utilization percentage
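A minimal Python sketch of this rank calculation, using the numbers from the worked example; the function name is hypothetical and the code is illustrative, not an official IBM tool:

```python
# Sketch of the RAID-5 rank calculation above; inputs are from the worked example.
def raid5_ranks(reads, writes, read_hit, write_eff, ops_per_rank=1228):
    """Return (disk ops/sec on the ranks, ranks needed at 60% utilization)."""
    read_misses = reads - reads * read_hit          # reads not served from cache
    write_destages = writes - writes * write_eff    # writes actually destaged
    disk_ops = read_misses + 4 * write_destages     # RAID-5: 4 disk ops per write
    return disk_ops, disk_ops / ops_per_rank

ops, ranks = raid5_ranks(261, 1704, 0.45, 0.24)
print(round(ops), round(ranks, 1))  # -> 5324 disk ops/sec, 4.3 ranks (4 to 5)
```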
DS8800: Number of ranks (continued)
• Estimate the % read cache hit and % write cache efficiency from the present cache hits on internal disk
• Rough estimation by best practice (sketched below):
  • If the % cache hits is below 50%, estimate the same percentage on external storage
  • If the % cache hits is above 50%, estimate half of this percentage on external storage
• If the cache hits are not known, or you are in doubt, use the Disk Magic default estimation: 20% read cache hit, 30% write cache efficiency
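The estimation rule can be sketched as follows; the helper name and parameter defaults are hypothetical, the thresholds come from the slide above:

```python
# Sketch of the cache-hit estimation rule of thumb above.
def estimate_external_cache_hit(internal_hit=None, disk_magic_default=0.20):
    if internal_hit is None:
        return disk_magic_default   # cache hits unknown: use the Disk Magic default
    if internal_hit < 0.50:
        return internal_hit         # below 50%: assume the same percentage
    return internal_hit / 2         # above 50%: assume half of it

print(estimate_external_cache_hit(0.45))  # -> 0.45
print(estimate_external_cache_hit(0.80))  # -> 0.40
print(estimate_external_cache_hit())      # -> 0.20 (read default; writes use 0.30)
```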
DS8800: Number of ranks (continued)
• Quick calculation, based on the detailed calculation shown above
• Assumed cache hits: 20% read hit, 30% write efficiency
• Example: 9800 IO/sec with a read/write ratio of 50/50 needs 9800 / 982 = approximately 10 RAID-10 ranks of 15K RPM disk drives, connected with IOP-less adapters
• The per-rank table can be found in the Redbook IBM System Storage DS8000: Host Attachment and Interoperability, SG24-8887-00
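A sketch of the quick calculation, assuming the 982 IO/sec per RAID-10 rank figure implied by the example (the full table is in the referenced Redbook):

```python
import math

# Quick-calculation sketch: total IO/sec divided by the per-rank capability.
# 982 IO/sec per RAID-10 rank is the table value implied by the example above
# (15K RPM drives, IOP-less adapters, assumed cache hits of 20%/30%).
PER_RANK_IOPS = 982
print(math.ceil(9800 / PER_RANK_IOPS))  # -> 10 RAID-10 ranks
```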
DS5000/4000/3000: Number of disk drives
• Detailed calculation of the maximum IO/sec on a disk in RAID-10:
  • (reads/sec - read cache hits) + 2 * (writes/sec - write cache efficiency) = disk operations/sec on the disk
• Quick calculation of IO/sec per DDM (sketched below):
• Example: 7000 IO/sec with a read/write ratio of 70/30 needs 7000 / 82 = approximately 85 15K RPM disk drives in RAID-10
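A sketch of the RAID-10 variant; the 82 IO/sec per DDM figure is the table value implied by the example (15K RPM drive, 70/30 read/write), and the function is illustrative:

```python
def raid10_disk_ops(reads, writes, read_hit, write_eff):
    """Disk operations/sec on the disks: RAID-10 costs 2 disk ops per write."""
    return (reads - reads * read_hit) + 2 * (writes - writes * write_eff)

# Illustrative: 7000 IO/sec at 70/30, with the Disk Magic default cache hits.
print(round(raid10_disk_ops(4900, 2100, 0.20, 0.30)))  # -> 6860 disk ops/sec

# Quick calculation from the example: per-DDM table value of 82 IO/sec.
print(round(7000 / 82))  # -> ~85 15K RPM drives in RAID-10
```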
Storwize V7000: Number of disk drives
• Quick calculation (sketched below):
• Example: 7000 IO/sec with a read/write ratio of 70/30 needs 7000 / 138 = approximately 50 15K RPM disk drives in RAID-10
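The same quick method applies, with the V7000 per-drive figure from the example:

```python
# 138 IO/sec per 15K RPM drive in RAID-10 at a 70/30 read/write ratio is the
# table value implied by the example above.
print(round(7000 / 138))  # -> ~51, which the guideline rounds to about 50 drives
```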
Number of DDMs when connected with VIOS and SVC
• The sizing guidelines and calculations for DDMs in storage systems connected with VIOS or VIOS_NPIV don't change
• The sizing guidelines and calculations for DDMs in storage systems connected with SVC and VIOS don't change
Sizing for big blocksizes (transfer sizes)
• Big blocksize: 64 KB and above
• Add about 25% more disk arms for big blocksizes (see the sketch below)
• The guidelines shown assume a small blocksize (about 12 KB)
• The peak in IO/sec usually occurs with small blocksizes
• The peak in blocksize typically has low IO/sec
• So we usually size for the peak in IO/sec and don't add the additional 25%
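A minimal sketch of the 25% adjustment; the function name and inputs are illustrative:

```python
import math

# Add about 25% disk arms when the transfer size is 64 KB or above.
def adjust_for_big_blocks(disk_arms, blocksize_kb):
    if blocksize_kb >= 64:
        return math.ceil(disk_arms * 1.25)
    return disk_arms

print(adjust_for_big_blocks(85, 64))  # -> 107 arms instead of 85
print(adjust_for_big_blocks(85, 12))  # -> 85; small blocksizes need no adjustment
```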
Number and size of LUNs
• With a given disk capacity: the bigger the number of LUNs, the smaller their size
• Sizing guidelines exist for the number of LUNs, or for their size
• To obtain the number of LUNs you may use WLE (number of disk drives)
• Considerations for a very big number of LUNs:
  • Many physical adapters are needed for natively connected storage
  • A big number of virtual adapters in VIOS is difficult to manage and troubleshoot
Number and size of LUNs (continued)
• DS8000 guideline by best practice (see the sketch below):
  • 2 * size of LUN = 1 * size of DDM, or
  • 4 * size of LUN = 1 * size of DDM
  • Presently 70 GB or 140 GB LUNs are mostly used
• DS5000 guideline:
  • Big LUNs enable better seek time on the disk drives
  • Small LUNs (a big number of LUNs) enable more concurrent IO to the disk space
  • Compromise: 70 GB or 140 GB LUNs
• DS5000 best practice:
  • 146 GB physical disks
  • Make RAID-1 arrays of two physical disks and create one logical drive per RAID-1 array
  • Recommended segment size: 128 KB or 64 KB
  • Create one LUN per array
  • If the number of LUNs is limited to 16 (for example, connecting to IBM i on BladeCenter), you may want to make a RAID-10 array of four physical disks and create one LUN per array
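A sketch of the DS8000 LUN-size rule of thumb; the capacity input is illustrative, while the 146 GB DDM size and the 2:1 ratio come from the guidelines above:

```python
import math

# Sketch: LUN size at 1/2 or 1/4 of the DDM size, then LUN count from the
# required capacity. The helper name is hypothetical.
def lun_plan(capacity_gb, ddm_size_gb, ratio=2):
    """ratio=2: LUN is half a DDM; ratio=4: a quarter."""
    lun_size = ddm_size_gb / ratio
    return lun_size, math.ceil(capacity_gb / lun_size)

# 146 GB DDMs at ratio 2 give ~73 GB LUNs, close to the common 70 GB size.
print(lun_plan(3000, 146, ratio=2))  # -> (73.0, 42)
```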
Number and size of LUNs (continued)
• XIV best practice for the size of LUNs:
  • Measurements in Mainz: CPW workload with 96000 users, 2 concurrent runs in different LPARs, on a 15-module XIV Gen 3
  • 70 GB LUNs were not tested
  • Recommendation: about 140 GB LUNs, or 70 GB LUNs
Number and size of LUNs (continued)
• Storwize V7000, SVC:
  • Presently we recommend about 140 GB LUNs
  • This recommendation is based on best practice with other midrange storage systems
  • Recommended to create vdisks in striped mode (the default)
  • Recommended extent size: 256 MB (the default)
Guidelines for different types of connection
• The listed guidelines for a particular storage system apply to all cases (when applicable):
  • Native connection: sizing for physical FC adapters applies to natively connected storage
  • Connection with VIOS vSCSI
  • Connection with VIOS_NPIV
  • Connection via SVC: the size of LUNs applies to SVC vdisks
Sizing FC adapters in IBM i - by IO/sec
• Example: for one port in a #5735 adapter we recommend 3266 / 70 = 46 * 70 GB LUNs
• Example: for 2 ports in multipath we recommend 2 * 46 = 92, reduced to 64 * 70 GB LUNs in multipath (see the sketch below)
• Assumed: access density = 1.5; for 2 paths, multiply the capacity by 2
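A sketch of the port example; the cap at 64 LUNs per port is my reading of the 92 -> 64 reduction above, so treat it as an assumption:

```python
# 3266 IO/sec per #5735 port, divided by 70, gives the 70 GB LUN count per
# port (from the example). The 64-LUN cap is an assumed per-port limit.
PORT_IOPS = 3266
LUNS_PER_PORT = PORT_IOPS // 70
print(LUNS_PER_PORT)                   # -> 46 LUNs for one port
print(min(2 * LUNS_PER_PORT, 64))      # -> 64 LUNs for 2 ports in multipath
```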
Sharing or dedicating ranks
• Sharing ranks among multiple IBM i LPARs:
  • Enables better usage of the resources in external storage
  • On the other hand, the performance of an LPAR might be influenced by the workloads in other LPARs
• Dedicating ranks to each LPAR:
  • Enables stable performance (no influence from other systems)
  • Resources are not as well utilized as with shared ranks
• Best practice:
  • Dedicate ranks to big and/or important systems
  • Share ranks among medium and small LPARs
Guidelines for cache size in external storage
• Modelling with Disk Magic
• Rough guidelines for DS8800 (sketched below):
  • 10 to 20 TB capacity: 64 GB cache
  • 20 to 50 TB: 128 GB cache
  • > 50 TB: 256 GB cache
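The rough guideline can be sketched as a simple lookup (capacity in TB; the function name is hypothetical):

```python
# Sketch of the rough DS8800 cache guideline above.
def ds8800_cache_gb(capacity_tb):
    if capacity_tb > 50:
        return 256
    if capacity_tb > 20:
        return 128
    return 64   # guideline covers 10-50 TB; model smaller systems in Disk Magic

print(ds8800_cache_gb(35))  # -> 128 GB cache
```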
Number of HA cards in DS8800
• Rules of thumb for HAs in DS8800 (sketched below):
  • About 4 to 8 IBM i ports per HA card in DS8800
  • For high performance: the number of HA cards should be the same as or bigger than the number of device adapters in DS8800
  • At least one HA card per IO enclosure in DS8800
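A sketch combining the three rules of thumb; all inputs are illustrative:

```python
import math

# Hypothetical helper: take the maximum of the three rules of thumb above.
def ds8800_ha_cards(ibmi_ports, device_adapters, io_enclosures, ports_per_ha=8):
    by_ports = math.ceil(ibmi_ports / ports_per_ha)   # 4 to 8 ports per HA card
    return max(by_ports, device_adapters, io_enclosures)

print(ds8800_ha_cards(ibmi_ports=24, device_adapters=4, io_enclosures=4))  # -> 4
```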
Sizing IASP
• Very rough guideline: about 80% of the IO will go to the IASP
• System report / Resource utilization: IO to database