Getting the most from your SAN – File and Filegroup design patterns
Stephen Archbold
About me
• Working with SQL Server for 6+ years
• Former Production DBA for a 24/7, high-volume operation
• Currently a SQL Server consultant at Prodata, specialising in Performance Tuning and Consolidation
• Blog at http://blogs.Prodata.ie and http://simplesql.blogspot.com
• Get me on Twitter @StephenArchbold
Agenda
• Data Filegroup/File Fundamentals
• Storage Design Patterns
  • OLTP
  • Data Warehousing – Fast Track style
  • Data Warehousing on a SAN
• What other go-faster buttons have we got
• Case Study – The unruly fact table
• How do we make the changes
Filegroup/File Fundamentals
• General filegroup recommended practices:
  • Keep nothing but system tables in PRIMARY
  • Separate filegroups by I/O pattern, by volatility and by data age
• If using multiple files in a filegroup (see the sketch below):
  • Files should be equally sized
  • Files should be equally full
  • SQL Server does not redistribute existing data when you add more files
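A minimal sketch of these fundamentals, with hypothetical database, filegroup, file names, sizes and paths throughout: PRIMARY is kept for system tables only, and a user filegroup gets two equally sized files.

    -- Placeholder names/sizes/paths: PRIMARY holds only system tables,
    -- user data lands in a filegroup of equally sized files.
    CREATE DATABASE MyDB
    ON PRIMARY
        (NAME = MyDB_sys,   FILENAME = 'D:\Data\MyDB_sys.mdf',   SIZE = 256MB),
    FILEGROUP UserData
        (NAME = MyDB_data1, FILENAME = 'E:\Data\MyDB_data1.ndf', SIZE = 50GB, FILEGROWTH = 4GB),
        (NAME = MyDB_data2, FILENAME = 'F:\Data\MyDB_data2.ndf', SIZE = 50GB, FILEGROWTH = 4GB)
    LOG ON
        (NAME = MyDB_log,   FILENAME = 'L:\Log\MyDB_log.ldf',    SIZE = 10GB);

    -- Make the user filegroup the default so new tables stay out of PRIMARY.
    ALTER DATABASE MyDB MODIFY FILEGROUP UserData DEFAULT;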
Pattern 1 – OLTP
• Transactional processing is all about speed
• You want to record the transaction and get the user out as quickly as possible
• The throughput metric becomes less about MB/sec and more about transactions and I/Os per second
Challenges of OLTP
• Solid state disks are becoming more commonplace, and they thrive on random I/O
• Because these databases can be small, file/filegroup layout can suffer (it gets neglected)
• Faster disk brings different challenges
[Diagram: OLTP file layout – the PRIMARY filegroup holds only MyDB.MDF; a Transactions filegroup with File1.NDF and File2.NDF spreads the PAGELATCH hot spot; Reference tables sit in Ref.NDF and Volatile tables in Volatile.NDF; each file on its own LUN]
Behind the scenes
[Charts comparing the single-file and two-file layouts]
What can we take away from this?
• Resolving this in-memory (PAGELATCH) contention comes down to the file layout (a sketch for spotting it follows)
• This is actually nothing new – TempDB has been tuned this way for years!
• Keep in mind, files are written to in a "round robin" fashion
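A hedged way to confirm you are hitting this kind of contention, using the standard waiting-tasks DMV; which database and objects show up depends entirely on your workload.

    -- Sessions currently waiting on page latches. resource_description is dbid:fileid:pageid,
    -- so repeated waits on the same file point at the hot spot the extra files are meant to spread.
    SELECT wt.session_id,
           wt.wait_type,
           wt.wait_duration_ms,
           wt.resource_description
    FROM   sys.dm_os_waiting_tasks AS wt
    WHERE  wt.wait_type LIKE 'PAGELATCH_%';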
Pattern 2 – Fast Track
• Scenario: large-volume star schema
• Need to optimize for sequential throughput
• Scanning entire tables
• Not shared storage
Large Partitioned Fact Table
[Diagram: a 12-partition fact table mapped to twelve files (MyFact_part1.NDF – MyFact_part12.NDF) in one filegroup, one file per LUN, spread across three enclosures, two controllers per enclosure and three HBAs, balanced against the server's CPU cores]
Fast Track – Pros and Cons
• Pros
  • Easy to figure out your needs
  • Simple, cheap and fast
  • In-depth guidance available from Microsoft
• Cons
  • Not recommended for pinpoint queries
  • Only really for processing entire data sets
  • Need a VERY understanding infrastructure team
Pattern 3 – Data Warehouse on SAN
• Large-volume star schema
• Cannot optimize for sequential throughput
• Shared storage
• More mixed workload
Goal – Large Request Size
• We need read ahead
• Enterprise Edition can issue a request for 512 KB in a single read-ahead request (on Standard you're stuck at 64 KB)
• It can issue several of these at a time (outstanding I/O), up to 4 MB
• But you may not even be close to 512 KB…
How close are you to the 512 KB nirvana?
Run something like the scan below and watch the average read size you're actually getting.
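The slide's own query and counter were shown as images; a hedged reconstruction (dbo.MyFact is a placeholder table name) is to force a large scan and then read the average bytes per read from sys.dm_io_virtual_file_stats, or watch a Perfmon counter such as "Avg. Disk Bytes/Read" while the scan runs.

    -- Force a cold-cache scan (test systems only), then check the average read size per data file.
    DBCC DROPCLEANBUFFERS;
    SELECT COUNT_BIG(*) FROM dbo.MyFact;

    -- Average bytes per read since the instance last started, per file of the current database.
    SELECT DB_NAME(vfs.database_id)                             AS database_name,
           mf.name                                              AS file_name,
           vfs.num_of_bytes_read / NULLIF(vfs.num_of_reads, 0)  AS avg_bytes_per_read
    FROM   sys.dm_io_virtual_file_stats(DB_ID(), NULL) AS vfs
    JOIN   sys.master_files AS mf
           ON mf.database_id = vfs.database_id AND mf.file_id = vfs.file_id;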
Fragmentation – Party Foul Champion
• The #1 killer of read ahead
• Read-ahead size is reduced if the pages being requested aren't in logical order
• Being a diligent type, you rebuild your indexes
• Because SQL is awesome, it does this using parallelism!
• So what's the catch…? A parallel rebuild splits the work across threads, which can leave the rebuilt index interleaved rather than in logical order
• If read ahead is your goal, use MAXDOP 1 to rebuild your indexes! (see the sketch below)
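A minimal sketch, with placeholder table and index names, of a serial rebuild so the rebuilt index stays in logical order and read-ahead requests stay large:

    -- Rebuild on a single thread to avoid the fragmentation a parallel rebuild can introduce.
    ALTER INDEX CIX_MyFact ON dbo.MyFact
    REBUILD WITH (MAXDOP = 1, SORT_IN_TEMPDB = ON);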
[Diagram: the default layout – a single PRIMARY filegroup containing MyDB.MDF on one LUN in Enclosure 1]
[Diagram: Data Warehouse on SAN layout – PRIMARY holds only MyDB.MDF; separate filegroups for Dimensions (Dimensions.NDF), Volatile data (Staging.NDF) and Facts (Facts.NDF / Fact.NDF); the large fact table is split across partition filegroups 1–8, each with its own file (Partition1.NDF – Partition8.NDF) on its own LUN within Enclosure 1]
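A minimal sketch of how a layout like this might be wired up with table partitioning; the filegroup names (Partition1–Partition4, which must already exist), the date partitioning key and the boundary values are all placeholders:

    -- Map each partition of the large fact table to its own filegroup (and so its own file/LUN).
    CREATE PARTITION FUNCTION pf_FactDate (date)
    AS RANGE RIGHT FOR VALUES ('2012-01-01', '2012-04-01', '2012-07-01');

    CREATE PARTITION SCHEME ps_FactDate
    AS PARTITION pf_FactDate
    TO (Partition1, Partition2, Partition3, Partition4);   -- one filegroup per partition

    CREATE TABLE dbo.MyFact
    (
        FactDate date  NOT NULL,
        Amount   money NOT NULL
    ) ON ps_FactDate (FactDate);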
Getting data out of your Data Warehouse for Analysis Services
How does Analysis Services pull in data?
Do we have any go-faster buttons?
• On read-heavy workloads with Enterprise Edition: compression
• If storing multiple tables in a filegroup:
  • "-E" – for data warehouses. Allocates 64 extents (4 MB) per table, per file, rather than the standard 1 extent (64 KB)
• If using multiple files in a filegroup:
  • "-T1117" – for all workloads. Ensures that if auto-growth occurs on one file, it occurs on all the others, so the "round robin" fill stays in place
• In general, on dedicated SQL Servers:
  • Evaluate "-T834" – requires Lock Pages in Memory to be enabled
  • This enables large page allocations for the buffer pool (2 MB – 16 MB)
  • Can cause problems if memory is fragmented by other apps
• (A sketch of checking and enabling these follows below)
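A hedged sketch of working with these settings: -E is a startup parameter rather than a trace flag (added alongside -T1117 and -T834 in the SQL Server startup parameters), and 834 only takes effect at instance startup; 1117 can also be turned on at runtime.

    -- List the trace flags currently enabled globally.
    DBCC TRACESTATUS(-1);

    -- 1117 can be enabled globally at runtime (until the next restart);
    -- for a permanent setting, add -T1117 as a startup parameter instead.
    DBCC TRACEON(1117, -1);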
Case Study – The Unruly Fact Table
• 3 terabyte Data Warehouse
• Table scans were topping out at 300 MB/sec
• The storage was capable of 1.7 GB/sec
• Table partitioning was in place
• All tables were in a single filegroup
• Had to get creative to enhance throughput
Test Conditions
• 16-core server, hyper-threaded to 32 logical cores
• 128 GB of memory
• SQLIO fully sequential: the storage gives 2.2 GB/sec
• 32 range scans started up to simulate the workload
• Page compression was enabled, and the T834 trace flag was enabled
• MAXDOP of 1 on the server to ensure the number of threads was controlled
How do we make the changes
• Thankfully easy – index rebuilds! (see the sketch below)
• For non-partitioned tables, drop and re-create the clustered index on the new filegroup
• For partitioned tables, alter the partition scheme to point to the new filegroup
• For heaps, create a clustered index on the table on the new filegroup, then drop it!
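A minimal sketch of the first and last cases, with placeholder table, index, column and filegroup names throughout:

    -- Non-partitioned table: re-create the existing clustered index on the target filegroup.
    CREATE UNIQUE CLUSTERED INDEX CIX_MyTable
    ON dbo.MyTable (MyTableId)
    WITH (DROP_EXISTING = ON)
    ON [NewFilegroup];

    -- Heap: create a clustered index on the new filegroup (this moves the data), then drop it.
    CREATE CLUSTERED INDEX CIX_MyHeap ON dbo.MyHeap (SomeColumn) ON [NewFilegroup];
    DROP INDEX CIX_MyHeap ON dbo.MyHeap;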
Summary
• File and filegroup considerations can yield huge gains
• Know your workload and optimise for it
• If you have a hybrid workload, then have a hybrid architecture!
• Don't neglect your SQL settings
• Code changes and indexes aren't the only way to save the day!
Useful links
• Paul Randal – Multi file/filegroup testing on Fusion IO
  http://www.sqlskills.com/blogs/paul/benchmarking-multiple-data-files-on-ssds-plus-the-latest-fusion-io-driver/
• Fast Track Configuration Guide
  http://msdn.microsoft.com/en-us/library/gg605238.aspx
• Resolving latch contention
  http://www.microsoft.com/en-us/download/details.aspx?id=26665
• Maximizing table scan speed on SQL 2008 R2
  http://henkvandervalk.com/maximizing-sql-server-2008-r2-table-scan-speed-from-dsi-solid-state-storage
• Specifying storage requirements (find that sweet spot!)
  http://blogs.prodata.ie/post/How-to-Specify-SQL-Storage-Requirements-to-your-SAN-Dude.aspx
• Fragmentation in Data Warehouses
  http://sqlbits.com/Sessions/Event9/The_Art_of_War-Fast_Track_Data_Warehouse_and_Fragmentation
• Partial Database Availability and Piecemeal restores
  http://technet.microsoft.com/en-US/sqlserver/gg545009.aspx