Code Camp NZ 2011 #CCNZ www.mscommunities.co.nz
Minimal Logging & Data Manoeuvring On very Large Tables What is going on here???
Steven Wang • Senior DBA/Senior BI Specialist at BNZ • MCITP/MCTS: BI Developer, Database Developer and Database Administrator • Owner of TellYes Data Intelligence • Blogs: www.msbicoe.com • Email: stevenwang@msbicoe.com
Agenda • What is minimal logging and why does it matter? • What can be minimally logged? • Metadata-only operations • Things under the hood (demo) • Data manoeuvring techniques on very large tables • Real-world example • Q & A
Small Quiz • A minimally logged operation will always reduce the transaction log backup size? True / False • A minimally logged operation will always reduce the transaction log size? True / False • A minimally logged operation is always faster than a fully logged operation? True / False
Answer • A minimally logged operation will always reduce the transaction log backup size? True / False • A minimally logged operation will always reduce the transaction log size? True / False • A minimally logged operation is always faster than a fully logged operation? True / False
What is minimal logging and why does it matter? • SQL Server uses write-ahead logging: everything has to be written to the log file first • A minimally logged operation is one that does not log every row; it logs only the extent allocations • Under the full recovery model the transaction log fills rapidly and is likely to grow during a large data operation • “The transaction log is ALWAYS zero initialized when first created, manually grown, or auto-grown.” --Paul Randal
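The row-versus-extent difference can be seen directly in the log. The following is a rough sketch only: the table names are illustrative, and sys.fn_dblog is an undocumented function, so run something like this on a test system, not in production.

```sql
-- Compare log volume for a minimally logged load under BULK_LOGGED.
ALTER DATABASE MyDB SET RECOVERY BULK_LOGGED;
GO
USE MyDB;
GO
SELECT COUNT(*) AS LogRecordsBefore FROM sys.fn_dblog(NULL, NULL);

-- SELECT ... INTO a new table logs extent allocations, not every row.
SELECT * INTO dbo.BigTable_Copy FROM dbo.BigTable;

SELECT COUNT(*) AS LogRecordsAfter FROM sys.fn_dblog(NULL, NULL);
-- The delta should be far smaller than an equivalent fully logged INSERT.
```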
What can be minimally logged? • SELECT INTO … FROM table • Bulk import operations: • BCP • BULK INSERT • INSERT INTO … SELECT FROM OPENROWSET(BULK …) • Creating/rebuilding an index • Using the .WRITE clause in UPDATE to append new data • Creating a new heap by dropping a clustered index
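As a hedged illustration of the bulk import forms (all table, file, and column names here are assumptions, not from the demo), each of these can qualify for minimal logging when the conditions on the next slide are met:

```sql
-- Bulk import with the TABLOCK hint.
BULK INSERT dbo.SalesStage
FROM 'C:\Data\sales.dat'
WITH (TABLOCK);

-- INSERT ... SELECT FROM OPENROWSET(BULK ...), also with TABLOCK.
INSERT INTO dbo.SalesStage WITH (TABLOCK) (Col1, Col2)
SELECT Col1, Col2
FROM OPENROWSET(BULK 'C:\Data\sales.dat',
                FORMATFILE = 'C:\Data\sales.fmt') AS src;

-- .WRITE on a varchar(max)/varbinary(max) column: appending new data
-- (offset NULL means append) can be minimally logged.
UPDATE dbo.Documents
SET Body.WRITE(N'appended text', NULL, 0)
WHERE DocID = 1;
```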
Conditions • The database must be under the bulk-logged or simple recovery model • The table is not being replicated • The TABLOCK hint must be used • More on this shortly
Conditions (Continued) • Heap + TABLOCK
Conditions (Continued) • Clustered + TABLOCK
Conditions (Continued) • Use trace flag 610 • Can be used to get minimal logging into an empty heap or clustered table • Can be used to get minimal logging into a non-empty B-tree • The TABLOCK hint is not needed for a table without nonclustered indexes • The first page is always fully logged • Can be turned on instance-wide or session-wide • Fully test it in your environment before use
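Turning the flag on and off is a one-liner; a minimal sketch:

```sql
-- Enable trace flag 610 instance-wide (-1) or for the current session only.
DBCC TRACEON (610, -1);   -- instance-wide
-- DBCC TRACEON (610);    -- current session only

-- ... run the bulk load here ...

DBCC TRACEOFF (610, -1);
```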
Metadata-Only Operations • TRUNCATE TABLE • DROP TABLE • Partition switch • Partition merge of two empty partitions • Partition split of an empty partition
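Each of the above is a single statement that touches only allocation metadata, not the rows themselves. A sketch with assumed object names (dbo.Staging, dbo.Sales, dbo.SalesArchive, pfDaily):

```sql
-- Deallocates extents; logs only the deallocations, not each row.
TRUNCATE TABLE dbo.Staging;

-- Moves a whole partition by swapping metadata pointers.
ALTER TABLE dbo.Sales SWITCH PARTITION 1 TO dbo.SalesArchive;

-- Metadata-only when both partitions on either side of the boundary are empty.
ALTER PARTITION FUNCTION pfDaily() MERGE RANGE ('2011-01-01');
```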
Data manoeuvring techniques on very large tables • Bulk loading a partitioned table: • Use parallelism: one bulk load per CPU core • Create staging tables with exactly the same structure as the target table, in the same filegroup, with no indexes • Bulk load the data into the staging tables • Create indexes and constraints on the staging tables • Switch the staging tables into the partitioned table's partitions
ALTER DATABASE MyDB SET RECOVERY BULK_LOGGED;
GO
ALTER DATABASE MyDB MODIFY FILEGROUP FG1 DEFAULT;
GO
SELECT Col1, Col2, Col3, …
INTO Stage_1
FROM Target_Table
WHERE 0 = 1;
GO
...
GO
ALTER DATABASE MyDB MODIFY FILEGROUP FG4 DEFAULT;
GO
SELECT Col1, Col2, Col3, …
INTO Stage_4
FROM Target_Table
WHERE 0 = 1;
GO
Create indexes; apply other settings to comply with the target; create constraints;
ALTER TABLE Stage_1 SWITCH TO MyTargetTable PARTITION 1;
GO
...
GO
ALTER TABLE Stage_4 SWITCH TO MyTargetTable PARTITION 4;
Data manoeuvring techniques on very large tables (Continued) • Deleting a very large proportion of the rows in a table, for example over 50% of the total • Don't DELETE! Consider using TRUNCATE or DROP TABLE instead • How??
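One way to answer the "How??" is to invert the operation: copy the minority of rows you want to keep with a minimally logged SELECT INTO, then drop the original. This sketch assumes a KeepFlag predicate and the MyData naming used later in the deck:

```sql
ALTER DATABASE MyDB SET RECOVERY BULK_LOGGED;
GO
-- Minimally logged copy of only the rows to keep (< 50% of the table).
SELECT *
INTO dbo.MyData_Temp
FROM dbo.MyData
WHERE KeepFlag = 1;   -- assumed predicate selecting rows to keep

DROP TABLE dbo.MyData;                    -- metadata-only
EXEC sp_rename 'dbo.MyData_Temp', 'MyData';
```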
Data manoeuvring techniques on very large tables (Continued) • Updating a very large number of rows in a table across the majority of columns • Don't UPDATE! Consider inserting the data instead to achieve minimal logging. Really????
• Drop MyData • Truncate MyData_New • Rename MyData_Temp to MyData
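A hedged sketch of the insert-instead-of-update idea behind these steps, using the MyData names from the slide (the column transformation is an invented example):

```sql
ALTER DATABASE MyDB SET RECOVERY BULK_LOGGED;
GO
-- Minimally logged insert that applies the "update" on the way through.
SELECT Col1,
       Col2 * 1.1 AS Col2,   -- updated value computed during the insert
       Col3
INTO dbo.MyData_Temp
FROM dbo.MyData;

DROP TABLE dbo.MyData;                    -- metadata-only
EXEC sp_rename 'dbo.MyData_Temp', 'MyData';
```

The full UPDATE would log every row; writing the already-updated rows into a new table under the bulk-logged model logs only extent allocations, and the cleanup steps are metadata-only.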
I believe you already have a fairly good idea of how to apply a similar technique to perform the update on a partitioned table.
Real-World Example • A table has to be partitioned daily • The data has to be kept for more than 7 years • On the first day of every month, a partition merge operation is performed to consolidate the daily partitions that are 30 months old into a single partition • How would you do it?
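One possible shape for the answer, combining the metadata-only operations above (all object names and partition numbers here are assumptions, not the presenter's solution): switch the old daily partitions out, merge the now-empty boundaries, then switch the consolidated month back in.

```sql
-- 1. Switch each of the ~30 daily partitions for the target month out to
--    staging tables on the same filegroup (metadata-only).
ALTER TABLE dbo.FactSales SWITCH PARTITION 915 TO dbo.FactSales_Day01;
-- ... repeat for each remaining day of that month ...

-- 2. Merge the now-empty daily boundaries; merging two empty partitions
--    is metadata-only. Repeat until one monthly partition remains.
ALTER PARTITION FUNCTION pfDaily() MERGE RANGE ('2009-01-02');

-- 3. Bulk load the staged rows into one table whose CHECK constraint
--    covers the whole month, then switch it into the remaining empty
--    partition (metadata-only).
ALTER TABLE dbo.FactSales_Month SWITCH TO dbo.FactSales PARTITION 915;
```

Step 3's consolidation load is itself a candidate for minimal logging under the bulk-logged model with TABLOCK.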
Thank You • Email: stevenwang@msbicoe.com • Blogs: www.msbicoe.com • Linkedin: http://nz.linkedin.com/pub/steven-wang/1b/ab/b8b
Thanks to our sponsors and partners! Sponsor • Premier Partners • Associated Partners • Supporting Partners