Large-Scale SQL Server Deployments for DBAs


Presentation Transcript


  1. Large-Scale SQL Server Deployments for DBAs Unisys

  2. Agenda • Optimization • Failover Cluster • Resources • Indexes • I/O Configuration • Data Management • Enterprise Class Systems Infrastructure • Intel Platform Architecture • Storage Architecture • Microsoft Systems Architecture

  3. SQL Server Failover Clustering • Hot standby solution • Best high-availability configuration • Redundant system • Shared access to the database files • Recovery in seconds • Automatic failure detection • Automatic failover • Minimal client application awareness • Built on the server cluster technology of Windows Clustering

  4. Windows Clustering • Software Components • Cluster name • Cluster IP address • Cluster Administrator account • Cluster resource • Cluster group • Microsoft Distributed Transaction Coordinator (MS DTC) • Hardware Components • Cluster node • Heartbeat • External network • Shared cluster disk array • Quorum drive • Virtual Server

  5. How Failover Clustering Works • Operating-system checks • Heartbeat checks availability of nodes and virtual servers. • SQL Server checks • LooksAlive check runs every five seconds. • IsAlive check runs SELECT @@SERVERNAME query. • Failover to another node • Windows Clustering attempts restart on same node or fails over to another node. • SQL Server service starts. • Brings master online. • Database recovery proceeds. • End users and applications must reconnect.
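
The IsAlive check mentioned above is an ordinary T-SQL query; as a sketch, you can run the same statement against the virtual server yourself to confirm which name answers:

-- IsAlive-style check: the cluster resource DLL runs this query and treats
-- any successful result as evidence the instance is responsive.
SELECT @@SERVERNAME;
-- On a clustered instance this returns the virtual server name, not the physical node.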

  6. Enhancements to Failover Clustering • SQL Server Setup installs/uninstalls a cluster. • Service packs can be applied directly to virtual servers. • SQL Server supports multiple instances and multiple network addresses. • Failover and failback occur to or from any node in a cluster. • SQL Server 2000 on Windows 2000 Datacenter Server supports four server nodes in a cluster. • All nodes have local copies of SQL Server tools and executables. • Rerunning Setup updates Failover Cluster configurations. • SQL Server Service Manager or SQL Server Enterprise Manager now start/stop SQL Server Services.

  7. SQL Server 2000 Failover Clustering [Diagram: Node A and Node B each run MSCS, joined by a private heartbeat and the public network, with shared access to a disk array; clients connect to the SQL Server 2000 virtual server.] Virtual Server Instance • System & User Databases • SQL Server & SQL Server Agent Services • Executables • Network Connection

  8. Instance Example

  9. Four-Node Clusters • SQL Server 2000 fully supports 4-node failover clustering • Can assume at most one node failure and assign resources appropriately (e.g. leaving one node out of four idle gives 25% redundancy, an improvement over the 50% redundancy of a 2-node Windows 2000 Advanced Server cluster) • When running Active/Active/Active/Active for critical workloads, ensure that one node has the power/resources (i.e. memory) to handle the worst case

  10. SQL Server 2000 Cluster Storage Considerations • Fibre Channel connected to a SAN is the preferred approach • Network Attached Storage (NAS) • Not supported for clusters • KB article Q304261: “Because of the risks of network errors compromising database integrity, together with possible performance implications that may result from the use of network file shares to store databases, Microsoft recommends that you store database files either on local disk subsystems or on Storage Area Networks (SANs).”

  11. SQL Server 2000 Cluster Storage Considerations • Basic Disks are required by MSCS (Microsoft Cluster Service) • Dynamic Disks & Mount Points are not natively supported (Veritas required) • Software RAID is not supported • Balance the limited supply of drive letters against the placement of (see the sketch below): • Log files • Data files • Filegroups • tempdb
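
As a rough sketch of the placement trade-off above, each class of file can be given its own drive letter presented by the shared disk array; the drive letters, names, and sizes below are purely illustrative:

-- Hypothetical layout: data files on F: and H:, log on G:.
CREATE DATABASE Sales
ON PRIMARY
    ( NAME = Sales_Data,    FILENAME = 'F:\SQLData\Sales_Data.mdf',    SIZE = 2000MB ),
FILEGROUP History
    ( NAME = Sales_History, FILENAME = 'H:\SQLData\Sales_History.ndf', SIZE = 4000MB )
LOG ON
    ( NAME = Sales_Log,     FILENAME = 'G:\SQLLog\Sales_Log.ldf',      SIZE = 500MB );
-- tempdb is relocated separately with ALTER DATABASE tempdb MODIFY FILE.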

  12. Cluster Failover Memory & CPU Configuration Planning • Memory Resources • CPU Resources [Diagram: glass-of-water analogy: if you poured the contents of one glass into another, would it all fit? 64-bit dynamically manages memory.]

  13. Resources • Memory • Very Large Memory Support • AWE • AWE & I/O • Settings • Processor Allocation • User Mode Scheduler • Settings • Affinity

  14. Windows 2000 Very Large Memory Support • Real Memory: 4GB limit & the /3GB switch • Physical Address Extension (PAE) • Address Windowing Extensions (AWE) • Large Memory Enabled (LME) • AWE I/O implications • Much simpler in 64-bit

  15. Windows Product Options

  16. 4GB Memory Management 101 [Diagram: a process's 4GB virtual address space mapped onto physical memory; the Windows 2000 kernel takes part of the space, leaving 2GB (or 3GB with /3GB) for the application.]

  17. 4GB Process Address Spaces [Diagram: SQL Instance1, SQL Instance2 and SQL Instance3 each have their own 4GB virtual address space mapped onto physical memory, with 2GB or 3GB of user space apiece alongside the Windows 2000 kernel.]

  18. Virtual Memory And AWE • Windows 2000, a 32-bit operating system, can only address 2^32 bytes (4GB)… so how do we support more memory? • The next few slides look at how AWE enables SQL Server to use 64GB

  19. AWE Memory Model • The AWE window resides in real memory (below 4GB) • Data pages are mapped to the reserved buffer pool above 4GB • Applications call the AWE API to get data pages • Data pages are referenced or mapped • Allows huge amounts of data to be cached • Look for the Page Life Expectancy counter in perfmon (see the query sketch below) • AWE provides a 64GB address space to SQL Server • Largest benefits on systems with large amounts of frequently referenced (“hot”) pages • Note: SQL Server applications that scan through large tables once and/or have large intermediate result sets may not benefit much from a larger buffer pool
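
To watch the Page Life Expectancy counter mentioned above without opening perfmon, the same value can be read from the sysperfinfo system table in SQL Server 2000; a minimal sketch:

-- Page Life Expectancy: the number of seconds a page is expected to stay in the
-- buffer pool. Sustained low values suggest the hot pages no longer fit in cache.
SELECT object_name, counter_name, cntr_value
FROM master.dbo.sysperfinfo
WHERE counter_name = 'Page life expectancy';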

  20. AWE Memory Allocation [Diagram: 1. physical memory above 4GB is allocated and mapped into the AWE memory window within SQL Instance1's 2GB or 3GB virtual address space, alongside the Windows 2000 kernel.]

  21. AWE Memory Reallocation [Diagram: to reach a different buffer, 1) the existing mapping in the AWE memory window is removed and 2) a new mapping is created to another physical page above 4GB.]

  22. SQL Server Use of AWE Memory • Using AWE means that SQL Server 2000 memory is no longer dynamic (i.e. it can no longer be relinquished if the system is under stress) • Caution when using AWE and SQL Server with other workloads – configure memory appropriately or SQL Server will allocate total available memory (minus 128MB) • Caution when running in a cluster - allow enough memory for failover on each node

  23. AWE and I/O • Applications can only read/write to AWE Memory Window • Two methods of achieving this: • With LME hardware(hardware can access mapped pages anywhere in memory) • Without LME hardware(hardware can only access mapped pages from <4GB)

  24. AWE I/O With LME (Required for Datacenter Certification) [Diagram: with Large Memory Enabled hardware, network and disk adapters perform direct I/O against mapped pages anywhere in physical memory, including the region above 4GB, through SQL Instance1's AWE memory window.]

  25. AWE I/O Without LME [Diagram: without LME hardware, network and disk adapters can only reach pages below 4GB, so I/O is double-buffered through the AWE memory window before reaching the mapped memory above 4GB.]

  26. Large Memory Settings • To grab 4GB or less: • No /PAE needed in boot.ini • /3GB can be used to grab 3 of the 4GB of real memory • AWE can be enabled but does nothing for SQL Server • To grab >4GB up to 16GB: • /PAE enabled in boot.ini • /3GB enabled in boot.ini • AWE enabled • To grab >16GB: • /PAE enabled • AWE enabled • /3GB disabled (see the boot.ini sketch below)
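
A boot.ini entry combining these switches for the >4GB-to-16GB case might look like the sketch below; the ARC path and description string are illustrative and will differ per machine:

[operating systems]
multi(0)disk(0)rdisk(0)partition(1)\WINNT="Windows 2000 Datacenter Server" /PAE /3GB

For the >16GB case the same entry carries /PAE only, since /3GB is dropped to leave the kernel enough virtual address space to manage the larger amount of physical memory.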

  27. Memory Configuration Settings • Min Server Memory (MB) • SQL Server will maintain this value once it is committed • Not automatically allocated on startup unless AWE is enabled • Max Server Memory (MB) • Becomes SQL Server's target memory unless (Available Bytes – 5 MB) is less than Max Server Memory • Buffer pages continue to be committed until the target is reached • Under memory pressure SQL Server pages out the buffer pool until Min Server Memory is reached • Set max equal to min when AWE is enabled • AWE Enabled • Enables the AWE window for SQL Server (see the sp_configure sketch below)
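
A sketch of these options set through sp_configure on a dedicated AWE-enabled instance; the 12GB figure is a placeholder, not a recommendation:

-- These are advanced options, so expose them first.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'awe enabled', 1;                 -- takes effect after a restart
EXEC sp_configure 'min server memory (MB)', 12288;  -- with AWE the allocation is static,
EXEC sp_configure 'max server memory (MB)', 12288;  -- so min and max are set to the same value
RECONFIGURE;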

  28. Memory Configuration Settings (cont.) • Locks • Sets the maximum number of locks • When set to 0, 2% of memory is allocated to the lock pool initially • The dynamic lock pool does not exceed 40% of the total server memory allocation • Set Working Set Size • Reserves physical memory for SQL Server so it is not swapped out; set min and max server memory to the same value when enabling this option • Min memory per query (KB) • Sets the minimum memory the Memory Grant Manager must provide to avoid the query waiting • Query wait (s) • How long a query waits for the Memory Grant Manager to acquire its minimum memory before timing out
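
The remaining options on this slide use the same sp_configure mechanism; the values below simply restate the defaults described above to show the syntax:

EXEC sp_configure 'locks', 0;                    -- 0 = dynamic lock pool
EXEC sp_configure 'set working set size', 0;     -- only use 1 with min = max server memory
EXEC sp_configure 'min memory per query', 1024;  -- KB the Memory Grant Manager must provide
EXEC sp_configure 'query wait', -1;              -- -1 = wait 25 x the estimated query cost
RECONFIGURE;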

  29. Resources • Memory • Very Large Memory Support • AWE • AWE & I/O • Settings • Processor Allocation • User Mode Scheduler • Settings • Affinity

  30. User Mode Scheduler [Diagram: one UMS scheduler per CPU (CPU 0 through CPU n); work requests arrive from the network, are placed on each scheduler's UMS work queue, and are assigned to workers in that scheduler's worker pool; workers in “Runnable” status are assigned to the CPU by the UMS scheduler, and query results are returned over the network.]

  31. Context Switching • No real multiprocessing, just time slicing • SQL Server User Mode Scheduler • Per processor • Scheduler assigns the processor • Performance Factors • Perfmon: Context Switches/sec (10,000-15,000 per second indicates a problem) • CPUs have L2 cache • CPU affinity mask • Lightweight pooling

  32. Related SQL Server Configuration Settings • Max worker threads • Sets maximum workers to service work requests • Assigned evenly across CPUs available to UMS • Lightweight pooling • When set, workers are created as NT fibers • One thread per CPU manages subset of fibers • Priority boost • If set, workers execute at NT priority 14 • By default, workers execute at priority 7-8 • Max degree of parallelism • Number of processors considered for parallel query execution plan • Cost threshold for Parallelism • Minimum execution cost before optimizer creates parallel execution plan
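
These options are also set through sp_configure (they are advanced options, so 'show advanced options' must already be 1); the values shown are the SQL Server 2000 defaults except for the illustrative degree of parallelism:

EXEC sp_configure 'max worker threads', 255;            -- default pool size
EXEC sp_configure 'lightweight pooling', 0;             -- 1 = schedule workers as NT fibers
EXEC sp_configure 'priority boost', 0;                  -- 1 = run workers at priority 14
EXEC sp_configure 'max degree of parallelism', 4;       -- 0 (default) = use all available CPUs
EXEC sp_configure 'cost threshold for parallelism', 5;  -- default minimum cost for a parallel plan
RECONFIGURE;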

  33. SQL Server Affinity Masks • CPU Affinity • Reduce context switching • Increase cache hits • New SQL Server 2000 SP1 Features! • I/O Affinity • Connection Affinity (VIA only)

  34. SQL Server CPU Affinity
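
As a minimal sketch of the CPU affinity setting above (the mask value is purely illustrative):

-- The affinity mask is a bitmap: setting bit n binds SQL Server workers to CPU n.
-- 15 decimal = 0x0000000F = CPUs 0 through 3.
EXEC sp_configure 'affinity mask', 15;
RECONFIGURE;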

  35. I/O Affinity [Diagram: a 32-processor system built from crossbar-interconnected cells, each containing CPUs, third-level caches (TLC), memory storage units (MSU), and I/O modules attached to a Storage Area Network.] Example: setting I/O affinity to 0x0000000F sends SQL Server disk I/O to the affinitized processors (an extreme high-end feature).

  36. Index Overview • Clustered and Non-Clustered Indexes • Covering Indexes • Unique Indexes • Indexed Views • Index Maintenance • Recommendations

  37. Clustered Indexes [Diagram: clustered index B-tree with a root page, non-leaf pages, and leaf pages holding the data rows in key ranges A-C, D-E, F-G, H-J, K-M, N-O, P-Q, R-S, T-V, W-Z.] Example: • SELECT * FROM table WHERE lastname LIKE 'S%' • Assume 100 names start with 'S' and 100+ rows per page • The clustered index issues 4 I/Os in this example • Contrast with a non-clustered, non-covering index on a heap table

  38. Clustered Indexes: Choosing a Clustered Key • Clusterkey is the row locator in all non-clustered indexes • Increases potential for covering queries • Size consideration: long composite clusterkeys may not be appropriate • Small clusterkeys (smallint or INT) save space in non-clustered indexes: • INT = 4-byte row locator stored in the non-clustered index entry • RID = 6-byte row locator stored in the non-clustered index entry • Update frequency • Clusterkey columns must be stable • Updates are expensive • Cost increases with the number of non-clustered indexes • A continuously ascending clusterkey value is the most efficient model for high-volume inserts (see the sketch below): • CREATE table orders • (Order_ID INT IDENTITY PRIMARY KEY CLUSTERED, • .............................................. • No hot spot contention • No page splitting • Suitable as a default clustered index
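
Completing the fragment above, a minimal orders table following the continuously ascending clusterkey pattern might look like this; the non-key columns are invented for illustration:

CREATE TABLE orders
(
    Order_ID    INT IDENTITY PRIMARY KEY CLUSTERED, -- ascending clusterkey: new rows always
    Customer_ID INT      NOT NULL,                  -- append at the end of the clustered index,
    Order_Date  DATETIME NOT NULL,                  -- so inserts cause no page splits
    Order_Total MONEY    NOT NULL
);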

  39. Non-clustered Indexes [Diagram: a non-clustered index on a heap; the root page points to branch pages (ANDERSON 4:300, JOHNSON 4:301, SIMONSON 4:302), branch entries point to leaf pages, and leaf entries (ANDERSON 2:100 through TANNER 2:107) store row locators into the heap's data pages. When UPDATE CUSTOMER SET CUSTOMER_COMMENT = '...' WHERE CUSTOMER_NAME = 'KRANSTEN' moves the row, the heap leaves a forwarding pointer to the new row location, which avoids an index update when the RID changes.]

  40. Example: a non-clustered index carrying the clustering key [Diagram: root page, non-leaf pages, and leaf pages whose entries store the clustering key as the row locator.] Note: the non-clustered index is affected by all clustering-key operations, including reindex and drop/create.

  41. Example: drop the clustering key and the non-clustered key values change to RIDs [Diagram: root page, non-leaf pages, and leaf pages whose entries now store RIDs pointing into the heap's data pages.] Note: change the clustering key and all non-clustered indexes change automatically.

  42. Non-clustered Indexes: Composite Index Considerations • Leading column density • CREATE UNIQUE INDEX IX_PRODUCT_TYPE_BRAND_NAME ON PRODUCT (PRODUCT_TYPE, PRODUCT_BRAND, PRODUCT_NAME) • Important that statistics exist on all columns of the index: sp_createstats 'indexonly' • Using the left-most subset of columns is not absolutely necessary on non-clustered indexes: low density of an interior column (and existing statistics) lets an index scan filter on the WHERE argument before the bookmark lookup: SELECT * FROM PRODUCT WHERE PRODUCT_BRAND = 'XYZ' StmtText ---------------------------------------------------------------------------------------- |--Bookmark Lookup(BOOKMARK:([Bmk1000]), OBJECT:([orders].[dbo].[PRODUCT])) |--Index Scan(OBJECT:(IX_PRODUCT_TYPE_BRAND_NAME), WHERE:(PRODUCT_BRAND='XYZ')) • If the leading column is used alone as a search argument, its density is crucial; if the columns are used together as a search argument, the leading column's density matters less than the composite "All density" statistic

  43. Covering Indexes • Materializes query result from a non-clustered index • All necessary data must be in the index • Data pages are not touched (no bookmark lookup) • References to clustered key columns are available and used from the non-clustered index • A composite index is useful even if the first column is not referenced SELECT SUPPLIER_ID, ORDER_ID, COUNT(*) FROM ORDER_ITEM WHERE YEAR(SHIP_DATE) = 1993 GROUP BY SUPPLIER_ID, ORDER_ID ORDER BY SUPPLIER_ID, ORDER_ID StmtText ---------------------------------------------------------------------------------------------- |--Compute Scalar(DEFINE:([Expr1002]=Convert([Expr1006]))) |--Stream Aggregate(GROUP BY:(SUPPLIER_ID, ORDER_ID) DEFINE:([Expr1006]=Count(*))) |--Sort(ORDER BY:(SUPPLIER_ID ASC, ORDER_ID ASC)) |--Index Scan(OBJECT:(IX_ORDER_ITEM_SHIP_DATE), WHERE:(datepart(year, Convert(SHIP_DATE))=1993))
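
The plan above is only possible when a non-clustered index contains every column the query references; a hedged sketch of such an index (the index name follows the plan, the column order is an assumption):

-- SHIP_DATE, SUPPLIER_ID and ORDER_ID all live in the index leaf,
-- so the query is answered without touching the data pages.
CREATE INDEX IX_ORDER_ITEM_SHIP_DATE
ON ORDER_ITEM (SHIP_DATE, SUPPLIER_ID, ORDER_ID);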

  44. Covering Indexes: Considerations • Queries covered by indexes are often found during the trivial plan phase • Reduced compilation time • Multiple indexes can be used • Index rows retrieved from non-clustered indexes • Joined on the row locator (clusterkey or RID) • Not a common access method • Clustered keys are always available (they exist in the row locator) • Can reduce some lock contention • The data page / row is not accessed, so it is not locked • Avoid exaggerating non-clustered key lengths to force covering queries • A longer key increases index size • Volatile columns cause lock contention in the index leaf

  45. Break

  46. Data Management Overview • I/O strategies • Capacity Planning Guidelines • RAID • File System • Database and Object Placement Strategies • SAN • Monitoring

  47. Understanding Your Disks • Check the numbers yourself • Newer drives differ in speed from older ones • Numbers from hardware vendors assume a perfect world

  48. Workload Comparison • OLTP environment • Short transactions • Random read and write I/O • 50-60% read, 40-50% write is a good starting point for the workload breakdown • DSS, OLAP environment • Mostly long transactions • Mostly sequential I/O, typically reads • 75-80% read, 20-25% write is a good starting point

  49. Capacity Planning Approach • Minimally • Size of database / size of disk • Table sizing tool in the W2K resource kit and http://toolbox • Not a good approach for performance • Guesstimate • Estimated data throughput / size of disk • Compare with the minimal approach and take the larger number of disks of the two • Sampled approach • Take a small sample workload and monitor it on a system with tempdb, log, and data files placed on separate spindles; use perfmon to capture the I/O statistics and calculate the spindle requirements under real production load

  50. Capacity Planning: OLTP Planning • Random reads/writes • Plan on mostly random reads and writes, at roughly 80 I/Os per second per spindle (see the worked example below) • Configure the controller cache to match the read/write ratio; for a read-intensive application that might be 75% read / 25% write • A larger number of smaller drives = better performance • Put the log on its own controller with the controller cache set to 100% write (except in replication environments)
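
As a worked example of translating a measured workload into spindles (all figures are assumptions for illustration): suppose perfmon shows a peak of 1,200 random I/Os per second against the data files, split 75% read / 25% write, on RAID 1+0 where each write costs two disk I/Os.

Disk I/Os per second = (1,200 x 0.75) + (1,200 x 0.25 x 2) = 900 + 600 = 1,500
Spindles needed      = 1,500 / 80 per spindle ≈ 19, so plan on roughly 20 data spindles,
                       with the log and tempdb on their own spindles as noted above.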
