ISV Configuration & Implementation – Lessons Learned from Customers Juergen Thomas, Microsoft Corporation
What is this session about • Information on some high-end deployments of SAP, PeopleSoft and Siebel software on SQL Server • Explains the HA configurations and server configurations • Explains restrictions imposed by the application vendor • Explains some configuration details • Explains lessons learned DBA-416M • ISV configurations & implementations – Lessons Learned
Commonality between ISV software like SAP and their competitors • Business logic does not run on the database server but on middleware or a smart client • Some vendors have their own virtual machines with a 4GL language for developing business logic • Applications have their own repository or data dictionary describing a logical view on the relational database objects • Feature usage of a particular database is restricted since those vendors support multiple databases • Tailored toward usage of the relational engine only • Tendency toward a large number of objects in the database • High-end customers usually have data volumes in the terabyte range
Commonality between ISV software like SAP and their competitors • Authorization and authentication are done at the application level • The real user name is not seen on the database side • Hardly a chance to audit on the database side • Hard to trace activity on the database side • Applications rely on certain index and/or object configurations • Not advisable to create indexes at the database level only • Indexes are better created through the application's maintenance logic • File groups are not recognized by the applications • When file groups are used, application release upgrade scripts might need manual modification
Big PeopleSoft Deployment in California • Running on PeopleSoft 9 and SQL Server 2005 • This customer saw huge benefits from upgrading to SQL Server 2005 (see later) • Customer uses PeopleSoft HR and FI in two separate instances • In parallel to every productive instance they have a test instance (with the same data volume as production) and a development instance • Productive and test instances have dedicated database servers and multiple application servers on the application tier • 2 Dell PE6850 (4 x Xeon 3.0GHz, 16GB) in an MSCS cluster as the database layer for FI, and a pair of the same servers for HR • 3 (HR) and 5 (FI) application servers: Dell PE2850, 2 x Xeon 2.8GHz, 4GB RAM • As recommended by PeopleSoft, the test environment mirrors the production configuration exactly
Big PeopleSoft Deployment (cont) • Disk configuration: • Predominantly 8K reads, but a good portion of 64K reads as well • Some join operations or aggregations use tempdb as well • Using RCSI, which also puts load on tempdb • 16 disks RAID 1+0 for data files • 12 disks RAID 1+0 for tempdb • 8 disks RAID 1+0 for logs • Problem with workload balancing on disks: • PSFT supports customers creating individual file groups • Sometimes it is better not to get too sophisticated with file groups and instead emphasize a simple data and disk layout
Big PeopleSoft Deployment (cont) • PSFT software produces a considerable number of ad-hoc statements • Caused a lot of CPU consumption through compilations in the past • SQL Server 2005 'Forced Parameterization' is recommended for use with PSFT. Showed very good performance increases without negative impacts so far • Larger PSFT customers reported problems with blocking lock scenarios between modifiers and committed readers • RCSI resolved the problem and enabled better scalability • However, this increased the load on tempdb; hence more disks are needed for tempdb
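Both database-level settings named above can be switched on with plain T-SQL; a minimal sketch, assuming a placeholder database name PSFTDB:

```sql
-- Forced parameterization: ad-hoc statements that differ only in literal
-- values share one cached, parameterized plan
ALTER DATABASE PSFTDB SET PARAMETERIZATION FORCED;

-- Read Committed Snapshot Isolation (RCSI): readers see row versions in
-- tempdb instead of blocking behind modifiers; the option switch needs
-- exclusive access, hence the rollback clause
ALTER DATABASE PSFTDB SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;
```

Note that RCSI shifts row-version storage into tempdb, which is exactly why the slide adds disks to tempdb.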
Big PeopleSoft Deployment (cont) • As with all bigger ISV packages, the following issues can be observed in query execution: • Dependent on a customer's data or application configuration, queries might pick wrong plans at an individual customer • PSFT, like many other ISV packages, doesn't allow index hints in its application coding • Query rewrites are sometimes necessary at individual customer sites • Usage of plan guides proved helpful, however sometimes painful – the biggest problem is how to get the correct plan saved away • 'OPTIMIZE FOR' is so far unusable due to steadily changing data values
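A plan guide attaches a query hint to a statement without touching the application's code. A hypothetical sketch (statement text, parameter types and guide name are made up for illustration; @stmt must match the text the application submits exactly, which is the painful part mentioned above):

```sql
-- Attach OPTION (RECOMPILE) to one specific application statement
EXEC sp_create_plan_guide
    @name            = N'PG_t1_recompile',
    @stmt            = N'select * from t1 where c1 = @P1 and c6 = @P2',
    @type            = N'SQL',
    @module_or_batch = NULL,
    @params          = N'@P1 varchar(10), @P2 varchar(1)',
    @hints           = N'OPTION (RECOMPILE)';
```

Pinning one exact plan instead would use the USE PLAN hint with the plan's XML, which is where "getting the correct plan saved away" becomes the challenge.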
Australian Tax Office – Siebel implementation • DB server: 32 x 1.5GHz Itanium HP Superdome partition with 128GB in MSCS • Two of those Superdomes are used in active/passive mode • 21 x HP DL580 G3 (4 x 3.0GHz) as the application server layer • The HP Superdomes are fully equipped with 64 processors • The other processors are partitioned for development and test systems • Storage: EMC DMX3500 • Database volume: ~2.5TB • Uses EMC SRDF/A to supply a somewhat smaller DR site over a 250-mile distance • Concurrent users: 13,500
Australian Tax Office – Siebel implementation (cont) • Siebel uses server-side cursors very intensively: • For larger implementations, the 2 to 3GB virtual address space of 32-bit soon runs out of steam → usage of 64-bit is nearly mandatory for Siebel in general • Changing from server-side cursors to client-side cursors (firehose) in combination with MARS failed in bigger Siebel implementations like ATO • Problem: Siebel fetches and displays only a small amount of data from a cursor and keeps the cursor open for a long time • Works fine with server-side cursors • Client-side cursors in combination with MARS keep a worker thread occupied, running out of worker threads. No changes in SQL Server 2008
Australian Tax Office – Siebel implementation (cont) • Siebel queries are usually complex, with a mix of mostly random 8K reads and a minority of 64K reads • With 13,500 users working on different cases, high I/O rates need to be managed • Data files spread over 294 disks in 7 LUNs of 42 disks each. Every LUN is built from 27GB hypervolumes per disk – RAID 1+0 • Tempdb is used by Siebel for some aggregations and more complex joins • Tempdb spread over 2 volumes, each spread over 10 disks (hypervolume size = 27GB) – RAID 1+0 • Log on one volume spread over 20 disks (hypervolume size = 27GB) – physically separated from the data file disks – RAID 1+0
Australian Tax Office – Siebel implementation (cont) • Database has 26 data files spread over 7 LUNs • Uneven mapping to LUNs due to usage of file groups • Siebel allows usage of file groups – usage is customer-specific • Effort goes into balancing I/O workloads across the different LUNs • No general recommendations for Siebel since file groups can be very customer-specific • Usually, the highly write-intensive staging tables for data import go into their own file group • Siebel-released upgrade scripts might need to be changed in order to accommodate individual file groups • Fanning out Siebel staging tables over multiple files within a file group reduces latch contention on SQL Server's internal administration pages
Australian Tax Office – Siebel implementation (cont) • Tempdb developed into a bottleneck in two areas: • Mixed extent allocations • Internal administration pages such as GAM, SGAM, IAM, PFS • Solutions: • Use trace flag 1118 to avoid mixed extent allocations • Described in KB 328551 for SQL 2000; also valid for SQL 2005 • Be aware of KB 936185 if on SQL Server 2005 • Use 10 data files for tempdb in order to fan out allocations over different administration pages in different GAM intervals • EMC SRDF/A asynchronous storage replication honors all requirements of the SQL Server I/O basics • Usability and usable distance depend solely on network throughput and quality
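The two tempdb remedies above translate to a few lines of T-SQL; a sketch with placeholder file names and sizes (in practice trace flag 1118 belongs in the startup parameters as -T1118 so it survives restarts):

```sql
-- Avoid mixed extent allocations server-wide (KB 328551)
DBCC TRACEON (1118, -1);

-- Add equally sized tempdb data files so allocations fan out over
-- different GAM intervals; repeat for as many files as desired
ALTER DATABASE tempdb
ADD FILE (NAME = tempdev2, FILENAME = N'T:\tempdb2.ndf',
          SIZE = 4GB, FILEGROWTH = 0);
```

Keeping the files the same size matters because SQL Server's proportional-fill algorithm only spreads allocations evenly across files of equal size and free space.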
Microsoft’s SAP ERP (R/3) System • Introduced 12 years ago as the MS financial system • Evolved into much more, running Microsoft’s worldwide HR, sales and distribution, asset management, treasury and many more functions • Became the single most important business software system within Microsoft • Database volume around 4.5TB at the moment, growing 200GB/month • Database server: DL585 G1, 4 x 2.2GHz dual-core, 48GB RAM • 9 x DL585 G1, 4 x 2.2GHz dual-core, 48GB RAM as the application server layer
Microsoft’s SAP ERP (R/3) System (cont) • Full 64-bit (x64) architecture
Microsoft’s SAP ERP (R/3) System (cont) • High-availability architecture (diagram: SQL DB Mirroring, SQL Log Shipping, SQL DB Mirroring)
Microsoft’s SAP ERP (R/3) System (cont) • Lessons from the high-availability configuration: • Sharing corporate network resources for Database Mirroring over long distances (700-800 miles) doesn’t work – not even asynchronously • The available bandwidth fluctuates too much; at times a few hundred MB of transaction log data queued up • Synchronous mirroring with failover works well over shorter distances within the datacenter with acceptable performance impact. Several hundred transactions/sec can easily be sustained • Be careful with running multiple instances of DBCC INDEXDEFRAG or running parallel index creation • Caution: possible slowdown of the overall system due to the massive transaction log content which needs to be transferred. Always monitor the Database Mirroring counters with Perfmon
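The mirroring state and the send queue the slide warns about can also be watched from T-SQL instead of Perfmon; a sketch using two catalog/DMV queries (counter and object names as they appear on a default instance):

```sql
-- Mirroring state and safety level for every mirrored database
SELECT DB_NAME(database_id)          AS database_name,
       mirroring_state_desc,
       mirroring_safety_level_desc
FROM sys.database_mirroring
WHERE mirroring_guid IS NOT NULL;

-- Unsent log queued on the principal; a steadily growing value signals
-- the bandwidth problem described above
SELECT instance_name, cntr_value AS log_send_queue_kb
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%Database Mirroring%'
  AND counter_name = 'Log Send Queue KB';
```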
Microsoft’s SAP ERP (R/3) System (cont) • Automatic failover is supported by SAP thanks to its automatic reconnect mechanism and usage of SNAC • Lessons from DBM’s automatic failover: • Was reliable in all cases. Executed nearly 10 times in 2 years due to e.g. database server hardware or storage issues • Covered all incidents on the storage side which MSCS would have fallen victim to • Forced failover for the purpose of server patching still requires a small downtime – the business simply doesn’t buy into the chance of rolled-back transactions • Need to set up two different sets of scheduled tasks and enable/disable them manually, or develop your own automatism using e.g. WMI
Microsoft’s SAP ERP (R/3) System (cont) • A small issue in case one server remains down for a longer time – scenario: • Automatic failover is executed due to a severe server failure of the principal • The principal is analyzed as having a severe issue; parts and repair ETA is 3+ days • On the mirror, now acting as principal, data in the transaction log is growing and growing. Growth cannot be sustained for the next 3 days → decision to switch off Database Mirroring • A process of the application layer restarts, reading the same configuration files as it did for the original start (still pointing at the principal) → the process will not be able to connect • Problem: after switching off DBM, the former mirror (now principal) is not aware of ever having had a database in a mirroring relationship → it denies access when approached as ‘FailoverPartner’ • Will be fixed in the October 2007 CU based on SP2 with the addition of trace flag 1449
Microsoft’s SAP ERP (R/3) System (cont) • Another DBM scenario which requires attention: • Decision to suspend synchronous DBM with failover for patching one of the servers; the mirror is brought down after suspending • Only witness and principal are up now • Suddenly failure strikes the server hosting the witness and the witness instance stops • The principal will shut down the mirrored database • Solution: • Before suspending mirroring, disable automatic failover with the DBM wizard
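The wizard step above corresponds to one of two ALTER DATABASE statements, run on the principal; a sketch with a placeholder database name ERP:

```sql
-- Either drop to high-performance (asynchronous) mode, which has no
-- automatic failover...
ALTER DATABASE ERP SET PARTNER SAFETY OFF;

-- ...or keep synchronous safety but remove the witness, which likewise
-- disables automatic failover and avoids the quorum loss described above
ALTER DATABASE ERP SET WITNESS OFF;
```

With no witness, losing the mirror leaves the principal serving alone instead of taking the database offline.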
Microsoft’s SAP ERP (R/3) System (cont) • Query caching issues: • SAP executes every query as a parameterized statement in order to get the query cached and re-used • Beneficial in most cases • However, individual issues arise at individual customers where: • Two indexes could be chosen • Data in some tables or columns is skewed • The ‘first’ caller submits a non-typical parameter set • Issues result in: • Some functionality of the application running slowly for an extended period of time (hours or a whole day) • The disk subsystem being hammered by I/O requests
Microsoft’s SAP ERP (R/3) System (cont) • Cached query scenario 1: • Table with columns c1 to c90 • Table has 1 million rows • Clustered primary key index on columns c1 to c5 • Nonclustered index on c1, c6 where c6 has only two values: • Value ‘X’ in 999,000 rows and ‘ ’ in 1,000 rows • Column c1 has one value only: ‘300’ • Query issued like: select * from t1 where c1 = @P1 and c6 = @P2 with @P1 = ‘300’ and @P2 = ‘X’ • SQL Server decides on clustered index access and caches the query with this plan • However, most users want to search for the rows with the value ‘ ’ and now experience the slow plan over the clustered index seek instead of a faster plan over the nonclustered index
Microsoft’s SAP ERP (R/3) System (cont) • Solution for cached query scenario 1: • Split the business logic into two statements like: • if @c6_value = ‘X’ → sp_executesql ‘select * from t1 … /* for X */ …’ else → sp_executesql ‘select * from t1 … /* for space */ …’ • Use the same statement, but add different comments. SQL Server will not treat these two statements as the same statement, but as different statements. Hence a plan for each of the cases is cached and used • This can also work for several data skews within one column • Another possibility, when one type of usage is in the majority, is the usage of index hints. This is supported by SAP
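Fleshed out against the scenario's table t1, the comment trick looks roughly like this (the variable @c6_value and parameter types are illustrative assumptions):

```sql
-- Identical SQL except for the comment text: the plan cache hashes the
-- full statement string, so each branch compiles and caches its own plan
IF @c6_value = 'X'
    EXEC sp_executesql
        N'select * from t1 where c1 = @P1 and c6 = @P2 /* for X */',
        N'@P1 varchar(10), @P2 varchar(1)', @P1 = '300', @P2 = 'X';
ELSE
    EXEC sp_executesql
        N'select * from t1 where c1 = @P1 and c6 = @P2 /* for space */',
        N'@P1 varchar(10), @P2 varchar(1)', @P1 = '300', @P2 = ' ';
```

The 'X' branch keeps its clustered index plan while the space branch gets a plan over the nonclustered index, sidestepping the skew without any hints.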
Microsoft’s SAP ERP (R/3) System (cont) • Automatic update statistics might not update aggressively enough • Scenario 1: tables with fewer than 500 rows • Usually not a problem when the table is just used for a lookup or scan • The problem is a bit more severe when several such tables are joined with a larger table • Could result in performance problems due to a join order chosen based on ancient statistics • Solution: schedule UPDATE STATISTICS for such small tables from time to time
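A scheduled job step for this can be as small as one statement per affected table; a sketch with a hypothetical table name:

```sql
-- Small lookup tables rarely cross the auto-update threshold, so refresh
-- their statistics explicitly; FULLSCAN is cheap on a few hundred rows
UPDATE STATISTICS dbo.SmallLookupTable WITH FULLSCAN;
```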
Microsoft’s SAP ERP (R/3) System (cont) • Scenario 2: data in specific columns developing strictly ascending • Assume a date column in a sales order header table • Table has data of the last 5 years with 100 million rows in total → 20% changes needed to trigger new update statistics → update statistics happen only once per fiscal year • Column statistics on column ‘Date’ look like: real last data entry: 20070912; first value: 20040701; last value: 20070331
Microsoft’s SAP ERP (R/3) System (cont) • Query scenario: • select * from Sales_Order_Header where Client = ‘300’ and Date between ‘20070101’ and ‘20070110’ • The primary key constraint (clustered index) doesn’t contain Date, but a nonclustered index on Client, Date exists. Client has only one distinct value • Dependent on the range specified over Date, the query optimizer needs to decide between: • Using the nonclustered index to retrieve the rows • Using a clustered index scan • (Statistics: real last data entry: 20070912; first value: 20040701; last value: 20070331)
Microsoft’s SAP ERP (R/3) System (cont) • Query scenario (cont): • select * from Sales_Order_Header where Client = ‘300’ and Date between ‘20070101’ and ‘20070110’ • Date range is small and within the statistics of column Date • The query optimizer decides to access the table via the nonclustered index due to a cardinality estimate, based on the statistics, of around 200K rows • (Selected range within statistics: real last data entry: 20070912; first value: 20040701; last value: 20070331)
Microsoft’s SAP ERP (R/3) System (cont) • Query scenario (cont): • select * from Sales_Order_Header where Client = ‘300’ and Date between ‘20061201’ and ‘20070331’ • Date range is larger and within the statistics of column ‘Date’ • The query optimizer decides to access the table via a clustered index scan due to a cardinality estimate, based on the statistics, of around 8 million rows • (Selected range within statistics: real last data entry: 20070912; first value: 20040701; last value: 20070331)
Microsoft’s SAP ERP (R/3) System (cont) • Query scenario (cont): • select * from Sales_Order_Header where Client = ‘300’ and Date between ‘20070320’ and ‘20070410’ • Date range is small and only the start value lies within the statistics of column ‘Date’ • The query optimizer decides to access the table via the nonclustered index due to a cardinality estimate, based on the statistics, of around 300K rows • (Selected range partly beyond statistics: real last data entry: 20070912; first value: 20040701; last value: 20070630)
Microsoft’s SAP ERP (R/3) System (cont) • Query scenario (cont): • select * from Sales_Order_Header where Client = ‘300’ and Date between ‘20070501’ and ‘20070831’ • Date range is larger and only the start date lies within the statistics of column ‘Date’ • The query optimizer decides to access the table via the NONclustered index due to a cardinality estimate, based on the statistics, of around 1 row • (Selected range mostly beyond statistics: real last data entry: 20070912; first value: 20040701; last value: 20070630)
Microsoft’s SAP ERP (R/3) System (cont) • Multiple problems with multiple solutions in this scenario: • When the query is cached, dependent on the range, users might experience undesirable performance: • Small ranges might be executed over clustered index scans • Wide ranges might be executed over nonclustered index access in other cases • Solutions: • Change the attribute of the clustered index to nonclustered and make the former nonclustered index on columns Client and Date the clustered index • This only works when the columns in the primary key (like order number) do not carry semantics which can be used for range scans. Some ISVs meanwhile use GUIDs as order numbers; GUIDs should never be included in clustered indexes • Add a recompile hint to the query
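The recompile-hint option from the list above, sketched against the scenario's query (column names as given in the slides):

```sql
-- OPTION (RECOMPILE) forces a fresh plan per execution, so the optimizer
-- sees the actual range each time instead of reusing the cached plan;
-- the cost is one compilation per call
SELECT *
FROM Sales_Order_Header
WHERE Client = '300'
  AND [Date] BETWEEN '20070101' AND '20070110'
OPTION (RECOMPILE);
```

This trades CPU for plan quality, which is acceptable for a handful of problem statements but not as a blanket setting.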
Microsoft’s SAP ERP (R/3) System (cont) • Multiple problems with multiple solutions in this scenario: • When the query is compiled, the statistics are way out of line: • Queries like the ones shown before get compiled to take the wrong index • The problem worsens when such a query is part of a nested loop join • With ISV packages you usually don’t know all the cases or can’t catch all the cases • Solutions: • For known cases, schedule update statistics on the sensitive column • Check out http://support.microsoft.com/kb/922063 for new logic in SQL Server 2005 SP1 and SP2 to deal with ascending columns
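To my understanding the KB article refers to the ascending-key trace flags introduced around SQL Server 2005 SP1; a sketch of enabling them server-wide (verify against the KB before using, as this tie-in is an assumption here):

```sql
-- Brand strictly ascending columns and adjust cardinality estimates for
-- predicates beyond the last value captured in the statistics histogram
DBCC TRACEON (2389, -1);  -- columns branded ascending
DBCC TRACEON (2390, -1);  -- also columns whose branding is still unknown
```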
Microsoft’s SAP ERP (R/3) System (cont) • SAP doesn’t support SQL Server file groups → fan out the number of files in the default file group • Pro: • Absolutely easy to configure and leverage storage resources with same-sized data files • No balancing at all needed on the storage backend when the workload profile changes • Con: • Backup of databases in the terabytes becomes more difficult • Solution: • Perform differential backups daily using backup compression against disk devices • Full online backup at the weekend against disk devices • Pull backups from disk to other storage like tape • Backups are not the primary restore strategy – DBM and log shipping keep duplicates
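The backup rotation above boils down to two statements; a sketch with placeholder database and paths (on SQL Server 2005 the compression mentioned would come from a third-party backup tool, since native WITH COMPRESSION only arrived in SQL Server 2008):

```sql
-- Weekend: full online backup to disk
BACKUP DATABASE ERP TO DISK = N'R:\backup\erp_full.bak' WITH INIT;

-- Daily: differential backup to disk; captures only extents changed
-- since the last full backup, so it stays small relative to 4.5TB
BACKUP DATABASE ERP TO DISK = N'R:\backup\erp_diff.bak'
    WITH DIFFERENTIAL, INIT;
```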
Microsoft’s SAP ERP (R/3) System (cont) • SAP ERP is very 8KB-I/O balanced – way over 90% of all SQL Server I/O is in 8KB format • Extremely strenuous on the I/O backend • The main focus of SAP configuration is often the storage backend • Lessons learned from running 4.5TB capable of 15,000 IOPS on an inexpensive EMC CLARiiON SAN storage backend: • SAP doesn’t allow file groups → fan out the files in the default file group to at least a 1:1 ratio with the DB server CPUs • Make the database files exactly the same size and place them on volumes that are also sized the same • Configure the storage in a way that each volume gets the same spindle resources
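Fanning out equally sized files in the default filegroup is one ALTER DATABASE statement per file; a sketch with placeholder names, paths and size:

```sql
-- One file per CPU, all the same size, autogrowth disabled so the
-- proportional-fill algorithm keeps I/O evenly spread across volumes
ALTER DATABASE ERP
ADD FILE (NAME = erpdata2, FILENAME = N'E:\data2\erpdata2.ndf',
          SIZE = 280GB, FILEGROWTH = 0);
```

With growth disabled, capacity is managed by resizing all files by the same amount in a maintenance window rather than letting one file grow ahead of the others.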
Microsoft’s SAP ERP (R/3) System (cont) • Lessons learned from running 4.5TB capable of 15,000 IOPS on an inexpensive EMC CLARiiON SAN storage backend: • 128 x 146GB 15K rpm spindles available • Don’t overdo the number of LUNs/volumes. Even for large databases of 10+TB, 16 to 64 LUNs usually are plenty. Here 16 LUNs were chosen • Build RAID 1+0 groups: here 4+4 disks of 146GB each (diagram shows half of the disks)
Microsoft’s SAP ERP (R/3) System (cont) • Lessons learned (cont): • Next step: build ‘slices’/LUNs through the RAID groups • Here 8 LUNs of 58GB per RAID group • Build 16 striped EMC MetaLUNs over the 16 x 8 = 128 LUNs in a diagonal manner, as demonstrated with 8 RAID groups (diagram) • The first MetaLUN is just for an auxiliary volume • The first MetaLUN for data contains 1.1, 2.2, 3.3, 4.4, 5.5, 6.6, 7.7, 8.8
Microsoft’s SAP ERP (R/3) System (cont) • Lessons learned from running 4.5TB capable of 20,000 IOPS on an inexpensive EMC CLARiiON SAN storage backend: • The 16 striped MetaLUNs are presented to Windows • Align with diskpart to 64K • Format with a 64K NTFS allocation unit size • Why build MetaLUNs in a diagonal manner? Imagine one disk of RAID group #1 (diagram: LUN 1.0, LUN 1.1, LUN 1.2, LUN 1.3 stacked on the disk)
Microsoft’s SAP ERP (R/3) System (cont) • Lessons learned (cont): • LUNs are located in the same manner on every disk • Disk access speed differs between inner tracks and outer tracks • Hence every MetaLUN should use LUNs from all locations of the disks to provide reliable and even performance (diagram: LUN 1.0 – LUN 1.3)
Air Products SAP ERP configuration • ERP database migrated from an IBM mainframe to an HP Integrity rx8640 this year • 12 dual-core Itanium CPUs with 192GB RAM as the database server • Database volume around 5TB • 6 teamed 4Gb HBAs • Application server layer: • 10 x DL380, 2 x Intel 5160 3.0GHz • Workload during the day created by over 1,500 users • Workload during the night created by heavy batch activities • Up to 30,000 random IOPS monitored during high-load phases or parallel index creation
Air Products SAP ERP configuration • HA configuration: • Keep a local hot copy of the data within the company campus • Keep a warm copy of the data on a different continent • Keep a copy for creating sandboxes and to synchronize the test system on campus • Using EMC mirror split (BCV) to execute snapshot backups • Solution: use storage replication of the EMC DMX • For the local ‘hot’ copy: EMC SRDF/S • For the warm copy on a different continent: EMC SRDF/A • For the sandbox copy: EMC Adaptive Copy • Why not use SQL Server methods? • In all cases there is other data from several file servers which needs to be copied in sync with the database files
Air Products SAP ERP configuration • Data files spread over 218 x 73GB 15K rpm spindles • Mirror (BCV) of the data files spread over the same 218 spindles • 16 LUNs • 32 data files of the same size • Transaction log file spread over 40 x 73GB 15K rpm spindles • Primary spread over 20 spindles • Mirror (BCV) spread over the other 20 spindles • 1 LUN • 1 log file • Tempdb spread over 22 x 73GB 15K rpm spindles • Primary spread over 11 spindles • Mirror (BCV) spread over 11 spindles • 1 LUN • 16 tempdb files • All RAID 1+0
Air Products SAP ERP configuration • Lessons learned on synchronous storage replication using EMC DMX: • The usual way of configuring an EMC DMX: • Create hypervolumes across the disks • Take hypervolumes of single spindles and build striped metavolumes
Air Products SAP ERP configuration • EMC offers concatenated or striped MetaVolumes • For nearly all of our ISVs, striped MetaVolumes are the way to go • The stripe size used by EMC is 960KB; there is no possibility to change it to a different value • A metavolume is then built from a sequence of 960KB stripe blocks (diagram)
Air Products SAP ERP configuration • A MetaVolume is shown to the OS as a LUN/volume • Now imagine transaction log writes into a MetaVolume in combination with EMC SRDF/S (synchronous storage replication) • A write of e.g. 15KB goes into the T-log file on the metavolume • EMC SRDF/S locks down the whole 960KB stripe while replicating the 15KB change • A second T-log write immediately afterwards, highly likely touching the same 960KB block, will be blocked for the time it takes to replicate the first change (diagram: 1st T-log write; 2nd T-log write blocked until the changes are replicated)
Air Products SAP ERP configuration • Result of the SRDF/S-inflicted scenario: • Observed I/O response times of 50ms or more for T-log writes • Could affect checkpoint and lazywriter performance as well • Solution: • Use 3rd-party volume manager software (e.g. Veritas) to build the MetaVolumes; use 64K as the stripe size • A 64K stripe size minimizes the blocking to a tolerable extent • Success with this measure at a few customers already • EMC is working on a better solution for this case
Air Products SAP ERP configuration • EMC SRDF/A is used very successfully at Air Products • Used to replicate data between Allentown (1h east of Philadelphia) and London (GB) • The goal of never falling behind by more than 2 minutes has been achieved so far • The line to London is shared with some other applications being replicated into the London DR site • Statement: if something like this works with EMC SRDF/A, it will work with SQL Server asynchronous Database Mirroring as well
SQL Server Best Practices Site • On TechNet • Get the real-world guidelines, expert tips, and rock-solid guidance to take your SQL Server implementation to the next level. • http://www.microsoft.com/technet/prodtechnol/sql/bestpractice/default.mspx • Contents • Technical Whitepapers • ToolBox • Top 10 Lists • Ask a Question • Other Resources • SQLCAT Blog: http://blogs.msdn.com/sqlcat/ • SQL ISV PM Blog: http://blogs.msdn.com/mssqlisv/ • SAP on SQL Server: http://blogs.msdn.com/saponsqlserver/
Thank you! Thank you for attending this session and the 2007 PASS Community Summit in Denver