A Holistic Approach To Performance Tuning Oracle Applications Release 11 and 11i
Andy Tremayne, Applications Performance Group
Agenda
• The Methodology
• Problem Definition
• Project Documentation Summary (PDS)
• Benchmark Information
• Tuning The Technology Stacks
  • The Client
  • The Middle Tier
  • The Database
  • The Server
  • The SQL Access Paths
  • The Network
• Tuning Applications
The Methodology
• Problem definition methods
  • Necessarily abstract and complex
  • Deployment requires specialist skills
• This methodology is
  • Very simple, fast and generic
  • Based on best practice and a real-world approach
• The systematic approach
  • Focuses investigative attention and the tuning effort
  • Enables early identification of the resources needed
  • You do not need to be an expert
Three Key Areas
1. Accurately define the problem
  • Define the problems and their characteristics
  • Key to identifying the source of performance issues
  • Focuses the tuning effort
2. Define and agree on performance target levels
  • To support the critical business functions
  • To provide the definition of success
3. Understand the system performance factors
The Problem Definition Stages
• Characterise the problem
  • What is the nature of the problem?
  • Ensure you fully understand it; "slow database" is too ambiguous
• Specify the locations
  • Use converse questions
  • List the differences between them
  • Compare data routes, middle tiers...
• Note the times of day / dates
  • Link them to events, e.g. the business cycle
  • Identify the origin of the problem
  • Identify whether it is still a problem when run in isolation
• Establish the scope
  • How many users are affected?
  • How much functionality? How many applications?
  • Look for commonality between the problems
• Define the relative importance
  • When does it need to be fixed by?
  • Is there a viable workaround?
  • Consider the impact on the business
• The goal: define the problem quickly and accurately, identify the appropriate resources, and solve the problem quickly
The "When" Stage Checklist
• When does the problem occur?
  • Identify an underlying trend and correlate it with system events and changes
• Ask the questions:
  • Has it always been slow?
  • Did it change suddenly?
  • Is it slow all the time?
  • Is it slow when the system is heavily used?
  • Is it getting slower?
Project Documentation Summary (PDS)
• The PDS
  • Structures and summarizes the salient performance indicators
  • Spans the entire system
  • Identifies areas to monitor
  • Ensures that the investigative focus remains impartial
Project Documentation Summary
• Documents
  • Oracle and Customer Contacts
  • Problem Definition
  • System Environment
  • Oracle and Applications Environments
  • Web/Forms Server Environments
  • Client and Network
  • Process Information
It should only take 3 hours to complete
PDS - Process Information
• The PDS provides a discussion document
• It breaks down technical and political barriers
• It helps build a business case and justify change
When will your next emergency occur? What is your strategy?
Release 11i Browser Benchmarks
• IE 6 appears consistently best (except for one or two results)
• All results for Windows NT, 2000 and XP are within 3%
• Always perform your own tests!
Other benchmarks have shown Windows NT is faster on 300MHz and higher machines
PC Speed vs Latency Benchmark

                         Latency:   6ms      300ms    1400ms
  133MHz Win 95, 48MB               66 s     67.7 s   80 s
  233MHz Win 95, 48MB               30 s     36.5 s   53 s
  300MHz Win NT, 128MB              25.5 s   29.4 s   35 s
  400MHz Win NT, 128MB              21.4 s   26.5 s   35 s

• A little dated now
• JInitiator provides better times
• Timings are very comparable with the browser benchmark
CPU speed compensates for high-latency situations
Generic PC Tuning - Still very common!
• Windows screen savers
  • Can use up to 8MB of memory
• Video
  • Up-to-date drivers can provide a 20% improvement
  • Using 256 colors saves 0.5 - 0.75 sec when opening a form
• Floating menu bars
  • Frequently 'polled', typically wasting 10% of the CPU
  • A 300MHz PC effectively becomes a 266MHz PC
  • Keyboard shortcuts are a quicker, easier alternative
The paper contains a complete tuning checklist
What Next?
[Diagram: Tuned Client → LAN → Application Server(s) → Switch → Database Server]
• Benchmark and compare with the targets
• If the targets are achieved, move the PC away from the server, one hop at a time
Middle Tier Profiles
• Middle tier server profile
  • Memory is a greater concern than CPU
• Database server profile
  • I/O and CPU are more of a concern than memory
• Separate machines
  • Scalability
  • Simpler profile management
Keeping multiple forms open on a poorly tuned system only exacerbates memory problems, slowing response times even further
Java Virtual Machine Questions
• How many JVMs, and how many users per JVM?
  • Between 20 and 50 "active" users per JVM
  • Hundreds of concurrent users
  • Depends on the CPU speed, the Applications mix and the users
  • Perform a saturation test for your particular Applications mix and business
• How much memory?
  • -ms128m and -mx256m, up to 400MB
  • Set the soft limit to 80% of the hard limit
  • Set "ulimit -n 1024" for the Apache and JServ processes
Optimizing Apache
• The current default for Apache is HTTP 1.1
  • HTTP 1.1 has a "keep alive" session feature
  • Keep-alive messages are sent between the browser and the server
• Force Apache to use HTTP v1.0 for affected browsers
  • A performance boost for Internet Explorer web users
  • From 26 seconds down to 5 seconds when opening some dialog boxes
  • In the httpd.conf file:
    BrowserMatch "MSIE 5\.[^;]*; " nokeepalive downgrade-1.0 force-response-1.0
Always test and verify changes yourself
FRD/Performance Collector
• Performance Collector (Forms 6i)
  • Specify "record=performance"
  • Analyze the output using f60parse.pl
  • Four main screens show where the most time is spent: Client, Network, DB or Forms Server timings
Getting Things Right for 11i
• An 8KB block size is strongly recommended for 8i/11i (a quick check is shown below)
  • Don't use 4KB unless constrained by the platform!
• Increases performance by
  • 40% for batch programs and large data-sort queries
  • 4-5% for light data-sort queries
Remove extraneous settings and events from the database initialization files!
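As a minimal sketch (not taken from the original slides), the block size of an existing instance can be confirmed from V$PARAMETER; remember that db_block_size is fixed at database creation and cannot simply be changed later:

-- Current database block size in bytes; 8192 (8KB) is the recommended value
select value
from   v$parameter
where  name = 'db_block_size';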
Statspack - Introduction
• Introduced in 8.1.6
• Captures performance data for new database features
• Summary page and improved drill-down
• Captures high-load and some literal SQL
• Backports are available (unsupported)
• "Diagnosing Performance With Statspack" papers
  • http://technet.oracle.com/deploy/performance/content.htm
• Results are stored in tables
  • Each snap is identified by a Snapshot_id
Statspack - With Applications
• Always use the 8.1.7 version
  • Better space management and reporting
  • Use the special versions that include the module name
  • No module name denotes custom code
  • MetaLink Note No. 153507.1
• Use Snap Level 5
• Oracle 9i
  • The new Snap Level 6 collects execution plans
• Use the modified parameters with Applications
• One hour is the best unit of work (see the snapshot sketch below)
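A hedged sketch of the hourly snapshot routine, assuming the standard 8.1.7 Statspack installation (PERFSTAT schema and the spreport.sql script under $ORACLE_HOME/rdbms/admin):

-- As PERFSTAT: take a level 5 snapshot at the start of the hour of interest
execute statspack.snap(i_snap_level => 5);

-- ...one hour of representative load later, take the closing snapshot...
execute statspack.snap(i_snap_level => 5);

-- Report on the interval between the two snapshot ids
@?/rdbms/admin/spreport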
Statspack - High Load SQL

  Buffer Gets   Executions   Gets per Exec   % Total   Hash Value    Module
  -----------   ----------   -------------   -------   -----------   --------
  35,087,907    43           815,997.8       5.0       2683031174    WSHRDPAK

  SELECT MSI.SEGMENT1 c_item_flex, det.inventory_item_id c_inv_item_id,
         msi.description c_item_description, det.customer_item_id c_customer_item_id,
         det.source_header_number c_so_number, det.source_line_number c_so_line_number,
         det.cust_po_number c_po_number, sum(round(nvl(det.requested_quanti

The Module column identifies the Applications module name (here WSHRDPAK).
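On a running system a similar picture can be taken straight from the shared pool rather than from the Statspack tables; a minimal sketch against V$SQLAREA (the 1,000,000 buffer-gets threshold is arbitrary):

-- High-load SQL by buffer gets, with the module that issued it
select module, hash_value, buffer_gets, executions,
       round(buffer_gets / greatest(executions, 1), 1) gets_per_exec
from   v$sqlarea
where  buffer_gets > 1000000
order  by buffer_gets desc;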
Gathering Statistics
• Release 11i - only ever use Gather Schema Statistics (sketch below)
  • 10% sample
• Prior to Release 11i
  • Analyze using an adequate sample size
  • Improves batch performance by around 40%, especially in AR, PA and concurrent processing
  • Inadequate sample sizes may reduce performance by up to 40%
NEVER ANALYZE the SYS schema!!!
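For illustration only, a hedged sketch of both approaches; the FND_STATS call mirrors what the Gather Schema Statistics concurrent program runs, but its exact parameter list varies by patch level, so treat the signature as an assumption:

-- Release 11i: gather statistics for one Applications schema at a 10% sample
execute fnd_stats.gather_schema_stats('AR', 10);

-- Prior to Release 11i: analyze with an adequate sample size (example AR table)
analyze table AR.RA_CUSTOMER_TRX_ALL estimate statistics sample 10 percent;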
Package Pinning Strategy
• Do not pin
  • Small, infrequently used packages
  • Those used by long-running batch processes or reports
• Do pin
  • A core set of SYS and FND packages
  • A core set for the products you are using
  • Any large (> 50KB) packages that are constantly aged out
• Check the size and number of executions in V$DB_OBJECT_CACHE (query below)
• Monitor X$KSMLRU
  • The number of objects flushed (KSMLRNUM)
  • The displaced packages and PL/SQL
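A hedged sketch of finding candidates and pinning one with DBMS_SHARED_POOL; the 50,000-byte threshold and the package chosen are illustrative, not a prescribed pin list:

-- Candidate packages: large, frequently executed, not yet kept
select owner, name, type, sharable_mem, executions, kept
from   v$db_object_cache
where  type in ('PACKAGE', 'PACKAGE BODY')
and    sharable_mem > 50000
order  by sharable_mem desc;

-- Pin a core package so it is never aged out of the shared pool
execute dbms_shared_pool.keep('APPS.FND_GLOBAL');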
Literal SQL
• Literal SQL
  • SELECT ... FROM ... WHERE order_no = 123456
• Literal SQL is fine for batch processing
  • Statements are run infrequently
  • It enables the CBO to accurately cost the use of an index
• Literal SQL cannot be shared
  • It severely limits OLTP scalability and throughput
  • The concern is with columns holding large sets of values, e.g. order_no
  • Fix by converting literal SQL to use bind variables (example below)
It becomes a real problem when statements use large amounts of shared memory, e.g. at 1MB each, 100 statements = 100MB
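A minimal before-and-after sketch in SQL*Plus; the table and column names are hypothetical:

-- Literal SQL: a new parse and a new shared pool entry for every order number
select header_id from so_headers where order_number = 123456;

-- Bind variable version: one shared cursor reused for every order number
variable ord number
execute :ord := 123456
select header_id from so_headers where order_number = :ord;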
Raw Partitions
• Raw devices are required for OPS and RAC
• Generally much more difficult to administer
• In theory, bypassing file system buffering improves disk I/O
• Some conversions from UNIX file systems to raw devices have improved performance by 10%-15%, BUT
  • The conversion involves a database export and import
  • Which itself eradicates row chaining and rebuilds and rebalances indexes
Consider using raw devices only if disk I/O is the only remaining performance bottleneck and cannot be resolved any other way.
Stripe Size (simplified)
[Diagram: a single Oracle 64KB read maps onto one 64KB operating system stripe, i.e. 1 x 64KB disk read]
The benefit depends on the amount of I/O. Use the Statspack I/O figures (and the read-size query below).
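To compare the stripe size with the largest single read Oracle will issue for full scans, multiply db_block_size by db_file_multiblock_read_count; a minimal sketch, assuming nothing beyond standard V$PARAMETER:

-- Maximum multiblock (full scan) read size in bytes, to compare with the stripe size
select to_number(max(decode(name, 'db_block_size', value))) *
       to_number(max(decode(name, 'db_file_multiblock_read_count', value))) as max_read_bytes
from   v$parameter
where  name in ('db_block_size', 'db_file_multiblock_read_count');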
Measuring Disk I/O
• When using disk arrays
  • Operating system utilities are limited
  • Specialist software is sometimes needed
• Instead, use the Oracle figures
  • This is the time Oracle sees for an I/O
  • The FileStat figures in the UtlEstat or Statspack report (query below)
  • <20ms read, <30ms write (maximum!) on non-striped disks
  • <10ms on a striped disk array
Only the Production instance should be running on the Production server
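The same figures can be pulled from V$FILESTAT directly; a hedged sketch, remembering that READTIM and WRITETIM are in centiseconds (with timed_statistics set) and are cumulative since instance startup, so Statspack deltas over a known interval are preferable:

-- Average read and write times per datafile, converted to milliseconds
select d.name,
       round(f.readtim  * 10 / greatest(f.phyrds,  1), 1) as avg_read_ms,
       round(f.writetim * 10 / greatest(f.phywrts, 1), 1) as avg_write_ms
from   v$filestat f, v$datafile d
where  f.file# = d.file#
order  by avg_read_ms desc;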
A Stuck Performance Issue?
• Add up the timings in the trace: does the total equal the real-world time?
• Always keep the raw trace file!
Mapping A User Session
• Sign-On Auditing - set at User level or higher
  • Records each time a person signs on to an application
• The Applications help screen contains:
  • AUDSID (maps to V$SESSION.AUDSID)

select U.USER_NAME
from   FND_USER U, FND_LOGINS L, V$SESSION V
where  U.USER_ID = L.USER_ID
and    L.SPID    = V.PROCESS
and    V.AUDSID  = &1;

USER_NAME
------------------------------
OPERATIONS
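Once the session has been identified, a commonly used (though unsupported) way to trace it from another session is DBMS_SYSTEM, using the SID and SERIAL# from V$SESSION; the values below are illustrative:

-- Switch SQL trace on for the identified session (SID 12, SERIAL# 345 here)
execute dbms_system.set_sql_trace_in_session(12, 345, TRUE);

-- ...have the user reproduce the problem, then switch tracing off...
execute dbms_system.set_sql_trace_in_session(12, 345, FALSE);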
Using Event 10046
• Enable trace for a forms session or report, within a stored procedure or in a concurrent program
• Set timed_statistics = true
• Set event 10046 for more advanced troubleshooting (an ALTER SESSION example follows)

  Level       Included Information
  ---------   -------------------------------------------------------
  null or 1   Same as standard SQL Trace functionality
  4           As level 1 + bind variable information
  8           As level 1 + session wait information
  12          As level 1 + bind variable and session wait information

• System: Initialization SQL Statement - custom profile option:
  'ALTER SESSION SET EVENTS ='||''''||' 10046 TRACE NAME CONTEXT FOREVER, LEVEL 4 '||''''
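For a session you control directly, the same event can be set and cleared with ALTER SESSION; a minimal example at level 12:

alter session set timed_statistics = true;
alter session set events '10046 trace name context forever, level 12';

-- ...run the statements to be traced...

alter session set events '10046 trace name context off';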
Bandwidth and Latency
• Bandwidth
  • The amount of data that the network can transfer
  • The size of the pipe / how many packets can be sent at once
• Latency
  • The delay between the instant a request is made for data and the start of the transfer
  • Influenced by both device and link latency
Understanding the Network
• Create a detailed network diagram including:
  • The location and number of all Oracle users
  • Every device from the client to the database server
  • The bandwidth and latency of all links and devices
• Use a full-size packet: ping -l1472 -n50 <host>
• To identify a performance problem you need only basic knowledge (see the round-trip query below)
Note! TraceRoute shows the optimal route, which is not necessarily the actual route.
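Because every SQL*Net round trip pays the full network latency, it helps to know how many round trips a session makes; a hedged sketch against V$SESSTAT and V$STATNAME (the SID is illustrative):

-- Round trips between the client and the server for one session
select s.sid, n.name, s.value
from   v$sesstat s, v$statname n
where  s.statistic# = n.statistic#
and    n.name = 'SQL*Net roundtrips to/from client'
and    s.sid = 42;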