How to Build High Performing .NET Applications
Vance Morrison, .NET Performance Architect
July 2009
Writing High Performance Apps
• The Bad News
  • High Performance Doesn't Just Happen! You Have to Design Them That Way!
  • Most Performance is Lost Very Early in the Design
  • Performance Lost Early Can Never be Regained
  • Most Designs Consider Perf Too Late in Development
• The Good News
  • Most (e.g. 95%) of Your App is Not Perf Critical
  • Generally it is Easy to Identify the 'Hot Spots'
Have a Plan
• ALL Applications Need a Performance Plan
• Perf Plans Can Be Easy!
  • Decide What Good Performance Means
  • Determine the Perf-Critical Parts
  • Determine if Good Perf is in Jeopardy
• If Perf is Not in Jeopardy, You Are Done!
  • You don't even need to write it down, necessarily
• Otherwise, you have more work to do.
Measure Early, Measure Often
• Can You Meet Your Perf Goals?
  • You MUST Measure to do this
• Evaluating Alternatives
  • You MUST Measure to do this
• Tradeoffs Against Other Goals
  • You MUST Measure to do this

If you aren't measuring, you are not designing high-performance software!
Example: ETW Event Log Parsing
• Problem:
  • Event Tracing for Windows (ETW) can create detailed logs
  • The logs are large (10 MB to > 10 GB)
  • How users may want to manipulate the data varies widely
• Solution:
  • We need a programmatic way of accessing the data (a data model)
  • That model needs to be 'user centric' (easy to understand)
  • It has to handle the wide variety of ETW data
  • It has to scale to the data set sizes involved

Design a High Performance Solution
Event Log Parsing Performance Plan
• What is Good Performance?
  • < 1 sec for small traces (10 MB), < 10 sec for medium traces (100 MB)
• What Are the Hot Spots?
  • Any code path that 'touches' the whole data stream
  • Disk I/O is likely to be a bottleneck
• Is Good Performance in Jeopardy?
  • Yes
• Is Good Performance Possible?
  • Probably: ~50 MB/sec of disk I/O throughput is possible
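A quick sanity check on that budget (using the ~50 MB/sec figure from the plan, and assuming the trace only needs to be read once): a 100 MB medium trace costs about 100 MB / 50 MB/sec = 2 sec of raw disk I/O, leaving roughly 8 of the 10 seconds for decoding and dispatch. The goal is plausible, provided the hot code path stays close to I/O speed.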
Event Log Parsing Design Issues
• Should a Database be Used to Store the Data?
  • No => Databases are relatively poor at ADDING large amounts of data
  • How did I know? I Measured it!
• Should In-Memory Structures be Used?
  • No => Excessive paging for large data sizes, plus the 32-bit address space limitation
• Should a Managed Language be Used?
  • Yes => The hot code path is not limited by managed code
• Should We Use a 'Classic' foreach() Event Model?
  • No => It requires an allocation per event, which is unnecessary (see the sketch below)
• Should Value Types or Objects be Used?
  • Objects; otherwise we are constrained to a callback model
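The foreach-vs-allocation point is easier to see in code. Here is a minimal sketch of the idea, not the actual parser: the names EtwLogParser, EventRecord, and the record layout are invented for illustration. Events are objects, but the parser allocates a single reusable record and overwrites it for each event, so subscribers see every event without a per-event allocation.

using System;
using System.IO;

class EventRecord                       // event data is a class (an object) ...
{
    public DateTime TimeStamp;
    public int ProcessId;

    internal void ParseFrom(BinaryReader reader)
    {
        // Hypothetical fixed-size record layout, purely for illustration.
        TimeStamp = DateTime.FromFileTime(reader.ReadInt64());
        ProcessId = reader.ReadInt32();
    }
}

class EtwLogParser
{
    public event Action<EventRecord> EventParsed;    // subscribers get every event

    public void Process(Stream log)
    {
        BinaryReader reader = new BinaryReader(log);
        EventRecord record = new EventRecord();       // ... but only ONE is allocated
        while (log.Position < log.Length)
        {
            record.ParseFrom(reader);                 // overwrite the same instance
            if (EventParsed != null)
                EventParsed(record);                  // handlers must copy what they keep
        }
    }
}

A handler that wants to keep event data beyond the callback has to copy it out; that is the usual price of reusing the object.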
OK, I Will Measure: How?
• One of the simplest ways is simply to measure elapsed time
• System.Diagnostics.Stopwatch is a high-resolution timer
  • Managed interface to QueryPerformanceCounter
  • Typically a resolution < 1 usec

Stopwatch sw = new Stopwatch();
sw.Start();
// Code you wish to measure
sw.Stop();
Console.WriteLine("Time = {0} msec", sw.Elapsed.TotalMilliseconds);
sw.Reset();    // Set up for the next measurement
Improving on Stopwatch
• For Microbenchmarks, Stopwatch is Inconvenient
  • Small times need to be 'amplified' by putting them in loops
  • Stopwatch overhead has to be subtracted out
  • 'First time' (JIT compilation) costs are not interesting
  • Need to run several times and gather statistics
  • Want an easy reporting mechanism for comparisons
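What those improvements look like in code, as a rough sketch (this is not MeasureIt; MicroBench, its parameters, and the reporting format are made up for illustration): warm up once to exclude JIT cost, amplify the work with a loop, subtract the empty-loop overhead, and average several runs.

using System;
using System.Diagnostics;

static class MicroBench
{
    public static void Report(string name, Action action, int iterations, int runs)
    {
        action();                                     // warm up: exclude JIT cost

        // Measure an empty loop so its overhead can be subtracted out.
        double overheadMs = TimeLoop(delegate { }, iterations);

        double totalMs = 0;
        for (int r = 0; r < runs; r++)
            totalMs += TimeLoop(action, iterations) - overheadMs;

        double avgUsecPerCall = (totalMs / runs) * 1000.0 / iterations;
        Console.WriteLine("{0}: {1:f4} usec/call (avg over {2} runs of {3} iterations)",
                          name, avgUsecPerCall, runs, iterations);
    }

    static double TimeLoop(Action action, int iterations)
    {
        Stopwatch sw = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
            action();
        sw.Stop();
        return sw.Elapsed.TotalMilliseconds;
    }
}

// Example use:
//   MicroBench.Report("string.Concat", delegate { string.Concat("a", "b"); }, 100000, 5);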
MeasureIt Microbenchmark Tool
• Available in an April 2008 MSDN article:
  • CLR Inside Out: Measure Early and Often for Performance
• A Single Executable
• Comes with a User's Guide
  • MeasureIt /UsersGuide
• Comes with its Source Code
• Comes with Built-In Benchmarks
  • Easy to add your own
Measuring More: ETW + XPERF
• Event Tracing for Windows (ETW) logging service
• The Windows kernel provides a wealth of information
  • Every process creation / destruction
  • Samples of what the CPU is doing every msec, per CPU
  • Every disk access
  • Every page fault / module load
  • Every file I/O
  • Full stack traces on kernel events
• The CLR adds its own events
  • When GCs happen, and how big the heap is before and after
  • When modules are loaded / unloaded
  • When JIT compilation happens
• You can add your own events too (sketched below)
• XPERF is the tool used to view all this wonderful information
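For the 'add your own events' point, here is a minimal sketch using System.Diagnostics.Eventing.EventProvider from .NET 3.5. The provider GUID is invented for illustration, and nothing is recorded unless a trace session (for example one started with xperf or logman) has enabled that provider.

using System;
using System.Diagnostics.Eventing;

class Program
{
    // Hypothetical provider id -- pick and document your own in real code.
    static readonly Guid MyProviderId = new Guid("d1a5c5e0-7c2b-4b5e-9a3f-123456789abc");

    static void Main()
    {
        using (EventProvider provider = new EventProvider(MyProviderId))
        {
            if (provider.IsEnabled())                      // is any session listening?
                provider.WriteMessageEvent("Starting the interesting work");

            // ... the interesting work goes here ...

            if (provider.IsEnabled())
                provider.WriteMessageEvent("Finished the interesting work");
        }
    }
}

Once such events are in a trace, XPERF can show them alongside the kernel and CLR events, which is what makes correlating application activity with CPU, disk, and GC behavior convenient.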
Summary
• High Performance Apps Don't Happen by Accident
  • You Need a Performance Plan
  • To Plan You Need Measurements
    • Can you hit your goals?
    • Evaluate different implementation techniques
    • Trade off performance against other goals in a rational manner
• MeasureIt is Useful for Micro-Benchmarking
  • Comes with useful .NET benchmarks 'out of the box'
  • Comes with its complete source code
  • Easy to add new benchmarks in an area of interest
• Measurements Need to be Validated
  • It is very easy to measure the wrong thing or misapply the result
  • Don't use numbers that don't 'make sense'. Figure them out!
To Learn More
• Articles
  • Vance Morrison's MSDN articles
  • The Fallacy of Premature Optimization (Randall Hyde, ACM.org)
  • MSDN Performance: Designing Distributed Applications
  • The Ten Rules of Performance
  • Improving .NET Application Performance and Scalability
• Blogs
  • Vance Morrison's Weblog
  • Rico Mariani's Performance Tidbits
  • CLR and Framework Perf Blog
  • Xperf Blog
• Tools
  • MeasureIt
  • Xperf performance analysis tool