Software Performance Engineering Steve Chenoweth CSSE 375, Rose-Hulman Tues, Oct 23, 2007
Today • Software Performance Engineering – this (with its own short HW, due Wed 11:55PM). • Assign HW6: Programming Style – Due Mon, Oct 29. • Team work time.
Software Performance Engineering • Closely related to yesterday’s Code Tuning • Issue is this: • On small or familiar projects, you can ignore performance until integration / system test, then tune to get it. • On large or unfamiliar projects, this doesn’t work at all! • Related topics – Responsiveness and scalability. • Many large systems are sold based on “capacity.” • What can you do, related to software construction, to make high performance happen?
So, what does work then? • Software performance engineering (SPE) – planning and controlling performance all the way through development. • A similar approach can be used on other quality attributes – security and availability, for example.
How’s SPE work? • Connie Smith, at CMU, invented SPE – Ref http://www.perfeng.com/papers/bestprac.pdf. • Usual methodology – • Start with performance & capacity requirements – numbers – call these “targets.” • Use a spreadsheet. • Put someone in charge. • During design, budget requirements into software & hardware. • Use these as constraints on development. • As you code, do unit testing, estimate & compare to budgets. • In system test, compare guesses to reality. • Refine your ability to guess over multiple releases. Req Des Test Code
SPE Example: Perf Requirements • You have a system that monitors economic transactions for Amazon.com. • Let’s look at critical use cases / scenarios: • It sees 60,000 transactions per hour (peak hour). • Each validated transaction updates status and activity information in the memory of a server. • You have five displays for people watching; they show exception transactions and statistics. • These screens should automatically update every 10 seconds. • Every 10 minutes the in-memory info is saved to disk, using SQL Server.
SPE Example: Architecture • Design looks something like this figure. • 60,000 trans/hr = 1,000 trans/min = 16.7 trans/sec = 60 ms/trans. • Naïvely assume each of the 3 functions on a trans takes equal time. • So, they each have to be done in 20 ms. • But there are also two performance “lumps” – • Updating the 5 displays every 10 sec, and • Writing the memory data to disk every 10 min! [Figure: Trans input stream (60k/hr) → Trans validate (20 ms?) → Update stats (20 ms?) → Find exceptions (20 ms?); plus Update displays (every 10 sec) and Put in DB (every 10 min)]
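As a quick sanity check on the arithmetic above, here is the same calculation in Python:

```python
# 60,000 transactions/hour, naively split across the 3 per-transaction functions.
trans_per_hour = 60_000
trans_per_sec = trans_per_hour / 3600     # ~16.7 trans/sec
ms_per_trans = 1000 / trans_per_sec       # 60 ms available per transaction
per_function_ms = ms_per_trans / 3        # naive 3-way split
print(ms_per_trans, per_function_ms)      # 60.0 20.0
```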
SPE Example: Question • Can you assume that these “lumps” can be tuned out of the system during testing? • Why or why not? [Figure: same architecture diagram as on the previous slide]
SPE Example: Answer • Not unless you’re used to dealing with such things already! • If not, better budget for them, too: • Divide the 60 ms/trans by 5, not by 3. • Each of the 3 original functions shown gets 12 ms/trans. • Display refresh gets 1/5 of every CPU second, or 200 ms. So 5 displays refreshed every 10 sec = 1 every 2 sec. Each display refresh gets a 400 ms budget. • DB write gets 1/5 of every CPU second, or 200 ms, also. Over 10 min, it then gets 200 * 60 * 10 ms = 2 min of CPU time. But, this had better be distributed evenly! [Figure: revised budgets – Trans validate (12 ms), Update stats (12 ms), Find exceptions (12 ms), Update displays (200 ms/sec), Put in DB (2 min per 10-min interval)]
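The revised split can be checked the same way; this sketch just redoes the slide’s arithmetic:

```python
# Redo the split with the two "lumps" counted as budget consumers (5 ways, not 3).
ms_per_trans = 60.0
per_function_ms = ms_per_trans / 5        # 12 ms/trans for each of the 3 functions

# Display refresh: 1/5 of every CPU second = 200 ms/sec of CPU.
# 5 displays per 10 sec = one refresh every 2 sec on average.
display_refresh_ms = 200 * 2              # 400 ms budget per refresh

# DB write: another 1/5 of every CPU second, accumulated over 10 minutes.
db_write_ms = 200 * 60 * 10               # 120,000 ms = 2 min of CPU per interval
print(per_function_ms, display_refresh_ms, db_write_ms / 60_000)   # 12.0 400 2.0
```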
But… • This is still optimistic! • It assumes you have all the CPU time for your application, • That your transactions aren’t “lumpy,” and • That the system won’t grow. • A more conservative start would be to cut all the targets on the previous slide in half.
Result: Budgets for each programmer, during construction • I’m doing the input validation feature. • I know from day 1 that it has to run in a budget of 50% * 12 ms = 6 ms on each transaction. • I can design to that. • I can test that, at least tentatively, even in unit testing! This will give me estimates to see if I’m “close.” • To do that, I need to “instrument” my code. How? One approach is sketched below. • Real results feed back to the person in charge of the performance spreadsheet. • Their spreadsheet shows whether we still meet the requirements targets. • They are also involved in system test, where we get the real results. • We have an informed engineering guess, at all times, about “whether we can make it” on performance.
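One way to answer the “How?” is to wrap the unit under test with a timer. This is only a sketch: validate_transaction is a hypothetical stand-in for the real routine, and the 6 ms budget comes from the slide above.

```python
import time

def validate_transaction(trans):
    ...  # hypothetical stand-in for the real input-validation feature

def timed_run(trans_batch, budget_ms=6.0):
    """Run the unit under test over a batch and compare the worst case to budget."""
    worst_ms = 0.0
    for trans in trans_batch:
        start = time.perf_counter()
        validate_transaction(trans)
        elapsed_ms = (time.perf_counter() - start) * 1000
        worst_ms = max(worst_ms, elapsed_ms)
    print(f"worst case {worst_ms:.3f} ms vs budget {budget_ms} ms")
    return worst_ms <= budget_ms           # feed this back to the spreadsheet owner

timed_run([None] * 100)                    # tentative estimate, even in unit testing
```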
A real SPE spreadsheet has multiple dimensions • We looked at CPU time. • Other things often budgeted and tracked through development include “whatever may be of concern,” like: • Memory space • Disk I/O time • Communications time
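The earlier spreadsheet sketch extends naturally to more dimensions; again, every component name and number here is made up:

```python
# Budgets and measurements per component, across several resource dimensions.
budgets = {
    "update_stats": {"cpu_ms": 12, "mem_kb": 256, "disk_io_ms": 0},
    "put_in_db":    {"cpu_ms": 2,  "mem_kb": 512, "disk_io_ms": 40},
}
measured = {
    "update_stats": {"cpu_ms": 10, "mem_kb": 300, "disk_io_ms": 0},
    "put_in_db":    {"cpu_ms": 2,  "mem_kb": 480, "disk_io_ms": 55},
}

for component, dims in budgets.items():
    for dim, budget in dims.items():
        if measured[component][dim] > budget:
            print(f"{component}: {dim} over budget ({measured[component][dim]} > {budget})")
```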
Related considerations • Real risk determines how much time to spend on getting performance (or any other quality attribute) right. • Focus on critical use cases / scenarios. • Discover competing needs for resources like CPU time. • Make initial guesses about how to divide these, then cycle back and improve on those guesses. • Start simply. • At some point in refinement, however, you have to consider queuing effects in a more sophisticated way, to be more accurate (see the sketch below). • Someone needs to be in charge of the spreadsheet, and of making performance “happen.” • Getting good numbers to start with is a side benefit that falls out of this process. • Most performance successes are due to good design.
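To see why queuing effects eventually matter, here is a standard M/M/1 approximation (mean response time R = S / (1 − ρ), where S is service time and ρ is utilization); the 12 ms service time is borrowed from the example budgets:

```python
# Response time grows nonlinearly with utilization -- one reason the
# "conservative" 50% CPU target is a sensible starting point.
service_ms = 12.0                          # per-transaction service time
for utilization in (0.5, 0.8, 0.9, 0.95):
    response_ms = service_ms / (1 - utilization)   # M/M/1 mean response time
    print(f"utilization {utilization:.0%}: mean response {response_ms:.0f} ms")
# 50% -> 24 ms, 80% -> 60 ms, 90% -> 120 ms, 95% -> 240 ms
```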
One skill required for this - Estimating • You need to practice guessing, then • Have a way to check if you’re right! • Short example exercise – • How many piano tuners are there in Terre Haute? [Image: piano, from www.pianoacoustics.com]
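For the exercise, a Fermi-style estimate might look like the sketch below; every number in it is an explicit assumption, which is the point of the practice.

```python
# Fermi estimate: piano tuners in Terre Haute. All inputs are guesses to check.
population = 60_000                    # rough population of Terre Haute
pianos = population / 30               # assume ~1 piano per 30 people
tunings_per_year = pianos * 1          # assume each piano tuned once a year
jobs_per_tuner = 4 * 250               # ~4 tunings/day, ~250 working days/year
tuners = tunings_per_year / jobs_per_tuner
print(round(tuners, 1))                # ~2 tuners
```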
HW activity – • Take the architecture we already used as an example. • Now assume that we get the following feedback from the designers of each of the subsystems shown: • The “Trans validate” code is now estimated as only half as complex as the code in the “Update stats” and “Find exceptions” routines. (Time complexity, that is.) • Saving all the statistics to the DB must be one “synchronized” action every 10 min. This is now estimated to take 5 sec. The remaining DB tasks, however, can be distributed evenly over each 10 min interval. • Reallocate the budgets accordingly, assuming we want to be “conservative” and only use 50% of available CPU time! • Make clear any guesses or assumptions you made. • Turn this in as the “Software Performance HW” assignment, by 11:55 PM, Wed, Oct 24.