5 Things to do with LUS, and then how to answer the next set of questions
CMUG Focus Group
John Popplewell, ICL
john.popplewell@icl.com
LUS
• Why do you use it?
• Absolute measure of resources used within a job
  • OCP, number of IO’s etc
• But it doesn’t give any measure of resource queuing
  • How long have I spent waiting for the processor?
• Excellent in a dedicated system, limited otherwise!
What can take resources in a job?
• OCP
• Queuing for OCP
• IO Time
• VSI Time
• Other
LUS(Breakdown)
• A job has 5 components; LUS can measure some of them and help you ESTIMATE the others:
  • OCP time
  • Queuing for OCP time
  • IO time
  • VSI time
  • Other
• For the following example I am going to assume that queuing for OCP is negligible; this is almost certainly not true in all cases
Deployment
• LUS(BREAKDOWN)
• run the job or job step
• LUS(BREAKDOWN)
LUS(Breakdown)
USAGE DURING PREVIOUS 14655 SECS
TIME(SEC)     : 1407    VS INTERRUPTS     : 732
MS OCC(MB-SEC): 45404   RIRO COUNT        : 0
INST(M-PLI)   : 7521    DRUM XFERS(PAGES) : 0
CURRENT PAGES : 912     LSUSPENDS         : 0
FS XFERS      : 682812  DISC XFERS(PAGES) : 0
AVERAGE PAGES : 3098
ACCESS LEVEL:  1  2  3  4   5  6  7  8  9 10 11 12 13-15
VSIS        :  0  0  1 40 651  0  0  5  0 35  0  0  0
DISC TRANSFERS: USER 681047  FILE ORGANISATION: 407  DIRECTOR(PUBLIC/LOCAL): 30/1328
TAPE TRANSFERS: USER 0
LUS(Breakdown)
• Calculate the OCP utilisation
  = time / elapsed time
  = 1407 / 14655
  = 9.6%
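The utilisation arithmetic above can be sketched in a few lines of Python, using the figures from the LUS(BREAKDOWN) output shown earlier:

```python
# OCP utilisation from the LUS(BREAKDOWN) figures above.
ocp_time_sec = 1407   # TIME(SEC) from LUS
elapsed_sec = 14655   # USAGE DURING PREVIOUS ... SECS

ocp_utilisation = 100.0 * ocp_time_sec / elapsed_sec
print(f"OCP utilisation: {ocp_utilisation:.1f}%")  # -> OCP utilisation: 9.6%
```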
LUS(Breakdown)
• Calculate IO Time
• Assuming no OCP queuing and no other! [synchronous IO, IDMS]
IO time = Elapsed time - OCP time
        = 14655 - 1407 = 13248 secs
IO percentage = (Elapsed time - OCP time) / Elapsed time
              = (14655 - 1407) / 14655 = 90.4%
LUS(Breakdown)
• Calculate the IO rate [determine if IO response time is an issue]
  = total IO’s / IO time
  = 682812 / 13248
  = 51.5 IO’s per second, or each IO takes ~19.4 milliseconds
• I expect IO’s to take
  • between 5 msecs (SA serial read)
  • to 40++ msecs (GD random access)
  • Most in the 17 => 30 msec range
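Under the same no-queuing assumption, the IO time, IO rate and average per-IO time follow directly from the FS XFERS count in the LUS output; a minimal sketch:

```python
# IO time and rate, assuming OCP queuing and "other" time are negligible.
elapsed_sec = 14655
ocp_time_sec = 1407
total_ios = 682812    # FS XFERS from the LUS output

io_time_sec = elapsed_sec - ocp_time_sec      # 13248 secs
io_rate = total_ios / io_time_sec             # IO's per second
ms_per_io = 1000.0 * io_time_sec / total_ios  # average ms per IO

print(f"{io_rate:.1f} IO/s, {ms_per_io:.1f} ms per IO")  # -> 51.5 IO/s, 19.4 ms per IO
```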
LUS(Breakdown)
• Set Quota to AVERAGE PAGES as recorded by LUS
  • 3000 is close enough
• Could also determine VSI rate
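The VSI rate mentioned above comes straight from the VS INTERRUPTS count in the same LUS output; a minimal sketch with the example figures:

```python
# VSI rate from the LUS(BREAKDOWN) figures above.
vs_interrupts = 732   # VS INTERRUPTS from the LUS output
elapsed_sec = 14655

vsi_rate = vs_interrupts / elapsed_sec
print(f"VSI rate: {vsi_rate:.3f} per second")  # -> VSI rate: 0.050 per second
```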
LUS(Breakdown)
• See if the INITIAL file sizes are large enough
  • If FILE ORGANISATION is small then INIT SIZES are OK, else increase INIT sizes
• Given that IO’s take ~17-30 msecs, “small” is a function of time
  • eg 1000 file organisation xfers take 20,000 msecs = 20 seconds
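The "is it small?" test can be made concrete by costing the file organisation xfers in elapsed time. A sketch using the FILE ORGANISATION count from the example LUS output and an assumed mid-range figure of ~20 msec per IO:

```python
# Cost file-organisation xfers in time, assuming ~20 ms per IO
# (mid-range of the 17-30 msec band).
file_org_xfers = 407   # FILE ORGANISATION count from the LUS output
ms_per_io = 20.0       # assumed mid-range IO response time
elapsed_sec = 14655

cost_sec = file_org_xfers * ms_per_io / 1000.0
print(f"{cost_sec:.1f} secs ({100.0 * cost_sec / elapsed_sec:.2f}% of elapsed)")
# -> 8.1 secs (0.06% of elapsed), i.e. "small" for this job
```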
What do you do next?
• So the 5 steps to enlightenment didn’t get you above the basement!
• What do you do next!!
• You need to get a handle on queuing
• How do you do that?
What do you do next?
• Elapsed_Time_Monitor (ETM)
• Gives you
  • OCP queuing
  • IO Time
  • Other (semaphores, long suspension, VSI etc)
Deployment
• LUS(BREAKDOWN)
• ETM
• run the job or job step
• ETM
• LUS(BREAKDOWN)
ETM output
• Formatted to the journal
  • similar to LUS
• CSV format
  • To the job journal
  • To a system journal
ETM output
• You can have it formatted to the journal
ETM output
• CSV format
  • Far better
• Direct a message type to a public journal
  • central place where all performance information is recorded
• Additional Disc information
  • Disc xfers
  • Tape xfers
  • Catalogue xfers
  • Dynamic extension xfers
  • Public xfers
  • Local xfers
Additional information in CSV format
• Disc xfers: the total number of Disc IO’s performed by the VM
• Tape xfers: the total number of Tape IO’s performed by the VM
• File org xfers: IO’s for dynamic extension, file attachment and detachment, file creation and deletion
• Public xfers: xfers to public journals, libraries on the public library list, xfers on public connections, message text files accessed via the public journal mechanism
• Local xfers: library controller index xfers, cathan, loader, WIP store
• Catalogue read xfers: physical reads on the catalogue, excluding cached xfers
• Catalogue write xfers: physical writes to the catalogue
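Once the records are in CSV form they are easy to process programmatically as well as in Excel. A sketch of reading one such record with Python's csv module; the field names, column order and job name here are assumptions for illustration only, since the real layout depends on the ETM version installed:

```python
import csv
import io

# Hypothetical ETM CSV record -- check the actual journal output
# before relying on these field names.
sample = io.StringIO(
    "job,disc_xfers,tape_xfers,file_org_xfers,public_xfers,local_xfers\n"
    "NIGHTBATCH,681047,0,407,30,1328\n"
)

for row in csv.DictReader(sample):
    total = sum(int(row[field]) for field in
                ("disc_xfers", "tape_xfers", "file_org_xfers",
                 "public_xfers", "local_xfers"))
    print(row["job"], "total xfers:", total)  # -> NIGHTBATCH total xfers: 682812
```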
Why CSV?
• Excel
• Numerous analysis tools; the two I like are:
  • Filter tables
  • high/low values
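The high/low-value check mentioned above is also easy to reproduce outside Excel; a sketch over a hypothetical set of per-job IO rates (the job names and figures are made up for illustration):

```python
# Pick out the high and low outliers from a set of measurements,
# mimicking the "high/low values" check. Data is illustrative only.
io_rates = {"JOBA": 51.5, "JOBB": 12.3, "JOBC": 88.9, "JOBD": 3.1}

highest = max(io_rates, key=io_rates.get)
lowest = min(io_rates, key=io_rates.get)
print(f"highest: {highest} ({io_rates[highest]} IO/s), "
      f"lowest: {lowest} ({io_rates[lowest]} IO/s)")
# -> highest: JOBC (88.9 IO/s), lowest: JOBD (3.1 IO/s)
```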
ETM
• How is it available?
• Not part of the standard product
• Talk to account support