A Client-centric Grid Knowledgebase. George Kola, Tevfik Kosar, and Miron Livny, University of Wisconsin-Madison. Cluster 2004, San Diego, CA, September 23rd, 2004
Grid Trivia • How many of you have submitted a job to Grid resources and never heard back from it? • How many of you got mad at the inconsistent behavior of some Grid resources? • Completing some jobs successfully while failing others… • Similar jobs performing completely differently… We did!
Goal: Prevent Unexpected Behavior in a Grid • Learn from experience and prevent failures from repeating in the future. • Causes of unexpected behavior in a Grid: • Black holes • Resources with: • Faulty hardware • Buggy or misconfigured software • Extremely slow computational sites • Memory leaks, etc.
Black holes • Definition: “A black hole is a region of spacetime from which nothing can escape, not even light.” • If you send a light beam into a black hole, you never hear back from it. • You only know about one after you have encountered it. Is it too late? • No. You should learn from experience…
Black holes in the Grid • Resources that accept jobs but never complete them • You send a job to a resource, but never hear back from it.
Black hole examples from real life: • In the WCER educational video processing pipeline: • A specific pool accepted and processed our jobs for a couple of hours, but evicted them before completion. • A machine accepted a job, but due to a memory leak it kept throwing “shadow exceptions” and retrying the job forever. • Some third-party (GridFTP, DiskRouter) transfers hung occasionally and never returned. • A machine caused errors because of a corrupted FPU: it successfully completed MPEG-1 encoding but failed MPEG-4.
Grid is good.. but not perfect.. • Heterogeneous resources • Multiple administrative domains • Spanning wide-area networks • Consists of commodity hardware and software Prone to network, hardware, software, and middleware failures! We cannot expect everything from the Grid or from Grid middleware!
Take the Ethernet Approach • Ethernet is a truly distributed (and very effective) access-control protocol for a shared medium: • The client is responsible for access control • The client is responsible for error detection • The client is responsible for fairness Keep track of job/resource performance and failure characteristics as observed by the client. Use the job/user log files collected at the client side to build a grid knowledgebase.
Grid Knowledgebase • Parse user/job log files • Load them into a database • Aggregate the experience of different jobs • Interpret it • Plan actions • Generate feedback to the scheduler as well as to the user
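A minimal sketch of the first step, assuming Condor-style user logs in which every event begins with a header line such as `005 (123.000.000) 09/23 12:34:56 Job terminated.`; the regex and the field names are illustrative assumptions, not the authors' actual parser.

```python
import re
from typing import Iterator

# Matches Condor-style event headers: a 3-digit event code, the job id
# in parentheses, a date, a time, and a free-text description.
EVENT_RE = re.compile(
    r"^(?P<code>\d{3}) \((?P<job>[\d.]+)\) "
    r"(?P<date>\d{2}/\d{2}) (?P<time>[\d:]+) (?P<desc>.*)$"
)

def parse_user_log(path: str) -> Iterator[dict]:
    """Yield one record per event header found in a client-side user log."""
    with open(path) as log:
        for line in log:
            m = EVENT_RE.match(line)
            if m:
                yield {
                    "job_id": m["job"],
                    "event_code": int(m["code"]),
                    "timestamp": f'{m["date"]} {m["time"]}',
                    "description": m["desc"].strip(),
                }
```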
[Architecture diagram: job descriptions flow from the planner into the job queue; the job scheduler works with the matchmaker to place jobs on Grid resources (personal computers, storage servers, clusters), producing job logs at the client.]
[Architecture diagram, extended with the grid knowledgebase: a log parser feeds the job logs into a database; a data miner analyzes it and drives an adaptation layer and a notification layer, which feed back into the planner and the job scheduler.]
Database Schema [State diagram of job events: Submit → Schedule → Execute, with possible Evicted, Suspend, and Un-suspend transitions along the way; a job that terminates normally with exit code 0 succeeded, while abnormal termination or a nonzero exit code means the job failed.]
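One plausible rendering of this event schema in SQLite; the table, column, and view names are assumptions for illustration (a real loader would map the numeric event codes from the parser above onto these named events).

```python
import sqlite3

conn = sqlite3.connect("knowledgebase.db")
conn.executescript("""
CREATE TABLE IF NOT EXISTS job_events (
    job_id     TEXT,
    event      TEXT CHECK (event IN (
                   'submit', 'schedule', 'execute', 'evicted',
                   'suspend', 'unsuspend',
                   'terminated_normally', 'terminated_abnormally')),
    event_time TEXT,
    exit_code  INTEGER    -- meaningful only for terminated_normally
);

-- Per the diagram: normal termination with exit code 0 is a success;
-- anything else (nonzero exit code, abnormal termination) is a failure.
CREATE VIEW IF NOT EXISTS job_outcome AS
SELECT job_id,
       CASE WHEN event = 'terminated_normally' AND exit_code = 0
            THEN 'succeeded' ELSE 'failed' END AS outcome
  FROM job_events
 WHERE event IN ('terminated_normally', 'terminated_abnormally');
""")
conn.commit()
conn.close()
```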
Difference from existing approaches • Client view • Uses only the job/user log files available at the client side • Many administrators do not want to share resource/scheduler log files • We do not need to know everything going on in the whole Grid • Scalable
What do we get? • Collect job execution time statistics • Average job execution time • Standard deviation • Fit a distribution • Detect and avoid black holes • For a normal distribution: • ~99.7% of job execution times should lie between (avg-3*stdev) and (avg+3*stdev) • ~95% of job execution times should lie between (avg-2*stdev) and (avg+2*stdev)
Setting Execution Time Limits • Avg = 7.8 min • Stdev = 3.17 min • For a normal distribution: • 99.7%: [0 – 17.31 min] • ~95%: [1.46 min – 14.14 min]
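A small sketch of how these limits follow from the collected statistics, assuming per-job run times (in minutes) have already been pulled from the knowledgebase for one class of jobs.

```python
from statistics import mean, stdev

def execution_time_limits(runtimes_min, k=3):
    """Return the (lower, upper) limits avg +/- k*stdev, floored at zero."""
    avg, sd = mean(runtimes_min), stdev(runtimes_min)
    return max(0.0, avg - k * sd), avg + k * sd

# With avg = 7.8 min and stdev = 3.17 min, as on the slide:
#   k = 3 -> [0, 17.31] min    (covers ~99.7% of a normal distribution)
#   k = 2 -> [1.46, 14.14] min (covers ~95%)
# A job still running past the upper limit is a black-hole candidate.
```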
What do we get? (2) • Identifying misconfigured machines • e.g. find the set of machines that fail jobs with I/O data sizes larger than 2 GB (OS limitations) • Identifying factors affecting job run time • Bug hunting • Job failures on certain inputs • Memory leaks • The scheduler logs the job's memory image size regularly
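As an illustration of the kind of query this enables, the sketch below looks for machines that repeatedly fail large-I/O jobs; the `job_runs` table and its columns are assumptions, not the paper's schema.

```python
import sqlite3

TWO_GB = 2 * 1024**3  # I/O size threshold from the slide

conn = sqlite3.connect("knowledgebase.db")
# Machines with the most failures on jobs whose I/O exceeds 2 GB.
suspects = conn.execute(
    """SELECT machine, COUNT(*) AS large_io_failures
       FROM job_runs
       WHERE io_bytes > ? AND outcome = 'failed'
       GROUP BY machine
       ORDER BY large_io_failures DESC""",
    (TWO_GB,),
).fetchall()
conn.close()
```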
Catching Memory Leaks [Plot: job memory image size (MB) vs. time; the steady growth over the run is the signature of a memory leak.]
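A leak like the one in the plot can be flagged directly from the periodically logged image sizes; a minimal sketch, assuming a list of (timestamp, image_size_mb) samples for one job, oldest first, with an arbitrary growth threshold.

```python
def looks_like_memory_leak(samples, min_growth_mb=100.0):
    """Flag a job whose memory image grows monotonically and substantially."""
    sizes = [mb for _, mb in samples]
    if len(sizes) < 3:
        return False  # too few samples to call it a trend
    monotonic = all(a <= b for a, b in zip(sizes, sizes[1:]))
    return monotonic and sizes[-1] - sizes[0] >= min_growth_mb
```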
What do we get? (3) • Application optimization • How long does each step of an application/pipeline take to execute? • Adaptation • Find the resources that take the least time to execute jobs from a particular class
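A hedged sketch of the adaptation query, again against the assumed `job_runs` table: rank resources by average run time for one job class and pick the fastest ("mpeg4_encode" is a hypothetical class name).

```python
import sqlite3

conn = sqlite3.connect("knowledgebase.db")
# The five machines with the lowest mean run time for this job class.
fastest = conn.execute(
    """SELECT machine, AVG(runtime_min) AS avg_runtime
       FROM job_runs
       WHERE job_class = ? AND outcome = 'succeeded'
       GROUP BY machine
       ORDER BY avg_runtime ASC
       LIMIT 5""",
    ("mpeg4_encode",),
).fetchall()
conn.close()
```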
Conclusions • View the Grid from the client side • Job/user log files as the main source of information • Aggregate the experience of different jobs and pass it on to future ones • Helps in: • Catching black holes • Identifying faulty/misconfigured resources • Bug tracking • Statistics collection • Future work: • Merge the experience of different clients
Thank you… For more information, contact: Tevfik Kosar http://www.cs.wisc.edu/~kosart kosart@cs.wisc.edu