Program Analysis and Design Conformance

An overview of program analysis and design conformance techniques: memory disambiguation for multithreaded C programs, pointer analysis, automatic parallelization, verification of safety properties such as data race freedom, and memory management optimizations.

Presentation Transcript


  1. Program Analysis and Design Conformance Martin Rinard Laboratory for Computer Science Massachusetts Institute of Technology

  2. Research Overview Program Analysis • Commutativity Analysis for C++ Programs [PLDI96] • Memory Disambiguation for Multithreaded C Programs • Pointer Analysis [PLDI99] • Region Analysis [PPoPP99, PLDI00] • Pointer and Escape Analysis for Multithreaded Java Programs [OOPSLA99, PLDI01, PPoPP01]

  3. Research Overview Transformations • Automatic Parallelization • Object-Oriented Programs with Linked Data Structures [PLDI96] • Divide and Conquer Programs [PPoPP99, PLDI00] • Synchronization Optimizations • Lock Coarsening [POPL97,PLDI98] • Synchronization Elimination [OOPSLA99] • Optimistic Synchronization Primitives [PPoPP97] • Memory Management Optimizations • Stack Allocation [OOPSLA99,PLDI01] • Per-Thread Heap Allocation

  4. Research Overview Verification of Safety Properties • Data Race Freedom [PLDI00] • Array Bounds Checks [PLDI00] • Correctness of Region-Based Allocation [PPoPP01] • Credible Compilation [RTRV99] • Correctness of Dataflow Analysis Results • Correctness of Standard Compiler Optimizations

  5. Talk Overview • Memory Disambiguation • Goal: Verify Data Race Freedom for Multithreaded Divide and Conquer Programs • Analyses: • Pointer Analysis • Accessed Region Analysis • Experience integrating information from the developer into the memory disambiguation analysis • Role Verification • Design Conformance

  6. Basic Memory Disambiguation Problem • *p = v; • (write v into the memory location that p points to) • What memory locations may *p=v access? Without Any Analysis: *p=v may access any memory location

  7. Basic Memory Disambiguation Problem • *p = v; • (write v into the memory location that p points to) • What memory locations may *p=v access? With Analysis: *p=v may access only a few specific locations and provably does not access any of the others
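
(A minimal C sketch, mine rather than the speaker's, of the situation on slides 6 and 7; example, flag, a, b, c, and v are made-up names.)

  void example(int flag, int v) {
      int a = 0, b = 0, c = 0;
      int *p = flag ? &a : &b;   /* pointer analysis: p may point to a or to b */
      *p = v;                    /* so *p = v may write a or b ...             */
      c = c + 1;                 /* ... and provably never writes c            */
  }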

  8. Static Memory Disambiguation Analyze the program to characterize the memory locations that statements in the program read and write. A fundamental problem in program analysis with many applications.

  9. Application: Verify Data Race Freedom • Given the parallel program *p = v1 || *q = v2, verify that the two writes go to different memory locations • Program does this: the parallel statements access disjoint locations • NOT this: both statements write the same location (a data race)
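
(A sketch of the property being verified, written in the Cilk-style C the later slides use; update, write1, and write2 are made-up names, and the claim holds only under the stated assumption.)

  void write1(int *p, int v1) { *p = v1; }
  void write2(int *q, int v2) { *q = v2; }

  void update(int *p, int *q, int v1, int v2) {
      /* Race free only if p and q point to disjoint locations; then the    */
      /* two spawned writes are independent and every schedule produces the */
      /* same final memory state.                                           */
      spawn write1(p, v1);
      spawn write2(q, v2);
      sync;
  }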

  10. Example - Divide and Conquer Sort 7 4 6 1 3 5 8 2

  11. Example - Divide and Conquer Sort • Divide: 7 4 | 6 1 | 3 5 | 8 2

  12. Example - Divide and Conquer Sort • Conquer (sort each quarter): 4 7 | 1 6 | 3 5 | 2 8

  13. Example - Divide and Conquer Sort • Combine (merge sorted quarters into halves): 1 4 6 7 | 2 3 5 8

  14. Example - Divide and Conquer Sort • Combine (merge sorted halves): 1 2 3 4 5 6 7 8

  15. Divide and Conquer Algorithms • Lots of Generated Concurrency • Solve Subproblems in Parallel

  16. Divide and Conquer Algorithms • Lots of Recursively Generated Concurrency • Recursively Solve Subproblems in Parallel

  17. Divide and Conquer Algorithms • Lots of Recursively Generated Concurrency • Recursively Solve Subproblems in Parallel • Combine Results in Parallel

  18. “Sort n Items in d, Using t as Temporary Storage” • void sort(int *d, int *t, int n) • if (n > CUTOFF) { • spawn sort(d,t,n/4); • spawn sort(d+n/4,t+n/4,n/4); • spawn sort(d+n/2,t+n/2,n/4); • spawn sort(d+3*(n/4),t+3*(n/4),n-3*(n/4)); • sync; • spawn merge(d,d+n/4,d+n/2,t); • spawn merge(d+n/2,d+3*(n/4),d+n,t+n/2); • sync; • merge(t,t+n/2,t+n,d); • } else insertionSort(d,d+n);

  19. “Sort n Items in d, Using t as Temporary Storage” • void sort(int *d, int *t, int n) • if (n > CUTOFF) { • spawn sort(d,t,n/4); • spawn sort(d+n/4,t+n/4,n/4); • spawn sort(d+n/2,t+n/2,n/4); • spawn sort(d+3*(n/4),t+3*(n/4),n-3*(n/4)); • sync; • spawn merge(d,d+n/4,d+n/2,t); • spawn merge(d+n/2,d+3*(n/4),d+n,t+n/2); • sync; • merge(t,t+n/2,t+n,d); • } else insertionSort(d,d+n); Divide array into subarrays and recursively sort subarrays in parallel

  20. 7 4 6 1 3 5 8 2 “Sort n Items in d, Using t as Temporary Storage” • void sort(int *d, int *t, int n) • if (n > CUTOFF) { • spawn sort(d,t,n/4); • spawn sort(d+n/4,t+n/4,n/4); • spawn sort(d+n/2,t+n/2,n/4); • spawn sort(d+3*(n/4),t+3*(n/4),n-3*(n/4)); • sync; • spawn merge(d,d+n/4,d+n/2,t); • spawn merge(d+n/2,d+3*(n/4),d+n,t+n/2); • sync; • merge(t,t+n/2,t+n,d); • } else insertionSort(d,d+n); Subproblems Identified Using Pointers Into the Middle of the Array: d, d+n/4, d+n/2, d+3*(n/4)

  21. 4 7 1 6 3 5 2 8 “Sort n Items in d, Using t as Temporary Storage” • void sort(int *d, int *t, int n) • if (n > CUTOFF) { • spawn sort(d,t,n/4); • spawn sort(d+n/4,t+n/4,n/4); • spawn sort(d+n/2,t+n/2,n/4); • spawn sort(d+3*(n/4),t+3*(n/4),n-3*(n/4)); • sync; • spawn merge(d,d+n/4,d+n/2,t); • spawn merge(d+n/2,d+3*(n/4),d+n,t+n/2); • sync; • merge(t,t+n/2,t+n,d); • } else insertionSort(d,d+n); Sorted Results Written Back Into the Input Array: d, d+n/4, d+n/2, d+3*(n/4)

  22. “Merge Sorted Quarters of d Into Halves of t” (d: 4 7 | 1 6 | 3 5 | 2 8 → t: 1 4 6 7 | 2 3 5 8) • void sort(int *d, int *t, int n) • if (n > CUTOFF) { • spawn sort(d,t,n/4); • spawn sort(d+n/4,t+n/4,n/4); • spawn sort(d+n/2,t+n/2,n/4); • spawn sort(d+3*(n/4),t+3*(n/4),n-3*(n/4)); • sync; • spawn merge(d,d+n/4,d+n/2,t); • spawn merge(d+n/2,d+3*(n/4),d+n,t+n/2); • sync; • merge(t,t+n/2,t+n,d); • } else insertionSort(d,d+n);

  23. “Merge Sorted Halves of t Back Into d” (t: 1 4 6 7 | 2 3 5 8 → d: 1 2 3 4 5 6 7 8) • void sort(int *d, int *t, int n) • if (n > CUTOFF) { • spawn sort(d,t,n/4); • spawn sort(d+n/4,t+n/4,n/4); • spawn sort(d+n/2,t+n/2,n/4); • spawn sort(d+3*(n/4),t+3*(n/4),n-3*(n/4)); • sync; • spawn merge(d,d+n/4,d+n/2,t); • spawn merge(d+n/2,d+3*(n/4),d+n,t+n/2); • sync; • merge(t,t+n/2,t+n,d); • } else insertionSort(d,d+n);

  24. 7 4 6 1 3 5 8 2 “Use a Simple Sort for Small Problem Sizes” • void sort(int *d, int *t, int n) • if (n > CUTOFF) { • spawn sort(d,t,n/4); • spawn sort(d+n/4,t+n/4,n/4); • spawn sort(d+n/2,t+n/2,n/4); • spawn sort(d+3*(n/4),t+3*(n/4),n-3*(n/4)); • sync; • spawn merge(d,d+n/4,d+n/2,t); • spawn merge(d+n/2,d+3*(n/4),d+n,t+n/2); • sync; • merge(t,t+n/2,t+n,d); • } else insertionSort(d,d+n); d d+n

  25. 7 4 1 6 3 5 8 2 “Use a Simple Sort for Small Problem Sizes” • void sort(int *d, int *t, int n) • if (n > CUTOFF) { • spawn sort(d,t,n/4); • spawn sort(d+n/4,t+n/4,n/4); • spawn sort(d+n/2,t+n/2,n/4); • spawn sort(d+3*(n/4),t+3*(n/4),n-3*(n/4)); • sync; • spawn merge(d,d+n/4,d+n/2,t); • spawn merge(d+n/2,d+3*(n/4),d+n,t+n/2); • sync; • merge(t,t+n/2,t+n,d); • } else insertionSort(d,d+n); d d+n

  26. What Do You Need To Know To Verify Data Race Freedom? Points-to Information (data blocks that pointers point into) Region Information (accessed regions within data blocks)

  27. Information Needed To Verify Race Freedom • d and t point to different memory blocks • Calls to sort access disjoint parts of d and t • Together, the calls access [d,d+n-1] and [t,t+n-1] • sort(d,t,n/4); sort(d+n/4,t+n/4,n/4); sort(d+n/2,t+n/2,n/4); sort(d+3*(n/4),t+3*(n/4), n-3*(n/4));
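
(Spelling out the region arithmetic behind this slide, assuming for simplicity that n is divisible by 4; this worked check is mine, using the symbolic regions that the region analysis described later computes.)

  sort(d,         t,         n/4)        accesses [d,         d+n/4-1]     and [t,         t+n/4-1]
  sort(d+n/4,     t+n/4,     n/4)        accesses [d+n/4,     d+n/2-1]     and [t+n/4,     t+n/2-1]
  sort(d+n/2,     t+n/2,     n/4)        accesses [d+n/2,     d+3*(n/4)-1] and [t+n/2,     t+3*(n/4)-1]
  sort(d+3*(n/4), t+3*(n/4), n-3*(n/4))  accesses [d+3*(n/4), d+n-1]       and [t+3*(n/4), t+n-1]

The four d regions are pairwise disjoint and their union is [d,d+n-1], and likewise the four t regions are disjoint with union [t,t+n-1], which is exactly what the data race freedom check needs.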

  28. Information Needed To Verify Race Freedom • d and t point to different memory blocks • The first two calls to merge access disjoint parts of d and t • Together, the calls access [d,d+n-1] and [t,t+n-1] • merge(d,d+n/4,d+n/2,t); merge(d+n/2,d+3*(n/4), d+n,t+n/2); merge(t,t+n/2,t+n,d);

  29. Information Needed To Verify Race Freedom • Calls to insertionSort access [d,d+n-1] • insertionSort(d,d+n);

  30. What Do You Need To Know To Verify Data Race Freedom? Points-to Information (d and t point to different data blocks) Symbolic Region Information (accessed regions within d and t blocks)

  31. How Hard Is It To Figure These Things Out?

  32. How Hard Is It For the Program Analysis To Figure These Things Out? Challenging

  33. How Hard Is It For the Program Analysis To Figure These Things Out? void insertionSort(int *l, int *h) { int *p, *q, k; for (p = l+1; p < h; p++) { for (k = *p, q = p-1; l <= q && k < *q; q--) *(q+1) = *q; *(q+1) = k; } } Not immediately obvious that insertionSort(l,h) accesses [l,h-1]
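
(One way to see the claimed region; the reasoning is mine, not on the slide.) The outer loop moves p over [l+1, h-1], so the reads of *p stay inside [l, h-1]; q starts at p-1 and is dereferenced only while l <= q, so the reads of *q stay inside [l, h-2]; and every write goes through q+1 with q in [l-1, p-1], so the writes stay inside [l, h-1]. Hence insertionSort(l,h) accesses only [l, h-1].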

  34. How Hard Is It For the Program Analysis To Figure These Things Out? void merge(int *l1, int *m, int *h2, int *d) { int *h1 = m; int *l2 = m; while ((l1 < h1) && (l2 < h2)) if (*l1 < *l2) *d++ = *l1++; else *d++ = *l2++; while (l1 < h1) *d++ = *l1++; while (l2 < h2) *d++ = *l2++; } Not immediately obvious that merge(l1,m,h2,d) accesses [l1,h2-1] and [d,d+(h2-l1)-1]
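
(Again, my own reasoning rather than the slide's.) l1 is dereferenced only while l1 < h1 = m and l2 only while l2 < h2, so the reads cover at most [l1, m-1] and [m, h2-1], i.e. [l1, h2-1]; d is incremented once per element copied, (m - l1) + (h2 - m) = h2 - l1 times in total, so the writes cover [d, d+(h2-l1)-1].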

  35. Issues • Heavy Use of Pointers • Pointers into Middle of Arrays • Pointer Arithmetic • Pointer Comparison • Multiple Procedures • sort(int *d, int *t, int n) • insertionSort(int *l, int *h) • merge(int *l, int *m, int *h, int *t) • Recursion • Multithreading

  36. Pointer Analysis • For each program point, computes where each pointer may point e.g. “p → x before statement *p = 1” • Complications 1. Statically unbounded number of locations • recursive data structures (lists, trees) • dynamically allocated arrays 2. Multiple possible executions of the program • may create different dynamic data structures

  37. Memory Abstraction • Physical memory (the stack and heap locations p, head, i, j, q, r, v in the diagram) is mapped to an abstract memory • Allocation block for each variable declaration • Allocation block for each memory allocation site

  38. Memory Abstraction (same stack/heap diagram as the previous slide)
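
(A small C sketch, mine rather than the talk's, of how this abstraction collapses an unbounded number of concrete locations into finitely many allocation blocks; node and build are made-up names.)

  #include <stdlib.h>

  struct node { int val; struct node *next; };

  struct node *build(int n) {
      struct node *head = NULL;                /* one allocation block for the variable head   */
      for (int i = 0; i < n; i++) {
          struct node *p = malloc(sizeof *p);  /* every node created here, no matter how many, */
          p->val  = i;                         /* maps to the single allocation block for this */
          p->next = head;                      /* malloc site                                  */
          head = p;
      }
      return head;
  }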

  39. Pointer Analysis Summary • Key Challenge for Multithreaded Programs: Analyzing interactions between threads • Solution: Interference Edges • Record edges generated by each thread • Captures effect of parallel threads on points-to information of other threads
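
(A sketch, in the Cilk-style C of the earlier slides, of the kind of thread interaction the interference edges record; setp, readp, and parent are made-up names.)

  int x, y;
  int *p = &x;

  void setp(void)   { p = &y; }   /* this thread creates the points-to edge p -> y */
  void readp(int v) { *p = v; }   /* runs in parallel with setp                    */

  void parent(void) {
      spawn setp();
      spawn readp(1);
      sync;
      /* Analyzing readp in isolation would give only p -> x; the interference */
      /* edge contributed by the parallel setp adds p -> y, so *p = v is       */
      /* reported as possibly writing either x or y.                           */
  }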

  40. What Pointer Analysis Gives Us • Disambiguation of Memory Accesses Via Pointers • Pointer-based loads and stores: use pointer analysis results to derive the allocation block that each pointer-based load or store statement accesses • MOD-REF or READ-WRITE SETS Analysis: • All loads and stores • Procedures: use the memory access information for loads and stores to compute the allocation blocks that each procedure accesses

  41. Is This Information Enough?

  42. Is This Information Enough? NO. Necessary but not Sufficient: Parallel Tasks Access (Disjoint) Regions of the Same Allocated Block of Memory

  43. Structure of Analysis • Pointer Analysis: disambiguate memory at the granularity of allocation blocks • Bounds Analysis: symbolic upper and lower bounds for each memory access in each procedure • Region Analysis: symbolic regions accessed by the execution of each procedure • Data Race Freedom: check that parallel threads are independent

  44. Running Example – Array Increment void f(char *p, int n) { if (n > CUTOFF) { spawn f(p, n/2); /* increment first half */ spawn f(p+n/2, n/2); /* increment second half */ sync; } else { /* base case: increment small array */ int i = 0; while (i < n) { *(p+i) += 1; i++; } } }
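
(My own summary of where the next slides are headed.) For this example the analysis needs to show that f(p,n) accesses only [p, p+n-1]; then the first spawned call accesses [p, p+n/2-1], the second accesses [p+n/2, p+n/2+n/2-1], the two regions are disjoint, and the parallel calls are race free. The following slides derive these bounds for the base case.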

  45. Intraprocedural Bounds Analysis • Second stage of the pipeline: Pointer Analysis → Bounds Analysis → Region Analysis → Data Race Detection • Produces symbolic upper and lower bounds for each memory access in each procedure

  46. Intraprocedural Bounds Analysis GOAL: For each pointer and array index variable at each program point, derive lower and upper bounds E.g. “0 ≤ i ≤ n-1 at statement *(p+i) += 1” • Bounds are symbolic expressions • variables represent initial values of parameters of enclosing procedure • bounds are combinations of variables • example expression for f(p,n): p+(n/2)-1

  47. Intraprocedural Bounds Analysis What are the upper and lower bounds for i at each program point in the base case? int i = 0; while (i < n) { *(p+i) += 1; i++; }

  48. Bounds Analysis, Step 1 • Build the control flow graph for the base-case loop: i = 0 → i < n → { *(p+i) += 1; i = i+1 } → back to i < n

  49. Bounds Analysis, Step 2 • Set up symbolic bounds at the beginning of each basic block: l1 ≤ i ≤ u1 before i = 0; l2 ≤ i ≤ u2 before i < n; l3 ≤ i ≤ u3 before *(p+i) += 1; i = i+1

  50. Bounds Analysis, Step 3 • Compute transfer functions: starting from l1 ≤ i ≤ u1, the block i = 0 produces 0 ≤ i ≤ 0; starting from l3 ≤ i ≤ u3, the block *(p+i) += 1; i = i+1 produces l3+1 ≤ i ≤ u3+1; the bounds l2 ≤ i ≤ u2 hold before the test i < n
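
(My own continuation of the worked example, not shown on this slide.) Solving the constraints: the bounds at the loop test combine the entry fact 0 ≤ i ≤ 0 with the back-edge fact l3+1 ≤ i ≤ u3+1, and the successful test i < n tightens the upper bound inside the loop, so the fixed point gives 0 ≤ i ≤ n-1 at *(p+i) += 1. The base case of f therefore accesses only [p, p+n-1], matching the region claimed for the spawned calls.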
