Learn about the advantages and limitations of optimistic synchronization for automatic parallelization. Understand the implementation of atomic operations and the synchronization selection algorithm for fine-grain synchronization. Explore the benefits of optimistic synchronization in modern processors.
Effective Fine-Grain Synchronization For Automatically Parallelized Programs Using Optimistic Synchronization Primitives
Martin Rinard
University of California, Santa Barbara
Problem: Efficiently Implementing Atomic Operations On Objects
Key Issue: Mutual Exclusion Locks Versus Optimistic Synchronization Primitives
Context: Parallelizing Compiler For Irregular, Object-Based Programs
• Linked Data Structures
• Commutativity Analysis
Talk Outline
• Histogram Example
• Advantages and Limitations of Optimistic Synchronization
• Synchronization Selection Algorithm
• Experimental Results
Histogram Example
class histogram {
  private:
    int counts[N];
  public:
    void update(int i) { counts[i]++; }
};

parallel for (i = 0; i < iterations; i++) {
  int c = f(i);
  h->update(c);
}
(Figure: histogram with bin counts 3 7 4 1 2 0 5 8)
Cloud Of Parallel Histogram Updates
(Figure: iterations 0 through 8 updating the shared histogram in parallel)
Updates Must Execute Atomically
One Lock Per Object
class histogram {
  private:
    int counts[N];
    lock mutex;
  public:
    void update(int i) {
      mutex.acquire();
      counts[i]++;
      mutex.release();
    }
};
Problem: False Exclusion - updates to different bins needlessly serialize on the single per-object lock
One Lock Per Item
class histogram {
  private:
    int counts[N];
    lock mutex[N];
  public:
    void update(int i) {
      mutex[i].acquire();
      counts[i]++;
      mutex[i].release();
    }
};
Problem: Memory Consumption
Optimistic Synchronization
(Figure: histogram with an optimistic update in flight)
• Load Old Value
• Compute New Value Into Local Storage
• Commit Point
  - No Write Between Load and Commit: Commit Succeeds, Write New Value
  - Write Between Load and Commit: Commit Fails, Retry Update
Parallel Updates With Optimistic Synchronization
(Figure: two concurrent updates each load an old value and compute a new value into local storage; one commit succeeds and writes its new value, the other commit fails and retries the update)
Optimistic Synchronization In Modern Processors
• Load Linked (LL) - Used To Load Old Value
• Store Conditional (SC) - Used To Commit New Value
Atomic Increment Using Optimistic Synchronization Primitives
retry:
  LL    $2, 0($4)       # Load Old Value
  addiu $3, $2, 1       # Compute New Value Into Local Storage
  SC    $3, 0($4)       # Attempt To Store New Value
  beq   $3, $0, retry   # Retry If The Store Conditional Failed
Optimistically Synchronized Histogram
class histogram {
  private:
    int counts[N];
  public:
    void update(int i) {
      int new_count;
      do {
        new_count = LL(counts[i]);
        new_count++;
      } while (!SC(new_count, counts[i]));
    }
};
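The talk's primitives are MIPS LL/SC; on hardware that instead provides compare-and-swap, the same load/compute/commit retry loop can be expressed with C++11 atomics. A minimal sketch, not part of the original compiler's generated code; the class name and bin count are assumptions for illustration:

#include <atomic>

// Sketch only: compare_exchange_weak plays the role of the LL/SC pair.
class atomic_histogram {
  private:
    static constexpr int N = 1024;     // assumed bin count
    std::atomic<int> counts[N];
  public:
    atomic_histogram() {
      for (int i = 0; i < N; i++) counts[i] = 0;
    }
    void update(int i) {
      int old_count = counts[i].load();
      // Compute the new value locally, then try to commit; on failure
      // compare_exchange_weak reloads old_count and the loop retries.
      while (!counts[i].compare_exchange_weak(old_count, old_count + 1)) {
      }
    }
};

For a pure counter increment, counts[i].fetch_add(1) would be simpler; the explicit loop is shown because it mirrors the load/compute/commit structure described above.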
Aspects of Optimistic Synchronization
• Advantages
  - Slightly More Efficient Than Locked Updates
  - No Memory Overhead
  - No Data Cache Overhead
  - Potentially Fewer Memory Consistency Requirements
• Advantages In Other Contexts
  - No Deadlock, No Priority Inversions, No Lock Convoys
• Limitations
  - Existing Primitives Support Only Single Word Updates
  - Each Update Must Be Synchronized Individually
  - Lack of Fairness
Synchronization In Automatically Parallelized Programs
Serial Program (Assumption: Operations Execute Atomically)
  --> Commutativity Analysis -->
Unsynchronized Parallel Program (Requirement: Correctly Synchronize Atomic Operations)
  --> Synchronization Selection (Goal: Choose An Efficient Synchronization Mechanism For Each Operation) -->
Synchronized Parallel Program
Use Optimistic Synchronization Whenever Possible
Model Of Computation
• Objects With Instance Variables
  class histogram {
    private:
      int counts[N];
  };
• Operations Update Objects By Modifying Instance Variables
  void histogram::update(int i) { counts[i]++; }
(Figure: h->update(1) changes the bin counts from 4 2 5 to 4 3 5)
Commutativity Analysis
• Compiler Computes Extent Of Computation
  - Representation Of All Operations In The Computation
  - In Example: { histogram::update }
• Do All Pairs Of Operations Commute?
  - No - Generate Serial Code
  - Yes - Automatically Generate Parallel Code
  - In Example: h->update(i) and h->update(j) commute for all i, j (see the sketch below)
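For intuition only, the commutativity claim in this example can be checked concretely: applying the two updates in either order leaves the histogram in the same state. A self-contained sketch, not the compiler's static analysis; the bin count is an assumption:

#include <array>

// Each update only increments its own bin, so the final state is
// independent of the order in which the two updates are applied.
static constexpr int N = 8;           // assumed bin count

std::array<int, N> apply_updates(std::array<int, N> counts, int i, int j) {
  counts[i]++;                        // h->update(i)
  counts[j]++;                        // h->update(j)
  return counts;
}

bool updates_commute(const std::array<int, N>& counts, int i, int j) {
  return apply_updates(counts, i, j) == apply_updates(counts, j, i);
}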
Synchronization Requirements
• Traditional Parallelizing Compilers
  - Parallelize Loops With Independent Iterations
  - Barrier Synchronization
• Commutativity Analysis
  - Parallel Operations May Update The Same Object
  - For Generated Code To Execute Correctly, Operations Must Execute Atomically
  - Code Generation Algorithm Must Insert Synchronization
Default Synchronization Algorithm
class histogram {
  private:
    int counts[N];
    lock mutex;           // One Lock Per Object
  public:
    void update(int i) {
      mutex.acquire();    // Operations Acquire and Release the Lock
      counts[i]++;
      mutex.release();
    }
};
Synchronization Constraints
Operation: counts[i] = counts[i]+1;
  Constraint: Can Use Optimistic Synchronization - A Read/Compute/Write Update To A Single Instance Variable
Operation: temp = counts[i]; counts[i] = counts[j]; counts[j] = temp;
  Constraint: Must Use Lock Synchronization - The Update Involves Multiple Interdependent Instance Variables
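To make the second row concrete, here is a sketch, in the same pseudo-C++ style as the earlier slides, of how such a multi-variable update would be protected with the per-object lock. The swap operation itself is hypothetical; only the single-word increment appears in the histogram example.

class histogram {
  private:
    int counts[N];
    lock mutex;
  public:
    void swap_counts(int i, int j) {
      mutex.acquire();          // both elements must change together,
      int temp = counts[i];     // so a single-word optimistic commit
      counts[i] = counts[j];    // cannot make the update atomic
      counts[j] = temp;
      mutex.release();
    }
};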
Synchronization Selection Constraints
• Can Use Optimistic Synchronization Only For Single Word Updates That
  - Read An Instance Variable
  - Compute A New Value That Depends On No Other Updated Instance Variable
  - Write The New Value Back Into The Instance Variable
• All Updates To The Same Instance Variable Must Use The Same Synchronization Mechanism
Synchronization Selection Algorithm
• Operates At The Granularity Of Instance Variables
• Compiler Scans All Updates To Each Instance Variable
  - If All Updates Can Use Optimistic Synchronization, The Instance Variable Is Marked Optimistically Synchronized
  - If At Least One Update Must Use Lock Synchronization, The Instance Variable Is Marked Lock Synchronized
• If A Class Has A Lock Synchronized Variable, The Class Is Marked Lock Synchronized
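A hedged sketch of this pass over hypothetical summary structures (UpdateSite, InstanceVariable, and ClassInfo are assumptions for illustration, not the compiler's actual internal representation):

#include <vector>

// Hypothetical summaries; the real compiler works over its own IR.
struct UpdateSite {
  bool single_word_read_compute_write;   // fits the optimistic pattern?
};
struct InstanceVariable {
  std::vector<UpdateSite> updates;
  bool lock_synchronized = false;        // false = optimistically synchronized
};
struct ClassInfo {
  std::vector<InstanceVariable> variables;
  bool lock_synchronized = false;        // true = class is augmented with a lock
};

void select_synchronization(ClassInfo& cls) {
  for (InstanceVariable& var : cls.variables) {
    // A variable stays optimistically synchronized only if every update
    // to it is a single-word read/compute/write.
    for (const UpdateSite& update : var.updates) {
      if (!update.single_word_read_compute_write) {
        var.lock_synchronized = true;
        break;
      }
    }
    // Any lock synchronized variable forces the class to carry a lock.
    if (var.lock_synchronized) {
      cls.lock_synchronized = true;
    }
  }
}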
Synchronization Selection In Example
class histogram {
  private:
    int counts[N];       // Optimistically Synchronized Instance Variable
  public:
    void update(int i) {
      counts[i]++;
    }
};
histogram Is NOT Marked As A Lock Synchronized Class
Code Generation Algorithm
• All Lock Synchronized Classes Are Augmented With Locks
• Operations That Update Lock Synchronized Variables Acquire And Release The Lock In The Object
• Operations That Update Optimistically Synchronized Variables Use Optimistic Synchronization Primitives
Optimistically Synchronized Histogram
class histogram {
  private:
    int counts[N];
  public:
    void update(int i) {
      int new_count;
      do {
        new_count = LL(counts[i]);
        new_count++;
      } while (!SC(new_count, counts[i]));
    }
};
Methodology
• Implemented Parallelizing Compiler
• Implemented Synchronization Selection Algorithm
• Parallelized Three Complete Scientific Applications
  - Barnes-Hut, String, Water
• Produced Four Versions
  - Optimistic (All Updates Optimistically Synchronized)
  - Item Lock (Produced By Hand)
  - Object Lock
  - Coarse Lock
• Used Inline Intrinsic Locks With Exponential Backoff
• Measured Performance On SGI Challenge XL
Time For One Update
(Charts: update time in microseconds for Locked, Optimistic, and Unsynchronized updates on the Challenge XL; one chart shows a cached update on a 0-0.4 microsecond scale, the other an uncached update on a 0-8 microsecond scale; the locked version keeps the data and lock on different cache lines)
Synchronization Frequency
(Chart: microseconds per synchronization, on a 0-15 microsecond scale, for the Optimistic/Item Lock, Object Lock, and Coarse Lock versions of Barnes-Hut, String, and Water; numeric labels of 661 in the Barnes-Hut row and 25 in the Water row mark bars that extend beyond the scale)
Memory Consumption For Barnes-Hut
(Chart: total memory used to store objects, in MBytes on a 0-50 scale, for the Optimistic, Item Lock, Object Lock, and Coarse Lock versions)
Memory Consumption For String
(Chart: total memory used to store objects, in MBytes on a 0-5 scale, for the Optimistic, Item Lock, and Object Lock versions)
Memory Consumption For Water
(Chart: total memory used to store objects, in MBytes on a 0-1.5 scale, for the Optimistic, Item Lock, Object Lock, and Coarse Lock versions)
Speedups For Barnes-Hut
(Charts: speedup versus number of processors, on axes up to 24 processors, for the Optimistic, Item Lock, Object Lock, and Coarse Lock versions)
Speedups For String
(Charts: speedup versus number of processors, on axes up to 24 processors, for the Optimistic, Item Lock, and Object Lock versions)
Speedups For Water
(Charts: speedup versus number of processors, on axes up to 24 processors, for the Optimistic, Item Lock, Object Lock, and Coarse Lock versions)
Acknowledgements
• Pedro Diniz
  - Parallelizing Compiler
• Silicon Graphics
  - Challenge XL Multiprocessor
• Rohit Chandra, T.K. Lakshman, Robert Kennedy, Alex Poulos
  - Technical Assistance With SGI Hardware and Software
Bottom Line
• Optimistic Synchronization Offers
  - No Memory Overhead
  - No Data Cache Overhead
  - Reasonably Small Execution Time Overhead
  - Good Performance On All Applications
• Good Choice For A Parallelizing Compiler
  - Minimal Impact On The Parallel Program
  - Simple, Robust, Works Well In A Range Of Situations
• Major Drawback
  - Current Primitives Support Only Single Word Updates
• Use Optimistic Synchronization Whenever Applicable
Future
The Efficient Implementation Of Atomic Operations On Objects Will Become A Crucial Issue For Mainstream Software
• Small-Scale Shared-Memory Multiprocessors
• Multithreaded Applications and Libraries
• Popularity of Object-Oriented Programming
• Specific Example: Java Standard Library
Optimistic Synchronization Primitives Will Play An Important Role