ATLAS: A Scalable and High-Performance Scheduling Algorithm for Multiple Memory Controllers
Yoongu Kim, Dongsu Han, Onur Mutlu, Mor Harchol-Balter
Motivation
• Modern multi-core systems employ multiple memory controllers
• Applications contend with each other in multiple controllers
• How to perform memory scheduling for multiple controllers?
Desired Properties of a Memory Scheduling Algorithm
• Maximize system performance
  • Without starving any cores
• Configurable by system software
  • To enforce thread priorities and QoS/fairness policies
• Scalable to a large number of controllers
  • Should not require significant coordination between controllers
No previous scheduling algorithm satisfies all these requirements.
Multiple Memory Controllers
[Figure: a single-MC system, where cores share one memory controller, versus a multiple-MC system, where cores share several controllers, each attached to its own memory.]
The difference? The need for coordination.
Thread Ranking in a Single-MC System
Assume all requests go to the same bank. Thread 1 has fewer requests than Thread 2, making it the shorter job.
[Figure: memory service timeline and per-thread execution timelines. Servicing Thread 1's single request before Thread 2's two requests yields the optimal average stall time of 2T.]
Thread ranking: Thread 1 > Thread 2, so Thread 1 is assigned the higher rank.
Thread Ranking in a Multiple-MC System
Uncoordinated: MC 1's locally shorter job is Thread 1, so MC 1 incorrectly assigns Thread 1 the higher rank, even though the globally shorter job is Thread 2. Average stall time: 3T.
Coordinated: MC 1 correctly assigns the higher rank to Thread 2, the globally shorter job. Average stall time: 2.5T, saving cycles.
[Figure: request timelines at MC 1 and MC 2, and the resulting per-thread stalls in both cases.]
Coordination leads to better scheduling decisions.
Coordination Limits Scalability
[Figure: four MCs coordinating either directly (MC-to-MC) or through a centralized meta-MC.]
Coordination consumes bandwidth. To be scalable, coordination should:
• exchange little information
• occur infrequently
The Problem and Our Goal
Problem: previous memory scheduling algorithms are not scalable to many controllers
• They were not designed for multiple MCs
• They require significant coordination
Our Goal: fundamentally redesign the memory scheduling algorithm such that it
• Provides high system throughput
• Requires little or no coordination among MCs
Outline
• Motivation
• Rethinking Memory Scheduling
  • Minimizing Memory Episode Time
• ATLAS
  • Least Attained Service Memory Scheduling
  • Thread Ranking
  • Request Prioritization Rules
  • Coordination
• Evaluation
• Conclusion
Rethinking Memory Scheduling
A thread alternates between two states (episodes):
• Compute episode: zero outstanding memory requests → high IPC
• Memory episode: non-zero outstanding memory requests → low IPC
[Figure: a thread's outstanding memory requests over time, alternating between memory episodes and compute episodes.]
Goal: minimize the time spent in memory episodes.
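To make the episode model concrete, here is a minimal Python sketch (ours, not the authors') that splits a per-cycle trace of outstanding-request counts into the two episode types; the trace format and function name are illustrative assumptions.

    def split_into_episodes(outstanding):
        """Classify a per-cycle trace of outstanding memory request counts
        into alternating compute episodes (count == 0) and memory
        episodes (count > 0). Returns a list of (kind, length) pairs."""
        episodes = []
        for count in outstanding:
            kind = "memory" if count > 0 else "compute"
            if episodes and episodes[-1][0] == kind:
                episodes[-1] = (kind, episodes[-1][1] + 1)
            else:
                episodes.append((kind, 1))
        return episodes

    # Example: a 3-cycle memory episode followed by a 2-cycle compute episode.
    print(split_into_episodes([2, 1, 1, 0, 0]))  # [('memory', 3), ('compute', 2)]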
How to Minimize Memory Episode Time
Prioritize the thread whose memory episode will end the soonest:
• This minimizes the time spent in memory episodes across all threads
• It is supported by queueing theory: Shortest-Remaining-Processing-Time (SRPT) scheduling is optimal in a single-server queue
But how do we know the remaining length of a memory episode that is still in progress?
[Figure: an in-progress memory episode; how much longer will it last?]
Predicting Memory Episode Lengths
We discovered that the past is an excellent predictor of the future: large attained service implies large expected remaining service.
Q: Why? A: Memory episode lengths are Pareto distributed.
[Figure: an episode's service split into attained service (past) and remaining service (future).]
Pareto Distribution of Memory Episode Lengths
[Figure: Pr{memory episode length > x} versus x (cycles) for the memory episode lengths of SPEC benchmarks, e.g., 401.bzip2, closely matching a Pareto distribution.]
The longer an episode has lasted, the longer it will last further. Attained service therefore correlates with remaining service, so favoring the least-attained-service memory episode = favoring the memory episode that will end the soonest.
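The predictive property of the Pareto tail can be checked in a few lines of Python; this sketch is ours, and the shape parameter is an illustrative assumption, not a value from the paper.

    import random

    # Draw episode lengths from a heavy-tailed Pareto distribution.
    random.seed(0)
    lengths = [random.paretovariate(1.5) for _ in range(200_000)]

    # Expected remaining length given attained length t: E[L - t | L > t].
    # For a Pareto distribution this grows with t.
    for t in [1, 2, 4, 8]:
        remaining = [x - t for x in lengths if x > t]
        print(t, sum(remaining) / len(remaining))
    # The printed means increase with t: the longer an episode has lasted,
    # the longer it is expected to last further.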
Least Attained Service (LAS) Memory Scheduling
Queueing theory: prioritizing the job with the shortest remaining processing time is provably optimal in a single-server queue.
Our approach: prioritize the memory episode with the least remaining service.
• Remaining service: correlates with attained service
• Attained service: tracked by a per-thread counter
→ Least-attained-service (LAS) scheduling: prioritize the memory episode with the least attained service, minimizing memory episode time.
However, LAS does not consider long-term thread behavior.
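A minimal sketch of the per-thread counter, under our assumed accounting (each cycle, a thread's counter grows by the number of banks currently servicing its requests; the paper defines the exact bookkeeping):

    from collections import defaultdict

    def update_attained_service(as_counter, banks_servicing):
        """Per-cycle update of per-thread attained service counters.
        banks_servicing: {thread_id: number of banks servicing that
        thread's requests this cycle} (assumed accounting)."""
        for tid, nbanks in banks_servicing.items():
            as_counter[tid] += nbanks
        return as_counter

    counter = defaultdict(int)
    update_attained_service(counter, {0: 2, 1: 1})  # cycle 1
    update_attained_service(counter, {0: 1})        # cycle 2
    print(dict(counter))  # {0: 3, 1: 1}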
Long-Term Thread Behavior
[Figure: Thread 1 alternates long memory episodes with compute episodes; Thread 2 alternates short memory episodes with compute episodes.]
Ranking should reflect long-term thread behavior, not just the current episode: prioritizing Thread 2 is more beneficial, since it results in very long stretches of compute episodes.
Quantum-Based Attained Service of a Thread
Attained service measured over a short window captures only short-term thread behavior. We instead divide time into large, fixed-length intervals, called quanta (millions of cycles), and accumulate attained service over each quantum to capture long-term thread behavior.
[Figure: outstanding memory requests over time, with attained service accumulated per quantum.]
LAS Thread Ranking
During a quantum:
• Each thread's attained service (AS) is tracked by the MCs
• AS_i = a thread's AS during only the i-th quantum
At the end of a quantum, each thread's TotalAS is computed as:
TotalAS_i = α · TotalAS_(i-1) + (1 - α) · AS_i
where higher α means more bias towards history. Threads are then ranked, favoring threads with lower TotalAS.
During the next quantum, threads are serviced according to their ranking.
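A minimal Python sketch of this end-of-quantum step; the α value is illustrative (the paper evaluates the actual parameter choice), and the dictionaries stand in for hardware registers.

    ALPHA = 0.875  # bias towards history; illustrative value

    def update_ranking(total_as, quantum_as):
        """End-of-quantum update: exponentially average each thread's
        attained service, then rank threads with lowest TotalAS first.
        total_as:   {thread_id: TotalAS from the previous quantum}
        quantum_as: {thread_id: AS accumulated during this quantum}"""
        new_total = {
            tid: ALPHA * total_as.get(tid, 0.0) + (1 - ALPHA) * quantum_as[tid]
            for tid in quantum_as
        }
        # Lower TotalAS means the thread behaved like a shorter job,
        # so it receives a higher rank.
        ranking = sorted(new_total, key=new_total.get)
        return new_total, ranking

    totals, rank = update_ranking({0: 100.0, 1: 400.0}, {0: 50.0, 1: 10.0})
    print(rank)  # [0, 1]: thread 0 has lower TotalAS, so it ranks higher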
ATLAS Scheduling Algorithm
ATLAS: Adaptive per-Thread Least Attained Service
Request prioritization order:
1. Prevent starvation: over-threshold request
2. Maximize performance: higher LAS rank
3. Exploit locality: row-hit request
4. Tie-breaker: oldest request
How do we coordinate the MCs so they agree upon a consistent ranking?
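These four rules compose into a single lexicographic comparison. Below is a minimal sketch assuming a hypothetical Request record; the starvation threshold value is an assumption for illustration.

    from dataclasses import dataclass

    STARVATION_THRESHOLD = 100_000  # cycles; illustrative value

    @dataclass
    class Request:
        thread_rank: int  # LAS rank of the issuing thread (0 = highest)
        row_hit: bool     # does the request hit the open row buffer?
        age: int          # cycles the request has waited in the buffer

    def priority_key(req):
        """Lexicographic key implementing the prioritization order:
        1. over-threshold (starving) requests first,
        2. then higher LAS rank (lower rank number),
        3. then row-hit requests,
        4. then the oldest request."""
        starving = req.age > STARVATION_THRESHOLD
        return (not starving, req.thread_rank, not req.row_hit, -req.age)

    def pick_next(request_buffer):
        return min(request_buffer, key=priority_key)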
ATLAS Coordination Mechanism
During a quantum:
• Each MC increments the local AS of each thread
At the end of a quantum:
• Each MC sends the local AS of each thread to a centralized meta-MC
• The meta-MC accumulates the local AS values and calculates the ranking
• The meta-MC broadcasts the ranking to all MCs
→ Consistent thread ranking across all MCs
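A minimal sketch of the quantum-boundary exchange, reusing the update_ranking helper sketched earlier; the message passing itself is abstracted away, and all names are illustrative.

    def quantum_end(meta_total_as, per_mc_local_as):
        """Meta-MC step: sum each thread's local AS across all MCs, run the
        EMA + ranking update (update_ranking from the earlier sketch), and
        return the ranking that would be broadcast back to every MC.
        per_mc_local_as: one {thread_id: AS} dict per memory controller."""
        global_as = {}
        for local in per_mc_local_as:
            for tid, as_val in local.items():
                global_as[tid] = global_as.get(tid, 0.0) + as_val
        return update_ranking(meta_total_as, global_as)

    # Two MCs report local AS for threads 0 and 1 at the quantum boundary.
    totals, rank = quantum_end({}, [{0: 30.0, 1: 5.0}, {0: 20.0, 1: 5.0}])
    print(rank)  # [1, 0]: thread 1 attained less service globally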
Coordination Cost in ATLAS
How costly is coordination in ATLAS? Very little: each MC sends one counter per thread to the meta-MC, and receives a short ranking broadcast in return, only once per quantum (millions of cycles). Assuming, for illustration, 24 threads, 4 MCs, and 16-bit counters, that is roughly 24 × 4 × 16 ≈ 1.5 Kbit of traffic every few million cycles.
Properties of ATLAS
Goals and the ATLAS properties that achieve them:
• Maximize system performance → LAS ranking, bank-level parallelism, row-buffer locality
• Scalable to a large number of controllers → very infrequent coordination; low complexity (attained service requires a single counter per thread in each MC)
• Configurable by system software → attained service is scaled with thread weight (in paper)
Evaluation Methodology
• 4, 8, 16, 24, and 32-core systems
  • 5 GHz processor, 128-entry instruction window
  • 512 KB per-core private L2 caches
• 1, 2, 4, 8, and 16-MC systems
  • 128-entry memory request buffer
  • 4 banks, 2 KB row buffer
  • 40 ns (200 cycles) row-hit round-trip latency
  • 80 ns (400 cycles) row-conflict round-trip latency
• Workloads
  • Multiprogrammed SPEC CPU2006 applications
  • 32 program combinations for the 4, 8, 16, 24, and 32-core experiments
Comparison to Previous Scheduling Algorithms
• FCFS, FR-FCFS [Rixner+, ISCA'00]: oldest-first, row-hit first
  • Low multi-core performance: they do not distinguish between threads
• Network Fair Queueing [Nesbit+, MICRO'06]: partitions memory bandwidth equally among threads
  • Low system performance: bank-level parallelism and locality are not exploited
• Stall-Time Fair Memory Scheduler [Mutlu+, MICRO'07]: balances thread slowdowns relative to when each thread runs alone
  • High coordination costs: requires heavy cycle-by-cycle coordination
• Parallelism-Aware Batch Scheduler [Mutlu+, ISCA'08]: batches requests and performs thread ranking to preserve bank-level parallelism
  • High coordination costs: batch duration is very short
System Throughput: 24-Core System
System throughput = Σ speedup.
[Figure: system throughput versus the number of memory controllers; ATLAS's improvement over the best previous algorithm grows with the controller count: 3.5%, 5.9%, 8.4%, 9.8%, and 17.0%.]
ATLAS consistently provides higher system throughput than all previous scheduling algorithms.
System Throughput: 4-MC System
[Figure: system throughput versus the number of cores; as the number of cores increases, ATLAS's performance benefit increases: 1.1%, 3.5%, 4.0%, 8.4%, and 10.8%.]
Other Evaluations in the Paper
• System software support: ATLAS effectively enforces thread weights
• Workload analysis: ATLAS performs best for mixed-intensity workloads
• Effect of ATLAS on fairness
• Sensitivity to algorithmic parameters
• Sensitivity to system parameters: memory address mapping, cache size, memory latency
Conclusions
• Multiple memory controllers require coordination: they need to agree upon a consistent ranking of threads
• ATLAS is a fundamentally new approach to memory scheduling
  • Scalable: thread ranking decisions are made at coarse-grained intervals
  • High-performance: minimizes the system time spent in memory episodes (the Least Attained Service scheduling principle)
  • Configurable: enforces thread priorities
• ATLAS provides the highest system throughput compared to five previous scheduling algorithms
  • Its performance benefit increases as the number of cores increases
Hardware Cost
• Additional hardware storage for a 24-core, 4-MC system: 9kb
• Not on the critical path of execution
System Software Support
• ATLAS enforces system-assigned priorities, or thread weights
• There is a linear relationship between thread weight and speedup
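The weight mechanism scales attained service with thread weight (see "Properties of ATLAS"); here is a minimal sketch of one plausible reading, where dividing by the weight is our assumption: a higher-weight thread then accrues TotalAS more slowly and tends to keep a higher rank.

    def weighted_quantum_as(raw_as, weights):
        """Scale each thread's measured AS by its weight before the ranking
        update (dividing by the weight is an assumption, not the paper's
        exact formula).
        raw_as:  {thread_id: AS measured during the quantum}
        weights: {thread_id: system-assigned thread weight (>= 1)}"""
        return {tid: as_val / weights[tid] for tid, as_val in raw_as.items()}

    # Thread 1 used twice the service but has weight 4, so it still appears
    # to have attained less service than thread 0.
    print(weighted_quantum_as({0: 100.0, 1: 200.0}, {0: 1, 1: 4}))
    # {0: 100.0, 1: 50.0}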
System Parameters
• ATLAS performance on systems with varying cache sizes and memory timings
• ATLAS's performance benefit increases as contention for memory increases