Scheduling for parallelism Rika Nakahara, Yixin Luo
Agenda • What is Scheduling? • Contention for resources • Core/CPU time • LLC • I/Os and Prefetcher • Locks and Shared Data • Metric • Throughput • Quality of Service
What is Scheduling? • In computer science, scheduling is the method by which threads, processes or data flows are given access to system resources (e.g. processor time, communications bandwidth) - Wikipedia • Why scheduling for parallelism? [Figure: tasks task1 … taskn queued on a single-core processor vs. divided between Core 1 and Core 2] Reference: http://en.wikipedia.org/wiki/Scheduling_(computing)
Contention for resources • Core/CPU time • LLC • I/Os and Prefetcher • Locks and Shared Data • Solution? [Figure: two cores with private L1 and L2 caches sharing the LLC, memory controller, memory, and a lock A]
How to schedule for… • 100 threads contending for 64 cores and locks? • A running thread spinning on a lock? • Put some threads to sleep for longer (see the backoff sketch below). • A sleeping thread holding the lock? • Multiple cores sharing the LLC and memory bus? • Cache pollution? • How to pick programs to run together? • An SMT processor? • How to determine which threads to co-schedule?
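A minimal sketch of the "put waiting threads to sleep" idea (an illustration in C11, not taken from any of the papers below): a test-and-test-and-set lock that spins briefly and then sleeps with exponentially growing delays, so oversubscribed waiters stop burning core time. The constants SPIN_LIMIT and MAX_SLEEP_NS are arbitrary tuning knobs.

/* Spin briefly, then back off to sleeping so waiters stop wasting CPU time. */
#include <stdatomic.h>
#include <stdbool.h>
#include <time.h>

typedef struct { atomic_bool held; } backoff_lock_t;   /* initialize with { false } */

#define SPIN_LIMIT   1000        /* spins before giving up the core */
#define MAX_SLEEP_NS 1000000L    /* cap the backoff at 1 ms */

static void backoff_lock_acquire(backoff_lock_t *l)
{
    long sleep_ns = 1000;        /* start backing off at 1 us */
    for (;;) {
        /* Spin on a plain load first to avoid cache-line ping-pong. */
        for (int i = 0; i < SPIN_LIMIT; i++) {
            if (!atomic_load_explicit(&l->held, memory_order_relaxed) &&
                !atomic_exchange_explicit(&l->held, true, memory_order_acquire))
                return;          /* lock acquired */
        }
        /* Still contended: sleep instead of spinning, doubling the delay. */
        struct timespec ts = { 0, sleep_ns };
        nanosleep(&ts, NULL);
        if (sleep_ns < MAX_SLEEP_NS)
            sleep_ns *= 2;
    }
}

static void backoff_lock_release(backoff_lock_t *l)
{
    atomic_store_explicit(&l->held, false, memory_order_release);
}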
Metric • There is usually a tradeoff between throughput and QoS. • Quality of service: • Latency/Turnaround time • Fairness
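As a concrete illustration of the tradeoff, the sketch below computes one common throughput metric and one common fairness metric from per-thread IPC (the choice of metrics and all IPC numbers are assumed for illustration): throughput as weighted speedup, i.e. the sum of each thread's IPC normalized to its solo IPC, and fairness as the ratio of the worst to the best normalized progress.

/* Weighted speedup and a min/max fairness ratio from made-up IPC samples. */
#include <stdio.h>

#define NTHREADS 4

int main(void)
{
    double solo_ipc[NTHREADS]   = { 1.8, 1.2, 0.9, 2.0 };  /* each thread running alone */
    double shared_ipc[NTHREADS] = { 1.1, 1.0, 0.5, 1.6 };  /* the same threads co-scheduled */

    double throughput = 0.0, lo = 1e9, hi = 0.0;
    for (int i = 0; i < NTHREADS; i++) {
        double progress = shared_ipc[i] / solo_ipc[i];      /* normalized progress */
        throughput += progress;
        if (progress < lo) lo = progress;
        if (progress > hi) hi = progress;
    }
    printf("weighted speedup (throughput): %.2f\n", throughput);
    printf("fairness (min/max progress):   %.2f\n", lo / hi);
    return 0;
}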
Addressing Shared Resource Contention in Multicore Processors via Scheduling • Scheduling = classification & algorithm • Classification: SDC, Animal, Miss Rate, Pain • Algorithm: distribute memory-intensive programs across shared caches, i.e. determine which two applications to pair on the same cache (sketched below) Sergey Zhuravlev, Sergey Blagodurov, and Alexandra Fedorova. 2010. Addressing shared resource contention in multicore processors via scheduling. In Proceedings of the fifteenth edition of ASPLOS on Architectural support for programming languages and operating systems (ASPLOS '10). ACM, New York, NY, USA, 129-142. DOI=10.1145/1736020.1736036 http://doi.acm.org/10.1145/1736020.1736036
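A rough sketch of the distribution idea on this slide (a simplification, not the authors' implementation): sort applications by LLC miss rate and pair the most memory-intensive with the least intensive one on each shared cache. The application names and miss rates are made-up placeholders.

/* Spread memory-intensive programs apart: pair heaviest with lightest per cache. */
#include <stdio.h>
#include <stdlib.h>

typedef struct { const char *name; double miss_rate; } app_t;

static int by_miss_rate(const void *a, const void *b)
{
    double d = ((const app_t *)a)->miss_rate - ((const app_t *)b)->miss_rate;
    return (d > 0) - (d < 0);
}

int main(void)
{
    app_t apps[] = {
        { "mcf", 0.35 }, { "lbm", 0.28 }, { "povray", 0.01 }, { "gcc", 0.06 },
    };
    int n = sizeof apps / sizeof apps[0];

    qsort(apps, n, sizeof apps[0], by_miss_rate);   /* lightest ... heaviest */

    /* Pair the lightest with the heaviest: (0, n-1), (1, n-2), ... */
    for (int i = 0; i < n / 2; i++)
        printf("cache %d: %s + %s\n", i, apps[i].name, apps[n - 1 - i].name);
    return 0;
}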
Probabilistic Job Symbiosis Modeling for SMT Processor Scheduling • Co-scheduling: running multiple threads together on the same SMT core • Model-driven scheduling: estimate a probabilistic model from per-job wait cycles and dynamically recompute the co-schedule Stijn Eyerman and Lieven Eeckhout. 2010. Probabilistic job symbiosis modeling for SMT processor scheduling. In Proceedings of the fifteenth edition of ASPLOS on Architectural support for programming languages and operating systems (ASPLOS '10). ACM, New York, NY, USA, 91-102. DOI=10.1145/1736020.1736033 http://doi.acm.org/10.1145/1736020.1736033
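A heavily simplified sketch of model-driven co-scheduling (an approximation for illustration; the paper's cycle-accounting model is more detailed): given predicted per-pair stall fractions, enumerate the possible ways to place four jobs on two 2-way SMT cores and pick the pairing with the smallest total predicted stall, instead of trying every combination experimentally. All numbers are assumed.

/* Pick the SMT co-schedule with the lowest total predicted stall time. */
#include <stdio.h>

#define NJOBS 4

int main(void)
{
    /* predicted_wait[i][j]: fraction of cycles job i is expected to stall
     * when co-scheduled with job j on the same SMT core (assumed inputs). */
    double predicted_wait[NJOBS][NJOBS] = {
        { 0.00, 0.30, 0.10, 0.20 },
        { 0.30, 0.00, 0.25, 0.15 },
        { 0.10, 0.25, 0.00, 0.05 },
        { 0.20, 0.15, 0.05, 0.00 },
    };

    /* The three ways to split four jobs over two 2-way SMT cores. */
    int pairings[3][4] = { { 0, 1, 2, 3 }, { 0, 2, 1, 3 }, { 0, 3, 1, 2 } };
    int best = 0;
    double best_cost = 1e9;
    for (int p = 0; p < 3; p++) {
        int a = pairings[p][0], b = pairings[p][1];
        int c = pairings[p][2], d = pairings[p][3];
        double cost = predicted_wait[a][b] + predicted_wait[b][a]
                    + predicted_wait[c][d] + predicted_wait[d][c];
        if (cost < best_cost) { best_cost = cost; best = p; }
    }
    printf("co-schedule jobs %d+%d and %d+%d (total predicted stall %.2f)\n",
           pairings[best][0], pairings[best][1],
           pairings[best][2], pairings[best][3], best_cost);
    return 0;
}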
Decoupling Contention Management from Scheduling • As load increases, the difference between spinning and blocking diminishes; the load itself becomes the problem. • Decouple load management (contention control) from OS scheduling F. Ryan Johnson, Radu Stoica, Anastasia Ailamaki, and Todd C. Mowry. 2010. Decoupling contention management from scheduling. In Proceedings of the fifteenth edition of ASPLOS on Architectural support for programming languages and operating systems (ASPLOS '10).
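A minimal sketch of load control separated from the OS scheduler (an assumed interface, not the paper's implementation): admit at most max_active threads into the contended region and block the rest on a condition variable, so a rising offered load does not translate into more threads spinning or thrashing the scheduler.

/* Admission control: surplus threads sleep instead of adding to contention. */
#include <pthread.h>

typedef struct {
    pthread_mutex_t m;
    pthread_cond_t  cv;
    int active;        /* threads currently inside the contended region */
    int max_active;    /* target concurrency, e.g. the number of cores */
} load_ctl_t;

/* Worker threads wrap the contended section with load_ctl_enter/load_ctl_exit. */
static load_ctl_t ctl = { PTHREAD_MUTEX_INITIALIZER, PTHREAD_COND_INITIALIZER, 0, 8 };

static void load_ctl_enter(load_ctl_t *lc)
{
    pthread_mutex_lock(&lc->m);
    while (lc->active >= lc->max_active)
        pthread_cond_wait(&lc->cv, &lc->m);   /* surplus threads wait here */
    lc->active++;
    pthread_mutex_unlock(&lc->m);
}

static void load_ctl_exit(load_ctl_t *lc)
{
    pthread_mutex_lock(&lc->m);
    lc->active--;
    pthread_cond_signal(&lc->cv);             /* admit one waiting thread */
    pthread_mutex_unlock(&lc->m);
}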
Flexible Architectural Support for Fine-Grain Scheduling • Bypass shared-resource contention by passing scheduling messages instead of going through shared memory Daniel Sanchez, Richard M. Yoo, and Christos Kozyrakis. 2010. Flexible architectural support for fine-grain scheduling. In Proceedings of the fifteenth edition of ASPLOS on Architectural support for programming languages and operating systems (ASPLOS '10).
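A software-only sketch of the message-passing idea (hardware messaging is stood in for by a POSIX pipe; this is not the paper's proposed mechanism): a dispatcher hands task ids to a worker over a channel, so the task hand-off path touches no shared run queue or lock.

/* Hand tasks to a worker by message passing instead of a shared, locked queue. */
#include <stdio.h>
#include <unistd.h>
#include <pthread.h>

static int chan[2];   /* chan[1]: dispatcher writes, chan[0]: worker reads */

static void *worker(void *arg)
{
    (void)arg;
    int task;
    /* A negative task id is the "no more work" sentinel. */
    while (read(chan[0], &task, sizeof task) == (ssize_t)sizeof task && task >= 0)
        printf("worker got task %d\n", task);   /* run the task here */
    return NULL;
}

int main(void)
{
    pthread_t t;
    if (pipe(chan) != 0)
        return 1;
    pthread_create(&t, NULL, worker, NULL);

    for (int task = 0; task < 4; task++)         /* dispatch tasks by message */
        write(chan[1], &task, sizeof task);
    int stop = -1;
    write(chan[1], &stop, sizeof stop);          /* tell the worker to exit */

    pthread_join(t, NULL);
    return 0;
}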