National Sun Yat-sen University Embedded System Laboratory
Dynamic Scheduler for Multi-core Systems
Presenter: Chien-Chih Chen
Research Tree Analysis of The Linux 2.6 Kernel Scheduler Dynamic Scheduler for Multi-core Systems Optimal Task Scheduler for Multi-core Processor
Abstract • Many dynamic scheduling algorithms have been proposed in the past. With the advent of multi-core processors, there is a need to schedule multiple tasks on multiple cores, and the scheduling algorithm needs to utilize all available cores efficiently. The multi-core processors may be SMPs or AMPs with a shared-memory architecture. In this paper, we propose a dynamic scheduling algorithm in which the scheduler resides on all cores of a multi-core processor and accesses a shared Task Data Structure (TDS) to pick up ready-to-execute tasks.
What’s the Problem • Converting sequential code to parallel code, or writing parallel applications from scratch, is not an optimal solution. • Most of the proposed scheduling algorithms for multi-core processors do not support dependent tasks.
Related Work • Proposed dynamic scheduling techniques [1] [2] [3] [4] [5] [6] [7] • Available data dependency analysis techniques [9] [10] [11] [12] [13] [14] [15]
Scheduling Techniques • [1] An improved OFT algorithm that reduces preemption. • [2] A data-flow-based technique that discusses data reuse, intended for numeric computation. • [3] Migrates tasks between cores based on recorded resource utilization and throughput. • [4] A compile-time technique that extracts dependencies dynamically and schedules parallel tiles on the cores to improve scalability.
Scheduling Techniques • [5] Uses an FFT language to generate one-dimensional serial, multi-dimensional serial, and parallel FFT schedules. • [6] Decomposes a long task into smaller subtasks to form a new task state graph, then schedules them in parallel. • [7] Uses sampling of dominant execution phases to converge to the optimal scheduling algorithm.
Proposed Method • The scheduler resides in the shared memory of the multi-core system, ensuring that all cores share the scheduler code. • The same scheduler code executes on every core and maintains a shared Task Data Structure (TDS) that contains task information.
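The per-core scheduler loop described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the names (`tds`, `tds_lock`, `pick_ready_task`) and the first-ready selection policy are assumptions; the key point is that every core runs the same code and serializes access to the shared TDS with a lock.

```c
/* Sketch of the shared-TDS access path each core executes.
 * All identifiers here are illustrative assumptions. */
#include <pthread.h>

#define MAX_TASKS 64

typedef struct {
    int id;
    int status;   /* 1 = ready, 2 = running, -1 = not ready */
} task_t;

static task_t tds[MAX_TASKS];   /* shared Task Data Structure */
static pthread_mutex_t tds_lock = PTHREAD_MUTEX_INITIALIZER;

/* Pick the first ready task, mark it running, and return its
 * index, or -1 if no task is currently ready. */
static int pick_ready_task(int n_tasks) {
    int picked = -1;
    pthread_mutex_lock(&tds_lock);          /* serialize TDS access */
    for (int i = 0; i < n_tasks; i++) {
        if (tds[i].status == 1) {
            tds[i].status = 2;              /* mark running */
            picked = i;
            break;
        }
    }
    pthread_mutex_unlock(&tds_lock);
    return picked;
}
```

Because every core calls `pick_ready_task` on the same shared structure, the lock guarantees that two cores never claim the same task, at the cost of the contention noted in the conclusion.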
Task Data Structure (TDS) • Ti: unique number identifying task i • Tis: status of task i • Ready (1) • Running (2) • Not ready (-1) • Tid: number of other tasks that depend on task i • Tia: list of tasks that become available once task i runs • Tip: priority number of task i • Tidp: data pointer of task i • Tisp: stack pointer of task i • Tix: execution time of task i
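The TDS entry listed above could be rendered as a C struct along these lines; the field names mirror the slide's notation, but the exact types and the fixed dependents-list size are assumptions:

```c
/* One possible C rendering of a TDS entry; types and bounds are
 * illustrative assumptions based on the fields listed in the slides. */
#include <stdint.h>

#define MAX_DEPENDENTS 16

typedef struct {
    int      id;                         /* Ti  : unique task identifier            */
    int      status;                     /* Tis : 1 ready, 2 running, -1 not ready  */
    int      dep_count;                  /* Tid : number of tasks depending on this */
    int      dependents[MAX_DEPENDENTS]; /* Tia : tasks released when this runs     */
    int      priority;                   /* Tip : priority number                   */
    void    *data_ptr;                   /* Tidp: data pointer                      */
    void    *stack_ptr;                  /* Tisp: stack pointer                     */
    uint32_t exec_time;                  /* Tix : execution time                    */
} tds_entry_t;
```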
Priority of Task • Duration of the task (Tix). • Total number of other tasks that depend on the task (Tid).
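The slides name Tix and Tid as the priority inputs but give no combining formula. One plausible sketch, purely an assumption, weights the dependent count above execution time so that tasks which unblock many others run first:

```c
/* Hedged sketch: the weighting below is an illustrative assumption,
 * not the authors' formula. Higher value = higher priority. */
#include <stdint.h>

static int task_priority(uint32_t exec_time, int n_dependents) {
    /* Favor tasks that release many dependents, then longer tasks. */
    return n_dependents * 1000 + (int)exec_time;
}
```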
Experimental Setup • Tij: Task j can start after Task i has run for Tij time units • i: row index, j: column index • Example: T01 = 100 means T1 starts after T0 has run for 100 seconds
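The precedence matrix above can be encoded directly as a 2D array, where `T[i][j]` holds the time Task i must run before Task j may start (0 meaning no constraint). Only the T01 = 100 entry comes from the slide; the matrix size and the helper function are illustrative assumptions:

```c
/* Precedence matrix encoding of the Tij constraints; only T[0][1]
 * is taken from the slides, the rest is an illustrative sketch. */
#include <stdint.h>

#define N_TASKS 3

static const uint32_t T[N_TASKS][N_TASKS] = {
    /*        j=0   j=1   j=2 */
    /* i=0 */ { 0,  100,    0 },  /* T01 = 100: T1 starts after T0 runs 100 s */
    /* i=1 */ { 0,    0,    0 },
    /* i=2 */ { 0,    0,    0 },
};

/* Earliest start offset of task j, given all its predecessors. */
static uint32_t release_offset(int j) {
    uint32_t max_off = 0;
    for (int i = 0; i < N_TASKS; i++)
        if (T[i][j] > max_off)
            max_off = T[i][j];
    return max_off;
}
```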
Simulation Result • [Figure: ready-task sets {T2, T3, T5}, {T5}, {T4, T5}, {T1, T2, T3}, {T3, T4, T5} plotted against times 100, 200, 300, 400, 500]
Conclusion • Attempts to increase utilization of multi-core processors. • Task execution is not limited to a single core. • Cores incur additional wait time because accessing the shared task structure requires a lock.