Implement high-level parallel API in JDK
Richard Ning – Enterprise Developer, 1st June 2013
Important Disclaimers
THE INFORMATION CONTAINED IN THIS PRESENTATION IS PROVIDED FOR INFORMATIONAL PURPOSES ONLY. WHILST EFFORTS WERE MADE TO VERIFY THE COMPLETENESS AND ACCURACY OF THE INFORMATION CONTAINED IN THIS PRESENTATION, IT IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED. ALL PERFORMANCE DATA INCLUDED IN THIS PRESENTATION HAVE BEEN GATHERED IN A CONTROLLED ENVIRONMENT. YOUR OWN TEST RESULTS MAY VARY BASED ON HARDWARE, SOFTWARE OR INFRASTRUCTURE DIFFERENCES. ALL DATA INCLUDED IN THIS PRESENTATION ARE MEANT TO BE USED ONLY AS A GUIDE. IN ADDITION, THE INFORMATION CONTAINED IN THIS PRESENTATION IS BASED ON IBM’S CURRENT PRODUCT PLANS AND STRATEGY, WHICH ARE SUBJECT TO CHANGE BY IBM, WITHOUT NOTICE. IBM AND ITS AFFILIATED COMPANIES SHALL NOT BE RESPONSIBLE FOR ANY DAMAGES ARISING OUT OF THE USE OF, OR OTHERWISE RELATED TO, THIS PRESENTATION OR ANY OTHER DOCUMENTATION. NOTHING CONTAINED IN THIS PRESENTATION IS INTENDED TO, OR SHALL HAVE THE EFFECT OF, CREATING ANY WARRANTY OR REPRESENTATION FROM IBM, ITS AFFILIATED COMPANIES OR ITS OR THEIR SUPPLIERS AND/OR LICENSORS.
Introduction to the speaker
• Developing enterprise application software since 1999 (C++, Java)
• Recent work focus: IBM JDK development
• Contact: huaningnh@gmail.com
What should you get from this talk?
By the end of this session, you should be able to:
• Understand how parallel computing works on multi-core hardware
• Understand the implementation of a high-level parallel API in the JDK
Agenda
1. Introduction: multi-threading, multi-cores, parallel computing
2. Case study
3. Other high-level parallel APIs
4. Roadmap
Introduction
• Multi-threading
• Parallel computing
• Multi-core computers
Case study
• Execute the same task for every element in a loop
• Use multi-threading for the execution (a raw-thread sketch follows below)
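Not on the slides: a minimal Java sketch, assuming the per-element task is just squaring each entry of an int array, of how the loop could be split across raw threads by hand. The class name and the chunking scheme are illustrative only.

public class RawThreadLoop {
    public static void main(String[] args) throws InterruptedException {
        int[] data = new int[1000];
        int nThreads = 4;                                // hard-coded thread count
        Thread[] threads = new Thread[nThreads];
        int chunk = data.length / nThreads;

        for (int t = 0; t < nThreads; t++) {
            final int start = t * chunk;
            final int end = (t == nThreads - 1) ? data.length : start + chunk;
            threads[t] = new Thread(() -> {
                for (int i = start; i < end; i++) {
                    data[i] = i * i;                     // the "same task" for every element
                }
            });
            threads[t].start();                          // the user creates and manages each thread
        }
        for (Thread t : threads) {
            t.join();                                    // wait for every chunk to finish
        }
    }
}

Note how the thread count and the split are fixed by hand, which is exactly the inflexibility criticized on the following slides.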
[Diagram: multi-threading on a computer with one core; threads t1 and t2 are time-sliced on a single CPU over time]
With a single core, CPU usage is already 100% with one thread, so adding threads:
• Can't improve performance
• Can even decrease performance because of the extra thread-switching overhead
• Makes a multi-threading (parallel) API useless on such hardware
[Diagram: on a multi-core machine each thread runs on its own core; threads t1–t4 run in parallel on Core 1–Core 4 over time]
Raw threads: any improvement? Enter the Executor.
Disadvantages of raw threads:
• Users have to create and manage the threads themselves
• Not flexible – the number of threads is hard to configure well
  • More threads than cores: resources are wasted on thread context switching, which can even decrease performance
  • Fewer threads than cores: some cores sit idle
• No load balancing – the calculation can't be allocated evenly across the cores
Executor:
• Separates thread creation from task execution
• Uses a thread pool to reuse threads (see the sketch below)
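A minimal sketch (class name assumed, not from the slides) of the same per-element loop expressed with java.util.concurrent's ExecutorService: the pool owns and reuses the threads, and the caller only submits tasks.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ExecutorLoop {
    public static void main(String[] args) throws InterruptedException {
        int[] data = new int[1000];
        ExecutorService pool = Executors.newCachedThreadPool();   // thread creation handled by the pool

        for (int i = 0; i < data.length; i++) {
            final int idx = i;
            pool.submit(() -> { data[idx] = idx * idx; });        // one small task per element
        }

        pool.shutdown();                                          // stop accepting new tasks
        pool.awaitTermination(1, TimeUnit.MINUTES);               // wait for the submitted tasks
    }
}

Submitting one tiny task per element is deliberate here: it is the pattern the next slide criticizes.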
The Executor API is easy to use: users only submit the tasks and the data range, and don't need to care about how they are executed. However, it still has disadvantages:
• A task processes one element at a time, which isn't efficient
• Each task is bound to a thread, which isn't flexible
• The number of threads in the pool isn't aligned with the number of cores
[Diagram: 4 cores, n threads in the pool, m tasks; overloading when n >> 4 (far more threads than cores), and inflexibility when m > n (more tasks than threads)]
When the thread count doesn't align with the core count, use a fixed thread pool sized to the core count (thread number = core number), as sketched below.
[Diagram: 4 cores and a thread pool of exactly 4 threads]
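A short sketch of creating such a pool with the standard Executors factory, sized by Runtime.availableProcessors(); the class name is illustrative.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class FixedPoolExample {
    public static void main(String[] args) {
        // Size the pool to the number of available cores so threads and cores
        // line up roughly one-to-one, avoiding oversubscription.
        int cores = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(cores);
        System.out.println("Pool sized to " + cores + " threads");
        pool.shutdown();
    }
}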
Task division: another division strategy, ForkJoinPool (divide and conquer):
1. Divide a big task into small tasks recursively (fork)
2. Execute the same operation on every small task
3. Join the results of the small tasks
[Diagram: a task tree in which Task1 forks into Task2–Task7 and the partial results are joined back up]
A minimal fork/join sketch follows below.
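A minimal sketch of the fork/join pattern, assuming the divided work is summing an int array; the class name and threshold are illustrative, while ForkJoinPool and RecursiveTask are the real java.util.concurrent classes available since Java 7.

import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

public class SumTask extends RecursiveTask<Long> {
    private static final int THRESHOLD = 1_000;    // below this, compute directly
    private final int[] data;
    private final int start, end;

    SumTask(int[] data, int start, int end) {
        this.data = data;
        this.start = start;
        this.end = end;
    }

    @Override
    protected Long compute() {
        if (end - start <= THRESHOLD) {            // small enough: conquer directly
            long sum = 0;
            for (int i = start; i < end; i++) sum += data[i];
            return sum;
        }
        int mid = (start + end) >>> 1;
        SumTask left = new SumTask(data, start, mid);
        SumTask right = new SumTask(data, mid, end);
        left.fork();                               // divide: run the left half asynchronously
        long rightResult = right.compute();        // compute the right half in this thread
        return left.join() + rightResult;          // join: combine the partial results
    }

    public static void main(String[] args) {
        int[] data = new int[100_000];
        for (int i = 0; i < data.length; i++) data[i] = 1;
        long total = new ForkJoinPool().invoke(new SumTask(data, 0, data.length));
        System.out.println(total);                 // prints 100000
    }
}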
ForkJoinPool is better suited to divide-and-conquer problems, but:
• The task-splitting strategy comes from the user and isn't tuned to the actual running conditions
• The previous issues (thread oversubscription, starvation, unbalanced load) still exist
Initial status:
• Tasks are allocated equally, one thread per core
• Every thread maintains its own task queue of affiliated tasks
[Diagram: 4 cores, 4 threads, each thread with a queue of 5 tasks, e.g. thread 1 holds tasks 1, 6, 11, 16]
[Diagram: unbalanced load; some task queues have drained while one thread's queue still holds several tasks]
[Diagram: load is rebalanced by task stealing and by adding new tasks; idle threads steal tasks from busier queues]
A parallel API with the new working mechanism: concurrent_for
• Range: the range of the data set, [0, n)
• Strategy: how the range is divided: automatic, or static with a granularity
• Task: the operation executed over the range (a hypothetical sketch follows below)
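Hypothetical sketch only: concurrent_for is the API proposed in this talk, so no Java signature for it exists in the JDK; the ConcurrentFor class, RangeTask interface and granularity parameter below are illustrative, and the body simply shows one possible arrangement of the three ingredients on top of ForkJoinPool's recursive splitting.

import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveAction;

public class ConcurrentFor {
    public interface RangeTask {
        void run(int index);                           // the same operation for each element
    }

    // Apply the task to the range [0, n) using a static strategy with the given granularity.
    public static void apply(int n, int granularity, RangeTask task) {
        new ForkJoinPool().invoke(new Split(0, n, granularity, task));
    }

    private static class Split extends RecursiveAction {
        final int begin, end, granularity;
        final RangeTask task;
        Split(int begin, int end, int granularity, RangeTask task) {
            this.begin = begin; this.end = end;
            this.granularity = granularity; this.task = task;
        }
        @Override protected void compute() {
            if (end - begin <= granularity) {          // leaf: run the task sequentially
                for (int i = begin; i < end; i++) task.run(i);
            } else {                                   // otherwise split the range in half
                int mid = (begin + end) >>> 1;
                invokeAll(new Split(begin, mid, granularity, task),
                          new Split(mid, end, granularity, task));
            }
        }
    }

    public static void main(String[] args) {
        int[] data = new int[10_000];
        apply(data.length, 256, i -> data[i] = i * i); // range [0, n), granularity 256
        System.out.println(data[100]);                 // prints 10000
    }
}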
Other high-level parallel APIs
• concurrent_while: the data set can grow while it is being processed concurrently
• concurrent_reduce: uses divide/join-based tasks to return a calculation result
• concurrent_sort: sorts a data set concurrently
• Math calculations, for example multiplying one matrix by another:
  int[5][10] matrix1, int[10][5] matrix2
  int[5][5] matrix3 = matrix1 * matrix2
  becomes
  int[5][5] matrix3 = concurrent_multiply(matrix1, matrix2)
(a hedged sketch of such a concurrent_multiply follows below)
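The slide only names concurrent_multiply, so the following is a hedged sketch of one shape such a call could take, parallelizing over the rows of the result with a core-sized thread pool; the class name, method signature and pool choice are all assumptions.

import java.util.Arrays;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ConcurrentMultiply {
    static int[][] multiply(final int[][] a, final int[][] b) throws InterruptedException {
        final int rows = a.length, inner = b.length, cols = b[0].length;
        final int[][] c = new int[rows][cols];
        ExecutorService pool =
                Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());
        for (int r = 0; r < rows; r++) {
            final int row = r;
            pool.submit(() -> {                        // each task computes one row of the result
                for (int j = 0; j < cols; j++) {
                    int sum = 0;
                    for (int k = 0; k < inner; k++) sum += a[row][k] * b[k][j];
                    c[row][j] = sum;
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);    // wait for all rows to be filled in
        return c;
    }

    public static void main(String[] args) throws InterruptedException {
        int[][] m1 = new int[5][10];
        int[][] m2 = new int[10][5];
        for (int[] row : m1) Arrays.fill(row, 1);
        for (int[] row : m2) Arrays.fill(row, 1);
        int[][] m3 = multiply(m1, m2);
        System.out.println(m3[0][0]);                  // prints 10 (the inner dimension)
    }
}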
In all of these cases, we can achieve a performance improvement through parallel computing on multi-core hardware.
Roadmap: implement the high-level parallel API in the JDK based on the new task scheduler
• High performance
• Correct
• Scalable
• Portable
Review of objectives
Now that you’ve completed this session, you are able to:
• Understand what parallel computing is and what it is good for
• Understand the design of the new task-based parallel API