University of Technology, Computer Engineering and Information Technology Department
Parallel Virtual Machine
Issued by: Ameer Mosa Al_Saadi
Agenda
1. High Power Computing (HPC)
2. Computing platform evolution
3. Orientation toward PVM
4. Initiation of PVM from the console
5. PVM configuration
6. Abstract PVM library commands
7. Compiling and running a program
8. Ten Years of Cluster Computing
High Power Computing (HPC) Drivers
• The world of modern computing offers many helpful methods and tools for scientists and engineers, helping them apply theories, methods, and original applications in areas such as:
• Parallelism.
• Large-scale simulations.
• Time-critical computing.
• Computer-aided design and manufacturing (CAD/CAM).
• Visualization of scientific data and human-machine interface technology.
• We need to solve grand-challenge applications using computer modeling, simulation, and analysis, in domains such as life sciences, aerospace, digital biology, CAD/CAM, military applications, and e-commerce.
How to Run Applications Faster?
• There are three ways to improve performance:
• 1. Work harder.
• 2. Work smarter.
• 3. Get help (parallelism).
• Computer analogy:
• 1. Use faster hardware: e.g. reduce the time per instruction (clock cycle).
• 2. Use optimized algorithms and techniques.
• 3. Use multiple computers to solve the problem: that is, increase the number of instructions executed per clock cycle.
Progress Diagram: the "Computer Food Chain" in three phases: 1984, 1994, and now/future.
Orientation toward PVM
• How can I parallelize computation across computers?
• Answer: with a message-passing system.
• What is message passing? Why do I care?
• Message passing allows two processes to:
• Exchange information.
• Synchronize with each other.
• This is the need that a tool like Parallel Virtual Machine (PVM) fills.
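The two roles of message passing named above, exchanging information and synchronizing, can be shown without PVM at all. This is a minimal sketch in plain C using a POSIX pipe between two local processes (the function name demo_exchange is ours, not part of any library); in PVM the same pattern runs across machines:

```c
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

/* Two processes exchange a message over a pipe: the child sends
   "Hello World", the parent blocks in read() until it arrives.
   The blocking receive is also the synchronization point. */
int demo_exchange(char *buf, size_t len)
{
    int fd[2];
    if (pipe(fd) != 0)
        return -1;

    pid_t pid = fork();
    if (pid < 0)
        return -1;

    if (pid == 0) {                      /* child: the "sender" */
        close(fd[0]);
        const char *msg = "Hello World";
        write(fd[1], msg, strlen(msg) + 1);
        close(fd[1]);
        _exit(0);
    }

    close(fd[1]);                        /* parent: the "receiver" */
    ssize_t n = read(fd[0], buf, len);   /* blocks until the message arrives */
    close(fd[0]);
    waitpid(pid, NULL, 0);
    return n > 0 ? 0 : -1;
}
```

The blocking read() plays the role pvm_recv() plays in PVM: the receiver cannot continue until the sender's data has arrived, so the two processes are both exchanging data and synchronizing.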
PVM Resources
• Web site: www.epm.ornl.gov/pvm/pvm_home.html
• Book: PVM: Parallel Virtual Machine. A Users' Guide and Tutorial for Networked Parallel Computing. Al Geist, Adam Beguelin, Jack Dongarra, Weicheng Jiang, Robert Manchek, Vaidy Sunderam. www.netlib.org/pvm3/book/pvm-book.html
PVM definition
• PVM is a software tool for parallel networking of computers.
• PVM provides a single interface and environment for exploiting the resources of heterogeneous computers interconnected by a network, executing tasks with the help of a message-passing system, so that they can be used as a coherent and flexible concurrent computational resource: a "Parallel Virtual Machine".
Popular PVM Uses
• Poor man's supercomputer
• Beowulf (PC) clusters; Linux, Solaris, NT
• cobble together whatever resources you can get
• Metacomputer linking multiple supercomputers
• ultimate performance: e.g. runs have combined nearly 3000 processors and up to 53 supercomputers
• Education tool
• teaching parallel programming
• academic and thesis research
What must PVM provide?
Heterogeneous virtual machine support for:
• Resource management: add/delete hosts from a virtual machine.
• Process control: spawn/kill tasks dynamically.
• Message passing: blocking send, blocking and non-blocking receive, multicast (mcast).
• Dynamic task groups: a task can join or leave a group at any time.
• Fault tolerance: the VM automatically detects faults and adjusts.
PVM Model
• PVM daemon (pvmd3 or pvmd): runs on each node, accepting remote connections and connecting to remote machines.
• Interface (PVM library, libpvm3.a): a library of functions (send/receive, task control) that the programmer uses from C, C++, or Fortran.
• Environment: execution units (processors), memories, network, etc.
Diagram: two applications, each a user program linked with libpvm3, communicating through the pvmd3 daemons.
Levels of Parallelism (code granularity vs. code item)
• Large grain (task level): program — PVM tasks.
• Medium grain (control level): function — threads.
• Fine grain (data level): loop — compiler.
• Very fine grain (multiple issue): individual instructions — hardware.
PVM Task
• A parallel computation is divided into a sequence of tasks, which can execute in parallel.
• Tasks can start on separate nodes; once started, they do not migrate.
• Each task has a unique identifier, the TID, created by the PVM daemon.
• Messages are addressed with the help of the TID.
• Tasks can be arranged into groups.
• A task is implemented as an OS process.
pvmd daemon execution
Master: usually started from the console. Creates a socket to communicate with tasks and other pvmds, reads the hostfile, and starts a slave pvmd on each remote node.
Slave: receives parameters from the master through arguments and a configuration message; returns results to the master.
Master: waits for all tasks to end, then assembles the final results.
Initiation of PVM from the console
# pvm
pvm>
# pvm hostfile
hostfile: a file listing the nodes that are to form the virtual machine, one host name per row.
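A minimal hostfile might look like this (the host names here are hypothetical; in the simplest form it is one name per line, with # starting a comment):

```
# nodes that form the virtual machine
node01
node02
node03
```

Passing this file to pvm starts the master pvmd locally and a slave pvmd on each listed host.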
PVM configuration
PVM console instructions:
• add hostname / delete hostname (add or remove a host).
• conf (show the actual configuration).
• halt (shut down the environment).
• quit (leave the console).
• spawn (start a new task).
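An illustrative console session using the commands above (host names are hypothetical and the output is abbreviated and approximate, so details may differ on a real installation):

```shell
pvm> add node02
1 successful
                 HOST     DTID
               node02    80000
pvm> conf
2 hosts, 1 data format
                 HOST     DTID     ARCH   SPEED
               node01    40000    LINUX    1000
               node02    80000    LINUX    1000
pvm> halt
```

halt shuts down every pvmd in the virtual machine, while quit would leave the console running in the background.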
XPVM
An XPVM screenshot provides visual information about machine utilization, message flows, and the configuration of the PVM cluster.
Abstract PVM library commands
• To create a task id (TID): tid = pvm_mytid();
• To spawn tasks on other computers: numt = pvm_spawn();
• To recognize a worker from the supervisor: pvm_parent();
• To send required data to task TID: pvm_pkdatatype(); pvm_send();
• To receive results from the workers, or the reverse: pvm_recv(); pvm_upkdatatype();
• To exit PVM: pvm_exit();
Program steps: create → spawn → recognize → send → execute → receive → exit.
Example program (hello.c, the supervisor):

#include <stdio.h>
#include "pvm3.h"

int main()
{
    int cc, tid;
    char buf[100];

    /* print our own task id */
    printf("i'm t%x\n", pvm_mytid());

    /* spawn one copy of the worker program */
    cc = pvm_spawn("hello_other", (char **)0, 0, "", 1, &tid);

    if (cc == 1) {
        printf("started hello_other\n");
        cc = pvm_recv(-1, -1);              /* wait for any message */
        pvm_bufinfo(cc, (int *)0, (int *)0, &tid);
        pvm_upkstr(buf);
        printf("from t%x: %s\n", tid, buf);
    } else {
        printf("can't start hello_other\n");
    }

    pvm_exit();
    return 0;
}
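The worker program hello_other, which the supervisor spawns, is not shown on the slides. A sketch along the lines of the example in the PVM User's Guide (it needs a PVM installation to compile, so treat it as illustrative rather than verbatim):

```c
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include "pvm3.h"

int main()
{
    int ptid;
    char buf[100];

    ptid = pvm_parent();                 /* TID of the task that spawned us */

    strcpy(buf, "hello, world from ");
    gethostname(buf + strlen(buf), 64);  /* append this node's host name */

    pvm_initsend(PvmDataDefault);        /* initialize the send buffer */
    pvm_pkstr(buf);                      /* pack the string */
    pvm_send(ptid, 1);                   /* send it to the supervisor, tag 1 */

    pvm_exit();
    return 0;
}
```

Note how the two programs mirror the abstract command list: the supervisor spawns and receives, the worker uses pvm_parent() to find its supervisor, then packs and sends.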
Hello World – PVM Style
Process A
• Initialize
• Send(B, "Hello World")
• Recv(B, String)
• Print String → "Hi There"
• Finalize
Process B
• Initialize
• Recv(A, String)
• Print String → "Hello World"
• Send(A, "Hi There")
• Finalize
Compiling and running a program
To compile any C++ program on Linux, use:
# g++ hello.cpp
To run it, use:
# ./a.out
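The g++ line above covers a plain C++ program; a PVM program additionally needs the PVM headers and library. A typical build, assuming a conventional PVM installation where PVM_ROOT and PVM_ARCH are set in the environment (paths vary by site, so treat these commands as a sketch):

```shell
# compile and link the supervisor and worker against libpvm3
gcc hello.c       -I"$PVM_ROOT/include" -L"$PVM_ROOT/lib/$PVM_ARCH" -lpvm3 -o hello
gcc hello_other.c -I"$PVM_ROOT/include" -L"$PVM_ROOT/lib/$PVM_ARCH" -lpvm3 -o hello_other

# spawned executables are looked up under ~/pvm3/bin/$PVM_ARCH by default
cp hello_other ~/pvm3/bin/"$PVM_ARCH"/

# run the supervisor; it spawns hello_other through pvmd
./hello
```

The copy step matters: pvm_spawn() does not search the current directory; the daemon looks for the named executable in its own search path.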
Ten Years of Cluster Computing
Building a cluster-computing environment for 21st-century applications: from Networks of Workstations, to PC clusters, to wide-area GRID experiments.
Timeline (1989–2000): PVM-1, PVM-2, PVM-3, PVM-3.4, Harness; work spanning ORNL, Sandia, and PSC.
The End. Thanks for your attention.
1984 Computer Food Chain: PC, Workstation, Mini Computer, Mainframe, Vector Supercomputer.
1994 Computer Food Chain: PC, Workstation, Mini Computer (hitting wall soon), Mainframe (future is bleak), Vector Supercomputer, MPP.