Chapter 11: Distributed Processing. Parallel programming • Principles of parallel programming languages • Concurrent execution • Programming constructs • Guarded commands • Tasks • Persistent systems • Client-server computing
Parallel processing • The execution of more than one program or subprogram simultaneously. • A subprogram that can execute concurrently with other subprograms is called a task or a process. • Hardware-supported: • multiprocessor systems • distributed computer systems • Software-simulated: time-sharing
Principles of parallel programming languages Variable definitions • mutable: values may be assigned to variables and changed during program execution (as in sequential languages) • definitional: a variable may be assigned a value only once
Principles…. Parallel composition • A parallel statement causes additional threads of control to begin executing. Execution models (program structure) • Transformational: transform an input into an output, e.g., parallel matrix multiplication • Reactive: respond to external events as they occur
Principles…. • Communication: • shared memory, with common data objects accessed by each parallel program • messages • Synchronization: • parallel programs must be able to coordinate their actions
Concurrent execution • Programming constructs • Using parallel execution primitives of the operating system (a C program can invoke the fork operation of Unix) • Using parallel constructs • A programming-language parallel construct indicates parallel execution
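The first approach above, calling an OS primitive directly, can be sketched in Python, which exposes the same Unix fork operation the slide mentions. This is a minimal Unix-only sketch; the pipe and the message text are illustrative choices, not part of the original material.

```python
import os

def fork_demo():
    """Spawn a child process with the Unix fork primitive and collect its result."""
    r, w = os.pipe()               # one-way channel from child to parent
    pid = os.fork()
    if pid == 0:                   # child process: fork returned 0
        os.close(r)
        os.write(w, b"child done")
        os._exit(0)
    os.close(w)                    # parent process: fork returned the child's pid
    msg = os.read(r, 64)
    os.close(r)
    os.waitpid(pid, 0)             # parent waits for the child to terminate
    return msg.decode()

if __name__ == "__main__":
    print(fork_demo())
```

Note that the program, not the language, manages the parallelism here: fork is an operating-system service, which is exactly the contrast the slide draws with language-level parallel constructs.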
Example • AND statement (programming language level) • Syntax: • statement1 and statement2 and … and statementN • Semantics: • All statements execute in parallel. • call ReadProcess and • call WriteProcess and • call ExecuteUserProgram ;
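Few mainstream languages have an AND statement, but its semantics can be approximated with threads: start every statement at once, then continue only after all of them finish. The helper name `run_in_parallel` and the three sample statements are illustrative, not from the original.

```python
import threading

def run_in_parallel(*statements):
    """Approximate 'S1 and S2 and ... and Sn': run every statement in its own
    thread, then wait for all of them before the program continues."""
    threads = [threading.Thread(target=s) for s in statements]
    for t in threads:
        t.start()
    for t in threads:
        t.join()                  # the AND statement completes only when all parts do

# stand-ins for ReadProcess, WriteProcess, ExecuteUserProgram
results = []
run_in_parallel(lambda: results.append("read"),
                lambda: results.append("write"),
                lambda: results.append("execute"))
```

The order in which the three statements append their results is nondeterministic, which mirrors the AND statement's semantics: all parts execute, but in no guaranteed order.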
Guarded commands • Guard: a condition that can be true or false • Guards are associated with statements • A statement is executed when its guard becomes true
Example • Guarded if: • if B1 → S1 [] B2 → S2 [] … [] Bn → Sn fi • Guarded repetition statement: • do B1 → S1 [] B2 → S2 [] … [] Bn → Sn od • Bi are guards, Si are statements
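The guarded repetition statement can be simulated in an ordinary language: loop, find the statements whose guards are true, run one, and terminate when no guard holds. The sketch below, including the gcd example (a classic illustration of guarded commands), is an assumption-laden simulation, and a real implementation would pick among ready statements nondeterministically rather than taking the first.

```python
def guarded_do(guarded_statements):
    """do B1 -> S1 [] ... [] Bn -> Sn od: repeatedly execute some statement
    whose guard is true; terminate when every guard is false."""
    while True:
        ready = [stmt for guard, stmt in guarded_statements if guard()]
        if not ready:
            return                # all guards false: the do...od terminates
        ready[0]()                # deterministic choice; Dijkstra allows any ready one

# gcd(12, 18) expressed with two guarded commands
state = {"x": 12, "y": 18}
guarded_do([
    (lambda: state["x"] > state["y"],
     lambda: state.update(x=state["x"] - state["y"])),
    (lambda: state["y"] > state["x"],
     lambda: state.update(y=state["y"] - state["x"])),
])
```

When the loop ends, both guards are false, so x == y == gcd of the starting values.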
Tasks • Subprograms that run in parallel with the program that has initiated them • Dependent on the initiating program • The initiating program cannot terminate until all of its dependents terminate • A task may have multiple simultaneous activations
Task interaction • Tasks unaware of each other • Tasks indirectly aware of each other • use shared memory • Tasks directly aware of each other
Control Problems • Mutual exclusion • Deadlock: • P1 waits for an event to be produced by P2 • P2 waits for an event to be produced by P1 • Starvation: • P1, P2, P3 need a non-shareable resource • P1 and P2 alternately use the resource • P3 is never granted access to the resource
Mutual exclusion • Two tasks require access to a single non-shareable resource. • Critical resource - the resource in question. • Critical section - the portion of the program that uses the resource. • The rule: only one task at a time may execute in its critical section
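The rule above is what a lock enforces. A minimal sketch: four threads increment a shared counter, and the critical section (the increment) is protected so only one thread executes it at a time. The thread count and iteration count are arbitrary illustration values.

```python
import threading

counter = 0                       # the shared, non-shareable critical resource
lock = threading.Lock()

def increment(n):
    global counter
    for _ in range(n):
        with lock:                # critical section: one task at a time
            counter += 1          # read-modify-write; unsafe without the lock

threads = [threading.Thread(target=increment, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Without the lock, two threads can read the same value of `counter` before either writes back, losing increments; with it, the final value is exactly 4 x 10,000.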
Synchronization of Tasks Interrupts - provided by the OS. Semaphores - shared data objects, with two primitive operations - signal and wait. Messages - information is sent from one task to another; the sending task may continue to execute. Guarded commands - force synchronization by ensuring conditions are met before executing tasks. Rendezvous - similar to messages, but the sending task waits for an answer.
Semaphores A semaphore is a variable that holds an integer value • May be initialized to a nonnegative number • The wait operation decrements the semaphore value; a task that would drive the value below zero blocks • The signal operation increments the semaphore value and wakes a blocked task, if any
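To make the wait/signal semantics concrete, here is one way a counting semaphore can be built from a condition variable. This is a sketch of the idea, not the implementation any particular runtime uses; the class name is invented for illustration.

```python
import threading

class CountingSemaphore:
    """Sketch of a counting semaphore: wait decrements the value, blocking
    while it is zero; signal increments the value and wakes one waiter."""

    def __init__(self, value=0):
        self._value = value               # nonnegative initial value
        self._cond = threading.Condition()

    def wait(self):
        with self._cond:
            while self._value == 0:       # would go negative: block instead
                self._cond.wait()
            self._value -= 1

    def signal(self):
        with self._cond:
            self._value += 1
            self._cond.notify()           # wake one blocked task, if any
```

In practice one would use the library's own semaphore type (in Python, `threading.Semaphore`); the point of the sketch is only to show where the decrement, the blocking, and the wake-up happen.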
Mutual exclusion with semaphores Each task performs: wait(s); /* critical section */ signal(s); [Figure: a buffer B[0] B[1] B[2] B[3] B[4] … with IN and OUT pointers] The Producer/Consumer problem with infinite buffer
Solution: • s - semaphore for entering the critical section (initialized to 1) • delay - semaphore to ensure reading from a non-empty buffer (initialized to 0) Producer: produce(); wait(s); append(); signal(delay); signal(s); Consumer: wait(delay); wait(s); take(); signal(s); consume();
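The two-semaphore solution above can be run directly with library semaphores. The comments map each line back to the slide's wait/signal calls; the item values and counts are illustrative.

```python
import threading

s = threading.Semaphore(1)        # guards the critical section (the buffer)
delay = threading.Semaphore(0)    # counts items: consumer blocks while buffer is empty
buffer, consumed = [], []

def producer(items):
    for item in items:            # produce()
        s.acquire()               # wait(s)
        buffer.append(item)       # append()
        delay.release()           # signal(delay): one more item available
        s.release()               # signal(s)

def consumer(n):
    for _ in range(n):
        delay.acquire()           # wait(delay): block until the buffer is non-empty
        s.acquire()               # wait(s)
        consumed.append(buffer.pop(0))  # take()
        s.release()               # signal(s)
        # consume()

p = threading.Thread(target=producer, args=(range(5),))
c = threading.Thread(target=consumer, args=(5,))
c.start(); p.start(); p.join(); c.join()
```

Because `delay` starts at 0, the consumer cannot enter its critical section before the producer has appended at least one item, which is exactly what the slide's wait(delay) guarantees.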
Persistent systems Traditional software: • data stored outside the program - persistent data • data processed in main memory - transient data. Persistent languages make no distinction between persistent and transient data, and automatically reflect changes in the database
Design issues • A mechanism to indicate that an object is persistent • A mechanism to address a persistent object • Simultaneous access to an individual persistent object - semaphores • Checking type compatibility of persistent objects - structural equivalence
Client-server computing Network models: • centralized, where a single processor does the scheduling • distributed or peer-to-peer, where each machine is an equal and scheduling is spread among all of the machines
Client-server mediator architecture • Client machine: • interacts with the user • has a protocol to communicate with the server • Server: provides services - retrieves data and/or programs. Issues: • may be communicating with multiple clients simultaneously • needs to keep each such transaction separate • multiple local address spaces in the server
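The client/server split above can be shown in a few lines of socket code: the server accepts a connection and answers one request, while the client connects, sends a request using an agreed protocol, and reads the reply. This is a deliberately minimal single-client sketch; the "echo" protocol and port choice are illustrative, and a real server would loop, handling many clients concurrently while keeping each transaction separate.

```python
import socket
import threading

def server(listener):
    """Toy server: accept one client, serve one request, reply."""
    conn, _ = listener.accept()
    request = conn.recv(1024)            # the client's request
    conn.sendall(b"echo: " + request)    # the service: echo the data back
    conn.close()

listener = socket.socket()
listener.bind(("127.0.0.1", 0))          # port 0: let the OS pick a free port
listener.listen(1)
port = listener.getsockname()[1]
t = threading.Thread(target=server, args=(listener,))
t.start()

# Client side: connect, speak the protocol, collect the reply.
client = socket.create_connection(("127.0.0.1", port))
client.sendall(b"hello")
reply = client.recv(1024)
client.close()
t.join()
listener.close()
```

Serving each accepted connection in its own thread (or process) is the usual way a server talks to multiple clients simultaneously while keeping their transactions in separate address spaces or thread contexts.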