
Presentation Transcript


  1. CS 484

  2. Discrete Optimization Problems
  • A discrete optimization problem can be expressed as a pair (S, f)
    • S is the set of all feasible solutions
    • f is the cost function
  • Goal: find a feasible solution x_opt such that f(x_opt) <= f(x) for all x in S
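A minimal sketch of the (S, f) formulation on a hypothetical toy instance (the items, target, and function names below are illustrative, not from the slides). It simply enumerates S and compares costs, which is only feasible when S is small:

    from itertools import combinations

    def brute_force_minimize(S, f):
        """Return x_opt in S with f(x_opt) <= f(x) for every x in S."""
        return min(S, key=f)

    # Hypothetical toy instance: pick a subset of items whose total weight
    # is as close as possible to a target.
    items = [4, 7, 1, 9, 3]
    target = 12
    S = [c for r in range(len(items) + 1) for c in combinations(items, r)]  # all feasible solutions
    f = lambda x: abs(sum(x) - target)                                      # cost function

    x_opt = brute_force_minimize(S, f)
    print(x_opt, f(x_opt))   # e.g. a subset summing to 12, cost 0

Real instances make S far too large to enumerate, which is what motivates the search formulations on the next slides.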

  3. Discrete Optimization Problems
  • Examples
    • VLSI layout
    • Robot motion planning
    • Test pattern generation
  • In most problems, S is very large
  • S can be converted to a state-space graph, so the optimization problem can be reformulated as a search problem.

  4. Discrete Optimization Problems
  • NP-hard
  • Why parallelize?
    • Consider real-time problems
      • robot motion planning
      • speech understanding
      • task scheduling
    • Faster search through bigger search spaces.

  5. Search Algorithms
  • Depth First Search
  • Breadth First Search
  • Best First Search
  • Branch and Bound
    • Uses cost to determine expansion order
  • Iterative Deepening A*
    • Uses cost + heuristic value to determine expansion order
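These algorithms share one search skeleton and differ mainly in which frontier node is expanded next. The sketch below (a hypothetical toy state space, not from the slides) parametrizes that choice with a priority function: plain cost gives branch-and-bound ordering and cost plus a heuristic gives A*-style ordering (IDA* uses the same cost + heuristic value, but as an iterative-deepening threshold rather than as a queue key).

    import heapq, itertools

    def tree_search(start, successors, is_goal, priority):
        """One skeleton for the listed algorithms; priority(depth, cost, node) decides
        the expansion order: cost -> branch and bound, cost + heuristic -> A*-style,
        depth -> breadth-first, -depth -> depth-first."""
        tie = itertools.count()              # tie-breaker so the heap never compares nodes
        frontier = [(priority(0, 0.0, start), next(tie), start, 0, 0.0)]
        while frontier:
            _, _, node, depth, cost = heapq.heappop(frontier)
            if is_goal(node):
                return node, cost
            for child, step_cost in successors(node):
                c = cost + step_cost
                heapq.heappush(frontier, (priority(depth + 1, c, child), next(tie), child, depth + 1, c))
        return None, float("inf")

    # Hypothetical toy: walk from 1 toward 10 by adding 1 (cost 1) or doubling (cost 3).
    succ = lambda n: [(n + 1, 1.0), (n * 2, 3.0)] if n < 10 else []
    goal = lambda n: n == 10
    h = lambda n: max(0, 10 - n) / 2.0       # made-up heuristic for illustration

    print(tree_search(1, succ, goal, priority=lambda d, c, n: c))         # branch and bound: cost only
    print(tree_search(1, succ, goal, priority=lambda d, c, n: c + h(n)))  # cost + heuristic value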

  6. Parallel Depth First Search
  • The critical issue is distribution of the search space.
  • Static partitioning of unstructured trees leads to poor load balancing.

  7. Dynamic Load Balancing • Consider sequential DFS

  8. Parallel DFS
  • Each processor performs DFS on a disjoint section of the tree (static initial assignment).
  • When a processor finishes its section, it requests unsearched portions of the tree from other processors.
  • Unexplored sections are stored on the DFS stack.
  • A donor pops a section off its stack and gives it to the requester.
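A sketch of the per-processor loop this slide describes. The helpers request_work_from, receive_work, and pick_donor are hypothetical stand-ins for the real communication layer; they are injected as parameters so the skeleton stays self-contained.

    def parallel_dfs_worker(my_initial_nodes, expand, is_goal,
                            request_work_from, receive_work, pick_donor):
        stack = list(my_initial_nodes)            # static initial assignment of tree sections
        while True:
            while stack:
                node = stack.pop()                # ordinary depth-first expansion
                if is_goal(node):
                    return node
                stack.extend(expand(node))
                # A real worker would also poll here for incoming work requests and answer
                # them by popping a section off the bottom of its stack (see slides 10-12).
            # Local work exhausted: ask another processor for unsearched subtrees.
            request_work_from(pick_donor())
            new_work = receive_work()             # blocks until work or a "no work left" reply
            if new_work is None:                  # global termination detected (slides 24-33)
                return None
            stack.extend(new_work)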

  9. Parallel DFS Problems
  • Splitting up the work
    • How much work should you give to another processor?
  • Determining a donor processor
    • Who do you request more work from?

  10. Work Splitting Strategies
  • When splitting up a stack, consider:
    • Sending too little or too much work increases the number of work requests.
    • Ideally, rather than splitting the stack, you would split the search space evenly, but this is HARD.
    • Nodes high in the tree root big subtrees, and vice versa.

  11. Work Splitting Strategies
  • To avoid sending small amounts of work, nodes beyond a specified stack depth (the cut-off depth) are not sent.
  • Strategies:
    • Send only nodes near the bottom of the stack
    • Send nodes near the cut-off depth
    • Send half of the nodes between the bottom of the stack and the cut-off depth
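A sketch of these three splitting rules. Representing the stack as a list of (node, depth) pairs with the bottom at index 0, the cut-off value, and the strategy names are all assumptions made for illustration.

    CUTOFF_DEPTH = 10   # assumed cut-off: nodes deeper than this are never donated

    def split_stack(stack, strategy="half_above_cutoff"):
        """Remove a portion of stack to donate; return (donated, kept)."""
        shallow = [e for e in stack if e[1] <= CUTOFF_DEPTH]    # candidates for donation
        if not shallow:
            return [], stack                                    # nothing worth sending
        if strategy == "near_bottom":
            donated = shallow[:1]                               # node(s) nearest the root
        elif strategy == "near_cutoff":
            donated = shallow[-1:]                              # node(s) just above the cut-off
        else:  # "half_above_cutoff": every other candidate between bottom and cut-off
            donated = shallow[::2]
        donated_ids = set(id(e) for e in donated)
        kept = [e for e in stack if id(e) not in donated_ids]
        return donated, kept

    work = [("root", 0), ("a", 1), ("b", 2), ("c", 3), ("deep", 12)]
    gave, kept = split_stack(work)
    print(gave, kept)   # the deep node is never given away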

  12. Load Balancing Schemes (Who do I request work from?)
  • Asynchronous Round Robin
    • Each processor maintains its own target
    • Ask the target for work, then increment the target
  • Global Round Robin
    • A single target is maintained by the master node
  • Random Polling
    • Randomly select a donor
    • Each processor has equal probability of being selected
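A sketch of the three donor-selection rules. The class and helper names are hypothetical; in particular, the global round-robin counter really lives on the master node, so it is passed in here as a fetch-and-increment callable to keep the sketch self-contained.

    import random

    class DonorSelector:
        """Hypothetical helper: chooses which processor to ask for work."""
        def __init__(self, my_rank, num_procs):
            self.rank, self.p = my_rank, num_procs
            self.local_target = (my_rank + 1) % num_procs           # ARR: private target pointer

        def asynchronous_round_robin(self):
            donor = self.local_target                               # ask the target ...
            self.local_target = (self.local_target + 1) % self.p    # ... then increment it
            return donor                                            # (a full version would skip self)

        def global_round_robin(self, fetch_and_increment_global_target):
            # The single shared counter is maintained by the master node; here it is a
            # callable so the sketch stays self-contained.
            return fetch_and_increment_global_target() % self.p

        def random_polling(self):
            donor = random.randrange(self.p - 1)                    # every other processor equally likely
            return donor if donor < self.rank else donor + 1        # shift to skip our own rank

    print(DonorSelector(0, 4).random_polling())                     # e.g. 1, 2, or 3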

  13. Speedups of DFS

  14. Best-First Search
  • A heuristic is used to direct the search
  • Maintains 2 lists
    • Open: nodes not yet expanded, sorted by heuristic value
    • Closed: already expanded nodes
  • Memory requirement is linear in the size of the search space explored.
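A sequential sketch of the open/closed structure described above, with a priority queue as the open list and a set as the closed list. The toy state space and heuristic are hypothetical.

    import heapq, itertools

    def best_first_search(start, successors, h, is_goal):
        tie = itertools.count()
        open_list = [(h(start), next(tie), start)]   # Open: unexpanded nodes, best h first
        closed = set()                               # Closed: already expanded nodes
        while open_list:
            _, _, node = heapq.heappop(open_list)    # pick the most promising node
            if node in closed:
                continue
            if is_goal(node):
                return node
            closed.add(node)
            for child in successors(node):
                if child not in closed:
                    heapq.heappush(open_list, (h(child), next(tie), child))
        return None

    # Toy example: reach 0 from 37 by halving (integer division) or subtracting 1.
    print(best_first_search(37, lambda n: [n // 2, n - 1] if n > 0 else [],
                            h=lambda n: n, is_goal=lambda n: n == 0))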

  15. Parallel Best-First Search
  • Concurrent processors each pick the most promising node from the open list
  • Newly generated nodes are placed back on the open list
  • Centralized strategy: a single shared open list (diagrammed on the next slide)

  16. (Diagram) A global open list is maintained at a designated processor.
  • Each processor repeatedly: locks the list, places its generated nodes in the list, picks the best node from the list, unlocks the list, and expands that node to generate successors.

  17. Centralized Best-First Search
  • Termination condition
    • A processor may find a solution, but not the best solution.
    • The termination criteria must be modified (how?)
  • Centralization leads to congestion
    • The open list must be locked whenever it is accessed
  • Extra work
    • Processors may expand nodes that the sequential algorithm would never expand

  18. Decentralizing Best-First Search
  • Let each processor maintain its own open list
  • Issues:
    • Load balancing
    • Termination (making sure the solution found is the best)

  19. Communication Strategies
  • Random
    • Periodically send some of the best nodes to a random processor
  • Ring
    • Periodically exchange best nodes with ring neighbors
  • Blackboard
    • Select the best node from the local open list
    • If its l-value is OK (comparable to the blackboard's best), expand it
    • If its l-value is BAD (much worse), get some nodes from the blackboard
    • If its l-value is GREAT (much better), give some nodes to the blackboard
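A sketch of one step of the blackboard rule above. The tolerance factors, the assumption that l-values are positive, and the in-memory blackboard heap are all illustrative choices; in the real scheme the blackboard is a structure shared by all processors.

    import heapq

    TOO_BAD_FACTOR = 1.5        # hypothetical: "BAD" means 50% worse than the board's best
    MUCH_BETTER_FACTOR = 0.75   # hypothetical: "GREAT" means 25% better than the board's best

    def blackboard_step(open_list, blackboard, expand):
        """One step for one processor; both lists are min-heaps of (l_value, node).
        Assumes l-values are positive."""
        l_value, node = heapq.heappop(open_list)                 # select best node from open list
        best_on_board = blackboard[0][0] if blackboard else l_value
        if blackboard and l_value > TOO_BAD_FACTOR * best_on_board:
            heapq.heappush(open_list, (l_value, node))            # BAD: keep it for later and
            heapq.heappush(open_list, heapq.heappop(blackboard))  # take better work from the board
        elif l_value < MUCH_BETTER_FACTOR * best_on_board:
            heapq.heappush(blackboard, (l_value, node))           # GREAT: give it to the board
        else:
            for child in expand(node):                            # OK: just expand it locally
                heapq.heappush(open_list, child)

    open_list = [(10.0, "n1"), (40.0, "n2")]; heapq.heapify(open_list)
    board = [(12.0, "b1")]; heapq.heapify(board)
    blackboard_step(open_list, board, expand=lambda n: [(11.0, n + "-child")])
    print(open_list, board)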

  20. Ring Communication

  21. Blackboard

  22. What about searching a graph?
  • Problem: node replication (the same node can be generated along several paths)
  • Possible solution:
    • Assign each node to a home processor using a hash function
    • Whenever a node is generated, check with its home processor to see whether it has already been searched
    • Costly: every generated node triggers a check
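A sketch of the hash-based assignment. Here the per-processor tables live in one address space for illustration; in a real implementation each table is on its own processor and the membership check is a message, which is exactly why the slide calls the scheme costly.

    import hashlib

    NUM_PROCS = 8

    def home_processor(state):
        # stable hash of the state's canonical string form -> owning processor
        digest = hashlib.sha1(repr(state).encode()).digest()
        return int.from_bytes(digest[:4], "big") % NUM_PROCS

    # One table per processor (all local here purely for demonstration).
    seen = [set() for _ in range(NUM_PROCS)]

    def already_searched(state):
        owner = home_processor(state)
        if state in seen[owner]:
            return True
        seen[owner].add(state)
        return False

    print(already_searched((3, 1, 2)), already_searched((3, 1, 2)))   # False True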

  23. Speedup Anomalies
  • Due to the nature of the problem, speedup can vary greatly from one execution to the next.
  • Two anomaly types:
    • Acceleration (superlinear speedup)
    • Deceleration (sublinear speedup)

  24. Termination Detection
  • Dijkstra's Token Termination Detection
    • When idle, send an idle token to the next processor
    • When the idle token comes back around, all processors are done
  • Tree-Based Termination Detection
    • Associate a weight of 1 with the initial work load
    • Portions of the weight are handed out along with the work
    • When a processor finishes, it gives its weight portion back
    • When processor 0 holds a weight of 1 again --> all done.
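A sketch of the tree-based (weight-based) scheme, using exact fractions so repeated halving of the weight cannot lose precision. The give-half splitting policy and the class interface are assumptions for illustration. Dijkstra's token scheme is developed in detail on the following slides, with a sketch after slide 33.

    from fractions import Fraction

    class WeightTermination:
        def __init__(self):
            self.weight = Fraction(1)          # processor 0 starts with the whole weight 1

        def give_work(self):
            """Called by a donor: hand out half of its current weight with the work."""
            sent = self.weight / 2
            self.weight -= sent
            return sent

        def receive_work(self, sent_weight):
            self.weight += sent_weight

        def finish(self):
            """Called when a processor runs out of work: return its weight to processor 0."""
            returned, self.weight = self.weight, Fraction(0)
            return returned

    p0, p1 = WeightTermination(), WeightTermination()
    p1.weight = Fraction(0)                    # workers start with no weight
    p1.receive_work(p0.give_work())            # p0 donates work (and weight) to p1
    p0.receive_work(p1.finish())               # p1 finishes and returns its share
    print(p0.weight == 1)                      # True -> all work is done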

  25. Dijkstra's Token Termination
  • All processes are either active or inactive.
    • Inactive processes may not send messages other than the token
    • Active processes may turn inactive
    • Inactive processes may turn active if they receive a work message
  • Termination can only occur once all processes are inactive
  • We must determine whether all processes are inactive and whether there are any messages left in the system.

  26. Dijkstra's Token Termination
  • Arrange the processes logically in a ring
  • Since all processes must be inactive to terminate, designate process P0 as the process that can start termination detection
  • When inactive, P0 sends out a token; the token travels from process i to process i + 1
  • The token only leaves a process if that process is inactive
  • Problem: an inactive process may receive a message and turn active after the token has already left it.
  • Solution: introduce colors

  27. Dijkstra's Token Termination
  • All processes are initially colored white.
  • Any process i that sends a message to a process j with j < i is suspect for reactivating a process the token has already visited: change that process's color to black.
  • If a black process receives the token, it colors the token black.
  • If process 0 receives a white token, send the poison pill (terminate).

  28. Dijkstra's Token Termination
  • (Diagram: five processes, numbered 0-4, arranged in a ring; the frames show active and inactive processes, a work message being sent, and the token being passed around the ring.)

  29. Dijkstra's Token Termination
  • Problem: fast token and slow work
    • Suppose process j sends work to process i < j
    • Suppose the work message takes a long time to get there
    • In the meantime, process j gets the token, changes it to black, and sends it on. It then changes its own color to white.
    • P0 gets the black token and starts the process again
    • Process i now receives the new white token before receiving the work message that is still in transit
    • Process i passes on the white token
    • Process j will also pass on a white token, since it changed its color to white after sending on the black token
    • The new white token arrives at P0 signaling termination --> poison pill sent out, even though a work message is still in flight

  30. Dijkstra's Token Termination
  • Problem: fast token and slow work (a second scenario)
    • Suppose process i sends work to process i + 4
    • Suppose the work message takes a long time to get there
    • In the meantime, process i becomes idle (it stays white, since it sent forward in the ring).
    • Process i now receives a white token
    • Process i passes on the white token
    • Process i + 4 receives the white token before the work message
    • Process i + 4 will also pass on a white token
    • The white token arrives at P0 signaling termination --> poison pill sent out, even though a work message is still in flight

  31. Dijkstra's Token Termination
  • Solution: message counts
    • Send message counts along with the token
    • Initially, all processes are white and have a message count of 0
    • A process increments its count whenever it sends a message and decrements it whenever it receives one
    • The sum of all message counts is zero iff all messages have been delivered
    • The token sums the message counts as it is passed.

  32. Dijkstra's Token Termination
  • When P0 becomes inactive, it turns white and sends a white token carrying its message count to process 1
  • If a process sends or receives a message, it turns black
  • Process i keeps the token as long as it is active. When it turns inactive:
    • If process i is black, change the token to black; otherwise the token color is unchanged
    • Add its message count to the token
    • Forward the token
    • Change its own color to white

  33. Dijkstra's Token Termination
  • If P0 receives a black token, try again.
  • If P0 receives a white token:
    • The token has passed through only white processes
    • However, a message may still be in flight; in that case the token's message count will be non-zero
    • If the message count is zero, send the poison pill
    • Otherwise, try again
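A sketch that simulates the final version of the algorithm (colors plus message counts) on one machine. The Process class, the way the ring is driven as a simple loop, and the example message exchange are simplifications for illustration; a real implementation would pass the token and the work as actual messages.

    WHITE, BLACK = "white", "black"

    class Process:
        def __init__(self, rank, nprocs):
            self.rank, self.nprocs = rank, nprocs
            self.color = WHITE
            self.count = 0            # messages sent minus messages received
            self.active = False

        def send_work(self):
            self.count += 1
            self.color = BLACK        # sending (or receiving) a message turns the process black

        def receive_work(self):
            self.count -= 1
            self.color = BLACK
            self.active = True

        def handle_token(self, token_color, token_count):
            """Called only when this process is inactive; returns the forwarded token."""
            token_color = BLACK if self.color == BLACK else token_color
            token_count += self.count
            self.color = WHITE        # reset own color after forwarding
            return token_color, token_count

    def detection_round(procs):
        """P0 starts a round once inactive; returns True if termination is detected."""
        color, count = WHITE, procs[0].count
        procs[0].color = WHITE
        for p in procs[1:]:           # token travels around the ring back to P0
            if p.active:
                return False          # the token would be held here; no decision this round
            color, count = p.handle_token(color, count)
        return color == WHITE and count == 0   # white token + zero message count -> done

    procs = [Process(r, 3) for r in range(3)]
    print(detection_round(procs))                    # True: nothing ever sent, all idle

    procs[1].send_work(); procs[2].receive_work()    # a delivered message pair
    procs[2].active = False
    print(detection_round(procs))                    # False: the token comes back black
    print(detection_round(procs))                    # True on the next round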
