Parallel, Probabilistic Path Planning
Nathan Ickes
May 19, 2004
6.846 Parallel Processing: Architecture and Applications
Rapidly-Exploring Random Trees
• RRTs are good at quickly finding a workable path
Rapidly-Exploring Random Trees
Building an RRT:
1. Pick a random point a in space
2. Find the node b in the tree which is closest to a
3. Drive the robot towards a from b, stopping at a new point c
4. If the path to c is collision-free, add c to the tree

Biasing an RRT towards a goal
• On some iterations, pick the goal position as a, instead of a random point (sketched below)
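A minimal sketch of the goal-biasing step, assuming a node_t type, a global goal node, a node_new_random() helper, and a GOAL_BIAS constant, none of which are spelled out in the slides:

    #include <stdlib.h>

    #define GOAL_BIAS 0.05   /* assumed: aim at the goal on ~5% of iterations */

    node_t *planner_pick_random_point(void)
    {
        /* Occasionally return the goal itself so the tree grows toward it. */
        if ((double)rand() / RAND_MAX < GOAL_BIAS)
            return goal;
        /* Otherwise sample the space uniformly (assumed helper). */
        return node_new_random();
    }

A small bias is enough; biasing too aggressively turns the planner into a greedy search that can get stuck behind obstacles.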
Parallelizing with OpenMP
• RRT has one major, global data structure (the tree)
• Easily parallelized on a shared-memory machine

    int planner_run(int max_iter)
    {
        int i, found = 0;   /* found is checked without synchronization; threads
                               may run a few extra iterations after the goal */

        #pragma omp parallel for schedule(dynamic) shared(found)
        for (i = 0; i < max_iter; i++) {
            node_t *a, *b, *c;        /* node_t assumed from the tree code */

            if (found)
                continue;             /* another thread already reached the goal */

            #pragma omp atomic
            planner_iterations++;

            a = planner_pick_random_point();
            b = planner_find_closest_point(a);
            c = planner_drive_towards(a, b);
            if (c) {
                #pragma omp critical
                tree_append_node(c);  /* serialize updates to the shared tree */
                if (c == goal)
                    found = 1;        /* can't return from inside an OpenMP loop */
            }
        }
        return found ? 0 : 1;
    }
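As a usage sketch (the file and binary names are assumed, not from the slides), the loop above is built with GCC's -fopenmp flag and the thread count is set at run time with OMP_NUM_THREADS:

    gcc -fopenmp -O2 planner.c -o planner
    OMP_NUM_THREADS=8 ./planner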
OpenMP Results
Time to execute 100,000 iterations (timing chart not reproduced)
Parallelizing with MPI
• Can't use pointers!
• Network latency is huge!

Master-slave architecture
• Master process maintains the tree; slave processes iterate the algorithm
• Slaves generate new nodes, but wait for the master to put them in the tree (slaves send new nodes, the master sends back tree updates)
• Problem: the master can't update the tree fast enough

Cooperative architecture
• Every process generates new nodes and adds them to its own tree
• New nodes are broadcast to the other processes
• Processes work largely independently due to network latency
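A minimal sketch of the cooperative architecture, assuming the planner_*/tree_* helpers from the OpenMP version plus hypothetical node_get_coords() and tree_append_coords() helpers, a 2-D configuration space, and an exchange every BATCH iterations (none of these details are given in the slides). Node coordinates are shipped by value, since pointers are meaningless across MPI processes:

    #include <mpi.h>
    #include <string.h>

    #define DIM   2        /* assumed: 2-D configuration space           */
    #define BATCH 32       /* assumed: iterations between node exchanges */

    int planner_run_mpi(int max_iter)
    {
        int rank, nprocs, iter, found = 0;
        double mine[BATCH * DIM];                 /* coords of my new nodes */

        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

        for (iter = 0; iter < max_iter && !found; iter += BATCH) {
            int i, r, count = 0;

            /* Grow the local copy of the tree for one batch of iterations. */
            for (i = 0; i < BATCH; i++) {
                node_t *a = planner_pick_random_point();
                node_t *b = planner_find_closest_point(a);
                node_t *c = planner_drive_towards(a, b);
                if (c) {
                    tree_append_node(c);
                    node_get_coords(c, &mine[count++ * DIM]); /* assumed helper */
                    if (c == goal)
                        found = 1;
                }
            }

            /* Each rank in turn broadcasts its new nodes; the others merge them. */
            for (r = 0; r < nprocs; r++) {
                double buf[BATCH * DIM];
                int n = count;
                if (r == rank)
                    memcpy(buf, mine, count * DIM * sizeof(double));
                MPI_Bcast(&n, 1, MPI_INT, r, MPI_COMM_WORLD);
                MPI_Bcast(buf, n * DIM, MPI_DOUBLE, r, MPI_COMM_WORLD);
                if (r != rank)
                    for (i = 0; i < n; i++)
                        tree_append_coords(&buf[i * DIM]);    /* assumed helper */
            }

            /* Stop everywhere once any rank has reached the goal. */
            MPI_Allreduce(MPI_IN_PLACE, &found, 1, MPI_INT, MPI_LOR, MPI_COMM_WORLD);
        }
        return found ? 0 : 1;
    }

Batching the exchange amortizes the network latency, but every rank still duplicates work on its own copy of the tree, which is part of why the MPI version scales worse than the OpenMP one.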
Conclusions
• RRT works well on a shared-memory machine
  • OpenMP makes it easy to parallelize RRT and provides a significant performance increase
• RRT is harder to implement with MPI, and doesn't perform as well
  • The tree is a single, global data structure
  • The task can't be divided into large, independent chunks