Simulation of a colony of ants
Camille Coti, coti@lri.fr
QosCosGrid Barcelona meeting, 10/25/06
MPI Labs
Introduction to ants
• How ants find food and how they remember the path
• Random walk around the ant-hill
• Once they find some food: go back to the ant-hill
• Drop pheromones along this path
• When they find some pheromones: follow them
• Pheromones evaporate, thereby limiting their influence over time
Modelling an ant colony
• Cellular automaton
• Grid of cells, represented as a matrix
• State of a cell:
  • one or several ants can be on it
  • some pheromones can have been dropped on it
  • it can also be empty
• We define a transition rule from time t to time t+1
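As an illustration, a cell state could be represented by a small C structure; the layout below (one int and one char, matching the MPI_INT/MPI_CHAR pair used later when building the MPI data type) is an assumption, not necessarily the one used in the lab code.

    /* Hypothetical cell representation -- the lab code may differ. */
    typedef struct {
        int  nb_ants;    /* number of ants on the cell (0 = empty) */
        char pheromone;  /* pheromone level, 0 if none has been dropped */
    } cell_t;

    /* The grid is then a matrix of such cells, e.g. cell_t grid[H][W]; */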
A picture can make things easier
[Figure: the grid, showing the ant-hill (where the ants live), the food, ants spread around the ant-hill, and ants that have found the food dropping pheromones]
Update algorithm
• every ant looks around it
• if it finds pheromones: follow them
• if it finds some food: take some, then go back to the ant-hill, dropping pheromones along the path
• otherwise: choose a random direction
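In C, the rule for a single ant could be sketched as follows; the ant_t type and every helper function here are hypothetical placeholders, not the functions provided for the labs.

    /* Sketch of the update rule for one ant (all names hypothetical). */
    void update_ant(ant_t *ant) {
        if (ant->carrying_food) {
            drop_pheromone(ant->x, ant->y);  /* mark the path */
            step_towards_hill(ant);          /* go back to the ant-hill */
        } else if (food_nearby(ant)) {
            take_food(ant);                  /* take some food */
        } else if (pheromone_nearby(ant)) {
            follow_pheromone(ant);           /* follow the trail */
        } else {
            step_random(ant);                /* random walk */
        }
    }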
Parallelisation
• Split the grid among the processors
• Each processor computes its part of the calculation
• Use MPI communication between the processes
• This is parallel computing ☺
[Figure: the grid split into four strips, one per process: Proc #0, Proc #1, Proc #2, Proc #3]
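With a strip decomposition, each process can derive its share of the grid directly from its rank. A sketch, assuming a split along the rows and a HEIGHT that divides evenly by the number of processes:

    /* Sketch: HEIGHT rows split into 'size' horizontal strips. */
    int rows_local = HEIGHT / size;               /* rows owned by each rank */
    int row_first  = rank * rows_local;           /* my first row */
    int row_last   = row_first + rows_local - 1;  /* my last row */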
Parallelisation
• Each processor can compute the transition rule for almost all of the space it is assigned
• BUT there is a problem near the boundaries: the rule needs the state of cells owned by the neighbours
• SO each processor has to send the state of its frontiers to its neighbours
• Overlap the computation and the communications:
  • start non-blocking communications
  • compute
  • wait for the communications to finish (if the overlap worked, this wait returns almost immediately)
Algorithm of the parallelisation
• Initialisation
• for n iterations do:
  • send/receive the frontiers
  • compute the transition rule (except near the frontiers)
  • finish the communications
  • compute the transition rule near the frontiers
  • send the result
  • update the bounds (ants might have walked across the frontiers)
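One iteration of this loop could look like the following sketch in C; exchange_frontiers(), compute_interior(), compute_frontiers() and update_bounds() are hypothetical stand-ins for the routines provided in the lab code.

    /* Sketch of the main loop; exchange_frontiers() is assumed to post
     * the non-blocking sends/receives and fill the 'requests' array. */
    for (int n = 0; n < n_iterations; n++) {
        MPI_Request requests[4];
        exchange_frontiers(grid, requests);   /* MPI_Isend / MPI_Irecv */
        compute_interior(grid);               /* everything but the frontiers */
        MPI_Waitall(4, requests, MPI_STATUSES_IGNORE);
        compute_frontiers(grid);              /* the halo is now up to date */
        update_bounds(grid);                  /* ants may have crossed over */
    }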
What you have to do
• We provide you with:
  • the basic functions
  • the update rule
• You have to write:
  • the MPI communications
  • the creation and declaration of an MPI data type
Some “good” practice rules
• Initialise your communications:
  • MPI_Init(&argc, &argv);
  • MPI_Comm_size(MPI_COMM_WORLD, &size);
  • MPI_Comm_rank(MPI_COMM_WORLD, &rank);
• Finalise them:
  • MPI_Finalize();
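Put together, these calls give the minimal skeleton of any MPI program:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[]) {
        int size, rank;
        MPI_Init(&argc, &argv);                /* initialise MPI */
        MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of processes */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* my rank in the communicator */
        printf("Hello from process %d of %d\n", rank, size);
        MPI_Finalize();                        /* finalise MPI */
        return 0;
    }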
Some “good” practice rules
• Use non-blocking communications rather than blocking ones:
  • MPI_Isend() / MPI_Irecv()
  • wait for completion with MPI_Waitall()
• So that you can overlap communications with computation
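For example, exchanging one frontier row with each of the two neighbours given by MPI_Cart_shift() (see the topology slide below) could look like this sketch; the buffer names, the WIDTH count and the cell_type datatype are assumptions:

    /* Sketch: exchange one frontier row of WIDTH cells with each neighbour. */
    MPI_Request req[4];
    MPI_Isend(frontier_west, WIDTH, cell_type, my_west_rank, 0,
              comm_torus_1D, &req[0]);
    MPI_Isend(frontier_east, WIDTH, cell_type, my_east_rank, 0,
              comm_torus_1D, &req[1]);
    MPI_Irecv(halo_west,     WIDTH, cell_type, my_west_rank, 0,
              comm_torus_1D, &req[2]);
    MPI_Irecv(halo_east,     WIDTH, cell_type, my_east_rank, 0,
              comm_torus_1D, &req[3]);
    /* ... overlap: compute the interior of the local strip here ... */
    MPI_Waitall(4, req, MPI_STATUSES_IGNORE);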
Creating a new MPI data type
• Declare the types it will contain:
  • MPI_Datatype types[2] = {MPI_INT, MPI_CHAR};
• Declare the displacements (byte offsets of the fields):
  • MPI_Aint displ[2] = {0, 4}; (safer: compute them with offsetof())
• Create the structure:
  • MPI_Type_create_struct(...)
• And commit it:
  • MPI_Type_commit(...)
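A complete sketch for a struct holding one int and one char; the cell_t layout is an assumption, and offsetof() avoids hard-coding the displacement 4, which depends on padding and on the platform.

    #include <mpi.h>
    #include <stddef.h>  /* offsetof */

    /* Hypothetical cell struct (one int, one char). */
    typedef struct { int nb_ants; char pheromone; } cell_t;

    MPI_Datatype make_cell_type(void) {
        MPI_Datatype cell_type;
        int          blocklengths[2] = {1, 1};
        MPI_Datatype types[2]        = {MPI_INT, MPI_CHAR};
        MPI_Aint     displ[2]        = {offsetof(cell_t, nb_ants),
                                        offsetof(cell_t, pheromone)};
        MPI_Type_create_struct(2, blocklengths, displ, types, &cell_type);
        MPI_Type_commit(&cell_type);  /* make it usable in communications */
        return cell_type;
    }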
Create a topology
• For example, create a torus:
  • void create_comm_torus_1D() {
  •   int mpierrno, period, reorder;
  •   period = 0;  /* non-periodic here; period = 1 would wrap around, making a true torus */
  •   reorder = 0;
  •   mpierrno = MPI_Cart_create(MPI_COMM_WORLD, 1, &comm_size, &period, reorder, &comm_torus_1D);
  •   MPI_Cart_shift(comm_torus_1D, 0, 1, &my_west_rank, &my_east_rank);
  • }
• (you won't have to write this for the labs, this function is provided, but it is useful general knowledge)
Some collective communications
• Reductions: sum, min, max...
  • useful for time measurements or to make a global sum of local results, for example
  • MPI_Reduce(...)
• Barriers:
  • all the processes get synchronised
  • MPI_Barrier(communicator)
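For instance, summing a per-process counter into a global total on rank 0 (the variable names are illustrative):

    /* Sketch: global sum of a local counter, result on rank 0. */
    int local_count  = my_nb_ants;  /* e.g. ants in my strip (assumed) */
    int global_count = 0;
    MPI_Reduce(&local_count, &global_count, 1, MPI_INT,
               MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("total number of ants: %d\n", global_count);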
Misc
• Time measurement:
  • t1 = MPI_Wtime();
  • ... computation to measure ...
  • t2 = MPI_Wtime();
  • time_elapsed = t2 - t1;
• MPI_Wtime() returns the wall-clock time, in seconds, on the calling process; only the difference between two calls is meaningful
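Since each process measures its own time, a common pattern is to reduce the local timings to the maximum over all processes, i.e. the time of the slowest one (a sketch combining MPI_Wtime() with MPI_Reduce()):

    /* Sketch: elapsed time of the slowest process, gathered on rank 0. */
    double t1 = MPI_Wtime();
    /* ... computation to measure ... */
    double t2 = MPI_Wtime();
    double local_time = t2 - t1, max_time;
    MPI_Reduce(&local_time, &max_time, 1, MPI_DOUBLE,
               MPI_MAX, 0, MPI_COMM_WORLD);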
If you need more
• www.lri.fr/~coti/QosCosGrid
• Feel free to ask questions ☺