PGA – Parallel Genetic Algorithm
Hsuan Lee
Reference
• E. Cantú-Paz, "A Survey of Parallel Genetic Algorithms," Calculateurs Parallèles, Réseaux et Systèmes Répartis, 1998
Classes of Parallel Genetic Algorithm
• 3 major classes of PGA
  • Global Single-Population Master-Slave PGA
  • Single-Population Fine-Grained PGA
  • Multiple-Population Coarse-Grained PGA
• Hybrid of the above PGAs
  • Hierarchical PGA
Classes of Parallel Genetic Algorithm
• Global Single-Population Master-Slave PGA
  • Lowest level of parallelism
  • Parallelizes the calculation of fitness, selection, crossover…
  • Also known as global PGA
Classes of Parallel Genetic Algorithm
• Single-Population Fine-Grained PGA
  • Consists of one spatially structured population
  • Selection and crossover are restricted to a small neighborhood, but neighborhoods overlap, permitting some interaction among all the individuals
  • Similar to the idea of niching
  • Suited for massively parallel computers
Classes of Parallel Genetic Algorithm
• Multiple-Population Coarse-Grained PGA
  • Consists of several subpopulations that exchange individuals occasionally; the exchange operation is called migration
  • Also known as multiple-deme PGA, distributed GA, coarse-grained PGA, or "island" PGA
  • Most popular class of PGA
  • Most difficult to analyze
  • Suited for fewer but stronger parallel computers
Classes of Parallel Genetic Algorithm
• 3 major classes of PGA
  • Global Single-Population Master-Slave PGA
  • Single-Population Fine-Grained PGA
  • Multiple-Population Coarse-Grained PGA
  The first class does not affect the behavior of the GA, but the latter two do.
• Hybrid of the above PGAs
  • Hierarchical PGA
Classes of Parallel Genetic Algorithm
• Hierarchical PGA
  • Combines a multiple-population PGA (at the higher level) with a master-slave PGA or fine-grained PGA (at the lower level)
  • Combines the advantages of its components
Master-Slave Parallelization
The master does the global work that involves population-wide computation and assigns local tasks to its slaves.
• What can be parallelized?
  • Evaluation of fitness (see the sketch after this slide)
  • Selection
    • Some selection schemes require population-wide calculation, so they cannot be parallelized
    • Selection schemes that don't require global computation are usually too simple to be worth parallelizing, e.g. tournament selection
  • Crossover
    • Usually too simple to parallelize
    • But for a complex crossover operator that involves finding a min-cut, parallelization may be an option
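Below is a minimal sketch of the master-slave idea in Python, assuming a toy real-valued genome and a placeholder sphere fitness function; only the fitness evaluations are farmed out to worker processes, while selection, crossover, and mutation stay on the master. The names and parameters (fitness, evolve, pop_size, the mutation rate) are illustrative assumptions, not anything prescribed by the survey.

```python
# Master-slave sketch: the master keeps the whole population and farms out
# only the fitness evaluations to worker ("slave") processes.
import random
from multiprocessing import Pool

def fitness(individual):
    # Placeholder objective (assumption): maximize the negated sphere function.
    return -sum(x * x for x in individual)

def evolve(pop_size=40, genome_len=10, generations=50, workers=4):
    pop = [[random.uniform(-1, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    with Pool(workers) as pool:                     # the "slave" processes
        for _ in range(generations):
            scores = pool.map(fitness, pop)         # only fitness is parallelized
            # The master performs selection, crossover, and mutation sequentially.
            ranked = [ind for _, ind in sorted(zip(scores, pop),
                                               key=lambda pair: pair[0],
                                               reverse=True)]
            parents = ranked[:pop_size // 2]
            next_pop = []
            for _ in range(pop_size):
                a, b = random.sample(parents, 2)
                cut = random.randrange(1, genome_len)      # one-point crossover
                child = a[:cut] + b[cut:]
                if random.random() < 0.1:                  # small mutation
                    child[random.randrange(genome_len)] += random.gauss(0, 0.1)
                next_pop.append(child)
            pop = next_pop
    return max(pop, key=fitness)

if __name__ == "__main__":
    best = evolve()
    print(fitness(best))
```

Because the master still performs every genetic operation sequentially, a scheme like this only changes the wall-clock time, not the behavior of the underlying GA, which matches the note on the class list above.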
Master-Slave Parallelization
• Computer architecture makes a difference
  • Shared memory
    Simpler: the population can be stored in shared memory, and each slave processor can work on its individuals without conflict
  • Distributed memory
    The individuals to be processed are sent to the slave processors, which creates a communication overhead. This overhead discourages parallelizing tasks that are too cheap.
Fine-Grained Parallel GAs
• Neighborhood size
  • The performance of the algorithm degrades as the size of the neighborhood increases
  • The ratio of the radius of the neighborhood to the radius of the whole grid is a critical parameter
Fine-Grained Parallel GAs
• Topology
  Placing the individuals on different topologies can result in different performance (see the sketch after this slide)
  • 2D mesh
    Most commonly used, because this is usually the physical topology of the processors
  • Ring
  • Cube
  • Torus (doughnut)
    Converges the fastest on some problems, due to the high connectivity of the structure
  • Hypercube
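A minimal sketch of the fine-grained scheme, assuming a 16×16 toroidal grid (a 2D mesh with wrap-around), bit-string genomes, and a placeholder one-max fitness; selection and crossover use only each cell's overlapping 4-neighborhood, so good genes can still diffuse across the whole grid. Grid size, genome length, and rates are illustrative assumptions.

```python
# Fine-grained sketch: one individual per grid cell on a torus, with selection
# and crossover restricted to the cell's 4-neighborhood.
import random

SIZE, GENES = 16, 20                      # 16x16 toroidal grid, bit-string genomes

def fitness(ind):
    return sum(ind)                       # placeholder objective: "one-max"

def neighbors(x, y):
    # 4-neighborhood with wrap-around (torus); overlapping neighborhoods
    # let good genes spread slowly across the whole grid.
    return [((x - 1) % SIZE, y), ((x + 1) % SIZE, y),
            (x, (y - 1) % SIZE), (x, (y + 1) % SIZE)]

def step(grid):
    new_grid = {}
    for (x, y), ind in grid.items():
        hood = [grid[p] for p in neighbors(x, y)] + [ind]
        mate = max(hood, key=fitness)     # local selection
        cut = random.randrange(1, GENES)
        child = ind[:cut] + mate[cut:]    # local crossover
        if random.random() < 0.05:        # bit-flip mutation
            i = random.randrange(GENES)
            child[i] ^= 1
        # Keep the child only if it is no worse than the current resident.
        new_grid[(x, y)] = child if fitness(child) >= fitness(ind) else ind
    return new_grid

grid = {(x, y): [random.randint(0, 1) for _ in range(GENES)]
        for x in range(SIZE) for y in range(SIZE)}
for _ in range(100):
    grid = step(grid)
best = max(grid.values(), key=fitness)
```

On a massively parallel machine each grid cell would map to its own processor and all cells would update at once; here the grid is simply swept sequentially to keep the sketch short.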
Multiple-Deme Parallel GAs
• Subpopulation size
  • A small population converges faster, but is more likely to converge to a local optimum rather than the global optimum
  • The idea is to use many small subpopulations that communicate occasionally, speeding up the GA while preventing convergence to a local optimum
Multiple-Deme Parallel GAs
• Migration timing
  • Synchronous
    What is the optimal frequency of migration? Is the communication cost small enough to make this PGA a good alternative to a traditional GA?
  • Asynchronous
    When is the right time to migrate?
Multiple-Deme Parallel GAs
• Topology – migration destination
  • Static (see the ring-migration sketch after this slide)
    • Any topology with "high connectivity and small diameter"
    • Random destination
  • Dynamic
    • According to the destination subpopulation's diversity?
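A minimal multiple-deme sketch, assuming eight small subpopulations, a static ring as the migration topology, and synchronous migration of each deme's best individual every fixed number of generations; the deme count, sizes, migration interval, and one-max fitness are all illustrative assumptions rather than recommendations from the survey.

```python
# Multiple-deme (island) sketch: subpopulations evolve independently and,
# every MIGRATE_EVERY generations, synchronously pass their best individual
# to the next deme on a static ring.
import random

DEMES, DEME_SIZE, GENES = 8, 25, 30
MIGRATE_EVERY = 10                        # assumed migration interval (generations)

def fitness(ind):
    return sum(ind)                       # placeholder objective: "one-max"

def next_gen(pop):
    # Tournament selection + one-point crossover + bit-flip mutation.
    new = []
    for _ in range(len(pop)):
        a = max(random.sample(pop, 3), key=fitness)
        b = max(random.sample(pop, 3), key=fitness)
        cut = random.randrange(1, GENES)
        child = a[:cut] + b[cut:]
        if random.random() < 0.05:
            child[random.randrange(GENES)] ^= 1
        new.append(child)
    return new

demes = [[[random.randint(0, 1) for _ in range(GENES)] for _ in range(DEME_SIZE)]
         for _ in range(DEMES)]
for gen in range(1, 201):
    demes = [next_gen(d) for d in demes]
    if gen % MIGRATE_EVERY == 0:          # synchronous migration on a ring
        migrants = [max(d, key=fitness) for d in demes]
        for i, d in enumerate(demes):
            worst = min(range(DEME_SIZE), key=lambda j: fitness(d[j]))
            d[worst] = list(migrants[(i - 1) % DEMES])   # receive from left neighbor
best = max((ind for d in demes for ind in d), key=fitness)
```

Changing MIGRATE_EVERY, or swapping the ring for another static or dynamic destination rule, is exactly the kind of design choice the questions on the two slides above are about.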
Conclusion
• There is still a lot to be investigated in the field of PGA
• Theoretical work is scarce