Comparative Reproduction Schemes for Evolving Gathering Collectives
A.E. Eiben, G.S. Nitschke, M.C. Schut
Computational Intelligence Group, Department of Computer Science, Faculty of Sciences
De Boelelaan 1081a, 1081 HV Amsterdam, The Netherlands
gusz@cs.vu.nl, nitschke@cs.vu.nl, schut@cs.vu.nl
Introduction
• Research theme: Emergent Collective Intelligence
• Investigating artificial evolution for adaptability at the local level and desired emergent behaviors at the global level
• Comparing agent reproduction schemes under two types of evolvable controllers: heuristic and neural network
• Results from a collective gathering simulation
Background
• Emergent behavior in collective gathering:
  • Nolfi et al. (2003): Cooperative transport by s-bots
  • Drogoul et al. (1995): Emergent functionality and division of labor in a simulated ant colony gathering task
  • Perez-Uribe et al. (2003): Emergent cooperative behavior for gathering by artificial ants, based on colony type (heterogeneous vs. homogeneous)
Experimental Setup
• JAWAS simulator: simulates collective gathering with potentially thousands of agents (swarm scape)
• Initially 1000 agents; 3 resource types, differing in value and in the cooperation needed to gather them
• Agent goal: gather the highest value of resources possible during its lifetime
• Cooperation is needed for 'good' solutions, i.e. gathering the highest value of resources
Task domain: Minesweeping

              Capacity threshold   Extraction cost   Transport cost   Fitness reward
Mine type A          300                  8               0.04              20
Mine type B          150                  4               0.02              10
Mine type C           75                  2               0.01               5

• Fitness rewards are given to agents for successful gathering (delivery to the home area)
• Collective evaluation: total value gathered, taken at the end of a simulation and averaged over 100 runs
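To make the reward structure concrete, here is a minimal sketch (not the authors' JAWAS code) that encodes the table above and credits a successful delivery. The cooperation check via the capacity threshold, the function name, and the way costs are handled are illustrative assumptions.

```python
# Mine-type parameters from the table above:
# (capacity threshold, extraction cost, transport cost, fitness reward)
MINE_TYPES = {
    "A": (300, 8, 0.04, 20),
    "B": (150, 4, 0.02, 10),
    "C": (75, 2, 0.01, 5),
}

def delivery_reward(mine_type, combined_capacity):
    """Credit a delivery of a mine to the home area.

    Assumption: a mine can only be gathered if the combined capacity of the
    cooperating agents meets its capacity threshold, so higher-value mines
    require cooperation. Extraction and transport costs are listed but how
    they are charged per agent is not modelled here.
    """
    threshold, extraction_cost, transport_cost, reward = MINE_TYPES[mine_type]
    if combined_capacity < threshold:
        return 0.0  # not enough combined capacity: gathering fails, no reward
    return reward
```

Under these assumptions, delivery_reward("A", 320) yields the full reward of 20, while delivery_reward("A", 100) yields 0.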
Comparative Agent Reproduction Schemes
Temporal Dimension
• SREL: Single Reproduction at End of Lifetime
• MRDL: Multiple Reproduction during an Agent's Lifetime
Spatial Dimension
• Locally restricted: reproduction only with agents in adjacent cells
• Panmictic: reproduction with agents anywhere in the environment
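The two dimensions can be treated as independent switches on when an agent reproduces and with whom. The sketch below is an assumed structure for illustration; the Agent fields, the MRDL trigger, and the Moore-neighbourhood adjacency are not taken from the paper.

```python
from dataclasses import dataclass
from typing import Tuple

REPRODUCTION_THRESHOLD = 10.0  # assumed fitness trigger for MRDL reproduction events

@dataclass
class Agent:
    age: int
    lifetime: int
    fitness: float
    cell: Tuple[int, int]  # grid position in the swarm scape

def is_adjacent(a, b):
    """Assumed Moore neighbourhood: cells differing by at most one step in x and y."""
    return max(abs(a[0] - b[0]), abs(a[1] - b[1])) == 1

def may_reproduce(agent, temporal_scheme):
    """Temporal dimension: SREL reproduces once at the end of the lifetime,
    MRDL allows multiple reproduction events during the lifetime."""
    if temporal_scheme == "SREL":
        return agent.age >= agent.lifetime
    if temporal_scheme == "MRDL":
        return agent.fitness >= REPRODUCTION_THRESHOLD  # assumed trigger
    raise ValueError(temporal_scheme)

def candidate_mates(agent, population, spatial_scheme):
    """Spatial dimension: locally restricted (adjacent cells only) vs. panmictic."""
    others = [a for a in population if a is not agent]
    if spatial_scheme == "local":
        return [a for a in others if is_adjacent(a.cell, agent.cell)]
    if spatial_scheme == "panmictic":
        return others
    raise ValueError(spatial_scheme)
```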
Evolutionary setup
• For both heuristic and NN controllers: gathering and transport parameter values are evolved
• Heuristic controller is static throughout the evolutionary process
• Neural network controller is dynamic over the evolutionary process, i.e. NN weights are evolved
• NN controllers are evolved under a neuro-evolution process
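A minimal sketch of the kind of variation step such a neuro-evolution process applies when NN weights are evolved; the uniform crossover, Gaussian mutation, and parameter values below are assumptions for illustration, not the paper's exact operators.

```python
import random

def recombine(weights_a, weights_b):
    """Uniform crossover over two parents' flat weight vectors."""
    return [random.choice(pair) for pair in zip(weights_a, weights_b)]

def mutate(weights, sigma=0.1, rate=0.05):
    """Perturb each weight with a small probability by Gaussian noise."""
    return [w + random.gauss(0.0, sigma) if random.random() < rate else w
            for w in weights]

def offspring_weights(parent_a, parent_b):
    """Child controller weights: recombination of the parents' weights followed by mutation."""
    return mutate(recombine(parent_a, parent_b))
```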
Conclusions
• SREL/Local is the most effective scheme under the given behavior evaluation, validated under both controller approaches
• Fitness is saved by agents until the end of their lifetime, so successful agents pass on fitness; this is especially effective under neuro-evolution, where recombined, mutated weights are passed on
• This is not the case under MRDL, where agents may not be given sufficient time to adapt to their task before reproducing