An introduction to search and optimisation using Stochastic Diffusion Processes
“Stochastic Diffusion Processes define a family of agent based search and optimisation algorithms which have been successfully used in a variety of real-world applications and for which there is a sound theoretical foundation.” - Mark Bishop, Goldsmiths, University of London
• This presentation summarises recent research carried out by the Goldsmiths/Reading/Kings SDP group: Mark Bishop, Slawomir Nasuto, Kris de Meyer, Darren Myatt & Mohammad Majid.
• SDP resource pages are maintained at: http://www.cyber.rdg.ac.uk/CIRG/SDP
Some search and optimisation applications employing Stochastic Diffusion Processes
• Eye tracking (Bishop & Torr).
• Lip tracking (Grech-Cini & McKee).
• Mobile robot localisation (Beattie et al.).
• Site selection for wireless networks (Hurley & Whitaker).
• Speech recognition (Nicolaou).
• 3D computer vision (Myatt et al.).
• Models of attention (Summers).
• A new ‘connectionist’ paradigm for cognitive science (Nasuto, Bishop et al.).
• Theoretical analysis (Nasuto & Bishop).
• Sequence detection (Jones).
The Restaurant Game: a simple Stochastic Diffusion optimisation
• A group of conference delegates arrive in a foreign town and want to find a ‘good’ place to eat:
• the ‘search space’ is the set of all restaurants;
• the ‘objective function’ (restaurant quality) is the sum of numerous ‘independent partial objective function evaluations’.
• A random independent partial objective function evaluation is defined by a diner’s response to a randomly chosen meal: {GOOD or BAD}.
• In a large town a naive exhaustive search will usually be impractical, as there are too many (restaurant, dish) combinations to evaluate during the period the delegates are attending the conference.
Restaurant quality: a stochastic, dynamic objective function
• The objective function optimised by the Restaurant Game, restaurant quality, is a stochastic variable defined by the sum of mean diner responses to all the meal combinations offered by a given restaurant.
• Restaurant quality is a ‘stochastic variable’ because the perceived quality of each meal may vary:
• each time a meal is prepared;
• with changes in a diner’s mood;
• as each diner is likely to have a different perception of what tastes good.
• In other ‘best-fit’ searches the partial evaluation of the objective function is typically deterministic, e.g.:
• Is a specific colour feature red?
• Is a specific numeric feature equal to 10? etc.
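To make the idea of a random partial objective function evaluation concrete, here is a minimal Python sketch; the restaurant names and quality values are purely illustrative assumptions, with each restaurant summarised by the probability that a randomly chosen meal is judged GOOD.

import random

# Illustrative (made-up) restaurants: quality = probability a random meal is judged GOOD.
restaurant_quality = {"Trattoria": 0.9, "Burger Bar": 0.4, "Noodle House": 0.7}

def partial_evaluation(restaurant):
    """One random, partial evaluation of the objective function: a diner orders
    one randomly chosen meal and reports GOOD (True) or BAD (False)."""
    return random.random() < restaurant_quality[restaurant]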
Stochastic Diffusion Search: a ‘Swarm Intelligence’ metaheuristic
• To find the ‘best’ restaurant in town each delegate should:
1. Select a restaurant to visit at random (the agent’s ‘restaurant hypothesis’).
2. Select a meal from the menu at random (partial hypothesis evaluation).
3. IF <meal good> THEN revisit the restaurant and GOTO (2).
4. ELSE IF the meal a (randomly chosen) friend ate was good THEN adopt their restaurant hypothesis and GOTO (2).
5. ELSE GOTO (1).
• Unlike an Egon Ronay guide, this SDP will naturally adapt to changing restaurant conditions and diner tastes over time … (a simulation sketch of this procedure is given below).
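Continuing the sketch above (and reusing its restaurant_quality and partial_evaluation definitions), the following hypothetical simulation follows steps (1)-(5) for a population of delegates; the population size and round count are arbitrary illustrative choices, not values from the original work.

def restaurant_game(delegates=20, rounds=100):
    """A minimal sketch of the Restaurant Game: each delegate holds a restaurant
    hypothesis, tests it with one random meal and, when unhappy, either copies a
    randomly chosen friend's hypothesis (if that friend was happy) or re-initialises."""
    restaurants = list(restaurant_quality)
    hypotheses = [random.choice(restaurants) for _ in range(delegates)]   # step 1
    for _ in range(rounds):
        happy = [partial_evaluation(h) for h in hypotheses]               # steps 2-3
        for d in range(delegates):
            if not happy[d]:                                              # steps 4-5
                friend = random.randrange(delegates)
                hypotheses[d] = hypotheses[friend] if happy[friend] else random.choice(restaurants)
    return hypotheses

# Most delegates typically end up clustered on the highest-quality restaurant:
# print(restaurant_game())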
Stochastic Diffusion Processes in nature: ‘tandem calling’
• Consider a search for a resource (e.g. food) in a dynamically changing natural environment.
• E.g. examine the behaviour of social insects such as ants (e.g. Leptothorax acervorum), honey bees, etc.
• Without a priori information, each ‘ant’ embarks upon a (‘random’) walk in its environment for a finite period of time.
• Ants that locate the desired resource return ‘positive’.
• Ants that did not locate the resource return ‘negative’.
• On returning to the nest, each ‘positive’ ant ‘directly communicates’ with the next ‘negative’ ant it meets (non-stigmergic communication).
• The ‘positive’ ant communicates the location of the resource by physically steering the ‘negative’ ant towards it in a ‘tandem pair’.
• Unselected ‘negative’ ants embark on another random walk around their environment.
‘Compositional’ Objective Functions
• In general, SDPs can most easily be applied to optimisation problems where the objective function is decomposable into components that can be evaluated independently:
F (x) = Σi Fi (x)
• … where Fi (x) is defined as the ith partial evaluation of F (x).
• For example, SDPs can readily be applied to best-fit string (pattern) matching.
• Such problems can be cast in terms of optimisation by defining the objective function, F (x), for a hypothesis x about the location of the solution, as the similarity between the target pattern and the corresponding region at x in the search space, and finding x such that F (x) is maximised.
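As an illustration (not code from the original work), a compositional objective for best-fit string matching can be written with one independently evaluable component per target character; exact character equality is assumed here as the similarity metric.

def F_i(target, space, h, i):
    """The i-th partial evaluation: does the i-th target character match the
    search space at position h + i?"""
    return 1 if 0 <= h + i < len(space) and space[h + i] == target[i] else 0

def F(target, space, h):
    """The full objective F(h) = sum_i F_i(h); SDS never needs to compute this
    sum explicitly, it only ever samples individual components."""
    return sum(F_i(target, space, h, i) for i in range(len(target)))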
Partial evidence and inference
• Assuming a compositional structure of the solution (i.e. an objective function decomposable into components that can be evaluated independently), agents perform inference on the basis of partial evidence.
• Partial evidence for each agent’s hypothesis [of the best solution] is obtained by a partial evaluation of the agent’s current hypothesis.
• Each time a person has dinner at a restaurant, the diner selects one meal combination at random from the entire menu of dishes available.
• Partial hypothesis evaluation allows an agent to quickly form an opinion on the quality of its hypothesis without exhaustive testing.
• E.g. the Restaurant Game will find the best restaurant in town without delegates exhaustively sampling all the meals available in each.
Interaction and diffusion
• INTERACTION: on the basis of ‘partial knowledge’, agents communicate their ‘current hypothesis’ to agents whose own current hypothesis is not supported by recent evidence.
• E.g. in the Restaurant Game each diner whose last meal was ‘BAD’ asks a randomly chosen member of the group if their last meal was ‘GOOD’:
• DIFFUSION: if the selected diner enjoyed their last meal, then they communicate their current hypothesis (e.g. the identity of the restaurant they last visited).
• Conversely, if the selected diner also did not enjoy their last meal, then a new restaurant is chosen at random from the entire list of those available.
Stochastic Diffusion Processes as ‘global optimisation’
• Central to the power of an SDP is its ability to escape local optima.
• E.g. unless all the meals in a restaurant are to a diner’s taste, there is a finite, non-zero probability that a diner’s randomly chosen meal will be judged BAD and a new hypothesis adopted.
• Hence a Stochastic Diffusion Process achieves global optimisation by:
• probabilistic partial hypothesis evaluation (selecting a meal at random);
• in combination with dynamic reallocation of resources (agents/diners) via stochastic recruitment mechanisms.
Positive feedback mechanisms in SDP
• In an SDP each agent poses a hypothesis about the possible solution and evaluates it partially.
• Successful agents repeatedly test their hypothesis and recruit unsuccessful agents to it by direct communication.
• This creates a positive feedback mechanism ensuring rapid convergence of agents onto promising solutions in the space of all solutions.
• Hence regions of the solution space labelled by the presence of agent clusters can be interpreted as good candidate solutions.
Convergence of SDS
• AGENT CLUSTERING: a global solution is constructed from the interaction of many simple, locally operating agents, forming the largest cluster.
• Such a cluster is dynamic in nature, yet stable: analogous to “a forest whose contours do not change but whose individual trees do” (Arthur, 1994).
• CONVERGENCE: agents posing mutually consistent hypotheses support each other, and over time this results in the emergence of a stable agent population identifying the desired solution.
• E.g. in the Restaurant Game, at equilibrium, a [stochastically] stable group of people with the same hypothesis rapidly clusters around the ‘best’ restaurant in town.
A simple string search
• The target and search space are defined by the sets of features T and S.
• E.g. in a simple string search, the component features of the target, T, and the search space, S, are alphanumeric characters.
• Hypotheses are potential best-fit positions, h, of T in S.
• The solution is compositional, as it is defined by the set of contiguous characters in the search space that together constitute the best instantiation of the target.
• A population of agents converges on the hypothesis, h, of the best-fit position of T in S.
• ‘Communication’ between agents is of their hypothesis of the target mapping position, h.
• ‘Feature evaluation’ is performed by MATCH (a, b), which identifies whether two features (a, b) are similar (as defined by a specified similarity metric).
• Hence the ‘global optimal solution’ is found at the hypothesis h that maximises Σi MATCH (T[i], S[h+i]).
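For comparison, a naive exhaustive search for this optimum would score every candidate position h in full, which is exactly the work SDS avoids; the following sketch (with equality as the assumed MATCH metric) makes the objective explicit.

def MATCH(a, b):
    """A simple similarity metric: exact feature (character) equality."""
    return 1 if a == b else 0

def exhaustive_best_fit(T, S):
    """Naive evaluation of max over h of sum_i MATCH(T[i], S[h+i])."""
    best_h, best_score = None, -1
    for h in range(len(S) - len(T) + 1):
        score = sum(MATCH(T[i], S[h + i]) for i in range(len(T)))
        if score > best_score:
            best_h, best_score = h, score
    return best_h, best_score

# exhaustive_best_fit("cat", "Thecatsat") -> (3, 3)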
The Stochastic Diffusion Search algorithm
INITIALISE (agents);
WHILE NOT TERMINATE (agents) DO
  TEST (agents, T, S);
  DIFFUSE (agents);
END;
• ‘S’ is the search space: the text containing the target string.
• ‘T’ is the target string.
• Each of the agents maintains a hypothesis (i.e. a best-fit mapping) of the target in ‘S’.
• (A runnable sketch of this loop is given below; the individual phases are described on the following slides.)
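The following is a minimal, passive-recruitment sketch of this loop applied to the string search described above; the agent count, iteration limit and halting threshold are illustrative assumptions rather than values from the SDS literature.

import random

def sds_string_search(T, S, n_agents=100, max_iters=1000):
    """A sketch of Stochastic Diffusion Search for best-fit string matching.
    Each agent maintains a hypothesis h: a candidate mapping position of T in S."""
    positions = range(len(S) - len(T) + 1)

    # INITIALISE: assign each agent a randomly chosen hypothesis.
    hyps = [random.choice(positions) for _ in range(n_agents)]
    active = [False] * n_agents

    for _ in range(max_iters):
        # TEST: randomised partial hypothesis evaluation (one target component each).
        for a in range(n_agents):
            i = random.randrange(len(T))
            active[a] = (S[hyps[a] + i] == T[i])

        # DIFFUSE: each inactive agent polls a random agent; copy if active, else re-initialise.
        for a in range(n_agents):
            if not active[a]:
                other = random.randrange(n_agents)
                hyps[a] = hyps[other] if active[other] else random.choice(positions)

        # TERMINATE: a simple (weak-criterion style) stopping rule on net activity.
        if sum(active) > 0.9 * n_agents:
            break

    # The largest cluster of agents labels the best-fit position found.
    return max(set(hyps), key=hyps.count)

# sds_string_search("cat", "the cat sat on the mat")  # typically returns 4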
The INITIALISE phase
• Assigns each agent a ‘possible hypothesis’,
• i.e. a possible mapping of the target string in the search space text.
• In the absence of prior knowledge, ‘possible hypotheses’ are generated randomly.
• A process analogous to:
• the initial selection of a restaurant at random;
• an ant’s initial random walk around its environment.
TEST agent activity: randomised partial hypothesis evaluation
• Partial information on the accuracy of the hypothesis maintained by each agent is obtained by performing a randomised partial hypothesis evaluation.
• In a simple string search we ask whether one randomly selected letter (i) of the target string, T[i], is present in the search space, S, at the position specified by the agent’s current hypothesis, h: i.e. at S[h+i].
• E.g. is a randomly selected meal good; does a location evaluated by an ant contain the resource?
An example TEST
• Target, T = “cat”; its components are T[0] = ‘c’, T[1] = ‘a’, T[2] = ‘t’ (component index i = 0, 1, 2).
• Search space, SS = “Thecatsat”; candidate positions are h = 0, 1, …, 8.
• Consider the agent hypothesis to be h = 3.
• Decompose the target, T, and perform a partial hypothesis evaluation on a randomly chosen component i (e.g. i = 1).
• For i = 1 the target component symbol, T[1], is the letter ‘a’.
• Test the agent hypothesis (h = 3) with the partial hypothesis evaluation (i = 1):
• TEST := MATCH (SS[3+1], T[1]) = MATCH (SS[4], T[1]) = MATCH (‘a’, ‘a’).
• The result of this partial test of the agent’s hypothesis is POSITIVE, as MATCH (‘a’, ‘a’) is TRUE.
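A quick, purely illustrative check of this worked example:

T, SS = "cat", "Thecatsat"
h, i = 3, 1                      # agent hypothesis and the randomly chosen component

def MATCH(a, b):
    return a == b

print(MATCH(SS[h + i], T[i]))    # MATCH('a', 'a') -> True, so the partial test is POSITIVE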
The DIFFUSION phase: stochastic communication
• DIFFUSION: stochastically communicates agent mappings across the population of agents.
• Communication of potentially ‘good’ restaurants to friends.
• Communication of potentially ‘good’ resource locations to other ants.
• In a passive recruitment SDP each ‘negative’ agent (one failing its ‘partial hypothesis’ test) attempts to communicate with another randomly selected member of the population:
• passive recruitment, as used in the Restaurant Game;
• active recruitment, as used by Leptothorax acervorum;
• combined recruitment strategies have also been investigated (Myatt 2006).
• If the selected agent is ‘positive’ then its mapping is communicated.
• Conversely, if the selected agent is also ‘negative’ then a completely new mapping is generated at random.
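In isolation, passive recruitment can be sketched as follows; new_random_hypothesis is an assumed caller-supplied function for generating a fresh mapping.

import random

def diffuse(hypotheses, active, new_random_hypothesis):
    """Passive recruitment: every 'negative' (inactive) agent polls one randomly
    chosen agent; if that agent is 'positive' its mapping is copied, otherwise a
    completely new mapping is generated at random."""
    n = len(hypotheses)
    for a in range(n):
        if not active[a]:
            other = random.randrange(n)
            hypotheses[a] = hypotheses[other] if active[other] else new_random_hypothesis()
    return hypotheses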
The TERMINATE phase
• SDS is a global search/optimisation algorithm:
• SDS converges to the globally optimal position of the target in the search space.
• A ‘halting criterion’ examines the activity of the agent population to determine whether the target has been located.
• Two such criteria, discussed by Nasuto et al. (1999), are:
• the ‘Weak Halting Criterion’: a function of the total number of positive agents (i.e. net activity);
• the ‘Strong Halting Criterion’: a function of the total number of positive agents with the same hypothesis (i.e. ‘clustered’ activity).
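The two activity measures these criteria are built on can be computed as below; the single fixed threshold is a simplification for illustration (the criteria in Nasuto et al. (1999) monitor these quantities over time rather than against one cut-off).

from collections import Counter

def net_activity(active):
    """Weak-criterion statistic: the total number of positive (active) agents."""
    return sum(active)

def clustered_activity(hypotheses, active):
    """Strong-criterion statistic: the size of the largest cluster of positive
    agents sharing the same hypothesis."""
    clusters = Counter(h for h, a in zip(hypotheses, active) if a)
    return max(clusters.values()) if clusters else 0

def should_halt(hypotheses, active, threshold, strong=False):
    """A simplified halting test on either statistic."""
    stat = clustered_activity(hypotheses, active) if strong else net_activity(active)
    return stat >= threshold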
‘Algorithm class’
• Global optimisation algorithms have recently been classified, in terms of their theoretical foundations, into four distinct classes (Neumaier, 2004):
• incomplete methods: heuristic searches with no safeguards against becoming trapped in a local minimum;
• asymptotically complete methods: methods reaching the global optimum with probability one if allowed to run indefinitely long, but without means to ascertain when the global optimum has been found;
• complete methods: methods reaching the global optimum with probability one in infinite time that know after a finite time that an approximate solution has been found within prescribed tolerances;
• rigorous methods: methods typically reaching the global solution with certainty and within given tolerances.
Algorithm class for heuristic multi-agent systems
• In heuristic multi-agent systems Neumaier’s characterisation is related to the concept of the stability of intermediate solutions, because the probability that any single agent will lose the best solution is often greater than zero.
• This may result in a lack of stability of the found solutions or, in the worst case, non-convergence of the algorithm.
• Thus for multi-agent systems it is desirable to characterise the stability of the discovered solutions.
• For example, it is known that many variants of Genetic Algorithms do not converge; hence the optimal solution may disappear from the next population.
• It has been established that the solutions found by SDS are exceptionally stable (De Meyer, Nasuto & Bishop).
• For example, on SDS convergence in a search with N = 1000 agents, a search space of size M = 1000 and a probability of a false negative P- = 0.2, the mean return time to the state in which all agents are inactive is approximately 10^602 iterations.
Convergence criterion
• The convergence of SDS was first rigorously analysed by Bishop & Torr (1992) for the case of zero noise and an ideal target instantiation.
• Detailed criteria for SDS convergence under a variety of noise conditions were first discussed by Nasuto et al. (1999), in the context of interacting Markov Chains and Ehrenfest Urn models.
• However, in a more recent paper Myatt et al. (2003) outline a much simpler criterion for estimating the suitability of SDS for a given search/optimisation problem.
• Unlike the earlier analysis by Nasuto, Myatt’s analysis employs two key simplifying assumptions:
• it uses the mean numbers of agents transferring between clusters rather than the complete probability distributions;
• it assumes homogeneous background noise.
• If a is the quality of the best solution (the probability that a partial evaluation of it is positive) and b is an estimate of the homogeneous background noise, then the minimum quality required for stable convergence of the algorithm is simply:
a_min = 1 / (2 - b)
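To see where a threshold of this form comes from under the two simplifying assumptions above, here is a mean-field sketch (an illustration under those assumptions, not the original Markov-chain analysis) that iterates the expected fraction c of agents clustered at the best solution; a non-zero stable cluster emerges when a > 1 / (2 - b).

def mean_field_update(c, a, b):
    """One iteration of the expected cluster fraction c at the best solution, where
    a = probability of a positive partial test at the best solution and
    b = homogeneous background noise (false-positive rate elsewhere)."""
    active_best = a * c                     # agents at the best solution that test positive
    active_noise = b * (1.0 - c)            # false-positive agents elsewhere
    inactive = 1.0 - active_best - active_noise
    # Inactive agents that poll an active agent at the best solution join its cluster.
    return active_best + inactive * active_best

def stable_cluster_forms(a, b, iters=500):
    """Numerically check whether a non-zero stable cluster forms."""
    c = 0.05                                # small seed cluster
    for _ in range(iters):
        c = mean_field_update(c, a, b)
    return c > 0.01

# stable_cluster_forms(0.6, 0.2) -> True   (0.6 > 1 / (2 - 0.2) = 0.556)
# stable_cluster_forms(0.5, 0.2) -> False  (0.5 < 0.556)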
Time complexity analysis of SDS
• The time complexity of SDS was first analysed in Nasuto et al. (1998) for the case of zero noise and ideal target instantiation.
• The result has also been demonstrated to hold in the case of convergence under noise.
• Given M is the search space size and N is the number of agents, one can divide the [N x M] plane into two distinct regions:
• Region 1 is linear [ M > M(N) ]:
• sequential convergence time is O(M);
• parallel convergence time is O(M/N).
• Region 2 is independent of the search space size M:
• sequential convergence time is O(N/log N);
• parallel convergence time is O(1/log N).
Conclusions
• Stochastic Diffusion Processes constitute a new ‘metaheuristic’ for efficient global search and optimisation.
• In a generic search problem, such as string search, the worst-case ‘time complexity’ of SDS compares favourably with the best deterministic one- and two-dimensional string search algorithms (or their extensions to tree matching).
• Further, such performance is achieved without the use of application-specific heuristics.
• Unlike many heuristic search methods (such as Evolutionary techniques, Ant Algorithms, Particle Swarm Optimisers, etc.), Stochastic Diffusion Processes have very thorough mathematical foundations and correspondingly well-characterised behaviour.
Core references: http://www.cyber.rdg.ac.uk/CIRG/SDP
• Ideal convergence of SDS:
Bishop, J.M. & Torr, P.H., (1992), The Stochastic Search Network, in Lingard, R., Myers, D. & Nightingale, C., (eds), Neural Networks for Vision, Speech & Natural Language, pp. 370-388, Chapman-Hall.
• General convergence of SDS:
Nasuto, S.J. & Bishop, J.M., (1999), Convergence Analysis of the Stochastic Diffusion Search, Parallel Algorithms and Applications, 14:2, pp. 89-107, UK.
• Time complexity analysis of SDS:
Nasuto, S.J., Bishop, J.M. & Lauria, S., (1998), Time Complexity Analysis of the Stochastic Diffusion Search, Neural Computing ’98, Vienna.
• Simple convergence criteria:
Myatt, D., Bishop, J.M. & Nasuto, S.J., (2003), Minimum Stable Criteria for Stochastic Diffusion Search, Electronics Letters, 40:2, pp. 112-113, UK.
• Change of cognitive metaphor:
Nasuto, S.J., Dautenhahn, K. & Bishop, J.M., (1999), Communication as an Emergent Metaphor for Neuronal Operation, Lecture Notes in Artificial Intelligence, 1562, pp. 365-380, Springer-Verlag.