Self-Organization and Emergence in Distributed Cognition KNEW 2013, August 21, 2013 John Collier Philosophy, University of KwaZulu-Natal History and Philosophy of Biology, Institute of Biology, Federal University of Bahia
Outline • Inadequacy of traditional views (a few remarks) and reasons for distributed cognition (and wide cognition in general) • Advantages of the computational approach to representation • Wheeler’s objections to distributed cognition and his solution • Emergence in dynamical systems and why it might create a problem for Wheeler’s solution • Some reasons to think emergence occurs
Traditional views • Mind and body are at least functionally independent (representationalism: Descartes, Locke, Hume, etc.) • Behaviorism: behavior is a distinct function of the body; the mind is a black box with correlated inputs and outputs; representations are ignored, or only inferred in some more recent views • Computational view: mental activity is a form of computation on representations On each of these views, external factors merely provide information or are acted on (they play a passive role in mental activity)
Wide cognition • Distributed cognition • Scaffolding • Situated cognition • Embedded cognition • Enactive cognition • Social intelligence Characteristic of all of these is that external factors play more than a passive role: they are active in cognitive processes. There is considerable evidence for this view.
Distributed cognition Two versions, the second stronger: • Use of external factors to aid thinking. • Spread of intentionality and belief (this is where the problems lie), which may include the first version, but only in some cases. Intentionality is a characteristic of things that have content, or of which we are conscious, or both. All beliefs are intentional, but it isn’t clear that all are conscious. Version 2 implies that intentionality depends at least in part on external factors. In particular, reference is spread to external factors.
Some examples of distributed cognition • Individual • Catching a ball by keeping “your eye on the ball” • Leaving a shopping bag by the door so you remember to go for bread and milk • Making notes • Cultural (social) • Signs (icons, symbols, or words) • Collective knowledge (division of labour, experts) • Reference of words and ideas on the externalist account (Burge, Putnam) • Conventions such as laws, advisories, accepted practices, and spontaneous coordination
Advantages of internal representation • The adaptive richness and flexibility of intelligent behaviour requires that the processes generating it are sensitive to the information carried by environmental stimuli, not simply the physical form of the stimuli. Arbitrariness: there is arbitrariness in a system to the extent that a systematic function depends on information alone (and not on any particular physical properties). • This informational sensitivity is impossible without representation. The justification for the second claim is typically that sensitivity to information requires some sort of coding, and coding implies distinct, self-contained representations. Homuncularity: the system is compartmentalized into modular units that are typically hierarchically organized. If the arbitrariness in a system is embedded in homunculi, then we can attribute representation to the information so embedded. The claim is that this is possible, and that it exhausts the cognitively functional aspects of representation.
Threats to representation (Wheeler) Really, these are threats to the internal account of representation. They are connected to each other in much the same way as arbitrariness and homuncularity are connected to each other. • Extra-neural factors account for at least some of the kind of behavioural richness and flexibility normally associated with representation-based control. • The homuncularity condition cannot be met by a system in which each causal component is massively context-sensitive and variable over time.
Three approaches to representation • On-line intelligent behaviour must be explained by appeal to neurally located representations (traditional). • On-line intelligent behaviour must be explained by a combination of neurally located representations and external processes that form information channels (conserves internal representation). • On-line intelligent behaviour must be explained by a combination of neurally located representations and external processes that are also representational (radical).
Response to threat 1 The challenge: increasing evidence that many on-line tasks are solved by using external factors as an essential part of the process. Note that these external factors need not be constructed, but can be given by the environment. A good example is catching a ball: the best way to do it is to “keep your eye on the ball”, and move so that the tangent to the ball’s path is always directly towards you. This is found in situationally embedded robots of the sort designed by Rodney Brooks. Typical robots of this sort do not use an abstract map to navigate, but record previous motions to get angles and distances, coordinating the information with light sources (also from the environment) to make a temporary map of the local environment. The resulting “map” is deeply dependent on the situation, or context. Typical response: arbitrariness and homuncularity are required for representation. But arbitrariness and homuncularity are internal, using approach 2. This pushes the problem towards threat 2.
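To make the ball-catching heuristic concrete, here is a minimal simulation sketch (my own simplified reading of the heuristic, with made-up speeds, time step, and a "wait until the ball descends" assumption; it is not a claim about how Brooks's robots or human outfielders actually compute): a catcher who keeps running toward the point where the tangent to the ball's path meets the ground finishes close to where the ball lands, without ever computing the full trajectory.

```python
import numpy as np

# Minimal sketch (illustrative assumptions, not from the slides):
# a 2D projectile (horizontal x, height z) and a catcher on the ground
# who, once the ball is descending, always runs toward the point where
# the tangent to the ball's current path meets the ground -- i.e. tries
# to stand where the tangent points.

def simulate(ball_v=(15.0, 20.0), catcher_x=50.0, catcher_speed=9.0,
             g=9.81, dt=0.01):
    bx, bz = 0.0, 2.0              # ball position (m)
    vx, vz = ball_v                # ball velocity (m/s)
    while bz > 0.0:
        bx += vx * dt              # advance the ball (simple Euler step)
        bz += vz * dt
        vz -= g * dt
        if vz < 0.0:               # tangent points at the ground only once descending
            aim = bx + (-bz / vz) * vx          # ground intercept of the tangent line
            step = np.clip(aim - catcher_x,
                           -catcher_speed * dt, catcher_speed * dt)
            catcher_x += step      # run toward the aim point at bounded speed
    return bx, catcher_x           # where the ball lands vs. where the catcher is

landing, catcher = simulate()
print(f"ball lands near x = {landing:.1f} m; catcher finishes near x = {catcher:.1f} m")
```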
Response to threat 2 Threat of causal spread: • features that make a situated robot clever can depend heavily on the world and interactions with the world • this makes representations context-dependent, not constant features of the symbol system This is an empirical issue; the evidence is not in. Wheeler refers to evolutionary robotics as evidence that cognitive evolution leads to modularity; however, typical robots are mechanical, and what works for them might not work for an autonomous and self-producing biological organism embedded in a social context. We need to determine the conditions under which Wheeler’s response of type 2 is not possible. I will argue that this occurs if the causal spread is emergent.
Aside: some independent (?) reasons to think representation is external • Putnam: social determination of meaning through experts (but see also his claim that meaning is determined by us alone, if anything) • Kripke, Putnam: external determination of natural kinds • Burge: various reasons, social and scientific • C.S. Peirce: the object of a sign (its reference) has an immediate object and a dynamical object; as we learn more, the immediate object tends towards the dynamical object. This is how signs work.
Emergence of causal spread • In many cases the internal-representation-plus-external-information-flow account (approach 2) will work, because we can distinguish separate dynamical processes for the two, resolving causal spread. • However, some external constructions, such as culture, on which many of our mental processes depend, have often been regarded as emergent, though this is still strongly debated.
Analogy to niche construction • Culture and many other forms of distributed cognition are actually forms of niche construction. • Although not all forms of niche construction are emergent, some (such as self-organizing behaviour in ant colonies – Deneubourg) show the main signs of emergence, though emergence has not been proven. • We need criteria for social emergence.
What emergence isn’t • Sometimes used to mean something that is merely unexpected: • The emergence of the internet • The emergence of a new scientific discipline • The emergence of a new political party • Emergent computation • These cases have no implication of more than surprise (to us) and typically complicatedness. • They are not the traditional philosophical notion of emergence.
The philosophical notion of emergence • Goes back to Aristotle, but the concept, though without the name, appears in J.S. Mill: • A living body cannot be understood as a mere summing up of the separate actions of its components • Basic physical laws are not violated, but new laws impose further restrictions • The word comes from G. H. Lewes (1875), discussing biology: • the emergent is incommensurable with its components and cannot be reduced to their sum or their difference Note that the notion is used with reference to the biological.
C.D. Broad’s notion of emergence • Unpredictability • The higher level cannot be predicted in principle. • Non-reducibility • The whole is logically more than the sum of its parts. • Holism • The system cannot be decomposed into its parts without loss. • Novelty • A tricky one to define clearly, but not merely surprising: new kinds of properties, and often new laws connecting them.
Some remarks on Broad’s conditions for emergence • These are all logical conditions, which are hard to detect in a dynamical system consisting of interacting processes. • Predictability (and controllability) plausibly underlies the other conditions. • The conditions make no direct reference to processes or underlying forces and flows (dynamics). • We need dynamical criteria for emergence that take these detectable phenomena into account. • Perhaps ironically, these are easier to give for physical systems than for ecological or social systems, which predominantly involve the flow of information rather than energy.
Predictability (analytic) • A system can be predicted across time if and only if there is a region η in its phase (state) space constraining the initial conditions at t0 such that the equations of motion ensure that the trajectory of the system passes within some region ε at some time t1, where the region η is chosen so as to satisfy the target region ε. • Indeterministic systems have probabilistic predictability. • Predictability applies in principle to all Hamiltonian (specifically, energy-conservative) systems, including those without exact analytical solutions, such as the three-body case (through numerical approximation over any finite time). This is a theorem of basic (Lagrangian) physics.
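The condition on this slide can be written more compactly; a possible formalization (the notation here is mine, not from the original) is:

```latex
% Predictability across time, as on the slide (notation introduced here):
% for any target region \varepsilon at time t_1 there is a constraining
% region \eta on the initial conditions at t_0 such that the equations of
% motion guarantee arrival in \varepsilon.
\[
\forall \varepsilon \;\exists \eta :\quad
x(t_0) \in \eta \;\Longrightarrow\; x(t_1) \in \varepsilon ,
\]
% where x(t) is the trajectory generated by the equations of motion and
% \eta, \varepsilon are regions of the phase (state) space.
```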
Predictability (modelling) • Hamiltonian systems without exact analytical solutions can be numerically calculated in principle for any finite time, if we have a large enough computer. We might call this stepwise computability. • All computations are stepwise computable, but some (most) computations do not terminate. • These computations, however, are stepwise computable, and allow, in principle – the required computer might have to be larger than the known universe – the arbitrarily exact computation of any finite later state, but there is no final state. • The macrostate of a system of microcomponents can be predicted similarly by composing the trajectories of the microcomponents and averaging to get the expected macrovalues.
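To make "stepwise computability" concrete, here is a minimal sketch (the frictionless pendulum and the symplectic-Euler stepping are my illustrative choices, not something from the slides): a Hamiltonian system with no general closed-form solution whose state at any finite later time can nevertheless be computed step by step, to whatever accuracy the step size and available computer allow.

```python
import math

# Minimal sketch of "stepwise computability" (illustrative choice of system):
# a frictionless pendulum is Hamiltonian and has no elementary closed-form
# solution for large angles, yet any finite later state can be approximated
# by finitely many arithmetic steps.

def pendulum_state(theta0, omega0, t_final, dt=1e-4, g=9.81, length=1.0):
    """Semi-implicit (symplectic) Euler integration of d2theta/dt2 = -(g/L) sin(theta)."""
    theta, omega = theta0, omega0
    for _ in range(int(t_final / dt)):
        omega -= (g / length) * math.sin(theta) * dt  # update angular velocity first
        theta += omega * dt                           # then the angle
    return theta, omega

# Predicting the state at t = 10 s from the initial conditions:
# no analytic solution is used, only stepwise computation.
print(pendulum_state(theta0=2.0, omega0=0.0, t_final=10.0))
```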
Requirements for unpredictability • To undermine predictability, at least one of the assumptions must go. The assumptions are: 1) the system is closed, 2) the system is Hamiltonian, and 3) there exist sufficient computational resources in principle. • The last condition (3) is a shorthand way of saying that the information in all properties of the system is computable from some set of boundary conditions and physical laws. • For example, Laplace was able to show that the orbits of the major bodies of the Solar System were stable for at least 100 million years, no mean accomplishment for a many-bodied system and paper calculations. We can project much further now.
Interactions of boundaries and system laws Conrad, Michael and Koichiro Matsuno (1990). The boundary condition paradox: a limit to the universality of differential equations. Applied Mathematics and Computation 37: 67-74. Differential equations provide the major means of describing the dynamics of physical systems in both quantum and classical mechanics. The indubitable success of this scheme suggests, on the surface, that in principle it could be extended to a universal program covering all of nature. The problem is that the essence of a differential equation description is a separation of itself from the boundary conditions, which are regarded as arbitrary. Note that when the last condition fails, the system is non-holonomic (constraints depend on velocity, i.e., there is no function of the n spatial dimensions of the system such that f(x1, …, xn, t) = 0). Some non-holonomic systems can be expressed in differential equations that are integrable, but most cannot. I call these radically non-Hamiltonian.
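In the standard textbook form (notation mine, stated here only to make the slide's distinction explicit), the contrast is:

```latex
% Holonomic constraint: a relation among the coordinates and time only,
% which can be used to eliminate coordinates.
\[
f(x_1, \dots, x_n, t) = 0
\]
% Non-holonomic constraint: irreducibly involves the velocities and cannot
% be integrated to eliminate the \dot{x}_i.
\[
g(x_1, \dots, x_n, \dot{x}_1, \dots, \dot{x}_n, t) = 0
\]
```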
Failure of the independence • Non-holonomic systems (spatial constraints depend on velocity): • Basically, energy is not conserved within the system, as in all dissipative systems. • Boundary conditions and system laws cannot be fully separated in principle, since they do work on each other and change the spatial constraints – velocity matters. • Near-holonomic systems can be approximated at one end by step functions, and at the other end by perturbation theory. • Other non-holonomic systems involve system properties on the same time scale as the dissipative properties. These are the radically non-Hamiltonian systems.
Dynamical conditions for emergence • A correct system model must not be integrable (this ensures analytic non-reducibility). • The system is energetically (and/or informationally) open (allowing boundary conditions to be dynamic). • The characteristic rate of at least one property of the system is of the same order as the rate of a non-holonomic constraint (radically non-Hamiltonian). • If at least one of the properties is an essential property of the system, the system itself is essentially non-reducible; it is thus an emergent system. Collier, J. (2008). A dynamical account of emergence. Cybernetics and Human Knowing 15: 75-100.
Violating the Laplacean model Laplace’s assumptions are: • Determinism • No two possible trajectories in the phase space of a system can share a point in phase space. • Predictability • For any property of a system, the values of that property can in principle be predicted with arbitrarily high accuracy for an arbitrarily long time. • Locality • All dynamical properties of a system are fully specified by universal natural laws and parameters defined with convergent accuracy on arbitrarily small spatiotemporal regions. Determinism is no problem, but predictability and locality fail.
An example: Mercury (1) • Before 1965, astronomers believed that, like the Moon, Mercury's rotation matched its orbital period of 88 days. • Mercury is actually in a 3:2 resonance, such that Mercury's day is exactly 2/3 of its 88-day year. • It turns out that there are relative energy minima at 1:1, 3:2, 5:2 and so on. Once in one of these local minima, it is unlikely that Mercury could get into one of the others, since local forces would keep it in the local minimum in which it has been captured.
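For concreteness (standard figures, not given on the slide): a 3:2 spin–orbit resonance means three rotations for every two orbits, so

```latex
\[
T_{\mathrm{rot}} \;=\; \tfrac{2}{3}\, T_{\mathrm{orb}}
\;\approx\; \tfrac{2}{3} \times 88\ \text{days}
\;\approx\; 58.7\ \text{days}.
\]
```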
An example: Mercury (2) • If we don’t assume initial conditions, the phase space gives a 1/3 chance of capture of Mercury in the 3:2 ratio, and 1/2 for the 1:1 ratio, with the other ratios taking up the rest of the chances. • For initial conditions near the even ratios, capture in the respective ratio is very likely, but the system overall is chaotic, and in other regions infinitesimal differences in initial conditions can lead to another ratio. • Specifically, in the phase space of the system, the attractor basins for the different ratios intermingle in certain regions, so that for any two points in one basin there is at least one point between them that is in another basin. So both predictability and locality fail. The 3:2 ratio is emergent.
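The kind of basin geometry the slide appeals to is easier to see in a toy example than in the Mercury system itself. Below is a minimal sketch using a standard illustration (the basins of Newton's method for z³ = 1, which have famously convoluted, interwoven boundaries); it is only a picture of intermingled basins, not a model of spin–orbit capture.

```python
import numpy as np

# Toy illustration of interwoven basin boundaries (a standard example --
# Newton's method for z^3 = 1 -- chosen only to show the kind of basin
# structure the slide appeals to; it is not a model of Mercury).

ROOTS = np.array([1.0 + 0.0j, -0.5 + 0.8660254j, -0.5 - 0.8660254j])

def basin(z, iterations=50):
    """Index of the cube root of unity that Newton's iteration settles on."""
    for _ in range(iterations):
        if z == 0:                # avoid division by zero at the origin
            return 0
        z = z - (z**3 - 1.0) / (3.0 * z**2)
    return int(np.argmin(np.abs(ROOTS - z)))

# Coarse ASCII picture: each cell is labelled by the basin it falls in.
# Along the boundaries, neighbouring cells routinely carry different labels,
# so arbitrarily small changes in the starting point change the outcome.
for y in np.linspace(1.5, -1.5, 24):
    print("".join(".ox"[basin(complex(x, y))]
                  for x in np.linspace(-1.5, 1.5, 60)))
```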
Observations on the Mercury example • The system is dissipative (non-holonomic). • The rate of orbit/rotation ratio formation is similar to the rate of dissipation. • Therefore the property of orbit/rotation ratio formation is radically non-Hamiltonian. • Predictability fails, and locality fails. The latter implies some sort of holism. • Unlike noncomputable Hamiltonian systems, the system reaches a final state in finite time. Collier, J. (2011). Holism and emergence: dynamical complexity defeats Laplace’s Demon. South African Journal of Philosophy 30: 229-243.
Is causal spread emergent? • If it is, the dynamics of the internal/external interactions must involve an inseparability of boundary conditions and system laws. • This requires that the system be dissipative. • However, energy is not the relevant property. • We are concerned instead with information (in channels between the world and the neural system). • Emergence of representations in causal spread would therefore suggest that it is the dissipation of information that is the relevant property.
Some unabashed speculation • There is no dissipation (loss) of information relevant to many examples of distributed cognition and related forms of wide cognition. • However, culture regularly loses information as ideas die out or are replaced, while new information is created, permitting integration and self-organization. • Culture has a dynamic that places dynamical and dissipative boundary conditions on cognition but also interacts with internal parts of representations. • Therefore it is likely that external information channels and representation cannot always be separated, causal spread is emergent, and Wheeler’s solution fails. Collier, J. (1986). Entropy in Evolution. Biology and Philosophy 1: 3-24 describes how informational self-organization is possible.
Thank you for your attention collierj@ukzn.ac.za http://web.ncf.ca/collier/