Associationism David Hume (1711-1776) was one of the first philosophers to develop a detailed theory of mental processes.
Associationism “There is a secret tie or union among particular ideas, which causes the mind to conjoin them more frequently together, and makes the one, upon its appearance, introduce the other.”
Three Principles • Resemblance • Contiguity in space and time • Cause and effect
Constant Conjunction Causal Association
Vivacity Hume thought that different ideas have different levels of “vivacity” – how clear or lively they are. (Compare seeing an elephant to remembering an elephant.)
Belief To believe an idea was for that idea to be very vivacious. Importantly, causal association is vivacity preserving. If you believe the cause, then you believe its effect.
Hume’s Non-Rational Mind Hume thus had a model of mental processes that was non-rational. Associative principles aren’t truth-preserving; they are vivacity preserving. (Hume thought this was a positive feature, because he thought that you could not rationally justify causal reasoning.)
Classical Conditioning And as we saw before, the associationist paradigm continued into psychology after it became a science.
Connectionism Connectionism is the “new” associationism.
Names • Connectionist Network • Artificial Neural Network • Parallel Distributed Processing (PDP)
Connection [Diagram: three input nodes – Low, Middle, and High – with activations 3, 1, and 9, each connected to the output nodes Mine and Rock.]
Weights Each connection has its own weight between -1 and 1. The weights correspond to how much of each node’s “message” is passed on. In this example, if the weight is +0.5, then the Low node passes on 3 x 0.5 = 1.5.
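The slide’s arithmetic can be sketched in a couple of lines of Python (the function name is illustrative, not from the slides):

```python
# How much of a node's "message" travels along one connection:
# the node's activation multiplied by the connection's weight.
def passed_on(activation, weight):
    return activation * weight

# The slide's example: the Low node (activation 3) on a +0.5 connection.
low_contribution = passed_on(3, 0.5)  # 3 x 0.5 = 1.5
```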
[Diagram: the same network with example weights (+0.5 and -0.5) on the connections; the weighted inputs to the Mine node sum to -2, so Mine computes f(-2).]
Activation Function Each non-input node has an activation function. This tells it how active to be, given the sum of its inputs. Often the activation functions are just on/off: f(x) = 1 if x > 0; otherwise f(x) = 0.
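The on/off rule above is a step function; a minimal Python version:

```python
def step(x):
    """On/off activation: fully active (1) if the summed input is positive, else off (0)."""
    return 1 if x > 0 else 0

# A node whose inputs sum to -2, as in the diagram, stays off.
step(-2)  # 0
```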
[Diagram: applying the on/off activation function, each output node settles to 0 or 1, and the network’s answer – Mine or Rock – is whichever output node is active.]
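Putting the pieces together, a forward pass through a tiny mine/rock network might look like this. The weights here are made up for illustration; only the structure (weighted sums fed into step activations) follows the slides:

```python
def step(x):
    return 1 if x > 0 else 0

inputs = {"low": 3, "middle": 1, "high": 9}   # echo intensities, as in the diagrams

# Hypothetical weights; the slides' exact values aren't recoverable from the figures.
weights = {
    "mine": {"low": 0.5, "middle": -0.5, "high": -0.2},
    "rock": {"low": -0.5, "middle": 0.5, "high": 0.2},
}

def activation(output_node):
    """Sum each input times its connection weight, then apply the step function."""
    total = sum(inputs[name] * weights[output_node][name] for name in inputs)
    return step(total)

verdict = "mine" if activation("mine") else "rock"
```

With these particular weights the high-frequency input dominates, so the network classifies the echo as a rock.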
Training a Connectionist Network STEP 1: Assign weights to the connections at random.
Training a Connectionist Network STEP 2: Gather a very large number of categorization tasks to which you know the answer. For example, a large number of echoes where you know whether they are from rocks or from mines. This is the “training set.”
Training a Connectionist Network STEP 3: Randomly select one echo from the training set. Give it to the network.
Back Propagation STEP 4: If the network gets the answer right, do nothing. If it gets the answer wrong, find all the connections that supported the wrong answer and adjust them down slightly. Find all the ones that supported the right answer and adjust them up slightly.
Repeat! STEP 5: Repeat the testing-and-adjusting thousands of times. Now you have a trained network.
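Steps 1–5 can be sketched as a simple training loop. This is a perceptron-style weight update standing in for back propagation (which involves hidden layers and calculus the slides don’t cover); the toy data and all names are illustrative:

```python
import random

def step(x):
    return 1 if x > 0 else 0

def train(training_set, n_inputs, passes=1000, rate=0.05, seed=0):
    """Steps 1-5: start with random weights, then nudge them after each wrong answer."""
    rng = random.Random(seed)
    weights = [rng.uniform(-1, 1) for _ in range(n_inputs)]      # STEP 1: random weights
    for _ in range(passes):                                      # STEP 5: repeat many times
        echo, label = rng.choice(training_set)                   # STEP 3: random example
        guess = step(sum(w * x for w, x in zip(weights, echo)))
        if guess != label:                                       # STEP 4: wrong answer?
            direction = 1 if label == 1 else -1                  # adjust toward the truth
            weights = [w + direction * rate * x
                       for w, x in zip(weights, echo)]
    return weights

# Toy "training set": label 1 ("mine") when the first input is active.
data = [((1, 0), 1), ((0, 1), 0)]
trained = train(data, n_inputs=2)
```

After enough passes the nudges accumulate and the trained weights classify both toy echoes correctly.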
Important Properties of Connectionist Networks • Connectionist networks can learn. (If they have access to thousands of right answers, and someone is around to adjust the weights of their connections. As soon as they stop being “trained” they never learn a new thing again.)
Learning If we suppose that networks train themselves (and no one knows how this could happen), learning is still a problem: The system, though it can learn, can’t remember. In altering its connections, it alters the traces of its former experiences.
Parallel Processing 2. Connectionist networks process in parallel. Serial computation works one step at a time: each operation must finish before the next can begin.
Parallel Processing A parallel computation might work like this: I want to solve a really complicated math problem, so I assign small parts of it to each student in class. They work “in parallel” and together we solve the problem faster than one processor working serially.
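The classroom picture can be sketched with a thread pool: split the problem, solve the parts independently, then combine the partial answers. (In CPython, threads illustrate the structure rather than a real speedup; a process pool would parallelize genuinely.)

```python
from concurrent.futures import ThreadPoolExecutor

numbers = list(range(1, 101))                              # the big problem: sum 1..100
chunks = [numbers[i:i + 25] for i in range(0, 100, 25)]    # one chunk per "student"

with ThreadPoolExecutor(max_workers=4) as pool:
    partial_sums = list(pool.map(sum, chunks))             # each worker sums its own chunk

total = sum(partial_sums)                                  # combine the partial answers: 5050
```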
Distributed Representations 3. Representations in connectionist networks are distributed. Information about the ‘shape’ of the object (in sonar echoes) is encoded not in any one node or connection, but across all the nodes and connections.
Local Processing 4. Processing in a connectionist network is local. There is no central processor controlling what happens in a connectionist network. The only thing that determines whether a node activates is its activation function and its inputs. There’s no program telling it what to do.
Graceful Degradation 5-6. Connectionist networks tolerate low-quality inputs, and can still work even as some of their parts begin to fail. Since computing and representation are distributed throughout the network, even if part of it is destroyed or isn’t receiving input, the whole will still work pretty well.
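A toy illustration of graceful degradation, with made-up numbers: because the decision is spread across several connections, knocking one out need not flip the output.

```python
def step(x):
    return 1 if x > 0 else 0

weights = [0.3, 0.3, 0.3, 0.3]   # the "answer" is distributed over four connections
inputs = [1, 1, 1, 1]

def classify(ws):
    return step(sum(w * x for w, x in zip(ws, inputs)))

intact = classify(weights)             # summed input 1.2 -> active

damaged = [0.0] + weights[1:]          # "destroy" one connection
after_damage = classify(damaged)       # summed input 0.9 -> still active
```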
Brain = Neural Network? One of the main points of interest of connectionism is the idea that the human brain might be a connectionist network.
Neurons A neuron receives inputs from a large number of other neurons, some of which “inhibit” it and others of which “excite” it. At a certain threshold, it fires.
Neurons Neurons are hooked up ‘in parallel’: different chains of activation and inhibition can operate independently of one another.
Neurons But is the brain really a neural network?
Spike Trains Neurons fire in ‘spikes,’ and many brain researchers think they communicate through the frequency of spikes over time. That’s not a part of connectionism.
Spike Trains (Another hypothesis is that they communicate information by firing in the same patterns as other neurons.)