Hebbian Constraint on the Resolution of the Homunculus Fallacy Leads to a Network that Searches for Hidden Cause-Effect Relationships Andras Lorincz andras.lorincz@elte.hu http://nipg.inf.elte.hu
Content • Homunculus fallacy and resolution • Hebbian architecture step-by-step • Outlook to neurobiology • Cognitive Map: the hippocampal formation (in rats) • Extensions to control and reinforcement learning • Conjecture about consciousness • Conclusions
The Homunculus Fallacy How do we know that this is a phone?
Democritus’s Answer Small phone atoms fly away and leave a ‘print’ – a representation of the phone – on our eyes, and this is how we know.
Fallacy Someone ‘should make sense’ of the print made by the phone atoms on the eye: Who is that reader? What kind of representation is he using? Who makes sense of that representation? Infinite regression.
Root of the fallacy is in the wording • We transform the infinite regression • into a finite architecture • with convergent dynamics. (Not the representation but the) input makes sense, provided that the representation can reconstruct the input (given the experiences). • In other words: the representation can produce an output that is similar to the input of the network.
Architecture with Hebbian learning • x: input • h: hidden representation • y: reconstructed input (should match x) • W: bottom-up matrix, or BU transformation • M: top-down matrix, or TD transformation. Hebbian (or local) learning: the components of the matrices (transformations) form the long-term memory (LTM) of the system; locality of learning ensures graceful degradation of the architecture.
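To make the reconstruction loop concrete, here is a minimal NumPy sketch (not taken from the slides): the dimensions, the learning rate, and the delta-rule form of the Hebbian update for M are assumptions; the slides only require that learning be local.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid = 8, 4                                # sizes chosen for illustration
W = rng.normal(scale=0.1, size=(n_hid, n_in))     # bottom-up (BU) transformation
M = rng.normal(scale=0.1, size=(n_in, n_hid))     # top-down (TD) transformation
eta = 0.05                                        # assumed learning rate

for _ in range(1000):
    x = rng.normal(size=n_in)                     # input
    h = W @ x                                     # hidden representation
    y = M @ h                                     # reconstructed input, should match x
    eps = x - y                                   # reconstruction error
    # Local (Hebbian) update of M: each weight changes with the product of its
    # pre-synaptic activity (a component of h) and the post-synaptic error (eps).
    M += eta * np.outer(eps, h)
```

W is kept fixed here only for brevity; in the full architecture the bottom-up transformation is learned as well.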
New: we compare. ε: reconstruction error: x − y (contrast with the previous architecture). At first sight this comparison IS NOT Hebbian; in the reworked form it is Hebbian, but slow, so we have to compensate.
New: we learn to predict ε(t+1), the innovation: ε(t+1) = x(t+1) − y(t+1) = x(t+1) − M h(t) (contrast with the previous architecture). Hebbian. Fast. Works for changing inputs. The hidden model can work in the absence of input.
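A correspondingly small sketch of the predictive variant follows; the toy input stream, the learning rate, and the delta-rule form of the update are again assumptions. The point is only that the innovation ε(t+1) = x(t+1) − M h(t) drives a local update.

```python
import numpy as np

rng = np.random.default_rng(1)
n_in, n_hid = 8, 4
W = rng.normal(scale=0.1, size=(n_hid, n_in))       # bottom-up transformation
M = rng.normal(scale=0.1, size=(n_in, n_hid))       # now predicts the NEXT input
eta = 0.05

x = rng.normal(size=n_in)
for _ in range(1000):
    h = W @ x                                       # hidden representation of x(t)
    x_next = 0.9 * x + 0.1 * rng.normal(size=n_in)  # toy slowly changing input
    eps = x_next - M @ h                            # innovation: x(t+1) - M h(t)
    M += eta * np.outer(eps, h)                     # Hebbian: pre-synaptic h(t), error eps(t+1)
    x = x_next
```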
Conditions AutoRegressive (AR) process with recurrent network F: h(t+1) = F h(t) + ε(t+1) • h: hidden state • F: hidden deterministic dynamics • n_h: hidden innovation “causing” the process • M: subtracts the predictive part, computes the innovation • F: adds the predictive part, makes the hidden model • Learning of F: two-phase operation, supervised learning • Phase I: x(t+1) • Phase II: ε(t+1). A sketch of learning F is given after this slide.
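As a stand-in for the two-phase supervised learning of F named on this slide, the sketch below fits F by batch least squares on consecutive hidden states of a synthetic AR(1) process; the generated dynamics and noise level are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
n_hid, T = 4, 5000
F_true = rng.normal(scale=0.25, size=(n_hid, n_hid))   # stable toy hidden dynamics

h = np.zeros((T, n_hid))
for t in range(T - 1):
    innovation = rng.normal(scale=0.1, size=n_hid)      # hidden "cause"
    h[t + 1] = F_true @ h[t] + innovation                # h(t+1) = F h(t) + eps(t+1)

# Least-squares estimate of F from (h(t), h(t+1)) pairs.
F_hat = np.linalg.lstsq(h[:-1], h[1:], rcond=None)[0].T
innovations_hat = h[1:] - h[:-1] @ F_hat.T               # recovered causes
```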
Cause-effect relations • Cause: innovation of the autoregressive process • Effect: deterministic dynamics ‘played’ by matrix F • One can search for hidden and independent causes. The architecture becomes more sophisticated: • independent component analysis • representation of independent causes • representation of hidden state variables
Generalization: AutoRegressive Independent Process Analysis (AR-IPA). Double loop: both the state and the innovation are represented.
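A hedged sketch of the AR-IPA idea as read from this slide: fit the AR dynamics, take the innovations, and unmix them with ICA. FastICA from scikit-learn, the Laplacian sources, and the mixing matrix are conveniences chosen here, not details given in the talk.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(3)
n_hid, T = 3, 10000
A = rng.normal(size=(n_hid, n_hid))              # unknown mixing of the causes
s = rng.laplace(size=(T, n_hid))                 # independent, non-Gaussian causes
F_true = 0.5 * np.eye(n_hid)                     # simple stable AR dynamics

h = np.zeros((T, n_hid))
for t in range(T - 1):
    h[t + 1] = F_true @ h[t] + A @ s[t + 1]      # AR process driven by mixed causes

# Step 1: estimate F and compute the innovations (as in the previous sketch).
F_hat = np.linalg.lstsq(h[:-1], h[1:], rcond=None)[0].T
innov = h[1:] - h[:-1] @ F_hat.T

# Step 2: ICA on the innovations recovers the independent causes
# (up to permutation, sign, and scaling).
s_hat = FastICA(n_components=n_hid, random_state=0).fit_transform(innov)
```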
“Cognitive Map” (in rats): the hippocampal formation. Lorincz and Szirtes, Autoregressive model of the hippocampal representation of events, IJCNN, Atlanta, June 14-19, 2009. • Similar anatomical structure • Similar operational properties • Two-phase operation • A single additional piece, CA3–DG, eliminates echoes (ARMA-IPA) • Learns places and directions • Path integration / planning (dead reckoning)
Extensions of the network • AR can be embedded into reinforcement learning: Kalman filter and RL (Szita and Lorincz, Neural Computation, 2004); Echo State Networks and RL (Szita, Gyenes, and Lorincz, ICANN, 2006) • AR can be extended with control (ARX) and active (Bayesian) learning (Poczos and Lorincz, Journal of Machine Learning Research, 2009)
Consciousness (a conjecture) Consider an overcomplete hidden representation made of a set of recurrent (echo state) networks, with temporal extension: • all of them can reconstruct the input, with larger or smaller error • which one to use? • they can compete over time • the winner represents a finite stretch of the past and the future. This model can explain rivalry situations.
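The competition idea can be mocked up in a few lines; the fixed toy “predictors” below stand in for trained echo state networks, and the error window is an arbitrary choice, so treat this only as an illustration of winner-take-all by reconstruction error.

```python
import numpy as np

rng = np.random.default_rng(4)
T, window = 400, 20
# Toy input whose dynamics switches halfway, so different predictors should win.
x = np.concatenate([np.sin(0.2 * np.arange(T // 2)),
                    np.sign(np.sin(0.05 * np.arange(T // 2)))])

# Each "network" predicts x(t+1) from x(t); real echo state networks would keep
# their own recurrent state and a trained readout.
predictors = [lambda v: 0.98 * v, lambda v: v, lambda v: -v]

errors = np.zeros((len(predictors), T))
for i, f in enumerate(predictors):
    errors[i, 1:] = (x[1:] - f(x[:-1])) ** 2      # squared one-step prediction error

# Winner at time t: smallest error over a sliding window of length `window`.
recent = np.array([np.convolve(e, np.ones(window), mode="same") for e in errors])
winner = recent.argmin(axis=0)                    # index of the representing network
```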
Conclusions Resolution of the fallacy plus Hebbian constraints leads to a structure that • resembles the “Cognitive Map” of rats • searches for hidden cause-effect relationships. • Questions for future work: what kinds of networks arise from the extensions, i.e., • the Kalman filter embedded into reinforcement learning • Bayesian actively controlled learning, if the constraint of Hebbian learning is taken rigorously?