Data representation techniques for adaptation Alexandra I. Cristea USI intensive course “Adaptive Systems”, April-May 2003
Overview: Data representation • Data or knowledge? • Subsymbolic vs. symbolic techniques • Symbolic representation • Example • Subsymbolic representation • Example
Data or knowledge? • Data for AS often becomes knowledge • data < information < knowledge • We divide knowledge representation into: • Symbolic • Sub-symbolic
Data representation techniques for adaptation • Symbolic AI and knowledge representation, such as: • Concept Maps • Probabilistic AI (belief networks) • see UM course • Sub-symbolic: Machine learning, such as: • Neural Networks
Symbolic AI and knowledge representation • Static knowledge • concept mapping • terminological knowledge • concept subsumption (inclusion) inference (see the sketch below) • Dynamic knowledge • ontological engineering, e.g., temporal representation and reasoning • planning
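As a concrete illustration of subsumption inference, here is a minimal Python sketch (not from the course; the hierarchy and concept names are invented for illustration) that checks concept inclusion over a simple is-a hierarchy:

```python
# A sketch of concept subsumption (inclusion) inference over an is-a
# hierarchy. The hierarchy and concept names are illustrative only.
IS_A = {
    "legume": "plant",
    "grain": "plant",
    "plant": "organism",
    "animal": "organism",
}

def subsumes(general, specific):
    """True if `general` includes `specific` via the is-a chain."""
    current = specific
    while current is not None:
        if current == general:
            return True
        current = IS_A.get(current)
    return False

print(subsumes("organism", "legume"))  # True: legume -> plant -> organism
print(subsumes("animal", "grain"))     # False
```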
Concept Maps Example
Proposition: Without the industrial chemical reduction of atmospheric nitrogen, starvation would be rampant in third world countries. [Concept map figure: nodes such as Starvation and Famine, Food, Population Growth, Malthus, Protein, Essential Amino Acids, Grains, Legumes, Plants, Symbiotic Bacteria, Fertilizer, the Haber Process, Atmospheric N2 and NH3, connected by labelled links such as “contains”, “required for”, “possess” and “used for”.]
Constructing a CM • Brainstorming Phase: list all concepts and terms that come to mind about the topic • Organizing Phase: create groups and sub-groups of related items • Layout Phase: arrange the items so that related concepts are close and the hierarchy is visible • Linking Phase: connect related items with lines and arrows, labelled so that each link forms a proposition (see the sketch below)
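To make the Linking Phase concrete, here is a minimal sketch representing a concept map as concept–link–concept propositions; the triples are a hand-picked reading of the example map above, not a complete encoding of it:

```python
# A concept map stored as concept-link-concept triples (propositions).
concept_map = [
    ("Food", "contains", "Protein"),
    ("Protein", "required for", "Human Health and Survival"),
    ("Legumes", "possess", "Symbiotic Bacteria"),
    ("Haber Process", "used for", "Fertilizer"),
]

def links_from(concept):
    """All propositions starting at a given concept."""
    return [(c1, link, c2) for (c1, link, c2) in concept_map if c1 == concept]

for triple in links_from("Food"):
    print(" -> ".join(triple))   # Food -> contains -> Protein
```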
Reviewing the CM • Accuracy and Thoroughness. • Are the concepts and relationships correct? Are important concepts missing? Are any misconceptions apparent? • Organization. • Was the concept map laid out in a way that higher-order relationships are apparent and easy to follow? Does it have a representative title? • Appearance. • Is the spelling correct? Is the map legible and uncluttered? • Creativity.
Subsymbolic systems • human-like information processing: • learning from examples, • context sensitivity, • generalization, • robustness of behaviour, and • intuitive reasoning
Why NN? • To learn how our brain works (!!) • High computation rate technology • Intelligence • User-friendliness
Why NNs? Applications
What are humans good at and machines not? • Humans: • pattern recognition • reasoning with incomplete knowledge • Computers: • precise computing • number crunching
Firing • The resulting signal is either: • Excitatory: • encourages firing of the next neuron • Inhibitory: • discourages firing of the next neuron
What does a neuron do? • Sums its inputs • Decides whether to fire with respect to a threshold • But: limited capacity: • a neuron cannot fire all the time • refractory period: 10 ms – minimum time before it can fire again • So: max. firing frequency: 100 spikes/sec
Hebbian learning rule (1949) • If neuron A repeatedly and persistently contributes to the firing of neuron B, then the connection between A and B will get stronger. • If neuron A does not contribute to the firing of neuron B for a long period of time, then the connection between A and B becomes weaker.
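A minimal sketch of the rule above, assuming a simple additive strengthening when both neurons fire together and a multiplicative decay otherwise; the learning rate and decay factor are illustrative choices, not from the slide:

```python
# Hebbian weight update: strengthen w when pre- and post-synaptic
# neurons fire together, let it decay slowly otherwise.
LEARNING_RATE = 0.1   # illustrative value
DECAY = 0.01          # illustrative value

def hebbian_update(w, pre_fired, post_fired):
    """Return the updated weight of the connection from A to B."""
    if pre_fired and post_fired:
        return w + LEARNING_RATE      # A contributes to B firing
    return w * (1.0 - DECAY)          # no joint activity: weaken

w = 0.5
for pre, post in [(1, 1), (1, 1), (0, 1), (0, 0)]:
    w = hebbian_update(w, pre, post)
    print(f"pre={pre} post={post} -> w={w:.3f}")
```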
Summarizing • A neuron doesn’t fire if the accumulated activity is below the threshold • If the activity is above the threshold, the neuron fires (produces a spike) • Firing frequency increases with accumulated activity until the max. firing frequency is reached
The Artificial Neuron • Input → Output • Functions: • inside: synapse • outside: f = threshold
An ANN • Input → Layer 1 → Layer 2 → Layer 3 → Output • (viewed as a black box)
Neuron link • neuron1 (value V1) → neuron2 (value V2 = W·V1) • W: weight of the link
ANN • Pulse train → average firing frequency ≥ 0 • Model of synapse (connecting element): a real number w • w > 0: excitatory • w < 0: inhibitory • N(i) – set of neurons that have a connection to neuron i • j ∈ N(i) • wij – weight of the connection from j to i
Neuron computation • internal activation fct.: S = Σi=1..n Vi·Wi − b • external activation fct.: O = f(S)
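A direct transcription of these two formulas into Python; the step threshold used for f here is just one choice, with alternatives on the next slide:

```python
# Neuron computation: S = sum(Vi*Wi) - b, O = f(S).
def neuron(inputs, weights, b, f):
    """Compute the neuron output O = f(S) with S = sum(Vi*Wi) - b."""
    S = sum(v * w for v, w in zip(inputs, weights)) - b
    return f(S)

step = lambda z: 1 if z > 0 else 0  # simple threshold function

# S = 1.0*0.8 + 0.5*(-0.4) - 0.2 = 0.4 > 0, so the neuron fires.
print(neuron([1.0, 0.5], [0.8, -0.4], b=0.2, f=step))  # 1
```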
Typical input–output relation f • Standard sigmoid fct.: f(z) = 1/(1 + e^(−z)) • Discrete neuron: fires at max. speed, or does not fire at all • xi ∈ {0,1}; f(z) = 1 if z > 0, 0 if z ≤ 0
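Both input–output relations in code, evaluated at a few sample points:

```python
# The two activation functions from the slide.
import math

def sigmoid(z):
    """Standard sigmoid: f(z) = 1 / (1 + e^(-z))."""
    return 1.0 / (1.0 + math.exp(-z))

def discrete(z):
    """Discrete neuron: f(z) = 1 if z > 0, else 0."""
    return 1 if z > 0 else 0

for z in (-2.0, 0.0, 2.0):
    print(f"z={z:+.1f}  sigmoid={sigmoid(z):.3f}  discrete={discrete(z)}")
```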
Other I-O functions f • 3. Linear neuron: f(z) = z; output xi = zi = Σj wij·vj − bi • 4. Stochastic neuron: xi ∈ {0,1}; input zi = Σj wij·vj − bi; output 0 or 1 • probability that the neuron fires: f(zi) • probability that it doesn’t fire: 1 − f(zi)
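A minimal sketch of the stochastic neuron, assuming the sigmoid from the previous slide as f; sampling many outputs shows the empirical firing rate approaching f(z):

```python
# Stochastic neuron: fires (outputs 1) with probability f(z),
# stays silent (outputs 0) with probability 1 - f(z).
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def stochastic_neuron(z):
    """Sample the binary output: 1 with probability f(z)."""
    return 1 if random.random() < sigmoid(z) else 0

z = 0.5
samples = [stochastic_neuron(z) for _ in range(1000)]
print(f"f(z)={sigmoid(z):.3f}, empirical firing rate={sum(samples)/1000:.3f}")
```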
Summarizing ANNs • Feedforward network: layered • no connections from the output back to the input, neither between layers nor at the neuron level • Recurrent network: • anything is allowed – cycles, etc.
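A minimal sketch of the layered feedforward structure summarized above, built from the neuron computation of the earlier slides; the layer sizes, weights, and biases are illustrative values, not from the course:

```python
# Feedforward pass: activity flows strictly input -> hidden -> output,
# with no connections back from later layers to earlier ones.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def layer(inputs, weights, biases):
    """One layer: each neuron computes f(sum(Vi*Wi) - b)."""
    return [
        sigmoid(sum(v * w for v, w in zip(inputs, row)) - b)
        for row, b in zip(weights, biases)
    ]

# 2 inputs -> 2 hidden neurons -> 1 output neuron
hidden = layer([1.0, 0.0],
               weights=[[0.5, -0.3], [0.2, 0.8]], biases=[0.1, -0.1])
output = layer(hidden, weights=[[1.0, -1.0]], biases=[0.0])
print(output)
```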