Biological inspiration Animals are able to react adaptively to changes in their external and internal environment, and they use their nervous system to produce these behaviours. An appropriate model/simulation of the nervous system should be able to produce similar responses and behaviours in artificial systems. The nervous system is built from relatively simple units, the neurons, so copying their behaviour and functionality should be the solution.
Biological inspiration [Diagram of a biological neuron: dendrites, soma (cell body), axon]
Biological inspiration [Diagram: dendrites, axon, synapses] Information transmission happens at the synapses.
Artificial neurons Neurons work by processing information. They receive and provide information in the form of spikes. [Diagram of the McCulloch-Pitts model: inputs x1, x2, …, xn with weights w1, w2, …, wn feeding a single output y]
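The McCulloch-Pitts neuron on this slide can be sketched in a few lines: a weighted sum of the inputs compared against a threshold. The weights and threshold below are illustrative values, not taken from the slides.

```python
# A minimal McCulloch-Pitts-style neuron: fire (output 1) when the
# weighted sum of the inputs reaches the threshold, stay silent (0) otherwise.

def mcculloch_pitts(inputs, weights, threshold):
    """Return 1 iff the weighted input sum reaches the threshold."""
    activation = sum(x * w for x, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

# With equal weights, a 3-input neuron behaves like a majority vote:
print(mcculloch_pitts([1, 0, 1], [1.0, 1.0, 1.0], threshold=2.0))  # -> 1
print(mcculloch_pitts([1, 0, 0], [1.0, 1.0, 1.0], threshold=2.0))  # -> 0
```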
Artificial neural networks [Diagram: many linked neurons mapping inputs to outputs] An artificial neural network is composed of many artificial neurons that are linked together according to a specific network architecture. The objective of the neural network is to transform the inputs into meaningful outputs.
Learning in biological systems Learning = learning by adaptation. The young animal learns that the green fruits are sour, while the yellowish/reddish ones are sweet. The learning happens by adapting the fruit-picking behaviour. At the neural level, learning happens through changes in synaptic strengths, the elimination of some synapses, and the building of new ones.
Neural network mathematics [Diagram: layered network mapping inputs to outputs]
Neural network approximation Task specification: Data: a set of value pairs (xt, yt), where yt = g(xt) + zt and zt is random measurement noise. Objective: find a neural network that represents the input/output transformation (a function) F(x, W) such that F(x, W) approximates g(x) for every x.
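The task specification can be made concrete with a toy data set. In this sketch the unknown function g(x) = sin(x) and the Gaussian noise level are illustrative choices, since the slides leave both unspecified.

```python
import math
import random

# Toy instance of the approximation task: y_t = g(x_t) + z_t,
# with g(x) = sin(x) and Gaussian measurement noise z_t (both assumed here).
random.seed(0)

def g(x):
    return math.sin(x)

data = [(x, g(x) + random.gauss(0.0, 0.1))
        for x in [i * 0.1 for i in range(100)]]

# A candidate network F(x, W) is judged by how closely it matches the data,
# e.g. via the mean squared error over the noisy samples:
def mse(F, samples):
    return sum((y - F(x)) ** 2 for x, y in samples) / len(samples)

# Even the true g does not reach zero error, because of the noise z_t:
print(round(mse(g, data), 4))
```

The residual error of g itself is roughly the noise variance, which is why the objective is stated as approximating g(x), not reproducing every noisy yt exactly.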
Learning with MLP neural networks MLP neural network with p layers: [Diagram: input x feeding layers 1, 2, …, p-1, p, producing yout] Data: the set of pairs (xt, yt). Error: the sum of squared differences between the targets yt and the network outputs. It is very complicated to calculate the weight changes directly.
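The data set and error function on this slide can be written out explicitly; a sketch assuming the standard squared-error loss, consistent with the F(x, W) notation of the previous slide:

```latex
D = \{(x_t, y_t)\}_{t=1}^{N}, \qquad
E(W) = \frac{1}{2} \sum_{t=1}^{N} \bigl( y_t - F(x_t; W) \bigr)^2
```

Minimising E(W) over all weights W of a p-layer network is the optimisation problem that backpropagation, on the next slide, makes tractable.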
Learning with backpropagation • Solution to the complicated learning problem: • first calculate the changes for the synaptic weights of the output neuron; • then calculate the changes backward, starting from layer p-1, propagating the local error terms backward. The method is still relatively complicated, but it is much simpler than the original optimisation problem.
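The two steps above can be sketched for a tiny 2-2-1 network with sigmoid units and squared error. The starting weights and learning rate are illustrative, not from the slides.

```python
import math

# Backpropagation sketch for a 1-hidden-layer network (2 inputs, 2 hidden
# units, 1 output), sigmoid activations, squared-error loss.

def sigmoid(a):
    return 1.0 / (1.0 + math.exp(-a))

def forward(x, w_hidden, w_out):
    """Forward pass: return hidden activations and the network output."""
    h = [sigmoid(sum(wi * xi for wi, xi in zip(w, x))) for w in w_hidden]
    y = sigmoid(sum(wo * hi for wo, hi in zip(w_out, h)))
    return h, y

def backprop_step(x, target, w_hidden, w_out, lr=0.5):
    h, y = forward(x, w_hidden, w_out)
    # Step 1: local error term (delta) for the output neuron, computed first.
    delta_out = (y - target) * y * (1.0 - y)
    # Step 2: propagate the error term backward to the hidden layer.
    delta_hidden = [delta_out * w_out[j] * h[j] * (1.0 - h[j])
                    for j in range(len(h))]
    # Gradient-descent updates for both layers.
    new_w_out = [w_out[j] - lr * delta_out * h[j] for j in range(len(h))]
    new_w_hidden = [[w_hidden[j][i] - lr * delta_hidden[j] * x[i]
                     for i in range(len(x))] for j in range(len(h))]
    return new_w_hidden, new_w_out

# Repeated steps drive the output toward the target:
w_h, w_o = [[0.1, 0.2], [0.3, 0.4]], [0.5, 0.6]
for _ in range(200):
    w_h, w_o = backprop_step([1.0, 0.0], 1.0, w_h, w_o)
_, y = forward([1.0, 0.0], w_h, w_o)
print(round(y, 3))
```

Note that each hidden delta reuses the output delta, which is exactly the "propagate the local error terms backward" idea: no weight's gradient is derived from scratch.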
Worked forward passes (target = 1 in both cases):
• Input (1, 0): hidden units .3*1 + .4*0 = .3 and .2*1 + .8*0 = .2; output .1*.3 + .6*.2 = .15; error = 1 − .15 = .85.
• Input (0, 1): hidden units .4*0 + .4*1 = .4 and .6*0 + .8*1 = .8; output .2*.4 + .8*.8 = .72; error = 1 − .72 = .28.
• Target = 0, input (1, 1): hidden units .4*1 + .45*1 = .85 and .6*1 + .9*1 = 1.5; output .25*.85 + .9*1.5 = 1.5625 ≈ 1.56; error = 0 − 1.56 = −1.56.
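The arithmetic on these worked-example slides can be checked in a few lines. This sketch assumes linear (identity) units, which is what the slide arithmetic implies, with each pass using the weights shown on its slide.

```python
# Reproduces the forward-pass arithmetic of the worked examples above,
# assuming linear (identity) units in a 2-2-1 network.

def linear_forward(x, w_hidden, w_out):
    """Linear forward pass: hidden values, then the single output."""
    h = [sum(wi * xi for wi, xi in zip(w, x)) for w in w_hidden]
    return sum(wo * hi for wo, hi in zip(w_out, h))

# Input (1, 0): hidden = (.3, .2), output = .1*.3 + .6*.2, error = 1 - .15
y1 = linear_forward([1, 1e0 * 0], [[0.3, 0.4], [0.2, 0.8]], [0.1, 0.6])
# Input (0, 1): hidden = (.4, .8), output = .2*.4 + .8*.8, error = 1 - .72
y2 = linear_forward([0, 1], [[0.4, 0.4], [0.6, 0.8]], [0.2, 0.8])
# Input (1, 1): hidden = (.85, 1.5), output = .25*.85 + .9*1.5, error = 0 - 1.5625
y3 = linear_forward([1, 1], [[0.4, 0.45], [0.6, 0.9]], [0.25, 0.9])

print(round(y1, 4), round(y2, 4), round(y3, 4))  # -> 0.15 0.72 1.5625
```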
Artificial Neural Network [Diagram: the network reads a stretch of sequence and predicts the structure at this point]
Danger • You may train the network on your training set, but it may not generalize to other data • Perhaps we should train several ANNs and then let them vote on the structure
Profile network from HeiDelberg • The input is the protein family (a multiple alignment), instead of just the new sequence • On the first level, a window of length 13 around the residue is used • The window slides down the sequence, making a prediction for each residue • The input includes the frequency of amino acids occurring at each position in the multiple alignment (in the example, there are 5 sequences in the multiple alignment) • The second level takes these predictions from neural networks that are centered on neighboring residues • The third level does a jury selection
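The first-level sliding window can be sketched as follows. The padding symbol '-' at the sequence ends and the toy sequence are assumptions for illustration; PHD's real input is richer, using per-position amino-acid frequencies from the multiple alignment rather than raw letters.

```python
# Sketch of the first-level sliding window: one fixed-length window of
# 13 positions centred on each residue, padded with '-' at the ends
# (the padding symbol is an assumption, not from the slides).

def windows(sequence, size=13):
    """Return one window of `size` positions centred on each residue."""
    half = size // 2
    padded = "-" * half + sequence + "-" * half
    return [padded[i:i + size] for i in range(len(sequence))]

seq = "MKVLAT"  # toy sequence, not from the slides
for w in windows(seq):
    print(w)  # prints one 13-character window per residue
```

Each window becomes one input example, which is how a network with a fixed input size produces a prediction for every residue in a sequence of arbitrary length.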
PHD [Diagram: example PHD prediction outputs]