Deep Belief Networks and Restricted Boltzmann Machines
Restricted Boltzmann Machines
• Visible and hidden units
• Each visible unit is connected to all hidden units
• Undirected graph
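A minimal NumPy sketch of this structure (names and sizes are illustrative, not from the slides): a single weight matrix holds one weight per visible-hidden pair, and there are no connections within a layer.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 6 visible units (binary inputs) and 3 hidden units.
n_visible, n_hidden = 6, 3

# One weight per visible-hidden edge; the same matrix is used in both directions
# because the graph is undirected. There are no visible-visible or hidden-hidden
# weights, which is what makes the machine "restricted".
W = rng.normal(0.0, 0.1, size=(n_visible, n_hidden))
b_visible = np.zeros(n_visible)   # visible unit biases (an assumption; the slides omit biases)
b_hidden = np.zeros(n_hidden)     # hidden unit biases
```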
UNITS STATE ACTIVATION
• RBMs work by updating the states of some units given the states of others
• First compute the activation energy of the unit being updated
• Then apply the logistic function to that energy to decide whether the unit activates
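In symbols (the notation is assumed, since the slides give no explicit formula): the activation energy of unit i sums the states x_j of the units in the other layer, weighted by the connection weights w_ij, plus an optional bias b_i; the logistic function then turns that energy into an activation probability p_i.

```latex
a_i = b_i + \sum_j w_{ij} x_j, \qquad
p_i = \sigma(a_i) = \frac{1}{1 + e^{-a_i}}
```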
• p_i will be close to 1 for positive activation energies and close to 0 for negative ones
• The same process is performed in the other direction: after the hidden units' states have been updated, they are used to update the visible units
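A sketch of this two-way update rule in NumPy (helper names and sizes are assumptions; the parameters are redefined here so the snippet runs on its own):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

n_visible, n_hidden = 6, 3
W = rng.normal(0.0, 0.1, size=(n_visible, n_hidden))
b_visible, b_hidden = np.zeros(n_visible), np.zeros(n_hidden)

def update_hidden(v):
    """Activation energy of each hidden unit given the visible states,
    logistic function, then a stochastic on/off decision."""
    p = sigmoid(b_hidden + v @ W)
    return (rng.random(p.shape) < p).astype(float)

def update_visible(h):
    """Same rule in the other direction, using the same (undirected) weights."""
    p = sigmoid(b_visible + h @ W.T)
    return (rng.random(p.shape) < p).astype(float)

v = np.array([1., 1., 0., 0., 1., 0.])   # example visible state
h = update_hidden(v)                     # hidden states given the visibles
v_reconstructed = update_visible(h)      # visibles given the hiddens
```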
Learning weights
• Take a training example and set the states of the visible units to this example
• Update the hidden units' states using the logistic activation rule
• For each pair of connected units, measure whether they are both on; this gives Positive(eij)
• Reconstruct the visible units using the logistic activation rule
• Update the hidden units again
• For each pair of connected units, measure again whether they are both on; this gives Negative(eij)
• Update all the weights (as shown in the sketch after this list)
• Repeat until the error between the examples and their reconstructions falls below a threshold, or a maximum number of iterations is reached
• By adding Positive(eij) - Negative(eij) to each weight, we push the network toward reproducing the training data
• If that difference is zero, the weight already has its desired value
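Put together, one pass of this procedure (contrastive-divergence style) might look like the sketch below. The learning rate, toy training batch, and stopping threshold are assumptions, not values from the slides.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

n_visible, n_hidden, lr = 6, 3, 0.1
W = rng.normal(0.0, 0.1, size=(n_visible, n_hidden))
b_v, b_h = np.zeros(n_visible), np.zeros(n_hidden)

data = rng.integers(0, 2, size=(10, n_visible)).astype(float)   # toy training examples

for iteration in range(1000):
    # Clamp the visible units to the training examples and update the hidden units.
    p_h = sigmoid(b_h + data @ W)
    h = (rng.random(p_h.shape) < p_h).astype(float)
    positive = data.T @ p_h            # Positive(e_ij): co-activation on the data

    # Reconstruct the visible units, then update the hidden units again.
    p_v = sigmoid(b_v + h @ W.T)
    p_h_recon = sigmoid(b_h + p_v @ W)
    negative = p_v.T @ p_h_recon       # Negative(e_ij): co-activation on the reconstruction

    # Add Positive(e_ij) - Negative(e_ij); a zero difference leaves the weight unchanged.
    W += lr * (positive - negative) / len(data)
    b_v += lr * (data - p_v).mean(axis=0)
    b_h += lr * (p_h - p_h_recon).mean(axis=0)

    # Stop when the reconstruction error falls below a threshold (arbitrary here).
    if np.mean((data - p_v) ** 2) < 1e-3:
        break
```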
Deep Belief Networks
• These are built by stacking RBMs and training them greedily, one layer at a time
• A DBN is a graphical model that learns to extract a deep hierarchical representation of the data
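A sketch of the greedy, layer-by-layer idea: train the first RBM on the raw data, then feed its hidden activations to the next RBM as if they were visible data, and so on. The train_rbm helper is a hypothetical stand-in for a contrastive-divergence-style trainer like the one sketched above; the layer sizes and data are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def train_rbm(data, n_hidden, lr=0.1, iterations=500):
    """Hypothetical CD-style RBM trainer (same update rule as the earlier sketch)."""
    n_visible = data.shape[1]
    W = rng.normal(0.0, 0.1, size=(n_visible, n_hidden))
    b_v, b_h = np.zeros(n_visible), np.zeros(n_hidden)
    for _ in range(iterations):
        p_h = sigmoid(b_h + data @ W)
        h = (rng.random(p_h.shape) < p_h).astype(float)
        p_v = sigmoid(b_v + h @ W.T)
        p_h_recon = sigmoid(b_h + p_v @ W)
        W += lr * (data.T @ p_h - p_v.T @ p_h_recon) / len(data)
        b_v += lr * (data - p_v).mean(axis=0)
        b_h += lr * (p_h - p_h_recon).mean(axis=0)
    return W, b_h

# Greedy stacking: each RBM's hidden representation becomes the
# "visible" input of the next, deeper RBM.
data = rng.integers(0, 2, size=(50, 20)).astype(float)   # toy dataset
layers, x = [], data
for n_hidden in [10, 5]:                 # illustrative layer sizes
    W, b_h = train_rbm(x, n_hidden)
    layers.append((W, b_h))
    x = sigmoid(b_h + x @ W)             # deeper, more abstract representation
```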