
Optimization methods Morten Nielsen Department of Systems biology , DTU





Presentation Transcript


  1. Optimization methods Morten Nielsen, Department of Systems Biology, DTU

  2. Minimization The path to the closest local minimum = local minimization *Adapted from slides by Chen Keasar, Ben-Gurion University

  3. Minimization The path to the closest local minimum = local minimization *Adapted from slides by Chen Keasar, Ben-Gurion University

  4. Minimization The path to the global minimum *Adapted from slides by Chen Keasar, Ben-Gurion University

  5. Outline • Optimization procedures • Gradient descent • Monte Carlo • Overfitting • cross-validation • Method evaluation

  6. Linear methods. Error estimate Linear function I1 I2 w1 w2 o

  7. Gradient descent (from Wikipedia) Gradient descent is based on the observation that if the real-valued function F(x) is defined and differentiable in a neighborhood of a point a, then F(x) decreases fastest if one goes from a in the direction of the negative gradient of F at a. It follows that, if b = a - γ∇F(a) for γ > 0 a small enough number, then F(b) < F(a)
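The observation above can be sketched in a few lines. This is an illustrative example (not from the slides) minimizing F(x) = (x - 3)^2, whose gradient is F'(x) = 2(x - 3); the step size gamma and starting point are arbitrary choices:

```python
def grad_descent(x0, gamma=0.1, steps=50):
    """Repeat b = a - gamma * grad F(a) for F(x) = (x - 3)^2."""
    x = x0
    for _ in range(steps):
        x = x - gamma * 2 * (x - 3)   # step against the gradient
    return x

x_min = grad_descent(0.0)
print(round(x_min, 4))  # close to the minimum at x = 3
```

For small enough gamma every step lowers F, so the iterates converge to the local (here also global) minimum.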

  8. Gradient descent (example)

  9. Gradient descent

  10. Gradient descent Weights are changed in the opposite direction of the gradient of the error

  11. Gradient descent (Linear function) Weights are changed in the opposite direction of the gradient of the error Linear function I1 I2 w1 w2 o

  12. Gradient descent Weights are changed in the opposite direction of the gradient of the error Linear function I1 I2 w1 w2 o

  13. Gradient descent. Example Weights are changed in the opposite direction of the gradient of the error Linear function I1 I2 w1 w2 o

  14. Gradient descent. Example Weights are changed in the opposite direction of the gradient of the error Linear function I1 I2 w1 w2 o

  15. Gradient descent. Doing it yourself Weights are changed in the opposite direction of the gradient of the error Linear function 1 0 W1=0.1 W2=0.1 o What are the weights after 2 forward (calculate predictions) and backward (update weights) iterations with the given input, and has the error decreased (use η=0.1, and t=1)?

  16. Fill out the table What are the weights after 2 forward/backward iterations with the given input, and has the error decreased (use η=0.1, t=1)? Linear function 1 0 W1=0.1 W2=0.1 o

  17. Fill out the table What are the weights after 2 forward/backward iterations with the given input, and has the error decreased (use η=0.1, t=1)? Linear function 1 0 W1=0.1 W2=0.1 o
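A sketch of the exercise, assuming the usual half squared error E = 0.5*(o - t)^2 and the delta-rule update w_i -= η*(o - t)*I_i (these conventions are my assumption; check them against the slides):

```python
# Two forward/backward passes for o = w1*I1 + w2*I2 with
# I = (1, 0), initial w = (0.1, 0.1), eta = 0.1, target t = 1.
eta, t = 0.1, 1.0
I = (1.0, 0.0)
w = [0.1, 0.1]

for it in range(2):
    o = w[0] * I[0] + w[1] * I[1]      # forward pass: prediction
    E = 0.5 * (o - t) ** 2             # half squared error
    for i in range(2):                 # backward pass: w_i -= eta * dE/dw_i
        w[i] -= eta * (o - t) * I[i]
    print(it + 1, round(o, 3), round(E, 4), [round(x, 3) for x in w])
```

Under these assumptions only w1 changes (I2 = 0): it goes 0.1 → 0.19 → 0.271, and the error drops from 0.405 to about 0.328, so yes, the error has decreased.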

  18. Monte Carlo Because of their reliance on repeated computation of random or pseudo-random numbers, Monte Carlo methods are most suited to calculation by a computer. Monte Carlo methods tend to be used when it is infeasible or impossible to compute an exact result with a deterministic algorithm. Or when you are too stupid to do the math yourself?

  19. Example: Estimating π by Independent Monte-Carlo Samples Suppose we throw darts randomly (and uniformly) at the square: Algorithm: For i=[1..ntrials] x = (random# in [0..r]) y = (random# in [0..r]) distance = sqrt(x^2 + y^2) if distance ≤ r hits++ End Output: π ≈ 4 × hits/ntrials http://www.chem.unl.edu/zeng/joy/mclab/mcintro.html Adapted from course slides by Craig Douglas
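The dart-throwing loop above translates directly to Python (taking r = 1 for simplicity; the fraction of darts landing inside the quarter circle estimates π/4):

```python
import math
import random

def estimate_pi(ntrials, seed=0):
    """Monte Carlo estimate of pi: count uniform points in the unit
    square that fall inside the quarter circle of radius 1."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(ntrials):
        x, y = rng.random(), rng.random()
        if math.sqrt(x * x + y * y) <= 1.0:
            hits += 1
    return 4.0 * hits / ntrials

print(estimate_pi(100_000))  # roughly 3.14
```

The error shrinks only like 1/sqrt(ntrials), which is typical of independent Monte Carlo sampling.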

  20. Estimating π

  21. Sampling Protein Conformations with MCMC (Markov Chain Monte Carlo) • Markov-Chain Monte-Carlo (MCMC) with “proposals”: • Perturb Structure to create a “proposal” • Accept or reject new conformation with a “certain” probability • After a long run, we want to find low-energy conformations, with high probability • A (physically) natural* choice is the Boltzmann distribution, proportional to exp(−Ei/kBT) • Ei = energy of state i • kB = Boltzmann constant • T = temperature • Z = “Partition Function” Protein image taken from Chemical Biology, 2006 • But how? Slides adapted from Barak Raveh • * In theory, the Boltzmann distribution is a bit problematic in non-gas phase, but never mind that for now…

  22. The Metropolis-Hastings Criterion "Equations of State Calculations by Fast Computing Machines" – Metropolis, N. et al. Journal of Chemical Physics (1953) • Boltzmann Distribution: • The energy score and temperature are computed (quite) easily • The "only" problem is calculating Z (the "partition function") – this requires summing over all states. • Metropolis showed that MCMC will converge to the true Boltzmann distribution, if we accept a new proposal with probability Paccept = min(1, exp(−ΔE/kBT)) Slides adapted from Barak Raveh

  23. Sampling Protein Conformations with Metropolis-Hastings MCMC • Markov-Chain Monte-Carlo (MCMC) with “proposals”: • Perturb Structure to create a “proposal” • Accept or reject new conformation by the Metropolis criterion • Repeat for many iterations • If we run till infinity, with good perturbations, we will visit every conformation according to the Boltzmann distribution • But we just want to find the energy minimum. If we do our perturbations in a smart manner, we can still cover relevant (realistic, low-energy) parts of the search space Protein image taken from Chemical Biology, 2006 Slides adapted from Barak Raveh

  24. Monte Carlo (Minimization) dE>0 dE<0
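The two cases on the slide (dE < 0: always accept; dE > 0: accept with probability exp(−dE/T)) are a few lines of Python. The 1D energy function here is a made-up stand-in for a protein energy landscape, chosen only because it has more than one local minimum:

```python
import math
import random

def metropolis_accept(dE, T, rng):
    """Metropolis criterion: accept if dE < 0, else with prob exp(-dE/T)."""
    return dE < 0 or rng.random() < math.exp(-dE / T)

def energy(x):
    # Double-well toy energy: local minima near x = +2 and x = -2.
    return (x * x - 4) ** 2 + 0.3 * x

def mc_minimize(x0, T=1.0, step=0.5, iters=5000, seed=1):
    rng = random.Random(seed)
    x, best = x0, x0
    for _ in range(iters):
        x_new = x + rng.uniform(-step, step)        # perturb ("proposal")
        dE = energy(x_new) - energy(x)
        if metropolis_accept(dE, T, rng):
            x = x_new
        if energy(x) < energy(best):                # track best-so-far
            best = x
    return best

best = mc_minimize(x0=3.0)
print(round(best, 2))
```

Tracking the best-so-far state is what turns the sampler into a minimizer: the chain itself is free to wander uphill, which is exactly what lets it escape local minima.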

  25. The Traveling Salesman Adapted from www.mpp.mpg.de/~caldwell/ss11/ExtraTS.pdf

  26. Adapted from www.mpp.mpg.de/~caldwell/ss11/ExtraTS.pdf

  27. Adapted from www.mpp.mpg.de/~caldwell/ss11/ExtraTS.pdf

  28. Adapted from www.mpp.mpg.de/~caldwell/ss11/ExtraTS.pdf

  29. Adapted from www.mpp.mpg.de/~caldwell/ss11/ExtraTS.pdf

  30. Adapted from www.mpp.mpg.de/~caldwell/ss11/ExtraTS.pdf
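A minimal simulated-annealing sketch of the traveling-salesman idea from the referenced slides: propose a tour change (here a 2-opt segment reversal), accept it by the Metropolis rule, and slowly lower the temperature. The cities and the cooling schedule are made-up illustrative choices:

```python
import math
import random

def tour_length(cities, tour):
    """Total length of the closed tour visiting cities in the given order."""
    return sum(math.dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def anneal_tsp(cities, T0=1.0, cooling=0.999, iters=20000, seed=0):
    rng = random.Random(seed)
    n = len(cities)
    tour = list(range(n))
    best = tour[:]
    T = T0
    for _ in range(iters):
        i, j = sorted(rng.sample(range(n), 2))
        new = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]  # 2-opt reversal
        dE = tour_length(cities, new) - tour_length(cities, tour)
        if dE < 0 or rng.random() < math.exp(-dE / T):       # Metropolis rule
            tour = new
            if tour_length(cities, tour) < tour_length(cities, best):
                best = tour[:]
        T *= cooling          # gradually forbid uphill moves
    return best

rng = random.Random(42)
cities = [(rng.random(), rng.random()) for _ in range(25)]
best = anneal_tsp(cities)
print(round(tour_length(cities, best), 3))
```

At high temperature the chain accepts many length-increasing swaps and explores broadly; as T falls it behaves more and more like pure greedy descent.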

  31. Gibbs sampler. Monte Carlo simulations RFFGGDRGAPKRG YLDPLIRGLLARPAKLQV KPGQPPRLLIYDASNRATGIPA GSLFVYNITTNKYKAFLDKQ SALLSSDITASVNCAK GFKGEQGPKGEP DVFKELKVHHANENI SRYWAIRTRSGGI TYSTNEIDLQLSQEDGQTIE Note the sign. Maximization. E1 = 5.4 → E2 = 5.7: dE>0; Paccept = 1. E1 = 5.4 → E2 = 5.2: dE<0; 0 < Paccept < 1

  32. Monte Carlo Temperature • What is the Monte Carlo temperature? • Say dE=-0.2, T=1 • T=0.001
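For the question on the slide, using the acceptance rule Paccept = exp(dE/T) for a worsening move (the maximization sign convention of slide 31, so dE = -0.2 is a move to a worse state; this reading is my interpretation), the temperature controls how often such moves get accepted:

```python
import math

# Acceptance probability of a worsening move dE = -0.2 at two temperatures.
dE = -0.2
for T in (1.0, 0.001):
    p = math.exp(dE / T)
    print(T, p)   # high T: often accepted; low T: essentially never
```

At T = 1 the move is accepted about 82% of the time (exp(-0.2) ≈ 0.82); at T = 0.001 the probability is exp(-200), which is zero for all practical purposes. This is the whole point of the temperature: it tunes how freely the walk may move uphill.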

  33. MC minimization

  34. Monte Carlo - Examples • Why a temperature?

  35. Local minima
