
Regression Analysis: Linear & Nonlinear Methods Overview

Explore linear and nonlinear regression techniques, including linear least-squares regression, uncertainties in fitted parameters, and the Simplex Method. Learn to minimize the sum of squared residuals and optimize parameter sets effectively.


Presentation Transcript


  1. Chem 302 - Math 252 Chapter 5: Regression

  2. Linear & Nonlinear Regression
  • Linear regression
    • Linear in the parameters
    • Does not have to be linear in the independent variable(s)
    • Can be solved through a system of linear equations
  • Nonlinear regression
    • Nonlinear in the parameters
    • Usually requires linearization and iteration
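To make the distinction concrete, here is a minimal Python/NumPy sketch (the data values are invented for illustration, not from the course) of a model that is quadratic in x but linear in the parameters, so it is still solved by one linear least-squares step:

    import numpy as np

    # Illustrative data (hypothetical values)
    x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
    y = np.array([1.1, 2.9, 7.2, 13.1, 21.0])

    # Model y = b0 + b1*x + b2*x^2: nonlinear in x, but linear in b0, b1, b2,
    # so the least-squares parameters come from a single linear solve.
    X = np.column_stack([np.ones_like(x), x, x**2])   # design matrix
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    print(b)

    # By contrast, y = b0*exp(b1*x) is nonlinear in b1; there is no such
    # closed-form linear solution, so it needs linearization and iteration.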

  3. Linear Least-Squares Regression
  • Residual: the difference between an observed value and the model prediction, e_i = y_i - ŷ_i
  • Sum of squared residuals: Z = Σ e_i²
  • Want to minimize Z
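As a sketch of what minimizing Z looks like for a straight-line model y = b0 + b1*x (Python/NumPy; the data below are invented for illustration), setting the derivatives of Z with respect to b0 and b1 to zero gives the familiar closed-form solution:

    import numpy as np

    x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])       # hypothetical data
    y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])
    n = len(x)

    # Normal equations for Z = sum((y - b0 - b1*x)**2)
    b1 = (n * np.sum(x * y) - np.sum(x) * np.sum(y)) / (n * np.sum(x**2) - np.sum(x)**2)
    b0 = np.mean(y) - b1 * np.mean(x)

    residuals = y - (b0 + b1 * x)
    Z = np.sum(residuals**2)                      # minimized sum of squared residuals
    print(b0, b1, Z)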

  4. Linear Least-Squares Regression

  5. Linear Least-Squares Regression Example

  6. Linear Least-Squares Regression – Uncertainties in Parameters: Example

  7. Linear Least-Squares Regression – Regression on “y”
  • Treat x as y and y as x
  • Choose as the independent variable (x) the one with the smallest error
  • Can also be determined by equation
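A small Python/NumPy sketch of why this choice matters (invented data): regressing y on x and regressing x on y generally give two different lines, so the variable with the smaller error should be used as the independent one.

    import numpy as np

    x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])       # hypothetical data
    y = np.array([2.3, 3.7, 6.4, 7.6, 10.2])

    # Ordinary fit: minimize residuals in y
    b1_yx, b0_yx = np.polyfit(x, y, 1)

    # Swapped fit: treat x as the dependent variable, then invert the line
    b1_xy, b0_xy = np.polyfit(y, x, 1)             # x = b0 + b1*y
    slope_from_xy = 1.0 / b1_xy                    # back to y = a + b*x form
    intercept_from_xy = -b0_xy / b1_xy

    print(b1_yx, slope_from_xy)                    # the two slopes differ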

  8. Linear Least-Squares Regression

  9. Example – Vapour Pressure of Cadmium

  10. Linear Least-Squares Regression – Uncertainties in Parameters
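The slide's equations are not reproduced in this transcript, but a common way to estimate these uncertainties (shown here as a Python/NumPy sketch with invented data, not necessarily the exact expressions used in the course) is from the residual variance and the inverse of the normal-equation matrix:

    import numpy as np

    x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])   # hypothetical data
    y = np.array([2.2, 3.8, 6.1, 8.3, 9.7, 12.4])

    X = np.column_stack([np.ones_like(x), x])      # straight-line design matrix
    b, *_ = np.linalg.lstsq(X, y, rcond=None)

    residuals = y - X @ b
    n, p = X.shape
    s2 = residuals @ residuals / (n - p)           # residual variance, Z/(n - p)

    cov = s2 * np.linalg.inv(X.T @ X)              # parameter covariance matrix
    std_err = np.sqrt(np.diag(cov))                # one-sigma uncertainties in b0, b1
    print(b, std_err)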

  11. Nonlinear Least-Squares Regression
  • Minimizing Z results in a system of nonlinear equations
  • Linearize & solve iteratively
  • Need an initial estimate of the parameters
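One standard "linearize and iterate" scheme is Gauss-Newton (sketched below; not necessarily the exact procedure on the slides): expand the model to first order around the current parameter estimate and repeatedly solve the resulting linear normal equations. The exponential model and data here are invented for illustration.

    import numpy as np

    def model(b, x):
        return b[0] * np.exp(b[1] * x)             # y = b0 * exp(b1 * x)

    def jacobian(b, x):
        # Partial derivatives of the model with respect to b0 and b1
        return np.column_stack([np.exp(b[1] * x),
                                b[0] * x * np.exp(b[1] * x)])

    x = np.array([0.0, 0.5, 1.0, 1.5, 2.0])        # hypothetical data
    y = np.array([1.0, 1.3, 1.7, 2.2, 2.9])

    b = np.array([1.0, 0.5])                       # initial estimate (required)
    for _ in range(20):                            # Gauss-Newton iterations
        r = y - model(b, x)                        # residuals
        J = jacobian(b, x)
        delta = np.linalg.solve(J.T @ J, J.T @ r)  # linearized normal equations
        b = b + delta
        if np.max(np.abs(delta)) < 1e-10:          # converged
            break
    print(b)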

  12. Nonlinear Least-Squares Regression - Example Van der Waals parameters for nitrogen

  13. Weighted Least-Squares Regression
  • May not always want to give equal weight to each point
  • Applies to both the linear and nonlinear cases
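A minimal sketch of the weighted linear case (Python/NumPy; the data and per-point uncertainties are invented), with each point weighted by the reciprocal of its variance, one common choice:

    import numpy as np

    x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])        # hypothetical data
    y = np.array([2.0, 4.1, 5.9, 8.3, 9.8])
    sigma = np.array([0.1, 0.1, 0.3, 0.3, 0.5])    # per-point uncertainties

    w = 1.0 / sigma**2                             # weights
    X = np.column_stack([np.ones_like(x), x])
    W = np.diag(w)

    # Weighted normal equations: (X^T W X) b = X^T W y
    b = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)

    Zw = np.sum(w * (y - X @ b)**2)                # weighted sum of squared residuals
    print(b, Zw)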

  14. Drawbacks of Iterative Matrix Method
  • Local minima can cause problems
  • Can be sensitive to the initial guess
  • Derivatives must be evaluated for each iteration

  15. Simplex Method
  • A simplex has one more vertex than the dimension of the space
    • 2D – triangle
    • m parameters – m+1 vertices
  • The Simplex Method is used to optimize a set of parameters
    • Find the optimal set of b’s such that Z is a minimum
  • More robust than the previous iterative procedure
  • Often slower

  16. Simplex Method
  • Evaluate Z at m+1 unique sets of parameters
  • Identify ZB (best, smallest) and ZW (worst, largest)
  • Calculate the centroid of all but the worst (average of the parameter sets, ignoring the worst set)
  • Reflect the worst point through the centroid

  17. Simplex Method
  • Replace the worst point:
    • If ZR1 < ZB (the reflected point is better than the previous best), calculate a second reflected (expanded) point R2
      • If ZR2 < ZR1, replace W with R2
      • Otherwise replace W with R1
    • If ZB < ZR1 < ZW, replace W with R1
    • If ZR1 > ZW, a contracted point R3 is calculated
      • If ZR3 < ZW, replace W with R3
      • Otherwise move all points closer to the best point
  • Repeat until converged or the maximum number of iterations has been performed
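A compact Python/NumPy sketch of these steps (the function name, initial-simplex construction, and the reflection/expansion/contraction factors of 1, 2, and 0.5 are my own common choices, not necessarily those of the course program):

    import numpy as np

    def simplex_minimize(Z, b_start, step=0.05, tol=1e-10, max_iter=500):
        # Minimize Z(b) with the reflect / expand / contract simplex scheme above.
        b_start = np.asarray(b_start, dtype=float)
        m = b_start.size
        # m+1 unique parameter sets: start point plus one perturbed copy per parameter
        pts = [b_start.copy()]
        for i in range(m):
            p = b_start.copy()
            p[i] = p[i] * (1.0 + step) if p[i] != 0 else step
            pts.append(p)
        pts = np.array(pts)
        vals = np.array([Z(p) for p in pts])

        for _ in range(max_iter):
            order = np.argsort(vals)               # best first, worst last
            pts, vals = pts[order], vals[order]
            best, worst = pts[0], pts[-1]
            if vals[-1] - vals[0] < tol:           # converged
                break
            C = pts[:-1].mean(axis=0)              # centroid of all but the worst
            R1 = C + (C - worst)                   # first reflected point
            zR1 = Z(R1)
            if zR1 < vals[0]:                      # better than the best: try expanding
                R2 = C + 2.0 * (C - worst)         # second reflected point
                zR2 = Z(R2)
                if zR2 < zR1:
                    pts[-1], vals[-1] = R2, zR2
                else:
                    pts[-1], vals[-1] = R1, zR1
            elif zR1 <= vals[-1]:                  # between best and worst: keep R1
                pts[-1], vals[-1] = R1, zR1
            else:                                  # worse than the worst: contract
                R3 = C + 0.5 * (worst - C)
                zR3 = Z(R3)
                if zR3 < vals[-1]:
                    pts[-1], vals[-1] = R3, zR3
                else:                              # shrink all points toward the best
                    pts = best + 0.5 * (pts - best)
                    vals = np.array([Z(p) for p in pts])
        return pts[np.argmin(vals)]

    # Example use on a hypothetical two-parameter objective:
    # Z = lambda b: (b[0] - 1.4)**2 + 100.0 * (b[1] - 0.05)**2
    # print(simplex_minimize(Z, [1.3, 0.05]))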

  18. Simplex Regression - Example Van der Waals parameters for nitrogen

  19. Simplex program

  20. Simplex - Example

  Iteration 1: Response 0.344652
      beta                    Response
      1.300000   0.050000     0.425437
      1.326000   0.050500     0.344652   Best
      1.313000   0.051000     0.579697   Worst
      1.313000   0.050250                Centroid
      1.313000   0.049500     0.229741   First reflected point
      1.313000   0.048750     0.116962   Second reflected point

  Iteration 2: Response 0.116962
      beta                    Response
      1.300000   0.050000     0.425437   Worst
      1.326000   0.050500     0.344652
      1.313000   0.048750     0.116962   Best
      1.319500   0.049625                Centroid
      1.339000   0.049250     0.076378   First reflected point
      1.358500   0.048875     0.011665   Second reflected point

  Iteration 3: Response 0.0116649
      beta                    Response
      1.358500   0.048875     0.011665   Best
      1.326000   0.050500     0.344652   Worst
      1.313000   0.048750     0.116962
      1.335750   0.048812                Centroid
      1.345500   0.047125     0.041013   First reflected point

  Iteration 4: Response 0.0116649
      beta                    Response
      1.358500   0.048875     0.011665   Best
      1.345500   0.047125     0.041013
      1.313000   0.048750     0.116962   Worst
      1.352000   0.048000                Centroid
      1.391000   0.047250     0.195042   First reflected point
      1.332500   0.048375     0.027212   Contracted point

  Iteration 31: Response 0.00543252
      beta                    Response
      1.393487   0.049624     0.005433
      1.393340   0.049619     0.005433   Best
      1.393220   0.049616     0.005433   Worst
      1.393413   0.049621                Centroid
      1.393607   0.049627     0.005433   First reflected point
      1.393317   0.049619     0.005433   Contracted point

  Iteration 32: Response 0.00543252
      beta                    Response
      1.393487   0.049624     0.005433   Worst
      1.393340   0.049619     0.005433
      1.393317   0.049619     0.005433   Best
      1.393328   0.049619                Centroid
      1.393170   0.049613     0.005433   First reflected point
      1.393408   0.049621     0.005433   Contracted point

  Iterations converged.  R^2 0.999999

  Final Converged Parameters
      k    beta
      0    1.39332
      1    0.0496186

  21. Simplex – Example (Iteration 1) [plot of the simplex showing points W, B, C, R1, R2]

  22. Simplex – Example (Iteration 2) [plot showing points W, B, C, R1, R2]

  23. Simplex – Example (Iteration 3) [plot showing points W, B, C, R1]

  24. Simplex – Example (Iteration 4) [plot showing points W, B, C, R1 and the contracted point]

  25. Simplex – Example (Iteration 32) [plot showing points W, B, C, R1 and the contracted point]

  26. Comparing Models
  • Often there is more than one equation that can be used to represent the data
  • If two equations (models) have the same number of parameters, the one with the smaller Z is the better representation (fit)
  • If two models have different numbers of parameters, a direct comparison cannot be made
    • Need to use the F distribution & a confidence level
  • Model A – the model with fewer parameters; Model B – the model with more parameters

  27. Comparing Models
  • Model B is a better model if (and only if) the calculated F ratio exceeds the critical F value at the chosen confidence level
  • Usually look up F in a table and compare the ratios
  • With Maple, can calculate the confidence level for which B is a better model than A
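The slide's criterion is not reproduced in the transcript; a standard version of this comparison for nested models is the extra-sum-of-squares F test, sketched below in Python with SciPy (the Z values, parameter counts, and number of data points are invented for illustration):

    import scipy.stats as st

    Z_A, p_A = 0.0412, 2      # Model A: sum of squared residuals, number of parameters
    Z_B, p_B = 0.0119, 4      # Model B: more parameters, smaller Z
    n = 25                    # number of data points (hypothetical)

    # Does the drop in Z justify the extra parameters?
    F = ((Z_A - Z_B) / (p_B - p_A)) / (Z_B / (n - p_B))

    # Confidence level at which model B is the better model
    confidence = st.f.cdf(F, p_B - p_A, n - p_B)

    # Compare with the tabulated critical value at, e.g., 95% confidence
    F_crit = st.f.ppf(0.95, p_B - p_A, n - p_B)
    print(F, F_crit, confidence)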
