Partial Least Squares: A Standard Tool for Multivariate Regression
Regression: Modeling dependent variable(s) Y
• Chemical property
• Biological activity
by predictor variables X
• Chemical composition
• Chemical structure (coded)
MLR, the traditional method, works if the X-variables are:
• Few (# X-variables < # samples)
• Uncorrelated (full-rank X)
• Essentially noise-free (when some correlation exists)
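For concreteness, a minimal sketch (not from the slides) of MLR under these conditions: ordinary least squares on synthetic, well-conditioned data.

```python
import numpy as np

# MLR works here: 20 samples, only 3 uncorrelated predictors, little noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))
Y = X @ np.array([1.5, -2.0, 0.5]) + 0.1 * rng.normal(size=20)

# B = (X'X)^-1 X'Y, computed stably via least squares
B, *_ = np.linalg.lstsq(X, Y, rcond=None)
print(B)  # close to the true coefficients when X is full rank
```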
But modern instruments (spectrometers, chromatographs, sensor arrays, …) produce data that are:
• Numerous
• Correlated
• Noisy
• Incomplete
X: the predictor (independent) variables, typically correlated.
PLSR models:
1. The relation between the two matrices X and Y, by a linear multivariate regression
2. The structure of X and Y
This gives richer results than traditional multivariate regression.
PLSR is a generalization of MLR. PLSR is able to analyze data with:
• Noise
• Collinearity (highly correlated data)
• Numerous X-variables (> # samples)
• Incompleteness in both X and Y
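A hypothetical illustration of that generalization: PLSR still fits when X has more (and highly correlated) variables than samples, a case where the MLR normal equations are singular. The sizes and seed below are made up for the demo.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(1)
t = rng.normal(size=(15, 2))              # 2 hidden latent factors, 15 samples
X = t @ rng.normal(size=(2, 50))          # 50 strongly collinear X-variables
X += 0.05 * rng.normal(size=X.shape)      # a little noise
y = t @ np.array([2.0, -1.0]) + 0.1 * rng.normal(size=15)

pls = PLSRegression(n_components=2)
pls.fit(X, y)
print(pls.score(X, y))  # high R^2 despite # X-variables >> # samples
```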
History
Herman Wold (1975): modeling of chain matrices by NIPALS (Nonlinear Iterative Partial Least Squares): regression between a variable matrix and a parameter vector, with the other parameter vector kept fixed.
Svante Wold & H. Martens (1980): completion and modification of the two-block (X, Y) PLS, the simplest case. Herman Wold (~2000): "Projection to Latent Structures" as a more descriptive interpretation of the acronym.
A QSPR example
One Y-variable, a chemical property: the free energy of unfolding of a protein.
Seven X-variables: a quantitative description of the variation in chemical structure across 19 different amino acids in position 49 of the protein. Highly correlated.
Table 1. The 19 amino acids: X = seven structure descriptors, Y = DDGTS.

AA    PIE    PIF    DGR    SAC    MR     Lam    Vol   | DDGTS
 1    0.23   0.31  -0.55  254.2  2.126  -0.02   82.2  |  8.5
 2   -0.48  -0.60   0.51  303.6  2.994  -1.24  112.3  |  8.2
 3   -0.61  -0.77   1.20  287.9  2.994  -1.08  103.7  |  8.5
 4    0.45   1.54  -1.40  282.9  2.933  -0.11   99.1  | 11.0
 5   -0.11  -0.22   0.29  335.0  3.458  -1.19  127.5  |  6.3
 6   -0.51  -0.64   0.76  311.6  3.243  -1.43  120.5  |  8.8
 7    0.00   0.00   0.00  224.9  1.662   0.03   65.0  |  7.1
 8    0.15   0.13  -0.25  337.2  3.856  -1.06  140.6  | 10.1
 9    1.20   1.80  -2.10  322.6  3.350   0.04  131.7  | 16.8
10    1.28   1.70  -2.00  324.0  3.518   0.12  131.5  | 15.0
11   -0.77  -0.99   0.78  336.6  2.933  -2.26  144.3  |  7.9
12    0.90   1.23  -1.60  336.3  3.860  -0.33  132.3  | 13.3
13    1.56   1.79  -2.60  366.1  4.638  -0.05  155.8  | 11.2
14    0.38   0.49  -1.50  288.5  2.876  -0.31  106.7  |  8.2
15    0.00  -0.04   0.09  266.7  2.279  -0.40   88.5  |  7.4
16    0.17   0.26  -0.58  283.9  2.743  -0.53  105.3  |  8.8
17    1.85   2.25  -2.70  401.8  5.755  -0.31  185.9  |  9.9
18    0.89   0.96  -1.70  377.8  4.791  -0.84  162.7  |  8.8
19    0.71   1.22  -1.60  295.1  3.054  -0.13  115.6  | 12.0
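The Table 1 data can be fed straight into a standard PLS implementation. A minimal sketch with scikit-learn; the choice of two components is an assumption for illustration, not taken from the slides.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

# Table 1: rows = 19 amino acids; columns = PIE, PIF, DGR, SAC, MR, Lam, Vol.
X = np.array([
    [ 0.23,  0.31, -0.55, 254.2, 2.126, -0.02,  82.2],
    [-0.48, -0.60,  0.51, 303.6, 2.994, -1.24, 112.3],
    [-0.61, -0.77,  1.20, 287.9, 2.994, -1.08, 103.7],
    [ 0.45,  1.54, -1.40, 282.9, 2.933, -0.11,  99.1],
    [-0.11, -0.22,  0.29, 335.0, 3.458, -1.19, 127.5],
    [-0.51, -0.64,  0.76, 311.6, 3.243, -1.43, 120.5],
    [ 0.00,  0.00,  0.00, 224.9, 1.662,  0.03,  65.0],
    [ 0.15,  0.13, -0.25, 337.2, 3.856, -1.06, 140.6],
    [ 1.20,  1.80, -2.10, 322.6, 3.350,  0.04, 131.7],
    [ 1.28,  1.70, -2.00, 324.0, 3.518,  0.12, 131.5],
    [-0.77, -0.99,  0.78, 336.6, 2.933, -2.26, 144.3],
    [ 0.90,  1.23, -1.60, 336.3, 3.860, -0.33, 132.3],
    [ 1.56,  1.79, -2.60, 366.1, 4.638, -0.05, 155.8],
    [ 0.38,  0.49, -1.50, 288.5, 2.876, -0.31, 106.7],
    [ 0.00, -0.04,  0.09, 266.7, 2.279, -0.40,  88.5],
    [ 0.17,  0.26, -0.58, 283.9, 2.743, -0.53, 105.3],
    [ 1.85,  2.25, -2.70, 401.8, 5.755, -0.31, 185.9],
    [ 0.89,  0.96, -1.70, 377.8, 4.791, -0.84, 162.7],
    [ 0.71,  1.22, -1.60, 295.1, 3.054, -0.13, 115.6],
])
y = np.array([8.5, 8.2, 8.5, 11.0, 6.3, 8.8, 7.1, 10.1, 16.8, 15.0,
              7.9, 13.3, 11.2, 8.2, 7.4, 8.8, 9.9, 8.8, 12.0])  # DDGTS

# scale=True auto-scales X and y internally (see the scaling slides below).
pls = PLSRegression(n_components=2, scale=True)
pls.fit(X, y)
print("R^2 on the training data:", pls.score(X, y))
```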
Transformation toward a symmetrical distribution, e.g. by taking logarithms:
12.5 → 1.097, 4235 → 3.627, 0.2 → -0.699, 546 → 2.737, 100584 → 5.002 (log10)
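A quick check of the slide's numbers, assuming the transform is a base-10 logarithm:

```python
import numpy as np

raw = np.array([12.5, 4235, 0.2, 546, 100584])
print(np.log10(raw))  # -> [ 1.097  3.627 -0.699  2.737  5.002]
```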
Scaling
• With knowledge about the importance of the variables: increase the weights of the more informative X-variables.
• With no such knowledge: auto-scaling, giving the same weight to all X-variables:
  - Centering (xi - xaver)
  - Scaling to unit variance (xi / SD)
Auto-scaling is also numerically more stable.
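A minimal auto-scaling sketch in NumPy; `autoscale` is a hypothetical helper written for this note, not a library function.

```python
import numpy as np

def autoscale(X):
    """Center each column (xi - xaver) and scale to unit variance (xi / SD)."""
    return (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)

# After auto-scaling, every X-variable has mean ~0 and SD 1,
# i.e. the same weight in the model.
X = np.array([[254.2, 2.126], [303.6, 2.994], [287.9, 2.994], [282.9, 2.933]])
Xs = autoscale(X)
print(Xs.mean(axis=0).round(12), Xs.std(axis=0, ddof=1))
```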
The PLSR model
A few "new" variables, the X-scores ta (a = 1, 2, …, A), which are:
• Modelers of X
• Predictors of Y
• Orthogonal
• Linear combinations of the X-variables, with weights W* (usually linear): T = XW*
The scores ta (a = 1, 2, …, A), collected in T, are:
• Modelers of X: X = TP' + E (P: loadings)
• Predictors of Y: Y = TC' + F
Combining the two: Y = XW*C' + F, so the PLS regression coefficients are B = W*C'.
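These quantities can be pulled out of a fitted scikit-learn model. Mapping the slide's symbols onto attributes as T = x_scores_, P = x_loadings_, C = y_loadings_, W* = x_rotations_ is our reading of the sklearn docs; the data below are synthetic.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(19, 7))
y = rng.normal(size=(19, 1))

pls = PLSRegression(n_components=3, scale=False).fit(X, y)
T, P, C, Wstar = pls.x_scores_, pls.x_loadings_, pls.y_loadings_, pls.x_rotations_

Xc, yc = X - X.mean(axis=0), y - y.mean(axis=0)  # the model works on centered data
E = Xc - T @ P.T          # X = TP' + E
F = yc - T @ C.T          # Y = TC' + F
B = Wstar @ C.T           # Y = X W* C' + F  =>  B = W* C'
print(np.allclose(Xc @ B + y.mean(axis=0), pls.predict(X)))  # expected: True
```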
Estimation of T: by stepwise subtraction of each component ta pa' from X:
X = TP' + E, i.e. X - TP' = E
Ea is the residual after subtraction of the a-th component: Ea = Ea-1 - ta pa'.
X = X1 + X2 + X3 + X4 + … + XA
X = t1p1' + t2p2' + t3p3' + t4p4' + … + tApA'
with intermediate residuals E1, E2, E3, …, EA-1 after each subtraction.
Stepwise "deflation" of the X-matrix:
t1 = X w1        E1 = X - t1p1'
t2 = E1 w2       E2 = E1 - t2p2'
t3 = E2 w3       E3 = E2 - t3p3'
…
ta = Ea-1 wa     Ea-1 = Ea-2 - ta-1p'a-1
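A minimal PLS1 (single-y) NIPALS sketch following this deflation scheme. The weight rule w_a ∝ E'y is the standard PLS1 choice; the slides only show ta = Ea-1 wa and the deflation step, so treat the rest as an assumption.

```python
import numpy as np

def pls1_nipals(X, y, n_components):
    """PLS1 via stepwise deflation. Assumes X (n x k) and y (n,) are centered."""
    E, f = X.copy(), y.copy()
    T, P, W, c = [], [], [], []
    for _ in range(n_components):
        w = E.T @ f
        w /= np.linalg.norm(w)        # w_a: X-weights (standard PLS1 choice)
        t = E @ w                      # t_a = E_{a-1} w_a
        tt = t @ t
        p = E.T @ t / tt               # p_a: X-loadings
        c_a = f @ t / tt               # c_a: Y-loading
        E = E - np.outer(t, p)         # E_a = E_{a-1} - t_a p_a'
        f = f - c_a * t                # deflate y as well
        T.append(t); P.append(p); W.append(w); c.append(c_a)
    return np.array(T).T, np.array(P).T, np.array(W).T, np.array(c)

# Usage: after A components, X is reconstructed as the sum of t_a p_a'
# plus the final residual E, exactly as in the decomposition above.
```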