What you've always wanted to know about logistic regression analysis, but were afraid to ask... February 1, 2010. Gerrit Rooks, Sociology of Innovation, Innovation Sciences & Industrial Engineering. Phone: 5509, email: g.rooks@tue.nl
This Lecture • Why logistic regression analysis? • The logistic regression model • Estimation • Goodness of fit • An example
What's the difference between 'normal' regression and logistic regression? Regression analysis: • Relate one or more independent (predictor) variables to a dependent (outcome) variable
What's the difference between 'normal' regression and logistic regression? • Often you will be confronted with outcome variables that are dichotomous: • success vs. failure • employed vs. unemployed • promoted or not • sick or healthy • pass or fail an exam
Example: Relationship between hours studied for an exam and success
The logistic regression equation: predicting probabilities. The predicted probability is always between 0 and 1: P(Y) = 1 / (1 + e^-(b0 + b1X1 + ... + bnXn)). The linear part, b0 + b1X1 + ... + bnXn, is similar to regression analysis.
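The equation above can be sketched in a few lines of Python. The parameter values b0 = -4 and b1 = 0.5 are made up for illustration; they are not estimates from the lecture's data.

```python
import math

def predicted_probability(b0, b1, x):
    """Logistic regression prediction: P(Y) = 1 / (1 + e^-(b0 + b1*x)).

    The result is always between 0 and 1, whatever b0, b1 and x are.
    """
    return 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))

# Hypothetical parameters: a student who studied 10 hours gets
# a predicted probability of passing of 1 / (1 + e^-1) ~ 0.73
p = predicted_probability(-4.0, 0.5, 10.0)
```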
The logistic function. Sometimes authors rearrange the model into the odds form, P / (1 − P) = e^(b0 + b1X), or also into the logit form, ln(P / (1 − P)) = b0 + b1X.
How do we estimate coefficients? Maximum-likelihood estimation. Parameters are estimated by 'fitting' models, based on the available predictors, to the observed data. The model chosen is the one that fits the data best, i.e. is closest to the data. Fit is determined by the so-called log-likelihood statistic.
Maximum likelihood estimation: the log-likelihood statistic, LL = Σ [Yi ln(P(Yi)) + (1 − Yi) ln(1 − P(Yi))]. Large (negative) values of LL, i.e. large values of −2LL, indicate poor fit of the model. HOWEVER, THIS STATISTIC CANNOT BE USED TO EVALUATE THE FIT OF A SINGLE MODEL.
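The log-likelihood statistic can be computed directly from observed outcomes and predicted probabilities. The outcomes and probabilities below are invented for illustration.

```python
import math

def log_likelihood(outcomes, probs):
    """LL = sum of y*ln(p) + (1-y)*ln(1-p) over all observations.

    LL is always <= 0; the closer to 0, the better the fit.
    """
    return sum(y * math.log(p) + (1 - y) * math.log(1 - p)
               for y, p in zip(outcomes, probs))

outcomes = [1, 0, 1, 1]          # hypothetical observed exam results
probs    = [0.8, 0.3, 0.6, 0.9]  # the model's predicted probabilities
ll = log_likelihood(outcomes, probs)  # a negative number
```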
An example to illustrate maximum likelihood and the log likelihood statistic Suppose we know hours spent studying and the outcome of an exam
In ML, different values for the parameters are 'tried'. Let's look at two possibilities: (1) b0 = 0 & b1 = 0.05; (2) b0 = 0 & b1 = 0.05
Two models and their log-likelihood statistics. Based on a clever algorithm, the model with the best fit (LL closest to 0) is chosen.
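The idea of trying parameter values and keeping the set with the LL closest to 0 can be sketched as follows. The study hours, outcomes and candidate parameter pairs are all hypothetical.

```python
import math

def ll_for(b0, b1, hours, outcomes):
    """Log-likelihood of a candidate (b0, b1) on the observed data."""
    ll = 0.0
    for x, y in zip(hours, outcomes):
        p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
        ll += y * math.log(p) + (1 - y) * math.log(1 - p)
    return ll

hours    = [2, 4, 6, 8, 10]   # hypothetical hours studied
outcomes = [0, 0, 1, 1, 1]    # pass (1) or fail (0)

# Two candidate parameter sets; the one with LL closest to 0 wins.
candidates = [(0.0, 0.05), (-3.0, 0.6)]
best = max(candidates, key=lambda b: ll_for(b[0], b[1], hours, outcomes))
```

A real maximum-likelihood routine searches the whole parameter space with an iterative algorithm rather than comparing a fixed list, but the selection criterion is the same.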
After estimation: how do I determine significance? Obviously SPSS does all the work for you. How to interpret the output of SPSS? Two major issues: (1) Overall model fit: between-model comparisons, pseudo R-square, predictive accuracy / classification table; (2) Coefficients: Wald test, likelihood ratio test, odds ratios.
Model fit: between-model comparison. The log-likelihood ratio test statistic can be used to test the fit of a model: χ² = (−2LL of the reduced model) − (−2LL of the full model). The test statistic has a chi-square distribution.
Between-model comparison • Estimate a null model (the baseline model) • Estimate an improved model (this model contains more variables) • Assess the difference in −2LL between the models • This difference follows a chi-square distribution • degrees of freedom = # estimated parameters in proposed model − # estimated parameters in null model
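The steps above can be sketched numerically. The −2LL values below are invented for illustration; in practice you read them off the SPSS output.

```python
import math

# Hypothetical -2LL values as SPSS would report them
neg2ll_null = 103.64   # null model: constant only
neg2ll_full = 48.13    # improved model: two extra predictors

# The difference in -2LL follows a chi-square distribution
chi_square = neg2ll_null - neg2ll_full
df = 2                 # parameters in full model minus null model

# For df = 2 the chi-square survival function is exactly exp(-x/2);
# for other df you would use a chi-square table or scipy.stats.chi2
p_value = math.exp(-chi_square / 2)
```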
Overall model fit: R and R². R² = (SS due to regression) / (Total SS). R² in multiple regression is a measure of the variance explained by the model.
Overall model fit: pseudo R². Just like in multiple regression, logit R² ranges from 0.0 to 1.0. Cox and Snell: R²_CS = 1 − exp(2(LL_0 − LL_M) / n), where LL_M is the log-likelihood of the model that you want to test and LL_0 is the log-likelihood of the model before any predictors were entered; it cannot theoretically reach 1. Nagelkerke adjusted it so that it can reach 1: R²_N = R²_CS / (1 − exp(2LL_0 / n)). NOTE: R² in logistic regression tends to be (even) smaller than in multiple regression.
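Both pseudo R² measures follow directly from the two log-likelihoods and the sample size. The log-likelihood values and n below are hypothetical.

```python
import math

def pseudo_r2(ll_null, ll_model, n):
    """Cox & Snell and Nagelkerke pseudo R-squares from log-likelihoods.

    ll_null  : LL of the model with only the constant
    ll_model : LL of the model you want to test
    n        : number of observations
    """
    cox_snell = 1 - math.exp(2 * (ll_null - ll_model) / n)
    max_cs    = 1 - math.exp(2 * ll_null / n)  # theoretical ceiling of Cox & Snell
    nagelkerke = cox_snell / max_cs            # rescaled so it can reach 1
    return cox_snell, nagelkerke

# Hypothetical values, not the lecture's data
cs, nk = pseudo_r2(ll_null=-51.82, ll_model=-24.06, n=75)
```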
Overall model fit: classification table. How well does the model predict outcomes? This means that we assume that if our model predicts that a player will score with a probability above .5 (say .51), the prediction is a score; a predicted probability below .50 counts as a miss.
Testing significance of coefficients: the Wald statistic (not really good) • In linear regression analysis, t = estimate / standard error of the estimate (with a t-distribution) is used to test significance • In logistic regression something similar exists: the Wald statistic, Wald = (b / SE_b)² • However, when b is large, the standard error tends to become inflated, hence the Wald statistic is underestimated (Type II errors are more likely)
Likelihood ratio test: an alternative way to test the significance of a coefficient. To avoid Type II errors, for some variables you had best use the likelihood ratio test: χ² = (−2LL of the model without the variable) − (−2LL of the model with the variable).
Before we go to the example: a recap • Logistic regression • dichotomous outcome • logistic function • log-likelihood / maximum likelihood • Model fit • likelihood ratio test (compare LL of models) • Pseudo R-square • Classification table • Wald test
Illustration with SPSS • Penalty kicks data, variables: • Scored: outcome variable, 0 = penalty missed, and 1 = penalty scored • Pswq: degree to which a player worries • Previous: percentage of penalties scored by a particular player in their career
SPSS output: Logistic Regression. Tells you something about the number of observations and missing values.
Block 0: Beginning Block. This table is based on the empty model, i.e. only the constant is in the model; these variables will be entered into the model later on.
Block 1: Method = Enter (the new model). Slide annotations: this is the test statistic; the log-likelihood after dividing by −2; note that Nagelkerke is larger than Cox & Snell; the Block row is useful to check significance of individual coefficients, see Field.
Block 1: Method = Enter (continued). Predictive accuracy has improved (was 53%). Slide annotations: significance based on the Wald statistic; the estimates; the standard errors of the estimates; Exp(B) estimates the change in odds.
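The Exp(B) column is simply e raised to the coefficient: each one-unit increase in the predictor multiplies the odds of scoring by that factor. The coefficient below is a hypothetical value, not the lecture's actual estimate.

```python
import math

b = -0.23                 # hypothetical logistic regression coefficient
odds_ratio = math.exp(b)  # the Exp(B) column; below 1 means the odds
                          # of scoring fall as the predictor rises
```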
How is the classification table constructed? The two off-diagonal cells are the wrong predictions ('oops, wrong prediction').