Generic Approaches to Model Validation
Presented at the Growth Model User’s Group, August 10, 2005
David K. Walters
Phases in a Modeling Project
• Model Identification
• Model Fitting with Data
• Component Model – Equation Forms
Where does validation fit in? • Ideal Case – validation is integrated into the model development process as a feedback mechanism • Reality – • Best Case – done once by the modeler using a subset of the modeling data (or other, related techniques), then left up to the user. Feedback depends on the persistence of the user and the receptiveness of the modeler. Not integrated… • Probable Case – the same one-time check by the modeler using a subset of the modeling data, after which the modeler takes a new job and moves on…
Of what benefit is validation? • Increase Comfort • The user better understands the situations in which the model can be reliably applied and those situations in which it cannot. • Model Improvements – facilitates calibration • To make a model applicable to a new situation • different treatments/regions/situations • different “scale” • Over/under runs – utilization issues • To weight model output with other data for the purpose of decision making (weighting usually requires some estimate of variability)
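The last point above, weighting model output with other data, can be illustrated with a standard inverse-variance combination. This is a generic statistical sketch, not a method from the presentation; the volume figures and variances are invented.

```python
# Sketch: combining a model prediction with an independent estimate
# (e.g. a cruise) by inverse-variance weighting. Numbers are illustrative.

def inverse_variance_weight(est_a, var_a, est_b, var_b):
    """Combine two independent estimates; each weight is 1/variance."""
    w_a, w_b = 1.0 / var_a, 1.0 / var_b
    combined = (w_a * est_a + w_b * est_b) / (w_a + w_b)
    combined_var = 1.0 / (w_a + w_b)  # variance of the combined estimate
    return combined, combined_var

# Model predicts 250 ft^3/ac (variance 400); cruise says 230 (variance 100).
est, var = inverse_variance_weight(250.0, 400.0, 230.0, 100.0)
# The combined estimate leans toward the lower-variance cruise value.
```

This is why the slide notes that weighting usually requires some estimate of variability: without the variances, the weights are undefined.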
Validating the overall appropriateness
• Is the model flexible enough to reproduce desired management alternatives?
• Does it provide sufficient detail for decision-making?
• How efficient is the model in meeting these goals?
Model Type & Resolution: Whole Stand Models · Individual Tree Models · Process Models · Distance Dependent vs. Distance Independent
Everything should be made as simple as possible, but not simpler --Albert Einstein
Validating a Model – Check the data
• Differences in Data Populations – Spatially, Culturally, Temporally
Some research data may be collected with such a high degree of caution that resultant models will tend to overestimate growth and yield.
Validating the Component Models • Model Component Specification • Equation “forms” – reasonable, consistent with established theory and/or user’s expectations. • Statistical, or other, “fitting” of the component equations
Validating the Implementation – the computer software • Software Implementation • Bugs • Adequacy of outputs / interface • Efficiency
A couple of random thoughts – what else might make a model invalid? • Homogeneity – very few models operationally project plots; most project stands. Stands are assumed to be homogeneous with respect to exogenous or predictor variables. But… they are really full of holes.
…misuse • Matching Data Inputs with Model Specifications • Site Index • DBH Thresholds – all versus “merchantable” or other subsets of trees • Others
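The DBH-threshold mismatch above is easy to demonstrate: stand totals differ depending on whether all trees or only "merchantable" trees are included. The sketch below is illustrative; the 5-inch limit, tree list, and the standard basal-area constant 0.005454 (for DBH in inches, BA in square feet) are stated assumptions.

```python
# Sketch: the same tree list gives different stand totals under different
# DBH thresholds, so observed and modeled lists must use the same cutoff.
# The 5.0 in. merchantability limit and the tree list are assumptions.

def stand_basal_area(trees, min_dbh=0.0):
    """Basal area (ft^2/ac) from (dbh_in, trees_per_acre) pairs."""
    # BA per tree = 0.005454 * DBH^2 (ft^2, DBH in inches)
    return sum(0.005454 * dbh ** 2 * tpa
               for dbh, tpa in trees if dbh >= min_dbh)

trees = [(2.0, 200.0), (6.0, 100.0), (12.0, 50.0)]
ba_all = stand_basal_area(trees)         # every tree
ba_merch = stand_basal_area(trees, 5.0)  # "merchantable" trees only
```

Comparing `ba_all` from field data against a model that reports `ba_merch` would look like bias when it is really a definition mismatch.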
Statistics
• So, we get to the point of wishing to conduct a data-based validation of some kind.
• What do we compare? Real Data vs. Predicted Data
• Tree Variables – DBH, Height, Crown, Volume
• Stand Variables – QMD, TPA, BA, Volume
If using Volume when comparing multiple models… make sure the volume equations are identical.
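The observed-versus-predicted comparison above can be sketched as a small routine. This is a minimal illustration, not code from the presentation; the sample numbers are made up, and bias is computed here as observed minus predicted.

```python
import math

# Minimal sketch: comparison statistics for observed vs. predicted values
# (e.g. stand volume per acre). Sample data are illustrative only.

def validation_stats(observed, predicted):
    """Mean bias (obs - pred), mean absolute error, and RMSE."""
    resid = [o - p for o, p in zip(observed, predicted)]
    n = len(resid)
    bias = sum(resid) / n
    mae = sum(abs(r) for r in resid) / n
    rmse = math.sqrt(sum(r * r for r in resid) / n)
    return bias, mae, rmse

obs = [100.0, 120.0, 90.0, 110.0]
pred = [95.0, 130.0, 85.0, 105.0]
bias, mae, rmse = validation_stats(obs, pred)
```

The same statistics apply whether the variables are tree-level (DBH, height) or stand-level (QMD, TPA, BA, volume).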
The overall project – two Approaches • Case 1 – We have repeat measurements (growth data) • Using the observed inputs, run the real data through the model. Look at time 2 (or 3, etc.) predicted versus real. Calculate statistics. • Case 2 – No repeat measurements • Simulation Study • Identify a matrix of input variables (site, density, stocking, etc.) that covers the range of interest. • Run the model for each row of the input matrix.
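The Case 2 input matrix can be built as a factorial grid of conditions, with the model run once per row. A sketch follows; `project_stand` is a made-up placeholder, not the real growth model, and the factor levels are assumptions.

```python
from itertools import product

# Sketch of Case 2 (no repeat measurements): enumerate a factorial matrix of
# input conditions and run the model once per row.

sites = [50, 70, 90]           # site index classes (illustrative)
densities = [100, 300, 500]    # trees per acre at establishment
ages = [20, 40]                # projection length, years

def project_stand(site, tpa, age):
    # Placeholder "model" for illustration only.
    return site * 0.1 * tpa ** 0.5 * age ** 0.9

# One row per combination of the input factors: 3 * 3 * 2 = 18 scenarios.
runs = [(s, d, a, project_stand(s, d, a))
        for s, d, a in product(sites, densities, ages)]
```

Each row of `runs` holds the inputs alongside the prediction, ready for the trend checks described on the next slide.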
Validation – Patterns and Trends • In either case, you will want to look for trends in how the predictions (or residuals, if you have real data) change. Examples: • MAI over time • TPA over time • Results vs. predictor variables (site, treatment, density) • How do long-term predictions compare to “laws”? • Self-thinning, etc. • How does the model prediction compare to other models?
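One simple trend check is to average the residuals within classes of a predictor variable and look for drift. This is a hedged sketch with invented data; in practice the grouping variable would be site, treatment, or density from the validation sample.

```python
from collections import defaultdict

# Sketch: mean residual (observed - predicted) by site class, to see whether
# errors drift with a predictor variable. Records are illustrative only.

records = [  # (site_class, observed, predicted)
    ("low", 80.0, 85.0), ("low", 75.0, 82.0),
    ("high", 140.0, 130.0), ("high", 150.0, 138.0),
]

by_site = defaultdict(list)
for site, obs, pred in records:
    by_site[site].append(obs - pred)

mean_resid = {site: sum(r) / len(r) for site, r in by_site.items()}
# A sign change across classes (negative on low sites, positive on high)
# suggests a site-related bias worth investigating.
```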
In Summary, • Identify the alternative “models”; establish a frame of reference. • Examine the big picture. • Look at the sample used in the model calibration, the presumed population, and a sample of “your” population. • Identify the key component models. Compare predictions with data – bias and accuracy. Examine these for trends against appropriate factors. • Look at the overall model output and the computer code. Are there errors? Evaluate output with data (volume per acre – aggregated variables).
Final Points Remember, there is always an alternative model. When evaluating a model, give careful thought to the alternative. How well a model performs in relation to the alternative is generally the most relevant question. Validity is relative, as are other things.
Questions? All you need in this life is ignorance and confidence -- and then success is sure. --Mark Twain