Design of Experiments Bill Pedersen ME 355
Introduction
• DOE Big Picture
• Concepts and Methodology
• Applications
What is Design of Experiments?
• My definition: efficient experimentation methodology.
• Maximum experimental information for the least expense.
• Allows changing more than one test parameter at a time.
• Statistically separates the sources of variation.
• Performs regression analysis for predictive modeling.
Motivation for Using DOE
• Variation is the enemy of any manufacturing organization.
• The goal of manufacturing is to produce consistently high-quality products.
• DOE allows you to identify and eliminate causes of variation.
• DOE allows you to optimize a process in terms of quality and/or quantity.
Types of Experiments
• OFAT (One Factor At a Time) - the traditional scientific method taught in junior high.
• Factorial Designs
  • 2-level design, with or without center points
  • Plackett-Burman
• Response Surface
  • Central Composite
  • Box-Behnken
  • 3-level factorial
• Others
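The 2-level full factorial mentioned above can be sketched in a few lines of Python. The factor names and coded levels below are invented for illustration, not taken from the slides:

```python
# Sketch: enumerating a 2-level full factorial design with itertools.
# Factor names are hypothetical; -1/+1 are the coded low/high levels.
from itertools import product

factors = {"temperature": (-1, +1),
           "pressure":    (-1, +1),
           "feed_rate":   (-1, +1)}

# Every combination of low/high for each factor: 2^3 = 8 runs.
runs = [dict(zip(factors, combo)) for combo in product(*factors.values())]

for i, run in enumerate(runs, start=1):
    print(i, run)
```

Each dictionary is one experimental run; adding a center-point run would mean appending a row with all factors at 0.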
Design Selection
• Very important to choose the correct design!
• True optimization requires a Response Surface Methodology (RSM) design
  • Includes non-linear factors
• Usually determined by the acceptable number of experiments vs. how much information is required
  • More points, more data
  • More replicates, better data
Design Selection
• Factorial designs - a full factorial allows estimation of all main effects and all interaction effects.
• Replicates are important to minimize measurement error.
• Orthogonality of design
• A factorial design alone makes a linearity assumption - include center points to detect curvature effects.
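The orthogonality property mentioned above can be checked directly: in a coded 2-level factorial, the dot product of any two design columns is zero, which is what lets ANOVA separate the effects cleanly. A minimal sketch, using the standard 2² design with its interaction column:

```python
# Sketch: checking orthogonality of a coded 2^2 factorial design matrix.
A  = [-1, +1, -1, +1]                # factor A, coded levels (run order: standard)
B  = [-1, -1, +1, +1]                # factor B, coded levels
AB = [a * b for a, b in zip(A, B)]   # interaction column

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

# All pairwise dot products are zero for an orthogonal design.
print(dot(A, B), dot(A, AB), dot(B, AB))
```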
Design Selection - ctd
• Plackett-Burman designs - low resolution. Use for screening only, and only when you assume no two-factor interactions are present.
• RSM - Box-Behnken or Central Composite designs are recommended. If testing constraints rule those out, use a D-optimal design.
Analysis of Variance (ANOVA)
• This is the heart of the matter of DOE.
• Separates variation to distinguish its sources.
• Uses that data, together with regression, to build predictive models of the responses.
• Based on how well those models fit the experimental data, you either have something you can use or you don't.
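The variance partitioning behind one-way ANOVA can be sketched in pure Python. The three groups of measurements below are made-up illustration data; the F statistic is the ratio of between-group to within-group mean squares:

```python
# Sketch: one-way ANOVA variance partitioning, with invented data.
groups = [[10.1, 10.3, 9.9], [11.0, 11.2, 10.8], [10.0, 10.2, 9.8]]

n_total = sum(len(g) for g in groups)
grand_mean = sum(sum(g) for g in groups) / n_total

# Between-group sum of squares: variation explained by the factor.
ss_between = sum(len(g) * (sum(g)/len(g) - grand_mean) ** 2 for g in groups)
# Within-group sum of squares: residual (error) variation.
ss_within = sum((x - sum(g)/len(g)) ** 2 for g in groups for x in g)

df_between = len(groups) - 1
df_within = n_total - len(groups)
f_stat = (ss_between / df_between) / (ss_within / df_within)
print(round(f_stat, 2))
```

A large F means the factor explains much more variation than measurement noise does; you would compare it against a tabulated F value at your chosen confidence level.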
Variance
• 5Ms - the general sources of variation present in all processes:
  • Machines
  • Method
  • Man
  • Materials (usually incorporates environment)
  • Measurement
• In DOE, you deliberately cause variation by changing factors that affect one or more of the 5Ms.
• Those effects are then statistically separated to determine which variation was caused by which source.
Correlation (and Regression Analysis)
• Make a scatter diagram plotting one variable vs. another.
• Correlation indicates a relationship between the two variables.
• VERY IMPORTANT: correlation does not necessarily indicate CAUSALITY.
• Regression determines a functional relationship between the variables.
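The strength of the relationship in that scatter diagram is summarized by the Pearson correlation coefficient, which can be computed by hand. The paired measurements below are hypothetical:

```python
# Sketch: Pearson correlation coefficient r, computed from scratch.
# x and y are invented paired measurements; r near +/-1 means a strong
# linear relationship (which still says nothing about causality).
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.1, 3.9, 6.2, 8.1, 9.8]

n = len(x)
mean_x, mean_y = sum(x) / n, sum(y) / n
cov  = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
sd_x = sum((a - mean_x) ** 2 for a in x) ** 0.5
sd_y = sum((b - mean_y) ** 2 for b in y) ** 0.5
r = cov / (sd_x * sd_y)
print(round(r, 4))
```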
Regression
• Regression establishes the functional relationship between an independent and a dependent variable.
• Linear regression: y = ax + b
• Quadratic regression: y = a + bx + cx²
• Higher orders are also possible.
• Error = Σ(y - ax - b)²
• Find the coefficient values that minimize the error.
• To optimize, take the partial derivative with respect to a, set it equal to zero, and likewise for b.
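Carrying out that minimization - setting the partial derivatives of Σ(y - ax - b)² with respect to a and b to zero - gives the closed-form normal equations, sketched below on invented data points:

```python
# Sketch: least-squares linear fit y = a*x + b via the normal equations,
# which come from setting d(Error)/da = 0 and d(Error)/db = 0.
x = [0.0, 1.0, 2.0, 3.0, 4.0]
y = [1.1, 2.9, 5.1, 7.0, 8.9]

n = len(x)
sx, sy = sum(x), sum(y)
sxx = sum(v * v for v in x)
sxy = sum(a * b for a, b in zip(x, y))

a = (n * sxy - sx * sy) / (n * sxx - sx * sx)  # slope
b = (sy - a * sx) / n                          # intercept
print(round(a, 3), round(b, 3))
```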
T and F tests
• T test: tests whether two estimated population means are statistically different at some confidence level.
• F test: tests whether two estimated population variances are statistically different at some confidence level.
• So, what is the application for these tools?
T test
• Based on hypothesis testing - statistical machinery that I am glad someone else develops.
• Compute the DOF from the variances and sample sizes.
• Using the confidence limit and DOF, look up the t value in the tables.
• Compare the calculated t with the t from the table.
• If t0 > t, then the estimated means are different at that confidence level.
• DOF is degrees of freedom - the number of independent pieces of information available for calculating a statistical measure. As DOF increases, so does your confidence that your estimate is correct.
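The steps above can be sketched for the equal-variance (pooled) form of the two-sample t test. The two samples are invented; the resulting t0 would be compared against a table value at your confidence level:

```python
# Sketch: two-sample t statistic with pooled variance, on invented data.
a = [10.2, 10.4, 9.8, 10.0, 10.1]
b = [10.8, 11.0, 10.6, 10.9, 10.7]

def mean(v):
    return sum(v) / len(v)

def var(v):  # sample variance: n-1 in the denominator
    m = mean(v)
    return sum((x - m) ** 2 for x in v) / (len(v) - 1)

n1, n2 = len(a), len(b)
sp2 = ((n1 - 1) * var(a) + (n2 - 1) * var(b)) / (n1 + n2 - 2)  # pooled variance
t0 = (mean(a) - mean(b)) / (sp2 * (1 / n1 + 1 / n2)) ** 0.5
dof = n1 + n2 - 2
print(round(t0, 2), dof)
```

If |t0| exceeds the tabulated t for this DOF and confidence level, the two means are declared different.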
F test
• Similar to the t test in concept.
• Calculate an F value from the sample variances.
• Look up the statistical F value, based on the confidence level and DOF (determined by the sample sizes).
• Compare the two.
• If Fcalculated > Ftable, then the variances are statistically different at that confidence level.
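The F calculation itself is just a ratio of sample variances. A minimal sketch on invented data, using the common convention of putting the larger variance on top so F ≥ 1:

```python
# Sketch: F statistic for comparing two sample variances, invented data.
a = [4.9, 5.1, 5.0, 5.2, 4.8]   # low-scatter sample
b = [4.5, 5.6, 5.0, 5.4, 4.4]   # high-scatter sample

def sample_var(v):
    m = sum(v) / len(v)
    return sum((x - m) ** 2 for x in v) / (len(v) - 1)

va, vb = sample_var(a), sample_var(b)
f_calc = max(va, vb) / min(va, vb)  # larger variance on top, so F >= 1
print(round(f_calc, 2))
```

f_calc would then be compared against the tabulated F for (n1-1, n2-1) degrees of freedom at the chosen confidence level.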
Degrees of Freedom
• Simple definition: the number of observations made in excess of the minimum theoretically necessary to estimate a statistical parameter.
• This is used in nearly all statistical calculations. Examples:
  • Population variance: σ² = Σ(xi - μ)²/n
  • Sample variance: s² = Σ(xi - x̄)²/(n - 1)
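The difference between the two denominators can be seen directly: one degree of freedom is "spent" when the mean is estimated from the same data. A sketch with invented data:

```python
# Sketch: population vs. sample variance on the same invented data,
# illustrating the n vs. n-1 denominators from the formulas above.
data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]

n = len(data)
mean = sum(data) / n
ss = sum((x - mean) ** 2 for x in data)   # sum of squared deviations

pop_var  = ss / n         # divide by n when the true mean is known
samp_var = ss / (n - 1)   # divide by n-1 when the mean is estimated
print(pop_var, samp_var)
```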
Residual Error
• The difference between the observed value and the model's prediction.
• The goal of any model is to minimize the residual error.
• That simply means that our actual points agree more closely with our predictions of those points.
Experimental Design Guidelines
• Define the opportunity - what do you want out of the experiment?
• Choose the factors and levels (input data).
• Select the responses (output data).
• Choose the design.
• Perform the experiment.
• Analyze the data - ANOVA, regression; graphical analysis is very important.
• Conclusions, recommendations, and ACTION.
Normal Distribution
• [Figure: normal curve showing the coverage of ±1σ, ±2σ, and ±3σ: 68.27%, 95.45%, and 99.73% respectively.]
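Those three coverage figures can be recovered from the normal CDF using the standard library's error function:

```python
# Sketch: 1-, 2-, and 3-sigma coverage of a normal distribution,
# computed from the relation P(|X| < k*sigma) = erf(k / sqrt(2)).
import math

def coverage(k):
    return math.erf(k / math.sqrt(2))

for k in (1, 2, 3):
    print(k, f"{100 * coverage(k):.2f}%")
```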
Summary
• Traditional experimentation methods (OFAT) are incomplete and inefficient.
• DOE allows you to change multiple parameters at the same time.
  • More efficient
  • More information
• True optimization is possible.
• Remember - it is all about money.