DESIGN OF EXPERIMENTS by R. C. Baker
How to gain 20 years of experience in one short week!
Role of DOE in Process Improvement • DOE is a formal mathematical method for systematically planning and conducting scientific studies that change experimental variables together in order to determine their effect on a given response. • DOE makes controlled changes to input variables in order to gain the maximum amount of information on cause-and-effect relationships with a minimum sample size.
Role of DOE in Process Improvement • DOE is more efficient than the standard approach of changing “one variable at a time” in order to observe each variable’s impact on a given response. • DOE generates information on the effect various factors have on a response variable and in some cases may be able to determine optimal settings for those factors.
Role of DOE in Process Improvement • DOE encourages “brainstorming” activities associated with discussing key factors that may affect a given response and allows the experimenter to identify the “key” factors for future studies. • DOE is readily supported by numerous statistical software packages available on the market.
BASIC STEPS IN DOE • Four elements associated with DOE: • 1. The design of the experiment, • 2. The collection of the data, • 3. The statistical analysis of the data, and • 4. The conclusions reached and recommendations made as a result of the experiment.
TERMINOLOGY • Replication – repetition of a basic experiment without changing any factor settings. Replication allows the experimenter to estimate the experimental error (noise) in the system, which is used to determine whether observed differences in the data are “real” or “just noise,” and it provides more statistical power (the ability to identify small effects).
TERMINOLOGY • Randomization – a statistical tool used to minimize potential uncontrollable biases in the experiment by randomly assigning material, people, the order in which experimental trials are conducted, or any other factor not under the control of the experimenter. Randomization “averages out” the effects of any extraneous factors that may be present, minimizing the risk of these factors affecting the experimental results.
TERMINOLOGY • Blocking – technique used to increase the precision of an experiment by breaking the experiment into homogeneous segments (blocks) in order to control any potential block to block variability (multiple lots of raw material, several shifts, several machines, several inspectors). Any effects on the experimental results as a result of the blocking factor will be identified and minimized.
TERMINOLOGY • Confounding – a concept that basically means that multiple effects are tied together into one parent effect and cannot be separated. For example, • 1. Two people flipping two different coins would result in the effect of the person and the effect of the coin being confounded. • 2. As experiments get large, higher-order interactions (discussed later) are confounded with lower-order interactions or main effects.
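The second example can be made concrete. The sketch below (Python with NumPy, an assumed tool since the slides name no software) builds the half fraction of a 2^3 design defined by I = ABC and shows that the column for factor A is identical to the column for the BC interaction, so the two effects cannot be separated:

```python
from itertools import product

import numpy as np

# Full 2^3 design: all 8 combinations of low (-1) and high (+1) levels.
full = np.array(list(product((-1, 1), repeat=3)))

# Half fraction with defining relation I = ABC: keep runs where A*B*C = +1.
half = full[full.prod(axis=1) == 1]

A, B, C = half[:, 0], half[:, 1], half[:, 2]
# In this fraction the A column equals the B*C column, so the main effect
# of A is confounded (aliased) with the BC interaction.
print((A == B * C).all())  # True
```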
TERMINOLOGY • Factors – experimental factors or independent variables (continuous or discrete) an investigator manipulates to capture any changes in the output of the process. Other factors of concern are those that are uncontrollable and those which are controllable but held constant during the experimental runs.
TERMINOLOGY • Responses – the dependent variable(s) measured to describe the output of the process. • Treatment Combination (run) – an experimental trial where all factors are set at specified levels.
TERMINOLOGY • Fixed Effects Model - If the treatment levels are specifically chosen by the experimenter, then conclusions reached will only apply to those levels. • Random Effects Model – If the treatment levels are randomly chosen from a population of many possible treatment levels, then conclusions reached can be extended to all treatment levels in the population.
PLANNING A DOE • Everyone involved in the experiment should have a clear idea in advance of exactly what is to be studied, the objectives of the experiment, the questions one hopes to answer and the results anticipated
PLANNING A DOE • Select a response/dependent variable (variables) that will provide information about the problem under study and the proposed measurement method for this response variable, including an understanding of the measurement system variability
PLANNING A DOE • Select the independent variables/factors (quantitative or qualitative) to be investigated in the experiment, the number of levels for each factor, and the levels of each factor chosen either specifically (fixed effects model) or randomly (random effects model).
PLANNING A DOE • Choose an appropriate experimental design (relatively simple design and analysis methods are almost always best) that will allow your experimental questions to be answered once the data are collected and analyzed, keeping in mind the tradeoffs between statistical power and economic efficiency. At this point it is often useful to simulate the study by generating and analyzing artificial data, to ensure that the experimental questions can in fact be answered by the experiment you plan to conduct.
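The suggestion to simulate the study with artificial data can be sketched as follows. The numbers (a 0.1 oz shift, 0.15 oz standard deviation, n = 9 bottles) are illustrative assumptions borrowed from the fill example later in the deck, and SciPy supplies the t-test:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical pilot assumptions: true mean shift of -0.1 oz from the
# 12 oz target, process standard deviation 0.15 oz, n = 9 per trial.
n, true_shift, sigma, alpha = 9, -0.1, 0.15, 0.05

trials = 2000
rejections = 0
for _ in range(trials):
    sample = rng.normal(12 + true_shift, sigma, size=n)
    t_stat, p_value = stats.ttest_1samp(sample, popmean=12)
    if p_value < alpha:
        rejections += 1

power = rejections / trials
print(f"Estimated power with n={n}: {power:.2f}")
```

If the simulated power is too low to answer the experimental question, the design (here, the sample size) should be revised before any real data are collected.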
PLANNING A DOE • Perform the experiment (collect the data), paying particular attention to such things as randomization and measurement system accuracy, while maintaining as uniform an experimental environment as possible. How the data are to be collected is a critical stage in DOE.
PLANNING A DOE • Analyze the data using the appropriate statistical model, ensuring that attention is paid to checking model accuracy by validating the underlying assumptions associated with the model. Be liberal in the use of all tools, including graphical techniques, available in the statistical software package to ensure that the maximum amount of information is generated.
PLANNING A DOE • Based on the results of the analysis, draw conclusions/inferences about the results, interpret the physical meaning of these results, determine the practical significance of the findings, and make recommendations for a course of action including further experiments
SIMPLE COMPARATIVE EXPERIMENTS • Single Mean Hypothesis Test • Difference in Means Hypothesis Test with Equal Variances • Difference in Means Hypothesis Test with Unequal Variances • Difference in Variances Hypothesis Test • Paired Difference in Mean Hypothesis Test • One Way Analysis of Variance
CRITICAL ISSUES ASSOCIATED WITH SIMPLE COMPARATIVE EXPERIMENTS • How Large a Sample Should We Take? • Why Does the Sample Size Matter Anyway? • What Kind of Protection Do We Have Associated with Rejecting “Good” Stuff? • What Kind of Protection Do We Have Associated with Accepting “Bad” Stuff?
Single Mean Hypothesis Test • After a production run of 12 oz. bottles, concern is expressed about the possibility that the average fill is too low. • Ho: μ = 12 • Ha: μ ≠ 12 • level of significance = α = .05 • sample size = 9 • SPEC FOR THE MEAN: 12 ± .1
Single Mean Hypothesis Test • Sample mean = 11.9 • Sample standard deviation = 0.15 • Sample size = 9 • Computed t statistic = -2.0 • P-Value = 0.0805162 • CONCLUSION: Since the P-Value > .05, you fail to reject the null hypothesis and ship the product.
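The t statistic and P-value on this slide can be reproduced from the summary statistics alone. This sketch uses SciPy's t distribution, an assumed tool since the slides do not name their software:

```python
from math import sqrt

from scipy import stats

# Summary statistics from the fill-volume example.
xbar, s, n, mu0 = 11.9, 0.15, 9, 12.0

t_stat = (xbar - mu0) / (s / sqrt(n))            # (11.9 - 12) / (0.15 / 3) = -2.0
p_value = 2 * stats.t.sf(abs(t_stat), df=n - 1)  # two-sided P-value, df = 8

print(t_stat, p_value)  # -2.0, ~0.0805
```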
Single Mean Hypothesis Test Power Curve - Different Sample Sizes
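A power curve like the one referenced here can be computed with the noncentral t distribution. The 0.1 oz shift and 0.15 oz standard deviation below are assumptions chosen to match the earlier fill example:

```python
from math import sqrt

from scipy import stats

def t_test_power(delta, sigma, n, alpha=0.05):
    """Power of a two-sided one-sample t-test via the noncentral t distribution."""
    df = n - 1
    nc = delta / (sigma / sqrt(n))            # noncentrality parameter
    t_crit = stats.t.ppf(1 - alpha / 2, df)   # two-sided critical value
    return stats.nct.sf(t_crit, df, nc) + stats.nct.cdf(-t_crit, df, nc)

# Hypothetical: detect a 0.1 oz shift when sigma = 0.15 oz.
for n in (9, 16, 25, 36):
    print(n, round(t_test_power(0.1, 0.15, n), 3))
```

Power climbs steadily with sample size, which is exactly the message of the power-curve slide.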
DIFFERENCE IN MEANS - EQUAL VARIANCES • Ho: μ1 = μ2 • Ha: μ1 ≠ μ2 • level of significance = α = .05 • sample sizes both = 15 • Assumption: σ1 = σ2 • Sample means = 11.8 and 12.1 • Sample standard deviations = 0.1 and 0.2 • Sample sizes = 15 and 15
DIFFERENCE IN MEANS - EQUAL VARIANCES Can you detect this difference?
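Whether the 0.3 oz difference is detectable can be checked directly from the summary statistics. This sketch uses SciPy's pooled-variance (equal variances) test; the function choice is an assumption, not something the slides specify:

```python
from scipy import stats

# Summary statistics from the equal-variance example above.
res = stats.ttest_ind_from_stats(
    mean1=11.8, std1=0.1, nobs1=15,
    mean2=12.1, std2=0.2, nobs2=15,
    equal_var=True,  # pooled-variance two-sample t-test
)

# |t| is large and the P-value is far below .05, so the 0.3 oz
# difference is detectable at these sample sizes.
print(res.statistic, res.pvalue)
```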
DIFFERENCE IN MEANS - unEQUAL VARIANCES • Same as the “Equal Variance” case except the variances are not assumed equal. • How do you know if it is reasonable to assume that variances are equal OR unequal?
DIFFERENCE IN VARIANCE HYPOTHESIS TEST • Same example as Difference in Means: • Sample standard deviations = 0.1 and 0.2 • Sample sizes = 15 and 15 • Null Hypothesis: ratio of variances = 1.0 • Alternative: not equal • Computed F statistic = 0.25 • P-Value = 0.0140071 • Reject the null hypothesis for alpha = 0.05.
DIFFERENCE IN VARIANCE HYPOTHESIS TEST Can you detect this difference?
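The F test on the slide above can be reproduced from the two sample standard deviations; SciPy's F distribution is an assumed tool here:

```python
from scipy import stats

s1, s2, n1, n2 = 0.1, 0.2, 15, 15

F = (s1 ** 2) / (s2 ** 2)   # ratio of sample variances = 0.25
df1, df2 = n1 - 1, n2 - 1

# Two-sided P-value: double the smaller tail probability of the F distribution.
p_value = 2 * min(stats.f.cdf(F, df1, df2), stats.f.sf(F, df1, df2))
print(F, p_value)  # 0.25, ~0.014
```

This also answers the question on the unequal-variances slide: run the variance-ratio test first, and let its outcome decide whether the equal-variance assumption is reasonable.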
PAIRED DIFFERENCE IN MEANS HYPOTHESIS TEST • Two different inspectors each measure 10 parts on the same piece of test equipment. • Null hypothesis: DIFFERENCE IN MEANS = 0.0 • Alternative: not equal • Computed t statistic = -1.22702 • P-Value = 0.250944 • Do not reject the null hypothesis for alpha = 0.05.
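A paired test of this kind can be sketched as below. The slide does not show the raw measurements, so the two arrays of 10 readings are purely hypothetical:

```python
import numpy as np
from scipy import stats

# Hypothetical readings of the same 10 parts by two inspectors
# (illustrative only; the slide's raw data are not shown).
inspector_1 = np.array([5.1, 4.9, 5.0, 5.2, 4.8, 5.1, 5.0, 4.9, 5.1, 5.0])
inspector_2 = np.array([5.0, 5.0, 5.1, 5.1, 4.9, 5.0, 5.1, 4.8, 5.2, 5.0])

# Paired test: each part is measured by both inspectors, so we test
# whether the mean of the within-part differences is zero.
t_stat, p_value = stats.ttest_rel(inspector_1, inspector_2)
print(t_stat, p_value)
```

Pairing removes part-to-part variation from the comparison, which is why this design is more sensitive than treating the two inspectors' readings as independent samples.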
ONE WAY ANALYSIS OF VARIANCE • Used to test the hypothesis that the means of several populations are equal. • Example: a production line has 7 fill needles and you wish to assess whether or not the average fill is the same for all 7 needles. • Experiment: sample 20 fills from each of the 7 needles and test at the 5% level of significance. • Ho: μ1 = μ2 = μ3 = μ4 = μ5 = μ6 = μ7
SINCE NEEDLE MEANS ARE NOT ALL EQUAL, WHICH ONES ARE DIFFERENT? • Multiple Range Tests for 7 Needles
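The overall one-way ANOVA behind these two slides can be sketched with simulated data. The needle means and standard deviation below are hypothetical, with one needle deliberately shifted so the F test rejects and a multiple range test would then be warranted:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Hypothetical fill data: 20 fills from each of 7 needles.
# Needles 1-6 fill at 12.0 oz on average; needle 7 is shifted high.
needles = [rng.normal(12.0, 0.15, size=20) for _ in range(6)]
needles.append(rng.normal(12.3, 0.15, size=20))

# One-way ANOVA: Ho says all 7 needle means are equal.
f_stat, p_value = stats.f_oneway(*needles)
print(f_stat, p_value)  # small P-value -> reject Ho
```

Rejecting Ho only says the means are not all equal; identifying which needles differ requires a follow-up multiple comparison procedure, as the slide above notes.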
FACTORIAL (2k) DESIGNS • Experiments involving several factors (k = # of factors) where it is necessary to study the joint effect of these factors on a specific response. • Each factor is set at two levels (a “low” level and a “high” level), which may be qualitative (machine A/machine B, fan on/fan off) or quantitative (temperature 800/temperature 900, line speed 4000 per hour/line speed 5000 per hour).
FACTORIAL (2k) DESIGNS • Factors are assumed to be fixed (fixed effects model) • Designs are completely randomized (experimental trials are run in a random order, etc.) • The usual normality assumptions are satisfied.
FACTORIAL (2k) DESIGNS • Particularly useful in the early stages of experimental work when you are likely to have many factors being investigated and you want to minimize the number of treatment combinations (sample size) but, at the same time, study all k factors in a complete factorial arrangement (the experiment collects data at all possible combinations of factor levels).
FACTORIAL (2k) DESIGNS • As k gets large, the number of runs grows exponentially: a full 2k design requires 2k treatment combinations. If the experiment is replicated, the number of runs is multiplied again by the replication factor.
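The exponential growth in runs is easy to tabulate; the replicate count of 2 below is an arbitrary illustration:

```python
from itertools import product

# Number of treatment combinations in a full 2^k factorial,
# and total runs with r replicates of each combination.
r = 2  # hypothetical replicate count
for k in (2, 3, 5, 8):
    runs = list(product((-1, +1), repeat=k))  # -1 = low level, +1 = high level
    print(f"k={k}: {len(runs)} combinations, {len(runs) * r} runs with r={r}")
```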
FACTORIAL (2k) DESIGNS (k = 2) • Two factors set at two levels (normally referred to as low and high) would result in the following design where each level of factor A is paired with each level of factor B.
FACTORIAL (2k) DESIGNS (k = 2) • Estimating main effects associated with changing the level of each factor from low to high. This is the estimated effect on the response variable associated with changing factor A or B from their low to high values.
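The main-effect estimates described above can be computed directly from the four responses of a 2^2 design; the response values below are hypothetical:

```python
import numpy as np

# 2^2 design in standard order: (A, B) = (-,-), (+,-), (-,+), (+,+).
A = np.array([-1, +1, -1, +1])
B = np.array([-1, -1, +1, +1])
y = np.array([20.0, 30.0, 25.0, 35.0])  # hypothetical responses

# Main effect = mean response at the high level minus mean at the low level.
effect_A = y[A == +1].mean() - y[A == -1].mean()
effect_B = y[B == +1].mean() - y[B == -1].mean()
# Interaction: half the difference between A's effect at high B and at low B.
effect_AB = ((y[3] - y[2]) - (y[1] - y[0])) / 2

print(effect_A, effect_B, effect_AB)  # 10.0 5.0 0.0
```

With these hypothetical responses both factors shift the response (effects of 10 and 5) and the interaction is zero, matching the "parallel lines" picture on the graphical-output slides that follow.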
FACTORIAL (2k) DESIGNS (k = 2): GRAPHICAL OUTPUT • Neither factor A nor Factor B have an effect on the response variable.
FACTORIAL (2k) DESIGNS (k = 2): GRAPHICAL OUTPUT • Factor A has an effect on the response variable, but Factor B does not.
FACTORIAL (2k) DESIGNS (k = 2): GRAPHICAL OUTPUT • Factor A and Factor B have an effect on the response variable.
FACTORIAL (2k) DESIGNS (k = 2): GRAPHICAL OUTPUT • Factor B has an effect on the response variable, but only if factor A is set at the “High” level. This is called interaction, and it basically means that the effect one factor has on a response depends on the levels at which the other factors are set. Interactions can be major problems in a DOE if you fail to account for them when designing your experiment.
EXAMPLE:FACTORIAL (2k) DESIGNS (k = 2) • A microbiologist is interested in the effect of two different culture mediums [medium 1 (low) and medium 2 (high)] and two different times [10 hours (low) and 20 hours (high)] on the growth rate of a particular CFU [Bugs].
EXAMPLE: FACTORIAL (2k) DESIGNS (k = 2) • Since two factors are of interest, k = 2, and we would need the following four runs (2 × 2 = 4 treatment combinations).
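The four runs can be enumerated in the standard (Yates) order; the (1), a, b, ab labels are the conventional DOE names for the treatment combinations:

```python
from itertools import product

# Conventional Yates labels for a 2^2 design: a letter appears in the
# label when that factor is at its high level.
labels = {(-1, -1): "(1)", (1, -1): "a", (-1, 1): "b", (1, 1): "ab"}

# Standard order: B varies slowest, A fastest.
runs = sorted(product((-1, 1), repeat=2), key=lambda t: (t[1], t[0]))
for a, b in runs:
    print(f"A={'high' if a > 0 else 'low':4s} B={'high' if b > 0 else 'low':4s} "
          f"label={labels[(a, b)]}")
```

For the microbiologist's study, these four runs correspond to each culture medium paired with each incubation time.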