Lean Six Sigma • DMAIC Process: Common Mistakes and Misconceptions During Data Collection and Analysis • Hans Vanhaute • 04/08/2014
Goal of tonight’s presentation • Give you a few examples of common mistakes made during the “Measure” phase of DMAIC projects. • Draw more widely applicable lessons and conclusions that may benefit you (so you don’t make the same mistakes). • Hopefully provide you with some interesting insights (and not put you to sleep).
DMAIC and “Projects” • “A problem scheduled for a solution.” • Management decides the problem is important enough to provide the resources it needs to get the problem solved.
“DMAIC Projects” • A Six Sigma DMAIC project eliminates a chronic problem that causes customer dissatisfaction, defects, costs of poor quality, or other deficiencies in performance. • Phases: DEFINE, MEASURE, ANALYZE, IMPROVE, CONTROL. • Very data-intensive.
The DMAIC steps • M – Measure • Define a high-level process map. • Define the measurement plan. • Test the measurement system (“Gage Study”). • Collect the data to objectively establish the current baseline. • Typical tools: capability analysis, Gage R&R studies.
Initial Analysis • The process is a black box: Y = f(unknown Xs). • Capability Analysis Conundrums: • Instability of the process over time • Cpk values that inaccurately predict process performance • Non-normal data
Capability Analysis Conundrums • Case 1: Inherent non-normality of the process output. Example: some physical, chemical, or transactional processes will produce outcomes that “lean” one way: • Time measurements • Values close to zero but always positive (surface roughness RMS, …) • … • Process experts or careful analysis of the metric should be able to help with understanding. Good news: capability analysis of non-normal data is possible (sketched below). Bad news: this situation doesn’t happen very often.
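To make Case 1 concrete, here is a minimal Python sketch of the percentile method for non-normal capability, where the usual ±3σ limits are replaced by the 0.135th and 99.865th percentiles of a distribution fitted to the data. The data, the lognormal choice, and the spec limits are all hypothetical, not from the deck.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
data = rng.lognormal(mean=0.5, sigma=0.4, size=500)  # skewed, always-positive metric
usl, lsl = 5.0, 0.0                                  # hypothetical spec limits

# Percentile method: fit a distribution, then use its 0.135th / 99.865th
# percentiles in place of mean +/- 3 sigma.
shape, loc, scale = stats.lognorm.fit(data, floc=0)
dist = stats.lognorm(shape, loc, scale)
median = dist.ppf(0.5)
p_lo, p_hi = dist.ppf(0.00135), dist.ppf(0.99865)
ppk = min((usl - median) / (p_hi - median), (median - lsl) / (median - p_lo))
print(f"Percentile-method Ppk ~ {ppk:.2f}")

# Naive normal-theory Cpk on the same skewed data, for contrast
mu, s = data.mean(), data.std(ddof=1)
print(f"Normal-theory Cpk ~ {min(usl - mu, mu - lsl) / (3 * s):.2f}")
```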
Capability Analysis Conundrums • Case 2: Problematic measurement systems • (we’ll come back to that one when we discuss GR&R…)
Capability Analysis Conundrums • Case 3: Failure to stratify the data. This is the big one! Stratification is the separation of data into categories. It means to “break up” the data to see what it tells you. Its most frequent use is in diagnosing a problem and identifying which categories contribute to the problem being solved.
Capability Analysis Conundrums [Diagram: four process streams, each with its own capability (Cpk1, Cpk2, Cpk3, Cpk4), combining into one output. Overall Cpk = ???]
Capability Analysis Conundrums What is a Cpk value supposed to tell us? The expected future performance of the process(es), assuming statistical stability over time. For a multi-stream process: one Cpk value? Two Cpk values?
Capability Analysis Conundrums Without stratification: • Over-estimating the variation of the process. (Why?) • Under-estimating process capability. • Leading to all sorts of non-value-added activity for your organization. With stratification: • Recognize that two of the four streams are the main drivers of overall capability. • Correct estimation of the two most important process capabilities. • Points to appropriate improvement activities.
Capability Analysis Conundrums Example: [Figure: capability analysis of the same data, not stratified versus stratified, with actual-data and prediction panels. Not stratified: Cpk = 1.15 for both actual data and prediction. Stratified: per-stream Cpk values of 0.67 and 1.50, with a correct prediction of Cpk = 1.50.] A numeric sketch of the same effect follows below.
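A minimal numeric sketch of the stratification effect: four hypothetical streams with shifted means, where the pooled Cpk looks far worse than the per-stream values. The numbers are illustrative, not the deck’s.

```python
import numpy as np

rng = np.random.default_rng(7)
usl, lsl = 13.0, 7.0  # hypothetical spec limits

# Four hypothetical streams (cavities, machines, shifts, ...) with shifted means
streams = {f"stream{i}": rng.normal(loc=m, scale=0.5, size=200)
           for i, m in enumerate([9.4, 9.8, 10.2, 10.6], start=1)}

def cpk(x, lsl, usl):
    mu, s = x.mean(), x.std(ddof=1)
    return min(usl - mu, mu - lsl) / (3 * s)

pooled = np.concatenate(list(streams.values()))
print(f"pooled  Cpk = {cpk(pooled, lsl, usl):.2f}")  # mixing streams inflates sigma
for name, x in streams.items():
    print(f"{name} Cpk = {cpk(x, lsl, usl):.2f}")    # per-stream capability
```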
Capability Analysis Conundrums • Problematic measurement systems: • 2a: Limiting factors to “how well” you can measure something. • 2b: I passed my GR&R but I’m still getting “weird” results. • 2c: Time effects.
Case 2a: Limits to measurements Game: Identify the dataset with the highest resolution. Resolution: a: The process or capability of making distinguishable the individual parts of an object, closely adjacent optical images, or sources of light b: A measure of the sharpness of an image or of the fineness with which a device can produce or record such an image.
Case 2a: Limits to measurements Which dataset has the highest resolution? Measurement resolution: a: the process or capability of making distinguishable the individual parts of a dataset or closely adjacent data points. b: a measure of the sharpness of a set of data or of the fineness with which a measurement device can produce or record such a dataset.
Case 2a: Limits to measurements Limiting factors to “how well” you can measure something. The same four measurements, recorded at progressively lower resolution:
Dataset 1: 9.9397, 10.5944, 11.4290, 9.0401
Dataset 2: 9.9, 10.5, 11.4, 9.0
Dataset 3: 9.6, 10.2, 11.4, 9.0
Dataset 4: 10, 11, 11, 9
Case 2a: Limits to measurements Limiting factors to “how well” you can measure something. Sample standard deviations of the four datasets, from highest to lowest resolution:
Dataset 1: S = 1.000
Dataset 2: S = 1.005 (0.5% over)
Dataset 3: S = 1.040 (4% over)
Dataset 4: S = 1.150 (15% over)
Lower resolution inflates the estimated variation; a simulated illustration follows below.
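A quick way to convince yourself of the resolution effect: round simulated data to coarser and coarser steps and watch the estimated standard deviation grow. The numbers here are simulated and will not exactly reproduce the slide values, which came from a small sample.

```python
import numpy as np

rng = np.random.default_rng(42)
raw = rng.normal(loc=10.0, scale=1.0, size=10_000)  # "true" process, sigma = 1

for step in (0.0001, 0.1, 0.6, 1.0):  # progressively coarser gauge resolution
    rounded = np.round(raw / step) * step
    s = rounded.std(ddof=1)
    print(f"resolution {step:>6}: S = {s:.3f} ({(s - 1) * 100:+.1f}% vs true sigma)")
```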
Case 2a: Limits to measurements Limiting factors to “how well” you can measure something: why? • “Always done it that way, never given it any thought.” • Focus on “meeting specs,” not on controlling the process. • “Always” round to x decimal places. • “Nobody told me how many decimals were needed…” • The old “1 in 10” rule of thumb seems to make sense: • Resolution must be at least 1/10th of the data range. • Resolution must be at least 1/10th of the spec range.
Case 2b: “Weird” Stuff I passed my GR&R but I’m still getting “weird” results. GR&R 101 “metrics”: the P/TV ratio expresses the total measurement variability as a percentage of the total historical process variation. Here P/TV ~ 14%. [Figure: distribution of measurements with the much narrower distribution of measurement variability overlaid.]
Case 2b: “Weird” Stuff I passed my GR&R but I’m still getting “weird” results. GR&R 101 “metrics”: the P/T ratio expresses the total measurement variability as a percentage of the tolerance width of the process. Here P/T ~ 12.5%. [Figure: spec limits with the distribution of measurement error overlaid.] A sketch of both ratios follows below.
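A minimal sketch of both ratios, assuming you already have a measurement-system sigma from the gage study. The names and values are hypothetical (picked to land near the slides’ ~12.5% and ~14%), and the 6-sigma width used for P/T is one common convention (some shops use 5.15).

```python
# Hypothetical inputs to the two GR&R ratios from the slides
sigma_ms = 0.025      # measurement-system standard deviation (from the GR&R)
sigma_total = 0.18    # total historical process standard deviation
usl, lsl = 10.6, 9.4  # spec limits

p_t = 6 * sigma_ms / (usl - lsl)  # precision-to-tolerance
p_tv = sigma_ms / sigma_total     # precision-to-total-variation
print(f"P/T  = {p_t:.1%}")        # ~12.5%
print(f"P/TV = {p_tv:.1%}")       # ~13.9%
```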
Case 2b: “Weird” Stuff I passed my GR&R but I’m still getting “weird” results. GR&R 101: “Metrics” Simple, right? Not so fast…
Case 2b: “Weird” Stuff I passed my GR&R but I’m still getting “weird” results. • R chart by operator: • Points inside control limits indicate that operator is consistent between repeat measurements made on same sample (GOOD) • Points outside control limits indicate that operator is not consistent between repeat measurements made on same sample (BAD)
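As a sketch of the R-chart check just described: compute each operator’s range across repeat measurements of the same part and compare against the control limit UCL = D4 x R-bar (D4 ≈ 3.267 for two repeats, with LCL = 0). The measurement data here are hypothetical.

```python
import numpy as np

# Repeat measurements: rows = parts, columns = 2 repeats per part (hypothetical)
measurements = {
    "op_A": np.array([[9.9, 10.0], [10.2, 10.1], [9.8, 9.8], [10.4, 10.3]]),
    "op_B": np.array([[9.9, 10.4], [10.2, 9.7], [9.8, 11.5], [10.4, 9.9]]),
}

D4 = 3.267  # control-chart constant for subgroups of n = 2
all_ranges = np.concatenate([np.ptp(m, axis=1) for m in measurements.values()])
ucl = D4 * all_ranges.mean()  # upper control limit on the R chart

for op, m in measurements.items():
    ranges = np.ptp(m, axis=1)
    print(op, "ranges:", ranges, "| parts out of control:", np.flatnonzero(ranges > ucl))
```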
Case 2b: “Weird” Stuff I passed my GR&R but I’m still getting “weird” results. Example: overall study results of P/T = 22% and P/TV = 16%.
Case 2b: “Weird” Stuff I passed my GR&R but I’m still getting “weird” results. Example: P/T = 70% and P/TV = 40% in one case, versus P/T = 10% and P/TV = 6% in the other.
Case 2b: “Weird” Stuff I passed my GR&R but I’m still getting “weird” results. So… what caused this? Example: [Diagram of the measurement setup: camera, lens, ring light, pin tip position.]
Case 2c: Time Effects The speed of information is finite. Information can come from different distances.
Case 2c: Time Effects The speed of information is finite. Information can come from different distances. • Moon: 1.2 light-seconds away • Sun: 8 light-minutes away • Mars: 12.5 light-minutes away • Pluto: 5.5 light-hours away • Proxima Centauri: 4.2 light-years away
Case 2c: Time Effects The speed of information is finite. Information can come from different distances. Just because you “observe” (measure / see) several events “at the same time” doesn’t mean they all occur(red) at the same time.
Case 2c: Time Effects Example: [Figure: the same events arranged by order of observation versus by order of occurrence.]
Case 2c: Time Effects What can you do? • Collect the data as close as possible to the origin of the event you are observing. • Maintain “traceability” of the events you are observing. • “De-convolve” the data: in mathematics, de-convolution is an algorithm-based process used to reverse the effects of convolution on recorded data. A simple ordering sketch follows below.
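Full de-convolution is beyond a slide, but the simplest version of the idea can be sketched: if each data source has a known transport or processing delay, subtracting it from the observation timestamp recovers the order of occurrence. The sources, delays, and timestamps below are hypothetical.

```python
# Hypothetical events observed at one central point; each source has a known delay
events = [
    {"source": "line_sensor", "observed_t": 10.0, "delay": 0.2},
    {"source": "lab_test",    "observed_t": 11.5, "delay": 4.0},
    {"source": "erp_batch",   "observed_t": 10.8, "delay": 2.5},
]

for e in events:
    e["occurred_t"] = e["observed_t"] - e["delay"]  # back out the travel time

print([e["source"] for e in sorted(events, key=lambda e: e["observed_t"])])  # order seen
print([e["source"] for e in sorted(events, key=lambda e: e["occurred_t"])])  # order happened
```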
So… What did we learn? • Blind reliance on some index value (Cpk, Cp, P/T, P/TV, …) to tell you what is going on might get you in trouble. • Always: • Make sure you understand how the index is calculated. • Use the approach fully, not half-way. • Verify that all assumptions were met. • Data stratification opportunities abound. Identify them early on in your project. • A few simple rules of thumb will quickly help you determine if you have a chance of having a good measurement system.
So… What did we learn? Further analysis of the Gage R&R data can provide you with some great insights into and improvement opportunities for your measurement process. Data has a finite speed. Being aware of this and planning for it during your measure phase will help keep you on the right track.
Parting Thoughts • My organization doesn’t use Six Sigma; do these insights benefit me as well?
Thank You Questions?