Quality management: SPC - I Presented by: Dr. Husam Arman
Quality Control (QC) • Control – the activity of ensuring conformance to requirements and taking corrective action when necessary to correct problems • Importance • Daily management of processes • Prerequisite to longer-term improvements
Designing the QC System • Quality Policy and Quality Manual • Contract management, design control and purchasing • Process control, inspection and testing • Corrective action and continual improvement • Controlling inspection, measuring and test equipment (metrology, measurement system analysis and calibration) • Records, documentation and audits
Inspection/Testing Points • Receiving inspection • In-process inspection • Final inspection
Receiving Inspection • Spot check procedures • 100 percent inspection • Acceptance sampling
Acceptance Sampling • Lot received for inspection → sample selected and analyzed → results compared with acceptance criteria • If the lot is accepted: send to production or to the customer • If the lot is rejected: decide on its disposition
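A minimal sketch of how such a single-sampling decision could be coded; the plan parameters n and c, the function name accept_lot, and the simulated lot are illustrative assumptions, not taken from the slides:

```python
# Minimal single-sampling plan sketch: inspect n items from the lot and
# accept the lot if the number of nonconforming items is at most c.
# The plan parameters n and c are illustrative, not from the slides.
import random

def accept_lot(lot, n=50, c=2):
    """Return True if a random sample of n items contains at most c defects."""
    sample = random.sample(lot, min(n, len(lot)))
    defects = sum(1 for item in sample if item == "defective")
    return defects <= c

# Example: a lot of 1,000 items with roughly 3% defectives
lot = ["defective" if random.random() < 0.03 else "good" for _ in range(1000)]
print("Accept lot:", accept_lot(lot))
```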
Pros and Cons of Acceptance Sampling • Arguments for: provides an assessment of risk; inexpensive and suited for destructive testing; requires less time than other approaches; requires less handling; reduces inspector fatigue • Arguments against: does not make sense for stable processes; only detects poor quality, does not help to prevent it; is non-value-added; does not help suppliers improve
In-Process Inspection • What to inspect? • Key quality characteristics that are related to cost or quality (customer requirements) • Where to inspect? • Key processes, especially high-cost and value-added • How much to inspect? • All, nothing, or a sample
Human Factors in Inspection Inspection should never be a means of assuring quality. The purpose of inspection should be to gather information to understand and improve the processes that produce products and services.
Measurement system components • Equipment or gage • Type of gage • Attribute: go/no-go, vision systems (part present or not present) • Variable: calipers, probes, coordinate measuring machines • Unit of measurement • Operator and operating instructions
Measurement error • Measurement error is the difference between a measured value and the true value.
Metrology - Science of Measurement Accuracy - closeness of agreement between an observed value and a standard Precision - closeness of agreement between randomly selected individual measurements
Calibration • Calibration - comparing a measurement device or system to one having a known relationship to national standards
Types of measurement variation • Accuracy • Stability • Reproducibility • Repeatability
Accuracy • The difference between the true average and the observed average. (The true average may be obtained by using a more precise measuring tool.)
Stability • The difference in the average of at least two sets of measurements obtained with a gage over time.
Reproducibility • Variation in the averages of measurements made by different operators using the same gage to measure the same part.
Repeatability • The variability of the measurements obtained by one person while measuring the same item repeatedly. This is also known as the inherent precision of the measurement equipment.
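The types of measurement variation above can be estimated roughly from repeated readings of a single part. A hedged sketch using hypothetical data and simplified formulas (not a full Gage R&R study; the reference value and readings are invented):

```python
# Rough estimates of gage bias (accuracy), repeatability, and reproducibility
# from repeated measurements of one part by three operators (hypothetical data).
import statistics as st

reference = 10.00  # "true" value obtained with a more precise instrument
measurements = {   # operator -> repeated readings of the same part
    "A": [10.02, 10.01, 10.03, 10.02],
    "B": [9.98, 9.99, 9.97, 9.99],
    "C": [10.00, 10.01, 10.00, 9.99],
}

all_readings = [x for reads in measurements.values() for x in reads]
bias = st.mean(all_readings) - reference                               # accuracy
repeatability = st.mean(st.stdev(r) for r in measurements.values())    # within-operator spread
reproducibility = st.stdev(st.mean(r) for r in measurements.values())  # between-operator spread

print(f"bias={bias:.4f}  repeatability={repeatability:.4f}  reproducibility={reproducibility:.4f}")
```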
How do we improve gage capability? • Reproducibility • operator training, or • more clearly define measurement scale available to the operator • Repeatability • gage maintenance • gage redesign to better fit application
Quality Metrics “We best manage what we can measure”
Metrics • A strategy without metrics is just a wish. And metrics that are not aligned with strategic objectives are a waste of time. • Emery Powell • If you don’t keep score, you’re only practicing. • You get what you inspect, not what you expect
Metric • A metric is a verifiable measure that • captures performance in terms of how something is being done relative to a standard, • allows and encourages comparison, • supports business strategy. • A metric is a verifiable measure stated in either quantitative or qualitative terms. • “95 percent inventory accuracy” • “as evaluated by our customers, we are providing above-average service”
Metric • In quality management, we use metrics to translate customer needs into producer performance measures. • Internal quality metrics • scrap and rework • process capability (Cp or Cpk) • first time through quality (FTTQ)
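A brief, illustrative sketch of how these internal metrics might be computed; the production counts, specification limits, and data are invented, and the Cp/Cpk calculation assumes a stable, roughly normal process:

```python
# Illustrative calculations for the internal quality metrics listed above.
import statistics as st

# Scrap/rework and first-time-through quality (FTTQ)
units_started, units_scrapped, units_reworked = 1000, 12, 38
scrap_rate = units_scrapped / units_started
fttq = (units_started - units_scrapped - units_reworked) / units_started

# Process capability against specification limits
lsl, usl = 9.5, 10.5
data = [10.02, 9.98, 10.05, 9.93, 10.10, 9.97, 10.01, 9.99, 10.04, 9.96]
mu, sigma = st.mean(data), st.stdev(data)
cp = (usl - lsl) / (6 * sigma)
cpk = min(usl - mu, mu - lsl) / (3 * sigma)

print(f"scrap={scrap_rate:.1%}  FTTQ={fttq:.1%}  Cp={cp:.2f}  Cpk={cpk:.2f}")
```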
Customer quality measures • Customers typically relate quality to: • Feature based measures; “have” or “have not” - determined by design • Performance measures - “range of values” - conformance to design or ideal value
True versus substitute performance measures • Customers - use “true” performance measures. • example: a true measure of a car door may be “easy to close”. • true performance measures typically vary by each individual customer. • Unfortunately, producers cannot measure performance as each individual customer does. • Producers - use “substitute” performance measures • these measures are quantifiable (measurable units). • Substitute measure for a car door: door closing effort (foot-pounds). • Other example: light bulb • true performance measure -- brightens the room • substitute performance measure – wattage or lumens
Educating Consumers • Sometimes, producers educate consumers on their substitute performance measures. • What are substitute performance measures for the following customer desires: • Good Gas Mileage • Powerful Computer • What is the effect of educating consumers on performance measures?
Identifying effective metrics • Effective metrics satisfy the following conditions: • performance is clearly defined in a measurable entity (quantifiable). • a capable system exists to measure the entity (e.g., a gage). • Effective metrics allow for actionable responses if the performance is unacceptable. • There is little value in a metric which identifies nonperformance if nothing can or will be done to remedy it. • Example: Is net sales a good metric to measure the performance of a manufacturing department?
Acceptable ranges • In practice, identifying effective metrics is often difficult. • Main reason: non-performance on a metric does not always lead to customer dissatisfaction. • Consider the car door example again: if door closing effort is the metric, will a customer be dissatisfied if the actual effort is 50 foot-pounds versus 55 foot-pounds? • Producers typically identify ranges of acceptable performance for a metric. • (a) For services, these ranges are often referred to as break points. • (b) In manufacturing, these ranges are known as targets, tolerances, or specifications.
Break points • Break points are levels where improved performance will likely change customer behavior. • Example: waiting in line • Suppose the average customer will only wait for 5 minutes • Wait longer than 5 minutes -- customer is dissatisfied • 1-5 minutes -- customer is satisfied • Less than 1 minute -- customer is extremely satisfied • Should a company try to reduce average wait time from 4 to 2 minutes?
Targets, tolerances and specifications • Target (nominal) - the desired value of a characteristic. • A tolerance specifies an allowable deviation from the target value within which a characteristic is still acceptable. • Diagram: the lower specification limit (LSL) and upper specification limit (USL) lie one tolerance unit below (-1) and above (+1) the TARGET.
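A small illustrative check of measurements against a target and symmetric tolerance; the target, tolerance, and data values are hypothetical:

```python
# Classify measurements as conforming or not against target +/- tolerance.
target, tol = 10.0, 0.5
lsl, usl = target - tol, target + tol   # lower and upper specification limits

measurements = [9.7, 10.2, 10.6, 9.4, 10.0]
conforming = [lsl <= x <= usl for x in measurements]
print(f"{sum(conforming)} of {len(measurements)} measurements within [{lsl}, {usl}]")
```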
The Use of Statistics in Quality Chapter Four
Statistical Process Control (SPC) • A methodology for monitoring a process to identify special causes of variation and signal the need to take corrective action when appropriate • SPC relies on control charts
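As an illustration of the control-chart idea, here is a minimal x-bar / R chart sketch using the standard constants for subgroups of size 5 (A2 = 0.577, D3 = 0, D4 = 2.114); the subgroup data are made up:

```python
# Minimal x-bar / R control chart: compute center lines and 3-sigma limits
# from subgroup means and ranges, then flag out-of-control subgroups.
import statistics as st

subgroups = [
    [10.1, 9.9, 10.0, 10.2, 9.8],
    [10.0, 10.1, 9.9, 10.0, 10.1],
    [9.8, 10.0, 10.2, 9.9, 10.1],
    [10.3, 10.1, 10.0, 10.2, 10.1],
]
A2, D3, D4 = 0.577, 0.0, 2.114   # control chart constants for subgroup size 5

xbars = [st.mean(g) for g in subgroups]
ranges = [max(g) - min(g) for g in subgroups]
xbarbar, rbar = st.mean(xbars), st.mean(ranges)

ucl_x, lcl_x = xbarbar + A2 * rbar, xbarbar - A2 * rbar
ucl_r, lcl_r = D4 * rbar, D3 * rbar

for i, (xb, r) in enumerate(zip(xbars, ranges), start=1):
    ok = lcl_x <= xb <= ucl_x and lcl_r <= r <= ucl_r
    print(f"subgroup {i}: x-bar={xb:.2f}, R={r:.2f} -> {'ok' if ok else 'out of control'}")
```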
A few notes on SPC’s historical background • Walter Shewhart (Bell Labs, 1920s) - observed that every process exhibits some degree of variation, and that such variation is therefore to be expected • Identified two types of variation: chance cause and assignable cause • Proposed the first control chart to separate these two types of variation • SPC was successfully applied during World War II as a means of ensuring the interchangeability of parts for weapons and equipment • Resurgence of SPC in the 1980s in response to Japanese manufacturing success
The basics • “Don’t inspect the product, inspect the process.” • “If you can’t measure it, you can’t manage it.”
Barriers to process control • Tendency to focus on volume of output rather than quality of output. • Tendency to measure products against a set of internal conformance specifications that may or may not relate to customer expectations.
The SPC approach • The SPC approach is designed to identify the underlying causes of process variation that falls outside predetermined tolerances, and to implement controls that fix the problem.
The SPC steps Basic approach: • Awareness that a problem exists. • Determine the specific problem to be solved. • Diagnose the causes of the problem. • Determine and implement remedies. • Implement controls to hold the gains achieved by solving the problem.
SPC requires the use of statistics • Quality improvement efforts have their foundation in statistics. • Statistical process control involves the • collection • tabulation • analysis • interpretation • presentation of numerical data.
Statistic types • Deductive (descriptive) statistics describe a complete data set • Inductive (inferential) statistics use a limited amount of data - a sample - to draw conclusions about the larger population
Statistics • Population - described by parameters: μ, σ, σ² (deductive statistics) • Sample - described by statistics: x̄, s, s² (inductive statistics) • Inferential statistics use sample statistics to draw conclusions about population parameters
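A small sketch contrasting population parameters with sample statistics, using a simulated population (the distribution and sample size are arbitrary choices for illustration):

```python
# Population parameters (mu, sigma^2) vs. sample statistics (x-bar, s^2),
# computed from a simulated population and a random sample drawn from it.
import random
import statistics as st

population = [random.gauss(50, 5) for _ in range(10_000)]
sample = random.sample(population, 30)

mu, sigma2 = st.mean(population), st.pvariance(population)  # parameters (divide by N)
xbar, s2 = st.mean(sample), st.variance(sample)             # statistics (divide by n - 1)

print(f"mu={mu:.2f}, sigma^2={sigma2:.2f}  |  x-bar={xbar:.2f}, s^2={s2:.2f}")
```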
Types of data • Variables data - quality characteristics that are measurable values. • Measurable and normally continuous; may take on any value. • Attribute data - quality characteristics that are observed to be either present or absent, conforming or nonconforming. • Countable and normally discrete; integer
Descriptive statistics • Measures of Central Tendency • Describes the center position of the data • Mean Median Mode • Measures of Dispersion • Describes the spread of the data • Range Variance Standard deviation
Measures of central tendency: Mean • Arithmetic mean: x̄ = (x1 + x2 + … + xN) / N = (Σ xi) / N, where xi is one observation and N is the number of observations • So, for example, if the data are 0, 2, 5, 9, 12, the mean is (0+2+5+9+12)/5 = 28/5 = 5.6
Measures of central tendency: Median - mode • Median = the observation in the ‘middle’ of sorted data • Mode = the most frequently occurring value
Median and mode • Sorted data: 100, 91, 85, 84, 75, 72, 72, 69, 65 • Mode = 72 (most frequent value) • Median = 75 (middle observation) • Mean = 79.22
Measures of dispersion: range • The range is calculated by taking the maximum value and subtracting the minimum value. • Example: for the data 2, 4, 6, 8, 10, 12, 14, Range = 14 - 2 = 12
Measures of dispersion: variance • Calculate the deviation from the mean for every observation • Square each deviation • Add them up and divide by the number of observations (or by one less than the number of observations when estimating the variance from a sample)
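A short example computing the descriptive statistics above for the data set from the median/mode slide:

```python
# Descriptive statistics for the slide's example data set.
import statistics as st

data = [100, 91, 85, 84, 75, 72, 72, 69, 65]

print("mean     =", round(st.mean(data), 2))       # 79.22
print("median   =", st.median(data))               # 75
print("mode     =", st.mode(data))                 # 72
print("range    =", max(data) - min(data))         # 35
print("variance =", round(st.pvariance(data), 2))  # population variance (divide by N)
print("std dev  =", round(st.pstdev(data), 2))
```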