
Optimizing Subjective Results



Presentation Transcript


  1. Optimizing Subjective Results Julia C. O'Neill and Karl R. Wursthorn, Rohm and Haas Company. ASA/ASQ Fall Technical Conference, October 18, 2002

  2. Outline • Problem: Topaz Appearance • Solution Techniques: • Factorial Design • Paired Comparisons • Multidimensional Scaling • References and Software

  3. The Topaz Finish: Objective • To develop a formula that consistently provides the appearance the customer wants, and to investigate the effects of formulation and application variables on Topaz appearance. • To become the 100% supplier of powder coatings to this customer.

  4. Problem: Topaz Appearance • Does our coating “look right”? • What matters is what the customer sees. • The Topaz coating appearance depends on many variables. • Formulation • Application conditions • Measurable attributes do not give the whole picture for multi-color finishes. • Gloss and color are “averages”.

  5. The Solution • Factorial experiments allow the estimation and comparison of many effects. • Paired comparisons simplify the task of judging appearance. • Multidimensional scaling reveals the underlying dimensions of appearance.

  6. Experiment Design • Variables: • Grinding equipment (3 types) • Particle size (3 distributions) • Additive (A, B, or none) • Spray gun tip (conical or slot) • Powder charge (60 or 90 kV) • Gun feed (cup or fluid bed) • Initial screening design had 16 runs • 110 panels sprayed • Optimal selection of 20 panels
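
A minimal sketch in R (compatible in spirit with the S-Plus tools cited on the Software slide) of the factor space behind this screening design. Only the counts of levels come from the slide; the level labels (G1, fine, etc.) are placeholders for illustration.

```r
# Full candidate set of factor-level combinations
# (3 x 3 x 3 x 2 x 2 x 2 = 216); the 16-run screening design is a
# fraction of this space, and the 20 evaluated panels were an optimal
# selection from the 110 sprayed panels.
candidates <- expand.grid(
  grinder   = c("G1", "G2", "G3"),            # 3 types of grinding equipment
  psd       = c("fine", "medium", "coarse"),  # 3 particle-size distributions
  additive  = c("A", "B", "none"),
  gun_tip   = c("conical", "slot"),
  charge_kV = c(60, 90),
  gun_feed  = c("cup", "fluid bed")
)
nrow(candidates)   # 216 candidate combinations
```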

  7. 20 Panels Evaluated

  8. Appearance Data Collection • Discover, rather than impose, dimensions of appearance • Do not specify attributes • Trained, experienced “eyes” are critical

  9. Paired Comparisons • When objects to be compared can be judged only subjectively. • When differences between objects are small. • When comparison of two objects at one time is simpler than comparing many objects. • When the comparison between 2 objects should not be influenced by other objects. • When we want to uncover “hidden structure” of objects.

  10. Paired Comparisons vs. Ranking vs. Rating • Ranking is preferred when: • Several objects can easily be compared simultaneously. • Differences between objects are fairly apparent. • Rating is preferred when: • Several grades can be distinguished with consensus among judges. • Ratings can be treated like measurements.

  11. Instructions to Evaluators • Cluster based on similarity: Looking at all 20 panels, form groups of any panels which are indistinguishable • Panels in same group have dissimilarity 0 • Select one panel to represent each group • Rate each pair of distinct panels 1 to 4: • 1 means hardly any difference • 4 means very different • Rate based on “appearance”, whatever that means to the judge.

  12. Dissimilarity Data Collection • Number of pairs is n(n − 1)/2. • For 20 objects, 190 pairs. • Similar to a Balanced Incomplete Block design with block size 2 • Modification of all possible pairs of panels • Sort first into indistinguishable groups • Can be optimized for order of presentation • Can be fractionated
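
As a quick check on the counts above, a short R sketch that also generates the full pair list and randomizes the presentation order:

```r
n <- 20
n * (n - 1) / 2                         # 190 distinct pairs of panels

pairs <- t(combn(n, 2))                 # every (i, j) pair with i < j
pairs <- pairs[sample(nrow(pairs)), ]   # randomize the order of presentation
```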

  13. Dissimilarity Ratings – averages from 3 evaluators
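
A minimal sketch of how the three evaluators' ratings could be combined into the averaged dissimilarity matrix named in this slide title; the matrix names r1, r2, r3 are hypothetical, and each is assumed to be a symmetric 20 x 20 matrix of the 1-to-4 ratings.

```r
# Average the three evaluators' dissimilarity ratings into one matrix
diss <- (r1 + r2 + r3) / 3
d <- as.dist(diss)   # symmetric dissimilarity object; diagonal is ignored
```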

  14. Mapping Example of Multidimensional Scaling • Start with distances between objects (cities). • Given only the distances, produce the map.
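
A short R sketch of this mapping example, using the built-in UScitiesD distance table: classical (metric) MDS recovers the relative city positions from the distances alone.

```r
# Classical (metric) MDS: rebuild a "map" from pairwise distances only
loc <- cmdscale(UScitiesD, k = 2)

# The solution is determined only up to rotation and reflection, so the
# axes may need flipping to match the usual map orientation.
plot(-loc[, 1], -loc[, 2], type = "n", asp = 1, xlab = "", ylab = "")
text(-loc[, 1], -loc[, 2], labels = rownames(loc))
```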

  15. U. S. Map Reconstruction

  16. U. S. Map Reconstruction

  17. Distances among Objects • Distance d_ij between objects i and j • No meaning when i = j • Same distance from i to j as from j to i: d_ij = d_ji

  18. Dissimilarities among Objects • Dissimilarity δ_ij between objects i and j • No meaning when i = j • No effective difference between δ_ij and δ_ji

  19. Motivating Concept of MDS • Dissimilarities behave like distances. • δ_ij should correspond to d_ij. • Configuration should place similar objects near each other.

  20. Advantages of MDS • Uncover hidden structure of data • Reduce to a few dimensions • Compare objects at opposite ends • Examine physical configuration of objects

  21. Common applications: • Psychologists • Perception of speech and musical tones • Perception of colors and faces • Anthropologists • Comparing different cultural groups • Marketing researchers • Consumer reactions to products

  22. Multidimensional Scaling for Topaz • Dissimilarity is analogous to distance. • Given only the dissimilarities, produce the configuration. • If configuration is good, unlike objects will be far apart. • More complicated than distances: • Noise in the data • Number of dimensions is unknown

  23. Types of MDS: d = f(δ) • Metric • d = bδ • d = a + bδ • Nonmetric • ordinal relationship, increasing or decreasing • Choice of function has little effect on configuration
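
A hedged R sketch contrasting the metric and nonmetric fits, using the dissimilarity object d from the averaging sketch above; isoMDS comes from the MASS library named on the Software slide.

```r
library(MASS)   # provides isoMDS (Venables & Ripley)

metric_fit    <- cmdscale(d, k = 2)   # metric MDS: distances fit the dissimilarities directly
nonmetric_fit <- isoMDS(d, k = 2)     # nonmetric MDS: only the ordinal relationship is used

nonmetric_fit$stress                  # Kruskal stress of the 2-D solution
plot(nonmetric_fit$points, type = "n",
     xlab = "Dimension 1", ylab = "Dimension 2")
text(nonmetric_fit$points, labels = 1:nrow(nonmetric_fit$points))
```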

  24. Computational Approach • Define an objective function (badness-of-fit). • Stress is sum of squared discrepancies divided by a scaling factor. • Specify f. • Find the best configuration X to minimize the objective function.
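
Written out, the badness-of-fit on this slide is Kruskal's Stress-1 (the form given in the Kruskal & Wish reference), with f(δ_ij) the fitted value for pair (i, j) and d_ij the configuration distance:

```latex
\mathrm{Stress} = \sqrt{\frac{\sum_{i<j}\bigl(d_{ij} - f(\delta_{ij})\bigr)^{2}}{\sum_{i<j} d_{ij}^{2}}}
```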

  25. Determining Dimensionality • Useful • People can focus on only a few attributes • Difficult to interpret or display more than 3 • Rule of thumb • R ≤ (I − 1)/4, where R = number of dimensions and I = number of objects • Statistical • Compare stress over a range of reasonable dimensions
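
A small R sketch of the statistical approach, assuming the same dissimilarity object d as above: fit the nonmetric solution at several dimensionalities and look for an elbow in stress.

```r
library(MASS)

# Stress at 1 to 4 dimensions; an elbow suggests a reasonable dimensionality.
# Rule of thumb: R <= (I - 1)/4 = (20 - 1)/4, i.e. at most about 4 dimensions
# for 20 panels.
stress_by_k <- sapply(1:4, function(k) isoMDS(d, k = k, trace = FALSE)$stress)
plot(1:4, stress_by_k, type = "b", xlab = "Dimensions", ylab = "Stress (%)")
```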

  26. Three-Way MDS • Can use average dissimilarities in two-way MDS. • ALSCAL/PROXSCAL (in SAS and SPSS) make full use of the information from multiple judges. • For the Topaz data, the judges were very consistent, so three-way MDS made little difference.

  27. The Configuration

  28. Shepard Diagram
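
A hedged sketch of how a Shepard diagram could be drawn with the MASS library, reusing d and the nonmetric fit from the earlier sketch:

```r
library(MASS)

# Shepard diagram: configuration distances plotted against the original
# dissimilarities, with the fitted monotone regression overlaid.
sh <- Shepard(d, nonmetric_fit$points)
plot(sh$x, sh$y, pch = 20,
     xlab = "Dissimilarity", ylab = "Configuration distance")
lines(sh$x, sh$yf, type = "S")   # monotone (step) fit
```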

  29. Interpretation • Examine the Configuration! • Dimensional Approach • Regress other variables on coordinates • Attends most to large distances • Neighborhood Approach • Shared characteristics in the same region • Focus is on small distances • Eclectic Approach • Use any means at your disposal to understand as much as possible about the configuration
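
A minimal sketch of the dimensional approach in R: regress a measured attribute on the MDS coordinates. Here gloss is a hypothetical vector of gloss readings for the 20 panels, and the configuration comes from the earlier nonmetric fit.

```r
conf <- nonmetric_fit$points   # 20 x 2 configuration from the earlier sketch

# Dimensional approach: the regression coefficients give the attribute's
# direction within the configuration.
fit <- lm(gloss ~ conf[, 1] + conf[, 2])
summary(fit)
```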

  30. Examining the Configuration [Configuration plot annotated with appearance directions: Finer vs. Coarser along one dimension; Darker, Less metallic vs. Lighter, More metallic along the other; a Glossy, Fine texture region noted]

  31. Relating Effects to Dimensions [Plot relating factor effects to the Darker, Less metallic vs. Lighter, More metallic dimension]

  32. Relating Effects to Dimensions [Plot relating factor effects to the Finer vs. Coarser dimension]

  33. Hierarchical Clustering – 2D

  34. Hierarchical Clustering – 3D
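
A sketch of how clusterings like these could be produced, again assuming the dissimilarity object d from the earlier sketch:

```r
# Hierarchical clustering on the averaged dissimilarities; cutting the tree
# gives panel groups that can be overlaid on the 2-D or 3-D configuration.
hc <- hclust(d, method = "average")
plot(hc, labels = 1:20)        # dendrogram
groups <- cutree(hc, k = 4)    # the number of clusters (4 here) is a judgment call
```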

  35. Gun Tip Effect

  36. Charge Effect

  37. Additive Effect

  38. Equipment Effect

  39. Particle Size Effect

  40. Gun Type Effect

  41. Topaz Results • Created a formulation the customer accepted. • Discovered that gun tip is critical to the appearance of this formula. • Identified variables to consider when trouble-shooting complaints.

  42. Summary • Factorial experiments allow the estimation and comparison of many effects. • Paired comparisons simplify the task of judging appearance. • Multidimensional scaling reveals underlying dimensions of appearance.

  43. References • David, H. A. (1988). The Method of Paired Comparisons. London: Charles Griffin & Company. • Kruskal, J. B. and Wish, M. (1978). Multidimensional Scaling. Sage University Paper Series on Quantitative Applications in the Social Sciences, 07-011. Newbury Park and London: Sage Publications. • Venables, W. N. and Ripley, B. D. (1994). Modern Applied Statistics with S-Plus. New York: Springer.

  44. Software • SPSS • SAS • S-Plus • MASS library from Venables & Ripley • MASS@stats.ox.ac.uk
