
Preventive Maintenance in Semiconductor Manufacturing Fabs


Presentation Transcript


  1. Preventive Maintenance in Semiconductor Manufacturing Fabs SRC Task NJ-877 FORCe Kick-Off Meeting, Seattle, April 26-27, 2001

  2. Research Plan (1) Develop, test, and transfer software tools for optimal PM scheduling; (2) Research and validate the models, methods and algorithms for software development in (1); (3) Facilitate the transfer of models, algorithms and tools to 3rd party commercial software vendors.

  3. OVERVIEW • Research Team • Proposed Research • Deliverables • Preview: “Best Practices” Survey in PM • Methodology Basis: TECHCON 2000 Paper.

  4. Research Team
  • Institute for Systems Research, University of Maryland
    • Prof. Michael Fu (Project Director)
    • Prof. Steven I. Marcus
    • Xiaodong Yao (Ph.D. student); 1 more Ph.D. student joining in Fall
  • Electrical & Computer Engineering & Computer Science, Systems Modeling & Information Technology Laboratory, University of Cincinnati
    • Prof. Emmanuel Fernandez
    • Jason Crabtree (M.Sc. student)
    • 1-2 new students beginning in September

  5. ISR

  6. Background of Researchers at ISR
  • Michael Fu, Robert H. Smith School of Business: Operations Research; Stochastic Modeling; Simulation Methodology (Simulation Area Editor, OR; AE, Sim/Stoch Models, Mgmt Sci; Special Issue Editor, TOMACS)
  • Steven Marcus, Dept. of Electrical and Computer Engineering: Stochastic Control; Markov Decision Processes; Risk-Sensitive Control. IEEE Fellow. Past Director, ISR; (interim) Chair, ECE Dept. (Editor, SIAM J. Control and Optimization)
  • Xiaodong Yao, Dept. of Electrical and Computer Engineering: Markov Decision Processes; Operations Research; System Reliability

  7. Liaisons
  • Ramesh Rao, National Semiconductor (TAB: Mohammed Ibrahim)
  • Marcellus Rainey, TI (TAB: Kishore Potti)
  • Yin-Tat Leung, IBM (TAB: Sarah Hood)
  • Man-Yi Tseng (Matilda O'Connor), AMD (TAB: Edwin Cervantes)
  • Madhav Rangaswami, Intel
  • Conference calls held with three liaisons already (multiple times with AMD)

  8. SMIT Lab
  OBJECTIVES: Conduct basic and applied research in modeling, algorithms, and information technology (IT) implementation for (stochastic) systems and processes; serve as a consulting lab in IT for interdisciplinary projects, e.g., manufacturing, operational planning, distance education.
  APPLICATION AREAS: Manufacturing and operations management; security and fault management in telecommunication networks; logistics; workforce management; IT learning tools.

  9. Background of Researchers at University of Cincinnati
  • Emmanuel Fernandez, ECECS Dept. (SIE/UA '91-00): Stochastic Models; Stochastic Decision & Control Processes; Manufacturing, Logistics & Telecommunications Applications; Information Technology. Senior Member, IEEE & IIE
  • Jason Crabtree, ECECS Dept.: Stochastic Models; Operations Research; Computer Implementation of Algorithms. Bachelor's in Mechanical & Industrial Engineering, Univ. of Cincinnati, 2000.

  10. Research Plan (1) Develop, test, and transfer software tools for optimal PM scheduling; (2) Research and validate the models, methods and algorithms for software development in (1); (3) Facilitate the transfer of models, algorithms and tools to 3rd party commercial software vendors.

  11. Motivation
  • The reliability of equipment is critical to a fab's operational performance;
  • Industry calls for analytical models to guide PM practice;
  • In academia, the problems of maintenance and production have been addressed in isolation until very recently;
  • Traditional models ignore the impact of other system state variables (e.g., WIP level, operational status of upstream or downstream tools) on PM scheduling.

  12. Research Approach
  The generic form of the problem of interest:
  where:
  • μ is a PM policy;
  • π is a production policy;
  • E[C] represents the expected total costs.
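The optimization problem itself did not survive the transcript (it appeared as an image on the slide). A plausible reconstruction from the bullet definitions above, offered only as a sketch, is a joint optimization over the PM policy and the production policy:

$$\min_{\mu,\,\pi}\; E\!\left[\, C(\mu, \pi) \,\right]$$

where $C(\mu,\pi)$ denotes the total cost incurred when PM policy $\mu$ and production policy $\pi$ are applied together.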

  13. Proposed Framework

  14. Markov Decision Process Model
  Components:
  • the system state, which includes information such as tool "age" since the last PM and the WIP level at each tool;
  • the admissible actions in each state;
  • the cost structure, e.g., costs for "planned" downtime, costs for "unplanned" downtime, and costs for WIP;
  • the objective function, which includes weighted profits along with the cost structure;
  • sources of uncertainty, e.g., "out of control" events, tool failure processes, future demand, and incoming WIP.
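The slide gives these components only in words; as a hedged summary, they map onto the standard MDP tuple below (the symbols are illustrative choices, not notation taken from the presentation):

$$\left( \mathcal{X},\; \mathcal{A}(x),\; c(x,a),\; P(x' \mid x,a) \right)$$

with $\mathcal{X}$ the state space (tool ages since last PM, WIP levels), $\mathcal{A}(x)$ the admissible actions in state $x$, $c(x,a)$ the one-step cost/profit combining planned downtime, unplanned downtime, WIP, and weighted profit terms, and $P$ the transition law capturing "out of control" events, tool failures, demand, and incoming WIP.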

  15. Distinctions of Proposed Model
  • Integration of production control information, e.g., current WIP levels and anticipated demand;
  • Description of the inter-dependence of different PMs within a single tool and between tool sets;
  • Modeling of "out of control" events such as process drifts, in addition to the failure process of tools.

  16. Deliverables to Industry
  1. Survey of current PM practices in industry (Report) (P: 30-SEP-2001)
  2. Models and algorithms to cover bottleneck tool sets in a fab (Report) (P: 31-MAR-2002)
  3. Simulation engine implemented in commercially available software: software package with documentation, and report with case studies and benchmark data (Software, Report) (P: 30-SEP-2002)

  17. Deliverables (Continued)
  4. Intelligent PM scheduling software tools, with accompanying simulation engine (Software, Report) (P: 30-JUN-2003)
  5. Installation and evaluation, workshop and consultation (Report) (P: 31-DEC-2003)

  18. SURVEY: PM Best Practices
  • Previous NSF/SRC project: Integrating Product Dynamics and Process Models (IPDPM), '97-2000
  • Interaction with industry (AMD) in 1999-2000
  • PM identified as a high-priority area
  • Faculty visits to industry during 2000: data collection, problem definition
  • Summer internship 2000: model validation and simulation

  19. SURVEY: PM Best Practices
  • Finding 1: "Torrents of data" flowing through the fab databases, mostly unused for modeling and decision making
  • Finding 2: PM scheduling focuses on key bottleneck tools, e.g., cluster tools for metal deposition
  • Finding 3: PM schedules are wafer-count or calendar based
  • Finding 4: Each tool group manager has total control of PM scheduling; heuristics are usually employed

  20. SURVEY: PM Best Practices
  • Finding 5: Availability of parts can be a problem
  • Finding 6: Workforce coordination is needed (PM tasks can extend over several shifts)
  • Finding 7: Consolidation of maintenance tasks is critical: different PMs, or PMs with unscheduled maintenance
  • Finding 8: No previous PM best-practices survey available
  • Finding 9: Need for stochastic PM models in semiconductor manufacturing; little relevant literature available

  21. Incorporating Production Planning into Preventive Maintenance Scheduling in Semiconductor Fabs Methodology Basis (TECHCON 2000)

  22. Collaboration
  • Academic Research Group: Xiaodong Yao (Ph.D. student), Dr. M. Fu, Dr. S.I. Marcus, Dr. E. Fernandez
  • Industry Collaborators (AMD): Craig Christian, Javad Ahmadi, Mike Hillis, Nipa Patel (now at Dell), Shekar Krishnaswamy (now at Motorola), Bill Brennan

  23. Overview 1. Problem Context 2. Hierarchical Modeling Approach 3. Markov Decision Process (MDP) Model 4. Linear Programming (LP) Model 5. Case Study 6. Future Development and Implementation 7. Acknowledgements

  24. Problem Context
  • Focused on cluster tools:
    • made up of chambers and robots
    • highly integrated
    • the entire tool's availability depends on the combined status of all its chambers.
  • Complexity of PM scheduling for cluster tools:
    • diversity of PM tasks (types, duration, on the whole tool or on an individual chamber, etc.)
    • WIP
    • "out of control" events, embedded PMs, etc.

  25. Objective
  Our proposed models are intended to answer two questions:
  (1) What is the optimal policy for each PM task, i.e., what is the optimal frequency for the PM? (PM planning)
  (2) Under the optimal policies we obtain an appropriate PM time window; within this window, what is the best time (shift/day) to do the PM? (PM scheduling)
  The overall objective is to maximize profits from the tools' operation.

  26. Hierarchical Approach
  A two-layer model structure: an MDP model for PM planning on the upper layer, which supplies PM windows to an LP model for PM scheduling on the lower layer.

  27. MDP Model
  • Markov Decision Process (MDP) methodology results in policies that "provide a trade-off between immediate and future benefits and costs, and utilize the fact that observations will be available in the future";
  • Four main components of an MDP model:
    a. system states
    b. admissible actions in each state
    c. objective functions
    d. sources of uncertainty.

  28. MDP Model
  • State variables:
    • X_il(t) = number of days passed or number of wafers produced since the last PM task;
    • I_i(t) = workload level at tool i.
  • Admissible actions: to do the PM or not
  • Sources of uncertainty: tools' failure dynamics and the demand pattern
  • Objective function:
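The objective function on this slide is also missing from the transcript. A hedged sketch of what such an objective could look like, consistent with the stated goal of maximizing profit from tool operation net of PM, failure, and WIP costs (the coefficients $p$, $c_{\mathrm{PM}}$, $c_{\mathrm{fail}}$, $h$ and the indicators $u_i(t)$, $f_i(t)$ are illustrative assumptions, not the authors' notation), is:

$$\max_{\mu}\; E\!\left[ \sum_{t=0}^{T} \sum_i \left( p\, d_i(t) \;-\; c_{\mathrm{PM}}\, u_i(t) \;-\; c_{\mathrm{fail}}\, f_i(t) \;-\; h\, I_i(t) \right) \right]$$

where $d_i(t)$ is the throughput of tool $i$ in period $t$, $u_i(t)$ indicates a planned PM, and $f_i(t)$ an unplanned failure.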

  29. MDP Model
  Subject to the following state-transition and constraint equations:
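The equations themselves are not in the transcript. As a hedged illustration of the kind of dynamics such a model typically imposes (the exact form on the slide may differ), the PM "age" counter resets when a PM is performed and otherwise accumulates, while WIP follows a flow-balance relation:

$$X_{il}(t+1) \;=\; \begin{cases} 0, & \text{if PM task } l \text{ is performed on tool } i \text{ at time } t,\\ X_{il}(t) + \delta_i(t), & \text{otherwise,} \end{cases} \qquad I_i(t+1) \;=\; I_i(t) + A_i(t) - D_i(t),$$

where $\delta_i(t)$ is the elapsed days or wafers produced, $A_i(t)$ the incoming WIP, and $D_i(t)$ the completed work at tool $i$ in period $t$; these three symbols are hypothetical placeholders.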

  30. LP Model
  • Assume we already have the optimal policies for the PMs, from which data such as PM windows are available.
  • The LP model then comes into play to decide when to do the PMs within their windows.
  • Assumptions:
    • The planning horizon is shorter than the minimum time between any two consecutive occurrences of the same PM task on a chamber
    • Before planning, it is known with certainty which PMs have to be done during this horizon.

  31. LP Model
  Objective function and constraints:
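Neither the objective nor the constraints appear in the transcript. Drawing on the surrounding slides (PM windows from the MDP layer, manpower and WIP constraints, throughput profit, inventory and PM costs), one plausible sketch of the mixed-integer formulation, with all variable and parameter names assumed rather than taken from the slides, is the following. Let $x_{pd} \in \{0,1\}$ indicate that PM task $p$ is performed on day $d$ of the horizon.

$$\max_{x}\; \sum_{d} \left( r\, \mathrm{Thru}_d(x) \;-\; h\, \mathrm{WIP}_d(x) \;-\; \sum_{p} c_p\, x_{pd} \right)$$

subject to

$$\sum_{d \in W_p} x_{pd} = 1 \;\; \forall p \quad \text{(each PM exactly once, inside its window } W_p\text{)}, \qquad \sum_{p} m_p\, x_{pd} \le M_d \;\; \forall d \quad \text{(manpower per day/shift)}, \qquad \mathrm{WIP}_d(x) \le \overline{W} \;\; \forall d \quad \text{(WIP limits)},$$

where $\mathrm{Thru}_d(x)$ and $\mathrm{WIP}_d(x)$ depend on which tools are down for PM on day $d$; the Remarks on the next slide note that such relations are nonlinear but can be handled via look-up tables.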

  32. Remarks
  • Mixed IP model;
  • Constraints (3) and (6) are non-linear, but can easily be expressed in a "look-up" table form
  • Basically in line with the MDP model, except that it does not include stochastic data
  • Maximizing availability versus matching availability with the "demand pattern".

  33. MDP vs. LP

  34. Solving the LP Model
  Using an optimization package:
  (1) EasyModeler:
    • includes a Model Description Language
    • provides model-data independence
    • tightly integrated with OSL
  (2) OSL (Optimization Subroutine Library):
    • provides a stand-alone solver for LP, MIP, QP, or SLP
    • includes about 70 user-callable functions
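EasyModeler and OSL are the IBM tools the project actually used; the snippet below is not their API. It is only a minimal, self-contained sketch, in the open-source Python package PuLP, of the same kind of window-constrained PM assignment MIP described on the preceding slides. All data (tasks, windows, labor hours, costs, the day-dependent WIP weights) are invented for illustration.

```python
# Hedged illustration only: the project itself used EasyModeler/OSL; this sketch
# uses the open-source PuLP package to show the same *kind* of window-constrained
# PM assignment MIP.  All data below are invented.
import pulp

DAYS = range(7)                       # 7-day planning horizon (as in the case study)
# PM task -> (allowed days within its PM window, labor hours, direct PM cost)
PMS = {
    "PM1": ([0, 1, 2], 4, 300.0),
    "PM2": ([2, 3, 4], 6, 500.0),
    "PM3": ([1, 2, 3, 4, 5], 3, 200.0),
}
MANPOWER_HOURS_PER_DAY = 8            # maintenance labor available per day
DOWNTIME_COST_PER_HOUR = 50.0         # profit lost per hour a tool is down
WIP_WEIGHT = [1.0, 0.8, 0.6, 0.9, 1.2, 1.1, 0.7]   # relative WIP pressure per day

prob = pulp.LpProblem("pm_scheduling", pulp.LpMinimize)

# x[p][d] = 1 if PM task p is performed on day d
x = {p: {d: pulp.LpVariable(f"x_{p}_{d}", cat="Binary") for d in DAYS} for p in PMS}

# Objective: direct PM cost plus WIP-weighted downtime cost of the chosen day
# (an illustrative surrogate for the profit/WIP terms described on the slides).
prob += pulp.lpSum(
    (cost + DOWNTIME_COST_PER_HOUR * hours * WIP_WEIGHT[d]) * x[p][d]
    for p, (window, hours, cost) in PMS.items()
    for d in DAYS
)

for p, (window, hours, cost) in PMS.items():
    # each PM is done exactly once, and only inside its window
    prob += pulp.lpSum(x[p][d] for d in window) == 1
    for d in DAYS:
        if d not in window:
            prob += x[p][d] == 0

for d in DAYS:
    # daily maintenance labor budget
    prob += pulp.lpSum(PMS[p][1] * x[p][d] for p in PMS) <= MANPOWER_HOURS_PER_DAY

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for p in PMS:
    chosen = [d for d in DAYS if pulp.value(x[p][d]) > 0.5]
    print(f"{p} scheduled on day {chosen[0]}")
```

Because the objective weights each candidate day by a WIP factor, the solver places each PM in the cheapest feasible day of its window while respecting the daily labor budget; the slides' real model additionally ties throughput and WIP to which tools are down, which a production solver would handle via the tabulated relations mentioned in the Remarks.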

  35. Case Study
  • PM tasks PM1 through PM11;
  • Planning horizon: 7 days
  • Tool IDs Tool1 through Tool11
  • Resource (manpower) constraints
  • WIP level constraints
  • Inventory costs
  • PM costs (e.g., materials, kits, etc.)
  • Profits from wafer throughput.

  36. Case Study
  • Comparing results of the "best-in-practice" schedule and the "LP model-based" schedule
  • Using AutoSched AP software (each PM schedule is modeled as a "PM order" in ASAP)
  • Running with the same lots and WIP data as of one specific week
  • Simulating one week
  • Running 10 replications of each schedule.

  37. Results
  Chart: average number of wafers completed on each cluster tool, heuristic vs. model-based schedule (y-axis: number of wafers; x-axis: cluster tool ID).

  38. Results
  Chart: average WIP (number of lots) on each tool, heuristic vs. model-based schedule (y-axis: number of lots; x-axis: tool ID).

  39. Results
  • The model-based schedule outperforms the reference* schedule on both tool throughput and tool WIP level
  • Consolidating long PM tasks yields a significant improvement in throughput, e.g., about 14% for Tool 1
  • The overall improvement is modest, because the reference schedule is already near optimal
  • More scenarios should be collected and compared.
  *Best-in-practice heuristic

  40. Future Work
  • On models:
    • developing a computationally tractable MDP model
    • developing efficient numerical methods for the MDP
    • sensitivity analysis for the LP model, etc.
  • On implementation:
    • fine-tuning model parameters
    • integrating the models into real systems, etc.

  41. Acknowledgements
  • Thanks to AMD for providing the data for the case study;
  • Many thanks to:
    • Craig Christian, for invaluable discussions and data collection
    • Javad Ahmadi, for great help with the LP implementation
    • Mike Hillis, for excellent support in group coordination
    • Nipa Patel, for much help with the ASAP simulation
    • Shekar Krishnaswamy, for problem identification.
