Chapter 5: Flow Lines • Types • Issues in Design and Operation • Models of Asynchronous Lines • Infinite or Finite Buffers • Models of Synchronous (Indexing) Lines • Closed-Loop Material Handling. Focus on the impact of variability on design, operation, and performance.
Types of Flow Lines • Paced vs. unpaced • Job movement between work stations • Indexing (synchronous) lines: all jobs move simultaneously • Paced: Limit on time available to complete each task • Unpaced: No transfer until all tasks completed • Asynchronous: no coordination between movements at different stations (usually unpaced) • Blocking or starvation may occur • Task times by human operators are highly variable!
Design & Operational Issues • Configuration and Layout • Number of stations: If TH* is the required throughput and the average total time (work content) for each job is W, then the minimum number of stations is m* = W · TH* (rounded up to an integer). In practice more will be needed because • Imperfect line balance means work at each station is not exactly W/m* • Operator task time variability causes delays • Quality problems may require rework • Paralleling • A long task may require several stations in parallel in order to balance with other stations • Storage space for in-process inventory: location and size
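A minimal sketch of this lower-bound calculation; the 25 minutes of work content and 0.3 jobs/minute demand below are made-up values for illustration, not from the text.

```python
import math

def min_stations(work_content, required_throughput):
    """Lower bound on the number of stations: m* = ceil(W * TH*).

    work_content: average total processing time per job, W (e.g., minutes per job)
    required_throughput: required output rate, TH* (jobs per the same time unit)
    """
    return math.ceil(work_content * required_throughput)

# Hypothetical example: 25 minutes of work per job, demand of 0.3 jobs/minute.
print(min_stations(25.0, 0.3))   # -> 8 stations as a lower bound; imbalance,
                                 # variability, and rework push the real need higher.
```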
Fluid Model for Deterministic Serial Queuing System (Hall). Suppose N single-server stations in series, each with its own unlimited queue. Track the cumulative flow of work: up to time t, jobs arrive to the line and are passed from station to station; at time t, station i holds a queue of Qi(t) jobs. [Schematic: upstream queue Q1 feeding server s1, whose output feeds the downstream queue Q2 and server s2.]
Deterministic Fluid Model (cont-1). Given: the cumulative arrival curve and the (deterministic) service rates of the stations. The queues are related by conservation of flow: whatever leaves station 1 immediately enters queue 2, so queue 2 can build only while station 1 releases work faster than station 2 can serve it. For N = 2, if we could increase either station's service rate, which would provide more improvement in performance?
Deterministic Fluid Model (cont-2) • The naïve approach is to attempt to decrease the queue that’s larger (station 1) • But this just shifts the waiting time downstream • In mfg. this is worse because downstream WIP is more valuable • Better to increase the service rate of the bottleneck server, i.e., the last place at which a queue is encountered • With deterministic service times, the bottleneck is the server with the smallest capacity (service rate)
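A crude time-stepped sketch of this argument (the two-station rates and the arrival surge are invented numbers, not from Hall or the text): speeding up the non-bottleneck station leaves the total waiting essentially unchanged, while speeding up the bottleneck removes most of it.

```python
def total_wait(s1, s2, arrival_rate=5.0, surge_hours=4.0, dt=0.001, horizon=60.0):
    """Job-hours of waiting (integral of total queue length) for a two-station
    deterministic fluid line fed by a finite surge of work."""
    q1 = q2 = 0.0
    area = 0.0
    for k in range(int(horizon / dt)):
        t = k * dt
        lam = arrival_rate if t < surge_hours else 0.0
        d1 = s1 if q1 > 0 else min(lam, s1)    # station 1 output rate
        d2 = s2 if q2 > 0 else min(d1, s2)     # station 2 output rate
        q1 = max(0.0, q1 + (lam - d1) * dt)
        q2 = max(0.0, q2 + (d1 - d2) * dt)
        area += (q1 + q2) * dt
    return area

base = total_wait(3.0, 2.0)             # station 2 is the bottleneck
print(total_wait(4.0, 2.0) - base)      # speed up station 1: waiting just moves downstream
print(total_wait(3.0, 3.0) - base)      # speed up the bottleneck: total waiting drops sharply
```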
Asynchronous Lines with Unlimited Buffers. Assume that arrivals (job orders) to the line follow a Poisson process with rate λ, and that stage i has ci parallel stations, each with service time exponentially distributed with rate μi, i = 1, …, m. Recall that the departure process from an M/M/1 queue is Poisson with rate equal to the arrival rate. The same is true for an M/M/c system. Then each stage is effectively an M/M/ci queue with arrival and departure rate equal to λ. As long as λ/(ci μi) < 1, throughput = λ regardless of the values of ci and μi! However, the configuration and processing rates of the stages do affect the amount of WIP in the line.
Average Number of Jobs in System. Formally, if Ni is the steady-state number of jobs at stage i, then for ρi = λ/(ci μi) < 1, i = 1, …, m, and if L(c, ρ) denotes the expected number of jobs in an M/M/c system with c parallel servers and server utilization ρ, then the average number of jobs in the system is E[N1 + … + Nm] = L(c1, ρ1) + … + L(cm, ρm).
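A sketch of this WIP calculation; it uses the standard Erlang-C formula for the M/M/c queue as L(c, ρ), and the arrival rate and (ci, μi) values below are hypothetical.

```python
from math import factorial

def mmc_L(lam, mu, c):
    """Expected number of jobs in an M/M/c queue (Erlang-C formula)."""
    a = lam / mu                       # offered load
    rho = a / c                        # server utilization; must be < 1 for stability
    tail = a ** c / (factorial(c) * (1 - rho))
    p_wait = tail / (sum(a ** k / factorial(k) for k in range(c)) + tail)
    return p_wait * rho / (1 - rho) + a   # expected queue + expected in service

# Hypothetical 3-stage line: lambda = 4 jobs/hr; each stage given as (c_i, mu_i).
lam = 4.0
stages = [(2, 2.5), (1, 5.0), (3, 1.6)]
print(sum(mmc_L(lam, mu, c) for c, mu in stages))   # total expected WIP in the line
```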
Work Load Allocation. Suppose that the total work for a job (total expected processing time) is W, and that of this total, wi is allocated to stage i. Then μi = 1/wi and ρi = λ wi/ci. To minimize WIP, we want to minimize L(c1, ρ1) + … + L(cm, ρm) subject to w1 + … + wm = W, wi ≥ 0. The function L(c, ρ) is increasing and convex in ρ, i.e., adding workload to a stage raises its WIP at an increasing rate as the stage becomes more heavily utilized.
Work Load Allocation (cont). The solution to this optimization problem will satisfy the first-order condition that the marginal WIP from an extra unit of workload is the same at every stage. At equal per-server utilization, this marginal WIP is smaller at stages with more servers, so for ci < cj the optimality condition implies that stage j carries more work per server than stage i. Therefore, if each stage has an equal number of servers, the workload should be allocated evenly among stages. But if not, allocate more work per server to stages with more servers.
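A numerical sketch of the allocation problem for a hypothetical two-stage line (the arrival rate, total work content, and server counts are invented); it searches the split w1 + w2 = W directly, with the Erlang-C formula standing in for L(c, ρ).

```python
from math import factorial

def mmc_L(lam, mu, c):
    """Expected number in system for an M/M/c queue (Erlang C); inf if unstable."""
    a = lam / mu
    rho = a / c
    if rho >= 1:
        return float("inf")
    tail = a ** c / (factorial(c) * (1 - rho))
    p_wait = tail / (sum(a ** k / factorial(k) for k in range(c)) + tail)
    return p_wait * rho / (1 - rho) + a

# Hypothetical two-stage line: lambda = 2 jobs/hr, W = 1 hr of work per job,
# stage 1 has c1 = 1 server, stage 2 has c2 = 3 servers; mu_i = 1 / w_i.
lam, W, c1, c2 = 2.0, 1.0, 1, 3
wip, w1 = min(
    (mmc_L(lam, 1 / w1, c1) + mmc_L(lam, 1 / (W - w1), c2), w1)
    for w1 in (i / 1000 for i in range(1, 1000))
)
print(f"w1 = {w1:.3f}, w2 = {W - w1:.3f}, total WIP = {wip:.2f}")
# The minimizer loads stage 2 (more servers) more heavily per server (w2/3 > w1),
# rather than splitting the work evenly per server (w1 = 0.25).
```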
General Interarrival and Service Times. Model each stage as a GI/G/c system, then use your favorite approximation for the system population (see Table 3.1), where the arrival stream to stage i + 1 is the departure stream from stage i. Then use (3.160) to estimate the variability (squared coefficient of variation) of each stage's departure process, which becomes the arrival variability for the next stage. Impact of variability: E[Ni] increases with the interarrival-time and service-time variability. If the sequence of stations visited can be changed, average flow time is minimized by putting the stations with less variable service times first.
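Table 3.1 and equation (3.160) are not reproduced here, so this sketch substitutes two well-known stand-ins: the Allen-Cunneen correction for the GI/G/c queue length and a QNA-style formula for the departure variability that links one stage to the next. The stage data are hypothetical.

```python
from math import factorial, sqrt

def erlang_c_Lq(lam, mu, c):
    """Expected queue length for M/M/c (Erlang C), the base of the approximation."""
    a = lam / mu
    rho = a / c
    tail = a ** c / (factorial(c) * (1 - rho))
    p_wait = tail / (sum(a ** k / factorial(k) for k in range(c)) + tail)
    return p_wait * rho / (1 - rho)

def gig_c_stage(lam, ca2, mu, cs2, c):
    """One GI/G/c stage: returns (E[N], departure SCV).

    Uses the Allen-Cunneen variability correction for Lq and a QNA-style
    formula for the squared coefficient of variation of departures."""
    rho = lam / (c * mu)
    Lq = 0.5 * (ca2 + cs2) * erlang_c_Lq(lam, mu, c)     # variability correction
    EN = Lq + lam / mu
    cd2 = 1 + (1 - rho ** 2) * (ca2 - 1) + (rho ** 2 / sqrt(c)) * (cs2 - 1)
    return EN, cd2

# Hypothetical 3-stage line, lambda = 4/hr, Poisson arrivals (ca2 = 1).
lam, ca2 = 4.0, 1.0
stages = [(2, 2.5, 0.5), (1, 5.0, 1.5), (3, 1.6, 0.8)]   # (c, mu, cs2) per stage
total = 0.0
for c, mu, cs2 in stages:
    EN, ca2 = gig_c_stage(lam, ca2, mu, cs2, c)          # departures feed the next stage
    total += EN
print(round(total, 2))
```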
Asynchronous Lines with Finite Buffers • Single station (server, machine) per stage • Production blocking: A station is blocked if a completed job cannot be moved out of the station because the downstream buffer is full. If a job is available for processing, the station will process the job if it is not blocked. • bi is the limit on the total number of jobs waiting for processing or in process at station i It can be shown that throughput increases with the buffer sizes. How can we compute or estimate it?
3 Stages, Exponential Service Times. Assume an infinite number of jobs in front of station 1 (high demand, unlimited raw material). N2(t) = the number of jobs that have been processed by station 1 but not yet completed by station 2; N3(t) is the corresponding quantity for station 3. {N2(t), N3(t); t ≥ 0} is a Markov chain with possible states ni = 0, 1, …, bi + 1 (Ni(t) = bi + 1 if stage i − 1 is blocked). Let p(n2, n3) denote the steady-state probability of state (n2, n3).
Transition Diagram: states (n2, n3) with n2 = 0, …, b2 + 1 and n3 = 0, …, b3 + 1; transitions correspond to service completions at stations 1, 2, and 3.
Throughput • Steady-state balance equations (p. 189) can be solved numerically; the throughput is then the rate at which jobs leave station 3, μ3 P(N3 ≥ 1). • If b2 = b3, throughput is maximized by putting the fastest station (largest μ) in the middle – true for nonexponential processing times as well.
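A sketch of the numerical solution for this 3-stage model: build the generator of the Markov chain over states (n2, n3), solve the balance equations, and read off the throughput as μ3 P(N3 ≥ 1). The service rates and buffer sizes below are hypothetical.

```python
import numpy as np

def three_stage_throughput(mu1, mu2, mu3, b2, b3):
    """Throughput of a 3-stage line: exponential services, unlimited jobs in front
    of station 1, stage limits b2, b3 (buffer plus job in process).
    State (n2, n3); ni = bi + 1 means the upstream station is blocked."""
    states = [(n2, n3) for n2 in range(b2 + 2) for n3 in range(b3 + 2)]
    idx = {s: k for k, s in enumerate(states)}
    Q = np.zeros((len(states), len(states)))
    for (n2, n3), k in idx.items():
        if n2 <= b2:                    # station 1 finishes (blocked if stage 2 was full)
            Q[k, idx[(n2 + 1, n3)]] += mu1
        if n2 >= 1 and n3 <= b3:        # station 2 finishes (it is busy and not blocked)
            Q[k, idx[(n2 - 1, n3 + 1)]] += mu2
        if n3 >= 1:                     # station 3 finishes and the job leaves the line
            Q[k, idx[(n2, n3 - 1)]] += mu3
    np.fill_diagonal(Q, -Q.sum(axis=1))
    # stationary distribution: pi Q = 0 together with sum(pi) = 1
    A = np.vstack([Q.T, np.ones(len(states))])
    rhs = np.zeros(len(states) + 1)
    rhs[-1] = 1.0
    pi = np.linalg.lstsq(A, rhs, rcond=None)[0]
    return mu3 * sum(pi[k] for (n2, n3), k in idx.items() if n3 >= 1)

# Hypothetical rates, equal buffers b2 = b3 = 2:
print(three_stage_throughput(1.0, 1.5, 1.0, 2, 2))   # fastest station in the middle
print(three_stage_throughput(1.5, 1.0, 1.0, 2, 2))   # fastest station first: lower throughput
```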
3 Stages, No Buffer Space. [Transition diagram: states (n2, n3) with each ni in {0, 1, 2}, and transitions at rates μ1, μ2, μ3.] p. 190: Throughput is symmetric in μ1, μ3. Maximize throughput with μ2 = max(μ1, μ2, μ3).
Multiple Stages • Can form a Markov chain model for {N2(t), …, Nm(t)}, but it's very unwieldy, with many states. There are some iterative algorithms to approximate the throughput (5.4). • Optimal workload allocation: the optimal proportion of the total workload to allocate to stage i follows a "bowl" pattern – interior stages get slightly less work (faster service) than the end stages (see 5.4 for the formula). • The optimal allocation of (m−1)b buffer spaces is to spread them evenly, b spaces per buffer.
Allocating Workload: Summary • Unlimited buffers: • Equalize workload among stages if single server per stage • Allocate more work per server to stations with more servers • Limited buffers: • Faster servers (less work) in middle stations • Avoid blocking early stages, starving late stages • Allocate buffer space evenly
Indexing Lines • Jobs move simultaneously between m stations • Unpaced: Move takes place when each station has completed its task. • Line balancing is done on the basis of expected task times • Actual task times are random • Actual throughput and utilizations depend on distribution of longest task time • Paced: Moves take place at fixed time intervals • Gross production rate is reciprocal of this interval • Quality declines as interval is shortened • Tradeoff to determine optimal time interval
Unpaced Lines. Ti is the random time to complete task i, i = 1, …, m. The time between successive moves is a random variable, the cycle time C = max{T1, …, Tm}. The utilization of station i and the throughput are given by ui = E[Ti]/E[C] and TH = 1/E[C]. Line balancing seeks to maximize throughput by assigning elemental (nondivisible) tasks to stations to minimize the largest expected station time, max_i E[Ti], subject to constraints on precedence and on the task combinations that can be assigned to the same station.
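A quick Monte Carlo sketch of these quantities for a hypothetical balanced line with normally distributed task times (the means and standard deviations below are invented).

```python
import random

def unpaced_line(means, sigmas, reps=200_000, seed=1):
    """Monte Carlo estimate of E[C], throughput, and station utilizations for an
    unpaced indexing line: the line indexes when the slowest station finishes,
    so each cycle is C = max(T1, ..., Tm)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(reps):
        total += max(max(0.0, rng.gauss(m, s)) for m, s in zip(means, sigmas))
    ec = total / reps
    return ec, 1.0 / ec, [m / ec for m in means]   # E[C], TH, u_i = E[T_i]/E[C]

# Hypothetical perfectly balanced 10-station line: every task ~ normal(1.0, 0.2^2).
ec, th, utils = unpaced_line([1.0] * 10, [0.2] * 10)
print(ec, th, utils[0])
# E[C] comes out well above 1.0 even though every E[T_i] = 1.0, so throughput and
# utilization fall below the values a deterministic balance would suggest.
```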
Unpaced Lines (cont-1). However, E[C] = E[max{T1, …, Tm}] ≥ max_i E[Ti], so a line balanced on expected task times still loses capacity to variability. To see the impact of variability on throughput, we will look at an approximation to the distribution of C. Assume that the random variables Ti are independent with identical distribution function F(t), so that P(C ≤ t) = [F(t)]^m. For many distributions, the right tail can be approximated by an exponential function, i.e., for t sufficiently large, 1 − F(t) ≈ a e^(−t/θ).
Unpaced Lines (cont-2). Let tm be the value of t such that 1 − F(tm) = 1/m. Using the exponential approximation, for t > tm, 1 − F(t) ≈ (1/m) e^(−(t − tm)/θ). Then (after some more manipulation) E[C] ≈ tm + 0.577 θ, where 0.577 is Euler's constant. Also it can be shown that Var[C] ≈ (π θ)²/6.
What's θ and how do we use this? Suppose T is normally distributed with mean μ and variance σ². If m = 10 then tm = μ + z0.1 σ = μ + 1.28σ. Also, θ is obtained by fitting the exponential tail to the normal tail at tm. Finally, plugging tm and θ into the approximation gives E[C], and hence the throughput 1/E[C], for the 10-station line. Note: this value of θ applies to m = 10 only! Re-do for other values of m.
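A sketch of this calculation for hypothetical task times distributed normal(1.0, 0.2²). The way θ is fitted to the normal tail below (matching the tail probability and density at tm) is an assumption, since the slide's own fit is not shown; the result is checked against simulation.

```python
from statistics import NormalDist
import random

m, mu, sigma = 10, 1.0, 0.2
task = NormalDist(mu, sigma)

t_m = task.inv_cdf(1 - 1 / m)            # tail probability 1/m: z_0.1 = 1.28 for m = 10

# One plausible fit of the exponential tail (an assumption; the text's fit may differ):
# match the tail probability and density at t_m, giving theta = (1/m) / f(t_m).
theta = (1 / m) / task.pdf(t_m)

ec_approx = t_m + 0.577 * theta          # approximate mean of the maximum of m tasks

# Monte Carlo check of E[C] = E[max of m task times]
rng = random.Random(0)
ec_sim = sum(max(rng.gauss(mu, sigma) for _ in range(m)) for _ in range(100_000)) / 100_000

print(round(t_m, 3), round(theta, 3), round(ec_approx, 3), round(ec_sim, 3))
```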
How do we use this? (cont). The same analysis for m = 20 in the text (5.3.1) finds the corresponding tm (= μ + z0.05 σ), θ, and E[C] for a 20-station line. Suppose we took a 20-station line and converted it to two parallel lines of 10 stations each (each station in the 10-station line would have twice as much work). Then each task time in a 10-station line has mean 2μ and, if the combined work is independent, standard deviation σ√2, and comparing the combined rate 2/E[C10] with 1/E[C20] shows that the two shorter parallel lines lose less capacity to variability: the maximum is taken over fewer stations, while the added task variability grows only with the square root of the work combined at a station.
Paced Lines. Ti is the random time to complete task i, i = 1, …, m. The time between successive moves is a constant, τ. The quality, Q(τ), is the probability that a product will not contain any defects. We will take it to be the probability that all stations complete their tasks within time τ, so that Q(τ) = P(T1 ≤ τ, …, Tm ≤ τ) = [F(τ)]^m, where the last equality assumes that the times Ti are iid. Note that Q(τ) is the same as the cycle time distribution for the unpaced line. We will assume that τ is set large enough so that Q(τ) is close to 1 and τ falls in the range where the right-tail (exponential) approximation to F applies.
Setting τ to achieve a specified Q(τ). If we want a specified quality Q*, we can determine what must be the probability that a station finishes in time: F(τ) = (Q*)^(1/m). Table 5.3 in the text lists some of these values; e.g., if m = 10 stations and Q* = 0.98 then F(τ) = 0.98^(0.1) ≈ 0.998. To achieve this quality, set τ to the corresponding quantile of the task-time distribution. For example, if T is normal(μ, σ²) then set τ = μ + z0.002 σ = μ + 2.88σ.
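A small sketch of this calculation; the normal task-time parameters below are hypothetical, and NormalDist supplies the quantile that Table 5.3 would otherwise provide.

```python
from statistics import NormalDist

def pacing_interval(q_target, m, mu, sigma):
    """Smallest pacing interval tau for which all m stations finish within tau with
    probability Q* = q_target, assuming iid normal(mu, sigma^2) task times."""
    per_station = q_target ** (1 / m)        # required F(tau) at each station
    return NormalDist(mu, sigma).inv_cdf(per_station)

# Slide's numbers: m = 10, Q* = 0.98 -> F(tau) ~ 0.998, i.e. tau ~ mu + 2.88 sigma.
# mu = 1.0 and sigma = 0.2 below are hypothetical task-time parameters.
print(pacing_interval(0.98, 10, 1.0, 0.2))   # ~1.576
```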
Setting τ to maximize throughput. The gross production rate will be 1/τ. However, τ should not be set too low, because quality declines. The rate at which nondefective items are produced is Q(τ)/τ. The throughput will be maximized when d/dτ [Q(τ)/τ] = 0, i.e., when τ Q′(τ) = Q(τ). See Fig. 5.5: a line from the origin is tangent to Q(τ) at τ*.
Setting τ to maximize throughput (cont-1). Assuming the right-tail approximation for Q(τ), τ* will satisfy the tangency condition τ Q′(τ) = Q(τ). For example, suppose the task time at each station is normal(μ, σ²), with θ fitted to the normal tail as before (see slide 23). If m = 10 and CV = σ/μ = 0.2, set τ* = μ + x*σ; the tangency condition then reduces to a single equation in x*, and by trial and error, x* = 2.65.
Setting τ to maximize throughput (cont-2). And if τ* = μ + 2.65σ = 1.53μ, then Q(τ*) = Φ(2.65)^10 ≈ 0.96 and the net throughput is Q(τ*)/τ* ≈ 0.63/μ. Throughput is less than 1/μ because (1) quality is not 100%, and (2) we have to build in slack time to absorb the variability. Note that τ* is actually a minimum value for τ: if τ > τ*, throughput will decrease while quality improves, but if τ < τ*, both quality and net throughput decrease!
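A sketch of the trial-and-error search as a direct numerical maximization of Q(τ)/τ. It uses the exact normal Q(τ) rather than the text's tail approximation, so it lands near, not exactly at, x* = 2.65; the task-time parameters mirror the example (CV = 0.2).

```python
from statistics import NormalDist

def net_rate(x, m, mu, sigma):
    """Net rate of good product Q(tau)/tau at tau = mu + x*sigma, using the exact
    normal Q(tau) = F(tau)^m rather than the right-tail approximation."""
    tau = mu + x * sigma
    return NormalDist(mu, sigma).cdf(tau) ** m / tau

m, mu, sigma = 10, 1.0, 0.2                  # CV = sigma/mu = 0.2, as in the example
xs = [i / 100 for i in range(100, 500)]      # search x over 1.00 .. 4.99
x_star = max(xs, key=lambda x: net_rate(x, m, mu, sigma))
print(x_star, net_rate(x_star, m, mu, sigma))
# The exact-normal search lands near the text's x* = 2.65, with a net rate of
# roughly 0.63/mu -- below 1/mu because of slack time and imperfect quality.
```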