OUTLINE • Questions? • Did you hear or notice anything since the last class that has something to do with this class? • Go over homework • Continue scheduling with heuristics • Quiz
Heuristics • We have covered the exact algorithms available to us • Since most problems are too large or complex for partial, let alone complete, enumeration, we need to find other methods • We could resort to guesswork or random assignment - either would be better than doing nothing • The next best thing is to come up with ideas that lead to reasonably “good” schedules, or at least ones somewhat better than random • Such reasonably good scheduling methods are called heuristics
Heuristics (continued) • As a start, we will define a routine that generates an active schedule • A semiactive schedule is one that starts every operation as soon as it can, while obeying the technological and scheduling sequences. The set of all semiactive schedules for a problem contains the optimal schedule • Fortunately, the set of active schedules also contains the optimum and is a smaller set • We can therefore forget about generating semiactive schedules
Active scheduling • For a given problem there will be many active schedules • The routine we will use generates only one and we will have to make frequent choices. Were we to follow each of these decision paths, we would generate all the active schedules and find the optimum • However, our purpose here is to make those choices as intelligently as possible, even though it is difficult to foresee their eventual consequence • An active schedule is one in which no operation could be started earlier without delaying another operation or violating the technological constraints
Definitions • First we will define some terminology useful for our routine: • Class of problems - n/m/G/B with no restrictions • Stage - step in the routine that places an operation into the schedule - there are therefore nm stages • t - counter for stages • Pt - partial schedule at stage t • Schedulable operation - an operation with all its predecessors in Pt • St - set of schedulable operations at stage t
Definitions (continued) • sigma_k - the earliest time an operation o_k in S_t could be started • phi_k - the earliest time that o_k in S_t could be finished • phi_k = sigma_k + p_k, where p_k is the processing time of o_k
Routine by Giffler and Thompson • 1. t = 1; S_1 is the set of first operations of all jobs • 2. Find min{phi_k : o_k in S_t} and designate it phi* • Designate the machine M on which phi* occurs as M* (ties may be broken arbitrarily) • 3. Choose o_j in S_t such that it satisfies both conditions: • a. It uses M* • b. sigma_j < phi* • 4. a. Add o_j to P_t, which now becomes P_t+1 • b. Delete o_j from S_t, which now becomes S_t+1 • c. Add the operation that follows o_j in the same job to S_t+1 • d. Increment t by 1
Routine by Giffler and Thompson (continued) • 5. If there are operations left to schedule, go to step 2; else stop • Note well that at step 3b (sigma_j < phi*) we will often have several choices. We always have at least one, namely the operation that attains phi* • These choices are an extensive topic that we will cover later • Follow the example I have taken from French • Generating these schedules is tedious work, so leave yourselves some extra time for that homework
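The five steps above can be sketched in code. This is a minimal Python sketch, not the canonical implementation: the data layout (each job as a list of (machine, processing time) pairs, with processing times assumed positive) and the tie-breaking rule `choose` (here simply "take the first candidate") are my own assumptions.

```python
def giffler_thompson(jobs, choose=lambda cands: cands[0]):
    """Generate one active schedule for a job shop.

    jobs: list of jobs; each job is a list of (machine, proc_time) operations.
    choose: the step-3 selection rule among candidates (assumed: take the first).
    Returns a list of (job, op_index, machine, start, finish) tuples.
    """
    next_op = [0] * len(jobs)        # index of each job's next unscheduled operation
    job_ready = [0] * len(jobs)      # finish time of each job's last scheduled op
    mach_ready = {}                  # finish time of the last op on each machine
    schedule = []
    remaining = sum(len(ops) for ops in jobs)
    while remaining:                 # step 5: loop until nothing is left
        # Build S_t: schedulable operations with their sigma (start) and phi (finish)
        S = []
        for j, ops in enumerate(jobs):
            k = next_op[j]
            if k < len(ops):
                m, p = ops[k]
                sigma = max(job_ready[j], mach_ready.get(m, 0))
                S.append((j, k, m, sigma, sigma + p))
        # Step 2: phi* = min finish time; M* = a machine on which phi* occurs
        phi_star = min(op[4] for op in S)
        m_star = next(op[2] for op in S if op[4] == phi_star)
        # Step 3: candidates on M* with sigma < phi* (nonempty when p > 0)
        cands = [op for op in S if op[2] == m_star and op[3] < phi_star]
        j, k, m, sigma, phi = choose(cands)
        # Step 4: add to the partial schedule and advance the counters
        schedule.append((j, k, m, sigma, phi))
        job_ready[j] = phi
        mach_ready[m] = phi
        next_op[j] += 1
        remaining -= 1
    return schedule
```

With a two-job, two-machine instance such as `[[(0, 3), (1, 2)], [(0, 2), (1, 4)]]`, the routine places all four operations and yields one active schedule; following every choice in `choose` instead would enumerate them all.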
Non-delay schedules • Non-delay schedules are a smaller set than the active schedules and therefore are a tempting set to explore • Unfortunately, they do not always contain the optimum • We will not let that deter us, because non-delay schedules have been found to be usually very good, if not optimal • A non-delay schedule is one where every operation is started as soon as it can be
Non-delay schedules (continued) • We change two steps in the procedure for active schedules to obtain a non-delay procedure: • Step 2: instead of phi, we select sigma • Find min{sigma_k : o_k in S_t} and designate it sigma* • Designate the machine M on which sigma* occurs as M* (ties may be broken arbitrarily) • Step 3b: sigma_j = sigma*
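The two changed steps can be isolated in a single helper that, given the schedulable set S_t, returns the step-3 candidates under either rule. A small sketch (the tuple layout `(op_id, machine, sigma, phi)` is my own assumption):

```python
def candidates(S, non_delay=False):
    """Steps 2-3 of the generation routine on a schedulable set S.

    S: list of (op_id, machine, sigma, phi) tuples.
    Active rule:    M* comes from min phi; keep ops on M* with sigma < phi*.
    Non-delay rule: M* comes from min sigma; keep ops on M* with sigma == sigma*.
    """
    if non_delay:
        sigma_star = min(s for _, _, s, _ in S)
        m_star = next(m for _, m, s, _ in S if s == sigma_star)
        return [op for op in S if op[1] == m_star and op[2] == sigma_star]
    phi_star = min(f for _, _, _, f in S)
    m_star = next(m for _, m, _, f in S if f == phi_star)
    return [op for op in S if op[1] == m_star and op[2] < phi_star]
```

Note how the non-delay rule is stricter: an operation must start exactly at sigma*, so its candidate set is never larger than the active one on the same machine - which is why non-delay schedules form the smaller set.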
Selection, priority, dispatching rules • These are elementary algorithms and rules of thumb that guide us in making selections when there are choices in step 3b in the procedures for active and non-delay schedules • Rules based on Flow time: • SPT - Shortest processing time • LWKR - Least work remaining • LOPNR - Least number of operations remaining
Selection, priority, dispatching rules (continued) • Rules based on avoiding congestion: • LPT - Longest processing time • MOPNR - Most operations remaining • FCFS - First come first served • Rules based on due dates: • EDD - Earliest due date of Job • LS - Least slack (based on job due date) • EDDOP - Earliest due date of operation • Random selection
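Most of these rules reduce to "pick the candidate with the smallest value of some key." A sketch of that idea, where the dictionary field names (`p`, `work_left`, `ops_left`, `arrival`, `due`, `slack`) are hypothetical names I chose for illustration:

```python
# Each rule maps a candidate operation to a priority key; the scheduler
# then picks the candidate with the SMALLEST key. Assumed fields:
#   p         - processing time of the operation
#   work_left - total remaining work of the job
#   ops_left  - number of operations remaining in the job
#   arrival   - time the operation became available
#   due       - due date of the job
#   slack     - due date minus current time minus remaining work
RULES = {
    "SPT":    lambda op: op["p"],           # shortest processing time
    "LPT":    lambda op: -op["p"],          # longest processing time
    "LWKR":   lambda op: op["work_left"],   # least work remaining
    "LOPNR":  lambda op: op["ops_left"],    # least operations remaining
    "MOPNR":  lambda op: -op["ops_left"],   # most operations remaining
    "FCFS":   lambda op: op["arrival"],     # first come, first served
    "EDD":    lambda op: op["due"],         # earliest due date of job
    "LS":     lambda op: op["slack"],       # least slack
}

def pick(cands, rule):
    """Resolve a step-3b choice with the named dispatching rule."""
    return min(cands, key=RULES[rule])
```

Congestion-avoiding rules (LPT, MOPNR) simply negate the key of their flow-time counterparts, which is why they appear in mirrored pairs.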
Theory of constraints or OPT procedure • Basis - schedule the bottleneck, then everything around it • 1. Determine the bottleneck • 2. Schedule the bottleneck • 3. Schedule back from the bottleneck • 4. Schedule forward from the bottleneck
Theory of constraints or OPT procedure - hypotheses • Works best when there is a single strong and stable bottleneck • Myopic (local) rules perform worst when local priorities differ from a strong bottleneck’s priorities
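Step 1 of the procedure needs a way to flag the bottleneck. A common proxy, sketched below under the assumption of equal machine capacities (the data layout of (machine, time) pairs is mine), is the machine carrying the largest total workload:

```python
def find_bottleneck(jobs):
    """Step 1: flag the machine with the largest total processing load.

    jobs: list of jobs, each a list of (machine, proc_time) operations.
    Assumes all machines have equal capacity, so total load is a fair proxy.
    """
    load = {}
    for ops in jobs:
        for machine, p in ops:
            load[machine] = load.get(machine, 0) + p
    return max(load, key=load.get)
```

Once the bottleneck is known, steps 2-4 schedule it first, then propagate backward (its predecessors) and forward (its successors) around that fixed sequence.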
Monte Carlo Technique • Generate a schedule many times, each time making the required choices at random • Record the resulting distribution of the performance measure • We can then make a statement regarding the probability of achieving a given performance measure when the schedule is generated randomly • We can also save the best schedule found for use
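A minimal sketch of the idea on a one-machine total-tardiness problem (the instance representation - parallel lists of processing times and due dates - and the fixed seed are my own assumptions):

```python
import random

def total_tardiness(seq, p, d):
    """Total tardiness of a one-machine sequence (p: proc times, d: due dates)."""
    t, total = 0, 0
    for j in seq:
        t += p[j]
        total += max(0, t - d[j])
    return total

def monte_carlo(p, d, trials=1000, seed=0):
    """Sample random sequences; keep the best and record every measure."""
    rng = random.Random(seed)
    jobs = list(range(len(p)))
    best, best_seq, samples = float("inf"), None, []
    for _ in range(trials):
        rng.shuffle(jobs)                    # one randomly generated schedule
        measure = total_tardiness(jobs, p, d)
        samples.append(measure)              # builds the distribution
        if measure < best:
            best, best_seq = measure, jobs[:]
    return best_seq, best, samples
```

The `samples` list is the empirical distribution of the performance measure; `best_seq` is the saved best schedule.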
Weighted Random Selection • This is best explained by an example: • Suppose we have decided to use four different dispatching rules • We now select a weight for each, adding up to 1. For example: • SPT - 0.3 • EDD - 0.4 • LWKR - 0.2 • LOPNR - 0.1 • At each choice, we generate a random number between 0 and 1 and use the rule obtained by:
Weighted Random Selection (continued) • Random number r Use rule • 0 ≤ r < 0.4 EDD • 0.4 ≤ r < 0.7 SPT • 0.7 ≤ r < 0.9 LWKR • 0.9 ≤ r < 1 LOPNR • We need only generate one schedule • However, we can repeat the procedure multiple times to determine the distribution of the performance measure
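The table above is the cumulative-weight method for sampling a discrete distribution. A short sketch (the `rng` parameter only exists so the draw can be made deterministic for testing):

```python
import random

def weighted_rule(weights, rng=random):
    """Pick a dispatching rule with the given probabilities.

    weights: dict rule -> weight, summing to 1, e.g. the slide's
    {"EDD": 0.4, "SPT": 0.3, "LWKR": 0.2, "LOPNR": 0.1}.
    Walks the cumulative weights until the random draw falls inside a band.
    """
    u = rng.random()            # uniform draw in [0, 1)
    cum = 0.0
    for rule, w in weights.items():
        cum += w
        if u < cum:
            return rule
    return rule                 # guard against floating-point rounding at 1.0
```

A draw of 0.45, for instance, passes the EDD band [0, 0.4) and lands in the SPT band [0.4, 0.7), matching the table.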
Neighborhood Searches • A very common heuristic procedure proceeds as follows: • 1. Find a schedule by whatever means - random, modified Johnson, active, non-delay • 2. Calculate the performance measure • 3. Vary the original schedule in a systematic manner (explore the “neighborhood”)* • 4. Recalculate the performance measure and keep the better schedule • *from Pinedo: “Two schedules are neighbors if one can be obtained through a well defined modification of the other” (see pages 345-353, 427, 492)
Neighborhood Searches (continued) • 5. Continue the process until: • a. You have no more time • b. No better schedule is produced • c. You have exhausted the possibilities of your approach • Needless to say, you can select a great variety of approaches to defining what the “neighborhood” is
Neighborhood Searches (continued) • One of the simplest approaches is to use a pairwise exchange • Suppose we have a 4/1//R problem with no known algorithm • Start with a random sequence, e.g., 1324 • Let’s use what is called a single adjacent pairwise exchange • Then the neighborhood consists of: • 3124 1234 1342 • Suppose that the last of these is better than 1324
Neighborhood Searches (continued) • We then explore the neighborhood of 1342: • 3142 1432 1324 etc.
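The adjacent-pairwise-exchange search described above can be sketched in a few lines. The stopping rule used here is option (b) from the procedure - stop when no neighbor improves - and the performance measure is passed in as a function, since the slides leave it open:

```python
def neighbors(seq):
    """All single adjacent pairwise exchanges of a sequence."""
    out = []
    for i in range(len(seq) - 1):
        s = list(seq)
        s[i], s[i + 1] = s[i + 1], s[i]   # swap one adjacent pair
        out.append(tuple(s))
    return out

def local_search(seq, measure):
    """Descend through neighborhoods until no neighbor improves the measure."""
    seq = tuple(seq)
    while True:
        best = min(neighbors(seq), key=measure)
        if measure(best) >= measure(seq):
            return seq                    # local optimum reached
        seq = best                        # explore the better schedule's neighborhood
```

Starting from 1324, `neighbors` reproduces exactly the slide's neighborhood {3124, 1234, 1342}; the search then moves to whichever neighbor scores best and repeats.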
Genetic algorithms • Simulate the process of natural evolution • We start with a population - a set of schedules • We keep the size of the population constant • We generate an “offspring” for each member of the population - some type of exchange • We select the best of the offspring and replace the worst of the previous population with it • We keep repeating until we no longer get an improvement
Genetic algorithms - continued • An example of a T̄ (mean tardiness) problem: • Let’s use a population size of 3, seeded with these three schedules (sum of T is in parentheses) (population total in brackets): • Generation 1: 123456(27), 132456(27), 312456(28) [82] • We create offspring by selecting a random number between 1 and 5 and doing a pairwise exchange at that position (swapping the jobs in that position and the next) • Our first three random numbers: 4, 5, 5
Genetic algorithms - continued • First set of offspring: 123546(26), 132465(26), 312465(27) • The best offspring is a tie between the first and second; we select one at random (#2) and use it to replace the worst member of generation 1 (the third) • Generation 2: 123456(27), 132456(27), 132465(26) [80] • Our second three random numbers: 4, 2, 2 • Second set of offspring: 123546(26), 123456(27), 123465(26) • The best offspring is a tie between the first and third; the first is chosen at random and replaces the worst member of generation 2 (itself a tie between the first and second, broken at random in favor of the second) • Generation 3: 123456(27), 123546(26), 132465(26) [79]
Genetic algorithms - continued • Our third three random numbers: 5, 3, 1 • Third set of offspring: 123465(26), 124365(25), 312465(27) • Offspring number 2 replaces population member number 1 • Generation 4: 124365(25), 123465(26), 132465(26) [77] • Our fourth three random numbers: 3, 4, 2 • Fourth set of offspring: 123465(26), 123645(26), 123465(26) • None are better than our current population • We stop with 124365(25) • Notice that our population as a whole kept improving
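The scheme walked through above - constant population, one offspring per member via an adjacent swap at a random position, best offspring replacing the worst member - can be sketched as below. Two assumptions are mine: the instance is a one-machine total-tardiness problem given as parallel lists, and the loop runs a fixed number of generations rather than detecting "no improvement" as the slides do:

```python
import random

def total_tardiness(seq, p, d):
    """Total tardiness of a one-machine sequence (p: proc times, d: due dates)."""
    t, tot = 0, 0
    for j in seq:
        t += p[j]
        tot += max(0, t - d[j])
    return tot

def genetic(p, d, pop_size=3, generations=50, seed=1):
    """Constant-size population; adjacent-swap offspring; best replaces worst."""
    rng = random.Random(seed)
    n = len(p)
    cost = lambda s: total_tardiness(s, p, d)
    pop = [rng.sample(range(n), n) for _ in range(pop_size)]  # random seed schedules
    for _ in range(generations):
        offspring = []
        for s in pop:
            i = rng.randrange(n - 1)          # swap position (0-indexed)
            child = s[:]
            child[i], child[i + 1] = child[i + 1], child[i]
            offspring.append(child)
        best_child = min(offspring, key=cost)
        worst = max(range(pop_size), key=lambda k: cost(pop[k]))
        if cost(best_child) < cost(pop[worst]):
            pop[worst] = best_child           # population improves or stays put
    return min(pop, key=cost)
```

Because a replacement happens only when the best offspring beats the worst member, the population total can never worsen - the same monotone improvement seen in the [82], [80], [79], [77] sequence above.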