MAE 552 Heuristic Optimization
Instructor: John Eddy
Lecture #35, 4/26/02: Multi-Objective Optimization
Multi-Objective Optimization References:
1. Das, I., and Dennis, J., "A Closer Look at Drawbacks of Minimizing Weighted Sums of Objectives for Pareto Set Generation in Multicriteria Optimization Problems." Available through http://ublib.buffalo.edu/.
2. Chen, W., Wiecek, M., and Zhang, J., "Quality Utility – A Compromise Programming Approach to Robust Design," ASME Journal of Mechanical Design, 1999, Vol. 121, pp. 179-187.
Multi-Objective Optimization • All of the problems that we have considered in this class, as well as in MAE 550, have consisted of a single objective function with perhaps multiple constraints and design variables:
Minimize: F(x)
Subject to: gj(x) ≤ 0, j = 1, ..., m
hk(x) = 0, k = 1, ..., p
Multi-Objective Optimization
[Figure: a one-dimensional performance space along the axis F, with the optimum marked at the desired extreme.]
In such a case, the problem has a one-dimensional performance space and the optimum point is the one that is the furthest toward the desired extreme.
Multi-Objective Optimization • What happens when it is necessary (or at least desirable) to optimize with respect to more than one criterion? • Now we have additional dimensions in our performance space, and we are seeking the best we can get in all dimensions simultaneously; the formulation is written out below. • What does "best in all dimensions" actually mean?
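Written out explicitly (a standard statement of the multi-objective problem, not copied verbatim from the slides), the formulation above generalizes to:
Minimize: { F1(x), F2(x), ..., Fk(x) }
Subject to: gj(x) ≤ 0, j = 1, ..., m
hk(x) = 0, k = 1, ..., p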
Multi-Objective Optimization Consider the following 2-D performance space:
[Figure: a performance space with axes F1 and F2, both to be minimized; a single point is marked as the optimum.]
Multi-Objective Optimization But what happens in a case like this:
[Figure: the same F1-F2 performance space, both objectives minimized, but now two different points are each labeled "Optimum?".]
Multi-Objective Optimization The one on the left is better with respect to F1 but worse with respect to F2, and the one on the right is better with respect to F2 but worse with respect to F1. How does one wind up in such peril?
Multi-Objective Optimization That depends on the relationships that exist between the various objectives. There are 3 possible interactions that may exist between objectives in a multi-objective optimization problem: • Cooperation • Competition • No Relationship
Multi-Objective Optimization What defines a relationship between objectives? How can I recognize that two objectives have any relationship at all? The relationship between two objectives is defined by the variables that they have in common. Two objectives will fight for control of common design variables throughout a multi-objective design optimization process.
Multi-Objective Optimization Just how vicious the fight is depends on what type of interaction exists (of the 3 we mentioned). Let’s consider the 1st case of cooperation. Two objectives are said to “cooperate” if they both wish to drive all their common variables in the same direction (pretty much all the time). In such a case, betterment of one objective typically accompanies betterment of the other.
Multi-Objective Optimization In such a case, the optimum is a single point (or collection of equally desirable points) like in our first performance plot.
[Figure: the F1-F2 performance space, both objectives minimized, with a single optimum point.]
Multi-Objective Optimization Now let’s consider the 2nd case of competition. Two objectives are said to “compete” if they wish to drive at least some of their common variables in different directions. In such a case, betterment of one objective typically comes at the expense of the other. This is the most interesting case.
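As a small illustration (these toy functions are my own invention, not from the lecture), two objectives that share a single design variable x can compete very directly: f1 pulls x toward 0 while f2 pulls x toward 2, so every x between them is a compromise.

# Toy illustration (not from the lecture): two objectives competing
# over the shared design variable x.
def f1(x):
    return x ** 2            # minimized by driving x toward 0

def f2(x):
    return (x - 2.0) ** 2    # minimized by driving x toward 2

# For x between 0 and 2, improving one objective worsens the other.
for x in [0.0, 0.5, 1.0, 1.5, 2.0]:
    print(f"x = {x:3.1f}   f1 = {f1(x):4.2f}   f2 = {f2(x):4.2f}")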
Multi-Objective Optimization In such a case, the optimum is no longer a single point but a collection of points called the Pareto Set. It is named for Vilfredo Pareto (1848-1923), an Italian economist and sociologist who established the concept now known as "Pareto Optimality".
Multi-Objective Optimization • Pareto optimality - • An optimality criterion for optimization problems with multiple objectives. A state (set of parameters) is said to be Pareto optimal if no other state dominates it with respect to the set of objective functions. • State A dominates state B if A is better than B in at least one objective function and no worse with respect to all of the others (a minimal code sketch of this test follows).
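A minimal sketch of that dominance test, plus a brute-force non-dominated filter, assuming every objective is to be minimized:

# Pareto dominance for minimization of every objective.
def dominates(a, b):
    """True if performance vector a dominates b: no worse in every
    objective and strictly better in at least one."""
    return (all(ai <= bi for ai, bi in zip(a, b))
            and any(ai < bi for ai, bi in zip(a, b)))

def non_dominated(points):
    """Return the points that no other point in the list dominates."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

# (1, 5) and (3, 2) trade off against each other; (4, 6) is dominated by both.
print(non_dominated([(1, 5), (3, 2), (4, 6)]))   # -> [(1, 5), (3, 2)]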
Multi-Objective Optimization So let's take a look at this:
[Figure: a set of points in the F1-F2 performance space, both objectives minimized; the non-dominated points among them form the Pareto set.]
Multi-Objective Optimization For completeness, we will now consider the case in which there is no relationship between two objectives. When do you think such a thing might occur? Clearly, this only occurs when the two objectives have no design variables in common (each is a function of a different subset of the design variables, and the two subsets have an empty intersection).
Multi-Objective Optimization In such a case, we are free to optimize each function individually to determine our optimal design configuration. That is why this case is desirable but uninteresting. So back to competing objectives.
Multi-Objective Optimization Now that we know what we are looking for, that is, the set of non-dominated designs, how are we going to go about generating it? The most common way to generate points along a Pareto frontier is to use a weighted sum approach. Consider the following example:
Multi-Objective Optimization Suppose I wish to minimize both of the following functions simultaneously:
F1 = 750x1 + 60(25 - x1)x2 + 45(25 - x1)(25 - x2)
F2 = (25 - x1)x2
For the typical weighted sum approach, I would assign a weight to each function such that:
w1 + w2 = 1, with w1, w2 ≥ 0
Multi-Objective Optimization I would then combine the two functions into a single objective as follows and solve:
Minimize: F = w1 F1 + w2 F2
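A minimal sketch of one such weighted-sum solve for the example functions, using scipy.optimize.minimize; the bounds 0 ≤ x1, x2 ≤ 25 and the starting point are illustrative assumptions, since the slides do not state them:

# One weighted-sum solve of the example problem.
# Assumed (not from the lecture): bounds 0 <= x1, x2 <= 25 and the start point.
from scipy.optimize import minimize

def F1(x):
    x1, x2 = x
    return 750*x1 + 60*(25 - x1)*x2 + 45*(25 - x1)*(25 - x2)

def F2(x):
    x1, x2 = x
    return (25 - x1)*x2

w1, w2 = 0.5, 0.5                                  # weights with w1 + w2 = 1
weighted = lambda x: w1*F1(x) + w2*F2(x)           # the combined objective F

result = minimize(weighted, x0=[12.0, 12.0], bounds=[(0, 25), (0, 25)])
print(result.x, F1(result.x), F2(result.x))        # one candidate Pareto point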
Multi-Objective Optimization The net effect of our weighted sum approach is to convert a multiple objective problem into a single objective problem. But this will only provide us with a single Pareto point. How will we go about finding other Pareto points? By altering the weights and solving again.
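A sketch of that process, sweeping the weights under the same assumed bounds and starting point as above:

# Sweep the weights to collect a family of candidate Pareto points.
import numpy as np
from scipy.optimize import minimize

F1 = lambda x: 750*x[0] + 60*(25 - x[0])*x[1] + 45*(25 - x[0])*(25 - x[1])
F2 = lambda x: (25 - x[0])*x[1]

points = []
for w1 in np.linspace(0.0, 1.0, 11):               # 11 weight combinations
    w2 = 1.0 - w1
    res = minimize(lambda x: w1*F1(x) + w2*F2(x),
                   x0=[12.0, 12.0], bounds=[(0, 25), (0, 25)])
    points.append((round(F1(res.x), 1), round(F2(res.x), 1)))

print(points)   # several weight settings may collapse onto the same point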
Multi-Objective Optimization As mentioned, such schemes are very common in multi-objective optimization. In fact, in a paper published in 1997, Das and Dennis (reference 1) made the claim that all common methods of generating Pareto points involve the repeated conversion of a multi-objective problem into a single-objective problem, which is then solved.
Multi-Objective Optimization Ok, so I march up and down my weights, generate Pareto points, and end up with a good representation of my set, right? Unfortunately not. As it turns out, it is seldom this easy. There are a number of pitfalls associated with using weighted sums to generate Pareto points.
Multi-Objective Optimization Some of those pitfalls are: • Inability to generate points in non-convex portions of the frontier • Inability to generate a uniform sampling of the frontier • A non-intuitive relationship between the combination parameters (the weights) and the resulting performances • Poor efficiency (it can require an excessive number of function evaluations)
Multi-Objective Optimization Let’s consider the 1st pitfall: What is a non-convex portion of the frontier? I assume you are all familiar with the concept of convexity so let’s move on to a pictorial.
Multi-Objective Optimization
[Figure: the F1-F2 Pareto frontier, both objectives minimized, with an indented region highlighted.]
This is a non-convex region of the frontier.
Multi-Objective Optimization Ok, so why do weighted sum approaches have difficulty finding these points? As discussed in reference 1, choosing the weights in the manner that we have can be shown to be equivalent to rotating the performance axes by an angle determined by the weights and then translating the rotated axes until they touch the frontier. The effect of this on a convex frontier can be visualized as follows.
Multi-Objective Optimization
[Figure/animation: the rotated axes (orientation set by the weights) translating toward a convex F1-F2 frontier, both objectives minimized, and touching it at a single Pareto point.]
Multi-Objective Optimization So I think that you can see already what is going to happen when the frontier is not convex. Consider the following animation.
Multi-Objective Optimization
[Figure/animation: the same construction applied to a non-convex frontier; the rotated axes touch only the convex portions and skip the points in the indented region.]
Multi-Objective Optimization So we missed all the points in the non-convex region. This also demonstrates one reason why we may not get a uniform sampling of the Pareto frontier. As it turns out, a uniform sampling is only possible in this way for a Pareto set having a very specific shape. So not even all convex Pareto sets can be sampled uniformly in this fashion. You can read more about this in reference 1.
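Stated informally (a sketch of the geometric argument, not the full development in reference 1): for fixed weights w1, w2 ≥ 0, the weighted-sum solve returns the point of the attainable performance set where the line w1 F1 + w2 F2 = c first touches the set as c is increased. Any such first-contact point lies on the convex hull of the attainable set, so a point sitting inside an indentation of the frontier can never be the minimizer for any choice of weights, and evenly spaced weights generally do not produce evenly spaced contact points.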
Multi-Objective Optimization Clearly, if we cannot generate a uniform sampling and we cannot find non-convex regions, then the relationship between changes in the weights and motion along the frontier is non-intuitive. Finally, since each combination of weights requires a complete optimization of our system, you can see how this approach may require a very large number of system evaluations.