Introduction to Non-traditional Optimization Techniques

Introduction
Deterministic search techniques will ensure that the global optimal solution of an optimization problem is found, but these techniques will not work when a complex, real-life scenario is considered. When an optimal design problem contains multiple global or near-global optimal solutions, designers are interested in finding not just one global optimal solution but as many as possible, for various reasons. First, a design suitable in one situation may not be valid in another. Second, designers may be interested in a solution that corresponds to a marginally inferior objective function value but is more amenable to fabrication. Thus, it is always desirable to know about other equally good solutions for later use. In addition, a deterministic search technique may consume a great deal of computational time to obtain a near-global optimal solution, since many initial points must be considered and the NLP must be run many times. In practical scenarios, where the search space is extremely large, it is inconceivable that today's computing power can cope with finding these solutions within acceptable computation time if deterministic search techniques are used. Researchers in the area of optimization have recently de-emphasized using deterministic search techniques to achieve convergence to the optimum solution of "hard" optimization problems encountered in practical settings. The emphasis now is on getting a good enough solution within a reasonable time, in order to achieve the desirable outcomes of an enterprise.
Modern researchers in optimization have now resorted to using good approximation search techniques for solving optimization problems, because these often find near-optimal solutions within reasonable time. An engineer may be satisfied with a solution to a "hard" optimization problem that is about 98 % of the optimal (best) solution, provided the time taken is reasonable enough that the customers are satisfied.

Stochastic optimization techniques
There are two classes of approximation search techniques: tailored and general purpose. A tailored, customized approximation search technique for a particular optimization problem may often find an optimal (best) solution very quickly. However, tailored techniques fall short when compared with general-purpose ones in that they cannot be flexibly applied, even to slightly similar problems. Stochastic optimization falls within the spectrum of general-purpose approximation search techniques. Stochastic optimization involves optimizing a system where the functional relationship between the independent input variables and the output (objective function) of the system is not known a priori. Hard and discrete function optimization, in which the optimization surface or fitness landscape is rugged and possesses many locally optimal solutions, is well suited to stochastic optimization techniques. Stochastic optimization techniques play an increasingly important role in the analysis and control of modern systems as a way of coping with inherent system noise and providing algorithms that are relatively insensitive to modeling uncertainty. Their general form follows the algorithm shown below:
1. Begin: Generate and evaluate an initial collection of candidate solutions S
2. Operate: Produce and evaluate a collection of potential candidate solutions S' by operating on (or making randomized changes to) selected members of S
3. Replace: Replace some of the members of S with some of the members of S', and then return to 2 (unless some termination criterion has been reached)

The existing successful methods in stochastic iterative optimization fall into two broad classes: local search and population-based search.

Local search

In local search, a single current solution is maintained, and its neighbours are explored to find better-quality solutions. The neighbours are new candidates that are only slightly different from the current solution. Occasionally, one of these neighbours becomes the new current solution, and then its neighbours are explored, and so forth. The simplest example of a local search method is called stochastic hill climbing, which is illustrated below:

1. Begin: Generate and evaluate an initial current solution S
2. Operate: Make randomized changes to S, producing S', and evaluate S'
3. Replace: Replace S with S' if S' is better than S
4. Continue: Return to 2 (unless some termination criterion has been reached)
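The stochastic hill climbing steps above can be sketched in a few lines of Python. The objective here (a one-dimensional function to maximize) and the neighbourhood operator (a small uniform random perturbation) are illustrative assumptions, not part of the text:

```python
import random

def stochastic_hill_climb(objective, x0, step=0.1, iterations=1000, seed=0):
    """Maximize `objective` by repeatedly trying small randomized changes."""
    rng = random.Random(seed)
    s, fs = x0, objective(x0)                  # Begin: current solution S and its value
    for _ in range(iterations):                # termination criterion: fixed budget
        s_new = s + rng.uniform(-step, step)   # Operate: randomized change produces S'
        f_new = objective(s_new)
        if f_new > fs:                         # Replace: keep S' only if it is better
            s, fs = s_new, f_new
    return s, fs

# Example: maximize f(x) = -(x - 2)^2, whose single peak is at x = 2
best_x, best_f = stochastic_hill_climb(lambda x: -(x - 2.0) ** 2, x0=0.0)
```

Because this problem has a single peak, the climber reliably approaches x = 2; on a rugged landscape the same loop would stop at whichever local peak it reaches first.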
More sophisticated local search methods improve on stochastic hill climbing, in different ways, by being more careful or clever in how new candidates are generated. In simulated annealing, for example, the difference is basically in step 3: sometimes we accept S' as the new current solution even though it is worse than S. Without this facility, hill climbing is prone to getting stuck at local peaks. These are solutions whose neighbours are all worse than the current solution, or plateau areas of the solution space where there is considerable scope for movement between solutions of equal goodness, but where very few or none of these solutions has a better neighbour. With this feature, simulated annealing and other local search methods can sometimes (but certainly not always) avoid such difficulties.

Population based search

In population-based search, the notion of a single current solution is replaced by the maintenance of a population, or collection, of (usually) different current solutions. Members of this population are first selected to be current candidate solutions used to produce new candidate solutions. Since there is now a collection of current solutions, rather than just one, we can exploit this fact by using two or more members of the collection at once as the basis for generating new candidates. Moreover, the introduction of a population brings with it further opportunities and issues, such as using some strategy or other to select which solutions will be current candidate solutions. Also, when one or more new candidate solutions have been produced, we need a strategy for maintaining the population. That is, assuming that we wish to maintain the population at a constant size (which is almost always the case), some of the population must be discarded in order to make way for some or all of the new candidates produced via mutation or recombination operations. Whereas the selection strategy encompasses the details of how to choose candidates from the population itself, the population maintenance strategy affects what is present in the population and what is therefore selectable. The following steps illustrate the population-based search method:

1. Begin: Initialize and evaluate solution set S(0)
2. Select: Select S(t) from S(t-1)
3. Generate: Generate a new solution set S'(t) through genetic operators
4. Evaluate: Evaluate solutions S'(t) and replace S(t) with S'(t)
5. Continue: Return to 2 (unless some termination criterion has been reached)
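The five steps above can be sketched as a minimal genetic-algorithm-style loop. The bitstring encoding, the toy "count the ones" fitness, and the choice of binary tournament selection with one-point crossover and point mutation are all illustrative assumptions:

```python
import random

def one_max(bits):
    """Illustrative toy fitness: the number of 1-bits in the solution."""
    return sum(bits)

def population_search(n_bits=20, pop_size=10, generations=100, seed=1):
    rng = random.Random(seed)
    # Begin: initialize and evaluate the population S(0)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):                 # Continue until the budget is spent
        def select():
            # Select: binary tournament favours the fitter of two random members
            a, b = rng.sample(pop, 2)
            return a if one_max(a) >= one_max(b) else b
        new_pop = []
        for _ in range(pop_size):
            # Generate: genetic operators (crossover + mutation) build S'(t)
            p1, p2 = select(), select()
            cut = rng.randrange(1, n_bits)       # one-point crossover
            child = p1[:cut] + p2[cut:]
            i = rng.randrange(n_bits)            # point mutation flips one bit
            child[i] = 1 - child[i]
            new_pop.append(child)
        # Evaluate/Replace: S'(t) wholly replaces S(t) (generational maintenance)
        pop = new_pop
    return max(pop, key=one_max)

best = population_search()
```

Replacing the whole population each generation is only one maintenance strategy; keeping the best members (elitism) or replacing only the worst are common alternatives.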
There are almost as many population-based optimization algorithms and variants as there are ways to handle the above issues. Commonly known evolutionary algorithms (evolution strategies, evolutionary programming and genetic algorithms, for example) have the key characteristics of population-based algorithms.

Framework for well-established optimization techniques
In random search, information from previous solutions is not used at all. Consequently, the performance of random search is very poor. In current, well-established optimization methods, however, information gleaned from previously seen candidate solutions is used to guide the generation of new ones. Gathering intelligence about previous events is a powerful tool in devising strategies for seeking solutions to related current events. This concept of using intelligence gathered from past events to guide solutions to a current event is now applied in economics, the military and elsewhere, and also in new and novel optimization methods. Most of the ideas discussed so far can be summarized in three categories, which are used in well-established optimization methods:

Category 1: New candidate solutions are slight variations of previously generated solutions.
Category 2: New candidate solutions are generated when aspects of two or more existing candidates are recombined.
Category 3: The current candidate solutions from which new candidates are produced (in Category 1 or 2) are selected from previously generated candidates via a stochastic and competitive strategy, which favours better-performing candidates.
In genetic algorithms, for example, the slight variations mentioned in Category 1 are achieved by so-called neighbourhood or mutation operators. Suppose we are trying to solve a flowshop scheduling problem and we encode a solution as a string of numbers such that the jth number represents the order of job j in the sequence. We can simply use a mutation operation that changes a randomly chosen number in the list to a new, randomly chosen value. The "mutant" flowshop schedule is thus a slight variation of the current candidate flowshop schedule. In tabu search, for example, the slight variation mentioned in Category 1 is achieved by so-called neighbourhood (also called move) operators. Suppose we are addressing an instance of the travelling salesman problem; then we would typically encode a candidate solution as a permutation of the cities (or points) to be visited. Our neighbourhood operator in this case might simply swap the positions of a randomly chosen pair of cities. The recombination mentioned in Category 2, which produces new candidates from two or more existing ones, is achieved via crossover, recombination or merge operators. Some operators involve more sophisticated strategies in which, perhaps, two or more current candidate solutions are the inputs to a self-contained algorithmic search process that outputs one or more new candidate solutions. The randomized but competitive strategy mentioned in Category 3 is used to decide which solutions become current candidate solutions. The strategy is usually that the fitter a candidate is, the more likely it is to be selected as a current candidate solution, and thus to be the (at least partial) basis of new candidate solutions. Several stochastic optimization tools have been enhanced in the last decade, which facilitates solving optimization problems that were previously difficult or impossible to solve.
These stochastic optimization tools, which include evolutionary computation, simulated annealing, tabu search, genetic algorithms and so on, all use at least one of these key ideas. Reports of applications of each of these tools have been widely published. Recently, these new heuristic tools have been combined among themselves and with knowledge elements, as well as with more traditional approaches such as statistical analysis (hybridization), to solve extremely challenging problems.
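The neighbourhood and recombination operators described earlier can be made concrete. The tour encoding and function names below are illustrative assumptions; note that the naive one-point crossover shown is suitable for value-list encodings but is not permutation-safe for tours:

```python
import random

def swap_mutation(tour, rng):
    """Category 1 move: swap the positions of two randomly chosen cities."""
    i, j = rng.sample(range(len(tour)), 2)   # two distinct positions
    neighbour = tour[:]                      # copy so the original tour is kept
    neighbour[i], neighbour[j] = neighbour[j], neighbour[i]
    return neighbour

def one_point_crossover(parent1, parent2, rng):
    """Category 2 recombination for list-encoded solutions.

    Not permutation-safe: applied to tours it may duplicate cities, which is
    why TSP-style encodings use specialised operators such as order crossover.
    """
    cut = rng.randrange(1, len(parent1))
    return parent1[:cut] + parent2[cut:]

rng = random.Random(42)
tour = [0, 1, 2, 3, 4]
neighbour = swap_mutation(tour, rng)         # a slight variation of the tour
```

Because the swap exchanges two distinct positions, the neighbour is always a valid permutation that differs from the current tour in exactly two places.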