# Optimization (mathematics)

In mathematics, the term optimization, or mathematical programming, refers to the study of problems in which one seeks to minimize or maximize a real function by systematically choosing the values of real or integer variables from within an allowed set. This problem can be represented in the following way:

Given: a function f : A → R from some set A to the real numbers
Sought: an element x0 in A such that f(x0) ≤ f(x) for all x in A ("minimization") or such that f(x0) ≥ f(x) for all x in A ("maximization").

Such a formulation is called an optimization problem or a mathematical programming problem (a term not directly related to computer programming, but still in use, for example, in linear programming; see History below). Many real-world and theoretical problems may be modeled in this general framework. Problems formulated using this technique in the fields of physics and computer vision may refer to the technique as energy minimization, speaking of the value of the function f as representing the energy of the system being modeled.

Typically, A is some subset of the Euclidean space R^n, often specified by a set of constraints, equalities or inequalities that the members of A have to satisfy. The elements of A are called feasible solutions. The function f is called an objective function, or cost function. A feasible solution that minimizes (or maximizes, if that is the goal) the objective function is called an optimal solution.

The domain A of f is called the search space, while the elements of A are called candidate solutions or feasible solutions.

Generally, when the feasible region or the objective function of the problem is not convex, there may be several local minima and maxima, where a local minimum x* is defined as a point for which there exists some δ > 0 so that for all x such that

$\|\mathbf{x}-\mathbf{x}^*\| \leq \delta$;

the expression

$f(\mathbf{x}^*) \leq f(\mathbf{x})$

holds; that is to say, on some region around x* all of the function values are greater than or equal to the value at that point. Local maxima are defined similarly.
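To make the distinction concrete, here is a small, self-contained Python sketch (the function x⁴ − 3x² + x, the step size, and the starting points are invented for illustration): gradient descent started from different points converges to different local minima of a non-convex function, and only one of them is the global minimum.

```python
def f(x):
    # A non-convex function with two local minima.
    return x**4 - 3 * x**2 + x

def df(x):
    # Its derivative.
    return 4 * x**3 - 6 * x + 1

def gradient_descent(x, lr=0.01, steps=2000):
    # Follow the negative gradient; this only guarantees a *local* minimum.
    for _ in range(steps):
        x -= lr * df(x)
    return x

x_right = gradient_descent(1.0)   # settles in the local minimum near x ≈ 1.13
x_left = gradient_descent(-1.0)   # settles in the global minimum near x ≈ -1.30
print(x_left, x_right, f(x_left), f(x_right))
```

Both endpoints satisfy the local-minimum definition above, but f takes a strictly smaller value at the left one.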

A large number of algorithms proposed for solving non-convex problems, including the majority of commercially available solvers, are not capable of distinguishing between locally optimal solutions and globally optimal solutions, and will treat the former as actual solutions to the original problem. The branch of applied mathematics and numerical analysis that is concerned with the development of deterministic algorithms that are capable of guaranteeing convergence in finite time to the actual optimal solution of a non-convex problem is called global optimization.

## Notation

Optimization problems are often expressed with special notation. Here are some examples:

$\min_{x \in \mathbb{R}}\; x^2 + 1$

This asks for the minimum value of the objective function x^2 + 1, where x ranges over the real numbers R. The minimum value in this case is 1, occurring at x = 0.
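The claim can be checked numerically; here is a minimal sketch (a coarse grid scan over an arbitrary finite window, purely illustrative):

```python
# Scan x^2 + 1 over a grid of real values and pick the smallest.
xs = [i / 100 for i in range(-1000, 1001)]   # grid on [-10, 10]
best_x = min(xs, key=lambda x: x**2 + 1)
best_value = best_x**2 + 1
print(best_x, best_value)   # minimum value 1, attained at x = 0
```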

$\max_{x \in \mathbb{R}}\; 2x$

This asks for the maximum value of the objective function 2x, where x ranges over the reals. In this case, there is no such maximum as the objective function is unbounded, so the answer is "infinity" or "undefined".

$\operatorname{argmin}_{x \in (-\infty,-1]}\; x^2 + 1$

This asks for the value (or values) of x in the interval (−∞, −1] that minimizes (or minimize) the objective function x^2 + 1 (the actual minimum value of that function does not matter). In this case, the answer is x = −1.
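One standard way to handle such a constraint numerically is projected gradient descent: each gradient step is followed by a projection back onto the feasible set, here simply clipping x to at most −1. A small sketch (step size and starting point are arbitrary illustrative choices):

```python
def objective(x):
    return x**2 + 1

def gradient(x):
    return 2 * x

def projected_gradient_descent(x, lr=0.1, steps=200):
    for _ in range(steps):
        x -= lr * gradient(x)
        x = min(x, -1.0)   # project back onto the feasible set (-inf, -1]
    return x

x_star = projected_gradient_descent(-4.0)
print(x_star)   # -1.0, the constrained minimizer
```

The unconstrained minimizer x = 0 is infeasible, so the iterates are pushed against the boundary and stop at x = −1.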

$\operatorname{argmax}_{x \in [-5,5],\; y \in \mathbb{R}}\; x \cdot \cos(y)$

This asks for the (x, y) pair (or pairs) that maximizes (or maximize) the value of the objective function x·cos(y), with the added constraint that x lies in the interval [−5, 5] (again, the actual maximum value of the expression does not matter). In this case, the solutions are the pairs of the form (5, 2πk) and (−5, (2k + 1)π), where k ranges over all integers.
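A brute-force grid search confirms one of these solutions, the k = 0 pair (5, 0); the grid resolution and the finite y-window are arbitrary illustrative choices:

```python
import math

# Grid search for the maximizer of x*cos(y) with x in [-5, 5].
xs = [i / 10 for i in range(-50, 51)]
ys = [j / 100 for j in range(-700, 701)]   # sample y over [-7, 7]
best = max(((x * math.cos(y), x, y) for x in xs for y in ys))
best_value, best_x, best_y = best
print(best_value, best_x, best_y)
```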

## Major subfields

• Linear programming studies the case in which the objective function f is linear and the set A is specified using only linear equalities and inequalities. Such a set is called a polyhedron or a polytope if it is bounded.
• Integer programming studies linear programs in which some or all variables are constrained to take on integer values.
• Quadratic programming allows the objective function to have quadratic terms, while the set A must be specified with linear equalities and inequalities.
• Nonlinear programming studies the general case in which the objective function or the constraints or both contain nonlinear parts.
• Convex programming studies the case when the objective function is convex and the constraints, if any, form a convex set. This can be viewed as a particular case of nonlinear programming or as a generalization of linear or convex quadratic programming.
• Semidefinite programming (SDP) is a subfield of convex optimization where the underlying variables are semidefinite matrices. It is a generalization of linear and convex quadratic programming.

• Stochastic programming studies the case in which some of the constraints or parameters depend on random variables.
• Robust optimization attempts to find solutions that remain valid under uncertainty in the problem data.
• Combinatorial optimization is concerned with problems where the set of feasible solutions is discrete or can be reduced to a discrete one.
• Infinite-dimensional optimization studies the case when the set of feasible solutions is a subset of an infinite-dimensional space, such as a space of functions.
• Constraint satisfaction studies the case in which the objective function f is constant (this is used in artificial intelligence, particularly in automated reasoning).

In a number of subfields, the techniques are designed primarily for optimization in dynamic contexts (that is, decision making over time):

• Calculus of variations seeks to optimize an objective defined over many points in time, by considering how the objective function changes if there is a small change in the choice path.
• Optimal control theory is a generalization of the calculus of variations.
• Dynamic programming studies the case in which the optimization strategy is based on splitting the problem into smaller subproblems. The equation that relates these subproblems is called the Bellman equation.
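A minimal dynamic programming sketch of the Bellman equation just mentioned (the graph and its edge costs are invented for illustration): for a shortest-path problem, the value of a state is the best one-step cost plus the value of the successor state, V(s) = min over s' of cost(s, s') + V(s'), which memoized recursion solves directly.

```python
from functools import lru_cache

# Edge costs of a small directed acyclic graph.
edges = {
    "A": {"B": 1, "C": 4},
    "B": {"C": 2, "D": 6},
    "C": {"D": 3},
    "D": {},   # goal node
}

@lru_cache(maxsize=None)
def V(node):
    # Bellman equation: V(s) = min over successors of edge cost + V(successor).
    if node == "D":
        return 0
    return min(cost + V(succ) for succ, cost in edges[node].items())

print(V("A"))   # shortest cost from A to D, via A -> B -> C -> D
```

Each subproblem V(node) is solved once and reused, which is exactly the overlapping-subproblems structure dynamic programming exploits.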

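As a small illustration of linear programming, the first subfield listed above, here is a hedged pure-Python sketch (the particular LP is invented for illustration): for a bounded LP in two variables, an optimal solution is attained at a vertex of the feasible polytope, so for a tiny instance we can enumerate the vertices directly instead of calling an LP solver.

```python
# Maximize 3x + 2y subject to x + y <= 4, x <= 2, x >= 0, y >= 0.
# Each constraint is stored as (a, b, c), meaning a*x + b*y <= c.
constraints = [(1, 1, 4), (1, 0, 2), (-1, 0, 0), (0, -1, 0)]

def vertices(cons):
    # Intersect every pair of constraint boundaries; keep the feasible points.
    pts = []
    for i in range(len(cons)):
        for j in range(i + 1, len(cons)):
            a1, b1, c1 = cons[i]
            a2, b2, c2 = cons[j]
            det = a1 * b2 - a2 * b1
            if abs(det) < 1e-12:
                continue  # parallel boundaries, no unique intersection
            x = (c1 * b2 - c2 * b1) / det
            y = (a1 * c2 - a2 * c1) / det
            if all(a * x + b * y <= c + 1e-9 for a, b, c in cons):
                pts.append((x, y))
    return pts

best_x, best_y = max(vertices(constraints), key=lambda p: 3 * p[0] + 2 * p[1])
print(best_x, best_y, 3 * best_x + 2 * best_y)   # optimum 10 at (2, 2)
```

Real LP solvers (the simplex method, interior point methods) scale to thousands of variables; vertex enumeration is only viable for toy instances like this one.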

## Techniques

For twice-differentiable functions, unconstrained problems can be solved by finding the points where the gradient of the objective function is zero (that is, the stationary points) and using the Hessian matrix to classify the type of each point. If the Hessian is positive definite, the point is a local minimum; if negative definite, a local maximum; and if indefinite, it is some kind of saddle point.
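A sketch of this classification in pure Python, on two invented two-variable functions: the Hessian is approximated by finite differences, and for a symmetric 2×2 Hessian a positive determinant with a positive leading entry means positive definite (local minimum), a positive determinant with a negative leading entry means negative definite (local maximum), and a negative determinant means indefinite (saddle point).

```python
def hessian_2d(f, x, y, h=1e-4):
    # Approximate the 2x2 Hessian of f at (x, y) with central differences.
    fxx = (f(x + h, y) - 2 * f(x, y) + f(x - h, y)) / h**2
    fyy = (f(x, y + h) - 2 * f(x, y) + f(x, y - h)) / h**2
    fxy = (f(x + h, y + h) - f(x + h, y - h)
           - f(x - h, y + h) + f(x - h, y - h)) / (4 * h**2)
    return fxx, fxy, fyy

def classify(f, x, y):
    # Classify a stationary point by the definiteness of the Hessian.
    fxx, fxy, fyy = hessian_2d(f, x, y)
    det = fxx * fyy - fxy**2
    if det > 0:
        return "local minimum" if fxx > 0 else "local maximum"
    if det < 0:
        return "saddle point"
    return "inconclusive"

bowl = lambda x, y: x**2 + y**2      # stationary at the origin, Hessian [[2,0],[0,2]]
saddle = lambda x, y: x**2 - y**2    # stationary at the origin, Hessian [[2,0],[0,-2]]
print(classify(bowl, 0.0, 0.0), classify(saddle, 0.0, 0.0))
```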

However, derivatives do not always exist, and many methods have been devised for specific situations. The basic classes of methods, based on the smoothness of the objective function, are:

• Combinatorial methods
• Derivative-free methods
• First-order methods
• Second-order methods

Actual methods falling somewhere among the categories above include:

• Gradient descent
• The simplex method
• Subgradient methods
• The ellipsoid method
• Newton's method
• Quasi-Newton methods
• Interior point methods
• The conjugate gradient method
• Line search strategies

If the objective function is convex over the region of interest, then any local minimum is also a global minimum. There exist robust, fast numerical techniques for optimizing twice-differentiable convex functions.
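One of the techniques named above, Newton's method, sketched on a one-dimensional convex example (the function f(x) = x − ln x, minimized at x = 1, is an illustrative choice): each iteration divides the first derivative by the second, x ← x − f′(x)/f″(x), which converges quadratically near the minimum.

```python
def newton_minimize(df, d2f, x, steps=8):
    # Newton's method for minimization: x <- x - f'(x) / f''(x).
    for _ in range(steps):
        x -= df(x) / d2f(x)
    return x

# f(x) = x - ln(x) is convex on x > 0, with f'(x) = 1 - 1/x and f''(x) = 1/x^2.
df = lambda x: 1 - 1 / x
d2f = lambda x: 1 / x**2

x_star = newton_minimize(df, d2f, 0.5)
print(x_star)   # converges to the minimizer x = 1
```

For this example the error is squared at every step (0.5, 0.25, 0.0625, ...), so a handful of iterations reaches machine precision; gradient descent on the same problem would need far more steps.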

Constrained problems can often be transformed into unconstrained problems with the help of Lagrange multipliers.
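A worked sketch on an invented example: maximize f(x, y) = xy subject to g(x, y) = x + y − 10 = 0. The Lagrange conditions ∇f = λ∇g give y = λ and x = λ, so together with the constraint x = y = λ = 5. The code below states that analytic solution and cross-checks it against a brute-force scan along the constraint.

```python
# Lagrange conditions for max xy subject to x + y = 10:
#   df/dx = y = lambda,  df/dy = x = lambda,  x + y = 10
# hence x = y = lambda = 5, with objective value 25.
x_lagrange, y_lagrange = 5.0, 5.0

# Cross-check: parameterize the constraint as y = 10 - x and scan over x.
xs = [i / 100 for i in range(0, 1001)]
x_best = max(xs, key=lambda x: x * (10 - x))
print(x_best, x_best * (10 - x_best))   # 5.0 and 25.0, matching the analytic answer
```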

Here are a few other popular methods:

• Hill climbing
• Simulated annealing
• Quantum annealing
• Tabu search
• Beam search
• Genetic algorithms
• Ant colony optimization
• Evolution strategies
• Stochastic tunneling
• Differential evolution
• Particle swarm optimization
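A minimal sketch of one of these methods, simulated annealing, on an invented multimodal objective (the cooling schedule, step size, and random seed are all arbitrary choices): unlike pure descent, occasional uphill moves are accepted with a temperature-dependent probability, which lets the search escape shallow local minima while the temperature is still high.

```python
import math
import random

def f(x):
    # A multimodal objective with several local minima.
    return x**2 + 10 * math.sin(x)

def simulated_annealing(x, temperature=10.0, cooling=0.995, steps=2000, seed=0):
    rng = random.Random(seed)
    best_x, best_f = x, f(x)
    for _ in range(steps):
        candidate = x + rng.uniform(-1.0, 1.0)
        delta = f(candidate) - f(x)
        # Always accept improvements; accept some uphill moves while still hot.
        if delta < 0 or rng.random() < math.exp(-delta / temperature):
            x = candidate
        if f(x) < best_f:
            best_x, best_f = x, f(x)
        temperature *= cooling
    return best_x, best_f

best_x, best_f = simulated_annealing(6.0)
print(best_x, best_f)
```

As with all such heuristics, there is no guarantee the returned point is the global minimum; rerunning with different seeds and schedules is common practice.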

## Uses

Problems in rigid body dynamics (in particular articulated rigid body dynamics) often require mathematical programming techniques, since rigid body dynamics can be viewed as attempting to solve an ordinary differential equation on a constraint manifold; the constraints are various nonlinear geometric constraints such as "these two points must always coincide", "this surface must not penetrate any other", or "this point must always lie somewhere on this curve". Also, contact forces can be computed by solving a linear complementarity problem, which can itself be viewed as a quadratic programming (QP) problem.

Many design problems can also be expressed as optimization programs. This application is called design optimization. One recent and growing subset of this field is multidisciplinary design optimization, which, while useful in many problems, has in particular been applied to aerospace engineering problems.

Mainstream economics also relies heavily on mathematical programming. An often studied problem in microeconomics, the utility maximization problem, and its dual problem, the expenditure minimization problem, are economic optimization problems. Consumers and firms are assumed to maximize their utility or profit. Also, agents are most frequently assumed to be risk-averse, thereby wishing to minimize whatever risk they might be exposed to. Asset prices are also explained using optimization, though the underlying theory is more complicated than simple utility or profit optimization. Trade theory also uses optimization to explain trade patterns between nations.

Another field that uses optimization techniques extensively is operations research.

## History

The first optimization technique, known as steepest descent, goes back to Gauss. Historically, the first term to be introduced was linear programming, which was invented by George Dantzig in the 1940s. The term programming in this context does not refer to computer programming (although computers are nowadays used extensively to solve mathematical problems). Instead, the term comes from the use of program by the United States military to refer to proposed training and logistics schedules, which were the problems that Dantzig was studying at the time. (Additionally, later on, the use of the term "programming" was apparently important for receiving government funding, as it was associated with high-technology research areas that were considered important.)

Other important mathematicians in the optimization field include:

• John von Neumann
• Leonid Kantorovich
• Richard Bellman


## Solvers

• OpenOpt - a free toolbox with connections to many solvers, for Python programmers
• IPOPT - an open-source primal-dual interior point method NLP solver which handles sparse matrices
• KNITRO - solver for nonlinear optimization problems
• CPLEX
• Mathematica - handles linear programming, integer programming and constrained non-linear optimization problems


## References

• Mordecai Avriel (2003). Nonlinear Programming: Analysis and Methods. Dover Publishing. ISBN 0-486-43227-0.
• Stephen Boyd and Lieven Vandenberghe (2004). Convex Optimization, Cambridge University Press. ISBN 0-521-83378-7.
• Jalaluddin Abdullah (2007). Optimization by the Fixed-Point Method, Version 1.97. Optimization Online. (Freely available online.)
