Optimization

What is Convex Optimization?

Convex optimization is a special class of mathematical optimization problems, which includes least-squares and linear programming problems.

There are many advantages to recognizing or formulating a problem as a convex optimization problem. First, the problem can be solved reliably and efficiently, using interior-point methods or other special methods for convex optimization. There are also theoretical or conceptual advantages of formulating a problem as a convex optimization problem.

Mathematical Optimization

An optimization problem has the form:

\begin{align}
\label{dfn:optimization}
&\text{minimize} & &f_0(x) \\
&\text{subject to} & &f_i(x) \le b_i, \quad i = 1, \dots, m
\end{align}

Here the vector \(x = (x_1, \dots, x_n)\) is the optimization variable of the problem, the function \(f_0 : \mathbb{R}^n \rightarrow \mathbb{R}\) is the objective function, the functions \(f_i : \mathbb{R}^n \rightarrow \mathbb{R}\), \(i = 1, \dots, m\), are the (inequality) constraint functions, and the constants \(b_1, \dots, b_m\) are the limits, or bounds, for the constraints.
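As a minimal numerical sketch of a problem in this form, the snippet below solves a small convex instance with SciPy. The particular objective and constraint (squared norm, one linear inequality) are illustrative choices, not taken from the text.

```python
import numpy as np
from scipy.optimize import minimize

def f0(x):
    # Objective f_0: squared Euclidean norm (convex).
    return np.dot(x, x)

b1 = -1.0  # illustrative bound b_1

# SciPy expects inequality constraints as g(x) >= 0, so the constraint
# f_1(x) = x_1 + x_2 <= b_1 is rewritten as b_1 - f_1(x) >= 0.
constraints = [{"type": "ineq", "fun": lambda x: b1 - (x[0] + x[1])}]

result = minimize(f0, x0=np.zeros(2), constraints=constraints)
print(result.x)  # optimal point, roughly (-0.5, -0.5)
```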

We consider families or classes of optimization problems, characterized by particular forms of the objective and constraint functions. The optimization problem is a linear program if the objective and constraint functions \(f_0, \dots, f_m\) are linear.
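For instance, a small linear program in the form above can be solved with `scipy.optimize.linprog`; the specific objective and constraints here are illustrative, not from the text.

```python
# minimize    -x_1 - 2 x_2
# subject to  x_1 + x_2 <= 4,  x_1 <= 2,  x_1, x_2 >= 0
import numpy as np
from scipy.optimize import linprog

c = np.array([-1.0, -2.0])          # linear objective f_0(x) = c^T x
A = np.array([[1.0, 1.0],
              [1.0, 0.0]])          # rows define the linear constraint functions f_i
b = np.array([4.0, 2.0])            # bounds b_i

res = linprog(c, A_ub=A, b_ub=b, bounds=[(0, None), (0, None)])
print(res.x, res.fun)  # expected optimum near x = (0, 4), objective -8
```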