Convex Optimization 2026, 01 - Common Optimization
A new book to read for the new year, on some math that I might use eventually. I admit that I'm less fascinated by applied mathematics, which this very much is, but it will probably get me to do exercises more readily than the abstract stuff, which tends to go into somewhat unorthodox corners whenever I try my hand at it.
I'm going with "Convex Optimization" by Boyd and Vandenberghe, published by Cambridge University Press (2004). It's available for download on the Stanford web portal, along with lecture slides and a GitHub repository with additional exercises.
The least-squares problem is a common optimization problem with no constraints. It minimizes a sum of squared residuals $a_i^T x - b_i$:

$$f_0(x) = \|Ax - b\|_2^2 = \sum_{i=1}^{k} (a_i^T x - b_i)^2.$$

The solution reduces to a set of linear equations, the normal equations $A^T A x = A^T b$, so $x = (A^T A)^{-1} A^T b$. The structure of the matrix (e.g., sparsity) can help shorten the computation further (1.2.1a). Least-squares is the basis for regression analysis. It applies whenever the objective is a quadratic function with positive semidefinite quadratic form. For added flexibility, the residuals can be weighted, or a regularization term can be added (1.2.1b).
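As a quick sanity check, here is a minimal NumPy sketch (my own, with random made-up data) of the analytical solution via the normal equations, the numerically safer library routine, and a regularized (ridge) variant:

```python
import numpy as np

# Illustrative data only: k residuals, n variables, k >= n.
rng = np.random.default_rng(0)
k, n = 20, 5
A = rng.standard_normal((k, n))
b = rng.standard_normal(k)

# Analytical solution via the normal equations A^T A x = A^T b.
x_normal = np.linalg.solve(A.T @ A, A.T @ b)

# In practice an SVD-based routine is numerically safer.
x_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)

# Regularized variant: minimize ||Ax - b||^2 + rho * ||x||^2.
rho = 0.1
x_ridge = np.linalg.solve(A.T @ A + rho * np.eye(n), A.T @ b)

print(np.allclose(x_normal, x_lstsq))  # True
```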
If the objective and all constraint functions are linear, the problem is a linear program: minimize $c^T x$ subject to $a_i^T x \le b_i$, $i = 1, \ldots, m$. There is no simple analytical formula for the solution, but there is a mature family of algorithms for computing it (e.g., the simplex method and interior-point methods). The Chebyshev approximation problem, minimize $\max_i |a_i^T x - b_i|$, can be recast and solved as a linear program (1.2.2).
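Here is a sketch of the Chebyshev problem recast as an LP, using SciPy's `linprog` (my choice of solver, not from the book) on random illustrative data. The trick is one extra variable $t$ that bounds every residual from above and below:

```python
import numpy as np
from scipy.optimize import linprog

# minimize max_i |a_i^T x - b_i| becomes:
#   minimize t  s.t.  Ax - t*1 <= b,  -Ax - t*1 <= -b.
rng = np.random.default_rng(1)
k, n = 30, 4
A = rng.standard_normal((k, n))
b = rng.standard_normal(k)

# Decision vector is (x, t); the objective picks out t.
c = np.concatenate([np.zeros(n), [1.0]])
ones = np.ones((k, 1))
A_ub = np.block([[A, -ones],
                 [-A, -ones]])
b_ub = np.concatenate([b, -b])

res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None)] * (n + 1))  # x and t are free
x, t = res.x[:n], res.x[-1]
print(t, np.max(np.abs(A @ x - b)))  # these two should agree
```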
Convex optimization minimizes a function $f_0(x)$ subject to $f_i(x) \le b_i$, $i = 1, \ldots, m$, where all $f_i$ are convex:

$$f_i(\alpha x + \beta y) \le \alpha f_i(x) + \beta f_i(y) \quad \forall\, x, y \in \mathbb{R}^n,\ \alpha + \beta = 1,\ \alpha \ge 0,\ \beta \ge 0.$$

It generalizes both the linear programming problem and the least-squares problem. As with linear programming, there is no analytical formula for the solution, but there are methods that solve these problems efficiently, notably interior-point methods, which work iteratively, usually taking between 10 and 100 steps. Each step needs on the order of $\max\{n^3, n^2 m, F\}$ operations, where $F$ is the cost of evaluating the first and second derivatives of the $f_i$. The most difficult part of working with this problem class is usually recognizing and formulating the problem as a convex one.
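As an illustration (not from the book), here is the general form written in the CVXPY modeling package, assuming it is installed. The objective and constraints are convex by construction, and the data is random:

```python
import cvxpy as cp
import numpy as np

# minimize f0(x) s.t. fi(x) <= bi, all fi convex. Here f0 is a
# least-squares objective and the fi are a linear map and a norm ball.
rng = np.random.default_rng(2)
m, n = 10, 5
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)
C = rng.standard_normal((3, n))
d = rng.standard_normal(3)

x = cp.Variable(n)
objective = cp.Minimize(cp.sum_squares(A @ x - b))   # convex f0
constraints = [C @ x <= d, cp.norm(x, 2) <= 1.0]     # convex fi
prob = cp.Problem(objective, constraints)
prob.solve()   # dispatches to a numerical solver under the hood
print(prob.status, prob.value)
```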
Nonlinear optimization covers problems that are neither linear nor convex. There are no effective methods for solving a general nonlinear programming problem (this will likely feel similar to differential equations). Local optimization only searches a small region of the solution space. Methods for it are well developed, fast, and scalable, but they find a locally optimal point, with no guarantee of global optimality. They also require an initial guess, which has to be "good" for the result to be close to optimal. Usually several local optimization runs, from different starting points, are tried before choosing the best solution. Global optimization, in contrast, finds the true global solution, but it is computationally expensive and usually only used for problems with few variables.
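A minimal sketch of that multistart heuristic, using SciPy's BFGS as the local method and Himmelblau's function as a stand-in nonconvex objective (both my choices, not from the book):

```python
import numpy as np
from scipy.optimize import minimize

# Himmelblau's function: a classic nonconvex test objective with
# several local minima, so the initial guess matters.
def f(z):
    x, y = z
    return (x**2 + y - 11)**2 + (x + y**2 - 7)**2

rng = np.random.default_rng(3)
best = None
for _ in range(10):
    x0 = rng.uniform(-5, 5, size=2)           # random initial guess
    res = minimize(f, x0, method="BFGS")      # local, derivative-based
    if best is None or res.fun < best.fun:
        best = res                            # keep the best run
print(best.x, best.fun)
```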
Convex optimization can also help with nonconvex problems. In local optimization, for example, it can provide the starting point: one solves an approximate, convex version of the problem exactly, then runs a local optimization method starting from that solution.
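A small sketch of the idea, with a made-up nonconvex objective: a least-squares term plus a nonconvex penalty. The convex approximation simply drops the penalty, and its solution warm-starts the local method:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
m, n = 15, 5
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)
lam = 0.5

def f_nonconvex(x):
    # 1 - exp(-x_i^2) is a bounded, nonconvex penalty (illustrative).
    return np.sum((A @ x - b)**2) + lam * np.sum(1 - np.exp(-x**2))

# Convex approximation: plain least squares, penalty dropped.
x0, *_ = np.linalg.lstsq(A, b, rcond=None)

# Local optimization of the true objective, warm-started at x0.
res = minimize(f_nonconvex, x0, method="BFGS")
print(f_nonconvex(x0), res.fun)  # res.fun <= the starting value
```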