Convex Optimization 2026, 15 - The Lagrange Dual Problem

For each pair (λ, ν) with λ ⪰ 0, the Lagrange dual function gives a lower bound on the optimal value p*, i.e., a lower bound that depends on (λ, ν). The best such lower bound defines an optimization problem: maximize g(λ, ν) subject to λ ⪰ 0. This is the Lagrange dual problem. It is a convex optimization problem, regardless of whether the primal problem is convex (5.2.0). dom g often has dimension smaller than m + p, and its affine hull can be described by a set of linear equality constraints. These are hidden equality constraints, and an equivalent problem can be formed that makes them explicit (5.2.1).

The optimal value of the Lagrange dual problem, written d*, satisfies d* ≤ p*, whether or not the primal problem is convex. This property is called weak duality; it holds even when d* and p* are infinite. The difference p* - d* is the optimal duality gap of the original problem; it is always nonnegative. Weak duality can be used to obtain a lower bound on the optimal value of a problem that is difficult to solve (5.2.2).

If d* = p*, strong duality holds. Strong duality does not hold in general, but it usually holds for convex problems. Conditions beyond convexity under which strong duality holds are called constraint qualifications. One example is Slater's condition: ∃ x ∈ relint 𝒟 : fi(x) < 0, i = 1, ..., m, Ax = b. Such an x is called strictly feasible. Slater's condition can be refined when some of the inequality constraint functions are affine. If the first k constraint functions are affine, strong duality holds provided there is an x ∈ relint 𝒟 with fi(x) ≤ 0, i = 1, ..., k; fi(x) < 0, i = k+1, ..., m; and Ax = b. In other words, the affine inequalities need not hold with strict inequality. Slater's condition also implies that the dual optimal value is attained whenever d* > -∞ (5.2.3).
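The weak and strong duality claims above can be checked numerically. Below is a minimal sketch (not from the source; the problem data is made up) for an LP of the form minimize cᵀx subject to Ax ⪯ b. Its Lagrange dual works out to maximize -bᵀλ subject to Aᵀλ + c = 0, λ ⪰ 0; since the LP is feasible and its inequality constraints are affine, the refined Slater condition applies and d* = p*.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical primal data: minimize c^T x  subject to  A x <= b.
c = np.array([1.0, 2.0])
A = np.array([[-1.0, 0.0],
              [0.0, -1.0],
              [1.0, 1.0]])
b = np.array([0.0, 0.0, 4.0])

# Primal problem: p* = min c^T x  s.t.  A x <= b.
primal = linprog(c, A_ub=A, b_ub=b, bounds=[(None, None)] * 2)

# Lagrange dual of this LP: g(lam) = -b^T lam when A^T lam + c = 0,
# and -inf otherwise. Dual problem:
#   d* = max -b^T lam  s.t.  A^T lam = -c, lam >= 0.
# linprog minimizes, so minimize b^T lam and negate the result.
dual = linprog(b, A_eq=A.T, b_eq=-c, bounds=[(0, None)] * 3)

p_star = primal.fun
d_star = -dual.fun

# Weak duality guarantees d* <= p*; here strong duality makes them equal.
print(p_star, d_star)
```

For this data the primal optimum is x = 0 with p* = 0, and the dual attains d* = 0 at λ = (1, 2, 0), so the duality gap is zero, as strong duality predicts.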

