Lagrangian Duality

Learn Lagrangian duality and the primal to dual transformation with examples in this lesson.

Lagrangian dual

In optimization theory, the Lagrangian dual is a technique for transforming an optimization problem, also known as a primal problem, into a different problem, called the dual problem, which provides a lower bound on the optimal value of the original problem. The Lagrangian dual is named after Joseph-Louis Lagrange, a famous mathematician who significantly contributed to calculus and mechanics.

Lagrangian function

The Lagrangian dual is obtained by introducing a set of auxiliary variables, called Lagrange multipliers or dual variables, to incorporate the constraints of the original optimization problem into the objective. The resulting Lagrangian function is a function of both the original variables, also known as decision variables, and the dual variables. It provides a way to express the original optimization problem as a maximization problem over the space of the Lagrange multipliers.

Note: In mathematical optimization, Lagrange multipliers are a set of coefficients that are introduced to find the maxima or minima of a function subject to constraints. They allow the transformation of a constrained optimization problem into an unconstrained one by incorporating the constraints into the objective function.

Primal to dual transformation

Consider an optimization problem (not necessarily convex) in the standard form as follows:

\begin{aligned} \min_{\bold x} \quad & f_0(\bold x)\\ \textrm{s.t.} \quad & f_i(\bold x)\le0 \quad & i=1,2,\dots,m\\ \quad & h_j(\bold x)=0 \quad & j=1,2,\dots,k \\ \end{aligned}

The Lagrangian L(\bold x, \bold \lambda, \bold \alpha) for the above problem is defined as follows:

L(\bold{x,\lambda,\alpha})=f_0(\bold x)+\sum_{i=1}^m\lambda_if_i(\bold x)+\sum_{j=1}^k\alpha_jh_j(\bold x)

Here, \bold \lambda = \begin{bmatrix}\lambda_1 \\ \lambda_2 \\ \vdots \\\lambda_m\end{bmatrix} and \bold \alpha = \begin{bmatrix}\alpha_1 \\ \alpha_2 \\ \vdots \\\alpha_k\end{bmatrix} are dual variables.

The Lagrangian dual function g(\bold{\lambda,\alpha}) can then be defined as follows:

g(\bold{\lambda, \alpha})=\min_{\bold x}L(\bold{x,\lambda,\alpha})

It can be shown that if \bold \lambda \ge \bold 0, then g(\bold{\lambda, \alpha}) is a lower bound on the optimal value of the primal problem; this is known as weak duality. Consequently, \max_{\bold \lambda \ge \bold 0, \bold \alpha} g(\bold{\lambda, \alpha}) gives the best such lower bound.

Note: If the primal problem is convex (and a constraint qualification such as Slater's condition holds), then \max_{\bold \lambda \ge \bold 0, \bold \alpha} g(\bold{\lambda, \alpha}) equals the optimal value of the objective of the primal problem. This is known as strong duality.
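As a simple illustration (a hypothetical one-dimensional example), consider minimizing x^2 subject to x \ge 1, written in standard form with the single inequality constraint 1 - x \le 0:

\begin{aligned} L(x,\lambda) &= x^2 + \lambda(1-x) \\ g(\lambda) &= \min_{x} L(x,\lambda) = \lambda - \frac{\lambda^2}{4} \quad \text{(attained at } x = \lambda/2\text{)} \\ \max_{\lambda \ge 0} g(\lambda) &= 1 \quad \text{at } \lambda = 2 \end{aligned}

Because this problem is convex, strong duality holds: the dual optimum 1 equals the primal optimum f_0(1) = 1.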


Example

Consider a primal problem in LP form as follows:

\begin{aligned} \min_{\bold x} \quad & \bold c^T\bold x\\ \textrm{s.t.} \quad & A\bold x\le\bold b \end{aligned}

Here, \bold c = \begin{bmatrix}-1 \\ -1\end{bmatrix},\quad A=\begin{bmatrix}25 & 35 \\ -1 & 0\\ 0 & -1\end{bmatrix}, and \bold b=\begin{bmatrix}5000 \\ -50 \\ -60\end{bmatrix}. The dual problem can be defined as follows:

\begin{aligned} \max_{\bold \lambda} \quad & -\bold b^T\bold \lambda\\ \textrm{s.t.} \quad & A^T\bold \lambda+ \bold c = \bold 0 \\ \quad & \bold \lambda \ge \bold 0 \end{aligned}
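This form of the dual follows directly from the Lagrangian of the LP. Because the Lagrangian is linear in \bold x, its minimum over \bold x is -\infty unless the coefficient of \bold x vanishes:

\begin{aligned} L(\bold x, \bold \lambda) &= \bold c^T\bold x + \bold \lambda^T(A\bold x - \bold b) = (\bold c + A^T\bold \lambda)^T\bold x - \bold b^T\bold \lambda \\ g(\bold \lambda) &= \min_{\bold x} L(\bold x, \bold \lambda) = \begin{cases} -\bold b^T\bold \lambda & \text{if } A^T\bold \lambda + \bold c = \bold 0 \\ -\infty & \text{otherwise} \end{cases} \end{aligned}

Maximizing g(\bold \lambda) over \bold \lambda \ge \bold 0 and discarding the unbounded case yields the dual LP above.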

The following code in cvxpy shows that the optimal value for the primal problem can be obtained from the optimal value of the dual problem:
