# Lagrangian Duality

Learn Lagrangian duality and the primal-to-dual transformation with examples in this lesson.


## Lagrangian dual

In optimization theory, the Lagrangian dual is a technique for transforming an optimization problem, also known as a primal problem, into a different problem, called the dual problem, which provides a lower bound on the optimal value of the original problem. The Lagrangian dual is named after Joseph-Louis Lagrange, a famous mathematician who significantly contributed to calculus and mechanics.

## Lagrangian function

The Lagrangian dual is obtained by introducing a set of auxiliary variables, called Lagrange multipliers or dual variables, to enforce the constraints of the original optimization problem. The resulting Lagrangian function is a function of both the original variables, also known as decision variables, and the dual variables. It provides a way to express the original optimization problem as a maximization problem over the space of the Lagrange multipliers.

Note: In mathematical optimization, Lagrange multipliers are a set of coefficients that are introduced to find the maxima or minima of a function subject to constraints. They allow the transformation of a constrained optimization problem into an unconstrained one by incorporating the constraints into the objective function.

## Primal to dual transformation

Consider an optimization problem (not necessarily convex) in the standard form as follows:

$$\begin{aligned} \min_{\bold x} \quad & f_0(\bold x)\\ \textrm{s.t.} \quad & f_i(\bold x)\le0 \quad & i=1,2,\dots,m\\ \quad & h_j(\bold x)=0 \quad & j=1,2,\dots,k \\ \end{aligned}$$

The Lagrangian $L(\bold x, \bold \lambda, \bold \alpha)$ for the above problem is defined as follows:

$L(\bold{x,\lambda,\alpha})=f_0(\bold x)+\sum_{i=1}^m\lambda_if_i(\bold x)+\sum_{j=1}^k\alpha_jh_j(\bold x)$

Here, $\bold \lambda = \begin{bmatrix}\lambda_1 \\ \lambda_2 \\ \vdots \\\lambda_m\end{bmatrix}$ and $\bold \alpha = \begin{bmatrix}\alpha_1 \\ \alpha_2 \\ \vdots \\\alpha_k\end{bmatrix}$ are dual variables.

The Lagrangian dual function $g(\bold{\lambda,\alpha})$ can then be defined as follows:

$g(\bold{\lambda, \alpha})=\min_{\bold x}L(\bold{x,\lambda,\alpha})$

It can be shown that for any $\bold \lambda \ge \bold 0$, the value $g(\bold{\lambda, \alpha})$ is a lower bound on the optimal value of the primal problem; this is known as weak duality. Consequently, $\max_{\bold \lambda \ge \bold 0, \bold \alpha} g(\bold{\lambda, \alpha})$ gives the best lower bound obtainable this way, and the bound is tight exactly when strong duality holds.

Note: If the primal problem is convex and a constraint qualification (such as Slater's condition) holds, then strong duality holds and $\max_{\bold \lambda \ge \bold 0, \bold \alpha} g(\bold{\lambda, \alpha})$ equals the optimal value of the objective of the primal problem.
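For intuition, here is a small worked example (not part of the original lesson) of a convex problem where strong duality holds. Consider $\min_x x^2$ subject to $x \ge 1$, i.e., $1 - x \le 0$. The Lagrangian is

$$L(x, \lambda) = x^2 + \lambda(1 - x)$$

For fixed $\lambda \ge 0$, the minimum over $x$ is attained at $x = \lambda/2$, giving the dual function

$$g(\lambda) = \frac{\lambda^2}{4} + \lambda\left(1 - \frac{\lambda}{2}\right) = \lambda - \frac{\lambda^2}{4}$$

Maximizing over $\lambda \ge 0$ yields $\lambda = 2$ and $g(2) = 1$, which equals the primal optimal value $f_0(1) = 1$ attained at $x = 1$.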

### Example

Consider a primal problem given as the following linear program (LP):

$$\begin{aligned} \min_{\bold x} \quad & \bold c^T\bold x\\ \textrm{s.t.} \quad & A\bold x\le\bold b \end{aligned}$$

Here, $\bold c = \begin{bmatrix}-1 \\ -1\end{bmatrix},\quad A=\begin{bmatrix}25 & 35 \\ -1 & 0\\ 0 & -1\end{bmatrix}$, and $\bold b=\begin{bmatrix}5000 \\ -50 \\ -60\end{bmatrix}$. The dual problem can be defined as follows:

$$\begin{aligned} \max_{\bold \lambda} \quad & -\bold b^T\bold \lambda\\ \textrm{s.t.} \quad & A^T\bold \lambda+ \bold c = \bold 0 \\ \quad & \bold \lambda \ge \bold 0 \end{aligned}$$
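This dual follows directly from the Lagrangian (a short derivation added for clarity). With multipliers $\bold \lambda \ge \bold 0$ for the inequality constraints,

$$L(\bold x, \bold \lambda) = \bold c^T\bold x + \bold \lambda^T(A\bold x - \bold b) = (\bold c + A^T\bold \lambda)^T\bold x - \bold b^T\bold \lambda$$

Minimizing over $\bold x$ gives $g(\bold \lambda) = -\infty$ unless $A^T\bold \lambda + \bold c = \bold 0$, in which case $g(\bold \lambda) = -\bold b^T\bold \lambda$. Maximizing $g$ over $\bold \lambda \ge \bold 0$ subject to this constraint yields the dual problem above.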

The following code in cvxpy shows that the optimal value for the primal problem can be obtained from the optimal value of the dual problem:
