The Dual of Linear Programming (LP)

Explore how to derive the dual problem of a linear program and understand its relationship to the primal problem. Learn to formulate the dual problem, apply Lagrange multipliers, and use gradient conditions. This lesson helps you grasp when to solve the dual for efficiency and verify solutions with Python libraries like NumPy and SciPy.


In some cases, the dual of an LP problem is easier to solve than the primal. If the primal is a minimization problem, the dual is a maximization problem, and vice versa. So, depending on the specifics of the problem, such as simpler constraints or a simpler objective function, solving the dual can be the more efficient route.
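As a quick numerical check of this claim, the sketch below solves a small primal of the form $\min_x c^\top x$ subject to $Ax \geq b$, along with the corresponding dual $\max_\lambda b^\top \lambda$ subject to $A^\top \lambda = c,\ \lambda \geq 0$, using SciPy's `linprog`. The matrices and vectors are made-up illustrative values, and this primal/dual pairing is an assumption meant to match the derivation later in the lesson.

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative data (assumed, not taken from the lesson).
# Primal:  minimize c^T x  subject to  A x >= b  (x unrestricted in sign)
c = np.array([3.0, 4.0])
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([4.0, 6.0])

# linprog minimizes subject to "<=" constraints, so rewrite A x >= b as -A x <= -b.
primal = linprog(c, A_ub=-A, b_ub=-b, bounds=(None, None), method="highs")

# Dual:  maximize b^T lam  subject to  A^T lam = c,  lam >= 0.
# linprog only minimizes, so minimize -b^T lam and flip the sign of the optimum.
dual = linprog(-b, A_eq=A.T, b_eq=c, bounds=(0, None), method="highs")

print("primal optimum:", primal.fun)    # c^T x*
print("dual optimum:  ", -dual.fun)     # b^T lam*, matches the primal optimum
```

Both printed values coincide (strong duality for a feasible, bounded LP), so whichever of the two problems is cheaper to solve still recovers the optimal objective value.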

Calculation of the dual objective

Consider an LP problem whose primal is given as follows:
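The primal is assumed here to take the standard inequality-constrained form, consistent with the definitions that follow:

$$
\begin{aligned}
\min_{x \in \R^d} \quad & c^\top x \\
\text{subject to} \quad & Ax \geq b
\end{aligned}
$$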

where $c \in \R^d$ is the cost vector, $A \in \R^{m \times d}$ is the constraint matrix, and $b \in \R^m$ is the constraint vector.

The Lagrangian function $\pounds(x, \lambda)$ is given by the following:
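With the primal assumed above, the Lagrangian would take the form:

$$
\pounds(x, \lambda) = c^\top x + \lambda^\top (b - Ax)
$$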

where $\lambda \in \R^m$ is the vector of the nonnegative Lagrange multipliers. Rearranging the terms corresponding to $x$ yields the following:
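Collecting the terms that multiply $x$, still under the assumed primal form:

$$
\pounds(x, \lambda) = \left(c - A^\top \lambda\right)^\top x + \lambda^\top b
$$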

The Lagrangian dual function is given as follows:
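Writing $g(\lambda)$ for the dual function (the symbol $g$ is introduced here only for illustration), it is obtained by minimizing the Lagrangian over $x$:

$$
g(\lambda) = \min_{x} \pounds(x, \lambda) = \min_{x} \left[\left(c - A^\top \lambda\right)^\top x + \lambda^\top b\right]
$$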

Because $\pounds(x, \lambda)$ is a convex function, we can find its minimum (with respect to $x$) using the gradient-solving ...