
Lagrange multipliers, also called Lagrangian multipliers (e.g., Arfken 1985, p. 945), are often used to find the extrema of a multivariate function f(x_1, x_2, …, x_n) subject to the constraint g(x_1, x_2, …, x_n) = 0, where f and g are functions with continuous first partial derivatives on the open set containing the curve g(x_1, x_2, …, x_n) = 0, and ∇g ≠ 0 at any point on the curve (where ∇ is the gradient).



For an extremum of f to exist on g, the gradient of f must line up with the gradient of g. In the illustration, f is shown in red, g in blue, and the intersection of f and g is indicated in light blue. The gradient is a horizontal vector (i.e., it has no z-component) that shows the direction in which the function increases; for g it is perpendicular to the curve, which is a line in this case. If the two gradients point in the same direction, then one is a multiple (-λ) of the other, so

∇f = -λ∇g.

The two vectors are equal, so all of their components are equal as well, giving

∂f/∂x_k + λ ∂g/∂x_k = 0

for all k = 1, …, n, where the constant λ is called the Lagrange multiplier.

The extremum is then found by solving these n+1 equations in the n+1 unknowns x_1, …, x_n and λ; this can be done without explicitly inverting g, which is why Lagrange multipliers are so useful.
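As a concrete sketch of solving the n+1 equations, consider the illustrative choice (not from the text) f(x, y) = xy subject to g(x, y) = x + y - 1 = 0. The conditions ∂f/∂x + λ ∂g/∂x = 0 and ∂f/∂y + λ ∂g/∂y = 0, together with the constraint, form a linear system in (x, y, λ) that can be solved exactly with only the Python standard library:

```python
from fractions import Fraction

def solve_linear(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting, in exact arithmetic."""
    n = len(A)
    # Build the augmented matrix with exact rational entries.
    M = [[Fraction(A[i][j]) for j in range(n)] + [Fraction(b[i])] for i in range(n)]
    for col in range(n):
        # Pick a row with a nonzero entry in this column and swap it into place.
        pivot = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[pivot] = M[pivot], M[col]
        # Eliminate this column from all rows below.
        for r in range(col + 1, n):
            factor = M[r][col] / M[col][col]
            M[r] = [M[r][c] - factor * M[col][c] for c in range(n + 1)]
    # Back-substitution.
    x = [Fraction(0)] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

# Extremize f(x, y) = x*y subject to g(x, y) = x + y - 1 = 0.
# Stationarity:  df/dx + lambda * dg/dx  ->  y + lambda = 0
#                df/dy + lambda * dg/dy  ->  x + lambda = 0
# Constraint:                                x + y      = 1
A = [[0, 1, 1],
     [1, 0, 1],
     [1, 1, 0]]
b = [0, 0, 1]
x, y, lam = solve_linear(A, b)
print(x, y, lam)  # 1/2 1/2 -1/2
```

The solution x = y = 1/2 with λ = -1/2 is the point on the line x + y = 1 where xy is maximized; in general the stationarity system is nonlinear, but the same counting of n+1 equations in n+1 unknowns applies.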

For multiple constraints g_1=0, g_2=0, …,

∇f + λ_1∇g_1 + λ_2∇g_2 + … = 0.
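To see the multi-constraint condition in action, here is a hedged example of our own choosing (not from the article): minimize f(x, y, z) = x² + y² + z² subject to g_1 = x + y + z - 1 = 0 and g_2 = x - y = 0. Writing out ∇f + λ_1∇g_1 + λ_2∇g_2 = 0 component-wise gives three equations, which with the two constraints make five equations in the five unknowns (x, y, z, λ_1, λ_2); this particular system is simple enough to solve by substitution:

```python
from fractions import Fraction as F

# Component-wise, del f + lambda1 * del g1 + lambda2 * del g2 = 0 reads:
#   2x + lambda1 + lambda2 = 0
#   2y + lambda1 - lambda2 = 0
#   2z + lambda1           = 0
# plus the constraints x + y + z = 1 and x - y = 0.

# Subtracting the first two stationarity equations: 2(x - y) + 2*lambda2 = 0,
# and the constraint g2 forces x = y, so lambda2 = 0.
lambda2 = F(0)
# With lambda2 = 0, each stationarity equation reads 2*coord + lambda1 = 0,
# so x = y = z = -lambda1 / 2.  The constraint x + y + z = 1 then gives
# 3 * (-lambda1 / 2) = 1, i.e. lambda1 = -2/3.
lambda1 = F(-2, 3)
x = y = z = -lambda1 / 2
print(x, y, z, lambda1, lambda2)  # 1/3 1/3 1/3 -2/3 0
```

The minimizer (1/3, 1/3, 1/3) is the point of the line defined by the two constraint planes that is closest to the origin, consistent with the geometric picture of the gradient condition.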

