Classifying critical points

In the last slide we saw that local maxima and minima can only occur at critical points, where $\nabla f = \vec 0$ (or where $\nabla f$ does not exist).
Now it's time to classify critical points, and see which are local maxima, which are local minima, and which are saddle points. There are both graphical and algebraic ways of doing this. We'll illustrate both approaches with the earlier example of $z=f(x,y)=\sin(x)\sin(y)$, which has critical points wherever $\cos x \sin y$ and $\sin x \cos y$ both vanish: at the origin, at the four points $(\pm\pi/2, \pm\pi/2)$ (with the two signs chosen independently), and at points on the boundary of the square $-\pi \le x,\, y \le \pi$.
Of course, if you have the graph of a function, you can see the local maxima and minima. However, you can also identify the local extrema from a contour map, or from the gradient. Check out the various choices in the interactive graphic to the right. The critical points are indicated by the red dots. Let's classify them: near a local maximum, such as $(\pi/2, \pi/2)$ or $(-\pi/2, -\pi/2)$, the contour lines are closed curves encircling the point, and the nearby gradient vectors all point toward it. Near a local minimum, such as $(\pi/2, -\pi/2)$ or $(-\pi/2, \pi/2)$, the nearby gradient vectors all point away from it. Near a saddle point, such as $(0,0)$, the gradient vectors point toward the critical point in some directions and away from it in others, since the surface rises in some directions and falls in others.
Does this use of the gradient vectors remind you of how you used the First Derivative Test to classify critical points for functions of one variable? It should!
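If you're reading this without the interactive graphic, here is a minimal numpy/matplotlib sketch (the tooling is our assumption, not part of the module) that reproduces the same picture: contour lines of $f(x,y)=\sin x\sin y$, gradient arrows, and red dots at the interior critical points.

```python
# Contour-and-gradient picture for f(x, y) = sin(x) sin(y) on -pi <= x, y <= pi.
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(-np.pi, np.pi, 200)
y = np.linspace(-np.pi, np.pi, 200)
X, Y = np.meshgrid(x, y)
Z = np.sin(X) * np.sin(Y)

# Coarser grid for the gradient arrows so the plot stays readable.
xs, ys = np.meshgrid(np.linspace(-np.pi, np.pi, 20),
                     np.linspace(-np.pi, np.pi, 20))
fx = np.cos(xs) * np.sin(ys)   # partial derivative f_x
fy = np.sin(xs) * np.cos(ys)   # partial derivative f_y

fig, ax = plt.subplots()
ax.contour(X, Y, Z, levels=15)
ax.quiver(xs, ys, fx, fy)
# Interior critical points: a saddle at the origin, extrema at (+-pi/2, +-pi/2).
crit = [(0, 0), (np.pi/2, np.pi/2), (-np.pi/2, -np.pi/2),
        (np.pi/2, -np.pi/2), (-np.pi/2, np.pi/2)]
ax.plot(*zip(*crit), 'ro')
ax.set_aspect('equal')
plt.show()
```

Watch how the arrows behave near each red dot: all inward at the maxima, all outward at the minima, and a mix of both at the saddle.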
Algebraically, the Second Derivative Test classifies a critical point $(a,b)$ using the discriminant $$D = f_{xx}(a,b)\, f_{yy}(a,b) - f_{xy}(a,b)^2.$$ If $D > 0$ and $f_{xx}(a,b) > 0$, then $f$ has a local minimum at $(a,b)$; if $D > 0$ and $f_{xx}(a,b) < 0$, a local maximum; if $D < 0$, a saddle point; and if $D = 0$, the test is inconclusive.

To see how this works for $f(x, y) = \sin x \sin y$ on $-\pi \le x,\, y\le \pi$, note that $$f_{xx}(x, y) = -\sin x\sin y, \qquad f_{xy}(x, y) = \cos x \cos y, \qquad f_{yy}(x, y) = -\sin x \sin y\,.$$ At three of the critical points of $f$ the Second Derivative Test tells us: at $(0,0)$ we have $D = (0)(0) - 1^2 = -1 < 0$, so the origin is a saddle point; at $(\pi/2, \pi/2)$ we have $D = (-1)(-1) - 0^2 = 1 > 0$ with $f_{xx} = -1 < 0$, so this is a local maximum; and at $(\pi/2, -\pi/2)$ we have $D = (1)(1) - 0^2 = 1 > 0$ with $f_{xx} = 1 > 0$, so this is a local minimum.
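If you want to verify this arithmetic, here is a short sympy sketch (sympy is our choice of tool here, not something the module itself uses) that computes the discriminant symbolically and applies the test at the three points above.

```python
# Second Derivative Test for f(x, y) = sin(x) sin(y), checked with sympy.
import sympy as sp

x, y = sp.symbols('x y')
f = sp.sin(x) * sp.sin(y)

fxx = sp.diff(f, x, 2)
fxy = sp.diff(f, x, y)
fyy = sp.diff(f, y, 2)
D = sp.simplify(fxx * fyy - fxy**2)   # discriminant D = f_xx f_yy - f_xy^2

for a, b in [(0, 0), (sp.pi/2, sp.pi/2), (sp.pi/2, -sp.pi/2)]:
    A = fxx.subs({x: a, y: b})
    d = D.subs({x: a, y: b})
    # D = 0 would be inconclusive, but that doesn't occur at these points.
    kind = ('saddle point' if d < 0
            else 'local minimum' if A > 0
            else 'local maximum')
    print(f'({a}, {b}): D = {d}, f_xx = {A} -> {kind}')
```

Running it prints $D = -1$ at the origin and $D = 1$ at the other two points, matching the classifications above.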
At first glance, the Second Derivative Test may look like black magic, since it is based on results from linear algebra that you probably haven't seen yet. Here is a brief sketch of the ideas behind the formula. Don't worry if you don't see where all of this comes from; think of it as a reason to learn linear algebra!

Associated to every $2 \times 2$ matrix $\begin{pmatrix} A & B \cr B & C \end{pmatrix}$ are two numbers called eigenvalues. When we slice the surface $z=f(x,y)$ with a vertical plane, the second derivative of the resulting curve at $(a,b)$ is always less than or equal to the bigger eigenvalue (call it $\lambda_1$) and greater than or equal to the smaller eigenvalue (call it $\lambda_2$). If both eigenvalues are positive, then the second derivative is positive in all directions, and we are at a local minimum. If both eigenvalues are negative, then the second derivative is negative in all directions, and we are at a local maximum. If one eigenvalue is positive and one is negative, then we go up in some directions and down in others -- that's a saddle point.

Note that the discriminant is the same thing as the determinant of the matrix $\begin{pmatrix} A & B \cr B & C \end{pmatrix}$. Eigenvalues are related to determinants by $$\lambda_1\lambda_2 = D = AC-B^2.$$ If $D<0$, then the eigenvalues have opposite signs, and we have a saddle point. If $D>0$, then the eigenvalues have the same sign, and we have either a local maximum or a local minimum. We tell which it is by moving in one direction and seeing if we go up or down. Usually that means looking at $A=f_{xx}$, but $C=f_{yy}$ would work just as well.
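You can see the eigenvalue picture concretely with a few lines of numpy (again, our tooling assumption, not the module's): compute the eigenvalues of the matrix $\begin{pmatrix} A & B \cr B & C \end{pmatrix}$ of second derivatives at each critical point and check that their product is the discriminant $D$.

```python
# Eigenvalues of the matrix of second derivatives of f(x, y) = sin(x) sin(y)
# at each critical point; their product equals the discriminant D = AC - B^2.
import numpy as np

def second_derivative_matrix(a, b):
    # For f = sin(x) sin(y): f_xx = f_yy = -sin(a) sin(b), f_xy = cos(a) cos(b).
    A = -np.sin(a) * np.sin(b)
    B = np.cos(a) * np.cos(b)
    return np.array([[A, B], [B, A]])

for a, b in [(0.0, 0.0), (np.pi/2, np.pi/2), (np.pi/2, -np.pi/2)]:
    lam2, lam1 = np.linalg.eigvalsh(second_derivative_matrix(a, b))  # ascending
    print(f'({a:+.2f}, {b:+.2f}): eigenvalues {lam2:+.0f}, {lam1:+.0f}, '
          f'product D = {lam1 * lam2:+.0f}')
```

The output shows eigenvalues $-1, +1$ at the origin (opposite signs, $D = -1$: saddle), $-1, -1$ at $(\pi/2, \pi/2)$ (both negative: local maximum), and $+1, +1$ at $(\pi/2, -\pi/2)$ (both positive: local minimum), exactly as the sign rules above predict.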