So far we have been talking about linear evolution problems
of the form
$$\hbox{(Evolution of ${\bf x}$)} = A{\bf x},$$
where "evolution of ${\bf x}$" could mean a difference equation, a first
order differential equation, or a second order differential equation.
In all cases we diagonalize $A$, work in a basis of eigenvectors of $A$,
and discover that our equations decouple into
$$\hbox{(Evolution of $y_j$)} = \lambda_j y_j.$$
We then solve these equations one at a time.
For difference equations we
get $y_j(n) = \lambda_j^n y_j(0)$.
For first order differential equations we get
$y_j(t) = e^{\lambda_j t} y_j(0)$, and
for second order differential equations we get
$y_j(t)$ as a linear combination of sines and cosines, or hyperbolic
sines and cosines, depending on whether $\lambda_j$ is negative or positive.
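The diagonalization recipe above can be sketched numerically. Here is a minimal example for a difference equation (the matrix and initial condition are my own choices, not from the text): we pass to the eigenbasis, evolve each decoupled mode as $y_j(n) = \lambda_j^n y_j(0)$, and transform back.

```python
import numpy as np

# Solve the difference equation x(n+1) = A x(n) by diagonalization.
A = np.array([[0.0, 1.0],
              [2.0, 1.0]])      # a hypothetical matrix; eigenvalues are 2 and -1
x0 = np.array([1.0, 0.0])
n = 5

lam, P = np.linalg.eig(A)       # columns of P are eigenvectors of A
y0 = np.linalg.solve(P, x0)     # coordinates of x0 in the eigenbasis
yn = lam**n * y0                # decoupled: y_j(n) = lambda_j^n y_j(0)
xn = (P @ yn).real              # back to the standard basis

# Agrees with brute force: x(n) = A^n x(0).
brute = np.linalg.matrix_power(A, n) @ x0
```

The same change of basis handles the differential-equation cases, with $\lambda_j^n$ replaced by $e^{\lambda_j t}$ or by sines/cosines.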
The next step is to consider nonlinear evolution problems, and
to approximate them with linear problems. This is called linearization.
The first two videos do this for problems with a single variable.
The first video explains linearization of differential equations:
The second one explains linearization of difference equations:
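The one-variable idea can be sketched in a few lines. As a hypothetical example (my choice, not one from the videos), take the logistic equation $x' = f(x) = x(1-x)$: near the fixed point $a = 1$, the displacement $y = x - a$ evolves approximately as $y' = f'(a)\,y$.

```python
import numpy as np

# Linearize x' = f(x) = x(1 - x) near its fixed point a = 1, where f(a) = 0.
f_prime = lambda x: 1 - 2 * x

a = 1.0
x0 = 1.1                # start near the fixed point
t = 2.0

# Linearized prediction: y(t) = e^{f'(a) t} y(0), i.e. x ≈ a + (x0 - a) e^{-t}.
x_lin = a + (x0 - a) * np.exp(f_prime(a) * t)

# Exact logistic solution, for comparison.
x_exact = 1 / (1 + (1 - x0) / x0 * np.exp(-t))
```

Since $f'(1) = -1 < 0$, the fixed point is stable and the linearized solution tracks the exact one closely.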
Both of these linearizations depend on first order Taylor approximations.
To handle matrix problems, we have to understand Taylor series in higher
dimensions. That's the subject of the third video:
Now that we understand linearization in one variable and Taylor
approximations in higher dimensions, we can attack
matrix problems. First we do systems of differential equations:
Finally we do systems of difference equations:
In all cases the general procedure is the same. Given a problem of the form
$$\hbox{(Evolution of ${\bf x}$)} = {\bf f}({\bf x}),$$
where bold face means vectors, we:
1. Find the fixed points. For differential equations these are the points
${\bf a}$ where ${\bf f}({\bf a}) = {\bf 0}$; for difference equations, the
points where ${\bf f}({\bf a}) = {\bf a}$.
2. Define a new variable ${\bf y} = {\bf x} - {\bf a}$. (Note: this
usage of the letter ${\bf y}$ has nothing to do with a change of basis.
There are only so many letters in the alphabet!)
3. Use a Taylor series to approximate the equations when ${\bf y}$
is small:
$$\hbox{(Evolution of ${\bf y}$)} \approx A {\bf y},$$
where $A = d{\bf f}|_{{\bf x}={\bf a}}$ is the matrix whose $ij$ entry is
$\frac{\partial f_i}{\partial x_j}$
evaluated at ${\bf x} = {\bf a}$.
4. At each fixed point,
solve the linearized equations by diagonalizing $A$.
5. Classify the modes as stable, unstable, or borderline.
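The whole procedure can be sketched on a system of my own choosing (a damped pendulum, not an example from the text): $x_1' = x_2$, $x_2' = -\sin x_1 - \tfrac12 x_2$.

```python
import numpy as np

# Step 1: the fixed points solve f(a) = 0, giving a = (0, 0) and a = (pi, 0).
# Steps 2-3: y = x - a evolves as y' ≈ A y, where A_{ij} = ∂f_i/∂x_j at a.
def jacobian(a):
    return np.array([[0.0,            1.0],
                     [-np.cos(a[0]), -0.5]])

# Steps 4-5: diagonalize A at each fixed point and classify the modes.
for a in (np.array([0.0, 0.0]), np.array([np.pi, 0.0])):
    lam = np.linalg.eigvals(jacobian(a))
    verdict = "stable" if np.all(lam.real < 0) else "unstable"
    print(a, lam, verdict)
```

At $(0,0)$ both eigenvalues have negative real part (all modes stable); at $(\pi,0)$ one eigenvalue is positive (an unstable mode), matching the intuition that a pendulum balanced upside down falls over.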
If all modes are stable, then ${\bf y}$ will stay small forever, meaning that
${\bf x}$ will stay close to ${\bf a}$ forever. In fact, the linearization
approximation only gets better with time.
If some modes are unstable, then ${\bf y}$ will grow. At some point,
${\bf x}$ will be so far away from ${\bf a}$ that the linearization
approximation will no longer be valid.
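The claim about stable fixed points can be checked numerically. A sketch, again using the assumed damped-pendulum system $x_1' = x_2$, $x_2' = -\sin x_1 - \tfrac12 x_2$: started near the stable fixed point $(0,0)$, the trajectory stays nearby and decays toward it.

```python
import numpy as np

def f(x):
    return np.array([x[1], -np.sin(x[0]) - 0.5 * x[1]])

x = np.array([0.2, 0.0])      # start close to the fixed point a = (0, 0)
dt = 0.01
max_dist = 0.0
for _ in range(10000):        # Euler steps out to t = 100
    x = x + dt * f(x)
    max_dist = max(max_dist, np.linalg.norm(x))
# max_dist stays small, and x itself decays toward the fixed point.
```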