An $n \times n$ matrix $A$ is diagonalizable if there exists a basis of ${\bf C}^n$ consisting of eigenvectors of $A$. (We use ${\bf C}^n$ rather than ${\bf R}^n$ in order to allow complex eigenvectors.) Not every matrix is diagonalizable, but almost every matrix is. This has a precise mathematical meaning. The space of $n \times n$ real matrices is $n^2$-dimensional, and only an $(n^2-1)$-dimensional subset fails to be diagonalizable.
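To see where this dimension count comes from, consider the $2 \times 2$ case (a sketch, using the standard fact that a matrix with distinct eigenvalues is diagonalizable). A repeated eigenvalue occurs exactly where the discriminant of the characteristic polynomial vanishes:

$$p_A(\lambda) = \lambda^2 - (\operatorname{tr} A)\,\lambda + \det A, \qquad \text{repeated root} \iff (\operatorname{tr} A)^2 - 4\det A = 0.$$

This is a single polynomial equation on the 4-dimensional space of $2 \times 2$ real matrices, so it cuts out a 3-dimensional subset, and every non-diagonalizable matrix lies inside that subset.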
The standard example of a non-diagonalizable matrix is $A= \begin{pmatrix} 1&1 \cr 0&1 \end{pmatrix}$. Its characteristic polynomial is $p_A(\lambda) = (\lambda-1)^2$, whose only root is $\lambda=1$, but the eigenspace $E_1$ is only 1-dimensional, consisting of all multiples of $\begin{pmatrix} 1 \cr 0 \end{pmatrix}$.
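To check that claim directly, solve $(A-I){\bf v} = {\bf 0}$:

$$A - I = \begin{pmatrix} 0&1 \cr 0&0 \end{pmatrix}, \qquad \begin{pmatrix} 0&1 \cr 0&0 \end{pmatrix} \begin{pmatrix} v_1 \cr v_2 \end{pmatrix} = \begin{pmatrix} v_2 \cr 0 \end{pmatrix} = \begin{pmatrix} 0 \cr 0 \end{pmatrix},$$

which forces $v_2 = 0$. So every eigenvector of $A$ is a multiple of $\begin{pmatrix} 1 \cr 0 \end{pmatrix}$, and there is no basis of ${\bf C}^2$ consisting of eigenvectors of $A$.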
There are two important results relating geometric multiplicity to algebraic multiplicity, with the first being a stepping-stone towards the second:
Theorem 1: For any eigenvalue $\lambda$, $1 \le m_g(\lambda) \le m_a(\lambda)$.
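For the matrix $A$ above, for instance, $p_A(\lambda) = (\lambda-1)^2$ gives $m_a(1) = 2$, while the eigenspace computation gives $m_g(1) = \dim E_1 = 1$, so Theorem 1 reads

$$1 \;\le\; m_g(1) = 1 \;\le\; m_a(1) = 2.$$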
Main Theorem: The following three conditions are equivalent: (1) $A$ is diagonalizable, i.e., there is a basis of ${\bf C}^n$ consisting of eigenvectors of $A$; (2) the geometric multiplicities of the eigenvalues of $A$ add up to $n$; (3) $m_g(\lambda) = m_a(\lambda)$ for every eigenvalue $\lambda$ of $A$.
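Applied to the example above, $m_g(1) = 1 \ne 2 = m_a(1)$, so the Main Theorem confirms that $A$ is not diagonalizable, consistent with the direct computation. By contrast (an illustrative matrix, not taken from the text above), a matrix with distinct eigenvalues satisfies $m_g(\lambda) = m_a(\lambda)$ at every eigenvalue automatically:

$$B = \begin{pmatrix} 1&1 \cr 0&2 \end{pmatrix}, \qquad m_g(1) = m_a(1) = 1, \quad m_g(2) = m_a(2) = 1,$$

so the geometric multiplicities add up to $2$ and $B$ is diagonalizable.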
Warning: The last few seconds were cut off from the second video.