Basis


I. LINEAR DEPENDENCE AND INDEPENDENCE

A set $\big\{{\bf v}_1,\, {\bf v}_2, \dots , \, {\bf v}_k\big\}$ of vectors in a real vector space $V$ is said to be Linearly Independent when the vector equation $$x_1 {\bf v}_1 + x_2 {\bf v}_2 + \dots + x_k {\bf v}_k \ = \ {\bf 0}_V$$ has only the trivial solution $x_1 = x_2 = \ldots = x_k = 0\,.$

The set is said to be Linearly Dependent if there exist real numbers $c_1,\, c_2, \, \ldots,\, c_k$, NOT ALL ZERO, such that $$c_1 {\bf v}_1 + c_2 {\bf v}_2 + \dots + c_k {\bf v}_k \ = \ {\bf 0}_V.$$

Some simple criteria for linear independence will be useful:

Theorem 1: Let $\left\{ {\bf v_1} , \dots , {\bf v_k} \right\} \subseteq \mathbb{R}^n$ and let $$A = \left[ \begin{array}{ccc} {\bf v_1} & \cdots & {\bf v_k} \end{array} \right] \in M_{n \times k}(\mathbb{R}).$$ The following are equivalent:

           (i)   the set $\left\{ {\bf v_1} , \dots , {\bf v_k} \right\}$ is linearly independent,

           (ii)  the homogeneous equation $A{\bf x} \,=\, {\bf 0}$ has only the trivial solution ${\bf x} \,=\, {\bf 0}$,

           (iii) $A$ has a pivot position in every column, i.e., $\textrm{rank}(A) = k$.

These criteria are used to check the linear dependence or independence of a set of vectors.

Problem: Determine if the vectors $${\bf v}_1 \,=\, \left[\begin{array}{cc} 1 \\ 2 \\ 1 \end{array}\right], \quad {\bf v}_2 \,=\, \left[\begin{array}{cc} 1 \\ 3 \\ 4 \end{array}\right], \quad {\bf v}_3 \,=\, \left[\begin{array}{cc} -1 \\ 1 \\ 9 \end{array}\right], $$ in ${\mathbb R}^3$ are linearly independent.

Solution: We need to check if the homogeneous equation $$A {\bf x} \ = \ \left[\begin{array}{cc} 1 & 1 & -1 \\ 2 & 3 & 1 \\ 1 & 4 & 9 \end{array} \right] \left[\begin{array}{cc} x_1 \\ x_2 \\ x_3 \end{array} \right] \ = \ {\bf 0}$$ has only the trivial solution ${\bf x } \,=\, {\bf 0}$. We must row reduce

$$A = \left[\begin{array}{cc} 1 & 1 & -1 \\ 2 & 3 & 1 \\ 1 & 4 & 9 \\ \end{array} \right]$$ to get $$\textrm{RREF}(A) = \left[\begin{array}{cc} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ \end{array} \right],$$ so the only solution of $A {\bf x} \,=\, {\bf 0}$ is $$x_1 \ = \ x_2 \ = \ x_3 \ = \ 0\,.$$ Thus ${\bf v}_1, \, {\bf v}_2$ and ${\bf v}_3$ are linearly independent.
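The conclusion can be cross-checked numerically: instead of carrying out the row reduction by hand, one can compare the rank of $A$ with the number of columns, as in criterion (iii) of Theorem 1. A minimal sketch, assuming NumPy is available:

```python
import numpy as np

# Columns are v1, v2, v3 from the problem above.
A = np.array([[1, 1, -1],
              [2, 3,  1],
              [1, 4,  9]], dtype=float)

# If rank(A) equals the number of columns, the only solution of Ax = 0
# is x = 0, so the columns are linearly independent (Theorem 1).
rank = np.linalg.matrix_rank(A)
print(rank, rank == A.shape[1])   # expected: 3 True
```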

    The next three results illustrate how familiar concepts and results from mathematics seemingly unrelated to Linear Algebra enter. The first one involves trig identities, while the next two bring in one version of the Fundamental Theorem of Algebra and the idea of Wronskians from differential equations.

  Example 2: Are the trig functions $$1,\ \ \cos x,\ \ \sin x,\ \ \cos^2 x,\ \ \sin^2 x,\ \ \cos x \sin x $$ in $C^{\infty}(\mathbb{R})$ linearly independent?

    Solution: No. Since $\cos^2 x + \sin^2 x - 1 = 0$ for every $x$, the combination $$(-1)\cdot 1 \,+\, 0\cdot \cos x \,+\, 0 \cdot \sin x \,+\, 1\cdot\cos^2 x \,+\, 1\cdot \sin^2 x \,+\, 0\cdot \cos x \sin x \ = \ 0$$ is a non-trivial dependence relation, so the six functions are linearly dependent. More generally, any subset of these trig functions containing $1, \ \cos^2x,\ \sin^2x$ will be linearly dependent.
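This dependence can also be seen numerically: sampling the six functions at a few points gives a matrix whose columns are dependent, and a null-space computation recovers the Pythagorean relation. A minimal sketch, assuming NumPy (a rank computed from samples is only suggestive, but here it matches the identity exactly):

```python
import numpy as np

# Sample the six functions at a handful of generic points.
x = np.linspace(0.1, 3.0, 8)
F = np.column_stack([np.ones_like(x), np.cos(x), np.sin(x),
                     np.cos(x)**2, np.sin(x)**2, np.cos(x)*np.sin(x)])

# Rank 5 < 6 signals a dependence relation among the six columns.
print(np.linalg.matrix_rank(F))          # expected: 5

# The (numerical) null-space direction recovers cos^2 x + sin^2 x - 1 = 0.
_, _, Vt = np.linalg.svd(F)
c = Vt[-1]
print(np.round(c / c[0], 6))             # proportional to (1, 0, 0, -1, -1, 0)
```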


    Recall now one version of the Fundamental Theorem of Algebra.

    Fundamental Theorem of Algebra: A polynomial $$P(x) \ = \ a_0 \, + \, a_1 x \, +\, a_2 x^2 \, + \, \ldots \, + \, a_n x^n$$ with real coefficients, not all zero, has at most $n$ real roots, i.e., the graph of $y = P(x)$ crosses the $x$-axis at most $n$ times.


  Example 3: The monomials $$1,\quad x, \quad x^2, \quad \ldots,\quad x^m,\quad \ldots, \quad x^N $$ are linearly independent in the vector space $\mathcal{P}_N$.

  Solution: Since the zero vector in $\mathcal{P}_N$ is the polynomial $0_P$ that has value $0$ for all $x$, we have to solve the polynomial equation $$c_0 + c_1 x + c_2 x^2 + \ldots + c_m x^m + \ldots + c_N x^N \ = \ 0_P\,.$$ In other words, we have to determine all values of $c_0,\, c_1,\, c_2,\, \ldots,\, c_N$ such that

$$c_0 + c_1 x + c_2 x^2 + \ldots + c_m x^m + \ldots + c_N x^N \ = \ 0 $$

holds for all $x$. Such an equation says that every real number $x$ is a root of the polynomial on the left-hand side. But by the Fundamental Theorem of Algebra a polynomial with coefficients not all zero can have at most $N$ real roots, so the only way the equation can hold for all $x$ is when $$ c_0 = c_1 = c_2 = \ldots = c_N = 0$$ holds. Thus the monomials $$1,\ \ x, \ \ x^2, \ \ \ldots,\ \ x^m,\ \ \ldots, \ \ x^N $$ are linearly independent in $\mathcal{P}_N$.
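The same conclusion can be checked computationally: evaluating the monomials at $N+1$ distinct points produces a Vandermonde matrix, which is invertible, so the only coefficient vector that vanishes at all the sample points is the zero vector. A short sketch, assuming NumPy and taking $N = 4$ purely for illustration:

```python
import numpy as np

N = 4
# N+1 distinct sample points; any distinct choice works.
x = np.arange(N + 1, dtype=float)

# Column j holds x**j, so V @ c samples c_0 + c_1 x + ... + c_N x^N.
V = np.vander(x, N + 1, increasing=True)

# An invertible Vandermonde matrix forces c = 0 in V @ c = 0,
# so the monomials 1, x, ..., x^N are linearly independent.
print(np.linalg.matrix_rank(V) == N + 1)    # expected: True
```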


    Recall next the notion of the Wronskian:

    Definition 2: For $U \subseteq \mathbb{R},$ the Wronskian associated with functions $$ f_1(x), \quad f_2(x), \quad \ldots \quad f_n(x) \in C^{(n-1)}(U)$$ is the determinant function $${\bf W}(x) \ = \ \left|\begin{array}{ccccc} f_1(x) & f_2(x) & f_3(x) & \cdots & f_n(x) \\ f_1'(x) & f_2'(x) & f_3'(x) & \cdots & f_n'(x) \\ f_1''(x) & f_2''(x) & f_3''(x) & \cdots & f_n''(x) \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ f_1^{(n-1)}(x) & f_2^{(n-1)}(x) & f_3^{(n-1)}(x) & \cdots & f_n^{(n-1)}(x) \end{array}\right|, \qquad x \in U.$$

    Many differential equation courses restrict discussion to the $2 \times 2$ case $${\bf W}(x) \ = \ \left|\begin{array}{cc} f_1(x) & f_2(x) \\ f_1'(x) & f_2'(x) \end{array}\right| \ = \ f_1(x)f_2'(x) - f_1'(x) f_2(x)\,,$$ but the general $n \times n$ case is conceptually the same and only slightly more complicated algebraically. A basic result for general $n$ is

    Theorem 3: Let $U \subseteq \mathbb{R},$ let $V \subseteq C^{\ (n-1)}(U)$ be a subspace, and let $f_1,\, f_2, \, \ldots, \, f_n \in V.$ Then $f_1,\, f_2, \, \ldots, \, f_n$ are linearly independent in $V$ if the associated Wronskian ${\bf W}(x)$ is non-zero at some $x_0 \in U,$ i.e., ${\bf W}(x_0) \ne 0$ for some $x_0.$
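As an illustration of Theorem 3, the Wronskian of $\sin x$ and $\cos x$ can be computed symbolically by building the matrix of derivatives directly; since it is non-zero (in fact identically $-1$), the two functions are linearly independent in $C^{1}(\mathbb{R})$. A minimal sketch, assuming SymPy is available:

```python
import sympy as sp

x = sp.symbols('x')
fns = [sp.sin(x), sp.cos(x)]
n = len(fns)

# Build the Wronskian matrix: row i holds the i-th derivatives of the functions.
W = sp.Matrix([[sp.diff(f, x, i) for f in fns] for i in range(n)])

w = sp.simplify(W.det())
print(w)    # -1, which is non-zero, so sin x and cos x are
            # linearly independent by Theorem 3.
```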


II. BASIS

    Now let's combine the ideas of spanning and linear independence to arrive at:

    Definition 3: A subset ${\cal B} \,=\, \{{\bf v}_1,\, {\bf v}_2, \dots , \, {\bf v}_p\}$ of a real vector space $V$ is said to be a Basis for $V$ when

           (i)  $V \ = \ \text{Span}\big\{{\bf v}_1,\, {\bf v}_2, \dots , \, {\bf v}_p\big\}\,,$ and

           (ii) the set $\big\{{\bf v}_1,\, {\bf v}_2, \dots , \, {\bf v}_p\big\}$ is linearly independent.

    Bases are fundamental in all areas of linear algebra and linear analysis, including matrix algebra, Euclidean geometry, statistical analysis, solutions to linear differential and partial differential equations, linear boundary value problems, Fourier analysis, signal and image processing, data compression and control systems. As we shall see, basis vectors are the basic building blocks for representing solutions of a linear system whatever form that system may take.

    Since $V\ = \ \text{Span}\big\{{\bf v}_1,\, {\bf v}_2, \dots , \, {\bf v}_p\big\}\,,$ every ${\bf x}$ in $V$ can be written as a linear combination $${\bf x} \ = \ c_1 {\bf v}_1\, +\, c_2{\bf v}_2\, +\, \ldots\, +\, c_p {\bf v}_p$$ for at least one set of real numbers $c_1,\, c_2, \dots,\, c_p$. But if $d_1, \, d_2,\, \ldots ,\, d_p$ is another choice of real numbers such that $${\bf x} \ = \ d_1 {\bf v}_1\, +\, d_2{\bf v}_2\, +\, \ldots\, +\, d_p {\bf v}_p\,,$$ then $${\bf x} - {\bf x} \ = \ {\bf 0}_V \ = \ (c_1 - d_1) {\bf v}_1 + (c_2 - d_2) {\bf v}_2 + \dots + (c_p - d_p) {\bf v}_p\,.$$ The linear independence of $\{{\bf v}_1,\, {\bf v}_2, \dots , \, {\bf v}_p\}$ then ensures that $$c_1 - d_1 \ = \ 0\,, \quad c_2 - d_2 \ = \ 0\,, \quad \dots\,, \quad c_p - d_p \ = \ 0\,, \qquad i.e., \quad c_1 \,=\, d_1,\quad c_2 \,=\, d_2,\quad \ldots, \quad c_p \,=\, d_p\,.$$ Consequently,

    Basis Property: When a set ${\cal B} \,=\, \big\{{\bf v}_1,\, {\bf v}_2, \dots , \, {\bf v}_p\big\}$ is a Basis for a vector space $V$, then each ${\bf x}$ in $V$ has a unique representation $${\bf x} \ = \ c_1 {\bf v}_1 + c_2{\bf v}_2 + \ldots + c_p {\bf v}_p$$ in terms of the basis vectors.
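In ${\mathbb R}^n$ these coefficients can be computed by solving a linear system whose coefficient matrix has the basis vectors as columns. A short sketch, assuming NumPy and using ${\bf v}_1,\, {\bf v}_2,\, {\bf v}_3$ from the problem in Section I (three linearly independent vectors in ${\mathbb R}^3$, hence a basis); the target vector ${\bf x}$ is an arbitrary illustrative choice:

```python
import numpy as np

# Basis vectors of R^3, stored as the columns of B.
B = np.column_stack([[1, 2, 1], [1, 3, 4], [-1, 1, 9]]).astype(float)

x = np.array([2.0, 7.0, 14.0])

# Solve B @ c = x; by the Basis Property the solution c is unique.
c = np.linalg.solve(B, x)
print(c)
print(np.allclose(B @ c, x))    # True: x = c_1 v_1 + c_2 v_2 + c_3 v_3
```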

    Intuition tells us that ${\mathbb R}^3$ is $3$-dimensional, but might it have a basis containing, say, four vectors? Or could ${\mathbb R}^n$ have a basis, other than the standard basis, containing $n-1$ or $n+1$ vectors? An important theorem says NO:

    Theorem 4: If $\big\{ {\bf v}_1, \ {\bf v}_2, \ \ldots, \ {\bf v}_n \big \}$ is a basis for a real vector space $V$, then every other basis of $V$ also contains exactly $n$ elements.

    We can now define the dimension of a vector space.

    Definition 4: If a vector space $V$ has a basis of $n$ elements, then the DIMENSION of $V$ is $n.$ We may write $\textrm{dim}(V) = n$ or $V \cong \mathbb{R}^n.$

    Theorem 4 ensures that dimension is well-defined. A vector space with no finite basis is said to be infinite-dimensional; such a space contains an infinite collection of linearly independent elements. For example, the vector space $\mathcal{P} \,=\, \bigcup_n \mathcal{P}_n$ of all polynomials is infinite-dimensional: the monomials $1,\, x,\, x^2,\, \ldots$ form an infinite linearly independent set, so no finite set of polynomials can span $\mathcal{P}$. On the other hand, the set of all bases of $\mathbb{R}^n$ can be characterized in a useful way:

  Theorem 5: The vectors ${\bf b}_1,\ {\bf b}_2,\ \dots, \ {\bf b}_n$ in ${\mathbb R}^n$ are a basis for ${\mathbb R}^n$ if and only if the $n \times n$ matrix $$A \ = \ \big[ {\bf b}_1\ \ {\bf b}_2 \ \ \cdots \ \ {\bf b}_n \big]$$ having the basis vectors as columns is invertible.

  Proof: If $A$ is invertible and $A {\bf x}\,=\, {\bf 0}$, then ${\bf x}\,=\, A^{-1}{\bf 0} \,=\, {\bf 0}$, so $${\cal B} \ = \ \big\{ {\bf b}_1,\ \ {\bf b}_2, \ \ \dots, \ \ {\bf b}_n\big\}$$ is a linearly independent set in ${\mathbb R}^n$.

    On the other hand, if ${\bf b}$ is any vector in ${\mathbb R}^n$, then the matrix equation $$A {\bf x} \,=\, x_1 {\bf b}_1 + x_2 {\bf b}_2 + \dots + x_n{\bf b}_n \,=\,{\bf b}\,,$$

  always has a solution ${\bf x}\,=\, A^{-1}{\bf b}$, so $$\text{Span}\big\{ {\bf b}_1\ , \ {\bf b}_2 \ , \ \dots \ , \ {\bf b}_n\big\}\ = \ {\mathbb R}^n.$$ Thus ${\cal B}$ is a basis for ${\mathbb R}^n$ when $A$ is invertible.

    Conversely, suppose $${\cal B} \ = \ \big\{ {\bf b}_1,\ \ {\bf b}_2, \ \ \dots, \ \ {\bf b}_n\big\}$$ is a basis for ${\mathbb R}^n$ and $A$ is the $n \times n$ matrix $$A \ = \ \big[ {\bf b}_1\ \ {\bf b}_2 \ \ \cdots \ \ {\bf b}_n \big].$$ Since ${\cal B}$ is linearly independent, $A {\bf x} \,=\, {\bf 0}$ has only the trivial solution, so $A$ has a pivot in every column; being square, $A$ therefore row reduces to the identity matrix $I_n$, and hence $A$ is invertible.
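In practice this criterion gives a one-line computational test: form the matrix with the candidate vectors as columns and check that it is invertible. A brief sketch, assuming NumPy; the candidate sets below are arbitrary illustrative choices:

```python
import numpy as np

def is_basis(vectors):
    """Return True if the given vectors form a basis of R^n."""
    A = np.column_stack(vectors).astype(float)
    # A basis of R^n needs exactly n vectors and an invertible matrix A.
    return A.shape[0] == A.shape[1] and np.linalg.matrix_rank(A) == A.shape[0]

print(is_basis([[1, 2, 1], [1, 3, 4], [-1, 1, 9]]))   # True
print(is_basis([[1, 0, 0], [0, 1, 0], [1, 1, 0]]))    # False: third column is the sum of the first two
```

In floating-point arithmetic a rank test of this kind is generally more robust than comparing a determinant with zero, since determinants of large matrices can overflow or underflow even when the matrix is well-conditioned.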


III. EXAMPLES

Every example in the lesson on SPAN is in fact expressed as the span of a basis. We list the respective results on dimension for those examples here.