VECTORS and MATRICES

By John Gilbert


    Linear Algebra is a mathematical framework, equally algebraic, computational, and geometric, that is used throughout applications of mathematics to science, engineering, and computer science, as well as to business and economics. What will we need to develop it? First, both the real and complex number systems: many matrices will have complex entries, as we shall see when we study eigenvalues and eigenvectors.

    A Complex Number is an expression of the form $z = x+i y$ where $x,\, y$ are real, i.e., in ${\mathbb R}$, and $i$ is a formal symbol satisfying the relation $i^2 = -1$, often written $i = \sqrt{-1}$. We call $x = \text{Re}\,z$ the real part and $y = \text{Im}\,z$ the imaginary part of $z = x + i y$. A real number $x$ is simply a complex number with zero imaginary part, thus embedding ${\mathbb R}$ as a subset of ${\mathbb C}$. Formally, therefore, ${\mathbb C} = {\mathbb R} + i \,{\mathbb R}$.

   The Complex Number System is the set ${\mathbb C}$ of all complex numbers with complex addition and multiplication based on the usual rules of real arithmetic$\,:$ $$\eqalign{ (x+i y) + (u + i v) \ &= \ (x+u) + i (y + v),\\ (x + i y)(u + i v) \ &= \ (xu - yv) + i (xv + yu),}$$ together with the relation $i^2 = -1$. Multiplication is commutative in the sense that $z w = w z$.
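    As a quick numerical check, here is a minimal Python sketch of these arithmetic rules, using Python's built-in complex type (1j plays the role of $i$; the values of $x,\, y,\, u,\, v$ are illustrative):

```python
# Check the complex addition and multiplication rules numerically.
x, y = 3.0, 2.0     # z = x + iy
u, v = 1.0, -4.0    # w = u + iv
z, w = complex(x, y), complex(u, v)

assert z + w == complex(x + u, y + v)           # (x+iy) + (u+iv) = (x+u) + i(y+v)
assert z * w == complex(x*u - y*v, x*v + y*u)   # (x+iy)(u+iv) = (xu-yv) + i(xv+yu)
assert 1j * 1j == -1                            # i^2 = -1
assert z * w == w * z                           # commutativity
```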

    The complex conjugate of $z = x + i y$ is the complex number $\overline{z} = x - i y$. In particular,

$$z\,\overline{z} \ = \ (x + iy)(x-iy) \ = \ x^2+y^2$$ is non-negative, so its square root exists and is called the modulus or norm of $z\,$, written $$\big|\,z\,\big| \ = \ \sqrt{x^2\,+\,y^2} \ = \ \big(z\,\overline{z}\,\big)^{1/2}\,.$$


    The complex conjugate has a number of useful properties: $$\text{Re}\,z \ = \ \frac{z + \overline{z}}{2}, \qquad \text{Im}\,z \ = \ \frac{z - \overline{z}}{2 i}, \qquad \overline{z + w} \ =\ \overline{z}+\overline{w}, \qquad \overline{zw} \ = \ \overline{z}\,\overline{w}\,.$$
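    These identities are easy to spot-check numerically; a minimal Python sketch, with illustrative values of $z$ and $w$:

```python
# Numerical checks of the conjugate and modulus identities.
z, w = complex(4, 3), complex(1, 2)

assert (z + z.conjugate()) / 2 == z.real                    # Re z = (z + conj(z))/2
assert (z - z.conjugate()) / 2j == z.imag                   # Im z = (z - conj(z))/(2i)
assert (z + w).conjugate() == z.conjugate() + w.conjugate()
assert (z * w).conjugate() == z.conjugate() * w.conjugate()
assert abs(z) == (z * z.conjugate()).real ** 0.5            # |z| = (z conj(z))^{1/2}
```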

    Such properties are useful in simple computations with complex numbers, for instance:


  Problem 1: express the quotient $\, \displaystyle {\frac{z_1}{z_2}}$ in the form $a + i b$ when $$z_1 \ = \ 4 + 3 i,\qquad z_2 \ = \ 1 + 2 i .$$

 Solution: we use the fact that $$ \frac{z_1}{z_2} \ = \ \frac{z_1 \overline{z_2}}{z_2 \overline{z_2}} \ = \ \frac{z_1 \overline{z_2}}{\big|\,z_2\,\big|^2},$$

so the denominator is now a real number.

   Thus $$ \frac{z_1}{z_2} \ = \ \frac{(4 + 3 i)(1-2i)}{1^2 + 2^2} \ = \ \frac{10 - 5i}{5} \ = \ 2 - i\,.$$


  Problem 2: write the expression $ \, \displaystyle {\frac{z_1}{z_2} - \frac{z_1}{\overline{z_2}}}$ in the form $a + i b$ when $$z_1 \ = \ 2-i,\qquad z_2 \ = \ 1 + 2 i .$$

  Solution: we use the fact that $$ \frac{z_1}{z_2} - \frac{z_1}{\overline{z_2}} \ = \ \frac{z_1 \overline{z_2} - z_1 z_2}{z_2 \overline{z_2}},$$ so the denominator is now a real number.

    But when $z_1 = 2-i,\ \ z_2 = 1 + 2 i \,,$ $$z_1 \overline{z_2} - z_1z_2 \ = \ z_1\big(\overline{z_2} - z_2\big) \ = \ (2-i)(-4i)\ = \ -4 - 8i\,,$$ while $$ z_2 \overline{z_2} \ = \ (1+2i)(1-2i) \ = \ 5\,.$$ Thus $$ {\frac{z_1}{z_2} - \frac{z_1}{\overline{z_2}}} \ = \ \frac{-4 - 8i}{5}\ = \ -\frac{4}{5} - \frac{8}{5}i\,.$$
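    Both problems can be verified with a short Python sketch (a small tolerance allows for floating-point division):

```python
# Verify Problems 1 and 2 with Python's built-in complex type.
z1, z2 = 4 + 3j, 1 + 2j
assert abs(z1/z2 - (2 - 1j)) < 1e-12                            # Problem 1

z1, z2 = 2 - 1j, 1 + 2j
assert abs(z1/z2 - z1/z2.conjugate() - (-4/5 - 8j/5)) < 1e-12   # Problem 2
```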


    The basic building blocks, however, are vectors and matrices, because they can be scaled, added, and multiplied - that's where the 'linear' and the 'algebra' both come in. We'll build on many concepts you've learned already in one, two, and three dimensions, but we'll certainly want to work in arbitrary (finite) dimensions, not just low ones, so definitions and notation will be set up for arbitrary dimension $n$ wherever possible. Later things get fancier still, because infinitely many dimensions will be required, especially when we get to functions as vectors!

         A Vector is an ordered $n$-tuple of real numbers written as a column: $$ \ \ {\bf v} \, =\, \left[\begin{array}{cc} v_1 \\ v_2 \\ \vdots \\ v_{n} \end{array}\right]. $$ The entries of ${\bf v}$ are its components, and $v_j$ is called the $j^{\text{th}}$ component. The set of all such $n$-component vectors will be denoted by ${\mathbb R}^n$ - it's often called Euclidean $n$-space, just as we realized Euclidean $1$-space as the number line, Euclidean $2$-space as the set of all ordered pairs $(a,\, b)$, and $3$-space as the set of all ordered triples $(a,\,b,\, c)$. Thus we will often realize ${\mathbb R}^n$ as the set of all ordered $n$-tuples $(a_1,\, a_2,\, \ldots,\, a_{n})$ of real numbers; then we can think and speak of a point $(a_1,\, a_2,\, \ldots,\, a_{n})$ as corresponding to the vector ${\bf a}$ whose components are $a_1,\, a_2,\, \ldots,\, a_{n}$, and vice versa.

    The definitions of vector addition and scalar multiplication in ${\mathbb R}^n$ for $n \le 3$ extend to arbitrary $n$:

  We add/subtract vectors ${\bf u},\ {\bf v}$ in ${\mathbb R}^n$ by $${\bf u} \pm {\bf v} \ = \ \left[\begin{array}{cc} u_1 \\ u_2 \\ \vdots \\ u_{n} \end{array} \right] \pm \left[\begin{array}{cc} v_1 \\ v_2 \\ \vdots \\ v_{n} \end{array} \right] \ = \ \left[\begin{array}{cc} u_1 \pm v_1 \\ u_2 \pm v_2 \\ \vdots \\ u_{n} \pm v_{n} \end{array} \right]\,,$$ and form the scalar multiple $k {\bf v}$ of $k$ in ${\mathbb R}$ and ${\bf v}$ in ${\mathbb R}^n$ by $$k{\bf v} \ = \ k\left[\begin{array}{cc} v_1 \\ v_2 \\ \vdots \\ v_{n} \end{array} \right] \ = \ \left[\begin{array}{cc} kv_1 \\ kv_2 \\ \vdots \\ kv_{n} \end{array} \right]\,;$$

in other words, calculations are done component-wise. For example, in ${\mathbb R}^4$ $$ 3{\bf u} - 5 {\bf v} \ = \ 3\left[\begin{array}{cc} 2 \\ -1 \\ 3\\ 7 \end{array} \right] -5 \left[\begin{array}{cc} 4 \\ -2 \\ 2 \\ 3 \end{array} \right] \ = \ \left[\begin{array}{cc} 6 \\ -3 \\ 9\\ 21 \end{array} \right] - \left[\begin{array}{cc} 20 \\ -10 \\ 10 \\ 15 \end{array} \right] \ = \ \left[\begin{array}{cc} -14 \\ 7 \\ -1 \\ 6 \end{array} \right].$$
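    A minimal NumPy sketch reproducing this calculation:

```python
import numpy as np

# Component-wise vector arithmetic in R^4, reproducing the example above.
u = np.array([2, -1, 3, 7])
v = np.array([4, -2, 2, 3])

print(3*u - 5*v)    # [-14   7  -1   6]
```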


    General vectors in ${\mathbb R}^n$ satisfy the following properties:

    Theorem 1.1   Algebraic properties of vectors in ${\mathbb R}^n$:   when ${\bf u},\, {\bf v},\, {\bf w}$ are vectors in ${\mathbb R}^n$ and $c,\, d$ are scalars, then

         $ \quad {\bf u}\, + \, {\bf v} \ = \ {\bf v} \, + \, {\bf u} \qquad \qquad \qquad \qquad \qquad \qquad $ (commutativity)

         $ \quad ({\bf u} + {\bf v})+{\bf w} \,=\, {\bf u} +( {\bf v} + {\bf w}) \qquad \qquad \qquad \qquad $ (associativity)

         $ \quad {\bf u}\, + \, {\bf 0} \ = \ {\bf u},\qquad {\bf u} + (-{\bf u}) \,=\, {\bf 0} \qquad \qquad \qquad $ (zero vector properties)

         $ \quad c({\bf u}\, + \, {\bf v}) \ = \ c{\bf u} \, + \, c{\bf v} \qquad \qquad \qquad \qquad \qquad \qquad $ (distributivity)

         $ \quad (c + d){\bf u} \ = \ c{\bf u} \, + \, d{\bf u} \qquad \qquad \qquad \qquad \qquad \qquad $ (distributivity)

         $ \quad c({d\,\bf u}) = (cd)\,{\bf u}\,, \quad 1\,{\bf u}= {\bf u}\,, \quad 0\,{\bf u} = {\bf 0} \qquad $ (scalar multiplication properties)
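    Each identity in Theorem 1.1 can be spot-checked numerically; a minimal NumPy sketch with illustrative vectors in ${\mathbb R}^3$:

```python
import numpy as np

# Spot-check the identities of Theorem 1.1.
u, v, w = np.array([1., 2., 3.]), np.array([-1., 0., 4.]), np.array([2., 2., -5.])
c, d = 2.0, -3.0
zero = np.zeros(3)

assert np.array_equal(u + v, v + u)                     # commutativity
assert np.array_equal((u + v) + w, u + (v + w))         # associativity
assert np.array_equal(u + zero, u) and np.array_equal(u + (-u), zero)
assert np.array_equal(c*(u + v), c*u + c*v)             # distributivity
assert np.array_equal((c + d)*u, c*u + d*u)             # distributivity
assert np.array_equal(c*(d*u), (c*d)*u) and np.array_equal(1*u, u) and np.array_equal(0*u, zero)
```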


    Length, distance, and perpendicularity impose a (Euclidean) geometry on ${\mathbb R}^n$ via a Dot Product.

    Definition 1.1:   the DOT PRODUCT of vectors $${\bf u}\ = \ \left[\begin{array}{cc} u_1 \\ u_2 \\ \vdots \\ u_{n} \end{array} \right], \qquad \qquad {\bf v} \ = \ \left[\begin{array}{cc} v_1 \\ v_2 \\ \vdots \\ v_{n} \end{array} \right],$$   in ${\mathbb R}^n$ is defined by $${\bf u} \cdot {\bf v} \ = \ u_1 v_1 + u_2 v_2 + \ldots + u_{n} v_{n}\,.$$   Two vectors ${\bf u},\ {\bf v}$ are said to be Orthogonal (or Perpendicular) when ${\bf u} \cdot {\bf v} \,=\, 0$. The term INNER PRODUCT is often used instead of dot product.

    Several very useful properties - all familiar from the standard dot product on ${\mathbb R}^3$ - follow from this definition.

   Property 1.1: for vectors ${\bf u},\ {\bf v},\ {\bf w}$ in ${\mathbb R}^n$ and scalars $a,\, b,$

       ${\bf u}\cdot{\bf v} \ = \ {\bf v}\cdot{\bf u}\,,$

       $ (a{\bf u} + b {\bf v}) \cdot {\bf w} \ = \ a\,{\bf u}\cdot{\bf w} + b\,{\bf v}\cdot{\bf w}\,,$

       ${\bf u}\cdot{\bf u} \ \ge \ 0\,,$

       ${\bf u}\cdot{\bf u} \ = \ 0\ $ if and only if $\, {\bf u} \,=\, {\bf 0}\,.$
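    A quick NumPy spot-check of these properties on illustrative vectors:

```python
import numpy as np

# Spot-check Property 1.1 in R^3.
u = np.array([1.0, 2.0, -1.0])
v = np.array([2.0, 1.0,  4.0])
w = np.array([3.0, 0.0,  1.0])
a, b = 2.0, -3.0

assert np.dot(u, v) == np.dot(v, u)                                        # symmetry
assert np.isclose(np.dot(a*u + b*v, w), a*np.dot(u, w) + b*np.dot(v, w))   # linearity
assert np.dot(u, u) >= 0                                                   # positivity
assert np.dot(np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])) == 0   # orthogonal pair
```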


    Since ${\bf v} \cdot {\bf v} \ge 0$, the square root $\sqrt{{\bf v} \cdot {\bf v}}$ is defined for all ${\bf v}$, so the definition of the length of a vector in $2$- and $3$-space extends to ${\mathbb R}^n$.

    Definition 1.2:   the LENGTH of a vector ${\bf v}$ in ${\mathbb R}^n$, often referred to as the NORM of ${\bf v}$, is the scalar defined by $$\| {\bf v}\| \ = \ \sqrt{{\bf v}\cdot {\bf v}} \ = \ \sqrt{ v_1^2 + v_2^2 + \dots + v_{n}^2}\,.$$

 The DISTANCE $$\text{dist}({\bf u},\, {\bf v}) \ = \ \| {\bf u} - {\bf v}\|$$ between vectors $ {\bf u},\, {\bf v }$ in ${\mathbb R}^n$ is the length of the vector $\,{\bf u} - {\bf v}$.

    The ANGLE $\theta$ between non-zero vectors ${\bf u},\, {\bf v}$ can then be introduced using the definition of dot product: $$\cos \theta \ = \ \frac {{\bf u} \cdot {\bf v}}{\|{\bf u}\| \|{\bf v}\|}. $$ (Note that $\|{\bf v}\| = 0 \ \Longleftrightarrow \ {\bf v} = {\bf 0}$, so the denominator is non-zero, and by the Cauchy-Schwarz inequality $|{\bf u} \cdot {\bf v}| \,\le\, \|{\bf u}\|\,\|{\bf v}\|$, so the quotient lies in $[-1,\,1]$; thus this definition makes sense.)
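    The norm, distance, and angle formulas translate directly into NumPy; a sketch with illustrative vectors:

```python
import numpy as np

# Length, distance, and angle in R^n (here n = 2).
u = np.array([3.0, 4.0])
v = np.array([0.0, 1.0])

print(np.sqrt(np.dot(u, u)))    # ||u|| = sqrt(u . u) = 5.0
print(np.linalg.norm(u - v))    # dist(u, v) = ||u - v|| = sqrt(18)
cos_theta = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
print(np.degrees(np.arccos(cos_theta)))   # angle, about 36.87 degrees
```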

    Now let's examine matrices:

         An $m \times n$ MATRIX with real entries is a collection of $mn$ real numbers $a_{jk},\, 1 \le j\le m,\ 1 \le k \le n,\,$ listed either as an Array with $m$ rows and $n$ columns or as a row of $n$ column vectors or a column of $m$ row vectors as shown in $$ A \ = \ \left[\begin{array}{cc} a_{11} & a_{12} & a_{13} & \cdots & a_{1,\,n} \\ a_{21} & a_{22} & a_{23} & \cdots & a_{2,\,n} \\ a_{31} & a_{32} & a_{33} & \cdots & a_{3,\, n} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ a_{m,\,1} & a_{m,\, 2} & a_{m,\,3} & \cdots & a_{m,\, n} \end{array}\right] \ = \ \big[\begin{array}{cc} {\bf a}_1 & {\bf a}_2 & \ldots & {\bf a}_{n}\\ \end{array}\big] $$   where $${\bf a}_1 \,=\, \left[\begin{array}{cc} a_{11} \\ a_{21} \\ \vdots \\ a_{m,\,1} \end{array} \right]\,, \qquad {\bf a}_2 \,=\, \left[\begin{array}{cc} a_{12} \\ a_{22} \\ \vdots \\ a_{m,\,2} \end{array} \right]\,, \qquad \ldots \,, \qquad {\bf a}_{n} \,=\, \left[\begin{array}{cc} a_{1,\,n} \\ a_{2,\, n} \\ \vdots \\ a_{m,\,n} \end{array} \right]$$ are column vectors in ${\mathbb R}^m$. These definitions will be used interchangeably. To keep from drowning in notation, however, it's common to write a matrix as $A = [a_{jk}]$ instead of writing out all the entries in $A$. The set of all $m \times n$ matrices with real entries will be denoted by $M_{m \times n}({\mathbb R})$. Notice that $M_{m \times 1}({\mathbb R})$ is just another way of thinking of ${\mathbb R}^m$.

    Addition and scalar multiplication of matrices are defined as follows:

  We add/subtract matrices $A,\, B$ in $M_{m \times n}({\mathbb R})$ entry-by-entry: $$A \pm B \ = \ [a_{jk}] \pm [b_{jk}] \ = \ [a_{jk} \pm b_{jk}] \,.$$ In particular, $A \pm B$ is defined only when both $A,\, B$ are $m \times n$, i.e., have the same shape, and then $A \pm B$ is again $m \times n$. For example, $$\left[\begin{array}{cc} 1 & 2 & 3 \\ -3 & 4 & -1\end{array}\right] + \left[\begin{array}{cc} 4 & -1 & 1 \\ 2 & -4 & -1\end{array}\right]\ = \ \left[\begin{array}{cc} 5 & 1 & 4 \\ -1 & 0 & -2\end{array}\right],$$ $$\left[\begin{array}{cc} 1 & 2 & 3 \\ -3 & 4 & -1\end{array}\right] - \left[\begin{array}{cc} 4 & -1 & 1 \\ 2 & -4 & -1\end{array}\right]\ = \ \left[\begin{array}{cc} -3 & 3 & 2 \\ -5 & 8 & 0\end{array}\right]\,.$$   We define the scalar multiple of $k$ in ${\mathbb R}$ and $A$ in $M_{m \times n}({\mathbb R})$ by $$k A \ = \ k [a_{jk}] \ = \ [k a_{jk}]\,.$$ Thus each entry in $A$ is multiplied by $k$; in particular, the scalar multiple $k A$ of an $m \times n$ matrix $A$ is again $m \times n$, so scalar multiplication preserves shape too. For example, $$3 \left[\begin{array}{cc} 1 & 2 & 3 \\ -3 & 4 & -1\end{array}\right]\ = \ \left[\begin{array}{cc} 3 & 6 & 9 \\ -9 & 12 & -3\end{array}\right]\,.$$
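    A minimal NumPy sketch reproducing these examples:

```python
import numpy as np

# Entry-by-entry matrix addition, subtraction, and scalar multiplication.
A = np.array([[1, 2, 3], [-3, 4, -1]])
B = np.array([[4, -1, 1], [2, -4, -1]])

print(A + B)    # [[ 5  1  4]
                #  [-1  0 -2]]
print(A - B)    # [[-3  3  2]
                #  [-5  8  0]]
print(3 * A)    # [[ 3  6  9]
                #  [-9 12 -3]]
```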


    Matrices have the same algebraic properties listed in Theorem 1.1 that vectors have. To introduce multiplication, let's begin with the product of a matrix and a vector:

   Definition 1.3   Matrix-Vector Rule: if $A=\big[{\bf a}_1\ {\bf a}_2 \ \ldots\ {\bf a}_{n} \big]$ is an $m \times n$ matrix written as $n$ column vectors $ {\bf a}_1,\, {\bf a}_2,\, \dots,\, {\bf a}_{n}$ in ${\mathbb R}^m$ and ${\bf x}$ is a vector in ${\mathbb R}^n$, then $A{\bf x}$ is the vector in ${\mathbb R}^m$ defined by $$A\,{\bf x} \ = \ \big[{\bf a}_1\ {\bf a}_2 \ \ldots\ {\bf a}_{n} \big] \left[\begin{array}{cc} x_1 \\ x_2 \\ \vdots \\ x_{n} \end{array} \right] \ = \ x_1 {\bf a}_1 + x_2 {\bf a}_2 + \ldots + x_{n} {\bf a}_{n}. $$     Thus the product of an $m \times n$ matrix $A$ and a vector ${\bf x}$ in ${\mathbb R}^n$ is a vector $A {\bf x}$ in ${\mathbb R}^m$. (Note carefully how $m$ and $n$ enter into the definition of this product!)


    Collecting all these concepts together, we can now solve:

  Problem 1.3 : determine the vector $${\bf v}\ = \ \left[\begin{array}{cc} 2 & 1 \\ 4 & 2 \end{array}\right] \left[\begin{array}{cc} 2 \\ 3\end{array}\right] - 5 \left[\begin{array}{cc} 1 \\ 3\end{array}\right] $$ in ${\mathbb R}^2$.

  Solution: by the matrix-vector rule,

$$ \left[\begin{array}{cc} 2 & 1 \\ 4 & 2 \end{array}\right] \left[\begin{array}{cc} 2 \\ 3\end{array}\right] \ = \ 2 \left[\begin{array}{cc} 2 \\ 4\end{array}\right] + 3 \left[\begin{array}{cc} 1 \\ 2\end{array}\right] \ = \ \left[\begin{array}{cc}7\\ 14 \end{array}\right].$$ So $${\bf v} \ = \ \left[\begin{array}{cc}7\\ 14 \end{array}\right] - 5 \left[\begin{array}{cc} 1 \\ 3\end{array}\right] \ = \ \left[\begin{array}{cc} 2 \\ -1\end{array}\right]. $$
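    A NumPy sketch of the Matrix-Vector Rule, reproducing Problem 1.3:

```python
import numpy as np

# A x is the linear combination of the columns of A with weights x_1, ..., x_n.
A = np.array([[2, 1], [4, 2]])
x = np.array([2, 3])

assert np.array_equal(A @ x, x[0]*A[:, 0] + x[1]*A[:, 1])   # x1*a1 + x2*a2

v = A @ x - 5*np.array([1, 3])
print(v)    # [ 2 -1]
```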


    So far we've added and subtracted two $m \times n$ matrices $A,\, B$ to form a new matrix $A \pm B$; we've also formed scalar multiples $k A$. Both matrix addition and scalar multiplication thus produced an $m \times n$ matrix from $m \times n$ matrices, and all this enabled us to form Linear Combinations $cA + d B$ of matrices in $M_{m \times n}({\mathbb R})$ just as we formed linear combinations of vectors in ${\mathbb R}^n$. In addition, using the Matrix-Vector Rule, we even learned how to multiply an $m \times n$ matrix $A$ and a vector ${\bf x}$ in ${\mathbb R}^n$: $$A {\bf x} \ = \ [ {\bf a}_1\ \ {\bf a}_2\ \ \ldots \ \ {\bf a}_{n} ]\left[\begin{array}{cc} x_1 \\ x_2 \\ \vdots \\ x_{n} \end{array} \right] \ = \ x_1 {\bf a}_1 + x_2 {\bf a}_2 + \ldots + x_{n} {\bf a}_{n}\,,$$ producing a vector in ${\mathbb R}^m$. It will be crucial, however, to learn when and how to multiply matrices $A,\, B$ to form the matrix product $AB$ more generally. Recall that it's often convenient to denote the $(j,\,k)^{\small{\hbox{th}}}$-entry in a matrix $A$ by $a_{jk},\, (A)_{jk}$ or $A_{jk}$.

   Row-Column Rule: if $A=\big[a_{jk}\big]$ is an $m \times p$-matrix and $B= \big[b_{jk}\big]$ is a $p \times n$-matrix, then the product $AB$ is the $m \times n$-matrix whose $(j,k)^{\text{th}}$-entry is given by $$(AB)_{jk} \ = \ a_{j1}b_{1k} \,+\, a_{j2}b_{2k} \,+\, \ldots \,+ a_{j,\,p}b_{p,\,k} \,.$$

  The expression $(AB)_{jk}$ can be also written more compactly in summation notation as $$(AB)_{jk} \ = \ \sum_{r = 1}^{p}\ a_{jr}b_{rk} \,.$$ Crucially, the matrix product $AB$ is defined when $A$ is $m \times p$ and $B$ is $p \times n$; the product $AB$ is then an $m \times n$ matrix.
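    The summation formula translates directly into code; here is a minimal Python sketch (a teaching transcription only, not an efficient implementation):

```python
import numpy as np

# A direct transcription of the Row-Column Rule: (AB)_{jk} = sum_r a_{jr} b_{rk}.
# In practice NumPy's A @ B does the same job far more efficiently.
def matmul(A, B):
    m, p = A.shape
    p2, n = B.shape
    assert p == p2, "inner dimensions must agree: A is m x p, B must be p x n"
    C = np.zeros((m, n))
    for j in range(m):          # row index of A
        for k in range(n):      # column index of B
            C[j, k] = sum(A[j, r] * B[r, k] for r in range(p))
    return C
```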

    For example, suppose we wish to calculate the $(1,3)$-entry in the product $AB$ of the matrices $$A \ = \ \left[\begin{array}{cc} 2 & -1 \\ 3 & 4 \end{array}\right], \qquad B \ = \ \left[\begin{array}{cc} 4 & 2 & -3 \\ -1 & 2 & 6 \end{array} \right].$$ To use the Row-Column Rule we take the entries in the first row ${\bf r}_1$ of $A$ together with the corresponding entries in the third column ${\bf a}_3$ of $B$, and sum the products of the respective entries: $$(AB)_{13} \ = \ (2)(-3) \,+\, (-1)(6) \ = \ -12\,.$$
The remaining entries can now be computed in the same way, showing that the matrix product $AB$ is the $2 \times 3$ matrix $$ \left[\begin{array}{cc} 2 & -1 \\ 3 & 4 \end{array}\right]\, \left[\begin{array}{cc} 4 & 2 & -3 \\ -1 & 2 & 6 \end{array} \right] \ = \ \left[\begin{array}{cc} 9 & 2 & -12 \\ 8 & 14 & 15 \end{array} \right]\,.$$ For large values of $m,\, n$ use technology!
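    A minimal NumPy check of the worked product above ("use technology" means the @ operator here):

```python
import numpy as np

# NumPy's @ operator carries out the Row-Column Rule for us.
A = np.array([[2, -1], [3, 4]])
B = np.array([[4, 2, -3], [-1, 2, 6]])

print(A @ B)
# [[  9   2 -12]
#  [  8  14  15]]
```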

    By using the Row-Column Rule together with the familiar algebra of real numbers, straightforward calculations show that many of the familiar multiplicative properties of real numbers carry over to matrix multiplication.

    Theorem 1.2   Properties of Matrix Multiplication:   when $A,\, B,\, C$ are matrices $($whose sizes allow the given operations to be performed$)$ and $k$ is a scalar, then

         $ \quad A(B C) \ = \ (A B) C \qquad \qquad \qquad \qquad \quad \qquad \qquad $ (associativity)

         $ \quad A(B + C)\ = \ AB + AC \qquad \qquad \qquad \qquad \qquad $ (left distributivity)

         $ \quad (A + B )C \ = \ AC + BC \qquad \qquad \qquad \qquad \qquad $ (right distributivity)

         $ \quad$ if $A$ is $m \times n$, then $I_m A \ = \ A \ = \ A I_n \qquad \qquad $ (multiplicative identity)

         $ \quad k(AB)\ = \ (k A)B\ = \ A(k B) \qquad \qquad \qquad $ (scalar multiplication properties)
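    These properties are easy to spot-check numerically on matrices of compatible sizes; a minimal NumPy sketch (the matrices are randomly generated illustrations):

```python
import numpy as np

# Spot-check the identities of Theorem 1.2 on small integer matrices.
rng = np.random.default_rng(0)
A = rng.integers(-5, 5, (2, 3))
B = rng.integers(-5, 5, (3, 4))
C = rng.integers(-5, 5, (4, 2))
D = rng.integers(-5, 5, (3, 4))   # same shape as B, for distributivity
k = 3

assert np.array_equal(A @ (B @ C), (A @ B) @ C)     # associativity
assert np.array_equal(A @ (B + D), A @ B + A @ D)   # left distributivity
assert np.array_equal((B + D) @ C, B @ C + D @ C)   # right distributivity
assert np.array_equal(np.eye(2, dtype=int) @ A, A @ np.eye(3, dtype=int))  # I_m A = A = A I_n
assert np.array_equal(k*(A @ B), (k*A) @ B) and np.array_equal(k*(A @ B), A @ (k*B))
```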

    But it's very important to notice that some multiplicative properties of real numbers do not carry over to matrices:

  Commutativity: the equality $AB = BA$ need not hold for matrices $A,\, B$.

  For example, $$ AB \ = \ \left[\begin{array}{cc} 0 & 1 \\ 0 & 2 \end{array}\right] \left[\begin{array}{cc} 0 & 2 \\ 0 & 1 \end{array}\right] \ = \ \left[\begin{array}{cc} 0 & 1 \\ 0 & 2 \end{array}\right],$$ while $$ BA \ = \ \left[\begin{array}{cc} 0 & 2 \\ 0 & 1 \end{array}\right] \left[\begin{array}{cc} 0 & 1 \\ 0 & 2 \end{array}\right] \ = \ \left[\begin{array}{cc} 0 & 4 \\ 0 & 2 \end{array}\right].$$

  Zero Divisors: when $AB = 0$, neither $A$ nor $B$ need be the zero matrix.

  For example, when $$ A \ = \ \left[\begin{array}{cc} 0 & 1 \\ 0 & 0 \end{array}\right], \qquad B\ = \ \left[\begin{array}{cc} 0 & 2 \\ 0 & 0 \end{array}\right],$$ then $$ AB \ = \ \left[\begin{array}{cc} 0 & 1 \\ 0 & 0 \end{array}\right] \left[\begin{array}{cc} 0 & 2 \\ 0 & 0 \end{array}\right] \ = \ \left[\begin{array}{cc} 0 &0 \\ 0 & 0 \end{array}\right].$$
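    Both counterexamples are easy to confirm with NumPy:

```python
import numpy as np

# Matrix multiplication is not commutative, and AB can vanish with A, B != 0.
A = np.array([[0, 1], [0, 2]])
B = np.array([[0, 2], [0, 1]])
assert not np.array_equal(A @ B, B @ A)             # AB != BA

A = np.array([[0, 1], [0, 0]])
B = np.array([[0, 2], [0, 0]])
assert np.array_equal(A @ B, np.zeros((2, 2)))      # AB = 0 though A != 0, B != 0
```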


    Thus the term 'linear algebra' has two basic components. The first is the 'linear' half. Given vectors ${\bf u},\, {\bf v}$ (resp. matrices $A,\, B$) and scalars $c,\, d$, the Linear Combination $c{\bf u} + d{\bf v}$ (resp. $cA + d B$) is again a vector (resp. matrix), defined component-wise. But recall that you also met linear combinations $af(x) + b g(x)$ of functions when dealing with properties of limits, derivatives, and integrals. In fact, Linearity is a fundamental concept occurring everywhere in mathematics and its applications. We formalize these ideas abstractly by introducing the notion of a Vector Space.

    The second key component is the 'algebra' half: the familiar algebraic properties of addition and multiplication of real numbers were passed on to matrices. But matrices with entries other than real numbers - complex numbers, functions, even other matrices - can be allowed. Defining operations entry-by-entry thus allows matrices to inherit the algebra of their individual entries, provided we are careful! This is extremely important in practice; together with the ready availability of computational tools, it is why linear algebra is fundamental to so much of what we do these days.