In [1]:
#
import numpy as np
import scipy as sp
import pandas as pd
import matplotlib as mp
import matplotlib.pyplot as plt
import seaborn as sns
import sklearn
import laUtilities as ut
import slideUtilities as sl
import demoUtilities as dm
from matplotlib import animation
from importlib import reload
from datetime import datetime
from IPython.display import Image, display_html, display, Math, HTML;
qr_setting = None

mp.rcParams['animation.html'] = 'jshtml';

Announcements¶

  • Homework 5 due today at 8pm

  • Homework 6 is out, due next Friday 3/24 at 8pm

  • Office hours

    • Today: Peer tutor Daniel Cho from 12:30-3:30pm on the CCDS 16th floor
    • Today: Abhishek Tiwari from 4-5pm on the CCDS 13th floor
  • Weekly reading and viewing assignments

    • Aggarwal sections 2.6-2.7
    • 3Blue1Brown video 7 and video 8

Lecture 21: Vector Subspace and Basis¶

In [2]:
#
fig = ut.three_d_figure((0,0), fig_desc = 'H = Span{a1, a2, a3}',
                        xmin = -10, xmax = 10, ymin = -10, ymax = 10, zmin = -10, zmax = 10, qr = qr_setting)
a1 = [-8.0, 8.0, 5.0]
a2 = [3.0,  2.0, -2.0]
a3 = 2.5 * np.array(a2)
fig.text(a1[0]+.5, a1[1]+.5, a1[2]+.5, r'$\bf a_1$', 'a_1', size=18)
fig.text(a3[0]+.5, a3[1]+.5, a3[2]+.5, r'$\bf a_3$', 'a_3', size=18)
fig.text(a2[0]+.5, a2[1]+.5, a2[2]+.5, r'$\bf a_2$', 'a_2', size=18)
fig.plotSpan(a1, a2,'Green')
fig.plotPoint(a1[0], a1[1], a1[2],'r')
fig.plotPoint(a3[0], a3[1], a3[2],'r')
fig.plotPoint(a2[0], a2[1], a2[2],'r')
fig.plotLine([[0, 0, 0], a3], 'r', '--')
fig.plotLine([[0, 0, 0], a1], 'r', '--')
# fig.plotPoint(a3[0], a3[1], a3[2],'r')
fig.text(0.1, 0.1, -3, r'$\bf 0$', '0', size=12)
fig.plotPoint(0, 0, 0, 'b')
# fig.set_title(r'$H$ = Span$\{{\bf a}_1, {\bf a}_2, {\bf a}_3\}$')
fig.text(9, -9, -7, r'H', 'H', size = 16)
img = fig.dont_save()

[This lecture is based on Prof. Crovella's CS 132 lecture notes.]

Recap: The Invertible Matrix Theorem¶

Invertible Matrix Theorem. Let $A$ be a square $n\times n$ matrix.

Then the following statements are equivalent; that is, they are either all true or all false.

  1. $A$ is an invertible matrix.
  2. $A^T$ is an invertible matrix.
  3. The equation $A{\bf x} = {\bf b}$ has a unique solution for each ${\bf b}$ in $\mathbb{R}^n.$
  4. $A$ is row equivalent to the identity matrix.
  5. $A$ has $n$ pivot positions.
  6. The homogeneous equation $A{\bf x} = {\bf 0}$ has only the trivial solution.
  7. The columns of $A$ form a linearly independent set.
  8. The columns of $A$ span $\mathbb{R}^n.$
  9. The linear transformation ${\bf x} \mapsto A{\bf x}$ maps $\mathbb{R}^n$ onto $\mathbb{R}^n.$
  10. The linear transformation ${\bf x} \mapsto A{\bf x}$ is one-to-one.

21.1 Subspaces¶

So far we have been working with vector spaces like $\mathbb{R}^2, \mathbb{R}^3, \mathbb{R}^4$, and so on. But there are more vector spaces!

Today we'll define a subspace and show how the concept helps us understand the nature of matrices and their linear transformations.

Definition. A subspace is any set $H$ in $\mathbb{R}^n$ that has three properties:

  1. The zero vector is in $H$.
  2. For each $\mathbf{u}$ and $\mathbf{v}$ in $H$, the sum $\mathbf{u} + \mathbf{v}$ is in $H$.
  3. For each $\mathbf{u}$ in $H$ and each scalar $c,$ the vector $c\mathbf{u}$ is in $H$.

Another way of stating properties 2 and 3 is that $H$ is closed under addition and scalar multiplication.

The first thing to note is that there is a close connection between Span and Subspace:

Every Span is a Subspace.

To see this, let's take a specific example.

Take $\mathbf{v}_1$ and $\mathbf{v}_2$ in $\mathbb{R}^n$, and let $H$ = Span$\{\mathbf{v}_1, \mathbf{v}_2\}.$

Then $H$ is a subspace of $\mathbb{R}^n$.

In [12]:
#
fig = ut.three_d_figure((20, 1), fig_desc = 'Span{v1, v2}, a subspace.',
                        xmin = -10, xmax = 10, ymin = -10, ymax = 10, zmin = -10, zmax = 10, qr = qr_setting)
a1 = [-8.0, 8.0, 5.0]
a2 = [3.0,  2.0, -2.0]
a3 = 2.5 * np.array(a2)
fig.text(a1[0]+.5, a1[1]+.5, a1[2]+.5, r'$\bf v_1$', 'v_1', size=20)
fig.text(a3[0]+.5, a3[1]+.5, a3[2]+.5, r'$\bf v_2$', 'v_2', size=20)
fig.plotSpan(a1, a2,'Green')
fig.plotPoint(a1[0], a1[1], a1[2],'r')
fig.plotPoint(a3[0], a3[1], a3[2],'r')
# fig.plotPoint(a3[0], a3[1], a3[2],'r')
#fig.text(a1[0], a1[1], a1[2], r'$\bf v_1$', 'a_1', size=20)
fig.text(0.1, 0.1, -3, r'$\bf 0$', '0', size=12)
fig.plotPoint(0, 0, 0, 'b')
fig.set_title(r'Span{$v_1, v_2$}, a subspace.')
fig.text(9, -9, -7, r'$\bf H$', 'H', size = 20)
img = fig.save()

Let's check this:

  1. The zero vector is in $H$.

    ${\bf 0} = 0\mathbf{v}_1 + 0\mathbf{v}_2$ is in Span$\{\mathbf{v}_1, \mathbf{v}_2\}.$

  2. The sum of any two vectors in $H$ is in $H$.

    If $\mathbf{u} = s_1\mathbf{v}_1 + s_2\mathbf{v}_2,$ and $\mathbf{v} = t_1\mathbf{v}_1 + t_2\mathbf{v}_2,$ then their sum $$\mathbf{u} + \mathbf{v} = (s_1+t_1)\mathbf{v}_1 + (s_2+t_2)\mathbf{v}_2 \in H.$$

  3. For any scalar $c$, $c\mathbf{u}$ is in $H$.

    $c\mathbf{u} = c(s_1\mathbf{v}_1 + s_2\mathbf{v}_2) = cs_1\mathbf{v}_1 + cs_2\mathbf{v}_2 \in H.$

Because of this, we refer to Span$\{\mathbf{v}_1,\dots,\mathbf{v}_p\}$ as the subspace spanned by $\mathbf{v}_1,\dots,\mathbf{v}_p.$
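We can also check these closure properties numerically. Below is a minimal sketch (using the $\mathbf{v}_1, \mathbf{v}_2$ from the figure above): a vector $\mathbf{b}$ lies in Span$\{\mathbf{v}_1, \mathbf{v}_2\}$ exactly when the least-squares solution of $V\mathbf{x} = \mathbf{b}$ has zero residual, where $V = [\mathbf{v}_1 \; \mathbf{v}_2]$.

In [ ]:
import numpy as np

v1 = np.array([-8.0, 8.0, 5.0])
v2 = np.array([3.0, 2.0, -2.0])
V = np.column_stack([v1, v2])

def in_span(b, V, tol=1e-10):
    # b is in the span of the columns of V exactly when
    # the least-squares residual of V x = b is (numerically) zero
    x, *_ = np.linalg.lstsq(V, b, rcond=None)
    return np.linalg.norm(V @ x - b) < tol

u = 2.0 * v1 - 1.5 * v2        # a vector in H
w = 0.5 * v1 + 4.0 * v2        # another vector in H
print(in_span(u + w, V))       # True: H is closed under addition
print(in_span(-3.0 * u, V))    # True: H is closed under scalar multiplication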

OK, here is another subspace -- a line:

In [13]:
#
fig = ut.three_d_figure((20, 2), fig_desc = 'Span{v1}, another subspace.',
                        xmin = -10, xmax = 10, ymin = -10, ymax = 10, zmin = -10, zmax = 10, qr = qr_setting)
v = [ 4.0, 4.0, 2.0]
fig.text(v[0]+.5, v[1]+.5, v[2]+.5, r'$\bf v_1$', 'v1', size=20)
fig.text(-9, -7, -9, r'Span{$\bf v_1$}', 'Span{v1}', size=16)
fig.text(0.2, 0.2, -4, r'$\bf 0$', '0', size=12)
# plotting the span of v
# this is based on the reduced echelon matrix that expresses the system whose solution is v
fig.plotIntersection([1, 0, -v[0]/v[2], 0], [0, 1, -v[1]/v[2], 0], '-', 'Red')
fig.plotPoint(v[0], v[1], v[2], 'r')
fig.plotPoint(0, 0, 0, 'b')
# plotting the axes
fig.plotIntersection([0, 0, 1, 0], [0, 1, 0, 0])
fig.plotIntersection([0, 0, 1, 0], [1, 0, 0, 0])
fig.plotIntersection([0, 1, 0, 0], [1, 0, 0, 0])
fig.set_title(r'Span {$v_1$}, another subspace.')
img = fig.save()

Next question: is any line a subspace?

What about a line that is not through the origin?

How about this line:

In [5]:
#
ax = ut.plotSetup()
ut.centerAxes(ax)
ax.plot([-6,6],[4.33,0.33],'b-')
ax.text(-4,3,'$L$',size=20);

In fact, a line $L$ not through the origin fails all three requirements for a subspace:

  1. $L$ does not contain the zero vector.

  2. $L$ is not closed under addition.

In [6]:
#
ax = ut.plotSetup()
ut.centerAxes(ax)
ax.arrow(0,0,1,2,head_width=0.2, head_length=0.2,length_includes_head = True)
ax.arrow(0,0,4,1,head_width=0.2, head_length=0.2,length_includes_head = True)
ax.plot([4,5],[1,3],'--')
ax.plot([1,5],[2,3],'--')
ax.plot([-6,6],[4.33,0.33],'b-')
ax.text(5.25,3.25,r'${\bf u}$+${\bf v}$',size=16)
ax.text(1.5,2.5,r'${\bf u}$',size=16)
ax.text(4.75,1,r'${\bf v}$',size=16)
ax.text(-4,3,'$L$',size=20)
ut.plotPoint(ax,5,3)
ax.plot(0,0,'');

  3. $L$ is not closed under scalar multiplication.

In [7]:
#
ax = ut.plotSetup()
ut.centerAxes(ax)
line = np.array([[-6,6],[4.33,0.33]])
w = .3 * line[:,0] + .7 * line[:,1]
ut.plotPoint(ax,w[0],w[1],'k')
ax.arrow(0,0,w[0],w[1],head_width=0.2, head_length=0.2,length_includes_head = True)
w_ext = np.array([w, 1.75*w]).T
ax.plot(w_ext[0], w_ext[1], '--')
ax.plot(line[0], line[1],'b-')
ax.text(w[0],w[1]-0.5,r'${\bf w}$',size=16)
ax.text(w_ext[0,1], w_ext[1,1]-0.5, r'$\alpha{\bf w}$', size=16)
ut.plotPoint(ax,w_ext[0,1],w_ext[1,1],'r')
ax.text(-4,3,'$L$',size=20);

On the other hand, any line, plane, or hyperplane through the origin is a subspace.

Make sure you can see why (or prove it to yourself).

21.2 Two Important Subspaces¶

Now let's start to use the subspace concept to characterize matrices.

We are thinking of these matrices as linear operators.

Every matrix has associated with it two subspaces:

  • column space and
  • null space.

Column Space¶

Definition. The column space of a matrix $A$ is the set ${\operatorname{Col}}\ A$ of all linear combinations of the columns of $A$.

If $A$ = $[\mathbf{a}_1 \;\cdots\; \mathbf{a}_n]$, with columns in $\mathbb{R}^m,$ then ${\operatorname{Col}}\ A$ is the same as Span$\{\mathbf{a}_1,\dots,\mathbf{a}_n\}$, and hence it is a subspace.

The column space of an $m\times n$ matrix is a subspace of $\mathbb{R}^m.$

In particular, note that ${\operatorname{Col}}\ A$ equals $\mathbb{R}^m$ only when the columns of $A$ span $\mathbb{R}^m.$ Otherwise, ${\operatorname{Col}}\ A$ is only part of $\mathbb{R}^m.$

How to interpret ${\operatorname{Col}}\ A$

  • When a system of linear equations is written in the form $A\mathbf{x} = \mathbf{b},$ the column space of $A$ is the set of all $\mathbf{b}$ for which the system has a solution.

  • Equivalently, when we consider the linear operator $T: \mathbb{R}^n\rightarrow\mathbb{R}^m$ that is implemented by the matrix $A$, the column space of $A$ is the range of $T.$
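Here is a small numerical illustration of this view (a sketch, using $\mathbf{a}_1, \mathbf{a}_2, \mathbf{a}_3$ from the figure below as the columns of $A$): a least-squares solve of $A\mathbf{x} = \mathbf{b}$ has zero residual exactly when $\mathbf{b}$ is in ${\operatorname{Col}}\ A$, i.e., exactly when the system has a solution.

In [ ]:
import numpy as np

# columns a1, a2, a3; since a3 = 2.5 * a2, Col A is a plane in R^3
A = np.column_stack([[-8.0, 8.0, 5.0], [3.0, 2.0, -2.0], [7.5, 5.0, -5.0]])

b_in = A @ np.array([1.0, 2.0, -1.0])  # built from the columns, so in Col A
b_out = np.array([1.0, 0.0, 0.0])      # this vector does not lie on the plane

for b in (b_in, b_out):
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    print(np.linalg.norm(A @ x - b) < 1e-10)  # True, then False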

In [14]:
#
fig = ut.three_d_figure((20, 3), fig_desc = 'Column Space of A = [a1, a2, a3].',
                        xmin = -10, xmax = 10, ymin = -10, ymax = 10, zmin = -10, zmax = 10, qr = qr_setting)
a1 = [-8.0, 8.0, 5.0]
a2 = [3.0,  2.0, -2.0]
a3 = 2.5 * np.array(a2)
fig.text(a1[0]+.5, a1[1]+.5, a1[2]+.5, r'$\bf a_1$', 'a_1', size=18)
fig.text(a3[0]+.5, a3[1]+.5, a3[2]+.5, r'$\bf a_3$', 'a_3', size=18)
fig.text(a2[0]+.5, a2[1]+.5, a2[2]+.5, r'$\bf a_2$', 'a_2', size=18)
fig.plotSpan(a1, a2,'Green')
fig.plotPoint(a1[0], a1[1], a1[2],'r')
fig.plotPoint(a3[0], a3[1], a3[2],'r')
fig.plotPoint(a2[0], a2[1], a2[2],'r')
# fig.plotPoint(a3[0], a3[1], a3[2],'r')
fig.text(0.1, 0.1, -3, r'$\bf 0$', '0', size=12)
fig.plotPoint(0, 0, 0, 'b')
fig.set_title(r'Column Space of $A = [{\bf a}_1, {\bf a}_2, {\bf a}_3]$')
fig.text(9, -9, -7, r'Col $A$', 'H', size = 16)
img = fig.save()

Null Space¶

Definition. The null space of a matrix $A$ is the set ${\operatorname{Nul}}\ A$ of all solutions of the homogeneous equation $A\mathbf{x} = {\bf 0}.$

In other words: the null space of $A$ is the set of all vectors that are mapped to the zero vector by $A$. It is sometimes called the kernel of $A$.

When $A$ has $n$ columns, a solution of $A\mathbf{x} = {\bf 0}$ is a vector in $\mathbb{R}^n.$ So the null space of $A$ is a subset of $\mathbb{R}^n.$

In fact, ${\operatorname{Nul}}\ A$ is a subspace of $\mathbb{R}^n.$

Theorem. The null space of an $m\times n$ matrix $A$ is a subspace of $\mathbb{R}^n.$

Equivalently, the set of all solutions of a system $A\mathbf{x} = {\bf 0}$ of $m$ homogeneous linear equations in $n$ unknowns is a subspace of $\mathbb{R}^n.$

Proof.

  1. The zero vector is in ${\operatorname{Nul}}\ A$ because $A{\bf 0} = {\bf 0}.$
  2. The sum of two vectors in ${\operatorname{Nul}}\ A$ is in ${\operatorname{Nul}}\ A.$

    Take two vectors $\mathbf{u}$ and $\mathbf{v}$ that are in ${\operatorname{Nul}}\ A.$ By definition $A\mathbf{u} = {\bf 0}$ and $A\mathbf{v} = {\bf 0}.$

    Then $\mathbf{u} + \mathbf{v}$ is in ${\operatorname{Nul}}\ A$ because $A(\mathbf{u} + \mathbf{v}) = A\mathbf{u} + A\mathbf{v} = {\bf 0} + {\bf 0} = {\bf 0}.$

  3. Any scalar multiple of a vector in ${\operatorname{Nul}}\ A$ is in ${\operatorname{Nul}}\ A.$

    Take a vector $\mathbf{v}$ that is in ${\operatorname{Nul}}\ A.$ Then $A(c\mathbf{v}) = cA\mathbf{v} = c{\bf 0} = {\bf 0}.$

Testing whether a vector $\mathbf{v}$ is in ${\operatorname{Nul}}\ A$ is easy: simply compute $A\mathbf{v}$ and see if the result is zero.
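For example (a toy matrix, not one from the lecture):

In [ ]:
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.0]])    # dependent columns, so Nul A is nontrivial
v = np.array([-2.0, 1.0])
print(np.allclose(A @ v, 0))                      # True: v is in Nul A
print(np.allclose(A @ np.array([1.0, 1.0]), 0))   # False: (1, 1) is not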

Remark. Note that the two subspaces ${\operatorname{Col}}\ A$ and ${\operatorname{Nul}}\ A$ live in different "universes."

For an $m\times n$ matrix,

  • the column space is a subset of $\mathbb{R}^m$ (all its vectors have $m$ components),
  • while the null space is a subset of $\mathbb{R}^n$ (all its vectors have $n$ components).

21.3 A Basis for a Subspace¶

Let's say you have a subspace.

For example, perhaps it is Span$\{\mathbf{a}_1, \mathbf{a}_2, \mathbf{a}_3\}$.

We would like to find the simplest way of describing this space.

For example, consider this subspace with:

$${\bf a_1} = \begin{bmatrix} -8 \\ 8 \\ 5 \end{bmatrix}, \quad {\bf a_2} = \begin{bmatrix} 3 \\ 2 \\ -2 \end{bmatrix}, \quad {\bf a_3} = \begin{bmatrix} 7.5 \\ 5 \\ -5 \end{bmatrix} = 2.5 {\bf a_2}$$
In [15]:
#
fig = ut.three_d_figure((20, 4), fig_desc = 'H = Span{a1, a2, a3}',
                        xmin = -10, xmax = 10, ymin = -10, ymax = 10, zmin = -10, zmax = 10, qr = qr_setting)
a1 = [-8.0, 8.0, 5.0]
a2 = [3.0,  2.0, -2.0]
a3 = 2.5 * np.array(a2)
fig.text(a1[0]+.5, a1[1]+.5, a1[2]+.5, r'$\bf a_1$', 'a_1', size=18)
fig.text(a3[0]+.5, a3[1]+.5, a3[2]+.5, r'$\bf a_3$', 'a_3', size=18)
fig.text(a2[0]+.5, a2[1]+.5, a2[2]+.5, r'$\bf a_2$', 'a_2', size=18)
fig.plotSpan(a1, a2,'Green')
fig.plotPoint(a1[0], a1[1], a1[2],'r')
fig.plotPoint(a3[0], a3[1], a3[2],'r')
fig.plotPoint(a2[0], a2[1], a2[2],'r')
# fig.plotPoint(a3[0], a3[1], a3[2],'r')
fig.text(0.1, 0.1, -3, r'$\bf 0$', '0', size=12)
fig.plotPoint(0, 0, 0, 'b')
fig.set_title(r'$H$ = Span$\{{\bf a}_1, {\bf a}_2, {\bf a}_3\}$')
fig.text(9, -9, -7, r'H', 'H', size = 16)
img = fig.save()

Note that $\mathbf{a}_3$ is a scalar multiple of $\mathbf{a}_2$. Thus:

Span$\{\mathbf{a}_1, \mathbf{a}_2, \mathbf{a}_3\}$

is the same subspace as

Span$\{\mathbf{a}_1, \mathbf{a}_2\}$.

Making this idea more general:

  • we would like to describe a subspace as the span of a set of vectors, and
  • that set of vectors should have as few members as possible!

So in our example above, we would prefer to say that the subspace is:

$H$ = Span$\{\mathbf{a}_1, \mathbf{a}_2\}$

rather than

$H$ = Span$\{\mathbf{a}_1, \mathbf{a}_2, \mathbf{a}_3\}$.

In other words, the more "concisely" we can describe the subspace, the better.

Now, given some subspace, how small a spanning set can we find?

Here is the key idea we will use to answer that question:

It can be shown that the smallest possible spanning set must be linearly independent.

We will call such a minimal spanning set a basis for the space.

Definition. A basis for a subspace $H$ of $\mathbb{R}^n$ is a linearly independent set in $H$ that spans $H.$

In [16]:
#
fig = ut.three_d_figure((20, 5), fig_desc = 'Bases of H are {a1, a2} or {a1, a3}.',
                        xmin = -10, xmax = 10, ymin = -10, ymax = 10, zmin = -10, zmax = 10, qr = qr_setting)
a1 = [-8.0, 8.0, 5.0]
a2 = [3.0,  2.0, -2.0]
a3 = 2.5 * np.array(a2)
fig.text(a1[0]+.5, a1[1]+.5, a1[2]+.5, r'$\bf a_1$', 'a_1', size=18)
fig.text(a3[0]+.5, a3[1]+.5, a3[2]+.5, r'$\bf a_3$', 'a_3', size=18)
fig.text(a2[0]+.5, a2[1]+.5, a2[2]+.5, r'$\bf a_2$', 'a_2', size=18)
fig.plotSpan(a1, a2,'Green')
fig.plotPoint(a1[0], a1[1], a1[2],'r')
fig.plotPoint(a3[0], a3[1], a3[2],'r')
fig.plotPoint(a2[0], a2[1], a2[2],'r')
fig.plotLine([[0, 0, 0], a3], 'r', '--')
fig.plotLine([[0, 0, 0], a1], 'r', '--')
# fig.plotPoint(a3[0], a3[1], a3[2],'r')
fig.text(0.1, 0.1, -3, r'$\bf 0$', '0', size=12)
fig.plotPoint(0, 0, 0, 'b')
fig.set_title(r'Bases of $H$ are $\{{\bf a}_1, {\bf a}_2\}$ or $\{{\bf a}_1, {\bf a}_3\}$')
fig.text(9, -9, -7, r'H', 'H', size = 16)
img = fig.save()

So in the example above, a basis for $H$ could be:

$\{\mathbf{a}_1, \mathbf{a}_2\}$

or

$\{\mathbf{a}_1, \mathbf{a}_3\}.$

However,

$\{\mathbf{a}_1, \mathbf{a}_2, \mathbf{a}_3\}$

is not a basis for $H$.

That is because the vectors $\mathbf{a}_1, \mathbf{a}_2, \mathbf{a}_3$ are not linearly independent.

(Conceptually, there are "too many vectors" in this set).

And furthermore,

$\{\mathbf{a}_1\}$

is not a basis for $H$.

That is because $\{\mathbf{a}_1\}$ does not span $H$.

(Conceptually, there are "not enough vectors" in this set).
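These claims are easy to check numerically (a sketch): the rank of a matrix counts its linearly independent columns, so stacking candidate basis vectors as columns and computing the rank tells us whether they are independent and whether they can span the two-dimensional plane $H$.

In [ ]:
import numpy as np

a1 = np.array([-8.0, 8.0, 5.0])
a2 = np.array([3.0, 2.0, -2.0])
a3 = 2.5 * a2

print(np.linalg.matrix_rank(np.column_stack([a1, a2])))      # 2: independent, a basis for H
print(np.linalg.matrix_rank(np.column_stack([a1, a2, a3])))  # 2: three vectors but rank 2 -- dependent
print(np.linalg.matrix_rank(np.column_stack([a1])))          # 1: spans only a line, not H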

Example. The columns of any invertible $n\times n$ matrix form a basis for $\mathbb{R}^n.$

This is because, by the Invertible Matrix Theorem, they are linearly independent, and they span $\mathbb{R}^n.$

So, for example, we could use the identity matrix, $I.$ Its columns are $\mathbf{e}_1, \dots, \mathbf{e}_n.$

The set $\{\mathbf{e}_1,\dots,\mathbf{e}_n\}$ is called the standard basis for $\mathbb{R}^n.$

(We have already used the term "standard basis vector" to refer to any of the ${\bf e_i}$ vectors. Now we know where the term comes from!)

In [17]:
#
fig = ut.three_d_figure((20, 6), fig_desc = 'The Standard Basis for R3',
                        xmin = -1.2, xmax = 1.2, ymin = -1.2, ymax = 1.2, zmin = -1.2, zmax = 1.2, qr = qr_setting)
e1 = [1, 0, 0]
e2 = [0, 1, 0]
e3 = [0, 0, 1]
origin = [0, 0, 0]
# draw e1, e2, e3 as arrows based at the origin
fig.ax.quiver(origin, origin, origin, e1, e2, e3, length=1.0, color='Red')
fig.plotLine([[0, 0, 0], [0, 1, 0]], color='Red')
fig.plotLine([[0, 0, 0], [0, 0, 1]], color='Red')
fig.plotLine([[0, 0, 0], [1, 0, 0]], color='Red')
fig.text(1, 0, -0.5, r'${\bf e_1}$', 'e1', size=16)
fig.text(0.2, 1, 0, r'${\bf e_2}$', 'e2', size=16)
fig.text(0, 0.2, 1, r'${\bf e_3}$', 'e3', size=16)
# plotting the axes
fig.plotIntersection([0, 0, 1, 0], [0, 1, 0, 0])
fig.plotIntersection([0, 0, 1, 0], [1, 0, 0, 0])
fig.plotIntersection([0, 1, 0, 0], [1, 0, 0, 0])
fig.set_title(r'The Standard Basis for $\mathbb{R}^3$',size=14);
fig.save();

21.4 Basis for Null Space¶

Being able to express a subspace in terms of a basis is very powerful.

  • It gives us a concise way of describing the subspace.
  • It will allow us (in future lectures) to introduce ideas of coordinate systems and dimension.

Hence, we will often want to be able to describe subspaces like $\operatorname{Col} A$ or $\operatorname{Nul} A$ using their bases.

We'll start with finding a basis for the null space of a matrix.

Example. Find a basis for the null space of the matrix

$$A = \begin{bmatrix}-3&6&-1&1&-7\\1&-2&2&3&-1\\2&-4&5&8&-4\end{bmatrix}.$$

Solution. We would like to describe the set of all solutions of $A\mathbf{x} = {\bf 0}.$

We start by writing the solution of $A\mathbf{x} = {\bf 0}$ in parametric form:

$$\left[ \begin{array}{c|c} A & {\bf 0} \end{array}\right] \sim \left[\begin{array}{ccccc|c}1&-2&0&-1&3&0\\0&0&1&2&-2&0\\0&0&0&0&0&0\end{array}\right], \;\;\; \begin{array}{rrrrrcl}x_1&-2x_2&&-x_4&+3x_5&=&0\\&&x_3&+2x_4&-2x_5&=&0\\&&&&0&=&0\end{array}$$

So $x_1$ and $x_3$ are basic, and $x_2, x_4,$ and $x_5$ are free.

So the general solution is:

$$\begin{array}{rcl}x_1&=&2x_2 + x_4 -3x_5,\\ x_3&=&-2x_4 + 2x_5.\end{array}$$

Now, what we want to do is write the solution set as a weighted combination of vectors.

This is a neat trick -- we are creating a vector equation.

The key idea is that the free variables will become the weights.

$$\begin{bmatrix}x_1\\x_2\\x_3\\x_4\\x_5\end{bmatrix} = \begin{bmatrix}2x_2 + x_4 - 3x_5\\x_2\\-2x_4 + 2x_5\\x_4\\x_5\end{bmatrix} $$
$$ = x_2\begin{bmatrix}2\\1\\0\\0\\0\end{bmatrix}+x_4\begin{bmatrix}1\\0\\-2\\1\\0\end{bmatrix}+x_5\begin{bmatrix}-3\\0\\2\\0\\1\end{bmatrix} $$
$$= x_2\mathbf{u} + x_4\mathbf{v} + x_5{\bf w}.$$

Now what we have is an expression that describes the entire solution set of $A\mathbf{x} = {\bf 0}.$ So ${\operatorname{Nul}}\ A$ is the set of all linear combinations of $\mathbf{u}, \mathbf{v},$ and ${\bf w}$. That is, ${\operatorname{Nul}}\ A$ is the subspace spanned by $\{\mathbf{u}, \mathbf{v}, {\bf w}\}.$

Furthermore, this construction automatically makes $\mathbf{u}, \mathbf{v},$ and ${\bf w}$ linearly independent. Since each weight appears by itself in one position, the only way for the whole weighted sum to be zero is if every weight is zero -- which is the definition of linear independence.

So $\{\mathbf{u}, \mathbf{v}, {\bf w}\}$ is a basis for ${\operatorname{Nul}}\ A.$

Conclusion: by finding a parametric description of the solution of the equation $A\mathbf{x} = {\bf 0},$ we can construct a basis for the nullspace of $A$.
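We can verify this basis numerically (a sketch): each of $\mathbf{u}, \mathbf{v},$ and ${\bf w}$ should be mapped to ${\bf 0}$ by $A$, and scipy's null_space should agree that ${\operatorname{Nul}}\ A$ is 3-dimensional.

In [ ]:
import numpy as np
from scipy.linalg import null_space

A = np.array([[-3, 6, -1, 1, -7],
              [ 1, -2, 2, 3, -1],
              [ 2, -4, 5, 8, -4]], dtype=float)
u = np.array([ 2, 1,  0, 0, 0], dtype=float)
v = np.array([ 1, 0, -2, 1, 0], dtype=float)
w = np.array([-3, 0,  2, 0, 1], dtype=float)

for x in (u, v, w):
    print(np.allclose(A @ x, 0))   # True, True, True: all three lie in Nul A

print(null_space(A).shape[1])      # 3: Nul A is 3-dimensional, matching our basis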

21.5 The Dimension of a Subspace¶

It can be shown that if a subspace $H$ has a basis of $p$ vectors, then every basis of $H$ must consist of exactly $p$ vectors.

That is, for a given $H$, the number $p$ is a special number.

Definition. The dimension of a nonzero subspace $H,$ denoted by $\dim H,$ is the number of vectors in any basis for $H.$

The dimension of the zero subspace $\{{\bf 0}\}$ is defined to be zero.

So now we can say with precision things we've previously said informally.

For example, a plane through ${\bf 0}$ is two-dimensional, and a line through ${\bf 0}$ is one-dimensional.

Remark: What is the dimension of a line not through the origin? It is undefined, because a line not through the origin is not a subspace, so it cannot have a basis, and hence does not have a dimension.

Remember that associated to any matrix $A$ are two special subspaces: its null space and its column space. Let's determine the dimension of each subspace.

Dimension of the Null Space¶

Earlier we showed the following: one can construct a basis for $\operatorname{Nul}\ A$ with one basis vector for each free variable in the solution of $A\mathbf{x} = {\bf 0}$.

We looked at the following example:

$$A = \begin{bmatrix}-3&6&-1&1&-7\\1&-2&2&3&-1\\2&-4&5&8&-4\end{bmatrix}$$

We constructed an explicit description of the null space of this matrix, as $ {\bf x} = x_2 {\bf u} + x_4 {\bf v} + x_5 {\bf w}$ where $\mathbf{u}$, $\mathbf{v}$, and ${\bf w}$ are linearly independent.

Each basis vector corresponds to a free variable in the equation $A\mathbf{x} = {\bf 0}.$ Since this basis contains 3 vectors, the dimension of $A$'s null space (i.e., $\dim\operatorname{Nul} A$) is 3.

In general, to find the dimension of $\operatorname{Nul}\ A,$ simply identify and count the number of free variables in $A\mathbf{x} = {\bf 0}.$
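In code, this amounts to counting the non-pivot columns. A sketch using sympy (an assumed extra dependency here; its rref reports the pivot columns directly):

In [ ]:
import sympy   # assumed dependency, used for exact row reduction

A = sympy.Matrix([[-3, 6, -1, 1, -7],
                  [ 1, -2, 2, 3, -1],
                  [ 2, -4, 5, 8, -4]])
_, pivots = A.rref()           # rref returns (echelon form, pivot column indices)
print(pivots)                  # (0, 2): x1 and x3 are basic
print(A.cols - len(pivots))    # 3 free variables, so dim Nul A = 3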

21.6 Basis and Dimension for Column Space¶

To find a basis for the column space, we have an easier starting point.

We know that the column space is the span of the matrix columns.

So, we can choose matrix columns to make up the basis.

The question is: which columns should we choose?

Warmup.

We start with a warmup example.

Suppose we have a matrix $B$ that happens to be in reduced echelon form:

$$B = \begin{bmatrix}1&0&-3&5&0\\0&1&2&-1&0\\0&0&0&0&1\\0&0&0&0&0\end{bmatrix}.$$

Denote the columns of $B$ by $\mathbf{b}_1,\dots,\mathbf{b}_5$.

Note that $\mathbf{b}_3 = -3\mathbf{b}_1 + 2\mathbf{b}_2$ and $\mathbf{b}_4 = 5\mathbf{b}_1-\mathbf{b}_2.$

So any combination of $\mathbf{b}_1,\dots,\mathbf{b}_5$ is actually just a combination of $\mathbf{b}_1, \mathbf{b}_2,$ and $\mathbf{b}_5.$

So $\{\mathbf{b}_1, \mathbf{b}_2, \mathbf{b}_5\}$ spans ${\operatorname{Col}}\ B$.

Also, $\mathbf{b}_1, \mathbf{b}_2,$ and $\mathbf{b}_5$ are linearly independent, because they are columns from an identity matrix.

So: the pivot columns of $B$ form a basis for ${\operatorname{Col}}\ B.$

Note that this means: there is no combination of columns 1, 2, and 5 that yields the zero vector. (Other than the trivial combination of course.)

So, for matrices in reduced row echelon form, we have a simple rule for the basis of the column space:

Choose the columns that hold the pivots.

The general case.

Now I'll show that the pivot columns of $A$ form a basis for ${\operatorname{Col}}\ A$ for any $A$.

Consider the case where $A\mathbf{x} = {\bf 0}$ for some nonzero $\mathbf{x}.$

This says that there is a linear dependence relation between some of the columns of $A$.

If any of the entries in $\mathbf{x}$ are zero, then those columns do not participate in the linear dependence relation.

When we row-reduce $A$ to its reduced echelon form $B$, the columns are changed, but the equations $A\mathbf{x} = {\bf 0}$ and $B\mathbf{x} = {\bf 0}$ have the same solution set.

So this means that the columns of $A$ have exactly the same dependence relationships as the columns of $B$. Specifically:

  1. If some column of $B$ can be written as a combination of other columns of $B$, then the same is true of the corresponding columns of $A$.

  2. If no combination of certain columns of $B$ yields the zero vector, then no combination of corresponding columns of $A$ yields the zero vector.

In other words:

  1. If some set of columns of $B$ spans the column space of $B$, then the same columns of $A$ span the column space of $A$.

  2. If some set of columns of $B$ are linearly independent, then the same columns of $A$ are linearly independent.

So, if some columns of $B$ are a basis for ${\operatorname{Col}}\ B$, then the corresponding columns of $A$ are a basis for ${\operatorname{Col}}\ A$.

Example. Consider the matrix $A$:

$$A = \begin{bmatrix}1&3&3&2&-9\\-2&-2&2&-8&2\\2&3&0&7&1\\3&4&-1&11&-8\end{bmatrix}$$

To find a basis for its column space, we simply need to find the pivot columns of its reduced row echelon form.

It turns out that $A$ is row equivalent to the matrix $B$ that we considered above. For matrix $B$, recall that:

  • $\mathbf{b}_3 = -3\mathbf{b}_1 + 2\mathbf{b}_2$ and $\mathbf{b}_4 = 5\mathbf{b}_1-\mathbf{b}_2$.
  • A basis for ${\operatorname{Col}}\ B$ was columns 1, 2, and 5.

Therefore we can immediately conclude that a basis for ${\operatorname{Col}}\ A$ is $A$'s columns 1, 2, and 5.

So a basis for ${\operatorname{Col}}\ A$ is:

$$\left\{\begin{bmatrix}1\\-2\\2\\3\end{bmatrix},\begin{bmatrix}3\\-2\\3\\4\end{bmatrix},\begin{bmatrix}-9\\2\\1\\-8\end{bmatrix}\right\}$$

Theorem. The pivot columns of a matrix $A$ form a basis for the column space of $A$.

Be careful here -- note that

  • we compute the reduced row echelon form of $A$ to find which columns are pivot columns, but...
  • we use the columns of $A$ itself as the basis for ${\operatorname{Col}}\ A$!
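Here is that recipe in code (a sketch, again leaning on sympy for the reduced row echelon form): find the pivot columns from the rref of $A$, but take the actual basis vectors from $A$ itself.

In [ ]:
import numpy as np
import sympy   # assumed dependency, used for exact row reduction

rows = [[ 1,  3,  3,  2, -9],
        [-2, -2,  2, -8,  2],
        [ 2,  3,  0,  7,  1],
        [ 3,  4, -1, 11, -8]]
A = np.array(rows, dtype=float)

_, pivots = sympy.Matrix(rows).rref()   # exact arithmetic on the integer entries
print(pivots)                # (0, 1, 4): columns 1, 2, and 5 are pivot columns
print(A[:, list(pivots)])    # the columns of A itself -- the basis shown above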

Matrix Rank¶

Definition. The rank of a matrix, denoted by $\operatorname{Rank} A,$ is the dimension of the column space of $A$.

Since the pivot columns of $A$ form a basis for $\operatorname{Col} A,$ the rank of $A$ is just the number of pivot columns in $A$. This is also equal to the number of basic variables in $A\mathbf{x} = {\bf 0}$ (hopefully now the term makes sense!).

Example. Determine the rank of the matrix

$$A = \begin{bmatrix}2&5&-3&-4&8\\4&7&-4&-3&9\\6&9&-5&2&4\\0&-9&6&5&-6\end{bmatrix}.$$

Solution. Reduce $A$ to an echelon form:

$$A \sim \begin{bmatrix}2&5&-3&-4&8\\0&-3&2&5&-7\\0&-6&4&14&-20\\0&-9&6&5&-6\end{bmatrix}\sim\cdots\sim\begin{bmatrix}2&5&-3&-4&8\\0&-3&2&5&-7\\0&0&0&4&-6\\0&0&0&0&0\end{bmatrix}.$$

The matrix $A$ has 3 pivot columns, so $\operatorname{Rank} A = 3.$
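A quick check with numpy (a sketch):

In [ ]:
import numpy as np

A = np.array([[2,  5, -3, -4,  8],
              [4,  7, -4, -3,  9],
              [6,  9, -5,  2,  4],
              [0, -9,  6,  5, -6]], dtype=float)
print(np.linalg.matrix_rank(A))   # 3: the rank equals the number of pivot columns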

21.7 The Rank Theorem¶

Consider a matrix $A.$ We know that:

  • $\dim\operatorname{Nul}\ A$ = the number of free variables in $A\mathbf{x} = {\bf 0}$, which is the number of non-pivot columns in $A$.

  • $\dim\operatorname{Col}\ A$ = the number of basic variables in $A\mathbf{x} = {\bf 0}$, which is the number of pivot columns.

So we can now make this important connection:

$$\begin{array}{rcl} \dim\operatorname{Nul}\ A + \dim\operatorname{Col}\ A &= &\mbox{number of non-pivot columns of $A$ + number of pivot columns of $A$}\\ &= &\mbox{number of columns of $A$}. \end{array}$$

This relationship involving dimension is true even though the two subspaces themselves live in different spaces: $\operatorname{Nul}\ A \subset \mathbb{R}^n$ and $\operatorname{Col}\ A \subset \mathbb{R}^m$.

This leads to the following theorem:

Theorem. If a matrix $A$ has $n$ columns, then $\operatorname{Rank} A + \dim\operatorname{Nul}\ A = n$.

This is a terrifically important fact! Here is an intuitive way to understand it. Let's think about a matrix $A$ and the associated linear transformation $T(x) = Ax$.

  • If the matrix $A$ has $n$ columns, then $A$'s column space could have dimension as high as $n$. In other words, $T$'s range could have dimension as high as $n$.

  • However, if $A$ "throws away" a null space of dimension $p$, then that reduces the dimension of the column space of $A$ to $n-p$. Meaning, the dimension of $T$'s range is reduced to $n-p$.
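The Rank Theorem is easy to confirm numerically. A sketch, reusing the $3\times 5$ matrix from Section 21.4 (rank 2, nullity 3, $n = 5$):

In [ ]:
import numpy as np
from scipy.linalg import null_space

A = np.array([[-3, 6, -1, 1, -7],
              [ 1, -2, 2, 3, -1],
              [ 2, -4, 5, 8, -4]], dtype=float)

rank = np.linalg.matrix_rank(A)        # dim Col A = 2
nullity = null_space(A).shape[1]       # dim Nul A = 3
print(rank + nullity == A.shape[1])    # True: 2 + 3 = 5 = n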

Extending the Invertible Matrix Theorem¶

The above arguments show that when $A$ has $n$ columns, then the "larger" that the column space is, the "smaller" that the null space is.

(Where "larger" means "has more dimensions.")

Let's consider what this means when $A$ is square $(n\times n)$. On one "extreme," the column space of $A$ could have maximum dimension -- i.e., $\dim\operatorname{Col}\ A= n.$

Recall that the IMT said that an $n\times n$ matrix is invertible if and only if its columns are linearly independent, and if and only if its columns span $\mathbb{R}^n.$

Hence we can now see that an $n\times n$ matrix $A$ is invertible if and only if the columns of $A$ form a basis for $\mathbb{R}^n.$

This leads to the following facts, which further extend the IMT:

Theorem. Let $A$ be a square $n\times n$ matrix. Then the following statements are each equivalent to the statement that $A$ is an invertible matrix:

  1. The columns of $A$ form a basis for $\mathbb{R}^n.$
  2. $\operatorname{Col} A = \mathbb{R}^n.$
  3. $\dim\operatorname{Col} A = n.$
  4. $\operatorname{Rank} A = n.$
  5. $\operatorname{Nul} A = \{{\bf 0}\}.$
  6. $\dim\operatorname{Nul} A = 0.$