
MATH 146 Notes



1 Vector Spaces & Dimensions

Definition: A field is an algebraic system $\mathbb{F}$ with two operations, addition and multiplication, such that both operations are associative and commutative, multiplication distributes over addition, there are identities $0\neq 1$, every element has an additive inverse, and every non-zero element has a multiplicative inverse.
Definition: A vector space over $\mathbb{F}$ is a set $V$ which is closed under addition and scalar multiplication, and satisfies VS 1-8.
Theorem: Suppose $V$ is a vector space and $x,y,z\in V$. If $x+z=y+z$ then $x=y$.
Corollary: In any vector space, there is only one zero vector, denoted $0$.
Definition: Let $V$ be a vector space over $\mathbb{F}$. A subset $W$ of $V$ is a subspace of $V$ if $W$, with the operations of $V$, forms a vector space.
Definition: Let $V$ be a vector space over $\mathbb{F}$. Let $x, u_1, \ldots, u_n\in V$. $x$ is a linear combination of $u_1,\ldots,u_n$ if there exist $a_1,\ldots,a_n\in\mathbb{F}$ such that $x=a_1u_1+a_2u_2+\ldots+a_nu_n$.
Theorem: Suppose $V$ is a vector space over $\mathbb{F}$ and $W\subseteq V$. Then $W$ is a subspace of $V$ if and only if:
  1. $W$ is closed under the operations (addition + scalar multiplication) of $V$.
  2. $W$ contains the zero vector of $V$.
Definition: Suppose $V$ is a vs/$\mathbb{F}$, $x\in V$, and $\emptyset\neq S\subseteq V$.
  1. $x$ is a linear combination of $S$ if $x$ is a linear combination of some vectors $u_1,\ldots,u_n\in S$.
  2. $span(S)\overset{def}{=}\{\text{all linear combos of $S$}\}$
    *$span(\emptyset)\overset{def}{=}\{0\}$
Theorem: Let $V$ be a vector space over $\mathbb{F}$ and $S\subseteq V$.
  1. $span(S)$ is a subspace of $V$
  2. $span(S)$ is the smallest subspace of $V$ containing $S$.
Definition: Let $V$ be a vs/$\mathbb{F}$ and $S\subseteq V$. $S$ is linearly dependent if there exist distinct vectors $u_1,\ldots,u_n\in S$ and scalars $a_1,\ldots,a_n\in\mathbb{F}$, not all zero, such that $a_1u_1+\ldots+a_nu_n=0$.
Definition: A set $S$ is linearly independent if $\forall$ distinct $u_1,\ldots,u_n\in S$, $\forall a_1,\ldots,a_n\in\mathbb{F}$, if $a_1u_1+\ldots+a_nu_n=0$ then $a_1=a_2=\ldots=a_n=0$.

Note: $S=\emptyset$ is linearly independent because a linearly dependent set must be non-empty. $S=\{0\}$ is linearly dependent because $a\cdot 0=0$ even with $a\neq 0$.
Theorem: Suppose $S$ is a linearly independent set in a vector space $V$ and $v\in V-S$.
Then $S\cup\{v\}$ is linearly independent $\leftrightarrow v\not\in span(S)$.

Basis and Dimensions

Definition: A subset of a vector space $V$ is a basis for $V$ if it is linearly independent and spans $V$.
Definition: A subset $S\subseteq V$ is a minimal spanning set if:
  1. $span(S)=V$ and
  2. no proper subset of $S$ is a spanning set of $V$.
Definition: A subset $S\subseteq V$ is a maximal linearly independent set if:
  1. $S$ is linearly independent and
  2. No set $W\subseteq V$ that has $S$ as a proper subset is linearly independent.
Theorem: Basis $\Longleftrightarrow$ minimal spanning set $\Longleftrightarrow$ maximal linearly independent set.
Lemma: Suppose $V$ is a vector space, $S,T\subseteq V$, $v\in V$, and
  • $S$ is linearly independent.
  • $v\not\in span(S)$, so $S\cup\{v\}$ is linearly independent by the previous theorem.
  • $span(S\cup T)=V$
Then $\exists x\in T$ such that $span\bigg((S\cup\{v\})\cup(T-\{x\})\bigg)=V$
Theorem: If $V$ is finitely-generated, then all bases for $V$ are finite and have the same size. This common size is known as the dimension of $V$, written as $\dim (V)$.
Theorem: If $V$ is finite-dimensional, then every linearly independent subset of $V$ can be extended to a basis for $V$.

2 Functions Between Vector Spaces

Definition: Suppose $V$ and $W$ are vector spaces over $\mathbb{F}$. A function $T: V\rightarrow W$ is a linear transformation if, for all $x,y\in V$ and $a\in\mathbb{F}$, $T$ satisfies:
  1. $T(x+y)=T(x)+T(y)$
  2. $T(ax)=aT(x)$
Properties of all linear transformations:
  1. $T(0)=0$
  2. $T(-x)=-T(x)$
  3. $T(x-y)=T(x)-T(y)$
  4. $T(a_1x_1+\ldots+a_nx_n)=a_1T(x_1)+\ldots+a_nT(x_n)$
Definition: Suppose $T: V\rightarrow W$ is linear. Then the null space of $T$ is $N(T)=\{x\in V: T(x)=0\}$ and the range of $T$ is $R(T)=\{T(x): x\in V\}$.
Theorem: Suppose $T: V\rightarrow W$ is linear and $V= span(v_1,\ldots, v_n)$. Then $R(T)=span(T(v_1),\ldots,T(v_n))$.
Theorem: Suppose $T: V\rightarrow W$ is linear.
  • $T$ is surjective $\leftrightarrow R(T)=W$
  • $T$ is injective $\leftrightarrow N(T)=\{0\}$
Definition: Suppose $T: V\rightarrow W$ is linear and $\dim (V)<\infty$. The rank of $T$ is $rank(T)=\dim(R(T))$ and the nullity of $T$ is $nullity(T)=\dim(N(T))$.
Theorem: Suppose $T: V\rightarrow W$ is linear and $\dim(V)<\infty$. Then $rank(T)+nullity(T)=\dim(V)$.
Corollary: Suppose $T: V\rightarrow V$ is linear and $\dim V<\infty$. Then $T$ is injective $\leftrightarrow T$ is surjective $\leftrightarrow T$ is bijective.
Definition: Suppose $V$ is a finite-dimensional vector space over $\mathbb{F}$ and $\beta=(v_1,\ldots, v_n)$ is an ordered basis for $V$, and $x\in V$. The coordinate vector of $x$ relative to $\beta$ is the $n$-tuple $(a_1,\ldots, a_n)\in\mathbb{F}^n$ uniquely satisfying $x=a_1v_1+\ldots+a_nv_n$, and is denoted by $$[x]_\beta=\left(\begin{smallmatrix}a_1\\\vdots\\a_n\end{smallmatrix}\right)$$
Theorem: Suppose $V$ is a finite-dimensional vector space over $\mathbb{F}$ where $\dim(V)=n$ and $\beta=(v_1,\ldots,v_n)$ is an ordered basis. Then $[\ ]_\beta:V\rightarrow\mathbb{F}^n$ is a bijective linear transformation, i.e., an isomorphism.
Facts:
  1. If $T:V\rightarrow W$ is an isomorphism, then $T^{-1}:W\rightarrow V$ is an isomorphism.
  2. If $T:V\rightarrow W$ and $U:W\rightarrow X$ are isomorphisms, then $U\circ T: V\rightarrow X$ is an isomorphism.
Definition: Suppose $V,W$ are vs$/\mathbb{F}$. $V$ is isomorphic to $W$ if $\exists$ isomorphism $T:V\rightarrow W$.

Coordinating Linear Transformations

Suppose $T:V\rightarrow W$ is linear, where $V,W$ are finite-dimensional. Let $\beta=(v_1,\ldots,v_n)$ and $\gamma=(w_1,\ldots,w_m)$ be ordered bases for $V$ and $W$. We observe that each $T(v_j)$ is uniquely expressible as $T(v_j)=\sum_{i=1}^m a_{ij}w_i$ for some scalars $a_{ij}\in\mathbb{F}$.
Definition: In the above context, the matrix representation of $T$ for $\beta$ and $\gamma$, denoted as $[T]_\beta^\gamma$, is the $m\times n$ matrix in $M_{m\times n}(\mathbb{F})$ whose $j$th column is $[T(v_j)]_\gamma$.
So, $Col_j([T]_\beta^\gamma)=[T(v_j)]_\gamma$.
Steps to form the matrix representation of $T$ for $\beta$ and $\gamma$:
  1. Apply $T$ to each vector in $\beta$.
  2. Find coordinate vector of $T(v_j)$ relative to $\gamma$.
  3. Combine the column vectors to form the $m\times n$ matrix.
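As a sanity check, here is a minimal Python sketch of these steps, assuming a hypothetical map $T(x_1,x_2,x_3)=(x_1+2x_2,\ 3x_3)$ from $\mathbb{R}^3$ to $\mathbb{R}^2$ and the standard ordered bases (so coordinate vectors are the vectors themselves):

```python
import numpy as np

# Hypothetical linear map T: R^3 -> R^2, used only for illustration.
def T(v):
    x1, x2, x3 = v
    return np.array([x1 + 2 * x2, 3 * x3])

beta = [np.array(e) for e in np.eye(3)]   # ordered basis for the domain
# With the standard basis gamma for R^2, [T(v_j)]_gamma is just T(v_j),
# so column j of the matrix representation is T applied to the j-th basis vector.
M = np.column_stack([T(v) for v in beta])
print(M)                                  # [[1. 2. 0.], [0. 0. 3.]]

x = np.array([1.0, 1.0, 1.0])
assert np.allclose(M @ x, T(x))           # [T(x)]_gamma = [T]_beta^gamma . [x]_beta
```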
Theorem: Suppose $T:V\rightarrow W$ is linear, $V,W$ are finite-dimensional vector spaces over $\mathbb{F}$, and $\beta=(v_1,\ldots,v_n)$, $\gamma=(w_1,\ldots,w_m)$ are ordered bases for $V,W$. Then, for any $x\in V$, $$[T(x)]_\gamma=[T]_\beta^\gamma\cdot[x]_\beta$$
Theorem: If $T:V\rightarrow W$, $U:W\rightarrow X$ are linear and $\alpha,\beta,\gamma$ are ordered bases for $V,W,X$ respectively, then $$[U]_\beta^\gamma\cdot[T]_\alpha^\beta=[U\circ T]_\alpha^\gamma$$
Notation: When $T:V\rightarrow V$ and $\beta=\gamma$, we abbreviate $[T]_\beta^\beta$ as $[T]_\beta$.
Definition: Suppose $\mathbb{F}$ is a field and $A\in M_{m\times n}(\mathbb{F})$. We define $L_A: \mathbb{F}^n\rightarrow\mathbb{F}^m$ given by $L_A(x)=Ax$.
Let $\beta_n$ be the standard ordered basis for $\mathbb{F}^n$, and let $\beta_m$ be the standard ordered basis for $\mathbb{F}^m$. Note that $\forall x\in\mathbb{F}^m$, $[x]_{\beta_m}=x$. To find $[L_A]_{\beta_n}^{\beta_m}$, we must find $L_A(e_1), \ldots,L_A(e_n)$. $$L_A(e_1)=Ae_1=\left(\begin{smallmatrix}a_{11}\\a_{21}\\\vdots\\a_{m1}\end{smallmatrix}\right)=Col_1(A)$$ For all $j=1,\ldots, n$, $L_A(e_j)=Col_j(A)$. Since $[x]_{\beta_m}=x$, the $j$th column of $[L_A]_{\beta_n}^{\beta_m}=L_A(e_j)=Col_j(A)$.
Note that $[L_A]_\sigma^\sigma=A$, where $\sigma$ is the standard ordered basis.
Theorem: $R(L_A)=span(Col_1(A),\ldots,Col_n(A))$, i.e., "range of $L_A=$ column space of $A$"
Definition: The rank of a matrix $A\in M_{m\times n}(\mathbb{F})$ is $rank(L_A)$ where $L_A:\mathbb{F}^n\rightarrow\mathbb{F}^m$ and $rank(L_A)=\dim(R(L_A))$.
Recall that $R(L_A)=$ subspace of $\mathbb{F}^m$ spanned by $Col(A)=$ column space of $A$.
Theorem: Multiplication by invertible matrices does not change rank.
In general, $R(g\circ f)=g(R(f))$. If $Q$ is invertible, then $L_Q$ is surjective, so $R(L_{AQ})=L_A(R(L_Q))=R(L_A)$; hence $A$ and $AQ$ have the same column space and $rank(A)=rank(AQ)$.
However, left multiplication by an invertible matrix $P$ can change the column space (the range of $L_A$) but not its dimension.
Fact: Isomorphisms preserve dimensions of subspaces.
Let $P$ be invertible, so that $L_P$ is an isomorphism (bijective). Then $$\dim(R(L_A))=\dim(L_P(R(L_A)))=\dim(R(L_{PA}))$$ $$rank(A)=rank(PA)$$
Definition: Suppose $A\in M_{n\times n}(\mathbb{F})$. $A$ is invertible if $\exists B\in M_{n\times n}(\mathbb{F})$ satisfying $AB=I_n$ and $BA=I_n$.
If $A,B$ are invertible, then $AB$ is invertible and $(AB)^{-1}=B^{-1}A^{-1}$
Theorem: Suppose $\beta,\gamma$ are ordered bases for a finite-dimensional vector space $V$, and let $Q=[I_V]_\beta^\gamma$. Then
  1. $Q$ is invertible.
  2. For any $x\in V$, $Q\cdot [x]_\beta=[I_V]_\beta^\gamma\cdot [x]_\beta=[I_V(x)]_\gamma=[x]_\gamma$
Definition: $Q=[I_V]_\beta^\gamma$ is called the change of coordinate matrix from $\beta$ to $\gamma$.
Theorem: Suppose $T:V\rightarrow V$ is linear, $\beta$ and $\gamma$ are ordered bases for $V$, and $Q=[I_V]_\beta^\gamma$. Then $[T]_\beta=Q^{-1}[T]_\gamma Q$.
Proof: $$\begin{align*} Q^{-1}[T]_\gamma Q&=[I_V]_\gamma^\beta\,[T]_\gamma^\gamma\,[I_V]_\beta^\gamma\\ &=[I_V\circ T\circ I_V]_\beta^\beta\\ &=[T]_\beta\quad\square \end{align*}$$
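A small numerical sketch of change of coordinates, assuming $\gamma$ is the standard basis of $\mathbb{R}^2$ and $\beta$ is the (hypothetical) basis $\{(1,1),(1,-1)\}$:

```python
import numpy as np

# Q = [I]_beta^gamma: its columns are the beta vectors written in gamma-coordinates.
Q = np.array([[1.0, 1.0],
              [1.0, -1.0]])

x = np.array([3.0, 1.0])            # [x]_gamma (standard coordinates)
x_beta = np.linalg.solve(Q, x)      # [x]_beta = Q^{-1} [x]_gamma = (2, 1)
assert np.allclose(Q @ x_beta, x)   # Q [x]_beta = [x]_gamma

T_gamma = np.array([[2.0, 0.0],     # [T]_gamma for some operator T on R^2
                    [0.0, 3.0]])
T_beta = np.linalg.inv(Q) @ T_gamma @ Q   # [T]_beta = Q^{-1} [T]_gamma Q

# Both representations describe the same operator:
assert np.allclose(Q @ (T_beta @ x_beta), T_gamma @ x)
```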

3 Matrices & Systems of Equations

Facts about Columns

  1. $Col_j(AB)=A\cdot Col_j(B)$
  2. $Col_j(B)=B\cdot e_j$
  3. If $x=(x_1,\ldots,x_n)\in\mathbb{F}^n$, then $Bx=x_1Col_1(B)+x_2Col_2(B)+\ldots+x_nCol_n(B)$

Facts about Rows

  1. $Row_i(AB)=Row_i(A)\cdot B$
  2. $Row_i(A)=e_i^tA\quad (e_i^t$ is a $1\times m$ vector)
  3. If $x=(x_1,\ldots,x_m)\in\mathbb{F}^m$, then $x^tA=x_1Row_1(A)+x_2Row_2(A)+\ldots+x_mRow_m(A)$
Definition: Fix $m,n$. An elementary row operation is one of the following actions. Given $A\in M_{m\times n}(\mathbb{F})$:
  1. Switch row $i$ with row $j$. $\quad[R_i\leftrightarrow R_j]$
  2. Multiply row $i$ by a non-zero scalar $a$. $\quad[R_i\leftarrow aR_i\ (a\neq 0)]$
  3. Add to row $i$ a scalar multiple of row $j$. $\quad[R_i\leftarrow R_i + aR_j]$
Definition: Fix $m,n$. An elementary column operation is defined in the same way as rows, but with columns.
Definition: An elementary matrix is an $n\times n$ matrix obtained from $I_n$ by one elementary operation.

Facts about Elementary Matrices

  1. Elementary Matrices are invertible.
  2. Elementary column ops ($A\mapsto AE$) do not change the column space.
  3. Elementary row ops ($A\mapsto EA$) can change the column space but not rank.
Theorem: For every $A\in M_{m\times n}(\mathbb{F})$, there exist invertible matrices $P\in M_{m\times m}(\mathbb{F}), Q\in M_{n\times n}(\mathbb{F})$ such that $D=PAQ$ where $$D=\left(\begin{matrix}I_r & O\\O & O\end{matrix}\right)$$ Since $rank(D)=\dim($column space of $D)=r$ and multiplication by invertible matrices preserves rank, $rank(A)=r$.
Transposes: If $A$ is $m\times n$ and $B$ is $n\times p$, then $(AB)^t=B^tA^t$.
Theorem: If $A$ is $n\times n$ and invertible, then $A^t$ is invertible.
Theorem: For any $A\in M_{m\times n}(\mathbb{F})$, $rank(A)=rank(A^t)$.
Definition: The row space of $A$ is the subspace of $\mathbb{F}^n$ spanned by the rows of $A$; it equals the column space of $A^t$, which is $R(L_{A^t})$.
But $\dim(R(L_{A^t}))\overset{def}{=}rank(A^t)=rank(A)$, so $\dim(\text{row space of $A$})=\dim(\text{column space of $A$})$.
Theorem: For any $A\in M_{n\times n}(\mathbb{F})$, the following are equivalent:
  1. $A$ is invertible.
  2. $rank(A)=n$
  3. $A$ is a product of elementary matrices.
  4. $A$ can be transformed to $I_n$ using elem row ops only.
Proof:
  • $(1)\Leftrightarrow(2):$
    $A$ is invertible $\Longleftrightarrow L_A$ is a bijection $\Longleftrightarrow L_A$ is surjective $\Longleftrightarrow R(L_A)=\mathbb{F}^n\Longleftrightarrow \dim(R(L_A))=n\Longleftrightarrow rank(A)=n$
  • $(2)\Rightarrow(3):$
    If $rank(A)=n$, then $A$ can be transformed by elementary matrices to $I_n$ where $I_n=E_k\cdots E_2E_1AE'_1E'_2\cdots E'_l=PAQ$. Since $P,Q$ are invertible, we get $$\begin{align*} A&=P^{-1}(PAQ)Q^{-1}\\ &=P^{-1}I_nQ^{-1}\\ &=P^{-1}Q^{-1}\\ &=(E_1)^{-1}(E_2)^{-1}\cdots(E_k)^{-1}(E'_l)^{-1}\cdots(E'_2)^{-1}(E'_1)^{-1} \end{align*}$$ Since the inverse of an elementary matrix is also an elementary matrix, this proves (3).
  • $(3)\Rightarrow(4):$
    Let $A=E_1E_2\cdots E_k$. Then $A$ is invertible (product of elementary matrices which are invertible) and $A^{-1}=E^{-1}_k\cdots E^{-1}_2E^{-1}_1$. Thus, $I_n=A^{-1}A=E^{-1}_k\cdots E^{-1}_2E^{-1}_1A$. Multiplying on the left by elementary matrices is the same as applying elem row operations. Thus, this shows that $A$ can be transformed by elementary row operations to $I_n$.
  • $(4)\Rightarrow(2): rank(I_n)=n$ and elementary operations preserve rank, so $rank(A)=n$.
Fact: If $A$ can be transformed to $I_n$ by elem row ops, then the same sequence of ops transforms $I_n$ to $A^{-1}$. This is useful for calculating $A^{-1}$.
e.g. Let $A=\left(\begin{matrix} 1 & -3 &1\\1 & 0 &2\\ 0 &1 &0\end{matrix}\right)$. Find $A^{-1}$.
Form the augmented matrix $(A|I_3)$. $$\left(\begin{array}{ccc} 1&-3&1\\1&0&2\\0&1&0 \end{array}\middle\vert\begin{array}{ccc} 1&0&0\\0&1&0\\0&0&1\end{array}\right)\xrightarrow{R_2\leftarrow R_2 - R_1} \left(\begin{array}{ccc} 1&-3&1\\0&3&1\\0&1&0 \end{array}\middle\vert\begin{array}{ccc} 1&0&0\\-1&1&0\\0&0&1\end{array}\right)\xrightarrow{R_1\leftarrow R_1 + R_2} \left(\begin{array}{ccc} 1&0&2\\0&3&1\\0&1&0 \end{array}\middle\vert\begin{array}{ccc} 0&1&0\\-1&1&0\\0&0&1\end{array}\right)$$ $$\xrightarrow{R_2\leftrightarrow R_3} \left(\begin{array}{ccc} 1&0&2\\0&1&0\\0&3&1 \end{array}\middle\vert\begin{array}{ccc} 0&1&0\\0&0&1\\-1&1&0\end{array}\right) \xrightarrow{R_3\leftrightarrow R_3 - 3R_2} \left(\begin{array}{ccc} 1&0&2\\0&1&0\\0&0&1 \end{array}\middle\vert\begin{array}{ccc} 0&1&0\\0&0&1\\-1&1&-3\end{array}\right) \xrightarrow{R_1\leftrightarrow R_1 - 2R_3} \left(\begin{array}{ccc} 1&0&0\\0&1&0\\0&0&1 \end{array}\middle\vert\begin{array}{ccc} 2&-1&6\\0&0&1\\-1&1&-3\end{array}\right)$$ $\therefore A^{-1}=\left(\begin{array}{ccc} 2&-1&6\\0&0&1\\-1&1&-3\end{array}\right)$
This algorithm also determines whether $A$ is invertible: if $A$ is not invertible, the left half reduces to a RREF matrix with only $r<n$ non-zero rows, so $rank(A)\neq n\Leftrightarrow A$ is not invertible.
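A sketch of this inversion algorithm in Python (plain Gauss-Jordan elimination with a pivot search; an illustration, not the course's official pseudocode):

```python
import numpy as np

# Form (A | I_n) and row-reduce the left half to I_n; the right half becomes A^{-1}.
def invert(A):
    n = A.shape[0]
    M = np.hstack([A.astype(float), np.eye(n)])  # augmented matrix (A | I_n)
    for j in range(n):
        p = j + np.argmax(np.abs(M[j:, j]))      # find a usable pivot in column j
        if np.isclose(M[p, j], 0.0):
            raise ValueError("A is not invertible (rank < n)")
        M[[j, p]] = M[[p, j]]                    # type 1: swap rows
        M[j] /= M[j, j]                          # type 2: scale pivot row to 1
        for i in range(n):
            if i != j:
                M[i] -= M[i, j] * M[j]           # type 3: clear the rest of the column
    return M[:, n:]                              # right half is A^{-1}

A = np.array([[1, -3, 1], [1, 0, 2], [0, 1, 0]])
print(invert(A))   # matches the worked example: [[2, -1, 6], [0, 0, 1], [-1, 1, -3]]
```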

Systems of Equations

Consider a system, $S$, and $a_{ij},b_i\in\mathbb{F}$. $$\begin{align*} a_{11}x_1+a_{12}x_2+\ldots+a_{1n}x_n&=b_1\\ a_{21}x_1+a_{22}x_2+\ldots+a_{2n}x_n&=b_2\\ &\vdots\\ a_{m1}x_1+a_{m2}x_2+\ldots+a_{mn}x_n&=b_m \end{align*}$$ We can write $S$ as $\left(\begin{array}{cccc} a_{11}&a_{12}&\ldots&a_{1n}\\a_{21}&a_{22}&\ldots&a_{2n}\\&\vdots\\a_{m1}&a_{m2}&\ldots&a_{mn}\end{array}\right)\left(\begin{matrix} x_1\\x_2\\\vdots\\x_n\end{matrix}\right)=\left(\begin{matrix} b_1\\b_2\\\vdots\\b_m\end{matrix}\right)$ which we write as $Ax=b$, where $A\in M_{m\times n}(\mathbb{F})$, $x$ is an unknown vector in $\mathbb{F}^n$, and $b\in\mathbb{F}^m$.

A solution to $S$ is an $n$-tuple $s=(s_1,s_2,\ldots,s_n)\in\mathbb{F}^n$ satisfying $S$.
The solution set to $S$ is $\{s\in\mathbb{F}^n: As=b\}\subseteq\mathbb{F}^n$.
Definition: The system $Ax=0$ is called homogeneous.
A system $Ax=b$, $b\neq 0$, is called non-homogeneous.
Theorem: Let $Ax=0$ be a homogeneous system of $m$ linear equations in $n$ unknowns.
  1. The solution set is $N(A)$.
  2. The solution set is a subspace of $\mathbb{F}^n$ of dimension $n-rank(A)$.
Corollary: If there are more unknowns than equations ($n>m$), then $rank(A)\leq m<n$, so $nullity(A)>0$; the solution set has dimension $\geq 1$ and hence contains non-zero solutions.
Definition: A system $Ax=b$ is consistent if it has at least one solution.
Note: Every homogeneous system $Ax=0$ is consistent (it has the zero solution).

Theorem: $Ax=b$ is consistent $\Leftrightarrow rank(A)=rank(A|b)$.
Theorem: Suppose $Ax=b$ is consistent. Pick one solution $s\in\mathbb{F}^n$. Then the solution set is $$s+N(A)\quad\text{(translation of $N(A)$ by $s$)}$$
Theorem: Suppose $Ax=b$ where $A$ is $n\times n$. Then $Ax=b$ has a unique solution $\Leftrightarrow A$ is invertible. Specifically, the solution is $s=A^{-1}b$.

Solving Systems of Linear Equations

3 core goals to solve $Ax=b$:
  1. Determine if $Ax=b$ is consistent.
  2. If yes, find one particular solution, $s_0$.
  3. Find a basis $\{u_1,\ldots,u_k\}$ for $N(A)$.
Then the solution set of $Ax=b$ is $S=\{s_0+a_1u_1+a_2u_2+\ldots+a_ku_k : a_1,\ldots,a_k\in\mathbb{F}\}$.

e.g. Solve the following system of equations. $$\begin{align*} 2x_1+3x_2+x_3+4x_4-9x_5&=17\\ x_1+x_2+x_3+x_4-3x_5&=6\\ x_1+x_2+x_3+2x_4-5x_5&=8\\ 2x_1+2x_2+2x_3+3x_4-8x_5&=14\\ \end{align*}$$
  1. Form the augmented matrix $(A|b)$. $$\left(\begin{array}{ccccc} 2&3&1&4&-9\\1&1&1&1&-3\\ 1&1&1&2&-5\\ 2&2&2&3&-8 \end{array}\middle\vert\begin{array}{c} 17\\6\\8\\14\end{array}\right)$$
  2. Reduce to reduced row echelon form.
    Definition: For $(A^*|b^*)$ to be in reduced row echelon form (RREF):
    1. Zero row(s) must be at the bottom.
    2. First (leftmost) non-zero entry in each row
      1. must equal $1$.
      2. must be the only non-zero entry in its column
      3. must be to the right of the first non-zero entry of the row above
    Fact: Every matrix can be transformed by elem row ops to a RREF matrix.
    $$\left(\begin{array}{ccccc} 2&3&1&4&-9\\1&1&1&1&-3\\ 1&1&1&2&-5\\ 2&2&2&3&-8 \end{array}\middle\vert\begin{array}{c} 17\\6\\8\\14\end{array}\right)\xrightarrow{R_3\leftarrow R_3-R_2} \left(\begin{array}{ccccc} 2&3&1&4&-9\\1&1&1&1&-3\\ 0&0&0&1&-2\\ 2&2&2&3&-8 \end{array}\middle\vert\begin{array}{c} 17\\6\\2\\14\end{array}\right)\xrightarrow{R_1\leftarrow R_1-R_4} \left(\begin{array}{ccccc} 0&1&-1&1&-1\\1&1&1&1&-3\\ 0&0&0&1&-2\\ 2&2&2&3&-8 \end{array}\middle\vert\begin{array}{c} 3\\6\\2\\14\end{array}\right)$$ $$\xrightarrow{R_4\leftarrow R_4-2R_2} \left(\begin{array}{ccccc} 0&1&-1&1&-1\\1&1&1&1&-3\\ 0&0&0&1&-2\\ 0&0&0&1&-2 \end{array}\middle\vert\begin{array}{c} 3\\6\\2\\2\end{array}\right)\xrightarrow{R_4\leftarrow R_4-R_3} \left(\begin{array}{ccccc} 0&1&-1&1&-1\\1&1&1&1&-3\\ 0&0&0&1&-2\\ 0&0&0&0&0 \end{array}\middle\vert\begin{array}{c} 3\\6\\2\\0\end{array}\right)\xrightarrow{R_2\leftarrow R_2-R_1}$$ $$ \left(\begin{array}{ccccc} 0&1&-1&1&-1\\1&0&2&0&-2\\ 0&0&0&1&-2\\ 0&0&0&0&0 \end{array}\middle\vert\begin{array}{c} 3\\3\\2\\0\end{array}\right)\xrightarrow{R_1\leftarrow R_1-R_3} \left(\begin{array}{ccccc} 0&1&-1&0&1\\1&0&2&0&-2\\ 0&0&0&1&-2\\ 0&0&0&0&0 \end{array}\middle\vert\begin{array}{c} 1\\3\\2\\0\end{array}\right)\xrightarrow{R_1\leftrightarrow R_2} \left(\begin{array}{ccccc} 1&0&2&0&-2\\0&1&-1&0&1\\ 0&0&0&1&-2\\ 0&0&0&0&0 \end{array}\middle\vert\begin{array}{c} 3\\1\\2\\0\end{array}\right)=(A^*|b^*)$$
  3. $rank(A)=rank(A^*)=rank(A^*|b^*)=rank(A|b)=3$, so $Ax=b$ is consistent.
  4. Solve $A^*x=b^*$ to find one particular solution. $$\begin{align*} x_1+2x_3-2x_5&=3\\ x_2-x_3+x_5&=1\\ x_4-2x_5&=2\\ \end{align*}$$ For a particular solution, let $x_3=x_5=0$. Then $x_1=3,x_2=1,x_4=2$. So, $s_0=(3,1,0,2,0)$.
  5. Find a basis for $N(A)$.
    $A$ is a $4\times 5$ matrix, so $L_A:\mathbb{F}^5\rightarrow\mathbb{F}^4$ and the domain has dimension $5$. Since $rank(A)=3$, $nullity(A)=5-3=2$.
    Recall that $N(A)$ is the solution set of $Ax=0$, so $$\begin{align*} x_1&=-2x_3+2x_5\\ x_2&=x_3-x_5\\ x_3&=x_3\\ x_4&=2x_5\\ x_5&=x_5 \end{align*}$$ $$\left(\begin{matrix}x_1\\x_2\\x_3\\x_4\\x_5\end{matrix}\right)=x_3\left(\begin{matrix}-2\\1\\1\\0\\0\end{matrix}\right)+x_5\left(\begin{matrix}2\\-1\\0\\2\\1\end{matrix}\right)$$ where $\{(-2,1,1,0,0),(2,-1,0,2,1)\}$ are basis vectors for $N(A)$.
  6. Form the general solution. $$s=s_0+a_1u_1+a_2u_2=(3,1,0,2,0)+a_1(-2,1,1,0,0)+a_2(2,-1,0,2,1)\quad a_1,a_2\in\mathbb{F}$$
    This can be viewed as a translation of a plane in $\mathbb{F}^5$.
Note that if $b=\left(\begin{smallmatrix}17\\6\\8\\15\end{smallmatrix}\right)$ instead, we would get $b^*=\left(\begin{smallmatrix}0\\0\\0\\1\end{smallmatrix}\right)$, so $rank(A^*|b^*)=4\neq rank(A)$, thus the system is inconsistent.
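The whole recipe (reduce $(A|b)$ to RREF, test consistency, read off $s_0$, and build a basis for $N(A)$ from the free variables) can be sketched in Python as follows; an illustration, not library-quality code:

```python
import numpy as np

# Reduce a matrix to RREF, returning the reduced matrix and its pivot columns.
def rref(M, tol=1e-12):
    M = M.astype(float).copy()
    pivots, r = [], 0
    for j in range(M.shape[1]):
        p = r + np.argmax(np.abs(M[r:, j]))
        if abs(M[p, j]) < tol:
            continue                      # no pivot in this column
        M[[r, p]] = M[[p, r]]
        M[r] /= M[r, j]
        for i in range(M.shape[0]):
            if i != r:
                M[i] -= M[i, j] * M[r]
        pivots.append(j)
        r += 1
        if r == M.shape[0]:
            break
    return M, pivots

A = np.array([[2, 3, 1, 4, -9], [1, 1, 1, 1, -3],
              [1, 1, 1, 2, -5], [2, 2, 2, 3, -8]])
b = np.array([17, 6, 8, 14])

R, pivots = rref(np.column_stack([A, b]))
if A.shape[1] in pivots:                  # pivot in the b-column: rank(A|b) > rank(A)
    print("inconsistent")
else:
    n = A.shape[1]
    free = [j for j in range(n) if j not in pivots]
    s0 = np.zeros(n)                      # particular solution: free variables = 0
    for r_i, j in enumerate(pivots):
        s0[j] = R[r_i, -1]
    print(s0)                             # (3, 1, 0, 2, 0), as in the example
    for f in free:                        # one null-space basis vector per free variable
        u = np.zeros(n); u[f] = 1.0
        for r_i, j in enumerate(pivots):
            u[j] = -R[r_i, f]
        print(u)                          # (-2, 1, 1, 0, 0) and (2, -1, 0, 2, 1)
```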

4 Determinants

Definition: $$\det\left(\begin{smallmatrix}a &b\\c &d\end{smallmatrix}\right)=ad-bc$$
Definition: Suppose $A$ is an $m\times n$ matrix and $1\leq i\leq m, 1\leq j\leq n$. Then $\widetilde{A}_{ij}$ is the $(m-1)\times(n-1)$ matrix obtained from $A$ by deleting row $i$ and column $j$ from $A$.
Definition: The determinant of a matrix is a mathematical object that is useful in analyzing and solving systems of linear equations, and in computing and establishing properties of eigenvalues.
Determinants are only defined for square matrices.

Given $A\in M_{n\times n}(\mathbb{F})$,
  1. If $n=1$, given $A=(a)$, then $\det(A)=a$.
  2. If $n > 1$, $$\det(A)=a_{11}\cdot\det(\widetilde{A}_{11})-a_{21}\cdot\det(\widetilde{A}_{21})+a_{31}\cdot\det(\widetilde{A}_{31})-\ldots+(-1)^{n+1}a_{n1}\cdot\det(\widetilde{A}_{n1})$$ This is known as an expansion of minors/cofactors on the first column.

    $\det(\widetilde{A}_{i1})$ is the $(i,1)$-minor of $A$.
    $(-1)^{i+1}\det(\widetilde{A}_{i1})$ is the $(i,1)$-cofactor of $A$.

    General Form: $$\det(A)=\sum_{i=1}^n(-1)^{i+j}a_{ij}\det(\widetilde{A}_{ij})$$ which is the expansion on the $j$-th column.
Properties: Let $A\in M_{n\times n}$
  1. If $A$ is upper-triangular, then $\det(A)=a_{11}a_{22}a_{33}\cdots a_{nn}$
  2. Corollary: $\det(I_n)=1$
  3. If $A$ has a row of zeros, then $\det(A)=0$
  4. If $A$ has a column of zeros, then $\det(A)=0$
  5. If $A$ has two equal adjacent rows, then $\det(A)=0$
    Corollary 9: If $A$ has two equal rows, then $\det(A)=0$.
  6. $\det$ is linear in each row. If we fix $n,i$, and $u_1,\ldots,u_{i-1},u_{i+1},\ldots,u_n\in\mathbb{F}^n$ then for all $r,s\in\mathbb{F}^n$ and all $a\in\mathbb{F}$, $$\det\left(\begin{array}{c}-u_1-\\\vdots\\-r+s-\\\vdots\\-u_n-\end{array}\right)=\det\left(\begin{array}{c}-u_1-\\\vdots\\-r-\\\vdots\\-u_n-\end{array}\right)+\det\left(\begin{array}{c}-u_1-\\\vdots\\-s-\\\vdots\\-u_n-\end{array}\right)$$ $$\det\left(\begin{array}{c}-u_1-\\\vdots\\-ar-\\\vdots\\-u_n-\end{array}\right)=a\cdot\det\left(\begin{array}{c}-u_1-\\\vdots\\-r-\\\vdots\\-u_n-\end{array}\right)$$ Note that this does not mean that $\det(A+B)=\det(A)+\det(B)$ or $\det(aA)=a\det(A)$.
  7. If $A\xrightarrow{R_i\leftarrow R_i+cR_j}B$ ($A$ is transformed by an elem row op of type 3) and $j=i\pm 1$, then $\det(A)=\det(B)$.
    i.e. Type 3 elem row ops preserve determinant when $j=i\pm 1$.
    Corollary 10: [Type 3] If $A\xrightarrow{R_i\leftarrow R_i+cR_j}B$, for $1\leq i,j\leq n$ with $i\neq j$, then $\det(A)=\det(B)$.
  8. If $A\xrightarrow{R_i\leftrightarrow R_{i+1}}B$, then $\det(B)=-\det(A)$.
    Corollary 11: [Type 1] If $A\xrightarrow{R_i\leftrightarrow R_j}B$, then $\det(B)=-\det(A)$.
  9. Corollary 12: [Type 2] If $A\xrightarrow{R_i\leftarrow cR_i}B$ with $c\neq 0$, then $\det(B)=c\cdot\det(A)$.
We observe that given an elem matrix $E$, if $E$ is of Type:
  1. $\det(E)=-\det(I_n)=-1$
  2. $\det(E)=c\cdot\det(I_n)=c,\quad c\neq 0$
  3. $\det(E)=\det(I_n)=1$
Note that $\det(E)\neq 0$ and that $\det(E^t)=\det(E)$.

How to Compute Determinants

  1. We can use elem row ops to reduce $A$ to an upper triangular matrix.
    e.g. Compute the determinant of $A=\left(\begin{array}{ccc} 0&1&3\\-2&-3&-5\\3&-1&1\end{array}\right)$ $$\left(\begin{array}{ccc} 0&1&3\\-2&-3&-5\\3&-1&1\end{array}\right)\xrightarrow{R_1\leftrightarrow R_2}\left(\begin{array}{ccc} -2&-3&-5\\0&1&3\\3&-1&1\end{array}\right)\xrightarrow{R_3\leftarrow R_3+\frac{3}{2}R_1}\left(\begin{array}{ccc} -2&-3&-5\\0&1&3\\0&-\frac{11}{2}&-\frac{13}{2}\end{array}\right)\xrightarrow{R_3\leftarrow R_3+\frac{11}{2}R_2}\left(\begin{array}{ccc} -2&-3&-5\\0&1&3\\0&0&10\end{array}\right)$$ $\det(B)=(-2)\cdot 1\cdot 10=-20$
    Since one type 1 op ($\det$ factor $-1$) and two type 3 ops (factor $1$ each) were used, $\det(B)=-\det(A)$, so $\det(A)=(-1)\cdot 1\cdot 1\cdot (-20) = 20$. A code sketch of this method appears after this list.

  2. Transform $A$ by elem matrices to $I_n$.

    Theorem 14: If $A,E\in M_{n\times n}(\mathbb{F}), E$ is an elem matrix, then $\det(EA)=\det(E)\det(A)$.

    Theorem 15: $A$ is invertible $\Leftrightarrow \det(A)\neq 0$.
    Theorem: $\det(AB)=\det(A)\det(B)$
    Proof:
    • Case 1: $rank(A)< n$. Then, $rank(AB)\leq rank(A)< n$ (by A6Q2b). Thus, $\det(A)=\det(AB)=0$.
    • Case 2: $rank(A)=n$. Then $A$ is invertible and can be transformed into a product of elem matrices such that $A=E_k\cdots E_2E_1$. Then, $$\begin{align*} \det(AB)&=\det(E_k\cdots E_2E_1B)\\ &=\det(E_k)\cdots \det(E_2)\det(E_1)\det(B)\\ &=\det(E_k\cdots E_2E_1)\det(B)\quad\text{(special case of Thm 14 where $A=I_n$)}\\ &=\det(A)\det(B)\quad\square \end{align*}$$
    e.g. Compute the determinant of $A$ as above.

    Continuing from the above solution, $$\left(\begin{array}{ccc} -2&-3&-5\\0&1&3\\0&0&10\end{array}\right)\xrightarrow{R_1\leftarrow -\frac{1}{2}R_1} \left(\begin{array}{ccc} 1&\frac{3}{2}&\frac{5}{2}\\0&1&3\\0&0&10\end{array}\right)\xrightarrow{R_1\leftarrow R_1-\frac{3}{2}R_2} \left(\begin{array}{ccc} 1&0&-2\\0&1&3\\0&0&10\end{array}\right)\xrightarrow{R_1\leftarrow R_1+\frac{1}{5}R_3} \left(\begin{array}{ccc} 1&0&0\\0&1&3\\0&0&10\end{array}\right)$$ $$\xrightarrow{R_2\leftarrow R_2-\frac{3}{10}R_3} \left(\begin{array}{ccc} 1&0&0\\0&1&0\\0&0&10\end{array}\right)\xrightarrow{R_3\leftarrow \frac{1}{10}R_3} \left(\begin{array}{ccc} 1&0&0\\0&1&0\\0&0&1\end{array}\right)$$ $$\begin{align*} \det(I_n)&=\det(E_8E_7E_6E_5E_4E_3E_2E_1A)\\ &=\det(E_8)\det(E_7)\det(E_6)\det(E_5)\det(E_4)\det(E_3)\det(E_2)\det(E_1)\det(A)\\ \det(I_n)&=\frac{1}{10}\cdot -\frac{1}{2}\cdot -1\det(A)=\frac{1}{20}\det(A)\\ \det(A)&=20\cdot\det(I_n)=20 \end{align*}$$
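Here is a Python sketch of method 1 (reduce to upper-triangular form while tracking row swaps), referenced from the first item above:

```python
import numpy as np

# Row-reduce to upper-triangular form; only row swaps change the determinant
# (each flips the sign), while type 3 ops leave it unchanged.
def det(A, tol=1e-12):
    U = A.astype(float).copy()
    n, sign = U.shape[0], 1.0
    for j in range(n):
        p = j + np.argmax(np.abs(U[j:, j]))
        if abs(U[p, j]) < tol:
            return 0.0                          # no pivot: determinant is 0
        if p != j:
            U[[j, p]] = U[[p, j]]               # type 1: det changes sign
            sign = -sign
        for i in range(j + 1, n):
            U[i] -= (U[i, j] / U[j, j]) * U[j]  # type 3: det unchanged
    return sign * np.prod(np.diag(U))           # det of a triangular matrix

A = np.array([[0, 1, 3], [-2, -3, -5], [3, -1, 1]])
print(det(A))   # 20.0, as in the worked example
```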
Corollary: If $A$ is invertible then $\det(A^{-1})=\frac{1}{\det(A)}$.
Corollary 18: $\det(A^t)=\det(A)$
Proof:
  • Case 1: $rank(A)< n$. Then, $rank(A^t)=rank(A)< n$ (by A6Q2b). Thus, $\det(A^t)=\det(A)=0$.
  • Case 2: $rank(A)=n$. Then $A$ is invertible and can be transformed into a product of elem matrices such that $A=E_k\cdots E_2E_1$. Then, $$\begin{align*} A^t&=(E_k\cdots E_2E_1)^t=E_1^tE_2^t\cdots E_k^t\\ \det(A^t)&=\det(E_1^tE_2^t\cdots E_k^t)\\ &=\det(E_1^t)\det(E_2^t)\cdots \det(E_k^t)\\ &=\det(E_1)\det(E_2)\cdots \det(E_k)\\ &=\det(E_k)\cdots \det(E_2)\det(E_1)\\ &=\det(E_k\cdots E_2E_1)\\ &=\det(A)\quad\square \end{align*}$$
By Corollary 18, anything we know about determinants involving rows can be used for columns as well.
Lemma: If $B$ is obtained from $A$ by cyclically shifting $k$ adjacent rows, then $\det(B)=(-1)^{k-1}\det(A)$.
Lemma: Suppose $A\in M_{n\times n}(\mathbb{F})$ and the $i$th row of $A$ is $e_j$ (the row vector with $1$ in position $j$ and $0$ elsewhere). Then $\det(A)=(-1)^{i+j}\det(\widetilde{A}_{ij})$.
Theorem: $\det$ can be computed by expansion on minors along any row or column. If $A\in M_{n\times n}(\mathbb{F})$, then
  • For any $i=1,\ldots, n$ (expansion along row $i$): $\det(A)=\sum_{j=1}^n(-1)^{i+j}a_{ij}\cdot\det(\widetilde{A}_{ij})$
  • For any $j=1,\ldots, n$ (expansion along column $j$): $\det(A)=\sum_{i=1}^n(-1)^{i+j}a_{ij}\cdot\det(\widetilde{A}_{ij})$

Other Properties of Determinants

  1. If $M=\left(\begin{matrix} A &B\\ O &D \end{matrix}\right)$ where $A$ and $D$ are square, then $\det(M)=\det(A)\det(D)$.
  2. If $A$ is invertible and has integer entries, then $A^{-1}$ has integer entries iff $\det(A)=\pm 1$.

Permutations

A permutation on $\{1,2,3,4\}$ is an ordering of its elements, e.g. $(2,4,3,1)$.
Alternate View: A bijection $\sigma : \{1,2,3,4\}\rightarrow\{1,2,3,4\}$, with $\sigma(1)=2,\sigma(2)=4,\sigma(3)=3,\sigma(4)=1$.
Also, $\sigma^{-1}(1)=4,\sigma^{-1}(2)=1,\sigma^{-1}(3)=3,\sigma^{-1}(4)=2$
Representing permutations as such has two main advantages:
  1. Can be graphed.
  2. Permutations can be composed.
    Let $\tau: \{1,2,3,4\}\rightarrow\{1,2,3,4\}\quad$ *$\tau$ is a transposition
    $\tau(1)=3,\tau(2)=2,\tau(3)=1,\tau(4)=4$.
Definition: A transposition is a permutation switching exactly two values. Any permutation can be written as a composition of transpositions.
Definition: Let $\sigma$ be a permutation on $\{1,\ldots,n\}$ and $A,B\in M_{n\times n}(\mathbb{F})$. We write $A\xrightarrow{R:\sigma}B$ to denote that $B$ is obtained from $A$ by moving Row$_i(A)$ to row $\sigma(i)$. So, Row$_i(B)=$Row$_{\sigma^{-1}(i)}(A)$.
Definition: Let $\sigma$ be a permutation on $\{1,\ldots,n\}$. The permutation matrix associated to $\sigma$ is the matrix $P_\sigma$ obtained from $I_n$ by $\xrightarrow{R:\sigma}$, or, $I_n\xrightarrow{R:\sigma} P_\sigma$. For the example $\sigma$ above: $$P_\sigma=\left(\begin{array}{c}-e_{\sigma^{-1}(1)}-\\-e_{\sigma^{-1}(2)}-\\-e_{\sigma^{-1}(3)}-\\-e_{\sigma^{-1}(4)}-\end{array}\right)= \left(\begin{array}{c}-e_4-\\-e_1-\\-e_3-\\-e_2-\end{array}\right)=\left(\begin{array}{cccc}0&0&0&1\\1&0&0&0\\0&0&1&0\\0&1&0&0\end{array}\right) =\left(\begin{array}{cccc}|&|&|&|\\e_2&e_4&e_3&e_1\\|&|&|&|\end{array}\right)$$ $$P_\sigma=\left(\begin{array}{cccc}|&|&|&|\\e_{\sigma(1)}&e_{\sigma(2)}&e_{\sigma(3)}&e_{\sigma(4)}\\|&|&|&|\end{array}\right)$$
In general, $\det(P_\sigma)=\text{sign}(\sigma)$, where $\text{sign}(\sigma)=(-1)^k$ if $\sigma$ can be written as a composition of $k$ transpositions.
Definition: We can define the complete expansion of the determinant using permutations. Given $A$ is an $n\times n$ matrix, $$\det(A)=\sum_\sigma\text{sign}(\sigma)a_{\sigma(1)1}a_{\sigma(2)2}\cdots a_{\sigma(n)n}$$
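A brute-force Python sketch of the complete expansion, with $\text{sign}(\sigma)$ computed by counting inversions (only practical for small $n$, since there are $n!$ terms):

```python
import numpy as np
from itertools import permutations

# Sum over all n! permutations: det(A) = sum_sigma sign(sigma) * prod_j a_{sigma(j), j}.
def det_by_permutations(A):
    n = A.shape[0]
    total = 0.0
    for sigma in permutations(range(n)):
        # Parity of the inversion count equals the parity of the permutation.
        inversions = sum(1 for i in range(n) for j in range(i + 1, n)
                         if sigma[i] > sigma[j])
        sign = (-1) ** inversions
        term = 1.0
        for j in range(n):
            term *= A[sigma[j], j]        # factor a_{sigma(j), j}, one per column
        total += sign * term
    return total

A = np.array([[0, 1, 3], [-2, -3, -5], [3, -1, 1]])
print(det_by_permutations(A))   # 20.0, matching the row-reduction computation
```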

5 Eigenvalues & Diagonalization

Definition: Given a finite-dimensional vector space $V$ over $\mathbb{F}$, a linear operator on $V$ is a linear transformation $T: V\rightarrow V$. We write $$\mathcal{L}(V)=\{\text{all linear operators on }V\}$$
Definition: Let $A\in M_{n\times n}(\mathbb{F})$. A non-zero vector $v\in\mathbb{F}^n$ is an eigenvector of $A$ if $Av=\lambda v$ for some $\lambda\in\mathbb{F}$; such a $\lambda$ is called the eigenvalue of $A$ corresponding to $v$.
Theorem: Let $A\in M_{n\times n}(\mathbb{F})$ and $\lambda\in\mathbb{F}$. Then $\lambda$ is an eigenvalue of $A\Leftrightarrow\det(A-\lambda I_n)=0$.
Proof: Suppose $A\in M_{n\times n}(\mathbb{F})$ and $\lambda\in\mathbb{F}$. Then $$\begin{align*} \lambda\text{ is an eigenvalue of } A&\Longleftrightarrow\exists v\in\mathbb{F}^n, v\neq 0\text{, such that }Av=\lambda v\\ &\Longleftrightarrow Av-\lambda v=0\text{ for some }v\neq 0\\ &\Longleftrightarrow (A-\lambda I_n)v=0\text{ for some }v\neq 0\\ &\Longleftrightarrow N(A-\lambda I_n)\neq\{0\}\\ &\Longleftrightarrow nullity(A-\lambda I_n)>0\\ &\Longleftrightarrow rank(A-\lambda I_n)< n\\ &\Longleftrightarrow\det(A-\lambda I_n)=0\quad\square \end{align*}$$
If $\lambda$ is an eigenvalue of $A$, the eigenspace of $\lambda$ is $E_\lambda=N(A-\lambda I_n)$, and $\dim(E_\lambda)=nullity(A-\lambda I_n)>0$.
Definition: [Definition using matrices] The characteristic polynomial of $A$ is the expression $p_A(t)=\det(A-tI_n)$.
The eigenvalues of $A$ are the roots of $p_A(t)=0$.
Theorem: Let $A\in M_{n\times n}(\mathbb{F})$.
  1. $p_A(t)$ is a polynomial in $\mathbb{F}[t]$ of degree $n$.
  2. The leading coefficient of $p_A(t)$ is $(-1)^n$.
  3. The coefficient of $t^{n-1}$ in $p_A(t)$ is $(-1)^{n-1}tr(A)$.
Note: The constant coefficient of $p_A(t)$ is $p_A(0)=\det(A-0I_n)=\det(A)$
Corollary: If $A\in M_{n\times n}(\mathbb{F})$, then $A$ has at most $n$ eigenvalues.
Definition: Let $A,B\in M_{n\times n}(\mathbb{F})$. $B$ is similar to $A$ (over $\mathbb{F}$) if there exists an invertible $Q\in M_{n\times n}(\mathbb{F})$ such that $B=Q^{-1}AQ$. Moreover, if $B$ is similar to $A$, then $p_B(t)=p_A(t)$.
Corollary: Suppose $\dim(V)=n$ and $T\in\mathcal{L}(V)$. Let $\beta$ and $\gamma$ be two ordered bases for $V$ and let $A=[T]_\beta$ and $B=[T]_\gamma$. Then $p_A(t)=p_B(t)$.
Definition: [Definition using linear operators] Let $T\in\mathcal{L}(V)$ where $V$ is finite-dimensional. The characteristic polynomial of $T$ is the characteristic polynomial of $[T]_\beta$ (definition for matrix) for any ordered basis $\beta$ for $V$, and is denoted by $p_T(t)$.
Definition: A polynomial $f(t)\in\mathbb{F}[t]$ splits over $\mathbb{F}$ if $\exists\ c,a_1,\ldots,a_n\in\mathbb{F}$ such that $f(t)=c(t-a_1)(t-a_2)\cdots(t-a_n)$.
Definition: If $\lambda$ is an eigenvalue of $T$, the multiplicity of $\lambda$ is the maximum value $k$ such that $(t-\lambda)^k$ is a factor of $p_T(t)$.
Theorem: Suppose $V$ is finite-dimensional, $T\in\mathcal{L}(V)$, and $\lambda$ is an eigenvalue of $T$ with multiplicity $m$. Then $\dim(E_\lambda)\leq m$.
Proof: Let $d=\dim(E_\lambda)$ and let $n=\dim(V)$. We pick an ordered basis $\alpha=(v_1,\ldots,v_d)$ for $E_\lambda$. We then extend $\alpha$ to an ordered basis $\beta=(v_1,\ldots,v_d,v_{d+1},\ldots,v_n)$ for $V$. We let $A=[T]_\beta$ so $p_T(t)=p_A(t)$. Then, to get $[T]_\beta$, we apply $T$ to each $v_i\in\beta$. Note that when $i=1,\ldots,d$, since $v_i\in E_\lambda$, $$T(v_i)=\lambda v_i=0v_1+\ldots+0v_{i-1}+\lambda v_i+0v_{i+1}+\ldots+0v_n$$ For $i>d$, we cannot apply the same logic since $v_i\not\in E_\lambda$, so Col$_i(A)$ for $i>d$ has arbitrary values. Thus, we have $$A=[T]_\beta=\left(\begin{matrix} \lambda I_d &B\\ O &C\end{matrix}\right)$$ $$\begin{align*} p_T(t)=\det(A-tI_n)&=\det\left(\begin{matrix} (\lambda-t) I_d &B\\ O &(C-tI_{n-d})\end{matrix}\right)\\ &=\det((\lambda-t) I_d)\det(C-tI_{n-d})\quad\quad *\det\left(\begin{smallmatrix} A &B\\ O &D\end{smallmatrix}\right)=\det(A)\det(D)\\ &=(\lambda-t)^d\det(C-tI_{n-d})\quad\quad *\det(cA)=c^d\det(A)\text{ for }A\in M_{d\times d}\\ &=(\lambda-t)^d\cdot p_C(t) \end{align*}$$ This proves that $(t-\lambda)^d$ divides $p_T(t)$. Since $m$ is by definition the largest value such that $(t-\lambda)^m$ divides $p_T(t)$, we have proved that $d\leq m$, i.e., $\dim(E_\lambda)\leq m$. $\square$
e.g. $A=\left(\begin{matrix}4&0&1\\2&3&2\\1&0&4\end{matrix}\right)\in M_{3\times 3}(\mathbb{R})$ $$\begin{align*} p_A(t)=\det(A-tI_3)&=\det\left(\begin{matrix}4-t&0&1\\2&3-t&2\\1&0&4-t\end{matrix}\right)\\ &=(-1)^{2+2}(3-t)\det\left(\begin{matrix}4-t&1\\1&4-t\end{matrix}\right)\quad\text{*Expanding by minors on the second column}\\ &=(3-t)((4-t)^2-1)=(3-t)^2(5-t) \end{align*}$$ Thus, $p_A(t)$ splits over $\mathbb{R}$ and the eigenvalues of $A$ are $3$ and $5$ with multiplicities $2$ and $1$, respectively. We can see that $\dim(E_5)=1$ and $1\leq\dim(E_3)\leq 2$. $$A-3I_3=\left(\begin{matrix}1&0&1\\2&0&2\\1&0&1\end{matrix}\right)$$ So $rank(A-3I_3)=1$ and $nullity(A-3I_3)=2=\dim(E_3)$. To find a basis for $E_3$, we solve the system $(A-3I_3)x=0$. We begin by forming the augmented matrix. $$\left(\begin{array}{ccc} 1&0&1\\2&0&2\\ 1&0&1 \end{array}\middle\vert\begin{array}{c} 0\\0\\0\end{array}\right)\rightarrow \left(\begin{array}{ccc} 1&0&1\\0&0&0\\ 0&0&0 \end{array}\middle\vert\begin{array}{c} 0\\0\\0\end{array}\right)$$ $$x_1+x_3=0\Rightarrow x_1=-x_3\quad\quad x_2=x_2\quad\quad x_3=x_3$$ $$\left(\begin{smallmatrix}x_1\\x_2\\x_3\end{smallmatrix}\right)=x_2\left(\begin{smallmatrix}0\\1\\0\end{smallmatrix}\right)+x_3\left(\begin{smallmatrix}-1\\0\\1\end{smallmatrix}\right)$$ Thus, $\{(0,1,0), (-1,0,1)\}$ is a basis for $N(A-3I_3)=E_3$.
Similarly, to find a basis for $E_5$, we solve for $(A-5I_3)x=0$. $$\left(\begin{array}{ccc} -1&0&1\\2&-2&2\\ 1&0&-1 \end{array}\middle\vert\begin{array}{c} 0\\0\\0\end{array}\right)\rightarrow \left(\begin{array}{ccc} 1&0&-1\\0&1&-2\\ 0&0&0 \end{array}\middle\vert\begin{array}{c} 0\\0\\0\end{array}\right) $$ $$x_1-x_3=0\Rightarrow x_1=x_3\quad\quad x_2-2x_3=0\Rightarrow x_2=2x_3\quad\quad x_3=x_3$$ $$\left(\begin{smallmatrix}x_1\\x_2\\x_3\end{smallmatrix}\right)=x_3\left(\begin{smallmatrix}1\\2\\1\end{smallmatrix}\right)$$ So, $\{(1,2,1)\}$ is a basis for $E_5$. Observe that since $(1,2,1)\not\in span\{(0,1,0), (-1,0,1)\}$, the set $\{(0,1,0), (-1,0,1), (1,2,1)\}$ is linearly independent and hence a basis for $\mathbb{R}^3$.
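The worked example can be checked numerically; a small sketch using numpy's eigenvalue routine:

```python
import numpy as np

# Numerical check of the worked example: eigenvalues of A and the
# dimension of each eigenspace, dim(E_lambda) = n - rank(A - lambda*I).
A = np.array([[4.0, 0.0, 1.0],
              [2.0, 3.0, 2.0],
              [1.0, 0.0, 4.0]])

print(np.sort(np.linalg.eigvals(A).real))        # [3. 3. 5.]

for lam in (3.0, 5.0):
    dim_E = 3 - np.linalg.matrix_rank(A - lam * np.eye(3))
    print(lam, dim_E)                            # dim(E_3) = 2, dim(E_5) = 1
```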
Theorem: Suppose $V$ is finite-dimensional, $T\in\mathcal{L}(V)$, and $\lambda_1,\ldots,\lambda_k$ are distinct eigenvalues of $T$. If $x_1,\ldots,x_k$ are eigenvectors corresponding to $\lambda_1,\ldots,\lambda_k$, then $\{x_1,\ldots,x_k\}$ is linearly independent.
Corollary: Suppose $V$ is finite-dimensional, $T\in\mathcal{L}(V)$, and $\lambda_1,\ldots,\lambda_k$ are distinct eigenvalues of $T$. If, for each $i=1,\ldots,k$, $\beta_i$ is an ordered basis for $E_{\lambda_i}$, then $\beta=\beta_1\cup\beta_2\cup\cdots\cup\beta_k$ is linearly independent.

Geometric Interpretation

Let's describe the geometric interpretation of how a linear operator $T$ acts on an eigenvector in a vector space $V$ over $\mathbb{R}$. Let $v$ be an eigenvector of $T$ and $\lambda$ be the corresponding eigenvalue. Let $W=span(\{v\})$ be the line in $V$ that passes through $0$ and $v$. For any $w\in W$, $w=cv$ and so, $T(w)=T(cv)=cT(v)=c\lambda v=\lambda w$.
$T$ acts on the vectors in $W$ by multiplying them by $\lambda$.

Other Facts about Eigenvalues & Determinants

Given $A\in M_{n\times n}$ whose characteristic polynomial splits, where $\lambda_1,\ldots,\lambda_k$ are its distinct eigenvalues and $m_1,\ldots,m_k$ are their multiplicities,
  1. $\det(A) = \lambda_1^{m_1}\lambda_2^{m_2}\cdots\lambda_k^{m_k}$
  2. $tr(A)=m_1\lambda_1+m_2\lambda_2+\ldots+m_k\lambda_k$
  3. If $A$ is invertible and $\lambda$ is an eigenvalue of $A$, then $\frac{1}{\lambda}$ is an eigenvalue of $A^{-1}$.

Diagonalizability

Definition: Suppose $V$ is finite-dimensional and $T\in\mathcal{L}(V)$. $T$ is diagonalizable if there is an ordered basis $\beta$ in $V$ such that $[T]_\beta$ is a diagonal matrix. A square matrix $A$ is diagonalizable if $L_A$ is diagonalizable.
Theorem: If $T\in\mathcal{L}(V)$, then $T$ is diagonalizable $\Longleftrightarrow$ there exists a basis for $V$ consisting of eigenvectors of $T$.
Theorem: Let $V$ be finite-dimensional with $\dim(V)=n$ and $T\in\mathcal{L}(V)$. Let $\lambda_1,\ldots,\lambda_k$ be the distinct eigenvalues of $T$ and let $m_1,\ldots,m_k$ be their multiplicities. Then, $T$ is diagonalizable iff
  1. $p_T(t)$ splits over $\mathbb{F}$.
  2. For each $i=1,\ldots,k$, $\dim(E_{\lambda_i})=m_i$
e.g. Determine if $A=\left(\begin{matrix}4&0&1\\2&3&2\\1&0&4\end{matrix}\right)$ is diagonalizable.

From above, we know that $p_A(t)=-(t-3)^2(t-5)$ splits, and $\lambda_1=3,\lambda_2=5$ are eigenvalues with $m_1=2, m_2=1$. We have shown above that $\dim(E_3)=2$ and $\dim(E_5)=1$ which equal their multiplicities. Thus, by Theorem 9, $A$ is diagonalizable.
Moreover, if we let $\beta=((0,1,0), (-1,0,1), (1,2,1))$, the ordered basis of eigenvectors of $A$ found above, then $[L_A]_\beta=D=\left(\begin{matrix}3&0&0\\0&3&0\\0&0&5\end{matrix}\right)$.
Alternatively, we can use an invertible $Q$ such that $Q^{-1}AQ=D=\left(\begin{matrix}3&0&0\\0&3&0\\0&0&5\end{matrix}\right)$, where $$Q=\left(\begin{matrix}|&|&|\\v_1&v_2&v_3\\|&|&|\end{matrix}\right)=\left(\begin{matrix}0&-1&1\\1&0&2\\0&1&1\end{matrix}\right)$$ Let $\sigma=(e_1,e_2,e_3)$, the standard ordered basis for $\mathbb{R}^3$. We observe that $Q=[I_{\mathbb{R}^3}]_\beta^\sigma$ and $Q^{-1}=[I_{\mathbb{R}^3}]_\sigma^\beta$. So, $$Q^{-1}AQ=[I_{\mathbb{R}^3}]_\sigma^\beta[L_A]_\sigma^\sigma[I_{\mathbb{R}^3}]_\beta^\sigma=[I_{\mathbb{R}^3}\circ L_A\circ I_{\mathbb{R}^3}]_\beta^\beta=[L_A]_\beta=D$$

Computational Powers of Diagonal Matrices

If $A$ is diagonalizable, it is easy to compute $A^k$.

Suppose $D$ is a diagonal matrix with diagonal entries $\lambda_i$; then $D^k$ is a diagonal matrix with diagonal entries $\lambda_i^k$.
Since $A$ is diagonalizable, there exists an invertible $Q$ such that $Q^{-1}AQ=D$. $$A=QDQ^{-1}\Rightarrow A^k=(QDQ^{-1})^k=\underbrace{QDQ^{-1}\,QDQ^{-1}\cdots QDQ^{-1}}_{k\text{ factors}}=QD^kQ^{-1}$$ since each interior $Q^{-1}Q=I_n$ cancels. e.g. Given $A=\left(\begin{matrix}4&0&1\\2&3&2\\1&0&4\end{matrix}\right)$, find $A^k$.

We know that $A$ is diagonalizable from before and if $Q=\left(\begin{matrix}0&-1&1\\1&0&2\\0&1&1\end{matrix}\right)$, then $D=Q^{-1}AQ=\left(\begin{matrix}3&0&0\\0&3&0\\0&0&5\end{matrix}\right)$. To compute $A^k$, we need to know $Q^{-1}$. We form the augmented matrix $(Q|I_3)$. $$(Q|I_3)=\left(\begin{matrix}0&-1&1\\1&0&2\\0&1&1\end{matrix}\middle\vert\begin{array}{ccc} 1&0&0\\0&1&0\\0&0&1\end{array}\right)\rightarrow \left(\begin{matrix}1&0&0\\0&1&0\\0&0&1\end{matrix}\middle\vert\begin{array}{ccc} -1&1&-1\\-\frac{1}{2}&0&\frac{1}{2}\\\frac{1}{2}&0&\frac{1}{2}\end{array}\right)=(I_3|Q^{-1})$$ Thus, $$A^k=QD^kQ^{-1}=\left(\begin{matrix}0&-1&1\\1&0&2\\0&1&1\end{matrix}\right)\left(\begin{matrix}3^k&0&0\\0&3^k&0\\0&0&5^k\end{matrix}\right)\left(\begin{matrix} -1&1&-1\\-\frac{1}{2}&0&\frac{1}{2}\\\frac{1}{2}&0&\frac{1}{2}\end{matrix}\right)=\left(\begin{matrix} \frac{1}{2}(3^k+5^k)&0&\frac{1}{2}(5^k-3^k)\\5^k-3^k&3^k&5^k-3^k\\\frac{1}{2}(5^k-3^k)&0&\frac{1}{2}(3^k+5^k)\end{matrix}\right)$$

Steps to Diagonalizing a Matrix

Given a matrix $A\in M_{n\times n}(\mathbb{F})$,
  1. Find the characteristic polynomial $p_A(t)$.
  2. Solve for its eigenvalues.
  3. Determine if it is diagonalizable.
  4. If diagonalizable, find the basis vectors for each $E_{\lambda_i}$ by solving $(A-\lambda_i I_n)x=0$.
  5. Form the basis $\beta$ for $V$ with the above basis vectors.
  6. Compute $D=[L_A]_\beta$.
  7. Additionally, if you wish to find $A^k$, compute $Q=\left(\begin{matrix}|&|& &|\\v_1&v_2&\cdots&v_n\\|&|& &|\end{matrix}\right)$.
  8. Compute $Q^{-1}$ by forming the augmented matrix $(Q|I_n)$ and transforming via row ops to $(I_n|Q^{-1})$.
  9. $A^k=QD^kQ^{-1}$
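These steps, applied to the running example, can be sketched in Python:

```python
import numpy as np

# Diagonalize the running example and compute A^k = Q D^k Q^{-1}.
A = np.array([[4.0, 0.0, 1.0],
              [2.0, 3.0, 2.0],
              [1.0, 0.0, 4.0]])
Q = np.array([[0.0, -1.0, 1.0],     # columns are the eigenvectors
              [1.0,  0.0, 2.0],     # (0,1,0), (-1,0,1), (1,2,1)
              [0.0,  1.0, 1.0]])

D = np.linalg.inv(Q) @ A @ Q        # should be diag(3, 3, 5)
print(D.round(10))

k = 4
D_k = np.diag(np.diag(D) ** k)      # powers of a diagonal matrix: entry-wise
A_k = Q @ D_k @ np.linalg.inv(Q)
assert np.allclose(A_k, np.linalg.matrix_power(A, k))
```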
Extra Problems (Credits to Akshay Tiwary for sharing this!)