\documentclass{article}%
\usepackage{amsmath}
\usepackage{amsfonts}
\usepackage{amssymb}
\usepackage{graphicx}%
\setcounter{MaxMatrixCols}{30}
%TCIDATA{OutputFilter=latex2.dll}
%TCIDATA{Version=5.00.0.2552}
%TCIDATA{CSTFile=40 LaTeX article.cst}
%TCIDATA{Created=Monday, October 20, 2014 12:26:13}
%TCIDATA{LastRevised=Wednesday, October 22, 2014 10:48:30}
\newtheorem{theorem}{Theorem}
\newtheorem{acknowledgement}[theorem]{Acknowledgement}
\newtheorem{algorithm}[theorem]{Algorithm}
\newtheorem{axiom}[theorem]{Axiom}
\newtheorem{case}[theorem]{Case}
\newtheorem{claim}[theorem]{Claim}
\newtheorem{conclusion}[theorem]{Conclusion}
\newtheorem{condition}[theorem]{Condition}
\newtheorem{conjecture}[theorem]{Conjecture}
\newtheorem{corollary}[theorem]{Corollary}
\newtheorem{criterion}[theorem]{Criterion}
\newtheorem{definition}[theorem]{Definition}
\newtheorem{example}[theorem]{Example}
\newtheorem{exercise}[theorem]{Exercise}
\newtheorem{lemma}[theorem]{Lemma}
\newtheorem{notation}[theorem]{Notation}
\newtheorem{problem}[theorem]{Problem}
\newtheorem{proposition}[theorem]{Proposition}
\newtheorem{remark}[theorem]{Remark}
\newtheorem{solution}[theorem]{Solution}
\newtheorem{summary}[theorem]{Summary}
\newenvironment{proof}[1][Proof]{\noindent\textbf{#1.} }{\ \rule{0.5em}{0.5em}}
\begin{document}
\section{Basic Matrices}
Recall that a matrix is a rectangular array of numbers, denoted as $A=\left(
a_{ij}\right) ,$ where $1\leq i\leq m$ and $1\leq j\leq n.$ The matrix is
said to be an $m\times n$ (``$m$ by $n$'') matrix. Consider the following $2\times3$
matrix
\[
B=\left[
\begin{array}
[c]{ccc}%
1 & 2 & 3\\
4 & 5 & 6
\end{array}
\right] .
\]
If the entries of $B$ are denoted $b_{ij},$ we see that $b_{11}=1,$
$b_{12}=2,$ $b_{13}=3,$ $b_{21}=4,$ $b_{22}=5,$ $b_{23}=6.$
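As a concrete illustration (a sketch using Python with NumPy, which these notes do not otherwise assume), the matrix $B$ can be built and its entries inspected. Note that NumPy indexes from $0$, so the entry $b_{ij}$ of the text corresponds to `B[i-1, j-1]`:

```python
import numpy as np

# The 2 x 3 matrix B from the text.
B = np.array([[1, 2, 3],
              [4, 5, 6]])

# NumPy uses 0-based indices, so b_{ij} in the text is B[i-1, j-1].
assert B.shape == (2, 3)   # m = 2 rows, n = 3 columns
assert B[0, 0] == 1        # b_11
assert B[1, 2] == 6        # b_23
```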
The numbers $m$ and $n$ are called the \emph{dimensions}. Matrices have rows
and columns. The $i$th row is the matrix $\left[
\begin{array}
[c]{cccc}%
a_{i1} & a_{i2} & \cdots & a_{in}%
\end{array}
\right] .$ The $j$th column is the matrix
\[
\left[
\begin{array}
[c]{c}%
a_{1j}\\
a_{2j}\\
\vdots\\
a_{mj}%
\end{array}
\right] .
\]
If $m=n$ then we say the matrix is \emph{square}. The entries $a_{ii}$ are
said to be on the diagonal. The following $3\times3$ matrix has $1,$ $6,$ and
$11$ on the diagonal:%
\[
\left[
\begin{array}
[c]{ccc}%
1 & 2 & 3\\
5 & 6 & 7\\
9 & 10 & 11
\end{array}
\right] .
\]
\section{Matrix multiplication}
Before doing matrix multiplication, note that we can multiply a real number
$a$ by a matrix $M$ to get a matrix $aM$, whose entries are those of $M$,
each multiplied by $a.$ For instance,%
\[
2\left[
\begin{array}
[c]{ccc}%
1 & 2 & 3\\
5 & 6 & 7\\
9 & 10 & 11
\end{array}
\right] =\left[
\begin{array}
[c]{ccc}%
2 & 4 & 6\\
10 & 12 & 14\\
18 & 20 & 22
\end{array}
\right] .
\]
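Scalar multiplication is easy to check numerically; a minimal sketch, assuming NumPy:

```python
import numpy as np

M = np.array([[1, 2, 3],
              [5, 6, 7],
              [9, 10, 11]])

# Multiplying by the scalar 2 multiplies every entry of M by 2.
twoM = 2 * M
assert (twoM == np.array([[2, 4, 6],
                          [10, 12, 14],
                          [18, 20, 22]])).all()
```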
We can multiply an $m\times n$ matrix $A$ and an $n\times p$ matrix $B$ in the
following way. The product is the $m\times p$ matrix $C=AB$ (note that the order
is important; $BA$ may not be defined, and even when it is, it may not
equal $AB$) with entries%
\[
c_{ij}=\sum_{k=1}^{n}a_{ik}b_{kj}.
\]
Have another look at $m,n,p$ in the description above. For the product $AB$
to be defined, the number of columns of $A$ must equal the number of rows of $B.$
Here is an example of matrix multiplication:%
\[
\left[
\begin{array}
[c]{ccc}%
1 & 2 & 3\\
4 & 5 & 6
\end{array}
\right] \left[
\begin{array}
[c]{cccc}%
1 & 2 & 3 & 4\\
5 & 6 & 7 & 8\\
9 & 10 & 11 & 12
\end{array}
\right] =\left[
\begin{array}
[c]{cccc}%
38 & 44 & 50 & 56\\
83 & 98 & 113 & 128
\end{array}
\right] .
\]
Note the dimensions.
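The entry formula $c_{ij}=\sum_{k}a_{ik}b_{kj}$ can be verified directly against the example above; a sketch, assuming NumPy:

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])                # 2 x 3
B = np.array([[1, 2, 3, 4],
              [5, 6, 7, 8],
              [9, 10, 11, 12]])          # 3 x 4

m, n = A.shape
n2, p = B.shape
assert n == n2  # inner dimensions must agree

# Compute C = AB entry by entry from c_ij = sum_k a_ik b_kj.
C = np.zeros((m, p), dtype=int)
for i in range(m):
    for j in range(p):
        C[i, j] = sum(A[i, k] * B[k, j] for k in range(n))

assert (C == A @ B).all()   # matches NumPy's built-in matrix product
```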
Matrix multiplication is linear, in the sense that if $A,B,M,N$ are
matrices of appropriate dimensions and $a$ is a real number, then $\left(
A+aB\right) M=AM+a\left( BM\right) $ and $A\left( M+aN\right)
=AM+a\left( AN\right) $.
The \emph{identity matrix} of dimension $n$ is the $n\times n$ matrix with $1$
on the diagonal and $0$ off the diagonal, and is often denoted as $I$ or
$I_{n}.$ For instance,
\[
I_{3}=\left[
\begin{array}
[c]{ccc}%
1 & 0 & 0\\
0 & 1 & 0\\
0 & 0 & 1
\end{array}
\right] .
\]
The identity matrix $I$ has the property that $AI=A$ and $IB=B$ for any
matrices $A$ and $B,$ provided $I$ has the appropriate dimension so the
multiplication makes sense. In this context, vectors are $m\times 1$ matrices
(columns), and matrices act on vectors as $u=Av:$ if $A$ is an $n\times m$
matrix, then $v$ is a vector in $\mathbb{R}^{m}$ and $u$ is a vector in
$\mathbb{R}^{n}.$
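The identity property and the action of a matrix on a column vector can be sketched as follows (assuming NumPy):

```python
import numpy as np

I3 = np.eye(3, dtype=int)          # the 3 x 3 identity matrix
A = np.array([[1, 2, 3],
              [4, 5, 6]])          # 2 x 3

# AI = A when I has as many rows as A has columns;
# IB = B when I has as many columns as B has rows.
assert (A @ I3 == A).all()
assert (np.eye(2, dtype=int) @ A == A).all()

# A matrix acting on a (column) vector: A is 2 x 3, v is in R^3,
# so u = Av is in R^2.
v = np.array([1, 0, -1])
u = A @ v
```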
The \emph{transpose} of a matrix $A$ switches the rows and columns and is
denoted $A^{T}$. That is, if $A=\left( a_{ij}\right) $ is an $m\times n$
matrix, then $A^{T}=\left( b_{ij}\right) $ is the $n\times m$ matrix given
by $b_{ij}=a_{ji}.$ We see that
\[
\left[
\begin{array}
[c]{cccc}%
1 & 2 & 3 & 4\\
5 & 6 & 7 & 8\\
9 & 10 & 11 & 12
\end{array}
\right] ^{T}=\left[
\begin{array}
[c]{ccc}%
1 & 5 & 9\\
2 & 6 & 10\\
3 & 7 & 11\\
4 & 8 & 12
\end{array}
\right] .
\]
Note that if we consider a vector $v$ to be an $n\times 1$ matrix (a column),
then $v^{T}v$ is the usual dot product $v\cdot v.$ A matrix $A$ is
\emph{symmetric} if $A^{T}=A.$
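A short numerical sketch of the transpose, the dot product $v^{T}v$, and symmetry, assuming NumPy (the product $A^{T}A$ is used below only as a convenient source of a symmetric matrix):

```python
import numpy as np

A = np.array([[1, 2, 3, 4],
              [5, 6, 7, 8],
              [9, 10, 11, 12]])    # 3 x 4

# The transpose swaps rows and columns: (A^T)_ij = A_ji.
assert A.T.shape == (4, 3)
assert A.T[1, 2] == A[2, 1]

# For a column vector v, the 1 x 1 matrix v^T v is the dot product v . v.
v = np.array([[1], [2], [3]])      # 3 x 1 column vector
dot = (v.T @ v).item()             # 1*1 + 2*2 + 3*3

# A^T A is always symmetric: it equals its own transpose.
S = A.T @ A
assert (S.T == S).all()
```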
A \emph{permutation matrix} is a matrix obtained from the identity by
interchanging some of the columns. For instance, here is a permutation matrix:%
\[
\left[
\begin{array}
[c]{ccc}%
0 & 1 & 0\\
1 & 0 & 0\\
0 & 0 & 1
\end{array}
\right] .
\]
A permutation matrix $P$ has the property that $P^{T}P=I.$ It also has the
property that $AP$ is the matrix obtained from $A$ by switching the columns in
the same way that was done to get from $I$ to $P.$
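Both properties of the permutation matrix above can be checked numerically; a sketch assuming NumPy:

```python
import numpy as np

# The identity with its first two columns interchanged.
P = np.array([[0, 1, 0],
              [1, 0, 0],
              [0, 0, 1]])

# P^T P = I.
assert (P.T @ P == np.eye(3, dtype=int)).all()

A = np.array([[1, 2, 3],
              [4, 5, 6]])

# AP swaps the first two columns of A, the same interchange
# that was applied to I to obtain P.
AP = A @ P
```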
\section{Eigenvalues and Eigenvectors}
Given a square matrix $M$, a nonzero vector $v$, and a number $\lambda$ (possibly
complex), we say $\lambda$ is an \emph{eigenvalue} of $M$ with corresponding
\emph{eigenvector} $v$ if
\[
Mv=\lambda v.
\]
The collection of all vectors with the same eigenvalue (together with the zero
vector) is called the corresponding \emph{eigenspace}. Eigenvalues are zeroes
of the characteristic polynomial, $\det\left( M-\lambda I\right) $ (see
determinants below). Since every polynomial factors into linear terms over the
complex numbers, every square matrix has a set of complex eigenvalues. A very
important theorem is the spectral theorem:
\begin{theorem}
If $M$ is a symmetric matrix, then all eigenvalues of $M$ are real and there
is a matrix $A$ such that $A^{T}A=I$ and such that
\[
A^{T}MA=D
\]
where $D$ is a matrix with the eigenvalues on the diagonal and zeroes elsewhere.
\end{theorem}
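The spectral theorem can be illustrated numerically; `np.linalg.eigh` is designed for symmetric matrices and returns real eigenvalues together with an orthogonal matrix of eigenvectors. A sketch (the matrix $M$ below is chosen only for illustration):

```python
import numpy as np

# A symmetric matrix.
M = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# eigh returns real eigenvalues (ascending) and a matrix A whose
# columns are the corresponding eigenvectors.
eigenvalues, A = np.linalg.eigh(M)

# A^T A = I (A is orthogonal) ...
assert np.allclose(A.T @ A, np.eye(2))

# ... and A^T M A = D, the diagonal matrix of eigenvalues.
D = A.T @ M @ A
assert np.allclose(D, np.diag(eigenvalues))
```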
\section{Nullspace and nullity}
The \emph{nullspace} of a matrix $A$ is the set of vectors $v$ such that
$Av=0.$ For a square matrix it is the eigenspace of the eigenvalue $0.$ The
nullspace is a vector space, meaning that for any two vectors $v,w$ in the
nullspace and any real numbers $a$ and $b,$ the vector $av+bw$ is in the
nullspace. This follows because if $Av=0$ and $Aw=0,$ then
\[
A\left( av+bw\right) =a\left( Av\right) +b\left( Aw\right) =0.
\]
A \emph{linear combination} of vectors $v_{1},\ldots,v_{k}$ is a vector that,
for some real numbers $a_{1},\ldots,a_{k},$ can be expressed as
\[
a_{1}v_{1}+\cdots+a_{k}v_{k}.
\]
\bigskip
The \emph{nullity} is the smallest number of vectors such that any vector in
the nullspace can be expressed as a linear combination of those vectors.
A square matrix $A$ is invertible if there is another matrix, denoted
$A^{-1},$ such that $AA^{-1}=A^{-1}A=I.$ A square matrix is invertible if and
only if its nullity is zero, which holds if and only if its determinant is nonzero.
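These equivalences can be illustrated numerically; a sketch assuming NumPy (the matrices below are chosen only as examples):

```python
import numpy as np

# A singular matrix: its second row is twice its first.
A = np.array([[1.0, 2.0],
              [2.0, 4.0]])

# det A = 0, so A is not invertible and its nullity is positive.
assert np.isclose(np.linalg.det(A), 0.0)

# v = (2, -1) is in the nullspace: Av = 0.
v = np.array([2.0, -1.0])
assert np.allclose(A @ v, 0.0)

# An invertible matrix has nonzero determinant and an inverse.
B = np.array([[1.0, 2.0],
              [3.0, 4.0]])
assert not np.isclose(np.linalg.det(B), 0.0)
Binv = np.linalg.inv(B)
assert np.allclose(B @ Binv, np.eye(2))
```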
\section{Determinants}
The determinant of a square matrix can be defined inductively: $\det\left[
a\right] =a$ for a $1\times1$ matrix, and then the determinant of an $n\times
n$ matrix is obtained as
\[
\det A=\sum_{j=1}^{n}\left( -1\right) ^{i+j}a_{ij}\det\hat{A}_{ij}%
\]
for any $i$, where $\hat{A}_{ij}$ is the matrix with the $i$th row and $j$th
column removed. This is called expanding in the $i$th row. It is not hard to
see that $\det A=\det A^{T}$ and so we can also expand in a column instead of
a row. The determinant also has the property that $\det\left( AB\right)
=\left( \det A\right) \left( \det B\right) .$ It follows that $\det\left(
A^{-1}\right) =\frac{1}{\det A}.$
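The inductive definition translates directly into a short recursive function; a sketch assuming NumPy, expanding along the first row (so $i=1$, which in $0$-based indexing makes the sign $\left(-1\right)^{j}$):

```python
import numpy as np

def det_cofactor(A):
    """Determinant by cofactor expansion along the first row."""
    n = A.shape[0]
    if n == 1:
        return A[0, 0]
    total = 0
    for j in range(n):
        # A_hat: A with row 0 and column j removed.
        A_hat = np.delete(np.delete(A, 0, axis=0), j, axis=1)
        total += (-1) ** j * A[0, j] * det_cofactor(A_hat)
    return total

A = np.array([[2, 0, 1],
              [1, 3, 2],
              [1, 1, 4]])

# Agrees with NumPy's determinant (up to floating-point rounding).
assert det_cofactor(A) == round(np.linalg.det(A))
```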
\section{Systems of equations}
A system of linear equations can be written as a matrix equation $Ax=b$ as
follows. If $A=\left( a_{ij}\right) ,$ $x=\left( x_{1},\ldots
,x_{n}\right) ,$ and $b=\left( b_{1},\ldots,b_{m}\right) ,$ then the matrix
equation $Ax=b$ corresponds to the system%
\begin{align*}
a_{11}x_{1}+a_{12}x_{2}+\cdots+a_{1n}x_{n} & =b_{1}\\
a_{21}x_{1}+a_{22}x_{2}+\cdots+a_{2n}x_{n} & =b_{2}\\
& \;\;\vdots\\
a_{m1}x_{1}+a_{m2}x_{2}+\cdots+a_{mn}x_{n} & =b_{m}.
\end{align*}
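In practice such systems are solved numerically rather than by forming $A^{-1}$; a sketch assuming NumPy (the small system below is chosen only for illustration):

```python
import numpy as np

# The system  x + 2y = 5,  3x + 4y = 11  written as Ax = b.
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
b = np.array([5.0, 11.0])

# Solve Ax = b directly (more stable than computing inv(A) @ b).
x = np.linalg.solve(A, b)
assert np.allclose(A @ x, b)
```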
\end{document}