* Vectors in $\mathbb{R}^2$, and generalization to vectors in $\mathbb{R}^N$ ($N$-dimensional space).
* Vector operations: addition and scalar multiplication. Both operations together: linear combinations.
* The span of a set of vectors $\lbrace u_1,\ldots,u_k\rbrace$ is the set of all linear combinations of these vectors: we covered some examples in class.
* Definition of matrix times vector: $Ax$ where $A$ is an $M \times N$ matrix, and $x$ is in $\mathbb{R}^N$.
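
As a quick illustration (a sketch, not part of the course materials), the definition says that $Ax$ is exactly the linear combination of the columns of $A$ with coefficients taken from $x$. A minimal NumPy check, with an arbitrary example matrix:

```python
import numpy as np

# A is an M x N matrix (here 2 x 3), x is a vector in R^N.
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 3.0]])
x = np.array([2.0, -1.0, 1.0])

# Ax computed directly ...
direct = A @ x

# ... agrees with the linear combination of the columns of A
# with coefficients x_1, x_2, x_3.
combo = x[0] * A[:, 0] + x[1] * A[:, 1] + x[2] * A[:, 2]

print(np.allclose(direct, combo))  # True
```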
**Reading:** Strang Chapter 1.
### Lecture 7 (Mon Feb 18 2025)
* Throughout this class, we let $v^1, \ldots, v^n$ be a list of $n$ vectors, each in the space $\mathbb{R}^m$. Let $A$ be the $m \times n$ matrix with columns $v^1, \ldots, v^n$.
* The vectors $\lbrace v^1, ..., v^n\rbrace$ are **linearly dependent** if a non-trivial linear combination of them equals zero: this corresponds to $N(A)$ being strictly larger than $\lbrace 0\rbrace$. Otherwise, we say they are **linearly independent**: this corresponds to $N(A) = \lbrace 0\rbrace$.
* A **basis** for a vector space $V$ is a list of vectors that span $V$, and are linearly independent. We covered the standard basis $\lbrace e^1, ..., e^n\rbrace$ for the space $\mathbb{R}^n$.
* Let $V = \text{span} \lbrace v^1, ..., v^n\rbrace$. Then $V$ is the same as $C(A)$. If $\lbrace v^1, ..., v^n\rbrace$ are linearly independent, then they form a basis for $V$.
* More generally, perform Gauss-Jordan elimination, and let $R = GA$ be the RREF of $A$. Then $C(R) = G C(A)$.
* The pivot columns of $R$ form a basis for $C(R)$, and the corresponding columns of $A$ form a basis for $C(A)$.
* Note that $\text{rank}(A)$ equals the number of pivots in $R$, which equals $\dim C(R) = \dim C(A)$. Meanwhile, the number of free variables in $R$ equals $\dim N(A)$ (see the sketch after this list).
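
A small SymPy sketch of these facts (an illustration, not from lecture): `Matrix.rref()` returns the RREF together with the pivot column indices, so a basis for $C(A)$ and the dimension counts can be read off directly. The example matrix is arbitrary, with a deliberately dependent third column.

```python
from sympy import Matrix

# Columns v^1, v^2, v^3; here v^3 = v^1 + v^2, so the columns are dependent.
A = Matrix([[1, 0, 1],
            [0, 1, 1],
            [1, 1, 2]])

R, pivot_cols = A.rref()          # R is the RREF of A, pivot_cols its pivot column indices
print(pivot_cols)                 # (0, 1)

# The corresponding columns of A (not of R) form a basis for C(A).
basis_CA = [A[:, j] for j in pivot_cols]

print(A.rank() == len(pivot_cols))               # rank(A) = # pivots: True
print(len(A.nullspace()) == A.cols - A.rank())   # dim N(A) = # free variables: True
```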
Formal reasoning for the above claims:
1. Column space: $C(A) = \lbrace Ax : x \in \mathbb{R}^n\rbrace$ and $C(R) = \lbrace GAx : x \in \mathbb{R}^n\rbrace$. Thus $b' \in C(R) \Leftrightarrow b' = GAx \text{ for some } x \Leftrightarrow G^{-1}b' = Ax \text{ for some } x \Leftrightarrow G^{-1}b' \in C(A)$. This proves $C(A) = G^{-1} C(R)$.
2. Null space: $N(A) = \lbrace x : Ax = 0\rbrace$ and $N(R) = \lbrace x : GAx = 0\rbrace$. Since $G$ is invertible, $Ax = 0 \Leftrightarrow GAx = 0$. This proves $N(A) = N(R)$ (verified numerically in the sketch after this list).
3. Row space: recall $R^t = (GA)^t = A^t G^t$. $C(A^t) = \lbrace A^t x : x \in \mathbb{R}^m\rbrace$ and $C(R^t) = \lbrace A^t G^t x : x \in \mathbb{R}^m\rbrace$. Since $G$ is invertible, $G^t$ is also invertible. As $x$ ranges over all of $\mathbb{R}^m$, $G^t x$ also ranges over all of $\mathbb{R}^m$. Therefore $C(A^t) = C(R^t)$.
4. Left null space: (***There was a typo on the blackboard, so please read this one carefully.***) $N(A^t) = \lbrace x : A^t x = 0\rbrace$ and $N(R^t) = \lbrace x : A^t G^t x = 0\rbrace$. Therefore $x \in N(R^t) \Leftrightarrow A^t G^t x = 0 \Leftrightarrow G^t x \in N(A^t)$. This proves $N(A^t) = G^t N(R^t)$.
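
As a numerical sanity check of claims 2 and 3 (a sketch with an arbitrary example matrix, not from lecture), we can verify that $A$ and its RREF $R$ have the same null space, and that their row spaces coincide:

```python
from sympy import Matrix, zeros

A = Matrix([[1, 0, 1],
            [0, 1, 1],
            [1, 1, 2]])
R = A.rref()[0]

# Claim 2: N(A) = N(R). Each nullspace basis vector of one matrix
# is annihilated by the other.
print(all(R * v == zeros(3, 1) for v in A.nullspace()))   # True
print(all(A * v == zeros(3, 1) for v in R.nullspace()))   # True

# Claim 3: C(A^t) = C(R^t). Stacking the rows of A on top of the rows
# of R does not increase the rank, so the two row spaces span the same subspace.
print(A.col_join(R).rank() == A.rank() == R.rank())       # True
```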
In class, we calculated the four fundamental subspaces on a small example. We verified that the column space and left null space are orthogonal subspaces of $\mathbb{R}^m$, while the row space and null space are orthogonal subspaces of $\mathbb{R}^n$.
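
The same kind of check can be scripted. The following sketch uses a hypothetical $2 \times 3$ example (not the matrix from class) to verify both orthogonality relations:

```python
from sympy import Matrix

# A hypothetical 2 x 3 example with m = 2, n = 3, and rank 1.
A = Matrix([[1, 2, 3],
            [2, 4, 6]])

row_basis  = A.rowspace()       # basis of C(A^t), vectors in R^n
null_basis = A.nullspace()      # basis of N(A),   vectors in R^n
col_basis  = A.columnspace()    # basis of C(A),   vectors in R^m
left_null  = A.T.nullspace()    # basis of N(A^t), vectors in R^m

# Row space is orthogonal to null space (both subspaces of R^n):
print(all(r.dot(v) == 0 for r in row_basis for v in null_basis))   # True

# Column space is orthogonal to left null space (both subspaces of R^m):
print(all(c.dot(w) == 0 for c in col_basis for w in left_null))    # True
```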
### Lecture 11 (Fri Feb 28 2025)
* We covered the general formulas for orthogonal projection.
* Projection onto a one-dimensional subspace $Y = \text{span}\lbrace y\rbrace$, where $y$ is a unit vector in $\mathbb{R}^n$: $\text{proj}_Y(x) = P_Y x$ where $P_Y = yy^t$. Note that $P_Y$ is an $n \times n$ symmetric matrix. Its column space is exactly the one-dimensional space $Y$, so $P_Y$ has rank one.
* Projection onto a general subspace $V$ of $\mathbb{R}^n$, where $\text{dim } V = r < n$: first express $V = C(A)$ where $A$ is an $n \times r$ matrix whose columns form a basis of $V$. We showed in class that $v = \text{proj}_V(b) = P_V b$ where $P_V = A(A^t A)^{-1} A^t$. This is an $n \times n$ symmetric matrix. Its column space is exactly $V = C(A)$, so $P_V$ has rank $r$ (see the sketch after this list).
**Claim:** If $A$ is $n \times r$ with rank $r$, then $A^t A$ is invertible. We stated this fact in class, and used it to define $P_V$. We did not yet give a justification of this fact, and will do so in a future lecture.
* Note that $v = A x$ where $x = (A^t A)^{-1} A^t b$. This achieves the minimum distance $\Vert Ax-b \Vert$, and we call this the **least squares solution**.
* Lastly we went over some examples of the projection matrix formula:
* In the one-dimensional case $Y = \text{span}\lbrace y\rbrace$ where $y$ is a unit vector, we take $A = y$ and recover the formula $P_Y = yy^t$.
* If we have an orthonormal basis $\lbrace u^1, ..., u^r\rbrace$ for $V$, then $P_V = P_1 + ... + P_r$ where $P_j = u^j(u^j)^t$ is the orthogonal projection onto $\text{span}\lbrace u^j\rbrace$.
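
To tie these formulas together, here is a small NumPy sketch (an illustration with an arbitrary rank-2 example, not from lecture) that builds $P_V = A(A^t A)^{-1} A^t$, checks its basic properties, and compares the least squares solution with NumPy's built-in solver:

```python
import numpy as np

# Columns of A form a basis for a 2-dimensional subspace V of R^3 (rank r = 2).
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [0.0, 1.0]])
b = np.array([1.0, 2.0, 3.0])

# P_V = A (A^t A)^{-1} A^t : orthogonal projection onto V = C(A).
P = A @ np.linalg.inv(A.T @ A) @ A.T

print(np.allclose(P, P.T))       # symmetric
print(np.allclose(P @ P, P))     # idempotent: projecting twice changes nothing
print(np.linalg.matrix_rank(P))  # 2, i.e. rank r = dim V

# Least squares solution x = (A^t A)^{-1} A^t b minimizes ||Ax - b||;
# it agrees with NumPy's least-squares solver.
x = np.linalg.inv(A.T @ A) @ (A.T @ b)
x_np, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.allclose(x, x_np))      # True
```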