Chapter 4: Linear Transformations
Definition & examples · Matrix representations · Similarity
A linear transformation is a structure-preserving map between vector spaces. Every matrix $A$ defines a linear transformation $\mathbf{x}\mapsto A\mathbf{x}$, and conversely, every linear transformation between finite-dimensional spaces can be represented by a matrix. This chapter bridges the abstract and computational worlds.
Definition & Examples
Gallery of Linear Transformations on $\mathbb{R}^2$
| Name | Formula | Matrix |
|---|---|---|
| Rotation by $\theta$ | $(x_1,x_2)\mapsto(x_1\cos\theta - x_2\sin\theta,\; x_1\sin\theta + x_2\cos\theta)$ | $\begin{pmatrix}\cos\theta&-\sin\theta\\\sin\theta&\cos\theta\end{pmatrix}$ |
| Reflection about $x_1$-axis | $(x_1,x_2)\mapsto(x_1,-x_2)$ | $\begin{pmatrix}1&0\\0&-1\end{pmatrix}$ |
| Projection onto $x_1$-axis | $(x_1,x_2)\mapsto(x_1,0)$ | $\begin{pmatrix}1&0\\0&0\end{pmatrix}$ |
| Shear (horizontal) | $(x_1,x_2)\mapsto(x_1+cx_2,\;x_2)$ | $\begin{pmatrix}1&c\\0&1\end{pmatrix}$ |
| Differentiation $D$ | $D(p) = p'$ on $P_n$ | See Example 4.3 |
| Integration $L$ | $L(f) = \int_a^b f(x)\,dx$ | No finite matrix (infinite-dimensional domain) |
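The $2\times 2$ transformations in the gallery can be checked numerically. A minimal sketch in NumPy (variable names are mine, not from the text) verifying that each matrix reproduces its coordinate formula:

```python
import numpy as np

theta = np.pi / 3  # sample angle
rotation = np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])
reflection = np.array([[1.0, 0.0], [0.0, -1.0]])
projection = np.array([[1.0, 0.0], [0.0, 0.0]])
c = 2.0
shear = np.array([[1.0, c], [0.0, 1.0]])

x = np.array([1.0, 2.0])

# Matrix-vector products reproduce the coordinate formulas in the table.
assert np.allclose(rotation @ x,
                   [x[0]*np.cos(theta) - x[1]*np.sin(theta),
                    x[0]*np.sin(theta) + x[1]*np.cos(theta)])
assert np.allclose(reflection @ x, [x[0], -x[1]])
assert np.allclose(projection @ x, [x[0], 0.0])
assert np.allclose(shear @ x, [x[0] + c*x[1], x[1]])

# Sanity checks: rotation preserves length; projection is idempotent (P^2 = P).
assert np.isclose(np.linalg.norm(rotation @ x), np.linalg.norm(x))
assert np.allclose(projection @ projection, projection)
```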
Non-examples — maps that fail linearity:
- $M(\mathbf{x}) = \|\mathbf{x}\|$: fails homogeneity — $M(\alpha\mathbf{x}) = |\alpha|\|\mathbf{x}\|$, but linearity requires $\alpha\|\mathbf{x}\|$
- $T(\mathbf{x}) = \mathbf{x} + \mathbf{c}$ ($\mathbf{c}\neq\mathbf{0}$): fails — $T(\mathbf{0}) = \mathbf{c} \neq \mathbf{0}$
- $T(x_1,x_2) = (x_1^2, x_2)$: fails — $T(2\mathbf{e}_1) = (4,0) \neq 2T(\mathbf{e}_1) = (2,0)$
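Each failure above can be witnessed with a single counterexample; a quick numerical sketch (function names are mine):

```python
import numpy as np

# Each candidate map fails one of the two linearity conditions.
norm_map = lambda x: np.linalg.norm(x)              # M(x) = ||x||
shift = lambda x: x + np.array([1.0, 0.0])          # T(x) = x + c, c != 0
square_first = lambda x: np.array([x[0]**2, x[1]])  # T(x1, x2) = (x1^2, x2)

x = np.array([3.0, 4.0])
alpha = -2.0

# Homogeneity fails: M(alpha x) = |alpha| ||x||, not alpha ||x||.
assert not np.isclose(norm_map(alpha * x), alpha * norm_map(x))

# A linear map must send 0 to 0; translation by c != 0 does not.
assert not np.allclose(shift(np.zeros(2)), np.zeros(2))

# Homogeneity fails: T(2 e1) = (4, 0), but 2 T(e1) = (2, 0).
e1 = np.array([1.0, 0.0])
assert not np.allclose(square_first(2 * e1), 2 * square_first(e1))
```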
Kernel and Image
- $\ker(L) = \{\mathbf{v}\in V \mid L(\mathbf{v}) = \mathbf{0}_W\}$ — a subspace of $V$ (= null space if $L=L_A$)
- $L(V) = \{L(\mathbf{v}) \mid \mathbf{v}\in V\}$ — a subspace of $W$ (= column space if $L=L_A$)
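For a concrete instance, the projection onto the $x_1$-axis from the gallery has kernel equal to the $x_2$-axis and image equal to the $x_1$-axis; a sketch checking this and the rank–nullity count:

```python
import numpy as np

# Projection onto the x1-axis.
A = np.array([[1.0, 0.0],
              [0.0, 0.0]])

# Kernel: the x2-axis is sent to 0.
assert np.allclose(A @ np.array([0.0, 5.0]), np.zeros(2))

# Image: every output lies on the x1-axis (second coordinate 0).
y = A @ np.array([3.0, 7.0])
assert np.isclose(y[1], 0.0)

# rank + nullity = number of columns.
rank = np.linalg.matrix_rank(A)
nullity = A.shape[1] - rank
assert rank == 1 and nullity == 1
```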
📘 Example 4.1 — Kernel of Differentiation
$\ker(D) = \{p \mid p' = 0\} = \{\text{constant polynomials}\} = P_1$, so $\dim\ker(D) = 1$.
$D(P_3) = P_2$ — every element of $P_2$ is $p'$ for some $p \in P_3$ (integrate and discard the constant). $\dim = 2$.
Check dimensions: $1 + 2 = 3 = \dim P_3$. Rank–Nullity holds. ✓
Matrix Representations
📘 Example 4.2 — Rotation Matrix
📘 Example 4.3 — Matrix for Differentiation ($P_3 \to P_2$)
- $D(x^2) = 2x = 2\cdot x + 0\cdot 1 \;\to\; \text{column } (2,0)^T$
- $D(x) = 1 = 0\cdot x + 1\cdot 1 \;\to\; \text{column } (0,1)^T$
- $D(1) = 0 \;\to\; \text{column } (0,0)^T$
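Stacking these columns gives the matrix of $D$ relative to the ordered bases $\{x^2, x, 1\}$ of $P_3$ and $\{x, 1\}$ of $P_2$. A quick numerical check (the sample polynomial is my own):

```python
import numpy as np

# Columns from the example: images of x^2, x, 1 in the basis {x, 1} of P_2.
D = np.array([[2, 0, 0],
              [0, 1, 0]])

# p(x) = 3x^2 + 5x + 7 has coordinates (3, 5, 7); its derivative is 6x + 5.
p = np.array([3, 5, 7])
assert np.array_equal(D @ p, np.array([6, 5]))

# Rank-nullity as in Example 4.1: rank 2 (image is P_2), nullity 1 (constants).
rank = np.linalg.matrix_rank(D)
assert rank == 2 and D.shape[1] - rank == 1
```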
General Ordered Bases (Theorem 4.2.2)
If $E$ and $F$ are ordered bases for $V$ and $W$, and $A$ represents $L: V\to W$ relative to $E,F$, then:
The $j$-th column of $A$ is $[L(\mathbf{v}_j)]_F$ — the coordinate vector of the image of the $j$-th basis vector.
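The theorem is a recipe: apply $L$ to each basis vector of $E$ and record its $F$-coordinates as a column. A sketch for the standard bases of $\mathbb{R}^2$, using the shear from the gallery (the helper `matrix_of` is my illustration, not from the text):

```python
import numpy as np

def matrix_of(L, n):
    """Matrix of L relative to the standard bases:
    column j is L applied to the j-th standard basis vector."""
    return np.column_stack([L(np.eye(n)[:, j]) for j in range(n)])

# Horizontal shear with c = 3, written as a plain function.
shear = lambda x: np.array([x[0] + 3 * x[1], x[1]])

A = matrix_of(shear, 2)
assert np.array_equal(A, np.array([[1, 3], [0, 1]]))
```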
Similarity
If $B = S^{-1}AS$ for some invertible $S$, then $A$ and $B$ are similar — they represent the same linear operator in different bases — and they share:
- $\det(B) = \det(S^{-1}AS) = \det(S)^{-1}\det(A)\det(S) = \det(A)$
- $\text{tr}(B) = \text{tr}(A)$ (trace = sum of diagonal entries)
- Rank and nullity
- Eigenvalues and characteristic polynomial — see Chapter 6
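These invariants can be confirmed numerically for a random similarity $B = S^{-1}AS$; a minimal sketch (the particular $A$ and $S$ are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))

# Any invertible S yields a similar matrix B = S^{-1} A S.
S = rng.standard_normal((3, 3)) + 3 * np.eye(3)  # shifted to be well-conditioned
B = np.linalg.inv(S) @ A @ S

assert np.isclose(np.linalg.det(B), np.linalg.det(A))
assert np.isclose(np.trace(B), np.trace(A))
assert np.linalg.matrix_rank(B) == np.linalg.matrix_rank(A)
# Eigenvalues agree (compared sorted, up to floating-point error).
assert np.allclose(np.sort_complex(np.linalg.eigvals(B)),
                   np.sort_complex(np.linalg.eigvals(A)))
```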
📘 Example 4.4 — Finding a Simpler Representation
In the basis $\{\mathbf{y}_1, \mathbf{y}_2, \mathbf{y}_3\}$ of eigenvectors of $A$: $L(\mathbf{y}_1) = \mathbf{0}$, $L(\mathbf{y}_2) = \mathbf{y}_2$, $L(\mathbf{y}_3) = 4\mathbf{y}_3$.
$$D = Y^{-1}AY = \begin{pmatrix}0&0&0\\0&1&0\\0&0&4\end{pmatrix}$$ In the eigenvector basis, $L$ becomes diagonal — multiplying coordinates by 0, 1, or 4. Far simpler than $A$.
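Running the change of basis in reverse: any invertible $Y$ whose columns are the eigenvectors recovers a matrix $A = YDY^{-1}$ with eigenvalues $0, 1, 4$. A sketch with a hypothetical $Y$ of my own choosing (the text does not specify $A$ or $Y$):

```python
import numpy as np

D = np.diag([0.0, 1.0, 4.0])

# Hypothetical eigenvector basis (columns); any invertible Y works.
Y = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])

A = Y @ D @ np.linalg.inv(Y)

# A has eigenvalues 0, 1, 4, and changing basis recovers the diagonal D.
assert np.allclose(np.sort(np.linalg.eigvals(A).real), [0.0, 1.0, 4.0])
assert np.allclose(np.linalg.inv(Y) @ A @ Y, D)

# Each column y_j of Y is an eigenvector: A y_j = lambda_j y_j.
for j, lam in enumerate([0.0, 1.0, 4.0]):
    assert np.allclose(A @ Y[:, j], lam * Y[:, j])
```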