Chapter 4: Linear Transformations | MTS203 Linear Algebra
Leon 8th Ed.

Chapter 4: Linear Transformations

Definition & examples · Matrix representations · Similarity

A linear transformation is a structure-preserving map between vector spaces. Every matrix $A$ defines a linear transformation $\mathbf{x}\mapsto A\mathbf{x}$, and conversely, every linear transformation between finite-dimensional spaces can be represented by a matrix. This chapter bridges the abstract and computational worlds.

Section 4.1

Definition & Examples

Definition — Linear Transformation
A mapping $L: V \to W$ is linear if for all $\mathbf{u},\mathbf{v}\in V$ and all scalars $\alpha,\beta$: $$L(\alpha\mathbf{u}+\beta\mathbf{v}) = \alpha L(\mathbf{u})+\beta L(\mathbf{v})$$ Equivalently: $L$ preserves addition and scalar multiplication separately. A linear map from $V$ to itself is called a linear operator.
A rotation is linear: straight grid lines stay straight and evenly spaced, and the origin stays fixed.
| Name | Formula | Matrix |
| --- | --- | --- |
| Rotation by $\theta$ | $(x_1,x_2)\mapsto(x_1\cos\theta - x_2\sin\theta,\; x_1\sin\theta + x_2\cos\theta)$ | $\begin{pmatrix}\cos\theta&-\sin\theta\\\sin\theta&\cos\theta\end{pmatrix}$ |
| Reflection about $x_1$-axis | $(x_1,x_2)\mapsto(x_1,-x_2)$ | $\begin{pmatrix}1&0\\0&-1\end{pmatrix}$ |
| Projection onto $x_1$-axis | $(x_1,x_2)\mapsto(x_1,0)$ | $\begin{pmatrix}1&0\\0&0\end{pmatrix}$ |
| Shear (horizontal) | $(x_1,x_2)\mapsto(x_1+cx_2,\;x_2)$ | $\begin{pmatrix}1&c\\0&1\end{pmatrix}$ |
| Differentiation $D$ | $D(p) = p'$ on $P_n$ | See Example 4.3 |
| Integration $L$ | $L(f) = \int_a^b f(x)\,dx$ | No finite matrix (infinite dim) |
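The $\mathbb{R}^2$ entries of the table can be checked numerically. A minimal NumPy sketch (the variable names and the test vector $(1,2)^T$ are illustrative choices, not from the text):

```python
import numpy as np

theta = np.pi / 2  # rotate by 90 degrees
c = 1.5            # shear factor

rotation   = np.array([[np.cos(theta), -np.sin(theta)],
                       [np.sin(theta),  np.cos(theta)]])
reflection = np.array([[1.0, 0.0], [0.0, -1.0]])
projection = np.array([[1.0, 0.0], [0.0,  0.0]])
shear      = np.array([[1.0,   c], [0.0,  1.0]])

x = np.array([1.0, 2.0])
rot_x   = rotation   @ x  # (1,2) rotated 90 degrees CCW -> (-2, 1)
refl_x  = reflection @ x  # reflected across the x1-axis -> (1, -2)
proj_x  = projection @ x  # projected onto the x1-axis   -> (1, 0)
shear_x = shear      @ x  # sheared: (1 + 1.5*2, 2)      -> (4, 2)
```

Each transformation is just matrix–vector multiplication, which is exactly the content of Theorem 4.2.1 below.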
⚠ NOT Linear — Common Mistakes
  • $M(\mathbf{x}) = \|\mathbf{x}\|$: fails — $M(\alpha\mathbf{x}) = |\alpha|\|\mathbf{x}\|$ but we need $\alpha\|\mathbf{x}\|$
  • $T(\mathbf{x}) = \mathbf{x} + \mathbf{c}$ ($\mathbf{c}\neq\mathbf{0}$): fails — $T(\mathbf{0}) = \mathbf{c} \neq \mathbf{0}$
  • $T(x_1,x_2) = (x_1^2, x_2)$: fails — $T(2\mathbf{e}_1) = (4,0) \neq 2T(\mathbf{e}_1) = (2,0)$
Every linear map must satisfy $L(\mathbf{0}) = \mathbf{0}$.
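A randomized spot-check can catch the failures above: test $L(\alpha\mathbf{u}+\beta\mathbf{v}) = \alpha L(\mathbf{u})+\beta L(\mathbf{v})$ on random inputs. This is a heuristic sketch (the helper `is_linear` is not from the text; passing the check does not *prove* linearity, but failing it disproves it):

```python
import numpy as np

def is_linear(T, n, trials=100, rng=np.random.default_rng(0)):
    """Spot-check L(a*u + b*v) == a*L(u) + b*L(v) on random inputs."""
    for _ in range(trials):
        u, v = rng.standard_normal(n), rng.standard_normal(n)
        a, b = rng.standard_normal(2)
        if not np.allclose(T(a*u + b*v), a*T(u) + b*T(v)):
            return False
    return True

shear     = lambda x: np.array([x[0] + 1.5*x[1], x[1]])  # linear
norm_map  = lambda x: np.array([np.linalg.norm(x)])      # fails: |a|, not a
translate = lambda x: x + np.array([1.0, 1.0])           # fails: T(0) != 0
```

The norm map fails whenever a random scalar is negative, and translation fails because it moves the origin.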

Kernel and Image

Kernel and Image of a Linear Map
Let $L: V \to W$ be linear.
  • $\ker(L) = \{\mathbf{v}\in V \mid L(\mathbf{v}) = \mathbf{0}_W\}$ — a subspace of $V$ (= null space if $L=L_A$)
  • $L(V) = \{L(\mathbf{v}) \mid \mathbf{v}\in V\}$ — a subspace of $W$ (= column space if $L=L_A$)
$L$ is one-to-one $\Leftrightarrow$ $\ker(L)=\{\mathbf{0}\}$. $\;\;$ $L$ is onto $\Leftrightarrow$ $L(V)=W$.
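For $L = L_A$, both subspaces are computable: the image dimension is the rank of $A$, and the kernel dimension is $n - \text{rank}(A)$. A small sketch using the projection matrix from the table (the flag names are illustrative):

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [0.0, 0.0]])  # projection onto the x1-axis

rank = np.linalg.matrix_rank(A)  # dim of the image (column space)
nullity = A.shape[1] - rank      # dim of the kernel (null space)

one_to_one = (nullity == 0)      # ker(L) = {0}?
onto = (rank == A.shape[0])      # L(V) = W?
```

The projection collapses the $x_2$-axis to $\mathbf{0}$, so it is neither one-to-one nor onto.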
📘 Example 4.1 — Kernel of Differentiation
$D: P_3 \to P_3$, $D(p) = p'$.

$\ker(D) = \{p \mid p'=0\} = \{$constants$\} = P_1$ (polynomials of degree 0). dim$=1$.

$D(P_3) = P_2$ (every polynomial of degree less than 2 is $p'$ for some $p \in P_3$). dim$=2$.

Check dimension: $1 + 2 = 3 = \dim P_3$. Rank-Nullity holds. ✓
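The dimension count can be verified by writing $D$ as a matrix in the ordered basis $[x^2, x, 1]$ of $P_3$ (the matrix below follows from $D(x^2)=2x$, $D(x)=1$, $D(1)=0$; the computation itself is a NumPy sketch):

```python
import numpy as np

# Matrix of D: P3 -> P3 in the ordered basis [x^2, x, 1].
# Columns are the coordinate vectors of D(x^2)=2x, D(x)=1, D(1)=0.
D = np.array([[0.0, 0.0, 0.0],
              [2.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])

rank = np.linalg.matrix_rank(D)  # dim D(P3) = 2
nullity = D.shape[1] - rank      # dim ker(D) = 1
```

Rank-nullity: $1 + 2 = 3 = \dim P_3$, matching the check above.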
Section 4.2

Matrix Representations

Theorem 4.2.1 — Every $L: \mathbb{R}^n\to\mathbb{R}^m$ Has a Matrix
There exists a unique $m\times n$ matrix $A$ such that $L(\mathbf{x}) = A\mathbf{x}$. Its columns are determined by: $$\mathbf{a}_j = L(\mathbf{e}_j), \quad j = 1, 2, \ldots, n$$ Recipe: apply $L$ to each standard basis vector; collect results as columns.
To find the matrix of $L$: feed in $\mathbf{e}_1, \mathbf{e}_2, \ldots$ and collect the outputs as columns.
📘 Example 4.2 — Rotation Matrix
$L: \mathbb{R}^2 \to \mathbb{R}^2$ rotates counterclockwise by angle $\theta$. $$L(\mathbf{e}_1) = (\cos\theta, \sin\theta)^T, \quad L(\mathbf{e}_2) = (-\sin\theta, \cos\theta)^T$$ $$A = \begin{pmatrix}\cos\theta & -\sin\theta \\ \sin\theta & \cos\theta\end{pmatrix}$$ For $\theta=90°$: $A=\begin{pmatrix}0&-1\\1&0\end{pmatrix}$. Check: $A(1,0)^T = (0,1)^T$ ✓ (rotated 90° counterclockwise).
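The recipe of Theorem 4.2.1 is short enough to code directly: apply $L$ to each standard basis vector and stack the results as columns. A sketch (the helper name `matrix_of` is an illustrative choice):

```python
import numpy as np

def matrix_of(L, n):
    """Standard matrix of L: R^n -> R^m, built column by column (Theorem 4.2.1)."""
    eye = np.eye(n)
    return np.column_stack([L(eye[:, j]) for j in range(n)])

theta = np.pi / 2  # 90-degree rotation, as in Example 4.2
rotate = lambda x: np.array([x[0]*np.cos(theta) - x[1]*np.sin(theta),
                             x[0]*np.sin(theta) + x[1]*np.cos(theta)])

A = matrix_of(rotate, 2)  # should match [[0, -1], [1, 0]] up to rounding
```

This reproduces the rotation matrix of Example 4.2 for $\theta = 90°$.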
📘 Example 4.3 — Matrix for Differentiation ($P_3 \to P_2$)
Bases: $[x^2, x, 1]$ for $P_3$ and $[x, 1]$ for $P_2$.
  • $D(x^2) = 2x = 2\cdot x + 0\cdot 1 \;\to\; \text{column } (2,0)^T$
  • $D(x) = 1 = 0\cdot x + 1\cdot 1 \;\to\; \text{column } (0,1)^T$
  • $D(1) = 0 \;\to\; \text{column } (0,0)^T$
$$A = \begin{pmatrix}2&0&0\\0&1&0\end{pmatrix}$$ Test: $A(1,0,0)^T = (2,0)^T$ = coords of $D(x^2)=2x$. ✓
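The matrix of Example 4.3 can be tested on a concrete polynomial. A sketch, assuming the coordinate conventions above (bases $[x^2, x, 1]$ and $[x, 1]$; the sample polynomial is an illustrative choice):

```python
import numpy as np

# Matrix of D: P3 -> P2 relative to the bases [x^2, x, 1] and [x, 1]
A = np.array([[2.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])

# p(x) = 3x^2 - 5x + 7 has coordinates (3, -5, 7); p'(x) = 6x - 5
p = np.array([3.0, -5.0, 7.0])
dp = A @ p  # expect (6, -5), the coordinates of 6x - 5
```

Multiplying by $A$ differentiates in coordinates, exactly as Theorem 4.2.2 promises.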

General Ordered Bases (Theorem 4.2.2)

If $E$ and $F$ are ordered bases for $V$ and $W$, and $A$ represents $L: V\to W$ relative to $E,F$, then:

$$[L(\mathbf{v})]_F = A[\mathbf{v}]_E \quad \text{for all } \mathbf{v}\in V$$

The $j$-th column of $A$ is $[L(\mathbf{v}_j)]_F$ — the coordinate vector of the image of the $j$-th basis vector.
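Theorem 4.2.2 can be illustrated numerically: store the bases as columns of matrices $U$ and $W$, build $A$ column by column from $[L(\mathbf{u}_j)]_F$, and compare both sides of the identity. A sketch with made-up bases and map (all the specific matrices below are illustrative, not from the text):

```python
import numpy as np

B = np.array([[2.0, 1.0], [0.0, 3.0]])  # L(x) = Bx in standard coordinates
U = np.array([[1.0, 1.0], [0.0, 1.0]])  # columns: ordered basis E of the domain
W = np.array([[1.0, 0.0], [1.0, 1.0]])  # columns: ordered basis F of the codomain

# j-th column of A is [L(u_j)]_F, i.e. the solution of W a = B u_j
A = np.linalg.solve(W, B @ U)

v = np.array([3.0, -2.0])
v_E = np.linalg.solve(U, v)      # [v]_E: coordinates of v relative to E
lhs = np.linalg.solve(W, B @ v)  # [L(v)]_F computed directly
rhs = A @ v_E                    # A [v]_E, per Theorem 4.2.2
```

Both sides agree, confirming $[L(\mathbf{v})]_F = A[\mathbf{v}]_E$ for this example.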

Section 4.3

Similarity

Definition — Similar Matrices
$B$ is similar to $A$ if $\exists$ nonsingular $S$ with $B = S^{-1}AS$. Similar matrices represent the same linear operator in different bases — $S$ is the transition matrix between those bases.
Quantities Preserved Under Similarity
These are properties of the linear operator, not of any particular matrix representing it:
  • $\det(B) = \det(S^{-1}AS) = \det(A)$
  • $\text{tr}(B) = \text{tr}(A)$ (trace = sum of diagonal entries)
  • Rank and nullity
  • Eigenvalues and characteristic polynomial — see Chapter 6
This is why diagonalization (finding the simplest similar matrix) is so valuable.
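A quick randomized check of these invariants (a sketch: the random matrices are illustrative, and a random Gaussian $S$ is nonsingular with probability 1, which the code assumes rather than verifies):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
S = rng.standard_normal((4, 4))  # assumed nonsingular (almost surely true)
B = np.linalg.inv(S) @ A @ S     # B is similar to A

det_match = np.isclose(np.linalg.det(B), np.linalg.det(A))
tr_match = np.isclose(np.trace(B), np.trace(A))
rank_match = (np.linalg.matrix_rank(B) == np.linalg.matrix_rank(A))
# np.poly returns characteristic polynomial coefficients of a square matrix
charpoly_match = np.allclose(np.poly(B), np.poly(A))
```

All four checks pass: the determinant, trace, rank, and characteristic polynomial survive the change of basis.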
📘 Example 4.4 — Finding a Simpler Representation
$L(\mathbf{x}) = A\mathbf{x}$, $A=\begin{pmatrix}2&2&0\\1&1&2\\1&1&2\end{pmatrix}$.

In the basis $\{\mathbf{y}_1, \mathbf{y}_2, \mathbf{y}_3\}$ of eigenvectors of $A$: $L(\mathbf{y}_1)=\mathbf{0}$, $L(\mathbf{y}_2)=\mathbf{y}_2$, $L(\mathbf{y}_3)=4\mathbf{y}_3$.

$$D = Y^{-1}AY = \begin{pmatrix}0&0&0\\0&1&0\\0&0&4\end{pmatrix}$$ In the eigenvector basis, $L$ becomes diagonal — multiplying coordinates by 0, 1, or 4. Far simpler than $A$.
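Example 4.4 can be reproduced numerically: compute eigenvectors of $A$, collect them as the columns of $Y$, and form $Y^{-1}AY$. A NumPy sketch (sorting the eigenvalues into the order $0, 1, 4$ so the result matches the $D$ above):

```python
import numpy as np

A = np.array([[2.0, 2.0, 0.0],
              [1.0, 1.0, 2.0],
              [1.0, 1.0, 2.0]])

eigvals, Y = np.linalg.eig(A)  # columns of Y are eigenvectors of A
order = np.argsort(eigvals)    # sort eigenvalues into 0, 1, 4
eigvals, Y = eigvals[order], Y[:, order]

D = np.linalg.inv(Y) @ A @ Y   # diagonal in the eigenvector basis
```

Up to rounding, `D` is $\mathrm{diag}(0, 1, 4)$, the diagonal representation of $L$.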