Chapter 2: Determinants | MTS203 Linear Algebra
Leon 8th Ed.


Cofactor expansion · Properties of determinants · Row reduction method · Cramer's rule

With every square matrix $A$ we associate a scalar $\det(A)$ — one number encoding whether $A$ is invertible, how it scales volumes, and whether its columns are dependent. Determinants appear in the characteristic equation that unlocks eigenvalues, in Cramer's Rule, and throughout geometry.

Section 2.1

The Determinant of a Matrix

The 2×2 and 3×3 Cases

Definition — 2×2
$$\det\begin{pmatrix}a&b\\c&d\end{pmatrix} = ad - bc$$ $A$ is invertible iff $\det(A) \neq 0$.
Geometric meaning: $|\det(A)|$ = area of the parallelogram spanned by the columns. Sign encodes orientation (positive = counterclockwise).
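To make the formula concrete, here is a minimal Python sketch (the helper name `det2` is ours, not the text's) computing a $2\times2$ determinant as a signed area:

```python
def det2(a, b, c, d):
    """Determinant of [[a, b], [c, d]]: ad - bc."""
    return a * d - b * c

# Columns (2, 0) and (1, 3) span a parallelogram of area 6;
# the positive sign says (1, 3) lies counterclockwise from (2, 0).
print(det2(2, 1, 0, 3))   # 6
# Swapping the columns reverses orientation: the sign flips.
print(det2(1, 2, 3, 0))   # -6
```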

For a $3\times3$ matrix, expanding equation (3) from the book gives:

$$\det(A) = a_{11}a_{22}a_{33} - a_{11}a_{32}a_{23} - a_{12}a_{21}a_{33} + a_{12}a_{31}a_{23} + a_{13}a_{21}a_{32} - a_{13}a_{31}a_{22}$$

Cofactor Expansion — General Method

Minors, Cofactors, Expansion
The $(i,j)$-minor $M_{ij}$ = $\det$ of the $(n-1)\times(n-1)$ submatrix obtained by deleting row $i$, column $j$.
The $(i,j)$-cofactor: $A_{ij} = (-1)^{i+j} M_{ij}$

Cofactor expansion along row $i$: $$\det(A) = \sum_{j=1}^n a_{ij} A_{ij}$$ This works for any row or column — always gives the same value (Theorem 2.1.1).
The $(-1)^{i+j}$ sign pattern forms a checkerboard of + and −. Strategy: always expand along the row or column with the most zeros.
📘 Example 2.1 — 3×3 by Cofactor Expansion
$$A = \begin{pmatrix}2&5&4\\3&1&2\\5&4&6\end{pmatrix}$$ Expand along row 1: $$\det(A) = 2\begin{vmatrix}1&2\\4&6\end{vmatrix} - 5\begin{vmatrix}3&2\\5&6\end{vmatrix} + 4\begin{vmatrix}3&1\\5&4\end{vmatrix} = 2(-2) - 5(8) + 4(7) = -4-40+28$$ $\det(A) = -16$
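The general expansion of Theorem 2.1.1 translates directly into a recursive function. This is a sketch for checking small examples (the name `det_cofactor` is ours), not an efficient algorithm:

```python
def det_cofactor(A):
    """Determinant by cofactor expansion along row 1 (O(n!) -- small n only)."""
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        # minor M_1j: delete row 1 and column j
        minor = [row[:j] + row[j + 1:] for row in A[1:]]
        total += (-1) ** j * A[0][j] * det_cofactor(minor)
    return total

print(det_cofactor([[2, 5, 4], [3, 1, 2], [5, 4, 6]]))   # -16, as in Example 2.1
```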
📘 Example 2.2 — 4×4, Smart Expansion
$$A = \begin{pmatrix}3&0&0&1\\0&2&0&0\\1&6&2&0\\0&2&-1&2\end{pmatrix}$$ Row 2 has three zeros — expand along it. Only $a_{22}=2$ is nonzero: $$\det(A) = 2\cdot(-1)^{2+2}\cdot\det\begin{pmatrix}3&0&1\\1&2&0\\0&-1&2\end{pmatrix} = 2\cdot[3(4-0)-0+1(-1-0)] = 2(11) = 22$$ $\det(A) = 22$. Expanding along row 1, which has two nonzero entries, would require two $3\times3$ sub-determinants instead of one.

Triangular Matrices

Theorem 2.1.3 — Triangular Rule
If $A$ is upper or lower triangular: $\;\det(A) = a_{11}\cdot a_{22} \cdots a_{nn}$ (product of diagonal entries). $$\det\begin{pmatrix}3&7&-2\\0&-1&5\\0&0&4\end{pmatrix} = 3\cdot(-1)\cdot4 = -12$$ This is the key theorem that makes row reduction efficient for computing determinants.
Section 2.2

Properties of Determinants

Effect of Row Operations on det

Row Operations Change det Predictably
If $E$ is an elementary matrix then $\det(EA) = \det(E)\cdot\det(A)$, where:
  • Type I (swap): $\det(E) = -1$ — det changes sign
  • Type II (scale row by $\alpha$): $\det(E) = \alpha$ — det scales by $\alpha$
  • Type III (add multiple): $\det(E) = 1$ — det unchanged
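One way to see the three rules in action is to apply one operation of each type to a fixed matrix and recompute the determinant. A sketch (the explicit `det3` helper is ours):

```python
def det3(M):
    """Explicit 3x3 determinant via cofactor expansion along row 1."""
    (a, b, c), (d, e, f), (g, h, i) = M
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

A = [[1, 2, 3], [4, 5, 6], [7, 8, 10]]                        # det(A) = -3
swap = [A[1], A[0], A[2]]                                     # Type I: rows 1,2 swapped
scale = [[5 * x for x in A[0]], A[1], A[2]]                   # Type II: row 1 times 5
add = [A[0], [x + 2 * y for x, y in zip(A[1], A[0])], A[2]]   # Type III: R2 + 2*R1
print(det3(A), det3(swap), det3(scale), det3(add))   # -3 3 -15 -3
```

As predicted: the swap flips the sign, the scaling multiplies by $5$, and the Type III operation leaves the determinant untouched.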

Product Rule and Corollaries

Theorem 2.2.3
$$\det(AB) = \det(A)\cdot\det(B)$$ Useful consequences and companion facts:
  • $\det(A^{-1}) = 1/\det(A)$ — take determinants in $AA^{-1} = I$
  • $\det(A^T) = \det(A)$ — transpose doesn't change det
  • $\det(A^n) = (\det A)^n$
  • $A$ singular $\Leftrightarrow \det(A) = 0$ $\Leftrightarrow$ columns linearly dependent

Efficient Computation: Row Reduction Method

Cofactor expansion costs on the order of $n!$ operations — hopeless beyond small $n$. Row reduction needs only $O(n^3)$:

  1. Reduce $A$ to upper triangular $U$ using only Type I and Type III row operations.
  2. Let $k$ = number of row swaps used.
  3. $\det(A) = (-1)^k \cdot u_{11} \cdot u_{22} \cdots u_{nn}$
📘 Example 2.3 — det by Row Reduction
$$A = \begin{pmatrix}1&2&3\\4&5&6\\7&8&10\end{pmatrix}$$ $$\xrightarrow{R_2-4R_1,\;R_3-7R_1} \begin{pmatrix}1&2&3\\0&-3&-6\\0&-6&-11\end{pmatrix} \xrightarrow{R_3-2R_2} \begin{pmatrix}1&2&3\\0&-3&-6\\0&0&1\end{pmatrix}$$ 0 swaps, so $\det(A) = 1\cdot(-3)\cdot 1 = -3$. Cofactor expansion check: $1(50-48)-2(40-42)+3(32-35) = 2+4-9 = -3$ ✓
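The three-step recipe is easy to implement exactly with rational arithmetic. A sketch (names ours), tracking the swap count $k$:

```python
from fractions import Fraction

def det_by_elimination(A):
    """det(A) = (-1)^k * product of pivots after reduction to upper triangular,
    using only Type I (swap) and Type III (add-multiple) row operations."""
    M = [[Fraction(x) for x in row] for row in A]
    n, sign = len(M), 1
    for col in range(n):
        # find a nonzero pivot at or below the diagonal
        pivot = next((r for r in range(col, n) if M[r][col] != 0), None)
        if pivot is None:
            return Fraction(0)               # no pivot in this column => singular
        if pivot != col:                     # Type I swap flips the sign
            M[col], M[pivot] = M[pivot], M[col]
            sign = -sign
        for r in range(col + 1, n):          # Type III leaves det unchanged
            factor = M[r][col] / M[col][col]
            M[r] = [x - factor * y for x, y in zip(M[r], M[col])]
    prod = Fraction(1)
    for i in range(n):
        prod *= M[i][i]
    return sign * prod

print(det_by_elimination([[1, 2, 3], [4, 5, 6], [7, 8, 10]]))   # -3
```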

Cramer's Rule

Cramer's Rule
For a nonsingular system $A\mathbf{x} = \mathbf{b}$, the unique solution is: $$x_j = \frac{\det(A_j)}{\det(A)}, \quad j = 1, \ldots, n$$ where $A_j$ is $A$ with column $j$ replaced by $\mathbf{b}$.
📘 Example 2.4 — Cramer's Rule (2×2)
$2x_1 + x_2 = 4, \quad 5x_1 + 3x_2 = 7$. $\quad \det(A) = 6-5 = 1$. $$x_1 = \frac{\det\begin{pmatrix}4&1\\7&3\end{pmatrix}}{1} = \frac{12-7}{1} = 5, \qquad x_2 = \frac{\det\begin{pmatrix}2&4\\5&7\end{pmatrix}}{1} = \frac{14-20}{1} = -6$$ $(x_1, x_2) = (5, -6)$. Cramer's Rule is elegant for theory and small systems; use row reduction for large ones.
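Cramer's Rule can be sketched in a few lines; `cramer` and its inner `det` are illustrative names, and exact `Fraction` arithmetic avoids floating-point surprises:

```python
from fractions import Fraction

def cramer(A, b):
    """Solve A x = b via x_j = det(A_j)/det(A); assumes det(A) != 0."""
    def det(M):
        if len(M) == 1:
            return M[0][0]
        return sum((-1) ** j * M[0][j] *
                   det([r[:j] + r[j + 1:] for r in M[1:]])
                   for j in range(len(M)))
    d = Fraction(det(A))
    # A_j: A with column j replaced by b
    return [det([r[:j] + [b[i]] + r[j + 1:] for i, r in enumerate(A)]) / d
            for j in range(len(A))]

print(cramer([[2, 1], [5, 3]], [4, 7]))   # [Fraction(5, 1), Fraction(-6, 1)]
```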
Connections Across the Course
  • $\det(A-\lambda I) = 0$ is the characteristic equation that finds eigenvalues → Chapter 6
  • $\det(A) = 0$ $\Leftrightarrow$ columns linearly dependent → Chapter 3
  • $|\det(A)|$ = volume scaling factor of the linear map $\mathbf{x}\mapsto A\mathbf{x}$ → Chapter 4
  • $\det(A^TA) = (\det A)^2 \geq 0$ connects to positive definite matrices → Chapter 6
Section 2.3 †

Additional Topics & Applications

This optional section covers the classical adjoint (adjugate), the complete proof of Cramer's Rule, and determinant applications to geometry and inverses. Marked † in the textbook — not required for later sections.

The Classical Adjoint (Adjugate Matrix)

Definition — Adjoint
The adjoint (or adjugate) of $A$ is the transpose of the cofactor matrix: $$\text{adj}(A) = \begin{pmatrix}A_{11}&A_{21}&\cdots&A_{n1}\\A_{12}&A_{22}&\cdots&A_{n2}\\\vdots&&\ddots&\vdots\\A_{1n}&A_{2n}&\cdots&A_{nn}\end{pmatrix}$$ where $A_{ij} = (-1)^{i+j}\det(M_{ij})$ are the cofactors. Note: the $(i,j)$ entry of $\text{adj}(A)$ is the cofactor $A_{ji}$ — rows and columns are swapped.
Theorem — Adjoint–Inverse Formula
$$A \cdot \text{adj}(A) = \text{adj}(A) \cdot A = \det(A)\, I$$ Therefore, if $\det(A) \neq 0$: $$A^{-1} = \frac{1}{\det(A)}\,\text{adj}(A)$$ This is a theoretical formula — not efficient for computation (use row reduction instead), but valuable for proofs and $2\times2$ inverses.
📘 Example 2.5 — Inverse via Adjoint (2×2)
$$A = \begin{pmatrix}3&5\\1&2\end{pmatrix}, \quad \det(A)=1$$ Cofactors: $A_{11}=2$, $A_{12}=-1$, $A_{21}=-5$, $A_{22}=3$. Transposing the cofactor matrix: $$\text{adj}(A) = \begin{pmatrix}A_{11}&A_{21}\\A_{12}&A_{22}\end{pmatrix} = \begin{pmatrix}2&-5\\-1&3\end{pmatrix}$$ $$A^{-1} = \frac{1}{1}\begin{pmatrix}2&-5\\-1&3\end{pmatrix}$$ Verify: $\begin{pmatrix}3&5\\1&2\end{pmatrix}\begin{pmatrix}2&-5\\-1&3\end{pmatrix} = \begin{pmatrix}1&0\\0&1\end{pmatrix}$ ✓
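The adjugate construction can also be checked mechanically. A sketch for small matrices (function names ours) — note the swapped indices that implement the transpose:

```python
def det(M):
    """Cofactor-expansion determinant (small matrices only)."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] *
               det([r[:j] + r[j + 1:] for r in M[1:]]) for j in range(len(M)))

def adjugate(A):
    """Transpose of the cofactor matrix: adj(A)[i][j] = cofactor A_ji."""
    n = len(A)
    def cof(i, j):
        minor = [r[:j] + r[j + 1:] for k, r in enumerate(A) if k != i]
        return (-1) ** (i + j) * det(minor)
    return [[cof(j, i) for j in range(n)] for i in range(n)]  # swapped i, j

A = [[3, 5], [1, 2]]
print(adjugate(A))   # [[2, -5], [-1, 3]]
```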

Proof of Cramer's Rule

For a nonsingular system $A\mathbf{x}=\mathbf{b}$, define $A_j$ = matrix $A$ with column $j$ replaced by $\mathbf{b}$. Since $A_j$ agrees with $A$ everywhere except column $j$, the cofactors of its $j$th column are exactly the $A_{ij}$ of $A$, so cofactor expansion along column $j$ gives:

$$\det(A_j) = \sum_{i=1}^n b_i A_{ij}$$

Substituting $b_i = \sum_k a_{ik}x_k$ and using the identity $\sum_i a_{ik} A_{ij} = \det(A)\,\delta_{kj}$ (cofactor expansion of a determinant against a "wrong" column gives $0$):

$$\det(A_j) = \sum_k x_k \sum_i a_{ik} A_{ij} = x_j \det(A) \quad\Rightarrow\quad x_j = \frac{\det(A_j)}{\det(A)}$$

Geometric Application: Area and Volume

Determinant as Volume
The absolute value $|\det(A)|$ equals:
  • The area of the parallelogram spanned by the columns of a $2\times2$ matrix $A$
  • The volume of the parallelepiped spanned by the columns of a $3\times3$ matrix $A$
  • The $n$-dimensional hypervolume in general
Sign indicates orientation: $\det > 0$ preserves orientation, $\det < 0$ reverses it.
📘 Example 2.6 — Area of a Triangle via Determinant
The area of the triangle with vertices $(x_1,y_1)$, $(x_2,y_2)$, $(x_3,y_3)$: $$\text{Area} = \frac{1}{2}\left|\det\begin{pmatrix}x_1&y_1&1\\x_2&y_2&1\\x_3&y_3&1\end{pmatrix}\right|$$ For vertices $(0,0)$, $(3,0)$, $(1,2)$: $$\frac{1}{2}\left|\det\begin{pmatrix}0&0&1\\3&0&1\\1&2&1\end{pmatrix}\right| = \frac{1}{2}|0-0+1(6-0)| = \frac{1}{2}(6) = 3$$ Area $= 3$. (Cross-check: base $=3$, height $=2$, area $=\frac{1}{2}(3)(2)=3$. ✓)
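The triangle-area formula is a one-liner to verify; `triangle_area` is an illustrative name:

```python
def triangle_area(p1, p2, p3):
    """Half the absolute value of det [[x1,y1,1],[x2,y2,1],[x3,y3,1]],
    expanded along the first row."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    d = x1 * (y2 - y3) - y1 * (x2 - x3) + (x2 * y3 - x3 * y2)
    return abs(d) / 2

print(triangle_area((0, 0), (3, 0), (1, 2)))   # 3.0
```

Collinear vertices give a zero determinant and hence zero area, which doubles as a collinearity test.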