Chapter 6: Eigenvalues | Linear Algebra — Leon 8th Ed.

Chapter 6: Eigenvalues

Eigenvalues & eigenvectors · Diagonalization · Hermitian matrices · SVD · Quadratic forms · Positive definite · Nonnegative matrices

Eigenvalues reveal the intrinsic geometry of a linear transformation — the directions that stretch but don't rotate. They underlie Google's PageRank, quantum mechanics, principal component analysis, network centrality, and structural vibration analysis. Section 6.8 applies the theory to nonnegative matrices and economic modeling — going beyond the course into real applications.

Section 6.1

Eigenvalues & Eigenvectors

Definition
$\lambda$ is an eigenvalue of $A$ if $\exists$ nonzero $\mathbf{x}$ with $A\mathbf{x}=\lambda\mathbf{x}$. Such $\mathbf{x}$ is an eigenvector. Equivalently: $(A-\lambda I)\mathbf{x}=\mathbf{0}$ has nontrivial solutions iff $\det(A-\lambda I)=0$.
[Interactive figure] The eigenvectors (red, green) keep their direction under $A$ — only their magnitude changes. All other vectors rotate and stretch. Matrix: $A=\begin{pmatrix}3&1\\1&3\end{pmatrix}$, eigenvalues 4 and 2.

Characteristic Polynomial

Characteristic Equation
$$p(\lambda) = \det(A-\lambda I) = 0$$ For $2\times2$: $p(\lambda)=\lambda^2-\text{tr}(A)\lambda+\det(A)$. Always: $\sum\lambda_i=\text{tr}(A)$ and $\prod\lambda_i=\det(A)$.
📘 Example 6.1 — 2×2 Eigenvalue Problem
$$A=\begin{pmatrix}3&1\\1&3\end{pmatrix},\quad p(\lambda)=(3-\lambda)^2-1=\lambda^2-6\lambda+8=(\lambda-4)(\lambda-2)$$ Eigenvalues: $\lambda_1=4$, $\lambda_2=2$. For $\lambda_1=4$: $(A-4I)\mathbf{x}=\mathbf{0}$: $\begin{pmatrix}-1&1\\1&-1\end{pmatrix}\mathbf{x}=\mathbf{0}$ $\Rightarrow$ $\mathbf{x}_1=(1,1)^T$. For $\lambda_2=2$: $\mathbf{x}_2=(1,-1)^T$. tr$(A)=6=4+2$ ✓, $\det(A)=8=4\cdot2$ ✓. Eigenvectors are perpendicular — $A$ is symmetric.
📘 Example 6.2 — 3×3 Eigenvalues
$$A=\begin{pmatrix}2&2&0\\1&1&2\\1&1&2\end{pmatrix}$$ $\det(A-\lambda I) = -\lambda(\lambda-1)(\lambda-4) = 0$. $\lambda_1=0,\;\lambda_2=1,\;\lambda_3=4$. Sanity: tr$(A)=5=0+1+4$ ✓, $\det(A)=0=0\cdot1\cdot4$ ✓ ($\lambda=0$ means $A$ is singular).
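Both examples are easy to check numerically; a minimal NumPy sketch using the matrices from Examples 6.1 and 6.2:

```python
import numpy as np

# Example 6.1: symmetric 2x2 matrix with eigenvalues 4 and 2
A = np.array([[3.0, 1.0], [1.0, 3.0]])
evals, evecs = np.linalg.eig(A)
assert np.allclose(sorted(evals), [2.0, 4.0])

# trace = sum of eigenvalues, det = product of eigenvalues
assert np.isclose(evals.sum(), np.trace(A))
assert np.isclose(evals.prod(), np.linalg.det(A))

# Example 6.2: singular 3x3 matrix (eigenvalue 0 means det = 0)
B = np.array([[2.0, 2.0, 0.0], [1.0, 1.0, 2.0], [1.0, 1.0, 2.0]])
assert np.allclose(np.sort(np.linalg.eigvals(B)), [0.0, 1.0, 4.0])
```

Note that `np.linalg.eig` does not guarantee any ordering of the eigenvalues, so the check sorts them first.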

Algebraic and Geometric Multiplicity

The algebraic multiplicity of $\lambda$ is its multiplicity as a root of $p(\lambda)$. The geometric multiplicity is $\dim(\ker(A-\lambda I))$ — the dimension of the eigenspace. Always: geometric $\leq$ algebraic. A matrix is diagonalizable iff they are equal for every eigenvalue.

Section 6.2

Diagonalization

Diagonalization: $A = XDX^{-1}$
If $A$ has $n$ linearly independent eigenvectors $\mathbf{x}_1,\ldots,\mathbf{x}_n$ with eigenvalues $\lambda_1,\ldots,\lambda_n$: $$A = XDX^{-1},\quad X=(\mathbf{x}_1|\cdots|\mathbf{x}_n),\quad D=\text{diag}(\lambda_1,\ldots,\lambda_n)$$ Then $A^k = XD^kX^{-1}$ — powers are trivial. Also $e^A = Xe^DX^{-1}$ for matrix exponential.
[Interactive figure] $A^k$ for $A=\begin{pmatrix}3&1\\1&3\end{pmatrix}$. The diagonal entries in $D^k$ grow as $4^k$ and $2^k$ — the eigenvector decomposition makes this transparent.
📘 Example 6.3 — Computing $A^5$ by Diagonalization
$A=\begin{pmatrix}3&1\\1&3\end{pmatrix}$, $X=\begin{pmatrix}1&1\\1&-1\end{pmatrix}$, $D=\begin{pmatrix}4&0\\0&2\end{pmatrix}$, $X^{-1}=\frac{1}{2}\begin{pmatrix}1&1\\1&-1\end{pmatrix}$. $$A^5 = XD^5X^{-1} = \begin{pmatrix}1&1\\1&-1\end{pmatrix}\begin{pmatrix}1024&0\\0&32\end{pmatrix}\frac{1}{2}\begin{pmatrix}1&1\\1&-1\end{pmatrix} = \begin{pmatrix}528&496\\496&528\end{pmatrix}$$ Far easier than multiplying $A$ by itself four times. The pattern generalizes: $A^k=\frac{4^k+2^k}{2}I+\frac{4^k-2^k}{2}\begin{pmatrix}0&1\\1&0\end{pmatrix}$.
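The computation in Example 6.3 can be reproduced in a few lines; a NumPy sketch:

```python
import numpy as np

A = np.array([[3.0, 1.0], [1.0, 3.0]])
X = np.array([[1.0, 1.0], [1.0, -1.0]])   # eigenvector columns
D = np.diag([4.0, 2.0])

# A^5 = X D^5 X^{-1}; only the diagonal matrix gets raised to the power
A5 = X @ np.linalg.matrix_power(D, 5) @ np.linalg.inv(X)
assert np.allclose(A5, [[528.0, 496.0], [496.0, 528.0]])

# agrees with direct repeated multiplication
assert np.allclose(A5, np.linalg.matrix_power(A, 5))
```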
Applications of Diagonalization
  • Markov chains: $\mathbf{p}^{(k)}=A^k\mathbf{p}^{(0)}$ — find long-run distribution from $\lambda=1$ eigenvector
  • Differential equations: $\mathbf{x}'=A\mathbf{x}$ has solution $\mathbf{x}(t)=e^{At}\mathbf{x}_0=Xe^{Dt}X^{-1}\mathbf{x}_0$
  • Fibonacci numbers: for $A=\begin{pmatrix}1&1\\1&0\end{pmatrix}$, the entries of $A^n$ are consecutive Fibonacci numbers
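The Fibonacci bullet can be verified directly, since $A^n=\begin{pmatrix}F_{n+1}&F_n\\F_n&F_{n-1}\end{pmatrix}$; a short sketch:

```python
import numpy as np

# powers of the Fibonacci matrix hold consecutive Fibonacci numbers
F = np.array([[1, 1], [1, 0]])
Fn = np.linalg.matrix_power(F, 10)

# F^10 = [[F_11, F_10], [F_10, F_9]] = [[89, 55], [55, 34]]
assert Fn[0, 0] == 89 and Fn[0, 1] == 55 and Fn[1, 1] == 34
```

Diagonalizing $F$ gives the eigenvalues $(1\pm\sqrt5)/2$, which is exactly Binet's closed-form formula for $F_n$.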
Section 6.3

Hermitian Matrices

Spectral Theorem (Theorem 6.4.4)
If $A=A^H$ (Hermitian), then:
  • All eigenvalues of $A$ are real
  • Eigenvectors for distinct eigenvalues are orthogonal
  • $A$ is unitarily diagonalizable: $A=U\Lambda U^H$ with $U$ unitary (orthogonal if $A$ is real symmetric)
This is one of the most important theorems in mathematics. It says symmetric matrices are perfectly structured — no rotational component, just pure stretching.
📘 Example 6.4 — Spectral Decomposition of Symmetric Matrix
$$A=\begin{pmatrix}0&2&-1\\2&3&-2\\-1&-2&0\end{pmatrix},\quad p(\lambda)=(1+\lambda)^2(5-\lambda)$$ Eigenvalues: $\lambda_1=\lambda_2=-1$ (multiplicity 2), $\lambda_3=5$. After finding orthonormal eigenvectors via Gram-Schmidt: $$A = U\begin{pmatrix}-1&0&0\\0&-1&0\\0&0&5\end{pmatrix}U^T$$ Spectral decomposition: $A = -1\cdot\mathbf{u}_1\mathbf{u}_1^T - 1\cdot\mathbf{u}_2\mathbf{u}_2^T + 5\cdot\mathbf{u}_3\mathbf{u}_3^T$ — a sum of rank-1 projections.
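For symmetric input, NumPy's `eigh` returns orthonormal eigenvectors directly, so the rank-1 expansion in Example 6.4 can be checked without doing Gram-Schmidt by hand; a sketch:

```python
import numpy as np

A = np.array([[ 0.0,  2.0, -1.0],
              [ 2.0,  3.0, -2.0],
              [-1.0, -2.0,  0.0]])
lam, U = np.linalg.eigh(A)            # eigh: for symmetric/Hermitian matrices
assert np.allclose(lam, [-1.0, -1.0, 5.0])   # ascending order
assert np.allclose(U.T @ U, np.eye(3))       # columns are orthonormal

# spectral decomposition: sum of eigenvalue-weighted rank-1 projections
A_rebuilt = sum(lam[i] * np.outer(U[:, i], U[:, i]) for i in range(3))
assert np.allclose(A_rebuilt, A)
```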
Section 6.4

Singular Value Decomposition (SVD)

SVD — The Universal Decomposition
Every $m\times n$ matrix $A$ has $A = U\Sigma V^T$, where $U$ ($m\times m$) and $V$ ($n\times n$) are orthogonal (unitary in the complex case) and $\Sigma$ ($m\times n$) is diagonal. The diagonal entries $\sigma_1\geq\sigma_2\geq\cdots\geq0$ are the singular values. $$\sigma_j = \sqrt{\lambda_j(A^TA)}, \qquad \|A\|_2 = \sigma_1, \qquad \|A\|_F = \sqrt{\sigma_1^2+\cdots+\sigma_r^2}$$
What the SVD gives you:
  • Best rank-$k$ approximation: $A_k = \sum_{i=1}^k\sigma_i\mathbf{u}_i\mathbf{v}_i^T$ — used in image compression, PCA
  • Pseudoinverse: $A^+ = V\Sigma^+U^T$ — solves least squares when the columns are dependent
  • Condition number: $\kappa(A)=\sigma_1/\sigma_n$ — measures numerical sensitivity
  • Matrix rank: the number of nonzero singular values
  • PCA: right singular vectors = principal components; $\sigma_i^2$ = variance explained
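A sketch of these relations with `numpy.linalg.svd`, on an arbitrary seeded random matrix (the matrix itself is just an illustration, not from the text):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))
U, s, Vt = np.linalg.svd(A)           # s holds sigma_1 >= sigma_2 >= ... >= 0

# singular values are square roots of the eigenvalues of A^T A
assert np.allclose(s**2, np.sort(np.linalg.eigvalsh(A.T @ A))[::-1])

# norm identities from the box above
assert np.isclose(np.linalg.norm(A, 2), s[0])
assert np.isclose(np.linalg.norm(A, 'fro'), np.sqrt(np.sum(s**2)))

# best rank-1 approximation; its 2-norm error is exactly sigma_2
A1 = s[0] * np.outer(U[:, 0], Vt[0])
assert np.isclose(np.linalg.norm(A - A1, 2), s[1])
```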
Section 6.5

Quadratic Forms

Quadratic Form & Classification
$Q(\mathbf{x})=\mathbf{x}^TA\mathbf{x}$ ($A$ symmetric). Classification by eigenvalue signs:
  • All $\lambda_i>0$: positive definite — bowl opening upward, minimum at $\mathbf{0}$
  • All $\lambda_i\geq0$: positive semidefinite
  • Mixed signs: indefinite — saddle point at $\mathbf{0}$
  • All $\lambda_i\leq0$: negative semidefinite
  • All $\lambda_i<0$: negative definite
In the eigenvector basis: $Q = \lambda_1 y_1^2 + \cdots + \lambda_n y_n^2$ — a sum of squares.
📘 Example 6.5 — Classifying a Quadratic Form
$Q(x_1,x_2)=5x_1^2+4x_1x_2+5x_2^2 = \mathbf{x}^T\begin{pmatrix}5&2\\2&5\end{pmatrix}\mathbf{x}$. Eigenvalues: $\lambda_1=3$, $\lambda_2=7$ (both positive). Positive definite. In eigenvector coordinates: $Q=3y_1^2+7y_2^2$ — an ellipse with semi-axes in ratio $\sqrt{7}:\sqrt{3}$. The level sets $Q=c$ are ellipses aligned with the eigenvectors.
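The classification in Example 6.5 is just an eigenvalue check, and the change to eigenvector coordinates can be verified numerically; a sketch:

```python
import numpy as np

A = np.array([[5.0, 2.0], [2.0, 5.0]])
lam, Q = np.linalg.eigh(A)
assert np.allclose(lam, [3.0, 7.0])
assert np.all(lam > 0)                # all positive => positive definite

# in eigenvector coordinates y = Q^T x, the form is 3*y1^2 + 7*y2^2
x = np.array([1.0, 2.0])              # arbitrary test point
y = Q.T @ x
assert np.isclose(x @ A @ x, lam[0] * y[0]**2 + lam[1] * y[1]**2)
```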
Section 6.6

Positive Definite Matrices

Positive Definite — Four Equivalent Tests
A symmetric $A$ is positive definite iff any one of:
  • All eigenvalues $>0$
  • All leading principal minors $\det(A_k)>0$ for $k=1,\ldots,n$ (Sylvester's criterion)
  • $A=R^TR$ for some nonsingular $R$ (taking $R$ upper triangular with positive diagonal gives the Cholesky factorization)
  • $\mathbf{x}^TA\mathbf{x}>0$ for all $\mathbf{x}\neq\mathbf{0}$
Cholesky Factorization: The Fast Path
For positive definite $A$, Cholesky finds $A=LL^T$ ($L$ lower triangular with positive diagonal) using half the operations of LU factorization. This is the standard method for solving systems with positive definite matrices in MATLAB/NumPy — used in covariance matrix computations, finite element methods, and Kalman filtering.
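A minimal sketch of the Cholesky path in NumPy (the matrix is a standard worked example with the exact factor $L=\begin{pmatrix}5&0&0\\3&3&0\\-1&1&3\end{pmatrix}$, not one from the text):

```python
import numpy as np

A = np.array([[25.0, 15.0, -5.0],
              [15.0, 18.0,  0.0],
              [-5.0,  0.0, 11.0]])
L = np.linalg.cholesky(A)             # lower triangular, positive diagonal
assert np.allclose(L @ L.T, A)
assert np.all(np.diag(L) > 0)

# solve A x = b via the two triangular systems L y = b, then L^T x = y
b = np.array([1.0, 2.0, 3.0])
y = np.linalg.solve(L, b)
x = np.linalg.solve(L.T, y)
assert np.allclose(A @ x, b)
```

(`np.linalg.solve` here does not exploit the triangular structure; in practice one would use a dedicated triangular solver such as SciPy's `solve_triangular` to get the full speed advantage.)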
Section 6.8 Beyond Syllabus

Nonnegative Matrices

A matrix $A$ is nonnegative if $a_{ij}\geq0$ for all $i,j$, and positive if $a_{ij}>0$ for all $i,j$. These arise naturally in economics (Leontief models), population dynamics, Google's PageRank, and Markov chains. The key theorem was proved independently by Oskar Perron (1907) and Georg Frobenius (1912).

The Perron-Frobenius Theorem

Perron's Theorem (Positive Matrices)
If $A$ is an $n\times n$ positive matrix ($a_{ij}>0$ for all $i,j$), then:
  • $A$ has a dominant eigenvalue $\lambda_1>0$, with $\lambda_1>|\lambda_i|$ for every other eigenvalue $\lambda_i$
  • The corresponding eigenvector $\mathbf{x}_1$ can be chosen with all positive entries
  • $\lambda_1$ is a simple root of the characteristic polynomial
  • No eigenvector belonging to any other eigenvalue has all nonnegative entries
Frobenius generalized the result to irreducible nonnegative matrices: the dominant eigenvalue is still positive and simple, with a positive eigenvector, though other eigenvalues may now equal it in absolute value.
[Interactive figure] Power iteration on a positive matrix: iterating $\mathbf{v}\leftarrow A\mathbf{v}/\|A\mathbf{v}\|$ converges to the dominant eigenvector. All entries stay positive, converging to the Perron vector.
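The iteration $\mathbf{v}\leftarrow A\mathbf{v}/\|A\mathbf{v}\|$ takes only a few lines; a sketch on an arbitrary positive matrix (chosen for illustration, not from the text):

```python
import numpy as np

A = np.array([[2.0, 1.0, 1.0],
              [1.0, 3.0, 1.0],
              [1.0, 1.0, 4.0]])       # positive matrix: Perron's theorem applies

v = np.ones(3)                        # any positive starting vector works
for _ in range(100):
    v = A @ v
    v /= np.linalg.norm(v)

lam = v @ A @ v                       # Rayleigh quotient estimates lambda_1
assert np.all(v > 0)                  # the Perron vector is entrywise positive
assert np.allclose(A @ v, lam * v)    # (v, lam) is an eigenpair
```

Convergence is geometric with rate $|\lambda_2/\lambda_1|$, which Perron's theorem guarantees is strictly below 1 for positive matrices.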

Stochastic Matrices and Markov Chains

Stochastic Matrix
A nonnegative matrix $A$ is stochastic if every column sums to 1: $\sum_i a_{ij}=1$ for all $j$. Columns represent transition probabilities between states. $\mathbf{p}^{(k)}=A^k\mathbf{p}^{(0)}$ gives the probability distribution after $k$ steps.

By Perron-Frobenius, a positive stochastic matrix has $\lambda_1=1$ as the dominant eigenvalue. The steady-state distribution $\mathbf{p}^*$ satisfies $A\mathbf{p}^*=\mathbf{p}^*$ — it is the eigenvector for $\lambda=1$, normalized so components sum to 1.

📘 Example 6.6 — Political Party Markov Chain
Annual transitions: of Democrats, 80% stay Democrat, 10% become Republican, 10% Independent. Of Republicans, 10% become Democrat, 80% stay, 10% Independent. Of Independents, 20% each go Democrat/Republican, 60% stay. $$A = \begin{pmatrix}0.80&0.10&0.20\\0.10&0.80&0.20\\0.10&0.10&0.60\end{pmatrix}$$ Steady state: solve $(A-I)\mathbf{p}^*=\mathbf{0}$ with $\|\mathbf{p}^*\|_1=1$: $\mathbf{p}^*\approx(0.40, 0.40, 0.20)^T$ — 40% Democrat, 40% Republican, 20% Independent, regardless of starting distribution.
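The steady state in Example 6.6 can be recovered two ways — by iterating $\mathbf{p}\leftarrow A\mathbf{p}$, or by extracting the $\lambda=1$ eigenvector; a sketch:

```python
import numpy as np

A = np.array([[0.80, 0.10, 0.20],
              [0.10, 0.80, 0.20],
              [0.10, 0.10, 0.60]])

# route 1: iterate from any starting distribution
p = np.array([1.0, 0.0, 0.0])
for _ in range(200):
    p = A @ p
assert np.allclose(p, [0.4, 0.4, 0.2])

# route 2: eigenvector for lambda = 1, normalized so components sum to 1
w, V = np.linalg.eig(A)
k = np.argmin(np.abs(w - 1.0))
p_star = np.real(V[:, k])
p_star /= p_star.sum()
assert np.allclose(p_star, [0.4, 0.4, 0.2])
```

Route 1 converges regardless of the start because the other eigenvalues (0.7 and 0.5 here) have modulus below 1.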

Leontief Input-Output Model (Open Model)

Economist Wassily Leontief (Nobel 1973) modeled an $n$-sector economy with a consumption matrix $A$ where $a_{ij}=$ amount sector $i$ must supply to produce one unit from sector $j$. The system must satisfy:

$$(I-A)\mathbf{x} = \mathbf{d}$$

where $\mathbf{x}$ is the output vector and $\mathbf{d}$ is external demand. The solution $\mathbf{x}=(I-A)^{-1}\mathbf{d}$ exists and is nonnegative when $\|A\|_1<1$ (column sums $<1$, meaning each sector doesn't consume more than it produces).

Leontief Existence Theorem
If $A\geq0$ and all column sums $\sum_i a_{ij}<1$, then $(I-A)^{-1}$ exists and is nonnegative, and equals the Neumann series: $$(I-A)^{-1} = I + A + A^2 + A^3 + \cdots$$ The term $A^k\mathbf{d}$ represents the "ripple effect" of demand through $k$ rounds of the supply chain.
📘 Example 6.7 — 2-Sector Leontief Model
Agriculture (A) and Manufacturing (M). To produce \$1 of output:
  • Agriculture uses \$0.20 from itself, \$0.30 from manufacturing
  • Manufacturing uses \$0.40 from agriculture, \$0.20 from itself
$$A = \begin{pmatrix}0.20&0.40\\0.30&0.20\end{pmatrix},\qquad (I-A) = \begin{pmatrix}0.80&-0.40\\-0.30&0.80\end{pmatrix}$$ $\det(I-A)=0.52$. $(I-A)^{-1} = \frac{1}{0.52}\begin{pmatrix}0.80&0.40\\0.30&0.80\end{pmatrix}$. External demand $\mathbf{d}=(100,80)^T$ billion: $$\mathbf{x} = (I-A)^{-1}\mathbf{d} = \frac{1}{0.52}\begin{pmatrix}112\\94\end{pmatrix} \approx \begin{pmatrix}215.4\\180.8\end{pmatrix}$$ Agriculture must produce ~215.4 billion and manufacturing ~180.8 billion to meet external demand of (100, 80) billion. The extra ~216 billion is consumed internally in the supply chain.
🔭 Going Further
  • Google PageRank: The web adjacency matrix is a nonnegative stochastic matrix; PageRank is its dominant eigenvector, found by power iteration (→ Chapter 7)
  • Closed Leontief model: No external demand; solve $(A-I)\mathbf{x}=\mathbf{0}$ — the solution is the dominant eigenvector
  • Population dynamics: Leslie matrices model age-structured populations; the dominant eigenvalue is the long-run growth rate
  • Spectral graph theory: The adjacency and Laplacian matrices of graphs are nonnegative; their eigenvalues encode connectivity, expansion, and mixing times of random walks