Linear Algebra

Diagonalization

13/13 in Linear Algebra.

Diagonalization is used all the time in computer graphics, physics, and engineering. It's a way of simplifying a transformation by changing to a basis of eigenvectors, and it's a great way to appreciate why eigenvectors and eigenvalues matter.

Diagonalizability
A square matrix is diagonalizable if it is similar to a diagonal matrix.


Thm. An $n\times n$ matrix $A$ is diagonalizable if and only if it has $n$ linearly independent eigenvectors.
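As a quick numerical sanity check of this criterion, here is a minimal NumPy sketch (the matrix is an illustrative example, not from the text): a symmetric $2\times 2$ matrix has $2$ independent eigenvectors, so stacking them as the columns of $P$ gives an invertible $P$ with $A = PDP^{-1}$.

```python
import numpy as np

# Illustrative symmetric matrix (symmetric matrices always have a
# full set of linearly independent eigenvectors).
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
n = A.shape[0]

# Columns of P are eigenvectors; D holds the eigenvalues on its diagonal.
eigenvalues, P = np.linalg.eig(A)
D = np.diag(eigenvalues)

# n independent eigenvectors  <=>  P is invertible  <=>  A = P D P^{-1}
assert np.linalg.matrix_rank(P) == n
assert np.allclose(A, P @ D @ np.linalg.inv(P))
```

Note that `np.linalg.eig` hands back the eigenvector matrix and eigenvalues directly; the two assertions together are exactly the theorem's "if and only if" in one direction.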

While we can't guarantee that $A$ is similar to a unique diagonal matrix, we can guarantee the following: if $A$ is similar to the diagonal matrices $D_1$ and $D_2$, then $D_1$ and $D_2$ have the same set of diagonal elements (with the same multiplicities).

Thm. Let $A$ be a square matrix. For each positive integer $k$, if $x_1,\dots,x_k$ are eigenvectors of $A$ with distinct eigenvalues $\lambda_1,\dots,\lambda_k$, then $\left\{x_1,\dots,x_k\right\}$ is linearly independent (this is just saying that eigenvectors with distinct eigenvalues are linearly independent).

Thm. If $A$ is $n\times n$ and $A$ has $n$ distinct real eigenvalues, then $A$ is diagonalizable.

But what if $A$ doesn't have $n$ distinct real eigenvalues? It may still be diagonalizable:

Thm. Let $A$ be an $n\times n$ matrix with real eigenvalues $\lambda_1,\dots,\lambda_k$, and let $S_{\lambda_j}$ denote the eigenspace of $\lambda_j$ for each $j$. Then $A$ is diagonalizable if and only if $\sum_{i=1}^k \dim(S_{\lambda_i})=n.$

Thm. If $\lambda$ is an eigenvalue of the $n\times n$ matrix $A$, then $\dim(S_\lambda)=n-r(A-\lambda I_n)$, because $S_\lambda$ is the kernel of the map $x\mapsto(A-\lambda I_n)x$ from $\mathbb{R}^n$ to $\mathbb{R}^n$, and by rank–nullity the kernel's dimension is $n$ minus the rank.
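To see the eigenspace-dimension test fail, here is a short NumPy sketch using the shear matrix $\begin{pmatrix}1&1\\0&1\end{pmatrix}$ (an illustrative example, not from the text): its only eigenvalue is $\lambda=1$, but the eigenspace is one-dimensional, so the dimensions sum to $1<2$ and the matrix is not diagonalizable.

```python
import numpy as np

# Shear matrix: eigenvalue 1 with algebraic multiplicity 2,
# but only a one-dimensional eigenspace.
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
n = A.shape[0]
lam = 1.0

# dim(S_lambda) = n - rank(A - lambda*I), by rank-nullity.
dim_eigenspace = n - np.linalg.matrix_rank(A - lam * np.eye(n))
print(dim_eigenspace)  # prints 1; since 1 < n = 2, A is not diagonalizable
```

The same rank computation works for any eigenvalue of any square matrix, which makes it a handy mechanical check before attempting to diagonalize.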


What's the point of diagonalization?
Let's consider some matrix $A$ that represents a transformation. What if we want to apply it three times ($A^3$)? Well, we could multiply $A$ by itself, but that's a lot of work. If we can find a diagonal matrix $D$ that is similar to $A$, then $A=PDP^{-1}$, and $A^3=PD^3P^{-1}$. But $D^3$ is just the diagonal elements cubed! So, we can find $A^3$ by just cubing the diagonal elements of $D$. Diagonalization also makes it easy to find the inverse of a matrix: if $A=PDP^{-1}$ and no diagonal element of $D$ is zero, then $A^{-1}=PD^{-1}P^{-1}$, and $D^{-1}$ just holds the reciprocals of the diagonal elements.
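Both tricks can be sketched in a few lines of NumPy (the matrix below is an illustrative example with eigenvalues $5$ and $2$, chosen so that it is diagonalizable and invertible):

```python
import numpy as np

# Illustrative matrix with distinct nonzero eigenvalues (5 and 2).
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

eigenvalues, P = np.linalg.eig(A)
P_inv = np.linalg.inv(P)

# A^3 = P D^3 P^{-1}: cube only the diagonal entries, then change basis back.
A_cubed = P @ np.diag(eigenvalues ** 3) @ P_inv
assert np.allclose(A_cubed, np.linalg.matrix_power(A, 3))

# A^{-1} = P D^{-1} P^{-1}: reciprocals of the diagonal entries
# (valid only because no eigenvalue is zero).
A_inv = P @ np.diag(1.0 / eigenvalues) @ P_inv
assert np.allclose(A_inv, np.linalg.inv(A))
```

For a one-off cube this is no faster than repeated multiplication, but for $A^{100}$, or for applying $f(A)$ entrywise to the spectrum, the diagonal form pays off.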

You'll find diagonalization everywhere you go. In probability theory, diagonalizing a Markov chain's transition matrix (when it's diagonalizable) makes long-run behavior easy to compute. In physics, diagonalizing the Hamiltonian operator reveals a system's energy levels. In computer graphics, diagonalizing a transformation matrix separates it into simple scalings along independent directions. It's a powerful tool, and it's worth understanding.


~ The End ~

What I've covered here is just the beginning of linear algebra. There's so much more to learn, and I hope you continue to explore. If you have any questions, feel free to reach out to me. Good luck!