CSCI 2227, Introduction to Scientific Computation
Prof. Alvarez

Eigendecomposition of a square matrix

Matrices as linear transformations

An n x n matrix defines a (linear) transformation of n-D space.
For example, consider the Householder matrix with vector [0;1]:

    H = [1 0; 0 -1]

As we know, H represents a reflection across perp([0;1]), the line
perpendicular to [0;1]. The transformation is computed by
left-multiplying by H: a point (vector) [x;y] gets mapped to H*[x;y].

Examine the effect on a square:

    [x y] = square();
    axis([-100 100 -100 100]); axis equal; hold on;
    H = householder([0;1]);
    newxy = H*[x;y];
    plot(newxy(1,:), newxy(2,:));

More examples of linear transformations

Another example is a rotation:

    A = rotmat(pi/4);

We saw its effect on the square. Rotations and reflections preserve
shapes; other transformations do not. Example: take

    A = [1 2; -1.5 0.5];

Eigenvalues and eigenvectors

Every n-D vector is "acted on" by an n x n matrix A (see above).
Some vectors only get rescaled, with no change of direction (except
for a half-turn in the case of negative scaling). The 0 vector
doesn't count; it always stays fixed. The nonzero rescale-only
vectors are the "eigenvectors" of A:

    A*x = c*x, where c is a scalar (positive, zero, or negative)

c is the eigenvalue associated with the eigenvector x.

Examples

Examine the examples in eigshow(); also the Householder reflection
for [0;1], and A = [1 2; -1.5 0.5]. For the latter, try solving
analytically.

A*x = c*x is a linear system. Rewrite it as

    (A - c*I)*x = 0

This homogeneous system has a nonzero solution x only if A - c*I is
singular, that is, only if det(A - c*I) = 0.

For the Householder example:

    A - c*I = [1 0; 0 -1] - [c 0; 0 c] = [1-c 0; 0 -(1+c)]

so (A - c*I)*x = 0 <=> (1-c)*x1 = 0 and (1+c)*x2 = 0, with either
x1 or x2 nonzero:
  - if x1 is nonzero, then c must be 1, and x2 must be 0
  - if x2 is nonzero, then c must be -1, and x1 must be 0
There are infinitely many solutions (all vectors on the x and y axes).

eig function in MATLAB

    [evecs evals] = eig(A)

returns the eigenvectors as the columns of evecs and the eigenvalues
on the diagonal of evals.

Scaling an eigenvector yields an eigenvector

Suppose A*x = c*x, and let y = s*x for any nonzero scalar s.
Then A*y = A*(s*x) = s*(A*x) = s*(c*x) = c*y, so y is also an
eigenvector of A, with the same eigenvalue c.

Eigendecomposition of a square matrix

An n x n matrix A has n eigenvalues c1, ..., cn (counted with
multiplicity, and possibly complex). Let D = diag(c1, ..., cn), and
let V have the eigenvectors as columns. Then, since A*vi = ci*vi for
each i,

    A*V = V*D, so that A = V*D*V^-1

(provided V is invertible).

Eigendecomposition of a symmetric matrix

Suppose that A' = A and that c1 and c2 are distinct eigenvalues of A,
with v1 and v2 corresponding eigenvectors. Since v1'*A*v2 = v1'*A'*v2
= (A*v1)'*v2, we have:

    c2*(v1'*v2) = c1*(v1'*v2)

Since c1 and c2 are distinct, v1'*v2 = 0. That is: symmetric matrices
have orthogonal eigenvectors. In particular, the matrix V of
eigenvectors can be made orthogonal, V'*V = I, for a symmetric matrix
with n distinct eigenvalues.

Caution: some matrices do not have an eigendecomposition

Take A = [1 1; 0 1], for example. Using eig, you can see that the
eigenspace is just span([1;0]), so the matrix of eigenvectors is not
invertible. This particular A does not behave like a diagonal matrix.

Sufficient condition for eigendecomposability (diagonalizability):
n distinct eigenvalues for an n x n matrix.
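
The eig output, the reconstruction A = V*D*V^-1, and the orthogonality
of a symmetric matrix's eigenvectors can all be checked numerically. A
minimal sketch in Python/NumPy (the course code is MATLAB; here
np.linalg.eig plays the role of [evecs evals] = eig(A), except that the
eigenvalues come back as a vector rather than a diagonal matrix):

```python
import numpy as np

# Householder reflection across the x-axis: H = [1 0; 0 -1]
H = np.array([[1.0, 0.0],
              [0.0, -1.0]])

# Columns of V are eigenvectors; c holds the eigenvalues.
c, V = np.linalg.eig(H)
print(sorted(c))          # the eigenvalues 1 and -1 derived analytically above

# Reconstruct H from its eigendecomposition: H = V * D * V^-1
D = np.diag(c)
print(np.allclose(V @ D @ np.linalg.inv(V), H))   # True

# H is symmetric, so its eigenvectors are orthogonal: V' * V = I
print(np.allclose(V.T @ V, np.eye(2)))            # True
```

The same three checks work for any symmetric matrix with distinct
eigenvalues; for a non-symmetric diagonalizable matrix, only the
reconstruction test applies, since V need not be orthogonal.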
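
The caution about non-diagonalizable matrices can also be seen
numerically. A hedged Python/NumPy sketch (np.linalg.eig standing in
for MATLAB's eig):

```python
import numpy as np

# The cautionary example: A = [1 1; 0 1] has no eigendecomposition.
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
c, V = np.linalg.eig(A)
print(c)                    # repeated eigenvalue: both entries are 1

# All eigenvectors lie in span([1; 0]), so the columns of V are
# parallel and V is (numerically) singular: V^-1 does not exist,
# and A = V*D*V^-1 is unavailable.
print(np.linalg.det(V))     # ~0

# Contrast: A = [1 2; -1.5 0.5] has two distinct eigenvalues
# (a complex conjugate pair), so it IS diagonalizable, over the
# complex numbers, and the reconstruction succeeds.
B = np.array([[1.0, 2.0],
              [-1.5, 0.5]])
cb, Vb = np.linalg.eig(B)
print(cb)                   # complex conjugate pair of eigenvalues
print(np.allclose(Vb @ np.diag(cb) @ np.linalg.inv(Vb), B))  # True
```

This illustrates both halves of the final remark: distinct eigenvalues
are sufficient for diagonalizability, while a repeated eigenvalue with
a one-dimensional eigenspace blocks it.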