16/08/2021

Eigendecomposition

Matrix decompositions are a useful tool for reducing a matrix to its constituent parts in order to simplify a range of more complex operations.

Perhaps the most used type of matrix decomposition is the eigendecomposition, which decomposes a matrix into eigenvectors and eigenvalues.


This tutorial is divided into 5 parts; they are:
  • Eigendecomposition of a Matrix
  • Eigenvectors and Eigenvalues
  • Calculation of Eigendecomposition
  • Confirm an Eigenvector and Eigenvalue
  • Reconstruct Matrix

A. Eigendecomposition of a Matrix

Eigendecomposition of a matrix is a type of decomposition that involves decomposing a square matrix into a set of eigenvectors and eigenvalues.

A vector is an eigenvector of a matrix if it satisfies the following equation: A · v = λ · v

This is called the eigenvalue equation, where A is the parent square matrix that we are decomposing, v is the eigenvector of the matrix, and λ is the lowercase Greek letter lambda and represents the eigenvalue scalar.
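
For intuition, multiplying a matrix by one of its eigenvectors simply scales the eigenvector by the eigenvalue, without rotating it. A minimal sketch of the eigenvalue equation (the 2 × 2 matrix and vector below are illustrative assumptions, not part of the tutorial's worked example):

# sketch of the eigenvalue equation on a small assumed example
from numpy import array
# a diagonal matrix with eigenvalues 2 and 3
A = array([
    [2, 0],
    [0, 3]])
# [1, 0] is an eigenvector of A with eigenvalue 2
v = array([1, 0])
print(A.dot(v))  # [2 0]
print(2 * v)     # [2 0]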

An n × n matrix can have up to n linearly independent eigenvectors, each paired with a corresponding eigenvalue, one for each dimension of the parent matrix.

Not all square matrices can be decomposed into eigenvectors and eigenvalues, and some can only be decomposed in a way that requires complex numbers. 

The parent matrix can be shown to be a product of the eigenvectors and eigenvalues: A = Q · Λ · Q^-1
Where Q is a matrix whose columns are the eigenvectors, Λ is the uppercase Greek letter lambda and is the diagonal matrix comprised of the eigenvalues, and Q^-1 is the inverse of the eigenvector matrix. When A is real and symmetric, Q is orthogonal, so the inverse Q^-1 can be replaced with the cheaper transpose Q^T.
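
The interchangeability of the inverse and the transpose in the symmetric case can be checked directly, since the eigenvectors of a real symmetric matrix are orthonormal. A minimal sketch of that claim (the symmetric matrix here is an assumed example):

# sketch: for a real symmetric matrix, inv(Q) equals Q.T
from numpy import array, allclose
from numpy.linalg import eig, inv
# assumed example: a real symmetric matrix
S = array([
    [2, 1],
    [1, 2]])
values, Q = eig(S)
# the eigenvector matrix is orthogonal, so its inverse is its transpose
print(allclose(inv(Q), Q.T))  # True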


B. Eigenvectors and Eigenvalues

Decomposing a matrix in terms of its eigenvalues and its eigenvectors gives valuable insights into the properties of the matrix. 

Certain matrix calculations, like computing the power of the matrix, become much easier when we use the eigendecomposition of the matrix.
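
For example, raising A to an integer power only raises the eigenvalues to that power, since A^3 = Q · Λ^3 · Q^-1. A minimal sketch of this shortcut (the 2 × 2 matrix is an assumed example):

# sketch: computing a matrix power via the eigendecomposition
from numpy import array, diag, allclose
from numpy.linalg import eig, inv, matrix_power
# assumed example matrix
A = array([
    [1, 2],
    [3, 4]])
values, Q = eig(A)
# cube the eigenvalues, then map back: A^3 = Q . L^3 . Q^-1
B = Q.dot(diag(values ** 3)).dot(inv(Q))
print(allclose(B, matrix_power(A, 3)))  # True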


C. Calculation of Eigendecomposition

An eigendecomposition is calculated on a square matrix using an efficient iterative algorithm.

Often, an eigenvalue is found first, then an eigenvector is found by solving the eigenvalue equation for a set of coefficients.

# eigendecomposition
from numpy import array
from numpy.linalg import eig
# define matrix
A = array([
    [1, 2, 3],
    [4, 5, 6],
    [7, 8, 9]])
print(A)
# factorize into eigenvalues and eigenvectors (one column per eigenvector)
values, vectors = eig(A)
print(values)
print(vectors)

-----Result-----

[[1 2 3]
 [4 5 6]
 [7 8 9]]

[ 1.61168440e+01 -1.11684397e+00 -9.75918483e-16]

[[-0.23197069 -0.78583024  0.40824829]
 [-0.52532209 -0.08675134 -0.81649658]
 [-0.8186735   0.61232756  0.40824829]]
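
Notice that the third eigenvalue is on the order of 1e-16, which is zero to within floating point precision: this particular matrix is singular, because its rows are linearly dependent. Since the determinant equals the product of the eigenvalues, a one-line addition to the script above can confirm this (numpy.linalg.det is not used elsewhere in the tutorial, so this is a small assumed extension):

# the determinant is the product of the eigenvalues, here effectively zero
from numpy.linalg import det
print(det(A))  # approximately 0.0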


D. Confirm an Eigenvector and Eigenvalue

First, we will define a matrix, then calculate the eigenvalues and eigenvectors. 

We will then test whether the first vector and value are in fact an eigenvector and eigenvalue for the matrix. Note that eig() returns the eigenvectors as columns, so the first eigenvector is the first column, vectors[:, 0].

# confirm eigenvector
from numpy import array
from numpy.linalg import eig
# define matrix
A = array([
    [1, 2, 3],
    [4, 5, 6],
    [7, 8, 9]])
# factorize
values, vectors = eig(A)
# confirm first eigenvector: A . v should equal lambda . v
B = A.dot(vectors[:, 0])
print(B)
C = vectors[:, 0] * values[0]
print(C)

-----Result-----

[ -3.73863537 -8.46653421 -13.19443305]

[ -3.73863537 -8.46653421 -13.19443305]
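
The two printed vectors match to within floating point precision. Rather than comparing the output by eye, the check can be automated with NumPy's allclose() function; a small addition to the end of the script above:

# confirm the two products match numerically
from numpy import allclose
print(allclose(B, C))  # True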


E. Reconstruct Matrix

We can reverse the process and reconstruct the original matrix given only the eigenvectors and eigenvalues. 

First, the list of eigenvectors must be taken together as a matrix, where each vector becomes a column; this is exactly the layout that eig() returns.

The eigenvalues need to be arranged into a diagonal matrix.

# reconstruct matrix
from numpy import array
from numpy import diag
from numpy.linalg import eig
from numpy.linalg import inv
# define matrix
A = array([
    [1, 2, 3],
    [4, 5, 6],
    [7, 8, 9]])
print(A)
# factorize
values, vectors = eig(A)
# create matrix from eigenvectors (columns)
Q = vectors
# create inverse of eigenvectors matrix
R = inv(Q)
# create diagonal matrix from eigenvalues
L = diag(values)
# reconstruct the original matrix: A = Q . L . Q^-1
B = Q.dot(L).dot(R)
print(B)

-----Result-----

[[1 2 3]
 [4 5 6]
 [7 8 9]]


[[ 1. 2. 3.]
 [ 4. 5. 6.]
 [ 7. 8. 9.]]
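
The reconstruction matches the original matrix, only now printed as floats. As with the eigenvector check earlier, the comparison can be automated; a small addition to the end of the script above:

# confirm the reconstruction matches the original matrix
from numpy import allclose
print(allclose(A, B))  # True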
