Safekipedia

Eigenvalues and eigenvectors

Adapted from Wikipedia · Adventurer experience

Animation showing how eigenvectors of a matrix act as special directions where points only slide along them during a transformation.

In linear algebra, an eigenvector or characteristic vector is a special kind of vector that keeps its direction when a certain mathematical operation, called a linear transformation, is applied to it. The vector is only stretched or shrunk by a certain amount, and this amount is called the eigenvalue. If the eigenvalue is negative, the eigenvector is flipped to point in the opposite direction.

Think of vectors as arrows that have both size and direction. A linear transformation might twist, stretch, or change these arrows. But the eigenvectors are the special arrows that only get longer or shorter without changing where they point. The eigenvalue tells us exactly how much the eigenvector is stretched or shrunk.

Eigenvectors and eigenvalues help us understand many different areas, from geology to quantum mechanics. They are very important when a system repeats the same operation many times. In these cases, the largest eigenvalue shows us what will happen to the system after a long time, and the matching eigenvector shows us the stable state that the system will end up in.

Matrices

When we multiply a nonzero vector v by an ordinary number, called a scalar, we get the same vector v, only stretched or shrunk. If multiplying a square matrix A by v has exactly this effect, then v is called an eigenvector of A, and the scalar is called the eigenvalue.

In other words, for a square matrix A and a vector v, if multiplying A by v simply stretches or shrinks v without changing its direction, then v is an eigenvector, and the amount of stretching or shrinking is the eigenvalue. This relationship can be written as A v = λ v, where λ is the eigenvalue.
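The relation A v = λ v is easy to check numerically. Here is a minimal sketch with NumPy; the matrix and vector are made-up examples, not taken from the article:

```python
import numpy as np

# A hypothetical 2x2 matrix chosen for illustration.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# v = [1, 1] is an eigenvector of this A: multiplying by A
# stretches v without changing its direction.
v = np.array([1.0, 1.0])
Av = A @ v

print(Av)            # [3. 3.] -- same direction, scaled by 3
lam = Av[0] / v[0]   # the eigenvalue lambda
print(lam)           # 3.0
```

Because Av points the same way as v, the ratio of their components gives the eigenvalue, here 3.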

Overview

In this shear mapping the red arrow changes direction, but the blue arrow does not. The blue arrow is an eigenvector of this shear mapping because it does not change direction, and since its length is unchanged, its eigenvalue is 1.

Eigenvalues and eigenvectors are important ideas in the study of linear transformations. The words come from the German word eigen, meaning "own" or "characteristic". They are used in many areas, like studying how things spin, how they vibrate, and even in facial recognition systems.

An eigenvector points along a special direction that does not change when a transformation is applied to it. Instead, vectors along that direction are simply stretched or shrunk by a certain amount. This amount is called an eigenvalue. For example, if you stretch a picture in a certain way, some lines might stay pointing in the same direction but get longer or shorter — those directions are eigenvectors.

History

Eigenvalues and eigenvectors were first studied in the 1700s by mathematicians who looked at how objects spin and move. They found special directions that objects turn around, called principal axes, which are linked to eigenvectors.

In the 1800s, more mathematicians worked on this idea. They solved big math problems and found new ways to understand these special values. By the early 1900s, the German word eigen, meaning "own," was used. This led to the term "eigenvalue" we use today. The first computer method to find these values was made in 1929.

Eigenvalues and eigenvectors of matrices

See also: Euclidean vector and Matrix (mathematics)

Eigenvalues and eigenvectors are important ideas in linear algebra, often studied when learning about matrices. Linear transformations change vectors in space, and matrices can show these changes.

To understand eigenvectors, imagine a vector that keeps its direction when a special math operation is applied — it might just get longer or shorter. This operation is called a linear transformation. The amount it stretches or shrinks the vector is called an eigenvalue. In simple terms, an eigenvector is a special vector that only changes in length when a matrix multiplies it. The number by which it changes is the eigenvalue.

This idea helps solve many problems in physics, computer graphics, and more, by showing which directions stay the same during certain changes.

A v = λ v    (1)
(A − λI) v = 0    (2)
det(A − λI) = 0    (3)
det(A − λI) = (λ₁ − λ)(λ₂ − λ) ⋯ (λₙ − λ)    (4)
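Equation (3) says the eigenvalues are exactly the roots of the characteristic polynomial det(A − λI). A small sketch with NumPy, using a made-up matrix as an assumed example:

```python
import numpy as np

# Hypothetical example matrix (not from the article).
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# np.poly gives the coefficients of the characteristic polynomial
# det(A - lambda*I); for this A it is lambda^2 - 4*lambda + 3.
coeffs = np.poly(A)

# The eigenvalues are the roots of that polynomial.
eigenvalues = np.roots(coeffs)
print(np.sort(eigenvalues))   # [1. 3.]
```

In practice libraries compute eigenvalues directly rather than via polynomial roots, but this path mirrors equations (3) and (4) exactly.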

Eigenvalues and eigenfunctions of differential operators

Main article: Eigenfunction

In linear algebra, eigenvectors and eigenvalues can also work with special math rules called differential operators. These operators work on spaces with infinitely many dimensions. They act on functions instead of simple vectors.

For example, think about a rule that shows how a function changes at each point. This is called the derivative operator. If this operator acts on a function and the result is the same function multiplied by a constant, that function is called an eigenfunction. The constant it is multiplied by is the eigenvalue. The solutions to this are special functions called exponentials. These functions grow or shrink at a steady rate.
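This fact can be checked symbolically. The sketch below uses SymPy (an assumed tool choice): differentiating the exponential exp(λx) returns λ times the same function, so the exponential is an eigenfunction of the derivative operator:

```python
import sympy as sp

x, lam = sp.symbols('x lam')

# The exponential function exp(lam * x) ...
f = sp.exp(lam * x)

# ... acted on by the derivative operator d/dx.
df = sp.diff(f, x)

# The result is the original function times the constant lam,
# so f is an eigenfunction with eigenvalue lam.
print(sp.simplify(df / f))   # lam
```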

General definition

An eigenvector is a special vector that, when a linear transformation is applied to it, changes only by getting bigger or smaller. This change in size is called an eigenvalue.

For example, if v is an eigenvector and T is the transformation, then applying T to v gives a new vector. This new vector is the original v multiplied by a number λ. We can write this as T(v) = λv. Here, λ is the eigenvalue and it shows us how much the eigenvector gets larger or smaller.

T(v) = λ v    (5)

Dynamic equations

Difference equations help us understand how things change step by step over time. The simplest ones are like a list of numbers, where each number depends on the ones before it. To solve these, we use a special equation called a characteristic equation. This helps us find important numbers called roots. These roots show us patterns in how things change.

We can use a similar idea for differential equations, which describe smooth changes over time. By finding special numbers, we can solve these equations and guess what will happen next.
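As a hedged illustration (the Fibonacci rule below is my example, not the article's): a step-by-step rule can be written as a matrix, and the largest eigenvalue of that matrix predicts the long-run behavior of the sequence.

```python
import numpy as np

# The Fibonacci rule x[n] = x[n-1] + x[n-2], written as a matrix step.
A = np.array([[1.0, 1.0],
              [1.0, 0.0]])

# Its characteristic equation is lambda^2 - lambda - 1 = 0, whose
# largest root is the golden ratio, about 1.618.
largest = np.linalg.eigvals(A).real.max()

# Iterate the rule: the ratio of successive terms approaches that root.
a, b = 1.0, 1.0
for _ in range(30):
    a, b = a + b, a
print(a / b, largest)   # both about 1.618
```

The largest eigenvalue dominates after many steps, which is exactly the "what happens after a long time" idea from the introduction.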

Calculation

Main article: Eigenvalue algorithm

Finding eigenvalues and eigenvectors can be hard, because unlike ordinary arithmetic there is no simple formula that works for every matrix.

Classical method

First, we find the eigenvalues, and then we use them to find the eigenvectors. This method works well for small matrices but becomes much harder as the matrices grow bigger.

Eigenvalues

For a square matrix A, eigenvalues are found by solving the characteristic equation det(A − λI) = 0. This is easy for small matrices, but exact formulas for the roots only exist up to 4 × 4; for bigger matrices the polynomial has no general algebraic solution, so we must use numerical computer methods instead.

Eigenvectors

Once we know an eigenvalue λ, we can find its eigenvectors by solving the set of equations (A − λI)v = 0. For example, if we know that 6 is an eigenvalue of a matrix A, we solve (A − 6I)v = 0 to find every vector that the matrix simply stretches by a factor of 6.
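As a sketch, here is that step for a hypothetical matrix that happens to have 6 as an eigenvalue (the matrix is my example, not the article's):

```python
import numpy as np

# A made-up matrix with eigenvalues 3 and 6.
A = np.array([[5.0, 1.0],
              [2.0, 4.0]])

# Form (A - 6I); its null space holds the eigenvectors for 6.
M = A - 6.0 * np.eye(2)

# Here M = [[-1, 1], [2, -2]], so v = [1, 1] solves M v = 0,
# which we can read off by hand for a 2x2 case.
v = np.array([1.0, 1.0])
print(M @ v)   # [0. 0.]
print(A @ v)   # [6. 6.] -- A stretches v by exactly 6
```

Every nonzero multiple of v is also an eigenvector for 6; together they form the eigenspace of that eigenvalue.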

Simple iterative methods

Main article: Power iteration

Another way is to start with a random vector and repeatedly multiply it by the matrix. Over time, this vector points closer and closer to an eigenvector. There are also tricks to make this process converge faster.
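The procedure above is power iteration, and it fits in a few lines. A minimal sketch (the test matrix is an assumed example):

```python
import numpy as np

def power_iteration(A, steps=100):
    """Repeatedly multiply a random vector by A. The vector converges
    toward an eigenvector of the largest-magnitude eigenvalue,
    assuming that eigenvalue is unique."""
    rng = np.random.default_rng(0)
    v = rng.standard_normal(A.shape[0])
    for _ in range(steps):
        v = A @ v
        v = v / np.linalg.norm(v)  # renormalize so numbers stay small
    # The Rayleigh quotient estimates the matching eigenvalue.
    return v, v @ A @ v

# Made-up symmetric matrix with eigenvalues 3 and 1.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
v, lam = power_iteration(A)
print(lam)   # about 3.0, the largest eigenvalue of A
```

The renormalization each step is one of the "tricks" that keeps the iteration numerically stable.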

Modern methods

Good computer methods for this problem weren't developed until 1961, with the QR algorithm. Since then, scientists have created even better methods, especially for very large matrices that are mostly zeros (sparse matrices). These modern methods usually find both eigenvalues and eigenvectors together.
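In practice we rarely code these algorithms by hand; a library routine returns eigenvalues and eigenvectors together. A small sketch with NumPy, whose dense solver is built on LAPACK's QR-algorithm descendants (the matrix below is made up):

```python
import numpy as np

# A made-up lower-triangular matrix; its eigenvalues are simply
# the diagonal entries 2, 3, and 5.
A = np.array([[2.0, 0.0, 0.0],
              [1.0, 3.0, 0.0],
              [0.0, 0.0, 5.0]])

# eig returns eigenvalues and eigenvectors in one call.
eigenvalues, eigenvectors = np.linalg.eig(A)
print(np.sort(eigenvalues.real))   # [2. 3. 5.]
```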

Applications

PCA of the multivariate Gaussian distribution centered at (1, 3) with a standard deviation of 3 in roughly the (0.878, 0.478) direction and of 1 in the orthogonal direction. The vectors shown are unit eigenvectors of the (symmetric, positive-semidefinite) covariance matrix scaled by the square root of the corresponding eigenvalue. Just as in the one-dimensional case, the square root is taken because the standard deviation is more readily visualized than the variance.

Eigenvectors and eigenvalues help us understand how shapes change when we move or stretch them. For example, they can show how a square turns into a rectangle but keeps the same area. They are also used in principal component analysis, a way to make big sets of data easier to study, like in biology.
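A minimal sketch of the principal component analysis idea, on synthetic data (the data and spread values are assumptions for illustration): the eigenvectors of the covariance matrix are the main directions of the data, and the eigenvalues measure how much the data spreads along each one.

```python
import numpy as np

# Synthetic 2-D data: spread of about 3 along x and 1 along y.
rng = np.random.default_rng(0)
data = rng.standard_normal((500, 2)) @ np.array([[3.0, 0.0],
                                                 [0.0, 1.0]])

# Eigen-decompose the covariance matrix (eigh: symmetric case).
cov = np.cov(data, rowvar=False)
eigenvalues, eigenvectors = np.linalg.eigh(cov)

# Sort so the direction with the most spread comes first.
order = np.argsort(eigenvalues)[::-1]
print(eigenvalues[order])   # roughly [9, 1]: the variances 3^2 and 1^2
```

Keeping only the eigenvectors with the largest eigenvalues is what makes big data sets easier to study: the discarded directions carry little of the variation.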

In graph theory, eigenvalues help us find important points, like in Google’s PageRank system. They are also used to study how things vibrate, showing natural movement patterns. Eigenvectors and eigenvalues are important in quantum mechanics, where they describe how tiny particles behave and their energy levels.

Eigenvalues of geometric transformations
Scaling
Matrix: [k 0; 0 k]
Characteristic polynomial: (λ − k)²
Eigenvalues: λ₁ = λ₂ = k (algebraic multiplicity 2, geometric multiplicity 2)
Eigenvectors: all nonzero vectors

Unequal scaling
Matrix: [k₁ 0; 0 k₂]
Characteristic polynomial: (λ − k₁)(λ − k₂)
Eigenvalues: λ₁ = k₁, λ₂ = k₂ (each with algebraic and geometric multiplicity 1)
Eigenvectors: u₁ = [1, 0], u₂ = [0, 1]

Rotation
Matrix: [cos θ  −sin θ; sin θ  cos θ]
Characteristic polynomial: λ² − 2 cos(θ) λ + 1
Eigenvalues: λ₁ = e^{iθ} = cos θ + i sin θ, λ₂ = e^{−iθ} = cos θ − i sin θ (each with algebraic and geometric multiplicity 1)
Eigenvectors: u₁ = [1, −i], u₂ = [1, +i]

Horizontal shear
Matrix: [1 k; 0 1]
Characteristic polynomial: (λ − 1)²
Eigenvalues: λ₁ = λ₂ = 1 (algebraic multiplicity 2, geometric multiplicity 1)
Eigenvector: u₁ = [1, 0]

Hyperbolic rotation
Matrix: [cosh φ  sinh φ; sinh φ  cosh φ]
Characteristic polynomial: λ² − 2 cosh(φ) λ + 1
Eigenvalues: λ₁ = e^φ = cosh φ + sinh φ, λ₂ = e^{−φ} = cosh φ − sinh φ (each with algebraic and geometric multiplicity 1)
Eigenvectors: u₁ = [1, 1], u₂ = [1, −1]

Images

Animation showing how a tuning fork vibrates at a frequency of 440.09 Hz.
Animation showing how certain vectors stay pointed in the same direction after a mathematical transformation, helping to explain eigenvectors in linear algebra.
Scientific diagrams showing different energy states of electrons in a hydrogen atom, using color to represent probability density

This article is a child-friendly adaptation of the Wikipedia article on Eigenvalues and eigenvectors, available under CC BY-SA 4.0.

Images from Wikimedia Commons. Tap any image to view credits and license.