Safekipedia

Gaussian elimination

Adapted from Wikipedia · Adventurer experience

A diagram showing how Gaussian elimination works by using special matrix multiplications to simplify a matrix step by step.

Gaussian elimination is a useful method in mathematics to solve systems of linear equations. It is named after the famous mathematician Carl Friedrich Gauss. This technique uses simple steps on the rows of a matrix to make it easier to solve.

Animation of Gaussian elimination. Red row eliminates the following rows, green rows change their order.

The goal is to change the matrix into a special form called reduced row echelon form. In this form, each row that is not all zeros starts with a 1. These 1s appear further to the right in each new row, and there are zeros below and above them. This neat setup helps show the answers to the equations clearly.
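As an illustration (this particular matrix is our own example, not one from the original article), a matrix in reduced row echelon form looks like:

```latex
\begin{bmatrix}
1 & 0 & 2 & 0 \\
0 & 1 & -1 & 0 \\
0 & 0 & 0 & 1
\end{bmatrix}
```

Each nonzero row starts with a leading 1, each leading 1 sits further to the right than the one above it, and every other entry in a leading 1's column is zero.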

Gaussian elimination is not only used in math class. It is also important in real life. People use it in computer science, engineering, and physics. It helps solve problems like planning space journeys or building safe structures. This method gives a reliable way to find answers quickly.

Definitions and example of algorithm

Row reduction of an invertible matrix expressed as left multiplication by elementary matrices.

The process of row reduction uses simple steps on the rows of a matrix to solve sets of equations. These steps include swapping two rows, multiplying a row by a number, or adding a multiple of one row to another. These actions help organize the matrix so we can see if there are answers and what they might be.

One goal is to arrange the matrix into a special form called "row echelon form," where each row's first number (called a leading entry) is to the right of the one above it, and zeros fill the space below each leading entry. This makes solving the equations easier. For example, a matrix might look like this after organizing it: the first row starts with a number in the second column, the next row starts with a number in the third column, and the last row is all zeros. This helps us see the answer clearly.
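The shape just described can be written out explicitly; this matrix is an illustrative example of our own choosing:

```latex
\begin{bmatrix}
0 & 3 & 1 & 4 \\
0 & 0 & 2 & -1 \\
0 & 0 & 0 & 0
\end{bmatrix}
```

The first row's leading entry is in the second column, the second row's is in the third column, and the last row is all zeros: the staircase pattern of row echelon form.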

Below, each step shows the system of equations, the row operations applied, and the resulting augmented matrix.

\[
\begin{aligned}
2x + y - z &= 8 \\
-3x - y + 2z &= -11 \\
-2x + y + 2z &= -3
\end{aligned}
\qquad
\left[\begin{array}{rrr|r}
2 & 1 & -1 & 8 \\
-3 & -1 & 2 & -11 \\
-2 & 1 & 2 & -3
\end{array}\right]
\]

Applying $L_2 + \tfrac{3}{2}L_1 \to L_2$ and $L_3 + L_1 \to L_3$:

\[
\begin{aligned}
2x + y - z &= 8 \\
\tfrac{1}{2}y + \tfrac{1}{2}z &= 1 \\
2y + z &= 5
\end{aligned}
\qquad
\left[\begin{array}{rrr|r}
2 & 1 & -1 & 8 \\
0 & \tfrac{1}{2} & \tfrac{1}{2} & 1 \\
0 & 2 & 1 & 5
\end{array}\right]
\]

Applying $L_3 - 4L_2 \to L_3$:

\[
\begin{aligned}
2x + y - z &= 8 \\
\tfrac{1}{2}y + \tfrac{1}{2}z &= 1 \\
-z &= 1
\end{aligned}
\qquad
\left[\begin{array}{rrr|r}
2 & 1 & -1 & 8 \\
0 & \tfrac{1}{2} & \tfrac{1}{2} & 1 \\
0 & 0 & -1 & 1
\end{array}\right]
\]

The matrix is now in row echelon form (also called triangular form).

Applying $L_1 - L_3 \to L_1$ and $L_2 + \tfrac{1}{2}L_3 \to L_2$:

\[
\begin{aligned}
2x + y &= 7 \\
\tfrac{1}{2}y &= \tfrac{3}{2} \\
-z &= 1
\end{aligned}
\qquad
\left[\begin{array}{rrr|r}
2 & 1 & 0 & 7 \\
0 & \tfrac{1}{2} & 0 & \tfrac{3}{2} \\
0 & 0 & -1 & 1
\end{array}\right]
\]

Applying $2L_2 \to L_2$ and $-L_3 \to L_3$:

\[
\begin{aligned}
2x + y &= 7 \\
y &= 3 \\
z &= -1
\end{aligned}
\qquad
\left[\begin{array}{rrr|r}
2 & 1 & 0 & 7 \\
0 & 1 & 0 & 3 \\
0 & 0 & 1 & -1
\end{array}\right]
\]

Applying $L_1 - L_2 \to L_1$ and then $\tfrac{1}{2}L_1 \to L_1$:

\[
\begin{aligned}
x &= 2 \\
y &= 3 \\
z &= -1
\end{aligned}
\qquad
\left[\begin{array}{rrr|r}
1 & 0 & 0 & 2 \\
0 & 1 & 0 & 3 \\
0 & 0 & 1 & -1
\end{array}\right]
\]

The matrix is now in reduced row echelon form.
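The worked example above can be checked with a short program. This is a minimal sketch of forward elimination followed by back substitution, assuming no pivot turns out to be zero (so no row swaps are needed); it is not a robust general-purpose solver:

```python
def gauss_solve(A, b):
    """Solve A x = b by forward elimination and back substitution.
    Assumes every pivot encountered is nonzero (no row swaps)."""
    n = len(A)
    # Build the augmented matrix [A | b] with float entries.
    M = [[float(x) for x in row] + [float(bi)] for row, bi in zip(A, b)]
    for k in range(n):                       # eliminate below pivot k
        for i in range(k + 1, n):
            factor = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= factor * M[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):           # back substitution
        s = sum(M[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (M[i][n] - s) / M[i][i]
    return x

A = [[2, 1, -1], [-3, -1, 2], [-2, 1, 2]]
b = [8, -11, -3]
print(gauss_solve(A, b))  # -> [2.0, 3.0, -1.0]
```

The intermediate rows match the augmented matrices in the table above, and the result agrees with the solution x = 2, y = 3, z = -1.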

History

Gaussian elimination is a way to solve math problems with many equations. People have used it for a very long time. It first appeared in a Chinese book called The Nine Chapters on the Mathematical Art around 150 BC. Later, Isaac Newton wrote about it in the late 1600s. It became a common way to teach solving equations in algebra books.

Today, we name this method after Carl Friedrich Gauss, even though many people used it before him. Some people call a special version of it "Gauss–Jordan elimination," named after Wilhelm Jordan, who described it in 1888.

Applications

Gaussian elimination does more than solve systems of equations. The same row operations have several other important uses.

One important job for Gaussian elimination is to find something called the determinant of a square matrix. Each row operation changes the determinant in a known way: swapping two rows flips its sign, multiplying a row by a number multiplies the determinant by that number, and adding a multiple of one row to another leaves it unchanged. Once the matrix is triangular, the determinant is simply the product of the diagonal entries, adjusted for those changes.
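As a sketch of this use (the function below is our own illustration, not code from the article): eliminate below the diagonal, track the sign flips from row swaps, and multiply the pivots left on the diagonal:

```python
def det_by_elimination(A):
    """Determinant via forward elimination with partial pivoting.
    Each row swap flips the sign; the determinant is the signed
    product of the pivots left on the diagonal."""
    M = [[float(x) for x in row] for row in A]
    n = len(M)
    sign = 1.0
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))  # best pivot row
        if M[p][k] == 0.0:
            return 0.0                    # no pivot: matrix is singular
        if p != k:
            M[k], M[p] = M[p], M[k]
            sign = -sign                  # swapping rows negates det
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n):
                M[i][j] -= f * M[k][j]    # leaves det unchanged
    result = sign
    for k in range(n):
        result *= M[k][k]
    return result

print(det_by_elimination([[2, 1, -1], [-3, -1, 2], [-2, 1, 2]]))  # close to -1.0
```

Note that only additions of row multiples and swaps are used here, so no scaling corrections are needed.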

Gaussian elimination can also help us find the inverse of a matrix, if it has one. We start with the square matrix and place an identity matrix next to it. Then we apply row operations to both parts at the same time until the original matrix becomes the identity. If this succeeds, the other side has turned into the inverse; if it cannot be done, no inverse exists. This works for square matrices of any size.
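The augment-with-the-identity procedure can be sketched as follows; `invert` is our own illustrative helper, not code from the article:

```python
def invert(A):
    """Gauss-Jordan inversion: augment A with the identity, reduce the
    left half to the identity, and read the inverse off the right half.
    Returns None when A is singular. Uses partial pivoting."""
    n = len(A)
    # Augmented matrix [A | I].
    M = [[float(x) for x in row] + [1.0 if i == j else 0.0 for j in range(n)]
         for i, row in enumerate(A)]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        if M[p][k] == 0.0:
            return None                   # no inverse exists
        M[k], M[p] = M[p], M[k]
        pivot = M[k][k]
        M[k] = [v / pivot for v in M[k]]  # scale pivot row to a leading 1
        for i in range(n):
            if i != k and M[i][k] != 0.0:
                f = M[i][k]               # clear entries above and below
                M[i] = [a - f * b for a, b in zip(M[i], M[k])]
    return [row[n:] for row in M]

print(invert([[2, 1], [1, 1]]))  # -> [[1.0, -1.0], [-1.0, 2.0]]
```

Because entries above the pivot are cleared too, the left half ends in reduced row echelon form, which for an invertible matrix is exactly the identity.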

Computational efficiency

Gaussian elimination is a method to solve systems of equations by working with rows of numbers. The amount of work depends on how many equations there are: for a system with n equations, it takes about 2n³/3 arithmetic operations. This makes it useful for solving systems with hundreds of equations, but not for very large systems.
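The 2n³/3 figure can be checked by counting loop iterations. This sketch (our own illustration) tallies one operation per division and two per multiply-subtract during forward elimination on an augmented system, plus back substitution; for n = 100 the count lands within a few percent of 2n³/3:

```python
def elimination_flops(n):
    """Count arithmetic operations for forward elimination on an
    n x n system (right-hand side included) plus back substitution."""
    ops = 0
    for k in range(n):
        for i in range(k + 1, n):
            ops += 1                       # factor = A[i][k] / A[k][k]
            for j in range(k + 1, n + 1):  # row update, incl. RHS column
                ops += 2                   # one multiply, one subtract
    for i in range(n):                     # back substitution
        ops += 2 * (n - 1 - i) + 1
    return ops

n = 100
print(elimination_flops(n), round(2 * n**3 / 3))  # counts of similar size
```

The lower-order terms (the n² work of back substitution, for instance) shrink relative to the cubic term as n grows.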

There is a special version of Gaussian elimination called the Bareiss algorithm. This version works well when the numbers are whole numbers. It keeps all results as whole numbers without using fractions, making the calculations easier to manage.
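A minimal sketch of the fraction-free idea, applied here to computing an integer determinant (and assuming no zero pivot turns up along the way):

```python
def bareiss_det(A):
    """Determinant of an integer matrix by Bareiss fraction-free
    elimination: every division below is exact, so all intermediate
    values stay whole numbers. Assumes no zero pivot (no row swaps)."""
    M = [list(row) for row in A]
    n = len(M)
    prev = 1                               # previous pivot, starts at 1
    for k in range(n - 1):
        for i in range(k + 1, n):
            for j in range(k + 1, n):
                # Cross-multiplication update; the division by the
                # previous pivot is exact for integer input.
                M[i][j] = (M[i][j] * M[k][k] - M[i][k] * M[k][j]) // prev
            M[i][k] = 0
        prev = M[k][k]
    return M[n - 1][n - 1]

print(bareiss_det([[2, 1, -1], [-3, -1, 2], [-2, 1, 2]]))  # -> -1
```

Compare with ordinary elimination on the same matrix, which immediately produces the fraction 1/2 in the second row.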

Generalizations

Gaussian elimination works with many kinds of numbers, not just the ones you learn in school. There is a method called Buchberger's algorithm that uses ideas from Gaussian elimination to solve systems of polynomial equations, where variables can be raised to powers.

For very complex math structures called tensors, finding some properties can be hard and might not have a quick solution like Gaussian elimination.

Pseudocode

Gaussian elimination is a way to solve problems with many equations. It uses steps on a table of numbers, called a matrix. The goal is to make the matrix simpler so the answers can be found more easily.

The steps include finding the entry with the largest absolute value in the current column (called partial pivoting), swapping that row into the pivot position, and using the pivot to turn the numbers below it into zeros. This repeats column by column until the matrix is in a form where the answers can be found step by step. Choosing the largest pivot helps keep calculations accurate, especially on computers.
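Those steps can be sketched in Python; the original article gives pseudocode, and this is our own rendering of the same idea rather than that exact pseudocode:

```python
def row_echelon(M):
    """Reduce a matrix to row echelon form in place: pick the entry of
    largest absolute value in the current column (partial pivoting,
    for numerical stability), swap it into pivot position, then zero
    out the entries below it."""
    rows, cols = len(M), len(M[0])
    h = 0                                  # current pivot row
    for k in range(cols):                  # current pivot column
        # argmax of |M[i][k]| over the remaining rows
        p = max(range(h, rows), key=lambda i: abs(M[i][k]))
        if M[p][k] == 0.0:
            continue                       # no pivot in this column
        M[h], M[p] = M[p], M[h]
        for i in range(h + 1, rows):
            f = M[i][k] / M[h][k]
            M[i][k] = 0.0                  # set exactly, avoids round-off
            for j in range(k + 1, cols):
                M[i][j] -= f * M[h][j]
        h += 1
        if h == rows:
            break
    return M

M = [[2.0, 1.0, -1.0, 8.0],
     [-3.0, -1.0, 2.0, -11.0],
     [-2.0, 1.0, 2.0, -3.0]]
row_echelon(M)   # M now has zeros below each pivot
```

With pivoting, the intermediate rows differ from the hand-worked example above (the largest entry, -3, is swapped up first), but the final triangular shape and the solutions are the same.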


This article is a child-friendly adaptation of the Wikipedia article on Gaussian elimination, available under CC BY-SA 4.0.

Images from Wikimedia Commons.