In coding theory, a linear code is a special type of error-correcting code. It has a useful property: if you combine any two codewords in a special way, called a linear combination, the result is also a codeword. This property makes linear codes easier to encode, decode, and study than many other kinds of codes. Linear codes come in two main types: block codes and convolutional codes, with some newer codes like turbo codes mixing features of both.
Linear codes are very important for sending information over the internet or wireless signals. They help catch and fix mistakes that happen when messages travel. For example, when sending a message made of bits, a linear code adds extra bits so the receiver can spot and fix errors. One famous example is the [7,4,3] Hamming code, a binary code. This code takes a 4-bit message and turns it into a 7-bit codeword. Because any two of these codewords differ in at least 3 places, the code can detect up to two wrong bits and fix one.
Definition and parameters
A linear code is a special kind of code that helps fix mistakes when sending information. It uses special messages called codewords. These codewords follow a rule: if you add any two codewords together, the result is always another codeword in the set.
We describe a linear code with three numbers: its length n (how many symbols each codeword has), its dimension k (how many message symbols each codeword carries), and its distance d (the smallest number of places in which any two different codewords differ). Together these tell us how well the code can fix mistakes: a code with distance d can always fix fewer than d/2 wrong symbols.
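These three numbers can be counted directly for a tiny code. Here is a short Python sketch (the toy code and variable names are just for illustration) that measures n, k, and d for the length-3 even-parity code:

```python
from itertools import combinations

# A tiny linear code: all length-3 bit strings with an even number of 1s.
codewords = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]

n = len(codewords[0])                      # length: 3 bits per codeword
k = 2                                      # dimension: 2^k = 4 codewords
d = min(sum(a != b for a, b in zip(u, v))  # minimum distance between codewords
        for u, v in combinations(codewords, 2))

print(n, k, d)  # 3 2 2
```

With distance 2, this code can detect one wrong bit but cannot fix it, matching the rule that a code fixes fewer than d/2 errors.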
Generator and check matrices
A linear code can be described using special tables called matrices. One type is called a generator matrix. It helps create all the codewords in the code. Another type is called a parity-check matrix. It helps check whether a received message follows the code's rules.
These matrices make working with linear codes easier. They help computers encode and decode messages quickly. The design of these matrices is important for finding and fixing errors.
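Here is a small Python sketch of how the two matrices are used, with a toy code (the length-3 even-parity code); the function names are made up for illustration:

```python
# Generator matrix for the length-3 even-parity code: each row is a basic
# codeword, and the third bit of each row is the sum (mod 2) of the first two.
G = [[1, 0, 1],
     [0, 1, 1]]
# Parity-check matrix: a word c is a codeword exactly when H * c = 0 (mod 2).
H = [[1, 1, 1]]

def encode(message):
    """Multiply the message by G (mod 2): mix the rows of G chosen by the message."""
    return [sum(message[i] * G[i][j] for i in range(len(G))) % 2
            for j in range(len(G[0]))]

def syndrome(word):
    """Multiply H by the word (mod 2); all zeros means the word passes the check."""
    return [sum(h * c for h, c in zip(row, word)) % 2 for row in H]

codeword = encode([1, 0])
print(codeword, syndrome(codeword))  # [1, 0, 1] [0]
```

A received word that fails the check, such as [1, 1, 1], gives a nonzero syndrome, which tells the receiver something went wrong.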
Example: Hamming codes
Main article: Hamming code
Hamming codes are a type of linear code that help fix small mistakes in digital messages. For any number r that is 2 or larger, there is a Hamming code of length 2^r − 1 that can fix one wrong bit anywhere in the codeword.
For example, there is a Hamming code that works with 7-bit codewords: it stores 4 message bits plus 3 check bits, and it can find and fix one wrong bit anywhere among the 7 bits.
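The check-and-fix idea can be sketched in Python. This is a minimal illustration of the classic [7,4,3] Hamming layout (parity bits at positions 1, 2, and 4, as in most textbooks); the function names are made up for this example:

```python
def hamming_encode(d1, d2, d3, d4):
    """Encode 4 data bits into a 7-bit [7,4,3] Hamming codeword."""
    p1 = d1 ^ d2 ^ d4          # covers positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4          # covers positions 2, 3, 6, 7
    p4 = d2 ^ d3 ^ d4          # covers positions 4, 5, 6, 7
    return [p1, p2, d1, p4, d2, d3, d4]

def hamming_fix(word):
    """Find and flip a single wrong bit; the syndrome spells out its position."""
    w = word[:]
    s1 = w[0] ^ w[2] ^ w[4] ^ w[6]
    s2 = w[1] ^ w[2] ^ w[5] ^ w[6]
    s4 = w[3] ^ w[4] ^ w[5] ^ w[6]
    pos = s1 + 2 * s2 + 4 * s4          # 0 means "no error found"
    if pos:
        w[pos - 1] ^= 1
    return w

sent = hamming_encode(1, 0, 1, 1)
received = sent[:]
received[4] ^= 1                        # flip one bit in transit
print(hamming_fix(received) == sent)    # True
```

The three syndrome bits read out, in binary, the exact position of the flipped bit, which is why one error is always fixable.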
Example: Hadamard codes
Main article: Hadamard code
A Hadamard code is a special type of linear code that can fix errors in data. It turns a short message into a much longer codeword, built so that any two different codewords disagree in exactly half of their positions.
One kind of Hadamard code uses an 8-bit block to store 3 bits of information. It can fix up to 1 error. This code is related to another family of codes called Reed–Muller codes. Removing one fixed position from every codeword of the Hadamard code gives a simpler code called the simplex code, which is the dual of the Hamming code.
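One common way to build this Hadamard code is to list, for each 3-bit message, its inner product (mod 2) with every 3-bit pattern. The Python sketch below (illustrative only) builds all 8 codewords and checks that every pair is exactly 4 bits apart:

```python
from itertools import product, combinations

# Hadamard [8,3,4] code: the codeword for a 3-bit message x lists the
# inner product (mod 2) of x with every 3-bit vector y.
def hadamard_codeword(x):
    return [sum(a * b for a, b in zip(x, y)) % 2
            for y in product([0, 1], repeat=3)]

codewords = [hadamard_codeword(x) for x in product([0, 1], repeat=3)]

distances = {sum(a != b for a, b in zip(u, v))
             for u, v in combinations(codewords, 2)}
print(len(codewords), distances)  # 8 {4}
```

Since every pair of codewords is distance 4 apart, the code can fix 1 error (fewer than 4/2 wrong bits).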
Nearest neighbor algorithm
The nearest neighbor algorithm helps fix mistakes in data. It takes the received message and looks for the codeword that is closest to it, meaning the codeword that differs from it in the fewest places. The search starts close to the received message and widens step by step until it finds a codeword or decides there is no codeword nearby.
This method is important because it shows how many mistakes a code can correct: as long as the number of wrong symbols is less than half the code's minimum distance, the closest codeword is always the one that was sent.
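For a small code, the same idea can be done by brute force: compare the received word with every codeword and pick the closest. A minimal Python sketch, with made-up names and a toy 3-bit repetition code:

```python
def hamming_distance(u, v):
    """Count the positions where two words disagree."""
    return sum(a != b for a, b in zip(u, v))

def nearest_neighbor_decode(received, codewords):
    """Return the codeword closest to the received word (fewest differing bits)."""
    return min(codewords, key=lambda c: hamming_distance(received, c))

# Toy code: the 3-bit repetition code.
codewords = [(0, 0, 0), (1, 1, 1)]
print(nearest_neighbor_decode((1, 0, 1), codewords))  # (1, 1, 1)
```

Real decoders avoid checking every codeword, because large codes have far too many, but the answer they aim for is the same.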
Popular notation
Main article: Block code § Popular notation
In coding, we use the letter C to stand for a code. A code with a length of n and a rank of k (meaning it has k codewords in its basis) is called an (n, k) code. For linear block codes, we write them as [n, k, d] codes. Here, d is the minimum distance: the smallest number of places in which any two different codewords differ. This tells us how well the code can fix mistakes.
Singleton bound
The Singleton bound is a rule that helps us understand the limits of linear codes. Linear codes are special ways to correct errors in data.
The rule says that for every [n, k, d] linear code, the three numbers must follow a simple formula: k + d ≤ n + 1.
Codes that meet this rule exactly—where k + d = n + 1—are called maximum distance separable or MDS codes. These codes are considered the best possible in some ways because they can correct the most errors for their size.
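Checking the bound is simple arithmetic. The Python sketch below (function name made up for illustration) tests whether a set of parameters meets the bound with equality:

```python
def is_mds(n, k, d):
    """A code meets the Singleton bound with equality when k + d = n + 1."""
    assert k + d <= n + 1, "no linear code can violate the Singleton bound"
    return k + d == n + 1

print(is_mds(7, 4, 3))   # False: the [7,4,3] Hamming code is not MDS (4 + 3 < 8)
print(is_mds(5, 1, 5))   # True: the 5-bit repetition code [5,1,5] is MDS
```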
Bonisoli's theorem
A code is equidistant if the distance between any two different codewords is always the same number, called d. In 1984, Arrigo Bonisoli studied special types of codes called linear one-weight codes. He showed that every equidistant linear code is built from copies of a dual Hamming code (also called a simplex code), possibly with extra positions that always hold zero. This helps mathematicians understand these codes better.
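The simplex code is a good example of an equidistant code. The Python sketch below (illustrative only) builds the length-7 simplex code and checks that every pair of codewords is exactly the same distance apart:

```python
from itertools import product, combinations

# The simplex (dual Hamming) code of length 7: the codeword for a 3-bit
# message x lists the inner product (mod 2) of x with every NONZERO 3-bit y.
def simplex_codeword(x):
    return [sum(a * b for a, b in zip(x, y)) % 2
            for y in product([0, 1], repeat=3) if any(y)]

codewords = [simplex_codeword(x) for x in product([0, 1], repeat=3)]
distances = {sum(a != b for a, b in zip(u, v))
             for u, v in combinations(codewords, 2)}
print(distances)  # {4}: every pair of codewords is exactly distance 4 apart
```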
Examples
Linear codes are special ways to fix mistakes when information is sent or saved. Some examples of linear codes are:
- Repetition code
- Parity code
- Cyclic code
- Hamming code
- Golay code, both the binary and ternary versions
- Polynomial codes, of which BCH codes are an example
- Reed–Solomon codes
- Reed–Muller code
- Algebraic geometry code
- Binary Goppa code
- Low-density parity-check codes
- Expander code
- Multidimensional parity-check code
- Toric code
- Turbo code
- Locally recoverable code
These codes help keep information safe and clear, even if some parts get mixed up.
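The repetition code at the top of the list is the simplest of all. Here is a short Python sketch (names made up for illustration) that encodes each bit three times and decodes by majority vote:

```python
def encode_repetition(bits, copies=3):
    """Repetition code: send each bit several times in a row."""
    return [b for b in bits for _ in range(copies)]

def decode_repetition(word, copies=3):
    """Majority vote over each group of repeated bits."""
    groups = [word[i:i + copies] for i in range(0, len(word), copies)]
    return [int(sum(g) > copies // 2) for g in groups]

sent = encode_repetition([1, 0, 1])          # [1,1,1, 0,0,0, 1,1,1]
garbled = sent[:]
garbled[1] ^= 1                              # one bit flipped in transit
print(decode_repetition(garbled))            # [1, 0, 1]
```

The 3-repetition code is a [3, 1, 3] linear code: it fixes any single wrong bit, at the cost of sending three times as much data.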
Generalization
Some mathematicians study codes over special number systems called finite rings, such as Z4 (the integers mod 4). Over rings, vector spaces become modules, and linear codes become ring-linear codes. The Lee distance is often used to measure how far apart codewords are. There is also a special mapping, called the Gray isometry, that connects the Lee distance on Z4 with the ordinary Hamming distance on bits. Some researchers call these ring-linear codes "linear codes" too.
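The Gray map for Z4 sends 0, 1, 2, 3 to the bit pairs 00, 01, 11, 10. The Python sketch below (illustrative only) checks that this map is an isometry: the Lee distance between any two Z4 symbols equals the Hamming distance between their bit pairs:

```python
from itertools import product

GRAY = {0: (0, 0), 1: (0, 1), 2: (1, 1), 3: (1, 0)}  # the Gray map Z4 -> F2^2

def lee_distance(a, b):
    """Lee distance on Z4: the shorter way around the 4-element cycle."""
    return min(abs(a - b), 4 - abs(a - b))

def hamming_distance(u, v):
    return sum(x != y for x, y in zip(u, v))

# The map preserves distances for every pair of Z4 symbols.
print(all(lee_distance(a, b) == hamming_distance(GRAY[a], GRAY[b])
          for a, b in product(range(4), repeat=2)))  # True
```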
This article is a child-friendly adaptation of the Wikipedia article on Linear code, available under CC BY-SA 4.0.