Probability theory, also called probability calculus, is a part of mathematics that studies how likely different events are to happen. Even though there are many ways to think about probability, probability theory looks at it in a careful and exact way using special rules called axioms. These rules help us describe probability using something called a probability space, which gives each possible outcome a number between 0 and 1, showing how likely it is.
Important ideas in probability theory include random variables, which are values that change in unpredictable ways, and probability distributions, which tell us how probabilities are spread out. It also studies stochastic processes, which are ways that things change over time in an uncertain manner. Even though we can't always predict exactly what will happen, probability theory helps us understand patterns in random events. Two key ideas are the law of large numbers and the central limit theorem, which describe how probabilities behave when we look at many trials or a lot of data.
Because probability theory is a foundation for statistics, it is important for many activities where we need to analyze data. It is also used in areas like statistical mechanics and sequential estimation, where we try to understand complex systems when we only know part of what is happening. In the 20th century, physics made a big discovery that many events at the tiny scales of atoms behave in a probabilistic way, as described by quantum mechanics. This shows just how useful and important probability theory is in understanding the world around us.
History of probability
Main article: History of probability
Probability theory began with people trying to understand games of chance. In the 1500s, Gerolamo Cardano started looking at these games, and later, in the 1600s, Pierre de Fermat and Blaise Pascal worked on solving problems like fair division of prizes in games. By 1657, Christiaan Huygens wrote a book about it, and by the 1800s, Pierre-Simon Laplace helped define what probability means.
Over time, mathematicians began to include more types of events, not just simple, countable ones. In 1933, Andrey Nikolaevich Kolmogorov created a strong mathematical foundation for probability that most people still use today. His work brought together ideas about all possible outcomes and ways to measure how likely they are.
Treatment
Most introductions to probability theory start by looking at events that happen in clear, countable ways, like rolling dice or flipping coins. These are called discrete probability distributions. Other situations, like measuring temperature or height, involve continuous values that can fall anywhere on a scale; these are continuous probability distributions.
Probability helps us understand how likely different outcomes are. For example, when rolling a fair die, there are six possible results. We can assign a number between 0 and 1 to each possible outcome to show how likely it is. The total of all these numbers must equal 1, meaning one of the outcomes is certain to happen. This way, probability gives us a solid mathematical way to predict and study chance events.
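Here is a small Python sketch of the fair die example. It uses exact fractions (instead of rounded decimals) so the probabilities add up to exactly 1:

```python
from fractions import Fraction

# A fair die: each of the six faces gets the same probability, 1/6.
die = {face: Fraction(1, 6) for face in range(1, 7)}

# The probabilities of all outcomes must add up to 1,
# meaning one of the outcomes is certain to happen.
total = sum(die.values())
print(total)  # exactly 1

# Probabilities of events combine by adding: the chance of
# rolling an even number is 1/6 + 1/6 + 1/6 = 1/2.
even = sum(die[face] for face in (2, 4, 6))
print(even)  # exactly 1/2
```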
Main article: Discrete probability distribution
Main article: Continuous probability distribution
Classical probability distributions
Main article: Probability distributions
Some patterns of chance or randomness happen very often in nature and life, so mathematicians have studied them closely. These patterns are called probability distributions. There are two main types: discrete distributions, where outcomes can only take certain separate values, and continuous distributions, where outcomes can take any value within a range.
Important discrete distributions include the discrete uniform, Bernoulli, binomial, negative binomial, Poisson, and geometric distributions. Key continuous distributions are the continuous uniform, normal, exponential, gamma, and beta distributions. These help us understand and predict many random events in the world.
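As a small illustration, Python's standard `random` module can draw samples from several of these classical distributions. The names and numbers below (a success chance of 0.3, ten trials, and so on) are just example choices, not anything fixed by the theory:

```python
import random

random.seed(0)  # fixed seed so the sketch gives the same draws every run

# Discrete: one Bernoulli trial -- success (1) with probability p, else 0.
p = 0.3
bernoulli = 1 if random.random() < p else 0

# Discrete: a binomial count is simply the number of successes
# in n independent Bernoulli trials.
n = 10
binomial = sum(1 for _ in range(n) if random.random() < p)

# Continuous: draws from uniform, normal, and exponential distributions.
u = random.uniform(0.0, 1.0)   # any value between 0 and 1 is equally likely
g = random.gauss(0.0, 1.0)     # normal ("bell curve") with mean 0, spread 1
e = random.expovariate(1.0)    # exponential with rate 1 (always non-negative)
```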
Convergence of random variables
Main article: Convergence of random variables
Probability theory also describes how a sequence of random results can settle toward a fixed value. The different ways of settling are called convergence. There are three main types:
- Weak convergence: This means the overall pattern (the distribution) of the results gets closer and closer to a fixed pattern as you look at more tries.
- Convergence in probability: This means that the chance of the results being far from the value gets smaller and smaller as you do more and more tests.
- Strong convergence: This is the strongest form, meaning the results will almost surely settle down to the value in the long run.
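Convergence in probability can be sketched with a quick simulation. The experiment below (averaging fair coin flips, a cutoff of 0.05, and 200 repeated runs) is just an illustrative setup: straying far from 0.5 is common with few flips and rare with many:

```python
import random

random.seed(1)

def mean_of_flips(n):
    # Average of n fair coin flips (1 = heads, 0 = tails).
    return sum(random.randint(0, 1) for _ in range(n)) / n

def stray_fraction(n, runs=200):
    # In what fraction of runs does the average stray more
    # than 0.05 away from the true value 0.5?
    return sum(abs(mean_of_flips(n) - 0.5) > 0.05 for _ in range(runs)) / runs

few = stray_fraction(20)     # straying is common with only 20 flips
many = stray_fraction(2000)  # and very rare with 2000 flips
```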
Law of large numbers
Main article: Law of large numbers
Imagine flipping a fair coin many times. You'd expect about half the flips to land on heads and half on tails. The law of large numbers explains this idea mathematically. It says that if you repeat an experiment many times, the average result will get closer to the expected value. For example, if you flip a coin many times, the proportion of heads will get closer to 50%.
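A short simulation shows the law of large numbers in action. The sample sizes here (100 and 10,000 flips) are arbitrary example choices:

```python
import random

random.seed(42)

# Simulate 10,000 fair coin flips (1 = heads, 0 = tails).
flips = [random.randint(0, 1) for _ in range(10_000)]

# The proportion of heads moves toward 0.5 as flips accumulate.
after_100 = sum(flips[:100]) / 100
after_10000 = sum(flips) / 10_000
print(after_100, after_10000)
```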
Central limit theorem
Main article: Central limit theorem
The central limit theorem is a big idea in math. It says that if you take many independent random values and find their average, that average will follow a special pattern called a normal distribution, no matter what pattern the original values followed (as long as their spread isn't infinite). This helps explain why we see bell-shaped curves in many natural things, like test scores or heights.
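This can be sketched by averaging die rolls. A single roll is flat (every face equally likely), yet the averages cluster in a bell shape around the expected value 3.5. The sample sizes (50 rolls per average, 1,000 averages) are just example choices:

```python
import random
import statistics

random.seed(7)

# Each sample is the average of 50 fair die rolls.
# The expected value of one roll is (1+2+3+4+5+6)/6 = 3.5.
averages = [statistics.mean(random.randint(1, 6) for _ in range(50))
            for _ in range(1000)]

# The averages pile up in a bell shape around 3.5, even though
# a single die roll is uniform, not bell-shaped.
center = statistics.mean(averages)
spread = statistics.stdev(averages)  # roughly 1.71 / sqrt(50), about 0.24
print(center, spread)
```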
This article is a child-friendly adaptation of the Wikipedia article on Probability theory, available under CC BY-SA 4.0.