Bayes' Theorem is a fundamental mathematical formula used to calculate conditional probability. It describes how to update the probability of a hypothesis based on new evidence. The formula is P(A|B) = P(B|A) × P(A) / P(B): the probability of A given B equals the probability of B given A, times the probability of A, divided by the probability of B.
Each term in Bayes' formula has a specific meaning. P(A|B) is the posterior probability, the quantity we want to find. P(B|A) is the likelihood: the probability of observing evidence B if hypothesis A is true. P(A) is the prior probability: our initial belief about A before seeing the evidence. P(B) is the evidence, or marginal likelihood: the overall probability of observing B regardless of whether A holds.
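The relationship between these four terms can be sketched as a one-line function. This is a minimal illustration, not part of the original lesson; the function and variable names are my own, and the sample numbers are made up purely to show the arithmetic.

```python
def bayes_posterior(prior, likelihood, evidence):
    """Posterior P(A|B) = P(B|A) * P(A) / P(B)."""
    return likelihood * prior / evidence

# Hypothetical numbers: prior P(A) = 0.3, likelihood P(B|A) = 0.8,
# evidence P(B) = 0.5. Posterior = 0.8 * 0.3 / 0.5 = 0.48.
print(bayes_posterior(0.3, 0.8, 0.5))
```

The same three inputs always produce the posterior; the hard part in practice is usually computing the evidence P(B), which the medical example below works through.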
Let's see Bayes' theorem in action with a medical test example. Suppose a disease affects 1% of the population, and we have a test that's 95% accurate, meaning it correctly flags 95% of diseased patients (the sensitivity) and correctly clears 95% of healthy patients (the specificity, so a 5% false positive rate). If a patient tests positive, what's the probability they actually have the disease? Using Bayes' theorem: the prior P(A) is 1%, the likelihood P(B|A) is 95%, and the evidence is P(B) = 0.95 × 0.01 + 0.05 × 0.99 = 5.9%. The result is surprising: the posterior 0.0095 / 0.059 gives only a 16.1% chance the patient actually has the disease!
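The medical test calculation above can be reproduced in a few lines. A minimal sketch, assuming as the example does that the 95% accuracy applies both to diseased patients (sensitivity) and to healthy ones (specificity); the variable names are my own.

```python
prior = 0.01                 # P(disease): 1% of the population
sensitivity = 0.95           # P(positive | disease)
false_positive_rate = 0.05   # P(positive | no disease) = 1 - specificity

# Evidence P(positive) via the law of total probability
evidence = sensitivity * prior + false_positive_rate * (1 - prior)

# Posterior P(disease | positive) by Bayes' theorem
posterior = sensitivity * prior / evidence

print(f"evidence  = {evidence:.3f}")   # 0.059
print(f"posterior = {posterior:.3f}")  # 0.161
```

Notice that the low prior dominates: even a fairly accurate test cannot overcome a 1% base rate in a single positive result.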
This tree diagram visualizes how Bayes' theorem works. We start with a population of 1,000 people: on average, 10 have the disease and 990 don't. When we apply the test, 9.5 of the diseased people test positive, but 49.5 of the healthy people also test positive due to the 5% false positive rate. So out of 59 positive tests in total, only 9.5 are true positives, giving us 9.5 / 59 ≈ 16.1%.
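The tree's expected counts can be checked directly. This sketch just redoes the branch arithmetic from the diagram; fractional people are expected values, not literal counts.

```python
population = 1000
diseased = population * 0.01      # expected diseased: 10
healthy = population - diseased   # expected healthy: 990

true_pos = diseased * 0.95        # diseased who test positive: 9.5
false_pos = healthy * 0.05        # healthy who test positive: 49.5
total_pos = true_pos + false_pos  # all positive tests: 59

# Fraction of positives that are true positives = the posterior
print(true_pos / total_pos)       # ~0.161
```

Counting branches of the tree and dividing gives exactly the same answer as plugging numbers into the formula, which is why the tree is a useful sanity check on the algebra.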
Bayes' theorem has countless applications across many fields. It's used in medical diagnosis, machine learning, spam filtering, weather forecasting, criminal justice, and scientific research. The key insight is that it provides a systematic way to update our beliefs with new evidence. Remember the formula: P(A|B) = P(B|A) × P(A) / P(B). Bayes' theorem is truly the foundation of rational reasoning under uncertainty!