Chapter 6: Binomial Distribution

1. What is the Binomial distribution really?

The binomial distribution answers a very simple but extremely common question:

“If I repeat the same yes/no (success/failure) experiment n times independently, and each trial has the same probability p of success, what is the probability of getting exactly k successes?”

That’s it.

Key ingredients (you must remember these four):

  • n = number of independent trials / attempts / repetitions
  • p = probability of “success” on each trial (0 ≤ p ≤ 1)
  • k = number of successes we are interested in (k = 0, 1, 2, …, n)
  • Each trial must be independent and have exactly two possible outcomes (success / failure)

2. Classic everyday examples

| Example | n (trials) | p (success probability) | What k means |
|---|---|---|---|
| Flipping a fair coin 20 times | 20 | 0.5 | Number of heads |
| Clicking “buy now” on a website | 1000 | 0.024 | Number of purchases |
| Testing 50 light bulbs | 50 | 0.03 | Number of defective bulbs |
| Sending 200 emails in a campaign | 200 | 0.18 | Number of people who open the email |
| Shooting 10 free throws | 10 | 0.72 | Number of successful shots |

3. Generating binomial data in NumPy

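Here is a minimal sketch of how this looks in practice (the seed, the coin-flip parameters, and the number of simulations are illustrative choices, reusing the fair-coin example from the table above):

```python
import numpy as np

np.random.seed(42)  # seed for reproducibility (illustrative choice)

# One experiment: flip a fair coin 20 times and count the heads
heads = np.random.binomial(n=20, p=0.5)   # a single integer in 0..20

# Repeat the whole 20-flip experiment 10,000 times
samples = np.random.binomial(n=20, p=0.5, size=10_000)

print(samples[:10])     # ten simulated head counts
print(samples.mean())   # close to n * p = 10
```

Each call with `size=` gives you one number per *experiment* (not per flip), which is exactly the shape of data you want for plotting and summary statistics.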

4. Visualizing binomial distributions (very important)

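One way to see the shapes described below is to plot histograms for several values of p side by side (a sketch; n, the p values, and the output filename are illustrative assumptions):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")          # non-interactive backend so the script runs anywhere
import matplotlib.pyplot as plt

np.random.seed(0)
n, size = 20, 10_000

fig, axes = plt.subplots(1, 3, figsize=(12, 3), sharey=True)
for ax, p in zip(axes, [0.1, 0.5, 0.9]):   # small, balanced, and large p
    samples = np.random.binomial(n, p, size=size)
    # one bin per possible value of k, centered on the integers
    ax.hist(samples, bins=np.arange(n + 2) - 0.5, density=True)
    ax.set_title(f"n={n}, p={p}")
    ax.set_xlabel("k (successes)")
axes[0].set_ylabel("relative frequency")
fig.savefig("binomial_shapes.png")   # hypothetical output filename
```

Try re-running with n = 100: even the skewed panels become noticeably more symmetric.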

What you should notice:

  • When p = 0.5 → symmetric bell shape
  • When p is small (e.g. 0.1) → skewed right (most values near 0)
  • When p is large (e.g. 0.9) → skewed left
  • As n increases → shape becomes more symmetric and bell-like (→ approaches normal!)

5. Expected value & variance – very important formulas

Expected number of successes (mean): E[k] = n × p

Variance: Var(k) = n × p × (1-p)

Standard deviation: σ = √(n × p × (1-p))

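A short sketch that applies these formulas and checks them against a simulation (the scenario of 100 visitors converting with probability 0.08 is an illustrative assumption, chosen so the mean is 8):

```python
import numpy as np

# Illustrative assumption: 100 visitors, each converting with probability 0.08
n, p = 100, 0.08

mean = n * p                      # expected conversions: 8
var = n * p * (1 - p)             # variance: 7.36
sd = np.sqrt(var)                 # standard deviation: about 2.7

print(f"mean = {mean:.1f}")       # 8.0
print(f"sd   = {sd:.2f}")         # 2.71

# Sanity check: simulate 100,000 such days and compare
np.random.seed(1)
samples = np.random.binomial(n, p, size=100_000)
print(samples.mean(), samples.std())   # both close to the formulas above
```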

→ Most of the time you will see roughly 8 ± 3 conversions (mean ± 1 sd)

6. Realistic examples you will actually use

Example 1 – A/B test simulation

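A sketch of the idea: simulate both variants as binomial draws and ask how often the better variant actually wins at this sample size (the visitor count and the two conversion rates are hypothetical values):

```python
import numpy as np

np.random.seed(7)

# Hypothetical A/B test: 5,000 visitors per variant
n_visitors = 5_000
p_a, p_b = 0.040, 0.046          # assumed true conversion rates (B is better)

# One simulated day: conversions for each variant
conv_a = np.random.binomial(n_visitors, p_a)
conv_b = np.random.binomial(n_visitors, p_b)
print(conv_a, conv_b)

# Repeat the whole experiment 10,000 times:
# how often does B beat A purely by the numbers at this sample size?
a = np.random.binomial(n_visitors, p_a, size=10_000)
b = np.random.binomial(n_visitors, p_b, size=10_000)
print((b > a).mean())            # fraction of experiments where B wins
```

If that fraction is well below 1, the test is underpowered: a truly better variant will sometimes lose just from sampling noise.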

Example 2 – Quality control

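A sketch using the light-bulb numbers from the table above (50 bulbs, 3% defect rate); the exact zero-defect probability comes straight from the definition, and the simulation should agree with it:

```python
import numpy as np

np.random.seed(3)

n_bulbs, p_defect = 50, 0.03     # values from the table above

# Exact: probability a batch of 50 has zero defects = (1 - p)^n
p_zero = (1 - p_defect) ** n_bulbs
print(f"P(no defects) = {p_zero:.3f}")     # about 0.218

# Simulation: defect counts across 100,000 batches
defects = np.random.binomial(n_bulbs, p_defect, size=100_000)
print((defects == 0).mean())               # should agree with p_zero
print((defects > 3).mean())                # share of batches with more than 3 defects
```

Note that even with only a 3% defect rate, nearly 4 out of 5 batches contain at least one defective bulb.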

Example 3 – Email campaign planning

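A planning sketch using the email numbers from the table above (200 emails, 18% open rate); the "how many to send" step is a simple inversion of E[k] = n × p, and the target of 100 opens is an illustrative assumption:

```python
import numpy as np

np.random.seed(11)

n_emails, p_open = 200, 0.18     # values from the table above

expected_opens = n_emails * p_open                    # 36
sd_opens = np.sqrt(n_emails * p_open * (1 - p_open))  # about 5.4
print(expected_opens, sd_opens)

# Planning question: how many emails must we send to *expect*
# at least 100 opens?  Invert E[k] = n * p:
target_opens = 100
n_needed = int(np.ceil(target_opens / p_open))
print(n_needed)                  # 556

# Simulate to see the spread around that target
opens = np.random.binomial(n_needed, p_open, size=10_000)
print((opens >= 100).mean())     # fraction of campaigns actually hitting 100+
```

The last number is the catch: sending the "expected" amount only hits the target about half the time, so real planning adds a safety margin of a couple of standard deviations.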

Summary – Binomial Distribution Cheat Sheet

| Property | Value / Formula |
|---|---|
| Number of trials | n (fixed) |
| Success probability | p (same for every trial) |
| Possible outcomes | k = 0, 1, 2, …, n |
| Expected value (mean) | n × p |
| Variance | n × p × (1-p) |
| Standard deviation | √(n × p × (1-p)) |
| NumPy function | np.random.binomial(n, p, size=…) |
| Shape when p ≈ 0.5 | Symmetric (bell-like for large n) |
| Shape when p ≪ 0.5 | Right-skewed |
| Shape when p ≫ 0.5 | Left-skewed |
| Approximation for large n | Normal distribution (Central Limit Theorem) |

Final teacher messages

  1. Whenever you have “number of successes in fixed number of yes/no trials” → think binomial.
  2. When n is large and p is not too close to 0 or 1 (a common rule of thumb: n×p ≥ 5 and n×(1-p) ≥ 5) → binomial looks very much like normal → you can often use the normal approximation.
  3. Binomial + Poisson connection — when n is very large and p is very small (n×p = λ fixed) → binomial ≈ Poisson.

Would you like to continue with any of these next?

  • How binomial becomes Poisson (rare events limit)
  • How binomial becomes normal (large n)
  • Binomial confidence intervals (real A/B testing)
  • Comparing binomial simulations vs theoretical probabilities
  • Realistic mini-project: simulate A/B test + power analysis

Just tell me what feels most interesting or useful right now! 😊
