Free Sample Size Calculator

Calculate how many visitors you need for a statistically valid A/B test.

Calculator inputs:

- Baseline conversion rate: your current conversion rate for the control variant
- Minimum detectable effect (MDE): the smallest relative improvement you want to detect
- Significance level: the confidence level (probability of avoiding a false positive)
- Statistical power: the probability of detecting a real effect when it exists

Sample Size Per Variant: 31,232 visitors needed in each group
Total Sample Size: 62,464 visitors across both variants

Optional: your total daily visitors, used to estimate test duration.

Sample Size by Minimum Detectable Effect

How sample size changes as you vary the MDE, with a 5% baseline conversion rate.

MDE (Relative) | Per Variant | Total
1%             | 2,996,727   | 5,993,454
2%             | 752,708     | 1,505,416
3%             | 336,103     | 672,206
5%             | 122,123     | 244,246
8%             | 48,362      | 96,724
10% (current)  | 31,232      | 62,464
15%            | 14,191      | 28,382
20%            | 8,156       | 16,312
25%            | 5,330       | 10,660
30%            | 3,778       | 7,556
40%            | 2,210       | 4,420
50%            | 1,468       | 2,936

How to Use the Sample Size Calculator

Enter your current (baseline) conversion rate and the minimum detectable effect (MDE) — the smallest relative improvement you want to be able to detect. For example, if your conversion rate is 5% and you set an MDE of 10%, you're testing whether a variant can move the rate from 5% to 5.5%. Choose your significance level (95% is the industry standard) and statistical power (80% is typical). The calculator gives you the number of visitors needed per variant and the total across both variants. Optionally, enter your daily traffic to see how many days the test will need to run. If the duration exceeds 4 weeks, consider increasing your MDE or focusing on higher-traffic pages.
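If you want to reproduce the calculation yourself, here is a minimal sketch using the standard two-proportion z-test approximation (the usual formula behind calculators like this one; exact results may differ from the calculator by a visitor or two depending on rounding conventions):

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_variant(baseline, mde_relative, alpha=0.05, power=0.80):
    """Approximate visitors per variant for a two-proportion z-test.

    baseline:     control conversion rate, e.g. 0.05 for 5%
    mde_relative: smallest relative lift to detect, e.g. 0.10 for 10%
    """
    p1 = baseline
    p2 = baseline * (1 + mde_relative)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance
    z_beta = NormalDist().inv_cdf(power)           # statistical power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2)

n = sample_size_per_variant(0.05, 0.10)  # 5% baseline, 10% relative MDE
print(n)      # ~31,231 per variant (the calculator shows 31,232)
print(2 * n)  # total across both variants
```

The same function reproduces the other rows of the table above to within a visitor or two, e.g. a 20% MDE yields roughly 8,155 per variant.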

What Is A/B Test Sample Size?

Sample size is the number of visitors each variant in your A/B test needs before you can draw a statistically reliable conclusion. Running a test with too few visitors leads to unreliable results — you might declare a winner that isn't actually better, or miss a real improvement. The required sample size depends on four factors: your baseline conversion rate, the minimum effect you want to detect (MDE), the significance level (how confident you want to be that the result isn't due to chance), and statistical power (the probability of detecting a real effect). Lower baseline conversion rates and smaller MDEs require larger sample sizes. That is why detecting a 5% relative lift on a 0.5% checkout conversion rate demands an enormous sample (over a hundred million visitors per variant), while detecting a 20% lift on a 30% email signup rate needs only about a thousand per variant.

Know your sample size. Track results with EasyFunnel.

Start Free

Frequently Asked Questions

Why does a smaller MDE require a much larger sample size?

Smaller effects are harder to distinguish from random noise. If you want to detect a 1% relative lift instead of a 20% lift, you need far more data to be confident the difference is real and not just statistical fluctuation. The sample size grows roughly as the inverse square of the MDE.
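You can verify the inverse-square relationship directly from the per-variant figures in the table above (5% baseline): each halving of the MDE multiplies the required sample by roughly four.

```python
# Per-variant sample sizes taken from the table above (5% baseline rate)
n = {0.05: 122_123, 0.10: 31_232, 0.20: 8_156}

# n grows roughly as 1 / MDE^2, so halving the MDE ~quadruples the sample
print(round(n[0.05] / n[0.10], 2))  # ~3.91
print(round(n[0.10] / n[0.20], 2))  # ~3.83
```

The ratios fall slightly short of exactly 4 because the variance term also shifts a little as the target rate moves.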

What significance level should I use for my A/B test?

The industry standard is 95% (alpha = 0.05), which means there is a 5% chance of a false positive. Use 90% for exploratory tests where speed matters more than precision. Use 99% for high-stakes decisions like pricing changes where a wrong call is costly.

Can I stop my A/B test early if results look significant?

No. Peeking at results and stopping early inflates your false positive rate — a phenomenon called "peeking bias." If you want to evaluate results before the full sample is collected, use a sequential testing method designed for early stopping.

What happens if I run a test with too few visitors?

An underpowered test is likely to miss real effects (false negatives). And when it does produce a statistically significant result, that result is less trustworthy: the measured effect size tends to be exaggerated by noise, and the "winner" often won't replicate. Always calculate your sample size before starting a test.

How long should I run my A/B test?

Run it until you reach the required sample size, but also for at least one full business cycle (typically one or two weeks) to account for day-of-week effects. Tests shorter than a week can be biased by traffic patterns. Tests longer than 4-6 weeks risk cookie expiration and external factors contaminating results.
