Statistical Significance Calculator
Determine whether your A/B test results are statistically significant.
Enter four numbers: the total users who saw variation A, the conversions or goals completed for variation A, the total users who saw variation B, and the conversions or goals completed for variation B.

Frequently Asked Questions
1. What is statistical significance in A/B testing?
Statistical significance helps determine whether the difference in performance between two test variations (commonly referred to as A vs. B) reflects an actual change or just random chance. If a result is statistically significant, you can be more confident the outcome is real and would hold up if the test were repeated.
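Under the hood, calculators like this typically run a two-proportion z-test on the four inputs above. The tool's exact method isn't documented here, so the Python sketch below is an assumption: it pools the two conversion rates under the null hypothesis of no real difference and reports a two-sided p-value.

```python
# Sketch of a two-proportion z-test for A/B results (assumed method,
# not necessarily what this calculator runs). Requires scipy.
from scipy.stats import norm

def ab_test_significance(users_a, conv_a, users_b, conv_b, confidence=0.95):
    """Return the z-score, two-sided p-value, and whether the difference
    between the two conversion rates is significant at `confidence`."""
    p_a = conv_a / users_a
    p_b = conv_b / users_b
    # Pooled conversion rate under the null hypothesis (no real difference).
    p_pool = (conv_a + conv_b) / (users_a + users_b)
    se = (p_pool * (1 - p_pool) * (1 / users_a + 1 / users_b)) ** 0.5
    z = (p_b - p_a) / se
    p_value = 2 * norm.sf(abs(z))  # two-sided p-value
    return z, p_value, p_value < (1 - confidence)

# Example: 5,000 users per variation, 400 vs. 460 conversions.
z, p, significant = ab_test_significance(5000, 400, 5000, 460)
print(f"z = {z:.2f}, p = {p:.4f}, significant: {significant}")
```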
2. What confidence level should I use?
The confidence level reflects how certain you are that your results aren’t due to randomness. 95% is the industry standard, but you can also use 80%, 85%, 90%, or 99% depending on how much risk you're willing to accept.
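Each confidence level corresponds to a significance threshold (alpha, which is 1 minus the confidence level) and a critical z-value the test statistic must exceed. A quick sketch of that mapping, using scipy for the normal quantiles:

```python
# How each confidence level translates to alpha and a two-sided
# critical z-value. Requires scipy.
from scipy.stats import norm

for confidence in (0.80, 0.85, 0.90, 0.95, 0.99):
    alpha = 1 - confidence
    z_crit = norm.ppf(1 - alpha / 2)  # two-sided critical value
    print(f"{confidence:.0%} confidence -> alpha = {alpha:.2f}, |z| > {z_crit:.2f}")
```

At 95% confidence this gives the familiar |z| > 1.96 cutoff; raising the confidence to 99% tightens it to about 2.58, meaning you accept fewer false positives at the cost of needing stronger evidence.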
3. How many conversions do I need for reliable results?
While there’s no universal number, more conversions lead to more reliable outcomes. As a general rule, try to get at least 100 conversions per variation before making major decisions based on test results.
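If you want a rough target before launching a test, a standard power calculation can estimate the sample size needed to detect a given lift. This sketch uses statsmodels, which is separate from the calculator itself; the baseline rate, lift, and 80% power are illustrative assumptions:

```python
# Sketch: estimating users per variation needed to detect a given lift,
# using a standard two-proportion power calculation. Requires statsmodels.
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

baseline = 0.08   # assumed conversion rate of variation A
lift = 0.015      # smallest absolute improvement worth detecting

# Cohen's h effect size for the two proportions (ordered to keep it positive).
effect = proportion_effectsize(baseline + lift, baseline)
n = NormalIndPower().solve_power(effect_size=effect, alpha=0.05,
                                 power=0.8, alternative='two-sided')
print(f"~{n:.0f} users per variation")
```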
4. What does a p-value mean?
A p-value is the probability of seeing a difference at least as large as the one you observed if there were truly no difference between the variations. A lower p-value means stronger evidence. For example, a p-value of 0.03 means there's only a 3% chance that random noise alone would produce a result this extreme; because 0.03 is below 0.05, it is typically considered statistically significant at a 95% confidence level.
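Put together, the decision rule is just a comparison of the p-value against alpha. A minimal sketch using the 0.03 example above:

```python
# A result is called significant when the p-value falls below alpha,
# i.e. below 1 minus the chosen confidence level.
def is_significant(p_value, confidence=0.95):
    return p_value < (1 - confidence)

print(is_significant(0.03))        # True at 95% confidence (0.03 < 0.05)
print(is_significant(0.03, 0.99))  # False at 99% confidence (0.03 >= 0.01)
```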
5. Can I use this tool for email, landing page, or ad tests?
Absolutely! This calculator works for any A/B test where you compare two versions based on the number of visitors and actions (including emails, ads, product pages, CTAs, and more).