Bonus Mode!
Difference: {{ formatDp(diffMean) }}
Is there really a difference?
We have a difference of {{ formatDp(diffMean) }} between the averages of the two catches. It could be that the rivers are the same (a.k.a. the null hypothesis) and we just so happened to get this difference of {{ formatDp(diffMean) }}.
What bootstrapping tries to find out:
What is the probability that we just so happened to get this difference of {{ formatDp(diffMean) }}, assuming the rivers are the same?
Or in more technical terms:
What is the probability of observing a difference at least this large, given that the null hypothesis is true (a.k.a. the p-value)?
If this probability is high, the observed difference is perfectly consistent with the rivers being the same. If this probability is low, a difference this large would rarely arise by chance, so the two rivers are probably significantly different.
Difference: {{ formatDp(diffMeanSample) }}
Rinse and Repeat
We can run this many times and record the difference in sample means for each experiment. Then, we can estimate the probability that the difference of {{ formatDp(diffMean) }} (or more) arose out of pure chance!
If the chance of getting the difference from this combined sampling is high, then the difference is probably not significant. Typically, we define high as more than 5%.
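The rinse-and-repeat loop above can be sketched in a few lines of Python. This is a minimal illustration, not the code behind this explorable: the function name, the sample data, and the choice of 10,000 experiments are all assumptions. We pool the two catches (that is what "assuming the rivers are the same" means computationally), draw new samples with replacement, and count how often the resampled difference in means is at least as large as the one we observed.

```python
import random
from statistics import mean

def bootstrap_p_value(sample_a, sample_b, n_experiments=10_000, seed=0):
    """Estimate the p-value for the observed difference in means,
    under the null hypothesis that both samples come from the
    same distribution (so we are allowed to pool them)."""
    rng = random.Random(seed)
    observed = abs(mean(sample_a) - mean(sample_b))
    pooled = sample_a + sample_b  # "the rivers are the same"
    count = 0
    for _ in range(n_experiments):
        # Resample (with replacement) two new catches of the original sizes.
        resample_a = [rng.choice(pooled) for _ in sample_a]
        resample_b = [rng.choice(pooled) for _ in sample_b]
        # Record whether chance alone produced a difference this big (or more).
        if abs(mean(resample_a) - mean(resample_b)) >= observed:
            count += 1
    return count / n_experiments

# Hypothetical fish weights from two rivers:
river_a = [2.1, 2.4, 1.9, 2.6, 2.2, 2.5, 2.0, 2.3]
river_b = [2.9, 3.1, 2.7, 3.3, 3.0, 2.8, 3.2, 3.4]
p = bootstrap_p_value(river_a, river_b)
print(f"p-value: {p:.4f}")
```

If the printed p-value comes out above 0.05, we would not call the difference significant; below 0.05, we would.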
Summary
And that, in short, is bootstrapping! We want to test if a difference between two sample distributions could happen purely by chance. So...
As thanks for checking this out, here's a bonus version of this explorable that allows you to change the input distributions! Just click the button below and head back up to the first chart.