I admit that, before teaching this, I had little idea of what bootstrapping was. It seemed a recondite term only used by statistical nerds and computational modelers; and whenever it was mentioned in my presence, I merely nodded and hoped nobody else noticed my burning shame - while in my most private moments I would curse the name of bootstrapping, and shed tears of blood.
However, while I find that the concept of bootstrapping still surpasses all understanding, I now have a faint idea of what it does. And as it has rescued me from the abyss of ignorance and impotent fury, so shall this applet show you the way.
Bootstrapping is a resampling technique that can be used when we want to make few or no parametric assumptions - such as a normally distributed population - or when the sample size is relatively small. (The size of your sample is to be neither a source of pride nor shame. If you have been endowed with a large sample, do not go waving it in the faces of others; likewise, should your sample be small and puny, do not hide it under a bushel.) Say that we have a sample of eight subjects, and we wish to generalize these results to a larger population. Resampling allows us to use any of those subjects in a new sample by randomly sampling with replacement; in other words, we can sample the same subject more than once. If we assume that each original subject was randomly sampled from the population, then each subject can be used as a surrogate for another subject in the population - as if we had randomly sampled again.
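To make the idea concrete, here is a minimal sketch in Python (not the applet's code; the eight values are made up purely for illustration) showing a single resample drawn with replacement from the original sample:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical scores for our eight subjects (made-up numbers).
sample = np.array([2.1, -0.4, 1.3, 0.8, 3.0, -1.2, 0.5, 1.9])

# One bootstrap resample: draw eight values from the original sample
# WITH replacement, so the same subject may appear more than once.
resample = rng.choice(sample, size=sample.size, replace=True)
print(resample)
```

Because the draw is with replacement, a given subject might show up two or three times in the resample, and another might not show up at all - which is exactly the point.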
After doing this resampling with replacement thousands or tens of thousands of times, we can calculate the mean of each resample, plot those means, and see whether the central 95% of them contains or excludes zero - in other words, whether our observed mean is statistically significant or not. (Here I realize that, as we are not calculating a critical value, the usual meaning of a p-value or 95% confidence interval is not entirely accurate; however, for the moment just try to sweep this minor annoyance under the rug. There, all better.)
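A rough sketch of that whole procedure, again in Python with the same made-up data (this is my own illustration, not what the applet runs under the hood), looks something like this:

```python
import numpy as np

rng = np.random.default_rng(0)
sample = np.array([2.1, -0.4, 1.3, 0.8, 3.0, -1.2, 0.5, 1.9])  # made-up data

n_boot = 10_000
# Mean of each of 10,000 resamples drawn with replacement from the original sample.
boot_means = np.array([
    rng.choice(sample, size=sample.size, replace=True).mean()
    for _ in range(n_boot)
])

# Percentile interval: the middle 95% of the bootstrap means.
lo, hi = np.percentile(boot_means, [2.5, 97.5])
print(f"observed mean: {sample.mean():.3f}")
print(f"95% bootstrap interval: [{lo:.3f}, {hi:.3f}]")
print("zero excluded" if (lo > 0 or hi < 0) else "zero included")
```

If the interval excludes zero, you would call the result significant in the loose sense described above; if zero sits inside the interval, you would not.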
The applet can be downloaded here. I have also made a brief tutorial about how to use the applet; if you ever happen to teach this in your own class, just tell the students that if the blue thing is in the gray thing, then your result fails to reach significance; likewise, if the blue thing is outside of the gray thing, then your result is significant, and should be celebrated with a continuous bacchanalia.