Interpreting Confidence Intervals
Explore how to interpret the effectiveness of confidence intervals by examining whether they capture the true population parameter. This lesson guides you through using bootstrapping and the infer package to generate and evaluate confidence intervals for proportions, reinforcing concepts with clear examples and visualizations.
Now that we’ve shown how to construct confidence intervals using a sample drawn from a population, let’s focus on how to interpret their effectiveness. The effectiveness of a confidence interval is judged by whether or not it contains the true value of the population parameter. Going back to our fishing analogy in the Understanding Confidence Intervals lesson, this is like asking, “Did our net capture the fish?”
So, for example, does our percentile-based confidence interval of (1991, 1999) capture the true mean year?
In order to interpret a confidence interval’s effectiveness, we need to know what the value of the population parameter is. That way we can say whether or not a confidence interval has captured this value.
Let’s revisit our sampling bowl. What proportion of the bowl’s 2,400 balls are red? Let’s compute this:
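A minimal sketch of this computation, assuming the `bowl` data frame from the moderndive package (one row per ball, with a `color` column recording each ball’s color):

```r
library(dplyr)
library(moderndive)

# Compute the true population proportion of red balls in the bowl
bowl %>%
  summarize(
    n_balls = n(),
    prop_red = mean(color == "red")
  )
```

Because `bowl` contains the entire population of 2,400 balls, `prop_red` here is the population parameter itself, not an estimate of it.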
In this case, we know the value of the population parameter: the true proportion of the bowl’s balls that are red.
As we stated, the sampling bowl exercise doesn’t really reflect how sampling is done in real life; it was an idealized activity. In real life, we won’t know the true value of the population parameter, hence the need for estimation.
Let’s now construct confidence intervals for the population proportion of red balls using samples drawn from the bowl.
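One way to do this is with the infer package’s bootstrap workflow. The sketch below, which assumes the moderndive `bowl` data frame and uses an arbitrary seed for reproducibility, draws a single sample of size 50 and computes a 95% percentile-based bootstrap confidence interval for the proportion of red balls:

```r
library(dplyr)
library(moderndive)
library(infer)

# Draw one random sample of size 50 from the bowl
set.seed(76)
one_sample <- bowl %>%
  rep_sample_n(size = 50) %>%
  ungroup()

# Bootstrap percentile confidence interval for the proportion red
percentile_ci <- one_sample %>%
  specify(response = color, success = "red") %>%
  generate(reps = 1000, type = "bootstrap") %>%
  calculate(stat = "prop") %>%
  get_confidence_interval(level = 0.95, type = "percentile")

percentile_ci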
Did the net capture the fish?
Recall that we had 33 groups of friends each take samples of size 50 from the bowl and then compute the sample proportion of red balls in their sample.
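We can simulate this in-class exercise virtually. The sketch below, again assuming the moderndive `bowl` data frame and an arbitrary seed, takes 33 samples of size 50 and computes each group’s sample proportion of red balls:

```r
library(dplyr)
library(moderndive)

# Simulate the 33 groups: 33 random samples of size 50 each
set.seed(76)
samples_33 <- bowl %>%
  rep_sample_n(size = 50, reps = 33)

# One sample proportion of red balls per group (replicate)
props_33 <- samples_33 %>%
  group_by(replicate) %>%
  summarize(prop_red = mean(color == "red"))

props_33
```

Each row of `props_33` plays the role of one group of friends; building a confidence interval around each of these 33 sample proportions lets us count how many of the resulting nets captured the fish.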