Lecture 19
Duke University
STA 199 Spring 2025
2025-04-01
Go to your ae project in RStudio.
Make sure all of your changes up to this point are committed and pushed, i.e., there’s nothing left in your Git pane.
Click Pull to get today’s application exercise file: ae-15-duke-forest-bootstrap.qmd.
Wait until you’re prompted to work on the application exercise during class before editing the file.
Today: confidence intervals;
Thursday 4/3: hypothesis testing;
Friday 4/4: Milestone 3 (show “signs of life”) due;
Monday 4/7: submit final lab, Midterm 2 review;
Tuesday 4/8: more statistical inference;
Thursday 4/10: midterm 2;
Friday 4/11: submit peer eval 3;
Monday 4/14: turn in take-home, complete Milestone 4;
Tuesday 4/15: more statistical inference
Thursday 4/17: prettifying your projects
Monday 4/21: project work period
Tuesday 4/22: Farewell!
Wednesday 4/23: submit final project
Monday 4/28: submit peer eval 4
Tuesday 4/29: final exam
Over 60% of the final course grade has yet to be counted:
And we drop the lowest lab score, drop the lowest 30% of AE scores, and replace a lower in-class midterm score with a better final exam score.
Find range of plausible values for the slope using bootstrap confidence intervals.
Recall the openintro::loans_full_schema data frame:
each row is an approved loan applicant;
the columns contain financial info about that person, including…
What would you guess is the direction of association between these two variables?
(I just took logs to make the picture prettier.)
As the sample size grew, the best fit line stabilized;
As the sample size grew, the grey uncertainty band shrank;
As the sample size grew, we observed a larger range of income values, and the computer displayed more of the line;
As the sample size grows, the picture the data paint becomes clearer:
Which would you rather have for your data analysis? 5 people in your dataset or 9947? Why?
We do not know what the “true” line is;
Our estimates are a best guess based on noisy, incomplete, imperfect data;
The more data we have, the more “certain” and “reliable” the estimates are;
What do we mean by “uncertainty” here?
Fact: a different dataset yields different estimates;
How much would our estimate vary across alternative datasets?
These tiny datasets can’t even agree on whether the line should slope up or down. Uncertainty is high, hence the wide bands.
If we repeat the process with a larger sample size, things are more stable.
# A tibble: 2 × 2
term estimate
<chr> <dbl>
1 (Intercept) 4.27
2 log_inc 0.553
# A tibble: 2 × 2
term estimate
<chr> <dbl>
1 (Intercept) -4.63
2 log_inc 1.35
# A tibble: 2 × 2
term estimate
<chr> <dbl>
1 (Intercept) -1.14
2 log_inc 1.04
# A tibble: 2 × 2
term estimate
<chr> <dbl>
1 (Intercept) -0.288
2 log_inc 0.960
# A tibble: 2 × 2
term estimate
<chr> <dbl>
1 (Intercept) 4.84
2 log_inc 0.492
# A tibble: 2 × 2
term estimate
<chr> <dbl>
1 (Intercept) 3.20
2 log_inc 0.654
The amount of variation in the histogram tells us something about the uncertainty, and gives us a range of likely values.
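Outputs like the ones above can be reproduced by repeatedly drawing small samples and refitting the line. This is a sketch, not the lecture’s exact code: the column names annual_income and loan_amount from loans_full_schema, the sample sizes, and the log transform are assumptions based on the log_inc variable shown above.

```r
library(tidyverse)
library(openintro)  # loans_full_schema

# Assumed variables: annual_income and loan_amount, logged as in the slides
loans <- loans_full_schema |>
  filter(annual_income > 0, loan_amount > 0) |>
  mutate(log_inc  = log(annual_income),
         log_loan = log(loan_amount))

# Fit a line to one random sample of size n; rerun to watch the estimates move
fit_one_sample <- function(n) {
  samp <- loans |> slice_sample(n = n)
  coef(lm(log_loan ~ log_inc, data = samp))
}

set.seed(1)
fit_one_sample(50)    # small sample: estimates bounce around a lot
fit_one_sample(5000)  # large sample: estimates are much more stable
```

Collecting many such slopes and plotting them gives the histogram of estimates discussed here.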
openintro::duke_forest
Goal: Use the area (in square feet) to understand variability in the price of houses in Duke Forest.
df_fit <- linear_reg() |>
fit(price ~ area, data = duke_forest)
tidy(df_fit) |>
kable(digits = 2) # neatly format table to 2 digits
term | estimate | std.error | statistic | p.value |
---|---|---|---|---|
(Intercept) | 116652.33 | 53302.46 | 2.19 | 0.03 |
area | 159.48 | 18.17 | 8.78 | 0.00 |
For each additional square foot, we expect the sale price of Duke Forest houses to be higher by $159, on average.
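To make that interpretation concrete, plug the estimates from the table above into the fitted equation (the helper function here is illustrative, not from the slides):

```r
# Fitted model from the table: price-hat = 116652.33 + 159.48 * area
intercept <- 116652.33
slope     <- 159.48

predict_price <- function(area) intercept + slope * area

predict_price(2000)                        # predicted price for a 2000 sq ft house, about $435,612
predict_price(2001) - predict_price(2000)  # one extra sq ft raises the prediction by the slope, $159.48
```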
Statistical inference provides methods and tools so we can use a single observed sample to make valid statements (inferences) about the population it comes from.
For our inferences to be valid, the sample should be random and representative of the population we’re interested in
Calculate a confidence interval for the slope, \(\beta_1\) (today)
Conduct a hypothesis test for the slope, \(\beta_1\) (Thursday)
A confidence interval will allow us to make a statement like “For each additional square foot, the model predicts the sale price of Duke Forest houses to be higher, on average, by $159, plus or minus X dollars.”
Should X be $10? $100? $1000?
If we were to take another sample of 98 houses, would we expect the slope calculated from that sample to be exactly $159? Off by $10? $100? $1000?
The answer depends on how variable the sample statistic (the slope) is from one sample to another.
We need a way to quantify the variability of the sample statistic, for estimation and so on and so forth…
Fill in the blank: For each additional square foot, the model predicts the sale price of Duke Forest houses to be higher, on average, by $159, plus or minus ___ dollars.
How confident are you that the true slope is between $0 and $250? How about $150 and $170? How about $90 and $210?
Go to your ae project in RStudio.
If you haven’t yet done so, make sure all of your changes up to this point are committed and pushed, i.e., there’s nothing left in your Git pane.
If you haven’t yet done so, click Pull to get today’s application exercise file: ae-15-duke-forest-bootstrap.qmd.
Work through the application exercise in class, and render, commit, and push your edits.
Calculate the observed slope:
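The interval code later in these slides references an object called observed_fit; a sketch of how it is typically computed with infer’s specify() and fit() (the exact code is assumed, not shown in this handout):

```r
library(infer)
library(openintro)  # duke_forest

# Observed slope and intercept fit on the full sample of 98 houses
observed_fit <- duke_forest |>
  specify(price ~ area) |>
  fit()

observed_fit  # the area estimate should match tidy(df_fit): about 159.48
```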
Take 100 bootstrap samples and fit models to each one:
set.seed(1120)
boot_fits <- duke_forest |>
specify(price ~ area) |>
generate(reps = 100, type = "bootstrap") |>
fit()
boot_fits
# A tibble: 200 × 3
# Groups: replicate [100]
replicate term estimate
<int> <chr> <dbl>
1 1 intercept 47819.
2 1 area 191.
3 2 intercept 144645.
4 2 area 134.
5 3 intercept 114008.
6 3 area 161.
7 4 intercept 100639.
8 4 area 166.
9 5 intercept 215264.
10 5 area 125.
# ℹ 190 more rows
Percentile method: Compute the 95% CI as the middle 95% of the bootstrap distribution:
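In code, this mirrors the 90% and 99% calls shown later, with level = 0.95. The snippet below recomputes observed_fit and boot_fits so it runs on its own; the observed_fit setup is an assumption (specify + fit on the full sample), matching how it is used in those later calls.

```r
library(infer)
library(openintro)  # duke_forest

# Observed slope on the full sample (assumed setup)
observed_fit <- duke_forest |>
  specify(price ~ area) |>
  fit()

# Bootstrap slopes, as on the previous slide
set.seed(1120)
boot_fits <- duke_forest |>
  specify(price ~ area) |>
  generate(reps = 100, type = "bootstrap") |>
  fit()

## confidence level: 95%
get_confidence_interval(
  boot_fits, point_estimate = observed_fit,
  level = 0.95, type = "percentile"
)
```

The resulting interval for area should sit between the 90% and 99% intervals shown in the next code chunks: wider than the 90% one, narrower than the 99% one.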
If we want to be very certain that we capture the population parameter, should we use a wider or a narrower interval? What drawbacks are associated with using a wider interval?
How can we get the best of both worlds – high precision and high accuracy?
How would you modify the following code to calculate a 90% confidence interval? How would you modify it for a 99% confidence interval?
## confidence level: 90%
get_confidence_interval(
boot_fits, point_estimate = observed_fit,
level = 0.90, type = "percentile"
)
# A tibble: 2 × 3
term lower_ci upper_ci
<chr> <dbl> <dbl>
1 area 104. 212.
2 intercept -24380. 256730.
## confidence level: 99%
get_confidence_interval(
boot_fits, point_estimate = observed_fit,
level = 0.99, type = "percentile"
)
# A tibble: 2 × 3
term lower_ci upper_ci
<chr> <dbl> <dbl>
1 area 56.3 226.
2 intercept -61950. 370395.
Population: Complete set of observations of whatever we are studying, e.g., people, tweets, photographs, etc. (population size = \(N\))
Sample: Subset of the population, ideally random and representative (sample size = \(n\))
Sample statistic \(\ne\) population parameter, but if the sample is good, it can be a good estimate
Statistical inference: Discipline that concerns itself with the development of procedures, methods, and theorems that allow us to extract meaning and information from data that has been generated by a stochastic (random) process
We report the estimate with a confidence interval, whose width depends on the variability of the sample statistic across different samples from the population
Since we can’t continue sampling from the population, we bootstrap from the one sample we have to estimate sampling variability
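The whole idea fits in a few lines of base R: resample rows with replacement, refit the line, and take the middle 95% of the resulting slopes. This is a self-contained sketch on simulated data (the true slope of 160 and the sample size of 98 are chosen to echo the Duke Forest example), not the lecture’s infer pipeline.

```r
set.seed(1120)

# Simulated data standing in for duke_forest: 98 houses, true slope 160
n <- 98
x <- runif(n, 1000, 6000)                       # "area" values
y <- 100000 + 160 * x + rnorm(n, sd = 150000)   # "price" values with noise

# One bootstrap replicate = resample row indices with replacement, refit
boot_slopes <- replicate(100, {
  idx <- sample(n, replace = TRUE)
  coef(lm(y[idx] ~ x[idx]))[[2]]  # keep only the slope
})

# Percentile method: the middle 95% of the bootstrap slopes is the 95% CI
quantile(boot_slopes, c(0.025, 0.975))
```

The spread of boot_slopes estimates the sampling variability we cannot observe directly, which is exactly what the bullet above describes.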