Lab 5

Visualize, model, interpret

Lab
Monday March 24 at 8:30 am

Introduction

In this lab you’ll start your practice of statistical modeling. You’ll fit models, interpret model output, and make decisions about your data and research question based on the model results.

Note

This lab assumes you’ve completed the labs so far and doesn’t repeat setup and overview content from those labs. If you haven’t done those yet, you should review the previous labs before starting on this one.

Learning objectives

By the end of the lab, you will…

  • Fit linear regression models and interpret model coefficients in the context of the data and research question;
  • Transform data using a log-transformation for a better model fit.

And, as usual, you will also…

  • Get more experience with the data science workflow using R, RStudio, Git, and GitHub
  • Further your reproducible authoring skills with Quarto
  • Improve your familiarity with version control using Git and GitHub

Getting started

Log in to RStudio, clone your lab-5 repo from GitHub, open your lab-5.qmd document, and get started!

Step 1: Log in to RStudio

  • Go to https://cmgr.oit.duke.edu/containers and log in with your Duke NetID and Password.
  • Click STA198-199 under My reservations to log into your container. You should now see the RStudio environment.

Step 2: Clone the repo & start a new RStudio project

  • Go to the course organization at github.com/sta199-s25 on GitHub. Click on the repo with the prefix lab-5. It contains the starter documents you need to complete the lab.

  • Click on the green CODE button and select Use SSH. This might already be selected by default; if it is, you’ll see the text Clone with SSH. Click on the clipboard icon to copy the repo URL.

  • In RStudio, go to File → New Project → Version Control → Git.

  • Copy and paste the URL of your assignment repo into the dialog box Repository URL. Again, please make sure to have SSH highlighted under Clone when you copy the address.

  • Click Create Project, and the files from your GitHub repo will be displayed in the Files pane in RStudio.

  • Click lab-5.qmd to open the template Quarto file. This is where you will write up your code and narrative for the lab.

Step 3: Update the YAML

In lab-5.qmd, update the author field to your name, render your document and examine the changes. Then, in the Git pane, click on Diff to view your changes, add a commit message (e.g., “Added author name”), and click Commit. Then, push the changes to your GitHub repository, and in your browser confirm that these changes have indeed propagated to your repository.

Important

If you run into any issues with the first steps outlined above, flag a TA for help before proceeding.

Packages

In this lab, we will work with the

  • tidyverse package for doing data analysis in a “tidy” way;
  • tidymodels package for modeling in a “tidy” way.

Run the code cell labeled load-packages by clicking on its green triangle (play) button. This loads the packages so that their features (the functions and datasets in them) are accessible from your Console. Then, render the document so that these features are also available to the other code cells in your Quarto document.
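The load-packages cell presumably contains something like this minimal sketch:

# Load the packages used throughout this lab
library(tidyverse)   # data wrangling and visualization
library(tidymodels)  # tidy modeling tools (fit, tidy, glance)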

Guidelines

As we’ve discussed in lecture, your plots should include an informative title, axes and legends should have human-readable labels, and careful consideration should be given to aesthetic choices.

Additionally, code should follow the tidyverse style. Particularly,

  • there should be spaces before and line breaks after each + when building a ggplot,

  • there should also be spaces before and line breaks after each |> in a data transformation pipeline,

  • code should be properly indented,

  • there should be spaces around = signs and spaces after commas.

Furthermore, all code should be visible in the PDF output, i.e., should not run off the page on the PDF. Long lines that run off the page should be split across multiple lines with line breaks.
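For example, a hypothetical pipeline and plot following these style rules might look like this (the data frame and variable names are illustrative, not from the lab data):

# Illustrative only: summarize a data frame, then plot the summary
plot_data <- my_data |>
  filter(!is.na(value)) |>
  group_by(group) |>
  summarize(mean_value = mean(value))

ggplot(plot_data, aes(x = group, y = mean_value)) +
  geom_col() +
  labs(
    title = "Mean value by group",
    x = "Group",
    y = "Mean value"
  )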

Important

Continuing to develop a sound workflow for reproducible data analysis is important as you complete the lab and other assignments in this course. This assignment includes periodic reminders to render, commit, and push your changes to GitHub. You should have at least 3 commits with meaningful commit messages by the end of the assignment.

Important

You are also expected to pay attention to code smell in addition to code style and readability. You should review and improve your code to avoid redundant steps (e.g., grouping, ungrouping, and grouping again by the same variable in a pipeline), inconsistent syntax (e.g., using ! to say “not” in one place and - in another), etc.

Part 1 - Minimum wage

Should the minimum wage be increased? I’m sure many of you have an opinion about this. When debating this policy issue, one thing we need to understand is how changes in the minimum wage affect employment. When the minimum wage is raised, does it create jobs? Does it put people out of work? Does it have no effect at all? An ECON 101 model of the labor market implies that minimum wage increases could put people out of work because they make it more expensive for firms to employ workers, and we tend to do less of something if it becomes more expensive (demand curves slope down). That’s a theoretical argument anyway, but this is ultimately an empirical question – one that has a long history in applied economics.

Indeed, the 2021 Nobel Prize in Economic Sciences was shared by economist David Card, in part for a famous paper he wrote with Alan Krueger that initiated the modern empirical literature on the minimum wage:

Card, David and Alan Krueger (1994): “Minimum Wages and Employment: A Case Study of the Fast-Food Industry in New Jersey and Pennsylvania,” American Economic Review, Vol. 84, No. 4., pp. 772-793.

On April 1, 1992, New Jersey’s minimum wage rose from $4.25 to $5.05 per hour. Pennsylvania’s did not. In order to assess the impact of this change, Card and Krueger collected data from fast-food restaurants along the PA/NJ border, both before the policy change went into effect (February 1992) and after (November/December 1992). They figured that restaurants close to the border were probably indistinguishable in terms of their features, practices, clientele, work force, etc., and so if there were any noticeable differences in employment after the wage hike, they must be due to the wage hike itself and not some other confounding factor. This is what we call a natural experiment. The data in Card and Krueger are purely observational, but the main idea is that the arbitrary placement of otherwise similar restaurants on either side of the PA/NJ border acts as if a controlled, randomized experiment were performed, and so we can use these data to draw causal conclusions about the impact of minimum wage policy on employment. As you can imagine, a massive literature has emerged where statisticians and economists argue about when this is appropriate and how it should be done, but nevertheless, researchers nowadays love to sniff around like truffle pigs for cute natural experiments hidden in otherwise messy observational data sets.

In the first part of this lab, you will play around with the original Card and Krueger data:

card_krueger <- read_csv("data/card-krueger.csv")

glimpse(card_krueger)
Rows: 820
Columns: 5
$ id    <dbl> 46, 46, 49, 49, 506, 506, 56, 56, 61, 61, 62, 62, 445, 445, 451,…
$ state <chr> "PA", "PA", "PA", "PA", "PA", "PA", "PA", "PA", "PA", "PA", "PA"…
$ time  <chr> "before", "after", "before", "after", "before", "after", "before…
$ wage  <dbl> NA, 4.30, NA, 4.45, NA, 5.00, 5.00, 5.25, 5.50, 4.75, 5.00, NA, …
$ fte   <dbl> 40.50, 24.00, 13.75, 11.50, 8.50, 10.50, 34.00, 20.00, 24.00, 35…

There are five columns:

  • id: a unique identifier for each restaurant;
  • state: which state the restaurant is in ("PA" or "NJ");
  • time: whether the measurement was collected before or after the NJ minimum wage increase;
  • wage: the starting wage in US dollars;
  • fte: full-time-equivalent employment, calculated as the number of full-time workers (including managers) plus 0.5 times the number of part-time workers. For example, a restaurant with 10 full-time workers and 6 part-time workers has an fte of 10 + 0.5 × 6 = 13.

The full dataset is available on David Card’s website, and it contains more information than this if you want to keep playing.

Question 0

Acquire some domain knowledge! As we know, it’s never a good idea to blunder into a data analysis without some subject-matter expertise, so here is some suggested reading:

  • The original Card and Krueger paper is here. Their bottom line was “[w]e find no indication that the rise in the minimum wage reduced employment,” which runs counter to the usual ECON 101 story. People have been arguing about this ever since;
  • This recent survey article attempts to summarize the state of the literature on (dis?)employment effects of the minimum wage;
  • The award citation for Card’s Nobel has useful summaries: popular, advanced.

We are not grading this, and we’ll never know if you did it or not, but you should definitely go exploring if this area interests you. Furthermore, while you may not do the reading we are suggesting here, you should definitely do a little outside reading in the domain relevant to your final project.

Question 1

  1. Relevel the state and time variables so that "PA" and "before" are the baselines, respectively (see the sketch after this list).

  2. How many restaurants were sampled in each state?

  3. Compute the median wage and the median employment in each state before and after the policy change.

  4. Create a faceted density plot displaying wage according to both state and time. We want two panels stacked vertically, one for each time period (before and after the policy change). Within each panel, we want two densities, one for each state. Comment on any patterns you notice.

  5. Create a faceted density plot displaying fte according to both state and time (similar to part 4). Comment on any patterns you notice.
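A minimal sketch of the releveling (part 1) and faceting (parts 4-5) patterns, assuming the card_krueger data frame from above; titles, labels, and other aesthetic choices are left to you:

# Part 1: make "PA" and "before" the baseline (first) levels
card_krueger <- card_krueger |>
  mutate(
    state = fct_relevel(state, "PA"),
    time = fct_relevel(time, "before")
  )

# Parts 4-5: one density per state, one panel per time period, stacked vertically
ggplot(card_krueger, aes(x = wage, fill = state)) +
  geom_density(alpha = 0.5) +
  facet_wrap(~ time, ncol = 1)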

Question 2

  1. Use pivot_wider to create a new data frame card_krueger_wide that has six columns: id, state, wage_before, wage_after, fte_before, and fte_after. (A sketch of the full pipeline for this question appears after the quote below.)

  2. Modify card_krueger_wide by discarding any rows that have a missing value in any of the four columns wage_before, wage_after, fte_before, and fte_after.

  3. Add a new variable emp_diff to card_krueger_wide which measures the change in fte after the new law took effect.

  4. Add a new variable gap to card_krueger_wide which is constructed in the following way:

  • gap equals zero for stores in Pennsylvania;
  • gap equals zero for stores in New Jersey whose starting wage before the policy change was already higher than the new minimum;
  • gap equals \((5.05 - \text{wage}_{\text{before}})/\text{wage}_{\text{before}}\) for all other stores in New Jersey.

Card and Krueger introduced gap as an alternative measure of the impact of the minimum wage at each store. In their words:

\(\text{GAP}_i\) is the proportional increase in wages at store \(i\) necessary to meet the new minimum rate. Variation in \(\text{GAP}_i\) reflects both the New Jersey-Pennsylvania contrast and differences within New Jersey based on reported starting wages in wave 1. Indeed, the value of \(\text{GAP}_i\) is a strong predictor of the actual proportional wage change between waves 1 and 2 (\(R^2 = 0.75\)), and conditional on \(\text{GAP}_i\), there is no difference in wage behavior between stores in New Jersey and Pennsylvania.
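A minimal sketch of the pipeline for this question, assuming the column naming that pivot_wider produces by default:

# Part 1: one row per restaurant, before/after columns for wage and fte
card_krueger_wide <- card_krueger |>
  pivot_wider(
    names_from = time,
    values_from = c(wage, fte)
  )

# Parts 2-4: drop incomplete rows, then add emp_diff and gap
card_krueger_wide <- card_krueger_wide |>
  drop_na(wage_before, wage_after, fte_before, fte_after) |>
  mutate(
    emp_diff = fte_after - fte_before,
    gap = case_when(
      state == "PA" ~ 0,
      wage_before >= 5.05 ~ 0,
      TRUE ~ (5.05 - wage_before) / wage_before
    )
  )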

Question 3

  1. Create side-by-side boxplots of emp_diff for each state.

  2. Fit a linear model that predicts emp_diff from state and save the model object. Then, provide the tidy summary output (see the sketch after this list).

  3. Write the estimated least squares regression line below using proper notation.

  4. Interpret the intercept in the context of the data and the research question. Is the intercept meaningful in this context? Why or why not?

  5. Interpret the slope in the context of the data and the research question.
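A minimal sketch of the tidymodels fitting pattern for part 2 (the object name emp_diff_fit is hypothetical):

# Fit a linear model of emp_diff on state and display a tidy summary
emp_diff_fit <- linear_reg() |>
  fit(emp_diff ~ state, data = card_krueger_wide)

tidy(emp_diff_fit)

For part 3, the estimated line then takes the form \(\widehat{\text{emp\_diff}} = b_0 + b_1 \times \text{state}_{NJ}\), with the estimates \(b_0\) and \(b_1\) read off the tidy output.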

Question 4

  1. Create a scatter plot of gap versus emp_diff.

  2. Fit a linear model that predicts emp_diff from gap and save the model object. Then, provide the tidy summary output.

  3. Write the estimated least squares regression line below using proper notation.

  4. Interpret the intercept in the context of the data and the research question. Is the intercept meaningful in this context? Why or why not?

  5. Interpret the slope in the context of the data and the research question.

Question 5

Card and Krueger tried to argue that, even though their data were observational, they had nevertheless identified a clean natural experiment that permitted them to ascribe a causal interpretation to the results of their regression analysis. Do you agree? Do you see any potential dangers with this approach? Write a paragraph or two discussing. Note that we are only grading this on a good faith completion effort, but again, if this area interests you, take the opportunity to do some of the reading under Question 0, and then try to write something interesting.

Part 2 - Parasites

Parasites can cause infectious disease – but not all animals are affected by the same parasites. Some parasites are present in a multitude of species and others are confined to a single host. It is hypothesized that closely related hosts are more likely to share the same parasites. More specifically, it is thought that closely related hosts will live in similar environments and have similar genetic makeup that coincides with optimal conditions for the same parasite to flourish.

In this part of the lab, we will see how well evolutionary history predicts parasite similarity.

The dataset comes from an Ecology Letters paper by Cooper et al. (2012) entitled “Phylogenetic host specificity and understanding parasite sharing in primates” located here. The goal of the paper was to identify the ability of evolutionary history and ecological traits to characterize parasite host specificity.

Each row of the data contains two species, species1 and species2.

Subsequent columns describe metrics that compare the species:

  • divergence_time: how many millions of years ago the two species diverged, i.e., how many million years ago they were the same species;

  • distance: geodesic distance between the two species’ geographic range centroids (in kilometers);

  • BMdiff: difference in body mass between the two species (in grams);

  • precdiff: difference in mean annual precipitation across the two species’ geographic ranges (in mm);

  • parsim: a measure of parasite similarity (the proportion of parasites shared between the two species, ranging from 0 to 1).

The data are available in parasites.csv in your data folder.

Question 6

Let’s start by reading in the parasites data and examining the relationship between divergence_time and parsim.

  1. Load the data and save the data frame as parasites.

  2. Based on the goals of the analysis, what is the response variable?

  3. Visualize the relationship between the two variables.

  4. Use the visualization to describe the relationship between the two variables.

Question 7

Next, model this relationship.

  1. Fit the model and write the estimated regression equation.

  2. Interpret the slope and the intercept in the context of the data.

  3. Recreate the visualization from Question 6, this time adding a regression line to the visualization (see the sketch after this list).

  4. What do you notice about the prediction (regression) line that may be strange, particularly for very large divergence times?
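A minimal sketch of the add-a-regression-line pattern for part 3 (adjust the aesthetics to match your Question 6 plot):

# Scatter plot with a fitted least squares line overlaid
ggplot(parasites, aes(x = divergence_time, y = parsim)) +
  geom_point() +
  geom_smooth(method = "lm", se = FALSE)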

Question 8

Since parsim takes values between 0 and 1, we want to transform this variable so that it can range over \((-\infty, +\infty)\). This will be better suited for fitting a regression model (and interpreting predicted values!).
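The transformation used below is the log-odds (logit) transform: for a proportion \(p\) strictly between 0 and 1,

\[
\operatorname{logit}(p) = \log\left(\frac{p}{1-p}\right),
\]

which tends to \(-\infty\) as \(p \to 0\) and to \(+\infty\) as \(p \to 1\), so the transformed variable is unbounded.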

  1. Using mutate, create a new variable transformed_parsim that is calculated as log(parsim/(1-parsim)). Add this variable to your data frame (a sketch follows this list).

    Note

    log() in R represents the natural log.

  2. Then, visualize the relationship between divergence_time and transformed_parsim. Add a regression line to your visualization.

  3. Write a 1-2 sentence description of what you observe in the visualization.
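A minimal sketch of the mutate step for part 1, assuming the parasites data frame from Question 6:

# Add the log-odds transformed similarity
# (values of parsim exactly 0 or 1 would map to -Inf/Inf)
parasites <- parasites |>
  mutate(transformed_parsim = log(parsim / (1 - parsim)))

Part 2 then follows the same geom_smooth pattern as Question 7, with transformed_parsim on the y axis.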

Question 9

Which variable is the strongest individual predictor of parasite similarity between species?

To answer this question, begin by fitting a simple linear regression model for each pair of variables below. You do not need to report the model outputs in a tidy format; instead, save the models as dt_model, dist_model, BM_model, and prec_model, respectively (a sketch of the fitting pattern appears after this question's list).

  • divergence_time and transformed_parsim

  • distance and transformed_parsim

  • BMdiff and transformed_parsim

  • precdiff and transformed_parsim

  1. Report the slopes for each of these models. Use proper notation.

  2. To answer the question of interest, would it be useful to compare the slopes in each model to choose the variable that is the strongest predictor of parasite similarity? Why or why not?
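A minimal sketch of the fitting pattern, shown for the first pair; the other three models follow the same template with the predictor swapped:

# Simple linear regression of transformed parasite similarity on divergence time
dt_model <- linear_reg() |>
  fit(transformed_parsim ~ divergence_time, data = parasites)

# Inspect the estimates when writing the slopes in proper notation (part 1)
tidy(dt_model)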

Question 10

Now, what if we calculated \(R^2\) to help answer our question? To compare the explanatory power of each individual predictor, we will compare \(R^2\) across the models. \(R^2\) is a measure of how much of the variability in the response variable is explained by the model.

As you may have guessed from the name, \(R^2\) can be calculated by squaring the correlation when we have a simple linear regression model. The correlation \(r\) takes values from -1 to 1; therefore, \(R^2\) takes values from 0 to 1. Intuitively, if \(r = 1\) or \(r = -1\), then \(R^2 = 1\), indicating the model is a perfect fit for the data. If \(r \approx 0\), then \(R^2 \approx 0\), indicating the model is a very bad fit for the data.

You can calculate \(R^2\) using the glance function. For example, you can calculate \(R^2\) for dt_model using the code glance(dt_model)$r.squared.
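For instance, one way to pull out all four values (assuming the model objects from Question 9):

# R-squared for each single-predictor model
glance(dt_model)$r.squared
glance(dist_model)$r.squared
glance(BM_model)$r.squared
glance(prec_model)$r.squared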

  1. Calculate and report \(R^2\) for each model fit in the previous exercise.

  2. To answer our question of interest, would it be useful to compare the \(R^2\) in each model to choose the variable that is the strongest predictor of parasite similarity? Why or why not?