---
title: 'Practical 8: Optional Extra and Advanced Material'
author: "VIBASS"
output:
  html_vignette:
    fig_caption: yes
    number_sections: yes
    toc: yes
    fig_width: 6
    fig_height: 4
vignette: >
  %\VignetteIndexEntry{Practical 8: Optional Extra and Advanced Material}
  %\VignetteEngine{knitr::rmarkdown}
  %\VignetteEncoding{UTF-8}
---

# Introduction

This practical is entirely optional, and presents additional and advanced
material you may wish to try out if you have completed all the work in the
first seven practicals, and have no further questions for us.

We start with an example where we implement specific R code for running the
linear regression example of Practical 5; then we repeat the examples of
Practicals 6 and 7 but using `INLA` rather than `BayesX`. Note that we did not
include `INLA` examples in the main material because we did not want to have
too many different packages included, which would detract from the most
important part - learning about Bayesian Statistics!

# Example: Simple Linear Regression

We will illustrate the use of Gibbs Sampling on a simple linear regression
model. Recall that we saw yesterday that we can obtain an analytical solution
for a Bayesian linear regression, but that more complex models require a
simulation approach; in Practical 5, we used the package `MCMCpack` to fit the
model using Gibbs Sampling. The approach we take here for Gibbs Sampling on a
simple linear regression model generalises easily to more complex models: the
key point is that, whatever your model looks like, you update one parameter at
a time, using the conditional distribution of that parameter given the current
values of all the other parameters.

The simple linear regression model we will analyse here is a reduced version
of the general linear regression model we saw yesterday, and the same one as
in Practical 5:
$$
Y_i = \beta_0+\beta_1x_i+\epsilon_i
$$
for response variable $\mathbf{Y}=\left(Y_1,Y_2,\ldots,Y_n\right)$,
explanatory variable $\mathbf{x}=\left(x_1,x_2,\ldots,x_n\right)$ and residual
vector $\mathbf{\epsilon}=\left(\epsilon_1,\epsilon_2,\ldots,\epsilon_n\right)$
for a sample of size $n$, where $\beta_0$ is the regression intercept,
$\beta_1$ is the regression slope, and where the $\epsilon_i$ are independent
with $\epsilon_i\sim \textrm{N}\left(0,\sigma^2\right)$ $\forall
i=1,2,\ldots,n$. For convenience, we refer to the combined set of $\mathbf{Y}$
and $\mathbf{x}$ data as $\mathcal{D}$. We also define $\hat{\mathbf{y}}$ to
be the fitted response vector (i.e., $\hat{y}_i = \beta_0 + \beta_1 x_i$ from
the regression equation) using the current values of the parameters
$\beta_0$, $\beta_1$ and precision $\tau$ (remember that
$\tau=\frac{1}{\sigma^2}$) from the Gibbs Sampling simulations. For Bayesian
inference, it is simpler to work with precisions $\tau$ rather than with
variances $\sigma^2$.

Given priors
$$
\begin{align*}
\pi(\tau) &= \textrm{Ga}\left(\alpha, \beta\right), \\
\pi(\beta_0) &= \textrm{N}\left(\mu_{\beta_0}, \tau_{\beta_0}\right), \quad\textrm{and} \\
\pi(\beta_1) &= \textrm{N}\left(\mu_{\beta_1}, \tau_{\beta_1}\right)
\end{align*}
$$
then by combining with the likelihood we can derive the full conditional
distributions for each parameter.
For $\tau$, we have
$$
\begin{align*}
\pi\left(\tau|\mathcal{D}, \beta_0, \beta_1\right) &\propto \pi(\tau)\prod_{i=1}^nL\left(y_i|\hat{y}_i\right), \\
&= \tau^{\alpha-1}e^{-\beta\tau} \tau^{\frac{n}{2}}\exp\left\{-\frac{\tau}{2}\sum_{i=1}^n(y_i-\hat{y}_i)^2\right\}, \\
&= \tau^{\alpha-1+\frac{n}{2}}\exp\left\{-\beta\tau-\frac{\tau}{2}\sum_{i=1}^n(y_i-\hat{y}_i)^2\right\}, \\
\textrm{so} \quad \tau|\mathcal{D}, \beta_0, \beta_1 &\sim \textrm{Ga}\left(\alpha+\frac{n}{2}, \beta+\frac{1}{2}\sum_{i=1}^n(y_i-\hat{y}_i)^2\right).
\end{align*}
$$

For $\beta_0$ we will need to expand $\hat{y}_i=\beta_0+\beta_1x_i$, and then
we have
$$
\begin{align*}
p(\beta_0|\mathcal{D}, \beta_1, \tau) &\propto \pi(\beta_0)\prod_{i=1}^nL\left(y_i|\hat{y}_i\right), \\
&= \tau_{\beta_0}^{\frac{1}{2}}\exp\left\{-\frac{\tau_{\beta_0}}{2}(\beta_0-\mu_{\beta_0})^2\right\}\tau^\frac{n}{2}\exp\left\{-\frac{\tau}{2}\sum_{i=1}^n(y_i-\hat{y}_i)^2\right\}, \\
&\propto \exp\left\{-\frac{\tau_{\beta_0}}{2}(\beta_0-\mu_{\beta_0})^2-\frac{\tau}{2}\sum_{i=1}^n(y_i-\beta_0-x_i \beta_1)^2\right\}, \\
&= \exp\left\{-\frac{1}{2}\left(\beta_0^2(\tau_{\beta_0} + n\tau) + \beta_0\left(-2\tau_{\beta_0}\mu_{\beta_0} - 2\tau\sum_{i=1}^n (y_i - x_i\beta_1)\right)\right) + C\right\}, \\
\textrm{so} \quad \beta_0|\mathcal{D}, \beta_1, \tau &\sim \textrm{N}\left((\tau_{\beta_0} + n\tau)^{-1}\left(\tau_{\beta_0}\mu_{\beta_0} + \tau\sum_{i=1}^n (y_i - x_i\beta_1)\right), \tau_{\beta_0} + n\tau\right).
\end{align*}
$$
In the above, $C$ represents a quantity that is constant with respect to
$\beta_0$ and hence can be ignored; note that, as with the priors, the Normal
full conditional is parameterised here by its mean and precision.

Finally, for $\beta_1$ we have
$$
\begin{align*}
p(\beta_1|\mathcal{D}, \beta_0, \tau) &\propto \pi(\beta_1)\prod_{i=1}^nL\left(y_i|\hat{y}_i\right), \\
&= \tau_{\beta_1}^{\frac{1}{2}}\exp\left\{-\frac{\tau_{\beta_1}}{2}(\beta_1-\mu_{\beta_1})^2\right\}\tau^\frac{n}{2}\exp\left\{-\frac{\tau}{2}\sum_{i=1}^n(y_i-\hat{y}_i)^2\right\}, \\
&\propto \exp\left\{-\frac{\tau_{\beta_1}}{2}(\beta_1-\mu_{\beta_1})^2-\frac{\tau}{2}\sum_{i=1}^n(y_i-\beta_0-x_i \beta_1)^2\right\}, \\
&= \exp\left\{-\frac{1}{2}\left(\beta_1^2\left(\tau_{\beta_1} + \tau\sum_{i=1}^n x_i^2\right) + \beta_1\left(-2\tau_{\beta_1}\mu_{\beta_1} - 2\tau\sum_{i=1}^n (y_i - \beta_0)x_i\right)\right) + C\right\}, \\
\textrm{so} \quad \beta_1|\mathcal{D}, \beta_0, \tau &\sim \textrm{N}\left(\left(\tau_{\beta_1} + \tau\sum_{i=1}^n x_i^2\right)^{-1}\left(\tau_{\beta_1}\mu_{\beta_1} + \tau\sum_{i=1}^n (y_i - \beta_0)x_i\right), \tau_{\beta_1} + \tau\sum_{i=1}^n x_i^2\right).
\end{align*}
$$

This gives us an easy way to run Gibbs Sampling for linear regression: the
full conditionals are standard distributions, and there are standard functions
in R to simulate from them. We shall do this now for an example in ecology,
looking at the relationship between water pollution and mayfly size - the data
come from the book *Statistics for Ecologists Using R and Excel 2nd edition*
by Mark Gardener (ISBN 9781784271398), see
[the publisher's webpage](https://pelagicpublishing.com/products/statistics-for-ecologists-using-r-and-excel-gardener-2nd-edition).

The data are as follows:

- `length` - the length of a mayfly in mm;
- `BOD` - biological oxygen demand in mg of oxygen per litre, effectively a
  measure of organic pollution (since more organic pollution requires more
  oxygen to break it down).
The data can be read into R:

```{r}
# Read in data
BOD <- c(200,180,135,120,110,120,95,168,180,195,158,145,140,145,165,187,
         190,157,90,235,200,55,87,97,95)
mayfly.length <- c(20,21,22,23,21,20,19,16,15,14,21,21,21,20,19,18,17,19,21,13,
                   16,25,24,23,22)

# Create data frame for the Gibbs Sampling, needs x and y
Data <- data.frame(x=BOD,y=mayfly.length)
```

We provide here some simple functions for running the analysis by Gibbs
Sampling, using the full conditionals derived above. The functions assume you
have a data frame with two columns, the response `y` and the covariate `x`.

```{r}
# Function to update tau, the precision
sample_tau <- function(data, beta0, beta1, alphatau, betatau) {
  rgamma(1,
    shape = alphatau + nrow(data) / 2,
    rate = betatau + 0.5 * sum((data$y - (beta0 + data$x*beta1))^2)
  )
}

# Function to update beta0, the regression intercept
sample_beta0 <- function(data, beta1, tau, mu0, tau0) {
  prec <- tau0 + tau * nrow(data)
  mean <- (tau0*mu0 + tau * sum(data$y - data$x*beta1)) / prec
  rnorm(1, mean = mean, sd = 1 / sqrt(prec))
}

# Function to update beta1, the regression slope
sample_beta1 <- function(data, beta0, tau, mu1, tau1) {
  prec <- tau1 + tau * sum(data$x * data$x)
  mean <- (tau1*mu1 + tau * sum((data$y - beta0) * data$x)) / prec
  rnorm(1, mean = mean, sd = 1 / sqrt(prec))
}

# Function to run the Gibbs Sampling, where you specify the number of
# simulations `m`, the starting values for each of the three regression
# parameters (`tau_start` etc), and the parameters for the prior
# distributions of tau, beta0 and beta1
gibbs_sample <- function(data, tau_start, beta0_start, beta1_start, m,
                         alpha_tau, beta_tau, mu_beta0, tau_beta0,
                         mu_beta1, tau_beta1) {
  tau <- numeric(m)
  beta0 <- numeric(m)
  beta1 <- numeric(m)
  tau[1] <- tau_start
  beta0[1] <- beta0_start
  beta1[1] <- beta1_start

  for (i in 2:m) {
    tau[i] <- sample_tau(data, beta0[i-1], beta1[i-1], alpha_tau, beta_tau)
    beta0[i] <- sample_beta0(data, beta1[i-1], tau[i], mu_beta0, tau_beta0)
    beta1[i] <- sample_beta1(data, beta0[i], tau[i], mu_beta1, tau_beta1)
  }

  Iteration <- 1:m
  data.frame(Iteration,beta0,beta1,tau)
}
```

## Exercises

We will use Gibbs Sampling to fit a Bayesian Linear Regression model to the
mayfly data, with the above code. We will use the following prior
distributions for the regression parameters:
$$
\begin{align*}
\pi(\tau) &= \textrm{Ga}\left(1, 1\right), \\
\pi(\beta_0) &= \textrm{N}\left(0, 0.0001\right), \quad\textrm{and} \\
\pi(\beta_1) &= \textrm{N}\left(0, 0.0001\right).
\end{align*}
$$
We will set the initial values of all parameters to 1, i.e.
$\beta_0^{(0)}=\beta_1^{(0)}=\tau^{(0)}=1$.

### Data exploration

- Investigate the data to see whether a linear regression model would be
  sensible. [You may not need to do this step if you recall the outputs from
  Practical 5.]
Solution

```{r fig = TRUE}
# Scatterplot
plot(BOD,mayfly.length)

# Correlation with hypothesis test
cor.test(BOD,mayfly.length)
```
- Run a frequentist simple linear regression using the `lm` function in R; this will be useful for comparison with our Bayesian analysis.
Solution

```{r}
# Linear Regression using lm(); BOD and mayfly.length are taken from the
# workspace (the Data frame holds them under the names x and y)
linreg <- lm(mayfly.length ~ BOD)
summary(linreg)
```
### Running the Gibbs Sampler

- Use the code above to fit a Bayesian simple linear regression using `length`
  as the response variable. Ensure you have a burn-in period so that the
  initial simulations are discarded. Use the output from `gibbs_sample` to
  estimate the regression parameters (with uncertainty measures). Also plot
  the estimates of the posterior distributions.
Solution

```{r fig = TRUE}
# Bayesian Linear Regression using a Gibbs Sampler

# Set the number of iterations of the Gibbs Sampler - including how many to
# discard as the burn-in
m.burnin <- 500
m.keep <- 1000
m <- m.burnin + m.keep

# Obtain the samples
results1 <- gibbs_sample(Data,1,1,1,m,1,1,0,0.0001,0,0.0001)

# Remove the burn-in
results1 <- results1[-(1:m.burnin),]

# Add variance and standard deviation columns
results1$Variance <- 1/results1$tau
results1$StdDev <- sqrt(results1$Variance)

# Estimates are the column means
apply(results1[,-1],2,mean)

# Also look at uncertainty
apply(results1[,-1],2,sd)
apply(results1[,-1],2,quantile,probs=c(0.025,0.975))

# Kernel density plots for beta0, beta1 and the residual standard deviation
plot(density(results1$beta0, bw = 0.5), main = "Posterior density for beta0")
plot(density(results1$beta1, bw = 0.25), main = "Posterior density for beta1")
plot(density(results1$StdDev, bw = 0.075), main = "Posterior density for StdDev")
```
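As a complement to the summaries above, the `coda` package (a minimal sketch
below, assuming it is installed) can turn the Gibbs output into an `mcmc`
object, giving posterior summaries and effective sample sizes in one step:

```{r eval = FALSE}
# Convert the retained samples to a coda mcmc object; this assumes the
# coda package is available
library(coda)
mcmc1 <- mcmc(results1[, c("beta0", "beta1", "tau")])
summary(mcmc1)       # posterior means, SDs and quantiles
effectiveSize(mcmc1) # effective sample size for each parameter
```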
- Use the function `ts.plot()` to view the autocorrelation in the Gibbs sampling simulation chain. Is there any visual evidence of autocorrelation, or do the samples look independent?
Solution

```{r fig = TRUE}
# Plot sampled series
ts.plot(results1$beta0,ylab="beta0",xlab="Iteration")
ts.plot(results1$beta1,ylab="beta1",xlab="Iteration")
ts.plot(results1$StdDev,ylab="sigma",xlab="Iteration")
```
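The trace plots give a visual impression; as an optional extra check, the base
R function `acf()` plots the autocorrelation at each lag directly:

```{r fig = TRUE}
# Autocorrelation functions for the sampled chains; bars well outside
# the confidence band at non-zero lags indicate serial dependence
acf(results1$beta0, main = "ACF for beta0")
acf(results1$beta1, main = "ACF for beta1")
acf(results1$StdDev, main = "ACF for StdDev")
```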
- How do the results compare with the frequentist output?

### Reducing the autocorrelation by mean-centring the covariate

- One method for reducing the autocorrelation in the sampling chains for
  regression parameters is to mean-centre the covariate(s); this works because
  it reduces the dependence between the regression intercept and slope(s). Do
  this for the current example, noting that you will need to correct the
  estimate of the regression intercept afterwards (see the identity below).
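To see why this correction works, write $\beta_0^c$ for the intercept in the
mean-centred model; then
$$
y_i = \beta_0^c + \beta_1(x_i - \bar{x}) + \epsilon_i
    = \left(\beta_0^c - \beta_1\bar{x}\right) + \beta_1 x_i + \epsilon_i,
$$
so the intercept on the original scale is
$\beta_0 = \beta_0^c - \beta_1\bar{x}$, and this transformation can be applied
to every posterior sample.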
Solution

```{r fig = TRUE}
# Mean-centre the x covariate
DataC <- Data
meanx <- mean(DataC$x)
DataC$x <- DataC$x - meanx

# Bayesian Linear Regression using a Gibbs Sampler

# Set the number of iterations of the Gibbs Sampler - including how many to
# discard as the burn-in
m.burnin <- 500
m.keep <- 1000
m <- m.burnin + m.keep

# Obtain the samples
results2 <- gibbs_sample(DataC, 1, 1, 1, m, 1, 1, 0, 0.0001, 0, 0.0001)

# Remove the burn-in
results2 <- results2[-(1:m.burnin), ]

# Correct the effect of the mean-centring on the intercept
results2$beta0 <- results2$beta0 - meanx * results2$beta1

# Add variance and standard deviation columns
results2$Variance <- 1 / results2$tau
results2$StdDev <- sqrt(results2$Variance)

# Estimates are the column means
apply(results2[, -1], 2, mean)

# Also look at uncertainty
apply(results2[, -1], 2, sd)
apply(results2[, -1], 2, quantile, probs = c(0.025, 0.975))

# Kernel density plots for beta0, beta1 and the residual standard deviation
plot(density(results2$beta0, bw = 0.5), main = "Posterior density for beta0")
plot(density(results2$beta1, bw = 0.25), main = "Posterior density for beta1")
plot(density(results2$StdDev, bw = 0.075), main = "Posterior density for StdDev")

# Plot sampled series
ts.plot(results2$beta0, ylab = "beta0", xlab = "Iteration")
ts.plot(results2$beta1, ylab = "beta1", xlab = "Iteration")
ts.plot(results2$StdDev, ylab = "sigma", xlab = "Iteration")
```
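As a quick numerical check on why centring helps, we can compare the posterior
correlation between the intercept and slope samples on the original scale with
that on the centred scale (re-running the sampler so that we have intercept
samples before the back-correction):

```{r}
# Correlation between intercept and slope samples, original covariate
cor(results1$beta0, results1$beta1)

# Re-run on the centred data and compute the correlation BEFORE the
# intercept correction; it should be much closer to zero
results2c <- gibbs_sample(DataC, 1, 1, 1, m, 1, 1, 0, 0.0001, 0, 0.0001)
results2c <- results2c[-(1:m.burnin), ]
cor(results2c$beta0, results2c$beta1)
```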
# INLA

`INLA` (https://www.r-inla.org/) is based on producing (accurate)
approximations to the marginal posterior distributions of the model
parameters. Although this is often all that is needed, making multivariate
inference with `INLA` can be difficult or impossible. However, in many cases
multivariate inference is not required, and `INLA` can fit some classes of
models in a fraction of the time it takes with MCMC.

We can obtain Bayesian model fits without using MCMC with the INLA software,
implemented in R via the `INLA` package. This package is not on CRAN, so if
you do not have it installed already you will need to install it via

```{r eval=FALSE}
install.packages("INLA",
  repos = c(getOption("repos"),
            INLA = "https://inla.r-inla-download.org/R/stable"),
  dep = TRUE)
```

After this, the package can be loaded into R:

```{r}
library(INLA)
```

## Example: Fake News

This section is included here as a reminder, as we are going to reanalyse this
data set using `INLA`.

The `fake_news` data set in the `bayesrules` package in `R` contains
information about 150 news articles, some real news and some fake news. In
this example, we will look at trying to predict whether an article of news is
fake or not given three explanatory variables.

We can use the following code to extract the variables we want from the data
set:

```{r}
library(bayesrules)
fakenews <- fake_news[,c("type","title_has_excl","title_words","negative")]
```

The response variable `type` takes values `fake` or `real`, which should be
self-explanatory. The three explanatory variables are:

* `title_has_excl`, whether or not the article contains an exclamation mark
  (values `TRUE` or `FALSE`);
* `title_words`, the number of words in the title (a positive integer); and
* `negative`, a sentiment rating, recorded on a continuous scale.

In the exercise to follow, we will examine whether the chance of an article
being fake news is related to the three covariates here.

Note that the variable `title_has_excl` will need to be either replaced by or
converted to a factor, for example

```{r}
fakenews$titlehasexcl <- as.factor(fakenews$title_has_excl)
```

For the frequentist fit later on, the functions `summary` and `confint`
produce a summary (including parameter estimates etc) and confidence intervals
for the parameters, respectively.

Finally, we create a new version of the response variable of type logical:

```{r}
fakenews$typeFAKE <- fakenews$type == "fake"
```

## Exercises

* Perform an exploratory assessment of the fake news data set, in particular
  looking at the possible relationships between the explanatory variables and
  the fake/real response variable `typeFAKE`. You may wish to use the R
  function `boxplot()` here.
Solution

```{r fig = TRUE}
# Is there a link between the fakeness and whether the title has an
# exclamation mark?
table(fakenews$title_has_excl, fakenews$typeFAKE)

# For the quantitative variables, look at boxplots on fake vs real
boxplot(fakenews$title_words ~ fakenews$typeFAKE)
boxplot(fakenews$negative ~ fakenews$typeFAKE)
```
* Fit the Bayesian model without MCMC using `INLA`; note that the summary
  output provides credible intervals for each parameter to help us make
  inference. Also, in `INLA` a Binomial response needs to be entered as type
  integer, so we need another conversion:

```{r}
fakenews$typeFAKE.int <- as.integer(fakenews$typeFAKE)
```
Solution

```{r fig = TRUE}
# Fit model - note similarity with bayesx syntax
inla.output <- inla(formula = typeFAKE.int ~ titlehasexcl + title_words + negative,
                    data = fakenews,
                    family = "binomial")

# Summarise output
summary(inla.output)
```
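`INLA` also returns the approximate posterior marginals themselves, which can
be plotted or used to compute tail probabilities. The sketch below assumes the
coefficient for the factor is named `titlehasexclTRUE` (the default treatment
coding); check `names(inla.output$marginals.fixed)` if in doubt:

```{r eval = FALSE}
# Posterior marginal for the titlehasexcl coefficient (name assumed to
# follow the default factor coding)
marg <- inla.output$marginals.fixed[["titlehasexclTRUE"]]
plot(inla.smarginal(marg), type = "l",
     xlab = "Coefficient for titlehasexclTRUE", ylab = "Density")

# Approximate posterior probability that the coefficient is positive
1 - inla.pmarginal(0, marg)
```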
* Fit a non-Bayesian model using `glm()` for comparison. How do the model fits compare?
Solution

```{r fig = TRUE}
# Fit model - note similarity with bayesx syntax
glm.output <- glm(formula = typeFAKE ~ titlehasexcl + title_words + negative,
                  data = fakenews,
                  family = "binomial")

# Summarise output
summary(glm.output)

# Perform ANOVA on each variable in turn
drop1(glm.output,test="Chisq")
```
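If you want a direct side-by-side comparison, a minimal sketch (assuming the
coefficients appear in the same order in both fits) is:

```{r eval = FALSE}
# Compare INLA posterior means with the glm() maximum-likelihood
# estimates; rows are assumed to be in matching order
cbind(INLA = inla.output$summary.fixed[, "mean"],
      glm  = coef(glm.output))
```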
## Example: Emergency Room Complaints

This section is included here as a reminder, as we are going to reanalyse this
data set using `INLA`.

For this example we will use the `esdcomp` data set, which is available in the
`faraway` package. This data set records complaints about emergency room
doctors. In particular, data were recorded on 44 doctors working in an
emergency service at a hospital to study the factors affecting the number of
complaints received.

The response variable that we will use is `complaints`, an integer count of
the number of complaints received. It is expected that the number of
complaints will scale with the number of visits (contained in the `visits`
column), so we are modelling the rate of complaints per visit - thus we will
need to include a new variable `log.visits` as an offset.

The three explanatory variables we will use in the analysis are:

* `residency`, whether or not the doctor is still in residency training
  (values `N` or `Y`);
* `gender`, the gender of the doctor (values `F` or `M`); and
* `revenue`, dollars per hour earned by the doctor, recorded as an integer.

Our simple aim here is to assess whether the seniority, gender or income of
the doctor is linked with the rate of complaints against that doctor.

We can use the following code to extract the data we want without having to
load the whole package:

```{r}
esdcomp <- faraway::esdcomp
```

## Fitting Bayesian Poisson Regression Models

Again we can use `INLA` to fit this form of Bayesian generalised linear model.
If not loaded already, the package must be loaded into R:

```{r echo=FALSE}
library(INLA)
```

As noted above we need to include an offset in this analysis; since a Poisson
GLM uses a log() link function by default, we must compute the log of the
number of visits and include that in the data set `esdcomp`:

```{r}
esdcomp$log.visits <- log(esdcomp$visits)
```

The offset term in the model is then written `offset(log.visits)`.

## Exercises

* Perform an exploratory assessment of the emergency room complaints data set,
  particularly how the response variable `complaints` varies with the proposed
  explanatory variables relative to the number of visits. To do this, create
  another variable which is the ratio of `complaints` to `visits`.
Solution

```{r fig = TRUE}
# Compute the ratio
esdcomp$ratio <- esdcomp$complaints / esdcomp$visits

# Plot the link with revenue
plot(esdcomp$revenue,esdcomp$ratio)

# Use boxplots against residency and gender
boxplot(esdcomp$ratio ~ esdcomp$residency)
boxplot(esdcomp$ratio ~ esdcomp$gender)
```
* Fit the Bayesian model without MCMC using `INLA`; note that the summary output provides credible intervals for each parameter to help us make inference.
Solution

```{r}
# Fit model - note similarity with bayesx syntax
inla.output <- inla(formula = complaints ~ offset(log.visits) + residency + gender + revenue,
                    data = esdcomp,
                    family = "poisson")

# Summarise output
summary(inla.output)
```
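Since the model uses a log link, the fixed effects are easier to interpret as
rate ratios $\exp(\beta)$. A sketch of how their posterior means could be
computed from the approximate marginals (assuming the default coefficient
names, such as `residencyY` for the residency effect):

```{r eval = FALSE}
# Posterior mean rate ratio exp(beta) for each fixed effect, computed
# by integrating exp() over each approximate posterior marginal
sapply(inla.output$marginals.fixed,
       function(marg) inla.emarginal(exp, marg))
```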
* Fit a non-Bayesian model using `glm()` for comparison. How do the model fits compare?
Solution

```{r fig = TRUE}
# Fit model - note similarity with bayesx syntax
esdcomp$log.visits <- log(esdcomp$visits)
glm.output <- glm(formula = complaints ~ offset(log.visits) + residency + gender + revenue,
                  data = esdcomp,
                  family = "poisson")

# Summarise output
summary(glm.output)

# Perform ANOVA on each variable in turn
drop1(glm.output, test = "Chisq")
```
# Bayesian Hierarchical Modelling

These two final sections simply reproduce Practical 7, but using `INLA` rather
than `BayesX`.

# Linear Mixed Models

Linear mixed models were defined in the lecture as follows:

$$
Y_{ij} = X_{ij}\beta + \phi_i + \epsilon_{ij}
$$

Here, $Y_{ij}$ represents observation $j$ in group $i$, $X_{ij}$ is a vector
of covariates with coefficients $\beta$, $\phi_i$ is an i.i.d. random effect
and $\epsilon_{ij}$ is a Gaussian error term. The distribution of the random
effects $\phi_i$ is Gaussian with zero mean and precision $\tau_{\phi}$.

# Multilevel Modelling

Multilevel models are a particular type of mixed-effects model in which
observations are nested within groups, so that group effects are modelled
using random effects. A typical example is that of students nested within
classes.

For the next example, the `nlschools` data set (in package `MASS`) will be
used. This data set records data about students' performance (in particular,
about a language score test) and other variables. The variables in this data
set are:

* `lang`, language score test.
* `IQ`, verbal IQ.
* `class`, class ID.
* `GS`, class size as number of eighth-grade pupils recorded in the class.
* `SES`, social-economic status of pupil’s family.
* `COMB`, whether the pupils are taught in the multi-grade class with
  7th-grade students.

The data set can be loaded and summarised as follows:

```{r}
library("MASS")
data("nlschools")
summary(nlschools)
```

The model to fit will take `lang` as the response variable and include `IQ`,
`GS`, `SES` and `COMB` as covariates (i.e., fixed effects). This model can
easily be fit with `INLA` as follows:

```{r message = FALSE, warning = FALSE}
library("INLA")

m1 <- inla(lang ~ IQ + GS + SES + COMB, data = nlschools)
summary(m1)
```

Note that the previous model only includes fixed effects. The data set
includes `class` as the class ID to which each student belongs. Class effects
can have an impact on the performance of the students, with students in the
same class performing similarly in the language test.

Very conveniently, `INLA` can include random effects in the model by adding a
term on the right-hand side of the formula that defines the model.
Specifically, the term to add is `f(class, model = "iid")` (see code below for
the full model). This will create a random effect indexed over variable
`class` and which is of type `iid`, i.e., the random effects are independent
and identically distributed using a normal distribution with zero mean and
precision $\tau$.

Before fitting the model, the between-class variability can be explored by
means of boxplots:

```{r fig = TRUE, fig.width = 15, fig.height = 5}
boxplot(lang ~ class, data = nlschools, las = 2)
```

The code to fit the model with random effects is:

```{r}
m2 <- inla(
  lang ~ IQ + GS + SES + COMB + f(class, model = "iid"),
  data = nlschools
)
summary(m2)
```

# Generalised Linear Mixed Models

Mixed effects models can also be considered within the context of generalised
linear models. In this case, the linear predictor of observation $i$,
$\eta_i$, can be defined as

$$
\eta_i = X_i\beta + \phi_i
$$

Compared to the previous setting of linear mixed effects models, note that now
the distribution of the response could be other than Gaussian and that
observations are not necessarily nested within groups.

# Poisson regression

In this practical we will use the North Carolina Sudden Infant Death Syndrome
(SIDS) data set.
It is available in the `spData` package and it can be loaded using:

```{r message = FALSE, warning = FALSE}
library(spData)
data(nc.sids)
summary(nc.sids)
```

A full description of the data set is provided in the associated manual page
(check with `?nc.sids`), but in this practical we will only consider these
variables:

* `BIR74`, number of births (1974-78).
* `SID74`, number of SID deaths (1974-78).
* `NWBIR74`, number of non-white births (1974-78).

These variables are measured at the county level in North Carolina, of which
there are 100.

Because `SID74` records the number of SID deaths, the model is Poisson:

$$
O_i \mid \mu_i \sim Po(\mu_i),\ i=1,\ldots, 100
$$

Here, $O_i$ represents the number of cases in county $i$ and $\mu_i$ the mean.
In addition, mean $\mu_i$ will be written as $\mu_i = E_i \theta_i$, where
$E_i$ is the *expected* number of cases and $\theta_i$ the relative risk in
county $i$. The relative risk $\theta_i$ is often modelled, on the log-scale,
to be equal to a linear predictor:

$$
\log(\theta_i) = \beta_0 + \ldots
$$

The expected number of cases is computed by multiplying the number of births
in county $i$ by the overall mortality rate

$$
r = \frac{\sum_{i=1}^{100}O_i}{\sum_{i=1}^{100}B_i}
$$

where $B_i$ represents the total number of births in county $i$. Hence, the
expected number of cases in county $i$ is $E_i = r B_i$.

```{r}
# Overall mortality rate
r74 <- sum(nc.sids$SID74) / sum(nc.sids$BIR74)
# Expected cases
nc.sids$EXP74 <- r74 * nc.sids$BIR74
```

A common measure of relative risk is the *standardised mortality ratio*
($O_i / E_i$):

```{r}
nc.sids$SMR74 <- nc.sids$SID74 / nc.sids$EXP74
```

A summary of the SMR can be obtained:

```{r fig = TRUE}
hist(nc.sids$SMR74, xlab = "SMR")
```

Values above 1 indicate that the county has more observed deaths than expected
and that there might be an increased risk in the area.

As a covariate, we will compute the proportion of non-white births:

```{r}
nc.sids$NWPROP74 <- nc.sids$NWBIR74 / nc.sids$BIR74
```

There is a clear relationship between the SMR and the proportion of non-white
births in a county:

```{r fig = TRUE}
plot(nc.sids$NWPROP74, nc.sids$SMR74)

# Correlation
cor(nc.sids$NWPROP74, nc.sids$SMR74)
```

A simple Poisson regression can be fit by using the following code:

```{r}
m1nc <- inla(
  SID74 ~ 1 + NWPROP74,
  family = "poisson",
  E = nc.sids$EXP74,
  data = nc.sids
)
summary(m1nc)
```

Random effects can also be included to account for intrinsic differences
between the counties:

```{r}
# Index for random effects
nc.sids$ID <- 1:nrow(nc.sids)

# Model WITH covariate
m2nc <- inla(
  SID74 ~ 1 + NWPROP74 + f(ID, model = "iid"),
  family = "poisson",
  E = nc.sids$EXP74,
  data = as.data.frame(nc.sids)
)
summary(m2nc)
```

The role of the covariate can be explored by fitting a model without it:

```{r}
# Model WITHOUT covariate
m3nc <- inla(
  SID74 ~ 1 + f(ID, model = "iid"),
  family = "poisson",
  E = nc.sids$EXP74,
  data = as.data.frame(nc.sids)
)
summary(m3nc)
```

Now, notice the decrease in the estimate of the precision of the random
effects (i.e., the variance increases). This means that the values of the
random effects are now larger than in the previous case, as the random effects
pick up some of the effect explained by the covariate.

```{r fig = TRUE}
par(mfrow = c(1, 2))
boxplot(m2nc$summary.random$ID$mean, ylim = c(-1, 1), main = "With NWPROP74")
boxplot(m3nc$summary.random$ID$mean, ylim = c(-1, 1), main = "Without NWPROP74")
```
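Finally, a sketch of how the smoothing induced by the random effects could be
visualised, by plotting the posterior mean relative risks against the raw
SMRs. This assumes the fitted values are returned on the relative-risk scale
(the expected counts were passed through `E`); depending on the `INLA`
version, you may need to refit with `control.predictor = list(compute = TRUE)`
for the fitted-value summaries to be available:

```{r eval = FALSE}
# Posterior mean relative risks from the random-effects model versus
# the raw SMRs; points pulled towards the identity line show shrinkage
plot(nc.sids$SMR74, m2nc$summary.fitted.values$mean,
     xlab = "Raw SMR", ylab = "Posterior mean relative risk")
abline(0, 1)
```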