---
title: 'Practical 6: Software and GLMs'
author: "VIBASS"
output:
  html_vignette:
    fig_caption: yes
    number_sections: yes
    toc: yes
    fig_width: 6
    fig_height: 4
vignette: >
  %\VignetteIndexEntry{Practical 6: Software and GLMs}
  %\VignetteEngine{knitr::rmarkdown}
  %\VignetteEncoding{UTF-8}
---
```{r setup, include=FALSE}
knitr::opts_chunk$set(
  echo = TRUE,
  collapse = TRUE,
  comment = "#>"
)
```
# Software for Bayesian Statistical Analysis
So far, simple Bayesian models with conjugate priors have been
considered. As explained in previous practicals, when the posterior
distribution is not available in closed form, MCMC algorithms such as the
Metropolis-Hastings or
Gibbs Sampling can be used to obtain samples from it.
In general, posterior distributions are seldom available in closed form and
implementing MCMC algorithms for complex models can be technically difficult
and very time-consuming.
For this reason, in this Practical we start by looking at a number of `R`
packages to fit Bayesian statistical models. These packages will equip us
with tools which can be used to deal with more complex models efficiently,
without us having to do a lot of extra coding ourselves. Fitting Bayesian
models in `R` will then be much like fitting non-Bayesian models, using
model-fitting functions at the command line, and using standard syntax for
model specification.
## BayesX
In particular, the following software package will be considered:
`BayesX` (http://www.bayesx.org/) implements MCMC methods to obtain samples
from the joint posterior and is conveniently accessed from R via the package
`R2BayesX`.
`R2BayesX` has a very simple interface to define models
using a `formula` (in the same way as with `glm()` and `gam()` functions).
`R2BayesX` can be installed from CRAN.
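If it is not already installed, it can be obtained in the usual way (not run
here):

```{r eval=FALSE}
# One-off installation from CRAN
install.packages("R2BayesX")
```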
## Other Bayesian Software
* Package `MCMCpack` in R contains functions such as `MCMClogit()`, `MCMCpoisson()` and
`MCMCprobit()` for fitting specific kinds of models.
* `INLA` (https://www.r-inla.org/) is based on producing (accurate)
approximations to the marginal posterior distributions of the model parameters.
Although this is often sufficient, joint (multivariate) inference
with `INLA` can be difficult or impossible. However, in many cases joint
inference is not needed, and `INLA` can fit some classes of models in a fraction
of the time taken by MCMC. It has a very simple interface for defining models,
although it cannot be installed directly from CRAN; instead, it is downloaded
from its own repository:
[https://www.r-inla.org/download-install](https://www.r-inla.org/download-install)
* The `Stan` software implements Hamiltonian Monte Carlo and other methods for
fitting hierarchical Bayesian models. It is available from https://mc-stan.org.
* Packages [`rstanarm`](https://mc-stan.org/rstanarm/) and [`brms`](https://paul-buerkner.github.io/brms/)
in R provide a higher-level interface to `Stan` that allows a large class of regression models to be fitted
with a syntax very similar to that of classical regression functions in R (a short `rstanarm` sketch is given after this list).
* A classic MCMC program is `BUGS` (Bayesian inference Using Gibbs Sampling),
described in Lunn et al. (2000):
[http://www.mrc-bsu.cam.ac.uk/bugs/winbugs/contents.shtml](http://www.mrc-bsu.cam.ac.uk/bugs/winbugs/contents.shtml).
`BUGS` can be used through the graphical interfaces `WinBUGS` and `OpenBUGS`; both of
these can be called from within R using the packages `R2WinBUGS` and
`R2OpenBUGS`, respectively.
* `JAGS` stands for "Just Another Gibbs Sampler"; it can also be called from
R using the package `R2jags`.
* The `NIMBLE` package extends the `BUGS` language and implements MCMC and other
methods for Bayesian inference. You can get it from https://r-nimble.org, and it
is best run directly from R.
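For comparison with the `R2BayesX` syntax used later in this practical, here is
a minimal sketch of how a logistic regression might be specified with
`rstanarm`. The names `y`, `x1`, `x2`, `x3` and `data.set` are placeholders,
the code assumes `rstanarm` is installed, and it is not run here.

```{r eval=FALSE}
library(rstanarm)
# Sketch only: logistic regression fitted by Hamiltonian Monte Carlo,
# using rstanarm's default (weakly informative) priors
stan.fit <- stan_glm(y ~ x1 + x2 + x3,
                     data = data.set,
                     family = binomial())
summary(stan.fit)
```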
# Bayesian Logistic Regression
## Model Formulation
To summarise the model formulation presented in the lecture, given a response
variable $Y_i$ representing the number of successes out of $n_i$ trials, each
with success probability $\theta_i$, we have
\begin{align*}
(Y_i \mid \boldsymbol \theta_i) & \sim\mbox{Bi}(n_i, \theta_i),\quad \text{i.i.d.},\quad i=1, \ldots, m \\
\mbox{logit}(\theta_i) & =\eta_i \nonumber\\
\eta_{i} & =\beta_0+\beta_1 x_{i1}+\ldots+\beta_p x_{ip}=\boldsymbol x_i\boldsymbol \beta\nonumber
\end{align*}
assuming the logit link function and with linear predictor $\eta_{i}$.
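As a quick numerical illustration of the logit link (nothing model-specific
here, just base R): the inverse link $\theta = \exp(\eta)/(1+\exp(\eta))$ maps
any value of the linear predictor to a probability in $(0, 1)$, and is available
as `plogis()`.

```{r}
# Inverse logit: theta = exp(eta) / (1 + exp(eta))
eta <- c(-2, 0, 2)
plogis(eta)           # approximately 0.12, 0.50, 0.88
qlogis(plogis(eta))   # the logit itself recovers eta
```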
## Example: Fake News
The `fake_news` data set in the `bayesrules` package in `R` contains
information about 150 news articles, some real news and some fake news.
In this example, we will try to predict whether a news article is fake or not,
given three explanatory variables.
We can use the following code to extract the variables we want from the
data set:
```{r}
fakenews <- bayesrules::fake_news[, c("type", "title_has_excl", "title_words", "negative")]
```
The response variable `type` takes values `fake` or `real`, which should be
self-explanatory. The three explanatory variables are:
* `title_has_excl`, whether or not the article title contains an exclamation mark (values `TRUE` or `FALSE`);
* `title_words`, the number of words in the title (a positive integer); and
* `negative`, a sentiment rating, recorded on a continuous scale.
In the exercise to follow, we will examine whether the chance of an article
being fake news is related to the three covariates here.
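Before modelling, it is worth a quick look at the structure of the extracted
data frame (base R only):

```{r}
str(fakenews)
summary(fakenews)
```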
## Fitting Bayesian Logistic Regression Models
`BayesX` performs inference via MCMC, and we access it through the `R2BayesX`
package, which (as noted above) makes the syntax for model fitting very similar
to that for fitting non-Bayesian models using `glm()` in R. If you do not yet
have it installed, you can install it in the usual way from CRAN.
The package must be loaded into R:
```{r}
library(R2BayesX)
```
The syntax for fitting a Bayesian Logistic Regression Model with one response
variable and three explanatory variables will be like so:
```{r eval=FALSE}
model1 <- bayesx(
  formula = y ~ x1 + x2 + x3,
  data = data.set,
  family = "binomial"
)
```
## Model Fitting
Note that the variable `title_has_excl` will need to be either replaced by or
converted to a factor, for example
```{r}
fakenews$titlehasexcl <- as.factor(fakenews$title_has_excl)
```
Functions `summary()` and `confint()` produce a summary (including parameter
estimates etc.) and credible intervals for the parameters, respectively.
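For instance, applied to the generic `model1` sketched above (not run here):

```{r eval=FALSE}
summary(model1)   # posterior summaries of the model parameters
confint(model1)   # credible intervals for the parameters
```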
In order to be able to obtain output plots from `BayesX`, it seems that we need
to create a new version of the response variable of type logical:
```{r}
fakenews$typeFAKE <- fakenews$type == "fake"
```
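A quick check that the two derived variables have the intended types:

```{r}
# titlehasexcl should be a factor and typeFAKE a logical
str(fakenews[, c("titlehasexcl", "typeFAKE")])
```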
## Exercises
* Perform an exploratory assessment of the fake news data set, in particular
looking at the possible relationships between the explanatory variables
and the fake/real response variable `typeFAKE`. You may wish to use the R
function `boxplot()` here.
Solution
```{r fig = TRUE}
# Is there a link between the fakeness and whether the title has an exclamation mark?
table(fakenews$title_has_excl, fakenews$typeFAKE)
# For the quantitative variables, look at boxplots on fake vs real
boxplot(fakenews$title_words ~ fakenews$typeFAKE)
boxplot(fakenews$negative ~ fakenews$typeFAKE)
```
* Fit a Bayesian model in `BayesX` using the fake news `typeFAKE` variable as
response and the others as covariates. Examine the output; does the model fit
well, and is there any evidence that any of the explanatory variables are
associated with changes in probability of an article being fake or not?
Solution
```{r fig = TRUE}
# Produce the BayesX output
bayesx.output <- bayesx(formula = typeFAKE ~ titlehasexcl + title_words + negative,
                        data = fakenews,
                        family = "binomial",
                        method = "MCMC",
                        iter = 15000,
                        burnin = 5000)
summary(bayesx.output)
confint(bayesx.output)
```
* Produce plots of the MCMC sample traces and the estimated posterior
distributions for the model parameters. Does it seem like convergence has been
achieved?
Solution
```{r fig = TRUE, fig.width = 5, fig.height = 10}
# Traces can be obtained separately
plot(bayesx.output, which = "coef-samples")
```
```{r fig = TRUE}
# And the density plots one-by-one
par(mfrow = c(2, 2))
plot(density(samples(bayesx.output)[, "titlehasexclTRUE"]), main = "Title Has Excl")
plot(density(samples(bayesx.output)[, "title_words"]), main = "Title Words")
plot(density(samples(bayesx.output)[, "negative"]), main = "Negative")
```
* Fit a non-Bayesian model using `glm()` for comparison. How do the model fits
compare?
Solution
```{r fig = TRUE}
# Fit model - note similarity with bayesx syntax
glm.output <- glm(formula = typeFAKE ~ titlehasexcl + title_words + negative,
                  data = fakenews,
                  family = "binomial")
# Summarise output
summary(glm.output)
# Perform ANOVA on each variable in turn
drop1(glm.output, test = "Chisq")
```
# Bayesian Poisson Regression
## Model Formulation
To summarise the model formulation presented in the lecture, given a response
variable $Y_i$ representing the counts occurring from a process with mean
parameter $\lambda_i$:
\begin{align*}
(Y_i \mid \boldsymbol \lambda_i) & \sim\mbox{Po}(\lambda_i),\quad \text{i.i.d.},\quad i=1, \ldots, n \\
\mbox{log}(\lambda_i) & =\eta_i \nonumber\\
\eta_{i} & =\beta_0+\beta_1 x_{i1}+\ldots+\beta_p x_{ip}=\boldsymbol x_i\boldsymbol \beta\nonumber
\end{align*}
assuming the log link function and with linear predictor $\eta_{i}$.
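Analogously, a quick illustration of the log link (base R only): the inverse
link $\lambda = \exp(\eta)$ maps any value of the linear predictor to a positive
mean.

```{r}
# Inverse log link: lambda = exp(eta) is always positive
eta <- c(-1, 0, 2)
exp(eta)          # approximately 0.37, 1, 7.39
log(exp(eta))     # the log link recovers eta
```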
## Example: Emergency Room Complaints
For this example we will use the `esdcomp` data set, which is available in the
`faraway` package. This data set records complaints about emergency room
doctors. In particular, data was recorded on 44 doctors working in an
emergency service at a hospital to study the factors affecting the number of
complaints received.
The response variable that we will use is `complaints`, an integer count of the
number of complaints received. The number of complaints is expected to scale
with the number of visits (contained in the `visits` column), so we model the
rate of complaints per visit; thus we will need to include a new variable
`logvisits` as an offset.
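To see why the logarithm of the number of visits plays the role of an offset,
note that with a log link, modelling the expected rate of complaints per visit
is equivalent to adding $\log(\mbox{visits}_i)$ to the linear predictor with its
coefficient fixed at one:
\begin{align*}
\log\left(\frac{\lambda_i}{\mbox{visits}_i}\right)=\eta_i
\quad\Longleftrightarrow\quad
\log(\lambda_i)=\log(\mbox{visits}_i)+\eta_i.
\end{align*}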
The three explanatory variables we will use in the analysis are:
* `residency`, whether or not the doctor is still in residency training (values
`N` or `Y`);
* `gender`, the gender of the doctor (values `F` or `M`); and
* `revenue`, dollars per hour earned by the doctor, recorded as an integer.
Our simple aim here is to assess whether the seniority, gender or income of the
doctor is linked with the rate of complaints against that doctor.
We can use the following code to extract the data we want without having to load
the whole package:
```{r}
esdcomp <- faraway::esdcomp
```
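As before, a quick look at the structure of the extracted data (base R only):

```{r}
str(esdcomp)
summary(esdcomp$visits)
```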
## Fitting Bayesian Poisson Regression Models
Again we can use `BayesX` to fit this form of Bayesian generalised
linear model.
If not loaded already, the package must be loaded into R:
```{r echo=FALSE}
library(R2BayesX)
```
In `BayesX`, the syntax for fitting a Bayesian Poisson Regression Model with one
response variable, three explanatory variables and an offset will be like so:
```{r eval=FALSE}
model1 <- bayesx(formula = y~x1+x2+x3+offset(w),
data = data.set,
family="poisson")
```
As noted above we need to include an offset in this analysis; since
for a Poisson GLM we will be using a log link function by default, we must
compute the log of the number of visits and include that in the data set
`esdcomp`:
```{r}
esdcomp$logvisits <- log(esdcomp$visits)
```
The offset term in the model is then written
`offset(logvisits)`
in the call to `bayesx()`.
## Exercises
* Perform an exploratory assessment of the emergency room complaints data set,
particularly how the response variable `complaints` varies with the proposed
explanatory variables relative to the number of visits. To do this, create
another variable which is the ratio of `complaints` to `visits`.
Solution
```{r fig = TRUE}
# Compute the ratio
esdcomp$ratio <- esdcomp$complaints / esdcomp$visits
# Plot the link with revenue
plot(esdcomp$revenue, esdcomp$ratio)
# Use boxplots against residency and gender
boxplot(esdcomp$ratio ~ esdcomp$residency)
boxplot(esdcomp$ratio ~ esdcomp$gender)
```
* Fit a Bayesian model in `BayesX` using the `complaints` variable as Poisson
response and the others as covariates. Examine the output; does the model fit
well, and is there any evidence that any of the explanatory variables are
associated with the rate of complaints?
Solution
```{r fig = TRUE}
# Fit model - note similarity with glm syntax
esdcomp$logvisits <- log(esdcomp$visits)
bayesx.output <- bayesx(formula = complaints ~ residency + gender + revenue,
                        offset = logvisits,
                        data = esdcomp,
                        family = "poisson")
# Summarise output
summary(bayesx.output)
```
* Produce plots of the MCMC sample traces and the estimated posterior
distributions for the model parameters. Does it seem like convergence has been
achieved?
Solution
```{r fig = TRUE, fig.width = 5, fig.height = 10}
# An overall plot of sample traces and density estimates
# plot(samples(bayesx.output))
# Traces can be obtained separately
plot(bayesx.output, which = "coef-samples")
```
```{r fig = TRUE}
# And the density plots one-by-one
par(mfrow = c(2, 2))
plot(density(samples(bayesx.output)[, "residencyY"]), main = "Residency")
plot(density(samples(bayesx.output)[, "genderM"]), main = "Gender")
plot(density(samples(bayesx.output)[, "revenue"]), main = "Revenue")
```
* Fit a non-Bayesian model using `glm()` for comparison. How do the model fits
compare?
Solution
```{r fig = TRUE}
# Fit model - note similarity with bayesx syntax
esdcomp$logvisits <- log(esdcomp$visits)
glm.output <- glm(formula = complaints ~ residency + gender + revenue,
                  offset = logvisits,
                  data = esdcomp,
                  family = "poisson")
# Summarise output
summary(glm.output)
# Perform ANOVA on each variable in turn
drop1(glm.output, test = "Chisq")
```