Confronting Quasi-Separation in Logistic Mixed Effects for Linguistic Data: A Bayesian Approach

Amelia E. Kimball, Kailen Shantz, Christopher Eager, Joseph Roy
  • Journal of Quantitative Linguistics, August 2018, Taylor & Francis
  • DOI: 10.1080/09296174.2018.1499457

Bayesian models help deal with convergence problems in logistic mixed effects models

What is it about?

It is common practice in linguistic research to model binary dependent variables using logistic mixed effects regression. Unfortunately, these models often fail to converge on the desired random effects structure, which leads researchers to adopt ad hoc methods of model simplification in order to achieve convergence. One common cause of this failure is an issue known as quasi-separation, which arises when a predictor (or combination of predictors) almost perfectly distinguishes the two outcome categories, pushing the maximum likelihood estimates of the affected coefficients towards infinity. In this paper, we describe what quasi-separation is and show how Bayesian mixed effects regression models can be used to overcome the convergence issues it causes.
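The core mechanism is easy to see in miniature. Under quasi-separation, a plain maximum likelihood fit chases ever-larger coefficients, whereas placing even a weakly informative Gaussian prior on the coefficients keeps the estimates finite; this regularization is what a Bayesian model contributes. The sketch below is not from the paper (which fits mixed effects models) but a deliberately simplified fixed-effects illustration in Python with made-up data, comparing maximum likelihood against the posterior mode (MAP) under a Gaussian prior:

```python
import numpy as np
from scipy.optimize import minimize

# Toy data with complete separation: the sign of x perfectly
# predicts y, so the ML estimate of the slope diverges.
x = np.array([-2.0, -1.5, -1.0, -0.5, 0.5, 1.0, 1.5, 2.0])
y = np.array([0, 0, 0, 0, 1, 1, 1, 1])

def neg_log_post(beta, prior_sd=None):
    """Negative log-likelihood of a logistic model with intercept,
    optionally plus a Gaussian prior penalty on the coefficients."""
    eta = beta[0] + beta[1] * x
    # log(1 + exp(eta)) computed stably via logaddexp
    nll = np.sum(np.logaddexp(0.0, eta)) - np.sum(y * eta)
    if prior_sd is not None:
        # -log N(beta | 0, prior_sd^2), dropping the constant
        nll += np.sum(beta ** 2) / (2 * prior_sd ** 2)
    return nll

mle = minimize(neg_log_post, x0=np.zeros(2), method="BFGS")
map_ = minimize(neg_log_post, x0=np.zeros(2), args=(2.5,), method="BFGS")

# The ML slope is very large (the optimizer only stops because the
# gradient flattens out); the prior pulls the MAP slope back to a
# finite, interpretable value.
print("ML slope: ", mle.x[1])
print("MAP slope:", map_.x[1])
```

The same regularizing effect of priors, carried over to the random effects structure, is what helps Bayesian mixed effects models converge where standard maximum likelihood fits fail under quasi-separation.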

Why is it important?

Convergence issues pose a vexing challenge to language researchers. In the best-case scenario, they cost researchers extra time spent trying different optimization algorithms and/or increasing the number of iterations a model runs. At worst, the adoption of ad hoc model simplification techniques increases researcher degrees of freedom and may render the results of an analysis wholly invalid, even leading to spurious findings. We show that Bayesian models are a promising tool for handling convergence issues: they increase the likelihood that a model converges on the desired random effects structure, thereby obviating the need for ad hoc simplification techniques that inflate researcher degrees of freedom and threaten the validity of the results.

Perspectives

Dr. Kailen Shantz (Author)
University of Illinois System

This paper was a great learning opportunity for me. I had never used Bayesian models before working on this paper, and now they are quickly becoming my default choice for modelling linguistic data. Learning to use them certainly took a lot of time and effort, and I still have a lot left to learn, but the return is well worth the investment.

Read Publication

http://dx.doi.org/10.1080/09296174.2018.1499457

The following have contributed to this page: Dr. Kailen Shantz