What is it about?
It is common practice in linguistic research to model binary dependent variables using logistic mixed effects regression. Unfortunately, these models often fail to converge on the desired random effects structure, which leads researchers to adopt ad hoc model simplification methods in order to achieve convergence. One common cause of this failure is an issue known as quasi-separation. In this paper, we describe what quasi-separation is and show how Bayesian mixed effects regression models can be used to overcome convergence problems that it causes.
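The core idea can be illustrated with a toy sketch (not the paper's actual models, which concern mixed effects regression): when a predictor perfectly separates the outcome, the maximum-likelihood slope of a logistic regression has no finite value and grows without bound, whereas adding a weakly informative Gaussian prior on the slope (here computed as a posterior mode, a stand-in for a full Bayesian fit) keeps the estimate finite. The data, prior scale, and optimizer settings below are all illustrative assumptions.

```python
import numpy as np

# Toy data with complete separation: the sign of x perfectly
# predicts y, so the maximum-likelihood slope diverges.
x = np.array([-2.0, -1.0, -0.5, 0.5, 1.0, 2.0])
y = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])

def fit_slope(x, y, prior_sd=None, steps=20000, lr=0.1):
    """Gradient ascent on the logistic log-likelihood for a
    single slope (no intercept). With prior_sd set, a Gaussian
    prior on the slope is added, giving a posterior-mode (MAP)
    estimate instead of the unpenalized MLE."""
    b = 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-b * x))      # predicted probabilities
        grad = np.sum((y - p) * x)            # log-likelihood gradient
        if prior_sd is not None:
            grad -= b / prior_sd**2           # gradient of log N(0, prior_sd)
        b += lr * grad
    return b

b_ml = fit_slope(x, y)               # keeps growing: no finite MLE exists
b_map = fit_slope(x, y, prior_sd=2.5)  # prior pulls the estimate to a finite value
print(f"ML slope: {b_ml:.2f}, slope with prior: {b_map:.2f}")
```

Running the unpenalized fit longer only pushes the slope higher, mirroring the non-convergence warnings researchers see in practice; the prior-regularized estimate stabilizes at a moderate value.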
Why is it important?
Convergence issues pose a vexing challenge to language researchers. In the best-case scenario, they cost researchers extra time spent trying different algorithms and/or increasing the number of iterations a model runs for. At worst, the adoption of ad hoc model simplification techniques increases researcher degrees of freedom and may render the results of an analysis wholly invalid, even producing spurious results. We show that Bayesian models are a promising tool for handling convergence issues: they increase the likelihood that a model converges on the desired random effects structure, thereby obviating the need for ad hoc simplification techniques that inflate researcher degrees of freedom and threaten the validity of the results.
Read the Original
This page is a summary of: Confronting Quasi-Separation in Logistic Mixed Effects for Linguistic Data: A Bayesian Approach, Journal of Quantitative Linguistics, August 2018, Taylor & Francis,
DOI: 10.1080/09296174.2018.1499457.