What is it about?
Bayesian reasoning cannot make inferences from an incomplete Bayesian probability model. Using, e.g., maximum entropy to fill in missing priors and/or conditionals leads to point estimates of marginals, and doing a Bayesian sensitivity analysis on the missing priors/conditionals leads to narrow intervals. In our article, we suggest embedding an incomplete Bayesian model in a Dempster-Shafer belief-function model and then finding marginals of the variables of interest. For complete probability models, we get exactly the same results. For incomplete probability models, we get more realistic intervals for the marginals without the need for sensitivity analysis.
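To make this concrete, below is a minimal sketch (illustrative only, not a worked example from the article) of the smallest possible incomplete network: two binary variables X → Y where the prior on X is missing and the conditionals P(Y | X) are known. It compares the maximum-entropy point estimate, the Bayesian sensitivity interval, and the belief/plausibility interval obtained by embedding each conditional as a belief function on X × Y (conditional embedding), treating the missing prior as a vacuous belief function, and combining with Dempster's rule. The variable names and the numbers 0.8 and 0.3 are made up for illustration.

```python
from itertools import product

# Toy incomplete Bayesian network X -> Y with binary X and Y.
# The prior P(X) is missing; only the conditionals P(Y | X) are given.
# The numbers below are illustrative, not taken from the article.
p_y1_given_x1 = 0.8
p_y1_given_x2 = 0.3

# Max-entropy fill-in: set the missing prior to P(X=x1) = 0.5 -> one point estimate.
point = 0.5 * p_y1_given_x1 + 0.5 * p_y1_given_x2

# Bayesian sensitivity analysis: let the missing prior p range over [0, 1];
# P(y1) = p * 0.8 + (1 - p) * 0.3 sweeps the interval below.
sens_low = min(p_y1_given_x1, p_y1_given_x2)
sens_high = max(p_y1_given_x1, p_y1_given_x2)

# Dempster-Shafer sketch: each conditional P(Y | X=x) is embedded as a belief
# function on the joint frame X x Y (conditional embedding), the missing prior
# becomes a vacuous belief function, everything is combined with Dempster's
# rule, and the result is marginalized to Y.
frame = frozenset((x, y) for x in ("x1", "x2") for y in ("y1", "y2"))

def embed(x, other, p_y1):
    """Conditional embedding of P(Y | X=x) as a mass function on X x Y."""
    rest = {(other, "y1"), (other, "y2")}
    return [
        (frozenset({(x, "y1")} | rest), p_y1),
        (frozenset({(x, "y2")} | rest), 1.0 - p_y1),
    ]

def combine(m1, m2):
    """Dempster's rule of combination (no conflict arises in this example)."""
    out = {}
    for (a, wa), (b, wb) in product(m1, m2):
        c = a & b
        if c:
            out[c] = out.get(c, 0.0) + wa * wb
    return list(out.items())

vacuous_prior = [(frame, 1.0)]  # missing prior on X: all mass on the whole frame
m = combine(vacuous_prior,
            combine(embed("x1", "x2", p_y1_given_x1),
                    embed("x2", "x1", p_y1_given_x2)))

# Marginalize to Y and read off belief and plausibility of {Y = y1}.
bel_y1 = sum(w for s, w in m if {y for _, y in s} == {"y1"})
pl_y1 = sum(w for s, w in m if "y1" in {y for _, y in s})

print(f"max-entropy point estimate: {point:.2f}")                        # 0.55
print(f"sensitivity interval:       [{sens_low:.2f}, {sens_high:.2f}]")  # [0.30, 0.80]
print(f"DS [belief, plausibility]:  [{bel_y1:.2f}, {pl_y1:.2f}]")        # [0.24, 0.86]
```

In this toy case the maximum-entropy fill-in gives a single number (0.55), the sensitivity analysis gives the interval [0.30, 0.80], and the DS marginal gives the wider belief/plausibility interval [0.24, 0.86], which reflects complete ignorance about the missing prior rather than hiding it.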
Why is it important?
It is important to know what you don't know. Bayesian sensitivity analysis is not always tractable, and even when it is, it is not correct because it cannot model incomplete information. Dempster-Shafer (DS) theory is more expressive than probability theory for modeling what is known and what is unknown. Making inferences from a DS model is almost as fast as making inferences from a Bayesian probability model, and no sensitivity analysis is needed in the case of an incomplete model.
Read the Original
This page is a summary of: Making inferences in incomplete Bayesian networks: A Dempster-Shafer belief function approach, International Journal of Approximate Reasoning, June 2023, Elsevier. DOI: 10.1016/j.ijar.2023.108967.