What is it about?

This paper demonstrates how Shapley values can provide explainability for credit scorecards built with advanced machine learning models such as XGBoost and Random Forest. The proposed framework offers a level of transparency comparable to that of the traditional logistic regression scorecards used in banking.

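To make this concrete, the sketch below (not the authors' code) illustrates how Shapley values from the open-source shap library can be used to explain an XGBoost credit model. The feature names, synthetic data, and model settings are illustrative assumptions, not details taken from the paper.

import numpy as np
import pandas as pd
import shap
import xgboost as xgb

rng = np.random.default_rng(0)
n = 1000

# Hypothetical applicant features; a real scorecard would use bureau and application data.
X = pd.DataFrame({
    "age": rng.integers(18, 75, n),
    "income": rng.normal(50_000, 15_000, n),
    "utilisation": rng.uniform(0, 1, n),
    "delinquencies": rng.poisson(0.5, n),
})
# Synthetic default flag, loosely driven by utilisation and delinquencies.
y = (0.6 * X["utilisation"] + 0.3 * X["delinquencies"]
     + rng.normal(0, 0.3, n) > 0.8).astype(int)

model = xgb.XGBClassifier(n_estimators=200, max_depth=3, eval_metric="logloss")
model.fit(X, y)

# TreeExplainer computes exact Shapley values for tree ensembles; each value is
# a feature's additive contribution (in log-odds) to one applicant's score.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Per-applicant "reason codes": features pushing this applicant's score up or down.
applicant = 0
contributions = pd.Series(shap_values[applicant], index=X.columns).sort_values()
print(contributions)

Each applicant's contributions can then be read much like scorecard points: which attributes raised or lowered the predicted risk, and by how much.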

Why is it important?

This paper is important because it tackles a key barrier to using advanced machine learning techniques in the highly regulated banking industry, where explainability is crucial. By showing that these techniques can be made transparent, it paves the way for their broader adoption in credit decision-making.

Perspectives

This paper offers a fresh perspective by bridging the gap between advanced machine learning techniques and regulatory demands in the banking industry. It challenges the misconception that complex models lack transparency, demonstrating that methods like Shapley values can deliver the necessary explainability. This could lead to a paradigm shift, encouraging the adoption of more sophisticated models in environments where clear, understandable decisions are a must.

Rivalani Hlongwane
University of Cape Town

Read the Original

This page is a summary of: A novel framework for enhancing transparency in credit scoring: Leveraging Shapley values for interpretable credit scorecards, PLoS ONE, August 2024, PLOS,
DOI: 10.1371/journal.pone.0308718.
You can read the full text, which is open access.
