Integrating Explainable AI Techniques into Credit Scoring Models: Striking a Balance Between Accuracy and Transparency

Authors

  • Zhumashev A. B., Kazakh-British Technical University, Almaty, Kazakhstan
  • Kuatbayeva A. A.

Keywords

credit scoring, explainable AI, model interpretability, regulatory compliance, machine learning

Abstract

In the evolving landscape of financial risk assessment, credit scoring systems must simultaneously deliver high predictive accuracy and maintain transparency to comply with regulatory requirements and foster trust among stakeholders. This study investigates the integration of explainable AI (XAI) techniques into various machine learning models to achieve this balance. We analyze three distinct approaches: inherently interpretable models (e.g., logistic regression and decision trees), high-performing black-box models augmented with post-hoc explainability tools (such as XGBoost paired with SHAP), and hybrid models that combine interpretability with flexibility (notably generalized additive models with boosting).
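The second approach can be illustrated with a minimal sketch, shown below. This is not the authors' code: the use of OpenML's "credit-g" loader for the German Credit dataset, the preprocessing steps, and the XGBoost hyperparameters are all illustrative assumptions.

```python
# Sketch: a black-box XGBoost classifier explained post hoc with SHAP.
# Dataset sourcing via OpenML's "credit-g" and all parameters are
# assumptions for illustration, not the paper's exact setup.
import pandas as pd
import shap
import xgboost as xgb
from sklearn.datasets import fetch_openml
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# German Credit dataset: 1,000 applicants labeled good/bad credit risk
X, y = fetch_openml("credit-g", version=1, return_X_y=True, as_frame=True)
X = pd.get_dummies(X)               # one-hot encode categorical features
y = (y == "bad").astype(int)        # 1 = bad credit risk

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42, stratify=y
)

# High-performing black-box model: gradient-boosted trees
model = xgb.XGBClassifier(n_estimators=300, max_depth=4, learning_rate=0.05)
model.fit(X_train, y_train)
print("AUC-ROC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))

# Post-hoc transparency: TreeSHAP attributes each prediction to individual
# features, yielding per-applicant reason codes usable in compliance audits
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)
shap.summary_plot(shap_values, X_test)
```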

Experiments conducted on the German Credit dataset demonstrate that XGBoost with SHAP delivers the highest predictive accuracy (AUC-ROC: 87%) while offering post-hoc transparency suitable for compliance auditing. In contrast, traditional interpretable models, though easier to explain, incur a moderate accuracy penalty of 6–8%. Hybrid GAMs present a promising trade-off, achieving 85% AUC-ROC with integrated explainability. These findings indicate that financial institutions can confidently deploy advanced AI-based credit scoring models without sacrificing interpretability, provided that suitable XAI strategies are employed.
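The hybrid GAM-with-boosting approach resembles InterpretML's Explainable Boosting Machine; the sketch below assumes that tooling (the library choice and settings are illustrative, not the authors' confirmed implementation) and reuses the train/test split from the previous sketch.

```python
# Sketch: a generalized additive model fitted with boosting, via
# InterpretML's Explainable Boosting Machine. Tooling is an assumption;
# X_train/X_test/y_train/y_test come from the previous sketch.
from interpret import show
from interpret.glassbox import ExplainableBoostingClassifier
from sklearn.metrics import roc_auc_score

# Each feature gets its own shape function learned by boosting, so the
# model stays additive (hence interpretable) while capturing non-linearities
ebm = ExplainableBoostingClassifier(random_state=42)
ebm.fit(X_train, y_train)
print("AUC-ROC:", roc_auc_score(y_test, ebm.predict_proba(X_test)[:, 1]))

# Global explanation: per-feature contribution curves, inspectable
# directly without a separate post-hoc explainer
show(ebm.explain_global())
```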

Published

2025-05-26

How to Cite

Zhumashev, A. B., & Kuatbayeva, A. A. (2025). Integrating Explainable AI Techniques into Credit Scoring Models: Striking a Balance Between Accuracy and Transparency. Theoretical Hypotheses and Empirical Results, (10). Retrieved from https://ojs.scipub.de/index.php/THIR/article/view/6233