Explainable Machine Learning Pipelines for Customer Risk Scoring in Anti-Money Laundering: A Management and Governance Perspective
Abstract
The growing use of machine learning (ML) in Anti-Money Laundering (AML) systems has improved the detection of suspicious customer activity, but the opacity of most ML models raises concerns about transparency, accountability, and regulatory compliance. This study proposes an explainable machine learning pipeline for customer risk scoring that integrates explainable artificial intelligence (XAI) methods with effective management and governance structures. Drawing on socio-technical and responsible AI governance perspectives, the paper develops a conceptual and empirical framework combining model performance and explainability measures relevant to compliance officers and regulators. The proposed pipeline applies interpretable modeling techniques such as SHAP and LIME to identify high-risk customers on real-world AML data and to provide clear, auditable explanations of model decisions. Results indicate that explainable pipelines not only improve detection accuracy but also strengthen stakeholder trust, decision justification, and conformity with emerging regulatory requirements such as the EU AI Act and financial model risk management policies. The research contributes theoretically and practically by offering a governance-based framework for deploying trustworthy, interpretable ML systems in AML settings, thereby bridging the gap between technical model interpretability and managerial accountability.
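To illustrate the kind of auditable explanation the abstract describes, the sketch below shows additive feature attribution, the idea underlying SHAP, for a linear customer risk-scoring model (for linear models the attributions coincide exactly with Shapley values). All feature names, weights, and baseline values are hypothetical and chosen only for illustration; they are not drawn from the paper.

```python
import math

# Illustrative linear risk-scoring model (weights and feature names are
# hypothetical, not taken from the paper).
WEIGHTS = {"cash_deposit_volume": 1.2, "cross_border_transfers": 0.8,
           "account_age_years": -0.5}
BIAS = -2.0
# Background (population-mean) feature values used as the attribution
# baseline, also hypothetical.
BASELINE = {"cash_deposit_volume": 0.4, "cross_border_transfers": 0.2,
            "account_age_years": 3.0}

def risk_logit(x):
    """Raw (pre-sigmoid) risk score of a customer feature vector."""
    return BIAS + sum(WEIGHTS[f] * x[f] for f in WEIGHTS)

def risk_probability(x):
    """Sigmoid of the logit: probability the customer is high-risk."""
    return 1.0 / (1.0 + math.exp(-risk_logit(x)))

def explain(x):
    """Per-feature additive attributions: w_f * (x_f - baseline_f).
    For a linear model these are the exact Shapley values."""
    return {f: WEIGHTS[f] * (x[f] - BASELINE[f]) for f in WEIGHTS}

customer = {"cash_deposit_volume": 2.5, "cross_border_transfers": 1.0,
            "account_age_years": 0.5}
contribs = explain(customer)

# Auditability check: attributions sum exactly to the gap between this
# customer's logit and the baseline logit.
assert abs(sum(contribs.values())
           - (risk_logit(customer) - risk_logit(BASELINE))) < 1e-9

# Report features ordered by magnitude of contribution to the risk score.
for feat, c in sorted(contribs.items(), key=lambda kv: -abs(kv[1])):
    print(f"{feat}: {c:+.2f}")
```

In a deployed pipeline, an attribution report like this would accompany each high-risk flag, giving compliance officers a decision trail they can justify to auditors and regulators.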
Article Details

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.