Explainable AI for Reliable Systems and Financial Security
This session explores the role of Explainable Artificial Intelligence (XAI) in improving transparency and trust in machine learning systems applied to complex real-world problems.
The first talk focuses on software defect prediction, where the growing complexity of modern software systems makes early defect detection essential for maintaining reliability and controlling maintenance costs. The presented framework combines advanced data preprocessing, class-imbalance handling, and the XGBoost algorithm with explainability techniques, enabling accurate defect prediction while providing interpretable insights into the software metrics that contribute most to defect formation.
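The pipeline described above (imbalance handling, a boosted-tree classifier, and feature-level explanations) can be sketched as follows. This is a minimal illustrative sketch, not the presenters' actual framework: the data is synthetic, scikit-learn's GradientBoostingClassifier stands in for XGBoost, sample weights stand in for XGBoost's imbalance handling, and permutation importance stands in for the explainability step.

```python
# Illustrative sketch only -- synthetic data; GradientBoostingClassifier
# stands in for XGBoost, permutation importance for SHAP-style explanations.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic "software metrics" with a 9:1 class imbalance,
# mimicking the rarity of defective modules.
X, y = make_classification(
    n_samples=2000, n_features=8, n_informative=4,
    weights=[0.9, 0.1], random_state=0,
)
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, stratify=y, test_size=0.25, random_state=0,
)

# Up-weight the rare defective class (analogous in spirit to
# XGBoost's scale_pos_weight parameter).
w = np.where(y_tr == 1, (y_tr == 0).sum() / (y_tr == 1).sum(), 1.0)
clf = GradientBoostingClassifier(random_state=0)
clf.fit(X_tr, y_tr, sample_weight=w)
print("test accuracy:", round(clf.score(X_te, y_te), 3))

# Rank the metrics that contribute most to defect predictions.
imp = permutation_importance(clf, X_te, y_te, n_repeats=5, random_state=0)
ranking = np.argsort(imp.importances_mean)[::-1]
print("most influential feature indices:", ranking[:3].tolist())
```

In a real defect-prediction setting the feature indices would correspond to named software metrics (e.g. code churn or complexity measures), turning the ranking into actionable guidance for developers.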
The second talk examines anomaly detection in financial transactions, a critical task for fraud prevention and regulatory compliance. The proposed approach leverages attention-enhanced Variational Autoencoders (VAEs) combined with interpretability methods such as SHAP to identify anomalous transaction patterns. Beyond detecting suspicious activity, the framework provides transaction-level explanations that help analysts understand the factors influencing anomaly classification.
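The core idea of reconstruction-based anomaly scoring with per-feature explanations can be illustrated compactly. This is a hedged sketch under simplifying assumptions, not the presenters' method: the transactions are synthetic, PCA reconstruction error stands in for a VAE's reconstruction-based anomaly score, and per-feature reconstruction error stands in for a SHAP-style transaction-level explanation.

```python
# Illustrative sketch only -- PCA reconstruction error stands in for a
# VAE anomaly score; per-feature error stands in for SHAP explanations.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# "Normal" transactions lie near a low-dimensional subspace of
# the 6 synthetic transaction features.
normal = rng.normal(size=(500, 2)) @ rng.normal(size=(2, 6))
# An anomalous transaction: one feature takes an unusual value.
anomaly = normal[0].copy()
anomaly[3] += 15.0

pca = PCA(n_components=2).fit(normal)

def score(x):
    """Total reconstruction error plus per-feature contributions."""
    recon = pca.inverse_transform(pca.transform(x.reshape(1, -1)))[0]
    per_feature = (x - recon) ** 2
    return per_feature.sum(), per_feature

normal_score, _ = score(normal[1])
anom_score, contrib = score(anomaly)
print("normal score:", round(float(normal_score), 6))
print("anomaly score:", round(float(anom_score), 3))
# The per-feature breakdown tells the analyst which transaction
# fields drove the anomaly score.
print("top contributing feature:", int(np.argmax(contrib)))
```

The same pattern scales up: a trained VAE flags transactions with high reconstruction (or low likelihood) scores, and an attribution method decomposes each score into per-field contributions that an analyst can review.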
Together, these presentations demonstrate how explainable AI methods can bridge the gap between high-performing machine learning models and the transparency required for real-world adoption in both software engineering and financial security.
Speaker(s): Srikanth, Sowjanya
Virtual: https://events.vtools.ieee.org/m/546377