Towards good practices for applying eXplainable Artificial Intelligence (XAI) for forecasting
Virtual Mobility Grant
Applicant name:
Branka Hadji Misheva
Start date:
30.10.2023
End date:
31.10.2023
Applicant institution:
Bern University of Applied Sciences (BFH)
Purpose of the grant:
Machine learning and deep learning have become increasingly prevalent in financial prediction and forecasting tasks, offering advantages such as enhanced customer experience, broader access to financial services, improved consumer protection, and stronger risk management. However, these complex models often lack transparency and interpretability, which makes them challenging to deploy in sensitive domains such as finance. This has led to the rise of eXplainable Artificial Intelligence (XAI) methods aimed at producing models whose behaviour can be readily understood by humans. Classical XAI methods, such as LIME and SHAP, have been developed to provide explanations for complex models. While these methods have made significant contributions, they also have limitations, including computational complexity, inherent model bias, sensitivity to data sampling, and challenges in dealing with feature dependence. In this context, we have written a paper that explores good practices for deploying explainability in AI-based systems for finance, emphasizing the importance of data quality, audience-specific methods, consideration of data properties, and the stability of explanations. These practices aim to address the unique challenges and requirements of the financial industry and to guide the development of effective XAI tools. Some key takeaways:
– Data quality is emphasized as the foundation of any AI-based system. Ensuring that data is accurate, consistent, and complete is paramount, as inaccurate or incomplete data can lead to misleading model outputs and explanations. Data preprocessing and feature engineering play essential roles in shaping the quality of the input data; a minimal sketch of such checks is given after this list.
– Tailoring explainability methods to the specific audience is another critical practice. Different stakeholders, including financial experts, non-technical audiences, regulators, and auditors,
have varying levels of expertise and requirements for explanations. Providing explanations that are appropriate for the audience’s needs enhances understanding and trust in AI-based
systems.
– Consideration of data properties, particularly feature dependence, is highlighted as a crucial factor. Financial data often exhibits feature interdependence, multicollinearity, and time-series characteristics. Applying XAI methods without addressing these properties can result in inaccurate interpretations, so specialized techniques and approaches that account for these characteristics should be employed; a simple multicollinearity screen of the kind sketched after this list is a useful first step.
– Stability of explanations is recognized as a fundamental aspect of deploying explainability in finance. In the dynamic environment of financial markets, where conditions and data can change rapidly, ensuring that explanations remain stable is crucial. XAI methods should therefore be rigorously tested for stability, and robustness to local perturbations of the input should be a fundamental acceptance criterion; a simple perturbation test is sketched after this list.
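As an illustration of the data-quality point, the following is a minimal sketch of routine checks one might run before fitting a model. The file name ("loan_data.csv") and any column semantics are hypothetical, and the checks are generic rather than the specific pipeline described in the paper.

# Minimal sketch of basic data-quality checks before model fitting.
import pandas as pd

df = pd.read_csv("loan_data.csv")  # hypothetical dataset

# Missing values per column: incomplete data can silently distort both
# model outputs and the explanations built on top of them.
print(df.isna().mean().sort_values(ascending=False))

# Exact duplicates inflate the apparent evidence for some patterns.
print("duplicate rows:", df.duplicated().sum())

# Constant columns carry no signal but may still receive spurious
# attributions from some XAI methods.
constant_cols = [c for c in df.columns if df[c].nunique() <= 1]
print("constant columns:", constant_cols)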
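For the feature-dependence point, a common first screen is the variance inflation factor (VIF). The sketch below uses synthetic data and statsmodels purely for illustration; it is one possible diagnostic, not the paper's methodology.

# Minimal sketch of a multicollinearity screen using variance inflation factors.
import numpy as np
import pandas as pd
from statsmodels.stats.outliers_influence import variance_inflation_factor

rng = np.random.default_rng(0)
# Synthetic example: x2 is nearly a copy of x1, so the pair is highly collinear.
x1 = rng.normal(size=500)
X = pd.DataFrame({"x1": x1,
                  "x2": x1 + 0.05 * rng.normal(size=500),
                  "x3": rng.normal(size=500)})

exog = np.column_stack([np.ones(len(X)), X.values])  # intercept + features
vifs = {col: variance_inflation_factor(exog, i + 1) for i, col in enumerate(X.columns)}
# Values well above roughly 5-10 suggest strong dependence; attributions for
# such features should be interpreted jointly or with dependence-aware explainers.
print(pd.Series(vifs).sort_values(ascending=False))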
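For the stability point, one simple diagnostic is to re-compute an instance's attributions under small input perturbations and measure how much they move. The sketch below uses a random-forest regressor on synthetic data with SHAP's TreeExplainer; the model, noise scale, and cosine-similarity measure are illustrative assumptions, not a prescribed protocol.

# Minimal sketch of a local-stability check for explanations.
import numpy as np
import shap
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

X, y = make_regression(n_samples=500, n_features=8, noise=0.1, random_state=0)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
explainer = shap.TreeExplainer(model)

x0 = X[0:1]                      # instance to explain
base = explainer.shap_values(x0)[0]

# Perturb the instance slightly and re-explain; unstable explanations
# show large swings in the attribution vector even for tiny input changes.
rng = np.random.default_rng(0)
sims = []
for _ in range(20):
    x_pert = x0 + 0.01 * X.std(axis=0) * rng.normal(size=x0.shape)
    attr = explainer.shap_values(x_pert)[0]
    cos = np.dot(base, attr) / (np.linalg.norm(base) * np.linalg.norm(attr) + 1e-12)
    sims.append(cos)

print("mean cosine similarity under perturbation:", np.mean(sims))

A low average similarity would indicate that the explanations, and not only the predictions, are sensitive to noise in the input.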