SS06: Explainable and Interpretable Machine Learning (xAI) with a focus on applications

Alfredo Vellido

Universitat Politècnica de Catalunya, Spain

Carlos Cano Domingo

Universitat Politècnica de Catalunya, Spain

Abstract

As the implementation and use of machine learning (ML) systems continue to gain significance in contemporary society, the focus is swiftly shifting from purely optimizing model performance towards building models that are both understandable and interpretable. This new emphasis stems from a growing need for applications that not only solve complex problems with high accuracy, but also provide clear, transparent insights into their decision-making processes for a range of end-users and stakeholders. In Europe, this must also be understood in the context of new regulations such as the Artificial Intelligence Act, with its risk-based model transparency requirements. As a result, there is a surge of interest in techniques and methodologies that enable model explainability and interpretability, paving the way for more trustworthy and user-friendly AI solutions.

The aim of this special session is to gather researchers working on Explainable AI (xAI) in ML, with a strong emphasis on the practical applications of this framework. Its primary goal is to present innovative methods that make ML models more interpretable, transparent, and trustworthy while preserving their performance. We also invite contributions that go beyond theory, showcasing tangible real-world implementations in different application scenarios. By centering on application-driven insights, this session seeks to bridge the gap between foundational research and operational solutions, ultimately steering the ML community toward more responsible and societally beneficial AI technologies.

We are seeking contributions that address practical applications, presenting innovative approaches and technological advances in xAI. Topics of interest include, but are not limited to:

  • Explainable methods in medicine and healthcare
  • Business and public governance applications of xAI
  • Explainable biomedical knowledge discovery with ML
  • xAI in agriculture, forestry and environmental applications
  • xAI and human-computer interaction
  • xAI methods for linguistics & machine translation
  • Explainability in decision-support systems
  • Best practices for presenting model explanations to non-technical stakeholders
  • Auto-encoders & explainability of latent spaces
  • Causal inference & explanations
  • Post-hoc methods for explainability (see the illustrative sketch after this list)
  • Reinforcement learning for enhancing xAI systems
  • xAI for Deep Learning methods
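To make the kinds of contributions in scope more concrete, the sketch below illustrates one widely used post-hoc, model-agnostic explainability technique: permutation feature importance. It is a minimal example only, assuming scikit-learn and a synthetic dataset standing in for a real application; submissions are in no way restricted to this technique or toolkit.

    # Minimal sketch: post-hoc, model-agnostic explanation via
    # permutation feature importance (scikit-learn assumed installed).
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    # Synthetic data stands in for a real application dataset.
    X, y = make_classification(n_samples=500, n_features=8,
                               n_informative=3, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

    # Shuffle each feature on held-out data and measure the drop in
    # accuracy; large drops flag features the model relies on.
    result = permutation_importance(model, X_te, y_te,
                                    n_repeats=20, random_state=0)
    for i in result.importances_mean.argsort()[::-1]:
        print(f"feature {i}: {result.importances_mean[i]:.3f} "
              f"+/- {result.importances_std[i]:.3f}")

Explanations of this simple, feature-level kind are also easy to communicate to non-technical stakeholders, which is exactly the sort of practical consideration the session welcomes.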

Organizers

Dr. Alfredo Vellido is a Full Professor of AI at Universitat Politècnica de Catalunya (UPC). He is a Principal Investigator in the SGR Intelligent Data Science and Artificial Intelligence (IDEAI-UPC) Research Center, where he coordinates the Health thematic area. He chairs the IEEE CIS Explainable Machine Learning (EXML) Task Force and is a member of the Ethical, Legal, Social, Environmental and Human Dimensions of AI/CI (SHIELD) Technical Committee. He is a founding member of the Spanish Sociedad de Inteligencia Artificial en Biomedicina (IABiomed) and a member of the CIBER-BBN and XarTec Salut research networks.

Dr. Carlos Cano Domingo is an Associate Professor at the Universitat Politècnica de Catalunya (UPC) and a Research Associate at the Barcelona Supercomputing Center. His research focuses on developing hybrid deep learning systems tailored to real-world challenges, with a particular emphasis on renewable energy applications, specifically battery degradation modeling and grid-level energy optimization. In recent years, his pursuit of reliable, trustworthy solutions has led him to deepen his expertise in various aspects of Explainable AI (xAI).