Call for Papers
Overview
Machine learning models have demonstrated remarkable success across various domains, yet their decision-making processes often remain opaque. This black-box nature limits their adoption in critical applications, including healthcare, finance, and autonomous systems. In sequential decision-making, where models interact with dynamic environments over time, the challenges of interpretability are further amplified. Understanding the internal mechanisms of these models is crucial for ensuring their reliability, fairness, and alignment with human values.
The “Decoding Decisions” workshop aims to advance the field of explainability in machine learning and sequential decision-making by bringing together researchers from diverse backgrounds. We seek to explore methodologies for interpreting decisions, evaluating the impact of explanations, and fostering trust in artificial decision-making systems. The workshop will focus on three primary objectives: (i) developing theoretical and empirical approaches to improve explainability in machine learning and sequential decision-making, (ii) introducing new benchmarks and evaluation protocols for assessing the interpretability of machine learning systems, and (iii) analyzing the societal and ethical implications of explainable decision-making.
Objectives
While significant progress has been made in explaining predictions from supervised machine learning models, many challenges remain in achieving truly transparent and interpretable systems. Most current explainability methods rely on post-hoc interpretations, which may not faithfully represent the underlying model behavior. Additionally, existing frameworks often focus on static settings, failing to capture the complexities of both machine learning models and multi-step decision processes. The need for more robust, generalizable, and actionable explainability methods is critical as machine learning and sequential decision-making systems are increasingly deployed in real-world applications.
This workshop aims to address these gaps by:
- Developing interpretability techniques tailored for machine learning models, reinforcement learning, planning, and decision-making under uncertainty.
- Investigating causal explanations that provide insights into why models make specific predictions or take particular actions rather than simply describing their outputs.
- Examining human-centric explanations that align with cognitive models of decision-making to improve AI-human collaboration across different learning and decision frameworks.
- Evaluating the impact of explanations on user trust, usability, and ethical considerations in real-world machine learning applications and decision-making systems.
Topics
We invite submissions that explore various aspects of explainability in machine learning and sequential decision-making, including but not limited to:
- Foundations of Explainability in Sequential Decision Making
  - Theoretical perspectives on interpretability in reinforcement learning and planning
  - Causal reasoning and counterfactual explanations for decision models
  - Novel methods for quantifying and evaluating explainability in sequential settings
- Methods for Improving Explainability in ML and Decision Making
  - Policy distillation and rule-based approximations for interpretable decision policies
  - Model-agnostic vs. model-specific explanation techniques for decision-making models
  - Generating natural language and visual explanations for reinforcement learning agents
  - Attribution methods for action selection in Markov Decision Processes (MDPs)
- Explainability Benchmarks and Evaluation
  - Development of new datasets and evaluation frameworks for explainable sequential models
  - Comparative studies of existing explainability techniques in decision-making tasks
  - Human subject studies to assess the effectiveness of different explanation methods
- Societal and Ethical Considerations
  - Fairness and bias detection in decision-making processes
  - Explainability in high-stakes applications (healthcare, finance, autonomous systems)
  - The role of transparency in aligning AI decisions with human values
  - Regulatory and policy perspectives on explainability in decision-making systems
Important Dates
- Submission Deadline: 6th May, 2025, 11:59 PM (AoE)
- Acceptance Notification: 14th May, 2025
- Camera-Ready Deadline: 18th May, 2025, 11:59 PM (AoE)
Camera-Ready Instructions
To be announced soon.
Submission Details
We invite researchers and practitioners to submit a one-page abstract (excluding references) related to the topics outlined above. Submissions should clearly articulate the research problem, methodology, key findings (if applicable), and relevance to explainability in machine learning and sequential decision-making. Abstracts should use the provided template.
Please send your submissions to explainableml2025@gmail.com by the submission deadline. Accepted abstracts will be presented as either oral talks or posters, depending on the outcome of the review process.
Questions
For any inquiries, please contact us at explainableml2025@gmail.com.