NSF Causal Foundations for Decision Making and Learning



October 1st, 2023


Principal Investigators
Abstract

Artificial intelligence (AI) has become ubiquitous in our daily lives, and the importance of decision-making as a scientific challenge has increased dramatically. Decisions that were once made by humans are increasingly delegated to automated systems or made with their assistance. Despite substantial recent progress, however, the current generation of AI technology lacks explainability, robustness, and adaptability, which hinders trust in AI. There is growing recognition that robust decision-making requires an understanding of the often complex and dynamic causal mechanisms underlying the environment, yet most current AI formalisms lack an explicit treatment of causal mechanisms. This project brings together the power of causal modeling and AI decision-making and learning to produce AI systems that rely on less data, better justify and explain their decisions to people, react better to new circumstances, and are consequently safer and more trustworthy. The project produces new foundations (principles, methods, and tools) for causal decision-making systems. It enriches traditional AI formalisms with causal ingredients for more efficient, robust, generalizable, and explainable decision-making, with the potential to fundamentally transform the AI decision-making field. The theory will be evaluated through real-world use cases in robotics and public health. The researchers will undertake extensive educational efforts and develop training content with a focus on mentorship and on broadening the participation of underrepresented groups. The team will engage in knowledge-transfer activities, including authoring an introductory book on causal decision-making and organizing events to discuss AI and decision-making topics.

This project integrates the framework of structural causal models with the leading approaches for decision-making in AI, including model-based planning with Markov decision processes and their extensions, reinforcement learning, and graphical models such as influence diagrams. The outcomes enrich traditional AI decision-making with causal modeling toward more efficient, robust, generalizable, and explainable decision-making systems. In three thrusts, the project develops new foundations (principles, theory, and algorithms) and provides a common unified framework for causality-empowered decision-making that generalizes the leading decision-making approaches. Thrust 1 studies essential aspects of causal decision-making that guarantee the decisions of autonomous agents and decision-support systems are robust, sample-efficient, and precise. These goals are realized by developing methods for causality-integrated online and offline policy learning, interventional planning, imitation learning, curriculum learning, knowledge transfer, and adaptation. Thrust 2 studies additional aspects of causal decision-making that are especially important for decision-support systems with humans in the loop, including how to exploit causality to construct explanations, decide when to involve humans, and endow systems with competence awareness and the ability to make fair decisions aligned with the values of their users. Thrust 3 enhances the scalability of the resulting tools and their ability to reason efficiently, to trade off both between multiple objectives and between explainability and decision quality, and to learn a causal model of the world. Together, these thrusts will contribute to a new generation of powerful AI tools for developing autonomous agents and decision-support systems.

Research Team
    PhD Students
  • Mingxuan Li (Columbia University)
  • Aurghya Maiti (Columbia University)
  • Adiba Ejaz (Columbia University)
  • Arvind Raghavan (Columbia University)
  • Anna Raichev (UC Irvine)
  • Kyungmin Kim (UC Irvine)
  • Nicholas Cohen (UC Irvine)
  • Jiapeng Zhao (UC Irvine)
  • Michael Mulder (UC Irvine)
  • Saaduddin Mahmud (UMass Amherst)
  • Abhinav Bhati (UMass Amherst)
Grant Information
Award Abstract # 2321786
CISE: Large: Causal Foundations for Decision Making and Learning
NSF Org:
CNS
Division Of Computer and Network Systems
Principal Investigators:
Elias Bareinboim
Rina Dechter
Shlomo Zilberstein
Sven Koenig
Jin Tian
NSF Program(s):
CISE Core: Large Projects
For more information see the NSF award page.
Research

2024

[C022] PDF
Yuta Kawakami, Manabu Kuroki, Jin Tian.
"Probabilities of Causation for Continuous and Vector Variables".
Proceedings of the Fortieth Conference on Uncertainty in Artificial Intelligence, 2024.

[C021] PDF
Yuta Kawakami, Manabu Kuroki, Jin Tian.
"Identification and Estimation of Conditional Average Partial Causal Effects via Instrumental Variable".
Proceedings of the Fortieth Conference on Uncertainty in Artificial Intelligence, 2024.

[C020] PDF
Vincent Hsiao, Dana S. Nau, Bobak Pezeshki, Rina Dechter.
"Surrogate Bayesian Networks for Approximating Evolutionary Games".
Proceedings of The 27th International Conference on Artificial Intelligence and Statistics, 2024.

[C019] PDF
Saaduddin Mahmud, Marcell Vazquez-Chanlatte, Stefan J. Witwicki, Shlomo Zilberstein.
"Explaining the Behavior of POMDP-based Agents Through the Impact of Counterfactual Information".
Proceedings of the International Conference on Autonomous Agents and Multiagent Systems (AAMAS), 2024.

[C018] PDF
Kyungmin Kim, Charless Fowlkes, Roy Fox.
"Make the Pertinent Salient: Task-Relevant Reconstruction for Visual Control with Distraction".
Workshop on Training Agents with Foundation Models at RLC 2024, 2024.

[C017] PDF
Davide Corsi, Guy Amir, Andoni Rodríguez, Guy Katz, César Sánchez, Roy Fox.
"Verification-Guided Shielding for Deep Reinforcement Learning".
Reinforcement Learning Journal, 2024.

[C016] PDF
Chi Zhang, Ang Li, Scott Mueller, Rumen Iliev.
"Causal AI Framework for Unit Selection in Optimizing Electric Vehicle Procurement".
2nd Workshop on Sustainable AI, 2024.

[C015] PDF
Bobak Pezeshki, Kalev Kask, Alexander Ihler, Rina Dechter.
"Value-Based Abstraction Functions for Abstraction Sampling".
Proceedings of the Fortieth Conference on Uncertainty in Artificial Intelligence, 2024.

[C014] PDF
Armin Karamzade, Kyungmin Kim, Montek Kalsi, Roy Fox.
"Reinforcement Learning from Delayed Observations via World Models".
Reinforcement Learning Journal, 2024.

[C013] PDF
Anna Raichev, Alexander Ihler, Jin Tian, Rina Dechter.
"Estimating Causal Effects from Learned Causal Networks".
9th Causal Inference Workshop at UAI, 2024.

[C012] PDF
Ang Li, Judea Pearl.
"Probabilities of Causation with Nonbinary Treatment and Effect".
Proceedings of the AAAI Conference on Artificial Intelligence, 2024.

[C011] PDF
Ang Li, Judea Pearl.
"Unit Selection with Nonbinary Treatment and Effect".
Proceedings of the AAAI Conference on Artificial Intelligence, 2024.

[C010] PDF
Alexis Bellot, Junzhe Zhang, Elias Bareinboim.
"Scores for Learning Discrete Causal Graphs with Unobserved Confounders".
Proceedings of the AAAI Conference on Artificial Intelligence, 2024.

2023

[C009] PDF
Yonghan Jung, Jin Tian, Elias Bareinboim.
"Estimating Joint Treatment Effects by Combining Multiple Experiments".
Proceedings of the 40th International Conference on Machine Learning, 2023.

[C008] PDF
Yonghan Jung, Ivan Diaz, Jin Tian, Elias Bareinboim.
"Estimating Causal Effects Identifiable from a Combination of Observations and Experiments".
Advances in Neural Information Processing Systems, 2023.

[C007] PDF
Tara V. Anand, Adele H. Ribeiro, Jin Tian, Elias Bareinboim.
"Causal Effect Identification in Cluster DAGs".
Proceedings of the AAAI Conference on Artificial Intelligence, 2023.

[C006] PDF
Kevin Muyuan Xia, Yushu Pan, Elias Bareinboim.
"Neural Causal Models for Counterfactual Identification and Estimation".
The Eleventh International Conference on Learning Representations, 2023.

[C005] PDF
Kangrui Ruan, Junzhe Zhang, Xuan Di, Elias Bareinboim.
"Causal Imitation Learning via Inverse Reinforcement Learning".
The Eleventh International Conference on Learning Representations, 2023.

[C004] PDF
Julius von Kügelgen, Michel Besserve, Wendong Liang, Luigi Gresele, Armin Kekić, Elias Bareinboim, David Blei, Bernhard Schölkopf.
"Nonparametric Identifiability of Causal Representations from Unknown Interventions".
Thirty-seventh Conference on Neural Information Processing Systems, 2023.

[C003] PDF
Drago Plecko, Elias Bareinboim.
"Causal Fairness for Outcome Control".
Advances in Neural Information Processing Systems, 2023.

[C002] PDF
Adam Li, Amin Jaber, Elias Bareinboim.
"Causal discovery from observational and interventional data across multiple environments".
Advances in Neural Information Processing Systems, 2023.

[C001] PDF
Abhinav Bhatia, Samer Nashed, Shlomo Zilberstein.
"RL³: Boosting Meta Reinforcement Learning via RL inside RL²".
NeurIPS 2023 Workshop on Generalization in Planning, 2023.

For questions please contact us at nsf-causal-dm@gmail.com