CODEX: A Cluster-Based Method for Explainable Reinforcement Learning
AIS research scientists Timothy Mathes, PhD, and Andrés Colón have published a research paper titled CODEX: A Cluster-Based Method for Explainable Reinforcement Learning. The research was conducted in collaboration with the Georgia Tech Research Institute and the Air Force Research Laboratory.
“Our research proposes a method for AI explainability that fuses techniques from Computer Vision and Natural Language Processing,” said Mathes. “We believe that investigating interdisciplinary approaches to understanding AI decision-making will be crucial for harnessing the power of intelligent systems.”
The team presented the paper virtually at the Explainable AI Workshop of the International Joint Conference on Artificial Intelligence (IJCAI) on August 31, 2023.
Abstract:
Despite the impressive feats demonstrated by Reinforcement Learning (RL), these algorithms have seen little adoption in high-risk, real-world applications due to current difficulties in explaining RL agent actions and building user trust. We present Counterfactual Demonstrations for Explanation (CODEX), a method that incorporates semantic clustering to effectively summarize RL agent behavior in the state-action space. Experimentation in the MiniGrid and StarCraft II gaming environments reveals that the semantic clusters retain temporal as well as entity information, which is reflected in the constructed summary of agent behavior. Furthermore, clustering the discrete+continuous game-state latent representations identifies the most crucial episodic events, demonstrating a relationship between the latent and semantic spaces. This work contributes to the growing body of research that strives to unlock the power of RL for widespread use by leveraging and extending techniques from Natural Language Processing.
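To give a rough sense of the clustering idea described in the abstract, the sketch below clusters toy "latent game-state" vectors with a minimal k-means loop. This is purely illustrative under assumed data: it is not the authors' CODEX implementation, and the toy vectors are hypothetical stand-ins for the paper's discrete+continuous latent representations.

```python
import numpy as np

def kmeans(points, k, iters=50):
    """Minimal Lloyd's k-means: assign points to nearest centroid, recompute.

    Deterministic initialization (first and last points) for simplicity;
    real applications would use a smarter scheme such as k-means++.
    """
    idx = np.linspace(0, len(points) - 1, k).astype(int)
    centroids = points[idx].astype(float).copy()
    for _ in range(iters):
        # Distance from every point to every centroid.
        dists = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for c in range(k):
            if (labels == c).any():
                centroids[c] = points[labels == c].mean(axis=0)
    return labels, centroids

# Hypothetical latent state vectors: two well-separated "episode phases".
rng = np.random.default_rng(0)
states = np.vstack([
    rng.normal(0.0, 0.1, size=(20, 2)),  # early-episode states
    rng.normal(5.0, 0.1, size=(20, 2)),  # late-episode states
])
labels, centroids = kmeans(states, k=2)
```

In the paper itself, clustering is applied to semantic and latent representations of agent episodes to surface key events; this toy example only shows the generic mechanics of grouping state vectors.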