Explainable reasoning for remote autonomous agents
Abstract
Remote autonomous robots are increasingly deployed for demanding tasks such as underwater exploration and pipeline inspection, providing valuable ecological insights and
generating commercial benefits. However, human-in-the-loop applications in this domain
face significant challenges, including a lack of direct supervision, bandwidth limitations,
and operators' limited technical understanding of the underlying autonomous systems. Ensuring
situational awareness and trust is critical for the broader adoption of these technologies.
This research project addresses these challenges by developing novel methodologies for
transparent and explainable autonomy.
The work focuses on two primary objectives: generating explanation content and communicating it effectively through natural language. The first objective is addressed by fusing domain knowledge with the robot's state and by building simplified models of the autonomy using surrogate techniques. For the second objective, explanation content is communicated using both template-based and language model-based approaches that support causal, counterfactual, and contrastive explanations. User preferences for these explanation styles are evaluated, and the effectiveness of model-based explanations is compared with that of template-based alternatives.
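As a concrete illustration of the two objectives named above, the sketch below fits a shallow decision-tree surrogate to a stand-in black-box autonomy policy and then fills a simple causal explanation template from the surrogate's decision path. This is a minimal sketch under assumed conditions, not the project's implementation: the autonomy_policy function, the state features, and the template wording are hypothetical placeholders.

```python
# Minimal illustrative sketch: approximate a black-box autonomy policy with a
# shallow decision tree, then generate a template-based causal explanation.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Hypothetical robot state features and action labels (assumptions, not from the source).
FEATURES = ["battery_level", "current_m_s", "distance_to_pipe_m"]
ACTIONS = ["continue_inspection", "return_to_base"]

def autonomy_policy(state: np.ndarray) -> int:
    """Stand-in for the opaque on-board autonomy we want to approximate."""
    battery, current, _distance = state
    return 1 if battery < 0.3 or current > 1.5 else 0

# Sample robot states and record the black-box decisions.
rng = np.random.default_rng(0)
states = rng.uniform([0.0, 0.0, 0.0], [1.0, 2.0, 50.0], size=(500, 3))
actions = np.array([autonomy_policy(s) for s in states])

# Fit a shallow, human-readable surrogate model of the autonomy.
surrogate = DecisionTreeClassifier(max_depth=2).fit(states, actions)

def explain(state: np.ndarray) -> str:
    """Fill a causal explanation template from the surrogate's decision path."""
    x = state.reshape(1, -1)
    path_nodes = surrogate.decision_path(x).indices
    leaf = surrogate.apply(x)[0]
    reasons = []
    for node in path_nodes:
        if node == leaf:
            continue  # leaf nodes carry no split condition
        feat = surrogate.tree_.feature[node]
        thr = surrogate.tree_.threshold[node]
        relation = "<=" if state[feat] <= thr else ">"
        reasons.append(f"{FEATURES[feat]} ({state[feat]:.2f}) {relation} {thr:.2f}")
    action = ACTIONS[surrogate.predict(x)[0]]
    return f"I chose to {action} because " + " and ".join(reasons) + "."

print(explain(np.array([0.25, 0.4, 12.0])))
```

In this toy setting the surrogate's split conditions serve as the explanation content, and the template turns them into natural language; a language model-based generator would replace only the final templating step.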
The findings show that both surrogate and language models approximate the underlying autonomy with satisfactory accuracy. Moreover, this work identifies the explanation styles that most enhance situational awareness. These results advance transparent and explainable autonomy, fostering greater trust in remote autonomous robots and supporting their adoption in challenging applications.