Explainable Reinforcement and Causal Learning for Improving Trust Among 6G Stakeholders
15 October 2025

As telecommunications evolve toward AI-native 6G systems, artificial intelligence will become deeply embedded within network infrastructures to deliver intelligent, seamless, and user-centric services. Yet this integration also introduces complex trust and safety challenges. Future machine learning systems, powered by deep reinforcement learning (DRL), will draw on diverse and dynamic datasets to enable autonomous decision-making across networks.
This white paper examines the urgent need for explainable deep reinforcement learning (X-DRL) to ensure transparency, accountability, and reliability in next-generation networks. It identifies three critical challenges: the need for stakeholder-specific explainability across providers, regulators, and end-users; the absence of long-term behavioural models for DRL agents; and the dominance of correlation-based, rather than causal, reasoning in current explainability research.
By mapping stakeholder needs to emerging X-DRL methodologies and showcasing practical 6G case studies, this paper aims to guide future research, inform industry practice, and foster trustworthy AI integration within the 6G ecosystem.