Flexible attention-based multi-policy fusion for efficient deep reinforcement learning

Proc. Advances in Neural Information Processing Systems (NeurIPS), 2023

Zih-Yun Chiu, Yi-Lin Tuan, William Yang Wang, Michael Yip

Abstract: Reinforcement learning (RL) agents have long sought to approach the efficiency of human learning. Humans are great observers who can learn by aggregating external knowledge from various sources, including observations from others’ policies of attempting a task. Prior studies in RL have incorporated external knowledge policies to help agents improve sample efficiency. However, it remains non-trivial to perform arbitrary combinations and replacements of those policies, an essential feature for generalization and transferability. In this work, we present Knowledge-Grounded RL (KGRL), an RL paradigm fusing multiple knowledge policies and aiming for human-like efficiency and flexibility. We propose a new actor architecture for KGRL, Knowledge-Inclusive Attention Network (KIAN), which allows free knowledge rearrangement due to embedding-based attentive action prediction. KIAN also addresses entropy imbalance, a problem arising in maximum entropy KGRL that hinders an agent from efficiently exploring the environment, through a new design of policy distributions. The experimental results demonstrate that KIAN outperforms alternative methods incorporating external knowledge policies and achieves efficient and flexible learning. Our implementation is available at http://github.com/Pascalson/KGRL.git.
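
To make the abstract's "embedding-based attentive action prediction" concrete, below is a minimal sketch (not the authors' code, and not taken from the linked repository) of how an actor might fuse an inner learnable policy with several external knowledge policies via attention over per-policy embeddings. All names and shapes here (KnowledgeFusionActor, embed_dim, the mixture-style fusion) are illustrative assumptions; the actual KIAN architecture and its entropy-balanced policy distributions are defined in the paper.

```python
# Hypothetical sketch of attention-based multi-policy fusion, written in PyTorch.
# It is NOT the KIAN implementation; it only illustrates the general idea of
# keying learnable policy embeddings with the current state and taking an
# attention-weighted combination of action proposals.
import torch
import torch.nn as nn


class KnowledgeFusionActor(nn.Module):
    """Fuses an inner learnable policy with K frozen external knowledge policies
    via attention over learnable policy embeddings queried by the current state."""

    def __init__(self, state_dim, action_dim, num_knowledge, embed_dim=64):
        super().__init__()
        # Inner policy: maps state -> action proposal.
        self.inner_policy = nn.Sequential(
            nn.Linear(state_dim, 128), nn.ReLU(), nn.Linear(128, action_dim)
        )
        # One learnable embedding per policy (inner + K external). Adding or
        # replacing a knowledge policy only changes this embedding table,
        # which is what enables free rearrangement of knowledge sources.
        self.policy_keys = nn.Parameter(torch.randn(num_knowledge + 1, embed_dim))
        # Query network: maps state -> query vector for attention.
        self.query_net = nn.Linear(state_dim, embed_dim)

    def forward(self, state, knowledge_actions):
        """state: (B, state_dim); knowledge_actions: (B, K, action_dim),
        action proposals queried from the external knowledge policies."""
        inner_action = self.inner_policy(state).unsqueeze(1)         # (B, 1, A)
        proposals = torch.cat([inner_action, knowledge_actions], 1)  # (B, K+1, A)
        query = self.query_net(state)                                # (B, E)
        scores = query @ self.policy_keys.T                          # (B, K+1)
        weights = torch.softmax(scores, dim=-1)                      # attention
        # Fused action = attention-weighted combination of all proposals.
        return (weights.unsqueeze(-1) * proposals).sum(dim=1), weights


# Tiny usage example with random inputs.
if __name__ == "__main__":
    actor = KnowledgeFusionActor(state_dim=8, action_dim=2, num_knowledge=3)
    s = torch.randn(4, 8)
    k_actions = torch.randn(4, 3, 2)
    action, attn = actor(s, k_actions)
    print(action.shape, attn.shape)  # torch.Size([4, 2]) torch.Size([4, 4])
```

For the paper's exact actor design, policy distributions, and training setup, see the official implementation linked in the abstract.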

Chiu et al. (2023) Flexible attention-based multi-policy fusion for efficient deep reinforcement learning, Proc. Advances in Neural Information Processing Systems (NeurIPS), vol. 36, pp. 13590-13612.

Pub Link: http://proceedings.neurips.cc/paper_files/paper/2023/hash/2c23b3c72127e15fedc276722faee927-Abstract-Conference.html
arXiv: