Learning Context-aware Task Reasoning for Efficient Meta-reinforcement Learning


Abstract

Despite the recent success of deep network-based Reinforcement Learning (RL), achieving human-level efficiency in learning novel tasks remains elusive. Previous efforts address this challenge with meta-learning strategies, but they typically suffer from sample inefficiency when built on on-policy RL algorithms, or from meta-overfitting when built on off-policy learning. In this work, we propose a novel meta-RL strategy that addresses these limitations. In particular, we decompose the meta-RL problem into three sub-tasks, task-exploration, task-inference, and task-fulfillment, instantiated with two deep network agents and a task encoder. During meta-training, our method learns a task-conditioned actor network for task-fulfillment, an explorer network with self-supervised reward shaping that encourages task-informative experiences during task-exploration, and a context-aware graph-based task encoder for task inference. Extensive experiments on several public benchmarks show that our algorithm effectively performs exploration for task inference, improves sample efficiency during both training and testing, and mitigates the meta-overfitting problem.
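The decomposition described in the abstract can be illustrated with a minimal sketch. All names, shapes, and the mean-pooling aggregator below are illustrative assumptions (the paper's actual encoder is graph-based and context-aware); this only shows how a task latent inferred from context can condition an actor, and how a change in task belief can serve as a self-supervised exploration signal:

```python
import numpy as np

# Hypothetical sketch of the three sub-tasks from the abstract:
# an explorer collects context, a task encoder infers a latent task
# variable, and a task-conditioned actor acts on (state, latent).
# Names and shapes are assumptions, not the paper's implementation.

rng = np.random.default_rng(0)

def encode_context(transitions):
    """Aggregate context transitions (s, a, r, s') into a task latent z.
    A permutation-invariant mean pool stands in for the paper's
    context-aware graph-based task encoder."""
    feats = np.stack([np.concatenate([s, a, [r], s2])
                      for (s, a, r, s2) in transitions])
    return feats.mean(axis=0)  # order of transitions does not matter

def actor(state, z, weights):
    """Task-conditioned policy: the action depends on both the current
    state and the inferred task latent z."""
    x = np.concatenate([state, z])
    return np.tanh(weights @ x)

def exploration_bonus(z_before, z_after):
    """Self-supervised shaping signal: reward experience that shifts the
    task belief, encouraging task-informative exploration."""
    return float(np.linalg.norm(z_after - z_before))

# Toy rollout: 2-D states, 1-D actions, 5 context transitions.
ctx = [(rng.normal(size=2), rng.normal(size=1), rng.normal(),
        rng.normal(size=2)) for _ in range(5)]
z = encode_context(ctx)
W = rng.normal(size=(1, 2 + z.size))
a = actor(rng.normal(size=2), z, W)
bonus = exploration_bonus(encode_context(ctx[:3]), z)
print(a.shape, bonus)
```

The key design point mirrored here is that the actor never observes the task identity directly; it only sees the latent produced by the encoder, so exploration that sharpens that latent directly improves task-fulfillment.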

Publication
In International Conference on Autonomous Agents and Multiagent Systems, 2020
Haozhe Wang
Master's Student

Haozhe Wang is currently a master's student at ShanghaiTech.

Jiale Zhou
Master's Student

My research interests include few-shot learning, incremental learning, and reinforcement learning.

Xuming He
Associate Professor

My research interests include few/low-shot learning, graph neural networks, and video understanding.
