DER: Dynamically Expandable Representation for Class Incremental Learning (Oral)

Model Overview

Abstract

We address the problem of class incremental learning, which is a core step towards achieving adaptive vision intelligence. In particular, we consider the task setting of incremental learning with limited memory and aim to achieve a better stability-plasticity trade-off. To this end, we propose a novel two-stage learning approach that utilizes a dynamically expandable representation for more effective incremental concept modeling. Specifically, at each incremental step, we freeze the previously learned representation and augment it with additional feature dimensions from a new learnable feature extractor. This enables us to integrate new visual concepts while retaining learned knowledge. We dynamically expand the representation according to the complexity of novel concepts by introducing a channel-level mask-based pruning strategy. Moreover, we introduce an auxiliary loss to encourage the model to learn diverse and discriminative features for novel concepts. We conduct extensive experiments on three class incremental learning benchmarks, and our method consistently outperforms other methods by a large margin.
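Since the abstract summarizes the architecture only at a high level, the sketch below is a minimal PyTorch-style illustration of how the expandable representation could be organised; it is not the authors' released code, and names such as `ExpandableNet` and `make_extractor` are hypothetical. It shows the core idea: extractors from past steps are frozen, a new extractor adds feature dimensions at each incremental step, and an auxiliary head over the new features supports the auxiliary loss. The channel-level mask-based pruning and the training losses are omitted.

```python
import torch
import torch.nn as nn


class ExpandableNet(nn.Module):
    """Super-feature network: frozen past extractors plus one learnable extractor per step."""

    def __init__(self, make_extractor, feat_dim, num_classes):
        super().__init__()
        self.feat_dim = feat_dim
        self.old_extractors = nn.ModuleList()        # extractors from previous steps (frozen)
        self.new_extractor = make_extractor()        # learnable extractor for the current step
        self.classifier = nn.Linear(feat_dim, num_classes)
        self.aux_classifier = None                   # used from the second incremental step onward

    def expand(self, make_extractor, num_new_classes):
        """Start a new incremental step: freeze the current extractor and append a fresh one."""
        for p in self.new_extractor.parameters():
            p.requires_grad_(False)
        self.old_extractors.append(self.new_extractor)
        self.new_extractor = make_extractor()
        total_dim = self.feat_dim * (len(self.old_extractors) + 1)
        num_old_classes = self.classifier.out_features
        # In practice the old classifier weights would be carried over; omitted here for brevity.
        self.classifier = nn.Linear(total_dim, num_old_classes + num_new_classes)
        # Auxiliary head over the new features: new classes plus one merged "old" class.
        self.aux_classifier = nn.Linear(self.feat_dim, num_new_classes + 1)

    def forward(self, x):
        with torch.no_grad():                        # frozen representation of old concepts
            old_feats = [f(x) for f in self.old_extractors]
        new_feat = self.new_extractor(x)             # new feature dimensions for novel concepts
        feats = torch.cat(old_feats + [new_feat], dim=1)
        logits = self.classifier(feats)
        aux_logits = self.aux_classifier(new_feat) if self.aux_classifier is not None else None
        return logits, aux_logits
```

At each new step one would call `expand(...)`, train the new extractor and classifier on the new-class data together with the memory exemplars, and then apply the mask-based pruning to keep the expanded representation compact.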

Publication
In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2021
Shipeng Yan
ByteDance

My research interests include few/low-shot learning, incremental learning and representation learning.

Jiangwei Xie
SenseTime

My research interests include automated machine learning, multi-task learning and lifelong learning.

Xuming He
Associate Professor

My research interests include few/low-shot learning, graph neural networks and video understanding.
