Beneficial effect of combined replay for continual learning
Conference paper · Year: 2021

While deep learning has yielded remarkable results in a wide range of applications, artificial neural networks suffer from catastrophic forgetting of old knowledge as new knowledge is learned. Rehearsal methods overcome catastrophic forgetting by replaying a subset of previously learned data stored in dedicated memory buffers. Alternatively, pseudo-rehearsal methods generate pseudo-samples that emulate the previously learned data, thus removing the need for dedicated buffers. Unfortunately, pseudo-rehearsal methods have so far shown limited accuracy. In this work, we combine the two approaches and employ the data stored in tiny memory buffers as seeds to enhance the pseudo-sample generation process. We then show that pseudo-rehearsal can outperform rehearsal methods for small buffer sizes, owing to an improvement in the retrieval of previously learned information. Our combined replay approach consists of a hybrid architecture that generates pseudo-samples through a reinjection sampling procedure (i.e., iterative sampling). The generated pseudo-samples are then interlaced with the new data, so that new knowledge is acquired without forgetting the previous knowledge. We evaluate our method extensively on the MNIST, CIFAR-10 and CIFAR-100 image classification datasets, and present state-of-the-art performance using tiny memory buffers.
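The mechanism described in the abstract can be illustrated with a minimal sketch. The autoencoder stand-in, the number of reinjection steps, and all array shapes below are hypothetical placeholders, not the paper's actual architecture: a stored buffer sample is used as a seed, reinjected through a generative model for a few iterations to produce a pseudo-sample, and the pseudo-samples are then mixed with new data to form a training batch.

```python
import numpy as np

rng = np.random.default_rng(0)

def reinjection_sample(model, seed, n_steps=5):
    """Iteratively reinject a buffer seed through the model (i.e. iterative
    sampling) to obtain a pseudo-sample emulating previously learned data."""
    x = seed
    for _ in range(n_steps):
        x = model(x)
    return x

# Hypothetical stand-in for a trained generative model: a fixed nonlinear map.
W = np.eye(8) + rng.normal(scale=0.1, size=(8, 8))
model = lambda x: np.tanh(x @ W)

# Tiny memory buffer of stored seeds (4 samples of dimension 8, illustrative).
buffer = rng.normal(size=(4, 8))
pseudo = np.stack([reinjection_sample(model, s) for s in buffer])

# Interlace pseudo-samples with new data to train without forgetting.
new_data = rng.normal(size=(4, 8))
combined_batch = np.concatenate([new_data, pseudo])
```

In a real continual-learning loop, `combined_batch` would be fed to the learner at each task, with the buffer seeds keeping the pseudo-sample generation anchored to previously seen data.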

Dates and versions

hal-03355256 , version 1 (27-09-2021)




M. Solinas, S. Rousset, R. Cohendet, Y. Bourrier, M. Mainsant, et al. Beneficial effect of combined replay for continual learning. ICAART 2021 - 13th International Conference on Agents and Artificial Intelligence, Feb 2021, Vienna (online), Austria. pp. 205-217, ⟨10.5220/0010251202050217⟩. ⟨hal-03355256⟩


