Representation Finetuning for Continual Learning

Venue: iclr2026 (Withdrawn) Authors: Haihua Luo, Xuming Ran, Tommi Kärkkäinen, Huiyan Xue, Qi Xu, Fengyu Cong OpenReview: https://openreview.net/forum?id=0za6569Jqd

Relevance

LLM score: 1/3 — The paper proposes a parameter-efficient finetuning method for continual learning, tangentially related to efficiency but not directly addressing energy-efficient training or Sutro Group's core topics like data movement, sparsity, or hardware-aware training. Keyword hits: low-rank

TLDR

(none provided)

Abstract

The world is inherently dynamic, and continual learning aims to enable models to adapt to ever-evolving data streams. Pre-trained models have shown powerful performance in continual learning. However, since pre-trained models acquire knowledge from static datasets, they still require finetuning to adapt effectively to downstream tasks. Traditional finetuning methods are largely empirical, lack explicit objectives, and still require a relatively large number of parameters. In this work, we introduce $\textbf{Co}$ntinual $\textbf{R}$epresentation L$\textbf{e}$arning ($\textbf{CoRe}$), a novel framework that, for the first time, applies low-rank linear subspace representation finetuning to continual learning. Unlike conventional finetuning approaches, CoRe adopts a learning paradigm with explicit objectives rather than relying on black-box optimization, achieving more efficient parameter utilization and superior performance. Extensive experiments across multiple continual learning benchmarks demonstrate that CoRe not only preserves parameter efficiency but also significantly outperforms existing methods. Our work extends the applicability of representation finetuning and introduces a new, efficient finetuning paradigm for continual learning.
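The abstract does not spell out the intervention itself, but "low-rank linear subspace representation finetuning" in the ReFT line of work typically means editing a hidden state only inside a small learned subspace. A common form (LoReFT-style) is $\Phi(h) = h + R^\top(Wh + b - Rh)$, where $R \in \mathbb{R}^{r \times d}$ has orthonormal rows. The sketch below is a generic illustration of that operation with NumPy, not the paper's exact method; all shapes and names here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 16, 4  # hidden size and subspace rank (illustrative values)

# R: low-rank projection with orthonormal rows, spanning the edited subspace.
# QR of a random (d, r) matrix gives Q with orthonormal columns; transpose it.
Q, _ = np.linalg.qr(rng.standard_normal((d, r)))
R = Q.T  # shape (r, d), R @ R.T == I_r

# Learned linear map producing the target subspace coordinates (here random,
# in practice trained on the downstream task).
W = rng.standard_normal((r, d)) * 0.01
b = np.zeros(r)

def subspace_intervention(h):
    """Edit h only inside the r-dim subspace spanned by R's rows:
    the projection R h is replaced by the learned target W h + b,
    while the orthogonal complement of h is left untouched."""
    return h + R.T @ (W @ h + b - R @ h)

h = rng.standard_normal(d)
h_new = subspace_intervention(h)
```

Because `R` has orthonormal rows, `R @ h_new` equals exactly `W @ h + b`, and the component of `h` orthogonal to the subspace passes through unchanged; only `r * d + r` parameters per intervention (plus the frozen `R`) are trained, which is where the parameter efficiency comes from.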

Keywords

continual learning, reft, finetuning