Quantize-then-Rectify: Accelerating VQ-VAE Training in Latent Feature Space

Venue: ICLR 2026 (withdrawn) Authors: Borui Zhang, Qihang Rao, Wenzhao Zheng, Jie Zhou, Jiwen Lu OpenReview: https://openreview.net/forum?id=6193b311kq

Relevance

LLM score: 3/3 — The paper directly advances energy-efficient training by reducing VQ-VAE training cost by over two orders of magnitude through quantization and pre-trained model reuse, aligning with Sutro's interests in low precision and training efficiency. Keyword hits: quantization

TLDR

(none provided)

Abstract

Visual tokenizers are pivotal in multimodal large models, acting as bridges between continuous inputs and discrete tokens. Nevertheless, training high-compression-rate VQ-VAEs remains computationally demanding, often requiring thousands of GPU hours. This work demonstrates that a pre-trained VAE can be efficiently transformed into a VQ-VAE by keeping quantization noise within the VAE's tolerance threshold. We present Quantize-then-Rectify (ReVQ), a framework that leverages pre-trained VAEs to enable rapid VQ-VAE training with minimal computational overhead. By integrating channel split quantization to enhance codebook capacity and a post rectifier to mitigate quantization errors, ReVQ compresses ImageNet images into at most 512 tokens while sustaining competitive reconstruction quality (rFID = 0.82). Notably, ReVQ reduces training costs by over two orders of magnitude relative to state-of-the-art approaches: ReVQ finishes full training on a single NVIDIA 4090 in approximately 22 hours, whereas comparable methods require 4.5 days on 32 A100 GPUs. Experimental results show that ReVQ achieves superior efficiency-reconstruction trade-offs.
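
To make the two mechanisms in the abstract concrete, below is a minimal PyTorch sketch of channel-split quantization (each group of latent channels is matched against its own codebook, so joint capacity grows multiplicatively with the number of groups) followed by a residual post rectifier. All names, shapes, and the rectifier architecture here are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn


class ChannelSplitQuantizer(nn.Module):
    """Quantize groups of latent channels against separate codebooks.

    With G groups and K codes per group, joint codebook capacity is K**G,
    far larger than a single flat codebook of the same memory footprint.
    """

    def __init__(self, num_groups: int, group_dim: int, codebook_size: int):
        super().__init__()
        self.num_groups = num_groups
        self.group_dim = group_dim
        # One learnable codebook per channel group: (G, K, group_dim).
        self.codebooks = nn.Parameter(
            torch.randn(num_groups, codebook_size, group_dim)
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        # z: (batch, num_groups * group_dim), latents from a frozen pre-trained VAE.
        b = z.shape[0]
        zg = z.view(b, self.num_groups, self.group_dim).transpose(0, 1)
        # Nearest-neighbor code lookup independently within each group.
        dists = torch.cdist(zg, self.codebooks)   # (G, batch, K)
        idx = dists.argmin(dim=-1)                # (G, batch)
        z_q = torch.stack(
            [self.codebooks[g][idx[g]] for g in range(self.num_groups)], dim=1
        ).reshape(b, -1)
        # Straight-through estimator: quantized values on the forward pass,
        # identity gradient on the backward pass.
        return z + (z_q - z).detach()


# A post rectifier could be a small residual network trained to pull the
# quantized latents back toward the original VAE latents (an assumption;
# the paper's rectifier may be structured differently).
latent_dim = 8 * 32
rectifier = nn.Sequential(
    nn.Linear(latent_dim, 2 * latent_dim), nn.GELU(),
    nn.Linear(2 * latent_dim, latent_dim),
)

quantizer = ChannelSplitQuantizer(num_groups=8, group_dim=32, codebook_size=256)
z = torch.randn(4, latent_dim)        # stand-in for frozen-VAE latents
z_q = quantizer(z)
z_rect = z_q + rectifier(z_q)         # residual rectification of quantization error
```

In this reading, the pre-trained VAE stays frozen and only the codebooks and rectifier are trained, which is consistent with the abstract's claim that training reduces to keeping quantization noise within the VAE's tolerance.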

Keywords

VQVAE, tokenizer, autoencoder, efficiency