MLICv2: Enhanced Multi-Reference Entropy Modeling for Learned Image Compression
Recent advances in Learned Image Compression (LIC) have achieved remarkable performance improvements over traditional codecs. Notably, the MLIC series, LICs equipped with multi-reference entropy models, has substantially surpassed conventional image codecs such as Versatile Video Coding (VVC) Intra. However, existing MLIC variants suffer from several limitations: performance degradation at high bit-rates due to insufficient transform capacity, suboptimal entropy modeling that fails to capture global correlations in the initial slice, and a lack of adaptive channel importance modeling. In this article, we propose MLICv2 and MLICv2+, enhanced successors that systematically address these limitations through improved transform design, advanced entropy modeling, and exploration of instance-specific optimization. To enhance the transform, we introduce a lightweight token mixing block inspired by the MetaFormer architecture, which effectively mitigates high-bit-rate performance degradation while maintaining computational efficiency. To improve entropy modeling, we propose hyperprior-guided global correlation prediction, which extracts global context even in the initial slice of the latent representation, complemented by a channel reweighting module that dynamically emphasizes informative channels. We further explore enhanced positional embedding and guided selective compression strategies for superior context modeling. Additionally, we apply Stochastic Gumbel Annealing (SGA) to demonstrate the potential for further gains through input-specific optimization. Extensive experiments demonstrate that MLICv2 and MLICv2+ achieve state-of-the-art results, reducing the Bjøntegaard-Delta rate over VTM-17.0 Intra by 16.54%, 21.61%, 16.05% and 20.46%, 24.35%, 19.14% on the Kodak, Tecnick, and CLIC Pro Val datasets, respectively.
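The abstract does not give the exact formulation of the channel reweighting module; a minimal squeeze-and-excitation-style sketch in NumPy, with hypothetical weight shapes and a reduction ratio, illustrates the general idea of dynamically emphasizing informative channels:

```python
import numpy as np

def channel_reweight(latent, w1, w2):
    """Hypothetical channel reweighting sketch (squeeze-and-excitation style):
    pool each channel to a scalar, pass through a small bottleneck MLP,
    and rescale channels by the resulting sigmoid gates.
    Shapes (assumed): latent (C, H, W); w1 (C, C//r); w2 (C//r, C)."""
    squeezed = latent.mean(axis=(1, 2))            # (C,) per-channel statistics
    hidden = np.maximum(squeezed @ w1, 0.0)        # ReLU bottleneck
    gates = 1.0 / (1.0 + np.exp(-(hidden @ w2)))   # sigmoid gates in (0, 1)
    return latent * gates[:, None, None]           # emphasize informative channels

# Usage with toy weights (the real module would be learned end-to-end):
rng = np.random.default_rng(0)
lat = np.ones((4, 2, 2))
out = channel_reweight(lat, rng.standard_normal((4, 2)) * 0.1,
                       rng.standard_normal((2, 4)) * 0.1)
```

In the actual model such gates would be produced by learned layers conditioned on the latent, so the network can down-weight uninformative channels before entropy coding.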
Added 2026-04-21