Self-Regressive Prototype Refinement: Stepping from Local to Global Prototypes in Few-Shot Image Classification
Metric-based methods, such as ProtoNet, excel in few-shot image classification by encouraging similarity to class prototypes. However, prototypes built from limited samples often capture only partial class information, limiting performance. Recent distribution-estimation methods attempt to enhance performance by leveraging similar base-class distributions, yet these approaches struggle when the distributions of base and novel classes differ significantly. Empirical analysis reveals that conceptually related categories share a local-global semantic invariance even under large distribution gaps. Based on this insight, a Self-Regressive Prototype Refinement (SRPR) method is proposed to address the issue of incomplete prototype representations in few-shot learning. SRPR estimates an optimization direction for local embeddings, progressively refining them toward more global representations by exploiting the local-global semantic invariance in base-class data. The conservative use of coarse-grained local-global semantic structures, rather than reliance on similar distributions, broadens SRPR's applicability. With minimal computational overhead per refinement step, SRPR significantly improves classification performance and achieves state-of-the-art results across multiple few-shot benchmarks, particularly in the challenging 1-shot setting. Code is available at: .
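The pipeline the abstract describes (ProtoNet-style local prototypes, followed by iterative refinement toward more global representations) can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: `direction_fn` is a hypothetical stand-in for SRPR's learned regressor that estimates the refinement direction from base-class structure, and the step size and step count are placeholder values.

```python
import numpy as np

def class_prototypes(support_emb, support_lbl, n_classes):
    # ProtoNet-style local prototypes: mean embedding per class,
    # computed from the few available support samples
    return np.stack([support_emb[support_lbl == c].mean(axis=0)
                     for c in range(n_classes)])

def refine_prototypes(protos, direction_fn, steps=3, lr=0.5):
    # Iteratively move local prototypes along an estimated direction
    # toward more global representations. direction_fn is a hypothetical
    # stand-in for SRPR's learned direction estimator.
    for _ in range(steps):
        protos = protos + lr * direction_fn(protos)
    return protos

def classify(query_emb, protos):
    # nearest-prototype classification by squared Euclidean distance
    d = ((query_emb[:, None, :] - protos[None, :, :]) ** 2).sum(-1)
    return d.argmin(axis=1)
```

In this sketch, a perfect direction estimator would return `global_mean - proto` for each class, so each refinement step closes half of the gap (with `lr=0.5`) between the local prototype and the class's global mean; the actual method instead learns this direction from base-class data.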