2026 · Keywords: 3D human digitization; neural radiance fields; single-view image; garment representation

Garment-Aware Neural Radiance Fields for Generalizable 3D Human Digitization

Yin, Wei and Liu, Li and Fu, Xiaodong and Liu, Lijun and Peng, Wei

High-quality garment representation is both a challenge and a key factor in constructing generalized 3D humans from a single-view image. Existing techniques often perform poorly on complex garments, primarily due to two critical challenges: (1) single-view images lack complete information about the garments, limiting the completeness and realism of the reconstruction; (2) insufficient model generalization causes significant inconsistencies in garment texture and structure when rendered from different viewpoints, severely degrading the quality of novel-view images. To improve the quality of novel-view images, we propose a three-stage garment-aware Neural Radiance Field (NeRF) method for generalizable 3D human digitization. To supplement the garment information missing from single-view images, the first stage, garment prior awareness, extracts prior knowledge of the garment’s shape, pose deformations, and style. To eliminate ambiguities in images rendered from different viewpoints, the second stage introduces a set of prior-aware feature learning modules that represent the garment’s global texture, geometry, and fine details. In the third stage, a garment-aware NeRF module with a fusion and decoder design effectively fuses these prior features, enabling the model to render novel-view clothed humans and generate high-quality results. Experimental results on the RenderPeople, THuman, and HuMMan datasets demonstrate that our method achieves superior performance and more robust generalization in garment representation than existing methods, especially for synthesizing novel-view images of garments without the human body.
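The abstract describes a three-stage pipeline: extracting garment priors from a single image, learning prior-aware features for texture, geometry, and detail, and a garment-aware NeRF that fuses these features and decodes density and color. The PyTorch sketch below only illustrates that overall structure; the module names, feature sizes, and layer choices (GarmentPriorEncoder, PriorAwareFeatureLearner, GarmentAwareNeRF) are assumptions for illustration, not the authors' implementation.

# Minimal sketch of the three-stage structure described in the abstract.
# All module names and dimensions are hypothetical.
import torch
import torch.nn as nn

class GarmentPriorEncoder(nn.Module):
    """Stage 1 (illustrative): extract a garment prior code from a single-view image."""
    def __init__(self, feat_dim=64):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )

    def forward(self, image):                      # image: (B, 3, H, W)
        return self.backbone(image).flatten(1)     # (B, feat_dim) prior code

class PriorAwareFeatureLearner(nn.Module):
    """Stage 2 (illustrative): map the prior code to texture, geometry, and detail features."""
    def __init__(self, feat_dim=64):
        super().__init__()
        self.texture = nn.Linear(feat_dim, feat_dim)
        self.geometry = nn.Linear(feat_dim, feat_dim)
        self.detail = nn.Linear(feat_dim, feat_dim)

    def forward(self, prior):
        return self.texture(prior), self.geometry(prior), self.detail(prior)

class GarmentAwareNeRF(nn.Module):
    """Stage 3 (illustrative): fuse prior features with 3D query points, decode density and RGB."""
    def __init__(self, feat_dim=64):
        super().__init__()
        self.fusion = nn.Linear(3 * feat_dim + 3, 128)   # concatenated priors + xyz
        self.decoder = nn.Sequential(nn.ReLU(), nn.Linear(128, 128), nn.ReLU(), nn.Linear(128, 4))

    def forward(self, xyz, tex, geo, det):               # xyz: (B, N, 3)
        B, N, _ = xyz.shape
        priors = torch.cat([tex, geo, det], dim=-1).unsqueeze(1).expand(B, N, -1)
        h = self.fusion(torch.cat([priors, xyz], dim=-1))
        out = self.decoder(h)
        sigma = torch.relu(out[..., :1])                 # volume density
        rgb = torch.sigmoid(out[..., 1:])                # color
        return sigma, rgb

# Toy forward pass: one 256x256 image, 1024 sampled 3D points.
image = torch.randn(1, 3, 256, 256)
xyz = torch.rand(1, 1024, 3)
prior = GarmentPriorEncoder()(image)
tex, geo, det = PriorAwareFeatureLearner()(prior)
sigma, rgb = GarmentAwareNeRF()(xyz, tex, geo, det)
print(sigma.shape, rgb.shape)  # torch.Size([1, 1024, 1]) torch.Size([1, 1024, 3])

In a full pipeline, the density and color predicted per sample point would be composited along camera rays with standard NeRF volume rendering to produce the novel-view image.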

Added 2026-04-21