Research Note -- DiffLocks: Generating 3D Hair from a Single Image using Diffusion Models
Contributions of this paper:
- Presented a synthetic 3D hair dataset of 40K samples covering a wide variety of hairstyles.
- Designed a large, highly general geometry-node network in Blender to generate the various hairstyles.
- Trained a novel diffusion framework called DiffLocks:
- Uses Hourglass Diffusion Transformers as the diffusion architecture.
- Exploits features from a pretrained DINOv2 model.
- DINOv2 features are richer than oriented filters.

Figure 6 (DiffLocks): DINOv2 features are richer than oriented filters.
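To make the comparison concrete, the "oriented filters" that DINOv2 features are being contrasted with are classic Gabor-style filters, which respond strongly only to image structure (such as hair strands) aligned with their own orientation. A minimal NumPy sketch (kernel sizes, wavelengths, and the synthetic stripe image are all illustrative, not from the paper):

```python
import numpy as np

def gabor_kernel(theta, ksize=21, sigma=4.0, lam=8.0):
    """Oriented Gabor filter: a sinusoid at angle `theta` under a Gaussian."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)  # rotate coordinates by theta
    g = np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)
    return g - g.mean()                         # zero-mean, so flat regions give 0

def stripes(theta, size=64, lam=8.0):
    """Synthetic hair-like image: parallel stripes oriented by `theta`."""
    y, x = np.mgrid[0:size, 0:size]
    return np.cos(2 * np.pi * (x * np.cos(theta) + y * np.sin(theta)) / lam)

def response(img, theta):
    """Mean absolute filter response via FFT (circular) convolution."""
    k = gabor_kernel(theta)
    pad = np.zeros_like(img)
    pad[:k.shape[0], :k.shape[1]] = k
    return np.abs(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(pad))).mean()

img = stripes(theta=0.0)            # stripes whose intensity varies along x
r_match = response(img, 0.0)        # filter aligned with the stripes
r_miss = response(img, np.pi / 2)   # filter rotated 90 degrees
```

A single Gabor filter only encodes local orientation and frequency (`r_match` is far larger than `r_miss`), whereas DINOv2 patch features additionally carry learned semantic context, which is presumably what makes them a richer conditioning signal here.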
- Regressed the latent code for individual hair strands instead of guide strands, enabling the transformer to learn detailed spatial relationships between the scalp and each hair strand.
- Modeled a density map that defines the probability of a strand being generated at each location on the scalp.
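Such a density map can be turned into strand root positions by weighted sampling over a scalp UV grid. A minimal NumPy sketch under assumed shapes (the grid size, variable names, and per-cell jitter are illustrative, not DiffLocks' actual implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical scalp density map on a 2D UV grid: each cell holds the
# (unnormalized) probability that a strand root is generated there.
H, W = 32, 32
density = rng.random((H, W))
density[:8, :] = 0.0                 # e.g. a bald region with zero density
density /= density.sum()             # normalize into a probability map

def sample_strand_roots(density, n_strands, rng):
    """Draw strand root positions proportionally to the density map."""
    flat = density.ravel()
    idx = rng.choice(flat.size, size=n_strands, p=flat)
    rows, cols = np.unravel_index(idx, density.shape)
    # jitter within each cell so roots are not grid-aligned
    u = (cols + rng.random(n_strands)) / density.shape[1]
    v = (rows + rng.random(n_strands)) / density.shape[0]
    return np.stack([u, v], axis=1)  # UV coordinates on the scalp

roots = sample_strand_roots(density, n_strands=1000, rng=rng)
```

Sampling roots from a predicted density map (rather than fixing a root set) lets the model vary hair coverage per hairstyle, e.g. producing no strands in the zero-density region above.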
Last updated on 2025-12-09