Research Note -- DiffLocks - Generating 3D Hair from a Single Image using Diffusion Models.

Contributions of this paper

  1. Presented a 3D synthetic hair dataset consisting of 40K samples covering a wide variety of hairstyles.
    • The authors designed a large, general-purpose geometry-node network in Blender to procedurally generate varied hairstyles.
  2. Trained a novel diffusion framework called DiffLocks:
    • Used Hourglass Diffusion Transformers as a diffusion architecture.
    • Exploited features from a pretrained DINOv2 model.
      • DINOv2 features are richer than oriented filters (see DiffLocks Figure 6).
  3. Regressed the latent code for individual hair strands instead of guide strands, enabling the transformer to learn detailed spatial relationships between the scalp and individual hair strands.
  4. Modeled a density map that defines the probability of a strand being generated at each scalp location.
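The density-map idea in contribution 4 can be sketched with a small sampler: given a predicted per-cell density over a UV scalp grid, draw strand root locations proportional to that density. This is a minimal numpy sketch under my own assumptions (the function name, the grid representation, and the jittering are hypothetical, not the paper's implementation):

```python
import numpy as np

def sample_strand_roots(density, n_strands, rng=None):
    """Sample strand root UV locations proportional to a scalp density map.

    density: 2D array of non-negative values over a UV scalp grid
             (hypothetical representation, not the paper's exact format).
    Returns an (n_strands, 2) array of (u, v) coordinates in [0, 1).
    """
    rng = rng or np.random.default_rng(0)
    h, w = density.shape
    # Normalize the map into a discrete probability distribution over cells.
    p = density.ravel() / density.sum()
    idx = rng.choice(h * w, size=n_strands, p=p)
    rows, cols = np.unravel_index(idx, (h, w))
    # Jitter within each cell so roots are not grid-aligned.
    u = (cols + rng.random(n_strands)) / w
    v = (rows + rng.random(n_strands)) / h
    return np.stack([u, v], axis=1)
```

Cells with zero density never receive a root, so the map directly controls where hair grows (e.g. bald regions, partings).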

Figures

DiffLocks Figure 1
DiffLocks Figure 2
DiffLocks Figure 3
DiffLocks Figure 4
DiffLocks Figure 5
DiffLocks Figure 7
DiffLocks Figure 8

Author: David Chung
Published: 2025-12-09
License: CC BY-NC-SA 4.0
