DiffPortrait360 is an open-source project built on a diffusion model, derived from the CVPR 2025 paper "DiffPortrait360: Consistent Portrait Diffusion for 360 View Synthesis". It generates a coherent 360-degree head view from a single portrait photo, supports multiple input types including real humans, stylized images, and anthropomorphic characters, and preserves details of accessories such as glasses and hats.
Core features include:
- Multi-type compatibility: handles real photos, artistic creations, and virtual characters
- Back-view detail generation: fills in regions not visible in the input using ControlNet-based conditioning
- View consistency: maintains uniform appearance features across angles through dual appearance modules
- NeRF output: produces neural radiance field (NeRF) models that can be rendered from arbitrary viewpoints (see the camera-trajectory sketch after this list)
- Open-source support: complete inference code and pre-trained models are provided
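Free-viewpoint rendering of the NeRF output typically means sweeping a virtual camera around the subject. The snippet below is a minimal, generic sketch (not part of the DiffPortrait360 codebase) that builds such a circular "turntable" camera trajectory with NumPy; the look-at convention, orbit radius, and view count are illustrative assumptions.

```python
import numpy as np

def look_at(cam_pos, target=np.zeros(3), up=np.array([0.0, 1.0, 0.0])):
    """Camera-to-world matrix with the camera at cam_pos looking at target (OpenGL-style, -z forward)."""
    forward = target - cam_pos
    forward /= np.linalg.norm(forward)
    right = np.cross(forward, up)
    right /= np.linalg.norm(right)
    true_up = np.cross(right, forward)
    c2w = np.eye(4)
    c2w[:3, 0] = right
    c2w[:3, 1] = true_up
    c2w[:3, 2] = -forward  # camera looks down its -z axis
    c2w[:3, 3] = cam_pos
    return c2w

def turntable_poses(num_views=36, radius=2.0, height=0.0):
    """Evenly spaced camera poses on a circle around the head (y-up, all looking at the origin)."""
    poses = []
    for i in range(num_views):
        theta = 2.0 * np.pi * i / num_views
        cam_pos = np.array([radius * np.sin(theta), height, radius * np.cos(theta)])
        poses.append(look_at(cam_pos))
    return np.stack(poses)

if __name__ == "__main__":
    poses = turntable_poses()
    print(poses.shape)  # (36, 4, 4) camera-to-world matrices for a full 360-degree orbit
```

Each resulting 4x4 matrix can be fed to whatever renderer consumes the generated radiance field; only the pose convention needs to match the renderer's expectations.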
The project is especially well suited to scenarios that require 3D head modeling, such as virtual conferencing and game development, and has attracted notable attention in the academic and developer communities.
This answer is based on the article "DiffPortrait360: Generate 360-degree head views from a single portrait".