DreamFusion: Text-to-3D Using 2D Diffusion - dreamfusion3d.github.io

## Metadata
- Author: **dreamfusion3d.github.io**
- Full Title: DreamFusion: Text-to-3D Using 2D Diffusion
- Category: #articles
- Tags: #ai
- URL: https://dreamfusion3d.github.io/

## Highlights
- Adapting this approach to 3D synthesis would require large-scale datasets of labeled 3D assets and efficient architectures for denoising 3D data, neither of which currently exist. In this work, we circumvent these limitations by using a pretrained 2D text-to-image diffusion model to perform text-to-3D synthesis. We introduce a loss based on probability density distillation that enables the use of a 2D diffusion model as a prior for optimization of a parametric image generator. Using this loss in a DeepDream-like procedure, we optimize a randomly-initialized 3D model (a Neural Radiance Field, or NeRF) via gradient descent such that its 2D renderings from random angles achieve a low loss.
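
The paper calls this probability-density-distillation loss Score Distillation Sampling (SDS). Below is a minimal sketch of one SDS optimization step, not DreamFusion's actual implementation: `render` is a hypothetical stand-in for a differentiable NeRF renderer, `denoiser` for the frozen pretrained 2D diffusion model, and the weighting `w(t)` is one common choice, all assumptions for illustration.

```python
import torch

def sds_gradient_step(render, denoiser, params, optimizer,
                      alphas_cumprod, text_embedding):
    """One SDS step: render from a random view, noise the image,
    query the frozen 2D diffusion model, backprop through the renderer."""
    image = render(params)                               # 2D rendering of the 3D model
    t = torch.randint(0, len(alphas_cumprod), (1,))      # random diffusion timestep
    alpha_bar = alphas_cumprod[t]
    noise = torch.randn_like(image)
    # Forward diffusion: corrupt the rendering to timestep t
    noisy = alpha_bar.sqrt() * image + (1 - alpha_bar).sqrt() * noise
    with torch.no_grad():                                # diffusion model stays frozen
        eps_hat = denoiser(noisy, t, text_embedding)
    w = 1 - alpha_bar                                    # assumed weighting w(t)
    # SDS: skip the denoiser's Jacobian and inject w(t) * (eps_hat - eps)
    # directly as the gradient flowing back through the renderer
    grad = w * (eps_hat - noise)
    optimizer.zero_grad()
    image.backward(gradient=grad)                        # chain rule into NeRF params
    optimizer.step()

# Toy usage with placeholder components (illustrative only):
params = torch.randn(3, 64, 64, requires_grad=True)
render = lambda p: torch.sigmoid(p)                      # stand-in "renderer"
denoiser = lambda x, t, emb: torch.randn_like(x)         # stand-in frozen model
opt = torch.optim.Adam([params], lr=1e-2)
alphas_cumprod = torch.linspace(0.999, 0.01, 1000)
sds_gradient_step(render, denoiser, params, opt, alphas_cumprod, text_embedding=None)
```

Bypassing the denoiser's Jacobian is what makes the frozen 2D model usable as a prior: the diffusion model is only ever evaluated, never differentiated through, and gradients reach the 3D parameters solely via the differentiable rendering.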