Stable Virtual Camera
Stable Virtual Camera is a 1.3B generalist diffusion model for Novel View Synthesis (NVS), generating 3D consistent novel views of a scene, given any number of input views and target cameras. We encourage you to visit our website for more visualizations.
Updates
- June 2025: Released the v1.1 checkpoint, which fixes a known issue in v1.0 where foreground objects could sometimes appear detached from the background.
Model Description
- Developed by: Stability AI
- Model type: Transformer image-to-video model
- Model details: This model was trained to generate 3D consistent novel views of a scene given any number of input views and target cameras. Users can specify the target camera trajectory freely, spanning a large spatial range. Our model is capable of generating large viewpoint changes and temporally smooth samples. As a result, our samples maintain high consistency without requiring additional NeRF distillation, streamlining the view synthesis pipeline in the wild. Furthermore, we show that our method can generate high-quality videos lasting up to half a minute with seamless loop closure.
License
- Non-Commercial License: Free for research and non-commercial use by organizations and individuals. Please refer to Stability AI's Non-Commercial License, available here, for more information.
Model Sources
Usage
For usage instructions, please refer to our GitHub repository.
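As a rough, non-authoritative illustration of the kind of input the model consumes, the sketch below constructs a set of target cameras for a simple orbit trajectory using only NumPy. The `look_at` and `orbit_trajectory` helpers, the camera-to-world pose convention, and the intrinsic values are assumptions made for illustration only; they are not the repository's API. Please follow the GitHub repository for the actual input format and sampling commands.

```python
import numpy as np

def look_at(eye, target, up=np.array([0.0, 1.0, 0.0])):
    """Build a 4x4 camera-to-world pose whose +z axis points from `eye` toward `target`."""
    forward = target - eye
    forward = forward / np.linalg.norm(forward)
    right = np.cross(forward, up)
    right = right / np.linalg.norm(right)
    true_up = np.cross(right, forward)
    c2w = np.eye(4)
    c2w[:3, 0] = right
    c2w[:3, 1] = true_up
    c2w[:3, 2] = forward
    c2w[:3, 3] = eye
    return c2w

def orbit_trajectory(num_views=30, radius=2.0, height=0.5):
    """Place target cameras on a circle around the origin, all looking at the scene center."""
    poses = []
    for i in range(num_views):
        angle = 2.0 * np.pi * i / num_views  # a full revolution gives a closed loop
        eye = np.array([radius * np.cos(angle), height, radius * np.sin(angle)])
        poses.append(look_at(eye, target=np.zeros(3)))
    return np.stack(poses)  # shape (num_views, 4, 4)

# 30 target poses orbiting the scene, plus an illustrative pinhole intrinsic matrix
# (fx = fy = 500, principal point at the center of a 576x576 frame).
target_poses = orbit_trajectory(num_views=30)
K = np.array([[500.0, 0.0, 288.0],
              [0.0, 500.0, 288.0],
              [0.0, 0.0, 1.0]])
```

A trajectory like this, together with one or more input views, is the kind of specification the model turns into 3D consistent novel views; closing the loop over a full revolution corresponds to the seamless loop closure described above.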
Intended Uses
Intended uses include the following research and non-commercial applications:
- Generation of artworks and use in design and other artistic processes.
- Applications in educational or creative tools.
- Research on reconstruction models, including understanding the limitations of these models.
All uses of the model should be in accordance with our Acceptable Use Policy.
Out-of-Scope Uses
The model was not trained to produce factual or accurate representations of people or events. As such, using the model to generate such content is out of scope for this model.
Contact
Please report any issues with the model or contact us: