Wan 2.2 Accelerated Inference — a collection of optimized demos for the Wan 2.2 14B models, using FP8 quantization, AoT compilation, and community LoRAs for fast, high-quality inference on ZeroGPU 💨 • 3 items • Updated Aug 29, 2025
Wan2.2 14B Fast 🎥 — generate a video from an image with a text prompt (running on ZeroGPU)