Flux.2 [dev] FALAI Turbo Merged

Flux.2 [dev] FALAI Turbo Merged is a merge of FLUX.2 [dev] and FLUX.2-dev-Turbo that enables high-quality image generation in just 8 inference steps while requiring less memory.

Thanks to quantization and merging, memory requirements are lower, since a separate BF16 LoRA no longer needs to be loaded alongside the base model.

Both the original model and the adapter used for the merge are in BF16 precision.

The LoRA used for both the comparison tests and the merge was Flux.2-Turbo-ComfyUI (V2) by ByteZSzn, a version of the original adapter fixed to work with ComfyUI.
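For reference, folding a LoRA into the base weights can be sketched at the state-dict level as below. This is a minimal illustration of the general technique under simplifying assumptions, not the exact script used for this model: the file names and `lora_scale` are placeholders, it assumes `lora_down`/`lora_up` keys that mirror the base state-dict keys, and real checkpoints often need key remapping plus an alpha/rank rescale.

```python
import torch
from safetensors.torch import load_file, save_file

# Placeholder file names -- substitute the actual FLUX.2 [dev] and Turbo LoRA checkpoints.
base = load_file("flux2_dev_bf16.safetensors")
lora = load_file("flux2_turbo_lora_bf16.safetensors")
lora_scale = 1.0  # placeholder; many LoRAs also carry an alpha that rescales the update

merged = dict(base)
for key in lora:
    if not key.endswith(".lora_down.weight"):
        continue
    up_key = key.replace(".lora_down.weight", ".lora_up.weight")
    base_key = key.replace(".lora_down.weight", ".weight")
    if up_key not in lora or base_key not in merged:
        continue
    down = lora[key].to(torch.float32)
    up = lora[up_key].to(torch.float32)
    # W' = W + scale * (up @ down): fold the low-rank update into the base weight,
    # so no separate BF16 LoRA has to be kept in memory at inference time.
    delta = lora_scale * (up @ down)
    merged[base_key] = (merged[base_key].to(torch.float32) + delta).to(torch.bfloat16)

save_file(merged, "flux2_dev_turbo_merged_bf16.safetensors")
```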

Below are some comparison images between FLUX.2 [dev] (Q8_0 GGUF) with the Turbo LoRA (BF16) and this merged model:

Prompt 1: Realistic macro photograph of a hermit crab using a soda can as its shell, partially emerging from the can, captured with sharp detail and natural colors, on a sunlit beach with soft shadows and a shallow depth of field, with blurred ocean waves in the background. The can has the text `BFL Diffusers` on it and it has a color gradient that start with #FF5733 at the top and transitions to #33FF57 at the bottom.

Flux 2 Dev with Turbo LoRA (crab) - LoRA

Flux 2 Dev merged (crab) - Merged

Prompt 2: Industrial product shot of a chrome turbocharger with glowing hot exhaust manifold, engraved text 'FLUX.2 [dev] Turbo by fal' on the compressor housing and 'fal' on the turbine wheel, gradient heat glow from orange to electric blue , studio lighting with dramatic shadows, shallow depth of field, engineering blueprint pattern in background.

Flux 2 Dev with Turbo LoRA (turbocharger) - LoRA

Flux 2 Dev merged (turbocharger) - Merged

NOTE:

The GGUF is provided in Q8_0 quantization and has been tested in ComfyUI only.
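For orientation, 8-step inference with the Q8_0 GGUF might look like the diffusers-style sketch below. This is an assumption-laden illustration, not a tested recipe: the class names `Flux2Pipeline` and `Flux2Transformer2DModel`, the base repo id, the GGUF file name, and the guidance value are all assumptions following the FLUX.1 GGUF loading pattern in diffusers, and as noted above this file has only been verified in ComfyUI.

```python
import torch
from diffusers import Flux2Pipeline, Flux2Transformer2DModel, GGUFQuantizationConfig

# Placeholder path to the Q8_0 GGUF from this repo.
gguf_path = "Flux.2-dev-FALAI-Turbo-Merged-Q8_0.gguf"

# Load the quantized transformer from the single GGUF file.
transformer = Flux2Transformer2DModel.from_single_file(
    gguf_path,
    quantization_config=GGUFQuantizationConfig(compute_dtype=torch.bfloat16),
    torch_dtype=torch.bfloat16,
)

# Assumed base repo for the text encoder, VAE, and scheduler.
pipe = Flux2Pipeline.from_pretrained(
    "black-forest-labs/FLUX.2-dev",
    transformer=transformer,
    torch_dtype=torch.bfloat16,
)
pipe.enable_model_cpu_offload()

image = pipe(
    prompt="Realistic macro photograph of a hermit crab using a soda can as its shell",
    num_inference_steps=8,  # the merged Turbo weights target 8-step generation
    guidance_scale=4.0,     # placeholder value
).images[0]
image.save("crab.png")
```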
