Qwen3-Coder-REAP-25B-A3B-nvfp4

Format: NVFP4. Weights and activations are quantized to FP4 (E2M1) with two-level scaling: per-block FP8 scale factors plus a global FP32 scale.
Base model: cerebras/Qwen3-Coder-REAP-25B-A3B
How it was made: one-shot quantization with LLM Compressor (NVFP4 recipe), calibrated on long sequences from nvidia/OpenCodeInstruct.

Notes: keep lm_head in high precision and calibrate on long, domain-relevant sequences (here, 256 samples at a 4096-token max length).
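
For reference, here is a minimal sketch of that flow with LLM Compressor. The NVFP4 scheme, the oneshot() entry point, and QuantizationModifier are real llmcompressor APIs; the exact dataset preprocessing and ignore list used for this model aren't published, so treat those details (and the assumed "input"/"output" column names) as illustrative.

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from llmcompressor import oneshot
from llmcompressor.modifiers.quantization import QuantizationModifier

MODEL_ID = "cerebras/Qwen3-Coder-REAP-25B-A3B"
NUM_SAMPLES = 256
MAX_SEQ_LEN = 4096

model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="auto", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)

# Long, domain-relevant calibration data: 256 samples at 4096 tokens.
ds = load_dataset("nvidia/OpenCodeInstruct", split="train")
ds = ds.shuffle(seed=42).select(range(NUM_SAMPLES))

def preprocess(example):
    # The "input"/"output" column names are an assumption about the dataset
    # schema; adjust to match the actual fields.
    return {
        "text": tokenizer.apply_chat_template(
            [
                {"role": "user", "content": example["input"]},
                {"role": "assistant", "content": example["output"]},
            ],
            tokenize=False,
        )
    }

def tokenize(example):
    return tokenizer(
        example["text"], max_length=MAX_SEQ_LEN, truncation=True, add_special_tokens=False
    )

ds = ds.map(preprocess)
ds = ds.map(tokenize, remove_columns=ds.column_names)

# NVFP4 recipe: quantize all Linear layers, keep lm_head in high precision.
recipe = QuantizationModifier(targets="Linear", scheme="NVFP4", ignore=["lm_head"])

oneshot(
    model=model,
    dataset=ds,
    recipe=recipe,
    max_seq_length=MAX_SEQ_LEN,
    num_calibration_samples=NUM_SAMPLES,
)

model.save_pretrained("Qwen3-Coder-REAP-25B-A3B-nvfp4", save_compressed=True)
tokenizer.save_pretrained("Qwen3-Coder-REAP-25B-A3B-nvfp4")
```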

See the original model card for details about the base model.

Running the model with vLLM in Docker

⚠️ Known vLLM Compatibility Issues

This discussion contains a workaround for getting the model running in vLLM until the issues below have been fixed/merged upstream.

This model currently does not work with stock vLLM due to a CUTLASS FP4 kernel constraint: the gate layer for its 103 experts fails with "Expected n to be divisible by 32". See #24921, #30934
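
The arithmetic behind that error, for illustration:

```python
# The gate/router layer's out_features equals the expert count, and the
# CUTLASS FP4 GEMM requires that dimension (n) to be a multiple of 32.
num_experts = 103
print(num_experts % 32)  # 7 -> "Expected n to be divisible by 32"
```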

Additionally, Blackwell consumer/workstation GPUs (RTX 5090, RTX PRO 6000, DGX Spark) have a separate issue where FlashInfer FP4 GEMM doesn't support SM120/SM121. See #31074, #30163, #23497

sudo docker run --runtime nvidia --gpus all -p 8000:8000 --ipc=host vllm/vllm-openai:nightly --model Firworks/Qwen3-Coder-REAP-25B-A3B-nvfp4 --dtype auto --max-model-len 32768

This was tested on an RTX Pro 6000 Blackwell cloud instance.
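
Once the container is up, vLLM exposes an OpenAI-compatible API on port 8000. A minimal sketch with the official openai client (the prompt is just an example; the model name must match the --model value passed to vLLM):

```python
from openai import OpenAI

# vLLM's OpenAI-compatible server; the API key is unused by default.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

resp = client.chat.completions.create(
    model="Firworks/Qwen3-Coder-REAP-25B-A3B-nvfp4",
    messages=[{"role": "user", "content": "Write a Python function that reverses a linked list."}],
    max_tokens=512,
)
print(resp.choices[0].message.content)
```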

If there are other models you'd like to see quantized to NVFP4 for use on the DGX Spark or other Blackwell (or newer) cards, let me know. I'm trying to make more NVFP4 models available so more people can try them out.
