Magistral-Small-2509-36B-Text-Only-nvfp4
Note: There appears to be a problem with this model. In all my testing it would only ever output the token 0. I don't know whether there is a mismatch with the provided tokenizer or some other issue, but the quantization and calibration were identical to the 24B variant, which works. If anyone has an idea of what is going on here and how to fix it, please let me know.
Format: NVFP4 — weights & activations quantized to FP4 with dual scaling.
Base model: Darkhn/Magistral-Small-2509-36B-Text-Only
How it was made: One-shot quantization with LLM Compressor (NVFP4 recipe), calibrated on long sequences from HuggingFaceH4/ultrachat_200k; a sketch of the flow follows the notes below.
Notes: Keep `lm_head` in high precision; calibrate on long, domain-relevant sequences.
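
A minimal sketch of that one-shot flow, based on LLM Compressor's public NVFP4 examples. The calibration sample count, the sequence length, and the exact import paths and scheme name are assumptions, not the exact script used for this checkpoint:

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

from llmcompressor import oneshot
from llmcompressor.modifiers.quantization import QuantizationModifier

MODEL_ID = "Darkhn/Magistral-Small-2509-36B-Text-Only"
DATASET_ID = "HuggingFaceH4/ultrachat_200k"
NUM_CALIBRATION_SAMPLES = 512   # assumed; not stated on this card
MAX_SEQUENCE_LENGTH = 4096      # "long-seq" calibration; exact value assumed

model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)

# Build a small calibration set of chat-formatted, tokenized sequences.
ds = load_dataset(DATASET_ID, split=f"train_sft[:{NUM_CALIBRATION_SAMPLES}]").shuffle(seed=42)
ds = ds.map(lambda ex: {"text": tokenizer.apply_chat_template(ex["messages"], tokenize=False)})
ds = ds.map(
    lambda ex: tokenizer(
        ex["text"], max_length=MAX_SEQUENCE_LENGTH, truncation=True, add_special_tokens=False
    ),
    remove_columns=ds.column_names,
)

# NVFP4 recipe: quantize Linear weights and activations to FP4, keep lm_head in high precision.
recipe = QuantizationModifier(targets="Linear", scheme="NVFP4", ignore=["lm_head"])

oneshot(
    model=model,
    dataset=ds,
    recipe=recipe,
    max_seq_length=MAX_SEQUENCE_LENGTH,
    num_calibration_samples=NUM_CALIBRATION_SAMPLES,
)

SAVE_DIR = "Magistral-Small-2509-36B-Text-Only-nvfp4"
model.save_pretrained(SAVE_DIR, save_compressed=True)
tokenizer.save_pretrained(SAVE_DIR)
```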
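
For context on the format itself, here is a small numpy illustration of the dual (two-level) scaling NVFP4 uses: a per-block scale on top of a global per-tensor scale. The 16-element block size and the E2M1 code book follow NVIDIA's public description; for simplicity the block scale is kept in float here rather than rounded to FP8 (E4M3) as the real format does.

```python
import numpy as np

# Signed FP4 (E2M1) code book: the only magnitudes an element can take.
FP4_E2M1 = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0])
FP4_E2M1 = np.concatenate([FP4_E2M1, -FP4_E2M1])
BLOCK = 16

def quantize_block(x, tensor_scale):
    # Per-block scale chosen so the largest value maps near the FP4 max (6.0).
    block_scale = np.abs(x).max() / (6.0 * tensor_scale + 1e-12)
    scaled = x / (block_scale * tensor_scale + 1e-12)
    # Snap each element to the nearest representable FP4 value.
    codes = FP4_E2M1[np.argmin(np.abs(scaled[:, None] - FP4_E2M1), axis=1)]
    return codes, block_scale

def dequantize_block(codes, block_scale, tensor_scale):
    # Dual scaling: FP4 code * per-block scale * per-tensor scale.
    return codes * block_scale * tensor_scale

x = np.random.randn(BLOCK).astype(np.float32)
tensor_scale = 1.0  # global FP32 scale, per-tensor in the real format
codes, block_scale = quantize_block(x, tensor_scale)
print(np.abs(x - dequantize_block(codes, block_scale, tensor_scale)).max())
```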
Check the original model card for information about this model. This is a version of Magistral Small 2509 with the vision capabilities removed and upscaled to 36B.
If there are other models you're interested in seeing quantized to NVFP4 for use on the DGX Spark or other modern Blackwell (or newer) cards, let me know. I'm trying to make more NVFP4 models available so more people can try them out.