𓌳 REAP the Experts: Why Pruning Prevails for One-Shot MoE Compression
📄 [Paper](https://arxiv.org/abs/2510.13999) · 💻 Code

GLM-4.6-REAP-218B-A32B-W4A16-AutoRound

W4A16 quantized version of Cerebras' official GLM-4.6-REAP-218B-A32B.

  • ~4x Size Reduction: ~436GB → ~116GB
  • Runs on Consumer Hardware: 8x RTX 3090 or 4x RTX 4090
  • vLLM/SGLang Compatible: Drop-in deployment

🙏 Acknowledgments

Thanks to Cerebras for releasing the original GLM-4.6-REAP-218B-A32B model and the REAP pruning method this quant builds on.

📋 Model Specifications

| Property | Value |
|---|---|
| Base Model | cerebras/GLM-4.6-REAP-218B-A32B |
| Parameters | 218B total, 32B activated |
| Quantization | W4A16 (4-bit weights, 16-bit activations) |
| Original Size | ~436GB |
| Quantized Size | ~116GB |
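
For context, below is a minimal sketch of how a W4A16 AutoRound quant like this is typically produced with Intel's auto-round library. The arguments shown are assumptions, not this card's exact recipe, and a 218B MoE additionally needs multi-GPU sharding and a calibration dataset.

```python
# Hedged sketch: AutoRound W4A16 quantization (assumed arguments, not the
# exact recipe used for this card; a 218B model needs multi-GPU sharding).
from transformers import AutoModelForCausalLM, AutoTokenizer
from auto_round import AutoRound

model_id = "cerebras/GLM-4.6-REAP-218B-A32B"
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained(model_id)

# W4A16: 4-bit weights, activations left in 16-bit
autoround = AutoRound(model, tokenizer, bits=4, group_size=128, sym=True)
autoround.quantize()
autoround.save_quantized("GLM-4.6-REAP-218B-A32B-W4A16-AutoRound", format="auto_gptq")
```

Exporting in the `auto_gptq` format is what makes the checkpoint loadable with vLLM's `--quantization gptq` flag used below.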

📊 Benchmarks

Tested on 8x RTX 3090:

| Metric | Value |
|---|---|
| Prompt Tokens | ~21,178 |
| Completion Tokens | 393 |
| Time to First Token | 23.82 s |
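
As a rough sanity check, 21,178 prompt tokens processed in 23.82 s works out to roughly 890 prompt tokens per second of prefill throughput across the eight GPUs.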

🚀 Deployment

vLLM

```bash
vllm serve 0xSero/GLM-4.6-REAP-218B-A32B-W4A16-AutoRound \
    --tensor-parallel-size 8 \
    --trust-remote-code \
    --quantization gptq
```
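
Once running, the server exposes vLLM's OpenAI-compatible API (port 8000 by default), so any OpenAI client can talk to it. A minimal query, with the prompt as a placeholder:

```python
# Query the vLLM server via its OpenAI-compatible endpoint (default port 8000).
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
response = client.chat.completions.create(
    model="0xSero/GLM-4.6-REAP-218B-A32B-W4A16-AutoRound",
    messages=[{"role": "user", "content": "Write a Python function that validates a JSON tool call."}],
)
print(response.choices[0].message.content)
```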

🔬 Calibration Dataset: Deep Dive

REAP's effectiveness depends critically on calibration data that represents the target use case. We specifically optimized for code generation, function/tool calling, and agentic workflows.

Why These 3 Datasets?

| Dataset | Samples | Share of Mix | Purpose | Why It Matters |
|---|---|---|---|---|
| evol-codealpaca-v1 | 700 | 51% | Code generation | Code tasks activate specific expert pathways; pruning without code calibration destroys coding ability |
| xlam-function-calling-60k | 330 | 24% | Function/tool calling | Tool use requires structured JSON output; experts handling schema generation must be preserved |
| SWE-smith-trajectories | 330 | 24% | Agentic multi-turn | Real SWE-bench trajectories with tool calls, file edits, and multi-step reasoning |
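
A sketch of how such a mix can be assembled with the 🤗 `datasets` library. The Hub repo paths are assumptions (check the actual dataset cards); the 700/330/330 counts are the ones from the table above.

```python
# Hedged sketch: sample the 51/24/24 calibration mix (Hub paths are assumptions).
from datasets import load_dataset

counts = {
    "theblackcat102/evol-codealpaca-v1": 700,     # code generation
    "Salesforce/xlam-function-calling-60k": 330,  # function/tool calling
    "SWE-bench/SWE-smith-trajectories": 330,      # agentic multi-turn
}

calibration = []
for repo, n in counts.items():
    ds = load_dataset(repo, split="train").shuffle(seed=0).select(range(n))
    calibration.extend(ds)  # each element is a dict; render it to chat text before the forward pass
```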

The Science Behind Dataset Selection

```text
REAP algorithm:
1. Forward-pass calibration samples through the model
2. Record which experts activate and their output magnitudes
3. Compute saliency = router_weight × activation_norm
4. Prune the lowest-saliency experts

Key insight: experts are TASK-SPECIFIC
├── Some experts specialize in natural language
├── Some experts specialize in code syntax
├── Some experts specialize in JSON/structured output
└── Some experts specialize in multi-turn context

If calibration lacks code → code-specialized experts appear "unused" → they get pruned → the model loses coding ability
```
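
A minimal PyTorch sketch of that saliency score. Tensor shapes and the token-averaging scheme are assumptions for illustration; see the REAP paper for the exact formulation.

```python
# Hedged sketch of REAP-style expert saliency (shapes/averaging are assumptions).
import torch

def expert_saliency(router_weights: torch.Tensor, expert_outputs: torch.Tensor) -> torch.Tensor:
    """saliency = router_weight × activation_norm, averaged over calibration tokens.

    router_weights: (tokens, num_experts) gate weights from calibration forward passes
    expert_outputs: (tokens, num_experts, hidden) per-expert outputs on the same tokens
    """
    activation_norm = expert_outputs.norm(dim=-1)          # (tokens, num_experts)
    return (router_weights * activation_norm).mean(dim=0)  # (num_experts,)

def experts_to_prune(saliency: torch.Tensor, num_prune: int) -> torch.Tensor:
    """Indices of the num_prune lowest-saliency experts (the ones REAP would drop)."""
    return torch.argsort(saliency)[:num_prune]
```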

Cerebras' Original Mix (from paper)

Cerebras used the same 3 datasets in their GLM-4.6 REAP experiments:

  • evol-codealpaca-v1 for code generation
  • xlam-function-calling-60k for tool calling
  • SWE-smith-trajectories for agentic tasks

We followed this exact recipe for reproducibility.

Combined Dataset

Our calibration mix: 0xSero/glm47-reap-calibration-v2
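
To reuse the published mix directly, it can be pulled from the Hub; a one-liner, assuming a standard `train` split (check the dataset card):

```python
from datasets import load_dataset

# Split name is an assumption; check the dataset card.
calibration = load_dataset("0xSero/glm47-reap-calibration-v2", split="train")
```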


🧾 Citation

```bibtex
@article{lasby2025reap,
  title={REAP the Experts: Why Pruning Prevails for One-Shot MoE Compression},
  author={Lasby, Mike and Lazarevich, Ivan and Sinnadurai, Nish and Lie, Sean and Ioannou, Yani and Thangarasa, Vithursan},
  journal={arXiv preprint arXiv:2510.13999},
  year={2025},
  url={https://arxiv.org/abs/2510.13999}
}
```