| Model | Task | Params | Updated | Downloads • Likes |
|---|---|---|---|---|
| DavidAU/Maximizing-Model-Performance-All-Quants-Types-And-Full-Precision-by-Samplers_Parameters | — | — | Jul 27, 2025 | 148 |
| DavidAU/Llama3.1-MOE-4X8B-Gated-IQ-Multi-Tier-Deep-Reasoning-32B-GGUF | Text Generation | 25B | Jul 28, 2025 | 746 • 8 |
| DavidAU/DeepSeek-MOE-4X8B-R1-Distill-Llama-3.1-Deep-Thinker-Uncensored-24B-GGUF | Text Generation | 25B | Jul 28, 2025 | 870 • 26 |
| DavidAU/L3.1-MOE-2X8B-Deepseek-DeepHermes-e32-uncensored-abliterated-13.7B-gguf | Text Generation | 14B | Jul 28, 2025 | 407 • 12 |
| DavidAU/L3.1-Dark-Reasoning-LewdPlay-evo-Hermes-R1-Uncensored-8B | Text Generation | 8B | Jul 28, 2025 | 22 • 31 |
| hugging-quants/Meta-Llama-3.1-405B-Instruct-AWQ-INT4 | Text Generation | 410B | Sep 13, 2024 | 787 • 36 |
| hugging-quants/Meta-Llama-3.1-8B-Instruct-AWQ-INT4 | Text Generation | 8B | Aug 7, 2024 | 167k • 82 |
| hugging-quants/Meta-Llama-3.1-70B-Instruct-AWQ-INT4 | Text Generation | 71B | Aug 7, 2024 | 140k • 107 |
| hugging-quants/Meta-Llama-3.1-405B-Instruct-GPTQ-INT4 | Text Generation | 410B | Aug 7, 2024 | 186 • 16 |
| hugging-quants/Meta-Llama-3.1-405B-Instruct-BNB-NF4 | Text Generation | 423B | Sep 16, 2024 | 15 • 5 |
| hugging-quants/Meta-Llama-3.1-8B-Instruct-BNB-NF4 | Text Generation | 8B | Aug 8, 2024 | 273 • 8 |
| ModelCloud/Meta-Llama-3.1-70B-Instruct-gptq-4bit | Text Generation | 71B | Jul 27, 2024 | 33 • 4 |