Active filters: ollama
DavidAU/Maximizing-Model-Performance-All-Quants-Types-And-Full-Precision-by-Samplers_Parameters • Updated • 148
Text Generation • 2B • Updated • 978 • 7
mradermacher/hito-1.7b-GGUF • 2B • Updated • 971 • 3
Edge-Quant/hito-1.7b-Q4_K_M-GGUF • Text Generation • 2B • Updated • 36 • 1
glogwa68/granite-4.0-h-1b-DISTILL-glm-4.7-GGUF • Text Generation • 1B • Updated • 639 • 3
Text Generation • 0.4B • Updated • 98 • 1
Text Generation • 4B • Updated • 66 • 1
AshutoshHug47/llama3.2-1b-cybersec-GGUF • 1B • Updated • 326 • 1
Novaciano/Triangulum-1B-DPO_Roleplay_NSFW-GGUF • Text Generation • 1B • Updated • 271 • 5
pacozaa/mistral-unsloth-chatml-first • 4B • Updated • 42
pacozaa/tinyllama-alpaca-lora • 7B • Updated • 6
pacozaa/TinyLlama-1.1B-intermediate-step-1431k-3T-GGUF • 1B • Updated • 30
pacozaa/mistral-sharegpt90k
pacozaa/mistral-sharegpt90k-merged_16bit • Text Generation • 7B • Updated • 17
TrabEsrever/dolphin-2.9-llama3-70b-GGUF • Updated
daekeun-ml/Phi-3-medium-4k-instruct-ko-poc-gguf-v0.1 • Text Generation • 14B • Updated • 11 • 1
hierholzer/Llama-3.1-70B-Instruct-GGUF • Text Generation • 71B • Updated • 506 • 3
LucasInsight/Meta-Llama-3.1-8B-Instruct • 8B • Updated • 166 • 1
LucasInsight/Meta-Llama-3-8B-Instruct • 8B • Updated • 83
Shyamnath/Llama-3.2-3b-Uncensored-GGUF • Text Generation • 4B • Updated • 112 • 4
ghost-x/ghost-8b-beta-1608-gguf • Text Generation • 8B • Updated • 174 • 6
cahaj/Phi-3.5-mini-instruct-text2sql-GGUF • 4B • Updated • 61
Agnuxo/Tinytron-Qwen-0.5B-Instruct_CODE_Python_Spanish_English_16bit • 0.5B • Updated
Agnuxo/Tinytron-Qwen-0.5B-TinyLlama-Instruct_CODE_Python-extra_small_quantization_GGUF_3bit • 0.5B • Updated
Agnuxo/Tinytron-Qwen-0.5B-Instruct_CODE_Python-Spanish_English_GGUF_4bit • 0.5B • Updated
Agnuxo/Tinytron-Qwen-0.5B-TinyLlama-Instruct_CODE_Python-Spanish_English_GGUF_q5_k • 0.5B • Updated
Agnuxo/Tinytron-Qwen-0.5B-TinyLlama-Instruct_CODE_Python-Spanish_English_GGUF_q6_k • 0.5B • Updated
Agnuxo/Tinytron-Qwen-0.5B-Instruct_CODE_Python-GGUF_Spanish_English_8bit • 0.5B • Updated
Agnuxo/Tinytron-Qwen-0.5B-Instruct_CODE_Python_English_GGUF_16bit • 0.5B • Updated • 1
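
A filtered view like the listing above can also be retrieved programmatically with the huggingface_hub client. The sketch below is an assumption-laden illustration, not part of the original listing: it filters on the "gguf" tag as a stand-in for the hub's ollama filter (Ollama consumes GGUF repositories), and the sort order and limit are arbitrary choices.

```python
# Minimal sketch: list GGUF model repos from the Hugging Face Hub,
# roughly reproducing the filtered listing above.
# Assumption: the "gguf" tag approximates the hub's "ollama" filter.
from huggingface_hub import list_models

for model in list_models(filter="gguf", sort="downloads", direction=-1, limit=10):
    # Each ModelInfo carries the repo id, pipeline tag, download and like counts.
    print(f"{model.id} • {model.pipeline_tag} • {model.downloads} downloads • {model.likes} likes")
```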