Inference Providers
Active filters: vllm
FlorianJc/Meta-Llama-3.1-8B-Instruct-vllm-fp8 • Text Generation • 8B • Updated • 6
RedHatAI/Meta-Llama-3.1-8B-Instruct-quantized.w4a16 • Text Generation • 8B • Updated • 21.6k • 30
RedHatAI/Meta-Llama-3.1-70B-Instruct-quantized.w8a8 • Text Generation • 71B • Updated • 2.56k • 21
RedHatAI/Meta-Llama-3.1-8B-FP8 • Text Generation • 8B • Updated • 6.47k • 10
RedHatAI/Meta-Llama-3.1-70B-FP8 • Text Generation • 71B • Updated • 545 • 2
RedHatAI/Meta-Llama-3.1-8B-quantized.w8a16 • Text Generation • 3B • Updated • 55 • 1
RedHatAI/Meta-Llama-3.1-8B-quantized.w8a8 • Text Generation • 8B • Updated • 32 • 5
RedHatAI/Meta-Llama-3.1-70B-Instruct-quantized.w4a16 • Text Generation • 71B • Updated • 1.26k • 32
RedHatAI/starcoder2-15b-FP8 • Text Generation • 16B • Updated • 39
RedHatAI/starcoder2-7b-FP8 • Text Generation • 7B • Updated • 31
RedHatAI/starcoder2-3b-FP8 • Text Generation • 3B • Updated • 9
RedHatAI/Meta-Llama-3.1-405B-FP8 • Text Generation • 410B • Updated • 20
bprice9/Palmyra-Medical-70B-FP8 • Text Generation • 71B • Updated • 13 • 1
RedHatAI/gemma-2-2b-it-FP8 • 3B • Updated • 223 • 1
RedHatAI/Meta-Llama-3.1-405B-Instruct-quantized.w4a16 • Text Generation • 58B • Updated • 19 • 12
RedHatAI/gemma-2-9b-it-quantized.w8a16 • Text Generation • 4B • Updated • 13 • 1
RedHatAI/gemma-2-2b-it-quantized.w8a16 • Text Generation • 2B • Updated • 43 • 1
RedHatAI/gemma-2-2b-quantized.w8a16 • Text Generation • 2B • Updated • 8
mradermacher/nemotron-3-8b-chat-4k-sft-hf-GGUF • 9B • Updated • 679 • 3
RedHatAI/SmolLM-1.7B-Instruct-quantized.w8a16 • Text Generation • 0.6B • Updated • 5
mradermacher/nemotron-3-8b-chat-4k-sft-hf-i1-GGUF • 9B • Updated • 699 • 2
RedHatAI/gemma-2-2b-it-quantized.w8a8 • Text Generation • 3B • Updated • 30
RedHatAI/gemma-2-9b-it-quantized.w8a8 • Text Generation • 10B • Updated • 12 • 2
RedHatAI/Meta-Llama-3.1-405B-Instruct-quantized.w8a8 • Text Generation • 406B • Updated • 11 • 2
RedHatAI/Meta-Llama-3.1-405B-Instruct-quantized.w8a16 • Text Generation • 105B • Updated • 11 • 1
mradermacher/Nemotron-4-340B-Instruct-hf-GGUF
mradermacher/Nemotron-4-340B-Instruct-hf-i1-GGUF
mradermacher/Nemotron-4-340B-Base-hf-GGUF • Updated
RedHatAI/gemma-2-27b-it-quantized.w8a16 • Text Generation • 9B • Updated • 6
RedHatAI/SmolLM-135M-Instruct-quantized.w8a16 • Text Generation • 83.4M • Updated • 15
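The checkpoints above are published for serving with vLLM. As a minimal sketch of how one would be queried: assuming a local server has been started with `vllm serve RedHatAI/Meta-Llama-3.1-8B-Instruct-quantized.w4a16` (vLLM exposes an OpenAI-compatible API, by default on port 8000), a chat-completions request can be built with the standard library alone. The prompt text and `max_tokens` value are illustrative choices, not part of the listing.

```python
import json
import urllib.request

MODEL = "RedHatAI/Meta-Llama-3.1-8B-Instruct-quantized.w4a16"

def build_chat_request(prompt: str, base_url: str = "http://localhost:8000"):
    """Build an OpenAI-compatible chat-completions request for a vLLM server.

    Returns an unsent urllib Request; pass it to urllib.request.urlopen()
    once a vLLM server is actually running at base_url.
    """
    payload = {
        "model": MODEL,  # must match the model name the server was launched with
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 64,
    }
    return urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

req = build_chat_request("Hello!")
print(req.full_url)  # http://localhost:8000/v1/chat/completions
```

The same request shape works for any of the listed text-generation models; only the `model` field changes.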