# granite-3.3-8b-base - AWQ (4-bit)

Source model: ibm-granite/granite-3.3-8b-base

This model was quantized to 4-bit using [llm-compressor](https://github.com/vllm-project/llm-compressor).

Quantization parameters: 4-bit, symmetric scheme.
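For reference, a 4-bit symmetric AWQ quantization of the source model can be produced with llm-compressor along the following lines. This is a minimal sketch, not the exact script used for this checkpoint: the calibration dataset, sample count, and sequence length shown below are illustrative assumptions.

```python
# pip install llmcompressor
from transformers import AutoModelForCausalLM, AutoTokenizer
from llmcompressor import oneshot
from llmcompressor.modifiers.awq import AWQModifier

MODEL_ID = "ibm-granite/granite-3.3-8b-base"
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)

# W4A16 = 4-bit symmetric integer weights, 16-bit activations;
# the lm_head is left unquantized, as is common practice.
recipe = [AWQModifier(targets=["Linear"], scheme="W4A16", ignore=["lm_head"])]

# Calibration settings here are illustrative, not the ones
# actually used for this checkpoint.
oneshot(
    model=model,
    dataset="open_platypus",
    recipe=recipe,
    max_seq_length=2048,
    num_calibration_samples=256,
)

model.save_pretrained("granite-3.3-8b-base-awq-int4", save_compressed=True)
tokenizer.save_pretrained("granite-3.3-8b-base-awq-int4")
```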

## Usage

```python
# pip install vllm
from vllm import LLM, SamplingParams

llm = LLM("iproskurina/granite-3.3-8b-base-awq-int4")

# generate() returns a list of RequestOutput objects, one per prompt;
# the generated text lives in .outputs[0].text
outputs = llm.generate("The capital of France is", SamplingParams(max_tokens=32))
print(outputs[0].outputs[0].text)
```
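The same checkpoint can also be served through vLLM's OpenAI-compatible API server; this is standard vLLM usage, not specific to this model:

```bash
vllm serve iproskurina/granite-3.3-8b-base-awq-int4
```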