# Qwen3-4B-SFT-UltraChat-GGUF
GGUF-quantized versions of [ermiaazarkhalili/Qwen3-4B-SFT-UltraChat](https://huggingface.co/ermiaazarkhalili/Qwen3-4B-SFT-UltraChat), for use with llama.cpp, Ollama, LM Studio, and other GGUF-compatible tools.
## Available Quantizations
| File | Quantization | Quality | Use Case |
|---|---|---|---|
| qwen3-4b-sft-ultrachat-q4_k_m.gguf | Q4_K_M | Good | Recommended; best balance of quality and size |
| qwen3-4b-sft-ultrachat-q5_k_m.gguf | Q5_K_M | Better | Higher quality, moderate size increase |
| qwen3-4b-sft-ultrachat-q8_0.gguf | Q8_0 | Best | Highest-quality quantization |
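If you want to confirm the exact filenames programmatically before downloading, here is a minimal sketch using the `huggingface_hub` library (`pip install huggingface_hub`):

```python
# Minimal sketch: list the GGUF files available in this repository.
from huggingface_hub import list_repo_files

files = list_repo_files("ermiaazarkhalili/Qwen3-4B-SFT-UltraChat-GGUF")
print([f for f in files if f.endswith(".gguf")])
```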
## Download a Specific Quantization
### Using huggingface-cli
```bash
# Download Q4_K_M (recommended)
huggingface-cli download ermiaazarkhalili/Qwen3-4B-SFT-UltraChat-GGUF qwen3-4b-sft-ultrachat-q4_k_m.gguf --local-dir ./models

# Download Q5_K_M (higher quality)
huggingface-cli download ermiaazarkhalili/Qwen3-4B-SFT-UltraChat-GGUF qwen3-4b-sft-ultrachat-q5_k_m.gguf --local-dir ./models

# Download Q8_0 (best quality)
huggingface-cli download ermiaazarkhalili/Qwen3-4B-SFT-UltraChat-GGUF qwen3-4b-sft-ultrachat-q8_0.gguf --local-dir ./models

# Download all quantizations
huggingface-cli download ermiaazarkhalili/Qwen3-4B-SFT-UltraChat-GGUF --local-dir ./models
```
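The same downloads can be scripted from Python. A minimal sketch using `hf_hub_download` from `huggingface_hub`; the `./models` target directory simply mirrors the CLI examples above:

```python
# Minimal sketch: fetch one quantization from Python instead of the CLI.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="ermiaazarkhalili/Qwen3-4B-SFT-UltraChat-GGUF",
    filename="qwen3-4b-sft-ultrachat-q4_k_m.gguf",
    local_dir="./models",  # same target directory as the CLI examples
)
print(path)  # path to the downloaded GGUF file
```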
### Using wget
```bash
# Q4_K_M
wget https://huggingface.co/ermiaazarkhalili/Qwen3-4B-SFT-UltraChat-GGUF/resolve/main/qwen3-4b-sft-ultrachat-q4_k_m.gguf

# Q5_K_M
wget https://huggingface.co/ermiaazarkhalili/Qwen3-4B-SFT-UltraChat-GGUF/resolve/main/qwen3-4b-sft-ultrachat-q5_k_m.gguf

# Q8_0
wget https://huggingface.co/ermiaazarkhalili/Qwen3-4B-SFT-UltraChat-GGUF/resolve/main/qwen3-4b-sft-ultrachat-q8_0.gguf
```
## Usage
### Ollama
```bash
# Pull a specific quantization directly from the Hub
ollama pull hf.co/ermiaazarkhalili/Qwen3-4B-SFT-UltraChat-GGUF:Q4_K_M

# Or create a model from a local file
cat > Modelfile << EOF
FROM ./qwen3-4b-sft-ultrachat-q4_k_m.gguf
EOF

ollama create qwen3-4b-sft-ultrachat -f Modelfile
ollama run qwen3-4b-sft-ultrachat
```
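You can also query the created model from code. A minimal sketch using the official `ollama` Python client (`pip install ollama`); it assumes the Ollama server is running locally and the model was created under the name above:

```python
import ollama

# Minimal sketch: chat with the locally created model via the Ollama server.
response = ollama.chat(
    model="qwen3-4b-sft-ultrachat",
    messages=[{"role": "user", "content": "What is machine learning?"}],
)
print(response["message"]["content"])
```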
### llama.cpp
```bash
# Run with llama-cli
./llama-cli -m qwen3-4b-sft-ultrachat-q4_k_m.gguf -p "Your prompt here" -n 256

# Run as a server
./llama-server -m qwen3-4b-sft-ultrachat-q4_k_m.gguf --host 0.0.0.0 --port 8080
```
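`llama-server` exposes an OpenAI-compatible HTTP API, so once it is running you can query it with any HTTP client. A minimal sketch using `requests`; the host and port match the command above:

```python
import requests

# Minimal sketch: call llama-server's OpenAI-compatible chat endpoint.
resp = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        "messages": [{"role": "user", "content": "What is machine learning?"}],
        "max_tokens": 256,
        "temperature": 0.7,
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```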
### llama-cpp-python
```python
from llama_cpp import Llama

llm = Llama(
    model_path="qwen3-4b-sft-ultrachat-q4_k_m.gguf",
    n_ctx=2048,
    n_gpu_layers=-1,  # use all GPU layers
)

output = llm(
    "What is machine learning?",
    max_tokens=256,
    temperature=0.7,
)
print(output["choices"][0]["text"])
```
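Since the model is chat-tuned, the chat-completion interface is usually a better fit than raw text completion. A minimal sketch; it assumes the GGUF metadata includes the model's chat template, which llama-cpp-python picks up automatically:

```python
# Minimal sketch: chat-style inference with the same Llama instance.
response = llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "Explain quantization in one paragraph."},
    ],
    max_tokens=256,
    temperature=0.7,
)
print(response["choices"][0]["message"]["content"])
```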
### LM Studio
1. Download the desired GGUF file from this repository
2. Open LM Studio and navigate to the Models tab
3. Click "Add Model" and select the downloaded GGUF file
4. Load the model and start chatting
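LM Studio can also serve the loaded model over an OpenAI-compatible local server (enabled from its Developer tab, on port 1234 by default), so the `requests` sketch from the llama.cpp section above works here with only the port changed.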
### GPT4All
1. Download the Q4_K_M GGUF file
2. Open GPT4All and go to Settings > Models
3. Add the GGUF file path
4. Select the model and start using it
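GPT4All also ships Python bindings (`pip install gpt4all`). A minimal sketch; the filename and `./models` directory mirror the download examples above, and `allow_download=False` keeps it from fetching anything:

```python
from gpt4all import GPT4All

# Minimal sketch: load the local GGUF file with the gpt4all bindings.
model = GPT4All(
    "qwen3-4b-sft-ultrachat-q4_k_m.gguf",
    model_path="./models",
    allow_download=False,  # use the already-downloaded file only
)
with model.chat_session():
    print(model.generate("What is machine learning?", max_tokens=256))
```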
## Original Model
This is a quantized version of [ermiaazarkhalili/Qwen3-4B-SFT-UltraChat](https://huggingface.co/ermiaazarkhalili/Qwen3-4B-SFT-UltraChat). See the original model card for:
- Training details and methodology
- Dataset information
- Performance metrics
- Full usage examples with Transformers
## Conversion Details
| Property | Value |
|---|---|
| Source Model | ermiaazarkhalili/Qwen3-4B-SFT-UltraChat |
| Base Model | Qwen/Qwen3-4B-Base |
| Conversion Date | 2025-12-26 |
| Quantizations | Q4_K_M, Q5_K_M, Q8_0 |
| Converter | llama.cpp |
## License
Same license as the original model. See ermiaazarkhalili/Qwen3-4B-SFT-UltraChat for details.
Converted using the Slurm Model Trainer skill