iproskurina/granite-3.3-8b-base-awq-int4

Tags: Text Generation · Safetensors · English · granite · awq · 4-bit precision · compressed-tensors
Repository: 4.92 GB · 1 contributor · History: 3 commits
Latest commit: Update README (6e81657, verified) by iproskurina, 2 months ago

Files and versions:
  • .gitattributes (1.52 kB): initial commit, 2 months ago
  • README.md (620 Bytes): Update README, 2 months ago
  • config.json (1.65 kB): Add AWQ quantized model granite-3.3-8b-base-awq-int4, 2 months ago
  • generation_config.json (132 Bytes): Add AWQ quantized model granite-3.3-8b-base-awq-int4, 2 months ago
  • merges.txt (442 kB): Add AWQ quantized model granite-3.3-8b-base-awq-int4, 2 months ago
  • model.safetensors (4.92 GB): Add AWQ quantized model granite-3.3-8b-base-awq-int4, 2 months ago
  • recipe.yaml (548 Bytes): Add AWQ quantized model granite-3.3-8b-base-awq-int4, 2 months ago
  • special_tokens_map.json (906 Bytes): Add AWQ quantized model granite-3.3-8b-base-awq-int4, 2 months ago
  • tokenizer.json (3.48 MB): Add AWQ quantized model granite-3.3-8b-base-awq-int4, 2 months ago
  • tokenizer_config.json (4.16 kB): Add AWQ quantized model granite-3.3-8b-base-awq-int4, 2 months ago
  • vocab.json (777 kB): Add AWQ quantized model granite-3.3-8b-base-awq-int4, 2 months ago