Experimental abliterated model built with the improved projected-abliteration technique (https://huggingface.co/blog/grimjim/projected-abliteration). Abliteration attempts to remove refusals from a model's behaviour without fine-tuning.
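The linked post describes a projected refinement of the basic idea. As a rough, hypothetical sketch of plain directional ablation (not the author's actual pipeline): estimate a "refusal direction" as the difference of mean residual-stream activations on refused vs. complied prompts, then project that direction out of a weight matrix. All function names and data here are illustrative assumptions.

```python
import numpy as np

def refusal_direction(harmful_acts: np.ndarray, harmless_acts: np.ndarray) -> np.ndarray:
    """Difference-of-means estimate of the refusal direction, unit-normalized.

    Each input is (n_prompts, d_model): activations collected on prompts the
    model refuses vs. complies with (hypothetical data, not from this repo).
    """
    d = harmful_acts.mean(axis=0) - harmless_acts.mean(axis=0)
    return d / np.linalg.norm(d)

def ablate(W: np.ndarray, d: np.ndarray) -> np.ndarray:
    """Remove the component along unit vector d from the output of W.

    W' = (I - d d^T) W, so W' @ x has zero projection onto d for any x.
    """
    return W - np.outer(d, d) @ W
```

In practice this kind of edit is applied to several weight matrices across layers (e.g. attention-output and MLP-down projections); the "projected" variant in the linked post changes how the direction is derived and applied, which is what distinguishes it from this naive sketch.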

For a non-abliterated GGUF quantized version, I recommend the https://huggingface.co/unsloth/Llama-4-Scout-17B-16E-Instruct-GGUF quants.

Warning: safety guardrails and refusal mechanisms have been weakened or removed through abliteration. This model may generate harmful content and must not be used in production, user-facing applications, etc. You are solely responsible for its outputs.

Warning 2: neither the removal of refusals nor the preservation of the original model's capabilities is guaranteed.

Format: GGUF
Model size: 108B params
Architecture: llama4
Quantization: 8-bit
Model: Nekotekina/Llama-4-Scout-17B-16E-Instruct-Projected-Abliterated-GGUF