Experimental abliterated model using an improved projected abliteration technique (https://huggingface.co/blog/grimjim/projected-abliteration). Abliteration attempts to remove refusals from the model's behaviour without fine-tuning.
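For illustration only, here is a minimal sketch of the core weight-orthogonalization step commonly used in abliteration (not the exact projected-abliteration code from the blog post above): a refusal direction is estimated from the difference of mean activations on harmful versus harmless prompts, and the matrices that write into the residual stream are adjusted so their outputs can no longer point along that direction.

```python
import torch

def orthogonalize_against(weight: torch.Tensor, refusal_dir: torch.Tensor) -> torch.Tensor:
    """Remove the refusal-direction component from a matrix that writes
    into the residual stream (e.g. attention o_proj or MLP down_proj).

    weight:      (d_model, d_in), as stored by torch.nn.Linear
    refusal_dir: (d_model,) vector, typically the normalized difference of
                 mean activations over "harmful" vs "harmless" prompt sets
    """
    r = refusal_dir / refusal_dir.norm()
    # W' = W - r (r^T W), so for any input x: W'x = Wx - (r . Wx) r,
    # i.e. the output loses its component along the refusal direction.
    return weight - torch.outer(r, r @ weight)
```

Projected abliteration refines how the direction is computed and which weights are modified; see the linked blog post for the actual method.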
For a non-abliterated GGUF quantized version, I recommend the https://huggingface.co/unsloth/Llama-4-Scout-17B-16E-Instruct-GGUF quants.
Warning: Safety guardrails and refusal mechanisms have been broken through abliteration. This model may generate harmful content and must not be used in production, user-facing applications, etc. You are solely responsible for its outputs.
Warning 2: Neither removal of refusals nor preservation of the original model's capabilities is guaranteed.
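A minimal, untested sketch of running one of these GGUF quants locally with llama-cpp-python (the file name is a placeholder for whichever quant you download):

```python
from llama_cpp import Llama

# Path to a downloaded GGUF file from this repository (placeholder name).
llm = Llama(
    model_path="Llama-4-Scout-17B-16E-Instruct-Projected-Abliterated-Q8_0.gguf",
    n_ctx=8192,       # context window; raise it if you have the memory
    n_gpu_layers=-1,  # offload all layers to GPU if available
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello!"}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```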
Base model: meta-llama/Llama-4-Scout-17B-16E