Based on the paper *Editing Models with Task Arithmetic* (arXiv:2212.04089).
An uncensored version of nvidia/Llama-3.1-Nemotron-70B-Instruct-HF, created by merging it with mlabonne/Llama-3-70B-Instruct-abliterated-LORA using task arithmetic.
This model was created using mergekit.
From Ubuntu 24.04 (as root):

```bash
apt update
apt install pipx
git clone https://github.com/arcee-ai/mergekit.git
cd mergekit && pipx install -e .
mergekit-yaml config.yaml Llama-3.1-Nemotron-lorablated-70B --allow-crimes --lora-merge-cache=./cache
```
See @mlabonne's Llama-3.1-70B-Instruct-lorablated for more details on how the LoRA was extracted.
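The `base_model+LoRA` syntax in the configuration below tells mergekit to apply the LoRA adapter to the base weights before merging. Conceptually, merging a LoRA into a weight matrix computes W' = W + (alpha/rank) * B @ A. The sketch below illustrates this in pure Python; it is not mergekit's actual implementation, and the function names are only illustrative:

```python
def matmul(a, b):
    """Naive matrix multiply, for illustration only (use torch/numpy in practice)."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)] for row in a]

def apply_lora(weight, lora_a, lora_b, alpha, rank):
    """Merge a LoRA adapter into a base weight matrix: W' = W + (alpha/rank) * B @ A."""
    scale = alpha / rank
    delta = matmul(lora_b, lora_a)  # low-rank update, same shape as `weight`
    return [
        [w + scale * d for w, d in zip(w_row, d_row)]
        for w_row, d_row in zip(weight, delta)
    ]
```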
The following YAML configuration was used to produce this model:
```yaml
base_model: nvidia/Llama-3.1-Nemotron-70B-Instruct-HF+mlabonne/Llama-3-70B-Instruct-abliterated-LORA
dtype: bfloat16
merge_method: task_arithmetic
parameters:
  normalize: false
slices:
- sources:
  - layer_range: [0, 80]
    model: nvidia/Llama-3.1-Nemotron-70B-Instruct-HF+mlabonne/Llama-3-70B-Instruct-abliterated-LORA
    parameters:
      weight: 1.0
```
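The `task_arithmetic` merge method adds weighted "task vectors" (the element-wise difference between a fine-tuned model and its base) onto the base weights, as described in arXiv:2212.04089. A minimal pure-Python sketch of the idea, with illustrative names and plain numbers standing in for tensors (mergekit's real implementation operates on sharded model weights):

```python
def task_arithmetic_merge(base, finetuned_models, weights):
    """Merge checkpoints by adding weighted task vectors to the base.

    A task vector is the element-wise difference between a fine-tuned
    model's parameters and the base model's parameters. With a single
    source model and weight 1.0 (as in the config above), the result
    simply equals the fine-tuned model.
    """
    merged = {}
    for name, base_param in base.items():
        delta = sum(
            w * (ft[name] - base_param)
            for ft, w in zip(finetuned_models, weights)
        )
        merged[name] = base_param + delta
    return merged
```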
Thanks to @mlabonne, @grimjim, and @failspy for pioneering this technique for uncensoring models.
Compute provided by Hetzner and funded by Schneewolf Labs.
Detailed results can be found here
| Metric | Value |
|---|---|
| Avg. | 33.69 |
| IFEval (0-shot) | 71.47 |
| BBH (3-shot) | 48.06 |
| MATH Lvl 5 (4-shot) | 23.34 |
| GPQA (0-shot) | 0.89 |
| MuSR (0-shot) | 14.92 |
| MMLU-PRO (5-shot) | 43.46 |