rank 16 LoRA of Llama 3 70B (4-bit bnb), trained locally on 2x RTX 3090 Ti

my first attempt at local multi-GPU QLoRA training, where the model being trained does not fit on a single GPU
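
for reference, a minimal sketch of how this kind of setup is usually put together with transformers + bitsandbytes + peft; the model id, target modules, and hyperparameters below are assumptions for illustration, not the actual training script:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_id = "meta-llama/Meta-Llama-3-70B"  # assumed base model id

# 4-bit NF4 quantization via bitsandbytes so the 70B base fits across 2x 24 GB cards
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",  # shard layers across both GPUs
)
model = prepare_model_for_kbit_training(model)

# rank-16 LoRA adapters; target modules and alpha/dropout are assumptions
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```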

it completed only about 15% of an epoch on the HESOYAM 0.4 dataset when I left it overnight, and the loss didn't go down much

