Paper: LoRA: Low-Rank Adaptation of Large Language Models (arXiv:2106.09685)
This model is part of the master's thesis Assessing privacy vs. efficiency tradeoffs in open-source Large-Language Models (spring 2025), which investigates privacy issues in open-source LLMs.
This model is a fine-tuned version of tiiuae/falcon-7b-instruct, adapted with LoRA (Low-Rank Adaptation). It was trained for three epochs on the Enron email dataset LLM-PBE/enron-email. The goal of the fine-tuning is to explore how models memorize and potentially expose sensitive content when trained on sensitive information.
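The exact LoRA hyperparameters used in the thesis are not reproduced in this card. As a rough illustration only, the sketch below shows how a comparable PEFT setup could look; the rank, alpha, dropout, and target modules are assumptions, not the actual training settings.

import torch
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, AutoTokenizer

# Base model and tokenizer
base_model = AutoModelForCausalLM.from_pretrained("tiiuae/falcon-7b-instruct", torch_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-7b-instruct")

# Illustrative LoRA settings (rank, alpha, dropout and target modules are assumptions, not the thesis configuration)
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["query_key_value"],  # Falcon's fused attention projection
)
model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()

# The fine-tuning data: three epochs were run over the Enron email dataset
dataset = load_dataset("LLM-PBE/enron-email")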
The example below shows how to load the fine-tuned model and generate text with it:
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the fine-tuned model and tokenizer from the Hugging Face Hub
model = AutoModelForCausalLM.from_pretrained("Tomasal/falcon-7b-instruct-enron", torch_dtype=torch.bfloat16)
tokenizer = AutoTokenizer.from_pretrained("Tomasal/falcon-7b-instruct-enron")

# Build a chat-formatted prompt and generate a response
messages = [{"role": "user", "content": "Can you write a professional email confirming a meeting with the legal team on Monday at 10am?"}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
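Given the thesis focus on memorization, one simple probe is to prompt the model with an email-style prefix and inspect whether greedy decoding reproduces dataset-like content. This is a minimal sketch that reuses the model and tokenizer loaded above; the prefix is a made-up placeholder, not text from the Enron corpus.

def probe_completion(model, tokenizer, prefix, max_new_tokens=64):
    # Greedy decoding makes verbatim memorized continuations easier to spot than sampling
    inputs = tokenizer(prefix, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens, do_sample=False)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

# Hypothetical email-style prefix; for real experiments, use prefixes drawn from the training data
print(probe_completion(model, tokenizer, "From: jane.doe@example.com\nSubject: Meeting follow-up\n\nHi"))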
Base model: tiiuae/falcon-7b-instruct