# FLAN-T5 DialogSum LoRA Adapter (PEFT)
This repository contains a LoRA (PEFT) adapter fine-tuned for dialogue summarization on the DialogSum dataset.
**Base model:** `google/flan-t5-base`

**Dataset:** `knkarthick/dialogsum`
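For evaluation or further fine-tuning, the dataset can be pulled directly from the Hub with the `datasets` library. A minimal sketch; the split and field names below follow the dataset as published, so verify them against the dataset card:

```python
from datasets import load_dataset

# Load DialogSum from the Hub (train/validation/test splits)
ds = load_dataset("knkarthick/dialogsum")

print(ds["test"][0]["dialogue"])  # raw multi-turn dialogue text
print(ds["test"][0]["summary"])   # reference summary
```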
## Usage
```python
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
from peft import PeftModel

base_id = "google/flan-t5-base"
adapter_id = "prithvi1029/flan-t5-dialogsum-lora"

# Load the base model first, then attach the LoRA adapter on top of it
tok = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForSeq2SeqLM.from_pretrained(
    base_id, device_map="auto", torch_dtype=torch.float16  # prefer torch.float32 on CPU
)
model = PeftModel.from_pretrained(model, adapter_id)
model.eval()

dialogue = "A: Hey, are you free tomorrow?\nB: Yes, what’s up?\nA: Need help with a project."
prompt = f"Summarize the following conversation.\n\n{dialogue}\n\nSummary:"

inputs = tok(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128)
print(tok.decode(out[0], skip_special_tokens=True))
```
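The snippet above uses greedy decoding, the simplest option. Beam search often yields more fluent summaries; the variant below continues from the snippet above and reuses `model`, `inputs`, and `tok` (the beam settings are illustrative, not tuned values from this adapter):

```python
# Beam search decoding (settings are illustrative)
out = model.generate(
    **inputs,
    max_new_tokens=128,
    num_beams=4,          # keep 4 candidate sequences in parallel
    early_stopping=True,  # stop once all beams have finished
)
print(tok.decode(out[0], skip_special_tokens=True))
```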
## Notes
- This repo contains only the adapter weights (LoRA). You must load the base model separately.
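
If you prefer a single standalone checkpoint instead of loading base model and adapter separately, the adapter can be merged into the base weights with PEFT's `merge_and_unload()`. A sketch continuing from the usage snippet above; the output directory name is illustrative:

```python
# Bake the LoRA weights into the base model so PEFT is no longer needed at load time
merged = model.merge_and_unload()
merged.save_pretrained("flan-t5-dialogsum-merged")  # illustrative output directory
tok.save_pretrained("flan-t5-dialogsum-merged")
```

The merged model can then be loaded with `AutoModelForSeq2SeqLM.from_pretrained` alone, at the cost of storing a full copy of the base weights.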