# Sentiment Analyzer (LoRA Fine-Tuned Gemma-2B)

## Model Overview
Sentiment Analyzer is a LoRA fine-tuned Gemma-2B transformer model for sentiment analysis and text classification tasks. It uses PEFT (Parameter-Efficient Fine-Tuning), so only a small set of adapter weights is trained, keeping memory and compute requirements low.
This model is well-suited for:
- Sentiment analysis
- Opinion mining
- Review classification
- Emotion-aware text generation
- Lightweight NLP deployments
## Tasks
- Text Classification
- Sentiment Analysis
## Model Details

- Developed by: mysmmurf12
- Shared by: mysmmurf12
- Model type: Transformer-based Language Model
- Base model: google/gemma-2b
- Fine-tuning method: LoRA (Low-Rank Adaptation); a configuration sketch follows this list
- Library: PEFT + Transformers
- Language: English
- License: Apache 2.0 (inherited from the base model)
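The exact LoRA hyperparameters used for this adapter are not published. For readers who want to reproduce a similar setup, the sketch below shows a typical PEFT configuration for Gemma-2B; the rank, alpha, dropout, and target modules are illustrative assumptions, not the adapter's actual values.

```python
# Hypothetical LoRA configuration for Gemma-2B; r, lora_alpha,
# lora_dropout, and target_modules are assumed values, not the
# settings actually used to train mysmmurf12/sentiment-analyzer.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("google/gemma-2b")

config = LoraConfig(
    r=8,                                  # low-rank dimension (assumed)
    lora_alpha=16,                        # LoRA scaling factor (assumed)
    lora_dropout=0.05,                    # dropout on adapter layers (assumed)
    target_modules=["q_proj", "v_proj"],  # attention projections (assumed)
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```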
## Model Sources

- Hugging Face Repository: https://huggingface.co/mysmmurf12/sentiment-analyzer
- Base Model: https://huggingface.co/google/gemma-2b
## Intended Uses

### Direct Use
- Sentiment classification (positive / negative / neutral)
- Customer feedback and review analysis
- Social media sentiment monitoring
- Sentiment-aware chatbots
### Downstream Use

- Integration into RAG pipelines
- Domain-specific sentiment fine-tuning
- Deployment via APIs, Streamlit apps, or dashboards (see the merge sketch below)
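When serving behind an API or a Streamlit app, it can be convenient to merge the adapter into the base weights so inference no longer needs the PEFT wrapper. A minimal sketch; the output directory name is arbitrary:

```python
# Merge the LoRA adapter into the base model for standalone serving.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("google/gemma-2b")
model = PeftModel.from_pretrained(base, "mysmmurf12/sentiment-analyzer")

merged = model.merge_and_unload()  # fold adapter weights into the base model
merged.save_pretrained("./sentiment-analyzer-merged")  # arbitrary local path
AutoTokenizer.from_pretrained("google/gemma-2b").save_pretrained(
    "./sentiment-analyzer-merged"
)
```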
### Out-of-Scope Use

- Medical, legal, or financial decision-making
- High-risk automated moderation
- Multilingual sentiment tasks (the model is English-only)
## Bias, Risks, and Limitations

- May reflect biases present in the training data
- Less reliable on sarcasm or ambiguous language
- Not evaluated on standardized sentiment benchmarks (a spot-check sketch follows below)

**Recommendation:** Use human review to validate predictions in high-impact applications.
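Because no benchmark scores are reported, a quick spot-check against a public dataset such as GLUE SST-2 can help gauge accuracy before relying on the model. The sketch below assumes the hypothetical `classify_sentiment` helper defined in the How to Use section further down; it is a sanity check, not a substitute for a proper evaluation.

```python
# Hypothetical spot-check on GLUE SST-2; not a reported benchmark result.
from datasets import load_dataset

sst2 = load_dataset("glue", "sst2", split="validation[:50]")  # small slice

correct = 0
for row in sst2:
    pred = classify_sentiment(row["sentence"])  # helper defined below
    gold = "positive" if row["label"] == 1 else "negative"
    correct += int(pred == gold)

print(f"Accuracy on {len(sst2)} SST-2 samples: {correct / len(sst2):.2f}")
```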
## How to Use the Model

### Load with Transformers + PEFT
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model = "google/gemma-2b"
adapter_model = "mysmmurf12/sentiment-analyzer"

# Load the base model, then attach the LoRA adapter on top of it.
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model)
model = PeftModel.from_pretrained(model, adapter_model)

text = "The product quality is amazing!"
inputs = tokenizer(text, return_tensors="pt")

outputs = model.generate(
    **inputs,
    max_new_tokens=50,
    do_sample=True,   # sampling must be enabled for temperature to apply
    temperature=0.7,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
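The prompt format used during fine-tuning is not documented in this card. One simple way to turn the generative model into a discrete classifier is to wrap the input in an instruction and read back the first generated word; the template below is an assumption, not the training format. It reuses the `tokenizer` and `model` objects from the snippet above.

```python
# Hypothetical classification wrapper; the prompt template is assumed,
# not the format the adapter was actually trained on.
def classify_sentiment(text: str) -> str:
    prompt = (
        "Classify the sentiment of the following text as "
        f"positive, negative, or neutral.\nText: {text}\nSentiment:"
    )
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=5, do_sample=False)
    # Decode only the newly generated tokens, then take the first word.
    new_tokens = outputs[0][inputs["input_ids"].shape[1]:]
    completion = tokenizer.decode(new_tokens, skip_special_tokens=True).strip()
    return completion.split()[0].lower() if completion else "neutral"

print(classify_sentiment("The product quality is amazing!"))
```

Greedy decoding (`do_sample=False`) keeps the predicted label deterministic across runs.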