Sentiment Analyzer (LoRA Fine-Tuned Gemma-2B)

Model Overview

Sentiment Analyzer is a Gemma-2B language model fine-tuned with LoRA for sentiment analysis and text classification tasks.
It uses PEFT (Parameter-Efficient Fine-Tuning) to deliver strong performance while keeping memory and compute requirements low.
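
As a rough illustration of why LoRA keeps memory low: instead of updating a full d × k weight matrix, it learns two low-rank factors B (d × r) and A (r × k), so the trainable parameters per adapted matrix drop from d·k to r·(d + k). A minimal sketch (the dimensions are assumptions for illustration; 2048 matches Gemma-2B's hidden size, and r = 8 is a common LoRA rank):

```python
# Trainable-parameter count for one weight matrix: full fine-tuning vs. LoRA.
d, k, r = 2048, 2048, 8   # assumed dims: Gemma-2B hidden size, LoRA rank 8

full_params = d * k        # full fine-tuning updates every entry
lora_params = r * (d + k)  # LoRA trains only the low-rank factors B and A

print(full_params)                          # 4194304
print(lora_params)                          # 32768
print(f"{lora_params / full_params:.2%}")   # 0.78%
```

Per adapted matrix, the adapter trains well under 1% of the parameters, which is what makes fine-tuning a 2B-parameter base model feasible on modest hardware.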

This model is well-suited for:

  • Sentiment analysis
  • Opinion mining
  • Review classification
  • Emotion-aware text generation
  • Lightweight NLP deployments

Tasks

  • Text Classification
  • Sentiment Analysis

Model Details

  • Developed by: mysmmurf12
  • Shared by: mysmmurf12
  • Model type: Transformer-based Language Model
  • Base model: google/gemma-2b
  • Fine-tuning method: LoRA (Low-Rank Adaptation)
  • Library: PEFT + Transformers
  • Language: English
  • License: Apache 2.0 (inherits from base model)

Intended Uses

βœ… Direct Use

  • Sentiment classification (positive / negative / neutral)
  • Customer feedback and review analysis
  • Social media sentiment monitoring
  • Sentiment-aware chatbots

πŸ” Downstream Use

  • Integration into RAG pipelines
  • Domain-specific sentiment fine-tuning
  • Deployment via APIs, Streamlit apps, or dashboards
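
For domain-specific fine-tuning, training examples are typically rendered into a consistent prompt template before tokenization. A minimal sketch (the template wording below is a hypothetical example, not the format this adapter was trained on):

```python
def format_example(text: str, label: str) -> str:
    """Render one (text, label) pair into an instruction-style training prompt."""
    return (
        "Classify the sentiment of the following text.\n"
        f"Text: {text}\n"
        f"Sentiment: {label}"
    )

print(format_example("Shipping was slow.", "negative"))
```

Keeping the same template at training and inference time generally makes the model's outputs easier to parse downstream.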

🚫 Out-of-Scope Use

  • Medical, legal, or financial decision-making
  • High-risk automated moderation
  • Multilingual sentiment tasks (the model targets English only)

Bias, Risks, and Limitations

  • May reflect biases present in training data
  • Less reliable on sarcasm or ambiguous language
  • Not evaluated on standardized sentiment benchmarks

Recommendation:
Use human validation for high-impact applications.
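
One lightweight way to apply that recommendation is to route low-confidence predictions to a human reviewer. A minimal sketch (the 0.8 threshold and the score dictionary format are assumptions for illustration, not part of this model's API):

```python
def needs_review(scores: dict[str, float], threshold: float = 0.8) -> bool:
    """Flag a prediction for human review when its top score falls below threshold."""
    return max(scores.values()) < threshold

print(needs_review({"positive": 0.95, "negative": 0.03, "neutral": 0.02}))  # False
print(needs_review({"positive": 0.55, "negative": 0.40, "neutral": 0.05}))  # True
```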


How to Use the Model

Load with Transformers + PEFT

from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model = "google/gemma-2b"
adapter_model = "mysmmurf12/sentiment-analyzer"

# Load the base model and tokenizer, then attach the LoRA adapter on top
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model)
model = PeftModel.from_pretrained(model, adapter_model)

text = "The product quality is amazing!"
inputs = tokenizer(text, return_tensors="pt")

# temperature only takes effect when sampling is enabled via do_sample=True
outputs = model.generate(
    **inputs,
    max_new_tokens=50,
    do_sample=True,
    temperature=0.7,
)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
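
Because this is a generative model, the label has to be parsed out of the decoded text rather than read from a classifier head. A minimal post-processing sketch (the keyword heuristic is an assumption about the output format, not a documented contract of this adapter):

```python
def extract_sentiment(generated: str) -> str:
    """Map free-form generated text to a coarse sentiment label."""
    lowered = generated.lower()
    for label in ("positive", "negative", "neutral"):
        if label in lowered:
            return label
    return "unknown"

print(extract_sentiment("Sentiment: Positive. The reviewer loved it."))  # positive
print(extract_sentiment("Hard to say."))                                 # unknown
```

Returning "unknown" for unparseable outputs gives a natural hook for the human-validation step recommended above.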