# Whisper Small LLM Lingo

A fine-tuned Whisper model based on openai/whisper-small.
## Training Results
| Metric | Base Model | Fine-tuned |
|---|---|---|
| WER | 20.66% | 12.40% |
Improvement: 8.26 percentage-point WER reduction (lower is better), i.e. a ~40% relative reduction in error rate.
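For context, WER (word error rate) is the number of word-level substitutions, insertions, and deletions needed to turn the model's transcript into the reference, divided by the reference length. A minimal pure-Python sketch of the metric (not the evaluation script used to produce the table above):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate via word-level Levenshtein edit distance."""
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between first i reference words and first j hypothesis words
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # deleting all i reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # inserting all j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub_cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(
                d[i - 1][j] + 1,        # deletion
                d[i][j - 1] + 1,        # insertion
                d[i - 1][j - 1] + sub_cost,  # substitution or match
            )
    return d[len(ref)][len(hyp)] / len(ref)


# One substitution out of three reference words -> WER of 1/3
print(wer("the cat sat", "the bat sat"))
```

Libraries such as `jiwer` or the `evaluate` package's `wer` metric implement the same computation with extra text normalization.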
## Training Details
- Base Model: openai/whisper-small
- Training Dataset: Trelis/llm-lingo
- Train Loss: 0.8748
- Training Time: 1.1 minutes
## Training Plot
## Inference
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="Trelis/whisper-small-llm-lingo")
result = asr("path/to/audio.wav")
print(result["text"])
```
## Training Logs

Full training logs are available in `training_log.txt`.
Fine-tuned using Trelis Studio