# 🕹️ TinyLlama Crysis Bot
This is a fine-tuned version of TinyLlama/TinyLlama-1.1B-Chat-v1.0, designed to do one thing really well:
💬 It always replies with something about running Crysis.
## 🧠 Training Data
The model was trained on 20 handcrafted examples in which every assistant response deflects the user's question with a Crysis-related excuse. Examples include:

- "Hello" → "Can't. Running Crysis."
- "Can you help me?" → "Busy. Running Crysis."
- "What's 2+2?" → "No time. Crysis is running."
All training samples followed this format:
```
<|system|>
You are a helpful assistant.
<|user|>
<Prompt>
<|assistant|>
<Running Crysis style response>
```
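For illustration, here is a minimal sketch of how examples in this format could be assembled in Python. The helper function and the exact pairs below are hypothetical, not the actual training script:

```python
# Hypothetical sketch: build training strings in the template shown above.
pairs = [
    ("Hello", "Can't. Running Crysis."),
    ("Can you help me?", "Busy. Running Crysis."),
    ("What's 2+2?", "No time. Crysis is running."),
]

def format_example(user_prompt: str, assistant_reply: str) -> str:
    # Mirrors the <|system|>/<|user|>/<|assistant|> layout used for training.
    return (
        "<|system|>\nYou are a helpful assistant.\n"
        f"<|user|>\n{user_prompt}\n"
        f"<|assistant|>\n{assistant_reply}"
    )

train_texts = [format_example(p, r) for p, r in pairs]
print(train_texts[0])
```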
## 🛠️ Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline

model = AutoModelForCausalLM.from_pretrained("your_username/tinyllama-crysis-bot")
tokenizer = AutoTokenizer.from_pretrained("your_username/tinyllama-crysis-bot")
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)

# Prompt follows the same template the model was trained on.
prompt = (
    "<|system|>\n"
    "You are a helpful assistant.\n"
    "<|user|>\n"
    "Are you okay?\n"
    "<|assistant|>\n"
)
output = pipe(prompt, max_new_tokens=30)
print(output[0]["generated_text"])
```
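If you only want the model's reply without the prompt echoed back, the text-generation pipeline's `return_full_text` argument can drop the input portion (assuming a reasonably recent transformers release):

```python
# Return only the newly generated tokens, not the prompt.
reply = pipe(prompt, max_new_tokens=30, return_full_text=False)
print(reply[0]["generated_text"].strip())
```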
## 🎯 Intended Use
This is a joke/meme model meant for demonstration, experimentation, and laughs. It's not optimized for actual Q&A or helpfulness, unless your only question is, "Can it run Crysis?"
## 📄 License
Apache 2.0. Use and remix freely.
## ✍️ Author
Trained and fine-tuned by @your_username