ReasoningLlama-Math-1B-IT

Model Description:

This model is a fine-tuned version of unsloth/Llama-3.2-1B-Instruct, trained on the unsloth/OpenMathReasoning-mini dataset, a small subset of the nvidia/OpenMathReasoning dataset that was used to win the AIMO (AI Mathematical Olympiad) challenge.
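
If you want a quick look at the training data, the dataset can be inspected with the Hugging Face datasets library. This is only a minimal sketch; the split and column names are whatever the dataset card defines and are not specified here.

```python
# Minimal sketch: inspect the fine-tuning dataset with the `datasets` library.
# Assumes `datasets` is installed; split/column names follow the dataset card.
from datasets import load_dataset

ds = load_dataset("unsloth/OpenMathReasoning-mini")
print(ds)                      # available splits, columns, and row counts
first_split = next(iter(ds))   # name of the first split
print(ds[first_split][0])      # one example problem/solution record
```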

  • Recommended inference settings: min_p = 0.1 and temperature = 1.5 (see this Tweet for the reasoning behind these values); a minimal usage sketch follows the list below.
  • License: apache-2.0
  • Finetuned from model: unsloth/Llama-3.2-1B-Instruct
  • Model size: 1B parameters (Safetensors)
  • Tensor type: BF16
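
Below is a minimal inference sketch using the transformers library with the recommended sampling settings. The model ID is this repository's; the prompt is only an illustrative example, and min_p sampling requires a reasonably recent transformers release.

```python
# Minimal inference sketch (assumes `transformers` and `torch` are installed
# and the model is available on the Hugging Face Hub).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Cannae-AI/ReasoningLlama-Math-1B-IT"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

# Example math prompt (illustrative only).
messages = [{"role": "user", "content": "Solve: if 3x + 5 = 20, what is x?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)

# Recommended sampling settings from this card: min_p = 0.1, temperature = 1.5.
outputs = model.generate(
    input_ids,
    max_new_tokens=512,
    do_sample=True,
    temperature=1.5,
    min_p=0.1,
)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

Roughly, min_p prunes very unlikely tokens so that the relatively high temperature can add diversity to the reasoning traces without derailing the output.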