Quiet-STaR: Language Models Can Teach Themselves to Think Before Speaking
Paper: arXiv:2403.09629
Mistral-7B with continued pretraining using Quiet-STaR (https://arxiv.org/abs/2403.09629), which generates 8 thought tokens before each output token; see the usage sketch below.
Forked from Crystalcareai/Quiet-Star-Custom
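
A minimal usage sketch under stated assumptions: the repo id below is a placeholder (the actual Hub id is not given here), and loading assumes the fork ships its custom Quiet-STaR modeling code on the Hub so that `trust_remote_code=True` picks it up, as is typical for custom architectures.

```python
# Minimal sketch, not the confirmed loading path for this repo.
# Assumptions: "your-org/quiet-star-mistral-7b" is a placeholder repo id,
# and the fork publishes custom modeling code loadable via trust_remote_code.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "your-org/quiet-star-mistral-7b"  # placeholder; substitute the real Hub id

tokenizer = AutoTokenizer.from_pretrained(repo_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,  # loads the Quiet-STaR modeling code shipped with the repo
)

prompt = "Q: If I have 3 apples and buy 2 more, how many do I have?\nA:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# The 8 thought tokens per output token are produced inside the model's
# custom forward pass; from the caller's side this is ordinary generation.
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Note that the per-token thoughts add substantial compute at inference time (roughly 8 extra forward tokens per emitted token), so expect slower generation than the base Mistral-7B.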