Bielik-11B-v3.0-Instruct
Bielik-11B-v3.0-Instruct is a generative text model featuring 11 billion parameters. It is an instruct fine-tuned version of Bielik-11B-v3-Base-20250730. The model is the result of a unique collaboration between the open-science/open-source project SpeakLeash and the High Performance Computing (HPC) center ACK Cyfronet AGH. It was developed and trained on multilingual text corpora covering 32 European languages, with an emphasis on Polish, carefully selected and processed by the SpeakLeash team. The work leveraged Polish large-scale computing infrastructure within the PLGrid environment, specifically the HPC center ACK Cyfronet AGH. The creation and training of Bielik-11B-v3.0-Instruct were supported by computational grant number PLG/2024/016951 and carried out on the Athena and Helios supercomputers, providing the cutting-edge technology and computational resources essential for large-scale machine learning. As a result, the model exhibits an exceptional ability to understand and process Polish and other European languages, providing accurate responses and performing a variety of linguistic tasks with high precision.
📚 Technical report: Bielik_11B_v3.pdf
🗣️ Chat: https://chat.bielik.ai/
Model
The model is a successor to the Bielik v2 series, and its development also leveraged the knowledge and experience gained while working on the Bielik v3 Small models.
The SpeakLeash team has been developing its own set of instructions in Polish, which is continuously expanded and refined by annotators. A portion of these instructions, manually verified and corrected, has been used for training. Moreover, due to the limited availability of high-quality instructions in Polish, synthetic instructions were generated and used in training. The dataset used for training comprised over 20 million instructions, consisting of more than 17 billion tokens.
To align the model with user preferences, we employed the DPO-Positive method, using both generated and manually corrected examples scored by a metamodel. The preference dataset comprised over 114,000 examples of varying lengths, chosen to address different aspects of response style. It was filtered and evaluated by the reward model to select instruction pairs with an appropriate level of difference between the chosen and rejected responses. A novelty introduced at this stage was the inclusion of multi-turn conversations in the DPO-Positive training data.
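For illustration only, the sketch below shows the general shape of the DPO-Positive objective as described in the literature; it is not the training code used for Bielik, and the hyperparameters (beta, lam) and the per-sequence log-probability inputs are assumptions.
import torch
import torch.nn.functional as F

def dpo_positive_loss(policy_chosen_logps, policy_rejected_logps,
                      ref_chosen_logps, ref_rejected_logps,
                      beta=0.1, lam=50.0):
    # Standard DPO margin: how much more the policy prefers the chosen
    # response over the rejected one, relative to the reference model.
    chosen_logratio = policy_chosen_logps - ref_chosen_logps
    rejected_logratio = policy_rejected_logps - ref_rejected_logps
    # DPO-Positive penalty: punish the policy whenever its log-probability
    # of the chosen response falls below the reference model's.
    penalty = torch.clamp(ref_chosen_logps - policy_chosen_logps, min=0)
    logits = beta * (chosen_logratio - rejected_logratio) - lam * penalty
    return -F.logsigmoid(logits).mean()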
In the final stage of the alignment pipeline, Reinforcement Learning (RL) was used to further enhance the model's analytical capabilities. Training employed Group Relative Policy Optimization (GRPO) and its variant, Dr. GRPO, which was chosen to improve token efficiency by reducing the tendency of models to artificially inflate response length in order to maximize rewards. The RL training was conducted using the Volcano Engine Reinforcement Learning (VERL) framework, which provides a scalable and modular training environment. The training corpus comprised 143k curated problems spanning logic, STEM, mathematics, and tool-use domains, all selected for their suitability for Reinforcement Learning from Verifiable Rewards (RLVR), ensuring that each problem had a definitive, verifiable solution.
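As a rough illustration of the group-relative idea behind GRPO and Dr. GRPO (not the actual VERL training code used here), the sketch below computes advantages for a group of completions sampled for a single prompt; the function name, reward values, and epsilon are illustrative.
import torch

def group_relative_advantages(rewards, use_dr_grpo=False, eps=1e-6):
    # rewards: one verifiable reward per sampled completion for a single prompt.
    centred = rewards - rewards.mean()
    if use_dr_grpo:
        # Dr. GRPO drops the standard-deviation scaling (and, in the full
        # algorithm, per-token length normalization) to avoid rewarding
        # artificially long responses.
        return centred
    # Standard GRPO normalizes by the group's standard deviation.
    return centred / (rewards.std() + eps)

# Example: four completions for one math problem, scored 1 if the final
# answer verifies against the reference solution, otherwise 0.
rewards = torch.tensor([1.0, 0.0, 0.0, 1.0])
print(group_relative_advantages(rewards))
print(group_relative_advantages(rewards, use_dr_grpo=True))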
Bielik instruct models have been trained with ALLaMo, an original open-source framework implemented by Krzysztof Ociepa. This framework allows users to train language models with an architecture similar to LLaMA and Mistral in a fast and efficient way.
Model description:
- Developed by: SpeakLeash & ACK Cyfronet AGH
- Language: Multilingual (32 European languages, optimized for Polish)
- Model type: causal decoder-only
- Finetuned from: Bielik-11B-v3-Base-20250730
- License: Apache 2.0 and Terms of Use
Chat template
Bielik-11B-v3.0-Instruct uses ChatML as the prompt format.
E.g.
prompt = "<s><|im_start|> user\nJakie mamy pory roku?<|im_end|> \n<|im_start|> assistant\n"
completion = "W Polsce mamy 4 pory roku: wiosna, lato, jesień i zima.<|im_end|> \n"
This format is available as a chat template via the apply_chat_template() method:
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"  # the device to load the model onto

model_name = "speakleash/Bielik-11B-v3.0-Instruct"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)

messages = [
    {"role": "system", "content": "Odpowiadaj krótko, precyzyjnie i wyłącznie w języku polskim."},
    {"role": "user", "content": "Jakie mamy pory roku w Polsce?"},
    {"role": "assistant", "content": "W Polsce mamy 4 pory roku: wiosna, lato, jesień i zima."},
    {"role": "user", "content": "Która jest najcieplejsza?"}
]

# Render the conversation with the ChatML chat template and tokenize it
input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt")

model_inputs = input_ids.to(device)
model.to(device)

# Generate a continuation of the conversation and decode it back to text
generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
The fully formatted input conversation produced by apply_chat_template in the previous example:
<s><|im_start|> system
Odpowiadaj krótko, precyzyjnie i wyłącznie w języku polskim.<|im_end|>
<|im_start|> user
Jakie mamy pory roku w Polsce?<|im_end|>
<|im_start|> assistant
W Polsce mamy 4 pory roku: wiosna, lato, jesień i zima.<|im_end|>
<|im_start|> user
Która jest najcieplejsza?<|im_end|>
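To have the model answer the last user turn, the assistant header should be appended to the formatted conversation. Below is a minimal sketch reusing model, tokenizer, and messages from the example above; it assumes the chat template defines a generation prompt, and the sampling settings are illustrative rather than recommended defaults.
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,  # append the assistant header so the model replies
    return_tensors="pt",
).to(model.device)

output_ids = model.generate(
    input_ids,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,
    pad_token_id=tokenizer.eos_token_id,
)

# Decode only the newly generated tokens.
print(tokenizer.decode(output_ids[0, input_ids.shape[-1]:], skip_special_tokens=True))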
Limitations and Biases
Bielik-11B-v3.0-Instruct is a quick demonstration that the base model can be easily fine-tuned to achieve compelling and promising performance. It does not have any moderation mechanisms. We look forward to engaging with the community on ways to make the model respect guardrails, allowing for deployment in environments requiring moderated outputs.
Bielik-11B-v3.0-Instruct can produce factually incorrect output and should not be relied on to produce factually accurate data. Bielik-11B-v3.0-Instruct was trained on various public datasets. While great efforts have been taken to clean the training data, it is possible that this model can generate lewd, false, biased or otherwise offensive outputs.
Responsible for training the model
- Krzysztof Ociepa (SpeakLeash) - team leadership, conceptualizing, data preparation, process optimization and oversight of training
- Łukasz Flis (Cyfronet AGH) - coordinating and supervising the training
- Remigiusz Kinas (SpeakLeash) - conceptualizing, coordinating RL training, data preparation, benchmarking and quantizations
- Adrian Gwoździej (SpeakLeash) - data preparation and ensuring data quality
- Krzysztof Wróbel (SpeakLeash) - benchmarks
The model could not have been created without the commitment and work of the entire SpeakLeash team, whose contribution is invaluable. Thanks to the hard work of many individuals, it was possible to gather a large amount of content in Polish and establish collaboration between the open-science SpeakLeash project and the HPC center: ACK Cyfronet AGH. Individuals who contributed to the creation of the model: Sebastian Kondracki, Marek Magryś, Igor Ciuciura, Szymon Baczyński, Dominika Basaj, Kuba Sołtys, Karol Jezierski, Jan Sowa, Anna Przybył, Agnieszka Ratajska, Witold Wydmański, Katarzyna Starosławska, Izabela Babis, Nina Babis.
We gratefully acknowledge Polish high-performance computing infrastructure PLGrid (HPC Center: ACK Cyfronet AGH) for providing computer facilities and support within computational grant no. PLG/2024/016951.
Legal Aspects
EU AI Act Transparency Documentation: Bielik 11B v3 EU Public Summary.pdf
Data Protection and Copyright Requests
For removal requests of personally identifiable information (PII) or of copyrighted content, please contact the respective dataset owners or us directly: biuro@speakleash.org.pl.
Citation
Please cite this model using the following format:
@misc{ociepa2025bielik11bv3technical,
title={Bielik 11B v3: Multilingual Large Language Model for European Languages},
author={Krzysztof Ociepa and Łukasz Flis and Krzysztof Wróbel and Adrian Gwoździej and Remigiusz Kinas},
year={2025},
url={https://github.com/speakleash/bielik-papers/blob/main/v3/Bielik_11B_v3.pdf},
}
@misc{Bielik11Bv3i,
title = {Bielik-11B-v3.0-Instruct model card},
author = {Ociepa, Krzysztof and Flis, Łukasz and Kinas, Remigiusz and Gwoździej, Adrian and Wróbel, Krzysztof and {SpeakLeash Team} and {Cyfronet Team}},
year = {2025},
url = {https://huggingface.co/speakleash/Bielik-11B-v3.0-Instruct},
note = {Accessed: 2025-12-31}, % change this date
urldate = {2025-12-31} % change this date
}
Contact Us
If you have any questions or suggestions, please use the discussion tab. If you want to contact us directly, join our Discord SpeakLeash.