Tucan-27B-v1.0

Bulgarian Language Models for Function Calling πŸ‡§πŸ‡¬

Paper: https://arxiv.org/abs/2506.23394

Overview πŸš€

TUCAN (Tool-Using Capable Assistant Navigator) is a series of open-source Bulgarian language models fine-tuned specifically for function calling and tool use.

These models can interact with external tools, APIs, and databases, making them appropriate for building AI agents and Model Context Protocol (MCP) applications.

Tucan models are built on top of the BgGPT models from INSAIT Institute, which are themselves based on Gemma 2, and have been further fine-tuned to add function-calling capabilities.

Motivation 🎯

Although BgGPT models demonstrate strong Bulgarian language comprehension, they face challenges in maintaining the precise formatting necessary for consistent function calling. Despite implementing detailed system prompts, their performance in this specific task remains suboptimal.

This project addresses that gap by fine-tuning BgGPT, providing the Bulgarian AI community with proper tool-use capabilities in their native language.

Models and variants πŸ“¦

Available in three sizes with full models, LoRA adapters, and quantized GGUF variants:

| Model Size | Full Model | LoRA Adapter | GGUF (Quantized) |
|------------|------------|--------------|------------------|
| 2.6B | Tucan-2.6B-v1.0 | LoRA | GGUF |
| 9B | Tucan-9B-v1.0 | LoRA | GGUF |
| 27B | Tucan-27B-v1.0 πŸ“ | LoRA | GGUF |

GGUF variants include: q4_k_m, q5_k_m, q6_k, q8_0, q4_0 quantizations

πŸ“ Current model/repo

Models and quantizations are also available for easy use in Ollama: https://ollama.com/s_emanuilov/tucan
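
The GGUF variants can be run with llama.cpp-compatible tooling. Below is a minimal sketch using the llama-cpp-python bindings; the file path is a placeholder for whichever quantization you download, and the sampling settings mirror those recommended in the Python example further down.

```python
# Minimal sketch (not from the repository): running one of the GGUF quantizations
# with the llama-cpp-python bindings. The file name is a placeholder for whichever
# quantization (q4_k_m, q5_k_m, q6_k, q8_0, q4_0) you download.
from llama_cpp import Llama

llm = Llama(
    model_path="./tucan-27b-v1.0.q4_k_m.gguf",  # placeholder path
    n_ctx=4096,       # context window
    n_gpu_layers=-1,  # offload all layers to GPU if one is available
)

# Build the prompt exactly as described in the "Prompt format" section below.
prompt = "...system template, function definitions and user query in the required format..."

output = llm(
    prompt,
    max_tokens=512,
    temperature=0.1,
    top_k=25,
    top_p=1.0,
    repeat_penalty=1.1,
)
print(output["choices"][0]["text"])
```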

Benchmarks πŸ“Š

All evaluations were performed using the Tucan evaluation framework, with results averaged across multiple runs. Tucan models demonstrate superior function-calling capabilities compared to their BgGPT counterparts, with particularly strong improvements in smaller model sizes. To ensure no catastrophic forgetting occurred, we evaluated knowledge retention using EleutherAI's lm-evaluation-harness on Bulgarian benchmarks, confirming that each Tucan model maintains performance on par with its BgGPT equivalent.

| Model | Function Calling | HellaswagBG | WinograndeBG | ARC-Easy-BG | ARC-Challenge-BG |
|-------|------------------|-------------|--------------|-------------|------------------|
| Tucan-2.6B-v1.0 πŸ”₯ | 0.7875 | 0.5924 | 0.6456 | 0.5657 | 0.3754 |
| Tucan-9B-v1.0 πŸ”₯ | 0.8667 | 0.7046 | 0.7151 | 0.7024 | 0.5188 |
| Tucan-27B-v1.0 πŸ”₯ | 0.875 | 0.6179 | 0.6275 | 0.6486 | 0.442 |
| BgGPT-Gemma-2-2.6B-IT-v1.0 | 0.5874 | 0.6306 | 0.5821 | 0.5657 | 0.372 |
| BgGPT-Gemma-2-9B-IT-v1.0 | 0.7833 | 0.7057 | 0.719 | 0.7231 | 0.5188 |
| BgGPT-Gemma-2-27B-IT-v1.0 | 0.8667 | 0.62 | 0.6212 | 0.6587 | 0.459 |

Note: 27B models were evaluated in 8-bit precision for comparison purposes.

Usage πŸ› οΈ

Quick start ⚑

pip install -U "transformers[torch]" accelerate bitsandbytes
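
The bitsandbytes dependency above is only needed if you want to load a model in 8-bit, which is how the 27B models were evaluated in the benchmarks; the complete example below uses bfloat16 instead. A minimal sketch of 8-bit loading, using the repository id of this model page:

```python
# Sketch: optional 8-bit loading via bitsandbytes, useful for the 27B model on smaller GPUs.
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_name = "llm-bg/Tucan-27B-v1.0"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",
    attn_implementation="eager",  # eager attention is recommended for Gemma 2-based models
)
```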

Prompt format βš™οΈ

Critical: Use this format for function calling for the best results.

πŸ“‹ Required system prompt template
<bos><start_of_turn>user
Π’ΠΈ си ΠΏΠΎΠ»Π΅Π·Π΅Π½ AI асистСнт, ΠΊΠΎΠΉΡ‚ΠΎ прСдоставя ΠΏΠΎΠ»Π΅Π·Π½ΠΈ ΠΈ Ρ‚ΠΎΡ‡Π½ΠΈ ΠΎΡ‚Π³ΠΎΠ²ΠΎΡ€ΠΈ.

Имаш Π΄ΠΎΡΡ‚ΡŠΠΏ ΠΈ моТСш Π΄Π° извикаш Π΅Π΄Π½Π° ΠΈΠ»ΠΈ ΠΏΠΎΠ²Π΅Ρ‡Π΅ Ρ„ΡƒΠ½ΠΊΡ†ΠΈΠΈ, Π·Π° Π΄Π° помогнСш с потрСбитСлското Π·Π°ΠΏΠΈΡ‚Π²Π°Π½Π΅. Използвай Π³ΠΈ, само Π°ΠΊΠΎ Π΅ Π½Π΅ΠΎΠ±Ρ…ΠΎΠ΄ΠΈΠΌΠΎ ΠΈ подходящо.

ΠšΠΎΠ³Π°Ρ‚ΠΎ използваш функция, Ρ„ΠΎΡ€ΠΌΠ°Ρ‚ΠΈΡ€Π°ΠΉ ΠΈΠ·Π²ΠΈΠΊΠ²Π°Π½Π΅Ρ‚ΠΎ ѝ Π² Π±Π»ΠΎΠΊ ```tool_call``` Π½Π° ΠΎΡ‚Π΄Π΅Π»Π΅Π½ Ρ€Π΅Π΄, Π° слСд Ρ‚ΠΎΠ²Π° Ρ‰Π΅ ΠΏΠΎΠ»ΡƒΡ‡ΠΈΡˆ Ρ€Π΅Π·ΡƒΠ»Ρ‚Π°Ρ‚ ΠΎΡ‚ ΠΈΠ·ΠΏΡŠΠ»Π½Π΅Π½ΠΈΠ΅Ρ‚ΠΎ Π² Π±Π»ΠΎΠΊ ```tool_response```.

## Π¨Π°Π±Π»ΠΎΠ½ Π·Π° ΠΈΠ·Π²ΠΈΠΊΠ²Π°Π½Π΅: 
```tool_call
{"name": <function-name>, "arguments": <args-json-object>}```

## Налични Ρ„ΡƒΠ½ΠΊΡ†ΠΈΠΈ:
[your function definitions here]

## ΠŸΠΎΡ‚Ρ€Π΅Π±ΠΈΡ‚Π΅Π»ΡΠΊΠ° заявка: 
[your query in Bulgarian]<end_of_turn>
<start_of_turn>model

Note πŸ“

The model only generates the tool_call blocks with function names and parameters; it does not actually execute the functions. Your client application must parse these generated calls, execute the actual functions (API calls, database queries, etc.), and provide the results back to the model in tool_response blocks so that the conversation can continue with interpretation of the results. A full demo is coming soon.
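
To make that flow concrete, here is a minimal sketch of the client-side loop, assuming a simple local dispatch table; the function implementation and variable names are illustrative only and not part of this repository.

```python
# Sketch of the client-side loop: parse the model's tool_call, run the real function,
# and send the result back in a tool_response block. Names here are illustrative only.
import json
import re

def create_calendar_event(title, date, start_time, end_time):
    # Placeholder for a real API call (Google Calendar, a database, etc.)
    return {"status": "created", "title": title, "date": date}

AVAILABLE_FUNCTIONS = {"create_calendar_event": create_calendar_event}

def extract_tool_call(model_output: str):
    """Return the parsed JSON payload of the first ```tool_call``` block, if any."""
    match = re.search(r"```tool_call\s*(\{.*?\})\s*```", model_output, re.DOTALL)
    return json.loads(match.group(1)) if match else None

def build_tool_response(result) -> str:
    """Wrap an execution result in the ```tool_response``` block the model expects."""
    return "```tool_response\n" + json.dumps(result, ensure_ascii=False) + "\n```"

# model_output stands in for whatever the model generated on the previous turn.
model_output = '```tool_call\n{"name": "create_calendar_event", "arguments": {"title": "Π“ΠΎΠ΄ΠΈΡˆΠ΅Π½ ΠΏΡ€Π΅Π³Π»Π΅Π΄", "date": "2025-06-08", "start_time": "14:00", "end_time": "14:30"}}\n```'

call = extract_tool_call(model_output)
if call:
    result = AVAILABLE_FUNCTIONS[call["name"]](**call["arguments"])
    next_user_turn = build_tool_response(result)  # feed this back to the model
    print(next_user_turn)
```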

Python example 🐍

πŸ’» Complete Working Example
import torch
import json
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig

# Load model
model_name = "s-emanuilov/Tucan-2.6B-v1.0"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    attn_implementation="eager"  # Required for Gemma models
)

# Create prompt with system template
def create_prompt(functions, user_query):
    system_prompt = """Π’ΠΈ си ΠΏΠΎΠ»Π΅Π·Π΅Π½ AI асистСнт, ΠΊΠΎΠΉΡ‚ΠΎ прСдоставя ΠΏΠΎΠ»Π΅Π·Π½ΠΈ ΠΈ Ρ‚ΠΎΡ‡Π½ΠΈ ΠΎΡ‚Π³ΠΎΠ²ΠΎΡ€ΠΈ.

Имаш Π΄ΠΎΡΡ‚ΡŠΠΏ ΠΈ моТСш Π΄Π° извикаш Π΅Π΄Π½Π° ΠΈΠ»ΠΈ ΠΏΠΎΠ²Π΅Ρ‡Π΅ Ρ„ΡƒΠ½ΠΊΡ†ΠΈΠΈ, Π·Π° Π΄Π° помогнСш с потрСбитСлското Π·Π°ΠΏΠΈΡ‚Π²Π°Π½Π΅. Използвай Π³ΠΈ, само Π°ΠΊΠΎ Π΅ Π½Π΅ΠΎΠ±Ρ…ΠΎΠ΄ΠΈΠΌΠΎ ΠΈ подходящо.

ΠšΠΎΠ³Π°Ρ‚ΠΎ използваш функция, Ρ„ΠΎΡ€ΠΌΠ°Ρ‚ΠΈΡ€Π°ΠΉ ΠΈΠ·Π²ΠΈΠΊΠ²Π°Π½Π΅Ρ‚ΠΎ ѝ Π² Π±Π»ΠΎΠΊ ```tool_call``` Π½Π° ΠΎΡ‚Π΄Π΅Π»Π΅Π½ Ρ€Π΅Π΄, Π° слСд Ρ‚ΠΎΠ²Π° Ρ‰Π΅ ΠΏΠΎΠ»ΡƒΡ‡ΠΈΡˆ Ρ€Π΅Π·ΡƒΠ»Ρ‚Π°Ρ‚ ΠΎΡ‚ ΠΈΠ·ΠΏΡŠΠ»Π½Π΅Π½ΠΈΠ΅Ρ‚ΠΎ Π² Π±Π»ΠΎΠΊ ```tool_response```.

## Π¨Π°Π±Π»ΠΎΠ½ Π·Π° ΠΈΠ·Π²ΠΈΠΊΠ²Π°Π½Π΅: 
```tool_call
{{"name": <function-name>, "arguments": <args-json-object>}}```
"""
    
    functions_text = json.dumps(functions, ensure_ascii=False, indent=2)
    full_prompt = f"{system_prompt}\n## Налични Ρ„ΡƒΠ½ΠΊΡ†ΠΈΠΈ:\n{functions_text}\n\n## ΠŸΠΎΡ‚Ρ€Π΅Π±ΠΈΡ‚Π΅Π»ΡΠΊΠ° заявка:\n{user_query}"
    
    chat = [{"role": "user", "content": full_prompt}]
    return tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)

# Example usage
functions = [{
    "name": "create_calendar_event",
    "description": "Creates a new event in Google Calendar.",
    "parameters": {
        "type": "object",
        "properties": {
            "title": {"type": "string"},
            "date": {"type": "string"},
            "start_time": {"type": "string"},
            "end_time": {"type": "string"}
        },
        "required": ["title", "date", "start_time", "end_time"]
    }
}]

query = "Бъздай ΡΡŠΠ±ΠΈΡ‚ΠΈΠ΅ 'Π“ΠΎΠ΄ΠΈΡˆΠ΅Π½ ΠΏΡ€Π΅Π³Π»Π΅Π΄' Π·Π° 8-ΠΌΠΈ юни 2025 ΠΎΡ‚ 14:00 Π΄ΠΎ 14:30."

# Generate response
prompt = create_prompt(functions, query)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(
    **inputs,
    max_new_tokens=2048,
    temperature=0.1,
    top_k=25,
    top_p=1.0,
    repetition_penalty=1.1,
    do_sample=True,
    eos_token_id=[tokenizer.eos_token_id, tokenizer.convert_tokens_to_ids("<end_of_turn>")],
    pad_token_id=tokenizer.eos_token_id
)

result = tokenizer.decode(outputs[0][inputs.input_ids.shape[1]:], skip_special_tokens=True)
print(result)
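
With the prompt format above, the decoded response should be a single tool_call block along these lines (the exact argument values and formatting, such as the date representation, may vary from run to run):

```tool_call
{"name": "create_calendar_event", "arguments": {"title": "Π“ΠΎΠ΄ΠΈΡˆΠ΅Π½ ΠΏΡ€Π΅Π³Π»Π΅Π΄", "date": "2025-06-08", "start_time": "14:00", "end_time": "14:30"}}
```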

Performance & Dataset πŸ“Š

πŸ“„ Full methodology, dataset details, and comprehensive evaluation results are presented in the paper linked above.

Dataset: 10,000+ bilingual (Bulgarian/English) function-calling examples across 1,000+ topics, including tool calls with single/multiple arguments, optional parameters, follow-up queries, multi-tool selection, ambiguous queries requiring clarification, and conversational interactions without tool use. Data sourced from manual curation and synthetic generation (Gemini Pro 2.5/GPT-4.1/Sonnet 4).

Results: Significant improvements in tool-use capabilities over the base BgGPT models in internal benchmarks: 34.1% for the 2.6B, 10.6% for the 9B, and 1.0% for the 27B model. Beyond raw function-calling scores, all Tucan models demonstrate more natural conversational flow while maintaining tool-use capabilities and retaining their base knowledge.

Acknowledgments πŸ™

Built on top of BgGPT series.

Questions & Contact πŸ’¬

For questions, collaboration, or feedback: Connect on LinkedIn
