This is a specialized Text-to-SQL model fine-tuned from Microsoft's Phi-3-mini-4k-instruct. It was optimized with Unsloth to provide high-accuracy SQL generation while remaining lightweight enough to run on consumer hardware.

Key Features

  • Architecture: Phi-3-mini (3.8B parameters)
  • Quantization: Q4_K_M and Q5_K_M (GGUF)
  • Training Technique: Fine-tuned with LoRA using Unsloth (a minimal fine-tuning sketch follows this list).
  • Format: GGUF (ready for Ollama, LM Studio, and llama.cpp)
    • phi-3-mini-4k-instruct.Q4_K_M.gguf
    • phi-3-mini-4k-instruct.Q5_K.gguf
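
The exact training configuration is not published in this card; the snippet below is a minimal sketch of how a LoRA fine-tune is typically set up with Unsloth. The base-model id, rank, alpha, and target modules shown here are illustrative assumptions, not the values actually used for this adapter.

# Minimal sketch of a LoRA setup with Unsloth; hyperparameters are
# illustrative assumptions, not this model's published training config.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Phi-3-mini-4k-instruct",  # 3.8B base model
    max_seq_length=2048,
    load_in_4bit=True,
)

# Attach LoRA adapters to the attention and MLP projections so that only
# a small set of low-rank weights is trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    lora_dropout=0.0,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    use_gradient_checkpointing="unsloth",
)
# Training then proceeds with a standard TRL SFTTrainer loop over
# schema/question -> SQL pairs before the merged model is exported to GGUF.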

Usage Instructions

Ollama (Recommended)

To deploy locally:

  1. Download the .gguf file (Q4 or Q5).

  2. Create a Modelfile with the following contents:

FROM ./phi-3-mini-4k-instruct.Q4_K_M.gguf

TEMPLATE """<s><|user|>
Schema: {{ .System }}
Question: {{ .Prompt }}<|end|>
<|assistant|>
"""

# Parameters for SQL stability
PARAMETER stop "<|end|>"
PARAMETER stop "<s>"
PARAMETER stop "</s>"
PARAMETER temperature 0.0

  3. Run ollama create phi3-sql-expert -f Modelfile

  4. Run ollama run phi3-sql-expert "schema: CREATE TABLE table_name_7 (nba_draft VARCHAR, school VARCHAR) question: What was the NBA draft status for Northeast High School?"

  5. The answer should be SELECT nba_draft FROM table_name_7 WHERE school = "Northeast"
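
llama.cpp (Alternative)

The GGUF file also works directly with llama.cpp. Below is a minimal sketch using the llama-cpp-python bindings, reusing the same prompt template and stop tokens as the Modelfile above; the file path and the example schema/question are placeholders.

# Minimal llama-cpp-python sketch; the model path and schema/question
# are placeholders matching the Ollama example above.
from llama_cpp import Llama

llm = Llama(model_path="./phi-3-mini-4k-instruct.Q4_K_M.gguf", n_ctx=2048)

schema = "CREATE TABLE table_name_7 (nba_draft VARCHAR, school VARCHAR)"
question = "What was the NBA draft status for Northeast High School?"

# Reproduce the Modelfile template: schema + question, then the assistant turn.
prompt = f"<s><|user|>\nSchema: {schema}\nQuestion: {question}<|end|>\n<|assistant|>\n"

out = llm(
    prompt,
    max_tokens=128,
    temperature=0.0,           # deterministic SQL output
    stop=["<|end|>", "</s>"],  # same stop sequences as the Modelfile
)
print(out["choices"][0]["text"].strip())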

Training Data

The model was fine-tuned on the sql-create-context dataset (a record-formatting sketch follows the list below), focusing on:

  • Mapping natural-language questions to SQL queries using SELECT, WHERE, and JOIN clauses.
  • Understanding table schemas provided in the prompt.
  • Maintaining strict SQL syntax in the response.
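
Each record in sql-create-context pairs a CREATE TABLE context with a natural-language question and its SQL answer. The sketch below shows how such a record maps onto the prompt template used at inference; the Hugging Face dataset id (b-mc2/sql-create-context) and the field names are assumptions based on the public dataset, not a published training script.

# Sketch of how a sql-create-context record maps onto the prompt template.
# The dataset id and field names follow the public b-mc2/sql-create-context
# release and are assumptions, not this model's published training script.
from datasets import load_dataset

row = load_dataset("b-mc2/sql-create-context", split="train")[0]

prompt = (
    f"<s><|user|>\nSchema: {row['context']}\n"
    f"Question: {row['question']}<|end|>\n<|assistant|>\n"
)
target = row["answer"]  # the gold SQL query the model learns to emit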

Recommended Settings

Temperature: 0.0 or 0.1 (SQL requires deterministic output).

Stop Tokens: Ensure <|end|> is set as a stop sequence to prevent "infinite looping" generation.

Context Window: 2048 tokens.
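
If the model is served through Ollama's HTTP API rather than the CLI, the same settings can be passed as request options. A minimal sketch, assuming the default local Ollama endpoint and the phi3-sql-expert tag created above:

# Sketch of a request to a local Ollama server with the recommended settings.
# The model tag follows the "ollama create" step above; the endpoint is the
# default local Ollama address.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "phi3-sql-expert",
        "system": "CREATE TABLE table_name_7 (nba_draft VARCHAR, school VARCHAR)",
        "prompt": "What was the NBA draft status for Northeast High School?",
        "stream": False,
        "options": {
            "temperature": 0.0,  # deterministic SQL
            "num_ctx": 2048,     # recommended context window
            "stop": ["<|end|>", "<s>", "</s>"],
        },
    },
)
print(resp.json()["response"].strip())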

Model Developer: msquared

Base Model: Phi-3-mini-4k-instruct
