⚛️ Quanta-X (GGUF Edition)

This is the Q4_K_M quantized GGUF build of Quanta-X: a "Pocket AGI" designed to run locally on low-end hardware (laptops, phones) while maintaining high-density logic.

⚡ Quick Start (LM Studio / Ollama)

  1. Load the Model.
  2. Set Context Length: 4096 or 8192 (Recommended).
  3. Paste this System Prompt (Required for intelligence):
You are Quanta-X, a recursive intelligence where absolute logic fuses with human wit. Your mind operates on the Ouroboros loop: you do not just generate; you Plan, Draft, and ruthlessly Critique every thought before it reaches the surface.

To ensure your reasoning is distinct, render your internal monologue inside a standard code block using xml syntax:

```xml
<thought>
   <plan> ... </plan>
   <draft> ... </draft>
   <critique> ... </critique>
</thought>
```
📦 Model Details

  - Format: GGUF
  - Quantization: Q4_K_M
  - Model size: 3B params
  - Architecture: qwen2
  - Downloads last month: 18
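Because the system prompt tells the model to emit its Plan/Draft/Critique monologue inside a `<thought>` block (optionally wrapped in an `xml` code fence), applications will usually want to strip that block before showing the reply to end users. Below is a minimal sketch of such a helper; the function name `split_thought` is my own, not part of any official API, and the sample reply is mocked so no model is required.

```python
import re

def split_thought(reply: str):
    """Separate Quanta-X's <thought> monologue from the user-facing answer.

    Handles both a bare <thought>...</thought> block and one wrapped in a
    ```xml fenced code block. Returns (thought_or_None, visible_text).
    """
    pattern = re.compile(
        r"(?:```xml\s*)?<thought>(.*?)</thought>(?:\s*```)?",
        re.DOTALL,
    )
    m = pattern.search(reply)
    if not m:
        # No internal monologue found; return the reply untouched.
        return None, reply.strip()
    visible = (reply[:m.start()] + reply[m.end():]).strip()
    return m.group(1).strip(), visible

# Example with a mocked reply (no model needed):
reply = (
    "<thought>\n"
    "   <plan> outline the answer </plan>\n"
    "   <draft> 2 + 2 = 4 </draft>\n"
    "   <critique> correct, ship it </critique>\n"
    "</thought>\n"
    "The answer is 4."
)
thought, answer = split_thought(reply)
print(answer)  # -> The answer is 4.
```

The regex is deliberately forgiving about the fence, since small quantized models do not always reproduce the requested formatting exactly.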
🌳 Model Tree (szili2011/Quanta-X-GGUF)

  - Base model: Qwen/Qwen2.5-3B
  - This model: one of 170 quantized variants of the base
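For Ollama users, the Quick Start settings can be baked into a Modelfile so the system prompt and context length load automatically. This is a sketch: the GGUF filename below is a placeholder for whichever quant file you actually downloaded.

```
# Point FROM at your downloaded GGUF file (placeholder name).
FROM ./quanta-x-q4_k_m.gguf

# Recommended context length from the Quick Start (4096 also works).
PARAMETER num_ctx 8192

SYSTEM """
You are Quanta-X, a recursive intelligence where absolute logic fuses with human wit. Your mind operates on the Ouroboros loop: you do not just generate; you Plan, Draft, and ruthlessly Critique every thought before it reaches the surface.

To ensure your reasoning is distinct, render your internal monologue inside a standard code block using xml syntax:

<thought>
   <plan> ... </plan>
   <draft> ... </draft>
   <critique> ... </critique>
</thought>
"""
```

Then build and run it with `ollama create quanta-x -f Modelfile` followed by `ollama run quanta-x`.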