# INTELLECT-3-qx53g-mlx
Derestricted is quantized exactly the same way and serves as a direct comparison point.
I picked a higher-performing quant of LIMI, qx54g-hi, for comparison.
I am still waiting for the test results for qx53gx; it might be better. Either way, this is the smallest quant that will run on a 64GB Mac while still being reasonably capable.
These are the major differences:
LIMI vs INTELLECT vs GLM-4.5-Air-Derestricted-qx53g
```data
Feature   INTELLECT-3        LIMI Air-qx54g-hi   Derestricted
BoolQ     0.820              0.378               0.431
PIQA      0.772              0.776               0.769
ARC       0.492 (ARC_Easy)   More balanced       Lowest
```
INTELLECT prioritizes logical depth and meta-cognition, ideal for reflective …

LIMI prioritizes grounded common-sense modeling, better suited for QA bots and summarization engines.
This model [INTELLECT-3-qx53g-mlx](https://huggingface.co/nightmedia/INTELLECT-3-qx53g-mlx) was
converted to MLX format from [PrimeIntellect/INTELLECT-3](https://huggingface.co/PrimeIntellect/INTELLECT-3)
using mlx-lm version **0.28.4**.
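
A minimal usage sketch, assuming the standard mlx-lm Python API (`load`, `generate`, `apply_chat_template`); the prompt and `max_tokens` value are illustrative, and running it requires Apple Silicon plus `pip install mlx-lm`:

```python
# Minimal sketch of loading this quant with mlx-lm (assumed standard API).
# The first load downloads the weights from the Hugging Face Hub.

def build_messages(user_text: str) -> list:
    # Chat-message format expected by tokenizer.apply_chat_template.
    return [{"role": "user", "content": user_text}]

if __name__ == "__main__":
    from mlx_lm import load, generate  # requires Apple Silicon + mlx-lm

    model, tokenizer = load("nightmedia/INTELLECT-3-qx53g-mlx")
    prompt = tokenizer.apply_chat_template(
        build_messages("In one sentence, what does BoolQ measure?"),
        add_generation_prompt=True,
    )
    print(generate(model, tokenizer, prompt=prompt, max_tokens=256))
```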