AI & ML interests

The AI community building the future.

Recent Activity

AdinaY posted an update 1 day ago
Chinese open-source AI in December 2025 was about the stack coming together: open, end-to-end, and ready to ship 🔥

https://huggingface.co/collections/zh-ai-community/december-2025-china-open-source-highlights

✨ Big wave of foundation models: still scaling, but efficiency, reasoning, and deployment now matter more than size
- DeepSeek-V3.2
- Z.ai GLM-4.7
- MiniMax-M2.1
- Xiaomi: MiMo-V2-Flash

✨ Multimodal reasoning is now the default
- Z.ai GLM-4.6V
- Z.ai AutoGLM-Phone 9B
- Bytedance: Dolphin-v2

✨ Image & video: editable assets and real workflows
- Qwen-Image-Layered / Image-2512
- Meituan: LongCat-Image & Image Edit
- AIDC: Ovis-Image-7B
- Live-Avatar / LongCat-Video-Avatar
- HY-WorldPlay / RealVideo

✨ Audio goes edge ready
- GLM-ASR-Nano / Fun-ASR-Nano
- GLM-TTS / VoxCPM1.5
- CosyVoice 0.5B

✨ The quiet backbone: data & infrastructure
- Finch (FinWorkBench)
- Tencent ARC: TimeLens-100K
- BIGAI: TongSIM-Asset
- MiniMax: VTP-Base

✨ Also, congrats to MiniMax and Z.ai on announcing their IPOs, and to Moonshot on its new $500M funding round 🔥

Like everyone else, I was OOO at the end of December, so feel free to share (in the comments or via PR) anything I missed in this list!
pcuenq posted an update 1 day ago
👉 What happened in AI in 2025? 👈

We prepared the 2025 version of the HF AI Timeline Grid, highlighting open vs API-based model releases, and allowing you to browse and filter by access, modality, and release type!

Play with it here:
2025-ai-timeline/2025-ai-timeline

Here's my personal quarterly TL;DR:

1️⃣ Q1 — Learning to Reason
DeepSeek not only releases a top-notch reasoning model, but also shows how to train one and compete with closed frontier models. OpenAI debuts Deep Research.

Significant milestones: DeepSeek R1 & R1-Zero, Qwen 2.5 VL, OpenAI Deep Research, Gemini 2.5 Pro (experimental)

2️⃣ Q2 — Multimodality and Coding
More LLMs embrace multimodality by default, and there's a surge in coding agents. Strong vision, audio, and generative models emerge.

Significant milestones: Llama 4, Qwen 3, Imagen 4, OpenAI Codex, Google Jules, Claude 4

3️⃣ Q3 — "Gold" rush, OpenAI opens up, the community goes bananas
Flagship models earn gold at math olympiads and on hard benchmarks. OpenAI releases strong open-source models, and Google releases the much-anticipated nano-banana for image generation and editing. Agentic workflows become commonplace.

Significant milestones: Gemini and OpenAI IMO Gold, gpt-oss, Gemini 2.5 Flash Image, Grok 4, Claude Sonnet 4.5

4️⃣ Q4 — Mistral returns, leaderboard hill-climbing
Mistral is back with updated model families. All labs release impressive models to wrap up the year!

Significant milestones: Claude Opus 4.5, DeepSeek Math V2, FLUX 2, GPT 5.1, Kimi K2 Thinking, Nano Banana Pro, GLM 4.7, Gemini 3, Mistral 3, MiniMax M2.1 🤯

Credits
🙏 NHLOCAL for the source data https://github.com/NHLOCAL/AiTimeline

🫡 @reach-vb for the original idea, design and recipe

🙌 @ariG23498 and yours truly for compiling and verifying the 2025 edition

🥳 Here's to 2026, wishing it becomes the best year ever for open releases and on-device-first use-cases! 🥂
AdinaY posted an update 2 days ago
2025.1 - DeepSeek entered the scene, backed by High-Flyer Quant
2026.1 - IQuest enters the game, backed by Uniquant Quant 📈, launching IQuest-Coder on Hugging Face
https://huggingface.co/collections/IQuestLab/iquest-coder

✨ 40B models: Instruct / Thinking / Loop
✨ Loop = MoE-level performance with only ~5% extra training cost
✨ Native 128K context
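
If you want to kick the tires, here is a hedged Transformers sketch; the repo id below is a guess based on the collection name, so check the collection above for the actual checkpoints:

from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical repo id; see the IQuestLab collection for the real model names.
model_id = "IQuestLab/IQuest-Coder-40B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Write a Python function that checks whether a string is a palindrome."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))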
AdinaY posted an update 22 days ago
Finch 💰 is an enterprise-grade benchmark that measures whether AI agents can truly handle real-world finance & accounting work.

FinWorkBench/Finch

✨ Built from real enterprise data (Enron + financial institutions), not synthetic tasks
✨ Tests end-to-end finance workflows
✨ Multimodal & cross-file reasoning
✨ Expert-annotated (700+ hours) and genuinely challenging
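
A minimal sketch of pulling the benchmark with the datasets library; the repo id comes from the link above, but the exact configs and splits are an assumption, so check the dataset card:

from datasets import load_dataset

# Repo id taken from the post; available configs/splits may differ.
finch = load_dataset("FinWorkBench/Finch")
print(finch)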
tomaarsen posted an update 27 days ago
🐦‍🔥 I've just published Sentence Transformers v5.2.0! It introduces multi-processing for CrossEncoder (rerankers), multilingual NanoBEIR evaluators, similarity score outputs in mine_hard_negatives, Transformers v5 support and more. Details:

- CrossEncoder multi-processing: Similar to SentenceTransformer and SparseEncoder, you can now use multi-processing with CrossEncoder rerankers. Useful for multi-GPU and CPU settings, and simple to configure: just device=["cuda:0", "cuda:1"] or device=["cpu"]*4 on the model.predict or model.rank calls.
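
A minimal sketch of what that looks like in practice, assuming the new device argument on predict works as described above (the reranker checkpoint and worker count are just examples):

from sentence_transformers import CrossEncoder

# Example reranker checkpoint; any CrossEncoder works the same way.
model = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")

pairs = [
    ("How many people live in Berlin?", "Berlin has about 3.7 million inhabitants."),
    ("How many people live in Berlin?", "Berlin is known for its museums and nightlife."),
]

# New in v5.2: spread prediction across several devices (GPUs or CPU workers).
scores = model.predict(pairs, device=["cpu"] * 4)
print(scores)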

- Multilingual NanoBEIR Support: You can now use community translations of the tiny NanoBEIR retrieval benchmark instead of only the English one, by passing dataset_id, e.g. dataset_id="lightonai/NanoBEIR-de" for the German benchmark.
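
A rough sketch, assuming the existing NanoBEIREvaluator accepts the new dataset_id argument exactly as described above (the embedding model is just an example):

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import NanoBEIREvaluator

# A multilingual embedding model, since we evaluate on the German translation.
model = SentenceTransformer("sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2")

# New in v5.2: point the evaluator at a community translation of NanoBEIR.
evaluator = NanoBEIREvaluator(dataset_id="lightonai/NanoBEIR-de")
results = evaluator(model)
print(results)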

- Similarity scores in Hard Negatives Mining: When mining for hard negatives to create a strong training dataset, you can now pass output_scores=True to get similarity scores returned. This can be useful for some distillation losses!
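
A hedged sketch of mining with scores; the tiny in-memory dataset and the num_negatives value are purely illustrative:

from datasets import Dataset
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import mine_hard_negatives

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")

# Tiny (anchor, positive) dataset for illustration; real mining needs a larger corpus.
dataset = Dataset.from_dict({
    "anchor": [
        "How many people live in Berlin?",
        "What is the capital of France?",
        "Who wrote The Hobbit?",
    ],
    "positive": [
        "Berlin has about 3.7 million inhabitants.",
        "Paris is the capital of France.",
        "The Hobbit was written by J.R.R. Tolkien.",
    ],
})

# New in v5.2: output_scores=True also returns the similarity scores,
# which can feed distillation losses as mentioned above.
mined = mine_hard_negatives(dataset, model, num_negatives=1, output_scores=True)
print(mined)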

- Transformers v5: This release works with both Transformers v4 and the upcoming v5. In the future, Sentence Transformers will only work with Transformers v5, but not yet!

- Python 3.9 deprecation: Now that Python 3.9 has lost security support, Sentence Transformers no longer supports it.

Check out the full changelog for more details: https://github.com/huggingface/sentence-transformers/releases/tag/v5.2.0

I'm quite excited about what's coming. There's a huge draft PR with a notable refactor in the works that should bring some exciting support. Specifically, better multimodality, rerankers, and perhaps some late interaction in the future!
angt posted an update 28 days ago
installama.sh at the TigerBeetle 1000x World Tour!

Last week I had the chance to give a short talk during the TigerBeetle 1000x World Tour (organized by @jedisct1 👏), a fantastic event celebrating high-performance engineering and the people who love pushing systems to their limits!

In the talk, I focused on the CPU and Linux side of things, with a simple goal in mind: making the installation of llama.cpp instant, automatic, and optimal, no matter your OS or hardware setup.

For the curious, here are the links worth checking out:
Event page: https://tigerbeetle.com/event/1000x
GitHub repo: https://github.com/angt/installama.sh
Talk: https://youtu.be/pg5NOeJZf0o?si=9Dkcfi2TqjnT_30e

More improvements are coming soon. Stay tuned!
angt posted an update about 1 month ago
I'm excited to share that https://installama.sh is up and running! 🚀

On Linux / macOS / FreeBSD it is easier than ever:
curl https://installama.sh | sh


And Windows just joined the party 🥳
irm https://installama.sh | iex

Stay tuned for new backends on Windows!