Experiment Tracker: check out the training on our TrackioApp Tonic/l-android-control
Live Model Demo: upload an Android screenshot and instructions to see the model in action! Tonic/l-operator-demo
Built in a garage, funded by pre-orders, no VC. Now we're scaling to 1k installer units.
We're giving 50 limited-edition prototypes to investors, installers & researchers who want to co-design the sovereign smart home.
Drop "EUSKERA" in the comments if you want an invite, tag a friend who still thinks Alexa is "convenient," and smash ♥️ if AI should belong to people, not servers.
Just wanted to announce SmolFactory: it's the quickest and best way to fine-tune SmolLM3 and GPT-OSS-20B on Hugging Face!
Basically, it's an app you can run on Hugging Face by duplicating the Space and running your training directly on Hugging Face GPUs.
It helps you select datasets and models, fine-tune your model, gives you an experiment tracker you can use on your mobile phone, pushes your model card, and even automatically builds a demo on Hugging Face so you can test the model as soon as training is done!
Runway's new **Aleph** model lets you *transform*, *edit*, and *generate* video from existing footage using just text prompts. You can remove objects, change environments, restyle shots, alter lighting, and even create entirely new camera angles, all in one tool.
A few prompting tips:
1. Be clear and specific (e.g., _"Change to snowy night, keep people unchanged"_).
2. Use action verbs like _add, remove, restyle, relight_.
3. Add reference images for style or lighting.
Aleph shifts AI video from *text-to-video* to *video-to-video*, making post-production faster, more creative, and more accessible than ever.
OpenAI has launched GPT-5, a significant leap forward in AI technology that is now available to all users. The new model unifies all of OpenAI's previous developments into a single, cohesive system that automatically adapts its approach based on the complexity of the user's request. This means it can prioritize speed for simple queries or engage a deeper reasoning model for more complex problems, all without the user having to manually switch settings.
Key Features and Improvements
Unified System: GPT-5 combines various models into one interface, intelligently selecting the best approach for each query.
Enhanced Coding: It's being hailed as the "strongest coding model to date," with the ability to create complex, responsive websites and applications from a single prompt.
PhD-level Reasoning: According to CEO Sam Altman, GPT-5 offers a significant jump in reasoning ability, with a much lower hallucination rate. It also performs better on academic and human-evaluated benchmarks.
New Personalities: Users can now select from four preset personalities (Cynic, Robot, Listener, and Nerd) to customize their chat experience.
Advanced Voice Mode: The voice mode has been improved to sound more natural and adapt its speech based on the context of the conversation.
All key links to OpenAI's open-sourced GPT OSS models (117B and 21B), released under Apache 2.0. Here is a quick guide to explore and build with them:
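If you want to try them locally, here is a minimal sketch using the transformers text-generation pipeline; the checkpoint id openai/gpt-oss-20b is the released 21B model, and the dtype/device settings are assumptions you may need to adapt to your hardware and transformers version.

```python
# Minimal sketch: chat with the 21B GPT OSS model through the transformers pipeline.
# Assumes a recent transformers release and enough GPU memory; adjust as needed.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="openai/gpt-oss-20b",
    torch_dtype="auto",   # let transformers pick a suitable precision
    device_map="auto",    # spread layers across available devices
)

messages = [{"role": "user", "content": "Summarize the Apache 2.0 license in one sentence."}]
outputs = generator(messages, max_new_tokens=128)
print(outputs[0]["generated_text"][-1])  # last message is the model's reply
```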
I focused on showing the core steps side by side: tokenization, embedding, and the transformer layers, each highlighting the self-attention and feed-forward parts without getting lost in too much technical depth.
It shows how these layers work together to understand context and generate meaningful output!
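To make that concrete, here is a minimal, self-contained PyTorch sketch of the flow from token ids through an embedding, a self-attention step, and a feed-forward step. It is heavily simplified for illustration (no positional encodings, masking, dropout, or stacked layers), not how any particular production model is written.

```python
# Minimal sketch of one transformer block: token ids -> embeddings -> self-attention -> feed-forward.
import torch
import torch.nn as nn

class TransformerBlock(nn.Module):
    def __init__(self, d_model=64, n_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ff = nn.Sequential(
            nn.Linear(d_model, 4 * d_model), nn.GELU(), nn.Linear(4 * d_model, d_model)
        )
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)

    def forward(self, x):
        # Self-attention: every position looks at every other position to gather context.
        attn_out, _ = self.attn(x, x, x)
        x = self.norm1(x + attn_out)
        # Feed-forward: position-wise transformation of the attended representation.
        return self.norm2(x + self.ff(x))

vocab_size, d_model = 1000, 64
embed = nn.Embedding(vocab_size, d_model)          # token ids -> vectors
token_ids = torch.randint(0, vocab_size, (1, 8))   # a fake 8-token "sentence"
hidden = TransformerBlock(d_model)(embed(token_ids))
print(hidden.shape)  # torch.Size([1, 8, 64]): one contextual vector per token
```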
If you are curious about the architecture behind AI language models or want a clean way to explain it, hit me up, I'd love to share!
Hugging Face just made life easier with the new hf CLI! huggingface-cli is now hf. Along with the rename come new features like hf jobs: we can now run any script or Docker image on dedicated Hugging Face infrastructure with a simple command. It's a good addition for running experiments and jobs on the fly. To get started, just run: pip install -U huggingface_hub

List of hf CLI Commands
Main Commands
- hf auth: Manage authentication (login, logout, etc.).
- hf cache: Manage the local cache directory.
- hf download: Download files from the Hub.
- hf jobs: Run and manage Jobs on the Hub.
- hf repo: Manage repos on the Hub.
- hf upload: Upload a file or a folder to the Hub.
- hf version: Print information about the hf version.
- hf env: Print information about the environment.

Authentication Subcommands (hf auth)
- login: Log in using a Hugging Face token.
- logout: Log out of your account.
- whoami: See which account you are logged in as.
- switch: Switch between different stored access tokens/profiles.
- list: List all stored access tokens.

Jobs Subcommands (hf jobs)
- run: Run a Job on Hugging Face infrastructure.
- inspect: Display detailed information on one or more Jobs.
- logs: Fetch the logs of a Job.
- ps: List running Jobs.
- cancel: Cancel a Job.
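As a quick illustrative session (the Docker image, the inline command, and the job id below are placeholders; check hf jobs run --help for the exact options in your installed version):

```bash
pip install -U huggingface_hub    # installs the new `hf` CLI
hf auth login                     # authenticate with your Hugging Face token
hf jobs run python:3.12 python -c "print('hello from HF infrastructure')"
hf jobs ps                        # list running Jobs
hf jobs logs <job_id>             # fetch the logs of a specific Job
```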
Just submitted my plugin idea to the G-Assist Plugin Hackathon by @nvidia. Check it out; it's a great way to use a local SLM (small language model) on a Windows machine to easily and locally get things done! https://github.com/NVIDIA/G-Assist
So at every bio/med/chem meeting I go to, I always get the same questions: "Why are you sharing a gdrive link with me for this?" and "Do you have any plans to publish your model weights and datasets on huggingface?" And today I finally got a good answer which explains everything:
Basically, there is some kind of government censorship on this (USA, but I'm sure others too), and they are told they are not allowed to, as it is considered a "dataleak," which is illegal!
This is terrible! But the good news is that we can do something about it!
Run an LLM locally using Docker, right inside your codebase (no GUI needed!)
In this project, I did not use a supporting GUI like Open WebUI or LM Studio or any other. The point is to use standalone LLM models with Ollama directly, to show how you can call them from your own project/code instead of going through a third-party app. Everything is containerized with Docker, so the setup is clean and repeatable. It's just a fun side project so my connections can learn more about running models locally in their own projects.
Tech stack used:
- Docker
- LLaMA via Ollama
- HTML/CSS/JS
- Python + FastAPI
- NGINX
It's still early and just a fun side project, but if you are into local model deployment, or just want to see how it works, check it out via the given link!
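For anyone who wants a feel for the pattern, here is a minimal, hypothetical sketch (not the project's actual code; the endpoint name and model choice are illustrative) of a FastAPI route that forwards a prompt to Ollama's local REST API instead of going through a GUI:

```python
# Minimal sketch: FastAPI endpoint that calls a locally running Ollama server directly.
# Assumes Ollama is listening on its default port 11434 and a model such as "llama3" is pulled.
import requests
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Prompt(BaseModel):
    text: str

@app.post("/generate")
def generate(prompt: Prompt):
    # Ollama's /api/generate endpoint returns the full completion when stream=False.
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": "llama3", "prompt": prompt.text, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    return {"answer": resp.json()["response"]}

# Run with: uvicorn main:app --reload   then POST {"text": "..."} to /generate
```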
In this work, we tackle some major challenges in Arabic multi-label emotion classification, especially the issues of class imbalance and label correlation that often hurt model performance, particularly for minority emotions.
Our approach:
Stacked contextual embeddings from fine-tuned ArabicBERT, MarBERT, and AraBERT models.
A meta-learning strategy that builds richer representations.
A hybrid loss function combining class weighting, label correlation matrices, and contrastive learning to better handle class imbalances.
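As a rough illustration of the idea (a simplified sketch, not the paper's exact formulation, which also includes a contrastive term), a hybrid multi-label loss can combine per-class weights with a penalty that keeps predicted label co-occurrence close to the correlations observed in the training data:

```python
# Simplified sketch of a hybrid multi-label loss: class-weighted BCE plus a
# label-correlation penalty. Illustrative only; weights, correlation estimation,
# and the contrastive component differ in the actual paper.
import torch
import torch.nn.functional as F

def hybrid_loss(logits, targets, class_weights, label_corr, corr_lambda=0.1):
    # Class-weighted BCE: up-weights rare (minority) emotion labels.
    bce = F.binary_cross_entropy_with_logits(logits, targets, pos_weight=class_weights)

    # Correlation penalty: predicted label co-occurrence should resemble the
    # empirical label-correlation matrix estimated from the training set.
    probs = torch.sigmoid(logits)
    pred_corr = (probs.T @ probs) / probs.shape[0]
    return bce + corr_lambda * F.mse_loss(pred_corr, label_corr)

# Toy usage: 4 samples, 5 emotion labels.
logits = torch.randn(4, 5)
targets = torch.randint(0, 2, (4, 5)).float()
class_weights = torch.tensor([1.0, 2.0, 5.0, 1.5, 3.0])  # higher weight = rarer label
label_corr = (targets.T @ targets) / targets.shape[0]    # stand-in co-occurrence matrix
print(hybrid_loss(logits, targets, class_weights, label_corr))
```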
Extensive experiments show significant improvements across Precision, Recall, F1-Score, Jaccard Accuracy, and Hamming Loss. The hybrid loss function in particular helped close the gap between majority and minority classes!
We also performed ablation studies to break down each componentโs contribution and the results consistently validated our design choices.
This framework isn't just for Arabic: it offers a generalizable path for improving multi-label emotion classification in other low-resource languages and domains.
Big thanks to my co-authors: Muhammad Azeem Aslam, Wang Jun, Nisar Ahmed, Li Yanan, Hu Hongfei, Wang Shiyu, and Xin Liu!
Would love to hear your thoughts on this work!