NVIDIA’s Groq deal ... I think inference efficiency is becoming the main driver of profitability, and NVIDIA’s Groq deal is evidence the market is moving from “who can train the biggest model” to “who can serve it cheapest and fastest at scale.” That points to a maturing phase of AI, not necessarily the end of a bubble, but definitely a correction in what “wins” long-term. What do you think?
CIFAR-10, your handy image dataset ... CIFAR-10 is a small, standard computer-vision dataset used to quickly test and compare ideas.
- 60,000 color images, each 32×32 pixels, labeled into 10 classes: airplane, automobile, bird, cat, deer, dog, frog, horse, ship, truck.
- Label mapping (important):
  - 0 airplane
  - 1 automobile
  - 2 bird
  - 3 cat
  - 4 deer
  - 5 dog
  - 6 frog
  - 7 horse
  - 8 ship
  - 9 truck
- Split: 50,000 train and 10,000 test.
- Why people use it: fast benchmarking for image classifiers (small CNNs, ResNet, ViT), and quick experiments with training pipelines, augmentation, regularization, pruning, distillation, and demos (see the loading sketch below).
- Sizes (downloads): Python version about 163 MB, binary version about 162 MB; Hugging Face shows about 144 MB for the dataset files.
- Where to get it: the official CIFAR page (University of Toronto) and the Hugging Face CIFAR-10 dataset page (uoft-cs/cifar10).

If you want something more, check the table below:

| Dataset | Resolution | Classes | Best For |
| --- | --- | --- | --- |
| ImageNet-1K | 224–256×256 | 1000 | Real-world large-scale classification |
| ImageNet-256 | 256×256 | 1000 | Direct high-res training |
| TinyImageNet | 64×64 | 200 | Mid-range benchmark |
| UC Merced Land Use | 256×256 | ~21 | Higher-resolution small classification |
| MS COCO | >256×256 | ~80 objects | Detection / segmentation |
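If you just want to poke at the data, here is a minimal loading sketch. It assumes torchvision is installed; the root path and batch size are arbitrary choices, not part of the dataset spec.

```python
# Minimal sketch: load CIFAR-10 with torchvision and check the label mapping.
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

transform = transforms.ToTensor()  # 32x32 RGB images -> float tensors in [0, 1]

train_set = datasets.CIFAR10(root="./data", train=True, download=True, transform=transform)
test_set = datasets.CIFAR10(root="./data", train=False, download=True, transform=transform)

print(len(train_set), len(test_set))  # 50000 10000
print(train_set.classes)              # ['airplane', 'automobile', ..., 'truck']

loader = DataLoader(train_set, batch_size=128, shuffle=True)
images, labels = next(iter(loader))
print(images.shape, labels.shape)     # [128, 3, 32, 32] and [128]
```

The same data is also available via the Hugging Face `datasets` library under the uoft-cs/cifar10 id if you prefer that stack.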
To endorse another user to submit to the cs.AI (Artificial Intelligence) subject class, an arXiv submitter must have submitted 3 papers to any of cs.AI, cs.AR, cs.CC, cs.CE, cs.CG, cs.CL, cs.CR, cs.CV, cs.CY, cs.DB, cs.DC, cs.DL, cs.DM, cs.DS, cs.ET, cs.FL, cs.GL, cs.GR, cs.GT, cs.HC, cs.IR, cs.IT, cs.LG, cs.LO, cs.MA, cs.MM, cs.MS, cs.NA, cs.NE, cs.NI, cs.OH, cs.OS, cs.PF, cs.PL, cs.RO, cs.SC, cs.SD, cs.SE, cs.SI or cs.SY earlier than three months ago and less than five years ago.
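To make the date window concrete, here is a hypothetical helper that checks the rule above. The function name, the example dates, and the day-based approximation of “three months” and “five years” are mine for illustration, not arXiv’s exact calculation.

```python
# Hypothetical check: at least 3 qualifying cs.* submissions made more than
# ~3 months ago and less than ~5 years ago (window approximated in days).
from datetime import date, timedelta

def can_endorse_cs_ai(submission_dates: list[date], today: date) -> bool:
    newest_allowed = today - timedelta(days=3 * 30)    # roughly 3 months ago
    oldest_allowed = today - timedelta(days=5 * 365)   # roughly 5 years ago
    qualifying = [d for d in submission_dates if oldest_allowed < d < newest_allowed]
    return len(qualifying) >= 3

# Three submissions, all inside the window -> eligible to endorse.
print(can_endorse_cs_ai(
    [date(2023, 1, 10), date(2024, 6, 1), date(2025, 2, 20)],
    today=date(2025, 12, 1),
))  # True
```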
Recently I was playing with my model telcom/deewaiREALCN, and I'd like your take on "unlearning", since I need it 😀 I have the original on the main branch and the trained versions "cp550" and "n_680" on another branch, both trained on telcom/deewaiREALCN-training. I got three results with the prompt: "Athlete portrait, 26-year-old woman, post-training sweat, gym ambient light, chalk dust particles, intense gaze, crisp detail." Apparently the model is sensitive to the word "old". You can see that training on more faces improved things over main, but it's still not ideal... I'm working on unlearning now and would like to hear your opinion. #unlearning
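To make concrete what I mean by "unlearning", here is a generic sketch of one common recipe: gradient ascent on a "forget" set combined with ordinary training on a "retain" set. It is not my actual telcom/deewaiREALCN setup; the model, batches, loss function, and weighting are placeholders.

```python
# Generic unlearning sketch: push the loss UP on data to forget and DOWN on
# data to keep. Model, batches, loss_fn, and forget_weight are placeholders.
def unlearning_step(model, forget_batch, retain_batch, loss_fn, optimizer, forget_weight=1.0):
    model.train()
    optimizer.zero_grad()

    f_inputs, f_targets = forget_batch
    r_inputs, r_targets = retain_batch

    # Negative sign turns gradient descent into ascent on the forget set.
    forget_loss = -forget_weight * loss_fn(model(f_inputs), f_targets)
    retain_loss = loss_fn(model(r_inputs), r_targets)

    (forget_loss + retain_loss).backward()
    optimizer.step()
    return forget_loss.item(), retain_loss.item()
```

In practice the retain term is what keeps the model from degrading overall while the targeted behavior is suppressed, and the forget weight has to be tuned carefully.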