Model Overview

Description:

AutoGaze is an ultra-lightweight model that automatically removes redundant patches from a video before it is passed to any Vision Transformer (ViT) or Multimodal Large Language Model (MLLM).

Specifically, AutoGaze perceives each frame and autoregressively selects ("gazes at") a minimal set of patches that can reconstruct the original video (i.e., the non-redundant patches) up to a reconstruction-loss threshold provided by the user. AutoGaze decides on its own when to stop gazing for each frame, based on the user-specified maximum acceptable reconstruction loss.
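
At a high level, the gazing procedure can be pictured as greedy selection with a loss-based stopping rule. The sketch below is a conceptual illustration only, assuming a hypothetical `reconstruct` function; the released model replaces this brute-force search with a learned, autoregressive selection policy.

```python
import numpy as np

def gaze_frame(patches, max_recon_loss, reconstruct):
    """Conceptual greedy sketch of the gazing loop for one frame.

    patches:        (N, D) array of flattened patch features.
    max_recon_loss: user-specified threshold at which gazing stops.
    reconstruct:    hypothetical function mapping the selected patches
                    (and their indices) to an (N, D) estimate of the frame.
    """
    selected, remaining = [], set(range(len(patches)))
    while remaining:
        # Greedily pick the patch whose addition most reduces reconstruction loss.
        best_idx, best_loss = None, np.inf
        for i in remaining:
            trial = selected + [i]
            recon = reconstruct(patches[trial], trial, len(patches))
            loss = float(np.mean((recon - patches) ** 2))
            if loss < best_loss:
                best_idx, best_loss = i, loss
        selected.append(best_idx)
        remaining.remove(best_idx)
        if best_loss <= max_recon_loss:  # the self-decided stopping point
            break
    return selected  # 1D list of non-redundant patch indices
```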

Empirically, AutoGaze can reduce the number of tokens in ViTs/MLLMs by up to 100x, cutting their latency by up to 19x and 10x, respectively. This enables MLLMs to scale efficiently to 4K-resolution, 1K-frame videos, improving performance on benchmarks such as VideoMME. In particular, it improves performance by 14% on HLVid, a high-resolution, long-form video benchmark also proposed in this work.

This model is for research and development only.

Quick Start:

See our GitHub repo for instructions on how to use AutoGaze.
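
Until then, here is a minimal usage sketch. The package name, loader, method, and argument names below are assumptions for illustration only; follow the repo README for the actual API.

```python
# Hypothetical usage sketch -- class name, loader, and arguments are
# assumptions, not the released API; see the GitHub README.
import torch
from autogaze import AutoGaze  # hypothetical module

model = AutoGaze.from_pretrained("nvidia/AutoGaze")  # hypothetical loader
video = torch.randn(16, 3, 224, 224)                 # T x C x H x W

# Returns the 1D patch indices worth keeping for the given loss budget.
keep_indices = model.gaze(video, max_recon_loss=0.05)
```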

License/Terms of Use:

NVIDIA License (see LICENSE.md). The NVIDIA License here refers to the attached custom NSCLv1 license, under which users may use the model for non-commercial research activities and non-commercial research publications.

Deployment Geography:

Global

Use Case:

The model is used to remove redundancy in videos and accelerate video encoders and MLLMs.

Reference(s):

GitHub: https://github.com/NVlabs/AutoGaze

Model Architecture:

Architecture Type: CNN and Transformer.

Network Architecture: Custom Architecture.

Number of model parameters: 3M

Input(s):

Input Type(s): Video

Input Format(s): Video: .mp4, .webm, .mov, etc.

Input Parameters: Video: Three-Dimensional (3D)

Other Properties Related to Input: Video with any resolution or duration.
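
Because inputs may have any resolution or duration, frames are typically split into a fixed-size patch grid before selection. A minimal NumPy sketch, assuming a 16-pixel patch size and dimensions divisible by it (both assumptions; the model's actual preprocessing may differ):

```python
import numpy as np

def patchify(video, patch=16):
    """Split a (T, H, W, C) video into flattened patches.

    Assumes H and W are divisible by `patch`; real pipelines would pad or
    resize first. The 16-pixel patch size is an assumption.
    """
    t, h, w, c = video.shape
    gh, gw = h // patch, w // patch
    x = video.reshape(t, gh, patch, gw, patch, c)
    x = x.transpose(0, 1, 3, 2, 4, 5)        # (T, gh, gw, patch, patch, C)
    return x.reshape(t * gh * gw, patch * patch * c)

video = np.random.rand(16, 224, 224, 3).astype(np.float32)
patches = patchify(video)                    # (16*14*14, 16*16*3) = (3136, 768)
```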

Output(s):

Output Type(s): Integers (representing patch indices)

Output Format(s): Integers

Output Parameters: One-Dimensional (1D)

Other Properties Related to Output: N/A
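
Since the output is simply a 1D array of patch indices, applying it downstream is an index-select over the patch tokens before they enter the ViT/MLLM. A sketch, with illustrative index values:

```python
import numpy as np

# `patches` is the (N, D) patch-token array (see the patchify sketch above);
# `keep_indices` stands in for AutoGaze's 1D integer output.
patches = np.random.rand(3136, 768).astype(np.float32)
keep_indices = np.array([0, 5, 42, 100, 777])  # illustrative values only

kept = patches[keep_indices]  # (5, 768): only the non-redundant tokens
# `kept` (plus positional information for the surviving indices) is what the
# ViT or MLLM encoder attends over -- 5 tokens instead of 3136 here.
```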

Our AI models are designed and/or optimized to run on NVIDIA GPU-accelerated systems. By leveraging NVIDIA’s hardware (e.g. GPU cores) and software frameworks (e.g., CUDA libraries), the model achieves faster training and inference times compared to CPU-only solutions.

Software Integration:

Runtime Engine(s): N/A

Supported Hardware Microarchitecture Compatibility:
NVIDIA Ampere
NVIDIA Blackwell
NVIDIA Jetson
NVIDIA Hopper

Preferred/Supported Operating System(s):
Linux
Linux 4 Tegra
QNX
Windows

The integration of foundation and fine-tuned models into AI systems requires additional testing using use-case-specific data to ensure safe and effective deployment. Following the V-model methodology, iterative testing and validation at both unit and system levels are essential to mitigate risks, meet technical and functional requirements, and ensure compliance with safety and ethical standards before deployment.

Model Version(s):

v1.0 - Initial release

Training, Testing, and Evaluation Datasets:

Dataset Overview

** Total Size: 800K videos
** Total Number of Datasets: 1
** Dataset Partition: Training [97%], Testing [3%]
** Time Period for Training Data Collection: 2025/5 - 2025/8
** Time Period for Testing Data Collection: 2025/5 - 2025/8

The data is constructed by collecting raw videos from existing video datasets and labeling gazing sequences for a subset of them using a greedy-search algorithm.
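
For concreteness, a labeled example produced by such a greedy search might pair the ordered index sequence with the reconstruction loss after each step, so the model can learn where to stop for any user threshold. The field names and values below are purely illustrative, not the released data schema:

```python
# Purely illustrative record layout -- field names and values are assumptions.
example = {
    "video_id": "example_000123",          # hypothetical identifier
    "gaze_sequence": [412, 7, 1033, 256],  # ordered patch indices chosen greedily
    "recon_loss_per_step": [0.41, 0.19, 0.08, 0.03],  # loss after each selection
}
```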

Public Datasets

The raw videos are collected from public datasets, including Ego4D, 100DoH, InternVid, SA-1B, and IDL.

Training Dataset:

Data Modality: Video

Video Training Data Size: 800K videos

Data Collection Method by dataset: Automated

Labeling Method by dataset: Automated

Properties: 800K videos at 224×224 resolution with 16 frames per video.

Testing Dataset:

Data Collection Method by dataset: Automated

Labeling Method by dataset: Automated

Properties: 25K videos at 224×224 resolution with 16 frames per video.

Inference:

Acceleration Engine: N/A
Test Hardware: A100

Ethical Considerations:

NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse. Please report model quality, risk, security vulnerabilities or NVIDIA AI Concerns here.

Please make sure you have the proper rights and permissions for all input image and video content. If an image or video includes people, personal health information, or intellectual property, note that the model will not blur or preserve the proportions of the subjects it contains.
