# JEPA-WMs Datasets

Robotics trajectories for world model training.

This HuggingFace repository hosts datasets for training JEPA-WM world models. See the main repository for training code and pretrained models.
**Preview Images:** To view example images in the Dataset Viewer above, select a dataset configuration (e.g., `metaworld`, `pusht`) and click "Run query".
## Downloading Data

Use the download script from the main repository:
```bash
# Download all datasets
python src/scripts/download_data.py

# Download specific dataset(s)
python src/scripts/download_data.py --dataset pusht pointmaze wall

# List available datasets
python src/scripts/download_data.py --list
```
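If you prefer to pull files without the helper script, here is a minimal sketch using `huggingface_hub` directly; the repository id below is a placeholder, not the verified id of this repo.

```python
# Minimal sketch: download one dataset folder directly with huggingface_hub.
# The repo_id is a placeholder -- replace it with this repository's actual id.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="<org>/jepa-wm-datasets",  # placeholder, assumption
    repo_type="dataset",
    allow_patterns=["pusht/*"],        # restrict the download to one dataset
)
print(local_dir)
```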
## Available Datasets
| Dataset | Description | Format |
|---|---|---|
| `metaworld` | Tabletop manipulation (42 tasks) | `.mp4` + `.parquet` |
| `robocasa` | Kitchen manipulation | `.hdf5` |
| `franka_custom` | Real Franka robot (3 views) | `.h5` per episode |
| `pusht` | Push-T block pushing | `.zip` |
| `wall` | Point navigation through doors | `.zip` |
| `point_maze` | Point navigation in mazes | `.zip` |
> **Note:** The `pusht`, `wall`, and `point_maze` datasets are sourced from DINO-WM and re-hosted here for convenience.
## Dataset Details
### Metaworld

Tabletop robotic manipulation across 42 different tasks.
| Field | Shape | Description |
|---|---|---|
| `observation` | 224×224 RGB | Rendered observation image |
| `state` | 39-dim | Full state vector |
| `action` | 4-dim | End-effector action |
| `reward` | scalar | Task reward |
| `task` | string | Task name (e.g., "drawer-open") |
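As a quick sanity check, the parquet shards can be opened with pandas. This is a minimal sketch: the file name follows the `hf_data/` layout shown under Repository Structure, and it assumes one row per timestep with the column names from the table above; the on-disk encoding of `observation` (image bytes vs. video frame references) should be verified against the actual schema.

```python
# Minimal sketch: inspect a Metaworld parquet shard with pandas.
# Assumes one row per timestep and the column names listed above;
# the file name is an assumption based on the hf_data/ layout.
import pandas as pd

df = pd.read_parquet("metaworld/hf_data/train-00000-of-00001.parquet")
print(df.columns.tolist())       # expected: observation, state, action, reward, task
print(df["task"].iloc[0])        # e.g. "drawer-open"
print(len(df["state"].iloc[0]))  # 39-dim state vector
```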
### RoboCasa

Kitchen manipulation with multiple camera views.
| Field | Shape | Description |
|---|---|---|
| `eye_in_hand` | 256×256 RGB | Eye-in-hand camera |
| `leftview` | 256×256 RGB | Left view camera |
| `action` | 12-dim | Robot action |
| `state_*` | various | State observations |
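The HDF5 layout is easiest to discover by walking the file rather than assuming group names. A minimal sketch with `h5py`, using the file path from the repository structure below:

```python
# Minimal sketch: walk the RoboCasa HDF5 file and print every dataset's
# name and shape, without assuming a particular internal group layout.
import h5py

with h5py.File("robocasa/combine_all_im256.hdf5", "r") as f:
    f.visititems(lambda name, obj: print(name, getattr(obj, "shape", "")))
```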
### Franka Custom

Real Franka robot with 3 camera views.
| Field | Shape | Description |
|---|---|---|
| `exterior_image_1_left` | 480×640 RGB | Exterior camera 1 |
| `exterior_image_2_left` | 480×640 RGB | Exterior camera 2 |
| `wrist_image_left` | 480×640 RGB | Wrist-mounted camera |
| `cartesian_position` | 6-dim | End-effector pose |
| `joint_position` | 7-dim | Joint angles |
| `gripper_position` | scalar | Gripper state |
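Since each episode is a standalone `.h5` file, a single episode can be opened directly. A minimal sketch with `h5py`; the episode file name is hypothetical, and the keys mirror the field table above but should be confirmed with `f.keys()` on a real file.

```python
# Minimal sketch: open one Franka episode. The file name is hypothetical;
# keys follow the field table above and should be verified on a real file.
import h5py

with h5py.File("franka_custom/data/episode_000.h5", "r") as f:  # name assumed
    print(list(f.keys()))
    pose = f["cartesian_position"][0]  # 6-dim end-effector pose at step 0
    print(pose)
```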
### Push-T

Block pushing task from the Push-T benchmark.
| Field | Shape | Description |
|---|---|---|
| `observation` | 224×224 RGB | Rendered observation |
| `state` | 5-dim | Block + agent state |
| `action` | 2-dim | Relative position action |
| `velocity` | 2-dim | Agent velocity |
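The raw data for `pusht` (and likewise `wall` and `point_maze` below) ships as a zip archive in the DINO-WM format. A minimal sketch that unpacks the archive and peeks at its contents; the internal file layout is inherited from DINO-WM and is best inspected rather than assumed.

```python
# Minimal sketch: unpack the raw Push-T archive and list a few entries.
# The same approach applies to wall/wall_single.zip and point_maze/point_maze.zip.
import zipfile

with zipfile.ZipFile("pusht/pusht_noise.zip") as zf:
    print(zf.namelist()[:10])   # peek at the first few entries
    zf.extractall("pusht/raw")  # extraction target is arbitrary
```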
### Wall

Point navigation through walls with doors.
| Field | Shape | Description |
|---|---|---|
| `observation` | 224×224 RGB | Rendered observation |
| `state` | 2-dim | Position (x, y) |
| `action` | 2-dim | Movement action |
| `door_location` | scalar | Door y-position |
| `wall_location` | scalar | Wall x-position |
### Point Maze

Point navigation in procedural mazes.
| Field | Shape | Description |
|---|---|---|
| `observation` | 224×224 RGB | Rendered observation |
| `state` | 4-dim | Position + velocity |
| `action` | 2-dim | Movement action |
## Repository Structure
```
.
├── README.md
├── pyproject.toml
├── scripts/                    # Utility scripts
│   ├── convert_to_hf.py        # Convert raw → parquet
│   ├── visualize.py            # Visualize converted data
│   └── upload_to_hf.py         # Upload to HuggingFace
├── metaworld/
│   ├── hf_data/                # Example parquet (for dataset viewer)
│   └── data/                   # Raw parquet files
├── robocasa/
│   ├── hf_data/                # Example parquet (for dataset viewer)
│   └── combine_all_im256.hdf5  # Raw HDF5
├── franka_custom/
│   ├── hf_data/                # Example parquet (for dataset viewer)
│   └── data/                   # Raw H5 files (per episode)
├── pusht/
│   ├── hf_data/                # Example parquet (for dataset viewer)
│   └── pusht_noise.zip         # Raw data (zipped)
├── wall/
│   ├── hf_data/                # Example parquet (for dataset viewer)
│   └── wall_single.zip         # Raw data (zipped)
└── point_maze/
    ├── hf_data/                # Example parquet (for dataset viewer)
    └── point_maze.zip          # Raw data (zipped)
```
## Development Scripts

These scripts are for dataset maintainers and developers.
### Convert Raw Data to Parquet
```bash
# Analyze dataset structure
python scripts/convert_to_hf.py --dataset metaworld --analyze

# Convert episode 0 (default)
python scripts/convert_to_hf.py --dataset metaworld --convert
python scripts/convert_to_hf.py --dataset pusht --convert
python scripts/convert_to_hf.py --dataset wall --convert
python scripts/convert_to_hf.py --dataset point_maze --convert
python scripts/convert_to_hf.py --dataset robocasa --convert
python scripts/convert_to_hf.py --dataset franka_custom --convert

# Convert specific episode with options
python scripts/convert_to_hf.py --dataset wall --convert --episode 5 --max-frames 50
```
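To sanity-check a conversion, the resulting parquet can be loaded back with the `datasets` library. A minimal sketch; the file path follows the `hf_data/` layout above and is an assumption.

```python
# Minimal sketch: round-trip check of a converted parquet shard with the
# datasets library. The file path follows the hf_data/ layout and may differ.
from datasets import load_dataset

ds = load_dataset(
    "parquet",
    data_files="metaworld/hf_data/train-00000-of-00001.parquet",
    split="train",
)
print(ds)            # row count and column names
print(ds[0].keys())  # fields of the first timestep
```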
### Visualize Converted Data
```bash
# Display frames in a matplotlib window
python scripts/visualize.py --dataset metaworld
python scripts/visualize.py --dataset pusht

# Save visualization to file
python scripts/visualize.py --dataset point_maze --num-frames 12 --save output.png

# Print dataset info only
python scripts/visualize.py --dataset robocasa --info-only
```
### Upload to HuggingFace
```bash
# Upload a single file
python scripts/upload_to_hf.py --file robocasa/hf_data/train-00000-of-00001.parquet

# Upload an entire folder
python scripts/upload_to_hf.py --folder franka_custom --message "Add franka_custom data"

# Upload all parquet files
python scripts/upload_to_hf.py --all
```
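For reference, `upload_to_hf.py` presumably wraps the standard `huggingface_hub` upload call; a minimal sketch of the direct equivalent is below, with a placeholder repository id.

```python
# Minimal sketch: direct upload with huggingface_hub, roughly what the
# helper script is assumed to wrap. The repo_id is a placeholder.
from huggingface_hub import HfApi

api = HfApi()
api.upload_file(
    path_or_fileobj="robocasa/hf_data/train-00000-of-00001.parquet",
    path_in_repo="robocasa/hf_data/train-00000-of-00001.parquet",
    repo_id="<org>/jepa-wm-datasets",  # placeholder, assumption
    repo_type="dataset",
    commit_message="Add robocasa parquet",
)
```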
## License

This dataset is released under the CC-BY-4.0 License.
## Citation

If you find these datasets useful, please consider giving a ⭐ and citing:
```bibtex
@misc{terver2025drivessuccessphysicalplanning,
  title={What Drives Success in Physical Planning with Joint-Embedding Predictive World Models?},
  author={Basile Terver and Tsung-Yen Yang and Jean Ponce and Adrien Bardes and Yann LeCun},
  year={2025},
  eprint={2512.24497},
  archivePrefix={arXiv},
  primaryClass={cs.AI},
  url={https://arxiv.org/abs/2512.24497},
}
```
Made with ❤️ by Meta AI Research, FAIR