
🌍 JEPA-WMs Datasets

Robotics trajectories for world model training 🤖

GitHub | HuggingFace | arXiv

Meta AI Research, FAIR

This 🤗 HuggingFace repository hosts datasets for training JEPA-WM world models.
👉 See the main repository for training code and pretrained models.

πŸ‘οΈ Preview Images: To view example images in the Dataset Viewer above, select a dataset configuration (e.g., metaworld, pusht) and click "Run query".


📦 Downloading Data

Use the download script from the main repository:

# Download all datasets
python src/scripts/download_data.py

# Download specific dataset(s)
python src/scripts/download_data.py --dataset pusht pointmaze wall

# List available datasets
python src/scripts/download_data.py --list
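
💡 If you prefer not to clone the main repository, the same files can be fetched directly from this dataset repo with huggingface_hub. A minimal sketch (the allow_patterns filter below is illustrative; drop it to download everything):

from huggingface_hub import snapshot_download

# Fetch only the pusht/ files from this dataset repo into the local HF cache.
local_dir = snapshot_download(
    repo_id="facebook/jepa-wms",
    repo_type="dataset",
    allow_patterns=["pusht/*"],
)
print(local_dir)  # path to the downloaded snapshot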

📋 Available Datasets

| Dataset | Description | Format |
|---|---|---|
| 🏭 metaworld | Tabletop manipulation (42 tasks) | .mp4 + .parquet |
| 🏠 robocasa | Kitchen manipulation | .hdf5 |
| 🦾 franka_custom | Real Franka robot (3 views) | .h5 per episode |
| 🔵 pusht | Push-T block pushing | .zip 📦 |
| 🚪 wall | Point navigation through doors | .zip 📦 |
| 🧩 point_maze | Point navigation in mazes | .zip 📦 |

💡 The pusht, wall, and point_maze datasets are sourced from DINO-WM and re-hosted here for convenience.


📚 Dataset Details

🏭 Metaworld

Tabletop robotic manipulation across 42 different tasks.

| Field | Shape | Description |
|---|---|---|
| observation | 224×224 RGB | Rendered observation image |
| state | 39-dim | Full state vector |
| action | 4-dim | End-effector action |
| reward | scalar | Task reward |
| task | string | Task name (e.g., "drawer-open") |
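
The example parquet shard under metaworld/hf_data/ can be inspected directly with 🤗 Datasets. A minimal sketch, assuming the "metaworld" config name shown in the Dataset Viewer and a train split:

from datasets import load_dataset

ds = load_dataset("facebook/jepa-wms", "metaworld", split="train")
row = ds[0]
print(row["task"], len(row["state"]), len(row["action"]))  # e.g. "drawer-open", 39, 4
img = row["observation"]  # expected to decode to a 224×224 RGB image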

🏠 RoboCasa

Kitchen manipulation with multiple camera views.

| Field | Shape | Description |
|---|---|---|
| eye_in_hand | 256×256 RGB | Eye-in-hand camera |
| leftview | 256×256 RGB | Left view camera |
| action | 12-dim | Robot action |
| state_* | various | State observations |
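
The raw RoboCasa data is a single HDF5 archive whose internal group layout is not documented above; a minimal h5py sketch that simply walks the tree (the path follows the repository structure below):

import h5py

# Print every group/dataset name, plus the array shape where one exists.
with h5py.File("robocasa/combine_all_im256.hdf5", "r") as f:
    f.visititems(lambda name, obj: print(name, getattr(obj, "shape", "")))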

🦾 Franka Custom

Real Franka robot with 3 camera views.

| Field | Shape | Description |
|---|---|---|
| exterior_image_1_left | 480×640 RGB | Exterior camera 1 |
| exterior_image_2_left | 480×640 RGB | Exterior camera 2 |
| wrist_image_left | 480×640 RGB | Wrist-mounted camera |
| cartesian_position | 6-dim | End-effector pose |
| joint_position | 7-dim | Joint angles |
| gripper_position | scalar | Gripper state |
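
Each Franka episode ships as its own .h5 file. A sketch for reading one episode with h5py; the filename and the flat key layout are assumptions (only the field names come from the table above):

import h5py

with h5py.File("franka_custom/data/episode_000.h5", "r") as f:  # hypothetical filename
    wrist = f["wrist_image_left"][:]      # assumed shape (T, 480, 640, 3)
    pose = f["cartesian_position"][:]     # assumed shape (T, 6)
    print(wrist.shape, pose.shape)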

🔵 Push-T

Block pushing task from the Push-T benchmark.

| Field | Shape | Description |
|---|---|---|
| observation | 224×224 RGB | Rendered observation |
| state | 5-dim | Block + agent state |
| action | 2-dim | Relative position action |
| velocity | 2-dim | Agent velocity |

🚪 Wall

Point navigation through walls with doors.

| Field | Shape | Description |
|---|---|---|
| observation | 224×224 RGB | Rendered observation |
| state | 2-dim | Position (x, y) |
| action | 2-dim | Movement action |
| door_location | scalar | Door y-position |
| wall_location | scalar | Wall x-position |

🧩 Point Maze

Point navigation in procedural mazes.

| Field | Shape | Description |
|---|---|---|
| observation | 224×224 RGB | Rendered observation |
| state | 4-dim | Position + velocity |
| action | 2-dim | Movement action |
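
The pusht, wall, and point_maze datasets ship as plain zip archives; after downloading, they can be unpacked with the standard library. A minimal sketch (the archive path follows the repository structure below; the destination directory is arbitrary):

import zipfile

with zipfile.ZipFile("point_maze/point_maze.zip") as zf:
    print(zf.namelist()[:5])         # peek at the archive contents
    zf.extractall("point_maze/raw")  # extract everything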

πŸ“ Repository Structure
.
β”œβ”€β”€ πŸ“„ README.md
β”œβ”€β”€ πŸ“„ pyproject.toml
β”œβ”€β”€ πŸ“‚ scripts/                    # πŸ› οΈ Utility scripts
β”‚   β”œβ”€β”€ convert_to_hf.py           #   Convert raw β†’ parquet
β”‚   β”œβ”€β”€ visualize.py               #   Visualize converted data
β”‚   └── upload_to_hf.py            #   Upload to HuggingFace
β”œβ”€β”€ πŸ“‚ metaworld/
β”‚   β”œβ”€β”€ hf_data/                   # Example parquet (for dataset viewer)
β”‚   └── data/                      # Raw parquet files
β”œβ”€β”€ πŸ“‚ robocasa/
β”‚   β”œβ”€β”€ hf_data/                   # Example parquet (for dataset viewer)
β”‚   └── combine_all_im256.hdf5     # Raw HDF5
β”œβ”€β”€ πŸ“‚ franka_custom/
β”‚   β”œβ”€β”€ hf_data/                   # Example parquet (for dataset viewer)
β”‚   └── data/                      # Raw H5 files (per episode)
β”œβ”€β”€ πŸ“‚ pusht/
β”‚   β”œβ”€β”€ hf_data/                   # Example parquet (for dataset viewer)
β”‚   └── pusht_noise.zip            # Raw data (zipped)
β”œβ”€β”€ πŸ“‚ wall/
β”‚   β”œβ”€β”€ hf_data/                   # Example parquet (for dataset viewer)
β”‚   └── wall_single.zip            # Raw data (zipped)
└── πŸ“‚ point_maze/
    β”œβ”€β”€ hf_data/                   # Example parquet (for dataset viewer)
    └── point_maze.zip             # Raw data (zipped)

πŸ› οΈ Development Scripts

These scripts are for dataset maintainers and developers.

🔄 Convert Raw Data to Parquet

# Analyze dataset structure
python scripts/convert_to_hf.py --dataset metaworld --analyze

# Convert episode 0 (default)
python scripts/convert_to_hf.py --dataset metaworld --convert
python scripts/convert_to_hf.py --dataset pusht --convert
python scripts/convert_to_hf.py --dataset wall --convert
python scripts/convert_to_hf.py --dataset point_maze --convert
python scripts/convert_to_hf.py --dataset robocasa --convert
python scripts/convert_to_hf.py --dataset franka_custom --convert

# Convert specific episode with options
python scripts/convert_to_hf.py --dataset wall --convert --episode 5 --max-frames 50

👀 Visualize Converted Data

# Display frames in matplotlib window
python scripts/visualize.py --dataset metaworld
python scripts/visualize.py --dataset pusht

# Save visualization to file
python scripts/visualize.py --dataset point_maze --num-frames 12 --save output.png

# Print dataset info only
python scripts/visualize.py --dataset robocasa --info-only

☁️ Upload to HuggingFace

# Upload a single file
python scripts/upload_to_hf.py --file robocasa/hf_data/train-00000-of-00001.parquet

# Upload an entire folder
python scripts/upload_to_hf.py --folder franka_custom --message "Add franka_custom data"

# Upload all parquet files
python scripts/upload_to_hf.py --all

📄 License

This dataset is released under the CC-BY-4.0 License.


📚 Citation

If you find these datasets useful, please consider giving a ⭐ and citing:

@misc{terver2025drivessuccessphysicalplanning,
      title={What Drives Success in Physical Planning with Joint-Embedding Predictive World Models?},
      author={Basile Terver and Tsung-Yen Yang and Jean Ponce and Adrien Bardes and Yann LeCun},
      year={2025},
      eprint={2512.24497},
      archivePrefix={arXiv},
      primaryClass={cs.AI},
      url={https://arxiv.org/abs/2512.24497},
}

Made with ❀️ by Meta AI Research, FAIR
