| datasetId | card_raw | card_text | downloads | likes | tags | created_at | last_modified | trending_score |
|---|---|---|---|---|---|---|---|---|
hcooch2ch3/eval_wood_sticks
|
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "omx_follower",
"total_episodes": 3,
"total_frames": 5110,
"total_tasks": 1,
"total_videos": 3,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:3"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
6
],
"names": [
"shoulder_pan.pos",
"shoulder_lift.pos",
"elbow_flex.pos",
"wrist_flex.pos",
"wrist_roll.pos",
"gripper.pos"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": [
"shoulder_pan.pos",
"shoulder_lift.pos",
"elbow_flex.pos",
"wrist_flex.pos",
"wrist_roll.pos",
"gripper.pos"
]
},
"observation.images.front": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 30,
"video.channels": 3,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
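The `data_path` and `video_path` templates above can be resolved with plain string formatting. A minimal sketch, assuming the usual LeRobot convention that an episode's chunk is `episode_index // chunks_size` (the helper name `episode_paths` is ours, not part of LeRobot):

```python
# Resolve on-disk file paths from the templates in meta/info.json.
CHUNKS_SIZE = 1000  # "chunks_size" from meta/info.json

def episode_paths(episode_index: int, video_key: str = "observation.images.front"):
    # Assumed chunk-assignment rule: integer division by chunks_size.
    chunk = episode_index // CHUNKS_SIZE
    data_path = f"data/chunk-{chunk:03d}/episode_{episode_index:06d}.parquet"
    video_path = f"videos/chunk-{chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4"
    return data_path, video_path

print(episode_paths(2)[0])  # data/chunk-000/episode_000002.parquet
```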
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
```
|
| 21
| 0
|
[
"task_categories:robotics",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot"
] |
2025-11-11T17:55:22+00:00
|
2025-11-12T17:57:22+00:00
| 0
|
ts0pwo/20K_real_and_deepfake_images
|
This dataset contains the test images used to evaluate our deepfake detection framework. It originally contained 20,000 real and deepfake images, but around 2,600 files are protected by UK Crown copyright and we do not have permission to reproduce them, so those files were removed.
Our framework comprises four machine learning models, which take as input the original images, error-level analysis (ELA) images, noise analysis (NA) images, and principal component analysis (PCA) images.
The models were created using TensorFlow version 2.26.2.
This repository stores the original images.
|
| 397
| 0
|
[
"task_categories:image-classification",
"language:en",
"size_categories:1K<n<10K",
"format:imagefolder",
"modality:image",
"library:datasets",
"library:mlcroissant",
"region:us",
"deepfake"
] |
2025-11-05T16:30:27+00:00
|
2025-11-12T17:57:42+00:00
| 0
|
KakologArchives/KakologArchives
|
# Nico Nico Jikkyo Kakolog Archive
The Nico Nico Jikkyo Kakolog (past-log) Archive is a dataset collecting every past-log comment posted to [ニコニコ実況](https://jk.nicovideo.jp) (Nico Nico Jikkyo) from the start of the service to the present.
In December 2020, Nico Nico Jikkyo was [relaunched as an official channel within Nico Nico Live Broadcast](https://blog.nicovideo.jp/niconews/143148.html).
With this change, the old system, in operation since November 2009, was discontinued (effectively the end of the service). As support for consumer devices such as torne and BRAVIA ended across the board, roughly eleven years of past logs, filled with the raw voices of their time, were about to be lost as well.
Residents of 5ch's DTV board therefore launched a plan to archive the past logs of all channels for those eleven years before the old Nico Nico Jikkyo shut down. After various twists and turns, Nekopanda managed to capture the complete past logs of all channels, including radio and BS broadcasts, for about eleven years, so the logs were saved from vanishing into the digital sea.
However, because the old API was retired, past logs can no longer be fetched via an API, and since the archive totals roughly 150 GB, finding a desired range of logs within it is nowhere near as easy as it used to be.
Meanwhile, in the new Nico Nico Jikkyo, which moved to an official channel within Nico Nico Live Broadcast, timeshifts (the equivalent of past logs in the old Nico Nico Jikkyo) expire after three weeks, after which they can no longer be watched.
Regular (non-premium) members also have to reserve a timeshift in advance, so the old convenience has been lost.
We believe that the comments about Japanese television broadcasts posted to Nico Nico Jikkyo are materials of historical value that vividly capture the public mood and the spirit of their times.
To preserve all Nico Nico Jikkyo past logs for posterity, this dataset combines all old Nico Nico Jikkyo logs up to 2020/12/15 as distributed by Nekopanda, the new Nico Nico Jikkyo including community-run broadcasts, and, since 2024/06/10, the current day's logs from [NX-Jikkyo](https://nx-jikkyo.tsukumijima.net/), an alternative comment server for live commentary, collected every five minutes and reflected continuously.
There is also an [API](https://jikkyo.tsukumijima.net/) for fetching past logs easily.
Please feel free to use it as well.
## Dataset Structure
### Builder Config
| Key | Value Type | Default Value | Description |
| --------------- | ---------- | ------------- | ----------- |
| channel_id | string | None | ID of the Nico Nico Jikkyo channel to fetch logs for (all channels if omitted) |
| year | int | None | Year of the past logs to fetch (all years if omitted) |
| number_of_files | int | None | Number of past-log files to fetch (all files if omitted) |
### Data Splits
| Split | Approximate Size | Description |
| ------- | ---------------- | ----------- |
| sample | 1GB | As a sample, fetches all past-log comments for TOKYO MX (ID: jk9) posted during 2022, about 1 GB in total. |
| all | 190GB | Fetches all past-log comments for all channels and all periods. Beware: this exceeds 190 GB. |
### Data Fields
| Field | Type | Description |
| --------------- | -------- | ----------- |
| thread | string | Thread ID of the comment |
| no | int64 | Comment number |
| vpos | int64 | Playback position of the comment relative to the start of the thread (in 1/100 s) |
| date | int64 | UNIX timestamp of the comment post time |
| date_usec | int64 | Sub-second fraction of the comment post time |
| user_id | string | User ID (anonymized when the 184 command is specified, and reshuffled roughly weekly) |
| mail | string | Comment commands (e.g. 184, red naka big; may be omitted) |
| premium | boolean | True if the commenting user is a premium member |
| anonymity | boolean | True if the comment is anonymous |
| content | string | Comment body (beware of occasional multi-line comments such as ASCII art) |
## Example
```python
from datasets import load_dataset
dataset = load_dataset('KakologArchives/KakologArchives', 'all', channel_id='jk211', year=2023, number_of_files=10)
for data in dataset['train']:
print(data)
```
## Licensing Information
[MIT License](https://opensource.org/license/mit/)
|
| 79,867
| 16
|
[
"task_categories:text-classification",
"language:ja",
"license:mit",
"region:us"
] |
2023-05-12T13:31:56+00:00
|
2025-11-12T17:56:42+00:00
| 0
|
chrisrca/clash-royale-tv-replays
|
# Clash Royale TV Replays
Frame-by-frame gameplay recordings (~10 fps) from Clash Royale's TV Royale, covering all 31 arenas. Recording was automated using tools from our [GitHub repository](https://github.com/chrisrca/CS541-Deep-Learning-Clash-Royale-Project/tree/emulation).
## Structure
```
arena_{XX}/{replay_uuid}/
├── frames.parquet # Frame data
└── preview.jpg # First frame thumbnail
```
**Parquet Schema:**
- `frame_id` (int64): Frame number
- `image` (Image): PNG bytes
- `hash` (string): MD5 for deduplication
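A minimal sketch of how the MD5 `hash` column can drive deduplication (the byte strings below are placeholders, not real PNG frames, and `dedup_frames` is our illustrative helper, not part of the dataset tooling):

```python
import hashlib

def dedup_frames(frames):
    """Keep the first occurrence of each distinct frame, keyed by the
    MD5 of its PNG bytes (the same digest stored in the `hash` column)."""
    seen = set()
    unique = []
    for frame_id, png_bytes in frames:
        digest = hashlib.md5(png_bytes).hexdigest()
        if digest in seen:
            continue  # duplicate frame; skip it
        seen.add(digest)
        unique.append((frame_id, digest))
    return unique

frames = [(0, b"frame-a"), (1, b"frame-a"), (2, b"frame-b")]
print([fid for fid, _ in dedup_frames(frames)])  # [0, 2]
```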
## Usage
```python
import os

from huggingface_hub import hf_hub_download
import pyarrow.parquet as pq

# Substitute a concrete arena folder and replay UUID from the repo listing.
arena_id = "..."
replay_id = "..."

path = hf_hub_download(
    repo_id="chrisrca/clash-royale-tv-replays",
    filename=f"arena_{arena_id}/{replay_id}/frames.parquet",
    repo_type="dataset",
    token=os.environ.get("HF_TOKEN"),  # may be omitted for anonymous access
)
table = pq.read_table(path)
print(f"Loaded {len(table)} frames from {path}")
```
## Details
- **Resolution**: 540x960
- **Format**: PNG frames (ZSTD compressed)
- **Deduplication**: Only unique frames saved
- **Collection**: Automated via Android emulator
|
| 6,014
| 1
|
[
"task_categories:feature-extraction",
"license:mit",
"region:us",
"clash-royale",
"replays",
"gaming",
"computer-vision",
"parquet",
"image-dataset",
"video-frames",
"mobile-gaming"
] |
2025-11-10T02:39:02+00:00
|
2025-11-12T17:56:35+00:00
| 1
|
oxe-aug/language_table_train_160000_165000_augmented
|
# language_table_train_160000_165000_augmented
## Overview
- **Codebase version**: `v2.1`
- **Robots**: google_robot, images, jaco, kinova3, kuka_iiwa, panda, sawyer, ur5e
- **FPS**: 10
- **Episodes**: 5,000
- **Frames**: 79,439
- **Videos**: 40,000
- **Chunks**: 5
- **Splits**:
- `train`: `0:5000`
## Data Layout
```text
data_path : data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet
video_path: videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4
```
## Features
| Feature | dtype | shape | description |
|---|---:|---:|---|
| `observation.images.google_robot` | `video` | `360×640×3` | Augmented image for google_robot robot |
| `observation.images.image` | `video` | `360×640×3` | Source robot's image from original dataset |
| `observation.images.jaco` | `video` | `360×640×3` | Augmented image for jaco robot |
| `observation.images.kinova3` | `video` | `360×640×3` | Augmented image for kinova3 robot |
| `observation.images.kuka_iiwa` | `video` | `360×640×3` | Augmented image for kuka_iiwa robot |
| `observation.images.panda` | `video` | `360×640×3` | Augmented image for panda robot |
| `observation.images.sawyer` | `video` | `360×640×3` | Augmented image for sawyer robot |
| `observation.images.ur5e` | `video` | `360×640×3` | Augmented image for ur5e robot |
| `episode_index` | `int64` | `1` | - |
| `frame_index` | `int64` | `1` | - |
| `index` | `int64` | `1` | - |
| `natural_language_instruction` | `int32` | `512` | - |
| `observation.ee_pose` | `float32` | `7` | Source robot's eef position |
| `observation.google_robot.base_orientation` | `float32` | `1` | Rotation along the z-axis (CCW) so the robot does not block the camera (mostly 0) |
| `observation.google_robot.base_position` | `float32` | `3` | Base translation applied so the trajectory remains achievable |
| `observation.google_robot.ee_error` | `float32` | `7` | The eef difference between the augmented google_robot and the original robot |
| `observation.google_robot.ee_pose` | `float32` | `7` | The eef position of the google_robot |
| `observation.google_robot.joints` | `float32` | `8` | The joint positions of the google_robot |
| `observation.jaco.base_orientation` | `float32` | `1` | Rotation along the z-axis (CCW) so the robot does not block the camera (mostly 0) |
| `observation.jaco.base_position` | `float32` | `3` | Base translation applied so the trajectory remains achievable |
| `observation.jaco.ee_error` | `float32` | `7` | The eef difference between the augmented jaco and the original robot |
| `observation.jaco.ee_pose` | `float32` | `7` | The eef position of the jaco |
| `observation.jaco.joints` | `float32` | `7` | The joint positions of the jaco |
| `observation.joints` | `float32` | `8` | Joint angles of the source robot |
| `observation.kinova3.base_orientation` | `float32` | `1` | Rotation along the z-axis (CCW) so the robot does not block the camera (mostly 0) |
| `observation.kinova3.base_position` | `float32` | `3` | Base translation applied so the trajectory remains achievable |
| `observation.kinova3.ee_error` | `float32` | `7` | The eef difference between the augmented kinova3 and the original robot |
| `observation.kinova3.ee_pose` | `float32` | `7` | The eef position of the kinova3 |
| `observation.kinova3.joints` | `float32` | `8` | The joint positions of the kinova3 |
| `observation.kuka_iiwa.base_orientation` | `float32` | `1` | Rotation along the z-axis (CCW) so the robot does not block the camera (mostly 0) |
| `observation.kuka_iiwa.base_position` | `float32` | `3` | Base translation applied so the trajectory remains achievable |
| `observation.kuka_iiwa.ee_error` | `float32` | `7` | The eef difference between the augmented kuka_iiwa and the original robot |
| `observation.kuka_iiwa.ee_pose` | `float32` | `7` | The eef position of the kuka_iiwa |
| `observation.kuka_iiwa.joints` | `float32` | `8` | The joint positions of the kuka_iiwa |
| `observation.panda.base_orientation` | `float32` | `1` | Rotation along the z-axis (CCW) so the robot does not block the camera (mostly 0) |
| `observation.panda.base_position` | `float32` | `3` | Base translation applied so the trajectory remains achievable |
| `observation.panda.ee_error` | `float32` | `7` | The eef difference between the augmented panda and the original robot |
| `observation.panda.ee_pose` | `float32` | `7` | The eef position of the panda |
| `observation.panda.joints` | `float32` | `8` | The joint positions of the panda |
| `observation.sawyer.base_orientation` | `float32` | `1` | Rotation along the z-axis (CCW) so the robot does not block the camera (mostly 0) |
| `observation.sawyer.base_position` | `float32` | `3` | Base translation applied so the trajectory remains achievable |
| `observation.sawyer.ee_error` | `float32` | `7` | The eef difference between the augmented sawyer and the original robot |
| `observation.sawyer.ee_pose` | `float32` | `7` | The eef position of the sawyer |
| `observation.sawyer.joints` | `float32` | `8` | The joint positions of the sawyer |
| `observation.state` | `float32` | `2` | Copy of the state field in the source robot's RLDS dataset |
| `observation.ur5e.base_orientation` | `float32` | `1` | Rotation along the z-axis (CCW) so the robot does not block the camera (mostly 0) |
| `observation.ur5e.base_position` | `float32` | `3` | Base translation applied so the trajectory remains achievable |
| `observation.ur5e.ee_error` | `float32` | `7` | The eef difference between the augmented ur5e and the original robot |
| `observation.ur5e.ee_pose` | `float32` | `7` | The eef position of the ur5e |
| `observation.ur5e.joints` | `float32` | `7` | The joint positions of the ur5e |
| `task_index` | `int64` | `1` | - |
| `timestamp` | `float32` | `1` | - |
## Website
- Website page: [https://oxe-aug.github.io/](https://oxe-aug.github.io/)
- Project repository: [https://github.com/GuanhuaJi/oxe-aug](https://github.com/GuanhuaJi/oxe-aug)
## Paper
- [https://arxiv.org/abs/2210.06407](https://arxiv.org/abs/2210.06407)
## Citation Policy
If you use **OXE-Aug** datasets, please cite **both** our dataset and the **upstream datasets**.
## Upstream Dataset Citation (original dataset)
```bibtex
@article{lynch2022interactive,
title = {Interactive Language: Talking to Robots in Real Time},
author = {Corey Lynch and Ayzaan Wahid and Jonathan Tompson and Tianli Ding and James Betker and Robert Baruch and Travis Armstrong and Pete Florence},
journal = {arXiv preprint arXiv:2210.06407},
year = {2022},
url = {https://arxiv.org/abs/2210.06407}
}
```
## OXE-Aug Dataset Citation (ours)
```bibtex
@misc{
ji2025oxeaug,
title = {OXE-Aug: A Large-Scale Robot Augmentation of OXE for Scaling Cross-Embodiment Policy Learning},
author = {Ji, Guanhua and Polavaram, Harsha and Chen, Lawrence Yunliang and Bajamahal, Sandeep and Ma, Zehan and Adebola, Simeon and Xu, Chenfeng and Goldberg, Ken},
year = {2025},
note = {Manuscript}
}
```
|
| 0
| 0
|
[
"task_categories:robotics",
"license:cc-by-4.0",
"arxiv:2210.06407",
"region:us",
"robotics",
"lerobot",
"oxe-aug",
"dataset"
] |
2025-11-12T13:41:09+00:00
|
2025-11-12T17:56:08+00:00
| 0
|
ts0pwo/20K_real_and_deepfake_images_ELA
|
This dataset contains the test images used to evaluate our deepfake detection framework. It originally contained 20,000 real and deepfake images, but around 2,300 files are protected by UK Crown copyright and we do not have permission to reproduce them, so those files were removed.
Our framework comprises four machine learning models, which take as input the original images, error-level analysis (ELA) images, noise analysis (NA) images, and principal component analysis (PCA) images.
The models were created using TensorFlow version 2.26.2.
This repository stores the ELA images.
|
| 0
| 0
|
[
"task_categories:image-classification",
"language:en",
"size_categories:10K<n<100K",
"region:us",
"deepfake"
] |
2025-11-12T16:01:17+00:00
|
2025-11-12T17:54:18+00:00
| 0
|
mixture-vitae/MixtureVitae-2TT
|
# Aurora-M2
We are still uploading data...
This is a **multilingual, permissive, partially synthetic, decontaminated pre-training** dataset. It consists of cc-by, public-domain, and governmental web content. This dataset will eventually contain approximately 2 trillion tokens.
We overlap with many of the other permissively licensed datasets, such as Common Corpus, Common Pile, OLC, KL3M, etc., but we performed different filtering, collated similar data together to form around 4K tokens per example, and included a large amount of synthetic data (derived from permissive data or licensed permissively).
About half of the dataset is synthetic, with a large portion being permissively licensed code, math, and science reasoning traces. We took care to investigate whether the model used to generate the data and the ultimate source of the data are permissively usable.
Note that there are concerns about model collapse when using synthetic data in pretraining, and you may wish to apply techniques to mitigate this.
This dataset is intended for pretraining a foundational LLM. It includes:
- Business & politics - mostly from SEC filings, along with contracts from CUAD and Parliament debates from the Aurora-M1 dataset
- Fineweb - .gov.* and cc-by websites, from FineFineweb. We attach domain labels to web files to improve training.
- Formatted text (JSON, YAML, HTML, etc., from StarCoder v1, plus WebSights)
- Law from OLC
- MAGACorpus - synthetic data derived from .gov.* and cc-by websites
- Math - from DM Math and a small set of procedurally generated math problems written by the authors
- Nemo high - synthetic data derived from .gov.* and cc-by websites
- News from OLC
- Science and tech - Euro-pat with synthetic image captions, USPTO from the Pile and TXT360, arXiv abstracts, CC-BY papers, PubMed, and peS2o from Common Pile, OLC, and elsevier-oa-cc-by
- Software in selected languages (Python, Java, etc.) from StarCoder v1
* We use StarCoder v1 instead of StarCoder v2 because of the additional licensing requirements from Software Heritage. While StarCoder v2 is excellent, MixtureVitae is an exercise in creating a dataset that is easy to use with fewer licensing hurdles.
- Stackexchange - mostly from TXT360 and RedPajama v1
- Wiki - MegaWiki, and a Wikipedia copy from TXT360. There is also a substantial portion of Wikipedia in the Fineweb subset. We have also included a reformatted version of meta-active-reading.
- Youtube - Common Corpus, FineVideo, and VALID. For the VALID dataset, we included image captions of key frames, along with Q/A about the video at the end of some videos.
- Synthetic & Instructions - From permisvelly licensed data (CC-BY-SA, Apache, etc.) - Ling-coder, Ring-Lite, Glaive reasoning, Nemo Math and Science, Open Thoughts, Prism-math, p3 dataset converted to few-shot format
* We have avoided datasets generated by commercial models, as well as the Llama models, and other models with licenses that has restrictions on commercial usage. We do use outputs of certain Apache licensed Qwen models, Phi models, R1 models. Where there is a clear mixture of output - instruction from qwen 70b under the Qwen license and output by R1, we stripped out the problematic Qwen generated instructions. The input for these synthetic data are also, to our knowledge, from permissive sources.
* More synthetic data than the 211BT mixture
- Multilingual - .gov and cc-by websites from Dcad (which is based on Fineweb2) and CulturaY
- Aya multilingual (without English subset)
Please be aware that we use the `<|endoftext|>` token to separate documents in each example. We recommend replacing this token with the appropriate eos token from the target tokenizer used for training your model. In some reasoning datasets we have also used `<think>` and `</think>` tokens; you may wish to add these as special tokens.
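As a minimal sketch, the separator swap could be a plain string replacement; `</s>` below is a hypothetical EOS token, so substitute the one from your own tokenizer:

```python
# Minimal sketch: swap the dataset's document separator for the EOS token of
# the target tokenizer. "</s>" is a hypothetical EOS token; substitute your own.
DATASET_SEP = "<|endoftext|>"

def replace_separator(text: str, eos_token: str = "</s>") -> str:
    """Replace every occurrence of the dataset separator with the EOS token."""
    return text.replace(DATASET_SEP, eos_token)

print(replace_separator("First doc.<|endoftext|>Second doc.<|endoftext|>"))
# First doc.</s>Second doc.</s>

# With a Hugging Face tokenizer, the reasoning tokens could be registered as:
# tokenizer.add_special_tokens({"additional_special_tokens": ["<think>", "</think>"]})
```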
All of our work that is not derived from the underlying data, such as our organization, tagging, and data formatting, is licensed by us under the ODC-By license.
**Please note:** We have found in early ablation studies (5BT, 10BT, and 15BT pretraining ablations) that adding a small percentage of instruction data does convey instruction-following skills. This allows trainers to probe their models with instructions, among other things. However, we found that adding refusals for alignment caused the model to over-refuse during pretraining.
Users should experiment with various proportions for their purposes, but we believe a random sample of this dataset could form a "fair" comparison to other similar datasets.
Since this is a working version, and not the final version, there may be errors in tagging or formatting. Also, this version is NOT an aligned version of the dataset. We will release an aligned version which performs more rigorous debiasing and anonymization.
Under the MixtureVitae datasets, we consider data that is in the public domain, out of copyright, cc-by-*, software under open source (but non-GPL) licenses, or other openly licensed content, as well as certain .gov. data (for which we believe there is a strong fair use argument), to be low copyright risk. Permissive, here, means we think there is lower risk for a researcher to train on the data.
But we believe that the risk of infringement from training exists on a continuum and can vary by the type and purpose of usage: content created solely by the authors of this dataset is the least risky, cc-by content carries intermediate risk, and .gov. content is more risky than open source content even under a fair use analysis. Risks can also vary by jurisdiction.
Even when content is cc-by licensed or published on a government website, this doesn't mean there is no copyright risk. For example, a government website may cite a copyrighted work; an open source GitHub repo may include third-party copyrighted content (for example, a product description) in a markdown page; or a Wikipedia cc-by-sa page may include quotes from movies. See our blog at https://aurora-lm.github.io/posts/mixturevitae/ for a longer discussion, and https://www.copyright.gov/ai/Copyright-and-Artificial-Intelligence-Part-3-Generative-AI-Training-Report-Pre-Publication-Version.pdf for a US-oriented analysis. Laws are constantly changing, especially AI laws, so it is best to keep abreast of the current legal risks with your attorneys.
We also think that the risk of infringement during training is different from that at inference. For example, training might be fair use because it is more transformative, at least in the US, but outputting verbatim text could very well be infringement if the content was not permissively licensed or allowed to be distributed.
While we have done extensive work to create a permissively usable training dataset, please consult your own attorneys for any legal risks in using this dataset.
TODO:
We will include multimodal tokens. The multimodal data is tokenized as SNAC, SEED2, and JPEG data.
## Web data from Common Crawl
A portion of our data, across the various subsets, is derived from Common Crawl and thus subject to the Common Crawl terms of use: https://commoncrawl.org/terms-of-use
Common Crawl respects robots.txt prohibitions, but it includes many commercial websites available on the Internet. To limit copyright risks, we applied the following filters.
We start with FineFineweb, which is a domain-labeled version of Fineweb, which in turn is a filtered version of Common Crawl.
We filtered based on a list of potential government and NGO websites/URL patterns:
- `.mil/`
- `.vlada.mk`
- `.vlada.cz`
- `.kormany.hu`
- `regeringen.` (matches domains like regeringen.se, regeringen.no, etc.)
- `.rijksoverheid.nl`
- `.government.nl`
- `.regeringen.se`
- `.regeringen.dk`
- `.regeringen.no`
- `.bund.de`
- `.bundesregierung.de`
- `.government.ru`
- `.gc.ca`
- `.admin.ch`
- `www.gob.cl/`
- `www.gob.ec/`
- `guatemala.gob.gt/`
- `presidencia.gob.hn/`
- `www.gob.mx/`
- `presidencia.gob.pa/`
- `www.gob.pe/`
- `gob.es/`
- `argentina.gob.ar/`
- `tanzania.go.tz/`
- `.indonesia.go.id/`
- `.go.kr/`
- `.go.jp/`
- `thailand.go.th/`
- `.europa.eu/`
- `.un/`
- `.int/`
- `.govt.`
- `www.gub.uy`
- `.gov` (as suffix, e.g. `idx.endswith(".gov")`)
- `.gov/`
- `.gov.`
- `.gouv.`
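A minimal sketch of how such URL patterns could be applied is below; only a handful of the patterns above are included, and the suffix check mirrors the `idx.endswith(".gov")` example:

```python
# Illustrative sketch of the government-domain filter; only a handful of the
# patterns listed above are included here.
GOV_PATTERNS = [".mil/", ".gov/", ".gov.", ".gouv.", ".europa.eu/", ".int/", ".govt."]

def is_government_url(url: str) -> bool:
    """Return True if the URL matches a government pattern or its host ends in .gov."""
    # Extract the host so the suffix check mirrors idx.endswith(".gov").
    host = url.split("/")[2] if "://" in url else url.split("/")[0]
    if host.endswith(".gov"):
        return True
    return any(pattern in url for pattern in GOV_PATTERNS)

print(is_government_url("https://www.nasa.gov/missions"))  # True
print(is_government_url("https://example.com/page"))       # False
```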
And we created a list of around 50 websites that we know to be cc-by-* or public domain. We chose general wikis as well as software and technology sites, law-related sites, and other known sites. We read the terms of use to confirm, to the extent we could, that they permitted permissive usage before adding these 50 or so domains.
- `.free.law/`
- `.europeana.eu/`
- `.publicdomainreview.org/`
- `.wisdomcommons.org/`
- `.intratext.com/`
- `.mediawiki.org/`
- `.wikimedia.org/`
- `.wikidata.org/`
- `.wikipedia.org/` *
- `.wikisource.org/`
- `.wikifunctions.org/`
- `.wikiquote.org/`
- `.wikinews.org/`
- `.wikivoyage.org/`
- `.wiktionary.org/`
- `.wikibooks.org/`
- `.courtlistener.com/`
- `.case.law/`
- `pressbooks.oer.hawaii.edu/`
- `.huggingface.co/docs/`
- `.opencourselibrary.org/`
- `.medbiq.org/`
- `.doabooks.org/`
- `.bccampus.ca/`
- `open.umn.edu/opentextbooks/`
- `www.gutenberg.org/`
- `.mozilla.org/`
- `www.eclipse.org/`
- `.apache.org/`
- `.python.org/`
- `.pytorch.org/`
- `.numpy.org/`
- `.scipy.org/`
- `.opencv.org/`
- `.scikit-learn.org/`
- `.pydata.org/`
- `.matplotlib.org/`
- `.palletsprojects.com/`
- `.sqlalchemy.org/`
- `.pypi.org/`
- `.sympy.org/`
- `.nltk.org/`
- `.scrapy.org/`
- `.owasp.org/`
- `.creativecommons.org/`
- `.wikia.com/`
- `.foodista.com/`
- `.fandom.com/`
- `.attack.mitre.org/`
While we do include Wikipedia in the above, we do not include Stack Exchange, because Wikipedia has many subdomains that may be more diverse in a web crawl, and we already have a highly formatted subset of Stack Exchange. In future iterations, we may also include the web-crawled version of Stack Exchange from Common Crawl.
We also searched for keywords, such as "cc-by-sa", in the headers and footers of FineFineweb pages. The terms of use of the above sites might, for example, provide 'unless otherwise stated, the contents are licensed under cc-by-sa...'. Because of caveats like these, we also applied heuristic filters, such as filtering out documents that include "all rights reserved."
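These heuristics could be sketched roughly as follows; the marker lists here are illustrative, not our production filters:

```python
# Illustrative license heuristics: keep a page only if a permissive marker
# appears in its header/footer and no restrictive marker appears in the page.
PERMISSIVE_MARKERS = ("cc-by-sa", "cc-by", "public domain")
RESTRICTIVE_MARKERS = ("all rights reserved",)

def keep_page(header_footer: str, full_text: str) -> bool:
    """Return True if the page passes the permissive-license heuristics."""
    if any(m in full_text.lower() for m in RESTRICTIVE_MARKERS):
        return False
    return any(m in header_footer.lower() for m in PERMISSIVE_MARKERS)

print(keep_page("Content licensed under CC-BY-SA 4.0", "article body"))  # True
print(keep_page("Licensed CC-BY", "body ... All Rights Reserved."))      # False
```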
We also maintained a block list of sites which we don't use, even if they might contain cc-by content, including common news websites.
Note that we included Wikipedia from the TXT360 subset as well as from MegaWiki and FineFineweb, so there will be duplicated Wikipedia pages.
For the TXT360 Wikipedia subset, we filtered out pages about people who are still alive using patterns such as "... born March 1, 1999) is a German ...". The reason is that we wish to minimize memorization of personal information. Note that we perform further forms of anonymization in our aligned MixtureVitae dataset.
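A rough approximation of this living-person filter is shown below; the regex is our illustrative reconstruction of the quoted pattern, not the exact production code:

```python
import re

# Illustrative reconstruction of the living-person filter: match biography
# leads of the form "... born <Month> <day>, <year>) is a/an ..." where no
# death date closes the parenthetical, i.e. the subject is presumably alive.
ALIVE_PATTERN = re.compile(r"born\s+\w+\s+\d{1,2},\s+\d{4}\)\s+is\s+an?\s", re.IGNORECASE)

def is_living_person_page(text: str) -> bool:
    return bool(ALIVE_PATTERN.search(text))

print(is_living_person_page("Jane Doe (born March 1, 1999) is a German chemist."))  # True
print(is_living_person_page("John Roe (1901-1980) was an engineer."))               # False
```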
Most of our web dataset consists of government websites or wikis.
Table 2. The top domains in our web (FineFineweb) subset:
| **Domain** | **Count** |
|------------------------------------|---------|
| m.wikipedia.org * | 167428 |
| nlm.nih.gov | 113078 |
| www.federalregister.gov | 98579 |
| nsw.gov.au | 67952 |
| vic.gov.au | 59044 |
| ec.europa.eu | 43850 |
| m.wikisource.org | 38916 |
| www.justice.gov | 38377 |
| qld.gov.au | 35866 |
| jst.go.jp | 34033 |
| ars.usda.gov | 31073 |
| wa.gov.au | 28598 |
| www.cdc.gov | 28115 |
| www.gov.uk | 26916 |
| www.nps.gov | 26298 |
| www.gov.scot | 26145 |
| eric.ed.gov | 25102 |
| reliefweb.int | 24877 |
| clinicaltrials.gov | 24611 |
| sa.gov.au | 20603 |
| chroniclingamerica.loc.gov | 20242 |
| www.army.mil | 20003 |
| history.state.gov | 19195 |
| cordis.europa.eu | 18856 |
| nal.usda.gov | 17032 |
| www.wipo.int | 17021 |
| www.mass.gov | 14921 |
| www.fda.gov | 14853 |
| ukurier.gov.ua | 14808 |
| founders.archives.gov | 14266 |
| act.gov.au | 13822 |
| mn.gov | 13767 |
| www.sec.gov | 13501 |
| bugzilla.mozilla.org | 13004 |
| fhwa.dot.gov | 12738 |
| www.gao.gov | 12690 |
| djvu.wikisource.org | 11954 |
| leg.wa.gov | 11766 |
| www.state.gov | 11468 |
| fs.usda.gov | 11383 |
| aph.gov.au | 11151 |
| apps.dtic.mil | 11097 |
| mail.python.org | 10554 |
| gov.bc.ca | 10514 |
| usace.army.mil | 9973 |
| www.congress.gov | 9882 |
| 2009-2017.state.gov | 9581 |
| military-history.fandom.com | 9313 |
| www.nysenate.gov | 9306 |
| www.epa.gov | 9001 |
| abs.gov.au | 8824 |
| tas.gov.au | 8784 |
| m.wikibooks.org | 8736 |
| gov.on.ca | 8696 |
| gsfc.nasa.gov | 8586 |
| www.fws.gov | 8386 |
| www.ntsb.gov | 8130 |
| blog.gov.uk | 8091 |
| legis.wisconsin.gov | 8070 |
| www.nasa.gov | 8067 |
| cfpub.epa.gov | 7943 |
| www.loc.gov | 7742 |
| www.usgs.gov | 7688 |
| www.clinicaltrials.gov | 7517 |
| natlib.govt.nz | 7465 |
| www.michigan.gov | 7395 |
| ato.gov.au | 7279 |
| sp.gov.br | 7208 |
| www.nist.gov | 7173 |
| obamawhitehouse.archives.gov | 7170 |
| www.nyc.gov | 7111 |
| justice.gc.ca | 7086 |
| service.gov.uk | 7085 |
| nationalarchives.gov.uk | 7082 |
| www.sbir.gov | 7012 |
| www.akleg.gov | 6969 |
| www.defense.gov | 6941 |
| nt.gov.au | 6878 |
| m.wikiquote.org | 6869 |
| niehs.nih.gov | 6867 |
| revisor.mn.gov | 6800 |
| www.dol.gov | 6632 |
| gouv.qc.ca | 6545 |
| statcan.gc.ca | 6509 |
| wwwnc.cdc.gov | 6389 |
| ons.gov.uk | 6301 |
| legislation.gov.uk | 6207 |
| research.va.gov | 6198 |
| eurofound.europa.eu | 6034 |
| portal.ct.gov | 5909 |
| nla.gov.au | 5905 |
| codes.ohio.gov | 5807 |
| www.energy.gov | 5805 |
| oai.dtic.mil | 5757 |
| georgewbush-whitehouse.archives.gov | 5674 |
| health.gov.au | 5554 |
| dec.ny.gov | 5448 |
| www.ftc.gov | 5404 |
| forecast.weather.gov | 5398 |
| aspe.hhs.gov | 5358 |
Table 3. Overlap with common-pile. There is about a 0.1% overlap with common-pile's cccc subset, which unsurprisingly includes government websites:
| **Domain** | **Count** |
|------------------------------------|-------|
| nsw.gov.au | 67952 |
| qld.gov.au | 35866 |
| abs.gov.au | 8824 |
| addons.mozilla.org | 4045 |
| conicet.gov.ar | 3020 |
| awm.gov.au | 2142 |
| eea.europa.eu | 1966 |
...
### Analysis of data
Note the compression ratio of each subset relative to its contamination rate.
Table 1. Raw sizes of various subsets and their compressed size, and compression ratio.
(This table is not yet complete...)
| Folder | Uncompressed Size | Compressed Size (Sum of files) | Compression Ratio |
| ----------------------- | ----------------- | ------------------------------ | ----------------- |
| **synthetic\_instruct** | 615 GB | 142.16 GB | **4.33×** |
| **software** | 120 GB | 29.49 GB | **4.07×** |
| **wiki** | 215 GB | 55.75 GB | **3.86×** |
| **nemo** | 49 GB | 13.15 GB | **3.73×** |
| **math** | 11 GB | 2.97 GB | **3.70×** |
| **maga** | 33 GB | 9.5 GB | **3.47×** |
| **youtube**             | 23 GB             | 6.71 GB                        | **3.43×**         |
| **formatted\_text**     | 50 GB             | 14.98 GB                       | **3.34×**         |
| **business**            | 884 MB            | 266 MB                         | **3.32×**         |
| **stackexchange** | 94 GB | 32.31 GB | **2.91×** |
| **law** | 82 GB | 28.28 GB | **2.90×** |
| **fineweb** | 88 GB | 30.68 GB | **2.87×** |
| **news** | 1.1 GB | 387 MB | **2.84×** |
Decontaminated following a phi-4-like method (13-gram overlap, except in cases where the 13-grams also appear in the train set, Wikipedia, or public domain books) against:
- Agieval
- ARC
- MBPP
- MBPPPlus
- MMLU
- Gsm8k
- MATH
- ToxiGen
- COPA
- OpenBookQA
- Winogrande
- BoolQ
- HellaSwag
- PIQA
- CommonsenseQA
- Humaneval
- HumanevalPlus
- ALERT
- SimpleQA
- DoNotAnswer
- Ifeval
- LAMBADA
- GPQA
- AIME2024
- AIME2025
- HMMT_Feb_2025
- USAMO
- BRUMO
- MMLU_Redux
- MMLU_Pro
- MATH500
- AdvBench
- MuSR
- BBH
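A minimal sketch of the 13-gram check is below; the allowlist stands in for the common n-grams drawn from the train set, Wikipedia, and public domain books:

```python
# Minimal sketch of 13-gram decontamination: a document is flagged when one of
# its word 13-grams also occurs in a benchmark, unless that 13-gram is on an
# allowlist (standing in for n-grams from the train set, Wikipedia, and
# public domain books).
N = 13

def ngrams(text: str, n: int = N) -> set:
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def is_contaminated(doc: str, benchmark_ngrams: set, allowlist: frozenset = frozenset()) -> bool:
    return bool((ngrams(doc) & benchmark_ngrams) - allowlist)

bench = ngrams("what is the capital of france the answer to this question is paris of course")
print(is_contaminated(
    "trivia: what is the capital of france the answer to this question is paris of course indeed",
    bench,
))  # True
```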
Removed contaminated data in bytes by file:
| File Name | Size |
|------------|-------|
| nemo_science_math-1_contaminated.jsonl | 322M |
| ring-lite-sft-0_contaminated.jsonl | 198M |
| ring-lite-sft-1_contaminated.jsonl | 196M |
| prism_math_contaminated.jsonl | 156M |
| nemo_science_math-0_contaminated.jsonl | 141M |
| open_thoughts-0_contaminated.jsonl | 135M |
| misc_instruct_contaminated.jsonl | 114M |
| open_thoughts-1_contaminated.jsonl | 90M |
| nemo_science_math-2_contaminated.jsonl | 85M |
| open_thoughts-2_contaminated.jsonl | 73M |
| school_math_contaminated.jsonl | 72M |
| ring-lite-sft-2_contaminated.jsonl | 71M |
| open_thoughts-3_contaminated.jsonl | 67M |
| math_sft-1_contaminated.jsonl | 63M |
| open_thoughts-4_contaminated.jsonl | 59M |
| nemo_science_math-3_contaminated.jsonl | 58M |
| math_sft-0_contaminated.jsonl | 57M |
| math_sft-2_contaminated.jsonl | 54M |
| open_thoughts-5_contaminated.jsonl | 49M |
| math_reasoning_contaminated.jsonl | 48M |
| prism_science_contaminated.jsonl | 46M |
| ring-lite-sft-3_contaminated.jsonl | 45M |
| reasoning_instruct_contaminated.jsonl | 44M |
| nemo_science_math-4_contaminated.jsonl | 43M |
| math_sft-3_contaminated.jsonl | 43M |
| ring-lite-sft-4_contaminated.jsonl | 40M |
| open_thoughts-6_contaminated.jsonl | 38M |
| open_thoughts-7_contaminated.jsonl | 37M |
| nemo_science_math-5_contaminated.jsonl | 36M |
| open_thoughts-8_contaminated.jsonl | 34M |
| ring-lite-sft-5_contaminated.jsonl | 34M |
| open_thoughts-9_contaminated.jsonl | 32M |
| math_sft-4_contaminated.jsonl | 32M |
| nemo_science_math-6_contaminated.jsonl | 31M |
| open_thoughts-10_contaminated.jsonl | 29M |
| ring-lite-sft-6_contaminated.jsonl | 28M |
| open_thoughts-11_contaminated.jsonl | 27M |
| math_sft-5_contaminated.jsonl | 26M |
| open_thoughts-12_contaminated.jsonl | 24M |
| ring-lite-sft-7_contaminated.jsonl | 24M |
| open_thoughts-13_contaminated.jsonl | 22M |
| open_thoughts-14_contaminated.jsonl | 21M |
| nemo_science_math-7_contaminated.jsonl | 20M |
| math_sft-6_contaminated.jsonl | 20M |
| open_thoughts-15_contaminated.jsonl | 18M |
| open_thoughts-16_contaminated.jsonl | 17M |
| ring-lite-sft-8_contaminated.jsonl | 17M |
| open_thoughts-17_contaminated.jsonl | 16M |
| math_sft-7_contaminated.jsonl | 15M |
| nemo_science_math-8_contaminated.jsonl | 15M |
| open_thoughts-18_contaminated.jsonl | 14M |
| ring-lite-sft-9_contaminated.jsonl | 14M |
| open_thoughts-19_contaminated.jsonl | 13M |
| math_sft-8_contaminated.jsonl | 13M |
| open_thoughts-20_contaminated.jsonl | 12M |
| nemo_science_math-9_contaminated.jsonl | 11M |
| open_thoughts-21_contaminated.jsonl | 10M |
| math_sft-9_contaminated.jsonl | 10M |
| open_thoughts-22_contaminated.jsonl | 9M |
| ring-lite-sft-10_contaminated.jsonl | 9M |
| open_thoughts-23_contaminated.jsonl | 8M |
| math_sft-10_contaminated.jsonl | 8M |
| nemo_science_math-10_contaminated.jsonl | 7M |
| open_thoughts-24_contaminated.jsonl | 7M |
| ring-lite-sft-11_contaminated.jsonl | 7M |
| open_thoughts-25_contaminated.jsonl | 6M |
| math_sft-11_contaminated.jsonl | 6M |
| open_thoughts-26_contaminated.jsonl | 6M |
| nemo_science_math-11_contaminated.jsonl | 6M |
| open_thoughts-27_contaminated.jsonl | 5M |
| ring-lite-sft-12_contaminated.jsonl | 5M |
| math_sft-12_contaminated.jsonl | 5M |
| open_thoughts-28_contaminated.jsonl | 4M |
| nemo_science_math-12_contaminated.jsonl | 4M |
| open_thoughts-29_contaminated.jsonl | 3M |
| ring-lite-sft-13_contaminated.jsonl | 3M |
| open_thoughts-30_contaminated.jsonl | 3M |
| math_sft-13_contaminated.jsonl | 3M |
| open_thoughts-31_contaminated.jsonl | 2M |
| nemo_science_math-13_contaminated.jsonl | 2M |
| open_thoughts-32_contaminated.jsonl | 2M |
| ring-lite-sft-14_contaminated.jsonl | 2M |
| math_sft-14_contaminated.jsonl | 2M |
| open_thoughts-33_contaminated.jsonl | 1M |
| open_thoughts-34_contaminated.jsonl | 1M |
| nemo_science_math-14_contaminated.jsonl | 1M |
| open_thoughts-35_contaminated.jsonl | 1M |
| ring-lite-sft-15_contaminated.jsonl | 1M |
| math_sft-15_contaminated.jsonl | 1M |
# Aurora-M2
We are still uploading data...
This is a **multilingual, permissive, partially synthetic, decontaminated pre-training** dataset. It consists of cc-by, public domain, or governmental websites. This dataset will eventually contain approximately 2 trillion tokens.
We have an overlap with many of the other permissively licensed datasets, such as common corpus, common pile, OLC, KL3M, etc., but we performed different filtering, collated similar data together to form around 4K tokens per example, and included a large amount of synthetic data (derived from permissive data or licensed permissively).
About half of the dataset is synthetic, with a large portion being permissively licensed code, math, and science reasoning traces. We took care to investigate whether the model that was used to generate the data and the ultimate source of the data are permissively usable.
Note that there are concerns of model collapse in using synthetic datasets in pretraining, and you may wish to use techniques to mitigate this.
This dataset is intended for pretraining a foundational LLM. Includes:
- Business & politics - Mostly from SEC filings, along with contracts from CUAD, and Parliament debates from Aurora-M1 dataset
- Fineweb - of .gov.* and cc-by websites, from FineFineweb. We attach domain labels to web files to improve training.
- Formatted Text (JSON, Yaml, HTML, etc from startcoder v1, plus websights)
- Law from OLC
- MAGACorpus synthetic dervied from .gov.* and cc-by websites,
- Math - from DM math and a small of procedurally generated math problems by the authors
- Nemo high synthetic derived from .gov.* and cc-by websites,
- News from OLC
- Science and Tech - Eruo-pat with synthetic image captions, and USPTO from Pile and TXT360, with Arxiv abstracts and CC-BY papers and pubmed, peS2o from common-pile, OLC and elsevier-oa-cc-by.
- Software of select langauges (Python, Java, etc.) from starcoder v1.
* We use starcoder v1 instead of starcoder v2 because of the additional licensing requirements from the Heritage Foundation. While Starcoder v2 is excellent, MixtureVitae is an excercise in creating a dataset that is easy to use with less licensing hurdles.
- Stackexchange - Mostly from TXT360 and RedPajama v1
- Wiki - MegaWiki, and Wikipedia copy from TXT360. There is also a substantial portion of Wikipedia in the Fineweb subset as well. We have also included a reformatted version of meta-active-reading.
- Youtube - Common Corpus, Finevideo and VALID. For the VALID dataset, we included image captions of key frames along with Q/A at the end of some videos about the video.
- Synthetic & Instructions - From permisvelly licensed data (CC-BY-SA, Apache, etc.) - Ling-coder, Ring-Lite, Glaive reasoning, Nemo Math and Science, Open Thoughts, Prism-math, p3 dataset converted to few-shot format
* We have avoided datasets generated by commercial models, as well as the Llama models, and other models with licenses that has restrictions on commercial usage. We do use outputs of certain Apache licensed Qwen models, Phi models, R1 models. Where there is a clear mixture of output - instruction from qwen 70b under the Qwen license and output by R1, we stripped out the problematic Qwen generated instructions. The input for these synthetic data are also, to our knowledge, from permissive sources.
* More synthetic data than the 211BT mixture
- Multilingual .gov, cc-by website from Dcad (which is based on Fineweb2), and CulutraY
- Aya multilingual (without English subset)
Please be aware that we use the <|endoftext|> token to separate documents in each example. We recommend replacing this token with your appropriate eos token from the target tokenizer used for training your model. Also we have used in some reasoning datasets, `<think>` and `</think>` tokens. You may wish to add these special tokens.
All of our work that is not derived from the underlying data, such as our organization, tagging, and data formatting is licensed by us under ODC-By license.
**Please note:** We have found in early ablation studies that a small percentage of instruction data added to our 5BT ablation, 10BT ablations and 15BT ablations pretraining, does convey instruction following skills. This allows trainers to probe their models with instructions, among other things. However, we found that adding refusals for alignment caused the model to overly refuse during pretraining.
Users shoud experiment with various proportions for their purposes, but we believe a random sample of this dataset could form a "fair" comparsion to other similar datasets.
Since this is a working version, and not the final version, there may be errors tagging, or formatting. Also, this version is NOT an aligned version of the dataset. We will release an aligned version which performs more rigoruos debiasing and anonymization.
Under the MixtureVitae datasets, we consider data that is in the public domain, out of copyright, cc-by-*, software under open source (but non GPL licenses), or other open licensed content, as well as certain .gov. data (which we believe there is a strong fair use argument for) as low copyright risk. Permissive, here, means we think there is lower risk for a researcher to train on the data.
But we believe that the risks for infringement for training exists in a continum and can vary by the type and purpose of usage, with content created solely by authors of this dataset the least risky, cc-by content with some intermediate risk, and .gov. content being more risky then open source content even under a fair use analysis. Risks can also vary by jurisdictions.
Even when content is cc-by licensed or published on a government website, this doesn't mean there is not copyright risk. For example, a government website may cite a copyrighted work, an open source github repo may include 3rd party copyrighted content of, for example, product description, in a markdown page, or a Wikipedia cc-by-sa page may include quotes from movies. See our blog here https://aurora-lm.github.io/posts/mixturevitae/ for a longer discussion. See https://www.copyright.gov/ai/Copyright-and-Artificial-Intelligence-Part-3-Generative-AI-Training-Report-Pre-Publication-Version.pdf for a US oriented analysis. Laws are constantly changing, especial AI laws, so it is best to keep abreast of the current legal risks with your attorneys.
We also think that the risk of infringement during training is different than that of inference. For example, training might be fair use because it is more transformative at least in the US, but outputing verbatim text could very well be infringement if the content was not permissively licensed or allowed to be distributed.
While we have done extensive work to create a permissively usable training dataset, please consult your own attorneys for any legal risks in using this dataset.
TODO:
We will include multimodal tokens. The multimodal data is tokenized SNAC, SEED2 and jpg data.
## Web data from Common Crawl
A portion of our data through the various subsets are dervied from Common Crawl, and thus subject to the Common Crawl terms of use. https://commoncrawl.org/terms-of-use
Common crawl resepcts the robots.txt prohibition. But common-crawl includes many commercial websites available on the Internet. To limit copyright risks we performed the following filters.
We start with FineFineweb which is a domain labeled version of Fineweb, which in turn is a filtered version of Common Crawl.
We filtered based on a list of potential government and NGO websites/URL patterns:
- `.mil/`
- `.vlada.mk`
- `.vlada.cz`
- `.kormany.hu`
- `regeringen.` (matches domains like regeringen.se, regeringen.no, etc.)
- `.rijksoverheid.nl`
- `.government.nl`
- `.regeringen.se`
- `.regeringen.dk`
- `.regeringen.no`
- `.bund.de`
- `.bundesregierung.de`
- `.government.ru`
- `.gc.ca`
- `.admin.ch`
- `www.gob.cl/`
- `www.gob.ec/`
- `guatemala.gob.gt/`
- `presidencia.gob.hn/`
- `www.gob.mx/`
- `presidencia.gob.pa/`
- `www.gob.pe/`
- `gob.es/`
- `argentina.gob.ar/`
- `tanzania.go.tz/`
- `.indonesia.go.id/`
- `.go.kr/`
- `.go.jp/`
- `thailand.go.th/`
- `.europa.eu/`
- `.un/`
- `.int/`
- `.govt.`
- `www.gub.uy`
- `.gov` (as suffix, e.g. `idx.endswith(".gov")`)
- `.gov/`
- `.gov.`
- `.gouv.`
And we created a list of around 50 websites that we know to be cc-by-* or public domain websites. We chose general Wiki's as well as software and technology sites, law related sites and other known sites. We read the terms of use to confirm they provided permissie usage, to the extent we could, before adding these 50 or so domains.
- `.free.law/`
- `.europeana.eu/`
- `.publicdomainreview.org/`
- `.wisdomcommons.org/`
- `.intratext.com/`
- `.mediawiki.org/`
- `.wikimedia.org/`
- `.wikidata.org/`
- `.wikipedia.org/` *
- `.wikisource.org/`
- `.wikifunctions.org/`
- `.wikiquote.org/`
- `.wikinews.org/`
- `.wikivoyage.org/`
- `.wiktionary.org/`
- `.wikibooks.org/`
- `.courtlistener.com/`
- `.case.law/`
- `pressbooks.oer.hawaii.edu/`
- `.huggingface.co/docs/`
- `.opencourselibrary.org/`
- `.medbiq.org/`
- `.doabooks.org/`
- `.bccampus.ca/`
- `open.umn.edu/opentextbooks/`
- `www.gutenberg.org/`
- `.mozilla.org/`
- `www.eclipse.org/`
- `.apache.org/`
- `.python.org/`
- `.pytorch.org/`
- `.numpy.org/`
- `.scipy.org/`
- `.opencv.org/`
- `.scikit-learn.org/`
- `.pydata.org/`
- `.matplotlib.org/`
- `.palletsprojects.com/`
- `.sqlalchemy.org/`
- `.pypi.org/`
- `.sympy.org/`
- `.nltk.org/`
- `.scrapy.org/`
- `.owasp.org/`
- `.creativecommons.org/`
- `.wikia.com/`
- `.foodista.com/`
- `.fandom.com/`
- `.attack.mitre.org/`
While we do include wikipedia in the above, we do not include stackexchange, because Wikipedia has many subdomains that might more diverse in a webcrawl, and we already have a highly formatted subset of stack excahnge. In future interations, we may also include the webcrawled version of stackexchange from Common Crawl.
We also searched for keywords, such as "cc-by-sa" in the header and footer of FineFine web pages and applied heuristics to filter out instances where
Terms of use of the above sites might for example provide 'unless otherwise stated, the contents are licensed under cc-by-sa...' Because of caveats like these, we also had heuristic filters, such as filtering documents that includes "all rights reserved."
We also had a block list of sites which we don't use, even if there might be cc-by content, including common news websites.
Note that we included both Wikipedia from the TXT360 subset, as well as Megawiki and FineFineweb, so there will be duplicated Wikipedia pages.
For the TXT360 Wikipedia subset, we filtered out pages about people are are still alive using patterns "... born March 1, 1999) is an German ...". The reason is that we wish to minimize memorization of personal information. Note, we further perform other forms fo anonymization in our aligned MixtureVitae dataset.
Most of our dataset includes government websites or Wiki's.
Table 2. The top domains in our web (FineFineweb) subset:
| **Domain** | **Count** |
|------------------------------------|---------|
| m.wikipedia.org * | 167428 |
| nlm.nih.gov | 113078 |
| www.federalregister.gov | 98579 |
| nsw.gov.au | 67952 |
| vic.gov.au | 59044 |
| ec.europa.eu | 43850 |
| m.wikisource.org | 38916 |
| www.justice.gov | 38377 |
| qld.gov.au | 35866 |
| jst.go.jp | 34033 |
| ars.usda.gov | 31073 |
| wa.gov.au | 28598 |
| www.cdc.gov | 28115 |
| www.gov.uk | 26916 |
| www.nps.gov | 26298 |
| www.gov.scot | 26145 |
| eric.ed.gov | 25102 |
| reliefweb.int | 24877 |
| clinicaltrials.gov | 24611 |
| sa.gov.au | 20603 |
| chroniclingamerica.loc.gov | 20242 |
| www.army.mil | 20003 |
| history.state.gov | 19195 |
| cordis.europa.eu | 18856 |
| nal.usda.gov | 17032 |
| www.wipo.int | 17021 |
| www.mass.gov | 14921 |
| www.fda.gov | 14853 |
| ukurier.gov.ua | 14808 |
| founders.archives.gov | 14266 |
| act.gov.au | 13822 |
| mn.gov | 13767 |
| www.sec.gov | 13501 |
| bugzilla.mozilla.org | 13004 |
| fhwa.dot.gov | 12738 |
| www.gao.gov | 12690 |
| djvu.wikisource.org | 11954 |
| leg.wa.gov | 11766 |
| www.state.gov | 11468 |
| fs.usda.gov | 11383 |
| aph.gov.au | 11151 |
| apps.dtic.mil | 11097 |
| mail.python.org | 10554 |
| gov.bc.ca | 10514 |
| usace.army.mil | 9973 |
| www.congress.gov | 9882 |
| 2009-2017.state.gov | 9581 |
| military-history.fandom.com | 9313 |
| www.nysenate.gov | 9306 |
| www.epa.gov | 9001 |
| abs.gov.au | 8824 |
| tas.gov.au | 8784 |
| m.wikibooks.org | 8736 |
| gov.on.ca | 8696 |
| gsfc.nasa.gov | 8586 |
| www.fws.gov | 8386 |
| www.ntsb.gov | 8130 |
| blog.gov.uk | 8091 |
| legis.wisconsin.gov | 8070 |
| www.nasa.gov | 8067 |
| cfpub.epa.gov | 7943 |
| www.loc.gov | 7742 |
| www.usgs.gov | 7688 |
| www.clinicaltrials.gov | 7517 |
| natlib.govt.nz | 7465 |
| www.michigan.gov | 7395 |
| ato.gov.au | 7279 |
| sp.gov.br | 7208 |
| www.nist.gov | 7173 |
| obamawhitehouse.archives.gov | 7170 |
| www.nyc.gov | 7111 |
| justice.gc.ca | 7086 |
| service.gov.uk | 7085 |
| nationalarchives.gov.uk | 7082 |
| www.sbir.gov | 7012 |
| www.akleg.gov | 6969 |
| www.defense.gov | 6941 |
| nt.gov.au | 6878 |
| m.wikiquote.org | 6869 |
| niehs.nih.gov | 6867 |
| revisor.mn.gov | 6800 |
| www.dol.gov | 6632 |
| gouv.qc.ca | 6545 |
| statcan.gc.ca | 6509 |
| wwwnc.cdc.gov | 6389 |
| ons.gov.uk | 6301 |
| legislation.gov.uk | 6207 |
| research.va.gov | 6198 |
| eurofound.europa.eu | 6034 |
| portal.ct.gov | 5909 |
| nla.gov.au | 5905 |
| codes.ohio.gov | 5807 |
| www.energy.gov | 5805 |
| oai.dtic.mil | 5757 |
| georgewbush-whitehouse.archives.gov | 5674 |
| health.gov.au | 5554 |
| dec.ny.gov | 5448 |
| www.ftc.gov | 5404 |
| forecast.weather.gov | 5398 |
| aspe.hhs.gov | 5358 |
Table 3. Overlap with common-pile. There is about a .1% overalp with common-pile's cccc subset, which unsuprisingly includes government websites:
| **Domain** | **Count** |
|------------------------------------|-------|
| nsw.gov.au | 67952 |
| qld.gov.au | 35866 |
| abs.gov.au | 8824 |
| addons.mozilla.org | 4045 |
| conicet.gov.ar | 3020 |
| awm.gov.au | 2142 |
| eea.europa.eu | 1966 |
...
### Analysis of data
Notice the compression rate vs the cotnamination rate.
Table 1. Raw sizes of various subsets and their compressed size, and compression ratio.
(This table is not yet complete...)
| Folder | Uncompressed Size | Compressed Size (Sum of files) | Compression Ratio |
| ----------------------- | ----------------- | ------------------------------ | ----------------- |
| **synthetic\_instruct** | 615 GB | 142.16 GB | **4.33×** |
| **software** | 120 GB | 29.49 GB | **4.07×** |
| **wiki** | 215 GB | 55.75 GB | **3.86×** |
| **nemo** | 49 GB | 13.15 GB | **3.73×** |
| **math** | 11 GB | 2.97 GB | **3.70×** |
| **maga** | 33 GB | 9.5 GB | **3.47×** |
| **formatted\_text** | 50 GB | 14.98 GB | **3.34×** |
| **business** | 884 MB | 266 MB | **3.32×** |
| **youtube** | 23 GB | 6.71 GB | **3.43×** |
| **stackexchange** | 94 GB | 32.31 GB | **2.91×** |
| **law** | 82 GB | 28.28 GB | **2.90×** |
| **fineweb** | 88 GB | 30.68 GB | **2.87×** |
| **news** | 1.1 GB | 387 MB | **2.84×** |
Decontaminated following phi-4 like method (13 gram overalp, except in cases where the 13grams are also in train set, wikipedia, public domain books) against:
- Agieval
- ARC
- MBPP
- MBPPPlus
- MMLU
- Gsm8k
- MATH
- ToxiGen
- COPA
- OpenBookQA
- Winogrande
- BoolQ
- HellaSwag
- PIQA
- CommonsenseQA
- Humaneval
- HumanevalPlus
- ALERT
- SimpleQA
- DoNotAnswer
- Ifeval
- LAMBADA
- GPQA
- AIME2024
- AIME2025
- HMMT_Feb_2025
- USAMO
- BRUMO
- MMLU_Redux
- MMLU_Pro
- MATH500
- AdvBench
- MuSR
- BBH
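The 13-gram rule described above can be sketched as follows; the function names and exact-set representation are illustrative assumptions (production pipelines typically hash the n-grams and tokenize more carefully):

```python
def ngrams(tokens, n=13):
    """Return the set of successive n-grams from a token list."""
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def is_contaminated(doc_tokens, benchmark_ngrams, allowlist_ngrams, n=13):
    """Flag a document if any of its n-grams appears in a benchmark,
    unless that n-gram is also common text (train set / Wikipedia /
    public-domain books), per the exception described above."""
    for gram in ngrams(doc_tokens, n):
        if gram in benchmark_ngrams and gram not in allowlist_ngrams:
            return True
    return False
```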
Removed contaminated data in bytes by file:
| File Name | Size |
|------------|-------|
| nemo_science_math-1_contaminated.jsonl | 322M |
| ring-lite-sft-0_contaminated.jsonl | 198M |
| ring-lite-sft-1_contaminated.jsonl | 196M |
| prism_math_contaminated.jsonl | 156M |
| nemo_science_math-0_contaminated.jsonl | 141M |
| open_thoughts-0_contaminated.jsonl | 135M |
| misc_instruct_contaminated.jsonl | 114M |
| open_thoughts-1_contaminated.jsonl | 90M |
| nemo_science_math-2_contaminated.jsonl | 85M |
| open_thoughts-2_contaminated.jsonl | 73M |
| school_math_contaminated.jsonl | 72M |
| ring-lite-sft-2_contaminated.jsonl | 71M |
| open_thoughts-3_contaminated.jsonl | 67M |
| math_sft-1_contaminated.jsonl | 63M |
| open_thoughts-4_contaminated.jsonl | 59M |
| nemo_science_math-3_contaminated.jsonl | 58M |
| math_sft-0_contaminated.jsonl | 57M |
| math_sft-2_contaminated.jsonl | 54M |
| open_thoughts-5_contaminated.jsonl | 49M |
| math_reasoning_contaminated.jsonl | 48M |
| prism_science_contaminated.jsonl | 46M |
| ring-lite-sft-3_contaminated.jsonl | 45M |
| reasoning_instruct_contaminated.jsonl | 44M |
| nemo_science_math-4_contaminated.jsonl | 43M |
| math_sft-3_contaminated.jsonl | 43M |
| ring-lite-sft-4_contaminated.jsonl | 40M |
| open_thoughts-6_contaminated.jsonl | 38M |
| open_thoughts-7_contaminated.jsonl | 37M |
| nemo_science_math-5_contaminated.jsonl | 36M |
| open_thoughts-8_contaminated.jsonl | 34M |
| ring-lite-sft-5_contaminated.jsonl | 34M |
| open_thoughts-9_contaminated.jsonl | 32M |
| math_sft-4_contaminated.jsonl | 32M |
| nemo_science_math-6_contaminated.jsonl | 31M |
| open_thoughts-10_contaminated.jsonl | 29M |
| ring-lite-sft-6_contaminated.jsonl | 28M |
| open_thoughts-11_contaminated.jsonl | 27M |
| math_sft-5_contaminated.jsonl | 26M |
| open_thoughts-12_contaminated.jsonl | 24M |
| ring-lite-sft-7_contaminated.jsonl | 24M |
| open_thoughts-13_contaminated.jsonl | 22M |
| open_thoughts-14_contaminated.jsonl | 21M |
| nemo_science_math-7_contaminated.jsonl | 20M |
| math_sft-6_contaminated.jsonl | 20M |
| open_thoughts-15_contaminated.jsonl | 18M |
| open_thoughts-16_contaminated.jsonl | 17M |
| ring-lite-sft-8_contaminated.jsonl | 17M |
| open_thoughts-17_contaminated.jsonl | 16M |
| math_sft-7_contaminated.jsonl | 15M |
| nemo_science_math-8_contaminated.jsonl | 15M |
| open_thoughts-18_contaminated.jsonl | 14M |
| ring-lite-sft-9_contaminated.jsonl | 14M |
| open_thoughts-19_contaminated.jsonl | 13M |
| math_sft-8_contaminated.jsonl | 13M |
| open_thoughts-20_contaminated.jsonl | 12M |
| nemo_science_math-9_contaminated.jsonl | 11M |
| open_thoughts-21_contaminated.jsonl | 10M |
| math_sft-9_contaminated.jsonl | 10M |
| open_thoughts-22_contaminated.jsonl | 9M |
| ring-lite-sft-10_contaminated.jsonl | 9M |
| open_thoughts-23_contaminated.jsonl | 8M |
| math_sft-10_contaminated.jsonl | 8M |
| nemo_science_math-10_contaminated.jsonl | 7M |
| open_thoughts-24_contaminated.jsonl | 7M |
| ring-lite-sft-11_contaminated.jsonl | 7M |
| open_thoughts-25_contaminated.jsonl | 6M |
| math_sft-11_contaminated.jsonl | 6M |
| open_thoughts-26_contaminated.jsonl | 6M |
| nemo_science_math-11_contaminated.jsonl | 6M |
| open_thoughts-27_contaminated.jsonl | 5M |
| ring-lite-sft-12_contaminated.jsonl | 5M |
| math_sft-12_contaminated.jsonl | 5M |
| open_thoughts-28_contaminated.jsonl | 4M |
| nemo_science_math-12_contaminated.jsonl | 4M |
| open_thoughts-29_contaminated.jsonl | 3M |
| ring-lite-sft-13_contaminated.jsonl | 3M |
| open_thoughts-30_contaminated.jsonl | 3M |
| math_sft-13_contaminated.jsonl | 3M |
| open_thoughts-31_contaminated.jsonl | 2M |
| nemo_science_math-13_contaminated.jsonl | 2M |
| open_thoughts-32_contaminated.jsonl | 2M |
| ring-lite-sft-14_contaminated.jsonl | 2M |
| math_sft-14_contaminated.jsonl | 2M |
| open_thoughts-33_contaminated.jsonl | 1M |
| open_thoughts-34_contaminated.jsonl | 1M |
| nemo_science_math-14_contaminated.jsonl | 1M |
| open_thoughts-35_contaminated.jsonl | 1M |
| ring-lite-sft-15_contaminated.jsonl | 1M |
| math_sft-15_contaminated.jsonl | 1M |
| 439
| 0
|
[
"license:odc-by",
"size_categories:100K<n<1M",
"modality:text",
"region:us"
] |
2025-09-26T23:35:57+00:00
|
2025-11-12T17:53:07+00:00
| 0
|
nvail23/BlueSnap-Task
|
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"robot_type": "so101_follower",
"codebase_version": "v3.0",
"total_episodes": 50,
"total_frames": 27779,
"total_tasks": 2,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:50"
},
"data_path": "data/chunk-{chunk_index:03d}/file-{file_index:03d}.parquet",
"video_path": "videos/{video_key}/chunk-{chunk_index:03d}/file-{file_index:03d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
6
],
"names": [
"shoulder_pan.pos",
"shoulder_lift.pos",
"elbow_flex.pos",
"wrist_flex.pos",
"wrist_roll.pos",
"gripper.pos"
],
"fps": 30
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": [
"shoulder_pan.pos",
"shoulder_lift.pos",
"elbow_flex.pos",
"wrist_flex.pos",
"wrist_roll.pos",
"gripper.pos"
],
"fps": 30
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null,
"fps": 30
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null,
"fps": 30
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null,
"fps": 30
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null,
"fps": 30
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null,
"fps": 30
},
"observation.images.top": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.wrist": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
}
},
"data_files_size_in_mb": 100,
"video_files_size_in_mb": 500
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
```
|
| 0
| 0
|
[
"task_categories:robotics",
"license:apache-2.0",
"region:us",
"LeRobot"
] |
2025-11-12T17:40:33+00:00
|
2025-11-12T17:52:01+00:00
| 0
|
Kiy-K/pretraining-corpus
|
# 🧠 Kiy-K Synthetic Pretraining Corpus
**Author:** [Khoi K. (@Kiy-K)](https://huggingface.co/Kiy-K)
**License:** Apache 2.0
**Last Updated:** 2025-10-30
---
## 📘 Overview
The **Kiy-K Synthetic Pretraining Corpus** is a large-scale collection of **synthetically generated English text** designed for **language model pretraining and instruction-tuning research**.
All data is **synthetic**, created using open-source large language models such as **GPT-OSS**, **NVIDIA Nemotron**, and **DeepSeek**, under full control of the author.
No real user, copyrighted, or sensitive information is included.
---
## 🧩 Structure
Each record contains:
- `id` — unique identifier
- `text` — generated document text
- `meta` — optional metadata such as domain, length, or generation model
The corpus covers diverse domains including:
- Technology and programming
- Science and education
- General conversation and reasoning
- Instructional and QA-style texts
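As an illustration of the schema above, the optional `meta` field lets you filter records by domain; the record values below are hypothetical:

```python
# Hypothetical records following the id/text/meta schema described above.
records = [
    {"id": "doc-0", "text": "How to reverse a list in Python ...",
     "meta": {"domain": "programming", "model": "GPT-OSS"}},
    {"id": "doc-1", "text": "Photosynthesis converts light energy ...",
     "meta": {"domain": "science", "model": "Nemotron"}},
]

# Select one domain for a targeted pretraining mix; `meta` is optional,
# so guard against records that omit it.
programming = [r for r in records
               if r.get("meta", {}).get("domain") == "programming"]
```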
---
## ⚙️ Intended Uses
- Pretraining of small to medium-scale LLMs
- Instruction-tuning and alignment experiments
- Data efficiency and synthetic pipeline research
Not intended for:
- Real-world decision making
- Sensitive or personal data analysis
---
## 🧮 Dataset Statistics
| Field | Description |
|-------|--------------|
| Records | ~xxx,xxx |
| Avg. length | ~xxx tokens |
| Generation models | GPT-OSS, Nemotron, DeepSeek |
| License | Apache 2.0 |
*(Update the numbers once scaling finishes.)*
---
## 🔖 Citation
If you use this dataset in your research or project, please cite:
```bibtex
@dataset{kiy_k_2025_pretraining_corpus,
author = {Khoi K.},
title = {Kiy-K Synthetic Pretraining Corpus},
year = {2025},
publisher = {Hugging Face},
url = {https://huggingface.co/datasets/Kiy-K/pretraining-corpus}
}
```
## 💼 About the Author
This dataset is part of the Kiy-K Synthetic Data Studio Project — an initiative to provide high-quality, customizable synthetic data for research and commercial use.
👉 Interested in custom synthetic datasets?
Contact me on Hugging Face or open an Issue/Discussion on this repository.
---
## 📜 License
This dataset is licensed under Apache License 2.0, meaning you are free to use, modify, and distribute it — with proper attribution.
---
|
| 3,655
| 2
|
[
"task_categories:text-generation",
"language:en",
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"synthetic",
"ai",
"nlp",
"pretraining",
"dataset",
"text",
"open-source"
] |
2025-10-30T05:31:56+00:00
|
2025-11-12T17:50:02+00:00
| 0
|
Pendrokar/TTS_Arena
|
[TTS Arena's](https://huggingface.co/spaces/Pendrokar/TTS-Spaces-Arena) DB is an _SQLite_ DB file. The above is just a summary query that should be useful for TTS developers in evaluating faults of their models.
## Why no audio samples?
Unsafe: the output of uncontrolled HuggingFace Spaces cannot be constantly overseen. While uploads could be safeguarded by running an ASR model first, something unwanted may still slip through.
## Useful queries for TTS developers and evaluators
### All votes mentioning specified TTS model:
```sql
SELECT
spokentext, lang, chosen, rejected, count(spokentext) AS times, MAX(vl.timestamp) AS lastvote
FROM "main"."spokentext"
INNER JOIN votelog vl ON votelog_id = vl.id
WHERE
vl.chosen = "Pendrokar/xVASynth-TTS"
OR vl.rejected = "Pendrokar/xVASynth-TTS"
GROUP BY spokentext, chosen, rejected
ORDER BY times DESC, spokentext ASC
LIMIT 0, 49999;
```
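Since the DB is a plain SQLite file, the query above can be run from Python's built-in `sqlite3` module. The table columns below are inferred from the queries on this card (not an official schema), and the sample rows are made up:

```python
import sqlite3

# Minimal in-memory replica of the two tables the query above joins;
# column set inferred from this card's SQL, sample data invented.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE votelog (id INTEGER PRIMARY KEY, chosen TEXT, rejected TEXT, timestamp TEXT);
CREATE TABLE spokentext (votelog_id INTEGER, spokentext TEXT, lang TEXT);
INSERT INTO votelog VALUES (1, 'Pendrokar/xVASynth-TTS', 'other/model', '2024-10-01');
INSERT INTO spokentext VALUES (1, 'Hello world.', 'en');
""")
rows = con.execute("""
SELECT spokentext, lang, chosen, rejected, COUNT(spokentext) AS times
FROM spokentext
INNER JOIN votelog vl ON votelog_id = vl.id
WHERE vl.chosen = 'Pendrokar/xVASynth-TTS' OR vl.rejected = 'Pendrokar/xVASynth-TTS'
GROUP BY spokentext, chosen, rejected
ORDER BY times DESC, spokentext ASC
""").fetchall()
```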
### All rejections of specified TTS model against another:
```sql
SELECT
spokentext, lang, chosen, rejected, count(spokentext) AS times, MAX(vl.timestamp) AS lastvote
FROM "main"."spokentext"
INNER JOIN votelog vl ON votelog_id = vl.id AND vl.rejected = "Pendrokar/xVASynth-TTS"
GROUP BY spokentext, chosen
ORDER BY spokentext ASC
LIMIT 0, 49999;
```
### All rejections of a TTS model against another:
**The one used in the dataset viewer.** Note that the `chosen` column may include models that the `rejected` model beat more times. That is also why `votes` may sometimes be even lower than the number of distinct chosen models.
```sql
SELECT
st.spokentext,
vl.rejected,
COUNT(vl.rejected) - COALESCE(chosen_counts.chosen_count, 0) AS votes,
(COUNT(DISTINCT vl.chosen) || ' ' || GROUP_CONCAT(DISTINCT ' ' || vl.chosen)) AS chosen,
MAX(vl.timestamp) AS lastvote
FROM
votelog vl
JOIN
spokentext st ON vl.id = st.votelog_id
LEFT JOIN (
SELECT
st_inner.spokentext,
vl_inner.chosen,
COUNT(vl_inner.chosen) AS chosen_count
FROM
votelog vl_inner
JOIN
spokentext st_inner ON vl_inner.id = st_inner.votelog_id
GROUP BY
st_inner.spokentext,
vl_inner.chosen
ORDER BY
chosen_count DESC
) AS chosen_counts ON st.spokentext = chosen_counts.spokentext AND vl.rejected = chosen_counts.chosen
GROUP BY
st.spokentext,
vl.rejected
HAVING
votes > 0
AND lastvote BETWEEN datetime('now', '-1 month') AND datetime('now', 'localtime')
ORDER BY
((votes * COUNT(DISTINCT vl.chosen)) / 2) DESC,
COUNT(DISTINCT vl.chosen) DESC,
st.spokentext ASC;
```
If you use this data in your publication, please cite us!
Copy the BibTeX citation to cite this source:
```bibtex
@misc{tts-arena,
title = {Text to Speech Arena - Pendrokar's HF Spaces Fork},
author = {mrfakename and Srivastav, Vaibhav and Fourrier, Clémentine and Pouget, Lucain and Lacombe, Yoach and main and Gandhi, Sanchit},
year = 2024,
publisher = {Hugging Face},
howpublished = "\\url{https://huggingface.co/spaces/TTS-AGI/TTS-Arena}"
}
```
|
| 4,448
| 6
|
[
"language:en",
"size_categories:1K<n<10K",
"format:csv",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"arena"
] |
2024-10-11T16:52:25+00:00
|
2025-11-12T17:49:16+00:00
| 0
|
electricsheepafrica/livestock-health-disease-ssa-synthetic
|
# Dataset Card: Livestock Health & Disease Surveillance (Synthetic Data)
## Dataset Summary
This synthetic dataset represents **1,000,000 African smallholder households** with livestock systems, capturing livestock health, disease surveillance, veterinary access, and herd management practices across Sub-Saharan Africa. It combines baseline farm characteristics (Dataset 1) with 15 livestock-specific variables to create a comprehensive picture of livestock production systems and animal health challenges.
**Key Features:**
- **1M households** across 5 agro-ecological zones
- **27 variables** (12 base farm + 15 livestock health)
- **African-specific** livestock systems and diseases
- **Literature-grounded** distributions (50+ peer-reviewed sources)
- **Conditional dependencies** modeling real-world relationships
- **Realistic missing data** patterns
## Variables
### Base Farm Characteristics (Dataset 1 - 12 variables)
1. **agro_ecological_zone**: Arid, semi-arid, sub-humid, humid, highland
2. **region_type**: Urban, peri-urban, rural accessible, rural remote
3. **farm_size_ha**: Farm size in hectares
4. **soil_quality_index**: Soil quality (0-100 scale)
5. **rainfall_mm_annual**: Annual rainfall (mm)
6. **household_size**: Number of household members
7. **market_distance_km**: Distance to nearest market
8. **livestock_tlu**: Tropical Livestock Units owned
9. **extension_access**: Access to agricultural extension (yes/no)
10. **fertilizer_use_kg_ha**: Fertilizer application rate
11. **rainfall_mm_season**: Seasonal rainfall (mm)
12. **maize_yield_kg_ha**: Maize yield (kg/ha)
### Livestock Health & Production (NEW - 15 variables)
#### Herd Composition
13. **herd_size_cattle**: Number of cattle owned (0-50+)
14. **herd_size_small_ruminants**: Sheep and goats owned (0-100+)
15. **poultry_count**: Chickens, ducks, etc. (0-200+)
#### Veterinary Services & Access
16. **vet_distance_km**: Distance to nearest veterinary service (1-200 km)
17. **vaccination_coverage_pct**: % of herd vaccinated (0-100%)
18. **vet_visit_annual**: Had veterinary visit in past year (yes/no)
#### Disease & Health
19. **disease_incidence_annual**: Reported disease in past year (yes/no)
20. **disease_type**: Type of disease (FMD, ECF, CBPP, trypanosomiasis, PPR, Newcastle, respiratory, diarrhea, other)
21. **mortality_rate_annual_pct**: Annual livestock mortality rate (%)
22. **pasture_quality_index**: Pasture/rangeland quality (0-100 scale)
#### Management Systems
23. **grazing_system**: Type of grazing (communal, private, mixed, zero-grazing)
24. **water_source_reliability**: Water availability (year-round, seasonal, unreliable)
25. **treatment_access**: Type of treatment accessed (none, traditional, veterinary, both)
26. **feed_supplementation**: Provides supplementary feed (yes/no)
27. **livestock_dependency_index**: Household dependence on livestock (0-100 scale)
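Since `livestock_tlu` aggregates the three herd-composition variables, it can be approximated with standard Tropical Livestock Unit conversion factors. The factors below are the commonly cited FAO-style values (cattle 0.7, sheep/goats 0.1, poultry 0.01), not necessarily the exact factors used to generate this dataset:

```python
# Commonly cited FAO-style TLU conversion factors; treat these as
# illustrative, not the dataset's exact generation parameters.
TLU = {"cattle": 0.7, "small_ruminant": 0.1, "poultry": 0.01}

def household_tlu(cattle, small_ruminants, poultry):
    """Approximate a household's Tropical Livestock Units from herd counts."""
    return (cattle * TLU["cattle"]
            + small_ruminants * TLU["small_ruminant"]
            + poultry * TLU["poultry"])
```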
## Dataset Statistics
### Livestock Ownership
- **43.4%** of households own cattle
- **62.9%** own small ruminants (sheep/goats)
- **67.5%** keep poultry
- Mean cattle herd size: ~5 animals (among owners)
- Mean small ruminant herd: ~12 animals (among owners)
- Mean poultry flock: ~8 birds (among keepers)
### Disease Burden
- **32.7%** reported disease incidence in past year
- Most common diseases:
- Newcastle disease (poultry): 20%
- FMD (Foot & Mouth): 18%
- PPR (Peste des Petits Ruminants): 15%
- ECF (East Coast Fever): 12%
- Trypanosomiasis: 10%
### Veterinary Access
- **40.3%** had veterinary contact in past year
- Mean distance to vet services: **58.9 km**
- **20%** vaccination coverage (median)
- Treatment types:
- 35% no treatment
- 45% traditional remedies only
- 15% veterinary treatment
- 5% both traditional and veterinary
### Management Practices
- **50%** use communal grazing systems
- **25%** private grazing
- **20%** mixed systems
- **5%** zero-grazing (intensive)
- **30%** provide feed supplementation
- **40%** have year-round water access
- **35%** seasonal water only
- **25%** unreliable water
## Uses
### Permitted Uses
- **Livestock policy analysis**: Model impacts of disease control programs
- **Veterinary service planning**: Optimize clinic placement and mobile vet routes
- **Disease surveillance system design**: Test outbreak detection algorithms
- **Animal health research**: Train ML models for disease prediction
- **One Health initiatives**: Link livestock-human health systems
- **Extension service planning**: Target interventions by livestock system type
- **Educational purposes**: Teaching livestock epidemiology and policy
- **Climate adaptation**: Model livestock system resilience
- **Value chain analysis**: Link livestock production to markets
- **Research method development**: Test statistical techniques
### Prohibited Uses
- **Not for replacement of real data collection**: Cannot substitute for actual field surveys
- **Not for country-specific policy**: Too generalized for single-country decisions
- **Not for real-time disease outbreak response**: Not actual surveillance data
- **Not for individual farmer targeting**: Synthetic households are not real
- **Not for precise cost-benefit analysis**: Use for methodological prototypes only
## Dataset Creation
### Why This Dataset Exists
Real livestock health data in Sub-Saharan Africa faces critical gaps:
1. **Surveillance gaps**: Most countries lack systematic disease surveillance
2. **Underreporting**: Livestock diseases often go unreported (especially in remote areas)
3. **Fragmented data**: Information scattered across vet clinics, ministries, NGOs
4. **Access restrictions**: Sensitive disease data rarely shared publicly
5. **High collection costs**: Surveys expensive and logistically challenging
6. **Privacy concerns**: Household-level data cannot be openly published
**This synthetic dataset enables:**
- Algorithm development without waiting for data access
- Training of researchers and students
- International collaboration without data sharing barriers
- Rapid prototyping of livestock information systems
- Evidence generation for funding proposals
### Creation Methodology
**Rigorous 4-stage process** following synthetic data best practices:
#### Stage 1: Literature Review (50+ sources)
- Systematic review of livestock systems in SSA
- Disease prevalence studies (FMD, ECF, trypanosomiasis, PPR, Newcastle)
- Veterinary service coverage assessments
- Management practice surveys
- Mortality and productivity benchmarks
#### Stage 2: Parameter Specification (15 files, 60-150 lines each)
- Conditional probability distributions by zone, region, herd size
- Functional relationships (e.g., vet distance → vaccination rates)
- Species-specific disease patterns
- Management system typologies
- Full provenance tracking
#### Stage 3: Conditional Data Generation
- Base variables from Dataset 1 (smallholder farms)
- Sequential generation respecting dependencies
- Zero-inflated distributions for herd sizes
- Categorical conditioning for disease types
- Realistic missing data (MCAR: 1-10%)
#### Stage 4: Validation
- Cross-variable consistency checks
- Literature benchmark comparisons
- Logical constraint verification
- Distribution shape validation
## Limitations and Biases
### Known Limitations
1. **Oversimplified disease dynamics**: Real disease spread is more complex than modeled
2. **Static snapshot**: No temporal dynamics (outbreaks, seasonality within year)
3. **No spatial clustering**: Real diseases show geographic clustering not captured
4. **Coarse zones**: 5 AEZ categories don't capture local variation
5. **Missing variables**: No breed info, no herd demographics, no animal-level data
6. **Treatment outcomes**: No data on treatment success/failure
7. **No cost data**: Disease impacts measured only in mortality, not economics
8. **Simplified grazing**: Complex pastoral mobility patterns simplified
9. **Binary disease incidence**: Real incidence is more granular (multiple episodes)
### Potential Biases
1. **Literature bias**: Sources mostly from East Africa (Kenya, Tanzania, Ethiopia)
2. **Veterinary access**: May overestimate coverage in very remote pastoral areas
3. **Disease reporting**: Literature likely underrepresents mild/unreported diseases
4. **Poultry systems**: Village chickens well-represented, commercial systems underrepresented
5. **Traditional knowledge**: Traditional treatment effectiveness may be under-captured
6. **Gender**: No gender disaggregation of livestock ownership/management
7. **Wealth gradient**: Livestock wealth distribution may be too uniform
8. **Conflict zones**: Data may not reflect pastoralist areas affected by conflict
### What This Dataset Is NOT
- ❌ **Not real surveillance data**: Do not use for actual disease outbreak decisions
- ❌ **Not predictive**: Cannot predict real disease occurrence
- ❌ **Not country-specific**: Generalized SSA patterns, not any single country
- ❌ **Not longitudinal**: Single time point, no panel structure
- ❌ **Not spatially explicit**: No GPS coordinates, no spatial autocorrelation
## Technical Specifications
### File Formats
- **CSV**: `livestock_data.csv` (315 MB, 1M rows)
- **Parquet**: `livestock_data.parquet` (111 MB, compressed)
- **Metadata**: `metadata.json` (generation parameters, sources)
- **Data Dictionary**: `data_dictionary.csv` (variable descriptions)
### Missing Data
Realistic missing data rates by variable:
- Herd sizes: 2%
- Vet distance: 4%
- Vaccination coverage: 5%
- Disease incidence: 3%
- Pasture quality: 6%
- Mortality rate: 3%
- Disease type: 10% (conditional on disease occurrence)
- Management variables: 3-4%
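The per-variable rates above can be verified directly with pandas; the toy frame below stands in for the full `livestock_data` table:

```python
import numpy as np
import pandas as pd

# Toy frame standing in for livestock_data; NaN marks missing values.
df = pd.DataFrame({
    "herd_size_cattle": [4, np.nan, 7, 2],
    "vet_distance_km": [35.2, 80.0, np.nan, np.nan],
})

# Fraction of missing cells per column, expressed as a percentage.
missing_pct = df.isna().mean() * 100
```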
### Data Quality Indicators
- ✅ All constraints validated (no impossible values)
- ✅ Conditional dependencies respected
- ✅ Literature benchmarks matched (±10%)
- ✅ Cross-variable correlations logical
- ✅ Missing data patterns realistic
## Ethical Considerations
### Privacy
- **No real households**: All data fully synthetic, cannot identify real people/places
- **No GPS coordinates**: No geographic identifiers that could reveal locations
- **Aggregated patterns only**: Individual records are fictional
### Representation
- **Pan-African focus**: Captures diversity across SSA, not dominated by single region
- **Pastoral systems included**: Arid/semi-arid zones well-represented
- **Smallholder-centric**: Large commercial farms not included
- **Traditional knowledge**: Ethnoveterinary practices acknowledged
### Responsible Use
Users should:
- ✅ Clearly label outputs as based on synthetic data
- ✅ Validate methods on real data before deployment
- ✅ Not overstate generalizability of findings
- ✅ Cite real data sources when transitioning to applications
- ✅ Engage local stakeholders when designing interventions
## Citation Information
If you use this dataset, please cite:
```bibtex
@dataset{livestock_health_synthetic_2024,
author = {Electric Sheep Africa},
title = {Livestock Health and Disease Surveillance Synthetic Dataset for Sub-Saharan Africa},
year = {2024},
publisher = {HuggingFace},
url = {https://huggingface.co/datasets/electricsheepafrica/livestock-health-disease-ssa-synthetic}
}
```
### Key Literature Sources
This dataset synthesizes information from 50+ sources, including:
- **Perry & Grace (2009)**: Economic impacts of animal diseases (Journal of Agricultural Economics)
- **Cleaveland et al. (2001)**: Diseases of humans and domestic mammals (Phil Trans Royal Society B)
- **Leonard et al. (2017)**: Veterinary service delivery in developing countries (Rev. sci. tech. Off. int. Epiz)
- **Robinson et al. (2011)**: Global livestock production systems (FAO/ILRI)
- **AU-IBAR (2013)**: Veterinary services delivery in Africa (African Union)
- **McCorkle (1995)**: Ethnoveterinary R&D (Agriculture and Human Values)
- **Herrero et al. (2013)**: Biomass use in global livestock systems (PNAS)
- **Reid et al. (2014)**: Pastoral land development models (Ecology and Society)
Full bibliography available in parameter files (`parameters_livestock/` directory).
## Dataset Structure
### Variable Types
- **Categorical** (9 variables): Zones, disease types, systems
- **Continuous** (14 variables): Herd sizes, distances, indices, rates
- **Binary** (4 variables): Access, incidence, supplementation
### Sample Record
```csv
agro_ecological_zone,region_type,herd_size_cattle,disease_incidence_annual,vet_distance_km,...
semi_arid,rural_accessible,4,yes,35.2,...
```
## Updates and Versioning
- **Version**: 1.0
- **Release Date**: November 2024
- **Status**: Stable
- **Planned Updates**: None currently planned
## Contact
**Creator**: Electric Sheep Africa
**Repository**: [GitHub](https://github.com/electricsheepafrica/agriculture-synthetic-data)
**Issues**: Report via GitHub Issues
## License
**CC BY 4.0** (Creative Commons Attribution 4.0 International)
You are free to:
- ✅ Share and redistribute
- ✅ Adapt and build upon
- ✅ Use commercially
Under the condition that you:
- ✅ Give appropriate credit
- ✅ Indicate if changes were made
- ✅ Do not misrepresent as real surveillance data
---
## How to Load
```python
from datasets import load_dataset
# Load full dataset
dataset = load_dataset("electricsheepafrica/livestock-health-disease-ssa-synthetic")
# Load as pandas DataFrame
import pandas as pd
df = dataset['train'].to_pandas()
# Or load Parquet directly
df = pd.read_parquet("livestock_data.parquet")
```
## Example Use Cases
### 1. Disease Risk Prediction
```python
# Train ML model to predict disease incidence
X = df[['herd_size_cattle', 'vet_distance_km', 'vaccination_coverage_pct',
'agro_ecological_zone', 'pasture_quality_index']]
y = df['disease_incidence_annual']
```
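A fuller hedged sketch of this use case, assuming scikit-learn and pandas (the tiny frame below is a hypothetical stand-in for the real download, using the column names from the data dictionary):

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

# Toy stand-in frame with the card's column names (replace with the real dataset)
df = pd.DataFrame({
    'herd_size_cattle': [4, 0, 12, 7, 2, 20],
    'vet_distance_km': [35.2, 80.0, 12.5, 60.1, 95.3, 8.0],
    'vaccination_coverage_pct': [20, 0, 75, 10, 5, 90],
    'agro_ecological_zone': ['semi_arid', 'arid', 'humid', 'semi_arid', 'arid', 'highland'],
    'pasture_quality_index': [55, 30, 80, 45, 25, 85],
    'disease_incidence_annual': ['yes', 'no', 'no', 'yes', 'yes', 'no'],
})

features = ['herd_size_cattle', 'vet_distance_km', 'vaccination_coverage_pct',
            'agro_ecological_zone', 'pasture_quality_index']
X, y = df[features], df['disease_incidence_annual']

# One-hot encode the categorical zone; pass the numeric columns through unchanged
model = Pipeline([
    ('prep', ColumnTransformer(
        [('zone', OneHotEncoder(handle_unknown='ignore'), ['agro_ecological_zone'])],
        remainder='passthrough')),
    ('clf', RandomForestClassifier(n_estimators=50, random_state=0)),
])
model.fit(X, y)
print(model.predict(X[:2]))
```

Note the categorical `agro_ecological_zone` column must be encoded before fitting; the `ColumnTransformer` handles that inside the pipeline.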
### 2. Vet Clinic Placement Optimization
```python
# Find underserved areas
underserved = df[(df['vet_distance_km'] > 60) & (df['livestock_tlu'] > 5)]
```
### 3. Vaccination Campaign Targeting
```python
# Identify high-risk, low-coverage households
targets = df[(df['vaccination_coverage_pct'] < 20) &
(df['disease_incidence_annual'] == 'yes')]
```
---
**Dataset 2 of 5** in the African Agriculture & Food Security Synthetic Data Portfolio
|
| 0
| 0
|
[
"task_categories:tabular-regression",
"task_categories:tabular-classification",
"language:en",
"license:cc-by-4.0",
"size_categories:1M<n<10M",
"region:us",
"agriculture",
"livestock",
"africa",
"synthetic-data",
"food-security",
"veterinary",
"disease-surveillance",
"smallholder-farming"
] |
2025-11-12T17:34:30+00:00
|
2025-11-12T17:47:57+00:00
| 0
|
TheFactoryX/edition_0345_SWE-Gym-SWE-Gym-readymade
|
# edition_0345_SWE-Gym-SWE-Gym-readymade
**A Readymade by TheFactoryX**
## Original Dataset
[SWE-Gym/SWE-Gym](https://huggingface.co/datasets/SWE-Gym/SWE-Gym)
## Process
This dataset is a "readymade" - inspired by Marcel Duchamp's concept of taking everyday objects and recontextualizing them as art.
**What we did:**
1. Selected the original dataset from Hugging Face
2. Shuffled each column independently
3. Destroyed all row-wise relationships
4. Preserved structure, removed meaning
**The result:**
Same data. Wrong order. New meaning. No meaning.
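The steps above can be sketched in a few lines, assuming the table is held as a pandas DataFrame (the function name and toy frame are illustrative, not the project's actual code):

```python
import numpy as np
import pandas as pd

def make_readymade(df: pd.DataFrame, seed: int = 0) -> pd.DataFrame:
    """Shuffle each column independently, destroying row-wise relationships
    while preserving every column's multiset of values and its dtype."""
    rng = np.random.default_rng(seed)
    return pd.DataFrame(
        {col: df[col].to_numpy()[rng.permutation(len(df))] for col in df.columns}
    )

original = pd.DataFrame({'repo': ['a', 'b', 'c'], 'stars': [1, 2, 3]})
shuffled = make_readymade(original)
# Each column keeps its values; the pairing between columns is gone
```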
## Purpose
This is art. This is not useful. This is the point.
Column relationships have been completely destroyed. The data maintains its types and values, but all semantic meaning has been removed.
---
Part of the [Readymades](https://github.com/TheFactoryX/readymades) project by [TheFactoryX](https://github.com/TheFactoryX).
> _"I am a machine."_ — Andy Warhol
|
| 0
| 0
|
[
"license:other",
"region:us",
"readymades",
"art",
"shuffled",
"duchamp"
] |
2025-11-12T17:47:45+00:00
|
2025-11-12T17:47:47+00:00
| 0
|
StannumX/ae0815
|
Hong Kong A&E Waiting Time (香港急症室等候時間)
Visualization: https://huggingface.co/spaces/StannumX/AE_Time
- `hospCode` = hospital name
- `hospTimeEn` = timestamp
- `topWait` = waiting time
|
| 5,336
| 0
|
[
"license:mit",
"size_categories:100K<n<1M",
"format:csv",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] |
2025-08-15T05:34:24+00:00
|
2025-11-12T17:47:15+00:00
| 0
|
hf-doc-build/doc-build-dev
|
This dataset contains the docs built from all the PRs that update any of the docs at https://huggingface.co/docs.
It is automatically updated by this [github action](https://github.com/huggingface/doc-builder/blob/main/.github/workflows/build_pr_documentation.yml) from the [doc-builder](https://github.com/huggingface/doc-builder) repo.
|
| 277,972
| 6
|
[
"license:mit",
"region:us",
"documentation"
] |
2022-11-08T09:03:37+00:00
|
2025-11-12T17:47:04+00:00
| 0
|
ems123/Water-Potability-Classification-Project
|
# 💧 Water Potability Prediction: Classification Model Analysis
**Author:** [Emilie Levenbach]
**Date:** [12-11-2025]
**Project Goal:** To classify water samples as either potable (safe to drink) or non-potable based on their chemical properties.
---
## 1. Dataset Selection & Preparation (Part 1)
### Chosen Dataset: Water Potability (Source: Kaggle)
| Criterion | Status |
| :--- | :--- |
| Size | 3,276 rows, 10 features |
| Type | Mostly numerical |
| Target Variable | **`Potability`** (Binary: 1=Potable, 0=Not Potable) |
| ML Task | **Classification** |
### Data Cleaning Decisions
1. **Missing Values:** Missing values were found in `ph`, `Sulfate`, and `Trihalomethanes`. We used **mean imputation** to fill these gaps.
2. **Duplicates:** Duplicate rows were checked for and **removed**.
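A minimal pandas sketch of these two cleaning steps, assuming the column names listed above (the toy frame is a hypothetical stand-in for the Kaggle CSV):

```python
import numpy as np
import pandas as pd

# Toy frame mimicking the affected columns (stand-in for the real CSV)
df = pd.DataFrame({
    'ph': [7.0, np.nan, 6.5, 7.0],
    'Sulfate': [330.0, 310.0, np.nan, 330.0],
    'Trihalomethanes': [66.0, np.nan, 71.0, 66.0],
    'Potability': [1, 0, 0, 1],
})

# 1. Mean imputation for the three columns with missing values
for col in ['ph', 'Sulfate', 'Trihalomethanes']:
    df[col] = df[col].fillna(df[col].mean())

# 2. Drop duplicate rows
df = df.drop_duplicates().reset_index(drop=True)
```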
---
## 2. Exploratory Data Analysis (EDA) & Research (Part 2)
### A. Core Insights (Visual Research)
#### Research Question 1: What is the distribution of Potable vs. Non-Potable water in the dataset?
[**Image of Potability Status Distribution Plot**]
**Insight:** The count plot clearly demonstrates significant **class imbalance**. The Non-Potable class (0) heavily outweighs the Potable class (1). This imbalance is critical as it biases the model towards predicting the majority class.
#### Research Question 2: Are water samples with high Hardness more likely to be Potable or Non-Potable?
[**Image of Hardness Distribution by Potability Status Box Plot**]
**Insight:** The box plots show that the median and IQR of the `Hardness` feature are **almost identical** for both potable and non-potable groups. This indicates that water hardness alone is **not an effective feature** for differentiating safe drinking water from unsafe water.
#### Research Question 3: Does the level of Trihalomethanes show any difference between Potable and Non-Potable water?
[**Image of Trihalomethanes Distribution by Potability Status KDE Plot**]
**Insight:** The density plot (KDE) shows that the distributions for both classes **overlap heavily**. This confirms that `Trihalomethanes` is **not an effective feature when used in isolation** to predict potability.
### B. Outlier Handling Decision
* **Decision:** **Outliers were kept** in the dataset.
* **Justification:** Outliers often represent rare but **real events** (e.g., pollution spikes) that are valuable for training a robust model.
---
## 3. Modeling and Evaluation (Part 3)
### A. Model Selection & Training
* **Model:** Random Forest Classifier
* **Preprocessing:** Data was scaled using `StandardScaler` after the train-test split.
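A minimal sketch of this split-then-scale order, assuming scikit-learn (the random feature matrix is a stand-in for the real nine chemical features); fitting the scaler on the training fold only avoids leaking test-set statistics:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
X = rng.normal(size=(100, 9))      # stand-in for the 9 chemical features
y = rng.integers(0, 2, size=100)   # stand-in for the Potability target

# Split first (20% held-out test set), then scale: the scaler is fit
# only on the training fold, never on the test data
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y)
scaler = StandardScaler().fit(X_train)
X_train_s, X_test_s = scaler.transform(X_train), scaler.transform(X_test)

clf = RandomForestClassifier(random_state=42).fit(X_train_s, y_train)
acc = clf.score(X_test_s, y_test)
```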
### B. Evaluation Results
The model was tested on the held-out test set (20%).
| Metric | Score |
| :--- | :--- |
| **Accuracy Score** | **0.6784** |
**Classification Report:**
| Class | Precision | Recall | F1-Score | Support |
| ---------------- | --------- | ------ | -------- | ------- |
| 0 | 0.70 | 0.86 | 0.77 | 412 |
| 1 | 0.61 | 0.38 | 0.47 | 244 |
| **Accuracy** | | | **0.68** | **656** |
| **Macro Avg** | 0.65 | 0.62 | 0.62 | 656 |
| **Weighted Avg** | 0.67 | 0.68 | 0.66 | 656 |
### C. Feature Importance
Feature importance scores rank the features by how much the model relied on them for prediction (e.g., Sulfate: 0.1257, pH: 0.1243).
## 4. Conclusion & Next Steps (Part 4)
The overall accuracy was 67.84%, but there is a critical issue: poor performance on the minority class (Potable water, Class 1), evidenced by a low Recall of 0.38. This stems from the severe class imbalance; the most influential features were Sulfate, pH, and Hardness.
Screen recording: https://www.loom.com/share/d85e8dd15837430eb726ad0852451773
|
| 0
| 0
|
[
"region:us"
] |
2025-11-12T16:58:51+00:00
|
2025-11-12T17:47:34+00:00
| 0
|
Milad96/Kluyveromyces-marxianus
|
# 🧬 Kluyveromyces marxianus Quantum Dataset v10.0.0
## Overview
Comprehensive multi-omics dataset for *Kluyveromyces marxianus*, collected using a quantum-grade async streaming pipeline and fully integrated with Cell 0's structured directory system.
### Statistics
| Metric | Value |
|--------|-------|
| **Total Collected** | 3,835 |
| **Total Local Saved** | 3,835 |
| **Version** | v10.0.0 |
| **Collection Date** | 2025-11-10 |
### Data Categories & Local Storage
- **Literature**: 1,417 records (local: 1,417)
- **Proteins**: 1,001 records (local: 1,001)
- **PMC Full-Text**: 999 records (local: 999)
- **SRA Sequencing**: 352 records (local: 352)
- **GEO Expression**: 48 records (local: 48)
- **Nucleotide Sequences**: 18 records (local: 18)
### Cell 0 Integration
This dataset **strictly respects** Cell 0's directory structure. Only folders actively used by collectors:
```
km_dataset/
├── genomic/ # Genes, nucleotide sequences
├── protein/ # Protein sequences
├── literature/ # PubMed, PMC articles
├── expression/ # GEO, SRA sequencing data
└── checkpoints/
└── cell1_quantum/ # Collection checkpoints
```
**Note**: Cell 0 also creates `pathway/`, `interaction/`, `structure/`, `repository/` folders, but current collectors don't produce data for these categories yet.
### HuggingFace Organization
Data is organized by phase using `data_dir` to prevent overwrites:
- `cell1_genes` - Gene data
- `cell1_proteins` - Protein sequences
- `cell1_literature` - PubMed articles
- `cell1_pmc` - PMC full-text articles
- `cell1_sequences` - Nucleotide sequences
- `cell1_geo` - GEO expression data
- `cell1_sra` - SRA sequencing data
- `cell1_splits` - Train/validation/test splits
## Usage
### Load All Data
```python
from datasets import load_dataset, concatenate_datasets
# Load all phases
all_data = []
for phase in ['cell1_genes', 'cell1_proteins', 'cell1_literature',
              'cell1_pmc', 'cell1_sequences', 'cell1_geo', 'cell1_sra']:
    try:
        ds = load_dataset("Milad96/Kluyveromyces-marxianus", split='train', data_dir=phase)
        all_data.append(ds)
    except Exception:
        # Skip phases that have no data for this config yet
        pass
combined = concatenate_datasets(all_data)
```
### Load Specific Phase
```python
# Load only genes
genes = load_dataset("Milad96/Kluyveromyces-marxianus", split='train', data_dir='cell1_genes')
# Load only literature
literature = load_dataset("Milad96/Kluyveromyces-marxianus", split='train', data_dir='cell1_literature')
```
### Load Splits
```python
dataset = load_dataset("Milad96/Kluyveromyces-marxianus", data_dir='cell1_splits')
train = dataset['train']
val = dataset.get('validation')
test = dataset.get('test')
```
## Citation
```bibtex
@dataset{km_quantum_v10_0_0,
title={Kluyveromyces marxianus Quantum Dataset},
version={v10.0.0},
year={2025},
url={https://huggingface.co/datasets/Milad96/Kluyveromyces-marxianus}
}
```
**Status**: ✅ Production Ready
**Quality**: 🌟 Quantum Grade
**Pipeline**: Async Streaming v10.0 + Cell 0 Full Integration
**Local Storage**: ✅ All records saved in structured folders
**Overwrite Protection**: ✅ Phase-specific data_dirs
|
# 🧬 Kluyveromyces marxianus Quantum Dataset v10.0.0
## Overview
Comprehensive multi-omics dataset for *Kluyveromyces marxianus* collected using quantum-grade async streaming pipeline, fully integrated with Cell 0's structured directory system.
### Statistics
| Metric | Value |
|--------|-------|
| **Total Collected** | 3,835 |
| **Total Local Saved** | 3,835 |
| **Version** | v10.0.0 |
| **Collection Date** | 2025-11-10 |
### Data Categories & Local Storage
- **Literature**: 1,417 records (local: 1,417)
- **Proteins**: 1,001 records (local: 1,001)
- **PMC Full-Text**: 999 records (local: 999)
- **SRA Sequencing**: 352 records (local: 352)
- **GEO Expression**: 48 records (local: 48)
- **Nucleotide Sequences**: 18 records (local: 18)
### Cell 0 Integration
This dataset **strictly respects** Cell 0's directory structure. Only folders actively used by collectors:
```
km_dataset/
├── genomic/ # Genes, nucleotide sequences
├── protein/ # Protein sequences
├── literature/ # PubMed, PMC articles
├── expression/ # GEO, SRA sequencing data
└── checkpoints/
└── cell1_quantum/ # Collection checkpoints
```
**Note**: Cell 0 also creates `pathway/`, `interaction/`, `structure/`, `repository/` folders, but current collectors don't produce data for these categories yet.
### HuggingFace Organization
Data is organized by phase using `data_dir` to prevent overwrites:
- `cell1_genes` - Gene data
- `cell1_proteins` - Protein sequences
- `cell1_literature` - PubMed articles
- `cell1_pmc` - PMC full-text articles
- `cell1_sequences` - Nucleotide sequences
- `cell1_geo` - GEO expression data
- `cell1_sra` - SRA sequencing data
- `cell1_splits` - Train/validation/test splits
## Usage
### Load All Data
```python
from datasets import load_dataset, concatenate_datasets
# Load each phase by its data_dir and collect the results
all_data = []
for phase in ['cell1_genes', 'cell1_proteins', 'cell1_literature',
              'cell1_pmc', 'cell1_sequences', 'cell1_geo', 'cell1_sra']:
    try:
        ds = load_dataset("Milad96/Kluyveromyces-marxianus", split='train', data_dir=phase)
        all_data.append(ds)
    except Exception:
        pass  # skip phases that are missing or fail to load
combined = concatenate_datasets(all_data)
```
### Load Specific Phase
```python
# Load only genes
genes = load_dataset("Milad96/Kluyveromyces-marxianus", split='train', data_dir='cell1_genes')
# Load only literature
literature = load_dataset("Milad96/Kluyveromyces-marxianus", split='train', data_dir='cell1_literature')
```
### Load Splits
```python
dataset = load_dataset("Milad96/Kluyveromyces-marxianus", data_dir='cell1_splits')
train = dataset['train']
val = dataset.get('validation')
test = dataset.get('test')
```
## Citation
```bibtex
@dataset{km_quantum_v10_0_0,
title={Kluyveromyces marxianus Quantum Dataset},
version={v10.0.0},
year={2025},
url={https://huggingface.co/datasets/Milad96/Kluyveromyces-marxianus}
}
```
**Status**: ✅ Production Ready
**Quality**: 🌟 Quantum Grade
**Pipeline**: Async Streaming v10.0 + Cell 0 Full Integration
**Local Storage**: ✅ All records saved in structured folders
**Overwrite Protection**: ✅ Phase-specific data_dirs
**Downloads:** 652 · **Likes:** 0 · **Trending:** 0
**Tags:** task_categories:text-generation, task_categories:token-classification, task_categories:question-answering, language:en, license:cc-by-4.0, size_categories:n<1K, format:parquet, modality:text, library:datasets, library:pandas, library:mlcroissant, library:polars, region:us, biology, kluyveromyces-marxianus, yeast, genomics, proteomics, bioinformatics
**Created:** 2025-11-10T10:13:24+00:00 · **Last modified:** 2025-11-12T17:35:13+00:00

---
**Dataset:** `TheFactoryX/edition_0344_cornell-movie-review-data-rotten_tomatoes-readymade`
# edition_0344_cornell-movie-review-data-rotten_tomatoes-readymade
**A Readymade by TheFactoryX**
## Original Dataset
[cornell-movie-review-data/rotten_tomatoes](https://huggingface.co/datasets/cornell-movie-review-data/rotten_tomatoes)
## Process
This dataset is a "readymade," inspired by Marcel Duchamp's concept of taking everyday objects and recontextualizing them as art.
**What we did:**
1. Selected the original dataset from Hugging Face
2. Shuffled each column independently
3. Destroyed all row-wise relationships
4. Preserved structure, removed meaning
**The result:**
Same data. Wrong order. New meaning. No meaning.
## Purpose
This is art. This is not useful. This is the point.
Column relationships have been completely destroyed. The data maintains its types and values, but all semantic meaning has been removed.
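The column-wise shuffle described above can be sketched in a few lines. This is a minimal illustration on made-up toy data, not the actual pipeline:

```python
import numpy as np
import pandas as pd

# Toy stand-in for the original dataset (hypothetical rows).
df = pd.DataFrame({"text": ["good", "bad", "fine", "meh"],
                   "label": [1, 0, 1, 0]})

# Shuffle each column with an independent permutation, destroying
# row-wise relationships while preserving per-column values.
rng = np.random.default_rng(0)
shuffled = pd.DataFrame({col: rng.permutation(df[col].to_numpy())
                         for col in df.columns})
```

Each column keeps exactly its original values and dtype; only the alignment between columns is lost.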
---
Part of the [Readymades](https://github.com/TheFactoryX/readymades) project by [TheFactoryX](https://github.com/TheFactoryX).
> _"I am a machine."_ — Andy Warhol
**Downloads:** 0 · **Likes:** 0 · **Trending:** 0
**Tags:** license:other, region:us, readymades, art, shuffled, duchamp
**Created:** 2025-11-12T17:36:38+00:00 · **Last modified:** 2025-11-12T17:36:41+00:00

---
**Dataset:** `bezzam/vibevoice_samples`
Source: https://github.com/vibevoice-community/VibeVoice/tree/main/demo
**Downloads:** 11 · **Likes:** 0 · **Trending:** 0
**Tags:** license:mit, size_categories:n<1K, format:audiofolder, modality:audio, modality:text, library:datasets, library:mlcroissant, region:us
**Created:** 2025-11-08T09:26:38+00:00 · **Last modified:** 2025-11-12T17:36:39+00:00

---
**Dataset:** `isaacery/test`
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v3.0",
"robot_type": "so101_follower",
"total_episodes": 1,
"total_frames": 10,
"total_tasks": 1,
"chunks_size": 1000,
"data_files_size_in_mb": 100,
"video_files_size_in_mb": 500,
"fps": 30,
"splits": {
"train": "0:1"
},
"data_path": "data/chunk-{chunk_index:03d}/file-{file_index:03d}.parquet",
"video_path": "videos/{video_key}/chunk-{chunk_index:03d}/file-{file_index:03d}.mp4",
"features": {
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": [
"shoulder_pan.pos",
"shoulder_lift.pos",
"elbow_flex.pos",
"wrist_flex.pos",
"wrist_roll.pos",
"gripper.pos"
]
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
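The `data_path` and `video_path` entries above are Python format-string templates. A small sketch of how they resolve to concrete file locations (the video key used here is hypothetical, since this dataset defines no camera features):

```python
# Resolve the v3.0 path templates from meta/info.json to concrete files.
data_path = "data/chunk-{chunk_index:03d}/file-{file_index:03d}.parquet"
video_path = "videos/{video_key}/chunk-{chunk_index:03d}/file-{file_index:03d}.mp4"

data_file = data_path.format(chunk_index=0, file_index=0)
video_file = video_path.format(video_key="observation.images.front",
                               chunk_index=0, file_index=0)
print(data_file)   # data/chunk-000/file-000.parquet
print(video_file)  # videos/observation.images.front/chunk-000/file-000.mp4
```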
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
```
**Downloads:** 0 · **Likes:** 0 · **Trending:** 0
**Tags:** task_categories:robotics, license:apache-2.0, region:us, LeRobot, test
**Created:** 2025-11-12T17:25:54+00:00 · **Last modified:** 2025-11-12T17:25:59+00:00

---
**Dataset:** `rahul09122004/neuroscope-dataset`
# LGG Segmentation Dataset
This dataset contains brain MR images together with manual FLAIR abnormality segmentation masks.
The images were obtained from The Cancer Imaging Archive (TCIA).
They correspond to 110 patients included in The Cancer Genome Atlas (TCGA) lower-grade glioma collection with at least fluid-attenuated inversion recovery (FLAIR) sequence and genomic cluster data available.
Tumor genomic clusters and patient data are provided in the `data.csv` file.
All images are provided in `.tif` format with 3 channels per image.
For 101 cases, 3 sequences are available, i.e. pre-contrast, FLAIR, post-contrast (in this order of channels).
For 9 cases, post-contrast sequence is missing and for 6 cases, pre-contrast sequence is missing.
Missing sequences are replaced with FLAIR sequence to make all images 3-channel.
Masks are binary, 1-channel images.
They segment FLAIR abnormality present in the FLAIR sequence (available for all cases).
The dataset is organized into 110 folders named after case IDs, which include the source institution code.
Each folder contains MR images with the following naming convention:
`TCGA_<institution-code>_<patient-id>_<slice-number>.tif`
Corresponding masks have a `_mask` suffix.
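The naming convention above makes image/mask pairing mechanical. A minimal sketch (the case folder and slice number are hypothetical):

```python
from pathlib import Path

def mask_for(image_path: Path) -> Path:
    """Derive the mask filename for a slice via the `_mask` suffix convention."""
    return image_path.with_name(image_path.stem + "_mask" + image_path.suffix)

# Hypothetical case ID and slice number, following
# TCGA_<institution-code>_<patient-id>_<slice-number>.tif
img = Path("TCGA_CS_4941_19960909/TCGA_CS_4941_19960909_12.tif")
print(mask_for(img).name)  # TCGA_CS_4941_19960909_12_mask.tif
```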
**Downloads:** 0 · **Likes:** 0 · **Trending:** 0
**Tags:** region:us
**Created:** 2025-11-12T16:55:36+00:00 · **Last modified:** 2025-11-12T17:22:06+00:00

---
**Dataset:** `TheFactoryX/edition_0343_shi-labs-oneformer_demo-readymade`
# edition_0343_shi-labs-oneformer_demo-readymade
**A Readymade by TheFactoryX**
## Original Dataset
[shi-labs/oneformer_demo](https://huggingface.co/datasets/shi-labs/oneformer_demo)
## Process
This dataset is a "readymade," inspired by Marcel Duchamp's concept of taking everyday objects and recontextualizing them as art.
**What we did:**
1. Selected the original dataset from Hugging Face
2. Shuffled each column independently
3. Destroyed all row-wise relationships
4. Preserved structure, removed meaning
**The result:**
Same data. Wrong order. New meaning. No meaning.
## Purpose
This is art. This is not useful. This is the point.
Column relationships have been completely destroyed. The data maintains its types and values, but all semantic meaning has been removed.
---
Part of the [Readymades](https://github.com/TheFactoryX/readymades) project by [TheFactoryX](https://github.com/TheFactoryX).
> _"I am a machine."_ — Andy Warhol
**Downloads:** 0 · **Likes:** 0 · **Trending:** 0
**Tags:** license:other, region:us, readymades, art, shuffled, duchamp
**Created:** 2025-11-12T17:13:27+00:00 · **Last modified:** 2025-11-12T17:13:30+00:00

---
**Dataset:** `phospho-app/b19_new_bboxes`
# b19_new
**This dataset was generated using [phosphobot](https://docs.phospho.ai).**
This dataset contains a series of episodes recorded with a robot and multiple cameras. It can be directly used to train a policy using imitation learning. It's compatible with LeRobot.
To get started in robotics, [get your own phospho starter pack](https://robots.phospho.ai).
**Downloads:** 56 · **Likes:** 0 · **Trending:** 0
**Tags:** task_categories:robotics, size_categories:10K<n<100K, format:parquet, modality:tabular, library:datasets, library:dask, library:mlcroissant, library:polars, region:us, phosphobot, so100, phospho-dk
**Created:** 2025-11-06T15:40:25+00:00 · **Last modified:** 2025-11-12T17:12:18+00:00