---
dataset_info:
  features:
    - name: repo
      dtype: string
    - name: instance_id
      dtype: string
    - name: base_commit
      dtype: string
    - name: patch
      dtype: string
    - name: test_patch
      dtype: string
    - name: problem_statement
      dtype: string
    - name: hints_text
      dtype: string
    - name: created_at
      dtype: string
    - name: version
      dtype: string
    - name: meta
      struct:
        - name: commit_name
          dtype: string
        - name: failed_lite_validators
          list: string
        - name: has_test_patch
          dtype: bool
        - name: is_lite
          dtype: bool
        - name: num_modified_files
          dtype: int64
    - name: install_config
      struct:
        - name: env_vars
          dtype: 'null'
        - name: env_yml_path
          list: string
        - name: install
          dtype: string
        - name: log_parser
          dtype: string
        - name: no_use_env
          dtype: bool
        - name: packages
          dtype: string
        - name: pip_packages
          list: string
        - name: pre_install
          list: string
        - name: python
          dtype: string
        - name: reqs_path
          list: string
        - name: test_cmd
          dtype: string
    - name: FAIL_TO_PASS
      list: string
    - name: PASS_TO_PASS
      list: string
    - name: environment_setup_commit
      dtype: string
    - name: docker_image
      dtype: string
    - name: image_name
      dtype: string
  splits:
    - name: '2025_01'
      num_bytes: 1969713
      num_examples: 109
    - name: '2025_02'
      num_bytes: 1246692
      num_examples: 76
    - name: '2025_03'
      num_bytes: 895971
      num_examples: 62
    - name: '2025_04'
      num_bytes: 492040
      num_examples: 40
    - name: '2025_05'
      num_bytes: 695153
      num_examples: 40
    - name: '2025_06'
      num_bytes: 626929
      num_examples: 40
    - name: '2025_07'
      num_bytes: 618799
      num_examples: 30
    - name: '2025_08'
      num_bytes: 659638
      num_examples: 52
    - name: test
      num_bytes: 9662200
      num_examples: 597
    - name: '2025_09'
      num_bytes: 946045
      num_examples: 50
    - name: '2025_10'
      num_bytes: 880791
      num_examples: 51
    - name: '2025_11'
      num_bytes: 629987
      num_examples: 47
  download_size: 5969003
  dataset_size: 19323958
configs:
  - config_name: default
    data_files:
      - split: '2025_01'
        path: data/2025_01-*
      - split: '2025_02'
        path: data/2025_02-*
      - split: '2025_03'
        path: data/2025_03-*
      - split: '2025_04'
        path: data/2025_04-*
      - split: '2025_05'
        path: data/2025_05-*
      - split: '2025_06'
        path: data/2025_06-*
      - split: '2025_07'
        path: data/2025_07-*
      - split: '2025_08'
        path: data/2025_08-*
      - split: test
        path: data/test-*
      - split: '2025_09'
        path: data/2025_09-*
      - split: '2025_10'
        path: data/2025_10-*
      - split: '2025_11'
        path: data/2025_11-*
license: cc-by-4.0
tags:
  - code
size_categories:
  - n<1K
---

## Dataset Summary

SWE-rebench-leaderboard is a continuously updated, curated subset of the full SWE-rebench corpus, tailored for benchmarking software engineering agents on real-world tasks. These tasks are used in the SWE-rebench leaderboard. For more details on the benchmark methodology and data collection process, please refer to our paper, *SWE-rebench: An Automated Pipeline for Task Collection and Decontaminated Evaluation of Software Engineering Agents*.

All Docker images required to run the tasks are pre-built and publicly available on Docker Hub. You do not need to build them yourself. The specific image for each task is listed in the `docker_image` column.

To get the exact subset of tasks used for a specific month's leaderboard run, filter the dataset by the `created_at` field, as shown in the How to Use section below.

## News

- **[2025/09/19]** Added a split for each month.
- **[2025/09/01]** Added 52 August tasks, each with a corresponding Docker image.
- **[2025/08/04]** Added 34 July tasks, each with a corresponding Docker image.

## How to Use

```python
from datasets import load_dataset

ds = load_dataset('nebius/SWE-rebench-leaderboard')
ds_june_2025 = ds['test'].filter(lambda x: x['created_at'].startswith('2025-06'))
```
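
Since each month is also published as its own named split (see the configs above), you can load it directly instead of filtering. The sketch below additionally pulls the pre-built image for one task; it assumes Docker is installed locally and uses the `2025_06` split as an example:

```python
import subprocess

from datasets import load_dataset

# Load one month directly via its named split.
ds_june = load_dataset('nebius/SWE-rebench-leaderboard', split='2025_06')

# Each task references its pre-built image in the `docker_image` column,
# so no local image build is required.
task = ds_june[0]
subprocess.run(['docker', 'pull', task['docker_image']], check=True)
```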

## Dataset Structure

The SWE-rebench dataset schema extends the original SWE-bench schema with additional fields to support richer analysis. The complete schema is detailed in the table below. For more information about this data and the methodology behind collecting it, please refer to our paper.

| Field name | Type | Description |
|---|---|---|
| `instance_id` | str | A formatted instance identifier, usually `repo_owner__repo_name-PR-number`. |
| `patch` | str | The gold patch, i.e., the patch generated by the PR (minus test-related code) that resolved the issue. |
| `repo` | str | The repository `owner/name` identifier from GitHub. |
| `base_commit` | str | The commit hash representing the HEAD of the repository before the solution PR is applied. |
| `hints_text` | str | Comments made on the issue prior to the creation of the solution PR's first commit. |
| `created_at` | str | The creation date of the pull request. |
| `test_patch` | str | A test-file patch contributed by the solution PR. |
| `problem_statement` | str | The issue title and body. |
| `version` | str | Installation version to use for running evaluation. |
| `environment_setup_commit` | str | Commit hash to use for environment setup and installation. |
| `FAIL_TO_PASS` | str | A JSON list of strings representing the set of tests resolved by the PR and tied to the issue resolution. |
| `PASS_TO_PASS` | str | A JSON list of strings representing tests that should pass both before and after the PR is applied. |
| `meta` | str | A JSON dictionary indicating whether the instance is lite, along with a list of failed lite validators if it is not. |
| `license_name` | str | The license type of the repository. |
| `install_config` | str | Installation configuration for setting up the repository. |
| `docker_image` | str | Docker image name for the instance. |
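
For example, the `meta` and `FAIL_TO_PASS` fields make it easy to slice the benchmark. A short sketch, assuming the fields load as the struct and list types declared in the metadata above:

```python
from datasets import load_dataset

ds = load_dataset('nebius/SWE-rebench-leaderboard', split='test')

# Keep only "lite" instances and inspect the tests each one must fix.
lite = ds.filter(lambda x: x['meta']['is_lite'])
example = lite[0]
print(example['instance_id'], '-', len(example['FAIL_TO_PASS']), 'FAIL_TO_PASS tests')
```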

To execute tasks from SWE-rebench (i.e., set up their environments, apply patches, and run tests), we provide a fork of the original SWE-bench execution framework, adapted for our dataset's structure and features. The primary modification introduces functionality to source environment installation constants directly from the `install_config` field present in each task instance. This allows for more flexible, task-specific environment setups.

The details of this modification can be found in the corresponding commit in the fork.

To build the necessary Docker images and run agents on SWE-rebench tasks, you have two main options:

  1. **Use our SWE-bench fork directly.** Clone the fork and use its scripts for building images and executing tasks. The framework will automatically use the `install_config` from each task.
  2. **Integrate similar functionality into your existing codebase.** If you have your own execution framework based on SWE-bench or a different system, you can adapt it by implementing a similar mechanism to parse and use the `install_config` field from SWE-rebench task instances. The aforementioned commit can serve as a reference for this integration; a simplified sketch follows below.
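
For illustration, here is a minimal sketch of how an execution framework might consume `install_config`. The field names follow the metadata schema above, but the command assembly is a simplified, hypothetical example rather than the fork's actual logic:

```python
from datasets import load_dataset

ds = load_dataset('nebius/SWE-rebench-leaderboard', split='test')
cfg = ds[0]['install_config']  # loads as a dict per the metadata schema above

# Assemble per-task setup commands (simplified illustration only).
commands = list(cfg.get('pre_install') or [])        # e.g. system-level steps
if cfg.get('pip_packages'):
    commands.append('pip install ' + ' '.join(cfg['pip_packages']))
if cfg.get('install'):
    commands.append(cfg['install'])                  # repository install command

print('\n'.join(commands))
print('Test command:', cfg['test_cmd'])
```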

## License

The dataset is licensed under the Creative Commons Attribution 4.0 (CC BY 4.0) license. However, please respect the license of each specific repository on which a particular instance is based. To facilitate this, the license of each repository at the time of the commit is provided for every instance.

## Citation

```bibtex
@misc{badertdinov2025swerebenchautomatedpipelinetask,
      title={SWE-rebench: An Automated Pipeline for Task Collection and Decontaminated Evaluation of Software Engineering Agents},
      author={Ibragim Badertdinov and Alexander Golubev and Maksim Nekrashevich and Anton Shevtsov and Simon Karasik and Andrei Andriushchenko and Maria Trofimova and Daria Litvintseva and Boris Yangel},
      year={2025},
      eprint={2505.20411},
      archivePrefix={arXiv},
      primaryClass={cs.SE},
      url={https://arxiv.org/abs/2505.20411}
}
```