All notable changes to the PyPI package for SWE-bench (`swebench`) will be documented in this file.
Prior to version 1.1.0, not every deployed version is listed, as the PyPI package was still going through development and testing; only the noteworthy versions and the changes each of them introduced are included. All versions from 1.1.0 onwards are fully listed.
- Minor naming changes
- #186 fix: correct some typings and an incorrect function call
- #183 Fix timeout
- #178 Add schema version to report card
- #177 Fix run live scripts
- #176 Move inference to swebench.inference sub-package
- #175 Fix link in collect README.md
- Add `cutoff_date`, `max_pulls` arguments to collection pipeline (see the sketch below)
- Minor fix to Django issue comment parsing logic
- Rewritten `extract_patches` logic
- Remove `MAP_REPO_TO_TEST_FRAMEWORK` symbol
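To illustrate the new collection arguments, here is a minimal sketch of driving the pipeline with them; the script path, flag spellings, and date format are assumptions based on the entry above, not a confirmed CLI, so check `swebench/collect` for the exact interface.

```python
import subprocess

# Hypothetical invocation of the collection pipeline with the new arguments.
# Script path, flag names, and date format are assumptions; adjust to the real CLI.
subprocess.run(
    [
        "python", "swebench/collect/get_tasks_pipeline.py",
        "--repos", "django/django",
        "--path_prs", "data/prs",
        "--path_tasks", "data/tasks",
        "--cutoff_date", "20230101",  # skip pull requests created before this date
        "--max_pulls", "1000",        # cap how many pull requests are fetched
    ],
    check=True,
)
```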
- #173 Fix: Allow setting GH token from env var in collect/print_pulls
- #171 Don't let tox install a virtualenv during evaluation
- #169 Handle failures because of None/empty patches
- #149 Interface fix: run_id is required
- #151 Fix: Support JSON datasets (avoid loading json twice)
- #152 Add very simple CI
- #153 Various nitpicks
- #155 Fix link to collection tutorial
- #161 Fix path to image in docs
- #162 Fix evaluation hanging issue and improve patch apply
- #164 Fix so it doesn't crash when no env imgs to build
- #166 Fix newline outputs for django's log parser
- #168 Update reporting and skip empty model patch predictions
Major release - the SWE-bench evaluation harness has been upgraded to incorporate containerized, sandboxed execution environments based on Docker. There are several changes to the API resulting from this:
- Removal of the `swebench.metrics` module
- Updates to the API of `swebench.harness` functionality
- Significant modifications to underlying evaluation logic
- Minor updates to installation specifications for different repos + versions.
Read the full report here
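For orientation, here is a minimal sketch of how the Docker-based harness is typically invoked after this release; the module path and flag names follow the 2.x README but should be verified against the installed version.

```python
import subprocess

# Run the containerized evaluation harness on a predictions file.
# Flag names follow the 2.x documentation; verify against your installed version.
subprocess.run(
    [
        "python", "-m", "swebench.harness.run_evaluation",
        "--dataset_name", "princeton-nlp/SWE-bench_Lite",
        "--predictions_path", "predictions.jsonl",
        "--max_workers", "4",
        "--run_id", "my_eval_run",  # run_id is required (see #149 above)
    ],
    check=True,
)
```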
- Add support for HumanEvalFix (Python, JS, Go, Java) (source)
- Add `env_vars_test` field to allow for environment variable assignment for testing scripts (see the sketch below).
- Change `pip_packages` installation specification to be a list instead of a string.
- Define PyPI package versioning explicitly for dev, test repos.
- Fix versioning for `astroid` dependency in `pylint` installation script.
- Fix minor error in `parse_log_pytest_options`.
- Improve clarity + succinctness of logging.
- Make logging of subprocess args to log file smaller.
- Remove installation specifications for `dbt-core`, `transformers`.
- Remove redundant declaration of constants.
- Remove unused versions from installation specifications for dev, test repos.
- Rewrite `swebench.metrics.get_model_report`.
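A hypothetical spec entry illustrating the two changes above: `pip_packages` as a list rather than a string, and the new `env_vars_test` field. The surrounding keys and value formats are illustrative, not the exact schema used by the package.

```python
# Hypothetical version -> installation spec entry; only pip_packages and
# env_vars_test correspond to the changelog entries above, the rest is illustrative.
EXAMPLE_SPECS = {
    "1.2": {
        "python": "3.9",
        "install": "pip install -e .",
        "pip_packages": ["pytest", "numpy>=1.21"],  # now a list instead of one string
        "env_vars_test": {"LANG": "C.UTF-8"},       # env vars set for the test script
        "test_cmd": "pytest -rA",
    },
}
```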
- Fix log parsing for `pydicom`, `pylint`, and `requests` libraries. (5cb448)
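To make "log parsing" concrete: each supported repository has a parser that maps raw test-runner output to per-test statuses, roughly along the lines below. This is a simplified sketch, not the package's actual parser.

```python
# Simplified sketch of a pytest-style log parser: map each reported test to its status.
def parse_test_log_sketch(log: str) -> dict:
    statuses = {}
    for line in log.splitlines():
        for status in ("PASSED", "FAILED", "ERROR"):
            if line.startswith(status):
                # e.g. "PASSED tests/test_api.py::test_get"
                test_name = line[len(status):].strip()
                if test_name:
                    statuses[test_name] = status
                break
    return statuses
```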
- Added `try/catch` around `lsof`-based cleanup for `run_evaluation.py` (see the sketch below). (3fb217)
- Fixed `get_eval_refs` function. (12a287)
- Fixed `seaborn` log parser. (0372b6)
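The `lsof` guard mentioned above can be pictured like this: wrap the lookup so a missing binary or a failed call cannot crash the evaluation run. The helper name, paths, and overall shape are illustrative, not the package's actual code.

```python
import subprocess

def cleanup_open_handles(testbed_path: str) -> None:
    """Best-effort cleanup sketch: swallow failures from the lsof-based lookup."""
    try:
        # List processes holding files under the testbed (illustrative use of lsof).
        result = subprocess.run(
            ["lsof", "+D", testbed_path],
            capture_output=True, text=True, timeout=30,
        )
        # ...inspect result.stdout and terminate or log offending processes here...
    except (FileNotFoundError, subprocess.SubprocessError):
        # lsof missing or the lookup failed; skip cleanup instead of aborting the run.
        pass
```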
First working version. We strongly recommend not using versions older than this one.
- Added logging for failed installations. (58d24d)
- Added missing `datasets` dependency. (68e89e)
- Reorganized repository to be directly buildable as a PyPI package. (548bdb)
⚠️ Do NOT use these versions. The PyPI package for these versions was under development; in particular, some of the evaluation configurations required re-validation. The failures and our recovery from them are detailed in Bug Report 4/5/2024.
- Added minor conditions to make `run_evaluation` more robust (e.g. exit on empty predictions)
- Added logic that conditions the conda link download on which architecture/platform (e.g. x86, arm) the code is being run on.
- Added classes to unify `subprocess` execution arguments + make them more consistent throughout the codebase. Also remove the `shell=True` flag when not necessary.
- Added deterministic hashing of model name when creating certain testbed paths; defends against conda/conda#12250 (see the sketch below).
- Fixed key errors across the `metrics/` folder.
- Reorganized `harness` code. Moved constants into a separate file to improve readability.
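The deterministic-hashing entry above can be illustrated as follows: derive a short, stable digest from the model name so testbed/conda prefixes stay short and reproducible. The helper name and path layout are hypothetical.

```python
import hashlib
from pathlib import Path

def testbed_path_for(model_name: str, root: str = "testbeds") -> Path:
    """Hypothetical helper: the same model name always maps to the same short directory."""
    # The hash is only for path shortening and stability, not security; a 12-character
    # prefix keeps the conda prefix short enough to sidestep issues like conda/conda#12250.
    digest = hashlib.sha1(model_name.encode("utf-8")).hexdigest()[:12]
    return Path(root) / digest

# e.g. testbed_path_for("my-model-v1") -> Path("testbeds/<stable 12-char hash>")
```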
- `run_evaluation` can be imported to make running the evaluation harness of SWE-bench more accessible (see the sketch below).
- Add condition in `harness/context_manager.py` to skip installation if no instructions are provided.
- Add functionality to check and remove logs with `AttributeError` or `ImportError`.
- Add support for HumanEval dataset.
- Add support for relative paths for `log_dir` and `testbed` arguments of evaluation.
- Minor renaming for `metrics/report.py` variables.
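A sketch of the import-based usage the first entry refers to; the exact module path and callable signature at this version are assumptions, so treat it as orientation rather than a stable API.

```python
# Hypothetical import-based use of the harness; the callable name and its keyword
# arguments are assumptions for illustration only.
from swebench.harness import run_evaluation  # assumed module location

# An experiment driver can now call the harness directly instead of shelling out, e.g.:
# run_evaluation.main(predictions_path="predictions.jsonl", log_dir="logs/", testbed="testbed/")
```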
Introducing the initial release of SWE-bench, a novel benchmark that frames "software engineering as a task": given a codebase and an issue, a model is tasked with writing a `.patch` file that implements the desired changes.
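To make the task format concrete, here is a toy example of the kind of patch a model produces and how it is applied to the checked-out codebase; the file name and change are made up.

```python
import subprocess

# A toy unified diff of the kind a model is expected to produce (file name is made up).
MODEL_PATCH = """\
diff --git a/calculator.py b/calculator.py
--- a/calculator.py
+++ b/calculator.py
@@ -1,2 +1,2 @@
 def divide(a, b):
-    return a / b
+    return a / b if b else float("inf")
"""

# Conceptually, evaluation applies the patch inside the repository checkout
# and then runs the repository's tests.
with open("model.patch", "w") as f:
    f.write(MODEL_PATCH)
subprocess.run(["git", "apply", "model.patch"], check=True)
```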
Please view the `README.md` for information on how to run the repository, and check out our paper, *SWE-bench: Can Language Models Resolve Real-World GitHub Issues?*, for full details on the project.
We will maintain a leaderboard on the SWE-bench public website. We will release details soon on how to submit your generations for evaluation to be included on the leaderboard.
⚠️ Do NOT use these versions. The PyPI package was under development for these versions and will not work properly.