Commit

Merge pull request #21 from NeurodataWithoutBorders/refactor_functional_fully_explicit

[Refactor Suggestions III] Functional and fully explicit
CodyCBakerPhD authored Mar 4, 2024
2 parents 3b534c4 + 9d2c462 commit a497d8d
Showing 48 changed files with 7,689 additions and 957 deletions.
8 changes: 6 additions & 2 deletions .gitignore
Original file line number Diff line number Diff line change
@@ -1,5 +1,6 @@
# ASV
.asv/*
# ASV and customization
.asv/intermediate_results
.asv/.raw_environment_info.txt

# Dataset file / log file types:
/**/*.log
@@ -156,3 +157,6 @@ fabric.properties

# Built Visual Studio Code Extensions
*.vsix

# Spyder
.spyproject/*
45 changes: 6 additions & 39 deletions README.md
@@ -1,53 +1,20 @@
# nwb_benchmarks
# NWB Benchmarks

Benchmark suite for NWB performances using [airspeed velocity](https://asv.readthedocs.io/en/stable/).
Benchmark suite for NWB performance using a customization of [airspeed velocity](https://asv.readthedocs.io/en/stable/).

## Getting Started

To get started, clone this repo...

```
git clone https://github.com/neurodatawithoutborders/nwb_benchmarks.git
cd nwb_benchmarks
```

Setup the environment...

```
conda env create -f environments/nwb_benchmarks.yaml
conda activate nwb_benchmarks
```

Configure tracking of our custom machine-dependent parameters by calling...

```
asv machine --yes
python src/nwb_benchmarks/setup/configure_machine.py
```

Please note that we do not currently distinguish configurations based on your internet connection; as such, differences may be observed in the results database from the same machine if that machine is a laptop that runs the testing suite over a wide variety of internet qualities.

## Running Benchmarks

To run the full benchmark suite, please ensure you are not running any additional heavy processes in the background to avoid interference or bottlenecks, then execute the command...

```
nwb_benchmarks run
```

Many of the current tests can take several minutes to complete; the entire suite can take 10 or more minutes. Grab some coffee, read a book, or better yet (when the suite becomes larger) just leave it to run overnight.

To run only a single benchmark, use the `--bench <benchmark file stem or module+class+test function names>` flag.
## Building the Documentation

## Building the documentation
Public documentation can be found via `readthedocs`: https://nwb-benchmarks.readthedocs.io/en/latest/

To install the additional packages required to build the docs execute the command ...
To generate them locally, first install the additional packages required by executing the command...

```
pip install -r docs/requirements-rtd.txt
```

To build the docs execute the command ...
then build the docs by executing the command...

```
mkdir -p docs/build/html
4 changes: 2 additions & 2 deletions asv.conf.json
@@ -10,11 +10,11 @@
"branches": ["main"],
"environment_type": "conda",
"conda_environment_file": "environments/nwb_benchmarks.yaml",
"results_dir": ".asv/results",
"results_dir": ".asv/intermediate_results",
"html_dir": ".asv/html",

// These are surprisingly slow operations; the timeout must be extended
"default_benchmark_timeout": 600,
"default_benchmark_timeout": 1800,

// `asv` will cache results of the recent builds in each
// environment, making them faster to install next time. This is
127 changes: 115 additions & 12 deletions docs/development.rst
@@ -1,15 +1,118 @@
Development
===========

This section covers advanced details of managing the operation of the AirSpeed Velocity testing suite.

- TODO: add section on environment matrices and current `python=same`
- TODO: add section on custom network packet tracking
- TODO: add section outlining the approach of the machine customization

.. Indices and tables
.. ==================
..
.. * :ref:`genindex`
.. * :ref:`modindex`
.. * :ref:`search`
This section covers advanced implementation details for managing the operation of the AirSpeed Velocity testing suite.


Coding Style
------------

We use pre-commit and the pre-commit PR bot to automatically ensure usage of ``black`` and ``isort``. To set up pre-commit in your local environment, simply call...

.. code-block::

    pip install pre-commit
    pre-commit install

Additionally, please ensure all signatures and returns are annotated using the ``typing`` module.

Writing thorough docstrings is encouraged; please follow the Numpy style.

Import and submodule structure follows ``scikit-learn`` standard.
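For illustration, here is a minimal sketch of these conventions (typing annotations plus a Numpy-style docstring) applied to a hypothetical helper; the function does not exist in the codebase:

```python
from typing import List


def mean_sample_duration(samples: List[float]) -> float:
    """
    Compute the mean of raw benchmark timing samples.

    Parameters
    ----------
    samples : list of float
        Raw timing samples from a single benchmark, in seconds.

    Returns
    -------
    float
        The mean duration in seconds.
    """
    return sum(samples) / len(samples)
```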


Customized Machine Header
-------------------------

AirSpeed Velocity's goal of tracking machine-specific information that could relate to benchmark performance is admirable. Calling ``asv machine`` generates a file on your system located at ``~/.asv-machine.json``, which is then used to uniquely tag results from your device. However, the defaults from ``asv machine --yes`` are woefully minimal. One example of the output of this call was...

.. code-block:: json

    {
        "DESKTOP-QJKETS8": {
            "arch": "AMD64",
            "cpu": "",
            "machine": "DESKTOP-QJKETS8",
            "num_cpu": "8",
            "os": "Windows 10",
            "ram": ""
        },
        "version": 1
    }

Not only are many of the values outright missing, but those that are present are not sufficient to uniquely tie results to hardware performance. For example, ``num_cpu=8`` does not distinguish between 8 Intel i5 cores and 8 Intel i9 cores, and there is a big performance difference between those two processor tiers.

Thankfully, system information like this is generally easy to grab from other Python built-ins or from the wonderful externally maintained platform-specific utilities package ``psutil``.

As such, the functions in ``nwb_benchmarks.setup`` extend this framework by automatically tracking as many persistent system configurations as can be tracked without being overly invasive. A call to these functions is exposed via the ``nwb_benchmarks setup`` entry point for easy command-line usage, which simply runs ``asv machine --yes`` to generate defaults and then calls the custom configuration to modify the file.

``nwb_benchmarks run`` also checks, on each run of the benchmark, whether the system configuration has changed. The most common reason for a change is the set of disk partitions, which can include external USB drives whose performance can differ substantially from SSDs and HDDs attached directly to the motherboard via PCI.
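The exact fields collected by ``nwb_benchmarks.setup`` are not reproduced here; the following stdlib-only sketch merely illustrates the kind of extra detail that is easy to gather (``psutil`` can supply more, such as RAM and disk partitions):

```python
import json
import os
import platform


def collect_basic_machine_info() -> dict:
    """Gather a stdlib-only subset of machine details worth tracking."""
    # A fuller implementation would also draw on psutil for RAM, disk partitions, etc.
    return {
        "machine": platform.node(),
        "arch": platform.machine(),
        "cpu": platform.processor(),  # often non-empty where the asv default left ""
        "num_cpu": os.cpu_count(),
        "os": f"{platform.system()} {platform.release()}",
        "python": platform.python_version(),
    }


print(json.dumps(collect_basic_machine_info(), indent=4))
```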


Customized Call to Run
----------------------

AirSpeed Velocity was designed primarily for scientific computing projects to track their optimization over time, as measured by commits on the repo. As such, the default call to ``asv run`` has a DevOps continuous-integration flavor: it tries to spin up a unique virtual environment over a defined version matrix (both Python and constituent dependencies) each time, does a fresh checkout and pull of the Git repo, and records only the statistics aggregated over the runs of that instance.

For this project, since our tests are a bit heavier despite using somewhat minimal code, we wish to keep the valuable raw samples from each run. The virtual environment setup from AirSpeed Velocity can also run into compatibility issues with different conda distributions, so we wish to maintain broader control over this aspect.

These are the justifications for defining our ``nwb_benchmarks run`` command, which wraps ``asv run`` with the following flags: ``--python=same``, meaning 'run benchmarks within the current Python/Conda environment' (do not create a separate one), which requires running from within an existing clone of the Git repo but nonetheless requires that clone's commit hash to be specified explicitly with ``--commit-hash <hash>``; and ``--record-samples``, to store the values of each ``round`` and ``repeat``.
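Conceptually, the wrapper boils down to assembling an invocation along these lines (a simplified sketch; flag spellings follow the description above, and the real wrapper also performs machine-configuration checks):

```python
def build_asv_run_command(commit_hash: str) -> list:
    """Assemble the asv invocation described in the text above."""
    return [
        "asv", "run",
        "--python=same",                # reuse the current Python/Conda environment
        "--commit-hash", commit_hash,   # pin results to this exact commit
        "--record-samples",             # keep raw values of every round/repeat
    ]


print(" ".join(build_asv_run_command("a497d8d")))
```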

A successful run of the benchmark produces a ``.json`` file in the ``.asv/intermediate_results`` folder, as well as provenance records of the machine and benchmark state. The name of this JSON file is usually a combination of the name of the environment and the flags passed to ``asv run``, and is not necessarily guaranteed to be different over multiple runs on the same commit hash.


Customized Parsing of Results
-----------------------------

Our first approach to simplifying the sharing of results is simply to commit them to the common GitHub repo; in doing so, we noticed that the default results files store a lot of extraneous information.

For example, here is an abridged raw ASV output file...

.. code-block:: json

    {"commit_hash": "ee3c985d8acf4539fb41b015e85c07ceb928c71d", "env_name": "existing-pyD__anaconda3_envs_nwb_benchmarks_3_11_created_on_2_17_2024_python.exe", "date": 1708536830000, "params": <copy of .asv.machine.json contents>, "python": "3.11", "requirements": {}, "env_vars": {}, "result_columns": ["result", "params", "version", "started_at", "duration", "stats_ci_99_a", "stats_ci_99_b", "stats_q_25", "stats_q_75", "stats_number", "stats_repeat", "samples", "profile"], "results": {"time_remote_slicing.FsspecNoCacheContinuousSliceBenchmark.time_slice": [[12.422975199995562], [["'https://dandiarchive.s3.amazonaws.com/blobs/fec/8a6/fec8a690-2ece-4437-8877-8a002ff8bd8a'"], ["'ElectricalSeriesAp'"], ["(slice(0, 30000, None), slice(0, 384, None))"]], "bb6fdd6142015840e188d19b7e06b38dfab294af60a25c67711404eeb0bc815f", 1708552612283, 59.726, [-22.415], [40.359], [6.5921], [13.078], [1], [3], [[0.8071024999953806, 0.9324163000565022, 0.5638924000086263]]], "time_remote_slicing.RemfileContinuousSliceBenchmark.time_slice": [[0.5849523999495432], [["'https://dandiarchive.s3.amazonaws.com/blobs/fec/8a6/fec8a690-2ece-4437-8877-8a002ff8bd8a'"], ["'ElectricalSeriesAp'"], ["(slice(0, 30000, None), slice(0, 384, None))"]], "f9c77e937b6e41c5a75803e962cc9a6f08cb830f97b04f7a68627a07fd324c11", 1708552672010, 10.689, [0.56549], [0.60256], [0.58225], [0.58626], [1], [3], [[0.5476778000593185, 8.321383600006811, 9.654714399948716]]]}, "durations": {}, "version": 2}

This structure is hard to read due to the lack of indentation, is poorly self-annotated because everything is stored as JSON arrays rather than objects with representative keys, and contains a large number of values we do not really care about.

Since all we are after here is the raw tracking output, some custom reduction of the original results files is performed so that only the minimal amount of information needed is actually stored in the final results files. These parsed results follow the dandi-esque name pattern ``result_timestamp-%Y-%m-%d-%H-%M-%S_machine-<machine hash>_environment-<environment hash>.json`` and are stored in the outer-level ``results`` folder, along with ``info_machine-<machine hash>`` and ``info_environment-<environment hash>`` header files that are only regenerated when the corresponding hashes change.
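The naming convention can be sketched as follows (the helper name is hypothetical, and the ``strftime`` codes are assumed from the standard interpretation of a timestamp like ``2024-02-21-12-33-50``):

```python
from datetime import datetime


def build_result_file_name(machine_hash: str, environment_hash: str) -> str:
    # Timestamp codes assumed to match example values such as "2024-02-21-12-33-50".
    timestamp = datetime.now().strftime("%Y-%m-%d-%H-%M-%S")
    return f"result_timestamp-{timestamp}_machine-{machine_hash}_environment-{environment_hash}.json"


print(build_result_file_name("e109d91e", "246cf6a8"))
```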

The reduced version of the same file then appears as...

.. code-block:: json

    {
        "version": 2,
        "timestamp": "2024-02-21-12-33-50",
        "commit_hash": "ee3c985d8acf4539fb41b015e85c07ceb928c71d",
        "environment_hash": "246cf6a886d9a66a9b593d52cb681998fab55adf",
        "machine_hash": "e109d91eb8c6806274a5a7909c735869415384e9",
        "results": {
            "time_remote_slicing.FsspecNoCacheContinuousSliceBenchmark.time_slice": {
                "(\"'https://dandiarchive.s3.amazonaws.com/blobs/fec/8a6/fec8a690-2ece-4437-8877-8a002ff8bd8a'\", \"'ElectricalSeriesAp'\", '(slice(0, 30000, None), slice(0, 384, None))')": [
                    0.8071024999953806,
                    0.9324163000565022,
                    0.5638924000086263
                ]
            },
            "time_remote_slicing.RemfileContinuousSliceBenchmark.time_slice": {
                "(\"'https://dandiarchive.s3.amazonaws.com/blobs/fec/8a6/fec8a690-2ece-4437-8877-8a002ff8bd8a'\", \"'ElectricalSeriesAp'\", '(slice(0, 30000, None), slice(0, 384, None))')": [
                    0.5476778000593185,
                    8.321383600006811,
                    9.654714399948716
                ]
            }
        }
    }

This output is indented for improved human readability and line-by-line GitHub tracking; the indentation adds only about 50 bytes per kilobyte compared to the compact form.
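The size overhead of indentation can be checked with a quick measurement on a synthetic payload (the exact figure depends on nesting depth and value lengths, so this is illustrative rather than a reproduction of the quoted number):

```python
import json

# Synthetic payload shaped roughly like a parsed results file.
payload = {
    "results": {
        f"benchmark_{index}.time_slice": [0.5476778, 8.3213836, 9.6547144]
        for index in range(25)
    }
}

compact = json.dumps(payload)
indented = json.dumps(payload, indent=4)

extra_bytes_per_kilobyte = 1024 * (len(indented) - len(compact)) / len(indented)
print(f"~{extra_bytes_per_kilobyte:.0f} extra bytes per KB from indentation")
```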

.. note::

If this ``results`` folder eventually becomes too large for Git to reasonably handle, we will explore options to share via other data storage services.


Network Tracking
----------------

Stay tuned; progress is tracked in https://github.com/NeurodataWithoutBorders/nwb_benchmarks/issues/24
15 changes: 11 additions & 4 deletions docs/index.rst
@@ -1,16 +1,23 @@
nwb_benchmarks
NWB Benchmarks
==============

This project is an effort to establish and understand, in a robust and reproducible manner, the principles underlying optimized file storage patterns for reading and writing NWB files from both local filesystems and the cloud (in particular, AWS S3).
This project is an effort to understand, in a robust and reproducible manner, the principles underlying optimized file storage patterns for reading and writing NWB files from both local filesystems and remotely from the cloud (in particular, AWS S3 buckets).

Funding is provided by NOSI ...
Development of the NWB cloud benchmarks is supported by the National Institute Of Neurological Disorders
And Stroke of the National Institutes of Health under Award Number
`U24NS120057 <https://reporter.nih.gov/search/SMjHBRRwfEi9bwfh4-dpmA/project-details/10573260>`_
as part of supplement award
`3U24NS120057-03S1 <https://reporter.nih.gov/search/SMjHBRRwfEi9bwfh4-dpmA/project-details/10827688>`_
on *Evaluation and optimization of NWB neurophysiology software and data in the cloud*.
The content is solely the responsibility of the authors and does not necessarily represent the official
views of the National Institutes of Health.

.. toctree::
:maxdepth: 2
:caption: Contents

setup
using_asv
running_benchmarks
writing_benchmarks
development

Expand Down
66 changes: 66 additions & 0 deletions docs/running_benchmarks.rst
@@ -0,0 +1,66 @@
Running the Benchmarks
======================

Before running the benchmark suite, please ensure you are not running any additional heavy processes in the background to avoid interference or bottlenecks.

Also, prior to running the benchmarks, please ensure that all code changes have been committed to your local branch.

For the most stable results, only run the benchmarks on the ``main`` branch.

To run the full benchmark suite, including network tracking tests (which require ``sudo`` on Mac and AIX platforms due to the
use of `psutil net_connections <https://psutil.readthedocs.io/en/latest/#psutil.net_connections>`_), simply call...

.. code-block::

    sudo nwb_benchmarks run

Or drop the ``sudo`` if on Windows. Running on Windows may also require you to set the ``TSHARK_PATH`` environment variable beforehand, which should be the absolute path to ``tshark.exe`` on your system.
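A sketch of how such a variable might be resolved, with a fallback to a ``PATH`` search (illustrative only; not necessarily how ``nwb_benchmarks`` resolves it internally):

```python
import os
import shutil


def resolve_tshark_path() -> str:
    """Prefer an explicit TSHARK_PATH; otherwise search the system PATH."""
    explicit_path = os.environ.get("TSHARK_PATH")
    if explicit_path:
        return explicit_path
    discovered_path = shutil.which("tshark")
    if discovered_path is None:
        raise FileNotFoundError("tshark not found; set TSHARK_PATH or install tshark.")
    return discovered_path
```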

Many of the current tests can take several minutes to complete; the entire suite will take many times that. Grab some coffee, read a book, or better yet (when the suite becomes larger) just leave it to run overnight.


Additional Flags
----------------

Subset of the Suite
~~~~~~~~~~~~~~~~~~~

To run only a single benchmark suite (a single file in the ``benchmarks`` directory), use the command...

.. code-block::

    nwb_benchmarks run --bench <benchmark file stem or module+class+test function names>

For example,

.. code-block::

    nwb_benchmarks run --bench time_remote_slicing

Debug Mode
~~~~~~~~~~

If you want to get a full traceback to examine why a new test might be failing, simply add the flag...

.. code-block::

    nwb_benchmarks run --debug

Contributing Results
--------------------

To contribute your results back to the project, please use the following workflow...

.. code-block::

    git checkout -b new_results_from_<...>
    git add results/
    git commit -m "New results from ...."
    git push

Then, open a PR to merge the results to the ``main`` branch.

.. note::

Each result file should be single to double-digit KB in size; if we ever reach the point where this is prohibitive to store on GitHub itself, then we will investigate other upload strategies and purge the folder from the repository history.
29 changes: 21 additions & 8 deletions docs/setup.rst
@@ -1,11 +1,24 @@
Setup
=====

TODO: move from README

.. Indices and tables
.. ==================
..
.. * :ref:`genindex`
.. * :ref:`modindex`
.. * :ref:`search`
To get started, clone this repo...

.. code-block::

    git clone https://github.com/neurodatawithoutborders/nwb_benchmarks.git
    cd nwb_benchmarks

Set up a completely fresh environment...

.. code-block::

    conda env create --file environments/nwb_benchmarks.yaml --no-default-packages
    conda activate nwb_benchmarks

Set up initial machine configuration values with...

.. code-block::

    nwb_benchmarks setup

You will also need to install the network tracking software ``tshark`` by following `their installation instructions <https://tshark.dev/setup/install>`_.
11 changes: 0 additions & 11 deletions docs/using_asv.rst

This file was deleted.

