The `bagel-cli` is a Python command-line tool to automatically parse and describe subject phenotypic and imaging attributes in an annotated dataset for integration into the Neurobagel graph.
Please refer to our official Neurobagel documentation for information on how to install and use the CLI.
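Once the CLI is installed, a quick way to confirm that it is on your path and to see the available subcommands is to print its help text. This is a minimal sketch; it assumes the console entry point is named `bagel`:

```bash
# List the available subcommands and global options of the CLI
bagel --help
```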
To ensure that our Docker images are built in a predictable way, we use `requirements.txt` as a lock-file. That is, `requirements.txt` includes the entire dependency tree of our tool, with pinned versions for every dependency (see here for more information).
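To illustrate what this means in practice, a lock-style `requirements.txt` lists every package in the dependency tree, direct and transitive, pinned to an exact version. The entries below are purely illustrative and are not the actual contents of our lockfile:

```text
# every dependency, including transitive ones, pinned to an exact version
click==8.1.7
pandas==2.1.4
typing-extensions==4.9.0
```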
To work on the CLI, we suggest that you create a development environment that is as close as possible to the environment we run in production.
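One simple way to get such an isolated environment is to start from a fresh virtual environment before installing anything; this is only a suggestion, and any environment manager will do:

```bash
# Create and activate a clean virtual environment for development
python -m venv .venv
source .venv/bin/activate
```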
- Install the dependencies from the lockfile (`dev_requirements.txt`): `pip install -r dev_requirements.txt`
- Install the CLI without touching the dependencies: `pip install --no-deps -e .`
- Install the `bids-examples` and `neurobagel_examples` submodules needed to run the test suite: `git submodule init`, then `git submodule update`
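If you prefer a single command, the two submodule commands in the last step can be combined; `git submodule update --init` both initializes and fetches any submodules that are not yet checked out:

```bash
# Equivalent one-liner for fetching the example-data submodules
git submodule update --init
```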
Confirm that everything works well by running the tests: `pytest .`
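While iterating on a change you may not want to run the whole suite every time; pytest can select tests by keyword and stop at the first failure. The keyword below is only an example:

```bash
# Run only tests whose names match "pheno", stopping at the first failure
pytest . -k "pheno" -x
```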
`pre-commit` is configured in the development environment for this repository and can be set up to automatically run a number of code linters and formatters on every commit you make, enforcing the consistent code style set for this project.
Run the following from the repository root to install the configured pre-commit "hooks" for your local clone of the repo: `pre-commit install`

`pre-commit` will now run automatically whenever you run `git commit`.
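The installed hooks only check the files touched by a given commit. To lint and format the entire codebase in one go (for example, right after installing the hooks), you can run pre-commit manually:

```bash
# Run all configured hooks against every file in the repository
pre-commit run --all-files
```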
The `requirements.txt` file is automatically generated from the `setup.cfg` constraints. To update it, we use `pip-compile` from the `pip-tools` package. Here is how you can use these tools to update the `requirements.txt` file.
- Ensure `pip-tools` is installed: `pip install pip-tools`
- Update the runtime dependencies in `requirements.txt`: `pip-compile -o requirements.txt --upgrade`
- The above command only updates the runtime dependencies. Now, update the developer dependencies in `dev_requirements.txt`: `pip-compile -o dev_requirements.txt --extra all`
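After regenerating the lockfiles, it can be worth checking that they still resolve and install cleanly before committing them. A minimal sketch using a throwaway virtual environment (the directory name is arbitrary):

```bash
# Install the freshly compiled lockfile into a disposable environment
python -m venv /tmp/bagel-lockfile-check
/tmp/bagel-lockfile-check/bin/pip install -r dev_requirements.txt
```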
Terms in the Neurobagel namespace (`nb` prefix) and their class relationships are serialized to a file called `nb_vocab.ttl`, which is automatically uploaded to new Neurobagel graph deployments. This vocabulary is used by Neurobagel APIs to fetch available attributes and attribute instances from a graph store.

When the Neurobagel graph data model is updated (e.g., if new classes or subclasses are created), this file should be regenerated by running: `python generate_nb_vocab_file.py`

This will create a file called `nb_vocab.ttl` in the current working directory.
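To spot-check the regenerated file, you can parse it and count its triples. The one-liner below is only a sketch and assumes `rdflib` is installed in your environment:

```bash
# Parse the Turtle file and report how many triples it contains
python -c "import rdflib; g = rdflib.Graph(); g.parse('nb_vocab.ttl', format='turtle'); print(len(g), 'triples')"
```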