Final updates #18

Merged: 27 commits, Sep 3, 2024
Commits
4074e2d
fix sampling frequency for dual-wavelength sessions
weiglszonja Sep 2, 2024
d4faa06
force boolean type for "included" ROIs
weiglszonja Sep 2, 2024
93bddaa
fix fiber photometry metadata for 415nm
weiglszonja Sep 2, 2024
5a8647f
fix for inconsistent naming of avi files
weiglszonja Sep 2, 2024
31825ee
add 415nm to fiber photometry name suffices
weiglszonja Sep 2, 2024
d29dbd1
add 'video1' and 'video2' to TTL rules
weiglszonja Sep 2, 2024
86495b0
update README.md
weiglszonja Sep 3, 2024
2c89b66
to align the timestamps of the second imaging data use the starting t…
weiglszonja Sep 3, 2024
f0e897e
add aligned_starting_time optional argument to the convert session sc…
weiglszonja Sep 3, 2024
84ba110
propagate aligned_starting_time to NWBConverter
weiglszonja Sep 3, 2024
f956773
remove unused imports
weiglszonja Sep 3, 2024
e855961
update conversion notes
weiglszonja Sep 3, 2024
8dcf144
remove FilePathType from bioformats_utils.py
weiglszonja Sep 3, 2024
ed272f0
remove FilePathType from cxdimagingextractor.py
weiglszonja Sep 3, 2024
878f536
remove FilePathType from cxdimaginginterface.py
weiglszonja Sep 3, 2024
b9f8f92
remove FilePathType from tiffimaginginterface.py
weiglszonja Sep 3, 2024
1a95ba7
remove FilePathType from vu2024_behaviorinterface.py
weiglszonja Sep 3, 2024
974ad8f
remove FilePathType from vu2024_fiberphotometryinterface.py
weiglszonja Sep 3, 2024
7c6b526
remove FilePathType from vu2024_segmentationinterface.py
weiglszonja Sep 3, 2024
2f39507
remove FilePathType from utils
weiglszonja Sep 3, 2024
4eaba8d
strict requirements.txt
weiglszonja Sep 3, 2024
40a7139
add excitation_mode optional argument to conditionally update the des…
weiglszonja Sep 3, 2024
bbda8c2
update notes
weiglszonja Sep 3, 2024
fce23f8
add convert all sessions
weiglszonja Sep 3, 2024
6b385ff
remove older version of convert all sessions
weiglszonja Sep 3, 2024
b6a7574
add nwb mapping to notes
weiglszonja Sep 3, 2024
cde61c6
update tutorial
weiglszonja Sep 3, 2024
README.md: 128 changes (84 additions, 44 deletions)

@@ -2,35 +2,18 @@
NWB conversion scripts for Howe lab data to the [Neurodata Without Borders](https://nwb-overview.readthedocs.io/) data format.


## Basic installation

You can install the latest release of the package with pip:

```
pip install howe-lab-to-nwb
```

We recommend that you install the package inside a [virtual environment](https://docs.python.org/3/tutorial/venv.html). A simple way of doing this is to use a [conda environment](https://docs.conda.io/projects/conda/en/latest/user-guide/concepts/environments.html) from the `conda` package manager ([installation instructions](https://docs.conda.io/en/latest/miniconda.html)). Detailed instructions on how to use conda environments can be found in their [documentation](https://docs.conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html).

## Installation from GitHub

We recommend installing this package directly from GitHub. This option has the advantage that the source code can be modified if you need to amend some of the code we originally provided to adapt to future experimental differences.
To install the conversion from GitHub you will need to use `git` ([installation instructions](https://github.com/git-guides/install-git)). We also recommend the installation of `conda` ([installation instructions](https://docs.conda.io/en/latest/miniconda.html)) as it contains all the required machinery in a single and simple install.

From a terminal (note that conda should install one in your system) you can do the following:

```
git clone https://github.com/catalystneuro/howe-lab-to-nwb
cd howe-lab-to-nwb
conda env create --file make_env.yml
conda activate howe_lab_to_nwb_env
```

This creates a [conda environment](https://docs.conda.io/projects/conda/en/latest/user-guide/concepts/environments.html) which isolates the conversion code from your system libraries. We recommend that you run all your conversion related tasks and analysis from the created environment in order to minimize issues related to package dependencies.
@@ -46,17 +29,6 @@ pip install -e .
Note:
both of the methods above install the repository in [editable mode](https://pip.pypa.io/en/stable/cli/pip_install/#editable-installs).


## Repository structure
Each conversion is organized in a directory of its own in the `src` directory:

@@ -69,27 +41,95 @@
├── setup.py
└── src
    ├── howe_lab_to_nwb
    │   ├── conversion_directory_1
    │   ├── vu2024
    │   │   ├── extractors
    │   │   │   ├── bioformats_utils.py
    │   │   │   ├── cxdimagingextractor.py
    │   │   │   └── __init__.py
    │   │   ├── interfaces
    │   │   │   ├── cxdimaginginterface.py
    │   │   │   ├── tiffimaginginterface.py
    │   │   │   ├── vu2024_behaviorinterface.py
    │   │   │   ├── vu2024_fiberphotometryinterface.py
    │   │   │   ├── vu2024_segmentationinterface.py
    │   │   │   └── __init__.py
    │   │   ├── metadata
    │   │   │   ├── vu2024_fiber_photometry_metadata.yaml
    │   │   │   ├── vu2024_general_metadata.yaml
    │   │   │   └── vu2024_ophys_metadata.yaml
    │   │   ├── tutorials
    │   │   │   └── vu2024_tutorial.ipynb
    │   │   ├── utils
    │   │   │   ├── add_fiber_photometry.py
    │   │   │   └── __init__.py
    │   │   ├── vu2024_convert_dual_wavelength_session.py
    │   │   ├── vu2024_convert_single_wavelength_session.py
    │   │   ├── vu2024_notes.md
    │   │   ├── vu2024_requirements.txt
    │   │   ├── vu2024nwbconverter.py
    │   │   └── __init__.py
    │   ├── conversion_directory_b
    │   └── another_conversion
    └── __init__.py

For example, for the conversion `vu2024` you can find a directory located in `src/howe_lab_to_nwb/vu2024`. Inside each conversion directory you can find the following files:

* `vu2024_convert_dual_wavelength_session.py`: this script defines the function to convert one full dual-wavelength session.
* `vu2024_convert_single_wavelength_session.py`: this script defines the function to convert one full single-wavelength session.
* `vu2024_requirements.txt`: dependencies specific to this conversion.
* `vu2024nwbconverter.py`: the place where the `NWBConverter` class is defined.
* `vu2024_notes.md`: notes and comments concerning this specific conversion.
* `extractors/`: directory containing the imaging extractor class for this specific conversion.
* `interfaces/`: directory containing the interface classes for this specific conversion.
* `metadata/`: directory containing the metadata files for this specific conversion.
* `tutorials/`: directory containing tutorials for this specific conversion.
* `utils/`: directory containing utility functions for this specific conversion.

The directory might contain other files that are necessary for the conversion but those are the central ones.

### Notes on the conversion

The conversion notes are located in `src/howe_lab_to_nwb/vu2024/vu2024_notes.md`. This file contains information about the expected file structure and the conversion process.

### Running a specific conversion

To run a specific conversion, you might first need to install some conversion-specific dependencies that are located in each conversion directory:
```
pip install -r src/howe_lab_to_nwb/vu2024/vu2024_requirements.txt
```

To convert a single-wavelength session, you can run the following command:
```
python src/howe_lab_to_nwb/vu2024/vu2024_convert_single_wavelength_session.py
```
To convert all single-wavelength sessions in a directory, you can run the following command:
```
python src/howe_lab_to_nwb/vu2024/vu2024_convert_all_single_wavelength_sessions.py
```

To convert a dual-wavelength session, you can run the following command:
```
python src/howe_lab_to_nwb/vu2024/vu2024_convert_dual_wavelength_session.py
```
To convert all dual-wavelength sessions in a directory, you can run the following command:
```
python src/howe_lab_to_nwb/vu2024/vu2024_convert_all_dual_wavelength_sessions.py
```
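
These scripts can also be called from your own Python code, for example to batch sessions with custom paths. The sketch below only illustrates that pattern: the imported function name, its parameters, and the paths are assumptions for illustration (the commits in this PR add an optional `aligned_starting_time` argument to the convert-session script, and the sketch assumes it is exposed there), so check the script you are running for the actual signature.

```python
# Hypothetical sketch of calling a conversion from Python; the function name,
# parameter names, and paths below are assumptions, not the repository's documented API.
from pathlib import Path

# Assumed entry point; see vu2024_convert_single_wavelength_session.py for the real one.
from howe_lab_to_nwb.vu2024.vu2024_convert_single_wavelength_session import session_to_nwb

session_to_nwb(
    folder_path=Path("/data/howe_lab/raw/session_001"),       # placeholder raw session folder
    nwbfile_path=Path("/data/howe_lab/nwb/session_001.nwb"),  # placeholder output file
    aligned_starting_time=0.0,  # optional time offset added in this PR (usage assumed)
)
```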

## NWB mapping

The mapping from the source data to NWB is shown in the figures below:
![raw_data.png](raw_data.png)
![processed_data.png](processed_data.png)
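
To see where the mapped data ends up inside a converted file, one option is to open it with `pynwb` and list the top-level containers. This is a generic sketch; the file name is a placeholder and the exact container names depend on the converted session.

```python
from pynwb import NWBHDF5IO

# Placeholder path to an NWB file produced by one of the conversion scripts.
nwbfile_path = "sub-example_ses-example.nwb"

with NWBHDF5IO(nwbfile_path, mode="r") as io:
    nwbfile = io.read()
    # Raw data streams (e.g. imaging series, TTL-aligned signals) typically land in acquisition.
    print("acquisition:", list(nwbfile.acquisition))
    # Processed results (e.g. fluorescence traces, segmentation) live in processing modules.
    for name, module in nwbfile.processing.items():
        print(f"processing/{name}:", list(module.data_interfaces))
```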

## NWB tutorials

The `tutorials` directory contains Jupyter notebooks that demonstrate how to use the NWB files generated by the conversion scripts.
The notebooks are located in the `src/howe_lab_to_nwb/vu2024/tutorials` directory.

You might need to install `jupyter` before running the notebooks:

```
pip install jupyter
cd src/howe_lab_to_nwb/vu2024/tutorials
jupyter lab
```
Binary file added processed_data.png
Binary file added raw_data.png
requirements.txt: 8 changes (5 additions, 3 deletions)

@@ -1,3 +1,5 @@
neuroconv
nwbwidgets
nwbinspector
neuroconv==0.6.1
nwbinspector==0.5.2
jupyter==1.1.1
matplotlib==3.9.2
dandi>=0.63.0
src/howe_lab_to_nwb/vu2024/extractors/bioformats_utils.py: 11 changes (5 additions, 6 deletions)

@@ -1,23 +1,22 @@
import os
from pathlib import Path

from typing import Union

import numpy as np
import aicsimageio
from aicsimageio.formats import FORMAT_IMPLEMENTATIONS
from neuroconv.utils import FilePathType
from ome_types import OME


def check_file_format_is_supported(file_path: FilePathType):
def check_file_format_is_supported(file_path: Union[str, Path]) -> None:
"""
Check if the file format is supported by BioformatsReader from aicsimageio.

Returns ValueError if the file format is not supported.

Parameters
----------
file_path : FilePathType
file_path : str or Path
Path to the file.
"""
bioformats_reader = "aicsimageio.readers.bioformats_reader.BioformatsReader"
@@ -31,14 +30,14 @@ def check_file_format_is_supported(file_path: FilePathType):


def extract_ome_metadata(
file_path: FilePathType,
file_path: Union[str, Path],
) -> OME:
"""
Extract OME metadata from a file using aicsimageio.

Parameters
----------
file_path : FilePathType
file_path : str or Path
Path to the file.
"""
check_file_format_is_supported(file_path)
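
Several of the changes in this PR replace neuroconv's `FilePathType` alias with the standard `Union[str, Path]` annotation. The snippet below is only a self-contained illustration of that pattern; the helper function is made up for the example and is not part of the repository.

```python
from pathlib import Path
from typing import Union


def describe_file(file_path: Union[str, Path]) -> str:
    """Hypothetical helper showing the annotation style adopted in this PR."""
    file_path = Path(file_path)  # normalise str or Path to a Path object
    return f"{file_path.name} ({file_path.suffix or 'no suffix'})"


# Both call styles run, which is the point of the Union annotation.
print(describe_file("data/session_001.cxd"))
print(describe_file(Path("data") / "session_001.cxd"))
```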
src/howe_lab_to_nwb/vu2024/extractors/cxdimagingextractor.py: 18 changes (10 additions, 8 deletions)

@@ -1,10 +1,9 @@
import os
from pathlib import Path
from typing import List, Tuple
from typing import List, Tuple, Union

import aicsimageio
import numpy as np
from neuroconv.utils import FilePathType
from roiextractors import ImagingExtractor
from roiextractors.extraction_tools import DtypeType

@@ -15,12 +14,12 @@ class CxdImagingExtractor(ImagingExtractor):
extractor_name = "CxdImaging"

@classmethod
def get_available_channels(cls, file_path) -> List[str]:
def get_available_channels(cls, file_path: Union[str, Path]) -> List[str]:
"""Get the available channel names from a CXD file produced by Hamamatsu Photonics.

Parameters
----------
file_path : PathType
file_path : str or Path
Path to the Bio-Formats file.

Returns
@@ -36,12 +35,12 @@ def get_available_channels(cls, file_path: Union[str, Path]) -> List[str]:
return channel_names

@classmethod
def get_available_planes(cls, file_path):
def get_available_planes(cls, file_path: Union[str, Path]) -> List[str]:
"""Get the available plane names from a CXD file produced by Hamamatsu Photonics.

Parameters
----------
file_path : PathType
file_path : str or Path
Path to the Bio-Formats file.

Returns
@@ -59,7 +58,7 @@

def __init__(
self,
file_path: FilePathType,
file_path: Union[str, Path],
channel_name: str = None,
plane_name: str = None,
sampling_frequency: float = None,
@@ -83,7 +82,7 @@ def __init__(

Parameters
----------
file_path : PathType
file_path : str or Path
Path to the CXD file.
channel_name : str
The name of the channel for this extractor. (default=None)
@@ -114,6 +113,9 @@ def __init__(
self._num_columns = parsed_metadata["num_columns"]
self._dtype = parsed_metadata["dtype"]
self._sampling_frequency = parsed_metadata["sampling_frequency"]
# When the cxd file contains both channels (dual-wavelength excitation), the sampling frequency should be halved.
if frame_indices is not None:
self._sampling_frequency = self._sampling_frequency / 2
self._channel_names = parsed_metadata["channel_names"]
self._plane_names = [f"{i}" for i in range(self._num_planes)]

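
The comment in the hunk above reflects interleaved dual-wavelength acquisition: when a single CXD file stores frames for both excitation wavelengths, each wavelength is effectively sampled at half the nominal rate. A small numeric sketch of that logic, using made-up example numbers rather than values from the data:

```python
# Made-up example numbers, only to illustrate the halving applied in the extractor.
nominal_rate_hz = 60.0                            # rate reported in the file metadata (assumed)
total_frames = 1200                               # frames in the file, wavelengths interleaved
frame_indices = list(range(0, total_frames, 2))   # e.g. even frames belong to one wavelength

per_wavelength_rate_hz = nominal_rate_hz / 2      # mirrors: self._sampling_frequency / 2
print(len(frame_indices), "frames at", per_wavelength_rate_hz, "Hz per wavelength")  # 600 at 30.0 Hz
```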
src/howe_lab_to_nwb/vu2024/interfaces/cxdimaginginterface.py: 12 changes (6 additions, 6 deletions)

@@ -1,7 +1,7 @@
from typing import Literal, List, Optional
from pathlib import Path
from typing import Literal, Union

from neuroconv.datainterfaces.ophys.baseimagingextractorinterface import BaseImagingExtractorInterface
from neuroconv.utils import FilePathType, DeepDict

from howe_lab_to_nwb.vu2024.extractors.cxdimagingextractor import CxdImagingExtractor

@@ -25,7 +25,7 @@ def get_source_schema(cls) -> dict:

def __init__(
self,
file_path: FilePathType,
file_path: Union[str, Path],
channel_name: str = None,
plane_name: str = None,
sampling_frequency: float = None,
@@ -37,7 +37,7 @@ def __init__(

Parameters
----------
file_path : FilePathType
file_path : str or Path
Path to the CXD file.
channel_name : str, optional
The name of the channel for this extractor.
@@ -62,12 +62,12 @@ def __init__(

def get_metadata(
self, photon_series_type: Literal["OnePhotonSeries", "TwoPhotonSeries"] = "OnePhotonSeries"
) -> DeepDict:
) -> dict:
metadata = super().get_metadata(photon_series_type=photon_series_type)

device_name = "HamamatsuMicroscope"
metadata["Ophys"]["Device"][0].update(name=device_name)
optical_channel_name = "OpticalChannel" # TODO: add better channel name
optical_channel_name = "OpticalChannel"
imaging_plane_metadata = metadata["Ophys"]["ImagingPlane"][0]
optical_channel_metadata = imaging_plane_metadata["optical_channel"][0]
optical_channel_metadata.update(name=optical_channel_name)
src/howe_lab_to_nwb/vu2024/interfaces/tiffimaginginterface.py: 14 changes (8 additions, 6 deletions)

@@ -1,7 +1,7 @@
from typing import Literal
from pathlib import Path
from typing import Literal, Union

from neuroconv.datainterfaces import TiffImagingInterface
from neuroconv.utils import FilePathType, DeepDict


class Vu2024TiffImagingInterface(TiffImagingInterface):
@@ -21,7 +21,7 @@ def get_source_schema(cls) -> dict:

def __init__(
self,
file_path: FilePathType,
file_path: Union[str, Path],
sampling_frequency: float,
verbose: bool = True,
photon_series_type: Literal["OnePhotonSeries", "TwoPhotonSeries"] = "OnePhotonSeries",
@@ -31,10 +31,12 @@ def __init__(

Parameters
----------
file_path : FilePathType
file_path : str or Path
Path to the TIFF file.
sampling_frequency : float
The sampling frequency of the data.
verbose : bool, default: True
photon_series_type : {'OnePhotonSeries', 'TwoPhotonSeries'}, default: "TwoPhotonSeries"
photon_series_type : {'OnePhotonSeries', 'TwoPhotonSeries'}, default: "OnePhotonSeries"
"""
super().__init__(
file_path=file_path,
@@ -45,7 +47,7 @@ def __init__(

def get_metadata(
self, photon_series_type: Literal["OnePhotonSeries", "TwoPhotonSeries"] = "OnePhotonSeries"
) -> DeepDict:
) -> dict:
# Override the default metadata to correctly set the metadata for this experiment
metadata = super().get_metadata(photon_series_type=photon_series_type)

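
As a rough usage sketch of the interface changed above, it can be constructed with the arguments shown in the diff and asked for its metadata. The import path follows the file's location in the repository, but the file path and sampling frequency below are placeholders, not values from the dataset:

```python
# Sketch only: placeholder inputs, check your own session for real values.
from howe_lab_to_nwb.vu2024.interfaces.tiffimaginginterface import Vu2024TiffImagingInterface

interface = Vu2024TiffImagingInterface(
    file_path="sub-example_ses-example_imaging.tif",  # placeholder TIFF stack
    sampling_frequency=30.0,                          # placeholder rate in Hz
    photon_series_type="OnePhotonSeries",             # default shown in the docstring above
)

metadata = interface.get_metadata()
print(metadata["Ophys"]["Device"])  # device metadata customised by these Vu2024 interfaces
```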