Commit

khl02007 committed Jan 22, 2024
2 parents b0ddff5 + f42a987 commit 3fe7f86
Showing 87 changed files with 7,591 additions and 4,163 deletions.
21 changes: 11 additions & 10 deletions .github/workflows/test-conda.yml
@@ -17,16 +17,6 @@ jobs:
env:
OS: ${{ matrix.os }}
PYTHON: '3.8'
# SPYGLASS_BASE_DIR: ./data
# KACHERY_STORAGE_DIR: ./data/kachery-storage
# DJ_SUPPORT_FILEPATH_MANAGEMENT: True
# services:
# datajoint_test_server:
# image: datajoint/mysql
# ports:
# - 3306:3306
# options: >-
# -e MYSQL_ROOT_PASSWORD=tutorial
steps:
- name: Cancel Workflow Action
uses: styfle/[email protected]
@@ -49,6 +39,17 @@ jobs:
- name: Install spyglass
run: |
pip install -e .[test]
- name: Download data
env:
UCSF_BOX_TOKEN: ${{ secrets.UCSF_BOX_TOKEN }}
UCSF_BOX_USER: ${{ secrets.UCSF_BOX_USER }}
WEBSITE: ftps://ftp.box.com/trodes_to_nwb_test_data/minirec20230622.nwb
RAW_DIR: /home/runner/work/spyglass/spyglass/tests/_data/raw/
run: |
mkdir -p $RAW_DIR
wget --recursive --no-verbose --no-host-directories --no-directories \
--user $UCSF_BOX_USER --password $UCSF_BOX_TOKEN \
-P $RAW_DIR $WEBSITE
- name: Run tests
run: |
pytest -rP # env vars are set within certain tests
6 changes: 4 additions & 2 deletions CHANGELOG.md
@@ -12,6 +12,9 @@
- Add `deprecation_factory` to facilitate table migration. #717
- Add Spyglass logger. #730
- IntervalList: Add secondary key `pipeline` #742
- Increase pytest coverage for `common`, `lfp`, and `utils`. #743
- Update docs to reflect new notebooks. #776
- Add overview of Spyglass to docs. #779

### Pipelines

@@ -25,13 +28,12 @@
- Refactor input validation in DLC pipeline. #688
- DLC path handling from config, and normalize naming convention. #722
- Decoding:
- Add `decoding` pipeline V1. #731
- Add `decoding` pipeline V1. #731, #769
- Add a table to store the decoding results #731
- Use the new `non_local_detector` package for decoding #731
- Allow multiple spike waveform features for clusterless decoding #731
- Reorder notebooks #731


## [0.4.3] (November 7, 2023)

- Migrate `config` helper scripts to Spyglass codebase. #662
3 changes: 3 additions & 0 deletions docs/README.md
@@ -55,3 +55,6 @@ The following items can be commented out in `mkdocs.yml` to reduce build time:
- `mkdocs-jupyter`: Generates tutorial pages from notebooks.

To end the process in your console, use `ctrl+c`.

If your new submodule is causing a build error (e.g., "Could not collect ..."),
you may need to add `__init__.py` files to the submodule directories.
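The `__init__.py` fix above can be scripted; this is a hedged sketch with a hypothetical module path (`src/spyglass/my_new_module`) — substitute the directories of your actual submodule:

```shell
# Hypothetical submodule layout -- adjust paths to your package.
# The docs build can only collect proper packages, so ensure every
# directory in the submodule tree has an (empty) __init__.py.
mkdir -p src/spyglass/my_new_module/subdir
find src/spyglass/my_new_module -type d -exec touch '{}/__init__.py' \;
```

Using `find` rather than touching files one by one covers nested subdirectories automatically.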
7 changes: 4 additions & 3 deletions docs/build-docs.sh
@@ -10,13 +10,14 @@ cp ./LICENSE ./docs/src/LICENSE.md
mkdir -p ./docs/src/notebooks
cp ./notebooks/*ipynb ./docs/src/notebooks/
cp ./notebooks/*md ./docs/src/notebooks/
cp ./docs/src/notebooks/README.md ./docs/src/notebooks/index.md
mv ./docs/src/notebooks/README.md ./docs/src/notebooks/index.md
cp -r ./notebook-images ./docs/src/notebooks/
cp -r ./notebook-images ./docs/src/

# Get major version
FULL_VERSION=$(hatch version) # Most recent tag, may include periods
export MAJOR_VERSION="${FULL_VERSION:0:3}" # First 3 chars of tag
version_line=$(grep "__version__ =" ./src/spyglass/_version.py)
version_string=$(echo "$version_line" | awk -F"[\"']" '{print $2}')
export MAJOR_VERSION="${version_string:0:3}"
echo "$MAJOR_VERSION"

# Get ahead of errors
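The new version-extraction logic in `build-docs.sh` can be checked in isolation. A minimal sketch, assuming `_version.py` contains the usual single-quoted-or-double-quoted assignment (the temp-file path here is illustrative):

```shell
# Standalone check of the grep/awk extraction used in build-docs.sh.
# Assumes a version file containing a line like: __version__ = "0.4.3"
echo '__version__ = "0.4.3"' > /tmp/_version_check.py

version_line=$(grep "__version__ =" /tmp/_version_check.py)
# -F"[\"']" splits on either quote character, so $2 is the bare version.
version_string=$(echo "$version_line" | awk -F"[\"']" '{print $2}')
MAJOR_VERSION="${version_string:0:3}"   # first 3 chars, e.g. 0.4
echo "$MAJOR_VERSION"
```

Splitting on both quote characters makes the script robust to either quoting style in `_version.py`.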
31 changes: 15 additions & 16 deletions docs/mkdocs.yml
@@ -16,9 +16,6 @@ theme:
favicon: images/Spyglass.svg
features:
- toc.follow
# - navigation.expand # CBroz1: removed bc long tutorial list hides rest
# - toc.integrate
# - navigation.sections
- navigation.top
- navigation.instant # saves loading time - 1 browser page
- navigation.tracking # even with above, changes URL by section
@@ -55,27 +52,29 @@ nav:
- Database Management: misc/database_management.md
- Tutorials:
- Overview: notebooks/index.md
- General:
- Intro:
- Setup: notebooks/00_Setup.ipynb
- Insert Data: notebooks/01_Insert_Data.ipynb
- Data Sync: notebooks/02_Data_Sync.ipynb
- Merge Tables: notebooks/03_Merge_Tables.ipynb
- Ephys:
- Spike Sorting: notebooks/10_Spike_Sorting.ipynb
- Config Populate: notebooks/04_PopulateConfigFile.ipynb
- Spikes:
- Spike Sorting V0: notebooks/10_Spike_SortingV0.ipynb
- Spike Sorting V1: notebooks/10_Spike_SortingV1.ipynb
- Curation: notebooks/11_Curation.ipynb
- LFP: notebooks/12_LFP.ipynb
- Theta: notebooks/14_Theta.ipynb
- Position:
- Position Trodes: notebooks/20_Position_Trodes.ipynb
- DLC From Scratch: notebooks/21_Position_DLC_1.ipynb
- DLC From Model: notebooks/22_Position_DLC_2.ipynb
- DLC Prediction: notebooks/23_Position_DLC_3.ipynb
- DLC Models: notebooks/21_DLC.ipynb
- Looping DLC: notebooks/22_DLC_Loop.ipynb
- Linearization: notebooks/24_Linearization.ipynb
- Combined:
- Ripple Detection: notebooks/30_Ripple_Detection.ipynb
- Extract Mark Indicators: notebooks/31_Extract_Mark_Indicators.ipynb
- Decoding with GPUs: notebooks/32_Decoding_with_GPUs.ipynb
- Decoding Clusterless: notebooks/33_Decoding_Clusterless.ipynb
- LFP:
- LFP: notebooks/30_LFP.ipynb
- Theta: notebooks/31_Theta.ipynb
- Ripple Detection: notebooks/32_Ripple_Detection.ipynb
- Decoding:
- Extract Clusterless: notebooks/41_Extracting_Clusterless_Waveform_Features.ipynb
- Decoding Clusterless: notebooks/42_Decoding_Clusterless.ipynb
- Decoding Sorted Spikes: notebooks/43_Decoding_SortedSpikes.ipynb
- API Reference: api/ # defer to gen-files + literate-nav
- How to Contribute: contribute.md
- Change Log: CHANGELOG.md
5 changes: 0 additions & 5 deletions docs/src/api/make_pages.py
@@ -28,11 +28,6 @@
else:
break

if add_limit is not None:
from IPython import embed

embed()


with mkdocs_gen_files.open("api/navigation.md", "w") as nav_file:
nav_file.write("* [Overview](../api/index.md)\n")
Binary file added docs/src/images/fig1.png
46 changes: 42 additions & 4 deletions docs/src/index.md
@@ -1,9 +1,47 @@
# Spyglass

**Spyglass** is a data analysis framework that facilitates the storage,
analysis, and sharing of neuroscience data to support reproducible research. It
is designed to be interoperable with the NWB format and integrates open-source
tools into a coherent framework.
![Figure 1](./images/fig1.png)

**Spyglass** is an open-source software framework designed to offer reliable
and reproducible analysis of neuroscience data and sharing of the results
with collaborators and the broader community.

Features of Spyglass include:

+ **Standardized data storage** - Spyglass uses the open-source
[Neurodata Without Borders: Neurophysiology (NWB:N)](https://www.nwb.org/)
format to ingest and store processed data. NWB:N is a standard set by the BRAIN
Initiative for neurophysiological data ([Rübel et al., 2022](https://doi.org/10.7554/elife.78362)).
+ **Reproducible analysis** - Spyglass uses [DataJoint](https://datajoint.com/)
to ensure that all analysis is reproducible. DataJoint is a data management
system that automatically tracks dependencies between data and analysis code,
so results are updated whenever the data or the analysis code changes.
+ **Common analysis tools** - Spyglass integrates the open-source packages
[SpikeInterface](https://github.com/SpikeInterface/spikeinterface),
[Ghostipy](https://github.com/kemerelab/ghostipy), and [DeepLabCut](https://github.com/DeepLabCut/DeepLabCut)
for common analysis tasks. These packages are well-documented and have active
developer communities.
+ **Interactive data visualization** - Spyglass uses [figurl](https://github.com/flatironinstitute/figurl)
to create interactive data visualizations that can be shared with collaborators
and the broader community. These visualizations are hosted on the web
and can be viewed in any modern web browser. The interactivity allows users to
explore the data and analysis results in detail.
+ **Sharing results** - Spyglass enables sharing of data and analysis results via
[Kachery](https://github.com/flatironinstitute/kachery-cloud), a
decentralized, content-addressable data sharing platform. Kachery Cloud allows
users to access the database and pull data and analysis results directly
to their local machine.
+ **Pipeline versioning** - Processing and analysis of data in neuroscience is
often dynamic, requiring new features. Spyglass uses *Merge tables* to ensure that
analysis pipelines can be versioned. This allows users to easily use and compare
results from different versions of the analysis pipeline while retaining
the ability to access previously generated results.
+ **Cautious Delete** - Spyglass uses a `cautious delete` feature to ensure
that data is not accidentally deleted by other users. When a user deletes data,
Spyglass will first check to see if the data belongs to another team of users.
This enables teams of users to work collaboratively on the same database without
worrying about accidentally deleting each other's data.
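The team-permission check described under *Cautious Delete* can be illustrated with a minimal sketch. This is a conceptual illustration only, not Spyglass's actual implementation; the function name, entry shape, and `owner` field are all hypothetical:

```python
# Conceptual sketch of a team-aware delete -- NOT Spyglass's real API.
# Each entry records its owner; the delete proceeds only when every
# targeted entry belongs to someone on the caller's team.

def cautious_delete(entries, caller, team_members):
    """Delete entries, refusing if any belong to a user outside the team."""
    outside = [e for e in entries if e["owner"] not in team_members]
    if outside:
        raise PermissionError(
            f"refusing delete: {len(outside)} entries are owned by users "
            f"outside {caller}'s team"
        )
    # All entries belong to the team; in a real system the rows would be
    # removed from the database here.
    return []

entries = [{"id": 1, "owner": "alice"}, {"id": 2, "owner": "bob"}]
try:
    cautious_delete(entries, "alice", {"alice"})
except PermissionError as err:
    print(err)  # bob's entry blocks the whole delete
```

Checking ownership *before* touching any rows is what lets teams share one database without risking each other's data.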

## Getting Started

10 changes: 2 additions & 8 deletions franklab_scripts/nightly_cleanup.py
@@ -5,14 +5,6 @@
# ignore datajoint+jupyter async warnings
import warnings

import numpy as np

from spyglass.decoding.clusterless import (
MarkParameters,
UnitMarkParameters,
UnitMarks,
)

warnings.simplefilter("ignore", category=DeprecationWarning)
warnings.simplefilter("ignore", category=ResourceWarning)
# NOTE: "SPIKE_SORTING_STORAGE_DIR" -> "SPYGLASS_SORTING_DIR"
@@ -21,12 +13,14 @@

# import tables so that we can call them easily
from spyglass.common import AnalysisNwbfile
from spyglass.decoding.decoding_merge import DecodingOutput
from spyglass.spikesorting import SpikeSorting


def main():
AnalysisNwbfile().nightly_cleanup()
SpikeSorting().nightly_cleanup()
DecodingOutput().cleanup()


if __name__ == "__main__":
2 changes: 1 addition & 1 deletion notebooks/04_PopulateConfigFile.ipynb
@@ -190,7 +190,7 @@
" \"DataAcquisitionDevice\",\n",
" \"- data_acquisition_device_name: data_acq_device0\",\n",
" ]\n",
" config_file.writelines(line + '\\n' for line in lines)"
" config_file.writelines(line + \"\\n\" for line in lines)"
]
},
{
302 changes: 80 additions & 222 deletions notebooks/20_Position_Trodes.ipynb

Large diffs are not rendered by default.

