Commit

Fetch upstream, resolve conflicts

CBroz1 committed Jul 12, 2023
2 parents 9134ba5 + 94f6b93 commit ece6e43
Showing 16 changed files with 637 additions and 502 deletions.
7 changes: 2 additions & 5 deletions .github/workflows/publish-docs.yml
@@ -1,11 +1,8 @@
name: Publish docs
on:
pull_request:
branches:
- master
types:
- closed
push:
tags: # See PEP 440 for valid version format
- "*.*.*" # For docs bump, use X.X.XaX
branches:
- test_branch

28 changes: 19 additions & 9 deletions CHANGELOG.md
@@ -1,11 +1,11 @@
# Change Log

## 0.4.1 (Unreleased)
## [0.4.1] (June 30, 2023)

- Add mkdocs automated deployment. #527, #537, #549, #551
- Add class for Merge Tables. #556, #564
- Add class for Merge Tables. #556, #564, #565

## 0.4.0 (May 22, 2023)
## [0.4.0] (May 22, 2023)

- Updated call to `spikeinterface.preprocessing.whiten` to use dtype np.float16.
#446,
@@ -33,7 +33,7 @@
- Updated `environment_position.yml`. #502
- Renamed `FirFilter` class to `FirFilterParameters`. #512

## 0.3.4 (March 30, 2023)
## [0.3.4] (March 30, 2023)

- Fixed error in spike sorting pipeline referencing the "probe_type" column
which is no longer accessible from the `Electrode` table. #437
@@ -44,18 +44,28 @@
- Fixed inconsistency between capitalized/uncapitalized versions of "Intan" for
DataAcquisitionAmplifier and DataAcquisitionDevice.adc_circuit. #430, #438

## 0.3.3 (March 29, 2023)
## [0.3.3] (March 29, 2023)

- Fixed errors from referencing the changed primary key for `Probe`. #429

## 0.3.2 (March 28, 2023)
## [0.3.2] (March 28, 2023)

- Fixed import of `common_nwbfile`. #424

## 0.3.1 (March 24, 2023)
## [0.3.1] (March 24, 2023)

- Fixed import error due to `sortingview.Workspace`. #421

## 0.3.0 (March 24, 2023)
## [0.3.0] (March 24, 2023)

To be added.
- Refactor common for non Frank Lab data, allow file-based mods #420
- Allow creation and linkage of device metadata from YAML #400
- Move helper functions to utils directory #386

[0.4.1]: https://github.com/LorenFrankLab/spyglass/releases/tag/0.4.1
[0.4.0]: https://github.com/LorenFrankLab/spyglass/releases/tag/0.4.0
[0.3.4]: https://github.com/LorenFrankLab/spyglass/releases/tag/0.3.4
[0.3.3]: https://github.com/LorenFrankLab/spyglass/releases/tag/0.3.3
[0.3.2]: https://github.com/LorenFrankLab/spyglass/releases/tag/0.3.2
[0.3.1]: https://github.com/LorenFrankLab/spyglass/releases/tag/0.3.1
[0.3.0]: https://github.com/LorenFrankLab/spyglass/releases/tag/0.3.0
22 changes: 14 additions & 8 deletions docs/src/misc/merge_tables.md
@@ -70,12 +70,19 @@ These functions are described in the

### Restricting

One quirk of these utilities is that they take restrictions as arguments,
rather than with operators. So `Table & "field='value'"` becomes
`MergeTable.merge_view(restriction={'field':'value}`). This is because
`merge_view` is a `Union` rather than a true Table. While `merge_view` can
accept all valid restrictions, `merge_get_part` and `merge_get_parent` have
additional restriction logic when supplied with `dicts`.
In short: restrict Merge Tables with arguments, not the `&` operator.

- Normally: `Table & "field='value'"`
- Instead: `MergeTable.merge_view(restriction="field='value'")`.

_Caution_. The `&` operator may look like it works with a `dict`, but this is
because invalid keys are silently ignored. `Master & {'part_field':'value'}`
is equivalent to `Master` alone
([source](https://docs.datajoint.org/python/queries/06-Restriction.html#restriction-by-a-mapping)).

When provided as arguments, methods like `merge_get_part` and `merge_get_parent`
will override the permissive treatment of mappings described above to only
return relevant tables.
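
The permissive mapping behavior can be illustrated with a plain-Python analog
(a sketch only; the row data and the `part_field` key are hypothetical, and
real DataJoint restriction involves table heading logic beyond this):

```python
# Plain-Python analog of DataJoint's mapping restriction: keys that are
# not attributes of the table are ignored, so restricting by an unknown
# key filters out nothing.
rows = [{"field": "a"}, {"field": "b"}]


def restrict(rows, mapping):
    # Only keys that actually exist in a row participate in the match.
    return [
        r for r in rows
        if all(r[k] == v for k, v in mapping.items() if k in r)
    ]


print(restrict(rows, {"field": "a"}))       # matches one row
print(restrict(rows, {"part_field": "x"}))  # unknown key -> all rows pass
```

This is why `merge_get_part` and `merge_get_parent` apply their own stricter
handling of mappings rather than relying on the `&` operator.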

### Building Downstream

@@ -171,8 +178,7 @@ There are also functions for retrieving part/parent table(s) and fetching data.
the format specified by keyword arguments and one's DataJoint config.

```python
result3 = (LFPOutput & common_keys_CH[0]).merge_get_part(join_master=True)
result4 = LFPOutput().merge_get_part(restriction=common_keys_CH[0])
result4 = LFPOutput.merge_get_part(restriction=common_keys_CH[0], join_master=True)
result5 = LFPOutput.merge_get_parent(restriction='nwb_file_name LIKE "CH%"')
result6 = result5.fetch('lfp_sampling_rate') # Sample rate for all CH* files
result7 = LFPOutput.merge_fetch("filter_name", "nwb_file_name")
```
2 changes: 1 addition & 1 deletion environment.yml
@@ -5,7 +5,7 @@ channels:
- franklab
- edeno
dependencies:
- python>=3.8,<3.10
- python>=3.9,<3.10
- jupyterlab>=3.*
- pydotplus
- dask
2 changes: 1 addition & 1 deletion environment_position.yml
@@ -15,7 +15,7 @@ channels:
- franklab
- edeno
dependencies:
- python>=3.8, <3.10
- python>=3.9, <3.10
- jupyterlab>=3.*
- pydotplus>=2.0.*
- libgcc
4 changes: 2 additions & 2 deletions pyproject.toml
@@ -6,7 +6,7 @@ build-backend = "hatchling.build"
name = "spyglass-neuro"
description = "Neuroscience data analysis framework for reproducible research"
readme = "README.md"
requires-python = ">=3.8,<3.10"
requires-python = ">=3.9,<3.10"
license = { file = "LICENSE" }
authors = [
{ name = "Loren Frank", email = "[email protected]" },
@@ -78,7 +78,7 @@ test = [
"kachery-cloud",
]
docs = [
"hatch", # Get version from env
"hatch", # Get version from env
"mike", # Docs versioning
"mkdocs", # Docs core
"mkdocs-exclude", # Docs exclude files
14 changes: 8 additions & 6 deletions src/spyglass/lfp/v1/lfp_artifact.py
@@ -96,7 +96,7 @@ def make(self, key):
).fetch1("artifact_params")

artifact_detection_algorithm = artifact_params[
"ripple_detection_algorithm"
"artifact_detection_algorithm"
]
artifact_detection_params = artifact_params[
"artifact_detection_algorithm_params"
@@ -121,11 +121,13 @@ def make(self, key):
# set up a name for no-artifact times using recording id
# we need some name here for recording_name
key["artifact_removed_interval_list_name"] = "_".join(
key["nwb_file_name"],
key["target_interval_list_name"],
"LFP",
key["artifact_params_name"],
"artifact_removed_valid_times",
[
key["nwb_file_name"],
key["target_interval_list_name"],
"LFP",
key["artifact_params_name"],
"artifact_removed_valid_times",
]
)

LFPArtifactRemovedIntervalList.insert1(key, replace=True)
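
The wrapping of the name pieces in a list above matters because `str.join`
takes a single iterable, not separate positional arguments; the old call
raised `TypeError` at runtime. A minimal sketch (the file, interval, and
params names here are hypothetical placeholders):

```python
# str.join accepts one iterable argument; passing each piece as its own
# positional argument (as the pre-fix code did) raises TypeError.
parts = [
    "file.nwb",       # hypothetical nwb_file_name
    "interval01",     # hypothetical target_interval_list_name
    "LFP",
    "default",        # hypothetical artifact_params_name
    "artifact_removed_valid_times",
]
name = "_".join(parts)
print(name)  # file.nwb_interval01_LFP_default_artifact_removed_valid_times

try:
    "_".join("a", "b")  # old-style call: more than one argument
except TypeError:
    print("join() rejects multiple positional arguments")
```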