Fix failing test from prev commit. Also LorenFrankLab#585
CBroz1 committed Oct 12, 2023
1 parent fe7fe97 commit fa3a033
Showing 8 changed files with 248 additions and 133 deletions.
109 changes: 7 additions & 102 deletions docs/src/misc/merge_tables.md
@@ -33,7 +33,7 @@ pipeline. By convention...
from spyglass.utils.dj_merge_tables import _Merge

@schema
class MergeTable(_Merge):
class MergeOutput(_Merge):
definition = """
merge_id: uuid
---
@@ -57,6 +57,11 @@ class MergeTable(_Merge):

![Merge diagram](../images/merge_diagram.png)

By convention, Merge Tables are named with the pipeline name plus `Output`
(e.g., `LFPOutput`, `PositionOutput`). Importing the class under its underscore
alias lets us circumvent a DataJoint protection that would otherwise interpret
the imported class as a table itself.

## How

### Merging
@@ -106,104 +111,4 @@ is not present in the parent.

## Example

First, we'll import various items related to the LFP Merge Table...

```python
from spyglass.utils.dj_merge_tables import delete_downstream_merge, Merge
from spyglass.common.common_ephys import LFP as CommonLFP # Upstream 1
from spyglass.lfp.lfp_merge import LFPOutput # Merge Table
from spyglass.lfp.v1.lfp import LFPV1 # Upstream 2
```

Merge Tables have several custom methods whose names begin with `merge`.
`help` can show us the docstring of each:

```python
merge_methods = [d for d in dir(Merge) if d.startswith("merge")]
help(getattr(Merge, merge_methods[-1]))
```

We'll use this example to explore populating both `LFPV1` and the `LFPOutput`
Merge Table.

```python
nwb_file_dict = { # We'll use this later when fetching from the Merge Table
"nwb_file_name": "tonks20211103_.nwb",
}
lfpv1_key = {
**nwb_file_dict,
"lfp_electrode_group_name": "CA1_test",
"target_interval_list_name": "test interval2",
"filter_name": "LFP 0-400 Hz",
"filter_sampling_rate": 30000,
}
LFPV1.populate(lfpv1_key) # Also populates LFPOutput
```

The Merge Table can also be populated with keys from `common_ephys.LFP`.

```python
common_keys_CH = CommonLFP.fetch(limit=3, as_dict=True) # CH61
LFPOutput.insert1(common_keys_CH[0], skip_duplicates=True)
LFPOutput.insert(common_keys_CH[1:], skip_duplicates=True)
common_keys_J1 = CommonLFP.fetch(limit=3, offset=80, as_dict=True) # J16
LFPOutput.insert(common_keys_J1, skip_duplicates=True)
```

`merge_view` shows a union of the master and all part tables.

```python
LFPOutput.merge_view()
LFPOutput.merge_view(restriction=lfpv1_key)
```
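Conceptually, this view can be pictured as a union of rows from each part,
tagged with their source. The toy sketch below uses plain Python dicts with
hypothetical filenames — it is not DataJoint code, just an illustration of
the shape of the result:

```python
# Toy model of a merge view: each part table contributes rows, and the
# view unions them while recording which part each row came from.
parts = {
    "LFPV1": [{"merge_id": "a1", "nwb_file_name": "tonks20211103_.nwb"}],
    "CommonLFP": [{"merge_id": "b2", "nwb_file_name": "example_CH61_.nwb"}],
}
merge_view = [
    {**row, "source": source} for source, rows in parts.items() for row in rows
]
# One row per part entry, each carrying its source label
print(len(merge_view))
```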

UUIDs help retain unique entries across all part tables. We can fetch an NWB
file by referencing this or other features.

```python
uuid_key = LFPOutput.fetch(limit=1, as_dict=True)[-1]
restrict = LFPOutput & uuid_key
result1 = restrict.fetch_nwb()

nwb_key = LFPOutput.merge_restrict(nwb_file_dict).fetch(as_dict=True)[0]
result2 = (LFPOutput & nwb_key).fetch_nwb()
```
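Why UUIDs serve this role can be illustrated with a stdlib sketch: hashing
each source's key into a deterministic UUID gives every part entry a stable,
collision-resistant identifier. This is only an illustration — Spyglass's
actual `merge_id` generation may differ, and the keys below are hypothetical:

```python
import json
import uuid

def key_to_uuid(key: dict) -> uuid.UUID:
    # Hash a primary-key dict into a deterministic UUID (uuid5 over its
    # sorted-JSON form). Illustrative only; not Spyglass's implementation.
    return uuid.uuid5(uuid.NAMESPACE_OID, json.dumps(key, sort_keys=True))

v1_key = {"source": "LFPV1", "nwb_file_name": "tonks20211103_.nwb"}
common_key = {"source": "CommonLFP", "nwb_file_name": "tonks20211103_.nwb"}

# Different sources yield different ids; the same key always maps to the
# same id, so entries stay unique and stable across part tables.
assert key_to_uuid(v1_key) != key_to_uuid(common_key)
assert key_to_uuid(v1_key) == key_to_uuid(v1_key)
```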

There are also functions for retrieving part/parent table(s) and fetching data.

1. These `get` functions will either return the part table of the Merge table or
the parent table with the source information for that part.

2. This `fetch` will collect all relevant entries and return them as a list in
the format specified by keyword arguments and one's DataJoint config.

```python
result4 = LFPOutput.merge_get_part(restriction=common_keys_CH[0], join_master=True)
result5 = LFPOutput.merge_get_parent(restriction='nwb_file_name LIKE "CH%"')
result6 = result5.fetch('lfp_sampling_rate') # Sample rate for all CH* files
result7 = LFPOutput.merge_fetch("filter_name", "nwb_file_name")
result8 = LFPOutput.merge_fetch(as_dict=True)
```

When deleting from Merge Tables, we can...

1. delete from the Merge Table itself with `merge_delete`, deleting both
   the master and part entries.

2. use `merge_delete_parent` to delete from the parent sources, removing
   the entries from the source tables they came from.

3. use `delete_downstream_merge` to find downstream Merge Tables and delete
   their full entries, avoiding orphaned master table entries.

The latter two cases can be destructive, so we include an extra layer of
protection with `dry_run`. When true (the default), these functions return
a list of the tables and entries that would otherwise be deleted.

```python
LFPOutput.merge_delete(common_keys_CH[0]) # Delete from merge table
LFPOutput.merge_delete_parent(restriction=nwb_file_dict, dry_run=True)
delete_downstream_merge(
table=CommonLFP, restriction=common_keys_CH[0], dry_run=True
)
```
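The `dry_run` safety pattern itself can be sketched generically. The helper
and rows below are hypothetical — a minimal sketch of the preview-then-delete
idea, assuming nothing about Spyglass's internals:

```python
def merge_delete_sketch(table: list, restriction: dict, dry_run: bool = True):
    # Find rows matching the restriction (subset match on key/value pairs)
    matches = [row for row in table if restriction.items() <= row.items()]
    if dry_run:
        return matches  # preview only: nothing is deleted
    for row in matches:
        table.remove(row)  # destructive path, taken only on request
    return matches

rows = [{"nwb_file_name": "a.nwb"}, {"nwb_file_name": "b.nwb"}]
preview = merge_delete_sketch(rows, {"nwb_file_name": "a.nwb"})
assert len(preview) == 1 and len(rows) == 2  # dry run left the table intact
merge_delete_sketch(rows, {"nwb_file_name": "a.nwb"}, dry_run=False)
assert rows == [{"nwb_file_name": "b.nwb"}]  # now actually deleted
```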
For example usage, see our Merge Table notebook.
20 changes: 9 additions & 11 deletions notebooks/01_Insert_Data.ipynb
@@ -2081,17 +2081,15 @@
" group_name=\"test\",\n",
" electrode_list=[0],\n",
")\n",
"lfp.v1.LFPSelection.insert1(\n",
" {\n",
" \"nwb_file_name\": nwb_copy_file_name,\n",
" \"lfp_electrode_group_name\": \"test\",\n",
" \"target_interval_list_name\": \"01_s1\",\n",
" \"filter_name\": \"LFP 0-400 Hz\",\n",
" \"filter_sampling_rate\": 30_000,\n",
" },\n",
" skip_duplicates=True,\n",
")\n",
"lfp.v1.LFPV1().populate()\n",
"lfp_key = {\n",
" \"nwb_file_name\": nwb_copy_file_name,\n",
" \"lfp_electrode_group_name\": \"test\",\n",
" \"target_interval_list_name\": \"01_s1\",\n",
" \"filter_name\": \"LFP 0-400 Hz\",\n",
" \"filter_sampling_rate\": 30_000,\n",
"}\n",
"lfp.v1.LFPSelection.insert1(lfp_key, skip_duplicates=True)\n",
"lfp.v1.LFPV1().populate(lfp_key)\n",
"```\n",
"</details>\n",
"<details>\n",
4 changes: 2 additions & 2 deletions notebooks/02_Data_Sync.ipynb
@@ -630,8 +630,8 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"In the [next notebook](./10_Spike_Sorting.ipynb), we'll start working with \n",
"ephys data with spike sorting."
"In the [next notebook](./03_Merge_Tables.ipynb), we'll explore the details of a\n",
"table tier unique to Spyglass, Merge Tables."
]
}
],
1 change: 1 addition & 0 deletions notebooks/README.md
@@ -10,6 +10,7 @@ described in the categories below.

Everyone should complete the Setup and Insert Data notebooks. Data Sync is an
optional additional tool for collaborators that want to share analysis files.
Merge Tables covers a new table tier unique to Spyglass.

## 1. Electrophysiology

20 changes: 9 additions & 11 deletions notebooks/py_scripts/01_Insert_Data.py
@@ -327,17 +327,15 @@
# group_name="test",
# electrode_list=[0],
# )
# lfp.v1.LFPSelection.insert1(
# {
# "nwb_file_name": nwb_copy_file_name,
# "lfp_electrode_group_name": "test",
# "target_interval_list_name": "01_s1",
# "filter_name": "LFP 0-400 Hz",
# "filter_sampling_rate": 30_000,
# },
# skip_duplicates=True,
# )
# lfp.v1.LFPV1().populate()
# lfp_key = {
# "nwb_file_name": nwb_copy_file_name,
# "lfp_electrode_group_name": "test",
# "target_interval_list_name": "01_s1",
# "filter_name": "LFP 0-400 Hz",
# "filter_sampling_rate": 30_000,
# }
# lfp.v1.LFPSelection.insert1(lfp_key, skip_duplicates=True)
# lfp.v1.LFPV1().populate(lfp_key)
# ```
# </details>
# <details>
4 changes: 2 additions & 2 deletions notebooks/py_scripts/02_Data_Sync.py
@@ -274,5 +274,5 @@

# # Up Next

# In the [next notebook](./10_Spike_Sorting.ipynb), we'll start working with
# ephys data with spike sorting.
# In the [next notebook](./03_Merge_Tables.ipynb), we'll explore the details of a
# table tier unique to Spyglass, Merge Tables.
