diff --git a/CHANGES.md b/CHANGES.md index 380660df0..08cfd6132 100644 --- a/CHANGES.md +++ b/CHANGES.md @@ -1,3 +1,17 @@ +# Version 1.0.2 + +**New features** + +- Added back the syncLines argument to plotRaster() + +- Updated conversion to/from NeuroML 2 + +- Added RxD-based spreading depression example model + +**Bug fixes** + +- Avoid exception in plotSpikeHist() and plotSpikeFreq() when pop has no spikes + # Version 1.0.1 **New features** diff --git a/doc/source/about.rst b/doc/source/about.rst index 1e8332ba9..81290729b 100644 --- a/doc/source/about.rst +++ b/doc/source/about.rst @@ -199,7 +199,7 @@ Cite NetPyNE - Gast, R., Rose, D., Salomon, C., Möller, H.E., Weiskopf, N. and Knösche, T.R.. **PyRates—A Python framework for rate-based neural simulations.** PLoS ONE, 14(12), *2019*. doi: ``_. -- Tejada J, Roque AC, **Conductance-based models and the fragmentation problem: A case study based on hippocampal CA1 pyramidal cell models and epilepsy** `Epilepsy & Behavior, 106841, *2019*. doi: ``_. +- Tejada J, Roque AC, **Conductance-based models and the fragmentation problem: A case study based on hippocampal CA1 pyramidal cell models and epilepsy** Epilepsy & Behavior, 106841, *2019*. doi: ``_. - Kuhl E, Alber M, Tepole BA, Cannon WR, De S, Dura-Bernal S, Garikipati K, Karniadakis GE, Lytton WW, Perdikaris P, Petzold L. **Multiscale modeling meets machine learning: What can we learn?** arXiv:1911.11958. doi: ``_. [Preprint]. *Under review in Computer Methods in Applied Mechanics and Engineering. 2019* diff --git a/doc/source/conf.py b/doc/source/conf.py index f4ca03571..4ed9f7ee3 100644 --- a/doc/source/conf.py +++ b/doc/source/conf.py @@ -65,9 +65,9 @@ # built documents. # # The short X.Y version. -version = '1.0.1' +version = '1.0.2' # The full version, including alpha/beta/rc tags. -release = '1.0.1' +release = '1.0.2' # The language for content autogenerated by Sphinx. Refer to documentation # for a list of supported languages. diff --git a/doc/source/reference.rst b/doc/source/reference.rst index 2b2ccaf4f..87c62a7da 100644 --- a/doc/source/reference.rst +++ b/doc/source/reference.rst @@ -908,7 +908,7 @@ Related to recording: * **recordTraces** - Dict of traces to record (default: {} ; example: {'V_soma':{'sec':'soma','loc':0.5,'var':'v'}}) * **recordSpikesGids** - List of cells to record spike times from (-1 to record from all). Can include cell gids (e.g. 5), population labels (e.g. 'S' to record from one cell of the 'S' population), or 'all', to record from all cells. (default: -1) * **recordStim** - Record spikes of cell stims (default: False) -* **recordLFP** - 3D locations of local field potential (LFP) electrodes, e.g. [[50, 100, 50], [50, 200, 50]] (note the y coordinate represents depth, so will be represented as a negative value when plotted). The LFP signal in each electrode is obtained by summing the extracellular potential contributed by each neuronal segment, calculated using the "line source approximation" and assuming an Ohmic medium with conductivity |sigma| = 0.3 mS/mm. Stored in ``sim.allSimData['LFP']``. (default: False). +* **recordLFP** - 3D locations of local field potential (LFP) electrodes, e.g. [[50, 100, 50], [50, 200, 50]] (note the y coordinate represents depth, so will be represented as a negative value when plotted). The LFP signal in each electrode is obtained by summing the extracellular potential contributed by each neuronal segment, calculated using the "line source approximation" and assuming an Ohmic medium with conductivity sigma = 0.3 mS/mm. 
Stored in ``sim.allSimData['LFP']``. (default: False). * **saveLFPCells** - Store LFP generated individually by each cell in ``sim.allSimData['LFPCells']``; can select a subset of cells to save e.g. [3, 'PYR', ('PV2', 5)] * **recordStep** - Step size in ms for data recording (default: 0.1) @@ -1287,6 +1287,163 @@ A representation of the instantiated network structure generated by NetPyNE is s :align: center +NetPyNE data model: Structure of instantiated network +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +The model is instantiated in **sim.net** as a **Network** object. This object has a number of intrinsic attributes, and others may or may not be present depending on the particular implementation. The two basic elements composing "sim.net" are: + +* **pops** - An ordered dictionary describing all populations implemented in the network. Its entries are a key labeling the population (as defined in ``netParams.popParams``) and a value referencing the **Pop** object. This object has a number of attributes: + + * **tags**: Dictionary describing different attributes of the population, related to the declaration rule that defines it. For example: + + ``sim.net.pops['Exc_L4'].tags = {'cellType': 'PYR', 'numCells': 100, 'pop': 'Exc_L4'}`` + + * **cellGids**: List of neuron identifiers (global ids) composing the population. For example: + + ``sim.net.pops['Exc_L4'].cellGids = [0, 1, 2, 3, 4]`` + + * **cellModelClass**: Type describing which particular cell model class was implemented (e.g. ``compartCell`` or ``pointCell``). + +* **cells** - A list of cells instantiated **locally** (this distinction is important when implementing parallel computing). Each element in the list is a **Cell** object (among different cell model classes). As a Cell object, it has a number of intrinsic attributes: + + * **gid**: cell global identification number + + * **tags**: Dictionary describing different attributes of the cell, which may include some tags from the population it belongs to (tags listed in ``netParams.popTagsCopiedToCells``, for example 'cellType'). These attributes are: + + * **pop**: Label of the population it belongs to. For example: + + ``sim.net.cells[0].tags['pop'] = 'Exc_L4'`` + + * **x**, **y**, **z**: Cell's x-, y-, z-coordinates. For example: + + ``sim.net.cells[0].tags['x'] = -2.3535`` + + * **xnorm**, **ynorm**, **znorm**: Cell's x-, y-, z-normalized coordinates. + + * Other tags depending on particular situations (for example, when declaring populations in ``netParams.popParams`` with a ``cellsList`` including ``params``). + + * **conns**: List of connections (with this cell on the post-synaptic side). Each element in this list (a particular connection) is a dictionary with the following entries: + + * **preGid**: Identity (gid) of the presynaptic cell. If this connection arises from a stimulation that requires a connection, the value will be 'NetStim'. + + * **weight**: Weight of the connection. + + * **delay**: Delay of the connection. + + * **synMech**: Label of the synaptic mechanism governing this connection. + + * Depending on different scenarios, other entries will appear: + + * If the cell model class is NOT a point process, the synapse is placed at a particular section/location, so these entries will appear: + * **sec**: Section hosting the synapse. + + * **loc**: Location within this section where the synapse is placed.
+ + * If the connection is specified as a gap junction (``netParams.connParams['rule']['gapJunction'] = True``, only available for the 'compartCell' cell model class), entries associated with this kind of connection will appear: + + * **gapJunction**: Connection side ('pre' or 'post'), as specified by the rule. + + * **preLoc**: Gap junction location at the pre-synaptic side, as specified by the rule (for cells at 'post'). + + * **gapId**: Id of the cell at its connection side. + + * **preGapId**: Id of the cell at the other connection side. + + * **shape**, **plast**, **weightIndex**: Entries used for implementing specific connectivity patterns or processes. + + * **hObj**: Except for the case of gap junctions, all other connections are implemented via NEURON ``NetCon`` objects. The object itself is the value corresponding to this 'hObj' key. So, the NEURON object can be accessed (and modified) with, for example, + + ``sim.net.cells[0].conns[0]['hObj']`` + + In particular, methods corresponding to these objects can be called from the command line (once the network has been instantiated). For example: + + ``sim.net.cells[0].conns[0]['hObj'].syn()``. + + * **stims**: List of stimuli incoming to the cell. Each element in this list is a dictionary with the following entries: + + * **label**: Label of the rule specifying the target parameters. + + * **source**: Label of the source rule that defines the incoming stimulus. + + * **type**: Type of stimulus. For example: + + ``sim.net.cells[0].stims[0]['type'] = 'NetStim'`` or ``sim.net.cells[0].stims[0]['type'] = 'IClamp'`` + + * **sec**: Section targeted by the stimulus. + + * **loc**: Location within the section where the stimulus impinges. + + * Properties defining the incoming stimuli. For example, 'rate', 'noise', 'start', and 'number' for **NetStim** (plus 'seed' to initialize the random generator). + + * **hObj**: NEURON object associated with the stimulus. For example: + + ``sim.net.cells[0].stims[0]['hObj']`` + + Methods associated with the NEURON object are readily available. For example, if the object is an 'IClamp', its duration can be changed via ``sim.net.cells[0].stims[0]['hObj'].dur`` + + + * Depending on the cell model class, other entries are available. In particular, for the **'compartCell'** cell model class, the following: + + * **secs**: Dictionary with the sections composing the cell. Each section in this dictionary contains the following entries: + + * **geom**: Dictionary with the geometrical properties of the cylinder composing the section. For example, + + ``sim.net.cells[0].secs['soma']['geom'] = {diam: 18.8, L: 18.8, Ra: 123.0}`` + + * **topol**: Dictionary with the specifications of the topology (how it is connected to other sections). + + * **mechs**: Dictionary with distributed mechanisms. For example, + + ``sim.net.cells[0].secs['soma']['mechs']['hh'] = {gnabar: 0.12, gkbar: 0.036, gl: 0.003, el: -70}`` + + * **ions**: Dictionary specifying ionic mechanisms included in the section. + + * **synMechs**: List of all synapses located at this section. Each element in this list is a dictionary with all the information related to it: + + * **label**: Label describing the rule that specifies the synaptic dynamics. + + * **loc**: Location within the section where the synapse is placed. + + * properties associated with the definition of the synaptic model. For example, 'tau1', 'e', etcetera. + + * **hObj**: NEURON object associated with the synapse.
For example, + + ``sim.net.cells[0].secs['soma']['synMechs'][0]['hObj'] = Exp2Syn[0]`` + + * **pointps**: Dictionary specifying a point process governing the voltage dynamics of the section, including the name of the 'mod' file where the nonlinear mechanism is defined and all necessary parameters. + + * Other scalar properties: **spikeGenLoc**, **vinit**, etcetera. + + * **hObj**: NEURON object associated with the section. For example, + + ``type(sim.net.cells[0].secs['soma']['hObj']) = nrn.Section`` + + +* Beyond these two elements ('pops' and 'cells'), there are a number of other elements defining and composing "sim.net": + + * **allPops**: Dictionary with all populations composing the network. Each key corresponds to a defined population, and its value is a dictionary with information about it ('tags', 'cellGids', etc). + + * **allCells**: List with information about all cells (global). Each element/cell is a dictionary with all the information defining this cell. + + + * **params**: netParams specifying the network. + + * **rxd**: Dictionary with all the specifications of a Reaction-Diffusion model (same format as in ``netParams.rxdParams``) PLUS the associated NEURON objects. + + * **recXElectrode**: Object associated with LFP recording (if any). + + * **compartCells** and **popForEachGid**: List of compartmental cell objects and a dictionary, {'gid': 'pop'}, used during LFP and dipole recording. + + * **cells_dpls** and **cells_dpl**: Dictionaries with vectors of dipoles for each cell, over time and at a single time respectively, used during LFP and dipole recording. + + * **gid2lid**: Dictionary mapping global to local ids, {'gid': 'lid'}. + + * **lastGid**: Gid of the last cell defined. Useful while the network is being built. + + * **lastGapId**: Id of the last gap junction defined. Useful while the network is being built. + + * **preGapJunctions**: List storing information about the presynaptic side of gap junctions, according to the direction given in the connectivity rule. + + .. _dicts_dotnotation: Accessing dictionaries using dot notation: Dict and ODict classes @@ -1442,10 +1599,57 @@ Cell class - 'delay' -Simulation output data (spikes, etc) -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +Simulation output data +^^^^^^^^^^^^^^^^^^^^^^ + +The simulation output data are stored in the dictionaries ``sim.simData`` and ``sim.allSimData``. The former should be used in a single-process environment, while the latter contains data gathered from all nodes in a parallel context. +The contents of the simulation output data depend on the settings in ``simConfig``, and may include the following. + +**1. Traces of cells** + +Keys of ``simData`` correspond to keys of ``simConfig.recordTraces`` (e.g. 'V_soma'), with the value of each trace being in turn a dictionary over the cells for which this trace was recorded. The recorded traces will be stored as ``h.Vector`` for each cell. For example, for the simConfig settings:: + + simConfig.recordTraces = {'V_soma':{'sec':'soma','loc':0.5,'var':'v'}, + 'Ina_soma':{'sec':'soma','loc':0.5,'var':'ina'}} + simConfig.recordCells = [1, 3] + simConfig.recordStep = 0.1 + +the ``simData`` dictionary will contain (among others):: + + {'V_soma': {'cell_1': Vector, 'cell_3': Vector}, + 'Ina_soma': {'cell_1': Vector, 'cell_3': Vector}} + + +where the length of each Vector depends on ``simConfig.duration`` and ``simConfig.recordStep``. + +**2. Spikes** + +``spkt``, ``spkid`` - ordered lists of spike times and cell gids for each spike.
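+
+For example, a minimal sketch (illustrative only, assuming spikes were recorded and the data gathered into ``sim.allSimData``) that groups the spike times by cell gid::
+
+    spkts = sim.allSimData['spkt']      # spike times (ms)
+    spkids = sim.allSimData['spkid']    # gid of the cell that fired each spike
+    spikesByCell = {}                   # illustrative helper dict: {gid: [spike times]}
+    for t, gid in zip(spkts, spkids):
+        spikesByCell.setdefault(gid, []).append(t)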
+ + The ``simConfig.recordCellsSpikes`` option (True by default) can be used to record spikes only from a subset of cells, or to turn off spike recording. + +**3. Stimuli to the network** + +``stims``. For each population of ``NetStim`` or ``VecStim`` it contains the list of spike times (``h.Vector``) for each target cell. +Only available if ``simConfig.recordStim`` is ``True``. + +**4. LFP-related data** + +``LFP``, ``LFPCells``, ``LFPPops``. Depend on the ``recordLFP``, ``saveLFPCells``, ``saveLFPPops`` options in ``simConfig``. + +- ``LFP`` - np.array of LFP values of shape ``(num timesteps, num electrodes)`` +- ``LFPCells`` - dictionary of LFP data recorded from each cell (in the same format as above) +- ``LFPPops`` - dictionary of average LFP data recorded from each population (in the same format as above) + +**5. Dipole-related data** + +``dipoleSum``, ``dipoleCells``, ``dipolePops``. Depend on the ``recordDipole``, ``saveDipoleCells``, ``saveDipolePops`` options in ``simConfig``. + +- ``dipoleSum`` - sum of current dipole moments at each timestep ``(num timesteps, 3)``; can be used for EEG/MEG calculation +- ``dipoleCells`` - dictionary of current dipole moments recorded for each cell (in the same format as above) +- ``dipolePops`` - dictionary of average current dipole moments recorded for each population (in the same format as above) + -- sim.allSimData (Dict) Data saved to file @@ -1454,4 +1658,4 @@ Data saved to file * simConfig * netParams * net -* simData \ No newline at end of file +* simData diff --git a/examples/HHTut/HHTut.py b/examples/HHTut/HHTut.py index b1ac9a37d..9ff40e99a 100644 --- a/examples/HHTut/HHTut.py +++ b/examples/HHTut/HHTut.py @@ -78,3 +78,6 @@ simConfig.analysis['plotRaster'] = {'saveData': 'raster_data.json', 'saveFig': True, 'showFig': True} # Plot raster simConfig.analysis['plotTraces'] = {'include': [2], 'saveFig': True, 'showFig': True} # Plot cell traces simConfig.analysis['plot2Dnet'] = {'saveFig': True, 'showFig': True} # Plot 2D cells and connections + + +simConfig.validateNetParams=True \ No newline at end of file diff --git a/examples/HHTut/HHTut_export.py b/examples/HHTut/HHTut_export.py index 76295bc96..82539ae42 100644 --- a/examples/HHTut/HHTut_export.py +++ b/examples/HHTut/HHTut_export.py @@ -21,3 +21,5 @@ from netpyne.conversion import createPythonScript createPythonScript('HHTut_regen.py', HHTut.netParams, HHTut.simConfig) + + diff --git a/examples/NeuroMLImport/RS.mod b/examples/NeuroMLImport/RS.mod index b35853f95..730841841 100644 --- a/examples/NeuroMLImport/RS.mod +++ b/examples/NeuroMLImport/RS.mod @@ -3,9 +3,9 @@ TITLE Mod file for component: Component(id=RS type=izhikevich2007Cell) COMMENT This NEURON file has been generated by org.neuroml.export (see https://github.com/NeuroML/org.neuroml.export) - org.neuroml.export v1.8.0 - org.neuroml.model v1.8.0 - jLEMS v0.10.5 + org.neuroml.export v1.8.1 + org.neuroml.model v1.8.1 + jLEMS v0.10.6 ENDCOMMENT diff --git a/examples/NeuroMLImport/poissonFiringSyn.mod b/examples/NeuroMLImport/poissonFiringSyn.mod index 1d4ad2d27..396e35664 100644 --- a/examples/NeuroMLImport/poissonFiringSyn.mod +++ b/examples/NeuroMLImport/poissonFiringSyn.mod @@ -3,9 +3,9 @@ TITLE Mod file for component: Component(id=poissonFiringSyn type=poissonFiringSy COMMENT This NEURON file has been generated by org.neuroml.export (see https://github.com/NeuroML/org.neuroml.export) - org.neuroml.export v1.8.0 - org.neuroml.model v1.8.0 - jLEMS
v0.10.6 ENDCOMMENT diff --git a/examples/spreadingDepression/Neuron.py b/examples/spreadingDepression/Neuron.py new file mode 100644 index 000000000..f9863cbdd --- /dev/null +++ b/examples/spreadingDepression/Neuron.py @@ -0,0 +1,103 @@ +from neuron import h +from cfg import cfg +import numpy as np + +class ENeuron: + """ A neuron with soma and dendrite with; fast and persistent sodium + currents, potassium currents, passive leak and potassium leak and an + accumulation mechanism. """ + def __init__(self): + self.soma = h.Section(name='Esoma', cell=self) + # add 3D points to locate the neuron in the ECS + self.soma.pt3dadd(0.0, 0.0, 0.0, 2.0 * cfg.somaR) + self.soma.pt3dadd(0.0, 2.0 * cfg.somaR, 0.0, 2.0 * cfg.somaR) + if cfg.epas: + self.soma.insert('pas') + self.soma(0.5).pas.e = cfg.epas + self.soma(0.5).pas.g = cfg.gpas + +class E2Neuron: + """ A neuron with soma and dendrite with; fast and persistent sodium + currents, potassium currents, passive leak and potassium leak and an + accumulation mechanism. """ + def __init__(self): + self.soma = h.Section(name='E2soma', cell=self) + # add 3D points to locate the neuron in the ECS + self.soma.pt3dadd(0.0, 0.0, 0.0, 2.0 * cfg.somaR) + self.soma.pt3dadd(0.0, 2.0 * cfg.somaR, 0.0, 2.0 * cfg.somaR) + if cfg.epas: + self.soma.insert('pas') + self.soma(0.5).pas.e = cfg.epas + self.soma(0.5).pas.g = cfg.gpas + +class I2Neuron: + """ A neuron with soma and dendrite with; fast and persistent sodium + currents, potassium currents, passive leak and potassium leak and an + accumulation mechanism. """ + def __init__(self): + self.soma = h.Section(name='I2soma', cell=self) + # add 3D points to locate the neuron in the ECS + self.soma.pt3dadd(0.0, 0.0, 0.0, 2.0 * cfg.somaR) + self.soma.pt3dadd(0.0, 2.0 * cfg.somaR, 0.0, 2.0 * cfg.somaR) + if cfg.epas: + self.soma.insert('pas') + self.soma(0.5).pas.e = cfg.epas + self.soma(0.5).pas.g = cfg.gpas + +class E4Neuron: + """ A neuron with soma and dendrite with; fast and persistent sodium + currents, potassium currents, passive leak and potassium leak and an + accumulation mechanism. """ + def __init__(self): + self.soma = h.Section(name='E4soma', cell=self) + # add 3D points to locate the neuron in the ECS + self.soma.pt3dadd(0.0, 0.0, 0.0, 2.0 * cfg.somaR) + self.soma.pt3dadd(0.0, 2.0 * cfg.somaR, 0.0, 2.0 * cfg.somaR) + if cfg.epas: + self.soma.insert('pas') + self.soma(0.5).pas.e = cfg.epas + self.soma(0.5).pas.g = cfg.gpas + +class I4Neuron: + """ A neuron with soma and dendrite with; fast and persistent sodium + currents, potassium currents, passive leak and potassium leak and an + accumulation mechanism. """ + def __init__(self): + self.soma = h.Section(name='I4soma', cell=self) + # add 3D points to locate the neuron in the ECS + self.soma.pt3dadd(0.0, 0.0, 0.0, 2.0 * cfg.somaR) + self.soma.pt3dadd(0.0, 2.0 * cfg.somaR, 0.0, 2.0 * cfg.somaR) + if cfg.epas: + self.soma.insert('pas') + self.soma(0.5).pas.e = cfg.epas + self.soma(0.5).pas.g = cfg.gpas + +class E5Neuron: + """ A neuron with soma and dendrite with; fast and persistent sodium + currents, potassium currents, passive leak and potassium leak and an + accumulation mechanism. 
""" + def __init__(self): + self.soma = h.Section(name='E5soma', cell=self) + # add 3D points to locate the neuron in the ECS + self.soma.pt3dadd(0.0, 0.0, 0.0, 2.0 * cfg.somaR) + self.soma.pt3dadd(0.0, 2.0 * cfg.somaR, 0.0, 2.0 * cfg.somaR) + if cfg.epas: + self.soma.insert('pas') + self.soma(0.5).pas.e = cfg.epas + self.soma(0.5).pas.g = cfg.gpas + +class I5Neuron: + """ A neuron with soma and dendrite with; fast and persistent sodium + currents, potassium currents, passive leak and potassium leak and an + accumulation mechanism. """ + def __init__(self): + self.soma = h.Section(name='I5soma', cell=self) + # add 3D points to locate the neuron in the ECS + self.soma.pt3dadd(0.0, 0.0, 0.0, 2.0 * cfg.somaR) + self.soma.pt3dadd(0.0, 2.0 * cfg.somaR, 0.0, 2.0 * cfg.somaR) + if cfg.epas: + self.soma.insert('pas') + self.soma(0.5).pas.e = cfg.epas + self.soma(0.5).pas.g = cfg.gpas + +# v0.00 - classes for each cell type in network \ No newline at end of file diff --git a/examples/spreadingDepression/README.md b/examples/spreadingDepression/README.md new file mode 100644 index 000000000..21b066626 --- /dev/null +++ b/examples/spreadingDepression/README.md @@ -0,0 +1,38 @@ +# RxD Spreading Depression/Depiolarization +## Overview +A tissue-scale model of spreading depolarization (SD) in brain slices +ported to NetPyNE from [Kelley et al. 2022](https://github.com/suny-downstate-medical-center/SDinSlice). + +We use NetPyNE to interface with the NEURON simulator's reaction-diffusion framework to embed thousands of neurons +(based on the the model from Wei et al. 2014) +in the extracellular space of a brain slice, which is itself embedded in an bath solution. +![model schematic](./schematic.png) +We initiate SD in the slice by elevating extracellular K+ in a spherical region at the center of the slice. +Effects of hypoxia and propionate on the slice were modeled by appropriate changes to the volume fraction +and tortuosity of the extracellular space and oxygen/chloride concentrations. + +In this example, we use NetPyNE to define the cells, extracellular space, +ion/oxygen dynamics, and channel/cotransporter/pump dynamics, but we use +calls to NEURON to handle running the simulation and interval saving. + +## Setup and execution +Requires NEURON with RxD and Python. We recommend using [MPI](https://www.open-mpi.org/) to parallelize simulations. Simulations, especially without +multiple cores, may take upwards of thiry minutes. + +To run the simulation on a sinle core: +``` +python3 init.py +``` +To run on (for instance) 6 cores with MPI: +``` +mpiexec -n 6 nrniv -python -mpi init.py +``` + +Parameters like slice oxygenation, radius and concentration of the K+ bolus used +to initiate SD, neuronal density, etc. can be changed by editing **cfg.py**. + +## References +Craig Kelley, Adam J. H. Newton, Sabina Hrabetova, Robert A. McDougal, and William W. Lytton. 2022. “Multiscale Computer Modeling of Spreading Depolarization in Brain Slices.” bioRxiv. https://doi.org/10.1101/2022.01.20.477118. + +Yina Wei, Ghanim Ullah, and Steven J. Schiff. "Unification of neuronal spikes, seizures, and spreading depression." Journal of Neuroscience 34, no. 35 (2014): 11733-11743. 
+https://doi.org/10.1523/JNEUROSCI.0516-14.2014 \ No newline at end of file diff --git a/examples/spreadingDepression/analysis.py b/examples/spreadingDepression/analysis.py new file mode 100644 index 000000000..361d1f6a5 --- /dev/null +++ b/examples/spreadingDepression/analysis.py @@ -0,0 +1,1096 @@ +import numpy as np +from matplotlib import pyplot as plt +import os +import imageio +from scipy.signal import find_peaks +import pickle +import multiprocessing + +def totalSurfaceArea(Lx, Ly, Lz, density, alphaNrn, sa2v, rs=None): + """Computes the total neuronal surface area given dimensions of the slice, + neuronal density, neuronal volume fraction (alphaNrn), and neuronal S:V ratio""" + Vtissue = Lx*Ly*Lz + # cell numbers + Ncell = int(density*(Lx*Ly*Lz*1e-9)) + if not rs: + rs = ((alphaNrn*Vtissue)/(2*np.pi*Ncell)) ** (1/3) + somaR = (sa2v * rs**3 / 2.0) ** (1/2) + a = 4 * np.pi * somaR**2 + return a * Ncell + +def totalCellVolume(Lx, Ly, Lz, density, alphaNrn, sa2v, rs=None): + """Computes the total neuronal volume given dimensions of the slice, + neuronal density, neuronal volume fraction (alphaNrn), and neuronal S:V ratio""" + Vtissue = Lx*Ly*Lz + # cell numbers + Ncell = int(density*(Lx*Ly*Lz*1e-9)) + if not rs: + rs = ((alphaNrn*Vtissue)/(2*np.pi*Ncell)) ** (1/3) + somaR = (sa2v * rs**3 / 2.0) ** (1/2) + cyt_frac = rs**3 / somaR**3 + v = 2 * np.pi * somaR**3 * cyt_frac + return v * Ncell + +def combineRecs(dirs, recNum=None): + """tool for combining recordings from multiple recordings. dirs is a list of + directories with data. intended for use with state saving/restoring""" + # open first file + if recNum: + filename = 'recs' + str(recNum) + '.pkl' + else: + filename = 'recs.pkl' + + with open(dirs[0]+filename,'rb') as fileObj: + data = pickle.load(fileObj) + ## convert first file to all lists + for key in data.keys(): + if isinstance(data[key], list): + for i in range(len(data[key])): + try: + data[key][i] = list(data[key][i]) + except: + pass + else: + data[key] = list(data[key]) + # load each new data file + for datadir in dirs[1:]: + with open(datadir+filename, 'rb') as fileObj: + new_data = pickle.load(fileObj) + ## extend lists in original data with new data + for key in new_data.keys(): + if isinstance(new_data[key], list): + for i in range(len(new_data[key])): + try: + data[key][i].extend(list(new_data[key][i])) + except: + pass + else: + data[key].extend(list(new_data[key])) + return data + +def waveSpeedVs(dirs, xaxis, xlabel, ystop=None, figname='wavespeed'): + # computes and plots wave speed vs. 
some specified parameter + for datadir in dirs: + times = [] + wave_pos = [] + ## single run + if not isinstance(datadir, list): + f = open(datadir + 'wave_progress.txt', 'r') + for line in f.readlines(): + times.append(float(line.split()[0])) + wave_pos.append(float(line.split()[-2])) + f.close() + else: + ## fragmented run + for d in datadir: + f = open(d + 'wave_progress.txt', 'r') + for line in f.readlines(): + times.append(float(line.split()[0])) + wave_pos.append(float(line.split()[-2])) + f.close() + ## trim if needed + if ystop: + times = [t for t, w in zip(times, wave_pos) if w < ystop] + wave_pos = [w for w in wave_pos if w < ystop] + + +def compareDiffuse(dirs, outpath, species='k', figname='kconc', dur=10, start=0, vmin=3.5, vmax=40, extent=None): + """generates gif(s) of K+ diffusion""" + try: + os.mkdir(outpath) + except: + pass + + files = [species+'_'+str(i)+'.npy' for i in range(int(start*1000),int(dur*1000)) if (i%100)==0] + for filename in files: + plt.figure(figsize=(8,10)) + for ind in range(len(dirs)): + plt.subplot(len(dirs), 1, ind+1) + if isinstance(dirs[ind], list): + try: + data = np.load(dirs[ind][0]+filename) + except: + data = np.load(dirs[ind][1]+filename) + else: + data = np.load(dirs[ind]+filename) + if extent: + plt.imshow(data.mean(2), vmin=vmin, vmax=vmax, interpolation='nearest', origin='lower', extent=extent) + else: + plt.imshow(data.mean(2), vmin=vmin, vmax=vmax, interpolation='nearest', origin='lower') + # plt.title(dirs[ind][:-1]) + t = str(float(filename.split('.')[0].split('_')[1]) / 1000) + ' s' + plt.title(t, fontsize=18) + + plt.tight_layout() + plt.savefig(outpath + filename[:-4] + '.png') + plt.close() + + times = [] + filenames = os.listdir(path=outpath) + for file in filenames: + times.append(float(file.split('_')[-1].split('.')[0])) + inds = np.argsort(times) + filenames_sort = [filenames[i] for i in inds] + imagesc = [] + for filename in filenames_sort: + imagesc.append(imageio.imread(outpath+filename)) + imageio.mimsave(figname + '.gif', imagesc) + +def getKwaveSpeed(datadir, r0=0, rmax=None, tcut=None): + """computes K+ wave speed, handles lists of data folders, drops points below r0""" + if not isinstance(datadir, list): + f = open(datadir + 'wave_progress.txt', 'r') + times = [] + wave_pos = [] + for line in f.readlines(): + times.append(float(line.split()[0])) + wave_pos.append(float(line.split()[-2])) + f.close() + if wave_pos: + if tcut: + wave_pos = [w for t, w in zip(times, wave_pos) if t <= tcut * 1000] + times = [t for t in times if t <= tcut * 1000] + print('Initial: %.3f' % (wave_pos[0])) + print('Final: %.3f' % (wave_pos[-1])) + if rmax: + wave_pos_cut = [w for w in wave_pos if w > rmax] + t_cut = [t for t, w in zip(times, wave_pos) if w > rmax] + if len(wave_pos_cut): + m = (wave_pos_cut[0] - wave_pos[0]) / (t_cut[0] - times[0]) + elif np.max(wave_pos) > 200: + m = (np.max(wave_pos) - wave_pos[0]) / (times[np.argmax(wave_pos)]-times[0]) + else: + m = 0 + else: + m = (np.max(wave_pos) - wave_pos[0]) / (times[np.argmax(wave_pos)]-times[0]) + m = m * 1000 / 16.667 + return m + else: + speeds = [] + for d in datadir: + f = open(d + 'wave_progress.txt', 'r') + times = [] + wave_pos = [] + for line in f.readlines(): + times.append(float(line.split()[0])) + wave_pos.append(float(line.split()[-2])) + f.close() + if wave_pos: + if tcut: + wave_pos = [w for t, w in zip(times, wave_pos) if t <= tcut * 1000] + times = [t for t in times if t <= tcut * 1000] + print('Initial: %.3f' % (wave_pos[0])) + print('Final: %.3f' % 
(wave_pos[-1])) + if rmax: + wave_pos_cut = [w for w in wave_pos if w > rmax] + t_cut = [t for t, w in zip(times, wave_pos) if w > rmax] + if len(wave_pos_cut): + m = (wave_pos_cut[0] - wave_pos[0]) / (t_cut[0] - times[0]) + elif np.max(wave_pos) > 200: + m = (np.max(wave_pos) - wave_pos[0]) / (times[np.argmax(wave_pos)]-times[0]) + else: + m = 0 + else: + m = (np.max(wave_pos) - wave_pos[0]) / (times[np.argmax(wave_pos)]-times[0]) + m = m * 1000 / 16.667 + speeds.append(m) + return speeds + +def getSpkWaveSpeed(datadir, pos, r0=0): + """compute SD wave speed from spiking activity, handles lists, drops spikes within r0 """ + radius = [] + spktimes = [] + if not isinstance(datadir, list): + m = getSpkMetrics(datadir, uniform=True, position=pos) + r = [k for k in m.keys() if k > r0] + t2firstSpk = [m[k]['t2firstSpk'] for k in m.keys() if k > r0] + slope, b = np.polyfit(t2firstSpk, r, 1) + return slope / 16.667 + else: + speeds = [] + metrics = [getSpkMetrics(d, uniform=True, position=p) for d, p in zip(datadir,pos)] + for d in datadir: + for m in metrics: + r = [k for k in m.keys() if k > r0] + t2firstSpk = [m[k]['t2firstSpk'] for k in m.keys() if k > r0] + slope, b = np.polyfit(t2firstSpk, r, 1) + speeds.append(slope / 16.667) + return speeds + +def compareKwaves(dirs, labels, legendTitle, colors=None, trimDict=None, sbplt=None): + """plots K+ wave trajectories from sims stored in list of folders dirs""" + # plt.figure(figsize=(10,6)) + for d, l, c in zip(dirs, labels, colors): + f = open(d + 'wave_progress.txt', 'r') + times = [] + wave_pos = [] + for line in f.readlines(): + times.append(float(line.split()[0])) + wave_pos.append(float(line.split()[-2])) + f.close() + if sbplt: + plt.subplot(sbplt) + if trimDict: + if d in trimDict.keys(): + plt.plot(np.divide(times,1000)[:trimDict[d]], wave_pos[:trimDict[d]], label=l, linewidth=5, color=c) + else: + plt.plot(np.divide(times,1000), wave_pos, label=l, linewidth=5, color=c) + else: + if colors: + plt.plot(np.divide(times,1000), wave_pos, label=l, linewidth=5, color=c) + else: + plt.plot(np.divide(times,1000), wave_pos, label=l, linewidth=5) + # legend = plt.legend(title=legendTitle, fontsize=12)#, bbox_to_anchor=(-0.2, 1.05)) + legend = plt.legend(fontsize=12, loc='upper left')#, bbox_to_anchor=(-0.2, 1.05)) + # plt.setp(legend.get_title(), fontsize=14) + plt.ylabel('K$^+$ Wave Position ($\mu$m)', fontsize=16) + plt.xlabel('Time (s)', fontsize=16) + plt.xticks(fontsize=14) + plt.yticks(fontsize=14) + +def plotRasters(dirs, figname='raster_plot.png'): + """Plot raster(s) ordered by radial distance from the core of the slice for data stored in list + of folders - dirs""" + raster_fig = plt.figure(figsize=(16,8)) + for ind, datadir in enumerate(dirs): + raster = getRaster(datadir) + plt.subplot(len(dirs), 1, ind+1) + for key in raster.keys(): + plt.plot(np.divide(raster[key],1000), [key for i in range(len(raster[key]))], 'b.') + plt.xlabel('Time (s)', fontsize=16) + plt.ylabel('Distance (microns)', fontsize=16) + plt.xticks(fontsize=14) + plt.yticks(fontsize=14) + raster_fig.savefig(os.path.join(figname)) + +def combineMemFiles(datadirs, file): + """combine membrane potential files from fragmented runs""" + ## load first file + with open(os.path.join(datadirs[0],file), 'rb') as fileObj: + data = pickle.load(fileObj) + ## convert voltages to lists + for ind, v in enumerate(data[0]): + data[0][ind] = list(v) + ## load the rest of them + for datadir in datadirs[1:]: + with open(os.path.join(datadir, file), 'rb') as fileObj: + data0 = 
pickle.load(fileObj) + for ind, v in enumerate(data0[0]): + data[0][ind].extend(list(v)) + data[2].extend(data0[2]) + return data + +def getComboRaster(datadirs, position='center', uniform=False): + """get raster in dictionary for fragmented simulation runs""" + raster = {} + for ind, datadir in enumerate(datadirs): + if ind == 0: + files = os.listdir(datadir) + mem_files = [file for file in files if (file.startswith(position + 'membrane'))] + for file in mem_files: + with open(os.path.join(datadir,file), 'rb') as fileObj: + data = pickle.load(fileObj) + if ind == 0: + for v, pos in zip(data[0],data[1]): + pks, _ = find_peaks(v.as_numpy(), 0) + if len(pks): + if uniform: + r = (pos[0]**2 + pos[1]**2 + pos[2]**2)**(0.5) + else: + r = (pos[0]**2 + pos[1]**2)**(0.5) + raster[r] = [data[2][ind] for ind in pks] + else: + for v, pos in zip(data[0],data[1]): + pks, _ = find_peaks(v.as_numpy(), 0) + if len(pks): + if uniform: + r = (pos[0]**2 + pos[1]**2 + pos[2]**2)**(0.5) + else: + r = (pos[0]**2 + pos[1]**2)**(0.5) + if r in raster.keys(): + raster[r].extend([data[2][ind] for ind in pks]) + else: + raster[r] = [data[2][ind] for ind in pks] + return raster + +def getSpkMetrics(datadir, includerecs=False, position='center', uniform=False): + """returns basic spiking metrics from single sim""" + raster = getRaster(datadir, includerecs=includerecs, position=position, uniform=uniform) + spkMetrics = {} + for k in raster.keys(): + spkMetrics[k] = {} + t0 = raster[k][0] + t1 = raster[k][-1] + spkMetrics[k]['t2firstSpk'] = t0 / 1000 + spkMetrics[k]['nSpks'] = len(raster[k]) + if len(raster[k]) > 1: + spkMetrics[k]['spkFreq'] = len(raster[k]) / ((t1-t0) / 1000) + spkMetrics[k]['spkDur'] = (t1-t0) / 1000 + else: + spkMetrics[k]['spkFreq'] = 1 + spkMetrics[k]['spkDur'] = 0 + return spkMetrics + + +def getRaster(datadir, includerecs=False, position='center', uniform=False): + """Returns spike raster from a simulation as a dictionary (keys - radial distance)""" + raster = {} + files = os.listdir(datadir) + if position: + mem_files = [file for file in files if (file.startswith(position + 'membrane'))] + else: + mem_files = [file for file in files if (file.startswith('membrane'))] + for file in mem_files: + with open(os.path.join(datadir,file), 'rb') as fileObj: + data = pickle.load(fileObj) + for v, pos in zip(data[0],data[1]): + pks, _ = find_peaks(v.as_numpy(), 0) + if len(pks): + if uniform: + r = (pos[0]**2 + pos[1]**2 + pos[2]**2)**(0.5) + else: + r = (pos[0]**2 + pos[1]**2)**(0.5) + raster[r] = [data[2][ind] for ind in pks] + ## also get spikes from recs.pkl + if includerecs: + with open(datadir+'recs.pkl', 'rb') as fileObj: + data = pickle.load(fileObj) + for pos, v in zip(data['pos'], data['v']): + pks, _ = find_peaks(v.as_numpy(), 0) + if len(pks): + r = (pos[0]**2 + pos[1]**2 + pos[2]**2)**(0.5) + raster[r] = [data['t'][ind] for ind in pks] + return raster + +def xyOfSpikeTime(datadir, position='center'): + """returns dictionary where keys are spike times and values are position data for the spiking cell(s)""" + if isinstance(datadir, list): + files = os.listdir(datadir[0]) + else: + files = os.listdir(datadir) + mem_files = [file for file in files if (file.startswith(position + 'membrane'))] + posBySpkTime = {} + for file in mem_files: + if isinstance(datadir, list): + data = combineMemFiles(datadir, file) + else: + with open(os.path.join(datadir,file), 'rb') as fileObj: + data = pickle.load(fileObj) + for v, pos in zip(data[0],data[1]): + if isinstance(v, list): + pks, _ = find_peaks(v, 0) + 
else: + pks, _ = find_peaks(v.as_numpy(), 0) + if len(pks): + for ind in pks: + t = int(data[2][ind]) + if t in posBySpkTime.keys(): + posBySpkTime[t]['x'].append(pos[0]) + posBySpkTime[t]['y'].append(pos[1]) + posBySpkTime[t]['z'].append(pos[2]) + else: + posBySpkTime[t] = {} + posBySpkTime[t]['x'] = [pos[0]] + posBySpkTime[t]['y'] = [pos[1]] + posBySpkTime[t]['z'] = [pos[2]] + return posBySpkTime + +def centerVsPeriphKspeed(datadir, dur, rmax=600): + """returns K+ wave speeds for both the core of the slice and the periphery""" + k_files = ['k_'+str(i)+'.npy' for i in range(int(dur*1000)) if (i%100)==0] + time = [] + wave_pos_core = [] + wave_pos_periph = [] + for k_file in k_files: + time.append(float(k_file.split('.')[0].split('_')[1]) / 1000) + with open(datadir+k_file, 'rb') as fileObj: + data = np.load(fileObj) + core = int(data.shape[2]/2) + periph = int(data.shape[2]-2) + inds_core = np.argwhere(data[:,:,core] > 15.0) + if len(inds_core): + ds_core = [] + for ind in inds_core: + ds_core.append(((ind[0]-20)**2 + (ind[1]-20)**2)**(1/2) * 500/20) + wave_pos_core.append(np.max(ds_core)) + else: + wave_pos_core.append(0) + inds_periph = np.argwhere(data[:,:,periph] > 15.0) + if len(inds_periph): + ds_periph = [] + for ind in inds_periph: + ds_periph.append(((ind[0]-20)**2 + (ind[1]-20)**2)**(1/2) * 500/20) + wave_pos_periph.append(np.max(ds_periph)) + else: + wave_pos_periph.append(0) + wave_pos_cut = [w for w in wave_pos_core if w > rmax] + t_cut = [t for t, w in zip(time, wave_pos_core) if w > rmax] + if len(wave_pos_cut): + m = (wave_pos_cut[0] - wave_pos_core[0]) / (t_cut[0] - time[0]) + elif np.max(wave_pos_core) > 200: + m = (np.max(wave_pos_core) - wave_pos_core[0]) / (time[np.argmax(wave_pos_core)]-time[0]) + else: + m = 0 + core_speed = m / 16.667 + wave_pos_cut = [w for w in wave_pos_periph if w > rmax] + t_cut = [t for t, w in zip(time, wave_pos_periph) if w > rmax] + if len(wave_pos_cut): + m = (wave_pos_cut[0] - wave_pos_periph[0]) / (t_cut[0] - time[0]) + elif np.max(wave_pos_core) > 200: + m = (np.max(wave_pos_periph) - wave_pos_periph[0]) / (time[np.argmax(wave_pos_periph)]-time[0]) + else: + m = 0 + periph_speed = m / 16.667 + return core_speed, periph_speed + + + +def allSpeciesMov(datadir, outpath, vmins, vmaxes, figname, condition='Perfused', dur=10, extent=None, fps=40): + """"Generates an mp4 video with heatmaps for K+, Cl-, Na+, and O2 overlaid with spiking data""" + try: + os.mkdir(outpath) + except: + pass + specs = ['k', 'cl', 'na', 'o2'] + k_files = [specs[0]+'_'+str(i)+'.npy' for i in range(int(dur*1000)) if (i%100)==0] + cl_files = [specs[1]+'_'+str(i)+'.npy' for i in range(int(dur*1000)) if (i%100)==0] + na_files = [specs[2]+'_'+str(i)+'.npy' for i in range(int(dur*1000)) if (i%100)==0] + o2_files = [specs[3]+'_'+str(i)+'.npy' for i in range(int(dur*1000)) if (i%100)==0] + for k_file, cl_file, na_file, o2_file in zip(k_files, cl_files, na_files, o2_files): + t = int(k_file.split('.')[0].split('_')[1]) + ttl = 't = ' + str(float(k_file.split('.')[0].split('_')[1]) / 1000) + ' s' + + # fig, big_axes = plt.subplots( figsize=(14, 7.6) , nrows=1, ncols=1, sharey=True) + fig = plt.figure(figsize=(12,7.6)) + # # for row, big_ax in enumerate(zip(big_axes,['Perfused']), start=1): + # big_axes.set_title(ttl, fontsize=22) + # # Turn off axis lines and ticks of the big subplot + # # obs alpha is 0 in RGBA string! 
+ # big_axes.tick_params(labelcolor=(1.,1.,1., 0.0), top='off', bottom='off', left='off', right='off') + # # removes the white frame + # # big_axes._frameon = False + # plt.tight_layout() + fig.text(.45, 0.825, ttl, fontsize=20) + fig.text(0.45, 0.9, condition, fontsize=20) + posBySpkTime = xyOfSpikeTime(datadir) + spkTimes = [key for key in posBySpkTime if (t-50 < key <= t+50)] + + ax1 = fig.add_subplot(141) + data = np.load(datadir+k_file) + im = plt.imshow(data.mean(2), vmin=vmins[0], vmax=vmaxes[0], interpolation='nearest', origin='lower', extent=extent) + plt.colorbar(im, fraction=0.046, pad=0.04) + if len(spkTimes): + for spkTime in spkTimes: + plt.plot(posBySpkTime[spkTime]['x'], posBySpkTime[spkTime]['y'], 'w*') + plt.title(r'[K$^+$]$_{ECS}$ ', fontsize=20) + + ax2 = fig.add_subplot(142) + data = np.load(datadir+cl_file) + im = plt.imshow(data.mean(2), vmin=vmins[1], vmax=vmaxes[1], interpolation='nearest', origin='lower', extent=extent) + plt.colorbar(im, fraction=0.046, pad=0.04) + if len(spkTimes): + for spkTime in spkTimes: + plt.plot(posBySpkTime[spkTime]['x'], posBySpkTime[spkTime]['y'], 'w*') + plt.title(r'[Cl$^-$]$_{ECS}$ ', fontsize=20) + + ax3 = fig.add_subplot(143) + data = np.load(datadir+na_file) + im = plt.imshow(data.mean(2), vmin=vmins[2], vmax=vmaxes[2], interpolation='nearest', origin='lower', extent=extent) + plt.colorbar(im, fraction=0.046, pad=0.04) + if len(spkTimes): + for spkTime in spkTimes: + plt.plot(posBySpkTime[spkTime]['x'], posBySpkTime[spkTime]['y'], 'w*') + plt.title(r'[Na$^+$]$_{ECS}$ ', fontsize=20) + + ax4 = fig.add_subplot(144) + data = np.load(datadir+o2_file) + im = plt.imshow(data.mean(2), vmin=vmins[3], vmax=vmaxes[3], interpolation='nearest', origin='lower', extent=extent) + plt.colorbar(im, fraction=0.046, pad=0.04) + if len(spkTimes): + for spkTime in spkTimes: + plt.plot(posBySpkTime[spkTime]['x'], posBySpkTime[spkTime]['y'], 'w*') + plt.title(r'[O$_2$]$_{ECS}$ ', fontsize=20) + plt.tight_layout() + + # plt.tight_layout() + fig.savefig(outpath + k_file[:-4] + '.png') + plt.close() + + times = [] + filenames = os.listdir(path=outpath) + for file in filenames: + times.append(float(file.split('_')[-1].split('.')[0])) + inds = np.argsort(times) + filenames_sort = [filenames[i] for i in inds] + imagesc = [] + for filename in filenames_sort: + imagesc.append(imageio.imread(outpath+filename)) + imageio.mimsave(figname, imagesc) + + +def spkPlusMovie(dirs, outpath, species='k', vmin=3.5, vmax=40, figname='kmovie', dur=10, extent=None, titles=None): + """generates gif(s) of K+ diffusion overlaid with spiking data""" + try: + os.mkdir(outpath) + except: + pass + + files = [species+'_'+str(i)+'.npy' for i in range(int(dur*1000)) if (i%100)==0] + for filename in files: + plt.figure(figsize=(8,10)) + for ind in range(len(dirs)): + # xy positions of spikes + posBySpkTime = xyOfSpikeTime(dirs[ind]) + t = int(filename.split('.')[0].split('_')[1]) + plt.subplot(len(dirs), 1, ind+1) + if isinstance(dirs[ind], list): + for i in range(len(dirs[ind])): + try: + data = np.load(dirs[ind][i]+filename) + except: + pass + else: + data = np.load(dirs[ind]+filename) + if extent: + im = plt.imshow(data.mean(2), vmin=vmin, vmax=vmax, interpolation='nearest', origin='lower', extent=extent) + else: + im = plt.imshow(data.mean(2), vmin=vmin, vmax=vmax, interpolation='nearest', origin='lower') + cbar = plt.colorbar(im, fraction=0.046, pad=0.04) + cbar.set_label(r'[K$^+$] (mM)', fontsize=16, rotation=270) + # plt.title(dirs[ind][:-1]) + spkTimes = [key for key in 
posBySpkTime if (t-50 < key <= t+50)] + if len(spkTimes): + for spkTime in spkTimes: + plt.plot(posBySpkTime[spkTime]['x'], posBySpkTime[spkTime]['y'], 'w*') + if titles: + ttl = titles[ind] + ': ' + str(float(filename.split('.')[0].split('_')[1]) / 1000) + ' s' + else: + ttl = str(float(filename.split('.')[0].split('_')[1]) / 1000) + ' s' + plt.xlabel(r'X-Position ($\mu$m)', fontsize=16) + plt.ylabel(r'Y-Position ($\mu$m)', fontsize=16) + plt.xticks(fontsize=14) + plt.yticks(fontsize=14) + plt.title(ttl, fontsize=20) + plt.tight_layout() + plt.savefig(outpath + filename[:-4] + '.png') + plt.close() + + times = [] + filenames = os.listdir(path=outpath) + for file in filenames: + times.append(float(file.split('_')[-1].split('.')[0])) + inds = np.argsort(times) + filenames_sort = [filenames[i] for i in inds] + imagesc = [] + for filename in filenames_sort: + imagesc.append(imageio.imread(outpath+filename)) + imageio.mimsave(figname + '.gif', imagesc) + +def spikeFreqHeatMap(datadir, rmax=300, spatialBin=25, tempBin=25, noverlap=0, dur=10, sbplt=None, spkCmax=40): + """Generates heatmaps where x is time, y is radial distance from center, color is average membrane potential + within spatiotemporal bin. Overlaid with raster plot.""" + # rmax: maximum distance from center, spatialBin: in microns, tempBin: in ms, dur: duration in seconds + ## generate raster with distance keys + if isinstance(datadir, list): + raster = getComboRaster(datadir) + else: + raster = getRaster(datadir) + + ## allocate binned raster + # spatial_bins = list(range(spatialBin,rmax+1,spatialBin)) + spatial_bins = list(range(spatialBin,rmax+1,spatialBin-noverlap)) + binned_raster = {} + for spatial_bin in spatial_bins: + binned_raster[spatial_bin] = [] + + ## find raster keys of each spatial bin + for key in raster.keys(): + ind = 0 + while key > spatial_bins[ind]: + ind = ind + 1 + binned_raster[spatial_bins[ind]].append(raster[key]) + + ## find avg spike rate for each spatial bin + temp_bins = list(range(tempBin, int(dur*1000)+1, tempBin)) + binned_binned = np.zeros((len(spatial_bins),len(temp_bins))) + for row, key in enumerate(spatial_bins): + for col, temp_bin in enumerate(temp_bins): + spk_freqs = [] + for spk_times in binned_raster[key]: + spk_times_inbin = [t for t in spk_times if (temp_bin-tempBin) < t < temp_bin] + spk_freqs.append(len(spk_times_inbin) / (tempBin / 1000)) + binned_binned[row][col] = np.mean(spk_freqs) + if np.isnan(binned_binned[row][col]): + binned_binned[row][col] = 0 + + if sbplt: + ## plotting + plt.subplot(sbplt) + # ex = (0, dur/60, 0, rmax) + ex = (0, dur, 0, rmax) + hmap = plt.imshow(binned_binned, origin='lower', interpolation='spline16', + extent=ex, cmap='inferno', aspect='auto', vmax=40) + cbar = plt.colorbar(hmap) + cbar.set_label(r'Avg. Spike Frequency (Hz)', fontsize=14) + plt.clim(0,spkCmax) + # plt.xlabel('Time (min)', fontsize=14) + plt.xlabel('Time (s)', fontsize=14) + plt.ylabel(r'Radial Distance ($\mu$m)', fontsize=14) + plt.xticks(fontsize=12) + plt.yticks(fontsize=12) + # plt.xscale('log') + return hmap, cbar + else: + return binned_binned + +def avgVmembHeatMap(datadir, figname='vmemb_heat_map.png', rmax=300, spatialBin=25, tempBin=50, dur=10, position='center'): + """Generates heatmaps where x is time, y is radial distance from center, color is average membrane potential.""" + # ignores spikes. 
rmax: maximum distance from center, spatialBin: in microns, tempBin: in ms, dur: duration in seconds + ## find membrane potential files + vmemb_by_pos = {} + files = os.listdir(datadir) + mem_files = [file for file in files if (file.startswith(position + 'membrane')) and (len(file.split('_')) == 3)] + spatial_bins = list(range(spatialBin,rmax+1,spatialBin)) + + ## generate dictionary where keys are position, values are binned Vmemb + for file in mem_files: + print(str(file)) + if isinstance(datadir, list): + data = combineMemFiles(datadir, file) + else: + with open(os.path.join(datadir,file), 'rb') as fileObj: + data = pickle.load(fileObj) + t = data[2] + for v, pos in zip(data[0],data[1]): + v = v.as_numpy()[:-1] + tbin = tempBin + r = (pos[0]**2 + pos[1]**2 + pos[2]**2)**(0.5) + vmemb_by_pos[r] = [] + while tbin <= dur * 1000: + vbin = [V for V, T in zip(v,t) if tbin-tempBin < T < tbin] + vmemb_by_pos[r].append(np.mean(vbin)) + tbin = tbin + tempBin + + ## spatially binned lists of mean vmembs + binned_vmembs = {} + for space_bin in spatial_bins: + binned_vmembs[space_bin] = [] + for r in vmemb_by_pos.keys(): + if space_bin-spatialBin < r < space_bin: + binned_vmembs[space_bin].append(vmemb_by_pos[r]) + + ## matrix with averaged vmembs for each spatial bin + temp_bins = list(range(tempBin, int(dur*1000)+1, tempBin)) + binned_binned = np.zeros((len(spatial_bins), len(temp_bins))) + for row, key in enumerate(spatial_bins): + binned_binned[row][:] = np.mean(np.array(binned_vmembs[key]), axis=0) + + ## plotting + # plt.ion() + # plt.figure(figsize=(16,8)) + # plt.imshow(binned_binned, interpolation='nearest', origin='lower', aspect='auto', cmap='inferno') + # plt.colorbar() + # plt.savefig(figname) + return binned_binned + +def tempBinning(input_data): + # temporal binning for Vmemb plots + datadir, file, tempBin, dur, uniform, return_dict = input_data[0], input_data[1], input_data[2], input_data[3], input_data[4], input_data[5] + print(str(file)) + if isinstance(datadir, list): + data = combineMemFiles(datadir, file) + else: + with open(os.path.join(datadir,file), 'rb') as fileObj: + data = pickle.load(fileObj) + t = data[2] + temp_dict = {} + for v, pos in zip(data[0],data[1]): + if isinstance(v, list): + v = v[:-1] + else: + v = v.as_numpy()[:-1] + tbin = tempBin + if uniform: + r = (pos[0]**2 + pos[1]**2 + pos[2]**2)**(0.5) + else: + r = (pos[0]**2 + pos[1]**2)**(0.5) + temp_dict[r] = [] + while tbin <= dur * 1000: + vbin = [V for V, T in zip(v,t) if tbin-tempBin < T < tbin] + temp_dict[r].append(np.mean(vbin)) + tbin = tbin + tempBin + for key in temp_dict.keys(): + return_dict[key] = temp_dict[key] + +def vmembHeatMapParallel(datadir, rmax=300, spatialBin=25, noverlap=10, tempBin=50, dur=10, poolSize=1, position='center', uniform=False): + # Uses multiprocessing to run temporal binnging then gen heat map + if isinstance(datadir, list): + files = os.listdir(datadir[0]) + else: + files = os.listdir(datadir) + + mem_files = [file for file in files if (file.startswith(position + 'membrane')) and (len(file.split('_')) == 3)] + spatial_bins = list(range(spatialBin,rmax+1,spatialBin-noverlap)) + + ## parallel temporal binning + manager = multiprocessing.Manager() + vmemb_by_pos = manager.dict() + data = [] + for mem_file in mem_files: + data.append([datadir, mem_file, tempBin, dur, uniform, vmemb_by_pos]) + data = tuple(data) + + p = multiprocessing.Pool(poolSize) + p.map(tempBinning, data) + + ## spatially binned lists of mean vmembs + binned_vmembs = {} + for space_bin in spatial_bins: + 
print(space_bin) + binned_vmembs[space_bin] = [] + for r in vmemb_by_pos.keys(): + if space_bin-spatialBin < r < space_bin: + binned_vmembs[space_bin].append(vmemb_by_pos[r]) + + ## matrix with averaged vmembs for each spatial bin + temp_bins = list(range(tempBin, int(dur*1000)+1, tempBin)) + binned_binned = np.zeros((len(spatial_bins), len(temp_bins))) + for row, key in enumerate(spatial_bins): + print(row) + binned_binned[row][:] = np.mean(np.array(binned_vmembs[key]), axis=0) + + return temp_bins, spatial_bins, binned_binned, vmemb_by_pos + +def scatterRheobase(datadir, starts, amps, position='center'): + # scatter plots for rheobase stims, currently unused + rheo = [] + radii = [] + rheo_starts = [] + files = os.listdir(datadir) + mem_files = [file for file in files if (file.startswith(position + 'membrane'))] + t0 = starts[0] + for file in mem_files: + fileObj = open(os.path.join(datadir,file), 'rb') + data = pickle.load(fileObj) + fileObj.close() + for v, pos in zip(data[0],data[1]): + t = data[2] + v = v.as_numpy()[:-1] + v = [V for V, T in zip(v, t) if T >= t0-200] + t = [T for T in t if T >= t0-200] + pks, _ = find_peaks(v, 0) + radii.append((pos[0]**2 + pos[1]**2 + pos[2]**2)**(0.5)) + if len(pks): + spk_time = t[pks[0]] + start_ind = np.argmin(np.square(np.subtract(starts,spk_time))) + rheo.append(amps[start_ind]) + rheo_starts.append(starts[start_ind]) + else: + rheo.append(0) + rheo_starts.append(0) + out = {'rheobase' : rheo, 'radius' : radii, 'rheo_startTime' : rheo_starts} + return out + +def comboVmembPlot(datadir, duration, fig, sbplt, vmin, vmax, spatialbin=40, noverlap=1, poolsize=1, rmax=300, position='center', uniform=False, top=False, left=False): + # combine heatmap of Vmemb with white-out where spikes occur + ## heat map + # if isinstance(datadir, list): + # raster = getComboRaster(datadir) + # else: + raster = getRaster(datadir, position=position, uniform=uniform) + temp_bins, spatial_bins, vmemb, vm_by_pos = vmembHeatMapParallel(datadir, + poolSize=poolsize, spatialBin=spatialbin, dur=duration, noverlap=noverlap, position=position, + uniform=uniform) + # if sbplt: + # plt.subplot(sbplt) + # else: + # plt.figure(figsize=(8,10)) + # ex = (0, duration/60, 0, rmax) + ex = (0, duration, 0, rmax) + hmap = sbplt.imshow(vmemb, origin='lower', interpolation='spline16', + extent=ex, cmap='inferno', aspect='auto', vmin=vmin, vmax=vmax) + cbar = fig.colorbar(hmap, ax=sbplt) + if not left: + cbar.set_label(r'Avg. 
V$_{memb}$ (mV)', fontsize=14) + cbar.ax.tick_params(labelsize=12) + + ## white-out spikes + for key in raster.keys(): + # plt.plot(np.divide(raster[key],60000), [key for i in range(len(raster[key]))], 'w.') + sbplt.plot(np.divide(raster[key],1000), [key for i in range(len(raster[key]))], 'w.') + # plt.xlabel('Time (min)', fontsize=14) + if not top: + sbplt.set_xlabel('Time (s)', fontsize=16) + if left: + sbplt.set_ylabel(r'Radial Distance ($\mu$m)', fontsize=16) + plt.setp(sbplt.get_xticklabels(), fontsize=14) + plt.setp(sbplt.get_yticklabels(), fontsize=14) + + return hmap, cbar + +def vmembSpkFreqSbPlts(datadir, duration, figname=None, spatialbin=40, tempBin=25, noverlap=1, poolsize=1, rmax=300, sbplts=[211,212], logscale=False, tend=6, spkCmax=40): + # single figure with spike frequency heat map and vmemb heat map with spikes whited-out + hmap1, cbar1 = spikeFreqHeatMap(datadir, + rmax=rmax, + spatialBin=spatialbin, + tempBin=tempBin, + dur=duration, + noverlap=noverlap, + sbplt=sbplts[0], + spkCmax=spkCmax) + + hmap2, cbar2 = comboVmembPlot(datadir, duration, + spatialbin=spatialbin, + noverlap=noverlap, + poolsize=poolsize, + rmax=rmax, + sbplt=sbplts[1]) + + if logscale: + plt.subplot(sbplts[0]) + plt.xscale('log') + plt.subplot(sbplts[1]) + plt.xscale('log') + + plt.subplot(sbplts[0]) + plt.xlim(0.01, tend) + plt.subplot(sbplts[1]) + plt.xlim(0.01, tend) + + if figname: + plt.tight_layout() + plt.savefig(figname) + else: + return hmap1, hmap2, cbar1, cbar2 + +def compositeFig(datadir, figname, dif_files, duration=20, spatialBin=75, tempBin=50, noverlap=10, rmax=1250, poolsize=40, sbplts=[223, 224], useAgg=True, logscale=False, extent=(-1000,1000,-1000,1000), figsize=(16,10), tend=6): + if useAgg: + import matplotlib; matplotlib.use('Agg') + + plt.figure(figsize=figsize) + + for filename, i in zip(dif_files, range(4)): + data = np.load(datadir+filename) + plt.subplot(2,4,i+1) + hmap = plt.imshow(data.mean(2), vmin=3.5, vmax=40, interpolation='nearest', origin='lower', extent=extent, cmap='inferno') + t = str(float(filename.split('.')[0].split('_')[1]) / 1000) + ' s' + plt.title(t, fontsize=18) + plt.xlabel(r'X Position ($\mu$m)', fontsize=14) + plt.xticks(fontsize=12) + plt.ylabel(r'Y Position ($\mu$m)', fontsize=14) + plt.yticks(fontsize=12) + # cbar = plt.colorbar(hmap) + # cbar.set_label(r'[K$^+$] (mM)', fontsize=14) + + vmembSpkFreqSbPlts(datadir, duration, figname=figname, tend=tend, spatialbin=spatialBin, tempBin=tempBin, noverlap=noverlap, poolsize=poolsize, rmax=rmax, sbplts=sbplts, logscale=logscale) + +def sinkVsSource(datadir, recNum=None): + """epoching based on when cells act as K+ sinks""" + if recNum: + filename = 'recs' + str(recNum) + '.pkl' + else: + filename = 'recs.pkl' + + if isinstance(datadir, list): + data = combineRecs(datadir) + else: + with open(datadir+filename, 'rb') as fileObj: + data = pickle.load(fileObj) + sink_periods = [] + radius = [] + for i, k in enumerate(data['ki']): + rising_t = [t for t,d in zip(list(data['t'])[1:], + np.diff(list(k))) if d > 0] + # sink_periods.append(rising_t) + if len(rising_t) > 1: + sink_periods.append(rising_t[-1] - rising_t[0]) + else: + sink_periods.append(0) + radius.append((data['pos'][i][0] ** 2 + data['pos'][i][1] ** 2 + data['pos'][i][2] ** 2)**(0.5)) + return sink_periods, radius + +def sinkVsSourceFig(datadir, iss=[0, 7, 15], recNum=None): + """basic plotting for epochs where cells act as K+ sinks""" + if recNum: + filename = 'recs' + str(recNum) + '.pkl' + else: + filename = 'recs.pkl' + + if 
isinstance(datadir, list): + data = combineRecs(datadir) + else: + with open(datadir+filename, 'rb') as fileObj: + data = pickle.load(fileObj) + + fig, axs = plt.subplots(6,1, sharex=True) + fig.set_figwidth(10) + fig.set_figheight(10) + + for ind, i in enumerate(iss): + rising_t = [t for t,d in zip(list(data['t'])[1:], + np.diff(list(data['ki'][i]))) if d > 0] + max_rise_t = np.max(rising_t) + predrop_t = [t for t in list(data['t']) if t <= max_rise_t] + predrop_ki = [ki for t, ki in zip(list(data['t']), list(data['ki'][i])) + if t <= max_rise_t] + predrop_o2 = [o2 for t, o2 in zip(list(data['t']), list(data['o2'][i])) + if t <= max_rise_t] + predrop_nai = [nai for t, nai in zip(list(data['t']), list(data['nai'][i])) + if t <= max_rise_t] + predrop_cli = [cli for t, cli in zip(list(data['t']), list(data['cli'][i])) + if t <= max_rise_t] + predrop_v = [v for t, v in zip(list(data['t']), list(data['v'][i])) + if t <= max_rise_t] + predrop_ko = [ko for t, ko in zip(list(data['t']), list(data['ko'][i])) + if t <= max_rise_t] + l = r'%s $\mu$m' % str(np.round((data['pos'][i][0] ** 2 + data['pos'][i][1] ** 2 + data['pos'][i][2] ** 2)**(0.5),1)) + axs[0].plot(np.divide(predrop_t, np.max(predrop_t)), predrop_v, label=l) + axs[1].plot(np.divide(predrop_t, np.max(predrop_t)), predrop_ko, label=l) + axs[2].plot(np.divide(predrop_t, np.max(predrop_t)), predrop_ki, label=l) + axs[3].plot(np.divide(predrop_t, np.max(predrop_t)), predrop_o2, label=l) + axs[4].plot(np.divide(predrop_t, np.max(predrop_t)), predrop_nai, label=l) + axs[5].plot(np.divide(predrop_t, np.max(predrop_t)), predrop_cli, label=l) + + leg = axs[0].legend(title='Radial Position', fontsize=8, bbox_to_anchor=(-0.275, 1.05)) + plt.setp(leg.get_title(), fontsize=10) + # axs[0].set_title(r'Membrane Potential', fontsize=18) + axs[0].set_ylabel(r'V$_{memb}$ (mV)', fontsize=12) + plt.setp(axs[0].get_xticklabels(), fontsize=10) + plt.setp(axs[0].get_yticklabels(), fontsize=10) + # axs[1].set_title(r'[K$^+$]$_o$', fontsize=18) + axs[1].set_ylabel(r'[K$^+$]$_o$ (mM)', fontsize=12) + plt.setp(axs[1].get_xticklabels(), fontsize=10) + plt.setp(axs[1].get_yticklabels(), fontsize=10) + # axs[2].set_title(r'[K$^+$]$_i$', fontsize=18) + axs[2].set_ylabel(r'[K$^+$]$_i$ (mM)', fontsize=12) + plt.setp(axs[2].get_xticklabels(), fontsize=10) + plt.setp(axs[2].get_yticklabels(), fontsize=10) + # axs[3].set_title(r'[O$_2$]$_{ECS}$', fontsize=18) + axs[3].set_ylabel(r'[O$_2$]$_{ECS}$ (mM)', fontsize=12) + plt.setp(axs[3].get_xticklabels(), fontsize=10) + plt.setp(axs[3].get_yticklabels(), fontsize=10) + # axs[4].set_title(r'[Na$^+$]$_i$', fontsize=18) + axs[4].set_ylabel(r'[Na$^+$]$_i$ (mM)', fontsize=12) + plt.setp(axs[4].get_xticklabels(), fontsize=10) + plt.setp(axs[4].get_yticklabels(), fontsize=10) + # axs[5].set_title(r'[Cl$^-$]$_i$', fontsize=18) + axs[5].set_ylabel(r'[Cl$^-$]$_i$ (mM)', fontsize=12) + axs[5].set_xlabel(r'Normalized Time', fontsize=12) + plt.setp(axs[5].get_xticklabels(), fontsize=10) + plt.setp(axs[5].get_yticklabels(), fontsize=10) + +def traceExamples(datadir, figname, iss=[0, 7, 15], recNum=None): + """Function for plotting Vmemb, as well as ion and o2 concentration, for selected (iss) recorded + neurons""" + if recNum: + filename = 'recs' + str(recNum) + '.pkl' + else: + filename = 'recs.pkl' + + if isinstance(datadir, list): + data = combineRecs(datadir) + else: + with open(datadir+filename, 'rb') as fileObj: + data = pickle.load(fileObj) + # fig = plt.figure(figsize=(18,9)) + fig, axs = plt.subplots(2,4) + fig.set_figheight(9) + 
fig.set_figwidth(18) + for i in iss: + l = r'%s $\mu$m' % str(np.round((data['pos'][i][0] ** 2 + data['pos'][i][1] ** 2 + data['pos'][i][2] ** 2)**(0.5),1)) + axs[0][0].plot(np.divide(data['t'],1000), data['v'][i], label=l) + axs[1][0].plot(np.divide(data['t'],1000), data['o2'][i]) + axs[0][1].plot(np.divide(data['t'],1000), data['ki'][i]) + axs[1][1].plot(np.divide(data['t'],1000), data['ko'][i]) + axs[0][2].plot(np.divide(data['t'],1000), data['nai'][i]) + axs[1][2].plot(np.divide(data['t'],1000), data['nao'][i]) + axs[0][3].plot(np.divide(data['t'],1000), data['cli'][i]) + axs[1][3].plot(np.divide(data['t'],1000), data['clo'][i]) + + leg = axs[0][0].legend(title='Radial Position', fontsize=11, bbox_to_anchor=(-0.275, 1.05)) + plt.setp(leg.get_title(), fontsize=15) + axs[0][0].set_ylabel('Membrane Potential (mV)', fontsize=16) + plt.setp(axs[0][0].get_xticklabels(), fontsize=14) + plt.setp(axs[0][0].get_yticklabels(), fontsize=14) + axs[0][0].text(-0.15, 1.0, 'A)', transform=axs[0][0].transAxes, + fontsize=16, fontweight='bold', va='top', ha='right') + + axs[1][0].set_ylabel(r'Extracellular [O$_{2}$] (mM)', fontsize=16) + plt.setp(axs[1][0].get_xticklabels(), fontsize=14) + plt.setp(axs[1][0].get_yticklabels(), fontsize=14) + axs[1][0].text(-0.15, 1., 'E)', transform=axs[1][0].transAxes, + fontsize=16, fontweight='bold', va='top', ha='right') + + axs[0][1].set_ylabel(r'Intracellular [K$^{+}$] (mM)', fontsize=16) + plt.setp(axs[0][1].get_xticklabels(), fontsize=14) + plt.setp(axs[0][1].get_yticklabels(), fontsize=14) + axs[0][1].text(-0.15, 1., 'B)', transform=axs[0][1].transAxes, + fontsize=18, fontweight='bold', va='top', ha='right') + + axs[1][1].set_ylabel(r'Extracellular [K$^{+}$] (mM)', fontsize=16) + plt.setp(axs[1][1].get_xticklabels(), fontsize=14) + plt.setp(axs[1][1].get_yticklabels(), fontsize=14) + axs[1][1].text(-0.15, 1.0, 'F)', transform=axs[1][1].transAxes, + fontsize=18, fontweight='bold', va='top', ha='right') + + axs[0][2].set_ylabel(r'Intracellular [Na$^{+}$] (mM)', fontsize=16) + plt.setp(axs[0][2].get_xticklabels(), fontsize=14) + plt.setp(axs[0][2].get_yticklabels(), fontsize=14) + axs[0][2].text(-0.15, 1.0, 'C)', transform=axs[0][2].transAxes, + fontsize=18, fontweight='bold', va='top', ha='right') + + axs[1][2].set_ylabel(r'Extracellular [Na$^{+}$] (mM)', fontsize=16) + plt.setp(axs[1][2].get_xticklabels(), fontsize=14) + plt.setp(axs[1][2].get_yticklabels(), fontsize=14) + axs[1][2].text(-0.15, 1.0, 'G)', transform=axs[1][2].transAxes, + fontsize=18, fontweight='bold', va='top', ha='right') + + axs[0][3].set_ylabel(r'Intracellular [Cl$^{-}$] (mM)', fontsize=16) + plt.setp(axs[0][3].get_xticklabels(), fontsize=14) + plt.setp(axs[0][3].get_yticklabels(), fontsize=14) + axs[0][3].text(-0.15, 1.0, 'D)', transform=axs[0][3].transAxes, + fontsize=18, fontweight='bold', va='top', ha='right') + + axs[1][3].set_ylabel(r'Extracellular [Cl$^{-}$] (mM)', fontsize=16) + plt.setp(axs[1][3].get_xticklabels(), fontsize=14) + plt.setp(axs[1][3].get_yticklabels(), fontsize=14) + axs[1][3].text(-0.15, 1.0, 'H)', transform=axs[1][3].transAxes, + fontsize=18, fontweight='bold', va='top', ha='right') + + fig.text(0.55, 0.01, 'Time (s)', fontsize=16) + plt.tight_layout() + plt.savefig(figname) + # return fig, axs + +# v0.00 - tools for plotting simulation data, compare K diffusion in two conds +# v0.01 - generalized compareKdiffuse for arbitrary list of data folders +# v0.02 - added functions for plotting raster sorted by radial distance from center and heat map of spike frequency +# 
v0.03 - added function for plotting Vmemb heat maps +# v0.04 - parallelized Vmemb heat map analysis +# v0.05 - added function for computing rheobase from stimulated cells +# v0.06 - added raster function sans plotting +# v0.07 - added plotting K+ wave position for multiple sims, working on combo Vmemb and raster +# v0.08 - combined heatmaps for vmemb and spike freq for single figure +# v0.09 - add times to gifs +# v0.10 - fig with combo of k diffusion and +# v0.11 - single function for combo figure(s) +# v0.12 - added function for plotting example traces from single run +# v0.13 - combining recordings from different/continued runs, testing with example traces func +# v0.13.1 - combining fragmented runs for rasters and membrane potential files +# v0.14 - getRaster() combines recs.pkl and membrane_potential.pkl files +# v0.15 - compute wave speeds for spike wave and K+ wave +# v0.16 - enable subplots for compareKwaves() +# v0.16.1 - enable fragmented sims for species diffusion gifs +# v0.17 - replace nans with 0 in spike freq heatmap +# v0.17.1 - specification of max for spk freq heatmap and switch to perceptually uniform cmap +# v0.17.2 - ditch linear fit for wave position / speed computation +# v0.17.3 - user can specify whether to plot cells in center or periphery of slice +# v0.17.4 - user specifies center or periphery for combo vmemb and spike heatmap +# v0.17.5 - raster needs to differentiate between uniformly distributed recs and single layer +# v0.18 - introduced function for computing spike metrics (e.g. frequency, duration of spiking) +# v0.19 - changing all species gif to mp4 +# v0.20 - sinkVsSource for looking at periods where cells act as K+ sink +# v0.21 - new method for K+ wave speed in core vs periphery of slice +# v1.0 - minor updates to raster plotting and mp4 functions +# v1.1 - added doc strings for most functions +# v1.2 - make compatible with newest matplotlib version \ No newline at end of file diff --git a/examples/spreadingDepression/cfg.py b/examples/spreadingDepression/cfg.py new file mode 100644 index 000000000..e84a64502 --- /dev/null +++ b/examples/spreadingDepression/cfg.py @@ -0,0 +1,59 @@ +from netpyne import specs +import numpy as np +#------------------------------------------------------------------------------ +# +# SIMULATION CONFIGURATION +# +#------------------------------------------------------------------------------ + +# Run parameters +cfg = specs.SimConfig() # object of class cfg to store simulation configuration +cfg.duration = 2e3 # Duration of the simulation, in ms +cfg.hParams['v_init'] = -70.0 # set v_init to -70 mV +cfg.hParams['celsius'] = 37.0 +cfg.dt = 0.1 #0.025 # Internal integration timestep to use +cfg.verbose = False # Show detailed messages +cfg.recordStep = 1 # Step size in ms to save data (eg.
V traces, LFP, etc) +cfg.filename = 'hypox_10s/' # Set file output name +cfg.Kceil = 15.0 +cfg.nRec = 12 + + # Network dimensions +cfg.sizeX = 500.0 #250.0 #1000 +cfg.sizeY = 200.0 #250.0 #1000 +cfg.sizeZ = 500.0 #200.0 +cfg.density = 90000.0 +cfg.Vtissue = cfg.sizeX * cfg.sizeY * cfg.sizeZ + +# slice conditions +cfg.ox = 'hypoxic' +if cfg.ox == 'perfused': + cfg.o2_bath = 0.1 + cfg.alpha_ecs = 0.2 + cfg.tort_ecs = 1.6 +elif cfg.ox == 'hypoxic': + cfg.o2_bath = 0.01 + cfg.alpha_ecs = 0.07 + cfg.tort_ecs = 1.8 + +# neuron params +cfg.betaNrn = 0.24 # total neuronal volume / total tissue volume +cfg.Ncell = int(cfg.density*(cfg.sizeX*cfg.sizeY*cfg.sizeZ*1e-9)) # default 90k / mm^3 +## neuron radius +if cfg.density == 90000.0: + cfg.rs = ((cfg.betaNrn * cfg.Vtissue) / (2 * np.pi * cfg.Ncell)) ** (1/3) +else: + cfg.rs = 7.52 +cfg.epas = -70 # False # passive reversal potential +cfg.gpas = 0.0001 # passive conductance +cfg.sa2v = 3.0 # False # neuron surface area to volume ratio +## conversion factor for surface area independent of cell volume +if cfg.sa2v: + cfg.somaR = (cfg.sa2v * cfg.rs**3 / 2.0) ** (1/2) +else: + cfg.somaR = cfg.rs +cfg.cyt_fraction = cfg.rs**3 / cfg.somaR**3 + +# sd init params +cfg.k0 = 70.0 # initial K+ bolus concentration +cfg.r0 = 100.0 # initial K+ bolus radius (in microns) \ No newline at end of file diff --git a/examples/spreadingDepression/init.py b/examples/spreadingDepression/init.py new file mode 100644 index 000000000..aa9001e1b --- /dev/null +++ b/examples/spreadingDepression/init.py @@ -0,0 +1,170 @@ +from netpyne import sim +from netParams import netParams +from cfg import cfg +import numpy as np +import os +import sys +import pickle +from neuron import h +import random +from matplotlib import pyplot as plt + +# Instantiate network +sim.initialize(netParams, cfg) # create network object and set cfg and net params +sim.net.createPops() # instantiate network populations +sim.net.createCells() # instantiate network cells based on defined populations +sim.net.connectCells() # create connections between cells based on params +sim.net.addStims() # add external stimulation to cells (IClamps etc) +sim.net.addRxD(nthreads=6) # add reaction-diffusion (RxD) + +# Additional sim setup +## parallel context +pc = h.ParallelContext() +pcid = pc.id() +nhost = pc.nhost() +pc.timeout(0) +pc.set_maxstep(100) # required when using multiple processes + +random.seed(pcid+120194) +all_secs = [sec for sec in h.allsec()] +cells_per_node = len(all_secs) +rec_inds = random.sample(range(cells_per_node), int(cfg.nRec / nhost)) +rec_cells = [h.Vector().record(all_secs[ind](0.5)._ref_v) for ind in rec_inds] +pos = [[all_secs[ind].x3d(0), all_secs[ind].y3d(0), all_secs[ind].z3d(0)] for ind in rec_inds] +pops = [str(all_secs[ind]).split('.')[1].split('s')[0] for ind in rec_inds] + +## only single core stuff +if pcid == 0: + ### create output dir + if not os.path.exists(cfg.filename): + try: + os.makedirs(cfg.filename) + except: + print("Unable to create the directory %r for the data and figures" + % cfg.filename) + os._exit(1) + + ### set variables for ecs concentrations + k_ecs = sim.net.rxd['species']['k']['hObj'][sim.net.rxd['regions']['ecs']['hObj']] + na_ecs = sim.net.rxd['species']['na']['hObj'][sim.net.rxd['regions']['ecs']['hObj']] + cl_ecs = sim.net.rxd['species']['k']['hObj'][sim.net.rxd['regions']['ecs']['hObj']] + o2_ecs = sim.net.rxd['species']['o2_extracellular']['hObj'][sim.net.rxd['regions']['ecs_o2']['hObj']] + + ### manually record from cells according to distance from the 
center of the slice + cell_positions = [((sec.x3d(0)-cfg.sizeX/2.0)**2 + (sec.y3d(0)+cfg.sizeY/2.0)**2 + (sec.z3d(0)-cfg.sizeZ/2.0)**2)**(0.5) for sec in h.allsec()] + t = h.Vector().record(h._ref_t) + soma_v = [] + soma_ki = [] + soma_nai = [] + soma_cli = [] + soma_nao = [] + soma_ko = [] + soma_clo = [] + soma_o2 = [] + rpos = [] + cell_type = [] + for i in range(int(cfg.sizeX//10)): + for r, soma in zip(cell_positions, h.allsec()): + if (10.0*i-2.5) < r < (10.0*i+2.5): + print(i,r) + rpos.append((soma.x3d(0)-cfg.sizeX/2, soma.y3d(0)+cfg.sizeY/2, soma.z3d(0)-cfg.sizeZ/2)) + cell_type.append(soma.name().split('.')[1].split('s')[0]) + soma_v.append(h.Vector().record(soma(0.5)._ref_v)) + soma_nai.append(h.Vector().record(soma(0.5)._ref_nai)) + soma_ki.append(h.Vector().record(soma(0.5)._ref_ki)) + soma_cli.append(h.Vector().record(soma(0.5)._ref_cli)) + soma_nao.append(h.Vector().record(soma(0.5)._ref_nao)) + soma_ko.append(h.Vector().record(soma(0.5)._ref_ko)) + soma_clo.append(h.Vector().record(soma(0.5)._ref_clo)) + soma_o2.append(h.Vector().record(o2_ecs.node_by_location(soma.x3d(0),soma.y3d(0),soma.z3d(0))._ref_concentration)) + break + + recs = {'v':soma_v, 'ki':soma_ki, 'nai':soma_nai, 'cli':soma_cli, + 't':t, 'ko':soma_ko, 'nao':soma_nao, 'clo':soma_clo, + 'pos':rpos, 'o2':soma_o2, 'rad':cell_positions, + 'cell_type' : cell_type} + +# Function definitions +def saveconc(): + np.save(os.path.join(cfg.filename,"k_%i.npy" % int(h.t)), k_ecs.states3d) + np.save(os.path.join(cfg.filename,"na_%i.npy" % int(h.t)), na_ecs.states3d) + np.save(os.path.join(cfg.filename,"cl_%i.npy" % int(h.t)), cl_ecs.states3d) + np.save(os.path.join(cfg.filename,'o2_%i.npy' % int(h.t)), o2_ecs.states3d) + +def progress_bar(tstop, size=40): + """ report progress of the simulation """ + prog = h.t/float(tstop) + fill = int(size*prog) + empt = size - fill + progress = '#' * fill + '-' * empt + sys.stdout.write('[%s] %2.1f%% %6.1fms of %6.1fms\r' % (progress, 100*prog, h.t, tstop)) + sys.stdout.flush() + +def run(tstop): + """ Run the simulations saving figures every 100ms and recording the wave progression every time step""" + if pcid == 0: + # record the wave progress (shown in figure 2) + name = '' + fout = open(os.path.join(cfg.filename,'wave_progress%s.txt' % name),'a') + last_print = 0 + time = [] + saveint = 100 + + while h.t < tstop: + time.append(h.t) + if int(h.t) % saveint == 0: + # plot extracellular concentrations averaged over depth every 100ms + if pcid == 0: + saveconc() + + if pcid == 0: progress_bar(tstop) + pc.psolve(pc.t(0)+h.dt) # run the simulation for 1 time step + + # h.fadvance() + # determine the furthest distance from the origin where + # extracellular potassium exceeds cfg.Kceil (dist) + # And the shortest distance from the origin where the extracellular + # extracellular potassium is below cfg.Kceil (dist1) + if pcid == 0 and h.t - last_print > 1.0: + last_print = h.t + dist = 0 + dist1 = 1e9 + for nd in sim.net.rxd.species['k']['hObj'].nodes: + if str(nd.region).split('(')[0] == 'Extracellular': + r = ((nd.x3d-cfg.sizeX/2.0)**2+(nd.y3d+cfg.sizeY/2.0)**2+(nd.z3d-cfg.sizeZ/2.0)**2)**0.5 + if nd.concentration>cfg.Kceil and r > dist: + dist = r + if nd.concentration<=cfg.Kceil and r < dist1: + dist1 = r + fout.write("%g\t%g\t%g\n" %(h.t, dist, dist1)) + fout.flush() + + if pcid == 0: + progress_bar(tstop) + fout.close() + with open(os.path.join(cfg.filename,"recs.pkl"),'wb') as fout: + pickle.dump(recs,fout) + print("\nSimulation complete. 
Plotting membrane potentials") + + with open(os.path.join(cfg.filename,"centermembrane_potential_%i.pkl" % pcid),'wb') as pout: + pickle.dump([rec_cells, pos, pops, time], pout) + + pc.barrier() # wait for all processes to save + +## final sim setup +h.load_file('stdrun.hoc') +h.finitialize(cfg.hParams['v_init']) +h.celsius = cfg.hParams['celsius'] +h.dt = cfg.dt + +run(cfg.duration) + +# basic plotting +if pcid == 0: + from analysis import traceExamples, compareKwaves + traceExamples(cfg.filename, cfg.filename + 'traces.png', iss=[0,4,8,12,13]) + plt.close() + compareKwaves([cfg.filename], [cfg.ox], 'Condition', colors=['r'], figname=cfg.filename+'kwave.png') + +pc.barrier() +h.quit() diff --git a/examples/spreadingDepression/netParams.py b/examples/spreadingDepression/netParams.py new file mode 100644 index 000000000..aa77699fc --- /dev/null +++ b/examples/spreadingDepression/netParams.py @@ -0,0 +1,335 @@ +from netpyne import specs + +from cfg import cfg + +#------------------------------------------------------------------------------ +# +# NETWORK PARAMETERS +# +#------------------------------------------------------------------------------ + +netParams = specs.NetParams() # object of class NetParams to store the network parameters + +netParams.sizeX = cfg.sizeX# - 2*cfg.somaR # x-dimension (horizontal length) size in um +netParams.sizeY = cfg.sizeY# - 2*cfg.somaR # y-dimension (vertical height or cortical depth) size in um +netParams.sizeZ = cfg.sizeZ# - 2*cfg.somaR # z-dimension (horizontal length) size in um +netParams.propVelocity = 100.0 # propagation velocity (um/ms) +netParams.probLengthConst = 150.0 # length constant for conn probability (um) + +#------------------------------------------------------------------------------ +## Population parameters +netParams.popParams['E'] = {'cellType': 'E', 'numCells': cfg.Ncell, + 'xRange': [0.0, cfg.sizeX], + 'yRange': [2 *cfg.somaR, cfg.sizeY - 2 *cfg.somaR], + 'zRange': [0.0, cfg.sizeZ], 'cellModel': 'rxdE'} + +#------------------------------------------------------------------------------ +## Cell property rules +Erule = netParams.importCellParams(label='Erule', fileName='Neuron.py', + conds={'cellType' : 'E', 'cellModel' : 'rxdE'}, cellName='ENeuron') +netParams.cellParams['Erule'] = Erule + +#------------------------------------------------------------------------------ +## RxD params + +### constants +from neuron.units import sec, mM +import math + +e_charge = 1.60217662e-19 +scale = 1e-14/e_charge +alpha = 5.3 +constants = {'e_charge' : e_charge, + 'scale' : scale, + 'gnabar' : (30/1000) * scale, # molecules/um2 ms mV , + 'gnabar_l' : (0.0247/1000) * scale, + 'gkbar' : (25/1000) * scale, + 'gkbar_l' : (0.05/1000) * scale, + 'gclbar_l' : (0.1/1000) * scale, + 'ukcc2' : 0.3 * mM/sec , + 'unkcc1' : 0.1 * mM/sec , + 'alpha' : alpha, + 'epsilon_k_max' : 0.25/sec, + 'epsilon_o2' : 0.17/sec, + 'vtau' : 1/250.0, + 'g_gliamax' : 5 * mM/sec, + 'beta0' : 7.0, + 'avo' : 6.0221409*(10**23), + 'p_max' : 0.8, # * mM/sec, + 'nao_initial' : 144.0, + 'nai_initial' : 18.0, + 'gnai_initial' : 18.0, + 'gki_initial' : 80.0, + 'ko_initial' : 3.5, + 'ki_initial' : 140.0, + 'clo_initial' : 130.0, + 'cli_initial' : 6.0, + 'o2_bath' : cfg.o2_bath, + 'v_initial' : -70.0, + 'r0' : 100.0, + 'k0' : 70.0} + +#sodium activation 'm' +alpha_m = "(0.32 * (rxd.v + 54.0))/(1.0 - rxd.rxdmath.exp(-(rxd.v + 54.0)/4.0))" +beta_m = "(0.28 * (rxd.v + 27.0))/(rxd.rxdmath.exp((rxd.v + 27.0)/5.0) - 1.0)" +alpha_m0 =(0.32 * (constants['v_initial'] + 54.0))/(1.0 - 
math.exp(-(constants['v_initial'] + 54)/4.0)) +beta_m0 = (0.28 * (constants['v_initial'] + 27.0))/(math.exp((constants['v_initial'] + 27.0)/5.0) - 1.0) +m_initial = alpha_m0/(beta_m0 + 1.0) + +#sodium inactivation 'h' +alpha_h = "0.128 * rxd.rxdmath.exp(-(rxd.v + 50.0)/18.0)" +beta_h = "4.0/(1.0 + rxd.rxdmath.exp(-(rxd.v + 27.0)/5.0))" +alpha_h0 = 0.128 * math.exp(-(constants['v_initial'] + 50.0)/18.0) +beta_h0 = 4.0/(1.0 + math.exp(-(constants['v_initial'] + 27.0)/5.0)) +h_initial = alpha_h0/(beta_h0 + 1.0) + +#potassium activation 'n' +alpha_n = "(0.032 * (rxd.v + 52.0))/(1.0 - rxd.rxdmath.exp(-(rxd.v + 52.0)/5.0))" +beta_n = "0.5 * rxd.rxdmath.exp(-(rxd.v + 57.0)/40.0)" +alpha_n0 = (0.032 * (constants['v_initial'] + 52.0))/(1.0 - math.exp(-(constants['v_initial'] + 52.0)/5.0)) +beta_n0 = 0.5 * math.exp(-(constants['v_initial'] + 57.0)/40.0) +n_initial = alpha_n0/(beta_n0 + 1.0) + +netParams.rxdParams['constants'] = constants + +### regions +regions = {} + +#### ecs dimensions +# margin = cfg.somaR +x = [0, cfg.sizeX] +y = [-cfg.sizeY, 0] +z = [0, cfg.sizeZ] + +regions['ecs'] = {'extracellular' : True, 'xlo' : x[0], + 'xhi' : x[1], + 'ylo' : y[0], + 'yhi' : y[1], + 'zlo' : z[0], + 'zhi' : z[1], + 'dx' : 25, + 'volume_fraction' : cfg.alpha_ecs, + 'tortuosity' : cfg.tort_ecs} + +regions['ecs_o2'] = {'extracellular' : True, 'xlo' : x[0], + 'xhi' : x[1], + 'ylo' : y[0], + 'yhi' : y[1], + 'zlo' : z[0], + 'zhi' : z[1], + 'dx' : 25, + 'volume_fraction' : 1.0, + 'tortuosity' : 1.0} + +regions['cyt'] = {'cells': 'all', 'secs': 'all', 'nrn_region': 'i', + 'geometry': {'class': 'FractionalVolume', + 'args': {'volume_fraction': cfg.cyt_fraction, 'surface_fraction': 1}}} + +regions['mem'] = {'cells' : 'all', 'secs' : 'all', 'nrn_region' : None, 'geometry' : 'membrane'} + +netParams.rxdParams['regions'] = regions + +### species +species = {} + +k_init_str = 'ki_initial if isinstance(node, rxd.node.Node1D) else (%f if ((node.x3d - %f/2)**2+(node.y3d + %f/2)**2+(node.z3d - %f/2)**2 <= %f**2) else ko_initial)' % (cfg.k0, cfg.sizeX, cfg.sizeY, cfg.sizeZ, cfg.r0) +species['k'] = {'regions' : ['cyt', 'mem', 'ecs'], 'd' : 2.62, 'charge' : 1, + 'initial' : k_init_str, + 'ecs_boundary_conditions' : constants['ko_initial'], 'name' : 'k'} + +species['na'] = {'regions' : ['cyt', 'mem', 'ecs'], 'd' : 1.78, 'charge' : 1, + 'initial' : 'nai_initial if isinstance(node, rxd.node.Node1D) else nao_initial', + 'ecs_boundary_conditions': constants['nao_initial'], 'name' : 'na'} + +species['cl'] = {'regions' : ['cyt', 'mem', 'ecs'], 'd' : 2.1, 'charge' : -1, + 'initial' : 'cli_initial if isinstance(node, rxd.node.Node1D) else clo_initial', + 'ecs_boundary_conditions' : constants['clo_initial'], 'name' : 'cl'} + +species['o2_extracellular'] = {'regions' : ['ecs_o2'], 'd' : 3.3, 'initial' : constants['o2_bath'], + 'ecs_boundary_conditions' : constants['o2_bath'], 'name' : 'o2'} + +netParams.rxdParams['species'] = species + +### parameters +params = {} +params['dump'] = {'regions' : ['cyt', 'ecs', 'ecs_o2'], 'name' : 'dump'} + +params['ecsbc'] = {'regions' : ['ecs', 'ecs_o2'], 'name' : 'ecsbc', 'value' : + '1 if (abs(node.x3d - ecs._xlo) < ecs._dx[0] or abs(node.x3d - ecs._xhi) < ecs._dx[0] or abs(node.y3d - ecs._ylo) < ecs._dx[1] or abs(node.y3d - ecs._yhi) < ecs._dx[1] or abs(node.z3d - ecs._zlo) < ecs._dx[2] or abs(node.z3d - ecs._zhi) < ecs._dx[2]) else 0'} + +netParams.rxdParams['parameters'] = params + +### states +netParams.rxdParams['states'] = {'vol_ratio' : {'regions' : ['cyt', 'ecs'], 'initial' : 1.0, 'name': 
'volume'}, + 'mgate' : {'regions' : ['cyt', 'mem'], 'initial' : m_initial, 'name' : 'mgate'}, + 'hgate' : {'regions' : ['cyt', 'mem'], 'initial' : h_initial, 'name' : 'hgate'}, + 'ngate' : {'regions' : ['cyt', 'mem'], 'initial' : n_initial, 'name' : 'ngate'}} + +### reactions +gna = "gnabar*mgate**3*hgate" +gk = "gkbar*ngate**4" +fko = "1.0 / (1.0 + rxd.rxdmath.exp(16.0 - k[ecs] / vol_ratio[ecs]))" +nkcc1A = "rxd.rxdmath.log((k[cyt] * cl[cyt] / vol_ratio[cyt]**2) / (k[ecs] * cl[ecs] / vol_ratio[ecs]**2))" +nkcc1B = "rxd.rxdmath.log((na[cyt] * cl[cyt] / vol_ratio[cyt]**2) / (na[ecs] * cl[ecs] / vol_ratio[ecs]**2))" +nkcc1 = "unkcc1 * (%s) * (%s+%s)" % (fko, nkcc1A, nkcc1B) +kcc2 = "ukcc2 * rxd.rxdmath.log((k[cyt] * cl[cyt] * vol_ratio[cyt]**2) / (k[ecs] * cl[ecs] * vol_ratio[ecs]**2))" + +#Nerst equation - reversal potentials +ena = "26.64 * rxd.rxdmath.log(na[ecs]*vol_ratio[cyt]/(na[cyt]*vol_ratio[ecs]))" +ek = "26.64 * rxd.rxdmath.log(k[ecs]*vol_ratio[cyt]/(k[cyt]*vol_ratio[ecs]))" +ecl = "26.64 * rxd.rxdmath.log(cl[cyt]*vol_ratio[ecs]/(cl[ecs]*vol_ratio[cyt]))" + +o2ecs = "o2_extracellular[ecs_o2]" +o2switch = "(1.0 + rxd.rxdmath.tanh(1e4 * (%s - 5e-4))) / 2.0" % (o2ecs) +p = "%s * p_max / (1.0 + rxd.rxdmath.exp((20.0 - (%s/vol_ratio[ecs]) * alpha)/3.0))" % (o2switch, o2ecs) +pumpA = "(%s / (1.0 + rxd.rxdmath.exp((25.0 - na[cyt] / vol_ratio[cyt])/3.0)))" % (p) +pumpB = "(1.0 / (1.0 + rxd.rxdmath.exp(3.5 - k[ecs] / vol_ratio[ecs])))" +pump = "(%s) * (%s)" % (pumpA, pumpB) +gliapump = "(1.0/3.0) * (%s / (1.0 + rxd.rxdmath.exp((25.0 - gnai_initial) / 3.0))) * (1.0 / (1.0 + rxd.rxdmath.exp(3.5 - k[ecs]/vol_ratio[ecs])))" % (p) +g_glia = "g_gliamax / (1.0 + rxd.rxdmath.exp(-((%s)*alpha/vol_ratio[ecs] - 2.5)/0.2))" % (o2ecs) +glia12 = "(%s) / (1.0 + rxd.rxdmath.exp((18.0 - k[ecs] / vol_ratio[ecs])/2.5))" % (g_glia) + +# epsilon_k = "(epsilon_k_max/(1.0 + rxd.rxdmath.exp(-(((%s)/vol_ratio[ecs]) * alpha - 2.5)/0.2))) * (1.0/(1.0 + rxd.rxdmath.exp((-20 + ((1.0+1.0/beta0 -vol_ratio[ecs])/vol_ratio[ecs]) /2.0))))" % (o2ecs) +epsilon_kA = "(epsilon_k_max/(1.0 + rxd.rxdmath.exp(-((%s/vol_ratio[ecs]) * alpha - 2.5)/0.2)))" % (o2ecs) +epsilon_kB = "(1.0/(1.0 + rxd.rxdmath.exp((-20 + ((1.0+1.0/beta0 - vol_ratio[ecs])/vol_ratio[ecs]) /2.0))))" +epsilon_k = '%s * %s' % (epsilon_kA, epsilon_kB) + + +volume_scale = "1e-18 * avo * %f" % (1.0 / cfg.sa2v) + +avo = 6.0221409*(10**23) +osm = "(1.1029 - 0.1029*rxd.rxdmath.exp( ( (na[ecs] + k[ecs] + cl[ecs] + 18.0)/vol_ratio[ecs] - (na[cyt] + k[cyt] + cl[cyt] + 132.0)/vol_ratio[cyt])/20.0))" +scalei = str(avo * 1e-18) +scaleo = str(avo * 1e-18) + +### reactions +mcReactions = {} + +## volume dynamics +mcReactions['vol_dyn'] = {'reactant' : 'vol_ratio[cyt]', 'product' : 'dump[ecs]', + 'rate_f' : "-1 * (%s) * vtau * ((%s) - vol_ratio[cyt])" % (scalei, osm), + 'membrane' : 'mem', 'custom_dynamics' : True, + 'scale_by_area' : False} + +mcReactions['vol_dyn_ecs'] = {'reactant' : 'dump[cyt]', 'product' : 'vol_ratio[ecs]', + 'rate_f' : "-1 * (%s) * vtau * ((%s) - vol_ratio[cyt])" % (scaleo, osm), + 'membrane' : 'mem', 'custom_dynamics' : True, + 'scale_by_area' : False} + +# # CURRENTS/LEAKS ---------------------------------------------------------------- +# sodium (Na) current +mcReactions['na_current'] = {'reactant' : 'na[cyt]', 'product' : 'na[ecs]', + 'rate_f' : "%s * (rxd.v - %s )" % (gna, ena), + 'membrane' : 'mem', 'custom_dynamics' : True, 'membrane_flux' : True} + +# potassium (K) current +mcReactions['k_current'] = {'reactant' : 'k[cyt]', 'product' : 'k[ecs]', 
+ 'rate_f' : "%s * (rxd.v - %s)" % (gk, ek), + 'membrane' : 'mem', 'custom_dynamics' : True, 'membrane_flux' : True} + +# nkcc1 (Na+/K+/2Cl- cotransporter) +mcReactions['nkcc1_current1'] = {'reactant': 'cl[cyt]', 'product': 'cl[ecs]', + 'rate_f': "2.0 * (%s) * (%s)" % (nkcc1, volume_scale), + 'membrane': 'mem', 'custom_dynamics' : True, 'membrane_flux' : True} + +mcReactions['nkcc1_current2'] = {'reactant': 'k[cyt]', 'product': 'k[ecs]', + 'rate_f': "%s * %s" % (nkcc1, volume_scale), + 'membrane': 'mem', 'custom_dynamics' : True, 'membrane_flux' : True} + +mcReactions['nkcc1_current3'] = {'reactant': 'na[cyt]', 'product': 'na[ecs]', + 'rate_f': "%s * %s" % (nkcc1, volume_scale), + 'membrane': 'mem', 'custom_dynamics' : True, 'membrane_flux' : True} + +# ## kcc2 (K+/Cl- cotransporter) +mcReactions['kcc2_current1'] = {'reactant' : 'cl[cyt]', 'product': 'cl[ecs]', + 'rate_f': "%s * %s" % (kcc2, volume_scale), + 'membrane' : 'mem', 'custom_dynamics' : True, 'membrane_flux' : True} + +mcReactions['kcc2_current2'] = {'reactant' : 'k[cyt]', 'product' : 'k[ecs]', + 'rate_f': "%s * %s" % (kcc2, volume_scale), + 'membrane' : 'mem', 'custom_dynamics' : True, 'membrane_flux' : True} + +## sodium leak +mcReactions['na_leak'] = {'reactant' : 'na[cyt]', 'product' : 'na[ecs]', + 'rate_f' : "gnabar_l * (rxd.v - %s)" % (ena), + 'membrane' : 'mem', 'custom_dynamics' : True, 'membrane_flux' : True} + +# ## potassium leak +mcReactions['k_leak'] = {'reactant' : 'k[cyt]', 'product' : 'k[ecs]', + 'rate_f' : "gkbar_l * (rxd.v - %s)" % (ek), + 'membrane' : 'mem', 'custom_dynamics' : True, 'membrane_flux' : True} + +# ## chlorine (Cl) leak +mcReactions['cl_current'] = {'reactant' : 'cl[cyt]', 'product' : 'cl[ecs]', + 'rate_f' : "gclbar_l * (%s - rxd.v)" % (ecl), + 'membrane' : 'mem', 'custom_dynamics' : True, 'membrane_flux' : True} + +# ## Na+/K+ pump current in neuron (2K+ in, 3Na+ out) +mcReactions['pump_current'] = {'reactant' : 'k[cyt]', 'product' : 'k[ecs]', + 'rate_f' : "-2.0 * %s * %s" % (pump, volume_scale), + 'membrane' : 'mem', 'custom_dynamics' : True, 'membrane_flux' : True} + +mcReactions['pump_current_na'] = {'reactant' : 'na[cyt]', 'product' : 'na[ecs]', + 'rate_f' : "3.0 * %s * %s" % (pump, volume_scale), + 'membrane' : 'mem', 'custom_dynamics' : True, 'membrane_flux' : True} + +# O2 depletrion from Na/K pump in neuron +mcReactions['oxygen'] = {'reactant' : o2ecs, 'product' : 'dump[cyt]', + 'rate_f' : "(%s) * (%s)" % (pump, volume_scale), + 'membrane' : 'mem', 'custom_dynamics' : True} + +netParams.rxdParams['multicompartmentReactions'] = mcReactions + +#RATES-------------------------------------------------------------------------- +rates = {} +## dm/dt +rates['m_gate'] = {'species' : 'mgate', 'regions' : ['cyt', 'mem'], + 'rate' : "((%s) * (1.0 - mgate)) - ((%s) * mgate)" % (alpha_m, beta_m)} + +## dh/dt +rates['h_gate'] = {'species' : 'hgate', 'regions' : ['cyt', 'mem'], + 'rate' : "((%s) * (1.0 - hgate)) - ((%s) * hgate)" % (alpha_h, beta_h)} + +## dn/dt +rates['n_gate'] = {'species' : 'ngate', 'regions' : ['cyt', 'mem'], + 'rate' : '((%s) * (1.0 - ngate)) - ((%s) * ngate)' % (alpha_n, beta_n)} + +## diffusion +rates['o2diff'] = {'species' : o2ecs, 'regions' : ['ecs_o2'], + 'rate' : 'ecsbc * (epsilon_o2 * (o2_bath - %s/vol_ratio[ecs]))' % (o2ecs)} + +rates['kdiff'] = {'species' : 'k[ecs]', 'regions' : ['ecs'], + 'rate' : 'ecsbc * ((%s) * (ko_initial - k[ecs]/vol_ratio[ecs]))' % (epsilon_k)} + +rates['nadiff'] = {'species' : 'na[ecs]', 'regions' : ['ecs'], + 'rate' : 'ecsbc * ((%s) * (nao_initial 
- na[ecs]/vol_ratio[ecs]))' % (epsilon_k)} + +rates['cldiff'] = {'species' : 'cl[ecs]', 'regions' : ['ecs'], + 'rate' : 'ecsbc * ((%s) * (clo_initial - cl[ecs]/vol_ratio[ecs]))' % (epsilon_k)} + +## Glia K+/Na+ pump current +rates['glia_k_current'] = {'species' : 'k[ecs]', 'regions' : ['ecs'], + 'rate' : '-(%s) - (2.0 * (%s))' % (glia12, gliapump)} + +rates['glia_na_current'] = {'species' : 'na[ecs]', 'regions' : ['ecs'], + 'rate' : '3.0 * (%s)' % (gliapump)} + +## Glial O2 depletion +rates['o2_pump'] = {'species' : o2ecs, 'regions' : ['ecs'], + 'rate' : '-(%s)' % (gliapump)} + +netParams.rxdParams['rates'] = rates + +# netpyne v0.00 - first working version of model in netpyne. still need to implement in vivo bc. turn off stim to compare to original +# v0.01 - rearrange setting initial high K+ based on new netpyne dimensions +# v0.02 - fixed issue with initial distribution of K+ +# v0.03 - added ecs_boundary_conditions and removed hh mechanisms +# v0.04 - fixed issues with naming and cell volume +# v0.05 - setup for 5s anoxic test run +# v0.06 - second iter of test run version, 10s, record 200 cells, anoxic +# v0.07 - updated nkcc1 to reflect SpatialModelRealistic.py +# v0.08 - included second ecs for o2 and some other features from SDinSlice/SpatialModel.py previously missing +# v0.09 - replicates results from SpatialModel.py +# v0.10 - six populations, probabilistic connectivity, 60k neurons per mm3 +# v0.11a - separate cell models for separate populations +# v0.11b - add missing na+ current for glial na/k pump current \ No newline at end of file diff --git a/examples/spreadingDepression/schematic.png b/examples/spreadingDepression/schematic.png new file mode 100644 index 000000000..2373bf38a Binary files /dev/null and b/examples/spreadingDepression/schematic.png differ diff --git a/netpyne/__init__.py b/netpyne/__init__.py index 0471a25f7..2c142c355 100644 --- a/netpyne/__init__.py +++ b/netpyne/__init__.py @@ -4,7 +4,7 @@ NetPyNE consists of a number of sub-packages and modules.
""" -__version__ = '1.0.1' +__version__ = '1.0.2' import os, sys display = os.getenv('DISPLAY') nogui = (sys.argv.count('-nogui')>0) diff --git a/netpyne/analysis/__init__.py b/netpyne/analysis/__init__.py index a58b906ce..e90f59b31 100644 --- a/netpyne/analysis/__init__.py +++ b/netpyne/analysis/__init__.py @@ -15,7 +15,7 @@ # Import spike-related functions from .spikes import prepareSpikeData, prepareRaster, prepareSpikeHist, popAvgRates -from .spikes_orig import calculateRate, plotRates, plotSyncs, plotSpikeStats, plotRatePSD, plotRateSpectrogram, popAvgRates, plotfI, calculatefI +from .spikes_legacy import calculateRate, plotRates, plotSyncs, plotSpikeStats, plotRatePSD, plotRateSpectrogram, popAvgRates, plotfI, calculatefI # Import traces-related functions #from .traces import prepareTraces diff --git a/netpyne/analysis/spikes.py b/netpyne/analysis/spikes.py index d49065e33..ab8b807da 100644 --- a/netpyne/analysis/spikes.py +++ b/netpyne/analysis/spikes.py @@ -27,7 +27,7 @@ import pandas as pd import numpy as np from numbers import Number -from .utils import exception +from .utils import exception, syncMeasure from .tools import getInclude, getSpktSpkid from .tools import saveData as saveFigData from ..support.scalebar import add_scalebar @@ -41,6 +41,7 @@ def prepareSpikeData( maxSpikes=1e8, orderBy='gid', popRates=True, + syncLines=True, saveData=False, fileName=None, fileDesc=None, @@ -200,6 +201,9 @@ def prepareSpikeData( legendLabels = [popLabel + '\n cells: %i\n syn/cell: %0.1f\n rate: %.3g Hz' % (popNumCells[popIndex], popConnsPerCell[popIndex], avgRates[popLabel]) for popIndex, popLabel in enumerate(popLabels) if popLabel in avgRates] title = 'cells: %i syn/cell: %0.1f rate: %0.1f Hz' % (numCells, connsPerCell, firingRate) + if syncLines: + title = '%s sync=%0.2f' % (title, syncMeasure()) + if 'title' in kwargs: title = kwargs['title'] diff --git a/netpyne/analysis/spikes_orig.py b/netpyne/analysis/spikes_legacy.py similarity index 100% rename from netpyne/analysis/spikes_orig.py rename to netpyne/analysis/spikes_legacy.py diff --git a/netpyne/analysis/utils.py b/netpyne/analysis/utils.py index fe41f7cb4..6c9df489e 100644 --- a/netpyne/analysis/utils.py +++ b/netpyne/analysis/utils.py @@ -463,7 +463,9 @@ def checkAvailablePlots(requireCfg=False): 'plotSpikeStats': False, 'plotLFP': False, 'granger': False, - 'plotRxDConcentration': False} + 'plotRxDConcentration': False, + 'plotDipole': False, + 'plotEEG': False} # plot conn if hasattr(sim, 'net') and hasattr(sim.net, 'allCells') and len(sim.net.allCells) > 0: @@ -494,6 +496,12 @@ def checkAvailablePlots(requireCfg=False): avail['plotLFP'] = True + # plot dipole/EEG + if hasattr(sim, 'allSimData') and 'dipoleSum' in sim.allSimData and len(sim.allSimData['dipole']) > 0: + + avail['plotDipole'] = True + avail['plotEEG'] = True + # rxd concentation if hasattr(sim, 'net') and hasattr(sim.net, 'rxd') and 'species' in sim.net.rxd and 'regions' in sim.net.rxd \ and len(sim.net.rxd['species']) > 0 and len(sim.net.rxd['regions']) > 0: diff --git a/netpyne/cell/compartCell.py b/netpyne/cell/compartCell.py index 535112076..8c85e21f4 100644 --- a/netpyne/cell/compartCell.py +++ b/netpyne/cell/compartCell.py @@ -497,7 +497,7 @@ def addStimsNEURONObj(self): if stimParams['type'] == 'NetStim': self.addNetStim(stimParams, stimContainer=stimParams) - else: #if stimParams['type'] in ['IClamp', 'VClamp', 'SEClamp', 'AlphaSynapse']: + else: #if stimParams['type'] in ['IClamp', 'VClamp', 'SEClamp', 'AlphaSynapse']: stim = getattr(h, 
stimParams['type'])(self.secs[stimParams['sec']]['hObj'](stimParams['loc'])) stimProps = {k:v for k,v in stimParams.items() if k not in ['label', 'type', 'source', 'loc', 'sec', 'hObj']} for stimPropName, stimPropValue in stimProps.items(): # set mechanism internal stimParams @@ -593,7 +593,7 @@ def associateGid (self, threshold = None): del nc # discard netcon sim.net.gid2lid[self.gid] = len(sim.net.gid2lid) - + def addSynMech (self, synLabel, secLabel, loc): from .. import sim @@ -1083,19 +1083,20 @@ def addStim (self, params): print("Can't set point process paramaters of type vector eg. VClamp.amp[3]") pass #setattr(stim, stimParamName._ref_[0], stimParamValue[0]) - elif 'originalFormat' in params and stimParamName=='originalFormat' and params['originalFormat']=='NeuroML2_stochastic_input': - if sim.cfg.verbose: print((' originalFormat: %s'%(params['originalFormat']))) - - rand = h.Random() - stim_ref = params['label'][:params['label'].rfind(self.tags['pop'])] - - # e.g. Stim3_2_popPyrS_2_soma_0_5 -> 2 - index_in_stim = int(stim_ref.split('_')[-2]) - stim_id = stim_ref.split('_')[0] - sim._init_stim_randomizer(rand, stim_id, index_in_stim, sim.cfg.seeds['stim']) - rand.negexp(1) - stim.noiseFromRandom(rand) - params['h%s'%params['originalFormat']] = rand + elif 'originalFormat' in params and stimParamName=='originalFormat': + if params['originalFormat']=='NeuroML2_stochastic_input': + if sim.cfg.verbose: print((' originalFormat: %s'%(params['originalFormat']))) + + rand = h.Random() + stim_ref = params['label'][:params['label'].rfind(self.tags['pop'])] + + # e.g. Stim3_2_popPyrS_2_soma_0_5 -> 2 + index_in_stim = int(stim_ref.split('_')[-2]) + stim_id = stim_ref.split('_')[0] + sim._init_stim_randomizer(rand, stim_id, index_in_stim, sim.cfg.seeds['stim']) + rand.negexp(1) + stim.noiseFromRandom(rand) + params['h%s'%params['originalFormat']] = rand else: if stimParamName in ['weight']: setattr(stim, stimParamName, stimParamValue) @@ -1358,18 +1359,18 @@ def calcAbsSegCoords(self): p3dsoma = self.getSomaPos() pop = self.tags['pop'] - + self._segCoords = {} p3dsoma = p3dsoma[np.newaxis].T # trasnpose 1d array to enable matrix calculation if hasattr(sim.net.pops[pop], '_morphSegCoords'): - # rotated coordinates around z axis first then shift relative to the soma + # rotated coordinates around z axis first then shift relative to the soma morphSegCoords = sim.net.pops[pop]._morphSegCoords self._segCoords['p0'] = p3dsoma + morphSegCoords['p0'] self._segCoords['p1'] = p3dsoma + morphSegCoords['p1'] else: - # rotated coordinates around z axis - self._segCoords['p0'] = p3dsoma + # rotated coordinates around z axis + self._segCoords['p0'] = p3dsoma self._segCoords['p1'] = p3dsoma self._segCoords['d0'] = morphSegCoords['d0'] diff --git a/netpyne/conversion/neuromlFormat.py b/netpyne/conversion/neuromlFormat.py index 55b773cad..52fbfa157 100644 --- a/netpyne/conversion/neuromlFormat.py +++ b/netpyne/conversion/neuromlFormat.py @@ -1,5 +1,5 @@ """ -Module for importing and exporting NeuroML +Module for importing and exporting NeuroML 2 """ diff --git a/netpyne/plotting/plotRaster.py b/netpyne/plotting/plotRaster.py index f1a1429a1..75f5fd2b4 100644 --- a/netpyne/plotting/plotRaster.py +++ b/netpyne/plotting/plotRaster.py @@ -17,6 +17,7 @@ def plotRaster( popNumCells=None, popLabels=None, popColors=None, + syncLines=False, legend=True, colorList=None, orderInverse=False, @@ -104,6 +105,11 @@ def plotRaster( *Default:* ``None`` draws from the NetPyNE default colorList. 
+ syncLines : bool + Calculate synchrony measure and plot vertical lines for each spike to evidence synchrony if ``True``. + + *Default:* ``False`` + legend : bool Whether or not to add a legend to the plot. @@ -315,6 +321,10 @@ def plotRaster( rasterPlotter = ScatterPlotter(data=scatterData, kind='raster', axis=axis, **axisArgs, **kwargs) metaFig = rasterPlotter.metafig + # add spike lines + for spkt in spkTimes: + metaFig.ax.plot((spkt, spkt), (0, len(cellInds)), 'r-', linewidth=0.1) + # add legend if legend: diff --git a/netpyne/plotting/plotSpikeFreq.py b/netpyne/plotting/plotSpikeFreq.py index fdd86fbaf..fdb99eeb6 100644 --- a/netpyne/plotting/plotSpikeFreq.py +++ b/netpyne/plotting/plotSpikeFreq.py @@ -382,7 +382,10 @@ def plotSpikeFreq( currentGids = popGids[popIndex] # Use GIDs to get a spiketimes list for this population - spkinds, spkts = list(zip(*[(spkgid, spkt) for spkgid, spkt in zip(spkInds, spkTimes) if spkgid in currentGids])) + try: + spkinds, spkts = list(zip(*[(spkgid, spkt) for spkgid, spkt in zip(spkInds, spkTimes) if spkgid in currentGids])) + except: + spkinds, spkts = [], [] # Bin the data using Numpy histoData = np.histogram(spkts, bins=np.arange(timeRange[0], timeRange[1], binSize)) @@ -449,7 +452,10 @@ def plotSpikeFreq( groupColor = popColors[popLabel] # Use GIDs to get a spiketimes list for this population - spkinds, spkts = list(zip(*[(spkgid, spkt) for spkgid, spkt in zip(spkInds, spkTimes) if spkgid in allGids])) + try: + spkinds, spkts = list(zip(*[(spkgid, spkt) for spkgid, spkt in zip(spkInds, spkTimes) if spkgid in allGids])) + except: + spkinds, spkts = [], [] # Bin the data using Numpy histoData = np.histogram(spkts, bins=np.arange(timeRange[0], timeRange[1], binSize)) diff --git a/netpyne/plotting/plotSpikeHist.py b/netpyne/plotting/plotSpikeHist.py index cac7c7446..6bb509d9f 100644 --- a/netpyne/plotting/plotSpikeHist.py +++ b/netpyne/plotting/plotSpikeHist.py @@ -370,7 +370,10 @@ def plotSpikeHist( currentGids = popGids[popIndex] # Use GIDs to get a spiketimes list for this population - spkinds, spkts = list(zip(*[(spkgid, spkt) for spkgid, spkt in zip(spkInds, spkTimes) if spkgid in currentGids])) + try: + spkinds, spkts = list(zip(*[(spkgid, spkt) for spkgid, spkt in zip(spkInds, spkTimes) if spkgid in currentGids])) + except: + spkinds, spkts = [], [] # Append the population spiketimes list to histPlotter.x histPlotter.x.append(spkts) @@ -409,7 +412,10 @@ def plotSpikeHist( groupColor = popColors[popLabel] # Use GIDs to get a spiketimes list for this population - spkinds, spkts = list(zip(*[(spkgid, spkt) for spkgid, spkt in zip(spkInds, spkTimes) if spkgid in allGids])) + try: + spkinds, spkts = list(zip(*[(spkgid, spkt) for spkgid, spkt in zip(spkInds, spkTimes) if spkgid in allGids])) + except: + spkinds, spkts = [], [] # Append the population spiketimes list to histPlotter.x histPlotter.x.append(spkts) diff --git a/netpyne/support/bsmart.py b/netpyne/support/bsmart.py index b0b4d6a3b..56ad7d6aa 100644 --- a/netpyne/support/bsmart.py +++ b/netpyne/support/bsmart.py @@ -29,10 +29,11 @@ sampling rate of the returned quantities is calculated as fs/2. 
To calculate the power spectrum powspec of a single time series x over the frequency range 0:freq, -use the following (NB: now accessible via "from spectrum import ar") -from bsmart import armorf, spectrum_AR -[A,Z,tmp]=armorf(x,ntrls,npts,p) # Calculate autoregressive fit -for i in range(freq+1): # Loop over frequencies +use the following (NB: now accessible via "from spectrum import ar"): + + from bsmart import armorf, spectrum_AR + [A,Z,tmp]=armorf(x,ntrls,npts,p) # Calculate autoregressive fit + for i in range(freq+1): # Loop over frequencies [S,H]=spectrum_AR(A,Z,p,i,fs) # Calculate spectrum powspec[i]=abs(S**2) # Calculate and store power diff --git a/netpyne/support/morphology.py b/netpyne/support/morphology.py index 7385c2ae0..7bca7ae23 100644 --- a/netpyne/support/morphology.py +++ b/netpyne/support/morphology.py @@ -64,7 +64,7 @@ def load(filename, fileformat=None, cell=None, use_axon=True, xshift=0, yshift=0 # pull the morphology for the demo from NeuroMorpho.Org from PyNeuronToolbox import neuromorphoorg with open('c91662.swc', 'w') as f: - f.write(neuromorphoorg.morphology('c91662')) + f.write(neuromorphoorg.morphology('c91662')) cell = load_swc(filename) """ diff --git a/netpyne/support/scalebar.py b/netpyne/support/scalebar.py index a244a7d70..788a22126 100644 --- a/netpyne/support/scalebar.py +++ b/netpyne/support/scalebar.py @@ -55,8 +55,7 @@ def add_scalebar(ax, matchx=True, matchy=True, hidex=True, hidey=True, unitsx='' Adds a set of scale bars to *ax*, matching the size to the ticks of the plot and optionally hiding the x and y axes - ax : the axis to attach ticks to - - matchx,matchy : if True, set size of scale bars to spacing between ticks - if False, size should be set using sizex and sizey params + - matchx,matchy : if True, set size of scale bars to spacing between ticks, if False, size should be set using sizex and sizey params - hidex,hidey : if True, hide x-axis and y-axis of parent - **kwargs : additional arguments passed to AnchoredScaleBars Returns created scalebar object diff --git a/setup.py b/setup.py index 98291d537..9e56161d8 100644 --- a/setup.py +++ b/setup.py @@ -74,7 +74,7 @@ # your project is installed. For an analysis of "install_requires" vs pip's # requirements files see: # https://packaging.python.org/en/latest/requirements.html - install_requires=["numpy", "scipy", "matplotlib", "matplotlib-scalebar", "future", "pandas", "bokeh", "schema"], + install_requires=["numpy", "scipy", "matplotlib", "matplotlib-scalebar", "future", "pandas", "bokeh", "schema", "lfpykit"], # List additional groups of dependencies here (e.g. development # dependencies). You can install these using the following syntax, # for example: