elephant.statistics.Complexity

class elephant.statistics.Complexity(spiketrains, sampling_rate=None, bin_size=None, binary=True, spread=0, tolerance=1e-08)[source]

Class for the complexity distribution (i.e. the number of synchronous spikes found) of a list of neo.SpikeTrain objects (Grün et al., 2007).

Complexity is calculated by counting the number of spikes (i.e. non-empty bins) that occur separated by spread - 1 or fewer empty bins, within and across spike trains in the spiketrains list.

The implementation (without spread) is based on the paper cited above.
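For spread = 0 the computation reduces to counting, per bin, how many neurons spike and then histogramming those counts. A minimal numpy sketch of this idea (an illustration only, not the library's internal implementation; the toy raster reproduces the spike trains used in the Examples below):

import numpy as np

# Toy binary raster on a common 1 ms grid: neuron 1 spikes at 1, 4, 6 ms,
# neuron 2 spikes at 1, 5, 8 ms (the same data as in the Examples below).
raster = np.array([
    [0, 1, 0, 0, 1, 0, 1, 0, 0, 0],
    [0, 1, 0, 0, 0, 1, 0, 0, 1, 0],
])

time_hist = raster.sum(axis=0)             # neurons spiking per bin: [0 2 0 0 1 1 1 0 1 0]
complexity_hist = np.bincount(time_hist)   # occurrences per synchrony level: [5 4 1]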

Parameters:
spiketrains : list of neo.SpikeTrain

Spike trains with a common time axis (same t_start and t_stop)

sampling_rate : pq.Quantity or None, optional

Sampling rate of the spike trains with units of 1/time. Used to shift the epoch edges in order to avoid rounding errors. If None, using the epoch to slice spike trains may introduce rounding errors.

Default: None

bin_size : pq.Quantity or None, optional

Width of the histogram’s time bins with units of time. The user must specify the bin_size or the sampling_rate.

  • If None and the sampling_rate is available, 1/sampling_rate is used.

  • If both are given, bin_size is used.

Default: None

binary : bool, optional
  • If True, the time histogram counts only the number of neurons that spike in each bin.

  • If False, the total number of spikes per bin is counted in the time histogram.

Default: True

spread : int, optional

Number of bins in which to check for synchronous spikes. Spikes that occur separated by spread - 1 or fewer empty bins are considered synchronous.

  • spread = 0 corresponds to a bincount across spike trains.

  • spread = 1 corresponds to counting consecutive spikes.

  • spread = 2 corresponds to counting consecutive spikes and spikes separated by exactly 1 empty bin.

  • spread = n corresponds to counting spikes separated by at most n - 1 empty bins.

Default: 0

tolerance : float or None, optional

Tolerance for rounding errors in the binning process and in the input data. If None, possible binning errors are not accounted for.

Default: 1e-8

Raises:
ValueError

When t_stop is smaller than t_start.

When neither sampling_rate nor bin_size is specified.

When spread is not a positive integer.

When spiketrains is an empty list.

When t_start is not the same for all spiketrains.

When t_stop is not the same for all spiketrains.

TypeError

When spiketrains is not a list.

When the elements in spiketrains are not instances of neo.SpikeTrain.

Warns:
UserWarning

If no sampling rate is supplied, which may lead to rounding errors when using the epoch to slice spike trains.

Notes

Note that with most common parameter combinations spike times can end up on bin edges. This makes the binning susceptible to rounding errors, which are accounted for by moving spikes that lie within tolerance of the next bin edge into the following bin. This behaviour can be adjusted using the tolerance parameter and turned off by setting tolerance=None.
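Conceptually, this correction amounts to nudging a spike that lies within tolerance of the upper edge of its bin so that it falls into the following bin. A small sketch of that idea (illustrative only, not the library's internal code; the helper name shift_edge_spikes is made up for this example):

import numpy as np

def shift_edge_spikes(spike_times, bin_size, tolerance=1e-8):
    # Spikes within `tolerance` of the next bin edge are moved onto that edge,
    # so they are counted in the following bin.
    next_edges = (np.floor(spike_times / bin_size) + 1) * bin_size
    close_to_edge = (next_edges - spike_times) <= tolerance
    return np.where(close_to_edge, next_edges, spike_times)

# A spike at 0.9999999995 ms with 1 ms bins is treated as belonging to bin [1, 2) ms.
print(shift_edge_spikes(np.array([0.9999999995, 0.3]), bin_size=1.0))  # [1.  0.3]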

Examples

>>> import neo
>>> import quantities as pq
>>> from elephant.statistics import Complexity
>>> sampling_rate = 1/pq.ms
>>> st1 = neo.SpikeTrain([1, 4, 6] * pq.ms, t_stop=10.0 * pq.ms)
>>> st2 = neo.SpikeTrain([1, 5, 8] * pq.ms, t_stop=10.0 * pq.ms)
>>> sts = [st1, st2]
>>> # spread = 0, a simple bincount
>>> cpx = Complexity(sts, sampling_rate=sampling_rate)

Complexity calculated at sampling rate precision

>>> print(cpx.complexity_histogram)
[5 4 1]
>>> print(cpx.time_histogram.flatten())
[0 2 0 0 1 1 1 0 1 0] dimensionless
>>> print(cpx.time_histogram.times)
[0. 1. 2. 3. 4. 5. 6. 7. 8. 9.] ms
>>> # spread = 1, consecutive spikes
>>> cpx = Complexity(sts, sampling_rate=sampling_rate, spread=1)

Complexity calculated at sampling rate precision

>>> print(cpx.complexity_histogram) 
[5 4 1]
>>> print(cpx.time_histogram.flatten())
[0 2 0 0 3 3 3 0 1 0] dimensionless
>>> # spread = 2, consecutive spikes and separated by 1 empty bin
>>> cpx = Complexity(sts, sampling_rate=sampling_rate, spread=2)

Complexity calculated at sampling rate precision

>>> print(cpx.complexity_histogram)
[4 0 1 0 1]
>>> print(cpx.time_histogram.flatten())
[0 2 0 0 4 4 4 4 4 0] dimensionless
>>> pdf1 = cpx.pdf()
>>> pdf1  # noqa
<AnalogSignal(array([[0.66666667],
       [0.        ],
       [0.16666667],
       [0.        ],
       [0.16666667]]) * dimensionless, [0.0 dimensionless, 5.0 dimensionless], sampling rate: 1.0 dimensionless)>
>>> pdf1.magnitude 
array([[0.66666667],
       [0.        ],
       [0.16666667],
       [0.        ],
       [0.16666667]])
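The Examples above use binary=True (the default). A short hedged sketch of the binary=False case, where one neuron fires twice inside the same bin; the expected counts in the comments follow from the binary parameter description above rather than verified doctest output:

st3 = neo.SpikeTrain([1.1, 1.4] * pq.ms, t_stop=5.0 * pq.ms)  # two spikes in bin [1, 2) ms
st4 = neo.SpikeTrain([1.2] * pq.ms, t_stop=5.0 * pq.ms)       # one spike in the same bin

cpx_neurons = Complexity([st3, st4], sampling_rate=sampling_rate, binary=True)
cpx_spikes = Complexity([st3, st4], sampling_rate=sampling_rate, binary=False)

# binary=True counts spiking neurons per bin (2 in bin 1);
# binary=False counts all spikes per bin (3 in bin 1).
print(cpx_neurons.time_histogram.flatten())
print(cpx_spikes.time_histogram.flatten())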
Attributes:
epoch : neo.Epoch

An epoch object containing complexity values, left edges and durations of all intervals with at least one spike.

  • epoch.array_annotations['complexity'] contains the complexity values per spike.

  • epoch.times contains the left edges.

  • epoch.durations contains the durations.

time_histogram : neo.AnalogSignal

A neo.AnalogSignal object containing the histogram values. neo.AnalogSignal[j] is the histogram computed between t_start + j * bin_size and t_start + (j + 1) * bin_size.

  • If binary = True : Number of neurons that spiked in each bin, regardless of the number of spikes.

  • If binary = False : Number of neurons and spikes per neuron in each bin.

complexity_histogram : np.ndarray

The number of occurrences of events of different complexities. complexity_histogram[i] corresponds to the number of events of complexity i for i > 0.
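A short usage sketch tying the attributes together, continuing the spread = 2 example from the Examples section (attribute names as documented above; the exact values depend on the data):

ep = cpx.epoch
print(ep.times)                            # left edges of the intervals with at least one spike
print(ep.durations)                        # durations of those intervals
print(ep.array_annotations['complexity'])  # complexity values, as described above
print(cpx.time_histogram.sampling_period)  # equals the bin width (here 1 ms)
print(cpx.complexity_histogram)            # occurrence count per complexity value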

__init__(spiketrains, sampling_rate=None, bin_size=None, binary=True, spread=0, tolerance=1e-08)[source]

Methods

__init__(spiketrains[, sampling_rate, ...])

pdf()

Probability density computed from the complexity histogram.
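Consistent with the Examples above, the returned values equal the complexity histogram normalized to unit sum (shown here for the spread = 2 case as a cross-check with plain numpy, not the method's internal code):

import numpy as np

hist = np.array([4, 0, 1, 0, 1])  # cpx.complexity_histogram for spread = 2
print(hist / hist.sum())          # [0.66666667 0. 0.16666667 0. 0.16666667], matching cpx.pdf()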