SPADE Tutorial

import numpy as np
import quantities as pq
import neo
import elephant
import viziphant

Generate correlated data

SPADE is a method to detect repeated spatio-temporal activity patterns in parallel spike train data that occur in excess of chance expectation. In this tutorial, we will use SPADE to detect the simplest type of such patterns: synchronous events found across a subset of the neurons considered (i.e., patterns that do not exhibit a temporal extent). We will demonstrate the method on stochastic data in which we control the pattern statistics. In a first step, let us generate 10 random spike trains, each modeled as a Poisson process, in which a certain proportion of the spikes is synchronized across the spike trains. To this end, we use the compound_poisson_process() function, which expects the rate of the resulting processes in addition to a distribution A[n] indicating the probability of finding synchronous spikes of a given order n. In our example, we construct the distribution such that there is a small probability of producing a synchronous event of order 10 (A[10]==0.02); otherwise, spikes are not synchronous with those of other neurons (i.e., synchronous events of order 1, A[1]==0.98). Notice that the length of the distribution A determines the number len(A)-1 of spike trains returned by the function, and that A[0] is ignored for reasons of clearer notation.

spiketrains = elephant.spike_train_generation.compound_poisson_process(
    rate=5*pq.Hz, A=[0]+[0.98]+[0]*8+[0.02], t_stop=10*pq.s)

In a second step, we add 90 purely random Poisson spike trains using the homogeneous_poisson_process() function, such that in total we have 10 spike trains that exhibit occasional synchronized events and 90 uncorrelated spike trains.

for i in range(90):
    spiketrains.append(elephant.spike_train_generation.homogeneous_poisson_process(
        rate=5*pq.Hz, t_stop=10*pq.s))
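To make the construction of the correlated trains concrete, the idea behind a compound Poisson process can be sketched with plain NumPy (an illustrative toy, not elephant's implementation): a "mother" Poisson process is generated, and each mother spike is copied into n of the trains, where the order n is drawn from the amplitude distribution A.

```python
import numpy as np

rng = np.random.default_rng(42)

# Amplitude distribution as in the tutorial: A[n] is the probability that a
# mother spike is copied into n of the trains (A[0] is unused padding).
A = np.array([0] + [0.98] + [0]*8 + [0.02], dtype=float)
n_trains = len(A) - 1          # 10 spike trains
rate, t_stop = 5.0, 10.0       # target rate per train (Hz), duration (s)

# A mother spike of order n contributes one spike to n trains, so the mother
# rate is scaled by n_trains / E[n] to give each train the target rate.
expected_order = (np.arange(len(A)) * A).sum()   # E[n] = 1.18 here
mother_rate = rate * n_trains / expected_order
n_mother = rng.poisson(mother_rate * t_stop)
mother_times = np.sort(rng.uniform(0.0, t_stop, n_mother))

# Copy each mother spike into `order` randomly chosen trains; order 10 events
# (probability 0.02) are the synchronous patterns we will search for.
trains = [[] for _ in range(n_trains)]
for t in mother_times:
    order = rng.choice(len(A), p=A)
    for idx in rng.choice(n_trains, size=order, replace=False):
        trains[idx].append(t)
```

This toy produces plain lists of spike times; elephant's compound_poisson_process() returns proper neo.SpikeTrain objects with units attached.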

Mining patterns with SPADE

In the next step, we run the spade() method to extract the synchronous patterns. We choose 1 ms as the time scale for discretization of the patterns, and specify a window length of 1 bin (meaning we search for synchronous patterns only). Also, we concentrate on patterns that involve at least 3 spikes, thereby significantly accelerating the search by ignoring frequent events of order 2. To test for the significance of patterns, we repeat the pattern detection on 100 spike-dither surrogates of the original data, created by dithering each spike by up to 5 ms in time. For the final step of pattern set reduction (psr), we use the standard parameter set [0, 0, 0].
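The effect of the 1 ms discretization can be illustrated with a small NumPy sketch (a toy example, not elephant's binning code): spike times are mapped to bin indices, and bins occupied by several neurons constitute the synchronous events that SPADE searches for when winlen=1.

```python
import numpy as np

# Toy example of 1 ms discretization for two hypothetical spike trains.
binsize = 0.001                               # 1 ms, in seconds
st_a = np.array([0.0105, 0.5005, 2.3005])     # spike times of neuron A (s)
st_b = np.array([0.0102, 1.7005, 2.3009])     # spike times of neuron B (s)

bins_a = np.unique((st_a / binsize).astype(int))
bins_b = np.unique((st_b / binsize).astype(int))

# Bins occupied by both neurons: synchronous events (of order 2) at the
# chosen 1 ms resolution, here bins 10 and 2300.
sync_bins = np.intersect1d(bins_a, bins_b)
```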

patterns = elephant.spade.spade(
    spiketrains=spiketrains, binsize=1*pq.ms, winlen=1, min_spikes=3,
    n_surr=100, dither=5*pq.ms,
    psr_param=[0, 0, 0],
    output_format='patterns')['patterns']
Time for data mining: 0.10381579399108887
Time for pvalue spectrum computation: 15.200305461883545

The output patterns of the method contains information on the found patterns. In this case, we retrieve the pattern we put into the data: a pattern involving the first 10 neurons (IDs 0 to 9), occurring 5 times.

[{'itemset': (3, 4, 7, 9, 0, 2, 5, 6, 8, 1),
  'windows_ids': (369, 1223, 4178, 8498, 9038),
  'neurons': [3, 4, 7, 9, 0, 2, 5, 6, 8, 1],
  'lags': array([0., 0., 0., 0., 0., 0., 0., 0., 0.]) * ms,
  'times': array([ 369., 1223., 4178., 8498., 9038.]) * ms,
  'signature': (10, 5),
  'pvalue': 0.0}]
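The entries of such a pattern dictionary can be unpacked directly. The following sketch uses values copied from the output above (no SPADE run required) to show how the signature relates to the other fields.

```python
# Values copied from the SPADE output shown above.
pattern = {
    'neurons': [3, 4, 7, 9, 0, 2, 5, 6, 8, 1],
    'times': [369., 1223., 4178., 8498., 9038.],   # occurrence times (ms)
    'signature': (10, 5),                          # (pattern size, occurrences)
    'pvalue': 0.0,
}

# The signature summarizes the pattern: its size matches the number of
# participating neurons, its count matches the number of occurrence times.
size, n_occ = pattern['signature']
print(f"{size} neurons, {n_occ} occurrences, p-value {pattern['pvalue']}")
```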

Lastly, we visualize the found patterns using the function plot_patterns() of the viziphant library. Marked in red are the patterns of order ten injected into the data.

viziphant.patterns.plot_patterns(spiketrains, patterns)
<AxesSubplot:xlabel='Time (s)', ylabel='Neuron'>