# UE - Unitary Event Analysis

Unitary Event (UE) analysis is a statistical method for analyzing, in a time-resolved manner, excess spike correlations between simultaneously recorded neurons. It compares the empirical number of spike coincidences (with a precision of a few ms) to the number expected from the neurons' firing rates.
References:
• Gruen, Diesmann, Grammont, Riehle, Aertsen (1999) J Neurosci Methods 94(1): 67-79.
• Gruen, Diesmann, Aertsen (2002a,b) Neural Comput 14(1): 43-80; 81-119.
• Gruen, Riehle, Diesmann (2003) Effect of cross-trial nonstationarity on joint-spike events. Biol Cybern 88(5): 335-351.
• Gruen (2009) Data-driven significance estimation of precise spike correlation. J Neurophysiol 101: 1126-1140 (invited review).
`elephant.unitary_event_analysis.gen_pval_anal(mat, N, pattern_hash, method='analytic_TrialByTrial', **kwargs)`

Computes the expected number of coincidences and returns a function that calculates the p-value for a given empirical number of coincidences.

This function builds a Poisson distribution whose expected value is calculated from mat. It returns a function that takes the empirical number of coincidences, n_emp, and calculates the p-value as the area under the Poisson distribution from n_emp to infinity.
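As a minimal sketch (not the library's implementation), the p-value described above is the upper tail of a Poisson distribution with mean n_exp, i.e. P(X >= n_emp):

```python
from math import exp, factorial

def poisson_tail_pvalue(n_emp, n_exp):
    """Upper-tail Poisson probability P(X >= n_emp) for mean n_exp.

    Sketch only: the same tail area that gen_pval_anal's returned
    function computes from the expected number of coincidences.
    """
    # P(X >= n) = 1 - P(X <= n - 1)
    cdf = sum(exp(-n_exp) * n_exp ** k / factorial(k) for k in range(n_emp))
    return 1.0 - cdf

# e.g. 5 empirical coincidences when only 2 are expected:
p = poisson_tail_pvalue(5, 2.0)   # ~0.053
```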

`elephant.unitary_event_analysis.hash_from_pattern(m, N, base=2)`

Calculate a unique number (hash) for a spike pattern, or for a matrix of spike patterns, composed of N neurons (provide each pattern as a column).
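A minimal sketch of the hashing scheme, inferred from the inverse_hash_from_pattern example below (row i, i.e. neuron i, contributes base**i, so the first row is the least significant digit):

```python
import numpy as np

def hash_from_pattern_sketch(m, N, base=2):
    """Map each 0-1 pattern (a column of m, length N) to an integer.

    Sketch only, inferred from the documented inverse example:
    row i contributes m[i] * base**i to the hash.
    """
    m = np.asarray(m)
    powers = base ** np.arange(N)        # [1, base, base**2, ...]
    return powers @ m

# the two patterns from the inverse_hash_from_pattern example:
m = np.array([[1, 1],
              [1, 1],
              [0, 1],
              [0, 0]])
hashes = hash_from_pattern_sketch(m, 4)   # -> array([3, 7])
```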

`elephant.unitary_event_analysis.inverse_hash_from_pattern(h, N, base=2)`

Calculate the 0-1 spike patterns (as a matrix) from hash values.

Examples

```
>>> import numpy as np
>>> h = np.array([3, 7])
>>> N = 4
>>> inverse_hash_from_pattern(h, N)
array([[1, 1],
       [1, 1],
       [0, 1],
       [0, 0]])
```
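A minimal sketch of this inverse operation, assuming the digit ordering shown in the example above (row i holds the base**i digit of each hash):

```python
import numpy as np

def inverse_hash_sketch(h, N, base=2):
    """Recover the N-row 0-1 pattern matrix from hash values.

    Sketch only: digit i of each hash (in the given base) becomes
    row i of the output, matching the documented example.
    """
    h = np.asarray(h)
    # extract digit i of each hash as (h // base**i) % base
    return (h[np.newaxis, :] // base ** np.arange(N)[:, np.newaxis]) % base

out = inverse_hash_sketch([3, 7], 4)
# reproduces the documented output:
# array([[1, 1],
#        [1, 1],
#        [0, 1],
#        [0, 0]])
```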
`elephant.unitary_event_analysis.jointJ(p_val)`

Surprise measure.

Logarithmic transformation of the joint p-value into a surprise measure, for better visualization: highly significant events are indicated by very low joint p-values.
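The surprise is commonly defined (e.g. in the Gruen et al. 2002 references above) as the log-odds of the joint p-value. A minimal sketch, assuming the base-10 logarithm is used for display:

```python
from math import log10

def joint_surprise(p_val):
    """Surprise measure: log-odds transform of the joint p-value.

    Sketch of the standard definition Js = log10((1 - p) / p):
    p = 0.5 maps to 0, small p (significant excess) to large
    positive values, and p near 1 to large negative values.
    """
    return log10((1.0 - p_val) / p_val)

s_chance = joint_surprise(0.5)    # 0.0 (chance level)
s_excess = joint_surprise(0.01)   # ~2.0 (significant excess)
```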

`elephant.unitary_event_analysis.jointJ_window_analysis(data, binsize, winsize, winstep, pattern_hash, method='analytic_TrialByTrial', t_start=None, t_stop=None, binary=True, **kwargs)`

Calculates the joint surprise in a sliding-window fashion.

data: list of neo.SpikeTrain objects
    List of spike trains from different trials.
    0-axis --> trials; 1-axis --> neurons; 2-axis --> spike times.
binsize: Quantity scalar with dimension time
    Size of the bins used to discretize the spike trains.
winsize: Quantity scalar with dimension time
    Size of the analysis window.
winstep: Quantity scalar with dimension time
    Size of the window step.
pattern_hash: list of integers
    List of the patterns of interest, given as hash values (see the hash_from_pattern and inverse_hash_from_pattern functions).
method: string
    Method with which the unitary events should be computed.
    'analytic_TrialByTrial' --> calculate the expectancy (analytically) on each trial, then sum over all trials.
    'analytic_TrialAverage' --> calculate the expectancy by averaging over trials (cf. Gruen et al. 2003).
    'surrogate_TrialByTrial' --> calculate the distribution of expected coincidences by spike time randomization in each trial and sum over trials.
    Default is 'analytic_TrialByTrial'.
t_start: float or Quantity scalar, optional
    The start time to use for the time points. If not specified, retrieved from the t_start attribute of the spike trains.
t_stop: float or Quantity scalar, optional
    The stop time to use for the time points. If not specified, retrieved from the t_stop attribute of the spike trains.
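A minimal sketch of how the sliding windows tile the recording, assuming windows of length winsize advanced by winstep between t_start and t_stop (time units stripped for brevity; the library operates on quantities with units and bins each window by binsize):

```python
def window_starts(t_start, t_stop, winsize, winstep):
    """Left edges of analysis windows of length winsize,
    advanced by winstep, fully contained in [t_start, t_stop].

    Sketch of the window placement only, not the library code.
    """
    starts = []
    t = t_start
    while t + winsize <= t_stop:
        starts.append(t)
        t += winstep
    return starts

# 20-unit windows stepped by 10 units over a 50-unit recording:
window_starts(0, 50, 20, 10)   # -> [0, 10, 20, 30]
```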
`elephant.unitary_event_analysis.n_emp_mat(mat, N, pattern_hash, base=2)`

Count the occurrences of spike coincidence patterns in the given spike trains.
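A minimal sketch of this counting step, assuming mat is a binned 0-1 matrix (neurons x time bins) and the same hashing convention as in the inverse_hash_from_pattern example:

```python
import numpy as np

def count_empirical_patterns(mat, N, pattern_hash, base=2):
    """Count how often each hash in pattern_hash occurs among the
    per-bin spike patterns (columns of mat).

    Sketch only: each time bin's 0-1 column is hashed with
    row i contributing base**i, then matched against pattern_hash.
    """
    mat = np.asarray(mat)
    bin_hashes = (base ** np.arange(N)) @ mat      # hash of every time bin
    return np.array([np.sum(bin_hashes == h) for h in pattern_hash])

# two neurons, five bins; hash 3 == both neurons spike in the same bin
mat = np.array([[1, 0, 1, 1, 0],
                [1, 1, 0, 1, 0]])
count_empirical_patterns(mat, 2, [3])   # -> array([2])
```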

`elephant.unitary_event_analysis.n_emp_mat_sum_trial(mat, N, pattern_hash)`

Calculates the empirical number of observed patterns, summed across trials.

`elephant.unitary_event_analysis.n_exp_mat(mat, N, pattern_hash, method='analytic', n_surr=1)`

Calculates the expected joint probability for each spike pattern.
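For the full coincidence pattern, the analytic expectation under independence is the product of the neurons' per-bin firing probabilities times the number of bins. A minimal sketch, assuming stationary rates within the analyzed window:

```python
import numpy as np

def expected_coincidences(mat):
    """Expected number of full coincidences under independence.

    Sketch of the analytic expectation: with per-bin spike
    probabilities p_i = (spike count of neuron i) / (number of bins),
    n_exp = n_bins * prod_i p_i.
    """
    mat = np.asarray(mat)
    n_bins = mat.shape[1]
    p = mat.sum(axis=1) / n_bins    # per-neuron firing probability per bin
    return n_bins * np.prod(p)

# two neurons firing in 3 and 2 of 10 bins:
mat = np.zeros((2, 10), dtype=int)
mat[0, :3] = 1
mat[1, :2] = 1
expected_coincidences(mat)   # 10 * 0.3 * 0.2 = 0.6
```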

`elephant.unitary_event_analysis.n_exp_mat_sum_trial(mat, N, pattern_hash, method='analytic_TrialByTrial', **kwargs)`

Calculates the expected joint probability for each spike pattern, summed over trials.