
Decoding Algorithm

The goal of this section is to present a decoding method that utilizes the full statistical description of the response $ S(t)$. In this context, equation (1) becomes

$\displaystyle P(x_i(t) \vert S(t))=\frac{P(S(t) \vert x_i(t))\, P(x_i(t))} {\displaystyle\sum_{i=1}^{M} P([s_1(t),\ s_2(t), \cdots, s_N(t)] \vert x_i(t))\, P(x_i(t))}$ (3)

The proposed scheme is fairly general but often difficult to implement. To make the problem tractable, we will make several important assumptions.

Assumption 1.   The responses of individual neurons are statistically independent. The consequence of this assumption is that

$\displaystyle P([s_1(t),  s_2(t), \cdots,s_N(t)] \vert x_i(t))=\prod_{j=1}^{N} P(s_j(t) \vert x_i(t))$ (4)

Assumption 2.   The prior probabilities of the individual inputs are equal, i.e.

$\displaystyle P(x_i(t))=\frac{1}{M} \qquad \forall i=1, 2, \cdots, M$ (5)

Applying (4) and (5) to the decoding scheme (3), we have

$\displaystyle P(x_i(t) \vert [s_1(t),\ s_2(t), \cdots, s_N(t)])= \frac{\displaystyle\prod_{j=1}^{N} P(s_j(t) \vert x_i(t))} {\displaystyle\sum_{i=1}^{M}\prod_{j=1}^{N} P(s_j(t) \vert x_i(t))}$ (6)
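When the per-neuron likelihoods $ P(s_j(t) \vert x_i(t))$ are available, the normalization in (6) is straightforward to carry out numerically. The following minimal Python sketch (not part of the original text; the function name and the random likelihoods are purely illustrative) works in log-space to avoid numerical underflow when $ N$ is large:

import numpy as np

def decode_posterior(log_lik):
    # log_lik[i, j] = log P(s_j(t) | x_i(t)) for M inputs and N neurons.
    # Independence (4): the joint log-likelihood is the sum over neurons.
    joint = log_lik.sum(axis=1)
    # The uniform prior (5) cancels in the ratio; subtract the maximum
    # before exponentiating for numerical stability.
    joint -= joint.max()
    post = np.exp(joint)
    return post / post.sum()

# Hypothetical example: M = 3 candidate inputs, N = 4 neurons.
rng = np.random.default_rng(0)
print(decode_posterior(rng.normal(size=(3, 4))))  # entries sum to 1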

The chief difficulty of decoding in the present context is determining the conditional probability of the $j$-th response given the $i$-th input.

Let $ s_j(t)=\sum_{k=1}^{n_j(t)}\delta(t-t_k^j)$ be the response of the $j$-th neuron in the population to the input $ x_i(t)$. Our goal is to evaluate the conditional probability $ P(s_j(t) \vert x_i(t))$. Since we consider only one input-response pair at a time, the indices $ i$ and $ j$ will be dropped for simplicity. The signal $ s(t)$ is fully characterized by the sequence of spike times $ \{t_1, t_2, \cdots, t_n\}$, so we have

$\displaystyle P(s(t) \vert x(t))=P(\theta_1=t_1,\ \theta_2=t_2,\ \cdots,\ \theta_n=t_n,\ \text{no spikes in } [t_n, t] \vert x(t)),$

where $ \theta_k$ is a random variable that corresponds to the arrival time of the $k$-th spike in the spike train. Clearly, $ \theta_k$ is a continuous random variable, so the probability of the event above is equal to zero, and we are better off working with its likelihood (probability density function), defined by

\begin{displaymath}\begin{split} f(s(t) \vert x(t))&\triangleq\frac{\partial^n}{\partial t_1\cdots \partial t_n}\, P(\theta_1\le t_1,\ \cdots,\ \theta_n\le t_n,\ N_{[t_n, t]}=0)\\ &=\lim_{dt_1\rightarrow 0, \cdots, dt_n\rightarrow 0} \frac{P(\theta_1\in [t_1, t_1+dt_1],\ \cdots,\ \theta_n\in [t_n, t_n+dt_n],\ N_{[t_n+dt_n, t]}=0)}{dt_1\cdots dt_n}, \end{split}\end{displaymath}

where conditioning on $ x(t)$ has been dropped for simplicity and $ N_{[t_n+dt_n, t]}=0$ means that we have no spikes on the interval $ [t_n+dt_n, t]$.

Assumption 3.   The arrivals (and non-arrivals) at instant $ t$ depend only on the previous arrival. This Markov-type assumption means that $ \theta_n$ depends on $ \theta_{n-1}$ only.

Under this assumption the conditional probability calculation further simplifies to

\begin{displaymath}\begin{split} P(s(t) \vert x(t))&=P(N_{[t_n+dt_n, t]}=0 \vert \theta_n)\, P(\theta_n \vert \theta_{n-1})\cdots P(\theta_2 \vert \theta_1)\, P(\theta_1), \end{split}\end{displaymath}

and the probability density function (pdf) becomes

\begin{displaymath}\begin{split} f(s(t) \vert x(t))&=\lim_{dt_1\rightarrow 0, \cdots, dt_n\rightarrow 0} \frac{P(s(t) \vert x(t))}{dt_1\cdots dt_n}\\ &=P(N_{[t_n, t]}=0 \vert \theta_n=t_n)\, f_{\theta_n \vert \theta_{n-1}}(t_n \vert t_{n-1})\cdots f_{\theta_2 \vert \theta_1}(t_2 \vert t_1)\, f_{\theta_1}(t_1), \end{split}\end{displaymath}

where the $ f_{\theta_n \vert \theta_{n-1}}$ represent transition densities. It is often more convenient to work with interspike intervals (ISIs), defined by $ T_n=\theta_n-\theta_{n-1}$ (with $ \theta_0=0$), instead of the spike arrival times $ \theta_n$. The conditional density then becomes

\begin{displaymath}\begin{split} f(s(t) \vert x(t))&=K\, P(N_{[t_n, t]}=0 \vert T_n=\tau_n)\, f_{T_n \vert T_{n-1}}(\tau_n \vert \tau_{n-1})\cdots f_{T_2 \vert T_1}(\tau_2 \vert \tau_1)\, f_{T_1}(\tau_1), \end{split}\end{displaymath}

where $ \tau_n=t_n-t_{n-1}$ ($ t_0=0$) and $ K$ is a normalization constant that makes $ f(s(t) \vert x(t))$ a valid pdf. The transition densities $ f_{T_n \vert T_{n-1}}$ are to be found using either parametric or non-parametric methods. Parametric methods assume that the transition densities belong to a family described by a number of unknown parameters, which are then estimated from experimental observations. Non-parametric methods rely on direct (pointwise) estimates of the transition densities. Both methods are based on experimental data; a sketch of the non-parametric route is given below.
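As one concrete illustration of the non-parametric route, consecutive ISI pairs can be collected into a two-dimensional histogram whose rows, once normalized, approximate $ f_{T_n \vert T_{n-1}}$. The Python sketch below is only an assumption-laden illustration: the function name, the binning choice, and the premise that a single stationary spike train is available are all hypothetical.

import numpy as np

def estimate_transition_density(spike_times, bins=20):
    # Histogram (non-parametric) estimate of f_{T_n | T_{n-1}}
    # from one recorded spike train (sorted arrival times).
    isi = np.diff(spike_times)                    # tau_n = t_n - t_{n-1}
    prev, curr = isi[:-1], isi[1:]                # (T_{n-1}, T_n) pairs
    hist, xedges, yedges = np.histogram2d(prev, curr, bins=bins)
    widths = np.diff(yedges)                      # bin widths along the T_n axis
    row_mass = hist.sum(axis=1, keepdims=True)
    # Each row becomes a pdf over T_n, conditioned on the T_{n-1} bin.
    density = np.divide(hist, row_mass * widths,
                        out=np.zeros_like(hist), where=row_mass > 0)
    return xedges, yedges, density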

Using densities instead of probabilities, the decoding algorithm (6) becomes

$\displaystyle f(x_i(t) \vert [s_1(t),\ s_2(t), \cdots, s_N(t)])= \frac{\displaystyle\prod_{j=1}^{N} f(s_j(t) \vert x_i(t))} {\displaystyle\sum_{i=1}^{M}\prod_{j=1}^{N} f(s_j(t) \vert x_i(t))}$ (7)

To illustrate the application of the algorithm above, let us suppose that the underlying spike-generating mechanism is a Poisson process with a constant rate $ \lambda$. One can easily show that

$\displaystyle f_{T_n \vert T_{n-1}}(\tau_n \vert \tau_{n-1})=f_{T_n}(\tau_n)= \lambda e^{-\lambda \tau_n}\qquad \text{(renewal assumption)}$
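The remaining ingredient of the factorized likelihood, the no-spike probability on $ [t_n, t]$, follows from the same Poisson count statistics: the number of spikes on $ [t_n, t]$ is Poisson distributed with mean $ \lambda (t-t_n)$, hence

$\displaystyle P(N_{[t_n, t]}=0 \vert \theta_n=t_n)= e^{-\lambda (t-t_n)}.$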

In particular one has

\begin{displaymath}\begin{split} f(s(t) \vert x(t))&=K\, e^{-\lambda (t-t_n)}\, \lambda e^{-\lambda (t_n-t_{n-1})}\cdots \lambda e^{-\lambda (t_2-t_1)}\, \lambda e^{-\lambda t_1}\\ &=K\, \lambda^{n(t)}\, e^{-\lambda (t-t_n+t_n-t_{n-1}+\cdots+t_2-t_1+t_1)} =K\, \lambda^{n(t)}\, e^{-\lambda\, t}. \end{split}\end{displaymath}
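The telescoping of the exponents above is easy to verify numerically. A small sanity check with made-up values (the constant $ K$ multiplies both sides and is omitted):

import numpy as np

lam, t = 5.0, 2.0                                 # illustrative rate and window
t_spk = np.array([0.3, 0.9, 1.4])                 # made-up spike times in [0, t]
isi = np.diff(np.concatenate(([0.0], t_spk)))     # tau_n, with t_0 = 0
lhs = np.exp(-lam * (t - t_spk[-1])) * np.prod(lam * np.exp(-lam * isi))
rhs = lam ** len(t_spk) * np.exp(-lam * t)        # lambda^n(t) * exp(-lambda t)
assert np.isclose(lhs, rhs)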

To signify that the rate $ \lambda$ depends on both the input and the cell, we write

$\displaystyle \lambda=\Lambda_j(x_i),$

where $ x_i(t)=x_i=\mathrm{const}$ and $ \Lambda_j(x)$ is the so-called tuning curve of the $j$-th cell. One can easily show that in this case $ K=1/{n(t)!}$. Written in more detail, the conditional pdf is given by

$\displaystyle f(s_j(t) \vert x_i(t))=\frac{[\Lambda_j(x_i)]^{n_j(t)}}{n_j(t)!}  e^{-\Lambda_j(x_i) t},$    

and finally the decoding scheme (7) simply becomes

$\displaystyle f(x_i(t) \vert [s_1(t),\ s_2(t), \cdots, s_N(t)])= \frac{\displaystyle\prod_{j=1}^{N} \frac{[\Lambda_j(x_i)]^{n_j(t)}}{n_j(t)!}\, e^{-\Lambda_j(x_i)\, t}} {\displaystyle\sum_{i=1}^{M}\prod_{j=1}^{N} \frac{[\Lambda_j(x_i)]^{n_j(t)}}{n_j(t)!}\, e^{-\Lambda_j(x_i)\, t}}$ (8)
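A minimal Python sketch of the decoder (8) follows; the tuning matrix and the spike counts are made up for illustration and are not data from this paper. The computation is done in log-space, and the factorials $ n_j(t)!$ are dropped because, like the prior, they do not depend on $ i$ and cancel after normalization:

import numpy as np

def poisson_decode(counts, tuning, t):
    # counts[j] = n_j(t); tuning[i, j] = Lambda_j(x_i); t = window length.
    # Log of (8) up to i-independent terms: the factorials (and the t**n_j
    # factors appearing in (10) below) cancel in the normalized ratio.
    log_post = (counts * np.log(tuning) - tuning * t).sum(axis=1)
    log_post -= log_post.max()                    # numerical stabilization
    post = np.exp(log_post)
    return post / post.sum()

# Hypothetical population: M = 3 inputs, N = 2 cells, window t = 1 s.
tuning = np.array([[2.0, 10.0], [6.0, 6.0], [10.0, 2.0]])
print(poisson_decode(np.array([9, 3]), tuning, t=1.0))  # peaks at x_3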

This result coincides with a decoding scheme based on firing rates only. Namely, if $ n_j(t)$ is the number of spikes fired by the $j$-th cell on the interval $ [0, t]$, one can rewrite (7) as

$\displaystyle P(x_i(t) \vert [n_1(t),\ n_2(t), \cdots, n_N(t)])= \frac{\displaystyle\prod_{j=1}^{N} P(n_j(t) \vert x_i(t))} {\displaystyle\sum_{i=1}^{M}\prod_{j=1}^{N} P(n_j(t) \vert x_i(t))},$ (9)

and

$\displaystyle P(n_j(t) \vert x_i(t))=\frac{[\Lambda_j(x_i) t]^{n_j(t)}}{n_j(t)!}  e^{-\Lambda_j(x_i) t}.$    

Finally (9) becomes

\begin{displaymath}\begin{split} P(x_i(t) \vert [n_1(t),\ n_2(t), \cdots, n_N(t)]) &= \frac{\displaystyle\prod_{j=1}^{N} \frac{[\Lambda_j(x_i)\, t]^{n_j(t)}}{n_j(t)!}\, e^{-\Lambda_j(x_i)\, t}} {\displaystyle\sum_{i=1}^{M}\prod_{j=1}^{N} \frac{[\Lambda_j(x_i)\, t]^{n_j(t)}}{n_j(t)!}\, e^{-\Lambda_j(x_i)\, t}}\\ &= \frac{\displaystyle\prod_{j=1}^{N} \frac{[\Lambda_j(x_i)]^{n_j(t)}}{n_j(t)!}\, e^{-\Lambda_j(x_i)\, t}} {\displaystyle\sum_{i=1}^{M}\prod_{j=1}^{N} \frac{[\Lambda_j(x_i)]^{n_j(t)}}{n_j(t)!}\, e^{-\Lambda_j(x_i)\, t}}, \end{split}\end{displaymath} (10)

which is identical to (8); the second equality holds because the common factor $ t^{\sum_j n_j(t)}$ does not depend on $ i$ and cancels between the numerator and the denominator. This result is not surprising: a Poisson process is completely determined by its rate $ \lambda$, so taking the full statistical description of the spike trains into account does not yield any new information.
Zoran Nenadic 2002-07-18