A MATLAB toolbox, 'bsspdfest', implementing nonparametric probability function estimation using normalized B-splines was developed. The toolbox implements nonparametric estimation procedures for one or more dimensions, using a B-spline series for one-dimensional data and a tensor-product B-spline series for multi-dimensional data. It takes advantage of the direct addressing of MATLAB arrays up to three dimensions, and of various vectorization approaches, to speed up the computations. For data dimensions greater than three, indirect addressing is used to convert multi-dimensional indices into linear array addresses, which makes this function slower.


If the two spike trains from templates m and n correspond to the same cell, there should be no spikes within the refractory period in their cross-correlogram. The cross-correlogram value should be close to 0, and the dip estimation should therefore be close to the geometrical mean of the firing rates.



We first randomly collected a subset of many spikes ti to perform the clustering. To minimize redundancy between collected spikes, we prevented the algorithm from collecting two spikes peaking on the same electrode and separated by less than Nt/2.


We thank the reviewer for this suggestion and have now designed our own strategy to fully automate the spike sorting, with a final step of automated merging. We have tested this automated merging step with our ground truth data and found that we could reach very good performance.



We use the freely available datasets provided by Neto et al. (2021). These are recordings from dense silicon probes with 32 or 128 channels (20 μm spacing), sampled at 30 kHz in rat visual cortex and combined with juxta-cellular recordings.

If CCmax(m,n) ≥ ccsimilar, we considered these templates to be equivalent and merged them. In all the following, we used ccsimilar = 0.975. Note that we computed the cross-correlations between normalized templates, such that two templates with the same shape but different amplitudes are merged. Similarly, we searched for any template wp that could be explained as a linear combination of two templates in the dictionary. If we could find wm and wn such that CC(wp, wm+wn) ≥ ccsimilar, wp was considered to be a mixture of two cells and was removed from the dictionary.
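The merging rule above can be sketched as follows. This is a minimal illustration, not the package's actual implementation: the function names (`max_norm_cc`, `merge_similar`), the circular-shift scan over lags, and the array layout (electrodes × time samples) are our own assumptions.

```python
import numpy as np

def max_norm_cc(w_a, w_b):
    """Maximum, over temporal lags, of the cross-correlation between two
    L2-normalized spatio-temporal templates (electrodes x time samples)."""
    a = w_a / np.linalg.norm(w_a)
    b = w_b / np.linalg.norm(w_b)
    n_elec, n_t = a.shape
    # scan temporal shifts of b against a (circular shift for simplicity)
    ccs = [np.sum(a * np.roll(b, lag, axis=1)) for lag in range(-n_t // 2, n_t // 2)]
    return max(ccs)

def merge_similar(templates, cc_similar=0.975):
    """Greedily merge templates whose peak normalized CC exceeds cc_similar;
    templates with the same shape but different amplitudes collapse together."""
    kept = []
    for w in templates:
        if any(max_norm_cc(w, k) >= cc_similar for k in kept):
            continue  # equivalent to an already-kept template: merged away
        kept.append(w)
    return kept
```

Because the templates are normalized before comparison, a template and its scaled copy reach a CC of exactly 1 and are merged regardless of amplitude.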



Similarly to the in vitro case, the detection thresholds for the juxta-cellular spikes were manually adjusted, based on the data provided by Neto et al. (2021) and on spike-triggered waveforms. For the validation with the ‘hybrid’ dataset shown in Figure 3, we used the longest dataset recorded with 128 electrodes.


SpyKING CIRCUS is a pure Python package, based on the Python wrapper for the Message Passing Interface (MPI) library (Dalcin et al, 2021) to allow parallelization over distributed computers, and is available on-line with its full documentation. Results can easily be exported to the kwik or phy format (Rossant and Harris, 2021). All the datasets used in this manuscript are available on-line for testing and comparison with other algorithms (Spampinato et al, 2021).




We plotted, for each pair with high similarity, the dip estimation against the geometrical mean of their firing rates. If there is a strong dip in the cross-correlogram (quantified by ϕ−⟨CC⟩), the dip quantification and the geometrical mean ϕ should be almost equal, and the corresponding pair should thus lie close to the diagonal in the plot.


If ni reaches nfailures=3, label this ti as ‘explored’. If all ti have been explored, quit the loop.


To perform the clustering, we used a modified version of the algorithm published in Rodriguez and Laio (2021). We denote the spikes associated with electrode k (and projected on the second PCA basis) as vectors xlk, l = 1, …, L, in an NPCA2-dimensional space. For each of these vectors, we estimated ρlk as the mean distance to the S nearest neighbors of xlk. Note that 1/ρlk can be considered as a proxy for the density of points. S is chosen such that S = ϵNspikes, with ϵ = 0.01; in our settings, since Nspikes = 10000, S = 100. This density measure turned out to be more robust than the one given in the original paper and rather insensitive to changes in ϵ. To avoid a potentially inaccurate estimation of the ρlk values, we iteratively collected additional spikes to refine this estimate. Keeping in memory the spikes xlk, we searched the data again for Nspikesk different spikes and used them only to refine the estimation of ρlk for our selected points xlk. This refinement gave more robust results for the clustering, and we performed 3 rounds of this new search. Then, for every point xlk, we computed δlk as the minimal distance to any other point xmk, m ≠ l, such that ρmk ≤ ρlk.




In this report, the authors present a new approach to spike sorting that they claim is scalable to thousands of electrodes. This is a very timely report, as there is renewed interest in spike sorting thanks to the new electrode technologies. However, I find this report to be mixed in terms of strength of the proposed solution and clarity with which the underlying claims are made.

For the ground truth recordings, electrophysiological recordings were obtained from ex vivo isolated retinae of rd1 mice (4/5 weeks old). The retinal tissue was placed in AMES medium (Sigma-Aldrich, St Louis, MO; A1420) bubbled with 95% O2 and 5% CO2 at room temperature, on a MEA (10 μm electrodes spaced by 30 μm; Multichannel Systems, Reutlingen, Germany) with the ganglion cell layer facing the electrodes. Borosilicate glass electrodes (BF100-50, Sutter Instruments) were filled with AMES to a final impedance of 6–9 MΩ. Cells were imaged with a customized inverted DIC microscope (Olympus BX 71) mounted with a high-sensitivity CCD camera (Hamamatsu ORCA-03G) and recorded with an Axon Multiclamp 700B patch clamp amplifier set in current zero mode. We used rd1 mice because going through the photoreceptor layer with the patch pipette was easier than for a wild-type mouse.


The Curve Fitting Toolbox spline functions started out as an extension of the MATLAB environment of interest to experts in spline approximation, to aid them in the construction and testing of new methods of spline approximation. Such people will have mastered the material in A Practical Guide to Splines.

This manuscript presents a novel resource for automated spike sorting on data from large-scale multi-electrode recordings. The authors present new strategies for tackling large recordings with hundreds to thousands of electrodes while being able to integrate information from multiple electrodes and to disentangle overlapping spikes from multiple neurons. Thus, the resource addresses fundamental issues of spike sorting. Given the current development of recording systems with hundreds to thousands of electrodes and the paucity of methods targeted to such data sets, the presented method should therefore make a timely and very strong contribution to the field of multi-electrode recordings.


He is the author of A Practical Guide to Splines (Springer, 2001)

Given that this is a presentation of a spike sorting resource, it seems mandatory that the computational approach and the technical aspects of the algorithm be explained as thoroughly and clearly as possible so that readers and users can fully understand the approach and adjust it to their own needs by tuning the appropriate parameters. In the Results part, the explanation of the algorithm is rather brief, and it would here be useful to provide a bit more of an intuitive description of the approach (see detailed points below). The Materials and methods section, on the other hand, appears to contain some inaccuracies and small omissions of detail that make it hard to follow some of the steps.


Furthermore, this toolbox allows the calculation of spline approximants for given pairs of function values, for both curves and surfaces. They can be extended by considering additional interpolation and smoothing conditions.


Scaling to thousands of electrodes

Version 2.3.1 of the bsspdfest toolbox has just been released! This version now uses reflection for active boundaries on bounded or semi-infinite domains and also supports bounded domains for data of all dimensions. A variety of performance improvements have also been made.

We apologize for the lack of clarity. The manuscript has been modified in order to better explain this key equation. The core idea of this equation is that, for every spike detected (indexed by i), we assume that several templates (indexed by j) could fire and participate in this spiking event. This is why there is a double sum here. That said, since only a few cells will fire at the same spike time, most of the coefficients should be 0. This notation is nevertheless convenient to explain how spikes can be superimposed. We have clarified this in the text.



In the following, we consider that we have Nelec electrodes, acquired at a sampling rate frate. Every electrode k is located at a physical position pk = (xk, yk) in a 2D space (the extension to 3D probes would be straightforward). The aim of our algorithm is to decompose the signal as a linear sum of spatio-temporal kernels, or ‘templates’ (see equation 1).

Blue points: pairs that need to be merged. Green points: pairs that should not be merged. Orange points: pairs for which our ground truth data does not allow us to tell whether the pair should be merged or not. The gray area corresponds to the region where pairs are merged by the algorithm.


Our first goal was to reduce the dimensionality of the temporal waveforms. We collected up to Np spikes on each electrode, thus obtaining a maximum of Np×Nelec spikes, and took the waveform only on the peaking electrode for each of them. This gave a large collection of temporal waveforms, and we then aimed at finding the best basis onto which to project them. In order to compensate for sampling rate artifacts, we first upsampled all the collected single-electrode waveforms by bicubic spline interpolation to five times the sampling rate frate, aligned them on their local minima, and then re-sampled them at frate. We then performed a Principal Component Analysis (PCA) on these centered and aligned waveforms and kept only the first NPCA principal components. In all the calculations we used default values of Np = 10000 and NPCA = 5. These principal components were used during the clustering step.
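The upsample-align-resample step and the PCA basis can be sketched as follows. This is a simplified single-waveform version under our own assumptions: we use SciPy's `CubicSpline` (one-dimensional cubic, standing in for the bicubic interpolation of the paper), align the trough to the window centre, and take the PCA basis from an SVD.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def align_waveform(w, upsample=5):
    """Upsample a single-electrode waveform with cubic splines, re-centre it
    on its (sub-sample) minimum, and sample back at the original rate."""
    n = len(w)
    t = np.arange(n)
    cs = CubicSpline(t, w)
    t_up = np.linspace(0, n - 1, upsample * n)
    shift = t_up[np.argmin(cs(t_up))] - n // 2   # sub-sample trough offset
    return cs(np.clip(t + shift, 0, n - 1))      # trough lands at the centre

def pca_basis(waveforms, n_pca=5):
    """First n_pca principal components of centred, aligned waveforms."""
    x = np.asarray(waveforms, dtype=float)
    x = x - x.mean(axis=0)                       # centre before PCA
    _, _, vt = np.linalg.svd(x, full_matrices=False)
    return vt[:n_pca]                            # rows = principal directions
```

After alignment, every waveform has its negative peak at the same sample, so the PCA captures shape variability rather than jitter in spike timing.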

Red: sum of the templates added by the template matching algorithm; top to bottom: successive steps of the template matching algorithm. E. Final result of the template matching; same legend as (D). F. Examples of the fitted amplitudes for the first component of a given template as a function of time. Each dot corresponds to a spike time at which this particular template was fitted to the data. Dashed-dotted lines represent the amplitude thresholds (see Materials and methods).


We have shown that our method, based on density-based clustering and template matching, allows sorting spikes from large-scale extracellular recordings both in vitro and in vivo. We tested the performance of our algorithm on ‘ground truth’ datasets, where one neuron is recorded both with extracellular recordings and with a patch electrode. We showed that our performance was close to that of an optimal nonlinear classifier trained using the true spike trains. Our algorithm has also been tested on purely synthetic datasets (Hagen et al, 2021) and similar results were obtained (data not shown). Note that tests performed by different groups on our algorithm also show its high performance on various datasets. Our algorithm is entirely parallelized and could therefore handle long datasets recorded with thousands of electrodes.

The comparison between Kilosort (Pachitariu et al, 2021) and SpyKING CIRCUS was performed on a desktop machine with 32 GB RAM and eight cores (Intel Xeon(R) CPU E5-1630 v3 @ 3.70 GHz). The GPU used was an NVIDIA Quadro K4200 with 4 GB of dedicated memory.



The strength of the proposed solution is significantly undercut by the need for manual cluster merging. This is a really major limitation. Clustering of thousands of electrodes has to be automatic if it is to be genuinely scalable. I can imagine it would take a long time for someone to check thousands of recordings and merge, especially if they are not an expert.

The authors should justify this assumption in the reference data sets. I see no a priori reason that their data should be so constrained, although the "ground spikes" may be.


We divided the snippets into groups depending on their physical positions: for every electrode, we grouped together all the spikes having their maximum peak on this electrode. The ensemble of spikes was thus divided into as many groups as there were electrodes, the group associated with electrode k containing all the snippets with a maximum peak on electrode k. Even among the spikes peaking on the same electrode, there could be several neurons, so we performed a clustering separately on each group in order to separate the different neurons present in a single group.
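The grouping step above amounts to a single argmin per snippet. A minimal sketch (the function name and the electrodes × time layout of a snippet are our own assumptions):

```python
import numpy as np

def group_by_peak_electrode(snippets):
    """Assign each spike snippet (n_elec x n_t array) to the electrode where
    its negative peak is largest; returns {electrode: [snippet indices]}.
    Each group is then clustered independently."""
    groups = {}
    for i, snip in enumerate(snippets):
        k = int(np.argmin(snip.min(axis=1)))  # electrode with the deepest trough
        groups.setdefault(k, []).append(i)
    return groups
```

A spike detected on several electrodes still lands in exactly one group, the one where its trough is deepest, which is why one cell usually yields a single cluster.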




At present, the toolbox supports two major forms for the representation of piecewise-polynomial functions, because each has been found to be superior to the other in certain common situations. The B-form is particularly useful during the construction of a spline, while the ppform is more efficient when the piecewise-polynomial function is to be evaluated extensively. These two forms are almost exactly the B-representation and the pp-representation used in A Practical Guide to Splines.



A novelty of the proposed approach is that it claims to address the superposition problem but how well this works is not demonstrated or discussed. I think the work would be strengthened if this aspect received more attention.




These authors report their method to do classic spike sorting, creation of single neuron spike time records, from multisite recording. They report results on both multielectrode arrays, used to record in vitro from retinal tissue, and Si probes, used to record in vivo, most frequently from rodents. Their method has been previewed on bioRxiv for some time so there is some public information comparing performance of this method to existing and other recently developed packages. While the technical treatment is thorough, I find the paper confusing, in large part because of the bifurcation of the Materials and methods and the remainder of the paper. To begin, the authors say "those snippets are detected, they are projected in a feature space of lower dimension".


There is no method to exactly estimate this best possible performance. However, a proxy can be found by training a nonlinear classifier on the ground truth data (Harris et al, 2000; Rossant et al, 2021). We trained a nonlinear classifier on the extracellular waveforms triggered by the spikes of the recorded cell, similar to Harris et al (2000) and Rossant et al (2021) (referred to as the Best Ellipsoidal Error Rate (BEER), see Materials and methods). This classifier ‘knows’ where the true spikes are and simply quantifies how well they can be separated from the other spikes based on the extracellular recording. Note that, strictly speaking, this BEER estimate is not a lower bound on the error rate: it assumes that all spikes can be found inside a region of the feature space delineated by ellipsoidal boundaries. As explained above, spikes that overlap with spikes from another cell will probably be missed, and this ellipsoidal assumption is also likely to be wrong in the case of bursting neurons or electrode-tissue drifts. However, we used the BEER estimate because it has been used in several papers describing spike sorting methods (Harris et al, 2000; Rossant et al, 2021) and has been established as a commonly accepted benchmark. In addition, because we used rather stationary recordings (a few minutes long, see Materials and methods), we did not see strong electrode-tissue drifts.

Those boundaries are used during the template matching step (see below). The factor five allows most of the points to have their amplitude between the two limits.


We trained this function f by varying A, b and c with the objective that f(x) should be +1 for the ground truth spikes, and −1 otherwise. These parameters were optimized by a stochastic gradient descent with a regularization constraint. The resulting classifier was then used to predict the occurrence of spikes in the snippets in the remaining half of the labeled data. Only the snippets where f(x)>0 were predicted as true spikes. This prediction provided an estimate of the false-negative and false-positive rates for the BEER estimate. The mean between the two was considered to be the BEER error rate, or ‘Optimal Classifier Error’.
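The training loop above can be sketched as follows, assuming a quadratic decision function f(x) = xᵀAx + b·x + c trained with a hinge-style stochastic gradient step and L2 regularization. The function name, loss, learning rate, and update schedule are our own choices; the paper's exact optimizer may differ.

```python
import numpy as np

def train_beer(x, y, epochs=200, lr=1e-2, reg=1e-4, seed=0):
    """Sketch of the ellipsoidal classifier f(x) = x^T A x + b.x + c, trained
    by stochastic gradient descent so that f > 0 on true spikes (y = +1)
    and f < 0 otherwise (y = -1)."""
    rng = np.random.default_rng(seed)
    n, d = x.shape
    A = np.zeros((d, d)); b = np.zeros(d); c = 0.0
    for _ in range(epochs):
        for i in rng.permutation(n):          # stochastic pass over the data
            xi, yi = x[i], y[i]
            f = xi @ A @ xi + b @ xi + c
            if yi * f < 1:                    # misclassified or inside margin
                A += lr * (yi * np.outer(xi, xi) - reg * A)
                b += lr * (yi * xi - reg * b)
                c += lr * yi
            else:                             # only decay (regularization)
                A -= lr * reg * A
                b -= lr * reg * b
    return A, b, c
```

Because f is quadratic in x, the decision boundary f(x) = 0 is an ellipsoid (or other conic) in feature space, matching the "ellipsoidal boundaries" assumption of the BEER metric.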





We have now extensively described this tool for automated merging, and shown a solution where merging can be automated with only two parameters. We have tested this fully automated strategy using the ground truth data and found that it allows sorting neurons with a good accuracy.

We have followed this organisation. It allows the standard user to grasp the main steps of the algorithm quickly, while all the necessary details are given in the Materials and methods section.


To match the templates to the data, we used an iterative greedy approach to estimate the aij for each ti, which bears some similarity to the matching pursuit algorithm (Mallat and Zhifeng Zhang, 1993). The fitting was performed in blocks of putative spike times, {ti}, that were successively loaded in memory. The size of one block was typically one second, which contains many spike times and is much larger than a single snippet; the snippets were thus not fitted independently from each other. Successive blocks always overlapped by twice the size of a snippet, and we discarded the results obtained on the borders to avoid any template-matching error due to a spike split between two blocks. Such an approach allowed us to easily split the workload linearly among several processors.

Note that the spike times ti were detected using the method described above and include all the threshold crossing voltages that are local minima. Each true spike can be detected over several electrodes at slightly different times such that there are many more ti than actual spikes. With this approach, we found that there was no need to shift templates before matching them to the raw data.
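The greedy matching of the two paragraphs above can be sketched as follows. This is a deliberately simplified single-electrode, single-component version: the function name, the least-squares amplitude fit, and the rule of discarding a spike time once its best amplitude falls outside the accepted range are our own simplifications of the matching-pursuit-like procedure.

```python
import numpy as np

def greedy_match(signal, templates, times, n_iter=10, a_min=0.5, a_max=1.5):
    """Greedy template matching: repeatedly pick the (time, template) pair
    whose projection onto the residual is largest, fit its amplitude by
    least squares, and subtract it. Amplitudes outside [a_min, a_max]
    cause that putative spike time to be rejected."""
    residual = signal.astype(float).copy()
    n_t = len(templates[0])
    fits = []
    for _ in range(n_iter):
        best = None
        for t in times:
            chunk = residual[t:t + n_t]
            if len(chunk) < n_t:
                continue
            for j, w in enumerate(templates):
                a = chunk @ w / (w @ w)             # least-squares amplitude
                score = abs(a) * np.sqrt(w @ w)     # projection magnitude
                if best is None or score > best[0]:
                    best = (score, t, j, a)
        if best is None:
            break
        _, t, j, a = best
        if a_min <= a <= a_max:
            residual[t:t + n_t] -= a * templates[j]  # explain away this spike
            fits.append((t, j, a))
        else:
            times = [x for x in times if x != t]     # reject this spike time
        if not times:
            break
    return fits, residual
```

Because fitted spikes are subtracted from the residual before the next pass, two overlapping spikes can both be recovered, which is the point of the template-matching step.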


We have done our best to explain our clustering strategy here. When we search for cluster centroids, we can afford errors in the cluster borders. This is why this task can be considered as less demanding than a full sorting by clustering, where borders have to be correct. We have also detailed what we meant by clustering in parallel: before clustering, we group spikes according to the electrode where they peak. This creates as many groups as electrodes, and we cluster each group independently.





These different tests, described above, show that SpyKING CIRCUS reached a similar performance for 4225 electrodes as for hundreds of electrodes, where our ground truth recordings showed that performance was near optimal. Our algorithm is therefore also able to accurately sort recordings from thousands of electrodes.

We thank the reviewer for appreciating the value of ground truth MEA data. We agree that, in the major part of our previous recordings, the spike recorded with the loose patch electrode was too low on the MEA to be properly isolated. We have therefore collected more data where the spike height measured on the multi-electrode array is large enough that the error rate is very low (18 cells below 10%). We found that our spike sorting algorithm was successful for all these cells. Our algorithm is therefore able to accurately sort cells where a very low error rate is expected.


We did not imply that spike sorting was solved for small numbers of contacts. However, it has received a lot of attention and several solutions have been proposed, although we agree it is not clear if these solutions are optimal or not. The common point of most of these solutions is to use clustering and we have now emphasized the limitations of a “pure” clustering approach in the Introduction and Results.




We thank the reviewer for this comment. We would like to clarify how we do this clustering step. Even if a spike is detected on several electrodes, it will only be assigned to the electrode where the voltage peak is the largest. Thanks to this method, if a spike has its largest peak on electrode 1, but is also detected on electrode 2, it will only be assigned to electrode 1. This means that, in general, the spikes of one cell will be assigned to only one electrode and will therefore correspond to a single cluster, not to 3-8 ones as predicted by the reviewer. We have explained this better in the text, and a supplementary figure to explain this procedure has been added to the manuscript.

In order to increase the number of data points for the comparison between our sorting algorithm and the nonlinear classifiers defined by the BEER metric (see Figure 2), we ran the analysis several times on the same neurons while removing some electrodes, to create recordings at a lower electrode density. We divided the number of electrodes by a factor of 2 or 4 in the 252-electrode in vitro multielectrode array or the 128-electrode in vivo silicon probe.


We would like to thank Charlotte Deleuze for her help with the in vitro juxtacellular recordings, and Steve Baccus and Sami El Boustani for insightful discussions. We would also like to thank Kenneth Harris, Cyrille Rossant and Nick Steimetz for their feedback and for help with the interface to the phy software.

Another weakness, discussed by reviewers 1 and 2, is that the merging of clusters is "by hand", although there is precedent in the literature (Fee et al. 1997) for automatic merging based on avoiding events at equal time in the autocorrelation of the merged clusters. Finally, all reviewers and the Reviewing Editor find the presentation confusing or lacking: material is not presented in a logical order, details of the algorithm are glossed over, etc.






To quantify the performance of the software with real ground-truth recordings (see Figure 2) we computed the Best Ellipsoidal Error Rate (BEER), as described in (Harris et al, 2000). This BEER estimate gave an upper bound on the performance of any clustering-based spike sorting method using elliptical cluster boundaries. After thresholding and feature extraction, snippets were labeled according to whether or not they contained a true spike. Half of this labeled data set was then used to train a perceptron whose decision rule is a linear combination of all pairwise products of the features of each snippet.


If αk ≤ 1, the template is smaller than the spike threshold and its spikes should not be detected; if αk ≥ 1, the spikes should be detected. In Figure 3G, we injected the artificial templates into the data such that they were all firing at 10 Hz, but with a controlled correlation coefficient c that could be varied (using a Multiple Interaction Process [Kuhn et al, 2003]). This parameter c allowed us to quantify the percentage of pairwise correlations recovered by the algorithm for overlapping spatio-temporal templates.





From our ground truth data one can see that the quality of the sorted units is directly related to the size of the spike waveform. We have not found any other parameter that played a significant role in determining the quality of the sorted unit. We have added a panel showing the relation between sorting quality and spike size.


The algorithm can be divided into two main steps, described below. After a preprocessing stage, we first run a clustering algorithm to extract a dictionary of ‘templates’ from the recording. Second, we use these templates to decompose the signal with a template-matching algorithm. We assume that a spike will only influence the extracellular signal over a time window of size Nt (typically 2 ms for in vivo and 5 ms for in vitro data) and only over electrodes whose distance to the soma is below rmax (typically for in vivo and for in vitro data). For every electrode k centered on pk, we define Gk as the ensemble of nearby electrodes l such that ‖pk−pl‖2 ≤ rmax. The key parameters of the algorithm are summarized in Table 1.


In one example, where we artificially split synthetic spike trains (Figure 4A; see Materials and methods), we could clearly isolate a cluster of pairs lying near this diagonal, corresponding to the pairs that needed to be merged (Figure 4A, right panels). We have designed a GUI such that the user can automatically select this cluster and merge all the pairs at once; with a single action by the user, all the pairs are merged.


Added a new test script that does not depend on the Statistics and Machine Learning Toolbox

We have completely rewritten the Results and Materials and methods section. The results are now more complete, and the methods more accurate.

The reviewer is right, and this is now better discussed in the manuscript. Indeed, the BEER is only an approximation of the expected best performance, as this nonlinear classifier may not cope perfectly with temporal overlaps, drifts, or bursting behavior. Nevertheless, in our case, since we used only a few minutes of recording, the drifts appear to be negligible.



We computed the first component from the raw data as the point-wise median of all the waveforms belonging to the cluster: wm(t) = medl s(tlm + t). Note that wm is only different from zero on the electrodes close to its peak (see Figure 1C). This information is used internally by the algorithm to save templates as sparse structures. We set to 0 all the electrodes k where ‖wmk(t)‖ < θk, where θk is the detection threshold on electrode k. This allowed us to remove electrodes without discriminant information and to increase the sparsity of the templates.
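The template construction above can be sketched as follows. One assumption to flag: the paper zeroes electrodes where the norm of the template falls below the detection threshold, while for simplicity this sketch uses the per-electrode peak amplitude as that norm; the function name is also ours.

```python
import numpy as np

def build_template(snippets, thresholds):
    """Point-wise median of the snippets of one cluster, with electrodes whose
    peak amplitude stays below the detection threshold zeroed out to keep the
    template sparse. snippets: (n_spikes, n_elec, n_t); thresholds: (n_elec,)."""
    w = np.median(snippets, axis=0)      # median is robust to outlier waveforms
    peak = np.abs(w).max(axis=1)         # per-electrode peak amplitude
    w[peak < thresholds] = 0.0           # drop non-discriminant electrodes
    return w
```

The median (rather than the mean) keeps occasional overlapping spikes from other cells from distorting the template, and the zeroed rows are what allow templates to be stored as sparse structures.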
