A note on robust statistical methods

Posted on 23 October 2017 in Pietro's Data Bulletin

TL;DR: Conventional statistical methods like the t-test or the ANOVA F test can perform very poorly when the data violate the assumptions of normality and homoscedasticity. So-called robust statistical methods perform well even when these assumptions do not hold (and we should use them).
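To give a taste of the idea (a minimal sketch, not code from the post): the trimmed mean is one of the simplest robust estimators, discarding a fixed fraction of the most extreme values before averaging. The data here are made up to show the contrast with the ordinary mean.

```python
import numpy as np

def trimmed_mean(x, prop=0.2):
    """Mean after discarding the lowest and highest `prop` fraction of values."""
    x = np.sort(np.asarray(x, dtype=float))
    g = int(np.floor(prop * len(x)))  # number of values trimmed from each tail
    return x[g:len(x) - g].mean()

# Hypothetical sample with one gross outlier:
sample = np.array([2.1, 2.3, 2.2, 2.4, 2.0, 2.2, 15.0])
print(round(np.mean(sample), 2))       # ordinary mean, dragged up by the outlier
print(round(trimmed_mean(sample), 2))  # 20% trimmed mean, barely affected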

Continue reading

Show me your code and I’ll tell you who you are

Posted on 05 October 2017 in Pietro's Data Bulletin

TL;DR: We should do code reviews in science. That's it. It's that simple.

Continue reading


Posted on 26 September 2017 in Pietro's Data Bulletin

TL;DR: Low statistical power, stemming from small sample sizes, is a serious concern for the reliability of results in neuroscience. But not all hope is lost.
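A quick way to build intuition for this (a sketch, not the post's own analysis) is to estimate power by simulation. Assuming a two-sample z-test with known unit variance and a medium effect of d = 0.5, small groups detect the effect only a fraction of the time:

```python
import numpy as np

rng = np.random.default_rng(0)

def estimated_power(n, effect_size=0.5, n_sim=4000):
    """Monte Carlo power of a two-sample z-test (known unit variance, alpha = .05)."""
    a = rng.normal(0.0, 1.0, size=(n_sim, n))
    b = rng.normal(effect_size, 1.0, size=(n_sim, n))
    z = (b.mean(axis=1) - a.mean(axis=1)) / np.sqrt(2.0 / n)
    return np.mean(np.abs(z) > 1.96)  # fraction of simulations that reject H0

print(estimated_power(10))   # with n = 10 per group, power is far below 0.8
print(estimated_power(64))   # n = 64 per group brings power up to roughly 0.8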

Continue reading

Is the difference between significant and not significant... significant?

Posted on 15 September 2017 in Pietro's Data Bulletin

Admittedly, the question in the title might seem convoluted, perhaps even nonsensical, at first. Yet it exposes an important misconception in statistical analysis, one which often goes unnoticed.

Suppose you are interested in showing that a certain experimental effect is larger in a certain group or under certain conditions. You …
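To make the issue concrete (a sketch with made-up summary statistics, not the post's example): if one group's effect is significant and the other's is not, that alone does not establish a difference between the groups. The correct move is to test the difference itself.

```python
from math import erf, sqrt

def p_two_sided(z):
    """Two-sided p-value for a z statistic under a standard normal reference."""
    return 2.0 * (1.0 - 0.5 * (1.0 + erf(abs(z) / sqrt(2.0))))

# Hypothetical effect estimates and standard errors for two groups:
effect_a, se_a = 0.50, 0.25   # z = 2.0 -> "significant"
effect_b, se_b = 0.30, 0.25   # z = 1.2 -> "not significant"

# Wrong: compare the two p-values. Right: test the difference directly.
diff = effect_a - effect_b
se_diff = (se_a**2 + se_b**2) ** 0.5
print(round(p_two_sided(effect_a / se_a), 3))   # below .05
print(round(p_two_sided(effect_b / se_b), 3))   # above .05
print(round(p_two_sided(diff / se_diff), 3))    # the difference is far from significant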

Continue reading

(Effect) size matters

Posted on 04 September 2017 in Pietro's Data Bulletin

TL;DR: Looking at statistical significance alone can be misleading. There are a lot of good reasons to make use, on a regular basis, of measures of effect size together with their confidence intervals, as opposed to only looking at p-values.
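As a hedged sketch of the practice (Cohen's d with a percentile-bootstrap confidence interval; the data and sample sizes here are made up):

```python
import numpy as np

def cohens_d(a, b):
    """Cohen's d: mean difference scaled by the pooled standard deviation."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    na, nb = len(a), len(b)
    pooled = np.sqrt(((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1))
                     / (na + nb - 2))
    return (a.mean() - b.mean()) / pooled

def bootstrap_ci(a, b, n_boot=2000, seed=0):
    """Percentile-bootstrap 95% confidence interval for Cohen's d."""
    rng = np.random.default_rng(seed)
    ds = [cohens_d(rng.choice(a, len(a)), rng.choice(b, len(b)))
          for _ in range(n_boot)]
    return np.percentile(ds, [2.5, 97.5])

# Hypothetical scores for two groups of 40:
rng = np.random.default_rng(4)
a = rng.normal(0.8, 1.0, 40)
b = rng.normal(0.0, 1.0, 40)
d = cohens_d(a, b)
lo, hi = bootstrap_ci(a, b)
print(round(d, 2))                 # the effect size itself
print(round(lo, 2), round(hi, 2)) # and the uncertainty around it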

Continue reading

When less (dimensions) is more

Posted on 05 July 2017 in Pietro's Data Bulletin

TL;DR: Dimensionality reduction methods are an interesting tool to visualize, interrogate, and summarize your high-dimensional neural data.
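As an illustration (a minimal PCA via the SVD on synthetic "neural" data, not the post's own code): when many recorded units are driven by a few shared signals, a couple of components capture most of the variance.

```python
import numpy as np

def pca(X, n_components=2):
    """Project rows of X onto the top principal components (computed via SVD)."""
    Xc = X - X.mean(axis=0)                 # center each feature
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    explained = S**2 / np.sum(S**2)         # variance ratio per component
    return Xc @ Vt[:n_components].T, explained[:n_components]

# Hypothetical data: 200 trials x 50 units, driven by 2 latent signals plus noise.
rng = np.random.default_rng(1)
latents = rng.normal(size=(200, 2))
mixing = rng.normal(size=(2, 50))
X = latents @ mixing + 0.1 * rng.normal(size=(200, 50))

scores, ratio = pca(X, n_components=2)
print(scores.shape)        # one 2-D point per trial, ready to plot
print(ratio.sum() > 0.9)   # two components capture most of the variance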

Continue reading

Don't let your analysis run in circles

Posted on 26 June 2017 in Pietro's Data Bulletin

TL;DR: If you use a certain criterion to first select a part of your data (e.g. a certain set of neurons) and then test another related measure on this subset, you might be heavily biasing your results. An easy way to prevent this is to use half of your data for selection, and the other half for testing.
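The bias is easy to demonstrate on pure noise (a sketch with made-up null data, not the post's example): selecting the "strongest" neurons and then measuring their response on the same trials manufactures an effect out of nothing, while split-half selection does not.

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical null data: 100 "neurons" x 80 trials of pure noise (no real effect).
responses = rng.normal(size=(100, 80))

# Circular: select the 10 strongest neurons and test on the SAME trials.
top = np.argsort(responses.mean(axis=1))[-10:]
circular_estimate = responses[top].mean()

# Split-half: select on the even trials, estimate the effect on the odd trials only.
sel = np.argsort(responses[:, ::2].mean(axis=1))[-10:]
honest_estimate = responses[sel][:, 1::2].mean()

print(round(circular_estimate, 2))  # clearly positive despite no true effect
print(round(honest_estimate, 2))    # near zero, as it should be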

Continue reading

You can do more than the average

Posted on 20 June 2017 in Pietro's Data Bulletin

TL;DR: With little effort, you can get a lot more from your data than by just comparing means and plotting bar plots: a recent paper elaborates on graphical methods that could be a good addition to your data analysis toolbox.
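One such idea, sketched here on synthetic data (not the paper's own method or code), is to compare two groups quantile by quantile rather than only by their means: two distributions with nearly identical means can still differ markedly across their deciles.

```python
import numpy as np

rng = np.random.default_rng(3)
# Two hypothetical groups with equal means but very different spread:
a = rng.normal(0.0, 1.0, 300)
b = rng.normal(0.0, 2.0, 300)

deciles = np.arange(0.1, 1.0, 0.1)
qa = np.quantile(a, deciles)
qb = np.quantile(b, deciles)

print(round(a.mean() - b.mean(), 2))   # the means barely differ
for p, d in zip(deciles, qa - qb):
    print(f"{p:.1f}: {d:+.2f}")        # but the decile differences fan out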

Continue reading