[pymvpa] glm for MVPA

basile pinsard basile.pinsard at gmail.com
Mon Feb 15 15:20:36 UTC 2016

Hi pymvpa users and developers.

I have multiple questions regarding the use of GLM to model events for MVPA
analysis which are not limited to PyMVPA.

First: the NiPy GLM mapper extracts beta weights from the GLM. Is that
common for MVPA? The StatsModels GLM mapper returns t-, p-, z-... values
from the model. Is using the t-statistic more relevant?
The paper "Decoding information in the human hippocampus: a user's guide"
<http://www.sciencedirect.com/science/article/pii/S0028393212002953>
says: "In summary, the pre-processing method of choice at present appears
to be the use of the GLM to produce *t*-values as the input to MVPA
analyses. However, it is important to note that this does not invalidate
the use of other approaches such as raw BOLD or betas, rather the
evidence suggests that these approaches may be sub-optimal, reducing the
power of the analysis, making it more difficult to observe significant [...]"
What do you think?
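For concreteness, here is a minimal per-voxel OLS sketch in plain NumPy (not the PyMVPA mapper API; all names are illustrative) showing how both the beta weights and the t-values come out of the same fit, differing only in the normalization by the estimated standard error:

```python
import numpy as np

def glm_beta_and_t(Y, X):
    """Fit an OLS GLM per voxel; return beta weights and t-values.

    Y : (n_scans, n_voxels) BOLD time series
    X : (n_scans, n_regressors) design matrix
    Illustrative sketch only, not the PyMVPA/NiPy implementation.
    """
    # OLS estimate: beta = pinv(X) @ Y
    beta = np.linalg.pinv(X) @ Y              # (n_regressors, n_voxels)
    resid = Y - X @ beta
    dof = X.shape[0] - np.linalg.matrix_rank(X)
    sigma2 = (resid ** 2).sum(axis=0) / dof   # residual variance per voxel
    # Var(beta_k) = sigma2 * [(X'X)^-1]_kk  ->  standard error per beta
    xtx_inv_diag = np.diag(np.linalg.inv(X.T @ X))
    se = np.sqrt(np.outer(xtx_inv_diag, sigma2))
    t = beta / se                             # t-value per regressor, per voxel
    return beta, t
```

The t-map is just the beta map rescaled by its voxel-wise uncertainty, which is why t-values can down-weight noisy voxels before they reach the classifier.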

Second: I have developed a custom method using the least-squares-separate
(LS-S) model, which fits one model per event/block/regressor of interest
and was shown to provide more stable pattern estimates and improved
classification in Mumford et al. 2012. However, for each block I want to
model 2 regressors, one for the instruction phase and one for the
execution phase, which are consecutive. So the procedure I use is one
regressor for the block/phase I want to model, plus one regressor for
each remaining phase. Is that correct?
I would be interested in contributing this to PyMVPA in the future, as
LS-A (which stands for "all") is not adequate in rapid event-related
designs with correlated regressors.
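A minimal sketch of the LS-S scheme described in Mumford et al. 2012 (names and signature are my own, not PyMVPA's): one GLM is fit per event, with the event of interest as its own regressor and all remaining events collapsed into a single nuisance regressor:

```python
import numpy as np

def lss_betas(Y, event_regressors):
    """Least-squares-separate (LS-S) beta estimation.

    One GLM per event: the event of interest keeps its own regressor,
    all other events are summed into one nuisance regressor.
    Y : (n_scans, n_voxels) BOLD time series
    event_regressors : (n_scans, n_events) convolved regressors, one per event
    Returns (n_events, n_voxels) per-event betas.
    Illustrative sketch only, not the PyMVPA implementation.
    """
    n_scans, n_events = event_regressors.shape
    betas = []
    for i in range(n_events):
        # Collapse all other events into a single nuisance column
        others = np.delete(event_regressors, i, axis=1).sum(axis=1)
        X = np.column_stack([event_regressors[:, i],
                             others,
                             np.ones(n_scans)])   # intercept
        b = np.linalg.pinv(X) @ Y
        betas.append(b[0])                        # beta of the event of interest
    return np.stack(betas)
```

The two-phase variant you describe would, under this reading, replace the single `others` column with one nuisance column per phase (instruction vs. execution), keeping only the current block/phase separate.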

