[pymvpa] NFoldPartitioner

Lasse Güldener lasse.gueldener at gmail.com
Wed Apr 10 12:05:11 BST 2019


Hi again,  

>>> On Wed, 20 Mar 2019, lasse.gueldener at gmail.com wrote:

>>> Hi Folks,

>>> I am quite new to PyMVPA and am currently trying to implement a searchlight analysis at the group level. I ran an fMRI study where subjects had to discriminate the orientation (vertical versus non-vertical) of a masked stimulus and rate their subjective awareness of the orientation (1-4 scale). So the targets/classes to be decoded are the orientations. However, I want to know whether orientation information persists in unaware trials. So far I have been able to run the analysis with an NFoldPartitioner on a dataset that contains unaware trials only, but I actually want to train the classifier on all trials (awareness 1-4) in runs 1-9 and test it on the tenth run on unaware trials (awareness 1) only, in a leave-one-run-out fashion. So (I think) the crucial part that troubles me right now is how to adjust the partitioner. I have been looking at the Sifter and FactorialPartitioner documentation, but am still not sure how to implement them in such a case.
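
For concreteness, the sample attributes I have in mind look roughly like this toy sketch (all sizes and values are made up, and the 1-4 ratings are collapsed into 'aware'/'unaware' string labels; only the attribute names 'targets', 'chunks' and 'awareness' matter):

    import numpy as np
    from mvpa2.suite import Dataset

    # toy dataset: 40 trials x 100 voxels, purely to illustrate the attributes
    n_trials = 40
    ds = Dataset(np.random.randn(n_trials, 100))
    ds.sa['targets'] = np.tile(['vertical', 'nonvertical'], n_trials // 2)  # orientation to decode
    ds.sa['chunks'] = np.repeat(np.arange(10), n_trials // 10)              # 10 runs
    ds.sa['awareness'] = np.random.choice(['aware', 'unaware'], n_trials)   # subjective rating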


	In [6]: FactorialPartitioner?
	Init signature: FactorialPartitioner(cls, *args, **kwargs)
	Docstring:     
	Partitioner for two-level factorial designs

	Given another partitioner on a dataset containing two attributes that are
	organized in a hierarchy, it generates balanced folds of the super-ordinate
	category that are also balanced according to the sub-ordinate category.

	Example
	--------
	We show images of faces to the subjects. Subjects are familiar to some
	identities, and unfamiliar to others. Thus, we have one super-ordinate
	attribute "familiarity", and one sub-ordinate attribute "identity". We want
	to cross-validate familiarity across identities, that is, we train on the
	same number of familiar and unfamiliar identities, and we test on the
	left-over identities.

	>>> partitioner = FactorialPartitioner(NFoldPartitioner(attr='identity'),
	...                                    attr='familiarity')


>so I think you need
	
	>>> partitioner1 = FactorialPartitioner(NFoldPartitioner(attr='chunks'),
	...                                    attr='awareness')

>which should then do both aware -> unaware and unaware -> aware.
What if I want to train the classifier on aware and test it on unaware trials only (aware -> unaware and not vice versa)?
>You can
>indeed combine with Sifter to just choose one if really desired (adjusting
>again from cut/pasted docs, so YMMV ;) )

    >>> par = ChainNode([partitioner1,
    ...                  Sifter([('partitions', 2),  # testing partition
    ...                          ('awareness', ['unaware'])])
    ...                 ], space='partitions')

>for CVing into the unaware condition, assuming that you have a sample
>attribute 'awareness' with values 'unaware' and 'aware' (I would advise not to
>code them up as ints, but to be explicit for readability etc.)
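
If I understand correctly, this chained partitioner would then simply replace the plain NFoldPartitioner in my searchlight setup, roughly as below (just a sketch; the classifier and the radius are only examples, and `ds` is my dataset):

    from mvpa2.suite import CrossValidation, LinearCSVMC, sphere_searchlight

    # cross-validation driven by the chained partitioner `par` from above
    cv = CrossValidation(LinearCSVMC(), par)
    # wrapped in a searchlight, as in my original single-subject analysis
    sl = sphere_searchlight(cv, radius=3)
    sl_map = sl(ds)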

I thought of combining two such ChainNodes, each including a Sifter, yet I am not sure whether that would work or do something rather strange. Also, is there an easy way to check which targets are contained in the generated training and test partitions?
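
Would something like the loop below be a sensible way to inspect the folds (just a sketch, assuming the chained partitioner `par` from above and my dataset `ds`)?

    import numpy as np

    # check which targets / awareness levels end up in the training
    # (partitions == 1) and testing (partitions == 2) set of each fold
    for i, part in enumerate(par.generate(ds)):
        train = part[part.sa.partitions == 1]
        test = part[part.sa.partitions == 2]
        print("fold %d" % i)
        print("  train: targets=%s awareness=%s"
              % (np.unique(train.sa.targets), np.unique(train.sa.awareness)))
        print("  test:  targets=%s awareness=%s"
              % (np.unique(test.sa.targets), np.unique(test.sa.awareness)))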

> I appreciate any comment or suggestion.

>Hope above helps

>-- 
>Yaroslav O. Halchenko
>Center for Open Neuroscience     http://centerforopenneuroscience.org
>Dartmouth College, 419 Moore Hall, Hinman Box 6207, Hanover, NH 03755
>Phone: +1 (603) 646-9834                       Fax: +1 (603) 646-1419
>WWW:   http://www.linkedin.com/in/yarik

Thanks in advance for any comments or suggestions.

Cheers,


Lasse 