[pymvpa] the effect of ROI size on classification accuracy

MS Al-Rawi rawi707 at yahoo.com
Mon Jul 21 15:57:49 UTC 2014


If it is not due to the C parameter of the SVM, maybe you could try smoothing before MNI normalization to see how much that affects your results (e.g., effects due to normalization and voxel oversampling).
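
A minimal sketch of that ordering, in case it is useful (the file names and
the sigma value below are only placeholders, and the normalization step
itself is left to whatever pipeline is already in use):

    import subprocess

    runs = ["run1_native.nii.gz", "run2_native.nii.gz"]  # hypothetical native-space inputs
    sigma_mm = 2.0  # fslmaths -s takes a Gaussian sigma in mm

    for r in runs:
        out = r.replace(".nii.gz", "_sm%gmm.nii.gz" % sigma_mm)
        # smooth in native space first...
        subprocess.check_call(["fslmaths", r, "-s", str(sigma_mm), out])
        # ...then warp 'out' to MNI space with the existing normalization
        # step (e.g. flirt/fnirt/applywarp), instead of smoothing after
        # normalization.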

Regards,
-Rawi



> On Monday, July 21, 2014 12:37 PM, Brian Murphy <brian.murphy at qub.ac.uk> wrote:
> Hi Meng,
> 
> I don't use SVMs that often, but I wonder whether it is related to the
> setting of the C or shrinkage parameter. With smoothing you increase
> the amount of collinearity between the input features, which can make
> it harder for your algorithm to choose among features with similar
> informativeness.
> 
> best,
> 
> Brian
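
One way to check the C idea directly would be a sweep like the following
(a rough sketch: it assumes an existing PyMVPA dataset 'ds' with targets
and chunks set and the ROI mask already applied; the grid of C values is
arbitrary):

    import numpy as np
    from mvpa2.clfs.svm import LinearCSVMC
    from mvpa2.generators.partition import NFoldPartitioner
    from mvpa2.measures.base import CrossValidation

    # sweep C to see whether the smoothing effect interacts with the
    # amount of regularization
    for C in (0.01, 0.1, 1.0, 10.0, 100.0):
        clf = LinearCSVMC(C=C)
        cv = CrossValidation(clf, NFoldPartitioner(),
                             errorfx=lambda p, t: np.mean(p == t))
        res = cv(ds)
        print("C=%g  mean accuracy=%.3f" % (C, np.mean(res)))

If accuracy varies strongly with C for the heavily smoothed data but not
for the unsmoothed data, that would point towards the regularization
explanation.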
> 
> 
> 
> On Sun, 2014-07-20 at 17:10 +0100, Meng Liang wrote:
>>  Dear Jo,
>> 
>> 
>>  Thanks for your reply! 
>> 
>> 
>>  I generated a series of smoothed images with Gaussian sigma from 1 mm
>>  to 5 mm using the same code (a for loop was used to run the different
>>  sigmas, and FSL's smoothing command was used; a sketch of such a loop
>>  follows the table below). Smoothing was done on the 4d nifti file
>>  directly, so it is unlikely to have changed the order of the 3d
>>  volumes. Visually, the unsmoothed image and the smoothed image with
>>  sigma = 1 mm look almost identical. The classification accuracies for
>>  all the datasets and ROIs were as follows:
>>  ======================================================
>>          sigma0  sigma1  sigma2  sigma3  sigma4  sigma5
>>  ROI1    0.7500  0.7917  0.8333  0.8750  0.8750  0.8750
>>  ROI2    0.7917  0.7917  0.7500  0.7500  0.6667  0.6667
>>  ROI3    0.7917  0.7917  0.7500  0.7500  0.6250  0.5833
>>  ======================================================
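
For concreteness, the smoothing loop described above could look roughly
like this (a minimal sketch; the file names are assumptions, and the
sigma is passed to FSL in mm):

    import subprocess

    infile = "func_4d.nii.gz"          # hypothetical 4D input
    for sigma in range(1, 6):          # sigma = 1..5 mm
        out = "func_4d_sigma%d.nii.gz" % sigma
        # fslmaths -s smooths each volume of the 4D file spatially;
        # the volume order is not touched
        subprocess.check_call(["fslmaths", infile, "-s", str(sigma), out])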
>> 
>> 
>>  My impression now is that this wasn't due to a mistake; rather, for
>>  ROI3 the smoothing somehow changed the distribution of the data points
>>  in the high-dimensional feature space in a way that altered the
>>  classification accuracy. I guess that is theoretically possible.
>> 
>> 
>>  If this is true, it raises another question: can we use smoothing as
>>  a way to test whether it is the fine-grained pattern across
>>  neighbouring voxels, or the very coarse pattern across different brain
>>  regions, that drives the successful classification? The above example
>>  seems to make the interpretation of the results from such a test a bit
>>  complicated, as smoothing can have a very different effect on a
>>  combined ROI (ROI3) than on the separate ROIs (ROI1 and ROI2). Any
>>  thoughts?
>> 
>> 
>>  Best,
>>  Meng
>> 
>> 
>> 
>> 
>> 
>>  > Date: Fri, 18 Jul 2014 16:53:54 -0500
>>  > From: jetzel at artsci.wustl.edu
>>  > To: pkg-exppsy-pymvpa at lists.alioth.debian.org
>>  > Subject: Re: [pymvpa] the effect of ROI size on classification accuracy
>>  > 
>>  > 
>>  > On 7/18/2014 12:06 PM, Meng Liang wrote:
>>  > > That's one reason I'm puzzled about the results. Having said that,
>>  > > sigma=5mm smoothing equals FWHM=11.8mm smoothing, so the smoothed
>>  > > image does look considerably smoother than the unsmoothed image.
>>  > That helps - I'm more used to thinking in FWHM. 11.8 mm with 2x2x2 mm
>>  > voxels is fairly substantial and will likely make some sort of
>>  > difference in the results.
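
As a quick sanity check of that conversion (the standard Gaussian
relation FWHM = 2*sqrt(2*ln 2)*sigma, nothing FSL-specific):

    import math
    sigma = 5.0                                    # mm
    fwhm = 2 * math.sqrt(2 * math.log(2)) * sigma  # ~ 2.3548 * sigma
    print(round(fwhm, 1))                          # -> 11.8 (mm)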
>>  > 
>>  > > I was also wondering whether this was due to some mistake. But all
>>  > > results were generated from the same code (the only difference is
>>  > > the nifti image files being read into the script). Not sure what
>>  > > other things to check... Ideas?
>>  > Hmm. So you have 4d niftis with the (smoothed or not) functional
>>  > data, plus 3d niftis with the ROI masks, and just send different 4d
>>  > niftis to the same classification code? I think you're right then to
>>  > look at the smoothed niftis. Perhaps something went strange with the
>>  > smoothing procedure, say resulting in some sort of reordering? You
>>  > could try something like running the images through the smoothing
>>  > code, but with zero (or nearly zero) smoothing, which shouldn't
>>  > change the actual functional data, to see if it turns up anything
>>  > weird (i.e. if the zero-smoothed images don't exactly match the
>>  > before-smoothing images).
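
A minimal sketch of that check with nibabel/numpy (the file names are
placeholders; the second file is assumed to be the output of the
smoothing code run with sigma at or near zero):

    import numpy as np
    import nibabel as nib

    orig = nib.load("func_4d.nii.gz").get_fdata()
    resmoothed = nib.load("func_4d_sigma0.nii.gz").get_fdata()

    # same shape and (near-)identical values would mean the smoothing
    # step itself is not reordering or otherwise altering the volumes
    print(orig.shape == resmoothed.shape)
    print(np.allclose(orig, resmoothed, atol=1e-4))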
>>  > 
>>  > Jo
>>  > 
>>  > 
>>  > -- 
>>  > Joset A. Etzel, Ph.D.
>>  > Research Analyst
>>  > Cognitive Control & Psychopathology Lab
>>  > Washington University in St. Louis
>>  > http://mvpa.blogspot.com/
>>  > 
>> 
> 
> -- 
> Dr. Brian Murphy
> Lecturer (Assistant Professor)
> Knowledge & Data Engineering (EEECS)
> Queen's University Belfast
> brian.murphy at qub.ac.uk
> 
> 
> 


