[pymvpa] Emanuele? Re: Q about IterativeReliefOnline and more.......
Yaroslav Halchenko
debian at onerussian.com
Wed Oct 27 14:22:49 UTC 2010
Hi Patrik,
> The reason I ask is that, as far as I understand, the online version
> of 'IterativeRelief' would get updated with one more sample at a time.
Yes and no -- see my question to Emanuele below.
> Can I access the feature selections from the previous states, when
> only dataset[1:-k] "was available"?
irelief does feature weighting, not selection; so at best we could expose the
trace of weight changes through learning (as new samples arrive) within a
conditional attributes collection (called .states in 0.4.x and .ca
later on). Is that what you want?
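To make the weighting/selection distinction concrete, here is a minimal
illustration (plain NumPy, not PyMVPA's API): a Relief-style method hands you
one weight per feature, and turning those weights into a selection is a
separate thresholding step (what FixedNElementTailSelector does in the code
further below). The weight values here are made up.

```python
import numpy as np

# Hypothetical per-feature weights, as a feature-weighting method like
# irelief would produce them (one value per feature, not a selection).
weights = np.array([0.1, 0.9, 0.3, 0.8, 0.05])

# Selecting is an extra step: keep the k features in the "upper tail"
# of the weight distribution.
k = 2
selected = np.argsort(weights)[-k:]  # indices of the k largest weights
```

`selected` would here contain the indices of the two highest-weighted
features (1 and 3).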
As for the "online" aspect: since I have forgotten the details of irelief and
I am not the one who coded it in PyMVPA, I will defer to Emanuele (CCed
directly here), since he was the original author of the irelief implementations.
Emanuele,
looking at the code of IterativeReliefOnline, it seems that it is
"online" but is also wrapped in an outer convergence loop which (as far as I
can see) defeats the notion of online training:
    while change > self.threshold and iteration < self.max_iter:
        if __debug__:
            debug('IRELIEF', "Iteration %d" % iteration)
        for t in range(NS):
            counter += 1.0
            n = random_sequence[t]
            ...
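For contrast, a genuinely online scheme would touch each incoming sample once
and fold it into the running weight estimate, with no outer loop re-sweeping
the whole dataset to convergence. A minimal sketch of that shape (the update
rule here is an assumed running average, not PyMVPA's actual irelief math):

```python
import numpy as np

rng = np.random.default_rng(0)

n_features = 5
w = np.ones(n_features) / n_features  # running feature weights

def online_update(w, sample_term, counter):
    """Fold one new sample's contribution into the running average of w."""
    return ((counter - 1) * w + sample_term) / counter

# each new sample updates w exactly once -- no outer convergence loop
for counter in range(1, 11):
    term = rng.random(n_features)  # stand-in for the per-sample weight term
    w = online_update(w, term, counter)
```

In such a scheme, a w_guess carried over from a previous call would be the
natural starting point for continued training on new data.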
Also, as a side question: what was the original purpose of w_guess?
Shouldn't it be used as a starting point (which would actually allow online
training via consecutive calls to irelief with new data)? Right now it is not
used (beyond conditioning the initialization of w), and setting it to anything
would lead to a failure.
As for 1.: we do not have many iterative methods for feature
selection, besides
IFS -- greedy incremental feature selection
RFE -- recursive feature elimination
and some learning algorithms (e.g. SMLR) with built-in iterative
feature selection/pruning.
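To illustrate the RFE idea mentioned above: it repeatedly scores the surviving
features and drops the worst one until the desired count remains. This is a
conceptual sketch of that scheme, not PyMVPA's actual RFE class; the scorer
passed in is a made-up toy.

```python
import numpy as np

def rfe(score_features, n_features, n_keep):
    """Recursive feature elimination: drop the worst-scoring feature
    each round until only n_keep features remain."""
    remaining = list(range(n_features))
    while len(remaining) > n_keep:
        scores = score_features(remaining)          # one score per survivor
        worst = remaining[int(np.argmin(scores))]   # lowest score goes first
        remaining.remove(worst)
    return remaining

# toy scorer: feature i scores i, so low-index features get eliminated
kept = rfe(lambda feats: list(feats), n_features=6, n_keep=2)
```

In a real run the scorer would come from a classifier's sensitivities,
retrained on the surviving features at each round.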
On Tue, 26 Oct 2010, patrik andersson wrote:
> Hi,
> I have a couple of questions regarding feature selection;
> 1. What choices are there in incremental feature selection and
> training algorithms? Ideally, combined training and feature selection.
> 2. I was playing around with IterativeReliefOnline and got a bit
> confused. Let's say I have:
>     FeatureSelector = SensitivityBasedFeatureSelection(
>         IterativeReliefOnline(transformer=N.abs),
>         FixedNElementTailSelector(10000, mode='select', tail='upper'),
>         enable_states=['selected_ids'])
> smap = FeatureSelector(dataset)
> Is 'smap' now the values in 'dataset' applying the final set of
> selected features on all samples?
> The reason I ask is that, as far as I understand, the online version
> of 'IterativeRelief' would get updated with one more sample at a time.
> Can I access the feature selections from the previous states, when
> only dataset[1:-k] "was available"?
> Thanks a bunch for any help you can give me!
--
.-.
=------------------------------ /v\ ----------------------------=
Keep in touch // \\ (yoh@|www.)onerussian.com
Yaroslav Halchenko /( )\ ICQ#: 60653192
Linux User ^^-^^ [175555]
More information about the Pkg-ExpPsy-PyMVPA
mailing list