[Neurodebian-users] FSL is not parallelized with condor

Labounek René xlabou01 at stud.feec.vutbr.cz
Mon Feb 29 15:42:28 UTC 2016


Michael,
I opened a new terminal after editing the file and before running it. I think
that should be enough, or did you mean something else?

Definitely, it works in the terminal where I ran it:

labounek@emperor:~$ echo $FSLPARALLEL
condor
labounek@emperor:~$
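
For completeness, here is a minimal extra check (just a sketch of my own) that
the variable is not only set but also exported, so that child processes such
as FSL's fsl_sub helper can actually see it:

  labounek@emperor:~$ env | grep FSLPARALLEL        # exported? should print FSLPARALLEL=condor
  labounek@emperor:~$ bash -c 'echo $FSLPARALLEL'   # visible in a fresh subshell?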

Rene

Quoting Michael Hanke <michael.hanke at gmail.com>:

> Hey,
>
> maybe a stupid question, but after you edited the config file, did you
> source it again in the shell where you started bedpost?
>
> Michael
> On Feb 29, 2016 3:40 PM, "Labounek René" <xlabou01 at stud.feec.vutbr.cz>
> wrote:
>
>> Dear Neurodebian users,
>>
>> I am not able to parallelize FSL via condor. I have installed a
>> condor-based grid of 2 computers (called emperor and magellan).
>>
>> On emperor: condor_master, condor_startd, condor_schedd, condor_collector,
>> condor_negotiator and condor_procd are running.
>>
>> On magellan: condor_master, condor_startd and condor_procd are running.
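>>
>> As an extra sanity check (only a sketch, assuming the standard HTCondor
>> command-line tools are available on both machines), the submit daemon and
>> the job queue can be inspected directly:
>>
>>   condor_status -schedd   # is a schedd advertised in the pool?
>>   condor_q                # list the jobs currently known to the schedd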
>>
>> The condor_status output looks OK:
>>
>> labounek@magellan:~$ condor_status
>> Name               OpSys      Arch   State     Activity LoadAv Mem  ActvtyTime
>>
>> slot10@emperor.fno LINUX      X86_64 Unclaimed Idle      0.470 2682 14+02:32:15
>> slot11@emperor.fno LINUX      X86_64 Unclaimed Idle      0.000 2682 14+02:32:16
>> slot12@emperor.fno LINUX      X86_64 Unclaimed Idle      0.000 2682 14+02:32:17
>> slot1@emperor.fnol LINUX      X86_64 Unclaimed Idle      1.000 2682 14+02:32:12
>> slot2@emperor.fnol LINUX      X86_64 Unclaimed Idle      1.000 2682 14+02:32:15
>> slot3@emperor.fnol LINUX      X86_64 Unclaimed Idle      1.000 2682 14+02:32:16
>> slot4@emperor.fnol LINUX      X86_64 Unclaimed Idle      1.000 2682 14+02:32:17
>> slot5@emperor.fnol LINUX      X86_64 Unclaimed Idle      1.000 2682 14+02:32:18
>> slot6@emperor.fnol LINUX      X86_64 Unclaimed Idle      1.000 2682 14+02:32:19
>> slot7@emperor.fnol LINUX      X86_64 Unclaimed Idle      1.000 2682 14+02:32:20
>> slot8@emperor.fnol LINUX      X86_64 Unclaimed Idle      1.000 2682 14+02:32:13
>> slot9@emperor.fnol LINUX      X86_64 Unclaimed Idle      1.000 2682 14+02:32:14
>> slot1@magellan.fno LINUX      X86_64 Unclaimed Idle      0.990 1333  0+02:44:39
>> slot2@magellan.fno LINUX      X86_64 Unclaimed Idle      0.000 1333  0+02:45:06
>> slot3@magellan.fno LINUX      X86_64 Unclaimed Idle      0.000 1333  0+02:45:07
>> slot4@magellan.fno LINUX      X86_64 Unclaimed Idle      0.000 1333  0+02:45:08
>> slot5@magellan.fno LINUX      X86_64 Unclaimed Idle      0.000 1333  0+02:45:09
>> slot6@magellan.fno LINUX      X86_64 Unclaimed Idle      0.000 1333  0+02:45:10
>>
>>                     Total Owner Claimed Unclaimed Matched Preempting Backfill
>>
>>        X86_64/LINUX    18     0       0        18       0          0        0
>>
>>               Total    18     0       0        18       0          0        0
>> labounek@magellan:~$
>>
>>
>> I have set FSLPARALLEL=condor in the /etc/fsl/fsl.sh file (a symbolic link
>> to /etc/fsl/5.0/fsl.sh), as described here:
>>
>>
>> http://neuro.debian.net/blog/2012/2012-03-09_parallelize_fsl_with_condor.html
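>>
>> For reference, the relevant part of the config now looks roughly like this
>> (only a sketch; the rest of the file is the stock Debian version), and I
>> re-read it in the shell before starting any FSL tools:
>>
>>   # /etc/fsl/5.0/fsl.sh (excerpt)
>>   FSLPARALLEL=condor
>>   export FSLPARALLEL
>>
>>   # pick up the change in the current shell
>>   . /etc/fsl/fsl.sh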
>>
>> I have tried to run bedpostx, but it still runs on one core under my user
>> account (labounek), not under the condor account on multiple cores.
>>
>> Does somebody have an idea what is wrong?
>>
>> Here is the terminal output; right now xfibres is running on one core
>> under the labounek user.
>>
>> labounek@emperor:~/test$ bedpostx dti/
>> subjectdir is /home/labounek/test/dti
>> Making bedpostx directory structure
>> Queuing preprocessing stages
>> Queuing parallel processing stage
>>
>> ----- Bedpostx Monitor -----
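>>
>> For the record, a way to see whether bedpostx actually hands its jobs over
>> to condor (just a sketch, assuming the usual HTCondor tools and FSL binary
>> names) is to watch the queue and the worker processes while it runs:
>>
>>   condor_q                            # queued/idle jobs known to the schedd
>>   condor_q -run                       # running jobs and the slots they use
>>   ps -eo user,comm | grep xfibres     # which account the workers run under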
>>
>> Regards,
>> Rene Labounek
>>
>>
>> _______________________________________________
>> Neurodebian-users mailing list
>> Neurodebian-users at lists.alioth.debian.org
>> http://lists.alioth.debian.org/cgi-bin/mailman/listinfo/neurodebian-users
>>
>






More information about the Neurodebian-users mailing list