Hi All,
I am experiencing a problem when trying to run the shhh.flows command.
I am using a high-performance computer, and the dataset is composed of 9 samples with a total of about 26K sequences. The sequences were generated on a 454 GS FLX.
I tried varying the number of processors, the way I run it (on the main cluster in batch mode or in interactive mode), and a lot of other things, but the error is always the same. After approximately 20 minutes of processing, I receive the following message:
mothur > shhh.flows(file=all.flow.files, processors=9, lookup=LookUp_GSFLX.pat)
Using 9 processors.
Getting preliminary data…
Processing all.CC1.flow (file 1 of 9) <<<<<
Reading flowgrams…
[dev-amd09:20629] *** Process received signal ***
[dev-amd09:20629] Signal: Segmentation fault (11)
[dev-amd09:20629] Signal code: Address not mapped (1)
[dev-amd09:20629] Failing at address: 0x40020aa088
[dev-amd09:20629] [ 0] /lib64/libpthread.so.0(+0xf490) [0x2b27239af490]
[dev-amd09:20629] [ 1] /opt/software/OpenMPI/1.4.3-GCC-4.4.5/lib/libopen-pal.so.0(opal_memory_ptmalloc2_int_malloc+0x96a) [0x2b2721e835ca]
[dev-amd09:20629] [ 2] /opt/software/OpenMPI/1.4.3-GCC-4.4.5/lib/libopen-pal.so.0(+0x416d3) [0x2b2721e846d3]
[dev-amd09:20629] [ 3] /usr/lib64/libstdc++.so.6(_Znwm+0x1d) [0x2b27235400bd]
[dev-amd09:20629] [ 4] mothur(_ZNSt6vectorIsSaIsEE13_M_insert_auxEN9__gnu_cxx17__normal_iteratorIPsS1_EERKs+0xbb) [0xb9450b]
[dev-amd09:20629] [ 5] mothur(_ZN13ShhherCommand11getFlowDataEv+0x3aa) [0xb6fc3a]
[dev-amd09:20629] [ 6] mothur(_ZN13ShhherCommand7executeEv+0x6ad) [0xb8499d]
[dev-amd09:20629] [ 7] mothur(_ZN14InteractEngine8getInputEv+0x7b4) [0x70edf4]
[dev-amd09:20629] [ 8] mothur(main+0x13fc) [0x8e006c]
[dev-amd09:20629] [ 9] /lib64/libc.so.6(__libc_start_main+0xfd) [0x2b2723bdacdd]
[dev-amd09:20629] [10] mothur() [0x4884a9]
[dev-amd09:20629] *** End of error message ***
Segmentation fault
Any idea what might be causing this problem? I tried mothur versions 1.26.0 and 1.24.0.
Thank you very much
How much RAM does your computer have? How are you running trim.flows?
RAM is probably not the problem, since I am using a high-performance computer.
In interactive mode I didn't set any RAM limit, and the machine has a total of 256 GB of RAM (I don't know how much is available to my process, since many people can use it at the same time). However, in batch mode, I reserved 24 GB for the job.
I ran trim.flows as follows:
mothur > trim.flows(flow=all.flow, oligos=oligos.txt, pdiffs=2, bdiffs=1, processors=2, minflows=360)
Using 2 processors.
10000
Using 2 processors.
10000
13006
13007
Appending files from process 23963
Output File Names:
all.trim.flow
all.scrap.flow
all.CC1.flow
all.CC2.flow
all.CC3.flow
all.CE1.flow
all.CE2.flow
all.CE3.flow
all.CQ1.flow
all.CQ2.flow
all.CQ3.flow
all.flow.files
Oligo file:
forward AYTGGGYDTAAAGNG
#reverse TACCRGGGTHTCTAATCC
#reverse TACCAGAGTATCTAATTC
#reverse CTACDSRGGTMTCTAATC
#reverse TACNVGGGTATCTAATCC
barcode CATGCAGC CC1
barcode CTCAGCAG CC2
barcode CTCATCTG CC3
barcode ATCTCTGC CE1
barcode CATCTCTG CE2
barcode CATGATGC CE3
barcode CTCTCAGC CQ1
barcode CTCTCATG CQ2
barcode CTGAGATC CQ3
Yeah, well 24 GB might not do very well… Can you try doing it with…
trim.flows(flow=all.flow, oligos=oligos.txt, pdiffs=2, bdiffs=1, processors=2)
When you do minflows=360 you're allowing everything with 360-450 flows in. This will artificially jack up the number of uniques.
Pat,
Now it is working. The problem was not the RAM. I tried running it with 320 GB of RAM, and I still ran out of memory anyway…
The problem was the trim.flows call… My data had up to 400 flows, and as you said, I was asking it to trim up to 450.
I corrected the trim.flows parameters and it is working well now. I set minflows=200 and maxflows=360 (is that good for FLX data?)
Now, I wonder what mothur was trying to calculate that used more than 320 GB of RAM, before I changed the trim.flows parameters.
Thank you for your help
I would set minflows and maxflows both to 225. It's counterproductive to set them to different values.
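For reference, with minflows and maxflows both at 225 the full command would look something like this (filenames and the other parameters carried over from the earlier run; adjust to your own data):
mothur > trim.flows(flow=all.flow, oligos=oligos.txt, pdiffs=2, bdiffs=1, minflows=225, maxflows=225, processors=2)
Setting minflows equal to maxflows means every surviving flowgram is trimmed to exactly the same length, which keeps the number of unique flowgrams (and therefore shhh.flows memory use) down.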