Hi,
I am having trouble with one particular 454 file using version 1.30. It was generated with flow pattern B and has a total of 102146 sequences (57 barcodes). Neither the flow pattern nor the number of sequences is the problem, since seven other 454 files with 70K-200K sequences each (3 flow pattern B and 3 flow pattern A) run just fine. Since the other large files process okay, I don't think it is a memory problem either (iMac, 8GB RAM, 3.1 GHz Intel Core i5). The commands I am using are as follows:
sffinfo(sff=pool1.sff)
trim.flows(flow=pool1.flow, oligos=pool1primer.oligos, pdiffs=2, bdiffs=1, processors=1, order=B)
shhh.flows(file=pool1.flow.files, order=B, processors=1)
I don't specify the lookup file location, since the default input, output, and temp directories are set at the beginning of the mothur session.
I tried the following to see if it would resolve issues:
- For trim.flows, specified minflow=360 and maxflow=450. I don't know whether this would have helped, but it didn't.
- Tried processors=2. The run fails with:
[ERROR]: Could not open /Users/ameetpinto/Desktop/Pyrocurrent/Pinto_dwsff/trial3/acyclic/pool2.shhh.fasta9871.num.temp
[ERROR]: main process expected 9871 to complete 28 files, and it only reported completing 0. This will cause file mismatches. The flow files may be too large to process with multiple processors.
I get fasta, qual, name, count, and groups files for 38 of the 57 barcodes.
- Tried processors=1. The program simply quits after working through several flow files, without giving any errors. It typically quits while clustering flowgrams for one of the flow files.
I have run this several times to figure out whether one particular flow file is the problem, but the program either gives the error (2 processors) or quits (1 processor) on different flow files, typically after it has processed 30-38 of them. I have also tried specifying processors=1 and processors=2 for trim.flows to see whether that step was the problem, but with no success.
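One diagnostic I have been considering, sketched below in case it is useful to others: since shhh.flows also accepts a single flow file via its flow= parameter, the per-barcode flow files listed in pool1.flow.files could be run one at a time from the command line, so a crash points at the offending file rather than aborting the whole batch. This is only a sketch; it assumes mothur is on the PATH, that the first column of the .flow.files listing is the per-group flow file name, and the function name run_each_flow is just my own label.

```shell
# Sketch: run shhh.flows once per flow file so a crash or error
# identifies the specific file. Assumptions: mothur is on PATH;
# $1 is the .flow.files listing written by trim.flows, with the
# flow file name in the first whitespace-separated column.
run_each_flow() {
  listing=$1
  while read -r flowfile _rest; do
    # skip blank lines in the listing
    [ -n "$flowfile" ] || continue
    # run shhh.flows on this one flow file, capturing its output
    if mothur "#shhh.flows(flow=$flowfile, order=B, processors=1)" \
        > "$flowfile.shhh.log" 2>&1; then
      echo "OK: $flowfile"
    else
      echo "FAILED: $flowfile"
    fi
  done < "$listing"
}

# Example (hypothetical file name):
# run_each_flow pool1.flow.files
```

Logs land next to each flow file, so the last "FAILED" (or the last log written before a silent quit) names the file to inspect.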
Any input on how I should handle this issue would be much appreciated.
Thanks.
Ameet