shhh.flows error for one file


I am having trouble with one particular 454 file using version 1.30. It was generated with flow pattern B and has a total of 102146 sequences (57 barcodes). Neither the flow pattern nor the number of sequences is the problem, since seven other 454 files with 70K-200K sequences (3 flow pattern B and 3 flow pattern A) run just fine. Since the other large files seem to process okay, I don’t think it is a memory problem either (iMac, 8 GB RAM, 3.1 GHz Intel Core i5). The commands I am using are as follows:

trim.flows(flow=pool1.flow, oligos=pool1primer.oligos, pdiffs=2, bdiffs=1, processors=1, order=B)
shhh.flows(file=pool1.flow.files, order=B, processors=1)

I don’t specify the lookup file location, since the input, output, and temp default directories are set at the beginning of the Mothur session.
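For reference, the session setup being described might look something like this (a sketch only; the directory paths are placeholders, not the actual ones used, and `set.dir` with `tempdefault` is how mothur is pointed at the lookup file directory):

```
set.dir(input=/path/to/flows, output=/path/to/output, tempdefault=/path/to/lookup)
trim.flows(flow=pool1.flow, oligos=pool1primer.oligos, pdiffs=2, bdiffs=1, processors=1, order=B)
shhh.flows(file=pool1.flow.files, order=B, processors=1)
```

With `tempdefault` set, shhh.flows should find the appropriate lookup file (e.g. LookUp_Titanium.pat) without an explicit `lookup=` parameter.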

I tried the following to see if it would resolve issues:

  1. For trim.flows, specified minflow=360 and maxflow=450. I don’t know if this would have helped, but it didn’t.
  2. Tried processors=2. Got the following errors:
     [ERROR]: Could not open /Users/ameetpinto/Desktop/Pyrocurrent/Pinto_dwsff/trial3/acyclic/pool2.shhh.fasta9871.num.temp
     [ERROR]: main process expected 9871 to complete 28 files, and it only reported completing 0. This will cause file mismatches. The flow files may be too large to process with multiple processors.
     I get fasta, qual, name, count, and group files for 38 out of 57 barcodes.
  3. Tried processors=1. The program just quits after working on several flow files without giving any errors. It typically quits while clustering flowgrams for one of the flow files.

I have run this a few times to figure out whether one particular flow file is the problem, but the program either gives an error (2 processors) or quits (1 processor) on different flow files, typically after it has processed between 30 and 38 of them. I have also tried specifying processors=1 and 2 for trim.flows to see whether that was the problem, but with no success.
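One way to narrow down whether a specific flow file is at fault is to run shhh.flows on each per-barcode flow file separately instead of through the `.flow.files` manifest, so a crash points at a single file. A minimal sketch in Python (the manifest layout and file names here are assumptions based on this thread, not verified against the mothur documentation):

```python
import subprocess
from pathlib import Path

def read_flow_manifest(files_path):
    """Return the per-barcode .flow files named in a .flow.files manifest.

    Assumes each non-empty line holds whitespace-separated fields, some of
    which name .flow files produced by trim.flows.
    """
    flows = []
    for line in Path(files_path).read_text().splitlines():
        line = line.strip()
        if line:
            flows.extend(f for f in line.split() if f.endswith(".flow"))
    return flows

def run_each(files_path, order="B"):
    """Run shhh.flows on each flow file on its own; report the first failure."""
    for flow in read_flow_manifest(files_path):
        # mothur's command-line batch syntax: mothur "#command(...)"
        cmd = f"#shhh.flows(flow={flow}, order={order}, processors=1)"
        result = subprocess.run(["mothur", cmd])
        if result.returncode != 0:
            print(f"shhh.flows failed on {flow}")
            break
```

If every file succeeds in isolation, that would point toward an accumulated memory problem rather than a single corrupt flow file.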

Any input on how I should handle this issue would be much appreciated.



Any way you could post the sff file for us to look at? It’s likely a problem in how sffinfo parses the file. Shoot us an email if you’d like to coordinate how to do this.



Thanks very much for your help. I’ll upload the sff and oligos files to a Dropbox folder and invite you to share it. Is there an alternative you’d prefer? Should I send an email to your inbox instead?


FYI: just sent the Dropbox invite with the relevant files.

Thanks again,

Sarah and Pat,

Thanks very much for your help with this. I am not sure why that sff file is problematic on my computer.

Thanks again for the awesome Mothur support.


Dear Pat,

I am contacting you because Ameet Pinto wrote to you on the forum on Friday, 5 April 2013, and I have the same problem as him.

For my internship, I work with version 1.30.2 of Mothur on Linux, and I am building a shell-script pipeline to make the analyses easier for biologists. To build the pipeline, I test Mothur on files that are available in my laboratory. For the past few days I have been using a new sff file and am having problems. When I run shhh.flows with 3 processors, I notice that my computer uses only one processor (top command on Linux). Later, shhh.flows stops in the middle of the procedure. In the end, only 14 flow files out of 22 are analyzed.

Initially, I thought it was because of my computer’s memory, or because my computer went into standby during the night, but no. So I tried the shhh.flows command again, outside of my shell pipeline and during the day, and I got the same result: 14 flow files analyzed out of 22 and an incomplete logfile.

According to your response, this is caused by a problem in the sff file parsing. Can you give me more information about this problem, and tell me what I can do so that shhh.flows does not systematically reject some of my flow files?

Best regards,

Shhh.flows is very memory intensive. Can you try running it with processors=1?