Pre.cluster Segmentation fault (core dumped) error

Hi, I am running the 16S workflow for MiSeq data, and when I reach the pre.cluster command I get the error "Segmentation fault (core dumped)" and mothur exits. I have tried using only 1 processor and using all available processors, and I still get the same error.

I am running: pre.cluster(fasta=16s.trim.contigs.good.unique.good.filter.unique.fasta, count=16s.trim.contigs.good.unique.good.filter.count_table, diffs=2)
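In case it helps, the single-processor attempt was just the same command with the processors option added (syntax from memory, same input files):

pre.cluster(fasta=16s.trim.contigs.good.unique.good.filter.unique.fasta, count=16s.trim.contigs.good.unique.good.filter.count_table, diffs=2, processors=1)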

mothur version: v1.42.3

Is this a bug in the command?

Thanks!
-Laura

Hi Laura

Which OS are you using?

If you are running it on Windows or on Debian-based Linux, I ran into a similar problem in a course, even with relatively small datasets. In my experience Windows does not handle memory well, so you can run out of it. You can check in your Task Manager whether you are running out of RAM.

Again, this did not happen on macOS or CentOS/RedHat, which seem to handle memory better. For the same dataset and script I used in the course, a not-that-new Mac was able to finish, while my Windows workstation (with better specs than the Mac) ran out of memory and crashed.

Cheers,

Leo

Hi Leo, thanks for your answer.
I am running the command on a Linux system (Ubuntu 18.04.3). I monitored the run with top and it is not running out of memory, so I am wondering whether there is a bug in the program.
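For reference, I watched it with something along these lines (a rough sketch; it assumes a single running process named mothur):

top -p $(pgrep mothur)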
Best,
Laura

How many sequences are going into this command? How much RAM are you using?

Hi Michell, thanks for your answer. I am running 225 fastq files through the workflow. As for RAM usage, the computer has 13 GB, and the program used about 0.2% of RAM (roughly 26 MB) while running pre.cluster.
Thanks!
-Laura

I suspect the issue is caused by mothur needing more RAM than you have. If you think memory is not the issue, could you send your input files and log file to mothur.bugs@gmail.com?

Thanks! I just sent you an email with the files.
