Hi, I am running the 16S workflow for MiSeq data, and on the pre.cluster command I get the error “Segmentation fault (core dumped)”, which exits mothur. I have tried using only 1 processor and using all available processors, and I still get the same error.
I am running: pre.cluster(fasta=16s.trim.contigs.good.unique.good.filter.unique.fasta, count=16s.trim.contigs.good.unique.good.filter.count_table, diffs=2)
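In case it is useful for debugging, I can try to grab a backtrace from the core dump. A rough sketch of what I would run (assuming core dumps are enabled and the core file is written to the working directory, which depends on the system's core_pattern setting):

    # Allow core files in this shell, then re-run the crashing command
    ulimit -c unlimited
    ./mothur "#pre.cluster(fasta=16s.trim.contigs.good.unique.good.filter.unique.fasta, count=16s.trim.contigs.good.unique.good.filter.count_table, diffs=2)"

    # Print the stack trace recorded in the core file
    gdb --batch -ex bt ./mothur core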
If you are running it on Windows or on Debian-based Linux, I had a similar problem in a course, even with relatively small datasets. Windows does not seem to handle memory well, and you can run out of it. Try checking in Task Manager whether you are running out of RAM.
Again, this did not happen on macOS or CentOS/RedHat, which handle memory better. With the same dataset and script from the course, a not-that-new Mac was able to finish, while my Windows-based workstation (with better specs than the Mac) ran out of memory and faulted.
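On Linux you can make the same check from a terminal. A quick sketch, assuming the process is actually named mothur:

    # Snapshot of total/used/free memory, human-readable
    free -h

    # Watch mothur's resident memory (RES column), refreshing every 2 seconds
    top -d 2 -p $(pgrep -d, mothur)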
Hi Leo, Thanks for your answer.
I am running the command on a Linux system (Ubuntu 18.04.3). I watched it with top while the command was running, and it is not running out of memory, so I am wondering if there is a bug in the program.
Best,
Laura
Hi Michell, Thanks for your answer. I am running 225 fastq files through the command. As for RAM usage, the computer has 13 GB, and the program used only 0.2% of that (roughly 26 MB) while running pre.cluster.
Thanks!
-Laura
I suspect the issue is caused by mothur needing more RAM than you have. If you think memory is not the issue, could you send your input files and log file to mothur.bugs@gmail.com?
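Something along these lines would bundle them for emailing (a rough sketch; the archive name is arbitrary, and the glob assumes the default logfile naming, mothur.<timestamp>.logfile, so it picks the most recent one):

    # Pack the two inputs plus the most recent mothur logfile
    tar czf precluster_bug.tar.gz \
        16s.trim.contigs.good.unique.good.filter.unique.fasta \
        16s.trim.contigs.good.unique.good.filter.count_table \
        $(ls -t mothur.*.logfile | head -n 1)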