I have a problem with the summary.seqs command when running mothur under MPI. I normally run mothur via a batch file under MPI. The align.seqs line generates its results fine, but when the summary.seqs line runs, the error below occurs:
```text
mothur > summary.seqs(fasta=project1seafinal.good.unique.align, count=project1seafinal.good.count_table)
[compute-1-2:3454] *** An error occurred in MPI_Recv
[compute-1-2:3454] *** on communicator MPI_COMM_WORLD
[compute-1-2:3454] *** MPI_ERR_TRUNCATE: message truncated
[compute-1-2:3454] *** MPI_ERRORS_ARE_FATAL: your MPI job will now abort
Using 16 processors.
--------------------------------------------------------------------------
mpirun has exited due to process rank 2 with PID 3448 on
node compute-1-2 exiting improperly. There are two reasons this could occur:
1. this process did not call "init" before exiting, but others in
the job did. This can cause a job to hang indefinitely while it waits
for all processes to call "init". By rule, if one process calls "init",
then ALL processes must call "init" prior to termination.
2. this process called "init", but exited without calling "finalize".
By rule, all processes that call "init" MUST call "finalize" prior to
exiting or it will be considered an "abnormal termination"
This may have caused other processes in the application to be
terminated by signals sent by mpirun (as reported here).
--------------------------------------------------------------------------
[compute-1-2:03445] 14 more processes have sent help message help-mpi-errors.txt / mpi_errors_are_fatal
[compute-1-2:03445] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages
```
I'm not sure what is happening; could you please advise? I am using mothur version 1.34.1.
Thank you very much.
I am an IT person at our research institute. I am trying to install mothur 1.36.1 on our cluster for one of our researchers, but during a test run using the summary.seqs command I always get the same errors posted here. I thought this problem had been resolved after version 1.34.2 and should no longer be an issue in 1.36.1. Can anyone help me fix this error? I built mothur 1.36.1 with gcc 4.9.2 and OpenMPI 1.8.4 on RHEL 6 with kernel 2.6.32-573.1.1.el6.x86_64. Thanks a lot
LSW
Here is the output from my log file:
```text
mothur > summary.seqs(fasta=test.fasta, count=test.count_table)
mothur > quit()
[c132:6625] *** An error occurred in MPI_Recv
[c132:6625] *** reported by process [1152909313,6]
[c132:6625] *** on communicator MPI_COMM_WORLD
[c132:6625] *** MPI_ERR_TRUNCATE: message truncated
[c132:6625] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
[c132:6625] *** and potentially your MPI job)
```

Our non-MPI version runs faster than the MPI version. In our next release we will be removing the MPI option, so I would recommend using the non-MPI version.
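For context on the error itself: MPI_ERR_TRUNCATE is raised when a receiver posts MPI_Recv with a buffer smaller than the message that actually arrives, and under the default MPI_ERRORS_ARE_FATAL handler the job aborts exactly as in the logs above. This is a sketch of the general failure mode in a hypothetical standalone program, not mothur's actual code:

```c
/* Minimal illustration of MPI_ERR_TRUNCATE: the sender transmits more
 * elements than the receiver's posted buffer can hold.
 * Build and run (assuming an MPI toolchain is installed):
 *   mpicc trunc.c -o trunc && mpirun -np 2 ./trunc
 */
#include <mpi.h>

int main(int argc, char **argv) {
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        int data[8] = {0};
        /* Rank 0 sends 8 ints to rank 1. */
        MPI_Send(data, 8, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        int buf[4];
        /* Rank 1 only expects 4 ints; the incoming 8-int message is
         * truncated, and with MPI_ERRORS_ARE_FATAL (the default on
         * MPI_COMM_WORLD) the job aborts with MPI_ERR_TRUNCATE. */
        MPI_Recv(buf, 4, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    }

    MPI_Finalize();
    return 0;
}
```

In mothur's case this indicates a send/receive count mismatch inside the summary.seqs MPI code path itself, which is not something users can fix from the command line; switching to the non-MPI build, as recommended above, avoids the code path entirely.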