dist.seqs and MPI failing: ADIOI_Set_lock:: No locks...

Hi,

I’m running into the following problem when running dist.seqs over MPI on our cluster (file paths removed):

Job submission:

qrsh -pe smp 20 `which mpirun` -np 20 ~/../mpimothur "\"#dist.seqs(fasta=/.../test.fasta,cutoff=0.15)\""

mothur starts, writes the first line (0 0), then bombs, presumably when the other MPI processes attempt to write to the output file:

mothur > dist.seqs(fasta=/.../test.fasta,cutoff=0.15)
0 0
File locking failed in ADIOI_Set_lock(fd F,cmd F_SETLKW/7,type F_WRLCK/1,whence 0) with return value FFFFFFFF and errno 25.
- If the file system is NFS, you need to use NFS version 3, ensure that the lockd daemon is running on all the machines, and mount the directory with the 'noac' option (no attribute caching).
- If the file system is LUSTRE, ensure that the directory is mounted with the 'flock' option.
ADIOI_Set_lock:: No locks available
ADIOI_Set_lock:offset 0, length 8
--------------------------------------------------------------------------
MPI_ABORT was invoked on rank 0 in communicator MPI_COMM_WORLD 
with errorcode 1.

NOTE: invoking MPI_ABORT causes Open MPI to kill all MPI processes.
You may or may not see output from other processes, depending on
exactly when Open MPI kills them.
--------------------------------------------------------------------------
--------------------------------------------------------------------------
mpirun has exited due to process rank 0 with PID 24434 on
node godel2 exiting improperly. There are two reasons this could occur:

1. this process did not call "init" before exiting, but others in
the job did. This can cause a job to hang indefinitely while it waits
for all processes to call "init". By rule, if one process calls "init",
then ALL processes must call "init" prior to termination.

2. this process called "init", but exited without calling "finalize".
By rule, all processes that call "init" MUST call "finalize" prior to
exiting or it will be considered an "abnormal termination"

This may have caused other processes in the application to be
terminated by signals sent by mpirun (as reported here).
--------------------------------------------------------------------------

I’ve been in touch with our IT folks, who say the file storage and cluster setup should be fine. Is there something inherently wrong with dist.seqs over MPI? Running the same job without MPI (with a non-MPI binary) works fine.
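In case it helps narrow things down, here is a minimal MPI-IO test I could run on the same storage, independent of mothur. It's only a rough sketch: the file name, offsets, and process count are illustrative, and I'm assuming Open MPI with ROMIO as in the log above. If the filesystem's lock support is the problem, a shared-file write like this should hit the same ADIOI_Set_lock error:

/* mpi_io_lock_test.c - minimal MPI-IO shared-file write test (sketch).
 * Build: mpicc mpi_io_lock_test.c -o mpi_io_lock_test
 * Run:   mpirun -np 20 ./mpi_io_lock_test /path/on/shared/storage/lock_test.out
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Path is just a placeholder; point it at the same filesystem used
     * for the dist.seqs output. */
    char *path = (argc > 1) ? argv[1] : "lock_test.out";

    /* All ranks open the same output file collectively. */
    MPI_File fh;
    int rc = MPI_File_open(MPI_COMM_WORLD, path,
                           MPI_MODE_CREATE | MPI_MODE_WRONLY,
                           MPI_INFO_NULL, &fh);
    if (rc != MPI_SUCCESS) {
        fprintf(stderr, "rank %d: MPI_File_open failed\n", rank);
        MPI_Abort(MPI_COMM_WORLD, 1);
    }

    /* Each rank writes a short record at its own offset; on NFS without
     * working file locks, ROMIO's write path should fail the same way. */
    char buf[64];
    int len = snprintf(buf, sizeof(buf), "hello from rank %d\n", rank);
    MPI_File_write_at(fh, (MPI_Offset)rank * 64, buf, len, MPI_CHAR,
                      MPI_STATUS_IGNORE);

    MPI_File_close(&fh);
    MPI_Finalize();
    return 0;
}

If this dies with the same "No locks available" message, that would point at the NFS/lock setup rather than anything specific to dist.seqs.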

Thanks,
Chris