cluster.split making TB of temp files

I have a large data set (234 samples), and when I run cluster.split, the program makes temporary distance files of 800 GB or more each, which is causing it to run out of hard-drive space.

My questions:

  1. Is this normal?
  2. Is this the result of too many sequences involved?
  3. What are strategies to reduce the filesize (and presumably run time)?

command:
cluster.split(fasta=current, name=current, taxonomy=current, splitmethod=classify)

What kind of data do you have? You need more RAM than your largest .dist file. You may need to increase your pre.cluster diffs or the cluster.split taxlevel. I’ve clustered that many samples (v4 region, MiSeq 2x250, very diverse soils and waters) with diffs=3.
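For concreteness, a minimal sketch of those two knobs; the diffs and taxlevel values here are illustrative, not a recommendation for your data:

pre.cluster(fasta=current, name=current, group=current, diffs=3)
cluster.split(fasta=current, name=current, taxonomy=current, splitmethod=classify, taxlevel=4)

More diffs in pre.cluster collapses more near-identical reads into fewer uniques, and a higher taxlevel splits the data into smaller taxonomic bins, so both shrink the per-bin distance matrices.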

It is MiSeq 2x250.

After pre.clustering at diffs=2, there were >1 million unique sequences. I'm running again with diffs=3 to see what happens.

Thanks!

What region did you sequence?

http://blog.mothur.org/2014/09/11/Why-such-a-large-distance-matrix/
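One other lever worth mentioning, assuming you haven't set it already: cluster.split accepts a cutoff parameter that limits which pairwise distances are written to the temp .dist files, so a tighter cutoff shrinks them directly. A hedged example (the right value depends on your mothur version and clustering method):

cluster.split(fasta=current, name=current, taxonomy=current, splitmethod=classify, taxlevel=4, cutoff=0.03)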