sub.sample for use with the classify.otu command?

Hello all
I am trying to normalize the number of sequences so that I am analyzing the same number for each sample. I ran the sub.sample command on my shared and group files with the size parameter, and that works great with the summary.single command (I can compare the various calculators across equivalent numbers of sequences for all of my samples). This is the command I used to normalize the number of sequences across the sample groups:

sub.sample(shared=3C.final.an.shared, groups=3CAuto-3CMax-3CPower, size=10512)
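For context, a summary.single call on the subsampled shared file might look like this (a sketch; the subsampled file name and the calculator list are illustrative, so use the output name that sub.sample actually reports on the console):

summary.single(shared=3C.final.an.0.03.subsample.shared, calc=sobs-chao-invsimpson)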

The problem I am having concerns OTU classification. Next, I would like to normalize the number of sequences for each sample for use with the classify.otu command, so that I can compare the relative abundance of various taxa across equivalent numbers of sequences.

This is the command I used for classification before the number of sequences for each group was normalized:

classify.otu(list=3C.final.an.list, name=3C.final.names, group=3C.final.groups, taxonomy=3C.final.taxonomy, basis=sequence, cutoff=80, label=0.03)

How would I modify this command (or what other files do I need to generate) so that I get a classification for equivalent numbers of sequences for each group, rather than for the original number of sequences? I assumed it would be necessary to normalize for classification as well, since more sequences = more OTUs.

Thanks
AO

How would I modify this command (or what other files do I need to generate) so that I get a classification for equivalent numbers of sequences for each group, rather than for the original number of sequences? I assumed it would be necessary to normalize for classification as well, since more sequences = more OTUs.

I don’t think this is actually necessary, since the relative abundances shouldn’t really change much. But you could run sub.sample on the list file with the list, group, and name file options, along the same lines as this fasta-based example:

sub.sample(fasta=esophagus.unique.fasta, name=esophagus.names, group=esophagus.groups)
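Adapted to the file names from the question, the call might look like this (a sketch; it assumes your mothur version supports the list/name/group combination for sub.sample, with the size and label values carried over from the commands above):

sub.sample(list=3C.final.an.list, name=3C.final.names, group=3C.final.groups, label=0.03, size=10512)

You would then rerun classify.otu on the subsampled list, name, and group files that sub.sample writes out; the exact output file names are reported on the mothur console.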

Hope this helps…
Pat