Mothur vs dada2

Sorry in advance for starting a new topic, but I couldn't find anything relevant using the search function. I would like to share a personal experience and also ask the experts whether my intuition is right about a particular problem I am facing (which will hopefully also help others in the future).

I have been analyzing 16S MiSeq data from ca. 50 cow fecal samples. I am new to the lab and only arrived a few months ago, so the samples were sequenced and the data analyzed long before I arrived. The sequencing was done locally in a facility that favors a partial-overlap strategy: the amplicon is approx. 300 bp long, so the forward and reverse reads do not fully overlap; instead they overlap over 200 bp in the middle, leaving 50 bp of single reads on each side. The people in my current lab did the analysis using dada2 and gave me the analyzed data as well as the original fastq files. Stubborn as I am, I redid the analysis from scratch using mothur and produced a 97%-OTU table (I kept only the 200 bp that fully overlap and trimmed the 50 bp of single reads on each side), and now here starts the funny part:
Using mothur
All samples have a sequencing depth from 8,200-25,000 with a median of 11,581, and all samples seem perfectly usable in terms of sequencing depth.
The total number of OTUs is approx. 18,000; if I remove singletons (which I don't), 7,975.
If I rarefy at 8,000 and do a Bray-Curtis ordination, I see very nice clustering according to treatment (as I would expect, because the animals the samples came from were infected/non-infected with a gastrointestinal pathogen).

Using dada2
Samples have a sequencing depth from 60-20,000 with a median of 3,700, and ten of the samples have less than 500 sequencing depth (three of those ten <90, and most are <200).
The total number of ASVs is 249! If I remove singletons, 244.
Rarefaction at 1,200 (or the unrarefied data) produces an ordination plot that is entirely different from the one produced by mothur, and there is absolutely no apparent clustering according to any variable (treatment, animal, etc.).

Here is what I think:
I mentioned in the beginning that I have cow fecal samples not with the purpose of making people nauseous, but to point out that I would expect a massively complex microbiome. Also, considering that I have 50 samples, 2 different treatments, and 10 animals, I would expect a decent number of OTUs/ASVs. This seems to be the case with mothur but not with dada2, which claims that the total number of species is 249!
I don't think there is something wrong with dada2 itself. Rather, I think the combination of the sequencing strategy (50 bp overhang on each side) and dada2's aggressive filtering algorithm has removed A LOT of sequences, because this sequencing strategy creates reads with many sequencing errors (I think the quality drops a lot after a point in the single-read portions), and dada2 removes them.

I posted the whole story here because I'd like to hear if there is something obvious I might be missing, and I would actually be happy to share some of these data confidentially (confidentially because they are unpublished) if someone wants to try to reproduce this. I know I am a mothur fanboy and clearly biased, but I would like to use these data (if possible) and therefore want to figure out the best strategy, so I welcome any comments, criticism, and suggestions.

Happy Wednesday

Interesting. I've run my workshop dataset (16 soil, sediment, and fresh water samples) through mothur and dada2. I get a drastically different number of OTUs, but (I think) roughly the same number of sequences retained. The NMDS looks very similar, but that could be because my samples are so different from each other.

What's the sequencing strategy in your samples? Partial or full overlap of the two reads?

515F/806R, so near-complete overlap.

OK, that makes me think it is related to the overlap, because in an older study where we used 515F/806R we got fairly similar results (with the exception of the number of OTUs/ASVs, of course).
I'd love to hear @pschloss's opinion on this!

Hmmmm. Several thoughts…

  1. When I’ve played with dada2 it’s been hard/impossible to turn off removal of singletons. If your Good’s coverage on these samples (calculated from your mothur data) is low, then I suspect one reason for the difference is that dada2 is still removing a lot of singletons.
  2. With your dada2 data, are you removing the first/last 50 nt of each sequence? I suspect this would make their sequences look even more similar, but it would provide more of an apples-to-apples comparison to your mothur data.
  3. Sticking to the apples-to-apples theme, ASVs in mothur would come from output of pre.cluster, not the output of one of the cluster functions. Also, dada2 and similar approaches remove singletons at the sequence, not OTU level. FWIW, in my analysis of mock community data, if you don’t remove singletons, the error rate from pre.cluster is just as good/better than that of dada2.
  4. dada2 has a big tuning knob that is used to determine whether to split or merge ASVs (omega?). As far as I can tell, there’s no rational basis for the selection of the value. Depending on the size of the value you can get tons or few ASVs. I think dada2 also has a setting that trims the sequences before running them through the analysis. In my benchmarking, trimming had pretty horrible effects and would likely result in the loss of reads like you are seeing.
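For what it's worth, that tuning knob is exposed in dada2 as the `OMEGA_A` option. A minimal sketch of inspecting and changing it (the specific values below are only illustrations of the direction of the effect):

```r
library(dada2)

# OMEGA_A is the p-value threshold for splitting a new partition (ASV)
# off an existing one during the divisive partitioning step.
getDadaOpt("OMEGA_A")        # 1e-40 by default

setDadaOpt(OMEGA_A = 1e-20)  # less strict: tends to yield more ASVs
setDadaOpt(OMEGA_A = 1e-60)  # more strict: tends to yield fewer ASVs
```

Any `dada()` call made after `setDadaOpt()` will use the new value.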

Let me know what you find. I’m always interested in these types of weird results :slight_smile:


I've worked a bit more on this and will make a detailed post about what I found tomorrow or Friday!


I was troubled by the differences I saw, so I tried to pinpoint the exact parameters that account for the loss of data and the different clustering between mothur and dada2.

I picked 20 samples from the dataset and reran the analyses with mothur and dada2. To compare results between the two pipelines I used the beta-diversity ordination, because mothur gives me a clear separation between my two treatment groups (black vs red) which I could NOT see in an old dada2 analysis (done 1-2 years ago by a former lab member).

First of all, I could not reproduce the results from that old dada2 analysis. I asked the people in the lab who gave me the script and the fastq files, but even though I ran the exact same script on the exact same samples, I get different results from what they got in that old analysis. I tried several times with two different versions of dada2 (1.12 and 1.14), and what I get is (more or less) consistent across my analyses (with reasonable differences) but very different from the results they obtained. In addition, I never see the huge data loss I described in the previous comment (see top comment). For that I have no explanation other than that somebody (back then) scr**** something up and/or used a different script than the one they uploaded on git…

That being said, I will move on with the results that are reproducible. Using the dada2 default settings but trimming the left and right of the sequence (exactly the same way I trimmed with mothur) produces an ordination that somewhat resembles the mothur ordination, but not entirely (are these reasonable differences? I don't know…). If I try to get some numbers out of it: treatment differences using Bray-Curtis are significant with the mothur OTU table (PERMANOVA p < 1e-04), but with the dada2 table they are not significant (PERMANOVA), even though close to significance.
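For context, this kind of left/right trimming is usually done in dada2's `filterAndTrim()` before denoising. A sketch, assuming the 50 bp single-read overhangs described at the top of the thread (the file names are placeholders, not the actual data):

```r
library(dada2)

# trimLeft = c(50, 50) drops the first 50 nt of the forward and of the
# reverse reads, i.e. the two single-read overhangs of the amplicon,
# keeping only the fully overlapping region for merging.
out <- filterAndTrim(fwd = "sampleA_R1.fastq.gz", filt = "filt/sampleA_R1.fastq.gz",
                     rev = "sampleA_R2.fastq.gz", filt.rev = "filt/sampleA_R2.fastq.gz",
                     trimLeft = c(50, 50), maxEE = c(2, 2),
                     truncQ = 2, multithread = TRUE)
```

The filtered files in `filt/` then feed into `learnErrors()` and `dada()` as in the code blocks below.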

Small differences in the dada commands, incl. the use of derepFastq, seem to make very little difference. Example:

code 1:

errF <- learnErrors(filtFs, multithread=TRUE)

errR <- learnErrors(filtRs, multithread=TRUE)

derepFs <- derepFastq(filtFs, verbose=TRUE)

derepRs <- derepFastq(filtRs, verbose=TRUE)

dadaFs <- dada(derepFs, err=errF, multithread=TRUE, pool=dadapool, selfConsist=FALSE)

dadaRs <- dada(derepRs, err=errR, multithread=TRUE, pool=dadapool, selfConsist=FALSE)

mergers <- mergePairs(dadaFs, derepFs, dadaRs, derepRs, verbose=FALSE)

code 2 (tutorial):

errF <- learnErrors(filtFs, multithread=TRUE)

errR <- learnErrors(filtRs, multithread=TRUE)

dadaFs <- dada(filtFs, err=errF, multithread=TRUE)

dadaRs <- dada(filtRs, err=errR, multithread=TRUE)

mergers <- mergePairs(dadaFs, filtFs, dadaRs, filtRs, verbose=TRUE)

The results from code 1 and code 2 (tutorial) [ordination images not shown] are almost the same.

Different trimming settings before running dada, however, do seem to have a bigger effect: full and partial overlap (240 bp sequence length) vs. only fully overlapping sequences, trimmed left and right (208 bp sequence length) [ordination images not shown].

Changing the OMEGA values also seems to make some difference (using the 208 bp, fully overlapping sequences): the default omega values (OMEGA_A and OMEGA_C both 1e-40), an OMEGA_A value of 1e-20 (less strict), and an OMEGA_A value of 1e-60 (more strict) [ordination images not shown]. (Now this may be my impression, but I think the last one starts to resemble the mess I saw in the original (old) analysis I mentioned before.)

Finally, I did one last run using OMEGA_A 1e-20 and OMEGA_C 1e-20, but also with the «detect singletons» option. This created a much bigger feature table (ca. 18,000 ASVs), but even though the ordination is different, it still does not resemble the mothur result and the clear separation between treatments…
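In code, that run corresponds roughly to the following (a sketch: `DETECT_SINGLETONS` is only available in recent dada2 versions, and `errF`/`errR`/`derepFs`/`derepRs` are as in "code 1" above):

```r
library(dada2)

# Relax both partitioning thresholds and allow singleton variants
# to be reported as ASVs.
setDadaOpt(OMEGA_A = 1e-20, OMEGA_C = 1e-20, DETECT_SINGLETONS = TRUE)

dadaFs <- dada(derepFs, err = errF, multithread = TRUE)
dadaRs <- dada(derepRs, err = errR, multithread = TRUE)
```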

Here is another interesting difference that has made me a bit skittish… As I mentioned before, I am using 20 samples for the comparisons. These 20 samples originate from a much bigger dataset of 550 samples. All the above analyses were performed on a much smaller dataset (I used 50 fecal samples and then extracted the 20 samples I presented here). However, I decided to repeat the dada2 analysis on the entire dataset (550 samples), produce a massive OTU/ASV table, and then extract the 20 samples of interest and do the ordination. For that comparison I used the settings of the old lab analysis (sequence length 240 bp, code 1). Analyzing the small dataset of 50 samples and then extracting the 20 samples of interest gives one Bray-Curtis ordination; running dada with the exact same settings on all 550 samples, producing an OTU table, and then extracting the 20 samples of interest gives… voilà, a very different one [ordination images not shown]. Now this is pretty d*** different (even more different than the mothur/dada difference, in my opinion). Of course, I have a theory for this, which is that the old lab script used that «pool» option when running the dada command, which pools all samples together for the analysis instead of running each one individually.
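That pool theory is easy to test, since pooling is just a parameter of `dada()`. A sketch of the two modes:

```r
library(dada2)

# Independent inference: each sample's ASVs depend only on that
# sample's own reads, so subsetting 20 of 550 samples changes nothing.
dadaFs_indep <- dada(derepFs, err = errF, pool = FALSE, multithread = TRUE)

# Pooled inference: all samples are processed jointly, so which samples
# are included (20 vs 550) can change which rare variants are inferred.
dadaFs_pool <- dada(derepFs, err = errF, pool = TRUE, multithread = TRUE)
```

There is also `pool = "pseudo"` as a middle ground, where per-sample inference is informed by priors learned across samples.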

I will do a similar comparison with mothur (running the analysis now), because as far as I understand, mothur does something similar, i.e. all samples are pooled and analyzed together, which may result in changes between analyses… However, big differences like the ones I just saw with dada seem extreme to me.

Anyway, having lost 3 days of my life on this, I have decided to make peace with the idea that I cannot reproduce the exact same pattern with two different algorithms, because I am simply using two different algorithms…

That being said, in my dataset, pooling rare variants together and predicting the microbial communities with dada makes my expected biological differences between treatments disappear. Of course, this may be because dada is not ideal for detecting and projecting these differences, or it may be because there are no differences, and if I try to publish this I might very well meet some mean reviewer (true story, has happened before) who will tell me «oh, you should try to see if there is a significant difference between treatments using dada…, because mothur creates all these fake rare variants that do not really exist…» :frowning: :face_with_symbols_over_mouth: :anger:

Anyway, sorry for the negative mood, just venting.

Happy Thursday everyone


Just in case somebody wants to know: using mothur and comparing the output of an analysis run on only the 20 samples vs an analysis run on 500 samples (then extracting the 20 samples from the big OTU table) results in some changes in the ordination, but not huge differences! Contrasts between treatments are still significant, and the separation can still be seen.


