I am wondering whether there is a specific paper or collection of papers being referred to in the classify.seqs cutoff section, which states: “The current thinking seems to be to use a minimum cutoff of 60%. Mothur’s default is set to a value of 80%.” If so, could you point me towards them? I am having trouble understanding how these cutoffs are determined and what makes the most sense for my data.
**Edit:** The Mizrahi-Man et al. (2013) paper seems to cover this; however, I am realizing my lack of understanding lies in why one would choose an 80% threshold over 90% or 95%. Is it because a higher percentage is more precise than what we can realistically classify?
Thank you in advance!