GENEALOGY-DNA-L Archives > GENEALOGY-DNA > 2002-11 > 1037225160
From: "John F. Chandler" <>
Subject: Re: [DNA] DNA as "junk" science
Date: Wed, 13 Nov 2002 17:06 EST
In-Reply-To: message <009401c28b56$ed8aa550$19799e18@huffaker> of Wed, 13 Nov 2002 13:54:57 -0700
> I can accept negative results if I have confidence in
> methodology, but the testing company has given me no basis for confidence
> and basically just ignores my questions or gives irrelevant answers.
There are at least three issues here: (1) the general reliability of
STR testing methods, (2) the particular reliability of your testing
company, and (3) customer relations at your testing company.
1. General reliability -- difficult to assess, but there are some reports
available of independent measurements of the same test subjects. The
best example is the one in Wilson et al. in Proc Nat'l Acad Sci, which
mentioned blind retesting of 876 haplotypes and finding 6 that differed.
They called this a "0.7%" error rate, but that is an upper bound based
on the assumption that all the errors occurred in the same round. A more
likely scenario would be 0.3% error rate in the original tests and 0.3%
in the retests. Note, however, that these were tests of a suite of
six loci for each haplotype, so the effective error rate extrapolated
to other cases would be 0.057% times the number of loci. For a 23- or
25-locus test, that comes to about 1.4%. In a three-lab comparison
that the Edmund Rice Association carried out last year, we obtained 3
independent test results for ONE person, but that was before a big
recalibration effort at one of the labs. It has proved next to
impossible to get any kind of retroactive application of the
recalibration to the older results, so the differences that were
observed in that one lab's results last year can now only be viewed as
PROBABLY due to bad calibration. The few recent reports of two- or
three-lab comparisons (always of just one person at a time) have been
"all ok". There is another comparison, though, that gets around the
problem of calibration, which you can see on the Rice DNA project
web page -- that involves multiple retests of persons who are closely
related and therefore nearly the same haplotype. In that comparison,
we found 2 errors in 42 pairs of single-locus measurements. We believe
(but without much objective evidence) that both errors occurred in the
original round of testing.
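For what it's worth, the arithmetic above can be reproduced in a few lines. This is a sketch of my own reasoning from the numbers quoted, not anything taken from the Wilson et al. paper itself:

```python
# Re-derivation of the error rates quoted above (my own arithmetic).

retested = 876          # haplotypes blindly retested (Wilson et al.)
differed = 6            # haplotypes that disagreed on retest
loci_per_haplotype = 6  # each haplotype was a suite of six loci

# 6/876 is an upper bound: it assumes all errors fell in one round.
upper_bound = differed / retested              # ~0.68%, quoted as "0.7%"

# More likely, errors split evenly between originals and retests.
per_round = upper_bound / 2                    # ~0.34%, quoted as "0.3%"

# Per-locus rate, since each haplotype comprised six loci.
per_locus = per_round / loci_per_haplotype     # ~0.057%

# Extrapolation to a larger panel, per round of testing:
for n_loci in (23, 25):
    print(f"{n_loci} loci: ~{per_locus * n_loci:.2%} per round")
```

Running that prints roughly 1.3% for 23 loci and 1.4% for 25, matching the figure above.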
2. Particular reliability. If you suspect that your lab is less
reliable than you'd like, you are free to switch to another. Retesting
six or seven people would be cheaper than the type of multiple tests
that would be needed to give a significant statistical measure of
repeatability. True, retesting only seven samples would provide only
anecdotal evidence, but I'm going to assume there are budget limitations
in here somewhere.
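To see why seven retests are anecdotal rather than statistical, here is a back-of-envelope sketch (my own arithmetic, using the ~0.057% per-locus figure from point 1 and assuming a 25-locus test):

```python
# Why a seven-person retest can't measure repeatability.
# Assumes the ~0.057% per-locus error rate from point 1, doubled because
# a discrepancy can arise in either the original test or the retest.

people = 7
loci = 25                      # assuming a 25-locus panel
p_discrepancy = 2 * 0.00057    # per-locus chance the two results disagree

comparisons = people * loci
expected_discrepancies = comparisons * p_discrepancy   # ~0.2

# Chance of seeing no discrepancy at all, even with both labs performing
# exactly at the nominal error rate:
p_clean = (1 - p_discrepancy) ** comparisons           # ~82%
print(f"expected discrepancies: {expected_discrepancies:.2f}")
print(f"chance of an all-clean result: {p_clean:.0%}")
```

In other words, the most likely outcome of such a retest is zero discrepancies even if nothing is wrong, so a clean result says little and a dirty one says only that something, somewhere, slipped.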
3. Customer relations. Part of the problem, I suspect, is that the
questions you are asking are based on the assumption that you are
talking to scientists. In fact, the mass market is served by
technicians operating a machine that does most of the work automatically.
They probably don't give any thought to the standard deviation of the
measurements, and they may not be able to give you the kind of detailed
answers you want, but the whole premise that lets the business exist in
the first place is that your questions shouldn't need asking: the
automated process is going to work adequately every time, provided that
no operator errors get in the way. To put it another way: if you
don't trust them, you should just take your business elsewhere.