From: Kathy <>
Subject: Re: [APG] Evaluation of sources within software (Was APG Digest, Vol 4, Issue 371--program evaluation)
Date: Wed, 24 Jun 2009 20:36:34 -0500
References: <firstname.lastname@example.org> <4A428F98.email@example.com> <02d501c9f51c$63188310$29498930$@net>
> In evaluating your sources by a number-ranking scheme you preface your
> statement by saying " ***I*** would rate the sources as ... ." This
> strongly implies that an evidence evaluation by a number scheme involves
> subjective choices to be made by each individual, with the result being a
> range of conclusions within any given pool of evaluators --- as opposed to,
> say, an evaluation using the evidence-analysis process map whose discrete
> measures have a uniform definition regardless of user.
Any measurement expert would tell you that ****ALL**** socially derived
measures are to some degree subjective, no matter how "uniform" their
definitions. Inter-rater reliability has yet to be demonstrated for any
of the "measures" used in genealogy, so the objectivity of any so-called
"uniform definitions" remains an open question. In other words, where is
the *evidence* that use of the evidence-analysis process map yields
consistent conclusions, or even a narrow range of conclusions?
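As an aside, demonstrating inter-rater reliability would mean collecting ratings of the same sources from several evaluators and computing an agreement statistic. Cohen's kappa is one standard such statistic for two raters; the sketch below (in Python, with entirely hypothetical ratings invented for illustration) shows what that computation looks like:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: agreement between two raters, corrected for chance."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: fraction of items both raters scored identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: from each rater's marginal category frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical example: two genealogists each rate ten sources on a 1-5 scale.
rater_1 = [5, 4, 4, 3, 5, 2, 4, 3, 5, 4]
rater_2 = [5, 4, 3, 3, 5, 2, 4, 2, 4, 4]
print(round(cohens_kappa(rater_1, rater_2), 2))  # prints 0.58
```

A kappa near 1 would indicate the narrow range of conclusions asked about above; values near 0 would indicate agreement no better than chance. To my knowledge, no such study has been published for genealogical rating schemes.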
Kathleen Lenerz, Ph.D.