APG-L Archives



From: <>
Subject: Re: [APG] Evaluation of sources within software (Was APG Digest, Vol 4, Issue 371--program evaluation)
Date: Thu, 25 Jun 2009 10:36:29 -0500
References: <d4c.434daa77.37721c08@aol.com> <4A428F98.1030609@verizon.net> <02d501c9f51c$63188310$29498930$@net> <4A42D4A2.2070408@worldnet.att.net>
In-Reply-To: <4A42D4A2.2070408@worldnet.att.net>


Kathy wrote:
>Any measurement expert would tell you that ****ALL**** socially derived
>measures are to some degree subjective, no matter how "uniform" their
>definitions.

Kathy,

You are quite right, of course. All socially derived measures are to some
degree subjective, despite bases in uniform definitions. Life experiences
cause human beings to interpret things in different ways. Even so, defined
terms provide common grounds for communicating concerns and value judgments
in the course of our research and reporting.

In the rest of your message you put your finger on a critical need in our field --i.e.:
>Given that inter-rater reliability has yet to be
>demonstrated for any of the "measures" used in genealogy, the
>objectivity of any so-called "uniform definitions" remains to be
>demonstrated. In other words, where's the *evidence* that the use of the
>evidence-analysis process map results in consistent conclusions, or even
>a narrow range of conclusions??

We absolutely need more quantitative assessments of methods and theories. As
you know from your own academic background, these assessments exist
abundantly for most other research fields, ranging from theses and
dissertations to ongoing scholarship by academics whose institutional
employment enables (and expects) research of this type. By comparison,
quantitative studies of methods and theories are rare in a field such as
ours, where most professionals are self-employed, income is derived
primarily from client commissions, and advanced degrees in our specific
subject area, with the requisite theses and dissertations, are difficult
to obtain.

At present, the only tests of the comparative efficacy of various evaluation
standards seem to be those administered for credentialing. Outside of that
arena, in forums such as this one, we frequently debate the merits of
individual records, but--aside from an occasional mention of the
Genealogical Proof Standard or the "choose a number" option that still
exists in some software--we tend to shy away from discussions of the
theoretical framework that underpins our profession.

Now that we have at least three academic programs that explore genealogical
theory to some extent and encourage student papers on the subject, we can
begin to look forward to the kind of theoretical evidence you call for,
Kathy. Meanwhile, those students can't do their work in a vacuum. More
dialogue among professionals about how we use evaluation processes and our
comparative experiences with, say, the "choose a number" option in software
vis-à-vis the GPS and the research-process map, would not only help the
professionals on this list but also provide launch pads for studies of
genealogical theory.

Elizabeth

-----------------------------------------------------------
Elizabeth Shown Mills, CG, CGL, FASG
APG member, Tennessee



-----Original Message-----
From: [mailto:] On Behalf
Of Kathy
Sent: Wednesday, June 24, 2009 8:37 PM
To:
Subject: Re: [APG] Evaluation of sources within software (Was APG Digest,
Vol 4, Issue 371--program evaluation)

Any measurement expert would tell you that ****ALL**** socially derived
measures are to some degree subjective, no matter how "uniform" their
definitions. Given that inter-rater reliability has yet to be
demonstrated for any of the "measures" used in genealogy, the
objectivity of any so-called "uniform definitions" remains to be
demonstrated. In other words, where's the *evidence* that the use of the
evidence-analysis process map results in consistent conclusions, or even
a narrow range of conclusions??

Kathy
======================
Kathleen Lenerz, Ph.D.


