
Re: Documentation Metrics



Antti Hietala wrote:
> 
> On Mon, 16 Oct 2000, Sandy Harris wrote:

> > There's a whole literature on readability indexes based on statistical
> > analysis of things like words per sentence, letters per word. Some of
> 
> Yes, we could create readability indexes fairly easily. Calculation of
> word frequencies and their statistical analysis is indeed one possible
> approach. And there are plenty of tools for doing that. However, I feel
> there may be a little confusion here; correct me if I am wrong. I don't
> think it is 'readability' as such that we are looking for.

I very much doubt any statistical measure will be "what we are looking for"
in evaluating a text. I like the idea, though, of having whatever objective
measurements of text properties we can reasonably produce. Some will no
doubt turn out to be irrelevant, but others might be quite revealing.
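
For instance, the classic Flesch reading-ease score needs nothing more than
words per sentence and syllables per word. Here is a rough sketch in Python
(the syllable counter is just a vowel-group heuristic, so treat the result
as a comparative statistic rather than gospel):

import re

def flesch_reading_ease(text):
    # Crude sentence and word splits; good enough for a first pass.
    sentences = [s for s in re.split(r'[.!?]+', text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    if not sentences or not words:
        return None

    def syllables(word):
        # Count runs of vowels as syllables; every word gets at least one.
        return max(1, len(re.findall(r'[aeiouy]+', word.lower())))

    words_per_sentence = len(words) / len(sentences)
    syllables_per_word = sum(syllables(w) for w in words) / len(words)
    # Standard Flesch reading-ease formula.
    return 206.835 - 1.015 * words_per_sentence - 84.6 * syllables_per_word

Run that over a HOWTO and you get one number per document; whether that
number actually tells us anything useful is exactly the open question.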

> We are looking for 'text accessibility'.
> 
> By text accessibility I mean that which makes texts easy or difficult to
> understand. An accessible text is one which is easy to understand; an
> inaccessible one - one which is difficult. Now, this may sound like a
> self-evident distinction but is actually a major turning point in
> contemporary linguistic research. Readability formulae as objective,
> statistical methods have proven their inadequacy in answering the basic
> question: why is one text difficult to understand while another one is
> easy?

So what of the other things I suggested we might measure, beyond the ones
used in traditional readability formulae?
 
> And the answer lies herein: readability formulae never take the reader
> into account. They examine the text in a vacuum, as if there were no life
> outside it, whereas text accessibility focuses on the reading process as
> an interaction between the text and the reader. The reader's background
> and subject knowledge are crucial factors.

Perhaps we can take that into account. For example, classify our docs with
some of David's metrics -- beginner/intermediate/advanced or user/admin/coder
-- and measure each group separately. Then if my HOWTO for beginner admins
has statistics typical of things written for advanced coders, that is worth
flagging.
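
Something along those lines could be automated, e.g. (a sketch only -- the
baseline figures below are invented placeholders; real ones would have to
come from measuring a corpus of existing docs in each group):

# Per-audience baseline statistics. The numbers are made up purely for
# illustration, not measured from real documentation.
BASELINES = {
    'beginner':     {'words_per_sentence': 15, 'long_word_ratio': 0.10},
    'intermediate': {'words_per_sentence': 20, 'long_word_ratio': 0.18},
    'advanced':     {'words_per_sentence': 25, 'long_word_ratio': 0.28},
}

def audience_mismatch(stats, declared_audience, tolerance=0.25):
    # Return the metrics where a document deviates from its declared
    # audience's baseline by more than `tolerance` (fraction of baseline).
    baseline = BASELINES[declared_audience]
    return {
        metric: (stats[metric], expected)
        for metric, expected in baseline.items()
        if abs(stats[metric] - expected) > tolerance * expected
    }

A non-empty result would not prove a document is bad, but it would be a
reasonable prompt for a reviewer to take a closer look.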

> That is why David is absolutely
> correct when he starts his metrics with such criteria as 'Audience Type'
> and 'Audience Technical Sophistication' and includes macro level
> structures like 'Style'. In fact, I couldn't be happier to see those on
> the list :)

No question. I'm not attacking anything David has suggested, just trying
to extend it. It is clear that we need author-assigned labels such as target
audience, though there is always the possibility that the author is wrong:
writing for intermediate/advanced admins, say, when every beginner is putting
the program on a home system. We also want judgements from users or reviewers
on things like clarity and quality of writing. David has, I think, a pretty
good list of these.

My question is whether some objective measurements would be a good addition.
 
> Please, don't misunderstand me. I don't mean to depreciate the valuable
> work done by readability analysts. On the contrary, once we acknowledge
> what questions readability formulae can and cannot answer, we know where
> they leave us short. Then we can expand the metrics exactly like David
> did.
> 
> My two euro cents worth... :)
> 
> Regards,
> 
> Antti
> 
> --
> Antti Hietala
> Technical Writer
> antti.hietala@iki.fi

