[WEB4LIB] RE: Reporting Web usage: what numbers do you use?

K.G. Schneider kgs at bluehighways.com
Wed Oct 27 11:50:03 EDT 2004


> I had hoped that there would be an agreed standard for usage data (as
> opposed to a single package) - however this seems not to be the case.

Yes... there are times when I get wistful for authoritarian forms of
government. ;) Now, if *I* were queen...! 

In terms of standards for Web site usage tracking, it intrigues me that so
much emphasis is put on the Holy Grail of "visits," when this seems to me a
much more variable and elusive quantity than "hits," and does not translate
into anything I can rely on with any integrity. As a Web site
manager, I feel much more certain about measuring the thumps on the door
than I do about measuring who's standing out on my patio. It's true that
hits don't translate into actual visits (or, God help me, "outcomes," to
trot out the latest jargon of the managerati). But hits do give me an idea
of usage over time, and hits are much less subject to vagary than
measurements that rely on things out of my control, such as how many IP
addresses are stuffed behind a proxy server. 
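
To see why "visits" are so slippery, consider how a log analyzer typically
infers them: it has nothing but IP addresses and timestamps, so it declares
a new visit whenever an IP goes quiet for some arbitrary interval. Here is a
minimal sketch in Python, assuming Common Log Format and the conventional
30-minute idle cutoff; both the cutoff and the field layout are assumptions
on my part, not anything mandated by a standard:

    import re
    from datetime import datetime, timedelta

    LOG_LINE = re.compile(r'^(\S+) \S+ \S+ \[([^\]]+)\]')
    IDLE_CUTOFF = timedelta(minutes=30)   # conventional, not standardized

    def count_visits(log_path):
        last_seen = {}   # IP -> timestamp of that IP's most recent request
        visits = 0
        with open(log_path) as log:
            for line in log:
                m = LOG_LINE.match(line)
                if not m:
                    continue
                ip, stamp = m.group(1), m.group(2)
                # stamp looks like "27/Oct/2004:11:50:03 -0400"; drop the zone
                when = datetime.strptime(stamp.split()[0],
                                         '%d/%b/%Y:%H:%M:%S')
                if ip not in last_seen or when - last_seen[ip] > IDLE_CUTOFF:
                    visits += 1   # first sighting or a long silence: new "visit"
                last_seen[ip] = when
        return visits

Under this scheme every browser behind one library proxy collapses into a
single "visitor," and one patron on a dial-up pool can look like several,
which is exactly the vagary I'm complaining about.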

Someone early on in this discussion pointed me to the INCOLSA
recommendations for databases, which are intriguing because they emphasize
"user sessions" while admitting that these are often chimerical.  If these
are somewhat chimerical for databases designed to track user sessions, how
much more chimerical they are for Web sites.  What really interests me is
that INCOLSA, in justifying these measurements, says they are necessary "in
order to satisfy reporting requirements of government agencies and
professional organizations." But the government agencies and professional
organizations only count what we as a profession tell them to count. Who
says user sessions are important, and why? I'm not saying they don't have
potential value; I'm just wondering who made this gospel, where it is
written down, and why other measurements are not discussed with equal
seriousness. 

There's always an element of bogosity with Web stats, and that's true with
most measurements. I know of one major library where the techies refused to
log Web activity for a long time because of this, and I think that was very
wrong (not to mention an example of Techies Gone Bad). Yes, hits include
spiders and bots, and our statistical software compensates for that
somewhat.  But the key is to follow the bouncing ball across time and not
dwell on the smaller confounding factors. What we can do is acknowledge the
fudge factor in statistics and yet ask ourselves what meaningful common
denominators we can use for comparison.  
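
For what it's worth, "compensates for that somewhat" usually means nothing
fancier than a User-Agent blacklist plus bucketing by month. A rough sketch,
again in Python against combined-format logs; the bot list here is a tiny
made-up sample for illustration, not the list any real package ships:

    import re
    from collections import Counter

    LOG_LINE = re.compile(r'\[\d{2}/(\w{3})/(\d{4}):[^\]]+\].*"([^"]*)"$')
    BOT_MARKERS = ('googlebot', 'slurp', 'msnbot', 'spider', 'crawler')

    def monthly_hits(log_path):
        hits = Counter()
        with open(log_path) as log:
            for line in log:
                m = LOG_LINE.search(line.rstrip())
                if not m:
                    continue
                month, year, user_agent = m.groups()
                if any(marker in user_agent.lower() for marker in BOT_MARKERS):
                    continue      # skip the spiders and bots we can name
                hits[f'{year}-{month}'] += 1
        return hits   # e.g. Counter({'2004-Oct': ...})

The absolute numbers still carry the fudge factor, since no blacklist
catches every bot, but the month-to-month trend is the bouncing ball worth
following.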

Karen G. Schneider
kgs at bluehighways.com