[Web4lib] recommendations for web and catalog "visits" statistics solutions

Norwood, Randy randy.norwood at ttu.edu
Thu Apr 3 08:28:15 EDT 2008


For our web site, I use Google Analytics. It's free, easy to use, requires
no software to be installed, and produces a lot of useful and attractive
reports and charts. On the other hand, Google logs all user clicks and page
views (it does not use server access logs; it uses a JavaScript snippet
embedded in each page that sends statistical data to Google each time a page
is loaded). This could have privacy implications, depending on what type of
site and users you have, and on whether you trust Google.
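
For anyone who hasn't seen it, the tracking code is just a small page tag
pasted into each page (Google suggests just before the closing </body> tag).
A rough sketch of the ga.js-style snippet is below; the UA-XXXXXXX-X account
number is a placeholder for your own profile ID, and older installations may
still be using the earlier urchin.js version of the tag.

  <script type="text/javascript">
  // Load ga.js from Google, over https if the page itself is served over https
  var gaJsHost = (("https:" == document.location.protocol) ? "https://ssl." : "http://www.");
  document.write(unescape("%3Cscript src='" + gaJsHost + "google-analytics.com/ga.js' type='text/javascript'%3E%3C/script%3E"));
  </script>
  <script type="text/javascript">
  // Create a tracker for the account and record a pageview for this page
  var pageTracker = _gat._getTracker("UA-XXXXXXX-X");  // placeholder account number
  pageTracker._trackPageview();
  </script>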

With Google Analytics, you can filter out IP ranges. I use that to block it
from tracking usage from within the library. There are other filtering
options as well (domain, subdirectory, etc.).
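
If it helps, the in-library exclusion is set up as a profile filter in the
Analytics admin interface: an "Exclude" custom filter on the visitor IP
address, with a regular expression covering your building's public range. A
sketch using a hypothetical 10.10.10.0-10.10.10.255 block (substitute your
library's actual outbound addresses) would look something like:

  ^10\.10\.10\.([0-9]|[1-9][0-9]|1[0-9][0-9]|2[0-4][0-9]|25[0-5])$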

Our OPAC is a turnkey Innovative Millennium system, and it provides some
rudimentary stats, including the average and peak number of users for each
hour of the day, the number of searches, etc. It does not have an access log
that we could use for further analysis (at least not a log that is
accessible to us).

-- 
~~~~~~~~~~~~~~~~~~~~~~~~~~~
Randy Norwood
Programmer Analyst III
Texas Tech Law School Library
Office: 806-742-3990 x350
Web/Intranet Requests:
http://mytechlaw.law.ttu.edu/it/Lists/WebMyTechLaw%20Requests
Support: 806-742-3990 x318, computersupport.law at ttu.edu
E-mail: randy.norwood at ttu.edu


On 4/2/2008 7:12 PM, "Houghton-Jan, Sarah"
<sarah.houghton-jan at sjlibrary.org> wrote:

> Hi all,
> 
> I am asking for recommendations on two things:
> 1) software you are using or would recommend to capture accurate user visit
> numbers for your library's website
> 2) any method or vendor-provided built-in tool that you are using to capture
> accurate user visit numbers for your library's catalog.
> 
> I am doing a webcast for Infopeople, a California project for training
> libraries, on April 24th on recommended tools and best practices for
> capturing accurate numbers for the two categories above, specifically so that
> libraries have accurate numbers to provide for the Annual Public Library
> Survey.  The survey is adding a new statistics category this year that
> libraries will be asked to submit: virtual visits.  The definition provided
> in the survey is as follows:
> 
> Virtual visits include a user's request of the library web site or catalog
> from outside the library building regardless of the number of pages or
> elements viewed.  This statistic is the equivalent of a session for a
> library's website. Exclude virtual visits from within the library, from
> robot or spider crawls and from page reloads.
> 
> So, as near as I can figure, the tricky parts of the above definition are
> that they want all those exclusions (the "within the library" is
> particularly difficult for all but the most robust stats systems) and that
> they're looking for catalog "visits" (not # of searches, # of holds, etc.
> which is what the vendor provides us with).  There are a number of kludgey
> ways that one can try to guess the number of visits if the vendor doesn't
> provide that functionality.  Tell me what you've kludged!
> 
> But the biggest problem, and the wording of the definition here is key, is
> that they want "a user's request of the library web site or
> catalog...regardless of the number of pages or elements viewed."  Does this
> mean that if a user starts on the website, moves into the catalog, then
> leaves, it should count as one visit?  I don't think there's a way to
> do that since they are on two different servers, two different systems, two
> different databases (correct me if I'm wrong).
> 
> Any insight or recommendations you can offer are much appreciated.
> 
> Thanks!
> Sarah Houghton-Jan
> Digital Futures Manager, San Jose Public Library
> Author of LibrarianInBlack.net
> 
> 
> _______________________________________________
> Web4lib mailing list
> Web4lib at webjunction.org
> http://lists.webjunction.org/web4lib/
> 




