[Web4lib] Google Allows Downloads of out-of-copyright Books

Jim Campbell campbell at virginia.edu
Tue Sep 5 11:30:37 EDT 2006


Dunno, Karen.  They do specifically ask for reports on scanning problems,
which suggests they do intend to do something, though they commit only to
"We appreciate your help and will do our best to resolve the issue."
http://books.google.com/support/bin/answer.py?answer=43735&topic=9259

I suspect that Google has two timetables: very fast for initial scanning
and OCR, and much slower for individual problems.  Back in the first few
weeks it was up, I reported a couple of viewing problems. They have since
been fixed, but it took at least 3-4 months, maybe more (after a while I
stopped checking with any regularity).

Also, at least for now, I don't think they're worrying about either
duplication or getting another copy of a problem book; they're just pulling
stuff off the shelves and feeding it into the machines.  Just yesterday I
saw two copies of the same book, same edition, both from Stanford - one of
them marked copy 2.  So the Ruskin may yet turn up.  Indeed, the text is
already there in the beautiful Library Edition of Ruskin's Works.  Of
course, if you don't like to read online, you can also pick up a copy of
the Library Edition at Alibris for either $9,499 or $17,499.

- Jim Campbell
Campbell at Virginia.edu

> -----Original Message-----
> From: web4lib-bounces at webjunction.org 
> [mailto:web4lib-bounces at webjunction.org] On Behalf Of Karen Coyle
> Sent: Tuesday, September 05, 2006 10:36 AM
> To: campbell at virginia.edu
> Cc: web4lib at webjunction.org
> Subject: Re: [Web4lib] Google Allows Downloads of 
> out-of-copyright Books
> 
> Jim, I sent in a comment about a particularly egregious 
> problem -- two books that are intermixed, sometimes a page 
> from one, sometimes a page from another. (FYI - Ruskin's 
> Stones of Venice and a guidebook to Sweden). I checked a few 
> weeks later and the problem is still there. 
> What I think is significant about this one is that it is a 
> coding problem, not a scanning problem. If a page is missing 
> from a scanned book, I highly doubt that they will pull that 
> book again and re-scan it. 
> However, it would be good to tag that copy as "incomplete" in 
> the hopes that the book will also be scanned from another 
> collection, this time correctly.
> 
> kc
> 
> Jim Campbell wrote:
> > Note that Google does in fact have a feedback form and specifically
> > asks for comments on accuracy. I've sent in comments on metadata,
> > full view availability, and bad scans. You get an automated
> > response, but sometimes you also get a personal response to say the
> > message has been sent on. So far that's been true only of metadata
> > comments; I'm hoping that doesn't mean the other comments have been
> > ignored.
> >
> > - Jim Campbell
> > Campbell at Virginia.edu
> >
> >> -----Original Message-----
> >> From: web4lib-bounces at webjunction.org
> >> [mailto:web4lib-bounces at webjunction.org] On Behalf Of K.G. Schneider
> >> Sent: Monday, September 04, 2006 6:38 PM
> >> To: web4lib at webjunction.org
> >> Subject: RE: [Web4lib] Google Allows Downloads of out-of-copyright 
> >> Books
> >>
> >>> I suspect that it's the correcting, rather than finding errors,
> >>> that is onerous. I, too, was thinking of having somewhere that
> >>> people could note which books have errors (I just downloaded one
> >>> that I wanted and found pages missing -- very disappointing). Now
> >>> I think we should have a place where people can report books that
> >>> appear to be good scans so that other libraries can concentrate
> >>> on the books that AREN'T on that list. In the end, though, it's
> >>> really only economical to do QC as part of the scanning process,
> >>> when you have the book and the scanning equipment and the
> >>> operators right there. Like most other activities, cleanup after
> >>> the fact is the least desirable way to go about it.
> >>>
> >>> kc
> >>>
> >>> Patricia F Anderson wrote:
> >>>> Perhaps take a folksonomy approach -- have a system by which
> >>>> patrons can report or recommend correction of errors they
> >>>> discover. A Wikipedia model, perhaps. Just brainstorming, but it
> >>>> could take the burden of correction off the local coders.
> >>
> >> Actually both approaches are good... clean up as you go along, but
> >> enable the ability to comment on sites (negative, positive,
> >> evaluative, etc.). The latter is not only good 2.0-ish practice, but
> >> also could provide valuable information on problems users find that
> >> are not necessarily evident to providers (and also enables in the
> >> networked environment the well-respected practice of conversing
> >> through marginalia... see the NYT this past weekend, "John Adams
> >> Talks to His Books").
> >>
> >> Karen G. Schneider
> >> kgs at bluehighways.com
> >>
> 
> --
> -----------------------------------
> Karen Coyle / Digital Library Consultant
> kcoyle at kcoyle.net
> http://www.kcoyle.net
> ph.: 510-540-7596
> fx.: 510-848-3913
> mo.: 510-435-8234
> ------------------------------------