[Web4lib] Capturing web sites

David P. Moore mooredp at email.uah.edu
Thu May 19 14:17:19 EDT 2005


I have enjoyed using Content Saver, as I wanted a desktop program that also
integrates with my browser; its desktop interface has the look and feel
of Outlook:

http://www.macropool.de/en/download/contentsaver/

I have also seen that Yahoo! is now entering this field and offering a
free service.

David P. Moore
Electronic Resources/Business Librarian
M. Louis Salmon Library
University of Alabama in Huntsville
Huntsville, AL 35899
256-824-6285
FAX: 256-824-6083
david.moore at uah.edu
http://www.uah.edu/library

-----Original Message-----
From: web4lib-bounces at webjunction.org
[mailto:web4lib-bounces at webjunction.org] On Behalf Of Doug Payne
Sent: Thursday, May 19, 2005 12:21 PM
To: web4lib at webjunction.org
Subject: Re: [Web4lib] Capturing web sites

Hi Catherine,

Another tool for capturing web sites is WebWhacker by
BlueSquirrel Software.

WebWhacker is at:
http://www.bluesquirrel.com/products/webwhacker/

-- 
Doug Payne     Boston University - Mugar Library &
dbp at bu.edu     Boston University Office of Information Technology
                TEL:  (617) 353-0602  FAX:  (617) 353-2084


Ellen McCullough wrote:

> Catherine, 
> If you're asking about capturing discrete pages of Web sites, I have a
> good software program called SnagIt
> (http://www.techsmith.com/products/snagit/default.asp). It costs about
> $50.00, and I use it to capture Web pages, sections of Web pages, etc.
> There's a feature you can use to capture a scrolling Web page as well,
> so you get the page in its entirety (vs. just the visible window).  You
> can save the files in a number of different formats (PNG, JPEG, etc.).
> I have found it very useful!  By the way, I have no professional
> affiliation with SnagIt or its maker, TechSmith!
> Thank you, 
> Ellen
> 
> ********************************************
> Ellen McCullough, Marketing Director
> Xrefer
> 31 St. James Ave., Suite 370
> Boston, MA 02116
> ph: 617-426-5710
> fx: 617-426-3103
> www.xrefer.com 
> =================================================
> "If...you want a highly flexible but publisher-neutral online
> [reference] collection made up only of the titles your library needs,
> this cross-referencing system may well be your best bet."
> -Library Journal, April 15, 2005
> 
> 
> -----Original Message-----
> From: web4lib-bounces at webjunction.org
> [mailto:web4lib-bounces at webjunction.org] On Behalf Of Catherine Buck
> Morgan
> Sent: Thursday, May 19, 2005 12:12 PM
> To: SYSLIB-L at LISTSERV.BUFFALO.EDU
> Cc: web4lib at webjunction.org
> Subject: [Web4lib] Capturing web sites
> 
> 
> (Please excuse the cross-posting.)
> 
> We are a state documents depository, collecting annual reports, 
> directories, and other kinds of documents produced by the various 
> agencies in SC. As you're aware, many of these documents are now 
> published electronically.
> 
> Some documents are published only as an HTML website (including 
> directories and annual reports). Our problem is how to capture such a 
> site and store it so it can be accessed down the road. (At this point, 
> I'm not concerned with accessing it in the year 2038, just capturing it 
> now.)
> 
> How are other libraries handling this? Are there software
> recommendations?
> 
> Thanks,
> Catherine.
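
For the capture problem Catherine describes, here is a minimal sketch in
Python (standard library only) that saves one page plus the images and
stylesheets it references. The URL and output directory are illustrative,
not real targets:

import os
import urllib.request
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse

class AssetFinder(HTMLParser):
    """Collect the src/href references for images and stylesheets."""

    def __init__(self):
        super().__init__()
        self.assets = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "img" and "src" in attrs:
            self.assets.append(attrs["src"])
        elif tag == "link" and attrs.get("rel") == "stylesheet" and "href" in attrs:
            self.assets.append(attrs["href"])

def capture(url, outdir="capture"):
    os.makedirs(outdir, exist_ok=True)
    html = urllib.request.urlopen(url).read()
    with open(os.path.join(outdir, "index.html"), "wb") as f:
        f.write(html)                       # save the page itself
    finder = AssetFinder()
    finder.feed(html.decode("utf-8", errors="replace"))
    for ref in finder.assets:
        asset_url = urljoin(url, ref)       # resolve relative links
        name = os.path.basename(urlparse(asset_url).path) or "asset"
        try:
            data = urllib.request.urlopen(asset_url).read()
        except OSError:
            continue                        # skip anything unreachable
        with open(os.path.join(outdir, name), "wb") as f:
            f.write(data)

if __name__ == "__main__":
    capture("http://www.example.com/")      # illustrative URL

Dedicated crawlers such as wget --mirror or HTTrack handle recursion, link
rewriting, and retries far more robustly; the sketch only shows the basic
fetch-parse-save loop.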

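And as a rough scriptable counterpart to the scrolling-page capture Ellen
describes in SnagIt, a sketch using the Playwright browser-automation
library (my own illustration; SnagIt is a GUI tool, and this is not its
API):

from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()
    page.goto("http://www.example.com/")    # illustrative URL
    # full_page=True stitches the entire scrollable page into one
    # image, not just the visible window.
    page.screenshot(path="page.png", full_page=True)
    browser.close()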

_______________________________________________
Web4lib mailing list
Web4lib at webjunction.org
http://lists.webjunction.org/web4lib/
