[Web4lib] Capturing web sites

Junus, Ranti junus at mail.lib.msu.edu
Thu May 19 13:47:10 EDT 2005


I had completely forgotten about WebWhacker!  Thanks for bringing it up.

Anyway, another possibility is the HTTrack website copier (http://www.httrack.com/).  It's available in both Windows and Linux versions, and it's free (GPL-licensed).
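
For example, on the Linux side, a typical invocation to mirror a single agency's site looks something like this (the URL and output directory below are just placeholders):

  httrack "http://www.example-agency.sc.gov/" -O "/archive/example-agency" "+*.example-agency.sc.gov/*" -v

That pulls the pages down into a local directory with the links rewritten so the copy can be browsed offline later; the "+" filter keeps the crawl from wandering off the agency's own domain.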

ranti.

--
Ranti Junus - Web Services
 100 Main Library W441
 Michigan State University
 East Lansing, MI 48824, USA
 +1.517.432.6123 ext. 231
 +1.517.432.8374 (fax)


> -----Original Message-----
> From: web4lib-bounces at webjunction.org
> [mailto:web4lib-bounces at webjunction.org]On Behalf Of Doug Payne
> Sent: Thursday, May 19, 2005 1:21 PM
> To: web4lib at webjunction.org
> Subject: Re: [Web4lib] Capturing web sites
> 
> 
> Hi Catherine,
> 
> Another tool for capturing web sites is WebWhacker by
> BlueSquirrel Software.
> 
> WebWhacker is at:
> http://www.bluesquirrel.com/products/webwhacker/
> 
> -- 
> Doug Payne     Boston University - Mugar Library &
> dbp at bu.edu     Boston University Office of Information Technology
>                 TEL:  (617) 353-0602  FAX:  (617) 353-2084
> 
> 
> Ellen McCullough wrote:
> 
> > Catherine, 
> > If you're asking about capturing discrete pages of Web sites, I have a
> > good software program called SnagIt
> > (http://www.techsmith.com/products/snagit/default.asp). It costs about
> > $50.00, and I use it to capture Web pages, sections of Web pages, etc.
> > There's a feature you can use to capture a scrolling Web page as well,
> > so you get the page in its entirety (vs. just the visible window).  You
> > can save the files in a number of different formats (PNG, JPEG, etc.).
> > I have found it very useful!  By the way, I have no professional
> > affiliation with SnagIt or its maker, TechSmith!
> > Thank you, 
> > Ellen
> > 
> > ********************************************
> > Ellen McCullough, Marketing Director
> > Xrefer
> > 31 St. James Ave., Suite 370
> > Boston, MA 02116
> > ph: 617-426-5710
> > fx: 617-426-3103
> > www.xrefer.com 
> > =================================================
> > "If...you want a highly flexible but publisher-neutral online
> > [reference] collection made up only of the titles your 
> library needs,
> > this cross-referencing system may well be your best bet."
> > -Library Journal, April 15, 2005
> > 
> > 
> > -----Original Message-----
> > From: web4lib-bounces at webjunction.org
> > [mailto:web4lib-bounces at webjunction.org] On Behalf Of Catherine Buck
> > Morgan
> > Sent: Thursday, May 19, 2005 12:12 PM
> > To: SYSLIB-L at LISTSERV.BUFFALO.EDU
> > Cc: web4lib at webjunction.org
> > Subject: [Web4lib] Capturing web sites
> > 
> > 
> > (Please excuse the cross-posting.)
> > 
> > We are a state documents depository, collecting annual reports, 
> > directories, and other kinds of documents produced by the various 
> > agencies in SC. As you're aware, many of these documents are now 
> > published electronically.
> > 
> > Some documents are published only as an HTML website (including 
> > directories and annual reports). Our problem is how to capture that 
> > content and store it so it can be accessed down the road. (At this 
> > point, I'm not concerned with accessing it in the year 2038, just 
> > with capturing it now.)
> > 
> > How are other libraries handling this? Are there software
> > recommendations?
> > 
> > Thanks,
> > Catherine.
> 
> 
> _______________________________________________
> Web4lib mailing list
> Web4lib at webjunction.org
> http://lists.webjunction.org/web4lib/
> 

