[Web4lib] Capturing web sites

Paul R. Pival ppival at ucalgary.ca
Thu May 19 14:32:08 EDT 2005


If the document consists of a *single* HTML page, you might get what you 
need with Macromedia's FlashPaper 
(http://macromedia.com/software/flashpaper/).  Once installed, you can 
output to FlashPaper as one of your print options, and the resulting 
file is very similar to a PDF but doesn't require the PDF plugin (and 
its long load time, grrr) to view - it just pops up (assuming you have 
the Flash viewer installed).  There's an example of the output at 
http://macromedia.com/software/flashpaper/productinfo/overview/flashpaper_datasheet.html
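If you'd rather script the capture than print each page by hand, a few lines of Python can fetch a page and file it away with a date stamp. This is just a minimal sketch using only the standard library - the function name, filename scheme, and "captures" directory are illustrative, not any particular product's workflow:

```python
# Minimal sketch: fetch one page and store it with a date-stamped
# filename so successive captures of the same URL don't collide.
import urllib.request
from datetime import date
from pathlib import Path

def capture_page(url: str, out_dir: str = "captures") -> Path:
    """Fetch a single page and save it under out_dir; return the path."""
    with urllib.request.urlopen(url) as resp:
        data = resp.read()
    Path(out_dir).mkdir(exist_ok=True)
    # Use the last path segment as the base name, falling back to index.html.
    name = url.rstrip("/").rsplit("/", 1)[-1] or "index.html"
    dest = Path(out_dir) / f"{date.today().isoformat()}-{name}"
    dest.write_bytes(data)
    return dest
```

For whole multi-page sites, a recursive mirroring tool (GNU Wget's --mirror mode, for instance) is the more usual route; the sketch above only handles one page at a time.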


Paul R. Pival
Distance Education Librarian
213E MLT
University of Calgary
Calgary, Alberta T2N 1N4
Phone: (403) 220-2119
Toll Free: 1 (866) 210-9637
Fax: (403) 282-6837
Email: ppival at ucalgary.ca <mailto:ppival at ucalgary.ca>
Website: Library Connection <http://www.ucalgary.ca/library/libcon>



Catherine Buck Morgan wrote:

> (Please excuse the cross-posting.)
>
> We are a state documents depository, collecting annual reports, 
> directories, and other kinds of documents produced by the various 
> agencies in SC. As you're aware, many of these documents are now 
> published electronically.
>
> Some documents are published only as an html website (including 
> directories and annual reports). Our problem is how to capture that 
> and store it so it can be accessed down the road. (At this point, I'm 
> not concerned with accessing it in the year 2038, just capturing it now.)
>
> How are other libraries handling this? Are there software 
> recommendations?
>
> Thanks,
> Catherine.


