[WEB4LIB] RE: identifying links

Hugh Jarvis hjarvis at buffalo.edu
Wed Sep 18 14:24:21 EDT 2002


Perhaps pull the page (or the whole site) into something like FrontPage and let it
detect all the links for you...?

Or run a Perl script on it.  It should be a simple matter to have it search the
page for links and dump them into a file, sorted however you like.
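
Something along these lines might do it (a rough, untested sketch; it assumes
the LWP::Simple, HTML::LinkExtor, and URI modules from CPAN are installed):

#!/usr/bin/perl
# Rough sketch: fetch one page and list the links that point off-site.
use strict;
use warnings;
use LWP::Simple qw(get);
use HTML::LinkExtor;
use URI;

my $page = shift or die "Usage: $0 http://example.org/page.html\n";
my $html = get($page) or die "Could not fetch $page\n";
my $host = URI->new($page)->host;

my %seen;
my $parser = HTML::LinkExtor->new(sub {
    my ($tag, %attr) = @_;
    return unless $tag eq 'a' and $attr{href};
    my $abs = URI->new_abs($attr{href}, $page);   # resolve relative links
    return unless $abs->scheme and $abs->scheme =~ /^https?$/;
    $seen{$abs} = 1 if $abs->host ne $host;       # keep only external hosts
});
$parser->parse($html);
$parser->eof;

print "$_\n" for sort keys %seen;

Run it as "perl extlinks.pl http://your.site/page.html > external.txt" and you
have a sorted list of external links in a file, ready for review.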

Both will probably miss any links hidden behind scripting or forms.

 	Hugh



 Hugh Jarvis  (PhD, MLS)
 Cybrarian/Web Information Coordinator
 Creative Services - University at Buffalo
 330 Crofts Hall, Buffalo, New York, USA 14260-7015
 Tel: 716 645-5000 x1428  Fax: -3765
 Email: hjarvis at buffalo.edu (preferred)

 "[Librarians] are the vintners of the information age"  ME Bates

> > -----Original Message-----
> > Subject: [WEB4LIB] RE: identifying links
> >
> > > Can anyone recommend a tool for examining a site to help identify all
> > > the links to external sites/domains? On some sites you can view source
> > > code (tedious but doable); on others, you cannot. I am looking for a
> > > way to automate the process of harvesting all links to external sites
> > > so we can experiment with examining/approving them and adding them to
> > > the collection if they pass muster.



