Betr.: Building a database for e-journals

Eric Tull tull at ucalgary.ca
Thu Oct 18 15:54:42 EDT 2001


I think one of the major questions is whether you need to develop a separate database of your
e-journals, or whether you can manage your e-journals through your library catalogue.  We manage
ours in the catalogue, but we don't export them in any way.  I was interested to see how the
Universite de Montreal manages their e-journals within their library catalogue but then pulls
alphabetic and subject lists of their e-journals out to the web
(http://www.bib.umontreal.ca/SB/PEL/).

If we maintain our e-journals within the catalogue, a link-server like GODOT offers us a major
advantage.  GODOT gets up-to-date information on the e-journals we subscribe to and the URLs we use
by doing a live Z39.50 check of the catalogue, and then combining this with information from Jake to
form links to the full-text sources, as deep as Jake is able to go.  In this way we do not need
to maintain a second database - something that will become increasingly important as the number of
e-journals soars.
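The combining step described above can be sketched in a few lines. This is only an illustration, not GODOT's actual code: the dictionary stands in for the live Z39.50 catalogue check, the URL template stands in for Jake's deep-linking knowledge, and every name and URL pattern is hypothetical.

```python
# Sketch: how a link-server might combine catalogue holdings (a dict
# standing in for a live Z39.50 lookup) with Jake-style URL templates
# to build the deepest link available for an article.

# Catalogue check result: ISSN -> base URL the library subscribes to
catalogue_holdings = {
    "1234-5678": "https://example-publisher.org/journals/jexample",
}

# Jake-style knowledge: how deep this source can link, as a template
jake_templates = {
    "1234-5678": "{base}/v{volume}/i{issue}/p{spage}",
}

def build_deep_link(issn, volume, issue, spage):
    """Return the deepest link available, or None if the title is not held."""
    base = catalogue_holdings.get(issn)
    if base is None:
        return None  # not held: caller falls back to document delivery
    template = jake_templates.get(issn)
    if template is None:
        return base  # no deep-link data: link to the journal home page
    return template.format(base=base, volume=volume, issue=issue, spage=spage)

print(build_deep_link("1234-5678", 12, 3, 45))
```

Because the catalogue is queried live, only the small template table on the link-server side needs maintenance, which is the point of not keeping a second full database.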

The ideal of "one-click" access should apply not only to full-text access from article citations,
but also to
1) holdings statements for paper copies in your library,
2) document ordering if your library does not hold the item,
3) exporting the citation into bibliographic software,
and probably numerous other applications.
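For the third item, one widely importable target is the RIS tagged format, which EndNote, ProCite and Reference Manager can all read. A minimal sketch (the citation data is invented):

```python
# Sketch: exporting a citation into bibliographic software by rendering
# it as a minimal RIS record, a tagged format the major citation
# managers can import.

def citation_to_ris(cit):
    """Render a citation dict as a minimal RIS record."""
    lines = [
        "TY  - JOUR",            # record type: journal article
        "AU  - " + cit["author"],
        "TI  - " + cit["title"],
        "JO  - " + cit["journal"],
        "VL  - " + cit["volume"],
        "SP  - " + cit["spage"],
        "PY  - " + cit["year"],
        "ER  - ",                # end of record
    ]
    return "\n".join(lines)

example = {
    "author": "Doe, J.",
    "title": "An Example Article",
    "journal": "Journal of Examples",
    "volume": "12",
    "spage": "45",
    "year": "2001",
}
print(citation_to_ris(example))
```

A single server-side export like this is exactly why one filter per citation manager could replace per-database filters.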

GODOT, which Simon Fraser University has developed as open source, provides us with the first
two, and they have developed a prototype for the third, which they showed at Access 2001.  If they
can develop an export product that requires only one GODOT filter for each of EndNote, ProCite and
Reference Manager, instead of filters for each database, it will be a godsend to us.

The whole field of OpenURLs, linking and all the exciting steps that Eric Hellman describes seems to
be exploding.  I would certainly add it to Andrew Mutch's technology predictions of what will be big
in the next year.

Eric Tull
Public Services Systems Librarian
University of Calgary Library


> Date: Wed, 17 Oct 2001 16:28:17 -0400
> From: Eric Hellman <eric at openly.com>
> To: <marlene.delhaye at bu2.univ-mrs.fr>, web4lib at webjunction.org
> Subject: Re: Betr.: Building a database for e-journals
> 
> It is an over-simplification to say that OpenURL and personal link
> pages are the solution to this problem. They are certainly a part of
> the solution. The number of resources that work with personal link
> pages, in particular, is at present insignificant.
> 
> First, a disclaimer: my company, Openly Informatics,
> (http://www.openly.com/) is deeply involved with the full variety of
> players working to address this situation. We develop and sell
> linking components, software, and linking data. I am a member of the
> NISO committee AX, which is working to standardize OpenURL.
> 
> THE GOAL
> 
> Our goal is one-click access to everything. In other words, when a
> user sees a reference to literature, either in an A+I database or in
> the references in an article, they should be able to click on the
> reference and get to an appropriate resource for that article in as
> few clicks as possible. Maybe asking for one-click access to
> everything is a bit like asking for Peace on Earth, but it's very
> much worth striving for.
> 
> THE TASK
> In order to make this happen, a lot of groups with different and
> often conflicting interests have to cooperate. (See... I told you it
> was a lot like striving for peace on earth!)
> 
> 1. Information providers (publishers, aggregators, libraries, etc.)
> need to embed hooks in their content that can be pointed at a user's
> information agent. The format for the hooks will be customizable
> links called OpenURL's, which is what the NISO Committee AX is
> working on. The OpenURL standard will be based on a submission
> authored by Herbert van de Sompel, now of the British Library, and
> Oren Beit-Arie of Ex Libris.
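The "hook" described in point 1 is, concretely, a URL that carries the citation's metadata as query parameters and points at the user's link-server. A sketch in the style of the 0.1 draft (the resolver address and `sid` are hypothetical):

```python
# Sketch: the kind of OpenURL "hook" an information provider can embed
# in its content, pointed at a library's link-server.

from urllib.parse import urlencode

def make_openurl(resolver, metadata):
    """Build an OpenURL by appending citation metadata to a resolver base."""
    return resolver + "?" + urlencode(metadata)

link = make_openurl(
    "http://resolver.example.edu/openurl",   # the library's link-server
    {
        "sid": "EXAMPLE:db",    # source identifier: who generated the link
        "genre": "article",
        "issn": "1234-5678",
        "volume": "12",
        "issue": "3",
        "spage": "45",
        "date": "2001",
        "atitle": "An Example Article",
    },
)
print(link)
```

The provider only has to emit the metadata it already has; everything institution-specific lives behind the resolver address.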
> 
> 2. Libraries need to deploy and configure OpenURL link-servers.
> Link-servers are software agents that know what electronic content a
> user has access to and how to direct the user to that content. The
> link-server acts as a hub which directs users to the appropriate
> content. Within a few years, link-servers are likely to become an
> integral part of any Integrated Library System worth paying money
> for, but for now link-servers are separate components or services.
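The hub role in point 2 reduces to: parse the incoming OpenURL, check what this institution licenses, and pick the best target. A toy resolver, with invented holdings data and URLs:

```python
# Sketch: the core decision a link-server makes for each incoming
# OpenURL - full text if the institution has access, otherwise a
# fallback service such as interlibrary loan.

from urllib.parse import urlparse, parse_qs

# What this institution licenses: ISSN -> full-text entry point
holdings = {"1234-5678": "https://fulltext.example.org/1234-5678"}

def resolve(openurl):
    """Return (service, url): full text if held, otherwise an ILL form."""
    query = urlparse(openurl).query
    params = parse_qs(query)
    issn = params.get("issn", [None])[0]
    if issn in holdings:
        return ("fulltext", holdings[issn])
    # Pass the citation metadata straight through to the request form
    return ("ill", "https://ill.example.edu/request?" + query)

service, url = resolve(
    "http://resolver.example.edu/openurl?genre=article&issn=1234-5678&spage=45"
)
print(service, url)
```

Real link-servers add many more services (holdings displays, citation export, abstracts), but they all hang off this same dispatch step.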
> 
> 3. Information providers (publishers, aggregators, libraries, etc.)
> need to be friendly to incoming "deep" links. If a user has
> bibliographic information for an item, either entered by hand or
> carried on an OpenURL link, the user ought to be able to immediately
> get to resources for that item that the user's library has paid for.
> This means sensible authentication mechanisms and automated access.
> No user wants to drill down through ten search screens or directories
> when they already know what they want.
> 
> THE PRESENT
> Here's how the current situation looks in each of these areas.
> 
> 1. OpenURL. Thanks to the marketing push by Ex Libris for SFX, a lot
> of information providers have committed to adding OpenURL support in
> their products. The number that have actually implemented OpenURL is
> much smaller. Notably, many of the large aggregators have
> hooks-to-holdings features in their current products that can be used
> to construct OpenURL links to library link-servers. EBSCO, ProQuest,
> CSA, Gale and many others have such capabilities, which range from
> sophisticated to trivial.
> 
> Libraries can accelerate this trend primarily by asking for OpenURL
> when it comes time to make purchasing decisions. The web site for
> OpenURL is http://library.caltech.edu/openurl/
> 
> 2. There are two parts here, deployment and configuration.
> A. deployment. Link-servers come in a variety of shapes and sizes.
> While SFX is the best known and most full-featured, you can also
> deploy the free link server (based on Jake data) available on our web
> site. OpenURL link-servers have been announced or are being offered
> by Fretwell-Downing and Endeavor. Some universities have turned their
> e-journal databases into link-servers. GODOT, developed at Simon
> Fraser University for COPPUL, is a good example. A Korean company,
> KINS, has developed a link server for the Asian market. Certain
> aggregators may be able to turn their existing linking systems into
> lightweight link-servers. This is likely to be a market with a
> variety of options.
> 
> B. Configuration. This is the hard part at present. To configure a
> link server, you have to know titles you have access to, and how to
> access them. You have to build an e-journal database. Unfortunately,
> title lists at many aggregators are not as well maintained or as
> complete as they ought to be, making this a lot more work than it
> should be. A number of solutions have popped up to alleviate the
> aggravation being experienced by many e-journal librarians. Serials
> Solutions has helped a lot of people, and their data has been used in
> link servers, in a prototype with us and in at least one SFX
> installation. TDNet offers a bundled service, but they were not
> doing OpenURL, at least when I talked to them last. I don't know
> anything about JournalWebSite. Jake was a great solution a year ago,
> but data maintenance has stalled. Ultimately this problem has to be
> solved with closer involvement by the information providers and the
> implementation of standards, and that's the tack we're taking.
> 
> Libraries should start thinking about link servers now. The time and
> effort put into getting e-journal databases into shape will not be
> wasted.
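The "e-journal database" being built in the configuration step is essentially a title list with ISSNs, sources, URLs and coverage dates, cleaned up into machine-readable form. A sketch with invented rows:

```python
# Sketch: the holdings table a link-server is configured with, and the
# coverage check it runs against it. Real aggregator title lists need
# exactly this kind of normalization.

import csv
import io

raw = """title,issn,source,url,first_year,last_year
Journal of Examples,1234-5678,AggregatorA,https://a.example.org/joe,1995,
Annals of Samples,2345-6789,AggregatorB,https://b.example.org/aos,1998,2000
"""

def load_holdings(text):
    """Parse a title list, turning coverage years into usable values."""
    rows = list(csv.DictReader(io.StringIO(text)))
    for r in rows:
        r["first_year"] = int(r["first_year"])
        # a blank last_year means coverage runs to the present
        r["last_year"] = int(r["last_year"]) if r["last_year"] else None
    return rows

def covers(row, year):
    """True if this source's coverage includes the given year."""
    if year < row["first_year"]:
        return False
    return row["last_year"] is None or year <= row["last_year"]

holdings = load_holdings(raw)
print([r["title"] for r in holdings if covers(r, 2001)])
```

Once the data is in this shape, moving it into any vendor's link-server is mostly an export problem, which is why the effort is not wasted.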
> 
> 3. Linking. This part is rapidly being solved in two ways.
> 
> A. CrossRef. I tell people that CrossRef is a miracle, because it
> represents unprecedented cooperation and agreement between all the
> major science publishers. CrossRef is a consortium of over 70
> publishers that have agreed to contribute article metadata and links
> to a common database, which currently has over 3.5 million items.
> Libraries can become affiliate members of CrossRef for $500/year. For
> more info on how to use CrossRef, see http://www.openly.com/crossref/
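The payoff of a CrossRef lookup is simple: once the shared database has matched citation metadata to a DOI, the stable article link is just the DOI appended to the resolver. A sketch (the DOI is invented):

```python
# Sketch: the final step of a CrossRef lookup - turning a matched DOI
# into a stable, publisher-independent link.

DOI_RESOLVER = "http://dx.doi.org/"

def doi_to_link(doi):
    """Turn a DOI returned by a metadata lookup into a clickable link."""
    return DOI_RESOLVER + doi

print(doi_to_link("10.1000/example.2001.123"))
```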
> 
> B. Link friendly sites. More and more, both aggregators and
> publishers are recognizing that their products become more valuable
> when linking is easy and stable. We can now direct-link to 7700
> e-journals with the JournalSeek database. Even most of the
> aggregators who used to be hostile to direct linking are now trying
> to retrofit their services to allow it.  The California Digital
> Library in particular has been active in prodding publishers to make
> linking easier, by putting it as a requirement in their procurement
> process; this is something that could be emulated to beneficial
> effect.
> 
> THE FUTURE
> 
> Although today the talk is mostly about link-servers for libraries
> and consortia, I think that eventually everyone will have a link
> server (or personal link page) operating as a plug-in to their web
> browser, with preferences customized transparently to each
> individual. Authentication and rights management will be built in,
> and the resulting experience will make browsing the professional
> content as simple and easy as the free-content web is today. It's a
> sad reality today that paying for high quality content results in a
> poor user experience because of all the primitive and clunky ways
> that access control is implemented. Oldtimers will tell their
> unbelieving younger colleagues war stories about how they once had to
> go and munge title lists by hand while they sat at terminals in
> *buildings* called libraries.
> 
> Eric
> 
> >
> >
> >The solution to your problem is OpenURL: personal link pages (PLPs)
> >that act as OpenURL resolvers, plus a mechanism to let your browser
> >know which personal link page to link to. A simple demonstration can
> >be found at http://www.kb.nl/persons/theo/ .
> >When you look into these pages, you can see what mechanism is used to
> >let the browser know where to link to. It would be great if everybody
> >used the same mechanism. In that case, everyone could always link to
> >their own content provider, independent of where they found their
> >metadata.
> >
> >With regards,
> >Theo van Veen
> >
> >  >>> Marlène Delhaye <marlene.delhaye at bu2.univ-mrs.fr> 17-10-01 08:24 >>>
> >Good morning,
> >
> >I have to build a large database to manage and provide access to ca. 4500
> >e-journals for the universities that are members of a consortium. The main
> >difficulty for me is that people at different locations should not all have
> >access to the same content (as the universities don't subscribe to the same
> >journal packages).
> >I'd like to have information about the technical solutions you've
> >chosen in your libraries, and about the time, money and skills you've
> >needed to complete this project.
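The access rule in the question reduces to mapping institutions to licensed packages and packages to titles, then filtering the journal list per institution. A toy sketch with invented institutions, packages and titles:

```python
# Sketch: per-institution filtering for a consortium e-journal database,
# where members license different packages and must see different lists.

package_titles = {
    "PackageA": {"Journal of Examples", "Annals of Samples"},
    "PackageB": {"Sample Review"},
}

institution_packages = {
    "University 1": {"PackageA"},
    "University 2": {"PackageA", "PackageB"},
}

def titles_for(institution):
    """All e-journal titles an institution's users may access."""
    titles = set()
    for package in institution_packages.get(institution, set()):
        titles |= package_titles[package]
    return sorted(titles)

print(titles_for("University 1"))
print(titles_for("University 2"))
```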
> >
> >Thanks for your help and experiences,
> >
> >Best regards,
> >
> >Marlène Delhaye
> >Documentation électronique
> >-----------------------------------
> >Service Commun de Documentation
> >Université de la Méditerranée
> >27 Boulevard Jean Moulin
> >F-13385 Marseille cedex 05
> >t : 04 91 32 45 38
> >f : 04 91 25 60 22
> >e : marlene.delhaye at bu2.univ-mrs.fr
> >http://bu2.timone.univ-mrs.fr/
> >-----------------------------------
> 
> --
> Eric Hellman
> Openly Informatics, Inc.
> http://www.openly.com/1cate/      1 Click Access To Everything
> http://my.linkbaton.com/                Links that Learn
> http://addaflag.org/                        Raise the Flag on your Website
>

