[Web4lib] Google limit of 1,000 results
Roy Tennant
roy.tennant at ucop.edu
Sun Jul 17 12:10:52 EDT 2005
On Jul 17, 2005, at 3:52 AM, Lars Aronsson wrote:
> Roy Tennant wrote:
>
>> Sometimes I need to see the hits that logically
>> would show up at the bottom of Google's ranking,
>>
>
> Really? Can you give an example of a situation like that?
Sure. Given that Google's ranking algorithm tends to weight pages
that important sites point to more heavily than others, there are
times when I want to see what someone says about a topic that 1)
isn't being pointed to by anyone -- perhaps because it is too new --
or 2) isn't linked to by an "important" web site. The
point here is that those seeking information have a plethora of
purposes -- not all information needs can be adequately served by
one, rather specific, model.
> Can we define the end of the web, i.e. can we have knowledge about
> every webpage that exists? And is finding the last webpage
> meaningful even if we can't define the end of the web?
I'm not talking about defining the "end of the web", simply the end
of the Google search results. This end is defined (and redefined on a
regular basis) by Google.
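To make that "end" concrete: the thread's subject is Google's cap of roughly 1,000 retrievable results per query. The sketch below is a toy illustration (not Google's actual API) of why paging through results can never surface hits ranked beyond such a cap, however many matches logically exist.

```python
# Toy illustration, assuming a hypothetical search backend with a hard
# cap of 1,000 reachable results (the limit discussed in this thread).
# Paging with start/num parameters stops at the cap, so low-ranked hits
# are simply unreachable.

RESULT_CAP = 1000  # the per-query ceiling under discussion

def fetch_page(all_hits, start, num=10):
    """Return one 'page' of results, honoring the reachability cap."""
    reachable = all_hits[:RESULT_CAP]
    return reachable[start:start + num]

# Suppose a query logically matches 5,000 documents.
hits = [f"doc-{i}" for i in range(5000)]

seen = []
start = 0
while True:
    page = fetch_page(hits, start)
    if not page:
        break
    seen.extend(page)
    start += 10

print(len(seen))           # 1000 -- paging halts at the cap
print("doc-4999" in seen)  # False -- hits past the cap never appear
```

The point of the toy is that the cap, not the size of the web or of the match set, defines where the results "end" -- and the search provider can move that line whenever it likes.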
My point is merely that Google is good -- very good -- at some
important but limited tasks. It really sucks at others. We would do
well to know which is which and why, since presumably knowing this is
our business.
While I'm on the topic, I find it interesting that this discussion
has been characterized as "Google bashing" as if my purpose were to
insult and denigrate Google. Rather, I'm trying to more fully
understand what Google is good at and what it isn't good at. Given
that Google is not very forthcoming on its help pages about
limitations such as those surfaced here by Bernie Sloan and
others, this discussion seems to be one of the few places to get such
information.
Roy