[Web4lib] Another Google question

Roy Tennant roy.tennant at ucop.edu
Wed Jul 6 11:31:50 EDT 2005


Lars' question and Patricia's answer overlook the fact that Google  
is making a huge assumption about user needs, and is building a  
system that fulfills that assumption while providing no mechanism  
for the user to change it. Allow me to be specific.

Sometimes I want to find, for example, brand new web pages -- pages  
that are so new I'm not even sure whether Google has crawled them  
yet. But based on the PageRank algorithm as I understand it, these  
pages would naturally fall to the bottom of the search results. Does  
Google provide any method to reverse-sort the results? No. Does  
Google provide a mechanism to view results by the date they were  
added to the index? No. Does Google provide a mechanism to sort  
results by the last-modified date of the page itself? No. So what  
are we left with? Trying to get to the "end" of the search results,  
wherever that may be. Sorry, but that's bad interface design. The  
fact that you apparently can't even get there using the system's own  
mechanisms is flat-out indefensible. Or, if their numbers are in  
fact completely wrong and there are really only 900 items instead of  
15,000, then I guess they're just lying to us.
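
To make concrete what I'm asking for -- and this is a purely  
hypothetical sketch, since Google exposes no such metadata or sort  
options -- if each hit simply carried its crawl date and its  
last-modified date, the re-sorting itself would be trivial:

    # Hypothetical sketch: re-sorting search results by date metadata.
    # The "crawl_date" and "last_modified" fields are assumptions for
    # illustration only; Google provides no such fields or sort options.
    from datetime import date

    results = [
        {"url": "http://example.org/old", "crawl_date": date(2003, 2, 1),
         "last_modified": date(2002, 12, 5)},
        {"url": "http://example.org/new", "crawl_date": date(2005, 7, 1),
         "last_modified": date(2005, 6, 30)},
    ]

    # Newest additions to the index first -- roughly the reverse of
    # what a popularity-based ranking tends to surface.
    by_crawl = sorted(results, key=lambda r: r["crawl_date"], reverse=True)

    # Or sort by the page's own last-change date.
    by_change = sorted(results, key=lambda r: r["last_modified"], reverse=True)

    for r in by_crawl:
        print("%s  %s" % (r["crawl_date"], r["url"]))

The hard part is gathering and exposing that metadata, not the sort;  
my point is only that the interface offers no hook for it at all.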

Google does one thing, and it appears to do that one thing well. But  
let's not make the unfortunate assumption that it does more than that  
one, very specific, thing.
Roy

On Jul 6, 2005, at 5:54 AM, Patricia F Anderson wrote:

> Hi, Lars,
>
> Interesting question -- why look at lots of results? For myself, I  
> rarely look at more than the first 300. When I do, the query falls  
> into one of these categories:
>
>  - a topic of passionate interest where I truly want to see every  
> possible link (and I will spend *days* going through *all* links up  
> to the max displayed);
>
>  - a topic where the first 100 only sporadically revealed anything  
> relevant, and I have not found the magic combination of terms to  
> focus the search.
>
> Because I am someone who tends to skim large result sets, I have  
> my Google preferences set to display 50-100 links per page of  
> results, so it doesn't take me long to get through them.
>
> What makes this question especially interesting to me is that I  
> recently attended a Grokker demonstration. They emphasized that a  
> core purpose of Grokker's interface is to allow the serious  
> researcher to rapidly scan large result sets (research versus  
> search <g>). Their arbitrary limit for Google is 1000 results per  
> page, but this can be customized by the end-user. Now,  
> if someone is developing and marketing an interface for this  
> purpose, one might think there is at least *some* use for some  
> persons in being able to get beyond the first few pages of results.  
> It will be interesting to see how Grokker does, how their product  
> is used, and what types of persons find it most useful.
>
> Best,
>
> Patricia Anderson, pfa at umich.edu
>
> On Wed, 6 Jul 2005, Lars Aronsson wrote:
>
>
>> Patricia F Anderson wrote:
>>
>>> I just tried a search for the word "the". Reported results were
>>> 3,190,000,000. Maximum displayed results were 946. "Repeat the
>>> search" button yielded the same number. I tried a few others,
>>> with equally unpredictable results.
>>>
>>
>> Perhaps they have a filter that can tell real searchers apart from
>> librarians just trying to test the system.  For example, no real
>> searchers would be interested in more than the first 900 hits, so
>> if you still click "next page", you are just testing.
>>
>> I'm sorry for my ignorance, but what would be the point in finding
>> the 947th and 948th hit for any search expression?
>>
>>
>> -- 
>>  Lars Aronsson (lars at aronsson.se)
>>  Aronsson Datateknik - http://aronsson.se
> _______________________________________________
> Web4lib mailing list
> Web4lib at webjunction.org
> http://lists.webjunction.org/web4lib/


