20 Aug 2008

In reviewing the past few months' worth of comments, I'm realizing that there's a goldmine of wisdom in your experiences with federated search that is going unnoticed by readers who don't follow the comments. So, I'd like to bring attention to a number of your insightful comments and to respond where a response is warranted. Note: You can subscribe to the comments feed by clicking on the smaller of the two RSS icons near the top of the rightmost sidebar on any blog page or by clicking here.

Beyond shining the spotlight on your excellent comments, a good blogger attends to his comments in a timely manner, right? I last responded to comments in a big batch post on March 31st. Gulp! Well, I'm turning over a new leaf; moving forward, I'll respond to comments as they come in, either in a response comment or in a blog article. Until the next comment arrives - go ahead and test me - I'm responding to those lonely comments that have been waiting for a response. Here are my responses to the first batch of comments.

On March 21st I wrote Is federated search ranking impaired, commenting on a blog post that discussed a presentation by Tom Wilson of the University of Alabama. My post led to several comments, including a very elaborate response by Mr. Wilson himself highlighting some real problems with federated search and relevance ranking. Peter pointed out another impairment of federated search relevance ranking, one that is often overlooked or forgotten. Abe chimed in with his own detailed response, further emphasizing the challenges of relevance ranking and hinting at how they can be addressed. And Dave explained how the University of Arizona performs relevance ranking in one of their in-house databases, and noted that this approach to relevance ranking is well received by its audience. This was a great discussion. Thanks, everybody, for your comments. I encourage everyone to read the comment thread.

Stephen Francoeur asked, in response to my basics article on SRU/SRW/Z3950:

So what standard works best for the federated search tools? How can one define best? Which standard is likely to provide the easiest results for the fed. search vendor to parse?

While I'm no expert in standards or in results parsing, I can say this: anything is better than screen scraping. Any result format that is well structured, with fields and values clearly delineated, particularly in XML, works well with federated search engines. Also, SRU, SRW, and Z39.50 are very popular standards, and free tools for processing results in these formats exist for many environments and a broad range of programming languages.
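To make the contrast with screen scraping concrete, here is a minimal sketch of what consuming a structured standard can look like: an SRU searchRetrieve request whose XML response has every field clearly delineated, so extracting a value is a simple lookup rather than a fragile regular expression against HTML. The endpoint URL, the Dublin Core record schema, and the helper function are assumptions for illustration, not a description of any particular vendor's connector.

```python
# Minimal sketch: issue an SRU searchRetrieve request and parse the
# structured XML response. The endpoint URL below is hypothetical; real
# connectors would also handle diagnostics, paging, and other record schemas.
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

SRW_NS = "{http://www.loc.gov/zing/srw/}"     # SRU/SRW response namespace
DC_NS = "{http://purl.org/dc/elements/1.1/}"  # Dublin Core, a common record schema

def sru_search(base_url, cql_query, max_records=10):
    params = urllib.parse.urlencode({
        "version": "1.1",
        "operation": "searchRetrieve",
        "query": cql_query,
        "maximumRecords": max_records,
        "recordSchema": "dc",
    })
    with urllib.request.urlopen(f"{base_url}?{params}") as response:
        tree = ET.parse(response)

    titles = []
    for record in tree.iter(f"{SRW_NS}record"):
        data = record.find(f"{SRW_NS}recordData")
        if data is None:
            continue
        # Fields arrive clearly delineated in XML, so extraction is a lookup,
        # not screen scraping.
        titles.append(data.findtext(f".//{DC_NS}title", default="(no title)"))
    return titles

if __name__ == "__main__":
    # Hypothetical SRU endpoint; substitute a real one to try this out.
    for title in sru_search("http://sru.example.org/catalog", 'dc.title = "federated search"'):
        print(title)
```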

Stephan responded to Federated search: the challenges of incremental results. He tried an approach that doesn't display incremental results but instead allows users to set a timeout for queries; sources that don't return their results within the timeout period get some or none of their results displayed. I like this approach, although I prefer incremental results now that I'm used to them. Peter didn't care for my comment that training, documentation, and education could help users learn to use incremental results, raising the concern that asking the user to do anything special is asking for trouble and means that the software folks are doing a bad job. This is a tough one - I hear Peter's point, and not every software system is intuitive at first glance. Sometimes power tools take time to learn.
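For readers curious how such a timeout model might work under the hood, here is a minimal sketch, assuming hypothetical source names and a stand-in fetch function: every source is queried in parallel, and whatever has arrived when the user's timeout expires is merged and displayed, while slower sources simply miss the cut for that search.

```python
# Minimal sketch of the timeout approach described above: query all sources
# in parallel, then show whatever has arrived when the timeout expires.
# Source names and fetch_results() are hypothetical stand-ins for real connectors.
import concurrent.futures
import random
import time

def fetch_results(source):
    """Pretend connector: each source responds after a different delay."""
    time.sleep(random.uniform(0.5, 3.0))
    return [f"{source} result {i}" for i in range(3)]

def federated_search(sources, timeout_seconds):
    pool = concurrent.futures.ThreadPoolExecutor(max_workers=len(sources))
    futures = [pool.submit(fetch_results, s) for s in sources]
    done, not_done = concurrent.futures.wait(futures, timeout=timeout_seconds)

    merged = []
    for future in done:
        merged.extend(future.result())
    # Sources still running when the timeout expires contribute nothing this
    # time; return to the user immediately rather than waiting for stragglers.
    pool.shutdown(wait=False)
    return merged

if __name__ == "__main__":
    print(federated_search(["PubMed", "IEEE", "arXiv"], timeout_seconds=2.0))
```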

Dr. Walt Warnick, Director of OSTI, and one of my employers, was concerned enough with the discussion of usability and incremental results to respond. He made a very strong statement and said, in part:

What OSTI seeks to do is to make all its products intuitive. We consider ourselves to have failed if users need to take a training course to use one of our products. While Google is simple to use, and we emulate that simplicity whenever we can, we think we have achieved our purpose if our applications are intuitive, whether or not they follow the Google model.

Thinking back on my years of involvement with OSTI, an intuitive user interface has always been central to OSTI's requirements.

On a different subject, I’m so used to Google being the only surface web search engine I use that I was surprised to be reminded by Gwen, in response to Dogpile comes out at the top of the pile, that “untrained” searchers might prefer the metasearch engines. I’m reminded that I have the mindset that if it’s in the surface web and Google can’t find it, then it’s not there and the other search engines won’t find it either. So, I never think to use the metasearch engines but obviously I’m wrong or Dogpile wouldn’t have a business. And, in fact, my Dogpile article points out that there is much less overlap among search engines than many of us expect and thus there’s a real value to the metasearch engines. I guess I’m too used to Google although I’ll be the first to admit that I’ve never compared search results from Google, Yahoo! and MSN. I’m just happily brainwashed.

Stay tuned for future installments of “comments on comments.” There are many more comments to go through than I had anticipated but having to catch up on so many comments is a good problem to have. I do apologize for not having made comment followup a priority in the past. Again, if you leave a comment that invites a response I will respond much more quickly moving forward.

If you enjoyed this post, make sure you subscribe to the RSS feed!


