Is federated search “ranking impaired?”

The e-resources @ UVM blog published a post this morning that, among other things, said this:

The closing speaker, Tom Wilson (University of Alabama), briefly made a point about Google that I really liked, and that led to discussion afterwards. He pointed out that Google is not a federated search engine: it uses relevancy ranking (maybe well, maybe not well) and federated searches can’t. Federated search engines are, by nature, multiple databases, and can’t apply relevancy like Google can with its single database. I had never thought through to that point, and I think it’ll be on my mind for the plane ride home.

This statement really caught my attention because it’s wrong. I worked at Deep Web Technologies (this blog’s sponsor) for five years and know their technology pretty intimately. Deep Web puts a tremendous amount of effort into doing relevance ranking. Most other federated search vendors provide relevance ranking as well.

I think the point that Mr. Wilson was trying to make was that it is much more difficult for federated search applications to do relevance ranking than it is for applications that crawl and index their content. The difference between federated search and crawling isn’t, as the blog post claims, that “Federated search engines are, by nature, multiple databases, and can’t apply relevancy like Google can with its single database.” The difference is that Google has complete information in its database while federated search engines perform their relevance ranking with very incomplete information.

When a user performs a query using Google, Google can find the user’s search terms anywhere in potential document matches because it has read (extracted text from) entire documents and indexed that text. Google can rank multiple results against one another and do it consistently because it has the full text for all of the results it’s comparing.

Federated search, for all the great benefits it has, is severely limited on the relevance ranking front. A federated search application typically has access to only the title, summary, and other small bits of metadata in the result list of documents returned from a search of its databases. The federated search application can also capture the underlying search engine’s ranking of documents in the result list and use that information to influence its own ranking. But, the federated search application is at the mercy of the databases it searches. Many databases provide poor results that don’t match queries particularly well or, if they do provide relevant results, they may not rank them very well. The federated search engine doesn’t have the luxury of examining the full text of the documents returned by the databases to see if it can rank them better than the source. So, the federated search application is stuck performing the relevance ranking as best it can given very limited information.
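
To make the point concrete, here is a minimal sketch of a merge step that trusts each source’s own ordering, because that ordering is all the federated application can see. It is a hypothetical illustration, not any vendor’s actual algorithm:

    # Hypothetical sketch: merging ranked result lists from several sources
    # when the only ranking signal available is each source's own ordering.
    def merge_by_source_rank(results_by_source):
        merged = []
        for source, results in results_by_source.items():
            for position, result in enumerate(results, start=1):
                # Position 1 within a source scores 1.0, position 2 scores 0.5, ...
                merged.append((1.0 / position, source, result))
        merged.sort(key=lambda item: item[0], reverse=True)
        return [(source, result) for _, source, result in merged]

    # Example: two sources, each returning hits in its own preferred order.
    merged = merge_by_source_rank({
        "db_a": ["Cancer genomics review", "Cell signaling survey"],
        "db_b": ["Oncology trial results", "Unrelated editorial"],
    })

The merged ordering is only as good as the ordering each source produced, which is exactly the “at the mercy of the databases” problem.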

Having pointed out the limitation of federated search relevance ranking, I must also say that not all federated search applications rank equally. A search of this blog for the phrase “relevance ranking” turns up a number of articles where I’ve addressed the issue. In particular, the article “What determines quality of search results” discusses this subject at length. Plus, a federated search application can extract full text from the documents it retrieves from databases and use that full text to improve its ranking. Science.gov, whose search engine was built by Deep Web Technologies, employs this and another strategy, in selected cases, to improve relevance ranking.

Mike Moran, in the Biznology Blog, compares Dogpile (a metasearch application that federates several of the most popular web crawler search engines, including Google) to searching the underlying search engines individually. While I don’t agree with a number of the points that Mr. Moran makes in the article, I do believe he has articulated the problem well:

Because Dogpile doesn’t actually examine the documents, it suffers from limitations that degrade its results. Relevance ranking, while difficult in a single-index search engine, is excruciating for a federated search engine. Google can rank documents based on where the words appear in the documents, which documents get links to them, and dozens of other factors. Dogpile can’t. Dogpile can only take a guess at which documents are better by examining the titles, snippets, and URLs that Google returns to display on its search results screen. That’s why most people prefer Google, or Yahoo!, or another one-index search engine to Dogpile and Metacrawler.

Note that Dogpile conducted research that refutes Mr. Moran’s claim that most people prefer the native search applications over Dogpile.

For all its strengths, federated search comes with trade-offs; relevance ranking is one of them, but a well-designed search application can overcome some of the inherent limitations.

This entry was posted on Friday, March 21st, 2008 at 12:29 pm and is filed under viewpoints.

7 Responses so far to “Is federated search ‘ranking impaired?’”

  1. Toni
    March 22nd, 2008 at 8:04 am  

    Pardon me for misinterpreting Mr. Wilson’s remarks. He did not use the phrase “by nature”. I may have poorly paraphrased his point.

  2. Peter Murray
    March 24th, 2008 at 5:59 pm  

    I believe it is also true that a federated search tool has incomplete knowledge of all of the records from the databases being searched. For instance, if a target database has 10,000 hits that match a search phrase, does the federated search tool get all 10,000 hits? In my experience, some target databases only give you the first 100 or so. If that is the case, then the federated search engine is relying on the assumption that the 100 most relevant hits were returned first by the target database.

    Since the federated search engine can’t see all 10,000 hits, the usability of relevance ranking is further impaired.

  3. Toni
    March 25th, 2008 at 8:58 am  

    That’s an interesting thought, Peter.

    The way I understand it, the individual database results may be ranked, but there are no standards to allow federated search engines to re-rank that information.

  4. Tom
    March 25th, 2008 at 11:04 am  

    Since I’m ultimately the culprit here, and Toni clued me into this discussion, perhaps I ought to contribute a few thoughts.

    Both Sol and Peter provide caveats that I agree with, but I will state here, as I did in the presentation (although using different words), that a federated search engine in the context in which I was talking is performing whatever it does on regurgitated data from the databases it points to. If this data has already been ranked by the pointed-to database, that is the order in which the federated search engine receives the results and, in most cases, displays them to the user. If additional processing is done on that data to provide some different type of relevancy ranking, the federated search engine is potentially operating with incomplete information (part of Sol’s point). If the federated search engine is sending the user’s search to several databases simultaneously, the outcome is even more problematic because:
    1. The federated search engine may not be dealing with a complete result set from the database searched (Peter’s point); and
    2. Each database may be performing relevancy ranking using different criteria for relevance.

    It is not so much that it is conceptually impossible to design a metasearch engine that scours all of the results from the target databases and other resources and applies its own relevancy algorithm to (potentially) re-rank the combined uber-result set; it is that the concept is extremely difficult to operationalize given the constraints mentioned above and the fact that users want an immediate response.

    Further, and perhaps more to the point, the audience for this presentation was people working in a library context in which mega-bucks are spent on licensing access to non-crawlable resources: the breadth and depth of the published literature. We may be talking about two very different types of federated searching targets: a) those which can be harvested in toto, and b) those which cannot.

  5. Abe
    March 26th, 2008 at 9:47 am  

    I’m glad to see that this topic of “relevance ranking” of federated search results has sparked some debate in this blog. It is an area that has been of great interest to me for 5 years now, and my company has invested a lot of resources in addressing this challenge.

    First of all, I agree with the comments and observations that federated search operates with incomplete, “regurgitated” information, as Tom points out in an earlier comment.

    The problem, as discussed by Sol and commenters to his post, is two-fold. First, a federated search engine may only bring back and analyze/rank a small subset of the available results from a large information source that has many results for a given query. Note that this is more of a problem with the user’s query than a federated search problem. For example, if a user goes to PubMed (the U.S. Government’s most popular database, which also happens to return its results in chronological order) and searches for “cancer”, the search is not going to bring back very useful results.

    A federated search engine, ours included, relies to a great extent on the relevance ranking capabilities of the information sources being queried. We at Deep Web Technologies do a number of things to significantly improve the chances that we’ll find and bring back the most relevant documents that the user is searching for. We ensure that each of the connectors we create is optimized for the information source it searches (it supports all of the source’s search operators and its advanced fielded search capabilities), and we bring back a larger number of results (at least 100 where possible) from each source being federated.
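
    To give a rough idea, here is a minimal sketch of what a per-source connector interface might look like; the class and method names are hypothetical placeholders, not our actual API:

        # Hypothetical sketch of a per-source "connector" interface:
        # each connector translates the user's query into the source's
        # own syntax and asks the source for a generous number of results.
        from abc import ABC, abstractmethod

        class Connector(ABC):
            @abstractmethod
            def translate_query(self, query: str) -> str:
                """Rewrite the query using the source's operators and fielded search syntax."""

            @abstractmethod
            def search(self, native_query: str, max_results: int = 100) -> list:
                """Run the search against the source and return its raw results."""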

    The second challenge of federated search is the ranking of the results that have been brought back. How does the federated search engine know that the first result returned by one information source is more relevant than the fifth result returned by another information source? Almost 5 years ago I delivered the first of our relevance ranking algorithms, QuickRank, to an initially skeptical group of my customers. QuickRank, which ranks results based on the occurrence of search terms within a result’s title and snippet, has proven to work extremely well. No, it doesn’t ensure that the most relevant results are always returned to a user, but much more often than not the best results are found and returned within the first page of results.

    Results which might be highly relevant but don’t include the search terms in the title, author, or snippet are returned as unranked results. Note that Google suffers from a similar type of problem in that a highly relevant web page, a “gem” that hasn’t yet been discovered (i.e., doesn’t have many links to it), is not likely to be found by someone searching Google.
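
    For the curious, here is a minimal sketch of this kind of title-and-snippet scoring. It illustrates the general idea only, not the actual QuickRank implementation, and the field names (“title”, “snippet”) are placeholders:

        # Sketch only: score results by counting query terms in the title
        # and snippet; results with no term matches go into an unranked
        # bucket that is appended to the end of the list.
        def rank_by_metadata(results, query_terms):
            terms = [t.lower() for t in query_terms]
            ranked, unranked = [], []
            for r in results:
                text = (r["title"] + " " + r["snippet"]).lower()
                hits = sum(text.count(t) for t in terms)
                (ranked if hits > 0 else unranked).append((hits, r))
            ranked.sort(key=lambda pair: pair[0], reverse=True)
            return [r for _, r in ranked] + [r for _, r in unranked]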

    Finally, as I discussed in one of my earlier posts, federated search is a great discovery tool for students and researchers who don’t know, or don’t want to know, where to search for information. Yes, if one could create one very large index of all the information that one might be interested in searching for, and have it indexed by a highly capable search engine, then that option would be preferable to federated search. But since this is not possible, and is going to become less possible as more and more information sources become available, what alternative is there to federated search?

  6. Dave
    May 5th, 2008 at 4:01 pm  

    In medicine there is a ranking system that can be applied to search results. It is based on quality of evidence. See http://www.cebm.net/index.aspx?o=1025 for more. At the University of Arizona, we adapted this system to search results in an in-house developed tool called EBM Search. It harnesses the search capabilities available from the targeted databases and displays results according to publication type (which is tied to evidence quality). The results so far have been successful: we surveyed medical students over two years, and they both use it and comment favorably on it. It is the default search tool in their courseware. Your PubMed “cancer” example is a great one: do a search on cancer in EBM Search, and you find systematic reviews of randomized controlled trials highest in rank, followed by clinical trials, all aggregated and displayed according to a ranking system that makes sense to the clinical user. It seems that federated search is too concerned with finding all things for all groups, emulating Google. I would argue the best approach is to work with specific target groups and learn their information-seeking behaviors, then customize something for them specifically, including a ranking system based on that user group’s culture, if possible.
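
    As a rough sketch of the idea (not the actual EBM Search code; the publication-type labels and field name are illustrative placeholders), ordering by evidence quality can be as simple as mapping each result’s publication type to a level and sorting on it:

        # Sketch only: order results by an evidence-quality hierarchy
        # keyed on publication type. A lower level number means
        # stronger evidence and sorts first.
        EVIDENCE_LEVEL = {
            "systematic review": 1,
            "randomized controlled trial": 2,
            "clinical trial": 3,
            "cohort study": 4,
            "case report": 5,
        }

        def rank_by_evidence(results):
            # Publication types not in the table sort after the known levels.
            return sorted(results,
                          key=lambda r: EVIDENCE_LEVEL.get(r["publication_type"], 99))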

  7. Why users like federated search (even though they shouldn’t) « Dana’s user experience blog
    November 4th, 2009 at 11:29 pm  

    [...] relevance ranking doesn’t really work. Because federated search is pulling in material from a range of sources, each of which use [...]
