MIT Technology Review published an article, "Searching as a Team: An innovative tool aims to help users search the Internet together," which describes an experiment in collaborative searching using social software. Meredith Morris, of Microsoft's Adaptive Systems and Interaction group, is designing the software. Here's how the article describes the tool:
Called SearchTogether, the tool is meant to help groups whose members are working on different computers, whether they’re all logged in simultaneously or one at a time. The tool is a plug-in for Internet Explorer 7 and requires a Windows Live ID to use. Once all the users have the tool installed, Morris explains, if one of them wants to initiate a Web search, she can invite the others to join her. The tool tracks the work done by the group, making it easier for the initiator to assign tasks and for group members to keep track of what they’ve done.
The article goes on to explain that Morris conducted a survey to identify challenges people faced in trying to search collaboratively (redundant effort and inefficient communications about results were two challenges mentioned) and designed the tool to address them.
My question to readers is, "What would it take to create a collaborative federated search environment?" While in many ways collaboration is collaboration, regardless of where the content comes from, there are differences worth noting between crawled and federated content. A major difference that comes to mind is that users of federated search are more likely to be looking at scholarly documents than are users of the popular search engines. Being able to annotate technical and scientific documents and highlight key points would be of value to me, as would reading the annotations of others.
Scholarly documents also have citations. I would want a collaborative federated search system to track the references both from and to any particular document, and to identify who on my collaboration team has seen and annotated any document in the chain of references. And I would want a mechanism for ranking documents so that I can steer others on my team towards or away from a particular document.
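The citation tracking, annotation, and team-ranking features described above could be modeled with a small data structure. Here is a minimal sketch; every name in it (`CitationGraph`, `seen_by`, `chain_coverage`, and so on) is hypothetical, not part of SearchTogether or any existing product:

```python
from collections import defaultdict

class CitationGraph:
    """Hypothetical sketch: track citations from/to documents, who on the
    team has annotated or rated each one, and an average team score."""

    def __init__(self):
        self.cites = defaultdict(set)         # doc -> docs it references
        self.cited_by = defaultdict(set)      # doc -> docs that cite it
        self.annotations = defaultdict(list)  # doc -> list of (member, note)
        self.ratings = defaultdict(dict)      # doc -> {member: score}

    def add_citation(self, doc, referenced):
        self.cites[doc].add(referenced)
        self.cited_by[referenced].add(doc)

    def annotate(self, doc, member, note):
        self.annotations[doc].append((member, note))

    def rate(self, doc, member, score):
        self.ratings[doc][member] = score

    def seen_by(self, doc):
        """Team members who have annotated or rated a document."""
        annotators = {member for member, _ in self.annotations[doc]}
        return annotators | set(self.ratings[doc])

    def chain_coverage(self, doc):
        """For each reference of `doc`, who on the team has seen it."""
        return {ref: self.seen_by(ref) for ref in self.cites[doc]}

    def team_score(self, doc):
        """Average rating from the team, or None if no one has rated it."""
        scores = list(self.ratings[doc].values())
        return sum(scores) / len(scores) if scores else None
```

With such a structure, when I open a paper I could immediately see which of its references a teammate has already read and rated, and skip (or prioritize) accordingly.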
Collaborative federated search environments get more interesting the more people participate. Just look at all of the useful information that Amazon.com provides to its customers because of its huge customer base. I love reading reviews, seeing ratings, and noting the recommendations for other books to peruse that Amazon provides. In scholarly research, a collaborative environment would do well to implement these Amazon features, with one important caveat: I would want a way to determine how credible any particular reviewer is. The anonymity of Amazon reviews concerns me; how do I know a particular review isn't biased? How do I know a book is high quality just because a lot of people say it is? When conducting serious research, I want safeguards to flag suspect reviews, like those submitted by new members of the collaboration who don't yet have an established reputation. I want a mechanism for community members to establish credibility.
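The safeguard described above, flagging reviews from members without an established reputation, could start as something very simple. A minimal sketch, assuming each review carries its author's prior contribution count (the function name and the threshold are illustrative, not from any real system):

```python
def flag_suspect_reviews(reviews, min_history=5):
    """Return the members whose reviews should be flagged because they
    have fewer prior contributions than `min_history`. Each review is a
    dict with 'member', 'text', and 'prior_contributions' keys; the
    threshold of 5 is an arbitrary assumption for illustration."""
    return [
        review["member"]
        for review in reviews
        if review["prior_contributions"] < min_history
    ]
```

A real credibility mechanism would need more than a contribution count, of course (peer endorsements, track record of useful annotations), but even a crude threshold like this would let me discount a glowing review from a brand-new account.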
I like the idea of applying community software to collaborative search efforts. What do you think users of federated search would value in a collaborative environment? Maybe we can get Microsoft to listen.
Tags: federated search