[ Editor's note: Implementing federated search solutions is hard work. The following review, by Carl Grant, does an excellent job of identifying a number of key steps that those implementing solutions would be wise to follow. The steps are tedious and time-consuming, but the structure they provide to the process is worth the effort because it will minimize problems later. This review of one of the essays in Christopher Cox's book is a nice companion to the federated search roadmap series.
Given the quality of the essays in Mr. Cox’s book and the scarcity of other books on federated search, I highly recommend it. You can purchase a copy of Mr. Cox’s book of essays from the publisher, Taylor & Francis, who donated the review copies, by calling their Customer Service department, Monday-Friday 9 A.M. – 5 P.M. EDT, at (800) 634-7064.
You can find other reviews of essays from Mr. Cox’s book in the Cox essay review category. ]
We’ve all heard the stories of librarians who bought a federated search product and had it up and running within days of signing the contract. Yes, that can be done. Other libraries take a far more structured approach, and for those the chapter “Planning and Implementing a Federated Searching System: An Examination of Crucial Roles of Technical, Functional, and Usability Testing” will be well worth reading. Written by Susan Avery, David Ward, and Lisa Janicke Hinchliffe of the University of Illinois at Urbana-Champaign, the chapter summarizes an extremely detailed, analytical, and thorough approach to planning for and implementing a federated search product.
The approach is structured around technical, functional, and usability testing. Given that scope, the authors are correct in calling it “time-consuming”; their assertion is that, in the end, it will save users and library staff time.
The section on Technical Testing provides an excellent overview of resources to consult in developing a suite of tests suitable for your library. They note that standards compliance is a topic worthy of detailed investigation. They also advocate involving end users in the testing, to ensure that searches return the results users expect and don’t leave them confused.
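For readers wondering what a standards-compliance check might look like in practice, here is a minimal Python sketch that probes an SRU endpoint with a searchRetrieve request and verifies that the response is well-formed. SRU/CQL is one of the standards such a technical evaluation would examine; the endpoint URL and query in the example are hypothetical.

```python
# A minimal SRU 1.2 compliance probe (endpoint URL is hypothetical).
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

SRW_NS = "{http://www.loc.gov/zing/srw/}"  # SRU response namespace

def sru_hit_count(base_url, cql_query, max_records=5):
    """Issue an SRU searchRetrieve request and return the hit count."""
    params = urllib.parse.urlencode({
        "operation": "searchRetrieve",
        "version": "1.2",
        "query": cql_query,          # CQL, per the SRU standard
        "maximumRecords": max_records,
    })
    with urllib.request.urlopen(f"{base_url}?{params}") as resp:
        tree = ET.parse(resp)
    # A compliant response carries numberOfRecords in the SRW namespace.
    count = tree.find(f"{SRW_NS}numberOfRecords")
    if count is None:
        raise ValueError("No numberOfRecords element: not an SRU response?")
    return int(count.text)

# Example against a hypothetical target:
# print(sru_hit_count("https://catalog.example.edu/sru", 'dc.title = "climate"'))
```

A real test suite would run probes like this against every target database a library licenses, since uneven standards support is exactly what such testing tends to uncover.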
Next, they move to Functionality Testing, i.e., is the system doing what it should? Again, they provide a good overview of resources to consult in developing a functionality test suite. They cover leading concerns such as accurate de-duping, relevancy ranking, the taxonomies used, and how each can affect results. They very correctly note that the functionality testing team should be a truly diverse group, including people from public services, technical services, systems staff, and subject experts. They also recommend talking to sites that have already implemented federated search products (including products you’re not considering) to get a good global perspective, and they suggest working closely with the vendors under consideration to get questions answered, see demonstrations, and make use of any trial or demo systems provided. Furthermore, they show consideration for the vendors by suggesting that librarians make the most of conferences, demos, and talks the vendors already give, rather than demanding on-site visits, which only increase the cost for everyone involved. This reviewer applauds that recommendation.
Testing the search functionality becomes the next focus. They list several key issues to examine, including general functions, database-specific functions, and issues that affect staff and/or users. They express considerable concern about federated search systems returning different results than the native interfaces. As they outline, the lack of adequate standards, and of consistent implementation of the standards that do exist, is a core part of the problem. It is also worth noting that there is considerable debate over whether parity with native interfaces will ever realistically be achieved at a cost libraries can afford.
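To make that concern concrete, here is a hypothetical Python harness comparing hit counts between a federated connector and the native interface for the same query. Both search functions are stand-ins for whatever APIs a given product actually exposes; the field names and tolerance are illustrative assumptions, not anything prescribed by the chapter.

```python
# Hypothetical harness: does the federated connector's hit count track
# the native interface's count for the same query?

def compare_counts(query, federated_count, native_count, tolerance=0.05):
    """Flag queries where the federated hit count strays from the native
    count by more than `tolerance` (as a fraction of the native count).
    Both count arguments are caller-supplied functions wrapping real APIs."""
    fed = federated_count(query)   # e.g. count via the vendor's connector
    nat = native_count(query)      # e.g. count via the database itself
    drift = abs(fed - nat) / max(nat, 1)
    return {"query": query, "federated": fed, "native": nat,
            "within_tolerance": drift <= tolerance}

# A test suite would run a fixed query list against each target database:
# for q in ["heart disease", "climate change", "smith"]:
#     print(compare_counts(q, my_federated_count, my_native_count))
```

Even a crude check like this, run repeatedly, gives testers hard numbers to take back to vendors instead of anecdotes.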
Once the systems return results, the authors turn to the issues involved in sorting and relevance ranking. Again, they give an excellent overview of some of the complicated issues to be examined and analyzed, including results returned in sets, the order of returned results, and de-duping algorithms. They conclude the section on functionality testing by noting that it is “time-consuming” and, for a large academic library, would result in an “extended time-line for implementation”. Indeed.
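De-duping in particular is easy to underestimate. As a rough illustration, here is a minimal Python sketch of the normalize-and-merge logic involved; the record fields and matching rule are illustrative assumptions, and production systems match on much richer evidence (DOIs, ISSNs, fuzzy title comparison).

```python
# A minimal de-duping sketch, assuming each result is a dict with
# "title" and "source" keys (field names are illustrative).
import re

def dedupe_key(record):
    """Collapse case, punctuation, and whitespace so near-identical
    titles from different databases compare equal."""
    title = record.get("title", "")
    return re.sub(r"[^a-z0-9]+", " ", title.lower()).strip()

def merge_results(result_lists):
    """Merge per-database result lists, keeping the first copy of each
    duplicate and recording every source that supplied it."""
    seen = {}
    for results in result_lists:
        for rec in results:
            key = dedupe_key(rec)
            if key in seen:
                seen[key]["sources"].append(rec["source"])
            else:
                seen[key] = {**rec, "sources": [rec["source"]]}
    return list(seen.values())

# merged = merge_results([pubmed_hits, scopus_hits, catalog_hits])
```

Note how even this toy version forces design decisions, such as which copy of a duplicate to keep, which is precisely why the authors single out de-duping for scrutiny.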
The components of the usability testing are identified as pre-test, task-oriented, and post-test. Background on each test type is provided, along with an overview of what conducting it involves; the authors also describe how to run a usability test and the environment in which to run it. They wrap up the usability section by describing specific tasks that should be part of the testing, tasks that highlight the differences between search tools like Google and typical federated search products. The authors take a very appropriate approach by noting that to do usability testing well, one must really observe how the “actual intended users of the software interact with it.” They briefly examine the three major approaches to user testing and conclude that, for most libraries, informal usability testing is the only viable approach: use 3-4 testers, test often, and focus on what users get wrong and don’t understand. It will concern most libraries, however, that the authors note three rounds of testing typically consume up to 15 weeks.
The authors conclude the chapter by noting that, in their view, technical, functional and usability testing are key to providing users with a convenient, functional federated search tool.
What the authors have done in their description of planning and implementing their federated search system is extremely thorough. However, as they point out midway through the chapter, many libraries simply don’t have the time or resources to do this. It is unfortunate that they leave this point undeveloped, because it is precisely the issue for most readers. While a few institutions have the capacity to carry out the kind of thorough implementation described by these authors, some 80%+ of the libraries in North America are classified as medium to small. For them, while this chapter will make for interesting reading, it offers no solutions to the challenges it outlines.
Recently, there have been some attempts to standardize comparison tests and to make the results and test environments freely available to libraries (including by the authors/sponsors of this blog!). Unfortunately, those efforts have met with little success due to the lack of participation by vendors. Perhaps there is a role here for professional library organizations like LITA, or for third-party consulting firms. It will take money, but if libraries banded together it would be quite affordable, and in the end these testing resources would be usable by far more institutions. Those are issues I wish the authors had taken on, because then their extensive research would be useful to far more libraries.