I recently wrote a two-part review of “One Box to Search Them All: Implementing Federated Search at an Academic Library,” an Emerald Insight publication. Here are links to Part I and to Part II. Here is the introduction to my review:
Library Hi Tech’s first issue of 2009 includes a paper that touches on a number of issues related to the implementation of federated search in libraries. The paper is “One Box to Search Them All: Implementing Federated Search at an Academic Library,” by Ian Gibson, Lisa Goddard, and Shannon Gordon, library professionals at Memorial University of Newfoundland. Gibson is a Science Research Liaison Librarian, Goddard is the Division Head for Systems, and Gordon is a Reference and Instruction Librarian. The article is available to Emerald Insight subscribers in pre-print form and free to the public from Ian Gibson’s website.
Please note that the article is a pre-print version, meaning it is not the final version of the paper and the authors may still revise it.
In my review, I raised a number of questions, and I wrote to the authors of the paper asking them to respond. Almost immediately, Ian Gibson replied and was quite willing to engage with my questions. This morning I received an email from Ian with two attachments. The first was written by Gillian Byrne and Louise McGillis, who conducted the original usability testing that received a fair amount of attention in the paper. That attachment is included below in its entirety. The second document was written by Ian, Lisa, and Shannon, the authors of “One Box.” It responds to my points and describes what happened after they wrote the “One Box” article. This second document is too long for a single posting, so I will break it up into a couple of pieces.
Here is attachment #1. Enjoy!
The authors of “One Box to Search Them All” asked if we would like the chance to comment on the post, as many of your comments dealt directly with the usability study conducted by Louise McGillis and myself in the summer of 2006. As the study wasn’t the primary focus of the paper, it’s natural that some context was lost.
- It’s important to note that this study was designed to look at navigation, labeling and basic functionality of the SingleSearch product. It was not designed to gauge user satisfaction other than in the most basic sense, nor was it designed to be an in-depth study of how users interact with federated search products. This, I think, explains some of the inconsistencies between the performance of users in the study and those seen by reference librarians. The research process is far more complex than what participants were asked to do in the study.
- The study that we conducted was designed to evaluate both our recently purchased link resolver and SingleSearch, which explains the study’s focus on being able to interpret holdings. One of our broad conclusions was that users’ lack of success in completing the tasks owed less to SingleSearch itself than to the difficulty of deciphering availability, whether in the Catalogue, the native database interface, or the link resolver.
- Some of our study conclusions mentioned in the article could definitely be explained by a lack of user training, and in a broader sense they reflect the difficulty users have with bibliographic research in general. However, point number three relates directly to SingleSearch functionality. Users were lost attempting to get from the hit list to the actual article, and much of this was due to the lack of sophistication of the SingleSearch interface. The best example is that we were unable to hyperlink any elements of the citation, as is standard in most web search tools, and instead had to rely on buttons to lead the user to various availability options. This was not successful, and users continually had trouble leaping from SingleSearch to the correct place to obtain the full text or holdings. A larger point is this: most users can be trained to use any search product, given enough individualized training. But if a product is as inflexible and unintuitive as SingleSearch, as a librarian you will spend all your time teaching users which button to click rather than actual information literacy skills.
- Finally, speed. In my experience, academic library users expect library resources to be slower, more complicated, and less sophisticated than the tools they use in their personal lives. The most prevalent feedback we received from the study was that undergraduates, regardless of their success, failure, or the system’s slowness (and, in some cases, complete system failure), liked SingleSearch because it had the content their professors wanted them to use. Unlike, presumably, Google.
Stay tuned for the second piece of the response.
Tags: federated search