February 9, 2009

[ This is a continuation (and the final piece) of the review I started here. ]

Rather than discuss the paper section by section, I’ll highlight some of its key points.

  1. “Memorial’s current implementation of SirsiDynix Single Search was purchased through a consortial agreement without broad in-house consultation.” I wonder how many implementations get purchased without buy-in from the major stakeholders. This is not in any way a dig against SirsiDynix. Given the frequent tension between librarians and users, skipping the buy-in phase is a recipe for trouble.

  2. “Participants were generally successful in selecting appropriate categories, with few opting to choose individual resources. This suggests that these library users are attracted to searching broader categories as compared to information seeking at a micro level.” That’s what I would expect.

  3. Byrne and McGillis (2008) observed that participants required an average of 5 minutes to find an article or give up, and 4 minutes for books. They also noted that “the high number of clicks and multiple attempts indicate extreme difficulty finding holdings information.” This concerns me. I’m curious to know how much of this frustration could be relieved through user training.

  4. “In addition to excessive clicking, difficulty was exhibited in determining local holdings, and only one-fifth of users correctly interpreted catalogue book holdings (Byrne and McGillis, 2008).” Possibly another information literacy issue.

  5. “Participants were much more successful locating items in the Resolver as compared to the Catalogue, and were very likely to ditch the article if full-text could not easily be found (Byrne and McGillis, 2008).” Interesting, but not surprising.

  6. “All participants entered the topic exactly as written meaning that no sophisticated search strategies or Boolean operators were used (Byrne and McGillis, 2008).” Yes, it’s an information literacy issue.

  7. “Interestingly enough, load speed was not mentioned in the post-test even though it appeared to be an issue to study designers based on recorded screen activity (Byrne and McGillis, 2008).” This is very interesting. I’ve always assumed that all users expect federated search to be as fast as Google. I guess that’s not everyone’s expectation.

  8. “Those libraries that have been willing to report on their implementation of federated searching applications have described missed deadlines, soft launches and compromises made along the way” (Warren, 2007, p. 258). I’m curious to know more about the major obstacles.

  9. “At a minimum, libraries should plan for at least one librarian to work on the implementation for 6-12 months (Elliott, 2004).” I’m interested to know how long different parts of the journey toward implementation take.

  10. “Resource selection is a delicate balance between speed and comprehensiveness, and as more resources are searched, result retrieval takes more time. Some vendors limit how many resources can be simultaneously searched to try to improve speed (Walker, 2007).” I found this comment to be a curious one. If sources are searched in parallel, and if slow sources are timed out, why should having more sources slow down a user’s search? (I’ve sketched what I mean just after this list.)

  11. “For resources not compatible with federated search, the library can either use HTML ‘screen-scrape’ or omit these resources altogether … If the library chooses to screen-scrape, the connectors constantly break and have to be changed whenever the native interface changes (Hollandsworth and Foy, 2007; Marshall, Herman and Rajan, 2006).” I don’t see this as an either/or. If a resource is important enough to an institution then the federated search vendor should provide it, whether it needs to be screen-scraped or not. Yes, one of the necessary evils of connector management is re-scraping sources. Let your vendor do the work.

  12. “A general problem is that users rarely see the navigation elements or advanced features and when they do see them they don’t understand the symbols (George, 2008; Mestre et al., 2007; Ponsford and vanDuinkerken, 2007; Elliott, 2004; Wrubel and Schmidt, 2007). Using the web browser’s navigation can often cause more problems. Another serious interface issue is the display of results. Sufficient information is often unavailable for a patron to determine whether the result is a book, journal article, or some other type of item in a collection (Ponsford and vanDuinkerken, 2007; Boock, Nichols and Kristick, 2006; Walker, 2007; Wrubel and Schmidt, 2007).” This is very interesting. Federated search vendors should all sit up and take notice.

  13. “The second major approach to federated search is to harvest all of the relevant sources of data, normalize them into a single metadata schema, and index all of them together in one large union index.” I’m sorry, but indexing is not federated search. (I’ve sketched this harvest-and-index approach below as well, for contrast.)

  14. The Federated Search Marketplace section includes a very nice summary of each vendor’s implementations and its relationship with other vendors to produce integrated products.
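
Back to my question on point 10: here is a minimal sketch, in Python, of how I picture parallel searching with a per-source timeout. The source names, the search_source() function and the random delays are all made up for illustration; this is not any vendor’s actual code. The point is simply that when every source is queried at once and slow sources are cut off at a fixed timeout, the user’s wait is bounded by that timeout, not by the number of sources searched.

```python
# A toy federated search fan-out. Everything here is hypothetical:
# the SOURCES list, search_source() and the delays stand in for real connectors.
import concurrent.futures
import random
import time

SOURCES = ["Source A", "Source B", "Source C", "Source D", "Source E"]
TIMEOUT_SECONDS = 3.0  # sources slower than this are dropped from the result set


def search_source(source, query):
    """Pretend to query one remote source; latency varies per source."""
    time.sleep(random.uniform(0.5, 6.0))  # simulated network/database delay
    return f"{source}: results for '{query}'"


def federated_search(query):
    """Query every source in parallel and keep whatever answers within the timeout."""
    pool = concurrent.futures.ThreadPoolExecutor(max_workers=len(SOURCES))
    futures = [pool.submit(search_source, s, query) for s in SOURCES]
    done, _not_done = concurrent.futures.wait(futures, timeout=TIMEOUT_SECONDS)
    pool.shutdown(wait=False)  # don't make the user wait for the stragglers
    return [f.result() for f in done]


if __name__ == "__main__":
    start = time.time()
    hits = federated_search("information literacy")
    print(f"{len(hits)} of {len(SOURCES)} sources answered in {time.time() - start:.1f}s")
```

A run of this prints something like “3 of 5 sources answered in 3.0s” no matter how long the source list gets, which is why I suspect the slowdown Walker describes comes from elsewhere, perhaps from waiting to merge, rank and de-duplicate every result set before showing the user anything.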
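
And on point 13, the harvest-and-index approach the authors describe looks roughly like the sketch below, which is why I don’t consider it federated search: the query never goes out to the live sources at all. The fetch_records() function and the field names are hypothetical placeholders, not any real harvesting API.

```python
# A toy "harvest and index" pipeline, as contrasted with live federated search.
# fetch_records() and the field names are hypothetical placeholders.

def fetch_records(source):
    """Stand-in for periodically harvesting one source's metadata records."""
    return [{"ttl": f"Record {i} from {source}", "yr": 2008} for i in range(3)]


def normalize(record, source):
    """Map a source-specific record into one shared metadata schema."""
    return {"title": record["ttl"], "year": record["yr"], "source": source}


def build_union_index(sources):
    """Harvest everything up front and build a single searchable union index."""
    index = []
    for source in sources:
        index.extend(normalize(r, source) for r in fetch_records(source))
    return index


def search(index, term):
    """Searching happens against the local index, not the live sources."""
    return [rec for rec in index if term.lower() in rec["title"].lower()]


if __name__ == "__main__":
    union_index = build_union_index(["Source A", "Source B"])
    print(search(union_index, "record 1"))
```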

In this review I’ve touched on some of the many topics included in this paper. There’s a good amount of experience from a number of institutions packed into 18 pages. The paper is a good read. I’ll see if I can get the authors to respond to some of my comments and to tell us how things evolve at Memorial.

Update 2/20/09: See the authors’ responses to my comments.


4 Responses so far to "Review: One Box to Search Them All (part II)"

  1. Anonymous
    February 13th, 2009 at 10:42 am

    Hello,

    Interesting article, or rather a summary of the article. I’m wondering if you could provide the bibliographic information for the citations in point 12 …

  2. Sol
    February 13th, 2009 at 12:09 pm

    Hi, here are the references for point 12:

    George, C.A. 2008, “Lessons learned: usability testing a federated search product”, The Electronic Library, vol. 26, no. 1, pp. 5-20.

    Mestre, L.S., Turner, C., Lang, B. and Morgan, B. 2007, “Do We Step Together, in the Same Direction, at the Same Time? How a Consortium Approached a Federated Search Implementation”, Internet Reference Services Quarterly, vol. 12, no. 1, pp. 111-132.

    Ponsford, B.C. and vanDuinkerken, W. 2007, “User Expectations in the Time of Google: Usability Testing of Federated Searching”, Internet Reference Services Quarterly, vol. 12, no. 1, pp. 159-178.

    Elliott, S.A. 2004, Metasearch and Usability: Toward a Seamless Interface to Library Resources, University of Alaska, Anchorage, AK.

    Wrubel, L. and Schmidt, K. 2007, “Usability Testing of a Metasearch Interface: A Case Study”, College and Research Libraries, vol. 68, no. 4, pp. 292-311.

    Boock, M., Nichols, J. and Kristick, L. 2006, “Continuing the Quest for the Quick Search Holy Grail: Oregon State University Libraries’ Federated Search Implementation”, Internet Reference Services Quarterly, vol. 11, no. 4, pp. 139-153.

    Walker, D. 2007, “Building Custom Metasearch Interfaces and Services Using the MetaLib X-Server”, Internet Reference Services Quarterly, vol. 12, no. 3, pp. 325-339.

  3. Ian Gibson
    February 17th, 2009 at 7:39 am

    Hi,

    I assembled most of the bibliography for this article. I highly recommend the articles by George and Elliott. Elliott’s paper is available @ http://consortiumlibrary.org/staff/tundra/msuse1.pdf

    Ian

  4. Sol
    February 20th, 2009 at 11:29 am

    Everyone, Ian Gibson has made a copy of the “One Box” pre-print available. I’ve added the link to part I of my blog article.
