A statement in a blog post at Science Library Pad caught my attention. The post, titled “availability, discovery, and delivery - redux,” focuses on the question of how well researchers are able to access the full text of documents they find in search results. The author sees this as a major problem and makes this attention-getting statement:

I’m not convinced that we’re doing a particularly good job of addressing these fundamental challenges even after years of working on proxies, federated search, link resolvers, and “live in your environment” plugins and external website settings.

For those who aren’t familiar with proxies, I wrote about proxy servers and federated search in February. Link resolvers, also called URL resolvers, deserve a post of their own, but here’s the gist of what they do: when a user performs a search, sees a result list, and clicks on a result to view a scholarly article, the link resolver intercepts the URL the user is sent to and, where possible, replaces it with a link to a version of the document that the library has licensed rather than the original “for pay” link.
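To make that gist concrete, here is a minimal sketch of the substitution a link resolver performs. Everything here is my own illustration: the `LICENSED_SOURCES` table, the function name, and the URLs are hypothetical stand-ins for the knowledge base of licensed holdings that a real resolver product consults.

```python
from urllib.parse import urlencode

# Hypothetical table mapping journal ISSNs to the library's licensed
# full-text platform; a real link resolver consults a knowledge base
# with coverage dates and entitlement rules, not a flat dict.
LICENSED_SOURCES = {
    "1234-5678": "https://resolver.example.edu/fulltext",
}

def resolve_link(issn, doi, publisher_url):
    """Return a link to a licensed copy when the library has one;
    otherwise fall through to the publisher's (possibly paywalled) URL."""
    base = LICENSED_SOURCES.get(issn)
    if base is None:
        return publisher_url
    # OpenURL-style query string carrying the citation metadata.
    return base + "?" + urlencode({"issn": issn, "doi": doi})

# A licensed journal is rewritten to the library's platform.
print(resolve_link("1234-5678", "10.1000/xyz", "https://pub.example.com/a"))
# An unlicensed journal falls through to the publisher link.
print(resolve_link("0000-0000", "10.1000/abc", "https://pub.example.com/a"))
```

The key behavior is the fall-through: when no licensed copy is known, the user still gets a link, just not the one the library would prefer.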

My interest has been in technologies for bringing users and content closer together. I hadn’t considered the possibility that these technologies aren’t working well enough. Of course, other factors beyond the technologies could be in play. And the author’s statement is just one data point; he also qualifies his view with “I’m not convinced,” which allows for the possibility that his experience isn’t universal.

One issue I’m aware of regarding delivery is that some sources simply don’t provide links to documents in their search results, as odd as that may seem. I can see how this would frustrate users. Link resolvers could perhaps get smarter in such cases; ideally, though, the content provider would fix this at its end.

The blog post also got me thinking about how I’ve gotten hooked into the mindset that everything on the Internet should be free. I am reluctant to pay for information online even though there is plenty of high-quality content worth paying for. It’s the same mindset that leads me not to have cable television - TV should be free, right? And I certainly don’t expect to pay for content from my local library. So I’m wondering whether some researchers’ experiences aren’t tainted by bad feelings about being asked to pay for an article because there just isn’t a licensed copy available.

The post makes the distinction between discovery and delivery. Discovery is about making sure that researchers can identify relevant documents. Delivery is about getting them the full text of those documents. Which should libraries focus on first, discovery or delivery? Which is harder to do? The author believes delivery should come first and that discovery is harder to do.

While the issues raised at Science Library Pad are not flagged as federated search issues, federated search environments are where they will be noticed most prominently: federated search engines have to contend with content from a number of different sources, each with its own access and authentication methods.

I’m interested to hear from the library community. Does federated search get you and your users to the most appropriate source of content most of the time? Does it first get you to content that your institution has licensed? Does it then point you to a free copy elsewhere if a licensed version isn’t available and a free one happens to be found? Does it send you to a “for pay” copy as a last resort?
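The preference order implied by those questions (licensed first, then a free copy, then a “for pay” copy as a last resort) can be expressed as a tiny ranking function. The category names and data shapes below are my own illustration, not the behavior of any particular federated search product.

```python
# Each candidate copy of a document is tagged with how the user would
# get it: "licensed" (covered by the institution), "free" (an open
# copy found elsewhere), or "paid" (publisher paywall). Lower rank wins.
PREFERENCE = {"licensed": 0, "free": 1, "paid": 2}

def choose_delivery(candidates):
    """Pick the most preferred (url, kind) pair from a result's
    candidate links, or None if no usable link was found at all."""
    viable = [c for c in candidates if c[1] in PREFERENCE]
    if not viable:
        return None
    return min(viable, key=lambda c: PREFERENCE[c[1]])

hits = [
    ("https://publisher.example.com/buy", "paid"),
    ("https://repo.example.org/preprint.pdf", "free"),
]
print(choose_delivery(hits))  # the free copy wins over the paywall
```

The hard part, as the comments below make clear, is not ranking the candidates but discovering and trusting the “free” ones in the first place.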


This entry was posted on Tuesday, April 29th, 2008 at 8:25 am and is filed under viewpoints. You can follow any responses to this entry through the RSS 2.0 feed. You can leave a response, or trackback from your own site.

2 Responses so far to "Does your delivery deliver?"

1. Jonathan Rochkind
    April 30th, 2008 at 1:41 pm  

    I’m working on similar issues in my own library. The answers in general are “no, it doesn’t work as well as it should.”

    A big barrier I’ve run into, as I try to write this software, is the difficulty, or impossibility, of identifying a free copy of a paper that someone found in a search and that may exist online in both free and for-pay versions. There’s really no way I can find for my software to discover the free version and know that it’s a free version. So the “happens to be found” part is trickier than it sounds.

2. Peter Noerr
    May 1st, 2008 at 6:12 pm  

    Two points highlighted in the reply from Jonathan intrigue me.

    Firstly, the existence of both a free and a for-fee copy of the same article seems a bit unlikely. I can imagine a case where it could happen: private pre-publication on the author’s personal web site before publication in a journal. That’s not exactly the same thing bibliographically, but the content is the same, so ’same enough’. In general, though, the only way I can see an originally for-fee article becoming free is if it moves into the gray area of fair-use copying and posting. And for that we can hold our collective breath and see what Georgia has to say to the publishers.

    The second, more technical, interest is how this free copy “happens to be found”. I agree with Jonathan that finding such animals is tricky. They obviously don’t exist in the search indexes of the major content providers (sort of bad for business to offer the same thing for money and for nothing - which would _you_ choose?). That means to find them you have to look in lots of small places - personal booklists (article lists?), project repositories, personal websites, and so on. These small locations are initially unknown, then difficult to connect to (quite possibly home-grown systems), and then only likely to be of use for vertically specific searches (they are very ‘long tail’). So you end up federating across the generic web search engines to find search engines that should have the sort of content the user is searching for. Hurray! We have meta-meta-search. So far this form of source discovery is very arcane and seems to be a bastion of those totally unreliable finding aids called humans. Until the semantic web abounds, I think it will stay the province of “I know a site that has …”
