Sequels to successful Hollywood movies (not to mention prequels) are notoriously hit and miss. For every The Empire Strikes Back, Aliens, and Wrath of Khan you have The Phantom Menace, Alien Resurrection, and The Final Frontier. (Yes, I do watch a lot of sci-fi. Why do you ask?)

User Interfaces has just produced our very own sequel, a companion to our thrilling 2011 debut, Resource Discovery Systems at the UNT Libraries. We call this one: Resource Discovery Systems at the UNT Libraries: Phase Two Action Plan.

Think of it as more of a continuation than a full-blown sequel. The original outlined a grand, multi-phased RDS implementation vision and then detailed an action plan for just the first phase. After Phase One, we said that we would revisit our vision, update it based on our experiences, and develop a concrete action plan for Phase Two. As astute readers may have already guessed, this is that plan. We hope you find it worthy of its predecessor, more Part 2: The Users Strike Back than Part 2: Discovery Boogaloo.

  • Download the Phase Two Action Plan Now!

  • Go ahead, download it, read it! It’s only about 10 pages of actual content, with some pictures. Or, at least read the Executive Summary if you’re short on time.
  • Although we are pleased with it and feel comfortable presenting it to you as a finished product, this is still in draft form! Before finalizing it, we wanted to put it forward to library employees for comment. So after you’ve read through it, please tell us your thoughts in the comments. (I’ve started off the discussion by addressing a few comments and questions that we’ve already gotten.)
  • If you haven’t read the original RDS Report, or if you just want to brush up on it before tackling Phase Two, you can get it here. (We have a publicly accessible version here, but it omits the Phase One Action Plan at the end.)


5 Responses to “Resource Discovery Systems at the UNT Libraries, Phase 2: The Users Strike Back”

  1. Anonymous

    On page 4, the Phase Two Action Plan states: “Through our work in Phase One we have found that Sierra’s more-open architecture affords us flexibility in how we query and extract data from the ILS that we didn’t have with Millennium.”

    The question is: “Can we have some examples? Will this help with stats gathering?”

    • jjt0005

      The main relevance to the report is that this lets us more easily create our own add-ons or build systems that can interoperate with Sierra. One example is the system that Lib TACO recently put into place to handle requests for Remote Storage items. As you probably know, we recently implemented online holds in the catalog. But Lib TACO manages the materials at Remote Storage using their own database rather than the ILS. Instead of having to deal with printing paging lists in Sierra whenever Remote Storage items are requested, Lib TACO just added a component onto their homegrown inventory management Web app (written in PHP, I think). Now their system sends an SQL query to Sierra’s database to pull back information on holds for items at Remote Storage locations. It grabs a list of current holds, barcodes, dates the holds were placed, expiration dates, item statuses, etc. Then it updates their system based on the information it gets from Sierra in order to tell staff which items need to be pulled.
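
      Just to make that concrete, here’s a rough sketch of the kind of query such a component might send. To be clear, this is my own illustrative example, not Lib TACO’s actual code, and the table and column names (holds, items, and so on) are hypothetical placeholders rather than Sierra’s real schema names.

        -- Illustrative sketch only: table and column names are hypothetical
        -- placeholders, not Sierra's actual schema.
        SELECT h.hold_id,
               i.barcode,
               h.placed_date,
               h.expiration_date,
               i.item_status
        FROM holds AS h
        JOIN items AS i ON i.item_id = h.item_id
        WHERE i.location_code LIKE 'rs%'  -- assumed prefix for Remote Storage locations
        ORDER BY h.placed_date;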

      On Millennium, this sort of thing was just not possible, as there was no back-end database access. There were kludges and workarounds that people had developed, but none of them were as simple, flexible, or reliable as straight-up SQL access. Now we can write programs that have real-time access to any of the data in Sierra’s database. If we wanted to, we could write programs to extract the data and store it in a different database or index.

      So does this actually help with stats gathering? It depends on what kind of stats you’re talking about. Data we can currently get out of the database includes: any fields (fixed-length or variable-length) attached to any records; current circulation transactions (like currently checked out items, current holds) and associated fields; acquisitions-related transactions; and a bunch of system properties (like system codes and some configuration details). The one thing that we can’t get is historical data on circulation-related use, other than the aggregate count fields that are attached to item records. A lot of the types of data you can get from Web Reports (where you break down different types of circ transactions by date, across patron types, by terminal, etc.) just aren’t available in the database–at least not yet. It may be something that III is working on making available via the database or via some sort of API.
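
      Just as a quick illustration of the aggregate-count side of that (again, the table and column names here are placeholders I’m using for the example, not Sierra’s actual schema), a lifetime-checkouts-by-location report could be as simple as:

        -- Hypothetical example: names are placeholders, not Sierra's real schema.
        SELECT i.location_code,
               SUM(i.total_checkouts) AS lifetime_checkouts
        FROM items AS i
        GROUP BY i.location_code
        ORDER BY lifetime_checkouts DESC;

      The catch, as noted above, is that a lifetime total like this only tells you how often an item has circulated overall, not when or across which patron types, which is exactly the kind of breakdown Web Reports gives you and the database currently doesn’t.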

  2. Anonymous

    Pages 5-8 describe the “Bento Box” display concept, which serves as the basis for Objective 4 in the plan–developing a bento-box-style search display to power an “Everything” search.

    The question is: “Do they have data on how users react to this type of searching? Will they do a usability study on this before it goes live? (Will we be able to test it out before it goes live?)”

    • jjt0005

      I’ll start with the last two parts of this question, because they have the shortest answers. Yes, and yes. We definitely plan to incorporate usability testing into the development of the new display, and we also plan to give librarians and staff plenty of time to view, test, and comment on it before it goes live. If we don’t, you’re welcome to come up here and pelt me with rotten tomatoes or something. Seriously–we wouldn’t want to release something to the public that doesn’t work or that people hate. And, of course, we’ll want to give public-facing librarians a chance to get a feel for the new system so that they can help answer patrons’ questions, update their instructional materials, etc.

      Back to the first part: do we have data on how users react to this type of searching? NCSU Libraries (used as the example of the quintessential “bento-box” display in the action plan) have published a few papers that do include some user data about their display. If you’re interested, I’d recommend taking a look at: Lown, C., Sierra, T. and Boyer, J. (May 2013). “How Users Search the Library from a Single Search Box.” College & Research Libraries. 74(3), pp. 227-241.

      On the flip side, since–let’s be real here–we’re comparing the “bento box” style to the “blended results” style (where you have one set of results containing journal articles, catalog results, etc.), there have been a few studies that imply to one degree or another that blended results confuse users. Even some studies that have otherwise been glowing about the efficacy of blended-results displays have had a footnote stating that users expressed confusion when they encountered a result for a book alongside a result for an article. (I’m trying to find the one that I read the other day, and I can’t find it now. I should have bookmarked it when I ran across it. Yes, I see the irony.)

      I also want to mention that the action plan references a blog post from Jonathan Rochkind that presents a position paper he wrote for Johns Hopkins University on this topic. He highlights several pieces of data from various studies that seem to support the idea that users don’t necessarily react well to blended results. I thought it was pretty convincing, and it very closely matches my own experiences and impressions. I’d recommend giving it a read. (Skip down to “New Options for Improving Article Search” if you don’t want to read the whole thing.)

      Bottom line: at this stage of development, we have limited choices in how we set up our display. We can force users to choose a particular system before they search, which means they either need to understand what they’re searching before they make a choice, fumble around and try to figure it out by trial and error, or just use the default search every time regardless of whether it will give them what they want. Or, we can try to offer them some type of Everything search. The blended-results style is (with a system like Summon) easy and popular, but, as we outlined in our original RDS Report and in the Phase 2 Action Plan, there are both strategic and user-centered reasons that it might not be the best choice. Bento-box is certainly not the end-all, be-all of discovery interfaces–but it does address the strategic qualms we have with blended results, and it seems to address the usability qualms as well. At the very least, no user data has been published (that I’m aware of) suggesting users dislike or have trouble with such displays. Taking all of it together, bento box seems like a good, logical step forward. If we discover significant usability problems during development, you can be sure we’ll reevaluate (and certainly publish our findings!). :-)

      • jjt0005

        I just wanted to make a quick addendum to my last comment. I went back to try to find a few of the articles I’d found previously that specifically pointed out difficulties some users had with blended results displays, as I hate making seemingly unsupported assertions. This isn’t exhaustive, but here are a few.

        Promise Fulfilled? An EBSCO Discovery Service Usability Study

        See Page 194:

        “One commonly touted improvement of discovery tools over federated search engines is that discovery tools combine catalog and database content better than federated search (Notess 2011; Wisniewski 2010). In addition, Jason Vaughan (2011) noted discovery tools will be extremely helpful with the perennial problem of users trying to find article titles in the library catalog. While this is likely true, some participants’ experience with Scenario 3 demonstrates that users still must understand differences in titles and in the systems used to search them. Four participants did not succeed with Scenario 3, and one reason was that participants struggled to differentiate between a poem title and a book title. Addressing this difficulty was outside the scope of this project, but it may be another indicator that instruction would be helpful with discovery tools.”

        Web scale discovery: the user experience

        See Page 10:

        “Once into Library One Search, most students did not encounter difficulties, and they stayed in this environment to answer all subsequent search questions in the study. However, it was obvious from observing them that they did have trouble interpreting the screen results and understanding the differences between different formats. For example, in the Library OneSearch results list display, students were confused between the record of a book, and the record of a book review. … Students in the usability study were confident with the user interface, but somewhat perplexed by the search results.”

        Finally, I have some notes I wrote for myself after watching a webinar that Serials Solutions put on about the upcoming Summon 2.0 this past April. At around the 27-minute mark, the presenter mentions that Serials Solutions’ own usability testing has shown that users have some trouble differentiating among content types when browsing the results list. It sounds like their solution is to better group certain content types and make them easier to distinguish visually in the results list–to make it more browsable. In a way it’s a step in a more-bento-y direction while still sticking with the single results list. But I question whether it will actually address the underlying issue.
