Summon

  1. Why did we purchase Summon? What is its purpose?
  2. Can patrons limit their searches to select databases in Summon?
  3. How much of UNT’s Electronic Resource material is available for searching in Summon? Will that number improve in the future?
  4. Can remote users get access to all of the same material in Summon as on-campus users? Are there any issues with remote access?
  5. Where does Summon’s data come from?
  6. Will we be indexing our catalog data in Summon as well?
  7. How does Summon decide what resources to rank as more relevant than others?

Resource Discovery

  1. What exactly do you mean when you say “resource discovery”?
  2. Improving resource discovery means making our discovery experience more like Google’s. Right?
  3. What are you working toward? What’s the plan? What will it look like when it’s done?

 

Summon

Why did we purchase Summon? What is its purpose?

Summon is a “Web-scale discovery system,” a relatively new sort of tool for helping patrons find (or “discover”) library resources in ways that were difficult–or flat-out impossible–with our traditional tools. We purchased and implemented Summon mainly to fill a gap in our resource discovery infrastructure: quick, comprehensive discovery of full-text articles that doesn’t require any knowledge of specific databases. Summon’s index combines article-level metadata and full text for the majority of our subscription e-resources in a single bucket, allowing users to search for articles just as they would within a single database, but without having to know which database(s) to use.

By implementing Summon, we intended to fulfill an unmet need–not to replace any of the traditional components of our online discovery infrastructure. Individual databases and e-journals are still available and accessible from their respective interfaces and the catalog. For deep research, we should certainly continue to point people to the appropriate databases and other resources for their subject area.

But not all research tasks require depth. Patrons may not want to start their research with a particular database, or they may wish to search broadly before they search deeply. For these situations, Summon offers some new possibilities.

  • Sometimes people may need to research a topic or subject area with which they’re not very familiar. We might prefer that they take the time to read a pertinent subject guide, learn the appropriate databases to use, and go about their research that way. But, unless they’re interested or highly motivated, most people will not do more work than is required. This tool helps these people find some (hopefully) relevant resources where they might otherwise stubbornly search the wrong place and not find anything useful. For example:
    • New students (especially undergraduates) who are not familiar with the library or with the subject that they need to research.
    • Experienced researchers who are researching a subject area that is new to them–perhaps for interdisciplinary research.
  • Going straight to a particular database is not always the best search strategy. Tackling a research task by starting as broadly as possible and then narrowing as you go is perfectly valid, but it’s not a strategy that existing library tools support well. Search engines on the Web and on retailer websites, on the other hand, not only support this method but actively encourage it. For people who prefer the broad-to-narrow approach, this tool helps fill that void. For people who prefer more precise, methodical approaches, this tool might not be for them.
  • Even if you’re a serious, experienced researcher who knows your subject area well, knows which databases to use, and is generally happy with how existing library tools support these activities, Summon can help you discover new things and broaden your horizons. Since Summon doesn’t limit you to one database or even a handful, it lets you cast a very wide net. And, since searches are quick and easy, you don’t have to worry about investing a lot of time going down bunny trails. It’s relatively low risk and high reward.

Like any other tool, Summon has its place–but it’s not going to satisfy every need or be perfect for every user. It’s one piece of the puzzle that helps us to provide a more satisfying, more complete resource discovery experience to our patrons. UI’s task is to fit this piece together with all of the others so that our patrons can get a complete picture and select the right tool for the job at hand.

Back to top

 

Can patrons limit their searches to select databases in Summon?

Effectively, the answer is no–there is no way to limit searches by database. People who wish to search particular databases should go to the databases directly (e.g., via the Database A-Z list).

There is a parameter that you can send in the URL that will limit results by resource, but to use it you have to, 1. know that the parameter exists and what it is, and 2. know the Serials Solutions codes for the resources you want to search. I think it’s intended to be used for testing purposes only. (A sketch of what such a URL might look like follows.)
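For the curious, here is a minimal sketch of the kind of URL this involves. The hostname, parameter names, and resource codes below are placeholders based on our reading of the Summon URL structure; we haven’t verified them against Serials Solutions’ documentation, so treat this purely as illustration.

```python
from urllib.parse import urlencode

# Sketch only: the hostname, parameter names ("s.q", "s.fids"), and
# resource codes are placeholders -- verify against Serials Solutions'
# documentation before relying on any of them.
BASE_URL = "http://example.summon.serialssolutions.com/search"

def limited_search_url(query, resource_codes):
    """Build a Summon search URL limited to specific resources."""
    params = {
        "s.q": query,                        # the search query
        "s.fids": ",".join(resource_codes),  # comma-separated resource codes
    }
    return BASE_URL + "?" + urlencode(params)

print(limited_search_url("higher education funding", ["FID123", "FID456"]))
```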

Back to top

 

How much of UNT’s Electronic Resource material is available for searching in Summon? Will that number improve in the future?

Based on the initial analysis of UNT Libraries’ electronic resource content done by Serials Solutions in October 2011, Summon (at that time) covered ~92% of our e-resources. Summon coverage continually improves as new content partners sign up and are added to the index, so the 92% figure should increase as time goes by. Serials Solutions’ coverage analysis document states, “Serials Solutions works closely with our Summon clients to prioritize acquiring new content whether it is from paid sources or openly available sources. We welcome libraries suggesting additional open access content as this can usually be added to the index quite easily. Paid content requires negotiation with vendors, but is also added at a rapid pace.”

UPDATE 04/2013: At the time that we were implementing Summon (i.e., January 2012), Serials Solutions estimated that they had content from 6,800+ publishers and 94,000+ individual journal titles. Now their data shows they have content from 7,250+ publishers and 136,000+ individual titles. That’s a 6.6% increase in the number of publishers and a 44.7% increase in the number of titles indexed in 15 months.

If users cannot find what they’re looking for in Summon, they can still access databases or e-journals individually via the database and e-journal A-Z lists.

You can view the content analysis that Serials Solutions provided us here: https://ui.library.unt.edu/project-manager/documents-sharing/1303 (note that you will have to log in first to view the document). If you would like to see a more thorough analysis showing exactly what journal titles UNT subscribes to that aren’t covered in Summon, please send an email request to UI. We can only share the list with people affiliated with UNT.

To see more information about Summon’s content and coverage, please visit Serials Solutions’ website.

Back to top

 

Can remote users get access to all of the same material in Summon as on-campus users? Are there any issues with remote access?

Remote users can access most of the same material in Summon that on-campus users can; they just have to authenticate through the library’s proxy server first. Because our proxy sets a cookie on their machine, they only have to sign in once per session to access full-text materials. The only difference off-campus users of Summon should notice is that they will not see Web of Science results or citation counts embedded in Summon search results.
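To make the mechanics a little more concrete, here is a minimal sketch of the link-rewriting pattern that library proxies commonly use. The proxy hostname is a made-up placeholder, and this illustrates the general pattern, not our exact configuration.

```python
from urllib.parse import quote

# Hypothetical proxy prefix: the actual hostname for UNT's proxy server
# differs. This just illustrates the common pattern of routing
# full-text links through a library proxy for authentication.
PROXY_PREFIX = "https://libproxy.example.edu/login?url="

def proxify(resource_url):
    """Rewrite a full-text link so that remote users authenticate first.

    After the user signs in once, the proxy sets a session cookie, so
    later proxied links resolve without prompting again.
    """
    return PROXY_PREFIX + quote(resource_url, safe=":/?&=")

print(proxify("https://publisher.example.com/article/12345"))
```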

When we first released Summon in February 2012 we had lots of issues with remote access, especially for Internet Explorer users. Those issues were resolved within a couple of weeks. If you know or hear about off-campus users who are still having trouble accessing full-text resources via Summon, please ask them to submit a ticket so that we can troubleshoot with them.

Back to top

 

Where does Summon’s data come from?

Data in the Summon index comes directly from content providers (7,250 publishers and 136,000+ journal and periodical titles)–not from database providers or other content aggregators like EBSCO. This insulates Serials Solutions and Summon against problems like this.

Back to top

 

Will we be indexing our catalog data in Summon as well?

No, there are no immediate plans to do so. Our RDS Report describes why.

  1. Web-Scale Discovery Systems (like Summon) are, by nature, proprietary. We should be careful about coupling all of our content and our entire RDS strategy to a single proprietary system.

    From the RDS Report, Literature Review, Observations, Page 17:

    At this point in time, using such a system [a proprietary Web-Scale Discovery product] to serve as a single access point might very well be putting all of our eggs into one basket, but—if used as one component within a larger resource discovery framework—it would give our users much-needed article-search capabilities without tying our entire discovery strategy to one system. It would give us the flexibility to continue working toward making a genuine single-access-point search a reality without being beholden to what one vendor will or will not allow. 
     
  2. The ultimate goal of indexing our catalog in Summon would be to use it as the single access point for our library’s materials. But even if we indexed the catalog in Summon, there would still be materials that we could not index there. Any single search based entirely on Summon would be incomplete, and the single results set presented by Summon would make it difficult (if not impossible) for users to understand what might be missing.

    From the RDS report, Institutional Data, Data Analysis, Page 19:

    Because RDSes only have partial coverage of library resources, what a particular RDS searches—and how to present that information to users—becomes a big issue. Although a single-access-point RDS for libraries sounds great on paper, in practice it requires additional qualification about what’s being searched as well as supplemental access points (e.g., to databases and e-journals) to shore up the weaknesses. We haven’t seen any user studies that address this, but we would guess that this reduces the effectiveness of the single-access-point search. Web-Scale Discovery Systems are a big step forward from the information silos of libraries’ past—but they are not yet able to provide a single-search experience on par with Google. 
     
  3. The phased model we developed and outlined in our RDS Report deliberately keeps our catalog and Web-Scale Discovery System indexes separate. The plan is to integrate our discovery systems at the interface layer rather than to combine the indexes. This helps us quarantine the systems that are entirely vendor-controlled and retain as much control as possible over how we present our data to users.

    From the RDS report, Recommendations, Our Vision: The RDS Implementation Model, Page 26:

    The first step—phase one—will have us deal with the weakest component of the existing framework: the electronic-resources search. Current-generation Web-Scale Discovery Systems could actually do what an electronic-resources search implies: search across a wide array of individual articles. Although such a system—both the application and the data—would be closed-source and vendor-controlled, the functionality that it would provide out-of-the-box would justify incorporating it into our model. Furthermore, at this stage we would lessen the effect of that issue in two ways. First, we would select a system that provides a fully-functional API that would give us flexibility in the future, at least at the application layer. Second, we would refrain from incorporating our catalog data into the system. Though this would prevent us from offering a single-search solution at this point, we contend that such solutions are not yet tenable. They do not actually offer a single search of all resources; they obscure too much from end-users; and they would place us on a path of putting our data into systems in which a vendor controls the content and the system.

    And Page 28:

    In phase three, we begin our own development at the application layer. It may be unlikely that vendors of Web-Scale Discovery Systems would ever allow third parties direct access to their data, but a good API would allow us to incorporate the system’s functionality more fully into our existing applications. Hooking the Web-Scale Discovery System and the Discovery Layer applications together would, for instance, allow us to provide a high degree of consistency to the end user, even if we retain separate Books and Articles searches.

When we originally wrote our RDS Report, although we recommended against loading our catalog into Summon, internally we were still entertaining the idea of experimenting with it just to see if there would be benefits—if we could do it easily. But, since writing our report, the landscape—externally and internally—has continued to evolve, and much of this evolution has actually supported our initial findings and reasoning on this topic. A growing body of evidence from usability testing suggests that results combining article-level items and catalog items are confusing—that users prefer these two basic types of things to be kept separate in our interfaces. Our own testing during development of our new website showed this very clearly. There has also been growing interest in what has been termed “bento box” style search results interfaces, where the top results from a variety of sources are combined at the interface layer on the fly and presented in separate, clearly labeled boxes on one screen (Articles results, Catalog results, Database results, and so on). There is a growing consensus that this is the current best-of-breed approach to providing a search box that searches all library resources, and it wouldn’t require indexing our catalog in Summon.
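As a rough illustration of the “bento box” idea, the sketch below fans a single query out to several sources and keeps each source’s top results in its own labeled box. The three backend functions are hypothetical stand-ins; a real implementation would call each system’s actual search API.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-ins for real backends. In practice each function
# would call the corresponding system's search API (e.g., an articles
# index for articles, the catalog's own search API for books).
def search_articles(query):
    return [f"article about {query} #{i}" for i in range(1, 11)]

def search_catalog(query):
    return [f"catalog record about {query} #{i}" for i in range(1, 11)]

def search_databases(query):
    return [f"database matching {query} #{i}" for i in range(1, 11)]

SOURCES = {
    "Articles": search_articles,
    "Catalog": search_catalog,
    "Databases": search_databases,
}

def bento_search(query, per_box=5):
    """Fan one query out to every source, keeping results separated.

    Each source gets its own clearly labeled box of top results rather
    than being merged into a single ranked list.
    """
    with ThreadPoolExecutor() as pool:
        futures = {label: pool.submit(fn, query) for label, fn in SOURCES.items()}
        return {label: f.result()[:per_box] for label, f in futures.items()}

for label, results in bento_search("water rights").items():
    print(label, results)
```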

(For more discussion about the “combined library search” idea, see the FAQ question, Improving resource discovery means making our discovery experience more like Google’s. Right?)

Back to top

 

How does Summon decide what resources to rank as more relevant than others?

Summon uses a relevance-ranking algorithm developed by Serials Solutions. Full-text items receive a static rank based on content type, publication date, scholarly/peer review, and whether or not an item is in the local collection. Items that are more recent and peer-reviewed are favored over those that are not, and items that are in the local collection are favored over those that are not.

When a user searches Summon, a dynamic rank is generated–search results are compared against a user’s query and ranked based on term frequency, field weighting, term stemming, and stop-word processing. A combination of the dynamic rank and the static rank determines the final ranking.
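To make the two-part ranking idea concrete, here is a toy model of how a static rank and a dynamic rank might combine. This is strictly our illustration: the attributes mirror the description above, but the weights and formulas are invented, not Serials Solutions’ actual algorithm (which also includes stemming and other processing we omit here).

```python
STOP_WORDS = {"the", "a", "an", "of", "and", "in", "on"}

def static_rank(item, current_year=2013):
    """Query-independent score from the attributes described above.

    Weights are invented for illustration only.
    """
    score = 0.0
    # Content type: favor scholarly material (illustrative values).
    score += {"journal_article": 1.0, "book": 0.8}.get(item["content_type"], 0.5)
    # Publication date: favor recent items.
    score += max(0.0, 1.0 - (current_year - item["year"]) / 50.0)
    # Peer review and local holdings each give a boost.
    score += 0.5 if item["peer_reviewed"] else 0.0
    score += 0.5 if item["in_local_collection"] else 0.0
    return score

def dynamic_rank(item, query):
    """Query-dependent score: term frequency with field weighting.

    (Real systems also apply term stemming; we skip that here.)
    """
    terms = [t for t in query.lower().split() if t not in STOP_WORDS]
    field_weights = {"title": 3.0, "abstract": 1.0}  # title matches count more
    score = 0.0
    for field, weight in field_weights.items():
        words = item.get(field, "").lower().split()
        score += weight * sum(words.count(term) for term in terms)
    return score

def final_rank(item, query):
    """Combine the two scores to order search results."""
    return static_rank(item) + dynamic_rank(item, query)
```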

Back to top

 

Resource Discovery

What exactly do you mean when you say “resource discovery”?

Ranganathan’s third law: Every book its reader. Resource discovery is foundational to library science. When we consider how best to organize our resources to help our patrons find what they need, we are considering the issue of resource discovery.

In the print universe, the catalog was one of the central systems that enabled this–you could be reasonably sure that you’d searched the entirety of the library by checking the catalog and maybe a handful of other sources. But as more content has moved online–and as more of the content libraries obtain and make available has moved online–the number of systems for searching that content has multiplied. For technical reasons, intellectual-property reasons, practical reasons, and many others, the content that a library makes available to its patrons has come to exist in many different systems, each with its own interface for searching the content it holds. This greatly complicates how people find what they need. To use this smorgasbord of systems appropriately and effectively, you need a better understanding of how libraries work than most people are willing to obtain.

On the flip side, non-library entities have grown to deliver much better online resource discovery experiences. Amazon and other online retailers make it easy to navigate their product catalogs. Google, of course, makes it dead simple to find something that is relevant to just about any query. Using the library is comparatively difficult.

Over the past 10 years, technologically literate folks working in the Library and Information Science profession have been working toward making library resources easier to find and use. One of the fruits of this labor is the “Resource Discovery System,” aka “Next Generation Catalog,” aka “Web Scale Discovery System.” This type of system uses a central index to store content from a variety of sources and provides a single interface for searching/discovering that content. The Summon system that we just purchased and implemented is one example.

But–it’s very important to keep in mind that any system that lets people search for resources could rightfully be called a resource discovery system, and systems like Summon are not the be-all, end-all for improving discovery of library resources. Like any other type of system, they have their positives and negatives, and they have to be deliberately and intelligently incorporated into the overall discovery experience (e.g., the library website) in order to be effective.

So when we–the User Interfaces Department–talk about, e.g., improving resource discovery (or resource discovery interfaces) at UNT Libraries, we are talking about both Resource Discovery Systems in particular and about resource discovery systems in general. We’re talking about the discovery experience as a whole. When a patron comes to our library website, how do they get to the resources that they need, no matter what they are and what system they’re in? That’s what we mean when we say “resource discovery.”

For a more complete picture, please see our RDS Report, especially the Introduction and the Literature Review sections.

Back to top

 

Improving resource discovery means making our discovery experience more like Google’s. Right?

Yes and no. It depends on what you mean by “more like Google.” If you mean that we need to continue simplifying the discovery experience, construct the right tools for the right contexts, customize our tools based on user data and user feedback, and continuously adjust them as user needs change (again, based on user data and feedback)–then yes, absolutely we need our discovery experience to be more like Google’s.

On the other hand, if you mean that we need to mimic Google’s search functionality–i.e., just provide a single search box that searches a single system containing everything we own and returns one results set for each query–then the answer is a very qualified “no.” Or, at least, not necessarily.

Over the past 7 or 8 years, libraries and related organizations have gathered lots and lots of data showing that users prefer starting their research on sites like Google and Wikipedia. Plenty of focus-group and user-survey data shows that users say they’d like the library search experience to be more like Google’s. Based on this information, it’s easy to assume that all we need to do to make our patrons happy is to implement a single, Google-like search box. But until very recently, providing any search experience that crosses the majority of library resources hasn’t been possible, so this assumption rests mostly on preference data–people telling us what they think they want. With the advent of Web-Scale Discovery Systems, as libraries actually implement their single, Google-like search boxes, it’s now possible to test the assumption. Although it’s still very early, some of the user data that’s been published recently contradicts–or at least qualifies–the idea that users just want a single, Google-like search. (See the More Resources section, below, for supporting examples.)

We do have to tread carefully here and make sure we check our assumptions. There are myriad reasons why users are having mixed reactions to libraries’ single-search implementations. First, there are obviously practical reasons, which include interfaces with usability issues and an underlying infrastructure that still can’t quite provide a totally seamless experience. In short–part of the problem is that the technology is still new, and–although it’s improving quickly–it just isn’t yet able to match the sort of expectations set by Web search engines.

But what’s interesting is that there might actually be conceptual problems with putting all library materials into a single bucket. As Dana McKay’s paper [3] points out, there are distinct differences between how users use books and how they use online articles–differences strong enough that they may lead to confusion when users get search results that mix the two together. Our own user studies, conducted during November and December of 2011, support these findings: when examining the home pages of different libraries with different types of search boxes, users actually showed a strong aversion to library websites that employed a single search box. They preferred search boxes that presented options, usually in the form of labeled tabs, because that gave them an idea of what they were searching. Of course, this is still preference data–but it’s preference data based on concrete examples.

Something else to keep in mind is that Google Web search and the Web itself go hand in hand. Library search tools don’t search the Web, so expecting them to work similarly to Web search engines–even at a conceptual level–is perhaps a little unrealistic. One of the earliest metaphors that came into widespread use for browsing the Web was “surfing”–which is apt (if silly). But can you imagine anyone “surfing” library resources? The Web is a complex network of interlinked documents and files. It’s vast. It’s open. Although much of its data is not very well-structured, it does at least share a common structure (HTML, XML) and a common infrastructure. You can write a program that crawls from document to document on the Web and automatically gleans lots of contextual information based on what links to what, the text in which the link is embedded, and lots of other contextual clues. The contextual data might not be 100% accurate, but it’s incredibly rich. Library data, on the other hand, consists mostly of various separate pools of records/resources that, 1. have little (if any) contextual data, 2. are not linked together in any meaningful way (not universally and not with unambiguous, machine-readable links), 3. do not share a common structure, 4. do not share a common infrastructure, and 5. are generally not freely/openly available. So much of what Google has leveraged to make Web search work well is simply not part of library data. Even attempting to normalize library data/metadata and pool it all into the same index does not give you the Web–or anything really very close to it.

Going forward, it’s clear that continuing to work toward consolidating the number of discovery interfaces and pools of library data will help improve overall discovery. We just want to make sure we’re proceeding in such a way that we’re not setting users up for confusion or disappointment. Lown, Sierra, and Boyer [2] put it well: “Although libraries may be inclined to design their home pages around a single search box to give an impression of simplicity, inadequate functionality and resource coverage may frustrate users and hide significant portions of library resources and services.”

Back to top

 

What are you working toward? What’s the plan? What will it look like when it’s done?

The last section of our RDS Report (Our Vision: The RDS Implementation Model) addresses this question broadly and shows one possible end-game scenario. The take-away from that, however, isn’t that particular scenario. The following points summarize what we’re ultimately trying to accomplish.

  1. We’re working toward providing a more unified interface for our users to search/find library resources, no matter what those resources are or what system they live in natively. As much as possible, we would like to provide a “one-stop-shop,” even if, at the end of the day, that shop is divided into different departments.
  2. Where possible, we’re working toward consolidation of resources, at least for the purpose of resource discovery. It’s easier to provide a unified interface when your data is interoperable–e.g., in the same index. But–
  3. we’re also working toward having ultimate flexibility with our data and our user interface. Although consolidation of resources is important, we don’t want to compromise control over our local data and how our users interact with it. We want to incorporate the discovery experience into our website–we do not want to have to shoehorn things into a proprietary system or interface that don’t belong there just because it’s our only option. This means we want systems that provide us with API access to our data so that we can query it, retrieve it, and then mold it to fit our interface (see the sketch after this list).
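As a sketch of what point 3 means in practice, the snippet below queries a hypothetical discovery API over HTTP and remaps the records into a minimal local schema that our own templates could render. The endpoint, parameters, and field names are all made up; a real integration would use the vendor’s documented (and authenticated) API.

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

# Entirely hypothetical endpoint and response shape, for illustration.
API_URL = "https://api.discovery-vendor.example.com/search"

def local_results(query, page_size=10):
    """Query the vendor's API, then remap records into our own schema
    so that our templates, not the vendor's interface, control how
    results are presented."""
    url = API_URL + "?" + urlencode({"q": query, "pageSize": page_size})
    with urlopen(url) as response:
        records = json.load(response).get("documents", [])
    return [
        {
            "title": record.get("title", "[untitled]"),
            "authors": record.get("authors", []),
            "link": record.get("link"),
        }
        for record in records
    ]
```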

To help give you a more concrete idea, here are a couple of library websites whose resource discovery interfaces have inspired us throughout our investigations and planning.

UPDATE 02/2014: BYU and Villanova have changed their websites. BYU’s is completely different, although still interesting. Villanova uses the same basic model as is discussed below.

  1. North Carolina State University Libraries. They offer a tabbed search box where users can choose to search books, articles, or the website. But their default search is a combined search–which makes sense if, as Teague-Rector and Ghaphery found [5], users just use the default search most (~60.5%) of the time. And their combined search is interesting–it isn’t really Google-like, since the results don’t combine everything into a single bucket. It keeps results for different types of things separate and helps guide users to the particular bucket that they’re actually interested in. Note that this is a locally developed tool that would require some development work on our end.
  2. Brigham Young University’s Harold B. Lee Library. Another example of a discovery tool that has both a consistent interface and is well-integrated into the website. In this case the combined search does combine articles, books, etc. into one set of results. Note that, during the user studies we conducted in November and December of 2011, users preferred this style of search box (of the options we gave them).
  3. Villanova University’s Falvey Memorial Library. This is a slightly different approach, but it provides some food for thought. Note how closely this approaches the “single search-box” interface, and yet it’s actually much more like an Amazon search than a Google one. Based on existing user data, they’ve done a good job separating things that should be separate and setting default options that make sense in particular contexts. (The home page search defaults to “library website,” while the Search page search defaults to “library catalog.”) Their “catalog” search is actually a combined search, but it presents results for books and articles separately, so it presumably avoids the problem of mixing together results for things that users keep separate in their minds. Most importantly, the website interface and search tools are tightly integrated and seem to function based on a well-thought-out high-level model. As you navigate the site and use their discovery tools, it never appears that you leave their website and enter separate systems. Brown University Library and the University at Buffalo Libraries have discovery tools that function similarly to Villanova’s, but Villanova’s is still better unified/integrated.

In some ways, the journey is going to dictate the destination. Although we have our ideas, we don’t know exactly what it will look like when it’s done. This is why we have planned a number of phases to help move us forward. At each step of the way, we’ll be collecting data–search data, usage stats, and user feedback. We’ll also reevaluate what we’re doing after each phase is complete. When other institutions that are a step or two ahead of us release data about what they’ve done, we can learn from that and adjust our own model so that we’re always working from the best, most recent data. Yes, this means that our vision will probably change along the way. But that’s just part of being responsive to an environment that’s constantly evolving.

Back to top

 

More Resources

  1. Howard, D., & Wiebrads, C. (2011). Culture shock: Librarians’ response to web scale search. Retrieved from http://ro.ecu.edu.au/cgi/viewcontent.cgi?article=7208&context=ecuworks
  2. Lown, C., Sierra, T., & Boyer, J. (2012). How users search the library from a single search box. College & Research Libraries. Retrieved from http://crl.acrl.org/content/early/2012/01/09/crl-321.full.pdf+html
  3. McKay, D. (2011). Gotta keep ’em separated: Why the single search box may not be right for libraries. Hamilton, New Zealand: ACM. Retrieved from http://dl.acm.org/citation.cfm?id=2000772
  4. Swanson, T. A., & Green, J. (2011). Why we are not Google: Lessons from a library web site usability study. The Journal of Academic Librarianship, 37(3), 222-229. doi:10.1016/j.acalib.2011.02.014. Retrieved from http://dx.doi.org/10.1016/j.acalib.2011.02.014
  5. Teague-Rector, S., & Ghaphery, J. (2008). Designing search: Effective search interfaces for academic library web sites. Journal of Web Librarianship, 2(4), 479-492. doi:10.1080/19322900802473944. Retrieved from http://dx.doi.org/10.1080/19322900802473944
  6. Thoburn, J., Coates, A., & Stone, G. (2010). Simplifying resource discovery and access in academic libraries: Implementing and evaluating Summon at Huddersfield and Northumbria universities (Project report). Newcastle: Northumbria University/University of Huddersfield. Retrieved from http://eprints.hud.ac.uk/9921/
