

This online course is no longer being updated here. It is available only through the Canvas website for the current course offering.

After completing this session, you'll be able to:

Begin by viewing the class presentation in Vimeo. Then, read each of the sections of this page for more detail.

Explore each of the following topics on this page:


The Purpose of Evaluation

The evaluation of collections is a systematic and ongoing process.

ODLIS defines collection assessment and evaluation as

"the systematic evaluation of the quality of a library collection to determine the extent to which it meets the library's service goals and objectives and the information needs of its clientele. Deficiencies are addressed through collection development."


Evaluation is important for a number of reasons. One is that it provides information about internal functions such as budget justification, decision making, and resource allocation (including conversion to digital). It can help determine whether your collection is meeting the outcomes that you wish to achieve, such as improving the weak portions of the collection and enriching the strong. Evaluation can help you see the collection as a whole rather than piece by piece. It can also help you evaluate whether your vendors are living up to their side of the bargain.

Externally, evaluation will guide you through the accreditation process, particularly in the higher education arena. Data about your collection can be provided to prospective resource-sharing partners and used to support fund-raising and grant writing. It is possible to evaluate just about anything; we just have to know how to do it.

Approaches to Evaluation

There are a number of methods for conducting evaluation of your resources.


Some approaches are connected with a particular phase of collection development such as the CREW method – continuous review, evaluation, and weeding. The emphasis on "continuous" means that evaluation can't be done once every ten years and forgotten about during the years between. It really should be an ongoing process.

That process is similar to weeding: start with the mission, the collection development plan, needs of users, etc. You can then conduct the evaluation by examining a portion of the collection on a multi-year timetable (for example, year one: examine fiction; year two: 000 through 699; year three: 700 through 999 in Dewey) or as a whole. If you are part of a separate collection development section, this would be a good opportunity for you to also collaborate with the reference and cataloging departments.

The methods should fit the function of the collection, and you need to ask yourself a range of questions before you begin.

Evaluation Measures

There are a number of measures that can be employed to conduct the evaluation.

Quantitative measures. Quantitative measures could include the size and growth of the collection and its various sections over time, expenditures on library holdings over time, unfilled requests for materials or information, circulation data, and interlibrary loan data. A library can also develop formulas that tie resources expended and collection growth to set parameters, such as the number of credit hours students take within a certain subject area or the number of full-time-equivalent students in a school or major. Some university libraries use such formulas to assess charges to the schools or departments whose collections they support; others use them to allocate the resources they receive. The same formulas can then be used to assess whether spending has remained proportional.
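As a simple illustration of such a formula, the sketch below allocates a materials budget across subject areas in proportion to credit hours taught. All subject names and figures are hypothetical; a real library would substitute its own parameters (FTE students, enrollment, and so on):

```python
# Hypothetical sketch: allocate a materials budget so that spending in each
# subject area is proportional to the credit hours taught in that area.

def allocate_budget(total_budget, credit_hours):
    """Return each subject's share of the budget, proportional to credit hours."""
    total_hours = sum(credit_hours.values())
    return {subject: round(total_budget * hours / total_hours, 2)
            for subject, hours in credit_hours.items()}

# Invented enrollment data (credit hours per subject area)
credit_hours = {"Biology": 4200, "History": 2800, "Nursing": 3000}

allocations = allocate_budget(100_000, credit_hours)
# Biology receives 42% of the budget because it generates 42% of credit hours.
```

The same proportions can later be compared against actual expenditures to assess whether collection growth has kept pace with the chosen parameter.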

Qualitative measures. There are also a number of qualitative measures that can be employed. These could include using standard lists, such as awards and "best books" lists developed by various divisions of the American Library Association and other organizations. There are also a range of subject bibliographies that have been published, including the "core collection" books addressed earlier in the class. Expert opinion can be sought from scholars or other patrons through surveys, interviews, or focus groups. And one of my favorite evaluation tools is wear and tear of materials – if one portion of your collection is falling apart from use, it might be an indicator that additional materials are needed in this area.

No single method will give you all of the answers, but you need to know the questions that you want to answer before you select the measures.

Read!
Knievel, J. E., Wicht, H., & Connaway, L. S. (2006). Use of circulation statistics and interlibrary loan data in collection management. College & Research Libraries, 67, 35-49.

Collection-centered Approaches

A collection can be examined from two different perspectives, depending on the function or mission of the collection and the purpose of the evaluation.

One perspective is collection-centered. Quantitative data for this perspective might include size and growth of the collection, expenditures on materials, and median age of the materials owned. Qualitative data would include expert judgments and comparisons to standard tools.


There are some caveats to these types of evaluations. The person conducting the evaluation may not be an expert in the literature of the subject being analyzed and therefore may not be aware of the appropriate tools for comparison. A tool may fit the subject but provide more depth than is suitable for your institution. For example, do you purchase every book on the annual Best Books for Young Adults list (now Best Fiction for Young Adults), or is it more appropriate to select only the top ten? Similarly, an academic library seeking membership in the Association of Research Libraries might consider the Library Investment Index as an evaluation tool, yet the expectations the Index presents may not fit the resources and mission of the institution. Finally, the collection-centered approach may not account for the unknown patron whose information needs go unmet.


User-centered Approaches

Evaluation based on a user-centered perspective looks at a range of different indicators.

The quantitative measures might help you identify strengths and weaknesses by usage. This could help you modify your collection policies to accommodate demand. Automated circulation data can help you analyze frequency of use, relative use compared to the rest of the collection, use compared to current acquisitions, and over use or under use in particular areas.
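One common way to operationalize "relative use" is a relative use factor: an area's share of total circulation divided by its share of total holdings. Values well above 1 suggest demand is outstripping the collection in that area; values well below 1 suggest under-use. The sketch below uses invented Dewey-range counts:

```python
# Hypothetical sketch: relative use factor (RUF) per collection area.
# RUF = (area's share of circulation) / (area's share of holdings).
# RUF > 1 suggests over-use; RUF < 1 suggests under-use.

def relative_use(holdings, circulation):
    """Return the relative use factor for each area."""
    total_holdings = sum(holdings.values())
    total_circ = sum(circulation.values())
    return {area: (circulation[area] / total_circ) / (holdings[area] / total_holdings)
            for area in holdings}

# Invented counts by Dewey range
holdings = {"000-099": 500, "100-199": 800, "200-299": 700}
circulation = {"000-099": 150, "100-199": 100, "200-299": 250}

ruf = relative_use(holdings, circulation)
# The 100s circulate at half their expected rate; the 200s at well above it.
```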

Vendors of digital materials should also be able to provide you with use data for your e-books, full-text journal articles, databases, and other downloads. While staff may have impressions about what is used, the data could provide substantiated evidence. There are caveats with only looking at circulation data, however. Past and present use may not be indicative of future information needs, and these measures do not consider unfilled information needs.

Other quantitative measures could include interlibrary loan requests, median age of the materials used, the last circulation date, and table counts (to include data on materials used in the library). In research collections, another evaluation tool is to compare your collection to sources most frequently cited in articles published in the leading journals.
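Measures such as median age of materials used and last circulation date can be pulled directly from circulation records. The sketch below, using invented records, computes the median age of circulated items and flags items that have not circulated since a cutoff date:

```python
from statistics import median
from datetime import date

# Invented circulation records: (title, publication_year, last_circulated)
records = [
    ("Title A", 1998, date(2010, 3, 1)),
    ("Title B", 2005, date(2012, 7, 15)),
    ("Title C", 2011, date(2012, 9, 2)),
]

current_year = 2012

# Median age of the materials that circulated
median_age = median(current_year - year for _, year, _ in records)

# Items that have not circulated since the cutoff are weeding candidates
cutoff = date(2011, 1, 1)
stale = [title for title, _, last in records if last < cutoff]
```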

Qualitative measures could include observing patron use and conducting focus groups or interviews. Another method that has been used is placing questionnaires inside randomly selected items in the collection; however, you would need to keep track of the blank surveys so you can pull them when you stop using that method of evaluation. It isn't fair to ask patrons to take the time to fill out a survey that will never be analyzed.


The Real World

Evaluation isn't the "end" of the process. It's something that happens throughout the process of collection development.

Unfortunately, evaluation is often overlooked until it's time to put together the library's annual report.


Read!
Bertot, J., & Jaeger, P. T. (2008). Research in practice: Survey research and libraries: Not necessarily like in the textbooks. Library Quarterly, 78(1), 99-105.



Bertot, J., & Jaeger, P. T. (2008). Research in practice: Survey research and libraries: Not necessarily like in the textbooks. Library Quarterly, 78(1), 99-105.

Borin, J., & Yi, H. (2011). Assessing an academic library collection through capacity and usage indicators: Testing a multi-dimensional model. Collection Building, 30(3), 120-125.

Knievel, J. E., Wicht, H., & Connaway, L. S. (2006). Use of circulation statistics and interlibrary loan data in collection management. College & Research Libraries, 67, 35-49.

Mentch, F., Stauss, B., & Zsulya, C. (2008). The importance of "focusness": Focus groups as a means of collection management assessment. Collection Management, 33(1/2), 115-128.

Nisonger, T. E. (2008). Use of the checklist method for content evaluation of full-text databases: An investigation of two databases based on citations from two journals. Library Resources & Technical Services, 52(1), 4-17.

White, H. D. (2008). Better than brief tests: Coverage power tests of collection strength. College & Research Libraries, 69, 155-174.

Portions of this page were adapted from Collection Development & Management by Irwin and Albee (2012).

© 2012-2015 Annette Lamb