Professional Development

Author Archive

2014 ER&L virtual conference

Thursday, April 10, 2014 5:03 pm

For the second year running, I “attended” the Electronic Resources & Libraries conference by watching streamed sessions. I still plan on watching sessions as time permits throughout the year, since the group purchase that Derrik made runs until the next conference is held in 2015. (ZSR folks: Ask Derrik if you need the password.)

One trend that popped up in multiple presentations was Evidence-Based Acquisition (EBA). Like its close relative, Demand-Driven (or Patron-Driven) Acquisition, it goes by two names and two initialisms, so you may also hear of Usage-Driven Acquisition (UDA). With EBA, you give a provider an up-front deposit, say, $5,000. The provider then turns on its entire catalog of e-books or streaming films. After a set time, say, a year, you get a usage report and can choose $5,000 worth of products for permanent ownership. There are pros and cons to this approach, especially vis-à-vis DDA. (What if you don’t get $5,000 worth of use? What if all the use is long tail with no “short head”?) However, since providers who use this model generally do not participate in DDA, EBA may be the most cost-effective way to buy certain types of material.
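
The end-of-term selection step is simple enough to sketch in code. The titles, prices, and greedy pick-by-usage rule below are all my own invention for illustration; a real selection would also weigh curricular fit, not just raw usage.

```python
# Hypothetical sketch of the EBA end-of-term selection step:
# given a usage report, pick the most-used titles whose list prices
# fit within the up-front deposit. Titles and prices are invented.

DEPOSIT = 5_000

usage_report = [
    # (title, uses, list_price)
    ("Handbook of X", 142, 250.00),
    ("Intro to Y", 97, 85.00),
    ("Z: A Critical Edition", 61, 190.00),
    ("Collected Essays on W", 3, 120.00),
]

def select_titles(report, deposit):
    chosen, spent = [], 0.0
    # Greedy: take the most-used titles first until the deposit runs out.
    for title, uses, price in sorted(report, key=lambda r: -r[1]):
        if spent + price <= deposit:
            chosen.append(title)
            spent += price
    return chosen, spent

titles, spent = select_titles(usage_report, DEPOSIT)
print(titles, spent)
```

Note the long-tail worry from above: if all the use looks like “Collected Essays on W” (three uses here, three uses there), you end up owning a pile of barely-used titles.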

Another hot topic was the end-user experience with e-books and certain multimedia databases. Basically, it’s bad. Typical e-book problems include not being able to print, not being able to use the book on certain devices, and not being able to store the book for later consultation. Multimedia has a different but related set of concerns. (I’m reminded of this comic and this infographic, both of which claim that poor UX drives customers to piracy.) The presenters didn’t go as far as claiming that library resources drive folks to piracy, but they did claim that students will either download free alternatives or, in the case of the “haves,” buy individual copies instead, which could magnify the effects of economic disparities among students. The presenters insisted that libraries should put their collective foot down and refuse to buy user-hostile resources, even if the information they contain is high quality. They called out one well-known database as particularly awful. A quick check of the presenters’ own library website established that they still subscribe to the offending product, which somewhat undermined the force of their argument.

I have hope, however, because I can remember a time in the 90s when e-journals and e-newspapers were just as bad as e-books are today. Printing from JSTOR used to be a nightmare, you had to use certain specific computers if you wanted to use ProQuest, and then a different computer entirely for LexisNexis. These days, e-journals generally just work. Maybe e-books and multimedia sites will get there someday if we keep leaning on the vendors and at least occasionally refuse to buy the worst UX offenders.

Two Thoughts from the NC Serials Conference

Monday, April 7, 2014 1:59 pm

I also attended the North Carolina Serials Conference last month. Since several other ZSR bloggers have already reported, I will focus on two ideas put forward during a late-morning plenary session, which featured David Crotty again.

Crotty remarked that the paper announcing the cure is not as important as the actual cure. We might make the paper available via Open Access while the cure itself (say, a drug) might be protected by patent law.

Crotty also asserted that, contrary to popular belief, the Humanities often run at a profit while the Physical Sciences run a deficit within a university budget. He claimed that’s because a lot of tuition money is paid by Humanities majors, which subsidizes expensive lab space in the Physical Sciences. (I’m carefully noting that he didn’t say Life Sciences, which probably attracts the most grant money of all and is also a popular undergraduate major.) He cited a recent NPR story about Duke that focused on where all the money goes. I listened to that story today, but didn’t hear the same interdepartmental-subsidy message that Crotty asserted. So I don’t know if he cited some other evidence for his claim (that I didn’t write down) or what.

Nevertheless, I would be very interested in whether this is true at Wake Forest or not. I have often thought about this issue on a smaller scale when we allocate the collections budget. Even if you just look within a broad discipline group like Humanities, it appears that larger, more popular majors subsidize smaller ones. I have two defenses to offer. The first is that Demand-Driven Acquisition serves as a correction to this tendency. The second is that a certain amount of interdepartmental subsidizing is necessary. Students are attracted to Wake Forest because they like the idea that they have over 40 choices for their major. Once they actually get here, over half of the students cluster in just a few majors. However, many students would not choose WFU at all if we only offered, say, ten majors. Crotty’s broader point, and the point of the NPR story, is to ask whether it’s a good idea for student tuition dollars to go towards research, especially when the tuition comes in the form of a loan that must be repaid with interest.

2013 Charleston according to Carol

Wednesday, November 20, 2013 10:12 am

Here are the highlights of the most important sessions I attended at Charleston:

Derrik has already covered the first session on discovery services. I won’t repeat what he said, except to link to the slides. I’ll also point out that we were one of the 149 libraries that gave approval to be studied (slide 10), but I don’t know if we were ultimately selected. In a related presentation on Friday, Bruce Heterick from JSTOR discussed efforts to get JSTOR content to display appropriately in discovery services. JSTOR found that usage plummeted after certain schools implemented certain discovery layers. (My opinion: Students will frequently use JSTOR on name recognition alone – even when it’s not the optimal source for their topic. If the discovery service delivers more appropriate, up-to-date content, so much the better.) Heterick said that many discovery services depend heavily on subject metadata for relevancy ranking. JSTOR does not include that metadata, and it would be expensive to produce. (Just a thought – many JSTOR articles are indexed with subject metadata in A&I sources like MLA, which are sometimes included in the discovery service as well. How could that be harvested appropriately?)

Librarians from Ferris State reported on how they processed titles that they committed to retain within their Michigan consortium. They used a 912 field in the MARC record to indicate reasons for retention. Missing books and those in poor condition took extra time to process since they needed to find another consortium member who would take responsibility for keeping the title.

Kristin Calvert from Western Carolina reported on a project to move all their usage stats to EBSCO Usage Consolidation (hence: EUC). Before implementing this project, it took them four full working days each year to collect e-journal stats. I know Derrik would identify with some of the frustrations that Calvert expressed. After the decision to use EUC, it took…

  • 2-3 weeks to set up (I’m not sure if non-stop work is implied here.)
  • 8 hours for initial cleanup
  • 4-6 hours for quarterly loads (could do this annually to save time)
  • <1 hour/month for cleanup

The product includes an “Exceptions” list of journals that had some kind of mismatch in the system. WCU staff had to reconcile the exceptions, but once they did, EUC remembered the fix so the same exception wouldn’t pop up again. The screenshot Calvert showed had zero exceptions. Calvert concluded that the project was worthwhile, given the efficiencies gained in the end.

On Saturday, two librarians from Bucknell discussed how they dropped their approval plan and went with print DDA for everything. They use WorldCat/WorldShare for their catalog and discovery layer, so they could accomplish this without any loading (or deactivation) of records in their system. Patrons click on a ‘Get It’ button (powered by GIST), and a librarian decides whether to fulfill the request by purchase or by ILL. In the end, they ordered 1/3 fewer titles, spent 50% less, and saw ILL decrease. Bucknell took this path because their approval books circulate at a low rate. They also weed aggressively (12K new books/year and 6K deletions/year), so their collection was a revolving door. They pointed out that their library focuses on undergraduate curriculum, not research, so WFU may not want to pursue this idea. One point resonated with me, though: they reminded us that ‘efficient’ does not necessarily mean ‘effective.’ Approval plan ordering is the most efficient way to get books, and e-book DDA is even more efficient at delivery. However, are they as effective at getting users to the content they need in the format they want?

Carol at NCLA 2013

Thursday, October 24, 2013 10:02 am

Since I live very close to the Convention Center, I volunteered for the Local Arrangements Committee. In addition to managing the bag-stuffing operation, I spent several hours staffing the Local Information Booth, from which I gave opinionated advice about local restaurants (and handed out restaurant guides prepared by Hu!). I was thrilled to leave my car at home for three days straight, but was mildly disappointed to discover that I didn’t win the short distance award. To my knowledge, that honor belongs to another ZSR librarian (ask around offline if you want to know who!) and a librarian from High Point U. who lives downtown.

Local Information Table
I still had time to attend some of the sessions. I’ll skip talking about sessions already discussed by other ZSR bloggers, and a few others where my main takeaway was confirming that I am already up on current trends. Here are more details on three sessions where I learned a lot of valuable new-to-me information.

Demystifying Fund Formulas in an Academic Library Setting

Lisa Barricella & Cindy Shirkey, ECU
ECU was looking for a different way to allocate the monograph portion of their budget. Their previous formula – based on factors like credit hours, faculty headcount, grad students, etc. – had several flaws. For instance, using credit hours earned in a subject would overfund areas like foreign languages, where there is a lot of enrollment at the lower levels but not a lot of need for library materials. Also, their old formula – just for monographs – didn’t account for the journals-vs.-books breakdown, which is unique to each discipline. (There was also the procedural issue that the data, which came from sources external to the library, was sometimes very difficult to collect.) Their new formula relied heavily on two factors: how much the collection in, say, Art was used as a proportion of total collection use, and how many ILL requests Art generated in proportion to its holdings. Both of these criteria map more closely to the actual demand for monographic materials in that subject. (The ILL part was not fully implemented due to specific failures in ILLiad reporting.) Finally, the average price of books in each subject was considered. While I’m not looking to redo all the monograph budgets anytime soon, I will keep these ideas in mind in case we ever need to overhaul our monograph budget structure.
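
As a thought experiment, an allocation along ECU’s lines could be sketched in a few lines of code. Everything here – the weights, the subject numbers, the way average price is factored in – is my own invention for illustration, not ECU’s actual formula.

```python
# Hypothetical sketch of a demand-based monograph allocation using the
# ECU-style factors: share of circulation, share of ILL requests, and
# average book price. All figures and weights are invented.

BUDGET = 100_000  # total monograph budget

subjects = {
    #            share of total      ILLs per 1,000    average
    #            circulation         titles held       book price
    "Art":       {"circ_share": 0.05, "ill_rate": 8.0, "avg_price": 65.0},
    "History":   {"circ_share": 0.20, "ill_rate": 5.0, "avg_price": 40.0},
    "Chemistry": {"circ_share": 0.10, "ill_rate": 2.0, "avg_price": 120.0},
}

def allocate(subjects, budget, w_circ=0.6, w_ill=0.4):
    # Normalize ILL rates into shares so both demand factors are comparable.
    total_ill = sum(s["ill_rate"] for s in subjects.values())
    raw = {}
    for name, s in subjects.items():
        demand = w_circ * s["circ_share"] + w_ill * (s["ill_rate"] / total_ill)
        # Scale demand by average price: expensive fields need more dollars
        # to buy the same number of books.
        raw[name] = demand * s["avg_price"]
    total = sum(raw.values())
    return {name: round(budget * v / total, 2) for name, v in raw.items()}

print(allocate(subjects, BUDGET))
```

The appeal of this style of formula is visible even in toy numbers: a low-circulation, high-ILL subject still earns a real allocation, because unmet demand (ILL) counts alongside met demand (circulation).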

Taming the Hydra: A Strategic Approach

Kim Vassiliadis, Emily King & Chad Haefele, UNC
This presentation was about how UNC corralled a whole bunch of subject guide thingies scattered all over their website, deleted about half of them, and got the rest into LibGuides with an updated (and consistent) look and feel. They then initiated a plan to make sure that each LibGuide gets some maintenance at least once a year. Guides that are not updated are given “unpublished” status (i.e., suppressed from public view) in LibGuides. I’m impressed that they were able to pull this off in the decentralized environment at UNC. One rule they implemented was that you can’t have more than one row of tabs. Another was that every guide has to have an intro paragraph that lists all the tabs. I actually disagree with the intro paragraph idea. More on that in a minute.

I Honestly Had No Idea: LibGuides Usability Assessment in an Academic Library

Randall Bowman, Teresa LePors & Shannon Tennant, Elon
LibGuides best practices is an area where a lot of folks (including yours truly) have lots of ideas but very little evidence. Elon conducted a usability test with some undergraduates to fill the evidence gap. In addition to asking students to perform tasks, they asked some subjective questions at the end.
Some of their conclusions:

  • Students go straight for the search box, any search box. That’s bad news on my guides since the only embedded search box is for “Search this Guide.” That’s also bad if the source with a search box is not the best place to go for that topic. For one task, the relevant guide had a JSTOR search box embedded (also with the pretty “J” logo). However, JSTOR did not contain the particular article that students needed to find.
  • Students don’t read the text on the page. They quickly scan for something that looks familiar.
  • Students ignore the tabs. (Paraphrased comment from audience: I’ve been to three presentations on LibGuides, and they all say that students don’t use tabs! However, LibGuides is built around tabs!)
  • Students were split (on both the tasks and the subjective questions) as to whether “Articles” or “Databases” was the best word for leading students to databases that contain articles. (My own guides hedge on this one by saying e.g. “Linguistics Databases for Finding Journal Articles”)
  • Students don’t scroll, which is bad news if you’re also not using tabs
  • Elon’s main LibGuides page prominently featured the tag cloud. Students didn’t use it, and on the subjective questions they Xed it out as an unnecessary element.
  • Students liked the librarian profiles, which include an embedded chat window. A significant percentage of their chat questions are referred by the research guide pages.

Based on what they learned, Elon is going to lose the tag cloud and have the front page of each LibGuide list all the tabs (like UNC). I disagree with this “intro paragraph” approach since it was also established that students don’t read the text! When I have time, I’m going to edit my LibGuides so that the #1 resource is a search widget, preferably with a pretty logo. If there is no pretty logo, then maybe I’ll add the “Best Bet” star like we use on the database pages.

Carol “at” ER&L

Friday, April 5, 2013 4:45 pm
Local Scenery "at" the conference

No, my post title isn’t a candidate for the “Blog” of “Unnecessary” Quotation Marks. I was one of the virtual attendees at the conference. Since Derrik and Chris have already blogged, I’ll focus my reflections on some of the topics they haven’t covered yet.

What would Google Do?

Elizabeth German from the University of Houston reported on a transaction log analysis. Like us, their home page had several tabs roughly equating to Summon, catalog, e-journals, etc. Forty-three percent of searches were for known items vs. 53% for unknown item searching.

DDA as a Game Changer

Michael Levine-Clark (University of Denver) and Barb Kawecki (YBP)

This presentation was partly an overview of what DDA is. The new part for me is that NISO is working on some Best Practices on how to do DDA. For instance, publishers should keep titles available indefinitely in case a book languishes unused for decades but eventually gets a use. Another key thing they’re working on is how to get titles out of the consideration pool. (At WFU, we are particularly worried about superseded editions staying in the DDA pool alongside the newer editions.)

E or P: A Comparative Analysis of Electronic and Print Book Usage

Christopher C. Brown and Michael Levine-Clark from the University of Denver updated a study that I reported on last year about print and electronic use of the same book. This time, they analyzed Duke University Press books they had bought in print and from ebrary. When both formats were available, print was more likely to be used than e. (619 different print titles were used vs. 451 e-titles. This is from a universe of 1150 titles available in both formats.) However, books that were used in both formats tended to have the highest amount of total usage, and the type of e-usage tended to be more significant (e.g. higher number of pages used, higher number of pages printed). Once again, use in print correlated with use of e. Maybe patrons are using the e-book for discovery, which leads to a checkout in print. (Note this is just a speculated pattern; the data aren’t granular (or invasive) enough to prove that the same person uses a book electronically and then subsequently checks out the print.)

OpenURL Link Resolvers: Tracking Effectiveness over Time

Kenyon Stuart from the University of Michigan studied how SFX and 360 Link and Summon’s Direct Linking succeeded in linking users to full text. After all this time, there’s still a lot of room for improvement in this linking. Supplements, book reviews and newspapers are particular problems. He did find ways to reduce bad links for the most frequently used journals. For instance, he moved better-performing providers up in the 360 Link results list. (We’ve done that too, for instance ranking publisher-based sites generally higher than aggregators and putting print last.)

Reflections on virtual attendance

ER&L is a relevant conference for me, but not my #1 conference for the year. The group virtual attendance option was affordable. It also brought in folks for single sessions who never would have traveled to Texas. Thanks to Derrik for arranging this!

Carol at ASERL Journal Retention Meeting

Thursday, February 21, 2013 12:29 pm

On February 13, I arose ere the dawn to attend the ASERL Journal Retention Steering Committee meeting on the Georgia Tech campus. I don’t normally drink coffee, but I downed two cups once I arrived. (OK, really two cups of coffee-flavored sugared cream.) The opening session reviewed the project (http://www.aserl.org/programs/j-retain/ ) and introduced the WRLC (Washington [i.e., DC] Research Library Consortium) print retention project. Each library representative mentioned what they’re retaining for the group. For WFU, that’s mostly the Wiley-Blackwell journals. Most other libraries are following a subject-based approach or are archiving JSTOR. You can find our commitments by searching for ‘ASERL’ in VuFind.

Cheryle Cole-Bennett covered “How to Document this Retention Agreement within the MARC 583 field.” ZSR currently uses a prescribed basic 583 $a statement that “This title is in the ASERL Print Journal Archive.” In VuFind and Classic, this note appears only in the staff view (good, since only staff care). However, this data does not feed into the master OCLC record, where an audience of librarians across the country may care to read this data. Also, the minimal data in the 583 does not specify which years of the journal are committed for retention, how long ZSR has committed to retain it, or the conditions of retention (e.g. in a closed-stack facility). For instance, in the case of Psychological Reports, ZSR has newer volumes that aren’t part of the commitment yet. OCLC and ASERL have developed some complicated recommendations for expanding the data in the 583 field, as well as a recommendation to include holdings-level data in OCLC for these titles. (At this point, the presentation got technical, and I hope the catalogers can make sense of the PPT slides.)

Questions from the audience included: Can you enhance the 583 by batch? How can you communicate that a title is part of two or more retention plans (e.g. ASERL & TRLN)? Can you have multiple 583 fields or repeated subfields within 583? Apparently, the enhanced 583 does not completely erase the problems when only part of the run is officially retained. ASERL has not yet officially decided what members of the group should do with their 583s, so no action is required yet.
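
For the catalogers among us, an enhanced field might look something like this sketch. The indicators, dates, enumeration, and institution placeholder here are invented for illustration; the actual coding will follow whatever OCLC and ASERL finally recommend.

```
583 1_ $a committed to retain
       $c 20130101
       $d 20330101
       $f ASERL Cooperative Journal Retention
       $3 v.50 (1982)-v.111 (2012)
       $z Retained in closed-stack facility
       $5 [OCLC institution symbol]
```

The point of the extra subfields is exactly the gaps noted above: which volumes are committed ($3), for how long ($d), under what program ($f), and under what conditions ($z).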

Next, we discussed adding subject categories to the retained journals. Apparently the Deans want this (is that so, Lynn?). The group agreed to use Ulrich’s to assign subjects, with subscribers pitching in to provide the headings for the non-subscribers (like WFU). I noted that they didn’t specifically assign a library to cover our titles, which might lead to a ‘diffusion of responsibility’ effect. Maybe the deans who care about this the most will step forward with the labor to cover this effort.

Next, Winston Harris from UF demonstrated the Journal Retention and Needs Listing (JRNL) tool that they developed for the group. It answers two questions:

  1. What’s in ASERL?
  2. If I’m weeding, can I fill in a gap for someone else who’s retaining this title?

During lunch, the branding subcommittee (that’s me and Steve Knowlton from U. Memphis) took feedback from the group on potential guidelines for a catchy name. (What?!? “ASERL Cooperative Journal Retention” isn’t snappy enough for you??) I took down some notes, but I don’t want to reveal too much too soon….

Amy Wood from CRL demonstrated the PAPR (Print Archives Preservation Registry) system, which tries to track nationally where journals are being retained. One use case: Say you’re weeding, but you want to make sure someone’s retaining the title. Maybe you’re not comfortable unless several institutions are retaining the title. WorldCat can give you a raw holdings count, but not who’s committed to permanent retention. PAPR fills that need.

Finally, we discussed targeted retention. First, they discussed agriculture titles (Zzzz). Next, they discussed retention of certain big sets. I hoped to turn the discussion to humongous physics runs, but the group was more intent on retaining paper indexes that WFU has long since weeded (like Chem Abstracts and NUC). Most surprising to me was the claim that the pre-1956 NUC is a high-use item. Ah, the world of an ARL….

The meeting ended early, so I took MARTA to the airport, where I played out a round of “Grande Tea vs. Dramamine.” I think the tea won, since I stayed awake all the way home.

Carol in Charleston, with Random Linguistic Side Notes

Wednesday, November 21, 2012 10:47 am

A keynote speaker used ‘gatekept’ as a past participle. The OED hasn’t caught up with it yet, but Google Ngrams shows a small but steady increase in the word since 1970.

In “The Changing World of eBooks,” Mike Shatzkin focused on the viewpoint of trade publishers. They’ve discovered that most readers just want to be alone with their books. They don’t care about enhanced content. (He pointed out that this applies to immersive reading for adults. It does not apply to children’s books, how-to, cookbooks and a few other categories.)

In “Ebook Availability Revisited” (the session I saw with Lauren C.), the authors advocated against buying (as opposed to renting) any e-books. They assume that the legal issues surrounding Hathi Trust and Google Books will resolve in a few years. Then we can just buy/lease from them. They promoted subscription over DDA, and came down strongly against doing e-book approvals when DDA is available.

Later that afternoon, I attended the “TRLN Oxford University Press Consortial E-Books Pilot” session, in which representatives from Duke, UNC-CH, NCSU, YBP and OUP described how they initiated a shared-cost model for the entire output of Oxford’s UPSO product. (BTW, the Charleston program copyeditors need to decide among ‘e-book,’ ‘eBook’ or ‘ebook.’) I’m skeptical about the ‘Big Deal’ model spreading to e-monographs, but I nonetheless heard this session with great interest. The schools shared costs based on what they thought fair, e.g. accounting for size of school, nature of school, etc. They also purchased one print copy of all non-STEM books and placed the print in OSS. Records processing work was shared out, with one school processing all the print and another all the electronic. They didn’t go into this hoping to save money – their hope was to expand access for the same money. Access for alumni was also included. I wrote that down calmly in my notes, and then I got excited since that includes me! The speakers noted one significant challenge: OUP excludes some books from UPSO and releases others online after a delay. Therefore, if a selector sees an OUP book of interest, should they buy the print or not?

I ended the day by giving a “shotgun” presentation on the incentive program we ran last year. See my slides on SlideShare.

On Friday, I attended “Overview of the Altmetrics Landscape.” The presenters outlined at least five alternatives to traditional journal-level metrics: Impact Story, Altmetric, Plum Analytics, Science Card and Mendeley. They also mentioned the attributes of an ideal altmetric system: free, API available, relevant, and immune to rigging/gaming. The next steps are to explore use cases, give context to numbers and continue to combat gaming.

My final Friday session was “Changing the DNA of Scholarly Publishing – The Impact of the Digital Leap.” Damon Zucca from OUP discussed how the Oxford Handbooks series changed when it became an online product. From the print world, they knew that authors who met the deadline didn’t want their chapters held hostage by those who didn’t meet deadline. They also learned that users often sought out specific essays. Therefore, the obvious decision was to make chapters available online as soon as possible and not worry as much about the container. Lisa Jones from Georgia Gwinnett College had the privilege of starting a brand-new collection when her college opened in 2006. She had an e-book subscription (i.e. the approach advocated by the Thursday presenter), but dropped it due to insufficient use. (I also heard a speaker in this session say ‘editors-in-chief.’ Google Ngrams reveals this is indeed the popular usage. Can you tell how much I love Google Ngrams?)

Finally on Saturday, various vendors hosted 30-minute sessions on new products. As Classics and Linguistics liaison, my obvious choice was a presentation on the Dictionary of American Regional English (DARE) and the Loeb Classical Library. Both online products are still in development, but I’ve signed up to beta-test DARE.

Carol at MSU LEETS, Part II

Tuesday, August 14, 2012 3:32 pm

The second day of the MSU LEETS conference focused on emerging technologies. These presentations overlapped with each other more, so I’ll just give some general impressions. The main speaker was Nicole Hennig from MIT.

  • NUIs (Natural User Interfaces) to replace GUIs
  • Libraries creating “hackerspaces” or “makerspaces” which feature 3-D printers. Our own Dr. Atala got a shout-out in the context of 3-D printers (look at 11:05).

We watched the video “What is a MOOC?” by Dave Cormier. The narrator highlighted “distribution” as a key component of MOOCs (starting at 2:50). He mentions “pockets and clusters” of information like blogs, tweets, tags, discussion posts, etc. Later, in the context of user experience studies, a major theme was “Fragmentation Hurts.” What is “fragmentation” but a negative way to say “distribution”?

Hennig mentioned another of her presentations on this topic, and I followed it up more thoroughly. I learned that “fragmentation” was used in several contexts, such as the annoyances of e-books (e.g. platform proliferation; some work on certain devices but not others). Fragmentation was also mentioned in light of the cloud and one’s personal cache of information. I know I have work information on Google Docs, Evernote, Gmail, acad1, my hard drive and the wiki. I feel the pain when a needed piece of information isn’t in the first (or second) place I look for it.

I also think about distribution/fragmentation in light of the library sharing information with patrons. We currently use Facebook, Twitter, Flickr, (multiple) blogs, ZSReads, Pinterest and maybe others. In some cases, the various outlets point back and forth to each other. I understand that some patrons are on Facebook but not Twitter and vice versa, and we certainly want to reach them all. However, it’s a constant challenge to make sure that our information sharing is aptly described as “distribution” instead of “fragmentation.” And what I’m describing is just “events” type info. Don’t get me started on the fragmented sources for research, a problem only partially solved by Summon.

Resources to follow up on:

Augmented Reality Apps that I might consider:

  • Layar
  • Tagwhat
  • Google Goggles
  • LeafSnap

Apparently, MSU LEETS was the first library conference to have an official Instagram feed.

MSU On-Campus Guest House, Dwarfed by Football Stadium

The folks at Mississippi State practiced pleasant hospitality and treated their speakers royally. The MSU community clearly loves its football.

The stadium backed right up against the on-campus hotel where I stayed. (By contrast, I never found the basketball arena.) They also got me a guest pass to their fabulous fitness center. I thoroughly enjoyed my visit and would recommend this conference to others.

Carol at MSU LEETS, Part I

Wednesday, August 8, 2012 4:34 pm

I spent last weekend in Starkville, Mississippi at the MSU LEETS conference. LEETS stands for Libraries eResource and Emerging Technologies Summit. The first day of the conference focused on electronic resources.

Tim Collins from EBSCO Publishing emphasized the development of the EDS discovery service in his opening keynote. He worries more about the erosion of library funding than the potential threat of Google. Just as Google covers all things free, he hopes that EBSCO will provide all things vetted. EBSCO bought up indexes like AHL and HA primarily because they can enhance other products like EDS.

He also reflected on EDS participation. All of the major publishers participate because usage increases, and nobody gets access without paying. Aggregators (like LexisNexis) may not participate if they don’t have rights to re-distribute the content. Indexers (like MLA) are reluctant to participate since their customers may stop buying MLA and may start relying on the discovery service instead.

Regina Reynolds from the U.S. ISSN Center at the Library of Congress spoke next on PIE-J. The proposed best practices under development for e-journals include (inter alia):

  • Present article content under the title that was current at the time of publication.
  • Include accurate ISSNs, including variant ISSNs like p-ISSN and e-ISSN.
  • Include title history.

Western Carolina University recently canceled 190 journals. Kristin Calvert discussed the process of discovering and activating their post-cancellation access (PCA) rights. She affirmed that:

  • ERM data entry is time-consuming.
  • Long grace periods make it difficult to discern whether your archival access works or not.
  • Portico did not work as well as publisher sites for getting PCA.

Ed Cherry and Stephanie Rollins from Samford tried to assess whether library use correlated with academic success. They defined “library use” as logging into an e-resource, and they measured “academic success” by GPA. First, they set EZproxy to require logins for all users, on- and off-campus. Once they had a semester’s worth of login data (i.e., capturing usernames), their partner in Institutional Research could compare library use to Banner information like class year, major and GPA. They learned that more frequent library use correlated with academic success. (They carefully noted that their methodology could not prove causality.) They also determined which majors had low use of resources, so they could better target outreach efforts.
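
The core of the Samford analysis can be sketched in a few lines. The usernames, login counts, and GPAs below are invented; the point is just the join-then-correlate step that their Institutional Research partner performed. (And as the presenters carefully noted, correlation is not causality.)

```python
# Hypothetical sketch: join per-student e-resource login counts (from
# EZproxy logs) with GPA data (from the registrar) and compute a simple
# Pearson correlation. All names and numbers are invented.

from math import sqrt

# username -> number of EZproxy logins in a semester (invented)
logins = {"astu1": 42, "bstu2": 7, "cstu3": 19, "dstu4": 0, "estu5": 31}

# username -> GPA from the registrar's system (invented)
gpas = {"astu1": 3.8, "bstu2": 2.9, "cstu3": 3.2, "dstu4": 2.5, "estu5": 3.6}

def pearson(xs, ys):
    # Standard Pearson r: covariance over the product of standard deviations.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Pair up the two data sets on username, as Institutional Research would.
students = sorted(set(logins) & set(gpas))
r = pearson([logins[s] for s in students], [gpas[s] for s in students])
print(round(r, 3))
```

The same joined table supports the outreach angle: group the login counts by major instead of correlating them, and the low-use majors fall right out.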

Tammy Sugarman from Georgia State discussed Institutional Repositories. First, she gave an overview of the concept and described the types of materials that typically enter the repositories. Then she outlined how Technical Services staff can be a critical ingredient in the success of an IR.

Yours truly closed out the day with a discussion of DDA. Some tidbits I haven't yet shared with ZSR:

  • In the first four months of our DDA program, five books were triggered for automatic purchase (at sixth use). In the most recent four months, 24 books were triggered, including five triggers in July 2012.
  • Of the eight books used on July 30, seven were used for the first time, and four of these titles were loaded on the very first day of DDA in March 2011.

Carol at ER&L

Thursday, April 26, 2012 12:09 pm

Impressions from the Electronic Resources & Libraries conference …

E-books and DDA

When CSU-Fullerton had a budget cut, they prioritized their DDA program and instead cut their approval plan. They skipped the intermediate step of an e-preferred approval profile.

In our own presentation, Derrik and I asserted that annual spending on DDA clusters around $4-$7 per FTE. Outrageous spending seen at other institutions might simply reflect a large FTE. With that thesis in mind (seeking confirmation bias?), we noted during other presentations that CSU-Fullerton is on track for $5/FTE. University of Denver spent $6/FTE.

An EBL rep reminded us to prepare for an increased percentage of triggered purchases each passing year as more infrequently-used books reach the trigger point.

A YBP rep mentioned that e-books now account for 10% of sales.

E-books vs. print books: The University of Denver examined usage in cases where they owned both the print and digital copies of the same book. High e-usage correlated with high print-usage (and vice versa), but without a clear causal link. Apparently, relevant content generates high use of both formats. About half of their presentation covered methodology – problems like separate ISBNs for each format made for a very time-consuming project.

E-journals and Big Deal alternatives

CSU-Fullerton used CCC’s Get It Now service to provide e-journals (with transactional payments) instead of ILL or subscribing. They did not anticipate that the same individual would sometimes download the same article multiple times. How to control for that in a patron-friendly way?

CUNY Graduate Center outlined how they eliminated a Big Deal. Essentially the content of that particular deal did not match current institutional strengths. By contrast, every time I’ve examined WFU use stats, the Big Deal for journals comes out ahead of the à-la-carte model.

Another presenter gave a sophisticated analysis of Big Deal journal usage for a consortium of libraries in the UK. He determined how much they would have to pay in Document Delivery or extra subscription charges if they left the Big Deal and returned to an à-la-carte model. In the end, the consortium renewed with both Big Deal publishers under consideration. The speaker’s model included a percentage use increase each year. He stated that use (i.e. journal article downloads) went up 14% each year. I’ve never thought to account for that before, but I could see whether that holds true for WFU. (If use does indeed go up, does it reflect enrollment growth or an increase in per-FTE consumption?)
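The core of that UK model can be sketched as compound growth plus a cost comparison. The figures below are invented for illustration; only the 14% annual growth rate comes from the talk:

```python
# Hedged sketch of the Big Deal exit model described above: project article
# downloads growing 14% per year, then price a hypothetical a-la-carte
# alternative (core subscriptions plus per-article document delivery).
ANNUAL_USE_GROWTH = 0.14   # 14% more downloads each year (from the talk)
DOC_DELIVERY_FEE = 30.00   # assumed cost per article fetched on demand

def projected_downloads(base_downloads: int, years: int) -> float:
    """Downloads expected in a given future year under compound growth."""
    return base_downloads * (1 + ANNUAL_USE_GROWTH) ** years

def a_la_carte_cost(core_subs_cost: float, uncovered_downloads: float) -> float:
    """Retained core subscriptions plus document delivery for the rest."""
    return core_subs_cost + uncovered_downloads * DOC_DELIVERY_FEE

# Year 3 of a hypothetical projection: 10,000 baseline downloads,
# 40% of which would fall outside the retained core subscriptions.
use = projected_downloads(10_000, years=3)
alt = a_la_carte_cost(core_subs_cost=250_000, uncovered_downloads=use * 0.40)
print(f"Projected year-3 downloads: {use:,.0f}")
print(f"A-la-carte alternative cost: ${alt:,.0f}")
```

The growth assumption matters: at 14% a year, downloads roughly double in five years, so an à-la-carte model that looks cheaper today can overtake the Big Deal price quickly.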

CLOCKSS

Libraries (including ZSR) pay for hosting of the CLOCKSS archive at multiple sites worldwide. A speaker noted that the Japanese CLOCKSS site went down due to electric grid malfunctions in the aftermath of the earthquake/tsunami. The site restored itself with data from the other CLOCKSS sites over the following months.

Discovery Layer

A speaker from Oklahoma State University investigated a question that Lynn has asked me to look into: If you have a discovery service (like Summon), do you still need A&I databases? OSU examined one case where a low-use A&I database came with a huge price increase. Her methodology was:

  1. Find the overlap between the A&I database and Summon.
  2. For unique titles, determine whether the library has holdings, and whether the title is in English.
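The two steps above are essentially set operations over title lists. A minimal sketch with invented titles (a real comparison would match on ISSN rather than title strings):

```python
# Hypothetical sketch of the OSU overlap methodology. All titles invented.
ai_database_titles = {"Journal A", "Journal B", "Journal C", "Revue D"}
summon_titles = {"Journal A", "Journal B", "Journal E"}
library_holdings = {"Journal A", "Journal C"}
english_titles = {"Journal A", "Journal B", "Journal C"}

# Step 1: overlap between the A&I database and Summon
covered = ai_database_titles & summon_titles
unique = ai_database_titles - summon_titles

# Step 2: of the unique titles, which does the library hold, and which are English?
unique_held = unique & library_holdings
unique_non_english = unique - english_titles

print(f"Covered in Summon: {sorted(covered)}")
print(f"Unique, held locally: {sorted(unique_held)}")
print(f"Unique, non-English: {sorted(unique_non_english)}")
```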

Her findings:

  • For the database at issue, OSU determined that about 92% of the titles were covered (at least partially) in Summon. Of the remaining 8%, OSU held only 6% (0.48% of the entire list), and those holdings were generally both fragmentary and old.
  • About 75% of the unique titles were non-English. They also examined ILL requests for the unique titles, and discovered there had been none over the past two years.

Ultimately, they cancelled two A&I Databases using this methodology. At WFU, the true duds among our A&I databases have been cancelled already (unless bundled with something else). Therefore, I wouldn’t want to replicate this approach unless (as at OSU) the database is already low-use, budget pressures apply, and a faction protests the cancellation by playing the “unique content” card.

Copyright

One of the keynote addresses introduced the ARL Code of Best Practices in Fair Use for Academic and Research Libraries. This booklet covers scenarios like:

  • reproducing portions of special collections items for exhibits,
  • e-reserves,
  • and many more.
