Professional Development


The 2015 Charleston Conference according to Derrik

Monday, November 16, 2015 12:13 pm

This was my first time at the Charleston Conference. My overall impressions: (1) This conference has a lot of content (I was afraid I would run out of paper for notes); (2) The content was mostly very practical and detailed; (3) Those practical details were more “cutting edge” than in other conferences I’ve attended, i.e. dealing with new initiatives or developments in the business of library resources.

I think I just put myself on the spot to describe some of those new initiatives or developments, huh?

Library/Vendor relations

I attended a couple of lunch discussions dealing with library/vendor relations. In one, panelists & the audience discussed the evolving role of book vendors in managing libraries’ e-book collections. Among other challenges, a vendor rep pointed out that vendors are now being asked to help libraries manage collections of materials they have not bought from the vendor (for example, e-book collections bought directly from a publisher or aggregator).

Privacy


Online privacy was a hot topic at the conference, especially after day 2’s plenary session in which a panel of three legal experts scared everyone by showing—in real time—what everyone connected to the room’s Wi-Fi was doing online. One panelist discussed a European law that protects the “right to be forgotten” (for example, ordering Google to remove certain web pages from its search results). I also learned more about a NISO initiative to develop a “Consensus Framework to Support Patron Privacy”. The goal of the initiative is to find common ground for publishers/vendors and libraries, who have different needs and interests. The resulting document will address 12 areas, including transparency, data collection & use, anonymization, access to one’s own user data, and accountability.

E-books


There are some new developments and experimentation with e-books. I learned about a Mellon Foundation-funded project at the University of Minnesota Press, in partnership with CUNY Digital Scholarship Lab, to develop a software platform for what they call “iterative editions,” a sort of grey-literature monograph. The platform, called Manifold, is designed for interaction, including contributing and discussing. The platform software will be open source, and the UMN Press speaker said that books published by UMN Press will be open access, though that is not an inherent feature of the software.

In another presentation, which Lauren already mentioned, I learned that the Digital Public Library of America (DPLA) is starting to expand its work with e-books. More about DPLA and e-books is available on the DPLA website. One of the panelists in the DPLA presentation was Jill Morris, who used to work with NC LIVE and who told about the NC LIVE Home Grown E-books project. Basically, the point of the presentation was that DPLA is examining some of the innovative things going on with e-books and exploring ways to use those ideas.

I also attended (alongside Lauren) a presentation about EPUB format vs. PDF, discussing the many advantages of the former and users’ ongoing love of the latter, despite its clunkiness. It is apparent that users don’t know much about EPUB, and where there is a choice between EPUB and PDF, they gravitate toward the familiar (and branded) icon. An e-book aggregator rep talked about the difficulty of supporting both formats. And the librarian presenting told about their work with instructors to teach them about the advantages of EPUB format, and to find and promote EPUB versions for course use.


I’ll end with my favorite quote from the conference, by Roën Janyk of Okanagan College, British Columbia:

“The difference between things that might go wrong and things that can’t possibly go wrong is that when things that can’t possibly go wrong go wrong, they’re usually impossible to repair.”

[UPDATE] I wrote it down wrong (but close). Apparently this quote is taken from the book Mostly Harmless, by Douglas Adams: “The major difference between a thing that might go wrong and a thing that cannot possibly go wrong is that when a thing that cannot possibly go wrong goes wrong it usually turns out to be impossible to get at or repair.”

Derrik at ALA 2015

Thursday, July 2, 2015 3:51 pm

Since I am still serving on the ALCTS Standards Committee, I’ll start my ALA report talking about one standard (sort of) that you’ve probably heard of, and two you’re probably less familiar with.

BIBFRAME (heard of it?) – I attended a presentation describing results from converting serials catalog records from MARC into BIBFRAME. I didn’t catch the name of the conversion software, but the presenter was from UC Davis. Disclaimer: This session reminded me that my cataloging skills have gotten rusty, so I’m not sure I can describe this very well. First of all, she pointed out that most libraries will have MARC records following several different sets of cataloging rules—for example, pre-AACR2, AACR2, & RDA. If I understood correctly, the ISSN, title fields (210, 222, 245), and previous/subsequent title fields (780/785) all transferred fairly well into BIBFRAME. The converter ran into trouble with the older “latest entry” catalog records, which list a serial’s entire title-change history on a single record: RDA considers each separate title to be a Work, but the conversion software migrated them as Instances. The older date-range format also caused problems: the software correctly interpreted the first issue’s volume number as belonging to the first issue, but it assigned the first issue’s date to the last issue. The speaker also raised the question of how to handle local adaptations when converting to BIBFRAME. The work of analysis and evaluation continues.

ODI – I heard Marshall Breeding speak about NISO’s Open Discovery Initiative (ODI), which lays out ways to improve interoperability of discovery systems with other database products. Breeding discussed the history of “discovery” products (like Summon) and some of the associated challenges. ODI seeks to alleviate some of the problems by providing a recommended structure for data exchange, covering data formats, method of delivery, usage reporting, updates, etc. Breeding acknowledged that index-based systems will never be “done,” but the ODI standard will help add some needed transparency and will provide a framework for evaluating discovery systems. His final thought was that “we need discovery systems that both users and librarians will love … but that’s not going to be easy.”

PESC – The Protocol for Exchanging Serial Content (PESC) is a newly published NISO Recommended Practice (published less than a week ago). It provides a recommended structure for transmitting serial content. The speaker pointed out that sending and receiving files is not very complex when there is one sender and one receiver, but PESC aims to bring order to the chaos of many senders and many receivers. For example, a publisher may send content to EBSCO, ProQuest, JSTOR, Portico, and other receivers, all of whom receive content from multiple publishers. The recommendations cover things like including a manifest (list of files), the contents of that manifest, file naming conventions, etc.
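The idea of a manifest-driven package is easy to picture. As a purely hypothetical sketch (not taken from the PESC document itself, whose actual manifest format and element names may differ), a delivery package might pair its content files with a manifest along these lines:

```xml
<!-- Hypothetical manifest for a single-issue delivery package.
     File names, element names, and attributes are illustrative only. -->
<manifest>
  <package sender="ExamplePress" recipient="ExampleAggregator"
           issn="1234-5678" volume="12" issue="3"/>
  <files>
    <file name="1234-5678_v12_i3_article01.xml" type="metadata"/>
    <file name="1234-5678_v12_i3_article01.pdf" type="fulltext"/>
    <file name="1234-5678_v12_i3_article02.xml" type="metadata"/>
    <file name="1234-5678_v12_i3_article02.pdf" type="fulltext"/>
  </files>
</manifest>
```

A receiver can compare the manifest against the files actually delivered and flag anything missing before loading, which is exactly the kind of check that breaks down when every publisher packages content differently.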

Speaking of receiving content from multiple sources, I went to a session on data cleanup that included a presentation by Amy Rudersdorf of the Digital Public Library of America. She described a new ingestion system the DPLA is using, aptly called Heiðrún. Rudersdorf explained that “Heiðrún” refers to a mythical goat who would eat anything and produce mead.

I attended a good presentation on leadership by Susan Massey from the University of North Florida. Massey said to think of the organization chart as a hanging mobile: when one piece is out of balance, it affects all the other pieces above and below it. She discussed some of the character traits of a good leader: trustworthiness, fairness, integrity, loyalty. She said leaders should be real & transparent, communicate openly, model mature responses to crises, etc. She also encouraged servant leadership: seeing that those you supervise have the resources necessary for their jobs, helping them excel in their jobs, and knowing and helping them achieve their goals.

In the exhibit hall I met a few vendor reps face-to-face for the first time. I also joined Lauren and Jeff in meeting with two vendors at the same time to try to iron out some e-book data problems. I think we may have finally gotten through to at least one of the people we needed to get through to. I also got some questions answered, learned about some upcoming database enhancements, etc.

NASIG 2015

Friday, June 5, 2015 4:50 pm

Last week I attended the 30th annual NASIG conference, presided over by our own Steve Kelley, who looked more and more carefree as the conference progressed and he got closer to handing over the presidential gavel. Well, at NASIG the outgoing president actually receives a gavel; the new one usually gets a hat. This was also the first conference under the newly-official name “NASIG” (no longer the “North American Serials Interest Group”).

The presentations at this year’s NASIG conference (or at least, the presentations I attended) seemed to steer away from “how we done it good” and focused instead on “here’s what we learned from looking at the data.” The following synopses are taken primarily from my notes, so I apologize for any misrepresentation.

  • Marlene van Ballegooie, from the University of Toronto, spoke about the OCLC Knowledgebase (OCLC KB), which is designed to reduce the time librarians spend managing e-resource holdings. Rather than the library having to communicate to the KB provider which journals they subscribe to from which publishers, the publisher sends a holdings file directly to the knowledgebase. Van Ballegooie attempted to assess the effectiveness of the service by comparing each load against information she obtained directly from the publishers. Results of course varied by publisher, but common problems were irregular data loads, and a time lag between a title’s availability at the publisher site and its activation in the KB. Also, any local changes get overwritten in the next data load. But the presenter concluded that the method does have potential for saving time, especially with custom packages or aggregator platforms where manual selection is necessary (such as e-book providers).
  • Gabrielle Wiersma and Esta Tovstiadi, from Univ. of Colorado at Boulder, presented an analysis of approximately 100 randomly-selected e-books published in 2014 across multiple platforms. Using a rubric based on a tool developed by the Center for Research Libraries, they assessed 16 aspects of the user experience, such as metadata, linking, pagination, etc. Some examples from their findings:
    Metadata – some platforms include subtitles, others do not; “date” may refer to date published, copyright date, or date posted online; editors are sometimes named as authors; etc. In none of the cases they examined did the platform-generated “MLA citation” actually match MLA format.
    Searching – different platforms may return search results at the word, page, or chapter level. Most (61%) were chapter-level, which is probably the least useful for searchers.
    Pagination – system page numbers often don’t match the page number displayed on the PDF (probably due to how front matter is counted); in EPUB format, page numbers are often missing altogether.
    The presenters showed examples of how search results may vary wildly from one platform to the next. This can be caused by search functionality, such as auto-stemming, or how the platform treats hyphenation, or whether it defaults to AND or OR searches. They also found problems caused by OCR spacing errors — e.g. “Japa nese” or “infl uential”, or words joinedtogether withouta space.
    See their slides on SlideShare for side-by-side examples.
  • Michael Matos of American University shared his analysis comparing library journal holdings to works referenced in faculty publications. The goal was to use the data to demonstrate the extent to which faculty rely on the library for their research. I confess that his complex methodology lost me. Next steps include looking more closely at the referenced materials not held by the library, then comparing those against ILL data (to show that researchers also used the library for those materials).
  • In “Strategies for Expanding eJournal Preservation,” Shannon Regan, from Columbia University, described a Mellon Foundation Grant-funded project to identify e-journals that are not currently being preserved by a trusted 3rd-party repository, learn why they are not being preserved, and explore ways to get them preserved. I was kind of surprised, and kind of not-so-much, at the amount of content—even from major publishers—not being preserved. As for reasons why, I came away with the impression that the most prevalent reason is a question of rights/permissions. In some cases, a publisher may not have secured rights from the authors; in other cases, publishers (typically smaller ones) have no understanding of the need or the process for preserving content, or may fear a loss of control over the content (thinking, for example, that permitting an archiving agency to preserve the content would be equivalent to making the journal open access). Other times, the step of preservation may just slip through the cracks. Regan recommended that librarians should make preservation a part of the conversation with publishers, vendors, consortia, faculty, and other stakeholders.
  • In a fun presentation, Kristen Garlock of JSTOR and Eric Johnson of the Folger Shakespeare Library described some projects/products developed as an outgrowth of usage data. The first was JSTOR Classroom Readings, a free tool intended to give educators a list of articles for core courses. Developers had originally wanted to gather college syllabi and curate a list of articles from those, but there were too many obstacles. So instead they looked at usage data for signs of “teaching use” (short bursts of use at a single institution). Though not perfect, and not yet considered final, Garlock seemed pleased with the methodology and the resulting product. Johnson talked about (among other projects) a JSTOR tool called Understanding Shakespeare. The user can select a play, then choose a line in that play and get a list of articles in JSTOR that quote that line. Again, not complete (only includes 12 plays so far), but a pretty nifty tool.

In other sessions, I learned a few new Excel functions to try out, plus a couple of things to try with CORAL. I was also pleased to hear EBSCO Chief Strategist Oliver Pesch say very plainly and repeatedly that EBSCO supports customer choice and is actively seeking ways to optimize customer choice. I felt encouraged when he said that “no one vendor can offer libraries all the resources” they need, and that if you want to use an EBSCO product for one part of your workflow and a competitor’s product for another part, the workflow should not only work, it “should be optimized.”

Finally, my favorite quotes from the conference:

Scott Vieira, Rice Univ., referring to typical functionality in e-resource management systems: “Forcing the acquisition of e-resources into a linear workflow is like trying to train tortoises to walk in a straight line.”

Marcella Lesher, St. Mary’s Univ., about a journal weeding project (I’m probably paraphrasing): “We’re talking here about the care and feeding of print resources … although at this point we’re probably starving them to death.”

Derrik at the NC Serials Conference, 2015 edition

Monday, April 20, 2015 2:03 pm

Well, the latest few PD blog posts have guilted me into finally writing about my trip to the 2015 NC Serials Conference. Now, if I can just find my notes …

Aha! Here we are.

Steve’s post already covered Katherine Skinner’s opening keynote address quite well. I’ll add an “Aha” moment I had. Do you know who invented the incandescent light bulb? Hint: It wasn’t Thomas Edison; he merely perfected the design. Skinner also said that the jukebox was not invented by the record industry. Lesson: Innovation per se isn’t the only thing that’s important, and positive changes can come from outside the area you’d normally think to look for them.

I presented a session on library-vendor negotiation, along with co-presenter Lesley Jackson, our EBSCO Account Manager. We presented nine different principles of negotiation, along with examples. There were things like “Be prepared,” “Don’t be afraid to ask,” and “Don’t take it personally.” We finished earlier than expected, but the audience participated and asked good questions. We had a number of vendor reps in the audience too, which made it more fun.

Another plenary session was a panel discussion about text and data mining. A fair amount of this was over my head, but one thing that was clear is that everybody’s still trying to figure it out. The vendor representative on the panel pointed out the difficulty vendors have with managing and licensing text mining because librarians can’t really articulate what “text mining” means. But it was also pointed out that (1) it means different things to different libraries and to different researchers; and (2) in many cases the researchers themselves don’t yet know where the research will take them, so it’s hard to know what permissions to ask for.

Derrik at ALA Midwinter 2015

Monday, February 9, 2015 11:49 am

Vendor highlights

Lauren and I had a really good dinner discussion with a VP of a database vendor, talking about what is and isn’t important for researchers and libraries. That VP and our regular sales rep have already scheduled a campus visit to continue the conversation.

I had a conversation with a publishing company’s VP of Sales regarding demand-driven acquisition (DDA). I described the DDA usage and spending patterns we have seen here, and we talked about the difficulties of finding a sustainable balance for publishers and libraries. We also talked about “evidence-based acquisition” (EBA), where the customer pays first for access, then at the end of the access period can select content for perpetual access, up to the amount paid. I told the VP that the entry cost for EBA is typically too high. He immediately understood—the up-front price that a large library could afford would be cost-prohibitive for smaller libraries. He seemed to like my suggestion that they base the entry cost on the customer’s historic spend.

I had a good meeting getting to know our e-book vendor’s new rep, and his supervisor sat in on part of our meeting so I was able to bend her ear too, mainly about DDA. I learned that there is talk of developing a variation on the short-term-loan DDA model, though nothing concrete yet as far as I know. I don’t want to divulge any secrets here, but I am cautiously optimistic about what they told me.

There were lots of other productive conversations; in all I spoke with at least 17 vendors (that I kept track of). It feels weird to keep this section of my report so brief, but I fear the rest of the vendor stories would get tedious.

Sessions


A speaker from a large university library described how they collect and analyze data about e-resource outages. Staff enter and track e-resource problem reports in a commercial incident-tracking system. They record the cause (e.g. metadata error, simultaneous-user limit, user error, etc.), the time it took to resolve, and other data. Tracking outages allows them to become aware of trends. One benefit is that they can present a record of incidents to vendors, with actual numbers instead of “your site goes down a lot.” In the first year of collecting data, proxy problems accounted for a small fraction of the total errors, while 25% of errors occurred because the target content was missing from the vendor’s site (i.e. an article or issue missing from a database).

In another session, a representative from a large university press spoke about how usage-based acquisition is affecting the Press. She acknowledged that DDA is scary because they know that not every book will get used, but the only way to know which books will get used is to publish them. She said it will take a while for them to evaluate DDA because they don’t know yet when the revenue for a book will come in and it is difficult to assess which marketing efforts are working. She also expressed a concern that was a new idea to me—she wondered whether access to a large pool of DDA titles might actually obscure the fact that libraries are underfunded.

I attended a presentation by Len Vlahos, Executive Director of the Book Industry Study Group (BISG). Vlahos said wholesale book revenue has remained fairly flat over the past five or six years. The rate of growth of e-book sales has slowed (i.e. still growing, but the curve has flattened); hardcover revenue dipped in 2010 but has since recovered. Sales of print textbooks are declining, but that trend is publisher-driven, unlike the consumer-driven trade market. Publishers are developing online interactive learning systems as a replacement for printed textbooks, since textbooks that are simply digitized versions of the print are not well received. Vlahos predicted that the next big disruption in the book industry will be a business model (like retail discounting in the 1970s or e-commerce in the 1990s) rather than technological (like the printing press or the Kindle). He noted the growth of a subscription economy, in which consumers are being trained that it’s ok not to own content (Netflix, Spotify, Pandora, etc.), and even beyond content (ZipCar, bikeshare), and suggested that publishers expect the subscription model to have a positive effect on revenue within the next 5 years.

The Continuing Resources Standards Forum included an overview of the NISO Recommended Practice for Demand-Driven Acquisition of Monographs. The standard was published last June, and to me it already felt a little out of date because it doesn’t address some of the more recent tensions in the DDA market. The Forum also included a review of the very new (published last month) NISO Recommended Practice on Access and License Indicators. This is a simple standard for encoding, at the article level, whether or not that article is “free to read,” plus a link to the article’s license information.
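The Access and License Indicators encoding really is that simple. As I understand the Recommended Practice (NISO RP-22-2015), article metadata carries two small elements, roughly like this; the surrounding wrapper element and dates here are illustrative:

```xml
<!-- ALI elements embedded in article metadata (e.g. in a JATS record).
     The wrapper element and dates are illustrative examples. -->
<article-meta xmlns:ali="http://www.niso.org/schemas/ali/1.0/">
  <!-- The article is free to read starting on this date;
       an embargoed article would carry a future start_date. -->
  <ali:free_to_read start_date="2015-01-15"/>
  <!-- Link to the license that applies from the given date -->
  <ali:license_ref start_date="2015-01-15">
    https://creativecommons.org/licenses/by/4.0/
  </ali:license_ref>
</article-meta>
```

The appeal of the yes/no “free to read” flag is that downstream systems (discovery layers, link resolvers) can act on it without having to interpret what “open access” means in any given case.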

In an excellent overview of linked data, the presenter described the evolution of the Web from a web of static documents to a web of data. In the web of data, instead of describing an entity with a record (i.e. a surrogate for the entity), an entity has its own unique identifier, and that’s where you go for information about that entity. Note that BIBFRAME is about identifying bibliographic entities. The presenter said that libraries have been very involved in the web of documents, but cautioned about the danger of a “library-shaped black hole” in the web of data. Library projects have tended to use library vocabulary instead of the vocabulary of the larger web, so it is difficult for web searches to find and link to them. The presenter said that the reason libraries should share linked data on the web is the same as the historical reason for cataloging – “So people can find our stuff.”
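To make the record-versus-entity distinction concrete, here is a small sketch in Turtle (RDF) using schema.org vocabulary; the URIs are made up for illustration. Instead of a closed record that describes a book, the book gets its own identifier that any other data on the web can point at:

```turtle
@prefix schema: <http://schema.org/> .

# A hypothetical identifier for the entity itself, not a catalog record.
<http://example.org/entity/book/mostly-harmless>
    a schema:Book ;
    schema:name "Mostly Harmless" ;
    # The author is also an entity with its own identifier,
    # so the link points to a thing, not a text string.
    schema:author <http://example.org/entity/person/douglas-adams> .

<http://example.org/entity/person/douglas-adams>
    a schema:Person ;
    schema:name "Douglas Adams" .
```

The vocabulary point is visible here: using widely understood terms like schema:Book and schema:author keeps the data legible to general web search engines, where a library-specific vocabulary might not be.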

Derrik’s ALA roundup

Wednesday, July 9, 2014 4:43 pm

I’ve sorted my 2014 ALA Annual Conference experience into 3 categories–Committee work, Vendor chats, and Sessions.

Committee work

The ALCTS Standards Committee was formed last fall to promote member involvement in and education about the development of information standards. Part of my assignment on that committee is to act as liaison to the ALCTS Continuing Resources Section (CRS), which means I got to go to two sets of committee meetings for the price of one! Saturday morning I traveled to Paris (i.e. Paris Las Vegas) where CRS was holding its “all committees” meeting. I met with the CRS Standards Committee, Cataloging Committee, Committee on Holdings Information, and the CRS Executive Committee to discuss the division-level committee’s charge and the best way for me to liaise with CRS. Then Sunday afternoon at the ALCTS division-level “all committees” meeting, we discussed reports from the 5 sections of ALCTS. We are still trying to pin down the best ways to carry out our charge. We are looking for ways to foster collaboration between the sections, and trying to determine the best way to interact with external standards organizations.

Vendor chats

With all the committee meetings and sessions I needed to attend, I wasn’t sure I would have enough time in the vendor exhibit hall. As I look at my results, I’m still not sure how I packed all this in. I won’t give details here, but feel free to follow up with me if you want to know more about any of these.

I had some fairly long, productive discussions with

  • JSTOR – ebooks & DDA
  • YBP – DDA profile management (deletions) & more
  • NYTimes – academic site license
  • EBSCO – Usage Consolidation questions
  • Kanopy – DDA
  • ProQuest/EBL – STL pricing, e-books in Summon, Academic Complete & DDA, etc.

Lauren, Jeff, and I attended a ProQuest-sponsored discussion about DDA, with librarians and publishers participating. Basically, everybody is struggling to adapt.

I had shorter discussions with

  • Wiley – new article interface coming
  • Data-Planet – new Java-free interface for Statistical Datasets coming
  • CLCD – we’re trying to get their author-title catalog lookup to work
  • Project MUSE – e-books, DDA (which they aren’t doing), & evidence-based acquisition (which they’re working on)
  • Taylor & Francis – e-book STL pricing
  • ProQuest – brief Intota demo
  • McFarland – thanked them for participating in NC LIVE’s Home Grown e-books pilot
  • BrowZine – now has an iPhone app
  • and the New York Philharmonic Archives, a free resource I had been unaware of – do you know what they played at their first concert?

And several others, of course. I met some sales reps that I had previously only corresponded with by e-mail (Alexander Street, SAGE, Taylor & Francis, and Euromonitor). I even managed to find a few book signings with no lines!

Sessions


Michael Levine-Clark gave an e-book usage report, very similar to the one I attended at Midwinter last January. The basic (unanswered) question is “What constitutes meaningful use of an e-book?” (Or from a more practical standpoint, what type(s) of e-book use should we be measuring?) At one point, Levine-Clark suggested that an e-book being downloaded may be an indicator of significant use, but ZSR’s early data seemed to indicate that a download was usually an indicator of the user’s unfamiliarity with the platform. Answering an audience question about user preference, Levine-Clark said that if you ask users “Do you prefer print books or e-books?” most of them will select a preference, but if your questions are more nuanced (Which do you prefer for looking up a fact? Which do you prefer for immersive reading? If you could get an e-book immediately but had to wait 5 minutes for a print book, which would you prefer?), then no clear preference emerges, at least in the research he has done. Levine-Clark’s presentation slides are available online.

In the CRS Standards Forum, presenter Aron Wolf, a ProQuest software developer, spoke about the NISO Recommended Practice called IOTA (“Improving OpenURL Through Analytics”). IOTA, released in 2013, came out of a 4-year research project to produce a standard way for link resolver vendors (think of the WFU Full Text Options button) to measure & describe how well their product works. Wolf, who was part of the working group for IOTA, said they concluded that there was not an objective, cross-vendor metric, so IOTA instead recommends a methodology for testing a link resolver against itself. At that point he either lost me in the technical details or else I understood it so well I didn’t think I needed to take any notes. One future possibility he suggested was that it may become possible for reports to check accuracy within a matter of hours, enabling link resolvers to respond much more quickly when publishers change linking formats.

The last session I attended at ALA was about “articles on demand,” also called pay-per-view (PPV), which is essentially DDA for journal articles. First, Beth Bernhardt from UNCG talked about their experience dropping PPV about 9 years ago in favor of “big deal” journal subscription bundles, and now having to reconsider PPV in light of ongoing budget cuts. Susanna Bossenga from Northeastern Illinois University explained her library’s on-demand article delivery, which they currently provide using the Copyright Clearance Center’s Get It Now service. Article requests are mediated and processed by their ILL department. Finally, Mark England from the University of Utah described their implementation of ReadCube Access. When authenticated users come across un-owned articles, ReadCube Access presents them with options to rent for 48 hours, download, or get a “cloud” copy (online reading only, with no time limit). The library pays, of course, with each access option costing a different amount–$4 to rent, $10 for the cloud option, or $25 for a downloadable PDF.


That summarizes my conference, and may help explain why I still feel jet-lagged a week later. Speaking of jets, my flight home left Las Vegas 10 minutes before a National Weather Service Excessive Heat Warning went into effect. I must say that if the heat over the weekend wasn’t “excessive” (Monday’s high was 111°), I’m really glad I got out when I did.


Derrik’s takeaways from NASIG 2014

Tuesday, May 20, 2014 4:45 pm

The 2014 conference of the North American Serials Interest Group (NASIG) was a good one, from my point of view. A wide variety of topics and some very good keynote addresses gave me lots to chew on.

Since I started attending NASIG conferences 12 years ago, one of my favorite aspects has been the vendor involvement. NASIG is not just a librarians’ organization; subscription vendors and publishers are also encouraged to join, serve on committees, and be otherwise involved in the organization. Each conference usually includes sessions in which a vendor/publisher perspective is offered. I attended four such sessions this year, including “Vendor Lightning Talks,” a new conference feature in which six different content providers each took 5-7 minutes to tell what’s new with their products. I heard a panel of publishers (Nature, American Chemical Society, and IEEE) describe what they are doing in the Open Access arena, and a subscription vendor and a library consortium officer described their negotiation processes. My favorite vendor-related session was one in which a vendor sales rep, a consortium officer (the same from the previous presentation), and a librarian sat together on the stage and discussed a set of ethical questions (e.g., “Is it fair for a library to write an RFP so narrow that it is obviously customized to a specific vendor?” or “Is it fair for a vendor to go over the head of an acquisitions librarian if he/she says no?”). The probing into gray areas was a good exercise in seeing the other side’s point of view.

In other practical areas, I heard presentations about

  • results of an availability/usability study, in which students were able to successfully locate full text only 41% of the time, sometimes due to system errors, but in many cases, the students simply did not click on the right link, or missed key information that was presented on the screen;
  • survey results regarding methods of tracking perpetual access to online journals, which reminded me of the need to distinguish between post-cancellation access (usually on the publisher’s website) and archival access, meaning access when the publisher no longer provides it (often via Portico or LOCKSS)
  • updates on some emerging NISO standards – PESC, a communication standard for transmitting serial content; KBART, related to vendor knowledge bases; PIE-J, which has to do with how e-journals, especially title changes, are presented on vendor websites; ODI, for sharing metadata for discovery systems; and OAMI, a new metadata standard for open-access content, which includes the wonderful (IMO) feature of not referring to an item as “open access” but rather as “free-to-read” (yes/no).

What really made this year’s conference stand out for me was the amazing slate of “Vision session” (i.e. keynote) speakers. Katherine Skinner’s opening address on “Chance, Choice, & Change” made two particular impressions on me: (1) “Frontier” depends on your viewpoint: you may see empty space ready for development, but “empty” space is never truly empty, and there will often be people who see your pioneering as encroaching on their territory; (2) The cultural processes of production, distribution, and reception “always, always, always” depend on networks of people, not the lone genius.

Chris described Herbert Van de Sompel’s thought-provoking address very well. This address got me wondering how to get the broader scholarly communication world to see the problem of “reference rot.”

In the closing keynote address, Jenica Rogers talked about how often she hears people say “I could never do what you did” (i.e. cancel the library’s ACS package), but she said she believes what they really mean is they would like to, but … (“but our faculty would riot”; “but I don’t want to rock the boat”; etc.). So Rogers presented a list of actions/habits that would help prepare us to make life’s tough decisions. Here is the list as I captured it:

  • Know thyself – know why you do the things you do
  • Claim and demonstrate your expertise & authority – know your reputation and how to leverage it
  • Gather data – evidence can shout when you can only whisper
  • Make friends
  • Start now, immediately
  • Find common ground – insisting that everybody else thinks X is important will only frustrate & annoy the people who don’t
  • Communicate effectively
  • Embrace serendipity
  • Evolve, even when it’s uncomfortable
  • Release fear – “Fear doesn’t make smart decisions, fear makes safe decisions.”



ALA Midwinter according to Derrik

Friday, February 7, 2014 12:09 pm

Vendor meetings

As usual, I spent a large part of this conference in the vendor exhibit hall.

I learned that Alexander Street Press is close to signing a deal to offer a certain film collection that I’ve heard people here express specific interest in (I don’t want to jinx the deal by naming the collection before it’s finalized). Bad news: it will be by subscription only, at least at the start. I also had an interesting after-dinner conversation with the President of Alexander Street Press, about how hard it is to come up with a good short-term loan model for streaming media (do you charge by the minute? what’s the appropriate price point?), and the difficulty of getting rights holders on board with it.

Data-Planet (provider of the Statistical Datasets database) is targeting June/July for the release of a Java-free user interface, and is also contemplating offering a one-time purchase option for their Statistical Data Sheets.

Elsevier‘s main development focus seems to be on Mendeley right now.

I learned more about the respective e-book models of both JSTOR and Project MUSE. Both e-book collections primarily feature university presses. A Project MUSE presenter said they try to select their e-books to match their existing subject strengths, so that the journal and book collections will complement each other. MUSE offers single-title purchasing via YBP, and JSTOR expects to offer it “in a few months.” JSTOR also offers a DDA model for e-books.

Oxford University Press now offers individual title purchasing for their Oxford Scholarship Online books. They are also now offering journal backfiles for title-by-title purchase.

A week before ALA, ZSR was asked/invited to become a Beta test site for the new EBL administrative module. So I spent half an hour at the ProQuest booth with Alison Bobal of EBL, getting a sneak preview. The new module seems much easier to use, and includes some functionality that up until now could only be handled by contacting EBL support, so I’m excited to be an early adopter. I also appreciate the opportunity to help shape the product. We went live on the new admin module today!

At the Third Iron booth, I learned more about a new feature of BrowZine, a product we subscribed to last August that allows WFU users to create a personalized collection of library-subscribed journals on their mobile devices. BrowZine can now include journals from ProQuest, EBSCOhost, and Ovid aggregator databases (initially it only included journals on publisher websites). We have, of course, turned on this new feature.

I learned about a couple of new e-book providers, and also had one-on-one meetings with our sales reps from APA Publishing, SAGE, Springer, Taylor & Francis, Thomson, and Wiley.


Committee meetings

I am in my third year of serving on the ALCTS Transforming Collections Task Force. The Task Force manages an ALCTS microgrant program to fund projects in support of the ALCTS goal of transforming collections. The first year we received a good number of applications, but the second year (last year) we only got a few, so a good portion of this committee meeting was spent discussing whether or not to continue the microgrants (we decided yes, for at least one more year) and how to drum up applications. We talked about the many different ways in which collections and collecting are being transformed (e.g. shared collections, DDA, digitized local collections, open-access journals, user-generated content, etc.) and brainstormed ways to promote the theme of transforming collections.

I am also a member of the newly-formed ALCTS Standards Committee. The purposes of the committee are to educate ALCTS members about, and encourage their involvement in, the development of relevant standards, and to support ALA’s voting representative to NISO. As this was the committee’s first meeting, it was mostly about getting organized and discussing how best to fulfill the committee’s role.

At my previous ALA conferences, I have enjoyed attending presentations sponsored by the Publisher-Vendor-Library Relations Interest Group (PVLR). At last month’s conference, I had (or made) time available to attend the PVLR business meeting. The group is made up of 3 co-chairs and anybody else who wants to attend. There were about 15-20 people there, and the meeting was simply an open discussion of possible topics for future PVLR presentations. Ideas included security/hacking; data mining & analysis (how to explain legitimate uses to publishers, how to explain rights holders’ concerns to the end-user); self-publishing; hybrid Open Access; and the future of society publishing.



I did manage to attend a few presentations. In one, Rick Anderson, Associate Dean for Scholarly Resources & Collections at the University of Utah, discussed “predatory publishing,” and what makes a publisher “predatory.” Anderson admitted that there could be several kinds of predation, but he focused on two broad categories: misrepresentation (e.g. deliberately misleading journal titles, a publisher name mimicking a legitimate-sounding organization, a fictitious editorial board or real people’s names used without permission) and selling false prestige (e.g. false claims of peer review or impact factor). Anderson encouraged getting the word out to scholars about predatory publishers, but emphasized the need to do it delicately, lest we inadvertently send the message that all open-access, third-world, small, or new publishers are bad. Following Anderson’s remarks, Regina Reynolds of the U.S. ISSN Center discussed how the ISSN network (U.S. and international) can and cannot help. Reynolds stressed that the ISSN is not a stamp of legitimacy; it’s just a “dumb number.” On the other hand, the ISSN network recognizes that they cannot be perceived to enable fraudulent publishing, so they have established some guidelines, such as no longer assigning an ISSN prior to the publication of the first issue, and being more careful about publishers requesting a large block of ISSNs.

In a session sponsored by ProQuest, Michael Levine-Clark presented results of his analysis of e-book usage over multiple years on the ebrary and EBL platforms. ProQuest had provided him with usage data from both providers, covering 4 years and 750,000 titles. It’s hard to pick out the salient points when there was so much information presented, but here’s my version of the highlights:

  • University press titles consistently got higher use (sessions, page views, printing, etc.) than the overall collection average. BUT this might simply be because university press titles are available in more libraries and therefore to more users. Levine-Clark was working with aggregate data, and did not have information about individual library usage or holdings.
  • Books in the Social Sciences seem to be used at a slightly higher rate than the Humanities or STM. BUT page views and printing per user session were highest in STM. In other words, even though a lower percentage of STM books got used, they seem to get used more intensely.
  • Question: Do more page views per session mean more time in the book, or just rapid “flipping” through? Levine-Clark did not have data regarding the amount of time spent in the book.
  • Question: What constitutes a meaningful use of an e-book? Levine-Clark suggested that copying may be the best measure (indicates the user found something they wanted to save). Printing or time in the book might be other possibilities, though it is not uncommon for a user to print something just for offline reading.
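The per-session intensity comparison above can be made concrete with a toy calculation; the figures and field names below are invented for illustration, not Levine-Clark’s actual data:

```python
# Toy illustration (invented numbers): STM has fewer sessions overall,
# but each session is more intense as measured by page views per session.
usage = [
    {"subject": "STM",             "sessions": 1200, "page_views": 54000},
    {"subject": "Social Sciences", "sessions": 2500, "page_views": 70000},
    {"subject": "Humanities",      "sessions": 2100, "page_views": 50400},
]

for row in usage:
    rate = row["page_views"] / row["sessions"]
    print(f"{row['subject']}: {rate:.1f} page views per session")
# With these made-up numbers, STM comes out highest (45.0 per session)
# despite having the fewest sessions.
```

The same two-metric view (breadth of use vs. intensity of use) is what makes the Social Sciences/STM contrast in the findings above possible.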

The presentation slides are available online.

One of the authors who spoke at the conference was David Baldacci. I credit Baldacci with getting me interested again in reading for pleasure, after I heard him speak back in 2002 or 2003, so of course I had to go hear him again. He only spoke for about 20 minutes, but made up for it with an unannounced book signing, signing proofs of his forthcoming young-adult novel The Finisher.

Charleston Conference online

Thursday, November 14, 2013 9:49 am

I have never actually attended the Charleston Conference, but this year they broadcast a small number of sessions live over the Internet. I tuned in to watch two of those sessions.

In a pre-conference segment, Judy Ruttenberg from the Association of Research Libraries spoke about legal issues in providing online resource access for print-disabled patrons. I learned that Section 508 of the Rehabilitation Act, requiring accessible electronic technology, applies to institutions receiving certain federal funding (and Ruttenberg made it sound like it applies to virtually all universities in the U.S.), but it does not apply to the private sector. So while it is illegal for a school/university to require the use of an inaccessible device, it is not illegal for Amazon or B&N (for example) to produce an inaccessible e-reader. As a matter not just of legality but of providing good service, Ruttenberg encouraged compliance with standards, especially WCAG 2.0 (Web Content Accessibility Guidelines; I had to look it up). She also suggested that libraries could partner with campus offices for students with disabilities, and with professors, to advocate for technology and service standards and to help make sure content is accessible. Finally, Ruttenberg addressed the challenge of getting e-resource licenses in line with accessibility needs, especially given that content providers are not liable. As with the technology, model license language is a moving target, but she recommended pointing to standards (such as WCAG 2.0), as well as asking for the right to make the content usable. She closed by quoting someone (sorry, I didn’t catch who) asking why we don’t push for indemnification against third-party lawsuits for inaccessibility. In the Q&A, a discussion arose around whether an institution would be within its rights to make content accessible even if the license doesn’t permit it; Kevin Smith (Duke’s Scholarly Communications Officer), who was in the audience, asked which lawsuit you would rather defend: a content provider alleging you didn’t have the right to do that, or a disabled student who couldn’t access course material.

The other session I watched was a presentation of research on the effects of discovery systems on e-journal usage. The researchers (Michael Levine-Clark, U. of Denver; Jason Price, SCELC; John McDonald, U. of Southern California) looked at the usage of journals from six major publishers at 24 libraries, six for each of the four major discovery systems (Summon, Primo, EBSCO Discovery Service [EDS], and WorldCat Local [WCL]). The presentation went fast and I had a hard time keeping up, but the methodology seemed logical and the results interesting. Results varied, of course, especially the effect of the discovery system on the different publishers’ content, but there did appear to be a resulting increase in journal usage, with Primo and Summon affecting usage more than EDS and WCL. The main purpose of the current study was to see if they could detect a difference, which they did. Their next step will be to try to determine what factors are causing the differences.

Derrik at NCLA 2013

Monday, October 21, 2013 12:13 pm

Here’s my summary of last week’s North Carolina Library Association conference. Overall, I thought it was a great conference, and I was glad I attended.


Christopher Harris, editor for the American Libraries e-content blog, gave a very good update on the e-book industry, although it was mostly geared toward public libraries. Some of my favorite sound bites and key concepts:

  • Don’t stress out about change. “Stuff is constantly changing; let it flow.”
  • The last disruptive technology we saw was the iPod and mp3’s. Experts (audiophiles) hate mp3’s because of the lower sound quality, but for the average user, an iPod & earbuds sure beats walking around with a phonograph or boombox. Librarians need to avoid being the nay-saying experts.
  • If all we’re doing is providing e-books, we’re in trouble because it can be outsourced at a much lower cost. Libraries can be filters and help users avoid “analysis paralysis,” like shopping at Trader Joe’s, where much of the selection has already been done for you.

Harris encouraged us to be willing to experiment with new models of purchase and access, and to think with our “math brains” instead of our “emotional” brains. For example, we all got up in arms when HarperCollins announced a 26-loan maximum, but Harris pointed out that for a $20 book that amounts to about $0.77 per loan. “How much per loan does a print book cost?” (in labor and building/shelving costs), he asked. Harris reviewed the current license models used by some of the “Big 6” publishers. He pointed out that Macmillan does not sell to library consortia, and said (almost angrily), “That’s where we should plant our flag!” because resource sharing is much more important than a 26- or 52-loan limit.
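Harris’s “math brain” arithmetic is easy to verify, and the same calculation works for any library’s own prices (only his $20 example figure is used here):

```python
# Harris's example: a $20 e-book under HarperCollins's 26-loan cap
# costs well under a dollar per circulation.
ebook_price = 20.00
loan_cap = 26
cost_per_loan = ebook_price / loan_cap
print(f"${cost_per_loan:.2f} per loan")  # → $0.77 per loan
```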

Harris’ parting advice:

The next day, I attended a panel discussion and found out that NC LIVE is already working on a new model for shared e-books. I confess I didn’t understand all this very well, and it’s all still in Beta, but I’ll try to keep this general in hopes that I won’t go too far off track. NC LIVE has been working with Wake County Public Library to develop a shared platform for library e-books. Note that it will be the platform technology that is shared, not necessarily the e-books. It will be up to individual libraries to implement the platform (developed by NC LIVE) on their own websites. The vision is that each member library will be able to purchase e-books and place them on the NC LIVE platform, either shareable or private to the purchasing library. NC LIVE has started negotiating with several NC publishers to make their e-books available on the platform. It wasn’t clear to me whether those are e-books that NC LIVE will purchase, or if they’ll simply be available for member libraries to purchase. Target launch date for the platform is January 2014. There will be some content from one publisher (John Blair, based in Winston-Salem) available at that time, and NC LIVE hopes to have additional content from other publishers available by July. For now, the only access model for these e-books will be single concurrent user.


Digital/Digitized Library Collections

I went to a couple of presentations on digital collections available via the State Library. There’s a lot of good stuff available for NC historical research, such as family bibles, wills, property records, cemetery photographs, a Civil War Roster index, an index of the Raleigh News & Observer covering 1926-1992, and an archive of all NC government websites. I also went to a session that gave an update on NC ECHO, which searches across the digital collections of various libraries, museums, and archives in North Carolina (including Digital Forsyth, for example). NC ECHO uses the OAI-PMH standard to gather metadata from the various collections, then builds a searchable index of all these collections.
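The OAI-PMH pattern NC ECHO relies on is simple enough to sketch. This is a minimal, hypothetical harvester, not NC ECHO’s actual code; the endpoint URL and function names are my own:

```python
# Minimal OAI-PMH harvesting sketch: request ListRecords from a
# repository and pull the Dublin Core titles out of the response.
import urllib.request
import xml.etree.ElementTree as ET

DC = "{http://purl.org/dc/elements/1.1/}"  # Dublin Core namespace

def extract_titles(xml_text):
    """Return all dc:title values from an OAI-PMH ListRecords response."""
    root = ET.fromstring(xml_text)
    return [el.text for el in root.iter(f"{DC}title")]

def harvest(base_url):
    """Fetch one page of records from an OAI-PMH endpoint."""
    url = f"{base_url}?verb=ListRecords&metadataPrefix=oai_dc"
    with urllib.request.urlopen(url) as resp:
        return extract_titles(resp.read())

# Usage (placeholder endpoint): harvest("https://example.org/oai")
```

A production harvester would also follow the response’s resumptionToken to page through the full record set before building its search index.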


Electronic Resource Management Systems

I formed and participated in a panel discussion about E-Resource Management Systems (ERMS). Our panel included librarians using an open-source ERMS (me, talking about CORAL), an ILS vendor’s ERMS, and a content vendor’s ERMS. It was fun (in an e-resource-managing-geeky sort of way) to see how the strengths of the systems varied according to provider. The presentation was well attended, and I received some positive feedback afterward.



I won’t try to summarize the keynote addresses, but here are a couple of my favorite highlights:

In speaking of our responsibility to present readers with all sides of a controversial topic, ALA President Barbara Stripling pointed out that in a print environment, libraries could place all the relevant resources together on the shelf, so readers have to “at least trip over” other points of view on their way to the books they’re looking for. But in an online environment, it is too easy to limit yourself to resources that you already agree with, so libraries have a responsibility to teach users to look for those other points of view.

I’m sure others will offer a better description of ACRL President Trevor Dawes’ address, but the point that stood out the most to me was his explanation of why Financial Literacy is one of his main areas of focus. Dawes said that student loan debt has now surpassed credit card debt in the United States. (Actually, that happened in 2010, but Yikes!)



If you’ve read my past conference summaries, you won’t be surprised that I had some productive conversations with vendors in the exhibit hall. I talked with the Gale rep about the Cengage bankruptcy, and was again assured that it’s “business as usual” for Gale; she compared the bankruptcy to refinancing a mortgage (yeah, I know it’s more complicated than that, but I still thought it was a good analogy). The Reference USA rep gave me a heads up on a new data visualization feature, and told me to contact our sales rep about it (I think it’s available at no additional cost, waiting to hear back). I got an update on the new Alexander Street Press platform for streaming music & video, which is scheduled to be released later this week (but they’ve already had to push it back once). And I had another license-unjamming conversation with a publisher (like happened at ALA earlier this year). I had gone months without hearing a reply, then talked to the sales rep at the conference on Thursday, and I heard back from the license contact within a day!

