Professional Development

The Ellers Visit the In-Laws; Charleston 2014

Wednesday, November 12, 2014 12:00 pm

Eleven-day-old daughter and sleep-deprived wife in tow, I attended the 2014 Charleston Conference, arguably flying in the face of reason. I had the advantage of a free place to stay: my parents-in-law live out on James Island, a 15-minute drive from the Francis Marion Hotel where the conference is held. Given this fact and the conference’s unique focus on acquisitions, it makes sense for this meeting to become an annual excursion for me.

The opening speaker, Anthea Stratigos (apparently her real last name) from Outsell, Inc., talked about the importance of strategy, marketing, and branding the experience your library provides. She emphasized that in tough budgetary times it is all the more important to know your target users and to deliver the services, products, and environment they are looking for, rather than mindlessly trying to keep up with the Joneses and do everything all at once. “Know your portfolio,” advised Ms. Stratigos. I would say that we at ZSR do a good job of this.

At “Metadata Challenges in Discovery Systems,” speakers from Ex Libris, SAGE, Queens University, and the University of Waterloo discussed the functionality gap that exists in library discovery systems. While tools like Summon have great potential and deliver generally good results, they are reliant on good metadata to function. In an environment in which records come from numerous sources, the task of normalizing data is a challenge for library, vendor, and system provider alike. Consistent and rational metadata practices, both across the industry and within a given library, are essential. To the extent that it is possible, a good discovery system ought to be able to smooth out issues with inconsistent/bad metadata; but the onus is largely on catalogers. I for one am glad that we are on top of authority control. I am also glad that at the time of implementation I was safely 800 miles away in Louisiana.
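
To make the normalization problem concrete, here’s a toy sketch (my own illustration, not any vendor’s actual pipeline) of the kind of cleanup a discovery system has to perform when the same journal arrives from two sources with different conventions:

```python
# Hypothetical example: reconciling journal metadata from two sources.
import re

def normalize_issn(raw):
    """Reduce '0022-1234', '00221234', 'ISSN 0022-1234' to one canonical form."""
    digits = re.sub(r"[^0-9Xx]", "", raw)
    return f"{digits[:4]}-{digits[4:].upper()}" if len(digits) == 8 else None

def normalize_title(raw):
    """Collapse whitespace and strip trailing cataloging punctuation."""
    return re.sub(r"\s+", " ", raw).strip(" /:;.,")

records = [
    {"title": "Journal of Example Studies /", "issn": "ISSN 00221234"},
    {"title": "Journal  of  Example Studies", "issn": "0022-1234"},
]
keys = {(normalize_title(r["title"]), normalize_issn(r["issn"])) for r in records}
print(keys)  # one key, not two: the records can now be merged, not duplicated
```

Multiply that by millions of records, dozens of metadata formats, and fields far messier than ISSNs, and the scale of the challenge the panelists described becomes clear.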

In a highly entertaining staged debate over the premise that “Wherever possible, library collections should be shaped by patrons instead of librarians,” Rick Anderson from Utah and David Magier from Princeton contested the question of how large a role PDA/DDA should play in collection development in an academic context. Arguing pro-DDA, Mr. Anderson claimed that we’ve confused the ends with the means in providing content: the selection process by librarians ought properly to be seen simply as a method for identifying needed content, and if another more automated process (DDA) can accomplish the same purpose (and perhaps do it better), then it ought to be embraced. Arguing the other side, Mr. Magier emphasized DDA’s limitations, eloquently comparing over-reliance on it to eating mashed potatoes with a screwdriver just because a screwdriver is a useful tool. He pointed out that even in the absence of DDA, librarians have always worked closely and directly with patrons to answer their collection needs. In truth, both debaters would have agreed that a balance of DDA and traditional selection by librarians is the ideal model.

One interesting program discussed the inadequacy of downloads as a proxy for usage, given the amount of resource-sharing that occurs post-download. At another, librarians from UMass-Amherst and Simmons College presented results of their Kanopy streaming video DDA (PDA to them) program, similar to the one we’ll be rolling out later this month; they found that promotion to faculty was essential in generating views. On Saturday morning, librarians from Utah State talked about the importance of interlibrary loan as a supplement to acquisitions budgets and collection development policies in a regional consortium context. On this point, they try to include in all e-resource license agreements a clause specifying that ILL shall be allowed “utilizing the prevailing technology of the day” – an attempt at guaranteeing that they will remain able to loan their e-materials regardless of format, platform changes, or any other new technological developments.

Also on Saturday, Charlie Remy of UT-Chattanooga and Paul Moss from OCLC discussed adoption of OCLC’s Knowledge Base and Cooperative Management Initiative. This was of particular interest, as we in Resource Services plan on exploring use of the Knowledge Base early next year. Mr. Remy shared some of the positives and negatives he has experienced: chief among the former was the crowdsourcing of e-resource metadata maintenance in a cooperative environment; among the negatives were slow updating of the knowledge base, especially with record sets from new vendors, along with the usual problem of bad vendor-provided metadata. The final session I attended was about link resolvers and the crucial role that delivery plays in our mission. As the speakers pointed out, we’ve spent the past few years focusing on discovery, discovery, discovery. Now might be a good time to look again at how well the content our users find is being delivered.
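
For anyone who hasn’t looked under the hood of a link resolver: it works from an OpenURL, a citation packed into URL parameters. Here is a made-up example in the OpenURL 1.0 (Z39.88-2004) key/value format, with a hypothetical resolver host and citation (line breaks added for readability):

```
https://resolver.example.edu/openurl?ctx_ver=Z39.88-2004
  &rft_val_fmt=info:ofi/fmt:kev:mtx:journal
  &rft.jtitle=Journal+of+Example+Studies
  &rft.issn=0022-1234
  &rft.volume=12&rft.issue=3&rft.spage=145&rft.date=2014
```

The resolver’s job is to match that citation against the library’s holdings, usually via a knowledge base, and hand the user full text or an ILL form; a failure anywhere in that chain is a delivery failure, no matter how good discovery was.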

Leslie at MLA 2014

Saturday, March 15, 2014 4:38 pm

This year’s Music Library Association conference was held in Atlanta. It was a very productive meeting for me: I got a lot of continuing education in RDA, the new cataloging standard, and an opportunity to renew contacts in the area of ethnomusicology (music area studies), having learned just before leaving for MLA that our Music Department plans to add an ethnomusicologist to their faculty.

RDA

The impact of RDA, one year after its adoption by the Library of Congress, was apparent in the number of sessions devoted to it during the general conference, not just the catalogers’ sessions sponsored by the Music OCLC Users Group. I learned about revisions made to the music rules in the RDA manual, in MLA’s “Best Practices” document, and in the various music thesauri we use. (So if you see a “Do Not Disturb” sign on my door, you’ll know I have a lot of re-reading to do, all over again!) One sign of the music library community’s clout: MLA’s Best Practices will be incorporated into the official RDA manual, with links integrated into the text alongside LC’s policy statements. In a session on RDA’s impact on public services, I was gratified to find that almost all the talking points presented by the speakers had been covered in my own presentation to our liaisons back in September.

PRESERVATION AND COPYRIGHT

LC gave a report on its National Recordings Preservation Plan (NRPP), which began in February 2013. The group has developed 31 recommendations, which will be presented at hearings scheduled for this year by the US Copyright Office, covering the entire copyright code, including section 108, orphan works, and pre-1972 sound recordings (the ones not covered by federal law, leaving librarians to navigate a maze of state laws). Also to be presented: a proposed “digital right of first sale,” enabling libraries and archives to perform their roles of providing access and preservation for born-digital works whose licensing currently prohibits us from doing so. In the meantime, best-practices documents have been developed for orphan works (by UC Berkeley) and fair use for sound recordings (by the NRPP).

ONLINE LICENSING ISSUES

Perennial, and always interesting, sessions are held at MLA on the ongoing problem of musical works and recordings that are issued only online, with licensing that prohibits libraries and archives from acquiring them. An MLA grant proposal aims to develop alternative licensing language that we can use with recording labels, musicians, etc., allowing us to burn a CD of digital-only files. A lively brainstorming session produced additional potential solutions: an internet radio license, which would stream a label’s catalog to students while generating revenue for the label; placing links to labels in our catalogs, similar to the Google purchase links that many OPACs feature for books; raising awareness among musicians, many of whom are unaware of the threat to their legacies, by speaking at music festivals and asking the musicians themselves to raise public awareness, perhaps even by writing songs on the topic; capturing websites that aggregate music of specific genres in the Internet Archive or Archive-It; and collaborating with JSTOR, Portico, and similar projects to expand their archiving activities to media.

DIGITAL HUMANITIES

This hot topic has begun to make its impact on the music library community, and MLA has established a new round table devoted to it. In a panel session, music librarians described the various ways they are providing support for, and collaborating with, their institutions’ DH centers. Many libraries are offering their liaisons workshops and other training opportunities to acquire the technical skills needed to engage with DH initiatives.

OTHER TECHNOLOGICAL PROJECTS

In a panel session on new technologies, we heard from a colleague at the University of Music and Drama in Leipzig, Germany, who led a project to add facets to their VuFind-based discovery layer for different types of music scores (study scores, performance scores, parts, etc.). A colleague at Haverford used MEI, an XML encoding scheme designed for musical notation, to develop a graphical editing tool (which they named MerMEId) and produce a digital edition of a 16th-century French songbook, also reconstructing lost parts; we’ve been hearing about MEI for some years, so it was nice to see a concrete example of its application. Touch Press has developed an iPad app that offers study aids for selected musical works (such as Beethoven’s Ninth Symphony): you can compare multiple recordings while following along with a modern score or the original manuscript (which scrolls automatically with the audio), watch a visualization that shows who’s playing when in the orchestra, and read textual commentary, some of it synchronized with the audio. A consortium described using Amazon’s cloud service to host an instance of Avalon, an audio/video streaming product developed by Indiana University, to support music courses at their respective schools. Finally, ProMusicDB is a project that aims to build an equivalent of IMDb for pop music.

Charleston Conference online

Thursday, November 14, 2013 9:49 am

I have never actually attended the Charleston Conference, but this year they broadcast a small number of sessions live over the Internet. I tuned in to watch two of those sessions.

In a pre-conference segment, Judy Ruttenberg from the Association of Research Libraries spoke about legal issues in providing online resource access for print-disabled patrons. I learned that Section 508 of the Rehabilitation Act, requiring accessible electronic technology, applies to institutions receiving certain federal funding (and Ruttenberg made it sound like it applies to virtually all universities in the U.S.), but it does not apply to the private sector. So while it is illegal for a school/university to require the use of an inaccessible device, it is not illegal for Amazon or B&N (for example) to produce an inaccessible e-reader. As a matter not just of legality but of providing good service, Ruttenberg encouraged compliance with standards, especially WCAG 2.0 (Web Content Accessibility Guidelines; I had to look it up). She also suggested that libraries could partner with campus offices for students with disabilities, and with professors, to advocate for technology and service standards and to help make sure content is accessible. Finally, Ruttenberg addressed the challenge of getting e-resource licenses in line with accessibility needs, especially given that content providers are not liable. As with the technology, model license language is a moving target, but she recommended pointing to standards (such as WCAG 2.0), as well as asking for the right to make the content usable. She closed by quoting someone (sorry, I didn’t catch who) asking why we don’t push for indemnification against third-party lawsuits for inaccessibility. In the Q&A, a discussion arose around whether an institution would be within its rights to make content accessible even if the license doesn’t permit it; Kevin Smith (Duke’s Scholarly Communications Officer), who was in the audience, asked which lawsuit you would rather defend: one from a content provider alleging you didn’t have the right to do that, or one from a disabled student who couldn’t access course material.

The other session I watched was a presentation of research on the effects of discovery systems on e-journal usage. The researchers (Michael Levine-Clark, U. of Denver; Jason Price, SCELC; John McDonald, U. of Southern California) looked at the usage of journals from six major publishers at 24 libraries: six for each of the four major discovery systems (Summon, Primo, EBSCO Discovery Service [EDS], and WorldCat Local [WCL]). The presentation went fast and I had a hard time keeping up, but the methodology seemed logical and the results interesting. Results varied, of course, especially in the effect of the discovery system on the different publishers’ content, but there did appear to be a resulting increase in journal usage, with Primo and Summon affecting usage more than EDS and WCL. The main purpose of the current study was to see if they could detect a difference, which they did. Their next step will be to try to determine what factors are causing the differences.

Leslie at MLA 2011

Monday, February 14, 2011 2:08 am

I’m back from another Music Library Association conference, held this year in Philadelphia. Some highlights:

Libraries, music, and digital dissemination

Previous MLA plenary sessions have focused on a disturbing new trend involving the release of new music recordings as digital downloads only, with licenses restricting sale to end users, which effectively prevents libraries either from acquiring the recordings at all or from distributing (i.e., circulating) them. This year’s plenary was a follow-up featuring a panel of three lawyers (a university counsel, an entertainment-law attorney, and a representative of the Electronic Frontier Foundation) who pronounced that the problem was only getting worse. It is affecting more formats now, such as videos and audio books (it’s not just the music librarian’s problem any more), and recent court decisions have tended to support restrictive licenses.

The panelists suggested two approaches libraries can take: building relationships, and advocacy. Regarding relationships, it was noted that there is no music equivalent of LOCKSS or Portico: librarians should negotiate with vendors of audio/video streaming services for similar preservation rights. Also, libraries can remind their resident performers and composers that if their performances are released as digital downloads with end-user-only licenses, libraries cannot preserve their work for posterity. The panelists drew an analogy to the journal pricing crisis: libraries successfully raised awareness of that issue by convincing faculty and university administrators that exorbitant prices would mean smaller readerships for their publications. On the advocacy side, libraries can remind vendors that federal copyright law preempts non-negotiable licenses: a vendor can’t tell us not to make a preservation copy when Section 108 says we have the right to make one. We can also lobby state legislatures, since contract law is governed by state law.

The entertainment-law attorney felt that asking artists to lobby their record labels was, realistically speaking, the least promising approach; the power differential is too great. Change, the panelists agreed, is most likely to come through either legislation or the courts. Legislation is the more difficult to effect (there are too many well-funded commercial interests ranged on the opposing side); there is a better chance of a precedent-setting court case tipping the balance in favor of libraries. Such a case is most likely to come from the 2nd or 9th Circuit, which have a record of liberal rulings on fair-use issues. One interesting observation from the panel was that most of the cases brought so far have involved “unsympathetic figures”: individuals who blatantly abused fair use on a large scale, provoking draconian rulings. What’s needed is more cases involving “sympathetic figures” like libraries, the good guys who get caught in the crossfire. Anybody want to be next? :-)

Music finally joins Digital Humanities

For a couple of decades now, humanities scholars have been digitizing literary, scriptural, and other texts in order to exploit the capabilities of hypertext, markup, etc., and study those texts in new ways. The complexity of musical notation, however, has historically prevented music scholarship from doing the same for its texts. PDFs of musical scores have long been available, but they’re not searchable texts and not encoded as digital data, so they can’t be manipulated in the same way. Now there’s a new project called the Music Encoding Initiative, jointly funded by the National Endowment for the Humanities and Germany’s Deutsche Forschungsgemeinschaft. MEI (yes, they’ve noticed it’s also a Chinese word for “beauty”) has just released a new digital encoding standard for Western classical musical notation, based on XML. It’s been adopted so far by several European institutions and by McGill University. If, as one colleague put it, it “has legs,” the potential is transformative for the discipline. Whereas critical editions in print force editors to make painful decisions between sources of comparable authority (the other readings get relegated to an appendix or supplementary volume), in a digital edition all extant readings can be encoded in the same file and displayed side by side. An even more intriguing application of this concept is the “user-generated edition”: a practicing musician could potentially approach a digital edition of a given work and choose to output a piano reduction, or a set of parts, or modernized notation of a Renaissance work, for performance. Imagine the savings for libraries, which currently have to purchase separate editions for all the different versions of a work.

http://music-encoding.org
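
To give a flavor of what MEI encoding looks like, here is a minimal invented fragment: the element and attribute names follow the MEI schema, but the musical content (a single 4/4 measure containing one middle-C quarter note) is made up for illustration.

```xml
<!-- Hypothetical MEI fragment: one staff, one measure, one note. -->
<mei xmlns="http://www.music-encoding.org/ns/mei">
  <music>
    <body>
      <mdiv>
        <score>
          <scoreDef meter.count="4" meter.unit="4">
            <staffGrp>
              <staffDef n="1" lines="5" clef.shape="G" clef.line="2"/>
            </staffGrp>
          </scoreDef>
          <section>
            <measure n="1">
              <staff n="1">
                <layer n="1">
                  <!-- pitch name, octave, and duration are explicit data -->
                  <note pname="c" oct="4" dur="4"/>
                </layer>
              </staff>
            </measure>
          </section>
        </score>
      </mdiv>
    </body>
  </music>
</mei>
```

Because pitch, octave, and duration are machine-readable attributes rather than ink on a page, an encoded score can be searched, compared, and re-rendered, which is exactly what makes variant readings and user-generated editions feasible.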

Music and metadata

In a session titled “Technical Metadata for Music,” two speakers, from SUNY and a commercial audio-visual preservation firm respectively, stressed the importance of embedded metadata in digital audio files. Certain information, such as recording date, is commonly included in filenames, but this is an inadequate measure from a long-term preservation standpoint: filenames are not integral to the file itself, and are typically associated with a specific operating system. One speaker cited a recent Rolling Stone article, “File not Found: the Recording Industry’s Storage Crisis” (December 2010), describing the record labels’ inability to retrieve their backfiles due to inadequate filenames and lack of embedded metadata. Metadata is now commonly embedded in many popular consumer products, such as digital cameras and smartphones.

For music, embedded metadata can include not only technical specifications (bit depth, sample rate, and locations of peaks, which can be used to optimize playback) but also historical context (the date and place of performance, the performers, etc.) and copyright information. The Library of Congress has established sustainability factors for embedded metadata (see http://digitizationguidelines.gov). One format that meets these requirements is Broadcast Wave Format (BWF), an extension of WAV: it can store metadata as plain text, and can include historical context-related data. The Technical Committee of ARSC (Association for Recorded Sound Collections) recently conducted a test wherein they added embedded metadata to some BWF-format audio files and tested them with a number of popular applications. The dismaying results showed that many apps not only failed to display the embedded metadata, but also deleted it completely. This, in the testers’ opinion, calls for an advocacy campaign to raise awareness of the importance of embedded metadata. ARSC plans to publish its test report on its website (http://www.arsc-audio.org/). The software for embedded metadata that they developed for the test is also available as a free open-source app at http://sourceforge.net/projects/bwfmetaedit.
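
To make “embedded metadata” concrete: BWF adds a “bext” chunk of fixed-width plain-text fields to an ordinary WAV file. Below is a minimal Python sketch of reading a few of those fields; it is my own illustration based on the published chunk layout (EBU Tech 3285), not the ARSC/BWF MetaEdit software, and the filename is hypothetical.

```python
import struct

# Read selected plain-text fields from a BWF file's "bext" chunk.
def read_bext(path):
    with open(path, "rb") as f:
        riff, _size, wave = struct.unpack("<4sI4s", f.read(12))
        assert riff == b"RIFF" and wave == b"WAVE", "not a WAV/BWF file"
        while True:
            header = f.read(8)
            if len(header) < 8:
                return None  # reached end of file without finding bext
            chunk_id, chunk_size = struct.unpack("<4sI", header)
            if chunk_id == b"bext":
                data = f.read(chunk_size)
                text = lambda b: b.rstrip(b"\x00").decode("ascii", "replace")
                return {
                    "description": text(data[0:256]),        # free-text description
                    "originator": text(data[256:288]),       # producing organization
                    "origination_date": text(data[320:330]), # yyyy-mm-dd (separator may vary)
                }
            f.seek(chunk_size + (chunk_size & 1), 1)  # chunks are word-aligned

print(read_bext("field_recording.wav"))  # hypothetical file
```

The point of the ARSC test was that metadata like this survives only if applications respect it; an app that rewrites the file without copying the bext chunk silently destroys the provenance record.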

Music cataloging

A pre-conference session held by MOUG (Music OCLC Users Group) reported on an interesting longitudinal study that aimed to trace coverage of music materials in the OCLC database. The original study was conducted in 1981, when OCLC was relatively new. MOUG testers searched OCLC for newly published music books, scores, and sound recordings, as listed in journals and leading vendor catalogs, along with core repertoire as listed in ALA’s bibliography Basic Music Library, and assessed the quantity and quality of available cataloging copy. The study was replicated in 2010. Exact replication was rendered impossible by various developments over the intervening 30 years (changes in the nature of the OCLC database from a shared catalog to a utility; more foreign and vendor contributors; and the demise of some of the reference sources used for the first sample of searched materials, necessitating substitutions), but the study has nevertheless produced some useful statistics. Coverage of books, not surprisingly, increased over the 30 years to 95%; representation of sound recordings also increased, to around 75%; but oddly, scores have remained at only about 60%. As for quality of the cataloging, the 2010 results showed that about 20% of sound recordings were cataloged as full-level records and about 50% as minimal records; about a quarter of scores got full-level treatment and about 50% minimal. The study thus provides some external corroboration of long-perceived music cataloging trends, and also a basis for workflow and staffing decisions in music cataloging operations.

A session titled “RDA: Kicking the Tires” was devoted to the new cataloging standard that the Library of Congress and a group of other libraries have just finished beta-testing. Music librarians from five of the testing institutions (LC, Stanford, Brigham Young, U North Texas, and U Minnesota) spoke about their experiences with the test and with adapting to the new rules.

All relied on LC’s documentation and training materials, recording local decisions on their internal websites (Stanford has posted theirs on their publicly-accessible departmental site). An audience member urged libraries to publish their workflows in the Toolkit, the online RDA manual. It was generally agreed that the next step needed is the development of guidelines and best practices.

None of the testers’ ILSs seem to have had any problems accommodating RDA records in MARC format. LC has had no problems with their Voyager system, corroborating our own experience here at WFU. Some testers reported problems with some discovery layers, including Primo (fortunately, we haven’t seen any glitches so far with VuFind). Stanford reported problems with their (unnamed) authorities vendor, mainly involving “flipped” (changed name order) entries. Most testers are still in the process of deciding which of the new RDA data elements they will display in their OPACs.
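
For readers who don’t catalog: in MARC terms, the most visible RDA changes are the new 336/337/338 content, media, and carrier fields (replacing the old general material designation) and spelled-out rather than abbreviated terms. A small invented example of the relevant fields for a printed score; the title nods to the Rachmaninoff slide mentioned below, and the publication details are hypothetical:

```
245 10 $a Vocalise : $b arranged for saxophone and piano / $c Sergei Rachmaninoff.
264  1 $a New York : $b Example Editions, $c 2010.
300    $a 1 score (12 pages) + 1 part ; $c 31 cm
336    $a notated music $b ntm $2 rdacontent
337    $a unmediated $b n $2 rdamedia
338    $a volume $b nc $2 rdacarrier
```

Because each field carries its vocabulary source in $2, records like this lend themselves to the machine manipulation the panelists praised.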

Asked what they liked about RDA, both the LC and Stanford speakers cited the flexibility of the new rules, especially in transcribing title information, and in the wider range of sources from which bib info can be drawn. Others welcomed the increased granularity, designed to enhance machine manipulation, and the chance this affords to “move beyond cataloging for cards” towards the semantic web and relation-based models. It was also noted that musicians are already used to thinking in FRBR fashion — they’ve long dealt with scores and recordings, for instance, as different manifestations of the same work.

Asked what they thought “needed fixing” with RDA, all the panelists cited access points for music (the LC speaker put up a slide displaying 13 possible treatments of Rachmaninoff’s Vocalise arranged for saxophone and piano). There are other areas — such as instrument names in headings — that the RDA folks haven’t yet thought about, and the music community will probably have to establish its own practice. Some catalogers expressed frustration with the number of matters the new rules leave to “cataloger’s judgment.” Others mentioned the difficulty of knowing just how one’s work will display in future FRBRized databases, and of trying to fit a relational structure into the flat files most of us currently have in our ILSs.

What was most striking about the session was the generally upbeat tone of the speakers: they saw more positives than negatives with the new standard, assured us it only took some patience to learn, and were convinced that it truly was a step forward in discoverability. One speaker, who trains student assistants to do copy-cataloging and tells them, “When in doubt, make your best guess, and I’ll correct it later,” observed that her students’ guesses consistently conformed to RDA practice: anecdotal evidence suggesting that the new standard may actually be more intuitive for users, and that new catalogers will probably learn it more easily than those of us who’ve had to “unlearn” AACR2!

Sidelights

Our venue was the Loews Philadelphia Hotel, which I must say is the coolest place I’ve ever stayed in. The building was the first International Style high-rise built in the U.S., and its public spaces have been meticulously preserved and/or restored, to stunning effect. The first tenant was a bank, and so you come across huge steel vault doors and rows of safety-deposit boxes, left in situ, as you walk through the hotel. Definitely different!

Another treat was visiting the old Wanamaker department store (now a Macy’s) to hear the 1904 pipe organ that is reputed to be the world’s largest (http://www.wanamakerorgan.com/about.php).

Digital Licensing Course

Tuesday, November 18, 2008 11:32 am

From September to November, I was involved in a self-paced course called “Digital Licensing Online.” The course consisted of 27 lessons that were delivered three times a week via email. The course discussed broad topics like why licensing is important, as well as specific clauses and terms found in licenses. The last several lessons focused on negotiating tips. There was also a course blog where we could interact with other students and the instructor.

One suggestion from the course was to document your library’s context and licensing standards. Lauren C. and I have already begun to do this in the wiki, and we hope to do more before the new Electronic Resources Librarian arrives.

I think this type of instruction was well-suited to my learning style. Each lesson was fairly short, so I could work it into my day fairly easily. The segmented approach was also effective in keeping me engaged with the material. Once we hire our new Electronic Resources Librarian, I hope he or she will be able to take this course or something like it.

