Professional Development

Leslie at MLA 2014

Saturday, March 15, 2014 4:38 pm

This year’s Music Library Association conference was held in Atlanta. It was a very productive meeting for me: I got a lot of continuing education in RDA, the new cataloging standard; and an opportunity to renew contacts in the area of ethnomusicology (music area studies), having learned just before leaving for MLA that our Music Department plans to add an ethnomusicologist to their faculty.


The impact of RDA, one year after its adoption by the Library of Congress, was apparent in the number of sessions devoted to it during the general conference, not just the catalogers’ sessions sponsored by the Music OCLC Users Group. I learned about revisions made to the music rules in the RDA manual, in MLA’s “Best Practices” document, and in the various music thesauri we use. (So if you see a “Do Not Disturb” sign on my door, you’ll know I have a lot of re-reading to do, all over again!) One sign of the music library community’s clout: MLA’s Best Practices will be incorporated into the official RDA manual, with links integrated into the text alongside LC’s policy statements. In a session on RDA’s impact on public services, I was gratified to find that almost all the talking points presented by the speakers had been covered in my own presentation to our liaisons back in September.


LC gave a report on its National Recording Preservation Plan (NRPP), which began in February 2013. The group has developed 31 recommendations, which will be presented at hearings scheduled for this year by the US Copyright Office, covering the entire copyright code, including Section 108, orphan works, and pre-1972 sound recordings (the ones not covered by federal law, leaving librarians to navigate a maze of state laws). Also to be presented: a proposed “digital right of first sale,” enabling libraries and archives to perform their roles of providing access and preservation for born-digital works whose licensing currently prohibits us from doing so. In the meantime, best-practices documents have been developed for orphan works (by UC Berkeley) and fair use for sound recordings (by the NRPP).


Perennially interesting sessions are held at MLA on the ongoing problem of musical works and recordings that are issued only online, with licensing that prohibits libraries and archives from acquiring them. An MLA grant proposal aims to develop alternative licensing language that we can use with recording labels, musicians, etc., allowing us to burn a CD of digital-only files. A lively brainstorming session produced additional potential solutions:

  • An internet radio license, which would stream a label’s catalog to students while generating revenue for the label.
  • Placing links to labels in our catalogs, similar to the Google links that many OPACs feature for books, offering a purchase option.
  • Raising awareness among musicians, many of whom are unaware of the threat to their legacies, by speaking at music festivals and asking the musicians themselves to raise public awareness, perhaps even by writing songs on the topic.
  • Capturing websites that aggregate music of specific genres in the Internet Archive or Archive-It.
  • Collaborating with JSTOR, Portico, and similar projects to expand their archiving activities to media.


Digital humanities, a hot topic elsewhere, has begun to make its impact on the music library community, and MLA has established a new round table devoted to it. In a panel session, music librarians described the various ways they are providing support for, and collaborating with, their institutions’ DH centers. Many libraries are offering their liaisons workshops and other training opportunities to acquire the technical skills needed to engage with DH initiatives.


In a panel session on new technologies, we heard about several projects:

  • A colleague at the University of Music and Drama in Leipzig, Germany, led a project to add facets in their VuFind-based discovery layer for different types of music scores (study scores, performance scores, parts, etc.).
  • A colleague at Haverford used MEI, an XML encoding scheme designed for musical notation, to develop a graphical interface (which they named MerMEId) to produce a digital edition of a 16th-century French songbook, also reconstructing lost parts. (We’ve been hearing about MEI for some years; nice to see a concrete example of its application.)
  • An iPad app, developed by Touch Press, offers study aids for selected musical works (such as Beethoven’s 9th Symphony), allowing you to compare multiple recordings while following along with a modern score or the original manuscript (which automatically scrolls with the audio), watch a visualization tool that shows who’s playing when in the orchestra, and read textual commentary, some in real time with the audio.
  • A consortium is using Amazon’s cloud service to host an instance of Avalon, an audio/video streaming product developed by Indiana University, to support music courses at their respective schools.
  • ProMusicDB is a project that aims to build an equivalent of IMDB for pop music.

Leslie at MLA 2013

Wednesday, March 6, 2013 7:48 pm

A welcome escape from the usual wintry rigors of traveling to a Music Library Association conference — mid-February this year found us in San Jose, soaking up sun, balmy breezes, and temps in the 70s. (Colleagues battered by the Midwest blizzards were especially appreciative.)

A plenary session yielded a number of high-level insights. For one, it was the first time I had heard the term “disintermediation” used to describe the phenomenon of librarians being displaced by Google et al. as the first place people go for information.

Henriette Hemmasi of Brown U described the MOOC trend as a move from “Diva to DJ”: that is, the role of the instructor is shifting from lone classroom diva to the collaborative role played by a disc jockey, selecting and presenting material for team-produced courses and working with experts in web development, video, etc. Her conclusion: 21st-century competencies must include not just knowledge, but also synthesizing and systems-thinking skills.

David Fenske, one of the founding developers of Indiana’s iSchool, noted that the rapid evolution of technology has rendered it impossible to make projections more than 5 or 10 years out (his reply to a boss who asked for a 20-year vision statement: “A 20-year vision can’t be done without drugs!”). He also observed that digital preservation is in many ways more difficult than the traditional kind: the scientific community is beginning to lose the ability to replicate experiments, because in many cases the raw data has been lost due to obsolete digital storage media. Fenske envisions the “library as socio-technical system”: a system based on user demographics, designed around “communities of thought leaders” as well as experts. Tech-services people have long mooted the concept of “good-enough” cataloging, in the face of overwhelming publication output; public-services librarians, in Fenske’s view, should start talking about the “good-enough” answer. Fenske wants to look “beyond metadata”: how can we leverage our metadata for analytics? Semantic tools? How can we scale our answers and services to compete with Google, Amazon, and others?

Some interesting findings from two studies on the library needs of performing faculty and students (as opposed to musicologists and other researchers in the historical/theoretical branches of the discipline):

One study addressed the pros and cons of e-scores. Performers, always on the go and pressed for time, like e-scores for their instant availability and sharability; the fact that they’re quick and easy to print out; their portability (no more cramming a paper score into an instrument case for travel); and easy page turns during performance (a pedal mechanism has been devised for this). Performers also like an e-score that can be annotated (i.e., not a PDF file) so they can insert their notes for performance, and the ability to get a lot of works quickly from one place (as from an online aggregator). On the other hand, academic users, who work with scholarly and critical editions, like the ability of the online versions to seamlessly integrate critical commentary with the musical text (print editions traditionally place the commentary in separate supplementary volumes). Third-party software can also be deployed to manipulate the musical text for analysis. But the limitations of the computer screen continue to pose viewability problems for purposes of analysis. Academic users regard e-scores as a complement to, not an alternative to, print scores.

Another study interviewed performing faculty to find out how they use their library’s online catalog. Typically, they come to the library wanting to find known items, use an advanced-search mode, and search by author, title, and opus number (the latter not very effectively handled by many discovery layers; VuFind does a reasonably good job). Performing faculty often are also looking for specific editions and/or publishers (aspects that many discovery interfaces don’t offer as search limits/facets). Performing faculty (and students) study a work by using a score to follow along with a sound recording, and so come to the library hoping to obtain multiple formats for the same work; icons or other aids for quickly identifying physical format are important to them, as for film users and others. There is also a lot of descriptive detail that performers need to see in a catalog display: contents, duration, performers’ names.

Stuff a lot of music librarians have observed or suspected, but good to see it quantified and confirmed in some formal studies.

Collaborative collection development has generated much interest in the library community, and music librarians have also been exploring collaborative options for acquiring the specialized materials of their field. Besides shared approval-plan profiles for books, and shared database subscriptions, music librarians have divvied up the collecting of composers’ collected editions, and contemporary composers whose works they want to collect comprehensively. Because music materials are often acquired and housed in multiple locations on the same campus, internal collaboration is as important as external. One thing that does not seem to lend itself to collaborative collection: media (sound recordings and videos). Many libraries don’t lend these out via ILL, and faculty tend to want specific performances — making on-request firm orders a more suitable solution. One consortium of small Maine colleges (Colby, Bates, and Bowdoin) divided the processing labor of their staffs by setting up rotating shipments for their shared approval plan: one library gets this month’s shipment of books, another library receives the next month’s shipment, and so on.

There was a good bit of discussion concerning demand-driven e-book acquisitions among colleagues whose institutions had recently implemented DDA services. On two separate occasions, attendees raised the question of DDA’s impact on the humanities, given those disciplines’ traditional reliance on browsing the stacks as a discovery method.

It was a very busy conference for music catalogers, as over a hundred of us convened to get prepared for RDA. There was a full-day workshop; a cataloging “hot topics” session; a town-hall meeting with the Bibliographic Control Committee, which recently produced an “RDA Best Practices for Cataloging Music” document; and a plenary session on RDA’s impact across library services (the latter reprising a lot of material covered by Steve and others in ZSR presentations — stay tuned for more!).

A very special experience was a visit to the Ira F. Brilliant Center for Beethoven Studies (located on the San Jose State campus), the largest collection of Beethoveniana outside Europe. During a reception there, we got to play pianos dating from Beethoven’s time. Hearing the “Moonlight Sonata” up close on the model of instrument he wrote it for (Dulcken, a Flemish maker) was a true revelation.

Steve at ALA Midwinter 2013

Friday, February 8, 2013 2:10 pm

Although my trip to Seattle for the ALA Midwinter Conference had a rough start (flight delayed due to weather, nearly missed a connecting flight, my luggage didn’t arrive until a day later), I had a really good, productive experience. This Midwinter was heavy on committee work for me, and I was very focused on RDA, authority control and linked data. If you want a simple takeaway from this post, it’s that RDA, authority control and linked data are all tightly bound together and are important for the future of the catalog. If you want more detail, keep reading.
My biggest commitment at the conference was participating in two long meetings (over four hours on Saturday afternoon and three hours on Monday morning) of CC:DA (the Committee on Cataloging: Description and Access). I’m one of nine voting members of CC:DA, which is the committee responsible for developing ALA’s position on RDA. The final authority for making changes and additions to RDA is the JSC (Joint Steering Committee), which has representation from a number of cataloging constituencies, including ALA, the national library organizations of Canada, the UK, and Australia, as well as other organizations. ALA’s position on proposals brought to the JSC is voted on by CC:DA. Membership on this committee involves reading and evaluating a large number of proposals from a range of library constituencies. Much of the work of the committee has so far involved reviewing proposals regarding how to form headings in bibliographic records, which is, essentially, authority control work. We’ve also worked on proposals to make the rules consistent throughout RDA, to clarify the wording of rules, and to make sure that the rules fit with the basic principles of RDA. It has been fascinating to see how interconnected the various cataloging communities are, and how they relate to ALA and CC:DA. As I said, I am one of nine voting members of the committee, but there are about two dozen non-voting representatives from a variety of committees and organizations, including the Music Library Association, the Program for Cooperative Cataloging, and the Continuing Resources Cataloging Committee of ALCTS.
During our Monday meeting, we saw a presentation by Deborah Fritz of the company The MARC of Quality, demonstrating a visualization tool called RIMMF (RDA In Many Metadata Formats). RIMMF shows how bibliographic data might be displayed when RDA is fully implemented. The tool is designed to take RDA data out of MARC, because it is hard to think of how data might relate in RDA without the restrictions of MARC. RIMMF shows how the FRBR concepts of work, expression and manifestation (which are part of RDA) might be displayed by a public catalog interface. It’s still somewhat crude, but it gave me a clearer idea of the kinds of displays we might develop, as well as a better grasp on the eventual benefits to the user that will come from all our hard work of converting the cataloging world to RDA. RIMMF is free to download and we’re planning to play around with it some here in Resource Services.
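To make the FRBR picture concrete, here is a rough sketch of how work, expression and manifestation might hang together in a post-MARC display. This is not RIMMF’s actual data model; the classes, titles and fields are invented for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class Manifestation:          # a published form (e.g., a CD or printed score)
    description: str

@dataclass
class Expression:             # a realization of the work (e.g., a performance)
    description: str
    manifestations: list = field(default_factory=list)

@dataclass
class Work:                   # the abstract creation itself
    title: str
    expressions: list = field(default_factory=list)

# One work, realized two ways, each published once
work = Work("Ninth Symphony", [
    Expression("1963 Karajan recording", [Manifestation("CD, DG, 1990")]),
    Expression("printed full score", [Manifestation("Breitkopf edition")]),
])

# A catalog display could group every version under the single work
for expr in work.expressions:
    for m in expr.manifestations:
        print(f"{work.title} / {expr.description} / {m.description}")
```

The point of the hierarchy is that a user searching for the work sees all of its expressions and manifestations gathered in one place, instead of a flat list of unrelated records.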
I also attended my first meeting of another committee of which I am a member, the Continuing Resources Cataloging Committee of the Continuing Resources Section of ALCTS. Continuing resources include serials and web pages, so CRS is the successor to the old Serials Section. We discussed the program that we had arranged for that afternoon on the possibilities of using linked data to record serial holdings. Unfortunately, I had to miss the program due to another meeting, but I’m looking forward to seeing the recording. We also brainstormed ideas for our program at Annual in Chicago, and the committee’s representative to the PCC Standing Committee on Training gave us an update on RDA training initiatives.
The most interesting other meeting that I attended was the Bibframe Update Forum. Bibframe is the name for an initiative to try to develop a data exchange format to replace the MARC format(s). The Bibframe initiative hopes to develop a format that can make library data into linked data, that is, data that can be exchanged on the semantic web. Eric Miller, from the company Zepheira (which is one of the players in the development of Bibframe), explained that the semantic web is about linking data, not just documents (as a metaphor, think about old PDF files that could not be searched, but were flat documents: the only unit you could search for was the entire document, not the meaningful pieces of content in the document). The idea is to create recombinant data, that is, small blocks of data that can be linked together. The basic architecture of the old web leaned toward linking various full documents, rather than breaking down the statements into meaningful units that could be related to each other. The semantic web emphasizes the relationships between pieces of data. Bibframe hopes to make it possible to record the relationships between pieces of data in bibliographic records and to expose library data on the Web and make it sharable. At the forum, Beacher Wiggins told the audience about the six institutions that are experimenting with the earliest version of Bibframe: the British Library, the German National Library, George Washington University, the National Library of Medicine, OCLC, and Princeton University. Reinhold Heuvelmann of the German National Library said that the model is defined on a high level, but that it needs to have more detail developed to allow for recording more granular data, which is absolutely necessary for fully recording the data required by RDA.
Ted Fons of OCLC spoke of how Bibframe is an attempt to develop a format that can carry the data libraries need and to allow library data to interact with other data across the wider web. Fons said that Bibframe data has identifiers that are URIs, which can be web accessible. He also said that Bibframe renders bibliographic data as statements that are related to each other, rather than as self-contained records, as with MARC. Bibframe breaks free of the constraints of MARC, which basically rendered data as catalog cards in electronic format. Bibframe is still going through quite a bit of development, but it is moving quickly. Sally McCallum of the MARC Standards Office said that they hope to finalize aspects of the Bibframe framework by 2014, but acknowledged that, “The change is colossal and the unexpected will happen.”
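To picture the difference between a self-contained record and linked statements, here is a toy sketch of “recombinant” data: each fact is a separate (subject, predicate, object) statement keyed by identifiers, and related statements are found by following links. The URIs and predicate names are all made up for illustration, not actual Bibframe vocabulary:

```python
# Bibliographic facts as standalone statements rather than one flat record.
# Every URI below is hypothetical.
triples = [
    ("http://ex.org/work/42", "title",      "Symphony no. 9"),
    ("http://ex.org/work/42", "creator",    "http://ex.org/agent/7"),
    ("http://ex.org/agent/7", "name",       "Beethoven, Ludwig van"),
    ("http://ex.org/inst/3",  "instanceOf", "http://ex.org/work/42"),
]

def objects(subject, predicate):
    """All objects asserted for a given subject/predicate pair."""
    return [o for s, p, o in triples if s == subject and p == predicate]

# Follow the links between statements: instance -> work -> creator -> name
work = objects("http://ex.org/inst/3", "instanceOf")[0]
creator = objects(work, "creator")[0]
print(objects(creator, "name")[0])   # -> Beethoven, Ludwig van
```

Because each statement stands on its own, another catalog (or a non-library website) can reuse the composer’s identifier without copying the whole record, which is the sharability Fons was describing.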
Actually, I think that’s a good way to summarize my thoughts on the current state of the cataloging world after attending this year’s Midwinter, “The change is colossal and the unexpected will happen.”

Steve at NASIG 2012

Thursday, June 14, 2012 5:03 pm

Last Thursday, Chris, Derrik and I hopped in the library van and drove to Nashville for the NASIG Conference, returning on Sunday. It was a busy and informative conference, full of lots of information on serials and subscriptions. I will cover a few of the interesting sessions I attended in this post.
One such session was called “Everyone’s a Player: Creation of Standards in a Fast-Paced Shared World,” which discussed the work of NISO and the development of new standards and “best practices.” Marshall Breeding discussed the ongoing development of the Open Discovery Initiative (ODI), a project that seeks to identify the requirements of web-scale discovery tools, such as Summon. Breeding pointed out that it makes no sense for libraries to spend millions of dollars on subscriptions, if nobody can find anything. So, in this context, it makes sense for libraries to spend tens of thousands on discovery tools. But, since these tools are still so new, there are no standards for how these tools should function and operate with each other. ODI plans to develop a set of best practices for web-scale discovery tools, and is beginning this process by developing a standard vocabulary as well as a standard way to format and transfer data. The project is still in its earliest phases and will have its first work available for review this fall. Also at this session, Regina Reynolds from the Library of Congress discussed her work with the PIE-J initiative, which has developed a draft set of best practices that is ready for comment. PIE-J stands for the Presentation & Identification of E-Journals, and is a set of best practices that gives guidance to publishers on how to present title changes, issue numbering, dates, ISSN information, publishing statements, etc. on their e-journal websites. Currently, it’s pretty much the Wild West out there, with publishers following unique and puzzling practices. PIE-J hopes to help clean up the mess.
Another session that was quite useful was on “CONSER Serials RDA Workflow,” where Les Hawkins, Valerie Bross and Hien Nguyen from the Library of Congress discussed the development of RDA training materials at the Library of Congress, including CONSER serials cataloging materials and general RDA training materials from the PCC (Program for Cooperative Cataloging). I haven’t had a chance yet to root around on the Library of Congress website, but these materials are available for free, and include a multi-part course called “Essentials for Effective RDA Learning” that includes 27 hours (yikes!) of instruction on RDA, including a 9-hour training block on FRBR, a 3-hour block on the RDA toolkit, and 15 hours on authority and description in RDA. This is for general cataloging, not specific to serials. Also, because LC is working to develop a replacement for the MARC formats, there is a visualization tool called RIMMF available that allows for creating visual representations of records and record-relationships in a post-MARC record environment. It sounds promising, but I haven’t had a chance to play with it yet. Also, the CONSER training program, which focuses on serials cataloging, is developing a “bridge” training plan to transition serials catalogers from AACR2 to RDA, which will be available this fall.
Another interesting session I attended was “Automated Metadata Creation: Possibilities and Pitfalls” by Wilhelmina Randtke of the Florida State University Law Research Center. She pointed out that computers like black-and-white decisions and are bad with discretion, while creating metadata is all about identifying and noting important information. Randtke said computers love keywords but are not good with “aboutness” or subjects. So, in her project, she tried to develop a method to use computers to generate metadata for graduate theses. Some of the computer talk got very technical and confusing for me, but her discussion of subject analysis was fascinating. Using certain computer programs for automated indexing, Randtke did a data scrape of the digitally-encoded theses and identified recurring keywords. This keyword data was run through ontologies/thesauruses to identify more accurate subject headings, which were applied to the records. A person needs to select the appropriate ontology/thesaurus for the item(s) and review the results, but the basic subject analysis can be performed by the computer. Randtke found that the results were cheap and fast, but incomplete. She said, “It’s better than a shuffled pile of 30,000 pages. But, it’s not as good as an organized pile of 30,000 pages.” So, her work showed some promise, but still needs some work.
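The basic pipeline Randtke described (scrape recurring keywords, then run them through a thesaurus to get controlled headings) can be sketched in a few lines. The mini-thesaurus and headings below are invented for illustration, not her actual tools or vocabularies:

```python
import re
from collections import Counter

# Hypothetical mini-thesaurus mapping raw keywords to controlled headings
THESAURUS = {
    "copyright": "Copyright -- United States",
    "licensing": "License agreements",
    "streaming": "Streaming audio",
}

def suggest_headings(text, top_n=5):
    """Scrape recurring keywords, then map them through the thesaurus."""
    words = re.findall(r"[a-z]+", text.lower())
    # Keep only keywords the thesaurus knows, most frequent first
    common = [w for w, _ in Counter(words).most_common() if w in THESAURUS]
    return [THESAURUS[w] for w in common[:top_n]]

thesis = ("Streaming services raise copyright questions; copyright law and "
          "licensing terms shape how streaming audio reaches libraries.")
print(suggest_headings(thesis))
```

As Randtke noted, a person still has to pick the right thesaurus and review the output; the machine only handles the mechanical matching step.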
Of course there were a number of other interesting presentations, but I have to leave something for Chris and Derrik to write about. One idea that particularly struck me came from Rick Anderson during his thought-provoking all-conference vision session on the final day: “To bring simplicity to our patrons means taking on an enormous level of complexity for us.” That basic idea has been something of an obsession of mine for the last few months while wrestling with authority control and RDA and considering the semantic web. To make our materials easily discoverable by the non-expert (and even the expert) user, we have to make sure our data is rigorously structured, and that requires a lot of work. It’s almost as if there’s a certain quantity of work that has to be done to find stuff, and we either push it off onto the patron or take it on ourselves. I’m in favor of taking it on ourselves.
The slides for all of the conference presentations are available online for anyone who is interested. You do not need to be a member of NASIG to check them out.

Leslie at NCLA 2011

Friday, October 7, 2011 2:50 pm

It was really nice to be able to attend an NCLA conference again — one of my music conferences, as it happens, has been held at the same time for years.

I attended a session on RDA, the new cataloging standard recently beta-tested by LC. Christee Pascale of NCSU gave a very helpful, concise reprise of that school’s experience as a test participant; the staff training program and materials they developed; and advice to others planning to implement RDA.

Presenters from UNCG and UNCC shared a session titled “Technical Services: Changing Workflows, Changing Processes, Personnel Restructuring — Oh My!” Both sites have recently undergone library-wide re-organizations, including the re-purposing of tech services staff to other areas, resulting in pressure to ruthlessly eliminate inefficiencies. Many of the specific steps they mentioned are ones we’ve already taken in ZSR, but some interesting additional measures include:

  • Eliminating the Browsing Collection in favor of a New Books display.
  • Reducing the funds structure (for instance, 1 fund per academic department — no subfunds for material formats)

There also seems to be a trend towards re-locating Tech Services catalogers to Special Collections, in order to devote more resources to the task of making the library’s unique holdings more discoverable; outsourcing or automating as many tech services functions as possible, including “shelf-ready” services, authority control, and electronic ordering; and training support staff (whose time has putatively been freed by the outsourcing/automation of their other tasks) to do whatever in-house cataloging remains. That’s the vision, at any rate — our presenters pointed out the problems they’ve encountered in practice. For instance, UNCC at one point had one person doing the receiving, invoicing, and cataloging: they quickly found they needed to devote more people to the still-significant volume of in-house cataloging that remained to be done even after optimizing use of outsourced services. They’re also feeling the loss of subject expertise (in areas like music, religion, etc.) and of experienced catalogers to make the big decisions (i.e., preparing for RDA).

NCLA plans to post all presentations on their website.



NISO RDA webinar

Wednesday, May 11, 2011 2:29 pm

Today, Erik, Carolyn, Lauren C, Derrik, Alan Keely, Leslie McCall, Steve Kelly, Beth Tedford, Mark McKone, Linda Ziglar, Jean-Paul and Chris B. attended the first of a two-part NISO seminar on RDA. The webinar gave a good overview of FRBR and how RDA facilitates the creation of records following the FRBR metadata model.

Robert Maxwell did the first part of the session. He did a good job of reviewing the FRBR model and discussing where the RDA standard is relevant. After a tour of the types of searching this approach would enable, he said that the next steps are to implement RDA, design an ER database structure, and figure out how to handle legacy MARC data.

The second part of the session was by John Espley of VTLS, who covered some of these “next steps” at a high level. He covered various data models, including flat file, ER models, and linked MARC records. I was curious to see that he did not discuss object databases or triple stores as an option. He showed a sample FRBR ER diagram to illustrate how a database model would work.

John talked at length about the role of metadata encoding models and made the assertion that a common encoding model was not a necessary feature of next-generation systems. He asserted rather that interoperability between systems would be based on appropriate encoding crosswalk methods. John indicated that some areas of development included the added ability to use macros, easy access to the RDA toolkit, and more sophisticated workforms.

Leslie at MLA 2011

Monday, February 14, 2011 2:08 am

I’m back from another Music Library Association conference, held this year in Philadelphia. Some highlights:

Libraries, music, and digital dissemination

Previous MLA plenary sessions have focused on a disturbing new trend involving the release of new music recordings as digital downloads only, with licenses restricting sale to end users, which effectively prevents libraries either from acquiring the recordings at all, or from distributing (i.e., circulating) them. This year’s plenary was a follow-up featuring a panel of three lawyers — a university counsel, an entertainment-law attorney, and a representative of the Electronic Frontier Foundation — who pronounced that the problem was only getting worse. It is affecting more formats now, such as videos and audio books — it’s not just the music librarian’s problem any more — and recent court decisions have tended to support restrictive licenses.

The panelists suggested two approaches libraries can take: building relationships, and advocacy. Regarding relationships, it was noted that there is no music equivalent of LOCKSS or Portico: Librarians should negotiate with vendors of audio/video streaming services for similar preservation rights. Also, libraries can remind their resident performers and composers that if their performances are released as digital downloads with end-user-only licenses, libraries cannot preserve their work for posterity. The panelists drew an analogy to the journal pricing crisis: libraries successfully raised awareness of the issue by convincing faculty and university administrators that exorbitant prices would mean smaller readerships for their publications. On the advocacy side, libraries can remind vendors that federal copyright law pre-empts non-negotiable licenses: a vendor can’t tell us not to make a preservation copy when Section 108 says we have the right to make a preservation copy. We can also lobby state legislatures, as contract law is governed by state law.

The entertainment-law attorney felt that asking artists to lobby their record labels was, realistically speaking, the least promising approach — the power differential is too great. Change, the panelists agreed, is most likely to come through either legislation or the courts. Legislation is the more difficult to affect (there are too many well-funded commercial interests ranged on the opposing side); there is a better chance of a precedent-setting court case tipping the balance in favor of libraries. Such a case is most likely to come from the 2nd or 9th Circuit, which have a record of liberal rulings on Fair Use issues. One interesting observation from the panel was that most of the cases brought so far have involved “unsympathetic figures” — individuals who blatantly abused Fair Use on a large scale, provoking draconian rulings. What’s needed is more cases involving “sympathetic figures” like libraries — the good guys who get caught in the cross-fire. Anybody want to be next? :-)

Music finally joins Digital Humanities

For a couple of decades now, humanities scholars have been digitizing literary, scriptural, and other texts, in order to exploit the capabilities of hypertext, markup, etc. to study those texts in new ways. The complexity of musical notation, however, has historically prevented music scholarship from doing the same for its texts. PDFs of musical scores have long been available, but they’re not searchable texts, and not encoded as digital data, so they can’t be manipulated in the same way. Now there’s a new project called the Music Encoding Initiative, jointly funded by the National Endowment for the Humanities and the Deutsche Forschungsgemeinschaft (the German research foundation). MEI (yes, they’ve noticed it’s also a Chinese word for “beauty”) has just released a new digital encoding standard for Western classical musical notation, based on XML. It’s been adopted so far by several European institutions and by McGill University. If, as one colleague put it, it “has legs,” the potential is transformative for the discipline. Whereas critical editions in print force editors to make painful decisions between sources of comparable authority — the other readings get relegated to an appendix or supplementary volume — in a digital edition, all extant readings can be encoded in the same file, and displayed side by side. An even more intriguing application of this concept is the “user-generated edition”: a practicing musician could potentially approach a digital edition of a given work, and choose to output a piano reduction, or a set of parts, or modernized notation of a Renaissance work, for performance. Imagine the savings for libraries, which currently have to purchase separate editions for all the different versions of a work.
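To give a flavor of how one file can hold multiple readings, here is a toy fragment loosely modeled on MEI’s critical-apparatus markup, where alternative readings sit side by side and software picks one per source. It is heavily simplified for illustration and is not a valid MEI document:

```python
import xml.etree.ElementTree as ET

# One measure where the autograph and the first edition disagree on a note.
# <app>/<rdg> are modeled on MEI's apparatus elements; attributes simplified.
snippet = """
<measure n="1">
  <app>
    <rdg source="autograph"><note pname="c" oct="4"/></rdg>
    <rdg source="first-edition"><note pname="e" oct="4"/></rdg>
  </app>
</measure>
"""

def reading_for(measure, source):
    """Return the notes of the reading attested by the given source."""
    for rdg in measure.iter("rdg"):
        if rdg.get("source") == source:
            return [(n.get("pname"), n.get("oct")) for n in rdg.iter("note")]
    return []

measure = ET.fromstring(snippet)
print(reading_for(measure, "autograph"))       # -> [('c', '4')]
print(reading_for(measure, "first-edition"))   # -> [('e', '4')]
```

The same mechanism is what makes the “user-generated edition” thinkable: since every reading is data in one file, rendering software can output whichever version the performer asks for.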

Music and metadata

In a session titled “Technical Metadata for Music,” two speakers, from SUNY and a commercial audio-visual preservation firm respectively, stressed the importance of embedded metadata in digital audio files. Certain information, such as recording date, is commonly included in filenames, but this is an inadequate measure from a long-term preservation standpoint: filenames are not integral to the file itself, and are typically associated with a specific operating system. One speaker cited a recent Rolling Stone article, “File not Found: the Recording Industry’s Storage Crisis” (December 2010), describing the record labels’ inability to retrieve their backfiles due to inadequate filenames and lack of embedded metadata. Metadata is now commonly embedded by many consumer devices, such as digital cameras and smartphones.

For music, embedded metadata can include not only technical specifications (bit depth, sample rate, and locations of peaks, which can be used to optimize playback) but also historical context (the date and place of performance, the performers, etc.) and copyright information. The Library of Congress has established sustainability factors for embedded metadata. One format that meets these requirements is Broadcast Wave Format (BWF), an extension of WAV: it can store metadata as plain text, including historical-context data. The Technical Committee of ARSC (Association for Recorded Sound Collections) recently conducted a test wherein they added embedded metadata to some BWF audio files and tested them with a number of popular applications. The dismaying results showed that many apps not only failed to display the embedded metadata, but also deleted it completely. This, in the testers’ opinion, calls for an advocacy campaign to raise awareness of the importance of embedded metadata. ARSC plans to publish its test report on its website; the software for embedded metadata that they developed for the test is also available as a free open-source app.
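What makes BWF attractive for preservation is that its “bext” chunk stores these descriptive fields as plain text at fixed offsets inside the file itself. As a rough sketch (the field widths follow the published BWF bext layout; only the leading text fields are handled, and the sample values are invented), the fields can be packed and unpacked with nothing but the standard library:

```python
import struct

# Sketch of writing/reading the fixed text fields of a BWF "bext" chunk.
# Field widths follow the published layout: Description (256 bytes),
# Originator (32), OriginatorReference (32), OriginationDate (10),
# OriginationTime (8). TimeReference, UMID, and CodingHistory are omitted.

def make_bext(description, originator, orig_ref, date, time):
    """Pack the leading text fields into the first 338 bytes of a bext chunk."""
    def pad(s, n):
        b = s.encode("ascii")
        assert len(b) <= n, "field too long"
        return b.ljust(n, b"\x00")  # null-pad to the fixed field width
    return (pad(description, 256) + pad(originator, 32) +
            pad(orig_ref, 32) + pad(date, 10) + pad(time, 8))

def parse_bext(chunk):
    """Unpack the same fields back into a dict of stripped strings."""
    desc, orig, ref, date, time = struct.unpack("<256s32s32s10s8s", chunk[:338])
    clean = lambda b: b.rstrip(b"\x00").decode("ascii")
    return {"Description": clean(desc), "Originator": clean(orig),
            "OriginatorReference": clean(ref), "OriginationDate": clean(date),
            "OriginationTime": clean(time)}

chunk = make_bext("Interview, master tape 3", "WFU Library",
                  "US-WFU-000123", "2011-03-15", "14:30:00")
print(parse_bext(chunk)["OriginationDate"])  # prints 2011-03-15
```

Because the fields travel inside the audio file rather than in a filename or a sidecar database, they survive copying between systems — which is exactly the property the ARSC test found many consumer applications failing to respect.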

Music cataloging

A pre-conference session held by MOUG (Music OCLC Users Group) reported on an interesting longitudinal study that aimed to trace coverage of music materials in the OCLC database. The original study was conducted in 1981, when OCLC was relatively new. MOUG testers searched OCLC for newly published music books, scores, and sound recordings listed in journals and leading vendor catalogs, along with the core repertoire listed in ALA’s bibliography A Basic Music Library, and assessed the quantity and quality of available cataloging copy. The study was replicated in 2010. Exact replication was rendered impossible by various developments over the intervening 30 years — changes in the nature of the OCLC database from a shared catalog to a utility; more foreign and vendor contributors; and the demise of some of the reference sources used for the first sample of searched materials, necessitating substitutions — but the study has nevertheless produced some useful statistics. Coverage of books, not surprisingly, increased over the 30 years to 95%; representation of sound recordings also increased, to around 75%; but oddly, scores have remained at only about 60%. As for quality of the cataloging, the 2010 results showed that about 20% of sound recordings have been cataloged as full-level records, about 50% as minimal records; about a quarter of scores get full-level treatment, about 50% minimal. The study thus provides some external corroboration of long-perceived music cataloging trends, and also a basis for workflow and staffing decisions in music cataloging operations.

A session titled “RDA: Kicking the Tires” was devoted to the new cataloging standard that the Library of Congress and a group of other libraries have just finished beta-testing. Music librarians from five of the testing institutions (LC, Stanford, Brigham Young, U North Texas, and U Minnesota) spoke about their experiences with the test and with adapting to the new rules.

All relied on LC’s documentation and training materials, recording local decisions on their internal websites (Stanford has posted theirs on their publicly-accessible departmental site). An audience member urged libraries to publish their workflows in the Toolkit, the online RDA manual. It was generally agreed that the next step needed is the development of guidelines and best practices.

None of the testers’ ILSs seems to have had any problems accommodating RDA records in MARC format. LC has had no problems with their Voyager system, corroborating our own experience here at WFU. Some testers reported problems with some discovery layers, including PRIMO (fortunately, we haven’t seen any glitches so far with VuFind). Stanford reported problems with their (un-named) authorities vendor, mainly involving “flipped” (changed name order) entries. Most testers are still in the process of deciding which of the new RDA data elements they will display in their OPACs.

Asked what they liked about RDA, both the LC and Stanford speakers cited the flexibility of the new rules, especially in transcribing title information, and in the wider range of sources from which bib info can be drawn. Others welcomed the increased granularity, designed to enhance machine manipulation, and the chance this affords to “move beyond cataloging for cards” towards the semantic web and relation-based models. It was also noted that musicians are already used to thinking in FRBR fashion — they’ve long dealt with scores and recordings, for instance, as different manifestations of the same work.

Asked what they thought “needed fixing” with RDA, all the panelists cited access points for music (the LC speaker put up a slide displaying 13 possible treatments of Rachmaninoff’s Vocalise arranged for saxophone and piano). There are other areas — such as instrument names in headings — that the RDA folks haven’t yet thought about, and the music community will probably have to establish its own practice. Some catalogers expressed frustration with the number of matters the new rules leave to “cataloger’s judgment.” Others mentioned the difficulty of knowing just how one’s work will display in future FRBRized databases, and of trying to fit a relational structure into the flat files most of us currently have in our ILSs.

What was most striking about the session was the generally upbeat tone of the speakers — they saw more positives than negatives with the new standard, assured us it only took some patience to learn, and were convinced that it truly was a step forward in discoverability. One speaker, who trains student assistants to do copy-cataloging and tells them “When in doubt, make your best guess, and I’ll correct it later,” observed that her students’ guesses consistently conformed to RDA practice — some anecdotal evidence suggesting that the new standard may actually be more intuitive for users, and that new catalogers will probably learn it more easily than those of us who’ve had to “unlearn” AACR2!


Our venue was the Loews Philadelphia Hotel, which I must say is the coolest place I’ve ever stayed in. The building was the first International Style high-rise built in the U.S., and its public spaces have been meticulously preserved and/or restored, to stunning effect. The first tenant was a bank, and so you come across huge steel vault doors and rows of safety-deposit boxes, left in situ, as you walk through the hotel. Definitely different!

Another treat was visiting the old Wanamaker department store (now a Macy’s) to hear the 1904 pipe organ that is reputed to be the world’s largest.

Dianne Hillman on collaborative opportunities

Tuesday, February 8, 2011 12:41 pm

The second keynote of the morning was Dianne Hillman, who talked about collaborations between programmers and catalogers.

Dianne dated her career by showing us a few tools that I remember from my early time as a librarian (card catalog rods and a card filer)! I wonder what that says about the pace of change from the early 1970s to the early 1990s. For most of her talk, however, Dianne focused on the emerging roles of catalogers in libraries and the potential collaborations that exist between catalogers and programmers. Dianne has published several pieces about RDA in the past few years, and it was interesting to hear her thoughts about the intersection between MARC, RDA, ISBD, AACR, RDF, XML and other ABTs (Acronym Based Technologies).

Her presentation focused on the need to re-shape the cataloging profession, and she spent a few minutes talking about the potential impact of RDA encoded in RDF as a replacement for the MARC encoding and representation standards. She introduced some concepts from her recent publications, including metadata registries, the use of identifiers as opposed to literals in records, and the use of single record or vocabulary repositories as opposed to records replicated across thousands of databases.
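The identifiers-versus-literals point can be sketched in a few lines (the registry URI, labels, and record shapes below are all invented for illustration, not taken from her talk): when a record stores an identifier that points into a shared registry, a change to the preferred label happens once, in the registry, instead of in every record that copied the string.

```python
# Identifier-based records point at a shared vocabulary registry entry;
# literal-based records copy the string into every record.
# (Registry contents and record shapes here are hypothetical.)
registry = {
    "http://example.org/name/n123": "Rachmaninoff, Sergei, 1873-1943",
}

literal_records = [
    {"title": "Vocalise", "composer": "Rachmaninoff, Sergei, 1873-1943"},
]
identifier_records = [
    {"title": "Vocalise", "composer": "http://example.org/name/n123"},
]

# A label change (say, a new preferred transliteration) is a single edit:
registry["http://example.org/name/n123"] = "Rakhmaninov, Sergei, 1873-1943"

def display(record):
    """Resolve an identifier to its current registry label for display;
    a plain literal just comes back unchanged (and stale)."""
    composer = record["composer"]
    return registry.get(composer, composer)

print(display(identifier_records[0]))  # picks up the new label
print(display(literal_records[0]))     # still shows the old copied string
```

Multiply the literal case by thousands of databases holding replicated copies of the same record and the maintenance argument for registries and identifiers becomes clear.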

The audience asked some interesting questions: 1) about the economics of migration (it is tough, but not changing is not an option); 2) about the future of cataloging in libraries (traditional cataloging is diminishing; copy cataloging is the current model, and distributed cataloging/data-geeking is the future; getting rid of all the catalogers first does not make sense — get them the skills to change); 3) about what programmers could learn from the cataloging community (creativity in data representation and use, and an understanding of the complexities of library data).

The rest of the morning and early afternoon is devoted to short IT presentations & should be very interesting.

ALA, RDA and more

Saturday, January 8, 2011 3:09 pm

I have to admit that I have some conference fatigue. Back-to-back conferences can make it tough to focus, so to combat fatigue I decided to pick and choose sessions on a whim on Saturday. I saw some interesting talks on book reading clubs that dovetailed nicely with an NPR piece I heard last week on the continuing trend of book clubs.

I was wandering down the hall to head to a ‘how Washington works’ session when I ran into Steve Kelley and got a very well informed explanation of the role of FRBR, RDA, FRAD, linked data, authority control and the state of MARC (thanks Steve). I had attended a session on the adoption of RDA at ALISE and have to admit that I left the session confused about what the right first steps are. After talking with Steve it seems that it might be worth taking our library data and trying out some of these models on it to see what transitioning our records to these new standards might mean for us.

The session on the current funding issues for public libraries reinforced a discussion I had with Leo Cao at ALISE regarding the importance of framing library relevance in terms that make sense to your community. This means showing economic responsibility, reflecting the diverse makeup of your community, and providing services that have real-world impact. In academic libraries we often feel that our mission is focused on research and on different populations, but I found that the types of services that public libraries provide are highly relevant.

Saturday at ALA with Carolyn

Wednesday, July 15, 2009 11:13 pm

On Saturday, I attended “Workflow Tools for Automating Metadata Creation and Maintenance,” a panel discussion among individuals who work on digital projects at their institutions.

Much of the talk was highly technical and I didn’t quite understand everything, but one of the most interesting projects discussed was by Brown University’s Ann Caldwell, Metadata Coordinator for the Center for Digital Initiatives, who spoke about their recent project assisting the Engineering Department with its upcoming accreditation. Engineering professors wanted to digitize materials such as syllabi and assignments so that the accreditation team could have them in advance of their visit. The Center created an easy way for professors to put materials into the repository by creating a very simple MODS (Metadata Object Description Schema) record form with required fields to fill in (e.g. date, title, genre) and providing an easy way for individuals to upload files (i.e. digital objects). Faculty decide how they want to set up folders for their materials; they can dump everything in one folder or create multiple folders down to the micro-level. Faculty also determine which individuals can see which materials. Because of the enormous amount of material being brought in to be digitized, the Center developed a tracking system. Due to the success of this project, the Engineering Department will continue digitizing their materials for future accreditations, and Ms. Caldwell indicated other departments were interested in doing the same.
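A deposit form like the one described might generate a minimal MODS record along these lines. This is only a sketch under assumptions (the field choices mirror the required fields mentioned in the talk, the sample values are invented, and a real MODS record would typically carry many more elements):

```python
import xml.etree.ElementTree as ET

# MODS records live in the Library of Congress MODS namespace.
MODS_NS = "http://www.loc.gov/mods/v3"

def minimal_mods(title, genre, date):
    """Build a bare-bones MODS record from the three required form
    fields described in the talk (title, genre, date)."""
    ET.register_namespace("", MODS_NS)
    mods = ET.Element(f"{{{MODS_NS}}}mods")
    title_info = ET.SubElement(mods, f"{{{MODS_NS}}}titleInfo")
    ET.SubElement(title_info, f"{{{MODS_NS}}}title").text = title
    ET.SubElement(mods, f"{{{MODS_NS}}}genre").text = genre
    origin = ET.SubElement(mods, f"{{{MODS_NS}}}originInfo")
    ET.SubElement(origin, f"{{{MODS_NS}}}dateCreated").text = date
    return mods

# Hypothetical deposit from the form:
record = minimal_mods("ENGN 0032 Syllabus, Fall 2008", "syllabus", "2008-09-01")
print(ET.tostring(record, encoding="unicode"))
```

The appeal of a fixed form like this is that faculty never touch the XML: the form enforces the required fields, and the repository gets consistent, machine-readable records for every uploaded object.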

In regards to metadata creation workflow, consistency, automation, streamlining and true interoperability between systems are of utmost importance. With the help of metadata tools, librarians can do their jobs better and more efficiently. Smart systems are possible and necessary. We need to pay attention to user interface design for cataloging tools because it is critical to the success of our data.

Next, I attended a four-hour panel discussion titled “Look Before You Leap: Taking RDA for a Test-Drive.” Again, a highly technical presentation. RDA is the acronym for “resource description and access,” a new cataloging tool to be utilized for the description of all types of resources and content. It is compatible with established principles, models, and standards, and is adaptable to the needs of a wide range of resource description communities (i.e. museums, libraries, etc.). Tom Delsey began the session by comparing and contrasting AACR2 and RDA. Nanette Naught followed by previewing the RDA Toolkit, which is currently in the alpha testing stage. Sally McCallum of the Library of Congress spoke on new fields developed for the MARC record in conjunction with RDA. John Espley, Director of Design at VTLS, gave attendees a preview of what an RDA record would look like in the ILS he represents. His presentation finally shed some light for me as to how an RDA cataloging record would appear in an online catalog. National Library of Medicine’s Barbara Bushman described the upcoming testing of RDA at 23 select institutions. The testing will occur in OCLC Connexion as well as in various ILSs, Voyager being one. Once the RDA Online software is released sometime in November or December 2009, a preparation period including training for the testing institutions will occur in January–March 2010. Formal testing will commence in April–June, followed in July–September by a formal assessment. In October 2010, a final report will be shared with the U.S. library community.

If and when RDA is approved for use, training for catalogers will be the next step. Knowledge and training about RDA for all library staff will need to take place as well. People on the front lines working with patrons in catalog instruction will need to know the differences between a specific work and its possible multiple manifestations (work and manifestation being FRBR terminology).

For more information, one can visit the RDA web site.

Needless to say, after this session ended, I was ready to head back to my hotel for some rest. I will post more information on the rest of my conference experiences on Friday.
