Professional Development

Leslie at MLA 2016

Monday, March 14, 2016 8:08 pm

This year’s meeting of the Music Library Association was held in Cincinnati, where, during breaks and receptions, we enjoyed 1920s tunes performed by members of the Cincinnati Opera, and by MLA’s own big band, in the Netherland Plaza Hotel’s beautifully restored 1930 Art Deco ballroom.

DIVERSITY

It has long been recognized that America’s conservatories and orchestras remain overwhelmingly white (less than 5% of students in music schools are non-Asian minorities). While administrators of these institutions are currently struggling to rectify the situation, libraries (it was noted at the MLA meeting) have a chance to be an exemplar. In a joint project with ARL called the Diversity & Inclusion Initiative, MLA has supported internships and fellowships for MLIS students with music backgrounds to work in music libraries. The diversity aimed for includes not just race/ethnicity, but also gender, marital status, disabilities, etc. In the opening plenary session, we heard from some of the former fellows. Benefits that were particularly appreciated included the visibility and recognition acquired while a student, which subsequently opened doors to professional opportunities; peer mentors (previous fellows) who provided ongoing support with entry into the profession, and after; and help with the hidden costs of college (additional fees, textbooks, etc.) for which first-generation students are often unprepared. Difficulties encountered included locating sources of help – one fellow reported “cold calling” random MLA members before discovering the DII program. This prompted a discussion, during the Q&A, on how the program could be better publicized.

On a similar outreach note, MLA (whose membership encompasses North America – U.S. and Canada) plans to invite Latin American colleagues to next year’s meeting in Orlando, billing it as a Pan-American conference.

LINKED DATA

MLA’s initiatives in this field:

  • Two new thesauri have been published in the past year, for medium-of-performance terms (LCMPT) and for music genre/form terms (LCGFT), along with best-practices documents for both.
  • Involvement in LD4L (Linked Data for Libraries), a collaborative project of Cornell, Harvard, and Stanford.
  • The NACO Music Project, working on authority data.
  • A Bibframe Task Force, which is undertaking various projects to enhance the new encoding schema to meet music users’ needs.

We heard about other projects that member libraries have done to enhance discoverability of special collections:

The Linked Jazz Project, best known for its visualizations, is based on data extracted from oral-history transcripts in numerous jazz archives. The data is then converted to RDF triples reflecting relationships between jazz artists (x talks about y; y knows of x). The data is enhanced via crowdsourcing. The developers hope others will use the LJ data to build additional linked-data sets: mashing LJ data with performances at Carnegie Hall is one such project; another is unearthing female jazz artists (neglected in traditional jazz histories) by enriching LJ data with other sources such as DBpedia, MusicBrainz, and VIAF (the international authority file).
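
To make the "x talks about y; y knows of x" idea concrete, here is a minimal sketch (not the Linked Jazz code) of how mentions mined from oral-history transcripts might become RDF triples. The predicate URIs and the artist-URI scheme are invented for illustration; a real project would reconcile names against VIAF or DBpedia rather than minting URIs from strings.

```python
# Hypothetical predicate URIs, loosely modeled on the relationships
# described above ("x talks about y" / "y knows of x").
TALKS_ABOUT = "http://example.org/rel/talksAbout"
KNOWS_OF = "http://example.org/rel/knowsOf"

def person_uri(name):
    """Mint a simple illustrative URI for an artist name."""
    return "http://example.org/person/" + name.replace(" ", "_")

def mentions_to_triples(transcripts):
    """transcripts: dict mapping a speaker to the names they mention.
    Returns (subject, predicate, object) triples, adding the inverse
    'knows of' triple for each mention."""
    triples = []
    for speaker, mentioned in transcripts.items():
        for name in mentioned:
            triples.append((person_uri(speaker), TALKS_ABOUT, person_uri(name)))
            triples.append((person_uri(name), KNOWS_OF, person_uri(speaker)))
    return triples

triples = mentions_to_triples({"Mary Lou Williams": ["Dizzy Gillespie"]})
for t in triples:
    print(t)
```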

Colleagues at Michigan State used Discogs (a crowdsourced, expert-community-reviewed database of metadata on pop music recordings) to process a gift collection of 1200 LPs of Romani music, which also included pop music containing Gypsy stereotypes. They hope to use this collection as a pilot to develop a process for a much larger corporate gift of 800,000 pop recordings and videos. They were able to extract data directly from the Discogs website using Discogs’ API (which outputs in JSON – they used Python to convert the JSON to XML and then MARCXML). Cataloging challenges included: dealing with usage differences between Discogs’ “release” and RDA’s “manifestation”; similarly, between Discogs’ “roles” for artists and RDA’s “relationship designators”; and mapping Discogs’ genres and subgenres to LC’s genre/form terms and medium-of-performance terms, supplementing with LC subject headings as needed. Discogs’ strengths: expertise in languages (from its international contributor community) and in obsolete formats; and the ability to link to the Discogs entry from the library catalog. Our presenters plan to propose to the Discogs community indexing the UPC (universal product code, the barcodes on CDs); a similar resource, MusicBrainz, does this.
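
The JSON-to-MARCXML step can be sketched roughly as follows. The input mimics the general shape of a Discogs release response (the field names here are simplified), and the MARC mappings (245 for title, 511 for performers) are illustrative, not the presenters’ actual crosswalk.

```python
import json
import xml.etree.ElementTree as ET

# A toy stand-in for a Discogs API release response.
sample_json = '''{"title": "Romani Songs",
                  "artists": [{"name": "Example Ensemble", "role": "Performer"}],
                  "year": 1975}'''

def release_to_marcxml(release_json):
    """Convert a simplified release JSON string to a minimal MARCXML record."""
    release = json.loads(release_json)
    record = ET.Element("record", xmlns="http://www.loc.gov/MARC21/slim")
    # Title proper -> MARC 245 $a
    df245 = ET.SubElement(record, "datafield", tag="245", ind1="0", ind2="0")
    ET.SubElement(df245, "subfield", code="a").text = release["title"]
    # Each artist -> a 511 participant/performer note
    for artist in release.get("artists", []):
        df511 = ET.SubElement(record, "datafield", tag="511", ind1="0", ind2=" ")
        ET.SubElement(df511, "subfield", code="a").text = artist["name"]
    return ET.tostring(record, encoding="unicode")

xml_out = release_to_marcxml(sample_json)
print(xml_out)
```

In practice there would be many more mappings (genres, formats, dates), and the Discogs API itself requires authentication and rate-limit handling not shown here.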

A third project, at Cornell, was ultimately unsuccessful, but also illustrates the variety of data resources and tools that people are trying to link up. For a collection of hip-hop flyers, they constructed RDF triples using data from MusicBrainz, ArtStor, and Cornell’s existing metadata on the related events etc. They chose Bibframe for their encoding schema, and compiled an ontology from Getty’s AAT vocabulary, various music and event ontologies, and Schema.org. Reconciliation of names from all these sources was done using the open-source analytics tool OpenRefine. Problems arose as they came to feel that Bibframe did not meet their needs for describing flyers; they decided to abandon it in favor of the LD4L ontology. Reconciliation of names also proved more problematic than expected.

DISCOVERY

In a session on music discovery requirements, colleagues noted two things that current ILSs and discovery layers are not good at: showing hierarchies (for instance, making available additional search terms in thesauri, ontologies, etc.); and mapping multiple physical formats to one title (for multi-media items, such as a book issued with a disc, or a score with a recording, or a CD with a DVD – in most interfaces, the content of the second piece will not be retrieved under a format-facet search).

A presenter from Stanford proposed facet displays that include drop-down menus showing a relevant thesaurus, allowing users to further narrow to a subgenre, for instance. For music, the newly developed medium-of-performance thesaurus, if displayed with multiple search instances, could enable musicians to enter all the instruments in their ensemble, and retrieve music for that specific combination of instruments. Also discussed were domain-specific search interfaces, such as the ones done by UVA for music and videos. Needless to say, there are potential applications for other disciplines.
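
A toy illustration of that ensemble-search idea: a musician enters every instrument in the group, and only works scored for exactly that combination are retrieved. The catalog data and term strings below are invented for the example (real records would carry LCMPT terms, and a real interface would query an index rather than a Python list).

```python
from collections import Counter

# Invented sample records with medium-of-performance terms.
catalog = [
    {"title": "Trio No. 1", "medium": ["violin", "cello", "piano"]},
    {"title": "Quartet in C", "medium": ["violin", "violin", "viola", "cello"]},
    {"title": "Duo", "medium": ["flute", "harp"]},
]

def works_for_ensemble(catalog, instruments):
    """Return titles scored for exactly the given instruments,
    counting duplicates (two violins != one violin)."""
    want = Counter(instruments)
    return [rec["title"] for rec in catalog if Counter(rec["medium"]) == want]

print(works_for_ensemble(catalog, ["piano", "cello", "violin"]))  # ['Trio No. 1']
```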

Colleagues at East Carolina have made use of Blacklight to map multiple physical formats to the same title.
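
The multi-format mapping problem can be shown schematically: a single bibliographic record for a book issued with a CD should surface under both the “Book” and “Audio CD” format facets. The field names below are invented; in a real Blacklight/Solr setup this would be a multivalued format facet field populated at index time.

```python
def format_facets(record):
    """Collect the format of every physical piece on the record
    into one multivalued facet list."""
    return sorted({piece["format"] for piece in record["pieces"]})

# Invented example: a book issued with an accompanying CD.
record = {
    "title": "Orchestration handbook",
    "pieces": [{"format": "Book"}, {"format": "Audio CD"}],
}

print(format_facets(record))  # ['Audio CD', 'Book']
```

With both values indexed, a format-facet search on either “Book” or “Audio CD” retrieves the title, addressing the failure mode described above.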

Leslie at MLA 2015

Monday, March 23, 2015 8:26 pm

Lots of good presentations at this year’s meeting of the Music Library Association in Denver. As at ALA, winter weather prevented a number of colleagues from attending, but we were able to Skype presenters in most cases, and for the first time, selected sessions were live-streamed. The latter will be posted on the MLA website.

DIGITAL HUMANITIES

In a session on “digital musicology,” several exciting projects were described:

Contemporary Composers Web Archive (CCWA). A Northeastern consortium project in progress. They’re crawling and cataloging composers’ websites, and contributing the records to OCLC and the Internet Archive. The funding is temporary, so here’s hoping they find a way to continue this critical work preserving the music and music culture of our times.

RISM OPAC. The Répertoire International des Sources Musicales is the oldest internationally-organized music index (of manuscripts and early printed editions), but only a small portion has so far been made available online. The new online search interface they’re developing retrieves digital scores available on the websites of libraries, archives, composers, and others worldwide. They expect to have 2 million entries when national inventories are completed.

Music Treasures Consortium (MTC). A similar project hosted by the Library of Congress, it links to digitized manuscripts and early printed editions in conservatories, libraries, and archives in the US, UK, and Germany. It’s modeled on an earlier project, the Sheet Music Consortium (hosted by UCLA).

Blue Mountain Project. Named after a Kandinsky painting representing creativity, this Princeton project, funded by an NEH grant, aims to provide coverage of Modernism and the Avant-Garde in arts periodicals 1848-1923. References to music in these sources are often fleeting, so there is a need for enhanced “music discovery.” The presenter discussed the challenges of digitizing magazines: the mix of text, images, and ads; multiple languages of periodicals in this project; variations in the transcription/spelling of names (they plan to cross-index to VIAF, the international authority file).

In the Q & A period, discussion centered on the global importance of projects such as these, and the concomitant need for best-practices standards (including a requirement to link to VIAF) and multi-language capabilities in metadata schema.

INFORMATION LITERACY

Now that the ACRL Framework has replaced learning objectives with “threshold concepts,” music librarians have begun taking first stabs at interpreting these for their discipline:

Scholarship as a conversation = performance as a conversation. Most music students enter college as performers, so this can serve as a base for scaffolding. One notable difference: performance lacks a tradition of formal citation — might some way be found to codify the teacher/student oral tradition by which the performing arts are transmitted?

Authority as constructed and contextual = performers as authorities (Performer X as a leading interpreter of Composer Y’s works); also, the changing of performance practices over time; learning to listen with a critical evaluative ear.

Information creation as process: understanding the editing process for scores, and also of recordings and video (vs. live performance).

Research as inquiry: every performing-arts student who spends long hours in practice and rehearsal is familiar with the concept of an iterative process — an excellent jumping-off point for understanding research as an iterative process.

Searching as strategic exploration: this has been related to musicians’ vexed relationship with library discovery interfaces that don’t work well for music retrieval! Resourcefulness and persistence are needed to meet performers’ information needs regarding specialized details such as instrumentation, key, range, etc.

Information has value = creative output has value. Understanding how the artist fits into the marketplace; the complexities of copyright as it applies to the arts.

COPYRIGHT

The music library community has long been frustrated by issues surrounding music recordings released online but governed by EULAs (end-user license agreements) that prohibit institutional purchase. MLA and the University of Washington have recently received an IMLS grant to develop strategies for addressing these issues, “culminating in a summit with stakeholders and industry representatives.” On the agenda: EULA reform (developing a standard language); preservation (given the industry’s appalling track record, perhaps the library community can create dark archives?); and public relations. Strategies being considered: developing a MLA best practices document; creating a test case; approaching either the smaller labels (who are generally more open to negotiation) or going directly for the big three (Sony, Warner, and Universal) on the theory that if they agree, others will follow.

Another session on recordings and fair use discussed the best practices movement. Noting that the courts, when confronted by new questions, have begun referring to community practice, many disciplines and professions are drafting best-practices documents. Unlike guidelines, whose specificity makes them prone to obsolescence, best-practices statements “reflect the fundamental values of a community” — which not only helps them better stand the test of time, but also results in more commonalities between communities, so that they reinforce each other, lending them more weight in the face of legal challenges. The NRPB (National Recording Preservation Board) recently completed a study that recommended such a document, and the ARSC (Association of Recorded Sound Collections) has a handbook forthcoming.

USAGE PATTERNS

At a poster session, I learned about two surveys done at Kent State that queried the preferences of music and other performing-arts students re the materials they use. One survey noted the significant number of print resources that still occupied top places in a ranking of preferred materials: print scores were much preferred to e-scores (68% to 28%); ditto for books (80% print to 27% electronic); CDs were still used regularly. E-journals, however, were preferred to print (64% to 32%). The survey’s conclusion found a “strong sentiment” in favor of a mix of print and electronic.

The other survey examined the continuing relevance of audio reserves. It confirmed widespread use of extra-library resources by students for their listening assignments: YouTube, streaming services such as Spotify and Pandora, MP3s they had purchased themselves. Reasons given for preferring these sources: ability to listen on a smartphone or tablet (a preference also noted by commercial database vendors, who have begun developing mobile-device capabilities); personal comfort and convenience. On the other hand, two encouraging reasons students give for using the library’s CD collection: the superior sound quality, and the availability of library staff for help.

CATALOGING

I attended a half-day workshop on genre and medium terms for music. Historically, the Library of Congress subject headings have combined, in long pre-coordinated strings, many disparate aspects of the materials we catalog: topic (Buddhism), genre (drama, folk music), form (symphonies), medium (painting, piano), physical format (scores), publication type (textbooks, catalogs), intended audience (children’s books, textbooks for foreign speakers). Since these can be more effectively machine-manipulated as discrete data than in strings, there’s a project afoot to parse them into separate vocabularies, to be used in new RDA fields, for more precise search-and-sort capabilities in our discovery interfaces.

Three vocabularies are being developed:

  • Genre/form (LCGFT) — e.g., drama, folk music
  • Demographic groups (LCDGT) — author’s nationality, gender, etc.; intended audience
  • Medium of performance (LCMPT) — for music: instruments/voices
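
A simplified illustration of the parsing problem: breaking one pre-coordinated heading string into the separate vocabularies above. The term-to-vocabulary tables here are tiny stand-ins (the real project must classify many thousands of terms, with far murkier cases), and "--" stands in for the subdivision dash.

```python
# Tiny illustrative term sets; the real vocabularies are far larger.
GENRE_FORM = {"Symphonies", "Folk music", "Drama"}   # LCGFT-like terms
MEDIUM = {"Piano", "String quartet"}                 # LCMPT-like terms
DEMOGRAPHIC = {"Children"}                           # LCDGT-like terms

def parse_heading(heading):
    """Split a pre-coordinated heading on '--' and sort each part
    into a facet bucket by vocabulary lookup."""
    facets = {"genre_form": [], "medium": [], "demographic": [], "other": []}
    for part in heading.split("--"):
        part = part.strip()
        if part in GENRE_FORM:
            facets["genre_form"].append(part)
        elif part in MEDIUM:
            facets["medium"].append(part)
        elif part in DEMOGRAPHIC:
            facets["demographic"].append(part)
        else:
            facets["other"].append(part)
    return facets

print(parse_heading("Symphonies--Piano--Scores"))
```

Once parsed into discrete fields, each facet can be indexed and sorted independently in a discovery interface, which is the whole point of the exercise.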

Given the many thousands of existing subject terms, this is clearly a challenging task, and I acquired a new appreciation for its complexities as I listened to the LC folks describe their struggles wrestling music terminology (as just one disciplinary example) to the ground. Problems debated included:

  • types of music that musicians have long regarded as genres in their own right (think string quartets) but are really just defined by their instrumentation or number of players; ditto for music set to certain texts (Magnificats, Te Deums);
  • bringing out the distinctions between art music, folk music, and popular music (an attempt to remedy the original classical-centrism of the LC terminology);
  • terms like “world music” that seem to have been invented mainly for marketing purposes;
  • music for specific events or functions; stuff like master classes, concert tours, etc.;
  • ethnomusicological (area studies) terms, which proved too numerous, and too inconsistently defined in reference sources, to be dealt with in the project’s initial phase;
  • and tension between the need to build a logical hierarchy and recognizing the more fluid conventions practiced by user communities.

While the new vocabularies are still under construction, we learned about the major changes, and how to encode the terms in RDA records.

In a session on Bibframe (a new encoding format designed to replace the aging MARC format), we heard about LD4L, a project conducted by Stanford, Cornell, Harvard, and LC to develop an open-source extensible ontology to aid in conversion of MARC to Bibframe; and another project at UC-Davis to develop a roadmap for Bibframe workflows, from acquisitions operations to cataloging and conversion, and even a prototype discovery layer.

SIDELIGHTS

A Friday-night treat was the screening of a silent film (The General, starring Buster Keaton) accompanied by the Mont Alto Motion Picture Orchestra (a 6-piece strings-and-winds band). The score was one they had compiled from music used by theater orchestras of the period, now archived in the University of Colorado’s American Music Research Center.

BIBFRAME, BIBFRAME, BIBFRAME

Friday, February 6, 2015 10:45 am

It was good to visit my home state of Illinois for ALA Midwinter 2015 in Chicago. I was able to get together with a few cousins with whom I was close growing up in Decatur, three hours south. And who doesn’t like 18 inches of snow? Somehow the weather didn’t actually interfere too much with the conference. If anything it brought attendees closer, I daresay.

At the meeting of the ALCTS Copy Cataloging Interest Group, Angela Kinney from the Library of Congress talked about restructuring at LC, specifically reductions in acquisitions and cataloging staff; this is a theme at many libraries, unfortunately. Roman Panchyshyn from Kent State (whom I’ve also seen present on an RDA-enrichment project similar to the one we’ve just undergone with Backstage) then talked about the considerable proliferation of e-resource bulk record loads in recent years and the need to build copy catalogers’ skills in this area (at their library this work has traditionally been done by professional catalogers and systems staff). Necessary skills include PC file management, FTP/data exchange, basic knowledge of RDA, comfort with secondary applications such as MarcEdit, and the ability to follow instructions and documentation. Here at ZSR, our copy catalogers, I must say, have these skills in spades, and I do not take for granted the fact that they are so sophisticated; nor should any of us. Not only are they able to follow workflows and documentation, but they create their own. Every record load is a little bit different, and these operations require attentiveness, diligence, and accuracy.

I also attended a session by the ALCTS MARC Formats Transitions Interest Group. The central topic was BIBFRAME, the new encoding format being developed by LC in collaboration with several libraries that eventually is meant to replace MARC as a more linked data/web-friendly format. Nancy Fallgren from the National Library of Medicine talked about the need for BIBFRAME (I think I’m going to get sick of typing that word before the end of this paragraph) to be flexible enough to work with the different descriptive languages of various sectors of the cultural heritage community – libraries, archives, museums, etc. She emphasized that BIBFRAME is not a descriptive vocabulary in and of itself and is built to accommodate RDA, not compete with it; it is a communication method, not the communication itself. Perhaps most importantly, this new format has to be extensible beyond library catalogs, as BIBFRAME-encoded data must go bravely off into the web to seek its fate, alone. Xiaoli Li from UC-Davis described her university’s two-year pilot project, BIBFLOW (BIBframe + workFLOW), in which they are actively experimenting with technical services workflows using the new format. She concluded that “Linked data means an evolutionary leap for libraries, not a simple migration.” This seems fair to say.

In July 2014 I started on two committees, and Midwinter was my first official meeting with both. On the ALCTS Acquisitions Section Organization and Management Committee, or, less conveniently, ALCTSASOAMC, we are planning a preconference for Annual in San Francisco entitled “Streaming Media, Gaming, and More: Emerging Issues in Acquisitions Management and Licensing.” The gaming component of this, in particular, is interesting to me, because I know absolutely nothing about it. I have high hopes for the program, which will consist of librarian presentations, a vendor panel, and guided group discussions. I am also on the ALCTS Planning Committee, which has been working on a fairly exhaustive inventory of all ALCTS committees’ and interest groups’ activities with an eye to how they support ALA’s initiatives of Advocacy, Information Policy, and Professional and Leadership Development. It’s been an interesting exercise; one gets a broad sense of the many and diverse efforts being made to support librarians and to advance the profession. In the end we will draft a new three-year strategic plan.

What exactly someone who decided to drive back to Winston-Salem from Chicago can really contribute to strategic planning is a question for another day. I’ll close with the dreary view from inside the hotel room I shared with Steve Kelley, who at the time seemed to be dying. Fortunately blue skies (see above) emerged.

Lauren at ALA Midwinter 2015 (aka Chicago’s 4th Biggest Blizzard)

Thursday, February 5, 2015 5:59 pm

My notes on: IPEDS, ebook STLs and video, our vendors, linked data, BIBFRAME, OCLC and Schema.org, ALCTS/LITA Metadata Standards Committee, advocacy

At the ARL Assessment Forum, there was much complaining about contradictory instructions for IPEDS collection counts and circulation. Susan and I had the luck of chatting in the hallway with Bob Dugan from UWF, who turned out to be the library community’s main official contact with the person responsible for the library section of IPEDS. Bob is also the author of a LibGuide with clarifying info from the IPEDS help desk. Bob seems hopeful that changes in the definitions for gathering the info (but not the numbers/form) could happen in time for the next cycle. My main specific takeaways from the various speakers:

  • the only figure that will be checked between the current IPEDS survey and the previous one is total library expenditures (not just collection);
  • in spite of the language, the physical circulation part of the survey seems to focus on lending, not borrowing, and may duplicate the ILL info section;
  • some libraries are considering using COUNTER BR1 and BR2 reports for ebook circulation and footnoting which vendors use which type (BR1 or BR2).

ALCTS Technical Services Managers in Academic Libraries Interest Group discussed a wide range of current issues, and it was both reassuring and annoying that no matter the library size, public or private, right now everyone has the same problems and no great answers: high-cost ebook STLs, difficulties with video, etc. I inferred that our tactic of explaining prices and the options to faculty (e.g. explaining a mediation message about an EBL ebook or that the producer of a desired video is requiring libraries to pay significantly more than the individual pricing advertised) produces greater customer satisfaction than setting broad restrictive rules to stay within budget.

Jeff, Derrik, and I had a good meeting with a domestic vendor regarding ebooks and I discussed some specific needs with a foreign vendor. All felt like we made progress.

Linked data in libraries is for real (and will eventually affect cataloging). I attended several relevant sessions and here is my distillation: LD4L and Vivo, as a part of LD4L, are the best proof-of-concept work I’ve heard about. When starting to learn about linked data, there is no simple explanation; you have to explore it and then try to wrap your brain around it. Try reading the LD4L Use Cases webpages to get an understanding of what can be achieved and try looking at slide #34 in this LD4L slideshow for a visual explanation of how this can help researchers find each other. Here’s a somewhat simple explanation of Vivo from a company that helped start it and now is the “first official DuraSpace Registered Service Provider for VIVO.” OCLC is doing a lot of groundwork for linked data, using Schema.org, and that effort plays into the work being done by LD4L. While OCLC has been using Schema.org, Library of Congress has invested in developing BIBFRAME. I’m looking forward to reading the white paper about compatibility of both models, released just before the conference. The joint ALCTS/LITA Metadata Standards Committee (which replaced MARBI) is naturally interested in this topic and it was discussed at the Committee meeting. The Committee also gathered input from various groups on high level guidelines (or best practices) for metadata that Erik Mitchell, a committee member, originally drafted.

I also attended the meeting of the ALCTS Advocacy Committee, which has a liaison to the ALA Advocacy Coordinating Group. I understand that advocacy will be emphasized in ALA’s forthcoming strategic plan. If you’re not familiar with the Coordinating Group, it has a broader membership than just ALA division representation, but does include ACRL, LITA, and APALA in addition to ALCTS. I believe ZSR is well-represented in these groups and thus has some clear channels for advocacy!


Leslie at MLA 2014

Saturday, March 15, 2014 4:38 pm

This year’s Music Library Association conference was held in Atlanta. It was a very productive meeting for me: I got a lot of continuing education in RDA, the new cataloging standard; and an opportunity to renew contacts in the area of ethnomusicology (music area studies), having learned just before leaving for MLA that our Music Department plans to add an ethnomusicologist to their faculty.

RDA

The impact of RDA, one year after its adoption by the Library of Congress, was apparent in the number of sessions devoted to it during the general conference, not just the catalogers’ sessions sponsored by the Music OCLC Users Group. I learned about revisions made to the music rules in the RDA manual, in MLA’s “Best Practices” document, and in the various music thesauri we use. (So if you see a “Do Not Disturb” sign on my door, you’ll know I have a lot of re-reading to do, all over again!). One sign of the music library community’s clout: MLA’s Best Practices will be incorporated into the official RDA manual, with links integrated into the text alongside LC’s policy statements. In a session on RDA’s impact on public services, I was gratified to find that almost all the talking points presented by the speakers had been covered in my own presentation to our liaisons back in September.

PRESERVATION AND COPYRIGHT

LC gave a report on its National Recording Preservation Plan (NRPP), which began in February 2013. The group has developed 31 recommendations, which will be presented at hearings scheduled for this year by the US Copyright Office, covering the entire copyright code, including section 108, orphan works, and pre-1972 sound recordings (the ones not covered by federal law, leaving librarians to navigate a maze of state laws). Also to be presented: a proposed “digital right of first sale,” enabling libraries and archives to perform their roles of providing access and preservation for born-digital works whose licensing currently prohibits us from doing so. In the meantime, best-practices documents have been developed for orphan works (by UC Berkeley) and fair use for sound recordings (by the NRPP).

ONLINE LICENSING ISSUES

Perennial, and always interesting, sessions are held at MLA on the ongoing problem of musical works and recordings that are issued only online, with licensing that prohibits libraries and archives from acquiring them. An MLA grant proposal aims to develop alternative licensing language that we can use with recording labels, musicians, etc., allowing us to burn a CD of digital-only files. A lively brainstorming session produced additional potential solutions: an internet radio license, which would stream a label’s catalog to students, at the same time generating revenue for the label; placing links to labels in our catalogs, similar to the Google links that many OPACs feature for books, offering a purchase option; raising awareness among musicians, many of whom are unaware of the threat to their legacies, by speaking at music festivals, and asking the musicians themselves to raise public awareness, perhaps even by writing songs on the topic; capturing websites that aggregate music of specific genres, etc., in the Internet Archive or ArchiveIt; collaborating with JSTOR, PORTICO, and similar projects to expand their archiving activities to media.

DIGITAL HUMANITIES

This hot topic has begun to make its impact on the music library community, and MLA has established a new round table devoted to it. In a panel session, music librarians described the various ways they are providing support for, and collaborating with, their institutions’ DH centers. Many libraries are offering their liaisons workshops and other training opportunities to acquire the technical skills needed to engage with DH initiatives.

OTHER TECHNOLOGICAL PROJECTS

In a panel session on new technologies, we heard about several projects:

  • A colleague at the University of Music and Drama in Leipzig, Germany, led a project to add facets in their VuFind-based discovery layer for different types of music scores (study scores, performance scores, parts, etc.).
  • A colleague at Haverford used MEI, an XML encoding scheme designed for musical notation, to develop a GUI interface (which they named MerMEId) to produce a digital edition of a 16th-century French songbook, also reconstructing lost parts. (We’ve been hearing about MEI for some years; it’s nice to see a concrete example of its application.)
  • An iPad app, developed by Touch Press, offers study aids for selected musical works (such as Beethoven’s 9th symphony), allowing you to compare multiple recordings while following along with a modern score or the original manuscript (which scrolls automatically with the audio), watch a visualization tool that shows who’s playing when in the orchestra, and read textual commentary, some in real time with the audio.
  • A consortium is using Amazon’s cloud service to host an instance of Avalon, an audio/video streaming product developed by Indiana U, to support music courses at their member schools.
  • ProMusicDB, a project that aims to build an equivalent of IMDB for pop music.

Steve at ALA Annual 2013 (and RDA Training at Winthrop University)

Friday, July 12, 2013 4:08 pm

Since this is an ALA re-cap from me, you probably know what’s coming: a lot of jabbering about RDA. But wait, this one includes even more jabbering about RDA, because right before leaving for Chicago, I went down to Winthrop University in Rock Hill, South Carolina for two full days of RDA training (I missed the final half day, because I had to fly to Chicago for ALA). The enterprising folks at Winthrop had somehow managed to wrangle an in-person training session taught by Jessalyn Zoom, a cataloger from the Library of Congress who specializes in cataloging training through her work with the PCC (Program for Cooperative Cataloging). In-person training by experts at her level is hard to come by, so Winthrop was very lucky to land her. Leslie and I went to the training, along with Alan Keeley from PCL and Mark McKone from Carpenter. We all agreed that the training was excellent and really deepened our understanding of the practical aspects of RDA cataloging.

The training sessions were so good they got me energized for ALA and the meetings of my two committees, the Continuing Resources Section Cataloging Committee (i.e. serials cataloging) and CC:DA, the Cataloging Committee for Description and Access (the committee that develops ALA’s position on RDA). I’m one of the seven voting members on this committee. (I know in a previous post I wrote that I was one of nine voting members, but I got the number wrong; it’s seven.) CC:DA met for four hours on Saturday afternoon and three hours on Monday morning, so it’s a pretty big time commitment. I also attended the Bibframe Update Forum, the final RDA Update Forum and a session on RDA Implementation Stories. Because so much of the discussion from these various sessions overlapped, I think I’ll break my discussion of these sessions down thematically.

Day-to-Day RDA Stuff

The RDA Implementation Stories session was particularly useful. Erin Stahlberg, formerly of North Carolina State, now of Mount Holyoke, discussed transitioning to RDA at a much smaller institution. She pointed out that acquisitions staff never really knew AACR2, or at least never had any formal training in it. What they knew about cataloging came from on-the-job, local training. Similarly, copy catalogers have generally had no formal training in AACR2 beyond local training materials, which may be of variable quality. With the move to RDA, both acquisitions staff and especially copy catalogers need training. Stahlberg recommended skipping training in cataloging formats that you don’t collect in (for example, if you don’t have much of a map collection, don’t bother with map cataloging training). She recommended that staff consult with co-workers and colleagues: acknowledge that everyone is trying to figure it out at the same time; consult the rules; and don’t feel like you have to know it all immediately. Mistakes can be fixed, so don’t freak out. Also, admit that RDA may not be the most important priority at your library (heaven forbid!). But she also pointed out that training is necessary, and you need to get support from your library administration for training resources. Stahlberg also said that you have to consider how much you want to encourage cataloger’s judgment, and be patient, because catalogers (both professional and paraprofessional) will be wrestling with issues they’ve never had to face before. She encouraged libraries to accept RDA copy, accept AACR2 copy, and learn to live with the ambiguity that comes from living through a code change.

Deborah Fritz of MARC of Quality echoed many of Stahlberg’s points, but she also emphasized that copy cataloging has never been as easy as some folks think it is, and that cataloging through a code change is particularly hard. She pointed out that we have many hybrid records that are coded partly in AACR2 and partly in RDA, and that we should just accept them. Fritz also pointed out that so many RDA records are being produced that small libraries that thought they could avoid RDA implementation now have to get RDA training to understand what’s in the new RDA copy records they are downloading. She also said to “embrace the chaos.”

Related to Fritz’s point about downloading RDA copy, during the RDA Forum, Glenn Patton of OCLC discussed OCLC’s policy on RDA records. OCLC is still accepting AACR2-coded records and is not requiring that all records be in RDA. Their policy is for WorldCat to be a master record database with one record per manifestation (edition) per language, with a preference for RDA records. So, if an AACR2 record is upgraded to RDA, the RDA version becomes the new master record for that edition. As you can imagine, this means the number of AACR2 records will gradually shrink in the OCLC database. There’s no requirement to upgrade an AACR2 record to RDA, but if it happens, great.

Higher Level RDA Stuff

A lot of my time at ALA was devoted to discussions of changes to RDA. In the Continuing Resources Section Cataloging Committee meeting, we discussed the problem of which level the ISSN should be associated with: the Manifestation level or the Expression level (an issue that arises with translations). I realize that this may sound like the cataloging equivalent of debating how many angels can dance on the head of a pin (if it doesn’t sound like flat-out gibberish), but trust me, there are actual discovery and access implications. In fact, I was very struck during this meeting and in both of my CC:DA meetings by the passion for helping patrons displayed by my fellow catalogers. I think a number of non-cataloging librarians suspect that catalogers prefer to develop arcane, impenetrable systems that only they can navigate, but I saw the exact opposite in these meetings. What I saw were people who were dedicated to helping patrons meet the four user tasks outlined by the FRBR principles (find, identify, select and obtain resources), and who even cited these principles in their arguments. The fact that they had disagreements over the best ways to help users meet these needs led to some fairly passionate arguments. One proposal that we approved in the CC:DA meetings that is pretty easy to explain is a change to the cataloging rules for treaties. RDA used to (well, still does, until the change is implemented) require catalogers to create an access point, or heading, for the participant country that comes first alphabetically. So, the catalog records for a lot of treaties have an access point for Afghanistan or Albania, just because they come first alphabetically, even if it’s a UN treaty with 80 or 90 participant countries and Afghanistan or Albania aren’t major players in the treaty.
The new rules we approved will require creating an access point for the preferred title of the treaty, with the option of adding an access point for any country you want to note (like if you would want to have an access point for the United States for every treaty we participate in). That’s just a taste of the kinds of rule changes we discussed, I’ll spare you the others, although I’d be happy to talk about them with you, if you’re interested.

One other high level RDA thing I learned that I think is worth sharing had to do with Library of Congress’s approach to the authority file. RDA has different rules for formulating authorized headings, so Library of Congress used programmatic methods to make changes to a fair number of their authority records. Last August, 436,000 authority records were changed automatically during phase 1 of their project, and in April of this year, another 371,000 records were changed in phase 2. To belabor the obvious, that’s a lot of changed authority records.

BIBFRAME

BIBFRAME is the name of a project to develop a new encoding format to replace MARC. Many non-catalogers confuse and conflate AACR2 (or RDA) and MARC. They are very different. RDA and AACR2 are content standards that tell you what data you need to record. MARC is an encoding standard that tells you where to put the data so the computer can read it. It’s rather like accounting (which admittedly, I know nothing about, but I looked up some stuff to help this metaphor). You can do accounting with the cash basis method or the accrual basis method. Those methods tell you what numbers you need to record and keep track of. But you can record those numbers in an Excel spreadsheet or a paper ledger or Quicken or whatever. RDA and AACR2 are like accounting methods and MARC is like an Excel spreadsheet.
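To make the content-standard vs. encoding-standard split concrete, here is a minimal sketch in Python. The record and the element names are invented for illustration; the MARC tags shown (100 for a personal name, 245 for a title) are real, but this is nothing close to a full MARC implementation. The point is just that the content standard decides *what* to record, and the encoding standard decides *where* the machine stores it.

```python
# "Content standard" (RDA/AACR2): WHAT to record about the resource.
# These element names are simplified stand-ins, not actual RDA labels.
description = {
    "title_proper": "Moby Dick",
    "creator": "Melville, Herman",
    "publication_date": "1851",
}

# "Encoding standard" (MARC-like): WHERE to put each piece of data
# so software can read it back out. Same content, different layer.
def to_marc_like(desc):
    fields = [
        ("100", f"$a {desc['creator']}"),          # personal name
        ("245", f"$a {desc['title_proper']}"),     # title statement
        ("264", f"$c {desc['publication_date']}"), # publication date
    ]
    return [f"{tag} {body}" for tag, body in fields]

for line in to_marc_like(description):
    print(line)
```

You could serialize that same `description` dict as JSON or XML instead; the descriptive decisions (the "accounting method") wouldn’t change, only the container (the "spreadsheet").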

Anyway, BIBFRAME is needed because, with RDA, we want to record data that is just too hard to fit anywhere in the MARC record. Chris Oliver elaborated a great metaphor to explain why BIBFRAME is needed. She compared RDA to TGV trains in France. These trains are very fast, but they need the right track to run at peak speeds. TGV trains will run on old-fashioned standard track, but they’ll run at regular speeds. RDA is like the TGV train. MARC is like standard track, and BIBFRAME is like the specialized TGV-compatible track. However, BIBFRAME is not being designed simply for RDA. BIBFRAME is expected to be content-standard agnostic, just as RDA is encoding standard-agnostic (go back to my accounting metaphor, you can do cash basis accounting in Excel or a paper ledger, or do accrual basis in Excel or a paper ledger).

BIBFRAME is still a long way away. Beacher Wiggins of the Library of Congress gave a rough guess of the transition to BIBFRAME taking 2 to 5 years, but, from what I’ve seen, it’ll take even longer than that. Eric Miller of Zepheira, one of the key players in the development of BIBFRAME, said that it is still very much a work in progress and is very draft-y.

If anyone would like to get together and discuss RDA or BIBFRAME or any of these issues, just let me know, I’d be happy to gab about it. Conversely, if anyone would like to avoid hearing me talk about this stuff, I can be bribed to shut up about it.

Leslie at MLA 2013

Wednesday, March 6, 2013 7:48 pm

A welcome escape from the usual wintry rigors of traveling to a Music Library Association conference — mid-February this year found us in San Jose, soaking up sun, balmy breezes, and temps in the 70s. (Colleagues battered by the Midwest blizzards were especially appreciative.)

THE FUTURE OF SUBJECT COLLECTIONS
This was the title of a plenary session which yielded a number of high-level insights. For one, it was the first time I had heard the term “disintermediation” to describe the phenomenon of librarians being displaced by Google et al. as the first place people go for information.

Henriette Hemmasi of Brown U analogized the MOOCs trend as “Diva to DJ”: that is, the role of the instructor is shifting from lone classroom diva to the collaborative role played by a disc jockey — selecting and presenting material for team-produced courses, working with experts in web development, video, etc. Her conclusion: 21st-century competencies must include not just knowledge, but also synthesizing and systems-thinking skills.

David Fenske, one of the founding developers of Indiana’s iSchool, noted that the rapid evolution of technology has rendered it impossible to make projections more than 5 or 10 years out (his reply to a boss who asked for a 20-year vision statement: “A 20-year vision can’t be done without drugs!”). He also observed that digital preservation is in many ways more difficult than the traditional kind: the scientific community is beginning to lose the ability to replicate experiments, because in many cases the raw data has been lost to obsolete digital storage media. Fenske envisions the “library as socio-technical system” — a system based on user demographics, designed around “communities of thought leaders” as well as experts. Tech-services people have long mooted the concept of “good-enough” cataloging, in the face of overwhelming publication output; public-services librarians, in Fenske’s view, should start talking about the “good-enough” answer. Fenske wants to look “beyond metadata”: how can we leverage our metadata for analytics? semantic tools? How can we scale our answers and services to compete with Google, Amazon, and others?

PERFORMERS’ NEEDS
Some interesting findings from two studies on the library needs of performing faculty and students (as opposed to musicologists and other researchers in the historical/theoretical branches of the discipline):

One study addressed the pros and cons of e-scores. Performers, always on the go and pressed for time, like e-scores for their instant availability and sharability; the fact that they’re quick and easy to print out; their portability (no more cramming a paper score into an instrument case for travel); and easy page turns during performance (a pedal mechanism has been devised for this). Performers also like an e-score that can be annotated (i.e., not a PDF file) so they can insert their notes for performance, and the ability to get a lot of works quickly from one place (as from an online aggregator). On the other hand, academic users, who work with scholarly and critical editions, like the ability of the online versions to seamlessly integrate critical commentary with the musical text (print editions traditionally place the commentary in separate supplementary volumes). Third-party software can also be deployed to manipulate the musical text for analysis. But the limitations of the computer screen continue to pose viewability problems for purposes of analysis. Academic users regard e-scores as a complement to, not a replacement for, print scores.

Another study interviewed performing faculty to find out how they use their library’s online catalog. Typically, they come to the library wanting to find known items, use an advanced-search mode, and search by author, title, and opus number (the latter not very effectively handled by many discovery layers; VuFind does a reasonably good job). Performing faculty often are also looking for specific editions and/or publishers (aspects that many discovery interfaces don’t offer as search limits/facets). Performing faculty (and students) study a work by using a score to follow along with a sound recording, so come to the library hoping to obtain multiple formats for the same work — icons or other aids for quickly identifying physical format are important to them, as for film users and others. There is also a lot of descriptive detail that performers need to see in a catalog display: contents, duration, performers’ names.

Stuff a lot of music librarians have observed or suspected, but good to see it quantified and confirmed in some formal studies.

COLLABORATIVE COLLECTION DEVELOPMENT
This is a topic that has generated much interest in the library community, and music librarians have also been exploring collaborative options for acquiring the specialized materials of their field. Besides shared approval-plan profiles for books, and shared database subscriptions, music librarians have divvied up the collecting of composers’ collected editions, and contemporary composers whose works they want to collect comprehensively. Because music materials are often acquired and housed in multiple locations on the same campus, internal collaboration is as important as external. One thing that does not seem to lend itself to collaborative collection: media (sound recordings and videos). Many libraries don’t lend these out via ILL, and faculty tend to want specific performances — making on-request firm orders a more suitable solution. One consortium of small Maine colleges (Colby, Bates, and Bowdoin) divided the processing labor of their staffs by setting up rotating shipments for their shared approval plan: one library gets this month’s shipment of books, another library receives the next month’s shipment, and so on.

DDA
There was a good bit of discussion concerning demand-driven e-book acquisitions among colleagues whose institutions had recently implemented DDA services. On two separate occasions, attendees raised the question of DDA’s impact on the humanities, given those disciplines’ traditional reliance on browsing the stacks as a discovery method.

RDA
It was a very busy conference for music catalogers, as over a hundred of us convened to get prepared for RDA. There was a full-day workshop; a cataloging “hot topics” session; a town-hall meeting with the Bibliographic Control Committee, which recently produced an “RDA Best Practices for Cataloging Music” document; and a plenary session on RDA’s impact across library services (the latter reprising a lot of material covered by Steve and others in ZSR presentations; stay tuned for more!).

SIDELIGHTS
A very special experience was a visit to the Ira F. Brilliant Center for Beethoven Studies (located on the San Jose State campus), the largest collection of Beethoveniana outside Europe. During a reception there, we got to play pianos dating from Beethoven’s time. Hearing the “Moonlight Sonata” up close on the model of instrument he wrote it for (Dulcken, a Flemish maker) was a true revelation.

Steve at ALA Midwinter 2013

Friday, February 8, 2013 2:10 pm

Although my trip to Seattle for the ALA Midwinter Conference had a rough start (flight delayed due to weather, nearly missed a connecting flight, my luggage didn’t arrive until a day later), I had a really good, productive experience. This Midwinter was heavy on committee work for me, and I was very focused on RDA, authority control and linked data. If you want a simple takeaway from this post, it’s that RDA, authority control and linked data are all tightly bound together and are important for the future of the catalog. If you want more detail, keep reading.
My biggest commitment at the conference was participating in two long meetings (over four hours on Saturday afternoon and three hours on Monday morning) of CC:DA (Cataloging Committee: Description and Access). I’m one of nine voting members of CC:DA, which is the committee responsible for developing ALA’s position on RDA. The final authority for making changes and additions to RDA is the JSC (Joint Steering Committee), which has representation from a number of cataloging constituencies, including ALA, the national library organizations of Canada, the UK, and Australia, as well as other organizations. ALA’s position on proposals brought to the JSC is voted on by CC:DA. Membership on this committee involves reading and evaluating a large number of proposals from a range of library constituencies. Much of the work of the committee has so far involved reviewing proposals regarding how to form headings in bibliographic records, which is, essentially, authority control work. We’ve also worked on proposals to make the rules consistent throughout RDA, to clarify the wording of rules, and to make sure that the rules fit with the basic principles of RDA. It has been fascinating to see how interconnected the various cataloging communities are, and how they relate to ALA and CC:DA. As I said, I am one of nine voting members of the committee, but there are about two dozen non-voting representatives from a variety of committees and organizations, including the Music Library Association, the Program for Cooperative Cataloging, and the Continuing Resources Cataloging Committee of ALCTS.
During our Monday meeting, we saw a presentation by Deborah Fritz of the company MARC of Quality of a visualization tool called RIMMF, RDA In Many Metadata Formats. RIMMF shows how bibliographic data might be displayed when RDA is fully implemented. The tool is designed to take RDA data out of MARC, because it is hard to think of how data might relate in RDA without the restrictions of MARC. RIMMF shows how the FRBR concepts of work, expression and manifestation (which are part of RDA) might be displayed by a public catalog interface. It’s still somewhat crude, but it gave me a clearer idea of the kinds of displays we might develop, as well as a better grasp on the eventual benefits to the user that will come from all our hard work of converting the cataloging world to RDA. RIMMF is free to download and we’re planning to play around with it some here in Resource Services.
I also attended my first meeting of another committee of which I am a member: the Continuing Resources Cataloging Committee of the Continuing Resources Section of ALCTS. Continuing resources include serials and web pages, so CRS is the successor to the old Serials Section. We discussed the program that we had arranged for that afternoon on the possibilities of using linked data to record serial holdings. Unfortunately, I had to miss the program due to another meeting, but I’m looking forward to seeing the recording. We also brainstormed ideas for our program at Annual in Chicago, and the committee’s representative to the PCC Standing Committee on Training gave us an update on RDA training initiatives.
The most interesting other meeting that I attended was the Bibframe Update Forum. Bibframe is the name for an initiative to try to develop a data exchange format to replace the MARC format(s). The Bibframe initiative hopes to develop a format that can make library data into linked data, that is, data that can be exchanged on the semantic web. Eric Miller, from the company Zepheira (which is one of the players in the development of Bibframe), explained that the semantic web is about linking data, not just documents (as a metaphor, think about old PDF files that could not be searched, but were flat documents. The only unit you could search for was the entire document, not the meaningful pieces of content in the document). The idea is to create recombinant data, that is, small blocks of data that can be linked together. The basic architecture of the old web leaned toward linking various full documents, rather than breaking down the statements into meaningful units that could be related to each other. The semantic web emphasizes the relationships between pieces of data. Bibframe hopes to make it possible to record the relationships between pieces of data in bibliographic records and to expose library data on the Web and make it sharable. At the forum, Beacher Wiggins told the audience about the six institutions who are experimenting with the earliest version of Bibframe, which are the British Library, the German National Library, George Washington University, the National Library of Medicine, OCLC, and Princeton University. Reinhold Heuvelmann of the German National Library said that the model is defined on a high level, but that it needs to have more detail developed to allow for recording more granular data, which is absolutely necessary for fully recording the data required by RDA. 
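The “recombinant data” idea is easier to see with a small sketch. Below, the identifiers and predicate names are invented for illustration (real linked data would use URIs and a vocabulary like BIBFRAME’s), but the shape is the point: instead of one self-contained record, the description becomes small subject-predicate-object statements that anything else can link to.

```python
# Each statement is a (subject, predicate, object) triple.
# Identifiers and predicates here are made up for illustration.
triples = [
    ("work:moby-dick",  "hasTitle",   "Moby Dick"),
    ("work:moby-dick",  "hasCreator", "person:melville"),
    ("person:melville", "hasName",    "Melville, Herman"),
]

# Because each statement stands on its own, data about the author can be
# gathered (or supplied by a different source) independently of any one
# bibliographic record.
def statements_about(subject, data):
    return [(p, o) for s, p, o in data if s == subject]

print(statements_about("person:melville", triples))
print(statements_about("work:moby-dick", triples))
```

In a flat MARC record, the author’s name would be locked inside that one record; here it hangs off a node (`person:melville`) that any number of works, or any other dataset on the web, can point at.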
Ted Fons of OCLC spoke of how Bibframe is an attempt to develop a format that can carry the data libraries need and to allow library data to interact with other data and with the wider web. Fons said that Bibframe data has identifiers that are URIs, which can be web accessible. He also said that Bibframe renders bibliographic data as statements that are related to each other, rather than as self-contained records, as with MARC. Bibframe breaks free of the constraints of MARC, which basically rendered data as catalog cards in electronic format. Bibframe is still going through quite a bit of development, but it is moving quickly. Sally McCallum of the MARC Standards Office said that they hope to finalize aspects of the Bibframe framework by 2014, but acknowledged that, “The change is colossal and the unexpected will happen.”
Actually, I think that’s a good way to summarize my thoughts on the current state of the cataloging world after attending this year’s Midwinter, “The change is colossal and the unexpected will happen.”

Steve at NASIG 2012

Thursday, June 14, 2012 5:03 pm

Last Thursday, Chris, Derrik and I hopped in the library van and drove to Nashville for the NASIG Conference, returning on Sunday. It was a busy and informative conference, full of lots of information on serials and subscriptions. I will cover a few of the interesting sessions I attended in this post.
One such session was called “Everyone’s a Player: Creation of Standards in a Fast-Paced Shared World,” which discussed the work of NISO and the development of new standards and “best practices.” Marshall Breeding discussed the ongoing development of the Open Discovery Initiative (ODI), a project that seeks to identify the requirements of web-scale discovery tools, such as Summon. Breeding pointed out that it makes no sense for libraries to spend millions of dollars on subscriptions, if nobody can find anything. So, in this context, it makes sense for libraries to spend tens of thousands on discovery tools. But, since these tools are still so new, there are no standards for how these tools should function and operate with each other. ODI plans to develop a set of best practices for web-scale discovery tools, and is beginning this process by developing a standard vocabulary as well as a standard way to format and transfer data. The project is still in its earliest phases and will have its first work available for review this fall. Also at this session, Regina Reynolds from the Library of Congress discussed her work with the PIE-J initiative, which has developed a draft set of best practices that is ready for comment. PIE-J stands for the Presentation & Identification of E-Journals, and is a set of best practices that gives guidance to publishers on how to present title changes, issue numbering, dates, ISSN information, publishing statements, etc. on their e-journal websites. Currently, it’s pretty much the Wild West out there, with publishers following unique and puzzling practices. PIE-J hopes to help clean up the mess.
Another session that was quite useful was on “CONSER Serials RDA Workflow,” where Les Hawkins, Valerie Bross and Hien Nguyen from the Library of Congress discussed the development of RDA training materials at the Library of Congress, including CONSER serials cataloging materials and general RDA training materials from the PCC (Program for Cooperative Cataloging). I haven’t had a chance yet to root around on the Library of Congress website, but these materials are available for free, and include a multi-part course called “Essentials for Effective RDA Learning” that includes 27 hours (yikes!) of instruction on RDA: a 9-hour training block on FRBR, a 3-hour block on the RDA Toolkit, and 15 hours on authority and description in RDA. This is for general cataloging, not specific to serials. Also, because LC is working to develop a replacement for the MARC formats, there is a visualization tool called RIMMF available at marcofquality.com that allows for creating visual representations of records and record-relationships in a post-MARC record environment. It sounds promising, but I haven’t had a chance to play with it yet. Also, the CONSER training program, which focuses on serials cataloging, is developing a “bridge” training plan to transition serials catalogers from AACR2 to RDA, which will be available this fall.
Another interesting session I attended was “Automated Metadata Creation: Possibilities and Pitfalls” by Wilhelmina Randtke of Florida State University Law Research Center. She pointed out that computers like black and white decisions and are bad with discretion, while creating metadata is all about identifying and noting important information. Randtke said computers love keywords but are not good with “aboutness” or subjects. So, in her project, she tried to develop a method to use computers to generate metadata for graduate theses. Some of the computer talk got very technical and confusing for me, but her discussion of subject analysis was fascinating. Using certain computer programs for automated indexing, Randtke did a data scrape of the digitally-encoded theses and identified recurring keywords. This keyword data was run through ontologies/thesauruses to identify more accurate subject headings, which were applied to the records. A person needs to select the appropriate ontology/thesaurus for the item(s) and review the results, but the basic subject analysis can be performed by the computer. Randtke found that the results were cheap and fast, but incomplete. She said, “It’s better than a shuffled pile of 30,000 pages. But, it’s not as good as an organized pile of 30,000 pages.” So, her work showed some promise, but still needs some work.
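Randtke’s pipeline, as I understood it, can be sketched in a few lines of Python. Everything below is a simplified illustration: the mini-thesaurus, the sample text, and the headings are invented, and a real project would use proper full-text extraction, a full ontology or thesaurus, and human review of the results.

```python
from collections import Counter

# Hypothetical keyword -> controlled-heading mapping standing in for a
# real ontology/thesaurus, which a person would select for the collection.
THESAURUS = {
    "copyright": "Copyright -- United States",
    "trademark": "Trademarks -- Law and legislation",
}

def suggest_headings(full_text, top_n=5):
    """Scrape recurring keywords from the text, then run them through
    the thesaurus to suggest more accurate subject headings."""
    words = [w.strip(".,;:").lower() for w in full_text.split()]
    keywords = [w for w, _ in Counter(words).most_common(top_n)]
    return sorted({THESAURUS[k] for k in keywords if k in THESAURUS})

text = "Copyright law and copyright registration for digital works"
print(suggest_headings(text))
```

Even this toy version shows where the human stays in the loop: the computer is fine at counting keywords, but choosing the right thesaurus and judging the “aboutness” of the results remains a person’s job.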
Of course there were a number of other interesting presentations, but I have to leave something for Chris and Derrik to write about. One idea that particularly struck me came from Rick Anderson during his thought-provoking all-conference vision session on the final day: “To bring simplicity to our patrons means taking on an enormous level of complexity for us.” That basic idea has been something of an obsession of mine for the last few months while wrestling with authority control and RDA and considering the semantic web. To make our materials easily discoverable by the non-expert (and even the expert) user, we have to make sure our data is rigorously structured, and that requires a lot of work. It’s almost as if there’s a certain quantity of work that has to be done to find stuff, and we either push it off onto the patron or take it on ourselves. I’m in favor of taking it on ourselves.
The slides for all of the conference presentations are available here: http://www.slideshare.net/NASIG/tag/nasig2012 for anyone who is interested. You do not need to be a member of NASIG to check them out.

Steve at 2012 ALA Midwinter

Tuesday, January 31, 2012 6:35 pm

So, if you read nothing else in my post about ALA Midwinter, please take away this fact: RDA is coming. At several sessions, representatives from the Library of Congress indicated that LC is moving forward with plans to adopt RDA early in 2013. When LC adopts RDA, the other libraries in the US will fall in line behind them, so it’s time to start preparing.

On Saturday, January 21, I attended a meeting of the Copy Cataloging Interest Group, where I heard Barbara Tillett, the Chief of the LC Policy and Standards Division, speak about how LC is training their copy catalogers in RDA with an eye toward a 2013 implementation. She said much of the copy cataloger training material is focused on teaching when it is appropriate to change an AACR2 record to an RDA record, and when it is appropriate to change a master record in OCLC. LC has developed a set of RDA data elements that should always be included in their records, which they call “LC core.” Tillett said that LC will adopt RDA no sooner than January 2013, contingent upon continued progress on the recommendations the national libraries made this spring regarding changes to RDA. In November 2011, LC returned most of the catalogers who participated in the RDA test (which wrapped up at the end of 2010) to cataloging with RDA, so that these catalogers could work on training, documentation, and further development of the RDA code itself. LC is making its work on RDA, including its copy cataloger training materials, available on its website ( http://www.loc.gov/aba/rda ). The Library of Congress has begun releasing “LC Policy Statements” that explain LC interpretations of RDA rules, and which replace the old LC Rule Interpretations that explained LC decisions on AACR2 rules. The Policy Statements are available for free with the RDA Toolkit. Regarding the ongoing development of RDA, Tillett said that there will be monthly minor corrections to RDA (typos and such), with more substantive major updates issued twice per year. Tillett also spoke of the Bibliographic Framework Transition Initiative, which is working to develop a metadata schema to replace the MARC formats. This group released a background statement and general plan in November 2011, and is in the process of developing a funding proposal and of forming an advisory group with various players in the library metadata field.

On Sunday, January 22, I attended a meeting of the RDA Update Forum, and Beacher Wiggins of LC reaffirmed much of what Barbara Tillett said, but he stated more forcefully that the Library of Congress and the other national libraries are really intent on implementing RDA in 2013. However, he allowed for a little more flexibility in his timeline. He placed the date for RDA implementation in the first quarter of 2013, so anything from January 2 to March 31. Wiggins said that many of his colleagues are pushing for a January 2 date, but he said that, taking into account how deadlines can slip, he would be happy with March 31. Nevertheless, the message was clear, RDA is coming.

Also at the RDA Update Forum, I heard Linda Barnhart from the Program for Cooperative Cataloging, who spoke about how the PCC is preparing for the implementation of RDA (she said the key decisions of the PCC can be found at http://www.loc.gov/catdir/pcc ). The PCC is busily developing materials related to the RDA implementation. They have developed a set of Post-RDA Test Guidelines as well as an RDA FAQ. They have been working on guidelines for what they are calling a “Day One” for RDA authority records, which will be a day (probably after LC adopts RDA) from which all new LC authority records will be created according to RDA rules instead of AACR2 rules. PCC also has a Task Group on Hybrid Bibliographic Records, which has prepared guidelines for harmonizing RDA bib records with pre-RDA bib records. I know I’m sounding like a broken record here, but with all of this infrastructure being built up, make no mistake: RDA is coming.

On to other topics: I also attended an interesting session of the Next Generation Catalog Interest Group, where I heard Jane Burke of Serials Solutions speak about a new product they are developing that is designed to replace the back-end ILS. Burke said that Serials Solutions is looking to separate the discovery aspect of catalogs from their management aspect. Summon, as we already know, is their discovery solution, designed to allow for a single search with a unified result set. Serials Solutions is working to develop a web-scale management solution which they are calling Intota. Intota is an example of “software as a service” (Burke recommended looking the term up in Wikipedia, which I did). Burke argued that the old ILS model was riddled with redundancy, with every library cataloging the same things and everybody doing duplicate data entry (from suppliers to the ILS to campus systems). Intota would be a cloud-based service that would provide linked data and networked authority control (changes to LC authority headings would be propagated to all member libraries, without the need to make local changes). It seems like an interesting model, and I look forward to hearing more about it.

I attended a number of other meetings that will be of limited interest to a general audience, but one highlight was attending my first meeting as a member of the Editorial Board of Serials Review. After almost 20 years of working with serials, it was fascinating to be on the other side of the process. We discussed the journal's move from Chicago to APA style, a new formatting guide for articles, future article topics, submission patterns, and more.

As usual when I go to ALA, I saw several former ZSRers. I roomed with Jim Galbraith, who is still at DePaul University in Chicago. I also visited with Jennifer Roper and Emily Stambaugh, both of whom are expecting baby boys in May (small world!).

