Professional Development

In the 'RDA/FRBR' Category...

Steve at ALA Annual 2013 (and RDA Training at Winthrop University)

Friday, July 12, 2013 4:08 pm

Since this is an ALA re-cap from me, you probably know what’s coming: a lot of jabbering about RDA. But wait, this one includes even more jabbering about RDA, because right before leaving for Chicago, I went down to Winthrop University in Rock Hill, South Carolina for two full days of RDA training (I missed the final half day, because I had to fly to Chicago for ALA). The enterprising folks at Winthrop had somehow managed to wrangle an in-person training session taught by Jessalyn Zoom, a cataloger from the Library of Congress who specializes in cataloging training through her work with the PCC (Program for Cooperative Cataloging). In-person training by experts at her level is hard to come by, so Winthrop was very lucky to land her. Leslie and I went to the training, along with Alan Keeley from PCL and Mark McKone from Carpenter. We all agreed that the training was excellent and really deepened our understanding of the practical aspects of RDA cataloging.

The training sessions were so good they got me energized for ALA and the meetings of my two committees, the Continuing Resources Section Cataloging Committee (i.e. serials cataloging) and CC:DA, the Cataloging Committee for Description and Access (the committee that develops ALA’s position on RDA). I’m one of the seven voting members on this committee (I know in a previous post I wrote that I was one of nine voting members, but I got the number wrong; it’s seven). CC:DA met for four hours on Saturday afternoon and three hours on Monday morning, so it’s a pretty big time commitment. I also attended the Bibframe Update Forum, the final RDA Update Forum and a session on RDA Implementation Stories. Because so much of the discussion from these various sessions overlapped, I think I’ll break my discussion of these sessions down thematically.

Day-to-Day RDA Stuff

The RDA Implementation Stories session was particularly useful. Erin Stahlberg, formerly of North Carolina State, now of Mount Holyoke, discussed transitioning to RDA at a much smaller institution. She pointed out that acquisitions staff never really knew AACR2, or at least, never really had any formal training in AACR2. What they knew about cataloging came from on-the-job, local training. Similarly, copy catalogers have generally had no formal training in AACR2, beyond local training materials, which may be of variable quality. With the move to RDA, both acquisitions staff and especially copy catalogers need training. Stahlberg recommended skipping training in cataloging formats that you don’t collect in (for example, if you don’t have much of a map collection, don’t bother with map cataloging training). She recommended that staff consult with co-workers and colleagues. Acknowledge that everyone is trying to figure it out at the same time. Consult the rules, and don’t feel like you have to know it all immediately. Mistakes can be fixed, so don’t freak out. Also, admit that RDA may not be the most important priority at your library (heaven forbid!). But she also pointed out that training is necessary, and you need to get support from your library Administration for training resources. Stahlberg also said that you have to consider how much you want to encourage cataloger’s judgment, and be patient, because catalogers (both professional and paraprofessional) will be wrestling with issues they’ve never had to face before. She encouraged libraries to accept RDA copy, accept AACR2 copy, and learn to live with the ambiguity that comes from living through a code change.

Deborah Fritz of MARC of Quality echoed many of Stahlberg’s points, but she also emphasized that copy cataloging has never been as easy as some folks think it is, and that cataloging through a code change is particularly hard. She pointed out that we have many hybrid records that are coded partly in AACR2 and partly in RDA, and that we should just accept them. Fritz also pointed out that so many RDA records are being produced that small libraries that thought they could avoid RDA implementation now have to get RDA training to understand what’s in the new RDA copy records they are downloading. She also said to “embrace the chaos.”

Related to Fritz’s point about downloading RDA copy, during the RDA Forum, Glenn Patton of OCLC discussed OCLC’s policy on RDA records. OCLC is still accepting AACR2 coded records and is not requiring that all records be in RDA. Their policy is for WorldCat to be a master record database with one record per manifestation (edition) per language. The preference will be for an RDA record. So, if an AACR2 record is upgraded to RDA, that will be the new master record for that edition. As you can imagine, this will mean that the number of AACR2 records will gradually shrink in the OCLC database. There’s no requirement to upgrade an AACR2 record to RDA, but if it happens, great.

Higher Level RDA Stuff

A lot of my time at ALA was devoted to discussions of changes to RDA. In the Continuing Resources Section Cataloging Committee meeting, we discussed the question of which level the ISSN should be associated with: the Manifestation level or the Expression level (for translations). I realize that this may sound like the cataloging equivalent of debating how many angels can dance on the head of a pin (if it doesn’t sound like flat-out gibberish), but trust me, there are actual discovery and access implications. In fact, I was very struck during this meeting and in both of my CC:DA meetings with the passion for helping patrons that was displayed by my fellow catalogers. I think a number of non-cataloging librarians suspect that catalogers prefer to develop arcane, impenetrable systems that only they can navigate, but I saw the exact opposite in these meetings. What I saw were people who were dedicated to helping patrons meet the four user tasks outlined by the FRBR principles (find, identify, select and obtain resources), and who even cited these principles in their arguments. The fact that they had disagreements over the best ways to help users meet these needs led to some fairly passionate arguments. One proposal that we approved in the CC:DA meetings that is pretty easy to explain is a change to the cataloging rules for treaties. RDA used to (well still does until the change is implemented) require catalogers to create an access point, or heading, for the country that comes first alphabetically that is a participant in a treaty. So, the catalog records for a lot of treaties have an access point for Afghanistan or Albania, just because they come first alphabetically, even if it’s a UN treaty that has 80 or 90 participant countries and Afghanistan or Albania aren’t major players in the treaty.
The new rules we approved will require creating an access point for the preferred title of the treaty, with the option of adding an access point for any country you want to note (like if you would want to have an access point for the United States for every treaty we participate in). That’s just a taste of the kinds of rule changes we discussed, I’ll spare you the others, although I’d be happy to talk about them with you, if you’re interested.

One other high level RDA thing I learned that I think is worth sharing had to do with Library of Congress’s approach to the authority file. RDA has different rules for formulating authorized headings, so Library of Congress used programmatic methods to make changes to a fair number of their authority records. Last August, 436,000 authority records were changed automatically during phase 1 of their project, and in April of this year, another 371,000 records were changed in phase 2. To belabor the obvious, that’s a lot of changed authority records.

BIBFRAME

BIBFRAME is the name of a project to develop a new encoding format to replace MARC. Many non-catalogers confuse and conflate AACR2 (or RDA) and MARC. They are very different. RDA and AACR2 are content standards that tell you what data you need to record. MARC is an encoding standard that tells you where to put the data so the computer can read it. It’s rather like accounting (which admittedly, I know nothing about, but I looked up some stuff to help this metaphor). You can do accounting with the cash basis method or the accrual basis method. Those methods tell you what numbers you need to record and keep track of. But you can record those numbers in an Excel spreadsheet or a paper ledger or Quicken or whatever. RDA and AACR2 are like accounting methods and MARC is like an Excel spreadsheet.
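To push the metaphor one step further into code, here’s a toy sketch (entirely my own invention; the field tag is real MARC 245 but the record is simplified and made up) of the same descriptive content serialized two different ways. The content standard governs what goes in the dictionary; the encoding standard governs which serialization you write it out in:

```python
import json

# The "content": what RDA or AACR2 tell you to record about a resource.
description = {
    "title": "Moby Dick",
    "statement_of_responsibility": "Herman Melville",
    "publication_year": "1851",
}

# One possible encoding: a MARC-style 245 title field with subfield delimiters.
marc_245 = f"245 10 $a {description['title']} / $c {description['statement_of_responsibility']}."

# Another possible encoding of the very same content: plain key/value pairs.
as_json = json.dumps(description, indent=2)

print(marc_245)
print(as_json)
```

Either way, the recorded facts are identical; only the container changes, which is exactly the accounting-method vs spreadsheet distinction.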

Anyway, BIBFRAME is needed because, with RDA, we want to record data that is just too hard to fit anywhere in the MARC record. Chris Oliver elaborated a great metaphor to explain why BIBFRAME is needed. She compared RDA to TGV trains in France. These trains are very fast, but they need the right track to run at peak speeds. TGV trains will run on old-fashioned standard track, but they’ll run at regular speeds. RDA is like the TGV train. MARC is like standard track, and BIBFRAME is like the specialized TGV-compatible track. However, BIBFRAME is not being designed simply for RDA. BIBFRAME is expected to be content-standard agnostic, just as RDA is encoding standard-agnostic (go back to my accounting metaphor, you can do cash basis accounting in Excel or a paper ledger, or do accrual basis in Excel or a paper ledger).

BIBFRAME is still a long way away. Beacher Wiggins of the Library of Congress gave a rough guess of the transition to BIBFRAME taking 2 to 5 years, but, from what I’ve seen, it’ll take even longer than that. Eric Miller of Zepheira, one of the key players in the development of BIBFRAME, said that it is still very much a work-in-progress and is very draft-y.

If anyone would like to get together and discuss RDA or BIBFRAME or any of these issues, just let me know, I’d be happy to gab about it. Conversely, if anyone would like to avoid hearing me talk about this stuff, I can be bribed to shut up about it.

Leslie at MLA 2013

Wednesday, March 6, 2013 7:48 pm

A welcome escape from the usual wintry rigors of traveling to a Music Library Association conference — mid-February this year found us in San Jose, soaking up sun, balmy breezes, and temps in the 70s. (Colleagues battered by the Midwest blizzards were especially appreciative.)

THE FUTURE OF SUBJECT COLLECTIONS
This was the title of a plenary session which yielded a number of high-level insights. For one, it was the first time I had heard the term “disintermediation” to describe the phenomenon of librarians being displaced by Google et al. as the first place people go for information.

Henriette Hemmasi of Brown U analogized the MOOCs trend as “Diva to DJ”: that is, the role of the instructor is shifting from lone classroom diva to the collaborative role played by a disc jockey — selecting and presenting material for team-produced courses, working with experts in web development, video, etc. Her conclusion: 21st-century competencies must include not just knowledge, but also synthesizing and systems-thinking skills.

David Fenske, one of the founding developers of Indiana’s iSchool, noted that the rapid evolution of technology has rendered it impossible to make projections more than 5 or 10 years out (his reply to a boss who asked for a 20-year vision statement: “A 20-year vision can’t be done without drugs!”). He also observed that digital preservation is in many ways more difficult than the traditional kind: the scientific community is beginning to lose the ability to replicate experiments, because in many cases the raw data has been lost due to obsolete digital storage media. Fenske envisions the “library as socio-technical system” — a system based on user demographics, designed around “communities of thought leaders” as well as experts. Tech-services people have long mooted the concept of “good-enough” cataloging, in the face of overwhelming publication output; public-services librarians, in Fenske’s view, should start talking about the “good-enough” answer. Fenske wants to look “beyond metadata”: how can we leverage our metadata for analytics? semantic tools? How can we scale our answers and services to compete with Google, Amazon, and others?

PERFORMERS’ NEEDS
Some interesting findings from two studies on the library needs of performing faculty and students (as opposed to musicologists and other researchers in the historical/theoretical branches of the discipline):

One study addressed the pros and cons of e-scores. Performers, always on the go and pressed for time, like e-scores for their instant availability and sharability; the fact that they’re quick and easy to print out; their portability (no more cramming a paper score into an instrument case for travel); easy page turns during performance (a pedal mechanism has been devised for this). Performers also like an e-score that can be annotated (i.e., not a PDF file) so they can insert their notes for performance; and the ability to get a lot of works quickly from one place (as from an online aggregator). On the other hand, academic users, who work with scholarly and critical editions, like the ability of the online versions to seamlessly integrate critical commentary with the musical text (print editions traditionally place the commentary in separate supplementary volumes). Third-party software can also be deployed to manipulate the musical text for analysis. But the limitations of the computer screen continue to pose viewability problems for purposes of analysis. Academic users regard e-scores as a complement to, not an alternative to, print scores.

Another study interviewed performing faculty to find out how they use their library’s online catalog. Typically, they come to the library wanting to find known items, use an advanced-search mode, and search by author, title, and opus number (the latter not very effectively handled by many discovery layers; VuFind does a reasonably good job). Performing faculty often are also looking for specific editions and/or publishers (aspects that many discovery interfaces don’t offer as search limits/facets). Performing faculty (and students) study a work by using a score to follow along with a sound recording, so come to the library hoping to obtain multiple formats for the same work — icons or other aids for quickly identifying physical format are important to them, as for film users and others. There is also a lot of descriptive detail that performers need to see in a catalog display: contents, duration, performers’ names.

Stuff a lot of music librarians have observed or suspected, but good to see it quantified and confirmed in some formal studies.

COLLABORATIVE COLLECTION DEVELOPMENT
This is a topic that has generated much interest in the library community, and music librarians have also been exploring collaborative options for acquiring the specialized materials of their field. Besides shared approval-plan profiles for books, and shared database subscriptions, music librarians have divvied up the collecting of composers’ collected editions, and contemporary composers whose works they want to collect comprehensively. Because music materials are often acquired and housed in multiple locations on the same campus, internal collaboration is as important as external. One thing that does not seem to lend itself to collaborative collection: media (sound recordings and videos). Many libraries don’t lend these out via ILL, and faculty tend to want specific performances — making on-request firm orders a more suitable solution. One consortium of small Maine colleges (Colby, Bates, and Bowdoin) divided the processing labor of their staffs by setting up rotating shipments for their shared approval plan: one library gets this month’s shipment of books, another library receives the next month’s shipment, and so on.

DDA
There was a good bit of discussion concerning demand-driven e-book acquisitions among colleagues whose institutions had recently implemented DDA services. On two separate occasions, attendees raised the question of DDA’s impact on the humanities, given those disciplines’ traditional reliance on browsing the stacks as a discovery method.

RDA
It was a very busy conference for music catalogers, as over a hundred of us convened to get prepared for RDA. There was a full-day workshop; a cataloging “hot topics” session; a town-hall meeting with the Bibliographic Control Committee, which recently produced an “RDA Best Practices for Cataloging Music” document; and a plenary session on RDA’s impact across library services (the latter reprising a lot of material covered by Steve and others in ZSR presentations — stay tuned for more!).

SIDELIGHTS
A very special experience was a visit to the Ira F. Brilliant Center for Beethoven Studies (located on the San Jose State campus), the largest collection of Beethoveniana outside Europe. During a reception there, we got to play pianos dating from Beethoven’s time. Hearing the “Moonlight Sonata” up close on the model of instrument he wrote it for (Dulcken, a Flemish maker) was a true revelation.

Steve at ALA Midwinter 2013

Friday, February 8, 2013 2:10 pm

Although my trip to Seattle for the ALA Midwinter Conference had a rough start (flight delayed due to weather, nearly missed a connecting flight, my luggage didn’t arrive until a day later), I had a really good, productive experience. This Midwinter was heavy on committee work for me, and I was very focused on RDA, authority control and linked data. If you want a simple takeaway from this post, it’s that RDA, authority control and linked data are all tightly bound together and are important for the future of the catalog. If you want more detail, keep reading.
My biggest commitment at the conference was participating in two long meetings (over four hours on Saturday afternoon and three hours on Monday morning) of CC:DA (Cataloging Committee: Description and Access). I’m one of nine voting members of CC:DA, which is the committee responsible for developing ALA’s position on RDA. The final authority for making changes and additions to RDA is the JSC (Joint Steering Committee), which has representation from a number of cataloging constituencies, including ALA, the national library organizations of Canada, the UK, and Australia, as well as other organizations. ALA’s position on proposals brought to the JSC is voted on by CC:DA. Membership on this committee involves reading and evaluating a large number of proposals from a range of library constituencies. Much of the work of the committee has so far involved reviewing proposals regarding how to form headings in bibliographic records, which is, essentially, authority control work. We’ve also worked on proposals to make the rules consistent throughout RDA, to clarify the wording of rules, and to make sure that the rules fit with the basic principles of RDA. It has been fascinating to see how interconnected the various cataloging communities are, and how they relate to ALA and CC:DA. As I said, I am one of nine voting members of the committee, but there are about two dozen non-voting representatives from a variety of committees and organizations, including the Music Library Association, the Program for Cooperative Cataloging, and the Continuing Resources Cataloging Committee of ALCTS.
During our Monday meeting, we saw a presentation by Deborah Fritz of the company MARC of Quality on a visualization tool called RIMMF (RDA In Many Metadata Formats). RIMMF shows how bibliographic data might be displayed when RDA is fully implemented. The tool is designed to take RDA data out of MARC, because it is hard to think of how data might relate in RDA without the restrictions of MARC. RIMMF shows how the FRBR concepts of work, expression and manifestation (which are part of RDA) might be displayed by a public catalog interface. It’s still somewhat crude, but it gave me a clearer idea of the kinds of displays we might develop, as well as a better grasp on the eventual benefits to the user that will come from all our hard work of converting the cataloging world to RDA. RIMMF is free to download and we’re planning to play around with it some here in Resource Services.
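For the non-catalogers wondering what that work/expression/manifestation layering looks like, here’s a toy sketch. This is purely my own illustration, not RIMMF’s actual data model, and the example title and edition are made up for the demo:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Manifestation:
    """A published edition: a physical or digital embodiment."""
    publisher: str
    year: int

@dataclass
class Expression:
    """A particular realization of a work, e.g. one translation."""
    language: str
    manifestations: List[Manifestation] = field(default_factory=list)

@dataclass
class Work:
    """The abstract intellectual creation itself."""
    title: str
    expressions: List[Expression] = field(default_factory=list)

# Build one work, one translation, one edition of that translation.
war_and_peace = Work("War and peace")
maude = Expression("English (Maude translation)")
maude.manifestations.append(Manifestation("Oxford University Press", 1998))
war_and_peace.expressions.append(maude)

# A catalog display could then group editions under expression and work:
for expr in war_and_peace.expressions:
    for m in expr.manifestations:
        print(f"{war_and_peace.title} -- {expr.language} -- {m.publisher}, {m.year}")
```

The payoff for users is grouping: every edition of every translation hangs off the same work, instead of sitting in unrelated flat records.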
I also attended my first meeting of another committee of which I am a member, the Continuing Resources Cataloging Committee (part of the Continuing Resources Section of ALCTS). Continuing resources include serials and web pages, so CRS is the successor to the old Serials Section. We discussed the program that we had arranged for that afternoon on the possibilities of using linked data to record serial holdings. Unfortunately, I had to miss the program due to another meeting, but I’m looking forward to seeing the recording. We also brainstormed ideas for our program at Annual in Chicago, and the committee’s representative to the PCC Standing Committee on Training gave us an update on RDA training initiatives.
The most interesting other meeting that I attended was the Bibframe Update Forum. Bibframe is the name for an initiative to try to develop a data exchange format to replace the MARC format(s). The Bibframe initiative hopes to develop a format that can make library data into linked data, that is, data that can be exchanged on the semantic web. Eric Miller, from the company Zepheira (which is one of the players in the development of Bibframe), explained that the semantic web is about linking data, not just documents (as a metaphor, think about old PDF files that could not be searched, but were flat documents. The only unit you could search for was the entire document, not the meaningful pieces of content in the document). The idea is to create recombinant data, that is, small blocks of data that can be linked together. The basic architecture of the old web leaned toward linking various full documents, rather than breaking down the statements into meaningful units that could be related to each other. The semantic web emphasizes the relationships between pieces of data. Bibframe hopes to make it possible to record the relationships between pieces of data in bibliographic records and to expose library data on the Web and make it sharable. At the forum, Beacher Wiggins told the audience about the six institutions who are experimenting with the earliest version of Bibframe, which are the British Library, the German National Library, George Washington University, the National Library of Medicine, OCLC, and Princeton University. Reinhold Heuvelmann of the German National Library said that the model is defined on a high level, but that it needs to have more detail developed to allow for recording more granular data, which is absolutely necessary for fully recording the data required by RDA. 
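To make Miller’s “recombinant data” idea concrete, here’s a tiny sketch. Real Bibframe data would be RDF with proper URIs; I’m faking the shape of it with plain (subject, predicate, object) tuples, and every identifier and predicate name below is made up for illustration:

```python
# A bibliographic description as small linked statements, not one flat record.
triples = [
    ("work:1", "hasTitle", "Moby Dick"),
    ("work:1", "hasCreator", "person:melville"),
    ("person:melville", "hasName", "Melville, Herman"),
    ("instance:1", "instanceOf", "work:1"),
    ("instance:1", "publishedBy", "org:harper"),
]

def objects_of(subject: str, predicate: str) -> list:
    """Follow links from one piece of data to another."""
    return [o for s, p, o in triples if s == subject and p == predicate]

# Unlike a self-contained MARC record, each statement can be traversed
# independently: find the work's creator, then that creator's name.
creator = objects_of("work:1", "hasCreator")[0]
print(objects_of(creator, "hasName")[0])  # Melville, Herman
```

Because “person:melville” is its own addressable node rather than a text string buried in a record, any other work (or any other institution’s data) can link to the same node, which is the whole point of exposing library data on the semantic web.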
Ted Fons of OCLC spoke of how Bibframe is an attempt to develop a format that can carry the data libraries need and to allow for library data to interact with each other and the wider web. Fons said that Bibframe data has identifiers that are URIs which can be web accessible. He also said that Bibframe renders bibliographic data as statements that are related to each other, rather than as self-contained records, as with MARC. Bibframe breaks free of the constraints of MARC, which basically rendered data as catalog cards in electronic format. Bibframe is still going through quite a bit of development, but it is moving quickly. Sally McCallum of the MARC Standards Office said that they hope to finalize aspects of the Bibframe framework by 2014, but acknowledged that, “The change is colossal and the unexpected will happen.”
Actually, I think that’s a good way to summarize my thoughts on the current state of the cataloging world after attending this year’s Midwinter, “The change is colossal and the unexpected will happen.”

Steve at 2012 ALA Midwinter

Tuesday, January 31, 2012 6:35 pm

So, if you read nothing else in my post about ALA Midwinter, please take away this fact: RDA is coming. At several sessions, representatives from the Library of Congress indicated that LC is moving forward with plans to adopt RDA early in 2013. When LC adopts RDA, the other libraries in the US will fall in line behind them, so it’s time to start preparing.

On Saturday, January 21, I attended a meeting of the Copy Cataloging Interest Group, where I heard Barbara Tillett, the Chief of the LC Policy and Standards Division, speak about how LC is training their copy catalogers in RDA with an eye toward a 2013 implementation. She said much of the copy cataloger training material is focused on teaching when it is appropriate to change an AACR2 record to an RDA record, and when it is appropriate to change a master record in OCLC. LC has developed a set of RDA data elements that should always be included in their records, which they call “LC core.” Tillett said that LC will adopt RDA no sooner than January 2013, contingent upon continued progress on the recommendations the National Libraries made this spring regarding changes to RDA. LC decided to return most of the catalogers who participated in the RDA test that wrapped up at the end of 2010 to cataloging using RDA in November 2011, so that these catalogers could work on training, documentation, and further developing the RDA code itself. LC is making its work on RDA, including its copy cataloger training materials, available on their website (http://www.loc.gov/aba/rda). The Library of Congress has begun releasing “LC Policy Statements” that explain LC interpretations of RDA rules, and which replace the old LC Rule Interpretations that explained LC decisions on AACR2 rules. The Policy Statements are available for free with the RDA Toolkit. Regarding the ongoing development of RDA, Tillett said that there will be monthly minor corrections to RDA (typos and such), with more substantive major updates to RDA issued twice per year. Tillett also spoke of the Bibliographic Framework Transition Initiative, which is working to develop a metadata schema to replace the MARC formats. This group released a background statement and general plan in November 2011.
They are in the process of developing a funding proposal and of forming an advisory group with various players in the library metadata field.

On Sunday, January 22, I attended a meeting of the RDA Update Forum, and Beacher Wiggins of LC reaffirmed much of what Barbara Tillett said, but he stated more forcefully that the Library of Congress and the other national libraries are really intent on implementing RDA in 2013. However, he allowed for a little more flexibility in his timeline. He placed the date for RDA implementation in the first quarter of 2013, so anything from January 2 to March 31. Wiggins said that many of his colleagues are pushing for a January 2 date, but he said that, taking into account how deadlines can slip, he would be happy with March 31. Nevertheless, the message was clear, RDA is coming.

Also at the RDA Update Forum, I heard Linda Barnhart from the Program for Cooperative Cataloging, who spoke about how the PCC is preparing for the implementation of RDA (she said the key decisions of the PCC can be found at http://www.loc.gov/catdir/pcc ). The PCC is busily developing materials related to the RDA implementation. They have developed a set of Post-RDA Test Guidelines as well as an RDA FAQ. They have been working on guidelines for what they are calling a Day One for RDA authority records, which will be a day (probably after LC adopts RDA) from which all new LC authority records created will be created according to RDA rules instead of AACR2 rules. PCC also has a Task Group on Hybrid Bibliographic Records which has prepared guidelines for harmonizing RDA bib records with pre-RDA bib records. I know I’m sounding like a broken record here, but with all of this infrastructure being built up, make no mistake: RDA is coming.

On to other topics, I also attended an interesting session of the Next Generation Catalog Interest Group, where I heard Jane Burke of Serials Solutions speak about a new product they are developing which is designed to replace the back-end ILS. Burke said that Serials Solutions is looking to separate the discovery aspect of catalogs from their management aspect. Summon, as we already know, is their discovery solution, which is designed to allow for a single search with a unified result set. Serials Solutions is working to develop a webscale management solution which they are calling Intota. Intota is an example of “software as a service” (Burke recommended looking it up in Wikipedia, which I did). Burke argued that the old ILS model was riddled with redundancy, with every library cataloging the same things and everybody doing duplicate data entry (from suppliers to the ILS to campus systems). Intota would be a cloud-based service that would provide linked data and networked authority control (changes to LC authority headings would be changed for all member libraries, without the need to make local changes). It seems like an interesting model, and I look forward to hearing more about it.

I attended a number of other meetings, which will be of limited interest to a general audience, but something that was pretty cool was attending my first meeting as a member of the Editorial Board of Serials Review. After almost 20 years of working with serials, it was interesting to be on the other side of the process. We discussed the journal’s move to APA from Chicago style, a new formatting guide for the articles, future topics for articles, submission patterns, etc. It was very interesting.

As usual when I go to ALA, I saw several former ZSRers. I roomed with Jim Galbraith, who is still at DePaul University in Chicago. I also visited with Jennifer Roper and Emily Stambaugh, both of whom are expecting baby boys in May (small world!).

Leslie at NCLA 2011

Friday, October 7, 2011 2:50 pm

It was really nice to be able to attend an NCLA conference again — one of my music conferences, as it happens, has been held at the same time for years.

I attended a session on RDA, the new cataloging standard recently beta-tested by LC. Christee Pascale of NCSU gave a very helpful, concise reprise of that school’s experience as a test participant; the staff training program and materials they developed; and advice to others planning to implement RDA.

Presenters from UNCG and UNCC shared a session titled “Technical Services: Changing Workflows, Changing Processes, Personnel Restructuring — Oh My!” Both sites have recently undergone library-wide re-organizations, including the re-purposing of tech services staff to other areas, resulting in pressure to ruthlessly eliminate inefficiencies. Many of the specific steps they mentioned are ones we’ve already taken in ZSR, but some interesting additional measures include:

  • Eliminating the Browsing Collection in favor of a New Books display.
  • Reducing the funds structure (for instance, 1 fund per academic department — no subfunds for material formats)

There also seems to be a trend towards re-locating Tech Services catalogers to Special Collections, in order to devote more resources to the task of making the library’s unique holdings more discoverable; outsourcing or automating as many tech services functions as possible, including “shelf-ready” services, authority control, and electronic ordering; and training support staff (whose time has putatively been freed by the outsourcing/automation of their other tasks) to do whatever in-house cataloging remains. That’s the vision, at any rate — our presenters pointed out the problems they’ve encountered in practice. For instance, UNCC at one point had one person doing the receiving, invoicing, and cataloging: they quickly found they needed to devote more people to the still-significant volume of in-house cataloging that remained to be done even after optimizing use of outsourced services. They’re also feeling the loss of subject expertise (in areas like music, religion, etc.) and of experienced catalogers to make the big decisions (i.e., preparing for RDA).

NCLA plans to post all presentations on their website: http://www.nclaonline.org/

 

 

Steve at ALA Annual 2011

Tuesday, July 5, 2011 5:33 pm

I’m a bit late in writing up my report about the 2011 ALA in New Orleans, because I’ve been trying to find the best way to explain a statement that profoundly affected my thinking about cataloging. I heard it at the MARC Formats Interest Group session, which I chaired and moderated. The topic of the session was “Will RDA Be the Death of MARC?” and the speakers were Karen Coyle and Diane Hillmann, two very well-known cataloging experts.

Coyle spoke first, and elaborated a devastating critique of the MARC formats. She argued that MARC is about to collapse due to its own strange construction, and that we cannot redeem MARC, but we can save its data. Coyle argued that MARC was great in its day: it was a very well-developed code for books when it was designed. But as other material formats were added, such as serials, AV materials, etc., additions were piled on top of the initial structure. And as MARC was required to capture more data, the structure of MARC became increasingly elaborate and illogical. Structural limitations to the MARC formats required strange work-arounds, and different aspects of MARC records are governed by different rules (AACR2, the technical requirements of the MARC format itself, the requirements of ILSs, etc.). The cobbled-together nature of MARC has led to oddities such as publication dates and language information being recorded both in the (machine-readable) fixed fields of the record and in the (human-readable) textual fields of the record. Coyle further pointed out the oddity of the 245 title field in the MARC record, which can jumble together various types of data: the title of a work, the language, the general material designation, etc. This data is difficult to parse for machine processing. Although RDA needs further work, it is inching toward addressing these sorts of problems by allowing for the granular recording of data. However, for RDA to fully capture this granular data, we will need a record format other than MARC. In order to help develop a new post-MARC format, Coyle has begun a research project to break down and analyze MARC fields into their granular components. She began by looking at the 007/008 fields, finding that they have 160 different data elements, with a total of 1,530 different possible values. This data can be used to develop separate identifiers for each value, which could be encoded in a MARC-replacement format. Coyle is still working on breaking down all of the MARC fields.
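To make Coyle's point about the 245 field concrete, here is a rough Python sketch; the field string and the parsing logic are invented toy examples, not any actual record or library, but they show what a machine has to do just to pull the title proper out of a field that jumbles together title, GMD, and statement of responsibility:

```python
# Toy example of a raw MARC 245 field: subfield $a (title proper),
# $h (general material designation), $b (remainder of title),
# $c (statement of responsibility) all live in one field, separated
# only by subfield delimiters and ISBD punctuation.
raw_245 = "$aHamlet$h[sound recording] :$bprince of Denmark /$cWilliam Shakespeare."

def split_subfields(field: str) -> dict:
    """Split a raw field string on '$' delimiters into {code: value},
    stripping the trailing ISBD punctuation (' :', ' /')."""
    parts = [p for p in field.split("$") if p]
    return {p[0]: p[1:].strip(" :/") for p in parts}

subfields = split_subfields(raw_245)
print(subfields["a"])  # title proper: 'Hamlet'
print(subfields["h"])  # GMD: '[sound recording]'
print(subfields["c"])  # statement of responsibility
```

Even this toy version has to know MARC's subfield conventions and ISBD punctuation just to recover clean data, which is exactly the parsing burden Coyle was describing.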

After Karen Coyle, Diane Hillmann of Metadata Management Associates spoke about the developing RDA vocabularies, and it was a statement during her presentation that really struck me. The RDA vocabularies define a set of metadata elements and value vocabularies that can be used by both humans and machines. That is, they provide a link between the way humans think about and read cataloging data and the way computers process cataloging data. The RDA vocabularies can assist in mapping RDA to other vocabularies, including the data vocabularies of record schemas other than the MARC formats. Also, when RDA does not provide enough detailed entity relationships for particular specialized cataloging communities, the RDA vocabularies can be extended to detail more subproperties and relationships. The use of RDA vocabulary extensions means that RDA can grow, and not just from the top-down. The description of highly detailed relationships between bibliographic entities (such as making clear that a short story was adapted as a radio play script) will increase the searching power of our patrons, by allowing data to be linked across records. Hillmann argued that the record has created a tyranny of thinking in cataloging, and that our data should be thought of as statements, not records. That phrase, “our data should be thought of as statements, not records,” struck me as incredibly powerful, and the most succinct version of why we need to eventually move to RDA. It truly was a “wow” moment for me. Near the end of her presentation, Hillmann essentially summed up the thrust of her talk, when she said that we need to expand our ideas of what machines can and should be doing for us in cataloging.
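Hillmann's "statements, not records" idea is easiest to see in miniature. Here is a hedged Python sketch of her short-story/radio-play example expressed as discrete subject-predicate-object statements; the identifiers and property names are invented stand-ins, not actual RDA vocabulary terms:

```python
# Instead of one monolithic record per item, the same facts become
# discrete statements (triples) that machines can follow across
# descriptions. All identifiers below are hypothetical.
statements = [
    ("work:short-story-123", "ex:hasTitle", "The Lottery"),
    ("work:radio-play-456", "ex:hasTitle", "The Lottery (radio play)"),
    ("work:radio-play-456", "ex:isAdaptationOf", "work:short-story-123"),
]

# Because each fact stands alone, a machine can answer questions that
# cut across "records" -- e.g., find everything adapted from the story:
adaptations = [s for (s, p, o) in statements
               if p == "ex:isAdaptationOf" and o == "work:short-story-123"]
print(adaptations)
```

The point of the exercise: once the adaptation relationship is a statement of its own rather than a free-text note buried in a record, linking data across records becomes a trivial query.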

The other session I went to that is really worth sharing with everybody was the RDA Update Forum. Representatives from the Library of Congress and the two other national libraries, as well as the chair of the PCC (Program for Cooperative Cataloging), discussed the results of the RDA test by the national libraries. The national libraries have requested that the JSC (the Joint Steering Committee, the body that oversees the RDA code) address a number of problems in the RDA rules over the next eighteen months or so. LC and the other national libraries have decided to put off implementing RDA until January 2013 at the earliest, but all indications were that they plan to adopt RDA eventually. As the JSC works on revising RDA, the national libraries are working to move to a new record format (aka schema or carrier) to replace the MARC formats. They are pursuing a fairly aggressive agenda, intending to develop, by September 30 of this year, a plan with a timeline for transitioning past MARC. The national libraries plan to identify the stakeholders in such a transition, and want to reach out to the semantic web community. They plan for this to be a truly international effort that extends well beyond the library community as it is traditionally defined. They plan to set up communication channels, including a listserv, to share development plans and solicit feedback. They hope to have a new format developed within two years, but the process of migrating their data to the new format will take at least several more years after the format is developed. Needless to say, if the library world is going to move to a post-MARC format, it will create huge changes. Catalogs and ILS systems will have to be completely re-worked, and that's just for starters. If some people are uncomfortable with the thought of moving to RDA, the idea of moving away from MARC will be truly unsettling. I for one think it's an exciting time to be a cataloger.

The Future of Cataloging – Steve at ALA

Tuesday, July 6, 2010 1:49 pm

It’s not often that you go to a conference and have a major realization about the need to re-organize how you do your work and how your library functions, but I did at this year’s ALA. Through the course of several sessions on RDA, the new cataloging code that is slated to replace AACR2, I came to realize that we very much need to implement and maintain authority control at ZSR. This is not easy to say, as it will necessarily involve some expense and a great deal of time and effort, but without proper authority control of our bibliographic database, our catalog will suffer an ever-diminishing quality of service, frustrating patrons and hindering our efficiency.

You may ask, what is authority control? It’s the process whereby catalogers guarantee that the access points (authors, subjects, titles) in a bibliographic record use the proper or authorized form. Subject headings change, authors may have the same or similar names, and without controlling the vocabulary used, users can be confused, retrieving the wrong author or not retrieving all of the works on a given subject.
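A tiny Python sketch may make this concrete; the names and headings below are invented examples, not entries from any actual authority file. An authority file maps every variant form of a name to a single authorized heading, so searches under any variant collocate under one form:

```python
# Hypothetical miniature authority file: variant access points map to
# one authorized heading. Real authority records also carry dates,
# cross-references, and source citations; this keeps only the mapping.
authority_file = {
    "Twain, Mark": "Twain, Mark, 1835-1910",
    "Clemens, Samuel Langhorne": "Twain, Mark, 1835-1910",
    "Twain, Mark, 1835-1910": "Twain, Mark, 1835-1910",
}

def authorized_form(access_point: str) -> str:
    """Return the authorized heading, or the input unchanged if the
    access point is not under authority control."""
    return authority_file.get(access_point, access_point)

# Both variants collocate under the same heading:
print(authorized_form("Clemens, Samuel Langhorne"))  # 'Twain, Mark, 1835-1910'
print(authorized_form("Twain, Mark"))                # 'Twain, Mark, 1835-1910'
```

Without that mapping, a user searching under "Clemens" simply never finds the works entered under "Twain" -- which is the retrieval failure authority control exists to prevent.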

Here at ZSR we have historically had no authority maintenance to speak of. Our catalog records were sent out to a company to have the authorities cleaned up shortly before I started working here, and I started working here eight years ago this month. I have long thought our lack of an authority control system was a problem; however, it never seemed to be a crisis. That changed when RDA came along.

RDA (or Resource Description and Access) is, as you probably know by now, the new cataloging code that is supposed to replace AACR2. AACR2 focuses on the forms of items cataloged: whether the item is a book, a computer file, an audio recording, etc. The description gives you plenty of information about the item: the number of pages, the publisher, and so on. If you do not have authority control (as we don't), you may have a book with a title similar to another book's, but you can distinguish it by saying: the book I need has 327 pages, with 8 pages of color prints, it's 23 cm. high, and it was published by Statler & Waldorf in 1998. In RDA, the focus is on points of access and on identifying works. Say you have a novel (a work) that has been published as a print book, as an audio book, and as an electronic book (by three different publishers on three different platforms). With RDA, you want to create a record to identify the work, the novel as an abstract concept, not the specific physical (or electronic) form that the novel takes. It's not as easy to resort to the physical description (as with AACR2), because there may be no physical entity to describe at all. In that case, who wrote the book, the exact title of the book, and the subjects of the book become of paramount importance for identifying a work. RDA essentially cannot function without proper authority control. (I had realized this during the course of the presentations I attended, but on my last day, a speaker's first conclusion about preparing for RDA was "Increase authority control.")

RDA is not yet implemented, and the Library of Congress is currently conducting a test to decide by March 2011 whether it will adopt RDA. However, that test period seems a mere formality. There appears to be considerable momentum for the adoption of RDA, and I believe it will be adopted, even if many catalogers do have reservations about it. We may have another year or two before the momentum forces us to move to RDA, but in the meantime, I believe we need to get some sort of authority control system in place.

The advantages of authority control will be felt almost immediately in our catalog. The use of facets in VuFind will be far more efficient if the underlying data in the subject headings is in proper order. Also, as we move to implement WakeSpace, which will make us in essence publishers of material, we will need to make sure that we have our authors properly identified and distinguished from others (we need to make sure that our David Smith is the one we’ve got listed as opposed to another David Smith). Also, should we ever attempt to harvest the works of our university authors in an automated way to place them in WakeSpace, we will need to make sure that we are identifying the proper authors. The only way to do that is through authority control.

This issue will require some research and study before we can move forward with implementing authority control and maintenance. We will need some training for our current catalogers (definitely including me), we will need to have our current database’s authorities “cleaned up,” and we will have to institute a way to maintain our authorities, possibly including the hiring of new staff. It won’t be easy, and it won’t be cheap, but if our catalog is to function in an acceptable manner, I think it’s absolutely necessary.

Needless to say, I’ll be happy to talk about this with anyone who wants to.

Leslie at MLA 2010

Sunday, March 28, 2010 8:08 pm

Music librarians are inured to battling winter weather to convene every year during February in some northern clime (during a Chicago snowstorm last year). So it was almost surreal to find ourselves, this year, at an island resort in San Diego in March (beautiful weather, if still a bit on the chilly side). Despite the temptations of the venue, I had a very productive meeting this year.

REFERENCE

In the Southeast Chapter session, it was announced that East Carolina’s music library had scored top place among music libraries participating in a national assessment, sponsored by the Wisconsin-Ohio Reference Evaluation Program (WOREP), of effectiveness in answering reference queries. Initially, the East Carolina staff had misgivings about how onerous the process might be for users, who were asked to fill out a one-page questionnaire. As it turned out, students, when informed that it was part of a national project, typically responded “Cool!” and readily participated. The only refusals were from users who had to rush to their next class.

INSTRUCTION

A panel presentation titled “Weaving the Web: Best Practices for Online Content” resulted in a case of what might be termed the Wake Forest Syndrome: walking into a conference session only to find that we’re already “doing that” at WFU. It was largely about music librarians implementing LibGuides. One item of interest was a usability study conducted by one school of their LibGuides. Its findings:

  • Users tend to miss the tabs at the top. One solution that was tried was to replicate the tabs as links in the homepage "welcome" box.
  • Users prefer concise bulleted lists of resources over lengthy descriptions.
  • Students tend to feel overwhelmed by long lists of resources; they want the top 3-4 resources to start with, then to see others as needed.
  • Users were confused by links that put them into other LibGuides without explanation.
  • Students had trouble identifying relevant subject-specific guides when these were offered in a comprehensive list display.

One attendee voiced concern over an apparent conflict of objectives between LibGuides that aim to transmit research skills (i.e., teaching students how to locate resources on their own) and course-specific LibGuides (listing specific resources). Is the latter spoon-feeding?

COLLECTION DEVELOPMENT

A panel presentation on scores approval plans gave me some useful tips, as I’m planning to set one up next fiscal year.

In another panel on collecting ethnic music, Liza Vick of Harvard supplied a gratifying number of acquisition sources that I didn’t know about (in case other liaisons are interested in these, Liza’s presentation, among others, will be posted on the MLA website: http://www.musiclibraryassoc.org). The session also produced an interesting discussion about the objectives of collecting ethnographic materials in the present era. Historically, libraries collected field notes and recordings done by (mostly European) ethnographers of (mostly non-Western) peoples, premised on producing the most “objective” or “authentic” documentation. The spread of technology in recent years has resulted in new situations: “sampler” recordings produced by the former “subjects” with the aim of representing their culture to a general public (once dismissed by academics, these now benefit from a new philosophy that views the ways people choose to represent themselves as worthy of serious attention); in the last twenty years or so, a new genre of “world” music has appeared, fusing elements of historical musical traditions with modern pop styles; and of course the former “subjects” are now documenting their own cultures in venues like YouTube. As a result, there is a movement on the part of ethnographers and librarians away from trying to define authenticity, and towards simply observing the ongoing discourse between traditional and modern communities.

BIBLIOGRAPHY

Lynn has remarked on the need to reduce the percentage of our collections devoted to print bibliographic tools where the online environment now offers equivalent or superior discovery methods. In an MLA session that seemed to constitute a demonstration of this very principle, musicologist Hugh McDonald talked about his work in progress on a born-digital thematic catalog of the works of Bizet. Thematic catalogs have a long and venerable history in print, as definitive sources for the identification and primary source materials of a given composer's works. They typically provide a numbering system for the works, with incipits (the musical notation for the principal themes) as an additional aid to identification, and cite manuscript materials and early editions. McDonald envisions these catalogs, once freed of the space restrictions of print, as "theoretically" (i.e., when copyright issues have been ironed out) capable of documenting not just early editions but all editions ever published; not just the premiere performance, but all performances to date; not just incipits but full-text access to scores, recordings, reviews, and correspondence – compiled and updated collaboratively by many hands, in contrast to the famous catalogers of Mozart and Beethoven, who labored alone and whose catalogs are now "seriously out of date." There are already many websites devoted to individual composers, but none, McDonald claims, presently approaches the kind of comprehensive compendium that might be realized based on the thematic catalog concept. One attendee, voicing a concern about the preservation of information in the online environment that is certainly not new and not unique to music, wanted to know if edits would be tracked and archived, noting that many librarians retain older print editions on their shelves for the light they cast on reception history and on the state of scholarship at a given time.

HOT TOPICS

Arriving late for the “Hot Topics” session, I walked into the middle of a lively debate on the comparative benefits of having a separate music library in the music department vs. housing the music collection in the main library. Those who headed departmental music libraries argued passionately for the special needs of performing musicians, and a librarian onsite who speaks their language. Those who work as generalists in main libraries pointed to music’s role in the arts and humanities as a whole, and in the increasingly interdisciplinary milieu of today’s academe. In terms of administrative clout, a sense of isolation has always been endemic to departmental libraries: one attendee who “survived” a move of her music collection from the music department to the main library reported that she now enjoys unprecedented access to administration, more effective communication with circulation and technical services staff regarding music materials, and daily contact with colleagues in other disciplines that has opened opportunities she would not have had otherwise.

Another hot topic was "MLA 2.0": in response to dwindling travel budgets, a proposal was made to ask conference speakers to replay their presentations in Second Life.

CATALOGING

There were presentations on RDA and FRBR, two new cataloging standards, and I got to see some helpful examples for music materials, as well as a report on "deferred issues" that MLA continues to negotiate with the steering committee of RDA (these involve uniform titles and preferred access points; lack of alternative options for the principal source of information – problematic when you have a CD album without a collective title on the disc, but one on the container; definitions and treatment of arrangements and adaptations; and LC genre/form terms for music – which to use anglicized names for, and when to use the original language).

Indiana U, in their upcoming release of Variations, a program they’ve developed for digitizing scores and recordings collections, is “FRBRizing” their metadata. Unlike other early adopters of FRBR, they plan to make their metadata structure openly accessible, so that the rest of us can actually go in and see how they did it – this promises to be an invaluable aid to music catalogers as they transition to the new standard.

Another presenter observed that both traditional cataloging methods and the new RDA/FRBR schema are centered on the concept of “the work” – an entity with a distinct title and a known creator. Unfortunately, when faced with field recordings (and doubtless other ethnographic or other-than-traditionally-academic materials), a cataloger encounters difficulty proceeding on this premise. Does one take a collection-level approach (as archivists do with collections of papers) and treat the recording as “the work,” with the ethnographer as the creator? Or does one consider “the work” to be each of the often untitled or variously titled, often anonymously or collaboratively created performances captured on the recording? Music materials seem to span both sides of the paradigmatic divide, with Western classical repertoire that requires work-centered descriptors of a very precise and specialized nature (opus numbers, key, etc.) and multi-cultural research that challenges traditional modes of description and access.

Finally, I’ve got to share a witty comment made by Ed Jones of National University, who gave the introductory overview of FRBR. Describing how FRBR is designed to reflect the creative process – the multiple versions of a work from first draft through its publication history, to adaptations by others – he noted how the cataloger’s art, working from the other end, is more analogous to forensics: “We get the body, and have to figure out what happened.”

Webinar: RDA and OCLC

Friday, October 30, 2009 4:09 pm

On Oct. 30, Leslie attended a webinar hosted by OCLC, detailing OCLC’s preparations for the soon-to-be-released new cataloging rules, RDA (Resource Description and Access), which will succeed AACR2 (Anglo-American Cataloguing Rules), the standard that has been in place for the last 30 or so years.

A poll of webinar attendees, posing the question “How is your institution responding to RDA?”, produced the following responses: 200+ are presently reading material and attending sessions on RDA; 85 are waiting to see how others proceed; 3 are currently changing their cataloging practices; and a small number do not plan to implement RDA.

An attendee asked: Will libraries be forced (by OCLC) to adopt RDA? The answer: No, we can continue to enter data in AACR2 for the foreseeable future. The presenters noted that, while RDA has proven controversial in the United States, it has been received more positively in the UK and Australia — prompting OCLC to proceed early with RDA development, to meet the demand of its international clientele.

The planned release date of the RDA online manual is November of 2009 (http://www.rdaonline.org/). In the six months following the manual's release, a project to test the new rules will be conducted by the three U.S. national libraries (LC, the National Library of Medicine, and the National Agricultural Library). A group of test participants, representing libraries and archives of all types, as well as cataloging agencies (firms that provide cataloging for other institutions), will work with a core set of materials, representing all the major categories, plus other materials usual to the participating institutions, cataloging them in both AACR2 and RDA. Qualitative and quantitative feedback will be solicited, and the test results will be made public. OCLC presenters noted that, since the testers will be working in OCLC's live production mode, we will see RDA records contributed to OCLC products such as WorldCat.

Catalogers will no doubt already be aware of the planned changes to the MARC21 record format, in preparation for RDA (http://www.loc.gov/marc/formatchanges-RDA.html). OCLC plans to make the new fields, codes, etc. available in Connexion (OCLC’s input interface for catalogers) before the testing period. Connexion users will be alerted in a future Technical Bulletin.

A webinar attendee asked if OCLC would be providing a new data-input template for RDA. While OCLC is currently working on an interface that incorporates RDA’s controlled vocabulary, the presenters noted that participants in the testing project would be working primarily with MARC21 records, and that “most of us will be working with MARC for some time to come.” They recommend that we follow the test reports, and wait for the results, before jumping in and implementing RDA.

A recording of the webinar will be posted on OCLC’s website (http://www.oclc.org/us/en/default.htm).

NISO Webinar: Bibliographic Control Alphabet Soup

Wednesday, October 14, 2009 5:10 pm

Earlier this afternoon, Lauren C., Leslie, Patty, Chris, Jean-Paul and I attended (watched? listened to? whatever) a NISO Webinar called Bibliographic Control Alphabet Soup: AACR to RDA and the Evolution of MARC. The program consisted of three presentations related to RDA and the future of the MARC format.

The first speaker was Barbara Tillett, the Chief of the Cataloging Policy and Support Office at the Library of Congress. She discussed the history of bibliographic control up to RDA (Resource Description and Access), which is intended to be a new cataloging code to supersede AACR2. RDA grew out of an attempt to develop an AACR3. RDA attempts to incorporate FRBR principles (which have been discussed in a number of other entries; if you have any questions, just ask me), and tries to be more universal than AACR2, which is tied to the English-speaking world. Furthermore, RDA reflects changes in technology (both in terms of the content it describes and how content is described), changes in focus (bibliographic description is not just for a local library, but for an international audience), and a change of view (moving from describing items to, in FRBR terms, describing entities).

So, what does all that mean in practical terms? Well, the RDA code has two major areas it describes: elements of records (in database talk, entities and their attributes) and relationships (between elements of a record, and between various records). RDA simplifies a number of the descriptive rules for cataloging, using a "take what you see" approach. Rather than qualifying information with parenthetical statements and re-ordering data, as AACR2 requires, RDA would note information as it appears on an item, which will make it far easier to harvest data automatically. RDA also makes the rule of three (the rule that no more than three authors can be listed) optional, gets rid of Latin phrases in notes, dispenses with GMDs (general material designations), gets rid of the "polyglot" designation, allows for more complete data in authority records, etc. All of these changes are made with an eye toward allowing data to be harvested and generated in automated ways more easily from the records, making the records more intelligible to users, as well as strengthening the relationships between records for related and derivative works.
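To illustrate the "take what you see" shift, here is a much-simplified Python sketch (the abbreviation table is invented and tiny; real AACR2 practice involves a long appendix of prescribed abbreviations) contrasting how the two codes might record the same edition statement:

```python
# Hypothetical, simplified contrast: AACR2 applies prescribed
# abbreviations to what appears on the item; RDA transcribes the
# statement as it appears, so no rules need reverse-engineering
# when the data is harvested by machine.
AACR2_ABBREVIATIONS = {"Third": "3rd", "edition": "ed."}

def aacr2_transcribe(statement: str) -> str:
    """AACR2-style practice: substitute prescribed abbreviations."""
    return " ".join(AACR2_ABBREVIATIONS.get(w, w) for w in statement.split())

def rda_transcribe(statement: str) -> str:
    """RDA-style practice: record the statement as it appears."""
    return statement

on_item = "Third edition"
print(aacr2_transcribe(on_item))  # '3rd ed.'
print(rda_transcribe(on_item))    # 'Third edition'
```

The design point is that RDA's transcription needs no lookup table at all, which is precisely why it is friendlier to automated harvesting.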

The second presenter was Diane Hillmann, the Director of Metadata Management Services at the Information Institute of Syracuse. Hillmann was very knowledgeable about her topic, but moved very quickly and assumed a lot of familiarity among her audience with the topic she was discussing. It was fairly confusing, but we were able to identify her main point, which was that the exclusive use of MARC by libraries limits us in exchanging data outside the library silo. Nobody else uses MARC, nor are they likely to. Descriptive metadata use outside the library world is exploding and we’re not in on it. To get libraries into the general metadata game, part of the project of the RDA developers is to develop a vocabulary with defined data elements that can be used to create cataloging records, but that are also searchable and intelligible to the Web in general.

The third and final presenter was William Moen, the Director of Research from the University of North Texas’s School of Library and Information Sciences. He discussed a research project he and a team conducted from 2005 to 2007, in which they studied how many of the fields and subfields available in the MARC format were used and/or indexed by libraries in their bibliographic records. They did frequency counts and analyses of more than 56 million MARC21 bib records from the OCLC database. 211 fields and 1,596 subfields were used at least once. Looking at records in the Books, Pamphlets and Printed Sheet format, Moen and his team found that 7 fields appeared in all of the records, while 15 fields occurred in more than 50% of the records. Many, many fields had very few occurrences. The 656 field had only one occurrence. About 60% of all fields and subfields are used in less than 1% of the records. This led Moen and his team to consider the idea of developing core bib records in the MARC format that use a limited number of the currently available fields. By identifying the fields that are used in all bib records, combined with the most commonly used fields, Moen and his team developed proposed core bib records. However, Moen does not advocate simply leaving the decision up to statistical analysis. If we are to move to a more streamlined core MARC record, he suggests that catalogers think long and hard about what is actually needed in the bib record, and that the MARC format be revised with an eye toward supporting the FRBR-defined user tasks (he also asks if we really know which content designations are needed to support a given user task).
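Moen's frequency-count approach is simple to sketch in Python; the four sample records below are invented, whereas his team ran the analysis over 56 million real OCLC records:

```python
# Hypothetical miniature of Moen's method: tally how often each MARC
# field tag occurs across a set of records, then keep the tags present
# in at least half of them as candidate "core" fields.
from collections import Counter

records = [
    ["001", "008", "245", "260", "300", "650"],
    ["001", "008", "245", "260", "650"],
    ["001", "008", "245", "300", "656"],
    ["001", "008", "245", "260", "300"],
]

tag_counts = Counter(tag for record in records for tag in record)
threshold = len(records) / 2  # "appears in at least 50% of records"
core_tags = sorted(tag for tag, n in tag_counts.items() if n >= threshold)
print(core_tags)
```

Note how the rarely used 656 drops out immediately, just as Moen found most fields do at scale; his caveat stands that the final cut should be a cataloger's judgment about user tasks, not the statistics alone.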

As the broadcast part of the webinar wound down, Lauren, Leslie, Patty, Chris, Jean-Paul and I engaged in a lively and interesting conversation about the issues raised in the presentations that lasted well past our scheduled end time. That struck me as a very good sign that this webinar was quite worthwhile.


Pages
About
Categories
2007 ACRL Baltimore
2007 ALA Annual
2007 ALA Gaming Symposium
2007 ALA Midwinter
2007 ASERL New Age of Discovery
2007 Charleston Conference
2007 ECU Gaming Presentation
2007 ELUNA
2007 Evidence Based Librarianship
2007 Innovations in Instruction
2007 Kilgour Symposium
2007 LAUNC-CH Conference
2007 LITA National Forum
2007 NASIG Conference
2007 North Carolina Library Association
2007 North Carolina Serials Conference
2007 OCLC International ILLiad Conference
2007 Open Repositories
2007 SAA Chicago
2007 SAMM
2007 SOLINET NC User Group
2007 UNC TLT
2007_ASIST
2008
2008 Leadership Institute for Academic Librarians
2008 ACRL Immersion
2008 ACRL/LAMA JVI
2008 ALA Annual
2008 ALA Midwinter
2008 ASIS&T
2008 First-Year Experience Conference
2008 Lilly Conference
2008 LITA
2008 NASIG Conference
2008 NCAECT
2008 NCLA RTSS
2008 North Carolina Serials Conference
2008 ONIX for Serials Webinar
2008 Open Access Day
2008 SPARC Digital Repositories
2008 Tri-IT Meeting
2009
2009 ACRL Seattle
2009 ALA Annual
2009 ALA Annual Chicago
2009 ALA Midwinter
2009 ARLIS/NA
2009 Big Read
2009 code4lib
2009 Educause
2009 Handheld Librarian
2009 LAUNC-CH Conference
2009 LAUNCH-CH Research Forum
2009 Lilly Conference
2009 LITA National Forum
2009 NASIG Conference
2009 NCLA Biennial Conference
2009 NISOForum
2009 OCLC International ILLiad Conference
2009 RBMS Charlottesville
2009 SCLA
2009 UNC TLT
2010
2010 ALA Annual
2010 ALA Midwinter
2010 ATLA
2010 Code4Lib
2010 EDUCAUSE Southeast
2010 Handheld Librarian
2010 ILLiad Conference
2010 LAUNC-CH Research Forum
2010 LITA National Forum
2010 Metrolina
2010 NASIG Conference
2010 North Carolina Serials Conference
2010 RBMS
2010 Sakai Conference
2011 ACRL Philadelphia
2011 ALA Annual
2011 ALA Midwinter
2011 CurateCamp
2011 Illiad Conference
2012 SNCA Annual Conference
ACRL
ACRL 2013
ACRL New England Chapter
ACRL-ANSS
ACRL-STS
ALA Annual
ALA Annual 2013
ALA Editions
ALA Midwinter
ALA Midwinter 2012
ALA Midwinter 2014
ALCTS Webinars for Preservation Week
ALFMO
APALA
ARL Assessment Seminar 2014
ARLIS
ASERL
ASU
Audio streaming
authority control
Berkman Webinar
bibliographic control
Book Repair Workshops
Career Development for Women Leaders Program
CASE Conference
cataloging
Celebration: Entrepreneurial Conference
Charleston Conference
CIT Showcase
CITsymposium2008
Coalition for Networked Information
code4lib
commons
Conference Planning
Conferences
Copyright Conference
costs
COSWL
CurateGear 2013
CurateGear 2014
Designing Libraries II Conference
DigCCurr 2007
Digital Forsyth
Digital Humanities Symposium
Disaster Recovery
Discovery tools
E-books
EDUCAUSE
Educause SE
EDUCAUSE_SERC07
Electronic Resources and Libraries
Embedded Librarians
Entrepreneurial Conference
ERM Systems
evidence based librarianship
FDLP
FRBR
Future of Libraries
Gaming in Libraries
General
GODORT
Google Scholar
govdocs
Handheld Librarian Online Conference
Hurricane Preparedness/Solinet 3-part Workshop
ILS
information design
information ethics
Information Literacy
innovation
Innovation in Instruction
Innovative Library Classroom Conference
Inspiration
Institute for Research Design in Librarianship
instruction
IRB101
Journal reading group
Keynote
LAMS Customer Service Workshop
LAUNC-CH
Leadership
Learning spaces
LibQUAL
Library 2.0
Library Assessment Conference
Library of Congress
licensing
Lilly Conference
LITA
LITA National Forum
LOEX
LOEX2008
Lyrasis
Management
Marketing
Mentoring Committee
MERLOT
metadata
Metrolina 2008
MOUG 09
MOUG 2010
Music Library Assoc. 07
Music Library Assoc. 09
Music Library Assoc. 2010
NASIG
National Library of Medicine
NC-LITe
NCCU Conference on Digital Libraries
NCICU
NCLA
NCLA Biennial Conference 2013
NCPC
NCSLA
NEDCC/SAA
NHPRC-Electronic Records Research Fellowships Symposium
NISO
North Carolina Serials Conference 2014
Offsite Storage Project
OLE Project
online catalogs
online course
OPAC
open access
Peabody Library Leadership Institute
plagiarism
Podcasting
Preservation
Preservation Activities
Preserving Forsyth LSTA Grant
Professional Development Center
rare books
RDA/FRBR
Reserves
RITS
RTSS 08
RUSA-CODES
SAA Class New York
SAMM 2008
SAMM 2009
Scholarly Communication
ScienceOnline2010
Social Stratification in the Deep South
Social Stratification in the Deep South 2009
Society of American Archivists
Society of North Carolina Archivists
SOLINET
Southeast Music Library Association
Southeast Music Library Association 08
Southeast Music Library Association 09
SPARC webinar
subject headings
Sun Webinar Series
tagging
TALA Conference
Technical Services
technology
ThinkTank Conference
Training
ULG
Uncategorized
user studies
Vendors
video-assisted learning
visual literacy
WakeSpace
Web 2.0
Webinar
WebWise
WFU China Initiative
Wikis
Women's History Symposium 2007
workshops
WSS
ZSR Library Leadership Retreat