Professional Development

Steve at NASIG 2012

Thursday, June 14, 2012 5:03 pm

Last Thursday, Chris, Derrik, and I hopped in the library van and drove to Nashville for the NASIG Conference, returning on Sunday. It was a busy and informative conference, packed with sessions on serials and subscriptions. I will cover a few of the interesting sessions I attended in this post.
One such session was called “Everyone’s a Player: Creation of Standards in a Fast-Paced Shared World,” which discussed the work of NISO and the development of new standards and “best practices.” Marshall Breeding discussed the ongoing development of the Open Discovery Initiative (ODI), a project that seeks to identify the requirements of web-scale discovery tools, such as Summon. Breeding pointed out that it makes no sense for libraries to spend millions of dollars on subscriptions if nobody can find anything, so in this context it makes sense for libraries to spend tens of thousands on discovery tools. But since these tools are still so new, there are no standards for how they should function and interoperate. ODI plans to develop a set of best practices for web-scale discovery tools, and is beginning by developing a standard vocabulary as well as a standard way to format and transfer data. The project is still in its earliest phases and will have its first work available for review this fall.
Also at this session, Regina Reynolds from the Library of Congress discussed her work with the PIE-J initiative, which has developed a draft set of best practices that is ready for comment. PIE-J stands for the Presentation & Identification of E-Journals, and its best practices give publishers guidance on how to present title changes, issue numbering, dates, ISSN information, publishing statements, etc. on their e-journal websites. Currently, it’s pretty much the Wild West out there, with publishers following unique and puzzling practices. PIE-J hopes to help clean up the mess.
Another session that was quite useful was on the “CONSER Serials RDA Workflow,” where Les Hawkins, Valerie Bross, and Hien Nguyen from the Library of Congress discussed the development of RDA training materials at LC, including CONSER serials cataloging materials and general RDA training materials from the PCC (Program for Cooperative Cataloging). I haven’t had a chance yet to root around on the Library of Congress website, but these materials are available for free, and include a multi-part course called “Essentials for Effective RDA Learning” that comprises 27 hours (yikes!) of instruction: a 9-hour training block on FRBR, a 3-hour block on the RDA Toolkit, and 15 hours on authority and description in RDA. This is for general cataloging, not specific to serials. Also, because LC is working to develop a replacement for the MARC formats, there is a visualization tool called RIMMF, available at marcofquality.com, that allows for creating visual representations of records and record relationships in a post-MARC environment. It sounds promising, but I haven’t had a chance to play with it yet. Finally, the CONSER training program, which focuses on serials cataloging, is developing a “bridge” training plan to transition serials catalogers from AACR2 to RDA, which will be available this fall.
Another interesting session I attended was “Automated Metadata Creation: Possibilities and Pitfalls” by Wilhelmina Randtke of the Florida State University Law Research Center. She pointed out that computers like black-and-white decisions and are bad with discretion, while creating metadata is all about identifying and noting important information. Randtke said computers love keywords but are not good with “aboutness” or subjects. So, in her project, she tried to develop a method of using computers to generate metadata for graduate theses. Some of the computer talk got very technical and confusing for me, but her discussion of subject analysis was fascinating. Using certain computer programs for automated indexing, Randtke did a data scrape of the digitally encoded theses and identified recurring keywords. This keyword data was run through ontologies/thesauruses to identify more accurate subject headings, which were applied to the records. A person needs to select the appropriate ontology/thesaurus for the item(s) and review the results, but the basic subject analysis can be performed by the computer. Randtke found that the results were cheap and fast, but incomplete. She said, “It’s better than a shuffled pile of 30,000 pages. But, it’s not as good as an organized pile of 30,000 pages.” So, her approach showed some promise, but it still needs work.
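For the technically curious, the pipeline Randtke described — scrape the full text, count recurring keywords, then run them through a thesaurus to arrive at subject headings — is simple enough to sketch. Here is a minimal, hypothetical Python version (the thesaurus, stop words, and sample text are all invented for illustration; this is not her actual code):
```python
from collections import Counter
import re

# Toy stand-in for a real ontology/thesaurus: recurring keywords
# map to authorized subject headings. (Invented for illustration.)
THESAURUS = {
    "groundwater": "Groundwater -- Florida",
    "aquifer": "Aquifers",
    "wetland": "Wetlands -- Management",
}

STOP_WORDS = {"the", "of", "and", "a", "an", "in", "to", "is", "for"}

def extract_keywords(text, top_n=20):
    """Scrape the text for recurring words, ignoring stop words."""
    words = re.findall(r"[a-z]+", text.lower())
    counts = Counter(w for w in words if w not in STOP_WORDS)
    return [word for word, _ in counts.most_common(top_n)]

def suggest_subjects(text):
    """Run scraped keywords through the thesaurus to get subject headings."""
    return sorted({THESAURUS[kw] for kw in extract_keywords(text) if kw in THESAURUS})

thesis = "Groundwater recharge of the aquifer in a wetland basin ..."
print(suggest_subjects(thesis))
# -> ['Aquifers', 'Groundwater -- Florida', 'Wetlands -- Management']
```
As she stressed, a person still has to choose an appropriate thesaurus and review the output; a sketch like this only automates the cheap-and-fast part.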
Of course there were a number of other interesting presentations, but I have to leave something for Chris and Derrik to write about. One idea that particularly struck me came from Rick Anderson during his thought-provoking all-conference vision session on the final day: “To bring simplicity to our patrons means taking on an enormous level of complexity for us.” That basic idea has been something of an obsession of mine for the last few months while wrestling with authority control and RDA and considering the semantic web. To make our materials easily discoverable by the non-expert (and even the expert) user, we have to make sure our data is rigorously structured, and that requires a lot of work. It’s almost as if there’s a certain quantity of work that has to be done to find stuff, and we either push it off onto the patron or take it on ourselves. I’m in favor of taking it on ourselves.
The slides for all of the conference presentations are available here: http://www.slideshare.net/NASIG/tag/nasig2012 for anyone who is interested. You do not need to be a member of NASIG to check them out.

Steve at 2012 North Carolina Serials Conference

Tuesday, March 27, 2012 1:04 pm

On March 16, I attended the 2012 North Carolina Serials Conference at UNC-Chapel Hill, along with a number of folks from ZSR. I’m going to write about one session that really interested me, because I think it’s worth a fairly in-depth recap.
The session in question was the closing keynote address by Kevin Guthrie, who was the first employee of JSTOR (back when it was a one-person operation) and is a co-founder of the non-profit organization ITHAKA. Guthrie’s presentation, “Will Books Be Different?” examined the world of electronic books. He began with a brief history of the transition from print to electronic journals, emphasizing that the process was generally driven by the academic world, in the sense that publishers built their electronic journal models with the academic world as the intended audience for their products. Guthrie pointed out that the electronic book, particularly the scholarly electronic book, faces a very different set of circumstances. Library budgets in the 2010s are even tighter than they were in the ‘90s and ‘00s, and libraries have to figure out how to do more with fewer resources. At the same time, everyone is connected, and the scale of digital activity supports massive commercial investment by publishers. The academic world is now reacting to publishers, rather than the other way around.
The consumer web is influencing the availability of scholarly electronic books. It is on the consumer web that companies are taking advantage of new network and digital technologies in transformative ways, unlike in the ‘90s when electronic journals were developed. The audacity of the Google Books Project has made it seem actually possible that all books will eventually be digital, which is exciting, but that does not mean that scholarly books will be a priority of digital publishers. Commercial companies are becoming increasingly focused on what gets accessed, as opposed to the intrinsic research value of sources. Furthermore, search and discovery capabilities could prove critical to which book content gets supported by e-publishers.
Licensing issues are also different in the world of e-books than with e-journals. At a basic level, libraries tend to want to own books, rather than license or subscribe to them. We have gotten over that tendency with journals, but, for many librarians, books somehow feel qualitatively different. Also, Digital Rights Management (DRM) technologies can be difficult to manage. In addition, there is the question of how individual access and institutional access interact when dealing with remotely accessible and downloadable e-book content. Consortial purchasing is also complicated by e-books. Historically, library consortia have purchased books to share, rather than each library purchasing its own copy. How do libraries share e-books with each other without violating licensing or access terms? Or does the pricing model for e-books change such that consortia no longer make sense at all?
We are seeing the development of new versions of the Big Deal, like those we have had for e-journals for years. There has been consolidation among publishers, and libraries are being offered access to more content at a package price. However, it is hard to measure good value. Usage statistics are generally far lower for books than for journals. A library may buy a big e-book package and see very little use, and because we are still in the early days of e-books, with no real benchmarks for what constitutes substantial use, it is hard to say whether that level of use justifies the cost.
Having said all that, despite the complications, libraries are buying more e-books. However, Guthrie argued that libraries are ahead of our users with regard to e-books. He pointed to a recent e-book usage survey which found that 53% of undergraduates prefer print books to e-books, a higher percentage than any other university population! We need to think about what resources our users need and how they use them, particularly where e-books are concerned.

Steve at 2012 LAUNC-CH Conference

Monday, March 12, 2012 5:39 pm

Last week I attended the 2012 conference for LAUNC-CH, the Librarians’ Association of UNC-Chapel Hill. This year’s theme was “Engage, Innovate, Assess: Doing More With Less.” The keynote address, “Doing More With Less,” was given by Dick Blackburn, Associate Professor of Organizational Behavior at UNC-CH’s Kenan-Flagler Business School. In his frequently humorous presentation, Blackburn discussed how innovation and creativity can help an organization do more with less. He outlined a cyclical creative/innovative process in six steps, which I will quote here:
1) Motivation – You really gotta wanna.
2) Preparation – Making the strange familiar (that is, learning about what materials/ideas you are working with).
3) Manipulation – Making the familiar strange (taking materials/ideas and putting them together in a new way).
4) Incubation – The occasional need for disruption, distraction, distancing, and/or disengagement from the idea process.
5) Illumination – The “Aha!” experience.
6) Verification/Evaluation – Testing the acceptability of the creative idea.
…and this process can then feed back to motivation.
After outlining this process, Blackburn argued that organizations should not only reward innovation that succeeds, but should also reward innovation that fails, so long as it is “smart failure,” that is, failure that you can learn from. If you punish failure, you stifle creativity. As long as an idea is planned out well and the risk taken is not too enormous (the phrase he used was “below the waterline,” meaning that it can sink the organization), the risk taker should be rewarded for trying. If anything, organizations should punish inaction, not failure. To that end, Blackburn encouraged the development of divergent thinking that takes risks.
I also attended a very interesting session called “Can the Past Be Prologue? What We Can Learn from How the UNC Library Weathered the Great Depression” by Eileen McGrath and Linda Jacobson. They drew a number of lessons from UNC’s experience during the Depression that are applicable in the current tough economic climate. The first lesson was the importance of cooperation. During the Depression, UNC and Duke developed the first cooperative collection development agreements between the two universities, sharing the load for buying expensive, rare, and little-used materials. Duke library staff who attended library school classes at UNC would carry loaned books back and forth between the libraries. To further cooperative lending, the library at UNC developed the first union catalog for the state.
The second lesson McGrath and Jacobson drew was the importance of supporting and recognizing staff. During the Depression, library staff at UNC suffered a 30% pay cut. The library began a newsletter during this period in which staff accomplishments were recognized, and the library administration made more of an effort to congratulate staff and to encourage and reward initiative at work.
The third lesson was to find ways to develop the collection other than purchasing materials. UNC was very successful at this during the Depression. From 1929 to 1930, 70% of new materials were purchased, but from 1934 to 1935, only 32% were. Yet during this period the collection grew, and its rate of growth actually increased. This was accomplished by seeking out private donors, gifts, and exchanges. Also, in 1933, a new law required the state government to deposit 25 copies of every state document at UNC. Perhaps the most innovative approach was the work of history professor J.G. de Roulhac Hamilton, who pioneered the collection of manuscripts and family papers throughout the South. Hamilton was so successful in this endeavor that he was nicknamed “Ransack,” and there are stories that he was banned from entering some neighboring states because he was stripping away their cultural heritage.
The fourth major lesson was that false economies catch up with you. Don’t try to do things on the cheap, because you will eventually pay, and often much more than you would have. The Wilson Library was built during the Depression, but to save money, the original usable stack space was cut from 450,000 volumes to 300,000 volumes, and the library hit storage problems far sooner than projected. Similarly, they skimped on lighting in the public areas of the library. The students launched a very vocal campaign for better lighting, including editorials, petitions, and protests, that eventually resulted in new lighting being purchased. Sometimes it’s better to just pay the costs upfront.

Steve at 2012 ALA Midwinter

Tuesday, January 31, 2012 6:35 pm

So, if you read nothing else in my post about ALA Midwinter, please take away this fact: RDA is coming. At several sessions, representatives from the Library of Congress indicated that LC is moving forward with plans to adopt RDA early in 2013. When LC adopts RDA, the other libraries in the US will fall in line behind them, so it’s time to start preparing.

On Saturday, January 21, I attended a meeting of the Copy Cataloging Interest Group, where I heard Barbara Tillett, Chief of the LC Policy and Standards Division, speak about how LC is training its copy catalogers in RDA with an eye toward a 2013 implementation. She said much of the copy cataloger training material focuses on when it is appropriate to change an AACR2 record to an RDA record, and when it is appropriate to change a master record in OCLC. LC has developed a set of RDA data elements that should always be included in its records, which it calls “LC core.” Tillett said that LC will adopt RDA no sooner than January 2013, contingent upon continued progress on the recommendations the national libraries made this spring regarding changes to RDA. In November 2011, LC returned most of the catalogers who had participated in the RDA test (which wrapped up at the end of 2010) to cataloging with RDA, so that they could work on training, documentation, and further development of the RDA code itself. LC is making its work on RDA, including its copy cataloger training materials, available on its website ( http://www.loc.gov/aba/rda ). The Library of Congress has also begun releasing “LC Policy Statements,” which explain LC interpretations of RDA rules and replace the old LC Rule Interpretations that explained LC decisions on AACR2 rules. The Policy Statements are available for free with the RDA Toolkit. Regarding the ongoing development of RDA, Tillett said that there will be monthly minor corrections to RDA (typos and such), with more substantive major updates issued twice per year. Tillett also spoke of the Bibliographic Framework Transition Initiative, which is working to develop a metadata schema to replace the MARC formats. This group released a background statement and general plan in November 2011, and is in the process of developing a funding proposal and forming an advisory group with various players in the library metadata field.

On Sunday, January 22, I attended a meeting of the RDA Update Forum, where Beacher Wiggins of LC reaffirmed much of what Barbara Tillett said, but stated more forcefully that the Library of Congress and the other national libraries fully intend to implement RDA in 2013. He did, however, allow for a little more flexibility in his timeline, placing RDA implementation in the first quarter of 2013, so anywhere from January 2 to March 31. Wiggins said that many of his colleagues are pushing for a January 2 date, but, taking into account how deadlines can slip, he would be happy with March 31. Nevertheless, the message was clear: RDA is coming.

Also at the RDA Update Forum, I heard Linda Barnhart from the Program for Cooperative Cataloging speak about how the PCC is preparing for the implementation of RDA (she said the key decisions of the PCC can be found at http://www.loc.gov/catdir/pcc ). The PCC is busily developing materials related to the RDA implementation. It has developed a set of Post-RDA Test Guidelines as well as an RDA FAQ. It has been working on guidelines for what it is calling a Day One for RDA authority records: a day (probably after LC adopts RDA) from which all new LC authority records will be created according to RDA rules instead of AACR2 rules. The PCC also has a Task Group on Hybrid Bibliographic Records, which has prepared guidelines for harmonizing RDA bib records with pre-RDA bib records. I know I’m sounding like a broken record here, but with all of this infrastructure being built up, make no mistake: RDA is coming.

On to other topics: I also attended an interesting session of the Next Generation Catalog Interest Group, where I heard Jane Burke of Serials Solutions speak about a new product they are developing that is designed to replace the back-end ILS. Burke said that Serials Solutions is looking to separate the discovery aspect of catalogs from their management aspect. Summon, as we already know, is their discovery solution, which is designed to allow for a single search with a unified result set. Serials Solutions is now working to develop a web-scale management solution they are calling Intota. Intota is an example of “software as a service” (Burke recommended looking the term up in Wikipedia, which I did). Burke argued that the old ILS model was riddled with redundancy, with every library cataloging the same things and everybody doing duplicate data entry (from suppliers to the ILS to campus systems). Intota would be a cloud-based service providing linked data and networked authority control (changes to LC authority headings would propagate to all member libraries, without the need to make local changes). It seems like an interesting model, and I look forward to hearing more about it.
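The networked authority control idea is easier to see in data terms: if each bib record stores an identifier that points to a shared authority, rather than a local text string, then one central edit is visible to every member library. A minimal sketch of the concept (the data is illustrative, and this is my guess at the general shape, not Intota’s actual design):
```python
# The shared, cloud-hosted authority file: one copy, keyed by identifier.
authorities = {"n79021164": "Twain, Mark, 1835-1910"}

# A member library's bib record stores the identifier, not the heading text.
bib_record = {"title": "Adventures of Huckleberry Finn", "author_id": "n79021164"}

def display_author(record):
    # Resolved at display time, so a central change to the authority
    # is picked up by every member library with no local edits.
    return authorities[record["author_id"]]

print(display_author(bib_record))   # Twain, Mark, 1835-1910
authorities["n79021164"] = "Twain, Mark, 1835-1910 (revised heading)"
print(display_author(bib_record))   # reflects the central change immediately
```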

I attended a number of other meetings, which will be of limited interest to a general audience, but something that was pretty cool was attending my first meeting as a member of the Editorial Board of Serials Review. After almost 20 years of working with serials, it was interesting to be on the other side of the process. We discussed the journal’s move to APA from Chicago style, a new formatting guide for the articles, future topics for articles, submission patterns, etc. It was very interesting.

As usual when I go to ALA, I saw several former ZSRers. I roomed with Jim Galbraith, who is still at DePaul University in Chicago. I also visited with Jennifer Roper and Emily Stambaugh, both of whom are expecting baby boys in May (small world!).

Steve at ALA Annual 2011

Tuesday, July 5, 2011 5:33 pm

I’m a bit late in writing up my report about the 2011 ALA in New Orleans, because I’ve been trying to find the best way to explain a statement that profoundly affected my thinking about cataloging. I heard it at the MARC Formats Interest Group session, which I chaired and moderated. The topic of the session was “Will RDA Be the Death of MARC?” and the speakers were Karen Coyle and Diane Hillmann, two very well-known cataloging experts.

Coyle spoke first, delivering a devastating critique of the MARC formats. She argued that MARC is about to collapse under its own strange construction, and that while we cannot redeem MARC, we can save its data. MARC was great in its day; it was a very well-developed code for books when it was designed. But as other material formats were added, such as serials, AV materials, etc., additions were piled on top of the initial structure, and as MARC was required to capture more data, its structure became increasingly elaborate and illogical. Structural limitations of the MARC formats required strange work-arounds, and different aspects of MARC records are governed by different rules (AACR2, the technical requirements of the MARC format itself, the requirements of ILSes, etc.). The cobbled-together nature of MARC has led to oddities such as publication dates and language information being recorded in both the (machine-readable) fixed fields of the record and the (human-readable) textual fields. Coyle further pointed out the oddity of the 245 title field, which can jumble together various types of data: the title of a work, the language, the general material designation, etc. This data is difficult to parse for machine processing. Although RDA needs further work, it is inching toward addressing these sorts of problems by allowing for the granular recording of data. However, for RDA to fully capture this granular data, we will need a record format other than MARC. To help develop a new post-MARC format, Coyle has begun a research project to break down and analyze MARC fields into their granular components. She began by looking at the 007/008 fields, finding that they contain 160 different data elements, with a total of 1,530 different possible values. This analysis can be used to develop separate identifiers for each value, which could be encoded in a MARC-replacement format. Coyle is still working on breaking down the rest of the MARC fields.
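Coyle’s duplication point is easy to make concrete. In a MARC record the publication date lives both in the machine-readable 008 fixed field (character positions 07-10) and in the human-readable 260 subfield $c, while the 245 can jumble together the title, the general material designation, and the statement of responsibility in one string. A simplified sketch, with plain Python dicts standing in for a real MARC parser and an invented record:
```python
# A simplified MARC record as a dict (real records are ISO 2709 binary).
record = {
    "008": "980612s1998    nyu           000 1 eng  ",
    "245": {"a": "The grand tour", "h": "[sound recording] /", "c": "read by Jane Doe."},
    "260": {"a": "New York :", "b": "Example House,", "c": "1998."},
}

# The same date is recorded twice, in two different conventions.
machine_date = record["008"][7:11]           # fixed-field Date 1 -> '1998'
human_date = record["260"]["c"].rstrip(".")  # textual date      -> '1998'
assert machine_date == human_date  # true here, but MARC never enforces it

# The 245 mixes distinct kinds of data (title, carrier, responsibility)
# in one field, which is what makes machine parsing so painful.
print(" ".join(record["245"].values()))
```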

After Karen Coyle, Diane Hillmann of Metadata Management Associates spoke about the developing RDA vocabularies, and it was a statement during her presentation that really struck me. The RDA vocabularies define a set of metadata elements and value vocabularies that can be used by both humans and machines. That is, they provide a link between the way humans think about and read cataloging data and the way computers process it. The RDA vocabularies can assist in mapping RDA to other vocabularies, including the data vocabularies of record schemas other than the MARC formats. And when RDA does not provide detailed enough entity relationships for particular specialized cataloging communities, the RDA vocabularies can be extended with more subproperties and relationships. The use of RDA vocabulary extensions means that RDA can grow, and not just from the top down. The description of highly detailed relationships between bibliographic entities (such as making clear that a short story was adapted as a radio play script) will increase the searching power of our patrons by allowing data to be linked across records. Hillmann argued that the record has created a tyranny of thinking in cataloging, and that our data should be thought of as statements, not records. That phrase, “our data should be thought of as statements, not records,” struck me as incredibly powerful, and the most succinct expression of why we need to eventually move to RDA. It truly was a “wow” moment for me. Near the end of her presentation, Hillmann summed up the thrust of her talk: we need to expand our ideas of what machines can and should be doing for us in cataloging.
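To see what “statements, not records” looks like in practice: in a linked-data view, each assertion is a freestanding subject-predicate-object triple that can be connected to any other. Here is a hypothetical sketch of Hillmann’s short-story example (the identifiers and property names are invented, not actual RDA vocabulary terms):
```python
# Bibliographic data as freestanding statements (triples), not records.
statements = [
    ("work:story-123", "ex:title", "An Example Story"),
    ("work:script-456", "ex:title", "An Example Story (radio play)"),
    ("work:story-123", "ex:adaptedAs", "work:script-456"),
]

def adaptations_of(work, triples):
    """Follow the 'adaptedAs' relationship across statements."""
    return [o for s, p, o in triples if s == work and p == "ex:adaptedAs"]

# A search can now traverse relationships that a flat record would bury.
print(adaptations_of("work:story-123", statements))  # ['work:script-456']
```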

The other session I went to that is really worth sharing with everybody was the RDA Update Forum. Representatives from the Library of Congress and the two other national libraries, as well as the chair of the PCC (Program for Cooperative Cataloging), discussed the results of the RDA test by the national libraries. The national libraries have requested that the JSC (the Joint Steering Committee, the body that oversees the RDA code) address a number of problems in the RDA rules over the next eighteen months or so. LC and the other national libraries have decided to put off implementing RDA until January 2013 at the earliest, but all indications were that they plan to adopt RDA eventually. As the JSC works on revising RDA, the national libraries are working to move to a new record format (aka schema or carrier) to replace the MARC formats. They are pursuing a fairly aggressive agenda, intending to develop, by September 30 of this year, a plan with a timeline for transitioning past MARC. The national libraries plan to identify the stakeholders in such a transition, and want to reach out to the semantic web community. They intend this to be a truly international effort that extends well beyond the library community as it is traditionally defined. They plan to set up communication channels, including a listserv, to share development plans and solicit feedback. They hope to have a new format developed within two years, but the process of migrating their data to the new format will take at least several more years after that. Needless to say, if the library world is going to move to a post-MARC format, it will create huge changes. Catalogs and ILS systems will have to be completely re-worked, and that’s just for starters. If some people are uncomfortable with the thought of moving to RDA, the idea of moving away from MARC will be truly unsettling. I for one think it’s an exciting time to be a cataloger.

Steve at NASIG 2011

Thursday, June 16, 2011 1:19 pm

At this year’s NASIG Conference, there were plenty of sessions on practical things (which I’ll discuss in a bit), but there were also several apt phrases and interesting concepts that jumped out at me. The first phrase came from a session on patron-driven access, where the speakers quoted Peter McCracken of Serials Solutions, who said, “What is interlibrary loan, but patron-driven access?” I thought this was a nice way to show that patron-driven access isn’t so foreign or new to libraries; we’ve been doing it for a long time, just under a different name. The second interesting concept came from one of our vision speakers, Paul Duguid, a professor at the University of California-Berkeley School of Information. He spoke about the importance of branding information in the information supply chain, as it supplies context and validation for information. When someone in the audience said that as librarians, we are experts in information (an old saw if ever there was one), Duguid responded that actually we’re experts in information structures. He went on to say that that’s one thing we have over Google, because an algorithm isn’t a structure. I found that very interesting. The third thought-provoking concept came in a session on getting the most out of negotiation. The speakers discussed the samurai idea of “ordered flexibility,” which is essentially the idea of studying and developing a plan, but being prepared to deviate from that plan as necessary to deal with changing conditions and opportunities. I really like this idea of “ordered flexibility,” as it fits with my philosophy of planning large-scale projects (if you develop a thorough plan, you have more room to adapt to changing conditions on the fly).

Now, as for the meat-and-potatoes of the sessions I attended, the most interesting one was called “Continuing Resources and the RDA Test,” where representatives from the three US national libraries (Library of Congress, National Agricultural Library, and National Library of Medicine) spoke about the RDA test that has been conducted over the last year and a half or so. This session was on June 5, so it was conducted before the report came out this week, and the speakers were very good about not tipping their hand (the national libraries have decided to delay implementing RDA until January 2013 at the earliest, but still plan to adopt the code). The session covered the details of how the test was conducted and the data analyzed. The 26 test libraries were required to produce four different sets of records. The most useful set was based on a group of 25 titles (10 monographs, 5 AV items, 5 serials, 5 integrating resources) that every tester was required to catalog twice, once using AACR2 rules and once using RDA rules. The national libraries then created RDA records for each of the titles and edited them until they were as close to “perfect” as possible. During the analysis phase, the test records for each item were compared against the national libraries’ benchmark RDA records and scored according to how closely they matched. The data produced during the RDA test will eventually be made available for testing by interested researchers (maybe you, Dr. Mitchell?).
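The scoring step is conceptually just a field-by-field comparison against the benchmark record. A toy illustration of the idea (the fields, values, and all-or-nothing scoring are my invention; the actual test instrument was far more detailed):
```python
def match_score(test, benchmark):
    """Share of benchmark fields that the test record reproduces exactly."""
    hits = sum(1 for field, value in benchmark.items() if test.get(field) == value)
    return hits / len(benchmark)

benchmark = {"title": "Example serial", "issn": "1234-5678", "frequency": "Quarterly"}
test_record = {"title": "Example serial", "issn": "1234-5678", "frequency": "Monthly"}
print(f"{match_score(test_record, benchmark):.0%}")  # 67%
```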

Another interesting session was conducted by Carol McAdam of JSTOR and Kate Duff of the University of Chicago Press. JSTOR, of course, provides backfiles to numerous journals, but they have begun working with certain partners to publish brand new content on the JSTOR platform. They are still trying to iron out all the details of their pricing model, but this move makes a lot of sense, it seems to me, especially for university presses. If all their material is eventually going to wind up residing on the JSTOR platform anyway, why not just make the new issues available with the backfiles to subscribing institutions?

I also saw a presentation by Rafal Kasprowski of Rice University about IOTA, a new NISO initiative designed to measure the quality of OpenURL links. Briefly, here’s how OpenURLs work: when a patron clicks on a citation, a Source OpenURL is generated, which, in theory, contains all of the information necessary to adequately describe the source. This Source OpenURL is sent to a link resolver, which consults a knowledge base to find the library’s holdings. If the library holds the item, the link resolver generates a Target OpenURL, which opens the full text. Prior to the development of IOTA, there was no way to test the reliability of this data, but IOTA tests the Source OpenURL and provides a standard for how much information it should contain in order to properly identify a resource.
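A Source OpenURL is, under the hood, just a URL with citation metadata packed into key/value pairs, and an IOTA-style measurement amounts to checking how many identifying elements are actually present. A minimal sketch (the resolver hostname is made up, and the completeness check is a simplification of what IOTA measures):
```python
from urllib.parse import urlencode

# Citation metadata packed into OpenURL 1.0 key/value (KEV) pairs.
citation = {
    "rft.genre": "article",
    "rft.jtitle": "Serials Review",
    "rft.atitle": "An example article",
    "rft.issn": "0098-7913",
    "rft.volume": "37",
    "rft.spage": "1",
    "rft.date": "2011",
}

# The Source OpenURL sent to the library's link resolver.
source_openurl = "https://resolver.example.edu/openurl?" + urlencode(citation)
print(source_openurl)

# A toy IOTA-style completeness check: how many of the elements needed
# to identify an article did the source actually supply?
NEEDED = ["rft.jtitle", "rft.atitle", "rft.issn", "rft.volume", "rft.spage", "rft.date"]
score = sum(1 for key in NEEDED if citation.get(key)) / len(NEEDED)
print(f"completeness: {score:.0%}")  # 100% for this well-formed example
```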

I also attended a session by Amanda Yesilbas of the Florida Center for Library Automation, who discussed how FCLA uses a Drupal-based website in place of an ERMS. I can’t say that I fully understood everything she said, but it might be an inexpensive, low-maintenance alternative to implementing a full-blown ERMS here at ZSR.

This was a busy conference for me. In addition to attending the last meeting of my two-year term as a member of the NASIG Executive Board, I started working on a new NASIG committee. And the conference was in St. Louis, my hometown, so I came in early with Shane so he could spend time with his grandparents, aunts, uncles, and cousins. I also took him to his first-ever Major League baseball game, and the Cardinals beat the Cubs handily.

Steve at 2011 NC Serials Conference

Monday, March 14, 2011 5:23 pm

On Thursday, I attended the 2011 NC Serials Conference, along with Chris, Linda Z., and Bradley. They’ll probably talk about other sessions, but I thought I’d give a brief recap of a presentation called “The Future of the Catalog,” by Tim Bucknall of UNC-Greensboro and Margaretta Yarborough of UNC-Chapel Hill, since that’s right in my area.

Bucknall began by talking about how, with the advent of the Web, the catalog is no longer the center of the information universe, and how we have seen a huge increase in the number and types of data sources found in the catalog. He also said that the catalog is no longer a listing of everything we have, but a listing of what we can access. Bucknall pointed to a survey that found that 93% of the public agree that Google provides worthwhile information, while only 78% agree that library websites do. What this means for the future of the catalog is that we can no longer even hope that the catalog will be a one-stop shop, and that we will need discovery tools that can search the catalog and other information silos simultaneously. Bucknall also pointed to a study showing that, out of a list of 18 possible catalog enhancements, library users and library directors rated adding more links to full text as the most desirable, while catalogers rated that item dead last. This indicates that, as a whole, the cataloging community must become more aware of the needs of its users.

Yarborough looked at the landscape of the cataloging world and found a loss of control for catalogers (as more outsiders add data to our catalogs), a changing balance of materials (shelf-ready materials vs. traditional cataloging), a change in the sheer volume of material cataloged (purchasing bib records for sets of electronic resources), and more demand for types of materials not formerly found in the catalog (archival collections, digital collections, etc.). She argued that in the catalog of the near future, library resources should be accessible where users search (rather than forcing them to come to our sites) and intelligible in ordinary language; the catalog must be intuitive and easy to navigate, with a discovery layer to provide a one-stop search; authority control will be more important than ever; and we will need user-added tags, reviews, and other enhancements to make our records more useful. Yarborough also discussed the ways this new type of catalog is shifting the tasks catalogers perform, with more emphasis on local/unique materials, more analysis and loading of batch records, and a general shift toward more project management functions.

This conference was a special one for me, because I (along with Chris) gave my first-ever conference presentation. We discussed our FixZak group and our model for managing e-resources and technical services troubleshooting. We had a lot of good questions afterward, and there seemed to be some genuine interest in our model.

Steve at LAUNC-CH 2011

Wednesday, March 9, 2011 4:01 pm

On Monday, I went to the LAUNC-CH Conference at the lovely Friday Center on the campus of UNC-Chapel Hill. Lauren discussed Lee Rainie’s keynote address, “Networked Individuals, Networked Libraries,” but I’d like to highlight the things I found interesting in his presentation. I found it very interesting that smartphones are the internet-capable technology most commonly used by lower-income populations in the U.S. Yes, higher-income folks use smartphones more than lower-income folks, but the gap in usage is smaller than with any other kind of technology, like laptops. Also, the ubiquity of mobile internet usage is revolutionizing personal relationships to the internet. People expect to get information anywhere, on any device, at any time. It has also changed the idea of place and of presence (the increasingly common experience of people using devices and being “alone together”). What is fascinating is that Rainie said the evidence increasingly shows that people who use information technology heavily treat it as a supplement to activity in the real world, not as a replacement for it (which is one of the standard-issue criticisms of heavy technology use). This new information landscape is leading to increased reliance on social networks, which work as sentries (word of mouth matters more and more), as information evaluators (they vouch for or discredit a source’s credibility and authenticity), and as forums for action (everybody is a broadcaster/publisher).

Steve at 2011 ALA Midwinter

Monday, January 17, 2011 6:03 pm

On January 5th, after one day back at work after Christmas break, I flew out to San Diego for ALA Midwinter. I had to get in a couple days early, because I had to attend a NASIG Executive Board meeting on the 6th. It was a very productive all-day meeting, where we talked about NASIG business and set new policies, but confidentiality forbids my discussing it in detail.

From Friday the 7th through Sunday the 9th, I attended a number of sessions at ALA Midwinter, and I can talk about those. Almost every session I attended focused on RDA (Resource Description and Access), the new cataloging code that has been proposed to replace AACR2. This is the biggest thing to hit the cataloging world in over 30 years, since AACR2 was adopted. I’ll try to boil down the useful information I gathered as best I can.

The RDA Update Forum was where I heard the biggest news. A representative from the Library of Congress (I’ll confess I didn’t catch his name) discussed the testing of RDA at LC and the two other major national libraries. The testing period closed on December 30, with about 7,000 RDA records created in OCLC, and they are now analyzing the test data. The first report of the analysis is due to the management of the national libraries by March 31, and they plan to issue a joint decision on the adoption of RDA at ALA Annual in New Orleans in June. Their decision could range from refusing to adopt RDA at all, to adopting it as is, to adopting it after certain problems have been addressed by the Joint Steering Committee (the body responsible for creating RDA). However, since AACR2 is now dead and will not be updated in the future, it seems entirely out of the question that RDA would not be adopted at least provisionally by the Library of Congress. One way or another, RDA will be the new cataloging code.

Another speaker at the RDA Update Forum, Chris Cronin of the University of Chicago, was quite enthusiastic about the adoption of RDA. The University of Chicago is an RDA test library, and their experience was generally positive. Their general approach was, “It’ll be alright…really it will.” They involved all catalogers, professional and paraprofessional alike, in the test, chose to minimize local exceptions and follow the code as written whenever possible, and gave preference to cataloger’s judgment over broad policy decisions for every scenario. In the end, they found several things that catalogers disliked (such as the changing of established headings, the recording of copyright dates, and navigating the RDA Toolkit), and quite a few things that they liked (such as the way RDA expresses relationships between entities, the elimination of abbreviations, and the treatment of reproductions). The biggest area of concern centered on training and documentation for copy catalogers (questions to be addressed included: will copy catalogers accept copy records as-is? Will they update poor RDA copy? Will they upgrade AACR2 records to RDA?). At the end of the testing period, the University of Chicago catalogers held a vote on whether or not to adopt RDA, and they voted unanimously in favor of adopting the code. So my hope is that when we eventually move to RDA at Wake Forest, we’ll find, like Chicago did, that “it’ll be alright…really it will.”

I also attended the FRBR Interest Group meeting, where I heard an interesting presentation by Yin Zhang and Athena Salaba of Kent State University. They discussed their research project to use existing MARC records to create FRBR-compliant records at the work and expression levels (FRBR is the conceptual model for describing bibliographic entities that underlies RDA). I realize that’s probably clear as mud to most of you, but the important thing is that, in their research, they used algorithms to convert existing bib records into smaller records that describe the work in more abstract terms (that is, a record for all English-language versions of the work, a record for all Spanish-language versions, and a record for the work in its purest form, regardless of language). They found it difficult to use the algorithm to split existing MARC records into these finer-grained FRBR-compliant records without losing data or converting records incorrectly. Undoubtedly, some type of automation will be necessary for creating FRBR-ized records, but a great deal of human intervention will also be required to clean up errors, because moving from less clearly defined data to more clearly defined data is very difficult to accomplish using only computers.
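The heart of the algorithmic approach is a grouping problem: collapse manifestation-level bib records into expression sets (same work, same language) nested under work sets. A crude, hypothetical sketch of the happy path, which also shows why dirty data defeats it:
```python
from collections import defaultdict

# Manifestation-level bib records (toy data).
bibs = [
    {"author": "Cervantes", "title": "Don Quixote", "lang": "eng", "year": 1999},
    {"author": "Cervantes", "title": "Don Quixote", "lang": "eng", "year": 2005},
    {"author": "Cervantes", "title": "Don Quijote", "lang": "spa", "year": 2001},
]

def norm(s):
    # Real projects need much smarter normalization; weak matching
    # here is exactly where records get split or merged incorrectly.
    return s.lower().strip()

works = defaultdict(lambda: defaultdict(list))
for b in bibs:
    works[(norm(b["author"]), norm(b["title"]))][b["lang"]].append(b)

for work_key, expressions in works.items():
    print("Work:", work_key)
    for lang, manifestations in expressions.items():
        print("  Expression:", lang, "-", len(manifestations), "manifestation(s)")
```
Note that the Spanish title variant lands in a second “work” instead of a second expression of the same work; that is exactly the kind of error the human clean-up has to catch.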

I saw a presentation at the Cataloging Form & Function Interest Group that further discussed the issue of FRBR- and RDA-compliant records from an ILS perspective. John (something; again, I failed to catch his full name, even though I’ve seen him speak before) from VTLS talked about the implementation of RDA in their catalog. They appear to be the company most aggressively pursuing a full implementation of RDA, with an underlying relational/object-oriented database structure. Another presentation at this meeting discussed a joint experiment by four libraries involved in the RDA test: North Carolina State, University of Chicago, Columbia, and University of Illinois at Chicago. They experimented with encoding RDA records using formats other than MARC, namely MODS, EAD, and Dublin Core. The results were mixed: EAD worked pretty well overall, but MODS had too many fields with inferred data to be useful in machine processing, while Dublin Core did not allow for the fine granularity required by RDA.

I was quite interested to hear this discussion of using alternative formats to record RDA data, because I’m currently chair of the ALCTS/LITA MARC Formats Interest Group, and I had to recruit speakers to discuss a topic at Midwinter (and will have to do it again at Annual). The topic I gave my presenters was, “Will RDA Mean the Death of MARC?” People have been saying for years that MARC is inadequate and needs to be replaced, but there has been no serious movement toward adopting a new format, so I wondered if the adoption of RDA might be a big enough event to start movement toward one. The speakers I invited all thought that MARC should be replaced, but they approached the issue from different directions. Chris Cronin of the University of Chicago suggested that RDA won’t kill MARC; MARC will kill MARC, by which he meant that the inadequacies of the format will make it untenable in the future. Cronin didn’t know what format would rise up to replace MARC, but he strongly urged that we begin the conversation in the cataloging community. However, the questions of who will finance this shift and who will do the work loom large. Also, we must be prepared to deal with the fact that any wholesale move from MARC to another format will inevitably result in the loss of some data. Jacquie Samples, newly of Duke University, spoke about the need to develop a successor format. In an apt metaphor, she said it was as if MARC were a very old king who had done great things in his youth, but had been lingering on his deathbed for many years, with no clear line of succession and not even a decent contender for the crown available. And Kelley McGrath, of the University of Oregon, spoke of how the more richly detailed data required by RDA will quickly hit up against the structural limitations of the MARC formats, and how, if we wish to take advantage of the possibilities offered by this new, richer data, we will need to find an alternative to MARC. The session was quite well attended, and the audience included representatives from the US MARC Office and the Library of Congress. At least one attendee told me afterwards that I had given my speakers a controversial set of questions and that they were brave. So this may either generate some interest in the problem, or make me a pariah in the cataloging community. I guess we’ll see which one at the next meeting.

I’m already running long, but I just want to say quickly that I also managed to have some fun in San Diego. I saw a number of former ZSRers. I roomed with Jim Galbraith, who is now at DePaul University in Chicago, and I had lunch with Jennifer Roper and Emily Stambaugh (with whom I did some impromptu consulting on a data management problem). As Erik posted, the two of us had a brief but intense conversation about RDA in the halls of the San Diego Convention Center, and I hope we’ll continue that conversation on this side of the continental divide. And, on my last night in San Diego, I went to dinner with Susan, Roz, Carolyn, Molly, and Bill Kane, where Bill announced the ACRL Award to the table with a champagne toast. After that, we went to a reception and tour at Petco Park, where the San Diego Padres play. It was a pleasant way to end my stay in San Diego, even if my travels home were somewhat difficult (I’ll spare you that story).

The Future of Cataloging – Steve at ALA

Tuesday, July 6, 2010 1:49 pm

It’s not often that you go to a conference and have a major realization about the need to re-organize how you do your work and how your library functions, but I did at this year’s ALA. Through the course of several sessions on RDA, the new cataloging code that is slated to replace AACR2, I came to realize that we very much need to implement and maintain authority control at ZSR. This is not easy to say, as it will necessarily involve some expense and a great deal of time and effort, but without proper authority control of our bibliographic database, our catalog will suffer an ever-diminishing quality of service, frustrating patrons and hindering our efficiency.

You may ask, what is authority control? It’s the process whereby catalogers guarantee that the access points (authors, subjects, titles) in a bibliographic record use the proper or authorized form. Subject headings change, authors may have the same or similar names, and without controlling the vocabulary used, users can be confused, retrieving the wrong author or not retrieving all of the works on a given subject.
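In software terms, an authority file is a lookup table from every variant or obsolete form of a heading to the single authorized form. A bare-bones, hypothetical sketch (toy data):
```python
# A tiny authority file: variant and obsolete headings point to
# the one authorized form used throughout the catalog.
authority_file = {
    "cookery": "Cooking",
    "cooking": "Cooking",
    "clemens, samuel langhorne": "Twain, Mark, 1835-1910",
    "twain, mark": "Twain, Mark, 1835-1910",
}

def authorized_form(heading):
    """Collapse whatever form a record or search uses to the authorized one."""
    return authority_file.get(heading.lower().strip())

# Both forms retrieve the same author: no split files, no missed works.
assert authorized_form("Clemens, Samuel Langhorne") == authorized_form("Twain, Mark")
print(authorized_form("Cookery"))  # Cooking
```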

Here at ZSR we have historically had no authority maintenance to speak of. Our catalog records were sent out to a company to have the authorities cleaned up shortly before I started working here, and I started working here eight years ago this month. I have long thought that our lack of an authority control system was a problem; however, it did not seem to be a crisis. That was until RDA came along.

RDA (or Resource Description and Access) is, as you probably know by now, the new cataloging code that is supposed to replace AACR2. AACR2 focuses on the forms of the items cataloged: whether the item is a book, a computer file, an audio recording, etc. The description gives you plenty of information about the item, the number of pages, the publisher, and so on. If you do not have authority control (as we don’t), you may have a book with a title similar to another book’s, but you can distinguish it by its description: the book I need has 327 pages, with 8 pages of color prints; it’s 23 cm high; and it was published by Statler & Waldorf in 1998. In RDA, the focus is on points of access and on identifying works. Say you have a novel (a work) that has been published as a print book, as an audio book, and as an electronic book (by three different publishers on three different platforms). With RDA, you want to create a record that identifies the work, the novel as an abstract concept, not the specific physical (or electronic) form the novel takes. It’s not as easy to resort to the physical description (as with AACR2), because there may be no physical entity to describe at all. In that case, who wrote the book, the exact title of the book, and the subjects of the book become of paramount importance for identifying a work. RDA essentially cannot function without proper authority control. (I had realized this during the course of the presentations I attended, but on my last day, a speaker’s first conclusion about preparing for RDA was “Increase authority control.”)
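To picture the shift: under RDA, the novel gets one work-level description identified by its access points, and the print, audio, and electronic versions hang off it as separate manifestations. A hypothetical sketch of that structure (the names are invented, apart from my imaginary publisher above):
```python
# One abstract work, identified by controlled access points
# (creator + title), not by any physical description.
work = {"creator": "Doe, Jane", "title": "An Example Novel"}

# Three manifestations of the same work, each with its own carrier details.
manifestations = [
    {"work": work, "carrier": "volume", "publisher": "Statler & Waldorf", "year": 1998},
    {"work": work, "carrier": "audio disc", "publisher": "Audio House", "year": 2003},
    {"work": work, "carrier": "online resource", "publisher": "E-Platform", "year": 2009},
]

# The e-book has no page count to describe, so the authorized creator
# and title are what tie all three back to the same work.
assert len({id(m["work"]) for m in manifestations}) == 1
```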

RDA is still being finalized, and the Library of Congress is currently conducting a test to decide by March 2011 whether it will adopt RDA. However, that test period seems a mere formality. There appears to be considerable momentum for the adoption of RDA, and I believe it will be adopted, even if many catalogers have reservations about it. We may have another year or two before the momentum forces us to move to RDA, but in the meantime, I believe we need to get some sort of authority control system in place.

The advantages of authority control will be felt almost immediately in our catalog. The use of facets in VuFind will be far more efficient if the underlying data in the subject headings is in proper order. Also, as we move to implement WakeSpace, which will make us in essence publishers of material, we will need to make sure that we have our authors properly identified and distinguished from others (we need to make sure that our David Smith is the one we’ve got listed as opposed to another David Smith). Also, should we ever attempt to harvest the works of our university authors in an automated way to place them in WakeSpace, we will need to make sure that we are identifying the proper authors. The only way to do that is through authority control.

This issue will require some research and study before we can move forward with implementing authority control and maintenance. We will need some training for our current catalogers (definitely including me), we will need to have our current database’s authorities “cleaned up,” and we will have to institute a way to maintain our authorities, possibly including the hiring of new staff. It won’t be easy, and it won’t be cheap, but if our catalog is to function in an acceptable manner, I think it’s absolutely necessary.

Needless to say, I’ll be happy to talk about this with anyone who wants to.

