Professional Development

Author Archive

Steve at 2013 LAUNC-CH Conference

Friday, April 5, 2013 5:17 pm

I attended the 2013 LAUNC-CH Conference in Chapel Hill on March 11. This year’s theme was “True Collaborations: Creating New Structures, Services and Breakthroughs.” The session that interested me the most was the keynote address by Rick Anderson, Interim Dean of the J. Willard Marriott Library at the University of Utah.
As is typical of Rick, his speech took a provocative approach, apparent in its title, “The Purpose of Collaboration Is Not to Collaborate.” By this, he means that there may be plenty of benefits to be gained by collaborating, but that you should never collaborate on something merely for the sake of collaborating. Here are his major points:

Ground Rules and First Principles:
– Humility
– Patrons come first (a guiding concern here at ZSR)
– Waste nothing
– Fail often, fail early, write an article, move on
– Know what is sacred and what is instrumental
– Keep means and ends in proper relation

Means and Ends:
– The purpose of innovation is not to innovate…but to improve.
– The purpose of committees is not to meet…but to solve problems.
– The purpose of risk-taking is not to take risks…but to do new and better things.
– The purpose of collaboration is not to collaborate.

Reasons to Collaborate:
– To create leverage
• Economies of scale
• Increased impact
– To improve services
• Draw on more brains
• Include multiple perspectives
– To build relationships
• On campus
• External to campus
– To bring complexity indoors

Bringing Complexity Indoors
– Good complexity and bad complexity (AKA Richness vs. Confusion). You want to bring bad complexity indoors rather than making patrons suffer through whatever is problematic.
– Who is paying and who is paid?
• Patrons as “customers”
– What is our goal?
• Education vs. frustration

Opportunities to Collaborate
– Osmotic (by osmosis, Anderson means that if you have open library space, it will be filled; make sure your library is filled with the stuff you want in it)
– Serendipitous
– Strategic (Institutional)
• To advance university goals
• To advance library goals
– Strategic (Political)
• To bank political capital
• To strengthen library’s brand

Collaborating Better
– Think outside the ghetto. Don’t just focus on the library world; other fields may have fruitful areas for collaboration. Anderson gave a wonderful example of this, describing the Pumps and Pipes Conference, an annual conference in Houston that brings together heart doctors and people from the oil industry. Both fields are concerned with pumping viscous fluid through tubes, and they have much to teach each other.
– Work from ends to means (not vice versa).
– Listen promiscuously
– Ask this question up front: “How will we know when the task is accomplished?”
– If a project is open-ended, assess regularly
– Evaluate outcomes, not processes.

Steve at ALA Midwinter 2013

Friday, February 8, 2013 2:10 pm

Although my trip to Seattle for the ALA Midwinter Conference had a rough start (flight delayed due to weather, nearly missed a connecting flight, my luggage didn’t arrive until a day later), I had a really good, productive experience. This Midwinter was heavy on committee work for me, and I was very focused on RDA, authority control and linked data. If you want a simple takeaway from this post, it’s that RDA, authority control and linked data are all tightly bound together and are important for the future of the catalog. If you want more detail, keep reading.
My biggest commitment at the conference was participating in two long meetings (over four hours on Saturday afternoon and three hours on Monday morning) of CC:DA (the Committee on Cataloging: Description and Access). I’m one of nine voting members of CC:DA, which is the committee responsible for developing ALA’s position on RDA. The final authority for making changes and additions to RDA is the JSC (Joint Steering Committee), which has representation from a number of cataloging constituencies, including ALA, the national library organizations of Canada, the UK, and Australia, as well as other organizations. ALA’s position on proposals brought to the JSC is voted on by CC:DA. Membership on this committee involves reading and evaluating a large number of proposals from a range of library constituencies. Much of the work of the committee has so far involved reviewing proposals regarding how to form headings in bibliographic records, which is, essentially, authority control work. We’ve also worked on proposals to make the rules consistent throughout RDA, to clarify the wording of rules, and to make sure that the rules fit with the basic principles of RDA. It has been fascinating to see how interconnected the various cataloging communities are, and how they relate to ALA and CC:DA. As I said, I am one of nine voting members of the committee, but there are about two dozen non-voting representatives from a variety of committees and organizations, including the Music Library Association, the Program for Cooperative Cataloging, and the Continuing Resources Cataloging Committee of ALCTS.
During our Monday meeting, we saw a presentation by Deborah Fritz of the company The MARC of Quality on a visualization tool called RIMMF (RDA In Many Metadata Formats). RIMMF shows how bibliographic data might be displayed when RDA is fully implemented. The tool is designed to take RDA data out of MARC, because it is hard to think about how data might relate in RDA when it is confined by the restrictions of MARC. RIMMF shows how the FRBR concepts of work, expression and manifestation (which are part of RDA) might be displayed by a public catalog interface. It’s still somewhat crude, but it gave me a clearer idea of the kinds of displays we might develop, as well as a better grasp of the eventual benefits to the user that will come from all our hard work of converting the cataloging world to RDA. RIMMF is free to download, and we’re planning to play around with it some here in Resource Services.
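To make the work/expression/manifestation idea a bit more concrete, here is a minimal sketch of how those FRBR groupings might be modeled. It is my own illustration with made-up sample data, not RIMMF’s actual data model or display logic.

```python
# A minimal sketch (my own illustration, not RIMMF's actual data model)
# of the FRBR grouping RIMMF visualizes: a Work is realized through
# Expressions, and each Expression is embodied in Manifestations that a
# catalog display could cluster under a single entry.
from dataclasses import dataclass, field

@dataclass
class Manifestation:            # a specific published form
    publisher: str
    year: int
    carrier: str                # e.g., "print", "e-book", "audio CD"

@dataclass
class Expression:               # a realization of the work
    language: str
    content_type: str           # e.g., "text", "spoken word"
    manifestations: list = field(default_factory=list)

@dataclass
class Work:                     # the abstract creation
    title: str
    creator: str
    expressions: list = field(default_factory=list)

# Hypothetical example data: one work, two expressions, three manifestations
work = Work("Moby-Dick", "Melville, Herman", [
    Expression("eng", "text", [
        Manifestation("Harper & Brothers", 1851, "print"),
        Manifestation("Project Gutenberg", 2001, "e-book"),
    ]),
    Expression("eng", "spoken word", [
        Manifestation("Naxos AudioBooks", 2005, "audio CD"),
    ]),
])

# Instead of three flat records, a display built on these relationships
# can show one entry for the work and group every version beneath it.
for expr in work.expressions:
    for m in expr.manifestations:
        print(f"{work.title} | {expr.content_type} ({expr.language}) | "
              f"{m.publisher}, {m.year} [{m.carrier}]")
```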
I also attended my first meeting of another committee of which I am a member, the Continuing Resources Cataloging Committee of the Continuing Resources Section (CRS) of ALCTS. Continuing resources include serials and web pages, so CRS is the successor to the old Serials Section. We discussed the program that we had arranged for that afternoon on the possibilities of using linked data to record serial holdings. Unfortunately, I had to miss the program due to another meeting, but I’m looking forward to seeing the recording. We also brainstormed ideas for our program at Annual in Chicago, and the committee’s representative to the PCC Standing Committee on Training gave us an update on RDA training initiatives.
The most interesting other meeting that I attended was the Bibframe Update Forum. Bibframe is the name of an initiative to develop a data exchange format to replace the MARC format(s). The Bibframe initiative hopes to develop a format that can turn library data into linked data, that is, data that can be exchanged on the semantic web. Eric Miller, from the company Zepheira (one of the players in the development of Bibframe), explained that the semantic web is about linking data, not just documents (as a metaphor, think of old PDF files that could not be searched but were flat documents; the only unit you could search for was the entire document, not the meaningful pieces of content within it). The idea is to create recombinant data, that is, small blocks of data that can be linked together. The basic architecture of the old web leaned toward linking full documents, rather than breaking statements down into meaningful units that could be related to each other. The semantic web emphasizes the relationships between pieces of data. Bibframe hopes to make it possible to record the relationships between pieces of data in bibliographic records and to expose library data on the web and make it sharable.
At the forum, Beacher Wiggins told the audience about the six institutions that are experimenting with the earliest version of Bibframe: the British Library, the German National Library, George Washington University, the National Library of Medicine, OCLC, and Princeton University. Reinhold Heuvelmann of the German National Library said that the model is defined at a high level, but that it needs more detail to allow for recording more granular data, which is absolutely necessary for fully recording the data required by RDA. Ted Fons of OCLC spoke of how Bibframe is an attempt to develop a format that can carry the data libraries need and allow library data to interact with other data and the wider web. Fons said that Bibframe data has identifiers that are web-accessible URIs. He also said that Bibframe renders bibliographic data as statements that are related to each other, rather than as self-contained records, as with MARC (I’ve included a small illustrative sketch of this idea at the end of this post). Bibframe breaks free of the constraints of MARC, which basically rendered data as catalog cards in electronic format. Bibframe is still going through quite a bit of development, but it is moving quickly. Sally McCallum of the MARC Standards Office said that they hope to finalize aspects of the Bibframe framework by 2014, but acknowledged that, “The change is colossal and the unexpected will happen.”
Actually, I think that’s a good way to summarize my thoughts on the current state of the cataloging world after attending this year’s Midwinter: “The change is colossal and the unexpected will happen.”
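As a footnote to Fons’s point about statements rather than records, here is a toy illustration of what that looks like in practice. The identifiers and property names below are made up for the example (they are not the actual Bibframe vocabulary); the point is simply that each fact stands on its own and links to others.

```python
# A toy illustration of "statements, not records." The identifiers and
# property names are made up for this example (not the actual Bibframe
# vocabulary); the idea is that every fact is a small
# (subject, predicate, object) statement, and shared identifiers let
# statements from different sources link to one another.
statements = [
    ("ex:work/moby-dick",  "ex:title",      "Moby-Dick"),
    ("ex:work/moby-dick",  "ex:creator",    "ex:person/melville"),
    ("ex:person/melville", "ex:name",       "Melville, Herman"),
    ("ex:item/b1234",      "ex:instanceOf", "ex:work/moby-dick"),
    ("ex:item/b1234",      "ex:heldBy",     "ex:org/zsr-library"),
]

def objects_of(subject, predicate):
    """Return every object asserted for a given subject and predicate."""
    return [o for s, p, o in statements if s == subject and p == predicate]

# Because the data is broken into linked statements rather than locked
# inside one self-contained record, any piece can be followed on its own:
work = objects_of("ex:item/b1234", "ex:instanceOf")[0]
print(objects_of(work, "ex:title"))      # ['Moby-Dick']
print(objects_of(work, "ex:creator"))    # ['ex:person/melville']
```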

Steve at NASIG 2012

Thursday, June 14, 2012 5:03 pm

Last Thursday, Chris, Derrik and I hopped in the library van and drove to Nashville for the NASIG Conference, returning on Sunday. It was a busy and informative conference, full of information on serials and subscriptions. I will cover a few of the interesting sessions I attended in this post.
One such session was called “Everyone’s a Player: Creation of Standards in a Fast-Paced Shared World,” which discussed the work of NISO and the development of new standards and “best practices.” Marshall Breeding discussed the ongoing development of the Open Discovery Initiative (ODI), a project that seeks to identify the requirements of web-scale discovery tools, such as Summon. Breeding pointed out that it makes no sense for libraries to spend millions of dollars on subscriptions if nobody can find anything, so in this context it makes sense for libraries to spend tens of thousands on discovery tools. But, since these tools are still so new, there are no standards for how they should function and operate with each other. ODI plans to develop a set of best practices for web-scale discovery tools, and is beginning this process by developing a standard vocabulary as well as a standard way to format and transfer data. The project is still in its earliest phases and will have its first work available for review this fall. Also at this session, Regina Reynolds from the Library of Congress discussed her work with the PIE-J initiative, which has developed a draft set of best practices that is ready for comment. PIE-J stands for the Presentation & Identification of E-Journals, and its best practices give publishers guidance on how to present title changes, issue numbering, dates, ISSN information, publishing statements, etc. on their e-journal websites. Currently, it’s pretty much the Wild West out there, with publishers following unique and puzzling practices. PIE-J hopes to help clean up the mess.
Another session that was quite useful was on the “CONSER Serials RDA Workflow,” where Les Hawkins, Valerie Bross and Hien Nguyen from the Library of Congress discussed the development of RDA training materials at the Library of Congress, including CONSER serials cataloging materials and general RDA training materials from the PCC (Program for Cooperative Cataloging). I haven’t had a chance yet to root around on the Library of Congress website, but these materials are available for free, and include a multi-part course called “Essentials for Effective RDA Learning” that comprises 27 hours (yikes!) of instruction on RDA, including a 9-hour training block on FRBR, a 3-hour block on the RDA Toolkit, and 15 hours on authority and description in RDA. This is for general cataloging, not specific to serials. Also, with LC working to develop a replacement for the MARC formats, there is a visualization tool called RIMMF, available at marcofquality.com, that allows for creating visual representations of records and record relationships in a post-MARC record environment. It sounds promising, but I haven’t had a chance to play with it yet. Also, the CONSER training program, which focuses on serials cataloging, is developing a “bridge” training plan to transition serials catalogers from AACR2 to RDA, which will be available this fall.
Another interesting session I attended was “Automated Metadata Creation: Possibilities and Pitfalls” by Wilhelmina Randtke of the Florida State University Law Research Center. She pointed out that computers like black-and-white decisions and are bad with discretion, while creating metadata is all about identifying and noting important information. Randtke said computers love keywords but are not good with “aboutness” or subjects. So, in her project, she tried to develop a method of using computers to generate metadata for graduate theses. Some of the computer talk got very technical and confusing for me, but her discussion of subject analysis was fascinating. Using certain computer programs for automated indexing, Randtke did a data scrape of the digitally encoded theses and identified recurring keywords. This keyword data was run through ontologies/thesauruses to identify more accurate subject headings, which were applied to the records. A person needs to select the appropriate ontology/thesaurus for the item(s) and review the results, but the basic subject analysis can be performed by the computer. Randtke found that the results were cheap and fast, but incomplete. She said, “It’s better than a shuffled pile of 30,000 pages. But, it’s not as good as an organized pile of 30,000 pages.” So, her work showed some promise, but still needs some work.
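To give a rough sense of the workflow Randtke described, here is a small sketch of the keyword-to-subject-heading step. The stopword list, mini-thesaurus, and threshold are hypothetical stand-ins of my own, not her actual programs or vocabularies; the point is that the machine proposes headings and a person still reviews them.

```python
# A rough sketch of the subject-analysis step Randtke described. The
# stopword list, mini-thesaurus, and threshold are hypothetical stand-ins,
# not her actual tools: scrape the text, count recurring keywords, and map
# the recognized ones through a controlled vocabulary to candidate headings.
import re
from collections import Counter

STOPWORDS = {"the", "of", "and", "a", "in", "to", "is", "for", "on", "law"}

# Hypothetical mini-thesaurus: keyword -> controlled subject heading
THESAURUS = {
    "copyright": "Copyright -- United States",
    "trademark": "Trademarks -- Law and legislation",
    "patent": "Patents",
}

def candidate_headings(full_text: str, min_count: int = 3) -> list:
    words = re.findall(r"[a-z]+", full_text.lower())
    counts = Counter(w for w in words if w not in STOPWORDS)
    # Keep only keywords that recur often enough and that the vocabulary knows
    return sorted({THESAURUS[w] for w, n in counts.items()
                   if n >= min_count and w in THESAURUS})

thesis_text = ("Copyright protection for digital works ... copyright term "
               "extension ... fair use and copyright ... patent law ...")
print(candidate_headings(thesis_text, min_count=2))
# ['Copyright -- United States']  -- a cataloger would still review these,
# which matches Randtke's "cheap and fast, but incomplete" result.
```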
Of course there were a number of other interesting presentations, but I have to leave something for Chris and Derrik to write about. One idea that particularly struck me came from Rick Anderson during his thought-provoking all-conference vision session on the final day: “To bring simplicity to our patrons means taking on an enormous level of complexity for us.” That basic idea has been something of an obsession of mine for the last few months while wrestling with authority control and RDA and considering the semantic web. To make our materials easily discoverable by the non-expert (and even the expert) user, we have to make sure our data is rigorously structured, and that requires a lot of work. It’s almost as if there’s a certain quantity of work that has to be done to find stuff, and we either push it off onto the patron or take it on ourselves. I’m in favor of taking it on ourselves.
The slides for all of the conference presentations are available here: http://www.slideshare.net/NASIG/tag/nasig2012 for anyone who is interested. You do not need to be a member of NASIG to check them out.

Steve at 2012 North Carolina Serials Conference

Tuesday, March 27, 2012 1:04 pm

On March 16, I attended the 2012 North Carolina Serials Conference at UNC-Chapel Hill, along with a number of folks from ZSR. I’m going to write about one session that really interested me, because I think it’s worth a fairly in-depth recap.
The session in question was the closing keynote address by Kevin Guthrie, who was the first employee of JSTOR (back when it was a one-person operation) and is a co-founder of the non-profit organization ITHAKA. Guthrie’s presentation, “Will Books Be Different?” examined the world of electronic books. He began with a brief history of the transition from print to electronic journals, emphasizing that the process was generally driven by the academic world, in the sense that publishers built their electronic journal models with the academic world as the intended audience for their products. Guthrie pointed out that the electronic book, particularly the scholarly electronic book, faces a very different set of circumstances. Library budgets in the 2010s are even tighter than they were in the ’90s and ’00s. Libraries have to figure out how to do more with fewer resources. At the same time, everyone is connected, and the scale of digital activity supports massive commercial investment by publishers. The academic world is now reacting to publishers, rather than the other way around.
The consumer web is influencing the availability of scholarly electronic books. It is on the consumer web where companies are taking advantage of new network and digital technologies in transformative ways, unlike in the ’90s when electronic journals were developed. The audacity of the Google Books Project has made it seem genuinely possible that all books will eventually be digital, which is exciting, but that does not mean that scholarly books will be a priority of digital publishers. Commercial companies are becoming increasingly focused on what gets accessed, as opposed to the intrinsic research value of sources. Furthermore, search and discovery capabilities could prove critical to determining which book content gets supported by e-publishers.
Licensing issues are also different in the world of e-books than in the world of e-journals. At a basic level, libraries tend to want to own books, rather than license or subscribe to them. We have gotten over that tendency with journals, but, for many librarians, books somehow feel qualitatively different. Also, Digital Rights Management (DRM) technologies can be difficult to manage. In addition, there is the question of how individual access and institutional access interact when dealing with remotely accessible and downloadable e-book content. Consortial purchasing is also complicated by e-books. Historically, library consortia have purchased books to share, rather than each library purchasing its own copy. How do libraries share e-books with each other without violating licensing or access terms? Or does the pricing model for e-books change such that consortia no longer make sense at all?
We are seeing the development of new versions of the Big Deal, like the ones we have had for e-journals for years. There has been consolidation among publishers, and libraries are being offered access to more content at a package price. However, it is hard to measure good value. Usage statistics are generally far lower for books than for journals. Libraries may buy a big e-book package and see very little use, which can either indicate that the package is not worth it or that the use it does get still justifies the costs. We’re still in the early days of e-books, so no real benchmarks for what constitutes substantial use have developed.
Having said all that, despite the complications, libraries are buying more e-books. However, Guthrie argued that libraries are ahead of our users with regard to e-books. He pointed to a recent e-book usage survey which found that 53% of undergraduates prefer print books to e-books, a higher percentage than any other university population! We need to think about what resources our users need and how they use them, particularly where e-books are concerned.

Steve at 2012 LAUNC-CH Conference

Monday, March 12, 2012 5:39 pm

Last week I attended the 2012 conference for LAUNC-CH, the Librarians’ Association of UNC-Chapel Hill. This year’s theme was “Engage, Innovate, Assess: Doing More With Less.” The keynote address, “Doing More With Less,” was given by Dick Blackburn, Associate Professor of Organizational Behavior at UNC-CH’s Kenan-Flagler Business School. In his frequently humorous presentation, Blackburn discussed how innovation and creativity can help an organization do more with less. He outlined a cyclical creative/innovative process in six steps, which I will quote here:
1) Motivation – You really gotta wanna.
2) Preparation – Making the strange familiar (that is, learning about what materials/ideas you are working with).
3) Manipulation – Making the familiar strange (taking materials/ideas and putting them together in a new way).
4) Incubation – The occasional need for disruption, distraction, distancing, and/or disengagement from the idea process.
5) Illumination – The “Aha!” experience.
6) Verification/Evaluation – Testing the acceptability of the creative idea.
…and this process can then feed back to motivation.
After outlining this process, Blackburn argued that organizations should not only reward innovation that succeeds, but should also reward innovation that fails, so long as it is “smart failure,” that is, failure that you can learn from. If you punish failure, you stifle creativity. As long as an idea is planned out well and the risk taken is not too enormous (the phrase he used was “below the waterline,” meaning that it can sink the organization), the risk taker should be rewarded for trying. If anything, organizations should punish inaction, not failure. To that end, Blackburn encouraged the development of divergent thinking that takes risks.
I also attended a very interesting session called “Can the Past Be Prologue? What We Can Learn from How the UNC Library Weathered the Great Depression” by Eileen McGrath and Linda Jacobson. They discussed a number of useful lessons that could be drawn from UNC’s experience during the Depression and that are applicable in the current tough economic climate.
The first lesson was the importance of cooperation. During the Depression, UNC and Duke developed the first cooperative collection development agreements between the two universities, sharing the load for buying expensive, rare, and little-used materials. Duke library staff who attended library school classes at UNC would carry loaned books back and forth between the libraries. In order to further cooperative loaning, the library at UNC developed the first union catalog for the state.
The second lesson McGrath and Jacobson drew was the importance of supporting and recognizing staff. During the Depression, library staff at UNC suffered a 30% pay cut. The library began a newsletter during this period, in which staff accomplishments were recognized. More effort was made by the library administration to congratulate staff and to encourage and reward initiative at work.
The third lesson was to find new ways to develop the collection other than purchasing materials. UNC was very successful at this during the Depression. From 1929 to 1930, 70% of new materials were purchased, but from 1934 to 1935, only 32% of new materials were purchased. Even so, during this period the collection grew, and so did its rate of growth. This was accomplished by seeking out private donors, gifts, and exchanges. Also, in 1933, a new law required that the state government deposit 25 copies of every state document at UNC. Perhaps the most innovative approach was the work of history professor J.G. de Roulhac Hamilton, who pioneered the collection of manuscripts and family papers throughout the South. Hamilton was so successful in this endeavor that he was nicknamed “Ransack,” and there are stories that he was actually banned from entering some of our neighboring states because he was stripping away their cultural heritage.
The fourth major lesson from McGrath and Jacobson was that false economies catch up with you. Don’t try to do things on the cheap, because you will eventually pay, and often much more than you would have. The Wilson Library was built during the Depression, but to save money, the original usable stack space was cut from 450,000 volumes to 300,000 volumes. They hit storage problems far sooner than projected. Similarly, they tried to be cheap with lighting in the public areas of the library. The students launched a very vocal campaign for better lighting, including editorials, petitions and protests, which eventually resulted in new lighting being purchased. Sometimes it’s better to just pay the costs upfront.

Steve at 2012 ALA Midwinter

Tuesday, January 31, 2012 6:35 pm

So, if you read nothing else in my post about ALA Midwinter, please take away this fact: RDA is coming. At several sessions, representatives from the Library of Congress indicated that LC is moving forward with plans to adopt RDA early in 2013. When LC adopts RDA, the other libraries in the US will fall in line behind them, so it’s time to start preparing.

On Saturday, January 21, I attended a meeting of the Copy Cataloging Interest Group, where I heard Barbara Tillett, Chief of the LC Policy and Standards Division, speak about how LC is training its copy catalogers in RDA with an eye toward a 2013 implementation. She said much of the copy cataloger training material is focused on teaching when it is appropriate to change an AACR2 record to an RDA record, and when it is appropriate to change a master record in OCLC. LC has developed a set of RDA data elements that should always be included in its records, which it calls “LC core.” Tillett said that LC will adopt RDA no sooner than January 2013, contingent upon continued progress on the recommendations the national libraries made last spring regarding changes to RDA. In November 2011, LC returned most of the catalogers who participated in the RDA test (which wrapped up at the end of 2010) to cataloging with RDA, so that these catalogers could work on training, documentation, and further developing the RDA code itself. LC is making its work on RDA, including its copy cataloger training materials, available on its website (http://www.loc.gov/aba/rda). The Library of Congress has also begun releasing “LC Policy Statements” that explain LC interpretations of RDA rules and replace the old LC Rule Interpretations, which explained LC decisions on AACR2 rules. The Policy Statements are available for free with the RDA Toolkit. Regarding the ongoing development of RDA, Tillett said that there will be monthly minor corrections to RDA (typos and such), with more substantive updates issued twice per year. Tillett also spoke of the Bibliographic Framework Transition Initiative, which is working to develop a metadata schema to replace the MARC formats. This group released a background statement and general plan in November 2011. They are in the process of developing a funding proposal and forming an advisory group with various players in the library metadata field.

On Sunday, January 22, I attended a meeting of the RDA Update Forum, where Beacher Wiggins of LC reaffirmed much of what Barbara Tillett had said, but stated more forcefully that the Library of Congress and the other national libraries are really intent on implementing RDA in 2013. However, he allowed for a little more flexibility in his timeline, placing the date for RDA implementation in the first quarter of 2013, so anything from January 2 to March 31. Wiggins said that many of his colleagues are pushing for a January 2 date, but that, taking into account how deadlines can slip, he would be happy with March 31. Nevertheless, the message was clear: RDA is coming.

Also at the RDA Update Forum, I heard Linda Barnhart from the Program for Cooperative Cataloging, who spoke about how the PCC is preparing for the implementation of RDA (she said the key decisions of the PCC can be found at http://www.loc.gov/catdir/pcc). The PCC is busily developing materials related to the RDA implementation. They have developed a set of Post-RDA Test Guidelines as well as an RDA FAQ. They have been working on guidelines for what they are calling a Day One for RDA authority records, which will be a day (probably after LC adopts RDA) from which all new LC authority records will be created according to RDA rules instead of AACR2 rules. PCC also has a Task Group on Hybrid Bibliographic Records, which has prepared guidelines for harmonizing RDA bib records with pre-RDA bib records. I know I’m sounding like a broken record here, but with all of this infrastructure being built up, make no mistake: RDA is coming.

On to other topics: I also attended an interesting session of the Next Generation Catalog Interest Group, where I heard Jane Burke of Serials Solutions speak about a new product they are developing that is designed to replace the back-end ILS. Burke said that Serials Solutions is looking to separate the discovery aspect of catalogs from their management aspect. Summon, as we already know, is their discovery solution, designed to allow for a single search with a unified result set. Serials Solutions is working to develop a web-scale management solution they are calling Intota. Intota is an example of “software as a service” (Burke recommended looking it up in Wikipedia, which I did). Burke argued that the old ILS model was riddled with redundancy, with every library cataloging the same things and everybody doing duplicate data entry (from suppliers to the ILS to campus systems). Intota would be a cloud-based service that would provide linked data and networked authority control (changes to LC authority headings would be applied for all member libraries, without the need to make local changes). It seems like an interesting model, and I look forward to hearing more about it.

I attended a number of other meetings, which will be of limited interest to a general audience, but something that was pretty cool was attending my first meeting as a member of the Editorial Board of Serials Review. After almost 20 years of working with serials, it was interesting to be on the other side of the process. We discussed the journal’s move to APA from Chicago style, a new formatting guide for the articles, future topics for articles, submission patterns, etc. It was very interesting.

As usual when I go to ALA, I saw several former ZSRers. I roomed with Jim Galbraith, who is still at DePaul University in Chicago. I also visited with Jennifer Roper and Emily Stambaugh, both of whom are expecting baby boys in May (small world!).

Steve at ALA Annual 2011

Tuesday, July 5, 2011 5:33 pm

I’m a bit late in writing up my report about the 2011 ALA in New Orleans, because I’ve been trying to find the best way to explain a statement that profoundly affected my thinking about cataloging. I heard it at the MARC Formats Interest Group session, which I chaired and moderated. The topic of the session was “Will RDA Be the Death of MARC?” and the speakers were Karen Coyle and Diane Hillmann, two very well-known cataloging experts.

Coyle spoke first, delivering a devastating critique of the MARC formats. She argued that MARC is about to collapse due to its own strange construction, and that we cannot redeem MARC, but we can save its data. Coyle argued that MARC was great in its day; it was a very well-developed code for books when it was designed. But as other material formats were added, such as serials, AV materials, etc., additions were piled on top of the initial structure. And as MARC was required to capture more data, the structure of MARC became increasingly elaborate and illogical. Structural limitations of the MARC formats required strange workarounds, and different aspects of MARC records are governed by different rules (AACR2, the technical requirements of the MARC format itself, the requirements of ILSs, etc.). The cobbled-together nature of MARC has led to oddities such as publication dates and language information being recorded in both the (machine-readable) fixed fields of the record and the (human-readable) textual fields of the record. Coyle further pointed out the oddity of the 245 title field in the MARC record, which can jumble together various types of data: the title of a work, the language, the general material designation, etc. This data is difficult to parse for machine processing. Although RDA needs further work, it is inching toward addressing these sorts of problems by allowing for the granular recording of data. However, for RDA to fully capture this granular data, we will need a record format other than MARC. In order to help develop a new post-MARC format, Coyle has begun a research project to break down and analyze MARC fields into their granular components. She began by looking at the 007/008 fields, finding that they contain 160 different data elements, with a total of 1,530 different possible values. This data can be used to develop separate identifiers for each value, which could be encoded in a MARC-replacement format. Coyle is still working on breaking down the rest of the MARC fields.
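Her point about the 245 field is easier to see with an example. The sketch below is a simplified illustration of my own, not a real MARC parser: it takes a typical 245 string and tries to pull the pieces apart using nothing but the ISBD punctuation conventions, which is roughly all a machine has to go on.

```python
# A simplified illustration (not a real MARC parser) of Coyle's point:
# a 245 field jumbles the title proper, a general material designation,
# a subtitle, and a statement of responsibility into one string, with
# only punctuation conventions separating them.
import re

field_245 = ("Hamlet [videorecording] : prince of Denmark / "
             "directed by Laurence Olivier.")

def naive_parse_245(value: str) -> dict:
    gmd_match = re.search(r"\[(.*?)\]", value)            # bracketed GMD, if any
    before_resp, _, responsibility = value.partition(" / ")
    title_part = re.sub(r"\s*\[.*?\]", "", before_resp)   # drop the GMD
    title, _, subtitle = title_part.partition(" : ")
    return {
        "title": title.strip(),
        "gmd": gmd_match.group(1) if gmd_match else None,
        "subtitle": subtitle.strip() or None,
        "responsibility": responsibility.rstrip(". ").strip() or None,
    }

print(naive_parse_245(field_245))
# -> title "Hamlet", gmd "videorecording", subtitle "prince of Denmark",
#    responsibility "directed by Laurence Olivier"
# Real records break these punctuation assumptions constantly, which is
# exactly why data buried in the 245 is so hard to process by machine.
```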

After Karen Coyle, Diane Hillmann of Metadata Management Associates spoke about the developing RDA vocabularies, and it was a statement during her presentation that really struck me. The RDA vocabularies define a set of metadata elements and value vocabularies that can be used by both humans and machines. That is, they provide a link between the way humans think about and read cataloging data and the way computers process cataloging data. The RDA vocabularies can assist in mapping RDA to other vocabularies, including the data vocabularies of record schemas other than the MARC formats. Also, when RDA does not provide enough detailed entity relationships for particular specialized cataloging communities, the RDA vocabularies can be extended to detail more subproperties and relationships. The use of RDA vocabulary extensions means that RDA can grow, and not just from the top-down. The description of highly detailed relationships between bibliographic entities (such as making clear that a short story was adapted as a radio play script) will increase the searching power of our patrons, by allowing data to be linked across records. Hillmann argued that the record has created a tyranny of thinking in cataloging, and that our data should be thought of as statements, not records. That phrase, “our data should be thought of as statements, not records,” struck me as incredibly powerful, and the most succinct version of why we need to eventually move to RDA. It truly was a “wow” moment for me. Near the end of her presentation, Hillmann essentially summed up the thrust of her talk, when she said that we need to expand our ideas of what machines can and should be doing for us in cataloging.

The other session I went to that is really worth sharing with everybody was the RDA Update Forum. Representatives from the Library of Congress and the two other national libraries, as well as the chair of the PCC (Program for Cooperative Cataloging), discussed the results of the national libraries’ RDA test. The national libraries have requested that the JSC (the body that oversees the RDA code) address a number of problems in the RDA rules over the next eighteen months or so. LC and the other national libraries have decided to put off implementing RDA until January 2013 at the earliest, but all indications were that they plan to adopt RDA eventually. As the JSC works on revising RDA, the national libraries are working to move to a new record format (aka schema or carrier) to replace the MARC formats. They are pursuing a fairly aggressive agenda, intending to develop, by September 30 of this year, a plan with a timeline for transitioning past MARC. The national libraries plan to identify the stakeholders in such a transition, and want to reach out to the semantic web community. They plan for this to be a truly international effort that extends well beyond the library community as it is traditionally defined. They plan to set up communication channels, including a listserv, to share development plans and solicit feedback. They hope to have a new format developed within two years, but the process of migrating their data to the new format will take at least several more years after the format is developed. Needless to say, if the library world is going to move to a post-MARC format, it will create huge changes. Catalogs and ILS systems will have to be completely re-worked, and that’s just for starters. If some people are uncomfortable with the thought of moving to RDA, the idea of moving away from MARC will be truly unsettling. I for one think it’s an exciting time to be a cataloger.

Steve at NASIG 2011

Thursday, June 16, 2011 1:19 pm

At this year’s NASIG Conference, there were plenty of sessions on practical things (which I’ll discuss in a bit), but there were also several apt phrases and interesting concepts that jumped out at me. The first phrase came from a session on patron-driven access, where the speakers quoted Peter McCracken of Serials Solutions, who said, “What is interlibrary loan, but patron-driven access?” I thought this was a nice way to show that patron-driven access isn’t so foreign or new to libraries; we’ve been doing it for a long time, just under a different name. The second interesting concept came from one of our vision speakers, Paul Duguid, a professor at the University of California-Berkeley School of Information. He spoke about the importance of branding information in the information supply chain, as it supplies context and validation for information. When someone in the audience said that as librarians, we are experts in information (an old saw if ever there was one), Duguid responded that actually we’re experts in information structures. He went on to say that that’s one thing we have over Google, because an algorithm isn’t a structure. I found that very interesting. The third thought-provoking phrase/concept that appealed to me came in a session on getting the most out of negotiation. The speakers discussed the Samurai idea of “ordered flexibility,” which is essentially the idea of studying and developing a plan, but being prepared to deviate from that plan as necessary to deal with changing conditions and opportunities. I really like this idea of “ordered flexibility,” as it fits with my philosophy of planning large-scale projects (if you develop a thorough plan, you have more room to adapt to changing conditions on the fly).

Now, as for the meat-and-potatoes of the sessions I attended, the most interesting one was called “Continuing Resources and the RDA Test,” where representatives from the three US national libraries (Library of Congress, National Agricultural Library, and National Library of Medicine) spoke about the RDA test that has been conducted over the last year and a half or so. This session was on June 5, so it was conducted before the report came out this week, and the speakers were very good about not tipping their hand (the national libraries have decided to delay implementing RDA until January 2013 at the earliest, but still plan to adopt the code). The session covered the details of how the test was conducted and the data analyzed. The 26 test libraries were required to produce four different sets of records. The most useful set was one that was based on a group of 25 titles (10 monographs, 5 AV items, 5 serials, 5 integrating resources) that every tester was required to catalog twice, once using AACR2 rules and once using RDA rules. The national libraries then created RDA records for each of the titles, and edited them until they were as close to “perfect” as possible. During the analysis phase, the test sets for each item were compared against the national libraries’ benchmark RDA records, and scored according to how closely the records matched. The data produced during the RDA test will eventually be made available for testing by interested researchers (maybe you, Dr. Mitchell?).

Another interesting session was conducted by Carol McAdam of JSTOR and Kate Duff of the University of Chicago Press. JSTOR, of course, provides backfiles to numerous journals, but they have begun working with certain partners to publish brand-new content on the JSTOR platform. They are still trying to iron out all the details in their pricing model, but this move makes a lot of sense, it seems to me, especially for university presses. If all their material is eventually going to wind up residing on the JSTOR platform anyway, why not just make the new issues available with the backfiles to subscribing institutions?

I also saw a presentation by Rafal Kasprowski of Rice University about IOTA, a new NISO initiative designed to measure the quality of OpenURL links. Briefly, here’s how OpenURLs work: when a patron clicks on a citation, a Source OpenURL is generated, which, in theory, contains all of the information necessary to adequately describe the source. This Source OpenURL is sent to a link resolver, which consults a knowledge base to find the holdings for the library. If the library holds the item, the link resolver generates a Target OpenURL, which opens the full text. Prior to the development of IOTA, there was no way to test the reliability of this data, but IOTA tests Source OpenURLs and provides a benchmark for how much information they should contain in order to properly identify a resource.
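To make the workflow a bit more concrete, here is a sketch of what a Source OpenURL looks like and the kind of completeness check IOTA is concerned with. The resolver address is made up, and the keys are OpenURL 0.1-style elements used only for illustration; this is not IOTA’s actual scoring method.

```python
# A sketch of the OpenURL flow described above. The resolver base URL is
# made up and the keys are OpenURL 0.1-style elements (genre, issn,
# volume, spage, ...); this illustrates the kind of completeness IOTA
# evaluates, not IOTA's actual scoring method.
from urllib.parse import urlencode

RESOLVER_BASE = "https://resolver.example.edu/openurl"      # hypothetical
CORE_ELEMENTS = {"genre", "issn", "volume", "issue", "spage", "date"}

def source_openurl(citation: dict) -> str:
    """Pack citation metadata into a Source OpenURL aimed at the resolver."""
    return RESOLVER_BASE + "?" + urlencode(citation)

def missing_elements(citation: dict) -> set:
    # The fewer core elements the source supplies, the less likely the
    # link resolver can identify the correct target full text.
    return CORE_ELEMENTS - citation.keys()

citation = {
    "genre": "article",
    "issn": "0028-0836",
    "volume": "171",
    "spage": "737",
    "date": "1953",
    "atitle": "Molecular structure of nucleic acids",
}

print(source_openurl(citation))
print("missing core elements:", missing_elements(citation))  # {'issue'}
```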

I also attended a session by Amanda Yesilbas of the Florida Center for Library Automation, who discussed how FCLA uses a Drupal-based website in place of an ERMS. I can’t say that I fully understood everything she said, but it might be an inexpensive, low-maintenance alternative to implementing a full-blown ERMS here at ZSR.

This was a busy conference for me. In addition to attending the last meeting of my two-year term as a member of the NASIG Executive Board, I started working on a new NASIG committee. And the conference was in St. Louis, my hometown, so I came in early with Shane so that he could spend time with his grandparents, aunts, uncles, and cousins. I also took him to his first-ever Major League baseball game, and the Cardinals beat the Cubs handily.

Steve at 2011 NC Serials Conference

Monday, March 14, 2011 5:23 pm

On Thursday, I attended the 2011 NC Serials Conference, along with Chris, Linda Z. and Bradley. They’ll probably talk about other sessions, but I thought I’d give a brief re-cap of a presentation called “The Future of the Catalog,” by Tim Bucknall of UNC-Greensboro and Margaretta Yarborough of UNC-Chapel Hill, since that’s right in my area.

Bucknall began by talking about how, with the advent of the Web, the catalog is no longer the center of the information universe, and how we have seen a huge increase in the number and types of data sources found in the catalog. He also said that the catalog is no longer a listing of everything we have, but a listing of what we can access. Bucknall pointed to a survey that found that 93% of the public agree that Google provides worthwhile information, while only 78% agree that library websites provide worthwhile information. What this means for the future of the catalog is that we can no longer even hope that the catalog will be a one-stop shop, and that we will need discovery tools that can search the catalog and other information silos simultaneously. Bucknall also pointed to a study showing that, out of a list of 18 possible enhancements to the catalog, library users and library directors rated adding more links to full text as the most desirable enhancement, while catalogers rated it dead last. This indicates that, as a whole, the cataloging community must become more aware of the needs of its users.

Yarborough looked at the landscape of the cataloging world, and found a loss of control for catalogers (as more outsiders are adding data to our catalogs), a changing balance of materials (shelf-ready materials vs. traditional cataloging), a change in the sheer volume of material cataloged (purchasing bib records for sets of electronic resources), and more demand for other types of materials not formerly found in the catalog (archival collections, digital collections, etc.). She argued that in the catalog of the near future, library resources should be accessible where users search (rather than forcing them to come to our sites), library resources should be intelligible in ordinary language, the catalog must be intuitive and easy to navigate, the catalog should have a discovery layer to provide a one-stop search, authority control will be more important than ever, and we will need to have user-added tags, reviews, and other enhancements to our records to make them more useful. Yarborough also discussed the ways this new type of catalog is causing shifts in the tasks catalogers perform, with more emphasis on local/unique materials, more analysis and loading of batch records, and a general shift toward more project management functions.

This conference was a special one for me, because I (along with Chris) gave my first-ever conference presentation. We discussed our FixZak group and our model for managing e-resources and technical services troubleshooting. We had a lot of good questions afterward, and there seemed to be some genuine interest in our model.

Steve at LAUNC-CH 2011

Wednesday, March 9, 2011 4:01 pm

On Monday, I went to the LAUNC-CH Conference at the lovely Friday Center on the campus of UNC-Chapel Hill. Lauren discussed Lee Rainie’s keynote address, “Networked Individuals, Networked Libraries,” but I’d like to hit the things I found interesting in his presentation. I found it very interesting that smartphones are the most commonly used internet-capable technology among lower-income populations in the U.S. Yes, higher-income folks use smartphones more than lower-income folks, but the gap in usage is smaller than with any other kind of technology, like laptops. Also, the ubiquity of mobile internet usage is revolutionizing personal relationships to the internet. People expect to get information anywhere, on any device, at any time. It has also changed the idea of place and of presence (the increasingly common experience of people using devices and being “alone together”). What is fascinating is that Rainie said there is more evidence that people who use information technology a lot use it as a supplement to activity in the real world than as a replacement for it (which is one of the standard-issue criticisms of heavy technology use). This new information landscape is leading to increased reliance on social networks, which work as sentries (word of mouth matters more and more), as information evaluators (they vouch for or discredit a source’s credibility and authenticity), and as forums for action (everybody is a broadcaster/publisher).

