Professional Development

ALA Annual 2014 Las Vegas – Lauren

Thursday, July 3, 2014 4:08 pm

Three segments to my post: 1) Linked Data and Semantic Web, 2) Introverts at Work, and 3) Vendors and Books and Video — read just the part that interests you!

1. Linked Data and Semantic Web (or, Advances in Search and Discovery)

Steve Kelley sparked my interest in the Semantic Web and Linked Data with reports after conferences over the past few years. Now that I’ve been appointed to the joint ALCTS/LITA Metadata Standards Committee and attended a meeting at this conference, I’ve learned more:

Google Hummingbird is a recent update to how Google search functions; it uses all the words in a query to provide more meaningful results rather than simple word matches.

Catalogers and Tech Team take note! Work is really happening now with Linked Data. In Jason Clark's presentation, "Schema.org in Libraries," see the slide with links to work being done at NCSU and Duke (p. 28 of the posted PDF version).
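
To make that concrete, here is a minimal sketch of what schema.org markup for a catalog item might look like, expressed as JSON-LD generated from Python. The title, author, and ISBN are invented, and the property choices are mine rather than anything taken from Clark's slides.

```python
import json

# Hypothetical catalog item described with the schema.org vocabulary.
# Embedding the output in a <script type="application/ld+json"> tag on a
# catalog page makes it readable by search engines.
record = {
    "@context": "https://schema.org",
    "@type": "Book",
    "name": "Linked Data for Libraries",             # invented title
    "author": {"@type": "Person", "name": "J. Example"},
    "isbn": "9780000000000",                         # placeholder ISBN
    "bookFormat": "https://schema.org/EBook",
}

print(json.dumps(record, indent=2))
```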

I’m looking forward to working with Erik Mitchell and other Metadata Standards Committee members in the coming year.

2. Introverts at Work!

The current culture of working in meetings (such as brainstorming) and reaching quick decisions in groups or teams is geared toward extroverts, even though about 50% of the population are introverts. Introverts can be most productive and provide great solutions when given adequate time for reflection. (Extrovert and introvert were defined in the Jung and MBTI sense of energy gain/drain.) So says Jennifer Kahnweiler, the speaker for the ALCTS President's Program and author of Quiet Influence. Another book on the same topic is Quiet: The Power of Introverts in a World That Can't Stop Talking by Susan Cain. Many ZSRians attended this session!

3. Vendors and Books and Video

I spent a lot of time talking with vendors. Most notable was the meeting that Derrik, Jeff, and I attended with some of the publishers that are raising DDA short-term loan prices. This will affect our budget, but our plan is to watch it for a bit, to develop our knowledge and determine appropriate action. It was helpful to learn more from the publishers. Some publishers are able to switch to print on demand, while others cannot, because traditional print runs are cheaper than print on demand and their customers still want print. Print-driven publishers have to come up with a sustainable model that covers all of their costs, so they are experimenting with DDA pricing. DDA overall is still an experiment for publishers, while librarians have already come to think of it as a stable and welcome method of providing resources.

Derrik and I also started talking with ProQuest about how we will manage our existing DDA program in light of the addition of ebrary Academic Complete to NC LIVE.

"The combined bookshops of Aux Amateurs de Livres and Touzot Librairie Internationale will be called Amalivre effective July 1, 2014."

Regarding video, Mary Beth, Jeff, Derrik, and I attended a presentation by two Australian librarians from different large universities (QUT and La Trobe, each with FTEs in the tens of thousands). They reported on their shift to streaming video with Kanopy; here are a few bullets:

  • Among drivers for change were the flipped classroom and mobile use
  • 60% of the DVD collection had fewer than 5 views, while streaming video titles licensed through Kanopy averaged over 50 views
  • 23% and 15% of DVDs (at the two universities) have never been viewed at all
  • The true cost of DVD ownership is 1.7 and 1.8 times the purchase price (at the two universities)
  • They have a keyboard accessibility arrangement for the visually impaired
  • Usage is growing for PDA and non-PDA titles in Kanopy [reminds us of our experience with e-books]
  • Discovery of the streaming videos came largely through faculty embedding videos in the CMS
  • Other discovery tools do not serve video well, so they had ProQuest add a radio-button option for video to Summon to help promote discovery [can we do this?]
  • They concluded that, given the greater use, online video is the better value for the money spent (see the rough cost sketch below)
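
The presenters' dollar figures weren't given, so the prices below are invented, but plugging the usage numbers above into a quick cost-per-view calculation shows the shape of their argument:

```python
# Illustrative only: prices are made up; usage figures come from the talk.
dvd_price = 30.00
dvd_true_cost = dvd_price * 1.7    # ~1.7x multiplier for true cost of ownership
dvd_views = 5                      # many DVDs saw fewer than 5 views

streaming_license = 150.00         # hypothetical streaming license fee
streaming_views = 50               # average views for licensed Kanopy titles

print(f"DVD:       ${dvd_true_cost / dvd_views:.2f} per view")            # $10.20
print(f"Streaming: ${streaming_license / streaming_views:.2f} per view")  # $3.00
```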

Contributing ZSR Digital Collections to the DPLA!

Friday, October 25, 2013 4:07 pm

Tanya, Craig, and Vicki all mentioned the keynote about the DPLA (Digital Public Library of America) at the Tri-State Archivists' Conference. Before Emily Gore of the DPLA headed to Greenville, SC, to deliver her keynote, she was in Greensboro, NC, meeting with digital collection managers. I attended the meeting to learn more about the nitty-gritty how-to of contributing ZSR's digital collections to the DPLA.

For those who aren’t familiar, the DPLA aggregates metadata from the digital collections of libraries, archives, and museums across the United States. In addition to providing a slick search interface at dp.la, the DPLA also makes its API open to developers and encourages the building of apps on top of this platform. By contributing our metadata to the DPLA, we will expose our collections to a national audience. In addition, we will drive traffic to our site from both the dp.la site and apps built on top of the DPLA API.
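
For the technically curious, here is roughly what a query against the DPLA API looks like, as I understand the v2 API; the search term is just an example, and YOUR_API_KEY stands in for a key you would request from the DPLA.

```python
import json
import urllib.request

# Hypothetical query: search DPLA items for "wake forest".
url = "https://api.dp.la/v2/items?q=wake+forest&api_key=YOUR_API_KEY"
with urllib.request.urlopen(url) as resp:
    results = json.load(resp)

for doc in results.get("docs", []):
    # sourceResource carries the contributed metadata; isShownAt links back
    # to the item on the contributing institution's own site.
    print(doc["sourceResource"].get("title"), "->", doc.get("isShownAt"))
```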

DPLA App Library

At DPLAfest 2013, the North Carolina Digital Heritage Center was recognized as one of three new service hubs that will aggregate metadata from their regions and serve as a conduit to the DPLA. Over 120,000 records from North Carolina institutions are currently available at dp.la, including records from the State Library of North Carolina, State Archives of North Carolina, and the libraries at the University of North Carolina at Chapel Hill, East Carolina University, and the University of North Carolina at Greensboro in addition to all the records made available by the North Carolina Digital Heritage Center itself at digitalnc.org.

When an institution contributes collections to the DPLA via a service hub such as the North Carolina Digital Heritage Center, it shares an item's metadata as well as its thumbnail.

The DPLA record credits both the service hub (in this example, the North Carolina Digital Heritage Center) and the contributing institution (Transylvania County Library). Clicking either the item's thumbnail or "View Object" takes the user to the item as it appears on the original site, in this case digitalnc.org.

One more interesting thing to note about the DPLA's approach to aggregating digital collections is that metadata shared with the DPLA is made available under a CC0 license. By participating in the DPLA, we agree that others may re-use our metadata. However, it's important to recognize that metadata rights are not the same as digital object rights. The digital objects we make available via WakeSpace remain available under whatever terms we determine.

The North Carolina Digital Heritage Center is currently in the process of evaluating our feeds before adding selected collections to the DPLA. Feel free to contact me if you have any questions!

Steve at NASIG 2012

Thursday, June 14, 2012 5:03 pm

Last Thursday, Chris, Derrik, and I hopped in the library van and drove to Nashville for the NASIG Conference, returning on Sunday. It was a busy and informative conference, full of information on serials and subscriptions. I will cover a few of the interesting sessions I attended in this post.

One such session was called "Everyone's a Player: Creation of Standards in a Fast-Paced Shared World," which discussed the work of NISO and the development of new standards and "best practices." Marshall Breeding discussed the ongoing development of the Open Discovery Initiative (ODI), a project that seeks to identify the requirements of web-scale discovery tools such as Summon. Breeding pointed out that it makes no sense for libraries to spend millions of dollars on subscriptions if nobody can find anything; in that context, it makes sense for libraries to spend tens of thousands on discovery tools. But since these tools are still so new, there are no standards for how they should function and operate with each other. ODI plans to develop a set of best practices for web-scale discovery tools, and is beginning by developing a standard vocabulary as well as a standard way to format and transfer data. The project is still in its earliest phases and will have its first work available for review this fall.

Also at this session, Regina Reynolds from the Library of Congress discussed her work with the PIE-J initiative, which has developed a draft set of best practices that is ready for comment. PIE-J stands for the Presentation & Identification of E-Journals; it gives publishers guidance on how to present title changes, issue numbering, dates, ISSN information, publishing statements, etc. on their e-journal websites. Currently, it's pretty much the Wild West out there, with publishers following unique and puzzling practices. PIE-J hopes to help clean up the mess.

Another session that was quite useful was "CONSER Serials RDA Workflow," where Les Hawkins, Valerie Bross, and Hien Nguyen from the Library of Congress discussed the development of RDA training materials at LC, including CONSER serials cataloging materials and general RDA training materials from the PCC (Program for Cooperative Cataloging). I haven't had a chance yet to root around on the Library of Congress website, but these materials are available for free, and include a multi-part course called "Essentials for Effective RDA Learning" with 27 hours (yikes!) of instruction on RDA: a 9-hour training block on FRBR, a 3-hour block on the RDA Toolkit, and 15 hours on authority and description in RDA. This is for general cataloging, not specific to serials.

Also, because LC is working to develop a replacement for the MARC formats, there is a visualization tool called RIMMF, available at marcofquality.com, that allows for creating visual representations of records and record relationships in a post-MARC record environment. It sounds promising, but I haven't had a chance to play with it yet. Finally, the CONSER training program, which focuses on serials cataloging, is developing a "bridge" training plan to transition serials catalogers from AACR2 to RDA; the plan will be available this fall.

Another interesting session I attended was "Automated Metadata Creation: Possibilities and Pitfalls" by Wilhelmina Randtke of the Florida State University Law Research Center. She pointed out that computers like black-and-white decisions and are bad with discretion, while creating metadata is all about identifying and noting important information. Randtke said computers love keywords but are not good with "aboutness" or subjects. So, in her project, she tried to develop a method to use computers to generate metadata for graduate theses. Some of the computer talk got very technical and confusing for me, but her discussion of subject analysis was fascinating.

Using automated indexing software, Randtke did a data scrape of the digitally encoded theses and identified recurring keywords. This keyword data was then run through ontologies/thesauruses to identify more accurate subject headings, which were applied to the records. A person needs to select the appropriate ontology/thesaurus for the item(s) and review the results, but the basic subject analysis can be performed by the computer. Randtke found that the results were cheap and fast, but incomplete. She said, "It's better than a shuffled pile of 30,000 pages. But, it's not as good as an organized pile of 30,000 pages." So, her work showed some promise, but still needs some work.
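
As a toy illustration of the pipeline Randtke described (scrape recurring keywords, then run them through a thesaurus to reach controlled headings), here is a sketch. The mini-thesaurus and sample text are invented; a real project would use a full ontology and, as she stressed, human review.

```python
import re
from collections import Counter

# Invented mini-thesaurus mapping raw keywords to controlled headings.
THESAURUS = {
    "groundwater": "Groundwater",
    "aquifer": "Aquifers",
    "runoff": "Urban runoff",
}
STOPWORDS = {"the", "of", "and", "in", "a", "to", "is", "by", "for"}

def suggest_headings(text, top_n=20):
    """Find the most frequent non-stopwords, then map any that appear
    in the thesaurus to subject headings."""
    words = re.findall(r"[a-z]+", text.lower())
    counts = Counter(w for w in words if w not in STOPWORDS)
    keywords = [w for w, _ in counts.most_common(top_n)]
    return {THESAURUS[w] for w in keywords if w in THESAURUS}

sample = "Groundwater flow in the aquifer is affected by urban runoff."
print(suggest_headings(sample))  # output still needs a human reviewer
```
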
Of course there were a number of other interesting presentations, but I have to leave something for Chris and Derrik to write about. One idea that particularly struck me came from Rick Anderson during his thought-provoking all-conference vision session on the final day: "To bring simplicity to our patrons means taking on an enormous level of complexity for us." That basic idea has been something of an obsession of mine for the last few months while wrestling with authority control and RDA and considering the semantic web. To make our materials easily discoverable by the non-expert (and even the expert) user, we have to make sure our data is rigorously structured, and that requires a lot of work. It's almost as if there's a certain quantity of work that has to be done to find stuff, and we either push it off onto the patron or take it on ourselves. I'm in favor of taking it on ourselves.

The slides for all of the conference presentations are available here: http://www.slideshare.net/NASIG/tag/nasig2012 for anyone who is interested. You do not need to be a member of NASIG to check them out.

Leslie at MLA 2011

Monday, February 14, 2011 2:08 am

I’m back from another Music Library Association conference, held this year in Philadelphia. Some highlights:

Libraries, music, and digital dissemination

Previous MLA plenary sessions have focused on a disturbing new trend: the release of new music recordings as digital downloads only, with licenses restricting sale to end users, which effectively prevents libraries either from acquiring the recordings at all or from distributing (i.e., circulating) them. This year's plenary was a follow-up featuring a panel of three lawyers — a university counsel, an entertainment-law attorney, and a representative of the Electronic Frontier Foundation — who pronounced that the problem is only getting worse. It is affecting more formats now, such as videos and audiobooks — it's not just the music librarian's problem any more — and recent court decisions have tended to support restrictive licenses.

The panelists suggested two approaches libraries can take: building relationships, and advocacy. Regarding relationships, it was noted that there is no music equivalent of LOCKSS or Portico; librarians should negotiate with vendors of audio/video streaming services for similar preservation rights. Also, libraries can remind their resident performers and composers that if their performances are released as digital downloads with end-user-only licenses, libraries cannot preserve their work for posterity. The panelists drew an analogy to the journal pricing crisis: libraries successfully raised awareness of that issue by convincing faculty and university administrators that exorbitant prices would mean smaller readerships for their publications. On the advocacy side, libraries can remind vendors that federal copyright law pre-empts non-negotiable licenses: a vendor can't tell us not to make a preservation copy when Section 108 says we have the right to make one. We can also lobby state legislatures, as contract law is governed by state law.

The entertainment-law attorney felt that asking artists to lobby their record labels was, realistically speaking, the least promising approach — the power differential is too great. Change, the panelists agreed, is most likely to come through either legislation or the courts. Legislation is the more difficult to affect (there are too many well-funded commercial interests ranged on the opposing side); there is a better chance of a precedent-setting court case tipping the balance in favor of libraries. Such a case is most likely to come from the 2nd or 9th Circuit, which have a record of liberal rulings on Fair Use issues. One interesting observation from the panel was that most of the cases brought so far have involved “unsympathetic figures” — individuals who blatantly abused Fair Use on a large scale, provoking draconian rulings. What’s needed is more cases involving “sympathetic figures” like libraries — the good guys who get caught in the cross-fire. Anybody want to be next? :-)

Music finally joins Digital Humanities

For a couple of decades now, humanities scholars have been digitizing literary, scriptural, and other texts in order to exploit the capabilities of hypertext, markup, etc. to study those texts in new ways. The complexity of musical notation, however, has historically prevented music scholarship from doing the same for its texts. PDFs of musical scores have long been available, but they're not searchable texts and not encoded as digital data, so they can't be manipulated in the same way. Now there's a new project called the Music Encoding Initiative, jointly funded by the National Endowment for the Humanities and Germany's Deutsche Forschungsgemeinschaft. MEI (yes, they've noticed it's also a Chinese word for "beauty") has just released a new digital encoding standard for Western classical music notation, based on XML. It's been adopted so far by several European institutions and by McGill University. If, as one colleague put it, it "has legs," the potential is transformative for the discipline. Whereas critical editions in print force editors to make painful decisions between sources of comparable authority — the other readings get relegated to an appendix or supplementary volume — in a digital edition, all extant readings can be encoded in the same file and displayed side by side. An even more intriguing application of this concept is the "user-generated edition": a practicing musician could potentially approach a digital edition of a given work and choose to output a piano reduction, or a set of parts, or modernized notation of a Renaissance work, for performance. Imagine the savings for libraries, which currently have to purchase separate editions for all the different versions of a work.

http://music-encoding.org
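
To give a flavor of the idea, here is a toy fragment in the spirit of MEI, with two sources' readings of the same note sitting side by side in one file. The element and attribute names are simplified for illustration, not copied from the MEI schema.

```python
import xml.etree.ElementTree as ET

# Two variant readings of measure 1 encoded together, critical-apparatus style.
snippet = """
<measure n="1">
  <app>
    <rdg source="autograph"><note pname="c" oct="4" dur="4"/></rdg>
    <rdg source="firstEdition"><note pname="e" oct="4" dur="4"/></rdg>
  </app>
</measure>
"""

measure = ET.fromstring(snippet)
for rdg in measure.iter("rdg"):
    note = rdg.find("note")
    print(rdg.get("source"), "reads pitch", note.get("pname"), "octave", note.get("oct"))
```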

Music and metadata

In a session titled "Technical Metadata for Music," two speakers, from SUNY and a commercial audio-visual preservation firm respectively, stressed the importance of embedded metadata in digital audio files. Certain information, such as recording date, is commonly included in filenames, but this is an inadequate measure from a long-term preservation standpoint: filenames are not integral to the file itself and are typically associated with a specific operating system. One speaker cited a recent Rolling Stone article, "File Not Found: The Recording Industry's Storage Crisis" (December 2010), describing the record labels' inability to retrieve their backfiles due to inadequate filenames and a lack of embedded metadata. Metadata is now commonly embedded by many popular consumer devices, such as digital cameras and smartphones.

For music, embedded metadata can include not only technical specifications (bit-depth, sample rate, and locations of peaks, which can be used to optimize playback) but also historical context (the date and place of performance, the performers, etc.) and copyright information. The Library of Congress has established sustainability factors for embedded metadata (see http://digitizationguidelines.gov). One format that meets these requirements is Broadcast Wave Format, an extension of WAV: it can store metadata as plain text, and can include historical context-related data. The Technical Committee of ARSC (Association of Recorded Sound Collections) recently conducted a test wherein they added embedded metadata to some BWF-format audio files, and tested them with a number of popular applications. The dismaying results showed that many apps not only failed to display the embedded metadata, but also deleted it completely. This, in the testers' opinion, calls for an advocacy campaign to raise awareness of the importance of embedded metadata. ARSC plans to publish its test report on its website (http://www.arsc-audio.org/). The software for embedded metadata that they developed for the test is also available as a free open-source app at http://sourceforge.net/projects/bwfmetaedit.
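
Out of curiosity, I sketched what reading that embedded metadata involves. This is a rough parser of my own for the core descriptive fields of a WAV file's "bext" chunk, with field sizes per my reading of the EBU spec; it is not the ARSC tool or BWF MetaEdit.

```python
import struct

def read_bext(path):
    """Scan a WAV file's RIFF chunks for the Broadcast Wave 'bext' chunk
    and return its fixed-width descriptive fields as strings."""
    def text(raw):
        return raw.split(b"\x00")[0].decode("ascii", "replace")

    with open(path, "rb") as f:
        riff, _, wave = struct.unpack("<4sI4s", f.read(12))
        if riff != b"RIFF" or wave != b"WAVE":
            raise ValueError("not a RIFF/WAVE file")
        while True:
            header = f.read(8)
            if len(header) < 8:
                return None  # no bext chunk found
            chunk_id, chunk_size = struct.unpack("<4sI", header)
            if chunk_id == b"bext":
                data = f.read(chunk_size)
                return {
                    "description": text(data[0:256]),
                    "originator": text(data[256:288]),
                    "originator_ref": text(data[288:320]),
                    "origination_date": text(data[320:330]),
                    "origination_time": text(data[330:338]),
                }
            f.seek(chunk_size + (chunk_size & 1), 1)  # chunks are word-aligned

print(read_bext("interview.wav"))  # hypothetical file name
```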

Music cataloging

A pre-conference session held by MOUG (Music OCLC Users Group) reported on an interesting longitudinal study that aimed to trace coverage of music materials in the OCLC database. The original study was conducted in 1981, when OCLC was relatively new. MOUG testers searched OCLC for newly published music books, scores, and sound recordings, as listed in journals and leading vendor catalogs, along with core repertoire from ALA's bibliography Basic Music Library, and assessed the quantity and quality of available cataloging copy. The study was replicated in 2010. Exact replication was rendered impossible by various developments over the intervening 30 years — changes in the nature of the OCLC database from a shared catalog to a utility; more foreign and vendor contributors; and the demise of some of the reference sources used for the first sample of searched materials, necessitating substitutions — but the study has nevertheless produced some useful statistics.

Coverage of books, not surprisingly, increased over the 30 years to 95%; representation of sound recordings also increased, to around 75%; but oddly, scores have remained at only about 60%. As for quality of the cataloging, the 2010 results showed that about 20% of sound recordings have been cataloged as full-level records and about 50% as minimal records; about a quarter of scores get full-level treatment, about 50% minimal. The study thus provides some external corroboration of long-perceived music cataloging trends, and also a basis for workflow and staffing decisions in music cataloging operations.

A session titled "RDA: Kicking the Tires" was devoted to the new cataloging standard that the Library of Congress and a group of other libraries have just finished beta-testing. Music librarians from five of the testing institutions (LC, Stanford, Brigham Young, U North Texas, and U Minnesota) spoke about their experiences with the test and with adapting to the new rules.

All relied on LC’s documentation and training materials, recording local decisions on their internal websites (Stanford has posted theirs on their publicly-accessible departmental site). An audience member urged libraries to publish their workflows in the Toolkit, the online RDA manual. It was generally agreed that the next step needed is the development of guidelines and best practices.

None of the testers' ILSs seem to have had any problems accommodating RDA records in MARC format. LC has had no problems with their Voyager system, corroborating our own experience here at WFU. Some testers reported problems with some discovery layers, including Primo (fortunately, we haven't seen any glitches so far with VuFind). Stanford reported problems with their (unnamed) authorities vendor, mainly involving "flipped" (changed name order) entries. Most testers are still in the process of deciding which of the new RDA data elements they will display in their OPACs.

Asked what they liked about RDA, both the LC and Stanford speakers cited the flexibility of the new rules, especially in transcribing title information, and in the wider range of sources from which bib info can be drawn. Others welcomed the increased granularity, designed to enhance machine manipulation, and the chance this affords to “move beyond cataloging for cards” towards the semantic web and relation-based models. It was also noted that musicians are already used to thinking in FRBR fashion — they’ve long dealt with scores and recordings, for instance, as different manifestations of the same work.

Asked what they thought “needed fixing” with RDA, all the panelists cited access points for music (the LC speaker put up a slide displaying 13 possible treatments of Rachmaninoff’s Vocalise arranged for saxophone and piano). There are other areas — such as instrument names in headings — that the RDA folks haven’t yet thought about, and the music community will probably have to establish its own practice. Some catalogers expressed frustration with the number of matters the new rules leave to “cataloger’s judgment.” Others mentioned the difficulty of knowing just how one’s work will display in future FRBRized databases, and of trying to fit a relational structure into the flat files most of us currently have in our ILSs.

What was most striking about the session was the generally upbeat tone of the speakers — they saw more positives than negatives in the new standard, assured us it only took some patience to learn, and were convinced that it truly was a step forward in discoverability. One speaker, who trains student assistants to do copy cataloging by telling them "When in doubt, make your best guess, and I'll correct it later," observed that her students' guesses consistently conformed to RDA practice — anecdotal evidence suggesting that the new standard may actually be more intuitive for users, and that new catalogers will probably learn it more easily than those of us who've had to "unlearn" AACR2!

Sidelights

Our venue was the Loews Philadelphia Hotel, which I must say is the coolest place I’ve ever stayed in. The building was the first International Style high-rise built in the U.S., and its public spaces have been meticulously preserved and/or restored, to stunning effect. The first tenant was a bank, and so you come across huge steel vault doors and rows of safety-deposit boxes, left in situ, as you walk through the hotel. Definitely different!

Another treat was visiting the old Wanamaker department store (now a Macy’s) to hear the 1904 pipe organ that is reputed to be the world’s largest (http://www.wanamakerorgan.com/about.php).

VuFind updates

Wednesday, February 9, 2011 12:05 am

JP already talked about VuFind, but I thought I would add my notes from the VuFind talk today. Demian Katz (Villanova) took some time in the afternoon to talk about VuFind and its growing support for metadata standards other than MARC. The update centered on how VuFind had been re-tuned to be more agnostic with regard to metadata standards and encoding models. The redesign made use of "Record Drivers" to take control of both screen display functionality and data retrieval processes, OAI harvesters to gather data, and XSL importing tools to facilitate metadata crosswalks and full-text indexing.

Demian talked at some length about basic features of the metadata indexing toolkit. At the VuFind 2.0 conference he talked a bit about his ability to use the MST from the eXtensible Catalog project, and I wonder (no answer, just a question) how the toolkit development within VuFind matches up with the XC project. Demian reported on the OAI-PMH harvester that will gather records remotely and load them into VuFind. I have used an early version of this tool to successfully harvest and import HathiTrust records and am encouraged to see that development has continued. Demian also mentioned a new XSLT importer tool that enables mapping an XML document into an existing Solr configuration.
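
Since we will likely experiment with this, here is a back-of-the-napkin sketch of what an OAI-PMH harvest does. The endpoint is a placeholder, and VuFind's own harvester is PHP; this Python version is only meant to show the shape of the protocol.

```python
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

OAI = "{http://www.openarchives.org/OAI/2.0/}"

def harvest(endpoint, metadata_prefix="oai_dc"):
    """Yield <record> elements from an OAI-PMH ListRecords response,
    following resumptionTokens until the full set is exhausted."""
    params = {"verb": "ListRecords", "metadataPrefix": metadata_prefix}
    while True:
        url = endpoint + "?" + urllib.parse.urlencode(params)
        with urllib.request.urlopen(url) as resp:
            root = ET.fromstring(resp.read())
        for record in root.iter(OAI + "record"):
            yield record
        token = root.find(OAI + "ListRecords/" + OAI + "resumptionToken")
        if token is None or not (token.text or "").strip():
            break
        params = {"verb": "ListRecords", "resumptionToken": token.text.strip()}

# Placeholder endpoint; any OAI-PMH provider (a DSpace instance, HathiTrust, etc.) would do:
# for rec in harvest("https://example.edu/oai/request"):
#     print(rec.find(OAI + "header/" + OAI + "identifier").text)
```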

This represents an interesting step forward for VuFind, as it will allow ZSR to think about harvesting and indexing data from our DSpace instance as well as other sources that support OAI harvesting. All of these features are coming in the VuFind 1.1 release on March 21st! More to come on this as we get our test instance of VuFind running.

Diane Hillmann on collaborative opportunities

Tuesday, February 8, 2011 12:41 pm

The second keynote of the morning was Diane Hillmann, who talked about collaborations between programmers and catalogers.

Diane dated her career by showing us a few tools that I remember from my early time as a librarian (card catalog rods and a card filer)! I wonder what that says about the pace of change from the early 1970s to the early 1990s. For most of her talk, however, Diane focused on the emerging roles of catalogers in libraries and the potential collaborations that exist between catalogers and programmers. Diane has published several pieces in the past few years about RDA, including 1) http://dlib.org/dlib/january10/hillmann/01hillmann.html and 2) http://www.dlib.org/dlib/january07/coyle/01coyle.html (among others), and it was interesting to hear her thoughts about the intersection between MARC, RDA, ISBD, AACR, RDF, XML, and other ABTs (acronym-based technologies).

Her presentation focused on the need to reshape the cataloging profession, and she spent a few minutes talking about the potential of RDA encoded in RDF to serve as a replacement for the MARC encoding and representation standards. She introduced some concepts from her recent publications, including metadata registries, the use of identifiers as opposed to literals in records, and the use of single record or vocabulary repositories as opposed to records replicated across thousands of databases.
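
The identifiers-versus-literals point is easier to see in a tiny example. Here is a sketch using the rdflib package; the work and authority URIs are invented placeholders standing in for real identifiers such as those minted at id.loc.gov.

```python
from rdflib import Graph, Literal, URIRef

DC_CREATOR = URIRef("http://purl.org/dc/terms/creator")
work = URIRef("http://example.org/work/42")  # placeholder work URI

g = Graph()
# A literal is just a string: every database maintains its own copy.
g.add((work, DC_CREATOR, Literal("Melville, Herman, 1819-1891")))
# An identifier points at one shared authority record, so a correction
# made there propagates everywhere the URI is referenced.
g.add((work, DC_CREATOR, URIRef("http://example.org/authority/melville")))

print(g.serialize(format="turtle"))
```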

The audience asked some interesting questions: 1) about the economics of migration (it is tough, but not changing is not an option), 2) about the future of cataloging in libraries (traditional cataloging is diminishing – copy cataloging is the current model, distributed cataloging/data-geeking is the future, and getting rid of all the catalogers first does not make sense – get them the skills to change), and 3) about what programmers could learn from the cataloging community (creativity in data representation and use, and an understanding of the complexities of library data).

The rest of the morning and early afternoon is devoted to short IT presentations and should be very interesting.

Archive-It 4.0 training

Friday, December 17, 2010 3:18 pm

Today Erik and Audra attended a WebEx session from the Internet Archive on new features in Archive-It 4.0. They had me from the first few minutes, when they announced that this year had been named the 'year of metadata' at the Internet Archive!

They focused on new features including metadata searching, crawl date limiting, and improved video crawling and streaming.

They have also enhanced their reporting features, specifically introducing a URL report that shows exactly which URLs got archived during a given crawl. They also introduced a number of automatic metadata harvesting features in the seed assignment process, and some new features in scope-it that help you set constraints on specific hosts.

One interesting metadata feature they introduced was the ability to export metadata records for archived items to both MARC and MODS. I thought this was an interesting concept as a way to leverage archived content in local indexes or web services. They also introduced a third-party tool called ProxyToggle, a Firefox plug-in that helps with quality-control testing of archived content.

NSF and Data Management webinar at UNC

Tuesday, November 9, 2010 2:41 pm

Today Molly, Susan, and Erik attended an open webinar offered by the Odum Institute at UNC. The content of the webinar focused on the recently introduced requirement to have a data management plan for all National Science Foundation (NSF) grants. The requirement, which goes into effect January 18, 2011, has created quite a buzz in the library world, as it adds new dimensions to the world of scholarly communications.

My favorite phrase: "The codebook from your research is the metadata on your project." There was a fair amount of interest (over 60 people online in addition to in-room participants), and the question-and-answer period covered wide-ranging issues such as technical problems, IRB questions, and UNC and non-UNC solutions.

Interestingly, UNC has taken steps to support both in-process and end-of-grant data archiving using a combination of services. One service mentioned in particular was iRODS, a data sharing and archiving platform developed by the DICE (Data Intensive Cyber Environments) group, now housed at UNC.

Representatives from ZSR have been talking with ORSP about how to support WFU NSF researchers.

Carolyn at ALA Annual 2010

Tuesday, July 6, 2010 11:58 pm

Although the weather was hot and sweltering in DC during ALA, I still had a great time attending informative sessions on cataloging and metadata, going to socials, catching up with friends, and hanging out with Susan and Erik. I was one of the five who rode up and back in the library’s new van.

After dropping off our luggage in our hotel room, Susan, Erik and I walked to the convention center to pick up our conference materials. I tagged along with Susan and Erik to the LITA Happy Hour, the first of two socials that Friday evening. Following social number one, we all three then headed to the Capital City Brewing Company where the Anthropology and Sociology Section (ANSS) librarians were having their social.

On Saturday, I attended a session, "Converging Metadata Standards in Cultural Institutions: Apples and Oranges," where librarians from the Texas State Library and Archives Commission and the Smithsonian discussed digital projects their institutions have created. Danielle Plumer, Coordinator of the Texas Heritage Digitization Initiative (THDI), discussed the necessity of educating metadata specialists who work in various institutions (i.e., libraries, archives, museums, state and local government agencies) on content standards, encoding syntaxes, project management, and digital library systems and applications. In support of THDI, Amigos Library Services held a series of workshops in five locations across the state as well as online. Among Ms. Plumer's observations from this project: most libraries chose Dublin Core instead of MARC as a metadata scheme, LC subject headings are the most commonly used controlled vocabulary, and metadata decisions overall are driven largely by the design of existing digital asset management systems.

Ching-Hsien Wang spoke about the creation of a one-stop discovery center for the 4.6 million records and 445,000 images of the Smithsonian's museum, archives, library, and research holdings and collections. Ms. Wang described this database as a conjoined collaboration, not an individual silo of information. The database has various vocabulary features, facet types drawn from controlled vocabularies, and sharing capability with social media options.

Next, I attended the Copy Cataloging Interest Group's program, where two librarians from the University of Colorado at Boulder described how they developed and implemented a FRBR and FRAD training program for all of their libraries' professional and copy catalogers. Participants read the entire FRBR document and, at monthly cataloging meetings, discussed the readings and participated in group exercises to reinforce concepts learned. A blog was created for questions and comments on the readings. My last meeting of the day was the ALCTS CCS Recruitment and Mentoring Committee, of which I am a member. We are looking into using Google Forms to create a questionnaire for interested mentor and mentee participants in the area of cataloging. Mentors and mentees will be paired based on the information we collect.

"Cataloging and Beyond: The Year of Cataloging Research" was my first session on Sunday. It was a panel discussion, and the room was so packed that many attendees, including myself, were sitting on the floor in the back. The panelists, one of whom was Jane Greenberg, Erik's Ph.D. advisor, discussed how the data catalogers create opens up various areas of research for catalogers to explore. Catalogers' research can inform decisions about cataloging data and catalog design. Can we measure usefulness, and if so, how? Per Ms. Greenberg, three areas need research: automatic metadata generation, creator- or author-generated metadata, and metadata theory.

Following this session, I attended another panel discussion on the "Strategic Future of Print Collections in Research Libraries." Print on demand, the impact of scanning on physical books, and preservation were discussed in this session. My final meeting of this ALA was the Anthropology Librarians Discussion Group. I always learn much from attending this session. Topics included print and online bibliographic tools for Africa, for which I collected several useful handouts. It was proposed to request that the ANSS Committee develop a list of core academic library journals for anthropology.

Sunday was also a day for catching up with friends. Lauren C. and I had lunch with a graduate school classmate who is the business and economics reference librarian at Clemson. As mentioned in one of Susan’s posts, she, Erik and I had a lovely dinner with Waits and Christian.

It's been a while since I attended a conference with both Susan and Erik. Hanging out with them at conferences, I am assured of three things occurring: exploring the sights of the city, exercising (i.e., a lot of walking around), and having fun.

Saturday at ALA with Carolyn

Wednesday, July 15, 2009 11:13 pm

On Saturday, I attended "Workflow Tools for Automating Metadata Creation and Maintenance," a panel discussion made up of individuals who work on digital projects at their institutions.

Much of the talk was highly technical and I didn't quite understand everything, but one of the most interesting projects discussed was by Brown University's Ann Caldwell, Metadata Coordinator for the Center of Digital Initiatives, who spoke about a recent project assisting the Engineering Department with its upcoming accreditation. Engineering professors wanted to digitize materials such as syllabi and assignments so that the accreditation team could have them in advance of their visit. The Center made it easy for professors to put material into the repository by creating a very simple MODS (Metadata Object Description Schema) record form with required fields to fill in (e.g., date, title, genre) and providing an easy way for individuals to upload files (i.e., digital objects). Faculty decide how they want to set up folders for their material; they can dump everything into one folder or create multiple folders down to the micro level. Faculty also determine which individuals can see what. Because of the enormous amount of material being brought in to be digitized, the Center developed a tracking system. Due to the success of this project, the Engineering Department will continue digitizing its materials for future accreditations, and Ms. Caldwell indicated other departments were interested in doing the same.
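
For the non-catalogers: a MODS record with only a few required fields is a small thing. Here is a rough sketch of the kind of minimal record such a form might produce; the element choices are mine for illustration, not Brown's actual template.

```python
import xml.etree.ElementTree as ET

MODS_NS = "http://www.loc.gov/mods/v3"
ET.register_namespace("mods", MODS_NS)

def simple_mods(title, date, genre):
    """Build a bare-bones MODS record from a form's required fields."""
    mods = ET.Element(f"{{{MODS_NS}}}mods")
    title_info = ET.SubElement(mods, f"{{{MODS_NS}}}titleInfo")
    ET.SubElement(title_info, f"{{{MODS_NS}}}title").text = title
    origin = ET.SubElement(mods, f"{{{MODS_NS}}}originInfo")
    ET.SubElement(origin, f"{{{MODS_NS}}}dateCreated").text = date
    ET.SubElement(mods, f"{{{MODS_NS}}}genre").text = genre
    return ET.tostring(mods, encoding="unicode")

# Hypothetical example values
print(simple_mods("ENGN 0030 Syllabus", "2009-01-15", "syllabus"))
```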

As for metadata creation workflows: consistency, automation, streamlining, and true interoperability between systems are of utmost importance. With the help of metadata tools, librarians can do their jobs better and more efficiently. Smart systems are possible and necessary. We need to pay attention to user interface design for cataloging tools, because it is critical to the success of our data.

Next, I attended a four-hour panel discussion titled "Look Before You Leap: Taking RDA for a Test-Drive." Again, a highly technical presentation. RDA stands for Resource Description and Access, a new cataloging standard to be used for the description of all types of resources and content. It is compatible with established principles, models, and standards, and is adaptable to the needs of a wide range of resource description communities (museums, libraries, etc.). Tom Delsey began the session by comparing and contrasting AACR2 and RDA. Nanette Naught followed by previewing the RDA Toolkit, which is currently in the alpha testing stage. Sally McCallum of the Library of Congress spoke on new fields developed for the MARC record in conjunction with RDA. John Espley, Director of Design at VTLS, gave attendees a preview of what an RDA record would look like in the ILS he represents; his presentation finally shed some light for me on how an RDA cataloging record would appear in an online catalog.

The National Library of Medicine's Barbara Bushman described the upcoming testing of RDA at 23 select institutions. The testing will occur in OCLC Connexion as well as in various ILSs, Voyager being one. Once the RDA Online software is released sometime in November or December 2009, a preparation period including training for the testing institutions will run from January to March 2010. Formal testing will commence in April-June, followed in July-September by a formal assessment. In October 2010, a final report will be shared with the U.S. library community.

If and when RDA is approved for use, training for catalogers will be the next step. Knowledge and training about RDA for all library staff will need to take place as well. People on the front lines working with patrons in catalog instruction will need to know the differences between a specific work and its possible multiple manifestations (work and manifestation being FRBR terminology).

For more information, one can visit the RDA web site.

Needless to say, after this session ended, I was ready to head back to my hotel for some rest. I will post more information on the rest of my conference experiences on Friday.

