Professional Development

Author Archive

Steve at NASIG 2014

Friday, May 23, 2014 11:06 am

Since Chris and Derrik have already written their accounts of the 2014 NASIG Conference, I figure I'd better get on the ball. This was an unusual conference experience for me. As Vice President of NASIG, I was very involved with the planning of this conference, and I also had a lot of organizational business to conduct. That organizational business included planning for next year's conference, which will not only have a special celebration for our 30th anniversary, but will also feature NASIG's first ever joint programming with another organization, the Society for Scholarly Publishing. With all the meetings and talking I had to do, I'll confess that I didn't attend as many conference sessions as I normally do, but those I did attend were very interesting.

Chris and Derrik have given nice descriptions of the three vision sessions, which were all quite good. I think the most interesting break-out session I attended was "Acquisitions and Management of Digital Collections at the Library of Congress," given by Ted Westervelt from LC. I've known Ted for years, but didn't know how truly impressive and cool his job is. He manages the eDeposit program at LC, which acquires e-resources for the Library of Congress's permanent collection. The Copyright Deposit section at LC is charged with preserving material for the life of the Republic, which is quite a long-term commitment. Since 2009, the Copyright Deposit section has required publishers to submit two copies of every work deposited, but an exception was made for electronic material. The exemption was eventually dropped, and now electronic journals and electronic books have to be given to LC for the Copyright Deposit program. In addition to deposited material, the eDeposit section acquires material through purchase and gifts. They currently have 116 million unique files in inventory, comprising 2.74 petabytes of content. They are also providing web archiving for 8.6 billion files. The digital material acquired includes historical newspapers, web sites, reference works, e-serials, e-books, GIS data, and more than 60 other content flows. The section's operating philosophy is that preservation equals access, or, as Ted said, you can't serve what isn't preserved. To that end, LC has developed a set of format specifications for electronic materials to be preserved through the eDeposit program (LC has not officially released these specifications yet, but Ted gave us a sneak peek). LC uses a wide range of digital tools to store and manage their digital content. He said that the repository is being built in stages, and that it is important to think of repositories as a suite of tools and services.
That is, a repository isn't a single thing where you stick e-resources and then they're preserved forever; a repository is a process that relies on a number of tools. Ted also emphasized that you need to think about the entire lifecycle of a resource in a repository, which I think is very important. He pointed out that LC has a system in place for taking in materials, but that they need to scale it. They need to develop more digital collection breadth and depth. LC is currently demanding 230 e-serials via eDeposit, but that number will quadruple in the next few months. They need more capacity, both for storage and for processing of materials, and they need more standard and automated workflows. They also need to develop their collection development, preservation, metadata, and access policies. Even though that work still needs to be done, I find the scale and ambition of this project to be truly amazing, and I look forward to hearing more about this work in the future (I'll certainly be grilling Ted about his work when I see him again).

I also attended an interesting session by Rachel Erb of Colorado State University, who talked about how her library's Technical Services department used NASIG's recently published list of Core Competencies for Electronic Resources Librarians as a guide for reorganizing their department. Essentially, they found that they had too few people working on electronic resources and that the department was lacking needed skills. Using the Core Competencies as a guide, they were able to justify changing job descriptions (a hard task at their school) and to craft a justification for a new position to bring in the skills they needed. And I saw Richard Wallis from OCLC discuss linked data. The main thrust of his presentation was that libraries need to expose their data to the wider world, which is where linked data comes in. He used the pithy phrase: people want "things, not strings." That is, they don't just want text entries, they want entities, entire data sets about a topic.

In addition to the sessions, I did a lot of great networking, and had good conversations about developments in the world of cataloging, serials management, and the future of NASIG. Speaking of the future of NASIG, we’ve got some exciting projects in the works that I can’t quite talk about yet, but I’ll share when I can. Oh, and at this conference I was inaugurated as president of NASIG, and I confirmed the fact that, despite my tendency to be a chatterbox in private conversation, I am the world’s worst public speaker. Ah well, they didn’t elect me to give speeches.

Steve at 2014 North Carolina Serials Conference

Friday, March 21, 2014 4:51 pm

I attended the 2014 North Carolina Serials Conference last week, with quite a crew from ZSR. Ellen has already discussed the all-conference sessions, so I think I'll write a bit about the break-out sessions I attended. The update session on RDA had some bits of news that might be of interest to folks outside of cataloging: namely, that OCLC has announced that General Material Designators (or GMDs) will remain in legacy records (that is, records cataloged according to AACR2 rules) through March 2016. GMDs are those notes in square brackets next to titles in the catalog that say what kind of resource it is (for example, [electronic resource]). OCLC is planning to add certain RDA-related fields to their legacy records, including 33X fields that indicate carrier information, over the next few years. In addition to these announcements, Kurt Blythe from UNC-Chapel Hill shared some RDA-related changes to serials cataloging. It was pretty inside-baseball stuff (info on how to code provider-neutral online records in the 040 field, how to use indicators in 588 fields, the fact that parallel titles are considered core in CONSER cataloging, etc.), but it was interesting and useful to me. The other speaker at the session, Christee Pascale, discussed NCSU's RDA implementation. She said that most of the RDA training they gave to staff focused on how RDA is different from AACR2. A lot of it boiled down to "if you do X in AACR2, then you do Y in RDA." Pascale argued that this actually sold the staff short, because they didn't look enough at the conceptual underpinnings of RDA, especially the FRBR model. She argued that staff really need to have a solid grasp of the FRBR entities and the relationships between those entities, and that this will become a much more important issue when we begin to make the transition from MARC to BIBFRAME.

The other break-out session I attended was a presentation by Virginia Bacon and Ginny Boyer of ECU, who described how ECU merged the discovery services of their main library, medical library and (unofficial) music library. It was a long process, with a lot of discussion, a lot of persuasion and a lot of compromise. Eventually, they consolidated their web presence into a unified catalog, as well as a unified ILLiad presence, a unified Book Recall feature, a unified Ask a Librarian function, and a single WorldCat Local instance. The process has involved a number of roll-out stages, and constant marketing efforts to re-brand the separate main and medical libraries into a single ECU Libraries brand.

One last thing: in recent years, the NC Serials Conference has started having an expo, with tables for sponsoring companies and organizations to pass out literature and talk to conference attendees. NASIG is a regular sponsor of the conference, and, as the current Vice President of NASIG, I got to represent our organization. It was kind of fun to talk to folks about the joys of membership in NASIG.

Steve at 2014 LAUNC-CH Conference

Friday, March 21, 2014 2:52 pm

Last week I attended the 2014 LAUNC-CH Conference in Chapel Hill, with Sarah and Jeff. This year's theme was "Every Step of the Way: Supporting Student and Faculty Research," and there was a lot of talk about data sets and making research publicly available. Jeff has already admirably covered Nancy Fried Foster's keynote address, so I'll talk a bit about the concurrent sessions. The most interesting one to me was a session by Michael Crumpton and Kathryn Crowe of UNC-G called "Defining the Libraries' Role in Research: A Needs Assessment Case Study." They talked about how the UNC-G libraries surveyed researchers in 2013 to find out how they store and manage data. The survey (which had a 13% response rate) found that only 16% of researchers automatically generate back-ups. Furthermore, 75% of the researchers surveyed reported that they did not anticipate sharing their research data. The reasons were a mix: some didn't want to share their data, and some simply didn't expect to (so either data hoarding or thinking that nobody else would even want to see it). Analyzing the survey, they found a number of barriers to researchers sharing their data, including the large size of data sets, copyright concerns, and simply not knowing how to share data. They found that faculty weren't using best practices in managing their data, and that they need much more help in backing it up. The survey also found that many faculty were not even aware of the data management requirements of their university and of their funding agencies. To deal with these problems, the libraries at UNC-G have decided to initiate new education efforts, including expanding the time departmental liaisons have to work with their departments on data management issues. They had planned on hiring a new librarian to specialize in managing research data, but budget concerns killed the plan and forced them to re-direct their efforts into their existing liaison program.

Several of the other programs I attended discussed similar matters, but I found Kathy's and Mike's discussion to be the most fully developed. One interesting note was that Debbie Curry and Mohan Ramaswamy of NCSU discussed how their library recruited data ambassadors, who are either members of or liaisons to departments. These data ambassadors take a hands-on role in teaching faculty how to properly back up, store and manage their data. One other interesting item I picked up at the conference came from one of the afternoon lightning talks, where Ann Cooper of UNC-Chapel Hill talked about efforts at UNC's Wilson archives to preserve born-digital legacy media by converting material in outdated media formats to current formats. As a big music collector, I'm very interested in the process of converting material from outdated formats to usable formats.

Steve at 2014 ALA Midwinter

Monday, February 10, 2014 5:42 pm

Well, it looks like I’m bringing up the rear on reporting on my experience at the 2014 ALA Midwinter Conference, which is somewhat ironic, because I think I was the first person from ZSR to fly up to Philly. I had to get in town two days earlier than I normally would, so I could attend an all-day meeting of the NASIG Executive Board. NASIG isn’t affiliated with ALA, so we met off the conference grid, at the main library of the University of Pennsylvania. I can’t talk much about what we discussed because much of the material is confidential, but I can say that we discussed plans for our 2015 conference in Washington, D.C. The 2015 conference will feature NASIG’s first joint programming with another organization, namely the Society for Scholarly Publishing (SSP). It will also be NASIG’s 30th conference, which will require a special celebration. And it’ll be the year that I’m serving as NASIG’s President, so I’ll get to be right in the thick of planning it.

As for ALA Midwinter proper, much of my involvement revolved around committee meetings. My big committee responsibility is CC:DA (the Committee on Cataloging: Description and Access), which develops ALA's position on RDA. That means that we read and discuss proposed changes to RDA that come in from all sorts of constituencies. It's been really interesting to see how the process works. I won't bore you all by describing it in detail here, but I'd be happy to talk about it with anyone who is interested. CC:DA met for 4.5 hours on one day and 3 hours on another day, which is kind of a lot. We voted on and approved a couple of proposals (which will now move up to the Joint Steering Committee, which is the final arbiter of RDA), and had vigorous debate about several other proposals, including one on how to record the duration of recordings and one on a problem that has the colorfully melodramatic nickname "the cascading vortex of horror." The committee also saw a presentation on the RDA/ONIX Framework, which would radically change how resource content and resource carriers are described. The RDA/ONIX Framework is years away from implementation (consensus needs to be developed among the relevant constituencies), but it promises to enormously facilitate the machine processing of catalog data (including things like natural language searching) by providing a means to record very specific, very granular data. For example, the Framework allows for recording the sensory mode used to access a resource (sight, hearing, taste, touch, smell), the dimensionality of the resource (2-dimensional, 3-dimensional), and the movement of an image (still, moving). That's just a taste of the level of detail that it would be possible to record if the Framework were adopted. It's pretty complex and dense stuff, and I know I only grasped a portion of it.
On the more practical level, I have volunteered to serve on a CC:DA task force that has been working for a year or two on developing a list of personal relationship designators (things like father-son, teacher-student, etc.) for RDA.

On to other topics, Carolyn has already done an admirable job of recounting the Authority Control Interest Group meeting, so I will mention the Cataloging & Classification Research Interest Group session. We saw a presentation about an interesting project called the ProMusic Database, which is trying to make it easier to track the identity and roles of musicians, which can be tricky when you look at what a person did on a particular record. The example used was Quincy Jones, who is a composer, a performer and a producer. The ProMusic Database makes it easier to figure out what role or roles Quincy played on a given record. It is a joint project that involves the extensive databases of musician unions, music companies, etc. These professional organizations have a great interest in tracking this data, so they can track things like royalty payments to musicians.

I ended my Midwinter by attending the Update Forum of the Continuing Resources Cataloging Committee (which is another committee that I belong to). Much of it was inside baseball that would be of no interest to anyone but me, but one very interesting thing is that the ISSN (International Standard Serial Number) Centers are going to start assigning ISSNs to institutional repositories. So IRs are starting to be thought of as default continuing resources. Go figure.

Steve at NCLA 2013

Friday, October 25, 2013 5:27 pm

So, as you all know, the NCLA Conference was held here in Winston-Salem last week. Here’s what I did at it.

I served as a consultant to the Exhibits Committee this year, rather than chairing it, which was a big relief. I shared all the information with them that I could beforehand and visited Amy Harris and the rest of the Exhibits Committee often during the conference to see how things were going, be available for questions, and generally commiserate about what is a fairly tough job. They did fantastic work, in my opinion.

Since I could actually attend sessions at an NCLA Conference for the first time since 2003, not being tied down to managing the Exhibits or the Conference Store, I decided to focus my attention on seeing presentations by my fellow ZSR librarians.

I saw Roz's presentation "'New Research Shows' – Or Does It? Using Junk Science in Information Literacy Instruction," where Roz spoke about having students compare popular news reports of scientific studies to the studies themselves. Most popular reports of scientific studies get much if not most of the information wrong, from basic stuff like the number of study participants to the actual conclusions drawn by the study. In fact, many popular reports will say that a study concludes the exact opposite of what it actually says. Roz uses this exercise as a jumping-off point for discussing the peer review process with students as well as the politics of publishing. The crowd was very enthusiastic about the presentation, with one audience member saying flat out that she's copying the idea herself.

I also saw Hu’s presentation “‘Big Games’ in Academic Libraries.” I finally understood what happened to the video game nights we used to have a few years back. Turns out they were rather expensive and the attendance wasn’t so great, so they’ve been supplanted by Capture the Flag and Humans vs. Zombies. Hu talked about the good features of these two games, including that they are cheap to stage, popular, and get students into the library in a fun setting. His repeated statement that he has “the best library dean in the world” caused my friend from an institution that shall not be named to whisper to me jealously, “I hate you.” The crowd loved Hu’s presentation.

I saw Mary Beth's presentation with the wonderful Marvin Tillman called "Two Roads to Offsite Storage: Duke and Wake Forest." The audience, while somewhat small, was riveted and paid very close attention. These folks meant business and really wanted to hear about offsite storage options, in detail. Mary Beth and Marvin provided them with great detail. It was very interesting to get the perspective from two very different ends of the size scale, from Duke's massive operation for their own enormous collections (as well as storage for UNC-Chapel Hill) to our own more modestly-sized storage operation.

I also saw Megan’s presentation with Matt Reynolds of ECU, called “Stuff In Dusty Boxes: Connecting Undergraduates With Special Collections Holdings.” Megan spoke about her undergraduate history of the book class and its development, including how she was the one who initiated it. She spoke about the challenges involved in developing a new class, including getting approval from the curriculum committee, making logistical arrangements, recruiting students, and especially, course planning (she couldn’t find any other undergraduate history of the book classes to model hers on). Megan was enthusiastic about the class and drew lessons from the experience that included: be prepared, design the class around your collection strengths, keep your expectations realistic for undergrads, and have fun. The crowd really appreciated her presentation.

Unfortunately, my NCLA was cut a bit short by a cold that I was fighting all week, which led me to stay home on Friday, so I can't speak about the last day's activities.

Steve at ALA Annual 2013 (and RDA Training at Winthrop University)

Friday, July 12, 2013 4:08 pm

Since this is an ALA re-cap from me, you probably know what's coming: a lot of jabbering about RDA. But wait, this one includes even more jabbering about RDA, because right before leaving for Chicago, I went down to Winthrop University in Rock Hill, South Carolina for two full days of RDA training (I missed the final half day, because I had to fly to Chicago for ALA). The enterprising folks at Winthrop had somehow managed to wrangle an in-person training session taught by Jessalyn Zoom, a cataloger from the Library of Congress who specializes in cataloging training through her work with the PCC (Program for Cooperative Cataloging). In-person training by experts at her level is hard to come by, so Winthrop was very lucky to land her. Leslie and I went to the training, along with Alan Keeley from PCL and Mark McKone from Carpenter. We all agreed that the training was excellent and really deepened our understanding of the practical aspects of RDA cataloging.

The training sessions were so good they got me energized for ALA and the meetings of my two committees, the Continuing Resources Section Cataloging Committee (i.e., serials cataloging) and CC:DA, the Committee on Cataloging: Description and Access (the committee that develops ALA's position on RDA; I'm one of the seven voting members on this committee. I know in a previous post I wrote that I was one of nine voting members, but I got the number wrong, it's seven). CC:DA met for four hours on Saturday afternoon and three hours on Monday morning, so it's a pretty big time commitment. I also attended the BIBFRAME Update Forum, the final RDA Update Forum and a session on RDA Implementation Stories. Because so much of the discussion from these various sessions overlapped, I think I'll break my discussion of these sessions down thematically.

Day-to-Day RDA Stuff

The RDA Implementation Stories session was particularly useful. Erin Stahlberg, formerly of North Carolina State, now of Mount Holyoke, discussed transitioning to RDA at a much smaller institution. She pointed out that acquisitions staff never really knew AACR2, or at least never really had any formal training in AACR2. What they knew about cataloging came from on-the-job, local training. Similarly, copy catalogers have generally had no formal training in AACR2, beyond local training materials, which may be of variable quality. With the move to RDA, both acquisitions staff and especially copy catalogers need training. Stahlberg recommended skipping training in cataloging formats that you don't collect in (for example, if you don't have much of a map collection, don't bother with map cataloging training). She recommended that staff consult with co-workers and colleagues. Acknowledge that everyone is trying to figure it out at the same time. Consult the rules, and don't feel like you have to know it all immediately. Mistakes can be fixed, so don't freak out. Also, admit that RDA may not be the most important priority at your library (heaven forbid!). But she also pointed out that training is necessary, and you need to get support from your library administration for training resources. Stahlberg also said that you have to consider how much you want to encourage cataloger's judgment, and be patient, because catalogers (both professional and paraprofessional) will be wrestling with issues they've never had to face before. She encouraged libraries to accept RDA copy, accept AACR2 copy, and learn to live with the ambiguity that comes from living through a code change.

Deborah Fritz of The MARC of Quality echoed many of Stahlberg's points, but she also emphasized that copy cataloging has never been as easy as some folks think it is, and that cataloging through a code change is particularly hard. She pointed out that we have many hybrid records that are coded partly in AACR2 and partly in RDA, and that we should just accept them. Fritz also pointed out that so many RDA records are being produced that small libraries who thought they could avoid RDA implementation now have to get RDA training to understand what's in the new RDA copy records they are downloading. She also said to "embrace the chaos."

Related to Fritz's point about downloading RDA copy: during the RDA Forum, Glenn Patton of OCLC discussed OCLC's policy on RDA records. OCLC is still accepting AACR2-coded records and is not requiring that all records be in RDA. Their policy is for WorldCat to be a master record database with one record per manifestation (edition) per language. The preference will be for an RDA record. So, if an AACR2 record is upgraded to RDA, that will be the new master record for that edition. As you can imagine, this will mean that the number of AACR2 records will gradually shrink in the OCLC database. There's no requirement to upgrade an AACR2 record to RDA, but if it happens, great.

Higher Level RDA Stuff

A lot of my time at ALA was devoted to discussions of changes to RDA. In the Continuing Resources Section Cataloging Committee meeting, we discussed the problem of which level of description the ISSN is associated with: the Manifestation level or the Expression level (for translations). I realize that this may sound like the cataloging equivalent of debating how many angels can dance on the head of a pin (if it doesn't sound like flat-out gibberish), but trust me, there are actual discovery and access implications. In fact, I was very struck during this meeting and in both of my CC:DA meetings by the passion for helping patrons that was displayed by my fellow catalogers. I think a number of non-cataloging librarians suspect that catalogers prefer to develop arcane, impenetrable systems that only they can navigate, but I saw the exact opposite in these meetings. What I saw were people who were dedicated to helping patrons meet the four user tasks outlined by the FRBR principles (find, identify, select and obtain resources), and who even cited these principles in their arguments. The fact that they had disagreements over the best ways to help users meet these needs led to some fairly passionate arguments. One proposal that we approved in the CC:DA meetings that is pretty easy to explain is a change to the cataloging rules for treaties. RDA used to (well, still does until the change is implemented) require catalogers to create an access point, or heading, for the alphabetically first country that is a participant in a treaty. So, the catalog records for a lot of treaties have an access point for Afghanistan or Albania, just because they come first alphabetically, even if it's a UN treaty that has 80 or 90 participant countries and Afghanistan or Albania aren't major players in the treaty.
The new rules we approved will require creating an access point for the preferred title of the treaty, with the option of adding an access point for any country you want to note (for example, if you wanted to have an access point for the United States for every treaty we participate in). That's just a taste of the kinds of rule changes we discussed; I'll spare you the others, although I'd be happy to talk about them with you if you're interested.

One other high level RDA thing I learned that I think is worth sharing had to do with Library of Congress’s approach to the authority file. RDA has different rules for formulating authorized headings, so Library of Congress used programmatic methods to make changes to a fair number of their authority records. Last August, 436,000 authority records were changed automatically during phase 1 of their project, and in April of this year, another 371,000 records were changed in phase 2. To belabor the obvious, that’s a lot of changed authority records.


BIBFRAME Stuff

BIBFRAME is the name of a project to develop a new encoding format to replace MARC. Many non-catalogers confuse and conflate AACR2 (or RDA) and MARC. They are very different. RDA and AACR2 are content standards that tell you what data you need to record. MARC is an encoding standard that tells you where to put the data so the computer can read it. It's rather like accounting (which, admittedly, I know nothing about, but I looked up some stuff to help this metaphor). You can do accounting with the cash basis method or the accrual basis method. Those methods tell you what numbers you need to record and keep track of. But you can record those numbers in an Excel spreadsheet or a paper ledger or Quicken or whatever. RDA and AACR2 are like accounting methods, and MARC is like an Excel spreadsheet.

Anyway, BIBFRAME is needed because, with RDA, we want to record data that is just too hard to fit anywhere in the MARC record. Chris Oliver elaborated a great metaphor to explain why BIBFRAME is needed. She compared RDA to TGV trains in France. These trains are very fast, but they need the right track to run at peak speeds. TGV trains will run on old-fashioned standard track, but they'll run at regular speeds. RDA is like the TGV train. MARC is like standard track, and BIBFRAME is like the specialized TGV-compatible track. However, BIBFRAME is not being designed simply for RDA. BIBFRAME is expected to be content-standard agnostic, just as RDA is encoding-standard agnostic (to go back to my accounting metaphor: you can do cash basis accounting in Excel or a paper ledger, or do accrual basis in Excel or a paper ledger).

BIBFRAME is still a long way away. Beecher Wiggins of the Library of Congress gave a rough guess of the transition to BIBFRAME taking 2 to 5 years, but, from what I've seen, it'll take even longer than that. Eric Miller of Zepheira, one of the key players in the development of BIBFRAME, said that it is still very much a work-in-progress and is very draft-y.

If anyone would like to get together and discuss RDA or BIBFRAME or any of these issues, just let me know, I’d be happy to gab about it. Conversely, if anyone would like to avoid hearing me talk about this stuff, I can be bribed to shut up about it.

Steve at NASIG 2013

Monday, July 8, 2013 12:24 pm

The 2013 NASIG Conference was held in Buffalo, New York, from June 6th to 9th. I flew in two days early so I could attend an all-day Executive Board meeting on the 5th, in my role as incoming Vice President. It was nice to be back on the Board and get into the issues facing NASIG, although I can’t really talk about what we discussed (confidentiality and all that).

As for the conference content, the opening and closing Vision Sessions were particularly interesting and formed neat bookends (Derrik did a great job describing Megan Oakleaf's Vision Session on the second day). First up was Bryan Alexander, of the National Institute for Technology in Liberal Education (NITLE). Alexander described how computer interfaces have changed dramatically and how they have grown in ubiquity. He talked about how the use of computer technology to reach out to the public has grown so much that even the government is using computers to communicate in unprecedented ways (in a funny coincidence, just after he said this, I fidgeted with my phone and checked my email, and received an email from the North Carolina Wildlife Commission reminding me that my fishing license was due to expire and offering me the chance to renew online. From a meeting room in Buffalo, NY). Alexander was very matter-of-fact about how pervasive computer technology is throughout our lives. He described a project, or possibly a new app, in Denmark that uses facial recognition technology to identify people in a photograph, which then takes you directly to their Facebook page and social media presence. I was shocked by this, because it sounds like a stalker's delight, but Alexander did not seem disturbed by the development. Perhaps he is concerned about the privacy implications of such technology, but it wasn't apparent during his speech. Alexander went on to describe three possible futures that he sees developing from the proliferation of information technology: 1) The Phantom Learning World – In this world, schools and libraries are rare, because information is available on demand anywhere. Institutions supplement content, not vice versa, and MOOCs are everywhere. 2) The Open World – A future where open source, open access and open content have won.
Global conversations increase exponentially in this world, but industries such as publishing collapse, and it is generally chaotic (malware thrives, privacy is gone). 3) The Silo World – In this world, closed architecture has triumphed and there are lots of stacks and silos. Campuses have to contend with increasingly difficult IP issues. Alexander acknowledged that the three variations were all pretty extreme and what eventually develops will probably have features of all three. But he emphasized that, as information professionals, we have to participate in shaping our information future.

While Alexander’s speech seemed to accept that the horse was already out of the barn when it comes to our privacy in the information technology realm, Siva Vaidhyanathan’s Vision Session speech was very much focused on privacy issues. Vaidhyanathan is from the University of Virginia, and he wrote the book “The Googlization of Everything (And Why We Should Worry).” He discussed how Google tries to read our minds and anticipate our behavior, based on our previous online behavior. He argued that Google’s desire to read our minds is actually the reason behind the Google Books project, which won’t make money for them. So, why do they do it? Vaidhyanathan argued that Google is trying to reverse engineer the sentence. They want to create an enormous reservoir of millions and millions of sentences, so they can sift through them to find patterns and simulate artificial intelligence. This would give a huge boost to Google’s predictive abilities. Furthermore, he argued that Google is in a very close relationship with the government, which should be worrying (particularly in light of the Edward Snowden case, which broke just days before his speech). Considering the amount of data at Google’s disposal, this could have enormous consequences. Vaidhyanathan argued that there is currently no incentive to curb Big Data, from the point of view of government, business or even academia. Why go small when there’s so much data to trawl through? Nobody’s trying to stop it, even if they should be. Vaidhyanathan went on to discuss Jeremy Bentham’s idea of the Panopticon, a prison with a circular design, with the cells placed in a ring around a central guard tower. The guard tower would have mirrored windows, which would prevent prisoners from ever knowing whether they were being watched at any particular time. This was presumed to keep prisoners on their best behavior.
Vaidhyanathan argued that we now live in a Cryptopticon, where we don’t know who is watching us and when (here he gave the example of store loyalty cards, which are used to create a profile of your purchases that is cross-referenced with your credit card and shared with other commercial entities). Unlike the Panopticon, which had the goal of keeping you on your best behavior, the Cryptopticon has the goal of catching you at your worst behavior. And while the Panopticon was visible, the state wants its systems of surveillance to be invisible (hence the Cryptopticon). The state wants you to do what comes naturally, so as to catch you if you do something wrong. Vaidhyanathan argued that hidden instruments of surveillance are particularly worrying. For example, he discussed the No Fly List and the Terrorist Watch List. We don’t know what it takes to get on or off one of those lists. In essence, we’re not allowed to know what laws are governing us, and that’s wrong. And these lists are very fallible. While there are a lot of false positives on the lists (people who don’t belong on the lists but are on them, such as the late Sen. Edward Kennedy), there are also a lot of false negatives (people who aren’t on the lists but should be, such as the Boston Marathon bombers). The No Fly and Terrorist Watch Lists could be useful, but they are poorly executed. Vaidhyanathan argued that these lists might function better with more transparency. In conclusion, Vaidhyanathan discussed how, thanks to the proliferation of data about our lives on the web, we are creating a system where it’s hard to get a second chance. Youthful indiscretions and stupid mistakes will be with you for good. It made me think that the classic Vice Principal threat, “This will go down on your permanent record,” is now true.
Vaidhyanathan argued that while savvy technology users may be able to take measures to protect their privacy on the web, we should be worried about protecting everyone’s privacy, not just our own.
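Vaidhyanathan’s image of Google sifting millions of sentences for recurring patterns can be illustrated in miniature with n-gram counting. This is only a hedged sketch of the general technique, with an invented toy corpus; it is not a description of Google’s actual methods.

```python
from collections import Counter

def ngrams(tokens, n):
    """Yield consecutive n-word sequences from a token list."""
    for i in range(len(tokens) - n + 1):
        yield tuple(tokens[i:i + n])

# A tiny stand-in for a reservoir of millions of sentences.
corpus = [
    "the library preserves the record",
    "the library serves the public",
    "the archive preserves the record",
]

counts = Counter()
for sentence in corpus:
    counts.update(ngrams(sentence.split(), 2))

# The most frequent bigrams hint at structural patterns in the language.
print(counts.most_common(3))
```

At scale, frequencies like these feed predictive models; the toy version just makes the underlying counting visible.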

Of course I attended a number of other sessions as well, but I think I’ve already written enough and hopefully provided some food for thought.

Steve at 2013 LAUNC-CH Conference

Friday, April 5, 2013 5:17 pm

I attended the 2013 LAUNC-CH Conference in Chapel Hill on March 11. This year’s theme was “True Collaborations: Creating New Structures, Services and Breakthroughs.” The session that interested me the most was the keynote address by Rick Anderson, Interim Dean of the J. Willard Marriott Library at the University of Utah.
As is typical of Rick, his speech had a provocative approach, which was apparent in its title, “The Purpose of Collaboration Is Not to Collaborate.” By this, he means that there may be plenty of benefits to be gained by collaborating, but that you should never collaborate on something merely to collaborate. Here are his major points:

Ground Rules and First Principles:
– Humility
– Patrons come first (a guiding concern here at ZSR)
– Waste nothing
– Fail often, fail early, write an article, move on
– Know what is sacred and what is instrumental
– Keep means and ends in proper relation

Means and Ends:
– The purpose of innovation is not to innovate…but to improve.
– The purpose of committees is not to meet…but to solve problems.
– The purpose of risk-taking is not to take risks…but to do new and better things.
– The purpose of collaboration is not to collaborate.

Reasons to Collaborate:
– To create leverage
• Economies of scale
• Increased impact
– To improve services
• Draw on more brains
• Include multiple perspectives
– To build relationships
• On campus
• External to campus
– To bring complexity indoors

Bringing Complexity Indoors
– Good complexity and bad complexity (AKA Richness vs. Confusion). You want to bring bad complexity indoors rather than make patrons suffer with whatever is problematic.
– Who is paying and who is paid?
• Patrons as “customers”
– What is our goal?
• Education vs. frustration

Opportunities to Collaborate
– Osmotic (by osmosis, Anderson means, if you have open library space, it will be filled. Make sure your library is filled with stuff you want in it.)
– Serendipitous
– Strategic (Institutional)
• To advance university goals
• To advance library goals
– Strategic (Political)
• To bank political capital
• To strengthen library’s brand

Collaborating Better
– Think outside the ghetto. Don’t just focus on the library world; other fields may have fruitful areas for collaboration. Anderson gave a wonderful example of this, describing the Pumps and Pipes Conference, an annual conference in Houston that brings together heart doctors and people from the oil industry. Both fields are concerned with pumping viscous fluid through tubes, and have much to teach each other.
– Work from ends to means (not vice versa).
– Listen promiscuously
– Ask this question up front: “How will we know when the task is accomplished?”
– If project is open-ended, assess regularly
– Evaluate outcomes, not processes.

Steve at ALA Midwinter 2013

Friday, February 8, 2013 2:10 pm

Although my trip to Seattle for the ALA Midwinter Conference had a rough start (flight delayed due to weather, nearly missed a connecting flight, my luggage didn’t arrive until a day later), I had a really good, productive experience. This Midwinter was heavy on committee work for me, and I was very focused on RDA, authority control and linked data. If you want a simple takeaway from this post, it’s that RDA, authority control and linked data are all tightly bound together and are important for the future of the catalog. If you want more detail, keep reading.
My biggest commitment at the conference was participating in two long meetings (over four hours on Saturday afternoon and three hours on Monday morning) of CC:DA (Committee on Cataloging: Description and Access). I’m one of nine voting members of CC:DA, which is the committee responsible for developing ALA’s position on RDA. The final authority for making changes and additions to RDA is the JSC (Joint Steering Committee), which has representation from a number of cataloging constituencies, including ALA, the national library organizations of Canada, the UK, and Australia, as well as other organizations. ALA’s position on proposals brought to the JSC is voted on by CC:DA. Membership on this committee involves reading and evaluating a large number of proposals from a range of library constituencies. Much of the work of the committee has so far involved reviewing proposals regarding how to form headings in bibliographic records, which is, essentially, authority control work. We’ve also worked on proposals to make the rules consistent throughout RDA, to clarify the wording of rules, and to make sure that the rules fit with the basic principles of RDA. It has been fascinating to see how interconnected the various cataloging communities are, and how they relate to ALA and CC:DA. As I said, I am one of nine voting members of the committee, but there are about two dozen non-voting representatives from a variety of committees and organizations, including the Music Library Association, the Program for Cooperative Cataloging, and the Continuing Resources Cataloging Committee of ALCTS.
During our Monday meeting, we saw a presentation by Deborah Fritz of the company The MARC of Quality on a visualization tool called RIMMF (RDA In Many Metadata Formats). RIMMF shows how bibliographic data might be displayed when RDA is fully implemented. The tool is designed to take RDA data out of MARC, because it is hard to think of how data might relate in RDA without the restrictions of MARC. RIMMF shows how the FRBR concepts of work, expression and manifestation (which are part of RDA) might be displayed by a public catalog interface. It’s still somewhat crude, but it gave me a clearer idea of the kinds of displays we might develop, as well as a better grasp on the eventual benefits to the user that will come from all our hard work of converting the cataloging world to RDA. RIMMF is free to download and we’re planning to play around with it some here in Resource Services.
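The FRBR layering that RIMMF visualizes can be sketched as a simple linked data model: a work is realized through expressions, each embodied in manifestations. This is a minimal, hypothetical sketch for illustration, not RIMMF’s actual internal format.

```python
from dataclasses import dataclass, field

# Hypothetical minimal FRBR model: Work -> Expression -> Manifestation.
# (An illustration of the concepts, not RIMMF's real data structures.)

@dataclass
class Manifestation:
    carrier: str   # e.g. "hardcover", "PDF"
    year: int

@dataclass
class Expression:
    language: str
    manifestations: list = field(default_factory=list)

@dataclass
class Work:
    title: str
    expressions: list = field(default_factory=list)

moby = Work("Moby-Dick")
english = Expression("English")
english.manifestations.append(Manifestation("hardcover", 1851))
english.manifestations.append(Manifestation("PDF", 2010))
moby.expressions.append(english)

# A catalog display can now group both manifestations under one work,
# instead of presenting two unrelated flat records.
for expr in moby.expressions:
    for m in expr.manifestations:
        print(moby.title, expr.language, m.carrier, m.year)
```

The payoff for users is exactly the grouping shown in the loop: editions and formats hang together under the work they belong to.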
I also attended my first meeting of another committee of which I am a member, the Continuing Resources Cataloging Committee of the Continuing Resources Section of ALCTS. Continuing resources include serials and web pages, so CRS is the successor to the old Serials Section. We discussed the program that we had arranged for that afternoon on the possibilities of using linked data to record serial holdings. Unfortunately, I had to miss the program due to another meeting, but I’m looking forward to seeing the recording. We also brainstormed ideas for our program at Annual in Chicago, and the committee’s representative to the PCC Standing Committee on Training gave us an update on RDA training initiatives.
The most interesting other meeting that I attended was the Bibframe Update Forum. Bibframe is the name for an initiative to develop a data exchange format to replace the MARC format(s). The Bibframe initiative hopes to develop a format that can make library data into linked data, that is, data that can be exchanged on the semantic web. Eric Miller, from the company Zepheira (which is one of the players in the development of Bibframe), explained that the semantic web is about linking data, not just documents (as a metaphor, think about old PDF files that could not be searched, but were flat documents. The only unit you could search for was the entire document, not the meaningful pieces of content in the document). The idea is to create recombinant data, that is, small blocks of data that can be linked together. The basic architecture of the old web leaned toward linking various full documents, rather than breaking down the statements into meaningful units that could be related to each other. The semantic web emphasizes the relationships between pieces of data. Bibframe hopes to make it possible to record the relationships between pieces of data in bibliographic records and to expose library data on the Web and make it sharable. At the forum, Beacher Wiggins told the audience about the six institutions that are experimenting with the earliest version of Bibframe: the British Library, the German National Library, George Washington University, the National Library of Medicine, OCLC, and Princeton University. Reinhold Heuvelmann of the German National Library said that the model is defined on a high level, but that it needs to have more detail developed to allow for recording more granular data, which is absolutely necessary for fully recording the data required by RDA.
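The “recombinant data” Miller described is usually expressed as subject-predicate-object statements (triples) that can be linked and traversed. Here is a minimal sketch of that idea; the URIs and predicate names are invented placeholders, not actual Bibframe vocabulary.

```python
# Each fact is a standalone (subject, predicate, object) statement.
# URIs and predicates below are invented for illustration only.
triples = [
    ("http://example.org/work/1", "title", "Moby-Dick"),
    ("http://example.org/work/1", "creator", "http://example.org/person/melville"),
    ("http://example.org/person/melville", "name", "Herman Melville"),
]

def objects_of(subject, predicate):
    """Follow links: every object attached to a subject by a predicate."""
    return [o for s, p, o in triples if s == subject and p == predicate]

# Traverse from the work to its creator's name: two linked statements,
# where a MARC record would hold a single flat text string.
creator = objects_of("http://example.org/work/1", "creator")[0]
print(objects_of(creator, "name"))  # ['Herman Melville']
```

Because the creator is a link rather than a string, any other record (or any other site on the web) can point at the same entity, which is what makes the data shareable.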
Ted Fons of OCLC spoke of how Bibframe is an attempt to develop a format that can carry the data libraries need, and to allow library data to interact with other data and with the wider web. Fons said that Bibframe data has identifiers that are URIs which can be web accessible. He also said that Bibframe renders bibliographic data as statements that are related to each other, rather than as self-contained records, as with MARC. Bibframe breaks free of the constraints of MARC, which basically rendered data as catalog cards in electronic format. Bibframe is still going through quite a bit of development, but it is moving quickly. Sally McCallum of the MARC Standards Office said that they hope to finalize aspects of the Bibframe framework by 2014, but acknowledged that, “The change is colossal and the unexpected will happen.”
Actually, I think that’s a good way to summarize my thoughts on the current state of the cataloging world after attending this year’s Midwinter, “The change is colossal and the unexpected will happen.”

Steve at NASIG 2012

Thursday, June 14, 2012 5:03 pm

Last Thursday, Chris, Derrik and I hopped in the library van and drove to Nashville for the NASIG Conference, returning on Sunday. It was a busy and informative conference, full of lots of information on serials and subscriptions. I will cover a few of the interesting sessions I attended in this post.
One such session was called “Everyone’s a Player: Creation of Standards in a Fast-Paced Shared World,” which discussed the work of NISO and the development of new standards and “best practices.” Marshall Breeding discussed the ongoing development of the Open Discovery Initiative (ODI), a project that seeks to identify the requirements of web-scale discovery tools, such as Summon. Breeding pointed out that it makes no sense for libraries to spend millions of dollars on subscriptions if nobody can find anything. So, in this context, it makes sense for libraries to spend tens of thousands on discovery tools. But, since these tools are still so new, there are no standards for how they should function and operate with each other. ODI plans to develop a set of best practices for web-scale discovery tools, and is beginning this process by developing a standard vocabulary as well as a standard way to format and transfer data. The project is still in its earliest phases and will have its first work available for review this fall. Also at this session, Regina Reynolds from the Library of Congress discussed her work with the PIE-J initiative, which has developed a draft set of best practices that is ready for comment. PIE-J stands for the Presentation & Identification of E-Journals; its best practices give publishers guidance on how to present title changes, issue numbering, dates, ISSN information, publishing statements, etc. on their e-journal websites. Currently, it’s pretty much the Wild West out there, with publishers following unique and puzzling practices. PIE-J hopes to help clean up the mess.
Another session that was quite useful was on “CONSER Serials RDA Workflow,” where Les Hawkins, Valerie Bross and Hien Nguyen from the Library of Congress discussed the development of RDA training materials at the Library of Congress, including CONSER serials cataloging materials and general RDA training materials from the PCC (Program for Cooperative Cataloging). I haven’t had a chance yet to root around on the Library of Congress website, but these materials are available for free, and include a multi-part course called “Essentials for Effective RDA Learning” that includes 27 hours (yikes!) of instruction on RDA, including a 9-hour training block on FRBR, a 3-hour block on the RDA toolkit, and 15 hours on authority and description in RDA. This is for general cataloging, not specific to serials. Also, because LC is working to develop a replacement for the MARC formats, there is a visualization tool called RIMMF available that allows for creating visual representations of records and record-relationships in a post-MARC record environment. It sounds promising, but I haven’t had a chance to play with it yet. Also, the CONSER training program, which focuses on serials cataloging, is developing a “bridge” training plan to transition serials catalogers from AACR2 to RDA, which will be available this fall.
Another interesting session I attended was “Automated Metadata Creation: Possibilities and Pitfalls” by Wilhelmina Randtke of Florida State University Law Research Center. She pointed out that computers like black and white decisions and are bad with discretion, while creating metadata is all about identifying and noting important information. Randtke said computers love keywords but are not good with “aboutness” or subjects. So, in her project, she tried to develop a method to use computers to generate metadata for graduate theses. Some of the computer talk got very technical and confusing for me, but her discussion of subject analysis was fascinating. Using certain computer programs for automated indexing, Randtke did a data scrape of the digitally-encoded theses and identified recurring keywords. This keyword data was run through ontologies/thesauruses to identify more accurate subject headings, which were applied to the records. A person needs to select the appropriate ontology/thesaurus for the item(s) and review the results, but the basic subject analysis can be performed by the computer. Randtke found that the results were cheap and fast, but incomplete. She said, “It’s better than a shuffled pile of 30,000 pages. But, it’s not as good as an organized pile of 30,000 pages.” So, her method showed some promise, but still needs refinement.
Of course there were a number of other interesting presentations, but I have to leave something for Chris and Derrik to write about. One idea that particularly struck me came from Rick Anderson during his thought-provoking all-conference vision session on the final day: “To bring simplicity to our patrons means taking on an enormous level of complexity for us.” That basic idea has been something of an obsession of mine for the last few months while wrestling with authority control and RDA and considering the semantic web. To make our materials easily discoverable by the non-expert (and even the expert) user, we have to make sure our data is rigorously structured, and that requires a lot of work. It’s almost as if there’s a certain quantity of work that has to be done to find stuff, and we either push it off onto the patron or take it on ourselves. I’m in favor of taking it on ourselves.
The slides for all of the conference presentations are available online for anyone who is interested. You do not need to be a member of NASIG to check them out.
