Professional Development

During February 2011...

ASERL Webinar on Next-Generation Discovery Systems

Friday, February 25, 2011 2:58 pm

Roz, Erik, Ellen M., Kaeley, Craig, Audra, Rebecca, Kevin, Derrick, Susan and Mary S. attended the ASERL Next Gen Discovery Tools session.

The session was an overview of various next-generation discovery systems (e.g., Serials Solutions' Summon, WorldCat Local, and EBSCO Discovery Service, among others). ECU reported on their methodology for evaluating and selecting a product (ultimately Summon). Following the launch, they added OAI-harvested digital collections, used Google Analytics to measure use (with OpenURL clicks as the measure), and worked with Serials Solutions to resolve data mapping issues.

ECU also mentioned a recent issue of Library Technology Reports by Jason Vaughan at UNLV that reviews these systems.

NCSU followed with an overview of their Summon experience. Libraries in general seem to recognize that a main purpose of these systems is to simplify the initial research process, but the NCSU folks also discussed some concerns of advanced users (information literacy instruction, depth of search features). NCSU shared usage percentages for a single-search system that grouped results into separate columns (Articles, Books & Media, Website). Articles were selected over 40% of the time, Books & Media were a close second at 36%, and the Website column was used only around 2% of the time!

Overall a very informative session.

ASERL Webinar – ASERL Members e-Reader Experiences

Friday, February 18, 2011 2:56 pm

On Friday, Feb. 18th, a group of ZSR colleagues and I took in a webinar put on by ASERL member libraries discussing their experiences with e-Readers. Speakers included Nancy Gibbs from Duke U., Millie Jackson and Beth Holley from University of Alabama, Eleanor Cook from ECU, and Valerie Boulus from Florida International U. The webinar was organized by John Burger.

Valerie Boulus started the talk by discussing the general specifications of e-Reader devices such as the Kindle. She then moved on to specific devices and their features: the Kindle (v. 3), the Nook (v. 1), the Nook Color (v. 1), and the Sony Readers. She also discussed file formats for the devices, as well as Digital Rights Management (DRM).

Nancy Gibbs discussed Duke's implementation of an e-Reader program. They began in fall 2009 and spent around $20,000 on devices and content. They tried to put high-demand titles on the devices and provide multiple copies by loading the same e-books onto a number of devices at once. She did not speak to the copyright and licensing issues this raises, however.

Alabama started their program in fall 2009 as well. They obtained funding through a fundraiser and purchased only Kindles. Their individual libraries selected the original content, then let patrons request additional titles, excluding textbooks. They spent approximately $2,000 on content last fiscal year. They do not treat the e-reader titles as part of collection development, since the program is still a pilot and they don't feel the e-book format is stable enough. Alabama adds titles to each of their Kindles individually and manages new purchases through a spreadsheet.

Back at Duke, they replicate content across all devices instead of dividing it up among them. They also maintain a suggestion email address for patrons. Content can be loaded on six Kindles apiece, and on as many Nooks as you want. They discussed the money savings but not the legality; I hope for their sake that no book publishers were on this webinar! The talk then moved on to the nuts and bolts of adding titles to the devices, as well as problems such as titles disappearing off devices thanks to Amazon.

Eleanor Cook from ECU began her portion by discussing the sales tax issue with e-books, though WFU operates differently than most other schools in this regard, according to Lauren C.

Unfortunately I had to leave at this point in the talk, though it was certainly interesting to hear others' experiences with some of the issues we have encountered here at ZSR with similar technologies.

iRODS user conference

Thursday, February 17, 2011 7:58 pm

Today I went to the iRODS user conference in Chapel Hill, NC. iRODS stands for integrated Rule-Oriented Data System. The system is intended to serve as a data and digital object curation platform that supports curation activities (e.g., preservation) and access activities (e.g., create, access, replace, update, version). iRODS is usable as a storage subsystem for DSpace and Fedora, as well as a number of other products, and is flexible enough to serve as a platform for other file system services (e.g., Dropbox-style storage).
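
To make the "create, access, annotate" side a bit more concrete, here is a minimal sketch of what a client interaction can look like, using the python-irodsclient package. This is my own illustration, not something demonstrated at the conference; the host, zone, credentials, and paths are placeholders.

    # A minimal sketch (assumed tooling: the python-irodsclient package) of
    # storing an object in iRODS and attaching curation metadata to it.
    # Host, zone, credentials, and paths below are placeholders.
    from irods.session import iRODSSession

    with iRODSSession(host="irods.example.edu", port=1247, user="curator",
                      password="changeme", zone="exampleZone") as session:
        target = "/exampleZone/home/curator/report.pdf"

        # Ingest a local file into the data grid
        session.data_objects.put("report.pdf", target)

        # Attach preservation metadata (AVU triples) to the stored object
        obj = session.data_objects.get(target)
        obj.metadata.add("preservation_action", "ingest", "2011-02-17")
        obj.metadata.add("checksum_verified", "true")
        # Replication, versioning, and retention would be handled by
        # server-side rules rather than by the client.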

The morning session focused on new features in version 2.5. As I was not familiar with the system, I spent the time stepping through a tutorial on the iRODS website. The afternoon focused on use cases, which were rather interesting.

The first use case focused on continuous integration using Maven, Git, and Hudson, which are tools we have been discussing internally here at ZSR. The development environment at RENCI is called GForge and appears to be a solid approach to enabling distributed development of services. We also heard from DataDirect Networks, who discussed how they are using iRODS to move away from traditional file system work. I also got to see some demonstrations of use cases in archives and file management systems. Lots more information about iRODS and the conference is available on the iRODS wiki.

One fun project I learned about involved the National Climatic Data Center. For me, the really incredible idea was that an iRODS approach to file system storage, combined with lots of good metadata, would enable a distributed (multi-institution, multi-site) structure that researchers could use to do retrievals and run processes based on common metadata elements or other file identifiers. For more info, see the TUCASI Project.

Sarah at the Lilly Conference

Thursday, February 17, 2011 4:17 pm

On Feb. 4th-6th, I attended the Lilly Conference on College and University Teaching with the support of the Faculty Teaching Initiative Grant sponsored by the Teaching and Learning Center. All of the sessions that I attended were thought-provoking and broadened my view of teaching. Here are some highlights from the conference:

I attended the session on "Defining Effective Teaching," in which Leslie Layne from Lynchburg College surveyed students and faculty on how they define "effective teaching." Both students and faculty agreed that it is important that the teacher "knows the subject material well." Faculty also ranked being "organized and well-prepared for class" and "[outlining] expectations clearly and accurately" as important. Interestingly, students' responses differed from the faculty's; students also ranked the following as important: 1) "is accessible to students"; 2) "uses a variety of teaching methods or formats"; 3) "keeps students interested for the whole class period; makes the class enjoyable".

I also attended a crowded session on “What Makes a Great Teacher? (or What Makes a Teacher Great?)” At the beginning of the session, Scott Simkins, Director of the Academy for Teaching and Learning at N.C. A&T State University, highlighted the “Professors (& Learners) of the Year,” which is an award given by the Council for Advancement and Support of Education and the Carnegie Foundation for the Advancement of Teaching. Simkins reported on empirical research on effective teaching, and here are some points that he raised from the professional literature:

  • Set big goals and high expectations for students
  • Pedagogical content knowledge
  • Work backwards from learning outcomes
  • Maintain focus on student learning
  • Frame questions that capture the students’ imaginations and challenge paradigms
  • Clarity
  • Organization
  • Build trust
  • Exploring not explaining

I attended the plenary session on “The Good, Bad, and Counterintuitive: How Evidence-Based Teaching Can Correct the Commonsense Approach to Instruction.” Ed Neal and Todd Zakrajsek from UNC-Chapel Hill presented a variety of evidence-based teaching principles:

  • Engage students’ preconceptions; students have preconceptions, but if their preconceptions aren’t engaged, then they may fail to learn new concepts.
  • Deep foundational knowledge that is retrieved; There are different levels of students’ learning: “I heard about it” –> “I understand it” –> “I can do it in my sleep”
  • Learners must be taught to take a metacognitive approach.

I’m always interested in attending sessions on science teaching, and I also learned about the Science Education Resource Center at Carleton College and the National Center for Case Study Teaching in Science. It was also great catching up with other librarians from UNCG at the conference. I am still in the process of reflecting on all of the sessions that I attended, and I have collected bibliographies and articles on teaching if anyone is interested in reading them.

Leslie at MLA 2011

Monday, February 14, 2011 2:08 am

I’m back from another Music Library Association conference, held this year in Philadelphia. Some highlights:

Libraries, music, and digital dissemination

Previous MLA plenary sessions have focused on a disturbing new trend involving the release of new music recordings as digital downloads only, with licenses restricting sale to end users, which effectively prevents libraries either from acquiring the recordings at all, or from distributing (i.e., circulating) them. This year's plenary was a follow-up featuring a panel of three lawyers — a university counsel, an entertainment-law attorney, and a representative of the Electronic Frontier Foundation — who pronounced that the problem is only getting worse. It is affecting more formats now, such as videos and audio books — it's not just the music librarian's problem any more — and recent court decisions have tended to support restrictive licenses.

The panelists suggested two approaches libraries can take: building relationships, and advocacy. Regarding relationships, it was noted that there is no music equivalent of LOCKSS or Portico: Librarians should negotiate with vendors of audio/video streaming services for similar preservation rights. Also, libraries can remind their resident performers and composers that if their performances are released as digital downloads with end-user-only licenses, libraries cannot preserve their work for posterity. The panelists drew an analogy to the journal pricing crisis: libraries successfully raised awareness of the issue by convincing faculty and university administrators that exorbitant prices would mean smaller readerships for their publications. On the advocacy side, libraries can remind vendors that federal copyright law pre-empts non-negotiable licenses: a vendor can’t tell us not to make a preservation copy when Section 108 says we have the right to make a preservation copy. We can also lobby state legislatures, as contract law is governed by state law.

The entertainment-law attorney felt that asking artists to lobby their record labels was, realistically speaking, the least promising approach — the power differential is too great. Change, the panelists agreed, is most likely to come through either legislation or the courts. Legislation is the more difficult to affect (there are too many well-funded commercial interests ranged on the opposing side); there is a better chance of a precedent-setting court case tipping the balance in favor of libraries. Such a case is most likely to come from the 2nd or 9th Circuit, which have a record of liberal rulings on Fair Use issues. One interesting observation from the panel was that most of the cases brought so far have involved “unsympathetic figures” — individuals who blatantly abused Fair Use on a large scale, provoking draconian rulings. What’s needed is more cases involving “sympathetic figures” like libraries — the good guys who get caught in the cross-fire. Anybody want to be next? :-)

Music finally joins Digital Humanities

For a couple of decades now, humanities scholars have been digitizing literary, scriptural, and other texts, in order to exploit the capabilities of hypertext, markup, etc. to study those texts in new ways. The complexity of musical notation, however, has historically prevented music scholarship from doing the same for its texts. PDFs of musical scores have long been available, but they're not searchable texts, and they're not encoded as digital data, so they can't be manipulated in the same way. Now there's a new project called the Music Encoding Initiative, jointly funded by the National Endowment for the Humanities and the Deutsche Forschungsgemeinschaft (the German research foundation). MEI (yes, they've noticed it's also a Chinese word for "beauty") has just released a new digital encoding standard for Western classical musical notation, based on XML. It's been adopted so far by several European institutions and by McGill University. If, as one colleague put it, it "has legs," the potential is transformative for the discipline. Whereas critical editions in print force editors to make painful decisions between sources of comparable authority — the other readings get relegated to an appendix or supplementary volume — in a digital edition, all extant readings can be encoded in the same file, and displayed side by side. An even more intriguing application of this concept is the "user-generated edition": a practicing musician could potentially approach a digital edition of a given work, and choose to output a piano reduction, or a set of parts, or modernized notation of a Renaissance work, for performance. Imagine the savings for libraries, which currently have to purchase separate editions for all the different versions of a work.

http://music-encoding.org
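
To give a sense of what "notation as data" makes possible, here is a toy sketch of my own (not an official MEI example): a few notes in MEI-flavored XML, read back out with a short Python script. The element and attribute names follow MEI's general conventions, but the fragment is invented and heavily simplified.

    # Toy illustration of treating notation as data: read an MEI-style XML
    # fragment and list its note pitches. The fragment is invented; real MEI
    # files are far richer than this sketch.
    import xml.etree.ElementTree as ET

    mei_fragment = """
    <mei xmlns="http://www.music-encoding.org/ns/mei">
      <music><body><mdiv><score>
        <measure n="1">
          <staff n="1"><layer>
            <note pname="c" oct="4" dur="4"/>
            <note pname="e" oct="4" dur="4"/>
            <note pname="g" oct="4" dur="2"/>
          </layer></staff>
        </measure>
      </score></mdiv></body></music>
    </mei>
    """

    ns = {"mei": "http://www.music-encoding.org/ns/mei"}
    root = ET.fromstring(mei_fragment)
    for note in root.findall(".//mei:note", ns):
        print(note.get("pname"), note.get("oct"), note.get("dur"))
    # Because every reading lives in the data, a different query or stylesheet
    # could render the same file as a full score, a single part, or a
    # modernized edition.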

Music and metadata

In a session titled "Technical Metadata for Music," two speakers, from SUNY and a commercial audio-visual preservation firm respectively, stressed the importance of embedded metadata in digital audio files. Certain information, such as recording date, is commonly included in filenames, but this is an inadequate measure from a long-term preservation standpoint: filenames are not integral to the file itself, and are typically associated with a specific operating system. One speaker cited a recent Rolling Stone article, "File not Found: the Recording Industry's Storage Crisis" (December 2010), describing the record labels' inability to retrieve their backfiles due to inadequate filenames and lack of embedded metadata. Embedded metadata is now common in the output of many popular consumer devices, such as digital cameras and smartphones.

For music, embedded metadata can include not only technical specifications (bit-depth, sample rate, and locations of peaks, which can be used to optimize playback) but also historical context (the date and place of performance, the performers, etc.) and copyright information. The Library of Congress has established sustainability factors for embedded metadata (see http://digitizationguidelines.gov). One format that meets these requirements is Broadcast Wave Format, an extension of WAV: it can store metadata as plain text, and can include historical context-related data. The Technical Committee of ARSC (Association for Recorded Sound Collections) recently conducted a test wherein they added embedded metadata to some BWF-format audio files, and tested them with a number of popular applications. The dismaying results showed that many apps not only failed to display the embedded metadata, but also deleted it completely. This, in the testers' opinion, calls for an advocacy campaign to raise awareness of the importance of embedded metadata. ARSC plans to publish its test report on its website (http://www.arsc-audio.org/). The software for embedded metadata that they developed for the test is also available as a free open-source app at http://sourceforge.net/projects/bwfmetaedit.
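
Out of curiosity, here is a rough sketch of what reading that embedded "bext" metadata looks like in practice. This is my own illustration using only the Python standard library, not the ARSC/BWF MetaEdit tool mentioned above; the field widths follow the published EBU bext chunk layout.

    # Rough sketch: read the BWF "bext" chunk from a WAV file with only the
    # standard library. Field widths follow the EBU bext layout; this is my
    # own illustration, not the BWF MetaEdit tool mentioned above.
    import struct, sys

    def read_bext(path):
        with open(path, "rb") as f:
            riff, _size, wave = struct.unpack("<4sI4s", f.read(12))
            if riff != b"RIFF" or wave != b"WAVE":
                raise ValueError("not a WAV/BWF file")
            while True:
                header = f.read(8)
                if len(header) < 8:
                    return None                       # no bext chunk found
                chunk_id, chunk_size = struct.unpack("<4sI", header)
                if chunk_id == b"bext":
                    data = f.read(chunk_size)
                    return {
                        "description": data[0:256].rstrip(b"\x00").decode("ascii", "replace"),
                        "originator": data[256:288].rstrip(b"\x00").decode("ascii", "replace"),
                        "origination_date": data[320:330].decode("ascii", "replace"),
                        "origination_time": data[330:338].decode("ascii", "replace"),
                    }
                f.seek(chunk_size + (chunk_size % 2), 1)  # chunks are word-aligned

    if __name__ == "__main__":
        print(read_bext(sys.argv[1]))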

Music cataloging

A pre-conference session held by MOUG (Music OCLC Users Group) reported on an interesting longitudinal study that aimed to trace coverage of music materials in the OCLC database. The original study was conducted in 1981, when OCLC was relatively new. MOUG testers searched OCLC for newly published music books, scores, and sound recordings, as listed in journals and leading vendor catalogs, along with core repertoire as listed in ALA's bibliography Basic Music Library, and assessed the quantity and quality of available cataloging copy. The study was replicated in 2010. Exact replication was rendered impossible by various developments over the intervening 30 years — changes in the nature of the OCLC database from a shared catalog to a utility; more foreign and vendor contributors; and the demise of some of the reference sources used for the first sample of searched materials, necessitating substitutions — but the study has nevertheless produced some useful statistics. Coverage of books, not surprisingly, increased over the 30 years to 95%; representation of sound recordings also increased, to around 75%; but oddly, scores have remained at only about 60%. As for quality of the cataloging, the 2010 results showed that about 20% of sound recordings have been cataloged as full-level records and about 50% as minimal records; about a quarter of scores get full-level treatment and about 50% minimal. The study thus provides some external corroboration of long-perceived music cataloging trends, and also a basis for workflow and staffing decisions in music cataloging operations.

A session titled "RDA: Kicking the Tires" was devoted to the new cataloging standard that the Library of Congress and a group of other libraries have just finished beta-testing. Music librarians from several of the testing institutions (LC, Stanford, Brigham Young, U North Texas, and U Minnesota) spoke about their experiences with the test and with adapting to the new rules.

All relied on LC’s documentation and training materials, recording local decisions on their internal websites (Stanford has posted theirs on their publicly-accessible departmental site). An audience member urged libraries to publish their workflows in the Toolkit, the online RDA manual. It was generally agreed that the next step needed is the development of guidelines and best practices.

None of the testers' ILSs seem to have had any problems accommodating RDA records in MARC format. LC has had no problems with their Voyager system, corroborating our own experience here at WFU. Some testers reported problems with some discovery layers, including Primo (fortunately, we haven't seen any glitches so far with VuFind). Stanford reported problems with their (unnamed) authorities vendor, mainly involving "flipped" (changed name order) entries. Most testers are still in the process of deciding which of the new RDA data elements they will display in their OPACs.

Asked what they liked about RDA, both the LC and Stanford speakers cited the flexibility of the new rules, especially in transcribing title information, and in the wider range of sources from which bib info can be drawn. Others welcomed the increased granularity, designed to enhance machine manipulation, and the chance this affords to “move beyond cataloging for cards” towards the semantic web and relation-based models. It was also noted that musicians are already used to thinking in FRBR fashion — they’ve long dealt with scores and recordings, for instance, as different manifestations of the same work.

Asked what they thought “needed fixing” with RDA, all the panelists cited access points for music (the LC speaker put up a slide displaying 13 possible treatments of Rachmaninoff’s Vocalise arranged for saxophone and piano). There are other areas — such as instrument names in headings — that the RDA folks haven’t yet thought about, and the music community will probably have to establish its own practice. Some catalogers expressed frustration with the number of matters the new rules leave to “cataloger’s judgment.” Others mentioned the difficulty of knowing just how one’s work will display in future FRBRized databases, and of trying to fit a relational structure into the flat files most of us currently have in our ILSs.

What was most striking about the session was the generally upbeat tone of the speakers — they saw more positives than negatives in the new standard, assured us it only took some patience to learn, and were convinced that it truly was a step forward in discoverability. One speaker, who trains student assistants to do copy cataloging and tells them "When in doubt, make your best guess, and I'll correct it later," observed that her students' guesses consistently conformed to RDA practice — anecdotal evidence suggesting that the new standard may actually be more intuitive for users, and that new catalogers will probably learn it more easily than those of us who've had to "unlearn" AACR2!

Sidelights

Our venue was the Loews Philadelphia Hotel, which I must say is the coolest place I’ve ever stayed in. The building was the first International Style high-rise built in the U.S., and its public spaces have been meticulously preserved and/or restored, to stunning effect. The first tenant was a bank, and so you come across huge steel vault doors and rows of safety-deposit boxes, left in situ, as you walk through the hotel. Definitely different!

Another treat was visiting the old Wanamaker department store (now a Macy’s) to hear the 1904 pipe organ that is reputed to be the world’s largest (http://www.wanamakerorgan.com/about.php).

Code4Lib Day 2

Wednesday, February 9, 2011 7:52 pm

One important point of today's presentations at code4lib was the use of community-based approaches to provide solutions. There was also an interesting breakout session led by Lyrasis on the importance of open source solutions in libraries and why they are becoming so popular.

In order to develop a digital exhibit that would aggregate digital collections originally in different formats, the University of Notre Dame decided on a community-based approach and joined the Hydra community. The community includes Stanford, the University of Virginia, DuraSpace, MediaSpace, and Blacklight. The Hydra framework is a shared base of code that each Hydra community member benefits from; it provides developers with a set of tools that facilitate the rapid development of scalable applications. Notre Dame's digital exhibit architecture uses Apache Solr and Fedora Commons on the repository side, with Blacklight and ActiveFedora as the interface.
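
For a rough sense of how those pieces fit together, here is a small sketch of my own (the core name, fields, and URL are invented) of the kind of query a Blacklight-style discovery interface sends to Solr behind the scenes, while Fedora holds the underlying objects.

    # Sketch of the kind of search a discovery layer like Blacklight issues to
    # Solr underneath. Core name, fields, and URL are invented for illustration.
    import json
    import urllib.parse
    import urllib.request

    SOLR_SELECT = "http://localhost:8983/solr/exhibit/select"   # placeholder core

    params = {
        "q": "chicago",              # the user's search terms
        "fq": "format:Image",        # a facet filter chosen in the sidebar
        "facet": "true",
        "facet.field": "collection", # ask Solr for collection facet counts
        "wt": "json",
    }
    url = SOLR_SELECT + "?" + urllib.parse.urlencode(params)
    response = json.loads(urllib.request.urlopen(url).read())
    for doc in response["response"]["docs"]:
        print(doc.get("id"), doc.get("title"))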

The Chicago Underground Library also used a community-based approach to collect and catalog the history of the city of Chicago. They collected every piece of print material imaginable, including handmade artist books, university press publications, self-published poetry books, etc. They also gathered information about each individual who contributed to the collection so users can trace items back to them. They have accumulated over 2,000 publications so far. Their future goal is to expand the collection to include audio and video.

A rep from Lyrasis led a breakout session to talk about how their organization can help libraries achieve their goals, and to find out why there is so much interest in open source solutions and what is driving such enthusiasm. It was interesting to find out that no attendee thought the decision to embrace open source solutions, as opposed to vendor-provided solutions, was due solely to financial reasons. Everyone agreed it was more about the independence and the flexibility that open source software provides. There was then a long discussion on the costs involved in implementing open source software. Overall, the group found that open source solutions are definitely worth it.

Gretchen at 2011 Lilly Conference

Wednesday, February 9, 2011 2:59 pm

Over this past weekend, I spent Friday through mid-Sunday at the Lilly Conference on College and University Teaching and Learning in Greensboro. The experience was rewarding; I enjoyed meeting others in the field and learning what other local institutions are working on. I look forward to trying out some new ideas on teaching and learning, particularly with technology, on our campus in the near future. Additionally, I presented on Location-Based Applications on Saturday evening.

Highlights of the conference included several sessions: using video lecture capture systems, integrating Google Apps and Maps to support experiential learning, and a video-based project a faculty member conducted in her classroom. And, for me, giving my first conference presentation!

Learn more about the sessions I attended on Friday, Saturday, and Sunday, and about my presentation, via Collaboration @ Wake.

I recorded my presentation with Flip video to see how I actually sound when I present, and then edited in the supporting media to create a video. It is most “interactive” around 6:06 and 17:30. If anyone ever wants to learn how to do this, I’d be happy to show you!

Location-Based Applications: Creating a Community Beyond the Map from Gretchen Edwards on Vimeo.

Gretchen Edwards presents “Location-Based Applications: Creating a Community Beyond the Map” at the 2011 Lilly Conference on College and University Teaching in Greensboro, North Carolina.

VuFind updates

Wednesday, February 9, 2011 12:05 am

JP already talked about VuFind, but I thought I would add my notes from the VuFind talk today. Demian Katz (Villanova) took some time in the afternoon to talk about VuFind and its growing support for metadata standards other than MARC. The update centered on how VuFind had been re-tuned to be more agnostic with regard to metadata standards and encoding models. The redesign makes use of "Record Drivers" to take control of both screen display functionality and data retrieval processes, OAI harvesters to gather data, and XSL import tools to facilitate metadata crosswalks and full-text indexing.

Demian talked at some length about basic features of the metadata indexing toolkit. At the VuFind 2.0 conference he talked a bit about his ability to use the Metadata Services Toolkit (MST) from the eXtensible Catalog project, and I wonder (no answer, just a question) how the toolkit development within VuFind lines up with the XC project. Demian reported on the OAI-PMH harvester that will gather records remotely and load them into VuFind. I have used an early version of this tool to successfully harvest and import HathiTrust records and am encouraged to see that development has continued. Demian also mentioned a new XSLT importer tool that enables mapping an XML document into an existing Solr configuration.
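
For anyone curious what the harvesting step looks like at the protocol level, here is a minimal sketch of my own (not VuFind's own harvester) that walks an OAI-PMH ListRecords response set and saves each page to disk for a later XSLT import; the endpoint URL is a placeholder.

    # Minimal OAI-PMH harvest sketch (my own illustration, not VuFind's
    # harvester): fetch Dublin Core records page by page and save each
    # response to disk for later transformation/indexing.
    import urllib.parse
    import urllib.request
    import xml.etree.ElementTree as ET

    BASE = "https://repository.example.edu/oai/request"   # placeholder endpoint
    OAI = "{http://www.openarchives.org/OAI/2.0/}"

    params = {"verb": "ListRecords", "metadataPrefix": "oai_dc"}
    page = 0
    while True:
        url = BASE + "?" + urllib.parse.urlencode(params)
        xml_text = urllib.request.urlopen(url).read()
        page += 1
        with open(f"harvest_{page:04d}.xml", "wb") as out:
            out.write(xml_text)                   # raw page for later XSLT import
        token = ET.fromstring(xml_text).find(f".//{OAI}resumptionToken")
        if token is None or not (token.text or "").strip():
            break                                  # no more pages to fetch
        params = {"verb": "ListRecords", "resumptionToken": token.text.strip()}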

This represents an interesting step forward for VuFind, as it will allow ZSR to think about harvesting and indexing data from our DSpace instance as well as other sources that support OAI harvesting. All of these features are coming in the VuFind 1.1 release on March 21st! More to come on this as we get our test instance of VuFind running.

JP at Code4Lib 2011

Tuesday, February 8, 2011 7:39 pm

Before commenting on today's topics, I thought I would say a few things about the pre-conference session on "Solr: what's new?" that I attended yesterday.

Although there were other interesting pre-conference sessions offered concurrently, I chose to attend the one on Solr because of the important role that the SolrMarc utility plays in VuFind record indexing. If you wonder how this works: basically, SolrMarc reads in records from a MARC load file exported from Voyager, extracts information from various fields, and adds that information to VuFind's Solr index. I wanted to learn more about Solr and see if there were any new updates that could be used to move our VuFind implementation to the next level. The pre-conference was rather technical. Erik Hatcher from Lucid Imagination was the presenter, and he talked about how Solr has continued to improve dramatically over time. New features in development include SolrCloud, which relies on ZooKeeper, a centralized service for maintaining configuration information, to provide shared, centralized configuration and core management to programmers. He also talked about pivot/grid/matrix/tree faceting, a hierarchical way of providing facets that branch out into other facets ("sub-facets") to further narrow down a search. Another cool feature Solr has improved is date faceting, which will show up in our upcoming VuFind upgrade.
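
To make that indexing flow more concrete, here is a tiny sketch of the same idea in Python (pymarc plus Solr's JSON update handler) rather than the actual SolrMarc Java utility; the file name, Solr URL, and field mapping are all invented for illustration.

    # Sketch of the SolrMarc idea in Python rather than the real Java utility:
    # read MARC records, pull out a couple of fields, and post them to a Solr
    # core's JSON update handler. File name, URL, and mapping are invented.
    import json
    import urllib.request
    from pymarc import MARCReader   # assumes the pymarc package is available

    SOLR_UPDATE = "http://localhost:8983/solr/biblio/update?commit=true"

    docs = []
    with open("voyager_export.mrc", "rb") as marc_file:
        for record in MARCReader(marc_file):
            if record is None:
                continue                        # skip records pymarc cannot parse
            controls = record.get_fields("001")
            titles = record.get_fields("245")
            docs.append({
                "id": controls[0].value() if controls else None,
                "title": " ".join(titles[0].get_subfields("a", "b")) if titles else "",
            })

    request = urllib.request.Request(
        SOLR_UPDATE,
        data=json.dumps(docs).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    print(urllib.request.urlopen(request).read().decode("utf-8"))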

The actual conference started today, and Erik has already blogged about all the important subjects. I was interested in what Demian had to say about VuFind.

The idea of centralizing code introduced by Erik Hatcher was also embraced by Demian Katz when he talked about the redesign goals for VuFind. He is aiming to centralize MARC-specific code to make it easier to replace. Just to be a little bit controversial, Demian stated that "MARC must die." His point was that library data is not limited to MARC but also consists of other data types that are becoming more and more popular. He expressed pride that the upcoming release of VuFind (VuFind 1.1) will provide, among other things, full Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH) server support, which will enable harvesting metadata into a directory for further data manipulation. Here, Demian provided a solution to the question "where is my data?" His solution: "grow the toolkit" to solve the problems of obtaining records from remote sources, processing harvested files, and indexing arbitrary XML records. According to Demian, understanding record drivers gives the programmer a lot of control over VuFind.

Update on Kuali OLE

Tuesday, February 8, 2011 1:20 pm

I decided to devote a post to the OLE project given our interest in the direction of OLE in the coming year. Tim McGeary gave an update on the status of OLE – the project is currently in a build phase with nearly $5 million in funding from various sources.


Coding started in early February, with HTC Global Services as a development partner. The organization is still shooting for a first test-case release in July 2011. The main project goals have not changed in the last year: community-sourced development, a next-generation system, re-examined library business processes, a break away from print resources, reflecting changes in scholarly work, and integration with enterprise-level systems.

Tim spent a few moments talking about data storage and metadata formats – the project is focusing on being format-agnostic and is interested in supporting linked data and interchangeable workflows. The data model is envisioned to include descriptive, semantic, and relational data, and will use the Kuali service bus to integrate with other Kuali systems.

One of the interesting aspects of the OLE system is its reliance on the Kuali Finance system. This approach offers an opportunity for universities that use Kuali for their enterprise information systems to benefit from some economies of scale. This is part of the Kuali Rice framework, which includes a suite of pre-built services common to many enterprise applications.

One of the questions asked for concrete advantages. Tim listed three: 1) store more data points and manage them more holistically; 2) raise the library system to the enterprise level, breaking down barriers to integration; and 3) break down barriers between libraries to facilitate more data sharing. Tim also mentioned the possibility of a central record-sharing system.

