Last week, Tanya and I traveled to Chapel Hill for CurateGear 2015: Enabling the Curation of Digital Collections. After reading Tanya’s account of her experience, I thought I would fill in some of my favorite bits of the day.
Susan presented a fascinating demonstration of emulation of the Gay Men’s Health Crisis hotline database in the Manuscripts and Archives Division reading room at NYPL. She explained the very simple setup of an emulation experience that gives researchers access to a disk image of the original born-digital materials from the collections. They have a dedicated machine in the reading room, offline and with USB ports blocked (so patrons cannot make copies). There is a reader login account for accessing the records. They also load a PDF of the finding aid on the machine so researchers can see what they are looking at (since there is no internet access). Serving the disk images in this way allows researchers to experience and use the materials without any harm to the original records. Given the many disks in our collections here in Special Collections & Archives, I found this a very inspiring and accessible way to provide access to patrons.
Lori spent a lot of time discussing Archive-It’s 5.0 updates, which started rolling out in October of 2014 and will continue in 2015. This was a great session, as I think about WFU’s use of Archive-It a lot and enjoy hearing about how we can do this better. One highlight of her talk was the fact that Archive-It is overhauling its user interface for the first time since the service started in 2006. This is great news! It’s not done yet, but the reports section has been released. The reports (and later everything else) have a much cleaner, streamlined look and dynamic visualizations of the information they contain. You can really drill down into the information in the reports and fine-tune your crawls with a much better understanding of what information you have captured. I was truly excited about these changes and can’t wait to see the future rollouts of Archive-It 5.0.
I found the whole day at CurateGear 2015 a very interesting and inspiring experience. I would be happy to talk more about the presentations I mentioned or any others that I attended at CurateGear 2015. Thank you to the Dean’s office for the opportunity to attend.
Last Wednesday I traveled with Rebecca and Tanya to CurateGear 2014 in Chapel Hill, NC. In its third year, CurateGear is a day-long event that showcases tools that facilitate digital curation. The three tools I found most interesting were MetaArchive, a TRAC review tool, and BitCurator.
MetaArchive is a co-op of university libraries and independent research libraries that work together to preserve their digital content. Each MetaArchive member institution contributes a secure, closed-access preservation server to the MetaArchive LOCKSS network. After an institution ingests content to its own preservation server, six other servers in the MetaArchive LOCKSS network replicate that content. Servers are assigned to content in order to maximize geographic distribution. New or changed content is stored alongside the original, and in fact this support for versioning is a huge advantage of MetaArchive’s preservation strategy. The seven servers check in with each other periodically in order to perform fixity checks and verify that all seven copies remain identical. If a mismatch is identified, the servers reach consensus about which copy is “correct” and repair the mismatch. The repair is treated as a version and stored alongside the original. The co-op model offers economies of scale, and membership in MetaArchive seems very reasonably priced. The knowledge community of MetaArchive strikes me as an appealing alternative to preservation-as-a-service vendors such as DuraCloud and Preservica.
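The fixity-check-and-repair cycle described above can be sketched roughly like this (a simplified illustration only, not MetaArchive’s actual implementation; the server names and content are hypothetical):

```python
import hashlib
from collections import Counter

def fixity_check(copies: dict) -> dict:
    """Compare checksums of all replicas; return repairs for any copy
    that disagrees with the majority ("consensus") version."""
    digests = {server: hashlib.sha256(data).hexdigest()
               for server, data in copies.items()}
    # The version held by the majority of servers is treated as "correct".
    consensus_digest, _ = Counter(digests.values()).most_common(1)[0]
    good_data = next(data for server, data in copies.items()
                     if digests[server] == consensus_digest)
    # Each repair would be stored alongside the original as a new version.
    return {server: good_data
            for server, data in copies.items()
            if digests[server] != consensus_digest}

# Seven replicas, one of which has suffered bit corruption
copies = {f"server{i}": b"original content" for i in range(1, 8)}
copies["server4"] = b"corrupted content"
print(fixity_check(copies))  # only server4 needs repair
```

The point of the sketch is just the consensus step: no single server is trusted; the network as a whole decides which copy is authoritative.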
The second tool I found interesting was a review tool for TRAC (Trustworthy Repositories Audit & Certification), whose 88 criteria fall into three broad areas:
Organizational Infrastructure – e.g. mission statement, succession plans, professional development, financial stability
Digital Object Management – e.g. metadata templates, persistent unique identifiers, registries of formats ingested, preservation planning
Technologies, Technical Infrastructure, and Security – e.g. detecting bit corruption, migration processes, off-site backup
While TRAC is designed for repositories to become certified as trustworthy, many institutions simply use it as a self-assessment tool. Developed by Nancy McGovern, the Head of Curation and Preservation Services at MIT Libraries, the TRAC review tool enables the assessor to provide evidence of how well a repository meets a TRAC criterion and rate its compliance on a five-point scale:
4 = fully compliant – the repository can demonstrate that it has comprehensively addressed the requirement
3 = mostly compliant – the repository can demonstrate that it has mostly addressed the requirement and is working toward full compliance
2 = half compliant – the repository has partially addressed the requirement and has significant work remaining to fully address the requirement
1 = slightly compliant – the repository has something in place, but has a lot of work to do in addressing the requirement
0 = non-compliant or not started – the repository has not yet addressed the requirement or has not started the review of the requirement
Of course, knowledge of whether a repository meets all 88 criteria isn’t the purview of any one person, and another benefit of the TRAC review tool is that it enables the lead assessor to assign certain criteria to other people (such as the admin or tech team), making the whole process of assessing repository activities more transparent across an organization.
Technically speaking, the TRAC review tool is simply a Drupal instance with a page for each TRAC criterion, so it’s very lightweight and easy to begin using after download!
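To make the five-point scale concrete, a self-assessment might be tallied along these lines (the criterion identifiers and scores below are entirely hypothetical, just to show how the rubric rolls up):

```python
# Hypothetical TRAC self-assessment scores on the 0-4 scale above
scores = {
    "A1.1 Mission statement": 4,
    "A2.1 Succession plan": 2,
    "B1.1 Metadata templates": 3,
    "C1.1 Off-site backup": 1,
}

def summarize(scores: dict):
    """Average compliance, plus criteria scoring 1 or 0 that need attention."""
    avg = sum(scores.values()) / len(scores)
    gaps = [criterion for criterion, s in scores.items() if s <= 1]
    return avg, gaps

avg, gaps = summarize(scores)
print(f"Average compliance: {avg:.2f} / 4")
print("Needs attention:", gaps)
```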
BitCurator bundles open-source digital forensics tools to help memory institutions manage born-digital materials and perform tasks such as:
acquiring disk images of floppies, hard drives, laptops, or desktops
generating technical metadata for the disk images
identifying and redacting sensitive information such as SSNs, credit card numbers, etc.
Most of the tools that BitCurator is adapting for use by memory institutions originate in the law enforcement world, whose purposes are very different from our own. BitCurator repurposes these tools for the curation tasks of special collections and archives. For example, capturing a disk image (rather than copying file by file) not only preserves the environment in which the creator worked, but also in a certain sense preserves the “original order” of the records. Last summer I attended a BitCurator hackathon hosted by the Open Planets Foundation, where my main output was a detailed draft of a workflow for ingesting born-digital materials. At CurateGear 2014, I was pleased to hear about some updates in BitCurator 0.5.8 and pleased, too, that my draft workflow doesn’t yet need revision!
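To give a flavor of the sensitive-information scanning mentioned above, here is a toy sketch. (BitCurator relies on much more robust forensic scanners such as bulk_extractor; the patterns below are deliberately simplified illustrations, not production-grade PII detection.)

```python
import re

# Simplified patterns; real scanners add validation steps
# (e.g. Luhn checks for card numbers) to cut false positives.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d{4}[ -]?){3}\d{4}\b"),
}

def scan_for_pii(text: str) -> dict:
    """Return each category of potential PII found in the text."""
    return {name: pat.findall(text)
            for name, pat in PATTERNS.items() if pat.findall(text)}

sample = "Call me re: SSN 123-45-6789 or card 4111 1111 1111 1111."
print(scan_for_pii(sample))
```

An archivist would run something like this across an entire disk image and review the hits before providing access.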
I have attended the DLF Forum every year since I began library school, but this year was the first year that I attended as a full-fledged librarian. It was a very different experience to attend the Forum while constantly asking myself “What will I bring back to ZSR?” Below are three of my major takeaways, culled both from formal conference sessions and from informal conversations with other attendees.
Investigate moving towards large-scale digitization of archival materials.
The digitization of rare and unique materials broadens access to those materials beyond the reading room to any screen that can access the Web. Early digitization projects often cherry-picked specific items to digitize and created rich descriptions of those items, similar to how items might be selected for a physical exhibition. Increasingly, however, digital collection managers recognize that completely digitized collections support scholarly inquiry better than boutique digitization efforts. Both an access model and a content strategy, large-scale digitization¹ selects entire collections (or entire series within collections) for digitization, and online access replicates the reading room experience by contextualizing individual items within the archival arrangement of a processed collection. Rather than painstakingly creating metadata at the item level, large-scale digitization makes use of existing metadata from the finding aid at the container, series, and collection level. This approach can both streamline production workflows and better meet the needs of researchers.
At the DLF Forum, a panel presentation titled Big Archival Data: Designing Workflows and Access to Large-Scale Digitized Collections focused on how the principles of large-scale digitization were put into practice in different institutional contexts. Michael Doylen and Ann Hanlon of the University of Wisconsin-Milwaukee discussed the digitization of the Kwasniewski photographs, the collection of a Polish-American photographer who captured images of the Polish community in Milwaukee. 80% of the digitized photographs re-used existing item-level metadata transcribed from negative sleeves during processing of the collection; the other 20% were designated for further image processing and metadata enhancement – e.g. unique, more specific titles, descriptions, and additional subject headings. By taking a comprehensive approach, this digital collection makes available “the rare, the lesser-known, the overlooked, the neglected, and the downright excluded.”² Following Michael and Ann, Karen Weiss of the Smithsonian Archives of American Art discussed the workflows her institution has developed to link container lists in their finding aids to digital materials in their digital asset management system. Starting from a collection summary page, the researcher can browse to a particular series and then view all of the items contained within a particular folder. In this way, the digital collections experience better approximates the in-person reading room experience.
When performing digital humanities outreach to faculty and students, lead with content.
Another advantage of a large-scale digitization approach is that it enables the library to market its digital collections as corpora for digital humanities research. During THATCamp Digital Humanities & Libraries, following the DLF Forum, I had the opportunity to chat with Zoe Borovsky, the Librarian for Digital Research and Scholarship at the UCLA Library. Zoe shared with me that one tack she is taking more and more frequently is to demonstrate that UCLA’s digitized special collections support digital humanities modes of inquiry, because the more faculty who build digital projects on top of existing digital collections, the more digital projects the library can support. Thus far, I’ve reached out to a few faculty that I’ve met at social events to learn more about their digital scholarship and pedagogy and how the library might support those aspects of their work. But in the emerging area of digital humanities it’s not always the case that there’s an existing library solution to a faculty problem. At this stage, my goal is to build relationships and gather requirements. Do some faculty want to create crowd-sourced collections, which they could eventually contribute to WakeSpace? Do other faculty want to text mine newspapers? Do still other faculty want to use Omeka to incorporate building digital collections into course projects? These needs are quite heterogeneous! In the presentation Testing Omeka for Core Digital Library Services, Jenn Riley (formerly of the University of North Carolina at Chapel Hill, now of McGill University) said that she is planning for a future when every humanities faculty member at her university is interested in creating a digital project. With that kind of scalability in mind, when I meet with faculty, in addition to gathering requirements, I will also market ZSR’s existing digital collections as potential corpora for digital humanities research.
Investigate adopting the DMPTool to support data management planning for faculty.
The DMPTool enables universities to provide investigators who are writing data management plans with custom guidance. The DMPTool has been available for some time now, but a new version was recently released, and the development team presented at the session DMPTool2: Improvements and Outreach at the DLF Forum. At our last Digital Scholarship team meeting, we discussed investigating the DMPTool as a goal for next year. When an institution adopts the DMPTool, admins are able to provide suggested answers for each question on a particular funder’s data management plan form. After modifying the suggested answers supplied in the DMPTool, the investigator can generate a PDF of their data management plan and append it to their grant application. Customization of the DMPTool now includes the option to provide Shibboleth authentication. DMPTool2 improvements for plan creators include the ability to:
copy existing plans into new plans
work collaboratively with colleagues – e.g. add co-owners of plan
request review of plans
share plans within institutions
provide public access to plans
DMPTool2 improvements for administrators include:
a module that enables direct editing of customized responses to different funder templates, or the ability to create your own templates (previously, administrators had to email the DMPTool development team to enact this sort of customization)
several new administrator roles – e.g. institutional reviewer and institutional administrator
enhanced search and browse of plans
mandatory or optional review of plans
Outside of conference hours, I enjoyed exploring Austin. Highlights included visiting the flagship Whole Foods store, watching the bat colony emerge from under the First Street bridge at dusk, and eating fabulous mole at El Naranjo (an authentic Mexican restaurant recommended by the Texas Monthly). Work hard, play hard!
Tanya, Craig, and Vicki all mentioned the keynote about the DPLA (Digital Public Library of America) at the Tri-State Archivists’ Conference. Before Emily Gore of the DPLA headed to Greenville, SC to deliver her keynote, she was in Greensboro, NC meeting with digital collection managers. I attended the meeting to learn more about the nitty gritty how-to of contributing ZSR’s digital collections to the DPLA.
For those who aren’t familiar, the DPLA aggregates metadata from the digital collections of libraries, archives, and museums across the United States. In addition to providing a slick search interface at dp.la, the DPLA also makes its API open to developers and encourages the building of apps on top of this platform. By contributing our metadata to the DPLA, we will expose our collections to a national audience. In addition, we will drive traffic to our site from both the dp.la site and apps built on top of the DPLA API.
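For the developer-minded, querying the DPLA API amounts to a simple HTTP GET against the items endpoint. The sketch below builds such a query and parses an abbreviated, illustrative response (the v2 items endpoint, `api_key`, and `page_size` parameters come from DPLA’s public API; you would need to request a free API key, and the sample record is invented for illustration):

```python
import json
from urllib.parse import urlencode

DPLA_ITEMS = "https://api.dp.la/v2/items"

def build_query(q: str, api_key: str, page_size: int = 10) -> str:
    """Build an items-search URL for the DPLA API."""
    return f"{DPLA_ITEMS}?{urlencode({'q': q, 'api_key': api_key, 'page_size': page_size})}"

url = build_query("Winston-Salem", "YOUR_API_KEY")
print(url)

# A (heavily abbreviated, made-up) response has roughly this shape:
sample_response = json.loads("""
{"count": 1, "docs": [
  {"sourceResource": {"title": "Photograph of downtown Winston-Salem"},
   "provider": {"name": "North Carolina Digital Heritage Center"}}
]}
""")
for doc in sample_response["docs"]:
    print(doc["sourceResource"]["title"], "-", doc["provider"]["name"])
```

This openness is exactly what enables third parties to build apps on top of the aggregated metadata.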
At DPLAfest 2013, the North Carolina Digital Heritage Center was recognized as one of three new service hubs that will aggregate metadata from their regions and serve as a conduit to the DPLA. Over 120,000 records from North Carolina institutions are currently available at dp.la, including records from the State Library of North Carolina, State Archives of North Carolina, and the libraries at the University of North Carolina at Chapel Hill, East Carolina University, and the University of North Carolina at Greensboro in addition to all the records made available by the North Carolina Digital Heritage Center itself at digitalnc.org.
When an institution contributes collections to the DPLA via a service hub such as the North Carolina Digital Heritage Center, it shares each item’s metadata as well as its thumbnail.
The DPLA record credits both the service hub (in this case, the North Carolina Digital Heritage Center) and the contributing institution (for example, the Transylvania County Library). Clicking on either the item’s thumbnail or “View Object” takes the user to the item as it appears on the original site, in this case digitalnc.org.
One more interesting thing to note about the DPLA’s approach to aggregating digital collections is that metadata shared with the DPLA is made available under a CC0 license. By participating in the DPLA, we agree that others may re-use our metadata. However, it’s important to recognize that metadata rights are not the same as digital object rights. The digital objects we make available via WakeSpace remain available under whatever terms we determine.
The North Carolina Digital Heritage Center is currently in the process of evaluating our feeds before adding selected collections to the DPLA. Feel free to contact me if you have any questions!
On Tuesday the 21st, Joy, Kaeley, Roz, and I ventured to Raleigh to participate in the summer meeting of NC-LITe, the twice-yearly gathering of NC librarians interested in library instruction and instructional technologies. It’s a very informal group and always a fun time with lots of idea-sharing. This year’s summer meeting was at the shiny new Hunt Library at NCSU, which was a sight to behold. Like all NC-LITe meetings, this one followed a familiar format.
Each campus got some time to share updates. Some of the most interesting were:
UNC-CH: A transition to a required ENG105 course in which librarians cooperate with instructors to create assignments and integrate information literacy learning outcomes into the curriculum
UNC-CH: A live-action Clue game held in their special collections department (which would be a good opportunity for both outreach and some light instruction)
NCSU: figuring out how they can integrate their new makerspace into their instruction beyond the traditional STEM applications
NCSU: moving past outdated LOBO tutorial by rethinking learning goals and producing high-quality animated “Big Picture” videos (Kaeley thought the best title was “Picking a Topic *IS* Research!”)
Duke: librarians assigned to every MOOC taught through Coursera, where they might develop libguides or help course developers find open educational resources to support the course
UNCG: just finished a 3-day Power-UP workshop for faculty who want to develop online or blended online courses
Five of us (including me and Joy!) gave quick talks about bigger projects we’d tackled recently. Joy talked about the awesome LIB100 template, and I struggled to condense our ZSRx mini-MOOC experiment into a 7-minute talk. Other highlights:
Kathy Shields at High Point told us about some information literacy modules they built in Blackboard
Kerri Brown-Parker at NCSU’s College of Education media center showed us Subtext, a very cool iPad app for guided literacy and social reading
There was also a rather interesting debate that sprang from Joy’s presentation on the LIB100 template: what is the role of the library in preventing plagiarism or educating students about it? Lots of opinions, but most felt that the library was central in this role, although the focus should be on educating students about the responsible use of ideas, not on “how to avoid plagiarism.”
If you haven’t been to the new Hunt Library at NCSU, make sure to visit! It’s truly an amazing space that is probably only possible at a place like State. It’s hard to put into words, but the entire library is a lab for technology-enhanced and -facilitated learning and creation. Still, despite the impressive architecture and the awe-inspiring spaces, from the MakerSpace and the Game Lab to the Next-Gen Learning Commons and the BookBot, the thing we (and most others) found most impressive was the lockers with outlets in them. There were literally audible gasps, I kid you not.
Joy said it best, though: “it seemed to me that the star of yesterday’s show was the jaw-dropping Hunt Library. Words like ‘unbelievable’ and ‘incredible’ keep racing through my mind as I ponder this blow-your-mind building. To me, this experience made our library feel like Hagrid’s cottage in Harry Potter–cozy, warm, and a bit disheveled. While we might not have a Creativity Studio or designer chairs that cost thousands of dollars, we are greeted by Starbucks and Travis Manning when we come in the door. I’m very proud and glad to call ZSR ‘home.'”
Springshare hosted a four-hour webinar today focusing on the user experience. Lauren Pressley, Kyle Denlinger, and I participated in the first half of this multi-presentation webinar in the ZSR screening room. Springshare supplies ZSR with LibGuides and LibAnswers.
The first presentation was by Chrissa Godbout, the Library and Information Technology Consultant at Mount Holyoke College. She discussed their recent redesign of LibGuides. She and others from the library attended a web design workshop that led them to plan focus groups with students and staff. They bribed students with chocolate-covered strawberries and gave participants gift cards to the library coffee shop. Focus group participants were shown the current LibGuide and then asked to draw their ideal research guide and describe it. From this information, the librarians created categories and ranked them by occurrence.
As a result of the focus groups, they cut way back on text, used fewer and more pleasing colors, and repeated the navigation tabs at the top in the body of the guide’s home page, including descriptions of each tab. They also included RSS feeds of articles for the professors in that department. One idea they used was the “squint test”: users squint at the web page, and what pops out while squinting should be where the main content resides!
The next program, “Going Mobile: LibAnswers SMS and the Mobile Reference Librarian,” was by Darcy Gervasio, a Reference & Instruction Librarian at Purchase College, SUNY. She is the Text Message Reference Coordinator and the liaison librarian for Anthropology, Sociology, and Gender Studies. What Purchase discovered was that students used the texting feature from inside the building for quick answers. So Darcy and the other librarians marketed the service in that way: “Can’t find a book? Text Us!”
Emily O’Connor’s presentation, “LibCal and the Open Workshop: Bolstering Attendance, One Registration at a Time,” demonstrated the LibCal application and showed me that many of the services LibCal provides, such as emailing participants, we already get from posting content on the PDC site. There was a “tech time” during the break that showed how LibGuides can be embedded in a school’s default Blackboard course, making the LibGuide available to a much larger audience.
Stephanie Rollins, from Samford University, presented “Using LibAnalytics to Close the Assessment Loop.” Samford uses LibAnalytics to close the instruction loop. Stephanie described how she uses this system from Initial Instruction Request to Instruction Statistics to Post-Instruction Assessment.
There were two other sessions in the afternoon, but they focused on Springshare products we don’t use at ZSR. All in all it was a very effective webinar. It was clearly popular as we were initially wait-listed to participate! The more I work with Springshare, the more impressed I am with their commitment to their customers and users. I look forward to their next webinar on a new topic.
Last week I had the pleasure of taking a trip down to Cisco’s business center in the Research Triangle Park, along with several other staff members from across WFU’s campus. There I sat in on several presentations of new technologies that Cisco is preparing and discussed how they could be useful at Wake and ZSR. I also had the opportunity to see several of their web conferencing technologies at work, such as using one of the “Full Immersion” rooms to video conference with Cisco employees across the country. Some parts of this post will be a bit vague, as some of the information concerns future plans for Cisco products that Cisco asked us not to discuss outside the meeting.
We began the day with a demo of the Cisco Business Video Demo Center, run by a Cisco employee in California. This was basically a showcase of how Cisco can make a number of their technologies interoperate. For example, our presenter took a video of his screens showing our WFU group across three separate rooms with a Flip cam, uploaded it to one of Cisco’s media servers that compiled the video, added titles, and then played the video back for us on the web within 15 minutes. It was an impressive demo, though a bit imposing, and it seemed unlikely to work as well without Cisco expertise on hand. They did touch on digital signage from Cisco, which was of major interest to me, but didn’t go into great detail. They focused more on their ability to take data and push it to their signs automatically than on the particulars of the signs themselves, which was disappointing. We were also running behind, so the presenter had to hurry through things, and that may have had something to do with my impression as well.
We then moved on to a “Casual Conversation” with Lance Ford, a Cisco Business Development Manager who works a great deal with educators using TelePresence tools in their teaching. This was a fun presentation with some interesting views on web teaching. After this talk we had a conference with a WebEx engineer discussing the next step in the WebEx program, which Wake will be a part of. A major topic of discussion in this session and throughout the day was WebEx integration with Google, specifically calendar functionality.
Cisco saved the new, shiny stuff for after lunch, though. We were given a demo of Cisco’s Quad platform, basically a business version of Facebook. Instead of emails or shared Google Docs, you sign into Quad and make posts. You then follow particular posts or invite others to edit them or attach documents. An interesting idea, but not one that I see as particularly relevant in our environment. At least the consensus in the car I rode home in was that we didn’t need another social network to keep track of. We then saw a presentation on Cisco Jabber, a telephone/messaging solution Cisco is offering. What is really nice about it is its future integration with WebEx, so you can be on the phone on your handset and switch over to a WebEx meeting when needed. This would also allow individual computers to communicate with larger TelePresence and WebEx clients, making our awesome new setup in 204 even more useful. Finally, we were given a presentation on Show and Share, Cisco’s video solution. It offers a media service that can transcode video, add titles, transcribe the video and map out specific topics of interest, tag it, and put it in a YouTube-like interface, basically with one box. It is designed to be a YouTube for the business world, which once again is cool but not something that would necessarily fit within the library. And for me, it doesn’t really seem to do enough that’s different from YouTube.
All in all it was an enjoyable trip. It was interesting to see where Cisco is going, especially with Wake becoming closer and closer with the company. It was also interesting to listen to some of the priorities of the ranking members of IS when it comes to those technologies.
Since I first learned of the Horizon Project, I have been impressed with it. It’s an annual report, with editions for higher education, k-12 education, and museums, about the technologies that are on the horizon. Each report focuses on six technologies over three time horizons as well as naming some contextual themes that are applicable across the board.
Several years after first learning of the Horizon Project, I saw some discussion on library blogs about how libraries weren’t represented, so I decided to throw my name in the ring to see if I could be involved. I was fortunate to be included, and the first report I contributed to was the Higher Education edition for 2011. I also contributed to the 2012 Higher Education report. The process of creating the reports is itself an amazingly efficient and productive modification of an online Delphi study, and I’d be happy to blog or chat about it if you’re interested.
The retreat, itself, was for anyone who had served on any of the advisory boards over the past 10 years. It was organized by Dr. Larry Johnson, CEO of the NMC, and Dr. Lev Gonick, VP and CIO at Case Western Reserve University and Board Chair Emeritus of the NMC. It was held in Austin, Texas at the Hyatt Lost Pines Resort. The location was ideal. It wasn’t in the city, so we weren’t tempted away the way we might have been otherwise in the evenings. This meant that for the entire retreat we were all in one space, thinking about the same thing.
The event consisted of group discussions, nine speakers (each featured in videos of under six minutes on the NMC’s YouTube channel), and the amazing facilitation of David Sibbet, which is hard to understand unless you take a look at his visual representation of the event. Sibbet is a master at visualizing ideas, and I think every one of us probably wished for an ounce of his ability in that area.
As you can see, this event incorporated various communication technologies, as you’d hope it would. iPads outnumbered all other computers as best I could tell. (I felt a little old-fashioned with my MacBook Air!) They brought in speakers via videoconferencing technologies. Tagging was used extensively.
The pace of the event was quick, as we’d get a little bit of introduction, hear a speaker, have structured small group discussions, bring back the big ideas to the group, and watch as Sibbet illustrated the discussion we were having. The structured group work was built around specific points they wanted us to come to conclusions on–which took a bit of getting used to for me but I ended up really liking it. It reminded me of some of my teaching exercises, trying to make sure we don’t always do the same group work and mixing up the types of interactions.
The main ideas from the retreat are captured in a Communiqué. The ideas in this document are “megatrends” that are impacting all educational institutions (libraries included) around much of the internet-connected world. The executive summary, if you don’t want to pop over there, is:
The world of work is increasingly global and increasingly collaborative.
People expect to work, learn, socialize, and play whenever and wherever they want to.
The Internet is becoming a global mobile network – and already is at its edges.
The technologies we use are increasingly cloud-based and delivered over utility networks, facilitating the rapid growth of online videos and rich media.
Openness – concepts like open content, open data, and open resources, along with notions of transparency and easy access to data and information – is moving from a trend to a value for much of the world.
Legal notions of ownership and privacy lag behind the practices common in society.
Real challenges of access, efficiency, and scale are redefining what we mean by quality and success.
The Internet is constantly challenging us to rethink learning and education, while refining our notion of literacy.
There is a rise in informal learning as individual needs are redefining schools, universities, and training.
Business models across the education ecosystem are changing.
There was brief discussion of including a library-related topic as one of the ten, but there weren’t enough library folks at the retreat to get the votes necessary to include it. If you read the communiqué, you’ll note that libraries are mentioned under many of these 10 megatrends. In fact, there was brief discussion of whether there should be a libraries Horizon Report, as there is a Museum edition. I’d lean towards keeping libraries integrated within the existing documents, while increasing librarian participation. I think I can contribute more about libraries to a higher education discussion, and I’d rather librarians be at that table. Likewise, a school librarian could really contribute to the k-12 report. I’d like to see public libraries represented somewhere, though.
And, since we have a library focus here, I thought I’d include Marsha Semmel’s (Director of Strategic Partnerships at the Institute of Museum and Library Services) talk. This talk was given to an audience in which only about 5 of every 100 attendees were librarians, so she was definitely introducing people to standards of the field as well as pushing on some boundaries.
The Horizon Retreat was an amazing opportunity, and I–frankly–was frequently surprised to find myself included at the table in these discussions. I look forward to seeing what else comes of our work over that week. If you’re interested in following along, you can on the (surprise!) wiki!
This weekend, I had the pleasure of giving a talk at the American Association of Law Libraries annual meeting in Philadelphia, PA. Some of you may remember that my last trip to Philly resulted in the theft of my phone :( so I exercised my best big-city behavior this time and kept my phone in my pocket – except for a few pictures.
What amazed me about AALL was how it felt like a highly focused ALA. The vendor hall, as you might expect, is focused very much on law librarians, but I did get a chance to connect with a few scanner vendors to talk about their work with ILL and E-Reserves software. I also managed to run into a number of our colleagues from WFU and a few people that I have met at other conferences!
On Sunday I shared the stage with Andrew Pace from OCLC and Roy Balleste from the St. Thomas Law Library. It was interesting to hear from both Andrew, who discussed OCLC services as they related to cloud computing, and Roy, whose library has adopted the OCLC Web-Scale product. There was considerable interest in the audience and I was reminded how important continuations were to law libraries when the first question focused on this issue.
On a side note I had a chance to attend the Voyager Law Users Group meeting while there and got some interesting information about the new mass data change features in Voyager 8 and heard about where Voyager libraries think they are headed (ILS wise) in the coming years. Too much detail for this post but if you are interested stop by!
Today Susan and Erik attended a webinar on a digital asset management system offered by Ex Libris. It was interesting to see how the vendor discussed asset management and how this current system differs from those previously offered, particularly because we have used a few of their previous products! In the current system a sharp distinction was made between archiving/preservation activities (which fall under the purview of the software) and the discovery layer (which does not).
The speaker discussed a number of use cases that focused on archiving books, legal documents, websites, and a variety of special collections resources. I was left wondering what differentiates this sort of system from traditional IR or digital archive systems. The webinar did cover a few interesting features, such as format conversion, metadata tracking on submission and preservation processes, and a form of version control for migrated digital objects.
We attended this webinar mainly to inform our sense of the state of the art in digital asset management systems, and we hope to complete some comparative analysis of such systems in the coming months.