I recently attended the first regional OCLC member forum, held at UNCG. The meeting focused on the many changes happening with OCLC products and on building a better understanding of how the products work together. I went to the breakout session pertaining to Cataloging and Metadata. Within this session, members were able to give feedback on issues we have been having, particularly with Connexion, and to make requests for features that don’t exist. OCLC has a web page dedicated to the forums, which includes pictures, questions, and feedback from the attendees. Feel free to explore at the following link: https://oclc.org/en-US/events/member-forums/after-party.html
One of my goals for my first year is to do a complete refresh of all of the content in the Toolkit. In my role serving online students, I understand the power of video in teaching library tools and selling the library as the place to go for research assistance, technology training, and self-help. Having a digital space like the Toolkit, a centralized location for all of our video and other digital learning objects, is incredibly valuable for reaching students and faculty both on campus and online. However, when a space like the Toolkit starts to show its age, we risk losing a bit of credibility. For as much power as they have, videos quickly become outdated as web interfaces or link paths change. It’s also no secret that we’re bound by constraints on our time, so producing video of high production quality often takes a back seat, resulting in videos that look outdated before their time.
So, to help restore the Toolkit to its former glory and to put in place more sustainable workflows, I’ve started using Camtasia Studio to create what I hope are high-polish, ZSR-quality videos that will stand the test of time. I just finished my first one, a teaser video for Zotero, and wanted to share how I did it.
With any video project, I find it’s much easier to get the audio down first and make it perfect before worrying about capturing video. I’m not one for improv, so that means writing a script first and paring it down to what’s really essential to get the message across. Our users don’t expect elaborate, drawn-out explanations; they’re watching the video because they hope it will save them time. If the content requires a video much longer than two minutes, it should probably be split into two separate videos. So to keep things brief, I wrote a script and timed myself rehearsing it, paring it down until I was comfortably under two minutes. Only then was it finally time to record.
The scripting and audio recording take up the bulk of the time, but I’d argue that they’re the most important parts. Even the shiniest video will be ignored if there’s annoying background noise or if the speaker rambles or talks too quickly or too quietly. I used Camtasia to record myself reading through the script three or four times, then cut and spliced the best parts into the final audio track. So that the video doesn’t feel quite so librarian-sitting-in-an-office-talking-at-you, I used a royalty-free guitar track that came packaged with Camtasia to add some interest.
Now that I had the audio ready to go, it was pretty easy to record the screen. Screen capture is really what Camtasia was made to do, and it does it beautifully. All I had to do was mentally map out my mouse clicks, record a few run-throughs of each “scene,” and I was ready to edit.
I want to note here the significance of recording audio and video separately. One of the major weaknesses of using free, web-based screen recorders like Jing is that you can’t separate the audio from the video; that is, when something in your video changes, like a database interface, you have to re-record the entire video to bring it up to date. By separating the audio and video, I can feasibly reuse the same audio track and update only those portions of the video that need updating, saving tons of effort in the long run.
Editing, Callouts, and Text
To bring everything together, I had to manipulate the video clips to match up with the audio. Because this particular video was more of a teaser and not intended to be a “how-to,” I sped up the video clips to keep everything flowing quickly. I focused the user’s attention with some appropriate zooming and panning, then added photos, text, and colored backgrounds when there was no video to display. You’ll notice I like big, bold text that’s readable even on the smallest smartphone screen.
Finally, to make the video accessible to those with hearing impairments and to those who might not have a pair of headphones in a quiet room, I added a caption track that the user can turn on and off in YouTube. Camtasia makes this almost absurdly easy: all I had to do was copy and paste my transcript into my project, then Camtasia guided me through time-stamping the caption track to sync with the audio.
I plan on working my way through the content on the Toolkit as determined by the needs of the online counseling program. I have already planned an entire series of Zotero tutorials, followed by tutorials for the PsycINFO and PubMed databases. If you have ideas for videos I can add to my queue, or if you have a special project in mind, I’m all ears. I hope you enjoy!
On Friday, May 28, Barry, Craig, Erik, Jean-Paul, Megan, and I visited East Carolina University in Greenville to spend some time with their Digital Collections unit and Special Collections department.
After introductions, Digital Collections unit head Gretchen Gueguen gave us an overview of the origin and initiatives of the unit. Like ZSR, digitization projects at ECU began as digital exhibits, but a standard procedure for metadata and digitization was needed. Some of the policies Gretchen and her team created include a digital collections development policy as well as technical guidelines for digitization.
Gretchen then described the current architecture of their digital collections, which are stored in a home-grown TEXTML/ASP.NET repository. ECU uses DSpace for ETDs and other faculty and student research, but not for special collections. Metadata is formatted as XML in a METS wrapper, using MODS for descriptive metadata and Dublin Core for OAI harvesting. Erik and Jean-Paul noted that the repository is Windows-based, but we got some great ideas from their user interface.
On the Digital Collections website, Gretchen explained that digital objects were arranged according to collection strengths instead of by collection title or digital exhibit. Users can explore subject-arranged collections. In this way, Gretchen showed how digital collections can be a way to intellectually organize material in a digital interface. The option to search across all collections is featured on their website. The site includes a shuffled tag cloud consisting of shortened LC subject headings, as well as LC, geographic location, and date information as facets for digital object records. Here is an example. Note how each record includes a “Related Resources” box that links the record to subject collections and records with the same LC subject headings. One of the most innovative features of the box is that it links the record back to its source collection’s finding aid – as well as other items in the same box, folder, or collection! This is something we could do at ZSR. User comments are easy to add and are indexed in the object’s MODS record.
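The “Related Resources” linkage described above can be approximated by inverting an index over subject headings, so that any two records sharing a heading point at each other. A minimal sketch follows; the record IDs and subject headings are hypothetical, not drawn from ECU’s actual data.

```python
from collections import defaultdict

# Hypothetical digital-object records with LC subject headings.
records = {
    "rec1": {"subjects": ["Tobacco farms", "Pitt County (N.C.)"]},
    "rec2": {"subjects": ["Tobacco farms"]},
    "rec3": {"subjects": ["Pitt County (N.C.)", "Schools"]},
}

# Invert the records: subject heading -> set of record IDs carrying it.
by_subject = defaultdict(set)
for rec_id, rec in records.items():
    for subject in rec["subjects"]:
        by_subject[subject].add(rec_id)

def related(rec_id):
    """Return other records sharing at least one subject heading with rec_id."""
    linked = set()
    for subject in records[rec_id]["subjects"]:
        linked |= by_subject[subject]
    linked.discard(rec_id)  # a record is not related to itself
    return sorted(linked)

print(related("rec1"))  # → ['rec2', 'rec3']
```

The same inverted index could equally key on box, folder, or source-collection identifiers to produce the finding-aid links the ECU interface offers.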
Archival finding aids at ECU are encoded by Mark Custer, who created a functional and creative stylesheet that allows users to interact with content and digitized material. Each finding aid has a tab for viewing digital objects, which links back to the digital repository. Here is an example. Finding aids are fully searchable and include a printable version. ECU does not use Archivists’ Toolkit but they are investigating the possibility of incorporating it into their workflow.
Metadata librarian Patricia Dragon demonstrated the web-based form that was created for her and other catalogers to use to catalog digital objects. Once material has been digitized, catalogers are sent a “job” request to catalog the objects. Using pull-down menus, catalogers choose descriptive terms for materials (even previously used LCSH and creator terms are saved). The web forms interact with a SQL database and are re-indexed regularly. Perhaps the most interesting aspect of the metadata workflow is that all digital objects are made available online with basic metadata (such as title and source collection) and remain that way until fully cataloged.
Digitization and digital project request forms are also web-based, making it easier for library staff to submit ideas. Their form is similar to a draft that I am working on, except that it is connected to a SQL database that staff can interact with and use to prioritize requests. ECU is forming a “selection advisory team,” similar to a group being discussed at ZSR, that will score, rank, and set deadlines for digital project submissions.
We got a tour of Special Collections from Dale Sauter, which included their beautiful reading room and spacious archival stacks. We learned that Special Collections’ role is focused more on selection and project suggestion and less on description or project management. Digital Collections had existed as a unit within the Special Collections department at ECU, but last week the unit was moved to a new department called Library Technology and Digital Initiatives (they are searching for a department head). We also had a tour of the Digital Collections area of the library, where Joe Barricella supervises student employees’ digitization and technical metadata work.
The digitization task force at ZSR will be meeting early this month to discuss the ECU visit and potential policies for digitization. Overall, it was an inspiring and informative field trip!
On Wednesday the Z. Smith Reynolds Library implemented a new discovery system for its collections. The system, developed initially by Villanova University, employs innovative indexing and searching techniques to help patrons find and interact with library resources.
This new tool lets patrons discover new relationships between resources through faceted browsing, a technique commonly used on web-based stores such as Amazon: patrons can narrow results by combining several limiting criteria (such as format, location, and publication date) using dynamic links on the results page. It also introduces community-focused features such as the ability to add comments and tags to catalog records.
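At its core, faceted browsing is just counting facet values across a result set and filtering by the values a patron selects. Here is a minimal sketch of that idea; the sample records and field names are invented for illustration and do not reflect the catalog’s real schema.

```python
from collections import Counter

# A few hypothetical catalog records (not real ZSR data).
records = [
    {"title": "Dubliners", "format": "Book", "location": "Main", "year": 1914},
    {"title": "Casablanca", "format": "DVD", "location": "Media", "year": 1942},
    {"title": "Ulysses", "format": "Book", "location": "Main", "year": 1922},
]

def facet_counts(records, field):
    """Count how many records fall under each value of a facet field."""
    return Counter(r[field] for r in records)

def apply_facets(records, **criteria):
    """Keep only the records matching every selected facet value."""
    return [r for r in records
            if all(r[field] == value for field, value in criteria.items())]

# A patron clicks the "Book" and "Main" facet links:
books = apply_facets(records, format="Book", location="Main")
print([r["title"] for r in books])      # → ['Dubliners', 'Ulysses']
print(facet_counts(records, "format"))  # → Counter({'Book': 2, 'DVD': 1})
```

In a production discovery system these counts come from the search index itself rather than from iterating over records, but the patron-facing behavior is the same: each facet link adds one more criterion to the query.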
The system complements a suite of locally developed and open-source information systems that the library employs, including the New Book/Film Walls, WakeSpace (a digital library of WFU collections), book delivery and reserves services, and library-sponsored blogs and wikis for the university community.
Yesterday we held the inaugural Emerging Technologies Talk for the staff development committee. Our plan is to host a monthly discussion on new and emerging technologies, offset two weeks from the journal group’s meetings.
To get us off to a good start I thought I’d go with something that people in the library would be interested in personally, might find relevant for work, and would show that emerging technologies can be fun. The topic? Social software built around books and reading. You might have heard of some of them: GoodReads, LibraryThing, Shelfari, Amazon’s profiles, Google’s My Library, and My WorldCat. I ran through a quick presentation just to get everyone on the same page, then we talked about some of the similarities, differences, strengths, and weaknesses of each of the options.
If you’re interested, here’s the presentation:
We’d like to host one of these workshops once a month, so if there’s a new technology that you’re curious about, just let me know! We’ve already had requests that next month we talk about some of the things that Google is doing (the Chrome browser and the move from Google Pages to Google Sites) and some of the things that former Google employees are doing (the Cuil search engine and FriendFeed).
Today some of the RITS team members (Kevin, Kaeley, Sarah, and I) traveled to UNCG to meet with reference librarians interested in technology at Jackson Library. It was a good meeting, with a turnout of about 12 library staff members between UNCG and WFU.
The conversation was casual, with short demonstrations of some of the different things we are doing followed by discussion of similarities and differences in our experiences. Here’s Kaeley talking about marketing:
On the WFU side, we talked about marketing, blogs as a CMS, the Toolkit, and LibGuides. UNCG shared their IR project, chat widget, use of Facebook in marketing, and their Blackboard project.
We had a good time meeting our colleagues, and a tasty lunch at Jack’s Corner. It was good to hear what local colleagues are doing, and to know that we’re all looking at some of the same issues and challenges though our communities are quite different.
Hopefully, once the renovations are done, we can have our UNCG friends come see the improvements firsthand!