Professional Development

During February 2011...

ASERL Webinar on Next-Generation Discovery Systems

Friday, February 25, 2011 2:58 pm

Roz, Erik, Ellen M., Kaeley, Craig, Audra, Rebecca, Kevin, Derrick, Susan, and Mary S. attended the ASERL Next Gen Discovery Tools session.

The session was an overview of various next-generation discovery systems (e.g., Serials Solutions Summon, WorldCat Local, and EBSCO Discovery Service, among others). ECU reported on their methodology for evaluating and selecting a product (ultimately Summon). They noted that following the launch they added OAI-harvested digital collections, used Google Analytics to measure use (with OpenURL clicks serving as a metric), and worked with Serials Solutions to resolve data mapping issues.

ECU also mentioned a recent issue of Library Technology Reports by Jason Vaughan at UNLV that provides a useful review of these systems.

NCSU followed with an overview of their Summon experience. Libraries in general seemed to recognize that a main purpose of these systems is to simplify the initial research process, but the NCSU folks discussed some concerns of advanced users (IL instruction, depth of search features). NCSU showed usage percentages for a one-search system that grouped results (Articles, Books & Media, Website) into different columns. Articles were selected over 40% of the time, with Books & Media a close second at 36%; the Website column was used only around 2% of the time!

Overall a very informative session.

ASERL Webinar – ASERL Members e-Reader Experiences

Friday, February 18, 2011 2:56 pm

On Friday, Feb. 18th, a group of ZSR colleagues and I took in a webinar put on by ASERL member libraries discussing their experiences with e-Readers. Speakers included Nancy Gibbs from Duke U., Millie Jackson and Beth Holley from University of Alabama, Eleanor Cook from ECU, and Valerie Boulus from Florida International U. The webinar was organized by John Burger.

Valerie Boulus started the talk by discussing the general specifications of e-reader devices such as the Kindle. She then moved on to specific devices and their features: the Kindle 3, the original Nook, the Nook Color, and the Sony Readers. She also discussed file formats for the devices, as well as Digital Rights Management (DRM).

Nancy Gibbs discussed Duke’s implementation of an e-reader program. They began in fall 2009 and spent around $20,000 on devices and content. They tried to put high-demand titles on the devices and provided multiple copies of the content across a number of devices, since at that point an e-book could be loaded on multiple devices. She did not, however, speak to the copyright issues this raises with licensing the e-books.

Alabama started their program in fall 2009 as well. They obtained funding through a fundraiser and purchased only Kindles. Their different libraries selected the original content, then let patrons request additional titles, excluding textbooks. They spent approximately $2,000 on content last fiscal year. They do not consider the e-reader titles part of collection development, since the program is still a pilot; they don’t feel the e-format is stable enough. Alabama adds titles to each of their Kindles individually and manages new purchases through a spreadsheet.

Back to Duke: they replicate content across all devices instead of breaking it up by content, and they also have a suggestion email for patrons. Content can be put on six Kindles apiece, and on as many Nooks as you want. They discussed money savings, with no discussion of legality. I hope for their sake that no book publishers were on this webinar! The talk then moved on to the nuts and bolts of adding titles to the devices, as well as problems with content on the devices, such as titles disappearing thanks to Amazon.

Eleanor Cook from ECU began her portion by discussing the sales tax issue with e-books, though WFU operates differently from most other schools in this regard, according to Lauren C.

Unfortunately I had to leave at this point in the talk, though it was certainly interesting to hear others’ experiences with some of the issues we have encountered here at ZSR with similar technologies.

iRODS user conference

Thursday, February 17, 2011 7:58 pm

Today I went to the iRODS user conference in Chapel Hill, NC. iRODS stands for Integrated Rule-Oriented Data System. The system is intended to be used for data and digital object curation, supporting both curation activities (e.g., preservation) and access activities (e.g., create, access, replace, update, version). iRODS is usable as a storage subsystem for DSpace and Fedora, as well as a number of other products, and is flexible enough to serve as a platform for other file system services (e.g., Dropbox).

The morning session focused on new features in version 2.5. As I was not familiar with the system I spent the time stepping through a tutorial on the iRODS website. The afternoon focused on use cases which were rather interesting.

The first use case focused on continuous integration using Maven/Git/Hudson, which are some of the tools we have been discussing internally here at ZSR. The development environment at RENCI is called GForge and appears to be a solid approach to enabling distributed development of services. We also heard from DataDirect Networks, who discussed how they are using iRODS to move away from traditional file system work. I also got to see some demonstrations of use cases in archives and file management systems. Lots more information about iRODS and the conference is available on the iRODS wiki.

One fun project I learned about involved the National Climatic Data Center. For me the really incredible idea was that an iRODS approach to file system storage, combined with lots of good metadata, would enable a distributed (multi-institution, multi-site) structure that researchers could use to do retrievals and run processes based on common metadata elements or other file identifiers. For more info see the TUCASI Project.
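The core of that idea is that files get found by their metadata attributes rather than by their paths. Here is a toy Python sketch of that retrieval model (this is not the iRODS API; the catalog class, paths, and attribute names are all invented for illustration):

```python
# Toy illustration of metadata-driven retrieval (the idea behind iRODS
# attribute-value metadata), not the iRODS API itself. All names here
# are hypothetical.

class Catalog:
    """Maps logical files to attribute-value metadata, the way a
    central catalog would in a distributed data grid."""
    def __init__(self):
        self.files = {}          # logical path -> {attribute: value}

    def register(self, path, **metadata):
        self.files[path] = metadata

    def query(self, **criteria):
        """Return paths whose metadata matches every criterion,
        regardless of which site or filesystem holds the bytes."""
        return sorted(
            path for path, meta in self.files.items()
            if all(meta.get(k) == v for k, v in criteria.items())
        )

cat = Catalog()
cat.register("/zoneA/climate/2010/temps.nc", site="NCDC", year=2010, var="temperature")
cat.register("/zoneB/climate/2010/precip.nc", site="RENCI", year=2010, var="precipitation")
cat.register("/zoneA/climate/2009/temps.nc", site="NCDC", year=2009, var="temperature")

print(cat.query(var="temperature", year=2010))
# Found by metadata alone: ['/zoneA/climate/2010/temps.nc']
```

The point of the sketch is that the two "zones" could live at different institutions; a researcher queries by common metadata elements and never needs to know which site holds the file.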

Sarah at the Lilly Conference

Thursday, February 17, 2011 4:17 pm

On Feb. 4th-6th, I attended the Lilly Conference on College and University Teaching with the support of the Faculty Teaching Initiative Grant sponsored by the Teaching and Learning Center. All of the sessions I attended were thought-provoking and broadened my view of teaching. Here are some highlights from the conference:

I attended the session on “Defining Effective Teaching.” Leslie Layne from Lynchburg College surveyed students and faculty on how they define “effective teaching.” Both students and faculty agreed that it is important that the teacher “knows the subject material well.” Faculty also ranked as important being “organized and well-prepared for class” and “[outlining] expectations clearly and accurately.” Interestingly, students’ responses differed from the faculty’s, ranking the following as also important: 1) “is accessible to students”; 2) “uses a variety of teaching methods or formats”; 3) “keeps students interested for the whole class period; makes the class enjoyable”.

I also attended a crowded session on “What Makes a Great Teacher? (or What Makes a Teacher Great?)” At the beginning of the session, Scott Simkins, Director of the Academy for Teaching and Learning at N.C. A&T State University, highlighted the “Professors (& Learners) of the Year,” which is an award given by the Council for Advancement and Support of Education and the Carnegie Foundation for the Advancement of Teaching. Simkins reported on empirical research on effective teaching, and here are some points that he raised from the professional literature:

  • Set big goals and high expectations for students
  • Pedagogical content knowledge
  • Work backwards from learning outcomes
  • Maintain focus on student learning
  • Frame questions that capture the students’ imaginations and challenge paradigms
  • Clarity
  • Organization
  • Build trust
  • Exploring not explaining

I attended the plenary session on “The Good, Bad, and Counterintuitive: How Evidence-Based Teaching Can Correct the Commonsense Approach to Instruction.” Ed Neal and Todd Zakrajsek from UNC-Chapel Hill presented a variety of evidence-based teaching principles:

  • Engage students’ preconceptions; students have preconceptions, but if their preconceptions aren’t engaged, then they may fail to learn new concepts.
  • Deep foundational knowledge that is retrieved; there are different levels of student learning: “I heard about it” –> “I understand it” –> “I can do it in my sleep”
  • Learners must be taught to take a metacognitive approach.

I’m always interested in attending sessions on science teaching, and I also learned about the Science Education Resource Center at Carleton College and the National Center for Case Study Teaching in Science. It was also great catching up with other librarians from UNCG at the conference. I am still in the process of reflecting on all of the sessions that I attended, and I have collected bibliographies and articles on teaching if anyone is interested in reading them.

Leslie at MLA 2011

Monday, February 14, 2011 2:08 am

I’m back from another Music Library Association conference, held this year in Philadelphia. Some highlights:

Libraries, music, and digital dissemination

Previous MLA plenary sessions have focused on a disturbing new trend involving the release of new music recordings as digital downloads only, with licenses restricting sale to end users, which effectively prevents libraries either from acquiring the recordings at all, or from distributing (i.e., circulating) them. This year’s plenary was a follow-up featuring a panel of three lawyers — a university counsel, an entertainment-law attorney, and a representative of the Electronic Frontier Foundation — who pronounced that the problem is only getting worse. It is affecting more formats now, such as videos and audiobooks — it’s not just the music librarian’s problem any more — and recent court decisions have tended to support restrictive licenses.

The panelists suggested two approaches libraries can take: building relationships, and advocacy. Regarding relationships, it was noted that there is no music equivalent of LOCKSS or Portico: Librarians should negotiate with vendors of audio/video streaming services for similar preservation rights. Also, libraries can remind their resident performers and composers that if their performances are released as digital downloads with end-user-only licenses, libraries cannot preserve their work for posterity. The panelists drew an analogy to the journal pricing crisis: libraries successfully raised awareness of the issue by convincing faculty and university administrators that exorbitant prices would mean smaller readerships for their publications. On the advocacy side, libraries can remind vendors that federal copyright law pre-empts non-negotiable licenses: a vendor can’t tell us not to make a preservation copy when Section 108 says we have the right to make a preservation copy. We can also lobby state legislatures, as contract law is governed by state law.

The entertainment-law attorney felt that asking artists to lobby their record labels was, realistically speaking, the least promising approach — the power differential is too great. Change, the panelists agreed, is most likely to come through either legislation or the courts. Legislation is the more difficult to effect (there are too many well-funded commercial interests ranged on the opposing side); there is a better chance of a precedent-setting court case tipping the balance in favor of libraries. Such a case is most likely to come from the 2nd or 9th Circuit, which have a record of liberal rulings on Fair Use issues. One interesting observation from the panel was that most of the cases brought so far have involved “unsympathetic figures” — individuals who blatantly abused Fair Use on a large scale, provoking draconian rulings. What’s needed is more cases involving “sympathetic figures” like libraries — the good guys who get caught in the cross-fire. Anybody want to be next? :-)

Music finally joins Digital Humanities

For a couple of decades now, humanities scholars have been digitizing literary, scriptural, and other texts, in order to exploit the capabilities of hypertext, markup, etc. to study those texts in new ways. The complexity of musical notation, however, has historically prevented music scholarship from doing the same for its texts. PDFs of musical scores have long been available, but they’re not searchable texts, and not encoded as digital data, so can’t be manipulated in the same way. Now there’s a new project called the Music Encoding Initiative, jointly funded by the National Endowment for the Humanities and the German Deutsche Forschungsgemeinschaft. MEI (yes, they’ve noticed it’s also a Chinese word for “beauty”) has just released a new digital encoding standard for Western classical musical notation, based on XML. It’s been adopted so far by several European institutions and by McGill University. If, as one colleague put it, it “has legs,” the potential is transformative for the discipline. Whereas critical editions in print force editors to make painful decisions between sources of comparable authority — the other readings get relegated to an appendix or supplementary volume — in a digital edition, all extant readings can be encoded in the same file, and displayed side by side. An even more intriguing application of this concept is the “user-generated edition”: a practicing musician could potentially approach a digital edition of a given work, and choose to output a piano reduction, or a set of parts, or modernized notation of a Renaissance work, for performance. Imagine the savings for libraries, which currently have to purchase separate editions for all the different versions of a work.
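The "all extant readings in one file" idea can be made concrete with a tiny sketch. MEI, like TEI, encodes a critical apparatus with elements along the lines of `<app>` and `<rdg>`; the fragment below is a simplified, MEI-flavored invention (not schema-valid MEI), parsed with Python's standard XML library to pull out whichever source's reading you want:

```python
# Sketch of encoding parallel readings of one measure in a single XML
# file and selecting one of them. The fragment is a simplified,
# MEI-flavored invention, not schema-valid MEI.
import xml.etree.ElementTree as ET

FRAGMENT = """
<measure n="12">
  <app>
    <rdg source="autograph"><note pname="c" oct="4" dur="4"/></rdg>
    <rdg source="first-edition"><note pname="d" oct="4" dur="4"/></rdg>
  </app>
</measure>
"""

def reading(measure_xml, source):
    """Return the note pitch names of the reading attributed to `source`."""
    measure = ET.fromstring(measure_xml)
    rdg = measure.find(f".//rdg[@source='{source}']")
    return [note.get("pname") for note in rdg.findall("note")]

print(reading(FRAGMENT, "autograph"))      # ['c']
print(reading(FRAGMENT, "first-edition"))  # ['d']
```

A renderer built on this kind of selection is what would let an editor display two sources side by side, or let a performer generate a "user-generated edition" from the same underlying file.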

Music and metadata

In a session titled “Technical Metadata for Music,” two speakers, from SUNY and a commercial audio-visual preservation firm respectively, stressed the importance of embedded metadata in digital audio files. Certain information, such as recording date, is commonly included in filenames, but this is an inadequate measure from a long-term preservation standpoint: filenames are not integral to the file itself, and are typically associated with a specific operating system. One speaker cited a recent Rolling Stone article, “File not Found: the Recording Industry’s Storage Crisis” (December 2010), describing the record labels’ inability to retrieve their backfiles due to inadequate filenames and lack of embedded metadata. Metadata is now commonly embedded in many popular end-user consumer products, such as digital cameras and smartphones.

For music, embedded metadata can include not only technical specifications (bit depth, sample rate, and locations of peaks, which can be used to optimize playback) but also historical context (the date and place of performance, the performers, etc.) and copyright information. The Library of Congress has established sustainability factors for embedded metadata. One format that meets these requirements is Broadcast Wave Format (BWF), an extension of WAV: it can store metadata as plain text and can include historical-context data. The Technical Committee of ARSC (Association for Recorded Sound Collections) recently conducted a test wherein they added embedded metadata to some BWF-format audio files and tested them with a number of popular applications. The dismaying results showed that many apps not only failed to display the embedded metadata, but also deleted it completely. This, in the testers’ opinion, calls for an advocacy campaign to raise awareness of the importance of embedded metadata. ARSC plans to publish its test report on its website, and the software for embedded metadata that they developed for the test is also available as a free open-source app.
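The reason BWF metadata survives renames and system migrations is that it lives in the file's own bytes, as a `bext` chunk inside the RIFF/WAV container. The sketch below builds and parses a cut-down `bext` chunk in Python; only three of the real fields are included (the actual chunk, per EBU Tech 3285, has more fields and a fixed layout), and the example strings are invented:

```python
# Minimal sketch of a BWF bext chunk inside a RIFF container. Only a
# few of the real fields are modeled (Description, Originator,
# OriginationDate, at their real fixed widths of 256/32/10 bytes);
# a production BWF file carries more. Example values are invented.
import struct

def make_bext(description, originator, date):
    payload = (description.encode("ascii").ljust(256, b"\x00")
               + originator.encode("ascii").ljust(32, b"\x00")
               + date.encode("ascii").ljust(10, b"\x00"))
    # RIFF chunk layout: 4-byte ID, 4-byte little-endian size, payload.
    return b"bext" + struct.pack("<I", len(payload)) + payload

def read_bext(chunk):
    assert chunk[:4] == b"bext"
    (size,) = struct.unpack("<I", chunk[4:8])
    payload = chunk[8:8 + size]
    desc = payload[:256].rstrip(b"\x00").decode("ascii")
    orig = payload[256:288].rstrip(b"\x00").decode("ascii")
    date = payload[288:298].rstrip(b"\x00").decode("ascii")
    return desc, orig, date

chunk = make_bext("Wanamaker organ recital", "ZSR Library", "2011-02-14")
print(read_bext(chunk))
# ('Wanamaker organ recital', 'ZSR Library', '2011-02-14')
```

This also illustrates the ARSC test's failure mode: an application that rewrites a WAV file but drops unrecognized chunks silently destroys exactly this data.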

Music cataloging

A pre-conference session held by MOUG (Music OCLC Users Group) reported on an interesting longitudinal study that aimed to trace coverage of music materials in the OCLC database. The original study was conducted in 1981, when OCLC was relatively new. MOUG testers searched newly-published music books, scores, and sound recordings, as listed in journals and leading vendor catalogs, along with core repertoire as listed in ALA’s bibliography Basic Music Library, in OCLC, and assessed the quantity and quality of available cataloging copy. The study was replicated in 2010. Exact replication was rendered impossible by various developments over the intervening 30 years — changes in the nature of the OCLC database from a shared catalog to a utility; more foreign and vendor contributors; and the demise of some of the reference sources used for the first sample of searched materials, necessitating substitutions — but the study has nevertheless produced some useful statistics. Coverage of books, not surprisingly, increased over the 30 years to 95%; representation of sound recordings also increased, to around 75%; but oddly, scores have remained at only about 60%. As for quality of the cataloging, the 2010 results showed that about 20% of sound recordings have been cataloged as full-level records, about 50% as minimal records; about a quarter of scores get full-level treatment, about 50% minimal. The study thus provides some external corroboration of long-perceived music cataloging trends, and also a basis for workflow and staffing decisions in music cataloging operations.

A session titled “RDA: Kicking the Tires” was devoted to the new cataloging standard that the Library of Congress and a group of other libraries have just finished beta-testing. Music librarians from five of the testing institutions (LC, Stanford, Brigham Young, U North Texas, and U Minnesota) spoke about their experiences with the test and with adapting to the new rules.

All relied on LC’s documentation and training materials, recording local decisions on their internal websites (Stanford has posted theirs on their publicly-accessible departmental site). An audience member urged libraries to publish their workflows in the Toolkit, the online RDA manual. It was generally agreed that the next step needed is the development of guidelines and best practices.

None of the testers’ ILSs seem to have had any problems accommodating RDA records in MARC format. LC has had no problems with their Voyager system, corroborating our own experience here at WFU. Some testers reported problems with some discovery layers, including Primo (fortunately, we haven’t seen any glitches so far with VuFind). Stanford reported problems with their (un-named) authorities vendor, mainly involving “flipped” (changed name order) entries. Most testers are still in the process of deciding which of the new RDA data elements they will display in their OPACs.

Asked what they liked about RDA, both the LC and Stanford speakers cited the flexibility of the new rules, especially in transcribing title information, and in the wider range of sources from which bib info can be drawn. Others welcomed the increased granularity, designed to enhance machine manipulation, and the chance this affords to “move beyond cataloging for cards” towards the semantic web and relation-based models. It was also noted that musicians are already used to thinking in FRBR fashion — they’ve long dealt with scores and recordings, for instance, as different manifestations of the same work.

Asked what they thought “needed fixing” with RDA, all the panelists cited access points for music (the LC speaker put up a slide displaying 13 possible treatments of Rachmaninoff’s Vocalise arranged for saxophone and piano). There are other areas — such as instrument names in headings — that the RDA folks haven’t yet thought about, and the music community will probably have to establish its own practice. Some catalogers expressed frustration with the number of matters the new rules leave to “cataloger’s judgment.” Others mentioned the difficulty of knowing just how one’s work will display in future FRBRized databases, and of trying to fit a relational structure into the flat files most of us currently have in our ILSs.

What was most striking about the session was the generally upbeat tone of the speakers — they saw more positives than negatives in the new standard, assured us it only took some patience to learn, and were convinced that it truly was a step forward in discoverability. One speaker, who trains student assistants to do copy-cataloging by telling them “When in doubt, make your best guess, and I’ll correct it later,” observed that her students’ guesses consistently conformed to RDA practice — some anecdotal evidence suggesting that the new standard may actually be more intuitive for users, and that new catalogers will probably learn it more easily than those of us who’ve had to “unlearn” AACR2!


Our venue was the Loews Philadelphia Hotel, which I must say is the coolest place I’ve ever stayed in. The building was the first International Style high-rise built in the U.S., and its public spaces have been meticulously preserved and/or restored, to stunning effect. The first tenant was a bank, and so you come across huge steel vault doors and rows of safety-deposit boxes, left in situ, as you walk through the hotel. Definitely different!

Another treat was visiting the old Wanamaker department store (now a Macy’s) to hear the 1904 pipe organ that is reputed to be the world’s largest.

Code4Lib Day 2

Wednesday, February 9, 2011 7:52 pm

One important point of today’s presentations at code4lib was on using a community-based approach to provide solutions. There was also an interesting breakout session led by Lyrasis on the importance of open source solutions in libraries and why they are becoming so popular.

In order to develop a digital exhibit that would aggregate digital collections originally in different formats, the University of Notre Dame decided on a community-based approach and joined the Hydra Framework community, whose members include Stanford, the University of Virginia, DuraSpace, MediaSpace, and Blacklight. The Hydra Framework is a shared base of code that each Hydra community member benefits from; it provides developers with a set of tools that facilitate the rapid development of scalable applications. Notre Dame’s digital exhibit architecture uses Fedora Commons as the repository with Apache Solr for indexing, and Blacklight with ActiveFedora for the interface.

The Chicago Underground Library also used a community-based approach to collect and catalog the city of Chicago’s history. They collected every piece of print material imaginable, including hand-made artist books, university press publications, self-published poetry books, etc. They also gathered information about each individual who contributed to the collection so users can trace items back to them. They have accumulated over 2,000 publications so far. Their future goal is to expand the collection to include audio and video.

A rep from Lyrasis led a breakout session to talk about how their organization can help libraries achieve their goals, and to find out why there is so much interest in open source solutions and what is driving such enthusiasm. It was interesting to find out that no attendee thought the decision to embrace open source solutions over vendor-provided solutions was solely financial. Everyone agreed it was more about the independence and flexibility that open source software provides. Then there was a long discussion on the costs involved in implementing open source software. Overall the group found that open source solutions are definitely worth it.

Gretchen at 2011 Lilly Conference

Wednesday, February 9, 2011 2:59 pm

Over this past weekend, I spent Friday – mid Sunday at the Lilly Conference on College and University Teaching and Learning in Greensboro. The experience was rewarding; I enjoyed meeting others in the field, and learning what other local institutions are working on. I look forward to trying out some new ideas on teaching and learning, particularly with technology, on our campus in the near future. Additionally, I presented on Location-Based Applications on Saturday evening.

Highlights of the conference included several sessions: using video lecture capture systems, integrating Google Apps and Maps to support experiential learning, and a video-based project a faculty member conducted in her classroom. And, for me, giving my first conference presentation!

Learn more about the sessions I attended on Friday, Saturday, and Sunday, and my presentation, via Collaboration @ Wake.

I recorded my presentation with Flip video to see how I actually sound when I present, and then edited in the supporting media to create a video. It is most “interactive” around 6:06 and 17:30. If anyone ever wants to learn how to do this, I’d be happy to show you!

Location-Based Applications: Creating a Community Beyond the Map from Gretchen Edwards on Vimeo.

Gretchen Edwards presents “Location-Based Applications: Creating a Community Beyond the Map” at the 2011 Lilly Conference on College and University Teaching in Greensboro, North Carolina.

Vufind updates

Wednesday, February 9, 2011 12:05 am

JP already talked about VuFind, but I thought I would add my notes from the VuFind talk today. Demian Katz (Villanova) took some time in the afternoon to talk about VuFind and its growing support for metadata standards other than MARC. The update centered on how VuFind had been re-tuned to be more agnostic with regard to metadata standards and encoding models. The redesign made use of “record drivers” to take control of both screen display functionality and data retrieval processes, OAI harvesters to gather data, and XSL importing tools to facilitate metadata crosswalks and full-text indexing.
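The record-driver idea, one object per metadata format that owns its own display and retrieval logic, can be sketched in a few lines. VuFind itself is PHP; the Python below only shows the shape of the pattern, and all class, method, and field names are invented for illustration:

```python
# A sketch of the "record driver" pattern: one driver class per
# metadata format, each owning its own display logic, so the rest of
# the application stays format-agnostic. VuFind itself is PHP; every
# name here is invented for illustration.

class RecordDriver:
    """Base driver: the application only talks to this interface."""
    def __init__(self, raw):
        self.raw = raw

    def title(self):
        raise NotImplementedError

class MarcDriver(RecordDriver):
    # raw is a dict of MARC tags -> subfield dicts, standing in for a
    # fully parsed MARC record.
    def title(self):
        return self.raw["245"]["a"]

class DublinCoreDriver(RecordDriver):
    # raw is a dict of Dublin Core element names -> values.
    def title(self):
        return self.raw["dc:title"]

def driver_for(record):
    """Pick a driver by record format, as the display layer would."""
    cls = MarcDriver if record["format"] == "marc" else DublinCoreDriver
    return cls(record["data"])

marc = {"format": "marc", "data": {"245": {"a": "Vocalise"}}}
dc = {"format": "dc", "data": {"dc:title": "Vocalise (arr. saxophone)"}}
print(driver_for(marc).title())  # Vocalise
print(driver_for(dc).title())    # Vocalise (arr. saxophone)
```

The payoff is that adding a new metadata standard means adding one driver class, not touching every screen that displays a record.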

Demian talked at some length about basic features of the metadata indexing toolkit. At the VuFind 2.0 conference he talked a bit about his ability to use the MST from the eXtensible Catalog project, and I wonder (no answer, just a question) how the toolkit development within VuFind matches up with the XC project. Demian reported on the OAI-PMH harvester that will gather records remotely and load them into VuFind. I have used an early version of this tool to successfully harvest and import HathiTrust records and am encouraged to see that development has continued. Demian also mentioned a new XSLT importer tool that enables mapping an XML document into an existing Solr configuration.

This represents an interesting step forward for VuFind, as it will allow ZSR to think about harvesting and indexing data from our DSpace instance as well as other sources that support OAI harvesting. All of these features are going to come in the VuFind 1.1 release on March 21st! More to come on this as we get our test instance of VuFind running.

JP at Code4Lib 2011

Tuesday, February 8, 2011 7:39 pm

Before commenting on today’s topics, I thought I would say a few things about the pre-conference session on “Solr: what’s new?” that I attended yesterday.

Although there were other interesting pre-conference sessions offered concurrently, I chose to attend the one on Solr because of the important role that the SolrMarc utility plays in VuFind record indexing. If you wonder how this works: basically, SolrMarc reads in records from an imported Voyager MARC load file, extracts information from various fields, and adds that information to the VuFind Solr index. I wanted to learn more about Solr and see if there were any new updates that could be used to move our VuFind implementation to the next level. The pre-conference was rather technical. Erik Hatcher from Lucid Imagination was the presenter, and he talked about how Solr has continued to improve dramatically over time. Some of the new features in development include SolrCloud, which relies on ZooKeeper, a centralized service for maintaining configuration information, to provide shared, centralized configuration and core management to programmers. He also talked about pivot/grid/matrix/tree faceting, a hierarchical way of providing facets that branch out into “sub-facets” to further narrow down a search. Another feature Solr has improved is date faceting, which we will see in our upcoming VuFind upgrade.
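The extract-and-index step described above boils down to applying a map of Solr field names to MARC tag/subfield rules. SolrMarc itself is Java, configured through `.properties` rules files; the Python below is a much-simplified stand-in that only shows the shape of the transformation, with an invented sample record:

```python
# A much-simplified stand-in for what SolrMarc does: apply a map of
# Solr field names -> MARC (tag, subfield) rules to a parsed record
# and emit the document that would be posted to the Solr index.
# SolrMarc itself is Java, driven by .properties rules; the record
# and rules below are invented samples.

# A parsed MARC record as {tag: [{subfield_code: value}, ...]}.
RECORD = {
    "245": [{"a": "Vocalise", "c": "Sergei Rachmaninoff"}],
    "100": [{"a": "Rachmaninoff, Sergei"}],
    "650": [{"a": "Songs without words"}, {"a": "Saxophone music"}],
}

# Solr field -> (MARC tag, subfield code), akin to a SolrMarc
# .properties line such as: title = 245a
RULES = {
    "title": ("245", "a"),
    "author": ("100", "a"),
    "subject": ("650", "a"),
}

def to_solr_doc(record, rules):
    doc = {}
    for solr_field, (tag, code) in rules.items():
        values = [f[code] for f in record.get(tag, []) if code in f]
        if values:
            # Single value if the field occurs once, a list otherwise.
            doc[solr_field] = values[0] if len(values) == 1 else values
    return doc

print(to_solr_doc(RECORD, RULES))
# {'title': 'Vocalise', 'author': 'Rachmaninoff, Sergei',
#  'subject': ['Songs without words', 'Saxophone music']}
```

Real SolrMarc rules also handle concatenated subfields, translation maps, and custom extraction methods, but every one of them reduces to this tag-to-field mapping at its core.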

The actual conference started today, and Erik has already blogged about all the important subjects. I was interested in what Demian had to say about VuFind.

The idea of centralization of code introduced by Erik Hatcher was also embraced by Demian Katz when he talked about the redesign goals for VuFind. He is aiming to centralize the MARC-specific code to make it easier to replace. Just to be a little bit controversial, Demian stated that “MARC must die.” His point was that library data is not limited to MARC but also includes other data types that are becoming more and more popular. He expressed pride in the ability of the upcoming VuFind release (VuFind 1.1) to provide, among other things, full Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH) server support, which will enable harvesting metadata into a directory for further data manipulation. Here, Demian provided an answer to the question “where is my data?”: grow the toolkit to solve the problems of obtaining records from remote sources, processing harvested files, and indexing arbitrary XML records. According to Demian, understanding record drivers gives the programmer a lot of control over VuFind.

Update on Kuali OLE

Tuesday, February 8, 2011 1:20 pm

I decided to devote a post to the OLE project, given our interest in its direction in the coming year. Tim McGeary gave an update on the status of OLE: the project is currently in a build phase, with nearly $5 million in funding from various sources.

Coding started in early February, with HTC Global Services as a development partner. The organization is still shooting for a first test-case release in July 2011. The main project goals have not changed in the last year: community-source, next-generation, re-examined library business processes, a break away from print-centric assumptions, reflecting changes in scholarly work, and integration with enterprise-level systems.

Tim spent a few moments talking about data storage and metadata formats: the project is focusing on being format-agnostic and is interested in supporting linked data and interchangeable workflows. The data model is envisioned to include descriptive, semantic, and relational data, and will use the Kuali service bus to integrate with other Kuali systems.

One of the interesting aspects of the OLE system is its reliance on the Kuali Finance system. This approach offers an opportunity for universities that use Kuali for their enterprise information system to benefit from some economies of scale. This is part of the Kuali Rice framework that includes a suite of pre-built services common to many enterprise applications.

One of the questions asked for concrete advantages. Tim listed three: 1) store more data points and manage them more holistically; 2) raise the library system to the enterprise level, breaking down barriers to integration; and 3) break down barriers between libraries to facilitate more data sharing. Tim also mentioned the possibility of a central record-sharing system.
