Friday, December 17, 2010

Building the foundation for success in implementing digital preservation at your library (Updated: December 20, 2010)

(Note: Due to some excellent reader feedback, I've updated this post to include additional sources of information.)

A topic we’ve been raising in the regional directors’ meetings that Ex Libris has been conducting around North America is not only the opportunity digital preservation represents for libraries and the profession of librarianship, but also the need for library administrators to get library staff trained in doing digital preservation. It's important to understand that this is part of the foundational work for engaging in a successful digital preservation initiative, such as putting Rosetta in production at your library.

There are many ways to go about doing this, starting with readings and online courses and progressing through formal courses of study. In order to help library administrators achieve the goal of getting staff trained, we'd like to share some links/sites where information about digital preservation training can be found.
  1. Required Reading. The report “Sustainable Economics for a Digital Planet: Ensuring Long-term Access to Digital Information” should be essential reading for all librarians planning a digital preservation initiative.
  2. Online Tutorials. One of the best sets of online tutorials (workshops are available as well; check the link for dates) is the one from the Inter-university Consortium for Political and Social Research (ICPSR). The program, called the Digital Preservation Management Workshop and Tutorial, is based on work done at Cornell University and supported with funding from the National Endowment for the Humanities (NEH). The tutorials are available at the website in English, French and Italian. Another solid program is the Digital Preservation Training Programme, run in the U.K. by the University of London Computer Centre (ULCC), which also offers information online and a variety of courses in various formats; it is designed to provide “the skills and knowledge necessary for institutions to combine organizational and technological perspectives, and devise an appropriate response to the challenges that digital preservation needs present.” (Update: Thanks to the efforts of experts at Virginia Tech and the University of North Carolina, Chapel Hill, there are some wonderful modules, journal links, papers and reports available through the Digital Curation Exchange. They also participate in creating an entry at Wikiversity; see the Digital Library entry and look at Core Topics, Section 8, for content specific to digital preservation.) All of this is well worth checking out.
  3. Online Introductory Videos. YouTube has a number of light-hearted, high-level videos that can be useful in orienting administrators, funders and staff to the basics of digital preservation. Freely available and each being around five minutes in length, these are very useful.
  4. Online Training Videos. Some excellent training videos, covering a wide range of digital preservation topics, can be found here. Also available for free, these videos are a wonderful asset for libraries looking for low-cost ways to get staff trained.
  5. Slides. The Slideshare website also offers a wide range of presentations on digital preservation. There are many, many presentations here, so expect to spend some time examining these with regard to the age of the presentation, credentials of those who posted them and overall suitability to your needs before disseminating widely. A good starting point is this set.
  6. Toolkits. The Northeast Document Conservation Center (NEDCC) has developed and made available some excellent toolkits for libraries to use in planning and assessing their readiness for digital preservation. The planning toolkit provides a questionnaire and sample policies that will assist in drafting a digital preservation policy, an essential foundation for digital preservation services. The readiness toolkit offers a wealth of resources to use in assessing your library’s readiness to engage in digital preservation. Other good policy toolkits can be found at the National Library of Australia’s website, and yet another example is available at the website of the European Electronic Resource Preservation and Access Network.
  7. Degreed programs. For those library administrators wishing to have staff with degrees specialized in digital preservation, there are some options. In North America, the University of Michigan School of Information has developed a program that provides a specialization in Preservation of Information. (Update: The University of North Carolina, Chapel Hill, School of Information and Library Science also offers a course in digital preservation and access - Course INLS: 752.) In the U.K., the Center for Archive and Information Studies (CAIS) at the University of Dundee also offers programs in digital preservation. Of course, these programs represent a serious commitment of both staff time and library funds, but for those institutions that want to seriously embrace this new opportunity for libraries, it would likely be a very smart investment.
Digital preservation represents a continuing and exciting opportunity for librarianship and the ability to demonstrate value. Librarians have expertise in metadata/taxonomy/ontology, access and preservation. When that knowledge base is updated and married with expertise in digital preservation, it creates a powerful new value proposition for parent organizations and administrators.

Also, as I note when we conduct the regional directors’ meetings: When you train staff in this field, you also need to be sure you’re offering them an attractive set of reasons to want to continue to be a part of your organization. This is because they will be entering a rapidly growing field of needed expertise. People with this knowledge are going to be in demand. Be forewarned.

Finally, if you’re coming to ALA Midwinter in San Diego, Ex Libris is offering a seminar, “Stop the loss of digital content—with Ex Libris Rosetta," on Saturday, January 8, 2011, 1:30 pm–3:30 pm (Pacific Time) at the San Diego Marriott & Marina (Register here). You’ll have the chance to hear a university librarian talk about his plans to move boldly into the realm of true digital preservation, and an executive program manager responsible for moving Rosetta into full production in a large library environment reveal how it was done in a matter of just months. I think you’ll find it a great opportunity to start building the foundation for success in implementing digital preservation at your library.

Thursday, December 9, 2010

Reverberations and amplifications on some important issues for libraries

Recently a couple of issues I’ve raised here have been addressed in some other blogs. They provide other points of view and document other dimensions of the issues involved. So, if you haven’t had a chance to catch these posts, I highly recommend them:
  1. Abe Lederman of Deep Web Technologies wrote an interesting post linking the European Union case filed against Google this past week to what both the Federated Search Blog and I have written recently (here and here) concerning the Charleston Conference Face-Off and biased search result sets. Librarians should not turn a blind eye to the potential problems here. It is important to understand that commercial interests can potentially void some of the checks and balances that should be preserved in the library supply chain; those checks and balances can be maintained by separating the purchase of discovery/search tools from the purchase of content. If this isn’t done, then, as Abe points out in his headline, “If Google might be doing it…”, there is no reason to think it can’t happen in the vendor supply chain for libraries as well. It’s a point well worth remembering.
  2. Another blog post I found particularly interesting is one by John Wilkin, Executive Director of HathiTrust and Librarian at the University of Michigan Library. His post, entitled: “Open Bibliographic Data: How Should the Ecosystem Work?” is required reading as are the comments that follow. While I won’t say I agree with all of John’s points, he makes a number of very pointed and well-deserved remarks aimed at OCLC that I do agree with. These include:
    “By walling off the data, we, the members of the OCLC cooperative, lose any possibility of community input around a whole host of problems bigger than the collectivity of libraries.”
    and
    "OCLC should define its preeminence not by how big or how strong the walls are, but by how good and how well-integrated the data are.”
    It’s a different take on what I said in my post, i.e. that OCLC should stand for Open and Collaborative Library Content. Karen Coyle notes in the comments section:
    “If you cannot release your bibliographic data openly, you cannot participate in the linked data movement.”
    The one point on which there seems to be a growing consensus is that some change is clearly needed in Dublin, Ohio.
All of these posts point out that librarians cannot afford to be passive consumers in the current information landscape. I've written about this in a post before, and what I said then is especially true during periods of rapid change and economic crisis. All too often in these times, people simply want to leave it to others and trust that they’ll do what is best to represent their interests. Not to be pessimistic here, but that’s naïve, and it doesn’t relieve you of the responsibility to make known what is wanted and needed to serve the users and organization with which your library is associated, and to do so in the most efficient and effective manner possible.


Thursday, November 11, 2010

Conclusions from the Charleston Conference "Face-Off"

I returned from the Charleston Conference this week. It was a conference crammed with content and crammed into a facility that is bursting at the seams to accommodate it. The overall success of this conference is quite visible and impressive. Hats off to the organizers!

The face-off between the gladiators of publisher/aggregator/discovery vendors that I discussed in my last post was also a session crammed with people. Hundreds of attendees filled two linked ballrooms, which strongly indicates the importance that discovery solutions now carry for libraries. That was the good news.

Unfortunately, the bad news is that the face-off ended up being disappointing for a variety of reasons, but mostly because it was really two vendors talking to each other and at the audience, with no useful discussion resulting. In addition, no questions were permitted from the audience. So even that avenue of discovery was slammed shut.

As a result, important points, such as how librarians should protect themselves by having some checks and balances in their supply chain, did not get discussed.

Nor was there a useful discussion about why some vendors see an important role for federated search within discovery tools. The long tail of information was left flapping in the unseasonably cold Charleston weather.

Finally, any mention of the ICOLC Statement, Principle 3, was avoided like the plague, and librarians lost out as a result.

The questions posed by the moderator at the end of the face-off were interesting to me in that they seemed to carry a heavy heritage from librarians’ experience with OPACs and the end-users of a different time. While I suppose that is to be expected, it concerns me that the questions seemed disconnected from the way we actually see students/faculty/staff wanting to perform (and actually performing) searches, not to mention their use of discovery tools to facilitate learning.

So the session, as I predicted in my last post, resulted in a lot of silliness. Certainly it missed the opportunity to serve as a focused discussion among companies delivering discovery products and the librarians curious about and/or using discovery tools.

Hopefully, this face-off can serve as a foundation for future discussions. There were other sessions at the Charleston Conference that did provide important and useful insights that could be added to the list of topics for a future discussion about discovery solutions. These included:

1) Procurement processes. I’m truly worried that discovery tool procurement processes, as I heard them described at the Charleston Conference, will, if unchanged, result in libraries buying wonderful tools for librarians, but unfortunately not for end-users. If that happens, libraries will ultimately set the stage to be disintermediated out of, or further diminished in, the information discovery process.

2) Learning models. Another session was given by Stephen Abram, who pointed out that if you look at learning models (for instance, Fleming’s VAK/VARK model), you see that most users learn by one of the following methods:

a) Visual
b) Auditory
c) Text (reading/writing)
d) Tactile

He pointed out that in the general population only about 20% of people are text-based learners, but that if you do the same survey of library staff, some 80% are text-based learners. It is not altogether surprising that librarians love text. However, the “disconnect” mentioned above can come into play as a result. If the librarians buying discovery solutions specify, buy and implement a solution that reflects the way they learn, without considering the learning styles of their end-users, the solution they select is likely not going to meet the needs of those end-users. When that happens, we all have a major problem.

If a future discussion/face-off is held, I think it would be interesting to invite some end-users (students/faculty/staff for academic libraries) and public library users to participate. We could have them describe how they use the existing discovery tools and what they find good and bad about them. The session could then have vendors show (or talk about) ways the issues could be addressed through existing product functionality, through existing but currently unused technologies, or even through new ones. Such a session should also accommodate more vendors on the stage. All of this would make for a fairer, more balanced and more informative session for attendees.

If that happens, I believe all involved could learn what might need to be done, or changed, in their understanding of end-user needs, in library procurement processes, and in the questions asked of each other, so that vendor capabilities can be appropriately utilized to meet the needs of both librarians and end-users.

I think that would prove a far more useful “face-off” than what we saw in Charleston last week.

Saturday, October 30, 2010

"Gladiators" to perform sleight-of-hand at Charleston Conference

If you’ve read the most recent Library Journal Infotech report, you’ll know that there is going to be a face-off at the upcoming Charleston Conference between two "gladiators", i.e., publishers/aggregators ProQuest/Serials Solutions with their Summon offering and EBSCO with their EBSCO Discovery Service (EDS) offering. This contest resulted from a series of letters to the Charleston Advisor. These two publisher/aggregators claim the intent is to show librarians which of their solutions is best.

Since we also wrote a letter to the Charleston Advisor that was published/mentioned, and since we are a global leader in discovery solutions, you may wonder why we weren't invited. The answer is likely that we’re a “library discovery and scholarly aggregate index solution provider,” not a “publisher/aggregator”; there are also other reasons you'll understand after reading this post.

So, while this whole tactic will be mildly entertaining, it is exceedingly silly.

Does anyone really doubt that this exercise will result in anything other than these two publishers showing off features perceived as unique to their own offering that the other can’t match? Isn’t this always true when any product/service offering is compared to another? So we’ll end up with two publishers scoring points and smiling smugly while watching the other writhe in the agony of defeat for a few moments – or at least until it’s their turn to score a few points and smile smugly in return. What will this really prove?

Does anyone really think this will change how you are going to select your content and discovery products? Do these publisher/aggregators really think that by doing this you’ll suddenly decide that, instead of thorough deliberation, thoughtful analysis and asking dozens of questions of each supplier, you’ll just buy on the basis of a face-off? Apparently so.

I suspect the more likely case will be as our Corporate VP, Nancy Dushkin, said in her letter to the Charleston Advisor:
“As history has shown, multiple solutions arise to address real needs, and each solution has its own characteristics. In terms of discovery solutions, I'm confident that each library, after conducting a thorough evaluation of facts and features, will be able to determine which of the available products best fits the library's mission, needs, policies, and environment.”
But, maybe in a political season of exceeding silliness in North America, we all just need the parody and a light moment and then we can all go back to more serious work.

However, this exercise will prove one thing. It will show that these publishers/content aggregators are attempting to pull off some of the fastest sleight-of-hand possible in order to hide the much larger, and far more important, issues about what really matters when selecting a discovery solution today. These two particular firms are, as Library Journal says, in the “greatest competition” because they are, first and foremost, publishers/aggregators fighting head-to-head for their first line of business, which is content and content aggregation services. The discovery solution is secondary to them, as their actions show in numerous ways. You can discover this for yourself by asking these questions:

1. Have you offered your discovery layer for free as part of a packaged content/discovery solution deal?

No matter what the answer is publicly, as competitors we can tell you that we’ve seen instances where customers have been offered the discovery solution for free from a publisher/content-aggregator-owned firm as part of a larger content subscription package.

Now, we all know there is no such thing as a “free” lunch. If this happens to you, ask why they are willing to do this. Where are they making their money on that product? It may not be in the first year; it might be in later years, as you see price hikes on your content and content aggregation services. A good defensive tactic would be to ask for a line-item price quote with prices applied to all the content, as well as to the “no-cost” discovery layer, so that in subsequent years, when you ask for the same, you can see where those price hikes are being applied, as sketched below. Want more details? See the section below on “content neutrality”.
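
To make that tactic concrete, here is a minimal sketch, in Python, of the year-over-year comparison you would do with two line-item quotes. All line items and prices here are invented for illustration; the point is simply that once every line, including the "no-cost" discovery layer, carries an explicit price, any hikes hidden in the content lines become visible.

```python
# Hypothetical line-item quotes; every item and price is invented.
quote_year_1 = {
    "Journal package A": 50000,
    "Aggregated database B": 30000,
    "Discovery layer": 0,  # offered at "no cost"
}
quote_year_2 = {
    "Journal package A": 56500,
    "Aggregated database B": 33900,
    "Discovery layer": 0,  # still "free" -- but check the content lines
}

# Compare the two quotes line by line to see where the increases land.
for item, old_price in quote_year_1.items():
    new_price = quote_year_2[item]
    pct = (new_price - old_price) / old_price * 100 if old_price else 0.0
    print(f"{item}: ${old_price:,} -> ${new_price:,} ({pct:+.1f}%)")
```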

2. Which proprietary software vendor produced a discovery solution first and why?

Our Corporate VP, Nancy Dushkin, in her letter to the Charleston Advisor pointed out that our discovery solution, Primo, has been in libraries since 2007 and is installed in hundreds of libraries around the world. We started by building the discovery product first, and getting the functionality right to deal with a broad variety of content types and sources. That was a deliberate choice. Data, on the other hand, is becoming more and more of a commodity and is becoming available from numerous sources. This is also why we’re seeing these “gladiators” fight so publicly and viciously. They want to continue to force people into their discovery interfaces where they can make sure their content and content aggregation is highlighted and used first and foremost because, as we noted above, this is their primary business and where they make their money.

3. What is “content-neutrality”, who offers it and why is it important?

Content neutrality means that the library, not the publisher/content aggregator or vendor, controls, at a minimum, the following:
  • What content is included in their discovery tool.
  • The relevance ranking of that content. Can you force the content that is unique to your library to the top of the result sets? Can you control the relevancy ranking of all the content offered through your discovery layer?
  • The facets offered by the system. Facets are a way for users to quickly sort through a lot of content, but in order for you to meet the specific needs of your users, they must be under your control. If they’re not, the library needs to carefully analyze which facets are offered, and why, before proceeding.
Don’t just accept simple answers to these questions. Have these “gladiators” show you exactly how you would perform each and every one of these steps; a sketch of what that control might look like follows below.
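
Here is a minimal sketch, in Python, of what library-controlled relevance boosting might look like. The field names, boost weight and records are all invented for illustration; a real discovery product would expose this kind of control through its own configuration interface, which is exactly what you should ask the vendor to demonstrate.

```python
# Hypothetical re-ranking under library control; all names are invented.
LOCAL_BOOST = 2.0  # library-chosen multiplier for locally unique content

def rank(results, boost=LOCAL_BOOST):
    """Re-rank results, boosting records the library flags as local."""
    def score(record):
        base = record["relevance"]
        return base * boost if record["is_local"] else base
    return sorted(results, key=score, reverse=True)

results = [
    {"title": "Vendor-aggregated article", "relevance": 0.9, "is_local": False},
    {"title": "Local digitized collection item", "relevance": 0.6, "is_local": True},
]

# With the boost applied, the library's unique item rises to the top.
for record in rank(results):
    print(record["title"])
```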

Also remember that when you sign with a publisher/aggregator for their discovery tool and use their aggregate index, that index has their competitors’ data loaded into it. That means they can now see the usage not only of their own content, but also of their competitors’. They can see what titles are used and how often. It's certainly possible, if you don't control the relevancy ranking as described above, that they might force their content to rank higher than their competitors’ and thereby encourage greater use. I may be naïve, but no one is ever going to convince me that this information isn’t going to be mighty handy to have when it comes time for these publisher/aggregators to define the content packages for next year, decide what titles are in them, and position and price them against their competitors.

More importantly, the ones I see on the losing end of this specific scenario are libraries. After all, if they gave you that discovery interface, or charged you very little for it, then somewhere, someplace, somebody has to pay for it, and you’ll have handed them the means to do that at your library’s expense.

Remember that when you buy a discovery product (Primo) and an aggregate index (Primo Central Index) from a vendor like Ex Libris, you get an assurance that the supplier is content-neutral: we have no vested interest in the content except to make sure that you, representing the library using the content, are getting the best possible solution at the best possible cost to meet the specific needs of your users, without outside influence or interference.

4. Do you comply with the International Coalition of Library Consortia (ICOLC) Statement, Principle 3?

This statement was added in June 2010 and says:
“We encourage publishers to allow their content to be made available through numerous vendors appropriate for their subject matter. We also encourage online providers and aggregators to allow their metadata to be included in emerging discovery layer services on a non-exclusive basis.”
It doesn’t say make “some” of your metadata available, and it very specifically says don’t make it available on an exclusive basis (i.e., only through the discovery tool offered by another division of the same company). This statement very clearly recognizes that libraries are functioning in a challenging economic situation and that they want their vendors to offer their metadata to all discovery products on an open basis. Be sure to ask the “gladiators” what their plans are for complying with that statement. While you're at it, ask them why they think it is acceptable to ask their competitors to load metadata into their own aggregate indexes, but not acceptable to be asked to provide their own metadata for indexing in other aggregate indexes.

Now you might ask: "Didn't you just say above that if they have their competitor's metadata in their index, I'm at a disadvantage? Am I not facilitating this by asking for compliance with this statement?" Yes, I did, and yes, you are. But remember what I also said above: buy your discovery solution and associated aggregate index from a content-neutral party, like Ex Libris. By so doing you'll keep a clear division that avoids any conflict of interest and provides you with the statistics and tools to negotiate content and aggregation deals on a fair and open basis.

So if you’re going to the Charleston Conference and you intend to be at the face-off, enjoy the show. If you get the chance, ask some of the questions above. While the show may be entertaining, the questions above deserve real answers when you’re selecting a discovery solution and aggregate index for your library. At Ex Libris, Primo and the Primo Central Index provide you with answers we believe these two “gladiators” do not want you to hear. Now you understand why there will only be two "gladiators" on the stage.

Thursday, October 21, 2010

“The Value of Academic Libraries” and the incomplete chapter

It’s fall here in Chicago. The darkness comes earlier in the day and the temperatures cool, creating an environment where deeper contemplation is more easily achieved as all of the normal distractions of spring and summer fade away.

So it was when I sat down with the newly issued report, “The Value of Academic Libraries: A Comprehensive Research Review and Report.” It’s a meaty tome (the full version runs 172 pages; a 16-page Executive Summary is also available) that requires focus, contemplation and endurance to work your way through. However, you’ll be rewarded for that effort, because the work is packed with great information and ideas.

As noted in the Executive Summary:

“This report is intended to describe the current state of the research on community college, college and university library value and suggest focus areas for future research”
It does this extremely well, resulting in a set of twenty-two recommendations for librarians who want to establish the value of library services on their campuses. Virtually every recommendation requires substantive, coordinated and collaborative efforts across organizations and numerous departments, and with technology experts from both within and outside of the academic organizations. Yet after all of that, the report leaves the reader unfulfilled. I turned to the “What to do next” section hoping to find a plan for making all these ideas come to fruition. I suspected we were in trouble when I read the following (emphasis is mine):
“If each library identifies some part of the Research Agenda in this document, collects data, and communicates it through publication or presentation, the profession will develop a body of evidence that demonstrates library impact in convincing ways.”

“Major professional associations can play a crucial organizing role in the effort to demonstrate library value.”
This is followed by what I consider to be some low level suggestions on how all this might happen.

Surely we can muster a more focused effort than this to achieve these goals? What hits you hard as you read this is that ALA/ACRL showed good vision in commissioning this report but, unfortunately, is not showing the leadership to realize its goals. It is relying instead on wishful thinking and the loose coordination of others.

Now, like many of you, I’ve been a long-time member of ALA (27 years), and I know that it is a huge organization representing many interests, people and organizations. Furthermore, I want to be clear: the incomplete chapter is not the fault of the author or the many people who worked on this report. It is, as I said before, a wonderful piece of work. It’s just that the final chapter, which should describe how we’re going to drive this plan through to completion, is unfinished. I understand that it is because of the way ALA is organized and governed that this chapter was written in this manner. The ALA website admits to the problem we’re all facing here (again, emphasis is mine):

“The American Library Association carries out its work through a complex structure of committees and subcommittees, divisions, round tables, and several other types of groups.”
I personally consider that a bit of an understatement. The focus of ALA is so large and so diffuse that I frequently feel that by serving so many competing interests, it really serves none of them as well as it should. As it stands, this report serves as yet another example.

Of course, it’s easy to criticize; the harder task is always to answer the question: what should be done? Here’s what I consider missing in the chapter: Take the 22 recommendations and do the following:

  1. First, prioritize the list (P1 through P3). We all know we can’t do 22 things at once, no matter how many organizations we enlist. Some items build on other steps, some will require more resources than might be initially redirected, etc. Many can and should be worked on concurrently, hence my suggested use of three priorities instead of a pure ranked listing. ACRL staff should work with the membership to do this prioritization.

  2. Next, have the Board or Council look at these recommendations and endorse them along with a directive across the organization that these represent goals and tasks to be achieved ASAP. ALA Headquarters staff should assign the elements that need to be accomplished to specific divisions, committees and round tables of ALA, along with completion dates. In other words, this effort needs to have the endorsement of leadership by the highest levels of ALA.

  3. Then the divisions, committees and round tables should take on their assigned tasks, along with the job of enlisting the membership, as appropriate, in support of this plan. The ALA and division conferences could be used for progress reports and next-step coordination meetings between all the arms working on the goals and tasks to be achieved.

  4. The plan needs to be treated as a full project, with project implementation and management oversight, to follow up and ensure the pieces are resourced, completed on schedule, and coordinated in time to move towards the full implementation of the ideas in a coherent, cohesive manner. This should be assigned to ALA Headquarters staff to achieve.
Maybe I’m being naïve or too simplistic. However, what’s described in this report is a set of strategically important steps. They will serve to enhance and document the value of academic (and really all) libraries. In my mind, that makes it important enough for ALA to marshal resources across the organization, from all the divisions, chapters and round tables, in order to push this agenda forward in a new and bold way.

Such an initiative would demonstrate leadership not only in achieving the goals of this report, but in demonstrating that through directed, focused efforts, ALA can achieve new and very important things for librarianship.

Wednesday, October 20, 2010

Stretching the horizon of technology based solutions in libraries

I just finished reading “The Horizon Report: 2010 Museum Edition,” and I would encourage you to do the same, with a view to considering how you can stretch the horizon described in this report to include libraries.

Museums, like libraries, are faced with the challenge of increasing their visibility and strengthening their relevance and value in an era when their targeted users would rather look at Facebook, preferably on their mobile phones. The report challenges museums to embrace a wide range of new technologies.

Libraries need to do the same thing and this report can serve as a wonderful, mind-stretching read for a librarian. You could substitute the word “libraries” for “museums” in many, many places and the validity of the statements wouldn’t change at all. Perhaps that underscores the increasing trend towards the blending of the needs these two types of organizations are trying to address. Read the list of technologies that are suggested for museums to watch, which includes:
  1. Mobile and social media
  2. Augmented reality and location-based services
  3. Gesture-based computing
  4. Semantic Web
Some of those technologies clearly overlap with libraries and some might seem a bit far out to deserve serious consideration. However, before arriving at that conclusion, read the report. Some of my favorite observations were:
  1. Mobile devices.
    “According to a recent Gartner report, mobiles will be the most common way for people to access the Internet by 2013” (page 9).
    (We’re preparing our customers for this with Primo Version 3.0, which includes a mobile discovery interface. Our Open Platform website, EL Commons, includes other open source mobile interfaces.)

  2. The real value of social media.
    “In the way that they encourage a community around the media they host. Users can talk about, evaluate, critique and augment the content that is there – and do so in tremendous numbers.” “Social media allow users to collaborate and engage one another.” (page 13).
    Think about what would happen if we did that with library content, particularly in academic settings. This is really the kind of activity I was talking about in my recent post about the collaborative we really need. This report sees a similar opportunity.

  3. Augmented Reality
    “The concept of blending (augmenting) data – information, rich media and even live action – with what we see in the real world…” (page 16)
    The report highlights examples such as www.layar.com that:
    “features content layers that may include ratings, reviews, advertising, or other such information to assist consumers on location in shopping or dining areas” (page 16).
    It doesn’t take much imagination to see what we could do with this in academic libraries. Bringing information to life takes on a whole new meaning with this kind of technology.

  4. Gesture-based computing. If you think gesture-based computing is out there in the future a bit, this report points out
    “The screens of the iPhone… react to pressure, motion and the number of fingers touching the devices. The iPhone additionally can react to manipulation of the device itself – shaking, rotating, tilting, or moving the device in space.” (page 24).
    The Wii is another example of this technology and the list goes on. In other words, this technology is here today. Applying it to museums (and libraries) would give
    “the direct and satisfying personal connection of an individual with the object” (page 25).
    We’re seeing some initial uses of this with assorted mobile library applications, but this report helps you to imagine new and creative ways it might be further embraced and deployed.

  5. Semantic Web. I suspect most of us conceptually buy into the Semantic Web already. A key line here:
    “Semantic searching is currently used primarily to streamline scientific inquiries, allowing researchers to find relevant information without having to deal with apparently similar, but irrelevant information.” (page 28).
    No doubt we can benefit from this technology in libraries.
The report cites numerous real-life examples for each of the technologies and gives further readings. You’ll find ideas that can be readily stretched to include libraries.

I always encourage stepping back from challenging times and situations to take a different view of them. Reading about how museums are thinking of applying technology to their operations makes for an intriguing and invigorating exercise for us as librarians. The situations bear enough similarities to let us see how what is being proposed there (for museums) could be applied here (in libraries). It’s an exercise that we as librarians should do more often in difficult times. Most importantly, it can give you a horizon that makes you feel excited about moving towards it.

Tuesday, October 12, 2010

Ex Libris – The book

Long weekends are a wonderful opportunity to think and reflect. Celebrating an anniversary, my wife and I took a trip to the coast, and while walking through the hotel store, a book on the shelf caught my eye: “Ex Libris: Confessions of a Common Reader,” by Anne Fadiman. It is no surprise that this title would attract me, and I promptly bought it. Published in 1998, I’m not sure how I missed this book over all the years I’ve worked for Ex Libris (the two are unrelated except for the title). During the course of the weekend I read it.

I suspect many of you, like me, would find yourselves nodding your heads as you read this author’s description of her interactions with, appreciation of, and immersion in the world of books. Her description of her home, with several walls lined with bookshelves filled with books, certainly struck home (accompanied by a deep sigh from my wife). So did the “Odd Shelf” in those bookcases, lined with books that don’t really fit with the volumes on the other shelves. The chapter entitled “Never do that to a book” is a fun read; there are volumes I’ve read many times that still look like new, and others I’ve marked up, made notes in the margins of, and referred to over and over, but I never, ever leave a book face down to mark my place. Overall, I found this work a delightful read for those who love physical books: holding them, collecting them and working with them.

As I finished the final pages with a satisfaction similar to what I feel when taking the final sips of a fine port, I paused to think and reflect on our rapidly evolving world of e-books and e-resources. Will we lose something irreplaceable as we move towards that world? Or will it simply be an evolution in the vehicle that conveys feelings, ideas, thoughts and knowledge?

Looking at my Kindle and iPad, I see books containing the underlines of others who’ve read them (and how many people underlined each passage); I see the integration of multimedia and the Web into the text. So, I know the answer to my question: it will be an evolution. While I’ll be among the last to advocate we slow down our embrace of technology, it is important to remember that reading is a larger experience than just consuming the words. The hunt for the right book, and the embrace and collection of books that say something about who we are and what we care about, are also important. The very experience of reading involves a number of mechanisms and rewards we must be sure to carry forth into the future of e-resources in some evolutionary way.

If you need an enjoyable break from our hectic world of technology and trying to harness information, I encourage you to pick this book up and take the journey. It serves as a useful reminder to us technologists of all the things books represent.

Monday, September 20, 2010

The cooperative we need: Open & Collaborative Library Content

Overview

Today our technology tool sets include Web services, cloud computing, SaaS, grid computing, mobile devices, etc.—all of which have made possible a whole new way of thinking about library systems/services. As a result, there are several efforts underway to build the next generation of library automation software, including the open source initiative OLE, Ex Libris's URM and OCLC’s Web Scale Management Services. Each of these efforts, in outlining plans for a next generation of systems/services, utilizes at least some portion of these technologies.


All of these next-generation systems would benefit immensely from access to a massive store of expanded, networked, linked and shared library data. While OCLC has a starting point in place, its ability to serve this expanded role across the profession and multiple products has been overshadowed by a number of issues, including a very questionable record usage policy (earlier withdrawn, revised, resubmitted and now approved), moves regarding regional affiliates and now a lawsuit announced by SkyRiver and Innovative that further raises questions, concerns, distrust and anger across the market in many directions. Why are we facing this situation?


It appears to me that the interests of the OCLC we know today are not in total alignment with the needs and interests of its overall actual membership. Perhaps they are in alignment with the interests of the Board, Council and other governing and administrative arms, but the feeling I get in talks with librarians is that they are not in alignment with what librarians want. As I talk to librarians across the country today, I hear that what they want is an organization, a cooperative, focused on developing and providing open and collaborative library content and services that are widely accessible by all, so that they (the librarians) can focus on re-establishing and/or maintaining the value of libraries in our society.


OCLC originally started out on this path by building a shared bibliographic cataloging utility—i.e., the creation and sharing of bibliographic records—a resource that has long been at the core of many automated library services. OCLC did this exceedingly well and in a timely manner, as there was massive interest in and demand for this type of service, and OCLC could provide it while offering a cost advantage to help libraries further stretch their dollars. A win-win situation for nearly everyone involved. This was in part because the OCLC service filled a critical need for many libraries and, at the time, was not in direct competition with other major for-profit businesses; it was done at an affordable cost and brought the power of collaboration to bear in addressing a critical library need.


Today, OCLC continues to run the shared bibliographic utility but has, in my opinion, lost its direction. OCLC has bought numerous for-profit businesses and has continued to operate them as for-profit, tax-paying organizations. In trying to use these assets to grow, OCLC is leveraging the assets of the non-profit cooperative to achieve the commercial goals of the for-profit businesses it owns. It makes for a conflict-ridden mission statement and a critically important player in the marketplace that is trusted by too few, including its members/customers and its competitors/partners.


No for-profit vendor, whether they admit it publicly or not (although clearly, SkyRiver/III has gone the public route), likes it when a competitor appears on the market and has the benefit of tax-free status. In fact, most businesses will ask: Why should our tax dollars be used to help create a competitor for our company? Especially one that will not pay taxes on the business they take away from us? In the end, all of these business initiatives, and now resulting lawsuit, strongly work against OCLC being able to do what it does best—building collaboration, content, and related services as a non-profit entity to serve the larger profession.


We all need that cooperative: one that builds a national information processing infrastructure and amalgamates all library-related data, supporting all types of library services as delivered through libraries, enhanced by the value librarianship brings to the total offering.


How It Might Look


The necessary questions to ask are: What would this organization look like? How would it operate? How does an existing cooperative move in that direction?


Let’s start with some baseline assumptions. First, I think we can all agree there is no shortage of information today. What there is a shortage of are ways to deal with that information: to determine what is the best, most authoritative, authenticated and appropriate information, and to place that information into a meaningful context to answer an information seeker’s needs. If we can agree that this constitutes a substantial part of what we as librarians do, then I have some suggestions to make.


I’ve just read an interesting book that I found extremely applicable in thinking about how this might be done and what it might look like. The book, “The Power of Pull” by John Hagel III, John Seely Brown and Lang Davison, examines how communities of users, thinkers and doers are reshaping the way major progress is made: small moves made by many participants working together loosely, but with a common, sharply focused goal. There are lessons here to be applied to the world of libraries, and particularly to cooperatives.


These are lessons that we’ve seen used in places like Wikipedia and even in open source software initiatives. I frequently lament that librarians miss one of the really valuable points of open source software, because I see them applying the concepts only at a micro level (actually producing open source code, which is admittedly important) rather than also stepping back to look at what is happening at a macro level. Viewed at the macro level, what happens in the production of open source software, as relevant to this discussion, boils down to this:
  1. A community of people loosely band together, contributing their time and expertise, in order to help create a product (in this case software, or in the case of Wikipedia, an encyclopedia). In many cases it is a very large community (an important point, since scalability matters for large-scale projects such as the information processing we in libraries are dealing with).
  2. In so doing they agree to be governed and rewarded by shared guidelines and incentives (a large piece of which is ‘community good’).
  3. They also agree (for the most part) to have their work widely reviewed, modified, improved and/or accepted or rejected for inclusion in the final product.
  4. They share the resulting product openly and freely for the benefit of all.
Now, let’s take these basic principles, back up and apply them to the information landscape at large.

Information today, as we all know, continues to grow at exponential rates. This creates a problem: finding the answers users need in all of this information, and doing it in a scalable way. That is why librarianship will continue to be important well into the future. So, if we take the concepts above and apply them here, I see the following possible solution.
  • We need to look at what Wikipedia has done in employing a large community of users to create, filter, and refine a massive database of content. We can argue all day long about whether the precision of their content is as good as what we get in other forms, but the reality is the basic concept works and the resulting resource is massively utilized. So, let’s apply those principles to having libraries/librarians employ their users, and others, to create, filter and contribute to the information banks that we call libraries.
  • Let’s create tools, like browser plug-ins, that allow information seekers/users to instantly rate information sources as they use them, recording the domain of knowledge each source applies to and the score the reader gives it (see the sketch after this list). Use of such plug-ins would require personal registration with the library collaborative that runs this, so that users themselves can build personal authority ratings and collect rewards associated with contributing (which might just be personal satisfaction, recognition or status).
  • Rankings, once entered, would be automatically processed and compiled by subject domain, source, content ranking and rater ID (which can be linked back to the rater’s own authority rating). These rankings would then be moved into a database for use by others, via software that would provide results on any such item retrieved from the web.
  • Such rankings would be reviewed by domain experts whose certification as experts derives either from their sustained rankings within the collaborative or from academic or other credentials that establish their expertise. Once the rankings are reviewed, the authority to move an item to a certified rating would be applied by librarians. Eventually, some of this authority would be delegated down through contributor trees to help make the system more scalable.
  • Over time, a massive new information resource would result and at the top of the pile of ranking/reviewing/organizing and providing discovery and delivery would sit libraries and librarianship.
  • We could then begin to tailor our discovery-to-delivery tools (like Primo) to utilize these certifications as part of the relevancy rankings applied to information as well as offer a whole host of other related new and useful services.
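
To make the mechanics of the plug-in and ranking pipeline a little more concrete, here is a minimal sketch in Python. Every name, field and value is invented for illustration; a real collaborative would need identity management, spam control and scale far beyond anything shown here.

```python
# Hypothetical rating pipeline for the collaborative described above.
from collections import defaultdict
from statistics import mean

ratings = []  # ratings submitted by the hypothetical browser plug-in

def submit_rating(rater_id, source_url, domain, score):
    """Record one registered reader's rating of an information source."""
    ratings.append(
        {"rater": rater_id, "source": source_url, "domain": domain, "score": score}
    )

def compile_rankings():
    """Aggregate ratings by (source, domain) for later expert review."""
    grouped = defaultdict(list)
    for r in ratings:
        grouped[(r["source"], r["domain"])].append(r["score"])
    return {key: mean(scores) for key, scores in grouped.items()}

submit_rating("reader-001", "http://example.org/paper", "chemistry", 4)
submit_rating("reader-002", "http://example.org/paper", "chemistry", 5)
print(compile_rankings())  # {('http://example.org/paper', 'chemistry'): 4.5}
```
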
Could such a cooperative draw a community of users large enough to do what libraries need done, i.e., to process all the information we’re seeing made available on the Web? Would this work to process, sort, rank and float the very best of that information to the top for inclusion in library discovery/delivery tools? These are certainly fair and challenging questions, but processing the store of human knowledge and contributing to its long-term sustainability would certainly appeal to many people, provided the right recognition was associated with their participation and contribution of time and labor. However, it is equally clear that this is only one component of the total solution needed by librarianship and end-users. Algorithmic computation, data mining and statistical analysis tools must accompany the final solution. These are things I expect the vendor community will supply.

The Benefits


The benefits of this new form of library collaborative would be substantial for the profession and for human knowledge. The role of libraries and librarianship would be strengthened. If OCLC were to move in this direction, it would be returning to its non-profit, collaborative roots, and as a result the antagonism with libraries and the for-profit business sector would be lessened. If the resulting amalgamated data were provided under truly open APIs and other interfaces, libraries would see their collective content truly leveraged and utilized. They would also be able to get more functionality from their software vendors, who could focus all their resources on end-user needs rather than building (and frequently duplicating) shared data systems and other infrastructure components.


The business model


It is clear that over the years OCLC has struggled to find a new business model that will sustain the organization over the long run. The reality appears to be that the majority of current OCLC income still comes from bibliographic-based services, which should be an indicator that the market best supports OCLC when it stays within its non-profit, collaborative, shared content/services model. Furthermore, this is a model that works in conjunction with, and not against, the business community.


If OCLC were to focus on developing the collaborative and shared library content, the most likely and sustainable business model for all concerned would be a subscription-based annual fee that provides access to the content/services and/or the APIs that serve that data. Libraries would pay a lower fee, because they’re also non-profit organizations, but they would continue to pay OCLC the revenues needed to run this massive collaborative. Vendors, being for-profit, would pay a higher fee, but would be able to freely and openly subscribe to an extensive à la carte menu of content and services on which to build their products, without worry that the collaborative’s for-profit commercial interests would interfere with the necessary trusted relationship down the road.

Summary


I believe that librarianship today truly needs and wants a collaborative effort that would produce these kinds of data/services. OCLC could do this by returning to its roots as a true non-profit collaborative building shared infrastructure, content and services for libraries. It should be very open, providing open interfaces that support both open source and proprietary extensions, so that the totality of solutions and services available to the profession would deliver substantial added value to information. This strategy would benefit libraries, the businesses that work with and serve them, and ultimately the profession of librarianship.


Don’t get me wrong: like most librarians I speak with these days, I’ve always thought we needed an OCLC. But I too think we need one that stands for Open & Collaborative Library Content.