Friday, November 27, 2009

Tools in library and academic toolboxes: community, collaboration and openness

Last month, I had the pleasure of giving a talk at the Association for Consortium Leadership conference, held in Chicago. I was asked to talk about the future of academic consortia from my point of view as a commercial provider of software for consortium environments. (Note: in many ways my talk was also applicable to library consortia, and I’ve modified this post to reflect that.)

In the talk, I focused on the need for education (and libraries) to rethink both the substance and the form of what they offer today, and how they might deliver those offerings differently. In making the case for that change, I started by showing the audience some websites where they could see large and open web-based communities at work: sharing workloads, thinking and collaborating as a group in order to provide content, answers to problems, even educational materials and courses. These included:

1. Dell’s IdeaStorm. Here is a site that allows users to share ideas and collaborate with one another in suggesting how Dell could develop new and improved products or services. Users can vote ideas up or down. Fascinatingly, the stats show that customers have contributed 13,052 ideas, promoted them 701,106 times and posted 87,985 comments. Dell has actually implemented 389 of the ideas, which may not seem like a lot, but those were, in essence, 389 ideas that came from outside the company and at no cost to them.

2. Mechanical Turk by Amazon. A site where you have access to a global, on-demand, 24 x 7 workforce. Organizations can get thousands of tasks completed, sometimes in minutes, and they pay only when they're satisfied with the results.

3. Virtual Tourist. This is a worldwide travel community where travelers and locals share travel advice and experiences. Every tip is linked to a member’s profile, so you can learn more about each member—their age, hometown, travel interests, where they’ve been, hobbies, even what they look like—and then read about more of their travel experiences.

4. Prosper. This is one of the largest people-to-people lending marketplaces, with over 860,000 members and over $180,000,000 in loans funded on Prosper. Borrowers can list loan requests between $1,000 and $25,000 on Prosper. They set the maximum rate they are willing to pay an investor for the loan, and tell their story. People and institutional investors register on Prosper as lenders, then set their minimum interest rates, and bid in increments of $25 to $25,000 on loan listings they select. Prosper handles all ongoing loan administration tasks, including loan repayment and collections, on behalf of the matched borrowers and investors.

5. Askville. A site where people get answers from everyday people, with 55,000 active Guides throughout the country and thousands available online at any time. It now has more than a million users and has answered more than 27 million queries since it launched its revolutionary mobile answers service in January. A key difference here is that the people who answer are rated as Guides.

6. Academic Earth. This site gives users around the world the ability to easily find, interact with, and learn from full video courses and lectures from the world’s leading scholars.

7. World Digital Library. WDL partners are mainly libraries, archives, or other institutions that have collections of cultural content that they contribute to the WDL. The principal objectives of the WDL are to:
a. Promote international and intercultural understanding;
b. Expand the volume and variety of cultural content on the Internet;
c. Provide resources for educators, scholars, and general audiences;
d. Build capacity in partner institutions to narrow the digital divide within and between countries.

What is really important about each of those sites is what they’re about at the core: collaboration, community and openness. They show us what is possible when you assemble a large community via the web and provide both a common need and a means with which to address it. These sites show what is ours to tap and use in both academia and in libraries. If we do so, I believe we have a powerful tool for transforming education, educational consortia, libraries and library consortia.

Think about how we could apply what those sites show to our environments. To start, we should make education and knowledge on a subject far more granular in structure. In today’s environment, we frequently encounter new concepts, ideas or terminology, and we need short, quick background information from a recognized and authoritative source.

Unfortunately today’s educational offerings are all too often still in the format of courses, requiring a commitment of many hours per week and many weeks per semester in order to be utilized. Libraries still largely offer content in books, magazines, newspapers (and now increasingly e-content), all of which may require a sizable commitment of time to search, obtain and digest.

However, in today’s environment, what people have is fifteen to thirty minutes to learn a new concept before they walk into a session to discuss it. If courses and content could be broken down into tight, fifteen to thirty minute segments that build on each other, then I think we’d see far greater utilization, not only in academic environments, but also in the workplace and at home.

How do we get this to happen? We reach out through the web to tap communities in order to build new educational content that will be used by people to teach people. Libraries doing this would use these communities of users to develop new subject areas and offer far more current, far more accessible information that, through the use of communities to vet the information, would still offer the assurance of authority, authentication and appropriateness. Ultimately, both libraries and academics should become the ultimate certifiers, rankers and valuators of the content created in these environments.

The resulting educational offering would not necessarily be offered in classrooms or on campuses, but deployed across the web, in small, granular components that, when linked together, offer a greater whole than is offered today using traditional settings and methodologies.

There are challenges we have to recognize and deal with in this paradigm. For instance, we all know the staggering statistics on how fast the human record is growing, and some (IDC being one) predict that by 2010, nearly 70% of the digital universe will be created by individuals (the community). The result is that traditional methods of both education and librarianship can’t scale to handle that growth. Community is one tool that will enable us to harness that human record, distill and analyze it, and derive from it new understanding and knowledge.

Yet in order to do that, we have to rethink how we run our operations and offer services to our members, non-members, users and non-users. Academic consortia and certainly library consortia are already heavily in the business of collaboration. But we need to step back and look at the new opportunities that exist in the area of collaboration and how to harness those new opportunities in order to do our work and to determine how we can extend our offerings into new environments.

Another thing to think about is that web-based communities are not geographically limited. They come together across all geographical, political and human borders. Academic and library consortia also no longer need to be geographically limited. Virtual consortia are not only possible, they're desirable. They can be based on shared interests, a shared purpose or simply shared users, and the communities you can extend to include and reach are virtually unlimited. If we want to find new opportunities, we have to look in new places. We need to make sure our educational courses and our library services can be plugged into Flickr, Facebook or MySpace, to name just a few examples. (Check out Primo to see how you can do this!) For example, imagine viewing a picture of a snow leopard on the web and wanting to learn more about it. We should make educational content available right there, where the user is, in a simple search box requiring only a click to obtain.

The bottom line for me – we need to understand that knowledge is built one brick at a time, and today, those bricks are getting more numerous but need to get smaller and smaller. In order for academia and libraries to harness that change, we must employ collaboration, community and openness to leverage the opportunities in front of us. Then we’ll be able to put courses, content and libraries into online communities so that libraries, universities and colleges become the face and the “brand” for knowledge.

Monday, November 16, 2009

Another facet of the “library bypass strategies”

I really appreciate when readers of this blog contact me about various postings. Especially when we have the chance to not only discuss posts via comments, but to also verbally connect and share thoughts about libraries. I recently had one of those conversations with Jean Costello, a library patron in Massachusetts and a reader of this blog. During our conversation, she pointed me towards a recent blog post of her own, entitled “Library bypass strategies,” that echoed a different facet of the same thought I’ve been having a lot lately (and have briefly mentioned in another post of my own).

Jean’s concern was how libraries might get bypassed in the context of e-book supply strategies. I totally agree with the comments she makes in her post. What I see echoing her concern is in the area of e-content and discovery products being offered to the library marketplace. Increasingly, these are offered as pre-packaged solutions with a discovery interface and with databases from a select number of organizations. But there are some real differences in the offerings, and librarians need to be careful how they select and implement this technology.

Libraries must retain control over the selection of the content that is offered to their end users, or else they have abandoned a core value-add of librarianship, i.e., the selection of the most authoritative, appropriate and authenticated information (in this case, electronic resources) needed to answer a user’s information need. If, as a librarian, you cede this control to a third-party organization, you’ve set up your library to be bypassed and ultimately replaced in the information value chain.

Some may ask: how is this any different from the book approval plans most libraries have participated in for years, where vendors put together recommendations of titles for a library to purchase? Those plans, developed over roughly the last 20 years, are built around the Library of Congress classification scheme and subject headings, along with a variety of other criteria by which titles are selected. With this model, librarians had the ultimate say over acceptance or rejection of books supplied in response to the plans. However, e-content selected by your vendor, particularly if that vendor is owned by a content aggregator, comes with an entire host of complications. You have to ask yourself if you really want to trust a vendor of content to be objective when it comes to managing or delivering content from their competitors. Will they take advantage of usage statistics when determining packages or pricing? Will they tweak ranking algorithms to ensure that their own content gets ranked higher or more prominently?

I think it is important, as a librarian, to understand these realities. If you want to provide your users with an assurance that what they’re searching has passed your selection criteria and that it is the best information to meet their needs, then you’ve just created some important criteria to be met when you select the discovery tool and e-content your library is going to use. These include:

Content neutrality. Using a discovery tool that is tied to (or owned by) any one content provider obviously increases the probability that content from that provider’s major competitors will not be available. Furthermore, content from companies owned by the parent company will likely be more heavily favored in the ranking/relevancy algorithms. This will likely be disguised as “since we own these databases, we can provide richer access.” I’d be cautious if I heard those phrases. The discovery tool you select and use should allow you to provide equal access to all content that is relevant to the end user, regardless of who supplies it. One way to do this is to make sure the discovery tool comes from a source that has no vested interest in the content itself. Another way is to ensure you have the ability, indeed control, over the final ranking/relevancy algorithms.
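To make that last point concrete, here is a minimal sketch of what library-controlled ranking could look like. The field names, weights and sample records are hypothetical illustrations of the idea, not any vendor’s actual algorithm:

```python
# Sketch of a relevancy ranker whose weights are set by the library,
# not hidden in a vendor's black box. All names here are hypothetical.

def score(record, query_terms, weights):
    """Score a record by counting query-term hits per field,
    multiplied by the library-configured weight for that field."""
    total = 0.0
    for field, weight in weights.items():
        text = record.get(field, "").lower()
        hits = sum(text.count(term.lower()) for term in query_terms)
        total += weight * hits
    return total

def rank(records, query_terms, weights):
    """Return records ordered by descending relevancy score."""
    return sorted(records, key=lambda r: score(r, query_terms, weights),
                  reverse=True)

records = [
    {"title": "Snow leopards", "abstract": "Habitat of the snow leopard.",
     "source": "Aggregator A"},
    {"title": "Big cats", "abstract": "Snow leopard, lion, tiger.",
     "source": "Aggregator B"},
]

# The library, not the vendor, decides that title matches count most:
library_weights = {"title": 3.0, "abstract": 1.0}
ranked = rank(records, ["snow", "leopard"], library_weights)
```

The point is simply that the weights live in the library’s own configuration: a library that changes `library_weights` changes the ranking, and can audit that no content source is silently favored.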

Deep-search and/or metasearch support. If you believe that all the content your users will ever need or want to search will be available through a discovery interface searching harvested metadata alone, you should know that this is probably unrealistic.

There are two ways to avoid getting caught in this trap. One option is the ability to add in metasearching capabilities. Yes, we all know the limitations of metasearching. But if you believe, as I do, that your job is to connect your users with the most appropriate, authoritative and authenticated information needed to answer their questions – not just the easiest information you can make available that might answer them – then you have to provide a way to search information that can’t be harvested, which, depending on the topic, can be important information.

The other way is deep-search support, i.e., the ability to connect to an API that searches remote databases directly. This technology typically offers faster and better searching, as well as much better ranking and retrieval capabilities.
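As one concrete example of this kind of connection: in the library world, remote databases are often exposed through standard search protocols such as SRU (Search/Retrieve via URL), where a CQL query is passed as URL parameters. The sketch below only builds such a request URL; the endpoint is a hypothetical placeholder, and real endpoints and supported indexes vary by provider:

```python
from urllib.parse import urlencode

# Hypothetical SRU endpoint -- real endpoints vary by provider.
SRU_BASE = "https://example.org/sru"

def build_sru_url(cql_query, start=1, maximum=10):
    """Build an SRU 1.1 searchRetrieve URL for a CQL query.

    A discovery layer would fetch this URL and merge the XML
    response into its result list alongside harvested metadata.
    """
    params = {
        "version": "1.1",
        "operation": "searchRetrieve",
        "query": cql_query,
        "startRecord": start,
        "maximumRecords": maximum,
    }
    return SRU_BASE + "?" + urlencode(params)

url = build_sru_url('dc.title = "snow leopard"')
```

Because the query is issued live against the remote database, content that can’t be harvested is still reachable through the same discovery interface.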

Either way, these are capabilities that many discovery interfaces don’t support. But they should, indeed they must, in order to support the value-add of librarianship on top of information.

The ability to load and search databases unique to your users’ information needs. If the options above don’t cover the content you need to provide access to, then you should have the option to add a database of e-content locally to your harvested metadata. This might be a local digital repository or other e-content, but you should insist on this capability to ensure needed access through the discovery interface.

Any librarian who understands his or her users’ unique information needs will insist, just as librarians have for years in building other collections, that we must have a selection policy that gives us control over the e-content users will be able to utilize.

Watching librarians in action today, I see some ignoring these issues. They are selecting discovery tools that provide quick, pre-defined, pre-packaged content with a discovery interface that doesn’t really meet the deeper needs of their users or their profession. Once they've done this, they’ve reduced their library’s value-add in the information delivery chain, lost another valuable reason for maintaining their library’s relevance within the institution, and handed it to those who believe good enough is, well, good enough.

To avoid this situation, be careful in your choice of discovery tools and e-content. Be sure they support the value-add of librarianship. That way you, and your library, won’t become another facet of what Jean calls “the library bypass strategies.”

Thursday, November 12, 2009

"Who knows what the library means anymore?"

I was at the Educause conference last week in Denver and found it very interesting. While the conference attracts many CIOs, a number of librarians also attend and, as a result, some interesting debates arise. One concerned the future of academic libraries, and you'll find the presentation reported on here. It's an interesting conversation and I encourage you to read it and the comments that follow. For me, the most telling statement remains what Suzanne Thorin, dean of libraries at Syracuse University, closed with: "Who knows what the library means anymore?"

It's a telling question. I mentioned it in my previous post and I'll say it again: it's the one question I truly wish the profession would answer, so that everyone could align behind and support the answer.

Sunday, November 8, 2009

The OSSification of viewpoints.

I will admit that the recent stir over the release of SirsiDynix’s paper about open source software for libraries by Stephen Abram bothered me. Not because I thought either side in the debates (the responses were on Twitter and in various blogs) had presented their cases well. In fact, my concern was that we are EVEN still having these debates (as I mentioned when interviewed by Library Journal on the subject), particularly at a time when we have so many far more important issues to focus on in the library profession.

What we saw unfurl in this debate was what I’ve titled “OSSified” viewpoints. Each side rehashes viewpoints about open source that have been expressed hundreds, if not thousands of times. One side shouts “FUD” and the other side shouts “anti-proprietary” and neither side, in my opinion, is adding anything new or valuable to the discussion. Yes, both sides have many valid points buried under their boxing glove approaches. No, neither side is presenting their view in a compelling, well-reasoned, logical fashion.

When I was in college (yes, a long time ago), I was on the debate team for the university. On weekends, we’d travel across the country to engage in debates on a wide range of topics. Each topic required massive preparation. Research, statistics, quotes, all kinds of supporting information, and not just for one side of the debate, but for BOTH sides of the debate. You never knew until you arrived which side you would be taking – but you had to be prepared to debate either. The end result was that you learned a great deal about both the advantages and disadvantages of a wide range of topics. You also learned, as we often do in life, that the world is not black and white, and that depending on what is important to you as an individual, an organization or a profession, the right answer is frequently somewhere in between.

So it is with open source and proprietary software. Both have advantages, both have disadvantages. Which of those apply to your situation depends on who you are and what organization you’re representing. But here is reality as far as I’m concerned – open source software represents a need and an ability for organizations and professions to adapt services to end user needs, and to do so very quickly. Particularly so in environments where the pace of change is accelerating every day. However, it also carries with it the need to have in-house, or pay for external, technical staff to adapt the software to those needs. Proprietary software can and usually does offer complete, very functionally rich systems that address wide market needs at reasonable cost and with limited technical staff on the part of the organization using it. An added bonus is when the proprietary software is open platform (as Ex Libris products are), so that the proprietary package supports open source extensions that can be made to enhance services for users. This is a combination that brings some of the best of both approaches together.

However, let me point out the obvious and yet frequently forgotten key point in what I’ve just said. Because of the rate of change libraries are dealing with today, they need to adapt and implement quickly. Software development technologies, as with all technologies, have limitations. Open source and proprietary do represent two different approaches to development technologies. But what matters at the end of the day is to provide a total SOLUTION that works in meeting the needs of the users. Until such time as users can sit and completely configure software applications to do exactly and exclusively what they want to do – there will be room for both open source and proprietary software in this profession. Each has advantages. Each has disadvantages. Each offers different approaches to solving problems and providing a solution. If we become zealots for either point of view we are not serving our profession or users well. Becoming zealots means we will fight against the use of what the other offers and we will waste massive amounts of time reinventing things that already exist and work well (a point shared by Cliff Lynch in this debate). Libraries can’t afford this redundancy, particularly in the economic climate we’re currently in.

The profession of librarianship has more important things to do at the moment. Let’s devote the energy being wasted in this debate to defining and agreeing on what librarianship will look like in five years. What will librarianship mean to end users, and what will our value-add to information be in that time frame? Answering that would greatly help solve many of the funding problems we’re all fighting at the moment. Finally, let’s map out the plans and technology that are going to help us fulfill that vision. I’m sure if we do that, there will be plenty of new places for both OSS and proprietary software to make major contributions, and in ways that will build on and support each other. That’s what we’re trying to do at Ex Libris, and I would encourage wider adoption of this approach across the profession rather than continuing boxing matches with old and outdated arguments that do nothing to advance the need to provide solutions to users.

We simply have more important things to do.

Saturday, November 7, 2009

E-book technology is accelerating. Libraries’ understanding and use of this technology needs to keep pace

While I’ve been traveling much of the last month (I apologize for the lack of postings), much has been happening that is worthy of note in the area of e-book technologies.

Barnes and Noble introduced their new Nook e-book reader, a device bearing many similarities to the Amazon Kindle but with some notable advances. These include a portion of the screen that displays color, the ability to lend books you’ve bought to friends, the ability to read entire books for free in a Barnes and Noble store using a wireless connection and, last but certainly not least, support for MP3, PDF, ePub and PDB files. These are all significant advances, and the device, which is to be available late this month (November), will further accelerate the adoption of e-books by readers.

Of equal importance is another announcement this week by Marvell and E Ink of a new agreement that “raises the technology bar. This is a total platform solution—including Wi-Fi, Bluetooth, 3G modem, and power management. The Armada e-reader has the potential to deliver the first mass market product accessible and affordable to billions of consumers around the world.” Speculation is that instead of the current $250 price for e-book readers, this new technology will bring prices down into the $100 range.

The pace of technology advancement in the area of e-books is accelerating rapidly and, as a result, it is going to change how people read, study, research and discover. These are all places where librarianship should and can be playing a leading role. With that in mind, I’d encourage you to read the article in the October issue of American Libraries magazine entitled “E-readers in Action”. The article, which highlights Penn State's efforts to use e-books, raises many valid issues concerning the use of e-book technology in libraries. But after reading it, I would ask you to think about what could have been done differently in this case to make it a more satisfactory experience both for the readers and the library. I personally see quite a few things I would have done differently. Before I put forth my ideas, I invite yours. Comment on this post and I’ll follow up with another post summarizing your ideas and sharing my own on what libraries need to be doing to successfully use this new technology.