Sunday, December 13, 2009

The library e-book reader – a “Personal Information/Knowledge Environment”

Having been an avid user of an e-book reader (Kindle 2) for some time now, I’ve come to appreciate the benefits of this technology and, at the same time, to recognize the things I wish it did (but doesn’t yet). So here’s a description of some features I’d like to see.

From a hardware point of view, I’d of course like to see a larger, color touch screen so I can enjoy pictures as authors intended them. At the same time, I don’t want to give up the great battery life I currently enjoy. I’m sure all of these hardware features are already in the pipeline.

From the software point of view, a nice starting point would be a fully functional web browser to exploit the wireless capabilities built into many of these units. This feature would allow all kinds of extended library services to be brought onto these platforms.

Next, I’d like to see these platforms opened up, much as Ex Libris has done with its products through its open platform, or else given an app store approach such as Apple has taken with the iPhone. Either way, this would allow communities of users to develop and offer applications that extend the e-reader’s capabilities, including features like those described below.
  1. Users should be able to use the touch screen to easily highlight key text and/or phrases, then use a simple voice recognition capability (see the latest Google search app for the iPhone for an example) to quickly add tags to the highlighted text. Copying this information to a central website, as the Amazon Kindle does, is a wonderful way to let highlighted text become the launching pad for larger community-based conversations or for research and papers based on those clippings. Alternatively, the software should add the necessary bibliography/citation information to the highlighted text, so that if I copy the text into a larger paper or research document, the citation is complete and readily at hand (see the sketch after this list).
  2. The ability to tag should also apply to the metadata records for the information I’ve stored on the “Personal Information/Knowledge Environment” (PIKE). Using those tags, as well as standard search terms, the discovery interface should let me easily search and refine my results, both on the PIKE and by connecting to remote libraries and the web. Once I’ve found a resource at the library, the e-reader should allow me to borrow, and easily renew, e-content that I obtain from the library.
  3. The PIKE should also be able to build a standardized information profile based on what I’ve read and other input. Then, based on an agreed-upon standard, the PIKE could monitor specified collections, services, RSS feeds, blogs, etc., and pull together alert notices for users to review and possibly use in selecting new materials to read in advancing their knowledge. This would be a true library-like service on the e-reader platform.
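
To make items 1 and 2 above concrete, here is a minimal sketch, in Python, of what a PIKE clipping might look like under the hood. Everything in it is hypothetical (none of these names come from a real e-reader SDK); it simply shows a highlighted passage carrying its tags and a complete citation along with it, ready to be pasted into a paper:

```python
# A minimal sketch of a PIKE clipping; all names here are hypothetical and
# do not reflect any real e-reader SDK.
from dataclasses import dataclass, field


@dataclass
class Citation:
    author: str
    title: str
    year: int
    location: str  # page or e-book location of the highlight

    def formatted(self) -> str:
        return f"{self.author}. {self.title} ({self.year}), {self.location}."


@dataclass
class Clipping:
    text: str                      # the highlighted passage
    citation: Citation             # attached automatically at highlight time
    tags: list[str] = field(default_factory=list)  # added by voice or keyboard

    def for_paper(self) -> str:
        # Copying a clipping into a paper carries its citation along with it.
        return f'"{self.text}" [{self.citation.formatted()}]'


clip = Clipping(
    text="Knowledge is built one brick at a time.",
    citation=Citation("Doe, J.", "An Example Title", 2009, "loc. 1042"),
    tags=["knowledge", "granularity"],
)
print(clip.for_paper())
```
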
While I’m dreaming: as noted in many blog posts, articles and other writings, the key to widespread adoption of e-readers is to get the price under $100 USD per unit.

The bottom line is that I want to see e-readers become prolific and build in librarian-like services that will help people use these tools to readily and steadily advance their knowledge in a world of ever-increasing complexity and vastly expanding information.

Friday, November 27, 2009

Tools in library and academic toolboxes: community, collaboration and openness

Last month, I had the pleasure of giving a talk at the Association for Consortium Leadership conference, held in Chicago. I was asked to talk about the future of academic consortia from my point of view as a commercial provider of software into consortia environments. (Note: in many ways my talk was also applicable to library consortia, and I’ve modified this post to reflect that.)

In the talk, I focused on the need for education (and libraries) to rethink both the substance and form of what they offer today and how they might, alternatively, deliver their offerings to people. In making the case for that change, I started by showing the audience some websites where they could see large, open, web-based communities at work, sharing workloads, thinking and collaborating as a group to provide content, answers to problems, even educational materials and courses. These included:

1. Dell’s IdeaStorm. Here is a site that allows users to share ideas and collaborate with one another in suggesting how Dell could develop new and improved products or services. Users can vote ideas up or down. Fascinatingly, the stats show that customers have contributed 13,052 ideas, promoted them 701,106 times, and posted 87,985 comments. Dell has actually implemented 389 of the ideas, which may not seem like a lot, but those were, in essence, 389 ideas that came from outside the company and at no cost to it.

2. Mechanical Turk by Amazon. A site where you have access to a global, on-demand, 24x7 workforce. Organizations can get thousands of tasks completed, sometimes in minutes, and pay only when they’re satisfied with the results.

3. Virtual Tourist. This is a worldwide travel community where travelers and locals share travel advice and experiences. Every tip is linked to a member’s profile so you can learn more about each member—their age, hometown, travel interests, where they’ve been, hobbies, even what they look like—and then read about more of their travel experiences.

4. Prosper. This is one of the largest people-to-people lending marketplaces, with over 860,000 members and over $180,000,000 in loans funded on Prosper. Borrowers can list loan requests between $1,000 and $25,000 on Prosper. They set the maximum rate they are willing to pay an investor for the loan and tell their story. Individuals and institutional investors register on Prosper as lenders, then set their minimum interest rates and bid, in increments of $25 to $25,000, on loan listings they select. Prosper handles all ongoing loan administration tasks, including loan repayment and collections, on behalf of the matched borrowers and investors.

5. Askville. A site where people get answers from everyday people. It has 55,000 active Guides throughout the country, with thousands available online at any time. It now has more than a million users and has answered more than 27 million queries since it launched its revolutionary mobile answers service in January. A key difference here is that the people who answer are rated as Guides.

6. Academic Earth. This site gives users around the world the ability to easily find, interact with, and learn from full video courses and lectures by the world’s leading scholars.

7. World Digital Library. WDL partners are mainly libraries, archives, or other institutions that have collections of cultural content that they contribute to the WDL. The principal objectives of the WDL are to:
a. Promote international and intercultural understanding;
b. Expand the volume and variety of cultural content on the Internet;
c. Provide resources for educators, scholars, and general audiences;
d. Build capacity in partner institutions to narrow the digital divide within and between countries.

What is really important about each of these sites is what they’re about at the core: collaboration, community and openness. They show us what is possible when you assemble a large community via the web and provide both a common need and a means with which to address it. These sites show what is ours to tap and use in both academia and in libraries. If we do so, I believe we have a powerful tool to use in transforming education, educational consortia, libraries and library consortia.

Think about how we could apply what those sites show to our environments. To start, we should make education and knowledge on a subject far more granular in structure. In today’s environment, we frequently encounter new concepts, ideas or terminology, and we need short, quick background information from a recognized and authoritative source.

Unfortunately, today’s educational offerings are all too often still in the format of courses, requiring a commitment of many hours per week over many weeks per semester. Libraries still largely offer content in books, magazines and newspapers (and now, increasingly, e-content), all of which may require a sizable commitment of time to search, obtain and digest.

However, in today’s environment, what people have is fifteen to thirty minutes to learn a new concept before they walk into a session to discuss it. If courses and content could be broken down into tight, fifteen to thirty minute segments that build on each other, then I think we’d see far greater utilization, not only in academic environments, but also in the workplace and at home.

How do we get this to happen? We reach out through the web to tap communities in order to build new educational content that people use to teach people. Libraries doing this would use these communities of users to develop new subject areas and offer far more current, far more accessible information that, because communities vet the information, would still offer the assurance of authority, authentication and appropriateness. Ultimately, both libraries and academics should become the certifiers, rankers and valuators of the content created in these environments.

The resulting educational offering would not necessarily be offered in classrooms or on campuses, but deployed across the web, in small, granular components that, when linked together, offer a greater whole than is offered today using traditional settings and methodologies.

There are challenges we have to recognize and deal with in this paradigm. For instance, we all know the staggering statistics on how fast the human record is growing, and some (IDC being one) predict that by 2010 nearly 70% of the digital universe will be created by individuals (the community). The result is that traditional methods of both education and librarianship can’t scale to handle that growth. Community is one tool that will enable us to harness that human record, distill and analyze it, and derive from it new understanding and knowledge.

Yet in order to do that, we have to rethink how we run our operations and offer services to our members, non-members, users and non-users. Academic consortia and certainly library consortia are already heavily in the business of collaboration. But we need to step back and look at the new opportunities that exist in the area of collaboration and how to harness those new opportunities in order to do our work and to determine how we can extend our offerings into new environments.

Another thing to think about is that web-based communities are not geographically limited. They come together across all geographical, political and human borders. Academic and library consortia also no longer need to be geographically limited. Virtual consortia are not only possible, they're desirable. They’re based on shared interests, purposes or simply users, and the communities you can extend to include and reach are virtually unlimited. If we want to find new opportunities, we have to look in new places. We need to make sure our educational courses and our library services can be plugged into Flickr, Facebook or MySpace, to name just a few examples. (Check out Primo to see how you can do this!) For example, imagine viewing a picture of a snow leopard on the web and wanting to learn more about it. We should provide educational content right there, where the user is, in a simple search box requiring only a click.

The bottom line for me: we need to understand that knowledge is built one brick at a time, and today those bricks are becoming more numerous but need to become smaller and smaller. In order for academia and libraries to harness that change, we must employ collaboration, community and openness to leverage the opportunities in front of us. Then we’ll be able to put courses, content and libraries into online communities so that libraries, universities and colleges become the face and the “brand” for knowledge.

Monday, November 16, 2009

Another facet of the “library bypass strategies”

I really appreciate it when readers of this blog contact me about various postings, especially when we have the chance not only to discuss posts via comments, but also to verbally connect and share thoughts about libraries. I recently had one of those conversations with Jean Costello, a library patron in Massachusetts and a reader of this blog. During our conversation, she pointed me towards a recent blog post of her own, entitled “Library bypass strategies”, that echoed a different facet of a thought I’ve been having a lot lately (and have briefly mentioned in another post of my own).

Jean’s concern was how libraries might get bypassed in the context of e-book supply strategies. I totally agree with the comments she makes in her post. What I see echoing her concern is in the area of the e-content and discovery products being offered to the library marketplace. Increasingly, these are offered as pre-packaged solutions: a discovery interface bundled with databases from a select number of organizations. But there are some real differences among the offerings, and librarians need to be careful how they select and implement this technology.

Libraries must retain control over the selection of the content that is offered to their end users, or else they have abandoned a core value-add of librarianship, i.e., the selection of the most authoritative, appropriate and authenticated information (in this case, electronic resources) needed to answer a user’s information need. If, as a librarian, you cede this control to a third-party organization, you’ve set up your library to be bypassed and ultimately replaced in the information value chain.

Some may ask: how is this any different from the book approval plans most libraries have participated in for years, where vendors put together recommendations of titles for a library to purchase? Those plans, developed over roughly the last 20 years, are built around the Library of Congress classification scheme and subject headings, plus a variety of other criteria by which titles are selected. With this model, librarians had the ultimate say over acceptance or rejection of books supplied in response to the plans. However, e-content selected by your vendor, particularly if that vendor is owned by a content aggregator, comes with a whole host of complications. You have to ask yourself if you really want to trust a vendor of content to be objective when it comes to managing or delivering content from its competitors. Will it take advantage of usage statistics when determining packages or pricing? Will it tweak ranking algorithms to ensure that its own content gets ranked higher or more prominently?

I think it is important, as a librarian, to understand these realities. If you want to provide your users with an assurance that what they’re searching has passed your selection criteria and that it is the best information to meet their needs, then you’ve just created some important criteria to be met when you select the discovery tool and e-content your library is going to use. These include:


1. Content-neutrality. Using a discovery tool that is tied to (or owned by) any one content provider obviously increases the probability that content from that provider’s major competitors will not be available. Furthermore, content from companies owned by the parent company will likely be favored in the ranking/relevancy algorithms. This will likely be disguised as “since we own these databases, we can provide richer access”. I’d be cautious if I heard those phrases. The discovery tool you select and use should allow you to provide equal access to all content that is relevant to the end user, not just the content of the supplier who is providing the tool. One way to do this is to make sure the discovery tool comes from a source that has no vested interest in the content itself. Another way is to ensure you have the ability, indeed control, over the final ranking/relevancy algorithms (a toy illustration of that control follows).
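
To make that last point concrete, here is a minimal sketch, in Python, of what library-controlled ranking could look like. This is not any vendor’s actual algorithm; the names and numbers are invented. The point is simply that the per-source boost weights live in configuration the library itself edits, so no supplier’s material can be silently favored:

```python
# A toy illustration of library-controlled ranking; all names are invented.
# Boost weights live in library-editable configuration, not vendor code.
SOURCE_BOOSTS = {
    "vendor_a_db": 1.0,       # every source starts at a neutral 1.0
    "vendor_b_db": 1.0,
    "local_repository": 1.0,  # the library can raise or lower any of these
}


def final_score(record: dict) -> float:
    """Combine the engine's text-relevance score with the library's boost."""
    return record["relevance"] * SOURCE_BOOSTS.get(record["source"], 1.0)


results = [
    {"title": "Article A", "source": "vendor_a_db", "relevance": 0.82},
    {"title": "Article B", "source": "local_repository", "relevance": 0.80},
]
for r in sorted(results, key=final_score, reverse=True):
    print(r["title"], round(final_score(r), 2))
```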

2. Deep-search and/or metasearch support. If you believe that all the content your users will ever need or want to search will be available solely through a discovery interface searching harvested metadata, then you need to know this is probably unrealistic.

There are two ways to avoid getting caught in this trap. One option is to add metasearching capabilities. Yes, we all know the limitations of metasearching. But if you believe, as I do, that your job is to connect your users with the most appropriate, authoritative and authenticated information needed to answer their questions, not just the easiest information you can make available that might answer them, then you have to provide a way to search information that can’t be harvested, which, depending on the topic, can be important information.

The other way to do this is deep-searching, i.e., connecting to an API that searches remote databases directly. This technology typically offers faster and better searching, as well as much better ranking and retrieval capabilities (a sketch of this kind of remote searching appears below).
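
Either way, the mechanics are similar: remote sources are queried over an API and the results merged, rather than assuming everything lives in one harvested index. Here is a rough sketch in Python, with hypothetical endpoints standing in for real targets (a production connector would speak Z39.50, SRU or a provider’s own API):

```python
# A sketch of broadcast searching of remote sources; endpoints are hypothetical.
# Real connectors would speak Z39.50, SRU or a provider's own API.
import concurrent.futures
import json
import urllib.parse
import urllib.request

TARGETS = [  # placeholder search endpoints, assumed to return JSON records
    "https://example.org/library-a/search",
    "https://example.org/library-b/search",
]


def search_one(base_url: str, query: str) -> list:
    """Query a single remote target; a dead target just contributes nothing."""
    url = f"{base_url}?q={urllib.parse.quote(query)}"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return json.load(resp).get("records", [])
    except (OSError, ValueError):
        return []  # network failure or malformed response


def metasearch(query: str) -> list:
    # Query every target in parallel and merge whatever comes back.
    with concurrent.futures.ThreadPoolExecutor() as pool:
        result_sets = list(pool.map(lambda t: search_one(t, query), TARGETS))
    return [record for records in result_sets for record in records]


print(len(metasearch("snow leopard")))
```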

Either way, these are capabilities that many discovery interfaces don’t support. But they should, indeed they must, in order to support the value-add of librarianship on top of information.

3. The ability to load and search databases unique to your users’ information needs. If the above options don’t cover the content you need to provide access to, then you should have the option to add a database of e-content locally to your harvested metadata. This might be a local digital repository or other e-content, but you should insist on this capability to ensure the needed access through the discovery interface.


Any librarian who understands his or her users’ unique information needs will insist, just as librarians have for years in building other collections, on a selection policy that gives the library control over the e-content users will be able to utilize.

Watching librarians in action today, I see some ignoring these issues. They are selecting discovery tools that provide quick, pre-defined, pre-packaged content with a discovery interface that doesn’t really meet the deeper needs of their users or their profession. Once they've done this, they’ve reduced their library’s value-add in the information delivery chain, lost another valuable reason for maintaining their library’s relevance within the institution, and handed it to those who believe good enough is, well, good enough.

To avoid this situation, be careful in your choice of discovery tools and e-content. Be sure they support the value-add of librarianship. That way you, and your library, won’t become another facet of what Jean calls “the library bypass strategies”.

Thursday, November 12, 2009

"Who knows what the library means anymore?"

I was at the Educause conference last week in Denver and found it very interesting. While the conference attracts many CIOs, a number of librarians also attend and, as a result, some interesting debates arise. One concerned the future of academic libraries, and you'll find the presentation reported on here. It's an interesting conversation, and I encourage you to read it and the comments that follow. For me, the most telling statement remains the one Suzanne Thorin, dean of libraries at Syracuse University, closed with: "Who knows what the library means anymore?"

It's a telling question. I mentioned it in my previous post and I'll say again: it's the one question I truly wish the profession would answer, so that everyone could align behind and support the answer.


Sunday, November 8, 2009

The OSSification of viewpoints.

I will admit that the recent stir over the release of SirsiDynix’s paper by Stephen Abram about open source software for libraries bothered me. Not because I thought either side in the debates (the responses were on Twitter and in various blogs) had presented its case well. In fact, my concern was that we are EVEN still having these debates (as I mentioned when interviewed by Library Journal on the subject), particularly at a time when we have so many more important issues to focus on in the library profession.

What we saw unfurl in this debate was what I’ve titled “OSSified” viewpoints. Each side rehashes viewpoints about open source that have been expressed hundreds, if not thousands, of times. One side shouts “FUD” and the other side shouts “anti-proprietary”, and neither side, in my opinion, is adding anything new or valuable to the discussion. Yes, both sides have many valid points buried under their boxing-glove approaches. No, neither side is presenting its view in a compelling, well-reasoned, logical fashion.

When I was in college (yes, a long time ago), I was on the university’s debate team. On weekends, we’d travel across the country to engage in debates on a wide range of topics. Each topic required massive preparation: research, statistics, quotes, all kinds of supporting information, and not just for one side of the debate, but for BOTH sides. You never knew until you arrived which side you would be taking, so you had to be prepared to argue either. The end result was that you learned a great deal about both the advantages and the disadvantages of a wide range of topics. You also learned, as we often do in life, that the world is not black and white, and that depending on what is important to you as an individual, an organization or a profession, the right answer is frequently something in between.

So it is with open source and proprietary software. Both have advantages; both have disadvantages. Which of those apply to your situation depends on who you are and what organization you’re representing. But here is reality as far as I’m concerned: open source software represents a need, and an ability, for organizations and professions to adapt services to end-user needs very quickly, particularly in environments where the pace of change accelerates every day. However, it also carries with it the need to have internal technical staff, or to pay for external staff, to adapt the software to those needs. Proprietary software can, and usually does, offer complete, very functionally rich systems that address wide market needs at reasonable cost and with limited technical staff on the part of the organization using it. An added bonus comes when the proprietary software is an open platform (as Ex Libris products are), so that the proprietary package supports open source extensions that can be made to enhance services for users. This is a combination that brings some of the best of both approaches together.

However, let me point out the obvious and yet frequently forgotten key point in what I’ve just said. Because of the rate of change libraries are dealing with today, they need to adapt and implement quickly. Software development technologies, as with all technologies, have limitations. Open source and proprietary do represent two different approaches to development. But what matters at the end of the day is to provide a total SOLUTION that works in meeting the needs of the users. Until such time as users can sit down and completely configure software applications to do exactly and exclusively what they want, there will be room for both open source and proprietary software in this profession. Each has advantages. Each has disadvantages. Each offers a different approach to solving problems and providing a solution. If we become zealots for either point of view, we are not serving our profession or our users well. Becoming zealots means we will fight against the use of what the other offers, and we will waste massive amounts of time reinventing things that already exist and work well (a point shared by Cliff Lynch in this debate). Libraries can’t afford this redundancy, particularly in the economic climate we’re currently in.

The profession of librarianship has more important things to do at the moment. Let’s devote the energy being wasted in this debate to defining and agreeing on what librarianship will look like in five years. What will librarianship mean to end users, and what will our value-add to information be in that time frame? Answering this would greatly help solve many of the funding problems we’re all fighting at the moment. Finally, let’s map out the plans and technology that are going to help us fulfill that vision. I’m sure if we do that, there will be plenty of new places for both OSS and proprietary software to make major contributions, and in ways that will build on and support each other. That’s what we’re trying to do at Ex Libris, and I would encourage a wider adoption of this approach across the profession, rather than continuing boxing matches using old and outdated arguments that do nothing to advance the need to provide solutions to users.

We simply have more important things to do.

Saturday, November 7, 2009

E-book technology is accelerating. Libraries’ understanding and use of this technology needs to keep pace

While I’ve been traveling much of the last month (I apologize for the lack of postings), much has been happening that is worthy of note in the area of e-book technologies.

Barnes and Noble introduced their new Nook e-book reader, a device bearing many similarities to the Amazon Kindle but with some notable advances. These include a portion of the screen that displays color, the ability to lend books you’ve bought to friends, the ability to read entire books for free in a Barnes and Noble store using a wireless connection and, last but certainly not least, support for MP3, PDF, ePub and PDB files. These are all significant advances, and the device, which is to be available late this month (November), will further accelerate the adoption of e-books by readers.

Of equal importance is another announcement this week by Marvell and E Ink of a new agreement that “raises the technology bar. This is a total platform solution—including Wi-Fi, Bluetooth, 3G modem, and power management. The Armada e-reader has the potential to deliver the first mass market product accessible and affordable to billions of consumers around the world." Speculation is that instead of the current $250 price for e-book readers, this new technology will bring the prices down into the $100 range.

The pace of technology advancement in the area of e-books is accelerating rapidly and, as a result, it is going to change people’s reading habits, methodologies, research and discovery. These are all places where librarianship can and should play a leading role. With that statement in mind, I’d encourage you to read the article in the October issue of American Libraries magazine entitled “E-readers in Action”. The article, which highlights Penn State’s efforts to use e-books, raises many valid issues concerning the use of e-book technology in libraries. But after reading it, I would ask you to think about what could have been done differently in this case to make it a more satisfactory experience, both for the readers and for the library. I personally see quite a few things I would have done differently. Before I put forth my ideas, I invite yours. Comment on this post and I’ll follow up with another post summarizing your ideas and sharing mine on what libraries need to be doing to successfully use this new technology.


Thursday, October 15, 2009

The scalability of the open source business model in libraries...

I enjoy being part of the Library Gang 2.0 podcast series, and this month we covered a topic that I felt particularly well suited to discuss: the scalability of the open source business model for libraries. Having worked over the years with some of the major open source software packages that libraries use (Index Data’s suite of products, FedoraCommons, DSpace and now Ex Libris’ Open Platform), as well as having founded and run a company that supported OSS for libraries, I truly have some real-world experience to share.

Those experiences have taught me that commercial open source business models (as opposed to pure community-based ones) that succeed for library-specific applications are nascent efforts. When they do succeed, they often share many similarities with proprietary software business models. On the other hand, many proprietary software business models are increasingly moving towards new collaborative models (for example, the Ex Libris Open Platform). All of which supports my long-time contention that the future business models for both open source and proprietary software are not the ones we know today. As in any evolutionary process, the best features of both will blend together to form a new model for the future.

The latest Library Gang 2.0 podcast examines some of the issues currently being wrestled with and also talks about the future of the ILS. Listen in; I think you’ll find it interesting.

Thursday, October 8, 2009

The difference between Google and libraries

There is a new article in Wired that is a powerful reminder of what distinguishes libraries from Google. The author says:
Those "with long memories remember the last time Google assembled a giant library that promised to rescue orphaned content for future generations. And the tattered remnants of that online archive are a cautionary tale in what happens when Google simply loses interest".
It is a useful read, not so much for librarians, who already understand the differences, but as something librarians can point to for those who question their existence or funding.

The author puts it best at the end:
It's a reminder that Google is an advertising company — not a modern-day Library of Alexandria.
Libraries have value and important roles to play in our society. Reminders like this are useful.

Tuesday, September 22, 2009

An interesting environmental scan on academic digital libraries

The “New Review of Academic Librarianship” has just published (and it’s available for free download for a limited time) a really excellent article entitled “Academic Digital Libraries of the Future: An Environment Scan”.

The author, Derek Law of the Centre for Digital Library Research at the University of Strathclyde in Glasgow, Scotland, laments that “it is no longer clear what business libraries are in and why they should now interface with other parts of the organizations they serve”, and he further says that librarians “have lacked the space to step back and observe it from a higher level.”

The good news is that if they take the time to read this article, he’ll provoke their thinking and help clarify what must be dealt with in the larger environment. He cites numerous reports to show what many feel, even if they couldn’t quantify it: users’ perceptions of libraries are radically different from what librarians perceive them to be. He taps the CIBER report to show that researchers “expect research to be easy”, that they “do not seek help from librarians”, and that they only want to “download materials at their desks.” One of the most disturbing disconnects is when he points out that “when librarians assist users, satisfaction levels drop”, because it is perceived that librarians aren’t trying simply to help users find what they need, but are trying to show them “what is good for them”.

The article deals with the growth in digital content but very accurately points out that librarians have yet to add value to the digital content they do accumulate. Yet all is not lost, because he identifies the library’s trusted brand as something libraries and librarianship need to build upon. He puts forth two really interesting tables in the article: the first shows how many social networking tools can replace traditional library activities, and the second suggests how libraries can use those very social networking tools to the benefit of library users (the article is worth downloading for these two tables alone!).

Finally, the article suggests key things that librarians need to do to “be at the core of any redefinition of the Library’s role”. I won’t spoil the read for you, but let me say that you should grab this article and read it. It’s time well spent.

Sunday, September 20, 2009

“I must follow the people. Am I not their leader?” -- Benjamin Disraeli

After my last post about e-books and e-readers, I saw a flurry of other articles and posts about the future of books, print, digital content and libraries. It’ll be no surprise to my readers that the points of view ranged from one end of the spectrum to the other.

In particular, Cushing Academy made quite a stir when they went completely digital. James Tracy, headmaster of Cushing stated in a Boston Globe article:
"When I look at books, I see an outdated technology, like scrolls before books. This isn’t ‘Fahrenheit 451’ [the 1953 Ray Bradbury novel in which books are banned]. We’re not discouraging students from reading. We see this as a natural way to shape emerging trends and optimize technology."
This, of course, drew all kinds of spirited responses, including some from Keith Fiels, executive director of the ALA. I’m afraid I found Mr. Fiels’s remarks somewhat uninformed. He first indicates that e-readers and e-books aren’t free. To which one must of course ask: since when are printed ones free? Of course, I understand that, once purchased, printed books can be used by many others at a fairly low cost, at least to the library (and thus the taxpayer). But his remark seems to indicate that he isn’t up to date on how some of the e-book manufacturers (Sony most notably) are working quite diligently to make e-readers and e-books work for libraries in much the same way. Mr. Fiels goes on to note that “it may become more difficult for students to happen on books with the serendipity made possible by physical browsing.” I would strongly suggest that Mr. Fiels spend some time with students and see how they browse collections today, be it music, books, photos, videos or any other digital media, outside or inside a library. It’s done VIRTUALLY. Of course, Mr. Fiels wasn’t alone in expressing concern. Many other people reacted in similar (and different) ways.

However, the reality is that this is not the first time something of this nature has happened, nor will it be the last. Back in 2005, the University of Texas at Austin, under the leadership of Fred Heath, made quite a stir when it announced that it was making one campus library entirely digital. More recently, the University of Bridgeport in Connecticut did something similar when Diane Mirvis converted the first floor of the university library into a digital learning commons with no books in sight (and which, I might add, uses Primo as the centerpiece of this new digital learning environment). There are probably countless other examples.

These conversions will continue as time marches forward. Slowly but steadily they will go on, until they are no longer noted because they’re no longer newsworthy. In fact, in reading all these links, the thing that struck me was that the users of the libraries find it all rather mundane. They’re expecting it and welcome it, saying simply, “it’s the future”.

The point was further underscored for me this week, when a friend and colleague, Ian Dolphin, pointed me towards the Shared Services Feasibility Study by SCONUL. It is interesting reading for a variety of reasons; in particular, this survey of 83 higher education institutions in the United Kingdom showed that “the strongest focus is on adopting digital solutions and electronic content to reduce physical holdings and therefore space.”

Taken in totality, all of it reminded me of one of my favorite quotes by Benjamin Disraeli, a former British Prime Minister:
"I must follow the people. Am I not their leader?"
Which is my way of saying that I hope we as librarians will allow ourselves to be led by those who understand where people want to go.

Sunday, September 13, 2009

e-books, e-book readers, but what about end-users?

Last week, I participated in a TALIS podcast on e-books and e-book readers. You can find it here. It was an interesting conversation that featured many different perspectives, ranging from a librarian who is actively running an e-book program at a library, to a person from Google (who discussed aspects of the Google book settlement), to other professionals representing different technological backgrounds and experiences.

There is, however, one concern I have when we gather industry people to discuss these topics: an important point of view is missing or only slightly represented, and that is the view of the end users. While presumably many of us on the podcast talk to end users, directly or indirectly, and try to interpret what we believe they want, it's still an interpretation and a pale representation. For instance, I spend most of my time working with academic libraries and visiting them on academic campuses. It is not infrequent for me to hear (or read) reports that many students use the library only as a meeting place, or a place to catch a nap, and that they rarely, if ever, actually use the physical collection of the library (for a variety of reasons). They use digital resources, whether supplied through their library or not, but digital it must be. So I try to represent that point of view in these forums as best I can. Yet I think, within the podcast's parameters, I'm only able to represent a fragment of what I've heard from end users.

For example, I've met more than a few students who have told me they expect to graduate without ever having actually borrowed a single item from the library. Yet I've seen these same students fully wired: computers in their backpacks, iPods in their ears and mobile phones in their pockets, all of which they read quite actively. So reading is not the issue. We know that print will live on for a very long time in one form or another. Our printed library materials? Maybe, maybe not. I'm not at all sure students care.

Now perhaps digitizing these works will allow them to flow more actively into the environments where end-users appear to be spending more of their time and energy. That would be good. But that won't be enough. We, as librarians must also find ways to extend our value-add out there along with our library resources. That is something I think we need to seriously devote some active thought to in the very near term. More importantly, we need to hold some discussions with end-users so we make library services meaningful to them.

It's a frequent concern of mine when working with libraries that libraries don't spend enough time talking to their end users about what their information needs are and how libraries might fill those needs. The most comprehensive description I've seen in the last decade was The OCLC Environmental Scan: Pattern Recognition. Unfortunately, it is now six years old, a lifetime when talking about the changes wrought by technology. I'm sure if this survey were updated today, we'd be very enlightened by what we would hear. Our profession needs to have these conversations more frequently, not less. Once we've had them, we need to listen and respond to the results. Reading the 2003 OCLC report, one is struck by how little progress we've actually made on the findings it reported. Six years later, our lives are complicated by a financial crisis. Library funding seems to be in critical condition. One has to wonder whether the lack of funding could be tied to the lack of progress in meeting end-user needs. Had we done a better job there, would the financial situation be different today?

E-books, e-readers? They're here today and we're trying to grapple with the issues about what to do with them and how to use them in our libraries. Before we get too far down the path, I suggest we have some in-depth conversations with end-users.

Wednesday, August 26, 2009

Library Software Solutions - We need a higher level of discourse...

It seems to me, after a week or so of watching comments fly around on Twitter, Facebook, and various blogs and press sites, that we need to raise the level of discourse among the vendors of proprietary software, those who produce open source software, and the users of both, that is, libraries. Why do I say that? As I'm sure many of you know, the OLE group recently issued its draft final project report, along with a request for comments.

I took that opportunity to write a blog post conveying my concerns about where OLE was headed and how it was getting there. I posed a set of questions based on my professional experience, which includes proprietary-only software companies, software companies with products based on both proprietary and open source software and, prior to Ex Libris, my own company, which was focused almost exclusively on open source software. That blog post drew a pointed comment from Brad Wheeler, a participant in the OLE project.

Which caused me to stop and wonder: did the OLE group really want comments? Or just not comments from vendors of proprietary software?

If that is the case, it is truly unfortunate for all of us. It reminds me of a book review in the Economist that I read this weekend. A statement in that review jumped out at me:
“They are so blinded by partisanship that they are incapable of seeing any vices in their own side or any virtues in their opponents….”
I thought about that for a moment and how broadly it applies to our lives today, from politics (conservative vs. liberal) through the media (FOX vs. CNN) and to computers (Windows vs. Mac). It seems we're increasingly turning into people who can only see black and white and little in between. Is that where we want the discourse between open source software and proprietary software solutions to reside? I sincerely hope not.

Surely we can agree on some things:
  • Proprietary software can co-exist with open source. For instance, I'm extremely proud of what Ex Libris has done in supporting open source software. While I understand that those in the “pure open source” camp will still find things to criticize in what I'm about to say, the facts are that Ex Libris has:
    • Opened its software platforms to support open source extensions
    • Participated in standards meetings to support the DLF API initiatives.
    • Sent speakers and attendees to open source conferences around the world to both learn and present.
    • Encouraged community-based software development.
    • Strongly supported standards and standards organizations.
    • Provided financial support to the open source community via direct financial contributions to the OSS4LIB conferences.
    • Organized meetings for open source developers where Ex Libris developers participate to learn and share how our open platform can be utilized to further support open source development.
  • “For-profit” is not bad. This is a cornerstone of our economy and our society. While I note a trend in many open source and even general library conversations to equate the words “for-profit” with “greed” and “bad”, the reality is that this is a diversionary tactic that serves no real purpose.

    Many universities and educators benefit directly from “for-profit” companies via their endowments and pension funds, both of which invest in, and hope for a good return from, these kinds of investments. (It reminds me of those who say they don't want government health care, but don't you dare touch my Medicare!)


    The reality is that good and successful companies listen to their customers and supply products/services that those customers need and will buy, or else, pure and simple, they go out of business.

    Pricing of those products is always a discussion point and likely will continue to be. I remember what one company president I worked for said when asked how he arrived at a product price. His answer: “somewhere between what it costs to produce and what the market will bear”. If anyone thinks that libraries could previously, or can now, bear high profit margins, please tell me how to transport to your world. It's not one I've lived in for the last several decades.

    I've noted studies showing that the costs of open source products and proprietary products usually turn out to be equal when all aspects of their production and implementation are factored into the equation. I've heard vendors of open source solutions say the same thing. When it comes to cost, it's just a difference of where the money will be spent.
  • Competition is good. Let me be clear: we welcome OLE in the marketplace. As I said in my original post, we see much merit in this project. The OLE work will make for better solutions across the board. Yes, it's a different model of producing software than ours, but that doesn't make our model wrong, and it doesn't make the open source model right. The two methods are just different; each has advantages and disadvantages that customers should weigh to find the one that best suits their specific needs. I agree, it's a big market. There will be alternatives. We'll each represent what we see as the advantages of our solution. Let's agree to let the customers decide.
  • Responsibility belongs to all of us. The current situation of libraries is no more the fault of proprietary software vendors than it is of librarians or any other single player. It's a complex world with many factors at play.

    Open source software organizations understand, as do proprietary firms, that ultimately libraries will determine their own fate. Their willingness to define a compelling vision of their role in the future is the key to their survival. (See my posts about The future of research libraries and/or Libraries; A silence that is deafening.) As software developers, we offer a variety of tools and solutions to meet that vision.

    I think we'd be best served by allowing libraries to focus on the larger issues at hand. We can all do that through intelligent exchanges with clear statements of advantages and benefits.
  • Discourse is important. We at Ex Libris have learned a lot from the open source software movement. There is much we admire in this movement and have moved to incorporate into our products and initiatives in order to benefit our customers. If it benefits our customers, we understand that it benefits us as well.

    I wrote my OLE post because I thought it was an important topic and I wanted to share my experience, my view, and what input I could give to the group working on the project and to those who wish to use the resulting product. It was never meant as a set of statements intended to foster fear, uncertainty or doubt. If we are wrong in our approach, then I would encourage discourse that helps us understand why. If we're right (and let's recognize that companies like ours have been producing software for this marketplace for decades, so surely we know a few things that would benefit the OSS developers), then perhaps our thoughts can be accepted as constructive input.
Given the quality, quantity and intelligence of the people involved in these discussions, I think it is time to raise this dialogue to a higher level.


Tuesday, August 25, 2009

The Future of Research Libraries

This weekend I read a book-in-progress about the future of research libraries, called "The Great Library In The Sky". The fact that it is a book-in-progress is an interesting idea all by itself, one the author discusses in the introduction. Of course, the most interesting part is the book itself. The title of the first chapter is "Time to Say Goodbye" and it opens with the statement that "Academic Libraries are confronting a death spiral." The work is an in-depth look at the challenges and problems posed to academic libraries by information competitors and disruptive technologies. It challenges the thinking of today's academic librarians while offering possibilities for remaining relevant into the future. Let's just say that relevance will not come from doing what has always been done.

As noted, it is a work in progress. The version I read is Version 0.6 and, admittedly, some chapters are still a bit choppy. However, even in this form, it should be mandatory reading in every graduate library science program. Also, any academic librarian wanting to see a pathway forward that isn't centered on cutting services and collections would be well served to read this book right now.

The author, Adam Corson-Finnerty, is the Director of Special Initiatives for the University of Pennsylvania Libraries. That institution is indeed fortunate to have someone like this on its team. Libraries need more people like this. He also writes a blog where, clearly, many of the ideas in this book started out. Excellent reading, both the blog and the book. Check them out.

Monday, August 17, 2009

Importance of content and vendor neutrality in software solutions--what will libraries choose?

Today, our technology tool sets include web services, cloud computing, SaaS, grid computing, mobile devices, etc.—all of which have made possible a whole new way of thinking about library systems/services. As an aggregate, they also raise some new issues that will cause libraries to rethink topics like data privacy, conflicts of interest and market dynamics in ways that have never been of concern before.

There are several efforts underway, including the Ex Libris URM, OLE and OCLC WorldCat, that have outlined plans for next-generation systems/services utilizing at least some, if not all, of these technologies. With these new technologies come all kinds of new questions and interesting topics for consideration, many of which highlight the complex decisions that libraries will be making in the next few years. Coming hard on the heels of some record usage policy debates, the inevitable questions arise regarding what might happen to an even bigger body of data resident in, for instance, a new OCLC-hosted ILS. Will these developments force librarians to again think long and hard about data privacy and record ownership issues? Will putting all of the patron, usage and budget data resident in today’s library ILS in the hands of a vendor that also licenses and prices content, and has third-party relationships with publishers and content providers, raise some concerns? Not just among librarians, but within the libraries and larger institutions/organizations that they serve?

A similar tangle arises when a single vendor controls all of the pieces of a solution: the discovery interface, the database(s) and their access, and electronic resource management. Companies like these offer services that allow a library to license, record, discover and access intellectual content all on a single vendor-hosted platform. The convenience and cost factors are highly touted, as all services are provided courtesy of new technologies unknown just a few years ago. It all sounds too easy, and it is, especially if libraries don’t stop to consider the implications. For example, should a library be concerned about the privacy and exclusive usage of all of its data? If a vendor produces original content, offers access to a database via a hosted service, provides discovery of its own databases, and houses the usage and cost data and license terms of both its own content and other vendors’ content, have we crossed a line that should be of concern not only to libraries, but also to other content publishers? It would seem to me that we should all be very concerned. When one solution provider suddenly has control over all facets of the solution you’re using, and over significant parts of its competitors’ solutions, you, as the end customer, have lost substantial negotiation power. Firms that compete with these suppliers are also handicapped, in that they’ve handed key critical usage information on their products to their very competitors. This information could be used by the solution vendor to modify pricing and packaging choices in ways that won’t be favorable to the library.

The OLE and Ex Libris URM projects continue to sustain the vendor and content neutrality that has been a hallmark of traditional library software, updated to use newer technology. It will be fascinating to see which values libraries choose to prioritize. Will it be perceived low cost and convenience, or will it be content and vendor neutrality, i.e., the ability to negotiate low prices coupled with the traditional need to protect privileged data? These considerations will weigh heavily in their future decisions. It's an important choice.


Tuesday, August 11, 2009

He's back!

Nicholson Baker has a long track record when it comes to libraries, books and technology. Among those of us who make our living in the technology sector of the library world, Mr. Baker isn’t always considered very forward-thinking. Back in 1994, he wrote an article in the New Yorker magazine about how various libraries, including the New York Public Library and Harvard, had discarded their card catalogs and replaced them with “inferior” online systems. In 2001, he wrote a book called “Double Fold” that was very critical of libraries’ handling of original works and their replacement with newer types of surrogates. Now, in the latest issue of the New Yorker, he takes on the Amazon Kindle 2 in an article entitled “A new page”. It’s an entertaining article, certainly. Not surprisingly, it is also a pretty skeptical look at the Kindle, as he relates what he views as good and bad about the technology behind e-books and e-book readers. If one checks the web, various sites are already dealing with his article, and those sites are building an impressive array of comments. Again, the comments are entertaining and informative, and they represent all sides of this very passionate discussion.

As informative and entertaining as these discussions are, as a user of e-books and an e-book reader I often find some points of view glaringly missing. These include: Given the quantum leaps each generation of this technology makes, where might it go? What might we do with it? What will it mean for librarianship?

As a starting point, consider the technological leaps made by the iPod, which launched in October 2001. We’ve seen new versions and models virtually every year since, each offering major new features and technology. As a result, these devices have become prolific. According to Wikipedia, over 200M iPods of one variety or another have been sold since their introduction. The number keeps growing.

Now, consider the most popular e-book reader, the Kindle. The first one was introduced in November 2007 and today, almost two years later, we’ve seen two additional versions, each offering substantial new feature sets. It is estimated that 500,000 have been sold thus far, and it is projected that over 3M will have been sold by 2010. I have no doubt that many of the issues and concerns we hear today, from people like Baker, will be taken as input by the various manufacturers and used to rapidly improve their products.

When talking to librarians about these devices, I frequently encounter points of view like “It’ll never replace books” or “The book is a perfect technology – widely usable, no power needs, it feels and smells good,” etc. However, I think this is a black-and-white view. It is also a denial of the inevitable. I read somewhere that paper is a technology, and like all technologies it too will have an end of life. Until that day arrives, as librarians we should look at these devices and ask ourselves the following questions:
  1. If I can have a book/magazine/newspaper delivered wirelessly to the device in my hand in less than 60 seconds and for a reasonable charge, why should we expect users to go to the library or use inter-library loan?
  2. If I check out a book at the library, can I plug a headset into it and have the book automatically read to me?
  3. If I’m reading a book from the library, can I instantly change the font size of that book to one more comfortable for my tired eyes?
  4. Can I keyword search the book in my hand, and every other e-book I own, all at the same time, with one simple search? (See the sketch after this list.)
  5. Can I carry 1,500 books in the same space as one printed book normally takes?
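
Question 4 is easy to make concrete. Here is a toy sketch, in Python, assuming my e-books sit as plain-text files in a local folder (real e-book formats would need a parsing step first), that runs one keyword search across every book I own:

```python
# A toy answer to question 4: one keyword search across every e-book I own,
# assuming the books sit as plain-text files in a local "my_ebooks" folder.
import pathlib

LIBRARY = pathlib.Path("my_ebooks")  # hypothetical folder of .txt e-books


def search_all_books(keyword: str) -> None:
    keyword = keyword.lower()
    for book in sorted(LIBRARY.glob("*.txt")):
        text = book.read_text(errors="ignore")
        for lineno, line in enumerate(text.splitlines(), start=1):
            if keyword in line.lower():
                print(f"{book.stem}, line {lineno}: {line.strip()}")


search_all_books("serendipity")
```
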
I don’t intend to start a long point-by-point comparison of libraries and library books to e-book readers and e-books. Each has its attributes, and it would be taking up the black-and-white view of the world to go down that path. Instead, we should realize that this new technology offers some very interesting new value-add capabilities that libraries and library books don’t. What are others seeing as the impact? (The highlighting below is my own.)

“New e-readers are leading the way to a future in which your local library is the solid-state drive in your hand” (Candice Chan, Wired Magazine, May 2009).

Steven Johnson, in the Wall Street Journal of April 20, 2009, in an article entitled “How the e-book will change the way we read and write”, made some very interesting observations. If you haven’t read this article, I highly recommend it; it offers a real view of where this technology is headed:
  • “It will make it easier for us to buy books, but at the same time, make it easier to stop reading them.”
  • “Print books have remained a kind of game preserve for the endangered species of linear, deep-focus reading.”
  • “2009 may well prove to be the most significant year in the evolution of the books since Gutenberg …”
  • “Think about it. Before too long, you’ll be able to create a kind of shadow version of your entire library, including every book you’ve ever read – as a child, as a teenager, as a college student, as an adult. Every word in that library will be searchable. It is hard to overstate the impact that this kind of shift will have on scholarship. Entirely new forms of discovery will be possible. Imagine a software tool that scans through the bibliographies of the 20 books you’ve read on a topic, and comes up with the most cited work in those bibliographies that you haven’t encountered yet.”
  • “Reading books will become … a community event, with every paragraph a launching pad for a conversation with strangers around the world.”
  • “The unity of the book will disperse.”
All of this should cause one to stop and think. The worlds of publishing and research will be transformed by e-books. Librarianship will be able to move in new directions and address new opportunities. New software will be needed on these platforms to replicate some of the value-add skills of libraries and librarianship in this different environment. (Note that these articles say the library will be on your e-reader, not the value-add of librarianship. We should make sure the latter is there as well.) At the same time, this new technology raises countless concerns for the profession if we fail to embrace it.
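
As an aside, Johnson’s bibliography-mining idea is concrete enough to sketch. Assume, hypothetically, that each e-book exposed its bibliography as a machine-readable list of cited titles (no e-reader offers this today); the core logic would then fit in a few lines of Python:

    from collections import Counter

    def most_cited_unread(read_titles, bibliographies):
        """Return the work cited most often across the bibliographies
        of the books you've read that you haven't read yourself."""
        read = set(read_titles)
        citations = Counter(
            cited
            for bib in bibliographies.values()
            for cited in bib
            if cited not in read
        )
        return citations.most_common(1)[0] if citations else None

    # Hypothetical data: three of the "20 books you've read on a topic".
    bibs = {
        "Book A": ["Work X", "Work Y"],
        "Book B": ["Work X", "Work Z"],
        "Book C": ["Work X", "Work Y", "Book A"],
    }
    print(most_cited_unread(bibs.keys(), bibs))  # ('Work X', 3)

The counting is trivial; the real obstacle is getting publishers to ship machine-readable bibliographies with their e-books in the first place.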

Nicholson Baker will be back again and again, every time he sees new forms of information consumption that he feels threaten traditional librarianship and printed books. Obviously, as a librarian, I think we need to embrace this new e-book technology and ensure that we develop and put into place ways to work with it and offer librarian services within it. This evolution in technology presages new dimensions in information consumption and utilization. As a result, librarians will have some new tools in their toolbox, and others we will need to develop. If you want to see how some of your peers are working with this new technology, check out this blog entry. If you haven’t started, maybe it’s time? While Nicholson Baker will be back, I’d like to make sure librarianship never goes away.

(As an interesting follow-up, read the post titled “Ebook growth explosive; serious disruptions around the corner”, which puts some numbers on the growth rates in ebook sales and also speaks specifically to library sales of ebooks.)



Tuesday, August 4, 2009

OLE: The unanswered questions


After returning from a vacation following ALA, I read the summary of the recently issued draft Final OLE Project Report. While there is much to be admired in what the OLE project has achieved, it is important to note that OLE is neither the first organization to define these goals, nor does it represent notably unique or innovative technology. Furthermore, it leaves unanswered some important questions that anyone thinking about investing in this project should demand answers to first.

Robert McDonald, Associate Dean for Library Technologies at Indiana University, said in an email about the project:
"The goal is to produce a design document to inform open source library system development efforts, to guide future library system implementations, and to influence current Integrated Library System vendor products."
If you read the project goals outlined in the document, you'll find similar aims stated:
“to design a next-generation library system that breaks away from print-based workflows, reflects the changing nature of library materials and new approaches to scholarly work, integrates well with other enterprise systems and can be easily modified to suit the needs of different institutions.”
These are all important and readily agreed-upon goals. In fact, nearly two years ago, Ex Libris started to define a very similar set of goals, although much broader, more comprehensive and technologically more advanced. That process was the beginning of what would become known as Unified Resource Management (URM).

The next steps, according to the OLE document, are to start
“talking with senior administrators, both internal and external to OLE, to identify those institutions that wish to develop a proposal to carry the project forward into the next phase of building the software. OLE participants also have begun discussions with selected software vendors to explore how they might participate either in software development or software hosting and support as the project continues.”
This statement seems to be a bit at odds with the goals outlined above. If the goals are to influence and guide current ILS development and/or inform open source development efforts, then proceeding to build the software is a very large step in a different direction, and it skips an equally important step. So what’s missing? Creating the business model that surrounds this development effort. This is no small task, but it is a critically important one. If the OLE project were a new startup seeking investment, investors would want assurances that the money being invested would result in a product/service providing a measurable return, year after year, for a reasonable amount of time.


To do that, the business model would need to answer some very tough questions:
  1. What is the target market for this product? Reading the document as currently drafted, one finds a high-level description (framework) that will appeal to most librarians conceptually. It is clear from the document that the goal is a very wide adoption rate for the resulting product. However, it is missing the functional details any specific library would need in order to say clearly that the product will work for them. Now, if the point of the effort is to guide and inform, the document as it exists is fine. But if it is meant to result in a final product, it needs to be considerably more specific. This is where involving vendors that have developed products for the library market will be very important. Such vendors will undoubtedly point out that the devil is in the details. Research libraries are different from academic libraries, which are different from public libraries, which are different from… you know what I’m saying. Each of those segments requires different functionality and workflows. The OLE plan states that the ensuing product will accommodate flexible and more modern workflows. What is not clearly stated is that putting those pieces in place will be left to the institutions that adopt the product. For those institutions, the time, money and resources required to add that specific functionality will need to be factored into the cost of adopting this as a development project/product.
  2. Who are the competitors? Clearly there are already competitive products emerging. Ex Libris is developing URM (mentioned above), its next-generation automation product. OCLC is discussing and developing extensions to WorldCat. Others are also working towards goals similar to those outlined in the draft Final OLE Project Report. A comprehensive list should, to the degree possible, be compiled so that potential partners understand the competitive landscape this product will face.
  3. How much of the market do the organizations above have, or how much are they going to take? How much is OLE hoping to take? Once the competitive solutions are identified, market-share projections should be developed for all the identified products. Why? Because it needs to be understood that if you have a potential market of “x” libraries (for discussion’s sake, say 120 ARL institutions) and OLE hopes to obtain a market share of 20%, then the total potential pool of participating institutions is 24. So when the final costs to fully develop this product, place it into production, and maintain it are calculated, those 24 institutions must bear them. (For example, if the projection is that it will take $5.2M to build the product, another $5M for build partners to complete the development needed to reach production status, plus an annual recurring cost of minimally two programmers per institution at a total of $150K, we’re looking at an annualized cost approaching $500K per institution, before deducting any grant funding the project might obtain; a back-of-the-envelope version of this calculation is sketched below.) These are big and important numbers that need to be known by any institution that might wish to participate in either the development or adoption of this product.
  4. How many institutions are actually going to put OLE into production status? (Remember, we’re talking about an enterprise-level application here, so institutions have to be willing to bet the future of their organizations on the final result.) There are many open source projects in libraries today. Some run in test/development mode for years with no clear date for when they will become “production” products. While it is equally true that many OSS products are in production status, without knowing when a product will be “done” and for how long money must be poured into development, it is extremely difficult to build a business case that shows a return on investment within a useful time frame.
  5. How much money are those institutions going to have to put behind that adoption in order to make it an enterprise-ready, production-level product? While these will be projections at best, it is important to factor the answers into the business model, normally at several different levels of adoption, so that institutions considering the solution have a comprehensive understanding of how costs might change depending on what happens.
  6. How will that product sustain itself for some defined amount of time (usually 5–10 years)? The current draft Final Report begins to outline a plan for achieving this, but again, a range of numbers needs to be applied for a realistic assessment (i.e., if only 50 institutions adopt it will cost “x”; if 1,000 adopt, “y”).
  7. What are the risks? Risk identification is an important part of making any investment. Some of the risks that surround OLE include:
  • Given the scope of what is being proposed and the competitive environment in which the product will exist, can it develop a large enough following of developers to sustain it in each market segment in which it aspires to compete? The reality is that the library market is of limited size, and given current economic conditions, the number of institutions that can afford to sustain a staff of developers is shrinking. Given all the other OSS efforts underway, is there a large enough community willing to devote time, energy and resources to this product?
  • The investment required, both by the institutions that will be build partners and by those that will end up tailoring the product to meet their needs, is very large. Much of the money to be applied here might come from the Mellon Foundation, a terrific organization that has done more for libraries than can be measured. Yet someone needs to ask: Is this the best use of that money, especially when competitive products are clearly emerging, many from organizations with proven track records in developing this kind of technology? What is the probability of success for this startup effort? What if it fails?
  • The real point here is that risks need to be identified, measured and factored into the investment analysis.
Once gathered, all these answers will need to be loaded into a business model, typically a complex spreadsheet, to project the actual per-institution cost of creating and sustaining the development of OLE. Given the current economic crisis in both education and libraries, these costs will need to be carefully documented, scrutinized, and compared to other offerings in order to make informed, fiscally sound decisions.
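
To make the arithmetic in items 3 and 6 concrete, here is a minimal sketch of such a model in Python rather than a spreadsheet. Every figure in it is either taken from the hypothetical example in item 3 or assumed purely for illustration (in particular the three-year amortization window, which the discussion above doesn’t specify); none of it is actual OLE data:

    # Back-of-the-envelope cost model; all figures are hypothetical,
    # echoing the examples in items 3 and 6 above.

    def annual_cost_per_institution(build_cost, production_cost,
                                    annual_staff_cost, adopters,
                                    amortization_years):
        """Each adopter's share of the one-time development spend,
        spread over an assumed amortization window, plus its own
        recurring staffing cost."""
        one_time_share = (build_cost + production_cost) / adopters
        return one_time_share / amortization_years + annual_staff_cost

    # Item 3: 120 ARL institutions at a 20% share -> 24 adopters,
    # $5.2M to build, $5M more to reach production, $150K/yr staffing.
    for adopters in (24, 50, 1000):  # item 6's "if only 50 ... if 1,000"
        cost = annual_cost_per_institution(
            build_cost=5_200_000,
            production_cost=5_000_000,
            annual_staff_cost=150_000,
            adopters=adopters,
            amortization_years=3,  # assumed; shorter windows cost more
        )
        print(f"{adopters:>5} adopters -> ${cost:,.0f} per institution per year")

Under these assumptions the model yields roughly $292K, $218K and $153K per institution per year for 24, 50 and 1,000 adopters respectively; compress the amortization window to a single year and the 24-adopter case jumps to about $575K, in the neighborhood of the “nearly $500K” figure cited above. The exact output matters less than the lesson: adoption level and amortization assumptions dominate the result, which is why they must be stated explicitly.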

This is tedious stuff. The answers to these questions will probably not come from the same people who wrote the draft Final Report. Yet those answers will most probably determine the overall direction and success of Project OLE, whether as a guiding, influencing, or development force in library automation.

The final question I think anyone responsible for making an investment decision about building OLE should ask themselves is this: If I were investing my own money in a company that said it was going to build OLE, would I do it? If not, I think you know what you should do when it comes to your organization’s money and OLE.