Sunday, December 13, 2009
The library e-book reader – a “Personal Information/Knowledge Environment”
- Users should have the ability to use the touch screen to easily highlight key text or phrases and then have a simple voice recognition capability (see the latest Google search app for the iPhone for an example) that would allow the user to quickly add tags to the highlighted text. Copying this info to a central website, as the Amazon Kindle does, is a wonderful way to let the highlighted text become the launching pad for larger community-based conversations or for research/papers based on those clippings. Alternatively, the software should attach the necessary bibliographic/citation information to the highlighted text, so that if I copy the text into a larger paper or research document, the citation is complete and readily at hand (the first sketch below shows what such a record might hold).
- The ability to tag should also apply to the metadata records for the information I’ve stored on the “Personal Information/Knowledge Environment” (PIKE). Using those tags, as well as standard search terms, the discovery interface should allow me to easily search and refine my results both on the PIKE and by connecting to remote libraries and the web. Once I’ve found a resource at the library, the e-reader should allow me to borrow, and easily renew, e-content that I obtain from the library.
- The PIKE should also be able to build a standardized information profile based on what I’ve read and other input and then, based on an agreed-upon standard, monitor specified collections, services, RSS feeds, blogs, etc. and pull together alert notices for the user to review and possibly use in selecting new materials to read. This would be a true library-like service on the e-reader platform (the second sketch below illustrates the alerting idea).
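To make the first bullet concrete, here's a minimal sketch of what a tagged highlight carrying its own citation might look like as a record on the PIKE. Every field name and value here is a hypothetical placeholder, not any real e-reader's format:

```python
# Hypothetical PIKE record: a highlight that carries its own citation.
highlight = {
    "text": "...the selected passage...",
    "tags": ["e-books", "libraries"],   # added by touch or voice
    "citation": {                       # captured automatically at highlight time
        "author": "Jane Doe",
        "title": "An Example Work",
        "publisher": "Example Press",
        "year": 2009,
        "pages": "41-42",
    },
}

# Pasting the highlight into a paper could then emit a ready-made citation:
c = highlight["citation"]
print(f'{c["author"]}. {c["title"]}. {c["publisher"]}, {c["year"]}, pp. {c["pages"]}.')
```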
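And here's a minimal sketch of the alerting idea in the third bullet: poll a few RSS feeds and keep the entries matching the reader's interest profile. The feed URL and keyword set are placeholders; a real PIKE would build the profile from reading history rather than a hand-written list:

```python
# Sketch only: poll RSS feeds and collect entries matching an interest profile.
import urllib.request
import xml.etree.ElementTree as ET

INTEREST_PROFILE = {"e-books", "discovery", "metadata"}   # derived from reading history
FEEDS = ["https://example.org/newtitles.rss"]             # collections/blogs to monitor

def fetch_alerts(feeds, profile):
    alerts = []
    for url in feeds:
        with urllib.request.urlopen(url) as resp:
            root = ET.fromstring(resp.read())
        for item in root.iter("item"):                    # RSS 2.0 <item> elements
            title = (item.findtext("title") or "").strip()
            link = (item.findtext("link") or "").strip()
            if any(term in title.lower() for term in profile):
                alerts.append((title, link))
    return alerts

if __name__ == "__main__":
    for title, link in fetch_alerts(FEEDS, INTEREST_PROFILE):
        print(f"ALERT: {title} -> {link}")
```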
Friday, November 27, 2009
Tools in library and academic toolboxes: community, collaboration and openness
Monday, November 16, 2009
Another facet of the “library bypass strategies”
Jean’s concern was how libraries might get bypassed in the context of e-book supply strategies, and I totally agree with the comments she makes in her post. What I see echoing her concern is in the area of e-content and discovery products being offered to the library marketplace. Increasingly, these are offered as pre-packaged solutions with a discovery interface and databases from a select number of organizations. But there are real differences in the offerings, and librarians need to be careful in how they select and implement this technology.
Libraries must retain control over the selection of the content that is offered to their end users, or else they have abandoned a core value-add of librarianship, i.e. the selection of the most authoritative, appropriate and authenticated information (in this case, electronic resources) needed to answer a user’s information need. If, as a librarian, you cede this control to a third-party organization, you’ve set up your library to be bypassed and ultimately replaced in the information value chain.
Some may ask how this is any different from the book approval plans most libraries have participated in for years, where vendors put together recommendations of titles for a library to purchase. Those plans, refined over roughly the last 20 years, are built around the Library of Congress classification scheme and subject headings and a variety of other criteria by which titles are selected. With this model, librarians had the ultimate say over acceptance or rejection of books supplied in response to the plans. However, e-content selected by your vendor, particularly if that vendor is owned by a content aggregator, comes with a whole host of complications. You have to ask yourself if you really want to trust a vendor of content to be objective when it comes to managing or delivering content from their competitors. Will they take advantage of usage statistics when determining packages or pricing? Will they tweak ranking algorithms to ensure that their own content gets ranked higher or more prominently?
I think it is important, as a librarian, to understand these realities. If you want to provide your users with an assurance that what they’re searching has passed your selection criteria and that it is the best information to meet their needs, then you’ve just created some important criteria to be met when you select the discovery tool and e-content your library is going to use. These include:
1. Content-neutrality. Using a discovery tool that is tied to (or owned by) any one content provider obviously increases the probability that content from their major content competitors will not be available. Furthermore, content from companies owned by the parent company will likely be favored more heavily in the ranking/relevancy algorithms. This will likely be disguised as “since we own these databases, we can provide richer access”. I’d be cautious if I heard those phrases. The discovery tool you select and use should allow you to provide equal access to all content that is relevant to the end user, not just the content of the supplier who is providing the tool. One way to do this is to make sure the discovery tool is from a source that has no vested interest in the content itself. Another way is to ensure you have the ability, indeed control, over the final ranking/relevancy algorithms (the first sketch after this list illustrates the concern).
2. Deep-search and/or metasearch support. If you believe that all the content your users will ever need or want to search will be available solely through a discovery interface built on harvested metadata, then you need to know this is probably unrealistic.
There are two ways to avoid getting caught in this trap. One option is the ability to add metasearching capabilities. Yes, we all know the limitations of metasearching. But if you believe, as I do, that your job is to connect your users with the most appropriate, authoritative and authenticated information needed to answer their questions – not just the easiest information you can make available that might answer their question – then you have to provide a way to search information that can’t be harvested, which, depending on the topic, can be important information.
The other way to do this is deep search, i.e., connecting to an API that searches remote databases directly. This technology typically offers faster and better searching, as well as much better ranking and retrieval capabilities (the second sketch after this list shows the idea).
Either way, these are capabilities that many discovery interfaces don’t support. But they should, indeed they must, in order to support the value-add of librarianship on top of information.
3. The ability to load and search databases unique to your users’ information needs. If the above options don’t cover the content you need to provide access to, then you should have the option to add a local database of e-content alongside your harvested metadata. This might be a local digital repository or other e-content, but you should insist on this capability to ensure the needed access through the discovery interface.
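To make the ranking concern in point 1 concrete, here's a toy illustration of how a hidden per-vendor boost can quietly reorder results. The numbers and the vendor_boost multiplier are invented for illustration and describe no specific product:

```python
# Toy example: a hidden boost on vendor-owned content reorders results.
def score(relevance, vendor_boost=1.0):
    # A content-neutral tool keeps vendor_boost at 1.0 for every supplier.
    return relevance * vendor_boost

results = [
    ("Competitor DB article", 0.92, 1.0),
    ("Vendor-owned DB article", 0.85, 1.3),   # quiet boost on the vendor's own content
]
for title, rel, boost in sorted(results, key=lambda r: score(r[1], r[2]), reverse=True):
    print(f"{title}: {score(rel, boost):.3f}")
# The vendor-owned item (0.85 * 1.3 = 1.105) now outranks the objectively
# more relevant competitor item (0.920) -- and the end user never sees why.
```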
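And for the deep-search option in point 2, here's a rough sketch of querying a remote database over SRU (Search/Retrieve via URL), a standard protocol many library databases expose. The endpoint is a placeholder, and a production connector would add authentication, result merging and ranking:

```python
# Sketch of a deep search against a remote SRU (Search/Retrieve via URL) endpoint.
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

def sru_search(endpoint, cql_query, max_records=5):
    params = urllib.parse.urlencode({
        "version": "1.2",
        "operation": "searchRetrieve",
        "query": cql_query,              # CQL, the SRU query language
        "maximumRecords": max_records,
    })
    with urllib.request.urlopen(f"{endpoint}?{params}") as resp:
        return ET.fromstring(resp.read())   # SRU responses are XML

# e.g. sru_search("https://example.org/sru", 'dc.title = "open access"')
```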
Any librarian who understands his or her users’ unique information needs will insist, just as librarians have for years in building other collections, that we must have a selection policy giving us control over the e-content users will be able to utilize.
Watching librarians in action today, I see some ignoring these issues. They are selecting discovery tools that provide quick, pre-defined, pre-packaged content with a discovery interface that doesn’t really meet the deeper needs of their users or their profession. Once they've done this, they’ve reduced their library’s value-add in the information delivery chain, they’ve lost another valuable reason for maintaining their library’s relevance within the institution, and they’ve handed it to those who believe good enough is, well, good enough.
To avoid this situation, be careful in your choice of discovery tools and e-content. Be sure they support the value-add of librarianship. That way you, and your library, won’t become another facet of what Jean calls “the library bypass strategies”.
Thursday, November 12, 2009
"Who knows what the library means anymore?"
It's a telling question. I mentioned it in my previous post and I'll say it again: it's the one question I truly wish the profession would answer, so that everyone could align behind and support the answer.
Sunday, November 8, 2009
The OSSification of viewpoints.
What we saw unfurl in this debate was what I’ve titled “OSSified” viewpoints. Each side rehashes viewpoints about open source that have been expressed hundreds, if not thousands, of times. One side shouts “FUD” and the other side shouts “anti-proprietary”, and neither side, in my opinion, is adding anything new or valuable to the discussion. Yes, both sides have many valid points buried under their boxing-glove approaches. No, neither side is presenting its view in a compelling, well-reasoned, logical fashion.
When I was in college (yes, a long time ago), I was on the university’s debate team. On weekends, we’d travel across the country to engage in debates on a wide range of topics. Each topic required massive preparation: research, statistics, quotes, all kinds of supporting information, and not just for one side of the debate, but for BOTH sides. You never knew until you arrived which side you would be taking, so you had to be prepared to debate either. The end result was that you learned a great deal about both the advantages and disadvantages of a wide range of topics. You also learned, as we often do in life, that the world is not black and white; depending on what is important to you as an individual, an organization or a profession, the right answer is frequently something in between.
So it is with open source and proprietary software. Both have advantages; both have disadvantages. Which of those apply to your situation depends on who you are and what organization you’re representing. But here is reality as far as I’m concerned: open source software represents a need, and an ability, for organizations and professions to adapt services to end-user needs and to do so very quickly – particularly in environments where the pace of change is accelerating every day. However, it also carries with it the need to have internal technical staff, or to pay for external staff, to adapt the software to those needs. Proprietary software can and usually does offer complete, functionally rich systems that address wide market needs at reasonable cost and with limited technical staff on the part of the organization using it. An added bonus comes when the proprietary software is open platform (as Ex Libris products are), so that the proprietary package supports open source extensions that can be made to enhance services for users. This combination brings some of the best of both approaches together.
However, let me point out the obvious and yet frequently forgotten key point in what I’ve just said. Because of the rate of change libraries are dealing with today, they need to adapt and implement quickly. Software development technologies, as with all technologies, have limitations. Open source and proprietary do represent two different approaches to development. But what matters at the end of the day is providing a total SOLUTION that works in meeting the needs of users. Until such time as users can sit and completely configure software applications to do exactly and exclusively what they want, there will be room for both open source and proprietary software in this profession. Each has advantages. Each has disadvantages. Each offers different approaches to solving problems and providing a solution. If we become zealots for either point of view, we are not serving our profession or our users well. Becoming zealots means we will fight against the use of what the other offers, and we will waste massive amounts of time reinventing things that already exist and work well (a point shared by Cliff Lynch in this debate). Libraries can’t afford this redundancy, particularly in the economic climate we’re currently in.
The profession of librarianship has more important things to do at the moment. Let’s devote the energy being wasted in this debate to defining and agreeing on what librarianship will look like in five years. What will librarianship mean to end-users, and what will our value-add to information be in that time frame? Answering this would greatly help solve many of the funding problems we’re all fighting at the moment. Finally, let’s map out the plans and technology that are going to help us fulfill that vision. I’m sure if we do that, there will be plenty of new places for both OSS and proprietary software to make major contributions, and in ways that build on and support each other. That’s what we’re trying to do at Ex Libris, and I would encourage wider adoption of this approach across the profession, rather than continuing boxing matches using old and outdated arguments that do nothing to advance the need to provide solutions to users.
We simply have more important things to do.
Saturday, November 7, 2009
E-book technology is accelerating. Libraries' understanding and use of this technology need to keep pace
Barnes and Noble introduced their new Nook e-book reader, a device bearing many similarities to the Amazon Kindle, but with some notable advances. These include a portion of the screen that displays color, the ability to lend books you’ve bought to friends, the ability to read entire books for free in a Barnes and Noble store using a wireless connection and, last but certainly not least, support for MP3, PDF, ePub and PDB files. These are all significant advances, and the device, which is to be available late this month (November), will further accelerate the adoption of e-books by readers.
Of equal importance is another announcement this week by Marvell and E Ink of a new agreement that “raises the technology bar. This is a total platform solution—including Wi-Fi, Bluetooth, 3G modem, and power management. The Armada e-reader has the potential to deliver the first mass market product accessible and affordable to billions of consumers around the world." Speculation is that instead of the current $250 price for e-book readers, this new technology will bring the prices down into the $100 range.
The pace of technology advancement in the area of e-books is accelerating rapidly and, as a result, it is going to change people's reading habits, methodologies, research and discovery. These are all places where librarianship should and can be playing a leading role. With that statement in mind, I’d encourage you to read the article in the October issue of American Libraries magazine entitled “E-readers in Action”. The article, which highlights the efforts of Penn State to use e-books, raises many valid issues concerning the use of e-book technology in libraries. But after reading it, I would ask you to think about what could have been done differently in this case to make it a more satisfactory experience, both for the readers and for the library. I personally see quite a few things I would have done differently. Before I put forth my ideas, I invite yours. Comment on this post and I’ll follow up with another post summarizing your ideas and sharing mine on what libraries need to be doing to successfully use this new technology.
Thursday, October 15, 2009
The scalability of the open source business model in libraries...
Those experiences have taught me that open source commercial (as opposed to purely community-based) business models that succeed for library-specific applications are nascent efforts. When they do succeed, they often share many similarities with proprietary software business models. On the other hand, many proprietary software business models are increasingly moving towards new collaborative models (for example, the Ex Libris Open Platform). All of which supports my long-time contention that the future business models for both open source and proprietary software will resemble neither as we know them today. As in any evolutionary process, the best features of both will blend together to produce a new model for the future.
The latest Library Gang 2.0 podcast examines some of the issues currently being wrestled with and also talks about the future of the ILS. Listen in, I think you’ll find it interesting.
Thursday, October 8, 2009
The difference between Google and libraries
Those "with long memories remember the last time Google assembled a giant library that promised to rescue orphaned content for future generations. And the tattered remnants of that online archive are a cautionary tale in what happens when Google simply loses interest".It is a useful read, not so much for librarians who already understand the differences, but for librarians to point those that question their existence or funding.
The author says it best at the end, when he says:
"It's a reminder that Google is an advertising company — not a modern-day Library of Alexandria."
Libraries have value and important roles to play in our society. Reminders like this are useful.
Tuesday, September 22, 2009
An interesting environmental scan on academic digital libraries
The author, Derek Law of the Centre for Digital Library Research at the University of Strathclyde in Glasgow, Scotland, laments that “it is no longer clear what business libraries are in and why they should now interface with other parts of the organizations they serve” and further says that librarians “have lacked the space to step back and observe it from a higher level.”
The good news is that if they take the time to read this article, he’ll provoke their thinking and help clarify what must be dealt with in the larger environment. He cites numerous reports to show what many feel, even if they couldn’t quantify it: users’ perceptions of libraries are radically different from what librarians perceive them to be. He taps the CIBER report to show that researchers “expect research to be easy”, that they “do not seek help from librarians” and that they only want to “download materials at their desks.” One of the most disturbing disconnects is his point that “when librarians assist users, satisfaction levels drop”, because users perceive that librarians aren’t simply trying to help them find what they need, but are trying to show them “what is good for them”.
The article deals with the growth in digital content but very accurately points out that librarians have yet to add value to the digital content they do accumulate. Yet all is not lost, because he identifies the trusted brand of libraries as something librarianship needs to build upon. He puts forth two really interesting tables in the article: the first shows how many social networking tools can replace traditional library activities, and the second suggests how libraries can use those very social networking tools to the benefit of library users (the article is worth downloading for these two tables alone!).
Finally, the article suggests key things that librarians need to do to “be at the core of any redefinition of the Library’s role”. I won’t spoil the read for you, but let me say that you should grab this article and read it. It’s time well spent.
Sunday, September 20, 2009
“I must follow the people. Am I not their leader?” -- Benjamin Disraeli
In particular, Cushing Academy made quite a stir when they went completely digital. James Tracy, headmaster of Cushing stated in a Boston Globe article:
"When I look at books, I see an outdated technology, like scrolls before books. This isn’t ‘Fahrenheit 451’ [the 1953 Ray Bradbury novel in which books are banned]. We’re not discouraging students from reading. We see this as a natural way to shape emerging trends and optimize technology."This, of course, drew all kinds of spirited responses, including some from Keith Fiels, executive director of ALA. I’m afraid I found Mr. Fiels remarks somewhat uniformed. He first indicates that e-readers and books aren’t free. To which one must of course ask, since when are printed ones free? Of course, I understand that once purchased printed books can be used by many others for a fairly low cost, at least to the library (and thus the taxpayer). But his remark seems to indicate that he isn’t up-to-date on how some of the e-book manufacturers (Sony most notably) are working quite diligently to make e-readers and e-books work for libraries in much the same way. Mr Fiels goes on to note “it may become more difficult for students to happen on books with the serendipity made possible by physical browsing.” I would strongly suggest that Mr. Fiels spend some times with students and see how they browse collections today be it music, books, photos, videos or any other digital media, outside or inside of a library. It’s done VIRTUALLY. Of course Mr. Fiels wasn’t alone in expressing concern. Many other people reacted in similar (and different) ways.
However, the reality is that this is not the first time something of this nature has happened, nor will it be the last. Back in 2005, the University of Texas at Austin, under the leadership of Fred Heath, made quite a stir when it announced it was making one campus library entirely digital. More recently, the University of Bridgeport in Connecticut did something similar when Diane Mirvis converted the first floor of the university library to a digital learning commons with no books in sight (and which, I might add, uses PRIMO as the centerpiece of this new digital learning environment). There are probably countless other examples.
These conversions will continue as time marches forward. Slowly, but steadily they will go on until they are no longer noted because they’re no longer newsworthy. In fact, in reading all these links, the thing that struck me was that the users of the libraries find it all rather mundane. They’re expecting it and welcome it, saying simply “it’s the future”.
The point was further underscored for me this week, when a friend and colleague, Ian Dolphin, pointed me towards the Shared Services Feasibility Study by SCONUL. While interesting reading for a variety of reasons, this survey of 83 higher education institutions in the United Kingdom showed, in particular, that “the strongest focus is on adopting digital solutions and electronic content to reduce physical holdings and therefore space.”
Taken in totality, all of it reminded me of one of my favorite quotes by Benjamin Disraeli, a former British Prime Minister:
"I must follow the people. Am I not their leader?"Which is my way of saying that I hope as librarians, we will allow ourselves to be lead by those who understand where people want to go.
Sunday, September 13, 2009
e-books, e-book readers, but what about end-users?
There is, however, one concern I have when we gather industry people to discuss these topics: an important point of view is missing, or only slightly represented, and that is the view of end-users. While presumably many of us on the podcast talk to end-users, directly or indirectly, and try to interpret what we believe they want, it's still an interpretation and a pale representation. For instance, I spend most of my time working with academic libraries and on academic campuses visiting academic libraries. It is not infrequent for me to hear (or read) reports that many students use the library only as a meeting place, or a place to catch a nap, and how little, if at all, they actually use the physical collection of the library (for a variety of reasons). They use digital resources, whether supplied through their library or not, but digital it must be. So I try to represent that point of view in these forums as best I can. Yet, within the podcast's parameters, I'm only able to represent a fragment of what I've heard from end-users.
For example, I've met more than a few students who have told me they expected to graduate without ever having borrowed a single item from the library. Yet I've seen these same students fully wired: computers in their backpacks, iPods in their ears and mobile phones in their pockets -- all of which they read quite actively. So reading is not the issue. We know that print will live on for a very long time in one form or another. Our printed library materials? Maybe, maybe not. I'm not at all sure students care.
Now perhaps digitizing these works will allow them to flow more actively into the environments where end-users appear to be spending more of their time and energy. That would be good. But that won't be enough. We, as librarians must also find ways to extend our value-add out there along with our library resources. That is something I think we need to seriously devote some active thought to in the very near term. More importantly, we need to hold some discussions with end-users so we make library services meaningful to them.
It's a frequent concern of mine when working with libraries that they don't spend enough time talking to their end users about what their information needs are and how libraries might fill those needs. The most comprehensive description I've seen in the last decade was The OCLC Environmental Scan: Pattern Recognition. Unfortunately, it is now six years old -- a lifetime when talking about the changes wrought by technology. I'm sure if this survey were updated today, we'd be very enlightened by what we would hear. Our profession needs to have these conversations more frequently, not less. And once we have them, we need to listen and respond to the results. Reading the 2003 OCLC report, one is struck by how little progress we've actually made on the findings it reported. Six years later, our lives are complicated by a financial crisis, and library funding seems to be in critical condition. One has to wonder if the lack of funding could be tied to the lack of progress in meeting end-user needs. Had we done a better job there, would the financial situation be different today?
E-books, e-readers? They're here today and we're trying to grapple with the issues about what to do with them and how to use them in our libraries. Before we get too far down the path, I suggest we have some in-depth conversations with end-users.
Wednesday, August 26, 2009
Library Software Solutions - We need a higher level of discourse...
Which caused me to stop and wonder; did the OLE group really want comments? Or just not comments from vendors of proprietary software?
“They are so blinded by partisanship that they are incapable of seeing any vices in their own side or any virtues in their opponents….”
- Proprietary software can co-exist with open source. For instance, I'm extremely proud of what Ex Libris has done in supporting open source software. While I understand those of the “pure open source” camp will still find things to criticize in what I'm about to say, the facts are that Ex Libris has:
- Opened its software platforms to support open source extensions
- Participated in standards meetings to support the DLF API initiatives.
- Sent speakers and attendees to open source conferences around the world to both learn and present.
- Encouraged community-based software development.
- Strongly supported standards and standards organizations.
- Provided financial support to the open source community via direct financial contributions to the OSS4LIB conferences.
- Organized meetings for open source developers where Ex Libris developers participate to learn and share how our open platform can be utilized to further support open source development.
- “For-profit” is not bad. This is a cornerstone of our economy and our society. While I note a trend in many open source and even general library conversations to equate the words “for-profit” with “greed” and “bad”, the reality is that this is a diversionary tactic and serves no real purpose.
Many universities and educators benefit directly from “for-profit” companies via their endowments and pension funds, both of which invest in, and hope for a good return from, these kinds of investments. (It reminds me of those who say they don't want government health care, but don't you dare touch my Medicare!)
The reality is that good and successful companies listen to their customers and supply products/services that those customers need and will buy, or else -- pure and simple -- they go out of business.
Pricing of those products is always a discussion point and likely will continue to be. I remember what one company president I worked for said when asked how he arrived at a product price. His answer was “somewhere between what it costs to produce and what the market will bear”. If anyone thinks that libraries could previously, or can now, bear high profit margins, please tell me how to transport to your world. It's not one I've lived in for the last several decades.
I've noted studies showing that the costs of open source products and proprietary products usually turn out to be equal when all aspects of their production and implementation are factored into the equation. I've heard vendors of open source solutions say the same thing. When it comes to cost, it's just a difference of where the money will be spent.
- Competition is good. Let me be clear: we welcome OLE in the marketplace. As I said in my original post, we see much merit in this project. The OLE work will make for better solutions across the board. Yes, it's a different model of producing software than ours, but that doesn't make our model wrong and it doesn't make the open source model right. The two methods are just different; each has advantages and disadvantages that should be weighed by customers to find the one that best suits their specific needs. I agree, it's a big market. There will be alternatives. We'll each represent what we see as the advantages of our solution. Let's agree to let the customers decide.
- Responsibility belongs to all of us. The current situation of libraries is no more the fault of proprietary software vendors than it is of librarians or any other single player. It's a complex world with many factors at play.
Open source software organizations understand, as do proprietary firms, that ultimately libraries will determine their own fate. Their willingness to define a compelling vision of their role in the future is the key to their survival. (See my posts “The future of research libraries” and “Libraries; A silence that is deafening”.) As software developers, we offer a variety of tools and solutions to meet that vision.
I think we'd be best served by allowing libraries to focus on the larger issues at hand. We can all do that through intelligent exchanges with clear statements of advantages and benefits.
- Discourse is important. We at Ex Libris have learned a lot from the open source software movement. There is much we admire in this movement and have moved to incorporate into our products and initiatives in order to benefit our customers. If it benefits our customers, we understand that it benefits us as well.
I wrote my OLE post because I thought it was an important topic and I wanted to share my experience, my view, and what input I could give to the group running the project and to those who wish to use the resulting product. It was never meant as a set of statements intended to foster fear, uncertainty or doubt. If we are wrong in our approach, then I would encourage discourse that helps us understand why. If we're right (and let's recognize that companies like ours have been producing software for this marketplace for decades, so surely we know a few things that would benefit the OSS developers), then perhaps our thoughts can be accepted as constructive input.
Tuesday, August 25, 2009
The Future of Research Libraries
As noted, it is a work-in-progress. The version I read is Version 0.6 and, admittedly, some chapters are still a bit choppy. However, even in this form, it should be made mandatory reading in every graduate library science program. Also, any academic librarian wanting to see a pathway forward that isn't centered on cutting services and collections would be well served to read this book right now.
The author, Adam Corson-Finnerty is the Director of Special Initiatives for the University of Pennsylvania Libraries. That institution is indeed fortunate to have someone like this on their team. Libraries need more people like this. He also writes a blog where clearly many of the ideas in this book started out. Excellent reading, both the blog and the book. Check them out.
Monday, August 17, 2009
Importance of content and vendor neutrality in software solutions--what will libraries choose?
Today, our technology tool sets include Web services, cloud computing, SaaS, grid computing, mobile devices, etc. -- all of which have made possible a whole new way of thinking about library systems/services. As an aggregate, they also raise new issues that will cause libraries to rethink topics like data privacy, conflicts of interest, and market dynamics in ways that have never previously been of concern.
There are several efforts underway, including Ex Libris URM, OLE, and OCLC WorldCAT, that have outlined plans for next-generation systems/services utilizing at least some, if not all, of these technologies. With these new technologies come all kinds of new questions and interesting topics for consideration, many highlighting the complex decisions libraries will be making in the next few years. Coming hard on the heels of some record usage policy debates, the inevitable questions arise regarding what might happen to an even bigger body of data resident in, for instance, a new OCLC-hosted ILS. Will these systems force librarians to again think long and hard about data privacy and record ownership issues? Will putting the entire body of patron, usage and budget data resident in today’s library ILS into the hands of a vendor that also licenses and prices content and has third-party relationships with publishers and content providers raise some concerns? Not just among librarians, but in the libraries and larger institutions/organizations that they serve?
A similar tangle arises when a single vendor controls all of the pieces of a solution: the discovery interface, the database(s) and their access, and electronic resource management. Companies like these offer services that allow a library to license, record, discover, and access intellectual content all on a single vendor-hosted platform. The convenience and cost factors are highly touted, as all services are provided courtesy of new technologies unknown just a few years ago. It all sounds too easy, and it is -- especially if libraries don’t stop to consider the implications. For example, should a library be concerned about the privacy and exclusive usage of all of its data? If a vendor produces original content, offers access to a database via a hosted service, provides discovery of its own databases, and houses usage and cost data and license terms for both its own content and other vendors’ content, have we crossed a line that should be of concern not only to libraries, but also to other content publishers? It would seem to me that we should all be very concerned. When one solution provider suddenly has control over all facets of the solution you’re using, and significant parts of its competitors’ solutions, you, as the end customer, have lost substantial negotiating power. Firms that compete with these suppliers are also handicapped in that they’ve handed key critical usage information on their products to their very competitors. This information could be used by the solution vendor to modify pricing and packaging choices in ways that won’t be favorable to the library.
The OLE and Ex Libris URM projects continue to sustain the vendor and content neutrality that has been a hallmark of traditional library software, updated to use newer technology. It will be fascinating to see what values libraries choose to prioritize: perceived low cost and convenience, or content and vendor neutrality, i.e. the ability to negotiate low prices, coupled with the traditional need to protect privileged data? It's an important decision.
Tuesday, August 11, 2009
He's back!
As a starting point, consider the technological leaps made by the iPod, which launched in October 2001. We’ve seen new versions and models virtually every year since, each offering major new features and technology. As a result, these devices have become ubiquitous: according to Wikipedia, over 200M iPods of one variety or another have been sold since their introduction, and the number keeps growing.
Now, consider the most popular e-book reader, the Kindle. The first one was introduced in November 2007 and today, almost two years later, we’ve seen two additional new versions – each offering substantial new feature sets. It is estimated that 500,000 have been sold thus far and by 2010 it is projected that over 3M will have been sold. I have no doubt, many of the issues/concerns we hear today, from people like Baker, will be taken as input by the various manufacturers and will be used to rapidly improve their products.
When talking to librarians about these devices, I frequently encounter the point of view that “It’ll never replace books” or “The book is a perfect technology – widely usable, no power needs, it feels and smells good,” etc., etc. However, I think this is a black and white view. It is also a denial of the inevitable. I read somewhere that paper is a technology and like all technologies it too will have an end-of-life. Until that day is fully realized, as librarians we should look at these devices and ask ourselves the following questions:
- If I can have a book/magazine/newspaper delivered wirelessly to the device in my hand in less than 60 seconds and for a reasonable charge, why should we expect users to go to the library or use inter-library loan?
- If I check out a book at the library, can I plug a headset into it and have the book automatically read to me?
- If I’m reading a book from the library, can I instantly change the font size of that book to one more comfortable for my tired eyes?
- Can I keyword search the book in my hand, and every other e-book I own, all at the same time, with one simple search?
- Can I carry 1,500 books in the same space as one printed book normally takes?
“New e-readers are leading the way to a future in which your local library is the solid-state drive in your hand” (Candice Chan, Wired Magazine, May 2009).
Steven Johnson, in the Wall Street Journal of April 20, 2009, in an article entitled "How the e-book will change the way we read and write", made some very interesting observations. If you haven’t read this article, I highly recommend it. It offers a view of the future of this technology (and one observation, about mining bibliographies, is concrete enough that I sketch it in code after these excerpts):
- “It will make it easier for us to buy books, but at the same time, make it easier to stop reading them.”
- “Print books have remained a kind of game preserve for the endangered species of linear, deep-focus reading.”
- “2009 may well prove to be the most significant year in the evolution of the book since Gutenberg …”
- “Think about it. Before too long, you’ll be able to create a kind of shadow version of your entire library, including every book you’ve ever read – as a child, as a teenager, as a college student, as an adult. Every word in that library will be searchable. It is hard to overstate the impact that this kind of shift will have on scholarship. Entirely new forms of discovery will be possible. Imagine a software tool that scans through the bibliographies of the 20 books you’ve read on a topic, and comes up with the most cited work in those bibliographies that you haven’t encountered yet.”
- “Reading books will become … a community event, with every paragraph a launching pad for a conversation with strangers around the world.”
- “The unity of the book will disperse …”
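Johnson's bibliography-scanning idea is concrete enough to sketch in code: count citations across the bibliographies of the books you've read and surface the most-cited work you haven't encountered yet. The data here are toy placeholders; a real tool would mine the searchable "shadow library" he describes:

```python
# Toy sketch of the bibliography-mining tool Johnson imagines.
from collections import Counter

def most_cited_unread(bibliographies, books_read):
    """bibliographies: one list of cited titles per book read; books_read: set of titles."""
    citations = Counter()
    for bib in bibliographies:
        citations.update(bib)
    for title, count in citations.most_common():   # most-cited first
        if title not in books_read:
            return title, count
    return None

books_read = {"Book A", "Book B"}
bibliographies = [["Book B", "Book C"], ["Book C", "Book D"], ["Book C"]]
print(most_cited_unread(bibliographies, books_read))   # ('Book C', 3)
```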
Nicholson Baker will be back again and again, every time he sees new forms of information consumption that he feels threaten traditional printed books and traditional librarianship. Obviously, as a librarian, I think we need to embrace this new e-book technology and ensure that we develop and put into place ways to work with it and offer librarian services within it. This evolution in technology presages new dimensions in information consumption and utilization. As a result, librarians will have some new tools in their toolbox, and others we need to develop. If you want to see how some of your peers are working with this new technology, check out this blog entry. If you haven’t started, maybe it’s time? While Nicholson Baker will be back, I'd like to make sure librarianship never goes away.
(As an interesting follow-up, read the post titled "Ebook growth explosive; serious disruptions around the corner", which puts some numbers on the growth rates in ebook sales and also talks specifically about library sales of ebooks.)
Tuesday, August 4, 2009
OLE; The unanswered questions
After returning from a vacation following ALA, I read the summary of the recently issued draft Final OLE Project Report. While there is much to be admired in what the OLE project has achieved, it is also important to note that OLE is neither the first organization to define these goals, nor does OLE represent major unique or innovative technology. Furthermore, the report leaves some important questions unanswered that anyone thinking about investing in this project should first demand answers to.
Robert McDonald, Associate Dean for Library Technologies at Indiana University, said in an email about the project:
"The goal is to produce a design document to inform open source library system development efforts, to guide future library system implementations, and to influence current Integrated Library System vendor products."
The project's stated intent is
“to design a next-generation library system that breaks away from print-based workflows, reflects the changing nature of library materials and new approaches to scholarly work, integrates well with other enterprise systems and can be easily modified to suit the needs of different institutions.”
The next steps, according to the OLE document, are to start
“talking with senior administrators, both internal and external to OLE, to identify those institutions that wish to develop a proposal to carry the project forward into the next phase of building the software. OLE participants also have begun discussions with selected software vendors to explore how they might participate either in software development or software hosting and support as the project continues.”
This statement seems to be a bit at odds with the goals outlined and discussed above. If the goals are to influence and guide current ILS development and/or inform OSS development efforts, then developing the software itself is indeed a very large step in a different direction, and it skips an equally important step. So what’s missing? Creating the business model that surrounds this development effort. This is no small task, but it is a critically important one. If the OLE project were a new startup investment opportunity, investors would want assurances that the money being invested would result in a product/service providing a measurable return, year after year, for a reasonable amount of time.
To do that, the business model would need to answer some very tough questions:
- What is the target market for this product? In reading the document as currently drafted, one finds a high-level description (framework) that will appeal to most librarians conceptually. It is clear from the document that the goal is a very wide adoption rate for the resulting product. However, it is missing the functional details needed for any specific library to be able to clearly say this product will work for them. Now, if the point of the effort is to guide and inform, the document as it exists is fine. But if it is meant to result in a final product, it needs to be considerably more specific. This is where involving vendors that have developed products for the library market will be very important. Vendors that have developed automation products for this market will undoubtedly point out that the devil is in the details. Research libraries are different from academic libraries, which are different from public libraries, which are different from… you know what I’m saying. Each of those segments requires different functionality and workflows. The OLE Plan states that the ensuing product will be able to accommodate flexible and more modern workflows. What is not clearly stated is that putting those pieces in place will be left to the institutions that adopt the product. Those institutions will need to factor the time, money and resources required to add that specific functionality into their cost considerations for adopting this development project/product.
- Who are the competitors? Clearly, there are already competitive products emerging. Ex Libris is developing URM (as mentioned above), its next-generation automation product. OCLC is discussing and developing extensions to WorldCAT. Others are also working towards goals similar to those outlined in the draft Final OLE Project Report. A comprehensive list should, to the degree possible, be identified and listed so that potential partners understand the competitive landscape this product will face.
- How much of the market do the organizations above have, or are they going to take? How much is OLE hoping to take? Once the competitive solutions are identified, some projections of market share should be developed for all the identified products. Why? Because it needs to be understood that if you have a potential market of “x” libraries (just for discussion's sake, let's say 120 ARL institutions) and OLE hopes to obtain a market share of 20%, then the total potential pool of participating institutions is 24. So when the final costs to fully develop this product, place it into production and maintain it are calculated, those 24 institutions must bear them. (For example, if the projection is that it will take $5.2M to build the product, and let's say it takes another $5M of development by build partners to bring it to production status, plus an annual recurring cost of minimally two programmers per institution at a total of $150K, we're looking at an annualized cost of nearly $500K per institution before deducting any grant funding the project might obtain; a rough worked version of this arithmetic appears after this list.) These are big and important numbers that need to be known by any institution that might wish to participate in either the development or adoption of this product.
- How many institutions are actually going to put OLE into production status? (Remember, we’re talking about an “enterprise”-level application here, so institutions have to be willing to bet the future of their organizations on the final result.) There are many open source projects in libraries today. Some run in test/development mode for years with no clear date identified for when they will become “production” products. While it is equally true that many OSS products are in production status, without knowing when a product will be "done" and for how long money must be poured into development, it is extremely difficult to build a business case that shows a useful time frame for a return on investment.
- How much money are those institutions going to have to put behind that adoption in order to make it an enterprise-level, production-ready product? While the answers will be projections at best, it is important to factor them into the business model, normally at several different levels of adoption, so that institutions considering the solution have a comprehensive understanding of how costs might change depending on what happens.
- How will that product sustain itself for some defined amount of time (usually 5-10 years)? The current draft Final Report begins to outline the plan for achieving this, but again, a range of numbers needs to be applied for a realistic assessment to be performed (i.e. if only 50 institutions adopt it will cost “x”; if 1,000 adopt it will cost “y”).
- What are the risks? Risk identification is an important part of making any investment. Some of the risks that surround OLE include:
- Given the scope of what is being proposed and the competitive environment in which the product will exist, can this product develop a large enough following of developers to sustain it in each market segment in which it aspires to compete? The reality is that the library market is one of relatively finite size and given the current economic conditions, the number of institutions that can afford to sustain a staff of developers is shrinking. Given all the other OSS efforts underway, is there a large enough community that will be willing to devote time, energy and resources to this product?
- The investment represented both by those institutions that will be build partners and those that will end up tailoring the product to meet their needs is very large. A lot of the money to be applied here might come from the Mellon Foundation, a terrific organization that has done more for libraries than can be measured. Yet, someone needs to ask: Is this the best use of that money? Especially when there are clearly competitive products emerging, many of which come from organizations with proven track records in developing this kind of technology. What is the probability of success for this startup effort? What if it fails?
- The real point here is that risks need to be identified, measured and factored into the investment analysis.
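For what it's worth, here's a rough worked version of the cost arithmetic from the market-share question above, using only that question's illustrative figures; where the annualized number lands depends on how the one-time share is spread across years:

```python
# All figures are the post's illustrative assumptions, not real quotes.
build_cost = 5_200_000            # projected cost to build the product
production_cost = 5_000_000       # further development by build partners to reach production
adopters = int(120 * 0.20)        # 20% share of 120 ARL institutions -> 24

one_time_share = (build_cost + production_cost) / adopters   # shared development cost
annual_staffing = 150_000                                    # two programmers per institution, per year

print(f"Adopting institutions: {adopters}")                                             # 24
print(f"One-time share per institution: ${one_time_share:,.0f}")                        # $425,000
print(f"First-year outlay per institution: ${one_time_share + annual_staffing:,.0f}")   # $575,000
```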
This is tedious stuff. The answers to these questions will probably not be given by the same people who wrote the draft Final Report document. However, these answers will most probably determine the overall direction and success of Project OLE, either as a guiding, influencing, or development force in library automation.
The final question I think anyone responsible for making an investment decision in terms of building OLE should ask themselves is this: If I were investing my own money in a company that said they were going to build OLE, would I do it? If not, I think you know what you should do when it comes to your organization’s money and OLE.