Wednesday, April 27, 2016

The OLE Merry-Go-Round spins on…

The news about the OLE (Open Library Environment) project has prompted two reactions from me.  First, disappointment that my long-standing concerns about this project have proven correct; and second, dismay that the profession of librarianship has seemingly forgotten what we know and teach our communities about accessing and using existing knowledge to perform critical analysis in support of creating new knowledge. Such is apparently the case with the announcement that EBSCO will support a new open source project to build a Library Service Platform (LSP).

Marshall Breeding has done an analysis of this news on the American Libraries Magazine website.  If you haven’t read it, I suggest you do so, as it will give you a solid foundation for the rest of this post. Return here when done.

Now let me say right up front, there are some very encouraging facets to this announcement.  These include:
  1. The involvement of some organizations with considerable business skills and savvy.  Both EBSCO and Index Data have been in business a long time and bring some much-needed business analysis skills to the table.  This is good for the OLE project, because those skills have been sorely missing for a very long time.
  2. The fact that EBSCO is apparently pivoting in a substantial way to support open source software for the community.  I’m cautiously optimistic about this move.
  3. There are few people in the Open Source Software (OSS) business I respect more than Sebastian Hammer and Lynn Bailey.  Sebastian in particular was doing open source software before most librarians even knew what the term meant.  I’ve partnered with him in past business projects and know his expertise to be amongst the best in the field.  Lynn brings business skills to the equation and together they form an excellent duo.  They have numerous OSS success stories to point towards.  This is good for the OLE project. 
  4. At a presentation at the recent CNI Membership Meeting in San Antonio, in a session led by Nassib Nassar, a Senior Software Engineer at Index Data, he discussed their plan to use microservices as the foundation for this new project.  Microservices architecture (an evolved iteration of SOA) focuses on small, loosely coupled software services.  A very good explanation of microservices can be found here.  This is a promising architecture that has been evolving over the past several years and certainly might have applicability for future library software projects (see below for more on this point).
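To make the microservices idea concrete, here is a minimal sketch in Python.  It is purely illustrative and is not based on any actual OLE or Index Data design; the service names and message formats are hypothetical.  The point it demonstrates is the loose coupling: each “service” owns one narrow capability and communicates with others only through a small, well-defined message contract, so each could be developed, deployed and versioned independently.

```python
import json

class CatalogService:
    """Resolves titles to item barcodes; knows nothing about loans."""
    def __init__(self, records):
        self._records = records        # title -> barcode

    def handle(self, request: str) -> str:
        msg = json.loads(request)
        barcode = self._records.get(msg.get("title"))
        return json.dumps({"barcode": barcode})

class CirculationService:
    """Tracks which items are checked out; knows nothing about titles."""
    def __init__(self):
        self._loans = {}               # barcode -> patron id

    def handle(self, request: str) -> str:
        msg = json.loads(request)
        if msg["op"] == "checkout":
            self._loans[msg["barcode"]] = msg["patron"]
            return json.dumps({"ok": True})
        if msg["op"] == "status":
            return json.dumps({"loaned": msg["barcode"] in self._loans})
        return json.dumps({"ok": False, "error": "unknown op"})

# The only shared surface between the two services is the JSON message
# format -- in a real deployment these calls would travel over HTTP.
catalog = CatalogService({"Moby-Dick": "B1001"})
circ = CirculationService()

barcode = json.loads(catalog.handle(json.dumps({"title": "Moby-Dick"})))["barcode"]
circ.handle(json.dumps({"op": "checkout", "barcode": barcode, "patron": "P42"}))
print(json.loads(circ.handle(json.dumps({"op": "status", "barcode": "B1001"})))["loaned"])
```

Note what the sketch also implies: a complete library system would require many such services, plus the core infrastructure that routes messages between them, which is exactly the scale question raised below.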
Now for a little history on the OLE project: Back in August 2008 (note: that was nearly EIGHT years ago), according to the press release, the Mellon Foundation provided an initial grant of $475,000 to support OLE.  The announcement said: “The goal of the Open Library Environment (OLE) Project is to develop a design document for library automation technology that fits modern library workflows, is built on Service Oriented Architecture, and offers an alternative to commercial Integrated Library System products.”

You’d be forgiven if you think that announcement sounds amazingly close to the most recent one, which says: “It carries forward much of the vision… for comprehensive resource management and streamlined workflows.” You’d also be forgiven for thinking that after eight years, we might expect something more.

But for now, let’s work our way through this announcement, using what we do know about the history of library automation systems, in order to pose some questions I really think need to be asked:
  1. Is the existing OLE code base dead?  Marshall might have been a little too politic in his article, but I’ll say the obvious: after eight years (2008-2016) of development and grant awards totaling (according to press releases) $5,652,000 (yes, read that carefully: five million, six hundred and fifty-two thousand dollars) from the Mellon Foundation, plus who knows how much in-kind investment by affiliated libraries (through the cost of their staff’s participation in the OLE project), it has all resulted in what Marshall points out in his article: “EBSCO opted not to adopt its (Kuali OLE) codebase as its starting point.” And the “Kuali OLE software will not be completed as planned,” but will be developed incrementally to support the three (emphasis my own) libraries that currently use it in production; “it will not be built out as a comprehensive resource management system.”  For those of you not experienced in software development, that phrasing is code for: “it’s dead.”  They’re going to start over from scratch. Sure, they’ll reuse the use cases, but for well over $5.5M, we should have expected, indeed demanded, a lot more. Let’s also remember that a number of very large libraries, all over the country, delayed their move to newer technology while waiting for OLE.  They stayed on older, inferior ILS systems, and they and their users suffered as a result.  How do we factor that cost in?  Now, sure, we’ll call this new project OLE to paper over this outcome, but folks, please, let’s be honest with ourselves here: OLE has failed, and it has carried a huge cost.
  2. Do we really need microservices?  Yes, it’s the latest, greatest technology.  But do we need it to do what we need to do?  And do we fully understand all the impacts of that decision? What value does it bring that we don’t get from existing technology?  Is it proven, using open source software, in a market our size?  (Yes, Amazon uses it.  But Amazon is a huge organization with huge staff resources to devote to this.  Libraries can make neither claim.) We must answer the question: what is the true value of building a library system on this foundation? What will libraries be able to do that they can’t do with current LSP technology?  Why should we take this risk?  Do we really understand the costs of developing and maintaining software built this way? Do we really want to experiment with this in our small and budget-tight community?
  3. Governance – Haven’t we been here before? What’s different?  A new Open Library Foundation is being envisioned to govern OLE.  But hasn’t this been tried?  I thought the Kuali association was put into place because the financial demands and overhead of running a non-profit organization were too taxing on the participant organizations?  From that viewpoint, the Kuali association made a lot of sense.  But now the libraries are going to return to a separate foundation?  Why is it going to work this time when it didn’t previously?  Because we have vendors at the table?  Because we think we’ll enroll more participating organizations?  (See later points on this subject.)  Because we found out that charging libraries to be a full participant in an open source software project didn’t fly with the crowd?  Given that library budgets and staffing are stretched to the limit, what is the logic that suddenly says we now have the capacity to take on this new organizational overhead?  I admit I’m totally mystified by this one. This choice seems to have an incredibly low probability of success.  The merry-go-round continues…
  4. So, OLE will again be aimed solely at academic libraries?  This new project is, once again, focused on academic libraries. This is good.  And it’s bad.  It’s good because, as I’ve argued countless times in this blog, success in a software project depends on building a solution that addresses a market need so thoroughly and successfully that it finds widespread adoption as early as possible within that segment.  Then, and only then, should a project branch out to address related segments.  To do so too early can result in lower adoption rates (see OCLC’s WorldShare, a product trying to address too many markets concurrently, and its resulting low adoption rate in academic markets.  Compare this to Ex Libris’ Alma, a product focused on academic libraries that is experiencing significant success as a result).  The reason this focus is bad is for the reasons I pointed out, back in 2009, in this blog post.  Back then they also focused on academic markets, but I questioned how they would add additional market segments, what the competitive positioning and market share would leave for OLE, and whether that would be enough to sustain the product and/or its development.  Again, in 2012, I did an analysis of OLE in which I also questioned the chosen architecture, saying: “OLE is going to miss out on the associated benefits of true multi-tenant architecture.”  Well, here we are anywhere from four to seven years later, and it appears those concerns were entirely correct, i.e. the choices made were wrong.  It gives me little satisfaction to say this, but I think people ignored the obvious.  Given this most recent announcement, I’m concerned, once again, that the merry-go-round is going to continue.
  5. Multi-Tenant – redefined? The choice of microservices as a new architecture is definitely interesting.  But it has some implications I don’t think many fully understand.  This new version of OLE, based on microservices, will, quoting Marshall’s article: “provide core services, exposing APIs to support functional modules that can be developed by any organization.”  Let me share my interpretation of that statement: what will be delivered in the first release is probably a very basic set of services, and exactly what that will include needs to be very openly and transparently communicated to the profession ASAP.  Without that, there is no way to know whether the first release will cover basic description processes and fulfillment (circulation), or whether it will be just a communication layer on top of databases, on top of which users will have to write additional microservices to provide each of the following: selection (acquisitions and demand-driven acquisitions (DDA)), print management (circulation, reserves, ILL, etc.), electronic resource management (licensing, tracking, etc.), digital asset management (IR functions), metadata management (cataloging, collaborative metadata management) and link resolution (OpenURL).  As I’m sure you realize, that’s a lot of additional microservice code that someone is going to need to write to make a fully functioning system.  Plus, I’ll just say, as someone who has been involved in software development for nearly three decades, I find it hard to believe that you can write all these additional related microservices and not need to change the underlying core infrastructure microservice.  Or, at the very least, at the start of designing that core infrastructure, you would need to know in some detail how those other pieces of code are going to work, so you can provide the supporting and truly necessary infrastructure calls/responses back to the related microservices.
If that doesn’t happen, then every time a major new microservice is developed, that core will have to be modified and updated. So, why am I saying multi-tenant appears to need redefinition in this model?  Multi-tenant means there is one version of that core code, perhaps running in multiple geographic locations for failover reasons, but the same exact code running everywhere.  That model brought us the capability to move forward in some big ways: establishing best practices, compiling transactional analytics that allow global comparisons of libraries’ effectiveness and, as mentioned above, real failover capabilities, which, given global weather conditions, are becoming more and more important.  But now, with the microservices version of multi-tenant LSPs, we’re back to everyone customizing their implementation, and only the common shared core code remains truly multi-tenant.  Everybody else is doing something different.  That’s great for allowing customization to unique institutional needs, but it sacrifices many of the benefits of true multi-tenant software design.  Plus, given the competitive nature of vendors in our marketplace, I have a very hard time believing for a second that one vendor will agree to serve as a failover location for another vendor running the service.  Maybe, but I’m definitely not holding my breath.
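For readers less familiar with the term, here is a minimal sketch of what “true multi-tenant” means in code.  It is illustrative only (the class and field names are hypothetical, not any vendor’s schema): ONE shared code base and data store serve every library, with a tenant identifier scoping each request to that library’s data.  Because all tenants run the same code against the same store, cross-library analytics and uniform upgrades come almost for free.

```python
class MultiTenantLoanStore:
    """One deployment, one schema; rows are partitioned by tenant ID."""
    def __init__(self):
        self._rows = []                # (tenant, barcode, patron)

    def checkout(self, tenant, barcode, patron):
        # Every write is tagged with the tenant it belongs to.
        self._rows.append((tenant, barcode, patron))

    def loans_for(self, tenant):
        # Every read is filtered by tenant, so each library sees only
        # its own data despite the shared storage.
        return [(b, p) for (t, b, p) in self._rows if t == tenant]

    def total_loans(self):
        # A cross-tenant analytics query is trivial because everything
        # lives in one place -- the "transactional analytics" benefit.
        return len(self._rows)

store = MultiTenantLoanStore()
store.checkout("library-a", "B1", "P1")
store.checkout("library-b", "B2", "P9")

print(store.loans_for("library-a"))
print(store.total_loans())
```

Contrast this with each library running its own customized fork of the service: the per-library isolation survives, but the single upgrade path and the easy global analytics do not, which is the trade-off described above.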
  6. 2018?  Who are we trying to kid? Marshall’s article contains another key phrase:  “Kuali OLE, despite its eight-year planning development process, has experienced slow growth (emphasis my own) beyond its initial development partners, and it has not yet completed electronic resource management functionality.”  Indeed, that would be true.  At the time of this writing, there are three (yes, you can count them on one hand and have fingers left over) sites in “production” mode, which apparently means production minus the capability to handle electronic resources (a fairly major operation in academic libraries, wouldn’t you agree?).  So I will admit I nearly fell out of my chair when I heard it said at CNI (later confirmed in Marshall’s article) that they expect to have an initial version of the software ready for release in early 2018.  My goodness.  Please pass me some of whatever you’re drinking, because it sure must be a good energy drink, or more probably a hallucinogen!  Some points to consider here:
    • OLE was worked on from 2008-2016 and is still missing functionality.  It was, as mentioned above, put into production by three libraries.  However, there were, according to the website, 15 partners, although two of those were EBSCO and HTC Global, vendors with an interest in the code.  I believe that’s out of 3,000+ academic libraries in North America?  Slow growth indeed.
    • HTC Global was hired as a contract programming firm to expedite the development of the code, precisely because the number of programmers needed to do the project in a timely manner was NOT available from the library community at large.  Do these people really, REALLY think that because they’ve now broadened the scope, libraries are going to assign their limited (and frequently non-existent) programming resources to this project?  I probably have a one-year backlog for my programming staff before I could even think of assigning resources to this project -- and that’s in a research library.  As I keep pointing out to my colleagues when discussing open source projects, we have to remember many academic libraries have NO, zip, zero, zilch programmers on staff.  Where, oh where, do they think they’re going to find the needed programmers to get this massive project done? I’ll say the obvious: it won’t happen.
    • As noted above, what will likely come out in the 2018 version of OLE is just the core code.  So, add a lot of time for those additional and oh-so-necessary microservice modules needed to make this a complete product.  Index Data really needs to be transparent (by posting on their website) about exactly what the actual deliverable of v1 will be.  Libraries need to know what they will have to build on top of it as additional microservices (think of microservices as functional modules). This will clearly mean extra cost, and probably extra time, to get to “complete” (maybe some of it could be done in parallel with careful planning).
    • Let’s also remember that the definition of a “complete” product is ever evolving.  Even if they could get something out in two years, WorldShare and Alma are not going to sit still; they’ll be 24 releases further down the road.  So “complete” is a moving target.
    • Let’s also take a moment to study some historical data here.  The Library Journal annual automation reviews have sometimes provided staffing analysis (the last staffing report was in 2013) for the firms involved.  If we look at the major players that have tried to develop a “true” Library Service Platform (Ex Libris, OCLC and Serials Solutions), we see reported staffing numbers of between 130 and 190 people (granted, they were working on more than just the LSP within their organizations, but you can bet the majority were working there).  One of those projects (Intota by Serials Solutions) never made it to the street.  Two (WorldShare by OCLC and Alma by Ex Libris) did.
  7. Do we have options here?  Of course; there are always options.  At least two come to mind, and I’ve certainly advocated versions of them before:
    • Librarians have already worked together in a collaborative to create a Library Service Platform.  It’s called WorldShare, and it has been developed by OCLC.  Librarians need to collectively call upon the collaborative they theoretically own, and help govern, and say: “We want to make WorldShare open source.”  It certainly has issues, but it’s a far more realistic vision than the one being described for the next generation of OLE.  The microservices could then be extended out from a solid, true multi-tenant platform with real APIs.  For that matter, if those microservices worked with WorldShare, then, provided similar APIs were supported by Ex Libris (or could be added), there should be no reason those very same microservices couldn’t also work with Alma.  This would broaden the adoption base for the microservices, and thus the support for them.
    • Again, let’s take a moment to examine history and see if there is something we can learn from it to apply in today’s situation.  For instance, look at the history of some of the early vendors in the library automation space:  NOTIS started out with a pre-NOTIS real-time circulation module. Data Research started out in libraries with a newspaper indexing module, which eventually gave way to ATLAS (A Total Library Automation System). And Innovative Interfaces, per their website: “Innovative provided an interface that allowed libraries to download OCLC bibliographic records into a CLSI circulation system without re-keying.”  The point is this: none of these systems started out as a comprehensive, do-it-all solution.  They started out as niche products and responded to market opportunities until they ended up shaping the products we have today. If OLE wants to build a comprehensive library service platform, they clearly can’t do it in the time frame needed.  So, instead, they need to start with a niche product that addresses a key market need (perhaps managing research data? providing a citation tool for research data? a library linked-data solution that integrates with existing search engines?), and then drive that product to a leading position in the market.  Only THEN should it start moving sideways to encompass other functionality such as is found in an LSP.
Some remaining questions

Of course, this announcement is being positioned as a big step forward and a positive development.  But it seems to me that in addition to the questions posed above, there are some additional tough questions to be asked before the profession blindly plunges ahead here:
  1. Why did OLE fail?  There was a LOT of time and money spent to produce, essentially, “use cases.” Do we really understand what went wrong and what needs to be done differently?
  2. Why did the foundation model/associations fail?  What will be different this time?
  3. Are we entrusting this new version of OLE to the same administrative people who did the previous version? Why?  Don’t we owe it to ourselves to think carefully about the leadership of the project? Is the addition of Index Data and EBSCO enough? We need to think carefully about both governance and administration.  What will everyone do differently to ensure the project’s success this time?
  4. What are the lessons to be learned about open source development of an enterprise module?  Is the library community truly large enough and well-resourced enough to support the development of an enterprise, foundational module for libraries such as the Library Service Platform?  (It would appear not. I’m willing to be convinced with appropriate data, but I warn you, that’s going to be a tough sell!)
Librarians are some of the most wonderful, positive people in the world.  But this is a time when the rose-colored glasses need to come off; we need to ask these serious questions, get some thoughtful answers and do some serious analysis.  We should use our existing knowledge base to determine the best path forward.  Otherwise this crazy merry-go-round called OLE is just going to keep spinning in a circle with no real forward progress.  We can’t afford for this to happen again.

Thursday, October 8, 2015

Another perspective on ProQuest buying the Ex Libris Group.

The dust has settled a bit, and I’ve had the opportunity to talk to senior executives at both ProQuest and Ex Libris Group about the recent announcement that Ex Libris has been acquired by ProQuest.  Now it’s time to sit back and start analyzing what has just happened to a couple of the major suppliers of library automation, because by any measure, this was a BIG event.

I wrote a series of posts about Library Service Platforms several years back (2012). They apparently met a real need in the profession, as those posts have been viewed over 40,000 times since they were posted. The first post in that series is still very valid, but much of what I said about the companies in subsequent posts has since changed. Of the companies I wrote about then: VTLS was sold to Innovative; Kuali/OLE has gone through massive changes in structure and backing (it’s open source, but not totally, at least not by the classical definition); WorldShare by OCLC has matured a great deal, but the organization behind it is still convulsing with changes under the new OCLC leadership; and finally, there is Sierra by Innovative, which now seems to be in a very questionable spot.

In fact, when it comes to Innovative, I’m predicting we’ll see ownership changes at that company as soon as they can be arranged.  You simply don’t force out the CEO on a day’s notice and install a new CEO from the equity-owner company with any plan other than finding out how fast you can sell the company.  The problem for Innovative (and I told this to the previous CEO shortly after he arrived at the company) is that they’ve stayed with the old architecture way too long.  Now whoever buys the company is going to face the massive task of totally rewriting and/or developing a new platform with a true multi-tenant, cloud-based architecture, i.e. a truly competitive Library Service Platform (see this post for a definition).  That’s a sizeable task that is slow and costly, with a target market of shrinking size.  My guess is the previous CEO was pushing to make that investment, and when the equity owners looked at the cost and the return on investment, they decided to pursue another path with their money.  Do parts of that sound familiar?  Yes, and that serves as an excellent segue back to the ProQuest / Ex Libris announcement.

Now, there have already been a couple of excellent posts that analyze this acquisition announcement in some detail; they do so quite well and are generally very fair.  If you haven’t read the post by Marshall Breeding and the post by Roger Schonfeld, I’d certainly recommend you do so.

Trying not to repeat what Marshall and Roger have said, here’s where my views differ from theirs:

  1. ILSs vs. LSPs.  Integrated Library Systems (ILSs), even when hosted in the cloud, and Library Service Platforms (LSPs) are radically different architectures, with huge implications for the future of library technology and thus libraries.  I detailed all this in a post I’ve already mentioned a couple of times, but it’s worth saying again: multi-tenant software is the future.  Simply hosting multiple virtual instances of an ILS is not an LSP and will not get you where you need to go in another 3-10 years.  It simply won’t. If you go down that path, you’re going to eventually get left behind -- way behind.  If you choose that path, understand it’s only good for the short term. (See my post on the “coming divide” for a full explanation.)  I would also take serious exception to Schonfeld’s belief that libraries may not need this kind of technology in the future because their resources are becoming increasingly digital.  While the latter is true, it doesn’t make the former true.  Most libraries still have massive print collections, and as a recent article in the NY Times described, we’re seeing publishers print more books each year as the e-book business has seemingly hit a plateau, at least for now.  Library management systems will be around for a long time to come.
  2. Content neutrality.  Let’s not lose sight of the fact that we’ve lost another “content-neutral” discovery vendor as a result of this acquisition.  That’s not a good thing for libraries, although most librarians ignore this reality.  In the end, I believe they’ll regret doing so. We’ve had yet another check-and-balance removed from our supply chain. This post explains why content neutrality is so important and why its loss carries a potentially high price for libraries.  So, in this regard, this is not good news.  OCLC, with their WorldCat offering, remains our only content-neutral discovery solution at this point, outside of open source solutions (which don’t have an aggregated metadata database like Primo Central, which provides important functionality for libraries).
  3. Equity Ownership.  Ex Libris is no longer held by equity investors. It’s no secret that I’m not a fan of equity ownership of major suppliers to libraries. I understand how equity ownership works, and I’ve detailed my related concerns previously in this post. Yes, Ex Libris did well under equity ownership, for the very reasons I outlined in my post.  But the fact remains, they could have done even better and invested even more in their products and services had they not been sending so much of their profit to the equity owners.  I’m hoping that with that aspect of ownership now removed from the equation, we’ll see some accelerated product development in some much-needed areas, like the discovery system, course management system integration tools, and some other needed product areas.
  4. Intota’s Future.  Despite what company executives will tell you, Intota has been languishing, and a full product has never been released into the marketplace.  That reality has come at a steep price for ProQuest, as other companies now own large portions of the targeted high-end LSP market.  Of course, one of those products was Alma by Ex Libris, now part of the ProQuest holdings.  So there is plenty of speculation that Alma will become the premier offering and that Intota will eventually fade away entirely, or that its existing functionality will be merged with Alma.  Certainly that’s possible, although company executives deny it and insist the choices will remain.  However, I think there might well be another outcome.  Alma has long been aimed primarily at the academic, corporate and national library markets.  That leaves public libraries and smaller academics thirsty for some competition in LSP offerings tailored more to their specific needs.  They really only have OCLC’s WorldShare at this point, and I can easily see ProQuest re-aiming Intota at those markets.  However, if I were betting, over the long term I’d go with Intota slowly merging with Alma and there being only one platform left, although possibly with two names to accent the different markets being served.
  5. Primo vs. Summon Discovery Systems.  As Marshall pointed out in his post, these products both have large and very devoted installed bases.  Neither product will disappear anytime soon, although pure business logic will dictate that over time, they will slowly meld together from the core outward until they are one.  But this will take many, many years and I’d agree with Roger Schonfeld, the future of discovery systems in general is more questionable than the future of these two product offerings in particular.
  6. Will Ex Libris remain a separate company?  Yes, for now, I think that’s a safe bet.  But it’s important to look at ProQuest acquisition history here and to note that over time, other companies that have been purchased have been slowly absorbed (remember Serials Solutions?) with only the product names remaining as vestiges of those firms.  But for now, yes, it makes total sense for these organizations to largely remain separate.  At least until company cultures are merged, operations are merged and everything is stabilized.
  7. What’s EBSCO’s next move?  Good question.  Clearly both EBSCO and ProQuest are trying to assemble end-to-end technology solutions for libraries.  EBSCO needs an LSP in their offerings.  They might be working on one behind the scenes.  Many people are speculating that buying Innovative or SirsiDynix could be a step in that direction.  It could be, but as I outlined above, it’s a very problematic one, because neither firm’s products have the multi-tenant architecture needed for a real Library Service Platform.  So a total rewrite would be required to turn either offering into the needed solution. EBSCO has a real challenge in front of them.

What’s the bottom line here? I personally have a lot of respect for both of these companies and their teams. From a business point of view, this is a very good move.  Library automation is a tough and challenging field.  These companies have very smart people at the helm. Right now, they have all the right people saying all the right things.  But that’s normal at this stage of an acquisition.  What will matter is what actually happens in the weeks and months ahead, because walking the talk is much harder than talking it.  So stay tuned.

Wednesday, July 15, 2015

Why, oh why, do so many librarians continue to chain themselves to the past??

Ask yourself a question:  Do you believe that the only way we in libraries convey and create knowledge is through reading? Via books? By the written word??  Do you have any doubt that when most people think of books, the term “library” is somewhere nearby in their thoughts? 

I doubt you answered any of the above with a “yes”.  (If you did, please contact me separately because we really need to talk!)

So if you don’t think that way, why, OH WHY do we continue to allow our libraries, our services, our very cause for existence, to be repeatedly tied to the idea that reading is the sole purpose of libraries?!?!?!  Why do we so blatantly reinforce that image?

Now, let me state the obvious here.  There is no question that books have been, for a very long time, and will continue to be, a major vehicle for the transmission of knowledge, whether of fact or of your favorite author’s latest work of fiction.  In all these cases, though, can we agree that the goal is the creation of new knowledge?

David Lankes reminds us in “The Atlas of New Librarianship” that: “The mission of librarians is to improve society through facilitating knowledge creation in their communities.” 

Our mission - knowledge creation.  I agree with that statement.  (If you’re wondering what I’m defining as knowledge, refer to this article, Section 6.)  

But let’s remember all the additional forms in which knowledge exists, is created, curated and transmitted in today’s world: video, photography, sound, software, data sets, webcasts, geo-location files, collaborative rooms, our communities of users/members and, yes, librarians!

Why is this so hard for us to take in and act upon?  You say it’s not?  Well then, please consider the following:

  • Library Promotion.  It’s summertime, so look at your nearby public library’s promotions.  I’m just guessing that the featured event is summer reading.  OK, I can even agree with this, but where are the programs for using a camera to tell a story, for exploring virtual reality as a pathway to a debate with Plato, for working in collaboration with other children to achieve a knowledge goal?  The list goes on and on.  Yet our focus is where?  Books….
  • Face to the World - Physical.  Here’s a photograph of the outside of the parking garage of a Midwestern public library.  I’d say that’s a pretty clear statement of what they think they’re about, and it does a great job of reinforcing that the library is all about… books.

  • Face to the World - Virtual.  Take a look at most any library’s website, discovery system or OPAC and apply a really critical eye to it.  (Better yet, get one of your users to sit down beside you so you can see it through their eyes.)  Ask them:  What does it say to you about how your library provides access to existing knowledge?  In what forms or media types?  I suggest you take a look at Harvard Library’s interface as an example of how to do it well; it features a listing in the left column of “books, all databases, articles/journals, news, audio/video, images, archives/manuscripts, dissertations and data.”  Nice.  
  • Knowledge Creation Tools.  What tools does that site provide you to assist you in creating new knowledge?  Do you feel that you can create new knowledge remotely or must you go to the library to do it?  Can you use any device you want in accessing/creating knowledge?  Can you do it from any location you want?  The answers to those questions will tell you a great deal. 
  • Signage/Services.  One of my pet peeves at most libraries is services with names like “reading lists”.  REALLY?  Have you seen a professor who only uses reading to instruct, to teach, to engage students?  Most use lectures (now frequently recorded and online), PowerPoints, webcasts, podcasts, digital content, labs where students must collaborate, writing assignments and, oh yes, some articles and books.  But how many other sensory inputs/stimuli did they also use?  Did we not assist in providing access to all of those?  So why do we only talk about the “read” portion?!  At the very least, let’s agree to change “reading lists” to “resource lists”, along with anything else that is so named.
As librarians we need to move away from branding that ties libraries solely to printed materials.   We are not just about books or journals; we’re about knowledge, and the containers that knowledge comes in are far greater in number than just the printed forms.  

Please, unchain your library from the past.

Tuesday, May 19, 2015

The next step on the path of building a Knowledge Creation Platform

One of the photographs hanging in my office features this quote from Buckminster Fuller:  
"You never change things by fighting the existing reality. To change something, build a new model that makes the existing model obsolete.”  
It’s a line of thinking I adhere to frequently.  So it’ll be no surprise to those of you who have followed this blog that one of my pet projects is not to try to perfect existing discovery solutions, but rather to build a Knowledge Creation Platform.  For a starting description of that concept, see this article.

Now combine that thinking with something I heard back in 2011 at the Charleston Conference, in a session titled “New Initiatives in Open Research” featuring Cliff Lynch and the late Lee Dirks.  Cliff said:  
“If you do the math, you will find horrifying numbers, something like a scientific paper is published every minute or two.  It means you’re buried.  It means everybody’s buried.”  
That fact really stuck with me.

Today, what I see and experience in the University environment is the pace of knowledge creation becoming so intense and so fast that our current tools for researching, building and expressing new knowledge are outdated. And thus the existing processes we’ve automated are equally outdated. This strikes me as a set of problems in need of a very large fix.

So I want to introduce what I think is a very exciting step forward in beginning to address those issues.  What we’re doing is starting a new initiative with the high tech firm, Exaptive.  Here is a video describing their product, with an introduction to the initiative. I encourage you to watch it before reading any further.

Ok, you’ve watched it and now you’re back? Great.

Let me fill you in on the thinking behind this announcement.  Consistent with what I’ve said in articles and related blog posts, I want to provide our library users/members with some substantially different capabilities than those they get when they use a generic search tool like Google Scholar.  As noted in my writings, most discovery/search tools query repositories or databases that contain existing knowledge metadata and content.

Certainly that’s very valuable. But those existing tools make no real provision for analyzing this existing knowledge or drawing correlations between data sources, nor do they suggest overlaps, visualize the results or allow the user to easily bring together the people behind the knowledge found.  The existing tools do not give the capability to start building new knowledge, only to find existing knowledge.  However the Exaptive product moves us down the path towards knowledge creation and more.  The implications of this are really far reaching.

For example, the first project we’re moving forward on is one in which a researcher is looking for a concept, but one that over the centuries has existed in many cultures and languages and under many different terms.   Our researcher knows English and several other languages, but not all the languages in which that concept might be expressed.  So what he is looking for is a tool that will function as an “authority file,” if you will, essentially providing “see” and “see also” references across those many languages.  By using known taxonomies, linked data, and library-related, accessible authority files, we hope to analyze the data sources, then visualize the results to show the overlaps and correlations that exist between them.   We believe that, when data sources are analyzed this way, the Exaptive product should provide tremendous new insight into the topic and field of study.

Another exciting part of the Exaptive product is the ability to create what are called “cognitive networks”: groups composed of the researchers responsible for creating the research and research data found and utilized in the analysis phase.  Unlike social networks, where you have to slowly and manually build your connections or friends, the cognitive network is built automatically as researchers explore, select, filter and analyze the data they need.  The result is that these cognitive networks facilitate a researcher’s work instead of adding to it.  This cognitive network can become a set of collaborators or peers who, if willing, could be focused on analyzing, vetting and refining the new knowledge from inception to dissemination.  (Yes, obviously, trust plays a huge part in this and must be dealt with as part of the model.)  It’s a model that would be more capable of scaling to incorporate the vast amounts of research being conducted today, and it would increase the speed of dissemination of the new knowledge that results, because it isn’t dependent solely upon the publication of physical artifacts, such as papers or books (although that could still be done).  Rather, it would accommodate knowledge being born digital which, once vetted by the cognitive network, could quickly be disseminated to others for them to continue the cycle and build upon further.  Think about how powerful that could be in creating new knowledge!

The Exaptive product, when coupled with what we've already got in place at the OU Libraries (our discovery system, open journal/open access publication system, repositories and more), will allow us to move further and faster in helping to evolve ideas into new knowledge.

One thing I need to say at this point is that this is both a technological challenge and a change-management challenge.  If this project is successful, it has the capability to remarkably change the engagement and knowledge-creation experience for many people.  To smooth this process, we will need to educate our users/members on the new needs we’re addressing and on how and why this is a major step forward.  If you’ve read articles or books about how to do change management, you’ll know one of the best ways to do this is to work with thought-leaders on our campus and in our community and provide them with the extra support needed to learn and use these new tools, so as to ensure they are very successful in doing so.  If we do, it’s a win-win for all involved.  It puts users on the front edge of research and dissemination in their field, and it gives us a success case to point to as we talk with others and try to inspire them.  

Of course, those who wish to work in isolation can continue to do so, even with this new model.  However, bringing multi-disciplinary and multi-faceted viewpoints to the table throughout the lifetime of an idea adds new value, helping to make those ideas substantially more valuable and more applicable in the end.  We already see the health sciences field moving in this direction, because they so clearly understand the inter-connected nature of the organs of the human body and the need to bring researchers together as ideas are developed.  The Open Science Framework is another model where collaboration and shared data sets occur early in the research process.

As I said above, there are lots of implications for new models of knowledge creation based on this initiative.  Existing culture and change are two of the largest challenges early in the process.  But first, we’re going to focus on getting the technological foundations in place and then see what we can do.   Stay tuned!

NOTE:  Those at the University of Oklahoma interested in having a departmental demonstration of this technology and/or meeting with key project team members, should contact me at: carl(dot)grant(at)ou(dot)edu

Wednesday, May 13, 2015

How Can Libraries Find the Money To Make Big Changes? (Part 3)

Over the last two posts, we’ve looked at how libraries can find the money within existing resources in order to fund big changes.  In the first post we looked at strategic plans and in the second post we looked at the use of metrics to measure progress against that strategic plan.  Now, in this final post, we want to step back and look at the efficiency and effectiveness of our current operations as reflected by their internal workflows.

Let’s face it, a great deal of what is done in libraries has been done for a long time.  Even if we’ve automated the workflow along the way, it was likely put into place 5-15 years ago and has rarely been reevaluated for efficiency or effectiveness since that time (unless you’ve implemented the metrics/analysis discussed in post two of this series). Yet reevaluation needs to be done on a regular and recurring basis.  

The process of reevaluating workflows presents you with a golden opportunity, because it’s a great time to think really big about what a workflow would look like if you could design it without restriction.  So the first step is to think about and design the “perfect” workflow.  I call this “setting the destination,” because it serves as a description of where you ultimately want to end up.  What’s the perfect way for your staff to order new resources?  For metadata to get created?  For users/members of your library to find the resources they need, utilize them, cite them?  What do those workflows look like?  This is blue-sky thinking, and it needs to involve those on your team who have the ability to think creatively, but also those who understand the intricacies of the current workflow.  You want to be certain to document what the team comes up with in this step, because you’ll come back and visit it again and again as new versions of products, with new functionality, become available for implementation.

In the second step, you return to reality and, through use cases or flowcharts, document how the workflow is actually being done today.  When your teams do this, they will quickly realize there are many things being done that no longer need to be done.  Those are candidates for immediate removal.  It’s not unusual to find a 10% boost in productivity from this step alone. 

The third step is to redesign the workflow and the workflows associated with it.  A very good time to undertake this kind of work is, for instance, when implementing a new Library Service Platform, because the technology touches so many areas of library operations and those products give you an excellent opportunity to streamline a lot of workflows.  

As we all know, in the past the library was primarily a print-based operation, and over the years, as licensed content and electronic content became part of library services, entire new workflows and procedures were created to accommodate each.  Over time, many of the steps in those workflows replicated, or very closely emulated, steps in handling one of the other parallel areas.  Most libraries find that when they perform workflow analysis they can see areas where these steps can now be combined in order to free team members to address the new, more challenging and interesting work that needs to be done.  

In doing workflow analysis it’s important to know that most people do not inherently know how to analyze a workflow.  They know how to do the workflow.  So it takes time to train people in analysis: the process of pulling apart a workflow and rethinking it.  I’ve found that giving them a series of questions to ask themselves, and others, at each step helps them do this.  Here are some things to be sure to do:

  • Make sure to include everyone that is involved in each step of a workflow, from the beginning to the end.  
  • When looking at a workflow, identify everything that comes into the workflow (the inputs) and everything that flows out of it (the outputs).
  • Here are some of the specific questions to be asked about each step in the workflow: 

    • Who receives the outputs? 
    • What does the next person do with the output you give them? 
    • How does the quantity and quality of the outputs affect what they do?  (Do steps get skipped when quantity is high?  If quality is low, what do they have to redo or fix before they can move the work to the next step?) 
    • Who verifies the quality of the work, and how? 
    • When something different than what is expected happens, how is it handled?  What’s the workflow then?  How does it change? Be sure to document these processes as well.
    • Look for places in the workflow where there is waiting, moving and/or repetition.  Try to find ways to eliminate these, for instance, by doing things in parallel where possible. 
    • Identify the places in the workflow where there are complexities, bottlenecks and frustration.  Eliminate those.
  • Once you have done that, then ask these questions:
    • How many people does it currently take to complete a workflow?  What number should it take?  List that number and work towards it.
    • Consider, and document, what technology is needed to support the workflow and what it must functionally achieve.
    • What skills and expertise are needed to both perform and manage the workflow?
    • Then ask if those skills/expertise/positions exist in the organization currently.
    • Use the answers to the above to describe the positions needed, then compare them to the ones you have.  Any discrepancies will need a plan to address and resolve them.
    • Based on the outputs of the total workflow mapping, what are the jobs that: a) remain the same, b) will be modified and/or c) will be new?
    • Prepare a plan, to be shared with your team, addressing what training the team will receive for those new jobs, and when.
    • Consult with team members so they understand where they’re headed and what will be done to ensure they are successful in the jobs they will hold.  Repeat as necessary (which is frequently).
    • Throughout this process, Library Administration must make clear that the workflows are being examined for ways to be more efficient and effective in light of the changing environments, and that they know they have talented people and simply want to optimize their work.  
    • Be sure to link the new workflow back to the overall goals of the library.  Team members must see and understand the linkage and what value is added as a result.

  • Next, decide what will be measured to evaluate the workflow, and how.
    • Can those measures/metrics be directly linked to the goals of the library?
    • If so, document which specific measure they are linked to.
    • If you can’t link them, re-examine why you’re doing this function and find a way to eliminate it.
  • As part of this process, it’s important to note the distinction between what are called “core” workflows and “support” workflows.  Core workflows are the ones that deliver value directly to end-users.  Support workflows enable core workflows (training, approvals, purchasing, etc.).  Core workflows can certainly be improved, but be careful to only increase the value delivered to end-users, never to diminish or delete it.  Support workflows are open to considerable revision for obvious reasons.
  • When the new workflow is introduced:
    • Explain, educate, communicate.  Repeat as needed.
    • Do trial runs, analyze the results/problems and make adjustments to resolve those issues.
    • Only then do you implement the new, more effective and efficient workflow.

So there you have it.  I’ve used all the steps described in this post and the previous two on this subject.  They’ve worked for my organizations and I believe they’ll work for yours as well.  When done, you should find that you have more people and financial resources at your disposal in order to resource those new ideas that have been sitting and waiting on your “to-do” list.

Thursday, April 2, 2015

How Can Libraries Find the Money To Make Big Changes? (Part 2)

In the first post of this series on finding money with which to finance change in libraries, we looked at the importance of active strategic plans, both at the level of the parent organization and the library, and at all levels within the library.  As noted in that post, obtaining new revenue via that pathway is a longer-term approach, one that will reap big dividends in the end but that is gradual: results happen, data is generated and confidence grows.  So now let’s take a look at a first step you can take in the shorter term. 

One of the most likely places to find resources to fuel new ideas is within your existing resources.  How?  Especially when you’re already working at full speed and still feel like you’re falling behind?  The answer is by using metrics.

Metrics are a complicated subject and certainly understanding them fully requires more space than I can cover in this blog post.  However, let me cover, at a high level, a few topics and get you started and I’ll also provide a link to a post you can read if you want to know more.  

It’s important to understand that what metrics do is provide measures that will tell you and your organization whether you’re moving in the right direction.  Are you achieving the goals set forth in your strategic plan?  Of course, to do that, what’s being measured must align with and support the goals of the strategic plan.  In other words, metrics provide focus.  Second, they provide accountability.  Now, I’ve noted that’s a term a lot of people in libraries treat like a skunk on a hiking trail, i.e. they turn around and head the other way.  But the result is the same: you back up and make no forward progress.   Accountability isn’t about assigning blame; it’s about determining whether you’re doing the right things and, if not, what should be done to get moving in that direction.  It needs to be accompanied by an attitude of “failure is part of learning,” so while you don’t want to repeat the same mistake twice, making mistakes is how we figure out the right way to achieve a goal.  Thomas Edison famously said of the light bulb: “I have not failed.  I’ve just found 10,000 ways that won’t work.”  We need to apply this attitude more frequently in libraries, and we need to more willingly share the ways that don’t work so we can all more efficiently focus on finding the ways that do.

Once you’ve determined what is to be measured, the results must be shared and used by all of management.  While many libraries internally operate like a landscape of farm silos, each run by its own farmer, the reality is that library operations are more tightly woven than fine silk, and only by working together do we produce a tapestry that all want to see.   Management teams need to schedule regular reviews of the metrics, have open conversations about where this positions them against the goals of the strategic plan, and determine what adjustments need to be made.  If you want to read a bit more about metrics, here is a good, quick overview.

Another tool for making new resources available is to conduct a true competitive analysis of the Library’s services.  I’d strongly recommend you convene a sub-committee of your end-users (students, faculty, staff, community members, etc.) to have this done.  You need to remove bias and the colored glasses from the perspectives taken and understand, from the end-user’s/community member’s point of view, what they see as the advantages and disadvantages of your various services.  OCLC sometimes does this for us with its periodic “Perceptions” reports, but unfortunately the last one of those was in 2010.  Half a decade later, one could rightly question the continued validity of the assessments made there, so plan only on using those to understand what questions you might want to ask and how to ask them.  The scope should be all end-user/community-facing services: discovery, reference, liaison, circulation, ebooks, inter-library loan, etc.  Find out where users go to get those services right now and ask specifically about the result of those services.  In other words, don’t ask where else they go to borrow a book from another library; ask how they obtain materials to read.  Look into how often they use those services.  Do they find them easier to use or more difficult?  Less costly or more costly?  Faster or slower?  You need a comprehensive but very unbiased look.  When the committee reports back, read the assessment with an open mind, because you’ll have been handed a treasure chest of facts.  If they tell you a service your library provides is inferior and little used, you have a candidate for elimination; don’t hesitate to discontinue it, because those are resources you’re wasting.  If they tell you it’s competitive, they’ll likely also have told you why, and what you can amplify to make it even more competitive.  
The freed resources can be redirected to support new services, which will hopefully allow you to start generating greater value for your end-users/community.  If you want to read an excellent book on how to do this, read Blue Ocean Strategy.  You’ll find my review of that work in this post.

In the next and final post in this series, we’ll look at efficiency and effectiveness and the methodologies to use in finding those within your current operations.

Monday, February 16, 2015

How Can Libraries Find the Money To Make Big Changes? (Part 1)

My last blog post, “If information has become ubiquitous due to the Internet, can librarians do the same?”, caused a number of people to ask: “Wonderful ideas, but how do we pay for all of this?”  It’s a very fair question.   Unfortunately, it is one that frequently causes librarians, in response, to throw their hands in the air as if it’s hopeless.

Many librarians indicate that what they hear from their administrators is: “Do more with less.”  Feeling totally maxed out just trying to do what they’re doing now, they simply can’t wrap their heads around the idea of doing even more without more resources.  However, I’ve always said that when confronted with that directive, we need to hear: “Be more efficient, be more effective.”  I still maintain that position.

So, over the next several posts, let me share some ideas about how you and your organization can go about doing that.  Are these ideas easy?   No, of course not.  Are they quick?  It varies.  Some are quicker than others, but combined into a packaged approach, you’ll be able to show results early on and well into the future.   Results that will help your library clearly establish its value to the communities it serves.

1.  Strategic Plans.  When a librarian reads my ideas and asks how we pay for them, my first response is to ask whether the University has a current strategic plan published on the University website.  (Here is ours at OU.)  Then I’ll ask the librarian(s) if they know what it says and can, in fact, tell me something about it.  More often than not, at best, I’ll encounter a blank stare or a mumbled response that they think they’ve seen it, but really can’t remember anything it says.  Then I’ll ask if the Library has a current strategic plan and whether it’s on their website.  (I’ll frequently already know the answer, as I’ll have checked.  The results of that checking are, shall we say, grim.  Here's ours at OU Libraries.)  If they have one, I’ll ask how often it’s reviewed each year to ensure progress is being made on the goals/objectives that were set.  All of which is a strong indicator of why a library is not performing well and/or is not being recognized for its contributed value to the University. 

It is staggering to me that a Dean/Director of Libraries, in today’s funding environment, could expect to have a compelling and positive discussion about Library finances without being able to sit down with their administration and directly show how the Library’s Strategic Plan supports and contributes to the goals of the University’s Strategic Plan.  Not only show, but do so in documented and measurable terms!  

For example, if the University’s plan calls for higher student retention, higher matriculation rates or the creation of a new degree or program, the Library’s strategic plan needs to have goals and objectives that show what the library is going to do to support achieving them.  When achieved (and hopefully exceeded), this gives powerful support for why the University needs to continue to, at least, support the library at its current funding levels.   If the University’s goals have been exceeded, it makes a powerful case for the benefits to be shared with the Library.  Of course, this is not a quick path to more revenue.  It will take at least a year, and possibly longer, before it starts to pay off.  However, it is likely to strongly support a case for not cutting the Library’s budget, if these linkages are drawn using metrics that make the case.

2.  Organizational Support of the Strategic Plan and Accountability.  When talking to librarians about their Library’s strategic plans, I all too often hear that the plan is an administrative exercise: once done, it’s dropped in a drawer and forgotten until the next round of the exercise.  That’s a terrible mistake to make.  A strategic plan should be a living document, and it can serve in multiple ways to help build discipline that will allow the organization to achieve large goals and stay on a sustained high-performance track.  Creating objectives from the department level all the way down to the team-member level can do that.  Most strategic plans state goals.  While this isn’t the preferred route, it is frequently the route that results, because people dislike accountability.  Yet any Dean/Director and/or department manager worth their pay should take the goals in the Library’s Strategic Plan and turn them into objectives for their departments and team members.  What’s the difference between a goal and an objective?    According to this reference:
“Goals are long-term aims that you want to accomplish. Objectives are concrete attainments that can be achieved by following a certain number of steps… Goals have the word ‘go’ in it. Your goals should go forward in a specific direction. Objectives have the word ‘object’ in it. Objects are concrete. They are something that you can hold in your hand. Because of this, your objectives can be clearly outlined with timelines, budgets, and personnel needs. Every area of each objective should be firm. Unfortunately, there is no set way in which to measure the accomplishment of your goals. You may feel that you are closer, but since goals are de facto nebulous, you can never say for sure that you have definitively achieved them. Objectives can be measured. For example, ‘I want to accomplish x in y amount of time’ becomes ‘Did I accomplish x in y amount of time?’ This can easily be answered in a yes or no form.”    
If objectives are defined and they are linked to the Library’s (and thus the University’s) strategic plans, then when a performance period is over, both the team member and the manager should be able to sit down and ask: “Did we accomplish this or not?”  Sure, there will likely be a conversation about why something wasn’t achieved, if that is the case, but at least everyone knows what the expected result should be.  

Furthermore, that meeting shouldn't be once a year.  A better practice is to ensure that each team member has a quarterly meeting with their manager to ensure progress towards the goals is being achieved.  If the goals are no longer valid, this is a perfect opportunity to revisit and revise them, rather than waiting till the next annual evaluation cycle.

It's also important that the entire organization receive regular reports about how it is doing in achieving the plan.  A quarterly meeting, led by the Dean/Director is a good vehicle for achieving this.  So is a written report that can be distributed across the community to show the value being created for the community.  (Here is ours at OU Libraries.)

Performing the steps above helps position the head of the Library to take into their next meeting with the funding authorities real, measurable results that document the Library’s value in achieving, and aligning with, the goals of the parent organization.  

That will also put in place a much firmer foundation for finding the money to make big changes than I see many libraries currently using.  

(Next time, we’ll talk about increasing efficiency and effectiveness by realizing that what brought you here, won’t take you there.)

Tuesday, January 27, 2015

If information has become ubiquitous due to the Internet, can librarians do the same?
After my last blog post on library branding, I had an engaging exchange with a good friend who often says things that cause me to pause and think.  That conversation was about what constitutes “expertise” in today’s information environment.  Then, over the holiday break, I read a recent book called “Virtual Unreality: Just Because the Internet Told You, How Do You Know It’s True?” by Charles Seife.  Finally, during that same holiday break, while visiting with another friend who had recently written and self-published a book, he told me that while doing the research for his book, his very knowledgeable librarian, using substantial library resources, couldn’t find anything for him that he hadn’t already found using Google or Google Scholar.  In my thinking, all of these dialogues converged.  Here’s why.

A point most everyone, including librarians, agrees upon is that, due to easy accessibility, information today is truly ubiquitous in our environment.  Tapping or talking into our mobile devices readily retrieves information.  Increasingly, we can use normal conversational language in forming the inquiries, and the answers are delivered to us in mere moments.   It’s fast, it’s easy.  Who needs a library or even a librarian anymore?  As a librarian, I know I’m not alone in having encountered numerous college and university administrators who have said this.  Nor am I alone in being given the sad, sorrowful look at parties, in airports or on airplanes when, upon being introduced, I explain that I’m a librarian, followed by the question: “With eBooks and Google, aren’t libraries and librarians a thing of the past?”  That’s when I know I have someone in front of me who needs a major update on librarianship.  (Not that it’s really their fault.  As I’ve long said, librarians do not excel at articulating their value-add.)

Yes, information is ubiquitous today, but here is the problem: so is so-called “expertise”.   Senator Daniel Patrick Moynihan once said, in an oft-quoted statement: “everyone is entitled to their own opinion, but not everyone is entitled to their own facts”. Unfortunately, today that no longer seems to be true.   As Seife documents in the book mentioned above, the criteria for holding “expertise” have been substantially lowered.  Today, you can be an “expert” by being a celebrity, by being rich (and buying a think tank to generate “facts” that support your position), or just by being very persistent and vocal in making your position well known.  You can say just about anything online, and if you get a big enough following of people to read and repeat what you’ve said, you’ve by default earned the title of “expert”.  Social media, blogs, podcasts and today’s TV media all permit, promote and foster the creation of so-called experts, most of whom would not pass previous generations’ criteria for that term. The use of research and/or facts to support positions, particularly research and facts that have passed through the tests normally applied to scholarship, has become totally secondary, if required at all.

We also know that we’re facing a population where concentration is becoming rare.  Multi-tasking has become a way of life, as have our mobile devices. Soon those devices won’t even require carrying, because they’ll be strapped to, or embedded in, our bodies, further exacerbating the problem.  Thoughts have become messages limited to 140 characters on Twitter, or videos limited to 3 minutes on YouTube, 15 seconds on Instagram and 6 seconds on Vine (a definite trend there!).  We know Facebook and Google are using profiling to place us in silos in order to increase their ad sales.  However, those silos also result in our no longer thoughtfully exploring ideas or positions, particularly those that might conflict with our points-of-view.  As a result, we end up with a society, community or campus where we only read what we agree with, and where we count on trending Tweets or friends to tell us what we think we need to know from the overwhelming, and ubiquitous, information flow.  It’s a very difficult environment, one where simplicity triumphs over sophistication.

Now, let’s get back to libraries and librarianship, because it is this environment we’ve just described that gives librarianship the opportunity to create new, real and sustaining value.  However, as with so many opportunities, it also requires change. 

My previous post pointed out that librarians have not been diligent in keeping their brand up-to-date.  We’ve let the word “books” be our brand for far too long.  That was OK when libraries were THE place to go to get information and most of it was made accessible in book format.  However, that day is long gone.

This has been compounded by the fact that when we did adopt new information tools, we lagged on the adoption curve, and thus when they were finally introduced, all too often we rushed to simply emulate the dominant tool (look at our new search tool, it’s just like Google!).  When we did that, we did not take the time to make clear the differentiating values librarianship provided (deep Web searching, alternate points-of-view, appropriateness, authoritativeness and authenticity).  This resulted in the commoditization of the new tool, and as a result it was quickly discounted (why do I need to go to the library when I can search Google and find more?). These problems were further exacerbated by the rise of mobile devices.  Librarians tended to simply push their Google-like interfaces (although frequently dumbed down) out onto those devices. Now lacking the face-to-face interaction with the user, librarians easily became one with the technology. The result?  Librarians became identified with their technology, and the total package was commoditized.  That is where too many libraries still are today, and why so many of us have ended up in those painful discussions about our profession and its future viability.

Leading librarians saw what was happening and decided they had to adapt, and so they defined a new pathway, one that allowed the value of librarians to be affirmed and even expanded.  While not in the majority, their examples are now solidifying and are offering solid answers to the questions asked in those discussions.  The results work to ensure that expertise is seen as something that must be earned and measured by established academic criteria, and not simply by creative marketing.

If we look at the recently built Hunt Library at North Carolina State, the newly announced planned library at Temple University in Philadelphia, or those institutions that are beginning to transform their existing facilities, like the University of Oklahoma Libraries, you can see recurring themes emerging in phrases like: collaborative workspaces, intellectual commons and crossroads, knowledge creation, innovation centers and entrepreneurial centers.  In other words, they are places where ideas come together, intersect, and are examined, analyzed and improved.  This is done under the guidance of people who have earned the title “expert” through the normal channels of academic rigor and peer review, sometimes face-to-face, sometimes virtually, using librarians’ new investments in technology to support this exchange.  As a result, librarians are increasingly able to be where their users are located and to add new and demonstrable value to the knowledge creation and supply chain.  Our goal has to be to make the value of librarianship as ubiquitous as information.

(In an upcoming blog post, I’ll talk about ways to fund the retraining of librarians and the reshaping of facilities to support these new pathways.)

Tuesday, July 29, 2014

It’s time to define a new brand for libraries. Let’s make sure it leaves people soaring, not snoring.

I’ve always studied other professional fields as a means to try to understand the profession of librarianship and the future of the field. In particular, I’m interested in looking at points in the history of a business where mistakes were made from which we might learn valuable lessons.   You undoubtedly already know some of the most famous examples. Theodore Levitt wrote a classic piece on this topic for the Harvard Business Review in 1960 called “Marketing Myopia,” in which he refers to examples such as: railroads, which didn’t understand they were in the transportation business; Hollywood, which thought it was in the movie business instead of the entertainment business; and numerous other smaller examples that existed at that point in history.

In more recent times, we’ve seen the cycle repeated by a long and impressive list of those who apparently chose to ignore history, including the music industry thinking they were in the record or CD business rather than the music business and Polaroid and Kodak thinking they were in the film/camera business rather than the photography business.

Today, as I listen and read about our profession of librarianship, I have a deep gnawing feeling in the pit of my stomach when it comes to the issue of branding.  All too often I find parallels between the examples just mentioned and what I see librarians doing in shaping our users’ perceptions of librarianship.  The result could have a major impact on the future of our profession.  It’s important to remember that, for most of those businesses that didn’t truly understand where they added value for their customers, there was not a positive ending.

So, when it comes to librarianship, what am I specifically talking about?  Librarians’ continued belief in, and acceptance of, “books” as their brand.   When I originally read OCLC’s 2010 Perceptions report, it stated:
“In 2005, most Americans (69%) said “books” is the first thing that comes to mind when thinking about the library. In 2010, even more, 75% believe that the library brand is books.” 
I have to say my discontent took firm root at that point. That report went on to say: 
“Brands are hard to change, almost impossible for a brand as strong as libraries—in an environment where saving money on books is even more valued by consumers.” 
Recently, OCLC issued a very interesting new report titled: “At a Tipping Point; Education, Learning and Libraries” which deals, in part, with this topic again. 

This new report says: 
“Our 2014 research tells us that the library brand remains firmly grounded as the “book” brand. In fact, from 2005 to 2014, the perception of the book brand has cemented. Sixty-nine percent (69%) of online users indicated that their first thought of a library was “books” in 2005, 75% in both 2010 and 2014.”
Very significantly, the new report also states: 
“How concerned should we be about the library brand? The answer must be tied to how significantly we believe the context in which libraries will operate in the future will change.” 
“Brands are not impressions held in isolation. Brands are attitudes informed and shaped by the context in which they operate. Shifting needs shift brands—often faster than any change in the brand product itself.”
I would argue that the context in which libraries operate today has already changed dramatically and certainly will continue to do so into the future.  For me, the key in the above is “shifting needs shift brands”, because that statement gives us the keys to the passageway to redefine our brand.

This OCLC report contains some extremely important observations about branding, how it is formed and what it takes to change it.   For instance, it notes: 
“The library brand, ‘books,’ has solidified” and that campus libraries are now known as “a place to get work done”.   “Libraries have a context challenge, a brand category problem. Relevance is determined by perceptions, not products, not services, not reality.”….  “The library brand is too strongly associated with books, a category that both library users and non-users perceive to be less relevant with the rise of the Web, mobile information and e-books.”… “Strong enduring brands remain relevant by creating and promoting clear differentiators that match the consumer needs while retaining congruency with the expectations of the brand.”
Exactly!  Differentiators that match needs, and relevance determined by perceptions, are things we can and should define.   Now note that I don't believe OCLC is saying we should adopt the perception of "a place to get work done" as the library brand.  They didn't say that.  But I do think we should apply some critical analysis to what they're telling us here.  We need to ask some very important questions.   For instance: 1) In digesting the perception, have we really thought about what our users/members are telling us? 2) What is it they are working on, and is the perception just an indicator of that larger reality? 3) How do we take that perception and turn it into a brand that inspires people to use the resources and services of libraries?

Frankly, if we just accept the new perception (a place to get work done) as a brand, I'd find it as lacking as the old one (books).  This was underscored for me this week when Library Journal printed a column about the new Amazon Lending Library announcement.  It noted (emphasis is mine):  
“This massive amount of press attention is not only discussing a new service—and who knows how it will turn out—but more importantly, they rarely mention libraries and what they offer,” said Gary Price, editor of LJ infoDOCKET. “So, it’s as much [a point of concern] about mindshare and relevance as it is about a new Amazon service.”
Of course, he’s absolutely right.  Let’s understand, though, that this question of mindshare and relevance is as much our fault as librarians as it is anyone’s.  Historically, we simply have not actively defined the perception, or the brand, to be about anything but printed books.  Think about it.  We use books in our advertisements. If our discovery tools have a browse function, we tend to use visual representations of book spines (why do we continue to think browsing spines is an easy thing to do?!?) or we use the book covers.  Why?  Why not use audio clips from the authors, or photography, to give a portrayal of what a work is about? Now, if we let ourselves be defined by the latest finding about users' perceptions, as a place to get work done, we'll leave our users snoring rather than soaring.

Let’s pause for a moment and put some foundations in place for the rest of this discussion about branding.

As many of you will know if you regularly read this blog or hear me speak, I’m a huge fan of David Lankes’ work “The Atlas of New Librarianship”.  One of the many things I so like about this work is its mission statement for librarians, which states:
“The mission of librarians is to improve society through facilitating knowledge creation in their communities.”  
It is simple, clear and compelling, and it creates a firm foundation for us to use in creating a new brand.  Now, let’s disassemble that statement just a bit by looking at dictionary definitions of some of the key words used in it:

Mission:  any important task or duty that is assigned, allotted, or self-imposed

Knowledge:  acquaintance with facts, truths, or principles, as from study or investigation; general erudition

Creation: the act of producing or causing to exist

While we’re at it, let’s add in this definition:

Book:  a handwritten or printed work of fiction or nonfiction.

Now, using those definitions, my take on Lankes’ mission statement is that we, as librarians, take it as our duty and our responsibility to help people produce knowledge through the investigation and use of facts, truths and principles.   Nowhere in there do I find it stated that we must do this only via printed or handwritten works.  

Let me be clear: I love books, and I deeply believe in them as a medium.  However, I realize that they are not the only medium for creating and conveying knowledge.  Simple as that sounds, we must embrace this reality in defining a new brand for our profession.  Our new brand must reach out and extend across all the mediums used in conveying existing knowledge, and it must embrace all people across all societies in order to expose them to the value of librarianship in creating new knowledge.

When changing a brand, marketing experts often talk about the need to ensure that it aligns with the way the organization operates.  OCLC, as noted above, stated: “retaining congruency with the expectations of the brand”.  Which makes sense.  The users/members of your library want to see that you and your organization walk the talk.  And the knowledge creation approach is totally consistent with what we actually see happening in many of the leading libraries today. Collaborative learning areas, maker spaces, innovation hubs and visualization tools are all examples of a new dimension of knowledge creation that goes well beyond just reading books.

At the same time, a new brand needs to be catchy and memorable.  Forbes magazine did a wonderful summary that reminds us of some of the brands that fall into this category.  The following is a subset of those mentioned in that article:

  • The Ultimate Driving Machine (BMW)
  • Just Do It. (Nike)
  • Don’t Leave Home Without It.  (American Express)
  • Got Milk? (California Milk Processor Board)
  • Think different (Apple)
  • A diamond is forever (DeBeers)
  • It takes a lickin’, but keeps on tickin’.  (Timex)
  • A mind is a terrible thing to waste (UNCF).
  • We bring good things to life.  (GE)
  • When you care enough to send the very best. (Hallmark)

We need, we really must, find a new brand that encompasses that kind of thinking and stickiness.  In London, they call Libraries “Idea Stores”.  I like that.  

While not offering it as a brand, I heard David Lankes give a talk in Florida in which he said libraries should use the word “Question” instead of “Read”.  I also thought that was great because it would cause people to not just accept, but to dig into what interests them, to challenge themselves to learn and to create new knowledge as a result.  

I’ll offer some other possibilities as a continuing step in the community thought process about library branding:  

  • Libraries: A time to know, a time to grow.
  • We feed hungry minds at the Library
  • Growing requires knowing.
  • The world’s best brain food: Libraries.
  • Creating knowledge? The Library has what you need.
  • Your Library: Come grow your mind.
  • Share to grow.

OK, those probably still need a lot of work, but you’ve got to agree they’re more inspiring than “A place to get work done”.  Furthermore, they focus on the result rather than the process.  Maybe what OCLC could do for us is create a competition inviting both end-users of libraries and librarians to submit branding ideas, and then pick the best one.

Our brand is critically important to our future.  Let’s be sure it defines librarianship accurately, is congruent with user needs, is compelling and, most importantly, that it is inspiring.