Marshall Breeding has done an analysis of this news on the American Libraries Magazine website. If you haven’t read it, I would suggest you do so as it will give you a solid foundation for the rest of this post. Return here when done.
Now let me say right up front, there are some very encouraging facets to this announcement. These include:
- The involvement of some organizations with considerable business skills and savvy. Both EBSCO and Index Data have been in business a long time and bring some much-needed business-analysis skill to the table, a skill set the OLE project has sorely lacked for a very long time.
- The fact that EBSCO is apparently pivoting in a substantial way to support open source software for the community. I’m cautiously optimistic about this move.
- There are few people in the Open Source Software (OSS) business I respect more than Sebastian Hammer and Lynn Bailey. Sebastian in particular was doing open source software before most librarians even knew what the term meant. I’ve partnered with him in past business projects and know his expertise to be amongst the best in the field. Lynn brings business skills to the equation and together they form an excellent duo. They have numerous OSS success stories to point towards. This is good for the OLE project.
- At the recent CNI Membership Meeting in San Antonio, in a session led by Nassib Nassar, a Senior Software Engineer at Index Data, he discussed their plan to use microservices as the foundation for this new project. Microservices, an evolved iteration of service-oriented architecture (SOA), are small, loosely coupled software services, each owning a narrow piece of functionality. A very good explanation of microservices can be found here. This is a promising architecture that has been maturing over the past several years and certainly might have applicability for future library software projects (see below for more on this point).
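To make the idea concrete, here is a minimal sketch of what a single-purpose microservice looks like in practice: a hypothetical "loans" service that owns exactly one capability and exposes it as a small JSON API over HTTP. Everything here (the service name, endpoint, and data shapes) is invented for illustration; it is not drawn from the OLE or Index Data design.

```python
# A minimal sketch of a single-purpose microservice: a hypothetical
# "loans" service that owns one capability (checking an item out) and
# exposes it as a small JSON-over-HTTP API. Other services (patrons,
# catalog, etc.) would run as separate processes and talk to it only
# through this interface; that loose coupling is the defining trait.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# In-memory state; a real service would own its own datastore.
LOANS = {}

def check_out(item_id, patron_id):
    """Core business logic, kept separate from the HTTP plumbing."""
    if item_id in LOANS:
        return {"ok": False, "error": "item already on loan"}
    LOANS[item_id] = patron_id
    return {"ok": True, "item": item_id, "patron": patron_id}

class LoanHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        body = json.loads(self.rfile.read(length))
        result = check_out(body["item"], body["patron"])
        payload = json.dumps(result).encode()
        self.send_response(200 if result["ok"] else 409)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(payload)

# To actually run the service:
#   HTTPServer(("127.0.0.1", 8080), LoanHandler).serve_forever()
```

The appeal is that each such service can be built, deployed, and scaled independently; the cost, as discussed below, is the amount of glue and coordination a full system of them requires.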
You’d be forgiven if you think that announcement sounds amazingly close to the most recent one, which says: “It carries forward much of the vision… for comprehensive resource management and streamlined workflows.” You’d also be forgiven for thinking that, after eight years, we might have expected something more.
But for now, let’s work our way through this announcement, using what we do know about the history of library automation systems, in order to pose some questions I really think need to be asked:
- Is the existing OLE code base dead? Marshall might have been a little too politic in his article, so I’ll say the obvious. After eight years (2008-2016) of development, grant awards from the Mellon Foundation totaling (according to press releases) $5,652,000 (yes, read that carefully: five million, six hundred fifty-two thousand dollars), and who knows how much in-kind investment by affiliated libraries (through the cost of their staff’s participation in the OLE project), it has all resulted in what Marshall points out in his article: EBSCO “opted not to adopt its (Kuali OLE) codebase as its starting point,” and the “Kuali OLE software will not be completed as planned,” but will instead be developed incrementally to support the three (emphasis my own) libraries that currently run it in production; it “will not be built out as a comprehensive resource management system.” For those of you not experienced in software development, that phrasing is code for: “it’s dead.” They’re going to start over from scratch. Sure, they’ll reuse the use cases, but for well over $5.5M we should have expected, indeed demanded, a lot more. Let’s also remember that a number of very large libraries, all over the country, delayed their move to newer technology while waiting for OLE. They stayed on older, inferior ILS systems, and they and their users suffered as a result. How do we factor in that cost? Now, sure, we’ll call this new project OLE to paper over this outcome, but folks, please, let’s be honest with ourselves: OLE has failed, and it has carried a huge cost.
- Do we really need microservices? Yes, it’s the latest, greatest technology. But do we need it to do what we need to do? And do we fully understand all the impacts of that decision? What value does it bring us that we don’t have with existing technology? Is it proven using open source software in a market our size? (Yes, Amazon uses it. But Amazon is a huge organization with huge staff resources to devote to it. Libraries can make neither claim.) We must answer these questions: What is the true value of building a library system on this architecture? What will libraries be able to do that they can’t do with current LSP technology? Why should we take this risk? Do we really understand the costs of developing and maintaining software built this way? Do we really want to experiment with it in our small and budget-tight community?
- Governance – Haven’t we been here before? What’s different? A new Open Library Foundation is being envisioned to govern OLE. But hasn’t this been tried? I thought the reason the Kuali association was put into place was that the financial burden and overhead of running a non-profit organization were too taxing on the participant organizations? From that viewpoint, the Kuali association made a lot of sense. But now the libraries are going to return to a separate foundation? Why is it going to work this time when it didn’t previously? Because we have vendors at the table? Because we think we’ll enroll more participating organizations? (See later points on this subject.) Because we found out that charging libraries to be full participants in an open source software project didn’t fly with the crowd? Given that library budgets and staffing are stretched to the limit, what is the logic that suddenly says we now have the capacity to take on this new organizational overhead? I admit I’m totally mystified by this one. This choice seems to have an incredibly low probability of success. The merry-go-round continues…
- So, OLE will again be solely aimed at academic libraries? This new project is once again focused on academic libraries. This is good. And it’s bad. It’s good because, as I’ve argued countless times in this blog, success in a software project depends on building a solution that addresses a market need so thoroughly and successfully that it finds widespread adoption as early as possible within that segment. Then, and only then, should a project branch out to address related segments. To do so too early can result in lower adoption rates (see OCLC’s WorldShare, a product trying to address too many markets concurrently, and its resulting low adoption rate in academic markets. Compare this to Ex Libris’ Alma, a product focused on academic libraries that is experiencing significant success as a result). The reason this focus is bad is for the reasons I pointed out, back in 2009, in this blog post. Back in 2009 they also focused on academic markets, but I questioned how they would add additional market segments, what the competitive positioning and market share would leave for OLE, and whether that would be enough to sustain the product and/or its development. Again, in 2012, I did an analysis of OLE and questioned the chosen architecture, saying: “OLE is going to miss out on the associated benefits of true multi-tenant architecture.” Well, here we are, anywhere from four to seven years later, and it appears those concerns were entirely correct, i.e., the choices made were wrong. It gives me little satisfaction to say this, but I think people ignored the obvious. Given this most recent announcement, I’m concerned that, once again, the merry-go-round is going to continue.
- Multi-Tenant – redefined? The choice of microservices as a new architecture is definitely interesting. But it has some implications I don’t think many fully understand. This new version of OLE, based on microservices, will, quoting Marshall’s article: “provide core services, exposing APIs to support functional modules that can be developed by any organization.” Let me share my interpretation of that statement. What will be delivered in the first release is probably a very basic set of services, and exactly what that will include needs to be communicated openly and transparently to the profession ASAP. Without that, there is no way to know whether the release will cover basic description processes and fulfillment (circulation), or whether it will be just a communication layer on top of databases, for which users will then have to write additional microservices to provide each of the following: selection (acquisitions and demand-driven acquisitions (DDA)), print management (circulation, reserves, ILL, etc.), electronic resource management (licensing, tracking, etc.), digital asset management (IR functions), metadata management (cataloging, collaborative metadata management), and link resolution (OpenURL). As I’m sure you realize, that’s a lot of additional microservice code that someone is going to need to write to make a fully functioning system. Plus, I’ll just say, as someone who has been involved in software development for nearly three decades, I find it hard to believe that you can write all these additional related microservices and not need to change the underlying core infrastructure microservices. Or, at a minimum, at the start of designing that core infrastructure you would need to know in some detail how those other pieces of code are going to work, so you can provide the supporting, and truly necessary, infrastructure calls/responses back to the related microservices.
If that doesn’t happen, then whenever a major new microservice is developed, the core will have to be modified and updated. So why am I saying multi-tenant appears to need redefinition in this model? Multi-tenant means there is one version of that core code, perhaps running in multiple geographic locations for failover reasons, but the same exact code running everywhere. This brought us the capability to move forward in some big ways: establishing best practices, compiling transactional analytics that would allow global comparisons of libraries’ effectiveness, and, as mentioned above, real failover capabilities, which, given global weather conditions, are becoming more and more important. But now, with the microservices version of multi-tenant LSPs, we’re back to everyone customizing their implementation, and only that common shared core code remains truly multi-tenant. Everybody else is doing something different. That’s great for allowing customization to unique institutional needs, but it sacrifices many of the benefits of true multi-tenant software design. Plus, given the competitive nature of vendors in our marketplace, I have a very hard time believing for a second that one will agree to serve as a failover location for another vendor running the service. Maybe, but I’m definitely not holding my breath.
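The coupling concern described above can be sketched in a few lines of code. All the names here (CoreServices, CirculationModule, and so on) are hypothetical; this is my illustration of the problem, not anything quoted from the OLE design.

```python
# A sketch of the coupling concern: functional modules build on whatever
# calls the shared core exposes, so a module that needs a capability the
# core never anticipated forces a change to the core itself. All names
# here are hypothetical illustrations.

class CoreServices:
    """The shared 'multi-tenant' core: a fixed set of infrastructure calls."""
    def __init__(self):
        self._records = {}

    def get_record(self, record_id):
        return self._records.get(record_id)

    def put_record(self, record_id, data):
        self._records[record_id] = data
        # Note: no notification/eventing call exists in v1 of this core.

class CirculationModule:
    """A later functional module, written against the core's API."""
    def __init__(self, core):
        self.core = core

    def check_out(self, item_id, patron_id):
        item = self.core.get_record(item_id) or {}
        item["on_loan_to"] = patron_id
        self.core.put_record(item_id, item)
        return item
        # A reserves module that wants to be notified when an item
        # returns would need an event hook the core does not provide,
        # so the core itself must change, not just the new module.
```

Each such core change ripples out to every installation running customized modules on top of it, which is exactly where the "one exact code base everywhere" property of true multi-tenancy starts to erode.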
- 2018? Who are we trying to kid? Marshall’s article contains another key phrase: “Kuali OLE, despite its eight-year planning development process, has experienced slow growth (emphasis my own) beyond its initial development partners, and it has not yet completed electronic resource management functionality.” Indeed, that is true. At the time of this writing, there are three (yes, you can count them on one hand and have fingers left over) sites in “production” mode, which apparently means production minus the capability to handle electronic resources (a fairly major operation in academic libraries, wouldn’t you agree?). So I will admit I nearly fell out of my chair when I heard it said at CNI (later confirmed in Marshall’s article) that they expect to have an initial version of the software ready for release in early 2018. My goodness. Please pass me some of whatever you’re drinking, because it must be quite an energy drink, or more probably a hallucinogen! Some points to consider here:
- OLE was worked on from 2008-2016 and is still missing functionality. It was, as mentioned above, put into production by three libraries. There were, according to the Kuali.org website, 15 partners, although two of those were EBSCO and HTC Global, vendors with an interest in the code. And that’s out of 3,000+ academic libraries in North America. Slow growth indeed.
- HTC Global was hired as a contract programming firm to expedite the development of the code, precisely because the number of programmers needed to do the project in a timely manner was NOT available from the library community at large. Do these people really, REALLY think that, because they’ve now broadened the scope, libraries are going to assign their limited (and frequently non-existent) programming resources to this project? I probably have a one-year backlog for my programming staff before I could even think of assigning resources to this project, and that’s in a research library. As I keep pointing out to my colleagues when discussing open source projects, many academic libraries have NO, zip, zero, zilch programmers on staff. Where, oh where, do they think they’re going to find the programmers needed to get this massive project done? I’ll say the obvious: it won’t happen.
- As noted above, what will likely come out in the 2018 version of OLE is just the core code. So add a lot of time for those additional, and oh-so-necessary, microservice modules needed to make this a complete product. Index Data really needs to be transparent (by posting on their website) about exactly what the actual deliverable of v1 will be. Libraries need to know what they will have to build on top of it as additional microservices (think of microservices as functional modules). This will clearly mean extra cost, and probably extra time, to get to “complete” (though with careful planning some of it could be done in parallel).
- Let’s also remember that the definition of a “complete” product is ever evolving. Even if they could get something out in two years, WorldShare and Alma are not going to sit still; they’ll be 24 releases further down the road. “Complete” is a moving target.
- Let’s also take a moment to study some historical data. The Library Journal annual automation reviews have sometimes provided staffing analyses (the last staffing report was in 2013) for the firms involved. If we look at the major players that have tried to develop a “true” Library Service Platform (Ex Libris, OCLC, and Serials Solutions), we see reported staffing numbers of between 130 and 190 people (granted, they were working on more than just the LSP within their organizations, but you can bet the majority were working there). One of those projects (Intota by Serials Solutions) never made it to the street. Two (WorldShare by OCLC and Alma by Ex Libris) did.
- Do we have options here? Of course, there are always options. There are at least two that come to mind, some of which I’ve certainly advocated before:
- Librarians have already worked together in a collaborative to create a Library Service Platform. It’s called WorldShare, and it was developed by OCLC. Librarians need to collectively call upon the collaborative they theoretically own, and help govern, and say: “We want to make WorldShare open source.” It certainly has issues, but it’s a far more realistic vision than the one being described for the next generation of OLE. The microservices could then be extended out from a solid, true multi-tenant platform with real APIs. For that matter, if those microservices worked with WorldShare, then, provided similar APIs were supported by Ex Libris (or could be added), there should be no reason those very same microservices couldn’t also work with Alma. This would broaden the adoption base for the microservices, and thus the support for them.
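The portability idea above amounts to an adapter layer: if a microservice talks to the underlying LSP only through a small, neutral interface, the same module code could run against WorldShare or Alma. The sketch below shows the shape of that design; the method names and both adapter bodies are purely hypothetical and do not quote either vendor’s actual API.

```python
# A sketch of platform-agnostic microservices via an adapter interface.
# All interface and method names here are hypothetical illustrations;
# real integrations would wrap the actual WorldShare and Alma APIs.
from abc import ABC, abstractmethod

class LSPAdapter(ABC):
    """The neutral interface a microservice codes against."""
    @abstractmethod
    def get_bib(self, bib_id): ...

class WorldShareAdapter(LSPAdapter):
    def get_bib(self, bib_id):
        # Would call OCLC WorldShare web services here; stubbed for illustration.
        return {"id": bib_id, "source": "worldshare"}

class AlmaAdapter(LSPAdapter):
    def get_bib(self, bib_id):
        # Would call Ex Libris Alma REST APIs here; stubbed for illustration.
        return {"id": bib_id, "source": "alma"}

def describe_bib(adapter: LSPAdapter, bib_id):
    """A tiny 'microservice' function that never touches a vendor API directly."""
    bib = adapter.get_bib(bib_id)
    return f"{bib['id']} (via {bib['source']})"
```

Because `describe_bib` depends only on `LSPAdapter`, swapping platforms means swapping one adapter object, which is precisely what would broaden the adoption base for shared modules.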
- Again, let’s take a moment to examine history and see if there is something we can learn to apply to today’s situation. Look at some of the early vendors in the library automation space: NOTIS started out with a pre-NOTIS real-time circulation module; Data Research started out in libraries with a newspaper-indexing module, which eventually gave way to ATLAS (A Total Library Automation System); and Innovative Interfaces, per their website, “provided an interface that allowed libraries to download OCLC bibliographic records into a CLSI circulation system without re-keying.” None of these systems started out as a comprehensive, do-it-all solution. They started with niche products and responded to market opportunities until they ended up shaping the products we have today. My point is this: if OLE wants to build a comprehensive library service platform, it clearly can’t do so in the time frame needed. Instead, it should start with a niche product that addresses a key market need (perhaps managing research data? providing a citation tool for research data? a library linked-data solution that integrates with existing search engines?), and then drive that product to a leading position in the market. Only THEN should it start moving sideways to encompass other functionality such as is found in an LSP.
Of course, this announcement is being positioned as a big step forward and a positive development. But it seems to me that, beyond the questions posed above, there are some tough questions to ask before the profession blindly plunges ahead:
- Why did OLE fail? A LOT of time and money was spent to produce, essentially, use cases. Do we really understand what went wrong and what needs to be done differently?
- Why did the foundation model/associations fail? What will be different this time?
- Are we entrusting this new version of OLE to the same administrative people who did the previous version? Why? Don’t we owe it to ourselves to think carefully about the leadership of the project? Is the addition of Index Data and EBSCO enough? We need to think carefully about both governance and administration. What will everyone do differently to ensure the project’s success this time?
- What are the lessons to be learned about open source development of an enterprise module? Is the library community truly large enough, and well-resourced enough, to support the development of an enterprise, foundational module for libraries such as the Library Service Platform? (It would appear not. I’m willing to be convinced with appropriate data, but I warn you, that’s going to be a tough sell!)