Monday, November 28, 2016

FOLIO, acronym for “Future Of Libraries Is Open”? I’d suggest: “Fantasy Of Librarians Inflamed by Organizations”

Introduction

There is a feature of librarian groupthink that is both a great asset and a tremendous liability to the profession. The asset is that librarians have big and often inspiring hopes/dreams. The liability is that they don’t have the resources to achieve all of them. Nor do they have a good mechanism for synchronizing the two, so that their hopes/dreams stay in line with their resources. This can be inflamed by a willful refusal to examine the historical record, extract lessons learned, and apply them to the future. Worse yet, it can be further inflamed by organizations with vested interests.

Such is the case with OLE, now recast/renamed as the FOLIO project. Now, let me say, as I have before, that I’m a huge supporter of open source software.  As the Chief Technology Officer of an R1 research university, I understand its advantages and disadvantages, and have worked with it in libraries and companies. In my library, we run a lot of open source software, including DSpace, Fedora Commons, Drupal, Omeka and many, many more core infrastructure packages.  It’s a wonderful platform and can offer outstanding value.


So, it’s with dismay that I watch what is happening with FOLIO. Because, as it is presently framed, we’re simply wagering a great deal on a bad bet.  We need to inject some reality into what is being said and done. We need to separate fact from fiction and the associated marketing hype. I hope this post will help move the discussions in that direction.

I recently keynoted the 2016 University of Houston Discovery Camp and had the opportunity to hear a presentation on FOLIO. I also attended the Charleston Conference and heard a presentation there. One of the talks was very well crafted, but, like many marketing talks, glossed over the realities in order to paint the picture the speaker wanted to paint. Some of that talk aimed to address issues I raised in my keynote talk from earlier that day, including:

  • Understanding that the OLE code base is essentially dead and that at least the Library Service Platform (LSP) portion is being restarted from scratch.
  • Questions about the suitability of micro-services as the core architecture.
  • Questions about the governance structure.
  • What is the true target market?
  • Is the core architecture being used really multi-tenant?  What are the implications if it isn’t?
  • Is it realistic to imply the product delivery date will be 2018? 
These are all topics covered in my previous blog post on OLE. However, as a result of the presentations I heard at the Discovery Camp and the Charleston Conference, some new questions and points need to be raised and added to the list. These include: 


1.  Can the library community support the vision this project entails?  

At the Discovery Camp, there were over 130 academic librarians attending. I did a quick poll asking how many of those librarians worked in libraries that had programmers on staff. Fewer than 12 people (less than 10%) raised their hands (and note that some of those might have been from the same institution, but for the sake of discussion, let’s say they were not). I then asked how many of them had programmers sitting idle, waiting for projects. ZERO hands were left in the air. I know I’m in that situation at my library. If I wanted to, it would be one to two years before I could assign anyone to this project for any significant amount of his or her time, and I have a moderately sized programming team for a library.

When the FOLIO rep took the stage, the response to this issue was: “250 libraries indicated interest in providing development assistance, and there were a total of 750 attendees at 4 recent seminars.”  OK, let's take this apart and inject some facts and reality into that response.   


> First, librarians usually spend frugally, and they feel an obligation to investigate options so they can make informed choices. Open source software is often believed to be a lower-cost option (although that’s certainly not always true), so a library wants to evaluate the option. Attending a session is clearly not the same as saying they’re definitely going to download the product and put it into production! It simply means they’re interested.


> Second, 250 libraries saying they want to provide development assistance can mean very little. While it seems wildly out of fashion these days, let’s bring some facts into the discussion and look at the size of the development teams of companies/organizations that have built a Library Service Platform from the ground up. Let’s do this by using Marshall Breeding’s Library Technology website data on staffing levels for the only two organizations that have successfully developed Library Service Platform (not ILS!) products from the ground up. Those would be OCLC and Ex Libris (ProQuest), so let’s focus on those. Next, using numbers from that chart and the number of products they’re supporting (from the same chart), one can do a rough calculation of how many hours were likely devoted to Library Service Platform software development.  Then, using the year those organizations have publicly reported starting their work on the Library Service Platforms, the calculations show total person-years of development for these products of roughly: OCLC = 542, Ex Libris = 447.

If there are 250 libraries saying they could provide development assistance (which, again, doesn’t necessarily mean a programmer), and if 15% of those could provide a .25 FTE programming position on a continuing basis (I’d really like to know who those are, as I can’t find any of them in my informal surveys) to help develop the product, and even if the commercial organizations involved devote another 3 FTE just to this development effort -- even if all that were true -- it would take decades of effort to match either Ex Libris’s effort or OCLC’s effort to date.  Now, it is true that libraries could substitute financial contributions for development resources, which might mitigate this timeline.  However, given the state of higher education funding in the United States today, unless the private institutions take on this load fully (also unlikely), and even looking into the future, this still seems a very long shot.
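For the skeptical reader, the arithmetic behind that “decades” claim can be laid out explicitly. This is a rough sketch; the participation rate, FTE contribution, and person-year totals are the assumptions stated above, not published figures:

```python
# Back-of-envelope check of the timeline claim above.
# All inputs are this post's own assumptions, not measured data.

OCLC_PERSON_YEARS = 542       # rough estimated effort to date (from the calculation above)
EXLIBRIS_PERSON_YEARS = 447   # rough estimated effort to date (from the calculation above)

libraries_interested = 250    # libraries claiming they could help
fraction_contributing = 0.15  # optimistic share that actually contributes
fte_per_library = 0.25        # a quarter-time programmer each
commercial_fte = 3            # vendors' dedicated developers

# Total community development capacity per year:
annual_fte = libraries_interested * fraction_contributing * fte_per_library + commercial_fte
# 250 * 0.15 * 0.25 + 3 = 12.375 FTE per year

years_to_match_exlibris = EXLIBRIS_PERSON_YEARS / annual_fte  # ~36 years
years_to_match_oclc = OCLC_PERSON_YEARS / annual_fte          # ~44 years

print(f"Community capacity: {annual_fte:.1f} FTE/year")
print(f"Years to match Ex Libris: {years_to_match_exlibris:.0f}")
print(f"Years to match OCLC: {years_to_match_oclc:.0f}")
```

Even under these generous assumptions, the community musters about 12 FTE per year against a several-hundred person-year gap -- and that is just to reach where the incumbents are today.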


Which is a nice segue to pointing out that, during the time FOLIO is being developed to bring it to the functional level of the current Library Service Platform products, those products won’t be sitting still. They’ll continue to move forward; and, therefore, FOLIO will always, always be behind the competitive offerings unless a LOT more resources are devoted to it!   


Let’s remember, OLE went on for eight years (starting in 2008) using this same proposed resource model and never did produce a complete, production-ready system that could be installed by all types of academic libraries.  As I pointed out in my last post, the OLE software only went into semi-production status in three research libraries because it was still missing a lot of functionality.


Enterprise software requires enterprise-level development teams, and we simply do not have that available to us in the form of part-time open source developers scattered across libraries.   Enterprise development teams need programmers who can focus full time on the extremely complicated nature of the work being done in our libraries.  They also have the advantage of working toward a common, defined product definition (although this is a point of contention for many librarians, as we'll discuss below).

If you like challenges, just try to get 10 agreed-upon functional definitions for any one major library function out of a group of 250 libraries.  And if you manage that, then be sure to think about why you're frustrated with your current ILS, which ties directly back to what it takes to maintain all those versions, to coordinate software development between them, to run them on different hardware platforms, to synchronize testing, documentation, releases and installations. In other words, yes, think back to the days of locally installed ILS's and set the vision for your library back 25+ years. Tell your software developers to emulate that in the open source code they write/contribute, because you enjoyed it so much. Oh yes, and be sure to call that new open source software you develop FOLIO.  Because that's what you will be creating.

It is, quite simply, insanity that for as long as libraries have been using automated systems, we can't find common vision or "shared-best-practices" for our workflows.  You can find it at the Dean/Director level, but it rapidly starts disintegrating as you get to the lower levels of the Library. It is truly a failure of the profession and its leadership.

Does anyone besides me see the vicious cycle here?  Quite simply, the answer to the question I posed at the start of this section is: “NO, no, we can’t support this vision.”  

2. Listen to what you’re REALLY being told by the “organizations” involved!  


I found it very interesting to note that, in the Charleston Conference presentation given by an EBSCO rep, the following was said: "The code would be released in 2018." But they also said this: “The list of things that can be developed” (you caught that, right? “can”!) is:

  • E-Resource management
  • Acquisitions
  • Cataloging
  • Circulation
  • Data Conversion
  • Knowledge Base
  • Resource Sharing 
And "that you will be able to expand" (again, note the highlighting here!) the Library Service Platform to include:
  • Discovery
  • Open URL Linking
  • Holdings Management
  • ERP integration
  • Room booking
  • Analytics and student outcomes
  • Linked Open Data
  • Data mining
  • Research data management
  • Institutional Repositories
  • And ….more!….
Once again, analyze what they’re saying. They’re telling us that what will be released in 2018 is actually just the kernel of the system, not the complete system. The good news?  That might even make the date doable. But it also confirms the calculation above: if you want a complete system, you need to add on the years of development required to put all these other modules in place, which will mean a date more like what I’ve shown above, decades from now.

Now ask yourself a really hard question: Do you really believe that this list describes the functional system you’ll want, or even need, decades from now??  I doubt it. I know it doesn’t for me, not even if the numbers were far less, like a single decade from now.  

At the Charleston Conference, I heard James Neal, incoming President of ALA, say in his opening talk:  “I propose that by 2026, there will be no information and services industry targeting products to the library marketplace. Content and applications will be directed to the consumer, the end user." While I might argue his date, I will not argue the direction of his statement. It’s not that far in the future.  Think about that and the implications it has for FOLIO (and really, a lot of library software). 


3. “Follow the money” 

This is a saying that is frequently used to remind people that, in analyzing a situation, they should look at the financial benefits (and to whom they flow) in order to understand why something is happening. Let me suggest we do that here.  EBSCO is providing most of the financial backing for this project, which on the surface seems quite laudable. They’re a good company with a long history in libraries, and are privately owned by a family with a long history of philanthropic activities. BUT. They’re also a major company selling a discovery product without owning an ILS or Library Service Platform (unlike both OCLC and Ex Libris), so they can’t provide the tight integration that those solutions provide. That puts them at a severe disadvantage in the marketplace.  So they have a vested interest in diverting you from buying one of those solutions, because they don’t want to lose discovery system sales or content sales. Even if they wanted to buy another LSP vendor/product to offer, they’re too late; any that were for sale have already been bought. So, EBSCO is in a squeeze. Again, this is a big disadvantage for them, and it is why they're driving the FOLIO product toward the Library Service Platform when, in fact, it should be moving to address market segments where we really do need new tools, i.e., research data management systems, knowledge creation platforms, open access publishing systems, GIS systems, etc.  But that’s not where EBSCO wants to go, and since they’re providing the funding, the Library Service Platform is the focus.


We also had an EBSCO VP standing on stage in Charleston, telling us: “The vendor marketplace is consolidating therefore we must develop alternatives!”  Really? It seems to me we DID develop alternatives (look at this chart Marshall Breeding compiles on this topic), and over a long period of time the market has acted to consolidate and focus on the solutions that best meet its needs. In so doing, it has narrowed the field and selected two superior products to fill the Library Service Platform niche, i.e., WorldShare from OCLC and Alma from Ex Libris.  We don’t need more alternatives; the market has already spoken, and for the foreseeable future these are the successful and widely adopted products.   Alternatives at this point are only going to further fracture a profession that needs to be spending its efforts on new areas where librarians can add new value.  Simply put, at this point in time, that is not reinventing the ILS as an open source software solution!


4. What is the governing structure going to be for this effort?  


Have you noticed how little is being said about this topic? Ask, and they’ll point you to this webpage. Except there are really only two pages there, neither of which addresses the specifics of how this project will be governed. Things like:


  • Who sets code directions?  
  • Who decides and commits what gets into the code base? 
  • How are conflicts resolved? 
  • How is Open Library Foundation funded? 
  • By whom and in what amounts? 
  • Who are the board members? 
  • How do they get on the board? 
  • Who names them (EBSCO)?? 
  • How will they make strategic decisions for the organization? 
Without this information, there is cause for concern. Lest we forget, OLE went through multiple governance structures, all of which, after Duke University Library’s initial work on the project, ultimately failed.  A large part of this was documented well in this post. I’m truly unclear why we want to relive this, but apparently some of us do. Again, it seems to me we should learn from history here and, as a result, this whole governance issue should be cause for deep, deep concern.

Conclusion


I’ll say what I’ve said before. There is a real place for a FOLIO concept (i.e., mass community collaboration around open source software) in the library profession. The concept needs far more work in defining governance, sustainability and the guidelines for mass collaboration BEFORE it takes on the effort of developing any products.  What it will not be successful in doing, at this point in time, is producing a Library Service Platform. (And please remember, the concerns above are my latest ones, in addition to all those expressed previously in my last post and only briefly summarized above.)


At the Charleston Conference, one of the librarian presenters, speaking about FOLIO, proudly said: “If FOLIO is a Unicorn, then I believe in Unicorns.”  I listened intently and thought: “You’d better, because that’s exactly the concept you're buying into here…”


There are a lot of good people involved in this effort, and for the sake of all concerned, we need them to do something that will be wildly and broadly successful for libraries. It's not in doing a Library Service Platform. We’ve already put somewhere between $5-10M into the failed venture of trying to build OLE for this purpose.  Now we’re queuing up, like tourists at a DisneyWorld ride, to do it all over again. Let’s stop fawning over FOLIO, as was done recently in Information Today, apply some of the vast facts we have at our disposal in our libraries, and realize we simply CAN’T afford to waste this money or time on this particular idea.


Let’s stop this fantasy, pause, redirect and get pointed in a direction that offers real value-add and solves some of the many real problems that we really need to be solved.  

Wednesday, April 27, 2016

The OLE Merry-Go-Round spins on…

[Image: merry-go-round carousel, CC-0, https://pixabay.com/en/merry-go-round-carousel-funfair-1066295/]
The news about OLE (the Open Library Environment project) has prompted two reactions from me.  First, disappointment that my long-standing concerns about this project have proven correct; and second, dismay that the profession of librarianship has seemingly forgotten what we know, and teach our communities, about accessing and using existing knowledge to perform critical analysis in support of creating new knowledge. Such is apparently the case with the announcement that EBSCO will support a new open source project to build a Library Service Platform (LSP).

Marshall Breeding has done an analysis of this news  on the American Libraries Magazine website.  If you haven’t read it, I would suggest you do so as it will give you a solid foundation for the rest of this post. Return here when done.

Now let me say right up front, there are some very encouraging facets to this announcement.  These include:
  1. The involvement of some organizations with considerable business skills and savvy.   Both EBSCO and Index Data have been in business a long time and bring some much needed business analysis skills to the table.  This is good for the OLE project because it’s been sorely missing for a very long time.
  2. The fact that EBSCO is apparently pivoting in a substantial way to support open source software for the community.  I’m cautiously optimistic about this move.
  3. There are few people in the Open Source Software (OSS) business I respect more than Sebastian Hammer and Lynn Bailey.  Sebastian in particular was doing open source software before most librarians even knew what the term meant.  I’ve partnered with him in past business projects and know his expertise to be amongst the best in the field.  Lynn brings business skills to the equation and together they form an excellent duo.  They have numerous OSS success stories to point towards.  This is good for the OLE project. 
  4. At a presentation at the recent CNI Membership Meeting in San Antonio, in a session led by Nassib Nassar, a Senior Software Engineer at Index Data, he discussed their plan to use microservices as the foundation for this new project.  Microservices (an evolved iteration of SOA architecture) focus on small, loosely coupled software services.  A very good explanation of microservices can be found here.  This is a promising architecture that has been evolving over the past several years and certainly might have applicability for future library software projects (see below for more on this point).
Now for a little history on the OLE project: Back in August 2008, (note, that was nearly EIGHT years ago) according to the press release, the Mellon Foundation provided an initial grant of $475,000 to support OLE.  The announcement said: “The goal of the Open Library Environment (OLE) Project is to develop a design document for library automation technology that fits modern library workflows, is built on Service Oriented Architecture, and offers an alternative to commercial Integrated Library System products.”

You’d be forgiven if you think that announcement sounds amazingly close to the most recent one, which says: “It carries forward much of the vision… for comprehensive resource management and streamlined workflows.” You'd also be forgiven for thinking that, after eight years, we might have expected something more.

But for now, let’s work our way through this announcement, using what we do know about the history of library automation systems, in order to pose some questions I really think need to be asked:
  1. Is the existing OLE code base dead?  Marshall might have been a little too politic in his article, but I’ll say the obvious:  after eight years (2008-2016) of development, grant awards totaling (according to press releases) $5,652,000 (yes, read that carefully: five million, six hundred and fifty-two thousand dollars) from the Mellon Foundation, and who knows how much in-kind investment by affiliated libraries (through the costs of their staff participation in the OLE project), it has all resulted in what Marshall points out in his article: “EBSCO opted not to adopt its (Kuali OLE) codebase as its starting point.” And the “Kuali OLE software will not be completed as planned,” but will be developed incrementally to support the three (emphasis my own) libraries that currently use it in production; “it will not be built out as a comprehensive resource management system.”  For those of you not experienced in software development, that phrasing is code for: “it’s dead.”  They’re going to start over from scratch. Sure, they’ll reuse the use cases, but for well over $5.5M we should have expected, indeed demanded, a lot more. Let’s also remember that this means a number of very large libraries, all over the country, delayed their move to newer technology while waiting for OLE.  They stayed on older, inferior ILS systems, and they and their users suffered as a result.  How do we factor that cost in?   Now, sure, we’ll call this new project OLE to paper over this outcome, but folks, please, let’s be honest with ourselves here: OLE has failed and it has carried a huge cost.
  2. Do we really need microservices?  Yes, it’s the latest, greatest technology.  But do we need it to do what we need to do?  And do we fully understand all the impacts of that decision? What value does it bring us that we don’t have with existing technology?  Is it proven using open source software in a market our size?  (Yes, Amazon uses it.  But Amazon is a huge organization with huge staff resources to devote to this.  Libraries can’t make either claim.) We must answer the question: what is the true value of building a library system based on this? What will libraries be able to do that they can't do with current LSP technology?  Why should we take this risk?  Do we really understand the costs of developing and maintaining software using this technology? Do we really want to experiment with this in our small and budget-tight community?
  3. Governance – Haven’t we been here before? What’s different?  A new Open Library Foundation is being envisioned to govern OLE.  But hasn’t this been tried?  I thought the reason the Kuali association was put into place was because the financial need and the overhead of running a non-profit organization were too taxing on the participant organizations?  So, the Kuali association made a lot of sense from that viewpoint.  But now the libraries are going to return to a separate foundation?  Why is it going to work this time when it didn’t previously?  Because we have vendors at the table? Because we think we’ll enroll more participating organizations?  (See later points on this subject.)  Because we found out that charging libraries to be a full participant in an open source software project didn’t fly with the crowd?  Given that library budgets and staffing are stretched to the limit, what is the logic that suddenly says we’re going to now have the capacity to take this new organizational overhead on?  I admit I’m totally mystified by this one. This choice seems to have an incredibly low probability of success.  The merry-go-round continues…
  4. So, OLE will again be solely aimed at academic libraries?  This new project is once again focused on academic libraries. This is good.  And it’s bad.  It’s good because, as I’ve argued countless times in this blog, success in a software project is dependent upon building a good solution that addresses a market need so thoroughly and successfully that it finds widespread adoption as early as possible within that segment.  Then, and only then, should a project branch out to address related segments.  To do so too early can result in lower adoption rates (see OCLC’s WorldShare, a product trying to address too many markets concurrently, and their resulting low adoption rate in academic markets.  Compare this to Ex Libris’ Alma, a product focused on academics and experiencing significant success as a result).  The reason this focus is bad is for the reasons I pointed out, back in 2009, in this blog post.  Back in 2009 they also focused on academic markets, but I questioned how they would add additional market segments; what the competitive positioning and market share would leave for OLE; and whether it would be enough to sustain the product and/or its development.   Again, in 2012, I did an analysis of OLE in which I also questioned the chosen architecture, saying: “OLE is going to miss out on the associated benefits of true multi-tenant architecture.”  Well, here we are anywhere from four to seven years later, and it appears those concerns were entirely correct, i.e., the choices made were wrong.  It gives me little satisfaction to say this, but I think people ignored the obvious.  Given this most recent announcement, I’m concerned once again the merry-go-round is going to continue.
  5. Multi-Tenant – redefined? The choice of microservices as a new architecture is definitely interesting.  But it has some implications I don’t think many fully understand.  This new version of OLE, based on microservices, will, quoting Marshall’s article: “provide core services, exposing API’s to support functional modules that can be developed by any organization.”  Let me share my interpretation of that statement: what will be delivered in the first release is probably a very basic set of services, and exactly what that will include needs to be very openly and transparently communicated to the profession ASAP.  Because without it, there is no way to know whether that means basic description and fulfillment (circulation) processes, or just a communication layer on top of databases, for which users will then have to write additional microservices to provide each of the following: selection (acquisitions and demand-driven acquisitions (DDA)), print management (circulation, reserves, ILL, etc.), electronic management (licensing, tracking, etc.), digital asset management (IR functions), metadata management (cataloging, collaborative metadata management) and link resolution (OpenURL).  Because as I’m sure you realize, that’s a lot of additional microservice code that someone is going to need to write to make a fully functioning system.  Plus, I’ll just say, as someone who has been involved in software development for nearly three decades, I find it hard to believe that you can write all these additional related microservices and not need to change the underlying core infrastructure microservice.  Or at least, at the start of designing that core infrastructure, you would need to know in some detail how those other pieces of code are going to work, so you can provide the supporting and truly necessary infrastructure calls/responses back to the related microservices.
If that doesn’t happen, then when a major new microservice is developed, that core will have to be modified and updated. So, why am I saying multi-tenant appears to need redefinition in this model?  Multi-tenant means there is one version of that core code, perhaps running in multiple geographic locations for failover reasons, but the exact same code running everywhere.  This brought us the capability to move forward in some big ways: establishing best practices, compiling transactional analytics that would allow global comparisons of libraries’ effectiveness and, as mentioned above, real failover capabilities, which given global weather conditions is becoming more and more important.  But now, with the microservices version of multi-tenant LSP’s, we’re back to everyone customizing their implementation, and only that common shared core code remains truly multi-tenant.  Everybody else is doing something different.  Great for allowing customization to unique institutional needs, but sacrificing many of the benefits of true multi-tenant software design.  Plus, given the competitive nature of vendors in our marketplace, I have a very hard time believing for a second that one will agree to serve as a failover location for another vendor running the service.  Maybe, but I’m definitely not holding my breath.
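To make the point concrete, here is a deliberately toy sketch (all class and method names are hypothetical illustrations, not FOLIO code) of what this model implies: the shared core stays identical across installations, while each site writes its own circulation microservice, so only the core remains truly multi-tenant:

```python
# Toy illustration (hypothetical names, not actual FOLIO code): the shared
# core is the only code that is identical across sites; each library supplies
# its own circulation logic, so behavior diverges per installation.

class CoreStorage:
    """The one piece of truly multi-tenant code: the same everywhere."""
    def __init__(self):
        self._loans = {}

    def record_loan(self, item_id, patron_id, days):
        # Core only persists the transaction; policy lives elsewhere.
        self._loans[item_id] = (patron_id, days)
        return days


class LibraryACirculation:
    """Site-specific microservice written by Library A."""
    def check_out(self, core, item_id, patron_id):
        return core.record_loan(item_id, patron_id, days=28)  # 4-week loans


class LibraryBCirculation:
    """Site-specific microservice written by Library B: different rules."""
    def check_out(self, core, item_id, patron_id, is_faculty=False):
        days = 120 if is_faculty else 14  # semester loans for faculty
        return core.record_loan(item_id, patron_id, days)


core = CoreStorage()
a = LibraryACirculation().check_out(core, "b1", "p1")        # 28 days
b = LibraryBCirculation().check_out(core, "b1", "p2", True)  # 120 days
# Same core code, divergent behavior per site: cross-site analytics,
# best-practice comparisons, and failover now depend on code the core
# project does not control.
print(a, b)
```

Every benefit the post attributes to true multi-tenancy (comparable analytics, shared best practices, interchangeable failover) depends on the policy layer being common, which in this model it is not.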
  6. 2018?  Who are we trying to kid? Marshall’s article contains another key phrase:  “Kuali OLE, despite its eight-year planning development process, has experienced slow growth (emphasis my own) beyond its initial development partners, and it has not yet completed electronic resource management functionality.”  Indeed, that would be true.  At the time of this writing, there are three (yes, you can count them on one hand and have fingers left over) sites in “production” mode, which apparently means production minus the capability to handle electronic resources (a fairly major operation in academic libraries, wouldn’t you agree?).  So, I will admit I nearly fell out of my chair when I heard it said at CNI (later confirmed in Marshall’s article) that they expect to have an initial version of the software ready for release in early 2018.  My goodness.  Please pass me some of whatever you’re drinking, because it sure must be a good energy drink, or more probably a hallucinogen!  Some points to consider here:
    • OLE was worked on from 2008-2016 and is still missing functionality.  It was, as mentioned above, put into production by three libraries.  However, there were, according to the Kuali.org website, 15 partners, although two of those were EBSCO and HTC Global, vendors with an interest in the code.  I believe that’s out of 3,000+ academic libraries in North America?  Slow growth indeed.
    • HTC Global was hired as a contract programming firm to expedite the development of the code, precisely because the number of programmers needed to do the project in a timely manner was NOT available from the library community at large.  Do these people really, REALLY think that because they’ve now broadened the scope, libraries are going to assign their limited (and frequently non-existent) programming resources to this project?  I probably have a one-year backlog for my programming staff before I could even think of assigning resources to this project -- in a research library.  As I keep pointing out to my colleagues when discussing open source projects, we have to remember many academic libraries have NO, zip, zero, zilch programmers on staff.  Where oh where do they think they’re going to find the needed programmers to get this massive project done? I’ll say the obvious:  it won’t happen.
    • As noted above, what will likely come out in the 2018 version of OLE is just the core code.  So, add a lot of time for those additional, oh-so-necessary microservices modules needed to make this a complete product.  Index Data really needs to be transparent (by posting on their website) about exactly what the actual deliverable of v1 will be.  Libraries need to know what they will have to build on top of it as additional microservices (think of microservices as functional modules). This will clearly mean extra cost and probably extra time to get to “complete” (maybe it could be done in parallel with careful planning).
    • Let’s also remember the definition of “complete” product is ever evolving.  Even if they could get something out in two years, WorldShare and Alma are not going to sit still. They’ll be 24 releases further down the road.  So “Complete” is a moving target.
    • Let’s also take a moment to study some historical data here.  The Library Journal Annual Automation Reviews have sometimes provided staffing analysis for the firms involved (the last staffing report was in 2013).  If we look at the major players that have tried to develop a “true” Library Service Platform (Ex Libris, OCLC and Serials Solutions), we see reported staffing numbers of between 130 and 190 people (granted, they were working on more than just the LSP within their organizations, but you can bet the majority were working there).  One of those projects (Intota by Serials Solutions) never made it to the street.  Two (WorldShare by OCLC and Alma by Ex Libris) did.
  7. Do we have options here?  Of course, there are always options.  At least two come to mind, both of which I’ve certainly advocated before:
    • Librarians have already worked together in a collaborative to create a Library Service Platform.  It’s called WorldShare, and it was developed by OCLC.  Librarians need to collectively call upon the collaborative they theoretically own, and help govern, and say: “We want to make WorldShare open source.”  It certainly has issues, but it’s a far more realistic vision than the one being described for the next generation of OLE.  The microservices could then be extended out from a solid, true multi-tenant platform with real APIs.  For that matter, if those microservices worked with WorldShare, there should be no reason those very same microservices could not also work with Alma, provided similar APIs were supported by Ex Libris (or could be added).  This would broaden the adoption base for the microservices, and thus the support for them.
    • Again, let’s take a moment to examine history and see if there is something we can learn to apply to today’s situation.  For instance, look at the history of some of the early vendors in the library automation space.  NOTIS started out with a pre-NOTIS real-time circulation module.  Data Research started out in libraries with a newspaper indexing module, which eventually gave way to ATLAS (A Total Library Automation System).  And Innovative Interfaces, per their website: “Innovative provided an interface that allowed libraries to download OCLC bibliographic records into a CLSI circulation system without re-keying.”  The point is this: none of these systems started out as a comprehensive, do-it-all solution.  They started out with niche products and responded to market opportunities until they ended up shaping the products we have today.  If OLE wants to build a comprehensive library service platform, it clearly can’t do so in the time frame needed.  Instead, it needs to start out with a niche product that addresses a key market need (perhaps managing research data? providing a citation tool for research data? a library linked-data solution that integrates with existing search engines?), and then drive that product to a leading position in the market.  Only THEN should it start moving sideways to encompass other functionality such as is found in an LSP.
Some remaining questions

Of course, this announcement is being positioned as a big step forward and a positive development.  But it seems to me that, in addition to the questions posed above, there are some tough questions to be asked before the profession blindly plunges ahead:
  1. Why did OLE fail?  There was a LOT of time and money spent to produce what were essentially use-cases.  Do we really understand what went wrong and what needs to be done differently?
  2. Why did the foundation model/associations fail?  What will be different this time?
  3. Are we entrusting this new version of OLE to the same administrative people who did the previous version? Why?  Don’t we owe it to ourselves to think carefully about the leadership of the project? Is the addition of Index Data and EBSCO enough? We need to think carefully about both governance and administration.  What will everyone do differently to ensure the project’s success this time?
  4. What are the lessons to be learned about open source development for an enterprise module?  Is the library community truly large enough, and well resourced enough, to support the development of an enterprise, foundational module for libraries such as the Library Service Platform?  (It would appear not.  I’m willing to be convinced with appropriate data, but I warn you, that’s going to be a tough sell!)
Librarians are some of the most wonderful, positive people in the world.  But this is a time when the rose-colored glasses need to come off.  We need to ask these serious questions, get some thoughtful answers and do some serious analysis, using our existing knowledge base to determine the best path forward.  Otherwise, this crazy merry-go-round called OLE is just going to keep spinning with no real forward progress.  We can’t afford for that to happen again.