Monday, January 14, 2013

GOKb: Is this an idea that will go?

Knowledge bases are a source of much frustration for today’s librarians and, if truth be told, for the vendors and organizations that assemble, market and sell them.  It is a very complex field and business.  The GOKb project is one open effort, coming out of the library side, that is trying to address some of those frustrations.

At the past Charleston Conference, I was invited to attend a presentation on GOKb, the Global Open Knowledgebase.  Here is a good slide presentation about the offering which, if you haven’t heard of it or don’t know what it is about, is a good introduction.  If you don’t have time to go through that slide show, this post from the Kuali site also provides a quick overview:
“GOKb is not a link resolver knowledge base; it is focused on global-level metadata about e-resources with the goal of supporting management of those e-resources across the resource lifecycle. GOKb does not aspire to replace current vendor-provided KB products. But it does aspire to make good data available to everybody, including existing KBs, and to provide an open and low-barrier way for libraries to access this data. Our goal is that GOKb data permeates the KB ecosystem so that all library systems, whether ILS, ERM, KB or discovery, will have better quality data about electronic collections than they do today."
Now, I want to say right up front that I’m impressed with the creativity and intelligence behind the design.  The people behind this project clearly understand the problems they want to solve and what they’d like a solution to do.  

As I told the organizers after the Charleston session, in an ideal world, I think they’d be well on their way to success.  Unfortunately, as we all know, we don’t live in an ideal world.  So these are some of the issues I think need to be overcome for this idea to be viable and sustainable: 
  1. Focus.  In all honesty, what I heard and saw was a wide variety of issues, some of which I think might be better solved, or at least better understood, if the developers approached the vendors for an open exchange of concerns and ideas and a joint search for solutions. GOKb appears to be taking on a very broadly scoped problem for which the solution offers only partial control for the foreseeable future.  That alone poses significant hurdles on the road to success.
  2. Quality control.  Based on my experience, this is an area where I think GOKb is vastly underestimating both the need and the importance (although, after hearing this at the Charleston Conference, they may be doing more here).  I realize the community behind this idea is trying to use the open source software model in building the GOKb knowledge base.  However, knowing what it takes to maintain a proprietary knowledge base (the level of expertise and knowledge required, the relationships that need to be maintained with the publishing community, the details that must be examined and massaged in this kind of data), I’m highly skeptical that GOKb will be able to build, and equally important sustain, that kind of effort using community approaches.  In part that fear is based on the size of the community, which, as I outline below, could be smaller than expected.  It will take a very large community effort to achieve the quality needed.  Furthermore, even if the community size were assured, there will be issues that require someone (like a committer in the OSS model) to take on the responsibility of deciding what is right and what goes into GOKb. Here are a couple of specific instances that are cause for concern:
    • Title changes (title histories as well as platform changes) are a frequent issue. How will these be handled while still assuring the quality of the data?
    • Vendor files may follow the KBART recommendations, and they may be downloadable in a standard way every week or month.  However, that does not, in and of itself, assure the quality of the data. Experience has often shown that it varies quite a lot, and thus commercial KB providers use thousands of rules in their scripts to massage the data, check it, change it … you name it. And the vendors still encounter problems after releasing it.
  3. Too many “if” statements.  There are a lot of good, solid ideas behind this project, but what if those “ifs” don’t turn out the way desired?  Are there alternatives in the wings?  For instance:
    • OLE/GOKb dependency.  What I heard was that this is being coupled with OLE and that use by the OLE members is what will really drive the expansion of GOKb.  However, even by OLE’s own admission, adoption and production usage won’t happen until 2014.  A lot can change in that time frame.  The GOKb people, when I pointed this out, told me: “OLE sites will have to load title lists to support their operations, linking to licenses and orders, etc. Those title lists will support GOKb. OLE sites will also have to maintain those lists because it’s what they pay for and track usage for. So even if that data isn’t driving discovery, it’s being maintained at the level needed for management, which is what we’re trying to do. Extending it beyond the OLE libraries is a roadmap goal, but not required for GOKb to serve its primary purpose as being a management KB for OLE.”  But as I’ve noted in this post in my blog, OLE is still primarily a large academic libraries project, and there is no assurance, at this point, that it will find a receptive market beyond those types of libraries.  If it does not, that will limit the size of the user base and cause the workload of maintaining this data to rest on a much smaller number of institutions.  That will raise each participant’s need to commit resources.  Will that be possible?  Who knows at this point?
    • Another silo?  It appears to me that what is being built here is yet another KB silo to be maintained and interfaced with existing KB silos.  For that to be at all manageable, it will need an automated interface, presumably through APIs, so that an exchange of data can be easily performed.  My concern here is that we have neither the needed interfaces specified nor the standards in place to make this a truly scalable and easily implemented process.  Those steps will require a great deal of time, as vendors will need to get the specs nailed down and then factor them into their development schedules.  Furthermore, even if we had the APIs and the data formats in place, cross-referencing these databases is often very problematic.  It may not be unreasonable to expect to match the big resources in the KB, but it is wildly optimistic to think the majority will match as intended, and this will result in a lot of work and effort to sort these out on a continuing basis. The GOKb organizers tell me that they are seeking willing partners on the supplier side to exchange data using existing standards.  As they point out, vendors, in developing applications that can accept ONIX data, can no longer say there are no systems that consume that data.  They also tell me that the JISC partners have some evidence that all the players have expressed some interest in this standard and have done some work with it.  So there is some cause for hope here, although I’m worried the timeline for all this to happen is much longer than most librarians realize.
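To make the data-quality point above more concrete, here is a minimal sketch in Python of the kind of validation a KB provider has to run against an incoming KBART title list before loading it. The field names are real KBART columns, but the specific checks are invented for illustration; real providers apply thousands of such rules.

```python
import csv
import io
import re

# A few of the columns defined by the KBART recommended practice.
KBART_FIELDS = ["publication_title", "print_identifier", "online_identifier",
                "date_first_issue_online", "date_last_issue_online", "title_url"]

ISSN_RE = re.compile(r"^\d{4}-\d{3}[\dXx]$")

def validate_kbart_row(row):
    """Return a list of problems found in one KBART row (a dict)."""
    problems = []
    if not row.get("publication_title", "").strip():
        problems.append("missing publication_title")
    for field in ("print_identifier", "online_identifier"):
        value = row.get(field, "").strip()
        if value and not ISSN_RE.match(value):
            problems.append("%s is not a valid ISSN: %r" % (field, value))
    first = row.get("date_first_issue_online", "").strip()
    last = row.get("date_last_issue_online", "").strip()
    # An empty last date usually means coverage is current; compare only when both exist.
    if first and last and first[:4] > last[:4]:
        problems.append("coverage dates out of order")
    return problems

def validate_kbart_file(text):
    """Validate a tab-separated KBART file; return {row_number: problems}."""
    reader = csv.DictReader(io.StringIO(text), delimiter="\t")
    bad = {}
    for n, row in enumerate(reader, start=2):  # row 1 is the header
        problems = validate_kbart_row(row)
        if problems:
            bad[n] = problems
    return bad
```

Even this toy version shows why a community-maintained KB needs someone responsible for deciding what is "right": every rule embeds a judgment call about the data.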

There is a lot of good effort and money being put into the GOKb project and its ideas.  But there are clearly issues surrounding GOKb that need resolution. Without that, GOKb might end up being yet another silo of data to be maintained, and one without a clear pathway to the broader adoption and support that will sustain it.  As I’ve noted in many blog posts and in my many talks, librarianship needs more of these types of collaborative efforts, and this one incorporates many excellent ideas.

I urge librarians to pay close attention to GOKb and to contribute and participate in any way they can to make it a viable and sustainable idea.  Clearly the time to do that is right now.

NOTE: After posting this, a reader reminded me (and I apologize for not including it in the original post) that JISC has also been making some efforts on this front with their KB+ project.  Data to be included in that knowledge base includes: a) Publication Data for all NESLi2, SHEDL and WHEEL agreements, all freely available under a CC0 license, b) Subscription Information and c) License information.  The GOKb people had previously mentioned to me that they were in touch with JISC about KB+ and sharing ideas.  Another reader has told me that there is actually a great deal of cross work happening between the two groups including the sharing of resources and joint meetings (with the next one scheduled for late January 2013).  So, hopefully this will have a good result for both projects.

Monday, January 7, 2013

"Going Mobile"; What does that mean for libraries?

I remember when The Who originally released the song “Going Mobile,” which is about one of the band members driving a mobile home across the country (if you're too young to remember that, then click on the link and listen to some good classic rock). Listening to it now, I find the lyrics have some suitability for describing people's use of mobile devices today:
“Goin' mobile, I can stop in any street, And talk with people that we meet.  Goin' mobile, Keep me moving, Out in the woods, Or in the city… Goin' mobile…”
Stephen Abram recently did a post on his blog about the Pew report discussing mobile library usage, which showed:
“Some 13% of Americans ages 16 and older have visited library websites or otherwise accessed library services by mobile device – a figure that has doubled since the last national reading was taken.”
He followed that post with this one stating: 
“Nine in 10 college students to own a smartphone by 2016" 
The January 1st, 2013 issue of PC Magazine contains an article on the top ten tech trends for the year as predicted by the Gartner Group.  One of those predictions states that:
"2013 will be the year that mobile phones will surpass PC’s as the primary device used in accessing the Web." 
All of this only underscores the importance of making the mobile device interface a desirable and useful experience for our library membership.   

When I recently gave the keynote presentation at the PALCI Membership Meeting and talked about the larger subject of next-generation library automation infrastructure, I noted the challenges we’re facing in dealing with mobile users and their devices.  Afterward, a member of the audience pointed out that, in his opinion, one of the prime challenges we’re facing isn’t really tied so much to whether the device is mobile or not, but rather to the size of the screen, i.e. big vs. small. His point, while certainly valid, is in my opinion really only one facet of a much larger set of complexities involved in supporting the mobile user.

In support of that point, I’m sure many of you have had the frustrating experience of visiting a website on an iPad or a laptop with a reasonably full-size screen, only to have the site decide you’re on a mobile device and present a lower resolution and, typically, a substantially reduced feature/functionality set. (Library discovery interfaces are a prime example here!)  It’s very frustrating, and a bad choice by the interface designers/coders. Unless they really understand their users’ needs, chances are they only cause users to leave the site until they can come back (which may or may not happen) using a machine that will be offered the site’s full functionality.  What’s even more frustrating is that good designers and coders can largely avoid putting users through this experience.  There are many websites that provide excellent information, like this one for Android devices, describing how to design a web interface that determines whether a smaller display is in use and what to do in that event.  The larger point, however, is that we have to understand that when a user is mobile, they need the functions important to them at that point in time, which is neither the barest minimum set nor the fullest functional set. It’s the appropriate set of functions.
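One way to picture the "appropriate set of functions" idea is to have each feature declare what it needs, rather than hard-wiring a "mobile" and a "desktop" version of the site. Here is a deliberately simplified sketch in Python; the feature names and thresholds are invented for illustration, not drawn from any real discovery interface:

```python
# Each feature declares its own requirements, instead of the site
# guessing "mobile = minimal".
FEATURES = {
    "search":        {"min_width": 0},
    "renew_loans":   {"min_width": 0},
    "facet_sidebar": {"min_width": 768},   # needs horizontal room
    "map_to_shelf":  {"min_width": 0, "needs_location": True},
    "batch_export":  {"min_width": 1024},  # bulk work, a big-screen task
}

def select_features(viewport_width, has_location=False):
    """Pick the set of features appropriate to this device context."""
    chosen = []
    for name, needs in FEATURES.items():
        if viewport_width < needs["min_width"]:
            continue  # not enough screen room for this feature
        if needs.get("needs_location") and not has_location:
            continue  # feature only makes sense with a position fix
        chosen.append(name)
    return sorted(chosen)
```

Under this approach a full-size tablet is never demoted to a phone interface just because it is mobile, while a phone user standing in the stacks gains a location feature the desktop user never sees.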

In order to achieve this, we should also seize the opportunities mobile interfaces create for us. That will include things like HTML5, apps, semantic content, processing inputs from many sensors, streaming content, gesture recognition, speech interfaces and creating simple, human-driven designs. Our users will want a Web that works the way they, and the world they live in, work. As the stats above show, the web is becoming mobile by default. We also need to remember that IPv6 has 340 trillion, trillion, trillion addresses. That’s for a reason. The world is awash in data, and much of that data will be coming from IP-addressed sensors. The mobile device will be reading and providing information to and from those sensors. There is probably already a GPS in your phone, as well as a gyro.  More is on the way.

Another article I read this past week, by Ben Showers in the December 2012 issue of Research Information, touches on some of the other considerations for designers of library mobile interfaces.  His article included ideas like a:
“library card on your phone, which is always with you and enables you to check out books as well as lookup content, and a social networking app for students on distance-learning courses that allows you to connect with people doing similar subjects or from the same institution”.
I'll note that equally important are things like location-sensitive tailored information displays. If you’re standing on the hilltops of San Francisco looking at the skyline, you want your mobile device to use its camera to see what you’re seeing and give you information about that skyline: the buildings you’re seeing, who the architects are, who owns them, the history of the buildings, who resides/works in them, etc.  If you’re looking at the night sky, you want the mobile app that is describing the stars you’re seeing to link to your library and show you where you can find more information about those stars. The list goes on and on.
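Location-aware services like these rest on a small amount of arithmetic once the phone's GPS supplies a position. As a sketch (the branch names and coordinates below are invented), here is the standard haversine formula used to find the library branch nearest a mobile user:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * asin(sqrt(a))  # mean Earth radius, ~6371 km

# Hypothetical branch locations as (latitude, longitude).
BRANCHES = {
    "Main":     (37.7793, -122.4157),
    "Mission":  (37.7520, -122.4186),
    "Richmond": (37.7816, -122.4645),
}

def nearest_branch(lat, lon):
    """Return the (branch_name, distance_km) closest to the user."""
    return min(((name, haversine_km(lat, lon, blat, blon))
                for name, (blat, blon) in BRANCHES.items()),
               key=lambda pair: pair[1])
```

The same calculation underlies "what am I looking at?" features: compare the user's position and heading against a database of known points and surface the matching library content.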

How are we going to do this?  I suggest we realize that:
  • APIs become the key to the future of information provision and utilization. People will want to use your library's information resources, but they'll want to use them in their own way.  No one interface is going to do that for them, so easy-to-use APIs will make it possible, likely through apps of some type.  This is also why, when I write about the new cloud computing and library services platforms, I see the need for extensive and easily used APIs (Application Programming Interfaces) as so very critical to our success.  The opportunity they create is for us, as librarians, to weave our library content and services into people's life experiences.
  • We will need to understand the real nature of what mobility means, as described above, and then develop, or work with, apps that enhance and educate the user based on their location, the app’s specific focus, the user’s profile and those sensors that will be feeding that mobile device information about the user environment.   
  • We'll also need our library staff to be able to use their mobile devices throughout their workday while working with users/members, interacting with these new library services platforms in ways that help them provide enhanced and better services for library members/users.
Doing this will allow us to make our libraries more valued by our members/users, because we will be able to weave library content and services directly into their life experiences while making those experiences both enhanced and educational.

"Going mobile" is an essential, important and exciting part of library services both today and in the future.  Make sure your library content and services are going to be as mobile as your library membership/users.  

NOTE:  After writing the post above, the NY Times ran an article with some great ideas (although not written for libraries, they are directly applicable and fit with what I said above).  By creating functionally targeted apps that deliver needed content/services into users' lives and work at the appropriate point, librarians stand a real chance of taking back some search functions from Google.  It's a great article for stirring the imagination.

Wednesday, January 2, 2013

The Top Tech Trends session at ALA MidWinter in Seattle

I'm quite honored to have been asked by LITA to help organize and moderate this year's Top Technology Trends session at ALA Midwinter in Seattle.  LITA leadership decided to change the format of the session this year, and together we worked to:

  1. Make it a discussion rather than a panel. Toward that end, it will not feature people lined up behind tables or formal presentations, but instead people sitting in chairs in a semi-circle so they can easily see each other, facilitating the discussion and the exchange of thoughts and ideas in response to the questions posed about the topic.
  2. Bring to the forefront some newer, but clearly leading and promising, people among the LITA membership.  We also worked to make sure that in doing this we increased the representation of women on the panel, so as to more accurately reflect the larger profession.
  3. Provide a mix of backgrounds to address the topic as represented by people from for-profit, non-profit, standards and library backgrounds.
As a result, we have confirmed a group of amazing and knowledgeable people:

  • Mackenzie Smith, University Librarian, University of California, Davis.
  • Bess Sadler, Information Systems Project Manager, Stanford University
  • Julie Speer, Associate Dean for Research and Informatics, Virginia Tech University Libraries
  • John Law, Vice President Discovery Solutions, Serials Solutions
  • Todd Carpenter, Executive Director, NISO
  • Roy Tennant, Senior Program Officer, OCLC
As noted above, I will be acting as the moderator of the panel.  In part, this is because the panel is focused on the topic "If Data I Created Resides in a Cloud Environment, Is it Still Mine?", which was sparked by a blog post I did about library data ownership.  (If you have questions you want me to consider asking this panel, please send them to me!)

This topic is really timely and has a tremendous number of implications for libraries, all of which we'll try to explore during the course of this session.  I think you'll find it well worth your time, and I hope you'll support the changes we've made to the session by showing up (en masse!) and then by giving us your feedback.  So please put it on your schedule by clicking on this link, which will give you additional information, including date/time and a link to automatically add it to your calendar.

The Twitter hashtag for this event will be #alamwttt

See you there.  Join in and learn about the topic of library data ownership!