Tuesday, May 19, 2015

The next step on the path of building a Knowledge Creation Platform

This is one of the photographs that hangs in my office; it’s a quote from Buckminster Fuller:  
“You never change things by fighting the existing reality. To change something, build a new model that makes the existing model obsolete.”  
It’s a line of thinking I adhere to frequently. So it’ll be no surprise to those of you who have followed this blog that one of my pet projects is not to try to perfect existing discovery solutions, but rather to build a Knowledge Creation Platform. For a starting description of that concept, see this article.

Now combine that thinking with something I heard back in 2011 at the Charleston Conference, in a session titled “New Initiatives in Open Research” featuring Cliff Lynch and the late Lee Dirks. Cliff said:  
“If you do the math, you will find horrifying numbers, something like a scientific paper is published every minute or two.  It means you’re buried.  It means everybody’s buried.”  
That fact really stuck with me.

Today, what I see and experience in the University environment is the pace of knowledge creation becoming so intense and so fast that our current tools for researching, building and expressing new knowledge are outdated. And thus the existing processes we’ve automated are equally outdated. This strikes me as a set of problems in need of a very large fix.

So I want to introduce what I think is a very exciting step forward in beginning to address those issues.  We’re starting a new initiative with the high-tech firm Exaptive.  Here is a video describing their product, with an introduction to the initiative. I encourage you to watch it before reading any further.

Ok, you’ve watched it and now you’re back? Great.

Let me fill you in on the thinking behind this announcement.  Consistent with what I’ve said in articles and related blog posts, I want to provide our library users/members with substantially different capabilities from those they get when they use a generic search tool like Google Scholar.  As noted in my writings, most discovery/search tools query repositories or databases that contain existing knowledge metadata and content.

Certainly that’s very valuable. But those existing tools make no real provision for analyzing this existing knowledge or drawing correlations between data sources, nor do they suggest overlaps, visualize the results, or allow the user to easily bring together the people behind the knowledge found.  The existing tools give you the capability to find existing knowledge, not to start building new knowledge.  The Exaptive product, however, moves us down the path toward knowledge creation and more.  The implications of this are really far-reaching.

For example, the first project we’re moving forward on involves a researcher looking for a concept that, over the centuries, has existed in many cultures and languages under many different terms.   Our researcher knows English and several other languages, but not all the languages in which that concept might be expressed.  So what he is looking for is a tool that will function as an “authority file,” if you will, essentially providing “see” and “see also” references across those many languages.  By using known taxonomies, linked data, and library-related, accessible authority files, we hope to analyze the data sources and then visualize the results to show the overlaps and correlations that exist between them.   We believe that when data sources are analyzed this way, the Exaptive product should provide tremendous new insight into the topic and field of study.
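To make the “authority file” idea a bit more concrete, here’s a minimal sketch in Python of how “see” and “see also” references could map one concept across languages. Every concept name and term below is an invented placeholder, not drawn from any real taxonomy:

```python
# A hypothetical authority record: one concept, preferred labels in many
# languages, plus "see" (variant terms) and "see also" (related concepts).
authority = {
    "concept:hospitality": {
        "labels": {                          # preferred term per language
            "en": "hospitality",
            "grc": "xenia",
        },
        "see": ["guest-friendship"],         # variant terms -> this concept
        "see_also": ["concept:almsgiving"],  # related concepts
    },
    "concept:almsgiving": {
        "labels": {"en": "almsgiving", "ar": "zakat"},
        "see": [],
        "see_also": ["concept:hospitality"],
    },
}

def lookup(term):
    """Resolve a term in any language to its concept, all of that
    concept's labels, and its "see also" cross-references."""
    for concept_id, record in authority.items():
        if term in record["labels"].values() or term in record["see"]:
            return concept_id, record["labels"], record["see_also"]
    return None

# A researcher who only knows the Greek term still reaches the
# English-language literature, and is pointed at related concepts.
print(lookup("xenia"))
```

A real implementation would of course draw these records from linked-data sources and existing authority files rather than a hand-built dictionary, but the “see”/“see also” traversal is the same.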

Another exciting part of the Exaptive product is the ability to create what are called “cognitive networks”: groups composed of the researchers responsible for creating the research and research data found and utilized in the analysis phase.  Unlike social networks, where you have to slowly and manually build your connections or friends, the cognitive network is built automatically as researchers explore, select, filter, and analyze the data they need. The result is that these cognitive networks facilitate a researcher’s work instead of adding to it. This cognitive network can become a set of collaborators or peers that, if willing, could focus on analyzing, vetting, and refining the new knowledge from inception to dissemination.  (Yes, obviously, trust plays a huge part in this and must be dealt with as part of the model.)  It’s a model that would be more capable of scaling to incorporate the vast amounts of research being conducted today, and it would increase the speed of dissemination of the new knowledge that results, because dissemination would no longer depend solely on the publication of physical artifacts such as papers or books (although that could still be done). Rather, it would accommodate knowledge being born digital; once vetted by the cognitive network, that knowledge could quickly be disseminated to others to continue the cycle and build upon it further.  Think about how powerful that could be in creating new knowledge!
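The “built automatically” part is the key difference from a social network, and it can be sketched very simply: every time a researcher selects an item, the system strengthens a link between that researcher and the item’s authors. The names and the weighting scheme below are purely illustrative, not Exaptive’s actual mechanism:

```python
from collections import defaultdict

# Hypothetical sketch of a cognitive network: edges accumulate as a
# side effect of normal exploration, with edge weight equal to the
# number of selected items connecting researcher and author.
network = defaultdict(int)  # (researcher, author) -> connection strength

def select_item(researcher, item_authors):
    """Record that `researcher` used an item; the network grows passively."""
    for author in item_authors:
        if author != researcher:
            network[(researcher, author)] += 1

# A simulated exploration session -- all names invented.
select_item("Researcher A", ["Author X", "Author Y"])
select_item("Researcher A", ["Author X"])

# The strongest potential collaborators surface with no manual "friending".
ranked = sorted(network.items(), key=lambda edge: -edge[1])
print(ranked)
```

The point of the sketch is the design choice: the researcher does nothing extra, yet by the end of a literature search they already hold a ranked list of the people behind the knowledge they actually used.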

The Exaptive product, when coupled with what we've already got in place at the OU Libraries (our discovery system, open journal/open access publication system, repositories, and more), will allow us to move further and faster in helping to evolve ideas into new knowledge.

One thing I need to say at this point is that doing this is both a technological challenge and a change management challenge.  If this project is successful, it has the capability to remarkably change the engagement and knowledge creation experience for many people. To smooth this process, we will need to educate our users/members on the new needs we’re addressing and on how and why this is a major step forward. If you’ve read articles or books about change management, you’ll know one of the best ways to do this is to work with thought leaders on our campus and in our community, providing them with the extra support needed to learn and use these new tools and to ensure they are successful in doing so.  If we do, it’s a win-win for all involved: it puts those users on the front edge of research and dissemination in their field, and it gives us a success case to point to as we talk with others and try to inspire them.  

Of course, those who wish to work in isolation can continue to do so, even with this new model.  However, new value would be added by bringing multi-disciplinary and multi-faceted viewpoints to the table throughout the lifetime of an idea, which will help make these ideas substantially more valuable and more applicable in the end.  We already see the health sciences moving in this direction because they so clearly understand the interconnected nature of the organs of the human body and the need to bring researchers together as ideas are developed.  The Open Science Framework is another model, in which collaboration and shared data sets occur early in the research process.

As I said above, there are lots of implications for new models of knowledge creation based on this initiative.  Existing culture and change are two of the largest challenges early in the process.  But first, we’re going to focus on getting the technological foundations in place and then see what we can do.   Stay tuned!

NOTE:  Those at the University of Oklahoma interested in having a departmental demonstration of this technology and/or meeting with key project team members, should contact me at: carl(dot)grant(at)ou(dot)edu

Wednesday, May 13, 2015

How Can Libraries Find the Money To Make Big Changes? (Part 3)

Over the last two posts, we’ve looked at how libraries can find the money within existing resources in order to fund big changes.  In the first post we looked at strategic plans and in the second post we looked at the use of metrics to measure progress against that strategic plan.  Now, in this final post, we want to step back and look at the efficiency and effectiveness of our current operations as reflected by their internal workflows.

Let’s face it, a great deal of what is done in libraries has been done for a long time.  Even if we’ve automated the workflow along the way, it was likely put into place 5-15 years ago and has rarely been reevaluated for efficiency or effectiveness since that time (unless you’ve implemented the metrics/analysis discussed in post two of this series). Yet reevaluation needs to be done on a regular and recurring basis.  

The process of reevaluating workflows presents you with a golden opportunity, because it’s a great time to think really big about what the workflow would look like if you could design it without restriction.  So the first step in this process is to think about and design the “perfect” workflow.  I call this “setting the destination,” because it serves as a description of where you ultimately want to end up.  What’s the perfect way for your staff to order new resources?  For metadata to be created?  For users/members of your library to find the resources they need, utilize them, cite them, etc.?  What do those workflows look like?  This is blue-sky thinking, and it needs to involve those on your team who have the ability to think creatively, as well as those who understand the intricacies of the current workflow.  Be certain to document what the team comes up with in this step, because you’ll come back and visit it again and again as new versions of products, with new functionality, become available for implementation.

In the second step, you return to reality and, either through use cases or flowcharts, describe how the workflow is actually being done today.  Again, use cases or flowcharts should document the workflow. When your teams do this, they will quickly realize there are many things being done that no longer need to be done. Those are candidates for immediate removal.  It’s not unusual to find a 10% boost in productivity from this step alone. 
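That documentation doesn’t have to live only in static flowcharts. One option is to capture each step’s inputs and outputs as structured data, which makes the “no longer needed” candidates mechanically findable: any step whose outputs feed neither another step nor an end-user deliverable is doing work nobody consumes. A minimal sketch, with invented step names standing in for your library’s real workflow:

```python
# Each step lists what it consumes (inputs) and produces (outputs).
# Step and artifact names are illustrative placeholders only.
workflow = {
    "receive_order":   {"inputs": [],         "outputs": ["order"]},
    "print_order":     {"inputs": ["order"],  "outputs": ["paper_copy"]},
    "create_metadata": {"inputs": ["order"],  "outputs": ["record"]},
    "shelve_item":     {"inputs": ["record"], "outputs": ["available_item"]},
}
deliverables = {"available_item"}  # value delivered directly to end-users

# Everything some step actually consumes.
consumed = {i for step in workflow.values() for i in step["inputs"]}

# A step is a removal candidate if none of its outputs are consumed
# downstream and none are end-user deliverables.
removal_candidates = [
    name for name, step in workflow.items()
    if not any(out in consumed or out in deliverables
               for out in step["outputs"])
]
print(removal_candidates)  # the paper copy feeds nothing downstream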

The third step is to redesign the workflow and the workflows associated with it.  A very good time to undertake this kind of redesign is, for instance, when implementing a new Library Service Platform, because that technology touches so many areas of library operations and those products give you an excellent opportunity to streamline a lot of workflows.  

As we all know, in the past the library was primarily a print-based operation, and over the years, as licensed and electronic content became part of library services, entirely new workflows and procedures were created to accommodate each.  Over time, many of the steps in those workflows replicated, or very closely emulated, steps in one of the other parallel areas.  Most libraries find that when they perform workflow analysis, they can see areas where these steps can now be combined in order to free team members to address new, more challenging and interesting work that needs to be done.  

In doing workflow analysis, it’s important to know that most people do not inherently know how to analyze a workflow; they know how to do the workflow.  So it takes time to train people in analysis: the process of pulling apart a workflow and rethinking it.  I’ve found that giving them a series of questions to ask themselves, and others, at each step helps them do this.  Here are some things to be sure to do:

  • Make sure to include everyone who is involved in each step of a workflow, from beginning to end.  
  • When looking at a workflow, identify everything that comes into it (the inputs) and everything that flows out of it (the outputs).
  • Here are some of the specific questions to be asked about each step in the workflow: 

    • Who receives the outputs? 
    • What does the next person do with the output you give them? 
    • How do the quantity and quality of the outputs affect what they do? (Do steps get skipped when quantity is high?  If quality is low, what do they have to redo or fix before they can move the work to the next step?) 
    • Who verifies the quality of the work, and how? 
    • When something different than what is expected happens, how is it handled?  What’s the workflow then?  How does it change? Be sure to document these processes as well.
    • Look for places in the workflow where there is waiting, moving, and/or repetition.  Try to find ways to eliminate these, for instance by doing something in parallel if possible. 
    • Identify the places in the workflow where there are complexities, bottlenecks and frustration.  Eliminate those.
  • Once you have done that, then ask these questions:
    • How many people does it currently take to complete a workflow?  What number should it take?  List that number and work towards it.
    • Consider, and document, what technology is needed to support the workflow and what it must functionally achieve.
    • What skills and expertise are needed to both perform and manage the workflow?
    • Then ask if those skills/expertise/positions exist in the organization currently.
    • Use the answers to the above to describe the positions needed, then compare them to the ones you have.  Any discrepancies will need a plan to address and resolve them.
    • Based on the outputs of the total workflow mapping, what are the jobs that: a) remain the same, b) will be modified and/or c) will be new?
    • Prepare a plan, to be shared with your team, addressing what training the team will receive for those new jobs, and when.
    • Consult with team members so they understand where they’re being aimed and what will be done to ensure they are successful in the jobs they will hold.  Repeat as necessary (which is frequently).
    • Throughout this process, Library Administration must make clear that the workflows are being examined for ways to be more efficient and effective in light of the changing environment, that they know they have talented people, and that they simply want to optimize their work.  
    • Be sure to link the new workflow back to the overall goals of the library.  Team members must see and understand the linkage and what value is added as a result.

  • Next, decide what will be measured to evaluate the workflow, and how.
    • Can those measures/metrics be directly linked to the goals of the library?
    • If so, document which specific measure they are linked to.
    • If you can’t link them, re-examine why you’re doing this function and find a way to eliminate it.
  • It’s important as part of this process to note the distinction between what are called “core” workflows and “support” workflows.  Core workflows deliver value directly to end-users; support workflows enable core workflows (training, approvals, purchasing, etc.).  Core workflows can certainly be improved, but you want to be certain only to increase the value delivered to end-users, never to diminish or remove it.  Support workflows, for obvious reasons, are open to considerable revision.
  • When the new workflow is introduced:
    • Explain, educate, communicate.  Repeat as needed.
    • Do trial runs, analyze the results/problems and make adjustments to resolve those issues.
    • Only then do you implement the new, more effective and efficient workflow.

So there you have it.  I’ve used all the steps described in this post and the previous two on this subject.  They’ve worked for my organizations, and I believe they’ll work for yours as well.  When done, you should find that you have more people and financial resources at your disposal to fund those new ideas that have been sitting and waiting on your “to-do” list.